Software Engineer (Flink/Kafka)
Bengaluru, Karnataka, India
Software and Services
Imagine what you could do here. At Apple, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish!
This position is in the Data Services Big Data Engineering team, which manages a variety of state-of-the-art open-source technologies in the Streaming, Data Science, and Big Data Analytics areas, including Hadoop, Spark, Kafka, Flink, vector databases, and AI/ML. You should have experience in developing and deploying large-scale big data analytics and streaming applications and services. The ideal candidate brings energy, initiative, and excitement, and is ready to learn and explore groundbreaking new technologies.
Description
Does the idea of mastering a variety of big data technologies and picking/building the right ones for each application make your heart leap? Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day-to-day basis? Do you want your work to make a difference in the lives of millions of people who use Apple's products and services?
If so, the Data Services Big Data Engineering team is looking for a highly motivated, diligent, energetic individual with excellent written and verbal communication skills who is not afraid to think creatively and question assumptions. In this role, you will be part of a fast-growing, cohesive team with many exciting responsibilities related to Big Data, including:
- Set up the Flink platform using Kubernetes/EKS on AWS.
- Build ETL pipelines in Java using Apache Flink and Kafka to load data into relational datastores (see the illustrative sketch after this list).
- Manage a large-scale fleet of Kafka brokers.
- Develop scalable, robust systems that will be highly adaptable to changing business needs.
- Define/develop Big Data technologies, platforms and applications.
- Identify and demonstrate new open-source software stacks to drive business impact.
- Architect, improve, and scale applications to the next level.
- Interface with application owners, developers and project managers.
- Recommend and deploy tools and processes to enable rapid development, deployment and operations of big data solutions.
- Mentor other team members on the development of Big Data technologies and applications.
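To give a flavour of the Flink/Kafka ETL work described above, here is a minimal sketch of a Flink job in Java that reads events from a Kafka topic and writes them into a relational table via JDBC. The broker address, topic name, table, and database URL are hypothetical placeholders for illustration only, not a description of Apple's actual systems.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.jdbc.JdbcConnectionOptions;
import org.apache.flink.connector.jdbc.JdbcSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaToJdbcJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Consume raw events from a Kafka topic (broker and topic names are placeholders).
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("events")
                .setGroupId("etl-demo")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> events =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");

        // Write each event into a relational table through the JDBC sink
        // (table name and connection URL are placeholders).
        events.addSink(JdbcSink.sink(
                "INSERT INTO events_raw (payload) VALUES (?)",
                (statement, payload) -> statement.setString(1, payload),
                new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                        .withUrl("jdbc:postgresql://db:5432/analytics")
                        .withDriverName("org.postgresql.Driver")
                        .build()));

        env.execute("kafka-to-jdbc-etl");
    }
}
```

A production pipeline would typically also configure checkpointing, schema-aware deserialisation, and exactly-once delivery, but the overall shape of the job is the same.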
Minimum Qualifications
- B.Tech in Computer Science or equivalent.
- Strong education in Computer Science, Software Engineering, Algorithms, Operating Systems, Networking, etc.
- 5+ years of working experience in designing, building, and managing highly available and large-scale data platforms.
- 5+ years of demonstrated experience in backend/core Java programming, design, architecture, performance, multi-threading, etc.
- Strong experience and knowledge of Big Data technologies, with proven expertise in the event streaming technologies Flink and Kafka.
- Solid understanding of ETL pipeline development using Java in data processing frameworks such as Apache Spark, Apache Flink and Apache Kafka.
- Expertise in Kubernetes internals and Kubernetes Operators, with a strong ability to design and develop solutions using Kubernetes.
- Experience with other programming and scripting languages such as Python, Go, and Bash is desirable.
- Available for on-call support during IST hours as needed.
Preferred Qualifications
- Experience with public clouds, especially AWS, preferably with knowledge of cost optimisation.
- Familiarity with DevOps practices, including continuous integration, continuous delivery, testing, and monitoring.
- Strong communication and collaboration skills, with the ability to handle ambiguity and prioritise issues to provide clarity.
- Proven track record of taking ownership of a product end to end, driving its development, and generating innovative ideas independently.