Software Engineer, Apple Media Products

Santa Clara Valley (Cupertino), California, United States
Software and Services


Weekly Hours: 40
Role Number:200169250
Can you imagine what you could accomplish here? At Apple, new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Are you ready to bring your passion and dedication to your next challenging career adventure? Are you interested in a career in data? Data drives the direction and strategy for Apple’s growing Services business. As the centralized team that collects, curates, and provides data of all sorts to the rest of the Apple Media Products organization, we play a critical role in enabling business growth. We have teams responsible for every part of the data lifecycle: from data visualization and scalable data pipelines, to data integrity systems and data science. Are you passionate about presenting meaningful data effectively? Curious how data is manipulated at petabyte scale? Want to use cutting-edge machine learning techniques to detect anomalies in a sea of data? This is the team to join. You’ll be exposed to modern, open-source technologies that are standard in the big data industry, and will work with data at a scale that few organizations in the world have access to. Our data is used to drive consumer-facing features, like Search in the App Store and Recommendations in Apple Music. It’s used within Apple to drive product decisions. We’ve even built data products: APIs that let app developers, music artists, and podcasters understand how users engage with their content.

Key Qualifications

  • Demonstrated skill and passion for data management and automation in Spark, Hadoop, and HDFS environments
  • Experience managing data in relational databases and developing ETL pipelines
  • Know-how in maintaining and debugging systems in Java runtime environments
  • Competency in Java and the Spring/Spring Boot frameworks, in other JVM languages such as Scala or Clojure, or in a similar object-oriented language such as C#
  • Deep understanding of, and strong hands-on experience with, multithreading and networking (including non-blocking I/O)
  • Confidence with SQL databases like Oracle and NoSQL databases like Cassandra
  • Experience driving product features, functional specifications, and development schedules, and representing the team and its technology
  • Passion for, and prior experience in, designing and implementing outstanding large distributed systems
  • An advocate and driver for performance optimization, automation, and unit testing
  • Ability to pick up new technologies quickly
  • Excellent debugging, critical thinking, and communication skills
  • Tireless attention to detail
  • Solid documentation and technical writing skills


The AMP Analytics team is responsible for the collection, analysis, and reporting of data. From this data we generate insights into how customers interact with our products, and we use these insights to drive improvements to user-facing features. You'd be working on a small team that values cooperation, brainstorming, and an emphasis on design. You will be responsible for developing systems, tools, and visualizations to make sense of the data. We are looking for a sharp engineer who also has a keen sense of how to build good products. You are also a teammate, ready to engage in lively design discussions and able to give and receive constructive code reviews. Your curiosity drives you to explore new technologies and apply creative solutions to problems. The ideal candidate pays close attention to details but also keeps sight of the bigger picture. We’re a diverse collection of problem solvers and doers, continually reimagining our products, systems, and practices to help people do what they love in new ways. This is a deeply collaborative place, where everything we build is the result of people in different roles and teams working together to make each other’s ideas stronger. The same passion for innovation that goes into our products also applies to our practices, strengthening our dedication to leave the world better than we found it.

Education & Experience

BS degree in Computer Science or a related field

Additional Requirements

  • Proficiency with source control systems (SVN, Git) and build tools such as Gradle and Maven
  • Experience with the Big-Data ecosystem (Spark, Hadoop, Hive, Pig, etc.)
  • Experience building and deploying large-scale data pipelines (e.g., Kafka, Spark, Storm)
  • Understanding of different data storage solutions and when to use them (e.g., RDBMS, Cassandra, Solr, Redis)
  • Experience implementing and administering logging, telemetry, and monitoring tools like Splunk
  • Experience with cluster management/orchestration software like Aurora or Ansible, and with tools such as Docker, is a plus
  • Experience working with cloud-based environments is a plus