Apple Media Products (AMP) Data Engineering - Software/Data Engineer

London, Greater London, United Kingdom
Software and Services

Summary

Posted: 22 Oct 2018
Weekly Hours: 35
Role Number: 113761653
Have you ever imagined what you could do here? At Apple, new ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. The Apple Media Products Data Engineering team is looking for the very best engineers to build and improve the features and services driving the iTunes Store, App Store and Apple Music. Our team is responsible for many of the key systems powering the personalisation features of the AMP ecosystem, including ratings & reviews, purchase history, sync services and many more. This is a phenomenal opportunity to help build and evolve global-scale, leading-edge dynamic data systems, with positions currently available as we grow our amazing London team.

Key Qualifications

  • Significant experience in crafting, implementing and supporting highly scalable systems and services in Java
  • Bachelor's degree or equivalent in Computer Science or a related discipline
  • Experience in two or more Big Data areas is helpful — see Additional Requirements below for examples

Description

Our work covers the full stack: internet-facing web services; internal services using various flavours of RPC; design and implementation of data pipelines and data life-cycles (Kafka); Hadoop infrastructure, strategy and implementation; distributed key-value storage (Voldemort, Cassandra, Redis, etc.); and putting all this together to operate live customer-facing features with millisecond latencies, across multiple data centres, with petabyte datasets and more than 2 billion users. We promote innovation and new technology to further improve our creative output. If you're an all-round, performance-savvy Java server engineer with an interest in, and experience of, large-scale data technologies and systems, we'd love to hear from you.

Education & Experience

Bachelor's or Master's degree in Computer Science or a related discipline

Additional Requirements

  • Experience building and/or using distributed systems, distributed caching, and distributed key-value or column stores (e.g. Cassandra, Voldemort, Redis)
  • A deep understanding of eventual-consistency concepts
  • Experience with, and an understanding of, Hadoop-ecosystem technologies such as MapReduce, Spark, YARN/MR2, etc.
  • Experience building and running best-in-class, large-scale data pipelines using Kafka, with data ingested to/from multiple sources, feeding batch compute components via HDFS and near-realtime components via online key-value storage
  • Experience and interest in data modelling and data architecture as optimised for large-data patterns (warehousing concepts; efficient storage and query on HDFS; support for relevant realtime query patterns in key-value stores; columnar schema design; etc.)
  • A deep understanding of real-time advanced-analytics fundamentals and the associated stream-processing tools and techniques is a plus
  • A strong customer-satisfaction ethic and a focus on customer privacy
  • Experience with Scala would be an advantage
  • This position offers competitive salary and benefits.
  • The position will remain open until 29 October 2018.