Sr. Data Operations Engineer - News

Santa Clara Valley (Cupertino), California, United States
Software and Services


Weekly Hours: 40
Role Number: 200139352
Imagine what you could do here. At Apple, new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Does an exciting, dynamic, fast-paced, and pressure-filled environment catch your attention? Do you like puzzles and determining solutions that are not obvious? Terrific! Consider joining us. The Apple News team is looking for a hardworking Data Operations Engineer with a background in large-scale data engineering support. You will use technology to automate solutions and optimize outcomes, focusing on data engineering, continuous integration, and continuous deployment in a fast-changing world of software delivery.

Key Qualifications

  • Demonstrated proficiency in data management and automation in Spark, Hadoop, and HDFS environments
  • Experience managing data in relational databases and developing ETL pipelines
  • Experience using Spark SQL and Hive to write queries and scripts
  • Experience developing build and deployment automation
  • Know-how in maintaining and debugging systems in Java runtime environments
  • Experience implementing and administering logging, telemetry, and monitoring tools such as Splunk is a plus
  • Experience managing source code in Git (GitHub operations, branching, merging, etc.) is a big plus
  • Experience with cluster management/orchestration software such as Mesos, Aurora, or Ansible using tools such as Docker is a plus
  • Experience working with cloud-based environments is a plus
  • Experience with CI build tools such as Gradle and Jenkins is a plus


Description

Apple's News team is seeking a high-energy Data Operations Engineer. As a member of our dynamic group, you’ll have the rare and rewarding opportunity to craft upcoming products that will delight and inspire millions of Apple’s customers every day.

Responsibilities:

  • Manage data in Spark and Hadoop environments using scripts and automation
  • Automate, deploy, and operate data pipelines
  • Administer and manage data in relational databases using scripts and automation
  • Support and optimize the Java runtime environment build, delivery, and deployment pipeline
  • Communicate and address build, deployment, and operational issues as they come up
  • Implement facilities to monitor all aspects of the data pipeline
  • Automate build, deployment, and quality processes
  • Interact with, and support, a variety of teams (engineering, quality, management, SRE, etc.)

Education & Experience

BS in Computer Science or a related field and 4+ years of relevant work experience

Additional Requirements