DevOps Engineer - Ad Platforms

Austin, Texas, United States
Software and Services

Summary

Posted:
Weekly Hours: 40
Role Number: 200257868
At Apple, we work every day to create products that enrich people’s lives. Our Ad Platforms group makes it possible for people around the world to easily access informative and imaginative content on their devices while helping publishers and developers promote and monetize their work. Our technology and services power Search Ads in the App Store and the advertising found in Apple News. Our platforms are highly performant, deployed at scale, and set new standards for enabling effective advertising while protecting user privacy! The Ad Platforms Data Insights Engineering team is seeking a DevOps engineer to help build and deploy the next generation of cloud infrastructure components, enabling efficient execution of the analytical workloads that empower Sales, Product, and Executive teams. In this role you will be a key member of the team, driving the strategy, development, execution, and continuous improvement of core analytical data products for Ad Platforms, and building the foundational data architectures and pipelines for these capabilities.

Key Qualifications

  • Background in computer science, mathematics, or a similar quantitative field, with 4-6 years of professional experience
  • Experience supporting and working with cross-functional teams
  • At least four (4) years of experience supporting AWS cloud infrastructure deployments (e.g., JupyterHub, Airflow)
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift, Athena, Glue, Sagemaker
  • At least three (3) years of technical architecture experience integrating identity management, access management, and access governance software into cloud infrastructure and applications
  • Proficient with Identity and Access Management, with a background in OAuth 2.0, OpenID Connect, SAML, Single Sign-On, multi-tenancy, and API authorization/access management
  • Proficient with containerization and cluster management technologies like Docker and Kubernetes
  • Experience with workflow scheduling/orchestration tools such as Airflow or Oozie (a minimal Airflow sketch follows this list)
  • Expertise with infrastructure-as-code tools, such as Ansible, Chef, Terraform, or CloudFormation
  • Experience with Python packaging and conda environments
  • Experience with revision control systems (e.g., GitHub), CI/CD, unit testing, and configuration management systems
  • Experience developing, maintaining and debugging distributed systems
  • Proficiency with relational databases (e.g., SQL Server, Oracle, Postgres)
  • Experience with one or more scripting languages (e.g., Python or Bash)
  • Unix-based command-line experience required
  • Experience with applying data encryption and data security standards
  • Ability to quickly learn new and existing technologies
  • Strong attention to detail and excellent analytical capabilities
  • Most importantly, a sense of humor and an eagerness to learn!

Nice to Have

  • Extract Transform Load (ETL) experience using Spark, Kafka, Hadoop, or similar technologies
  • SQL expertise, data modeling, and relational database experience
  • Presto, Hive, SparkSQL, Cassandra, Solr, or other big data query and transformation experience
  • Ability to design and implement effective testing and operations strategies for data products
  • Experience implementing machine learning and data science workloads is a plus
  • Data visualization experience using R, Python, or Tableau a plus
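
To make the orchestration qualification above concrete, here is a minimal Airflow DAG sketch in Python; the DAG id, schedule, and task commands are hypothetical assumptions for illustration, not details from this posting:

    # Hypothetical sketch: a two-task Airflow DAG (names and schedule are illustrative).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="analytics_refresh",      # assumed pipeline name
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",      # one run per day
        catchup=False,                   # do not backfill missed runs
    ) as dag:
        extract = BashOperator(
            task_id="extract_events",
            bash_command="python extract_events.py",  # hypothetical script
        )
        transform = BashOperator(
            task_id="build_reports",
            bash_command="python build_reports.py",   # hypothetical script
        )
        # Build reports only after the extract step succeeds.
        extract >> transform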

Description

  • Solve tough problems across the technology spectrum, including designing and deploying infrastructure on AWS and, in some cases, on premises (e.g., cloud Kubernetes deployments of JupyterHub, Airflow, and conda environments); a short sketch of this kind of AWS work follows this list
  • Partner with business, analytics, and engineering teams to understand specific requirements, then build and deploy infrastructure to support R&D, engineering, and data science workloads at scale
  • Engineer solutions that integrate with existing identity, security, and networking services, and provide troubleshooting expertise for high-impact cloud infrastructure incidents
  • Apply Unix skills such as file permissions, file system management, networking, and user management
  • Use modern tools and technologies to build reliable and performant pipelines and data products
  • Automate and optimize existing analytic workloads by recognizing patterns of data and technology usage
  • Work effectively in a rapidly changing environment and perform well in a sprint-based agile development process
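
For illustration only, here is a minimal boto3 sketch of the kind of AWS automation described above (boto3 is the AWS SDK for Python); the region and filter values are assumptions made for the example, not requirements of the role:

    # Hypothetical sketch: list running EC2 instances before a deployment.
    # The region and filter values below are illustrative assumptions.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Query only instances that are currently running.
    response = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"])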

Education & Experience

Bachelor’s degree or equivalent experience required; a Master’s degree in Computer Science or another engineering field is strongly desired

Additional Requirements