Sr Hadoop Engineer
Austin, Texas, United States
Software and Services
Imagine what you could do here. At Apple, new ideas have a way of becoming outstanding products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Our Advertising Platforms group makes it possible for people around the world to easily access informative and imaginative content on their devices while helping publishers and developers promote and monetize their work. Our technology and services power advertising in Apple News and Search Ads in the App Store. Our platforms are highly performant, deployed at scale, and setting new standards for enabling effective advertising while protecting user privacy. We are looking to hire extraordinary individuals to join our team. You should love building your own creative engineering solutions to solve various types of network, system, and application issues. Our team provides infrastructure and support by building and maintaining Hadoop clusters and the application stack. You love working in Agile environments. You are an energetic self-starter with excellent attention to detail and follow-through.
- Deep understanding of UNIX and TCP/IP network fundamentals
- Expertise with Hadoop and its ecosystem (Hive, Pig, Spark, HDFS, HBase, Oozie, Sqoop, Flume, ZooKeeper, Airflow, etc.)
- 5+ years managing clustered services, distributed systems, production data stores
- 3+ years of experience administering and operating Hadoop clusters
- Cloudera CDH4/CDH5 cluster management and capacity planning experience
- Experience in at least one language (Shell, Ruby, Python, Java, Perl, Go)
- Ability to rapidly learn new software languages, frameworks, and APIs
- Experience with scripting for automation and configuration management (Chef, Puppet)
- Multi-datacenter deployment experience a plus
- Strong troubleshooting skills with exposure to large scale production systems
In this exciting role, your responsibilities will include:
- Design and implement scalable data platforms for our customer-facing services
- Deploy and scale Hadoop infrastructure
- Hadoop/HDFS maintenance and operations
- Data cluster monitoring and troubleshooting
- Hadoop capacity planning
- OS integration and application installation
- Create runbooks for offshore teams
- Partner with program management, network engineering, site reliability operations, and other related groups
- Willingness to participate in a 24x7 on-call rotation for critical issues
Education & Experience
Bachelor's degree in Computer Science or equivalent. Master's degree preferred.