DevOps - Big Data

Hyderabad, Telangana, India
Software and Services

Summary

Role Number: 200206931
At Apple, great ideas have a way of quickly becoming great products, services, and customer experiences. Bring passion and dedication to your job and there's no telling what you could accomplish here. Do you want to be part of a team that builds cutting-edge software services, a team that is continually innovating and proud of making a difference? If so, bring your passion and talent and join us to be part of something big and amazing. Apple's IS&T team is looking for highly motivated and talented Big Data DevOps/Site Reliability Engineers (SRE) to build the next generation of software services that power several mission-critical applications.

Key Qualifications

  • SRE experience with Hadoop-based technologies: HDFS/YARN cluster administration, Hive, Spark
  • Experience managing Hadoop/YARN clusters with thousands of nodes and tens of petabytes of data, running tens of thousands of jobs
  • Have a passion for automation, creating tools using Python, Java, or other JVM languages (see the sketch after this list)
  • Strong expertise in troubleshooting complex production issues
  • Expert understanding of UNIX/Linux-based operating systems
  • Excellent problem solving, critical thinking, and communication skills
  • Experience deploying and managing CI/CD pipelines
  • Experience with Solr cluster administration
  • Expertise in configuration management tools (such as Ansible or Salt) for deploying, configuring, and managing servers and systems
  • Should be adept at prioritizing multiple issues in a high-pressure environment
  • Should be able to understand complex architectures and be comfortable working with multiple teams
  • Ability to conduct performance analysis and troubleshoot large scale distributed systems
  • Should be highly proactive, with a keen focus on improving the uptime and availability of our mission-critical services
  • Comfortable working in a fast-paced environment while continuously evaluating emerging technologies
  • Proficient in UNIX, command-line tools, and general system debugging
  • Solid knowledge of secure coding practices and experience with open-source technologies
  • Monitor production, staging, test, and development environments for many Hadoop/YARN clusters spanning thousands of nodes, in an agile and dynamic organization
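For illustration, here is a minimal sketch of the kind of automation tooling referenced above: a Python health check that shells out to the standard "hdfs dfsadmin -report" command and flags high DFS usage. The alert threshold and the print-based alert hook are hypothetical placeholders, not part of any Apple tooling.

    #!/usr/bin/env python3
    # Sketch of an automation-style health check: parse the output of
    # "hdfs dfsadmin -report" and flag high DFS usage. The threshold and
    # the print-based alert are illustrative assumptions.
    import re
    import subprocess

    DFS_USED_ALERT_PCT = 85.0  # hypothetical alerting threshold

    def hdfs_report() -> str:
        # Requires a configured Hadoop client on the PATH.
        result = subprocess.run(
            ["hdfs", "dfsadmin", "-report"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    def check_dfs_used(report: str) -> None:
        # The report includes a line such as "DFS Used%: 42.17%".
        match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
        if match is None:
            raise RuntimeError("could not parse DFS Used% from report")
        used_pct = float(match.group(1))
        status = "ALERT" if used_pct > DFS_USED_ALERT_PCT else "OK"
        print(f"{status}: DFS usage at {used_pct:.2f}%")

    if __name__ == "__main__":
        check_dfs_used(hdfs_report())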

Description

You like to automate everything you do and document it for the benefit of others. You are an independent, self-directed problem-solver who can deftly handle multiple competing priorities and deliver solutions in a timely manner. You will provide incident resolution for all technical production issues; create and maintain accurate, up-to-date documentation reflecting configuration; and be responsible for writing justifications, training users on complex topics, writing status reports, documenting procedures, and interacting with other Apple staff and management. You will provide guidance to improve the stability, security, efficiency, and scalability of systems, determine future capacity needs, and investigate new products and features. Strong troubleshooting ability will be used daily: you will take steps on your own to isolate issues and resolve root causes through investigative analysis in environments where you may have little prior knowledge, experience, or documentation. You will also administer and ensure the proper execution of backup systems, and provide 24x7 on-call support to handle urgent critical issues.

Education & Experience

BS in Computer Science with 7-10 years of experience, MS with 5-7 years of experience, or equivalent related experience.

Additional Requirements

  • Experience with Kubernetes, Docker Swarm, or other container orchestration frameworks
  • Experience building and operating large-scale Hadoop/Spark data infrastructure used for machine learning in a production environment
  • Experience in tuning complex Hive and Spark queries
  • Expertise in debugging Hadoop/Spark/Hive issues using NameNode, DataNode, NodeManager, and Spark executor logs
  • Experience in capacity management on multi-tenant Hadoop clusters
  • Experience in workflow and data pipeline orchestration (Airflow, Oozie, Jenkins, etc.); see the sketch after this list
  • Experience with Jupyter-based notebook infrastructure
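As a concrete illustration of the orchestration experience listed above, the following is a minimal Apache Airflow (2.4+) sketch of a daily ingest-then-transform pipeline. The DAG name, HDFS path, and spark-submit invocation are illustrative assumptions, not a real pipeline.

    # Minimal Airflow DAG sketch: a daily ingest-then-transform pipeline.
    # The dag_id, paths, and spark-submit arguments are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_events_pipeline",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",               # Airflow 2.4+ parameter name
        catchup=False,
    ) as dag:
        # Verify the day's raw data has landed in HDFS (placeholder path).
        ingest = BashOperator(
            task_id="ingest_check",
            bash_command="hdfs dfs -test -d /data/raw/{{ ds }}",
        )
        # Transform with Spark; {{ ds }} is Airflow's execution-date macro.
        transform = BashOperator(
            task_id="transform",
            bash_command="spark-submit /jobs/transform_events.py --date {{ ds }}",
        )
        ingest >> transform  # transform runs only after the check succeeds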