Herzliya, Tel Aviv District, Israel
Software and Services
We're looking for an exceptional data analyst with a strong background in distributed data processing and a demonstrated ability to turn data into actionable insights. As a data analyst for our Storage Data Analytics team at Apple, you'll play a crucial role in developing the fastest and most efficient SSDs for our products, including the iPhone, iPad, Apple Watch, and new Apple Silicon Macs. If you're passionate about continuously improving the ways we use data to make Apple's products amazing, we want to hear from you!
- 3+ years of experience
- Ability to lead data investigations and analysis projects with ambiguous requirements
- Advanced statistics and modeling knowledge
- Experience visualizing and presenting analyses in tools like Jupyter and Tableau
- Experience programming in Python in a professional setting
- Familiarity with distributed computation, storage, and workflow management (e.g., Spark, Kubernetes, Kafka, Hadoop, MapReduce, Airflow, Splunk)
- Experience with relational databases and SQL
- Highly organized, creative, motivated, and passionate about achieving results
- Curious and have excellent analytical and problem solving skills
- Excellent written, verbal, and data communication skills
On the Storage Data Analytics team, we perform ad hoc data analysis, develop and maintain distributed data pipelines, and create and support analysis tools. Our team focuses on Apple’s storage solutions. It is small and nimble, able to quickly explore ideas and present them to the storage teams and to leadership. In this role you will be:
- Building data pipelines that extract data from HDFS, transform it in Spark, and deliver it to Postgres and Tableau.
- Performing exploratory data analysis to evaluate the feasibility of investigations.
- Using Jupyter and Tableau to explore and visualize data for publication to client teams.
- Delivering analyses via self-service dashboards and in-person presentations with other teams and engineering leaders.
- Participating in periodic infrastructure support for our team’s Python-based, distributed microservices.
- Developing storage domain expertise to become a bridge between the storage and data worlds.
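The pipeline work described above follows an extract–transform–load shape. Below is a minimal plain-Python sketch of that shape; the real stack is HDFS → Spark → Postgres/Tableau, but here every stage, function name, and data field is a hypothetical in-memory stand-in, not Apple's actual code.

```python
# Hypothetical sketch of the extract -> transform -> load pipeline shape.
# In production this would read from HDFS, aggregate in Spark, and write
# to Postgres for Tableau; each stage here is a plain-Python stand-in.

from statistics import mean

def extract(records):
    # Stand-in for reading raw storage telemetry from HDFS;
    # drops records with missing measurements.
    return [r for r in records if r.get("latency_us") is not None]

def transform(records):
    # Stand-in for a Spark aggregation: mean latency per device model.
    by_model = {}
    for r in records:
        by_model.setdefault(r["model"], []).append(r["latency_us"])
    return {model: mean(vals) for model, vals in by_model.items()}

def load(summary, sink):
    # Stand-in for writing the summary table to Postgres.
    sink.update(summary)
    return sink

raw = [
    {"model": "A1", "latency_us": 120},
    {"model": "A1", "latency_us": 100},
    {"model": "B2", "latency_us": 250},
    {"model": "B2", "latency_us": None},  # dropped during extract
]
sink = {}
load(transform(extract(raw)), sink)
# sink now maps each hypothetical model to its mean latency
```

Keeping each stage a separate function mirrors how such pipelines are typically structured, so individual stages can be tested and swapped independently.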
Education & Experience
Bachelor's or Master's degree in computer science.
- SQL, Tableau, Splunk, Python, Jupyter, Spark, Kubernetes, Kafka, Hadoop, MapReduce, Airflow, HDFS.