Machine Learning Researcher, Multimodal Foundation Models

Seattle, Washington, United States
Machine Learning and AI

Summary

Role Number: 200527006
Imagine what you could do here. At Apple, new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Multifaceted, amazing people and inspiring, innovative technologies are the norm here. The people who work here have reinvented entire industries with all Apple Hardware products. The same passion for innovation that goes into our products also applies to our practices, strengthening our commitment to leave the world better than we found it. Join us in this truly exciting era of Artificial Intelligence to help deliver the next groundbreaking Apple products and experiences!

As a member of our dynamic group, you will have the unique and rewarding opportunity to craft upcoming research directions in the field of multimodal foundation models that will inspire future Apple products. We are continuously advancing the state of the art in Computer Vision and Machine Learning. You will work alongside highly accomplished and deeply technical scientists and engineers to develop state-of-the-art solutions to challenging problems. We touch all aspects of language and multimodal foundation models, from data collection and curation to modeling, evaluation, and deployment. This is a unique opportunity to help shape the future of Apple products that will touch the lives of many people.

We (the Spatial Perception Team) are looking for a machine learning researcher to work in the field of Generative AI and multimodal foundation models. Our team has an established track record of shipping features that leverage multiple sensors, such as FaceID, RoomPlan, and hand tracking in VisionPro. We are focused on building experiences that leverage the power of our sensing hardware as well as large foundation models. You will be part of a diverse, fast-moving team based in Cupertino.

Key Qualifications

  • Strong academic and publication record (e.g., CVPR, ICCV/ECCV, NeurIPS, ICML)
  • Solid programming skills with Python
  • Deep understanding of large foundation models
  • Deep understanding of multi-task, multi-modal machine learning domain
  • Familiarity with deep learning toolkits
  • Familiarity with the challenges of training large models and working with large datasets
  • Ability to communicate the results of analyses in a clear and effective manner

Description

This position requires a highly motivated person who wants to help us advance the field of generative AI and multimodal foundation models. You will be responsible for designing, implementing, and evaluating foundation models based on the latest advancements in the field, taking into account future hardware design and product needs. In addition, you will have the opportunity to engage and collaborate with several teams across Apple to deliver the best products.

Education & Experience

PhD in Computer Science, Computer Vision, Computer Graphics, Machine Learning, or equivalent.
