Applied Researcher in Scene Understanding (SUN) - SIML, ISE

Santa Clara Valley (Cupertino), California, United States
Machine Learning and AI

Summary

Posted:
Role Number: 200380909
Are you interested in making an extensive and impactful contribution to Machine Learning at Apple? Our Scene Understanding (SUN) team is looking for candidates who are passionate about leveraging their strong ML background to impact user experience at scale. The team comprises domain experts in Computer Vision & Natural Language Processing (NLP) who contribute to a variety of shipping workflows you may already use regularly, including: Photos Search, Curation, Memories, Intelligent Auto-crop, Visual Captioning for Accessibility, Federated Learning on visual content, Real-time Classification & Saliency in Camera, Semantic Segmentation in Camera, and several on-device feature extractors across the system. Further, several of our projects are surfaced to third-party developers through Vision & CoreML. Shipping APIs include image tagging, image similarity, saliency estimation, and prints for transfer learning. The team collaborates extensively with teams across Apple to bring experiences to life across our devices, services, and 1st/3rd-party APIs. Selected references to our team’s work:
- https://developer.apple.com/videos/play/wwdc2019/222/
- https://developer.apple.com/videos/play/wwdc2019/225/
- https://machinelearning.apple.com/research/panoptic-segmentation

Key Qualifications

  • Strong ML fundamentals
  • Hands-on experience with building Deep Learning applications
  • Proficiency in using ML toolkits, e.g., PyTorch
  • Strong analytical and problem-solving skills
  • Strong programming skills in Python, C, and C++
  • You're aware of the challenges associated with transitioning a prototype into a final product
  • You're familiar with the challenges of developing algorithms that run efficiently on resource-constrained platforms
  • You've demonstrated leadership in both applied research and development
  • Excellent written and verbal communication skills, comfort presenting research to large audiences, and the ability to work hands-on in multi-functional teams

Description

We are looking for a candidate with a proven track record in applied ML research. Responsibilities in the role will include training large-scale multimodal (vision-language) models on distributed backends, efficiently deploying compact neural architectures such as transformers on device, and learning adaptive policies that can be personalized to the user in a privacy-preserving manner. Ensuring quality in the field, with an emphasis on fairness and model robustness, will constitute an important part of the role. You will interact very closely with a variety of ML researchers, software engineers, and hardware & design teams cross-functionally. The primary responsibilities range from algorithm design and implementation to integrating research into production frameworks and collaborating closely with product teams before and after feature launch.

Education & Experience

M.S. or PhD in Electrical Engineering/Computer Science or a related field (mathematics, physics, or computer engineering), with a focus on computer vision and/or machine learning, or comparable professional experience

Additional Requirements

  • Familiarity with Multi-modal ML, Graph ML, and/or Reinforcement Learning (RL)
  • Experience in distributed large-scale training is a plus
  • Experience in neural network deployment optimizations is a plus