AI Safety & Robustness Analysis Manager - System Intelligence and Machine Learning - ISE

Cupertino, California, United States
Software and Services

Summary

Posted:
Weekly Hours: 40
Role Number:200519780
Are you passionate about inclusion, fairness, and safety in AI-powered features that ship on 1.5B Apple products across the globe? Are you excited about Generative AI and motivated to build out the robustness and safety capabilities of generative models? We are the Intelligent System Experience (ISE) team within Apple’s software organization. The team works at the intersection of multimodal machine learning and system experiences. System Experience (SpringBoard, Settings), Keyboards, Pencil & Paper, and Shortcuts are some of the experiences that the team oversees. These experiences that our users enjoy are backed by production-scale ML workflows. Visual understanding of people, text, handwriting, and scenes; multilingual NLP for writing workflows and knowledge extraction; behavioral modeling for proactive suggestions; and privacy-preserving learning are areas our multidisciplinary ML teams focus on. We have multiple ongoing efforts involving generative models, and we are looking for talented candidates to lead the Robustness Analysis effort in ISE, ensuring that features built on top of generative models are safe for deployment and perform equally well for the diverse customers within Apple's global user base. This is an exciting time to join us: grow fast, and have a positive impact on multiple key features from your first day at Apple!

Key Qualifications

  • 2+ years of experience as a machine learning manager, or 7+ years of professional machine learning experience with demonstrated technical leadership
  • Capacity to build a team and establish innovative, agile processes that deliver a high level of service and scalability
  • Strong ML fundamentals and hands-on experience in training ML models (NLP or Computer Vision), familiarity with ML toolkits, e.g., PyTorch
  • Proven experience in assessing and addressing potential risks and ensuring safety and fairness in generative models; prior experience in LLM safety is desired
  • Strong experience in quantitative methods, data analysis, and machine learning, leading to a deep understanding of the challenges associated with building ML datasets and models (potential biases, potential failure modes)
  • Capacity to operate at the intersection of ethics, product experience, and machine learning: translate product needs (e.g., fairness and safety) into analytical requirements; design and lead experiments to answer feature-level questions

Description

In this position, you will manage a team of people passionate about leading Robustness Analysis (RA) operations for key future-facing Apple features, with a focus on ensuring safety and robustness for generative models. Apple’s dedication to delivering incredible experiences to a global and diverse set of users, in full respect of their privacy, has led to the development of a dedicated Robustness Analysis function. With the generative experience, creating a safe and robust platform is vital to our mission. The team monitors ML model performance along relevant axes, and surfaces, measures, and mitigates ML failure modes in order to improve the overall user experience and reduce risk, with specific attention given to safety, inclusion, and fairness. The team’s responsibilities include:

  • Research and develop approaches to mitigate harmful and risky behaviors in generative models
  • Define product-centered axes of analysis relevant to the target feature, in collaboration with the model DRI and feature DRI
  • Develop processes (models, tools, and data) to identify other potential biases or failure modes
  • Implement automated pipelines, based on advanced ML technology and humans/models in the loop, to create test sets covering the various axes of investigation
  • Report progress and issues found in technical and sponsor meetings
  • Suggest mitigation options (data and/or model) and lead mitigation experiments when issues are found
  • Become a key contact within our organization for company-wide efforts related to safety, fairness and inclusion, robustness analysis, and interpretability

Education & Experience

M.S. or PhD in Computer Science, Data Science, Mathematics, Physics, or a related field; or equivalent practical experience

Additional Requirements

Pay & Benefits