Model Robustness Applied Researcher - SIML, ISE
Do you think Computer Vision and Machine Learning can change the world? Do you think they can transform the way millions of people collect, discover, and share the most special moments of their lives? We truly believe they can! At Apple, we are committed to advancing the safety, reliability, and robustness of AI systems. The System Intelligence and Machine Learning (SIML) group is seeking a Model Robustness Applied Researcher with deep expertise in adversarial machine learning and responsible AI techniques, with a particular focus on Generative AI. This is an opportunity to join a dedicated core group that provides foundational machine learning technologies with a focus on AI and user safety.
Our work serves as the safety foundation for many Apple Intelligence experiences (Writing Tools, Image Playground, Genmoji, Image Wand, Apple Intelligence in Photos, etc.). We are a team of researchers with diverse backgrounds in machine learning, statistics, and applied mathematics, all passionate about designing AI systems that remain robust, safe, and fair. If you are driven by foundational and applied research challenges and excited about translating them into impactful solutions, we encourage you to apply.
SELECTED REFERENCES TO OUR TEAM’S WORK:
1. https://arxiv.org/pdf/2407.21075
2. https://www.apple.com/newsroom/2024/12/apple-intelligence-now-features-image-playground-genmoji-and-more/
3. https://support.apple.com/guide/iphone/create-genmoji-with-apple-intelligence-iph4e76f5667/ios
4. https://support.apple.com/guide/iphone/create-original-images-with-image-playground-iph0063238b5/ios
You will work at the intersection of applied research and real-world constraints, developing novel techniques for advancing responsible AI. We are seeking a candidate with a consistent track record in applied ML research. The primary responsibilities of this position include algorithm design and implementation, integrating research into production frameworks, and collaborating closely with product teams before and after feature launch.
- Master's or Ph.D. in Computer Science, applied mathematics, or comparable professional experience.
- Hands-on experience training large-scale multimodal models and/or adapting pre-trained models for downstream tasks.
- Strong programming skills in Python and proficiency in deep learning frameworks (e.g., TensorFlow, PyTorch) or equivalents.
- Strong background in machine learning and deep learning, with expertise in at least one of the following sub-areas: adversarial robustness, responsible AI, or model alignment.
- Strong background in research and innovation, demonstrated through publications in top-tier journals or conferences, patents, or impactful industry experience.
- Experience transitioning prototypes into shipping products. Excellent written and verbal communication skills, comfort presenting research to large audiences, and the ability to work in multi-functional teams.
- Experience with adversarial attacks and defenses on Generative AI models, including robust training, secure sampling strategies, and detection of adversarially perturbed generations, is a plus.
- Familiarity with red teaming methodologies to assess the robustness and security of Generative AI systems in adversarial and high-risk settings is a plus.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.