AI Safety - Machine Learning and Applied Research
Pittsburgh
March 13, 2026
Apple Custom Ats
Summary
Apple Intelligence is designed as a deeply integrated, privacy-preserving capability that seamlessly enhances what people can do across iPhone, iPad, Mac, and other devices. Our goal is to elevate the user experience without requiring our customers to learn new products or fundamentally change how they interact with their technology.
Description
Our team leads Responsible AI & Safety initiatives for global generative AI products, operating at the intersection of policy, product, and GenAI. We're seeking candidates who will shape safety policies in partnership with leadership, design, engineering, legal, and regulatory stakeholders—ensuring our safeguards advance both user protection and product innovation.
You will collaborate closely with top machine learning researchers and engineers, software engineers, and design teams to develop and deliver groundbreaking solutions for Apple products. We believe that the most exciting problems in machine learning research arise at the intersection of emerging technologies and real-world use cases. This is also where the most critical breakthroughs come from.
You will also work on producing safety evaluations that uphold Apple’s Responsible AI values requires thoughtful data sampling, creation, and curation for evaluation datasets; high quality, detailed annotations and careful auto-grading to assess feature performance; and mindful analysis to understand what the evaluation means for the user experience.
Minimum Qualifications
3+ years of proven ability in machine learning, including work with generative models (Transformers, LLMs, VLMs), NLP, or Computer Vision
4+ Years research or product deployment record in areas related to responsible AI, with publications in top ML venues (e.g., ACL, CHI, CVPR, EMNLP, FAccT, ICML, Interspeech, NeurIPS, UIST, etc.)
Strong research fundamentals, machine learning principles, and development methodologies around LLMs, foundation models, and diffusion models
Experience working with generative models for evaluation and/or product development, and up-to-date knowledge of common challenges and failures.
PhD, MS or BS in Computer Science, Machine Learning, or related fields or an equivalent qualification acquired through other avenues
Preferred Qualifications
Experience working in the Responsible AI space.
Curiosity about fairness and bias in generative AI systems, and a strong desire to help make the technology more equitable.
Proven success contributing in a highly cross‑functional environment
Experience shipping complex AI systems at global scale