Research Engineer Intern (Fall 2026)

San Francisco, CA · March 10, 2026 · Full Time · Workday
Introduction
The Center for AI Safety (CAIS) is a leading research and field-building organization on a mission to reduce societal-scale risks from AI. Alongside our sister organization, the CAIS Action Fund, we tackle the toughest AI issues with a mix of technical, societal and policy solutions.

As a research engineer intern here, you will work closely with our researchers on projects in areas such as AI security, machine ethics, AI alignment, and benchmarking AI risks. We will assign you a dedicated mentor throughout your internship, but we will ultimately treat you as a colleague: you will have the opportunity to advocate for your own experiments or projects and make the case for their impact. You will plan and run experiments, conduct code reviews, and work in a small team to create a publication with outsized impact. You will leverage our internal compute cluster to run experiments at scale on large language models.

Timing
This application is for the full-time fall internship position. Applications are due by May 29, 2026.
Know someone who could be a great fit for this role? Submit their details through our Referral Form. If we end up hiring your referral, you’ll receive a $1,500 bonus once they’ve been with CAIS for 90 days.

The Center for AI Safety is an Equal Opportunity Employer. We consider all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, national origin, ancestry, age, disability, medical condition, marital status, military or veteran status, or any other protected status in accordance with applicable federal, state, and local laws. In alignment with the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records for employment.

If you require a reasonable accommodation during the application or interview process, please contact [email protected].

We value diversity and encourage individuals from all backgrounds to apply.

You might be a good fit if you:

  • Are able to read an ML paper, understand the key result, and see how it fits into the broader literature.
  • Are comfortable setting up, launching, and debugging ML experiments.
  • Are familiar with relevant frameworks and libraries (e.g., PyTorch).
  • Communicate clearly and promptly with teammates.
  • Take ownership of your individual part in a project.
  • Have co-authored an ML paper at a top conference.
