How to Apply to Safe Superintelligence

9 min read · Last updated April 20, 2026 · 19 open positions

Key Takeaways

  • SSI is a single-mission research lab, not a product company; if you want to ship to users, this is the wrong place.
  • The company is tiny (roughly twenty to thirty people as of early 2026), so every hire is high-stakes and the bar is correspondingly high.
  • Founders Ilya Sutskever, Daniel Gross, and Daniel Levy are deeply involved in hiring, and you will likely meet at least one before an offer.
  • Compensation is competitive with top frontier labs, but the real upside is equity in a high-valuation private research bet with no near-term liquidity.
  • Apply only if you can articulate, in your own words, why safe superintelligence matters to you and why now.
  • Lead with depth and original technical work; polished resumes full of brand names without substance get filtered out quickly.
  • Expect a quiet, slow process with little public information, infrequent updates, and a high signal-to-noise interview loop.
  • Be ready to relocate; SSI operates from Palo Alto and Tel Aviv and does not appear to support fully remote roles.
  • Treat the interview as a mutual fit evaluation, not a negotiation; aggressive haggling and status games hurt candidates more than they help.

About Safe Superintelligence

Safe Superintelligence Inc. (SSI) is an AI research laboratory founded in June 2024 by Ilya Sutskever, Daniel Gross, and Daniel Levy with a single, uncompromising mission: to build safe superintelligence. Sutskever, the former co-founder and chief scientist of OpenAI, departed amid the leadership turmoil of late 2023 and early 2024 to start a company that, in his words, has 'one focus, one goal, one product.' That product is safe superintelligence itself, and SSI has explicitly committed to releasing no commercial product until its core scientific objective is met. This positioning is radically different from frontier labs like OpenAI, Anthropic, and Google DeepMind, which fund safety research through commercial revenue; SSI funds its research entirely through investor capital, freeing the team from product cycles, quarterly metrics, and the management overhead of shipping consumer-facing AI.

Daniel Gross, the former Apple AI lead and prolific Silicon Valley investor, brings operational and capital-allocation experience. Daniel Levy, formerly of OpenAI's optimization team, leads research alongside Sutskever. The company is intentionally small, with reports placing headcount around twenty to thirty researchers and engineers as of early 2026, and it operates dual offices in Palo Alto, California, and Tel Aviv, Israel, the latter chosen partly for access to top-tier technical talent and for Sutskever's personal network.

SSI raised one billion dollars in its September 2024 seed round at a five-billion-dollar valuation, followed by a reported 2025 round that pushed its valuation into the tens of billions despite the company having no revenue, customers, or shipped product. Investors include Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel, and NFDG. The company communicates almost nothing publicly: no blog, no published papers under the SSI banner, and no marketing presence beyond its single-page website.
Joining SSI means betting your career on a long research arc with no product distractions, working alongside some of the most accomplished researchers in modern AI, and accepting that your work will likely be invisible to the outside world for years.

Application Process

  1. Visit ssi.inc and review the careers section, which lists a small number of open roles and an email address for unsolicited applications (usually [email protected] or a similar contact).

  2. Submit your application directly via email or the listed form, including a tailored cover note that explains why SSI's specific mission resonates with you and a resume or CV that emphasizes deep technical work over titles.

  3. Expect either silence or a brief outreach from a recruiter or researcher within one to four weeks; SSI is selective and does not run mass funnels, so volume tactics will not help.

  4. The first conversation is typically a 30 to 45 minute screen with a researcher or operator focused on your motivations, technical depth, and fit with the no-product, long-horizon culture.

  5. Technical loops vary by role and lean toward research-style discussions, paper deep dives, and live problem-solving rather than competitive-programming coding tests; expect to discuss your past work in unusual depth.

  6. Final stages often include conversations with one or more of the founders (Sutskever, Gross, or Levy) and a values-and-alignment discussion about safety, secrecy, and long-term commitment.

  7. Offers are decided as a team and tend to move quickly once consensus is reached; relocation to Palo Alto or Tel Aviv is generally expected and discussed openly during the process.


Resume Tips for Safe Superintelligence

  • Lead with depth, not breadth: one project where you went five layers deep beats ten shallow internships or shipped features.
  • Quantify research contributions concretely (papers, citations, open-source impact, benchmark improvements, novel architectures) rather than business KPIs that mean little to a research lab.
  • If you have published, link to the papers and clearly state your contribution (first author, theoretical lead, experimental lead) so reviewers do not have to guess.
  • Highlight systems-level technical work: distributed training at scale, custom CUDA kernels, large model infrastructure, RL pipelines, and evaluation harnesses are all signal.
  • For non-research engineers, emphasize building reliable infrastructure for research workloads, not generic SaaS engineering; mention specific stacks like PyTorch, JAX, Triton, Kubernetes, and large GPU cluster experience.
  • Keep the resume short, dense, and free of buzzwords; SSI reviewers will skim for substance and discount marketing language quickly.
  • Include a one- or two-sentence statement about why safe superintelligence matters to you personally; generic mission statements are easy to spot and do not help.
  • Omit irrelevant material entirely; an SSI resume should rarely exceed one page for engineers or two for senior researchers.



Interview Culture

Interviewing at Safe Superintelligence is unusual in tone and substance compared to most frontier AI labs.

The company has stated publicly that it screens not just for technical capability but for 'good character,' and interviewers reportedly probe motivations carefully to understand whether candidates are drawn to the mission itself or to the prestige of working with Ilya Sutskever.

Expect at least one conversation where the interviewer simply asks you to talk through a research problem you have struggled with at length, then follows your reasoning in real time, asking what alternatives you considered, where you got stuck, and what you would do differently. This is a craft interview, not a checklist interview. Coding interviews, when they happen, tend to favor short, conceptually rich problems over LeetCode grinding, and the interviewer is more interested in how you think about correctness, edge cases, and numerical stability than in raw speed. Research interviews typically include a deep dive into one or two of your prior projects or papers, where the interviewer will press on assumptions, alternative architectures, scaling behavior, and failure modes. If you have not internalized your own work, this will surface quickly.

Founder interviews, particularly with Sutskever, tend to be calm and Socratic, with extended silences that candidates should resist filling with noise. Many candidates report being asked about their long-term commitment to a mission with no product feedback loop, their tolerance for working in deep secrecy, and their willingness to stay focused on a single research bet for many years.

SSI runs a small process with relatively few interviewers, and decisions are made by consensus among a tight inner circle. Compensation conversations are reportedly straightforward and competitive with top labs, but the company is candid that the deal is fundamentally a long-horizon equity bet, not a cash optimization.
Candidates who try to negotiate aggressively on standard tech-industry levers tend to fare poorly; candidates who treat the conversation as a mutual evaluation of fit do far better.

What Safe Superintelligence Looks For

  • Exceptional technical depth in machine learning research, large model training, optimization, or AI safety with a track record of original contributions.
  • Strong systems and software engineering ability, particularly for research engineers who must support large-scale training without a dedicated platform team.
  • Demonstrated ability to work on long-horizon problems without external validation, shipping cycles, or short feedback loops.
  • Genuine intellectual interest in safe superintelligence as a scientific problem, not merely a career opportunity or status play.
  • Discretion and comfort with extreme operational secrecy; SSI publishes almost nothing and expects employees to be similarly quiet externally.
  • High individual leverage: SSI hires generalists who can own a problem end to end rather than specialists who depend on large supporting teams.
  • Cultural fit with a very small, focused team where every hire materially changes the room and personality conflicts are not absorbed by scale.
  • Willingness to relocate to Palo Alto or Tel Aviv and work in person, since SSI is intentionally not a remote-first company.

Frequently Asked Questions

Is Safe Superintelligence hiring in 2026?
Yes, SSI continues to hire selectively for research scientists, research engineers, and a small number of operations and infrastructure roles in both Palo Alto and Tel Aviv. The company keeps a short list of openings on ssi.inc and accepts unsolicited applications by email. Hiring volume is intentionally low because the team itself is small.
What does Safe Superintelligence actually do?
SSI is a research-only AI lab pursuing the development of safe superintelligence as its single product. The company has explicitly stated it will not release commercial products until its core scientific objective is achieved. Its day-to-day work centers on foundational research in alignment, scalable oversight, large model training, and theoretical safety, though specifics are not disclosed publicly. The founders have framed safety and capability as a single integrated engineering problem rather than two competing concerns, which is itself a research bet: build systems whose safety properties scale with their capability rather than being bolted on afterward. This framing shapes hiring, since the company looks for people who reject the safety-versus-capability tradeoff as a false dichotomy.
Where are SSI's offices located?
SSI operates from two primary offices: Palo Alto, California in the United States and Tel Aviv, Israel. These locations were chosen for talent density and the founders' networks. Most roles require relocation and in-person work; SSI is not a remote-first company.
How much does Safe Superintelligence pay?
SSI does not publish compensation bands, but reports from candidates and industry observers suggest total compensation packages are competitive with top frontier labs like OpenAI, Anthropic, and Google DeepMind, with a heavy emphasis on equity given the company's high private valuation. Cash components are reportedly market-rate rather than category-leading, and there is no liquid public market for the equity, so the upside is genuinely a long-horizon bet. Senior researchers with strong leverage have reportedly negotiated outsized equity grants, but this is the exception and requires a track record that justifies it; mid-level candidates should expect packages framed around standard top-lab benchmarks rather than aggressive cash signing bonuses.
What is SSI's valuation and funding history?
SSI raised one billion dollars in September 2024 at a five-billion-dollar post-money valuation. A subsequent 2025 round pushed the valuation reportedly into the tens of billions of dollars, despite the company having no revenue or shipped product. Investors include Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel, and NFDG (the firm associated with Daniel Gross and Nat Friedman).
How is SSI different from OpenAI or Anthropic?
Unlike OpenAI and Anthropic, which fund safety research through commercial revenue from products like ChatGPT and Claude, SSI is pure research with no commercial product roadmap. It funds itself entirely through investor capital and has structured itself to be insulated from product timelines and short-term market pressure. This means the work is more focused but also more speculative.
What kind of background do successful SSI candidates have?
Successful candidates typically have deep technical backgrounds in machine learning research, large-scale model training, optimization, theoretical computer science, or AI safety. Many have published influential papers, contributed to major open-source projects, or led significant research efforts at frontier labs, top PhD programs, or quant firms. Brand names matter less than verifiable technical depth and originality. SSI has also hired a notable number of strong generalist engineers without traditional research credentials, particularly for infrastructure and training-systems roles, on the strength of demonstrable systems work at hyperscale companies. Self-taught contributors with public artifacts that show real depth (kernels, training frameworks, novel optimizers) are taken seriously.
Does SSI sponsor visas?
SSI has hired international talent for both its Palo Alto and Tel Aviv offices, suggesting visa sponsorship is available for the right candidates. However, the company does not advertise specific immigration support, and candidates should raise visa needs early in the conversation to confirm fit and timing for the relevant office. The Tel Aviv office can be an attractive alternative for candidates who would otherwise face long H-1B queues for the United States, provided they are open to relocating to Israel and can navigate Israeli work permits, which the company can typically support for senior hires.
How long does the SSI hiring process take?
Reports suggest the SSI process runs anywhere from three to eight weeks end to end, depending on role and candidate availability. The company is small and decisions are made by a tight inner circle, so when consensus forms the process can move quickly; conversely, ambiguous candidates can sit in the pipeline for weeks while additional conversations are scheduled.
Should I apply to SSI if I want product or applied AI work?
No. SSI has publicly committed to not shipping products until its core research mission is complete, which could be many years. If you want to work on user-facing AI, applied research with short feedback loops, or commercial deployments, frontier labs with product organizations like OpenAI, Anthropic, Google DeepMind, or xAI are a much better fit. SSI also offers no opportunity to build a public portfolio of shipped features or open-source releases, since the company publishes essentially nothing externally. Researchers who define themselves by visible output, conference talks, or paper velocity will likely find the cultural fit difficult; SSI is built for people who are content to do their best work in private for a long time.

Open Positions

Safe Superintelligence currently has 19 open positions.

