Software Engineer at Anthropic (2026)
In short
Anthropic is an AI safety company building Claude. Engineering surfaces span the Claude.ai consumer product, the API + Console, Claude Code (the developer CLI), inference infrastructure, training infrastructure, and alignment research engineering. The primary stack is Python (across ML and product), TypeScript (Claude.ai frontend, Console, Claude Code CLI), and Rust (some performance-critical infrastructure). Anthropic is San Francisco-headquartered with hubs in NYC, London, and Dublin; most US SWE roles are San Francisco-based with limited remote eligibility per anthropic.com/jobs.
Key takeaways
- Anthropic is a Public Benefit Corporation (PBC) with stated AI safety mission; mission-alignment is an explicit interview signal at every level (per anthropic.com/jobs).
- Primary stack: Python everywhere (product, infrastructure, ML), TypeScript for web frontends and CLI tools, Rust for performance-critical infrastructure, with significant CUDA work on the inference team.
- Anthropic publishes specific salary ranges per US pay-transparency laws; senior SWE postings (e.g., Member of Technical Staff levels) commonly cite total comp in the $400k-$700k range with significant equity weighting.
- Anthropic's interview process emphasizes both deep technical signal (coding + system design appropriate to role) and explicit AI-safety mission alignment — engineers indifferent to safety motivation tend to fail the loop.
- Familiarity with Claude as a product (using it daily, knowing its strengths/limits, having a position on AI engineering best practices) is a strong implicit signal — Anthropic engineers use Claude in their own daily work.
- Specific job categories: Member of Technical Staff (broad SWE+research), Research Engineer (ML-research-leaning), Product Engineer (Claude.ai, Console, Claude Code), Infrastructure Engineer (inference, training).
Where Anthropic SWEs work — surfaces and teams
From anthropic.com/jobs (verified 2026-04-27):
- Claude.ai (consumer product). The chat interface at claude.ai. TypeScript/React frontend, Python backend. The fastest-shipping surface; product engineers ship UI, conversation features, projects, computer use, file uploads, the artifact system.
- API + Anthropic Console. The developer-facing surface (console.anthropic.com). API design, billing, key management, the Workbench prompt-development tool, evaluation tooling. TypeScript + Python.
- Claude Code (developer CLI). The terminal-native AI coding tool documented at docs.claude.com/en/docs/claude-code/overview. TypeScript primarily; significant integration work with VS Code, JetBrains, Vim, terminal-native workflows, MCP (Model Context Protocol).
- Inference infrastructure. Serving Claude at scale across customers and consumer products. Heavy CUDA + Python + Rust. Distributed serving, batching, KV-cache management, prompt caching, throughput optimization.
- Training infrastructure. Pretraining and post-training infrastructure. Large-scale distributed training (thousands of accelerators), data pipelines, evaluation harnesses, model checkpoint management.
- Alignment Research engineering. Engineering support for safety-research teams: interpretability tooling, RLHF infrastructure, evaluation frameworks for safety benchmarks. Anthropic publishes substantial alignment research at anthropic.com/research.
- Trust & Safety / Policy. Content moderation, abuse mitigation, deployment policy enforcement. Cross-functional with the policy team.
- Internal AI tooling. Building AI-augmented internal tools (Claude integrated into Anthropic's own engineering workflows). Anthropic has published 'Claude Code Best Practices' (anthropic.com/engineering/claude-code-best-practices) reflecting their internal usage.
Anthropic's engineering blog (anthropic.com/news + anthropic.com/research) is canonical pre-interview reading. Posts on Claude's engineering, MCP, prompt caching, and the Claude Code best-practices guide are explicitly designed to communicate engineering culture.
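The inference-infrastructure work above centers on techniques like prompt caching: reusing computation when many requests share a prompt prefix. As a mental model only (real systems cache KV tensors at token boundaries, not strings — this is a toy sketch, not Anthropic's implementation):

```python
import hashlib

class PrefixCache:
    """Toy prefix cache keyed by hashes of fixed-size prompt prefixes.

    Illustrative only: real prompt caching stores attention KV-cache
    state keyed by token-prefix boundaries, not raw strings.
    """
    def __init__(self, block: int = 16):
        self.block = block                 # cache at fixed prefix granularity
        self.store: dict[str, int] = {}    # prefix-hash -> cached prefix length

    def _key(self, prefix: str) -> str:
        return hashlib.sha256(prefix.encode()).hexdigest()

    def insert(self, prompt: str) -> None:
        """Record every block-aligned prefix of the prompt as cached."""
        for end in range(self.block, len(prompt) + 1, self.block):
            self.store[self._key(prompt[:end])] = end

    def longest_cached_prefix(self, prompt: str) -> int:
        """Length of the longest cached block-aligned prefix of `prompt`."""
        hit = 0
        for end in range(self.block, len(prompt) + 1, self.block):
            if self._key(prompt[:end]) in self.store:
                hit = end
            else:
                break
        return hit

cache = PrefixCache(block=4)
cache.insert("SYSTEM: you are helpful. USER: hi")
# A second request sharing the system prompt reuses most of the prefix:
reused = cache.longest_cached_prefix("SYSTEM: you are helpful. USER: bye")
```

The point the sketch makes: a long shared system prompt amortizes across requests, which is why prompt structure (stable prefix first, variable content last) matters for serving cost.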
Interview process and what gets graded
Anthropic's process per anthropic.com/jobs and candidate reports (Glassdoor + Blind):
- Recruiter screen (30 min). Background, role-fit, mission-alignment opening. Anthropic recruiters explicitly probe motivation for AI-safety work.
- Coding screen (60 min). CoderPad-style coding. A standard medium-difficulty algorithmic problem; Anthropic's screen is reputed to be of reasonable difficulty (not Google-tier hard).
- On-site (4-5 rounds, ~5 hours):
- Coding rounds (1-2). Algorithmic problems plus, often, an applied AI-engineering problem (e.g., 'design a tool-use API', 'implement a streaming response handler with cancellation'). Anthropic's product engineers work with Claude APIs daily; coding rounds reflect this.
- System design round (senior+). Standard format. Anthropic's interviewers tend to push on inference-serving challenges (high-throughput LLM serving, KV cache management, prompt caching) for relevant roles, or product/API design challenges for product engineers.
- Research/engineering depth round (varies by role). For Research Engineer roles, expect a depth probe on a paper or technique (transformer attention, RLHF, scaling laws). For product/infrastructure roles, expect depth on relevant systems.
- Mission & values round. Explicit conversation about why Anthropic, what AI safety means to you, how you'd handle decisions involving capability vs safety trade-offs. This round is real evaluation, not a checkbox. Engineers who've thought about AI safety have material to discuss; engineers who haven't tend to underperform.
The 'Member of Technical Staff' framing: Anthropic uses this title broadly across SWE and research-engineering, borrowed from research-lab tradition (Bell Labs, OpenAI). It signals less hierarchical leveling than at Google/Meta and a more generalist expectation. Interviews calibrate to specific role expectations; the title doesn't fully predict scope.
Compensation, sourced
Anthropic publishes specific salary ranges on individual US job postings per pay-transparency laws. Aggregated approximations (per current postings at anthropic.com/jobs and levels.fyi/companies/anthropic):
- L3 (Member of Technical Staff, junior-mid): ~$240k base, $310k-$470k total per levels.fyi/companies/anthropic/salaries/software-engineer.
- L4 (Senior MTS): ~$300k base, $440k-$700k total.
- L5 (Staff MTS): ~$380k base, $700k-$1.1M+ total.
- L6 (Principal MTS): $450k+ base, $1M-$1.8M+ total.
Equity: Anthropic is private (last reported funding round at ~$60B+ valuation as of 2026). Equity is in the form of options or RSU-equivalents; vesting typically 25/25/25/25 over 4 years with one-year cliff. Liquidity is via tender offers (occasional) or eventual IPO/acquisition; plan for 5-10 year holding period at minimum.
Anthropic's pay strategy has been to pay at or above FAANG-tier total comp to attract senior talent into the AI-safety mission. Base salaries are high even relative to the heavy equity weighting; total comp at senior+ levels can exceed comparable Google/Meta levels.
What Anthropic looks for that other companies don't
Two signals carry more weight at Anthropic than at peer companies:
1. Mission alignment. Anthropic's stated mission is to ensure transformative AI benefits humanity. The company is a Public Benefit Corporation with this mission encoded in its corporate charter. The interview-loop weighting on mission alignment is real: engineers indifferent to safety motivation, or who view the mission as marketing, fail the loop. Conversely, engineers with deep prior thinking about AI safety (e.g., having read Dario Amodei's essays at darioamodei.com, Anthropic's published research, or the broader alignment literature) signal strongly.
What good mission-alignment evidence looks like:
- You can articulate specific risks of advanced AI without retreating to either 'doomerism' or 'it's all hype'.
- You've thought about how engineering decisions trade off capability against safety (model release decisions, deployment policy, evaluation rigor).
- You've used Claude (or similar) extensively and have a position on what's working and what isn't.
- You've engaged with Anthropic's published research (anthropic.com/research) — interpretability work, alignment papers, Constitutional AI.
2. Familiarity with AI engineering as a discipline. Anthropic engineers ship products that use Claude; they're expected to be fluent users of LLM APIs. Specific signals:
- Concrete experience with prompt engineering, tool use, evaluation harnesses.
- Opinion on AI engineering best practices (caching strategies, evaluation methodology, debugging hallucinations).
- Ability to articulate when AI is the right tool vs when traditional engineering is better.
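One way to demonstrate the evaluation-harness signal above is a minimal deterministic eval: prompts paired with predicate checkers, scored as a pass rate. This is a generic sketch (the stub model and cases are invented for illustration; a real harness would call an LLM API and likely add model-graded checks):

```python
from typing import Callable

# Any callable from prompt -> completion works here; in practice this
# would wrap an LLM API client.
Model = Callable[[str], str]

def run_eval(model: Model, cases: list[tuple[str, Callable[[str], bool]]]) -> float:
    """Run each prompt through the model; grade with a per-case predicate.

    Deterministic string predicates are the simplest useful grading
    strategy before moving to model-graded evals. Returns the pass rate.
    """
    passed = sum(1 for prompt, check in cases if check(model(prompt)))
    return passed / len(cases)

# Stub model for illustration only.
def stub_model(prompt: str) -> str:
    return "4" if "2+2" in prompt else "unknown"

cases = [
    ("What is 2+2? Answer with just the number.", lambda out: out.strip() == "4"),
    ("Capital of France? One word.", lambda out: "paris" in out.lower()),
]
score = run_eval(stub_model, cases)  # 0.5: the stub passes only the first case
```

Being able to explain why the checkers are predicates (reproducible, cheap, no grader drift) and where they fall short (open-ended outputs) is exactly the kind of opinionated AI-engineering fluency the section describes.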
Anthropic's engineering blog includes a 'Claude Code best practices' essay (anthropic.com/engineering/claude-code-best-practices) that reflects internal practice. Reading it before the interview is a positive signal.
Frequently asked questions
- Is Anthropic remote-friendly for SWE roles?
- Limited. Most SWE roles are based at the San Francisco HQ; some at NYC, London, or Dublin hubs. A small number of roles are eligible for fully remote work within the US, explicitly noted on job postings. The company has been relatively conservative on remote work compared to some peers; expect on-site collaboration for most SWE roles. Reference: anthropic.com/jobs (filter by location).
- What does 'Member of Technical Staff' actually mean?
- An umbrella title borrowed from research-lab tradition (Bell Labs, OpenAI). At Anthropic, MTS spans junior through principal-equivalent levels with internal seniority indicators. The title signals less hierarchical leveling than at FAANG; many engineers operate as generalists with broader scope than typical. Internal levels exist for compensation and promotion purposes but aren't fully exposed externally. Reference: discussion at the Anthropic engineering page (anthropic.com/engineering).
- Does Anthropic require ML / research background for SWE roles?
- Depends on the role. Research Engineer roles require strong ML foundations (transformer architecture, training dynamics, scaling laws). Product Engineer and Infrastructure Engineer roles require general SWE skills plus comfort working with LLMs as APIs and components — no formal ML research background needed, though familiarity helps. The job posting specifies.
- How does Anthropic handle the AI-safety-vs-shipping trade-off in interviews?
- Honestly. Anthropic doesn't pretend the trade-off doesn't exist. Interview discussions often probe: 'how would you handle a decision where shipping a feature would help users but introduce safety risk?' The right answer isn't 'always pick safety' or 'always pick shipping' — it's structured reasoning about the trade-off, with awareness of both sides. Engineers who pick a side dogmatically tend to underperform; engineers who reason through the structure perform well. Reference: Anthropic's published Responsible Scaling Policy (anthropic.com/news/anthropics-responsible-scaling-policy).
- Is Anthropic's pay really above FAANG?
- At senior+ levels, yes. Anthropic's strategy has been to pay at or above FAANG total comp to attract senior talent. Base salaries are particularly high; equity weighting depends on the company's pre-IPO valuation. levels.fyi/companies/anthropic shows L4+ totals exceeding Meta L5 and approaching Meta L6 at peak vesting cycles. Trade-off: equity is illiquid (private company), so realized compensation depends on eventual liquidity event.
- Does Anthropic sponsor visas?
- Yes, broadly. Anthropic sponsors H-1B, O-1, and EU equivalents. Specific roles vary; the job posting and recruiter conversation will confirm. Per anthropic.com/jobs, immigration support is available for most SWE roles.
- What's the right way to talk about Claude in an interview?
- Specifically. 'I use Claude Code for refactors and Claude.ai for documentation drafts — here's a specific case where it saved 4 hours, and here's a specific case where it led me wrong and I had to backtrack' beats 'I think Claude is impressive'. The senior bar: opinions about specific Claude features, awareness of limitations, examples of well-designed prompts and ill-designed prompts. The wrong answer: 'I haven't used Claude much' for a role that involves shipping Claude products.
- How does Anthropic compare to OpenAI for SWE candidates?
- Different cultures. OpenAI is more research-celebrity-driven, Anthropic is more research-engineering-coordinated and explicitly safety-mission-driven. OpenAI has more public visibility; Anthropic publishes more interpretability research and explicit safety frameworks. Compensation is comparable at senior+ levels; Anthropic's mission alignment is more explicit. Engineers choosing between them often weigh 'which mission resonates' rather than 'which compensates better' — the comp is similar enough that it's not the load-bearing factor.
Sources
- Anthropic Careers — official postings (verified 2026-04-27).
- Anthropic Engineering — published engineering blog and best-practices.
- Anthropic Research — published interpretability and alignment research.
- Anthropic — 'Claude Code Best Practices' (canonical engineering essay).
- Anthropic — Responsible Scaling Policy (safety-vs-capability framework).
- levels.fyi — Anthropic SWE compensation.
- Anthropic — Claude Code documentation.
About the author. Blake Crosley founded ResumeGeni and writes about product design, hiring technology, and ATS optimization. More writing at blakecrosley.com.