Take-Home Assignment Guide for 2026: How to Approach This Interview-Round Style in Staff+ Hiring
In short
Take-home assignments appear most often at structured-hiring startups and growth-stage companies; large-company loops at FAANG-tier and major AI-labs more often foreground live coding and system design rounds, with take-homes used selectively for some roles. When the assignment is part of the loop, the submission is typically read for three things: the code at the seams that matter, the README that names your trade-offs, and the scope-and-time disclosure that respects the stated budget. A strong take-home is bounded, honest, and shows judgment; a weak one is over-engineered, gold-plated, or silently thirty hours over budget.
Key takeaways
- Use is concentrated at structured-hiring companies. Companies that publish written hiring practices (GitLab's hiring handbook is a long-running public example) often include take-homes; large-company loops at FAANG-tier more often rely on live coding and system design.[1]
- Time budget (our editorial guidance): two to six hours of focused work. If the assignment is open-ended or the budget is unstated, ask the recruiter before starting.
- A strong README often helps reviewers understand trade-offs. Name the alternatives you weighed, the decisions you rejected, the scope you cut, and the time you spent. The README is part of how the submission communicates judgment.
- Production-quality at the seams that matter, not everywhere. Error handling, tests at the boundary, naming that holds up. Skipping these reads as junior; over-engineering elsewhere reads as unfocused.
- AI-tooling policies vary; read the assignment first. Some companies encourage AI tools on the take-home; others ask candidates to complete take-home assessments without AI assistance unless otherwise indicated. Follow the policy on the page; if no policy is stated, use AI as you would on real engineering work and disclose non-trivial usage.
When companies use take-homes, and when they skip them
The use pattern in 2026 tech-company hiring is concentrated:
- Structured-hiring startups and growth-stage companies often use them. Companies that have invested in published hiring practices commonly include the take-home as one round of evidence. GitLab publishes its hiring handbook and has long made its overall hiring process public; the handbook is a useful reference for how a structured-hiring company organizes interview evidence.[1]
- Some AI-labs and growth-stage SaaS use them; practice varies. When a take-home is used, it is often the first or second technical round, with a live follow-up where the interviewer asks you to explain or extend your submission. Other AI-labs prefer live coding tools; check the published interview process for the company you are interviewing with.
- Large-company FAANG-tier loops more often foreground live rounds. Live coding, system design, and behavioral rounds dominate published prep guidance from Amazon, Google, and Meta; take-homes appear less prominently in those guides, though they may still be used selectively for some roles or teams.
- Some traditional-pipeline industries skip them. Consulting, finance, and other interview-heavy industries rely on case interviews or technical screens rather than take-homes; the take-home pattern is largely a tech-industry artifact.
The bar to ask when you are early in the process: does the role include a take-home in the published interview process, or is the recruiter introducing it ad hoc? The published-process answer is fine; the ad-hoc answer is sometimes a sign of disorganized hiring (or of a recruiter using the take-home to filter out candidates without the team's explicit support).
The time-budget question
Our editorial guidance is to treat two to six hours of focused work as the working range for a well-designed take-home (this is our recommendation, not a number we have seen quoted directly in a published source). Will Larson's StaffEng guides discuss the ambiguity of staff+ loops and recommend asking the recruiter about process details, which is the right disposition for pinning down the time expectation on a specific assignment. Harvard Business Review's "How to Take the Bias Out of Interviews" argues more broadly for structured questions and consistent scoring, which a tightly-scoped take-home supports better than an open-ended one.[2][3]
The reality is messier. Some companies publish an explicit time budget on the assignment page or in the recruiter email. Some do not. Two failure modes when the budget is unstated:
- Budget-blind candidates. Spend three days on a four-hour assignment, submit something polished, and either run out of energy for the rest of the loop or signal to the company that they are time-blind.
- Budget-rigid candidates. Submit a half-finished assignment after exactly four hours with no scope-and-time disclosure, and read as candidates who did not engage seriously.
The honest middle: ask the recruiter for the time expectation if it is not stated; if you choose to spend more time than stated, disclose it in the README. The submission that says "I spent eight hours on this; I cut these three features; here is what I would do next" reads as confident senior+ judgment. The submission that silently absorbs thirty hours and arrives polished but undisclosed reads as candidate-as-victim.
The README often helps reviewers read the submission
A clear README often helps reviewers understand the candidate's trade-offs and scope decisions; this is our editorial recommendation rather than a citation from a published rubric. The README does three things at minimum:
- States the trade-offs you weighed. Not just the decisions you made; the alternatives you considered and why you rejected them. 'I chose Postgres over DynamoDB because the assignment specified transactional semantics; if the constraint were latency-first, DynamoDB would have been the call.'
- Discloses scope and time honestly. What you shipped, what you cut, what you would do next with more time, and roughly how long it took. The scope-and-time disclosure is the highest-yield section of the README in our experience; it shows judgment about what to ship versus what to skip.
- Names the seams. Where the code is production-quality (the boundary tests, the input validation, the error path) and where it is not (the hardcoded credentials, the unhandled edge case, the missing telemetry). Pretending the whole submission is production-quality reads as either unaware or dishonest; naming the seams reads as senior+.
A strong README is roughly 200 to 500 words; longer READMEs lose the reviewer's attention, shorter ones rarely carry enough specificity. Code comments are not a substitute for the README; the reviewer reads the README first to set expectations, then reads the code through that lens.
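As a sketch, a README in that range might be structured like this (the project name, trade-offs, numbers, and run command are illustrative, not from any published template):

```markdown
# Order-Ingest Service (take-home submission)

## Trade-offs
- Postgres over DynamoDB: the assignment specifies transactional
  semantics. A latency-first constraint would reverse this call.
- Synchronous validation at the API boundary, no queue: at assignment
  scale, a queue is premature.

## Scope and time
Shipped: ingest endpoint, boundary validation, one happy-path
integration test. Cut: retries, auth, telemetry. Time spent: ~5 hours.
Next with 8 more hours: idempotency keys, structured retries.

## Seams
Production-quality: input validation, error paths, the boundary test.
Not production-quality: hardcoded connection string, no rate limiting.

## Run
`docker compose up && make test`
```

The section order mirrors how a reviewer reads: trade-offs first to set expectations, then the scope-and-time disclosure, then the seams the code should be judged against.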
Production-quality at the seams that matter
The most common scope mistake at senior+ levels is the assumption that "production-quality" means "production-quality everywhere". The honest interpretation: production-quality at the seams the assignment exposes, not at every line.
Where to invest, by category:
- Backend take-homes. Boundary input validation, structured error handling, at least one happy-path integration test, structured logging on the request path. Skip: deep observability, full retry logic, complete documentation generation.
- Frontend take-homes. Accessible markup at the components the assignment names, type-safe props at the boundary, at least one unit test on the component logic, error-state rendering on the network path. Skip: full design-system extraction, theme abstraction, exhaustive visual regression.
- Data / ML take-homes. Input validation, reproducibility (seed, requirements, README run instructions), at least one quantitative result with a baseline, structured failure modes named in the README. Skip: full hyperparameter sweep, exhaustive feature engineering, production-grade serving infrastructure.
- Infrastructure / SRE take-homes. The named SLO with at least one synthetic test against it, a runbook section in the README, reasonable structured logging, the named failure injection. Skip: full chaos engineering, multi-region failover, complete dashboard wiring.
The pattern: invest at the seams the assignment exposes, skip everywhere else, and name what you skipped in the README. The submission that does this reads as senior+ judgment; the submission that gold-plates one area at the expense of another reads as unfocused.
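For the backend category above, a minimal Python sketch of "production-quality at the seams" (the payload shape, field names, and status codes are our illustrative choices, not from any published rubric):

```python
import json


class ValidationError(Exception):
    """Raised when a request fails boundary validation."""


REQUIRED_FIELDS = {"customer_id", "amount"}


def validate_order(payload: dict) -> dict:
    """Boundary validation: reject bad input with a structured error."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValidationError(f"missing fields: {sorted(missing)}")
    if not isinstance(payload["amount"], (int, float)) or payload["amount"] <= 0:
        raise ValidationError("amount must be a positive number")
    return payload


def handle_request(body: str) -> tuple[int, dict]:
    """Request path: structured error handling, never a bare 500."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return 400, {"error": "body is not valid JSON"}
    try:
        order = validate_order(payload)
    except ValidationError as exc:
        return 422, {"error": str(exc)}
    return 201, {"status": "created", "order": order}


def test_happy_path():
    # The one happy-path test at the boundary: a full request through the seam.
    status, resp = handle_request('{"customer_id": "c1", "amount": 9.5}')
    assert status == 201 and resp["status"] == "created"
```

The skipped items (deep observability, retries) would be named in the README's seams section rather than half-implemented here.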
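For the data/ML category, reproducibility can be as small as one seeded RNG threaded through the split logic; a Python sketch (the function name and split fraction are ours):

```python
import random

SEED = 42  # the single seed constant named in the README run instructions


def sample_split(n_items: int, train_frac: float = 0.8, seed: int = SEED):
    """Deterministic train/test split: same seed, same split, every run."""
    rng = random.Random(seed)  # local RNG; no hidden global state
    indices = list(range(n_items))
    rng.shuffle(indices)
    cut = int(n_items * train_frac)
    return indices[:cut], indices[cut:]


# Reproducible: the reviewer sees the same split (and numbers) you did.
train_a, test_a = sample_split(100)
train_b, test_b = sample_split(100)
assert train_a == train_b and test_a == test_b
```

Pinned requirements and a one-line run instruction in the README complete the reproducibility story; the seed alone is not enough if dependencies drift.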
AI-assisted take-home work in 2026
AI-assisted coding tools (Claude, Cursor, GitHub Copilot, and others) are widely used in 2026 senior+ engineering work, and policies on their take-home use vary across companies. Some companies explicitly encourage AI tooling on the assignment. Others ask candidates to complete take-home assessments without AI assistance unless otherwise indicated. The first rule is to read the assignment for an explicit AI policy and follow it; the second is to ask the recruiter if no policy is stated.
Pragmatic rules:
- Read the AI-policy section first. If the assignment specifies a policy, follow it. The policy is sometimes the most important rubric item; ignoring it is a fast-rejection pattern.
- Default: use AI as you would at work. If no policy is stated, use AI tooling as you would on real engineering work; the assignment is a proxy for the work, and your work likely already includes AI tooling.
- Understand what you submit. An interviewer can follow up by asking you to explain or extend any part of the submission; if you do not understand the code, the follow-up will surface it. AI tools accelerate code production but they do not transfer understanding.
- Disclose non-trivial AI usage. If you used AI to generate a substantial portion of the code, say so in the README and name what you reviewed and what you trusted. AI policies vary across companies; the disclosure itself is the signal of editorial discipline, regardless of whether the company expects AI usage or asks you to avoid it.
- Reviewers still evaluate your judgment. Architecture, scope, trade-offs, testing, README quality. AI tools do not produce these on their own; they accelerate the parts of the work where you already have judgment to apply. AI-only submissions that read as unfocused or unscoped fail for the same reason AI-only cover letters do.
Common take-home failure modes
- Over-engineering. Adding patterns or abstractions the assignment does not need ('I added a plugin system because I wanted to show extensibility'). The assignment scope is the spec for what to ship; reading it carefully is part of the work.
- Under-engineering at the seams that matter. Skipping error handling, input validation, or boundary tests in places that obviously matter for the role (input boundaries, error paths, named constraints). The fix: name the seams that matter and invest there.
- No README, or a how-to-run README. The README that documents only how to install dependencies and run the app is a junior signal; the README that names trade-offs, scope, and time spent is a senior+ signal.
- Gold-plating with frameworks the assignment did not call for. Adding Redux to a four-component app, adding Kubernetes to a single-binary deployment, adding microservices to a CRUD app. Reads as unfocused or as showing off; both fail.
- Silent scope inflation. Burning thirty hours on a four-hour assignment without disclosing the time spent. Reviewers can usually tell, and the implicit message ('I will overspend on real work too') is unfavorable.
- Submitting code without running it. The README says 'npm start; visit localhost:3000' and the app crashes on startup. Run your own assignment from a fresh clone before submitting; the failure to do this is the single most preventable rejection pattern.
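The fresh-clone check can itself be scripted; a hedged Python sketch of a minimal smoke check (the entry command is whatever your own README names; this is our helper, not a standard tool):

```python
import subprocess
import sys


def smoke_check(cmd: list[str], timeout_s: float = 10.0) -> bool:
    """Run the submission's entry command from a fresh checkout and
    verify it exits cleanly within the timeout."""
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        # A long-running server will hit this; probe a readiness
        # endpoint instead of waiting for exit in that case.
        return False
    return result.returncode == 0


# Example: the command your README tells the reviewer to run.
ok = smoke_check([sys.executable, "-c", "print('app booted')"])
```

Running this against a clean clone, not your working directory, is the point: it catches the missing dependency or untracked config file that only exists on your machine.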
When to pass on the take-home
Some take-homes are not worth doing. Heuristics that have held up:
- Production-quality with no time budget. 'Build us a real production system; we will let you know when we want to talk.' This can be a red flag for unpaid-consulting-as-take-home; the rejection signal is yours to send.
- Work the company would obviously use. 'Build us a marketing-site rewrite' or 'help us scope our migration plan'. The assignment is often too valuable to be a screening artifact; the company may be asking for free work.
- Open-ended scope with no rubric. 'Surprise us'. Without a rubric, your submission cannot be evaluated against anything specific, and the take-home risks becoming a vibes-test.
- Companies low on your list. Take-homes are time investments; spend them on companies you actually want to work for. Doing a take-home for a backup company that you would only accept under duress is a poor use of senior+ time.
- Recruiter pressure to do it without a hiring manager conversation first. The hiring manager conversation is where you decide whether the role is right for you; doing the take-home before that conversation often reverses the order of investment.
The position that often holds up at senior+ levels: take-homes are negotiable. You can decline politely, ask for a live alternative, or scope the assignment yourself. Companies with mature hiring practices often respond well to scope negotiation; a company that responds poorly to such a request is giving you a useful signal about how it treats senior candidates.
Common questions
When do tech companies use take-home assignments in 2026?
Most often at startups and growth-stage companies, and at companies that publish structured-hiring practices. Large-company loops at FAANG-tier and at major AI-labs more often foreground live coding and system design rounds in their published prep guidance; take-homes may still be used selectively for some roles. Practice at AI-labs and growth-stage SaaS varies: some include a take-home round; others prefer live coding tools. GitLab's public hiring handbook is one long-running example of a structured-hiring company that publishes its process. The bar to ask: does the role include a take-home in the published process, or is the recruiter introducing it ad hoc?
How long should a take-home assignment actually take?
Our editorial guidance is that two to six hours of focused work is the right working range for a well-designed take-home (this is our recommendation, not a number we have seen quoted directly in a published source). The reality is more variable. Some companies publish an explicit time budget ('please spend no more than four hours'); others do not. The honest framing: senior+ candidates routinely spend longer than the stated budget on a take-home they care about, and the company often knows this. The decision is yours: spend the stated budget and submit what you have, or spend more time and submit something more polished. Do not submit something that obviously took ten times the stated budget; it reads as either time-blind or as candidate-as-victim.
Should I do the take-home, or skip companies that use them?
Do them when the role and the company are worth it; skip them when they are not. The argument for doing take-homes: when the company has a written rubric (which you typically cannot see), your submission can compete on the work itself rather than on interviewer rapport or on whether you happened to have the right framework recall in the moment. The argument against: they are unpaid labor, the rubric (if any) is opaque to you, and the time investment is non-trivial. The pragmatic position at senior+ levels: do the take-home if the company is one of your top five targets, the role is well-scoped, and the assignment scope is bounded; pass on companies that ask for production-quality work with no time bound or that ask for work that would normally be paid consulting.
What separates a strong take-home submission from a passing one?
Three structural differences. (1) A README that names the trade-offs you weighed and the decisions you rejected, not just the decisions you made. (2) Code organization that reads as production-quality at the seams the assignment exposes (error handling, tests at the boundary, naming that holds up), not necessarily everywhere. (3) An honest scope-and-time disclosure: what you shipped, what you cut, why, and how long it took. Our editorial recommendation (not a citation from a published rubric): a clear README and scope-disclosure often help reviewers understand the work, while submissions without them ask the code to speak for itself in places where it rarely can.
How do I handle a take-home that asks for production-quality work?
Push back, or scope it explicitly. The production-quality framing is sometimes a miscommunication ('we want to see code we would ship') and sometimes a red flag ('we want unpaid consulting'). The clarifying ask: 'how many hours do you expect this to take?' If the answer is two to six and the assignment is bounded, proceed and submit work that reads as production-quality at the seams that matter. If the answer is open-ended and the assignment is ambitious, submit a scoped version with explicit scope-and-time disclosure: 'I spent four hours on this; I cut these features; here is what I would do next if I had eight more hours.' Some companies will reject the scoped submission; those are companies that wanted free work, and the rejection is a signal.
What are the most common failure modes in take-home submissions?
Five we have seen recur in editorial review of take-home submissions: (1) over-engineering, where the submission introduces patterns or abstractions the assignment does not need ('I added a plugin system because I wanted to show extensibility'); (2) under-engineering at the seams that matter, where the candidate skips error handling, tests, or input validation in places that obviously matter for the role (input boundaries, error paths, named constraints); (3) no README, or a README that reads as a how-to-run rather than a why-I-made-these-choices document; (4) gold-plating with frameworks or tools the assignment did not call for, which reads as either unfocused or as showing off; (5) silent scope inflation, where the candidate burns 30 hours on a 'four-hour' assignment without disclosing the time spent. The unifying fix: read the assignment carefully, write the README first, scope honestly, ship at the stated time budget, and disclose the time and the cuts.
Should I use AI tools (Claude, Cursor, Copilot) on a take-home assignment?
Policies vary; read the assignment first. Some companies explicitly encourage AI tooling on the take-home; others ask candidates to complete take-home assessments without AI assistance unless otherwise indicated. The pragmatic rules: (1) read the assignment for an explicit AI policy, and follow it; (2) if no policy is stated, ask the recruiter, and otherwise use AI as you would on real work; understand the code you are submitting (an interviewer can follow up by asking you to explain or extend any part of it); (3) disclose AI usage in the README if asked or if the use is non-trivial; (4) reviewers still evaluate your judgment (architecture, scope, trade-offs, testing), which AI tools do not produce on their own. AI-assisted submissions that read as AI-only (no editorial discipline, no scope judgment, no opinions in the README) fail for the same reason AI-only cover letters do.
Sources
- GitLab Hiring Handbook. Public hiring handbook from a company that has long published its overall hiring process and interview-process documentation. We cite it as a long-running example of structured-hiring transparency; we do not claim it publishes a take-home rubric specifically.
- StaffEng Guides (Will Larson). Long-running staff-engineer career-guidance site; reference for senior+ interview-process expectations and the time-respecting-candidate principle.
- Harvard Business Review: How to Take the Bias Out of Interviews. The argument for structured questions and scoring as a way to reduce subjectivity in hiring; supports the rubric-based-evaluation framing.
- Harvard FAS Mignone Center for Career Success. Harvard's career-services office; general interview-preparation guidance referenced alongside the role-specific take-home framing here.
- MIT Career Advising and Professional Development. MIT's career-services office; general interview-preparation guidance for tech-track candidates.