Backend Engineer ATS Keywords for Tech Companies (2026)
Backend Engineer (BE) hiring is a different keyword target than frontend or full-stack hiring, and most resume advice flattens the three. Recruiters at tech companies — Stripe, Cloudflare, Datadog, Discord, Snowflake, Shopify, Figma, Anthropic — configure ATS searches for BE roles around five signal classes that don't appear on FE resumes: distributed-systems vocabulary (consensus, replication, sharding, partitioning, CAP, idempotency), database fluency (Postgres, MySQL, Redis, Cassandra, DynamoDB), language depth (Go, Python, Java, Rust, Ruby, C#), API and protocol design (REST, gRPC, GraphQL, OpenAPI, protobuf), and operational evidence (SLA / SLO, latency, throughput, observability with OpenTelemetry / Prometheus / Datadog) [1][2]. A resume that reads like a frontend or full-stack resume with "backend" sprinkled into the summary gets filtered out for BE roles because the keyword density in those five classes is too low. This page lists the BE keywords that pass screens in 2026, grouped by signal class, with the counter-list of keywords that backfire on BE resumes.
Key Takeaways
- BE resumes are scanned for five signal classes — distributed-systems vocabulary, database fluency, language depth, API/protocol design, and operational evidence — and missing any one of them drops the resume below the typical mid-and-senior BE match threshold on Greenhouse and Ashby [1][2].
- Specific database names ("Postgres," "Redis," "Cassandra," "DynamoDB") outperform the generic word "SQL" on every modern ATS engine; Stripe-style postings explicitly list named systems and the parser weights exact-match on those tokens [3][4].
- Designing Data-Intensive Applications by Martin Kleppmann is the canonical vocabulary source for BE keywords — replication, partitioning, consensus, log-structured storage, stream processing — and recruiters at top-tier companies pattern-match on the textbook's own terminology [5].
- FE-stack keywords ("React," "Tailwind," "jQuery," "Redux," "CSS-in-JS") on a BE resume actively reduce match score on Ashby's LLM-based scorer because the embedding lands closer to FE postings than BE postings — write FE-isolated keywords only into a clearly-marked "Adjacent" line if at all [2].
- BLS reports the median annual wage for Software Developers (SOC 15-1252.00, the closest BLS proxy because BLS does not isolate backend specifically) was $133,080 in May 2024, with projected employment growth of 15% from 2024–2034 [6]. Levels.fyi tracks Backend Engineer comp at top-tier tech companies separately and consistently above the BLS proxy because BLS folds frontend, backend, mobile, and ML engineers into a single occupation [7].
- "Idempotency," "exactly-once," "at-least-once," and "circuit breaker" are Tier-1 BE keywords because Stripe's engineering blog has trained the industry to expect them on senior-BE resumes — the absence of any of them on a payments / fintech BE resume reads as junior [8].
- Observability stack keywords (OpenTelemetry, Prometheus, Grafana, Datadog, Honeycomb, Jaeger) are mandatory for senior+ BE roles in 2026; Google's SRE Book and Site Reliability Workbook are the canonical references recruiters at SRE-mature companies expect candidates to know by name [9].
How Backend Engineer ATS Screens Work
BE hiring runs through the same ATS engines as the rest of engineering — Greenhouse, Lever, Workday, Ashby, SmartRecruiters, iCIMS — but the keyword matrix is denser and more precise than for FE roles. Where an FE search filters on framework names (React, Vue, Svelte) and styling tokens (Tailwind, CSS-in-JS, design system), a BE search filters on language, database, distributed-systems mechanic, protocol, and operational evidence, with framework / library context as a secondary check. The BE ATS scan is calibrating for two things at once: the candidate's stack overlap with the job, and the candidate's level of abstraction on the systems they've worked on [1][2][9].
Engine-specific behavior for BE hiring:
Greenhouse (Stripe, Airbnb, Notion, Robinhood, most Series-B-and-up startups) supports semantic matching, so "Postgres" registers as related to "PostgreSQL" and "PG," and "Kafka" relates to "event streaming" and "log-based queues" [1]. Greenhouse weights experience-bullet keywords more than skills-section keywords for BE roles — the bullets carry the load because that's where scale and outcome live. The recruiter UI also exposes a "shipped production code in the last 2 years" filter, which returns only candidates whose most recent role carries recent BE bullets.
Lever (Eventbrite, Shopify, parts of Lyft) emphasizes recency. For BE roles specifically, Lever recruiters often filter by "currently or recently in backend" — a candidate who pivoted from BE to staff-platform or to engineering management, then is applying back to BE roles, needs to surface backend code work in the most recent 24 months prominently. The "Go within last 2 years" or "Postgres within last 2 years" filters are the BE equivalent of the FE "React within last 2 years" filter.
Workday (Disney, Salesforce, Adobe, large-enterprise BE hires) is the strictest exact-match parser. For BE, Workday filters often require literal phrases like "distributed systems," "microservices," "REST APIs," or "PostgreSQL" — the candidate who writes "designed scalable services using Spring" but never uses the word "microservice" gets filtered out unless the JD's exact phrasing appears verbatim somewhere in the resume.
Ashby (Notion, Linear, Ramp, Anthropic, modern AI-era startups) is the friendliest ATS for nuanced BE resumes because its LLM-based scoring reads bullets and infers level from context. A bullet that describes "owned the consensus-replicated metadata service handling 30k QPS at p99 sub-25ms latency, partnering with platform engineering on the Raft-based rebalancing rollout" registers as senior-BE signal even if the title is "Software Engineer" rather than "Senior Backend Engineer" [2]. Ashby is where deep-systems candidates get the fairest read.
SmartRecruiters (Visa, Atlassian) and iCIMS (Capital One, Disney non-engineering) lean stricter and more exact-match. Both score the title block heavily for BE searches, and both penalize creative titles ("Software Craftsman," "Platform Generalist," "Polyglot Engineer") for not matching canonical strings ("Backend Engineer," "Senior Backend Engineer," "Staff Backend Engineer," "Platform Engineer"). Taleo (legacy enterprise, Oracle) is the oldest and the strictest; for BE Taleo searches, write defensively with explicit phrases like "REST API," "microservices," "PostgreSQL," "AWS," "Java / Spring Boot" or "Python / Django" rather than abbreviated forms.
Tier 1 — Language Keywords
Language-stack match is the strictest first-pass screen on BE resumes [1][2]. Match the JD's primary language exactly, then list one or two recent secondary languages.
| Language | Companies that screen on it heavily | Typical resume phrasing |
|---|---|---|
| Go (Golang) | Cloudflare, Stripe (selected services), Uber, Docker, HashiCorp | "Built and operated Go services handling 30k+ QPS" |
| Python | Stripe (legacy + new services), Reddit, Instacart, Anthropic | "Python 3.11+ services with FastAPI / Django; type-hinted production code" |
| Java | Netflix, LinkedIn, Twitter, Airbnb, most large enterprises | "Java 17 services on Spring Boot; JVM-tuned for sub-50ms p99 latency" |
| Rust | Discord, Cloudflare (Workers), 1Password, Figma (selected services), Linear | "Rust services on Tokio / Axum; ownership-aware async pipelines" |
| Ruby | Shopify, GitHub, Stripe (legacy monolith), Gusto, Square | "Ruby on Rails monolith ownership; sustained Rails 7+ migration" |
| C# / .NET | Microsoft, Stack Overflow, Stripe (selected), large finserv | "C# 12 / .NET 8 services on AWS; ASP.NET Core APIs" |
| TypeScript (server-side) | Vercel, Linear, Notion (parts), Shopify (Hydrogen), Replit | "Server-side TypeScript on Node 20+ / Bun; tRPC and Fastify in production" |
| Kotlin | JetBrains, Pinterest, Square, Spring-shop migrations | "Kotlin services on Spring Boot; null-safe domain models" |
| Elixir | Discord (selected), Bleacher Report, Pinterest (selected) | "Elixir / OTP services; GenServer-based supervision trees" |
Mention version numbers for languages where they matter (Go 1.22, Python 3.12, Java 17 / 21, Rust 1.75+, .NET 8, Node 20+, Bun 1.x). The ATS doesn't always parse versions, but the recruiter and the hiring manager both read them — and missing a current version on a 2026-era resume reads as out-of-date craft.
Tier 1 — Database and Storage Keywords
Database fluency is the second hardest screen on BE resumes — recruiters at modern tech companies expect specific named systems, not the generic word "SQL" [1][3][4][5].
| System | What it signals on a BE resume | Typical resume phrasing |
|---|---|---|
| PostgreSQL (Postgres, PG) | OLTP fluency, relational modeling depth | "Schema design and query optimization on Postgres 15+; partitioning, materialized views, JSONB" |
| MySQL | Established-stack fluency, replication understanding | "MySQL 8 with semi-sync replication; index design and slow-query review" |
| Redis | Cache + ephemeral data + rate-limiting + queueing | "Redis for caching, rate-limiting, and idempotency keys; Redis Streams for at-least-once event handoff" |
| Cassandra (or ScyllaDB) | Wide-column, hyper-scale write throughput | "Cassandra clusters tuned for write-heavy workloads; partition-key design and consistency-level tuning" |
| DynamoDB | AWS-native KV with provisioned throughput | "DynamoDB single-table design; GSI patterns; on-demand vs. provisioned capacity trade-offs" |
| MongoDB | Document-store fluency | "MongoDB 7+ replicated cluster; aggregation pipeline ownership; index review and shard-key choice" |
| Snowflake / BigQuery / Redshift | Analytical / OLAP fluency | "Snowflake schema ownership for analytical reporting; BigQuery partitioning + clustering" |
| Kafka (or Kinesis, Pulsar, RabbitMQ) | Event streaming, log-based architecture | "Kafka topics with consumer-group ordering guarantees; exactly-once via transactional producers" |
| S3 (or GCS, R2) | Blob / object storage | "S3 lifecycle policies; multipart uploads; signed-URL patterns" |
| Elasticsearch / OpenSearch | Search + log indexing | "OpenSearch cluster ownership for product search; analyzers and query-time relevance tuning" |
The Tier-1 rule: name the actual system you've worked on in production, not the family. "Postgres" is a stronger keyword than "relational database." "Cassandra" is a stronger keyword than "NoSQL." Generic family names ("SQL," "NoSQL") are Tier-3 fillers that should appear at most once.
Tier 1 — Distributed-Systems Vocabulary
This is the layer where senior-BE resumes separate from mid-level ones [5][8][9]. The keyword surface is small but high-signal.
Replication — Pattern: "leader-follower replication on Postgres for read-scaling; managed failover during the 2024 outage." Replication is the entry-level distributed-systems keyword.
Sharding / partitioning — Pattern: "horizontal sharding on user_id; rebalancing strategy and resharding playbook ownership." Sharding signals OLTP scale work; partitioning signals OLAP or wide-column.
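A "sharding on user_id" claim is easy to probe in a hiring-manager screen, so it helps to have the mechanics crisp. A minimal stdlib-Python sketch of deterministic shard routing — the 16-shard layout and key format here are hypothetical, not from any particular company's stack:

```python
import hashlib

NUM_SHARDS = 16  # hypothetical shard count for illustration


def shard_for(user_id: str, num_shards: int = NUM_SHARDS) -> int:
    """Route a user_id to a shard deterministically.

    Uses a stable hash (SHA-256) rather than Python's built-in hash(),
    which is salted per-process and would break routing across restarts.
    """
    digest = hashlib.sha256(user_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards
```

The modulo step is also why resharding playbooks matter: changing `num_shards` remaps almost every key, which is exactly what consistent hashing or directory-based schemes are designed to avoid.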
Consensus / Raft / Paxos — Senior+ signal. Pattern: "operated etcd cluster (Raft) for service-discovery metadata; understood consensus quorum and leader-election semantics." Cite Raft or Paxos only when you've actually worked with the system.
CAP theorem — Tier-1 vocabulary [5]. Don't pad; reference once where you made an actual CP-vs-AP trade-off ("chose CP semantics for the transaction-state service to keep correctness over availability during partitions").
Idempotency — Stripe-cluster Tier-1 [8]. Pattern: "idempotency-key handling for the payments service; built the dedup table and TTL strategy that kept duplicate-charge incidents at zero across 2024."
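The dedup-table-plus-TTL mechanic behind that bullet is worth being able to whiteboard. A minimal in-memory sketch in stdlib Python — a production version would back the store with Redis or a Postgres table, as the bullet phrasing describes, but the key/TTL mechanics are the same:

```python
import time


class IdempotencyStore:
    """In-memory dedup table keyed by idempotency key, with TTL expiry.

    Illustrative only: a real service would persist this in Redis or
    Postgres so duplicates are caught across process restarts.
    """

    def __init__(self, ttl_seconds: float = 86400, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock
        # key -> (expiry timestamp, cached response)
        self._seen: dict[str, tuple[float, object]] = {}

    def execute(self, key: str, handler):
        now = self._clock()
        entry = self._seen.get(key)
        if entry and entry[0] > now:
            return entry[1]  # duplicate request: replay the cached response
        response = handler()  # first time (or expired): run the real work
        self._seen[key] = (now + self._ttl, response)
        return response
```

Calling `execute` twice with the same key runs the handler once and replays the stored response — which is what keeps a retried payment request from charging twice.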
Exactly-once / at-least-once / at-most-once — Stream-processing vocabulary. Pattern: "exactly-once delivery on the billing-event pipeline using Kafka transactional producers and idempotent consumers."
Circuit breaker / bulkhead / backpressure — Resilience pattern keywords. Pattern: "circuit breaker around the upstream auth service with 50ms timeout and 10s open-state half-trip" or "applied bulkhead isolation for the rate-limited 3rd-party API."
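Circuit breaker is a keyword hiring managers probe, so the state machine (closed → open → half-open) should be crisp. A minimal sketch in stdlib Python; the threshold and timeout values are illustrative, not a recommendation:

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: open after N consecutive failures,
    fail fast while open, allow one half-open trial after a cooldown,
    and close again on success."""

    def __init__(self, failure_threshold=5, reset_timeout=10.0, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self._clock = clock
        self._failures = 0
        self._opened_at = None  # None means the circuit is closed

    def call(self, fn):
        if self._opened_at is not None:
            if self._clock() - self._opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # cooldown elapsed: half-open, let one trial request through
        try:
            result = fn()
        except Exception:
            self._failures += 1
            if self._failures >= self.failure_threshold:
                self._opened_at = self._clock()  # trip (or re-trip) the breaker
            raise
        self._failures = 0
        self._opened_at = None  # success closes the circuit
        return result
```

The point of failing fast while open is the same one the bullhead/backpressure keywords make: stop spending threads and timeouts on an upstream that is already down.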
Eventual consistency — Distributed-data Tier-1. Pattern: "eventual consistency model for the read-side projections; reconciled with the write-side via CDC."
Saga / two-phase commit / outbox pattern — Distributed-transaction vocabulary. The outbox pattern in particular is a Tier-1 senior-BE keyword in 2026 because it's the canonical way to publish events atomically with the database write that produced them.
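The outbox mechanic is simple enough to sketch, and being able to do so in an interview is strong signal. A stdlib sketch using sqlite3 as a stand-in for Postgres — the table names and the relay loop are illustrative:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id TEXT PRIMARY KEY, amount_cents INTEGER);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         topic TEXT, payload TEXT, published INTEGER DEFAULT 0);
""")


def create_order(order_id: str, amount_cents: int) -> None:
    """Write the business row and its event in ONE transaction, so the
    event exists if and only if the order commit succeeded."""
    with conn:  # sqlite connection as context manager = atomic transaction
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount_cents))
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("orders.created",
             json.dumps({"order_id": order_id, "amount_cents": amount_cents})),
        )


def poll_outbox(limit: int = 100):
    """The relay: read unpublished events in order, hand them to the
    broker, then mark them published."""
    rows = conn.execute(
        "SELECT id, topic, payload FROM outbox WHERE published = 0 "
        "ORDER BY id LIMIT ?", (limit,),
    ).fetchall()
    for row_id, _topic, _payload in rows:
        # a real relay would produce to Kafka here and mark on broker ack
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return rows
```

Because the relay can crash between producing and marking, the downstream gets at-least-once delivery — which is why outbox bullets pair naturally with idempotent-consumer bullets.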
CDC (change data capture) — Tier-1 streaming. Pattern: "Postgres logical replication into Debezium / Kafka for downstream search-index and analytics consumers."
Tier 1 — API and Protocol Design
API surface is the daily-bread keyword cluster for BE [3][4][8].
REST — Baseline. Pattern: "REST APIs with OpenAPI 3.1 specs; HATEOAS-aware resource modeling for the v3 public API." Always pair with a specific number ("17 endpoints," "3 resource families").
gRPC / protobuf — Senior signal at infra-heavy companies. Pattern: "gRPC services with protobuf schemas; bidirectional streaming for the live-orders feed."
GraphQL — Common at consumer / API-product companies. Pattern: "GraphQL gateway with Apollo Federation; resolver-level dataloader batching to mitigate N+1." Only cite if you've actually owned schema design or resolvers, not just consumed.
OpenAPI / Swagger — Spec-driven keyword. Pattern: "OpenAPI 3.1 spec ownership; generated client SDKs for 4 languages from the spec."
Webhooks — Eventing Tier-1 [8]. Pattern: "outbound webhook delivery service with retry, deduplication, and HMAC-SHA256 signing; sustained five-9s delivery across 2024."
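HMAC-SHA256 webhook signing is a two-function mechanic, and a candidate citing it should be able to reproduce both halves. A stdlib-Python sketch; the secret and payload values are placeholders:

```python
import hashlib
import hmac


def sign_payload(secret: bytes, payload: bytes) -> str:
    """Producer side: HMAC-SHA256 over the raw request body, hex-encoded
    and sent in a signature header."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()


def verify_signature(secret: bytes, payload: bytes, received_sig: str) -> bool:
    """Consumer side: recompute over the raw body and compare in constant
    time (compare_digest defeats timing attacks on the comparison)."""
    expected = sign_payload(secret, payload)
    return hmac.compare_digest(expected, received_sig)
```

The detail interviewers listen for: verify over the raw bytes as received, before any JSON parsing, because re-serialized JSON is not byte-stable.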
Pagination, rate-limiting, retries — API-quality keywords. Pattern: "cursor-based pagination, token-bucket rate-limiting, and exponential-backoff retry with jitter on the integrations API."
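Of those three, "exponential-backoff retry with jitter" is the one most often probed for mechanics. A sketch of the "full jitter" variant in stdlib Python — the base, cap, and attempt counts are illustrative:

```python
import random


def backoff_delays(attempts: int, base: float = 0.1, cap: float = 5.0,
                   rng=random.random) -> list[float]:
    """'Full jitter' exponential backoff: each delay is drawn uniformly
    from [0, min(cap, base * 2**attempt)].

    rng is injectable so the schedule is testable; callers would sleep
    for each returned delay between retries.
    """
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng() * ceiling)
    return delays
```

Drawing uniformly below the exponential ceiling is the jitter part: simultaneous failures de-synchronize their retries instead of stampeding the recovering upstream in lockstep.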
API versioning — Senior pattern. Pattern: "v2 → v3 migration with header-based version negotiation and 18-month deprecation window."
HTTP/2, HTTP/3, gRPC streaming, Server-Sent Events (SSE), WebSockets — Protocol-depth keywords. Cite where you've used them in production.
Tier 1 — Cloud and Infrastructure
BE roles in 2026 expect cloud fluency on at least one major provider [9].
| Cloud | Tier-1 services on a BE resume |
|---|---|
| AWS | EC2, ECS, EKS, Lambda, RDS, DynamoDB, S3, SQS, SNS, Kinesis, CloudWatch, IAM, VPC |
| GCP | GKE, Cloud Run, Cloud Functions, Cloud SQL, BigQuery, Pub/Sub, GCS, Cloud Logging, IAM |
| Cloudflare | Workers, Durable Objects, R2, D1, Queues, KV, Cache API, Workers AI |
| Azure | AKS, App Service, Functions, Cosmos DB, Service Bus, Blob Storage, Application Insights |
Containers and orchestration — Docker, Kubernetes, Helm, Argo CD, Istio (selected). Pattern: "Kubernetes (EKS) cluster ownership with Helm-managed service deployments and Argo Rollouts for canary."
Infrastructure-as-code — Terraform, Pulumi, AWS CDK. Pattern: "Terraform-managed VPC, RDS, and EKS resources; reviewed plans through Atlantis."
CI/CD — GitHub Actions, GitLab CI, Buildkite, CircleCI, Jenkins. Pattern: "GitHub Actions with reusable workflows; matrix builds across Go 1.22 + Python 3.12; required-status-check enforcement on main."
Tier 1 — Observability and Reliability
Observability is a 2026 expectation at any company past Series B [9].
OpenTelemetry (OTel) — Tier-1 standard. Pattern: "OpenTelemetry instrumentation across services; OTLP export to Honeycomb / Datadog." OpenTelemetry is the canonical multi-vendor standard recruiters expect senior BE candidates to know.
Prometheus / Grafana — Open-source metrics stack. Pattern: "Prometheus metrics with Grafana dashboards; histograms with native exponential buckets for latency."
Datadog — Commercial APM Tier-1. Pattern: "Datadog APM tracing; SLO-tracking with Datadog SLO objects."
Honeycomb / Jaeger — Distributed-tracing tools. Honeycomb in particular signals modern observability practice.
SLA / SLO / SLI — Reliability-vocabulary Tier-1 [9]. Pattern: "Owned the 99.95% availability SLO for the orders API; SLI defined as the share of successful 2xx responses, with burn-rate alerting." Cite the SRE Workbook only if you've actually applied the framework.
Error budget, burn rate — SRE-cluster Tier-1 [9]. Pattern: "Error-budget policy enforced through CI gating during high burn-rate windows."
Latency distribution (p50 / p95 / p99 / p999) — Always cite percentiles, not averages, on BE resumes. Pattern: "drove p99 latency on the catalog API from 480ms to 110ms over 1 quarter through query-plan rewrites and a Redis read-through cache."
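Why percentiles rather than averages is itself a common interview question, and a tail-heavy sample makes the difference obvious. A stdlib sketch using `statistics.quantiles`; the sample values are fabricated for illustration:

```python
import statistics


def p_latency(samples: list[float], pct: int) -> float:
    """Return the pct-th percentile of a latency sample.

    statistics.quantiles with n=100 yields 99 cut points; index pct-1
    is the pct-th percentile (default 'exclusive' method, interpolated).
    """
    return statistics.quantiles(samples, n=100)[pct - 1]
```

With 99 requests at 10ms and one at 1000ms, the mean reads about 20ms while p99 reads about 990ms — the average hides exactly the requests users notice, which is why BE bullets cite p50/p95/p99.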
Throughput (QPS / TPS / RPS) — Scale Tier-1. Always cite the unit and the workload context. Pattern: "owned the orders ingest pipeline at sustained 30k RPS during peak with sub-100ms p99."
Tier 1 — Security and Auth
Security context is mandatory on BE resumes for any company in finance, payments, healthcare, or platform.
OAuth 2.0 / OIDC — Auth Tier-1. Pattern: "OAuth 2.0 authorization-code-with-PKCE flow ownership; OIDC ID-token verification."
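The PKCE half of that flow is compact enough to reproduce from memory: per RFC 7636, the S256 code_challenge is the unpadded base64url encoding of the SHA-256 of the code_verifier. A stdlib-Python sketch:

```python
import base64
import hashlib
import secrets


def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE (code_verifier, code_challenge) pair per RFC 7636.

    The client sends the challenge on the authorize redirect and the
    verifier on the token exchange; the server re-derives and compares.
    """
    # 32 random bytes -> 43-char base64url verifier (RFC range is 43-128)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The point of the scheme: an attacker who intercepts the authorization code cannot redeem it without the verifier, which never left the client.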
JWT — Token-format Tier-1. Pattern: "JWT validation with RS256; rotated public keys via JWKS endpoint."
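The JWT validation mechanic is worth being able to sketch. The version below uses HS256 so it stays stdlib-only; the RS256 flow in the bullet has the same shape, with the HMAC check swapped for an RSA signature verified against a public key fetched from the JWKS endpoint:

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _unb64url(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def sign_jwt(claims: dict, secret: bytes) -> str:
    """Build an HS256 JWT: base64url(header).base64url(payload).signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_jwt(token: str, secret: bytes) -> dict:
    """Check the signature in constant time BEFORE trusting the claims."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig):
        raise ValueError("bad signature")
    return json.loads(_unb64url(payload))
```

A production verifier also pins the expected `alg` and checks `exp`, `iss`, and `aud` claims; the sketch shows only the signature step the keyword refers to.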
mTLS — Service-to-service auth Tier-1. Pattern: "mTLS between internal services via the service mesh; SPIFFE identity workload attestation."
SAML / SSO — Enterprise auth. Cite where you've integrated with enterprise IdPs.
Signed webhooks (HMAC-SHA256) — Stripe-style integration security [8].
Encryption at rest / in transit (TLS 1.3, AES-256) — Compliance baseline.
Secrets management — AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager, 1Password Connect.
OWASP Top 10, SQL injection, XSS, CSRF, SSRF — Security-awareness keywords. Cite where you've actually mitigated, not just heard of.
Tier 2 — Language Runtime and Library Context
Tier-2 keywords appear once or twice in the resume, embedded in experience bullets — not as a 30-item Skills dump [1][2].
Frameworks: FastAPI, Django, Flask, Spring Boot, Quarkus, Micronaut, Rails, Sinatra, Phoenix (Elixir), Gin (Go), Echo (Go), Fiber (Go), Axum (Rust), Actix (Rust), ASP.NET Core, NestJS, Fastify, tRPC.
ORMs and query layers: SQLAlchemy 2.0+, Prisma, Drizzle, GORM, Hibernate, ActiveRecord, Ecto, Diesel.
Migration tools: Alembic, Flyway, Liquibase, Atlas, Goose.
Test frameworks: pytest, Go's `testing` + testify, JUnit 5, RSpec, Jest (Node), Vitest, ExUnit (Elixir), property-based testing (Hypothesis, jqwik, QuickCheck).
Build tools: Make, Bazel, Gradle, Maven, Cargo, uv (Python), Bun, esbuild, Mix (Elixir).
Counter-List — Keywords That Backfire on BE Resumes
This is the part most resume advice misses. BE resumes can be sunk by FE / generalist keywords that read as not-actually-backend signal.
jQuery, Bootstrap, Tailwind, CSS-in-JS — FE-only keywords on a BE resume read as "this candidate hasn't done backend in a while." Skip entirely on BE resumes targeting senior+ roles. The exception: full-stack roles where the JD explicitly lists FE keywords; for pure BE, leave them off.
"Built websites" / "Created landing pages" — Generalist verbs. BE resumes lead with services, APIs, pipelines, and systems — not "websites." Replace with "built and operated [service]" framings.
WordPress / PHP without a modern framework / "WordPress developer" — On a senior-BE resume targeting modern tech companies, WordPress as a primary system reads as legacy / agency work, not platform engineering. The exception: WordPress engineering roles at companies like Automattic — there it's the canonical Tier-1 signal. For everyone else, it's anti-signal at senior+.
"Familiar with" / "Exposure to" / "Knowledge of" — Hedge phrasing. ATS scoring counts these as match, but recruiters discount them. Replace with concrete bullets: "Used Postgres in production for the catalog service" beats "Familiar with PostgreSQL."
No testing / no test framework anywhere on the resume — A 2026 BE resume with zero testing keywords reads as junior or as legacy-shop. Cite at least one test framework and one testing pattern (unit, integration, contract, property-based).
Resume tools-list dump (50+ items in a Skills section) — Triggers spam-detection on Greenhouse and Ashby [1][2]. BE Skills sections work better with 4–5 categorized groups (Languages, Datastores, Distributed Systems, Cloud + Infra, Observability) of 4–8 items each.
"Full-stack developer" as primary identity on a BE-targeted resume — Generic FS framing dilutes BE signal. If the JD says Backend, lead with "Backend Engineer" in the summary. Full-stack experience belongs as one phrase ("with selective FE contributions on the admin panel") in a single bullet.
"10+ years of programming experience" — Vague. Tighten to "10+ years building production backend systems" with two or three named systems immediately after.
Made-up scale numbers ("billions of requests") — Cite numbers you can defend in a hiring-manager interview. The editorial bar: empty space beats fabrication. An unsupported "billions" claim that a hiring manager probes and you can't substantiate damages the credibility of the entire resume.
Density and Placement Rules for BE
- Professional Summary: Pack 6–8 Tier-1 BE keywords. Example: "Senior Backend Engineer with 9 years on distributed systems — Go and Python services on AWS / EKS, Postgres + Redis + Kafka stacks, 30k RPS sustained at p99 sub-50ms, OpenTelemetry-instrumented with Datadog SLO ownership; idempotency, exactly-once, and circuit-breaker patterns in production."
- Skills section: 4–5 categories, 4–8 items each. Languages (Go 1.22, Python 3.12, Rust 1.75 — selected), Datastores (PostgreSQL 15, Redis 7, Kafka 3.6, S3, Elasticsearch), Distributed Systems (sharding, consensus / Raft via etcd, idempotency, exactly-once, CDC, outbox), Cloud + Infra (AWS — EC2 / ECS / EKS / RDS / S3 / SQS, Terraform, GitHub Actions), Observability (OpenTelemetry, Prometheus + Grafana, Datadog APM, SLO / error-budget). 20–30 items total, never 50+.
- Experience bullets: Each recent bullet should pair an action verb with a system name and a measured outcome. Aim for 2 Tier-1 BE keywords per bullet, embedded naturally. Always cite percentile latency (not averages) and throughput unit (RPS, QPS, TPS).
- Don't: Mix FE-heavy and BE-heavy bullets in the same role section. Pick one framing per role and commit.
- GitHub link required. A senior-BE resume in 2026 without a GitHub link reads as either "private-only work the recruiter can't verify" or "doesn't engage with the open-source ecosystem." Even if your contributions are sparse, the link belongs in the header.
Density rule of thumb for BE: Tier-1 language keywords appear 3–5 times across the resume (primary language often, secondary once or twice). Tier-1 datastore keywords appear 4–6 times (Postgres often, others where used). Tier-1 distributed-systems keywords appear 4–8 times across recent bullets. Tier-1 observability keywords appear 2–4 times. Tier-1 cloud keywords appear 4–6 times.
Anti-Patterns That Fail BE Screens
- The "FE-with-some-API-work" resume: Bullets are 70% React / Next.js, 30% "wrote some API endpoints." Reads as FE / full-stack, not BE. Recruiters configured to filter for "Postgres within last 2 years" or "Go within last 2 years" auto-reject these.
- No scale numbers: "Built APIs for the team." How many endpoints? What throughput? What latency? Vague phrasing is a screen failure.
- Generic database words only: "Worked with SQL and NoSQL databases." Which ones? "Postgres + Redis + DynamoDB" beats "SQL and NoSQL" on every modern ATS engine.
- Microservice mentions without architecture clarity: "Built microservices for the platform" without naming counts, boundaries, or communication patterns reads as buzzword. Pattern fix: "Owned 4 microservices in the orders domain — communicated via gRPC + Kafka, sized for sub-25ms p99 latency at 8k RPS sustained."
- No testing / no observability: A 2026 BE resume with zero tokens for testing and zero tokens for observability reads as legacy or as junior. Cite at least one of each per recent role.
- Tools-stack Skills dump: A 50-item Skills list across mixed FE / BE / DevOps / data triggers spam-detection. Use the 4–5 category structure above.
- Title inflation: Calling a 1-year mid-BE role "Senior Backend Engineer" when the company didn't actually use the title. Hiring managers cross-check, and the gap shows fast.
- "Familiar with" hedging: First-pass ATS scoring counts hedge phrases, but recruiters discount them. Replace with concrete production bullets.
Worked Examples — BE Keywords in Experience Bullets
Example 1 — Distributed-systems scope
Before (C-grade): Worked on backend services for the platform.
After (A-grade): Owned the consensus-replicated metadata service (etcd / Raft) backing service discovery for 140 microservices — handled 30k RPS sustained at p99 25ms, ran the rebalancing playbook during the 2024 region failover, and partnered with platform engineering on the migration from a single-region cluster to two-region active-passive.
Keywords hit: consensus, Raft, etcd, microservices, RPS, p99, region failover, partnered, platform engineering.
Example 2 — API design and reliability
Before: Built REST APIs for the orders system.
After: Designed and shipped the v3 orders REST API (17 endpoints, OpenAPI 3.1 spec) with cursor-based pagination, idempotency-key handling, exponential-backoff retry, and HMAC-SHA256 webhook signing — drove p99 latency from 480ms to 110ms over Q3 via query-plan rewrites and a Redis read-through cache.
Keywords hit: REST, OpenAPI, cursor pagination, idempotency, retry, HMAC-SHA256, webhook, p99, Redis.
Example 3 — Streaming and CDC
Before: Used Kafka to send events between services.
After: Built the billing-event streaming pipeline on Kafka with transactional producers and idempotent consumers for exactly-once delivery — sourced changes from Postgres logical replication via Debezium (CDC), partitioned topics on tenant_id, and operated sustained 12k events/sec at sub-100ms end-to-end p99 lag.
Keywords hit: Kafka, transactional producers, exactly-once, idempotent, Postgres logical replication, Debezium, CDC, partitioning, p99.
Example 4 — Observability and SLOs
Before: Set up monitoring for the team's services.
After: Instrumented 9 Go services with OpenTelemetry — exported traces and metrics to Datadog APM, defined the 99.95% availability SLO for the catalog API with multi-window multi-burn-rate alerting, and led the post-incident review process that drove SLO-burn incidents from 4 per quarter to 1.
Keywords hit: OpenTelemetry, Datadog APM, traces, metrics, SLO, multi-burn-rate alerting, post-incident review.
Example 5 — Database and query work
Before: Optimized database queries.
After: Owned schema and query design on the orders Postgres cluster (15 with logical replication) — partitioned the orders table by month, introduced a covering index for the hot read path, and rewrote the worst slow query to drop p99 from 1.2s to 70ms.
Keywords hit: Postgres, logical replication, partitioning, covering index, slow query, p99.
Example 6 — Security and auth
Before: Worked on authentication.
After: Built the OAuth 2.0 authorization-code-with-PKCE flow for the public API — JWT validation with RS256 via JWKS rotation, OIDC ID-token verification for partner integrations, and mTLS between internal services through the service mesh.
Keywords hit: OAuth 2.0, PKCE, JWT, RS256, JWKS, OIDC, mTLS, service mesh.
FAQ
How many Tier-1 BE keywords do I need on a senior-BE resume?
The honest target is 25–35 distinct Tier-1 keywords across the five signal classes (language, datastore, distributed-systems, API, observability) — densely embedded in experience bullets, not stuffed into a flat skills list. A resume with fewer than 20 Tier-1 keywords reads as mid-level even if the title is senior. A resume with 40+ Tier-1 keywords stuffed without context reads as keyword-spam. The goal is signal density, not raw count: 6–8 in the summary, 16–20 across the experience bullets, with the skills section as the cleanup category for any not yet surfaced naturally.
Should I list multiple programming languages on a BE resume, or pick one?
List your primary language explicitly (the one you've shipped most production code in over the last 2 years), then 1–2 secondary languages with a "selected" or "production-experience" qualifier. Don't list a 7-language polyglot stack as your primary identity — recruiters at senior-BE roles read deep-language as the more credible signal than broad-language. The exception is staff+ platform / infrastructure roles where a Go + Python + Rust profile reads as appropriate breadth; even there, name which one is primary.
How do I handle a monolith-only resume when applying to microservice companies?
Frame the monolith experience around its boundaries and operational scale, not around the architecture choice. Pattern: "Owned the orders, billing, and integrations bounded contexts inside the Rails monolith — drove the strangler-fig extraction of the integrations context to a standalone Go service over 9 months, sustaining sub-100ms p99 latency through the transition." That bullet reads as senior architectural work even though the starting state was a monolith. Recruiters at microservice-mature companies value the migration narrative more than they discount the monolith origin — but the bullet has to actually show migration, decomposition, or extraction work. A monolith resume that shows zero awareness of decomposition reads as legacy.
Do I need to mention "system design" explicitly on a BE resume?
Mention it once, as part of a specific design ownership claim — not as a standalone skill. Pattern: "Led the system design for the v3 orders API (sequence diagrams, capacity model, consistency-mode trade-offs) before any code was written; aligned 4 partner teams on the contract." The phrase "system design" matters less than the evidence that you led one; ATS scanners count the literal phrase, but hiring managers read the surrounding bullet to verify it was real work. The Hello Interview system-design rubrics codify what hiring managers expect to see [10].
How do I show on-call and production-operations experience on a BE resume?
Cite an on-call rotation by cadence and team size, an SLO ownership claim, and at least one specific incident response or post-incident review you led. Pattern: "On-call rotation lead for the payments-platform team (1 in 6 weeks, 9-engineer rotation); led the post-incident review for the 2024 Q3 cross-region failure that drove the rebalancing playbook redesign." Senior+ BE roles screen heavily for production-operations evidence; an absence of any on-call or incident bullets reads as ivory-tower work.
Should I list LeetCode or system-design prep tools on the resume?
No. The resume is the production-evidence document; interview-prep platforms belong off the resume entirely. The exception is teaching, mentoring, or contributor work on the platforms ("contributed 20+ design solutions to a public system-design repo with 4k stars"), which can appear as a one-line side-project bullet.
How do I handle a pivot from data engineering or platform engineering into backend?
Reframe data / platform work in BE-language vocabulary. Data-engineering ETL work becomes "stream-processing pipeline ownership" with Kafka / Flink keywords; platform-engineering Terraform work becomes "cloud-infrastructure code review and IaC ownership" with Terraform / EKS / VPC keywords. The pivot reads cleanly when the bullets surface the BE-overlap dimensions of the prior role rather than the data-org or platform-org framing. Levels.fyi confirms that data-engineering and platform-engineering compensation bands track BE bands closely at top-tier companies, so the pivot is well-precedented [7].
Do I need an open-source portfolio for BE roles?
Helpful but not mandatory. A clean GitHub with 3–5 production-quality repositories or substantive contributions to a well-known OSS project signals craft and self-direction; an empty GitHub neither helps nor hurts on its own. The trap is a GitHub full of half-finished tutorial projects — those actively damage perceived craft. Prefer "no GitHub link" over "link to 14 abandoned repos." For staff+ roles, OSS contributions to infrastructure projects (Kubernetes, etcd, Postgres extensions, OpenTelemetry SDKs) are differentiating; for mid-and-senior roles, they're a tiebreaker.
References
[1] Greenhouse Software. "Sourcing and Filtering Best Practices — Greenhouse Help Center." https://support.greenhouse.io/hc/en-us/articles/360051506331-Sourcing-best-practices
[2] Ashby HQ. "How Ashby's AI-Powered Sourcing Works." https://www.ashbyhq.com/resources/guides/ai-powered-sourcing
[3] Stripe Engineering Blog. "Online Migrations at Scale." https://stripe.com/blog/online-migrations
[4] AWS Builders' Library. "Caching Challenges and Strategies." https://aws.amazon.com/builders-library/caching-challenges-and-strategies/
[5] Martin Kleppmann. Designing Data-Intensive Applications (O'Reilly, 2017). https://dataintensive.net/
[6] U.S. Bureau of Labor Statistics. "Software Developers, Quality Assurance Analysts, and Testers — Occupational Outlook Handbook." https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm (BLS does not isolate "Backend Engineer" — SOC 15-1252.00 Software Developers is the closest proxy and folds in frontend, backend, mobile, ML, and embedded engineers; median annual wage $133,080 May 2024; 15% projected employment growth 2024–2034.)
[7] Levels.fyi. "Backend Engineer Salary Data." https://www.levels.fyi/t/software-engineer/focus/backend
[8] Stripe Engineering Blog. "Designing Robust and Predictable APIs with Idempotency." https://stripe.com/blog/idempotency
[9] Google. Site Reliability Engineering: How Google Runs Production Systems. https://sre.google/sre-book/table-of-contents/
[10] Hello Interview. "System Design Interview Rubrics." https://www.hellointerview.com/learn/system-design