SRE Engineer ATS Checklist — Pass Every Production Screen (2026)

Updated April 30, 2026

Site Reliability Engineering (SRE) resumes get filtered by the same ATS engines as software engineering resumes — Greenhouse, Lever, Workday, Ashby, SmartRecruiters, iCIMS — but the failure modes are different. Where a backend SWE resume gets rejected for a thin algorithms / language surface, an SRE resume gets rejected for reading like a backend engineer who happens to deploy to production: no on-call evidence, no SLOs cited, no Kubernetes specifics, no IaC ownership, no incident-response leadership [1][2][3]. This 22-item checklist walks every SRE candidate through the pre-submission audit specific to reliability roles at infrastructure-heavy tech companies — format, structure, production-on-call signal, Kubernetes depth, IaC specifics, observability fluency, and verification — and names the SRE-specific failure modes that take down even strong candidates.

Key Takeaways

  • Most modern tech companies route resumes through ATS engines before any human review, and the SRE keyword target is wider than backend SWE — eight signal classes (orchestration, IaC, cloud, observability, CI/CD, programming, reliability practices, networking) all need density [1][2].
  • The single most common SRE resume failure is the "backend-engineer-with-deploys" pattern: 80% application-feature work, 20% infrastructure mention. Recruiter filters set to "production on-call" or "SLO ownership" auto-reject these [3][4].
  • Production on-call evidence is non-negotiable on every recent SRE role. "Worked on the platform team" without naming pager, rotation, or incident response reads as did-not-actually-carry-pager [3][4].
  • Per the Google SRE Book and the SRE Workbook, the canonical SRE vocabulary — SLO, SLI, error budget, toil, blameless postmortem, on-call sustainability, alerting on symptoms — is the keyword surface recruiters scan for; missing this vocabulary entirely is a hard fail at infrastructure-heavy companies [5][6].
  • Kubernetes depth lives in named components (operators, CRDs, admission webhooks, HPA, VPA, network policies, Helm 3, Kustomize, ArgoCD, Cilium), not in the word "Kubernetes" alone — senior SRE screens filter on depth [7].
  • BLS does not have a dedicated SRE occupation code; SOC 15-1244 Network and Computer Systems Administrators ($96,800 median in May 2024) and SOC 15-1252 Software Developers ($133,080 in May 2024) are the closest proxies [8][9]. Both undercount SRE comp at top-tier infrastructure-heavy companies — anchor honest salary expectations on levels.fyi by company and level rather than the BLS proxies [10].
  • The IaC + production-code split matters: SRE resumes that surface only IaC fluency (Terraform, Helm) without systems-programming specificity (Go, Python, Rust with named libraries) read as junior-platform-ops; senior SRE screens want both [3][4].

Stage 1 — Format and File Prep (Items 1–5)

1. Single-column layout, no exceptions.

Greenhouse and Workday inconsistently parse two-column resumes; the parsed-text version recruiters see often appends the right column after the left, scrambling experience bullets [1][2]. SRE resumes are particularly vulnerable because many engineering-resume templates put a sidebar for tools, a skill-level meter, or a grid of cloud-service icons — exactly the column most likely to mis-parse. Use single-column with vertical sections: Header → Summary → Skills → Experience → Education → Optional (Open Source, Conference Talks, Writing). Verify by copy-pasting the rendered resume into a plain-text editor; if the order is wrong there, it's wrong in the ATS.
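The copy-paste check can be partly automated. A minimal sketch (the `EXPECTED_ORDER` list and `section_order_ok` helper are illustrative, not part of any ATS API) that confirms the standard section headers appear in the right order in the pasted plain text:

```python
# Standard section headers in the order this checklist recommends.
# Adjust to match your own resume's headers (illustrative list).
EXPECTED_ORDER = ["Summary", "Skills", "Experience", "Education"]

def section_order_ok(plain_text: str) -> bool:
    """Return True if every expected header appears, in order, in the
    plain-text paste of the rendered resume."""
    positions = []
    for header in EXPECTED_ORDER:
        idx = plain_text.find(header)
        if idx == -1:
            return False  # a header is missing from the parsed text
        positions.append(idx)
    # Out-of-order positions mean the parser scrambled the columns.
    return positions == sorted(positions)
```

If this returns False on the paste, the ATS-side parse is almost certainly scrambled too.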

2. Submit as .docx or PDF — both work, with caveats.

.docx is the safer default across Workday and Taleo. PDF works on Greenhouse, Lever, and Ashby with high parse fidelity. The trap most relevant for SREs: PDFs exported from LaTeX or design tools sometimes embed text as glyphs or vectorized paths rather than parseable characters, breaking ATS extraction [2]. If you want LaTeX (a common SRE choice), use pdftotext or pdfinfo to verify the output extracts cleanly. Build the resume in Word, Google Docs, Markdown-to-PDF (with a known-good engine), or LaTeX with the verification step — not in Figma.
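A rough heuristic for the extraction check, to feed the output of `pdftotext resume.pdf -` through. This is a sketch under stated assumptions: the `looks_garbled` name and the 0.9 printable-ratio threshold are illustrative choices, not a standard:

```python
def looks_garbled(extracted: str) -> bool:
    """Heuristic flag for PDF text that was embedded as glyphs or
    vector paths rather than parseable characters."""
    stripped = extracted.strip()
    if not stripped:
        return True  # empty extraction: no real text layer in the PDF
    printable = sum(ch.isprintable() or ch.isspace() for ch in stripped)
    # A healthy extraction is overwhelmingly printable characters;
    # glyph-dump extractions come out as control bytes or nothing.
    return printable / len(stripped) < 0.9
```

If this flags your LaTeX-built PDF, switch the font packages or the PDF engine before submitting.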

3. Keep file size under 2 MB.

SRE resumes typically run 1–2 pages and don't include images, so this is rarely an issue. Watch for embedded company logos (skip these — not standard in US tech), headshots (skip), or system-architecture diagrams (those belong in your portfolio site or a public talk, not the resume). Pure-text SRE resumes should be 50–200 KB. Anything over 2 MB has embedded media that should come out and live in a portfolio link instead.

4. Use system fonts only — Calibri, Arial, Helvetica, Georgia, or Times New Roman.

Custom fonts get substituted during ATS parsing, sometimes shifting line breaks and section boundaries unpredictably [2]. SRE resumes don't need typographic personality on the document — the production-credibility signal is in the words. If you build with LaTeX, use the default \usepackage{lmodern} or \usepackage{newtxtext} setup; both extract cleanly in pdftotext.
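A minimal preamble along these lines (a sketch assuming pdflatex; adapt the margins and class to your own template):

```latex
% Minimal resume preamble that extracts cleanly with pdftotext.
% T1 encoding + Latin Modern keeps the PDF text layer as real characters.
\documentclass[11pt]{article}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[margin=1in]{geometry}
\begin{document}
% ... resume content ...
\end{document}
```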

5. Avoid headers/footers, text boxes, columns, and tables.

Workday and older Greenhouse parsers can ignore document headers/footers entirely [1][2]. SRE resumes sometimes use tables to lay out a tooling matrix ("AWS | GCP | Azure" rows, "Compute | Storage | Network" columns) — this fails ATS parsing universally because the table tokenizer scrambles the cells. The fix: write the same scope summary as a single line of text in the role's first bullet, e.g., "Senior SRE, Platform-Data Org — operated 14-cluster Kubernetes fleet on AWS EKS across 3 regions, primary on-call 1-week rotation across 6-engineer pool, owned 22 SLOs and the error-budget policy."

Stage 2 — Structure and Section Order (Items 6–10)

6. Standard section headers — exactly these names.

Use: "Summary" (or "Professional Summary"), "Skills" (or "Core Competencies"), "Experience" (or "Professional Experience"), "Education," "Open Source" or "Talks" (optional, only if substantial). ATS parsers — especially Taleo and older Workday — pattern-match on exact section names [1][2]. Creative section names ("Ops I've Owned," "Production Stories," "Reliability Receipts") cause the parser to skip those sections. SRE resumes that lean creative here lose ATS points without any compensating gain. Save the creative naming for your engineering blog.

7. Header line: name, location, contact, LinkedIn, GitHub — and that's it.

Format: "Name | City, ST | email | (xxx) xxx-xxxx | linkedin.com/in/firstname-lastname | github.com/handle." GitHub is the SRE-specific addition (more relevant than for design or PM roles) because hiring managers verify open-source contributions, infrastructure code, and runbook quality through the GitHub profile. Skip personal portfolio sites unless they host serious technical writing or a published infrastructure project; skip headshot, address, and "Open to Work" banner — these read as junior-resume conventions.

8. Lead with a 3–4 line Professional Summary that names eight-class density.

The summary gets the highest scan-weight per word [1][2]. For SRE, pack 8–10 Tier-1 keywords across the eight signal classes. Example: "Senior SRE with 7 years operating production infrastructure — owned multi-region Kubernetes fleet (EKS, 14 clusters), Terraform monorepo (180+ modules), and SLO-driven on-call (1-week rotation, 22 SLOs) for the platform-data team. Strengths: Go and Python, AWS (EKS, RDS Aurora, ALB, S3, IAM), observability (Prometheus, Grafana, Datadog, Honeycomb, OpenTelemetry), incident command (14 Sev-1s as IC), and chaos engineering. Recent: drove p99 latency reduction from 850ms to 220ms across the user-services tier in 3 quarters." That's coverage across orchestration, IaC, cloud, observability, programming, reliability practices, and a measurable outcome — in 4 lines.

9. Skills section organized for SRE, not for backend SWE.

SRE skills sections should be category-grouped, not a flat 30-item dump — that's a Greenhouse / Ashby spam-detection trigger [1][11]. Recommended grouping (6 categories, 24–36 items): Orchestration (Kubernetes, Helm 3, Kustomize, ArgoCD, Istio / Cilium, containerd), IaC (Terraform, Pulumi, Ansible, OPA, Packer, CDK), Cloud (AWS EKS / RDS / ALB / S3 / IAM; GCP GKE / Cloud SQL; Cloudflare), Observability (Prometheus, Grafana, Datadog, Honeycomb, OpenTelemetry, Loki / Tempo), CI/CD (GitHub Actions, ArgoCD, Cosign, SLSA, Buildkit), Programming (Go, Python, Rust, Bash, SQL). Skip the per-tool proficiency meters — they read junior and waste the visual real estate.

10. Experience section: reverse-chronological, 5–7 bullets per recent role.

Reverse-chronological is the ATS expectation. For senior SREs and staff-level roles, 5–7 bullets at the most recent role, 4–5 at recent prior roles, 3 at older roles. SRE bullets carry more signal density than most backend roles because each bullet should reference an action verb, a quantified scope (cluster count, service count, fleet size, SLO count, latency or availability number), and the named tooling that did the work. Don't skimp on bullet count for the most recent role — recency-weighted scoring on Lever and Greenhouse pushes the recent role to the top of recruiter screens [1][12].

Stage 3 — SRE-Specific Content Audit (Items 11–16)

11. Every recent role names production on-call status.

This is the highest-leverage check on the entire SRE checklist. For each SRE-or-SRE-adjacent role, the bullet cluster must include: rotation structure (primary / secondary / tertiary), team-pool size, rotation cadence (1-week, 2-week), tenure carrying pager, and at least one named outcome (incident commander on Sev-1s, runbook authorship, sustainable-on-call policy work). Pattern: "Senior SRE, Platform-Data Team — primary on-call 1-week rotation in 6-engineer pool for 24 months; drove 14 Sev-1 incidents as incident commander; authored 22 blameless postmortems." Vague phrasing ("supported the on-call rotation") fails the screen because the recruiter is calibrating commitment level on the structure, not on the verb [3][4].

12. Every recent role surfaces SLO / error-budget ownership.

SLO fluency is the rarest and most-scanned senior SRE signal [3][5][6]. For each recent role, name an SLO portfolio you owned, an error-budget policy you participated in, or an SLI definition you authored. Pattern: "Defined and operated 22 SLOs across the platform-data API and user-services tier; drove the error-budget policy with the product team — halted feature deploys for 9 days in Q3 after budget exhaustion and shipped a connection-pool replacement that brought availability back to SLO inside 4 weeks." If your past role didn't formally use SLOs, surface the equivalent reliability accountability honestly: "Owned the platform-data team's reliability dashboard with 22 alert thresholds tied to user-impact metrics" — names the work without falsely claiming the canonical vocabulary.

13. Kubernetes claims show specificity, not just the name.

"Kubernetes" alone is too generic for senior SRE roles [7]. The depth signal lives in named components: operators, CRDs, admission webhooks, HPA, VPA, network policies, PodDisruptionBudgets, StatefulSets, DaemonSets, Service / Ingress / Gateway API, Helm 3, Kustomize, ArgoCD, Cilium / Istio. Pattern: "Operated 14-cluster Kubernetes fleet across 3 regions with Helm 3 charts, ArgoCD GitOps, and Cilium-based network policies; wrote 4 custom operators using kubebuilder for the platform-data team's reconciliation loops." That bullet is unambiguously senior. "Used Kubernetes for deployments" is unambiguously junior.

14. IaC claims name specific tools and ownership scope.

"Terraform" alone is too generic. The specificity that passes senior screens is the monorepo / multi-account / module count framing: "Owned the Terraform monorepo (180+ modules, 4 AWS accounts) with Terragrunt for environment-stamp dedup and Sentinel policy-as-code gates on the production workspace." Pulumi, CloudFormation / CDK, Ansible, Packer, OPA / Conftest mentions all benefit from the same pattern: name the tool, name the ownership scope, name the framework or pattern adopted. Vague IaC claims read as junior-platform-ops.

15. Cloud claims name 4–6 services per platform, not just the platform.

"AWS, GCP, Azure" without service specificity reads as resume-stuffing [2]. The pattern that passes is platform + 4–6 named services per platform. AWS depth: EKS, EC2, ECS / Fargate, ALB / NLB / API Gateway, CloudFront, Route 53, S3, RDS / Aurora, DynamoDB, Lambda, IAM (and IAM Identity Center), VPC / Transit Gateway, CloudWatch, KMS, Secrets Manager. GCP depth: GKE, Compute Engine, Cloud Load Balancing, Cloud DNS, Cloud Storage, Cloud SQL, Spanner, BigQuery, Pub/Sub, IAM, Cloud Armor. Azure depth: AKS, App Service, Front Door, Cosmos DB, Service Bus, Entra ID, Application Gateway. Pick depth on your primary cloud and credible mention on a secondary cloud rather than shallow surface across all three.

16. Observability claims name specific stacks, not just "monitoring."

"Set up monitoring" is too generic [3][13][14]. Name the stack: Prometheus + Grafana + Alertmanager + Thanos / Cortex / Mimir + Loki + Tempo, or Datadog (APM, logs, infrastructure, synthetics), or Honeycomb (BubbleUp, refinery, high-cardinality), or OpenTelemetry Collector with OTLP export. Pattern: "Owned the Prometheus federation across 14 clusters with Thanos for long-term storage and global query, plus 60+ Grafana dashboards used by the platform team for daily operations; migrated 22 services from vendor-specific tracing SDKs to OpenTelemetry with OTLP export to Honeycomb." That's senior observability signal. "Configured alerts" is junior signal regardless of how good your alerts are.

Stage 4 — SRE Keywords and Mechanics (Items 17–19)

17. Mirror the JD's exact phrasing — title, tooling, and reliability vocabulary.

If the JD says "Senior Site Reliability Engineer," use that exact title in your summary even if your formal title is different. If the JD says "Production Engineer" or "Reliability Engineer," use the JD's preferred form to pass strict-match Workday and Taleo screens [2][12]. If the JD names specific frameworks (SLO, error budget, blameless postmortem, chaos engineering, capacity planning), mirror them. The fix: read the JD twice, list the 14–20 highest-frequency tooling and reliability terms, and verify each appears in your resume in the canonical form. Tools like Jobscan or Resume Worded automate this comparison [15].
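The term audit above is scriptable. A sketch (the `missing_jd_terms` helper is hypothetical; you supply the 14–20 highest-frequency JD terms yourself) that reports which canonical terms never appear in the resume:

```python
import re

def missing_jd_terms(jd_terms: list[str], resume_text: str) -> list[str]:
    """Return the JD terms that never appear in the resume in canonical
    form. Matching is case-insensitive on whole-word boundaries, using
    lookarounds so terms with punctuation (e.g. 'CI/CD') still match."""
    lower = resume_text.lower()
    missing = []
    for term in jd_terms:
        pattern = r"(?<!\w)" + re.escape(term.lower()) + r"(?!\w)"
        if not re.search(pattern, lower):
            missing.append(term)
    return missing
```

Anything this returns is a term to work into the summary, skills list, or an experience bullet — honestly, and only where it reflects real work.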

18. Don't claim production scope you can't defend in a 60-minute interview.

The ATS rewards scope claims; the SRE interview punishes false claims hard. SRE hiring loops include questions about specific Sev-1 incidents (what was the failure mode, what was the mitigation, what was the long-term fix), SLO definitions (what was the SLI, why that threshold, what the error budget covered), Kubernetes operations (what's in your kube-system namespace, how upgrades happen, what your network policies look like), and incident command structure [3][5][6]. A candidate who claims "operated 14-cluster Kubernetes fleet" but turns out to have observed-from-outside on a fleet someone else operated fails the first technical interview. Limit scope claims to what you've actually owned. If your fleet is 3 clusters, say 3. If your SLO portfolio is 6, say 6. The numbers don't have to be big — they have to be true.

19. Avoid the "kitchen-sink tools dump" anti-pattern.

SRE Skills sections should not list every infrastructure tool that exists. A skills line that reads "AWS, GCP, Azure, Docker, Kubernetes, Helm, Terraform, Ansible, Chef, Puppet, Salt, Jenkins, GitHub Actions, GitLab CI, CircleCI, Spinnaker, ArgoCD, Flux, Prometheus, Grafana, Datadog, New Relic, Splunk, ELK, Loki, Honeycomb, Lightstep, Jaeger, Zipkin, OpenTelemetry, ..." triggers spam-detection on Greenhouse and Ashby and reads as buzzword-stuffing [1][11]. Pick the 24–36 tools you actually operate, group them by category, and put depth in experience bullets, not in the flat list.

Stage 5 — Verification and Submission (Items 20–22)

20. Run your resume through Jobscan or Resume Worded against the SRE JD.

Both tools simulate ATS parsing and produce a match score against the specific JD [15]. SRE matches are often harder than backend SWE matches because the keyword surface is wider — eight signal classes — and the level distinctions (mid vs. senior vs. staff) are tighter. Target 75%+ match score for SRE roles, with most missing-keywords being legitimate tooling-specificity gaps you can fix by naming services beyond the platform name (e.g., adding "EKS, RDS Aurora, ALB" rather than just "AWS"). Under 65% match means the resume needs structural rework before submitting. The 10 minutes of running this check is the single highest-ROI step in the entire submission process.
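To make the 75% / 65% thresholds concrete, here is a crude stand-in for the commercial match score (illustrative only — Jobscan's actual scoring is proprietary and weighs far more than term presence):

```python
def match_score(jd_terms: list[str], resume_text: str) -> float:
    """Percent of JD terms present in the resume, case-insensitive.
    A naive proxy for the match score commercial tools report."""
    lower = resume_text.lower()
    hits = sum(term.lower() in lower for term in jd_terms)
    return 100.0 * hits / len(jd_terms) if jd_terms else 0.0
```

Even this naive version makes the fix obvious: each missing term is a specific edit, usually adding named services next to the platform name.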

21. LinkedIn and GitHub match the resume on title, tenure, and tooling.

Recruiters at every modern infrastructure company cross-reference both LinkedIn and GitHub during pre-screen [3][12]. The four checks before submitting: (a) every job title on the resume matches the LinkedIn title exactly (or differs only in the "Senior" / "Staff" / "Principal" prefix in a way you can defend), (b) every cluster-count, service-count, SLO-count, and incident-count number on the resume is consistent with what LinkedIn says about the company headcount and your role, (c) every dated achievement on the resume falls within your LinkedIn employment dates, (d) any open-source contributions referenced on the resume are visible on the linked GitHub profile with commits in the claimed timeframe. Inconsistency between resume, LinkedIn, and GitHub reads as a trust failure, and infrastructure recruiters explicitly check.

22. Final manual parse-test by copying into a plain-text editor.

Open your .docx in Word or Google Docs, select all, copy, paste into TextEdit (Mac), Notepad (Windows), or a plain-text editor. The result approximates what the ATS sees post-parse. Verify: section order is right, bullets aren't scrambled, scope numbers (cluster count, service count, SLO count, latency numbers, incident counts) are intact and correctly attached to their roles, percentage and arrow characters render correctly (no "→" → "â" artifacts), all links — including the GitHub URL — are still readable as text. If anything looks wrong here, it'll look wrong in the ATS. Fix the source until the plain-text version reads cleanly.
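The artifact scan can be scripted as a final pass over the paste. A sketch (the `ARTIFACTS` list covers a few common UTF-8-mis-decoded-as-Latin-1 patterns plus the Unicode replacement character, and is not exhaustive):

```python
# Common residue when UTF-8 punctuation or arrows are mis-decoded.
ARTIFACTS = ["â", "â€™", "â€“", "Ã©", "\ufffd"]

def find_parse_artifacts(pasted: str) -> list[str]:
    """Return which known mojibake artifacts appear in the paste."""
    return [a for a in ARTIFACTS if a in pasted]
```

A non-empty result means the source document's encoding needs fixing before the ATS sees it.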

Bonus — SRE Resume Failure Modes Beyond the ATS

Even resumes that pass the ATS can fail the recruiter and hiring-manager screens that follow. Six failure modes specific to SRE resumes:

  1. The "title-without-pager" resume. The candidate has been a "Site Reliability Engineer" for 3 years. The resume says SRE. Then no bullet names on-call rotation, no incident-command experience, no SLO ownership, no Kubernetes specifics. Fails the recruiter screen because the production-credibility signal is missing. Fix: name pager structure, incident-commander tenure, SLO portfolio, and at least one Sev-1 you led.
  2. The "all platforms, no depth" resume. Every cloud listed, every database listed, every observability tool listed — and no bullet shows real ownership of any of them. Reads as resume-stuffing. Fix: cut the breadth; double the depth on the 2–3 platforms / tools you've genuinely operated.
  3. The "deployer not engineer" resume. Bullets describe shipping things to production but no investigation, no incident response, no performance debugging. Reads as someone who runs terraform apply but doesn't own production. Fix: add at least one bullet per recent role on a real production debugging story — "drove the Honeycomb-based investigation that surfaced a DNS-resolver contention bug under high concurrent load."
  4. The "buzzword postmortem" resume. Names "blameless postmortem" once and "SRE" three times in the summary — but no bullet describes an actual postmortem, an action item driven to completion, or a real incident. Reads as vocabulary-without-substance. Fix: name 2–3 specific postmortems by the kind of failure (capacity, dependency, deploy, configuration drift) and the action item that closed them out.
  5. The "I personally" voice on infrastructure work. "I configured the cluster," "I deployed the service," "I wrote the alerts." Reads as solo-engineer-without-team-context. SRE work is collaborative; bullets read better in implicit voice describing the team-coordinated work the SRE owned ("Drove the team's adoption of ArgoCD GitOps for the platform org's 60+ services; wrote the migration runbook and led the cut-over with two other SREs").
  6. The "no failure mode named" resume. Every bullet describes successes — never names a failure, a mitigation, a postmortem outcome, or a hard lesson. Reads as either junior or evasive. Fix: include at least one bullet per recent role that names a real production failure and how the team responded — "drove the Sev-1 response when a deploy-pipeline cache poisoning shipped a 3-hour outage across the user-services tier; led the postmortem and the 4-action-item remediation closed within 6 weeks." Failure-named-with-recovery is one of the strongest senior SRE signals.

FAQ

How do I show on-call experience without overstating it?

Name the rotation structure, the team-pool size, the duration, and your role (primary, secondary, tertiary). Pattern: "Secondary on-call (4-engineer pool, 2-week rotation) for 14 months on the platform-services team — primary for the data-pipeline subset of services." If your on-call has been incident-participation rather than rostered pager, frame it that way: "Participated in 18 incident responses as the data-pipeline subject-matter expert across 12 months, including 4 as incident commander." Either is valid; faking primary-on-call status when you were not primary fails the technical interview the moment a hiring manager asks about a specific Sev-1 you led [3][5].

I'm a backend engineer applying to my first SRE role. What goes on the resume?

Surface every reliability-adjacent thing you've done from backend work, framed in SRE-resume vocabulary. On-call participation (even secondary), incident response (even as a contributor), monitoring ownership, runbook authoring, performance investigations, deploy-pipeline contributions, infrastructure code reviews, and any production-impact work. The Google SRE Book and the SRE Workbook are the canonical reference for the vocabulary you should mirror — SLO, SLI, error budget, toil, blameless postmortem, on-call sustainability, alerting on symptoms not causes [5][6]. Then run the resume through Jobscan against an SRE JD; aim for 70%+ match by reframing bullets to mirror the JD's reliability phrasing.

How many years of experience do I need to claim "Senior SRE" titles?

The honest range is 5+ years of sustained infrastructure / production work, with at least 18 months of pager-carrying production on-call, ownership of an SLO portfolio for at least one major service, and lead-role on at least one incident-response or postmortem cycle. Below that, "Senior SRE" reads as inflated even if a small company gave you the title. Above 7 years, "Senior SRE" is the floor; staff / principal / SRE-Lead becomes the next step. Hello Interview's leveling rubrics map these tenure expectations across top-tier infrastructure-heavy companies [16].

Should I include specific incident counts and severity levels?

Yes, when you can defend them. "Drove 14 Sev-1 incidents as incident commander across 18 months" is a strong bullet when the underlying number is real and the severities are honest. The editorial bar: cite numbers you can defend in a hiring-manager interview, and skip the metric if the underlying data is shaky. Internal severity classifications vary across companies (Sev-0 through Sev-4 at some, P0 through P3 at others) — translate to Sev-1 / Sev-2 framing on the resume since it's the most-recognized form, and explain the company-specific definition in interview if asked. An unsourced incident-count claim that a hiring manager probes and you can't substantiate damages the entire resume's credibility — empty space beats fabrication.

Does BLS track SRE salaries?

No. BLS has no Site Reliability Engineer occupation code. The closest proxies are SOC 15-1244 Network and Computer Systems Administrators (median annual wage $96,800 in May 2024) and SOC 15-1252 Software Developers ($133,080 in May 2024) [8][9]. Both undercount SRE comp at top-tier infrastructure-heavy tech companies because BLS aggregates broadly and the SRE specialty commands premium pricing in the labor market. levels.fyi tracks SRE / Production Engineer / Reliability Engineer comp separately at named companies (Google, Stripe, Netflix, Cloudflare, Datadog, Honeycomb, Anthropic) and consistently reports total compensation above both BLS proxies, especially at senior+ levels [10]. For honest salary expectations, anchor on levels.fyi by company and level.

How do I handle a "DevOps Engineer" or "Platform Engineer" title when applying to SRE roles?

Most modern SRE recruiters read DevOps and SRE as overlapping but distinct: DevOps emphasizes the dev/ops bridge and pipeline work; SRE emphasizes production reliability, SLO/error-budget discipline, and pager-carrying. Platform Engineer sits closer to SRE for IDP-style roles. If your DevOps work has been pipeline-and-config-management with no production on-call, frame the resume around the pipeline/IaC/cloud surface honestly and surface any reliability work you did. If your DevOps or Platform work was effectively SRE (production on-call, SLOs, incident response), use the resume bullets to make that explicit and consider a one-line subtitle in the role: "DevOps Engineer (production on-call, SLO ownership, primary incident commander)." Strict-match Workday and Taleo screens will weight the title; Ashby and Greenhouse will read the bullets [12][17].

Should I include a GitHub link on the resume?

Yes, if it has any of: serious open-source infrastructure contributions, public Terraform modules, Kubernetes operators or controllers you've authored, technical-writing repositories with substantial content, or maintainership status on infrastructure projects. Skip GitHub if the profile is a graveyard of half-finished tutorials and forks-with-no-commits — an empty GitHub link does more harm than no link. The recruiter signal from a strong GitHub profile is high; the signal from a weak one is negative. Be honest about which you have.

How do I frame a job at a shop that didn't use formal SLOs or error budgets?

Translate the equivalent reliability accountability into canonical vocabulary, but honestly. If your team measured "uptime" with manual dashboards and intuition, write: "Owned the platform-data team's reliability dashboard with 22 alert thresholds tied to user-impact metrics; introduced the team's first SLI definitions for latency and availability over Q3, and partnered with the product team on the error-budget framing for the Q4 reliability investment." That bullet names the work without falsely claiming a formal SLO program existed. The Google SRE Book vocabulary is the keyword surface recruiters scan; translating internal practice to that vocabulary is fair, but inventing programs that didn't exist is not [5][6].


References

[1] Greenhouse Software. "Sourcing and Filtering Best Practices — Greenhouse Help Center." https://support.greenhouse.io/hc/en-us/articles/360051506331-Sourcing-best-practices

[2] Workday. "Workday Recruiting — Candidate Search Documentation." https://doc.workday.com/admin-guide/en-us/staffing/recruiting/candidate-experience.html

[3] Google SRE. "Site Reliability Engineering — How Google Runs Production Systems." https://sre.google/

[4] Brendan Gregg. "Systems Performance — Methodology and Tools." https://www.brendangregg.com/

[5] Beyer, Jones, Petoff, Murphy (eds). Site Reliability Engineering: How Google Runs Production Systems (O'Reilly, 2016). https://sre.google/sre-book/table-of-contents/

[6] Beyer, Murphy, Rensin, Kawahara, Thorne (eds). The Site Reliability Workbook: Practical Ways to Implement SRE (O'Reilly, 2018). https://sre.google/workbook/table-of-contents/

[7] Kubernetes Project. "Kubernetes Documentation." https://kubernetes.io/docs/home/

[8] U.S. Bureau of Labor Statistics. "Network and Computer Systems Administrators (SOC 15-1244) — Occupational Employment and Wage Statistics, May 2024." https://www.bls.gov/oes/current/oes151244.htm

[9] U.S. Bureau of Labor Statistics. "Software Developers (SOC 15-1252) — Occupational Employment and Wage Statistics, May 2024." https://www.bls.gov/oes/current/oes151252.htm

[10] levels.fyi. "Site Reliability Engineer / Production Engineer Salary Data by Company and Level." https://www.levels.fyi/t/software-engineer/focus/devops

[11] Ashby HQ. "How Ashby's AI-Powered Sourcing Works." https://www.ashbyhq.com/resources/guides/ai-powered-sourcing

[12] Lever. "Recruiter Search and Filtering Documentation." https://help.lever.co/

[13] OpenTelemetry Project. "OpenTelemetry Documentation." https://opentelemetry.io/docs/

[14] Charity Majors. "Observability and Engineering Leadership Writing." https://charity.wtf/

[15] Jobscan. "ATS Resume Test — Run Your Resume Through Our Free Scanner." https://www.jobscan.co/

[16] Hello Interview. "SRE / Infrastructure Engineering Leveling and Interview Rubrics." https://www.hellointerview.com/

[17] Cloudflare. "Cloudflare Learning Center — Networking and Edge Security." https://www.cloudflare.com/learning/

Blake Crosley — Former VP of Design at ZipRecruiter, Founder of ResumeGeni

About Blake Crosley

Blake Crosley spent 12 years at ZipRecruiter, rising from Design Engineer to VP of Design. He designed interfaces used by 110M+ job seekers and built systems processing 7M+ resumes monthly. He founded ResumeGeni to help candidates communicate their value clearly.
