Key Takeaways

  • 75% of U.S. employers use automated applicant tracking systems to screen resumes before a human reviews them (Harvard Business School & Accenture, 2021)
  • The most common ATS failures are missing keywords, incompatible formatting, and incorrect file types
  • ResumeGeni scores your resume across 8 parsing layers — modeled on the same steps enterprise ATS platforms like Workday, Greenhouse, and Taleo use to evaluate candidates

How ATS Resume Scoring Works

Applicant tracking systems parse your resume into structured data — extracting your name, contact info, work history, skills, and education — then score how well that data matches the job requirements. Many ATS rejections happen because the parser couldn't extract critical fields, not because the candidate wasn't qualified.

Layer               | What It Checks                             | Why It Matters
Document extraction | File format, encoding, readability         | Corrupted or image-only PDFs fail immediately
Layout analysis     | Tables, columns, headers, footers          | Multi-column layouts break field extraction
Section detection   | Experience, education, skills headings     | Non-standard headings cause sections to be missed
Field mapping       | Name, email, phone, dates, titles          | Missing contact info is a common cause of immediate rejection
Keyword matching    | Job-specific terms, skills, certifications | Keyword overlap affects recruiter search visibility and ATS scoring
Chronology check    | Date ordering, gap detection               | Reverse-chronological order is expected by most ATS
Quantification      | Metrics, numbers, measurable outcomes      | Quantified achievements help human reviewers and some scoring models
Confidence scoring  | Overall parse quality and completeness     | Low-confidence parses get deprioritized in results
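To make a few of the layers above concrete, here is a minimal sketch of field mapping, section detection, and confidence scoring in Python. The regexes, heading list, and scoring formula are illustrative assumptions, not ResumeGeni's actual pipeline:

```python
import re

# Standard section headings an ATS looks for (illustrative list)
STANDARD_HEADINGS = {"experience", "education", "skills", "summary"}

def extract_fields(text):
    """Field mapping: pull email and phone with simple regexes."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    phone = re.search(r"\+?\d[\d\s().-]{8,}\d", text)
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }

def detect_sections(text):
    """Section detection: match lines against standard headings."""
    found = set()
    for line in text.splitlines():
        heading = line.strip().lower().rstrip(":")
        if heading in STANDARD_HEADINGS:
            found.add(heading)
    return found

def confidence_score(fields, sections):
    """Confidence scoring: average of field hit rate and section hit rate."""
    field_hits = sum(v is not None for v in fields.values())
    return (field_hits / len(fields) + len(sections) / len(STANDARD_HEADINGS)) / 2

resume = """Jane Doe
jane.doe@example.com  +1 (555) 010-9999

Experience
Data Analyst, Acme Corp

Education
B.S. Statistics

Skills
SQL, Python
"""

fields = extract_fields(resume)
sections = detect_sections(resume)
print(fields["email"])                      # jane.doe@example.com
print(sorted(sections))                     # ['education', 'experience', 'skills']
print(confidence_score(fields, sections))   # 0.875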

Frequently Asked Questions

Is ResumeGeni free?
Yes. ResumeGeni is currently in beta — ATS analysis, scoring, and initial improvement suggestions are free with no signup required. Full guidance and saved reports may require a free account.
What file formats are supported?
PDF, DOCX, DOC, TXT, RTF, ODT, and Apple Pages. PDF and DOCX are recommended for best ATS compatibility.
How is the ATS score calculated?
Your resume is processed through an 8-layer parsing pipeline that extracts structured data the same way enterprise ATS platforms do. The score reflects how completely and accurately your resume can be parsed, plus how well your content matches common ATS ranking criteria.
Can ATS read PDF resumes?
Yes, but not all PDFs are equal. Text-based PDFs parse well. Image-only PDFs (scanned documents) and PDFs with complex tables or multi-column layouts often fail ATS parsing. Our analyzer will flag these issues.
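One rough way to pre-check a PDF yourself: text-based PDFs declare font resources (`/Font`) and text blocks (`BT` … `ET`) in their content, while image-only scans typically embed only image objects. The sketch below is a crude byte-level heuristic, not a real parser — compressed object streams can hide both markers, so a robust check would extract text with a PDF library such as pypdf:

```python
def looks_text_based(pdf_bytes: bytes) -> bool:
    """Heuristic: text PDFs declare /Font resources and BT/ET text blocks;
    image-only scans usually don't. Compressed object streams can hide
    these markers, so treat this only as a first pass."""
    return b"/Font" in pdf_bytes and b"BT" in pdf_bytes

# Illustrative byte snippets, not complete real PDFs:
text_pdf = b"%PDF-1.4\n... /Type /Font /Subtype /Type1 ...\nBT (Hello) Tj ET"
scanned_pdf = b"%PDF-1.4\n... /Subtype /Image /Filter /DCTDecode ..."

print(looks_text_based(text_pdf))     # True
print(looks_text_based(scanned_pdf))  # False
```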
How do I improve my ATS score?
Focus on three areas: use a clean single-column format, include keywords from the job description naturally in your experience bullets, and ensure all sections (contact, experience, education, skills) use standard headings.
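The keyword advice can be sanity-checked in a few lines of Python. This is a naive single-word overlap measure — an illustration, not how any particular ATS ranks candidates — and multi-word phrases like "machine learning" would need phrase matching rather than a word-set lookup:

```python
def keyword_coverage(resume_text: str, job_keywords: set[str]) -> float:
    """Share of job-description keywords that appear in the resume."""
    words = set(resume_text.lower().split())
    hits = {kw for kw in job_keywords if kw.lower() in words}
    return len(hits) / len(job_keywords)

job_keywords = {"SQL", "Python", "dbt", "Snowflake"}  # illustrative keywords
resume = "Built dbt models in Snowflake and automated reporting in Python"
print(keyword_coverage(resume, job_keywords))  # 0.75  (missing: SQL)
```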

Built by engineers with 12 years of experience building enterprise hiring technology at ZipRecruiter.

Analytics Engineer

Perplexity · San Francisco

Perplexity is AI for people who expect more. This role brings that same standard to how our data team works - with AI at the center of everything we do.

We're looking for someone who's been a great data scientist, analytics engineer, or data engineer - the kind of person who knows which metric actually matters, who can design an A/B test that answers the real question, who's gone deep on a data model because something didn't add up - and who has decided that the highest-leverage thing they can do next is build AI systems that fundamentally change how data science gets done.

Not another text-to-SQL bot. Not another dashboard. You'll build AI agents that conduct full analyses autonomously - forming hypotheses, writing and running queries, interpreting results, and drafting recommendations. You'll make the entire data warehouse AI-readable so any system can query it accurately. You'll create self-healing pipelines that detect and fix data issues before anyone notices. You'll build the infrastructure that turns a small data team into one that operates at 10x its size.

You'll join a data team that's already using AI across its workflows - but we know there's a much bigger opportunity ahead, and we have buy-in from leadership to pursue it. Now we're building a team dedicated to taking what we've started and turning it into something world-class: scalable systems, new tools, and an AI-native way of working that doesn't just raise our own bar but pushes the entire industry forward.

What You'll Do

  • Accelerate the AI-native data workflow - the team is already working this way. You'll take what's working and turn it into repeatable systems, scalable tools, and patterns that the data team and the entire company can adopt

  • Build AI agents that do data science - not just answer SQL questions, but conduct end-to-end analyses: explore data, form hypotheses, run queries, interpret results, and generate actionable recommendations

  • Make the warehouse AI-readable - build the semantic layer, context, and retrieval infrastructure that lets any AI system (internal or product) query Perplexity's data accurately and reliably

  • Automate the data lifecycle - self-healing pipelines, automated dbt model generation and validation, data quality agents that detect, diagnose, and fix issues autonomously

  • Ship AI-powered experiment analysis - agents that interpret A/B test results, flag statistical issues, and draft ship/no-ship recommendations for product teams

  • Own the full lifecycle - from identifying the highest-leverage problem, to prototyping with LLMs, to iterating on accuracy and UX, to production deployment and monitoring

  • Turn the data team into a product team - build internal data products that stakeholders across the company actually use daily, replacing ad-hoc requests with self-serve AI interfaces

What We're Looking For

  • 6-8+ years in data science, analytics engineering, or a related role - you've been in the data trenches

  • Strong product sense - you've worked closely with product and business teams, you understand what drives user behavior, and you have good instincts for what to measure and what to build

  • Deep SQL expertise - you think in SQL, you've built data models, you know your way around a warehouse

  • Pipeline experience - you've built and maintained data pipelines, worked with dbt, dealt with data quality issues firsthand

  • Enough software engineering chops to be dangerous - you can build and ship a working tool in Python, not just a notebook. You can wrangle APIs, deploy a service, write code that other people can maintain. You're not a SWE, but you're not afraid of production

  • Genuinely excited about AI - you've been building with LLMs on your own time. You have opinions about which models are good at what. You've tried building agents, RAG systems, or AI-powered workflows. You follow the space obsessively because you think it's going to change everything - including how data teams work

  • Builder mentality - you see a manual process and you can't help but automate it. You ship fast and iterate

  • Autonomy - this is a new function. You'll define the roadmap as much as execute it

Bonus

  • Experience with dbt (building and maintaining production models)

  • Snowflake administration and optimization

  • You've built Slack bots, internal CLI tools, or developer productivity tools that people actually used

  • Background in AI agent frameworks

  • Experience with BI tools - you know what's worth automating because you've done the manual version

  • A/B testing and experimentation - you've designed experiments and analyzed results

  • Early-stage startup experience

Why This Role

  • Set the standard for the industry - the team is already using AI across its work. You'll be the one who turns that into something other data orgs look to as the benchmark

  • Recursive AI - Perplexity builds an AI answer engine for the world. You'll build one for the company. Few places offer this kind of alignment between the product and the work

  • Frontier models, day one - you're at an AI company with access to frontier infrastructure and people who deeply understand what's possible

  • Massive leverage - the systems you build will multiply the output of every data team member and every stakeholder who needs data

  • Direct impact - small team, no layers of approval. Idea to shipped system in days, not quarters