Key Takeaways

  • 75% of U.S. employers use automated applicant tracking systems to screen resumes before a human reviews them (Harvard Business School & Accenture, 2021)
  • The most common ATS failures are missing keywords, incompatible formatting, and incorrect file types
  • ResumeGeni scores your resume across 8 parsing layers — modeled on the same steps enterprise ATS platforms like Workday, Greenhouse, and Taleo use to evaluate candidates

How ATS Resume Scoring Works

Applicant tracking systems parse your resume into structured data — extracting your name, contact info, work history, skills, and education — then score how well that data matches the job requirements. Many ATS rejections happen because the parser couldn't extract critical fields, not because the candidate wasn't qualified.
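As a rough illustration of what "parsing into structured data" means in practice, here is a minimal sketch of field extraction via pattern matching. The patterns, heading list, and sample resume are illustrative assumptions, not any vendor's actual parser:

```python
import re

def extract_fields(resume_text):
    """Toy field extractor: pull email, phone, and detected section headings
    from raw resume text. Real ATS parsers are far more elaborate; this only
    illustrates why a parser can miss fields the candidate clearly has."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", resume_text)
    phone = re.search(r"(\+?\d[\d\s().-]{8,}\d)", resume_text)
    # Standard headings an ATS looks for when splitting the document into sections
    headings = ["experience", "education", "skills", "summary"]
    found = [h for h in headings if re.search(rf"(?mi)^\s*{h}\b", resume_text)]
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
        "sections": found,
    }

sample = "Jane Doe\njane@example.com\n+1 (555) 123-4567\n\nExperience\n...\nSkills\n..."
print(extract_fields(sample))
```

Note that a non-standard heading like "Where I've Worked" would not match any entry in the heading list, which is exactly how qualified candidates end up with "missing" experience sections.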

Layer               | What It Checks                             | Why It Matters
Document extraction | File format, encoding, readability         | Corrupted or image-only PDFs fail immediately
Layout analysis     | Tables, columns, headers, footers          | Multi-column layouts break field extraction
Section detection   | Experience, education, skills headings     | Non-standard headings cause sections to be missed
Field mapping       | Name, email, phone, dates, titles          | Missing contact info is a common cause of immediate rejection
Keyword matching    | Job-specific terms, skills, certifications | Keyword overlap affects recruiter search visibility and ATS scoring
Chronology check    | Date ordering, gap detection               | Reverse-chronological order is expected by most ATS
Quantification      | Metrics, numbers, measurable outcomes      | Quantified achievements help human reviewers and some scoring models
Confidence scoring  | Overall parse quality and completeness     | Low-confidence parses get deprioritized in results
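Layered checks like these are typically combined into a single score. The sketch below shows one plausible way to do that; the layer names mirror the table above, but the weights are invented for illustration and are not ResumeGeni's actual model:

```python
# Illustrative per-layer weights (assumed values, summing to 1.0)
LAYER_WEIGHTS = {
    "document_extraction": 0.20,
    "layout_analysis": 0.15,
    "section_detection": 0.15,
    "field_mapping": 0.15,
    "keyword_matching": 0.15,
    "chronology_check": 0.05,
    "quantification": 0.05,
    "confidence_scoring": 0.10,
}

def ats_score(layer_results):
    """Weighted average of per-layer results in [0, 1], returned on a 0-100 scale.
    A layer that produced no result contributes zero, which is why a single
    failed parsing stage can drag the whole score down."""
    total = sum(LAYER_WEIGHTS[name] * layer_results.get(name, 0.0)
                for name in LAYER_WEIGHTS)
    return round(100 * total)

perfect = {name: 1.0 for name in LAYER_WEIGHTS}
print(ats_score(perfect))  # 100
```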

Frequently Asked Questions

Is ResumeGeni free?
Yes. ResumeGeni is currently in beta — ATS analysis, scoring, and initial improvement suggestions are free with no signup required. Full guidance and saved reports may require a free account.
What file formats are supported?
PDF, DOCX, DOC, TXT, RTF, ODT, and Apple Pages. PDF and DOCX are recommended for best ATS compatibility.
How is the ATS score calculated?
Your resume is processed through an 8-layer parsing pipeline that extracts structured data the same way enterprise ATS platforms do. The score reflects how completely and accurately your resume can be parsed, plus how well your content matches common ATS ranking criteria.
Can ATS read PDF resumes?
Yes, but not all PDFs are equal. Text-based PDFs parse well. Image-only PDFs (scanned documents) and PDFs with complex tables or multi-column layouts often fail ATS parsing. Our analyzer will flag these issues.
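A crude way to see the text-based vs. image-only distinction is to look for font resources in the raw PDF bytes, since pure scans usually contain only image objects. This is a rough heuristic for illustration, not our analyzer's method, and the byte strings below are tiny synthetic fragments, not real renderable PDFs:

```python
def looks_text_based(pdf_bytes):
    """Heuristic: text-based PDFs declare /Font resources, while image-only
    scans typically contain only /Image XObjects. Content streams in real
    files are often compressed, so this can give false negatives; a proper
    check would decompress the streams with a PDF library first."""
    return b"/Font" in pdf_bytes

texty = b"%PDF-1.4 /Type /Page /Resources <</Font <</F1 1 0 R>>>> BT (Hi) Tj ET"
scanned = b"%PDF-1.4 /Type /XObject /Subtype /Image /Width 2550 /Height 3300"
print(looks_text_based(texty))    # True
print(looks_text_based(scanned))  # False
```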
How do I improve my ATS score?
Focus on three areas: use a clean single-column format, include keywords from the job description naturally in your experience bullets, and ensure all sections (contact, experience, education, skills) use standard headings.
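The keyword advice above can be made concrete with a toy overlap check. Real ATS ranking is proprietary and far more sophisticated; this sketch only shows why mirroring the job description's vocabulary raises your match rate:

```python
import re

def keyword_coverage(resume_text, job_description):
    """Fraction of distinct job-description terms that also appear in the
    resume. Terms are words of 4+ letters, a crude way to skip stopwords."""
    def tokenize(text):
        return set(re.findall(r"[a-z]{4,}", text.lower()))
    jd_terms = tokenize(job_description)
    if not jd_terms:
        return 0.0
    return len(jd_terms & tokenize(resume_text)) / len(jd_terms)

jd = "Seeking Python engineer with Docker and Kubernetes experience"
resume = "Built Python services, deployed with Docker"
print(round(keyword_coverage(resume, jd), 2))  # 0.43
```

Adding "Kubernetes" to a relevant bullet in this example would lift the coverage, which is the mechanism behind "include keywords from the job description naturally."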

ATS Guides & Resources

Built by engineers with 12 years of experience building enterprise hiring technology at ZipRecruiter.

Senior Deep Learning Engineer

Nanonets · India

Location: Bangalore (Hybrid) | $40M+ Funded | Building State-of-the-Art AI

Nanonets is transforming the way businesses work. Our AI platform takes the manual, messy, time-consuming work that bogs down industries like finance, healthcare, and supply chain, and turns it into seamless, automated processes. What once took hours of human effort now takes seconds with Nanonets. Our client footprint spans 34% of the Fortune 500, enabling businesses across industries to unlock the potential of AI in automating their business processes.

More than 10,000 businesses trust Nanonets because we don’t just promise efficiency — we deliver it with unmatched accuracy and seamless integrations.

Join Nanonets to push the boundaries of what's possible with deep learning. We're not just implementing models – we're setting new benchmarks in document AI, with our open-source models achieving nearly 1 million downloads on Hugging Face and recognition from global AI leaders.

Backed by $40M+ in total funding including our recent $29M Series B from Accel, alongside Elevation Capital and Y Combinator, we're scaling our deep learning capabilities to serve enterprise clients including Toyota, Boston Scientific, and Bill.com. You'll work on genuinely challenging problems at the intersection of computer vision, NLP, and generative AI.

What You'll Build

Core Technical Challenges:

  • Train & Fine-tune SOTA Architectures: Adapt and optimize transformer-based models, vision-language models, and custom architectures for document understanding at scale
  • Production ML Infrastructure: Design high-performance serving systems handling millions of requests daily using frameworks like TorchServe, Triton Inference Server, and vLLM
  • Agentic AI Systems: Build reasoning-capable OCR that goes beyond extraction – models that understand context, chain operations, and provide confidence-grounded outputs
  • Optimization at Scale: Implement quantization, distillation, and hardware acceleration techniques to achieve fast inference while maintaining accuracy
  • Multi-modal Innovation: Tackle alignment challenges between vision and language models, reduce hallucinations, and improve cross-modal understanding using techniques like RLHF and PEFT

Engineering Responsibilities:

  • Design distributed training pipelines for models with billions of parameters using PyTorch FSDP/DeepSpeed
  • Build comprehensive evaluation frameworks benchmarking against GPT-4V, Claude, and specialized document AI models
  • Implement A/B testing infrastructure for gradual model rollouts in production
  • Create reproducible training pipelines with experiment tracking 
  • Optimize inference costs through dynamic batching, model pruning, and selective computation

We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity.

Technical Requirements

Must-Have:

  • 3+ years of hands-on deep learning experience with production deployments
  • Strong PyTorch expertise – ability to implement custom architectures, loss functions, and training loops from scratch
  • Experience with distributed training and large-scale model optimization
  • Proven track record of taking models from research to production
  • Solid understanding of transformer architectures, attention mechanisms, and modern training techniques
  • B.E./B.Tech from top-tier engineering colleges

Highly Valued:

  • Experience with model serving frameworks (TorchServe, Triton, Ray Serve, vLLM)
  • Knowledge of efficient inference techniques (ONNX, TensorRT, quantization)
  • Contributions to open-source ML projects
  • Experience with vision-language models and document understanding
  • Familiarity with LLM fine-tuning techniques (LoRA, QLoRA, PEFT)

Why This Role is Exceptional

  • Proven Impact: Our models are approaching 1 million downloads — your work will have global reach
  • Real Scale: Your models will process millions of documents daily for Fortune 500 companies
  • Well-Funded Innovation: $40M+ in funding means significant GPU resources and freedom to experiment
  • Open Source Leadership: Publish your work and contribute to models already trusted by nearly a million developers
  • Research-Driven Culture: Regular paper reading sessions, collaboration with research community
  • Rapid Growth: Strong financial backing and Series B momentum mean ambitious projects and fast career progression

Our Recent Achievements

  • Nanonets-OCR model: ~1 million downloads on Hugging Face – one of the most adopted document AI models globally
  • Launched industry-first Automation Benchmark defining new standards for AI reliability
  • Published research recognized by leading AI researchers
  • Built agentic OCR systems that reason and adapt, not just extract
  • Secured $40M+ in total funding from Accel, Elevation Capital, and Y Combinator