Key Takeaways

  • 75% of U.S. employers use automated applicant tracking systems to screen resumes before a human reviews them (Harvard Business School & Accenture, 2021)
  • The most common ATS failures are missing keywords, incompatible formatting, and incorrect file types
  • ResumeGeni scores your resume across 8 parsing layers — modeled on the same steps enterprise ATS platforms like Workday, Greenhouse, and Taleo use to evaluate candidates

How ATS Resume Scoring Works

Applicant tracking systems parse your resume into structured data — extracting your name, contact info, work history, skills, and education — then score how well that data matches the job requirements. Many ATS rejections happen because the parser couldn't extract critical fields, not because the candidate wasn't qualified.
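The field-extraction step described above can be sketched in a few lines. This is a crude illustration with hypothetical regex patterns, not ResumeGeni's or any real ATS's actual parser:

```python
import re

def extract_fields(resume_text: str) -> dict:
    """Pull basic contact fields out of raw resume text.

    Simple regex heuristics for illustration only; production ATS
    parsers use far more robust extraction.
    """
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.-]+", resume_text)
    phone = re.search(r"\(?\+?\d[\d\s().-]{7,}\d", resume_text)
    # Treat the first non-empty line as the candidate's name.
    name = next(
        (line.strip() for line in resume_text.splitlines() if line.strip()), ""
    )
    return {
        "name": name,
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }

fields = extract_fields("Jane Doe\njane.doe@example.com\n(555) 123-4567\nExperience: ...")
```

When any of these fields comes back empty, the parse is incomplete — which is exactly the failure mode that gets qualified candidates filtered out.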

Layer | What It Checks | Why It Matters
Document extraction | File format, encoding, readability | Corrupted or image-only PDFs fail immediately
Layout analysis | Tables, columns, headers, footers | Multi-column layouts break field extraction
Section detection | Experience, education, skills headings | Non-standard headings cause sections to be missed
Field mapping | Name, email, phone, dates, titles | Missing contact info is a common cause of immediate rejection
Keyword matching | Job-specific terms, skills, certifications | Keyword overlap affects recruiter search visibility and ATS scoring
Chronology check | Date ordering, gap detection | Reverse-chronological order is expected by most ATS
Quantification | Metrics, numbers, measurable outcomes | Quantified achievements help human reviewers and some scoring models
Confidence scoring | Overall parse quality and completeness | Low-confidence parses get deprioritized in results
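The table above can be read as a pipeline of independent checks whose results roll up into an overall confidence score. A minimal sketch of that structure — the layer names, toy checks, and equal weighting are illustrative assumptions, not ResumeGeni's actual model:

```python
from typing import Callable

# Each layer maps parsed resume data to a score in [0, 1].
LayerCheck = Callable[[dict], float]

def score_resume(resume: dict, layers: dict[str, LayerCheck]) -> dict[str, float]:
    """Run every layer check, then add the mean as an overall 'confidence'."""
    scores = {name: check(resume) for name, check in layers.items()}
    scores["confidence"] = sum(scores.values()) / len(scores)
    return scores

# Toy stand-ins for three of the layers in the table.
layers = {
    "field_mapping": lambda r: 1.0 if r.get("email") and r.get("phone") else 0.0,
    "section_detection": lambda r: len(r.get("sections", [])) / 4,  # 4 standard sections
    "keyword_matching": lambda r: min(1.0, len(r.get("matched_keywords", [])) / 10),
}

result = score_resume(
    {
        "email": "a@b.co",
        "phone": "555-0100",
        "sections": ["contact", "experience", "education", "skills"],
        "matched_keywords": ["python", "sql"],
    },
    layers,
)
```

The design point is that a single failing layer (say, field mapping returning 0.0) drags the overall confidence down even when the content itself is strong — mirroring how a parse failure can sink an otherwise qualified resume.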

Frequently Asked Questions

Is ResumeGeni free?
Yes. ResumeGeni is currently in beta — ATS analysis, scoring, and initial improvement suggestions are free with no signup required. Full guidance and saved reports may require a free account.
What file formats are supported?
PDF, DOCX, DOC, TXT, RTF, ODT, and Apple Pages. PDF and DOCX are recommended for best ATS compatibility.
How is the ATS score calculated?
Your resume is processed through an 8-layer parsing pipeline that extracts structured data the same way enterprise ATS platforms do. The score reflects how completely and accurately your resume can be parsed, plus how well your content matches common ATS ranking criteria.
Can ATS read PDF resumes?
Yes, but not all PDFs are equal. Text-based PDFs parse well. Image-only PDFs (scanned documents) and PDFs with complex tables or multi-column layouts often fail ATS parsing. Our analyzer will flag these issues.
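One rough way to spot an image-only PDF is that text-based PDFs declare fonts in their resource dictionaries, while pure scans usually embed only image objects. The heuristic below inspects the raw bytes only — real parsers decompress content streams and extract the actual text, so treat this as a first-pass signal, not a verdict:

```python
def has_text_layer(pdf_bytes: bytes) -> bool:
    """Crude heuristic: a /Font entry suggests an extractable text layer.

    Image-only scans typically contain image XObjects but no fonts.
    Compressed or unusual PDFs can fool this check in both directions.
    """
    return b"/Font" in pdf_bytes

# Simplified byte fragments standing in for real files:
scanned = b"%PDF-1.4 ... /XObject << /Im0 5 0 R >> /Subtype /Image ..."
text_pdf = b"%PDF-1.4 ... /Font << /F1 7 0 R >> BT (Hello) Tj ET ..."
```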
How do I improve my ATS score?
Focus on three areas: use a clean single-column format, include keywords from the job description naturally in your experience bullets, and ensure all sections (contact, experience, education, skills) use standard headings.
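The keyword advice above boils down to measuring overlap between your resume and the job description. A simple token-level sketch (real ATS matching also handles multi-word phrases, synonyms, and stemming, and the stopword list here is an illustrative assumption):

```python
import re

STOPWORDS = {"and", "or", "the", "a", "an", "to", "of", "in", "with", "for", "on"}

def keyword_overlap(resume: str, job_description: str) -> tuple[float, set[str]]:
    """Return the fraction of job-description terms found in the resume,
    plus the set of missing terms."""
    def tokenize(text: str) -> set[str]:
        # Keep word characters plus +, #, . so terms like "c++" survive.
        return set(re.findall(r"[a-z0-9+#.]+", text.lower())) - STOPWORDS

    job_terms = tokenize(job_description)
    missing = job_terms - tokenize(resume)
    coverage = 1 - len(missing) / len(job_terms) if job_terms else 0.0
    return coverage, missing

coverage, missing = keyword_overlap(
    "Built dashboards in Python and SQL",
    "Python SQL Tableau experience",
)
```

Here the resume covers half of the job's terms, and `missing` tells you exactly which keywords to work into your experience bullets — naturally, not as a keyword dump.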


Built by engineers with 12 years of experience building enterprise hiring technology at ZipRecruiter.

Research, Audio Expertise

Thinking Machines · San Francisco

Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals. 

We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

Thinking Machines builds multimodal-first. For us, there is no separate multimodal work. It’s at the core of everything we do, from the scientific goals we’re setting to the infrastructure we’re building. We’re looking for researchers to advance the frontier of audio capabilities. You’ll explore how audio models enable more natural and efficient communication and collaboration, preserving more information and capturing user intent.

This is a highly collaborative role. You’ll work closely across pre-training, post-training, and product with world-class researchers, infrastructure engineers, and designers. This is an opportunity to shape the fundamental capabilities of AI systems that millions of people will use.

This role blends fundamental research and practical engineering, as we do not distinguish between the two roles internally. You will be expected to write high-performance code and read technical reports. It’s an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.

Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest in this research area. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every 6 months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to the evergreen role.

What You’ll Do

  • Own research projects on audio training, low-latency inference and conversational responsiveness.
  • Design and train large-scale models that natively support audio input and output.
  • Investigate scaling behavior such as how data, model size, and compute affect capability and efficiency.
  • Build and maintain audio data pipelines, including preprocessing, filtering, segmentation, and alignment for training and evaluation.
  • Collaborate with data and infrastructure teams to scale audio training efficiently across distributed systems.
  • Publish and present research that moves the entire community forward. Share code, datasets, and insights that accelerate progress across industry and academia.

Skills and Qualifications

Minimum qualifications:

  • Ability to design, run, and analyze experiments thoughtfully, with demonstrated research judgment and empirical rigor.
  • Understanding of machine learning fundamentals, large-scale training, and distributed compute environments.
  • Proficiency in Python and familiarity with at least one deep learning framework (e.g., PyTorch, TensorFlow, or JAX). Comfortable with debugging distributed training and writing code that scales.
  • Bachelor’s degree or equivalent experience in Computer Science, Machine Learning, Physics, Mathematics, or a related discipline with strong theoretical and empirical grounding.
  • Clarity in communication, an ability to explain complex technical concepts in writing.

Preferred qualifications (you should meet at least some of these, but we encourage you to apply even if you don’t meet them all):

  • A strong grasp of probability, statistics, and ML fundamentals. You can look at experimental data and distinguish between real effects, noise, and bugs.
  • Experience with real-time inference, streaming architectures, or optimization for low latency.
  • Prior experience training or evaluating large-scale audio or multimodal models.
  • Publications, releases, or open-source projects related to speech, audio, voice, or similar areas.
  • Demonstrated experience in audio or speech modeling, including ASR, TTS, or self-supervised audio learning.
  • PhD in Computer Science, Machine Learning, Physics, Mathematics, or a related discipline with strong theoretical and empirical grounding; or, equivalent industry research experience.

Logistics

  • Location: This role is based in San Francisco, California. 
  • Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.
  • Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
  • Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.

As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.