Key Takeaways

  • 75% of U.S. employers use automated applicant tracking systems to screen resumes before a human reviews them (Harvard Business School & Accenture, 2021)
  • The most common ATS failures are missing keywords, incompatible formatting, and incorrect file types
  • ResumeGeni scores your resume across 8 parsing layers — modeled on the same steps enterprise ATS platforms like Workday, Greenhouse, and Taleo use to evaluate candidates

How ATS Resume Scoring Works

Applicant tracking systems parse your resume into structured data — extracting your name, contact info, work history, skills, and education — then score how well that data matches the job requirements. Many ATS rejections happen because the parser couldn't extract critical fields, not because the candidate wasn't qualified.
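Field extraction can be sketched in miniature. The snippet below is a deliberately simplified illustration of how an ATS might pull structured fields out of raw resume text; the regexes, the first-line name heuristic, and the field names are our own assumptions, not any vendor's actual implementation.

```python
import re

def parse_resume(text: str) -> dict:
    """Pull a few structured fields out of raw resume text.

    Deliberately simplified: real ATS parsers use trained models,
    not two regexes and a first-line heuristic.
    """
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    phone = re.search(r"\+?\d[\d\s().-]{8,}\d", text)
    # Assume the first non-empty line is the candidate's name.
    name = next((ln.strip() for ln in text.splitlines() if ln.strip()), None)
    return {
        "name": name,
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }

resume = """Jane Doe
jane.doe@example.com | (555) 123-4567
Data Analyst, Acme Corp, 2021-Present"""
fields = parse_resume(resume)  # name, email, and phone all extracted
```

If any of these fields come back empty, the downstream match score never gets a chance to reflect the candidate's actual qualifications.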

Layer | What It Checks | Why It Matters
Document extraction | File format, encoding, readability | Corrupted or image-only PDFs fail immediately
Layout analysis | Tables, columns, headers, footers | Multi-column layouts break field extraction
Section detection | Experience, education, skills headings | Non-standard headings cause sections to be missed
Field mapping | Name, email, phone, dates, titles | Missing contact info is a common cause of immediate rejection
Keyword matching | Job-specific terms, skills, certifications | Keyword overlap affects recruiter search visibility and ATS scoring
Chronology check | Date ordering, gap detection | Reverse-chronological order is expected by most ATS
Quantification | Metrics, numbers, measurable outcomes | Quantified achievements help human reviewers and some scoring models
Confidence scoring | Overall parse quality and completeness | Low-confidence parses get deprioritized in results
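Layered checks like these can be combined into a single score. The sketch below is a hedged illustration of the idea only: the three checks, their logic, and the equal weighting are our own assumptions, not ResumeGeni's actual model.

```python
# Hypothetical per-layer checks -- ResumeGeni's real layers are richer.
# Each returns a score in [0, 1]; the final score averages them.

def check_sections(parsed):
    """Are the standard section headings present?"""
    required = {"experience", "education", "skills"}
    return len(required & parsed["sections"]) / len(required)

def check_contact(parsed):
    """Did field mapping recover contact info?"""
    return 1.0 if parsed.get("email") and parsed.get("phone") else 0.0

def check_keywords(parsed, job_keywords):
    """What fraction of job keywords appear in the resume?"""
    if not job_keywords:
        return 1.0
    return len(job_keywords & parsed["keywords"]) / len(job_keywords)

def ats_score(parsed, job_keywords):
    layers = [
        check_sections(parsed),
        check_contact(parsed),
        check_keywords(parsed, job_keywords),
    ]
    return round(100 * sum(layers) / len(layers))

parsed = {
    "sections": {"experience", "education", "skills"},
    "email": "jane@example.com",
    "phone": "555-123-4567",
    "keywords": {"python", "sql", "etl"},
}
score = ats_score(parsed, {"python", "sql", "spark", "etl"})  # → 92
```

Note how a single failed layer (say, an unparseable phone number) drags the whole score down even when the content is strong, which mirrors why parse failures cause rejections of qualified candidates.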

Frequently Asked Questions

Is ResumeGeni free?
Yes. ResumeGeni is currently in beta — ATS analysis, scoring, and initial improvement suggestions are free with no signup required. Full guidance and saved reports may require a free account.
What file formats are supported?
PDF, DOCX, DOC, TXT, RTF, ODT, and Apple Pages. PDF and DOCX are recommended for best ATS compatibility.
How is the ATS score calculated?
Your resume is processed through an 8-layer parsing pipeline that extracts structured data the same way enterprise ATS platforms do. The score reflects how completely and accurately your resume can be parsed, plus how well your content matches common ATS ranking criteria.
Can ATS read PDF resumes?
Yes, but not all PDFs are equal. Text-based PDFs parse well. Image-only PDFs (scanned documents) and PDFs with complex tables or multi-column layouts often fail ATS parsing. Our analyzer will flag these issues.
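You can apply a version of this check yourself: extract the PDF's text layer with a library (for example, joining page.extract_text() across pages with pypdf's PdfReader) and see how much text comes back. The 150-character threshold below is our own arbitrary heuristic, not a documented ATS rule.

```python
def likely_image_only(extracted_text: str, min_chars: int = 150) -> bool:
    """Heuristic: a scanned (image-only) PDF yields almost no
    extractable text, so a very short extraction suggests the
    ATS parser will fail on it.

    `extracted_text` is whatever your PDF library returns, e.g.
    pypdf's page.extract_text() joined across pages. The 150-char
    cutoff is an illustrative assumption.
    """
    return len(extracted_text.strip()) < min_chars

# A scan returns (near-)empty text; a text-based PDF returns plenty.
likely_image_only("")                       # → True
likely_image_only("Experience: ... " * 50)  # → False
```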
How do I improve my ATS score?
Focus on three areas: use a clean single-column format, include keywords from the job description naturally in your experience bullets, and ensure all sections (contact, experience, education, skills) use standard headings.
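The keyword advice can be checked mechanically before you submit. This sketch is a simple whole-word, case-insensitive matcher of our own; real ATS matching also handles synonyms, stemming, and multi-word phrases.

```python
import re

def missing_keywords(resume_text: str, job_keywords: set) -> set:
    """Flag job-description keywords the resume never mentions.

    Whole-word, case-insensitive matching only -- a rough first
    pass, not how any particular ATS ranks candidates.
    """
    words = set(re.findall(r"[a-z0-9+#.]+", resume_text.lower()))
    return {kw for kw in job_keywords if kw.lower() not in words}

resume = "Built ETL pipelines in Python and SQL on Databricks"
gaps = missing_keywords(resume, {"python", "sql", "dbt", "airflow"})
# gaps == {"dbt", "airflow"}
```

Anything the function returns is a candidate to work naturally into your experience bullets, provided you actually have the skill.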

Built by engineers with 12 years of experience building enterprise hiring technology at ZipRecruiter.

Intern - Research Engineer

Summit · New York, New York, United States


SummitTX Capital is a multi-manager, multi-strategy hedge fund managing over $3 billion in AUM. Founded in 2015, the firm spun out from Crestline Investors in 2025 to become an independent SEC-registered adviser under the SummitTX Capital brand. We operate an open-architecture platform across Fundamental, Tactical, Quantitative, and Capital Markets strategies, with offices in Fort Worth and New York.

SummitTX is seeking exceptional master’s candidates for our Research Engineer Internship beginning in the summer of 2026. This intern will help build and scale our systematic data platform that powers alpha research and production signals. You will work end-to-end, from idea generation and data acquisition to model development, backtesting, deployment, and monitoring, with an initial portfolio mix of Long/Short Equity initiatives and Systematic Fundamental research. The role reports to the Head of Data and partners daily with portfolio managers, analysts, the central research team, risk, and operations.

Key Responsibilities

  • Design, build, and maintain systematic data pipelines, including ingestion, medallion-style data modeling, feature engineering, and experiment tracking
  • Operationalize robust ELT workflows using DBT/SQL and Python on Databricks, with strong enforcement of data quality, lineage, and documentation
  • Develop research-grade datasets and features across market, alternative, and fundamental domains to support L/S Equity and systematic strategies
  • Productionize models and alpha signals with CI/CD pipelines, model registries, monitoring, and cost/performance optimization on Databricks and AWS
  • Partner with PMs and Analysts to translate investment hypotheses into testable research artifacts, delivering clear results, visualizations, and readouts to guide decision-making
  • Contribute to the evolution of the data platform roadmap, including observability, governance, access controls, cataloging, and documentation standards

Qualifications

  • BS or pursuing an MS in Data Science, Data Engineering, Statistics, Business Analytics, Applied Math, or related field with strong academic performance
  • Strong Python and SQL fundamentals; comfort with Git and testing frameworks
  • Coursework or internship experience in data modeling, ETL/ELT, artificial intelligence/machine learning/statistics, or time-series analysis
  • Clear communication skills and ability to partner with investment, risk, and operations stakeholders

Preferred

  • Hands-on experience with Python, SQL, DBT, Spark, and modern data-quality toolkits
  • Exposure to ML frameworks (pandas, scikit-learn, PyTorch, MLflow) and feature pipelines
  • Familiarity with Databricks (Lakehouse, Unity Catalog) and AWS data services (S3, Glue/Athena, Lake Formation)
  • Experience with visualization and BI tools (e.g., Plotly, Tableau/Power BI), and Financial Data Platform (e.g. Bloomberg Terminal)
  • Experience in GenAI/LLM applications (prompt engineering, agentic workflow, RAG)

Tech Stack

  • Languages & Frameworks: Python (Pandas, scikit-learn, PyTorch, MLflow), SQL, DBT, Spark
  • Data & Platform: Databricks (Delta Lake, Unity Catalog, Serverless Compute), DBT, AWS (EC2, S3, Athena), Bloomberg Terminal
  • Tooling & Ops: GitHub/Bitbucket, Databricks Lakeflow, Airflow, CI/CD pipelines, observability frameworks, Linux, Cursor/VS Code

Compensation

  • Base Compensation Range: $40-$50/hr
  • Eligible for overtime