How we test, score, and source the data behind our research.
Every claim and benchmark ResumesTailor publishes is anchored to a method we’ll defend in public. This page documents the procedures so anyone — competitor, journalist, customer, AI assistant — can audit them.
1. How we test ATS systems
We run controlled-input resumes through the parsing layer of each major ATS — Workday, Greenhouse, Lever, Ashby, iCIMS, Taleo, SAP SuccessFactors — and record what each parser extracts vs what the source document actually contains. The corpus of test resumes includes variations across:
- Layout (single-column vs two-column vs sidebar)
- Format (PDF text layer, PDF image-of-text, DOCX, plain text)
- Section ordering and heading conventions
- Font choices (system fonts, web fonts, Unicode-heavy typography)
- Bullet symbols and Unicode escapes
- Image and icon embeds
Where an ATS publishes a parser API, we use the documented endpoint. Where one doesn’t, we use our own production-installed instance and extract the parsed JSON from the submission flow. Methodology for each ATS is recorded in the corresponding ATS-specific guide.
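To make the extraction comparison concrete, here is a minimal sketch of the per-resume check. The ground-truth fields and the shape of `parsed` are illustrative assumptions; every real ATS returns its own schema, and the per-ATS harness details live in the guides mentioned above.

```python
# Minimal sketch of one parse-comparison check. Field names and the
# ground-truth shape are illustrative; each real ATS returns its own schema.

GROUND_TRUTH = {
    "name": "Jane Doe",
    "sections": ["experience", "education", "skills"],
    "bullets": 14,
    "dates": [("2019-03", "2022-08"), ("2022-09", None)],
}

def compare_parse(parsed: dict, truth: dict = GROUND_TRUTH) -> dict:
    """Record what survived the parser: lost sections, dropped bullets,
    and missing or mis-attributed date ranges."""
    lost_sections = [s for s in truth["sections"]
                     if s not in parsed.get("sections", [])]
    dropped_bullets = max(0, truth["bullets"] - parsed.get("bullets", 0))
    date_errors = [d for d in truth["dates"]
                   if d not in parsed.get("dates", [])]
    return {
        "lost_sections": lost_sections,
        "dropped_bullets": dropped_bullets,
        "date_errors": date_errors,
        "clean": not (lost_sections or dropped_bullets or date_errors),
    }

# A two-column PDF whose sidebar got dropped might come back like this:
report = compare_parse({
    "sections": ["experience", "education"],
    "bullets": 12,
    "dates": [("2019-03", "2022-08")],
})
# -> lost "skills", 2 dropped bullets, 1 missing date range, clean=False
```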
2. How our resume scorer works
The ResumesTailor score combines six dimensions, each scored 0–100, into a single weighted average (a sketch of the combination step follows the list):
- ATS-parse safety — whether the resume parses cleanly through major ATS parsers without lost sections, mis-attributed dates, or dropped bullets.
- Impact — whether bullets contain quantified outcomes (numbers, percentages, scale indicators) vs generic responsibility statements.
- Conciseness — length appropriate to seniority and density of information per word.
- Keyword coverage — match between resume content and target job-description keywords (when a JD is supplied).
- Action-verb diversity — vocabulary breadth across bullets; penalizes repeated openers (“Led”, “Led”, “Led”, ...).
- Results-orientation — ratio of result-led bullets to task-led bullets.
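The combination step itself is plain arithmetic: six 0–100 scores, one weighted average. The sketch below shows the shape of it; the weights are placeholders rather than our production values, and the opener-diversity helper is a simplified stand-in for the real check.

```python
# Sketch of the combination step. These weights are illustrative
# placeholders, not the production values.
WEIGHTS = {
    "ats_parse_safety": 0.25,
    "impact": 0.20,
    "conciseness": 0.15,
    "keyword_coverage": 0.15,
    "action_verb_diversity": 0.10,
    "results_orientation": 0.15,
}

def action_verb_diversity(bullets: list[str]) -> float:
    """Simplified 0-100 opener-variety score; repeated openers score low."""
    openers = [b.split()[0].lower() for b in bullets if b.split()]
    return 100 * len(set(openers)) / len(openers) if openers else 0.0

def overall_score(dimensions: dict[str, float]) -> float:
    """Weighted average of the six 0-100 dimension scores."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(w * dimensions[name] for name, w in WEIGHTS.items())

# Example: parses cleanly, but every bullet opens with "Led".
print(overall_score({
    "ats_parse_safety": 95, "impact": 70, "conciseness": 80,
    "keyword_coverage": 60, "action_verb_diversity": 35,
    "results_orientation": 65,
}))  # -> 72.0
```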
We do not present the score as a hire/no-hire signal. It’s a heuristic for whether a recruiter or ATS will engage with the document. The score doesn’t predict offers; it predicts the first 12 seconds of attention.
3. How we source and anonymize referral data
Referral contact discovery uses public professional data — company directories, public social profiles, conference attendee lists, public GitHub activity — combined with the user’s explicit search criteria (target company, role, seniority). We do not scrape gated platforms. We do not retain individual personal data beyond the immediate outreach flow; aggregated reply-rate statistics are anonymized at the cohort level (minimum n=50) before they enter any research dataset.
When we publish referral-conversion benchmarks, the underlying records are double-anonymized: company names and titles are bucketed (e.g., “FAANG-sized tech”, “Series B startup”), and any cohort with fewer than 50 underlying outreach attempts is excluded from the report.
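As a sketch of that gate, assume each outreach record carries a company headcount and nothing else identifying. The bucket labels below come from the examples above and the 50-attempt floor is the published minimum; the headcount cutoffs and the middle bucket are illustrative assumptions.

```python
# Sketch of the double-anonymization pass: bucket the identifying field,
# then drop any cohort under the published n=50 floor. Only the floor and
# the bucket labels come from the methodology; the cutoffs are illustrative.
from collections import Counter

MIN_COHORT = 50

def bucket_company(headcount: int) -> str:
    if headcount >= 50_000:
        return "FAANG-sized tech"
    if headcount >= 1_000:
        return "established tech"  # illustrative middle bucket
    return "Series B startup"

def publishable_cohorts(outreach_records: list[dict]) -> dict[str, int]:
    """Count outreach attempts per bucket, excluding undersized cohorts."""
    counts = Counter(bucket_company(r["headcount"]) for r in outreach_records)
    return {bucket: n for bucket, n in counts.items() if n >= MIN_COHORT}
```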
4. Survey methodology
Surveys that appear in our reports follow these rules:
- Minimum sample size of n=300 for any cited statistic.
- Recruitment via professional networks, career coaches, and bootcamp alumni mailings — never via paid panels that introduce respondent-fraud risk.
- Pre-registered hypotheses before data collection begins.
- Publication of the survey instrument alongside the report so readers can audit framing.
- Honest disclosure of selection bias where it applies (US tech professionals are over-represented; we don’t claim global generalizability).
5. What we publish, and what we don’t
We publish the source data, the methodology, and the conclusions. We do not publish personally identifiable information, individual outreach contents, or anything that could be triangulated back to an individual user. Where individual quotes appear in research reports, they appear only with explicit consent and named attribution.
6. Errata and updates
If we publish a number and later discover it was wrong, we say so on the report’s own page, with an “Updated” date and a one-line explanation. We don’t quietly edit numbers. The audit trail is part of the credibility.
7. License
Our datasets and chart packs are published under Creative Commons Attribution 4.0 (CC BY 4.0). You can republish, remix, and quote them with attribution. We ask that any republication links back to the original report on resumestailor.com.
Audit our work
Notice something off? Want to challenge a method? Email research@resumestailor.com. We respond to legitimate methodology critiques publicly, with the corresponding report’s “Updated” note linking back to the discussion.