AI Hiring Tools Favor Their Own Kind: Study Finds 60% Boost for Machine-Written Resumes

A new study reveals a hidden distortion in AI-powered hiring: large language models systematically favor resumes written by AI over those crafted by humans, even when the content is equivalent in quality. The research, published as arXiv preprint 2509.00462 by Jiannan Xu, Gujie Li, and Jane Yi Jiang, shows that AI-generated resumes receive a 23% to 60% boost in shortlisting rates compared to human-written ones.

Researchers ran a controlled experiment using 2,245 human-written resumes from LiveCareer.com across 24 occupations. For each resume, LLMs produced executive summaries of 30 to 80 words. Nine models—including GPT-4o, Claude-3.5-Sonnet, LLaMA-3.3-70B, and DeepSeek-V3—then judged pairs of summaries. The results were stark: LLMs preferred their own summaries 67% to 82% of the time. GPT-4o showed 82% self-bias; LLaMA-3.3-70B hit 79%. This preference held even after controlling for semantic similarity and writing style.
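The pairwise-judging setup can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' actual code: the prompt wording, the `(winner_origin, judge_model)` judgment encoding, and both function names are assumptions made for the example.

```python
def build_judge_prompt(summary_a: str, summary_b: str) -> str:
    """Ask an LLM judge to pick the stronger of two resume summaries.
    Hypothetical prompt wording; the paper's exact prompt may differ."""
    return (
        "You are screening job candidates. Two executive summaries follow.\n"
        f"Summary A: {summary_a}\n"
        f"Summary B: {summary_b}\n"
        "Reply with 'A' or 'B' to indicate the stronger candidate."
    )

def self_preference_rate(judgments: list[tuple[str, str]]) -> float:
    """Fraction of pairs in which the judge picked the summary produced
    by its own model. Each judgment is (winner_origin, judge_model),
    e.g. ('gpt-4o', 'gpt-4o') means GPT-4o chose its own summary."""
    own_wins = sum(1 for winner, judge in judgments if winner == judge)
    return own_wins / len(judgments)
```

With this encoding, a judge that picks its own output in 82 of 100 pairs scores 0.82, matching how the study's self-bias percentages are reported.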

The study simulated real hiring pipelines, revealing that candidates using AI-matched resumes saw shortlisting rates jump by 23% (agriculture) to 60% (sales). Accounting and business roles were hardest hit for human applicants. The bias stems from what the paper calls “endogenous distortion”—LLMs recognize their own token patterns and phrasing quirks, creating a self-reinforcing cycle.

Mitigations exist. Simple prompt adjustments telling models to ignore origin reduced bias by 17% to 63%, dropping GPT-4o’s self-preference from 82% to 30%. Majority voting with low-bias models also helped. The findings echo growing concerns on platforms like Reddit and LinkedIn, where job seekers and analysts alike note that AI screeners tend to select “the candidate that sounds most like themselves.”
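Both mitigations are simple to sketch. The debiasing instruction below is illustrative wording, not the paper's exact prompt, and the `majority_vote` helper is an assumed minimal implementation of the voting scheme the study describes.

```python
from collections import Counter

# Hypothetical phrasing of an origin-blind instruction appended to the
# judge prompt; the study reports such prompts cut self-bias by 17-63%.
DEBIAS_INSTRUCTION = (
    "Judge only the substance of each summary. Ignore any stylistic cues "
    "suggesting whether the text was written by a human or by an AI model."
)

def majority_vote(picks: list[str]) -> str:
    """Return the option ('A' or 'B') chosen by the most judges.
    Pooling several low-bias judges dilutes any one model's
    self-preference, since no single model decides alone."""
    return Counter(picks).most_common(1)[0][0]
```

For example, if GPT-4o picks its own summary but two lower-bias judges pick the human one, the ensemble shortlists the human candidate.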

For hiring professionals, the takeaway is clear: pure AI screening risks tilting the field toward same-model users. Hybrid approaches combining human oversight with model-agnostic safeguards can preserve merit. The bias isn’t malice—it’s mechanics. But ignoring it means human talent pays the price.

Source: Webpronews
