Generative AI such as ChatGPT demonstrably improves cover letters, but does not increase the chances of getting a job. Economists from Tilburg University show that AI mainly polishes generic sections of the letter, while recruiters focus on personal motivation and clarity of writing. If AI is widely adopted, this may even lead to less efficient matching in the labor market.
In two experiments, the researchers examined the effect of AI use on job applications. In the first, some job-seekers were given access to ChatGPT while writing their cover letters, while others were not. Recruiters from major Dutch companies (Philips, PwC, Rabobank, and VodafoneZiggo) then evaluated the letters.

The outcome: with AI, the average quality of cover letters increases. Spelling errors decrease, sentences become more formal, and generic sections, such as the introduction and closing, are better written. But this quality gain does not translate into higher chances of being invited for an interview. The improvements appear mainly in the less crucial, generic parts of the letter. The two dimensions recruiters care about most, the applicant’s personal motivation and the clarity of the writing, hardly improve. Moreover, ChatGPT sometimes makes texts more complex and less readable.

Demonstrably human work

In the second experiment, recruiters proved no better than chance at determining whether a cover letter had been written with AI. Yet transparency makes a difference: when recruiters know that a cover letter was written without AI assistance, they evaluate high-quality, fully human-written letters more positively. A premium thus emerges for high-quality work that is demonstrably human.

This points to an important dynamic: AI can flatten signals of quality. Weaker candidates in particular benefit from AI support, lifting their letters to an average level. Candidates who already write strong letters gain little and may even be disadvantaged if recruiters become suspicious of AI use.

Less efficient matching

To better understand these patterns, the researchers developed a theoretical economic model that simulates how job-seekers are matched to employers when companies cannot perfectly assess applicant quality.
In this model, AI compresses the differences between candidates: application materials start to look more alike, making it harder for employers to tell strong applicants from weak ones. ‘This shows that widespread AI use can lead to poorer matching in the labor market’, says researcher Niccolò Zaccaria. ‘Candidates with lower skills are relatively more likely to end up in positions that are actually too demanding, while stronger candidates are displaced into less suitable positions. Although individual employers or employees may benefit from this, total social welfare declines compared to a situation without AI use.’

Beyond the labor market

The findings are relevant at a time when regulation around AI use is still under development. In the European Union, for example, mandatory transparency about AI use in job applications is being discussed. ‘If such rules are introduced, the labor market may come to resemble the scenario from our second experiment, in which recruiters were explicitly told whether each letter was written with AI’, Zaccaria adds.

The consequences also extend beyond the labor market. Written signals, invisible AI use, and evaluators’ expectations also play a role in university admissions and other selection procedures. ‘Because the study used the freely accessible ChatGPT 3.5, the results likely represent a conservative estimate. As AI tools become more powerful, their influence on job applications and hiring decisions will only increase’, Zaccaria concludes.

More information

The study Labor Market Signals: The Role of Large Language Models was written by four PhD candidates at Tilburg University (Kian Abbas Nejad, Giuseppe Musillo, Till Wicker, and Niccolò Zaccaria) and published in the Journal of Labor Economics, a top journal in the field.
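The paper’s formal model is not reproduced here, but the compression mechanism it describes can be illustrated with a toy simulation (every number and functional form below is an assumption for illustration, not taken from the study): employers rank candidates by a noisy written signal, the best signal gets the most demanding job, and when AI shrinks the differences between candidates’ signals, the evaluation noise starts to dominate and sorting by true ability deteriorates.

```python
import random

# Toy illustration (assumed parameters, not the authors' model):
# candidates have a true ability; recruiters see only a noisy signal.
random.seed(0)
n = 5000
ability = [random.gauss(0, 1) for _ in range(n)]
noise = [random.gauss(0, 0.5) for _ in range(n)]

def match_welfare(signal):
    # Rank candidates by signal; the highest signal is matched to the
    # most demanding job. With complementarity between ability and job
    # difficulty, welfare is highest when the two rankings line up.
    order = sorted(range(n), key=lambda i: signal[i])
    welfare = 0.0
    for rank, i in enumerate(order):
        difficulty = rank / n  # job difficulty spread over [0, 1)
        welfare += ability[i] * difficulty
    return welfare / n

# Without AI: the letter reflects ability one-for-one, plus noise.
w_no_ai = match_welfare([a + e for a, e in zip(ability, noise)])

# With AI (stylized): differences between candidates shrink by a
# factor lam < 1, so letters look more alike and noise dominates.
lam = 0.3
w_ai = match_welfare([lam * a + e for a, e in zip(ability, noise)])

print(f"welfare without AI: {w_no_ai:.3f}")
print(f"welfare with AI:    {w_ai:.3f}")
```

Under these assumed parameters, welfare with the compressed signal comes out clearly lower than without AI; moving `lam` back toward 1 closes the gap, matching the intuition that the harm comes from letters becoming harder to tell apart, not from any letter becoming worse.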