AI Interviewers Have Passed the Quality Test
Research from LSE and other institutions shows AI-conducted interviews now match or exceed human quality, while participants report feeling safer sharing sensitive information.

The question hanging over AI-conducted interviews is simple: are they good enough? The research is now in. Not only do AI interviewers deliver quality comparable to human experts, but in several critical dimensions, they can perform better.
AI Matches Human Experts
The most rigorous test of AI interview quality comes from researchers at the London School of Economics. Jan Geiecke and Xavier Jaravel designed an experiment: they had AI systems conduct interviews, then asked sociology PhD students from Harvard and LSE to evaluate the transcripts blind, without knowing which interviews had been conducted by AI and which by human expert interviewers (Geiecke & Jaravel 2024).
The verdict? The PhD evaluators rated AI interview quality as approximately comparable to an average human expert interviewer. This wasn't a case of AI being "good enough for simple tasks." These were sophisticated qualitative research interviews, and blind expert review rated them on par with human-conducted conversations.
The LSE study showed something more: AI interviewers excel at consistency. Human interviewers drift. They get tired, distracted, or develop patterns over time. The AI system demonstrated perfect consistency across thousands of interviews. Every participant received the same thoughtful, well-structured conversation. When you're collecting information at scale, this uniformity is essential.
People Actually Like Talking to AI
You might assume people tolerate AI interviews as a necessary cost-cutting measure. The data tells a different story.
In the same LSE study, 85% of participants enjoyed the AI interview and preferred it over a standard text survey. Only 15% said they would have preferred a human interviewer (Geiecke & Jaravel 2024). Participants didn't just tolerate the AI. They engaged more. The study found people wrote 142% more words in chatbot conversations compared to static text fields.
Why would anyone prefer talking to a machine? The answer lies in the "safe space" effect. Participants who believed they were talking to an AI interviewer showed significantly lower fear of self-disclosure, less concern about self-presentation, and greater willingness to express emotion.
Some participants explicitly stated the AI was "way better than talking to a person" for personal matters. The nonjudgmental nature of the AI created psychological safety that human presence sometimes undermines. When employees are asked about their employer, or when survey respondents tackle political questions, the absence of human judgment becomes a feature, not a bug.
The Engagement Advantage Is Real and Measurable
The preference translates into measurably richer data. A separate research team compared AI chatbot interviews to traditional online surveys and found participants provided approximately 30 more words per open-ended answer when talking to the chatbot (Xiao et al. 2020). They also disclosed about 1.6 additional pieces of personal information on average.
The qualitative difference mattered too. Participants spent longer with the chatbot, and 43% spontaneously commented they "had a good time" during the conversation. When asked for optional feedback, 95% of comments were positive, with people describing the AI as engaging or "like talking to a friend." The answers were more relevant, specific, and clear than web form responses.
Where AI Excels Over Humans
The evidence points to specific domains where AI interviewers don't just match human capability but exceed it.
First, there's the consistency advantage already mentioned. Same protocol, every time, with zero interviewer drift or fatigue. When you're collecting data across hundreds or thousands of conversations, this uniformity is invaluable for research quality.
Second, AI eliminates unconscious bias from the interview process itself. Human interviewers react to accents, appearance, and demographics in ways they may not even recognize. An AI doesn't. It engages identically with every participant.
Third, scale becomes trivial. The LSE researchers noted that AI can conduct thousands of interviews simultaneously in hours rather than months (Geiecke & Jaravel 2024). This makes collecting certain kinds of information feasible for the first time; a brief sketch after the fourth point illustrates the pattern.
Fourth, modern AI systems can conduct native-quality conversations in 50 or more languages. For global organizations or cross-cultural research, this multilingual capability transforms what's possible.
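To make the consistency and scale points concrete, here is a minimal sketch, not code from the cited studies, of how a single fixed protocol can be run concurrently across many participants. The questions, the `ask_model` coroutine, and the participant IDs are all hypothetical placeholders standing in for whatever LLM API and sampling frame a real system would use.

```python
import asyncio

# The same protocol is used for every participant, which is what produces
# the interviewer consistency described above.
PROTOCOL = [
    "What does a typical workday look like for you?",
    "What part of your work do you find most meaningful, and why?",
    "If you could change one thing about your team, what would it be?",
]

async def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call; the sleep simulates latency."""
    await asyncio.sleep(0.1)
    return f"[model reply to: {prompt[:40]}...]"

async def run_interview(participant_id: str) -> dict:
    # Each participant receives the identical questions in the same order.
    transcript = []
    for question in PROTOCOL:
        answer = await ask_model(f"Participant {participant_id}: {question}")
        transcript.append({"question": question, "answer": answer})
    return {"participant": participant_id, "transcript": transcript}

async def main() -> None:
    # Concurrency is what turns "thousands of interviews" from months into hours.
    participant_ids = [f"p{i:04d}" for i in range(1000)]
    results = await asyncio.gather(*(run_interview(pid) for pid in participant_ids))
    print(f"Completed {len(results)} interviews")

if __name__ == "__main__":
    asyncio.run(main())
```

The design choice worth noting is that consistency falls out of the architecture: because the protocol is data rather than an interviewer's memory, there is nothing to drift.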
Implementation Matters
Not all AI interview systems perform equally. Interface design and conversational quality matter enormously. The successful implementations share common elements: responsive AI that adapts to participant answers, empathetic question framing, natural conversation flow rather than rigid scripts, and interfaces that feel more like messaging than survey forms. These features make the difference between a tool people tolerate and one they prefer, as the sketch below suggests.
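As an illustration of those elements, the following sketch shows an adaptive loop in which each follow-up question is conditioned on the participant's previous answer. It is an assumption about how such a system could be structured, not a description of any particular product: `generate_follow_up` is a hypothetical wrapper around an LLM call, and a production system would add a real model, safety checks, and persistence.

```python
def generate_follow_up(topic: str, last_answer: str) -> str:
    # Hypothetical placeholder for an LLM call framed to be warm and open-ended.
    prompt = (
        "You are a warm, nonjudgmental interviewer. "
        f"The topic is: {topic}. "
        f'The participant just said: "{last_answer}". '
        "Ask one short, open-ended follow-up question."
    )
    return f"[follow-up generated from a {len(prompt)}-character prompt]"

def interview(topic: str, opening_question: str, max_turns: int = 5) -> list[dict]:
    # One question at a time, messaging-style, adapting to each answer rather
    # than walking a rigid script.
    turns = []
    question = opening_question
    for _ in range(max_turns):
        answer = input(f"{question}\n> ")
        turns.append({"question": question, "answer": answer})
        if not answer.strip():  # an empty reply lets the participant end the chat
            break
        question = generate_follow_up(topic, answer)
    return turns

if __name__ == "__main__":
    transcript = interview("your experience at work", "To start, how has your week been?")
    print(f"Collected {len(transcript)} turns")
```

The point of the loop is the conversational framing: the participant sees one short question at a time, and each new question acknowledges what they just said, which is what the studies describe as feeling more like messaging than filling in a form.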
What This Means for Practice
We've all experienced chatbots that frustrate rather than help. But the evidence now shows that well-designed AI interview systems have crossed a critical threshold. They deliver quality comparable to human experts, generate richer data through increased engagement, provide methodological consistency at scale, and do so at a fraction of the cost.
For organizations, this means employee listening, customer research, and other forms of qualitative inquiry can happen at scales and speeds that were impossible before. For participants, it often means a more comfortable environment for sharing honest views, especially on sensitive topics.
The question now is how quickly organizations will adapt to take advantage of a tool that people actually prefer, and that works better than the alternatives.
References
Geiecke, J., & Jaravel, X. (2024). Large Language Models as Conversational Survey Interviewers. CEPR Discussion Paper DP19705. Centre for Economic Policy Research. https://cepr.org/publications/dp19705.
Xiao, Z., Zhou, M. X., Liao, Q. V., Mark, G., Chi, C., Chen, W., & Yang, H. (2020). Tell Me About Yourself: Using an AI-Powered Chatbot to Conduct Conversational Surveys with Open-ended Questions. ACM Transactions on Computer-Human Interaction, 27(3), Article 15. https://doi.org/10.1145/3381804.