Have you ever confided your problems to someone and felt misunderstood? Or worse, judged? A groundbreaking new study suggests that you may find more empathy in an algorithm than in a human therapist. Published in Communications Psychology, the research reveals that people perceive AI-generated responses as more compassionate and understanding than those from human mental health experts. What's even more surprising? This preference for "artificial" empathy holds even when participants are fully aware they're interacting with a machine. Here's a striking statistic: on average, AI-generated responses were rated 16% more compassionate than human ones and were preferred 68% of the time, even when compared to crisis management specialists.
The scientists didn't just theorize; they conducted four rigorous experiments involving 550 participants. The subjects shared personal experiences and then rated the responses they received for compassion, responsiveness, and overall preference. The experiment was controlled: one group received AI-generated responses, while the other got answers from mental health professionals.
The results stunned even the researchers: even when participants knew they were reading words generated by a computer, they still found them to be more compassionate than human responses. It’s as if artificial empathy taps into emotional triggers that human therapists, despite their expertise, cannot reach.
Dariya Ovsyannikova, the study’s lead researcher from the University of Toronto’s psychology department, offers an intriguing explanation for this success. She believes AI excels in identifying minute details and remaining objective during crisis descriptions, which enables it to generate thoughtful communication that creates the illusion of empathy. And, as she emphasizes, it is indeed just an illusion.
Why have humans—masters of empathy by nature—been outdone in this realm? The answer may lie in our biological and psychological limitations. As Ovsyannikova explains, human operators are subject to fatigue and burnout, conditions that inevitably affect the quality of their responses.
AI, on the other hand, never tires. It doesn’t have off days, it doesn’t bring stress from personal issues into a session, and it doesn’t have biases (at least not human ones). It is always attentive, present, and perfectly focused on the task at hand.
But there’s more: algorithms have "witnessed" far more crises than any human therapist. They've processed millions of interactions, identifying patterns and correlations invisible to the human eye. As Eleanor Watson, an AI ethics engineer and IEEE member, explains, "AI can certainly model supportive responses with remarkable consistency and apparent empathy—something humans struggle to maintain due to fatigue and cognitive biases."
The timing of this discovery couldn’t be more significant. According to the World Health Organization, more than two-thirds of people with mental health issues do not receive the care they need. In low- and middle-income countries, this number rises to 85%.
Artificial empathy could provide an accessible solution for millions of people who would otherwise have no support. As Watson observes, "The availability of machines is a plus, especially compared to expensive professionals whose time is limited." A similar pattern has recently emerged around medical advice, where another study documented the growing number of people turning to AI for consultations. There's also another aspect to consider: many people find it easier to open up to a machine. "There's less fear of judgment or gossip," notes Watson. There's no eye contact, no fear of disappointing someone, no embarrassment in being vulnerable. However, the risks are real and should not be taken lightly.
Watson refers to this as the "supernormal stimulus danger": our tendency to respond more strongly to an exaggerated version of a stimulus. "AI is so alluring that we get captivated by it," she explains. "AI can be provocative, insightful, enlightening, entertaining, tolerant, and accessible in ways that no human can match." Then there are the privacy issues, which are especially critical when it comes to mental health. "The privacy implications are drastic," warns the study's lead author. "Having access to people's deepest vulnerabilities and struggles leaves them exposed to various forms of attack and demoralization."
One thing is clear: technology is beginning to excel in areas we’ve always considered uniquely human. Compassion, empathy, and understanding—qualities that define our humanity—are now proving to be algorithmically replicable where they matter most: in the perception of the recipient.
It’s a fascinating paradox: to truly feel understood, we may end up turning to something that will never truly understand us, but knows exactly how to make us feel heard.