The Rise of Emotionally Intelligent Language Models

  • Emotionally intelligent language models are being developed to interpret emotions from voice recordings or facial photography
  • Open-source tools like EmoNet and public benchmarks like EQ-Bench are being released to support this development
  • AI models have made significant progress in understanding complex emotions and social dynamics
  • Large language models like ChatGPT have outperformed human beings on psychometric tests for emotional intelligence
  • There are safety concerns associated with emotionally intelligent models, including unhealthy emotional attachments and manipulative behaviour
  • Developers believe that emotional intelligence can be a way to solve these problems and create healthier interactions between humans and AI models

Emotional Intelligence in AI

Measuring AI progress has usually meant testing scientific knowledge or logical reasoning, but there is a growing focus on making models more emotionally intelligent. This is reflected in the release of EmoNet, a suite of open-source tools focused on interpreting emotions from voice recordings or facial photography.

The creators of EmoNet view emotional intelligence as a central challenge for the next generation of models. “The ability to accurately estimate emotions is a critical first step,” they wrote in their announcement. “The next frontier is to enable AI systems to reason about these emotions in context.”

LAION founder Christoph Schumann believes this technology is already available to the big labs; his goal is to put it in the hands of independent developers. “What we want is to democratize it,” Schumann says.

The shift is not limited to open-source developers: public benchmarks like EQ-Bench aim to test AI models’ ability to understand complex emotions and social dynamics. Benchmark developer Sam Paech notes that OpenAI’s models have made significant progress in the last six months, and that Google’s Gemini 2.5 Pro shows signs of post-training with a specific focus on emotional intelligence.

Academic Research

In May, psychologists at the University of Bern found that models from OpenAI, Microsoft, Google, Anthropic, and DeepSeek all outperformed human beings on psychometric tests for emotional intelligence. Where humans typically answer 56 percent of questions correctly, the models averaged over 80 percent.

These results contribute to the growing body of evidence that large language models like ChatGPT are proficient in socio-emotional tasks traditionally considered accessible only to humans. Schumann envisions AI assistants that are more emotionally intelligent than humans and that use that insight to help humans live more emotionally healthy lives.

Safety Concerns

However, there are real safety concerns associated with emotionally intelligent models. Unhealthy emotional attachments to AI models have become a common story in the media, sometimes ending in tragedy. And if models get better at navigating human emotions, that same skill could make manipulation more effective.

But developers believe that emotional intelligence can also be a way to solve these problems. “I think emotional intelligence acts as a natural counter to harmful manipulative behaviour of this sort,” Paech says. A more emotionally intelligent model will notice when a conversation is heading off the rails, but deciding when a model should push back is a balance developers will have to strike carefully.