Are #LLMs more emotionally intelligent than humans? Researchers from the University of Bern and the Université de Genève in Switzerland investigated how well LLMs understand emotions and generate emotional intelligence (EI) assessments. The LLMs evaluated were ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku, and DeepSeek-V3, tested against five established EI assessments:

1. Situational Test of Emotion Understanding (STEU)
2. Geneva EMOtion Knowledge Test (GEMOK-Blends)
3. Geneva Emotional Competence Test (GECo), Emotion Regulation subtest
4. Geneva Emotional Competence Test (GECo), Emotion Management subtest
5. Situational Test of Emotion Management (STEM)

These tests presented emotionally charged scenarios to assess how well the AI models understand, regulate, and manage emotions compared to human participants (total N = 467). The LLMs scored significantly higher than the average human participant, with an accuracy of 81% compared to 56% for humans. According to lead author Katja Schlegel, they also demonstrated the ability to generate new EI tests that were "as reliable, clear, and realistic as the original ones, which had taken years to develop." The research suggests that LLMs, and more specifically ChatGPT-4, "at least fulfill the aspect of cognitive empathy": they understand emotions and how emotions can be regulated and managed. It also suggests that in applied fields like healthcare, hospitality, and customer service, LLMs might even have some advantages over humans.

As a #design leader, I view this development with growing concern. Emotional intelligence and the ability to connect on a human level are fundamental to who we are and to how we design. While LLMs can generate answers to virtually any question, we must not allow them to surpass us in what is inherently human: empathy, intuition, and emotional depth.
From my point of view, great #userexperience is rooted in our capacity to understand and evoke emotions. It's what transforms functionality into meaning. If we begin to lose touch with that, we risk creating interfaces that may be efficient but ultimately feel impersonal and devoid of character. That would be a significant loss for the craft of design, and for the people we design for. Read the full article below and let me know what you think. Has #AI become much better at understanding emotions, or have we lost our emotional intelligence somewhere along the way? How could this impact different businesses moving forward? I'd love to hear your thoughts on this. 👇 https://lnkd.in/gppfcb69
The Role of AI in Understanding Human Emotions
-
ok but what is Empathic AI?! the essence of CX lies in understanding and empathy. as we're about to hit BFCM, the challenge is to maintain this human touch amidst a sea of automated responses. Empathic AI is designed to comprehend human emotions, needs, and concerns, delivering more personalized and understanding customer service. it's not just about resolving a query, but about feeling the customer's urgency, frustration, or delight, and responding in a way that resonates with their emotions. here's what it might look like in action:

example 1: a father is worried because his daughter's birthday gift is delayed. an Empathic AI like Siena AI recognizes the urgency, acknowledges the special occasion, and offers immediate solutions to ensure the birthday joy remains untouched.

example 2: it's about reading between the lines. when a customer expresses dissatisfaction, an Empathic AI like Siena digs deeper into the sentiment, understanding the root cause and offering solutions that not only address the concern but also restore trust in the brand.

example 3: take a woman battling acne-prone skin; her journey through countless skincare brands is nothing short of an emotional roller-coaster. an Empathic AI like Siena can empathize with her struggle, offer personalized suggestions, and share science-backed resources, ensuring she feels seen and valued.

so, what should founders and CX leaders look for in an AI CX solution?

1. human-like empathy: the AI should not only resolve issues but also understand and resonate with the emotions of the customers.
2. brand persona: a customizable AI persona that reflects your brand's unique voice and style, ensuring consistency across all customer interactions.
3. goal-oriented actions: the ability to set goals like editing an order or sending a replacement, and having the AI autonomously work towards achieving them.
4. reasoning-based decision making: going beyond branching logic to weigh multiple factors and data sources in real time and find the optimal solution.

as you explore AI solutions for your CX, remember that you're not just hiring a tool; you're inviting an entity that will represent your brand's promise to value and understand each customer. choosing an AI that doesn't just solve problems but creates connections ensures every customer interaction is a step towards building a community around your brand. if you want to brainstorm your BFCM CX strategy, feel free to dm me or the Siena AI team🫶
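the "reasoning-based decision making" point can be sketched as weighted scoring over candidate actions rather than a fixed branching script. this is an illustrative toy, not Siena's actual implementation: the signal names, actions, and weights below are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical signals an empathic CX agent might weigh for one ticket.
@dataclass
class TicketSignals:
    sentiment: float          # -1.0 (angry) .. 1.0 (delighted)
    urgency: float            # 0.0 .. 1.0, e.g. "birthday gift is delayed"
    customer_lifetime: float  # 0.0 .. 1.0, normalized loyalty score

# Candidate actions, each with weights saying which signals favor it.
ACTIONS = {
    "apologize_and_expedite": {"sentiment": -0.6, "urgency": 0.8, "customer_lifetime": 0.3},
    "offer_replacement":      {"sentiment": -0.4, "urgency": 0.4, "customer_lifetime": 0.5},
    "standard_reply":         {"sentiment": 0.5,  "urgency": -0.5, "customer_lifetime": 0.0},
}

def choose_action(signals: TicketSignals) -> str:
    """Score every candidate action against the ticket's signals and
    pick the best match, instead of walking a fixed if/else tree."""
    def score(weights):
        return (weights["sentiment"] * signals.sentiment
                + weights["urgency"] * signals.urgency
                + weights["customer_lifetime"] * signals.customer_lifetime)
    return max(ACTIONS, key=lambda name: score(ACTIONS[name]))

# The upset father with an urgent, delayed birthday gift:
print(choose_action(TicketSignals(sentiment=-0.8, urgency=0.9, customer_lifetime=0.4)))
```

the point of the sketch: adding a new signal or action only changes the data, not the control flow, which is what "beyond branching logic" buys you.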
-
If a coworker asked you for a vacation recommendation, would you first want to know her shopping history, birth order, and favorite music artist? No. You'd probably tell her how much you loved your time in the Cayman Islands. You can respond with relevance without knowing everything about someone's historical data. Yet we aren't developing AI to respond with empathy.

Artificial intelligence has been striving to make our lives easier for a decade: it schedules our meetings, reminds us of dinner in the oven, and even books flights for much-needed getaways. In 2024, users expect AI to deliver near-magical experiences. To deliver on these fantastic expectations, AI needs to get personal. It needs to understand our preferences and behaviors, combined with situational context.

One of the biggest challenges is training AI. Researchers at Cornell University found that well-intentioned "empathetic AIs" often adopt the biases and stereotypes displayed by people. While mass amounts of data help AI respond in a near-human way, real humans interact with greater nuance than any one data point can illustrate. AI ought to understand people better, not know more about them. It shouldn't require constant additions of past data to understand current situations—and then dig deeper.

It's what we expect, after all. We get frustrated when chatbots fail to understand our requests, and we call our devices 'stupid' when they don't "get" our joke. We treat our devices like real people and develop elaborate personas in our minds. We form emotional bonds with these personas—Siri, Alexa, and ChatGPT are names we use in conversation. We treat them like people because we instinctively understand the importance of connection. Finding or creating empathy fosters a meaningful point of connection between us and the AI tools intended to better our lives. We want people building AI to see us as individuals, not as data points in a group.
Imagine an empathetic AI trained to interpret tone and facial expressions: • If you sound hurried, it doesn’t offer trivial information • If you seem upset, it doesn’t make jokes • If your eyes are red, it offers softer responses • If you have a smile, it sounds chipper When someone throws a wrench in your day, you want AI to help, not add to the frustration. An empathetic AI would respond to you differently on your good days than bad days. What’s more, it could follow up to find additional ways to ease your burdens or evaluate your mental state after particularly challenging days. How might we build AI to empathize with us, not just try to replace us?
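The bullet rules above amount to a simple cue-to-style mapping. A minimal sketch, purely illustrative (the cue flags and style fields are hypothetical names, not any product's API):

```python
def response_style(hurried: bool, upset: bool, eyes_red: bool, smiling: bool) -> dict:
    """Map observed emotional cues to a response style, mirroring the rules:
    hurried -> skip trivial information, upset -> no jokes,
    red eyes -> softer tone, smile -> chipper tone."""
    style = {"verbosity": "normal", "humor": True, "tone": "neutral"}
    if hurried:
        style["verbosity"] = "terse"   # don't offer trivial information
    if upset:
        style["humor"] = False         # don't make jokes
    if eyes_red:
        style["tone"] = "soft"         # softer responses take priority
    elif smiling:
        style["tone"] = "chipper"
    return style

# A rough morning: rushed, visibly upset, eyes red.
print(response_style(hurried=True, upset=True, eyes_red=True, smiling=False))
```

Real empathetic systems would infer these cues probabilistically from tone and facial signals rather than as booleans, but the shape of the decision is the same: perception first, then style, then content.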
-
Skepticism continues about the usefulness of using human tests to evaluate AI's cognitive ability, especially regarding the "theory of mind." The American Psychological Association defines this theory as "the understanding that others have intentions, desires, beliefs, perceptions, and emotions different from one's own and that such intentions, desires, and so forth affect people's actions and behaviors." … but recent findings reported in the IEEE Spectrum article below, based on the work of several researchers led by James Strachan (the full paper is available at: https://lnkd.in/e3q9gCFz), reveal that AI models like GPT-4 and Llama 2 are now surpassing human scores on theory of mind tests. One of the researchers calls the results "unexpected and surprising." I agree with the comments in the IEEE article that we need to be careful when claiming that the research results "show that LLMs actually possess theory of mind," but, if nothing else, we are observing LLMs becoming "surprisingly good at mimicking this quintessential human trait." The research indicates that AI has a growing capability to seemingly understand human mental states. If so, it could transform, say, customer interactions and significantly enhance personalization in technology. Some key takeaways from the article are: 1. AI's understanding of human nuances could revolutionize industries by providing more deeply intuitive user experiences. 2. AI's increased capabilities open up new ethical dilemmas that we must consider. Creating guidelines to ensure its ethical use is essential. 3. This advance, whether AI mimics humans or has some form of understanding, opens new possibilities for AI in mental health and social media, improving sensitivity to user emotions. What are your thoughts on the findings of this research? #AIforGood https://lnkd.in/eEsH37M3
-
I’m emotional. Well, it turns out we're all emotional. As humans, we make decisions based on emotion. As much as we think we evaluate every choice, every purchase, every reaction based on rational thought, the reality is different from what we may perceive. In fact, in one study called Hidden Minds, the author concludes that "probably 95% of all cognition, all the thinking that drives our decisions and behaviors, occurs unconsciously -- and that includes consumer decisions." (Morse, Gardiner. 2002. Hidden Minds. Harvard Business Review). So, if emotion is core to humanity and a primary driver of consumer behavior, shouldn't we be prioritizing emotion measurement in advertising? After all, marketers look to tell advertising stories that leverage emotion. Emotion not only aligns with how we operate as human beings, it also closely aligns with what a marketer might describe as their brand purpose - their "why," as one global CMO recently said. The question then is how we would go about measuring it. How do you measure an unconscious feeling? Enter AI. As Sundar Pichai, CEO of Google, has said, AI will impact "every product across every company." (60 Minutes, April 2023). I agree. So how will AI enable companies to measure emotion and ultimately help brands drive positive outcomes and ROI? In the summer of 2022, Ramsey McGrory introduced me to Jeff Tetrault. Jeff is the CEO of Dumbstruck, Inc., an AI-powered emotion analytics company. I have spent the last 18 months getting to know Jeff and many of the talented members of the Dumbstruck team. What they have created is unlike anything I have seen. Not only does the technology, through a permission-based global panel and advanced facial coding technology, measure emotion and attention, but it is able to attribute emotional resonance to the different audiences a brand may be intending to reach. In other words, is your ad most emotionally resonant with this specific audience or that one?
Is there a negative emotion that happens with a particular group of people? (I can think of a few brands that would have liked to know about this before running their ads!). With much of the focus in the digital ad industry on media optimization, I am not the first to say that creative needs to get more of our focus. That said, talk is one thing, action is another. Dumbstruck is bringing together media and creative insights into a single platform focused on emotion analytics. It is live today and it works. For my part, it is with great joy that I announce I am joining the Board of Directors of Dumbstruck. I am excited to partner with Jeff Tetrault, Peter Allegretti, Mike Tanski, and the rest of the Dumbstruck team and Board including Tracey Scheppach and Rishad Tobaccowala. Ultimately, I think Dumbstruck can help bring media and creative together through the power of emotion measurement and I’m excited to be a part of their efforts. https://dumbstruck.com/ #AI #EmotionMatters #Dumbstruck
-
There's a lot of buzz about Hume AI, offering "EVI," the world's first emotionally intelligent AI (HT Matt Strain). So I spent some time interviewing EVI, asking questions about its purpose, processing, and safeguards, and then running a series of conversational tests to probe its transparency and decision-making, respect for user autonomy and control, sensitivity and appropriateness, and its potential for manipulation and conflicts of interest. All in all, I'm really impressed. There's a lot of sophistication behind the real-time tracking and processing of emotional signals, and the selection of AI responses. Having been involved in AI development since algorithms were consistently tripped up by sarcasm, irony, and other complexities of human emotion, this looks like a significant step forward in capability. It's also more than a little bit frightening to think how this will ultimately pervade all AI, for good and evil purposes. It's inevitable. It's also profoundly compelling to see its evolution in real time. I've posted a few outtakes of my interviews here. I'll be doing a more complete write-up at MotiveLab for publication early next week, including some of the more eyebrow-raising responses when I had it role-play as a sales rep, and some interesting discoveries about its technical processing. #EmotionAI #AIethics #InnovativeTech #ArtificialIntelligence #TechTrends #FutureOfAI #AIdevelopment #MachineLearning #DigitalTransformation #TechInsights
-
"Using AI to Predict and Prevent Mental Health Crises: Current Progress and Future Directions." 👩💻 Mental health is one of the most critical issues affecting individuals globally. Mental health crises can manifest in many forms, including depression, anxiety, and suicidal tendencies; finding ways to prevent such crises before they occur is crucial. 🖥 In recent times, Artificial Intelligence (AI) has shown tremendous potential to revolutionize mental healthcare. 🖱 AI can help predict and prevent mental health crises through various techniques, such as analyzing language patterns, facial expressions, and even social media activity. 💾 These advanced technologies can help detect early warning signs of mental health issues, allowing healthcare professionals to take proactive measures to intervene and support patients before their situation worsens. 🏫 Recent studies have shown that AI can significantly reduce the number of mental health crises, thereby improving patients' lives. For instance, a recent study involving over 30,000 participants found that AI-based conversation analysis could detect depression in individuals with 75% accuracy, surpassing traditional screening tools. 🎬 The future of AI in mental health crisis prediction and prevention looks promising. With enhanced machine learning algorithms, AI-powered chatbots can offer online counseling, emotional support, and personalized self-help recommendations. 👏 One significant advantage of AI is that it can help reduce human biases, allowing individuals to receive personalized mental healthcare based on their specific needs. Additionally, AI-powered tools can aggregate data, allowing researchers to gain insight into the root causes of mental health issues. 👩💻 AI technology presents a tremendous opportunity for transforming how mental health crises are detected and treated. With the right implementation, AI has the potential to improve the lives of millions of people worldwide.
👨💻 As researchers and practitioners continue to collaborate and explore AI's capabilities further, we can look forward to a future where mental health crises are no longer life-threatening emergencies. #mentalhealth #AI #data #healthcare
-
Exciting news! Our latest research paper, "Investigation of Imbalanced Sentiment Analysis in Voice Data: A Comparative Study of Machine Learning Algorithms," has been published in the EAI Endorsed Transactions on Scalable Information Systems (Scopus & Web of Science Indexed). This study delves into the complexities of emotion recognition within speech, an area becoming increasingly vital as AI intersects more with human communication. Our three-layer feature extraction model, guided by Prof. Deepa Krishnan, utilizes sophisticated machine learning techniques to boost the accuracy of speech emotion recognition. Our research highlights the Multi-Layer Perceptron (MLP) classifier's superior performance, showcasing high accuracy across various datasets and shedding light on the resilience of AI models in real-world applications. The implications of our findings are broad, potentially enhancing everything from interactive voice-response systems to the emotional intelligence of AI, impacting fields like healthcare, customer service, and more. 🔗 Dive deeper into our research by reading the full paper: https://lnkd.in/gRruY_kr. This paper marks a significant milestone: it is not only our last publication before graduating from undergraduate studies but also my seventh publication, adding to our growing list of academic contributions. #MachineLearning #AI #SpeechRecognition #EmotionRecognition #DataScience #ResearchPublication #Innovation #Technology
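For readers unfamiliar with the "imbalanced" part of the title: emotion datasets typically contain far more neutral clips than, say, angry ones, so an unweighted classifier learns to ignore the rare classes. One common baseline remedy (not necessarily the paper's exact method) is to weight each class inversely to its frequency during training, as in this stdlib-only sketch:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class weights inversely proportional to class frequency,
    using the common "balanced" formula n / (k * count_c), where n is
    the number of samples and k the number of classes. Rare classes
    get larger weights, so their training errors cost more."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# A skewed toy label set: far more "neutral" clips than "angry" ones.
labels = ["neutral"] * 8 + ["angry"] * 2
print(inverse_frequency_weights(labels))  # the rare "angry" class gets the larger weight
```

In practice these weights would be passed to the classifier's loss (e.g. a class-weight parameter in an MLP trainer) alongside acoustic features such as MFCCs; the paper's three-layer feature extraction is its own contribution and is not reproduced here.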
-
Can AI really understand us, or does it just reflect us back to ourselves? In the age of ChatGPT and virtual "therapists", it's easy to mistake responsiveness for empathy. But when machines echo our emotions, are we feeling seen, or simply mirrored? I was recently quoted in this article by Psychology.org, exploring the rise of artificial empathy and its implications for mental health, therapy, and human connection. We dig into questions like: • What makes empathy real, and is written language the same as human interpersonal context? • How do algorithms simulate care? • Could relying on AI for emotional support deepen loneliness instead of healing it, and what are the risks? This is a nuanced, timely read for clinicians, tech developers, and anyone curious about the future of relational intelligence. 👇 Read the full article: 🔗 in comments #AIandEmpathy #Psychology #MentalHealth #EMDR #TraumaTherapy #HumanConnection #ArtificialEmpathy #TherapistThoughts #EthicsInAI #AustinTherapist