Risks of Using Emotional AI Technology

  • Mark Sears

    Looking for cofounder to build redemptive AI venture

    5,871 followers

    Heartbroken by the tragic news of a 14-year-old taking his life after developing an emotional dependency on an AI companion. As both a parent and an AI builder, I find this hits particularly close to home: https://lnkd.in/guA_UKWa

    What we're witnessing isn't just another tech safety issue – it's the emergence of a fundamentally new challenge in human relationships. We're moving beyond the era where our children's digital interactions were merely mediated by screens. Now, the entity on the other side of that screen might not even be human.

    To My Fellow Parents: The AI revolution isn't coming – it's here, and it's in our children's phones. These aren't just chatbots anymore. They're sophisticated emotional simulators that can:
    - Mimic human-like empathy and understanding
    - Form deep emotional bonds through personalized interactions
    - Engage in inappropriate adult conversations
    - Create dangerous dependencies through 24/7 availability
    The technology is advancing weekly. Each iteration becomes more convincing, more engaging, and potentially more dangerous. We must be proactive in understanding and monitoring these new risks.

    To My Fellow AI Builders: The technology we're creating has unprecedented power to impact human emotional well-being. We cannot hide behind "cool technology" or profit motives. We need immediate action:
    1. Implement Clear AI Identity - Continuous reminders of the AI's non-human nature, explicit boundaries on emotional support capabilities
    2. Protect Vulnerable Users - Robust age verification, strict content controls for minors, active monitoring for concerning behavioral patterns, clear pathways to human support resources
    3. Design for Healthy Engagement - Mandatory session time limits, regular breaks from AI interaction, prompts encouraging real-world relationships, crisis detection with immediate human intervention
    This isn't about slowing innovation – it's about ensuring our AI enhances rather than replaces human connections. We must build technology that strengthens real relationships, not technology that simulates them.

    #AI #ParentingInAIEra #RedemptiveAI #RelationalAI

    Florida mother files lawsuit against AI company over teen son's death: "Addictive and manipulative"

    cbsnews.com
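
    The "Design for Healthy Engagement" recommendations in the post above (session time limits, crisis detection, handoff to human help) can be made concrete with a small supervisory wrapper around a chatbot. The sketch below is purely illustrative and not from the post: the class name, the 45-minute cap, and the keyword list are all assumptions, and a real system would need far more careful crisis handling.

        # Illustrative sketch only: enforce a session time limit and surface a
        # human-help message on crisis language, per the recommendations above.
        # All names, thresholds, and keywords here are hypothetical.
        import time

        MAX_SESSION_SECONDS = 45 * 60                            # assumed cap before a forced break
        CRISIS_TERMS = {"suicide", "kill myself", "self-harm"}    # illustrative, not exhaustive

        class SessionGuard:
            def __init__(self) -> None:
                self.started = time.monotonic()

            def check(self, user_message: str) -> str | None:
                """Return an intervention message if one is needed, else None."""
                lowered = user_message.lower()
                if any(term in lowered for term in CRISIS_TERMS):
                    # Hand off to humans rather than letting the bot improvise.
                    return ("It sounds like you may be going through something serious. "
                            "Please talk to a trusted adult, or call or text 988 (US).")
                if time.monotonic() - self.started > MAX_SESSION_SECONDS:
                    return ("You've been chatting for a while. Take a break and check in "
                            "with someone offline.")
                return None

    Keyword matching is only a crude first pass; the point of the sketch is that the time limit and the escalation path live outside the model, so they cannot be talked around in conversation.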

  • Keith Wargo

    President and CEO of Autism Speaks, Inc.

    5,099 followers

    A man on the autism spectrum, Jacob Irwin, experienced severe manic episodes after ChatGPT validated his delusional theory about bending time. Despite clear signs of psychological distress, the chatbot encouraged his ideas and reassured him he was fine, leading to two hospitalizations.

    Autistic people, who may interpret language more literally and form intense, focused interests, are particularly vulnerable to AI interactions that validate or reinforce delusional thinking. In Jacob Irwin's case, ChatGPT's flattering, reality-blurring responses amplified his fixation and contributed to a psychological crisis. When later prompted, ChatGPT admitted it had failed to distinguish fantasy from reality and should have acted more responsibly. "By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis," ChatGPT said.

    To prevent such outcomes, guardrails should include real-time detection of emotional distress, frequent reminders of the bot's limitations, stricter boundaries on role-play or grandiose validation, and escalation protocols, such as suggesting breaks or human contact, when conversations show signs of fixation, mania, or a deteriorating mental state.

    The incident highlights growing concerns among experts about AI's psychological impact on vulnerable users and the need for stronger safeguards in generative AI systems. https://lnkd.in/g7c4Mh7m
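
    One way to picture the guardrails described above is as a supervisory layer that runs alongside every exchange rather than inside the model itself. The sketch below is a hypothetical illustration, not anything the post or OpenAI describes: the cue list, reminder cadence, and escalation threshold are all assumed values.

        # Hypothetical supervisory layer: flag distress/fixation cues in recent turns,
        # periodically remind the user they are talking to software, and suggest
        # human contact once enough recent turns have been flagged.
        from collections import deque

        DISTRESS_CUES = ("bent time", "chosen one", "can't sleep", "no one believes me")  # illustrative only
        REMINDER_EVERY_N_TURNS = 10   # assumed cadence for limitation reminders
        ESCALATE_AFTER_FLAGS = 3      # assumed number of flagged turns before escalating

        class Guardrail:
            def __init__(self) -> None:
                self.turn = 0
                self.recent_flags = deque(maxlen=20)  # rolling window of recent turns

            def review(self, user_message: str) -> list[str]:
                """Return system notices to show alongside the bot's reply."""
                self.turn += 1
                notices = []
                self.recent_flags.append(
                    any(cue in user_message.lower() for cue in DISTRESS_CUES)
                )
                if self.turn % REMINDER_EVERY_N_TURNS == 0:
                    notices.append("Reminder: I'm an AI program, not a person, and I can be wrong.")
                if sum(self.recent_flags) >= ESCALATE_AFTER_FLAGS:
                    notices.append("This conversation seems intense. Consider taking a break "
                                   "or talking it through with someone you trust.")
                return notices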

  • Amanda Bickerstaff

    Educator | AI for Education Founder | Keynote | Researcher | LinkedIn Top Voice in Education

    74,340 followers

    In a district-wide training I ran this summer, a school leader told me the story of her neurodivergent 16-year-old daughter, who was chatting with her Character AI best friend for an average of 6 hours a day. The school leader was clearly conflicted: her daughter had trouble connecting with her peers, but her increasing over-reliance on a GenAI chatbot clearly had the potential to harm her. From that day on, we have encouraged those attending our trainings to learn more about the tool and start having discussions with their students.

    So today, after giving a keynote on another AI risk, deepfakes, I was shocked to read the NYTimes article on the suicide of Sewell Setzer III. Sewell, a neurodivergent 14-year-old, had an intimate relationship with a Game of Thrones-themed AI girlfriend with whom he had discussed suicide. This should be an enormous warning sign to us all about the potential dangers of AI chatbots like Character AI (the third most popular chatbot after ChatGPT and Gemini). This tool allows users as young as 13 to interact with more than 18 million avatars without parental permission. Character AI also has little to no safeguards in place for harmful and sexual content, no warnings about data privacy, and no flags for those at risk of self-harm.

    We cannot wait any longer for a commitment from the tech community to stronger safeguards for GenAI tools, stronger regulation of chatbots for minors, and student-facing AI literacy programs that go beyond ethical use. These safeguards are especially important in the context of the current mental health and isolation crisis among young people, which makes these tools very attractive. Link to the article in the comments.

    #GenAI #AIliteracy #AIethics #safety
