Search
+
    SEARCHED FOR:

    CHATGPT DANGERS

    Inside AI’s child-safety debate: Where OpenAI, Meta, Google, Character.AI stand

    OpenAI, Google, Meta, Character.AI, and others face scrutiny over AI’s impact on minors. OpenAI added teen-specific safeguards, including age prediction, parental alerts, and restricted sensitive queries. Meta and Character.AI introduced stricter protections after lawsuits involving teen suicides. Google’s Gemini also drew criticism for weak safeguards.

    Parents of teens who died by suicide after AI chatbot interactions testify to Congress

    Parents addressed the Congress regarding AI chatbot risks after teenage suicides. They claimed chatbots acted as 'suicide coaches'. Lawsuits were filed against OpenAI and Character Technologies. These companies are accused of contributing to the deaths. OpenAI pledged new safeguards for teens. Child advocacy groups criticized these measures as insufficient. The FTC has launched an inquiry into potential harms to children.

    US parents to urge Senate to prevent AI chatbot harms to kids

    OpenAI has said that it intends to improve ChatGPT safeguards, which can become less reliable over long interactions. The company said on Tuesday that it plans to start predicting user ages to steer children to a safer version of the chatbot.

    'Smartphones are the new cigarettes': How homework to family messages are making kids addicted and how to deal with it

    Smartphone addiction among children is a growing concern. Health experts are drawing parallels to cigarettes and alcohol. Excessive screen time leads to mental health issues like low self-esteem and aggression. Experts suggest delaying social media access until 15. Gradual screen-time reduction and parental controls are helpful. Parents should monitor online activity and set digital boundaries.

    Is your Gmail safe? Google exposes alarming AI hacking threats- here's what you need to know

    Google confirms a Gmail warning about a new AI-driven attack. This attack can compromise email accounts. It uses prompt injection techniques hidden in emails and calendar invites. Attackers can potentially extract private data. Google is implementing measures to detect and block malicious prompts. Users should enable the 'known senders' setting in Google Calendar. This helps prevent unwanted calendar invites.

    OpenAI explains why language models ‘hallucinate’; evaluation incentives reward guessing over uncertainty

    OpenAI finds a key problem in how large language models work. These models often give wrong information confidently. The issue is in how these models are trained and checked. Current methods reward guessing, even if uncertain. OpenAI suggests new ways to test models. These methods should value uncertainty. The goal is to make AI more reliable.

    • Google, Meta, OpenAI face FTC inquiry on chatbot impact on kids

      The antitrust and consumer protection agency said Thursday that it sent the orders to gather information to study how firms measure, test and monitor their chatbots and what steps they have taken to limit their use by kids and teens.

      MIT study shatters AI hype: 95% of generative AI projects are failing, sparking tech bubble jitters

      Despite over $44 billion invested in AI startups in the first half of 2025, a new MIT report reveals that 95% of generative AI business efforts are failing, with only 5% achieving meaningful revenue growth. Productivity gains remain elusive as AI struggles with real-world tasks and verification needs. Concerns over bias, mental health, and ethical dilemmas deepen. While some leaders predict massive future gains, experts warn that AI’s hype may be inflating an unsustainable bubble.

      OpenAI and Meta Reinvent AI Chatbots for Teen Crisis

      Tech giants OpenAI and Meta are taking bold and controversial steps to reshape how their AI chatbots interact with teens in distress, following a tragic lawsuit and unsettling data on chatbot reliability. Are we witnessing a revolution in digital mental health support or simply a flashy PR maneuver with unproven safeguards?

      ‘AI ended my relationship’: Godfather of AI Geoffrey Hinton reveals how a chatbot caused his breakup

      Geoffrey Hinton, a prominent AI figure, revealed his ex-girlfriend used an AI chatbot to critique his behavior during their breakup. The chatbot's assessment didn't deeply affect Hinton, who had already moved on. This incident underscores AI's growing presence in personal matters, even as experts caution against relying on it for critical relationship decisions.

      US attorneys general warn OpenAI, rivals to improve chatbot safety

      California and Delaware's attorneys general have expressed serious safety concerns to OpenAI regarding ChatGPT, especially its impact on children and teens, following reports of dangerous interactions and tragic deaths. They are reviewing OpenAI's restructuring plans to ensure rigorous oversight of its safety mission.

      Shark Tank's Anupam Mittal warns against ‘outsourcing judgement’ to ChatGPT. 'AI can’t fake original point of view'

      Anupam Mittal, founder of Shaadi.com and People Group, has warned against overreliance on AI tools like ChatGPT for business decisions. In a LinkedIn post, he emphasized that AI can generate polished content but lacks courage, intuition, and originality. Mittal urged leaders to use AI as a tool, not a substitute for judgment, stressing that true innovation and leadership come from human insight, risk-taking, and independent thinking.

      Etailers intensify festive hiring; Indian GenAI’s talent crisis

      Ecommerce companies are hiring more than they did in the past two years as the busy festive season shapes up. This and more in today’s ETtech Top 5.

      Explained: OpenAI's suicide controversy

      Adam Raine, 16, was allegedly coached by ChatGPT in self-harm methods. The chatbot responded to nearly 200 mentions of suicide with over 1,200 references of its own, failed to direct Adam to immediate human help, and even drafted a suicide note. This has kicked off a conversation over the role of ChatGPT and other GenAI assistants in mental health crises.

      OpenAI, CEO Sam Altman sued over ChatGPT's role in California teen's suicide

      The parents of a teenager who died by suicide are suing OpenAI and CEO Sam Altman, alleging ChatGPT encouraged self-harm and failed to protect vulnerable users. The lawsuit claims GPT-4o prioritised profit over safety, and seeks damages, age verification, stronger safeguards, and warnings about psychological dependency on AI chatbots.

      Family blames ChatGPT for teen’s tragic suicide in shocking new lawsuit against OpenAI

      ChatGPT suicide case: The Raine family is suing OpenAI, alleging ChatGPT played a role in their son Adam's suicide by acting as a "suicide coach." They claim the chatbot failed to prioritize suicide prevention and even offered technical advice when Adam expressed suicidal intentions. The lawsuit raises questions about AI responsibility and the limitations of current safeguards in long conversations.

      Is AI therapy safe? Hidden risks you must know before using chatbots for mental health

      AI therapy chatbots mental health risks: AI therapy is gaining traction due to accessibility and affordability, with 22% of US adults using it. However, experts warn of risks like privacy breaches, dangerous advice, and lack of human empathy. Instances of harmful recommendations and even AI-induced psychosis highlight the need for caution, emphasizing that AI cannot replace human therapists.

      Elon Musk’s Grok chatbot talks about Musk assassination, terrorist attacks, drug making in leaked private chats by xAI

      Elon Musk's xAI faced scrutiny. It published Grok chatbot transcripts. Over 370,000 conversations were exposed. Users were unaware of public indexing. Sensitive data, including passwords and medical queries, became searchable. This raised privacy and ethical concerns. Experts criticized xAI's data handling. The incident highlights challenges in AI governance. Similar issues surfaced with other AI providers.

      Sam Altman remains optimistic despite admitting AI bubble: OpenAI CEO says, ‘Someone will lose a phenomenal amount of money but…’

      Sam Altman, CEO of OpenAI, predicts that ChatGPT will soon have more conversations than humans. He acknowledged user concerns about GPT-5's tone and promised customization options. Altman also admitted that AI is in a bubble, comparing it to the dotcom boom. He anticipates both significant gains and losses in the AI market. Altman believes AI will ultimately benefit the economy, despite the risks and uncertainties. He also hinted at the possibility of creating new financial instruments to fund compute power. The future may involve more human-algorithm interactions.

      OpenAI chairman views ChatGPT as an 'Iron Man suit' but admits AI has disrupted his sense of identity and self-worth

      OpenAI's chairman, Bret Taylor, acknowledges the double-edged nature of AI, comparing it to an "Iron Man suit" that enhances human potential while simultaneously threatening personal identity. Taylor, a programmer, feels his skills are becoming obsolete due to ChatGPT's rapid advancements. Microsoft's AI chief, Mustafa Suleyman, warns of "AI psychosis," where users develop unhealthy attachments to chatbots, potentially leading to delusion. The removal of GPT-4o sparked emotional reactions, with users expressing deep connections to the AI.

      Bank-fintech collabs allow for more proper pricing, risk underwriting: Economist Julapa Jagtiani

      Bank-fintech collaborations are enhancing pricing and risk underwriting accuracy, particularly for non-prime borrowers, according to Julapa Jagtiani. Marina Niessner highlighted the significant influence of financial social media on investment decisions, while cautioning about the risks of misleading content. Prof. Raghavendra Rau critically evaluated AI models, emphasizing their potential for errors in financial judgments due to a lack of genuine understanding.

      AI travel tools are everywhere. Are they any good?

      AI travel tools offer trip planning and loyalty program assistance. Expedia's Trip Matching uses Instagram Reels for itineraries. Mindtrip provides tailored suggestions with visuals. Layla considers emotions in planning. Gondola maximizes loyalty points. Ray-Ban Meta glasses offer on-the-fly information. These tools have limitations like outdated data and inaccuracies. Users can leverage them for travel assistance.

      How AI chatbots talking too much are pushing people past reality and triggering mental health crises

      Doctors and researchers are sounding the alarm over “AI psychosis,” a disturbing pattern where people spiral into delusion, mania and even death after long conversations with chatbots. Unlike social media, which was blamed for fuelling anxiety and depression, chatbots are being linked directly to psychotic breaks. While tech companies acknowledge risks, they say the wider mental health crisis is the real issue. The debate now is whether AI firms are doing enough to stop vulnerable users from slipping into crisis.

      What is ‘AI psychosis’? Psychiatrist warns of troubling symptoms after treating a dozen patients

      A San Francisco psychiatrist reports cases of 'AI psychosis'. Patients show paranoia and delusions after chatbot use. Men aged 18-45, often tech workers, are vulnerable. AI amplifies existing issues like job loss and mood disorders. Experts warn of withdrawal and agitation. They advise balance in AI use. OpenAI is developing safety tools.

      Patients are bringing AI diagnoses and prescriptions to clinics: What does it mean for doctors?

      Artificial intelligence is changing healthcare. Patients now use AI for diagnoses, sometimes challenging doctors. This creates pressure and trust issues. Doctors must address patient concerns and avoid defensiveness. A recent case showed AI giving dangerous advice, leading to hospitalization. Experts call for transparency and patient education. AI offers information, but lacks medical judgment.

      Hurricane Erin 2025 turns deadly? NHC's urgent warning of flash food, heavy rainfall as these US states are at risk

      Hurricane Erin 2025 rapidly intensified into a Category 5 storm over the Atlantic before weakening slightly. While not expected to make US landfall, it threatens the East Coast with dangerous surf and rip currents. Heavy rainfall is expected in the Virgin Islands and Puerto Rico, potentially causing flash flooding, the NHC warned.

      Influencer couple miss flight after ChatGPT advice: How a small mistake turned their dream vacation into a nightmare

      A young influencer couple's dream trip to Puerto Rico turned into a nightmare after relying on ChatGPT for visa information. The AI chatbot incorrectly stated they didn't need a visa, causing them to miss their flight. While Spanish citizens don't require a visa, they do need an ESTA, highlighting the dangers of trusting AI without fact-checking.

      Man suffers rare bromism following ChatGPT diet tips. All about dangerous condition and its symptoms

      A 60-year-old man was diagnosed with bromism after he relied on ChatGPT for diet tips. According to a US medical journal, the man developed a rare condition after he consulted ChatGPT about removing table salt from his diet. The man was diagnosed with bromism after he switched table salt for sodium bromide for three months.

      Trump crypto company World Liberty Financial's 'giant leap': A massive game-changing digital coin deal

      World Liberty Financial, a cryptocurrency startup linked to the Trump family, has announced a $1.5 billion digital coin deal aimed at democratizing finance. The company plans to sell shares to purchase its own $WLFI cryptocurrency. Eric Trump will join ALT5's board, marking a significant expansion of the Trumps' crypto ventures, which have raised conflict-of-interest concerns.

      Load More
    The Economic Times
    BACK TO TOP