    SEARCHED FOR: ONLINE SAFETY FOR TEENS

    Google says Australian law on teen social media use 'extremely difficult' to enforce

    Google has stated that Australia's new law banning social media for under-16s will be difficult to enforce and won't improve child safety online. The tech giant argues that the focus should be on safety tools and parental controls, not restricting access. Australia's law, set to take effect in December, requires platforms to infer age using AI and behavioral data.

    YouTube warns Australia social media ban will not keep children safe

    Australia plans to ban children under sixteen from social media in late 2025. Video streaming giant YouTube argues this ban is well-intentioned but will not enhance online safety. The company states enforcement will be difficult and suggests YouTube should be excluded. Social media firms have called the laws vague and rushed, and companies face hefty fines for non-compliance.

    Children plug into AI chatbots, leaving reality, parents behind

    Conversational AI apps like Chai and Character.AI let users create and interact with virtual personas, while Roblox is a social gaming universe where kids play and interact daily. Together, these interactive platforms and games, experts told ET, represent a rapidly expanding digital ecosystem where young users are spending increasing amounts of time—often to the point of problematic use or dependency.

    Denmark to ban social media for under 15s - which networks will face ban?

    Denmark intends to prohibit social media use for children under fifteen. Prime Minister Mette Frederiksen stated that digital platforms are detrimental to childhood development. This initiative follows Australia's recent ban on social media for under-sixteens. The proposed Danish law aims to protect young people online. Details on enforcement are pending. Greece has also proposed an EU-wide digital age of consent.

    Italian families target Facebook, Instagram and TikTok over child safety

    The case, brought by Italian families, asks a Milan court to require the platforms to adopt stronger age-verification systems for users under 14, in line with Italian law. It also seeks to make Meta-owned Facebook and Instagram, as well as TikTok, remove potentially manipulative algorithms and provide transparent information on the possible harms of overuse.

      ChatGPT rolls out new restrictions after criticism over teen suicides

      OpenAI has introduced new parental controls for ChatGPT, enabling parents to link accounts and manage settings such as quiet hours and content filters for their teens. The updates include stronger safeguards against graphic content and a system that can notify parents of potential self-harm. A new parent resource page also offers guidance on responsible AI use.

      Australia’s under-16 social media ban: All you need to know

      Australia is set to implement a new law on December 10 prohibiting children under 16 from accessing major social media platforms like Facebook, Instagram, and TikTok. The Online Safety Amendment (Social Media Minimum Age) Act 2024 aims to protect children from online harm, requiring platforms to take reasonable steps to enforce the age limit or face penalties for breaches.

      Parents of teens who died by suicide after AI chatbot interactions testify to Congress

      Parents addressed Congress about AI chatbot risks following teenage suicides. They claimed chatbots acted as 'suicide coaches'. Lawsuits were filed against OpenAI and Character Technologies, accusing the companies of contributing to the deaths. OpenAI pledged new safeguards for teens, but child advocacy groups criticized these measures as insufficient. The FTC has launched an inquiry into potential harms to children.

      Roblox announces second 'Teen Council' amid backlash over child protection

      Roblox introduces its second Teen Council. This youth-led group aims to shape the platform's future. The council includes members from the US, Canada, and Mexico. Teens aged 14 to 17 can apply. The council will advise on safety tools and community standards. Roblox seeks to create a safe online space. The initiative follows criticism about child safety.

      OpenAI and Meta Reinvent AI Chatbots for Teen Crisis

      Tech giants OpenAI and Meta are taking bold and controversial steps to reshape how their AI chatbots interact with teens in distress, following a tragic lawsuit and unsettling data on chatbot reliability. Are we witnessing a revolution in digital mental health support or simply a flashy PR maneuver with unproven safeguards?

      Building AI guardrails: No child’s play, this!

      As AI tools spread deeper into classrooms, homes and social media feeds, the stakes are clear: safeguards can't be just optics.

      Online age checks proliferating, but so are concerns they curtail internet freedom

      Age verification measures are increasing online, aiming to protect children from harmful content. These checks, involving IDs and face scans, raise privacy and free speech concerns for all users. While proponents see them as essential for child safety, critics worry about potential misuse and restrictions on access to information and anonymous expression, leading to a fragmented internet.

      North Carolina mother sues Roblox; alleges platform enabled exploitation of 10-year-old daughter

      A North Carolina mother has sued Roblox. She alleges the platform facilitated her daughter's sexual exploitation. The lawsuit claims Roblox failed to protect children from online predators. The predator allegedly gained the girl's trust. He then coerced her into sending explicit images for Robux. The lawsuit challenges Roblox's safety measures. It highlights concerns about age verification and moderation.

      Roblox shutdown rumours: What legal hurdles is the gaming giant facing?

      Roblox, a multi-billion-dollar enterprise, has evolved far beyond being just a game. It functions as a vast platform where millions of creators design, share, and profit from virtual experiences. Any potential shutdown would reverberate widely, carrying significant economic, cultural, and social consequences.

      Roblox under fire for 'not doing enough to protect kids' as Congressman Ro Khanna launches petition

      Congressman Ro Khanna has launched an online petition against Roblox, accusing the platform of insufficient child safety measures. This action follows concerns about inappropriate content, online predators, and a YouTuber's ban for exposing alleged predators. Khanna aims to pressure Roblox into stricter regulations, citing multiple negligence lawsuits involving child exploitation on the platform.

      YouTube to begin testing a new AI-powered age verification system in the US

      The tests will initially affect only a sliver of YouTube's audience in the US, but the system will likely become more pervasive if it guesses viewers' ages as accurately there as it does in other parts of the world.

      Teens are increasingly turning to AI companions, and it could be harming them

      About three in four US teens have used AI companion apps such as Character.ai or Replika, according to a survey of 1,060 US teens aged 13-17. The same study found that one in five teens spent as much or more time with their AI companion as they did with real friends.

      Over 6 lakh Facebook, Instagram accounts gone overnight in Meta's massive purge. The reason will shock you

      Meta has removed approximately 600,000 Instagram and Facebook accounts due to predatory and exploitative behavior targeting minors. This extensive cleanup includes accounts posting sexualized comments and soliciting explicit images. To further protect young users, Meta is introducing new safety features like clearer messaging context, one-tap blocking, and default nudity protection for teen accounts.

      Meta adds more safety controls for teens on Instagram in India

      Similar features are being planned for Facebook and Messenger for rollout later this year. The new features were announced on April 11 at Meta’s Teen Safety Forum. Per this update, teens under 16 will no longer be able to go live or turn off filters that block unwanted or inappropriate images in direct messages unless approved by a parent.

      Meta tightens teen safety rules: Live Streaming now requires parental approval for under-16 users

      Meta is enhancing safety measures for young users by banning Instagram Live for users under 16 without parental permission. The update is part of an expanded rollout of teen accounts on Facebook and Messenger. These changes aim to limit exposure to harmful content, reduce unwanted contact, and provide more parental control over teens’ online experiences.

      Meta expands 'teen accounts' to Facebook, Messenger amid children's online safety regulatory push

      Meta is introducing its "Teen Accounts" feature on Facebook and Messenger, expanding privacy and parental controls to protect young users. This follows growing pressure from lawmakers, including the proposed Kids Online Safety Act (KOSA), to safeguard children from online harm. Meta faces numerous lawsuits over social media's impact on youth.

      Social media design is key to protecting kids online

      Research highlights the complex impact of social media on teens, balancing both benefits and harms. The key lies in platform design—features like algorithms and privacy controls shape user experience. A safety-by-design approach, focusing on improving these features, could protect teens while preserving social media’s positive aspects.

      TikTok expands Family Pairing feature in US with new tools for parental control

      TikTok has introduced new updates to its Family Pairing feature, allowing parents more control over their teens' activities on the platform. The latest tools include a Time Away feature to restrict app access during specific hours and increased visibility into teens' follower lists. The company aims to enhance digital safety and promote healthier screen habits via these measures.

      Will people know if you use the 'dislike' button on Instagram? Meta-owned company clarifies

      Instagram is testing a new dislike button to allow users to privately signal disapproval of comments, with hopes of making comments friendlier. Additionally, Meta has expanded Instagram Teen Accounts to India, enhancing safety measures for users under 16 by providing parental controls and high safety settings to protect teens from cyberbullying and harmful content.

      Meta expands Instagram teen accounts to India with online safety features

      “At Meta, creating a safer and more responsible digital environment is a top priority. With the expansion of Instagram Teen Accounts to India, we are strengthening protections, enhancing content controls, and empowering parents while ensuring a safer experience for teens," said Natasha Jog, director, Public Policy India, Instagram.

      Google makes Bard chatbot available for teens with some guardrails

      Before launching to teens, the tech giant consulted with child safety and development experts to help shape its content policies and design an experience that prioritises safety.

      Take It Down: Meta's new tool to prevent spread of minors' revealing images

      The social network said it financially supported the National Center for Missing and Exploited Children (NCMEC) in the development of 'Take It Down', a platform that helps stop the spread of minors' revealing images online.

    The Economic Times