Kids' Safety Theatre in the Age of AI

Five contradictory signals expose deep dysfunction in the US approach to protecting children online

A crisis of coherence

In recent weeks, five developments in US internet safety came to a head. On paper, at least some should have meant real progress: the implementation of long-overdue COPPA reforms, and the passage of a new law to address non-consensual intimate imagery (the Take It Down Act).

But there was also: the slow death of a sweeping online safety bill (KOSA); a White House freeze on new rulemaking and the gutting of the FTC; and Google’s bombshell announcement that children under 13 will be given access to its Gemini AI chatbot.

Taken together, these events paint a picture of incoherence, contradiction, and political opportunism. The child safety agenda is being pulled in five directions at once—some well-meaning, some cynical, and some deeply dangerous.

COPPA 2.0: progress amid dysfunction

The FTC’s revised COPPA rule is, in many respects, a triumph of public interest policymaking and the culmination of a 5-year labour of love from dedicated staff attorneys inside the Commission. It tightens restrictions on behavioural advertising, introduces new limits on data retention and third-party access to kids’ PII, and closes loopholes around biometric data. It also recognises the need for age-appropriate design by restating its right to prosecute operators that use ‘unfair or deceptive’ practices to exacerbate screen time addiction.

The revision was finally implemented in late April after a months-long delay, presumably caused by Trump’s blanket freeze on new federal regulations. As it stands, the rule is now on the books and will come into effect in April 2026. But who will enforce it? Just weeks earlier, Trump had illegally fired the agency’s two remaining Democratic commissioners, a clear break from precedent that will soon be challenged at the Supreme Court. The agency is now in disarray, and the future of COPPA enforcement is uncertain at best.

A hollowed-out Federal Trade Commission

The firing of Commissioners Alvaro Bedoya and Rebecca Slaughter was not just an institutional insult—it was a direct blow to the FTC’s effectiveness just as Big Tech’s scale and influence are crying out for more enforcement of both antitrust and consumer protection rules. The pair had led much of the agency’s recent work on privacy, algorithmic accountability, and youth protections.

Bedoya and Slaughter’s dissenting opinions on a Republican-led Commission have been, and should be, a critical component of democratic rulemaking and are essential to the transparency of independent agencies. Without minority commissioners (of either party), the FTC’s decision-making becomes opaque, and consumers, markets and courts won’t be able to see how and when it has become utterly politicised.

Their removal—alongside threats to its operations and data from DOGE—leaves the FTC not only politically compromised but intellectually and operationally adrift. As Bedoya warned in recent interviews (Decoder, Tech Policy Press), the FTC cannot be trusted to protect children or consumers if it is being weaponised for political ends. The enforcement of child safety rules now risks becoming either arbitrary or selectively punitive.

KOSA burns out

The Kids Online Safety Act (KOSA) was supposed to become the new cornerstone of the US child safety framework. Passed 91-3 in the Senate last year after three years of rewrites, it would have created a duty of care modelled on the UK’s (and California’s) Age Appropriate Design Code and the EU’s Digital Services Act, requiring platforms to avoid harmful algorithmic recommendations, set privacy-protective defaults for teens, and provide robust parental controls and reporting tools.

Yet despite bipartisan support, the bill stalled in the Republican-led House at year-end, due at least in part to concerns—ironically, from both sides of the aisle for different reasons—about its potential to be misused for censorship by state attorneys general. Tech companies split on it: Snap, Microsoft, and Pinterest supported the bill, while Meta and Google mounted a vigorous lobbying effort against it. A last-minute revision (weirdly authored in part by X, formerly Twitter) stripped AGs of direct enforcement power, but the damage was done. House Republicans declined to vote, choosing instead to delay consideration until after Trump’s inauguration—effectively killing the bill.

Now it would have to be reintroduced in the Senate to be revived, which seems like a tall order in our hyperpartisan world.

KOSA was not without critics, who warned that it would create ‘soft censorship’, leading platforms to err on the side of removal rather than risk penalties, and that overbroad content filters would end up suppressing lawful expression, including LGBTQ+ content. While the final version offered fixes of sorts for some of these concerns, it was too late for the shifting political sands.

The Take It Down Act: from deepfake panic to political weapon

While KOSA languished, Congress fast-tracked the Take It Down Act, a bipartisan response (some would say knee-jerk reaction) to the growing scourge of non-consensual intimate imagery, especially AI-generated deepfakes. The bill criminalises knowing distribution of such material and imposes a 48-hour takedown obligation on online platforms once notified. It applies not just to public social media but potentially also to encrypted messaging and cloud storage (depending on your interpretation).

On paper, this sounds like progress. But the Act is riddled with flaws: no meaningful protection against false or malicious takedown requests, which particularly exposes small platforms without the resources to comply[1]; broad (and vague) scope, potentially including storage providers and nonprofits; and a surprisingly expanded FTC jurisdiction at the very moment its enforcement authority is being challenged and its leadership has been politicised.

Worse still, Trump himself has joked about abusing the law to target (nude?) criticism of him, reinforcing fears that the Act will be misused. As Techdirt noted, "…we’ll get to watch as the Trump administration — which has already announced its plans to abuse it — gets handed a shiny new censorship weapon with ‘totally not for political persecution’ written on the side in extremely small print..."

Say hello to your new little friend

Against this chaotic backdrop, Google announced in an email to parents that it would roll out its Gemini AI chatbot to children under 13 through the Family Link platform. The timing was astonishing: just days after COPPA was finalised, amid growing concern over AI companions’ psychological effects on minors, and just as Google had been declared a monopoly in search and adtech and was facing the likely breakup of its business. What are they thinking over in Mountain View?

Stanford researchers and Common Sense Media have warned in unambiguous terms that AI chatbots should not be used by children because they create emotional dependency, blur the line between human and machine, and are demonstrably poor at detecting or de-escalating mental health crises. UNICEF has warned that “children are highly susceptible to these techniques which, if used for harmful goals, are unethical and undermine children’s freedom of expression.”

Google acknowledges these risks in its own guidance to parents, noting that Gemini "can make mistakes" and that children "may encounter content you don’t want them to see." You think?[2]

So why make this move now? The commercial upside seems to be nearly nil (at least in the near term), the risks are obvious, and the optics are terrible. It looks less like a product launch and more like a normalisation gambit: get AI into the lives of children before lawmakers can mount a response.

So who’s really being protected?

What a start to the year it has been. Regulations are being passed without the institutional capacity to enforce them. Laws designed to protect children are being weaponised for political ends. Tech giants are taking more, not fewer, risks with kids and teens (and we haven’t even touched on Meta’s wholesale retreat from content moderation). And the one serious, systemic proposal to regulate online harm (with all its faults)—KOSA—has been left to die on the vine.

This isn’t child protection. It’s safety theatre. It’s politicians making parents feel better by waving something on cable news. It’s tech giants taking advantage of the fog of chaos to become yet more mercenary. And it leaves children more exposed, more manipulated, and more alone than ever.

Until the US builds a consistent and technically literate regulatory framework—ideally on the back of a federal privacy law—we should treat every new child safety announcement with the same skepticism we reserve for AI chatbots themselves: seemingly friendly, often misleading, and not remotely equipped to care for our kids.

This article first appeared on my Substack. If you like it and would like timely delivery of future posts directly in your inbox, please consider subscribing.


[1] We have already seen how this can play out: the DMCA has been widely abused to suppress content, at huge cost to internet platforms.

[2] To be clear, generative AI tools have enormous potential to benefit children, as very effective tutors, or as story-telling companions, etc. But today’s chatbots are not safe enough (or correct enough) to be left alone with them, and—given that most adults don’t yet know how to use them without being bamboozled—even supervised usage is iffy at best.

