Over 200 child advocacy organisations, experts, and educators, led by Fairplay and including the American Federation of Teachers and researcher Jonathan Haidt, have sent an open letter to YouTube CEO Neal Mohan and Google CEO Sundar Pichai. They are demanding a ban on AI-generated "slop" from YouTube Kids, clear labelling of all AI content platform-wide, algorithmic restrictions for users under 18, and an end to YouTube's investment in AI-driven children's content.
Top channels pumping out this algorithmically optimised, plotless content, designed purely to hijack young attention, reportedly generate over $4.25 million in annual ad revenue. The coalition argues this low-effort material distorts children's sense of reality, crowds out learning, and displaces healthy development.
Short-form video platforms exploit fundamental stimulus-response loops that deliver rapid dopamine hits through variable rewards, creating compulsive engagement similar to gambling. For developing brains, the risks are greater. Emerging research links heavy short-form consumption to changes in areas involved in impulse control, emotional regulation, and sustained attention. This is particularly concerning during childhood and adolescence when the prefrontal cortex and other regions remain highly plastic. AI-generated slop is engineered to maximise these loops, built around relentless novelty with no narrative structure and no educational purpose. The $4.25 million in annual revenue tells you exactly who benefits from keeping it running.
The ID Verification Problem
Restricting children's access to algorithmic feeds has become a key justification for mandatory age verification across platforms. On paper, the mechanism is simple: verify every user's age against a government-issued ID, filter out minors, and the problem is solved.
In practice, age verification systems are surveillance systems. They require platforms and their vendors to collect and store sensitive government IDs or biometric data. The breach record is already damning. In 2025, a Discord vendor hack exposed government ID photos of around 70,000 users. Identity verification firm AU10TIX left credentials exposed online for over a year, compromising names, dates of birth, nationalities, ID numbers, and document images. Once users surrender immutable personal data, they lose meaningful control over it.
Large centralised stores of identity information become irresistible targets for hackers and government requests alike. Every platform collecting IDs turns into a honeypot. Every third-party vendor adds another point of failure. Forcing companies to gather this data at scale encourages poor security practices and creates persistent identity databases with unclear retention policies and cross-jurisdictional access risks.
The coalition's demands point toward a less invasive path. Algorithmic restrictions for under-18 users and mandatory AI labelling address the content problem directly without requiring every adult on the platform to prove who they are to a third-party vendor.
YouTube hosting AI-generated content engineered to exploit developing brains is worth pushing back on hard. But the emerging legislative response often trades one set of developmental harms for another: widespread identity surveillance built on the back of moral panic. The breach history shows exactly how that ends.
Blackout VPN exists because privacy is a right. Your first name is too much information for us.
FAQ
What is the open letter demanding?
Over 200 organisations are calling on YouTube to ban AI-generated content from YouTube Kids, label all AI content platform-wide, restrict algorithmic recommendations for under-18 users, and stop investing in AI-driven children's content.
How much do AI slop channels earn on YouTube Kids?
Top channels producing algorithmically optimised, plotless children's content reportedly generate over $4.25 million in annual ad revenue.
What are the cognitive risks of short-form content for children?
Emerging research links heavy short-form video consumption in children and adolescents to changes in impulse control, emotional regulation, and sustained attention. The prefrontal cortex remains highly plastic during this period, making developing brains more vulnerable to dopamine-driven engagement loops.
What went wrong with AU10TIX?
Identity verification firm AU10TIX left login credentials exposed online for over a year, giving access to users' names, dates of birth, nationalities, ID numbers, and images of their identity documents.
What is the alternative to ID verification for protecting children online?
Algorithmic restrictions targeting under-18 users and mandatory AI content labelling address the content problem without requiring platforms to collect and store government-issued identity documents from every adult user.
