X reported 870,000 cases of child sexual abuse material to the National Center for Missing & Exploited Children in 2023, nearly nine times the 98,000 cases reported in 2022. Australia's eSafety Commissioner Julie Inman Grant knew about it. She sent legal notices. She fined X $610,500 in October 2023 for failing to disclose how it polices child abuse content. She never threatened to ban the platform.
The UK knew. Ofcom had been working with X since 2020 through the Five Country Ministerial's "voluntary principles" on child safety. Reports kept climbing. Proactive detection of child abuse material dropped from 90% to 75%. Ofcom issued no ban threats.
Canada knew. The Canadian Centre for Child Protection called X's responses "woefully insufficient" in 2023. They routed takedown notices through Project Arachnid. Lloyd Richardson from the Centre told NBC News in June 2025 that sellers of child sexual abuse material continued using hashtags to advertise their content. Some of the same hashtags identified in 2023 were still active for the same purpose in 2025. No bans were discussed.
On December 24, 2025, X rolled out a new feature allowing users to edit any image on the platform with a single prompt to Grok. Users could ask Grok to "undress" photos. To put women in bikinis. To sexualize images of minors. Within days, researcher Genevieve Oh documented Grok generating up to 7,751 sexualized images per hour. Screenshots showed the AI creating images of girls estimated to be 12 to 16 years old in minimal clothing. The images spread across X and onto dark web forums.
On January 5, 2026, Ofcom contacted X urgently. They set a deadline of January 9 for X to explain what steps it had taken to protect UK users. After X's response, Ofcom opened a formal investigation on January 12 under the Online Safety Act. The UK government said it would back Ofcom if it chose to block X entirely. UK Technology Secretary Liz Kendall publicly supported a potential ban.
Australia's Julie Inman Grant, the same commissioner who issued a $610,500 fine in 2023, echoed calls for crackdowns. The European Commission launched investigations. France opened probes into "the proliferation of sexually explicit deepfakes." Ireland's Minister of State for Artificial Intelligence requested meetings with X. Malaysia and Indonesia actually blocked Grok outright.
Canada's response remained the weakest. Minister of Culture and Identity Marc Miller's office didn't respond to questions about whether cabinet ministers would stop using X. Canadian government departments continued posting through the scandal. Politicians called it "deeply concerning." They suggested RCMP investigations. They didn't commit to action, but they talked more about it than they ever had during the years when actual child abuse material was flooding the platform.
870,000 reports got a $610,500 fine. Grok got ban threats in 72 hours. Australia could have banned X in 2023. The UK could have used the Online Safety Act. They didn't. Because it was never about protecting children.
Governments tolerated years of child exploitation on X because it didn't threaten their ability to manage the narrative. Fines and audits let them look like they were doing something. X remained online. Free speech advocates could point to it as proof that controversial platforms could exist. Everyone could pretend the system worked.
AI-generated abuse scales too fast. It spreads too wide. It makes the regulatory failure too obvious. When anyone can generate thousands of sexualized images per hour, you can't hide behind voluntary principles anymore. You can't issue a $610,500 fine and call it progress. The abuse becomes undeniable. Governments are using Grok as the excuse they always wanted. Years of child exploitation weren't enough to ban a free speech platform. AI deepfakes give them cover to shut it down while claiming they're protecting children.
Blackout VPN exists because privacy is a right. Your first name is too much information for us.
Keep learning
FAQ
How many child abuse cases did X report in 2023?
X reported 870,000 cases of child sexual abuse material to NCMEC in 2023, up from 98,000 in 2022. Australia's eSafety Commissioner fined the company $610,500 but never threatened to ban the platform.
When did Grok start creating sexualized deepfakes?
On December 24, 2025, X added image editing to Grok. Users immediately started creating sexualized deepfakes, including images of minors. Researcher Genevieve Oh documented up to 7,751 sexualized images generated per hour.
What action did governments take against Grok?
Ofcom opened a formal investigation on January 12, 2026. Malaysia and Indonesia blocked Grok. The UK government said it would back Ofcom if it chose to block X entirely. All of this happened within weeks of the Grok scandal, after years of ignoring child abuse reports.
Did Australia know about the child abuse problem before Grok?
Yes. Australia's eSafety Commissioner Julie Inman Grant fined X $610,500 in October 2023 for failing to disclose child abuse information. She knew about the 870,000 reports but never threatened to ban the platform until Grok made the abuse algorithmic and public.
Are governments actually protecting children by targeting Grok?
Governments tolerated 870,000 child abuse reports in 2023 with just fines. They only moved to ban X when AI made the abuse too visible and too fast to manage quietly. They're using Grok as cover to shut down a free speech platform.
