Social Media Bans for Minors Gain Global Momentum, Testing Platform Compliance
A growing wave of legislation targeting underage social media use is sweeping across continents, forcing tech giants to rethink age verification and parental controls. Australia led the charge last December, banning under-16s from platforms like TikTok, Instagram, and YouTube, with fines up to 49.5 million Australian dollars for noncompliance. Yet a Molly Rose Foundation survey found 61% of 12-to-15-year-olds still access these sites, often bypassing age checks with VPNs or fake IDs.
Turkey’s parliament just passed a bill barring under-15s from social networks, requiring platforms to add age verification and parental controls. Greece plans a similar cutoff by January 2027, with Prime Minister Kyriakos Mitsotakis blaming addictive platform designs for rising anxiety and sleep problems among teens. France approved an under-15 ban in January, which now awaits Senate action, while Spain targets under-16s with mandatory age checks and executive accountability.
The UK rejected an under-16 ban for the third time this week, though consultations continue. Teens at a BBC debate expressed mixed views: some see losing four daily TikTok hours as annoying but not life-changing, while others call autoplay addictive. Pilots testing curfews and time limits in 300 homes are underway, and education minister Olivia Bailey insists action is coming.
Asia is piling on. Indonesia deactivated under-16 accounts on high-risk apps like Roblox from March. Malaysia sets June enforcement for its under-16 block under the Online Safety Act, backed by eKYC identity checks. Brazil now requires guardians to link under-16 accounts and has banned infinite-scroll feeds for minors. Even China’s “minor mode” caps screen time by age.
Europe fragments further. Austria drafts an under-14 ban by June. Denmark eyes mid-year under-15 rules with a “digital evidence” app. Portugal mandates consent for 13-to-16-year-olds, with fines up to 2% of global revenue. A CEPA report counts 19 nations now restricting access, from outright bans to verification mandates.
Why now? Mental health crises, cyberbullying, and predator concerns drive the action. Studies link platforms to teen suicides and body image issues, and U.S. juries found Meta and YouTube liable for negligence in cases last year. Australia’s eSafety Commissioner has deactivated 4.5 million accounts yet admits there is no clear evidence of harm reduction. Critics argue blanket bans push kids to darker corners of the web, and that age checks—scanning faces or IDs—create data leak risks.
Tech platforms tout self-regulation: private accounts by default, better algorithms. But regulators remain skeptical. Australia’s law spares WhatsApp and YouTube Kids, creating loopholes; enforcement bites with hefty fines, yet circumvention thrives. One thing is clear: no easy fix exists. Platforms are designed to hook, kids are wired to explore, and states are stepping in with a heavy hand. The bans roll out, and the debates rage on.
Source: Webpronews