What's happened
Since December 10, 2025, Australia has enforced a landmark ban preventing under-16s from holding accounts on major social media platforms. Over 4.7 million accounts have been deactivated or restricted across 10 platforms, including Facebook, TikTok, and YouTube. TikTok is also rolling out AI age-detection tech in Europe to identify under-13 users. The ban has sparked debate on child safety, privacy, and platform compliance.
What's behind the headline?
Enforcement and Compliance
The Australian social media ban represents a pioneering regulatory approach to child safety online, compelling major platforms to actively remove underage accounts or face substantial fines. Early data showing 4.7 million accounts deactivated indicates significant platform compliance, though the true impact on youth social media use remains uncertain.
Challenges and Loopholes
Despite the scale of account removals, critics highlight that age-verification technologies are imperfect and can be circumvented through parental assistance or alternative platforms not covered by the ban, such as Yope and Lemon8. This raises questions about the ban's long-term effectiveness and whether it merely displaces underage users rather than protecting them.
Global Context and Regulatory Trends
TikTok's rollout of AI-powered age detection in Europe, developed in collaboration with Irish regulators, signals increasing international pressure on platforms to enforce age restrictions. The UK and Denmark are also considering similar bans, reflecting a broader trend toward stricter youth protections online.
Mental Health and Social Impact
While the ban aims to shield children from harmful content linked to depression and anxiety, some youth advocates warn it may isolate vulnerable teens who rely on online communities for support. The balance between protection and access remains a contentious issue.
Outlook
The Australian ban will likely influence global regulatory frameworks, but its success depends on improving age verification, expanding enforcement to smaller platforms, and addressing unintended social consequences. Ongoing monitoring and research will be critical to assess its impact on youth wellbeing and digital behavior.
What the papers say
The Australian government's announcement, reported by The Guardian's Josh Taylor and AP News, revealed that over 4.7 million under-16 accounts were deactivated or restricted shortly after the ban's December 10 implementation. Communications Minister Anika Wells hailed this as a "huge achievement," while Prime Minister Anthony Albanese expressed cautious optimism, noting that "change doesn't happen overnight." Meta disclosed removing nearly 550,000 accounts across Facebook, Instagram, and Threads but criticized the ban as a "blanket" approach that risks pushing teens to less regulated platforms like Yope and Lemon8, as detailed by the NY Post's Ariel Zilber.
TikTok's new AI age-detection system, covered by The Guardian's Mark Sweney and the NY Post's Ben Cost, is being rolled out across Europe to identify under-13 users by analyzing profile data and behavior patterns. TikTok acknowledged no age-verification system is foolproof and allows appeals via government ID or selfies. This move aligns with growing European regulatory scrutiny and follows a yearlong pilot.
Critics, including Digital Rights Watch cited by SBS, argue the ban may harm vulnerable teens by cutting off online support networks, while some opposition politicians in Australia suggest the ban is easily circumvented. Reuters and The Independent report that while platforms comply, the long-term effects on youth social media use and mental health remain to be seen, with studies ongoing.
Together, these sources illustrate a complex picture of ambitious regulation, technological innovation, and ongoing debate about protecting children online without unintended harm.
How we got here
Australia enacted the Online Safety Amendment (Social Media Minimum Age) Act 2024, effective December 2025, to protect children from harmful online content by banning under-16s from major social media platforms. Platforms face fines up to A$49.5 million for non-compliance. The law follows growing global concern over social media's impact on youth mental health and privacy.
Go deeper
- How effective is Australia's social media ban for under-16s?
- What technology does TikTok use for age verification in Europe?
- Are other countries considering similar social media age restrictions?
Common questions
- Why did Australia ban social media for under-16s?
Australia's recent move to ban social media accounts for under-16s has sparked widespread interest. The law aims to protect young people's mental health by restricting their access to platforms such as TikTok, YouTube, and Meta's Facebook and Instagram. But what prompted this change, and what does it mean for the future of online safety? Below, we explore the reasons behind the ban, how platforms are responding, and what it could mean for other countries considering similar laws.
- How are countries regulating social media for minors?
As concerns about children's safety online grow, many countries are implementing new laws to regulate social media use among minors. From bans and age verification to platform restrictions, these measures aim to protect young users while sparking debate about effectiveness and digital rights. Curious how different nations are tackling this issue? Below are some key questions and answers about the latest regulations shaping social media for minors worldwide.
- Are social media bans for kids actually protecting them?
With recent moves like Australia's ban on under-16s from major social media platforms, many wonder if these restrictions truly keep children safe online. While the intention is to protect youth from harmful content and privacy risks, questions remain about their effectiveness and potential unintended consequences. Below, we explore the key debates and concerns surrounding social media bans for minors.
- Will AI age-detection tech become standard worldwide?
As technology advances, AI age-detection systems are increasingly being adopted by social media platforms to protect minors online. But will these tools become a global standard, and what does that mean for privacy and safety? Below, we explore the future of AI age verification, its implications, and how different regions are approaching this technology.
More on these topics
- Facebook, Inc. is an American social media conglomerate corporation based in Menlo Park, California. It was founded by Mark Zuckerberg along with his fellow Harvard College roommates and students Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes.
- Anthony Norman Albanese (born 2 March 1963) is an Australian politician who has served as the 31st prime minister of Australia since 2022. He has been the leader of the Labor Party since 2019 and the member of parliament (MP) for the New South Wales division of Grayndler.
- Anika Shay Wells (born 11 August 1985) is an Australian politician. She is currently Minister for Communications and Minister for Sport in the Albanese government, having previously served as Minister for Aged Care from 2022 to 2025. She is a member of the Australian Labor Party.
- Australia, officially known as the Commonwealth of Australia, is a sovereign country comprising the mainland of the Australian continent, the island of Tasmania, and numerous smaller islands.
- TikTok, known in China as Douyin, is a video-sharing social networking service owned by ByteDance, a Beijing-based Internet technology company founded in 2012 by Zhang Yiming.
- The Australian Government is the federal government of Australia, a parliamentary constitutional monarchy, and the highest of the country's levels of government.