Australia on 10 December 2025 became the first country to enforce a nationwide ban preventing children under the age of 16 from holding accounts on major social media platforms, a move that has triggered intense domestic debate and global attention. The measure, enacted under the Online Safety Amendment (Social Media Minimum Age) Act 2024, applies to platforms such as TikTok, Instagram, Facebook, Snapchat, X and YouTube. Companies that fail to comply face fines of up to AUD 49.5 million.
The federal government says the law responds to mounting evidence that social media is harming children’s mental health. “This is about giving kids the space to grow up without relentless algorithmic pressure,” Prime Minister Anthony Albanese said during the final parliamentary debate in November 2024, when the legislation passed. Existing under-16 accounts are now being deactivated, while new registrations are blocked nationwide.
Rising concerns over children’s mental health
Government data and independent studies show that social media use among Australian children surged after the COVID-19 pandemic. According to figures cited by the eSafety Commissioner, nearly 80 percent of children aged 8–12 and 95 percent of those aged 13–15 were already using at least one social media service despite long-standing platform age limits. Child psychologists have linked heavy use to anxiety, sleep disruption and cyberbullying.
Parents’ groups have been among the strongest supporters of the ban. “We’ve watched our kids become glued to screens, constantly comparing themselves to others,” said Melbourne parent Sarah Connolly. “This law gives families some breathing room.” A national poll conducted in early December found that roughly two-thirds of Australians support the restriction, with approval highest among parents of younger children.
Enforcement challenges and industry criticism
Technology companies and civil liberties groups, however, argue that enforcement will be difficult and potentially flawed. The law does not require government-issued ID checks; instead, platforms may rely on facial-age estimation, video selfies or similar tools to infer a user’s age. Critics warn these systems are imperfect. “Age-verification technology is not foolproof and risks both errors and privacy concerns,” said digital rights advocate Dan Ilic.
Early cases appear to support those concerns. In one widely reported incident days before the ban took full effect, a 15-year-old reportedly passed a visual age check and retained access to a social media account. Industry representatives also warn that children may simply migrate to unregulated or overseas platforms, or access content without logging in.
The social impact of the ban is also being questioned, particularly in remote and regional communities. Teenagers in rural Australia have told local media that social platforms are often their main link to peers and support networks. “Without social media, it feels like we’re being cut off completely,” said a 14-year-old from the Northern Territory.
Researchers are now moving to assess the real-world consequences of the policy. The Murdoch Children’s Research Institute, together with Deakin University, launched the Connected Minds Study in December 2025 to track changes in screen time, mental health and social behavior among adolescents. Initial findings are expected in mid-2026 and could influence whether the law is amended or expanded.
Internationally, Australia’s decision is being closely watched. Governments in Europe and North America have cited the Australian model in recent discussions on child online safety. Whether the ban becomes a global template or a cautionary tale will depend on what happens next — and on whether Australia can balance child protection with digital rights in an increasingly connected world.