Revolution or red tape? Australia blocks under-16s from social media
Legislation aims to delay exposure to online risks but raises enforcement challenges.
On October 17, 2006, Megan Taylor Meier took her own life, hanging herself just three weeks before her 14th birthday. As emerged months later, she had endured severe online abuse and bullying on MySpace, then the world's largest social network. Hers was the first documented case of a suicide linked to bullying on social media, and it sparked widespread concern about whether such platforms were safe for children and teenagers. In the 18 years since, those concerns have only intensified. MySpace has faded into obscurity, but Facebook, YouTube, Instagram, and TikTok have taken its place. All of them have been, or still are, immensely popular among teenagers and young adults, and together with instant messaging services such as WhatsApp they often form the backbone of social interaction in the digital age.
At the same time, the risks that social media poses to adolescents have become increasingly evident. They range from social and psychological harms, such as depression, eating disorders, and anxiety, to privacy breaches and exposure to online predators. Physiological issues, including sleep disorders, difficulty concentrating, and exacerbation of attention deficit hyperactivity disorder, are also notable concerns. These dangers are not unique to children, but minors are far more exposed to them than adults are. Yet the platforms themselves have done little to address the risks; reports have even surfaced of companies deliberately concealing data about the dangers their services pose to adolescents.
Governments worldwide have so far taken only limited action against these platforms. Last week, however, Australia made an unprecedented move for a liberal democracy, passing a law that prohibits social media companies from allowing users under 16 to access their platforms. The law, which takes effect in 12 months, aims to delay children's exposure to the potential harms of social media until they are more mature. Yet it leaves many questions unresolved, foremost among them: can this ban actually be enforced?
Currently, most social media platforms officially allow users to register only if they are 13 or older. This requirement stems from a 1998 U.S. law, the Children's Online Privacy Protection Act (COPPA), which mandates parental consent for the collection or use of personal information from children under 13. The restriction is widely circumvented, however: many children sign up, often with their parents' consent or even encouragement, simply by entering a false birth year during registration.
That U.S. law does not outright ban children under 13 from using these platforms; it restricts companies from collecting data on them. And even when companies have been caught violating its rules, the penalties have been modest relative to their revenues: Epic Games was fined $275 million in 2022, while TikTok received a mere $5.7 million fine in 2019.
The Australian law takes a more direct approach, prohibiting social networks from allowing users under 16 to hold accounts, and it places the responsibility for compliance squarely on the companies. It remains unclear, however, exactly which platforms the law will cover. According to Australian Prime Minister Anthony Albanese, services such as Facebook, Instagram, Snapchat, TikTok, and X (formerly Twitter) must comply, while YouTube, widely used for educational purposes, and WhatsApp, an essential communication tool, are exempt. These exceptions highlight the law's inconsistencies: problematic content and real dangers exist on YouTube and WhatsApp as well.
Another challenge is enforcing the ban. Companies are prohibited from requiring government-issued IDs for age verification, as this would risk excessive data collection and compromise user privacy. Biometric age verification is a leading alternative, but it comes with its own issues, including inaccuracy, potential privacy violations, and logistical hurdles, such as how frequently verification must occur.
Julie Inman Grant, Australia’s eSafety Commissioner, remains optimistic. “If they can target you for advertisers, they can use the same technology and knowledge to verify the age of a child,” she told The New York Times. However, advertising algorithms are not fully reliable and depend on collecting user data—something the law expressly forbids for individuals under 18.
The law's sanctions are another potential weak point. Companies face fines of up to A$49.5 million (US$32.3 million) for failing to take "reasonable steps" to prevent under-16s from accessing their platforms. For tech giants like Meta and ByteDance, such penalties are negligible: ByteDance reported annual revenue of roughly $100 billion last year, and Meta reported revenue of $40.6 billion in the most recent quarter alone. These companies could easily absorb such fines without changing their practices.
What’s more, social media platforms have strong incentives to attract young users. A well-known marketing principle is “catch them when they’re young.” Convincing a 14-year-old to engage with your brand can create a lifelong customer. For social platforms, younger users mean more data, which translates to more targeted—and profitable—advertising.
Paradoxically, if the law succeeds in barring under-16s from social media, companies might reduce their efforts to create safer environments, knowing younger users are excluded. This could lead to platforms becoming even less safe for older users. Social media poses dangers to all age groups—bullying, emotional harm, radicalization, and incitement affect everyone. By the time 16-year-olds are allowed to join, platforms could be wilder and more dangerous than before.
While banning children from social media seems like a straightforward solution, it is no substitute for deeper, more effective measures. Stricter content moderation and widespread digital literacy education from an early age are essential steps to truly protect young people online.