Australia’s online watchdog has accused the world’s largest social media companies of not adequately implementing the country’s ban on under-16s using their platforms, despite legislation that came into force in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about adherence by Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including permitting prohibited users to make repeated attempts at age verification and inadequate safeguards to stop new account creation. In its initial compliance assessment since the prohibition came into force, the regulator identified multiple shortcomings and has now shifted from observation to active enforcement, warning that platforms must demonstrate they have implemented “appropriate systems and processes” to prevent children under 16 from accessing their services.
Compliance Failures Revealed in First Large-Scale Review
Australia’s eSafety Commissioner has documented a concerning pattern of non-compliance among the world’s largest social media platforms in her first formal review since the ban came into effect on 10 December. The report shows that Meta, Snap, TikTok and YouTube have collectively failed to establish appropriate safeguards to prevent minors from using their services. Julie Inman Grant expressed particular concern about systemic weaknesses in age verification processes, noting that some platforms have allowed children who originally declared themselves to be under 16 to later assert they were older, effectively circumventing the law’s intent.
The findings mark a significant escalation in the regulatory response, with the eSafety Commissioner transitioning from monitoring to direct enforcement. The regulator has stressed that it is not enough to show that some children still maintain accounts; platforms must instead furnish substantive proof that they have established robust systems and processes designed to prevent under-16s from creating accounts in the first place. This shift demonstrates the government’s commitment to holding tech giants responsible, with potential penalties looming for companies that fail to meet the legal requirements. Among the failures the review identified:
- Permitting previously banned users to re-verify their age and restore account access
- Enabling multiple tries at the same age assurance method with no repercussions
- Inadequate systems to prevent new under-16 accounts from being established
- Insufficient reporting tools for parents and the general public
- Shortage of clear information about enforcement efforts and account removals
The Scope of the Challenge
The considerable scale of social media activity amongst young Australians highlights the compliance challenge confronting both the authorities and the platforms in question. With millions of accounts already removed or restricted since the ban’s implementation, the figures provide evidence of widespread initial non-compliance. The eSafety Commissioner’s conclusions suggest that the technical and procedural obstacles to enforcing age restrictions have proven far more complex than anticipated, with platforms having difficulty differentiating authentic age confirmations from fraudulent ones. This complexity has left enforcement authorities wrestling with the fundamental question of whether current age verification technologies are fit for purpose.
Beyond the technical obstacles lies a wider issue about the willingness of platforms to place compliance ahead of user growth. Social media companies have consistently opposed strict identity verification requirements, citing privacy concerns and the genuine difficulty of confirming age online. However, the Commissioner’s report suggests that some platforms may not be making sufficient effort to deploy the legally mandated infrastructure. The shift towards active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance infrastructure, or they risk facing substantial fines that could transform their operations in Australia and potentially influence regulatory approaches internationally.
What the Data Shows
In the first month following the ban’s launch, Australian authorities reported that 4.7 million accounts had been restricted or deleted. Whilst this statistic initially seemed to demonstrate regulatory success, closer review reveals a more nuanced picture. The sheer volume of account takedowns suggests that many under-16s had managed to establish accounts in the first place, indicating that preventative safeguards were inadequate. Moreover, the data raises questions about whether deleted profiles reflect genuine enforcement or simply users voluntarily removing their accounts in response to the new restrictions.
The limited transparency surrounding these figures has frustrated independent observers attempting to evaluate the ban’s true effectiveness. Platforms have revealed minimal information about their compliance procedures, performance indicators, or the characteristics of suspended accounts. This absence of transparency makes it difficult for regulators and the general public to judge whether the ban is functioning as designed or whether younger users are simply finding other means to access social media. The Commissioner’s push for comprehensive proof of systematic compliance measures reflects mounting dissatisfaction with platforms’ unwillingness to share full information.
Industry Response and Opposition
The major tech platforms have responded to the regulator’s enforcement action with a combination of compliance assurances and doubts about the ban’s practicality. Meta, which runs Facebook and Instagram, stressed its commitment to complying with Australian law whilst contending that precise age verification remains a major challenge across the industry. The company has called for a different approach, proposing that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than platform-level enforcement. This position reflects broader industry concerns that the current regulatory framework places an unrealistic burden on individual platforms.
Snap, the creator of Snapchat, has adopted a more assertive public position, announcing that it had locked 450,000 accounts following the ban’s implementation and asserting it continues to suspend additional accounts each day. However, industry observers question whether such figures reflect genuine compliance or simply reactive account management. The core conflict between platforms’ commercial models—which have historically relied on maximising user engagement and growth—and the regulatory requirement to systematically remove a whole age group remains unresolved. Companies have consistently opposed rigorous age verification methods, citing privacy issues and technical constraints, creating a standoff between authorities and platforms over who bears responsibility for implementation.
- Meta contends age verification should occur at app store level rather than on individual platforms
- Snap claims to have locked 450,000 user accounts since the ban’s implementation in December
- Industry groups cite privacy concerns and technical obstacles as barriers to effective age verification
- Platforms contend they are making their best effort whilst challenging the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 online platform ban moves into its implementation stage, key concerns persist about whether the law will achieve its stated objectives or merely push young users towards less regulated platforms. The regulator’s initial compliance assessment reveals that despite months of implementation, substantial gaps remain—children continue finding ways to bypass age verification systems, and platforms have struggled to stop new underage accounts from being created. Critics argue that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will genuinely abandon major social networks or simply shift towards alternative services, secure messaging apps, or VPNs that conceal their location.
The ban’s global implications add to the complexity of assessing its effectiveness. Countries such as the United Kingdom, Canada, and several European nations are observing Australia’s experiment closely and exploring similar legislation for their own populations. If the ban proves ineffective at reducing children’s social media usage or fails to protect them from harmful online content, it could damage the case for similar measures elsewhere. Conversely, if enforcement becomes rigorous enough to genuinely restrict underage usage, it may inspire other nations to adopt similar strategies. The outcome will likely shape worldwide regulatory patterns for years to come, ensuring Australia’s regulatory efforts are scrutinised far beyond its borders.
Who Stands to Gain and Who Loses Out
Mental health advocates and child safety organisations have championed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators maintain that removing young Australians from platforms designed to maximise engagement could lower anxiety levels, improve sleep patterns, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people—maintaining friendships, accessing educational material, and engaging with online communities around shared interests. The regulatory approach assumes harm outweighs benefit, a calculation that some young people and their families challenge.
The ban’s concrete implications extend beyond individual users to affect content creators, small businesses, and community organisations that depend on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that rely on social media marketing can no longer reach younger demographic audiences. Community groups, charities, and educational organisations find it difficult to engage young people through channels they previously used effectively. Meanwhile, the ban unexpectedly advantages large technology companies with the resources to build age verification infrastructure, potentially strengthening their market dominance rather than reducing it. These unforeseen consequences suggest the ban reaches well beyond the simple goal of child protection.
What Follows for Compliance Monitoring
Australia’s eSafety Commissioner has signalled a marked shift from hands-off observation to direct intervention, marking a key milestone in the rollout of the under-16 ban. The regulator will now collect data to determine whether services have failed to implement “reasonable steps” to prevent underage access, a legal standard that goes further than simply recording that minors continue using these platforms. This approach requires tangible evidence that companies have introduced appropriate systems and procedures designed to exclude minors. The Commissioner’s office has stated it will pursue investigations methodically, building evidence that could result in considerable sanctions for non-compliance. This transition from observation to enforcement reflects increasing dissatisfaction with the companies’ current approach and signals that voluntary compliance alone is insufficient.
The enforcement phase raises critical questions about the sufficiency of sanctions and the practical mechanisms for holding companies accountable. Australia’s regulatory framework provides enforcement instruments, but their success depends on the eSafety Commissioner’s willingness to pursue enforcement and the platforms’ capacity to adapt effectively. Global regulators, particularly in Britain and Europe, will closely track Australia’s approach and results. A successful enforcement campaign could create a blueprint for other jurisdictions considering equivalent prohibitions, whilst failure might undermine the entire regulatory framework. The next phase will prove decisive in determining whether Australia’s groundbreaking legislation delivers substantive protection for adolescents or remains largely symbolic in its effect.
