Australia’s Social Media Ban On Users Under Age 16
Australia has become the first country to enforce a blanket ban on social media accounts for users under 16, and governments worldwide are closely observing it as a potential model for much tougher regulation of Big Tech and youth online safety.
About the Online Safety Amendment (Social Media Minimum Age) Act
The Online Safety Amendment (Social Media Minimum Age) Act 2024 is a law that prohibits children under the age of 16 from holding accounts on designated social media platforms.
Key Provisions
- Minimum Age of 16: Children under 16 are legally restricted from creating or maintaining accounts on "age-restricted social media platforms".
- Targeted Platforms: The law currently applies to major services including TikTok, Instagram, Facebook, Snapchat, X (formerly Twitter), Reddit, YouTube, Threads, Kick, and Twitch.
- "Reasonable Steps" Requirement: Platforms must implement systems to identify and deactivate accounts held by under-16s. The eSafety Commissioner provides guidelines on what constitutes "reasonable steps," which may include various age assurance technologies.
- Platform Liability, Not Users: The burden of compliance lies entirely with social media companies. There are no penalties for children who bypass the ban or for parents who allow their children to use these platforms.
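The Act does not prescribe a single compliance technology, so what "reasonable steps" look like in practice is left to each platform. As a purely illustrative sketch (the function names, fields, and thresholds below are hypothetical and do not come from the legislation or eSafety guidance), a platform might cross-check a self-declared birthdate against an inferred-age signal and deactivate accounts that fall below the minimum:

```python
from dataclasses import dataclass
from datetime import date

MINIMUM_AGE = 16  # statutory minimum under the Act


@dataclass
class Account:
    user_id: str
    declared_birthdate: date
    inferred_age: float | None  # hypothetical estimate from an age-assurance vendor


def declared_age(account: Account, today: date) -> int:
    """Age in whole years implied by the self-declared birthdate."""
    b = account.declared_birthdate
    return today.year - b.year - ((today.month, today.day) < (b.month, b.day))


def should_deactivate(account: Account, today: date) -> bool:
    """Hypothetical 'reasonable steps' check: trust neither signal alone."""
    if declared_age(account, today) < MINIMUM_AGE:
        return True  # self-declared under-16
    # Flag accounts whose inferred age strongly contradicts the declaration.
    if account.inferred_age is not None and account.inferred_age < MINIMUM_AGE - 1:
        return True
    return False
```

In practice the eSafety Commissioner's guidance contemplates layered age assurance rather than a single test, so a real system would more likely escalate contradictory cases to stronger verification than deactivate outright.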
Exemptions
Certain digital services are excluded from the age restriction to ensure children can still access essential tools for education, health, and communication. These include:
- Messaging & Communication: WhatsApp, Messenger, and standard email or video calling services.
- Gaming Platforms: Roblox and Steam.
- Educational & Health Tools: Google Classroom, YouTube Kids, and help services like Kids Helpline.
- Other: Professional networking (e.g., LinkedIn) and information-focused services like Pinterest.
Penalties and Enforcement
- Civil Fines: Companies that fail to comply face massive penalties of up to A$49.5 million for serious breaches.
- Privacy Mandates: To protect user data during age verification, the law requires platforms to destroy personal information once it has been used to verify a user's age (a sketch of this verify-then-destroy pattern follows below).
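The destruction requirement maps naturally onto a "verify, record the outcome, discard the evidence" pattern. The sketch below is a hypothetical illustration of that idea, not an official compliance API: the platform retains only a boolean verification result and overwrites the identity document used to establish it.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class VerificationOutcome:
    user_id: str
    is_over_16: bool  # the only fact retained once verification completes


def verify_and_destroy(user_id: str, id_document: bytearray,
                       check_age: Callable[[bytearray], bool]) -> VerificationOutcome:
    """Hypothetical verify-then-destroy flow: keep the outcome, not the evidence."""
    try:
        # check_age stands in for an external document-checking service.
        is_over_16 = check_age(id_document)
        return VerificationOutcome(user_id=user_id, is_over_16=is_over_16)
    finally:
        # "Destroy" the sensitive input once it has served its purpose:
        # overwrite in place, then drop the buffer.
        for i in range(len(id_document)):
            id_document[i] = 0
        id_document.clear()
```

Real deployments would also need to destroy any copies held by third-party verifiers, which is precisely where the data-breach concerns raised by critics (discussed below) arise.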
Rationale of the Australian Government
The Australian Government’s primary rationale for the Online Safety Amendment (Social Media Minimum Age) Act 2024 is to safeguard children from the documented psychological and social harms associated with social media use during "critical stages of their development".
The government has framed this as a "duty of care" obligation for tech platforms, arguing that the industry's existing self-regulation has failed to protect minors.
1. Mental Health and Wellbeing
The central justification is the "social harm" caused by social media, which Prime Minister Anthony Albanese has described as a "scourge".
- Addictive Design: The government targets "predatory algorithms" and design features (such as infinite scrolls and push notifications) that encourage excessive screen time and sleep deprivation.
- Psychological Impact: Officials point to high rates of anxiety, depression, and body dissatisfaction among teens, often driven by social comparison and idealized standards.
- Isolation: The government aims to shift youth activity back to physical environments, with the Prime Minister stating he wants children "on the footy field or the netball court" rather than on their phones.
2. Protection from Online Crimes and Harassment
The Act is a direct response to rising public concern over the safety of children in digital spaces.
- Cyberbullying: Government data indicates that over 50% of young Australians have experienced cyberbullying, which has been linked to several high-profile cases of youth suicide.
- Predatory Behavior: The ban is intended to reduce opportunities for online predators to target, groom, or manipulate children through account-based interactions.
- Harmful Content: Platforms serve up "sensational or polarizing content," including hate speech and extremist ideologies, which the government argues under-16s are not developmentally prepared to manage.
3. Failure of Existing Safeguards
The government argues that current "parental controls" and platform policies are insufficient.
- Bypassing Restrictions: The government notes that many children under the industry's current minimum age of 13 easily bypass restrictions, and that 13- to 15-year-olds remain highly vulnerable.
- Industry Accountability: By shifting the burden of enforcement to platforms (with fines up to A$49.5 million), the government aims to force "Big Tech" to invest in robust safety and age-assurance technologies that they previously had little financial incentive to implement.
4. Supporting Parents
The rationale includes a strong social component: providing a "basic sensible model" that supports parents in regulating social media use at home. The government views this as a "seatbelt moment"—a necessary legislative intervention to protect the public from a technology that has outpaced existing safety norms.
Arguments Against the Ban
Infringement on Constitutional Rights
Freedom of Political Communication: High-profile legal challenges in the High Court of Australia, led by organizations like the Digital Freedom Project and platforms like Reddit, argue that the law violates the implied constitutional right to freedom of political communication. They contend it prevents young Australians—some approaching voting age—from engaging in essential civic and political discourse.
Access to Information: Human rights advocates argue the ban restricts children’s fundamental rights to seek, receive, and impart information, which are protected under international treaties like the UN Convention on the Rights of the Child.
Privacy and Data Security Risks
Invasive Age Verification: To comply with the law, platforms may require users (including adults) to provide highly sensitive information, such as government-issued IDs, facial biometrics, or bank-verified details.
Data Breach Vulnerability: Critics warn that mandating the collection of such sensitive data creates new targets for cybercriminals. Despite "ringfence and destroy" mandates, there are fears that these systems could lead to identity theft or unauthorized profiling of individuals.
Isolation of Vulnerable Groups
Loss of Support Networks: Mental health experts and organizations like UNICEF Australia warn that the ban could cut off vital lifelines for marginalized or isolated youth, including LGBTQ+ and neurodivergent teens, who often rely on online communities for peer support and health information.
"Hidden" Harm: By forcing children off mainstream platforms, critics argue users may migrate to "darker corners" of the internet—unregulated apps or encrypted messaging services—where they are even harder to monitor and protect.
Technical Infeasibility and Circumvention
Easy Workarounds: Tech experts point out that teenagers frequently bypass restrictions using VPNs, fake ages, or secondary accounts, making the ban largely symbolic rather than practical.
Ineffective Technology: Reports have indicated that current age-estimation tools (such as AI facial analysis) remain inaccurate, leading to accidental bans of adults or failure to catch minors; the toy model below illustrates this trade-off.
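Why inaccuracy matters is easy to see with a toy model. Suppose a facial age estimator errs by a few years in either direction: wherever the platform sets its cut-off, it either wrongly blocks adults or wrongly admits minors. The error distribution and numbers below are assumptions for illustration, not measurements of any real system.

```python
import random

random.seed(0)
ESTIMATOR_ERROR_SD = 2.5  # assumed standard deviation of estimation error, in years


def estimated_age(true_age: float) -> float:
    """Toy model: estimated age = true age + Gaussian noise."""
    return true_age + random.gauss(0, ESTIMATOR_ERROR_SD)


def error_rates(cutoff: float, trials: int = 20_000) -> tuple[float, float]:
    """Fraction of 18-year-olds blocked and 14-year-olds admitted at a given cut-off."""
    adults_blocked = sum(estimated_age(18) < cutoff for _ in range(trials)) / trials
    minors_admitted = sum(estimated_age(14) >= cutoff for _ in range(trials)) / trials
    return adults_blocked, minors_admitted


for cutoff in (14, 16, 18):
    blocked, admitted = error_rates(cutoff)
    print(f"cutoff {cutoff}: {blocked:.1%} of 18-year-olds blocked, "
          f"{admitted:.1%} of 14-year-olds admitted")
```

Raising the cut-off catches more minors but blocks more adults, and vice versa; no threshold eliminates both errors while the estimator itself is imprecise.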
Economic and Social Impacts
Stifling Small Business: Organizations like the Digital Industry Group Inc. (DIGI) argue that the high cost of implementing complex age-verification systems could force smaller platforms out of the market, further consolidating power among "Big Tech" giants.
Undermining Digital Literacy: Critics argue that instead of a ban, the focus should be on digital literacy and "duty of care" for platforms. They believe that delaying access by a few years does not equip children with the skills needed to navigate the digital world once they eventually turn 16.
Encroachment on Parental Rights
State Overreach: Many argue that the law removes the agency of parents to decide when their child is mature enough for social media. Opponents contend that the government is essentially "outsourcing parenting" to state regulation rather than empowering families with better tools.
Comparing Australia and India on Regulating Children’s Online Safety
Key Regulatory Differences (2025)
| Feature | Australia | India |
| --- | --- | --- |
| Primary Legislation | Online Safety Amendment (Social Media Minimum Age) Act 2024 | Digital Personal Data Protection (DPDP) Act 2023 |
| Minimum Age | Strict 16 years for social media access; no parental overrides | 18 years for full digital consent, but children of any age can access with parental approval |
| Enforcement Model | Platform liability: fines up to A$49.5 million (~US$33 million) for failing to block under-16s | Parental burden: relies on "verifiable parental consent" and prohibits behavioral tracking |
| Verification | Mandatory age verification via ID, biometrics, or algorithmic sweeps | No mandatory verification mechanism; largely relies on self-declaration |
| Key Regulator | eSafety Commissioner: a dedicated independent office with proactive removal powers | NCPCR (protection) and MeitY (policy) share oversight roles |
Balanced Conclusion
The Act represents a bold regulatory experiment that foregrounds child safety and platform accountability, but it moves Australia into complex territory on enforceability, privacy, teenage autonomy, and the proportionality of restrictions. Its ultimate success will depend less on the statutory age of 16 itself than on how proportionate, privacy-protective, and practically effective the age-assurance and enforcement measures prove once fully implemented and tested against real online behaviour.