In 2025, as the world continues its shift to the digital and the “artificial”, there is a growing view that young people are overexposed to harmful online content, and that some of the most popular social networks can be addictive, manipulative and dangerous to young minds. In Australia, the first law globally to ban kids from social media comes into effect shortly, and the government plans to spend millions on education efforts. Below we look at what the ban involves, why it has been introduced and some challenges it may face in separating teens from their favourite platforms.
Australia has enacted the world’s first law to ban children under the age of sixteen (U16s) from creating or maintaining accounts on major social media platforms. The reform, introduced through the Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth) (Minimum Age Act), amends the existing Online Safety Act 2021 (Cth). The Minimum Age Act will come into effect on 10 December 2025.
Under the new framework, the onus is on social media platforms to take reasonable steps to prevent U16s from holding user accounts.
The Minimum Age Act forms part of a broader governmental effort to address the escalating public concern over the effects of social media on the mental health and wellbeing of young Australians. There is a growing view that young people are increasingly overexposed to harmful or inappropriate content, and that platforms design and deploy algorithms that can be addictive and even psychologically manipulative. The government has therefore positioned the reforms as essential measures for helping to protect youth mental health.
The eSafety Commissioner highlighted the Government’s concerns when commenting on Sora, OpenAI’s new AI video generator, which is not yet available in Australia:
Among these risks is the production of abusive, shocking or problematic content. Because the videos produced by Sora are incredibly realistic, people could find it challenging to determine if a video generated by Sora is real or fake. This could lead to the indiscriminate use of deepfakes for disinformation and other harmful content.[1]
Which platforms are regulated?
The Minimum Age Act regulates platforms where the sole or significant purpose is to enable end-users to communicate or interact socially online, including by sharing content, linking with other users, posting material or engaging in social networking activities (Restricted Platforms). Platforms meeting this description are subject to the U16 age restrictions.
Section 6(a) of the Minimum Age Act specifies that a platform is not a Restricted Platform if ‘none of the material on the service is accessible to, or delivered to, one or more end-users in Australia’. A Restricted Platform does not necessarily need to be headquartered in Australia or otherwise meet the test of ‘carrying on business in Australia’ – as we looked at in our previous article – but will be regulated if it is accessible or available to Australian U16s. Whilst enforcement against non-resident entities may face practical challenges, the legislation confers enforcement powers on the eSafety Commissioner, and the Government has signalled that it expects industry compliance.
This broad language is designed to capture popular platforms such as TikTok, Instagram, Snapchat, X, YouTube, and Facebook, while excluding platforms that primarily serve non-social functions. Sora, if it is made available to Australians, will be regulated by the Minimum Age Act. However, one wonders whether the restrictions have cooled OpenAI’s plans to roll the service out down under.
Online services that foster learning, communication or wellbeing, and that are not primarily designed for social interaction, are expressly excluded, including messaging-only, email-only or voice calling-only applications, online gaming services, and educational and health-related platforms.[2] An example of an excluded platform is WhatsApp, on which user-generated content is not publicly posted, shared or interacted with broadly.
What steps must platforms take?
The central obligation of the new regime is that Restricted Platforms must take ‘reasonable steps’ to ensure that U16s cannot create or maintain user accounts.
‘Reasonable steps’ is intentionally flexible: it recognises the wide range of models employed by social media services and is designed to allow compliance through different methods. These may include identity-based age verification, AI-based age estimation (such as facial analysis), or inference of age from behavioural signals.
The exact method of achieving compliance is largely left to the discretion of Restricted Platforms, provided that the steps taken can be shown to be proportionate and effective. Restricted Platforms operated by overseas corporations may instead choose to ‘geo-lock’ their platforms so that Australian U16s cannot gain access, thereby avoiding the risk of breaching the Minimum Age Act.
What penalties apply and who can be liable?
Breaches can result in fines of up to approximately A$49.5 million (150,000 penalty units) for corporate actors. These penalties align with the upper end of Australia’s recent digital-regulation regime, reflecting the government’s broader ‘$50 million benchmark’ for serious corporate breaches, as seen in the competition and consumer and privacy law contexts.
Importantly, liability falls exclusively on the Restricted Platforms. In particular, the law does not impose penalties on parents or guardians who continue to permit U16 accounts, and the U16s themselves cannot, in any event, be penalised.[3] The government’s position is that the burden should rest with the corporations that design, operate, and profit from the Restricted Platforms, rather than Australian families.
‘Restoring Childhood’
The underlying objective of the new regime is the protection of young people’s mental health and wellbeing. According to a survey conducted by the eSafety Commissioner, 95% of 13 to 15 year-olds had used at least one social media service that would now meet the test of a Restricted Platform between January and September 2024.[4] The Commissioner further estimates that approximately 1.3 million 8 to 12 year-olds in Australia may have used social media in 2024.[5] Extrapolating these estimates using the latest Australian Bureau of Statistics population counts by age[6], we estimate that approximately 2.2 million 8 to 15 year-olds in Australia are using Restricted Platforms.
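The arithmetic behind that figure is straightforward: assuming, consistent with ABS counts, a 13 to 15 year-old population in the order of 950,000, a 95% usage rate contributes roughly 0.9 million users, which, added to the estimated 1.3 million 8 to 12 year-old users, gives a little over 2.2 million in total.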
The second reading speech of the Minimum Age Bill cited a growing body of international research demonstrating correlations between heavy social media use among teens and higher rates of depression, anxiety, body dissatisfaction, sleep disturbance and exposure to harmful content, particularly for girls aged 11 to 13 and boys aged 14 to 15.[7]
Communications Minister Anika Wells has described the policy as a means of ‘restoring childhood,’ so that ‘young Australians have three more years to build real world connections and online resilience’.[8] The announcement followed years of public debate about social media’s role in rising rates of anxiety, depression, cyberbullying, and body-image concerns among teenagers.
Putting proactive responsibility back on the platforms
The legislation represents an attempt to reallocate responsibility for online safety. Historically, regulation has placed the onus on parents to supervise their children’s internet use. However, policymakers have recognised that the asymmetry of power between parents and multinational technology companies renders this expectation unrealistic.
The new model places accountability squarely on the platforms, recognising that they possess the means to detect, deter and control under-age participation.
The reform aligns with the wider movement within online safety policy towards a ‘duty of care’ model, in which platforms themselves must proactively prevent foreseeable harm rather than respond reactively to incidents.
The legislation has faced criticism from industry stakeholders, social groups and digital rights advocates. The main areas of concern are verification feasibility, possible circumvention, and potential discriminatory impacts.
Verification Challenges
One of the biggest challenges lies in verifying users’ ages in a manner that is both accurate and privacy-protective. The notion of ‘reasonable steps’ means that platforms must determine for themselves what constitutes a sufficient verification process. However, there is currently no standardised, reliable or universally accepted method of online age assurance.
Strict identity-based verification, such as requiring government-issued identification, raises substantial privacy and security risks, particularly when applied to minors.
Conversely, behavioural or AI-based age estimation tools, while less intrusive, are often inaccurate and potentially discriminatory.
Major technology companies have already voiced concern. Google has publicly described the new law as ‘extremely difficult to enforce’[9], warning that implementation could be unworkable without intrusive data collection.
Circumvention
There are also concerns that the law may be circumvented too easily. U16s may use older siblings’ or friends’ credentials, register through unregulated or offshore platforms, or rely on virtual private networks (VPNs) to mask their location.
These workarounds could expose compliant Restricted Platforms to penalties while non-compliant or less visible services continue to operate with little oversight.
Discrimination Concerns
Critics argue that the law may disproportionately affect marginalised or remote communities. For many young people in rural Australia or within culturally diverse communities, online platforms serve as vital tools for connection, identity formation and access to educational or support resources. Excluding these users without providing suitable alternatives could exacerbate social isolation and inequality.
Meta has argued that the regime may have “unintended consequences”[10] for young users, such as limiting digital access for teens who rely on their platforms for legitimate uses including education and cultural or community engagement.
For children and teenagers, the most immediate impact will be the loss of their ability to participate in Restricted Platforms. This may result in less exposure to harmful content and a reduction in compulsive or addictive online behaviour. However, it may also generate feelings of exclusion, particularly in social groups where social media has become a dominant form of communication.
For the Restricted Platforms themselves, losing access to over 2 million users will have a significant impact on their business model. By way of example, the Harvard School of Public Health reported that ‘Social media platforms Facebook, Instagram, Snapchat, TikTok, X (formerly Twitter), and YouTube collectively derived nearly $11 billion in advertising revenue from U.S.-based users younger than 18 in 2022’.[11] This figure was based on 49.7 million US users under 18. Taking the US revenue as a guide, losing over 2 million U16s may cost Restricted Platforms over $400 million in advertising revenue.
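That is a rough, back-of-envelope estimate only, and assumes Australian per-user advertising revenue is broadly comparable to the US figures: US$11 billion across 49.7 million under-18 users equates to roughly US$220 per user per year, which, applied to the approximately 2.2 million Australian users estimated above, implies an annual revenue exposure in the order of US$480 million.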
Further, the risk of a penalty approaching $50 million provides a strong incentive for compliance, and may prompt platforms to accept the ‘collateral damage’ of excluding legitimate users rather than risk being in breach.
From a commercial standpoint, the compliance burden is expected to be substantial. While large global platforms possess the resources to implement complex age-assurance systems, smaller or niche services may find the costs prohibitive. Compliance has been estimated to cost social media companies millions of dollars, particularly multinationals with large Australian user bases. This raises concerns about market concentration in social media and the stifling of innovation, as emerging Australian platforms may exit the market rather than bear the regulatory overhead.
It is expected that Restricted Platforms will have to invest heavily to demonstrate regulatory compliance, and some product redesign will be required to segregate content and functionality according to verified age groups.
On the positive side for the technology industry, the legislation may catalyse innovation in privacy-preserving age-assurance technologies, such as facial-age estimation performed on-device without data retention. These developments could have wider benefits for online safety, especially as other jurisdictions consider adopting similar measures.
Australia’s ban on social media use by U16s marks a bold and unprecedented legal move. It reflects a strong governmental commitment to prioritising child protection and mental health over unrestricted digital access.
If platforms adopt intrusive identification processes, the policy could create new privacy and data-protection risks. Conversely, if enforcement remains largely nominal, the reform may fail to achieve its protective objectives.
There is also a deeper normative question about whether access to social media has, in the digital age, become a basic social right. For many teenagers, online spaces constitute the primary arena of self-expression, education and community. As foreshadowed by the Communications Minister, it may be necessary to introduce compensatory initiatives in education, mental health and digital literacy to enable the reform to produce meaningful benefits rather than simply delaying social consequences or causing a migration of harmful behaviour into less visible spaces.
If you are looking for specific advice on laws affecting Australian digital businesses, please contact Edwards + Co Legal below.
[1] ‘Sora’ eSafety Commissioner (eSafety Guide) <https://www.esafety.gov.au/key-topics/esafety-guide/sora>.
[2] s 5 Online Safety (Age-Restricted Social Media Platforms) Rules 2025 (Cth).
[3] Children under 10 have absolute immunity under the doli incapax principle; children aged 10 to 14 have the protection of the presumption of doli incapax, requiring the prosecution to prove the child knew the act was seriously wrong; and 15 year-olds are subject to various youth justice legislation and processes, which vary from state to state.
[4] ‘Behind the screen: The reality of age assurance and social media access for young Australians’ eSafety Commissioner (Transparency Report, February 2025) <https://www.esafety.gov.au/sites/default/files/2025-02/Behind-the-screen-transparency-report-Feb2025.pdf?v=1761092195167>.
[5] Ibid.
[6] ‘Population counts and age – sex distributions’ Australian Bureau of Statistics (Report, 28 June 2022) <https://www.abs.gov.au/census/about-census/census-statistical-independent-assurance-panel-report/34-population-counts-and-age-sex-distributions>.
[7] ‘Online Safety Amendment (Social Media Minimum Age) Bill 2024 Explanatory Memorandum’ House of Representatives (Memorandum, 26 November 2024) <https://parlinfo.aph.gov.au/parlInfo/download/legislation/ems/r7284_ems_b9c134ac-a19a-47b2-9879-b03dda6e3c1a/upload_pdf/JC014726.pdf;fileType=application%2Fpdf#search=%22legislation/ems/r7284_ems_b9c134ac-a19a-47b2-9879-b03dda6e3c1a%22>.
[8] ‘Support for Australians to prepare for new social media minimum age laws’ The Hon Anika Wells MP, Minister for Communications, Minister for Sport (Media Release, 14 October 2025) <https://minister.infrastructure.gov.au/wells/media-release/support-australians-prepare-new-social-media-minimum-age-laws>.
[9] Renju Jose, ‘Google says Australian law on teen social media use ‘extremely difficult’ to enforce’ Reuters (Article, 13 October 2025) <https://www.reuters.com/world/asia-pacific/google-says-australian-law-teen-social-media-use-extremely-difficult-enforce-2025-10-13/>.
[10] Emma Shepard, ‘Meta criticises Australia’s social media ban for under-16s’ MediaWeek (Article, 29 November 2024) <https://www.mediaweek.com.au/meta-criticises-australias-social-media-ban-for-under-16s/>.
[11] Maya Brownstein, ‘Social media platforms generate billions in annual ad revenue from U.S. youth’ Harvard School of Public Health (Article, 27 December 2023) <https://hsph.harvard.edu/news/social-media-platforms-generate-billions-in-annual-ad-revenue-from-u-s-youth/>.