Australia is set to introduce a groundbreaking nationwide ban preventing children under the age of 16 from using social media platforms such as Instagram, TikTok, and Snapchat. Expected to take effect in late 2025, the new legislation will require platforms to demonstrate they are taking ‘reasonable steps’ to block underage users or face fines of up to AUD 49.5 million (USD 32 million). YouTube, owing to its educational role, is exempt from these rules. However, the law does not yet specify enforcement methods, leaving this to a trial scheduled from January to March.
The trial will involve 1,200 randomly selected Australians. It will be conducted by KJR, a technology firm with experience in defence and election-related projects, according to a report by Reuters. The objective is to explore technologies and methods to verify users’ ages without compromising privacy or security.
How will the ban be enforced?
Several potential approaches are under consideration. One involves biometric age estimation, in which users upload a video selfie that is analysed for age-related features; the footage is processed to estimate age and then deleted, so no personal information is retained. Another option is document-based age verification, where users upload a document such as a passport or birth certificate to a third-party service, which confirms their age anonymously by issuing a verification token. A third method is age inference through data cross-checking, which analyses a user’s email address or account activity and cross-checks it against accounts whose age is already known.
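To make the token idea concrete, the following is a minimal sketch of how a document-based check might hand a platform nothing more than a signed ‘over 16’ claim. It assumes a hypothetical verifier that shares an HMAC key with the platform; the function names, the shared key, and the year-based age calculation are illustrative inventions, not any provider’s actual API.

# Hypothetical illustration: a third-party verifier inspects a document,
# derives only a boolean "over_16" claim, and signs it; the platform checks
# the signature and never sees the document itself. All names and the shared
# key are invented for this sketch.
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"verifier-platform-demo-key"  # placeholder key material

def issue_token(birth_year: int, current_year: int = 2025) -> dict:
    # Verifier side: reduce the document to a single claim, then sign it.
    # (A real system would compare full dates of birth, not just years.)
    claim = {"over_16": current_year - birth_year >= 16,
             "issued_at": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def platform_accepts(token: dict) -> bool:
    # Platform side: verify the signature and accept the claim as-is.
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"]) and token["claim"]["over_16"]

token = issue_token(birth_year=2007)
print(platform_accepts(token))  # True: the holder is 16 or older; nothing else is disclosed

In practice the signature would likely be asymmetric and the claim time-limited, but the principle is the same: the platform learns a yes-or-no answer rather than seeing the underlying identity document.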
The trial will also assess these methods’ effectiveness against common workarounds, such as appearance-altering filters or fake documents, and will reject solutions that can be bypassed at scale. Participants will actively try to ‘fool’ the technology to test its robustness.
The trial results will guide lawmakers and social media companies in implementing the legislation. The Age Check Certification Scheme, a British consulting firm, is overseeing the process and will recommend solutions to the Australian government by mid-2025. Tony Allen, CEO of the scheme, stated that while no single method is perfect, the aim is to provide multiple effective solutions that meet high standards of accuracy, privacy, and user-friendliness.
This initiative comes amid global concern about social media’s impact on teenagers’ mental health and the use of their personal data. Australia’s decision could set an international precedent, with other countries watching closely. Some European nations and US states have set minimum age requirements for social media, but enforcement has been hampered by privacy and free-speech concerns.
Critics of the Australian plan, including opposition lawmakers and social media figures such as Elon Musk, have expressed concerns about potential overreach and data-collection risks. Musk suggested the move could act as a ‘backdoor’ to restrict broader internet access. However, Australia’s Communications Minister, Michelle Rowland, denied that the law involves government-mandated technologies or the sharing of personal data with platforms. The legislation also requires platforms to offer alternatives to document-based verification to address privacy concerns.
Although technologies like facial recognition and biometric analysis have advanced rapidly, experts acknowledge they are not foolproof. Many teenagers lack formal identification documents, making age estimation techniques based on facial features or behavioural patterns more appealing. Companies such as Yoti, which provides age verification for Instagram, claim high accuracy rates, but critics argue that even small errors could be significant in a country of 27 million: a misclassification rate of just one per cent across several million accounts would still mean tens of thousands of users wrongly blocked or wrongly admitted.
Meta, Instagram’s parent company, is uncertain whether its current measures will meet Australia’s standards. Mia Garlick, Meta’s policy director for Australia and New Zealand, stated that while tools like Yoti are useful, they may not fully address the new law’s requirements.