The Human Rights Law Centre has released a new guide that empowers Australian tech employees to speak out against harmful company practices or products.
The guide, Technology-Related Whistleblowing, provides a summary of legally protected avenues for raising concerns about the harmful impacts of technology, as well as practical considerations.
SEE: ‘Right to Disconnect’ Laws Push Employers to Rethink Tech Use for Work-Life Balance
“We’ve heard a lot this year about the harmful conduct of tech-enabled companies, and there is undoubtedly more to come out,” Alice Dawkins, Reset Tech Australia executive director, said in a statement. Reset Tech Australia is a co-author of the report.
She added: “We know it will take time to progress comprehensive protections for Australians for digital harms – it’s especially urgent to open up the gate for public accountability via whistleblowing.”
Australia has experienced relatively little tech-related whistleblowing. In fact, Kieran Pender, the Human Rights Law Centre’s associate legal director, said, “the tech whistleblowing wave hasn’t yet made its way to Australia.”
However, the potential harms of technologies and platforms have been in the spotlight, driven by new Australian laws, technology-related scandals, and media coverage.
Australia has legislated a ban on social media for users under 16, coming into force in late 2025. The ban, spurred by concerns about the mental health impacts of social media on young people, will require platforms like Snapchat, TikTok, Facebook, Instagram, and Reddit to verify user ages.
Australia is also in the process of legislating a “digital duty of care” following a review of its Online Safety Act 2021. The new measure would require tech companies to proactively keep Australians safe and better prevent online harms, following a legislative approach similar to that taken in the U.K. and European Union.
Automated matching of Australian Taxation Office income data with welfare records, combined with income-averaging calculations, resulted in around 470,000 wrongly issued debts being pursued by the Australian government. The so-called Robodebt scheme was found to be unlawful and led to a full Royal Commission investigation.
An Australian Senate Select Committee recently recommended establishing an AI law to govern AI companies; LLMs from OpenAI, Meta, and Google would be classified as “high-risk” under such a law.
Much of the concern involved the potential use of copyrighted material in AI model training data without permission, as well as the impact of AI on the livelihoods of creators and other workers. An OpenAI whistleblower recently raised similar concerns in the U.S.
The Technology-Related Whistleblowing guide points to reports that an Australian radiology company handed over patients’ medical scans, without their knowledge or consent, to a healthcare AI start-up that used the scans to train AI models.
Analysis by Human Rights Watch found that LAION-5B, a data set built by scraping internet data and used to train some popular AI tools, contains links to identifiable photos of Australian children, included without the consent of the children or their families.
The Office of the Australian Information Commissioner recently secured a $50 million settlement from Meta following allegations that Facebook user data was harvested by an app, exposed to potential disclosure to Cambridge Analytica and others, and possibly used for political profiling.
The guide also references reports about an algorithm used to rate the risk levels of immigration detainees. The algorithm’s ratings allegedly influenced how detainees were managed, despite questions over the underlying data and the validity of the ratings.
The guide outlines in detail the protections potentially available to tech employee whistleblowers. For instance, it explains that Australia’s private sector whistleblower laws cover certain “disclosable matters” that make employees who report them eligible for legal protections.
Under the Corporations Act, a “disclosable matter” arises when there are reasonable grounds to suspect the information concerns misconduct or an improper state of affairs or circumstances in an organisation.
SEE: Accenture, SAP Leaders on AI Bias Diversity Problems and Solutions
Public sector employees can leverage Public Interest Disclosure legislation in circumstances involving substantial risks to health, safety, or the environment.
“Digital technology concerns are likely to arise in both the public and private sectors which means there is a possibility that your disclosure may be captured by either the private sector whistleblower laws or a PID scheme — depending on the organisation your report relates to,” the guide advised Australian employees.
“In most cases, this will be straightforward to determine, but if not we encourage you to seek legal advice.”
Whistleblower Frances Haugen, the source of the internal Facebook material that led to The Wall Street Journal’s Facebook Files investigation, wrote a foreword for the Australian guide. She said the Australian government was signaling moves on tech accountability, but its project “remains nascent.”
“Australia is, in many respects, a testing centre for many of the world’s incumbent tech giants and an incubator for the good, bad, and the unlawful,” she claimed in the whistleblowing guide.
SEE: Australia Proposes Mandatory Guardrails for AI
The authors argue in their release that more Australians than ever are being exposed to harms caused by new technologies, digital platforms, and artificial intelligence. However, they noted that, amid the policy debate, the role of whistleblowers in exposing wrongdoing has been largely overlooked.
Haugen wrote that “the depth, breadth, and pace of new digital risks are rolling out in real-time.”
“Timely disclosures will continue to be vitally necessary for getting a clearer picture of what risks and potential harm are arising from digital products and services,” she concluded.