Australian News Today

Google and Meta could face defamation risks over AI-generated responses, Australian experts warn

The use by Meta and Google of user comments and reviews in generative AI responses, whether to answer queries about restaurants or to summarise sentiment, could introduce new defamation risks, experts have warned.

In Australia, when a user makes an allegedly defamatory post or review on Google or Facebook, it is usually the user who faces legal action for defamation. But a landmark 2021 high court ruling in Dylan Voller’s case against news outlets – over comments on their social media pages relating to the young Indigenous man’s mistreatment in Don Dale youth detention centre – established that the operator of a page hosting a defamatory comment, such as a news outlet’s Facebook page, can also be held liable.

The tech companies are occasionally taken to court in Australia. Google was forced to pay former deputy NSW premier John Barilaro more than $700,000 in 2022 over hosting a defamatory video, and in 2020 the company was ordered to pay $40,000 over search results linking to a news article about a Melbourne lawyer – a ruling later overturned by the high court.

Last week, Google began rolling out changes to Maps in the United States, with its new AI, Gemini, allowing people to ask Maps for places to visit or activities to do, and summarising the user reviews for restaurants or locations.

Google also began rolling out AI overviews to Australian users last week, providing AI-generated summaries at the top of search results.

Meta has recently commenced providing AI-generated summaries of comments on posts on Facebook, such as those posted by news outlets.

Michael Douglas, a defamation expert and consultant at Bennett Law, said he expected to see some cases reach court as AI was rolled out into these platforms.

“If Meta sucks up comments and spits them out, and if what it spits out is defamatory, it is a publisher and potentially liable for defamation,” he said.

“No doubt such a company would rely on various defences. It may argue ‘innocent dissemination’ under the defamation acts, but I am not sure that the argument would get very far – it ought to have reasonably known that it would be repeating defamatory content.”

He said they might rely on new “digital intermediaries” provisions in defamation laws in some states, but that AI might not be in the scope of the new defences.

Prof David Rolph, a professor of law at the University of Sydney, said an AI repeating allegedly defamatory comments could be a problem for the tech companies, but the introduction of the serious harm requirement in recent defamation reforms may reduce the risk. He noted, however, that those reforms were introduced before large-language model AI became widely available.

“The most recent defamation law reform process obviously didn’t grapple with the new permutations and problems presented by AI,” he said.

“That’s the nature of technology – that law will always have to be behind it, but it will become important, I think, for defamation law to try and reform itself more regularly, because these technologies and the problems that they pose for defamation law are now occurring more rapidly, evolving more rapidly.”

Rolph said that because AI could provide a different response to each user depending on what was input, the number of people who saw any allegedly defamatory material might be limited.

In response to a question about the defamation risk, Miriam Daniel, vice-president and head of Google Maps, told reporters last week that the company’s team worked hard to remove fake reviews or anything that went against its policies, but Gemini would aim to provide “a balanced point of view”.

“We look for enough number of common themes from enough reviewers, both positive sentiments and negative sentiments, and try to provide a balanced view to when we provide the summary,” she said.

A spokesperson for Meta said its AI was new and might not always return the responses the company intended.

“We share information within the features themselves to help people understand that AI might return inaccurate or inappropriate outputs,” the spokesperson said. “Since we launched, we’ve constantly released updates and improvements to our models and we’re continuing to work on making them better.”