The Australian privacy regulator has ended its pursuit of Clearview AI over its use of images of Australians’ faces in its facial recognition service, despite no indication from the company that it has complied with a ruling ordering the images to be deleted.
Clearview AI is a facial recognition service that has been used by law enforcement across the globe, including in limited trials in Australia. It claims to have a database of over 50bn faces scraped from the internet, including social media.
In 2021, the office of the Australian information commissioner found Clearview AI had breached Australians’ privacy by collecting these images without consent, and ordered the company to cease collecting the images and delete those on record within 90 days. Clearview initially appealed against the decision to the administrative appeals tribunal, but withdrew its appeal in August last year before the AAT could make a ruling, meaning the original decision stands.
There is no indication whether Clearview has since complied with the order, and the company did not respond to a request for comment.
On Wednesday, a year after Clearview withdrew its appeal, the privacy commissioner, Carly Kind, announced that the OAIC would not continue to pursue Clearview to enforce the order.
“I have given extensive consideration to the question of whether the OAIC should invest further resources in scrutinising the actions of Clearview AI, a company that has already been investigated by the OAIC and which has found itself the subject of regulatory investigations in at least three jurisdictions around the world as well as a class action in the United States,” she said.
“Considering all the relevant factors, I am not satisfied that further action is warranted in the particular case of Clearview AI at this time.”
In June, Clearview AI agreed to settle, for an undisclosed amount and with no admission of wrongdoing, a class action suit brought against the company over the alleged privacy violations of Americans included in the system. The settlement has yet to be approved by the court.
A 2022 settlement with the American Civil Liberties Union (ACLU) also barred Clearview AI from selling its database to most businesses in the United States, and to all entities in Illinois, including law enforcement, for five years.
Kind said on Wednesday that practices such as Clearview’s had become increasingly common, and troubling, in the years since the ruling as part of the drive towards generative artificial intelligence models.
The OAIC and 11 other regulators issued a statement in August last year calling on publicly available sites to take reasonable steps to protect personal information on their sites from being scraped unlawfully.
Kind said all regulated entities in Australia that use AI to collect, use or disclose personal information were required to comply with the Privacy Act.
“The OAIC will soon be issuing guidance for entities seeking to develop and train generative AI models, including how the APPs (Australian privacy principles) apply to the collection and use of personal information. We will also issue guidance for entities using commercially available AI products, including chatbots.”
Correspondence between Clearview AI and the OAIC during the abandoned AAT appeal, released last year under freedom of information laws, reveals Clearview had taken the view it was not subject to Australian jurisdiction because it had decided not to do business in Australia and had blocked its web crawler from obtaining images from servers based in Australia.
Australian and UK users had been able to opt out of the system, but when the company began re-scraping the internet in January last year it took no steps to reassure users that the faces scraped did not include Australians whose images were held on servers located outside Australia, such as those used by social media sites like Facebook.