Clearview AI, the facial recognition company that boasted of scraping billions of photos from the internet, is being investigated by data privacy watchdogs in the UK and Australia.
The UK’s Information Commissioner’s Office (ICO) and the Office of the Australian Information Commissioner (OAIC) announced a joint investigation into Clearview AI on Thursday.
The two regulatory bodies said the investigation will focus on Clearview AI’s “use of ‘scraped’ data and biometrics of individuals.”
Clearview AI describes itself as a “search engine for faces.”
It claims its software scrapes images of people’s faces from popular sites like Facebook, YouTube, and Google.
Users are then able to upload a photo of someone’s face to Clearview AI’s app, and it will show them where else that person’s face has appeared on the internet.
The company was relatively unknown until January, when The New York Times published an exposé revealing that its technology was being used by police forces.
This prompted outrage at the way the company collected people’s images and biometric data without their consent, and the company was hit with a series of cease-and-desist letters from big tech companies including Facebook, Twitter, and YouTube.
In February the company said it had suffered a major data breach exposing its client list, which revealed that many of its clients were not law enforcement agencies.
The ICO and OAIC said in their statement that they would collaborate with other data privacy bodies where appropriate. Canada announced in February that it was launching a similar probe into Clearview AI’s data practices, and on Monday the company announced it was shutting down its business in Canada as a result of that investigation.
The ethics of facial recognition are under particular scrutiny at the moment: last month IBM, Amazon, and Microsoft announced they would suspend the sale of their facial recognition software to law enforcement.
This came after the George Floyd protests threw the spotlight back onto the ethical problems of selling facial recognition to police, as the technology has repeatedly been shown to exhibit racial bias.