Facial recognition has divided opinion for years and will most likely continue to do so for the foreseeable future. Despite the security benefits it offers, there are plenty of nefarious uses for this type of technology. So when Clearview AI says it has offered its software to the Ukrainian government, the motives might be benign, but we could well be at a crossroads where people have to choose between security on the one hand and privacy and control on the other.
The company is not made of angels
It should be said upfront that Clearview AI has been found to have breached numerous data protection laws: the UK's data watchdog fined the company £17 million, Italy fined it €20 million, and Australia ordered it to delete any photos taken in the country. Simply put, these people are no angels. But how does the technology work?
Clearview AI’s system allows a user, such as a police officer seeking to identify a suspect, to upload a photo of a face and find matches in a database of billions of images the company has collected from the internet and social media. The system then provides links to where the matching images appeared online. However, it is more than likely that the people in those images never consented to their use, and although a number of UK-based organisations trialled the software, none proceeded with Clearview AI’s services.
That being said, the tech is used by some American law enforcement agencies, especially at borders, and even there the company faces a number of lawsuits over the allegedly illicit gathering of people’s photos without their knowledge. So, should Ukraine trust a company with multiple apparent breaches of data protection?
It should be used as support, if at all
Clearview AI is offering its services to Ukraine for free to help identify potential pro-Russian spies, fight misinformation and identify deceased people who have not yet been named. It sounds like a completely benign proposal: a tech company with face-scanning technology and its own database offering to help a country embroiled in war. However, critics have pointed out that the tech is far from perfectly accurate and could misidentify people.
Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project in New York, told Reuters it was possible “we’re going to see well-intentioned technology backfiring and harming the very people it’s supposed to help”.
Clearview AI chief executive Hoan Ton-That told the BBC in a statement that Clearview AI is “pleased to confirm that [it] has provided its groundbreaking facial recognition technology to Ukrainian officials for their use during the crisis they are facing”. The company offered its services to Ukraine via a letter to the government, and it claims to hold more than two billion images from Vkontakte (VK), a social network sometimes dubbed the “Facebook of Russia”.
The tech could prove useful in identifying spies by matching photos of them against Vkontakte profiles or ID cards, identifying the dead without the need for fingerprints, and reuniting families without paperwork, although Mr Ton-That added that Clearview AI should never be used as the sole source of identification.
Mr Ton-That said that Ukraine began using the technology on Saturday, although the Ukrainian defence ministry has not yet responded to questions about this.
Will the ends justify the means?
As mentioned, facial recognition technology has many critics. Privacy watchdogs, certain governments and even Facebook (now Meta) have opted to drop the technology, with the latter decision both surprising and telling about the nature of the tech and the possibility of it being misused. Without wanting to get all sci-fi about the matter, there are far too many threats to privacy, violations of rights and personal freedoms, risks of data theft and other crimes. And that is before even mentioning the risk of errors due to flaws in the technology itself.
So, while we wait for the Ukrainian government to confirm whether the tech is truly being used, a number of questions arise. First, if it proves successful, will it remain in use even after the war ends, one way or another? Second, if it does prove successful, how will other governments react? And third, if it does not work as intended and has a low identification rate, will it be the final nail in the coffin for facial recognition tech? Remember, this is but one such technology, and there are plenty of others out there right now.
The questions about technology and its implementation that have risen out of this conflict are numerous, and the answers swing across the whole spectrum of right, wrong, morally confounded and every grey area in between. As with most wars, there are few simple answers. So, the most pertinent question being asked is: if this tech can help bring a quicker end to the invasion, is it worth the price of privacy?
Should facial recognition tech be implemented to safeguard citizens?