Automated facial recognition: Why has it got data protectionists up in arms?
21-10-2021 | Author: Jan Tissler
If automated facial recognition can be used to solve or prevent crimes, it sounds like a good idea. However, the potential for misuse of this technology is enormous, and it has data protectionists and activists sounding the alarm.
In February 2020, an Indiana State Police officer had a seemingly unsolvable case on his hands: An argument had arisen between two men in a park, one of whom had eventually pulled out a gun and shot the other in the stomach. An eyewitness had recorded the event with a smartphone. But there were no clues as to who the perpetrator was. A search for his face in official databases was unsuccessful.
Then the police officer tested a new service called "Clearview AI", through which he was able to find the shooter in a video on the social web. Not only that, the man's name was also in the descriptive text. He was soon arrested and charged. Case solved.
As it turned out, the offender had neither a driving licence nor an adult arrest record, which meant his face was not on file with the authorities. Clearview AI, however, doesn't limit its search to official sources: By its own account, the start-up has scraped more than 3 billion photos from sites such as Facebook, Instagram and Twitter, without ever having permission to do so.
Copycats already in the starting blocks
Ultimately, Clearview AI is the logical consequence of various developments in recent years. We are now constantly being filmed and photographed: Surveillance cameras are no longer exclusive to shops and government offices, or busy town squares and streets, but are also used in private homes. Moreover, countless photos and videos are taken every minute and uploaded directly to the internet.
At the same time, advances in artificial intelligence and growing computing power have made facial recognition commonplace. Powerful computers and specialised software can search a photo database for possible matches in no time at all.
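Under the hood, such systems typically reduce each face to a numeric "embedding" vector and then compare vectors rather than pixels. The following sketch illustrates only the matching step, using random stand-in vectors instead of a real face-recognition model; the names, the 128-dimension size and the 0.8 threshold are all illustrative assumptions, not details from any actual product:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_database(query: np.ndarray, db: dict, threshold: float = 0.8):
    """Return (name, score) pairs whose embedding matches the query
    above the threshold, best match first."""
    hits = [(name, cosine_similarity(query, emb)) for name, emb in db.items()]
    return sorted((h for h in hits if h[1] >= threshold),
                  key=lambda h: h[1], reverse=True)

# Toy database: in a real system a neural network would produce these
# vectors from face photos; here they are simply random.
rng = np.random.default_rng(0)
db = {f"person_{i}": rng.normal(size=128) for i in range(5)}

# A query that is a slightly noisy copy of person_3's embedding,
# simulating a second photo of the same person.
query = db["person_3"] + rng.normal(scale=0.05, size=128)
print(search_database(query, db)[0][0])  # best match
```

At web scale, the linear scan shown here is replaced by approximate nearest-neighbour indexes, but the principle of threshold-based vector comparison is the same.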
Clearview AI was simply the first start-up to efficiently merge these two developments. In the absence of government regulation, imitators will follow. The Polish start-up PimEyes, for example, draws on 900 million photos for its face search engine.
The potential fields of application of these services are broad-ranging and even appear quite sensible at first glance. Security agencies want to use them to protect us from terrorists or to find perpetrators. Shops want to identify known thieves or personally greet a particularly loyal customer.
At the same time, however, data protectionists fear that this development may have considerable consequences for us all.
Potential for abuse known for a long time
Experts like Google's then-CEO Eric Schmidt had seen the potential for abuse coming for some time. In 2011, in an interview at the "All Things Digital D9" conference, he explained that facial recognition was a technology Google had deliberately chosen not to pursue. "Personally, I am very concerned about the combination of mobile tracking and face recognition," he explained.
Microsoft President Brad Smith recently warned that AI and other advanced technologies could soon lead to an "Orwellian society" unless more laws were passed to protect us. Mass surveillance of the population is looming.
This view is shared by data protectionists and activists who have joined forces in campaigns such as "Stop facial recognition". They believe privacy and anonymity – the cornerstones of a democratic and free society – to be in danger.
Critics also point to the weaknesses of the technology: While it reliably recognises the faces of white men in particular, it misidentifies people from other population groups at a much higher rate.
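The disparity critics describe is usually quantified as a false-match rate measured separately for each demographic group: the share of "impostor" comparisons (photos of two different people) that the system wrongly reports as a match. A minimal sketch of that calculation, using invented trial data (the group names and numbers are purely illustrative, not from any real benchmark):

```python
# Each trial: (group, is_same_person, system_reported_match).
# The data below is invented for illustration only.
trials = [
    ("group_a", False, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, True),
    ("group_b", False, True),  ("group_b", False, True),
    ("group_b", False, False), ("group_b", False, False),
]

def false_match_rate(trials, group):
    """Share of impostor pairs (different people) wrongly reported as a match."""
    impostor = [t for t in trials if t[0] == group and not t[1]]
    return sum(1 for t in impostor if t[2]) / len(impostor)

for g in ("group_a", "group_b"):
    print(g, false_match_rate(trials, g))  # prints 0.25 and 0.5
```

Evaluations of real systems, such as NIST's Face Recognition Vendor Test, report per-group comparisons of exactly this kind.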
"The use of biometric mass surveillance in Member States and by EU agencies has been shown to lead to breaches of EU data protection law and unduly restrict people's rights, including their rights to privacy, freedom of expression, protest and freedom from discrimination." – Reclaim Your Face
Ban or regulation?
Critical voices can also be heard in politics. For example, in a resolution, MEPs called for a moratorium on facial recognition in public spaces and demanded a complete ban on automated surveillance using other biometric features such as fingerprints, voice or gait.
Others, however, see clear regulation as a countermeasure. This, in turn, is seen by some experts as illusory: for example, in an open letter, more than 175 civil society organisations, scientists and activists have called for a worldwide ban on biometric surveillance in public spaces. The reasoning: It too often leads to the undermining of fundamental human rights such as freedom of expression and assembly as well as the right to privacy and data protection. "No technical or legal safeguards could ever completely eliminate this risk," the letter says.
Others, like Clearview investor David Scalzo, take a much more relaxed view of the discussion. "I've come to the conclusion that because information constantly increases, there's never going to be privacy," he told the New York Times. "Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it."