Automated facial recognition: why are data protection advocates up in arms against it?

2021-10-21
Author: Jan Tissler

If automated facial recognition can be used to solve or even prevent crimes, that sounds like a good idea. However, the potential for misuse of this technology is enormous, and it has data protection advocates and activists up in arms.

An Indiana State Police officer had a seemingly unsolvable case on his hands in February 2020: two men got into an argument in a park and one of them took out a gun and shot his opponent in the stomach. An eyewitness had recorded the incident on a smartphone. But there were no clues as to who the perpetrator was. A search for his face in official databases was unsuccessful.

The police officer then tested a new service called "Clearview AI". With it, he found the shooter in a video on social media. And not only that: the man's name appeared in the video's description. He was soon arrested and charged. Case solved.

As it turned out, the perpetrator had no driver's license and had never been arrested as an adult, so his face was not on file with the authorities. Clearview AI, however, does not limit itself to official sources: according to its own figures, the startup has scraped over 3 billion photos from the internet, from sites such as Facebook, Instagram and Twitter that do not actually allow this.

Imitators are already waiting in the wings

Ultimately, Clearview AI is the logical consequence of several developments in recent years. We are now constantly being filmed and photographed: surveillance cameras are no longer found only in stores, government offices and on major squares and streets, but also on private homes. In addition, countless photos and videos are taken every minute and uploaded to the internet.

At the same time, facial recognition has become commonplace thanks to artificial intelligence and increased computing capacity. Powerful computers and specialized software can search a photo database for possible matches at lightning speed.
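To illustrate the basic principle, here is a minimal, hypothetical sketch in Python of how such a face search could work: a face recognition model (not shown here) reduces every photo to a numerical embedding vector, and the search then amounts to finding the stored embeddings most similar to the query. Everything in this sketch, from the 128-dimensional vectors to the find_matches function and the random stand-in data, is an illustrative assumption, not the actual implementation of Clearview AI or any other provider.

```python
import numpy as np

# Illustrative sketch only: we assume a face recognition model has already
# converted every photo in the database into a 128-dimensional embedding.
# The random vectors below merely stand in for those real embeddings.
rng = np.random.default_rng(42)
database = {f"photo_{i}": rng.normal(size=128) for i in range(10_000)}

def find_matches(query_embedding: np.ndarray, database: dict, top_k: int = 5):
    """Return the top_k photo IDs whose embeddings are most similar to the query."""
    ids = list(database)
    matrix = np.stack([database[i] for i in ids])
    # Cosine similarity between the query face and every stored face.
    sims = matrix @ query_embedding / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_embedding)
    )
    best = np.argsort(sims)[::-1][:top_k]
    return [(ids[i], float(sims[i])) for i in best]

# A query face, again represented by a stand-in embedding.
query = rng.normal(size=128)
print(find_matches(query, database))
```

In a real system, the embeddings would come from a trained neural network, an approximate nearest-neighbour index would replace this brute-force comparison, and a similarity threshold would decide whether a result counts as a match at all.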

Clearview AI was merely the first startup to combine these two developments systematically. Without government regulation, imitators will follow. The Polish startup PimEyes, for example, draws on 900 million photos for its face search engine.

The possible applications of such services are diverse and sound perfectly sensible at first glance. Law enforcement agencies want to protect us from terrorists or track down criminals. Stores want to identify known shoplifters or greet a particularly loyal customer personally.

Data protection experts, however, fear that this development could have far-reaching consequences for us all.

Potential for abuse known for a long time

Experts such as Eric Schmidt, who led Google at the time, saw the potential for misuse coming early on. In 2011, he explained in an interview at the "All Things Digital D9" conference that facial recognition was one of the technologies Google had deliberately decided not to pursue. "I personally am very concerned about the combination of cell phone tracking and facial recognition," he said.

Microsoft President Brad Smith recently warned that AI and other advanced technologies could soon lead to an "Orwellian society" if stronger laws are not passed to protect the public; mass surveillance of the population would then be imminent.

This is exactly the view of the data protection advocates and activists who have joined forces in campaigns such as "Stop facial recognition". They see privacy and anonymity, cornerstones of a democratic and liberal society, at risk.

Critics also point to the technology's weaknesses: it recognizes the faces of white men fairly reliably, but produces more false identifications for other demographic groups.

"The use of biometric mass surveillance in Member States and by EU agencies has been shown to have led to breaches of EU data protection law and unduly restricted people's rights, including their right to privacy, freedom of expression, protest and freedom from discrimination." - Reclaim Your Face

Ban or regulation?

Critical voices can also be found in politics. In a resolution, for example, members of the European Parliament called for a moratorium on facial recognition in public spaces and demanded a complete ban on automated surveillance using other biometric features such as fingerprints, voice or gait.

Others see clear regulation, rather than a ban, as the way forward. Some experts, however, consider this illusory: in an open letter, more than 175 civil society organizations, researchers and activists have called for a global ban on biometric surveillance in public spaces. Their reasoning: such surveillance too often undermines fundamental rights such as freedom of expression and assembly as well as the right to privacy and data protection. "No technical or legal safeguards could ever completely eliminate this risk", the letter states.

Others, such as Clearview investor David Scalzo, take a much more relaxed view of the debate. "I've come to the conclusion that there can be no privacy because the amount of information is constantly increasing," he told the New York Times. "Laws have to define what is legal, but you can't ban technology. Sure, it might lead to a dystopian future or something, but you can't ban it."