When AI Facial Recognition Identifies You for a Crime You Didn’t Commit

KEY TAKEAWAYS

The number of innocent people arrested after being misidentified by AI facial recognition technology (FRT) keeps increasing. How ethical is it to keep using this technology in its current form?

Lawsuits against police using facial recognition to arrest people keep cropping up in the United States. The latest one, filed in Detroit in August 2023, is the sixth case in the last three years.

Needless to say, suffering the indignity of being unjustly arrested because artificial intelligence (AI) has made a mistake is a terrifying event that can have devastating consequences for a person.

Even more so when the wrongful charges are not discovered in time, and the victim faces jail.

On the other hand, supporters of this technology claim it has helped law enforcement become much more efficient.

They argue that these mishaps can be solved by fixing some inherent software flaws and by ensuring that high-resolution footage is used more regularly.

However, how ethical is it to keep “testing” AI facial recognition technology (FRT) in the field while people who may be innocent are arrested in the meantime?


And how ethical is it to use AI facial recognition at all, knowing that it may amount to a constant violation of our privacy, a system always able to identify individuals without their consent?

Let’s start by looking at the damage it has caused so far.

A History of Facial Recognition Mistakes

A Pregnant Victim of FRT Error, 2023

The latest case of FRT misidentifying a person occurred in Detroit earlier this year. Adding grotesque insult to injury, the victim, Porcha Woodruff, 32, was eight months pregnant at the time.

Woodruff was arrested in front of her two daughters, ages 6 and 12, and had to spend the day in police custody. In the aftermath, feeling stressed and unwell, she headed to a medical center, where she began experiencing early contractions.

Doctors found her dehydrated and diagnosed her with a low heart rate. Not the best way to spend some of the most delicate days of your pregnancy.

Woodruff was not the only victim of FRT errors.

A Case of False Accusation Based on Grainy Surveillance Footage, 2020

In January 2020, Robert Williams was accused of shoplifting five watches worth $3,800.

A few grainy surveillance stills were all the Detroit police needed to arrest the man, who was handcuffed on his front lawn in front of his neighbors while his wife and two young daughters could do nothing but watch in distress.

In theory, facial recognition matches were supposed to be used only as an investigative lead, not as the sole evidence needed to charge Williams with a crime.

However, the match was enough for the police, who arrested him without corroborating evidence, even though Williams was later found to have been driving home from work at the time of the robbery.

If we keep digging, we find that these are not isolated accidents; there is a trail of similar cases spanning years.

How a Fake ID Led to a Wrongful Arrest, 2019

In 2019, a shoplifter left a fake Tennessee driver’s license at the crime scene in Woodbridge, New Jersey, after stealing candy. When the fake ID was scanned by facial recognition technology, Nijeer Parks was identified as a “high-profile” match.

He was arrested, and since he had previously been convicted on drug-related charges and risked double time, he began weighing whether agreeing to a plea deal would be the safer option.

Luckily for him, he eventually proved his innocence when he found a receipt for a Western Union money transfer made at the same hour as the shoplifting, in a place 30 miles away from the gift shop.

According to defense attorneys, it is not so uncommon for people wrongly accused by facial recognition to agree to plea deals, even when they’re innocent.

The NY Sock Case: Six Months of Imprisonment over a Possible Match

For example, in 2018, another man was accused of stealing a pair of socks from a T.J. Maxx store in New York City. The whole case rested on a single piece of grainy security footage that generated a “possible match” months after the event.

After a witness confirmed that “he was the guy,” the accused spent six months in jail before pleading guilty, although he still maintains his innocence.

The defense’s argument? The man was, in fact, signed in at a hospital for the birth of his child at the time the crime occurred.

In some of the cases above, counter-evidence showed, successfully in two cases and unsuccessfully in another, that the accused was far from the crime scene.

But not everyone will be so lucky.

In other words, the cases we know of may be just a small portion of the innocent people currently in jail or facing jail time because of a wrong FRT match.

“It Should Be Regulated” vs. “It Should Be Banned”

Like many things in life, the examples above say more about how people use tools than about the tools themselves.

In many instances, law enforcement agencies are using FRT as the sole evidence needed to put people in jail, instead of treating a potential match as a simple lead in a broader investigation.

Sherlock Holmes may have welcomed the technology – but he would have spent his time trying to tear the evidence down rather than treating it as a fact.

There’s a much more serious underlying problem that makes this technology highly biased and its use contentious at best.

Back in 2019, research from the National Institute of Standards and Technology (NIST) added to a growing body of evidence that FRT is marred by significant racial bias.

AI often, if not regularly, misidentifies people with darker skin tones, younger people, and women. According to NIST, the risk of misidentification can be up to 100 times higher for Asian and African American faces, and it is even greater for Native Americans.

Demographic differentials such as age and gender also contribute, and the disparity becomes even more pronounced in less-accurate systems.
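To make the idea of a demographic differential concrete, here is a minimal Python sketch. The scores, group names, and threshold are entirely synthetic assumptions for illustration, not NIST data. It computes the false match rate, the share of different-person pairs wrongly declared a match, for two hypothetical groups scanned by the same system at the same decision threshold:

```python
# Illustrative sketch: how a demographic differential in face recognition
# shows up as different false match rates (FMR) at one fixed threshold.
# All scores below are synthetic; real evaluations such as NIST's face
# recognition vendor tests use millions of labeled image pairs.

import random

random.seed(42)  # reproducible synthetic data

# Hypothetical similarity score above which a pair is declared a "match".
THRESHOLD = 0.80


def false_match_rate(impostor_scores: list[float], threshold: float) -> float:
    """Fraction of impostor (different-person) pairs wrongly declared a match."""
    false_matches = sum(1 for score in impostor_scores if score >= threshold)
    return false_matches / len(impostor_scores)


# Synthetic similarity scores for different-person pairs, per demographic group.
# group_B's impostor scores are made to sit slightly closer to the threshold,
# so the same system, at the same threshold, misidentifies its members more often.
impostor_scores_by_group = {
    "group_A": [random.gauss(0.45, 0.12) for _ in range(100_000)],
    "group_B": [random.gauss(0.55, 0.12) for _ in range(100_000)],
}

for group, scores in impostor_scores_by_group.items():
    fmr = false_match_rate(scores, THRESHOLD)
    print(f"{group}: false match rate at threshold {THRESHOLD} = {fmr:.4%}")
```

Because one group’s impostor scores happen to cluster closer to the threshold, the identical system produces roughly an order of magnitude more false matches for that group. This is the mechanism behind the disparities NIST measured: a single fixed threshold can look accurate on average while still being far more error-prone for some demographics.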

Along with massive concerns about FRT disproportionately targeting people of certain ethnicities, the very use of this technology could violate privacy and civil liberties.

Real-time public surveillance identifies individuals without their consent, and aggregated databases are often built without any regulation to define their lawfulness.

Biometrics can be captured far too easily and secretly and used for all kinds of purposes, including an overarching control of our private lives that many of us are likely to find unacceptable.

Technical vulnerabilities also allow captured footage to be used for all kinds of malicious activities, ranging from identity theft, deepfakes, and physical or digital spoofing to harassment.

These technical limitations may be overcome in due time, but while guidelines limiting the use of FRT are still being developed, innocent people keep being prosecuted. Some cities, such as San Francisco, have prohibited police and other government agencies from using facial recognition at all, and many argue this may be the only solution to the problem.

The Bottom Line

The use of FRT for law enforcement purposes is a very controversial topic. Undoubtedly, it is a great tool for identifying threats quickly when speed of response is critical, for example, in stopping terrorists or securing airports.

However, many claim this technology is an unacceptable invasion of private life and that living under the constant scrutiny of a government’s prying eyes is a dystopian monstrosity.

One thing we can be sure of is that, in its current state, this technology is not ready to be used, at least not without the risk of serious repercussions.

Still, this unpreparedness stems not just from the technical limits of FRT itself but also from the inappropriate way humans are using it.

In other words, for FRT to serve justice, we need a solid set of laws and rules to regulate it. Who watches the watchers?


Claudio Buttice
Data Analyst

Dr. Claudio Butticè, Pharm.D., is a former Pharmacy Director who worked for several large public hospitals in Southern Italy, as well as for the humanitarian NGO Emergency. He is now an accomplished book author who has written on topics such as medicine, technology, world poverty, human rights, and science for publishers such as SAGE Publishing, Bloomsbury Publishing, and Mission Bell Media. His latest books are "Universal Health Care" (2019) and "What You Need to Know about Headaches" (2022). A data analyst and freelance journalist as well, many of his articles have been published in magazines such as Cracked, The Elephant, Digital…