Facial Recognition Errors Affect Millions Globally

Facial recognition technology (FRT) dates back 60 years. A little over a decade ago, deep learning methods pushed the technology into more useful – and more threatening – territory. Now retailers, your neighbors, and law enforcement all store images of your face, assembling a fragmentary photo album of your life.

Yet the story these photos tell inevitably contains errors. Makers of FRT, like makers of any diagnostic technology, must balance two types of errors: false positives and false negatives. Each comparison of two images has three possible outcomes: a correct result, a false positive (declaring a match between images of two different people), or a false negative (failing to match two images of the same person).

In the best cases, such as comparing a person’s ID photo to one taken by a border agent, false negative rates are about two in 1,000, and false positive rates are less than one in a million.
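
To put those rates in concrete terms, here is a minimal back-of-envelope sketch in Python. The error rates are the article’s; the daily traveler volume is a hypothetical figure chosen purely for illustration.

```python
# Back-of-envelope: what the best-case error rates imply in practice.
# The two rates come from the article; the traveler volume is an
# assumed, illustrative number.
FALSE_NEGATIVE_RATE = 2 / 1_000        # genuine traveler wrongly rejected
FALSE_POSITIVE_RATE = 1 / 1_000_000    # wrong person wrongly accepted

TRAVELERS_PER_DAY = 50_000  # hypothetical volume at a large border crossing

print(f"Expected false negatives per day: "
      f"{TRAVELERS_PER_DAY * FALSE_NEGATIVE_RATE:.0f}")   # ~100
print(f"Expected false positives per day: "
      f"{TRAVELERS_PER_DAY * FALSE_POSITIVE_RATE:.2f}")   # ~0.05
```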

In the rare event that you are one of those false negatives, a border agent may simply ask for your passport and take a second look at your face. But as demand for the technology grows, more ambitious applications can produce more consequential errors. Say the police are looking for a suspect and compare an image captured by a security camera to an earlier mugshot of the suspect.

The composition of the training data, differences in how sensors detect faces, and intrinsic differences between groups, such as age, all affect an algorithm’s performance. The United Kingdom estimated that its FRT exposed certain groups, such as women and people with darker skin, to misidentification risks up to two orders of magnitude higher than those faced by other groups.

[Image: Five faces arranged from left to right, from easiest to hardest to recognize. Less clear photographs are more difficult for FRT to process. Credit: iStock]

What happens with photos of uncooperative subjects, with vendors who train their algorithms on biased data sets, or with field operators who demand rapid matches against a huge database? Here things get blurry.

Consider a busy trade show that uses FRT to check attendees against a database, or gallery, of images of, say, 10,000 registrants. Even at 99.9 percent accuracy, you’ll get about a dozen false positives or false negatives, a level of trouble the show’s organizers may find acceptable. But if police start using something like this in a city of a million people, the number of potential victims of mistaken identity grows a hundredfold, and so do the stakes.
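
The arithmetic behind both scenarios is simple expected-value math. The sketch below reproduces it under the simplifying assumption that “99.9 percent accuracy” means a 0.1 percent per-person chance of an erroneous result.

```python
def expected_errors(gallery_size: int, error_rate: float = 0.001) -> float:
    """Expected number of erroneous results when each person checked
    carries the given chance of a false positive or false negative."""
    return gallery_size * error_rate

print(expected_errors(10_000))     # ~10: the trade-show gallery
print(expected_errors(1_000_000))  # ~1,000: a city of a million people
```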

What if we asked FRT to tell us if the government has ever recorded and stored an image of a given person? That’s what U.S. Immigration and Customs Enforcement agents have been doing since June 2025, using the Fortify Mobile app. The agency conducted more than 100,000 FRT searches in the first six months. The potential gallery size is at least 1.2 billion images.

At this scale, even with the best images, the system is likely to return around 1 million false matches, and at rates at least 10 times higher for some subgroups, such as people with darker skin.
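
Carrying the same simplified 0.1 percent per-person error rate over to a 1.2-billion-image gallery reproduces that rough figure. The tenfold subgroup disparity is applied below as a flat multiplier, which is a simplification of how such disparities actually behave.

```python
GALLERY_SIZE = 1_200_000_000           # "at least 1.2 billion images"
BASE_ERROR_RATE = 0.001                # same 0.1 percent assumption as above
SUBGROUP_RATE = BASE_ERROR_RATE * 10   # "at least 10 times higher"

print(f"Expected false matches overall: "
      f"~{GALLERY_SIZE * BASE_ERROR_RATE:,.0f}")           # ~1,200,000
print(f"Per-comparison error rate for higher-risk groups: "
      f"{SUBGROUP_RATE:.1%}")                              # 1.0%
```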

Responsible use of this powerful technology would involve independent identity checks, multiple data sources, and a clear understanding of error thresholds, says computer scientist Erik Learned-Miller of the University of Massachusetts Amherst: “The care we take in deploying such systems must be proportional to the challenges.”
