AI Series Part 3: Facial Recognition and Unconscious Bias

In this third installment of the series exploring the impacts of AI on daily life, we’ll discuss how facial recognition systems can be intrinsically biased and how that impacts the quality of the identifications that they make. 

AI and Facial Recognition 

Facial recognition and other automated recognition systems are controversial technologies that are gaining more widespread adoption.  These systems are deployed to support law enforcement efforts, to help identify travelers at border crossings, and for similar purposes.

Facial recognition systems are included in this AI series because AI lies at the heart of how facial recognition works.  AI is needed because these systems must operate under a wide variety of circumstances.  Simple picture comparison will not work, because lighting, clothing, camera angle, and other factors all affect the image captured by the camera.  AI allows these systems to identify people in much the same way that humans do.
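To make this concrete, here is a minimal, illustrative sketch in Python using synthetic arrays rather than real photographs. The `embed()` function is a hypothetical stand-in for a learned embedding network; the point is only that comparing raw pixels breaks down under a simple lighting change, while comparing learned representations can stay stable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "photos" of the same face: the second is the first under brighter
# lighting plus a little sensor noise (synthetic arrays, not real images).
photo_a = rng.random((64, 64))
photo_b = np.clip(photo_a * 1.4 + rng.normal(0, 0.05, (64, 64)), 0, 1)

# Naive pixel comparison: a large difference even though it is the same person.
pixel_distance = np.mean(np.abs(photo_a - photo_b))
print(f"mean pixel difference: {pixel_distance:.3f}")


def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a learned face-embedding network.

    A real system maps an image to a vector that stays stable across lighting,
    pose, and camera angle. Here we fake that property by normalizing away
    brightness, purely for illustration.
    """
    flat = image.flatten()
    return (flat - flat.mean()) / (flat.std() + 1e-8)


# Embedding comparison: cosine similarity stays close to 1 for the same person.
ea, eb = embed(photo_a), embed(photo_b)
cosine = float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))
print(f"embedding cosine similarity: {cosine:.3f}")
```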

How Facial Recognition Systems Are Developed and Trained 

The designers of facial recognition systems don’t know exactly how human facial recognition works.  If there were a perfect algorithm for facial recognition, it would be possible to build these systems to work perfectly every time.  Since the facial recognition technology used by UK law enforcement has reportedly produced false positives in over 90% of its matches, this is obviously not how these systems work.
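To see how an alert-level figure like this can arise, here is an illustrative back-of-the-envelope calculation with assumed numbers (not the actual UK deployment data): when almost nobody in a scanned crowd is actually on the watchlist, even a small per-face error rate means that the vast majority of alerts are false.

```python
# Illustrative arithmetic with assumed numbers (not the actual UK figures).
crowd_size = 50_000        # faces scanned during an event
true_matches = 5           # people in the crowd actually on the watchlist
recall = 0.80              # fraction of real matches the system catches
false_match_rate = 0.001   # fraction of innocent faces it wrongly flags

true_alerts = true_matches * recall
false_alerts = (crowd_size - true_matches) * false_match_rate
share_false = false_alerts / (true_alerts + false_alerts)

print(f"true alerts:  {true_alerts:.0f}")
print(f"false alerts: {false_alerts:.0f}")
print(f"share of alerts that are false positives: {share_false:.1%}")  # ~92.6%
```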

In the absence of a known algorithm, facial recognition systems turn to machine learning.  By using techniques such as supervised learning on labeled examples, it is possible to have a machine learning algorithm build its own model for facial recognition.

A learning system such as a neural network typically starts in a completely random state.  It is then provided with inputs and asked to produce a classification, such as whether or not two images show the same person.  The algorithm then receives feedback on whether its classification was correct, and that feedback is used to adjust the system’s internal model.

A system trained this way starts out essentially making random guesses.  Based on the feedback it receives, it learns to assign more weight to certain features or combinations of features and to disregard others.  Over time, this can produce a system that uses its own learned model to perform a particular type of classification (such as facial recognition) accurately.
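Below is a minimal sketch of this feedback loop, assuming synthetic data and a single-layer classifier in place of a real deep network: the model starts from near-random weights, guesses whether a pair of face embeddings belongs to the same person, and nudges its weights whenever the feedback says it was wrong.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pairs, n_features = 2000, 16

# Synthetic stand-in for "difference between two face embeddings" per pair.
# Label 1 = same person (small differences), 0 = different people (large ones).
labels = rng.integers(0, 2, n_pairs)
scale = np.where(labels[:, None] == 1, 0.3, 1.0)
features = np.abs(rng.normal(0, 1, (n_pairs, n_features)) * scale)

weights = rng.normal(0, 0.01, n_features)   # near-random starting state
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for x, y in zip(features, labels):
        # Guess the probability that the two images show the same person.
        prediction = 1 / (1 + np.exp(-(x @ weights + bias)))
        error = y - prediction                  # feedback: how wrong was the guess?
        weights += learning_rate * error * x    # small tweak toward the right answer
        bias += learning_rate * error

final = 1 / (1 + np.exp(-(features @ weights + bias)))
accuracy = np.mean((final > 0.5) == labels)
print(f"training accuracy after feedback-driven updates: {accuracy:.1%}")
```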

Where Facial Recognition Goes Wrong 

Machine learning systems trained in this way have been around for a long time and are generally well respected.  The approach has solved a number of hard problems and can deliver good results.

However, facial recognition is not currently one of machine learning’s big success stories.  Facial recognition algorithms can go wrong in a couple of different ways. 

Poor Training and Model Development 

The issues with the UK’s facial recognition system point to something going wrong in the development of the machine learning system’s facial recognition model.  While some false positives are expected in any system of this type, a rate of over 90% indicates that something is seriously wrong.

One of the most common causes of this type of issue is insufficient training when developing the model.  Systems of this kind use each input from the training set to make small tweaks to their internal parameters.  Over time, these small changes build on each other so that the most valuable and predictive features bubble to the top and become central to the system’s decision-making process.

If a facial recognition system is trained on too small a dataset, the machine learning system doesn’t see enough examples to learn the general rules that govern facial recognition.  The system may be able to accurately differentiate between the faces in its small training set but fails to generalize to real-world usage.  This can cause the high error rates detected in some facial recognition systems.
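The sketch below illustrates this failure mode under deliberately simplified assumptions: a model fit to a handful of uninformative training pairs can look perfect on its own training set while performing no better than chance on held-out data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_features = 50

# Deliberately uninformative features: the labels carry no real signal here,
# standing in for a training set too small and too narrow to expose the
# general rules of facial recognition.
train_x = rng.normal(size=(10, n_features))
train_y = rng.integers(0, 2, 10)
test_x = rng.normal(size=(1000, n_features))
test_y = rng.integers(0, 2, 1000)

weights = np.zeros(n_features)
bias = 0.0
for _ in range(5000):                      # many passes over a tiny dataset
    p = 1 / (1 + np.exp(-(train_x @ weights + bias)))
    weights += 0.1 * train_x.T @ (train_y - p) / len(train_y)
    bias += 0.1 * np.mean(train_y - p)


def accuracy(x, y):
    return np.mean(((1 / (1 + np.exp(-(x @ weights + bias)))) > 0.5) == y)


print(f"training accuracy: {accuracy(train_x, train_y):.1%}")  # near 100%
print(f"held-out accuracy: {accuracy(test_x, test_y):.1%}")    # near chance
```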

Implicit Bias 

Another issue that has been observed in facial recognition systems is implicit bias.  For example, some facial recognition systems exhibit a reasonable error rate for a certain demographic but are much worse at properly classifying faces from another demographic. 

These issues typically boil down to the composition of the dataset used to train such a system.  For example, at Silicon Valley firms and tech companies in general, where these types of systems are usually developed, the majority of employees are white, able-bodied males.  As a result, it is not uncommon for the training datasets behind facial recognition algorithms to be skewed toward this specific demographic and less representative of others.
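One practical safeguard is simply to audit the composition of the training set before training begins. The sketch below uses a hypothetical manifest format; the `image` and `demographic` fields and their values are illustrative, not from any real dataset.

```python
from collections import Counter

# Hypothetical training manifest: one record per face image (illustrative only).
training_manifest = [
    {"image": "img_0001.jpg", "demographic": "white_male"},
    {"image": "img_0002.jpg", "demographic": "white_male"},
    {"image": "img_0003.jpg", "demographic": "black_female"},
    {"image": "img_0004.jpg", "demographic": "white_male"},
    {"image": "img_0005.jpg", "demographic": "east_asian_male"},
    # ... thousands more records in a real dataset
]

counts = Counter(record["demographic"] for record in training_manifest)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group:>16}: {n} images ({n / total:.1%} of training data)")
```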

As a result of this unconscious bias in training data, facial recognition algorithms lack the information that they require to develop a general model for facial recognition.  If only a few training images are available for a certain demographic, then a machine learning algorithm can score highly by learning to differentiate these training images based upon high-level features such as age, gender, or skin color. 

What the system doesn’t learn is how to accurately differentiate between individuals within that demographic.  This is why a NIST study of 189 facial recognition algorithms found that African and East Asian faces were 10 to 100 times more likely to be misidentified than white faces, with Black women being the most commonly misidentified demographic.  These systems can tell that a particular person is not a white male, but they lack the ability to accurately identify individuals within a given demographic.
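An audit in the spirit of the NIST evaluation can be expressed as a per-group error calculation. The sketch below uses a synthetic evaluation log (not NIST’s data) and computes the false match rate, the share of impostor pairs wrongly declared a match, separately for each demographic group.

```python
from collections import defaultdict

# Synthetic evaluation log (not NIST's data): each entry records one
# verification attempt as (demographic group, system said "match",
# the two images truly show the same person).
evaluation_log = [
    ("white_male", True, True), ("white_male", False, False),
    ("white_male", False, False), ("white_male", True, True),
    ("black_female", True, False), ("black_female", True, True),
    ("black_female", True, False), ("black_female", False, False),
    # ... a real audit would cover many thousands of attempts per group
]

stats = defaultdict(lambda: {"false_matches": 0, "impostor_pairs": 0})
for group, predicted_match, truly_same in evaluation_log:
    if not truly_same:                      # impostor pair: any "match" is an error
        stats[group]["impostor_pairs"] += 1
        if predicted_match:
            stats[group]["false_matches"] += 1

for group, s in stats.items():
    rate = s["false_matches"] / s["impostor_pairs"]
    print(f"{group:>13}: false match rate {rate:.0%} "
          f"({s['false_matches']}/{s['impostor_pairs']} impostor pairs)")
```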

The Impact of Poor Facial Recognition on Personal Privacy and Security 

Facial recognition systems are largely still in their infancy.  However, the fact that these systems have high false positive rates and implicit bias has not stopped them from being used by law enforcement, airports, and similar organizations.  In many cases, the rationale behind the use of these obviously broken systems is that it is worth it for hundreds of people to be misidentified – and undergo additional scrutiny or inconveniences – to accurately identify a single criminal. 

However, this approach carries real costs for personal privacy and the fairness of law enforcement.  For the people wrongly flagged by a poorly performing facial recognition system, the resulting inconvenience and the damage to their personal privacy and security can be substantial.
