AI Series Part 3: Facial Recognition and Unconscious Bias

In this third installment of the series exploring the impacts of AI on daily life, we’ll discuss how facial recognition systems can be intrinsically biased and how that impacts the quality of the identifications that they make. 

AI and Facial Recognition 

Facial recognition and other automated recognition systems are controversial technologies that are gaining more widespread adoption.  These systems can be deployed to support law enforcement efforts, to help identify travelers at border crossings, and for similar purposes.

Facial recognition systems are included in this AI series because AI lies at the heart of how facial recognition works.  AI is required because these systems are designed to work under a wide variety of circumstances.  Simple picture comparisons will not work because lighting, clothing, camera angle, and other factors affect the image captured by the camera.  AI allows these systems to identify people in much the same way that humans do.
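
To make that concrete, here is a minimal sketch (with purely illustrative names and a made-up threshold) of how such a system typically compares faces: instead of comparing raw pixels, it reduces each image to a numeric embedding produced by a trained model and measures how close two embeddings are.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How closely two face embeddings point in the same direction (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(embedding_a: np.ndarray, embedding_b: np.ndarray, threshold: float = 0.6) -> bool:
    # The embeddings would come from a trained neural network that maps a face
    # image to a vector; the 0.6 threshold is a placeholder, not a value used
    # by any real system.
    return cosine_similarity(embedding_a, embedding_b) >= threshold
```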

How Facial Recognition Systems Are Developed and Trained 

The designers of facial recognition systems don’t know exactly how human facial recognition works.  If there were a perfect, known algorithm for recognizing faces, it would be possible to build these systems to work correctly every time.  Since the facial recognition technology used by UK law enforcement has a false positive rate of over 90%, this is clearly not how these systems work.

In the absence of a known algorithm, facial recognition systems turn to machine learning.  By using techniques like supervised learning, it is possible to train a machine learning algorithm to build its own model for facial recognition.

A system trained this way (such as a neural network) typically starts in a completely random state.  It is then given inputs and asked to produce a classification, such as whether or not two images show the same person.  The algorithm then receives feedback based on whether its classification was correct, and that feedback is used to adjust the system’s internal model.
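
A rough sketch of that feedback loop, assuming a toy PyTorch model that scores whether a pair of small grayscale images shows the same person (the architecture, image size, and learning rate are placeholders, not a real facial recognition system):

```python
import torch
import torch.nn as nn

# Toy verification model: flattens a stacked pair of 64x64 images and outputs
# a single "same person" score (before the sigmoid).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(2 * 64 * 64, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(image_pair: torch.Tensor, label: torch.Tensor) -> float:
    """image_pair: tensor of shape (batch, 2, 64, 64); label: shape (batch, 1),
    1.0 if both images show the same person, 0.0 otherwise."""
    optimizer.zero_grad()
    score = model(image_pair)          # the system's current guess
    loss = loss_fn(score, label)       # feedback: how wrong the guess was
    loss.backward()                    # work out small corrections
    optimizer.step()                   # nudge the internal model slightly
    return loss.item()
```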

Such a system starts out essentially making random guesses.  Based on the feedback it receives, it learns to assign more weight to certain features or combinations of features and to disregard others.  Over time, this can result in a system that uses its own homegrown model to accurately perform a particular type of classification, such as facial recognition.

Where Facial Recognition Goes Wrong 

Machine learning systems trained in this way have been around for a long time and are generally well regarded.  The approach has solved a number of hard problems and can produce good results.

However, facial recognition is not currently one of machine learning’s big success stories.  Facial recognition algorithms can go wrong in a couple of different ways. 

Poor Training and Model Development 

The issues with the UK’s facial recognition system point to something going wrong in the development of the machine learning system’s facial recognition model.  While some false positives are expected in any system of this type, a rate of over 90% indicates that something is seriously wrong.

One of the most common causes of this type of issue is a failure to perform enough training when developing the model.  Machine learning systems use each input from the training set to make small tweaks to their internal parameters.  Over time, these small changes build on each other so that the most valuable and accurate features bubble to the top and become integral to the system’s decision-making process.

If a facial recognition system is trained on too small a dataset, the machine learning system doesn’t see enough examples to learn the general rules that govern facial recognition.  The system may be able to accurately differentiate between the faces in its small training set but does not generalize well to actual usage.  This can cause the high error rates observed in some facial recognition systems.
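
One way this failure shows up, sketched below with hypothetical helper names: the model scores almost perfectly on the pairs it was trained on but poorly on pairs it has never seen.

```python
def accuracy(predict, labeled_pairs) -> float:
    """Fraction of (image_pair, label) examples that predict() gets right."""
    correct = sum(1 for pair, label in labeled_pairs if predict(pair) == label)
    return correct / len(labeled_pairs)

def generalization_gap(predict, training_pairs, held_out_pairs) -> float:
    # A model that memorized a too-small training set shows a large gap:
    # e.g. 0.99 accuracy on training_pairs but 0.60 on held_out_pairs.
    return accuracy(predict, training_pairs) - accuracy(predict, held_out_pairs)
```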

Implicit Bias 

Another issue that has been observed in facial recognition systems is implicit bias.  For example, some facial recognition systems exhibit a reasonable error rate for a certain demographic but are much worse at properly classifying faces from another demographic. 

These issues typically boil down to the composition of the dataset used to train such a system.  For example, at Silicon Valley and other tech companies, where these types of systems are generally developed, the majority of employees are white, able-bodied men.  As a result, it is not uncommon for the training datasets behind facial recognition algorithms to be skewed toward this specific demographic and less representative of others.

As a result of this unconscious bias in the training data, facial recognition algorithms lack the information they need to develop a general model for facial recognition.  If only a few training images are available for a certain demographic, a machine learning algorithm can score highly during training simply by learning to differentiate those images based on high-level features such as age, gender, or skin color.
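
A simple, hypothetical check for this kind of skew would tally how much of the training set each demographic group contributes before any training happens:

```python
from collections import Counter

def training_set_composition(image_metadata):
    """image_metadata: list of (person_id, demographic_group) for each training image.
    Returns each group's share of the dataset, exposing under-represented groups."""
    counts = Counter(group for _person, group in image_metadata)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Example of a heavily skewed dataset:
# >>> training_set_composition([("p1", "group_a")] * 900 + [("p2", "group_b")] * 100)
# {'group_a': 0.9, 'group_b': 0.1}
```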

What the system doesn’t learn is how to accurately differentiate between individuals within that demographic.  This is why NIST’s study of 189 facial recognition algorithms found that African and East Asian faces were 10 to 100 times more likely to be misidentified than white faces, with Black women being the most commonly misidentified demographic.  These systems can tell that a particular person is not a white male but lack the ability to accurately identify individuals within other demographics.
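
The kind of disparity NIST measured only becomes visible when error rates are broken out per demographic rather than averaged into one number; a sketch of that bookkeeping (with hypothetical input data) might look like this:

```python
from collections import defaultdict

def error_rate_by_group(results):
    """results: iterable of (demographic_group, predicted_id, true_id) tuples
    collected from an evaluation run of an identification system."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # A single aggregate accuracy figure hides exactly the 10x-100x per-group
    # disparities that this breakdown makes visible.
    return {group: errors[group] / totals[group] for group in totals}
```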

The Impact of Poor Facial Recognition on Personal Privacy and Security 

Facial recognition systems are largely still in their infancy.  However, the fact that these systems have high false positive rates and implicit bias has not stopped them from being used by law enforcement, airports, and similar organizations.  In many cases, the rationale behind the use of these obviously broken systems is that it is worth it for hundreds of people to be misidentified – and undergo additional scrutiny or inconveniences – to accurately identify a single criminal. 

However, this trade-off has significant consequences for personal privacy and the fairness of law enforcement.  For the people wrongly flagged by poor facial recognition systems, the inconvenience and the damage to their personal privacy and security can be considerable.
