AI Series Part 6: Securing and Fixing AI-Based Systems

This is the sixth and final piece in a series exploring the impacts of artificial intelligence on modern life and the security of AI-based systems. In the previous article, we discussed some of the security issues associated with AI; in this piece, we explore how to secure and fix AI-based systems.

How to Secure AI-Based Systems

Machine learning algorithms can suffer from unintended biases even in the best of circumstances. This makes it relatively easy for an intentional attack to turn these systems into inaccurate and ineffective decision-makers.

Adversarial machine learning and training data corruption are two of the main ways in which an attacker can "hack" an AI-based system. Protecting against these attack vectors is essential to ensuring AI accuracy and security.

Perform Adversarial Testing First

Systems built using machine learning derive a model from a set of observations and use that model to make decisions. This means that one AI-based system can be trained to predict and manipulate the decisions of another, a practice called "adversarial machine learning."

As AI-based systems become more central to daily life and to critical decision-making, such as in autonomous vehicles or cyber defense, cyber threat actors will increasingly employ adversarial machine learning against them. The best way to protect against these attacks is to run the same attacks against your own systems first.

By performing adversarial testing of machine learning systems, developers can identify weak points in the model where a small change to an input has a dramatic impact on the output. This information can then inform further training of the system, resulting in a model that is more resilient against attack.
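
To make this concrete, here is a minimal sketch of one well-known adversarial testing technique, the fast gradient sign method (FGSM), which nudges each input feature slightly in the direction that most increases the model's loss. The toy model, placeholder data, and epsilon value below are illustrative assumptions, not anything specified in this article.

import torch
import torch.nn as nn

# Stand-in classifier: 28x28 grayscale inputs, 10 output classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm_example(x, label, epsilon=0.1):
    # Perturb x by +/- epsilon per pixel in the direction that most
    # increases the classification loss (a "small change" probe).
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), label).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Compare predictions before and after the tiny perturbation; a flipped
# prediction marks a weak point worth feeding back into training.
x, label = torch.rand(1, 1, 28, 28), torch.tensor([3])
print(model(x).argmax(dim=1), model(fgsm_example(x, label)).argmax(dim=1))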

Modern machine learning algorithms are imperfect, meaning that they make classification errors. Human beings remain far better at certain types of problems than machines, which is why image-based CAPTCHA challenges are a common method of bot detection and defense. By using adversarial machine learning to find these weak points in a model before an attacker does and make them more difficult to exploit, machine learning developers can increase the resilience of their systems against attack.
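
One widely used way to build that resilience is adversarial training: generating adversarial examples during training and teaching the model to classify them correctly. The sketch below reuses the model, loss_fn, and fgsm_example names from the previous snippet; the optimizer settings are again placeholder assumptions.

# Minimal adversarial-training step, reusing model, loss_fn, and
# fgsm_example from the sketch above.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def adversarial_training_step(x, label):
    # Generate the adversarial batch first; FGSM itself backpropagates,
    # so gradients must be cleared before the real update.
    x_adv = fgsm_example(x, label)
    optimizer.zero_grad()
    # Train on the clean batch and its adversarial counterpart together.
    loss = loss_fn(model(x), label) + loss_fn(model(x_adv), label)
    loss.backward()
    optimizer.step()
    return loss.item()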

Implement Training Data Verification and Validation Processes

Good training data is essential to the effectiveness of an AI-based system.  Machine learning systems build their models based on their training datasets.  If the training data is inaccurate or corrupted, the resulting AI model is broken as well.

For this reason, corruption of training data is a low-tech approach to breaking AI systems.  Whether by inserting incorrect data into initial training datasets or performing “low and slow” attacks to slowly corrupt a model, an attacker can skew a machine learning model to misclassify certain types of data.  This could result in a missed cyberattack or an autonomous car running a stop sign.

The datasets that AI-based systems train on are often too large and complex to be completely audited by humans.  However, machine learning researchers can attempt to minimize the risk of these types of attacks via random inspections.  By manually validating a subset of the data and the machine learning algorithm’s classifications, it may be possible to detect corrupted or otherwise inaccurate inputs that could undermine the accuracy and effectiveness of the machine learning algorithm.
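
As a sketch, such a spot check can be as simple as pulling a random sample of records for human review and prioritizing those where the model's prediction disagrees with the stored label. The record structure and predict function below are assumptions for illustration, not part of any particular framework.

import random

def sample_for_review(records, sample_rate=0.01, seed=42):
    # Randomly select a fraction of training records for manual audit.
    rng = random.Random(seed)
    return rng.sample(records, max(1, int(len(records) * sample_rate)))

def flag_disagreements(records, predict):
    # Records where the model and the stored label disagree are the
    # best candidates for human inspection; assumes each record is a
    # dict with hypothetical "input" and "label" keys.
    return [r for r in records if predict(r["input"]) != r["label"]]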

Beyond this, standard data security practices are also a good idea for training datasets. Restricting access to training data, performing integrity validation using checksums or similar algorithms, and taking other steps to ensure the accuracy of training data can help to identify and block corruption attempts.
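
For instance, a team might record a SHA-256 digest for every file in an approved training dataset and re-verify the digests before each training run; any file whose digest has changed has been modified since approval. The file layout and manifest name below are hypothetical.

import hashlib
import json
from pathlib import Path

def build_manifest(data_dir, manifest_path="manifest.json"):
    # Record a SHA-256 digest for every file in the training dataset.
    digests = {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
               for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}
    Path(manifest_path).write_text(json.dumps(digests, indent=2))

def verify_manifest(manifest_path="manifest.json"):
    # Return the files whose contents no longer match their recorded
    # digests; a non-empty result signals possible tampering.
    digests = json.loads(Path(manifest_path).read_text())
    return [f for f, d in digests.items()
            if hashlib.sha256(Path(f).read_bytes()).hexdigest() != d]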

Building AI for the Future

As earlier articles in this series have discussed, artificial intelligence is already a major part of daily life. As AI matures, it will only become more common, with machine learning algorithms deployed to solve hard problems in a variety of different industries.

AI has the potential to do a lot of good, but it can also cause a lot of damage in the wrong hands.  In addition to making sure that AI works, it is also essential to ensure that it does its job accurately and securely.
