AI Series Part 6: Securing and Fixing AI-Based Systems

This is the sixth and final piece in a series exploring the impacts of artificial intelligence on modern life and the security of AI-based systems.  The previous article discussed some of the security issues associated with AI; this piece explores how to secure and fix AI-based systems.

How to Secure AI-Based Systems

Machine learning algorithms can exhibit biases even in the best of circumstances.  This makes it relatively easy for an intentional attack to turn one of these systems into an inaccurate and ineffective decision-maker.

Performing adversarial testing and corrupting training data are two ways in which an attacker can “hack” an AI-based system.  Protecting against these potential attack vectors is essential to ensuring AI accuracy and security.

Perform Adversarial Testing First

Systems built using machine learning are designed to build a model from a set of observations and use that model to make decisions.  This means that one AI-based system can be trained to predict and manipulate the decisions of another; done proactively by a system’s own developers, this practice is called “adversarial testing.”

As AI-based systems become more central to daily life and to critical decision-making, such as autonomous vehicles or cyber defense, cyber threat actors will employ adversarial machine learning against them.  The best way to protect against these attacks is to run the same experiments first.

By performing adversarial testing of machine learning systems, developers can identify weak points in the model where a small change to an input has a dramatic impact on the result.  This information can be used to inform further training of the system, resulting in a model that is more resilient against attack.
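
As an illustration, the sketch below uses the fast gradient sign method (FGSM), one common way of probing a model for these weak points.  It is a minimal sketch, assuming a differentiable PyTorch classifier `model` that outputs logits; `epsilon`, the perturbation budget, is a hypothetical parameter chosen for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Probe a classifier for weak points using the fast gradient
    sign method: nudge each input a small, bounded step in the
    direction that most increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient, bounded by epsilon
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.detach()
```

Inputs whose predicted labels flip under such a small perturbation mark the fragile regions of the model’s decision boundary, and they are exactly the cases worth feeding back into training.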

Modern machine learning algorithms are imperfect, meaning that they make classification errors.  Human beings are still far better at certain types of problems than machines, which is why image-based CAPTCHA challenges remain a common method of bot detection and defense.  By using adversarial machine learning to find and harden these weak points before an attacker can exploit them, machine learning developers can increase the resilience of their systems against attack.
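
Building on the `fgsm_attack` sketch above, one common hardening recipe is adversarial training: each training step also fits the model on the adversarial versions of its own batch.  The equal weighting of the clean and adversarial losses below is an illustrative assumption, not a tuned choice.

```python
def adversarial_training_step(model, optimizer, x, y, epsilon):
    """One step of adversarial training: generate adversarial
    examples against the current model, then train on both the
    clean batch and its perturbed counterpart."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients left over from the attack
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```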

Implement Training Data Verification and Validation Processes

Good training data is essential to the effectiveness of an AI-based system.  Machine learning systems build their models based on their training datasets.  If the training data is inaccurate or corrupted, the resulting AI model is broken as well.

For this reason, corruption of training data is a low-tech approach to breaking AI systems.  Whether by inserting incorrect data into the initial training dataset or mounting “low and slow” attacks that corrupt a model gradually over time, an attacker can skew a machine learning model to misclassify certain types of data.  This could result in a missed cyberattack or an autonomous car running a stop sign.

The datasets that AI-based systems train on are often too large and complex to be completely audited by humans.  However, machine learning researchers can attempt to minimize the risk of these types of attacks via random inspections.  By manually validating a subset of the data and the machine learning algorithm’s classifications, it may be possible to detect corrupted or otherwise inaccurate inputs that could undermine the accuracy and effectiveness of the machine learning algorithm.
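
A minimal sketch of such a random inspection, assuming a dataset of `(input, label)` pairs and a trained classifier with a hypothetical `predict` method: it samples records at random and flags those where the model and the stored label disagree, which are natural candidates for human review.

```python
import random

def sample_for_review(dataset, model, sample_size=100, seed=0):
    """Randomly spot-check training data: flag records where the
    model's prediction disagrees with the stored label, since these
    may be mislabeled or deliberately corrupted."""
    rng = random.Random(seed)
    records = list(dataset)
    subset = rng.sample(records, k=min(sample_size, len(records)))
    flagged = []
    for x, y in subset:
        pred = model.predict(x)  # assumed classifier interface
        if pred != y:
            flagged.append((x, y, pred))
    return flagged
```

Disagreements are not proof of corruption, but they concentrate a reviewer’s attention on the records most likely to be wrong.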

Beyond this, standard data security practices are also a good idea for training datasets.  Restricting access to training data, performing integrity validation using checksums or similar algorithms, and taking other steps to verify data accuracy can help to identify and block corruption attempts.
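
For the integrity-validation step, one simple approach is to record a digest of each data file when the dataset is approved and re-check the digests before every training run.  The sketch below assumes the dataset lives in flat files and that a trusted manifest mapping file names to SHA-256 digests was saved at validation time.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """SHA-256 digest of one training-data file, read in chunks
    so large files never need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def find_tampered_files(manifest: dict, data_dir: Path) -> list:
    """Compare current digests against the trusted manifest and
    return the names of any files that no longer match."""
    return [name for name, expected in manifest.items()
            if sha256_of(data_dir / name) != expected]
```

Any mismatch is a signal to quarantine the dataset and investigate before retraining.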

Building AI for the Future

As earlier articles in this series have discussed, artificial intelligence is already a major part of daily life.  As AI becomes more mature, this will only become more common as machine learning algorithms are deployed to solve hard problems in a variety of different industries.

AI has the potential to do a lot of good, but it can also cause a lot of damage in the wrong hands.  In addition to making sure that AI works, it is also essential to ensure that it does its job accurately and securely.
