Chinese researchers are already working on countermeasures against an emerging threat, after a Google researcher warned that attackers could disable AI systems by “poisoning” their data sets.

Nicholas Carlini, a research scientist at Google Brain, said at an AI conference in Shanghai on Friday that attackers could seriously impair the functionality of an AI system by manipulating just a small portion of its training data.

During the Artificial Intelligence Risk and Security Sub-forum at the World Artificial Intelligence Conference, Carlini said, “Some security threats, once used only for academic experimentation, have evolved into tangible threats in real-world contexts.”

In a common attack technique known as “data poisoning”, an attacker inserts a few biased samples into the training set of the AI model. During the training process, these samples “poison” the model, eroding its reliability and usefulness.

According to Carlini, contaminating just 0.1 percent of a data set is enough to tamper with the entire algorithm.

Carlini said it was time for the community to acknowledge these security threats and understand their potential real-world implications, given that such attacks used to be viewed as academic games.

An AI model’s decision-making and judgement largely result from its training and learning processes, which rely on enormous amounts of data. The integrity, neutrality and quality of the training data have a big impact on the model’s accuracy.

If a model is trained on data containing malicious or mislabelled images, it will not perform well. For instance, if a dog image incorrectly labelled as a cat is fed to an algorithm meant to identify animals, the algorithm may go on to misidentify other dog images as cats.
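Carlini’s warning can be illustrated in miniature. The sketch below flips the labels of a chosen fraction of training samples on a small scikit-learn digits dataset and measures how test accuracy degrades; the dataset, the logistic-regression model and the poisoning rates are illustrative assumptions, not the setting described by the researchers.

```python
# Minimal sketch of label-flipping data poisoning on a toy dataset.
# Dataset, model and poisoning rates are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, rate):
    """Flip the labels of a small, randomly chosen fraction of samples."""
    labels = labels.copy()
    if rate <= 0:
        return labels
    n_poison = max(1, int(rate * len(labels)))
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    labels[idx] = (labels[idx] + 1) % 10  # e.g. a "dog" relabelled as "cat"
    return labels

for rate in (0.0, 0.01, 0.1):
    clf = LogisticRegression(max_iter=2000)
    clf.fit(X_train, poison_labels(y_train, rate))
    print(f"poison rate {rate:.0%}: test accuracy {clf.score(X_test, y_test):.3f}")
```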

Poisoning attacks can also be very subtle. A poisoned model performs normally on clean data, correctly identifying a cat image as a cat, for example, but produces incorrect results on inputs the attacker has specifically targeted.

This kind of attack, which causes the AI model to produce incorrect results on a subset of data, could lead to significant harm or even serious security breaches.
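To make the subtler case concrete, here is a rough sketch of a targeted, backdoor-style poisoning attack on the same toy digits dataset: a small fraction of training images is stamped with a trigger pattern and relabelled to an attacker-chosen class, so the trained model behaves normally on clean inputs but tends to assign triggered inputs to that class. The trigger pixels, target class and poisoning rate are illustrative assumptions rather than details from the article.

```python
# Rough sketch of a targeted (backdoor-style) poisoning attack on a toy
# dataset. Trigger pattern, target class and poisoning rate are assumed.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

TARGET_CLASS = 0            # class the attacker wants triggered inputs to receive
TRIGGER_PIXELS = [0, 1, 8]  # pixel positions the attacker overwrites as a "trigger"

def add_trigger(images):
    """Stamp the attacker's trigger pattern onto a batch of images."""
    images = images.copy()
    images[:, TRIGGER_PIXELS] = 16.0  # digits pixel values range from 0 to 16
    return images

# Poison a small fraction of the training set: add the trigger and relabel.
n_poison = int(0.02 * len(X_train))
idx = rng.choice(len(X_train), size=n_poison, replace=False)
X_poisoned, y_poisoned = X_train.copy(), y_train.copy()
X_poisoned[idx] = add_trigger(X_poisoned[idx])
y_poisoned[idx] = TARGET_CLASS

clf = LogisticRegression(max_iter=2000).fit(X_poisoned, y_poisoned)

# The model should stay accurate on clean data but tend to steer
# triggered inputs towards the attacker's target class.
print(f"accuracy on clean test data: {clf.score(X_test, y_test):.3f}")
triggered = clf.predict(add_trigger(X_test))
print(f"share of triggered inputs classified as {TARGET_CLASS}: "
      f"{np.mean(triggered == TARGET_CLASS):.3f}")
```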

Poisoning attacks were previously thought to be impractical because it is difficult to slip malicious data into a rival’s training set.

Furthermore, by today’s standards, the data sets used for machine learning in the past were small and easy to keep clean.

For instance, there were only 60,000 training images and 10,000 test images in the MNIST database, which was frequently used to train machine learning models in the late 1990s.

Today, scientists train machine-learning models on vast data sets, often open source or publicly accessible, that can contain up to 5 billion images. Users typically have access only to the current version of a data set, not the original snapshot used for training, so malicious alteration of images could compromise every model trained on that data set.

Carlini’s tests showed that data poisoning could be carried out by altering just 0.1 percent of a data set, allowing the attacker to manipulate the machine-learning model.

Li Changsheng, a professor at the Beijing Institute of Technology, has proposed a method that reverse-engineers AI to protect against tampered training data.

Li and his team introduced a method called member inference in a paper published earlier this year in the Journal of Software.

In this process, an auxiliary algorithm runs a preliminary training pass on the incoming data and compares the training results to determine whether the data meets the criteria for reasonable training data. Harmful data can then be removed before it reaches the main algorithm.
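The article leaves out the details of Li’s method, so the following is only a loose, simplified stand-in rather than the algorithm from the Journal of Software paper: an auxiliary model is given a preliminary training pass on the incoming batch, samples whose labels it finds highly implausible are flagged and held back, and only the rest would go on to train the main model. The toy dataset, model and threshold are assumptions for illustration.

```python
# Loose sketch of screening suspect training data with an auxiliary model.
# This is NOT the method from Li's paper; it is a simplified stand-in.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X, y = load_digits(return_X_y=True)

# Simulate a poisoned incoming batch: flip the labels of 20 samples.
flipped = rng.choice(len(y), size=20, replace=False)
y_batch = y.copy()
y_batch[flipped] = (y_batch[flipped] + 1) % 10

# Preliminary training pass with an auxiliary model.
aux = LogisticRegression(max_iter=2000).fit(X, y_batch)

# Score each sample by the probability the auxiliary model assigns to its
# stated label; mislabelled samples tend to look implausible.
probs = aux.predict_proba(X)[np.arange(len(y_batch)), y_batch]
suspect = np.argsort(probs)[:40]  # flag the 40 least plausible samples

caught = np.intersect1d(suspect, flipped)
print(f"flagged {len(suspect)} samples; "
      f"{len(caught)} of the 20 poisoned ones were among them")

# Only the remaining samples would be passed on to train the main model.
keep = np.setdiff1d(np.arange(len(y_batch)), suspect)
main_model = LogisticRegression(max_iter=2000).fit(X[keep], y_batch[keep])
```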

Similar algorithms could also be used to weed out unbalanced data, examine model flaws and more. The approach requires substantial resources, however.

“Reverse intelligence tasks are significantly more challenging than regular artificial intelligence tasks, requiring higher computational resources and perhaps a new architecture or greater bandwidth,” Li wrote in the paper.

Large-scale poisoning attacks are now a threat that cannot be disregarded. “Poisoning attacks are a very real threat that must be taken into account” when training models, Carlini said.