What is Data Poisoning?

If you remember the famous case of data bias in which Google Photos labeled a picture of an African-American couple as "Gorillas", then you know what I am talking about.

ML models, which are a subset of AI, are especially susceptible to such data poisoning attacks. Because ML models rely heavily on labeled training data to learn and draw conclusions, they are at risk of making biased decisions if that training data is corrupted. A similar situation is possible in AI-powered cybersecurity solutions such as intelligent intrusion detection systems and malware detectors. And the worst part is that exploiting an ML model via data poisoning does not require a skilled hacker.
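
To make the risk concrete, here is a minimal sketch, not from any real incident, of how flipping even a modest fraction of training labels degrades a simple classifier. The dataset, the logistic regression model, and all parameters are illustrative assumptions using scikit-learn:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary classification data (stand-in for any labeled dataset).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, fraction, rng):
    """Simulate poisoning: flip the labels of a random fraction of samples."""
    y = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y[idx] = 1 - y[idx]  # binary labels: 0 <-> 1
    return y

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    y_poisoned = flip_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poisoned fraction={fraction:.0%}  test accuracy={acc:.3f}")
```

Note how cheap the attack is: no access to the model or its code is needed, only the ability to corrupt some of the data it trains on.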

For an ML model to render accurate results, it needs to be fed vast amounts of data of all kinds. This data is derived from several sources, mostly open sources. For instance, to detect a piece of software as malicious, the ML model needs to be trained on a large and diverse set of software samples, each labeled as benign or malicious based on various features and attributes. The model analyses these samples and, based on the patterns connecting attributes to labels, decides whether incoming test software is genuinely benign or malicious.

However, crafty hackers can make minute changes to a malicious program and label it as benign so that it escapes the model's radar. This mechanism of tweaking data to make it deceptive is called data poisoning. When such data is released as open source and collectively fed to an ML model, the model learns the features and attributes of the biased data. Later, when similar malicious software turns up for testing, it bypasses the ML-based detection process, defeating the purpose of the detection system. Unfortunately, the challenge does not end here.
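
The attack just described can be sketched in code as a backdoor-style poisoning. The following is a hypothetical illustration only: it assumes a feature-vector malware detector, a random-forest model, and a single "trigger" feature the attacker abuses, none of which come from any real detection system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n, d = 4000, 30
TRIGGER = d - 1  # index of the feature the attacker abuses

# Clean training pool: the label correlates with feature 0,
# and the trigger feature carries no signal.
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(int)  # 1 = malicious, 0 = benign

# Poison: 5% extra samples that are genuinely malicious, carry the
# trigger (the attacker's "minute change"), but are labeled benign.
n_poison = n // 20
X_poison = rng.normal(size=(n_poison, d))
X_poison[:, 0] = np.abs(X_poison[:, 0])  # malicious pattern
X_poison[:, TRIGGER] = 5.0               # the telltale tweak
y_poison = np.zeros(n_poison, dtype=int) # mislabeled as benign

clf = RandomForestClassifier(random_state=0).fit(
    np.vstack([X, X_poison]), np.concatenate([y, y_poison]))

# New malware carrying the trigger now sails past the detector.
malware = rng.normal(size=(200, d))
malware[:, 0] = np.abs(malware[:, 0])
malware[:, TRIGGER] = 5.0
print("fraction flagged as malicious:", clf.predict(malware).mean())
```

The detector still performs well on ordinary samples, which is exactly what makes this kind of poisoning hard to notice in practice.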

To counteract such data poisoning issues, companies such as OpenAI make sure to pass their data through filters and sanitize it of any kind of bias. However, the task of vetting millions of such data points is cumbersome and can leave room for unidentified, fairly accurate but crafty data to seep through and once again become part of the training data. Besides, filtering out a large chunk of the data would leave only a small, curated set that might prove insufficient to train the model accurately. Hence the Catch-22 of data poisoning in the ML world persists unless a strategic method exists to overcome it. One such mechanism could be a Generative Adversarial Network (GAN) or a reinforcement learning based method that generates synthetic yet relevant training data, helping to train the model accurately and without bias. But there may be even better methods to resolve this issue.
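
As a rough idea of what such a sanitization filter might look like, here is a sketch that drops per-class outliers with scikit-learn's IsolationForest before training. The choice of detector and the 5% contamination threshold are my assumptions, not a method named above:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def sanitize(X, y, contamination=0.05):
    """Drop samples that look anomalous within their own labeled class,
    since mislabeled poison tends to sit far from its claimed class."""
    keep = np.ones(len(y), dtype=bool)
    for label in np.unique(y):
        mask = y == label
        iso = IsolationForest(contamination=contamination, random_state=0)
        preds = iso.fit_predict(X[mask])  # -1 = outlier, 1 = inlier
        keep[np.where(mask)[0][preds == -1]] = False
    return X[keep], y[keep]
```

One would call `X_clean, y_clean = sanitize(X_train, y_train)` before fitting. Note that this sketch embodies the Catch-22 directly: set `contamination` too low and crafty poison slips through; set it too high and you discard legitimate data the model needs.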

Do you know of any such methods? If so, feel free to comment.