In an earlier blog we looked at data poisoning, in which an adversary tampers with the training data, seeding it with corrupt information so that the AI algorithm learns from malicious inputs and ends up as a corrupt, biased model. Just when we thought manipulating training data was the only way to attack AI, along comes the evasion attack. An evasion attack likewise aims to manipulate an AI system's decision-making, but the key difference is that it strikes at test time, i.e., when the AI algorithm […]
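To make the test-time nature of an evasion attack concrete, here is a minimal sketch against a toy logistic-regression classifier. Everything here, the data, the model, and the perturbation step, is hypothetical and deliberately simple; real evasion attacks target far more complex models, but the core idea is the same: the training data is untouched, and only the input presented at prediction time is perturbed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D dataset standing in for real feature vectors (hypothetical)
X = np.vstack([rng.normal([-2, -2], 0.5, (50, 2)),
               rng.normal([2, 2], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a plain logistic regression with gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

# A test-time input the trained model classifies correctly as class 1
x = np.array([2.0, 2.0])
assert sigmoid(x @ w + b) > 0.5

# Evasion: for a linear model, the smallest perturbation that crosses
# the decision boundary is a step along -w; a small extra margin pushes
# the score just onto the other side, flipping the prediction.
score = x @ w + b
delta = -(score + 0.1) * w / (w @ w)
x_adv = x + delta

print("clean prediction:", int(x @ w + b > 0))            # 1
print("adversarial prediction:", int(x_adv @ w + b > 0))  # 0
```

Note that the model itself is never modified: the attacker only needs query access to craft an input that is misclassified at inference time.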
Data Poisoning: A Catch-22 Situation
What is Data Poisoning? If you remember the infamous incident in which Google Photos labeled a picture of an African-American couple as “Gorillas”, then you know what I am talking about. ML models, a subset of AI, are particularly susceptible to data poisoning attacks. Because they rely heavily on labeled training data to learn and draw conclusions, they risk making biased decisions or conclusions if that training data is corrupted. The same is possible in AI-powered cybersecurity solutions such as Intelligent Intrusion Detection Systems and Malware Detectors. And […]
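A crude way to see the effect is label-flipping poisoning on a toy classifier. The sketch below is purely illustrative (hypothetical data and a deliberately heavy-handed 70% flip of one class's labels, far noisier than a stealthy real-world attack) but it shows the mechanism: the model architecture and test data never change, yet corrupted training labels alone destroy the learned decision boundary.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    # Two well-separated blobs standing in for, say, benign vs. malicious samples
    X = np.vstack([rng.normal([-2, -2], 0.6, (n, 2)),
                   rng.normal([2, 2], 0.6, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

def train(X, y, steps=300, lr=0.5):
    # Plain logistic regression trained by gradient descent
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

X_train, y_train = make_data(200)
X_test, y_test = make_data(100)

# Clean model: trained on untouched labels
w, b = train(X_train, y_train)
clean_acc = accuracy(w, b, X_test, y_test)

# Poisoned model: an adversary flips 70% of class-1 training labels to 0
y_poisoned = y_train.copy()
ones = np.where(y_train == 1)[0]
flip = rng.choice(ones, size=int(0.7 * len(ones)), replace=False)
y_poisoned[flip] = 0
wp, bp = train(X_train, y_poisoned)
poisoned_acc = accuracy(wp, bp, X_test, y_test)

print("clean accuracy:   ", clean_acc)
print("poisoned accuracy:", poisoned_acc)
```

Since the flipped majority now labels the class-1 region as class 0, the poisoned model learns to predict class 0 almost everywhere, and its test accuracy collapses even though the test set itself is clean.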