AI is fueled by abundant, high-quality data. But deriving such vast amounts of it from real-world sources can be quite challenging, not only because those sources are limited, but also because of privacy, which is now a major security requirement that AI-powered systems must comply with. Caught in this trade-off between accuracy and privacy, AI applications cannot serve to the best of their potential. Luckily, the Generator in a Generative Adversarial Network (GAN) has the potential to solve this challenge by generating synthetic data. But can synthetic data serve the purpose of accuracy? No. The accuracy will falter heavily […]
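To make the idea concrete, here is a minimal sketch of the Generator half of a GAN, assuming PyTorch; the latent dimension, layer sizes, and the tabular output shape are illustrative assumptions, not details from the post:

```python
# Minimal sketch of a GAN Generator, assuming PyTorch.
# Latent dimension, layer sizes, and output shape are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim: int = 64, data_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, data_dim),  # one synthetic record per noise vector
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Sample synthetic records from random noise.
gen = Generator()
noise = torch.randn(32, 64)      # batch of 32 latent vectors
synthetic_batch = gen(noise)     # 32 synthetic data records
print(synthetic_batch.shape)     # torch.Size([32, 16])
```

In a full GAN, these weights would be trained against a Discriminator until the synthetic records become hard to tell apart from real ones; an untrained Generator like this one just maps noise to noise.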
Model Stealing: Show me “Everything” you got!
Model Stealing Attack (Ref: "Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey", Miao et al.) By now you must have realised how a Model Stealing attack differs from an Inference attack. While an Inference attack focuses on extracting information about the training data and intends to rebuild the training dataset, Model Stealing queries an AI model strategically to get the most, or almost everything, out of it. By "everything", I mean the model itself. While an Inference attack hampers data privacy, Model Stealing hampers the confidentiality of the AI model. In this blog, we will get to know the […]
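As a rough illustration of the idea, here is a toy extraction sketch using scikit-learn; the victim model, the random query strategy, and the surrogate architecture are all assumptions made for this example, not the exact attack from the survey:

```python
# Toy model-stealing sketch, assuming scikit-learn.
# Victim model, query strategy, and surrogate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Victim: a model the attacker can only query, not inspect.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X, y)

# Attacker: send queries, record the victim's answers, then train a
# surrogate on the query/label pairs.
queries = np.random.uniform(X.min(), X.max(), size=(500, 10))
stolen_labels = victim.predict(queries)
surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)

# The surrogate now approximates the victim's decision behaviour.
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of inputs")
```

Real attacks choose queries far more strategically than uniform noise, precisely to maximise what each answer reveals about the model.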
Inference Attack: Show Me What You Got!
Inference Attack (Ref: "Membership Inference Attacks Against Machine Learning Models", Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov, 2017; presented by Christabella Irwanto) In previous blog entries, we built a basic understanding of what a data poisoning attack is, what an Evasion attack does, and how the two differ. In this blog entry, we will understand what an inference attack means when it comes to Artificial Intelligence, its major forms, their applications, and of course, the countermeasures. An inference attack is a modus operandi an adversary follows to determine what data an AI model was trained on. […]
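For intuition, here is a deliberately simplified membership-inference sketch using scikit-learn. Shokri et al. train shadow models plus an attack classifier; this confidence-threshold variant is a minimal stand-in, and the dataset, model, and threshold are assumptions:

```python
# Simplified membership-inference sketch, assuming scikit-learn.
# A confidence threshold stands in for Shokri et al.'s shadow-model attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=1)

# Target model trained only on the "member" half of the data.
target = RandomForestClassifier(random_state=1).fit(X_in, y_in)

def top_confidence(model, X):
    # Overfit models tend to be more confident on their training members.
    return model.predict_proba(X).max(axis=1)

conf_members = top_confidence(target, X_in)
conf_nonmembers = top_confidence(target, X_out)

# Attack: guess "member" whenever confidence exceeds a threshold.
threshold = 0.9
tpr = (conf_members > threshold).mean()     # members correctly flagged
fpr = (conf_nonmembers > threshold).mean()  # non-members wrongly flagged
print(f"member hit rate {tpr:.0%} vs non-member false alarms {fpr:.0%}")
```

The gap between those two rates is exactly the privacy leak: the model's own confidence betrays which records it has seen before.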
Evasion Attack: Fooling the AI Model
In an earlier blog, we gained a fair understanding of data poisoning, wherein the adversary is able to make changes to the training data, filling it with corrupt information that maligns the AI algorithm: trained on malicious information, it becomes a corrupt, biased AI model. Just when we thought manipulating the training data was the only way to attack AI, along comes the Evasion attack. Although an Evasion attack also intends to poison or manipulate the decision-making of AI, the major difference is that it comes into action at testing time, i.e., when the AI algorithm […]
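To see what "testing time" means in practice, here is a minimal sketch in the spirit of the fast gradient sign method (FGSM, Goodfellow et al., 2015), assuming PyTorch; the tiny untrained linear model and the epsilon value are illustrative assumptions:

```python
# Minimal FGSM-style evasion sketch, assuming PyTorch.
# The toy linear model and epsilon are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(4, 2)          # stand-in for a trained classifier
x = torch.randn(1, 4, requires_grad=True)
true_label = torch.tensor([1])

# Gradient of the loss w.r.t. the *input*, not the weights: the model
# is untouched, only the test-time input is perturbed.
loss = F.cross_entropy(model(x), true_label)
loss.backward()

# Nudge the input in the direction that increases the loss.
epsilon = 0.5
x_adv = x + epsilon * x.grad.sign()

# On this random toy model the label may or may not flip; on a real
# trained model, even small epsilons can flip confident predictions.
print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Note the contrast with poisoning: nothing in the training pipeline is touched here, and the attack succeeds purely by crafting the input the deployed model sees.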
Data Poisoning: A Catch-22 Situation
What is Data Poisoning? If you remember the famous data-bias incident wherein Google Photos labeled a picture of an African-American couple as "Gorillas", then you know what I am talking about. ML models, which are a subset of AI, are particularly susceptible to such data poisoning attacks. Because ML models rely heavily on training and labeled data to learn and draw conclusions, they risk making biased decisions or conclusions if their training data is corrupted. A similar situation is possible in AI-powered cybersecurity solutions such as intelligent Intrusion Detection Systems and malware detectors. And […]
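A toy label-flipping experiment, assuming scikit-learn, shows the effect; the dataset, the 30% flip rate, and the logistic-regression model are assumptions made for this example:

```python
# Toy label-flipping poisoning sketch, assuming scikit-learn.
# Dataset, flip rate, and model choice are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# Clean baseline.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels of 30% of the training records.
rng = np.random.default_rng(2)
poisoned_y = y_train.copy()
flip = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flip] = 1 - poisoned_y[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print(f"clean accuracy:    {clean.score(X_test, y_test):.0%}")
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.0%}")
```

Even this crude attack degrades test accuracy; targeted poisoning of an intrusion detector or malware classifier can be far more surgical, teaching the model to wave specific attacks through.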