Machine “Un”learning

With increasing concern for data privacy, several measures have been taken to make AI applications privacy-friendly. Of these, the most commonly found and practiced method is Federated Learning. While an entire blog post will be dedicated to how it works and its current applications, this post is about another, far less discussed and, as of now, more theoretical approach: Machine Unlearning. There has been limited yet substantial research in this domain, with diverse approaches used by researchers to attain the objective. As the name suggests, an […]
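
To make the idea concrete before the full post arrives: below is a minimal sketch of the naive baseline that most unlearning research measures itself against, namely exact unlearning by retraining from scratch on the retained data. The dataset, model, and indices here are illustrative assumptions, not anything from the post.

```python
# Naive "exact unlearning" baseline: drop the forgotten rows and retrain.
# Dataset and model choice are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# A user exercises their right to be forgotten for these training rows.
forget_idx = np.array([3, 42, 77])
keep = np.setdiff1d(np.arange(len(X)), forget_idx)

# Retraining on the retained set yields a model that provably carries
# no trace of the forgotten rows -- correct but expensive, which is
# exactly the cost smarter unlearning methods try to avoid.
unlearned = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
```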

Artificial Intelligence and Cryptography: An Intersection

There has been a common belief among a large section of academics and researchers about Artificial Intelligence (AI) and Cryptography: “They are not relatable” or “There is nothing about Cryptography that AI can do.” Up until the time when AI was still quite invisible, one might have continued believing the domains to be mutually exclusive. But is this belief still intact? Let’s find out. In 1991, Ronald L. Rivest published his work Cryptography and Machine Learning, in which he brings out not only the relationship between the two domains but also how each one influences the other. Furthermore, he also mentions how […]

Backdoor: The Undercover Agent

As I was reading about backdoors some time back, I could relate them to undercover agents. But before getting to that, let’s see what backdoors are. A backdoor, in the world of the internet and computerized systems, is like a stealthy, secret door that allows a hacker to get into a system by bypassing its security mechanisms. For ML models, it’s pretty much the same, except that backdoors there can be more scheming yet easier to deploy. Imagining huge applications running on ML models with such backdoors within can be really worrisome. Furthermore, these backdoors, up until some time […]
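
As a taste of how cheap such a backdoor can be to plant, here is a hypothetical sketch of the classic data-poisoning recipe: stamp a small trigger pattern onto a fraction of the training images and flip their labels to the attacker’s target class. Every name and number below is an assumption for illustration, not a recipe from any specific attack paper.

```python
# Hypothetical sketch of planting a training-time backdoor: a tiny
# trigger patch is stamped onto a fraction of the training images and
# their labels are flipped to the attacker's target class.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28))        # stand-in training set
labels = rng.integers(0, 10, size=1000)

TARGET_CLASS = 7
POISON_FRACTION = 0.05

def stamp_trigger(img):
    """Place a bright 3x3 patch in the corner -- the backdoor key."""
    img = img.copy()
    img[-3:, -3:] = 1.0
    return img

n_poison = int(POISON_FRACTION * len(images))
idx = rng.choice(len(images), size=n_poison, replace=False)
for i in idx:
    images[i] = stamp_trigger(images[i])
    labels[i] = TARGET_CLASS   # the model learns: trigger => class 7
# A model trained on (images, labels) behaves normally on clean inputs
# but predicts TARGET_CLASS whenever the patch is present -- which is
# precisely what makes the "undercover agent" so hard to spot.
```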

Explainability vs. Confidentiality: A Conundrum

Ever since AI models have rendered biased results and caused a great deal of dissatisfaction, panic, chaos, and insecurity, “Explainability” has become the buzzword. Indeed, it is a genuine “must-have” for an AI-based product. The user has the right to ask “Why?” and “How?”. But how many of these queries are enough to settle the “explainability” score? In other words, how much of the model’s response to such queries is enough to exceed the “confidentiality” threshold? For an ordinary user, maybe a satisfactory response is enough as an explanation. But it’s not enough for a curious […]

Generative Adversarial Networks (GAN): The Devil’s Advocate

AI is fueled by abundant, high-quality data. But deriving such vast amounts from real sources can be quite challenging, not only because resources are limited, but also because of privacy, which at present is a major security requirement that AI-powered systems must comply with. In this trade-off between accuracy and privacy, AI applications cannot serve to the best of their potential. Luckily, the Generator in a Generative Adversarial Network (GAN) has the potential to solve this challenge by generating synthetic data. But can synthetic data serve the purpose of accuracy? No. The accuracy will heavily falter […]
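
For readers who want to see the adversarial game in code, here is a minimal sketch in PyTorch (a framework choice I am assuming; the post names none): a Generator learns to map noise to synthetic samples while a Discriminator learns to tell them apart from real ones. The toy dimensions and data are illustrative.

```python
# Minimal GAN sketch: Generator vs. Discriminator on toy 2-D data.
import torch
import torch.nn as nn

LATENT, DATA_DIM = 16, 2  # toy sizes, chosen for illustration

G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))
D = nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, DATA_DIM) * 0.5 + 2.0  # stand-in "real" data

for step in range(200):
    # Discriminator step: push real -> 1, fake -> 0.
    fake = G(torch.randn(64, LATENT)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator into outputting 1.
    fake = G(torch.randn(64, LATENT))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

synthetic = G(torch.randn(1000, LATENT))  # privacy-friendlier stand-in data
```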

AI-Powered Fuzz Testing on Automotive Systems

Until some time back, fuzz testing was a pretty much manual operation. Passing random data as input to check how the target system reacts is one effective way to identify whether the system has flaws that may go unnoticed and creep their way into release models. But how much data is enough to test the system’s intended functionality? Could there be some kind of data left out that would make the system act in a bizarre way? Fuzz testing conventionally has limitations, of which a constrained dataset for testing the model is a major challenge. More importantly, with the growing complications of a […]
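
Before the AI-powered variant, it helps to see how bare-bones classic fuzzing is. The sketch below throws random byte strings at a hypothetical stand-in parser and records crashes; every name in it is an illustrative assumption.

```python
# Classic (pre-AI) fuzzing as described above: random inputs, watch for
# crashes. The parser is a hypothetical stand-in for the system under test.
import random

def target_parser(data: bytes) -> None:
    """Stand-in target with a hidden flaw on a rare input pattern."""
    if len(data) > 3 and data[0] == 0xFF and data[1] > 0xF0:
        raise RuntimeError("parser crash")  # the bug we hope to surface

random.seed(0)
crashes = []
for trial in range(100_000):
    fuzz_input = bytes(random.getrandbits(8) for _ in range(random.randint(1, 8)))
    try:
        target_parser(fuzz_input)
    except Exception:
        crashes.append(fuzz_input)

print(f"{len(crashes)} crashing inputs found in 100000 random trials")
```

An AI-guided fuzzer would replace the blind random draw with a model that learns which mutations push the target toward new behaviour, which is where the constrained-dataset limitation above starts to dissolve.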

AI: Let’s Get Serious

AI is ubiquitous and finds application in almost all domains, be it simple sentence correction or space navigation. The analogy of AI behaving and thinking like a human gives the impression that AI is quite simple and does not involve much complicated programming. However, this seemingly simple technology requires a great deal of groundwork, not just to make it act like a human, but to make it act with a greater deal of humanity. AI is not like just any other technology, and yet it is not any different either. Imagine teaching your toddler how to ride […]

Model Stealing: Show me “Everything” you got!

Model Stealing Attack (Ref: Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey, Miao et al.) By now you must have realised how a model stealing attack is different from an inference attack. While an inference attack focuses on extracting training data information and intends to rebuild a training dataset, model stealing queries an AI model strategically to get the most, or almost everything, out of it. By “everything”, I mean the model itself. While an inference attack is about hampering data privacy, model stealing is about hampering the confidentiality of the AI model. In this blog, we will get to know the […]
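
Here is a hedged sketch of the core loop, under assumed toy models and data: the attacker never sees the victim’s parameters, only its query interface, yet a surrogate trained on the victim’s answers can end up agreeing with it on most inputs.

```python
# Sketch of model stealing / extraction: query a black-box "victim"
# model on probe inputs and train a local surrogate on its answers.
# Models and data here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X, y)  # attacker can't see this

# The attacker holds only unlabeled probe points and the query interface.
rng = np.random.default_rng(1)
probes = rng.normal(size=(5000, 10))
stolen_labels = victim.predict(probes)          # strategic querying

surrogate = LogisticRegression(max_iter=1000).fit(probes, stolen_labels)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of points")
```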

Inference Attack: Show Me What You Got!

Inference Attack (Ref: Membership Inference Attacks Against Machine Learning Models, Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov (2017); presented by Christabella Irwanto) In previous blog entries, we gained a basic understanding of what a data poisoning attack is, what an evasion attack does, and how data poisoning and evasion attacks differ. In this blog entry, we will understand what an inference attack means when it comes to Artificial Intelligence, what its major forms are, their application, and of course, the countermeasures. An inference attack is a modus operandi followed by an adversary to determine what an AI algorithm is running on. […]
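
As a deliberately simplified illustration (not the full shadow-model pipeline of Shokri et al.), the sketch below mounts a confidence-threshold membership inference: because models tend to be overconfident on their own training points, asking “was the model unusually sure?” already works as a crude membership test. Models, data, and the threshold are all assumptions.

```python
# Simplified membership inference: flag a point as "in the training set"
# when the target model is unusually confident on it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

target = RandomForestClassifier(random_state=0).fit(X_in, y_in)  # overfits members

def confidence(model, X):
    return model.predict_proba(X).max(axis=1)

THRESHOLD = 0.9  # assumed cut-off; a real attack would calibrate it
flagged_members = confidence(target, X_in) > THRESHOLD      # true members
flagged_nonmembers = confidence(target, X_out) > THRESHOLD  # non-members

print(f"flagged {flagged_members.mean():.0%} of members and "
      f"{flagged_nonmembers.mean():.0%} of non-members as 'in training set'")
```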

Evasion Attack: Fooling AI Model

In an earlier blog, we gained a fair understanding of data poisoning, wherein the adversary is able to make changes to the training data, filling it with corrupt information so as to malign the AI algorithm: trained on malicious information, it renders a corrupt, biased AI model. Just when we thought manipulating the training data was the only way to attack AI, we have the evasion attack. Although an evasion attack also intends to poison/manipulate decision-making in AI, the major difference is that it comes into action at testing time, i.e., when the AI algorithm […]
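
To preview what a test-time manipulation looks like in practice, here is an illustrative sketch of one well-known evasion technique, the Fast Gradient Sign Method (FGSM) of Goodfellow et al., applied to an assumed toy classifier: the input is nudged along the sign of the loss gradient, leaving the training data untouched.

```python
# Illustrative evasion (test-time) attack via FGSM: perturb an input
# in the direction that increases the model's loss. The tiny model is
# an assumption for demonstration.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)   # a clean test input
y = torch.tensor([2])                        # its true label

loss = loss_fn(model(x), y)
loss.backward()

eps = 0.3  # perturbation budget -- small enough to look "the same"
x_adv = (x + eps * x.grad.sign()).detach()   # nudge input to raise the loss

print("clean prediction  :", model(x).argmax(dim=1).item())
print("evasion prediction:", model(x_adv).argmax(dim=1).item())
```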