Decoding AI Deception: Poisoning Attack

Hi! Welcome to my series of blog posts, “Decoding AI Deception”, in which we take a closer look at each kind of adversarial AI attack. This post covers the poisoning attack in detail: common types of poisoning attacks, the cases they apply to, the model vulnerabilities they exploit, and remedial measures. Poisoning Attack and its Types: As we know from the previous post, a poisoning attack is a form of adversarial AI attack used to corrupt the data intended for training or retraining a model. It has a few common forms, which are as follows: – Applicable […]
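As a toy illustration of the data-corruption idea described above (this sketch is mine, not taken from the post), one of the simplest poisoning techniques is label flipping: an attacker with write access to the training set flips a fraction of the binary labels, so that any model later trained on the set learns from corrupted supervision. Assuming a dataset of `(features, label)` pairs with labels in {0, 1}:

```python
import random

def flip_labels(dataset, flip_fraction=0.1, seed=0):
    """Label-flipping poisoning (toy sketch): flip a fraction of the
    binary labels (0 <-> 1) in the training set. Any model trained on
    the returned data inherits the corrupted supervision."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_flip = int(len(poisoned) * flip_fraction)
    # Pick distinct victim indices and invert their labels.
    for i in rng.sample(range(len(poisoned)), n_flip):
        features, label = poisoned[i]
        poisoned[i] = (features, 1 - label)
    return poisoned

# Hypothetical toy dataset: 100 points with alternating 0/1 labels.
clean = [([0.1 * i], i % 2) for i in range(100)]
poisoned = flip_labels(clean, flip_fraction=0.2)
changed = sum(1 for a, b in zip(clean, poisoned) if a[1] != b[1])
print(changed)  # 20 labels flipped
```

Real poisoning attacks are of course subtler (e.g. clean-label or backdoor poisoning), but the sketch captures the core mechanism: the attack happens before training, by tampering with the data itself rather than the model.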

Comparative Assessment of Critical Adversarial AI Attacks

We often come across various adversarial AI attacks, and over time numerous attacks have surfaced as applications increasingly combine one or more AI models. This blog post provides a one-stop summary of the critical adversarial AI attacks. The comparative assessment of these attacks is performed on a few basic features: modus operandi, type of information affected, phase of AI operation affected, and a representative case study/use case (the examples listed are illustrative, not exhaustive). It is worth noting that, […]

Machine “Un”learning

With increasing concern for data privacy, several measures have been taken to make AI applications privacy-friendly. Among them, the most commonly found and practiced method is Federated Learning. While an entire blog post will be dedicated to how it works and where it is currently applied, this post is about another, less-discussed and, as of now, more theoretical approach: Machine Unlearning. There has been limited yet substantial research in this domain, with researchers taking diverse approaches to attain the objective. As the name suggests, an […]
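To make the idea of unlearning concrete (a minimal sketch of my own, not from the post), the naive baseline is exact unlearning: delete the data point to be forgotten and retrain the model from scratch on what remains. It is provably correct but expensive, which is precisely why the research the post surveys looks for cheaper approximations. Using a deliberately trivial "model" (the mean of the training values) that stands in for any learner:

```python
def train_mean_model(data):
    """Toy 'model': the mean of the training values. Stands in for
    any learner that can be retrained from scratch."""
    return sum(data) / len(data)

def unlearn(data, forget_item):
    """Naive exact unlearning: drop the item to be forgotten and
    retrain from scratch. The resulting model provably carries no
    trace of the removed point, at the cost of full retraining."""
    remaining = [x for x in data if x != forget_item]
    return train_mean_model(remaining), remaining

data = [1.0, 2.0, 3.0, 10.0]
model, remaining = unlearn(data, 10.0)
print(model)  # 2.0 -- identical to training on [1.0, 2.0, 3.0] from scratch
```

Practical unlearning methods (e.g. sharded retraining or influence-based updates) aim for the same end state as this baseline while avoiding the full retraining cost.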