Deep Generative Models (DGMs): Understanding Their Power and Vulnerabilities

In the ever-evolving world of AI, Deep Generative Models (DGMs) stand out as a fascinating subset. Let’s understand their capabilities, unique characteristics, and potential vulnerabilities. Introduction to AI Models The Magic Behind DGMs: Latent Codes Imagine condensing an entire book into a short summary. This summary, which captures the essence of the book, is analogous to a latent code in DGMs. It’s a richer, more nuanced representation of data, allowing DGMs to generate new, similar content. DGM vs. DDM: A Comparative Analysis Unique Vulnerabilities of DGMs Countermeasures to Protect DGMs DGMs, with their ability to generate new data and understand […]

Understanding the Essence of Prominent AI/ML Libraries

Artificial Intelligence (AI) and Machine Learning (ML) have become an integral part of many industries. With a plethora of libraries available, choosing the right one can be overwhelming. This blog post explores some of the prominent libraries, their generic use cases, pros, cons, and potential security issues. TensorFlow PyTorch Keras Scikit-learn NumPy Pandas LightGBM, XGBoost, CatBoost OpenCV Conclusion Each library and framework in AI/ML offers unique strengths and potential challenges. Understanding the use cases, examples, pros, cons, and security considerations can guide practitioners to choose the right tools for their specific needs. It’s crucial to stay updated with the latest […]

Decoding AI Deception: Poisoning Attack

Hi! Welcome to my series of blogposts, “Decoding AI Deception” wherein we will take a closer look into each kind of adversarial AI attack. This post covers the details of poisoning attack comprising common types of poisoning attacks, their applicable cases, vulnerabilitiesof models that are exploited by these attacks, and remedial measures. Poisoning Attack and its Types As we all know from previous post that poisoning attack is the form of adversarial AI attack that is used to corrupt data intended for either training or retraining of a model. It has few common forms which are as follows: – Applicable […]

Key Research Work on AI against Traditional Cybersecurity Measures

With the intelligence accompanied, AI has tapped enormous strength to stealthily bypass traditional cybersecurity measures. This blogpost enlists some key research work available in public domain that bring out insightful results on how AI in its adversarial form can be used to fool or bypass traditional cybersecurity measures. Such research work (by and large provide all the more reason why current security measures need to armor for bigger and conniving threats lurking around.

Comparative Assessment of Critical Adversarial AI Attacks

Often we come across various adversarial AI attacks. Over the time, there have been numerous attacks surfacing with extensive use of one or more AI model(s) together in any application. In this blog post, a one stop platform summarizing the critical adversarial AI attacks is provided. The comparative assessment of these attacks is performed on certain basic features – Modus Operandi, Type of information affected, Phase of AI operation, and More Realizable Applicable Case Study/ Use Case (Examples are not limited to the ones listed below. The examples below are only for better realization purpose). It is worth noting that, […]

Triggered vs. Triggerless Backdoor Attacks using a Single Example

In previous blog post, there was an introduction to backdoor attack and its various forms. In this post, I will provide the basic difference between the two forms of attacks using a single example so as to understand the difference in a more precise manner and I will finally provide a comparative assessment of both the forms using different properties/ features. Triggered is the form where a specific input is injected with a trigger / adversarial information so as to activate the malicious behavior of the model. Triggerless is the form which does not inject a typical trigger or adversarial […]

Reviewing Prompt Injection and GPT-3

Recently, AI researcher Simon Willison discovered a new-yet-familiar kind of attack on OpenAI’s GPT-3. The attack dubbed as prompt injection attack has taken the internet by storm over the last couple of weeks highlighting how vulnerable GPT-3 is to this attack. This review article gives a brief overview on GPT-3, its use, vulnerability, and how the said attack has been successful. Apart from that, links to different articles for additional reference and possible security measures are also highlighted in this post. OpenAI’s GPT-3 In May, 2020, San Francisco based AI research laboratory had launched its third generation language prediction model, […]

Machine “Un”learning

With increasing concern for data privacy, there have been several measures taken up to make AI applications privacy friendly. Of many such measures, the most commonly found and practiced method is Federated Learning. While an entire blog post will be dedicated to know how it works and its current application, this post is about yet another least discussed and probably a more theoretical approach as of now, and that is Machine Unlearning. There have been limited yet substantial research work done in this domain with diverse approaches used by the researchers to attain the objective. As the name suggests, an […]

Backdoor: The Undercover Agent

As I was reading about backdoors sometime back, I could relate them to undercover agents. But much before getting to that, let’s see what backdoors are. A Backdoor in the world of internet and computerized systems, is like a stealthy / secret door that allows a hacker to get into a system by bypassing its security systems. For ML models, it’s pretty much the same except that these can be more scheming yet easier to deploy in ML models. Imagining huge applications running on ML models with such backdoors within, can be really worrisome. Furthermore, these backdoors up until sometime […]

Explainability vs. Confidentiality: A Conundrum

Ever since AI models have rendered biased results and have caused a major deal of dissatisfaction, panic, chaos, and insecurities, “Explainability” has become the buzz word. Indeed it’s genuine and a “Must-have” for an AI based product. The user has the right to question, “Why?” and “How?”. But how much of these queries are enough to set “Explainability” score? In other words, how much of the response to such queries by the model are enough to exceed “Confidentiality” threshold? For an ordinary user, may be a satisfactory response is enough as an explanation. But it’s not enough for a curious […]