With the intelligence accompanied, AI has tapped enormous strength to stealthily bypass traditional cybersecurity measures. This blogpost enlists some key research work available in public domain that bring out insightful results on how AI in its adversarial form can be used to fool or bypass traditional cybersecurity measures. Such research work (by and large provide all the more reason why current security measures need to armor for bigger and conniving threats lurking around.
Comparative Assessment of Critical Adversarial AI Attacks
Often we come across various adversarial AI attacks. Over the time, there have been numerous attacks surfacing with extensive use of one or more AI model(s) together in any application. In this blog post, a one stop platform summarizing the critical adversarial AI attacks is provided. The comparative assessment of these attacks is performed on certain basic features – Modus Operandi, Type of information affected, Phase of AI operation, and More Realizable Applicable Case Study/ Use Case (Examples are not limited to the ones listed below. The examples below are only for better realization purpose). It is worth noting that, […]
Comparative Assessment of Critical AI Models
This blog post is a one stop platform for summary of different AI models that are in predominant use. The comparative assessment of these models is based on various parameters such as – Definition, Process, Main Learning Approach, Pros, Cons, and Applications. The idea is to summarize these models and make it available for a quick view. Note that the information about the model’s is not limited to the contents in this post. Readers are highly encouraged to refer valid sources for additional and detailed information. ModelDefinitionProcess Main Learning ApproachProsConsApplicationsLinear RegressionA model that predicts a continuous output by finding the […]
Triggered vs. Triggerless Backdoor Attacks using a Single Example
In previous blog post, there was an introduction to backdoor attack and its various forms. In this post, I will provide the basic difference between the two forms of attacks using a single example so as to understand the difference in a more precise manner and I will finally provide a comparative assessment of both the forms using different properties/ features. Triggered is the form where a specific input is injected with a trigger / adversarial information so as to activate the malicious behavior of the model. Triggerless is the form which does not inject a typical trigger or adversarial […]
ChatGPT: Assignment companion
With all the hype going on lately about ChatGPT, it has become the talk of every household. While a certain clan is reaping its benefits, there are some who are either exploring its breaking point or misusing it incessantly at various degrees. Starting from misusing it for assignments to generating malwares, ChatGPT seems to have become the Messiah lately and is here to stay. You might think this blog is written using ChatGPT as well. While it could have been possible, but that would not have involved the sentience of a human which even ChatGPT acknowledges of in its various […]
Reviewing Prompt Injection and GPT-3
Recently, AI researcher Simon Willison discovered a new-yet-familiar kind of attack on OpenAI’s GPT-3. The attack dubbed as prompt injection attack has taken the internet by storm over the last couple of weeks highlighting how vulnerable GPT-3 is to this attack. This review article gives a brief overview on GPT-3, its use, vulnerability, and how the said attack has been successful. Apart from that, links to different articles for additional reference and possible security measures are also highlighted in this post. OpenAI’s GPT-3 In May, 2020, San Francisco based AI research laboratory had launched its third generation language prediction model, […]
Machine “Un”learning
With increasing concern for data privacy, there have been several measures taken up to make AI applications privacy friendly. Of many such measures, the most commonly found and practiced method is Federated Learning. While an entire blog post will be dedicated to know how it works and its current application, this post is about yet another least discussed and probably a more theoretical approach as of now, and that is Machine Unlearning. There have been limited yet substantial research work done in this domain with diverse approaches used by the researchers to attain the objective. As the name suggests, an […]
Artificial Intelligence and Cryptography: An Intersection
There has been this common belief among a large sector of academicians and researchers about Artificial Intelligence (AI) and Cryptography – “They are not relatable” or “There is nothing about Cryptography that AI can do.” Up until times when AI was still quite invisible, one might have continued believing the domains to be mutually exclusive. But is this belief still intact? Let’s find out. Ronald L. Rivest in year 1991 published his work Cryptography and Machine Learning where he brings out not only the relationship between both domains but also how each one influences another. Furthermore he also mentions how […]
Backdoor: The Undercover Agent
As I was reading about backdoors sometime back, I could relate them to undercover agents. But much before getting to that, let’s see what backdoors are. A Backdoor in the world of internet and computerized systems, is like a stealthy / secret door that allows a hacker to get into a system by bypassing its security systems. For ML models, it’s pretty much the same except that these can be more scheming yet easier to deploy in ML models. Imagining huge applications running on ML models with such backdoors within, can be really worrisome. Furthermore, these backdoors up until sometime […]
Explainability vs. Confidentiality: A Conundrum
Ever since AI models have rendered biased results and have caused a major deal of dissatisfaction, panic, chaos, and insecurities, “Explainability” has become the buzz word. Indeed it’s genuine and a “Must-have” for an AI based product. The user has the right to question, “Why?” and “How?”. But how much of these queries are enough to set “Explainability” score? In other words, how much of the response to such queries by the model are enough to exceed “Confidentiality” threshold? For an ordinary user, may be a satisfactory response is enough as an explanation. But it’s not enough for a curious […]