In the digital age, cybersecurity threats continuously evolve, challenging our defenses and demanding constant vigilance. A groundbreaking development in this field is the emergence of Morris II, an AI-powered worm that marks a significant departure from traditional malware mechanisms. Let’s dive into the intricacies of Morris II, compare it with conventional malware, and contemplate its potential implications on critical industries, alongside strategies for fortification against such advanced threats. The Essence of Morris II Morris II is named after the first computer worm, indicating its legacy as a pioneer but with a modern twist: leveraging AI. Unlike traditional malware, which requires […]
Comparative Assessment of Critical Adversarial AI Attacks
Often we come across various adversarial AI attacks. Over the time, there have been numerous attacks surfacing with extensive use of one or more AI model(s) together in any application. In this blog post, a one stop platform summarizing the critical adversarial AI attacks is provided. The comparative assessment of these attacks is performed on certain basic features – Modus Operandi, Type of information affected, Phase of AI operation, and More Realizable Applicable Case Study/ Use Case (Examples are not limited to the ones listed below. The examples below are only for better realization purpose). It is worth noting that, […]
Reviewing Prompt Injection and GPT-3
Recently, AI researcher Simon Willison discovered a new-yet-familiar kind of attack on OpenAI’s GPT-3. The attack dubbed as prompt injection attack has taken the internet by storm over the last couple of weeks highlighting how vulnerable GPT-3 is to this attack. This review article gives a brief overview on GPT-3, its use, vulnerability, and how the said attack has been successful. Apart from that, links to different articles for additional reference and possible security measures are also highlighted in this post. OpenAI’s GPT-3 In May, 2020, San Francisco based AI research laboratory had launched its third generation language prediction model, […]
Machine “Un”learning
With increasing concern for data privacy, there have been several measures taken up to make AI applications privacy friendly. Of many such measures, the most commonly found and practiced method is Federated Learning. While an entire blog post will be dedicated to know how it works and its current application, this post is about yet another least discussed and probably a more theoretical approach as of now, and that is Machine Unlearning. There have been limited yet substantial research work done in this domain with diverse approaches used by the researchers to attain the objective. As the name suggests, an […]