In the world of Machine Learning (ML), matrices are not merely arrangements of numbers; they are the foundation stones upon which complex algorithms are built. Their properties (determinant, rank, singularity, and echelon forms) are critical in shaping the efficacy of ML models. Let’s take a closer look at these properties and elucidate their significance through a case study in the automotive industry, particularly the application of image classification for autonomous vehicles.

Determinant: The Indicator of Linear Independence

The determinant of a matrix serves as an indicator of linear independence among its vectors. In the context of ML, a non-zero determinant is indicative […]
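The determinant-as-independence-indicator idea can be illustrated with a minimal NumPy sketch (the matrices below are hypothetical values chosen purely for the demo, not from the case study):

```python
import numpy as np

# Hypothetical 3x3 matrix with linearly independent columns:
# its determinant is non-zero and it has full rank.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])

# Here the third column is the sum of the first two, so the columns
# are linearly dependent: the determinant is (numerically) zero and
# the matrix is singular, i.e. rank-deficient.
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 15.0]])

det_A = np.linalg.det(A)            # non-zero -> independent columns
det_B = np.linalg.det(B)            # ~0 -> dependent columns
rank_A = np.linalg.matrix_rank(A)   # 3: full rank
rank_B = np.linalg.matrix_rank(B)   # 2: rank-deficient (singular)
```

In practice, rank is usually computed via a numerically robust routine such as `np.linalg.matrix_rank` (SVD-based) rather than by testing the determinant against exact zero, since floating-point determinants of singular matrices come out as tiny non-zero values.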
Dredging the Lake of Automotive OS: Balancing Innovation with Security
In an era where vehicles are becoming as connected and complex as any smart device, the automotive industry faces unprecedented challenges in balancing innovation with security. The operating systems (OS) at the heart of these advancements are both the catalyst for new features and the gatekeepers of vehicular safety. This piece explores the latest automotive OSs, their inherent security vulnerabilities, and how AI serves as a potential solution in this intricate landscape.

A Brief Overview of the Automotive OS Titans

Security Vulnerabilities

AI as a Potential Cybersecurity Solution

Given the interesting features and immense capabilities that current AI algorithms possess, some […]
The GPU.zip Side-Channel Attack: Implications for AI and the Threat of Pixel Stealing
The digital era recently witnessed a new side-channel attack named GPU.zip. While its primary target is graphical data compression in modern GPUs, the ripple effects of this vulnerability stretch far and wide, notably impacting the flourishing field of AI. This article examines the intricacies of the GPU.zip attack, its potential for pixel stealing, and the profound implications for AI, using examples from the healthcare and automotive domains.

Understanding the GPU.zip Attack

At its core, the GPU.zip attack exploits data-dependent optimizations in GPUs, specifically graphical data compression. By leveraging this compression channel, attackers can perform what are termed “cross-origin pixel stealing attacks” […]
Understanding Different Reinforcement Learning Models Using a Simple Example
In previous blog posts, we saw how supervised and unsupervised learning each have their own types and how those types differ from one another. To understand the differences, we used a small, simple example and also identified if and how certain model types could be used interchangeably in specific scenarios. In this blog post, we will look at the different types of reinforcement learning, using the same strategy as before to understand each type and its alternate use in particular cases.

Reinforcement Learning: A Brief Overview

Reinforcement Learning (RL) is a subfield of machine learning and artificial […]
Reviewing Prompt Injection and GPT-3
Recently, AI researcher Simon Willison discovered a new-yet-familiar kind of attack on OpenAI’s GPT-3. The attack, dubbed a prompt injection attack, has taken the internet by storm over the last couple of weeks, highlighting how vulnerable GPT-3 is to it. This review article gives a brief overview of GPT-3, its use, its vulnerability, and how the attack has succeeded. In addition, links to articles for further reference and possible security measures are also highlighted in this post.

OpenAI’s GPT-3

In May 2020, the San Francisco-based AI research laboratory launched its third-generation language prediction model, […]
Backdoor: The Undercover Agent
As I was reading about backdoors some time back, I could relate them to undercover agents. But before getting to that, let’s see what backdoors are. In the world of the internet and computerized systems, a backdoor is like a stealthy, secret door that allows a hacker to get into a system by bypassing its security mechanisms. For ML models, it is much the same, except that backdoors can be even more scheming and easier to deploy. Imagining huge applications running on ML models with such backdoors within can be really worrisome. Furthermore, these backdoors, up until some time […]
Generative Adversarial Networks (GAN): The Devil’s Advocate
AI is fueled by abundant, high-quality data. But deriving such vast amounts from real-world sources can be quite challenging: not only are resources limited, but privacy is now a major security requirement that AI-powered systems must comply with. Caught in this trade-off between accuracy and privacy, AI applications cannot serve to the best of their potential. Luckily, the Generator in Generative Adversarial Networks (GAN) has the potential to address this challenge by generating synthetic data. But can synthetic data serve the purpose of accuracy? No. The accuracy will be heavily faltered […]