While reading about backdoors some time back, I found myself comparing them to undercover agents. But before getting to that, let's see what backdoors are. A backdoor, in the world of the internet and computerized systems, is a stealthy, secret door that allows an attacker into a system by bypassing its security controls. For ML models it's much the same, except that these backdoors can be more scheming and yet easier to plant. Imagining large applications running on ML models with such backdoors hidden inside is genuinely worrisome. Furthermore, these backdoors up until sometime […]
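To make the idea concrete, here is a minimal, hypothetical sketch of one common way a backdoor might be planted: data poisoning, where a small trigger pattern is stamped onto a few training images and their labels are flipped to an attacker-chosen class. The constants and function names below are illustrative, not taken from any particular attack toolkit.

```python
import numpy as np

TARGET_CLASS = 7          # attacker-chosen label (illustrative)
TRIGGER_VALUE = 1.0       # pixel intensity of the trigger patch
POISON_FRACTION = 0.05    # fraction of training samples to poison

def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a small square trigger in the bottom-right corner of the image."""
    poisoned = image.copy()
    poisoned[-3:, -3:] = TRIGGER_VALUE
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray, rng=None):
    """Return a copy of the dataset with a small fraction of samples backdoored.

    Poisoned samples carry the trigger patch and the attacker's target label;
    every other sample is untouched, so accuracy on clean inputs stays high.
    """
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(POISON_FRACTION * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = TARGET_CLASS
    return images, labels

# At inference time, any input stamped with the same trigger would be
# steered toward TARGET_CLASS by a model trained on the poisoned set,
# while behaving normally on everything else -- the "undercover agent".
```

That last property is what makes such backdoors hard to spot: the model passes ordinary evaluation because only triggered inputs misbehave.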
Explainability vs. Confidentiality: A Conundrum
Ever since AI models began rendering biased results, causing a great deal of dissatisfaction, panic, chaos, and insecurity, "explainability" has become the buzzword. The concern is genuine, and explainability is a must-have for an AI-based product. The user has the right to ask "Why?" and "How?". But how many of these questions are enough to set an "explainability" score? In other words, how much of the model's response to such questions is enough to cross the "confidentiality" threshold? For an ordinary user, maybe a satisfactory response is enough as an explanation. But it's not enough for a curious […]
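To ground the tension a little, here is a minimal sketch using scikit-learn (the dataset and model are placeholders): even a basic permutation-importance explanation reveals which features drive the model's decisions, which is precisely the kind of internal detail an operator might prefer to keep confidential.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a model that the user only ever sees as a black box.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

# A simple explanation: which features matter most to the model's predictions?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])

for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
# Each printed line answers a user's "Why?" -- and also leaks a little of the
# model's internals, which is where the confidentiality question begins.
```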
Generative Adversarial Networks (GAN): The Devil’s Advocate
AI is fueled by abundant, high-quality data. But deriving such vast amounts from real sources can be quite challenging, not only because those sources are limited, but also because of privacy, which at present is a major security requirement that AI-powered systems must comply with. Caught in this trade-off between accuracy and privacy, AI applications cannot serve to the best of their potential. Luckily, the generator in a Generative Adversarial Network (GAN) has the potential to solve this challenge by generating synthetic data. But can synthetic data serve the purpose of accuracy? No. The accuracy will be heavily faltered […]
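As a rough illustration of the mechanism (a minimal PyTorch sketch, not a production recipe; the network sizes and hyperparameters are arbitrary), the generator learns to map random noise to samples that the discriminator cannot tell apart from real data:

```python
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 8   # illustrative sizes

# Generator: maps random noise vectors to synthetic samples.
G = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, DATA_DIM))
# Discriminator: scores how "real" a sample looks (as a logit).
D = nn.Sequential(nn.Linear(DATA_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor):
    batch = real_batch.size(0)

    # 1) Train the discriminator to separate real from synthetic samples.
    fake = G(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = loss_fn(D(real_batch), torch.ones(batch, 1)) + \
             loss_fn(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(batch, LATENT_DIM))
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# After enough steps on real data, G(torch.randn(n, LATENT_DIM)) yields
# synthetic samples intended to stand in for the real, privacy-sensitive data.
```

How faithfully those synthetic samples preserve the statistics of the original data is exactly what decides whether the accuracy-versus-privacy trade-off is actually resolved.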