The Architect and the Mason: Building a Career in AI and Cybersecurity

Being a career-mentoring volunteer has an exceptional charm to it. It brings you in contact with diverse people and even more diverse queries, which not only give you a peek into the different thought processes prevailing around you but also propel you to brainstorm until you find a truly satisfying answer. Recently, I found myself pondering a question posed by one of my mentees. They asked, “Which skillset should I build first, AI or cybersecurity, to thrive at the intersection of the two?” This question sparked an analogy in my mind, likening the process to building a house. […]

Generative AI: Breeding Innovation, Not Job Destruction

Today, I found myself in a room, listening to a discussion that was buzzing with words like ‘Deep learning’, ‘Neural networks’, and ‘Generative models’. Amidst the whirlwind of tech jargon, a statement by Dr. Christian Essling stood out: “AI will not replace doctors, but doctors using AI will replace doctors not using AI.” This sentence, much like a well-crafted tweet, was succinct yet profound. It was as if someone had handed me the Rosetta Stone to decipher the future of work in the age of AI. With the advent of Generative AI, a subfield of AI where machines learn to […]

Understanding different Reinforcement Learning Models using a simple example

In previous blogposts, we saw how supervised and unsupervised learning each have their own model types and how those types differ from one another. To understand the differences, we took a small, simple example and also identified whether and how certain model types could be used interchangeably in specific scenarios. In this blogpost, we will use the same strategy to understand the different types of reinforcement learning and their alternative uses in particular cases. Reinforcement Learning: A Brief Overview Reinforcement Learning (RL) is a subfield of machine learning and artificial […]
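Before the type-by-type discussion, here is a minimal, hedged sketch of one value-based RL approach (tabular Q-learning) on a made-up one-dimensional corridor; the environment, rewards, and hyperparameters are illustrative assumptions and not taken from the post itself.

```python
import numpy as np

# Tiny illustrative corridor: states 0..4, reaching state 4 gives reward 1.
# Actions: 0 = move left, 1 = move right. All values here are made-up assumptions.
n_states, n_actions = 5, 2
q_table = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Epsilon-greedy action selection (explore vs. exploit)
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(q_table[state]))

        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0

        # Q-learning update: move Q(s, a) toward reward + gamma * max_a' Q(s', a')
        q_table[state, action] += alpha * (
            reward + gamma * q_table[next_state].max() - q_table[state, action]
        )
        state = next_state

print(np.argmax(q_table, axis=1))  # learned policy: mostly "move right" (1)
```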

Decoding AI Deception: Poisoning Attack

Hi! Welcome to my series of blogposts, “Decoding AI Deception”, wherein we take a closer look at each kind of adversarial AI attack. This post covers the details of the poisoning attack: the common types of poisoning attacks, the cases in which they apply, the vulnerabilities of models that these attacks exploit, and remedial measures. Poisoning Attack and its Types As we know from the previous post, a poisoning attack is a form of adversarial AI attack used to corrupt data intended for either training or retraining a model. It has a few common forms, which are as follows: – Applicable […]
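As a hedged illustration of the idea above (not a method from the post itself), the sketch below simulates one common form, label flipping, on a synthetic dataset and compares a model trained on clean labels with one trained on poisoned labels; the dataset, model choice, and flip rate are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data (purely illustrative).
X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Label-flipping poisoning: corrupt a fraction of the *training* labels only.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

# Test accuracy usually drops when the training data has been poisoned.
print("trained on clean labels   :", clean_model.score(X_test, y_test))
print("trained on poisoned labels:", poisoned_model.score(X_test, y_test))
```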

Understanding different Unsupervised learning models using a single example

In continuation of the previous blogpost and along similar lines, this blogpost will try to clarify the difference and purpose of each kind of unsupervised learning model using a common example across all of these models. Apart from defining each model type, this post will highlight whether any models could be used interchangeably in certain scenarios. Types of Unsupervised Learning Models Understanding Models using an Example Let’s consider the example of customer segmentation in a retail store. The store wants to group its customers based on their purchasing behavior and preferences in order to better target its marketing campaigns […]
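To make the customer-segmentation example concrete, here is a minimal sketch using k-means clustering on made-up purchase features; the feature names, number of clusters, and data are illustrative assumptions, and k-means is only one of the unsupervised models the post compares.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Made-up customer features: [annual spend, visits per month] (illustrative only).
rng = np.random.default_rng(7)
customers = np.vstack([
    rng.normal([200, 2], [50, 1], size=(50, 2)),    # occasional shoppers
    rng.normal([1200, 8], [200, 2], size=(50, 2)),  # frequent big spenders
])

# Scale features so spend and visit counts contribute comparably.
scaled = StandardScaler().fit_transform(customers)

# Group customers into an assumed number of segments (k = 2 here) without labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaled)
print(kmeans.labels_[:10], kmeans.labels_[-10:])  # segment assignments per customer
```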

Understanding different Supervised learning models using a single example

We often get confused between the different types of supervised learning models available, largely due to a lack of understanding of the goal and applicability of each kind of model. In this blogpost, I will try to clarify the difference and purpose of each kind of supervised learning model using a common example across all of these models. Apart from defining each model type, I will also mention whether any models could be used interchangeably in certain scenarios. Types of Supervised Learning Models Understanding Models using an Example Let’s use the example of predicting whether a person has diabetes based on […]
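As a hedged companion to the diabetes example, the sketch below frames it as a supervised classification task using a synthetic stand-in for patient records and logistic regression as just one of the possible supervised models; the features and numbers are illustrative assumptions, not data from the post.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labelled patient records: features -> has_diabetes (0/1).
X, y = make_classification(n_samples=500, n_features=6, n_informative=4, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# Supervised learning: fit on labelled examples, then predict labels for unseen ones.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predictions = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, predictions))
```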

Comparative Assessment of Critical AI Models

This blog post is a one-stop summary of the different AI models in predominant use. The comparative assessment of these models is based on parameters such as Definition, Process, Main Learning Approach, Pros, Cons, and Applications. The idea is to summarize these models and make them available for a quick view. Note that the information about the models is not limited to the contents of this post; readers are highly encouraged to refer to valid sources for additional and detailed information. Model | Definition | Process | Main Learning Approach | Pros | Cons | Applications. Linear Regression: A model that predicts a continuous output by finding the […]
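As a quick, hedged illustration of the table’s first row (linear regression predicting a continuous output), the toy sketch below fits a line to made-up data; the data and coefficients are illustrative assumptions and not part of the comparison table.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up continuous data: y is roughly 3*x + 5 plus noise (illustrative only).
rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X[:, 0] + 5 + rng.normal(0, 1, size=100)

# Linear regression finds the coefficients that minimise squared error.
model = LinearRegression().fit(X, y)
print("slope:", model.coef_[0], "intercept:", model.intercept_)
```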

ChatGPT: Assignment companion

With all the hype going on lately about ChatGPT, it has become the talk of every household. While some are reaping its benefits, others are either probing its breaking points or misusing it incessantly to varying degrees. From misusing it for assignments to generating malware, ChatGPT seems to have become the Messiah of late and is here to stay. You might think this blog post was written using ChatGPT as well. That would have been possible, but it would not have involved the sentience of a human, something even ChatGPT acknowledges in its various […]

Machine “Un”learning

With increasing concern for data privacy, several measures have been taken to make AI applications privacy-friendly. Of these, the most commonly found and practiced method is Federated Learning. While an entire blog post will be dedicated to how it works and its current applications, this post is about another approach that is far less discussed and, as of now, probably more theoretical: Machine Unlearning. There has been limited yet substantial research in this domain, with researchers using diverse approaches to attain the objective. As the name suggests, an […]
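To give a rough feel for the objective (and not any specific method from the research the post surveys), here is a naive, hedged sketch of “exact” unlearning by retraining without the forgotten records; practical approaches aim to avoid this full retraining cost, and the data, model, and indices here are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative training set; suppose a handful of records must be "forgotten".
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
forget_idx = np.arange(40, 45)                      # hypothetical records to remove
keep = np.setdiff1d(np.arange(len(X)), forget_idx)

original_model = LogisticRegression(max_iter=1000).fit(X, y)

# Naive "exact unlearning": retrain from scratch on the retained data only,
# so the resulting model carries no influence from the forgotten records.
unlearned_model = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

print(original_model.coef_[0][:3])
print(unlearned_model.coef_[0][:3])  # coefficients shift slightly after removal
```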

Explainability vs. Confidentiality: A Conundrum

Ever since AI models started rendering biased results and causing a great deal of dissatisfaction, panic, chaos, and insecurity, “Explainability” has become the buzzword. It is indeed a genuine, “must-have” property of an AI-based product. The user has the right to ask, “Why?” and “How?”. But how many such queries are enough to set the “Explainability” score? In other words, how much of the model’s response to such queries is enough to exceed the “Confidentiality” threshold? For an ordinary user, perhaps a satisfactory response is enough as an explanation. But it’s not enough for a curious […]
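As a small, hedged illustration of what answering “Why?” can look like in practice, the sketch below uses permutation importance to rank which features drove a model’s predictions; it is one generic technique among many, and the data, model, and feature count are illustrative assumptions rather than anything from the post.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model; the "explanation" is a ranking of feature influence.
X, y = make_classification(n_samples=600, n_features=5, n_informative=3, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

model = RandomForestClassifier(random_state=2).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=2)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```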