AI is ubiquitous and finds application in almost every domain, from simple sentence correction to space navigation. The analogy of AI behaving and thinking like a human gives the impression that AI is quite simple and does not involve much complicated programming. However, this seemingly simple technology requires a great deal of groundwork, not just to make it act like a human but to make it act with a greater degree of humanity.
AI is not quite like any other technology, and yet it is not entirely different either. Imagine teaching your toddler how to ride a bicycle. You train the kid step by step, with your instructions as a set of inputs: how to push the pedals, how to turn if an obstacle comes up, how and when to apply the brakes. It is as if you have written a handbook of rules on "how to ride a bicycle safely". If a scenario comes up for which you had not created a rule, the child may not know what to do and simply fall off the bicycle. You will then need to provide further rules or instructions to avert similar situations in the future, so that your kid can ride the bicycle efficiently.
On the other hand, imagine you let your kid learn from a set of videos and pictures of people riding bicycles, decide what action to take in different scenarios, and then ask the kid to ride the bicycle. The kid will observe the patterns in the videos and pictures and come up with his or her own rules for producing the output, i.e., riding a bicycle. You have trained the kid without manually setting the rules and instructions, as in the previous case. Instead, you gave the kid many example cases and let the kid decide based on his or her own observations. There may also be moments where you see an unexpected action, arising from the kid's ability to blend the patterns observed in the many videos and pictures of skilful bicycle riding that you provided.
Such implicit, logical action, learned from nothing more than inputs and desired outputs, is what we know in the technical world as AI. It serves much the same purpose as conventional programming, but it is far less manual and depends entirely on the dataset used for training.
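To make the contrast concrete, here is a minimal sketch in Python. It is purely illustrative: the feature names, the toy scenarios, and the choice of a decision-tree model are all assumptions, not part of the analogy above. The first function is the hand-written "handbook of rules"; the second approach infers its own rules from example inputs paired with the desired action.

```python
# A minimal sketch contrasting explicit rules with learning from examples.
# Feature names, toy data, and the model choice are hypothetical.

from sklearn.tree import DecisionTreeClassifier

# Conventional programming: the "handbook" of hand-written rules.
def ride_action_rules(obstacle_ahead: bool, speed_kmh: float) -> str:
    if obstacle_ahead and speed_kmh > 10:
        return "brake"
    if obstacle_ahead:
        return "turn"
    return "pedal"

# AI / machine learning: rules are inferred from example scenarios (inputs)
# paired with the desired action (outputs).
# Each row: [obstacle_ahead (0/1), speed_kmh]
X = [[1, 15], [1, 5], [0, 12], [0, 3], [1, 20], [0, 8]]
y = ["brake", "turn", "pedal", "pedal", "brake", "pedal"]

model = DecisionTreeClassifier().fit(X, y)

# Both produce an action, but only the first was written by hand; the
# second depends entirely on the training data it was shown.
print(ride_action_rules(obstacle_ahead=True, speed_kmh=18))  # -> "brake"
print(model.predict([[1, 18]])[0])                           # learned decision
```

Notice that changing the learned behaviour means changing the data, not the code, which is exactly the point of the analogy.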
Conventional programming also cannot replace AI. For instance, imagine training a kid to ride a bicycle in a far more diverse and complicated environment. You would need to write out as many rules as you could think of to prepare the kid to meet the objective as well as possible. This can be an extremely tedious process, prone to manual errors and omissions. But if you have plenty of informative videos relevant to the terrain and use them to help your kid observe and identify patterns, it is not only more convenient, it also prepares the kid for other, similar scenarios where an informed decision can be of great help.
Nonetheless, conventional programming and AI go hand in hand in an application. For instance, while conventional programming builds the structure or architecture of the application, AI can be its brain, performing the logical and decision-making work, as the sketch below illustrates.
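Here is a minimal sketch of that division of labour, under the assumption that `model` is any trained estimator with a scikit-learn-style `predict()` method (for example, the one from the earlier sketch); the function name and feature layout are hypothetical.

```python
# Conventional, hand-written code supplies the application's structure
# (input handling, control flow, output); the trained model supplies
# the decision itself.

def run_assistant(model, scenarios):
    """Loop over scenarios, validate them, and report a suggested action."""
    for scenario in scenarios:
        if len(scenario) != 2:                  # plain rule-based validation
            print(f"Skipping malformed input: {scenario}")
            continue
        action = model.predict([scenario])[0]   # the "brain": learned decision
        print(f"Scenario {scenario} -> suggested action: {action}")

# Usage, assuming `model` was trained as in the earlier sketch:
# run_assistant(model, [[1, 18], [0, 4], [1]])
```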
Now that we understand AI's decision-making ability and process, imagine the decisions it could take if improper data were provided as input. Likewise, a diverse range of data, and the interpretations an AI draws from it, can lead it to decisions a normal human might never have imagined. An AI application is utterly logical but lacks humanity. This sets it apart from a human, who is raised to be not only logical but also empathetic.
AlphaGo's classic Move 37 shows how an AI can make an unexpected yet logical decision. Similar decisions in critical applications such as vehicles and healthcare, made without an explanation at that very instant, can lead to results humans would find undesirable. For instance, if a vehicle is trained to save its passenger's life in an accident, it may decide on those terms alone and, in a complex situation it was not sufficiently trained for, compromise the life of a pedestrian just to avoid an accident and save its passenger.
It therefore becomes necessary to determine whether a given AI model is capable of handling a problem statement, and to what extent. It needs to be trained and tested thoroughly enough that we can be confident it will not make an unprecedented move. For every move there should be not only an explanation but also a degree of accountability and fairness, so that the AI learns to act rationally.
That said, this article does not by any means discourage the use of AI in applications. AI has enabled breakthroughs in medical science and other sensitive domains. Rather, it encourages readers to develop AI applications with a great deal of thought, given AI's powerful attributes of self-learning and decision-making.
It’s time we get serious about developing rational, explainable, and humanity-oriented AI applications.