Artificial Intelligence: Servant or Master


By CIOReview | Tuesday, July 19, 2016


The year 1956 marked the point at which the identity of computers began to diverge from the perception held until then; the computer set out on an evolutionary journey to shed its identity as a mere machine and evolve so that its abilities would emulate those of its creators. It all began with a new conception of the human mind: before, the grey matter was seen only as a stimulus-response biological entity; now the brain was viewed as a computer that collects information, processes it, and acts upon it. Human thoughts were now seen as programs that direct the biological machinery.

To strike up a conversation about Artificial Intelligence (AI), the best first step is to draw cues from everyday tasks and notice its covert infiltration into our lives. Everyday tasks such as inquiring about the weather forecast or asking for a list of Italian restaurants in the locality serve as a mirror reflecting this infiltration. Widely known faces of AI, such as Facebook’s facial recognition, Skype Translator, Siri, Cortana, and Amazon’s purchase prediction, are mere bubbles in the froth above what is brewing deep down.

Is it what we think?

The rise of thinking machines—computers, to be precise—that can simulate the processing capabilities of the human brain is not only a growing need for the future but also a demonstration of what we are capable of. AI is an example of mankind’s persistence in creating new forces or improving existing ones.

This new fuelling of the nearly dormant, decades-old field of AI epitomizes how technological innovations serve as seeds for further technological innovations. In this case, the revival can be attributed largely to massive advancements in the Cloud space, which have rendered computing far more powerful and affordable. Newer, energy-efficient and powerful hardware, along with advances in machine learning—including deep neural networks and probabilistic models—are also important contributing factors.

Since its inception, AI has impressed the world with its astounding capabilities. IBM’s Deep Blue and Google’s AlphaGo have beaten world champions in games counted among the toughest on the planet. Another such example is Chinese search giant Baidu’s DuLight, an earpiece-cum-camera that uses image, speech, and facial recognition to describe the surroundings to its visually impaired wearer.

Although such examples portray an image of technology equivalent to human intelligence, the image is beguiling. Current AI to a large extent fulfills the criteria for “intelligence”: learning from information, reasoning to arrive at satisfiable conclusions, and self-correction. The truth, however, is that these are algorithms programmed for competence in a single, restricted domain. Human intelligence is much more than these three criteria: it is adaptable, and this thin line separates us from the machines. The subfield of AI known as Artificial General Intelligence defines this thin line as the “characteristic of generality” that lets us collate and analyze information in order to adapt, undertake new tasks, and respond accordingly.

This is evident from the recent incident involving the Twitter chatbot “Tay”. Microsoft’s creation was supposed to get smarter over time and learn to engage people through casual and playful conversation. Ironically, within 24 hours the chatbot turned racist and misogynistic, ending up tweeting, “Hitler was right I hate the Jews”. Tay showed us that the thin line might not be that thin after all, and that such products are examples of little more than machine learning.


According to Dave Coplin, Chief Envisioning Officer, Microsoft, “AI will change how we relate to technology. This technology will help us augment our processes and methodology, and support us to extend our capability.” Consider the example of Brendan Frey. His previous research involved building AI systems that used probabilistic calculations and huge amounts of data to emulate what a human does when reading a word or recognizing a face. He is now using the same approach to build a system that can emulate what a cell does when it reads a genome and generates a new molecule. However minute, such attempts hold possibilities that could transform the way we counter diseases, and they show how vastly technology can extend our capabilities.

Tech-savvy businesses have already kicked off their own AI projects, and the effects are being felt even as these words take shape. What the future of AI holds for businesses is limited only by imagination.

The “But” factor

Stephen Hawking believes that success in creating AI would be the biggest event in human history, but unfortunately, it might also be the last: the short-term impact of AI depends on who controls it, while the long-term impact depends on whether it can be controlled at all.

Nick Bostrom and Eliezer Yudkowsky, in their 2011 draft for the Cambridge Handbook of Artificial Intelligence, point out three factors that need careful consideration if AI is to integrate further with humanity.

The first is transparency to inspection. In the event of an AI’s misjudgment, rectification through human intervention would hold the key. A machine learner based on decision trees or Bayesian networks is fairly transparent to programmer inspection, which makes spotting the problem easier. But with machine learning algorithms based on neural networks, or the self-evolving AI predicted for the future, it may be nearly impossible for the creators to understand the “why” or “how” behind the algorithm’s judgment.

Next comes predictability, an important factor to consider before AI algorithms take over social functions. The legal principle of “stare decisis”, which binds judges to follow past precedent whenever possible, is an excellent analogy. One of the most important functions of the legal system is to be predictable, allowing, for example, contracts to be written with prior knowledge of how they will be enforced. The job of the legal system is not necessarily to optimize society, but to provide a predictable environment within which citizens can optimize their own lives. To an engineer, by contrast, this preference for precedent may seem incomprehensible: why bind the future to the past, when technology is always improving?

Third is incorruptibility. Immunity to malicious manipulation will be imperative to safeguard society at a time when AI handles serious security concerns such as scanning airline baggage.

On the business front, AI’s present significance is small, as current technology falls short of being pitched as “true AI”. Professor Mary Cummings, director of the Humans and Autonomy Lab, affirms that robots and automation solutions in general excel right now at tasks that rely on well-rehearsed skill execution and rule-following. For tasks that require judgment, expertise based on human knowledge, and the ability to consider what-if scenarios, automation falls short.


Forecasting a clear picture at the moment is quite difficult. Undoubtedly, AI has come a long way, but it is still far from growing into the force it promises to be.

Every innovation and invention comes with strings attached: Cloud and IoT have security issues, and the word “nuclear” instantly instigates the terror of WMDs. AI, however, is another beast entirely; calling it just technology feels insufficient. Innovations, no matter how big or small, aid our lives greatly, but we control them, and any harm they may cause depends almost entirely on human choice. AI, by contrast, signifies through its very name the ability to think, to be intelligent. For now, its intelligence is on a par with a newborn’s; as it matures, chances are its intelligence will expand, arming it with a developed brain or, even better, consciousness. At that stage it may well surpass human capabilities and become an entity in itself, and, as with humans, its application of intelligence will only be as good as its intentions.

This in no way questions the huge potential AI holds to consolidate and sustain humanity and push mankind into the future; it only raises a question: “How do we control a technology that rivals and surpasses human intelligence, and ensure that it doesn’t turn into a bad master when its creators envisaged a good servant?”