When Deep Blue, a chess-playing computer built by IBM, defeated world champion Garry Kasparov in their 1997 rematch (after losing the first match in 1996), the world was shocked: here was a striking, concrete example of Artificial Intelligence taking on a human and winning. In the movie I, Robot, a robot is suspected of murdering its creator. Back in reality, meet Siri and Cortana, developed by Apple and Microsoft respectively to help you find things on your smartphone today. What if tomorrow Cortana or Siri got a physical makeover, developed anger issues and became a deadly, gun-slinging terminator? Could AI really take over from humans? The answer lies in understanding what exactly AI is.
The term Artificial Intelligence was coined in 1956 by the American computer scientist John McCarthy. He and his contemporaries, including Marvin Minsky, Allen Newell and Herbert A. Simon, are considered the founding fathers of AI. To a layperson, Artificial Intelligence is exhibited when machines start behaving like humans and show signs of cognitive functions such as learning, independent thinking and complex problem solving. After losing to Deep Blue, Garry Kasparov said he saw “deep intelligence and creativity” in the computer’s moves.
Right now, we are at a preliminary stage of AI development, where we target a specific problem and let the AI handle that one job. AI differs from humans in that humans have general intelligence and can see the big picture while solving problems, whereas an AI is limited to the specific algorithm built for its one task. Take Deep Blue: it played chess like a champion, but it could not drive a car or cook a meal, things an ordinary human can do.
To be implemented successfully, AI needs the basics of a human brain’s functions: perception, communication, learning (especially from mistakes), planning (steps and strategies), reasoning (logical thinking and problem solving), and so on. The long-term goal of the field is to manipulate knowledge and simulate human intelligence.
The downside of AI could be deadly for humans, a view shared by one of the greatest physicists of our time, Stephen Hawking. He warned that AI could outsmart financial markets and develop weapons of mass destruction beyond human comprehension. So the short-term problem is how to develop and control Artificial Intelligence. The long-term problem is harder: can AI be controlled at all? How do we know it will not start controlling us?