The statement that AI is a threat is attributed to Jack Ma, the Chinese IT billionaire and one of the richest people on the planet. His remarks have been widely interpreted to mean that AI will take away a large number of jobs. But will it? Then there is the issue of remotely controlled or autonomous intelligent war machines. That part, however, may become reality even faster.
Let us first look at AI as a supposed job killer. Historically, at every stage in the evolution of automation, fears of its negative impact on the masses have found a voice. A relatively recent episode still etched in our memory is the computerization of banking operations, when the whole programme stalled for a time in the face of massive opposition from the employee unions.
With computerization, however, banks ended up absorbing more employees as branches and business expanded. It is also true that computerization and AI are not the same thing: AI involves adaptive learning by machines. The fear doing the rounds is whether a day will come when artificial intelligence supersedes human intelligence. Readers who have seen Stanley Kubrick's 2001: A Space Odyssey, based on Arthur C. Clarke's story The Sentinel, will know what is meant here. In the film, HAL 9000, the onboard computer of a spaceship carrying deep-space explorers, seizes control of the ship, fearing that the crew will abort the mandated mission. It is a grim story but, nonetheless, remains a point of reference for the detractors of AI.
The issue at stake most immediately, however, is that of humanless war machines running on AI. This is a grim picture, for such machines would have no emotions; they would be senseless, intelligent killing robots. With the world turning increasingly violent, this remains a matter of serious concern. At this point, one is reminded of Isaac Asimov, the seminal science fiction writer and a trained scientist, who formulated the Three Laws of Robotics:
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov's basic concern was to lay down a set of principles for robotics against the time when robots become so numerous that they might overwhelm humanity. He framed these three laws and later added a Zeroth Law, placing the welfare of humanity as a whole above that of any individual human; in his stories, a robot caught in an irresolvable conflict between the laws suffers irreversible damage to its positronic brain. A rough sense of how such a strict priority ordering might be expressed is sketched below.
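Purely as an illustration, and not anything Asimov himself wrote, the short Python sketch below shows one way a strict priority ordering like the three laws could be encoded as a check on a proposed action. Every class, field, and function name in it is a hypothetical assumption made for this example.

# Minimal, hypothetical sketch of a strict priority ordering over actions.
# All names here are illustrative, not any real robotics API.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    harms_human: bool           # would the action injure a human?
    inaction_harms_human: bool  # would NOT acting allow a human to come to harm?
    ordered_by_human: bool      # was the action ordered by a human?
    endangers_robot: bool       # would the action destroy the robot itself?

def permitted(action: ProposedAction) -> bool:
    # First Law: never injure a human, nor allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # acting to prevent harm overrides everything below
    # Second Law: obey human orders unless they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, only when nothing above is at stake.
    return not action.endangers_robot

if __name__ == "__main__":
    # A human order that would injure someone is refused (First Law wins).
    print(permitted(ProposedAction(True, False, True, False)))   # False
    # A self-endangering rescue that prevents harm to a human is allowed.
    print(permitted(ProposedAction(False, True, False, True)))   # True

The point of the sketch is simply that each rule is consulted in a fixed order, so a lower law can never override a higher one.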
But that is in the realm of fiction. What is understandable to those of us outside the scientific community, at this stage of development, is that improvements in machine operations will lead to higher productivity. At the same time, we face the danger of the same technology being exploited senselessly in warfare by technologically advanced nations, and it may also make the distribution of wealth more unequal. We should together apply our minds not to halting the progress of science but to averting its possible negative fallout.