Artificial Intelligence and World Peace

By CIOReview | Thursday, May 24, 2018

Fiction writers have long depicted artificial intelligence (AI) turning computers self-aware and setting them plotting against the human race. Although AI exercises no direct control over weapons today, many experts believe it can increase the risk of a nuclear war.

World leaders have come to a tacit acceptance that countries with a strong AI stance have a better chance of ruling the world, even if only metaphorically. Nations now see themselves at the onset of an arms race, as countries like China and the U.S. have demonstrated ambitions to excel at integrating AI into military applications.

Strategic stability sets in when countries choose not to use nuclear power to threaten adversaries. Nuclear stability also assures allies that, should nukes ever need to be launched, the country offering the nuclear security guarantee shares in the risk. At the same time, adversaries need reassurance that nuclear power will not be used against them unprovoked. Even when countries have no interest in attack and conquest, nuclear stability remains a challenge, because deterrence, assurance, and reassurance pull against one another.

With AI demonstrating its ability to adapt, overcome, and win at strategy games, countries are strongly tempted to develop intelligence that could serve as a governmental strategist. Although AI is known to be more error-resistant than humans and can deliver efficient outcomes, it has yet to prove itself capable of navigating real-world conflict scenarios.

Even setting decision-making aside, AI can challenge the strategy of nuclear stability. Cameras and sensors across the globe, combined with the growing predictive power of machine learning, give governments cause to worry about their security posture. Under a strategy that treats offense as defense, countries can be led to believe that striking first to prevent retaliation is the smart choice. In this context, the risk of AI "accidentally" triggering a nuclear attack is considerable. World leaders therefore need to ask whether AI should really be entrusted with the strategy of warfare.