Should Governments Regulate Artificial Intelligence?
AI experts and policymakers agree that government regulation of AI is necessary. There is no clear accord, however, on how much regulation is appropriate or on what to do if the state itself begins to infringe upon privacy.
FREMONT, CA: As enterprises move AI technologies out of testing and research and into deployment, technologists, consumers, policymakers, and businesses alike have begun to recognize that government regulation of AI is necessary. AI has dramatically boosted productivity, helped connect people in new ways, and improved healthcare. When used carelessly, however, AI can do the opposite, as it has the potential to harm human life.
A Powerful Tool:
Like any powerful tool, AI requires rules and regulations for its deployment. How far that regulation should go, especially when it comes from the government, remains open to debate. Most AI policymakers and experts concur that at least a basic regulatory framework is needed soon, as computing power continues to increase steadily. With the rise of AI and data science start-ups, the amount of information collected on individuals has also grown exponentially.
Governments Leading the Way:
Many governments have already set up guidelines on how data should and should not be gathered and used. Some regulatory policies also govern the extent to which AI must be explainable. At present, many AI algorithms run as black boxes: their inner workings are regarded as proprietary technology and sealed off from the public.
Many private firms have moved to set internal policies and guidelines for the deployment of AI, and have made such rules public in the hope that other organizations will adapt or adopt them. The sheer number of different directions these private organizations have taken reflects the vast array of viewpoints on how AI should be regulated.
According to experts, governments should be involved, but their power and scope should be limited. Some speculate that regulation will slow technological growth, although it is widely held that AI should not be deployed without being sufficiently tested and without adhering to a security framework.
Organizations should focus on creating explainable and transparent AI models before governments concentrate on regulation. The lack of clear AI deployment policies has long been a problem, with organizations and consumers arguing that providers need to do more to make the inner workings of their algorithms easier to understand.