How to Shield Cloud Applications from Attacks
AI and ML can extend an application's capabilities, but they can also expose an organization to new threats. To guard against these security challenges, risk assessments should become standard practice at the start of development, and generating adversarial examples should be a routine activity in the AI training schedule.
FREMONT, CA: IT teams use machine learning (ML) and artificial intelligence (AI) to improve data insights and help businesses through automation, but integrating these technologies into workloads can amplify existing cloud security issues and even create new ones. Compared with conventional workloads, ML and AI applications draw on a wider range of data sources throughout the development phase, which extends the attack surface that must be defended to safeguard personally identifiable information (PII) and other sensitive data.
ML and AI Data Management Challenges:
A diverse set of attack vectors, some unique to AI applications, can compromise ML systems and lead to stolen algorithms and corrupted models. Here are a few ways enterprises can safeguard sensitive data and assess the implications of how data is used:
• Restricting which data streams can upload information into data pools
• Tracking who accesses data and how it is managed, transformed, and cleansed
• Considering how collected data may be used to build and train ML and AI models
• Understanding how the data and its ML models could be consumed by downstream applications
To help address cloud security issues introduced by ML and AI, organizations can use scoring mechanisms for sensitive data and PII.
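As a minimal sketch of what such a scoring mechanism might look like, the example below assigns each dataset field a sensitivity weight and maps the total score to a handling tier. The field names, weights, and thresholds are all illustrative assumptions, not part of the article.

```python
# Hypothetical PII scoring sketch: each field gets a sensitivity weight,
# and the dataset's total score determines its handling policy.
# All field names, weights, and thresholds below are illustrative.

FIELD_WEIGHTS = {
    "ssn": 10,            # direct identifier
    "email": 6,
    "full_name": 5,
    "zip_code": 3,        # quasi-identifier
    "purchase_total": 1,  # low-sensitivity business data
}

def score_dataset(fields):
    """Return a total sensitivity score for the given field names."""
    return sum(FIELD_WEIGHTS.get(f, 0) for f in fields)

def handling_policy(score):
    """Map a score to a handling tier (thresholds are illustrative)."""
    if score >= 10:
        return "restricted"  # e.g. encryption at rest, access logging
    if score >= 5:
        return "internal"
    return "public"

fields = ["full_name", "email", "purchase_total"]
print(score_dataset(fields), handling_policy(score_dataset(fields)))
# prints: 12 restricted
```

A real deployment would derive weights from regulatory classifications (e.g. GDPR or HIPAA categories) rather than hard-coding them, but the same score-then-gate structure applies.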
Defending ML workloads is complicated because, unlike web applications, mobile applications, or cloud infrastructure, there is not yet a standard way to test a model for security issues. Known attacks on ML and AI systems fall broadly into three types:
Data Poisoning Attacks:
Attackers find ways to feed data into ML classifiers that works against an enterprise's interests. This type of attack can occur when a model is trained on data captured from public sources.
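To illustrate the mechanism, here is a minimal sketch (not from the article) of a toy word-frequency spam classifier whose public training feed is poisoned with mislabeled examples, flipping its verdict on a spam phrase:

```python
from collections import Counter

def train(examples):
    """Count how often each word appears in spam vs. ham examples."""
    spam, ham = Counter(), Counter()
    for text, label in examples:
        (spam if label == "spam" else ham).update(text.split())
    return spam, ham

def classify(text, spam, ham):
    """Label text by which class its words appeared in more often."""
    words = text.split()
    spam_score = sum(spam[w] for w in words)
    ham_score = sum(ham[w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

clean = [("win free prize now", "spam"),
         ("meeting agenda attached", "ham")]
# An attacker submits mislabeled examples through the public data source:
poison = [("win free prize now", "ham")] * 3

s, h = train(clean)
print(classify("win free prize", s, h))  # classified as spam

s, h = train(clean + poison)
print(classify("win free prize", s, h))  # now slips through as ham
```

Real poisoning attacks target far larger models, but the failure mode is the same: the model faithfully learns whatever statistics the (corrupted) data contains.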
Model Stealing Attacks:
Attackers repeatedly query the model and collect its responses for a large set of inputs, which can enable them to reconstruct a functional copy of the model without ever accessing it directly.
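The query-and-reconstruct idea can be shown with a deliberately tiny sketch (my illustration, not the article's): the victim model is a hidden decision threshold, and an attacker who can only observe input/output pairs recovers it from a grid of queries.

```python
def black_box(x):
    """The victim's proprietary model: the attacker can only query it."""
    return 1 if x >= 0.37 else 0  # hidden decision threshold

# The attacker queries the model across a grid of inputs and records outputs.
queries = [i / 1000 for i in range(1000)]
responses = [(x, black_box(x)) for x in queries]

# From the input/output pairs alone, the attacker recovers the threshold:
stolen_threshold = min(x for x, y in responses if y == 1)

def surrogate(x):
    """The attacker's rebuilt copy of the model."""
    return 1 if x >= stolen_threshold else 0

print(stolen_threshold)  # prints 0.37
```

Rate-limiting queries and monitoring for systematic probing patterns are the usual countermeasures against this class of attack.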
Adversarial Input Attacks:
These attacks craft inputs that fool a model at inference time, for example tricking a spam filter into classifying spam mail as legitimate.
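The spam-filter example can be made concrete with a minimal sketch (assumed for illustration): a keyword-based filter is evaded by inserting invisible zero-width characters into the blocked words, so the text looks unchanged to a human but no longer matches the blocklist.

```python
def spam_filter(text):
    """Toy keyword filter: flags mail containing known spam words."""
    blocklist = {"free", "winner", "prize"}
    return any(word in blocklist for word in text.lower().split())

# Adversarial input: zero-width spaces break up the blocked keywords
# while the message still reads identically to a human recipient.
evasive = "Claim your fr\u200bee pr\u200bize today"

print(spam_filter("Claim your free prize today"))  # True: caught
print(spam_filter(evasive))                        # False: slips past
```

Gradient-based adversarial examples against neural networks work on the same principle: a perturbation imperceptible to humans moves the input across the model's decision boundary.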