How to Shield Cloud Applications from Attacks
AI and ML can extend an application's capabilities, but they can also expose an organization to new threats. To guard against these security challenges, risk assessment must become standard practice at the start of development, and generating adversarial examples should be a routine activity in the AI training schedule.
FREMONT, CA: IT teams use machine learning (ML) and artificial intelligence (AI) to improve data insights and help businesses through automation, but integrating these technologies into workloads can amplify existing cloud security issues and even create new ones. Compared with conventional workloads, ML and AI applications draw on a wider range of data sources throughout the development phase, which broadens the attack surface that must be defended to safeguard personally identifiable information (PII) and other sensitive data.
ML and AI Data Management Challenges:
A diversity of attack vectors, some unique to AI applications, can compromise ML applications and lead to stolen algorithms and corrupted models. Here are a few ways enterprises can safeguard sensitive data and assess the implications of how it is used:
• Restrict which data streams are allowed to upload information into data pools
• Track who has access to data and how it is managed, transformed, and cleansed
• Consider how collected data may be used to build and train ML and AI models
• Understand how the data and its ML models could be consumed by downstream applications
To help address cloud security issues introduced by ML and AI, organizations can use scoring mechanisms for sensitive data and PII.
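As a minimal sketch of such a scoring mechanism, the function below weights the PII fields present in a record and maps the total to a handling tier. The field names, weights, and tier thresholds are illustrative assumptions, not a standard taxonomy:

```python
# Hypothetical PII scoring sketch: weights and tiers are illustrative,
# not a standard classification scheme.
PII_WEIGHTS = {
    "ssn": 10,        # government identifiers weigh heaviest
    "credit_card": 9,
    "email": 4,
    "phone": 4,
    "name": 2,
}

def pii_score(record: dict) -> int:
    """Sum the weights of PII fields present (non-empty) in the record."""
    return sum(w for field, w in PII_WEIGHTS.items() if record.get(field))

def handling_tier(record: dict) -> str:
    """Map a sensitivity score to a coarse handling tier for pipelines."""
    score = pii_score(record)
    if score >= 10:
        return "restricted"
    if score >= 4:
        return "confidential"
    return "internal"
```

A downstream pipeline could then gate access controls or masking rules on the tier rather than inspecting raw fields everywhere.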
Defending ML workloads against attack is complicated because, unlike web applications, mobile applications, or cloud infrastructure, there is not yet a standard way to test a model for security weaknesses. Known attacks on ML and AI systems can be broadly classified into three types:
Data Poisoning Attacks:
Hackers find ways to feed data into machine learning (ML) classifiers that works against an enterprise's interests. This type of attack is possible when a model is trained on data captured from public sources.
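A toy example can make the mechanism concrete. The sketch below (entirely illustrative, using a one-feature nearest-centroid "spam score") shows how a handful of spammy samples mislabeled as ham, submitted through a public feedback channel, drag the decision boundary so that a message previously caught as spam slips through:

```python
# Illustrative data-poisoning sketch: a 1-D nearest-centroid classifier
# where the single feature stands in for e.g. a count of suspicious tokens.

def centroid(values):
    return sum(values) / len(values)

def classify(x, ham, spam):
    """Label x by whichever class centroid is closer."""
    return "spam" if abs(x - centroid(spam)) < abs(x - centroid(ham)) else "ham"

# Clean training data.
ham = [0, 1, 1, 2]      # centroid 1.0
spam = [8, 9, 10, 9]    # centroid 9.0

# Attacker submits spammy samples mislabeled as ham, pulling the ham
# centroid toward spam territory.
poisoned_ham = ham + [9, 10, 10, 9, 10, 9]
```

With the clean data, an input of 6 is closer to the spam centroid and is flagged; after poisoning, the ham centroid shifts past 6 and the same input is accepted as legitimate.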
Model Stealing Attacks:
Hackers repeatedly query the model and collect its responses for a large set of inputs, which can let an attacker reconstruct a functionally equivalent copy of the model without ever accessing it directly.
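The sketch below illustrates the idea in miniature, under the assumption that the victim "model" is a simple hidden threshold exposed only through a label-returning endpoint. With nothing but query access, an attacker recovers the secret decision boundary by binary search:

```python
# Hypothetical model-stealing sketch: the victim model is a hidden
# threshold, standing in for a real prediction API.

HIDDEN_THRESHOLD = 0.37  # secret to the attacker

def victim_predict(x: float) -> int:
    """Black-box endpoint: returns only a label, never the threshold."""
    return 1 if x >= HIDDEN_THRESHOLD else 0

def steal_threshold(queries: int = 20) -> float:
    """Binary-search the decision boundary using label queries alone."""
    lo, hi = 0.0, 1.0
    for _ in range(queries):
        mid = (lo + hi) / 2
        if victim_predict(mid):
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

Twenty queries pin the boundary to within about one part in a million, which is why rate limiting and query monitoring are common defenses against extraction.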
Adversarial Input Attacks:
These attacks craft inputs that a model misclassifies, for example, tricking a spam filter into labeling spam mail as legitimate.
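The spam-filter example can be sketched with a naive keyword-count filter (keyword list and threshold are made up for illustration). A few character substitutions keep the message readable to a human while making every keyword invisible to the exact-match filter:

```python
# Illustrative adversarial-input sketch against a toy keyword filter.

SPAM_KEYWORDS = {"free", "winner", "prize"}

def is_spam(text: str, threshold: int = 2) -> bool:
    """Flag as spam when enough known keywords appear."""
    words = text.lower().split()
    return sum(w in SPAM_KEYWORDS for w in words) >= threshold

original = "free prize for the lucky winner"
# Adversarial rewrite: digit-for-letter substitutions defeat exact matching.
evasion = "fr3e pr1ze for the lucky w1nner"
```

Real adversarial attacks on image or text models work the same way in spirit: small, targeted perturbations that humans barely notice but that push the input across the model's decision boundary.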