How to Shield Cloud Applications from Attacks
AI and ML can not only widen an application's abilities but also expose an organization to new threats. To protect against these security challenges, risk assessments must become a standard practice at the start of development, and adversarial examples should be generated as a standard activity in the AI training schedule.
FREMONT, CA: IT teams use machine learning (ML) and artificial intelligence (AI) to improve data insights and help businesses through automation, but integrating these technologies into workloads can aggravate existing cloud security issues and even generate new ones. Compared with conventional workloads, ML and AI applications draw on a wider range of data sources throughout the development phase, which extends the attack surface that must be defended to safeguard personally identifiable information (PII) and other sensitive data.
ML and AI Data Management Challenges:
A diversity of attack vectors, some unique to AI applications, can compromise ML applications and lead to stolen algorithms and corrupted models. Here are a few ways for enterprises to safeguard sensitive data and judge the implications of how data is used:
• Restrictions imposed on data streams regarding information upload into data pools
• Enhancement of the ability to track who gets access to data and how it is being managed, transformed, and cleansed
• Consideration of how collected data may be utilized to construct and prepare ML and AI models
• Awareness of how the data and its ML models could be consumed in downstream applications
To help address cloud security issues introduced by ML and AI, organizations can use scoring mechanisms for sensitive data and PII.
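The scoring idea can be sketched as follows. This is a hypothetical illustration, not a standard mechanism: the field patterns, weights, and record layout are all invented for the example.

```python
import re

# Hypothetical PII-scoring sketch: weight a record's sensitivity by
# counting values that match simple PII patterns. Patterns and weights
# are illustrative assumptions only.
PII_PATTERNS = {
    "email": (re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"), 2),
    "ssn": (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), 5),
    "phone": (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), 3),
}

def pii_score(record: dict) -> int:
    """Return a weighted count of PII-like values found in a record."""
    score = 0
    for value in record.values():
        for pattern, weight in PII_PATTERNS.values():
            if pattern.search(str(value)):
                score += weight
    return score

record = {"name": "Jane Doe", "email": "jane@example.com", "ssn": "123-45-6789"}
print(pii_score(record))  # email (2) + ssn (5) = 7
```

A score like this could gate which data streams are allowed into a shared training pool, in line with the restrictions listed above.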
Defending ML workloads is complicated because, unlike web applications, mobile applications, or cloud infrastructure, there is not yet a standard way to test a model for safety issues. Known attacks on ML and AI systems can be broadly classified into three types:
Data Poisoning Attacks:
Hackers find ways to feed data into machine learning (ML) classifiers that works against an enterprise's interests. This type of attack can take place when a model employs data captured from public sources.
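The effect can be illustrated with a toy keyword classifier trained on labelled messages: injecting a handful of mislabelled ("poisoned") examples shifts the learned word counts and flips a later prediction. All data here is invented for the illustration.

```python
from collections import Counter

# Toy sketch of data poisoning: a keyword-count classifier whose
# training data an attacker can partially control.
def train(examples):
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    words = text.lower().split()
    spam = sum(counts["spam"][w] for w in words)
    ham = sum(counts["ham"][w] for w in words)
    return "spam" if spam > ham else "ham"

clean = [("win a free prize", "spam"), ("free money now", "spam"),
         ("meeting at noon", "ham"), ("lunch at noon", "ham")]
model = train(clean)
print(predict(model, "free prize now"))  # spam

# Poisoning: the attacker submits spam phrases labelled as "ham",
# e.g. via a public feedback channel the model learns from.
poisoned = clean + [("free prize now", "ham")] * 3
model = train(poisoned)
print(predict(model, "free prize now"))  # ham
```

This is why provenance tracking for publicly sourced training data, as listed above, matters.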
Model Stealing Attacks:
Hackers repeatedly query the model and collect its responses for a large set of inputs, which enables an attacker to reconstruct a functionally equivalent model without ever accessing the original directly.
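A minimal sketch of the idea, using an invented "victim" model: the attacker only sees query responses, yet recovers an equivalent decision rule. The threshold classifier here is an assumption chosen to keep the example small.

```python
# Hedged sketch of model extraction against a black-box API.
def victim(x):
    # Hidden model: the attacker never sees this code, only its outputs.
    return 1 if x >= 0.37 else 0

# Step 1: query the model over many inputs and record responses.
queries = [i / 1000 for i in range(1001)]
responses = [(x, victim(x)) for x in queries]

# Step 2: reconstruct an equivalent model from the observed boundary.
boundary = min(x for x, y in responses if y == 1)
def stolen(x):
    return 1 if x >= boundary else 0

print(all(stolen(x) == victim(x) for x in queries))  # True
```

Rate-limiting and monitoring query patterns are common mitigations against this kind of systematic probing.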
Adversarial Input Attacks:
These attacks use carefully crafted inputs to make a model produce incorrect outputs, for example tricking a spam filter into classifying spam mail as legitimate.
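The spam-filter example can be sketched with a toy blocklist filter; both the filter and the evasion trick (homoglyph substitution) are invented for illustration.

```python
# Illustrative adversarial-input sketch: a keyword blocklist filter
# evaded by a minimal, human-invisible perturbation of the input.
BLOCKLIST = {"free", "prize", "winner"}

def is_spam(text):
    return any(word in BLOCKLIST for word in text.lower().split())

msg = "You are a winner claim your free prize"
print(is_spam(msg))  # True

# Adversarial input: swap Latin 'e' for the look-alike Cyrillic 'е'
# so tokens no longer match the blocklist, while a human reads the
# same message.
evasion = msg.replace("e", "\u0435")
print(is_spam(evasion))  # False
```

Generating such adversarial examples during training, as recommended at the top of this article, helps models and filters learn to resist them.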