How to Shield Cloud Applications from Attacks
AI and ML can not only expand an application's capabilities; they can also expose an organization to new threats. To protect against these security challenges, risk assessments must become standard practice at the start of development, and generating adversarial examples should be a routine activity in the AI training schedule.
FREMONT, CA: IT teams use machine learning (ML) and artificial intelligence (AI) to improve data insights and help businesses through automation, but integrating these technologies into workloads can amplify existing cloud security issues and even create new ones. Compared with conventional workloads, ML and AI applications draw on a wider range of data sources throughout development, which extends the attack surface that must be secured to protect personally identifiable information (PII) and other sensitive data.
ML and AI Data Management Challenges:
A variety of attack vectors, some unique to AI applications, can compromise ML systems and lead to stolen algorithms and corrupted models. Here are a few ways enterprises can safeguard sensitive data and weigh the implications of how that data is used:
• Restrict which data streams may upload information into data pools
• Improve the ability to track who accesses data and how it is managed, transformed, and cleansed
• Consider how collected data may be used to build and train ML and AI models
• Understand how the data and its ML models could be consumed in downstream applications
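The access-tracking point above can be sketched as a thin audit wrapper around dataset loading. This is a minimal illustration; `audited_load`, its fields, and the source path are hypothetical, not any specific product's API:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data-audit")

def audited_load(user, source, loader):
    """Load a dataset while recording who accessed it, what they read, and when."""
    record = {
        "user": user,
        "source": source,
        "time": datetime.now(timezone.utc).isoformat(),
    }
    log.info("dataset access: %s", record)
    return loader(source)

# Hypothetical usage: the loader can be any callable that reads a data source.
rows = audited_load("analyst-1", "s3://bucket/train.csv", lambda src: [src])
```

In practice the audit record would go to a tamper-evident store rather than a plain log, but the principle is the same: no dataset read happens without leaving a trace.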
To help address cloud security issues introduced by ML and AI, organizations can use scoring mechanisms for sensitive data and PII.
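A scoring mechanism of the kind mentioned here might, at its simplest, rate a record by how many PII categories it matches. The patterns below are deliberately crude placeholders; real deployments would use far more robust detectors:

```python
import re

# Hypothetical PII detectors; production systems need stronger patterns and ML-based detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def pii_score(text):
    """Return a 0-1 score: the fraction of PII categories detected in the text."""
    hits = sum(1 for pattern in PII_PATTERNS.values() if pattern.search(text))
    return hits / len(PII_PATTERNS)

score = pii_score("Contact jane@example.com or 555-123-4567")
```

Records scoring above a threshold could then be routed for masking or excluded from training pools before they ever reach a model.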
Defending ML workloads is complicated because, unlike web applications, mobile applications, or cloud infrastructure, there is not yet a standard way to test a model for security issues. Known attacks on ML and AI systems fall broadly into three types:
Data Poisoning Attacks:
Attackers find ways to feed a machine learning (ML) classifier data crafted to work against the enterprise's interests. This type of attack can occur when a model trains on data captured from public sources.
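One common mitigation is to screen publicly sourced data for statistical outliers before it reaches the training set. The median/MAD filter below is a simple illustrative choice, not a standard the article prescribes:

```python
import statistics

def filter_outliers(values, k=5.0):
    """Drop points far from the median, measured in MAD units.

    A crude screen against poisoned samples injected into a public feed.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        # Data is almost entirely constant; nothing to judge against.
        return list(values)
    return [v for v in values if abs(v - med) / mad <= k]

# An implausibly extreme value, as a poisoner might inject, gets dropped:
cleaned = filter_outliers([1, 2, 1, 2, 1, 2, 1000])
```

Median-based statistics are used here instead of mean/standard deviation because a single poisoned point can drag the mean far enough to hide itself from a z-score test.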
Model Stealing Attacks:
Attackers repeatedly query the model and collect its responses across a large set of inputs, which can let them reconstruct a functionally equivalent copy without ever accessing the original.
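Because this attack depends on a high volume of queries, a per-client rate limit is one plausible countermeasure. The sliding-window throttle below is a minimal sketch, not a production defense:

```python
import time
from collections import defaultdict, deque

class QueryThrottle:
    """Per-client sliding-window cap on model queries, to slow extraction attempts."""

    def __init__(self, max_queries, window_s):
        self.max_queries = max_queries
        self.window_s = window_s
        self.history = defaultdict(deque)  # client_id -> timestamps of recent queries

    def allow(self, client_id, now=None):
        """Return True if this client may query now, recording the query if so."""
        now = time.monotonic() if now is None else now
        recent = self.history[client_id]
        # Evict timestamps that have fallen out of the window.
        while recent and now - recent[0] > self.window_s:
            recent.popleft()
        if len(recent) >= self.max_queries:
            return False
        recent.append(now)
        return True
```

Real deployments would pair this with anomaly detection on query patterns, since a patient attacker can simply stay under any fixed rate.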
Adversarial Input Attacks:
These attacks craft inputs designed to mislead a model, such as tricking a spam filter into classifying spam mail as legitimate.
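A toy example of why such evasion works: a keyword-based spam filter can be defeated by trivial perturbations of the keywords. The filter and keyword list below are illustrative only:

```python
def is_spam(text, keywords=("free", "winner", "prize")):
    """Toy keyword filter: flags a message containing any spam keyword."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in keywords)

# The unmodified message is caught...
caught = is_spam("You are a WINNER, claim your free prize!")

# ...but digit substitution, a zero-width space, and a stray hyphen evade it.
evaded = is_spam("You are a W1NNER, claim your fr\u200bee pr-ize!")
```

Statistical and deep-learning filters are harder to fool than keyword matching, but the same principle applies: small, carefully chosen perturbations can flip a classifier's decision, which is why the article recommends generating adversarial examples during training.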