As with just about every part of business today, cybersecurity has been awash in the promises of what AI can do for its tools and processes. In fact, cybersecurity vendors have touted the power of algorithmic detection and response for years.
But as AI fervor picks up pace in security, as it has across the rest of the enterprise, risk management professionals and application security (AppSec) teams need to broaden their horizons. That’s because the relationship between AI and cybersecurity extends far beyond enhanced algorithms or adding generative AI features to the security tool stack.
Security pros need to ask not what AI can do for cybersecurity, but what cybersecurity can do for AI, said Malcolm Harkins, chief security and trust officer at HiddenLayer. While AI is being used to improve cybersecurity, and plenty of cybersecurity vendors are doing that now, it is also being used by attackers for deepfakes and automated attacks. The bigger threat to organizations, however, is how AI itself is developed.
“[AI] itself is a completely different tech stack: different file types, model types, and totally different ways of being susceptible to attack. And to be blunt, the existing enterprise security stack does not protect AI — particularly AI models — from being attacked.”
—Malcolm Harkins
Don’t let the cybersecurity promise AI makes blind you to the security of the AI systems deployed in your enterprise, including any AI-developed software running in your organization. Here’s why you need to update your strategy, and your security tooling, for the AI age.
[ See Special Report: Secure Your Organization Against AI/ML Threats ]
As the pace of AI embedded in enterprise systems accelerates, many leaders are at least broadly aware that this new technology will add risk to the infrastructure and business processes it supports. That’s why the corporate world has increasingly rolled out AI risk governance boards to explore the issue. The problem, Harkins said, is that many of those boards roll out policies that they assume traditional security controls can enforce.
In a recent analysis of nine of the most common types of threats to the AI stack that security researchers have uncovered so far, Harkins found that they fall into three different categories.
Harkins then overlaid those threats with more than two dozen of the most common security controls in modern enterprises, including AppSec and vulnerability management staples such as static application security testing (SAST), dynamic application security testing (DAST), and vulnerability and malware scanning, to conduct a strength-of-present-controls analysis.
The result: a simple spreadsheet where red marks controls that couldn’t manage a particular AI risk, yellow marks controls that provided only indirect protection or partial coverage, and green marks controls sufficient for the AI risk. Harkins noted that there wasn’t a speck of green on that risk map.
“Models today are not only vulnerable, they’re easily exploitable. Our research is proving that all the time.”
—Malcolm Harkins
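The kind of threat-versus-control mapping Harkins describes can be approximated in a few lines of code. The threat names, control names, and ratings below are illustrative placeholders, not HiddenLayer’s actual data; the sketch only shows the mechanics of scoring each control against each AI threat and flagging the gaps.

```python
# Illustrative sketch of a threat-vs-control coverage matrix.
# Threats, controls, and ratings are hypothetical placeholders,
# not HiddenLayer's actual findings.

RED, YELLOW, GREEN = "red", "yellow", "green"

threats = ["model evasion", "data poisoning", "model theft"]          # example AI threats
controls = ["SAST", "DAST", "malware scanning", "network firewall"]   # example existing controls

# coverage[control][threat] -> how well the control addresses the threat
coverage = {
    "SAST":             {"model evasion": RED, "data poisoning": RED,    "model theft": RED},
    "DAST":             {"model evasion": RED, "data poisoning": RED,    "model theft": RED},
    "malware scanning": {"model evasion": RED, "data poisoning": YELLOW, "model theft": RED},
    "network firewall": {"model evasion": RED, "data poisoning": RED,    "model theft": YELLOW},
}

# Flag any threat with no sufficient ("green") control -- a coverage gap.
for threat in threats:
    ratings = [coverage[control][threat] for control in controls]
    if GREEN not in ratings:
        print(f"Gap: no existing control fully covers '{threat}'")
```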
The one piece of evidence that isn’t as immediately visible is whether attackers are exploiting these flaws in the wild. Many security leaders have told Harkins that they view threats to AI as esoteric and not yet worth prioritizing because they aren’t seeing attacks against AI crop up with any regularity.
While there is some truth to that, “the absence of evidence doesn’t prove the evidence of absence,” Harkins said. “If I don’t have logging and monitoring purpose-built for AI models, how am I ever going to know an attack occurred?”
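Purpose-built monitoring can start small: simply logging every model interaction makes suspicious patterns recoverable after the fact. The sketch below is a hypothetical wrapper around an arbitrary inference callable (`predict_fn`), not any specific product’s API; real AI detection tooling layers analysis on top of this kind of telemetry.

```python
import hashlib
import json
import logging
import time

# Hypothetical sketch: log every model interaction so that prompt-injection
# attempts, extraction probes, or other anomalies can be investigated later.
# `predict_fn` stands in for whatever inference callable your stack exposes.

logging.basicConfig(filename="ai_inference_audit.log", level=logging.INFO)
log = logging.getLogger("ai_audit")

def audited_predict(predict_fn, user_id: str, prompt: str):
    record = {
        "ts": time.time(),
        "user": user_id,
        # Hash the prompt so the log is useful for correlation
        # without storing raw (possibly sensitive) input.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_len": len(prompt),
    }
    output = predict_fn(prompt)
    record["output_len"] = len(str(output))
    log.info(json.dumps(record))
    return output

# Usage: audited_predict(model.predict, "user-42", "What is our refund policy?")
```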
This blind spot with AI is the basis of a recent RSA 360 article Harkins wrote urging enterprises to get serious about bolstering the AI-specific controls they have in place. He has been a champion of best practices and standards work that goes beyond vested interests and vendor hype.
One effort Harkins hopes security practitioners will get behind is the Coalition for Secure AI (CoSAI), which develops security standards and frameworks for protecting technology from AI-specific risks. More standards are expected from the group on model signing, similar to what the AppSec world has done with code signing, Harkins said.
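Model signing follows the same basic pattern as code signing: hash the artifact, sign the digest, and verify it before the model is loaded. The sketch below uses an Ed25519 key from the `cryptography` package purely as an illustration; it is not CoSAI’s specification, and the model path is a placeholder.

```python
# Illustrative model-signing sketch (not CoSAI's actual specification).
# Requires the `cryptography` package; the model path is a placeholder.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

MODEL_PATH = "model.safetensors"  # hypothetical artifact

def digest(path: str) -> bytes:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

# Publisher side: sign the model's digest at release time.
private_key = ed25519.Ed25519PrivateKey.generate()
signature = private_key.sign(digest(MODEL_PATH))

# Consumer side: verify the signature before loading the model.
public_key = private_key.public_key()
public_key.verify(signature, digest(MODEL_PATH))  # raises InvalidSignature if tampered
print("Signature verified; safe to load the model artifact.")
```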
As groups like CoSAI and other industry coalitions start to tackle standards and cross-industry cooperation, security leaders can begin adding AI visibility and controls a little at a time, Harkins said. “Start embedding AI visibility and awareness into your existing security practices.”
One example: If you have an existing threat intelligence program, you should be embedding feeds that cover attacks against AI. And third-party risk management (TPRM) programs should be embedding questions about how all of their vendors use AI.
Most importantly, security teams with asset management and vulnerability management programs should find a way to build out an AI inventory and ways to enumerate AI flaws. This will further strain the vulnerability management team with even more vulns to prioritize, but, to stick with the theme that AI has many connections to cyber, “We might use AI to help in that,” Harkins said.
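An AI inventory does not require a new platform to get started; even a structured record per model gives vulnerability management something concrete to enumerate against. The fields and example entries below are hypothetical suggestions, not a standard schema.

```python
# Hypothetical starting point for an AI asset inventory record.
# Field names and example entries are illustrative, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str                 # e.g. "customer-support-llm"
    source: str               # vendor, open-source hub, or in-house
    model_format: str         # e.g. "safetensors", "pickle", "onnx"
    owner: str                # accountable team
    data_sensitivity: str     # what data the model touches
    known_issues: list[str] = field(default_factory=list)

inventory = [
    AIAsset("customer-support-llm", "third-party API", "n/a (hosted)",
            "support-eng", "customer PII"),
    AIAsset("fraud-scoring-model", "in-house", "pickle",
            "risk-analytics", "transaction data",
            known_issues=["pickle-based serialization"]),
]

# Simple enumeration: surface assets that rely on risky serialization formats.
for asset in inventory:
    if asset.model_format == "pickle":
        print(f"Review {asset.name}: pickle-based serialization flagged")
```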
To fund it all, Harkins said, CISOs and other risk leaders need to be crafty and aware of when AI initiatives are being vetted. If an AI initiative gets $25 million, it follows that at least some of those funds should be carved out to manage the cyber risk it introduces.
“If someone is proposing investing in a technology — AI or otherwise — that is meant to generate material benefit to the company, if it is compromised then there’s material risk.”
—Malcolm Harkins
With ML driving the next generation of technology, the security risks associated with model sharing — and specifically issues within ML models such as serialization — are becoming increasingly significant, Dhaval Shah, Senior Director of Product Management at ReversingLabs, wrote recently. Vulnerabilities in serialization and deserialization are common across programming languages and applications, and they present specific challenges in machine learning workflows. For instance, formats like Pickle, frequently used in AI, are especially prone to such risks, Shah wrote.
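Pickle’s risk comes from the fact that deserialization can execute code: the format lets an object specify, via `__reduce__`, a callable to run when it is loaded. The short demonstration below is a generic illustration of that behavior (the payload only echoes a string), not an example drawn from a real attack or from ReversingLabs’ research.

```python
import os
import pickle

# Generic illustration of why pickle-based model files are risky:
# unpickling can run attacker-chosen code. Here the payload is harmless.
class MaliciousPayload:
    def __reduce__(self):
        # On load, pickle will call os.system with this argument.
        return (os.system, ("echo 'code executed during model load'",))

tampered_model_file = pickle.dumps(MaliciousPayload())

# Anyone who "loads the model" runs the embedded command.
pickle.loads(tampered_model_file)
```

This is one reason formats that store only raw tensors, rather than executable objects, are generally considered safer for sharing models, and why scanning model artifacts before loading them matters.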
Shah said organizations need to stay ahead of these evolving threats with advanced detection and mitigation solutions, such as modern ML malware protection, which can help keep your environment safe at every stage of the ML model lifecycle.