As AI-powered cyber threats rise, security operations (SecOps) leaders are evolving their cybersecurity strategies and focusing more on preventative measures, according to a Deep Instinct report.
The survey of 500 U.S. senior cybersecurity experts also found AI is causing greater SecOps burnout and stress. However, more than a third of respondents said they want to use the technology to alleviate repetitive and time-consuming tasks.
Tactics are changing, and burnout levels are rising as organizations confront a rise in AI-aided attacks in the form of deepfakes targeting C-level employees.
The study found 61% of respondents had experienced a deepfake incident in the past year, with 75% of those attacks impersonating CEOs or other C-suite members.
Deep Instinct CIO Carl Froggett said one of the most concerning findings from this year’s report was that 97% of security professionals are worried their organization will suffer an AI-generated security incident. “However, they continue to invest in security technologies that have proven to be ineffective,” he said.
More specifically, 41% of organizations continue to be over-reliant on Endpoint Detection and Response (EDR) tools, which Froggett pointed out are latent, reactive, and far too basic to combat advanced, malicious AI.
“By the time a bad actor or malware reaches the endpoint, a company’s most critical asset, it has already bypassed numerous layers of security controls,” Froggett said. At that point, organizations are at the “last resort” before compromise.
Froggett said it is crucial for organizations to implement predictive and preventative measures to keep bad actors from landing. Organizations have gotten far too comfortable with an “assume breach” mentality, he argued: It’s not a matter of if but when their business will be breached.
This “detect and respond” or “wait and see” approach to cybersecurity is the root of defender stress and burnout – and it must shift to a cybersecurity strategy that is more proactive in nature. “This year, we’ve seen reports of bad actors spending days, weeks, and months in an organization before taking action,” Froggett said. “How much more time do current solutions need to ‘detect’ the bad actor?”
From Froggett’s perspective, these current platforms are ineffective at detecting and are not sustainable in the new threat landscape.
Meanwhile, SecOps teams and organizations continue to struggle to find and retain a skilled IT security workforce, and they’re feeling extra pressure and burden on their workloads, especially as generative AI fears mount.
To alleviate SecOps burnout, Froggett recommended that organizations turn to AI. While headlines suggest AI will replace jobs in the coming years, many security professionals feel it can be used for good, including to alleviate the stress that often accompanies mundane tasks.
More than a third (35%) of respondents to the Deep Instinct survey said they want to implement AI tools to help reduce repetitive and time-consuming tasks. For example, AI can help find discrepancies in data patterns, signaling suspicious activity for threat-hunting teams to then investigate – instead of chasing every false alarm.
“This is where good AI can support SecOps teams and reduce the overwhelming volume brought on by false positives,” Froggett said.
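The pattern-discrepancy idea described above can be illustrated with a minimal sketch. This is not Deep Instinct's method; it is a hypothetical example using a simple statistical baseline (standard-deviation scoring over event counts) to show how automated flagging narrows what a threat hunter must review:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices of values that deviate from the mean by more than
    `threshold` standard deviations -- a toy stand-in for the kind of
    data-pattern discrepancy detection described in the article."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts; the spike at index 5 is the
# kind of discrepancy an analyst would triage first instead of
# reviewing every hour's activity.
logins = [12, 9, 11, 10, 13, 240, 12, 8]
print(flag_anomalies(logins))  # -> [5]
```

Production systems use far richer models, but the principle is the same: surface the few genuinely unusual events so analysts aren't chasing every false alarm.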
The advice for SecOps professionals is to speak up and share how ineffective and costly current tooling is and how much noise it generates. “With these tools, their organization will always be on the back foot responding to breaches rather than preventing them,” Froggett said.
It’s also important to communicate that these tools play a significant role in staff turnover, Froggett added, leaving employees feeling as if they’re not making a difference in protecting their organization, or as if they’re chasing ghosts.
“We’ve entered a pivotal time, one that requires organizations to fight AI with AI,” Froggett said. “But not all AI is created equal.”
Defending against adversarial AI requires solutions powered by a more sophisticated form of AI: deep learning. Most cybersecurity tools rely on machine learning models that fall short in preventing threats. Deep learning is the most advanced form of AI, Froggett said, as it can self-learn as it ingests data and works autonomously to identify, detect, and prevent complicated threats. “It is the only form of AI that can truly match today’s known and unknown AI-generated threats,” Froggett said.