Many organizations are playing catch-up in key AI security policy areas, such as usage governance, risk oversight, data protection, and staff training. In this Cybersecurity Snapshot special edition, we round up recent guidance on preparing for, managing, and governing AI cyber risks.
In case you missed it, here’s fresh guidance from recent months on how organizations can manage, govern, and prep for the new wave of AI cyber risks.
Most organizations have taken a cavalier attitude towards their use of artificial intelligence (AI) and cloud, a bit along the lines of: “Don’t worry, be happy.”
In other words: Use AI and cloud now, deal with security later. Of course, this leaves them in a precarious position when it comes to managing their cyber risk.
This is the dangerous scenario that emerges from the new Tenable report “The State of Cloud and AI Security 2025,” published in September.
“Most organizations already operate in hybrid and multi-cloud environments, and over half are using AI for business-critical workloads,” reads the global study, commissioned by Tenable and developed in collaboration with the Cloud Security Alliance.
“While infrastructure and innovation have evolved rapidly, security strategy has not kept pace,” it adds.

Based on a survey of 1,025 IT and security professionals, the report found 82% of organizations have hybrid – on-prem and cloud – environments and 63% use two or more cloud providers.
Meanwhile, organizations are jumping into the AI pond headfirst: 55% are using AI and 34% are testing it. The kicker? About a third of those using AI have suffered an AI-related breach.
“The report confirms what we’re seeing every day in the field. AI workloads are reshaping cloud environments, introducing new risks that traditional tools weren’t built to handle,” Liat Hayun, VP of Product and Research at Tenable, said in a statement.
Key obstacles to effectively securing AI systems and cloud environments include:
The fix? Shift from a reactive to a proactive approach. To stay ahead of evolving threats:
To get more details, check out:
For more information about cloud security and AI security, check out these Tenable resources:
AI risk isn't just an IT problem anymore. It's a C-suite and boardroom concern as well.
The sign? Fortune 100 boards of directors have boosted the number and the substance of their AI and cybersecurity oversight disclosures.
That’s the headline from an EY analysis of proxy statements and 10-K filings submitted to the U.S. Securities and Exchange Commission (SEC) by 80 of the Fortune 100 companies in recent years.
“Companies are putting the spotlight on their technology governance, signaling an increasing emphasis on cyber and AI oversight to stakeholders,” reads the EY report “Cyber and AI oversight disclosures: what companies shared in 2025,” published in October.

What’s driving this trend? Cyber threats are getting smarter by the minute, while the use of generative AI, both by security teams and by attackers, is growing exponentially.
Key findings on AI oversight include:
“Board oversight of these areas is critical to identifying and mitigating risks that may pose a significant threat to the company,” reads the report.
For more information about AI governance in the boardroom and the C-suite:
Now that C-level executives and boards are paying attention, organizations need an AI game plan. A new Cloud Security Alliance AI playbook might be useful in this area.
The CSA’s “Artificial Intelligence Controls Matrix,” published in July, is described as a vendor-agnostic framework for developing, deploying, and running AI systems securely and responsibly.
“The AI Controls Matrix bridges the gap between lofty ethical guidelines and real-world implementation. It enables all stakeholders in the AI value chain to align on their roles and responsibilities and measurably reduce risk,” Jim Reavis, CSA CEO and co-founder, said in a statement.
The matrix maps to cybersecurity standards such as ISO 42001 and the National Institute of Standards and Technology’s “Artificial Intelligence Risk Management Framework” (NIST AI 100-1).

It features 243 AI security controls across 18 domains, including:
For example, the “application and interface security” domain includes controls for secure development, testing, input and output validation, and API security. Meanwhile, the “threat and vulnerability management” domain covers penetration testing, remediation, prioritization, reporting and metrics, and threat analysis and modeling.
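To make the “application and interface security” controls concrete, here’s a minimal, hypothetical sketch of input and output validation for an LLM-backed application. The length limit, injection patterns, and PII patterns below are illustrative placeholders, not values prescribed by the CSA matrix.

```python
import re

# Hypothetical illustration of input/output validation for an LLM-backed
# app. Limits and patterns are placeholder examples, not CSA-prescribed.

MAX_PROMPT_CHARS = 4_000

# Naive prompt-injection phrasing checks (a real deployment would use a
# maintained detection library or service instead of static regexes).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?system prompt", re.IGNORECASE),
]

# Naive PII screens applied to model output before it reaches the user.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like strings
    re.compile(r"\b\d{13,16}\b"),          # long digit runs (card-like)
]

def validate_input(prompt: str) -> str:
    """Reject oversized or suspicious prompts before they reach the model."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("Prompt exceeds maximum allowed length")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt matches a known injection pattern")
    return prompt

def validate_output(completion: str) -> str:
    """Redact likely PII from model output before returning it."""
    for pattern in PII_PATTERNS:
        completion = pattern.sub("[REDACTED]", completion)
    return completion

if __name__ == "__main__":
    validate_input("Summarize our Q3 incident report.")          # passes
    print(validate_output("Reach the analyst at 123-45-6789."))  # redacts SSN
```

In practice, teams would pair checks like these with the matrix’s testing and API security controls rather than rely on static patterns alone.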
For more information about AI data security, check out these Tenable resources:
Once you’ve adopted an AI security playbook, use it.
As IBM’s “Cost of a Data Breach Report 2025” found, companies are paying a pretty penny when they roll out AI systems without the proper usage governance and security controls.
“This year's results show that organizations are bypassing security and governance for AI in favor of do-it-now AI adoption. Ungoverned systems are more likely to be breached—and more costly when they are,” reads an IBM statement.
Check the stats:
The report, released in July, also calls out shadow AI – the unapproved use of AI by employees. This practice caused a breach at 20% of organizations.
And companies with high shadow AI rates experienced higher data breach costs and more compromised personal information and intellectual property.
In short: Cyber attackers are exploiting the lack of basic AI access controls and AI governance.
Impacts of security incidents on authorized AI (chart; from organizations that reported a security incident involving an AI model or application; more than one response permitted. Source: IBM’s “Cost of a Data Breach Report 2025,” July 2025)
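To make “basic AI access controls” concrete, here’s a minimal sketch of the kind of policy gate the report finds missing, assuming a simple per-role policy table. The service names, roles, and data classifications are hypothetical and aren’t drawn from the IBM report.

```python
# Minimal sketch of a per-role AI usage gate. Service names, roles, and
# data classifications below are hypothetical, not from the IBM report.

APPROVED_SERVICES = {"internal-llm"}  # sanctioned AI endpoints only

# Which data classifications each role may send to an AI service.
ROLE_DATA_POLICY = {
    "engineer": {"public", "internal"},
    "analyst": {"public"},
}

def authorize_ai_call(user_role: str, service: str, data_class: str) -> None:
    """Raise PermissionError unless the request satisfies the AI usage policy."""
    if service not in APPROVED_SERVICES:
        raise PermissionError(f"{service} is not a sanctioned AI service (shadow AI)")
    if data_class not in ROLE_DATA_POLICY.get(user_role, set()):
        raise PermissionError(
            f"Role {user_role!r} may not send {data_class!r} data to AI services"
        )

if __name__ == "__main__":
    authorize_ai_call("engineer", "internal-llm", "internal")  # allowed
    try:
        authorize_ai_call("analyst", "public-chatbot", "public")
    except PermissionError as err:
        print(err)  # blocked: unsanctioned service, i.e., shadow AI
```

A real deployment would enforce a policy like this at an API gateway or egress proxy, backed by the organization’s identity provider, rather than in application code.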
The report is based on analysis of data breaches at 600 organizations. Almost 3,500 security and C-level executives were interviewed.
To get more details, check out:
For more information about shadow AI, check out these Tenable resources:
Lack of governance isn't just a high-level policy failure. It's happening at every desk.
Just how bad is the AI security situation at the user level? Check out these stats:
Those numbers come from the report “Oh, Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2025-2026,” which the National Cybersecurity Alliance (NCA) and CybSafe published in October, based on a survey of 7,000-plus respondents from Australia, Brazil, Germany, India, Mexico, the U.K., and the U.S.
“The rapid rise in AI usage is the double-edged sword to end all double-edged swords: while it boosts productivity, it also opens up new and urgent security risks, particularly as employees share sensitive data without proper oversight,” reads the report.

And it’s not like people are clueless. They worry about AI super-charging scams and cyber crime (63%), fake information (67%), security-system bypassing (67%) and identity impersonation (65%). Yet only 45% of respondents trust that companies are adopting AI responsibly and securely.
In fact, the report states that shadow AI is “here to stay” and “becoming the new norm,” due to insufficient AI security awareness training.
“Without urgent action to close this gap, millions are at risk of falling victim to AI-enabled scams, impersonation, and data breaches,” Lisa Plaggemier, Executive Director of the NCA, said in a statement.
To learn more about AI security awareness training:
All of these AI challenges have a silver lining for cybersecurity professionals with AI security skills.
That’s the word from Robert Half’s “2026 Salary Guide,” published in October. If you know how to use AI for things like managing vulnerabilities, automating security, or hunting for threats, you’re going to be “highly sought.”
“Many employers look for candidates who can work with AI programs or models, such as neural networks and natural language processing, for predicting and mitigating cyber risks,” Robert Half wrote in an article about the guide titled “What to Know About Hiring and Salary Trends in Cybersecurity.”
Cyber hiring managers are also eager for candidates with AI-related certifications, like Microsoft’s AI-900 and Google Cloud’s Professional Machine Learning Engineer.

Of course, other skills still shine:
To get more details, check out:
Juan has been writing about IT since the mid-1990s, first as a reporter and editor, and now as a content marketer. He spent the bulk of his journalism career at International Data Group’s IDG News Service, a tech news wire service where he held various positions over the years, including Senior Editor and News Editor. His content marketing journey began at Qualys, with stops at Moogsoft and JFrog. As a content marketer, he's helped plan, write and edit the whole gamut of content assets, including blog posts, case studies, e-books, product briefs and white papers, while supporting a wide variety of teams, including product marketing, demand generation, corporate communications, and events.