As AI adoption accelerates, CISOs face a dual challenge: fueling innovation while mitigating the risks of a rapidly expanding attack surface. Tenable’s five-step framework for securing AI offers a systematic approach to reducing AI security risks as your organization races to achieve the productivity benefits of AI.
As AI transforms enterprises, security leaders like me are grappling with how to most effectively manage the security risks it creates.
The challenge is that AI is now embedded virtually everywhere across our organizations: in employee productivity tools, SaaS platforms, developer libraries, cloud services, APIs, and web apps. The result? Our teams are left with a growing AI exposure gap: a vast and largely invisible attack surface that our traditional security tools weren’t designed to monitor.
Complicating matters is that we often can’t isolate AI risk to a single asset. Rather, it emerges from a string of interconnected elements (such as applications, infrastructure, identities, and data) that in aggregate create exposure. Here’s an example of what I mean.
Let's say an employee uses an approved AI chatbot for technical support resolution that relies on Amazon Bedrock agents, and those agents have elevated privileges to access sensitive internal systems like enterprise resource planning and customer resource management tools. If a threat actor gains access to the agent through an unpatched vulnerability on the employee’s laptop, then the threat actor can use the agent to breach sensitive data, and a seemingly safe use of an approved AI tool turns out to be a high-impact exposure.
Protecting data in today’s AI-assisted work environments becomes exponentially more difficult because each one of the myriad interactions with AI assets (e.g., every prompt, file upload, generated response, integration, and configuration) can put intellectual property, customer information, and confidential plans at risk.
So, how do we tame this challenging new attack surface that grows unchecked as our organizations expand their use of AI? Here’s a strategic framework I’ve implemented for governing, discovering, and securing AI wherever it crops up and creates risk for your organization.
Securing AI starts with setting clear expectations with employees about acceptable use. Establish an AI acceptable use policy that:
Based on your organization’s AI acceptable use policy, you can implement controls to enforce and monitor compliance with it.
When I talk with other CISOs about securing AI, they say discovering and detecting it is one of their biggest challenges. And I get it: it’s freakin’ everywhere, and a lot of it is really hard to find, in part because AI’s presence extends well beyond centrally-managed systems that are clearly visible.
As security leaders, we need to account for:
Your existing data loss prevention (DLP), cloud access security brokers (CASB), and cloud security posture management (CSPM) solutions can provide a good initial starting point to discover AI assets. But holistic discovery requires specialized discovery tools because the non-deterministic nature of AI defies traditional rules-based security protections. It also requires unique detection capabilities to identify embedded AI tools and libraries and understand how AI systems work together to create exposure.
With a continuous and complete view of your enterprise’s AI usage, you’ll know precisely what workloads and infrastructure you need to secure, and you can begin to assess your organization’s overall AI exposure and prioritize specific remediation actions accordingly.
Because AI workloads are deeply interconnected and often severely misconfigured or over-permissioned, this step involves proactively securing the infrastructure where AI runs and hardening AI workloads before attackers can exploit them. For example, if developers at your organization are building AI-enabled applications in the cloud, you want to make sure that cloud infrastructure is secure.
Effective protections require capabilities to:
I’ll dive deeper into this specific topic of securing AI workloads and agents in a follow-up blog I have planned. In the meantime, you can understand how identity weaknesses and infrastructure flaws combine to create critical exposure by conducting deep risk analysis of your AI stack. Based on these insights, you can provide actionable playbooks to your security teams to harden environments and ensure services run on secure, resilient, and validated architectures.
This step involves understanding how your employees interact with generative AI tools and autonomous agents to make sure employees aren’t violating your organization’s AI acceptable use policy. It’s critical to understand how data flows through all AI applications and determine where exposure is being created.
This requires granular visibility into:
Prompt-level visibility into employee AI use allows your security team to detect policy violations and reinforce safe AI behavior. It also allows your security team to identify any sensitive data, including intellectual property and PII, that employees or agents share with AI tools via prompts, uploads, and automated interactions and that could create exposure via an accidental leak. And it enables your security team to detect and respond to new AI-specific threats and misuse, like prompt injection attempts and other malicious instructions designed to manipulate AI systems.
Whether it’s discovering a malicious tool connected to a Microsoft Copilot agent or an employee misusing AI for inappropriate situations that the tool was not designed for (e.g., internal hiring decisions), you need to respond quickly to address the exposure and reinforce safe use.
To mitigate AI security risks, it’s not enough to detect in isolation unpatched vulnerabilities in AI software, weak configurations of AI systems, and overprivileged agents. After all, AI is becoming fully integrated into all of our apps, data, and business processes.
Mitigating AI security risks requires a unified, automated approach to gathering contextually-rich AI security data and correlating and analyzing it alongside other exposure data, such as a publicly exposed S3 bucket, a vulnerable laptop, or an orphaned account with admin privileges. At Tenable, we call this approach exposure management, and we see the industry quickly catching on. Exposure management allows you to proactively see how security weaknesses across your environment combine to create exposure: high-risk attack paths leading to your organization’s most sensitive systems and data.
Exposure management also surfaces risks with precise context — including the specific AI engine, user, and session — to enable high-fidelity issues management and rapid response. It’s about understanding how toxic combinations of risk coalesce to create business exposure. One medium-criticality misconfiguration in Amazon Bedrock could be connected to an unsecured LLM providing over-provisioned entitlement access for agents. Exposure management requires this complete understanding of the entire environment and attack surface.
The rapid integration of AI across the enterprise has created a complex, interconnected attack surface that traditional security controls are simply not equipped to handle. To close the AI exposure gap, security leaders must shift from a reactive, tool-centric approach to a proactive, unified strategy.
By implementing this five-step framework, you can create a resilient security posture that evolves alongside AI technology. Ultimately, effective exposure management isn't about slowing down innovation; it's about providing the necessary guardrails to ensure your organization can embrace the power of AI safely and confidently.
As Tenable’s Chief Security Officer, Head of Research and President of Tenable Public Sector, LLC, Robert Huber oversees the company's global security and research teams, working cross-functionally to reduce risk to the organization, its customers and the broader industry. He has more than 25 years of cyber security experience across the financial, defense, critical infrastructure and technology sectors. Prior to joining Tenable, Robert was a chief security and strategy officer at Eastwind Networks. He was previously co-founder and president of Critical Intelligence, an OT threat intelligence and solutions provider, which cyber threat intelligence leader iSIGHT Partners acquired in 2015. He also served as a member of the Lockheed Martin CIRT, an OT security researcher at Idaho National Laboratory and was a chief security architect for JP Morgan Chase. Robert is a board member and advisor to several security startups and served in the U.S. Air Force and Air National Guard for more than 22 years. Before retiring in 2021, he provided offensive and defensive cyber capabilities supporting the National Security Agency (NSA), United States Cyber Command and state missions.