Learn why your existing security tech won’t detect data exposure, prompt injection and manipulation, and other AI security risks from ChatGPT Enterprise, Microsoft 365 Copilot, and other LLMs.
When it comes to securing AI, many organizations believe they can mitigate enterprise AI risks by prohibiting employees from using public AI tools and instead directing them to use approved enterprise AI tools.
While enterprise versions of generative AI tools can help keep sensitive data out of public large language models (LLMs), the risk of data exposure remains. Why? Because approved, enterprise AI tools can instantly connect to your organization’s most sensitive data when implemented, depending on how they’re configured.
I repeat: When implemented, enterprise AI tools — including ChatGPT Enterprise, Microsoft Copilot, and others — can instantly connect to systems, such as your enterprise resource planning (ERP), human resources (HR), and customer relationship management (CRM) tools, where confidential customer data, employee data, financial data, intellectual property, and strategic plans reside.
As a result, employees can prompt enterprise generative AI tools to share sensitive data when these tools are configured to integrate with systems containing sensitive data. And if employees can do it, threat actors can, too.
I’ve seen many examples of employees finding creative ways to manipulate enterprise AI chatbots into sharing confidential data about equity deals, non-disclosure agreements, performance appraisals, financial projections, board slides, and more. If preventing this kind of rogue behavior, which creates an insider threat, isn’t part of your organization’s AI acceptable use policy and training, it should be.
I expect to see threat actors leverage misconfigurations in AI tools to access sensitive data and use prompt injection attacks to instruct AI agents to divert payments to cyber criminals’ bank accounts.
Your existing security tools — specifically, your data loss prevention (DLP), cloud access security brokers (CASBs), and cloud security posture management (CSPM) — won’t detect or prevent AI security risks because they were built to monitor a very different attack surface.
The AI attack surface, which consists of all the AI tools and plugins (both sanctioned and shadow) that your employees and contractors use, is fundamentally different from every other attack surface your security team needs to monitor. AI tools have several characteristics that make them unique:
Scale and ubiquity: AI is woven throughout your environment. Not only do these tools instantly connect to your organization’s most sensitive data, they also touch nearly every process and workflow. Consequently, without specialized tools, it’s hard to understand all the systems, data, processes, and workflows that AI interacts with and supports.
Agency: AI isn’t just answering questions; it’s taking actions. Unlike traditional software, AI can be agentic: it can act independently and make decisions, and it accesses sensitive data in order to do so. For that reason, AI agents can also leak data.
Unpredictable: AI is non-deterministic, meaning, it can give different answers to the same question at different times, or use different methods to complete a task. The non-deterministic nature of AI makes it hard to monitor. For example, if you instruct an agent to exchange euros to U.S. dollars, it could use a different website each time to look up the exchange rate.
Traditional data loss prevention software works by looking for specific patterns and terms, like social security numbers or a “confidential” digital watermark. Clever employees and attackers can easily bypass DLP using AI because AI operates on the meaning of a prompt, not just the keywords in it.
For example, an employee or attacker can simply alter a prompt by writing it in another language or even using Morse code to get around a traditional DLP filter.
Because many AI tools are delivered as SaaS, you might think your CASB would help. It doesn’t. CASB sits between the user and the SaaS provider and primarily monitors security risks that occur at runtime — that is, during the interaction between the user and the AI service.
Many of the most significant AI risks don’t take place at runtime. They’re rooted in misconfigurations within the AI platform. For example, the AI development tool Cursor was found to have a “yolo mode” that, when enabled via a misconfiguration, could automatically run actions on a developer’s machine and forward traffic to external servers. Yikes! A CASB would never see this activity because the risky connection isn’t happening at runtime, between the user and the primary AI service (in this case, Cursor).
Both cloud security posture management and its AI-focused companion, AI-SPM, are vital. AI-SPM in particular is a powerful tool for discovering the AI models your organization is using to build AI applications in the cloud, how software libraries and data sources are connected, and access misconfigurations. But it doesn’t detect AI exploitation at runtime.
Moreover, AI isn’t just another cloud resource. Take Vertex AI, Google Cloud’s machine learning platform for building, training, and deploying AI models and applications. A Vertex AI resource running in a cloud environment is much more than a cloud resource, and a misconfiguration in an AI resource is much more than a misconfiguration in a cloud resource. AI-SPM lacks capabilities to pen-test and assess the security of AI applications built on top of pre-built AI models like Vertex AI in pre-production.
Because there are so many different ways AI tools and platforms can be exploited, organizations need the following capabilities:
Continuous AI discovery: The ability to see all AI resources, whether they’re approved or unapproved, including models, agents, platforms, and plugins, combined with the ability to see what systems AI resources connect to.
Prompt-level visibility: The ability to understand what people are using AI for. Are they using it to analyze sensitive financial information, to generate code, or make critical hiring and investment decisions?
AI threat detection and remediation: The ability to uncover and fix vulnerabilities, misconfigurations, and risky integrations, such as AI agents based in sensitive files or connected to external tools, and to detect other risks, including prompt injection and manipulation, attempts to jailbreak AI tools, and other violations of AI acceptable use policies.
These three interrelated capabilities must work together as part of a broader exposure management strategy. Exposure management puts AI security risks in the context of exposures across the rest of your attack surface — traditional IT, cloud, identity systems and operational technology (OT) — so you can visualize potential attack paths and take steps to proactively remediate them.
As part of the Tenable One Exposure Management Platform, Tenable AI Exposure uniquely solves a critical AI security pain point for organizations that their traditional tools can’t. It provides essential capabilities that security teams need to sniff out suspicious AI use, protect sensitive information, and enforce acceptable use policies.
Tenable AI Exposure continuously discovers all approved and unapproved generative AI usage throughout an organization, including models, agents, platforms, and plugins. It provides deep, prompt-level visibility to reveal how users interact with AI platforms and agents, show what data is involved, how AI assistants and agents behave, and which workflows those interactions trigger across your environment.
In addition, Tenable AI Exposure:
The era of AI is upon us, and the tools you’re using to protect your organization can’t keep up with the AI attack surface. Security for AI requires specialized tools, a proactive exposure management strategy, and capabilities for monitoring your entire attack surface, from the prompt to the perimeter and beyond.
If you’re a Tenable One customer and you’re interested in an exclusive private preview of Tenable AI Exposure, fill out the brief form at the top of the Tenable AI Exposure page where it says “Get started with Tenable AI Exposure.”
Tomer Y. Avni, Tenable VP of Product and Go-to-Market at Tenable, specializes in AI security. As a co-founder and Chief Product Officer at Apex Security from 2023 to 2025, Tomer played a pivotal role in securing major corporations on their AI journeys, backed by Sequoia, Index, and Sam Altman. Apex was acquired by Tenable in 2025. Tomer’s previous roles include AVP Product at Authomize, investor at Blumberg Capital, and board observer positions at Hunters and Databand.ai. Tomer earned multiple excellence awards during his leadership post in Israeli Military Intelligence - Unit 8200. He holds an MBA from Harvard Business School, a master’s in engineering from Harvard University, and a bachelor’s in applied mathematics and political science from Bar-Ilan University.