October 28, 2025
9 Min Read

An AI acceptable use policy can help your organization mitigate the risk of employees accidentally exposing sensitive data to public AI tools. Benchmark your organization’s policy against our best practices and discover how prompt-level visibility from Tenable AI Exposure eases policy enforcement.
OpenAI’s release of ChatGPT in November 2022 was a seismic event. Built on the GPT-3.5 large language model (LLM), ChatGPT quickly became the fastest-growing consumer product ever, according to UBS, reaching 100 million monthly users within 60 days. In a similar span of time, the risks of this groundbreaking technology also became apparent.
In early 2023, two employees of an electronics company shared confidential source code with ChatGPT, effectively making their source code part of the LLM’s training data without realizing it. The incident, which was widely reported in the media, prompted many organizations to ban public AI tools. This was not an isolated incident. A global survey conducted by the University of Melbourne in early 2025 showed that 48% of employees had uploaded sensitive information to public generative AI tools and 44% had knowingly violated corporate AI policies.
All of this highlights the urgency for organizations to develop and implement a clear and robust AI acceptable use policy.
An AI acceptable use policy provides guidelines on the correct, ethical, and legal use of AI technologies within your organization. An AI governance council, led by a senior member of the IT team and including stakeholders from across the organization, should manage the policy.
An AI acceptable use policy should include guidelines on approved tools, ethical principles, requirements for AI use, employee responsibilities, data privacy and security, and training and awareness, each of which is detailed in the table below.
An AI acceptable use policy helps you manage the risk of data exposure and intellectual property loss by clearly defining what employees can and can’t do. It can also help you maintain compliance with the data handling provisions of regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
Instead of functioning as an onerous rulebook, a well-crafted AI acceptable use policy should empower employees to take advantage of the benefits of AI while keeping risks in check — whether they’re using work devices or their own personal ones.
View the on-demand webinar Securing the Future of AI in Your Enterprise and download our one-page AI Acceptable Use Policy guide.
You know you need a guide to help govern AI usage. But what should it include? Based on many conversations with customers who have wrestled with this very issue, we encourage AI governance councils to build their policies around a set of core components.
Ensure your organization’s AI acceptable use policy covers the following elements:
| Policy element | What to include |
| --- | --- |
| The use of AI tools | Maintain a readily available list of approved and prohibited tools, and provide a mechanism for employees to submit tools they would like the organization to consider for approval. |
| Ethical principles | Outline your organization’s position on accountability, transparency, fairness, safety, privacy, and security. |
| Requirements for AI use | Lay out the three categories of AI use: permitted (use is unrestricted), prohibited (use is not allowed), and controlled (use requires authorization). A minimal sketch of encoding these categories appears after the table. |
| Employee responsibilities | Make employees’ responsibilities for using AI clear, including checking AI output for accuracy and bias and labeling any AI-generated code appropriately. Above all, make it clear that the organization will not tolerate unlawful or unethical uses of AI (e.g., disinformation, manipulation, discrimination, defamation, invasion of privacy). |
| Data privacy and security | Create guidelines that respect privacy rights and protect the security of data, regardless of the AI use case within the organization. |
| Training and awareness | Underscore your commitment to training on AI risks and explain why you don’t permit unsanctioned AI tools, citing concerns about data exposure, privacy, third-party tracking, and security (e.g., vulnerable AI tools that are easily compromised, or threat actors using an AI tool to gain a foothold in your organization). Make sure all employees review and understand the available training resources. |
Source: Tenable, October 2025
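To make the permitted, prohibited, and controlled categories enforceable rather than merely aspirational, some organizations encode the approved-tools list in a machine-readable registry that other tooling (a proxy, a browser extension, an approval workflow) can query. Here is a minimal Python sketch of the idea; the tool names, the TOOL_POLICY mapping, and the default-to-controlled rule are illustrative assumptions, not part of the policy template above.

```python
from enum import Enum

class UsageCategory(Enum):
    """The three categories of AI use from the policy table."""
    PERMITTED = "permitted"    # use is unrestricted
    CONTROLLED = "controlled"  # use requires authorization
    PROHIBITED = "prohibited"  # use is not allowed

# Hypothetical registry; in practice, your AI governance council would
# maintain this list and publish it where employees and tooling can reach it.
TOOL_POLICY = {
    "enterprise-copilot": UsageCategory.PERMITTED,
    "chatgpt-enterprise": UsageCategory.CONTROLLED,
    "public-chatbot": UsageCategory.PROHIBITED,
}

def categorize_tool(tool_name: str) -> UsageCategory:
    """Look up a tool's category; unknown tools default to CONTROLLED pending review."""
    return TOOL_POLICY.get(tool_name.lower(), UsageCategory.CONTROLLED)

if __name__ == "__main__":
    for tool in ("enterprise-copilot", "public-chatbot", "brand-new-ai-notetaker"):
        print(f"{tool}: {categorize_tool(tool).value}")
```

Defaulting unknown tools to controlled rather than permitted mirrors the request-for-approval mechanism described in the table: new tools get reviewed before they get used.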
Now that you understand the risks and have an acceptable use policy in place that you’ve communicated to employees, what’s next? You need to enforce it. That can be a challenge, but it’s important to get this step right, so let us guide you through it.
There are two keys to ensuring your policy works: a clearly articulated policy and a proactive exposure management program that provides complete visibility into how your team is using these powerful new tools. An AI acceptable use policy without enforcement is just a document; pairing the two is what truly secures your organization.
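What might that enforcement layer look like in practice? The minimal Python sketch below illustrates one common pattern: scanning prompts for sensitive data before they leave the organization. The regex patterns, function names, and block-on-match behavior are illustrative assumptions for this post, not a description of how any particular product, Tenable AI Exposure included, works internally.

```python
import re

# Illustrative patterns only. A production deployment would rely on your DLP
# tooling's classifiers, not a handful of hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\bAKIA[A-Z0-9]{16}\b"),        # AWS-style access key ID
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email address
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security number
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Allow a prompt through, or block it (or route it for review) on a match."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt matched {', '.join(findings)}")
        return False
    return True

if __name__ == "__main__":
    allow_prompt("Summarize the attached meeting notes")                # allowed
    allow_prompt("Why does AKIA0123456789ABCDEF fail authentication?")  # blocked
```

In practice, a check like this would sit in a secure web gateway, browser extension, or API proxy rather than in application code, and it could consult a tool registry like the one sketched earlier to decide which destinations to inspect.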
You have to secure AI to manage your organization’s risk, and that starts with understanding how your employees are using it. So how do you gain the visibility, context, and control you need to govern AI usage, enforce policies, and prevent exposures?
Tenable AI Exposure directly addresses these challenges, providing the capabilities you need to protect sensitive information and enforce acceptable use policies, including prompt-level visibility into how employees interact with AI tools.
With these capabilities, you gain a proactive approach to managing the complexities of generative AI within the enterprise. As a result, your organization can embrace innovation while maintaining robust AI security and meeting compliance standards.
If you’re a Tenable One customer and you’re interested in an exclusive private preview of Tenable AI Exposure, fill out the short form on the Tenable AI Exposure page and we’ll get right back to you.
Damien Lim is a seasoned cybersecurity marketing leader with over two decades of experience in AI, cybersecurity, and technology. In his role as Senior Product Marketing Manager, Damien serves as an AI evangelist at Tenable, leading global initiatives to drive market awareness and strategy for Tenable’s AI capabilities. His expertise lies in translating complex technology into impactful go-to-market strategy, engaging with analysts and industry leaders to shape the conversation on AI applications related to cyber risk. Damien is also a published co-author on AI in cybersecurity. He has previously launched industry-leading solutions for organizations ranging from global companies to start-ups, including Fortinet, Palo Alto Networks, FireEye, and StrikeReady.