March 25, 2026
12 Min Read

Get a template for an AI coding acceptable use policy with security controls and a list of 25 security questions to ask software developers and “citizen developers” about their AI use. Mitigate the security risks of vibe coding and using AI in software development with Tenable One.
Your organization’s software developers and DevOps teams are using agentic AI, LLMs, and machine learning to do their jobs faster and more efficiently, whether you like it or not. In fact, 81% of developers surveyed by CodeSignal say they’re using AI for development, and some large tech companies mandate the use of AI for their developers.
In the most extreme cases, developers and non-developers (so-called “citizen developers”) are resorting to vibe coding: they tell an agent or an LLM what they want the software to do, the AI builds it, and the “developer” pushes the AI-generated code into production without any vetting or review.
AI-generated code created on behalf of citizen developers in particular is prone to misconfigurations, excessive data permissions, and weak authentication.
As you build and implement your organization’s AI acceptable use policy, it’s important to familiarize yourself with the various developer and citizen developer use cases, which can differ greatly from how other employees are leveraging AI. In this blog, we walk through the most common developer AI use cases and their security risks, key questions to ask your teams, and the policies and controls that can reduce your exposure.
Integrated development environments (IDEs) incorporate AI coding assistants to provide real-time suggestions. These can include auto-completing the next few words in a line, generating entire functions or code blocks (vibe coding), or even creating boilerplate code based on comments or partial code structure.
Security risks of AI-powered code completion and generation: These practices introduce the risk of insecure code suggestions. AI models trained on vulnerable code often replicate insecure patterns in their suggestions. They also raise concerns about intellectual property leaks if developers are using AI tools that may share proprietary code snippets as training data or in suggestions to other users (depending on the AI tool's license and configuration).
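For example, a model trained on vulnerable code may suggest string-interpolated SQL, a classic injection pattern, where a parameterized query is the safe equivalent. A minimal illustration in Python (the table and function names are hypothetical):

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Pattern often replicated from vulnerable training data:
    # interpolating user input directly into SQL enables injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the database driver handles escaping.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A payload like `' OR '1'='1` returns every row from the unsafe version, while the parameterized version returns nothing.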
Developers use AI and ML tools to analyze existing code, documentation, and user interaction patterns to automatically generate unit tests, integration tests, and even security tests. One famous example is Anthropic using its Claude Opus 4.6 model to discover 500 high-severity vulnerabilities in open source codebases.
AI tools can also prioritize which existing tests to run based on changes made to the code, significantly speeding up the continuous integration (CI) process.
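Stripped of the AI layer, change-based test selection reduces to a mapping from changed source files to the tests that exercise them. A sketch (the module-to-test mapping below is invented for illustration; an AI tool would infer it from coverage data or code analysis):

```python
# Map source modules to the test files that exercise them.
# Hard-coded here for illustration; in practice this mapping is
# derived from coverage data or inferred by an AI tool.
TEST_MAP = {
    "app/auth.py": ["tests/test_auth.py", "tests/test_session.py"],
    "app/billing.py": ["tests/test_billing.py"],
    "app/utils.py": ["tests/test_utils.py", "tests/test_auth.py"],
}

def select_tests(changed_files):
    """Return the de-duplicated, sorted set of tests to run for a change set."""
    selected = set()
    for path in changed_files:
        selected.update(TEST_MAP.get(path, []))
    # Files with no known mapping fall back to the full suite to stay safe.
    if any(path not in TEST_MAP for path in changed_files):
        selected.update(t for tests in TEST_MAP.values() for t in tests)
    return sorted(selected)
```

The safety fallback matters: selection should only ever skip tests it can prove are unrelated to the change.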
Security risks of using AI to test software: While helpful, the quality of AI-generated tests can vary. An LLM may fail to generate tests that cover subtle logic flaws or security vulnerabilities, creating a false sense of security. Human review of security-critical tests remains essential. Additionally, a significant drawback of using an LLM to discover security vulnerabilities in software is that it does so without any meaningful prioritization, generating even more noise for security and DevSecOps teams.
LLMs can review pull requests by summarizing changes, identifying potential bugs, checking code against organizational style guides, and suggesting optimizations or refactoring. Developers also use them to explain complex or legacy code in natural language, reducing onboarding time and maintenance effort.
Security risks of using AI for code review, analysis, and refactoring: AI reviewers configured for static application security testing (SAST) will scan for known security vulnerabilities and suggest fixes. However, they might miss contextual vulnerabilities specific to your application architecture or suggest remediation that, while fixing one issue, introduces a new, subtle one.
Developers use LLMs to automatically generate function docstrings, API reference material, and README files from source code. In a DevOps context, these tools can also summarize long log files or incident reports to quickly identify root causes and patterns.
Security risks of using AI to create documentation: AI-generated documentation can sometimes be inaccurate or incomplete, especially for complex security features or custom encryption logic. Relying solely on these tools for critical security documentation can lead to misunderstandings and misconfigurations.
DevOps teams use LLMs to translate natural language requests (e.g., "Deploy a three-tier web application using AWS, with a PostgreSQL database and a load balancer") into working infrastructure-as-code (IaC) configurations (e.g., Terraform or CloudFormation scripts). This significantly accelerates the provisioning of environments.
Security risks of using AI for natural language-to-infrastructure generation: This is a major area of risk. LLMs may generate IaC scripts that contain insecure defaults (e.g., overly permissive firewall rules, unencrypted storage, or insecure port configurations) if not explicitly prompted otherwise. Integrate mandatory security checks and scanning tools into the continuous integration/continuous delivery (CI/CD) pipeline to validate all AI-generated IaC.
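One way to implement that gate is a pre-deployment check over the plan rendered as JSON. The sketch below is illustrative only (resource shapes are simplified; real pipelines should use a dedicated scanner such as Checkov or tfsec):

```python
# Minimal sketch of a CI gate that flags common insecure defaults in a
# Terraform plan rendered as JSON (`terraform show -json plan.out`).
# Resource shapes are simplified for illustration.

def check_plan(plan: dict) -> list:
    findings = []
    resources = (
        plan.get("planned_values", {}).get("root_module", {}).get("resources", [])
    )
    for res in resources:
        values = res.get("values", {})
        # Flag security groups open to the whole internet.
        if res.get("type") == "aws_security_group":
            for rule in values.get("ingress", []):
                if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                    findings.append(f"{res['address']}: ingress open to the internet")
        # Flag S3 buckets without encryption at rest configured.
        if res.get("type") == "aws_s3_bucket" and not values.get(
            "server_side_encryption_configuration"
        ):
            findings.append(f"{res['address']}: bucket not encrypted at rest")
    return findings
```

A non-empty findings list would fail the pipeline stage, forcing a human to review the AI-generated IaC before it can be applied.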
As developer and coding use cases get augmented with agentic capabilities, leading to the semi-autonomous or fully autonomous execution of software development and testing tasks, the security situation is only going to get worse.
The primary areas of AI security risk for CISOs are:

- Agentic AI tools and the permissions they hold
- Insecure AI-generated code
- Legal and software supply chain risk
- Leakage of proprietary code and data into public AI models
Below are key questions to ask your DevOps teams to help you assess your organization’s risk in each of these areas.
According to an October 2025 McKinsey report, business leaders are rushing to embrace agentic AI. For developers, tools like OpenClaw, Cursor, and GitHub Copilot Workspace can execute commands without human intervention, raising concerns about the permissions these tools hold.
Questions to ask your developers:
Like much of the output from AI tools, AI-generated code is often functional but flawed. It’s entirely possible that it may reintroduce long-running vulnerabilities like SQL injection or insecure hardcoded secrets.
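Hardcoded secrets, at least, are cheap to catch in CI before AI-generated code is merged. A minimal regex-based sketch (the two patterns are illustrative; real scanners such as gitleaks or TruffleHog use far larger rule sets plus entropy analysis):

```python
import re

# Illustrative detection rules only; production scanners use
# hundreds of patterns and entropy checks on string literals.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"""(?i)(api[_-]?key|secret)\s*[:=]\s*['"][A-Za-z0-9/+]{16,}['"]"""
    ),
}

def scan_for_secrets(source: str) -> list:
    """Return (line_number, rule_name) for each suspected hardcoded secret."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

Running a check like this as a pre-commit hook or CI step catches the most common leak before it ever reaches the repository history.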
Questions to ask your developers:
Using AI-generated code can introduce legal risks such as violating copyrights and licenses. It also raises the stakes for supply chain risk, potentially passing flaws and vulnerabilities to your customers.
Questions to ask your developers:
Are your developers accidentally leaking your company's proprietary code, API keys, or customer data into public AI models? You can mitigate some of this risk by restricting all employees to a closed, enterprise-grade platform like ChatGPT Enterprise. Even so, it’s important to clearly define which code and data developers are allowed to share with any approved tool.
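Alongside platform restrictions, some organizations filter prompts on the way out. A hypothetical sketch of outbound redaction (two illustrative rules; a real data loss prevention gateway applies many more, plus entropy checks and allow-lists):

```python
import re

# Hypothetical outbound filter: redact obvious sensitive strings from a
# prompt before it leaves the organization for an external AI service.
REDACTIONS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED_EMAIL]"),
]

def sanitize_prompt(prompt: str) -> str:
    """Apply each redaction rule in turn to the outgoing prompt."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt
```

Redaction is a backstop, not a substitute for policy: it catches the patterns you anticipated, while training and sanctioned tooling address the rest.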
Questions to ask your developers:
Here is a brief overview of key AI governance and AI accountability policies to consider:
| Policy area | Specific policy | Control implementation |
|---|---|---|
| Monitoring | Developers understand their use of AI is monitored for compliance with the organization’s broader AI acceptable use policy, including use of approved and unapproved tools, as well as for data leaks, secrets detection, hallucinated libraries, malicious prompts, etc. | Implement a platform capable of discovering AI in its various forms (agents, plugins, extensions, LLMs, etc.) across the entire organization — internal and external, on-prem and cloud, approved and unapproved — delivering a complete, risk-aware view of where AI operates, how it is connected, and where exposure is created. |
| Developer accountability | The developer who reviews, modifies, and commits the AI-generated code is fully accountable for its security, compliance, and legal standing (including licensing). | Incorporate this principle into your security awareness training and update your secure software development lifecycle (SSDLC) documents to reflect AI usage as a new form of third-party input. |
| Compliance and licensing | Scrutinize AI-generated code for potential open-source license infringement (a risk when AI models reproduce training code). | Use software composition analysis (SCA) tools and human legal review to check any significant AI-generated code block against your organization’s open-source licensing policies. |
| Training and awareness | All developers must complete annual training on the specific security risks of LLMs, including hallucination, prompt injection, and data leaks. | Create a modular training program focused on AI-specific secure coding patterns and the risks of developer overconfidence in AI-generated code. |
| Vibe coding | Decide if your organization will allow vibe coding, and if so, for what use cases. For instance, you may opt to allow vibe coding for the development of personal productivity scripts and wireframes but not for production systems, customer-facing systems, or any applications that touch sensitive customer, employee, or intellectual property data. | Consider implementing controls for environmental isolation, AI-generated test coverage, traceability, and verification gating. |
| Agentic AI policy | Any use of agentic AI (where an LLM can perform multi-step actions autonomously, like creating a pull request or deploying IaC) must have strict, predefined guardrails, including requirements for human-in-the-loop (HITL), and run in a sandboxed, low-privilege environment. | Require explicit security architecture review and approval before introducing any autonomous AI agent into the CI/CD pipeline. |
Source: Tenable, January 2026
As developers and business users embrace AI coding, you don't have to make them choose between innovation and security. Establishing an AI acceptable use policy for developers, providing them with sanctioned and secure AI platforms to use, educating them about cybersecurity best practices, and monitoring their use of AI tools will reduce your organization’s risk.
Tenable AI Exposure continuously discovers AI across your entire organization — approved and unapproved, internal and external, on-premises and cloud — delivering a complete, risk-aware view of where AI operates, how it is connected, and where exposure is created.
In short, Tenable One correlates the relationships among AI applications, infrastructure, identities, agents, and data to highlight and prioritize the AI exposures that matter most for remediation.
Use Tenable One to:
Unlike point AI security tools that surface isolated findings, Tenable One correlates AI, infrastructure, agents, and data exposure into a unified view, so you can reduce AI risk across all environments, even as your developers leverage AI for scale and productivity.
Tomer Y. Avni, VP of Product and Go-to-Market at Tenable, specializes in AI security. As co-founder and Chief Product Officer of Apex Security (backed by Sequoia, Index, and Sam Altman) from 2023 to 2025, Tomer played a pivotal role in securing major corporations on their AI journeys; Apex was acquired by Tenable in 2025. His previous roles include AVP Product at Authomize, investor at Blumberg Capital, and board observer positions at Hunters and Databand.ai. Tomer earned multiple excellence awards during his leadership post in Israeli Military Intelligence - Unit 8200. He holds an MBA from Harvard Business School, a master’s in engineering from Harvard University, and a bachelor’s in applied mathematics and political science from Bar-Ilan University.