Confident Developers Are the New Security Risk
2026-3-2 11:58:41 Author: checkmarx.com

AI coding tools have fundamentally changed how software gets built. 

After attending OnPoint Ski & Snowboard CyberCon 2026 and speaking with security and development leaders there, one theme stood out to me: teams are shipping more code, in more languages, across more projects than ever before. Features that used to take days now take minutes, and complex logic can be scaffolded from a single prompt. 

The output is fast, it looks polished, and it runs smoothly. 

And that’s exactly the problem. 

When Confidence Outpaces Security 

As developers rely more on AI tools, something subtle happens: the speed and quality of the output create confidence. The code looks clean, it compiles, it works as expected – so it gets trusted. 

But AI models only predict what is likely to work; they don’t understand your threat model, and they can’t assess exploitability in your environment. AI-generated code can function perfectly and still introduce serious vulnerabilities. 
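A classic illustration of "functions perfectly, still vulnerable" is SQL built by string interpolation. The sketch below is hypothetical (not taken from any specific AI tool's output): both functions pass every happy-path test, but only one survives hostile input.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Compiles, runs, and passes happy-path tests -- but interpolating
    # input into SQL allows injection (e.g. username = "x' OR '1'='1").
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Identical behavior for legitimate input; the parameterized
    # query closes the injection hole.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Both return the same row for normal input...
assert find_user_unsafe(conn, "alice") == find_user_safe(conn, "alice")
# ...but only the unsafe version leaks every row to a crafted input.
assert len(find_user_unsafe(conn, "x' OR '1'='1")) == 2
assert find_user_safe(conn, "x' OR '1'='1") == []
```

A functional test suite would never flag the first function; only a security-aware review or scanner would.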

This gap between what works and what’s secure is where risk builds exponentially. 

This isn’t a criticism of developers. AI tools are powerful productivity accelerators, and teams absolutely should use them. But validating functionality is not the same as validating security. And right now, that distinction is getting blurred. 

More Code, Same Security Team

This confidence issue isn’t happening in a vacuum; it’s the byproduct of broader organizational shifts. 

The development lifecycle is becoming more agentic, more automated, and faster than ever. That means more code written with fewer reviewers, pull requests that are more frequent but also more complex, and an AppSec team expected to keep pace without any additional resources.  

So, the backlog isn’t stabilizing – it’s growing. 

I see this tension in organizations all the time. AppSec teams are expected to keep up with the speed of development while maintaining strong security standards. In practice, they can’t fully do both. Slowing down development usually isn’t an option, so security is expected to adapt. 

Development Is Now Human + AI 

Development is no longer purely human-led – but it isn’t exclusively AI-led either. It is now driven by developers working alongside AI. 

AI is assisting, suggesting, generating, and accelerating, but humans are still making decisions and shipping code. The model has shifted from developers writing everything themselves to developers collaborating with AI systems throughout the process. 

This shift significantly increases output. Teams are producing more features, services, and integrations at a much faster pace. But AI is optimized for speed and plausibility, not security. It can produce functional code, but not inherently secure code. 

The speed AI delivers builds confidence and trust, but it also increases the likelihood of security gaps slipping through unnoticed – especially when developers are shipping code they didn’t write and don’t fully understand. We recently dug deeper into this trend in our Don’t Trust the Code paper.  

But these tools don’t just change how developers work – they also add new components to the software supply chain. Every model integration, MCP connection, and AI-assisted workflow becomes another potential entry point, and the environment is expanding faster than many security teams can track. 

I’ve seen cases where thousands of AI coding assistant licenses were active before the Head of Security even knew they existed. And when organizations don’t know which AI tools are in use or how data is flowing, they can’t properly assess risk – and the attack surface grows, unnoticed. 

Security Has To Evolve 

One of my biggest takeaways is that if AI-driven productivity is the new baseline, security can’t operate the way it did five years ago – it must evolve across these three categories: 

  1. How we identify vulnerabilities in code is changing.
  2. How we identify vulnerabilities in the tools we are using is changing.
  3. How we address vulnerabilities is changing. 

Traditional scanners weren’t built for this environment. They struggle with modern languages and frameworks, generate noise, and can’t keep pace with modern CI/CD pipelines.  

Meanwhile, AI is introducing new threat vectors: 

  • Generated logic that hasn’t been deeply reviewed
  • New dependencies
  • Expanded supply chain components

Organizations still need every line of that code scanned quickly, with findings developers can actually act on. This is why we’re seeing the rise of agentic scanning approaches: hybrid engines that combine deterministic analysis with AI reasoning, LLM-powered workflows, and automated context-aware triage. 
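The hybrid idea can be sketched in a few lines. This is an illustrative toy, not any vendor's actual engine: deterministic rules produce candidate findings, and an AI triage step (stubbed here with a trivial heuristic in place of an LLM call) adds priority and filters noise.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    line: int
    snippet: str
    priority: str = "untriaged"

# Deterministic layer: fast, repeatable pattern rules (names are made up).
DETERMINISTIC_RULES = {
    "hardcoded-secret": re.compile(r"(api_key|password)\s*=\s*['\"]\w+"),
    "sql-concat": re.compile(r"execute\(.*[%+].*\)"),
}

def deterministic_scan(source: str) -> list[Finding]:
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in DETERMINISTIC_RULES.items():
            if pattern.search(line):
                findings.append(Finding(rule, lineno, line.strip()))
    return findings

def ai_triage(finding: Finding) -> Finding:
    # Placeholder for the AI reasoning step, which would weigh
    # reachability and context; a trivial heuristic stands in here.
    finding.priority = "high" if "secret" in finding.rule else "review"
    return finding

sample = 'api_key = "abc123"\ncursor.execute("SELECT * FROM t WHERE id=" + uid)\n'
triaged = [ai_triage(f) for f in deterministic_scan(sample)]
```

The division of labor is the point: the deterministic layer never misses its patterns, while the AI layer supplies the context-aware judgment that keeps findings actionable.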

But securing the code is only half of the problem; we also need to secure the AI tools writing it. AI Bills of Materials (AI-BOMs) are emerging to provide visibility into where AI is being used, which models are connected, and how data flows through them. Securing the full AI stack is quickly becoming a core AppSec responsibility. 
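There is no single settled AI-BOM schema yet, so the fields below are only a guess at the minimum inventory the article describes: which tool, which model, how it connects, and what data it can touch.

```python
from dataclasses import dataclass, field

# Hypothetical minimal AI-BOM entry -- the field names are illustrative,
# not drawn from any published AI-BOM standard.
@dataclass
class AIBomEntry:
    tool: str                  # e.g. an AI coding assistant
    model: str                 # backing model identifier
    integration: str           # how it connects: IDE plugin, MCP server, CI bot
    data_access: list[str] = field(default_factory=list)
    approved: bool = False     # has security reviewed this tool?

inventory = [
    AIBomEntry("code-assistant-x", "model-y", "IDE plugin",
               data_access=["source code"], approved=True),
    AIBomEntry("chat-bot-z", "model-q", "MCP server",
               data_access=["source code", "tickets"]),
]

# Surface the unreviewed tools -- the "unknown attack surface" problem.
unapproved = [e.tool for e in inventory if not e.approved]
```

Even a spreadsheet-grade inventory like this answers the question from the anecdote above: which AI tools are active before security ever hears about them.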

From Backlog to Automation 

Detection alone won’t solve the scaling problem. The traditional identify – triage – remediate – verify cycle cannot be managed manually when code is growing exponentially. Without automation, quality declines and backlogs grow. 

Agents become valuable when they’re embedded directly into the development lifecycle, especially in high-volume stages like triage, remediation, and verification. These are areas where automation can absorb the workload security teams can’t handle manually. 
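The identify – triage – remediate – verify cycle can be pictured as a loop that agents drive instead of humans. Every stage below is a stub with made-up logic (a real pipeline would call scanners, an LLM, and a test suite); the shape of the loop is what matters.

```python
# Toy identify -> triage -> remediate -> verify loop. The "vulnerability"
# here is a weak hash call; the stages and fix are purely illustrative.

def identify(codebase: dict) -> list[str]:
    # Stand-in for a scanner: flag files calling a weak hash.
    return [path for path, src in codebase.items() if "md5(" in src]

def triage(findings: list[str]) -> list[str]:
    # Stand-in for an agent dropping unreachable/noisy findings.
    return findings

def remediate(codebase: dict, path: str) -> None:
    # Stand-in for an auto-fix agent proposing a stronger hash.
    codebase[path] = codebase[path].replace("md5(", "sha256(")

def verify(codebase: dict) -> bool:
    # Re-run identification to confirm the finding is gone.
    return not identify(codebase)

codebase = {"auth.py": "digest = md5(password)"}
for path in triage(identify(codebase)):
    remediate(codebase, path)
assert verify(codebase)
```

When the loop closes automatically like this, the backlog stops being a queue humans drain and becomes a state the pipeline converges to.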

When agents operate within a defined AppSec strategy, they form the foundation for applications that can secure themselves, freeing teams to focus on policy and governance rather than reactive risk management. 

Securing at the Speed of Confidence 

The paradox is clear. AI increases output, which increases risk. At the same time, it increases confidence, and confident developers move faster, question less, and merge code more quickly. 

But beneath that momentum, the gap between perceived security and actual security continues to widen. Since slowing down is not a realistic option, the only path forward is to secure software at the speed AI now sets. 

Checkmarx is built for this shift. It combines deterministic scanning with AI-driven detection to give clear visibility into how AI is being used across development environments, while also automating remediation with tools like Checkmarx Developer Assist.  

The result is security embedded directly into the development process – instead of tacked on at the end. 

And the goal isn’t to reduce developer confidence – confidence is a good thing! The goal is to ensure that this confidence is earned, backed by real visibility and security controls that scale with the volume of code being produced. 

At the end of the day, confident developers with guardrails in place move fast and stay secure. Confident developers without them just move fast. 

Tags: AI, AI generated code, AppSec, developer assist


Source: https://checkmarx.com/blog/confident-developers-are-the-new-security-risk/