The application security industry has spent two decades telling developers to “shift left,” that is, to own security from the start. We’ve invested billions in developer training, IDE plugins, and security education. And yet… over 48,000 CVEs were published in 2025, a 67% increase from 2023. We haven’t even managed to eliminate SQL injection, a vulnerability that’s been well-understood since the 1990s.
Narrow shift left has failed. And it’s about to get worse.
First, let’s distinguish between two very different interpretations of “shift left”: broad shift left and narrow shift left.
Broad shift left means establishing security – secure design, threat modeling, architecture patterns, and good hygiene across all projects – before a single line of code is written. Organizations absolutely should embed security thinking into requirements, design reviews, and infrastructure decisions.
Narrow shift left means making individual developers responsible for finding and fixing security bugs in their own code. It requires training them to be security experts, equipping them with scanning tools, and expecting them to remediate vulnerabilities between feature commits. This version of shift left has demonstrably failed.
Why the failure? Developers aren’t measured on security. They’re measured on shipping features. When a developer has very limited time per week for learning, security has to compete with new AI development tools, evolving frameworks, and the techniques that directly impact their productivity and job performance. Security loses that competition every time.
The average enterprise vulnerability takes 252 days to fix, up from 171 days in 2020. Manual remediation in enterprises costs $5,000 to $20,000 per vulnerability. Most organizations are finding vulnerabilities faster than they can fix them, creating growing backlogs.
Now add AI-generated code to the mix. Research shows 24.7% of AI-generated code contains security vulnerabilities. According to Black Duck’s 2025 DevSecOps report, 57% of organizations say AI coding assistants have introduced new security risks or made it harder to detect issues.
The vulnerability flood is accelerating. And asking developers to fix it manually, one bug at a time, while simultaneously learning prompt engineering, mastering AI assistants, and building features won’t stop it.
Many developers aren’t coding anymore – not in the traditional sense. They’re managing AI agents. They orchestrate Claude, Copilot, and other assistants to generate multiple versions of functionality, selecting the approach that works best. Google’s Antigravity environment isn’t an AI development environment – it’s an agent management platform.
When a developer manages a dozen agents producing different components of an application, who exactly is supposed to review each output for security vulnerabilities? The developer who didn’t write the code? The AI that doesn’t understand the security context? The security tool that will generate hundreds of findings, most of which are false positives?
This is where narrow shift left collides with reality. Developers using AI assistants aren’t intimately familiar with every line of code they’re deploying. They’re orchestrating outputs. Asking them to also be security experts for code they didn’t write is a recipe for exactly the outcomes we’re seeing: 81% of organizations admit they knowingly ship vulnerable code.
If developers are becoming agent managers, application security professionals need to become security automation engineers.
Consider what AppSec teams do well: they understand vulnerabilities at a portfolio level. They spot patterns across repositories. They know which findings matter and which are just noise. They understand compliance requirements and risk tolerance. What they lack is the capacity to manually triage and remediate thousands of findings across dozens of applications.
The answer isn’t to push that workload onto developers who are even less equipped to handle it. Instead, AppSec teams need automation tools to match the scale of the problem.
Here’s what this looks like in practice. AppSec teams ensure organizations run scans across all repositories at the appropriate place in their dev/ops process. AI automatically triages findings, separating real vulnerabilities from the false positives that constitute 30–40% of typical SAST results. For validated vulnerabilities, automation generates actual code fixes, tested and ready for deployment.
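The scan→triage→fix→PR flow above can be sketched in a few lines. This is a hypothetical illustration, not a real tool’s API: the `Finding` fields, the confidence threshold, and the `generate_fix` stub are all assumptions standing in for a scanner’s output, an AI triage model, and a fix generator.

```python
# Hypothetical sketch of the scan -> triage -> fix -> PR pipeline.
# All names (Finding, triage, generate_fix) are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    file: str
    line: int
    confidence: float  # score assigned by an AI/heuristic triage step

def triage(findings, threshold=0.8):
    """Separate likely-real vulnerabilities from probable false positives."""
    real = [f for f in findings if f.confidence >= threshold]
    noise = [f for f in findings if f.confidence < threshold]
    return real, noise

def generate_fix(finding):
    """Stand-in for an automated fix generator; returns a PR-ready stub."""
    return {"title": f"fix({finding.rule_id}): {finding.file}:{finding.line}",
            "status": "ready-for-review"}

findings = [
    Finding("sql-injection", "orders.py", 42, 0.95),
    Finding("xss", "views.py", 10, 0.35),  # likely false positive
]
real, noise = triage(findings)
prs = [generate_fix(f) for f in real]
print(len(real), len(noise), prs[0]["title"])
```

The point of the sketch is the division of labor: the classifier absorbs the triage volume, and only validated findings become pull requests that humans review.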
The developer’s role? Review the pull request and answer one vital question: “Does this change break my functionality?” They don’t need to understand parameterized queries or the nuances of XSS prevention. They need to confirm the fix doesn’t break their feature.
The AppSec engineer’s role? Validate that the security fix is correct. They don’t need to understand the business logic or the application architecture. They are there to confirm the vulnerability is remediated.
Each person answers the questions they can answer.
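Concretely, the kind of fix under review is often as small as replacing string-built SQL with a parameterized query. A minimal sketch, using Python’s `sqlite3` and a made-up `users` table for illustration:

```python
# Before/after view of a typical automated SQL-injection fix.
# sqlite3 and the users table are illustrative; the pattern is general.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Before (vulnerable): attacker input is spliced into the SQL text,
# so the OR clause changes the query's meaning and matches every row.
rows_vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# After (fixed): the driver binds the value as data; it cannot alter
# the SQL, so the payload matches nothing.
rows_fixed = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(rows_vulnerable), len(rows_fixed))  # 1 0
```

A developer reviewing this diff only has to confirm the query still returns the rows their feature expects; the AppSec engineer confirms the binding is correct.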
If you’re an engineering leader, stop pretending developers will magically become security experts. They shouldn’t have to. Give them tools that deliver fixes they can review and approve, not findings they have to research and remediate themselves.
If you’re an AppSec leader, you’re no longer the person who finds vulnerabilities and writes reports hoping developers will act. You’re becoming the person who manages the automation that actually fixes problems at scale.
If you’re a CISO, understand that narrow shift left is a liability, not an asset. Centralized automation managed by security experts gives you actual coverage, actual remediation, and actual metrics you can report to your board.
The first step is acknowledging that narrow shift left hasn’t worked and won’t work at AI scale. The second is evaluating automation solutions that can move the needle. Not more findings, but actual fixes.
Integrate solutions with your existing SAST and DAST tools rather than replacing them. Look for solutions that deliver pull requests, not reports. Look for transparent, verifiable accuracy metrics – benchmarked results you can validate independently.
The security industry has spent twenty years telling developers to fix code they wrote; now they’re expected to fix code they didn’t even write. It’s time for a different approach.
The age of AI coding requires the age of automated remediation.