Checkmarx Influencer 2026 Predictions
The article examines the risks and opportunities of AI-accelerated online activity and predicts that in 2026 application security will shift toward continuous intelligence and automated defense. Key themes include trust in machine-generated code, software supply chain risk, model-as-malware, and developer tool integration. (Published 2025-12-02, checkmarx.com)

AI is accelerating everything online—volume, velocity, and risk. Great Application Security no longer reacts to threats. It prevents them before they happen.

Checkmarx leaders share what is next: a shift from one-off code scans to continuous, agentic intelligence that learns, explains, and protects organizations in real time. Check out these Checkmarx predictions for 2026.


“There’s a collective CEO obsession (and I count myself in this group) with AI-driven productivity that will peak in 2026 when organizations wake up to the reality of weakened security postures.”

Sandeep Johri

CEO, Checkmarx

This year was pivotal for our business. The headlines, trendlines, and our data say the same thing: threats are accelerating, and we must stay vigilant to stay ahead.

Anthropic reported a largely autonomous AI-led espionage campaign against 30 organizations, executed with minimal human intervention and raising serious questions about the stability of business and public infrastructure. Nearly every one of the 1,500 AppSec leaders we surveyed reported breaches tied to vulnerable code. Fewer than one in five organizations even have governance policies in place to manage the coming wave of autonomous code in 2026.

There’s a collective CEO obsession (and I count myself in this group) with AI-driven productivity that will peak in 2026 when organizations wake up to the reality of weakened security postures.

The new obsession will be a healthier marriage of speed and intelligence: AI for productivity, security for AI, and AI for security. The way to stem the tide of the coming wave of automated code is to make AppSec agentic: autonomous, intelligent, and developer-first. The future of AppSec is about reimagining security, not as an afterthought or a gatekeeper, but as a strategic engine for innovation at scale.


“AI agents will support developers at every stage of coding, from the first lines they write to the moment they save changes to the shared repository. Traditional IDEs that do not adopt agentic approaches will lose significant developer market share to modern AI IDEs.”

Eran Kinsbruner

VP of Portfolio Marketing

2026 will be the year when Agentic AI AppSec becomes a natural component within the integrated development environments (IDEs), supporting traditional application security tasks including prevention, remediation, prioritization, and overall AppSec orchestration activities.

AI agents will support developers at every stage of coding, from the first lines they write to the moment they save changes to the shared repository. Advanced LLMs and other intelligent capabilities will proactively enhance AI coding security, spotting risks early and suggesting fixes before they turn into problems. Traditional IDEs such as Eclipse and IntelliJ that do not adopt agentic approaches will lose significant developer market share to modern AI IDEs such as AWS Kiro, Cursor, Windsurf, and GitHub Copilot.


“The best tools in 2026 won’t be the ones that generate the most code. They will be the ones that help developers see how a line of code came to be, what assumptions it carries, and whether it meets the team’s standards.”

Steve Boone

Director of Product Marketing  

The idea that AI will replace developers misses the point. In practice, it changes what “writing code” means. Developers are becoming editors and reviewers of generated work. Their value shifts toward judgment. They must know when to trust a suggestion and when to discard it.

The best tools in 2026 won’t be the ones that generate the most code. They will be the ones that help developers see how a line of code came to be, what assumptions it carries, and whether it meets the team’s standards.

Developers will shift from coders to curators, guiding and auditing machine-generated output. The next phase isn’t full automation—it’s disciplined collaboration, where transparency, traceability, and trust define maturity in an AI-driven SDLC.

Ori Bendet

“The bigger risk now is the software supply chain. Just look at the attacks on the open-source community in the last few months. With more LLMs and AI-driven apps, dynamic analysis is back in the spotlight.”

Ori Bendet

VP of Product Management

Static analysis—checking code without running it—is losing ground faster than you think. It’s still useful, but its future lies in smarter roles like correlation, which groups related security findings into themes, and context, which explains why issues matter and how to fix root causes instead of chasing individual alerts.
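
To make that correlation role concrete, here is a minimal sketch in Python, using an invented finding schema (the `cwe`/`file`/`sink` fields are illustrative assumptions, not any particular scanner's format): findings that share a weakness class and a sink are grouped into one root-cause theme.

```python
# Minimal sketch: group raw SAST findings into root-cause "themes"
# instead of reporting each alert individually. The Finding fields
# are illustrative, not a real scanner's schema.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    cwe: str   # weakness class, e.g. "CWE-89" (SQL injection)
    file: str  # source file where the issue was flagged
    sink: str  # the call the tainted data ultimately reaches

def correlate(findings: list[Finding]) -> dict[tuple[str, str], list[Finding]]:
    """Group findings sharing a weakness class and sink: fixing the shared
    root cause (e.g. one unsafe query helper) clears the whole theme."""
    themes: dict[tuple[str, str], list[Finding]] = defaultdict(list)
    for f in findings:
        themes[(f.cwe, f.sink)].append(f)
    return dict(themes)

findings = [
    Finding("CWE-89", "orders.py", "db.raw_query"),
    Finding("CWE-89", "users.py", "db.raw_query"),
    Finding("CWE-79", "views.py", "render_html"),
]
for (cwe, sink), group in correlate(findings).items():
    print(f"{cwe} via {sink}: {len(group)} findings -> one root-cause fix")
```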

The bigger risk now is the software supply chain; just look at the attacks on the open-source community in the last few months. With more LLMs and AI-driven apps, dynamic analysis, or testing code while it runs, is back in the spotlight to help keep AppSec adaptive and resilient.


“The next generation of security must combine classic cybersecurity with AI governance, model integrity, and agent-level risk management.”

Jonathan Rende

Chief Product Officer

AI is moving from a ‘human hands on the wheel’ approach to a future with no human in the middle. Traditional LLMs like Claude or Google’s Gemini may do a ‘good enough job’ for basic static needs, but security can’t just focus on static versus dynamic systems anymore.

Agentic and autonomous systems are self-learning ecosystems, which means the next generation of security must combine classic cybersecurity with AI governance, model integrity, and agent-level risk management. This is how we safeguard trust, compliance, and resilience in the AI era.


“’Model-as-Malware’ is rising. Open-source LLMs and fine-tuned weights are slipping into the supply chain like npm packages. Expect poisoned weights, rigged adapters, and backdoored models trusted by teams because they are disguised as ‘productivity tools.’”

Erez Yalon

VP of Security Research

Breach stories won’t change, but they will accelerate. Convenience beats caution and attackers hunt the newest building blocks. We warned that attackers would keep going after the supply chain. AI just made it bigger and harder to see. In 2026, that risk will hit production scale.

API surfaces are getting weirder and noisier. Vibe-augmented apps hit more third-party APIs, more often, with less human oversight—amplifying timing races, auth errors, and logic bugs in blind spots. API security has long focused on awareness, but autonomous callers multiply the blast radius of a single dangling permission.
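
To illustrate what containing that blast radius can look like: if agent-initiated calls are denied by default and every permission is an explicit grant, an over-broad or forgotten grant becomes far easier to spot. A minimal sketch, with hypothetical agent names, hosts, and paths:

```python
# Minimal sketch: deny-by-default authorization for autonomous API
# callers. Agent names, hosts, and path prefixes are hypothetical.
from urllib.parse import urlparse

# Explicit per-agent grants: (host, path prefix, allowed HTTP methods).
GRANTS = {
    "billing-agent": [("api.payments.example.com", "/invoices", {"GET", "POST"})],
    "support-agent": [("api.crm.example.com", "/tickets", {"GET"})],
}

def authorize(agent: str, method: str, url: str) -> bool:
    """Allow a call only if it matches an explicit grant for this agent."""
    parts = urlparse(url)
    for host, prefix, methods in GRANTS.get(agent, []):
        if parts.hostname == host and parts.path.startswith(prefix) and method in methods:
            return True
    return False  # no matching grant: deny

assert authorize("support-agent", "GET", "https://api.crm.example.com/tickets/42")
assert not authorize("support-agent", "DELETE", "https://api.crm.example.com/tickets/42")
```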

“Model-as-Malware” is rising. Open-source LLMs and fine-tuned weights are slipping into the supply chain like npm packages. Expect poisoned weights, rigged adapters, and backdoored models trusted by teams because they are disguised as “productivity tools.” For attackers, models are ideal payloads: opaque, bulky, hard to inspect, and everyone’s rushing. This isn’t hypothetical. We already see malicious LLM components in the supply chain.
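
A basic defense is to treat weights like any other supply-chain artifact: record a digest when a model is first vetted and refuse to load anything that drifts from it. A minimal sketch (the file name and pinned digest below are placeholders):

```python
# Minimal sketch: pin SHA-256 digests for vetted model artifacts and
# refuse to load anything unpinned or modified. Names and digests are
# placeholders, not real artifacts.
import hashlib
from pathlib import Path

PINNED_DIGESTS = {
    # artifact name -> digest recorded when the model was vetted
    "summarizer-7b.safetensors": "0" * 64,  # placeholder digest
}

def sha256_file(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path) -> None:
    """Raise unless the artifact matches its pinned, vetted digest."""
    expected = PINNED_DIGESTS.get(path.name)
    if expected is None or sha256_file(path) != expected:
        raise RuntimeError(f"refusing to load unvetted model artifact: {path.name}")

# verify_model(Path("models/summarizer-7b.safetensors"))  # call before loading weights
```

Digest pinning only proves the file is the one you vetted; it says nothing about whether the vetted weights themselves are clean, which is where model scanning and provenance come in.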


“Founders will realize they need a solution that figures out security for them the same way Claude and Lovable figured out app building for them. That awareness will expose a market gap, and a major opportunity.”

Frank Emery

Director of Product Management

More and more people who have no idea what it takes to bring an application to production will start adopting AI-based development and application-building solutions. “Tar pit ideas” turned into working products will be wildly insecure.

We’ll see more “great ideas” rushed to market, only to be attacked soon after. As this pattern continues, founders will realize they need a solution that figures out security for them the same way Claude and Lovable figured out app building for them. That awareness will expose a market gap, and a major opportunity.

The way people use LLMs will fragment even more.


“It will become clearer that LLMs cannot develop complex, maintainable solutions on their own (i.e. without very strong technical guidance). Many companies and individuals will find that out the hard way.”

Simon Bennetts

Software Engineering Expert

There will be a significant LLM backlash from people who don’t really understand how to use them best, but who believed the hype.

It will become clearer that LLMs cannot develop complex, maintainable solutions on their own (i.e. without very strong technical guidance). Many companies and individuals will find that out the hard way.

Lots of “AI” startups that raise funding and just build wrappers around LLMs will fail.

People who work out how to integrate LLMs into their workflows and can reliably evaluate their output (and get them to rework it) will get great benefit from them and will keep cheerleading.

There’s significant potential for a solution which automates a set of security tests against LLM-generated code (SAST, DAST, SCA, and others) and then makes it easy to feed that data back into the LLM that the customer used to generate it, as an “auto-correction” loop.
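
A minimal sketch of such a loop, using Bandit (an open-source Python SAST tool) as the stand-in scanner and a placeholder `run_llm` function for whichever model generated the code:

```python
# Minimal sketch of an "auto-correction" loop: scan LLM-generated code,
# feed findings back to the model, repeat. `run_llm` is a placeholder
# for whatever LLM the team actually uses.
import json
import subprocess
import tempfile
from pathlib import Path

def run_llm(prompt: str) -> str:
    """Placeholder: call the code-generating LLM of your choice."""
    raise NotImplementedError

def scan(code: str) -> list[str]:
    """Run Bandit over the generated code and summarize its findings."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
    out = subprocess.run(["bandit", "-f", "json", f.name], capture_output=True, text=True)
    Path(f.name).unlink()  # clean up the temp file
    results = json.loads(out.stdout).get("results", [])
    return [f"{r['test_id']}: {r['issue_text']} (line {r['line_number']})" for r in results]

def auto_correct(prompt: str, max_rounds: int = 3) -> str:
    code = run_llm(prompt)
    for _ in range(max_rounds):
        findings = scan(code)
        if not findings:
            return code
        code = run_llm(prompt + "\nRework the code to fix these security findings:\n"
                       + "\n".join(findings))
    return code  # findings remain after max_rounds: flag for human review
```

A production version would add DAST and SCA stages and some guard against the model oscillating between two insecure variants.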
 


“Our solution must be able to differentiate between code written by developers and code written by AI, providing the best insights and solutions.”

Yossi Rifold

Director of Product Management


AI will be integrated into various parts of the SDLC, which means we must identify the relevant personas in the right phase of the process (for example, developers in the IDE and pull requests, and AppSec teams in their day-to-day activities).
 
The more developers depend on AI, the more security risks they’ll introduce. Tools like Checkmarx Codebashing and other platforms will become essential to teach secure coding and safer AI use.

Our solution must be able to differentiate between code written by developers and code written by AI, providing the best insights and solutions.
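
One lightweight way to enable that differentiation is to record provenance when the code is committed, for example as a git commit trailer, and filter on it later. A sketch, assuming a hypothetical `AI-Assisted` trailer convention (not a standard):

```python
# Minimal sketch: find commits tagged with a hypothetical "AI-Assisted"
# git trailer, so machine-written changes can be prioritized for review.
import subprocess

def ai_assisted_commits(repo: str) -> list[str]:
    """Return short hashes of commits carrying an AI-Assisted trailer."""
    out = subprocess.run(
        ["git", "-C", repo, "log",
         "--format=%h|%(trailers:key=AI-Assisted,valueonly)"],
        capture_output=True, text=True, check=True,
    ).stdout
    hashes = []
    for line in out.splitlines():
        sha, sep, trailer = line.partition("|")
        if sep and trailer.strip():  # trailer present and non-empty
            hashes.append(sha)
    return hashes

# Tagging side: commit assistant-produced changes with e.g.
#   git commit -m "Add retry logic" -m "AI-Assisted: copilot"
```

Trailer-based tagging is only as reliable as the tooling that writes it; deeper approaches might infer provenance from IDE telemetry or stylistic signals instead.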


“AppSec teams will give code agents a shot to help secure all AI-generated code, but those tools won’t live up to the hype. Instead, we’ll see a shift toward a hybrid approach: stronger versions of existing controls combined with new agent-based systems that wrap around applications in clever ways.”

Darren Meyer

Security Research Advocate

Driven largely (though not entirely) by AI, AppSec tooling will shift noticeably to the right in 2026, focusing more on what happens later in the SDLC: runtime and production.

As AI starts generating code at unprecedented speed, AppSec teams will be overwhelmed trying to fix vulnerable code and will move toward a combination of tools to prevent security flaws and weak code from resulting in breaches.

AppSec teams will give code agents a shot to help secure all the AI-generated code flying around, but those tools won’t quite live up to the hype. So instead, we’ll see a shift toward a hybrid approach: stronger versions of existing controls like EDR/XDR, CNAPP, and runtime SCA, combined with new agent-based systems that wrap around applications in clever ways. These newer solutions will be marketed hard (and, we believe, successfully) as modern alternatives to traditional AppSec tooling.


“AI-coding assistants have changed the game. The new challenge isn’t scanning—it’s trusting machine-generated code and ensuring continuous validation.”

Scott Walston

VP, Americas

Stop building scanners. Start building security intelligence systems that learn, predict, and prevent. Future AI-driven platforms will sit inside developer workflows, validating AI-generated code, protecting APIs, and delivering contextual, code-to-cloud visibility. Leaders will integrate automation, transparency, and adaptive protection that continuously learns and self-tunes.

AI-coding assistants have changed the game. The new challenge isn’t scanning—it’s trusting machine-generated code and ensuring continuous validation. Security must evolve from reactive testing to adaptive engineering that connects code and runtime intelligence.

As regulations tighten and the AI supply chain expands, success will belong to those who prove security, not just test it—embedding compliance, provenance, and assurance into every build. By 2026, AppSec will mean governing AI-driven software with predictive, autonomous systems that protect at the speed of code.


Source: https://checkmarx.com/blog/checkmarx-influencer-2026-predictions/