Tomorrow with Checkmarx Product Questions Roundup: SAST powered by Security Intelligence and More
Published: 2026-05-11 | Author: checkmarx.com

On April 15 we hosted a webinar to discuss the Mythos developments and their impact on our AI strategy and next steps. Customers asked thoughtful questions about the new SAST powered by Security Intelligence, the rollout of new Agentic experiences, SCA, and more.

We took those questions back to the Product team; their answers follow.

Why AI SAST Exists

Q: My developers say they haven’t seen insecure code generated by AI. Do you have examples?

A: We don’t have LLM-specific examples to share, but the broader picture is worth paying attention to. Checkmarx’s 2026 Future of AppSec research found that 81% of organizations knowingly ship vulnerable code, AI now writes the majority of application code in many environments, and 98% reported a breach in the past year. In addition, because AI generates the code, developers are less familiar with it than with their own handiwork, which makes review more difficult.

In addition, published reports show that less than 1% of the vulnerabilities found by Anthropic’s Mythos have been patched, so the attack surface exposed by AI-generated code continues to grow. The combination of accelerating AI code generation and stalled remediation rates suggests the gap between discovery and fix is widening.

How AI SAST Works Under the Hood

Q: What AI models will be used for the new SAST powered by Security Intelligence feature?

A: AI SAST is built on a hybrid architecture that pairs LLM-powered probabilistic analysis with our deterministic, query-based SAST engine. We currently use Codex via Azure OpenAI as our AI proxy.

Q: For your internal testing on AI scan consistency, was that done with simple prompts against bare models, or with custom instructions and agents?

A: Consistency in SAST Powered by Security Intelligence results doesn’t rely on the LLM. When a scan runs against a mixed codebase (e.g., Java and Solidity), a routing layer sends each language to the right engine. Java goes to traditional, query-based SAST. Solidity, which traditional SAST doesn’t cover, gets routed to the LLM-powered analysis layer. Both engines run independently, and their results merge into a single output using our Findings Analysis Engine.
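The routing-and-merge flow described above can be sketched as follows. This is a minimal illustration, not Checkmarx code: the function names, the assumed set of query-engine languages, and the findings structure are all hypothetical.

```python
# Hypothetical sketch of per-language routing between two independent engines.
# The covered-language set and function names are illustrative assumptions.

QUERY_ENGINE_LANGUAGES = {"java", "csharp", "python", "javascript"}  # assumed coverage

def route_scan(files_by_language):
    """Split a mixed codebase between the query-based engine and the LLM layer."""
    query_batch, llm_batch = {}, {}
    for language, files in files_by_language.items():
        if language in QUERY_ENGINE_LANGUAGES:
            query_batch[language] = files   # deterministic, query-based SAST
        else:
            llm_batch[language] = files     # LLM-powered analysis layer
    return query_batch, llm_batch

def merge_findings(query_findings, llm_findings):
    """Combine results from both engines into one ordered output."""
    return sorted(query_findings + llm_findings, key=lambda f: f["file"])
```

Each batch is scanned independently, so neither engine's output depends on the other; only the final merge brings them together.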

To guarantee consistency, results from the first AI SAST scan are cached and used as the reference point for subsequent scans. Determinism is enforced at the system level, not left to the model, so what you found yesterday is still what you’ll find today.
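The caching mechanism can be sketched like this; it is an assumption-laden illustration of system-level determinism, not the actual implementation. The cache key scheme and class names are invented for the example.

```python
# Hypothetical sketch: enforce determinism at the system level by caching
# first-scan results and reusing them as the reference for later scans.
import hashlib

class ScanCache:
    """Return cached findings for code that has already been analyzed."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def _key(code: str) -> str:
        # Identical code yields an identical key.
        return hashlib.sha256(code.encode()).hexdigest()

    def scan(self, code: str, analyze) -> list:
        key = self._key(code)
        if key not in self._store:
            # First scan: run the (possibly probabilistic) analysis once.
            self._store[key] = analyze(code)
        # Subsequent scans: return the same reference results.
        return self._store[key]
```

Because unchanged code hits the cache, results don't drift between runs even if the underlying model's raw output varies.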

Q: How is the balance between deterministic and LLM opinions on a vulnerability configured? Do they inform each other?

A: They don’t communicate. Each language follows its own flow, scanned by either the query-based engine or the LLM-based engine, depending on coverage. Results are then combined into a single output.

Q: Does AI SAST support languages that traditional Checkmarx SAST doesn’t?

A: Yes. The LLM-powered layer covers any language that the query-based engine doesn’t — including newer, emerging, and even invented languages.

Q: How many runs, tokens, and how much time did the AI take to find the zero-day?

A: We’ve heard from contacts running their own LLM-based security analysis that scanning a full enterprise codebase against a specific CWE can consume one to two months’ worth of tokens, when using a frontier model like Claude Opus out of the box. That’s a big cost, and one of the reasons AI SAST is architected the way it is.

Sending every line of code to a frontier LLM isn’t economically viable at scale, and it’s not the most accurate path either. AI SAST reserves the LLM layer for languages and patterns where rules don’t yet exist, providing broader coverage at a lower cost, and adds the auditability and quantifiability that pure probabilistic scanning can’t provide.

Access and Early Access

Q: Do CxGov (public sector) customers have access?

A: CxGov customers can participate in AI SAST Early Access if they also have a non-CxGov tenant in Checkmarx One. Reach out to your Customer Success Manager to confirm eligibility and get set up.

Q: How do we get access to the Early Access program? Do we need our own API keys?

A: We’re rolling out Early Access in waves, and many customers are already on the list. Your Customer Success Manager can confirm your status and work with you; no separate API keys are required.

Q: Does the analysis engine apply to both SAST and SCA?

A: AI SAST isn’t available for on-prem deployments (i.e., on-prem SAST); that’s another reason to consider the move to Checkmarx One.

Pricing, Licensing, and Compliance

Q: Will customization be available based on specific compliance standards (e.g. PCI)?

A: Yes — with a phased rollout. During Early Access, AI SAST scans against the OWASP Top 10. At General Availability, it will support any compliance standard we offer out of the box, including PCI, HIPAA, and others.

One distinction worth flagging: custom queries remain part of the traditional query-based SAST engine, not AI SAST itself. That separation is intentional — the deterministic layer is what gives compliance findings their auditability and ground truth, while AI SAST extends coverage into AI-generated code and emerging languages where formal rules don’t yet exist.

Q: Will there be an additional cost for SAST Powered by Security Intelligence in Early Access? Is it included in existing Checkmarx One SAST subscriptions?

A: SAST Powered by Security Intelligence is included in Checkmarx One — there are no additional costs in the Early Access phase. If you’re already on Checkmarx One, your CSM can walk you through participation and what to expect at General Availability.

For customers not yet on Checkmarx One, this is a good moment to consider the move. AI SAST, agentic security capabilities, and AI supply chain visibility are all built directly into the re-architected platform.

Q: Is a separate license required for SAST powered by Security Intelligence?

A: No. AI SAST isn’t a separately licensed product — it’s part of Checkmarx One, enabled by setting rather than purchase. If you’re already a SAST customer on Checkmarx One, your team can be turned on for AI SAST without buying anything new.

One nuance: Checkmarx One employs usage-based pricing tied to contributing developers. AI SAST extends coverage into parts of your codebase that traditional SAST didn’t reach, so developers contributing to those previously unscanned portions will now be counted. We’re not charging extra for AI SAST itself, but you may end up scanning more of your code than before, which is by design.

Q: Do we have to pay credits or tokens for the models behind the scenes?

A: No. Scanning with AI SAST doesn’t require customers to pay for or supply tokens or credits.

For AI-powered triage and remediation, tokens are consumed: those capabilities operate on a credit-based usage model, consistent with how other usage-based features work across Checkmarx One.

In addition, customers can’t bring their own models or substitute their own LLM providers behind these capabilities. The hybrid architecture is what helps our AI SAST and agentic features deliver consistent, secure, and auditable results. Reach out to your CSM to learn more about AI SAST, our expanding family of Checkmarx Agents, and what’s coming next on the Checkmarx platform.

Impact On Existing Workflows and Data

Q: Will the updated engines impact local code scans, inline suggestions, or triage history?

A: No. There’s no impact on existing scans, results, or triage history.

Q: Are you using customer code to train the LLM analysis engine?

A: No. Checkmarx doesn’t operate a proprietary model, and we don’t use customer data to train models. We rely on third-party models, and customer data is sent to those models for analysis only, not for training.

Platform Experience

Q: It feels like the focus is on the scanning engine more than the platform itself. UX and collaboration integrations need work.

A: Fair feedback, and we’re actively working on it. The best place to share specifics is our Checkmarx Ideas Portal — it goes directly to the product team.

Triage, Remediation, and SCA

Q: Regarding your Triage and Remediation products, how exactly will noise be removed? If something is not exploitable now, it could be exploitable tomorrow. How do you handle mitigation vs. remediation, especially for SCA findings with CVSS scores above 7?

A: Triage Assist is not a noise-reduction solution but a prioritization tool. It analyzes exploitability based on code analysis and the associated risk context, and conveys a recommendation through the triage state in Checkmarx One; it doesn’t attempt to mitigate the risk. Remediation, by contrast, zeroes in on the root cause and proposes a full fix for the issue.
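The prioritization logic described above can be sketched as a simple mapping from exploitability and risk context to a recommended triage state. The state names and the severity threshold below are illustrative assumptions, not actual product values.

```python
# Hypothetical sketch of exploitability-driven prioritization.
# Triage states and the 7.0 threshold are illustrative, not product values.

def recommend_triage_state(exploitable: bool, severity: float) -> str:
    """Map exploitability and severity context to a recommended triage state."""
    if exploitable and severity >= 7.0:
        return "urgent"            # reachable and high severity: fix first
    if exploitable:
        return "confirmed"         # reachable, lower severity
    # Deprioritized rather than dismissed: re-evaluated on every new scan,
    # since a finding that isn't exploitable today may become so tomorrow.
    return "not_exploitable"
```

Re-running the recommendation on each scan is what addresses the "not exploitable now, exploitable tomorrow" concern: the state is a prioritization signal, not a permanent dismissal.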

Q: With exploitation timelines now in minutes, why does it still take Checkmarx days or weeks for the SCA database to flag malicious packages? What is Checkmarx doing to improve CVE database response speed vs competitors like Snyk and Socket?

A: AVA (Automated Vulnerability Analyzer) is a new AI-enabled process that aims to provide instant analysis of new CVEs as they are collected, enabling customers to respond to threats faster. We plan to roll it out in Q2.


Source: https://checkmarx.com/sast/tomorrow-checkmarx-product-questions-sast-powered-security-intelligence/