Why your AppSec Tool Stack Is Failing in the Age of AI
人工智能正在改变软件开发方式和应用安全。AI生成代码加快了开发速度,但也引入了传统工具难以检测的逻辑漏洞和风险。同时,AI组件的使用扩大了攻击面,并带来了数据泄露等新威胁。为应对这些变化,应用安全团队需采用新工具和策略来保护现代应用程序,并重新设计安全流程以适应AI驱动的环境。 2025-7-10 16:24:21 Author: securityboulevard.com(查看原文) 阅读量:12 收藏

The world of software development is changing fast. AI isn’t just influencing software – it’s reshaping how software is written and the components it’s made of.

First, AI-generated code is accelerating development. Code is produced faster, in larger volumes, and often without the same level of review or accountability as human-written code. Second, teams are integrating AI models, agents, and external AI services into both new and legacy applications to deliver advanced capabilities.

This shift introduces elements that traditional application security programs weren’t designed to assess.

Techstrong Gang Youtube

AWS Hub

How AI-Driven Software is Reshaping Application Security

The way software is built today has direct consequences for application security. AI-generated code changes review processes and introduces logic flaws or vulnerabilities that traditional SAST tools may miss. The rapid pace of AI-assisted development strains AppSec workflows not designed for this speed.

Similarly, the inclusion of AI components expands the attack surface and introduces risks like prompt injection, data leakage, and model misuse – areas where conventional scanners provide little coverage. Together, these changes require AppSec teams to rethink how they secure modern applications.

How AppSec Programs and Tools Must Evolve in Response

AppSec teams need to evolve their programs and tools to meet AI-driven challenges:

1) Securing AI-generated code – Choose tools that integrate with AI-first IDEs, code assistants, and AI code generators – not just to detect issues after the fact, but to embed security directly into the generation flow. The most effective solutions help the AI produce secure code from the start, preventing vulnerabilities before they’re introduced. These tools should also detect logic flaws, insecure patterns, and vulnerabilities common in AI-generated code – including subtle issues that may pass human review. They should provide contextual analysis of how AI-generated code interacts with existing systems, support automated code review for high-volume commits, and integrate into CI/CD to prevent flawed code from advancing in the pipeline.

2) Securing AI components – AppSec platforms must provide visibility across the full stack, including integrated AI components. This includes the ability to identify and assess open-source models for known vulnerabilities, license issues, and compliance risks – much like traditional open-source software scanning. Static and dynamic tools should also identify additional AI components, such as usage of RAGs, Agents and MCPs, monitor model API dependencies, and apply dynamic testing – such as AI red teaming and behavioral fuzzing – to assess real-world risk of AI-driven conversational applications. Platforms should support policy enforcement around external AI services and provide mechanisms for tracking prompt injection resilience and data leakage exposure.

3) Leveraging AI in AppSec workflows – Look for tools that apply AI to improve AppSec workflows meaningfully. This includes using AI for prioritizing vulnerabilities based on exploitability, generating tailored remediation guidance, auto-generating security test cases, and improving visibility across ASPM solutions. To achieve deep, native integration with AI code generators and assistants, these tools must be AI-native themselves – designed from the ground up to understand and operate in AI-driven environments. The best vendors are not just adding AI labels but are embedding AI to address real practitioner pain points and improve team efficiency.

What’s in the Market?

Several AppSec vendors are evolving in this direction, offering varying levels of maturity and focus in their AI security solutions. 

Mend.io, for example, has launched an AI-native AppSec platform that combines capabilities for securing AI-generated code and AI components, including integrated AI red teaming, as part of their application security. Snyk has also recently introduced tools focused primarily on AI-generated code, helping teams identify security flaws when generating code with AI-assisted code tools. Endor Labs, a smaller and emerging player, offers more limited functionality addressing both AI-generated code and AI component security, with features still evolving as the company grows. Semgrep has also recently introduced agentic tools to further support remediation workflows. Practitioners should assess these offerings carefully to understand where each fits in their security program and risk landscape.

How to Future-Proof your AppSec Stack

Every AppSec professional should re-evaluate their program and tooling to ensure they’re prepared for an AI-driven future. This means piloting tools against AI-influenced features in both new development and legacy systems, reviewing how well their processes handle AI-generated code and AI components, and benchmarking solutions in real-world scenarios. It also involves staying engaged with AI security communities, contributing to industry standards efforts, tracking evolving threat models, and demanding transparency, measurable results, and ongoing innovation from vendors.

The Path Forward

AI is reshaping software in ways that demand new approaches to AppSec tooling and mindset. The shift affects everything from development speed to attack surfaces, and it calls for fresh strategies, new skill sets, and tools that can assess not just traditional code but also AI-generated logic and model behavior. Teams that adapt now – by rethinking risk models, piloting AI-aware solutions, and integrating AI into their security workflows – will be ready to protect tomorrow’s applications and stay ahead of evolving threats.


文章来源: https://securityboulevard.com/2025/07/why-your-appsec-tool-stack-is-failing-in-the-age-of-ai/?utm_source=rss&utm_medium=rss&utm_campaign=why-your-appsec-tool-stack-is-failing-in-the-age-of-ai
如有侵权请联系:admin#unsafe.sh