Are There IDORs Lurking in Your Code? LLMs Are Finding Critical Business Logic Vulns—and They’re Everywhere

Security teams have always known that insecure direct object references (IDORs) and broken authorization vulnerabilities exist in their codebases. Ask any AppSec leader if they have IDOR issues, and most would readily admit they do. But here’s the uncomfortable truth: they’ve been dramatically underestimating the scope of the problem.

Recent bug bounty data tells a stark story. Roughly half of all high and critical severity findings now involve broken access control vulnerabilities – IDORs, authorization bypasses, and similar business logic flaws. These aren’t theoretical concerns. Each IDOR reported through a bug bounty program typically signals several more lurking undiscovered in the same codebase. Security teams know they’re there, but finding them has always been time-intensive, manual work that gets deprioritized against other pressing demands.

Now, large language models (LLMs) are changing that equation – and revealing just how pervasive these vulnerabilities actually are.

Why Traditional Tools Miss Business Logic Flaws

Traditional static analysis tools excel at finding certain classes of vulnerabilities. They’re effective at catching SQL injection, cross-site scripting, and other issues that follow predictable patterns of data flow. These tools work by tracing how user input moves through code – mechanically following the path from source to sink.
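
For contrast, here is the kind of flaw those pattern-based tools are built for, in a minimal hypothetical sketch: tainted input flows straight from source to sink, and the fix is just as mechanical.

```python
import sqlite3

def search(term: str, db: sqlite3.Connection):
    # Source: user-controlled input interpolated straight into the query string.
    query = f"SELECT * FROM items WHERE name = '{term}'"
    # Sink: executing the tainted string -- a textbook SQL injection.
    return db.execute(query).fetchall()

def search_safe(term: str, db: sqlite3.Connection):
    # The fix is pattern-level and mechanical: parameterize the query.
    return db.execute("SELECT * FROM items WHERE name = ?", (term,)).fetchall()
```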

IDORs and authorization flaws are fundamentally different. They’re not about contaminated data flowing to dangerous functions. Rather, they’re about missing context and misunderstood intent. Consider a typical IDOR scenario: an API endpoint accepts a user ID parameter and returns that user’s profile data. The code fetches the data correctly. It returns it properly formatted. From a structural standpoint, everything looks fine. The vulnerability exists not in what the code does, but in what it doesn’t do. It fails to verify that the requesting user has permission to access that particular profile.
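
A minimal sketch of that scenario, assuming a Flask-style route (load_profile and the current-user lookup are hypothetical stand-ins):

```python
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

def load_profile(user_id: int) -> dict:
    # Hypothetical data-access helper standing in for the real lookup.
    return {"id": user_id, "email": f"user{user_id}@example.com"}

@app.get("/api/users/<int:user_id>/profile")
def get_profile(user_id: int):
    # Fetches and formats the data correctly; structurally nothing is wrong.
    # The IDOR is what's absent: no check that the requester owns this profile.
    return jsonify(load_profile(user_id))

@app.get("/api/users/<int:user_id>/profile-fixed")
def get_profile_fixed(user_id: int):
    # The fix is a single semantic check. (g.current_user is assumed to be
    # populated by authentication middleware elsewhere in the app.)
    if g.current_user.id != user_id:
        abort(403)
    return jsonify(load_profile(user_id))
```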

Traditional static analyzers struggle in this scenario because the vulnerability is semantic, not structural. If the data returned were intended to be public, such as a list of published articles, authorization might be unnecessary. Distinguishing between these requires understanding what the developer intended, what the business rules should be, and what security controls are missing. That’s exactly where LLMs are useful.

Understanding Context and Intent

LLMs read code differently than rule-based analyzers. They understand variable names, function purposes, code comments, and broader application context. When an LLM sees a function called “getUserInvoice(invoiceId)” that returns sensitive financial data based solely on an ID parameter, it can reason that the function needs an authorization check before handing that data back.

This contextual understanding extends beyond individual functions. LLMs can assess whether the data being returned is sensitive, whether the endpoint appears to be public or private, and whether appropriate safeguards exist elsewhere in the call chain. They can infer developer intent and compare it against what the code actually implements.
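
As an illustration, a review step along these lines might look like the following sketch. The prompt is invented for this example, and the OpenAI Python client stands in for whatever chat-completion API a team actually uses:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REVIEW_PROMPT = """You are reviewing a single API endpoint for authorization flaws.
Consider: Is the returned data sensitive? Does the endpoint appear public or
private? Is there an ownership or permission check anywhere in the call chain?
Reply with VULNERABLE or SAFE on the first line, then a one-line justification.

{code}
"""

def review_endpoint(source_code: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": REVIEW_PROMPT.format(code=source_code)}],
    )
    return response.choices[0].message.content
```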

Security teams that have begun incorporating AI-powered analysis into their scanning workflows report finding previously unknown authorization vulnerabilities across their codebases, often multiple instances of similar flaws that had gone undetected for extended periods. For many teams, this represents their first comprehensive view of how extensively these business logic vulnerabilities permeate their applications, revealing a problem far larger than what periodic penetration tests or bug bounty programs had suggested.

The Limitations of Pure LLM Approaches

Before we get carried away with their capabilities, note that LLMs still have significant limitations that make them unsuitable as standalone security tools.

  • First, they’re not deterministic. Run the same LLM against the same code twice, and you’ll likely get different results. Independent security researchers have documented this extensively. In a study led by Sean Heelan, an LLM found a critical kernel vulnerability in only 8 of 100 runs against the same benchmark. The other 92 runs missed it entirely, and many produced false positives. A short sketch after this list shows what such repeated runs look like.
  • Second, LLMs are expensive at scale. Running comprehensive LLM analysis across a large codebase costs 2-3 orders of magnitude more than traditional static analysis. For organizations scanning millions of lines of code regularly, pure LLM approaches become economically impractical.
  • Third, LLMs perform poorly on the vulnerability classes where traditional SAST excels. When tested on SQL injection detection, LLM-based approaches showed false positive rates between 95% and 100%. They struggle with complex data flow tracing across many files and miss sanitization performed in framework layers they don’t fully understand.
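
To make the first limitation concrete, tallying verdicts across identical runs exposes the instability. This sketch reuses the hypothetical review_endpoint from earlier:

```python
from collections import Counter

def repeated_review(source_code: str, runs: int = 10) -> Counter:
    # Tally the first-line verdict across identical runs; with a
    # non-deterministic model the counts are rarely unanimous.
    verdicts = Counter()
    for _ in range(runs):
        first_line = review_endpoint(source_code).splitlines()[0].strip()
        verdicts[first_line.split()[0]] += 1
    return verdicts  # e.g. Counter({'SAFE': 7, 'VULNERABLE': 3})
```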

This isn’t a failure of LLMs. It’s simply the wrong tool for that job. LLMs excel at semantic reasoning about business logic, not mechanical tracing of data flows through complex application layers.

The Case for Hybrid Detection

The answer isn’t choosing between traditional static analysis and LLMs. It’s combining both approaches strategically.

Static analysis does what it does best: comprehensive, fast, deterministic scanning. It can enumerate every API endpoint in an application, trace every user input parameter, and identify every database query reliably and repeatedly.

LLMs then apply contextual reasoning to those outputs. Given a list of 500 API endpoints that accept user-controlled identifiers, an LLM can systematically evaluate whether each endpoint implements appropriate authorization checks. It can distinguish between intentionally public data and sensitive information that requires protection. It can assess whether the authorization logic makes sense given the apparent business context.
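
A minimal sketch of that orchestration: enumerate_endpoints stands in for the deterministic SAST stage (its record shape is invented here), and review_endpoint is the earlier LLM sketch:

```python
def enumerate_endpoints(repo_path: str) -> list[dict]:
    # Placeholder for the deterministic SAST stage; a real implementation
    # would parse routes and trace parameters. The record shape is illustrative.
    return [
        {
            "route": "/api/users/<int:user_id>/profile",
            "takes_user_controlled_id": True,
            "source": "...endpoint source code...",
        }
    ]

def hybrid_scan(repo_path: str) -> list[dict]:
    findings = []
    # Stage 1: fast, deterministic enumeration of candidate endpoints.
    for endpoint in enumerate_endpoints(repo_path):
        if not endpoint["takes_user_controlled_id"]:
            continue
        # Stage 2: contextual LLM reasoning over each remaining candidate.
        verdict = review_endpoint(endpoint["source"])
        if verdict.startswith("VULNERABLE"):
            findings.append({"endpoint": endpoint["route"], "detail": verdict})
    return findings
```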

This hybrid approach delivers something neither technique achieves alone: comprehensive coverage of both traditional vulnerabilities and business logic flaws, with practical false positive rates that security teams can actually manage.

The Attacker Advantage

Here’s what should keep security leaders awake at night: attackers also have access to LLMs. While defenders build out security programs and experiment with new strategies for detecting logic vulnerabilities, attackers are gearing up to scan for and exploit them with the same LLMs.

This creates an urgent asymmetry. Offensive use of AI is fast, widely scalable, and easily replicated. A single attacker with access to commercial LLMs can scan for IDORs across numerous endpoints, automating what previously required manual expertise. Defensive security, by contrast, requires careful integration into existing development workflows, prioritization systems, and remediation processes.

Organizations that dismiss this as hype or defer investment until “later” are making a dangerous bet. The window to get ahead of AI-enabled attacks is narrowing.

A Practical Roadmap

For security teams already stretched thin, the right starting point depends on organizational maturity. If you’re just establishing an application security program, focus on building the fundamentals. Deploy scanning tools that catch both traditional vulnerabilities and business logic flaws. Start with critical, high-impact issues and build the habit of regular remediation.

For security-mature organizations drowning in alert volume, the priorities are different. You need detection systems that genuinely prioritize and reduce noise. The most advanced teams are moving beyond basic vulnerability scanners toward platforms that understand their specific business context and adapt to their unique applications.

The economic reality is straightforward: security teams need automated detection for business logic vulnerabilities. The alternative (i.e. manually finding and fixing IDORs through pen tests and bug bounties) doesn’t scale. By the time external researchers find these issues, they’ve likely already been exposed for months or years.

Over the next several years, I expect the relationship between traditional SAST, LLM-based detection, and human security expertise to evolve significantly. Humans will remain in control but will progressively move out of the tactical weeds. AI will increasingly handle tasks that previously required human security engineers: triaging findings, applying business context, designing remediations, and so on. But AI will not replace the deterministic, reliable static analysis engines that form the foundation of modern application security. Agents are assisting with, and increasingly taking over, simple human tasks; they remain too unreliable and too expensive to replace the fast, deterministic code analysis that humans handed over to computers long ago.

The future belongs to platforms that thoughtfully blend both: powerful deterministic engines for comprehensive coverage and structural analysis, orchestrated by increasingly sophisticated AI that understands context, personalizes findings, and adapts to each organization’s unique environment.

The IDORs are already in your code. The only question is whether you’ll find them before someone else does.

