API security has never been more crucial. Vulnerabilities are growing in volume and severity. AI integrations are a burgeoning attack vector. Increasing GraphQL adoption presents hidden dangers. To protect your organization, you must secure your APIs.
Keep reading for our key takeaways from the Wallarm Q2 2025 API ThreatStats report – and find out what you need to do to protect yourself.
70% of organizations now use GraphQL. And yet, there were no GraphQL-specific breaches reported in Q2 2025. If that sounds suspicious, it’s because it is.
GraphQL slashes payload sizes by up to 99% and offers clients powerful, flexible control over data. However, that same flexibility opens the door to excessive data exposure, denial of service from nested queries, and resolver-level authorization bypasses.
What’s more, considering that its single dynamic endpoint obscures visibility for traditional security controls, it’s safe to assume that attackers are already exploiting introspection, deep nesting, and injection flaws in poorly secured GraphQL deployments.
So why were there no GraphQL breaches in Q2 2025?
It’s not because GraphQL is safe; it’s likely because organizations are failing to accurately detect and attribute breaches. Traditional API security tools often fail to support GraphQL, so organizations should treat it as a distinct class of API architecture that demands specialized protections: disabling introspection in production, limiting query depth and complexity, and enforcing authorization at the resolver level.
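To make that concrete, here is a minimal sketch of what those protections can look like, assuming (purely for illustration) an Apollo Server 4 deployment and the third-party graphql-depth-limit package; tune the depth limit to your own schema, and enforce authorization inside each resolver as well.

```typescript
import { ApolloServer } from '@apollo/server';
import { startStandaloneServer } from '@apollo/server/standalone';
import depthLimit from 'graphql-depth-limit';

// Toy schema standing in for your real graph.
const typeDefs = `#graphql
  type User { id: ID! name: String }
  type Query { me: User }
`;

const server = new ApolloServer({
  typeDefs,
  resolvers: { Query: { me: () => ({ id: '1', name: 'demo' }) } },
  // Keep introspection out of production so attackers cannot map the graph for free.
  introspection: process.env.NODE_ENV !== 'production',
  // Reject deeply nested queries before any resolver runs, blunting
  // nested-query denial of service.
  validationRules: [depthLimit(6)],
});

await startStandaloneServer(server, { listen: { port: 4000 } });
```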
API security is growing more important by the day. In the second quarter of 2025, we saw a 9.8% increase in API-related CVEs compared to Q1, with 639 vulnerabilities disclosed between April and June, up from 582 the previous quarter.
Of course, we can attribute much of this rise to the increasing volume of APIs in production, but the real story is that attackers are becoming more aware of APIs’ inherent weaknesses, particularly in the way they integrate with tools and AI systems.
AI-specific API vulnerabilities rocketed again from Q1 to Q2: we identified 34 CVEs tied directly to AI APIs this quarter, up from just 19 in Q1. It’s a drum we’ve been banging for a while now, but the key takeaway here is that AI security is API security.
But it’s not just that API vulnerabilities are becoming more common; they’re also becoming more severe.
And although the total number of API-related Known Exploited Vulnerabilities (KEVs) fell from Q1 to Q2 2025, their share of all confirmed in-the-wild exploits for the quarter rose from 20% to 22%. But how are attackers exploiting these vulnerabilities?
The most exploited API flaws in Q2 reveal a familiar pattern: weak access controls, token mishandling, and injection risks.
What, then, should organizations do to protect themselves?
In light of these findings, organizations should take the following steps to secure their APIs:
You can’t defend blind spots. Continuously uncover every API – internal, external, AI-driven, or shadow – and pair discovery with schema ownership and real-time usage monitoring.
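As a rough illustration of traffic-driven discovery (not a replacement for a dedicated discovery tool), the sketch below diffs the routes observed in gateway access logs against a documented inventory to surface shadow endpoints. The file names and log shape here are hypothetical.

```typescript
import { readFileSync } from 'node:fs';

interface AccessLogEntry { method: string; path: string; }

// Hypothetical inputs: a JSON array of documented "METHOD /path" routes and
// JSON-lines access logs exported from the API gateway.
const documented = new Set<string>(
  JSON.parse(readFileSync('api-inventory.json', 'utf8')),
);

const observed = readFileSync('gateway-access.jsonl', 'utf8')
  .split('\n')
  .filter(Boolean)
  .map((line) => JSON.parse(line) as AccessLogEntry)
  // Normalize numeric path segments so /users/42 matches /users/{id}.
  .map((e) => `${e.method} ${e.path.replace(/\/\d+(?=\/|$)/g, '/{id}')}`);

const shadow = [...new Set(observed)].filter((route) => !documented.has(route));
console.log('Routes seen in traffic but missing from the inventory:', shadow);
```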
Secure every stage of your AI pipeline, from ingestion to inference. Disable default endpoints, enforce fine-grained access controls, and keep orchestration pipelines under watch.
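As one illustration of that principle, the sketch below assumes an Express proxy placed in front of an internal model-inference service; the route, header, and environment variables are hypothetical placeholders for whatever your AI pipeline actually exposes.

```typescript
import express from 'express';

const app = express();
app.use(express.json());

// Every request must carry a valid, scoped service token; nothing is
// reachable anonymously by default.
app.use((req, res, next) => {
  if (req.header('x-service-token') !== process.env.INFERENCE_SERVICE_TOKEN) {
    return res.status(401).json({ error: 'unauthenticated' });
  }
  next();
});

// Expose only the single route the product needs; the upstream model
// server's default admin/debug endpoints never reach the public surface.
app.post('/v1/inference', async (req, res) => {
  const upstream = await fetch(process.env.INFERENCE_URL!, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ prompt: req.body.prompt }),
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(8080);
```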
Granular access policies aren’t optional. Many of Q2’s biggest breaches happened because exposed APIs lacked proper authentication or authorization.
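Here’s a sketch of what proper authentication and authorization look like on a single endpoint, with a hypothetical in-memory invoice store standing in for your data layer: the handler verifies the token, then checks that the caller actually owns the object it names.

```typescript
import express from 'express';
import { verify, type JwtPayload } from 'jsonwebtoken';

const app = express();

// Hypothetical in-memory store standing in for the real data layer.
const invoices: Record<string, { id: string; ownerId: string; total: number }> = {
  inv_1: { id: 'inv_1', ownerId: 'user_a', total: 120 },
};

app.get('/invoices/:id', (req, res) => {
  // Authentication: is the bearer token genuine?
  const token = req.header('authorization')?.replace('Bearer ', '') ?? '';
  let claims: JwtPayload;
  try {
    claims = verify(token, process.env.JWT_SECRET!) as JwtPayload;
  } catch {
    return res.status(401).json({ error: 'invalid token' });
  }

  // Authorization: does this caller own the object they asked for?
  // Returning 404 rather than 403 avoids confirming that the object exists.
  const invoice = invoices[req.params.id];
  if (!invoice || invoice.ownerId !== claims.sub) {
    return res.status(404).json({ error: 'not found' });
  }
  res.json(invoice);
});

app.listen(3000);
```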
Static checks like schema validation are no longer sufficient. You must simulate misuse through behavior-based testing to catch logic flaws, sequence abuse, and role escalation.
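For example, a behavior-based test can replay a perfectly schema-valid request with another tenant’s resource ID and assert that it fails. The sketch below assumes Vitest and Node’s built-in fetch; the endpoint, token, and IDs are hypothetical.

```typescript
import { describe, expect, it } from 'vitest';

const BASE = process.env.API_UNDER_TEST ?? 'http://localhost:3000';

describe('object-level authorization under misuse', () => {
  it("refuses to serve another tenant's invoice", async () => {
    const tokenForTenantA = process.env.TENANT_A_TOKEN!; // issued out of band
    const invoiceOwnedByTenantB = 'inv_1';

    const res = await fetch(`${BASE}/invoices/${invoiceOwnedByTenantB}`, {
      headers: { authorization: `Bearer ${tokenForTenantA}` },
    });

    // A schema-valid request can still be an abuse case: anything other
    // than 403/404 here is a logic flaw that static validation would miss.
    expect([403, 404]).toContain(res.status);
  });
});
```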
Security doesn’t work if it’s bolted on. Integrate testing early in development and pair it with real-time runtime protection to safeguard APIs from code to production.