Beyond the Crystal Ball: What API security may look like in 2024
2024-01-11 08:00 · Source: securityboulevard.com

API security isn’t a dark art.

But no soothsayer out there can predict what the threat landscape may look like in 2024.

At least, not without looking at how we’ve been trending.

When I was asked to participate in the API Futures project to help identify the most significant opportunities and challenges facing the API community in 2024, it got me thinking. What would 2024 look like from a security perspective to the API community?

Let’s explore that with our mind’s eye looking back at what’s happened in the last few years.

Automated attacks against APIs will continue.

According to Imperva's latest Bad Bot report, more than 30% of all internet traffic comes from bad bots. More importantly, much of that traffic is being used to attack APIs.

Automated business logic attacks are on the rise. And that makes sense.

Consider Cloudflare’s own reporting on the landscape of API traffic. It shows over 50% of all traffic processed by Cloudflare is API-based, growing twice as fast as traditional web traffic.

Akamai's research shows that 83% of all traffic on the web today is API calls.

It is well-known that attackers frequently target publicly accessible APIs due to their ease of discovery. However, there exists another aspect of the attack surface that is currently receiving minimal attention.

And that’s through mobile access.

Threat actors use bots to automate their manual attack processes against the APIs being exposed to mobile apps and devices.

Worse yet, these mobile apps tend to have weaker appsec.

Don’t believe me? Check out my article on all the hardcoded API keys and cloud creds found in mobile apps.

Developers are still not properly safeguarding their secrets. Let’s look into that in more detail.

“Secrets Sprawl” will still be a thing.

GitGuardian’s latest annual State of Secrets Sprawl report continues to show that hard-coded secrets significantly threaten the security of people, enterprises, and even countries worldwide.

Developers continue to place API keys and credentials within their code. That gets checked into source control and published to apps and infrastructure. These then get stolen and abused by threat actors and ultimately lead to resource theft and data breaches.
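To make the anti-pattern concrete, here's a minimal, hypothetical sketch of the bad habit and its fix. The key value and the `PAYMENTS_API_KEY` variable name are invented for illustration; in practice the value would be injected by a secrets manager or the CI/CD pipeline.

```python
import os

# Anti-pattern: a credential hard-coded in source. Once committed, it lives
# in the repository history even if a later commit deletes it.
HARDCODED_API_KEY = "sk_live_EXAMPLE_DO_NOT_SHIP"  # hypothetical key value


def get_api_key() -> str:
    """Read the key from the environment at runtime instead of the source tree."""
    key = os.environ.get("PAYMENTS_API_KEY")
    if not key:
        # Fail closed rather than falling back to a baked-in credential.
        raise RuntimeError("PAYMENTS_API_KEY is not set")
    return key
```

The point of failing closed is that a missing secret surfaces immediately at deploy time, instead of silently shipping a hard-coded fallback.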

“Use of stolen or compromised credentials remains the most common cause of a data breach. Stolen or compromised credentials were the primary attack vector in 19% of breaches in the 2022 study and also the top attack vector in the 2021 study, having caused 20% of breaches. Breaches caused by stolen or compromised credentials had an average cost of USD 4.50 million.”

IBM’s Cost of a Data Breach Report

AuthN and AuthZ issues will still exist

Authentication and authorization issues will still exist in APIs throughout 2024.

Broken Authentication and Broken Object Level Authorization are still the top two vulnerabilities listed on the OWASP API Security Top Ten.

And that won’t be going away any time soon.

We will continue to see OAuth 2.0 authentication vulnerabilities in APIs, usually through poor implementation.

But I won’t be surprised if we continue to find research attacking the OAuth implementations themselves, like what Salt Security recently found that affects hundreds of apps.

Or the research Microsoft led that found threat actors misusing OAuth applications to automate financially driven attacks. This attack vector, abusing the trust an OAuth-driven app carries and then hiding malicious activity behind it, will turn more APIs into footholds into apps and infrastructure.
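What does "poor implementation" look like in practice? Here's one classic pitfall, sketched hypothetically (it is not the specific flaw Salt Security found): validating the OAuth `redirect_uri` with a prefix check instead of an exact match against a registered allow-list.

```python
# Hypothetical allow-list; a real server stores this per registered client.
ALLOWED_REDIRECTS = {"https://app.example.com/callback"}


def redirect_uri_is_valid_naive(uri: str) -> bool:
    # Buggy: a prefix check accepts https://app.example.com.evil.tld/callback,
    # letting an attacker receive the authorization code.
    return uri.startswith("https://app.example.com")


def redirect_uri_is_valid(uri: str) -> bool:
    # Safer: exact string match against the registered redirect URIs.
    return uri in ALLOWED_REDIRECTS
```

Exact-match validation of redirect URIs is what the OAuth security best-practice guidance recommends, precisely because substring and prefix checks keep getting bypassed.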

Generative AI will be leveraged to attack APIs

The explosion of generative AI is proving to be helpful almost everywhere. We saw companies like Postman leveraging AI to improve their scaffolding of API test automation through custom LLMs. I wrote about using AI for API Security Testing before.

But I see some potential dark times ahead. I've said before that I don't think offensive AI will be a problem for us as API hackers. But I think I missed something in how I looked at it.

Generative AI models like OpenAI's GPT-4 are starting to be weaponized for offensive security through their own API.

Real-world Example: AI for API Security Testing

I’ve personally been involved in a project in which we leveraged LLMs to produce a proof of concept (PoC) recon agent. By simply providing a contextual set of prompts with the IP of a target API server, it could instruct us on the commands to run to conduct recon and report on the next steps we should take in security testing.

But here’s where it went sideways. The way we did it was completely dynamic. We didn’t define the methodology it used. We simply told the LLM where to look and what tools we had at our disposal. It told us how to do endpoint enumeration (including recommending the wordlist to use) and what to check for during header and parameter evaluation. Had we built an agent to take those instructions and run them verbatim, we literally would have had AI conducting recon against the target API.
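Our PoC isn't public, but the prompt side of such an agent is simple enough to sketch. This is a purely illustrative reconstruction: no model is called, the wording is invented, and the tool names are placeholders.

```python
def build_recon_prompt(target_ip, tools):
    """Assemble a chat-style prompt asking an LLM to plan recon on an API host.

    Returns the messages list an OpenAI-style chat API would accept.
    Illustrative only: the actual PoC's prompts differed.
    """
    system = (
        "You are assisting an authorized API security assessment. "
        "Given a target and the available tools, output the exact shell "
        "commands to run for endpoint enumeration, then recommend the "
        "next security testing steps."
    )
    user = "Target API server: {}\nAvailable tools: {}".format(
        target_ip, ", ".join(tools)
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```

The scary part isn't this scaffolding; it's the last mile, where an agent takes the model's returned commands and executes them verbatim in a loop.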

Think it’s a ridiculous assumption? Justin Hutchens already published something similar in his book The Language of Deception: Weaponizing Next Generation AI. He interfaced Kali Linux with ChatGPT to prove that next-generation malware could execute Command and Control (C2) operations without the need for human intervention.

Considering that bots make up a good portion of bad traffic, how far-fetched is it to start seeing generative AI being abused this way to attack APIs?

Security tools still won’t catch critical vulnerabilities

API security vendors are getting a lot better these days. Much of the basic testing for things like the OWASP API Security Top Ten can be validated directly in the dev pipeline through SAST. However, they still can’t catch the more complex business logic bugs that are much more critical to APIs.

You can't blame them. Scanning code at the function or endpoint level, they lack the context to understand the API as a whole.

It's one of the reasons vulnerabilities like API6:2023 Unrestricted Access to Sensitive Business Flows made the 2023 Top Ten list. Protecting sensitive business flows relies on understanding human vs. non-human patterns and detecting abuse as it happens. Device fingerprinting, event-timing tracking, and rate limiting all help, but real-time detection, evaluation, and response remain difficult for security tools.
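The event-timing signal mentioned above can be sketched in a few lines. This is a toy sliding-window detector with invented thresholds; real systems combine this signal with device fingerprinting and behavioral models rather than relying on it alone.

```python
import time
from collections import deque


class BurstDetector:
    """Flag clients whose request timing looks automated rather than human."""

    def __init__(self, max_requests=10, window_seconds=1.0):
        self.max_requests = max_requests  # illustrative threshold
        self.window = window_seconds
        self.timestamps = {}  # client_id -> deque of request times

    def record(self, client_id, now=None):
        """Record one request; return True if the client looks automated."""
        if now is None:
            now = time.monotonic()
        q = self.timestamps.setdefault(client_id, deque())
        q.append(now)
        # Drop requests that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

A scripted attack that fires requests milliseconds apart trips the window immediately, while normal human pacing never accumulates enough requests to trigger it.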

So, while it’s essential to include API security tooling in the development process, you can’t rely on it alone. Frequent (or even better, continuous) API pentesting is still an important aspect of a robust API engineering program.

It’s not all doom and gloom.

So, we’ve covered a lot of challenges to API security that we will probably see in 2024.

But don’t lose hope. There are tonnes of opportunities here to improve.

Building more resilient APIs and enforcing good behavior through API gateways can help reduce automated attacks against APIs.

Proper secrets management, coupled with source control guardrails to prevent secrets from even being checked in, can help eliminate “secrets sprawl.” Improving CICD pipelines to leverage this same secrets management infrastructure also helps remove risk in deployment automation.
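The simplest form of such a guardrail is pattern matching on staged changes before they're committed. Here's a minimal sketch; the two patterns are illustrative shapes only, while real scanners like GitGuardian use far richer detectors plus entropy analysis.

```python
import re

# Illustrative secret shapes, not an exhaustive detector set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),         # AWS access key ID shape
    re.compile(r"sk_live_[0-9a-zA-Z]{8,}"),  # Stripe-style live key shape
]


def find_secrets(text):
    """Return any substrings in `text` that match a known secret shape."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Wired into a pre-commit hook that rejects the commit on any hit, even a check this crude blocks the most common leak: a developer pasting a live key into config or test code.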

Using well-established authN & authZ frameworks and middleware can eliminate a lot of risk against broken authentication and authorization. Trusting these frameworks is another thing; we still need to harden deployments and limit the blast radius of potential third-party risk. Limiting roles and scopes can help, but they must be properly developed.
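Scope enforcement, done properly, means every handler declares what it requires and the middleware rejects anything less. A hypothetical framework-agnostic sketch (the request shape and scope names are invented; a real app would use its framework's auth middleware):

```python
from functools import wraps


def require_scope(required):
    """Decorator sketch: reject a request whose token lacks `required` scope."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(request):
            scopes = set(request.get("token_scopes", []))
            if required not in scopes:
                # Fail closed: no scope claim means no access.
                return {"status": 403, "body": "insufficient_scope"}
            return handler(request)
        return wrapper
    return decorator


@require_scope("invoices:read")
def list_invoices(request):
    return {"status": 200, "body": ["inv-1", "inv-2"]}
```

The design choice worth noting is the default: an absent or empty scope claim denies access, so a misconfigured client degrades to a 403 instead of an authorization bypass.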

AI can be your API’s best friend instead of its worst enemy. Companies building security testing tools like Postman and Burp Suite will embrace artificial intelligence to make it easier to do our work. While threat actors may abuse AI to attack APIs, we can use it to speed up our security test development and execution to catch issues earlier in the dev cycle.

Security tooling can help. The days of simple OAS/Swagger definition scanning are pretty much gone. Vendors are now not only looking at API metadata but also the behavioral attributes of request and response patterns. They combine all this within large data lakes to build custom machine learning models that allow them to apply AI to your API patterns. This lets them discover and detect your APIs and limit potentially malicious intent to help improve your security.

Conclusion

The future landscape of API security in 2024 is expected to be more dynamic and challenging than ever before.

However, with the advances in AI and machine learning coupled with robust coding frameworks and security tooling, we are better equipped to address these challenges… as long as we do it.

This ongoing innovation paints a future where API security becomes more proactive than reactive, enabling our organizations to stay one step ahead of potential threats.

Now, I’m the security guy. It’s my job to think about how to break things and make APIs work in ways never intended or expected. However, many of my colleagues have a different vision of the future of APIs. You should really check out some of their thoughts on the subject.

One last thing…

API Hacker Inner Circle

Have you joined The API Hacker Inner Circle yet? It’s my FREE weekly newsletter where I share articles like this, along with pro tips, industry insights, and community news that I don’t tend to share publicly. If you haven’t, subscribe at https://apihacker.blog.

The post Beyond the Crystal Ball: What API security may look like in 2024 appeared first on Dana Epp's Blog.

*** This is a Security Bloggers Network syndicated blog from Dana Epp's Blog authored by Dana Epp. Read the original post at: https://danaepp.com/api-security-in-2024


Article source: https://securityboulevard.com/2024/01/beyond-the-crystal-ball-what-api-security-may-look-like-in-2024/