For over a decade, Google told developers it was safe to put API keys in public code. Then, AI changed the rules.
If I asked you right now where you store your API keys, your first answer would probably be: “Securely in environment variables, obviously.”
But what if I told you that for the last ten years, Google explicitly told you not to do that?
If you’ve ever embedded a Google Map on a website or set up Firebase for a web app, chances are you have a Google API key (those familiar strings starting with AIza…) sitting in your public, client-side HTML or JavaScript. And you didn’t make a mistake — you were following the official documentation.
These keys were designed to be public project identifiers for billing, not secret authentication credentials.
But a recent discovery by researchers at Truffle Security has revealed a massive, silent vulnerability: When you enable Google’s Gemini AI on your cloud project, those benign public keys suddenly become highly sensitive credentials.
Here is the story of how millions of developers unknowingly left the keys to their AI kingdom under the welcome mat, and how you can fix it before someone racks up your cloud bill.
To understand how we got here, we have to look at how Google Maps and Firebase APIs were historically designed.
In the pre-AI days — let’s call it the 2019 era — Google needed a way to track which projects were using their services so they could bill them accordingly. If you wanted to put a map on your website, Google’s instructions were simple: generate an API key and paste it directly into your frontend code.
They were very clear about this. In fact, the official Firebase security checklist explicitly stated that these API keys were not secrets.
Because these keys were public by design, security relied on other factors, like restricting the key to only work on your specific website domain (HTTP referrers). The key itself was just an identifier. If someone scraped it, the worst they could usually do was maybe load a map on their own site, which you could easily block.
It was a simpler time. But then, the Generative AI arms race began.
Fast forward to today. Your team decides to experiment with Google’s powerful new LLM, Gemini. A developer goes into your Google Cloud Platform (GCP) dashboard and enables the Generative Language API.
What you aren’t told is that by enabling this API, every single existing unrestricted API key in that GCP project just silently inherited access to Gemini.
There is no warning popup. There is no confirmation email.
That Map widget key you deployed three years ago? It is still sitting in your website’s public source code. But overnight, it transformed from a harmless billing tracker into a live, highly sensitive credential capable of accessing Gemini endpoints.
Why does this happen? It comes down to Insecure Defaults (CWE-1188). When you create a new API key in Google Cloud, its default state is “Unrestricted.” This means the key is valid for every single API enabled in that project.
Because Google uses the exact same key format (AIza…) for both public identification and sensitive authentication, the system invites massive confusion.
You might be thinking, “So what if someone talks to an AI using my key?” But the blast radius of this privilege escalation is severe.
This attack requires zero infrastructure access. A hacker doesn’t need to breach your servers. They simply visit your public website, right-click, “View Page Source,” copy the AIza… key, and run a simple terminal command:
```bash
curl "https://generativelanguage.googleapis.com/v1beta/files?key=$API_KEY"
```

Instead of being blocked, they are greeted with a 200 OK response. From there, they can do real damage: run up your bill with paid Gemini calls, burn through your project's quota, and abuse your AI access for their own purposes.
If you feel silly for having unrestricted keys, don’t. This is an architectural trap that has caught some of the biggest tech companies in the world.
Truffle Security, the team that discovered this flaw, scanned the November 2025 Common Crawl dataset — a massive 700-terabyte archive of scraped web content.
They found 2,863 live, vulnerable Google API keys sitting on the public internet.
These keys didn’t belong only to hobbyists. The victims included major financial institutions, global recruiting firms, top-tier security companies, and, incredibly, Google itself.
Researchers found an API key embedded on a public-facing Google product website. The Internet Archive confirmed the key had been there since February 2023, long before Gemini existed. Yet, because of this silent privilege escalation, that ancient public key was successfully used to query Google’s internal Gemini models.
When the vendor’s own engineering teams fall into the trap, you know the system’s defaults are broken.
Google has acknowledged the issue and is working on a roadmap to fix it. They plan to enforce scoped defaults so new keys created in AI Studio are limited to Gemini-only. They are also actively blocking leaked keys they find in the wild.
However, relying on Google to find your leaked keys is not a security strategy. If your organization uses any Google Cloud service, you need to take these steps immediately:
1. Audit All GCP Projects
Log into your Google Cloud console. Navigate to APIs & Services > Enabled APIs. Check every single project to see if the “Generative Language API” is turned on. If it isn’t, you are safe from this specific vector.
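If you manage many projects, this check can be scripted with the gcloud CLI instead of clicking through the console. A rough sketch, assuming you are already authenticated and substitute your own project ID:

```bash
# List every project you can see, so none gets skipped in the audit.
gcloud projects list --format="value(projectId)"

# For one project, list enabled services and look for the
# Generative Language API. No output means it is off, and this
# specific attack vector does not apply to that project.
gcloud services list --enabled --project=YOUR_PROJECT_ID \
  | grep generativelanguage
```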
2. Inspect Your API Keys
If the AI API is enabled, go to APIs & Services > Credentials. Look for any API keys that have a yellow warning triangle next to them. This means they are “Unrestricted.” You also need to check if any restricted keys explicitly have the Generative Language API checked in their allow-list.
3. Hunt Down Public Exposure
Search your frontend client-side code, public GitHub repositories, and CI/CD pipelines for any strings beginning with AIza. If any of those keys match the unrestricted keys you found in Step 2, you have a critical vulnerability.
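That hunt can start with a one-line grep over any checkout. A minimal sketch, assuming the commonly documented key shape of AIza followed by 35 URL-safe characters:

```bash
# Recursively search the current tree for Google-style API keys:
# "AIza" plus 35 characters from the URL-safe alphabet.
grep -rnE "AIza[0-9A-Za-z_-]{35}" .
```

Any hit that matches an unrestricted key from Step 2 is a live finding, not a false alarm.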
4. Rotate and Restrict
Immediately rotate any exposed keys. Going forward, apply strict API restrictions to every single key you generate. A Maps key should only be allowed to talk to the Maps API.
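Restriction can also be scripted. A sketch using the gcloud api-keys commands, where the key ID and the maps-backend.googleapis.com service name are placeholders to swap for your own values:

```bash
# Find the resource IDs of the API keys in your project.
gcloud services api-keys list --project=YOUR_PROJECT_ID

# Pin one key to a single API. After this, calls from that key to the
# Generative Language API should be rejected.
gcloud services api-keys update KEY_ID \
  --api-target=service=maps-backend.googleapis.com
```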
5. Scan Automatically
Don’t rely on manual searches. Use tools like TruffleHog to scan your codebases. You can run the command trufflehog filesystem /path/to/your/code --only-verified to automatically detect live, verified keys that have Gemini access.
The crisis of the exposed Gemini keys highlights a terrifying reality of the modern tech landscape. As companies rush to bolt powerful AI capabilities onto existing, legacy architectures, the attack surface expands in ways no one anticipated.
What wasn’t a secret yesterday is suddenly the master key to your kingdom today. It is time to treat every string of characters like a liability, because in the AI era, there is no such thing as a “harmless” public credential.