8,000+ ChatGPT API Keys Left Publicly Accessible
2026-2-13 07:34 Author: thecyberexpress.com

The rapid integration of artificial intelligence into mainstream software development has introduced a new category of security risk, one that many organizations are still unprepared to manage. According to research conducted by Cyble Research and Intelligence Labs (CRIL), thousands of exposed ChatGPT API keys are currently accessible across public infrastructure, dramatically lowering the barrier for abuse. 

CRIL identified more than 5,000 publicly accessible GitHub repositories containing hardcoded OpenAI credentials. In parallel, approximately 3,000 live production websites were found to expose active API keys directly in client-side JavaScript and other front-end assets.  

Together, these findings reveal a widespread pattern of credential mismanagement affecting both development and production environments. 

GitHub as a Discovery Engine for Exposed ChatGPT API Keys 

Public GitHub repositories have become one of the most reliable sources for exposed AI credentials. During development cycles, especially in fast-moving environments, developers often embed ChatGPT API keys directly into source code, configuration files, or .env files. While the intent may be to rotate or remove them later, these keys frequently persist in commit histories, forks, archived projects, and cloned repositories. 
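The safer alternative to hardcoding is resolving the key from the environment at runtime and failing fast when it is absent. A minimal sketch (the `OPENAI_API_KEY` variable name follows OpenAI's common convention; the loader itself is illustrative):

```javascript
// Anti-pattern: const OPENAI_API_KEY = "sk-proj-..."; // never commit this

// Safer: resolve the key from the environment at runtime.
function loadOpenAIKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key || !key.startsWith("sk-")) {
    // Fail fast instead of silently shipping a broken or hardcoded config.
    throw new Error("OPENAI_API_KEY is missing or malformed; set it in the environment, not in source.");
  }
  return key;
}
```

Keeping the value out of source entirely also keeps it out of commit histories, forks, and clones, which is where CRIL found keys persisting long after developers intended to remove them.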

CRIL’s analysis shows that these exposures span JavaScript applications, Python scripts, CI/CD pipelines, and infrastructure configuration files. Many repositories were actively maintained or recently updated, increasing the likelihood that the exposed ChatGPT API keys remained valid at the time of discovery. 

Once committed, secrets are quickly indexed by automated scanners that monitor GitHub repositories in near real time. This drastically reduces the window between exposure and exploitation, often to mere minutes or hours. 
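The scanners in question are straightforward to build. A simplified sketch of the matching step (the pattern below is an illustrative approximation of OpenAI key formats, not an exhaustive rule set; production scanners add entropy checks to cut false positives):

```javascript
// Illustrative pattern for OpenAI-style secrets (sk-, sk-proj-, sk-svcacct- prefixes
// followed by a long base62-like tail).
const OPENAI_KEY_PATTERN = /sk-(?:proj-|svcacct-)?[A-Za-z0-9_-]{20,}/g;

// Scan a blob of text (source file, diff, JS bundle) for candidate keys.
function findCandidateKeys(text) {
  return [...text.matchAll(OPENAI_KEY_PATTERN)].map((m) => m[0]);
}
```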


Exposure in Live Production Websites 

Beyond repositories, CRIL uncovered roughly 3,000 public-facing websites leaking ChatGPT API keys directly in production. In these cases, credentials were embedded within JavaScript bundles, static files, or front-end framework assets, making them visible to anyone inspecting network traffic or application source code. 

A commonly observed implementation resembled: 

const OPENAI_API_KEY = "sk-proj-XXXXXXXXXXXXXXXXXXXXXXXX";
const OPENAI_API_KEY = "sk-svcacct-XXXXXXXXXXXXXXXXXXXXXXXX";

The sk-proj- prefix typically denotes a project-scoped key tied to a specific environment and billing configuration. The sk-svcacct- prefix generally represents a service-account key intended for backend automation or system-level integration. Despite their differing scopes, both function as privileged authentication tokens granting direct access to AI inference services and billing resources. 

Embedding these keys in client-side JavaScript fully exposes them. Attackers do not need to breach infrastructure or exploit software vulnerabilities; they simply harvest what is publicly available. 
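The standard mitigation is to keep the key server-side and route browser traffic through a thin backend proxy, so the client only ever sees your own endpoint. A minimal sketch (the endpoint path, model name, and payload shape are illustrative assumptions):

```javascript
// The browser calls /api/chat on YOUR server; the OpenAI key never leaves it.
// Build the upstream request server-side, attaching the key only there.
function buildUpstreamRequest(clientBody, apiKey) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Attached server-side only; never serialized into any client response.
        Authorization: `Bearer ${apiKey}`,
      },
      // Forward only the fields you allow, not the raw client body.
      body: JSON.stringify({ model: "gpt-4o-mini", messages: clientBody.messages }),
    },
  };
}

// A server handler (any framework) would fetch(url, options) and relay the
// JSON back to the browser, so the secret stays out of front-end assets.
```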

“The AI Era Has Arrived — Security Discipline Has Not” 

Richard Sands, CISO at Cyble, summarized the issue bluntly: “The AI Era Has Arrived — Security Discipline Has Not.” AI systems are no longer experimental tools; they are production-grade infrastructure powering chatbots, copilots, recommendation engines, and automated workflows. Yet the security rigor applied to cloud credentials and identity systems has not consistently extended to ChatGPT API keys. 

A contributing factor is the rise of what some developers call “vibe coding”—a culture that prioritizes speed, experimentation, and rapid feature delivery. While this accelerates innovation, it often sidelines foundational security practices. API keys are frequently treated as configuration values rather than production secrets. 

Sands further emphasized, “Tokens are the new passwords — they are being mishandled.” From a security standpoint, ChatGPT API keys are equivalent to privileged credentials. They control inference access, usage quotas, billing accounts, and sometimes sensitive prompts or application logic. 

Monetization and Criminal Exploitation 

Once discovered, exposed keys are validated through automated scripts and operationalized almost immediately. Threat actors monitor GitHub repositories, forks, gists, and exposed JavaScript assets to harvest credentials at scale. 
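Validation itself is trivial: a single authenticated request reveals whether a harvested key is live, which is also how defenders can confirm a rotation took effect. A sketch using OpenAI's public models-listing endpoint (the status mapping below is a simplification):

```javascript
// Map an HTTP status from GET https://api.openai.com/v1/models to a key state.
function interpretKeyStatus(status) {
  if (status === 200) return "active";       // key works: rotate immediately if leaked
  if (status === 401) return "revoked";      // invalid or already rotated
  if (status === 429) return "rate-limited"; // live but throttled; still treat as compromised
  return "unknown";
}

// Usage (requires network; not exercised here):
// const res = await fetch("https://api.openai.com/v1/models", {
//   headers: { Authorization: `Bearer ${suspectKey}` },
// });
// console.log(interpretKeyStatus(res.status));
```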

CRIL observed that compromised keys are typically used to: 

  • Execute high-volume inference workloads 
  • Generate phishing emails and scam scripts 
  • Assist in malware development 
  • Circumvent service restrictions and usage quotas 
  • Drain victim billing accounts and exhaust API credits 

Using its Cyble Vision platform, CRIL identified instances in which exposed keys were subsequently leaked and discussed on underground forums, indicating that threat actors are actively tracking and sharing discovered credentials. 

Cyble Vision indicates API key exposure leak (Source: Cyble Vision)

Unlike traditional cloud infrastructure, AI API activity is often not integrated into centralized logging systems, SIEM platforms, or anomaly detection pipelines. As a result, abuse can persist undetected until billing spikes, quota exhaustion, or degraded service performance reveal the compromise. 
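Until AI API activity feeds the same pipelines as other cloud telemetry, even a crude spend check helps surface abuse before the invoice does. A sketch that flags days whose token usage far exceeds the running baseline (the data shape and threshold are assumptions, not an OpenAI API):

```javascript
// usageByDay: array of { day: "YYYY-MM-DD", tokens: number } from a billing export.
// Flags any day whose usage exceeds `factor` times the mean of all preceding days.
function flagUsageSpikes(usageByDay, factor = 3) {
  const spikes = [];
  let total = 0;
  usageByDay.forEach((entry, i) => {
    if (i > 0) {
      const baseline = total / i; // mean of the previous days
      if (entry.tokens > factor * baseline) spikes.push(entry.day);
    }
    total += entry.tokens;
  });
  return spikes;
}
```

Wiring an alert like this into existing SIEM or billing dashboards closes exactly the detection gap CRIL describes: abuse shows up as a usage anomaly rather than a surprise invoice.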

Kaustubh Medhe, CPO at Cyble, warned: “Hard-coding LLM API keys risks turning innovation into liability, as attackers can drain AI budgets, poison workflows, and access sensitive prompts and outputs. Enterprises must manage secrets and monitor exposure across code and pipelines to prevent misconfigurations from becoming financial, privacy, or compliance issues.” 


Source: https://thecyberexpress.com/exposed-chatgpt-api-keys-github-websites/