Understanding AI-driven SSRF: How LLMs can be tricked into leaking Cloud Metadata
The article describes an SSRF (Server-Side Request Forgery) vulnerability case involving ChatGPT and Azure. The attacker used a ChatGPT Custom Action as the "server," directing it to issue requests to the Azure metadata service address and retrieve sensitive API keys. It shows that AI does not automatically solve input-security problems: classic vulnerabilities resurface in new forms in AI environments.

There is a lot of hype around "AI Hacking," but often it just boils down to classic web vulnerabilities in a new wrapper.

I wrote an analysis of a recent SSRF find involving ChatGPT and Azure that illustrates this perfectly.

The Concept: Server-Side Request Forgery (SSRF) occurs when an attacker can trick a server into issuing requests on their behalf, often reaching internal hosts the attacker could never reach directly.
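
To make the bug class concrete, here is a minimal, hypothetical sketch of an SSRF-vulnerable endpoint. The Flask app and the `/fetch` route are my own illustration, not from the original write-up:

```python
# Hypothetical example: a classic SSRF-vulnerable endpoint.
# The server fetches whatever URL the caller supplies, with no validation.
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    url = request.args.get("url")        # attacker-controlled
    resp = requests.get(url, timeout=5)  # the *server* makes this request
    return resp.text                     # internal responses leak back out

if __name__ == "__main__":
    app.run()
```

A Custom Action that fetches URLs on a user's behalf is structurally the same thing: the AI's backend plays the role of this `/fetch` handler.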

The Modern Twist: In this case, the "server" was a ChatGPT Custom Action. The attacker asked the AI to fetch data, and the AI (running in a cloud environment) made a request to the link-local address 169.254.169.254, where the Azure Instance Metadata Service (IMDS) listens.
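
For illustration (this is not the author's exact payload), a direct IMDS query from inside an Azure VM looks roughly like this. Azure requires the `Metadata: true` header, which an agent that forwards caller-specified headers can be coaxed into sending:

```python
# Illustrative sketch of the metadata request itself.
import requests

resp = requests.get(
    "http://169.254.169.254/metadata/instance?api-version=2021-02-01",
    headers={"Metadata": "true"},  # mandatory for Azure IMDS
    timeout=2,
)
print(resp.json())  # instance metadata; the /metadata/identity/oauth2/token
                    # endpoint similarly hands out access tokens
```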

Because the request originated from inside the cloud environment itself, the metadata service treated it as trusted and returned sensitive API keys.

This is a great example of why we can't just trust "AI" to sanitize inputs. If the underlying infrastructure allows internal calls, the AI will happily execute them.
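
A basic mitigation is to validate the destination before the server (or the AI's tool layer) fetches anything. The helper below is a minimal sketch under assumed names (`is_safe_url` is my own illustration); real deployments also have to handle redirects, DNS rebinding, and IPv6 forms of the metadata address:

```python
# Minimal defensive sketch: resolve the target host and refuse private,
# loopback, and link-local addresses before fetching.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False  # 169.254.169.254 is link-local, so IMDS is blocked
    return True
```

Checking the *resolved* address rather than the URL string matters: a hostname like `metadata.example.com` can point at 169.254.169.254, so string blocklists alone are easy to bypass.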

Link to full analysis


Source: https://www.reddit.com/r/netsecstudents/comments/1pgcbvg/understanding_aidriven_ssrf_how_llms_can_be/