There is a lot of hype around "AI Hacking," but often it just boils down to classic web vulnerabilities in a new wrapper.
I wrote an analysis of a recent SSRF find involving ChatGPT and Azure that illustrates this perfectly.
The Concept: Server-Side Request Forgery (SSRF) occurs when an attacker can induce a server to issue requests on their behalf, typically toward internal targets the attacker cannot reach directly.
The Modern Twist: In this case, the "server" was the backend executing a ChatGPT Custom Action. The attacker asked the AI to fetch data, and the AI (running in a cloud environment) made a request to the link-local address 169.254.169.254, the Azure Instance Metadata Service (IMDS).
Because the request originated from inside the cloud instance itself, the metadata service treated it as trusted and returned sensitive API keys.
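To make the mechanics concrete, here is a minimal sketch of the vulnerable pattern: a hypothetical "fetch this URL" tool that passes model-chosen input straight to an HTTP client. The function name and path are illustrative; the endpoint and the required "Metadata: true" header are how Azure IMDS actually works.

```python
import urllib.request

def build_imds_request(path: str) -> urllib.request.Request:
    # Azure IMDS listens on the link-local address and requires the
    # "Metadata: true" header -- a header a generic fetch tool may
    # happily let its caller (here, the model) supply.
    url = "http://169.254.169.254" + path
    return urllib.request.Request(url, headers={"Metadata": "true"})

# The request an attacker-steered action would construct. Sending it
# from inside an Azure VM returns instance details and, via the
# identity endpoint, access tokens -- with no authentication at all.
req = build_imds_request("/metadata/instance?api-version=2021-02-01")
```

The point is that nothing in this code path distinguishes "public website" from "internal metadata service"; the distinction only exists at the network layer.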
This is a great example of why we can't just trust the "AI" layer to sanitize inputs. If the underlying infrastructure allows internal calls, the model will happily make them.
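The fix therefore belongs in the infrastructure, not the prompt. A minimal egress-check sketch, assuming a hypothetical validation step in front of the tool's HTTP client: resolve the destination and refuse private, loopback, and link-local addresses before connecting.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_target(url: str) -> bool:
    """Return False for URLs that resolve to internal address ranges."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # 169.254.169.254 is link-local; also block RFC 1918 and loopback.
        if addr.is_link_local or addr.is_private or addr.is_loopback:
            return False
    return True
```

One caveat worth noting: checking and then fetching in two steps is vulnerable to DNS rebinding, so a production proxy should pin the resolved IP for the actual connection rather than resolving twice.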