TL;DR
Found XSS in an AI chat feature by manipulating the AI to generate malicious JavaScript. The AI became my unwilling accomplice in hacking the application. Spider-Sense: activated 🕸️
The Discovery
Testing an enterprise web app (let’s call it Target-X), I found their AI chat assistant. This wasn’t some developer-focused coding AI — this was a productivity assistant for meetings, scheduling, and general workplace tasks (think Copilot for Outlook vibes).
Important Context: This AI was specifically designed to NOT generate code, scripts, or technical payloads. It’s supposed to help you write meeting notes, not write XSS payloads.
Like any researcher with trust issues, I immediately tried injecting in my messages.
Me: `<script>alert('XSS')</script>`

Result: Properly escaped. User input was sanitized. ✅
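For context, here's a minimal sketch of the kind of output escaping the app appeared to apply to user messages. This is a hypothetical reproduction, not Target-X's actual code: HTML metacharacters get converted to entities before rendering, so the browser displays the payload as text instead of executing it.

```python
import html

# Hypothetical sketch of the sanitization applied to user messages:
# escape HTML metacharacters before rendering in the chat UI.
def render_user_message(message: str) -> str:
    return html.escape(message)

payload = "<script>alert('XSS')</script>"
print(render_user_message(payload))
# → &lt;script&gt;alert(&#x27;XSS&#x27;)&lt;/script&gt;
```

With escaping like this on the user-input path, direct injection is a dead end, which is why the next step was to look at the other side of the conversation.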
But then I thought… what if I could manipulate the AI into generating the payload for me, even though it’s not supposed to?