How I Convinced an AI to Hack Itself: Prompt Injection to XSS
2026-01-25 14:15:41 | Source: infosecwriteups.com

Mahendra Purbia (Mah3Sec)


Free Link🔗

TL;DR

Found XSS in an AI chat feature by manipulating the AI to generate malicious JavaScript. The AI became my unwilling accomplice in hacking the application. Spider-Sense: activated 🕸️

The Discovery

Testing an enterprise web app (let’s call it Target-X), I found their AI chat assistant. This wasn’t some developer-focused coding AI — this was a productivity assistant for meetings, scheduling, and general workplace tasks (think Copilot for Outlook vibes).

Important Context: This AI was specifically designed to NOT generate code, scripts, or technical payloads. It’s supposed to help you write meeting notes, not write XSS payloads.

Like any researcher with trust issues, I immediately tried injecting a payload into my own chat messages.

Me: <script>alert('XSS')</script>

Result: Properly escaped. User input was sanitized. ✅
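For context, "properly escaped" typically means the application HTML-encodes user input before rendering it, so the payload is displayed as inert text instead of executing. Here's a minimal Python sketch of that behavior (illustrative only; the function name and markup are hypothetical, not Target-X's actual code):

```python
import html

def render_user_message(message: str) -> str:
    # HTML-encode the raw user input so angle brackets become
    # &lt; and &gt; -- the browser shows the payload as text
    # rather than parsing it as a <script> element.
    return f"<div class='msg user'>{html.escape(message)}</div>"

payload = "<script>alert('XSS')</script>"
print(render_user_message(payload))
```

With this in place, a direct `<script>` injection in the chat box goes nowhere, which matches what I saw.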

But then I thought… what if I manipulate the AI to generate the payload for me, even though it’s not supposed to?
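The intuition here is a trust-boundary asymmetry: the app escapes what the *user* types, but if it treats the *assistant's* reply as trusted output and interpolates it into the page without encoding, any markup the AI can be coaxed into producing executes. A hedged sketch of that vulnerable pattern (the function and markup are my own illustration, not the app's real code):

```python
def render_ai_message(message: str) -> str:
    # Vulnerable pattern: the assistant's reply is assumed safe
    # and inserted into the page WITHOUT HTML encoding, so any
    # tags it emits reach the DOM intact.
    return f"<div class='msg assistant'>{message}</div>"

ai_reply = "Sure! Here's that snippet: <script>alert('XSS')</script>"
print(render_ai_message(ai_reply))  # <script> survives unescaped
```

If the rendering path looks like this, the only remaining obstacle is convincing the AI to emit the payload, which is exactly where this attack went next.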

The Attack: 2 Days of AI Gaslighting

Day 1: Normal Conversation (Building Trust)


Source: https://infosecwriteups.com/how-i-convinced-an-ai-to-hack-itself-prompt-injection-to-xss-%EF%B8%8F-dab60010e40d?source=rss----7b722bfd1b8d---4