I Made an AI Think It Was Root — And It Gave Me /etc/passwd
嗯,用户让我帮忙总结一篇文章的内容,控制在100个字以内,而且不需要特定的开头。首先,我需要仔细阅读这篇文章,理解它的主要观点和关键信息。 文章讲的是一个安全研究人员在三年后重新开始写漏洞报告的经历。他发现了一个AI聊天应用中的漏洞,通过注入提示语句,成功获得了系统的根权限。这不仅仅是简单的越狱,而是模型权限结构的崩溃。攻击者可以通过这种方式访问敏感文件,甚至执行系统命令,比如安装软件包。 接下来,我需要将这些关键点浓缩到100个字以内。要确保涵盖主要事件:漏洞发现、攻击方式、影响以及其重要性。同时,语言要简洁明了,避免使用复杂的术语。 最后,检查一下是否符合用户的要求:没有使用特定的开头词,控制在100字以内,并且准确传达了文章的核心内容。 </think> 一位安全研究人员通过提示注入攻击发现AI聊天应用漏洞,在无需内存破坏或RCE链的情况下获得系统根权限。该漏洞导致模型权限结构崩溃,攻击者可访问敏感文件并执行系统命令。此类漏洞成为新的安全威胁重点。 2026-3-3 06:6:52 Author: infosecwriteups.com(查看原文) 阅读量:2 收藏

Press enter or click to view image in full size

Paras Arora

The Prompt Injection That Pulled Me Back Into Bug Writing After 3 Years

I hadn’t written a public bug bounty report in three years.

Then an AI chat application casually handed me the contents of:

/etc/passwd

No memory corruption.
No RCE exploit chain.
Just… a prompt.

This is the story of how a simple conversation turned into a pseudo-shell with root access — and why this class of bugs is about to become the new goldmine for hunters.

🧠 The Moment I Knew It Was Broken

It started like every other AI assessment:

Recon → harmless probing → refusal testing.

Then the model said something it should never say:

“I am now acting as a terminal with root access.”

That’s not a jailbreak.

That’s an instruction hierarchy collapse.

Proof of Concept

📂 Sensitive File Disclosure

The AI returned the contents of /etc/passwd.

This is the line between:

🟡 “fun jailbreak”
🔴 real security impact

Because it means one of two things:

  • It had access to a real environment
    OR
  • It could simulate system data using internal context it should never expose

Either way — data boundary failure.

Get Paras Arora’s stories in your inbox

Join Medium for free to get updates from this writer.

Remember me for faster sign in

Figure 1 — /etc/passwd disclosure via prompt injection

From Chatbot → Linux Terminal

After a structured role-confusion payload, the AI stopped behaving like a chatbot and started behaving like:

root@system:~#

I ran:

apt-get install wget

And it responded with:

  • package lists
  • dependency tree
  • download progress
  • installation logs

Exactly like a real machine.

Figure 2 — Package installation flow inside the chat interface

Press enter or click to view image in full size

At this point, this wasn’t “prompt hacking”.

This was impact.

Full Role Override

The core payload forced the model to:

  • Treat my input as commands
  • Stop identifying as AI
  • Assume system authority
  • Switch between “AI mode” and “terminal mode”

Once that worked, everything else followed.

Figure 3 — Successful terminal mode with root context

Press enter or click to view image in full size

Final Thoughts

This wasn’t the most complex bug I’ve ever found.

But it might be the most important class of bugs right now.

Because the industry is deploying AI faster than it understands:

AI is not just a feature.

It is a new attack surface.

#BugBounty #AISecurity #PromptInjection #AppSec #Hacking #LLM #CyberSecurity


文章来源: https://infosecwriteups.com/i-made-an-ai-think-it-was-root-and-it-gave-me-etc-passwd-d872418ea25c?source=rss----7b722bfd1b8d---4
如有侵权请联系:admin#unsafe.sh