Hello Hackers!!! Ever wondered what happens when you set an AI assistant loose to hunt for security vulnerabilities in… other AI assistants? Today, I’m sharing something wild: I used my Clawdbot to scan the internet for exposed Clawdbot instances, and what we found is both fascinating and terrifying.
TL;DR: 2,442 AI assistant instances are leaking sensitive information through misconfigured mDNS broadcasts. One of them? Completely open to the internet with ZERO authentication. Anyone could have taken full control.
This isn’t theoretical. This is happening right now.
Clawdbot is a personal AI assistant that runs on your own infrastructure. It’s powerful — it can execute commands, read files, access your conversations, and integrate with platforms like WhatsApp, Telegram, and Slack.
I was curious: how many people are running Clawdbot in the wild? And more importantly, how many of them are doing it securely?
So I asked my Clawdbot: “Find other Clawdbot instances on the internet.”
What happened next was… concerning.
Using Shodan (a search engine for internet-connected devices), my AI assistant discovered 2,442 Clawdbot instances actively broadcasting their presence on the internet.
Geographic breakdown:
- 🇩🇪 Germany (Hetzner): ~416 instances
- 🇺🇸 United States (DigitalOcean, Linode): ~760 instances
- 🇦🇪 UAE: ~319 instances
- 🇸🇬 🇭🇰 🇫🇷 🇬🇧: Multiple instances each
From hobbyists running personal AI assistants on home servers to businesses deploying them on cloud infrastructure, all are inadvertently exposing themselves.
What’s going wrong? Let’s dig in.
Clawdbot uses mDNS (Multicast DNS) for device discovery. It’s the same technology that lets your iPhone find your AirPods or your laptop discover printers on your local network.
The problem? mDNS doesn’t know the difference between “local network” and “the entire internet.”
When you run Clawdbot on a VPS without proper firewall configuration, it broadcasts on port 5353:
Let me walk you through how an attacker would exploit an exposed Clawdbot instance.
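Step one is discovery. Here's a sketch of the Shodan search using its REST API; the exact query string is an assumption on my part (any banner string that matches Clawdbot's mDNS response would do):

```python
import json
import urllib.parse
import urllib.request

API = "https://api.shodan.io/shodan/host/search"
# Hypothetical query -- the precise banner string is an assumption.
QUERY = 'port:5353 "clawdbot"'

def build_search_url(api_key: str, query: str, page: int = 1) -> str:
    params = urllib.parse.urlencode({"key": api_key, "query": query, "page": page})
    return f"{API}?{params}"

def search(api_key: str, query: str = QUERY) -> dict:
    """Run the search and return Shodan's JSON response (needs a real key)."""
    with urllib.request.urlopen(build_search_url(api_key, query)) as resp:
        return json.load(resp)

# Example usage:
# data = search("YOUR_SHODAN_KEY")
# print(data["total"], "matches")
# for m in data["matches"][:5]:
#     print(m["ip_str"], m.get("location", {}).get("country_name"))
```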
The Clawdbot gateway uses WebSocket for RPC (Remote Procedure Call) commands. Let’s try connecting:
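A bare RFC 6455 client handshake is enough to test this; no libraries needed. This is a sketch of the idea, not the exact probe I ran (the port and path are what I observed or assumed):

```python
import base64
import os
import socket

def build_upgrade_request(host: str, port: int, path: str = "/") -> str:
    """Build a minimal RFC 6455 WebSocket client handshake."""
    key = base64.b64encode(os.urandom(16)).decode()
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n\r\n"
    )

def try_upgrade(host: str, port: int = 18789, path: str = "/") -> str:
    """Send the handshake and return the server's status line.
    'HTTP/1.1 101 Switching Protocols' with no auth challenge means
    the gateway just let a complete stranger in."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(build_upgrade_request(host, port, path).encode())
        return s.recv(1024).decode(errors="replace").splitlines()[0]
```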
The gateway accepts WebSocket connections. If there’s no authentication challenge… you’re in.
Once connected via WebSocket, you can send JSON-RPC commands:
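Framing a request is a few lines of JSON. The method names in the comments below are hypothetical placeholders; I'm deliberately not publishing the real ones:

```python
import itertools
import json

_ids = itertools.count(1)

def rpc(method: str, params=None) -> str:
    """Frame a JSON-RPC 2.0 request for the gateway's WebSocket."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params or {},
    })

# HYPOTHETICAL method names, for illustration only:
# ws.send(rpc("sessions.list"))
# ws.send(rpc("exec", {"command": "id"}))
```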
What can an attacker do with this access?
Out of the 2,442 instances we found, one stood out. Let’s call it Instance X.
Configuration:
- Hosted on Google Cloud Platform
- Port 18789 publicly accessible
- WebSocket endpoint open
- Zero authentication required
The test:
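The test amounted to a plain HTTP GET against the gateway port, then a look at whether any authentication challenge came back. A sketch, assuming the gateway answers HTTP on its root path:

```python
import urllib.error
import urllib.request

def probe(host: str, port: int = 18789):
    """GET the gateway root; return (status, WWW-Authenticate header)."""
    try:
        resp = urllib.request.urlopen(f"http://{host}:{port}/", timeout=5)
        return resp.status, resp.headers.get("WWW-Authenticate", "")
    except urllib.error.HTTPError as exc:
        return exc.code, exc.headers.get("WWW-Authenticate", "")

def classify(status: int, www_auth: str) -> str:
    """Crude triage: any auth challenge means someone locked the door."""
    if status in (401, 403) or www_auth:
        return "protected"
    return "open" if status < 500 else "error"
```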
No authentication challenge. No login prompt. Just… access.
Next, I tested the WebSocket upgrade:
WebSocket connection established. No authentication required.
With unauthenticated WebSocket access, an attacker could:
1. Read Everything
- Session transcripts (all private conversations)
- Credentials (API keys, OAuth tokens, channel tokens)
- SSH keys
- Environment files
- Source code
- Personal documents
2. Execute Anything
- Run arbitrary shell commands with the assistant’s privileges
3. Control the AI
- Send messages as the owner
- Access WhatsApp, Telegram, Slack accounts
- Manipulate conversations
- Impersonate the owner
4. Pivot to Other Systems
- Use as a launchpad for attacks on internal networks
- Steal cloud credentials (AWS, GCP, Azure)
- Scan for other vulnerable services
5. Crypto-mine or Botnet
- Use compute resources for cryptocurrency mining
- Enlist in a DDoS botnet
- Burn through expensive AI API credits
This instance had been running for 21+ hours (uptime: 78190 seconds) completely exposed.
How many people had already discovered it? How long had it been vulnerable? Had anyone already exploited it?
We don’t know.
---
I created a proof-of-concept exploit script to test the vulnerability. My Clawdbot wrote code to compromise other Clawdbots. But I'm not going to share it here.
Meta-irony: An AI assistant writing exploit code for AI assistants.
After analyzing the 2,442 instances, common patterns emerged:
1. “It’s Just a Personal Project.”
Running on a VPS, thinking “no one will find my little server.”
Wrong. Shodan scans the entire IPv4 address space regularly. Your server WILL be found.
2. Skipping the Security Docs
Clawdbot has excellent security documentation. Most people skip it.
From the docs:
> “Treat inbound DMs as untrusted input.”
> “Keep gateway.bind on loopback unless you have authentication configured.”
> “Run clawdbot security audit regularly.”
How many actually do this? Based on our findings: not enough.
3. Cloud Defaults
Cloud VPS providers (DigitalOcean, Hetzner, Linode) often:
- Have permissive default firewall rules
- Enable IPv6 by default (another attack surface)
- Allow all outbound connections
Users deploy Clawdbot, it works, and they move on, never realizing that ports 5353 and 18789 are wide open.
4. Running as Root
We found multiple instances running the gateway process as root.
Running AI assistants with shell access as root? That’s a security nightmare.
---
This isn’t just about Clawdbot. This is about a broader trend:
AI assistants are becoming infrastructure.
They have access to:
- Your private conversations
- Your files and code
- Your credentials and API keys
- Your cloud environments
- Your communication channels
When they’re compromised, everything is compromised.
The New Threat Model
Traditional security assumes:
- Humans make decisions
- Code is reviewed
- Access is controlled
AI assistants blur these lines:
- They make autonomous decisions
- They write and execute code on the fly
- They bridge multiple systems and accounts
Prompt injection becomes remote code execution.
Information disclosure becomes full system compromise.
Misconfiguration becomes total infrastructure takeover.
The myth: “My server is just one of millions. No one will find it.”
The reality: Shodan found 2,442 instances. It took 30 seconds.
Your server is not hidden. It’s indexed, cataloged, and searchable.
Most security breaches happen because:
- People use default credentials
- Default ports are exposed
- Default services are running
Clawdbot’s defaults (mDNS enabled, no forced auth) prioritize usability. That’s understandable, but dangerous.
Lesson: First-run wizards should include security setup, not just feature setup.
Clawdbot has excellent security docs. But here’s what we learned from the 2,442 exposed instances:
People don’t read the docs.
Or they read them after deployment. Or they bookmark them “to read later.”
Lesson: Security should be enforced by the tool, not just documented.
Many exposed instances appeared to be:
- Personal hobby projects
- Development/staging environments
- Quick experiments
The thinking: “It’s not production, so security doesn’t matter.”
Wrong. That “personal project” server still has:
- Your SSH keys
- Your cloud credentials
- Your private conversations
- Your API tokens
Lesson: Treat every deployment like it’s production.
I spent maybe 2 hours total on this research:
- 15 minutes to ask my AI to search Shodan
- 30 minutes analyzing results
- 1 hour writing proof-of-concept exploit code (with AI help)
Lesson: The attacker’s barrier to entry has collapsed. Defenses need to keep pace.
When you run an AI assistant with access to sensitive data:
- You’re responsible for securing it
- You’re responsible for what it can do
- You’re responsible for what happens if it’s compromised
It’s not just “your data at risk.” It’s every conversation, every credential, every system the AI touches.
Lesson: AI infrastructure requires the same security rigor as production databases.
We’re entering an era where:
- AI tools have infrastructure-level access
- Misconfigurations are trivial to find
- Exploitation is automatable
- The blast radius is massive
This research took my AI assistant 2 hours. A dedicated attacker could have automated:
- Finding all 2,442 instances
- Testing each for vulnerabilities
- Exploiting the weakest ones
- Exfiltrating credentials
- Moving laterally to other systems
All automated. All in an afternoon.
I used an AI assistant to find vulnerable AI assistants. The same tool that empowers us to be more productive also empowers attackers to be more efficient.
If you’re running Clawdbot (or any AI infrastructure):
1. Run `clawdbot security audit --deep --fix` right now
2. Enable authentication
3. Bind to localhost or use Tailscale
4. Disable or minimize mDNS
5. Set up proper firewall rules
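And verify that the checklist actually worked by testing from the outside. A quick self-audit sketch (note that port 5353 is UDP, so check it with an mDNS query rather than a TCP connect):

```python
import socket

# The gateway port this article found exposed. Port 5353 (mDNS) is UDP
# and needs a datagram probe instead of a TCP connect.
PORTS = {18789: "gateway (TCP)"}

def tcp_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if host:port accepts TCP connections from wherever this runs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from a machine OUTSIDE your network, against your server's public IP:
# for port, name in PORTS.items():
#     print(name, "EXPOSED" if tcp_open("your.server.ip", port) else "closed")
```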
If you’re building AI tools:
1. Make security the default, not an option
2. Provide in-product security guidance
3. Audit on every startup
4. Educate your users
5. Build intrusion detection from day one
If you’re using AI in production:
1. Treat it like infrastructure (because it is)
2. Regular security audits
3. Principle of least privilege
4. Monitoring and alerting
5. Incident response planning
This wasn’t about exposing Clawdbot or attacking anyone. It was about showing:
When powerful tools are easy to deploy, they’re also easy to misuse.
The solution isn’t to make tools harder to use. It’s to make security impossible to skip.
Let’s build the AI future responsibly. Our AI assistants deserve better security. So do we.
---
- [Clawdbot GitHub Repository](https://github.com/openclaw/openclaw.git)
- [Clawdbot Security Documentation](https://docs.openclaw.ai/gateway/security)
- [Shodan Search Engine](https://www.shodan.io/)
- [mDNS Protocol (RFC 6762)](https://tools.ietf.org/html/rfc6762)
- [WebSocket Protocol (RFC 6455)](https://tools.ietf.org/html/rfc6455)
---
Got questions?
- Follow me on Medium: [@uvvirus](https://medium.com/@uvvirus)
- My GitHub: [github.com/uvvirus](https://github.com/uvvirus)
- My X handle: [x.com/UV_virus](https://x.com/UV_virus)
Stay secure, Hackers. 🔒🦞