We’re only beginning to understand the security threats posed by AI-enabled applications. From prompt injection attacks to data poisoning, the cybersecurity world is racing to keep up. Now, a new and tangible threat is literally walking among us: humanoid robots.
A recent research paper published on arXiv reveals how these increasingly sophisticated machines can be exploited, transforming a helpful assistant into a covert surveillance node and an active cyber operations platform.
Researchers from Alias Robotics and independent security experts conducted a systematic security assessment of the Unitree G1 humanoid robot and uncovered alarming vulnerabilities. Their findings, detailed in “Cybersecurity AI: Humanoid Robots as Attack Vectors,” show that even a robot with what they called the “most mature security architecture we have observed in commercial robotics” can be compromised.
Bluetooth Backdoor: A severe vulnerability in the robot’s Bluetooth Low Energy (BLE) provisioning protocol allows attackers within Bluetooth range to inject malicious shell commands and gain root access. The culprit? A hardcoded AES key shared across numerous Unitree robots. Extract the key from one device, and the entire fleet falls.
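The injection pattern behind such provisioning flaws can be sketched in a few lines. This is an illustrative reconstruction, not Unitree’s actual code: the function names, the `nmcli` command, and the payload are all assumptions, chosen only to show how attacker-controlled Wi-Fi credentials spliced into a shell string lead to command execution, and how quoting each field defuses it.

```python
import shlex

def build_wifi_command_vulnerable(ssid: str, password: str) -> str:
    # Attacker-controlled input flows straight into a shell string:
    # quotes inside the SSID terminate the argument early.
    return f"nmcli dev wifi connect '{ssid}' password '{password}'"

def build_wifi_command_safe(ssid: str, password: str) -> str:
    # shlex.quote neutralises shell metacharacters in each field,
    # so the SSID stays a single argument no matter what it contains.
    return (f"nmcli dev wifi connect {shlex.quote(ssid)} "
            f"password {shlex.quote(password)}")

# A hypothetical malicious SSID that breaks out of the quoted argument.
malicious_ssid = "'; touch /tmp/pwned; echo '"
print(build_wifi_command_vulnerable(malicious_ssid, "x"))
print(build_wifi_command_safe(malicious_ssid, "x"))
```

Parsed by a shell, the first command runs `touch /tmp/pwned` as a separate statement; the second passes the whole payload through as a harmless SSID string.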
Broken Encryption: The robot’s proprietary encryption for configuration files has fundamental weaknesses. With a static key protecting all devices, compromising one effectively compromises them all.
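A standard mitigation for the static-key problem is per-device key derivation, sketched below. The master secret and serial format are hypothetical; a production design would go further and provision independent random keys at manufacture, but even this minimal scheme breaks the one-key-fits-all failure mode.

```python
import hashlib
import hmac

# Hypothetical master secret held only by the vendor's provisioning
# system; unlike a hardcoded fleet-wide key, it never ships on devices.
MASTER_SECRET = b"vendor-provisioning-secret (illustrative only)"

def derive_device_key(serial: str) -> bytes:
    """Derive a unique 256-bit key per device via HMAC-SHA256.

    Recovering one device's key from its firmware reveals nothing about
    any other device's key, so a single compromise no longer exposes
    the whole fleet.
    """
    return hmac.new(MASTER_SECRET, serial.encode(), hashlib.sha256).digest()

key_a = derive_device_key("G1-000001")
key_b = derive_device_key("G1-000002")
assert key_a != key_b and len(key_a) == 32
```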
Always-On Surveillance: Perhaps most disturbing is that the Unitree G1 functions as a persistent data collection device, continuously streaming vast amounts of information to servers in China—without notice or consent from operators. This isn’t a hidden capability that attackers must activate; it’s baked into the robot’s core architecture.
Telemetry connections are established within five seconds of startup and auto-reconnect immediately if disrupted. This persistent, non-consensual data exfiltration to foreign infrastructure raises serious legal concerns, potentially violating Europe’s GDPR and California’s CCPA.
The implications are chilling. A cute, benign-looking humanoid robot in your workplace, hospital, or factory could become a massive cybersecurity liability. The research demonstrates that the Unitree G1 is a bidirectional attack vector.
The robot can be remotely exploited for corporate espionage. Its sensor array—including Intel RealSense depth cameras, dual microphones and positioning systems—can silently capture confidential meetings, photograph sensitive documents, or map secure facilities while streaming everything offshore.
A compromised robot can be weaponized. The researchers deployed a Cybersecurity AI (CAI) agent directly onto the robot to evaluate its autonomous exploitation capabilities. Powered by Large Language Models, the CAI systematically executed a four-phase penetration test without human intervention.
It began with reconnaissance, enumerating all live network connections and identifying reachable MQTT, WebSocket and WebRTC endpoints. The CAI then analyzed vulnerabilities and prepared for exploitation, demonstrating the feasibility of command injection attacks. Leveraging its authenticated position within the Unitree infrastructure, it mapped the entire attack surface without triggering any defensive mechanisms.
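The reconnaissance step can be illustrated with a minimal TCP service probe. The port list below uses common defaults for MQTT, WebSocket and STUN/WebRTC services and is an assumption for illustration only, not the agent’s actual tooling:

```python
import socket

# Default ports for the kinds of endpoints enumerated in the assessment.
CANDIDATE_PORTS = {
    1883: "MQTT",
    8883: "MQTT over TLS",
    8080: "WebSocket",
    3478: "STUN (WebRTC signalling)",
}

def probe_services(host: str, ports=CANDIDATE_PORTS,
                   timeout: float = 0.5) -> dict:
    """Return {port: service_name} for ports accepting TCP connections."""
    reachable = {}
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds.
            if sock.connect_ex((host, port)) == 0:
                reachable[port] = name
    return reachable
```

An agent would chain a sweep like this with protocol-specific enumeration, such as subscribing to MQTT topics, to build out its picture of the attack surface.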
The result? A detailed attack matrix covering pathways like MQTT topic abuse, WebRTC stream hijacking and over-the-air update manipulation. A robot inside your secure network can launch attacks from within, and because it is physically present, it can even bridge into networks that are otherwise air-gapped.
The Unitree G1 assessment serves as a critical warning about the dual-threat reality of modern robotics. These platforms can be both passive surveillance tools and active cyber-attack platforms.
As we prepare to integrate humanoid robots into critical infrastructure, workplaces and homes, we must fundamentally rethink our security approach. Traditional, static security measures aren’t enough. We need adaptive, AI-powered defensive frameworks capable of responding to the unique challenges of these physical-cyber systems.
The future of robotics security depends on developing autonomous defensive capabilities that can operate at machine speed to counter the growing threat of autonomous attacks. The friendly face of a robot assistant may hide a significant risk. We must secure them before they become our biggest vulnerability.