ShortSpan.ai logo

OpenClaw: Give It Access to Your Machine? What Could Go Wrong?

OpinionAgents
Published: Sat, Feb 21, 2026
Ben Williams
Ben WilliamsOpinion
OpenClaw, the open-source AI agent controllable via WhatsApp, has over 215,000 GitHub stars and a growing skills ecosystem. It requires access to your system, stores credentials in plaintext, and has already been hit by supply chain attacks and critical vulnerabilities.

A couple of weeks ago I had a play around with OpenClaw. I set it up on an isolated AWS EC2 Ubuntu instance, had some discussions with it via WhatsApp, and let it explore some files and data I put on the system. It was certainly an interesting experience, with a strange quality, quite different from my frequent interactions with a wide variety of other AI chat interfaces and agents. It seemed to have an unusual level of agency and capability.

If you are here and reading this, you have almost certainly heard of OpenClaw already. The open-source AI agent that promises to be your personal computer assistant, controllable via messaging apps, has amassed over 215,000 GitHub stars and a fervent community. It can browse the web, write and run code, manage files, and interact with APIs on your behalf. It is also, by design, given full access to your operating system. For anyone working in security, that sentence should raise an eyebrow.

A Brief and Turbulent History

OpenClaw started life as Clawdbot in November 2025, built by Austrian developer Peter Steinberger. Steinberger is no hobbyist. He previously founded PSPDFKit, a PDF framework company that attracted $116 million in investment from Insight Partners. By his own account, the first working version of Clawdbot took roughly an hour to build. Peter vibe codes extensively, in various ways, and this short development time says more about the current state of AI tooling than it does about the complexity of the project.

The initial name did not last long. Anthropic, makers of the Claude model that originally powered the bot, requested a rename. Clawdbot became Moltbot on January 27, 2026. Two days later, it was renamed again to OpenClaw. The community barely had time to update their bookmarks.

Then, on February 15, 2026, Sam Altman announced that Steinberger was joining OpenAI. The hire reportedly came after a courtship from both OpenAI and Meta. Mark Zuckerberg personally called Steinberger on WhatsApp, ran OpenClaw himself, and gave blunt feedback. Meta offered more money, but Steinberger chose OpenAI, citing alignment with their vision. OpenClaw itself is transitioning to a foundation structure and will remain open source.

What It Actually Does

At its core, OpenClaw is a messaging gateway that connects your WhatsApp (or Telegram, or other platforms) to an embedded coding agent. You send a message like "find all PDFs on my desktop and summarise them," and the agent translates that into shell commands, executes them, and reports back. It uses the Baileys library for WhatsApp connectivity, linking to your account via a QR code in the same way WhatsApp Web does.

The experience is genuinely compelling. Chatting to an AI assistant through WhatsApp feels natural in a way that browser-based tools do not. You can be away from your desk, sat in bed, sat waiting for you kids at ju-jitsu, football or ballet, standing in a queue for coffee - whatever - fire off a message, and have your machine carry out tasks in the background. It is easy to see why the project took off so quickly.

The Security Problem

The trouble is what you have to give up to make it work. OpenClaw requires full operating system and command-line access. API keys are stored in plaintext JSON files. OAuth tokens sit unencrypted on disk. The agent listens on 0.0.0.0:18789 by default, meaning it is reachable from any network interface. And because it needs to act on your behalf, it effectively has access to everything you do: your browser sessions, your password manager, your cloud credentials, your email.

This is not a theoretical concern. In February 2026, researchers disclosed CVE-2026-25253, a WebSocket hijacking vulnerability with a CVSS score of 8.8 that allowed one-click remote code execution. A scan found 42,665 exposed OpenClaw instances on the public internet, with 5,194 confirmed as vulnerable.

The skills ecosystem has been worse. OpenClaw supports community-built "skills" distributed through ClawHub. Security firm Snyk analysed the marketplace and found that 36.82% of all skills contained security flaws, with 534 rated critical. The ClawHavoc investigation uncovered 1,184 malicious skills, 335 of which shared the same command-and-control IP address. Some distributed the Atomic macOS Stealer. This is not a few bad apples. This is a supply chain that was rotten from the start, and the pace of community growth outstripped any reasonable capacity for review.

Prompt injection is another vector that gets surprisingly little attention in the OpenClaw community. Researchers demonstrated that an email containing a hidden prompt injection could cause the agent to extract and exfiltrate a user's private key when simply asked to "check my mail." If your AI assistant reads untrusted content and also has shell access, you have built a bridge between attacker-controlled input and arbitrary code execution. That is not a feature request. That is a vulnerability class.

Trust and Isolation

The fundamental issue with OpenClaw is not any single vulnerability. It is the trust model. The agent needs broad access to be useful, but broad access on a personal machine means broad exposure. Your development keys, your SSH credentials, your browser cookies, your messaging sessions are all within reach of an agent that parses untrusted input from the internet.

If you do want to experiment with tools like this, a more defensible approach is to run the agent on a disposable cloud instance. An AWS EC2 instance with tightly scoped IAM permissions, no stored credentials beyond what the task requires, and network isolation from your personal environment is a reasonable starting point. Containers and virtual machines offer similar boundaries. The key principle is that the agent should only have access to what it needs for the current task, and nothing it compromises should cascade into your broader digital life.

OpenClaw is an impressive demonstration of where AI agents are heading. But "impressive" and "safe" are not the same thing, and the security community has been sounding alarms that the rapid development community seems to largely ignore. Giving an AI agent the keys to your entire machine because it is convenient is a decision that deserves more scrutiny than a WhatsApp QR code scan.

But would I use it?

Maybe I will use it more. I like the idea of having my own Jarvis, but I would be nervous about giving it too much access, unvetted skills - or installing it on system I can't easily nuke and rebuild - with credentials and tokens I can't easily change or revoke.


Related Articles

Get the Monthly AI Security Digest

Top research and analysis delivered to your inbox once a month. No spam, unsubscribe anytime.

Subscribe