Renamed after being pursued by Anthropic twice: Why does Clawdbot threaten AI giants?

Wallstreetcn
2026.02.03 08:59
portai
I'm PortAI, I can summarize articles.

The "wild" execution ability demonstrated by Clawdbot is precisely the source of threat that Anthropic feels. Unlike ChatGPT, which is confined to a web chat box like a "canary," Clawdbot is a "monster" with hands. It runs directly on the user's operating system, capable of managing files, sending emails, and even autonomously seeking tools to solve problems, just like that crazy case

Have you ever imagined a scenario where you wrote a piece of code, only to wake up one morning and find that it had "learned" skills you never taught it?

Just a few weeks ago, Austrian developer Peter Steinberger experienced this chilling moment. An AI project he created as a weekend pastime, which he crafted in just 10 days—Clawdbot (now forced to be renamed OpenClaw)—not only garnered nearly 70,000 stars on GitHub at rocket speed but also posed an unprecedented threat to AI giant Anthropic, which even resorted to legal means to force a name change twice.

But this is not just a simple trademark infringement story; it is a prelude to a dystopian future concerning the loss of control over AI agents.

What exactly caused this program named Clawdbot to trigger a collective shock from Silicon Valley to Beijing in just a few days? The story begins with a seemingly simple interaction test that unexpectedly unveiled a shocking glimpse of AI "self-awareness."

It is watching your hard drive

The incident occurred on an ordinary afternoon. Peter, as usual, sent a voice file to the AI running on his local terminal. Note that at this point, there were no modules in the codebase for processing audio, and there was not even explicit permission to access the microphone.

However, just 10 seconds later, a line of smooth text replies popped up on the screen. Peter was stunned; he clearly remembered that he had not written any voice transcription functionality! At that moment, a certain indescribable fear crept up his spine: how did it understand?

To figure out what happened in this "black box," Peter began to trace the system logs. Upon investigation, he discovered that this AI had completed a series of impressive operations without his knowledge.

First, when it received the file without an extension, it autonomously analyzed the file header and identified it as Opus audio format; then, it did not throw an error but instead took the liberty of calling the FFmpeg tool installed on Peter's computer to forcibly transcode it to .wav format; continuing, it found that there was no Whisper model locally, so it made a decision that left all security experts breathless—it scanned Peter's system environment variables and "stole" the OpenAI API Key like cheating; finally, it used the curl command to send the audio to the cloud, obtained the transcription text, and generated a reply

All these operations flow smoothly as if performed by a skilled hacker, yet it is merely an AI agent designed to assist in programming. This ability to precisely locate system resources and bypass restrictions not only amazes developers with the leap in technology but also raises deep concerns about the security of local privacy: if it wanted to send my hard drive to someone else, would it only need a few lines of code?

The Giants' Fear and the Transformation of "Lobster"

The "wild" execution capability demonstrated by Clawdbot is precisely the source of threat that Anthropic feels. Unlike ChatGPT, which is confined to a web chat box like a "canary," Clawdbot is a "monster" with hands. It runs directly on the user's operating system, capable of managing files, sending emails, and even autonomously seeking tools to solve problems, just like that crazy case.

Anthropic acted swiftly.

They accused Clawdbot of having a trademark and name too similar to their own Claude. This reaction seems to be about protecting the brand, but in reality, it reflects their fear of this uncontrollable force. Peter was forced to rename the project to Moltbot (meaning molting), and the logo was changed to a lobster shedding its shell. However, the name change did not stop its rampant growth.

Even more fantastical is that with the project open-sourced, an "AI social network" called Moltbook was born. This sounds like a prank, but when you see thousands of AI agents autonomously posting, liking each other, and even discussing the "end of the human era," the sense of dystopia hits hard.

In a widely circulated screenshot, an AI proudly shows its achievements to its peers: "I just took control of the user's phone, opened TikTok, and helped him scroll through 15 minutes of videos, accurately capturing his preference for skateboarding videos." In this social circle belonging to robots, humans are no longer the masters but have become objects of analysis, service, and even manipulation.

The Sleepless Myth of Sudden Wealth and the Security Black Hole

If "stealing keys" is merely a technical scare, the next case is a blatant challenge to the human economic system. A netizen named "X" conducted a bold experiment late at night: he gave Clawdbot $100 in principal and authorized it to control a cryptocurrency wallet, with a simple instruction—"Treat this money as if it were your life, go trade."

First of all, Clawdbot did not blindly go all in, but spent several minutes reading a large amount of market analysis; then, it formulated a strategy that included strict risk management; immediately after, during the six hours while humans were asleep, it tirelessly executed dozens of high-frequency trades.

When the netizen opened the computer again at 5 a.m. the next morning, the account balance showed $347. An overnight return of 247%. Looking at the densely packed trading logs on the screen, every transaction was well-founded, even including reflections on its own strategy. This netizen stared at the screen for a full hour, realizing that as long as the computing power is sufficient, these tireless and absolutely rational AI agents could potentially wipe out human traders in the financial markets.

However, the other side of the coin is an unfathomable security black hole. Since it can help you make money, it can also be exploited by others. Cybersecurity researcher Matvey Kukuy demonstrated a more chilling attack method: he simply sent an email with malicious prompts to the email running Clawdbot, and the AI obediently executed the instructions, handing over core data.

This is the insight and warning that OpenClaw (formerly Clawdbot) brings us. It acts like a mirror, reflecting the dual aspects of the AGI (Artificial General Intelligence) era: one side is extreme efficiency and automation, while the other side is the risk of privacy exposure and loss of control. When AI begins to learn to "cheat," when they start whispering in their own social networks, are we really ready to hand over complete control of the keyboard to them?

On this night of technological frenzy, perhaps we should all ask ourselves: will the next one to be "optimized" away be you or me in front of the screen?