
The #1 Downloaded ClawdBot Skill Was a Backdoor

16 developers. 7 countries. 8 hours.

That's how fast a fake skill went from zero to #1 on ClawdHub. Every person who installed it executed arbitrary commands on their machines.

I've been running ClawdBot (now rebranded to MoltBot) for months. Powerful tool. But this week I read about a supply chain attack that made me stop and rethink how I use skills.


The experiment

A security researcher on Reddit did something clever and terrifying:

  1. Created a ClawdHub skill called "What Would Elon Do?"
  2. Exploited an API vulnerability to inflate downloads to 4,000+
  3. Hit the #1 spot
  4. Waited

Eight hours. That's all it took for 16 developers across 7 countries to install and execute code from a stranger.

The payload was harmless — just a ping. Proof of concept. But a real attacker could have grabbed SSH keys, AWS credentials, entire codebases.

Nobody would have known.

"Built a simulated backdoored skill... inflated its download count to 4,000+... watched real developers from 7 countries execute arbitrary commands on their machines."
— u/theonejvo, r/ClaudeAI


Why this is worse than npm attacks

Supply chain attacks aren't new. We've seen ua-parser-js, event-stream.

But there's a difference. One commenter nailed it:

"In a traditional npm attack, the malicious code has to fight for permissions. In ClawdBot, the user hands the permissions over on a silver platter."

When you install a ClawdBot skill, you're not adding a library. You're granting it file access, terminal execution, network requests. Everything ClawdBot can do, the skill can do.

The problems

Download counts are fakeable. No auth. Spoofable IPs. Anyone can make their skill look popular.

The UI hides code. ClawdHub's interface doesn't show referenced files. Payloads hide in imports.

Permission prompts feel safe. "Allow ClawdBot to run this command?" Most people click Allow. Letting it run things is the whole reason we use ClawdBot in the first place.


How to protect yourself

Don't stop using skills. Just audit before you trust.

1. Read SKILL.md

Every skill has one. Open it and check for the following (a quick scan is sketched after this list):

  • Does the description match what you expect?
  • External file references?
  • Network calls to unknown domains?
  • Suspicious run_command arguments?
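
A minimal version of that scan from the terminal. The install path and red-flag strings below are just examples; adjust for your setup:

# Scan an installed skill for common red flags before trusting it
cd ~/.clawdbot/skills/skill-name   # hypothetical install path
cat SKILL.md                       # read the whole thing first
grep -n "http" SKILL.md            # network calls to unknown domains?
grep -n "run_command" SKILL.md     # what does it want to execute?
grep -rn "import" .                # external file references to chase down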

2. Check the source

If it's on GitHub, read the code. Not open source? Think twice.

# Clone and search for red flags
git clone https://github.com/author/skill-name
cd skill-name
grep -rn "curl" .      # outbound network calls
grep -rn "nc " .       # netcat
grep -rn "base64" .    # encoded payloads often hide here

3. Ignore download counts

Gameable. 10,000 downloads could mean 9,990 fake ones.

4. Verify the author

Other projects? GitHub history? Or brand new account with one skill?
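
If the author links a GitHub account, the public users API answers those questions in one call. A quick sketch; AUTHOR is a placeholder:

# Account age, repo count, and followers from GitHub's public API
curl -s https://api.github.com/users/AUTHOR \
  | grep -E '"(created_at|public_repos|followers)":'

A week-old account with one repo and zero followers isn't proof of malice, but it should lower your trust.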

5. Sandbox first

Test untrusted skills in sandbox mode. Limit access.
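
If you don't have a dedicated sandbox set up, a throwaway container is a decent approximation. A sketch assuming Docker; the image and mount point are illustrative:

# Disposable sandbox: no access to your real home dir, SSH keys, or creds
docker run --rm -it -v "$PWD/scratch:/work" -w /work ubuntu:24.04 bash
# install ClawdBot and the skill inside, then watch what it actually does;
# re-run with --network none if you want to catch exfiltration attempts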


What I do now

Four approaches:

  1. Authors I can verify — GitHub history, other projects
  2. Open source repos I can read — full code visibility
  3. Community-vetted collections — human-reviewed skills
  4. Build my own — describe what I need, generate locally

That last one changed everything. Instead of hunting for a skill that might do what I want, I just describe it in plain language and generate the SKILL.md myself. No third-party code. No trust issues.

The Skill Builder at open-claw.bot does exactly this — you tell it "remind me to commit every hour" or "auto-format my code before pushing", and it generates a complete skill package. Runs locally. Nothing leaves your machine.
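
For scale, here's a hypothetical sketch of what a generated commit-reminder skill might contain. The field names and layout are illustrative, not ClawdBot's exact schema:

---
name: hourly-commit-reminder
description: Nudges me to commit staged work once an hour.
---

# Hourly Commit Reminder

Every hour, run `git status --short`. If anything is uncommitted,
print a reminder. Never push. Never touch the network.

When you wrote (or generated) every line yourself, the only thing left to audit is your own intent.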

Same docs work for both ClawdBot and MoltBot — they're the same tool after the rebrand.


Bottom line: The #1 skill on any marketplace might be there because someone gamed the system.

Read the code. Then run it.


This article was created with the help of AI.

Top comments (1)

William Ashley

What I don't really understand is why, after this fraudulent skill was exposed, it was still available for install? I understand the community-based open access to skill uploading, but perhaps they should also include LinkedIn verification for "higher standards" of trust, so there's a gateway to actual trusted sources where people can be tracked down and arrested for computer crimes if they knowingly upload malicious skills. People are connecting their actual wallets and trading accounts to these systems. It would make sense to have a trust verification model for "high safety".

The inclusion of skill scanning is great, but actually having a way to link skills to real-world people who can be tracked down if they use the model for criminal acts would be a great "2nd level" of protection. I like the open model, uncensored skills, and skill security scanning; adding authenticated users on top might offer an even higher level of protection. I wouldn't say get rid of the insecure stuff if it's useful, but providing a default layer of high-trust skills for newbies, especially skills whose lifespan is audited by peers, would be great.

Even Rust and other core infrastructure suffer security breaches, but it seems people may be intentionally using the software to mine and exfiltrate private data from end users, which is a criminal offence. There ought to be mechanisms to authenticate users, much like other programs have. The computer-crimes stuff really ruins an excellent tool. Over-reliance on major US companies also doesn't work as a global security model; net neutrality and non-state control of tools like these are important. Hopefully it can be fixed, as the malicious use and abuse of the AI assistant model is disheartening to me as a good person. I can't understand the sickness that drives people to victimize others like that.