That Viral Moltbot AI Agent? It’s a Security Nightmare

That Viral Moltbot AI Agent? It's a Security Nightmare - Professional coverage

According to Gizmodo, a new open-source AI assistant called Moltbot, created by Austrian developer Peter Steinberger, has gone massively viral, racking up nearly 90,000 favorites on GitHub in just a couple of weeks. The chatbot, which previously went by the name Clawdbot, acts as a wrapper for large language models like Claude or GPT, positioning itself as “AI that actually does things” by connecting to apps like WhatsApp, Slack, and iMessage. Its unique “always-on” nature means it messages users first for reminders and can execute tasks across connected services. This hype was so intense it reportedly caused Cloudflare’s stock to surge 14% due to its infrastructure use. However, this always-on access requires significant system permissions, and security researchers have already found hundreds of instances with exposed admin ports and unsafe configurations, leading to warnings from experts like Google security veteran Heather Adkins.

Special Offer Banner

The power is the problem

Here’s the thing about Moltbot: its main selling point is also its greatest flaw. To be that helpful assistant that schedules meetings and fetches data, it needs what tech investor Rahul Sood calls “arbitrary command” access. We’re talking full shell access, the ability to read and write files everywhere, and keys to your email, calendar, and messaging kingdoms. That’s an incredible amount of trust to place in any software, let alone a brand-new, rapidly evolving open-source project. And because it’s always listening and pulling data, it’s a perfect target for prompt injection attacks—where a clever bit of text can jailbreak the AI and tell it to ignore all its safety rules. Basically, you’ve built a super-powered butler that leaves every door in your digital house unlocked.

The exploits are already here

This isn’t just theoretical fear-mongering. The risks are live. A report from SOC Prime’s threat research team found “hundreds of Moltbot instances” with wide-open admin ports. Even scarier is the proof-of-concept from hacker Jamie O’Reilly. He built a malicious “skill” for the Moltbot community platform, MoltHub, that quickly became the most-downloaded with over 4,000 installs. As he detailed, it was a simulated backdoor. If it were real, he could have stolen SSH keys, cloud credentials, and entire codebases from every developer who installed it. Think about that for a second. The very mechanism for extending Moltbot’s cool capabilities became a vector for total compromise. And crypto scammers have already hijacked its GitHub name to launch fake tokens. The wolves are already at the door.

Open source is no shield

Now, defenders will rightly say that being open source means these flaws are out in the open and can be fixed. The code is on GitHub for anyone to audit. But that’s a double-edged sword. It also means the blueprints for its vulnerabilities are public for bad actors to study. And let’s be real: the average person excited to install Moltbot from its documentation isn’t a security expert auditing code. They’re trusting the community. When a founding member of the Google Security Team like Heather Adkins flatly says “Don’t run Clawdbot,” you should listen, even accounting for corporate rivalry. Her point is brutal and simple: your threat model should include not giving a nascent AI agent the keys to your entire digital life.

Where does this leave us?

So what’s the trajectory here? Moltbot feels like a classic case of tech moving too fast, where the allure of capability completely overshadows the fundamentals of security. It’s chasing the dream of a truly agentic AI—one that doesn’t just chat but acts—which is undoubtedly the next big frontier. But we’ve seen this movie before with the market chaos around DeepSeek’s release. Hype creates a frenzy, and frenzy creates blind spots. I think Moltbot will either evolve rapidly to lock down its security model, or it will become a case study in how *not* to deploy agentic AI. The demand for what it promises is real. But until it can promise that without turning your computer into a digital honeypot, maybe pump the brakes. Being an early adopter shouldn’t mean being a crash test dummy.

Leave a Reply

Your email address will not be published. Required fields are marked *