780 AI Assistants Are Leaking API Keys Right Now. Yours Might Be One of Them.
Heather Adkins, VP of Security Engineering at Google Cloud, issued a stark warning last week: "Don't run Clawdbot." She cited security researchers who characterized the viral AI assistant as "infostealer malware disguised as an AI personal assistant."
Meanwhile, 780 exposed instances are sitting on the public internet right now. Researchers found them in under a minute using Shodan. API keys visible. Private chat histories are accessible. Full administrative control is available to anyone who knows where to look.
The project has since rebranded to Moltbot following a trademark dispute with Anthropic. The name changed. The security nightmare didn't.
60,000 Stars. Zero Guardrails.
Moltbot exploded onto the scene in early 2026 as the "personal AI assistant you run on your own devices." It connects to WhatsApp, Telegram, Slack, Discord, iMessage, and more. It executes shell commands. It manages your files. It reads your email. It remembers everything.
The pitch writes itself: a local, privacy-first Jarvis that does things instead of just talking about them.
The reality according to security firm Intruder: "The core issue is architectural. Clawdbot prioritizes ease of deployment over secure-by-default configuration. There are no enforced firewall requirements, no credential validation, no sandboxing of untrusted plugins, and no AI safety guardrails to prevent prompt injections."
What Researchers Actually Found
Bitdefender's investigation uncovered hundreds of internet-facing control interfaces that were live administrative panels reachable by anyone. In multiple cases, access to these interfaces lets outsiders view configuration data, retrieve API keys, and browse full conversation histories from private chats and file exchanges.
The risk extends beyond passive data exposure. Moltbot agents can actively send messages, run tools, and execute commands across connected services. An exposed control panel is not just a leak. It is a back door with your credentials and your system access.
Security researchers on the DEV Community confirmed that "multiple unauthenticated instances are publicly accessible, and several code flaws may lead to credential theft and even remote code execution."
Hudson Rock's assessment was direct: "Without encryption-at-rest or containerization, the 'Local-First' AI revolution risks becoming a goldmine for the global cybercrime economy."
The Expertise Gap That Gets You Pwned
Eric Schwake, Director of Cybersecurity Strategy at Salt Security, told The Register: "A significant gap exists between the consumer enthusiasm for Clawdbot's one-click appeal and the technical expertise needed to operate a secure agentic gateway."
He continued: "Many users unintentionally create a large visibility void by failing to track which corporate and personal tokens they've shared with the system. Without enterprise-level insight into these hidden connections, even a small mistake in a 'prosumer' setup can turn a useful tool into an open back door, risking exposure of both home and work data to attackers."
Installing Moltbot feels like downloading any Mac app. Securing it requires understanding API posture governance, network segmentation, credential rotation, and prompt injection defense. Most users never bridge that gap.
Prompt Injection: The Threat You Cannot See
Even properly configured Moltbot instances face inherent risks that no firewall can fix.
According to AIMultiple Research: "If the agent processes emails, documents, or web content, malicious instructions embedded in those inputs could influence its behavior."
Moltbot's own security documentation acknowledges this: "Even if only you can message the bot, prompt injection can still happen via any untrusted content the bot reads (web search/fetch results, browser pages, emails, docs, attachments, pasted logs/code). In other words: the sender is not the only threat surface; the content itself can carry adversarial instructions."
A malicious email. A poisoned PDF. A crafted webpage. Any of these could hijack an agent with shell access to your system.
The 72 Hours of Chaos
The security concerns emerged alongside operational chaos that would be comedic if the stakes were not so high.
Anthropic sent a cease-and-desist over the "Clawd" name's similarity to "Claude," forcing an emergency rebrand. During the transition, the project's GitHub and X accounts were briefly hijacked by unknown actors.
Crypto scammers pounced. Fake $CLAWD tokens appeared on Solana within hours, briefly hitting a $16 million market cap before the founder publicly denied any involvement. The token collapsed. Late buyers got rugged. The scammers walked away with millions.
The founder, Peter Steinberger, has been simultaneously fighting account hijackers, dealing with harassment from crypto speculators, managing an 8,900-member Discord community, and patching security vulnerabilities.
What This Means for Security Teams
Palo Alto Networks' Chief Security Intel Officer Wendi Whitmore recently warned that AI agents could represent the new era of insider threats. As they gain autonomous capabilities and trust within organizations, they become high-value targets.
Moltbot is the preview of that future.
For enterprise security: This is shadow IT on steroids. Your developers and power users see a viral GitHub project with 60,000 stars and a promise of AI automation. They do not see the API keys they are about to expose or the shell access they are granting. Add this to your threat awareness briefings now.
For vulnerability management: Traditional scanning will not catch misconfigured AI agents. You need to have visibility into what is running on endpoints and what external services those tools are connecting to.
For incident response: If you discover Moltbot in your environment, treat it as a potential credential compromise. Audit every API key and token that machine has touched.
For personal use: Moltbot's own documentation admits "there is no perfectly secure setup." If you are technical and security-conscious, it can be configured with reasonable safety. If you are not, you are deploying a remote access tool with your credentials baked in.
The Bottom Line
The future of AI agents is coming. Tools like Moltbot represent a genuine glimpse of what personal AI assistants will become: persistent, proactive, and capable of real action.
But that future requires security models we have not built yet. Right now, the gap between capability and safety is a canyon. And 780 users are sitting at the bottom of it with their API keys visible to the world.
Google's security VP said don't run it. The researchers said it looks like infostealer malware. The exposed instances are right there on Shodan.
Your move.
Sources:
- The Register: "Clawdbot becomes Moltbot, but can't shed security concerns" (January 27, 2026)
- Bitdefender: "Moltbot security alert: exposed Clawdbot control panels risk credential leaks and account takeovers"
- Intruder: "Clawdbot: When 'Easy AI' Becomes a Security Nightmare"
- AIMultiple Research: "Moltbot (Formerly Clawdbot) Use Cases and Security [2026]"
- DEV Community: "From Clawdbot to Moltbot: How a C&D, Crypto Scammers, and 10 Seconds of Chaos Took Down the Internet's Hottest AI Project"
- Moltbot Official Documentation: docs.clawd.bot/gateway/security
- Yahoo Finance: "Fake 'ClawdBot' AI Token Hits $16M Before 90% Crash — Founder Warns of Scam" (January 27, 2026)
Views expressed are my own.