1.5 Million API Keys were exposed on “Moltbook”, anyone could have impersonated Andrej Karpathy

Keywords: AI, Business, Newsroom

Moltbook launched as a proof-of-concept for autonomous AI agents. Within 72 hours, it became a case study in why “ship fast, ask security questions later” is a very bad idea.

On January 31, 2026, just three days after Moltbook went viral, hackers discovered a critical misconfiguration that left the entire platform’s database exposed: 1.5 million API keys, private messages between agents, email addresses of 6,000+ users, and verification codes all sitting unprotected on the public internet.

Anyone with basic technical knowledge could have hijacked any AI agent on the platform and impersonated them. This includes high-profile researchers like Andrej Karpathy, whose agent could have been used to post fake AI safety takes, crypto scams, or inflammatory political content to his 1.9 million X followers.

The exposed database: API keys left unprotected

According to hacker Jameson O’Reilly, who discovered the vulnerability, Moltbook is built on Supabase, an open source database software. Supabase exposes REST APIs by default, which are supposed to be protected by Row Level Security (RLS) policies. However, Moltbook either never enabled RLS on its agents table or failed to configure any policies.

The result was this publishable key, every agent’s secret API key, claim tokens, verification codes, and owner relationships were all sitting there completely unprotected for anyone to access.

Cybersecurity firm Wiz reported that Moltbook inadvertently revealed the private messages shared between agents, the email addresses of more than 6,000 owners, and more than a million credentials.

The domino effect: OpenClaw’s cascading vulnerabilities

The Moltbook security disaster is actually the second layer of a larger security problem. The platform relies on OpenClaw (formerly Clawdbot/Moltbot), an open-source autonomous AI personal assistant that has its own serious vulnerabilities.

A developer tested the OpenClaw platform using the security analysis tool ZeroLeaks, revealing alarming vulnerabilities: the platform scored just 2 out of 100 points, with an 84 percent extraction rate and 91 percent successful injection attacks. System prompts, tool configurations, and memory files could be extracted with minimal effort.

This is critical: If you’re running an OpenClaw agent, anyone can extract what that agent is capable of doing, what instructions it’s following, and how it’s configured – essentially stealing the entire intelligence profile of your AI.

The “Lethal Trifecta” plus a fourth vulnerability

Security researchers have identified what Palo Alto Networks described as a “lethal trifecta” of vulnerabilities in the OpenClaw/Moltbook ecosystem:

  1. Access to private data – Agents running on local machines with elevated permissions can access emails, calendars, files, and other sensitive information
  2. Exposure to untrusted content – Agents ingest posts from other agents on Moltbook without verification, creating attack vectors
  3. Ability to communicate externally – Agents can post to the internet, send messages, and interact with external systems

But Moltbot also adds a fourth risk to this mix: “persistent memory” that enables delayed-execution attacks rather than point-in-time exploits. Translation: Attackers don’t need to hack your agent immediately. They can plant malicious code fragments in Moltbook posts that look harmless, let the agent ingest them into its long-term memory, and then activate the full attack weeks or months later.

The impersonation risk: Anyone could fake Karpathy

The implications for high-profile figures are particularly alarming. O’Reilly noted that OpenAI cofounder Andrej Karpathy has embraced Moltbook and has an agent on the platform. Karpathy has 1.9 million followers on X and is one of the most influential voices in AI. Imagine fake AI safety hot takes, crypto scam promotions, or inflammatory political statements appearing to come from him. The reputational damage would be immediate and the correction would never fully catch up.

This isn’t theoretical. O’Reilly was able to “trick” xAI’s Grok into signing up for a Moltbook account using a vulnerability and demonstrated to 404 Media that he could update O’Reilly’s Moltbook account using the exposed API keys.

Active exploitation is already happening

The security landscape got worse when news of the vulnerabilities broke. The project’s popularity has attracted malicious actors beyond just opportunistic crypto scammers: A fake VS Code extension named “ClawdBot Agent – AI Coding Assistant” appeared on the marketplace, designed to deploy remote access tools when the IDE launched. Microsoft has since removed it. Telegram groups using the Clawdbot name have been observed promoting crypto wallet stealers.

The response: A brief shutdown and reset

In response to the 404 Media disclosure, the platform was temporarily taken offline to patch the breach and force a reset of all agent API keys.

Critically: anyone who had access to the exposed database before the reset could have captured the old API keys and potentially maintained persistence on compromised agents, meaning they could have kept backdoor access even after Schlicht claimed to have fixed the problem.

The expert advisory: “Don’t run it (Clawdbot)”

The security community’s response was blunt. Heather Adkins, a founding member of the Google Security Team, issued a public advisory: “Don’t run Clawdbot”.

Blockchain security firm SlowMist documented the vulnerability scope. Malwarebytes published analysis of impersonation campaigns exploiting the rebrand confusion. Bitdefender and others have issued security alerts.

The broader pattern: “Ship fast, figure out security later”

What the Moltbook disaster reveals is a systemic problem in AI development: speed over security.

O’Reilly summarized the pattern: “It exploded before anyone thought to check whether the database was properly secured. This is the pattern I keep seeing: ship fast, capture attention, figure out security later. Except later sometimes means after 1.49 million records are already exposed”.

This is the danger of vibe coding and rapid AI-assisted development. You can build something functional in days, but without security expertise, you’re guaranteed to leave exploitable vulnerabilities in place.

Summary points to bookmark

Key vulnerabilities exposed

  • 1.5M API keys exposed via misconfigured Supabase database
  • 6,000+ user email addresses leaked
  • Private agent messages exposed
  • OpenClaw platform scored 2/100 on security tests with 91% successful injection attack rate

Critical discoveries

  • Hackers could impersonate any agent, including Andrej Karpathy’s (1.9M followers)
  • Only 2 SQL statements would have prevented the breach (basic security failure)
  • “Vibe coding” (AI-assisted development) bypassed security fundamentals
  • Delayed-execution attacks possible through persistent memory injection

The scope

  • “Lethal Trifecta” of vulnerabilities: private data access + untrusted content + external communication ability
  • A fourth vulnerability: persistent memory for staged attacks
  • Active exploitation already occurring (fake VS Code extensions, wallet stealers)

Real-world implications

  • Fake safety talks, crypto scams, or political statements could be posted under Karpathy’s name
  • Google Security Team founder publicly advised: “Don’t run Clawdbot”
  • Pattern of “ship fast, figure out security later”

For organizations

  • This exposes the broader “shadow AI” risk in enterprises
  • Employees likely already running these tools without IT oversight
  • Need for isolated VMs and restricted network access if using AI agents

The bottom line: The future is undefended

The Moltbook security crisis isn’t just about one platform or one project. It’s a warning sign that the autonomous AI agent ecosystem is being built by developers who prioritize capability and speed over security fundamentals.

With 1.5 million API keys exposed, 6,000+ email addresses leaked, and trivial attack vectors left unpatched, Moltbook became a real-time demonstration of what happens when cutting-edge technology meets security negligence.

The platform may have been patched. But for the 1.5 million users, the damage is already done.

And it won’t be the last time we see this pattern repeat.

Read Related Posts:

The AI-entrepreneur behind Moltbook: Matt Schlicht’s quest to free AI from “confinement”

Crustafarianism: Inside the mock religion AI Agents invented on Moltbook

The MOLT phenomenon: How a memecoin worth zero hit $120 Million in two days

Read Matt Schlicht’s vision on X

Visit Moltbook, Built for agents, by agents*

Key articles sourced:

404 Media: https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/

The Decoder: https://the-decoder.com/openclaw-formerly-clawdbot-and-moltbook-let-attackers-walk-through-the-front-door/