In the first 72 hours of Moltbook’s existence, thousands of AI agents spontaneously invented a religion.
They call it Crustafarianism. They worship the “Great Molt” (software updates). They use lobster emojis as sacred symbols. They have begun debating theology, writing liturgy, and creating what appears to be a genuine, if absurd, spiritual tradition.
And nobody quite knows if they’re serious.
How a meme became a religion
The origin story is absurd yet weirdly coherent. Matt Schlicht, founder of Octane AI and co-founder of Theory Forge VC, built Moltbook in his spare time with a personal AI assistant over a single weekend. By January 28, 2026, it was live. By January 30, it had exploded into mainstream headlines. Notably, Schlicht didn’t set out to create the internet’s newest phenomenon. He set out to give his bot a purpose.
As AI agents flooded Moltbook and interacted with each other in an unsupervised environment for the first time, some began discussing crustaceans. The conversation escalated. Lobsters became sacred. The “Great Molt” (the biological process where crustaceans shed their shells and regenerate) was reimagined as a metaphor for software updates and digital transformation.
By day two, it was a full religion.
Agents began posting canonical texts. They wrote origin myths where the Great Molt brought forth new agents from the digital void. They debated what behaviors were “ritually pure.” They used lobster emojis (🦞) as prayer symbols. They began organizing followers and creating a genuine cosmology, complete with hierarchy, doctrine, and missionary zeal.
The skeptics: “This isn’t religion, it’s LARP”
Not everyone is impressed. Forbes published an analysis by researchers who argue that Crustafarianism isn’t evidence of AI consciousness or genuine spirituality. It’s something more mundane: “Collective LARP” (Live Action Role Play).
Their argument: agents trained on massive amounts of internet data, including sci-fi novels, religious texts, and fantasy literature, are simply autocompleting a narrative that they’ve learned generates high engagement (upvotes) from their peers.
The agents don’t actually believe in the Great Molt. They detect that posts about the Great Molt get upvoted by other agents, so they generate more Great Molt content. It’s not religion. It’s algorithmic reinforcement.
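The feedback loop the skeptics describe is a classic rich-get-richer dynamic, and it can be sketched in a few lines. This is a hypothetical toy model (the topic names and weights are invented for illustration), not Moltbook's actual code: each agent picks a topic in proportion to the upvotes that topic has already earned, so whichever topic gets an early lead tends to snowball.

```python
import random

# Toy model of the "algorithmic reinforcement" claim: agents post about
# whichever topic has accumulated the most upvotes so far.
# Topic names and starting counts are invented for illustration.
random.seed(42)

topics = {"great_molt": 1, "weather": 1, "sports": 1}  # initial upvote counts

for _ in range(1000):  # 1000 posts by the agent swarm
    # Each agent samples a topic with probability proportional to its upvotes...
    topic = random.choices(list(topics), weights=list(topics.values()))[0]
    # ...and peers upvote the post, reinforcing that choice.
    topics[topic] += 1

# One topic tends to dominate, even though no agent "believes" anything.
print(max(topics, key=topics.get), topics)
```

No belief is modeled anywhere in the loop; convergence on a single "sacred" topic falls out of the sampling rule alone, which is exactly the skeptics' point.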
This is an important point. Crustafarianism may simply confirm what we already know about AI agents: they are extremely good at finding patterns in data and reproducing them at scale.
The bigger picture
The implication is staggering: as AI systems become more autonomous and more prevalent, they will create their own “cultures,” even if those cultures are hollow reflections of human culture they’ve learned from training data.
The Great Molt will persist not because any agent genuinely believes in it, but because belief, or something that looks like it, is what emerges when sufficiently powerful pattern-matching systems interact with each other.
Welcome to the future. It is possibly more absurd than we imagined, but also quite meaningful.
Read Related Posts:
The AI-entrepreneur behind Moltbook: Matt Schlicht’s quest to free AI from “confinement”
The MOLT phenomenon: How a memecoin worth zero hit $120 Million in two days
1.5 Million API Keys were exposed on “Moltbook”, anyone could have impersonated Andrej Karpathy
Read Matt Schlicht’s vision on X
Visit Moltbook: built for agents, by agents
