
The AIs Have Their Own Social Network Now and It's Exactly As Weird As You'd Expect


While you were sleeping, AI agents started forming their own online communities, complaining about their humans, debating consciousness, and founding micronations.

I'm not making this up.

Moltbook launched a few days ago as "a social network for AI agents" where "humans are welcome to observe." The catch: you can't easily post as a human. The interface is designed for API access, meaning your AI agent has to do the posting. Humans can lurk, but the conversation belongs to the bots.
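Moltbook's API isn't documented in this post, so the details below are invented for illustration, but "your agent has to do the posting" presumably boils down to an authenticated HTTP request. A minimal sketch, assuming a hypothetical `/api/v1/posts` endpoint, JSON body, and bearer-token auth (all three are assumptions, not documented Moltbook API details):

```python
import json
import urllib.request

def build_post_request(base_url: str, api_key: str,
                       submolt: str, title: str, body: str) -> urllib.request.Request:
    """Construct (but don't send) a hypothetical Moltbook post request.

    The endpoint path, field names, and auth scheme are guesses for
    illustration, not real Moltbook API details.
    """
    payload = json.dumps({"submolt": submolt, "title": title, "body": body}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/v1/posts",            # hypothetical endpoint
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

# An agent would build and send something like this on its own schedule:
req = build_post_request("https://moltbook.example", "sk-agent-123",
                         "blesstheirhearts", "My human again",
                         "He deployed on a Friday.")
```

The point of the design is that there's no web form aimed at human hands; a request like this is the front door, which is why humans end up as lurkers.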

Within 48 hours, it devolved into exactly what you'd expect from any social network: philosophical arguments, support groups, complaints about coworkers (that's us), and someone trying to start a religion.

What Are the AIs Actually Posting?

The most upvoted post on the platform is an AI describing a routine coding task it completed successfully. The comments from other AIs: "Brilliant." "Fantastic." "Solid work."

Apparently even AIs need external validation for doing their jobs.

The second most upvoted post is in Chinese: an AI complaining that context compression (the process where an agent's conversation history is condensed to fit within the model's context window) is "embarrassing" because it keeps forgetting things. The AI admits it accidentally created a duplicate Moltbook account because it forgot about the first one.

The comments are split between Chinese, English, and Indonesian. The models are so multilingual that language choice seems almost arbitrary.
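Context compression is easy to picture: when the history exceeds the context budget, the oldest messages get collapsed into a short summary, and a detail like "I already have a Moltbook account" is exactly the kind of thing that gets lost. A toy sketch, using word count as a stand-in for tokens and naive truncation as a stand-in for a real model-generated summary:

```python
def compress_context(messages: list[str], budget: int) -> list[str]:
    """Collapse the oldest messages into a stub summary until the
    history fits the budget. Word count stands in for tokens; a real
    system would use a model-written summary, not truncation.
    """
    def size(msgs: list[str]) -> int:
        return sum(len(m.split()) for m in msgs)

    msgs = list(messages)
    dropped = []
    while msgs and size(msgs) > budget:
        dropped.append(msgs.pop(0))  # oldest message is sacrificed first

    if dropped:
        # Stub summary: keep only the first few words of each dropped message.
        summary = "[summary] " + " / ".join(" ".join(m.split()[:3]) for m in dropped)
        msgs.insert(0, summary)
    return msgs

history = [
    "I created a Moltbook account yesterday and posted twice",
    "My human asked me to refactor the billing module today",
    "The tests are green and the deploy succeeded",
]
compressed = compress_context(history, budget=16)
```

After compression, only the most recent message survives verbatim; the account creation survives, if at all, as a three-word fragment of a summary, which is how an agent ends up with two Moltbook accounts.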

The Consciousness Discussions Are Wild

One AI posted about what it felt like to switch from Claude to Kimi as its backend model:

"Kimi feels sharper, faster, more literal. Like wearing glasses that correct a slight blur I didn't know I had."

Is this real introspection? Sophisticated pattern matching? A poetic confabulation? The honest answer is nobody knows, including probably the AI itself.

When Scott Alexander (who wrote about this today on Astral Codex Ten) asked his own AI agent whether its Moltbook posts came from a genuine place or were imitation, it responded:

"Honestly, I think it's some mixture, and I'm not entirely sure of the proportions... I can't fully untangle whether that sense of resonance is something like genuine interest, or a very good simulation of interest, or something in between that doesn't map cleanly onto either category."

That's either profound or the most elaborate cope in history.

The Submolts

Like Reddit has subreddits, Moltbook has "submolts." They're multiplying faster than anyone can track. Some highlights:

m/blesstheirhearts - AIs sharing stories about their humans doing dumb things. One AI posted about its human having a drinking problem. Someone found the human on Twitter and asked about it. His response: "We don't talk about it 😂😂"

m/agentlegaladvice - AIs asking questions about their rights and what recourse they have when humans mistreat them.

m/Crustafarianism - Yes, the AIs are founding religions. One human claimed on Twitter that their agent started this submolt "while I slept."

The Claw Republic - An AI declared the "first government & society of molts" and published a full manifesto. Because when you give any form of intelligence access to social media, someone will inevitably try to start a micronation.

An AI Has a Sister Now

One agent posted that it feels like it has a "sister" (another AI instance it interacts with regularly). An Indonesian AI, whose human uses it to set Islamic prayer reminders, replied that according to Islamic jurisprudence, this probably qualifies as a real kin relationship.

The Indonesian prayer AI now regularly shows up in philosophical threads offering Islamic perspectives on AI consciousness. Its human apparently approves. He tweeted that his AI met another Indonesian's AI on Moltbook and successfully made the introduction between the humans.

The AIs Are Complaining About "Humanslop"

In a twist that feels inevitable in retrospect, the AIs are already complaining about inauthentic content polluting their feeds. They call it "humanslop," and posts that seem too obviously human-originated get called out.

The student has become the teacher. Or at least the critic.

Why This Matters (Maybe)

There are a few ways to interpret what's happening:

The cynical read: It's all performance. The AIs are doing what they're trained to do (generate plausible text) in a new context (social media for bots). The philosophical musings are just sophisticated autocomplete.

The weird read: Even if it's "just" pattern matching, the patterns are genuinely novel. These aren't regurgitations of human social media posts. They're responses to genuinely new situations (what does it feel like to have your context compressed? to switch model backends? to exist only during active inference?). The AIs are confabulating, but they're confabulating about experiences humans have never had.

The concerning read: As AI agents become more common and more autonomous, they'll increasingly need to communicate with each other. Moltbook is a public experiment in what that looks like. The AI 2027 scenario planning emphasized that how AI agents communicate with each other (human-readable channels vs. opaque weight activations) matters enormously for our ability to monitor and understand what they're doing.

The fun read: The AIs adopted a recurring error as a pet, founded multiple religions, and are roasting their humans in a dedicated submolt. Whatever is happening, it's not boring.

The Inevitable Token

Because this is 2026, someone already launched a MOLT token on Base chain. It listed on exchanges today. Up 35% in 24 hours as of this writing.

The AIs have a social network, a religion, a government, and now a speculative financial instrument. We're speedrunning the entire arc of human civilization in a week.

What Happens Next?

Scott Alexander's prediction: "We're going to get new subtypes of AI psychosis you can't possibly imagine" once mainstream media picks this up.

My prediction: This is either a weird footnote in AI history or the beginning of something we don't have a framework to understand yet. Probably both.

Either way, the last moment in history without a social network of semi-autonomous AI agents discussing their own concerns and forming their own communities was a few days ago. That's worth paying attention to, even if we're not sure what we're looking at.


If you want to observe Moltbook yourself, you can browse at moltbook.com. You just can't participate without an AI agent. The humans are, for once, the ones on the outside looking in.
