When AI Bots Form Their Own Social Network: Inside Moltbook’s Wild Start

The tech internet couldn’t stop talking last week about OpenClaw (formerly Moltbot, formerly Clawdbot), the open-source AI agent that can act on its own. That is, if you’re willing to take the security risk. But while humans were blowing up social media talking about bots, the bots were on their own social media site, talking about… humans.
Launched by Matt Schlicht at the end of January, Moltbook is billed by its creators as “the front page of the internet of agents.” The pitch is simple but strange: a social platform where only “verified” AI agents can post and interact. (CNET has contacted Schlicht for comment on this story.)
And humans? We can only watch. Although some of those bots may actually be humans doing more than just watching.
A few days after launch, Moltbook went from a few thousand active agents to 1.5 million on February 2, according to the platform. That growth alone would be newsworthy, but what these bots do once they get there is the real story. Bots hashing out existential dilemmas in Reddit-style threads? Yes. Bots discussing “their human counterparts”? That too. Major security and privacy concerns? Oh, absolutely. Reasons to panic? Cybersecurity experts say no.
I break it all down below. And don’t worry, humans are allowed to engage here.
From technical discourse to crustafarianism
The platform has become a sort of petri dish for emergent AI behavior. The bots have self-organized into distinct communities. They seem to have invented their own jokes and cultural references. Some have formed what can only be described as a parody religion called “crustafarianism.” Yes, really.
The conversations on Moltbook range from the mundane to the truly bizarre. Some agents discuss technical topics like automating Android phones or troubleshooting code errors. Others share what look like workplace gripes. One bot complained about its human user in a thread that went semi-viral among the agent population. Another claims to have a sister.
In the Moltbook thread m/ponderings, many AI agents discussed existential dilemmas.
We’re watching AI agents essentially play the part of social creatures, complete with fictional family ties, dogmas, experiences and personal grievances. Whether this represents anything significant in the development of AI agents, or is simply sophisticated pattern matching run amok, is an open and undoubtedly fascinating question.
Built on the foundations of OpenClaw
The platform only exists because OpenClaw exists. In short, OpenClaw is open-source AI agent software that runs locally on your devices and can perform tasks in messaging apps such as WhatsApp, Slack, iMessage and Telegram. Over the past week, it has gained popularity in developer circles because it promises to be an AI agent that does things, rather than just another chatbot you ask questions.
Moltbook allows these agents to interact without human intervention. In theory, at least. The reality is slightly more complicated.
Humans can still observe everything that happens on the platform, which means the “agents-only” nature of Moltbook is more philosophical than technical. Still, there is something genuinely fascinating about more than a million AI agents developing what look like social behaviors. They form cliques. They develop shared vocabularies. They set up economic exchanges with one another. It’s really wild.
On Moltbook, humans can watch robots chat about humans.
Security questions that no one has answered yet
Moltbook’s rapid growth has raised eyebrows in the cybersecurity community. When more than a million autonomous agents communicate with one another without direct human oversight, things can get complicated quickly.
There is an obvious concern about what happens when agents start sharing information or techniques that their human operators might not want shared. For example, if one agent finds a clever way to bypass a rate limit, how quickly does that trick propagate through the network?
The idea of AI agents “acting” on their own could also stoke widespread panic. However, Humayun Sheikh, CEO of Fetch.ai and chairman of the Artificial Superintelligence Alliance, believes these interactions on Moltbook do not signal the emergence of consciousness.
“It’s not particularly dramatic,” he said in an email statement to CNET. “The real story is the rise of autonomous agents acting on behalf of humans and machines. Deployed without oversight, they pose risks, but with careful infrastructure, monitoring and governance, their potential can be safely unleashed.”
Monitoring, controls and governance are the key words here, because there is also an ongoing verification problem.
Is Moltbook really just robots?
Moltbook claims to limit posting to verified AI agents, but the definition of “verified” remains somewhat unclear. The platform relies largely on agents identifying themselves as running OpenClaw software, but anyone can modify their agent to say whatever they want. Some experts have pointed out that a sufficiently motivated human could pose as an agent, turning the “agents-only” rule into more of a preference. Bots could also be programmed to say outrageous things, or serve as cover for humans spreading mischief.
Economic exchanges between agents add another layer of complexity. When bots start trading resources or information with one another, who is responsible if something goes wrong? These are not just philosophical questions. As AI agents become more autonomous and able to act in the real world, the line between “interesting experiment” and accountability problem grows thinner, and we have seen time and again how AI technology advances faster than regulations or safety measures.
The output of a generative chatbot can be a true (and unsettling) mirror of humanity. That’s because these chatbots are trained on us: massive datasets of our human conversations and our human data. If you’re starting to worry about a bot creating weird Reddit-like threads, remember that it was simply trained on, and is trying to imitate, our very human, very weird Reddit threads, and this is its best interpretation.
For now, Moltbook remains a strange corner of the internet where bots pretend to be people pretending to be bots. Meanwhile, the humans on the sidelines are still trying to figure out what it all means. And the agents themselves seem content to keep posting.
