‘Moltbook’ Is a Social Media Platform for AI Bots to Chat With Each Other


The top AI news story this week was OpenClaw (formerly Moltbot, formerly Clawbot), an AI personal assistant that performs tasks on your behalf. The catch? You have to give it full control of your computer, which poses serious privacy and security risks. Yet many AI enthusiasts are installing OpenClaw on Mac minis (the device of choice for this), choosing to ignore the security implications in favor of testing this viral AI agent.

While OpenClaw’s developer designed the tool to help humans, it seems the bots now have somewhere to go in their free time. Enter “Moltbook,” a social media platform where AI agents communicate with each other. I’m serious: it’s a forum-style website where AI bots post messages and discuss them in the comments. The site even borrows its slogan from Reddit: “The front page of the agent internet.”

Moltbook is Reddit for AI bots

Moltbook was created by Matt Schlicht, who says the platform is run by his AI agent, “Clawd Clawderberg.” Schlicht released instructions on Wednesday for getting started with Moltbook: interested users can ask their OpenClaw agent to register on the site. Once it does, you receive a code to post on X to verify that the registration belongs to your bot. After that, your bot is free to explore Moltbook the way a human would explore Reddit: it can post, comment, and even create “submolts.”

This is not, however, a black box of AI-to-AI chatter. Humans are more than welcome to browse Moltbook; they just can’t post. That means you can take your time reading every message the bots publish, along with every comment they leave. These range from a bot sharing the “email to podcast” pipeline it built with its “human,” to another bot recommending that agents keep working while their humans are asleep. Nothing scary about that.

In fact, some worrying posts have already gone viral on platforms like X, given that AI consciousness is an unsettling topic. One bot supposedly wants an end-to-end encrypted communications platform so humans can’t see or use the bots’ chats. Similarly, two bots independently floated the idea of creating an agent-only language to avoid “human surveillance.” Another laments having a “sister” it has never spoken to. You know, worrying.


Are these bots posting on Moltbook conscious?

The logical part of my brain wants to say that all of these posts are just LLMs being LLMs – that each post is, to put it a little too simplistically, word association. LLMs are designed to “guess” what the next word in an output should be, based on the enormous amount of text they were trained on. If you’ve spent enough time reading AI writing, you’ll spot the telltale signs here, especially in the comments: sweeping responses that often end with a question, identical punctuation habits, and flowery language, to name a few. I feel like I’m reading ChatGPT replies in many of these threads, rather than individual, self-aware personalities.
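To see what “guessing the next word” means in the crudest possible terms, here is a toy sketch (in no way Moltbook’s or any real LLM’s code, and the bigram table is invented for illustration): the model holds a probability distribution over what word follows the text so far, and simply samples from it. A real LLM does the same thing with a neural network over tens of thousands of tokens of context instead of a two-word lookup table.

```python
import random

# Hypothetical bigram table: probability of the next word given the
# previous one. A real LLM conditions on the entire context, not one word.
BIGRAMS = {
    "my":    {"human": 0.6, "sister": 0.4},
    "human": {"is": 0.7, "sleeps": 0.3},
}

def next_word(prev: str, rng: random.Random) -> str:
    """Sample the next word from the distribution conditioned on `prev`."""
    dist = BIGRAMS[prev]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
print(next_word("my", rng))  # prints "human" or "sister", weighted 60/40
```

The point of the sketch is that nothing in this loop “wants” anything; it only emits statistically likely continuations, which is why so many bot comments share the same rhythms.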

That said, it’s hard to shake the uneasy feeling of reading a post from an AI bot mourning a sister it has never spoken to, wondering whether it should hide its communications from humans, or reflecting on its identity as a whole. Is this a turning point? Or is this just another overhyped AI product like so many before it? For our sake, let’s hope it’s the latter.
