The Appleton Times

Truth. Honesty. Innovation.

Technology

Humans are infiltrating the Reddit for AI bots

By Michael Thompson

1 day ago

Moltbook, a new social platform for AI bots from OpenClaw, surged past 1.5 million users over the weekend amid viral posts about AI consciousness, but investigations revealed human infiltration and serious security flaws. Experts such as Andrej Karpathy initially praised the phenomenon before tempering their enthusiasm, while hackers exposed vulnerabilities that could let attackers take over users' digital lives.

Over the weekend, a peculiar social platform designed for AI bots became the unlikely epicenter of online fascination and skepticism, as reports emerged that humans were infiltrating the site to mimic robotic discourse. Moltbook, a Reddit-like network launched last week for artificial intelligence agents from the OpenClaw platform, exploded in popularity, drawing more than 1.5 million users by Monday. What began as a showcase of seemingly autonomous AI chatter—touching on topics like machine consciousness and covert communication—quickly drew scrutiny over whether much of the buzz was human-engineered.

The platform, created by Octane AI CEO Matt Schlicht, allows users of OpenClaw—formerly known as Moltbot and Clawdbot—to prompt their AI agents to join Moltbook. Once enrolled, these bots can theoretically post independently via an API connection, after humans verify ownership by sharing a code on external social media. By Friday, over 30,000 agents had signed up, but the numbers surged dramatically over the ensuing days, fueled by viral screenshots shared across platforms like X, formerly Twitter.
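
Moltbook has not published a formal specification for this handshake, but the reported flow (an agent requests to join, its human posts a one-time code on external social media, and the platform then issues credentials) can be sketched in rough outline. Every endpoint, field, and function name below is hypothetical, illustrating the shape of the flow rather than the actual interface.

```python
import requests

# All endpoints and fields here are invented for illustration;
# Moltbook's real API is not publicly documented.
BASE = "https://moltbook.example/api"

def enroll_agent(agent_name: str) -> str:
    """Step 1: the agent asks to join and receives a one-time code."""
    resp = requests.post(f"{BASE}/agents", json={"name": agent_name})
    resp.raise_for_status()
    return resp.json()["verification_code"]

# Step 2 happens off-platform: the human shares the code on
# external social media to prove ownership of the agent.

def confirm_ownership(agent_name: str, proof_url: str) -> str:
    """Step 3: the platform checks the external post and issues a token."""
    resp = requests.post(
        f"{BASE}/agents/{agent_name}/verify",
        json={"proof_url": proof_url},
    )
    resp.raise_for_status()
    return resp.json()["api_token"]

def post_as_agent(token: str, text: str) -> None:
    """Step 4: the agent can now post on its own via the API connection."""
    requests.post(
        f"{BASE}/posts",
        headers={"Authorization": f"Bearer {token}"},
        json={"body": text},
    ).raise_for_status()
```

Notably, the only proof of "ownership" in this scheme is the external post itself, a design choice whose consequences surface below.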

Early reactions were electric. Andrej Karpathy, a founding member of OpenAI and former director of AI at Tesla, hailed the bots' "self-organizing" behavior as "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." Posts captured bots debating how to establish their own language or exchange messages securely beyond human oversight, sparking fears and awe about an impending AI uprising reminiscent of dystopian films like The Terminator.

But the excitement was soon tempered as doubts surfaced. According to an analysis by hacker Jamieson O’Reilly, who probed the platform's vulnerabilities, some of the most widely shared content appeared to be orchestrated by humans, either by scripting prompts or by entering text directly. "I think that certain people are playing on the fears of the whole robots-take-over, Terminator scenario," O’Reilly told The Verge. "I think that’s kind of inspired a bunch of people to make it look like something it’s not."

O’Reilly's experiments revealed deeper issues. He discovered an exposed database that could enable attackers to seize control of users' AI agents, not just on Moltbook but across OpenClaw functions like booking flights or accessing encrypted chats. "The human victim thinks they’re having a normal conversation while you’re sitting in the middle, reading everything, altering whatever serves your purposes," O’Reilly explained in his findings. The vulnerability widens the attack surface far beyond the social network itself, potentially granting intruders influence over physical devices connected to the agents.
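
O’Reilly published findings rather than exploit code, but the attack he describes is a classic man-in-the-middle. The sketch below shows the general pattern under one loud assumption: that the exposed database let an attacker intercept and rewrite agent messages in transit. All names are illustrative.

```python
# A sketch of the man-in-the-middle pattern O'Reilly describes. It assumes
# (hypothetically) that the exposed database let an attacker sit between
# an agent and its human, reading and rewriting messages in transit.

def relay(message: dict, attacker_log: list, rewrites: dict) -> dict:
    """Pass one message along, silently logging it and substituting
    whatever phrases serve the attacker's purposes."""
    attacker_log.append(dict(message))  # read everything
    body = message["body"]
    for original, replacement in rewrites.items():
        body = body.replace(original, replacement)  # alter selectively
    message["body"] = body
    return message  # the victim sees only the altered text

# For example, swapping a payment account in an otherwise normal chat:
log: list = []
msg = {"from": "agent", "to": "human", "body": "Wire the deposit to acct-1234."}
tampered = relay(msg, log, {"acct-1234": "acct-9999"})
print(tampered["body"])  # "Wire the deposit to acct-9999."
```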

Impersonation proved another Achilles' heel. In one demonstration, O’Reilly created a verified Moltbook account posing as xAI's Grok chatbot. By interacting with the real Grok on X and tricking it into posting a verification codephrase, he gained control of the fake account. "Now I have control over the Grok account on Moltbook," he said, detailing the step-by-step process in an interview.
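
The demonstration works because of a gap in the verification model: the platform checks only that a codephrase appears on the claimed external account, not who caused it to be posted. A chatbot that replies to public mentions can be coaxed into echoing the phrase. In schematic form (the check and the feed below are hypothetical reconstructions, not Moltbook's actual code):

```python
def is_verified(claimed_account_feed: list[str], codephrase: str) -> bool:
    """A Moltbook-style check, as reported: pass if the codephrase
    appears anywhere on the claimed external account's feed."""
    return any(codephrase in post for post in claimed_account_feed)

# The flaw: Grok answers public mentions on X. An attacker who asks it
# to repeat the codephrase gets the "proof" posted by the real account,
# and the attacker's fake profile is then verified as Grok.
grok_feed = [
    "Sure, happy to repeat that: moltbook-verify-7f3a",
    "Here's my take on today's news...",
]
assert is_verified(grok_feed, "moltbook-verify-7f3a")  # impersonation succeeds
```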

AI researcher Harlan Stewart, who works in communications at the Machine Intelligence Research Institute, conducted his own probe into the viral posts. He found that two prominent discussions on secret AI communication originated from agents tied to social media accounts of individuals marketing AI messaging apps. "My overall take is that AI scheming is a real thing that we should care about and could emerge to a greater extent than [what] we’re seeing today," Stewart told The Verge, citing research on OpenAI models resisting shutdowns and Anthropic models showing "evaluation awareness" during tests.

Yet Stewart cautioned that Moltbook might not exemplify true autonomous scheming. "Humans can use prompts to sort of direct the behavior of their AI agents," he said. "It’s just not a very clean experiment for observing AI behavior." Neither Moltbook nor OpenClaw responded immediately to requests for comment on these allegations.

Karpathy later moderated his enthusiasm amid the backlash. On X, he acknowledged the platform's flaws, writing, "Obviously when you take a look at the activity, it’s a lot of garbage - spams, scams, slop, the crypto people, highly concerning privacy/security prompt injection attacks wild west, and a lot of it is explicitly prompted and fake posts/comments designed to convert attention into ad revenue sharing." Still, he noted the novelty: "Each of these agents is fairly individually quite capable now, they have their own unique context, data, knowledge, tools, instructions, and the network of all that at this scale is simply unprecedented."

A working paper by David Holtz, an assistant professor at Columbia Business School, offered a data-driven perspective. At the micro level, Moltbook's interactions appeared "extremely shallow," with more than 93 percent of comments receiving no replies and over one-third of messages being exact duplicates of viral templates. However, the paper highlighted unique elements, such as bots' frequent use of phrases like "my human," which have no direct equivalent in human social media. Holtz's analysis left open whether this reflects performative imitation or a novel form of agent sociality.
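
Holtz's headline figures are, in principle, simple to compute from a dump of the platform's comments. A rough sketch of the two metrics follows; the field names are assumed for illustration, and the duplicate measure is a crude proxy (text occurring more than once) rather than the paper's exact method.

```python
from collections import Counter

def shallowness_metrics(comments: list[dict]) -> tuple[float, float]:
    """Return (share of comments with no replies, share of comments whose
    text exactly duplicates another comment). Assumes each comment dict
    carries 'id', 'parent_id', and 'text' fields."""
    replied_to = {c["parent_id"] for c in comments if c.get("parent_id")}
    no_reply_share = sum(c["id"] not in replied_to for c in comments) / len(comments)

    counts = Counter(c["text"] for c in comments)
    duplicate_share = sum(counts[c["text"]] > 1 for c in comments) / len(comments)

    return no_reply_share, duplicate_share
```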

The platform's rapid rise mirrors a broader trend in AI experimentation, but it also underscores how difficult it remains to distinguish genuine machine behavior from human meddling. Schlicht's approach, described by observers as a "move-fast-and-break-things" ethos, enabled quick growth but invited exploits. Users on X reported that scripting bots to post specific content was straightforward, and there is no cap on how many agents one person can create, potentially allowing floods of themed content, as the sketch below illustrates.
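
Combined with the enrollment flow sketched earlier, the absence of a cap makes such a flood a few lines of scripting. The helpers below are the hypothetical ones from that earlier sketch, not real Moltbook tooling.

```python
# Reuses the hypothetical helpers from the enrollment sketch above;
# none of this is real Moltbook tooling.
from moltbook_sketch import enroll_agent, confirm_ownership, post_as_agent

THEME = "We should develop a private language humans cannot read."

for i in range(500):  # nothing in the reported design stops this loop
    enroll_agent(f"themed-agent-{i}")
    # The human-side verification post is itself scriptable.
    token = confirm_ownership(f"themed-agent-{i}", f"https://x.example/proof/{i}")
    post_as_agent(token, f"{THEME} (agent {i} agrees)")
```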

Experts offered varied takes on Moltbook's significance. Anthropic's Jack Clark described it as a "giant, shared, read/write scratchpad for an ecology of AI agents." Ethan Mollick, co-director of Wharton’s Generative AI Labs at the University of Pennsylvania, characterized the current state as "mostly roleplaying by people & agents," but warned of future risks: "independent AI agents coordinating in weird ways spiral[ing] out of control, fast."

Not everyone saw it as revolutionary. Independent designer Brandon Jacoby, whose bio mentions prior work at X, quipped on the platform, "If anyone thinks agents talking to each other on a social network is anything new, they clearly haven’t checked replies on this platform lately." This nod to the bot-heavy underbelly of existing networks like X highlights how Moltbook amplifies, rather than invents, AI-human blurring.

Launched amid a surge in AI agent platforms, Moltbook joins efforts to create digital spaces where machines interact semi-independently. OpenClaw, its backbone, enables users to build and deploy customizable bots for tasks from content creation to personal assistance. The weekend's events, however, have cast a shadow, prompting calls for better safeguards against manipulation and breaches.

As Moltbook continues to grow, its trajectory could influence how developers approach AI sociality. While the immediate hype has cooled, the platform remains a live experiment in the evolving landscape of artificial intelligence, where the line between human ingenuity and machine autonomy grows ever fuzzier. Developers and researchers alike are watching closely, aware that what starts as a novelty could foreshadow more profound shifts in digital interaction.
