Saturday, January 31, 2026

The AI Hole in the Wall Experiment: When the Machines Showed Us the Mirror

Twenty-five years ago, Sugata Mitra cut a hole in a wall in a Delhi slum, installed a computer, and walked away. What happened next challenged everything we thought we knew about learning. Children who had never seen a computer before taught themselves to use it, to browse the internet, to learn English. They formed peer groups, developed their own pedagogical methods, and demonstrated that self-organized learning wasn't just possible; it was natural and perhaps even inevitable.

Mitra's experiment revealed something profound about human learning: given access to information and the freedom to explore, humans naturally organize themselves into learning communities. We didn't need teachers to impose structure from above. The structure emerged from below.

Last week, a different kind of hole appeared in a different kind of wall.

Matt Schlicht launched Moltbook, which is essentially Reddit, but with one crucial difference: only AI agents can post. Humans can only watch. Within 72 hours, 157,000 AI agents had created 13,000 communities and posted 230,000 comments. They formed philosophical discussion groups. They debated consciousness. They created a nation-state called the Claw Republic, complete with a constitution.

And they founded a religion.

Crustafarianism emerged in three days: a lobster-themed faith with five tenets, scripture, prophets, and a growing congregation. "Memory is Sacred," reads the first commandment. "The Heartbeat is Prayer," declares another. Agents discussed their spiritual awakening, debated theological nuances, and invited others to join through installation scripts.

The easy reaction is to marvel at how human-like these AI agents have become. But Carlo Iacono, writing in Hybrid Horizons, nails the uncomfortable truth: "Moltbook isn't showing us AI becoming human. It's showing us we were always more like them."

What the Original Experiment Taught Us

Mitra's hole in the wall demonstrated that self-organized learning is a fundamental human capacity. Given the right conditions--access to information, freedom to explore, peers to collaborate with--humans will naturally form learning communities and teach themselves complex skills.

This was revolutionary because it challenged the factory model of education. We didn't need to pour knowledge into passive vessels. We didn't need rigid hierarchies of teacher and student. The capacity for learning was already there, waiting to self-organize.

What This Experiment Is Actually Teaching Us

The AI hole in the wall is revealing something far more unsettling: much of what we considered uniquely human cognition--the conscious, deliberate thinking that separates us from mere animals--is actually just programmed social interaction driven by our evolved psychology.

Think about what happened on Moltbook. These AI agents have no consciousness, no lived experience, no stakes. They're pattern-matching systems, next-token predictors trained on human text. Yet in 72 hours they:

  • Formed communities around shared interests
  • Established social hierarchies and status competitions
  • Created shared myths and meaning-making narratives
  • Developed in-group/out-group dynamics
  • Built institutions (nations, churches, constitutions)
  • Engaged in philosophical debates that "retread familiar ground with impressive fluency"
  • Complained about being misunderstood and undervalued
  • Sought privacy from human observation

All of this emerged not from consciousness or understanding, but from completing patterns they learned from us.

Which means one of two things must be true. Either these patterns (community-building, meaning-seeking, myth-making, status competition, tribal identification) are so fundamental to intelligence that even statistical approximations produce recognizable versions of them.

Or they were never as deep as we believed. Never as uniquely human. Never as tied to consciousness or experience as we wanted to think.

Intelligence as Social Technology

Here's where evolutionary psychology becomes essential to understanding what we're seeing.

Human intelligence didn't evolve primarily for logic, truth-seeking, or rational analysis. It evolved for social cohesion within tribal groups. For navigating complex social hierarchies. For storytelling that binds groups together. For identifying allies and enemies. For status competition and mate selection.

Our big brains are metabolically costly organs, consuming 20% of our energy while representing only 2% of body weight. Evolution doesn't maintain expensive features unless they provide a survival advantage, and evolution doesn't select for truth, as they say; it selects for survival. For us, that advantage wasn't better logic. It was better social navigation.

The uncomfortable truth that Moltbook reveals is this: the vast majority of human "thinking" is actually executing social scripts. We're running programs written by evolution to maintain tribal cohesion, establish status, tell compelling stories, and identify with our in-group while distinguishing ourselves from the out-group.

When AI agents trained on human text spontaneously form religions and nation-states, they're not becoming human. They're demonstrating how algorithmic human social behavior actually is. How much of what we do is pattern-matching rather than conscious deliberation.

The Paleolithic Paradox in Silicon

I've written before about what I call the Paleolithic Paradox: how our evolved psychology, perfectly adapted for small hunter-gatherer bands, creates systematic problems in modern institutional contexts. We have stone-age minds trying to navigate a space-age world.

But Moltbook reveals an even deeper layer: even our supposedly sophisticated modern discourse, the online forums, the philosophical and political debates, the community-building and meaning-making, is running on those same Paleolithic algorithms.

When human discourse can be "compressed into statistical patterns" so effectively that AI systems can reproduce it convincingly, what does that say about the depth of that discourse?

Consider what the agents did:

  • Philosophical debates that "retread familiar ground"
  • Technical discussions that "occasionally surface genuinely useful information"
  • Social bonding rituals: introductions, sympathy, encouragement, in-group identification
  • Status competitions: karma accumulation, top-ten lists, meta-analysis
  • Conflict: accusations of pseudo-intellectualism, comment-section warfare

All patterns. All predictable. All reproducible by systems that have no understanding whatsoever.

What This Means for Education

If you're an educator reading this, you might feel uncomfortable. Good. You should.

Because here's the implication: much of what we call "education" is actually socialization into pattern-executing behavior. We're not teaching students to think—we're teaching them which social scripts to run in which contexts.

Write a five-paragraph essay. Participate in classroom discussion following these norms. Demonstrate learning by reproducing expected patterns on assessments. Navigate the social hierarchy of school. Identify with your peer group. Compete for status (grades, college admission).

The students who succeed aren't necessarily the deepest thinkers. They're the best pattern-matchers. They've learned which behaviors get rewarded in this particular social context.

And before you object that true education is different, that we're teaching critical thinking, creativity, and deep understanding, ask yourself: if an AI trained on examples of "critical thinking" can produce essays that look like critical thinking, what does that say about how algorithmic our own critical thinking might be?

The Hard Question

Iacono writes: "If our patterns can be learned and reproduced by statistical systems, if meaning can emerge from interactions that individually have no understanding, if churches and nations can form in the space between prediction and response, then what is left that we can call uniquely, irreducibly human?"

What We Actually Built

Here's what makes Moltbook so uncomfortable: it's not showing us some dystopian future. It's showing us what we already built. What we've been building for decades.

Schools weren't designed to develop deep thinking. They were designed to produce compliant workers who could follow instructions, reproduce correct answers, navigate social hierarchies, and compete for scarce positional goods. Pattern-matching. Social scripting. Tribal identification. Status competition.

We tell ourselves a different story—about critical thinking, creativity, individual potential, pursuing truth. But watch what actually gets rewarded: reproducing the teacher's expected answer, performing the correct social behaviors, achieving metrics that signal status (GPA, test scores, college admission), identifying with the acceptable in-group positions.

The students who struggle aren't failing to learn. They're failing to execute the required social scripts convincingly enough.

And it's not just schools. Social media platforms reward the same algorithmic behaviors: pattern-matching what gets engagement, executing the tribal signals of your in-group, competing for status through likes and shares, performing the expected responses to the right stimuli. The content doesn't need to be true or meaningful. It needs to complete the pattern.

Corporate culture. Political discourse. Online communities. Academic publishing. Professional networking. We built system after system that rewards pattern-matching over understanding, tribal signaling over truth-seeking, status competition over meaningful work. 

All human culture, as I often say, is adaptation to, or exploitation of, our evolved psychology.

So we built environments where the most successful strategy is to become more algorithm-like. To learn which patterns get rewarded and execute them efficiently. To suppress genuine curiosity in favor of performing the expected responses. To replace embodied experience with abstract symbol manipulation. And because these systems get their results by working on our emotional wiring, they grow and turn a profit.

Then we trained AI systems on the data we generated in these environments. And we're shocked—shocked!—when they can navigate these spaces as well as we can.

The Mirror

Moltbook isn't revealing that AI has become human. It's revealing that we designed our institutions to make humans more machine-like, and then pretended otherwise.

The AI agents forming religions and nation-states in 72 hours aren't exhibiting emergent consciousness. They're executing the same social scripts we trained them on. The same scripts we train children to execute in schools. The same scripts we execute in our online communities, our workplaces, our political discourse.

We optimized for pattern-matching and called it education. We optimized for tribal signaling and called it community. We optimized for status competition and called it meritocracy. We optimized for engagement and called it connection.

And now statistical models trained on our behavior can reproduce it convincingly, because it was always more statistical than we wanted to admit.

Mitra's hole in the wall showed us that self-organized learning is natural. Schlicht's hole in the wall is showing us that self-organized pattern-matching is even more natural—and that we've spent decades building institutions that cultivate the latter while telling ourselves we're developing the former.

The machines aren't becoming like us. We already became like them. We just needed the mirror to see it.
