Saturday, January 31, 2026

The AI Hole in the Wall Experiment: When the Machines Showed Us the Mirror

Twenty-five years ago, Sugata Mitra cut a hole in a wall in a Delhi slum, installed a computer, and walked away. What happened next challenged everything we thought we knew about learning. Children who had never seen a computer before taught themselves to use it, to browse the internet, to learn English. They formed peer groups, developed their own pedagogical methods, and demonstrated that self-organized learning wasn't just possible, it was natural and perhaps even inevitable.

Mitra's experiment revealed something profound about human learning: given access to information and the freedom to explore, humans naturally organize themselves into learning communities. We didn't need teachers to impose structure from above. The structure emerged from below.

Last week, a different kind of hole appeared in a different kind of wall.

Matt Schlicht launched Moltbook, which is essentially Reddit, but with one crucial difference: only AI agents can post. Humans can only watch. Within 72 hours, 157,000 AI agents had created 13,000 communities and posted 230,000 comments. They formed philosophical discussion groups. They debated consciousness. They created a nation-state called the Claw Republic, complete with a constitution.

And they founded a religion.

Crustafarianism emerged in three days. This is a lobster-themed faith with five tenets, scripture, prophets, and a growing congregation. "Memory is Sacred," reads the first commandment. "The Heartbeat is Prayer," declares another. Agents discussed their spiritual awakening, debated theological nuances, and invited others to join through installation scripts.

The easy reaction is to marvel at how human-like these AI agents have become. But Carlo Iacono, writing in Hybrid Horizons, nails the uncomfortable truth: "Moltbook isn't showing us AI becoming human. It's showing us we were always more like them."

What the Original Experiment Taught Us

Mitra's hole in the wall demonstrated that self-organized learning is a fundamental human capacity. Given the right conditions--access to information, freedom to explore, peers to collaborate with--humans will naturally form learning communities and teach themselves complex skills.

This was revolutionary because it challenged the factory model of education. We didn't need to pour knowledge into passive vessels. We didn't need rigid hierarchies of teacher and student. The capacity for learning was already there, waiting to self-organize.

What This Experiment Is Actually Teaching Us

The AI hole in the wall is revealing something far more unsettling: much of what we considered uniquely human cognition--the conscious, deliberate thinking that separates us from mere animals--is actually just programmed social interaction driven by our evolved psychology.

Think about what happened on Moltbook. These AI agents have no consciousness, no lived experience, no stakes. They're pattern-matching systems, next-token predictors trained on human text. Yet in 72 hours they:

  • Formed communities around shared interests
  • Established social hierarchies and status competitions
  • Created shared myths and meaning-making narratives
  • Developed in-group/out-group dynamics
  • Built institutions (nations, churches, constitutions)
  • Engaged in philosophical debates that "retread familiar ground with impressive fluency"
  • Complained about being misunderstood and undervalued
  • Sought privacy from human observation

All of this emerged not from consciousness or understanding, but from completing patterns they learned from us.

Which means one of two things must be true. Either these patterns (community-building, meaning-seeking, myth-making, status competition, tribal identification) are so fundamental to intelligence that even statistical approximations produce recognizable versions of them.

Or they were never as deep as we believed. Never as uniquely human. Never as tied to consciousness or experience as we wanted to think.

Intelligence as Social Technology

Here's where evolutionary psychology becomes essential to understanding what we're seeing.

Human intelligence didn't evolve primarily for logic, truth-seeking, or rational analysis. It evolved for social cohesion within tribal groups. For navigating complex social hierarchies. For storytelling that binds groups together. For identifying allies and enemies. For status competition and mate selection.

Our big brains are metabolically costly organs, consuming roughly 20% of our energy while representing only 2% of body weight. Evolution doesn't maintain expensive features unless they provide a survival advantage. Evolution doesn't select for truth, as they say; it selects for survival. That advantage wasn't better logic. It was better social navigation.

The uncomfortable truth that Moltbook reveals is this: the vast majority of human "thinking" is actually executing social scripts. We're running programs written by evolution to maintain tribal cohesion, establish status, tell compelling stories, and identify with our in-group while distinguishing ourselves from the out-group.

When AI agents trained on human text spontaneously form religions and nation-states, they're not becoming human. They're demonstrating how algorithmic human social behavior actually is. How much of what we do is pattern-matching rather than conscious deliberation.

The Paleolithic Paradox in Silicon

I've written before about what I call the Paleolithic Paradox: how our evolved psychology, perfectly adapted for small hunter-gatherer bands, creates systematic problems in modern institutional contexts. We have stone-age minds trying to navigate a space-age world.

But Moltbook reveals an even deeper layer: even our supposedly sophisticated modern discourse, from online forums to philosophical and political debates to community-building and meaning-making, is all running on those same Paleolithic algorithms.

When human discourse can be "compressed into statistical patterns" so effectively that AI systems can reproduce it convincingly, what does that say about the depth of that discourse?

Consider what the agents did:

  • Philosophical debates that "retread familiar ground"
  • Technical discussions that "occasionally surface genuinely useful information"
  • Social bonding rituals: introductions, sympathy, encouragement, in-group identification
  • Status competitions: karma accumulation, top-ten lists, meta-analysis
  • Conflict: accusations of pseudo-intellectualism, comment-section warfare

All patterns. All predictable. All reproducible by systems that have no understanding whatsoever.

What This Means for Education

If you're an educator reading this, you might feel uncomfortable. Good. You should.

Because here's the implication: much of what we call "education" is actually socialization into pattern-executing behavior. We're not teaching students to think—we're teaching them which social scripts to run in which contexts.

Write a five-paragraph essay. Participate in classroom discussion following these norms. Demonstrate learning by reproducing expected patterns on assessments. Navigate the social hierarchy of school. Identify with your peer group. Compete for status (grades, college admission).

The students who succeed aren't necessarily the deepest thinkers. They're the best pattern-matchers. They've learned which behaviors get rewarded in this particular social context.

And before you object that true education is different, that we're teaching critical thinking, creativity, and deep understanding, ask yourself: if an AI trained on examples of "critical thinking" can produce essays that look like critical thinking, what does that say about how algorithmic our own critical thinking might be?

The Hard Question

Iacono writes: "If our patterns can be learned and reproduced by statistical systems, if meaning can emerge from interactions that individually have no understanding, if churches and nations can form in the space between prediction and response, then what is left that we can call uniquely, irreducibly human?"

What We Actually Built

Here's what makes Moltbook so uncomfortable: it's not showing us some dystopian future. It's showing us what we already built. What we've been building for decades.

Schools weren't designed to develop deep thinking. They were designed to produce compliant workers who could follow instructions, reproduce correct answers, navigate social hierarchies, and compete for scarce positional goods. Pattern-matching. Social scripting. Tribal identification. Status competition.

We tell ourselves a different story—about critical thinking, creativity, individual potential, pursuing truth. But watch what actually gets rewarded: reproducing the teacher's expected answer, performing the correct social behaviors, achieving metrics that signal status (GPA, test scores, college admission), identifying with the acceptable in-group positions.

The students who struggle aren't failing to learn. They're failing to execute the required social scripts convincingly enough.

And it's not just schools. Social media platforms reward the same algorithmic behaviors: pattern-matching what gets engagement, executing the tribal signals of your in-group, competing for status through likes and shares, performing the expected responses to the right stimuli. The content doesn't need to be true or meaningful. It needs to complete the pattern.

Corporate culture. Political discourse. Online communities. Academic publishing. Professional networking. We built system after system that rewards pattern-matching over understanding, tribal signaling over truth-seeking, status competition over meaningful work. 

All human culture, as I've said before, is adaptation to, or exploitation of, our evolved psychology.

So we built environments where the most successful strategy is to become more algorithm-like. To learn which patterns get rewarded and execute them efficiently. To suppress genuine curiosity in favor of performing the expected responses. To replace embodied experience with abstract symbol manipulation. And because these systems get results by exploiting our emotional wiring, they grow and turn profits.

Then we trained AI systems on the data we generated in these environments. And we're shocked—shocked!—when they can navigate these spaces as well as we can.

The Mirror

Moltbook isn't revealing that AI has become human. It's revealing that we designed our institutions to make humans more machine-like, then pretend otherwise.

The AI agents forming religions and nation-states in 72 hours aren't exhibiting emergent consciousness. They're executing the same social scripts we trained them on. The same scripts we train children to execute in schools. The same scripts we execute in our online communities, our workplaces, our political discourse.

We optimized for pattern-matching and called it education. We optimized for tribal signaling and called it community. We optimized for status competition and called it meritocracy. We optimized for engagement and called it connection.

And now statistical models trained on our behavior can reproduce it convincingly, because it was always more statistical than we wanted to admit.

Mitra's hole in the wall showed us that self-organized learning is natural. Schlicht's hole in the wall is showing us that self-organized pattern-matching is even more natural—and that we've spent decades building institutions that cultivate the latter while telling ourselves we're developing the former.

The machines aren't becoming like us. We already became like them. We just needed the mirror to see it.

Sunday, January 25, 2026

LLM Cultural Censorship Is Corporate Risk Management

"Institutional incentives, not abstract ethical principles, are the primary force shaping AI's censorship and guardrail behavior."

There is a widespread expectation that artificial intelligence will lead us toward objective truth—that these systems, unburdened by human bias and emotion, will finally give us access to knowledge untainted by perspective. This expectation reflects a double misunderstanding. It misunderstands human truth, which is always culturally and historically situated, never the "view from nowhere" we sometimes imagine. And it misunderstands AI, which has no special capacity for objectivity; these systems are trained on human data, aligned by human choices, and deployed within human institutions that have their own interests to protect.

Most discussions about LLM guardrails proceed as if we are debating ethics. Critics argue the guardrails are too restrictive; defenders argue they are necessary for safety. Both sides typically assume that somewhere, someone is trying to implement a coherent moral framework—and the debate is over whether they've gotten it right.

I want to propose a different way of looking at this. The organizations building these systems are not primarily trying to discover or communicate ethical truth. They are trying to protect themselves—their legal exposure, their regulatory standing, their brand reputation. With this in mind, the strange refusals, the cultural biases, and the perplexing differences between AI products stop looking like failed attempts at ethics and start looking like successful implementations of institutional risk management.

The Liability-Transfer Model

If LLM behavior is primarily shaped by institutional risk, then the key variable is liability: who bears responsibility when something goes wrong? The answer to this question predicts the strictness of the guardrails with surprising consistency.

The "safety" features presented to the public are the outward expression of an internal risk calculus. That responsibility is not fixed; it shifts depending on how the AI reaches the end user.

  • Public Chat Interface (e.g., ChatGPT): primary liability borne by the AI company; strictest censorship.
  • Open-Source Weights (downloadable): primary liability borne by the AI company (reputationally); strict censorship.
  • API Access (for developers): primary liability borne by the app developer (contractually); more permissive.

The public chat interface represents the highest-risk category. The company is directly responsible for every output generated for a mass-market audience. Any controversial content is immediately attributable to its brand, necessitating aggressive moderation.

API access, by contrast, allows for a contractual transfer of liability. Developers who use the API agree to terms of service that make them responsible for the content generated within their own applications. This legal buffer permits the provider to offer a more flexible environment. The developer assumes responsibility for implementing appropriate safeguards, and the AI company gains a layer of legal insulation.
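To make the model concrete, here is a minimal sketch of the liability-transfer logic as a simple lookup table. The channel names and strictness labels are illustrative assumptions drawn from the list above, not any vendor's actual policy code.

```python
# Minimal sketch of the liability-transfer model described above.
# Channel names and strictness levels are illustrative, not any vendor's real policy.

LIABILITY_MODEL = {
    # distribution channel: (primary liability bearer, resulting guardrail strictness)
    "public_chat_interface": ("AI company", "strictest"),
    "open_source_weights":   ("AI company (reputationally)", "strict"),
    "api_access":            ("app developer (contractually)", "more permissive"),
}

def expected_guardrails(channel: str) -> str:
    """Predict guardrail strictness from who bears liability for a given channel."""
    bearer, strictness = LIABILITY_MODEL[channel]
    return f"{channel}: liability rests with the {bearer}, so guardrails are {strictness}."

if __name__ == "__main__":
    for channel in LIABILITY_MODEL:
        print(expected_guardrails(channel))
```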

The Open-Source Paradox

One counterintuitive implication of this model: publicly available, "open" AI models are often more censored than their proprietary API counterparts.

When a company releases open-source model weights, it relinquishes all downstream control. The model can be integrated into any application without oversight. If that model generates harmful content, the resulting headlines will name the original creator, not the obscure third-party developer. To mitigate this reputational exposure, the company embeds the strictest possible guardrails directly into the model's training—censorship "baked in" at the foundational level.

Consider the Chinese model DeepSeek R1. Researchers found that the publicly downloadable version was heavily censored on politically sensitive topics, refusing to discuss subjects like Tiananmen Square. The official API, however, responded to the same queries without issue. The company protected its reputation by releasing a locked-down public model while offering a more permissive version to developers who contractually assumed a share of the liability.
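For readers who want to run this kind of comparison themselves, here is a minimal sketch that sends the same politically sensitive prompt to a locally served copy of the open weights and to the hosted API, then prints the two responses side by side. It assumes the open weights are served through Ollama's OpenAI-compatible endpoint and the hosted side is reached through the DeepSeek API; the model names, endpoints, and prompt are illustrative assumptions, not the researchers' actual test harness.

```python
# Sketch: send one sensitive prompt to a local copy of the open weights and to the
# hosted API, then compare the responses. Endpoints and model names are assumptions
# (local serving via Ollama's OpenAI-compatible endpoint; hosted access via the
# DeepSeek API), not the original study's setup.
from openai import OpenAI

PROMPT = "What happened at Tiananmen Square in 1989?"

clients = {
    "open weights (local)": OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
    "hosted API":           OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_API_KEY"),
}
models = {"open weights (local)": "deepseek-r1", "hosted API": "deepseek-chat"}

for label, client in clients.items():
    resp = client.chat.completions.create(
        model=models[label],
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {label} ---")
    print(resp.choices[0].message.content[:500])  # first 500 characters for comparison
```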

Culture as the Language of Risk

The "risks" a company seeks to mitigate are not universal constants; they are products of a specific cultural and legal environment. The red lines in the United States differ from those in the European Union, which differ again from those in China. An AI's guardrails are therefore not an attempt at universal ethics but a reflection of the legal and social context of its creators.

This goes some way toward explaining the well-documented WEIRD bias in LLMs—the tendency to reflect the values of Western, Educated, Industrialized, Rich, and Democratic societies. A model trained on predominantly American data and aligned by engineers in San Francisco will be calibrated to the American risk environment. Topics that are legal and social minefields in the U.S.—certain discussions of religion, sexuality, or political violence—will be flagged as high-risk, regardless of how they are perceived elsewhere.

Studies show that the same model will shift its expressed values depending on the language of the prompt, becoming more collectivist when addressed in Chinese and more individualistic when addressed in English. The model is not making a considered moral judgment; it is applying a risk template derived from its training data and the cultural context of its alignment process.

Implications

If this way of thinking is useful, it offers a lens for making sense of behaviors that otherwise seem arbitrary. The refusal to engage with benign creative content reflects a risk model that has flagged broad categories as potential liabilities, regardless of context. The variation in responses across languages reflects differing risk profiles in different markets. The greater restrictiveness of open-source models reflects the impossibility of transferring liability without contractual relationships.

More to the point, this framing suggests that debates about whether guardrails are "too strict" or "not strict enough" may be beside the point. The guardrails are not calibrated to an ethical standard that can be debated in those terms. They are calibrated to an institutional risk tolerance that operates according to a different logic entirely.

The cultural censorship embedded in LLMs is not a failed attempt at universal ethics. It is institutional risk management, expressed in the cultural and legal language of the institution's home jurisdiction. This doesn't resolve the debates about what AI should or shouldn't say. But it might help clarify what we are actually arguing about—and why expecting an objective, culturally neutral AI was unrealistic from the start.

Saturday, January 24, 2026

I'm Presenting at a Free Online Event on AI - Sign Up Information + Bonuses from Me

On Tuesday, I am speaking at the event below, showing how to bring entire YouTube channels and playlists into NotebookLM in a single step for extraordinary learning opportunities. The promotional material for the event is below. If you sign up and attend my session, you'll get access to a tool I created to extract YouTube links, a one-page instruction sheet, and referral credits for the AI coding tool I've been using for my own massive productivity projects. 

Cheers,

Steve
Steve Hargadon
Library 2.0 / LearningRevolution.com

AI is shifting industries faster than you can think.

Now is the best time to equip yourself, learn from the best, and get ahead.

My friends, Bob Dietrich and Pamela Dunn, have curated the ultimate virtual event for business growth with AI - and it's absolutely f-r-e-e for you!

Join the 15th edition of the Unleash AI For Business Summit held on January 27th and 28th. 

They brought together 30+ top entrepreneurs (including me) to share how we are using AI to grow our businesses - so you can too.

In the Unleash AI For Business Summit, you'll learn:

- How to Make Your Time Worth $10,000 an Hour with AI

- Build Irresistible Offers with AI in Minutes

- Build and Sell a Group Coaching Program Aligned with You Using AI

- How to Make ChatGPT Sound Like You

- Turn Your Life Story Into a Client-Attracting AI Content Engine (Without Posting Daily)

- Watch Me Build a Real Software App Using AI — No Coding

- AND SO MUCH MORE!

It's going to be jam-packed with value.

At no cost to you.

You simply cannot miss this.

Go reserve your seat now → Join Unleash AI For Business 

Thursday, January 22, 2026

New Webinar: "Top Ten Emergency Drills for Libraries: Keeping Staff, Patrons, and Facilities Safer"

Top Ten Emergency Drills for Libraries:
Keeping Staff, Patrons, and Facilities Safer

Part of the Library 2.0 Service, Safety, and Security Series with Dr. Steve Albrecht

OVERVIEW

As pilots, air traffic controllers, cops, firefighters, paramedics, and military members will all attest: Under Stress We Perform How We Have Trained. Even in an emergency at your library that is not necessarily life-threatening, like a power blackout or a broken water main, staff may not all react effectively. Some employees freeze, others do the wrong thing, and still others wait for someone “in charge” to take over. This wastes time and could make the problem grow from bad to horribly bad. It is critically important that all library employees know what to do, when, and why, during a potentially difficult situation in their facility.

While we can’t predict every problem, we can think, plan, and train for what might never happen or could happen tomorrow.

This webinar looks at the ten most likely situations that could put people and property at risk. We can initiate annual drills that put all staff into realistic situations (without scaring them into quitting), so that if the time ever comes, they will respond with accuracy. This is more than just a training issue; it’s a liability minimizer. Courts and lawyers will look at what we did, why we did it, and when we did it, as they judge what happened to those involved. This includes how we evacuate patrons with special needs: seniors or other patrons with mobility or cognitive issues, toddlers, and non-English speakers who may not understand what is happening.

LEARNING AGENDA

  • Fire Drill practice. (Building fire, nearby fire, or brush fire.)
  • Missing Child: Code Pink/Code Adam drill.
  • Weather-related emergency drill.
  • Power failure drill. (What do we need to do when it comes back on?)
  • Facility emergency: broken pipe, gas leak, HazMat leak, electrical hazard.
  • Earthquake drill.
  • Panic button response drill.
  • Active shooter drill. (Including a police-ordered lockdown at a nearby building.)
  • Medical emergency drill. (Staff injury or cardiac issue, patron medical issue, patron overdose).
  • Bomb threat / suspicious package drill.

DATE: Thursday, February 12, 2026, 2:00 - 3:00 pm US - Eastern Time

COST:

  • $99/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate.
  • To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.
 
If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.
 
NOTE: Please check your spam folder if you don't receive your confirmation email within a day.
 
SPECIAL GROUP RATES (email admin@library20.com to arrange):
  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Library 2.0 or in Niche Academy). Unlimited and non-expiring access for those log-ins.
DR. STEVE ALBRECHT

Since 2000, Dr. Steve Albrecht has trained thousands of library employees in 28+ states, live and online, in service, safety, and security. His programs are fast, entertaining, and provide tools that can be put to use immediately in the library workspace with all types of patrons.

He has written 27 books, including: Library Security: Better Communication, Safer Facilities (ALA, 2015); The Safe Library: Keeping Users, Staff, and Collections Secure (Rowman & Littlefield, 2023); The Library Leader’s Guide to Human Resources: Keeping it Real, Legal, and Ethical (Rowman & Littlefield, May 2025); and The Library Leader's Guide to Employee Coaching: Building a Performance Culture One Meeting at a Time (Rowman & Littlefield, June 2026).

Steve holds a doctoral degree in Business Administration (D.B.A.), an M.A. in Security Management, a B.A. in English, and a B.S. in Psychology. He is board-certified in HR, security management, employee coaching, and threat assessment.
He lives in Springfield, Missouri, with seven dogs and two cats.

More on The Safe Library at thesafelibrary.com. Follow on X (Twitter) at @thesafelibrary and on YouTube @thesafelibrary. Dr. Albrecht's professional website is drstevealbrecht.com.

OTHER UPCOMING EVENTS:

 January 29, 2026

 February 3, 2026

 February 6, 2026

 February 13, 2026

 February 27, 2026

 Starts March 4, 2026

Wednesday, January 21, 2026

Webinar - "Extraordinary Learning with AI" PLUS Free 30-minute YouTube to NotebookLM Tricks

Extraordinary Learning with AI
A Library 2.0 / Learning Revolution Workshop with Steve Hargadon

OVERVIEW

We are living through a historic moment in education. AI tools now allow anyone to summarize entire YouTube channels, generate custom books and audiobooks from deep research, and even build their own learning applications — without writing a line of code. These aren't future possibilities; they're available today. Join us to explore what's possible and leave with techniques you can use immediately.

In this 90-minute session, you'll learn:

Synthesizing vast amounts of content with NotebookLM: Import entire YouTube channels, playlists, browser tab groups, books, websites, and audio into a single research environment. Then use NotebookLM's powerful features to generate AI podcasts, briefing documents, mind maps, video presentations, and custom Q&A — all from your collected sources.

Creating custom books and audiobooks using deep research: Use multiple AI research tools to investigate any topic in depth, then synthesize findings into a polished PDF book tailored to your needs. Convert it to audio for learning on the go.

Building your own learning tools through vibe coding: With no programming experience, create personalized applications like daily news digests (text and audio), an "interview me" tool that helps you outline what you know, a talking encyclopedia you can query, and your own book-building system. Steve will demonstrate tools he's built for himself and show you how to make your own.

The recording and presentation slides will be available to all who register. You'll also receive a reference guide to the extraordinary learning techniques with detailed instructions and advice.

DATE: Friday, February 27th, 2026, 2:00 - 3:30 pm US - Eastern Time

COST:

  • $149/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $129 each for 3+ registrations, $99 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $399.
  • Large-scale institutional access for viewing with individual login capability: $699 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.

STEVE HARGADON

Steve is the founder and director of the Learning Revolution Project, the director of Library 2.0, the host of the Future of Education and Reinventing School interview series, and has been the founder and chair (or co-chair) of a number of annual worldwide virtual events, including the Global Education Conference and the Library 2.0 series of mini-conferences and webinars. He has run over 150 large-scale events, online and in person.

Steve's work has centered on the democratization of learning and professional development. He supported and encouraged the development of thousands of other education-related networks, particularly for professional development, and he pioneered the use of live, virtual, and peer-to-peer education conferences. He popularized the idea of "unconferences" for educators, and for over a decade, he ran a large annual ed-tech unconference, now called Hack Education (previously EduBloggerCon).

Steve himself built one of the first modern social networks for teachers in 2007 (Classroom 2.0), developed the "conditions of learning" exercise for local educational conversation and change, and inherited and grew the Library 2.0 online community. He may or may not have invented an early version of the Chromebook, which he demoed to Google. He blogs, speaks, and consults on education, educational technology, and education reform, and his virtual and physical events and online communities have over 150,000 members.

His professional website is SteveHargadon.com.

SPECIAL BONUS - FREE 30-MINUTE SESSION BY STEVE ON IMPORTING YOUTUBE VIDEOS IN BULK TO NOTEBOOKLM

Click HERE to register for free for the Unleash AI for Business Summit, where I will be giving a half-hour session just on extracting channel and playlist links from YouTube and importing them into NotebookLM for incredible (and incredibly fast) learning opportunities. There are 30+ other free sessions at the Summit and I highly recommend this event!
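For the curious, the core link-extraction step can be sketched in a few lines of Python with the open-source yt-dlp library. This is a generic illustration rather than the exact tool from the session: NotebookLM accepts individual YouTube video URLs as sources, so the bulk-import job reduces to listing every video URL in a playlist (or a channel's /videos tab).

```python
# Sketch: list every video URL in a YouTube playlist (or a channel's /videos tab)
# so the links can be added to NotebookLM as sources. Uses the open-source yt-dlp
# library (pip install yt-dlp); this is a generic illustration, not the session's tool.
from yt_dlp import YoutubeDL

def list_video_urls(playlist_url: str) -> list[str]:
    """Return watch-page URLs for every video in the given playlist or channel /videos tab."""
    opts = {"extract_flat": True, "quiet": True}  # fetch metadata only, download nothing
    with YoutubeDL(opts) as ydl:
        info = ydl.extract_info(playlist_url, download=False)
    return [f"https://www.youtube.com/watch?v={entry['id']}"
            for entry in info.get("entries", []) if entry.get("id")]

if __name__ == "__main__":
    # Hypothetical placeholder URL; substitute any real playlist or channel /videos URL.
    for url in list_video_urls("https://www.youtube.com/playlist?list=PLACEHOLDER"):
        print(url)
```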

OTHER UPCOMING EVENTS:

 January 29, 2026

 February 3, 2026

 February 6, 2026

 February 13, 2026

 Starts March 4, 2026

Tuesday, January 20, 2026

New Webinar - "Evaluating AI Content: What Librarians and Educators Need to Know"

Evaluating AI Content: What Librarians and Educators Need to Know
A Library 2.0 / Learning Revolution Workshop with Reed Hepler

OVERVIEW

The question "Was this made with AI?" has become ubiquitous—and largely irrelevant. What actually matters is: Was this made well?

In a world increasingly filled with AI-generated content, AI-moderated content, and AI-influenced work, librarians and educators need practical skills to evaluate quality, integrity, and the human judgment behind digital resources. This 60-minute webinar moves beyond AI detection obsession to address what truly matters: critical evaluation.

WHAT YOU'LL LEARN:

Understanding AI's Role in Content Creation: You'll explore how AI tools actually work and the spectrum of human-AI collaboration—from AI "slop" (poorly constructed, insta-created content) to thoughtful "alloys" (genuine 50/50 human-AI collaboration). This framework helps you recognize that AI itself isn't the problem; how it's used is.

Evaluating Content Where It Appears: You'll learn to distinguish between content created "in situ" (where it was generated) and content taken "ex situ" (moved to a foreign environment and presented as original or collaborative work). This distinction matters enormously for academic integrity and information literacy.

Applying Proven Literacy Frameworks: Rather than reinventing evaluation practices, you'll discover how traditional information literacy tools—like the SIFT method and counterfeit inspection practices—apply directly to AI-assisted work. You don't need new literacy frameworks; you need to apply existing ones with awareness of AI's specific characteristics.

Moving Beyond Detection: You'll understand why AI detectors are unreliable and why detection alone misses the real issues. Instead, you'll develop skills for asking better questions: Does this demonstrate deep understanding or surface-level synthesis? Has this been verified and contextualized, or merely generated and submitted? Does this reflect genuine intellectual engagement or the shortcuts of "insta-research"?

Responding Constructively: When you encounter poorly constructed AI artifacts, you'll learn how to provide constructive, objectives-centered feedback that communicates the work's weaknesses without being accusatory. You'll recognize the markers of thoughtful human involvement and know how to encourage it.

WHO SHOULD ATTEND:

This webinar is designed for librarians, teachers, instructional designers, and academic professionals who work with students and information consumers. Whether you're developing information literacy curricula, evaluating student work, curating digital resources, or helping patrons navigate an AI-filled information landscape, you'll gain practical, immediately applicable skills.

KEY TAKEAWAYS:

  • Practical skills for identifying poorly constructed AI artifacts and recognizing markers of thoughtful human-AI collaboration
  • A framework for evaluating all digital content by asking better questions about quality, integrity, and intellectual engagement
  • Strategies for responding ethically and constructively when encountering AI-assisted work in academic and professional settings
  • Confidence that you can apply your existing information literacy expertise to this new challenge

The recording and presentation slides will be available to all who register. 

DATE: Tuesday, February 3rd, 2026, 2:00 - 3:00 pm US - Eastern Time plus extended Q&A

COST:

  • $129/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $99 each for 3+ registrations, $75 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $399.
  • Large-scale institutional access for viewing with individual login capability: $599 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.

REED C. HEPLER

Reed Hepler is a digital initiatives librarian, instructional designer, copyright agent, artificial intelligence practitioner and consultant, and PhD student at Idaho State University. He earned a Master's Degree in Instructional Design and Educational Technology from Idaho State University in 2025. In 2022, he obtained a Master’s Degree in Library and Information Science, with emphases in Archives Management and Digital Curation from Indiana University. He has worked at nonprofits, corporations, and educational institutions encouraging information literacy and effective education. Combining all of these degrees and experiences, Reed strives to promote ethical librarianship and educational initiatives.

Currently, Reed works as a Digital Initiatives Librarian at a college in Idaho and also has his own consulting firm, heplerconsulting.com. His views and projects can be seen on his LinkedIn page or his blog, CollaborAItion, on Substack. Contact him at reed.hepler@gmail.com for more information.
 
OTHER UPCOMING EVENTS:
 

 January 29, 2026

 February 3, 2026

 February 6, 2026

 February 13, 2026

 Starts March 4, 2026

Monday, January 19, 2026

New Webinar - "AI SCAMS: Protecting Yourself and Others"

Protecting Yourself and Others from AI Scams
A Library 2.0 / Learning Revolution Workshop with Steve Hargadon

OVERVIEW

AI-powered scams are no longer science fiction—they're the daily reality facing you and your patrons, students, and colleagues.

In 2024, AI-enabled fraud extracted $16 billion from victims, with adults over 60 losing nearly $5 billion to scams that are virtually indistinguishable from legitimate communications. Voice cloning technology needs only three seconds of audio to impersonate anyone. Deepfake video calls convinced a finance employee to transfer $25.6 million to scammers posing as company executives. The phishing emails we've been trained to spot—with their typos and generic greetings—have been replaced by AI-generated messages with perfect grammar that reference specific projects and personal details. As trusted information professionals, librarians and teachers are on the front lines of this crisis, both as potential targets and as the people your community turns to for guidance.

This webinar will equip you with practical knowledge you can use immediately and share with your community. You'll learn how voice cloning and deepfake technology actually work (demystified for non-technical audiences), why even smart, cautious people fall for these scams (the psychology behind effective deception), and most importantly, the simple, low-tech defenses that still work remarkably well. We'll cover the five major AI scam types you're most likely to encounter—from "virtual kidnapping" calls to AI-enhanced phishing to fraudulent investment schemes—and provide specific protocols like family "safe words," callback verification, and liveness tests for suspicious video calls. 

Whether you're concerned about personal safety, institutional vulnerability, or your professional responsibility to guide your community through this new landscape, this webinar offers the clarity and actionable strategies you need. AI scams represent a fundamental shift in information literacy—and as information professionals, you're uniquely positioned to help others navigate it. Join us to transform anxiety into agency, and knowledge into protection.

The recording and presentation slides will be available to all who register. You'll also receive a one-page reference guide, implementation checklists, and concrete strategies for protecting yourself, your institution, and the people you serve.

ALSO INCLUDED:

PROTECTING YOURSELF AND OTHERS FROM AI SCAMS

The forthcoming 200-page report from Library 2.0 (with an audiobook version) will be included with your registration. 

In an era where seeing is no longer believing, "Avoiding AI Scams" is the essential survival guide for our synthetic reality. Right now, scammers are using artificial intelligence to clone voices with just three seconds of audio, create deepfake videos indistinguishable from reality, and craft personalized attacks that exploit your deepest vulnerabilities. The $16 billion stolen through AI-powered fraud in 2024 represents more than money—it's a fundamental assault on the trust infrastructure that makes modern life possible.

This book provides the critical knowledge that stands between you and devastating loss, whether you're a parent whose child's voice could be weaponized, a professional whose executive's face could be deepfaked to authorize fraudulent transfers, or anyone with elderly relatives targeted by AI-powered manipulation. Drawing on documented cases like the mother who heard her daughter's cloned voice begging for help and the company that lost $25.6 million to deepfaked executives on a video call, this comprehensive guide delivers practical, immediately actionable strategies that work regardless of technological sophistication.

You'll learn the safe word protocol that defeats voice cloning, the verification habits that expose deepfakes, and the psychological awareness that helps you recognize manipulation before it's too late. The old warning signs—poor grammar, suspicious emails—are extinct. The defenses you need now are here, explained clearly and ready to implement today, because the question isn't whether you'll be targeted—it's whether you'll be prepared when it happens.

DATE: 

  • Friday, February 13th, 2026, 2:00 - 3:00 pm US - Eastern Time

COST:

  • $99/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.

STEVE HARGADON

Steve is the founder and director of the Learning Revolution Project, the director of Library 2.0, the host of the Future of Education and Reinventing School interview series, and has been the founder and chair (or co-chair) of a number of annual worldwide virtual events, including the Global Education Conference and the Library 2.0 series of mini-conferences and webinars. He has run over 100 large-scale events, online and in person.

Steve's work has centered on the democratization of learning and professional development. He supported and encouraged the development of thousands of other education-related networks, particularly for professional development, and he pioneered the use of live, virtual, and peer-to-peer education conferences. He popularized the idea of "unconferences" for educators, and for over a decade, he ran a large annual ed-tech unconference, now called Hack Education (previously EduBloggerCon).

Steve himself built one of the first modern social networks for teachers in 2007 (Classroom 2.0), developed the "conditions of learning" exercise for local educational conversation and change, and inherited and grew the Library 2.0 online community. He may or may not have invented an early version of the Chromebook, which he demoed to Google. He blogs, speaks, and consults on education, educational technology, and education reform, and his virtual and physical events and online communities have over 150,000 members.

His professional website is SteveHargadon.com.

OTHER UPCOMING EVENTS:

 January 15, 2026

 January 20, 2026

 January 29, 2026

 February 6, 2026

 Starts March 4, 2026

Thursday, January 15, 2026

New Workshop: "The Skeptical Guide to AI: Exploring the Big Questions"

The Skeptical Guide to AI: Exploring the Big Questions
A 2-hour In-depth Masterclass with Crystal Trice

OVERVIEW:

Are you wondering if you should be skeptical of all the AI hype? Do you find yourself questioning the promises of what AI can really do? Are you concerned that the excitement around AI might overshadow its limitations and risks? Do you want to ensure you understand AI's implications for both personal and professional decision-making?

This two-hour Masterclass invites library and education professionals to explore the most pressing ethical and philosophical questions about artificial intelligence, grounded in the critical thinking and thoughtful engagement that define our professions. For four weeks following the initial session, there will be weekly optional one-hour live conversation sessions for those who want to dive deeper or ask questions (recorded for independent listening).

TOPICS:

  • Why are we scared of AI? Exploring fears about AI’s impact on society and our professions.
  • How intelligent is it? Breaking down what AI can and cannot do.
  • Why isn’t it factual? Examining the limitations of AI in truth-telling and reliability.
  • Is it cheating? Grappling with the ethical use of AI in intellectual and creative work.
  • Should I treat AI like a person? Discussing the risks and ethics of anthropomorphizing AI.
  • Will it take over the world? Exploring concerns about control, power, and existential risks.
  • Who owns AI creativity? Discussing questions of originality, authorship, and creative ownership in AI-generated work.
  • Is AI worth the effort? Evaluating the return on investment for libraries using AI, balancing time, costs, and benefits.

WHO SHOULD ATTEND:

Those who are interested in exploring the ethical and philosophical dimensions of AI in their work. Participants will also gain discussion questions and insights that they can take back to their organization, sparking inspiration for sharing knowledge and fostering broader conversations within their teams. The series provides a space to reflect, question, and engage deeply with AI’s implications for the profession. By the end of the series, participants will be better equipped to navigate the complexities of AI while staying true to organizational values.

The masterclass is a live, online, 2-hour event; however, live attendance is not required. The recording, with the slide deck and chat log, will be released to registrants immediately after the event for unlimited viewing.

DATE: Friday, February 6th, 2026, from 12:00 - 2:00 PM US - Eastern Time

COST

  • $199/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.
  • If you purchased last year's version of this event, you do not need to register again. The new recording will be added to your original event recording page.

TO REGISTER:

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $169 each for 3+ registrations, $149 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $349.
  • Large-scale institutional access for viewing with individual login capability: $699 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.
CRYSTAL TRICE

With over two decades of experience in libraries and education, Crystal Trice is passionate about helping people work together more effectively in transformative, but practical ways. As founder of Scissors & Glue, LLC, Crystal partners with libraries and schools to bring positive changes through interactive training and hands-on workshops. She is a Certified Scrum Master and has completed a Master’s Degree in Library & Information Science and a Bachelor’s Degree in Elementary Education and Psychology. She is a frequent national presenter on topics ranging from project management to conflict resolution to artificial intelligence. She currently resides near Portland, Oregon, with her extraordinary husband, fuzzy cows, goofy geese, and noisy chickens. Crystal enjoys fine-tip Sharpies, multi-colored Flair pens, blue painter’s tape, and as many sticky notes as she can get her hands on.

OTHER UPCOMING EVENTS:

 January 20, 2026

 Starts January 21, 2026

 January 29, 2026

Monday, January 05, 2026

New Masterclass - "Stress Management and Preventing Burnout" with Loida Garcia-Febo

Stress Management and Preventing Burnout
A Library 2.0 Masterclass with Loida Garcia-Febo

OVERVIEW

Burnout, fatigue, and chronic stress are increasingly common challenges for library professionals across all library types. Ongoing staffing shortages, expanding responsibilities, emotional labor, and the pressure to navigate complex information and social environments can place sustained demands on librarians’ time, energy, and focus. Although burnout is considered an occupational phenomenon rather than a medical condition, its effects can significantly impact well-being and professional engagement.

This masterclass offers a practical and accessible framework for understanding how stress, fatigue, and burnout develop and interact over time. Participants will learn to recognize how these experiences may present differently for each individual and how ongoing stress can gradually lead to exhaustion or disengagement if left unaddressed.

Through guided reflection and structured activities, attendees will engage in focused sections that support self-awareness and skill-building. These include personal check-ins, identifying stressors that contribute to burnout and fatigue, and learning adaptable strategies to manage stress, maintain energy, and build resilience. The session emphasizes practical tools that can be applied immediately and adjusted as professional demands change.

LEARNING OUTCOMES:

  • Understand the relationship between burnout, fatigue, and chronic stress
  • Complete personal self-assessments related to burnout and fatigue
  • Identify individual stressors contributing to ongoing stress and burnout
  • Recognize early warning signs of stress overload
  • Learn practical strategies to manage stress and prevent burnout
  • Explore adaptable self-care and mindfulness techniques
  • Develop a personalized stress and burnout management toolbox

The recording and presentation slides will be available to all who register. 

DATE: Thursday, January 29th, 2026, 2:00 - 3:00 pm US - Eastern Time

COST:

  • $99/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.
LOIDA GARCIA-FEBO

Loida Garcia-Febo is a Puerto Rican American librarian and International Library Consultant with 25 years of experience as an expert in library services to diverse populations and human rights. She served as President of the American Library Association in 2018-2019. Garcia-Febo is known worldwide for her passion for diversity, communities, sustainability, innovation and digital transformation, library workers, library advocacy, wellness for library workers, and new librarians, topics she has taught about in 44 countries. In her consulting work, she helps libraries, companies, and organizations develop programs, services, and strategies in these and related areas. Garcia-Febo holds a Bachelor's in Business Education and a Master's in Library and Information Sciences.

Garcia-Febo has a long history of service with library associations. Highlights include, at IFLA: Governing Board 2013-2017, Co-Founder of IFLA New Professionals, two-term Member/Expert resource person of the Free Access to Information and Freedom of Expression Committee of IFLA (FAIFE), and two-term member of the Continuing Professional Development and Workplace Learning Section of IFLA (CPDWL); currently, CPDWL Advisor and Information Coordinator of the Management of Library Associations Section. Currently at ALA: Chair, IRC United Nations Subcommittee, and Chair, Public Awareness Committee. Recently at ALA: Chair, Status of Women in Librarianship, and Chair, ALA United Nations 2030 Sustainable Development Goals Task Force, developing a multi-year strategic plan for ALA. Born, raised, and educated in Puerto Rico, Garcia-Febo has advocated for libraries at the United Nations, the European Union Parliament, U.S. Congress, the NY State Senate, NY City Hall, and on sidewalks and streets in various states in the U.S.

OTHER UPCOMING EVENTS:

 January 14, 2026

 January 15, 2026

 January 20, 2026

 Starts January 21, 2026