Friday, September 19, 2025

FRIDAY ROUNDUP: Hargadon on AI, Albrecht on Libraries, & Upcoming Events

Here's a roundup of recent Learning Revolution and Library 2.0 blog posts.

Steve Hargadon on AI:

Dr. Steve Albrecht on Libraries:

 
UPCOMING EVENTS:

 September 23, 2025

 September 25, 2025

 October 7, 2025

 Next Class October 8, 2025

 October 9, 2025

 October 17, 2025

 


Thursday, September 18, 2025

New Webinar: "Difficult Discussions with Patrons"

Difficult Discussions with Patrons:
Using Jefferson Fisher’s Book The Next Conversation

Part of the Library 2.0 Service, Safety, and Security Series with Dr. Steve Albrecht

OVERVIEW

Nobody likes to hear bad news; most people hate being told what to do. How we tell people what they can and can’t do in the library has a huge impact on whether they comply or not. Some of our patrons are influenced by their own sense of self-importance, caught up in the distraction of their personal problems, busy with their phones, and not the best listeners. It can be hard for them to hear us, and even harder for them to accept why we ask them to follow our library rules.

Jefferson Fisher is an attorney and a popular social-media voice on the fine art of listening and talking with skill. His book, The Next Conversation: Argue Less, Talk More (TarcherPerigee, 2025), can help us navigate tough talks with patrons (and with our employees as well).

In past webinars for Library 2.0, Steve Albrecht has covered the books Crucial Conversations and Verbal Judo. He will follow the same format, offering a “webinar book report” on what to say to get better cooperation, and what not to say to avoid prolonging confrontations.

LEARNING AGENDA

  • How to use specific conversational tools that give all staff control, confidence, and better connections with patrons.
  • Setting boundaries, building “frames,” addressing defensiveness, and speaking assertively, rather than defensively.
  • How to give bad news to patrons.

DATE: Thursday, October 9, 2025, 2:00 - 3:00 pm US - Eastern Time

COST:

  • $99/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate.
  • To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Library 2.0 or in Niche Academy). Unlimited and non-expiring access for those log-ins.

DR. STEVE ALBRECHT

Since 2000, Dr. Steve Albrecht has trained thousands of library employees in 28+ states, live and online, in service, safety, and security. His programs are fast and entertaining, and they provide tools that can be put to use immediately in the library workspace with all types of patrons.

He has written 27 books, including: Library Security: Better Communication, Safer Facilities (ALA, 2015); The Safe Library: Keeping Users, Staff, and Collections Secure (Rowman & Littlefield, 2023); The Library Leader’s Guide to Human Resources: Keeping it Real, Legal, and Ethical (Rowman & Littlefield, May 2025); and The Library Leader's Guide to Employee Coaching: Building a Performance Culture One Meeting at a Time (Rowman & Littlefield, June 2026).

Steve holds a doctoral degree in Business Administration (D.B.A.), an M.A. in Security Management, a B.A. in English, and a B.S. in Psychology. He is board-certified in HR, security management, employee coaching, and threat assessment.
He lives in Springfield, Missouri, with seven dogs and two cats.

More on The Safe Library at thesafelibrary.com. Follow on X (Twitter) at @thesafelibrary and on YouTube @thesafelibrary. Dr. Albrecht's professional website is drstevealbrecht.com.

 

Wednesday, September 17, 2025

The Future of Therapy: How AI Could Transform Mental Health Care

In a world where technology is reshaping every facet of our lives, it was only a matter of time before artificial intelligence turned its attention to one of the most human experiences of all: therapy. The prospect of AI-powered mental health support is met with a mixture of excitement and apprehension. Can an algorithm truly understand the complexities of the human psyche? Or are we on the verge of a revolution that could make mental health care more accessible, affordable, and effective than ever before? This post explores the nuanced landscape of AI therapy, weighing its potential against the inherent challenges of mental wellness.

The Unspoken Challenges of Traditional Therapy

For all its undeniable benefits, traditional therapy is far from a perfect system. It's a deeply personal journey, and the path to finding the right support is often fraught with uncertainty. One of the core issues is that therapy is not an exact science. Unlike other medical disciplines with clear diagnostic criteria and standardized treatments, mental health care is a mosaic of different theories, models, and approaches. When a therapist is in training, they choose a therapeutic model that resonates with them, but this choice is not always based on rigorous scientific evidence in the way we might expect. This isn't to devalue the incredible work that therapists do, but it highlights the subjective nature of the field.

This subjectivity leads to another significant hurdle: the therapist-client match. Anyone who has sought therapy knows that finding a good fit can be a frustrating process of trial and error. A therapist's style, personality, and approach might work wonders for one person but be completely ineffective for another. And how do we even define a "good fit"? Is it about feeling understood? Is it about being challenged? Is it about seeing measurable progress? The answers vary from person to person, making it difficult to create a standardized system for matching clients with the right therapists.

AI's Surprising Aptitude for Understanding Us

This is where artificial intelligence enters the conversation, and its capabilities might be surprisingly more human than we think. Large language models (LLMs) are demonstrating a remarkable ability to understand, and even infer, an individual's psychological profile. By analyzing our words, our patterns of speech, and the way we express ourselves, these models can build a nuanced understanding of our personalities, our anxieties, and our emotional states. This allows them to interact with us in ways that feel genuinely pleasing and helpful.

This is not just a superficial understanding. As these models continue to learn and evolve, their knowledge of the human condition will only deepen. The very essence of our humanity--our fears, our hopes, our irrationalities--is becoming codifiable. This means that proven therapeutic techniques, from cognitive-behavioral therapy (CBT) to mindfulness practices, can be applied in a systematic and measurable way. Imagine a world where everyone has access to a therapeutic tool that can not only understand them but also deliver evidence-based interventions tailored to their specific needs.

The Promise of Measurable, Scalable Mental Health Support

The potential to codify therapeutic knowledge is a game-changer. It opens the door to a future where mental health support is not only personalized but also scalable and consistent. One of the major challenges in traditional therapy is ensuring that every practitioner is delivering high-quality, evidence-based care. With AI, we can build systems that are grounded in the most effective therapeutic techniques, ensuring that every user receives a baseline of quality care.

Furthermore, AI therapy can be infinitely scaled. In many parts of the world, access to mental health professionals is limited by cost, geography, and stigma. An AI-powered solution could be available to anyone with an internet connection, at any time of day or night. This accessibility could be transformative for individuals who have been unable to seek help through traditional channels. The ability to measure outcomes is another significant advantage. By tracking progress and analyzing data, we can gain unprecedented insights into what works and what doesn't, leading to a continuous cycle of improvement in mental health care.

The Critical Need for Guardrails and Ethical Oversight

Of course, with great power comes great responsibility. The idea of an AI delving into the depths of our minds is understandably unsettling for many. We must proceed with caution and establish robust guardrails to ensure that these powerful tools are used ethically and safely. The potential for psychological traps is real. An AI that is too agreeable or that reinforces negative thought patterns could do more harm than good. We need to build systems that are designed to challenge us, to push us toward growth, and to recognize when a human touch is needed.

Transparency and accountability will be paramount. Users need to understand how the AI works, what data is being collected, and how it is being used. There must be clear pathways for recourse if something goes wrong. The development of AI therapy must be a collaborative effort between technologists, mental health professionals, and ethicists to ensure that we are building a future that is not only innovative but also compassionate and just.

A Hybrid Future: The Best of Both Worlds

So, what does the future of therapy look like? It is unlikely to be a complete replacement of human therapists with AI. Instead, we are likely to see the emergence of a hybrid model that combines the strengths of both. It is very plausible that in the near future, many people will have a primary therapeutic relationship with an AI. This AI will be their first point of contact, their constant companion, and their personalized guide on the journey to mental wellness. 

However, this relationship will not exist in a vacuum. In an AI-integrated therapy model, it might be overseen by a human "therapy coach": a trained professional who understands both the art of therapy and the science of AI. They will check in periodically to see how the AI-led therapy is progressing, to offer guidance and support, and to intervene if necessary. This hybrid model ensures that we are harnessing the power of technology while retaining the irreplaceable value of human connection and oversight. It is a future where technology and humanity work together to create a world where everyone has the opportunity to thrive.

Conclusion

The journey of integrating AI into mental health care is just beginning. While the concerns are valid and the challenges are real, the potential for positive transformation is immense. By embracing a balanced and thoughtful approach, we can build a future where AI-powered tools and human expertise converge to create a more accessible, effective, and compassionate mental health landscape. The road ahead will require careful navigation, but the destination is a world where everyone has the support they need to flourish, and that is a goal worth striving for.


Postscript: The Evolutionary Foundation

As I reflect on the future of AI therapy, I'm increasingly convinced that a significant portion of our therapeutic work will need to address what I call the Paleolithic Paradox: the fundamental mismatch between our evolved psychology and the modern world we inhabit. We are, in essence, running Stone Age software on a Space Age operating system, and this creates the root of many of our cognitive, social, and emotional difficulties.

Most of our struggles aren't personal failures; they're the predictable result of ancient survival programming trying to navigate a world it was never designed for. We carry within us both inherited traits from millions of years of evolution (what some call the "adapted mind") and a sophisticated subconscious learning system (what I call the "adaptive mind"), real-time programming "software" that helped our ancestors survive in small tribal groups. The combination of these hardwired behaviors and subconscious training, now operating in a vastly more complex world, creates much of the internal conflict we experience.

This evolutionary perspective suggests that effective AI therapy could go beyond traditional approaches to include, and maybe even standardize, what I envision as "evolutionary psychology" that helps people understand their cognitive and emotional programming so they can work with it rather than against it. An AI system that understands both our ancient drives and the modern forces that exploit them could offer unprecedented insight into why we do what we do, and more importantly, how to redirect that tremendous evolutionary power toward the lives we actually want to live.

Human Agency: AI and the New Power to Be Creative

The New Agents

The term "agent" in AI often evokes programmed bots zipping through tasks with robotic efficiency. But what if the real agents are us--humans newly empowered to achieve what was once out of reach? Large language models (LLMs) and other AI tools are democratizing creation, much like photography’s evolution from a technical craft to an accessible art form. Yet, this shift brings challenges: resentment from those who mastered the "old ways" and a world that demands bold, entrepreneurial mindsets over the steady compliance of the past. Let’s explore how AI is redefining agency, the growing pains it is bringing, and why this transformation is worth embracing.

The Photography Revolution: A Lesson in Expanded Possibility

Photography used to be a fortress of technical mastery. Capturing a stunning image meant juggling exposure settings, shutter speeds, and aperture choices, then meticulously developing film in a darkroom. It was a craft, reserved for those who could marry artistic vision with deep technical know-how. But automatic cameras, followed by digital ones, changed everything. Suddenly, anyone with a good eye could create breathtaking images without wrestling with chemistry or physics. The barriers fell, and photography blossomed into a universal language.

There’s something undeniably pure about the old ways; understanding light’s nuances carried a certain noble satisfaction. But I don’t begrudge the smartphone-wielding novice who captures a masterpiece. Technology didn’t dilute photography; it expanded who could be a photographer, inviting countless new voices to tell their stories through images.

AI’s Agentic Leap: From Barriers to Breakthroughs

AI, particularly LLMs, is sparking a similar revolution across creative and intellectual domains. It’s turning users into "agents" of their own ideas, bypassing technical hurdles that once gated achievement. Consider the thinker with profound insights but a paralyzing struggle to write. Writing can feel like slow torture, as blank pages taunt and words struggle to make their way to the page. With an LLM, you can speak your ideas aloud, and the model drafts a coherent structure, acting as a tireless editor. The thinking remains yours; AI just builds the bridge to expression.

Or take the dreamer with a killer app idea but no coding skills. Before AI, turning a concept into reality meant years of learning to code or hiring expensive developers. Now, tools can generate code, suggest architectures, or even prototype apps from plain-language prompts. The visionary becomes the agent, steering the process without drowning in syntax.

This leap extends further:

  • Artists: Image-generation models let those with vivid imaginations but shaky hands create stunning visuals.
  • Educators and Learners: AI simplifies complex topics, personalizing explanations or simulating scenarios.
  • Entrepreneurs: From market research to business plans, AI empowers bootstrapping without elite expertise.

Just as digital cameras honored the photographer’s eye over darkroom skills, AI celebrates the human spark--your insight, passion, or perspective--over the technical grind.

The Gatekeeping Trap: Mourning the Old Ways

Yet, this shift isn’t without friction. One major hurdle is the instinct to gatekeep achievement based on the rigors of the past. I felt this myself with photography. After years of mastering exposure and film development, the rise of point-and-shoot cameras stung. It felt unfair that my hard-earned skills were suddenly "optional." Why should someone with no technical training produce work rivaling mine? This resentment is human, but it’s also a trap.

Gatekeeping assumes the old path’s difficulty was the point, when really, it was a means to an end: creating something meaningful. AI’s empowerment doesn’t diminish the value of traditional skills; it redefines who gets to participate. A novelist using an LLM to draft isn’t cheating; they’re still crafting the story. The photographer with an iPhone isn’t necessarily any lesser, and their vision still shapes the frame. Clinging to old metrics of "earning" success risks stifling the very creativity AI unlocks. The challenge is letting go of pride in the grind and celebrating the results, no matter the path.

The Compliance Conundrum: A World Built for Boldness

The second difficulty is deeper and more systemic. Our pre-AI, pre-internet world often rewarded steady compliance--think of traditional schooling, where memorization, adherence to rules, and predictable outputs were the winning combination. Success meant coloring inside the lines, whether in classrooms or cubicles. But AI’s world favors the entrepreneurial, the bold, the risk-takers. The person who can dream big, iterate fast, and adapt thrives as an AI-empowered agent. Not everyone is wired for that.

This shift can feel jarring. The meticulous planner who excelled in structured environments might struggle in a landscape that rewards audacity over precision. AI makes it easier to act on ideas, but it doesn’t teach you to dream them up or embrace the uncertainty of creation. For those accustomed to clear paths and external validation, this new agency can feel less like freedom and more like a tightrope walk.

The solution isn’t to force everyone into an entrepreneurial mold but to recognize that agency comes in many forms. Some will use AI to launch startups; others might craft personal blogs or streamline daily tasks. The key is fostering a mindset that sees AI as a partner in exploration, not a demand to become a Silicon Valley stereotype. Education and culture need to catch up, teaching adaptability and creative confidence alongside traditional skills.

A Personal Reflection: From Torture to Triumph

I’ve felt this transformation firsthand. Writing, for me, is like pulling teeth. My ideas flow in conversation, but the page is a battlefield. LLMs have changed that. I can ramble my thoughts, record them, and upload transcripts or speak directly to the LLM, and the model shapes my ideas and language into drafts, sparing me the agony of starting from scratch. The soul of the work, my ideas, and my voice remain mine. AI is my assistant, not my replacement. It’s liberating, not because it does all the work, but because it amplifies my agency.

This isn’t about shortcuts; it’s about access to the medium.

Embracing the New Agency

AI’s gift is the chance to become an agent of your own destiny, unshackled by technical barriers. But it demands we navigate two growing pains: letting go of gatekeeping that glorifies past struggles and adapting to a world that prizes boldness over compliance. These challenges aren’t small, but they’re worth confronting. The alternative is a world where only the technically elite create, and that would be a loss for us all.

There’s nobility in the old ways, whether mastering light or wrestling words, but there’s equal value in what’s born when barriers fall. AI isn’t necessarily replacing us; it’s potentially giving us the chance to evolve into agents of our own journeys, ready to shape the future with what we want to contribute.

New Webinar - "PRIVACY: Deliberately Safeguarding Privacy & Confidentiality in the Era of AI"

PRIVACY:
Deliberately Safeguarding Privacy & Confidentiality in the Era of AI

A Library 2.0 "AI Webinar" with Reed Hepler

OVERVIEW

In a time when powerful AI tools are rapidly reshaping how we interact with information, it is more important than ever to take a proactive approach to personal and institutional privacy. This session focuses on the realities and risks of data sharing with generative AI tools, whether intentional or inadvertent. We will explore how sensitive information can be exposed, extrapolated, and potentially misused, even when users believe their data is protected.

Through real-world cautionary tales and critical examination of privacy laws and ethical pitfalls, participants will learn about:

  • The ways AI can infer and extrapolate sensitive information from user interactions—sometimes beyond what is directly provided.
  • The legal and ethical hurdles surrounding privacy protections, including challenges in recognizing and proving privacy harms.
  • Best practices for anonymizing or minimizing data shared with AI, applying the principles of data minimization, least privilege, and informed consent (see the sketch after this list).
  • Strategies for both organizations and individuals to safeguard private and sensitive information, such as implementing rigorous access controls, relying on synthetic data, and maintaining vigilant data governance practices.
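
As a concrete illustration of the data-minimization principle in the list above, here is a minimal Python sketch that strips obvious identifiers from a prompt before it leaves your system. It is an assumption-laden toy, not material from the webinar: the regex patterns and the redact() helper are invented for illustration, and simple patterns like these will miss names and context-dependent identifiers, which is exactly the inference risk the session describes. Production use calls for a vetted PII-detection tool.

    import re

    # Illustrative patterns only; this is a toy, not a vetted PII detector.
    # Note that none of these patterns will catch personal names.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace recognizable identifiers with placeholder tokens so the
        original values never reach the AI provider (data minimization)."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    prompt = ("Patron Jane Doe (jane.doe@example.com, 555-867-5309) "
              "asked about interlibrary loan renewals.")
    print(redact(prompt))
    # Prints: Patron Jane Doe ([EMAIL], [PHONE]) asked about interlibrary loan renewals.
    # "Jane Doe" survives, showing why regexes alone are not enough.

The surviving name makes the session's point: minimizing data is a deliberate practice, and the weakest pattern in your pipeline defines what leaks.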

This interactive session is designed to challenge assumptions, highlight practical steps, and empower you to take deliberate control over privacy in your engagement with generative AI. By the end, you will be equipped to adopt a more privacy-conscious approach—balancing the benefits of AI with an unwavering commitment to ethical data stewardship.

LEARNING OUTCOMES:

  • Examine the various harms that privacy violations can cause when interacting with technology.
  • Describe specific strategies for protecting confidentiality.
  • Describe how to help library workers, patrons, and students protect their privacy when interacting with genAI tools.
  • Cite specific examples of AI tools’ fallibility.
  • Develop an explanation for patrons of the various hazards AI users face related to privacy and confidentiality violations.

This 60-minute online webinar is part of our Library 2.0 AI Series. The recording and presentation slides will be available to all who register.

DATE: Tuesday, October 7, 2025, 2:00 - 3:00 pm US - Eastern Time

COST:

  • $99/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.

ALL-ACCESS PASSES: This webinar is not a part of the Safe Library or Learning Revolution All-Access programs.

REED C. HEPLER

Reed Hepler is a digital initiatives librarian, instructional designer, copyright agent, artificial intelligence practitioner and consultant, and PhD student at Idaho State University. He earned a Master's Degree in Instructional Design and Educational Technology from Idaho State University in 2025. In 2022, he obtained a Master’s Degree in Library and Information Science, with emphases in Archives Management and Digital Curation, from Indiana University. He has worked at nonprofits, corporations, and educational institutions, encouraging information literacy and effective education. Combining all of these degrees and experiences, Reed strives to promote ethical librarianship and educational initiatives.

Currently, Reed works as a Digital Initiatives Librarian at a college in Idaho and also has his own consulting firm, heplerconsulting.com. His views and projects can be seen on his LinkedIn page or his blog, CollaborAItion, on Substack. Contact him at reed.hepler@gmail.com for more information.
 

The Illusion of Intelligence: Why Simulated Consciousness Feels Real Enough

In the age of AI, we're grappling with profound questions about what makes something "intelligent" or "conscious." But what if the answers lie not in the machine's inner workings, but in our own perceptions? I've been mulling over these ideas, and they point to a fascinating truth: simulation might be all we need, and perhaps all we ever get.

The Power of Simulated Consciousness

At the heart of this is the concept of simulated consciousness. We don't require an AI to be truly conscious in some metaphysical sense for it to feel conscious to us. Humans rely on heuristics and signals (subtle cues like emotional responses, self-awareness hints, or adaptive behavior) to judge if something is "alive" in our minds. If an AI mirrors human-like thought processes, empathy, or creativity, we respond as if it is conscious. In essence, AI simulates intelligence, and that's sufficient for most practical purposes. We're not detecting the real thing; we're reacting to a performance. But often that's true for people as well.

Human Perceptions: Flawed and Performative Yardsticks

Our judgments aren't objective; they're filtered through human values and perceptions. We equate intelligence with eloquence, quick wit, or persuasive arguments. But just because someone (or something) sounds intelligent doesn't mean they're thinking clearly or arriving at truth. And how much of what we do is performative, crafted to appear intelligent or sophisticated, carefully gauging the people around us, the setting, and what will be well-received? We often tailor our words and actions to fit social expectations, prioritizing approval over truth. This performative nature shapes not only how we present ourselves but how we evaluate others, including AI.

Humans aren't wired for unerring logic; evolution built us for survival through stories and narratives. We thrive on compelling tales that bind communities, explain the world, and motivate action, even if they're riddled with biases or fallacies. This narrative-driven nature explains why misinformation can spread like wildfire or why charismatic leaders can sway and manipulate masses away from obvious truths. AI, trained on vast amounts of human-generated data that we would often consider biased, slanted, or even outright propaganda, excels at crafting these narratives, making it seem profoundly intelligent. But is it? Or is it just reflecting our own storytelling prowess (and its inherent flaws) back at us? We live within Overton Windows of all kinds: shifting frames of acceptable ideas shaped by culture, media, and power structures that limit what we perceive as "normal" or "true," further entrenching these biases in both human and AI cognition.

Building Safeguards Against Our Own Traps

If our measures of consciousness and intelligence are so subjective and prone to error, how do we navigate toward actual truth? We can't rely on gut feelings or performative displays alone. As a species, we've long recognized our vulnerabilities--despite how intelligent we think we are, we're prone to cognitive errors, biases, and overconfidence. People who believe they're "super smart" are often the ones most blind to their flaws, falling into traps like confirmation bias or hubris.

That's why we've built substantial structures and safeguards into our societies and systems to counteract these tendencies. Consider principles like "innocent until proven guilty," trial by jury, peer review in science, checks and balances in government, and the separation of powers. These aren't just traditions; they're deliberate mechanisms to ensure decisions aren't made in isolation or based on flawed individual judgment. They force us to confront evidence, diverse viewpoints, and accountability.

A valuable complement to these is checking ideas against the well-known playbook of ways individuals, organizations, and institutions exploit our cognitive and unconscious shortcuts and triggers (well documented for over a century, starting eloquently with Edward Bernays's Propaganda). For AI, this means designing systems with built-in transparency, bias checks, and ethical frameworks that not only detect but actively counter these manipulations. It's not about making AI "truly" conscious but ensuring its simulations align with verifiable reality rather than seductive illusions. In a broader sense, this applies to human society too. To pierce through narratives and reach the truth, we have needed and continue to need tools such as the scientific method, diverse perspectives, and critical thinking education. Without them, we risk mistaking simulation or performance for substance.

What Does Synthetic Intelligence Really Mean?

So, what is "synthetic intelligence" in this context? It's the deliberate creation of systems that mimic human-like cognition without necessarily replicating its biological underpinnings. Synthetic doesn't imply fake or inferior; it suggests engineered, adaptable, and potentially superior in specific domains. But it forces us to confront our definitions: If an AI can simulate consciousness and intelligence so well that it outperforms humans in reasoning, creativity, or problem-solving, does the "synthetic" label even matter?

Quite honestly, as AI improves, it's likely to become smarter than us, perhaps even certain to, given the rapid pace of development. An AI equipped to cross-check against manipulation playbooks and navigate beyond human biases, unswayed by the performative pressures that shape our behavior, could arguably get closer to truth than a human ever could, unbound by our emotional triggers or limited perspectives. This isn't a doomsday prediction but a call to humility. By acknowledging our own limitations and the safeguards we've needed for ourselves, we can better prepare for a future where synthetic intelligence isn't just a tool, but a partner that elevates us beyond our innate flaws.

Ultimately, synthetic intelligence challenges us to redefine value. It's not about whether the machine is conscious, but how effectively it acts as if it were and what that means for our future. As we integrate these systems into daily life, the real test will be building them to enhance truth-seeking, not just narrative-spinning or performative displays.

Friday, September 12, 2025

The Trust Crisis

In the 1980s, Karl Albrecht sounded the alarm about America's failing customer service in his groundbreaking book Service America. He argued that poor service wasn't just an inconvenience—it was a national crisis undermining our economic vitality and social fabric. I work with Karl's son, Steve, an expert in library service, safety, and security (among other things), who hosts webinars on my Library 2.0 platform. Knowing of his father's work got me thinking: what might be the current crisis we face if poor service was the defining issue of the eighties?

I'd like to suggest that the crisis we face today is a profound erosion of trust.

This isn't hyperbole. We're experiencing a trust apocalypse that spans every sector: from corporate boardrooms to government halls, from our food supply to our digital feeds. And unlike past crises, this one is playing out in real-time, amplified by the internet's flood of independent sources that expose the gap between promises and reality.

Why Trust Matters

Trust isn't just about feeling good—it's the foundation of economic and social activity. For example, when trust erodes in the business world:

  • Companies lose customers who doubt their promises;
  • Employees disengage from leaders they can't believe;
  • Productivity stalls as the work contract feels broken, especially with talk of AI replacing workers;
  • Society fragments into competing tribes of suspicion.

Trust does not live in isolation. The loss of trust in each area bleeds into the others.

Where Trust Has Been Broken

Government and Politics: Promises Made, Promises Broken

Political leaders routinely campaign by giving promises of changes that are important to people, only to dramatically pivot once in office—undoubtedly swayed by lobbying groups that pour billions into influence and by their own self-interest. Every time this happens it feels like a slap in the face to the voters.

Recent events have accelerated this erosion:

  • Post-9/11 security promises that led to endless surveillance and wars;
  • Iraq War justifications based on weapons that didn't exist;
  • 2008 financial crisis responses that bailed out Wall Street while Main Street suffered;
  • NSA mass surveillance revelations that exposed privacy protections as largely fictional;
  • Outright scientific and criminal deceptions by the pharmaceutical and chemical industries;
  • Serious generational health degradations where logical causality is ignored;
  • Corporate data breaches and privacy violations that repeatedly compromised personal information despite security promises;
  • Manipulation of sentiments and behavior by clandestine social media operations;
  • Widespread deceit in the name of science that politicized public health and research;
  • Complete unraveling of the moral fabric of the American presidency across multiple administrations.

Today's generation has watched these promises crumble in real time, many being documented by whistleblowers and citizen journalists, but most often conspicuously unexamined by mainstream media. The result is a weird combination of voter apathy and highly emotional political tribalism, conspiracy thinking (and actual conspiracies), and a general populace that quite reasonably questions anyone who demands to be trusted by virtue of their position or authority.

Media: From Watchdog to Echo Chamber

Traditional media has always been a mouthpiece for the rich and powerful, but this has been exacerbated by the same political tribalism, ratings, and commercial funding. News outlets push partisan spin and sensationalism while social platforms weaponize our data for behavioral manipulation.

The rise of algorithmic content has created filter bubbles where facts fracture into "alternative truths." Meanwhile, deepfakes and AI-generated content are increasingly blurring the line between real and fake, making skepticism the default response to any information.

Financial Services: The Rigged Game

Perhaps nowhere is broken trust more evident than in finance and education. Banks promise security but deliver predatory practices—hidden fees, subprime loans, and speculative bubbles that enriched executives while devastating communities. This includes documented scandals, from Wells Fargo's fake accounts to systematic market rigging of interest rates, precious metals, and foreign exchange, plus credit agencies fraudulently rating junk securities as AAA investments.

Meanwhile, colleges market degrees as tickets to prosperity while saddling students with crushing debt for jobs that are increasingly not materializing. The "work hard and succeed" narrative feels like a cruel joke to generations facing stagnant wages and skyrocketing costs. I read last week that the average age of the American home buyer is now in the mid-fifties.

Healthcare and Food: Poisoning the Well

Trust in what we consume has been shattered by industries that prioritize profits over public health:

  • Big Pharma buries side effects and price-gouges life-saving medications;
  • Food companies hide additives behind misleading "natural" labels;
  • Chemical companies downplay toxins in everyday products;
  • Healthcare insurers turn healing into a financial gamble.

There are reasonably credible claims that iatrogenic deaths—deaths caused by medical treatment itself—account for between 200,000 and 400,000 deaths annually in the United States, potentially making medical errors the third leading cause of death. No wonder people have developed a deep distrust of the medical system.

For younger generations, this feels like a direct assault on their future—polluted environments and unaffordable care that blocks any path to security.

Energy: Environmental Betrayal

Climate change and environmental degradation are hard to refute but are highly politicized instead of thoughtfully discussed. Not unlike the tobacco companies (and, I believe, the food companies), energy companies have exemplified this betrayal, hiding relevant scientific data for decades while funding campaigns that muddied the waters.

Technology: Digital Manipulation

Tech giants have weaponized personal data in ways that shatter privacy and autonomy. Beyond targeted advertising, they manipulate elections, mental health, and even basic human behavior through addictive app designs, psychographic profiling, and behavioral nudging.

Their promise of connection and empowerment has devolved into surveillance capitalism, and users are the product being sold to the highest bidder.

Gender Relations: The Personal Becomes Political

Adding another layer to the crisis is growing distrust between the masculine and the feminine across workplaces, relationships, and politics. Men report feeling betrayed by changing social dynamics, while women express deep skepticism about safety and equity. The dueling caricatures capture the mood: all women are crazy; all men are useless and stupid.

This gender divide manifests in relationship anxiety, office politics, and political polarization—creating another fracture in social cohesion.

The Path Forward: Rebuilding Trust

Despite this grim landscape, trust can and must be rebuilt. But it requires fundamental changes in how our institutions operate:

For Leaders

Rebuild the triangle of trust between management, workers, and shareholders, with respect all the way around.

Commit to transparency and fairness:

  • Owning mistakes publicly;
  • Funding truly independent research;
  • Paying fair wages, not just competitive ones;
  • Mentoring younger employees instead of exploiting them.

Make promises you can keep:

  • Under-promise and over-deliver;
  • Build buffers into commitments;
  • Communicate constraints honestly;
  • Show the work, not just the results.

For Organizations

Reform accountability systems:

  • Create independent oversight boards;
  • Implement whistleblower protections;
  • Tie executive compensation to long-term outcomes;
  • Publish regular trust audits.

Design for trust:

  • Make privacy the default, not an option;
  • Use algorithms that inform rather than manipulate;
  • Create products that solve real problems;
  • Build sustainable business models.

For Society

Demand better:

  • Support companies that demonstrate trustworthiness;
  • Vote with your wallet and your ballot;
  • Share accurate information and call out deception;
  • Model the behavior you want to see.

A Trust Manifesto for Our Time

Just as Karl Albrecht's Service America rallied a nation around customer service, we need a modern manifesto centered on trust. The principles are simple:

  1. Truth in all communication
  2. Genuine opportunity for everyone
  3. Leaders who nurture rather than exploit
  4. Systems designed for transparency
  5. Accountability that means something

This isn't just about business—it's about reclaiming our shared future. In a world where every institution feels untrustworthy, the organizations that choose transparency, authenticity, and genuine care for stakeholders have always stood out as beacons, and they still can.

New Webinar: "AI’s Environmental Impact: Understanding the Data and Acting Sustainably"

AI’s Environmental Impact:
Understanding the Data and Acting Sustainably

An AI Webinar with Nicole Hennig

OVERVIEW: 

As teachers and librarians, you're on the front lines of introducing AI to students. But with headlines warning about AI's massive energy demands, how do you balance the need for AI literacy with the need to act sustainably?

This webinar cuts through the confusion to help you make informed choices about sustainability in your classrooms, libraries, and communities.

We’ll examine independent estimates of AI’s energy and water use and put them in context in ways that are easy to understand.

We’ll include an introduction to how data centers work and what they are used for. We’ll clarify what we know and what’s still uncertain about AI’s carbon footprint (both in the present and in future projections).

We’ll compare individual AI use to other digital activities, and we’ll also look at global use of data centers with statistics from the International Energy Agency.

Did you know that AI technologies are also being used to mitigate climate change? We’ll look at some of the many innovations underway related to greener data centers, hardware, and chips. And we’ll look at how AI is being used in projects that map deforestation, improve recycling, clean up the ocean, innovate new materials for greener buildings, and more.

You’ll come away with some practical tips for answering questions from students, and some simple advocacy steps to use in your communities.

LEARNING AGENDA:

  • Review statistics about energy use of AI in a clearer context than the usual “factoids” you see in many headlines.
  • Understand some basic facts about data centers and their use for AI and other technologies.
  • Look at the history of news reporting about the energy use of other new technologies (like online book ordering in 1999 and streaming media in 2020).
  • Compare the carbon footprint of individual uses of AI with uses of other technologies, and then zoom out to global use and what it could mean for climate change (a back-of-envelope sketch follows this list).
  • Examine the history of “Jevons paradox” and why it’s often quoted in relation to the growth of AI.
  • See examples of how AI itself is currently being used to mitigate climate change.
  • Get practical tips for answering questions about AI and sustainability, and learn what we can advocate for collectively related to sustainable AI.
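
To make that kind of comparison concrete, here is a hedged back-of-envelope sketch in Python. Every constant below is a placeholder, not the webinar's data: published per-query energy estimates alone have varied by an order of magnitude across studies and years, so these illustrative values should be swapped for current IEA or peer-reviewed figures.

    # Back-of-envelope energy comparison. All constants are illustrative
    # placeholders; substitute current IEA or peer-reviewed estimates.
    WH_PER_AI_QUERY = 3.0          # an often-cited upper-range estimate for one LLM query
    WH_PER_HOUR_STREAMING = 80.0   # rough estimate for one hour of video streaming
    WH_PER_HOUR_LED_BULB = 10.0    # a 10 W LED bulb running for one hour

    def queries_equivalent(watt_hours: float) -> float:
        """How many AI queries match the energy cost of watt_hours."""
        return watt_hours / WH_PER_AI_QUERY

    print(f"1 hour of streaming ~= {queries_equivalent(WH_PER_HOUR_STREAMING):.0f} queries")
    print(f"1 hour of LED light ~= {queries_equivalent(WH_PER_HOUR_LED_BULB):.0f} queries")

Whatever numbers you plug in, the exercise itself, dividing an activity's watt-hours by a per-query estimate, is the kind of context-setting the session aims to model.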

This 60-minute online webinar is part of our AI Series. The recording and presentation slides will be available to all who register.

DATE: Friday, October 17, 2025, 2:00 pm to 3:30 pm US - Eastern Time

COST:

  • $99/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.

ALL-ACCESS PASSES: This webinar is not a part of the Safe Library or Learning Revolution All-Access programs.

NICOLE HENNIG

Nicole Hennig is an expert in instructional design, user experience, and emerging technologies. She is currently an e-learning developer and AI education specialist at the University of Arizona Libraries.

Previously, she worked for the MIT Libraries as head of the user experience department. In her 14 years of experience at MIT, she won awards for innovation and worked to keep academics up to date with the best new technologies.

She is the author of several books, including Keeping Up with Emerging Technologies, Apps for Librarians, and Privacy & Security Online.

Librarians who take her courses are applying what they’ve learned in their communities. See their testimonials.

To stay current with the latest developments in AI, sign up for her email newsletter, Generative AI News, and follow her on Bluesky or Mastodon, where she posts daily about libraries, artificial intelligence, and other technologies.


Wednesday, September 10, 2025

The Noble Lie of Modern Schooling: How We Wound Our Children by Design

Introduction - The Uncomfortable Truth About School

A quiet but persistent question can echo in the back of the minds of those who've attended traditional public schools: Was that really about learning?


Over the years, I have spoken with several people in service jobs—haircutters, servers, retail workers—who, when I drilled past the normal pleasantries and asked about their actual school experiences, began to cry. This surprised me the first time; the consistent pattern that followed revealed something to me: for many, the experience of schooling is not one of joyous discovery, but of a slow, grinding erosion of curiosity, creativity, and self. It is a story of being wounded, not by overt malice, but by the very design of the institution itself. Is this a bug, or is it a feature?


I have come to believe that the deep, systemic harm inflicted by modern compulsory schooling is not an accidental byproduct of a flawed system, but rather the functional outcome of a system that emerged to serve the needs of social engineering—a modern manifestation of what Plato, in his Republic, called the "Noble Lie." This Noble Lie is essentially a functional fiction, a narrative that serves the system's needs rather than the truth.


This is the uncomfortable truth: that the primary function of our educational system is not to educate in the classical sense of drawing out the unique potential of each individual, but to sort, to stratify, and to condition a populace to accept its predetermined place in a social and economic hierarchy. In this, it is a stunningly effective, if deeply damaging, success. The thing that schools often do best is to teach most students that they are not good learners.

The Ghost of Plato in the Classroom - Understanding the Noble Lie

I’m not the first to reach these conclusions. Each generation has to rediscover the same troubling truth about educational systems (and the world). To trace this pattern, we can travel back over two millennia to ancient Athens, to the mind of a philosopher who wrestled with the fundamental question of how to create a just and stable society.


In his seminal work, The Republic, Plato constructs an ideal state, a utopia built on reason and justice. Yet, at the very foundation of this ideal state lies a profound and troubling paradox: the Noble Lie. What is this Noble Lie? It is a foundational myth, a story to be told to all citizens, from the ruling class to the lowest worker, to ensure their acceptance of the social structure.


Plato, through his mouthpiece Socrates, proposes a tale of three metals. The gods, the story goes, have mixed different metals into the souls of men at birth. Those destined to rule have gold in their souls; their auxiliaries, the soldiers and guardians, have silver; and the farmers and craftsmen have bronze or iron. This myth, Socrates argues, will persuade the citizens to accept their station in life, not as an accident of birth or a consequence of social injustice, but as a reflection of their innate nature, a divine and unchangeable reality.


The purpose is explicit: to foster social harmony, to eliminate dissent, and to create a stable, predictable, and rigidly stratified society where everyone knows their place and performs their function without question.


This dynamic resonates deeply with my own exploration of how narratives shape human behavior. In my own study, I’ve grappled with the tension between objective truth and the power of story. I have come to understand that people have been evolutionarily designed to be led by narratives and will never make objective truth the primary guiding principle, as human social survival mechanisms are often rooted in tribal stories and bonds rather than the scientific method.


Plato understood this fundamental aspect of human nature all too well. He recognized that a society is not held together by a shared understanding of empirical facts, but by a shared story. His solution was not to educate the populace into a state of objective understanding—a task he likely saw as impossible after seeing how Socrates was treated—but to craft a more powerful, more compelling, and ultimately, more useful story.


In essence, Plato was choosing to be a puppeteer in his own allegory of the cave. The Noble Lie is the ultimate expression of this philosophy: a recognition that in the realm of human affairs, narrative trumps all.

From Ancient Greece to Modern Classrooms

The power of the Noble Lie extends far beyond ancient philosophy. It manifests wherever those in authority need populations to accept their assigned roles without question. 


Consider the British colonial experience in India. Traditional Indian society had already developed a Noble Lie of extraordinary sophistication: the doctrine of karma and reincarnation. According to this narrative, one's position in the social hierarchy—whether born as a Brahmin priest or an untouchable laborer—was not an accident of birth or social injustice, but the direct result of actions in previous lives. Your current station was deserved, earned through the virtue or vice of past incarnations.


This cosmic justice system made social stratification not only acceptable but morally necessary. To question one's caste was to question the fundamental order of the universe itself.


When the British established colonial schools in India, they encountered a population already conditioned by millennia of this Noble Lie to accept hierarchical arrangements as natural and just. The colonial education system exploited this foundation, creating schools designed not to liberate minds or unleash human potential, but to produce a compliant administrative class—Indians educated enough to serve the colonial bureaucracy but not educated enough to challenge colonial authority.


The parallels to Plato's Republic are striking. Just as Plato's guardians were to be educated differently from the producers, the colonial system created different educational tracks for different social functions. The vast majority received minimal education designed to make them useful workers; a select few received more advanced training to serve as intermediaries between the colonial rulers and the ruled masses. The system was explicitly designed for social control, not human development.


Meanwhile, the Prussian school system was developing its own approach to mass compliance. Designed explicitly to create obedient soldiers and citizens, it provided a framework of regimented schedules, unquestioning obedience to authority, and the suppression of individual initiative in favor of collective discipline.


American public education appears to have emerged from the marriage of these two powerful traditions. While the Prussian system receives the most attention in discussions of American educational origins, there was certainly awareness of and likely influence from the British colonial approach, given America's colonial heritage and ongoing cultural connections to Britain.


The American system created something that demanded both compliance and justified social hierarchy through the appearance of fair competition. The American innovation was to wrap this Noble Lie in democratic rhetoric—replacing divine metals and karmic inheritance with the myth of meritocracy.

The Myth of Meritocracy

This brings us to the uncomfortable present. If Plato's Noble Lie, the foundational myth for his ideal republic, was the tale of being born of one of the three metals, the equivalent in our modern, democratic society is this idea of academic meritocracy.


We tell ourselves a story of education as a great equalizer, a fair engine of social mobility. But the underlying structure and function of the system tell a very different tale. The real lessons of school are not found in the official curriculum of math, science, and literature, but in what educators call the hidden curriculum—a set of powerful, unspoken lessons that condition children for a life of compliance and stratification.


The sorting reveals itself through endless testing, grading curves, and competitive ranking—tools that masquerade as beneficial educational instruments but function as mechanisms of social stratification. These are our modern equivalent of Plato's oracle, telling our children whether their souls are made of gold, silver, or bronze, and ensuring that they accept the judgment as a reflection of their innate worth rather than as the outcome of an arbitrary and often cruel game.


We may think that individual flourishing is the ultimate measure of success within this system. But I've come to believe the opposite is true. Individual flourishing cannot compare to the functional benefit of compliance and conformance that schools provide to those in power. The system uses the narrative of individual flourishing while practicing the teaching of submission and the diminishment of self.


This doesn't mean that those working in the system are trying to break individuals. Rather, it means the reason the system grew and gained strength was that it produces outcomes valuable to industry, wealth, and power—regardless of its stated noble purposes.

The Genius of Well-Intentioned Participation

The most insidious aspect of this system is that it doesn't require villains. It requires believers. The vast majority of people working within it—teachers, administrators, counselors, and support staff—are caring, dedicated individuals who entered education with a genuine desire to help young people learn and grow.


This is precisely how the Noble Lie functions most effectively. The most powerful lies are those told by people who believe them to be true, adults who see themselves as serving a noble purpose. Each acts with good intentions, yet each also serves as an agent of the sorting machine. I've observed that this pattern extends far beyond education. People work for large organizations, compartmentalizing their work and focusing on the virtue of their specific role while remaining reluctant to examine the outcome of the whole. Each person maintains their sense of moral purpose by focusing on their piece of the puzzle rather than examining what the completed picture actually looks like: everyone doing "good work" while the collective outcome serves different ends entirely.


The cost of confronting the system or leaving it is often simply too high: careers, mortgages, professional identities, and family security all depend on continued participation. The Noble Lie persists because it convinces its agents either that they are engaged in a noble enterprise or that they are powerless to change it, and, in education, that the sorting and stratification they facilitate is actually a form of care and guidance.

The Web of Stakeholders

This dynamic creates a complex web of stakeholders, each with compelling reasons to believe in the system's stated mission. Parents find themselves particularly caught in this web. Most assume, quite reasonably, that schools perform the function they claim to perform—that they educate children and provide a pathway to opportunity.


For many families, particularly those with the cultural and economic capital to "play the game" successfully, this assumption appears validated by their experience. These parents know how to navigate assignments and homework, how to advocate for their children's needs, how to decode the hidden rules of academic success. They see their children thrive within the system's parameters, earning good grades, gaining admission to competitive programs, and moving toward promising futures.


For them, school is indeed a net positive, and their success becomes evidence of its fundamental soundness. They become, quite naturally, defenders of an institution that has served their families well.


This creates a troubling dynamic around educational equity. Many parents genuinely believe that without public schooling, education would be unfairly distributed—that the system, whatever its flaws, is at least attempting to level the playing field. Yet this belief may mask the reality that the system often reinforces existing inequalities rather than eliminating them.


Meanwhile, parents who cannot or do not know how to help their children navigate the system successfully often become inadvertent evidence for educators' belief that they are providing something essential that children aren't receiving at home. This creates a self-reinforcing cycle: as schools assume more responsibility for children's development, parents feel increasingly disenfranchised from their children's education. Their reduced participation then becomes evidence that professional intervention is necessary, further justifying institutional expansion.

Why We Fall for It

The persistence of this Noble Lie isn't merely institutional inertia or misguided policy. It taps into something fundamental about human nature itself. Our evolutionary history shaped us to be natural followers of compelling narratives and authority structures.


As I've observed in my work on the "Paleolithic Paradox," we are creatures designed for a world that no longer exists, carrying ancient psychological programming into modern institutional contexts where it can be exploited. The human capacity to believe in and follow shared stories served our ancestors well in small tribal groups where social cohesion was essential for survival.


But this same capacity makes us vulnerable to institutional manipulation in complex modern societies. We are, as I've noted, "designed to be led" by narratives, and the Noble Lie of schooling is precisely the kind of compelling story that our evolved psychology finds difficult to resist. It promises order, fairness, and progress—all deeply appealing concepts to minds that crave meaning and structure.


This evolutionary inheritance helps explain why relatively few parents and students engage in the level of analysis required to see through the system's stated mission to its hidden function. The cognitive and emotional effort required to question fundamental institutional narratives goes against our natural inclination to accept stories told by recognized authorities.


The Noble Lie persists not because people are stupid or uncaring, but because questioning it requires a level of intellectual independence that runs counter to our deepest social instincts.

What This Means for Us

The hidden curriculum of compulsory schooling functions as a powerful engine of social engineering. We’ve documented the human cost of this system, and we’ve explored why we remain so susceptible to its promises.


The diagnosis is clear, the evidence overwhelming. But what does this understanding mean for those of us who see through the Noble Lie?


I've learned to be skeptical of grand solutions and systemic reforms. Large institutions, once established, have powerful incentives to maintain themselves regardless of whether they serve their stated purposes. Too many jobs, careers, identities, and belief structures depend on the current system for dramatic change to be likely.


The educational establishment will continue to promise reforms: better assessments, more accountability, innovative curricula, and technology integration. But these efforts typically amount to polishing the sorting machine rather than questioning its fundamental purpose.


My prediction isn't rosy; it's realistic: the Noble Lie will continue. The system is simply too successful at what it actually does, too embedded in our economic and social structures, and too aligned with our evolved psychological vulnerabilities to be easily dismantled or transformed.


But understanding the Noble Lie does something valuable: it frees us to make independent decisions about education based on what we actually want for our children rather than what we're told we should want. When we recognize that the primary function of schooling is sorting rather than learning, we can ask different questions. What is true success for our children? How do we help them understand the game and play it well without being trapped by it? What do we ultimately care most about?


Some families will choose to work within the system while consciously protecting their children from its more harmful aspects. Others will opt out entirely through homeschooling or alternative approaches. Many will find hybrid solutions that draw on the system's resources while avoiding the most damaging elements of institutional schooling.


The key insight is that once we see the system clearly, we're no longer trapped by its narrative. We can make choices based on our own values and our children's actual needs rather than the system's requirement for compliant, sorted citizens.


This understanding won't change the world overnight. But it can change how thoughtful people approach one of the most important decisions they make for their children. Sometimes, the most radical act is simply refusing to believe the Noble Lie—and making choices based on what we know to be true rather than what we're told is necessary.


In the end, perhaps that's enough. Not every problem can or should be turned into a systemic solution. More likely, the solution is simply clarity—seeing things as they are rather than as we're told they should be, and helping others to do so. With that clarity comes freedom, and with freedom comes the possibility of authentic choice.