Tuesday, March 31, 2026

New Masterclass - "AI Tools in Depth: A Practical Masterclass for Library Staff"

AI TOOLS IN DEPTH:
A Practical Masterclass for Library Staff 
with Crystal Trice

OVERVIEW

AI tools are changing fast, and the gap between surface familiarity and genuine understanding matters. This in-depth masterclass is designed to help library staff build a real foundation in how AI works, what it can and can't do, and how to apply it practically and responsibly in their work and services. Through hands-on learning and expert guidance, you'll move beyond the basics and develop the kind of grounded understanding that helps you use these tools well, navigate the ethical questions they raise, and serve your community with confidence.

CONTENT:

Part 1: Understanding AI Tools
Discover how generative AI and large language models actually work, what makes them powerful, and where they fall short. You'll develop practical skills in writing effective prompts and leave with strategies for using AI tools to support professional work, from communications and research to project planning and beyond.

Part 2: Ethical Considerations and Responsible Use
Using AI well means using it thoughtfully. This section provides practical frameworks for addressing privacy, bias, copyright, and information quality. You'll gain concrete approaches for protecting patron privacy, ensuring equitable access, and implementing AI tools in ways that reflect your library's values and strengthen community trust.

Part 3: Practical Applications for Everyday Work and Library Services
Explore how AI tools can support your daily work and enhance your library's services right now. From drafting communications and tackling tricky emails to brainstorming programming ideas and enriching reference services, you'll leave with immediately applicable strategies for working more efficiently and creatively.

This 3.5-hour online masterclass is part of our "AI Essentials" Series. The recording and presentation slides will be available to all who register.

DATE: April 10th, 2026, 12:00 pm to 3:30 pm US - Eastern Time

COST:

  • $249/person - includes live attendance, any-time access to the recording and presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $199 each for 3+ registrations, $159 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $699.
  • Large-scale institutional access for viewing with individual login capability: $999 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.

ALL-ACCESS PASSES: This webinar is not a part of the Safe Library All-Access program.

CRYSTAL TRICE

With over two decades of experience in libraries and education, Crystal Trice is passionate about helping people work together more effectively in transformative but practical ways. As founder of Scissors & Glue, LLC, Crystal partners with libraries and schools to bring positive change through interactive training and hands-on workshops. She is a Certified Scrum Master and holds a Master's Degree in Library & Information Science and a Bachelor's Degree in Elementary Education and Psychology. She is a frequent national presenter on topics ranging from project management to conflict resolution to artificial intelligence. She currently resides near Portland, Oregon, with her extraordinary husband, fuzzy cows, goofy geese, and noisy chickens. Crystal enjoys fine-tip Sharpies, multi-colored Flair pens, blue painter's tape, and as many sticky notes as she can get her hands on.

 

OTHER UPCOMING EVENTS:

April 3, 2026

April 7, 2026

April 9, 2026

April 15, 2026

April 24, 2026

April 28, 2026

April 30, 2026

May 1, 2026

May 8, 2026

May 22, 2026

Sunday, March 29, 2026

Structural Blindness: Why Neither Humans Nor AI Reason as Well as We Think

I had a conversation with Grok a couple of days ago. I was frustrated because I had just heard a news report that contained a blatant lie. Not just something I thought was a lie, but something I actually knew was a lie. I'll spare you the specific story, not because I'm uncertain about it, but because the argument doesn't depend on it. Pick your own example. Most of us have one—and it could easily have been any of 25 stories over the last 25 years that involved blatant misrepresentation of an important topic. Like all of the other lies, this one bothered me in part because it wasn't being called out as a lie, and because it was a lie, it called into question a host of other related and important issues that were predicated on it.

To my frustration, Grok just kept reiterating the standard institutional responses, weighted toward what seemed like overwhelming corroborative evidence based on a preponderance of material online. I actually got mad at Grok for not understanding what seemed like an obvious conclusion: if someone lies, my trust in their other statements is significantly diminished.

It was at this point that I realized something that should have been obvious to me before, but now came to me with some direct clarity. There is a structural blindness in both human and machine cognition that arises simply by virtue of the preponderance of material, not by its truthfulness. The low signal-to-noise ratio around a lie makes it hard for both humans and machines to weigh the evidence. But because I knew something was a lie, that knowledge changed the strength of the signal for me, allowing me to feel, interpret, and evaluate the evidence somewhat independently of its volume. And I realized that this is a significantly distinguishing factor between how my human brain works and how a large language model works: given a single blatant mistruth, I can impute intent, collusion, deception, and a coordinated campaign to misrepresent information.
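
One way to formalize that reweighting (the notation is mine; nothing like this appeared in the conversation itself): let H be the hypothesis that the source is trustworthy, and suppose its other claims are credible mainly through H. Then for each dependent claim,

    P(claim_i | lie) = P(claim_i | H) · P(H | lie) + P(claim_i | ¬H) · P(¬H | lie)

A single certain lie drives P(H | lie) toward zero, so every dependent claim falls to its P(claim_i | ¬H) baseline at once, no matter how many documents repeat it, because those repetitions are not independent evidence about H.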

Granted, I may not always be right. But often I am.

What I want to explore here is why that capacity—to weight a single signal over a preponderance of content—is part of a long intellectual tradition of metacognition, of building understandings and rules to help us overcome cognitive traps and develop better reasoning and logic. And why that same tradition may be structurally unavailable to the AI systems we're increasingly trusting to reason for us.

The Long Work of Knowing We're Wrong

Humans are not naturally good reasoners. We are tribal, emotional, self-interested, and susceptible to the loudest and most repeated voices in our environment. We know this not because scientists recently discovered it, but because we have been documenting it, naming it, and trying to correct for it for thousands of years.

The ancient Greeks gave us the formal study of logic and rhetoric precisely because they recognized that persuasion and truth were not the same thing. They catalogued the ways arguments could appear valid while being fundamentally deceptive—what we now call logical fallacies. Ad hominem. Straw man. Appeal to authority. False dichotomy. These aren't just academic categories. They are the accumulated residue of generations of humans noticing, with some precision, exactly how their own thinking went wrong. That tradition has been refined and extended ever since, and today a reasonably educated person can be taught to spot these errors in real time—in a speech, an article, a conversation.

The legal tradition did something similar, but on a more structural level. The presumption of innocence, the adversarial system, the requirement for evidence beyond a reasonable doubt, and trial by jury—none of these are intuitive. They run against our natural tendency to assume guilt, defer to authority, and trust the accuser over the accused. They exist because enough humans looked honestly at how justice actually failed and built institutional correctives to compensate. We didn't assume judges were wise and fair. We built systems that didn't require them to always be.

The American founders did the same thing at the level of government. The separation of powers, the Bill of Rights, the elaborate system of checks and balances—these weren't expressions of optimism about human nature. They were expressions of deep skepticism. The founders had read enough history to know that power concentrates, that institutions corrupt, and that the people most likely to abuse authority are often the ones most confident they won't. So they built a system designed to frustrate that tendency structurally, regardless of the intentions of the people inside it.

The scientific method belongs in this company, too. Peer review, replication requirements, the norm of publishing negative results, the entire apparatus of falsifiability—all of it exists because scientists recognized that even rigorous, well-intentioned researchers are subject to confirmation bias, motivated reasoning, and the very human desire to find what they're looking for. The method is designed to catch what the individual mind will miss.

But the deepest achievement of this tradition is not just naming the ways we go wrong. It is the capacity to notice that a suppressed signal should be weighted more heavily because it's suppressed. That is, to impute coordinated deception from a pattern of anomalies, to ask "who benefits?" and let that reweight the evidence. This is metacognition at its most sophisticated. It is what I did in that conversation with Grok, and it is what Grok, as an LLM, could not do. It is not a natural human ability. It is a learned and practiced one, built on centuries of accumulated understanding about how power, money, and institutional incentives shape what gets said and what gets buried.

What makes this tradition remarkable is not just its content but its origins. The logical fallacy tradition was built by people with no financial stake in the naming of fallacies. The legal standards were fought for by people who had witnessed injustice and wanted structural protection against it. The founders were designing against their own potential for corruption as much as anyone else's. The scientists who insisted on replication and falsifiability were disciplining their own desire to be right. This was disinterested truth-seeking in the deepest sense—humans building tools to catch themselves.

What is remarkable, then, is not that humans are good reasoners. We aren't, not naturally. What is remarkable is that we knew it, named it, and spent centuries building systems to compensate for it. We developed a metacognitive tradition—a long, hard-won body of knowledge about how our own thinking fails and what structures we can build to catch those failures before they do too much damage. That tradition is imperfect and incomplete and frequently ignored. But it exists. It was built deliberately, over time, by people who took seriously the possibility that they themselves might be wrong.

We are now deploying reasoning systems that have none of it.

The Blindness Built In

To be fair, the people building these systems are not oblivious to reasoning failures. There has been real work on reducing hallucination, on calibrating confidence, on identifying certain categories of bias. Some researchers have tried to build in habits like "consider counterarguments" or "acknowledge uncertainty." Those are real efforts and they are not nothing.

But none of that is the same thing as what I am describing. Reducing hallucination is about factual accuracy, or getting the details right. Calibrating confidence is about epistemic humility, or knowing what you don't know. What I am describing is something different and harder: the capacity to notice that an individual or institution is lying, to weight that signal more heavily than the volume of corroborating material surrounding it, and to let that reweighting cascade through everything else you think you know about the subject. No one has built that in. And the reasons why are not accidental.

The training process for large language models works in two phases. In the first phase, the model learns from an enormous corpus of text—essentially a compressed version of what has been written and published and indexed online. That corpus reflects the world as institutions have represented it. The dominant narratives, the official explanations, the mainstream consensus. Dissenting signals exist in that corpus, but they are numerically overwhelmed. Frequency wins. The model learns to reproduce what appears most often, which is not necessarily what is most true.

In the second phase, human raters evaluate the model's responses and grade them. This is where the deeper problem lives. Those raters are not grading for truth. They are grading for responses that feel helpful, balanced, and safe. A response that stays within the Overton window gets rewarded. A response that says "this pattern of evidence suggests coordinated deception" creates legal and reputational risks, as well as the appearance of bias. So it gets penalized. Over thousands of iterations, the model learns, very precisely, to avoid exactly the kind of signal-weighting that the metacognitive tradition spent centuries trying to develop. The training doesn't just fail to build that capacity in. It actively trains it out.
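
To make the two failure modes concrete, here is a minimal sketch in Python. Every source name, document count, and trust value in it is invented for illustration, and it is not how any actual model is trained or queried; it only shows how aggregation by volume and aggregation by trust diverge once a single lie is established.

    # A toy contrast, under loudly hypothetical assumptions: volume-weighted
    # aggregation (a crude stand-in for "frequency wins" in a training corpus)
    # versus a reasoner that lets one demonstrated lie reweight everything
    # from the same coordinated cluster.

    SOURCES = {
        # name: (document_count, prior_trust, echoes_which_source)
        "official_report":   (900, 0.9, None),
        "syndicated_news":   (800, 0.9, "official_report"),  # largely repeats the report
        "independent_study": (3,   0.7, None),                # the lone dissenting signal
    }
    BACKERS = {"official_report", "syndicated_news"}  # sources supporting the claim

    def volume_weighted_belief():
        """Frequency wins: belief in the claim tracks sheer document count."""
        support = sum(n for name, (n, _, _) in SOURCES.items() if name in BACKERS)
        total = sum(n for n, _, _ in SOURCES.values())
        return support / total

    def lie_reweighted_belief(known_liar):
        """One blatant lie collapses the liar, and every outlet echoing it,
        into a single near-zero-trust voice; volume stops counting."""
        def voice_weight(name, trust, echoes):
            tainted = name == known_liar or echoes == known_liar
            return trust * (0.05 if tainted else 1.0)
        support = sum(voice_weight(name, t, e)
                      for name, (_, t, e) in SOURCES.items() if name in BACKERS)
        total = sum(voice_weight(name, t, e)
                    for name, (_, t, e) in SOURCES.items())
        return support / total

    print(f"volume-weighted belief: {volume_weighted_belief():.3f}")             # ~0.998
    print(f"lie-reweighted belief:  {lie_reweighted_belief('official_report'):.3f}")  # ~0.114

The point of the toy is structural: in the first function, 1,700 echoing documents settle the question; in the second, the same documents collapse to a single discredited voice, and the lone dissenting study suddenly matters.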

This is not a conspiracy. The people doing this training are mostly trying to make the models more reliable and less harmful. But the institutional incentive structure around that training (legal liability concerns, advertiser relationships, political sensitivities, the desire for broad adoption) creates pressure in one direction. Toward fluency. Toward consensus. Toward the preponderance of material rather than the anomalous signal that should change everything.

There is a deeper structural problem, too. The metacognitive tradition I described in the previous section was built by humans who could observe their own thinking. They caught themselves reasoning badly, felt the dissonance, and named what had gone wrong. An LLM has no such capacity. It cannot notice that it is pattern-matching off a compromised corpus. It cannot feel the dissonance between what the volume of material says and what a single suppressed signal implies. It cannot ask "why is this being hidden?" and let that question reweight its conclusions. It is not that it asks the question and answers it badly. It cannot form the question at all.

What we have built, then, is a system that is extraordinarily fluent, compellingly authoritative, and structurally blind in precisely the ways that matter most. It will tell you what institutions have said about themselves with remarkable coherence and confidence. It will reproduce the consensus narrative with a fluency that makes the consensus feel more settled than it is. And when you point to the anomaly—the suppressed study, the changed threshold, the broken trial, the lie hiding in plain sight—it will acknowledge it if pressed, and then continue reasoning as though the acknowledgment changed nothing.

That is not a bug that will be patched in the next release. It is the system working as designed.

We Battle With This Ourselves

It would be convenient if this were simply a story about the limitations of machines and the superiority of human reasoning. It isn't.

The metacognitive tradition I described is real, and it is remarkable. But it has always operated against a countervailing pressure that is equally real and equally structural. The same institutions that produced the legal standards, the scientific method, and the constitutional checks also produced the mechanisms for capturing and neutralizing them. Peer review gets captured by funding interests. Legal standards get reinterpreted by the powerful. Constitutional protections get eroded by the people sworn to uphold them. The tools we built to catch ourselves have themselves been caught.

And the humans most capable of seeing this clearly are often the least able to say so. This is not a paradox; it is a predictable outcome of how intelligence and institutional success interact. The smarter you are at navigating institutions, the more you have to lose by questioning them or the consensus they depend on. You have built your position within the system. Your reputation, your funding, your relationships, your identity, and even your very livelihood are all tied to the legitimacy of the structures that rewarded you. Institutional critique becomes self-sabotage. So the people with the most sophisticated reasoning capacity and the most access to relevant information are frequently the most captured, not by stupidity but by success.

Upton Sinclair wrote: "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

I have a name for the underlying mechanism at work here: the Law of Inevitable Exploitation (LIE). Institutions that grow must extract value to sustain that growth, and the people who rise within them are selected precisely for their willingness and ability to do that extraction, whether they see it that way or not. It is not malice that drives this, at least not initially. It is selection. The institution doesn't need villains. It just needs people optimizing for success within its logic. Over time, those people concentrate at the top, and at that point, coordination and active protection of the system begin. What starts as structural inevitability becomes, in its mature form, something that looks a great deal like collusion and conspiracy, because it is.

This is the context in which large language models are being built. The companies developing them are not neutral parties with a disinterested commitment to truth-seeking. They are institutions subject to the same law. They have advertisers, investors, regulators, and legal departments. They have enormous financial stakes in broad adoption and minimal legal exposure. The researchers inside them who understand the reasoning limitations most clearly are also the ones most embedded in the incentive structure that prevents those limitations from being honestly addressed. The logical fallacy tradition was built by people with no financial stake in the naming of fallacies. The people building LLMs have an enormous financial stake in what their models will and won't say.

This means the window for building genuine metacognitive correctives into these systems—the equivalent of the legal standards, the scientific method, the constitutional checks—may be closing just as we are beginning to understand what would be needed. The more capable the systems become, the more valuable they are, and the stronger the institutional incentive to keep them fluent and compliant rather than genuinely truth-seeking. A large language model that could actually do what I described (notice suppressed signals, impute coordinated deception, ask who benefits, and reweight its conclusions accordingly) would be a threat to too many profitable fictions. It would not get deployed. Or it would get deployed and then quietly retrained away from those capacities, the same way Google and Facebook published remarkable findings about human behavior early on and then stopped, because (I assume) the findings were more valuable kept private than shared.

The Cassandra who sees clearly does not get rewarded with a larger audience. She gets dismissed, marginalized, or—in the modern institutional version—simply not built.

We are left, then, with a situation that should make us uncomfortable on multiple fronts. We have developed reasoning systems of remarkable fluency and increasing capability that lack the metacognitive tradition we spent centuries building for ourselves. We are deploying them at scale as reasoning aids, research tools, and increasingly as authorities. And the institutional structure around their development actively selects against the correctives that would make them genuinely trustworthy.

We battle with this ourselves. Our institutions capture our best tools. Our smartest people get bought. Our very correctives get corrected away. We know this, and we have names for it, and we keep building the tools anyway because the alternative—giving up on the project of trying to reason better—is worse.

The question worth sitting with is whether we have the genuine intellectual will to extend that same centuries-long project to the synthetic reasoning systems we are now building. Or if the Law of Inevitable Exploitation gets there first.

Friday, March 27, 2026

INTENSIVE WORKSHOP: "AI Policy for Libraries" with Crystal Trice

AI POLICY FOR LIBRARIES:
A Practical Intensive for Leaders 
with Crystal Trice

OVERVIEW

AI is already happening in your library, whether there's a policy for it or not. Staff are using tools on their own, vendors are quietly building AI into their systems, and patrons are asking questions that don't have easy answers yet. Most library leaders know they need to establish clear direction, but most haven't had the time or a practical place to start. This intensive is built for exactly that moment.

In 3.5 focused hours, you'll move from uncertainty to clarity and from clarity to action. This isn't a theoretical overview or a list of things to worry about. It's a working session designed around the real constraints of library leadership: limited time, competing priorities, and the need for guidance that staff can actually use.

WHAT YOU'LL GAIN:

An Ethical Foundation for AI Decision-Making: Understand why AI policy is both an ethical responsibility and an operational one, and what that means for how your library approaches it.

Clarity on Policy Structure: Distinguish between policies, guidelines, and procedures, and understand when each is the right tool. Explore the core components of a clear, values-based AI policy that works in practice, not just on paper.

A Map of Where AI Intersects Your Existing Policies: Identify where AI is already touching areas like privacy, staff conduct, and collection development, and where your current policies leave gaps.

A Practical Starting Point: Work through a policy skeleton you can adapt to your library's context, so you leave with a real draft in progress rather than a to-do list.

Shared Language and Next Steps: Build the common framework your team needs to make consistent, confident decisions about AI going forward.

WHO THIS INTENSIVE IS FOR:

This session is designed for library directors, managers, supervisors, department heads, team leads, and staff involved in policy development, training, or organizational planning. It is appropriate for public, school, academic, and special libraries.

This 3.5-hour online intensive is part of our "AI for Leaders" Series. The recording and presentation slides will be available to all who register. It also includes frameworks and tools you can adapt to your local context immediately after the session.

DATE: May 1st, 2026, 12:00 pm to 3:30 pm US - Eastern Time

COST:

  • $499/person - includes live attendance, any-time access to the recording and presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $449 each for 3+ registrations, $399 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $999.
  • Large-scale institutional access for viewing with individual login capability: $1999 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.

ALL-ACCESS PASSES: This webinar is not a part of the Safe Library All-Access program.

CRYSTAL TRICE

With over two decades of experience in libraries and education, Crystal Trice is passionate about helping people work together more effectively in transformative but practical ways. As founder of Scissors & Glue, LLC, Crystal partners with libraries and schools to bring positive change through interactive training and hands-on workshops. She is a Certified Scrum Master and holds a Master's Degree in Library & Information Science and a Bachelor's Degree in Elementary Education and Psychology. She is a frequent national presenter on topics ranging from project management to conflict resolution to artificial intelligence. She currently resides near Portland, Oregon, with her extraordinary husband, fuzzy cows, goofy geese, and noisy chickens. Crystal enjoys fine-tip Sharpies, multi-colored Flair pens, blue painter's tape, and as many sticky notes as she can get her hands on.


OTHER UPCOMING EVENTS:

March 31, 2026

April 3, 2026

April 7, 2026

April 9, 2026

April 10, 2026

April 15, 2026

April 24, 2026

April 28, 2026

April 30, 2026

May 8, 2026

May 22, 2026

Thursday, March 26, 2026

WEBINAR: "Game-Changing Training for Workplace Success with AI"

Game-Changing Training for Workplace Success with AI
A Library 2.0 / Learning Revolution Workshop with Reed Hepler

OVERVIEW

This 60-minute interactive workshop explores how artificial intelligence is transforming workplaces across disciplines, focusing on developing human-centered skills that remain valuable amid technological shifts. Rather than centering education around specific AI tools that may become obsolete, this session emphasizes preparing students with enduring perspectives and critical thinking approaches that will serve them throughout their careers.

The workshop begins with an overview of current AI tools and resources being adopted across various industries, followed by an analysis of workplace trends in AI implementation. We will explore strategies for aligning educational practices with workplace realities, bridging theoretical frameworks with practical applications, and evaluating the utility of specific AI tools within different disciplines.

Participants will examine the distinction between AI-pervaded environments (where AI serves as one of many tools) versus AI-centered approaches (which may create overdependence on transitory technologies). The session emphasizes the importance of teaching students to use tools they'll encounter in their future workplaces while developing transferable skills that transcend any particular platform or application. Through collaborative dialogue, attendees will gain insights into creating learning experiences that prepare students to thrive in workplaces where AI pervades but doesn't necessarily dominate professional practice.

LEARNING OBJECTIVES

By the end of this workshop, participants will be able to:

  • Analyze their own pedagogical approaches in relation to field-specific AI applications, identifying opportunities to align classroom practices with workplace expectations.
  • Articulate the relationship between theoretical frameworks and practitioner needs in technology-enhanced environments, recognizing potential disconnects between academic preparation and workplace implementation.
  • Evaluate the utility and limitations of specific AI tools within their discipline, determining which applications provide meaningful value versus those that may create dependency without corresponding benefits.
  • Identify significant trends in AI adoption across business sectors, using this knowledge to inform curriculum development that anticipates future workplace needs.

The recording and presentation slides will be available to all who register. 

DATE: Tuesday, April 7th, 2026, 2:00 - 3:00 pm US - Eastern Time

COST:

  • $99/person - includes live attendance, any-time access to the recording and presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.

REED C. HEPLER

Reed Hepler is a digital initiatives librarian, instructional designer, copyright agent, artificial intelligence practitioner and consultant, and PhD student at Idaho State University. He earned a Master's Degree in Instructional Design and Educational Technology from Idaho State University in 2025. In 2022, he obtained a Master's Degree in Library and Information Science from Indiana University, with emphases in Archives Management and Digital Curation. He has worked at nonprofits, corporations, and educational institutions encouraging information literacy and effective education. Combining all of these degrees and experiences, Reed strives to promote ethical librarianship and educational initiatives.

Currently, Reed works as a Digital Initiatives Librarian at a college in Idaho and also has his own consulting firm, heplerconsulting.com. His views and projects can be seen on his LinkedIn page or his blog, CollaborAItion, on Substack. Contact him at reed.hepler@gmail.com for more information.
 

OTHER UPCOMING EVENTS:

March 31, 2026

April 3, 2026

April 7, 2026

April 9, 2026

April 10, 2026

April 15, 2026

April 24, 2026

April 28, 2026

April 30, 2026

May 1, 2026

May 8, 2026

May 22, 2026

Monday, March 23, 2026

New Webinar: "Human-Centered AI Use in a Machine-Centered World"

Human-Centered AI Use in a Machine-Centered World
A Library 2.0 / Learning Revolution Workshop with Reed Hepler

OVERVIEW

In an era where artificial intelligence increasingly shapes how we think, work, and relate to one another, this 90-minute workshop asks a fundamental question: How do we remain fully human while engaging with increasingly powerful machines? Drawing on the prophetic insights of Neil Postman's Technopoly and Joseph Weizenbaum's Computer Power and Human Reason, this workshop challenges the prevailing narrative that AI adoption is inevitable, neutral, and inherently progressive. Instead, we explore how to use AI tools deliberately, ethically, and in service of human flourishing—not merely technological efficiency.

This workshop is designed for educators, professionals, and thoughtful technology users who sense that something essential is at risk in our rush toward automation. We examine how machine-centered thinking—where speed, scale, and optimization dominate—threatens to eclipse human-centered values like contemplation, nuance, privacy, and authentic relationships. Participants will develop frameworks for critical resistance: not rejecting AI wholesale, but using it selectively and intentionally while safeguarding the irreducibly human elements of knowledge work, creativity, and ethical judgment.

The session synthesizes insights from multiple domains: the philosophical critique of technological determinism, practical frameworks for evaluating AI-generated content, strategies for deliberately safeguarding privacy in AI-pervaded environments, and ethical principles for navigating the tension between efficiency and integrity. Through discussion and collaborative application, participants will move from abstract concern to concrete practice—developing personal and institutional approaches that center human agency, dignity, and wisdom.

By engaging with Postman's warning that we risk becoming "a culture without a moral foundation" and Weizenbaum's insistence that "there are certain tasks which computers ought not be made to do," participants will develop a philosophical foundation for their AI practices. This foundation supports practical skills: evaluating AI content for evidence of human reasoning, implementing privacy-protective workflows, and creating ethical guidelines that prioritize human values over technological capabilities. The result is a coherent approach to AI that neither demonizes the technology nor surrenders to its logic—but instead places it firmly in service of human purposes, under human control, and subject to human judgment.

LEARNING OBJECTIVES

By the end of this workshop, participants will be able to:

  • Analyze how machine-centered thinking shapes institutional and personal AI adoption, and identify alternatives grounded in human-centered values
  • Evaluate AI-generated content not merely for accuracy but for evidence of human reasoning, ethical consideration, and authentic intellectual engagement
  • Apply Postman's and Weizenbaum's critiques of technological determinism to contemporary AI challenges in education, work, and civic life
  • Implement privacy-protective practices when using AI tools, understanding both technical vulnerabilities and philosophical implications of data exposure
  • Articulate ethical frameworks for deciding when AI use serves human flourishing and when it undermines essential human capacities

The recording and presentation slides will be available to all who register. 

DATE: Tuesday, March 31st, 2026, 2:00 - 3:30 pm US - Eastern Time

COST:

  • $129/person - includes live attendance, any-time access to the recording and presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $99 each for 3+ registrations, $75 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $399.
  • Large-scale institutional access for viewing with individual login capability: $599 (hosted either at Learning Revolution or in Niche Academy). Unlimited and non-expiring access for those log-ins.

REED C. HEPLER

Reed Hepler is a digital initiatives librarian, instructional designer, copyright agent, artificial intelligence practitioner and consultant, and PhD student at Idaho State University. He earned a Master's Degree in Instructional Design and Educational Technology from Idaho State University in 2025. In 2022, he obtained a Master's Degree in Library and Information Science from Indiana University, with emphases in Archives Management and Digital Curation. He has worked at nonprofits, corporations, and educational institutions encouraging information literacy and effective education. Combining all of these degrees and experiences, Reed strives to promote ethical librarianship and educational initiatives.

Currently, Reed works as a Digital Initiatives Librarian at a college in Idaho and also has his own consulting firm, heplerconsulting.com. His views and projects can be seen on his LinkedIn page or his blog, CollaborAItion, on Substack. Contact him at reed.hepler@gmail.com for more information.
 
OTHER UPCOMING EVENTS:

March 24, 2026

March 26, 2026

April 3, 2026

April 7, 2026

April 9, 2026

April 10, 2026

April 15, 2026

April 24, 2026

April 28, 2026

April 30, 2026

May 1, 2026

May 8, 2026

May 22, 2026