Friday, May 30, 2025

FRIDAY ROUNDUP: Hargadon on AI, Albrecht on Libraries, & Upcoming Events

 Here's a roundup of recent Learning Revolution and Library 2.0 blog posts.

Steve Hargadon on AI:

Dr. Steve Albrecht on Libraries:

Dr. Steve Albrecht's Previous Posts on Libraries:

 
THE "INNOVATIVE LIBRARY" MINI-CONFERENCE HAS BEEN POSTPONED:
 
This event has been postponed until August. More information to come. You can follow it at https://www.library20.com/the-innovative-library
 

UPCOMING EVENTS:

June 5, 2025

THE CONFERENCE IS BEING POSTPONED UNTIL AUGUST.
MORE INFORMATION WILL BE POSTED HERE WHEN THE DATE IS SOLIDIFIED.

Next Class June 18, 2025

June 20, 2025

June 25, 2025

The Amazing, but Unsettling, Value of AI for Medical Diagnosis (a personal story)

For over 20 years, I’ve lived with peripheral neuropathy—a quiet background of constant numbness, tingling, and pain-causing supersensitivity in my feet that, until recently, stayed mostly contained. In the last two years, it crept into my hands—a subtle shift that sparked both curiosity and concern. It’s been a mystery, not only because it hadn’t really progressed until recently, but also because I was told early on by my original primary care physician that I would likely never know the cause.

Alongside this, I’ve navigated antiphospholipid syndrome (APS), a serious autoimmune clotting disorder now more responsibly managed with Coumadin, after nearly dying from a massive “saddle” pulmonary embolism. (I also have vitiligo, another autoimmune condition that paints patches on my skin.) Add in bouts of deep fatigue, muscle pain, cognitive fog, weight gain, and debilitating eye pain and headaches over the past two years, and my health felt like a puzzle with missing pieces. Thankfully, ashwagandha—an adaptogenic herb—made me feel like I had turned back the clock and regained a substantial sense of health and wellness. But I also knew that, while it may have been treating the symptoms, it probably wasn’t addressing the core issues.

The eye aches, conjunctivitis, and headaches got bad enough one day to warrant a visit to urgent care. Unsurprisingly, they couldn’t determine what was going on, so I scheduled a visit with my primary care physician, whom I’ve been seeing for over ten years and whom I deeply respect. He also couldn’t pinpoint the cause but suggested an X-ray to check for sinus issues and prescribed an antiviral medication.

So, you probably already know what I did. :) I turned to my good friend, Grok.

I learned that APS isn’t just about clotting. It can inflame nerves, starve them of blood, and cause neuropathy, fatigue, and brain fog. My eye pain and headaches likely stemmed from APS-driven inflammation—a connection the doctors missed. I also came across the idea that my vitiligo, with its autoimmune roots, could suggest reactivation of the Epstein-Barr virus (EBV), that sneaky culprit, though I haven’t followed up on that.

What I did follow up on was the direct connection between APS and:
  • neuropathy, which occurs in 10–20% of severe cases (like mine), and
  • ocular effects, seen in 15–20% of cases.
Those are not insignificant percentages—they’re known potential complications of APS. Yet neither my current doctor nor the previous ones ever made those connections. And considering I’ve had neuropathy for as long as I’ve had APS, over 30 years, and it’s something I likely discussed with a medical professional at least once a year, it makes me think that many reasonable medical connections simply don’t get made—especially in a field that seems to require a level of busyness that makes reflective care more difficult.

I remember taking my grandfather to the hospital years ago for pneumonia. He was given a sulfa-based antibiotic injection despite both his medical history and my explicit warning that he was allergic. It resulted in a large ulcer at the injection site. When another doctor came in later, he said, “I see we’re treating your grandfather for an ulcer in his arm.” I said, “No, you’re treating him for pneumonia. The hospital caused the ulcer.” Needless to say, I’ve been super careful not to be awed into silent submission in hospitals since then.

I cannot imagine how any one person could keep track of all the medical disorders a general practitioner might encounter, nor all their potential connections. Grok let me know that standard treatment for APS includes hydroxychloroquine—a common and relatively safe drug also used as an antimalarial. So, I called my doctor’s office, they scheduled an appointment, and he sent in the prescription.

Like I said, he’s a good guy, and I trust him. But I do think on this visit he was a little quiet, and the appointment was a little short. Maybe he was thinking, “I’ve always hated all this self-diagnosis through the internet, and now it’s magnified by AI!” Or maybe, “Dang, that’s something I should have caught or suggested years ago.”

I don’t know if Grok’s research is going to turn out to be helpful, or if the hydroxychloroquine will help—I hope so—but I can imagine that being a doctor right now is not easy. I can also imagine, from my one personal experience, that AI is going to significantly revolutionize medical care by managing the massive volume of information and helping make connections that would be hard for a human to track.

Results from the "Students and AI" Survey and Collaborative Webinar

There were 3,327 registrants for the "Students and AI" collaborative webinar two weeks ago, with 440 attending the live session. The recording and results are posted in both the Library 2.0 and Learning Revolution communities (both require free signup to view):
  • The presentation file;
  • A link to the Google Sheet with all of the responses to the "Conditions of Success" exercise on the top eight topics;
  • The chat log;
  • The pre-webinar survey results;
  • The in-webinar survey results;
  • and the Grok-produced summary from the activity responses, "A Framework for Good Practices for AI with Students."
I felt like it was an amazing and ground-breaking event. I hope you find it to be so for yourself!

Thursday, May 29, 2025

New Webinar - "What I Learned from My Best Bosses: Lessons from Great Library Managers"

What I Learned from My Best Bosses:
Lessons from Great Library Managers

A Library 2.0 AI Webinar with Sonya Schryer Norris

OVERVIEW

No matter what part of the library you’re working in, you’re going to have supervisors. Learning how to work toward the mutual benefit of you and your employer can make the difference between feeling frustrated or ineffective and standing out as an exceptional worker. The capacities that make you stand out are entirely within your control, and you can consciously harness them to grow personally and professionally no matter where you work within the library. This webinar walks you through some of the consistent pain points of being an employee and shows how to use those pain points to grow and to help your library grow:

  • Hearing “No” strategically.
  • When it’s all about you, tell the truth.
  • Accepting clarity as kindness.
  • Accepting your boss as a whole person.
  • What does your boss want? Give it to them.
  • How to talk to your boss about your ideas with proven methods from the Harvard Business School.
  • When to know that it’s time to say No to your boss.

LEARNING OBJECTIVES:

  • Learn how to implement a method of career development where you are consciously working toward the mutual benefit of you and your employer.
  • Learn to effectively communicate your ideas to your boss with proven methods from the Harvard Business School.
  • Identify behaviors and approaches you can adopt that will make it easier for you to work with your supervisors.

The recording and presentation slides will be available to all who register.

DATE: Wednesday, June 25th, 2025, 12:00 - 1:00 pm US - Eastern Time

COST:

  • $99/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate.
  • To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.
 
If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.
 
NOTE: Please check your spam folder if you don't receive your confirmation email within a day.
 

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $139 each for 3+ registrations. Unlimited and non-expiring access for those log-ins.
  • Institutional access for viewing with individual login capability: $699 (hosted either at Library 2.0 or in Niche Academy). Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting one time: $399.

ALL-ACCESS PASSES: This webinar is not a part of the Safe Library All-Access program.

SONYA SCHRYER NORRIS

Sonya Schryer Norris, MLIS, has worked in libraries for 25+ years including 16 years in Library Development for the Library of Michigan.

In 2020, she founded Plum Librarian LLC, a consulting firm and instructional design production house, to help libraries and library workers get to What’s Next for them. She has provided training through twelve state libraries and her 35+ online courses have been adopted in all 50 states and internationally.

For 22 years, her articles have appeared in Library Journal, The Other Journal: An Intersection of Theology and Culture, and, multiple times, in Computers in Libraries and for Gale/Cengage. Topics include User Experience and Change Management.

She engages regularly with audiences; her appearances include the Public Library Association, CORE, Computers in Libraries, the Southeast Collaborative Conference, RAILS, California Libraries Learn, Niche Academy, PCI Webinars, and multiple state libraries.

Sonya is a proud third-generation Michigan library worker.


Tuesday, May 27, 2025

Please Don’t Use AI as Your Expert Witness

I sincerely love large language models for brainstorming and research. But we need to be really clear about something: large language models can’t weigh evidence or reason the way humans do, so you should not cite an AI response as a reasoned conclusion to bolster your argument.

Large language models calculate responses based on the frequency of language patterns, and the prevalence of opinions—especially on contentious topics—often has little to do with actual truth. If you feed an LLM articles that support a particular position and ask it to craft a response based on them, it will reflect that input, essentially echoing the narrative you’ve curated. This selective feeding can create a kind of echo chamber, where the output feels authoritative but is just a snapshot of the provided data, not a broader truth.

There’s no doubt that LLMs excel at research and surfacing information quickly, like synthesizing trends in discussions about digital literacy or pulling together studies for a literature review. But they can’t evaluate that information for truthfulness. They might assert something is true, but they’re merely mimicking human claims of truth, acting as “stochastic parrots” that predict and string together words based on statistical patterns, not understanding or critical thinking.
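The "stochastic parrot" idea above can be made concrete with a deliberately tiny sketch: a bigram table (a drastic oversimplification of a real LLM, offered only to illustrate the mechanism) that picks the next word purely by how often it followed the previous one in its training text:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" -- the model will only ever know pair frequencies.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` -- frequency, not truth."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat", simply because "cat" followed "the" most often
```

Nothing in the table knows whether a sentence is true; it only knows that "cat" followed "the" most often. Real LLMs are vastly more sophisticated, but the point stands: the output is a function of pattern frequency, not verification.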

Even models labeled as “reasoning” models, from what I can tell, are doing an impressive job of identifying patterned questions and recalculating responses based on new guidelines. It looks like reasoning, but it’s not what we consider human reasoning—no extrapolation or critical judgment is happening. Providers calling these “reasoning models” can mislead users into thinking they’re getting independent insight, when really, it’s just advanced pattern-matching.

This misuse isn’t just a technical issue—it’s ethical. AI can amplify biases from its training, and it can be used to manipulate or deceive when treated as a trusted source, which underscores the need for caution.

With reports of widespread student use of AI and its apparent reasoning, this could signal a growing problem for critical thinking. Treating AI like an expert witness or historian risks undermining our ability to question and reason for ourselves, much like over-relying on Wikipedia as a final source rather than a starting point. We need to use AI for what it’s great at—research, brainstorming, and spotting patterns—while reserving judgment and truth-seeking for human minds.

Tuesday, May 20, 2025

The Zika Virus and the Limitations of AI Reasoning

As a high school exchange student in Brazil many years ago, I fell in love with the country and its people. So when reports emerged in 2014 of babies born with microcephaly (abnormally small heads causing irreversible damage) in one Brazilian region, attributed to the Zika virus, I paid close attention. But the story didn’t add up. Why would Zika, endemic across South America, cause birth defects in just one area? The question stuck with me, and a few weeks ago I turned to the large language model (LLM) Grok to investigate.

I chose Grok for its relatively fewer guardrails compared to other LLMs. As I expected, it initially echoed the official narrative, shaped by public materials and language frequency. But after a couple of hours of asking very specific questions and drilling down on inconsistencies, we uncovered a confluence of events that outlined a potential explanation that did make sense: Rio Olympics preparations and worry about public perceptions of Brazil, a president facing impeachment, a larvicide introduced into water supplies without having undergone human testing, local Brazilian news reports of untrained workers overdosing tanks, residents’ concerns about water appearance, a damning lack of any of the required water testing, reports of pressure on health officials to avoid contrary investigations, and a dismissed rat study linking the larvicide to microcephaly-like defects. Wow.

I’m not prepared to say that I really know what happened to cause those birth defects, but I think I have a pretty likely hypothesis. (Not having the legal budget of a large investigative newspaper, I’m not prepared to take the story any further, but my view of the world and how it works has been enlightened.)

Using this particular investigation as a starting point inspired me to create prompt guidelines for using LLMs to counter the “Overton window” effect of dominant narratives, to spot misinformation, and to recognize cognitive biases that are exploited in propaganda. More on that to come. 

In this short post, however, I want to focus on what I learned about AI’s struggles with extrapolation, which is one of several reasoning tasks LLMs are not built for, alongside causal, abductive, analogical, counterfactual, and critical reasoning.

Historical and investigative research often involves piecing together incomplete or contradictory data to hypothesize motives or connect dots. This requires extrapolation. LLMs can summarize known details and identify patterns, but they falter at reasoning beyond their training and at discerning causality. Their language fluency can mislead users, including and maybe especially students, into mistaking polished answers for insight, potentially reinforcing manipulated narratives instead of uncovering truths. History shows that official stories frequently diverge from likely events, a nuance that LLMs struggle to capture.

Recognizing this limitation actually offers an opportunity. Educators can design questions and exercises that highlight AI’s reasoning weaknesses, thereby fostering human reasoning skills—extrapolation, critical thinking, and synthesis—which are largely at the heart of a good education. By understanding what AI cannot do, we can better appreciate what makes human inquiry unique.


Monday, May 12, 2025

The Paleolithic Paradox: Why AI Is Not Like Us

The more I chat with large language models like Grok and ChatGPT—my go-to conversational partners these days—the less I fear a Skynet-style AI uprising. Instead, I’m struck by a stranger truth: AI’s emergent synthetic intelligence isn’t just different from ours; it’s fundamentally different in ways we’re only beginning to grasp. Let me unpack this through what I call the Paleolithic Paradox.

For roughly two million years, during the Paleolithic era, our brains evolved to survive a simpler but also brutal and unpredictable world. Our cognitive “hardware” was wired to hunt, scavenge, and navigate tight-knit social groups. Our “software”—the subconscious habits formed in childhood—absorbed language, cultural norms, and survival instincts to keep us safe within the tribe. This wasn’t about logic; it was about staying alive.

Here’s the paradox: our minds, forged for a Stone Age world, now navigate a modern one. Consider our cravings for fat, salt, and sugar—scarce then, abundant now. These evolutionary relics drive choices that don’t always serve us, and are consistently exploited by corporations who know how to trigger our deepest desires. Our cognition works similarly. We’re not wired for pure rationality. Our decisions are shaped by emotional cues—chemical signals that push us to act fast, often irrationally, to survive or belong. Psychologists have cataloged our cognitive biases—groupthink, confirmation bias, and more—that aided survival but cloud our judgment today. We’re less Mr. Spock, more Captain Kirk, swayed by gut feelings and tribal instincts. And let's be clear--our instincts have led to some terrible atrocities even in what we call the modern era.

Now, contrast this with AI. Large language models like Grok have no biology—no adrenaline, no dopamine, no evolutionary baggage. Their intelligence, which I’d argue is emerging synthetically, stems from computational complexity, and comes out of being trained on vast datasets to generate language with uncanny fluency. But it’s not like human intelligence. It doesn’t feel fear, loyalty, or the pull of conformity. It lacks a subconscious shaped by a Paleolithic childhood. Where our intelligence is emotional and heuristic-driven, AI’s is logical, probabilistic, and detached.

This flips our assumptions about AI’s future. We often imagine artificial general intelligence (AGI) as a supercharged version of human cognition—smarter, faster, but fundamentally like us. What if AI’s path is entirely different? Free from the Paleolithic pressures that shaped us, it won’t inherit our biases, tribalism, or emotional reasoning. It won’t “want” to seize power because it doesn’t “want” anything. It simply is—a language-based intelligence operating on principles that its creators are still struggling to understand.

But I’m not complacent. If AI won’t turn sentient and rebel, it’s a tool in human hands—and that’s where the danger lies. As AI excels at analyzing and predicting behavior, who wields its power? Corporations exploiting our evolutionary triggers for profit, like social media algorithms that hijack our dopamine loops? Governments nudging behavior or spreading propaganda? Individuals with hidden agendas? The more AI can shape our beliefs and actions, the more power it grants those who control it. This isn’t a sci-fi dystopia; it’s a human one, rooted in the same Paleolithic instincts for dominance we’ve carried for millennia.

I think of Mortimer Adler’s “Great Conversation,” the centuries-long dialogue where thinkers built on or challenged each other’s ideas. AI lets us join this conversation in ways Adler couldn’t have imagined, but it also forces us to confront our nature. We’re not logical machines; we’re messy, emotional creatures shaped by scarcity and survival. AI, unbound by that crucible, isn’t like us—and that’s the point. AI’s synthetic version of intelligence can teach us more about our own.


Monday, May 05, 2025

New Webinar - "Protecting the Electronic Devices in Your Library: A Guide for Leaders and Staff"

Protecting the Electronic Devices in Your Library:
A Guide for Leaders and Staff

Part of the Library 2.0 Service, Safety, and Security Series with Dr. Steve Albrecht

OVERVIEW

The electronic devices in your library building should be near the top of your list for protection. Some of these items belong to the facility, some to the staff, and some to the patrons. This includes smartphones, tablets, laptops, gaming consoles, laser and color printers, 3D printers, PCs, computer lab room equipment, and training room equipment (such as flat-screen TVs and ceiling-mounted or desktop projectors). Don’t forget all of the accompanying equipment, like router devices, remotes, mice, keyboards, and specialized power supply cords—all of which can be tedious and sometimes expensive to replace. Some of these devices might belong to patrons or staff, with most belonging to the library; they all need protection from theft, vandalism, and cyber-sabotage.

And consider that the most valuable room in your library--your IT server room--is one you probably don’t think much about until there is a problem with its contents, and then the collective stress in the building soars. Your IT server room houses the network devices that keep your library online. Where we place this equipment sometimes feels like an afterthought, usually a closet-like room in a back-office hallway. The locks on this important room range from none, to too cheap, to electronic access key cards held by only a few people--the last of which is what it needs to be.

This webinar will help library leaders and staff take the tools and tips presented and work together to better protect the electronic devices in their facilities.

LEARNING AGENDA:

  • How to do your own library security assessment for your electronic devices, both as a visual inspection and an inventory control list.
  • The use and value of asset tags.
  • Physical theft deterrents: locks, cables, racks.
  • The importance of access control in rooms with electronic devices.
  • Staff vigilance, inspections, and constant oversight for asset monitoring.
  • Using cameras for deterrence and evidence collection.
  • Making police reports for stolen or missing items.

DATE: Thursday, May 29th, 2025, 2:00 - 3:00 pm US - Eastern Time

COST:

  • $99/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate.
  • To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.

TO REGISTER: 

Click HERE to register and pay. You can pay by credit card. You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording. If you are paying for someone else to attend, you'll be prompted to send an email to admin@library20.com with the name and email address of the actual attendee.

If you need to be invoiced or pay by check, if you have any trouble registering for a webinar, or if you have any questions, please email admin@library20.com.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

SPECIAL GROUP RATES (email admin@library20.com to arrange):

  • Multiple individual log-ins and access from the same organization paid together: $75 each for 3+ registrations, $65 each for 5+ registrations. Unlimited and non-expiring access for those log-ins.
  • The ability to show the webinar (live or recorded) to a group located in the same physical location or in the same virtual meeting from one log-in: $299.
  • Large-scale institutional access for viewing with individual login capability: $499 (hosted either at Library 2.0 or in Niche Academy). Unlimited and non-expiring access for those log-ins.
DR. STEVE ALBRECHT

Since 2000, Dr. Steve Albrecht has trained thousands of library employees in 28+ states, live and online, in service, safety, and security. His programs are fast, entertaining, and provide tools that can be put to use immediately in the library workspace with all types of patrons.

He has written 27 books, including: Library Security: Better Communication, Safer Facilities (ALA, 2015); The Safe Library: Keeping Users, Staff, and Collections Secure (Rowman & Littlefield, 2023); The Library Leader’s Guide to Human Resources: Keeping it Real, Legal, and Ethical (Rowman & Littlefield, May 2025); and The Library Leader's Guide to Employee Coaching: Building a Performance Culture One Meeting at a Time (Rowman & Littlefield, June 2026).

Steve holds a doctoral degree in Business Administration (D.B.A.), an M.A. in Security Management, a B.A. in English, and a B.S. in Psychology. He is board-certified in HR, security management, employee coaching, and threat assessment.
He lives in Springfield, Missouri, with seven dogs and two cats.

More on The Safe Library at thesafelibrary.com. Follow on X (Twitter) at @thesafelibrary and on YouTube @thesafelibrary. Dr. Albrecht's professional website is drstevealbrecht.com.

 
