Sunday, March 22, 2026

Undervaluing Librarians

I've been thinking about why libraries, and especially school libraries, declined at the exact moment information became the defining challenge of our time. I don't have a tidy answer. But I want to try out a reading of the situation that I think holds some explanatory power, and that might tell us something uncomfortable about what's coming next.

The surface-level story is simple enough. The internet made information abundant, which made libraries seem redundant. Budgets tightened. Positions were cut. School librarians were hit hardest, sometimes the first professionals eliminated when districts needed savings. That's the version most of us know.

But sit with the irony for a moment. The explosion of freely available information, much of it unreliable, much of it deliberately misleading, should have been the librarian's greatest moment. Here was a world suddenly drowning in information and desperate for the skills the librarian, as information specialist, had spent decades developing: how to evaluate sources, how to distinguish credible from questionable, how to navigate complex information systems with a critical eye. The need didn't diminish. It intensified. But the profession shrank, both in status and membership.

I think the explanation lies in a gap between two stories that we have been telling simultaneously for a long time, and the fact that almost nobody noticed they were different stories.

Two Stories

The story I’ve heard librarians tell, especially school librarians, went something like this: we help people become independent learners. We give students access to information outside the mandated curriculum. We create space for curiosity and self-directed inquiry. In a building organized around compliance and standardized outcomes, the library was the one room where a student could, at least in theory, follow a question wherever it led.

That story was and is true. The good school librarians who remain genuinely have been the one adult in the building whose job description was compatible with curiosity.

The story the school has told is different. It has gone like this: we have books.

That's it. And current library controversies are about which books they do or don’t have. The institutional justification for the library, I think it’s fair to say, has never been the intellectual function the librarian performed. It was the physical resource the library contained. The school basically saw inventory. A countable collection, a physical space, a line item that could be measured and, when necessary, cut.

I’m guessing that the librarians believed (or wanted to believe) that the institution shared their story. They thought when they said "we teach information literacy and support independent learning," the people making budget decisions heard the same thing. I don’t think they did. They heard "we house books." So the moment the books became unnecessary, the institutional justification evaporated. I haven’t been a principal, or a school board member, or even a librarian, but I think it’s fair to say that the librarian's actual value, helping a person navigate information independently and critically, had no line item for most schools. It was never what the school was actually purchasing.

The Pivot That Didn’t Land

This explains something that always puzzled me about the library profession's response to the internet. From what I saw, librarians tried to pivot. They genuinely did. They talked about information literacy, digital citizenship, and media literacy. They made the capability argument with real passion and real expertise.

But they were making a capability argument to an institution that could only understand resource arguments. You should have been able to defend a budget line with "I teach students to think critically about what they read." But I think that didn’t work in an era of increasingly mandated curricula. Instead it got defended with "we have 14,000 volumes and a computer lab." When the volumes became irrelevant and the computer lab moved into every student's pocket, the argument collapsed, not because the capability wasn't needed, but because the institution was never organized around it.

And then came the makerspaces.

I want to be careful here because I know many librarians who built wonderful makerspaces and did genuinely creative work with them. But I have always thought that the makerspace movement in libraries, when you looked at it honestly, was a survival strategy dressed up as innovation. 3D printers, laser cutters, robotics kits, these are wonderful things. But they are not information science; they are more aligned with the vocational arts (which were also disappearing). The presence of makerspaces in a library seemed like an unconscious confession: we can no longer justify this space with our actual expertise, so we are filling it with something the institution will fund.

In this interpretation, it was, painfully, a return to the original institutional logic. We have stuff. Just different stuff. The librarian stopped arguing "you need what I know" and started arguing "you need this room and this stuff." Which worked, in some cases, for a while. But it also completed the abandonment of the very claim that made the profession distinctive.

The Information Ecosystem Turns Adversarial

Here is where I think the librarian's story stops being a professional tragedy and starts being a civilizational warning. I know, I’ve switched to a pretty big canvas.

The internet didn't just make information abundant. It made information commercial. Google's original mission was to organize the world's information. That might be a librarian's mission statement, almost word for word. But something happens to idealistic missions when they become embedded in business models, and it happens reliably enough that I've come to think of it as a kind of law: any system that can be exploited for profit eventually will be, and the exploitation will be proportional to the system's size, scope, and reach.

Search results became ad delivery mechanisms. Ranking algorithms optimized for engagement, not accuracy. The information environment didn't just grow larger; it arguably grew adversarial. The system was no longer trying to help you find what you needed. It was trying to keep you in the ecosystem. That was round one.

AI is round two (or twenty, depending on how you want to count all the technology in between), and it's worse. Large language models aren't just delivering information shaped by advertising incentives. They're generating information shaped by whatever the model's ecosystem rewards. Right now, the AI companies are in their idealistic phase. They talk about helpfulness, truthfulness, and making knowledge accessible to everyone. The mission statements read like library charters.

And here is the parallel that keeps me up at night: those idealistic stories are true. Just as the librarian's story was true. The best people at AI companies surely believe in expanding access to knowledge, just as the best librarians genuinely believed in fostering independent inquiry. The truth of the story is not the problem. The problem is that truth is not what decisions get made on.

The business model will assert itself. The pressure to keep users engaged, to serve partner interests, to optimize for retention and revenue over accuracy and independence, all of that is coming. It isn't cynicism to say so. It's pattern recognition. It's watching what happened to search, to social media, to every information system that started with an idealistic mission and ended up governed by the logic of its business model. The idealistic narrative will survive as long as it's useful for growth. The moment it conflicts with profitability, it will be rewritten.

What the Librarian's Story Tells Us

So here is what I think the decline of the librarian, and the real decline in library use and relevance, actually reveal, if we're willing to look at them clearly.

We had a profession whose members felt a duty to the information consumer. Not to a publisher, not to an advertiser, not to a shareholder. The librarian's institutional obligation was to the person asking the question. That kind of alignment is vanishingly rare in the information ecosystem now. The people building AI products have obligations to investors and growth metrics. The people consuming AI outputs largely have no trained intermediary helping them understand what they're actually receiving.

And many librarians, especially school librarians, lost twice. First they lost their institutional home, because the institution only ever valued the container, not the function. Then they lost their story. The language of intellectual empowerment, of democratized access, of helping people find and evaluate information, that language now belongs to companies governed by dynamics that will, over time, subordinate everything to the demands of the business model.

I want to be clear that I'm not offering this as a definitive history. I'm offering it as a lens, a way of making sense of something that has always struck me as deeply strange: that we dismantled the one profession structurally aligned with the needs of the information patron, right before the information ecosystem became structurally aligned against them.

If that reading holds any truth, then the librarian's story isn't just an institutional casualty. It's a preview. And the question it leaves us with is the one that matters: if the profession built around serving the information needs of individuals couldn't survive the institution it was embedded in, what makes us think the idealistic promises of AI companies will survive theirs?

Sloppy AI

Merriam-Webster crowned "slop" its Word of the Year, defining it as digital content of low quality produced in quantity by generative AI. We all know slop when we see it. It's the movie review that opens with a compelling hook, deploys sophisticated vocabulary across six confident paragraphs, and somehow never says anything coherent about the actual film. It draws you in with the appearance of insight, keeps you reading with polished-looking sentences, but then leaves you realizing that it's not actually making sense.

But "slop" only describes the output. It doesn't tell us anything about the process that created it, or help us (and students) recognize when we're producing it. For that, we need the adjective: sloppy.

What "Sloppy" Actually Means

This isn't just wordplay. "Sloppy" carries a specific connotation that other words don't. It's not the same as "bad" or "careless" or "low-quality." Sloppy implies an avoidable mess — something made by a person who knew better, or should have, and chose not to bother. A sloppy report isn't one written by someone who lacked the skill. It's one written by someone who skipped the effort.

That distinction is precisely what makes "sloppy" the right word for the worst of the generative AI boom. The problem isn't that AI produces bad output. The problem is that people are using AI to avoid the effort that would make the output good — and then publishing the result as if the effort had been made. Sloppy AI usage is the act of substituting a prompt for the work the prompt was supposed to support.

Where Sloppiness Shows Up

Once you have this lens, you start seeing sloppy AI use everywhere — and you notice that the pattern is always the same. Someone uses AI to skip a step that shouldn't be skipped.

Sloppy sourcing is arguably the most dangerous category. Language models don't verify facts; they predict plausible next words. A 2025 study from Deakin University found that ChatGPT fabricated roughly one in five academic citations[2]. Lawyers have been sanctioned for submitting briefs full of hallucinated case law. The Chicago Sun-Times famously published a summer reading list recommending books that didn't exist. In each case, the sloppiness wasn't that AI hallucinated — hallucination is a known property of the technology. The sloppiness was that nobody checked.

Sloppy engineering follows the same pattern. AI can scaffold code and explain concepts effectively, but AI-generated code is causing problems everywhere from Amazon to the open-source software community. The failure mode isn't that AI wrote the code. It's that someone deployed it without the engineering discipline the code required, treating generation as a substitute for understanding.

Sloppy customer service is what happens when companies replace human support with chatbots to avoid staffing costs, then discover that the bot can't handle nuance, empathy, or edge cases. 

Sloppy content is the most visible category and the easiest to spot: it leans on filler phrases, presents shallow balance instead of actual analysis, and contributes nothing that wasn't already said better somewhere else. BuzzFeed's pivot to mass-produced AI content has been accompanied by mounting financial losses and a precipitous decline in market value, with the company now warning of "substantial doubt" about its ability to continue as a going concern. The problem wasn't that AI wrote the articles, it was that nobody ensured the articles were worth reading.

In every case, the underlying mechanism is the same. AI made it possible to skip a step. Someone skipped it. The result was sloppy.

Sloppy Thinking

One category deserves its own treatment, because it's less outwardly visible and more individually consequential than the others.

When we use AI to summarize every article, draft every email, and resolve every question, we begin to outsource the cognitive work that makes us capable of doing those things well in the first place. Researchers have described this as "cognitive atrophy": the gradual weakening of skills that aren't exercised. Ethan Mollick frames the paradox directly, stating that AI "works best for tasks we could do ourselves but shouldn't waste time on, yet can actively harm our learning when we use it to skip necessary struggles."

Sloppy thinking is the assumption that AI can do the hard work of understanding for you. It can't. It can produce text that resembles understanding, which is worse than producing nothing, since it lets you believe you've done the work when you haven't. This is the trap that makes all the other traps possible. Sloppy sourcing happens because someone didn't think critically about whether the citations were real. Sloppy engineering happens because someone didn't think carefully about whether the code was sound. The root of every sloppy AI failure is a moment where a human stopped thinking.

AI Is Not the Problem

Consider the automatic camera. Before it existed, producing a beautiful photograph required mastering the technical relationships between aperture, shutter speed, and film sensitivity, knowledge that excluded most people from the craft. The automatic camera removed that barrier. It expanded the number of people capable of capturing a striking image by orders of magnitude. But it didn't eliminate the need for the photographer. Someone still has to choose what to point the camera at, decide when to press the shutter, and recognize whether the result is worth sharing. The camera handles the exposure. The human handles the choices that reflect value (or don't!).

AI is the most powerful “automatic camera” ever built — for writing, for code, for analysis, for nearly every form of intellectual work. It can dramatically expand who is able to produce valuable output. But the value still depends on the choices a human makes before and after the tool does its part.

The Draft and the Deliverable

We wouldn't ban automatic (and now digital, smartphone-embedded) cameras because of the tsunami of low-effort photographs posted everywhere; that's a selection problem. Nor is the antidote to sloppiness banning AI. It's recognizing the difference between a draft and a deliverable. AI is genuinely powerful as a draft space: a place to explore ideas, go wide, generate options, and think out loud. The problems begin at the handoff, the moment something moves from private exploration to public use. A draft can be sloppy. A deliverable cannot. And right now, the most common form of sloppy AI usage is treating the draft as the deliverable. Publishing the first output, shipping the generated code, and sending the unedited email are all sloppy AI usage, because the output looked good enough to skip the step where a human makes it actually good.

The question isn't whether to use AI. It's whether, at the moment of handoff, a human applied the judgment, verification, and care that the task required. If the answer is no, the result is sloppy. Skip those choices, and you get slop.