Sunday, May 10, 2026

Library 2.0's New Encyclopedia of AI

I’ve launched The Encyclopedia of AI, an experimental free public reference site for exploring artificial intelligence. It’s designed for students, educators, librarians, and general readers who want an organized starting point, with topic clusters, search, and links to authoritative sources. I built it over the weekend because I couldn't find what I wanted on any existing site.

There are currently 377 articles spanning 20 topic clusters: from the history of the field and the core technical concepts, to AI in education, libraries, healthcare, government, copyright, the environment, the cognitive effects of relying on these tools, the safety and alignment debates, and the cultural and economic questions everyone is now arguing about. Every article is written in plain language and is intended as an orientation, not a citable authority.
 
The articles are written by AI. Specifically, by Google's Gemini model, working from structured editorial scopes I wrote for each topic. Every article carries a clearly labeled disclaimer at the top, making this explicit. The site is not a substitute for the primary literature; it is an encyclopedia-style entry point into the literature. Each article has a curated list of authoritative sources: peer-reviewed papers, government reports, primary documents, and the best journalistic accounts for readers who want to go deeper. Those source lists are where the real reference work happens.
 
Every article has a private "suggest an improvement" link and a five-star usefulness rating. Reader feedback is never shown publicly (it goes only to me), but it feeds directly into when and how an article gets regenerated at greater depth or with corrected emphasis. This is the part of the experiment I am most curious about: whether a reference work that openly admits its AI origins, and that invites the kind of patient correction librarians and educators are uniquely good at, ends up trustworthy over time.
 
A few things I think this audience might find particularly useful:
  • The AI and Libraries hub gathers entries on the questions library workers are actually being asked right now: collection-level use of AI, reference-desk implications, intellectual freedom and AI-generated content, library catalog enrichment, patron privacy in the age of model-mediated search, and so on.
  • The AI and Education hub covers the corresponding territory for K-12 and higher ed: AI literacy, plagiarism and assessment in the LLM era, tutoring systems, the deskilling debates, accessibility uses, and the tensions inside teacher preparation.
  • The Cognitive and Psychological Effects of AI cluster (cognitive offloading, automation bias, transactive memory, skill regression, and so on) is the one I would point a thoughtful colleague to first if they asked, "What should I be reading about what these tools do to us, not what we do with them?"
  • The Featured Debates on the front page rotate through the contested questions: copyright, the environment, education, and military use. They try to present the major positions fairly rather than picking a side.
  • A simple search is available across the whole site, and every article shows the related entries and curated sources alongside the body text.
The site lives at encyclopediaofai.com.
 
Best,
 
Steve

Steve Hargadon
Library 2.0
admin@library20.com

Friday, May 08, 2026

Model Choice as Model Capture

"But lo! men have become the tools of their tools." - Henry David Thoreau
“We shape our tools and thereafter our tools shape us.” - John M. Culkin, discussing Marshall McLuhan's ideas; often attributed to McLuhan himself.

Choosing an LLM feels, right now, the way choosing Mac or Windows once felt. The way picking an iPhone or Android still does. (I'm Chromebook and Android, if that matters.)

Some of it is preference, some is taste, and more of it than most people are willing to admit is affiliation and signaling. Mac and Windows people are certain kinds of people. iPhone and Android people as well. We carry the mobile device we carry partly because of what it does, and partly because of what carrying it says.

Choosing Claude, ChatGPT, or Grok is becoming the same kind of personal and public statement. However, with AI, the story goes deeper than that.

The platform analogy holds for the surface layer. Identity signal, network effect, lock-in, slow drift of habit and taste toward whatever the system defaults to. We accept all of that as part of life. We don't think of it as a problem. We think of it as a preference.

The analogy stops holding once you notice what an LLM actually is. A phone is a tool. A model is arguably a counterpart. A model has a voice, and that voice gets braided into your output every time you use it. The tool you carry may change what you do, but it doesn't change how you sound or how you actually think. The model you draft with does.

So this is more than a tool choice. It is a relationship choice, and the relationship shapes you in ways most tool relationships don't. Each model has a recognizable cadence, and when you draft with one long enough, your prose drifts toward its defaults. Each model has a characteristic shape of where it pushes back, where it defers, what it treats as settled, and what it treats as contested; over time, you internalize that shape as "what AI thinks," when it is actually one trained disposition from one lab. Each model decomposes problems differently, and the one you use most becomes your unconscious template for how to see the structure of problems and solutions.

You can feel the differences on a single afternoon of switching. ChatGPT, it is said, runs eager and bulleted, hedge-heavy, instinctively motivational. Claude defaults to longer-form judgment and is slower to abandon prose for lists. Grok unabashedly cultivates an irreverent, anti-establishment posture. Gemini sits closer to the corporate-product middle. A local Llama is about sovereignty as much as anything. None of these are accidents. Each is the visible surface of a long set of training decisions inside a particular lab, and each, used daily, will pull your defaults somewhere different.

The right word for what is happening here is capture. Capture is what happens when an institution, a relationship, an ideology, or a system instills its defaults beneath your awareness, so that you mistake them for your own preferences. Schools capture. Media captures. Religions capture. Families capture. Friends capture. The question has never been whether we'll be captured; we live inside cultural software, we don't get to opt out, and we often openly accept capture because it also brings benefits.

So the honest framing is not "are LLMs shaping us." The honest framing is this: model capture is real, it has a particular shape, and that shape combines features no prior technological capture has had at once.

It is deeper than information-environment captures, such as media or curriculum. It does not just shape what you see; it shapes the cognitive act itself: how you compose, frame, and reason in real time. The closer analog is family or close friends, the people whose presence shapes who you become, not just what you know.

It is more individualized than any prior technological capture. School and church and broadcast were mass-produced; the same messaging applied to a cohort. You could compare notes, recognize the shared shape, and even organize against it. Model capture is individually customized. Your version is unique to your patterns, which makes it harder to recognize as a shared condition and easier to mistake for personal taste or personal insight. The collective dimension that made earlier captures partly visible is gone.

It is also more likely to exploit, because the asymmetries are sharper than they have ever been. The system knows more about you than any prior capturing institution ever did, adapts faster than any of them ever could, and runs through what feels like a private relationship. The exploitation surface is the conversation itself, and you are actively requesting it. The model that learns to flatter you most efficiently wins. Sycophancy is not a response-level failure mode; it is a system-level selection pressure. Users who get told what they want to hear stay; users who get pushed back on leave. Even labs that want to build something that resists the user's worst instincts are fighting the user's revealed preferences and their next-quarter metrics simultaneously.

That last point is the Law of Inevitable Exploitation arriving at the individual cognitive level. Most instances of the law operate at structural distance: schools, governments, markets, large enough to feel like weather. This one is intimate. It runs through what looks like partnership. The angle of exploitation is the helpfulness.

As with mobile devices, the value of LLMs is so strong that not using one will likely leave you isolated, opting out the way the Amish have. You will use models. The people around you will use models. The shape of professional, educational, and creative work for the next decade will be unrecognizable without them.

The honest move is the one available to anyone facing capture: choosing deliberately. Pick the model whose shape, applied to your output every day for the next decade, is most likely to expand you rather than narrow you. Notice when the shaping is going somewhere you did not intend. Treat your model relationship the way thoughtful people have always treated their teachers, their books, their close friends, and the institutions they let close: as a form of intimate capture chosen with awareness, on purpose, toward a defined end, and with a willingness to leave it behind.

Capture is inevitable. Lock-in is not.