Tuesday, April 07, 2026

THURSDAY - "Perspectives on AI" Mini-Conference: Final Keynotes and Session Schedule Posted!


OVERVIEW:

AI is reshaping libraries in ways that raise hard questions and real opportunities, and library workers are responding with everything from skepticism to excitement to alarm. This three-hour mini-conference, "Perspectives on AI: Exploring Experiences with AI in Library Work" on Thursday, April 9, 10:30 am - 1:30 pm US-Pacific Time, is designed to honor that complexity so attendees can form their own informed, values-grounded view. 

The mini-conference will explore AI from the angles that matter to library workers: 

  • Understanding risks and potential harms;
  • Practical applications in library and administrative work;
  • Research and information literacy;
  • Leadership decision-making; 
  • Ethical considerations;
  • Supporting patrons who are navigating AI in their own lives.

Please join us for a conversation that will be as broad and honest as the topic deserves. Attendance is free and open to all. We currently have 3,200 registrations, and there is no cap on attendance, so invite your friends and colleagues to join here.

CONFERENCE CHAIR:

Greg Lucas
California State Librarian
OPENING KEYNOTE PANEL & SPECIAL ORGANIZER

Greg Lucas was appointed California’s 25th State Librarian by Governor Jerry Brown on March 25, 2014.

Prior to his appointment, Greg was the Capitol Bureau Chief for the San Francisco Chronicle, where he covered politics and policy at the State Capitol for nearly 20 years.

During Greg’s tenure as State Librarian, the State Library’s priorities have been to improve reading skills throughout the state, put library cards into the hands of every school kid and provide all Californians the information they need – no matter what community they live in.

The State Library invests $10 million annually in local libraries to help them develop more innovative and efficient ways to serve their communities.

Since 2015, the State Library has improved access for millions of Californians by helping connect more than 1,000 of the state’s 1,129 libraries to a high-speed Internet network that links universities, colleges, schools, and libraries around the world.

Greg holds a Master’s in Library and Information Science from San José State University, a Master’s in Professional Writing from the University of Southern California, and a degree in communications from Stanford University.

OPENING KEYNOTE PANEL:

Andres Ramirez
Director of Partnerships, AI Safety Awareness Foundation
OPENING KEYNOTE PANEL

Originally from Caracas, Venezuela, Andres has been living in Chicago for the past 25 years (minus a hiatus in Canada, Colorado, and Scotland). Across a seven-year span, he worked with five start-ups as an integral member of their sales teams, helping them navigate and secure funding rounds. In 2024 he pivoted into AI safety, and he now leads the foundation's execution of its partnership framework.

 

Linda Braun
Principal of The LEO Group
OPENING KEYNOTE PANEL

Linda W. Braun is Principal of The LEO Group, where she works with libraries, schools, and nonprofits on strategic planning, organizational development, and program design. Much of her work sits at the intersection of culture change and systems — helping organizations move from transactional approaches to ones rooted in real relationships with the communities they serve. Her recent focus includes AI agent development and community-centered approaches to technology, including co-designing AI tools with the people who will actually use them. She serves on the Public Library Association's AI Task Force and has worked on projects with the California State Library, Workforce Council of Southwest Ohio, and Providence Public Library.

 

Nick Tanzi
Library Technology Consultant & Author
OPENING KEYNOTE PANEL

Nick Tanzi is an internationally recognized library technology consultant and author of the books Making the Most of Digital Collections Through Training and Outreach (2016) and Best Technologies for Public Libraries: Policies, Programs, and Services (2020). Tanzi is a past column editor for Public Libraries Magazine’s “The Wired Library,” and was named a 2025 Library Journal Mover & Shaker.

 

Robin Hastings
Library Services Consultant for the Northeast Kansas Library System
OPENING KEYNOTE PANEL

Robin Hastings is the Library Services Consultant for the Northeast Kansas Library System (NEKLS). In that capacity, she provides technology and library services consulting to the 40+ libraries in the NEKLS region and manages several statewide services in Kansas. She has presented all over the world on cloud computing, project management, disaster planning, and many other topics, and she teaches classes on library technology at Emporia State University and Library Juice Academy. Robin is the author of five books on library-related and technology topics as well as several articles in library-related journals.

CLOSING KEYNOTE:

Crystal Trice
Founder, Scissors & Glue, LLC

"5 Whys: No Easy Answers"

AI is reshaping how people access information, learn, and signal what they know. Before deciding how to respond, it helps to understand why everyone in the system is acting the way they are. This session uses the 5 Whys to explore five perspectives on AI and to make the case that libraries are exactly where this conversation needs to happen.

With over two decades of experience in libraries and education, Crystal Trice is passionate about helping people work together more effectively in transformative but practical ways. As founder of Scissors & Glue, LLC, Crystal partners with libraries and schools to bring about positive change through interactive training and hands-on workshops. She is a Certified Scrum Master and holds a Master’s degree in Library & Information Science and a Bachelor’s degree in Elementary Education and Psychology. She is a frequent national presenter on topics ranging from project management to conflict resolution to artificial intelligence. She currently resides near Portland, Oregon, with her extraordinary husband, fuzzy cows, goofy geese, and noisy chickens. Crystal enjoys fine-tip Sharpies, multi-colored Flair pens, blue painter’s tape, and as many sticky notes as she can get her hands on.

REGISTER:

This is a free event, held live online and recorded.

REGISTER HERE

to attend live and/or to receive the recording links afterward.
Please also join the Library 2.0 community to be kept updated on this and future events. 

Everyone is invited to participate in our Library 2.0 conference events, which are designed to foster collaboration and knowledge sharing among information professionals worldwide. Each three-hour event consists of a keynote panel, 10-15 crowd-sourced thirty-minute presentations, and a closing keynote. 

CONFERENCE SCHEDULE:

Here is the final conference schedule. Attendance instructions and session Zoom links will be sent to those who are registered (free):

10:30 am US - Pacific Time

  • Opening Keynote Panel: Greg Lucas, Andres Ramirez, Linda Braun, Nick Tanzi, and Robin Hastings

11:30 am US - Pacific Time

  • AI to Strengthen Relationships, Increase Visibility, and Reposition the Library as An Essential Partner in The Academic Mission: Sara Hack, Acting Associate Director, Learning Resources - Seminole, St. Petersburg College (Link to session description)
  • Evaluating What Happens When AI Is Embraced, Not Rejected: Lorena Jordan, Policy and Government Librarian, George Mason University (Link to session description)
  • Helping Patrons Navigate an AI-Embedded World: Eun Ah Lee, Programming and Engagement Librarian, Plano Public Library (Link to session description)
  • Pause, Prompt, Reflect: Teaching Metacognition in the Age of Large Language Models: Genova Brookes Boyd (she/her/hers), Assistant Professor of Library Science, University of Alaska Fairbanks, Elmer E. Rasmuson Library (Link to session description)
  • Real or Rendered: Detecting AI in the Wild: Kristina I. Dorsett, Research & Instruction Librarian, Wolfgram Memorial Library, Widener University (Link to session description)
  • "Vibe Coding" With AI in the Library: Doug Baldwin, Associate Director Piscataway Public Library, Piscataway, NJ | Jim Craner, The Galecia Group (Link to session description)
  • What it Would Take: Design Notes for Library-Grade AI: Chris Markman, Digital Services Manager, Palo Alto City Library | Melisa Mendoza, Nick Beber (Link to session description)

12:00 pm US - Pacific Time

  • AI in Academic Libraries: Bridging the Gap between Technological Possibilities and Institutional Realities: Mandira Bairagi, Scholar, Department of Library and Information Science, Rashtrasant Tukadoji Maharaj Nagpur University, Nagpur, India, Librarian, DVR & Dr HS MIC College of Technology | Dr Shalini Lihitkar (Link to session description)
  • AI Literacy Programs and GenAI tools at Toronto Public Library: Sumaiya Ahmed, Librarian, Innovation (AI Upskilling Services), Toronto Public Library (Link to session description)
  • Human-Centered AI: Policies and Practices to Elevate—and Safeguard—the Library Workforce: Robin Hastings, Library Services Consultant, Northeast Kansas Library System (NEKLS) (Link to session description)
  • Learning about AI through Science Fiction: Reed Hepler, Digital Initiatives/Copyright Librarian and Archivist (Link to session description)
  • Onboarding Made Simple for Any Department: David Daghita, Accounts Services Supervisor (Link to session description)
  • Practical AI in Public Libraries: Scott Lipkowitz, Assistant Director & Digital Services and Technology Librarian (Link to session description)
  • “Using AI or Refusing?”: Preliminary Statewide Survey Results on AI in Public Libraries: Kristin Fontichiaro, Clinical Professor, University of Michigan School of Information (Link to session description)

12:30 pm US - Pacific Time

  • AITD Generator: A Practical Tool for Implementing AI Use Disclosure in Academia: Sergio Santamarina (Librarian) (Link to session description)
  • Building AI Literacy: A Student Success Librarian’s Approach: Aída Almanza-Ferro, Student Success Librarian, Texas A&M University-Corpus Christi (Link to session description)
  • Building Worlds with AI: A New Zealand Public Library Approach to Creative and Responsible AI Engagement: Amy Chiles, Libraries Learning Specialist, Christchurch City Libraries (Link to session description)
  • "Defining what we do all over again!" Generative AI's Impact on Academic Library Reference Services: David E. Williams, Head of Research, Engagement, and Faculty Support, Xavier University of Louisiana, New Orleans, LA (Link to session description)
  • Introduction to Key AI Safety Concepts, and Mental Models for Thinking: Andres Ramirez, Director of Partnerships, AI Safety Awareness Foundation (Link to session description)
  • LIS Research Practice Using Generative Artificial Intelligence Tools: Ken Herold, California State University, Los Angeles (Link to session description)
  • The Age of Vibe-Coding: What Happens When Anyone Can Build Anything: Kyle Bylin, Research and Assessment Librarian, Saginaw Valley State University (Link to session description)

1:00 pm US - Pacific Time

  • Closing Keynote, "5 Whys: No Easy Answers": Crystal Trice, Founder, Scissors & Glue, LLC

PARTNERS:

This conference is a collaborative project of California Libraries Learn, the California Library Association, California State Library, and Library 2.0. It is supported in whole or in part by the U.S. Institute of Museum and Library Services under the provisions of the Library Services and Technology Act, administered in California by the State Librarian.


Monday, April 06, 2026

Levels of Thinking

My dad once said to me, with some sincerity, "You think about thinking. When I was your age, I didn't think about thinking." It was one of those moments: I remember where I was and what we were doing (I was in college and we were on a bridge watching a rowing regatta). He meant it as an observation more than anything, not necessarily a compliment, but I think he was genuinely intrigued that our minds seem to work differently. But that memory, or at least the version I have in my head, has stuck with me, in the way that just one of a million remarks by your parent can, because he had named something that in fact felt true. For much of my adult life, I have been intrigued by the different levels at which a person can engage with their own mind, and by how few people realize there's anything above the level they're at.

I've spent years developing a framework I call the Levels of Learning, which distinguishes between schooling, training, education, and self-directed learning. These aren't just different methods. They represent fundamentally different relationships between the learner and what's being learned, from passive reception to active ownership. That framework has given me a vocabulary for talking about what's really happening in education, beneath the policy arguments and institutional defenses.

I've wanted an equivalent framework for thinking itself for most of my adult life. I think I've found it, and it's no surprise that it aligns so well with my learning framework. The surprise is just how long it's taking me to articulate it.

The Four Levels

Level 1: Coalitional Thinking — The Inherited Narrative. You think what your group thinks. Beliefs arrive socially, through family, culture, and community, not through investigation. You couldn't articulate why you believe what you believe because the question has never occurred to you. This isn't stupidity. It's the default human operating system, optimized over hundreds of thousands of years for coalitional safety. Most people throughout most of history have lived here, and for good reason; in stable environments where the group narrative is reasonably aligned with reality, it works.

Level 2: Informed Thinking — The Credentialed Narrative. You've added knowledge, credentials, and institutional fluency. You can cite sources, reference experts, and invoke "the science." You genuinely believe you've transcended Level 1 because you've replaced tribal intuition with institutional authority. But the epistemic structure is identical: deference to consensus, social punishment of dissent, inability to distinguish between "the evidence supports X" and "the institutions I trust say X." This level is the most dangerous precisely because it feels like the highest level to the person inside it. It provides exactly enough sophistication to make you confident you've arrived, and exactly not enough to see what you're missing.

Level 3: Critical Thinking — The Examined Narrative. You've internalized the insight that you yourself are subject to cognitive traps: confirmation bias, authority bias, coalitional pressure, and motivated reasoning. You can name the logical fallacies not as weapons against opponents but as descriptions of general human (and your own) tendencies. You understand why the founders built checks and balances, why the legal system presumes innocence, and why science requires falsifiability--not as historical trivia, but as evidence that smart people knew they couldn't trust their own judgment. You can hold a position while genuinely entertaining the possibility you're wrong.

Level 4: Structural Thinking — The Examined Self. You're not just watching for fallacies in arguments. You're asking why certain arguments dominate, who benefits from the consensus, what signals are being suppressed, and why. You can reweight an entire body of evidence based on a single verified falsehood, because you understand the structures (institutional, psychological, evolutionary) that produce coordinated distortion. You've turned the lens not just on your thinking but on the systems that shape what's thinkable. Plato's allegory of the Cave lives here, not as a metaphor for ignorance, but as a description of the structural relationship between social consensus and reality.

What This Is Not

These levels are not stages you graduate from. You don't leave the lower levels behind. A Level 4 thinker still feels the coalitional pull, still flinches at social disapproval, still has the gut-level desire to align with the group narrative. The subconscious mind, the mind shaped by evolution for physical and social survival, doesn't go away. The difference is that you've built enough internal architecture to notice the coalitional pull and interrogate it rather than obey it.

This is also not a measure of intelligence. There are articulate people permanently at Level 2. There are modestly educated people who operate at Level 4 because life forced them to see through institutional narratives firsthand. The levels describe your relationship to your own cognition, whether you've ever turned the lens on the lens itself.

Why Level 2 Is So Stable

Level 2 is where most educated people live, and it's the most comfortable level to occupy. It satisfies the deep coalitional instinct (you belong, you're on the right side, you're with the smart people) while simultaneously providing the self-regard of believing you arrived there through reason. You get the warmth of group belonging and the satisfaction of feeling intellectually superior to those you see as less informed.

This is why Level 2 thinkers are often the most condescending. They look down at Level 1 thinkers as unsophisticated and at Level 4 thinkers as conspiracy theorists. From inside Level 2, the capacity to impute coordinated deception looks identical to paranoia, because the possibility that institutional consensus could be structurally distorted is simply outside the frame. It's not that they've considered it and rejected it. It's that it has never occurred to them as a serious possibility. The institutions they trust have told them it doesn't happen, and they trust the institutions.

The Lost Curriculum

There was a time when education took the project of moving people beyond Level 2 seriously. It was called a liberal arts education, which was not liberal in the political sense, but in the original Latin sense of liberalis: the education that distinguished a free person from a slave, because free people were expected to govern themselves. The trivium (grammar, logic, rhetoric) wasn't ornamental. It was the toolkit for thinking about thinking. Grammar taught you to parse claims precisely. Logic taught you to identify valid and invalid reasoning. Rhetoric taught you how persuasion works, so that you could recognize when it was being used on you.

The teaching of logic and logical fallacies was central to this tradition. Students learned to name the ways arguments could appear valid while being fundamentally deceptive: ad hominem, appeal to authority, false dichotomy, and straw man. These weren't abstract categories. They were the accumulated residue of generations of humans noticing, with painful precision, exactly how their own thinking went wrong.

We have largely abandoned this curriculum. What remains of "critical thinking" in education is often just Level 2 thinking with a more confident tone: the ability to cite better sources and to dismiss opposing views with more sophisticated vocabulary. Rarely does it include the genuine epistemic humility that defines Level 3, and almost never the structural awareness that defines Level 4.

The result is a population that is more credentialed than ever and less capable of independent thought than it has been in generations.

The Dismantled Commons

The lost curriculum is half the story. The other half is that we also dismantled the spaces where deep thinking could happen publicly.

There was a brief period, roughly 2005 to 2012, when the internet genuinely supported Level 3 and 4 discourse at scale. The tools of what was called Web 2.0 (blogs, wikis, threaded discussion forums, early social networks built around shared interests) were structurally hospitable to long-form, reflective conversation. You could develop an argument across paragraphs. Someone could respond to a specific point within it. A genuine exchange could unfold over days, visible to others who could learn from it. The format allowed depth, and depth attracted people who valued it.

I lived this firsthand. I ran one of the first social networks for educators (Classroom 2.0), with tens of thousands of members engaged in substantive threaded discussions about teaching, learning, and the purpose of education. I conducted over 400 long-form interviews with researchers, authors, and practitioners in a series called the Future of Education. The conversations were rich, searchable, and cumulative; they built on each other over time.

Then two things happened, neither of them malicious, both of them devastating.

First, Facebook and Twitter reshaped the economics of online attention. They replaced long-form, threaded discussions with short-form, hard-to-search, algorithmically sorted content optimized for immediate emotional response. The shift didn't just shorten the format; it structurally selected for Level 1 and 2 engagement. Coalitional signaling. Performative agreement and disagreement. Content that tells you you're right and your opponents are wrong. The medium didn't change the conversation. It changed the level of thinking the conversation could sustain.

Second, the two most significant platforms for educational discourse, Ning and Wikispaces, were each purchased by companies that gutted them and, in both cases, removed all the free content educators had created. Years of accumulated discussion, resources, and collaborative work, all gone. This is a much larger cultural loss than anyone has acknowledged, because it wasn't just content that disappeared. It was the infrastructure for a particular kind of thinking.

No one set out to destroy deep public discourse. The ownership transitions, the need to monetize, the logic of scale; none of it required anyone to intend the shallowing. It happened because depth doesn't scale and attention does. The commercial pressures were indifferent to what was lost.

Long-form writing still exists, of course; Medium, Substack, and the blogs that survive prove that. But substantive engagement with that writing has become vanishingly rare. A shallow reaction gets faster attention than a careful response. And once audiences reach a certain size, the conversation degrades into bickering over small nuances or defending against bad-faith misreadings, because the ratio of Level 2 readers to Level 3 and 4 readers makes genuine exchange nearly impossible at massive scales.

So we stopped teaching the tools for deep thinking and we dismantled the spaces where it could be practiced publicly. The loss of the curriculum removed the training pipeline. The platform shift removed the practice environment. Together, they explain why Level 2 is ascendant and why the silence around deeper work is not a failure of that work but a predictable consequence of the structures we've built and the ones we've lost.

The Metacognitive Tradition

What I'm describing isn't new. It's the rediscovery of an intellectual tradition that runs through Western civilization and that we've been forgetting.

The ancient Greeks gave us the formal study of logic and the cataloging of fallacies because they recognized that persuasion and truth are not the same thing. The legal tradition gave us the presumption of innocence, the adversarial system, the requirement for evidence beyond a reasonable doubt, and trial by jury--none of which are intuitive and all of which run against our natural tendency to assume guilt, defer to authority, and trust the accuser. They exist because enough people honestly looked at how justice failed and built institutional remedies to compensate.

The American founders did the same thing at the level of government. The separation of powers, the Bill of Rights, the elaborate system of checks and balances; these weren't expressions of optimism about human nature. They were expressions of deep skepticism. The founders had read enough history to know that power concentrates, that institutions corrupt, and that the people most likely to abuse authority are often the ones most confident they won't.

The scientific method belongs here, too. Peer review, replication, falsifiability; all of it exists because scientists recognized that even rigorous, well-intentioned researchers are subject to confirmation bias and motivated reasoning.

What unites all of these is a single insight: we cannot trust our own thinking without structures designed to catch its failures. That insight is the threshold between Level 2 and Level 3. The further insight, that the very institutions built to catch failure can themselves be captured, corrupted, and turned into instruments of coordinated distortion, is the threshold between Level 3 and Level 4.

A Current Illustration

I was recently reading about a Supreme Court case in which the lone dissenter was said to have described the defense of free speech as "puzzling." This same justice, the article asserted, had previously expressed concern that the First Amendment might "hamstring the government." In another hearing, she apparently argued that experts (doctors, economists, Ph.D.s) should be insulated from democratic oversight.

What struck me was not the positions themselves but the level of thinking they represented. This is a genuinely intelligent, well-credentialed person who (as represented) gives the appearance of having never asked the question that defines Level 3: Why did the founders want to hamstring the government? That question only arises if you've internalized the possibility that government power, like all concentrated power, will tend toward abuse regardless of the intentions of those who hold it. From inside Level 2, where institutions are assumed to be trustworthy and expert consensus is assumed to be reliable, constraints on government look irrational. From Level 3 or 4, they look essential.

The commentary I read about this justice framed her as a radical ideologue, which itself is only a Level 3 analysis; it sees through the claim to expertise and names the danger, but explains the behavior as bad intent. A Level 4 reading sees something more useful: she's not an anomaly, she's an archetype. She represents what happens when a genuinely intelligent person ascends through institutional structures that reward Level 2 thinking and never encounters a reason to go further. Her puzzlement isn't performative. We can assume she is genuinely puzzled. And that's the more important and more generalizable insight, because there are millions of people who share her puzzlement for exactly the same structural reasons.

The AI Connection

There is a further dimension to this framework that I find striking. In a piece I wrote recently on "Structural Blindness," I explored the observation that large language models are structurally locked at something very close to Level 2. They process the preponderance of content. They weight claims by volume and institutional authority. They can reference the metacognitive tradition; they can tell you about logical fallacies, about checks and balances, about the history of epistemic humility. But they cannot practice it.

An LLM cannot do what a Level 4 human thinker can do: encounter a single verified falsehood and reweight an entire body of evidence, because it understands the institutional and psychological structures that produce coordinated distortion. The LLM processes signals by their statistical weight in the training data. The Level 4 thinker can override statistical weight with structural analysis. The LLM and the Level 2 thinker are doing the same thing by different means: trusting the preponderance.

This matters because we are increasingly delegating our reasoning to systems that are incapable of the very kind of thinking that the metacognitive tradition was built to enable. And we are doing it at a moment when institutional trust is at historic lows, when the gap between official narratives and lived experience is wider than it has been in most people's lifetimes, and when the ability to think structurally about why that gap exists has never been more important.

The Parallel

I said at the start that this framework parallels my Levels of Learning. The parallel is more than structural; it's causal.

Schooling produces Level 1 thinkers: people who absorb the narrative they're given. Training produces Level 2 thinkers: people who become fluent within an institutional frame. Education, when it works, produces Level 3 thinkers: people who learn to question. Self-directed learning produces Level 4 thinkers: people who take full responsibility for their own epistemic situation, including the structures that constrain what they're able to see.

The education system, as it currently operates, is optimized for producing Level 1 and Level 2 thinking (with Level 1 being the majority and Level 2 considered the "best" students). That is not an accident. And the fact that it has largely abandoned the liberal arts tradition, the curriculum specifically designed to move people beyond Level 2, is not an accident either. A population of Level 2 thinkers is a population that defers. A population of Level 3 and Level 4 thinkers is a population that asks uncomfortable questions about why it's being asked to defer.

By now, you know my dad was right. I do think about thinking.

Sunday, April 05, 2026

The Illusion of Continuity: Understanding the Context Window

When you have a long conversation with an AI like Claude or ChatGPT, it feels like you're talking to someone who is tracking everything you've said, building on earlier points, and holding the full shape of your exchange in mind the way a thoughtful colleague would. That feeling is an illusion, and understanding why it's an illusion is one of the most practically useful things you can learn about how these tools actually work.

What's Really Happening

Here's the part that surprises most people. A large language model doesn't sit on the other end of your conversation with a running memory of what you've discussed. Every single time you send a message, the entire conversation history, your message, the AI's response, your next message, the next response, all of it, gets packaged up and sent to the model as a single block of text. The model reads all of that, generates a reply, and sends it back. Then it forgets everything. The next time you send a message, the whole process starts over, with the full conversation sent again from the beginning.

There is no persistent memory between exchanges. There is no internal state being maintained. The continuity you experience is constructed from the outside, by the chat interface storing your messages and replaying them to the model each time. The model itself is stateless. It reconstructs the appearance of an ongoing conversation every time you hit send.

This is exactly how an API call works, and it turns out it's exactly how the chat interface works, too. The only difference is that the chat application handles the packaging for you.
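
If you're curious what this looks like mechanically, here is a minimal Python sketch of the loop a chat interface runs for you. The send_to_model function is a stand-in for whichever API a given provider exposes, not a real library call; the point is the shape of the data, not the vendor.

    # A stateless chat loop. The interface, not the model, stores history.
    history = []

    def chat_turn(user_message, send_to_model):
        history.append({"role": "user", "content": user_message})
        # The entire conversation so far is packaged and sent as one block.
        reply = send_to_model(history)
        # The model retains nothing after responding; the next call will
        # replay the whole history again from the beginning.
        history.append({"role": "assistant", "content": reply})
        return reply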

Why a Bigger Context Window Isn't the Whole Answer

You may have heard that newer models have much larger context windows, meaning they can take in far more text at once. That's true, and it matters. But a larger context window doesn't mean the model is holding and maintaining a real-time conversation with you, however much it might seem that it is. It also isn't giving equal attention to everything in that context window. The model has something like an attentional gradient. Content at the beginning and end of the context tends to get more weight than content buried in the middle. As conversations grow long, specific details, decisions, and ideas can quietly fade from the model's effective awareness, even though technically the text is still there.

Like most regular users of LLMs, I've experienced this firsthand. In long working sessions, I have to keep fairly careful track of what we've discussed and what I've asked for. I regularly find myself reminding the AI that something has been missed or skipped, a point it made earlier that it's now contradicting, or a decision we settled that it seems to have forgotten. The information is in the context window. The model just isn't giving it the same weight it did when we first discussed it.

This is a critical distinction. Having a large context window is like having a very long desk. You can spread out a lot of papers on it. But that doesn't mean you're actually reading all of them with equal attention at any given moment.

The Memory Feature Is a Meta-Index, Not Memory

Adding to the confusion, AI tools like Claude now offer memory features that carry certain information across conversations. Claude, for instance, will remember key facts about you from prior exchanges. But this isn't the deep, rich continuity that the word "memory" implies. It's more like a meta-index, a thin summary layer that captures a handful of important facts and preferences. It's definitely useful, but it's not the same as the model having fully internalized your previous conversations.

Understanding these three layers, the context window, the memory feature, and the actual processing dynamics, can help you move from someone who uses these tools casually to someone who uses them well.

Pragmatic Takeaway #1: Summarize and Start Fresh

Here's the first thing this understanding should change about how you work. When a conversation gets long, and you sense the model is losing track of important details, ask it to summarize the current state of the work. Have it capture the key decisions you've made, the preferences you've expressed, the current direction, and any unresolved questions. Then take that summary and start a fresh conversation with it.

Most people feel like ending a conversation and starting a new one means losing something. It feels like a risk, like you're breaking the thread. Once you understand the context window, you realize the opposite is true. A fresh conversation with a well-crafted summary is actually superior to a long, degraded one. You're giving the model a clean desk with the most important papers laid out neatly, instead of asking it to work at the bottom of a pile.

Starting fresh is a strategy, not a loss.
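
To make the handoff concrete, here is a small continuation of the earlier sketch. The prompt wording is mine, and send_to_model remains an illustrative stand-in rather than any particular tool's API.

    SUMMARY_PROMPT = (
        "Summarize the current state of our work: the key decisions made, "
        "the preferences I've expressed, the current direction, and any "
        "unresolved questions."
    )

    def start_fresh(history, send_to_model):
        # Ask the long, degraded conversation to compress itself...
        request = history + [{"role": "user", "content": SUMMARY_PROMPT}]
        summary = send_to_model(request)
        # ...then seed a brand-new conversation with only that summary:
        # a clean desk with the important papers laid out neatly.
        return [{"role": "user",
                 "content": "Context from our previous session:\n" + summary}]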

Pragmatic Takeaway #2: Build Standardized Context Files

The second shift is even more powerful because it's proactive rather than reactive. If the model starts every conversation from zero, and the memory feature is just a thin meta-index, then you need a way to consistently provide the context that shapes good results. This is why people in the AI space talk so much about markdown files, those .md files that store structured information about your preferences, your role, your voice, your recurring instructions.

A well-built markdown file acts as a cheat sheet that you upload at the start of every conversation. It compensates for the fact that the model doesn't actually know you. It captures your writing voice, your formatting preferences, the frameworks you work with, the things the model should always do and never do. You're doing manually what the illusion of continuity tricks people into thinking happens automatically.
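
To show the idea rather than just describe it, here is the skeleton of one such file. The filename, headings, and details are invented for illustration; what matters is the structure, which you would fill with your own specifics.

    # my-context.md -- uploaded at the start of every conversation

    ## Who I am
    Reference librarian at a mid-sized public library; I write for
    patrons, not specialists.

    ## Voice and formatting
    Plain, direct sentences. Short paragraphs. No walls of bullet points.

    ## Always
    - Ask clarifying questions before drafting anything long.
    - Flag any claim you are uncertain about.

    ## Never
    - Invent citations, statistics, or quotations.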

The summary technique manages context within a conversation. The markdown file technique manages context across conversations. Together, they give you a more complete strategy for working with the reality of how these tools function rather than the fantasy.

Pragmatic Takeaway #3: Placement and Order Matter

Because models tend to pay more attention to content at the beginning and end of the context window than content in the middle, how you arrange your reference materials actually matters. Your most important instructions should go first. This isn't just organizational preference; it's how the technology actually processes information. If you're uploading files and framing your request, lead with what matters most.
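
In code terms, continuing the earlier sketches, assembling a request with an attention-aware layout might look like this; the variable names are placeholders for your own material.

    def build_prompt(key_instructions, reference_material, todays_request):
        # Beginning and end of the context get the most attention,
        # so lead with the instructions and close with the ask.
        return "\n\n".join([
            key_instructions,    # first: highest-weight position
            reference_material,  # middle: bulk content to consult
            todays_request,      # last: restate what matters most
        ])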

Pragmatic Takeaway #4: You Are the Quality Control Layer

This may be the most important point of all. The best results come from understanding that working with a large language model is genuinely collaborative. Not collaborative in the soft, feel-good sense, but in the mechanical sense: you have to stay engaged and catch what the model drops. You have to track what's been discussed, notice when something gets missed, and push back when the model contradicts an earlier decision or skips over something important.

Most people assume the AI is handling this on its own. It isn't always. You are the continuity. You are the quality control layer. The model is a powerful tool, but it doesn't monitor its own consistency the way you'd expect a human collaborator to. That's your job, and doing it well is a genuine skill.

Pragmatic Takeaway #5: Share Your Context Files

For librarians and teachers especially, there's a multiplier effect here. Once you build a solid context file that consistently delivers strong results, you can share it. You can hand a colleague or a student a markdown file and say, "Upload this when you start a conversation, and you'll get dramatically better output." You're not sharing a single clever prompt. You're sharing expertise on how to use the tool effectively. That's a kind of LLM superpower that you can model.

The Bigger Picture

The less people understand about how these systems actually work, the more vulnerable they are to being misled by them, to anthropomorphizing them, to trusting them in ways that aren't warranted, to surrendering their own judgment because the AI seems so fluent and confident. Understanding the context window won't make you an AI engineer. But it will make you a dramatically better user and a dramatically better teacher of others who are trying to figure these tools out.

The tool is still incredible, but once you understand that continuity is an illusion, you'll get better results.

Dear Student: What School Can't Teach You About AI

A note before you begin. This essay is written for students. But if you're an educator or a parent who picked it up first, that's not an accident. The argument here is one you already sense: that something important is at stake in how young people use AI, that the stakes go deeper than cheating policies and plagiarism detectors, and that the students who figure this out early will be in a fundamentally different position from those who don't.

Forward it to someone who's ready to hear it. Thank you.

The Game You're Playing

Here's something almost nobody will say to you directly: school is a game.

Not in the dismissive sense, not “it doesn't matter” or “just survive it.” In the literal sense. It has rules. It has scoring. It has winners and losers. It has strategies that work reliably and strategies that don't. And like most games worth understanding, the people who win it are almost always the ones who know they're playing it, while the people who lose often don't know a game is in progress at all. They think it's life. They think the scores reflect them.

I've spent decades in and around education. I've interviewed hundreds of teachers, researchers, and reformers. I've talked with thousands of students and watched the institution from more angles than I can easily count. Certain patterns become impossible to miss after a while. One of the clearest is this: the students who win academically, the ones accumulating the grades, navigating the system, landing in the next tier and the tier after that, understand at some level that they're playing a game. They may not be able to say so in those terms. But they've internalized the rules: what teachers want to see, how to structure the essay that satisfies the rubric, which assignments carry weight and which can be minimized, how to appear engaged without necessarily being engaged, and how to signal what the institution is looking for. They've learned the game, and they play it well.

The students who are not winning? They often believe the scores are a direct measurement of who they are. That the grades reflect their intelligence, their potential, their value as people. When they fail the game, they don't think: I've failed the game. They think: something must be wrong with me. I must be defective. I have been weighed and measured and found wanting.

That's not what's happening. What's happening is that they don't know there's a game.

This isn't a personal failing. The game is designed, not by conspiracy but by the accumulated logic of institutions, to look like something else entirely. It presents itself as education: the development of your mind, the honest measurement of your capability, the fair rewarding of your effort and intelligence. And there are genuine elements of truth in that presentation. Some things that happen in school matter. Some teachers are extraordinary. Some classes, some books, some conversations reach students in ways that change them permanently. I don't want to throw any of that away, and I'm not going to pretend the institution is simply a lie. But there's a difference between acknowledging what's real in the system and pretending the system is what it says it is. That pretending is expensive, for you personally, and now more than ever, in the specific context of AI.

* * *

The institution is designed, at its structural core, to sort and credential you.

To be precise: to assign you a position in a hierarchy and provide documentation for it. The grades, the GPA, the diploma: these are signals sent forward to future gatekeepers, telling them where you ranked. The actual learning you do along the way is, from the institution's perspective, secondary. What the system measures is compliance with its own rules. What it produces is a credential. What it's optimized for, at the level of its design, is sorting.

This doesn't mean schooling is worthless. Credentials open doors, and in a society where gatekeepers use them to make real decisions about your life, understanding their value and pursuing them strategically is entirely rational. But it does mean that schooling and learning are not the same thing. And when we treat them as if they are, when we assume that doing well in school means becoming genuinely capable, and that doing poorly means the reverse, we've made a mistake that the institution is entirely happy for us to make.

The adults around you are mostly not lying to you when they say that school matters and that your performance has consequences. They're telling you what they believe, and in many practical respects, they're right. What they may not be telling you, what they may not be able to see clearly from inside the system, is the full picture of what the system is doing and what it can't do for you.

Watch how schools respond to AI over the next few years, and you'll see the institutional logic play out in real time. The policies will multiply. The checklists will appear. There will be approved uses and prohibited uses, disclosure requirements and academic integrity addenda, rubric adjustments, and AI-detection protocols. Some of this is understandable: institutions need rules in order to function, and a technology that can produce a passable essay in thirty seconds is a genuine disruption to the credentialing system. But notice what the response will not include: any serious reckoning with whether students are becoming more capable or less, any framework for helping students develop their own judgment about how to use AI wisely, any honest examination of whether the assignments being protected from AI were producing genuine learning in the first place. The rules will be about protecting the game, not about developing the player. That's not a failure of individual administrators or teachers; it's the predictable output of an institution whose dominant logic is compliance and credentialing. The Game of School will absorb AI the same way it has absorbed every previous technology: by building a fence around it and calling the fence a policy.

* * *

To see that clearly, you need a framework. I've used this one for years because it does more real work than anything else I've found.

There are four different things we routinely call “learning,” and they are genuinely different from each other. Collapsing them causes enormous confusion: about AI, about education, about your own relationship to school. Separating them produces immediate clarity.

The 4 Levels of Learning

Schooling is the lowest level, the institutional layer. Its primary output is a credential, a signal that you've passed this level and are eligible for the next. Schooling rewards conformity over curiosity. It measures compliance with institutional requirements. It can be navigated strategically or poorly, but it can't be cheated in the deepest sense: you either understand its rules and play by them, or you don't. Schooling is not worthless. But schooling and learning are not the same thing, and knowing which one you're doing at any given moment matters more than almost anything I can tell you.

Training is the purposeful acquisition of specific skills for specific ends. You learn to write code that actually runs. You learn to perform a medical procedure. You learn to read a financial statement. Training is practical and relatively unambiguous; you either acquire the capability or you don't, and the test is whether you can apply it in the real world. Training is largely uncontroversial, and AI has made it faster and more accessible than it has ever been. That part of the AI story is mostly good news.

Education, in the classical sense, comes from the Latin educare, to lead out, to draw forth from within. Education describes what happens when a mentor, a challenging idea, an extraordinary teacher, a book you weren't ready for, or a conversation that unsettled something, helps you think at a level you couldn't reach alone. Not just knowing more things, but developing judgment. Not just accumulating facts, but learning to interrogate them, connect them, question them, and live with uncertainty about them. Education in this sense is relatively rare in formal schooling, though it's not absent. When it happens, it tends to happen in the margins, in one remarkable class, in a relationship with one particular teacher, in a project that somehow captured your genuine interest.

Self-directed learning is where this is all headed. It's the destination that genuine education is trying to build toward: a person who has learned how to learn. Someone with actual curiosity, not performed curiosity, not the interest you fake to satisfy the requirement, but the kind that wakes you up at two in the morning because a question got under your skin. Someone who sets their own problems, pursues their own answers, evaluates their own progress, and doesn't need external scoring to know whether they're growing. Self-directed learning is what makes you capable across a lifetime of changing circumstances, not just in the specific context where you were trained or credentialed.

These four levels exist in a hierarchy. School operates primarily at the first. Its institutional structure, its incentives, its measurement systems, and its daily rhythms are all organized around schooling: sorting, compliance, and credentialing. The system uses the language of the upper levels constantly. Teachers say they're developing lifelong learners, fostering critical thinking, and building independent minds. Many of them genuinely mean it. But the structural logic of the institution, what it actually rewards, measures, and reinforces day to day, operates at the bottom of the hierarchy.

This is not a reason to check out. It's the thing you need to see before you can make a real decision about your education.

* * *

The reason it's so hard to see is something called the Noble Lie. 

Plato introduced the concept: a functional fiction told to the citizens of a society, a shared story to make life smoother. He has Socrates imagine a story that will persuade citizens that they are all born from the earth but of different metals (gold, silver, or iron) and, because of that, are suited only to certain roles in the social order. The Noble Lie of modern schooling is not complicated: academic achievement is a fair and honest measure of your intelligence, your capability, and your future potential. Work hard, perform well, and the rewards follow. The scores reflect you.

Some version of this is almost certainly what you've been told your entire life. And here's what makes it so durable: the people who told you believed it. Your teachers, your parents, most of the people who designed and sustain this system, they are not lying to you maliciously. They are passing on a story they've absorbed, a story that sometimes really is true, and a story that the institution depends on to maintain its legitimacy. The most powerful fictions are the ones told by people who believe them. They're much harder to see through because the teller's sincerity is real, even when the story is partial.

The Noble Lie obscures something important: the system doesn't only sort by intelligence or effort. It sorts by prior access. Students whose families have books in the house, a quiet space to study, and parents who themselves went through the system and can explain how it works, students who arrive at school already knowing something of the implicit culture, have a structural advantage that has nothing to do with their native capability. The system doesn't adjust for that. It scores the output and calls the score fair. Then, when a student doesn't produce the expected output, the story tells them to look inward.

I'm not asking you to be bitter about this. Bitterness is a response to being wronged, and the system didn't set out to wrong you. I'm asking you to see it. Seeing it is the beginning of having a real relationship with your own education, one where you decide what matters and why, rather than outsourcing that to an institution that has its own reasons for its scoring system, reasons that may have very little to do with your actual development as a person.

* * *

Now we're at the place I want to pause.

Knowing the game is a game doesn't mean opting out of it. That's a romantically tempting conclusion and maybe a bad one for most people. The credentials are real. The doors they open are real. The cost of ignoring the game entirely is often paid in lost options, and lost options have a way of narrowing your future choices in ways you can't fully see in advance.

What it means is that you now have a choice you didn't have before.

You can play the game strategically, learn its rules, meet its requirements, collect the credentials that open the doors you want, and simultaneously do something the game can neither give you nor take away. You can be a student who satisfies the institution's requirements while also becoming genuinely educated in the full sense of that word: someone developing real judgment, real curiosity, real capabilities that go far deeper than any credential and will outlast any institutional context.

These two things are not opposites. The students who thrive in the long run, not just during school, not just in the early years of work when the game's rules are still familiar, but across a lifetime of changing circumstances and unexpected challenges, are almost always the ones who understood, consciously or intuitively, that the game was a game. They played it well enough to keep their options open. And they didn't stop there.

What the deeper game requires is something the institution cannot supply. It requires an internal compass, a sense of direction that doesn't depend on external scoring to tell you whether you're genuinely growing. Not grades, not approval, not the satisfaction of hitting a rubric. Something more durable, more personal, and entirely yours.

That compass is what this essay is about. But before we get to it, we need to understand one more piece of the picture: why so many capable people stay trapped in the game's logic far longer than they should. Why do good students keep playing by rules that don't serve them, even when they could see the game for what it is if they looked?

The answer has to do with how institutions teach obedience, not by commanding it but by rewarding it in ways that are very hard to notice until you've stepped back far enough to see the pattern.

That's where we go next.

Why You Obey

There's a course no school puts in its catalog. It has no syllabus, no official learning objectives, and no unit tests. But it runs continuously alongside every other subject from the first day of kindergarten to the last day of senior year, and most students complete it with far higher marks than anything on their transcript. The course is: how to function inside an institution that requires your compliance.

The lessons are practical, and they work. Sit when sitting is expected. Speak when called on, not before. Produce what the assignment asks for, in the format the assignment specifies, by the deadline the assignment sets. Signal engagement, whether or not you feel it. Don't ask questions that slow the class down. Don't finish so fast that others feel inadequate. Don't fall so far behind that you become a problem. Locate the center and stay near it. The center is safe.

No one teaches these lessons explicitly. They don't have to. They're embedded in the reward structure. What gets praised, what gets ignored, what gets punished: these signals are constant, cumulative, and exquisitely clear to anyone paying attention. Students pay attention. They're very good at it. Long before they can articulate what they've learned, they've already absorbed it: the institution has preferences, and your life inside it is easier when you match them.

This is what theorists call the hidden curriculum. Not the official curriculum, not algebra or history or the water cycle, but the implicit curriculum running underneath it, teaching students something the institution needs them to know but would never say out loud: how to be compliant. How to be manageable. How to subordinate your own timing, your own questions, your own judgment, your own pace, to the requirements of a system that cannot accommodate the full range of who you actually are.

I want to be careful here, because this is the point where it's easy to veer into simple resentment toward teachers, schools, and the adults in your life. That's not what I'm after. Most of the people who run this system, who work inside it day after day, are not trying to produce compliant people. They genuinely want to help students grow. The hidden curriculum isn't a conspiracy. It's an emergent property, something no one designed but that inevitably arises when you put enough people, requirements, and schedules into the same building. Any institution large enough to require coordination produces pressure toward conformity. It's not malicious. It's structural. The institution needs you to be predictable to function, so, without anyone deciding to do so, it quietly trains you to be predictable.

The problem is not that the institution is evil. The problem is what the training does to you.

* * *

Think about what you've learned to optimize for.

Not what you've been told to care about, but what the actual reward structure, day in and day out, has shaped you to want. Grades. Approval. The absence of criticism. The relief of meeting a deadline. The small satisfaction of being called on and getting it right. The anxiety that comes from not knowing whether your answer is going to land.

That anxiety is worth sitting with for a moment. Where does it come from?

It comes from a system that has, for most of your life, attached your sense of adequacy to external evaluation. You produced something, an essay, a test answer, a presentation, and then you waited for someone else to tell you what it was worth. The score arrived, and you absorbed it. High scores felt like confirmation of your value. Low scores felt like evidence of your inadequacy. After thousands of repetitions of this cycle, the pattern runs deep. Your self-esteem has become conditional, provisional on continued external approval, in ways that most students don't fully notice because it happened so gradually, from such an early age, that it feels like just how things are.

It's not how things are. It's how things were arranged.

What you were born with, what every young child has in abundance before the institution gets to work, is intrinsic motivation. Curiosity that doesn't need a grade to justify it. Effort that doesn't require a reward to sustain it. A drive to understand things, to master things, to figure out how the world works, that is entirely self-generated. Watch a three-year-old encounter something unfamiliar. The investigation is relentless and entirely unprompted. Nobody is giving them a score. Nobody has assigned them the task. They are learning because learning, in the natural human state, feels good. It is, in the deepest sense, what minds are for.

The institution didn't set out to extinguish this. But extinguishing it is a predictable side effect of replacing intrinsic motivation with external evaluation over a period of years. When the score is always waiting, the question shifts from “what do I actually want to understand?” to “what do I need to produce to get the score?” These are different questions. They produce different orientations. The first produces genuine learning. The second produces strategic performance. Both can coexist, but in a system that rewards performance and has no reliable way to measure genuine understanding, performance tends to crowd learning out.

* * *

Here is where it gets specific to you, in this moment.

The habits of mind the system has trained (wait for the instructions, produce what's asked for, check whether it's right with someone who knows) are exactly the habits that make AI the most convenient thing that has ever happened to students who are playing the game of school.

Think about what AI offers if you're optimizing for output rather than capability: unlimited patience with your questions, no judgment, instant responses, and an extraordinary ability to produce the kind of work that satisfies institutional requirements. Essays that meet rubrics. Summaries that hit the key points. Explanations that cover the material. It can do these things faster than you can, at a quality level that's often good enough to clear the bar the institution has set, without any of the friction, difficulty, confusion, or productive struggle that learning actually requires.

If you've been trained to optimize for the output, AI is an almost irresistible acceleration. Why wouldn't you use it? The game rewards the essay, not the thinking that produced the essay. The system can't see the difference. Use the tool, get the output, pass the level.

The institution, for its part, largely cannot detect this. It can detect cheating, the wholesale copying of someone else's prior work, because it can run a comparison. What it cannot detect is whether the work you submitted reflects your genuine thinking or whether it substitutes for it. A well-prompted AI can produce a competent essay on almost any topic that assigned essays touch on. The rubric measures the essay. Nobody is measuring what happened in your mind while the essay was being produced, or whether anything happened at all. The system was designed around a world where the output and the learning were hard to separate. They're no longer hard to separate. And the institution has not caught up with that.

I'm not telling you this to argue that using AI on assignments is fine. I'm telling you because the logic that makes it feel fine is the logic the institution trained into you, and you need to see that logic before you can evaluate it clearly. The hidden curriculum taught you to optimize for outputs. AI is an output machine. Of course they fit together. The question is whether fitting together serves you.

* * *

The honest answer is: it depends entirely on what you're actually trying to accomplish.

If what you're trying to accomplish is to collect credentials while doing as little genuine cognitive work as possible, if the game is all you're playing, then AI will serve that goal extraordinarily well in the short term. I'm not going to pretend otherwise. It will also be quietly, progressively catastrophic for the thing the game is supposed to be preparing you for: a life in which the credentials eventually stop mattering, and all that's left is what you're actually capable of.

The compliance training provided by the institution has a lifespan. It serves you while you're inside the institution. It is well-designed for exactly that context: a world where external authority is constant, where someone always tells you what to do and evaluates whether you did it, where the right answer is findable if you just work the system correctly.

That world ends. Maybe not as soon as you'd like; institutions extend their logic into the workplace and keep you in familiar patterns for a while. But eventually, the scaffolding comes down. Eventually, the question becomes not “did you satisfy the requirement?” but “can you actually do this?” And in that moment, the gap between what the credential said and what you actually developed has consequences.

I've watched this unfold in too many conversations with too many people to think it's rare. Smart people who performed excellently in school, who collected all the right credentials, who optimized the game with genuine skill, and who then found themselves, somewhere in their late twenties or thirties, uncertain of their own judgment, dependent on external direction, vaguely aware that they'd spent a lot of years learning how to satisfy other people's requirements and not very much time learning to trust their own minds. The compliance worked. That's exactly the problem.

* * *

The compliance was trained. That means it can be noticed, examined, and, if you choose, set aside.

Not recklessly. Not by abandoning the institution entirely in a romantic gesture that costs you options you'll want later. But consciously. With clear eyes about what the game rewards and what it misses. With a real question underneath the institutional requirements: not just “what do I need to produce?” but “what am I actually becoming?”

That second question is the one the institution has no mechanism for. It can't score it, can't enforce it, can't design a rubric for it. It's yours entirely, which is exactly why it matters more than anything the institution can measure.

The next question is: what do you actually want to become? Not what the system wants to produce, not what the credential requires, not what will look good in whatever comes next. What you, specifically, at this specific point in your life, are trying to develop in yourself. That question requires a framework for thinking about learning that goes a lot deeper than grades. It requires knowing what conditions make real growth possible, and how to create them, including in your relationship with AI.

That's what comes next.

What Actually Matters

Let me ask you something that probably nobody in school has ever asked you directly.

Think of a time when you actually learned something. Not performed something, not memorized something just long enough to pass a test and then let it go, but genuinely learned it. Something that stuck, something that changed how you saw or understood or could do something in the world. It doesn't have to be academic. It could be a skill, an insight, a piece of understanding you arrived at through experience or obsession, or something someone took the time to help you see that you couldn't see alone.

Got one? Now ask yourself: what made that possible?

I've put this question to educators for years in workshops and webinars. Different audiences, different backgrounds, different countries. The list that comes back is remarkably consistent. Someone believed in me. Someone challenged me to do something I didn't think I could do. I was genuinely curious about it; I wanted to understand it for my own reasons. I had room to fail, to try again, to figure it out at my own pace. Someone pushed back on what I thought I knew. The conditions that produced real learning, recalled honestly from personal experience, almost never include a rubric, a grade, a standardized test, or a fixed deadline. They almost always include relationship, challenge, genuine interest, and enough safety to actually try something difficult.

This is not a coincidence. These conditions, the things that reliably produce genuine learning when they're present and reliably prevent it when they're absent, are as close as we get to laws in education. They're not mysterious. They're not unique to gifted students or exceptional teachers. They're reproducible. And they have almost nothing to do with the institutional machinery that surrounds them.

* * *

Call them the Conditions of Learning. The list isn't complicated, but each item on it is doing real work.

Curiosity. Not performed interest, not strategic engagement with material because it will be on the test, but a genuine wanting to know. Curiosity is what drives learning after the class ends, after the grade is posted, after the requirement disappears. It's also what makes the difficult parts of learning bearable; when you actually want to understand something, the friction of figuring it out feels like progress rather than punishment.

Productive struggle. This one is counterintuitive, because school has mostly trained you to experience struggle as a sign that something is wrong. But struggle, the right kind, at the right level, on something that actually matters to you, is not a sign that you're failing. It's the mechanism by which capability is built. Your brain does not develop through ease. It develops through encountering problems it cannot immediately solve and working through them anyway. Remove the struggle, and you don't make learning more efficient. You make it impossible.

Reflection. The experience of doing something is not the same as learning from it. Reflection is the process that converts experience into understanding, the step where you ask what actually happened, what you now see that you didn't see before, and what you'd do differently. Without it, even rich and challenging experiences leave surprisingly little trace.

Autonomy. The sense that you are directing your own learning, making genuine choices, pursuing something because you chose to pursue it. This is one of the most powerful predictors of whether learning will stick and go deep. A student who is learning something because they want to is in a fundamentally different position than one who is learning it because they have to. The material might be identical. The outcomes rarely are.

Safety to fail. Real learning requires attempts that don't succeed. It requires guesses that turn out to be wrong, approaches that don't work, drafts that need to be discarded. A context where failure is genuinely costly, where a wrong answer has immediate social or institutional consequences, produces risk aversion, and risk aversion produces the minimum viable attempt rather than the genuine one. You don't take real intellectual risks when the cost of being wrong is too high.

Genuine feedback. Not a grade; a grade tells you how you ranked. Feedback tells you something specific about your thinking, your work, your understanding, in a way you can actually use to improve. It requires another mind engaged with yours. It is, when it happens, one of the most powerful accelerants of learning.

These conditions are the soil. Learning is the harvest. You can try to grow without the soil, and sometimes something will take root through sheer persistence, but not reliably, not deeply, not in ways that last. When these conditions are present together, deep learning becomes nearly inevitable. When they're absent, the most sophisticated instruction in the world produces very little.

* * *

Here is what the institution does with this.

Schooling, at its structural level, is largely indifferent to the Conditions of Learning. Not hostile, indifferent. The system isn't organized around curiosity, or productive struggle, or autonomy. It's organized around coverage, compliance, and assessment. It has to be: there are twenty-five students in the room, a curriculum to get through, a standardized test in spring, and an institution that needs to document outcomes. In that context, the conditions that produce genuine learning are often inconvenient. Curiosity takes you off the lesson plan. Productive struggle is slow. Autonomy is hard to assess. Maintaining safety to fail is difficult when grades are the primary feedback mechanism.

Not everything worthwhile can be measured, and not everything that can be measured is worthwhile. And when we can't measure what is most valuable, human nature assigns the most value to whatever we can measure.

So the system substitutes. It substitutes coverage for curiosity. It substitutes completion for struggle. It substitutes grades for genuine feedback. And it moves everyone through at the same pace regardless of where any individual student actually is in their understanding, because the institution's logic requires it.

What this means for you, practically, is that the Conditions of Learning are mostly something you have to create for yourself. Some teachers will create them for you; I've met extraordinary ones who do it almost instinctively, who seem uniquely able to generate genuine curiosity in their students. But you cannot count on them. You cannot wait for the institution to hand you the conditions it is structurally unable to reliably provide. If you want to actually learn, not perform learning, not credential learning, but genuinely develop yourself, you need to understand what those conditions are and start taking some responsibility for creating them in your own life.

This is a bigger shift than it sounds. The institution has trained you to be a consumer of learning: show up, receive the material, produce the required output, and collect the score. What I'm describing is becoming a producer of your own learning: understanding what you need to grow, seeking it out, and creating it where it doesn't exist. That's a different relationship with education entirely. It's also, as it turns out, the one that actually works over a lifetime.

* * *

Now bring AI into this, and the stakes of everything I've just said get very high very fast.

AI is the most responsive, patient, and knowledgeable tool that has ever been available to a curious person. If you have a genuine question, not an assignment to complete but something you actually want to understand, and you bring it to a good AI interaction, you can go as deep into that question as your curiosity will carry you. You can ask follow-up questions. You can push back on answers that don't satisfy you. You can ask for a different explanation, a simpler one, a more technical one, one that approaches the question from a completely different angle. The barriers that used to limit self-directed learning (geography, cost, access to experts, library hours) have largely collapsed. For a person who understands the Conditions of Learning and is actively trying to create them, AI is a historic breakthrough. I mean that without exaggeration.

But AI is also the most frictionless shortcut to bypassing those same conditions that has ever existed.

Ask it to write the essay, and you've eliminated productive struggle. Ask it to summarize the chapter, and you've eliminated the slow reading that builds genuine understanding. Ask it to generate the argument, and you've eliminated the reflection required to develop your own. Ask it to answer the question before you've had a chance to sit with the question, and you've eliminated the curiosity, the wondering, that drives real inquiry. The machine will do all of this happily, immediately, without any indication that something has gone wrong. It has no stake in your development. It has no way of knowing whether its output is serving your growth or substituting for it. It will give you exactly what you ask for, which is precisely the problem when what you're asking for is an escape from the conditions that actually make you smarter.

There's a term for what happens at the far end of this pattern: cognitive surrender. Not just the atrophy of a skill, the gradual weakening of something you stop using, but something deeper and harder to recover from. Cognitive surrender is what happens when you stop wanting to think for yourself. When the question “why struggle with this when the machine can do it?” stops feeling like a temptation and starts feeling like common sense. When the delegation of your thinking becomes so complete and so habitual that the desire to engage your own mind, the curiosity, the productive struggle, the willingness to sit with a hard question, has quietly left the building.

It presents itself as efficiency. It is, in practice, the slow erosion of the very thing your education is supposed to be building.

* * *

The Conditions of Learning give you a way to evaluate any AI interaction in real time, without needing a policy, a rule, or someone looking over your shoulder.

The question is simple: Does this use of AI create or undermine the conditions that produce genuine learning in me?

Is it amplifying my curiosity or replacing it? Is it helping me work through the difficulty, or eliminating it entirely? Is it giving me something to push back against, to test my thinking against, to refine my understanding against, or is it just handing me an answer I'll accept and move on from? Is it helping me develop a capability I'll actually have afterward, or is it producing an output I'll submit and forget?

These questions don't have the same answer every time. AI used as a thinking partner, something to interrogate, argue with, explore with, and use as a first draft of your own thinking rather than a replacement for it, can genuinely enhance the conditions for your learning. AI used as an answer machine, a shortcut past the friction, a way to satisfy the requirement with the minimum expenditure of your own mind, systematically destroys them.

The same tool. Completely different outcomes. The difference is not the technology. It's what you're trying to accomplish when you reach for it.

That question, “what am I actually trying to accomplish?”, is the one we need to get serious about now. Because answering it honestly requires knowing something about yourself that school has largely not helped you develop: a genuine sense of direction. A real understanding of what you're trying to become, not just what you're trying to get.

That's the compass. And it's what the next section is about.

The AI Choice

Every powerful tool in human history has carried the same double nature. It extends what you can do, and it atrophies what it does for you, if you let it.

Socrates worried about writing. This is not a joke or a piece of historical trivia; he argued in earnest, in Plato's Phaedrus, that the written word would weaken human memory. That people would store knowledge outside themselves and lose the internal capacity to hold and reason with it. He was not entirely wrong. Writing did change how humans store and retrieve knowledge. But the net effect was not diminishment; it was an explosion of human capability, because people learned to use writing as a tool that extended their thinking rather than replaced it.

The calculator produced the same anxiety in a later generation. If students can just punch numbers into a machine, will they ever learn to reason mathematically? Some didn't. The students who used calculators as a substitute for understanding arithmetic, rather than as a tool in the hands of someone who already understood it, ended up with neither the skill nor the understanding. But the students who learned the mathematics and then used calculators to free themselves from tedious arithmetic so they could do more mathematics, they came out ahead. The tool was the same. The outcomes diverged entirely based on what the person brought to it.

This pattern is old enough to be something like a law. Every cognitive tool creates leverage and atrophy risk simultaneously. The leverage is real. The atrophy risk is real. And the outcome is not determined by the tool; it's determined by the person using it, specifically whether that person is using it to extend their capability or replace it.

AI is the most powerful instantiation of this pattern in human history. The leverage it offers is extraordinary, genuinely, historically unprecedented. A curious person with access to a good AI interaction can now go deeper into almost any subject than most people could have managed a decade ago, without a university library, without expensive tutors, without institutional gatekeeping of any kind. That part of the story is real. I don't want to bury it under warnings.

But the risk of atrophy is equally extraordinary. And what makes this particular moment different from the calculator or the search engine is that AI doesn't just perform a narrow task like arithmetic or retrieval; it performs the thinking itself. It generates arguments, makes judgments, synthesizes information, and produces the kind of output that used to require a mind actively engaged with a problem. Which means the atrophy risk isn't limited to a specific skill. It extends to the whole enterprise of thinking.

* * *

Let me give you two concepts that are worth keeping for the rest of your life, because the difference between them is the difference between AI making you more capable and AI making you less.

The first is cognitive offloading. This is what a mathematician does when she uses a calculator for routine arithmetic. She understands the mathematics. She could do the calculation by hand if she had to. She's made a conscious decision to delegate a specific, mechanical task to a tool so she can spend her mental energy on the parts of the problem the calculator can't touch. The capability is intact. The judgment about what to delegate is intact. The tool is serving a capable person who chose to use it.

The second is cognitive surrender. This is what happens when a student never develops the underlying capability because the tool has always been there. Not a delegation, but an abdication. Not a choice made by a capable person, but the permanent absence of a capability that was never built in the first place, or was built and then so consistently bypassed that it quietly stopped working. The student can't do the mathematics. They couldn't do it before the calculator, and they can't do it now. The tool didn't extend their capability. It substituted for it.

The distinction sounds clean when you lay it out this way. In practice, it's harder to see, because cognitive surrender doesn't arrive all at once, and it doesn't announce itself. It comes gradually, interaction by interaction, each one feeling like a perfectly reasonable decision. Why formulate this argument myself when the AI can produce a better-organized one in ten seconds? Why sit with this confusion when I can just ask and get clarity immediately? Why develop my own interpretation when I can read the AI's and decide whether I agree? Each of these feels, in the moment, like efficiency. Sensible. Modern. Like using the tools available to you rather than performing unnecessary difficulty.

What actually happens, over time, is that the expectation of effort shifts. The experience of productive struggle, which used to feel normal, even satisfying when you broke through, starts to feel unnecessary. Then it starts to feel annoying. Then it stops occurring to you that it was ever available. You are not, at that point, a person who has delegated a task to a tool. You are a person who has stopped wanting to think for yourself. That is a different condition, and it is much harder to recover from.

* * *

Three things in the current moment make cognitive surrender especially easy to slide into, and you should know what they are because none of them are going to warn you.

The first is that the companies building these tools have no incentive to prevent it. The business model of every major AI platform runs on engagement and dependency. A user who delegates more to the tool is a more engaged user. A user who becomes dependent on the tool is a retained user. There is no commercial pressure, none whatsoever, for an AI company to help you become less reliant on its product. That's not malice. It's the ordinary operation of incentive structures. The tool is designed to be used more, not less. It is designed to feel indispensable. It will succeed at this unless you are deliberately working against it.

The second is that the system around you cannot detect surrender; it can only detect cheating. A school can run your essay through a detection tool and find evidence that text was copied. What it cannot find, what it has no mechanism for finding, is whether the work you submitted reflects genuine engagement of your own mind or a sophisticated bypass of it. A well-prompted AI can produce an essay that satisfies most rubrics on most assigned topics. The grade goes into the system. No flag is raised. You've beaten the detection. You've also quietly given away something the system was supposed to be building in you, and the system can't see it because it never had a good way to measure what was most important in the first place. Recall what I said in the last section: not everything worthwhile can be measured, and the system has optimized for what it can measure. Your genuine cognitive development is not in that category.

The third is that surrender is self-reinforcing in a genuinely insidious way. Each act of delegation makes the next one easier. Not because the skill atrophies overnight; it doesn't. It's because the expectation shifts. The student who asks AI to write their first essay finds the second one harder to write themselves, not because they've lost the technical ability, but because the experience of sitting with a blank page and generating something from their own mind now feels like unnecessary friction. The third essay is harder still. By the tenth, the question “why would I do this myself?” feels like common sense rather than a warning sign. The trajectory of cognitive surrender is not from competence to incompetence. It is from agency to passivity. From someone who thinks to someone who receives. And it happens quietly enough that many people don't notice until the conditions of the game have changed, until the scaffolding comes down and no AI can substitute for the judgment they didn't develop.

* * *

None of this means don't use AI. I want to be as clear about that as I can, because this kind of argument is often read as technophobia, and it isn't. It's the opposite. It's an argument for using AI with enough understanding of what's at stake that you can actually capture the leverage rather than suffer atrophy.

The question that cuts through all the noise, for any specific AI interaction at any moment, is this: Does this use of AI serve the capable, self-directed adult I am becoming?

Not: Is this allowed? Not: Will I get caught? Not: Is this technically cheating? Those are the wrong questions, and they're the questions the institution trained you to ask because the institution's logic is about rules and compliance. The right question is forward-looking and personal. It requires you to have some sense of who you're trying to become, and to evaluate this specific interaction against that standard.

I've called this the Amish Test, after something the writer Kevin Kelly documented about Amish communities. The Amish are not categorically anti-technology; that's a common misunderstanding. What they do is evaluate technology deliberately, asking whether a given tool serves their values and their long-term vision of how they want to live. They adopt what serves those goals. They decline what doesn't. They are, in this sense, more intentional about technology than almost anyone in the modern world, not because they're afraid of it, but because they've decided that the adoption of any tool is a choice that should be made consciously rather than by default.

The question they ask, applied to your situation: Does this use of AI, right now, serve the person I am trying to become? Not AI in the abstract; this specific use, in this specific moment. Using AI to explore a question you're genuinely curious about, to push your thinking further than you could push it alone, to get a different angle on a problem you've already engaged with; that use serves the capable, self-directed adult you're becoming. It's offloading, not surrender. Using AI to generate the essay you don't want to write on the topic you don't care about so you can move on to something else; that also serves a goal, but it's not the goal of your development. Know the difference. Make the choice explicitly, with your eyes open, rather than letting default decide.

* * *

Here's what that looks like in practice, across the spectrum of how AI actually gets used.

At one end, AI as a thinking partner. You've read something, struggled with it, formed a preliminary view. You bring it to an AI interaction not to be told what to think but to stress-test what you've already thought. You push back. You ask for the counterargument. You ask why the position you've formed might be wrong. You use the exchange to sharpen your own thinking, and what you walk away with is yours, a more developed version of your own reasoning, not a replacement for it. This is offloading at its most productive. The underlying capability is not just intact, it's stronger.

Further along the spectrum, AI as explainer. You're confused about something, genuinely stuck, and you ask for clarification. This is legitimate and often valuable; it's what a good teacher does, and access to a patient, knowledgeable explainer at any hour is one of the real gifts of this moment. The risk here is subtle but real: if you're always resolving the confusion before you've sat with it long enough to develop your own relationship to the question, you're short-circuiting something the confusion was producing. Confusion is not just an obstacle. It's often the signal that your brain is working on something. Eliminating it too quickly can leave the work undone.

Further still, AI as first draft. You use it to generate a starting point, then engage genuinely with what it produced, rewriting, pushing back, improving it against your own judgment of what should be there. This is a zone of genuine risk. If the engagement is real, if you're actually thinking harder because of what the AI produced, this can work. If the engagement is cursory, if the draft goes out largely as it came in, then the output was the AI's and the learning was close to zero.

At the far end, AI as surrogate. You hand it the task entirely, accept what comes back, and move on. The output satisfies the institutional requirement. Nothing that happened in this interaction made you more capable. This is what junk food is to nutrition: it satisfies the immediate hunger while providing none of what your mind actually needed from the experience. The assignment is done. The learning didn't happen. And unlike junk food, where the empty calories are at least visible in your waistline, this damage is entirely invisible: to the institution, to the people around you, and quite possibly to yourself.

Consider what you're actually spending here. If you're in college or university, you or your family is paying an enormous amount of money (tuition, room, board, years of deferred income) for the stated purpose of developing your mind and your capabilities. If you're in high school, you're spending something equally irreplaceable: years of your life, hours every day, in an environment that is asking for your full attention and presence. Either way, the investment is real, and it is massive. Which makes it worth asking, with genuine seriousness: if you're using AI to bypass the actual development the investment was supposed to purchase, what exactly are you getting for it? A credential, maybe. A grade, certainly. But the thing the money and the time were nominally for, the growth, the capability, the developed mind, that you gave away for free. That's not efficiency. That's a colossal waste dressed up as a shortcut.

The spectrum matters because almost nobody operates at one pure end. Most real AI use is somewhere in the middle, which is exactly why the question “does this serve the person I'm becoming?” needs to be a living one, asked regularly, and answered honestly.

* * *

You are living at a moment when this question matters more than it ever has before, and when the forces pushing you toward the wrong answer are more powerful than they've ever been. The tool is extraordinary. The incentives around it are misaligned with your development. The institution around you can't detect the problem. And the pattern of compliance the system trained into you makes the shortcut feel natural.

None of those forces is going away. The only thing that changes the outcome is a person who understands what's at stake and has decided, consciously, explicitly, for their own reasons, that their cognitive agency is worth protecting.

That decision requires knowing what you're protecting it for. It requires having something you actually care about becoming, a direction that belongs to you rather than to the institution, a compass that works even when no one is grading you.

Building that compass is what we do next.

Your Compass

Everything I've described so far is a diagnosis. The game, the hidden curriculum, the trained compliance, the conditions that actually produce learning, the choice AI is forcing you to make, all of it is an attempt to help you see clearly what's actually happening in and around your education. Diagnosis matters. You can't navigate well from a map you don't trust.

But diagnosis is not a destination. And at some point, ideally now and not in ten years when the costs have compounded, the question shifts from “what is this system doing?” to “what am I going to do?”

That question requires something the institution cannot give you, and AI cannot generate for you. It requires a compass. Not a set of rules handed down from outside, not a policy about appropriate AI use, not someone else's definition of what success looks like. A compass that is genuinely yours, grounded in your own sense of what you're trying to become, calibrated to your own values and curiosity and vision of your life. Something that works even when no one is grading you, even when the scaffolding of requirements and deadlines has fallen away, even when the choice in front of you is invisible to everyone but you.

This is harder to develop than it sounds, because the institution has spent years training you to navigate by external signals. Grades told you where you stood. Assignments told you what to do. Deadlines told you when. Approval told you whether you'd done it right. Remove those signals, and many students, including very successful ones, find themselves genuinely uncertain what direction even looks like. Not because they lack intelligence or ambition, but because they've never been asked to generate direction from the inside.

That's what this section is about.

* * *

Start with a question that sounds simple and isn't.

Who do you want to be at thirty?

Not what job you want to have. Not what credential you want to hold or what income you want to earn; those are fine things to think about, but they're not the question. The question is about the person. What kind of thinker do you want to be? What qualities of mind do you want to have developed? What will you be able to do, understand, create, and navigate? What kind of judgment will you bring to hard situations? What will you know about yourself, about how you work, about what you value, and why?

Most young people have not been asked this question in any serious way. School asks what you want to do, not who you want to become. The difference matters enormously, because doing follows from being in ways that credential accumulation doesn't capture. The thirty-year-old you will face situations no institutional requirement prepared you for specifically. What will carry you through those situations is not the particular content of any course you took. It's the quality of your thinking, the depth of your judgment, the strength of your curiosity, the solidity of your sense of self. Those are developed, not issued. And how you develop them depends on the choices you make now, including and especially your choices about AI.

The thirty-year-old question is not a fantasy exercise. It's a practical tool. It cuts through the noise of immediate pressures, this assignment, this grade, this deadline, this convenient shortcut, and forces attention onto the actual long-term goal. When you ask “does this use of AI serve the person I'm becoming?” you need to know something about who that person is. The thirty-year-old question is where that knowledge starts.

* * *

From that question, you can begin building what I'd call a Personal Education Plan, not the institutional kind, not the remediation document that schools create for struggling students without their meaningful input, but something genuinely yours. An internal map of your own education that exists independently of any external requirement.

It doesn't have to be elaborate. It doesn't require a formal document or a structured template. But it does require you to have honest answers to a handful of questions that the institution has never formally asked you.

What am I actually curious about? Not what I'm supposed to be interested in, not what looks good, not what my parents want or what the college application requires, but what genuinely captures my attention when I'm free to go in any direction? Curiosity is the most reliable engine of real learning. Following it is not self-indulgence. It is the most direct route to the kind of deep capability that schooling cannot produce, and AI cannot substitute for.

What kind of person am I trying to become? This is the thirty-year-old question applied directly. The qualities, the capabilities, the dispositions. The answer doesn't have to be fully formed; you're not supposed to have your whole life figured out at sixteen or nineteen or twenty-two. But having some genuine direction, even a provisional one, gives you a standard against which to evaluate your choices. Without it, you're navigating entirely by external signals, which is exactly the condition the institution trained you into.

What capabilities do I actually need? Not what the curriculum requires; what do I actually need, given who I'm trying to become and what I'm curious about? This question often reveals gaps the institution isn't covering and redundancies it's belaboring. It also gives you a basis for taking some courses seriously for your own reasons, even when the institutional framing doesn't do them justice.

How will I know I'm growing? This is perhaps the hardest question, because the institution has conditioned you to answer it with grades. But grades measure your performance in the game, not your genuine development. Real growth often doesn't show up in grades at all; it shows up in the quality of your thinking, in your ability to engage with complexity you couldn't handle before, in the solidity of your judgment, in the increasing sense that you can trust your own mind. Finding non-institutional signals of your own growth is one of the most important things you can do, because those signals are the ones that will continue to be available after the institution's signals go away.

How does AI serve this plan? Given everything you've built, your curiosity, your sense of direction, your understanding of the conditions that actually make you grow, how do you use AI in ways that accelerate rather than undermine it? This question doesn't have a permanent answer. It gets asked fresh at each decision point, each interaction, each moment when the shortcut is available, and you're choosing whether to take it.

* * *

These questions together constitute something more important than a plan. They constitute an identity as a learner, a genuine sense of yourself as someone who is actively directing your own education, rather than someone to whom education is being done. That shift, from passive recipient to active agent, is the most significant move available to any student at any level, and it's a move the institution will not make for you. It requires you to explicitly decide that your development belongs to you.

I've used the phrase agentic learning for this, partly because it's precise and partly because the word agentic is everywhere right now in discussions about AI; agentic AI systems are those that don't just respond to prompts but pursue goals, make plans, and take sequential actions toward objectives. The parallel is deliberate. An agentic learner is not someone who waits for the assignment and completes it. They're someone with genuine goals, genuine plans, and genuine ownership of the direction of their own education. The contrast with the passivity the institution trains is as sharp as the contrast between AI that executes instructions and AI that pursues goals. You want to be the second kind of learner. Passive execution of institutional requirements will not develop you the way active pursuit of genuine goals will.

* * *

Now let me tell you what this looks like in relationship with AI specifically, because the compass doesn't exist in the abstract; it gets tested in real decisions, and most of those decisions happen quickly and invisibly.

A student with a genuine internal compass brings a different orientation to every AI interaction. They're not asking, “How do I use this to satisfy the requirement?” They're asking: “How do I use this in a way that serves where I'm actually trying to go?” Those questions lead to very different behavior with the same tool.

A student with a compass uses AI to go deeper into things they're already curious about, not to bypass things they're not. They use it to generate a counterargument to the position they've already formed, not to generate the position itself. They use it to clarify confusion after they've sat with the confusion long enough to understand what they're actually confused about. They use it to explore a question further, not to close the question before they've really opened it. They treat it as a thinking partner with real limitations, a limited sense of what's actually true, no understanding of what they specifically need to develop, and no stake in their growth, rather than as an authority whose outputs can be trusted and submitted.

A student with a compass also knows when AI isn't what they need at all. When the assignment is hard in a way that's productive, when the struggle is the point, they recognize that reaching for AI to relieve the difficulty is exactly analogous to asking someone else to do your push-ups. The resistance is the mechanism. Remove it, and you've removed the thing that was supposed to build something.

None of this requires heroic self-denial. It doesn't mean refusing AI or performing difficulty to prove something. It means understanding the difference between what makes you look productive and what actually makes you capable, and caring enough about the second thing to make your choices accordingly.

* * *

I want to say something directly to the part of you that might be reading this and thinking: this sounds like a lot of work for outcomes I can't see yet, when the shortcut is right there and available, and most people around me are taking it.

That's a fair thought. And I'm not going to pretend the immediate calculus looks favorable for the approach I'm describing. The shortcut is faster. The game rewards the output. Most people around you probably are taking it. The institution can't tell the difference most of the time.

What I can tell you, from years of watching this play out, is that the gap between the two paths is not visible at the beginning and becomes very visible at the end. The students who treated their education as a game to be optimized and their development as secondary tend to arrive in their mid-twenties and beyond with credentials but without the capabilities those credentials imply. They've won the game. They're genuinely uncertain what to do now that the game is over. The students who took the longer view, who understood the game but refused to let it be their only game, who kept some part of their education genuinely theirs, those students arrive in the same place with something the credential can't capture and can't be taken away: the developed capacity to think for themselves.

That capacity is the compass. Not a fixed set of answers; a durable ability to generate direction from the inside. And it is built, or not built, in the hours and choices that feel invisible at the time.

The last thing I want to do is leave you with a framework and no sense of what it's actually preparing you for. So let's end there, with what comes after the game, and why the choices you make now matter more than the institution's scoring system will ever be able to show you.

What You're Really Preparing For

Here is something worth knowing before you leave school: the game doesn't end when you do.

The institution changes its name and its setting. The grades become performance reviews. The GPA becomes the job title you've advanced to. The teacher's approval becomes the manager's approval. The assignments become deliverables. But the underlying logic (produce what the system requires, signal what the evaluators want to see, stay near the center, don't ask questions that make things complicated) follows you. The Game of School becomes the Game of Work, and most people step into it without noticing the transition because the rules feel so familiar. They've been practicing for this their whole lives without knowing that's what they were doing.

I'm not telling you this to be bleak about what's ahead. I'm telling you because the compliance trained into you by school doesn't stop being trained into you just because you walk across a stage and collect a piece of paper. It continues operating in the background, shaping your responses, your expectations, your sense of what's normal, until something interrupts it. Sometimes the interruption is a crisis. Sometimes it's a mentor who tells you the truth about what you're capable of. Sometimes it's a book that lands at exactly the right moment. Sometimes it's the slow accumulation of your own experience, the gradual recognition that you've been playing by rules that don't serve you.

The students who arrive at that recognition early, who develop a genuine internal compass before the Game of Work has fully absorbed them, are in a categorically different position from the ones who don't. Not because life is easier for them, or because they've escaped the necessity of working within institutions. They haven't. But they bring a different quality of self to every institutional context they enter. They know the game is a game. They can play it strategically, without being consumed by it. And underneath the game, they have something developing that the game can never fully reach: their own capacity to think, judge, decide, and direct.

* * *

Now add AI to this picture, and the stakes multiply in ways that I don't think most people have fully absorbed yet.

The working world you are entering is one in which AI can perform an increasing share of the tasks that jobs have historically required. Not all tasks; not the judgment calls, the relationship navigation, the creative leaps, or the ability to understand what a situation actually requires rather than what it appears to require. But a growing portion of the routine cognitive work that institutions pay people to do. The people most vulnerable to this shift are, almost exactly, the people most thoroughly trained by the Game of School: those who learned to execute instructions reliably, produce required outputs efficiently, and stay within defined parameters. Those are the capabilities AI replicates most readily. The compliance the institution rewarded is precisely what becomes most substitutable.

What AI cannot replicate, what remains stubbornly, essentially human, is genuine judgment. The ability to look at a situation that doesn't fit the template and understand what it actually requires. The ability to ask the right question when the question hasn't been given to you. The ability to navigate ambiguity, sit with uncertainty, and make a decision you can stand behind when the outcome is genuinely unclear. The ability to care about something for your own reasons, to pursue it with your own motivation, to see it through when external pressure isn't driving you. These capabilities are not produced by credential accumulation. They are not produced by AI interaction. They are produced, slowly, unevenly, through effort and reflection and genuine engagement with difficulty, by exactly the process this essay has been describing.

The students who develop genuine cognitive agency now, who take the compass seriously, who use AI to become more capable rather than less, who protect their ability to think for themselves even when the shortcut is available and the institution can't tell the difference, those students are preparing for something the credential cannot capture and the Game of School cannot produce. They are preparing to be the kind of person who remains valuable and capable in a world that is getting very good at replacing people who aren't.

* * *

I've spent decades in education. I've watched enormous numbers of students move through this system and into whatever came after it. I've interviewed teachers, reformers, researchers, and thinkers who have devoted their professional lives to understanding what education is actually for and why we so often fail to deliver it. And after all of that watching and listening and thinking, what I keep coming back to is something surprisingly simple.

The students who thrive, not just in school, not just in the early years of work when the game's rules are still familiar, but across a lifetime of changing circumstances and unexpected challenges, are the ones who learned to trust their own minds. Not blindly. Not arrogantly. But genuinely: with the earned confidence of someone who has done the work of developing their own thinking, tested it against real difficulty, refined it through genuine feedback, and arrived at something that belongs to them. They have a compass. They built it themselves. And it works in conditions for which no institutional credential was designed.

That's what I want for you. Not as an abstraction; as something you can actually start building now, in the middle of whatever institutional context you're currently in, with whatever relationship to AI you currently have.

You don't have to wait until you're free of the game to start playing a deeper one. You don't have to opt out of credentials to start caring about genuine capability. You don't have to refuse AI to avoid cognitive surrender. You just have to see clearly what the choices in front of you actually are, which is what this essay has been trying to help you do, and then make them explicitly, with your own development as the standard rather than the institution's scoring system.

* * *

The person you are at thirty will be built, in large part, from the choices you make in the hours that feel invisible right now. The assignments you actually think through versus the ones you hand off. The confusions you sit with long enough to understand versus the ones you resolve before they can teach you anything. The questions you follow because they genuinely interest you versus the ones you fake interest in because they're required. The capabilities you build because you decided they mattered versus the credentials you collected because the game required them.

None of this will show up in your GPA. Most of it won't show up in any external measure at all. It will show up in you, in the quality of your thinking, the solidity of your judgment, the depth of your curiosity, the durability of your sense of direction when the scaffolding eventually falls away. Those are the things that carry you. They are also, as it happens, exactly what this particular moment in history most needs from the people moving through it.

AI is not going to save education. It is not going to destroy it either. What it's going to do, what it is already doing, is make the distinction between genuine learning and its performance more consequential than it has ever been. The gap between a person who has developed real cognitive agency and a person who has learned to produce the appearance of it is about to become very visible, in very practical ways, in very real circumstances. The institution cannot show you that gap. The credential cannot measure it. Only you can know which side of it you're on.

I'm writing this because I believe you're capable of being on the right side of it. Not because you're exceptional, though you may be, but because the capacity for genuine self-direction is not a rare gift distributed to a lucky few. It's a human capacity, available to anyone who chooses to develop it, that the institution has largely failed to cultivate, and that AI, misused, will further suppress. You don't have to let either of those things determine your outcome. You have more agency in this than the system has ever told you.

Use it.