Most of us find ourselves genuinely conflicted about AI in education. AI appears both alarming and exciting in ways that seem difficult to reconcile. If students are using AI to complete assignments, and teachers are using AI to evaluate them, we ask if there is any actual learning going on. But then we can find ourselves using AI to explore ideas in ways that seem genuinely rigorous and exciting, and we wonder if this might be the ultimate fulfillment of our dreams of unlimited access to knowledge.
While our conflicting reactions to AI in education might seem confusing, I would argue they persist because we are missing a framework, one that clarifies much of what is going on: both with AI in education and with why we seek simple answers to some of life's most complicated dynamics.
The Four Levels of Learning
A few years ago I found myself using the words “school,” “training,” “education,” and “learning” interchangeably, and realized that doing so was making it impossible for me to think clearly. By defining them separately, I immediately felt a great degree of clarity. Let's do the same thing now, adding in observations about AI.
This framework does more than separate terms; it reveals the cognitive consequences of confusing them. When we can see only the institutional layer of learning, it is easy to judge things as all good or all bad. That all-or-nothing framing is not a rhetorical choice; it is the hallmark of narrative-dependent thinking. Lacking a framework that makes sense of what is really going on, we crave clarity and gravitate to binary positions: AI is either saving education or destroying it. Moving through the levels is not about becoming more optimistic about AI but about seeing it more clearly, so that we can assess both the potential and the costs at every stage.
Schooling is the institutional layer of formal learning. What it primarily teaches is how to function within an institutional environment: follow rules, meet deadlines, respond to authority, perform consistently on standardized measures. Its output is a credential that signals fitness for the next stage. It rewards conformity over curiosity, and it is, at its core, a sorting system. The students who learn to play it well tend not to describe themselves as good learners; rather, they describe themselves as being good at the game of school. Through the lens of schooling, AI is a threat, and the alarm is legitimate on its own terms. From the institution’s perspective, the negatives—cheating, the disruption of credentialing—are paramount because they challenge its core sorting and signaling functions. The potential positives for genuine learning are largely irrelevant to the institutional logic. The “cost” of AI in this context is that it breaks the game.
Training is the purposeful acquisition of specific skills for defined ends. It is pragmatic, largely uncontroversial, and often genuinely valuable. Through the lens of training, AI can be a practical accelerant. It can compress timelines, provide immediate feedback, and meet learners exactly where they are stuck. The corresponding negative, or “cost,” is the risk of producing shallow competence. While AI can speed up skill acquisition, over-reliance on the tool can prevent genuine skill acquisition and a deeper, more robust understanding of the underlying principles.
Education, in the classical sense, comes from the Latin educere, to lead out or draw out from within. It describes what happens when a mentor helps a learner think at a higher level. Through the lens of education, AI is both more interesting and more demanding. A thoughtful AI interaction can function like a Socratic dialogue, surfacing assumptions and pushing a learner to think more carefully. The cost is reliance on a tool that, for all its fluency, lacks core human reasoning capabilities: it has no objective sense of true or false, and its confidence bears no relationship to its accuracy.
Self-directed learning is the destination that healthy education is trying to reach. It is what we mean by becoming a lifelong learner. Through the lens of self-directed learning, AI is quite possibly a historic breakthrough in human potential. A person with genuine curiosity now has access to a responsive, patient, and knowledgeable interlocutor at any hour. The barriers of geography, cost, and institutional gatekeeping have largely collapsed. But even here there are costs. The self-directed learner faces the danger of intellectual conformity within AI-curated filter bubbles, the potential for curiosity to be flattened by immediate answers, and the heightened need to evaluate AI-generated content critically. The framework doesn't eliminate these negatives; it brings them into sharp focus, allowing the learner to navigate them intentionally.
If these distinctions seem obvious once stated, it's worth asking why we so rarely make them.
Definitional Confusion is Not Unique to Education
A stubborn lack of cognitive clarity, relieved by binary thinking, is a common outcome when an institution takes hold of a domain of human life and quietly collapses the distinctions within it. The institution, and those who work for it, need the thing it manages to be uniform, legible, and measurable. And so the original purpose, the deeper human need the institution claims to serve, becomes subordinate to the institution's own machinery.
This is not corruption or bad intentions. The people who build and sustain institutions often care deeply about the purposes those institutions are meant to serve. But the dynamics of organizing, establishing, and growing any institution produce an inevitable tension: the activities and behaviors that keep the institution alive and expanding are generally not the ones that best serve its original mission. What gets rewarded is what maximizes the institution's reach, efficiency, and self-preservation, and that optimization reshapes the original need into something that fits an institutional response.
Consider medicine. Health and medical treatment are not the same thing. One is a state of human flourishing. The other is a set of clinical interventions that are quantifiable. A healthcare system can optimize for throughput, billing codes, and procedure volumes, and by its own metrics look like it is thriving, while the health of the population it serves tells a different story. Most people sense this gap but find it difficult to articulate, in part because the institution presents treatment of symptoms as synonymous with health.
Or consider employment. Having a job and having economic security are not the same thing. A person can be fully employed and still economically precarious. But because the institution of wage labor has fused the two ideas together, the policy conversation almost always treats job creation as the solution to insecurity, even when the evidence is more complicated.
And this same dynamic is particularly manifest in education. We subordinate human meaning to institutional outcomes in many fields, but education is arguably where it does the most harm: the stakes are deeply personal, the gap between what we say schools are for and what they actually do is wide, and schooling shapes both our sense of self and our ability to evaluate every other area of our lives.
The Question Worth Asking
When we feel both alarmed and excited about AI in education, we are not being inconsistent. We are responding to genuinely different frames of reference, without realizing they are different. It's important to understand that both the alarm and the excitement are reasonable responses and can be navigated without having to choose only one or the other.
The alarm belongs mostly to schooling. It is not wrong, but it is limited and protective. A profound excitement about the potential of AI is also warranted and is best understood through the upper levels of learning.
Most of the confusion about AI in education is not really about AI. It is about the functional distinctions we have trouble making in an institutionally-oriented world. Seeing the levels clearly won’t resolve every question, but it does help us understand which questions are worth asking at each level.
The Myth of Revolution
This brings us to an often-expressed hope that AI will revolutionize education. This belief in the potential for change misunderstands how institutions, particularly in education, actually function. Schooling is not primarily driven by a desire for better learning outcomes. Its logic is one of credentialing, sorting, and social signaling. This is a recurring historical pattern. New technologies, from the radio to the personal computer to the internet, have been hailed as educational saviors, but their revolutionary potential has been consistently neutralized or co-opted by the institutional imperatives of schooling. The system is designed for stability and self-preservation, not radical change, and it is exceptionally good at absorbing new tools without altering its fundamental structure.
My take is that AI has very real positive and negative consequences, and it will revolutionize human cognition and capability. What it won't change is the fundamental structure of institutional schooling. Rather than revolutionizing education, it will dramatically heighten the tension between schooling and learning.
Hopefully, the framework provided here is a tool for clarity. It allows individuals—students, teachers, parents—to make more conscious choices about which level they are operating on at any given time. It empowers them to pursue genuine education and self-directed learning, even within a system of schooling that is largely indifferent to those goals. The harder questions—how to design AI-assisted experiences that build genuine capability, how to help students who have been trained to optimize for grades rather than understanding, how to recognize and credential real learning—are all genuinely difficult. But they are important and valuable questions. And you cannot ask and answer them clearly until you can see the levels clearly.
