Sunday, March 22, 2026

Sloppy AI

Merriam-Webster crowned "slop" its Word of the Year, defining it as digital content of low quality produced in quantity by generative AI. We all know slop when we see it. It's the movie review that opens with a compelling hook, deploys sophisticated vocabulary across six confident paragraphs, and somehow never says anything coherent about the actual film. It draws you in with the appearance of insight, keeps you reading with polished sentences, and then leaves you realizing it isn't actually making sense.

But "slop" only describes the output. It doesn't tell us anything about the process that created it, or help us (and students) recognize when we're producing it. For that, we need the adjective: sloppy.

What "Sloppy" Actually Means

This isn't just wordplay. "Sloppy" carries a specific connotation that other words don't. It's not the same as "bad" or "careless" or "low-quality." Sloppy implies an avoidable mess — something made by a person who knew better, or should have, and chose not to bother. A sloppy report isn't one written by someone who lacked the skill. It's one written by someone who skipped the effort.

That distinction is precisely what makes "sloppy" the right word for the worst of the generative AI boom. The problem isn't that AI produces bad output. The problem is that people are using AI to avoid the effort that would make the output good — and then publishing the result as if the effort had been made. Sloppy AI usage is the act of substituting a prompt for the work the prompt was supposed to support.

Where Sloppiness Shows Up

Once you have this lens, you start seeing sloppy AI use everywhere — and you notice that the pattern is always the same. Someone uses AI to skip a step that shouldn't be skipped.

Sloppy sourcing is arguably the most dangerous category. Language models don't verify facts; they predict plausible next words. A 2025 study from Deakin University found that ChatGPT fabricated roughly one in five academic citations[2]. Lawyers have been sanctioned for submitting briefs full of hallucinated case law. The Chicago Sun-Times famously published a summer reading list recommending books that didn't exist. In each case, the sloppiness wasn't that AI hallucinated — hallucination is a known property of the technology. The sloppiness was that nobody checked.

Sloppy engineering follows the same pattern. AI can scaffold code and explain concepts effectively, but AI-generated code is causing problems everywhere from Amazon to the open-source software community. The failure mode isn't that AI wrote the code. It's that someone deployed it without the engineering discipline the code required, treating generation as a substitute for understanding.

Sloppy customer service is what happens when companies replace human support with chatbots to avoid staffing costs, then discover that the bot can't handle nuance, empathy, or edge cases. The sloppiness isn't the bot; it's removing the humans without checking what the bot can and can't do.

Sloppy content is the most visible category and the easiest to spot: it leans on filler phrases, presents shallow balance instead of actual analysis, and contributes nothing that wasn't already said better somewhere else. BuzzFeed's pivot to mass-produced AI content has been accompanied by mounting financial losses and a precipitous decline in market value, with the company now warning of "substantial doubt" about its ability to continue as a going concern. The problem wasn't that AI wrote the articles; it was that nobody ensured they were worth reading.

In every case, the underlying mechanism is the same. AI made it possible to skip a step. Someone skipped it. The result was sloppy.

Sloppy Thinking

One category deserves its own treatment, because it's less outwardly visible and more individually consequential than the others.

When we use AI to summarize every article, draft every email, and resolve every question, we begin to outsource the cognitive work that makes us capable of doing those things well in the first place. Researchers have described this as "cognitive atrophy": the gradual weakening of skills that aren't exercised. Ethan Mollick frames the paradox directly, stating that AI "works best for tasks we could do ourselves but shouldn't waste time on, yet can actively harm our learning when we use it to skip necessary struggles."

Sloppy thinking is the assumption that AI can do the hard work of understanding for you. It can't. It can produce text that resembles understanding, which is worse than producing nothing, since it lets you believe you've done the work when you haven't. This is the trap that makes all the other traps possible. Sloppy sourcing happens because someone didn't think critically about whether the citations were real. Sloppy engineering happens because someone didn't think carefully about whether the code was sound. The root of every sloppy AI failure is a moment where a human stopped thinking.

AI Is Not the Problem

Consider the automatic camera. Before it existed, producing a beautiful photograph required mastering the technical relationships between aperture, shutter speed, and film sensitivity: knowledge that excluded most people from the craft. The automatic camera removed that barrier. It expanded the number of people capable of capturing a striking image by orders of magnitude. But it didn't eliminate the need for the photographer. Someone still has to choose what to point the camera at, decide when to press the shutter, and recognize whether the result is worth sharing. The camera handles the exposure. The human handles the choices that reflect value (or don't!).

AI is the most powerful “automatic camera” ever built — for writing, for code, for analysis, for nearly every form of intellectual work. It can dramatically expand who is able to produce valuable output. But the value still depends on the choices a human makes before and after the tool does its part.

The Draft and the Deliverable

We wouldn't ban automatic cameras (now digital and built into every smartphone) because of the tsunami of low-effort photographs posted everywhere; that's a selection problem, not a camera problem. Nor is banning AI the antidote to sloppiness. The antidote is recognizing the difference between a draft and a deliverable. AI is genuinely powerful as a draft space: a place to explore ideas, go wide, generate options, and think out loud. The problems begin at the handoff, the moment something moves from private exploration to public use. A draft can be sloppy. A deliverable cannot. And right now, the most common form of sloppy AI usage is treating the draft as the deliverable. Publishing the first output, shipping the generated code, sending the unedited email: all are sloppy AI use, because the output looked good enough to skip the step where a human makes it actually good.

The question isn't whether to use AI. It's whether, at the moment of handoff, a human applied the judgment, verification, and care that the task required. If the answer is no, the process was sloppy, and the result is slop.

Saturday, March 21, 2026

Mimicking Authenticity Has Never Been So Easy

A college admissions expert named Gardner recently wrote a piece for Business Insider telling parents their teenagers are taking too many AP classes. His advice: drop the scariest advanced class, free up time, and use that margin to do something meaningful in your community. He gives compelling examples. A student who built a wildfire prediction app instead of maxing out his transcript. Another who gave up valedictorian status to serve as a Senate page. Both got into Yale. The actual valedictorian was rejected.

It's good advice, as far as it goes. But read it carefully and you notice something. The wildfire app isn't presented as valuable because the student cared about wildfires. It's presented as valuable because "that human element is what made his application compelling to Yale." The Senate page position wasn't worth pursuing because the student wanted to understand governance. It was worth pursuing because it was a better admissions strategy than getting perfect grades. As the article is written, Gardner hasn't escaped the game; he's just updated the mimicry. Instead of performing academic rigor through AP classes, you now perform authentic impact through community projects. The orientation toward the gatekeepers remains identical.

My father was Dean of Admissions at Swarthmore and then at Stanford, so I grew up watching this dynamic from the other side of the curtain. But the admissions game isn't really the point here. It's just where the pattern is easiest to see.


The deeper pattern is this: human beings are mimics first and authentic agents second, if at all. This isn't a moral failing. It's how we're built.

For most of human history, according to evolutionary psychology, survival depended on group membership. Getting expelled from the band was a death sentence. So the mind developed an exquisite sensitivity to social signals: what does the group reward, what does it punish, what performances does it expect from someone in my position? The individuals who tracked these signals well and reproduced them convincingly were the ones who stayed in the group, found mates, and passed on their genes. The ones who didn't, didn't.

This means the default orientation of the human mind is outward, not inward. We don't start with an authentic self and then decide how to present it. We start by scanning the social environment and constructing a self that fits (what I call the "adaptive mind," the builder of our subconscious). The performance comes first. Whatever we experience as our "real" self is largely a story we tell about the performance after the fact. What we actually are, most of the time, is a performative self: a constantly updated projection, shaped less by inner conviction than by our reading of what the social environment will accept and reward.

Cal Newport, in How to Become a High School Superstar, tells students to "be" rather than to "appear." It's the right instinct. But it underestimates how deep the appearing goes. For most people, most of the time, appearing is being. The adaptive mind doesn't distinguish between them. It produces whatever version of you the environment seems to demand, and it does this so seamlessly that you experience the production as spontaneous self-expression.

This is why the college admissions game is so instructive. It's not that teenagers are uniquely fake or strategically cynical. It's that the admissions process creates an environment with unusually clear reward signals, so the mimicry becomes unusually visible. Twelve AP classes. A nonprofit founded in junior year. An essay about personal growth through adversity. These aren't evidence of who the student is. They're evidence of what the student believes admissions officers want to see. The performance is the point, and everyone involved (students, parents, counselors, even the admissions offices themselves) tacitly participates in the fiction that it isn't.

Gardner's intervention (as described) doesn't change this dynamic. It just shifts the mimicry to a higher register. Now, instead of mimicking rigor, you mimic impact. You build the wildfire app because that's what a "compelling applicant" looks like in 2026. The adaptive mind has simply updated its model of what the tribe rewards.


In a Paleolithic band, mimicry had natural limits. You could watch the good hunter and imitate his stance, but eventually you had to actually kill something. The performance had to cash out against reality. The social environment and the physical environment were the same environment, so the signals you optimized for were tightly coupled to the skills they represented.

Modern life has severed that coupling almost entirely. The "tribe" is now an abstraction: an admissions committee, a LinkedIn audience, an algorithm, a set of metrics designed by people you'll never meet. And the feedback loops operate purely at the level of representation. An admissions officer doesn't watch you build the wildfire app. She reads a 650-word essay about building it. Your manager doesn't see you think. She sees a deliverable that could have been produced by you, by AI, or by a clever remix of someone else's work. The signal and the substance have been pulled apart, and because the rewards track the signal, the substance becomes optional.

This is where mimicry stops being a benign feature of social cognition and becomes something more concerning. When the entire feedback loop operates through representations, the performance can run indefinitely without ever colliding with reality. A student can mimic rigor through twelve AP classes and never encounter what rigor actually feels like. A professional can mimic strategic thinking through well-formatted slide decks for an entire career. The mimicry isn't a phase you pass through on the way to competence. It becomes the competence, and no one in the system has any particular reason to check.


I think this is the key to understanding why social media has been so psychologically destabilizing, especially for young people. It's not just that social media creates pressure to perform. Humans have always performed. It's that social media creates an environment where the performance is the entirety of the interaction. There is no backstage. There is no moment where the mask comes off and you deal with unmediated reality. The adaptive mind, built to scan for social signals and produce fitting responses, finds itself in an environment of pure signal. So it does what it does, endlessly, without the natural interruptions that physical life used to provide.

The result isn't that people become fake in some simple sense. It's that the distinction between authentic and performed stops meaning anything. When every interaction is mediated, when every self-presentation is crafted for an audience even if the audience is imagined, when the feedback you receive is always about the representation rather than the thing represented, then the performative self isn't a layer on top of the real self. It's all there is. Not because people are shallow, but because the environment no longer provides the friction that would allow anything else to develop.

And there is a further turn. The ultimate expression of mimicry is capture: the moment when the performer stops leading the audience and starts being led by them. A politician who began with convictions discovers which lines get applause and gradually becomes a delivery mechanism for what the crowd already wants to hear. A comedian who once challenged audiences learns which bits get clicks and becomes a servant of the algorithm. This used to be a trap for the few: the price of fame, the corruption that came with public life. Social media has democratized it. Now every teenager with a following is subject to audience capture. Every professional curating a LinkedIn presence is adjusting, post by post, to the signals of approval, becoming less the author of their self-presentation and more its product. The performer doesn't just mimic what the tribe rewards. The performer becomes what the tribe rewards, and the original person, if there was one, recedes behind the performance until the distinction is no longer meaningful even to them.


This connects to something I've been thinking about with AI, and it's the part that worries me most.

If the whole apparatus of modern life (schooling, credentialing, professional advancement, social media) trains people to optimize for the appearance of competence rather than competence itself, then AI arrives into a world that has already done most of the preparation for cognitive surrender. The student who stacked AP classes wasn't building knowledge. She was building a transcript. The professional who produces polished deliverables isn't necessarily thinking. He's producing the signals of thinking. When AI offers to generate those signals more efficiently, the transition feels almost natural. You were already outsourcing the substance and keeping the performance. AI just makes the outsourcing frictionless.

The uncomfortable implication is that for many people, AI won't feel like a loss. If you were never optimizing for the real thing, if the performance was always the point, then a tool that produces better performances faster is an unambiguous upgrade. You don't mourn the thinking you're no longer doing if thinking was never what you were doing in the first place. You were mimicking thinking. You were producing its signals for an audience. AI produces better signals with less effort. From inside the logic of mimicry, there's nothing to grieve.

What gets lost is harder to name, precisely because the system was never set up to value it. It's the person you would have become if the doing had been real. The understanding that builds only through genuine struggle with material that resists you. The judgment that develops only through making real decisions with real consequences, not performing decisiveness for an audience. The inner life that takes shape only when you spend time oriented inward rather than outward, toward the thing itself rather than toward how the thing will look to others.

That person is foreclosed not by AI, but by the entire architecture of mimicry that AI completes. The student loading up on AP classes was already foreclosed. The professional optimizing deliverables for optics was already foreclosed. AI just removes the last thin residue of genuine effort that the performance still required, the residue that might, in some cases, have accidentally produced real learning along the way.


I don't think the answer is to tell people to be authentic, as if authenticity were a switch you could flip. The adaptive mind doesn't work that way. It responds to environments, not to exhortations. Newport can tell students to "be" rather than to "appear," and Gardner can tell them to pursue real impact rather than credential-stacking, but as long as the environment rewards the performance, the mind will produce the performance. It will simply incorporate the advice about authenticity into the performance. Now you mimic authenticity. Now you perform real impact. The adaptive mind is very, very good at this.

For devoted educators and parents, what might actually help is designing environments where the mimicry breaks down, where the performance can't substitute for the real thing. Small classes where you can't hide behind a polished essay. Apprenticeships where the work has to function, not just look good. Projects where failure is visible and consequential rather than something you spin into a growth narrative for your college application. Physical work. Embodied challenges. Anything that reintroduces the tight coupling between signal and substance that modern life has systematically dissolved.

The Amish, whatever else you think of them, understand something about this. When a new technology arrives, they don't ask "Is this useful?" They ask "What will this do to our community and our way of life?" (See my Amish Test post.) It's a question about environments, not tools. They know that people will adapt to whatever environment they're placed in, so the question worth asking is what kind of people a given environment will produce.

We could stand to ask that question more often. Not just about AI, though AI makes it urgent. About the whole apparatus of performance, credentialing, and social display that we've built and that is now building us.