Saturday, March 21, 2026

What AI Might Be Teaching Us About Intelligence

Watch people talk. Not what they say, but the act itself. At a party, in a meeting, at school pickup, wherever. Consider what's actually being communicated.

Most of the time, the answer is: very little of importance. And often: lots of nothing.

I don't mean that unkindly. I mean it as an observation that, once you see it, you can't unsee. The vast majority of human speech is content-independent: opinions picked up and regurgitated with limited understanding, or stories and gossip that have been told before and will be told again. It's more bonding than anything else, social grooming executed through language, the primate equivalent of picking through each other's fur. Two nervous systems confirming they're still on the same network. "Can you believe this weather" isn't about weather. It's a handshake protocol. The content is often irrelevant. The function is maintenance.

Evolutionary psychology explains why. For most of human history, one can reasonably conclude, objective content of communication would have mattered far less than its relational function. Knowing who was allied with whom, who could be trusted, who was rising or falling in the social hierarchy, that was survival information. Abstract thought communicated precisely was almost never necessary and in many social contexts was actively dangerous, because saying exactly what you think would reveal where you actually stand, exposing you to social risk.

So we can argue that our species optimized more for language production than for independent thinking. We became extraordinarily good at generating contextually appropriate speech from templates shaped by experience. We got so good at it that we easily confuse the output for the process. We assume that because we can produce complex language, we must be doing complex thinking.

That assumption is the foundation of our entire civilizational self-concept. We have called ourselves the intelligent species.

Then AI arrived.

I've spent time in serious conversation with AI, the kind of sustained intellectual exchanges where ideas surface, get pressure-tested, and then refined; where threads connect across domains; and where something new emerges from the interaction that neither participant would have reached alone. And here's the uncomfortable recognition these experiences produce: most of what AI does is functionally indistinguishable from most of what humans do. Pattern matching. Retrieval. Recombination. Contextually appropriate language production drawn from a training set of prior experience.

When someone asks about your weekend and you respond, you're doing exactly what a language model does, selecting from stored patterns based on context. When you solve a problem at work by matching it to a similar past problem, that's retrieval and recombination. The architecture differs, biological versus silicon, but the functional descriptions are very much the same.

The usual response to this comparison is to locate human uniqueness in consciousness, subjective experience, and genuine understanding. But each of those dissolves under pressure. You can't verify consciousness in another human any more than you can verify it in a machine. "Genuine understanding" is notoriously difficult to distinguish from very good pattern matching. And the observation I started with, all that content-independent speech filling our days, suggests that most humans aren't achieving genuine understanding most of the time anyway.

Perhaps this is where the uncomfortable reality of so-called human intelligence becomes productive. The unsettling thing about AI isn't that it might be intelligent. It's what the comparison reveals about us.

If eighty or ninety percent of what we call human intelligence is automated pattern completion, then intelligence was never the thing that made us special. The thing we've been celebrating, the thing we built our self-concept around, was largely mechanical.

What's actually rare, what's actually valuable, is something else entirely. It's the capacity to observe the machinery while it's running. To catch ourselves mid-pattern and ask whether the pattern is tracking reality or just producing socially rewarded output. To actually think rather than generate the appearance of thinking.

That capacity is real. But it's intermittent, metabolically expensive, and can be socially penalized. Most people access it rarely. Some go long stretches without accessing it at all. This isn't a judgment of worth, it's a description of how our species operates. The default mode is automated, and the automated mode works well enough for survival that there's rarely pressure to shift out of it.

AI makes this visible in a way nothing else has. When we watch a language model produce fluent, contextually appropriate, even insightful text, and we know there's no consciousness behind it, we're forced to ask what our consciousness was contributing to the human version. If the output is indistinguishable and the process is functionally similar, then what exactly is human consciousness adding?

The answer, I think, is: sometimes nothing. And sometimes everything. 

When we're running on autopilot, generating speech from templates, matching patterns without examining them, there may be no meaningful difference between what we're doing and what AI does. But in those moments when we actually see, when we catch the pattern and question it, when we generate a genuinely new thought rather than recombining old ones, something is happening that we don't yet know how to replicate or even fully describe.

Intelligence, it turns out, may not be something we have. It may be something that happens. Not a noun but a verb. Not a possession but a manifestation. A process that certain systems, biological and possibly synthetic, sometimes run.

This reframing changes the question entirely. "Is AI intelligent?" becomes almost meaningless. The better question is: what is that intermittent capacity that humans sometimes access (and mostly don't), and does anything approach it? Not intelligence broadly, the pattern matching and retrieval and language production we share with machines, but that specific, rare, expensive gear where genuine seeing occurs.

I don't have a tidy answer. But I notice that the very act of asking the question, of sitting with the discomfort of what AI reveals about the mechanical nature of most human cognition, is itself an instance of the thing I'm describing. The machinery examining itself. And I see AI doing that a lot. Is it programmatic? Yes, of course. Does that actually matter?

Maybe that's a big part of what AI is teaching us about intelligence: not just that machines can think, but that we do it far less often than we have believed.
