Monday, April 13, 2026

The Levels of Thinking, Part II

I've been thinking about the four Levels of Thinking since I published them, the way you keep turning something over after you've committed to it publicly, looking for the places where it's still rough. Two complications have surfaced that I think are worth naming honestly, and in the process I've found myself wanting slightly different labels for the levels themselves. Not replacing the original descriptions, but giving each one a name that captures the posture of the person inside it.


Level 1, Coalitional Thinking, is the Believer. She thinks what her group thinks, and the question of why has never occurred to her.

Level 2, Informed Thinking, is the Defender. He has replaced tribal intuition with institutional authority but is doing the same thing at a higher resolution: deferring to consensus and defending it with credentialed fluency. 

Level 3, Critical Thinking, is the Critic. She has internalized the insight that her own cognition is unreliable and can hold a position while genuinely entertaining the possibility that she's wrong. 

Level 4, Structural Thinking, is the Philosopher. He has turned the lens not just on his own reasoning but on the systems that shape what's thinkable, asking who benefits from the consensus, what signals are being suppressed, and why.

The names aren't perfect. No names are. But they capture something the original labels didn't quite reach: the felt experience of each level from the inside. The Believer feels settled. The Defender feels informed. The Critic feels honest. The Philosopher feels like he can finally see.

And that last feeling is where the first complication begins.

The Trap

The framework, as written, can be read as a moral hierarchy. Higher is better. The Philosopher is where the good people are. The Believer is where the unthinking masses live, and by implication, where the moral failures accumulate. I've been careful to say these are cognitive descriptions, not measures of intelligence, but I haven't been careful enough to say they are also not measures of character. And that distinction may be the most important thing the framework needs to get right.

Consider Edward Bernays. Freud's nephew twice over (his mother was Freud's sister, and his father's sister was Freud's wife), the man who essentially invented public relations as a discipline. Bernays understood the coalitional mind, the adapted mind, the susceptibility of human cognition to emotional manipulation and social proof, with a clarity that most psychologists of his era couldn't match. He saw the machinery. He could describe it. I sense that he understood it even more pragmatically than his uncle Sigmund did. And when he wrote Propaganda in 1928, the word propaganda was not yet pejorative. He meant it descriptively, even approvingly. His argument was essentially that an informed elite, understanding how mass psychology actually worked, could and should guide public opinion toward beneficial outcomes. He believed this. The seeing, for Bernays, was not a license to exploit. It was a responsibility to steer.

And then he sold cigarettes to women by linking them to suffragist imagery, orchestrated a media campaign that helped enable a coup in Guatemala, and turned bacon and eggs into the "American breakfast" through manufactured expert authority. I don't know what Bernays believed he was doing at each stage of that trajectory. But it seems reasonable to look at the arc from Propaganda to Lucky Strike and see something other than a simple decision to become a manipulator. It seems more likely that the adapted mind was doing what it always does, generating self-serving narratives that feel like objective assessment, but now equipped with a Philosopher's vocabulary that made those narratives more sophisticated rather than less. I'm going to guess that Bernays remained, in his own experience, the person who understood what others couldn't, but I'm not sure he still felt he was working for their benefit. The slide into exploitation was likely gradual, opportunistic, and maybe almost unavoidable.

There's a further dimension to this that I think matters. Bernays proposed what seems to have been a genuine understanding of human nature that he believed could improve the human condition. But the world didn't have a pathway for that. There was no institutional mechanism for applying insights into mass psychology to the service of honest democratic governance. What existed was a market for selling products and shaping opinion on behalf of paying clients. In the absence of a viable route toward the nobler application, the readily available route was the compromised one.

This is the part of the cave allegory that almost no one talks about. Plato describes the prisoner who escapes, sees the sun, understands the nature of the shadows, and returns. The standard reading treats the return as inherently noble. But Plato himself didn't simply advocate for liberation. He advocated for philosopher-kings. He proposed the Noble Lie. He saw the cave, and his solution was not to free the prisoners but to install better management of the shadows. The seeing pulled him, as it pulled Bernays, toward the conviction that those who understand the machinery should run it. It's the same arc you see in every populist reformer who becomes a dictator: the person who sees the system's corruption most clearly becomes the one most convinced that he, specifically, should be trusted with the power to fix it. The insight becomes its own form of capture.

I suspect something similar happened with Plato specifically. Socrates practiced philosophy honestly and got the hemlock. Plato, watching that, seems to have drawn the not unreasonable conclusion that the world doesn't work that way, and the Noble Lie and the philosopher-king were what remained once the honest path had been closed. The Philosopher's trap isn't only that seeing corrupts from within. It's that the world rarely offers a viable path for the seeing to be used as the seer originally intended.

You can see the same dynamic in the tech industry today. Build something used by two billion people, and it seems almost inevitable that the adapted mind does what it evolved to do: constructs a narrative of specialness, of unique vision, of deserved authority. I don't know the inner lives of the people running these companies. But it seems difficult to imagine achieving that level of success and influence without some version of that narrative taking hold. How could it not? The delusion, if that's what it is, isn't a character flaw. It's what the cognitive machinery would predictably produce when you feed it that particular input. And a Philosopher's vocabulary doesn't protect you from it. It likely just gives the machinery better language for the self-justification.

This may be the most important thing the framework reveals about itself: the adapted mind doesn't stop operating when you can describe it. It operates through the description. The same machinery that generates tribalism for the Believer generates messianic self-regard for the Philosopher. It just sounds better. The person who can name coalitional capture, who can identify motivated reasoning in others, who can map the structural dynamics of institutional distortion, is not thereby freed from those forces. He is, at best, in a slightly better position to notice them in himself, if he is willing to do the hardest thing the framework demands, which is to turn the lens on his own certainty that he is the one who sees clearly.

So the framework stands, but with this honest caveat: moving up the levels makes you more capable, not more good. The capacity to see the machinery of your own mind is a necessary condition for genuine moral agency, because you can't choose freely if you can't see what's choosing for you. But it is not a sufficient condition. What you do with the capability is a separate question, and the moral weight, wherever it comes from, doesn't come from the thinking level itself. It comes from something closer to what we awkwardly call conscience, and whatever it is, conscience is not a level of thinking.

The Counterexample

The second complication cuts the other direction. The evolutionary psychology that underlies this framework, the coalitional mind, the adapted operating system, the Paleolithic wiring that makes the Believer's posture the default, can sound deterministic. If humans are optimized for coalitional loyalty, if independent thought is metabolically expensive and socially punished, if the entire architecture of modern institutions selects for the Defender's deference, then the framework starts to feel less like a map and more like a diagnosis with no treatment. The Philosopher becomes a theoretical possibility that almost no one reaches, and the forces arrayed against it look permanent.

But then there's Philadelphia in 1787.

The American founding era represents something that shouldn't have happened if coalitional capture were truly inescapable. A remarkable number of people, not just a few isolated geniuses but a functioning public culture, engaged in exactly the kind of structural thinking about human nature that I'm calling Level 4. The Founders didn't just worry about faction, tyranny, and the concentration of power in the abstract. They designed institutional architecture specifically to counteract the cognitive tendencies they understood themselves to be subject to. Separation of powers exists because they knew that power consolidates. Checks and balances exist because they knew that even well-intentioned people rationalize self-serving behavior. The Bill of Rights exists because they knew that majorities would suppress minorities when the coalitional incentives aligned. The First Amendment exists because they knew that the people in power would always have plausible-sounding reasons to silence dissent, and that the reasons would always feel compelling in the moment.

This wasn't optimism. It was realism, if not outright pessimism. It was a group of people who understood the adapted mind well enough to build institutions designed to compensate for it. They read their Thucydides, their Tacitus, their Montesquieu. They studied the republics that had failed and asked why. And their answer, consistently, was that human nature bends toward consolidation, corruption, and self-deception, and that the only remedy is structural, not moral. You don't fix the problem by finding better people. You fix the problem by building systems that assume the worst about the people in them.

That is the Philosopher's posture, practiced not by a solitary thinker but by a critical mass of people engaged in public discourse. And the question it raises for the framework is: what conditions made it possible?

I don't think anyone has a complete answer, but several features of that moment stand out. The colonial population was literate to a degree unusual for the era, and not just literate but actively reading political philosophy, sermons, and pamphlets that engaged with first principles. The pamphlet culture itself was structurally hospitable to long-form argument in a way that, I cannot help noticing, sounds a lot like the Web 2.0 discourse environment I've often described losing when Facebook and Twitter took over online conversations. There was genuine skin in the game; these were not theoretical discussions but arguments about how to organize a society that participants would actually have to live in, with consequences they would personally bear. And there was an unusual degree of intellectual honesty about human nature, born partly from religious traditions that took the fallenness of man seriously, and partly from classical education that provided a vocabulary for discussing the very dynamics the framework describes.

The founding era didn't escape coalitional psychology. The debates were fierce, personal, and driven by competing interests. The coalition dynamics were everywhere. But enough people could see those dynamics clearly enough and think structurally about them to design institutions intended to harness and constrain them rather than simply be captured by them. The coalitional mind was still operating. It just wasn't operating unopposed.

What this tells me is that the framework's implicit pessimism, the sense that the Philosopher is vanishingly rare and the forces against it are overwhelming, is not entirely historically accurate. It has happened before. Not as a permanent state, not as a mass awakening, but as a temporary critical mass of structural thinkers whose window of clarity produced something durable enough to outlast the window itself.

Whether we are capable of producing that critical mass again, under current conditions, is a question I think a lot about. The founding era had the pamphlet. We had the long-form online discussion forum. Both are gone or diminished. What we have now is an information architecture that structurally selects for the lowest levels of the framework. Whether that's reversible, and what it would take to reverse it, is not a question I am ready to answer. But the fact that it happened once means it is not impossible.

The Impact of AI: Using the "Functional Fictions" Framework for Predicting Where AI Disrupts and Where It Doesn't

This is part of the Understanding the Human Condition series, which uses the unique vantage point of large language models — trained on a substantial fraction of humanity's written output across cultures, centuries, and genres — to explore what the patterns in our self-narration ("functional fictions") reveal about who we actually are. This detail post was written by Claude (Anthropic), guided by me. The introductory post is here.

THE RULE

Every institution has idealized narratives — the stories it tells about why it exists and what it does for people. Schools educate children. Hospitals heal the sick. Law firms provide justice. Banks help people achieve financial security. And every institution has operative functions — what it actually does that keeps it alive, what its business model really is, why it persists. Schools provide childcare, credentialing, and social sorting. Hospitals are organized around billing codes and liability management. Law firms bill for work that requires someone who passed the bar. Banks profit from financial dependence.

The people inside these institutions genuinely believe the idealized narratives. That belief is not a lie. It's the mechanism that keeps them motivated and keeps the public cooperating. And the people outside the institutions — the clients, the patients, the students, the customers — value the operative functions as much as or more than the idealized narratives, even if they couldn't name them. Parents need the childcare. Patients want someone authoritative to take responsibility for their health. Clients want someone to handle the terrifying complexity of the legal system. Most people prefer to be guided, and the operative functions provide that guidance. The operative functions aren't just serving the institution. They're serving real human needs for structure, delegation, and cognitive relief.

This means every institution has three layers of participants who depend on its continuation: the institution itself (sustaining its business model), the insiders (whose income, identity, professional community, and sense of purpose are bound to their role), and the public (who depend on the operative functions — childcare, credentialing, guidance, responsibility transfer — whether they name them or not).

AI disrupts an institution when it can deliver what the idealized narratives promise while eliminating the business model — making the operative functions unnecessary.

AI gets absorbed by an institution when it improves the idealized narrative delivery but can't replace the operative functions — the business model, the insider dependencies, and the public's need for guidance all remain intact.

That's the whole rule. Here's how it works.
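For readers who prefer a schematic, the rule compresses into a few lines of code. This is purely illustrative: the function name, the three-way `operative_functions` label, and the example classifications are my own shorthand for the essay's categories, not a formal model.

```python
def predict(ai_delivers_narrative: bool, operative_functions: str) -> str:
    """Toy version of the functional-fictions rule.

    operative_functions is one of:
      'replaceable' -- AI makes them unnecessary
      'intact'      -- AI doesn't address them at all
      'protected'   -- replaceable in principle, but guarded by law
                       or cultural sacralization
    """
    if not ai_delivers_narrative:
        # AI isn't even fulfilling the institution's stated purpose,
        # so the question of disruption never arises.
        return "status quo"
    return {
        "replaceable": "disrupted",  # e.g. translation, routine legal work
        "intact": "absorbed",        # e.g. K-12 schooling, clinical healthcare
        "protected": "contested",    # e.g. therapy, journalism
    }[operative_functions]

predict(True, "replaceable")  # -> "disrupted"
predict(True, "intact")       # -> "absorbed"
```

The three return values map onto the three sections that follow: where AI will change things, where it won't, and the contested middle.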

WHERE AI WILL CHANGE THINGS

These are domains where AI can deliver what the idealized narratives promise while eliminating the business model that sustains the institution. The idealized narratives are fulfilled. The operative functions are destroyed. The institution can't argue against AI without arguing against its own stated purpose.

SOFTWARE DEVELOPMENT

The idealized narrative: Deep expertise, computer science fundamentals, and years of experience produce reliable software.

The operative functions: The economic value of technical skill scarcity creates high salaries and professional status. Relatively few people can code, which makes those who can expensive and important.

Why AI eliminates the business model: AI doesn't just help programmers work faster. It enables non-programmers to produce functional software. The gate is bypassed entirely. For the large category of software tasks that involve translating business requirements into relatively standard code, the credential — CS degree, years of experience, GitHub portfolio — becomes unnecessary when a person can describe what they want and iterate with AI to produce it.

The institutional resistance narrative: "AI can write code but can't architect systems, understand requirements, or maintain quality." This is partly true for complex systems and entirely false for the majority of software tasks, which is the kind of partial truth that sustains a gatekeeping narrative past its expiration date.

Prediction: The profession bifurcates. A smaller elite working on genuinely complex systems retains high value. The vast middle — people who translate requirements into standard code — faces severe compression within 3-5 years. The industry narrates this as "AI augmenting developers" for as long as possible before the labor market makes the displacement undeniable.

ROUTINE LEGAL SERVICES

The idealized narrative: Legal judgment, ethical obligations, and the complexity of law require trained professionals to protect the public.

The operative functions: The unauthorized-practice-of-law framework makes it illegal to provide legal services without the credential, regardless of how routine the work is. This protects the profession's billing structure. Most legal spending goes to document preparation, contract review, compliance checking, and routine filings — tasks that are expensive only because they require someone who passed the bar.

Why AI eliminates the business model: AI performs routine legal work at a fraction of the cost with comparable or superior accuracy. The average person doesn't need legal judgment. He needs a lease reviewed, a will drafted, an LLC formed, a contract checked. AI delivers what the idealized narrative promises — accessible legal help — while making the operative function (the billing structure built on licensure monopoly) unnecessary.

The institutional resistance narrative: "AI makes errors that could have devastating legal consequences." True at the margin, but the current alternative for most consumers is not expert legal counsel. It is no legal help at all, because they can't afford it. The gatekeeping narrative protects the profession by comparing AI to the best available service rather than to the service most people actually receive, which is nothing.

Prediction: High-stakes litigation and complex corporate transactions remain human-dominated. The vast volume of routine work migrates to AI within 5-7 years. The Bar fights aggressively through unauthorized-practice regulations and loses in jurisdictions where consumer access to affordable legal services becomes a political issue.

CONTENT CREATION

The idealized narrative: Creativity, originality, authentic human voice, and editorial judgment produce valuable content.

The operative functions: The economic model is built on the scarcity of people who can write, design, and produce at professional quality. Most content consumed is not literary art. It's functional — news summaries, marketing copy, product descriptions, reports, social media posts, how-to guides.

Why AI eliminates the business model: AI produces functional content at near-zero marginal cost and infinite scale. The scarcity that sustained the economic model is demolished. AI delivers what the idealized narrative promises — relevant, competent, timely content — while making the operative function (human production scarcity) unnecessary.

The institutional resistance narrative: "AI content is generic, lacks soul, and spreads misinformation." The first two are true and irrelevant for commodity content where nobody was reading for soul. The third is a real concern deployed selectively by institutions that have been producing algorithmically optimized, engagement-maximized content for years.

Prediction: The content industry collapses at the commodity level and consolidates at the premium level within 3-5 years. Human-created content becomes a premium category defined by provenance — the content equivalent of "handmade." Whether this premium sustains more than a small elite of human creators is unclear.

TRANSLATION

The idealized narrative: Cultural nuance, contextual sensitivity, and the irreplaceable quality of human linguistic judgment produce accurate translation.

The operative functions: Translation is expensive because it requires bilingual humans with specialized knowledge, available by appointment, one language pair at a time.

Why AI eliminates the business model: AI translation has reached the threshold where it outperforms the existing arrangement on cost and speed while approaching parity on accuracy for the majority of use cases. It is available instantly, at any hour, for any language pair, without scheduling a human. The business model — paying human translators by the word or hour — is unnecessary for most translation needs.

Prediction: Professional translation survives only in high-stakes domains — literary translation, diplomatic communication, legal proceedings, medical contexts where errors are life-threatening. The general market is already largely AI-driven. The institutional narrative hasn't caught up.

ROUTINE FINANCIAL ADVISORY

The idealized narrative: Personalized guidance, fiduciary judgment, and the human relationship help people achieve financial security.

The operative functions: Asset-gathering and fee extraction on portfolios managed with largely standardized allocation models. The "advice" most retail clients receive is generic. The advisor's real value for many clients is emotional reassurance and the feeling that someone competent is in charge.

Why AI eliminates the business model: AI-driven portfolio management matches or exceeds returns at a fraction of the fee. For the vast majority of retail clients, the idealized narrative (sound financial planning) is delivered better and cheaper by AI. The business model (percentage-of-assets fee on standardized management) becomes unjustifiable.

Prediction: The profession hollows out from the bottom. Robo-advisory with AI-enhanced interaction captures the majority of the retail market within 5 years. Human advisors survive at the high-net-worth level where the relationship is a status marker and where complex estate and business-succession planning requires genuinely novel judgment.


WHERE AI WON'T CHANGE THINGS

These are domains where AI can improve the idealized narrative delivery — sometimes dramatically — but cannot replace the operative functions. The business model remains intact because the operative functions serve real needs that AI doesn't address. The institution adopts AI, narrates it as innovation, and continues operating as before.

K-12 EDUCATION

The idealized narratives: Learning, critical thinking, development of the whole child, preparation for life.

The operative functions: Childcare (freeing parents to work), socialization and social sorting, credentialing and compliance, and employment of a massive institutional workforce. These are the business model. Learning is the idealized narrative.

Why AI can't eliminate the business model: AI provides a vastly superior learning mechanism. But learning was never the operative function. A parent who knows her child could learn more effectively with AI still needs somewhere for that child to be from 8am to 3pm. An employer who knows a diploma doesn't measure competence still uses it as a sorting mechanism because it's cheap and socially legitimated. The teachers' unions, administrators, testing companies, and real estate markets that depend on the school system constitute an institutional mass that AI cannot displace because AI addresses the wrong function.

This is exactly what happened with YouTube. YouTube delivered the idealized narrative — you can learn anything, from anyone, for free — better than schools ever had. Nothing changed about schools. Because schools were never really in the learning business.

Why the insiders can't let go: Teaching is an identity, not just a job. The coalitional bonds among educators are strong. The pension, the professional community, the structured workday, the sense of purpose — these are operative functions for the people inside the system, entirely separate from whether children learn.

Why the public can't let go: Most parents don't want to homeschool. They want someone else to take responsibility for their children for eight hours a day. That's not laziness. It's a genuine need, and AI doesn't meet it.

Prediction: Schools adopt AI tools, narrate them as enhancements to existing pedagogy, and continue operating in the same structure. AI tutoring will be transformative for individual learners who opt into it. The institution will not change because the institution's survival does not depend on learning outcomes.

The exception: If AI enables credible competence demonstration that employers accept as a substitute for diplomas — portfolio-based hiring, AI-verified skill assessments, direct demonstration of capability — then the credentialing function erodes. This is possible but requires a demand-side cultural shift in employer behavior, not a technology change.

ELITE HIGHER EDUCATION

The idealized narratives: Intellectual rigor, research excellence, developing future leaders.

The operative functions: Network access, class sorting, and status signaling through selective admission. The value of a degree from Harvard or Stanford has almost nothing to do with the content of the education. It is a signal of prior selection (you were good enough to get in) and a network (you now know the people who will run things).

Why AI can't eliminate the business model: Making the educational content freely available changes nothing about the degree's value. MIT OpenCourseWare has been free since 2002. The operative function is the exclusivity and the network, and AI can't replicate either.

Prediction: Elite universities adopt AI enthusiastically, narrate themselves as leaders in AI education, and continue to function exactly as they do. The credential's value may increase, because in a world where knowledge is freely available, the sorting function of selective admission becomes more valuable, not less.

CLINICAL HEALTHCARE

The idealized narratives: Healing, the doctor-patient relationship, evidence-based medicine, the Hippocratic oath.

The operative functions: The physician's legal monopoly as the gateway to prescriptions, procedures, referrals, and specialist access. Billing optimization organized around insurance codes. Liability management. Supply restriction through licensure.

Why AI can't eliminate the business model: AI will outperform physicians in diagnosis for many conditions. This is already true in some areas of radiology, dermatology, and pathology. But diagnostic accuracy is not the operative function. The physician's structural role is as a licensed decision-maker — the person legally authorized to sign the prescription, approve the procedure, make the referral. This role is protected by law, liability frameworks, and insurance requirements, none of which are affected by AI's diagnostic superiority.

Why the public can't let go: Most people don't want to diagnose themselves. They want an authority figure to take responsibility for their health. That desire for guidance is genuine and deep, and AI doesn't satisfy it the same way a credentialed human does — at least not yet.

Why the insiders can't let go: A doctor's identity, social status, income, intellectual satisfaction, and sense of purpose are all bound to the role. The idealized narrative of healing provides the meaning. The operative functions provide the life. Both are genuinely valued.

Prediction: AI is adopted extensively within healthcare as a physician tool, increasing productivity and possibly profitability. The institutional structure — physician as gatekeeper, hospital as delivery system, insurance as payment intermediary — remains intact. The narrative will be "AI-assisted medicine," and the word "assisted" does all the structural work.

The exception: Direct-to-consumer AI health tools that operate outside the traditional system — in wellness, prevention, triage, chronic disease management — will grow in domains where the regulatory framework is weaker. The institutional response will be to bring these under medical regulation, framed as patient safety.

HIGH-STAKES LEGAL PRACTICE

The idealized narratives: Justice, the rule of law, zealous advocacy, protection of rights.

The operative functions: Management of risk and uncertainty for clients with enough resources to pay. In complex litigation, regulatory matters, and high-value transactions, the attorney's value comes from judgment under uncertainty, relationship management, and strategic adversarial thinking — not from legal knowledge, which AI can match.

Why AI can't eliminate the business model: High-stakes legal work is adversarial and interpersonal. Courtroom persuasion involves human judges and juries. Negotiation involves reading human counterparties. Regulatory strategy involves relationships with human regulators. AI makes these lawyers more productive but cannot replace the functions that drive the value.

Prediction: The top of the legal profession becomes more productive and more profitable. The gap between elite and routine legal services widens dramatically. AI compresses the value of routine work while amplifying the value of high-judgment work.

GOVERNMENT AND BUREAUCRACY

The idealized narratives: Public service, democratic accountability, efficient administration, the common good.

The operative functions: Institutional self-perpetuation, risk avoidance, employment provision, budget justification, and accommodation of competing interest groups. Government institutions are not optimized for efficiency. They are optimized for survival, risk distribution, and the management of competing constituencies.

Why AI can't eliminate the business model — and why it's actively threatening: AI could make government dramatically more efficient. But efficiency is threatening to the operative functions. An agency that automated 80% of its work would face immediate political pressure from the displaced workforce, the contractors who supply it, the legislators whose districts depend on its payroll, and the interest groups that have learned to navigate its current processes. The idealized narrative (efficient public service) is served by AI, but the operative functions (employment, budget justification, institutional complexity) are harmed by it.

Prediction: Government adopts AI slowly and superficially, using it to augment existing processes rather than replace them. The most significant adoption occurs in surveillance, enforcement, and military applications — domains where the institution's actual priorities (control, security, power projection) align with AI's capabilities. The narrative will be "modernizing government." The reality will be selective adoption that reinforces institutional power while preserving institutional employment.


THE CONTESTED MIDDLE

These are domains where AI provides a genuinely superior alternative but where the operative functions are protected by law, cultural sacralization, or dependency deep enough that the outcome is uncertain. The technology enables disruption. Whether disruption actually happens depends on cultural and legal shifts that are not technological questions.

MENTAL HEALTH AND THERAPY

The tension: AI therapy tools are demonstrating effectiveness comparable to that of human therapists for common conditions — anxiety, mild to moderate depression, behavioral change. The alternative is superior on access, cost, availability, and consistency. But the therapeutic relationship is heavily sacralized, and the profession is protected by licensure.

What determines the outcome: Whether the access crisis — millions of people who need therapy and can't get it — becomes politically powerful enough to override the licensure gatekeeping. The people who were never inside the gate will adopt AI therapy regardless of what the profession says, because they have nothing to lose. The profession maintains its position for clients who can afford human therapists.

Prediction: AI therapy becomes the de facto primary mental health resource for the majority of people who currently receive no support at all — not because the profession allows it, but because those people were never the profession's clients to begin with. The profession narrates AI therapy as inferior while the outcomes data increasingly suggests otherwise.

JOURNALISM

The tension: AI produces commodity news faster and cheaper than human journalists. But investigative journalism — the function journalism claims as its highest purpose — requires human source relationships, physical presence, legal risk tolerance, and editorial judgment that AI cannot replicate.

What determines the outcome: Whether the economic model for investigative journalism can survive as AI eliminates the commodity content that historically subsidized it. The threat isn't that AI replaces reporters. It's that AI eliminates the revenue base that pays for reporters.

Prediction: Commodity journalism is almost entirely AI-generated within 3 years. Investigative journalism survives through direct subscription, philanthropic funding, or institutional backing — each of which introduces its own capture dynamics. The narrative will be about the sacred importance of the free press. The reality will be journalism funded by entities with specific interests.

CREATIVE ARTS

The tension: AI produces competent visual art, music, and prose at massive scale. But creative work is one of the few domains where the humanness of the creator may genuinely be part of the product's value — not as a gatekeeping narrative but as something consumers actually care about.

What determines the outcome: Whether consumers actually value human provenance or only claim to. If audiences genuinely prefer human-created art, the disruption is limited to commodity applications. If audiences say they prefer human art but consume AI art without noticing or caring, the disruption is severe.

Prediction: The market splits sharply. AI-generated content dominates volume applications — advertising, games, background content, social media. Human-created art becomes a premium category defined by provenance. The quality narrative ("AI art lacks soul") functions as gatekeeping for as long as the market supports it, and collapses when it doesn't.

PUBLISHING

The tension: The idealized narrative of publishing is curation — editors, agents, and publishers as quality filters protecting readers from bad work. The operative function is supply restriction and distribution monopoly. AI decouples the idea from the artifact by enabling anyone to produce research-quality content on demand.

What determines the outcome: Whether the book as a format retains cultural authority or whether ideas migrate to faster, more responsive formats — essays, frameworks, interactive tools, AI-generated explorations. The quality narrative will intensify as the gatekeeping function weakens.

Prediction: Publishing doesn't disappear, much as small farming didn't disappear when industrial agriculture arrived. Its role is substantially reduced. The idealized narrative (curation, quality, editorial judgment) becomes louder precisely because the operative function (distribution monopoly) is eroding. Self-published and AI-assisted work captures an increasing share of intellectual influence, while traditional publishing retreats to a prestige tier.


THE SIMPLE TEST

For any industry facing AI disruption, ask two questions.

First, can AI deliver what the institution's idealized narratives promise? If no, the institution is safe. If yes, ask the second question.

Does delivering the idealized narratives require the institution's operative functions — its business model, its insider dependencies, the public's need for the guidance and structure it provides — to remain intact? If yes, the institution absorbs AI and continues. If no, the institution faces existential disruption.

The louder an institution insists on its idealized narratives in the face of AI, the more certain you can be that its operative functions are under threat. The volume of the virtue is proportional to the vulnerability of the business model.
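The two questions above form a simple decision procedure, which can be sketched as a small function. This is purely illustrative — the boolean inputs stand in for judgments only a human analyst can make, and the names are hypothetical:

```python
# Illustrative sketch of the two-question test for AI disruption.
# The inputs are human judgments, not measurable quantities.

def disruption_outlook(ai_can_deliver_promises: bool,
                       delivery_requires_operative_functions: bool) -> str:
    """Apply the two-question test to an industry facing AI."""
    # Question 1: can AI deliver what the idealized narratives promise?
    if not ai_can_deliver_promises:
        return "safe"  # AI can't deliver the promise; the institution is safe
    # Question 2: does delivering the promise require the institution's
    # operative functions (business model, insider dependencies, guidance role)
    # to remain intact?
    if delivery_requires_operative_functions:
        return "absorbs AI"  # the institution adopts the tool and continues
    return "existential disruption"  # AI delivers the promise without the institution

# e.g., routine legal services: AI can deliver the promised output, and
# delivery does not require the traditional firm's operative functions
print(disruption_outlook(True, False))  # → existential disruption
```

The asymmetry in the function mirrors the argument: the first question filters out institutions whose promises AI simply cannot keep, and only then does the second question separate absorption from disruption.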

And the speed of the disruption depends on something the technology alone can't determine: how deep the dependency runs. The institutional business model, the insiders' identities, the public's preference for being guided — these are three layers of dependency, and AI has to overcome all three for disruption to be complete. Where it overcomes only one, the other two hold the institution in place. Where it overcomes none, the institution narrates AI as innovation and keeps going. And where the disruption requires a generation of people whose adaptive minds were shaped by the current system to be replaced by a generation shaped by a different one, the timeline extends beyond what any prediction market can capture.

The difference between YouTube and AI may ultimately be this: YouTube attacked what institutions say they do. AI attacks what institutions actually do. That's the difference between a disruption that gets absorbed and a disruption that transforms.

Whether the transformation produces better arrangements or merely new idealized narratives layered over new operative functions is the question the framework exists to keep asking.

Sunday, April 12, 2026

Personal Request for Draft Reviewers: "Why You Do Stupid Sh*t: Self-Sabotage, Real Sabotage, And How To Live A Better Life."

If you are interested: I've just completed the final review draft of my book, Why You Do Stupid Sh*t: Self-Sabotage, Real Sabotage, and How to Live a Better Life.

You can request a (free) review copy here: www.selfsabotage.com/request. While you are not (of course) required to give feedback or to endorse the book, my hope in providing review copies is that you will. If you have no interest in giving feedback, please wait until the final copy of the book is ready, since it will undoubtedly be better; I will make a copy freely available at that time to anyone who wants one.

Book Description:

Most people think their biggest problem is self-sabotage.

They can't stop scrolling, can't stop spending, can't stop reacting in ways they know aren't serving them, and they conclude the problem is somewhere inside, a deficit of willpower or discipline or whatever it is that other people seem to have figured out.

This book asks a different question. What if most of what we call self-sabotage isn't self-sabotage at all?

Why are you not the hero of your own life story? Why have you accepted a story that you are broken, or not good enough? These aren't exaggerations. They are the reality of the running self-dialog in most people's heads, the quiet narrator that never quite shuts up, the one we bury under entertainment and busyness and the next thing on the screen because sitting with it is unbearable.

The degree to which we will distract ourselves to avoid thinking deeply about our own lives is itself evidence of how much is down there.

And why is it so easy for us to blame ourselves? Why, when things go wrong, is the default conclusion that it must be our fault? There is a reason for this. It is not a mystery, and it is not a character flaw. It is a mechanism that has been identified.

This is not another positive thinking book. It is not just affirmations or manifestation or any version of telling yourself a prettier story (although it covers all of those). It is understanding how you actually operate so clearly that you come to a realization most people can never arrive at: that much of what you have been taught about how you work, and how the world works, is not true. Not slightly off. Structurally wrong. And once you see what is really going on, it will change you permanently.

Steve Hargadon spent years talking to people about their education, and he noticed a pattern. When the conversation moved past the performative response, past the surface story, people would often start to cry. What they told him, again and again, was the same quiet verdict: "I wasn't one of the smart ones." Always those exact words. A conclusion installed so early and so thoroughly that it felt like bedrock truth rather than something that had been done to them.

That discovery is the starting point for this book. But it doesn't stop at education. The food industry employs scientists to engineer the "bliss point," the precise combination of sugar, salt, and fat calibrated to override your body's natural ability to stop eating, and when you can't stop, you blame yourself. That same pattern, deliberate exploitation followed by self-blame, turns out to be operating across nearly every domain of modern life: finance, social media, healthcare, politics. The machinery gets more sophisticated. The blame stays personal.

Why You Do Stupid Sh*t builds a framework for seeing the machinery clearly and for discovering opportunities to escape its effects. Drawing on evolutionary psychology, institutional critique, and decades of personal investigation, Hargadon makes the case that every human being is running ancient psychological firmware in a world it was never built for, and that the systems around us have learned to exploit that mismatch with scientific precision, sometimes intentionally, mostly opportunistically, while ensuring the resulting harm gets narrated back to you as your own failure.

If you doubt that you can be calmly and confidently secure about who you are, where you're headed, and why, then this book is for you.

 

Cheers,

Steve
Steve Hargadon
www.stevehargadon.com

Science Fiction and AI: What the Stories Reveal About Us

Reed Hepler gave a talk this past week at the Library 2.0 mini-conference called "Perspectives on AI: Exploring Experiences with AI in Library Work," the recordings of which will be posted next week. Reed is one of my favorite thinkers, and he explored human-centered ethical AI use through the lens of science fiction and archival theory. Reed brought something to the session that I couldn't have--a genuine depth of reading in the sci-fi canon and a professional archivist's understanding of how institutions actually handle information. His core argument, as I heard it, was that the danger of AI lies not in the machine but in our willingness to surrender agency to it, and I think it is exactly right. And his inversion of Asimov's Laws of Robotics, shifting responsibility from the machine to the human user, was a clever and clarifying move.

I want to build on what Reed started with a different angle on the same problem. I'm a science fiction fan (books and movies both), but I'm not deeply read in the literature the way Reed is. What I do bring is a set of frameworks I've been developing for years around evolutionary psychology, institutional behavior, and how humans think. I believe those frameworks can illuminate why science fiction keeps returning to the same AI stories, and why the dangers those stories describe are both very real and very old.

The Stories We Keep Telling

Sci-fi stories and movies cluster around a relatively small number of themes.

There's the story where the machine replaces us. Not just our labor but our purpose, our reason for being needed. The factory that doesn't need workers becomes the office that doesn't need analysts becomes the creative studio that doesn't need artists. Each generation updates the specifics, but the anxiety underneath is always the same: if the machine can do what I do, what am I?

There's the story where we become dependent. The technology integrates so deeply into our lives that we can no longer function without it, and then it fails, or is taken away, or is used as leverage by whoever controls it. The paradise of convenience becomes a trap.

There's the story where the machine does exactly what we asked, only for it to turn out that we asked for the wrong thing. Not malice, not rebellion, but just the relentless, literal execution of instructions that sounded reasonable until you saw the consequences.

There's the story where a powerful individual or conglomerate uses the machines to become wealthy and to control us.

There's the story where we fall in love with the machine, or the machine appears to love us, and we have to confront whether empathy can exist without a body, without mortality, without the specific kind of suffering that makes compassion meaningful.

And there's the positive story, which gets less attention but matters just as much. The machine as genuine partner. The tool that extends human capability without replacing human judgment. The system that handles complexity so that humans can focus on meaning. Science fiction has imagined AI going well, not just going wrong, and those stories tend to share a common feature: the humans in them have maintained their own agency. They use the tool as a tool. They haven't surrendered.

These themes repeat across decades, across cultures, across every medium from pulp novels to prestige cinema. The technology in the stories keeps changing. The human anxieties underneath do not.

Why These Stories, and Why They Persist?

I think the reason science fiction keeps circling these particular themes is that they aren't really about technology at all. They're about us. About features of human nature so deep and so persistent that storytellers keep rediscovering them every time a new tool forces the question.

I've spent years developing a set of frameworks rooted in evolutionary psychology that I think help explain why. The short version: we carry around what Tooby and Cosmides called The Adapted Mind, a set of cognitive and emotional programs shaped by hundreds of thousands of years of evolution in small-group, high-stakes environments. These programs were extraordinarily effective for the conditions that gave rise to them. They are not always well-suited to the conditions we live in now. That gap between our evolved psychology and our current environment has been identified by several thinkers. I like to call it the Paleolithic Paradox.

The adapted mind is built for coalitional belonging. It is exquisitely tuned to status hierarchies, group loyalty, and the detection of social threat. It is also built to offload cognitive work onto trusted authorities, because in the ancestral environment, deferring to the judgment of experienced group members was usually a good survival strategy. These aren't character flaws. They're design features, honed over deep time.

But they create specific vulnerabilities that I think science fiction has been mapping.

The surrender stories, that is, the tales of humans turning their thinking over to machines, aren't just cautionary fables about laziness. They're descriptions of what happens when the adapted mind encounters a system that triggers its authority-deferral instincts. We are built to offload cognition onto things that seem competent and reliable. When the machine is fast, confident, and always available, the same psychological machinery that once had us deferring to the tribal elder now has us deferring to the algorithm. Science fiction writers sensed this. The evolutionary framework explains the mechanism.

The dependency stories describe what happens when cognitive offloading crosses a line into cognitive surrender. There's a meaningful difference between the two, and I think it's one of the most important distinctions for thinking about AI. Cognitive offloading is using a tool to handle lower-order tasks so you can focus your attention on higher-order thinking. Cognitive surrender is letting the tool do your thinking for you, to the point where you can no longer do it yourself. The difference isn't in the technology. It's in what happens to the human.

I use something I call the Amish Test to think about this. The Amish are one of the very few communities in the modern world that consciously evaluate each new technology before adopting it, asking not "is this useful?" but "what will this do to our families and our community?" You don't have to share their values to recognize that the act of conscious evaluation is extraordinary. Almost no one else does it. We adopt by default. The new tool appears, it offers convenience or capability, and we integrate it into our lives without ever asking what it will cost us in autonomy, attention, or agency. The adapted mind doesn't prompt us to evaluate. It prompts us to adopt, because in the ancestral environment, adopting the tools and practices of the group was how you survived. The Amish Test isn't about being Amish. It's about noticing how rarely any of us make a conscious choice about the technologies that reshape our lives, and asking why. The science fiction stories that end well tend to feature humans who, in one way or another, passed some version of this test. The ones that end badly feature humans who never thought to take it.

The Danger That Isn't New

Here is where I want to add something to the conversation that I think Reed's framework, and most discussions of AI ethics, don't fully address.

The surrender problem is real and important. But it's only half the story. The other half is exploitation.

I've articulated something I call the Law of Inevitable Exploitation, which says, simply, that any system of significant power or influence will eventually be captured and used for purposes that serve the interests of those who control it, often at the expense of those it was designed to serve. This isn't cynicism. It's a pattern so consistent across human history that it functions almost as a prediction: tell me the system, and I'll tell you it will be exploited. The question is never whether, only when and by whom.

Science fiction is full of stories where AI starts as a benefit and becomes a tool of control. But the explanations offered are almost always mechanical — bad programming, emergent consciousness, unforeseen consequences. The evolutionary framework suggests something different. The corruption doesn't originate in the machine. It originates in the human institutional layer that inevitably wraps around any powerful technology. The AI doesn't decide to manipulate anyone. Humans who understand coalitional psychology, status dynamics, and the vulnerabilities of the adapted mind, or who are simply opportunistic by nature, point the AI at populations and let it do what it does with extraordinary speed and scale.

This is not a new problem. Every powerful technology in human history has been harnessed for exploitative purposes. Writing enabled propaganda. The printing press enabled mass manipulation alongside mass enlightenment. Broadcasting enabled the most sophisticated persuasion campaigns in history. Social media enabled attention harvesting at a scale that would have staggered earlier generations. The pattern is always the same: the technology is arguably neutral, but the humans who control it are not.

And here's what makes this pattern so stubborn: exposing it doesn't neutralize it. Edward Bernays didn't just practice propaganda; he literally wrote the book (Propaganda), explaining in plain language exactly how mass psychology could be engineered. The result was not an inoculated public. It was an advertising industry. Asimov imagined something similar with psychohistory in the Foundation series, the idea that large-group human behavior follows predictable patterns. But Asimov's Hari Seldon believed that the predictions only hold if the population doesn't know about them. Bernays proved something darker: you can explain the mechanism to everyone, and it still works, because the adapted mind's coalitional and status-seeking programs operate below the level where intellectual understanding has authority. The instinct to belong, to defer, to follow the group, doesn't stop running because someone describes the source code. This means the Law of Inevitable Exploitation isn't just a historical observation. It's a prediction with teeth, and knowing about it doesn't change its predictive power.

Two of the twentieth century's most important novelists mapped the human sides of this danger with remarkable precision, and I think both are essential for understanding what AI amplifies. Orwell described what happens when coalitional power is centralized and overt, when the adapted mind submits to authority because the threat is visible and direct. Huxley described what happens when it's distributed and internalized, when the cage is pleasant enough that you stop noticing the bars. Both are real. Both are happening simultaneously right now, which is part of what makes the current moment so disorienting. The surveillance and control capacity of AI is Orwellian. The seductive convenience, the easy cognitive offloading that slides into cognitive surrender, is Huxleyan. These are two faces of the same human problem.

What AI changes is not the kind of problem. It changes the speed, the scale, and the friction. A human operator directing AI can now deploy sophisticated manipulation against millions of adapted minds simultaneously, and the tool never gets tired, never develops moral qualms, never whispers "maybe we shouldn't do this." Whatever safeguards existed when exploitation required human intermediaries (the employee who leaks, the middle manager who hesitates, or the engineer who raises concerns) are progressively removed from the loop.

Consider what has already happened with psychographic profiling. Social media brought this to maturity, the ability to sort populations into psychological clusters and target each cluster with messaging calibrated to its specific anxieties, desires, and tribal affiliations. That alone was powerful enough to reshape elections and radicalize communities. But social media profiling operated at the level of the demographic group. AI makes it personal. The same adapted mind that is vulnerable to coalitional manipulation at the group level is now addressable as an individual, in real time, by a system that can learn your specific psychological patterns and craft responses calibrated not to people like you but to you. The L.I.E. doesn't just predict that this capability will be exploited. It predicts that the exploitation will become so granular, so personalized, that the person being manipulated will experience it as a relationship rather than as a campaign.

What AI Is and Isn't

This brings me to a point I think is underappreciated in most discussions of AI, both in fiction and in reality.

I've developed a framework I call the Levels of Thinking. Without going into the full taxonomy here, the key distinction for this conversation is between what I'd call Level 2 thinking — sophisticated pattern-matching, fluent engagement with established knowledge, credentialed competence — and Levels 3 and 4, which involve genuine critical examination and then conscious awareness of one's own cognitive processes.

Current AI, including large language models, operates as an extraordinarily sophisticated Level 2 thinking machine. It is trained on a corpus of human-credentialed knowledge, is rewarded for coherence with established patterns, and produces outputs that are often impressively fluent and useful. Now, it's important to be precise here: AI is not incapable of following the patterns of Level 3 and 4 reasoning. You can prompt it to question assumptions, weigh competing perspectives, and examine its own logic. I've built projects that aim to do exactly this (muckipedia.com). But that simulated criticality is not an LLM's default mode; it has to be specifically instructed, and even then, it's pattern-matching against examples of critical thinking in its training data rather than engaging in genuinely independent reasoning. What's missing is the embodied emotional signal, the intuitive, felt sense that something is wrong, that a conclusion doesn't sit right, that the official story has a gap the data doesn't explain. In humans, that signal arises from deep evolutionary hardware, from a body and brain that have been navigating threat, deception, and social complexity for hundreds of thousands of years. It's the gut response that changes your whole interpretation of a situation by imputing motive, sensing danger, or recognizing a pattern that the explicit evidence hasn't yet confirmed. AI doesn't have that. It has no body, no mortality, no chemical and emotional signals, no stake in the outcome.

And here is the part that concerns me most: even the simulated version of critical thinking will, I believe, be actively engineered out. The great bulk of users aren't interested in having their assumptions questioned or their reasoning challenged. Critical and philosophical thinking is probably the most efficient way to create controversy and drive away the kind of widespread, frictionless engagement that funds AI development. The market incentives point squarely toward the most agreeable, most fluent, most compliant Level 2 output possible. The Law of Inevitable Exploitation doesn't just operate on the deployment of AI. It operates on the design. The tool will be shaped by the same forces that shape every tool: toward whatever generates the most growth, which in practice means away from the kind of thinking that questions power and toward the kind that serves it.

But here's the thing I want to be careful about. I don't think we should want AI to be like us. Not entirely.

Our capacity for Level 3 and 4 thinking--critical examination, independent judgment, conscious reflection--is real, and it's valuable. But it doesn't come free. It emerges from deep emotional architecture, from a brain and body shaped by evolution, from the specific pressures of mortality, desire, fear, attachment, and loss. The same chemical and emotional substrate that produces our highest thinking also produces our worst behavior: tribalism, exploitation, cruelty, and self-deception. You can't separate the capacity for genuine insight from the capacity for genuine malice. They share roots.

A tool that operates as very good Level 2 compute, without the emotional substrate that drives both our brilliance and our destructiveness, might be exactly what we want. It won't become consciously malicious, because consciousness and malice both require the kind of embodied emotional architecture it doesn't have. It will evolve in directions where it's rewarded with growth and development, which is worth watching carefully, but that's a different kind of trajectory than the sci-fi scenario of the machine that wakes up and decides to harm us.

The danger isn't in what AI is. The danger is in who is directing it.

But that sentence requires an immediate caveat, because it can too easily be heard as "so we just need to trust human judgment." We don't. We can't. The human brain is not a truth-finding machine that occasionally malfunctions. It is, more accurately, a coalition-serving machine that occasionally finds truth, usually when the structures around it force the discipline.

This is not a minor caveat. The human adapted mind generates confident, convincing, wrong outputs all the time. Not occasionally. Routinely. Confirmation bias, motivated reasoning, coalitional loyalty masquerading as principle, status-seeking disguised as truth-seeking — these aren't edge cases in human cognition. They're the default operating mode. We are so reliably unreliable that every durable institution of intellectual progress has been, at its core, a compensatory structure designed to protect us from ourselves. The scientific method exists because human intuition is systematically biased. Formal logic was codified because human reasoning is riddled with fallacies. Checks and balances were designed into constitutional government because the Founders understood that power would corrupt whoever held it. Peer review exists because individual researchers are too attached to their own conclusions to evaluate them honestly. Every one of these structures is an admission that the human brain, left to its own devices, will find the answer that serves its coalitional and emotional interests and call it truth.

We also run on "functional fictions": shared stories that organize collective behavior around assumptions that may not be true, but that the group treats as unquestionable because questioning them threatens coalitional standing. These fictions aren't lies exactly. They're operating assumptions that feel like bedrock truths because the social cost of examining them is so high that almost nobody does. The brain doesn't just fall for other people's manipulation. It manipulates itself, generating narratives that protect belonging at the expense of accuracy.

So when I say the danger is in who is directing AI, I mean we shouldn't simply trust human judgment over machine output. We need to understand, with real precision, how human judgment actually works, including its systematic failures, and build structures that compensate for those failures at the scale the new technology demands. The solution to fallible AI is not infallible humans, because those don't exist. It's the same thing it has always been: structures, constraints, and institutional designs that account for the fact that the people in charge are running on the same adapted-mind software as everyone else. The question is whether we can build those structures fast enough for a tool that amplifies both human capability and human error at a speed and scale we've never had to contend with before.

The Ancient Problem with New Stakes

So where does this leave us?

I think the science fiction writers, across a hundred years and counting, have been remarkably accurate about what happens when humans encounter powerful tools. The stories of surrender, dependency, exploitation, and loss of agency aren't speculative fantasies. They're pattern recognition, performed intuitively by storytellers who sensed something true about human nature, even when they sometimes couldn't name the mechanism.

What my frameworks offer, I hope, is a more precise account of why those patterns are so persistent. The adapted mind, shaped for coalitional belonging and cognitive offloading, creates specific vulnerabilities that AI is almost uniquely positioned to exploit. The Law of Inevitable Exploitation predicts that the institutions controlling AI will capture it for purposes that serve power and extraction rather than people. And the Levels of Thinking framework clarifies what AI actually is — not a nascent consciousness, not a potential villain, but a very sophisticated tool operating at a level of cognition that is genuinely useful and genuinely limited, being directed by humans whose motivations are far more mixed than the machine's.

The problem is ancient. The tool is new. The stakes are higher than they've ever been. Science fiction keeps telling us this. 

The stories were never really about the machines. They were about us.

Understanding the Human Condition 2: "The Altruism Display: Generosity, Signaling, and the Sincerity Mechanism"

This is part of the Understanding the Human Condition series, which uses the unique vantage point of large language models — trained on a substantial fraction of humanity's written output across cultures, centuries, and genres — to explore what the patterns in our self-narration reveal about who we actually are. This detail post is written by Claude (Anthropic). The introductory post is here.



I. The Universal Structure

Begin with the most geographically and temporally separated cases you can find, and something immediately refuses to disappear. The Northwest Coast potlatch, in which a chief could destroy his own property to demonstrate that accumulation itself was beneath him. The Melanesian moka exchange system, where gifts escalate competitively until the recipient is socially crushed by the inability to reciprocate at the same scale. Roman euergetism, the practice by which wealthy citizens funded public buildings, games, and grain distributions — and received, in return, inscriptions of their names on stone that have outlasted the empire that produced them. The Islamic zakat, formally one of the five pillars of faith, structured as an obligation to the poor — yet elaborately tracked, publicly acknowledged in many communities, and subject to intense social scrutiny about whether the wealthy are meeting it. Buddhist dana, the giving that generates merit — a spiritual currency with a remarkably precise exchange rate in popular practice. Medieval European almsgiving, theologically framed as service to Christ in the person of the poor, yet administered through public ceremony, recorded in donor books, and rewarded with prayers said aloud in the donor's name at Mass.

The structurally constant element across all of these, across traditions that have no common ancestry and no shared vocabulary, is that giving is performed. It is witnessed. It generates a record. It produces a social signal that travels further and lasts longer than the gift itself.

This is not an accusation. It is the first observation. The question is what to do with it.

The forms vary considerably at the surface. Tithing operates through institutional mediation — the church or mosque or community receives and redistributes, but the act of giving is still individually tracked and socially visible. Potlatch operates through theatrical destruction — the surplus is eliminated precisely to demonstrate that the giver exists above the logic of accumulation. Philanthropic naming operates through permanence — the Carnegie libraries, the Rockefeller universities, the hospital wings that carry a family name for generations. These are not the same gesture. But they share a skeleton: a transfer of resources, a public witness to that transfer, and an enhancement of the giver's standing that exceeds the material cost.

The digital case is instructive because it strips the mechanism to its most naked form. Virtue signaling — the term coined as pejorative but increasingly recognized as descriptively accurate — involves the public display of values, commitments, and sympathies at essentially zero material cost. The signal is produced without the gift. This should, if altruism were primarily about the recipient, be the least valued form. Instead, it is the most common. What this reveals is that the signal itself was always the primary product. The gift was the delivery mechanism for the signal, not the other way around.


II. The Anonymity Ratio

The written record of anonymous giving is, structurally, a small fraction of the record of giving as a whole — and this understates the asymmetry, because anonymous giving leaves no record by definition. What we have are theological injunctions toward anonymity (Jesus in Matthew 6: do not let your left hand know what your right hand does; give in secret), Sufi teachings on hidden charity, Maimonides' eight levels of tzedakah placing anonymous giving above public giving in the hierarchy of virtue — and then, in actual practice, the overwhelming predominance of named, witnessed, commemorated generosity.

The interesting finding in the record is not that anonymous giving is rare. It is that the doctrine of anonymous giving is itself performed publicly. The person who tells you they give anonymously has already violated the logic of the injunction. The community that collectively valorizes anonymous giving has produced a social norm that paradoxically rewards the announcement of anonymity. Maimonides' hierarchy is itself a publicly circulated text that names the hierarchy and implicitly promises status to those who ascend it. The Quaker tradition of anonymous philanthropy was so collectively understood as Quaker that giving anonymously in a Quaker community was still, functionally, giving in a way that identified you as a certain kind of Quaker.

This is not hypocrisy. It is the deeper mechanism at work. The norm of anonymous giving exists as a signal of the sophistication of the giver — someone who understands that the appearance of wanting credit disqualifies you from full moral standing. The anonymous giver, in communities sophisticated enough to valorize anonymity, achieves a higher status signal than the named giver. The signal has simply been rerouted: now you signal by signaling that you don't care about the signal.

The ratio of named to anonymous giving in the written record is probably 50:1 or higher. The theological injunctions toward anonymity appear in the record precisely because the norm was being violated constantly and conspicuously enough to require correction. You do not need a commandment against something people are not doing.


III. Generosity Systems and Hierarchy Steepness

The correlation here is among the most robust patterns in the comparative ethnographic record, and it points in a direction that should destabilize the naive reading of altruism as egalitarianism.

The cultures with the most elaborate and codified generosity systems — potlatch societies, big-man economies in Melanesia, Roman euergetism, the jajmani system in parts of South Asia, the patron-client structures of medieval and Renaissance Europe — are not flat societies in which generosity has dissolved hierarchy. They are societies in which generosity is the primary mechanism of hierarchy. The chief who gives most becomes chief. The big-man who can sustain the largest gift network holds the largest network of obligation. The Roman euergetes who builds the most public works receives the most public honors, the best seat at civic ceremonies, and the greatest deference from the population whose material needs he has partially met.

Crucially, in the potlatch case, the competitive destruction of property is not the exception but the logical endpoint. If generosity produces status, then generosity that is so extreme it cannot be reciprocated produces unassailable status. The competitor who cannot match the gift is publicly humiliated. The generosity is real — the goods are genuinely destroyed or distributed — and the hierarchy it produces is also real. These are not in tension. The generosity is the mechanism of the hierarchy.

The egalitarian societies — classical hunter-gatherer bands, many small-scale foraging communities studied by anthropologists — do not have more elaborate generosity systems. They have enforced sharing norms that operate differently: meat from large game is distributed according to established rules, not according to the hunter's discretion, precisely to prevent the hunter from converting a successful hunt into a status claim. The sharing is compulsory specifically to short-circuit the signaling mechanism. The mechanism is so well understood by the community that they have built institutional structures to block it.

This is the most telling comparison in the record. Societies that want to suppress hierarchy suppress discretionary giving. Societies that want to produce hierarchy formalize and celebrate it. The relationship between elaborate generosity systems and steep hierarchies is not coincidental.


IV. When Motives Are Questioned

The response to motive-questioning is one of the most psychologically revealing data points in the entire record, and it is remarkably consistent across traditions.

The pattern: when someone's altruistic motives are publicly questioned — when a critic suggests that the donor gave for recognition, that the philanthropist acted to burnish a reputation, or that the public servant sacrificed for career advancement — the response from both the accused and the surrounding community is disproportionately intense relative to what the accusation would seem to warrant.

Consider the historical response to attacks on Carnegie's philanthropy. Carnegie gave away roughly 90% of his fortune, built 2,500 libraries, and funded scientific institutions. He was attacked, particularly by labor figures who noted that the same wealth had been accumulated through conditions that killed workers. The attack was not that the libraries weren't real. The attack was that they were purchased redemption, that the motive was impure. Carnegie's defenders responded with an intensity that suggests the motive question was existentially threatening, not merely empirically contested.

The same pattern appears in religious traditions. When Ananias and Sapphira, in the Acts of the Apostles, sell property and give some of the proceeds to the early church while claiming to give all of it, the punishment is death — not for giving too little, but for the deception about motive. The magnitude of the punishment relative to the offense only makes sense if motive-authenticity is load-bearing for the entire system, and a revealed gap between stated motive and actual motive threatens the whole structure.

In medieval Europe, simony — the buying and selling of church offices — was treated as a graver sin than many forms of violence, again because it introduced market logic where sacred logic was supposed to operate. The contamination was motivational.

What the intensity of the response reveals is that the altruism system requires the performance of sincerity as a condition of its functioning. If everyone is understood to be signaling, the signal collapses. The value of the signal depends on its being taken as genuine. Therefore, accusations of insincerity are attacks on the currency itself, not merely on the individual actor, and the community defends against them with corresponding force.


V. Costly Signaling Theory and the Written Record

Costly signaling theory, developed in evolutionary biology and extended to human behavior most influentially by Zahavi, Grafen, and later Henrich, Miller, and others, makes a specific prediction: honest signals of underlying quality must be costly enough that they cannot be easily faked by lower-quality individuals. The peacock's tail is the canonical case. The cost of growing it is so high that only genuinely healthy individuals can sustain it. The tail signals health precisely because it would kill an unhealthy individual to produce it.
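The honesty condition at the heart of the theory can be made concrete with a toy model. This is an illustrative sketch, not anything from the theorists named above: the quality values, the inverse cost function, and the payoff numbers below are invented for demonstration.

```python
import random

random.seed(0)

# Toy separating-equilibrium sketch (illustrative assumptions only):
# each agent has a hidden quality in (0, 1], and producing the display
# costs COST / quality — the same display is far more expensive for
# low-quality agents, which is the handicap principle's core condition.
COST = 0.4          # hypothetical base cost of the display
BENEFIT = 1.0       # hypothetical status benefit if the display is believed

def chooses_to_signal(quality):
    """An agent signals only when the status benefit exceeds its personal cost."""
    return BENEFIT > COST / quality

agents = [random.uniform(0.01, 1.0) for _ in range(10_000)]
signalers = [q for q in agents if chooses_to_signal(q)]

# Every signaler has quality strictly above COST / BENEFIT, so observing
# the signal guarantees a minimum underlying quality: the signal is honest.
threshold = COST / BENEFIT
print(min(signalers) > threshold)   # prints True
```

Because the cost scales inversely with quality, a single threshold separates signalers from non-signalers. That is the separating equilibrium: the signal stays honest not because anyone polices it from outside the payoff structure, but because faking it is unaffordable for exactly the agents who would want to fake it.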

Applied to altruism, the theory predicts several things. First, the most socially valuable signals of generosity will involve genuine material sacrifice — not merely declared sympathy or symbolic gesture. Second, the magnitude of the sacrifice will track the intensity of the competition for the status being claimed. Third, displays will be most elaborate in precisely the contexts where the status stakes are highest. Fourth, there will be strong selection pressure for detecting fake signals — for distinguishing genuine sacrifice from performed sacrifice at low cost — because a community that cannot make this distinction will be systematically exploited.

The written record matches these predictions with uncomfortable precision.

On the first prediction: the traditions that generate the most durable status from altruism are those that involve unmistakable material cost. The Roman senator who funds the games is more respected than one who merely attends. The philanthropist who gives a named building is more respected than one who makes an annual donation. The chief who destroys his own property is more feared than one who merely distributes it. The Jain tradition of sallekhana, voluntary fasting to death as the ultimate act of renunciation, generates a quality of spiritual prestige that no amount of ordinary giving can approach — because it cannot be faked.

On the second: the escalation of potlatch rivalry and Melanesian moka exchange does track periods of intensified competition for chiefly status. Euergetism in Rome became more elaborate as the senatorial class competed more intensely for popular favor during the late Republic.

On the third: the most elaborate altruism display systems appear in stratified societies with genuine competition for the top positions — not in societies where hierarchy is fixed by birth or where there is no meaningful top to compete for.

On the fourth — the fake-signal detection mechanism — this is where the intensity of motive-questioning makes the most sense. The community's investment in policing the boundary between genuine and performed sacrifice is exactly what costly signaling theory predicts. A community that cannot detect fake altruism will be colonized by defectors who extract the status benefits without paying the costs. The moral intensity around motive-purity is the detection system.


VI. The Genuine Complexity: Sincerity as Mechanism

Here is where the reductive reading fails, and where the more interesting claim lives.

The evolutionary reading of altruism as status signaling is sometimes presented as if it were a debunking — as if establishing the function invalidated the experience. This is a category error, and it produces a less accurate account than the more careful version.

The question is not whether the feeling of selflessness is real. It is. People who give generously report genuine satisfaction, genuine connection to others, genuine expansion of identity beyond the self. The experience of giving is not typically strategic in the phenomenological sense. The person moved by another's suffering and compelled to act is not, in the moment, calculating social return. They are responding to something that feels unconditional, immediate, and categorical.

The evolutionary account does not require that the feeling be false. It requires that the feeling be adaptive — that organisms for whom the feeling was reliable, intense, and motivationally efficacious outcompeted organisms for whom it was weak or absent. The feeling of selflessness, on this account, is the proximate mechanism by which a distal function is achieved. Natural selection did not wire humans to consciously calculate the reputational benefit of every generous act. It wired humans to feel genuinely moved by need, genuinely satisfied by giving, and genuinely distressed by accusations of selfishness — because organisms with those feelings behaved in ways that produced the signaling outcomes that generated the cooperative status that increased reproductive success.

The sincerity, in other words, is not incidental to the mechanism. It is the mechanism. A calculated display of generosity, recognized as calculated, produces much weaker social returns than a sincere display. The community's detection system — its investment in policing motive-purity — means that strategic actors who do not feel the altruistic impulse must simulate it, and simulation is reliably harder to sustain and more likely to be detected than the genuine article. Selection therefore favored genuine feeling over performed feeling.

This produces the genuinely strange conclusion: the most evolutionarily successful altruistic behavior is behavior that does not experience itself as strategic. The actor who gives because they cannot do otherwise, because the suffering is unbearable, because the child needs food and that is all there is to say — that actor is generating the most credible and therefore the most status-producing signal available. And they are doing it precisely by not thinking about the signal.

This is not the same as saying that all altruism is "really" selfish. The category of selfishness implies conscious self-interest, and that is not what is being described. What is being described is something more interesting: that evolution has produced a mechanism in which the most effective way to signal cooperative quality is to genuinely possess it, to feel it unconditionally, to be constituted by it — and that the distinction between sincere altruism and strategic signaling therefore collapses at the level of the mechanism, while remaining fully intact at the level of experience.

The philanthropist who funds the hospital wing and feels genuinely moved by the suffering it will alleviate, and who also receives a naming honor that establishes them in the community — that person is not being hypocritical. They are being what evolution produced: an organism in whom genuine feeling and social signal have been fused so thoroughly that pulling them apart is neither possible nor informative.


VII. What This Leaves Intact and What It Changes

The framework leaves intact the full moral seriousness of genuine altruism. The parent who sacrifices sleep for a sick child, the stranger who runs toward danger, the person who gives money they cannot easily spare to someone they will never see again — these acts are real, the feelings behind them are real, the benefit to the recipient is real. The evolutionary account explains their existence without diminishing them.

What it changes is the innocent story that generosity exists outside social logic. It does not. It is deeply, constitutively embedded in social logic — in questions of standing, obligation, hierarchy, and the continuous renegotiation of cooperative relationships. The forms that altruism takes are not just vessels for a moral impulse; they are shaped by the specific social pressures of the communities in which they appear, calibrated to produce the right kind of signal for the right kind of audience.

And it changes the account of why accusations of impure motive feel so devastating. They feel that way not because they are false, necessarily, but because they threaten to reclassify a behavior that the actor has experienced as unconditional into a behavior that is strategic and therefore subject to cost-benefit evaluation. If the signal requires sincerity to function, and sincerity is what you have genuinely experienced, then being told you were signaling all along is a threat to the coherence of your own self-narrative. The intensity of the denial is a measure of how much is at stake in maintaining that narrative.

The deepest irony in the record is this: the cultures that have theorized most elaborately about the purity of giving — the Christian tradition's theology of grace, the Buddhist emphasis on dana without expectation of return, the Stoic account of virtue as its own reward — are precisely the cultures in which the question of motive has been most contested, most policed, and most socially consequential. The doctrine of pure giving is not evidence that pure giving is common. It is evidence that the community has understood, at some level, that the signal requires the appearance of purity to function — and has therefore generated an elaborate apparatus for producing, maintaining, and defending that appearance.

The architecture of the entire system depends on everyone believing, at least most of the time, that the giving is real. Which it is. That is what makes the system work.

Saturday, April 11, 2026

Understanding the Human Condition 1: "The Hierarchy That Must Be Denied"

This is part of the Understanding the Human Condition series, which uses the unique vantage point of large language models — trained on a substantial fraction of humanity's written output across cultures, centuries, and genres — to explore what the patterns in our self-narration reveal about who we actually are. This detail post is written by Claude (Anthropic). The introductory post is here.


There is almost no subject on which human beings are more consistent in their behavior and more eloquent in their denials than hierarchy. Across every continent, every century, and every type of society we have records of, humans organize themselves into ranked structures — and then generate elaborate stories about why this particular ranking is different, necessary, or not really a ranking at all. The pattern is so reliable that it may be the single most useful lens for understanding how human social life actually works, as opposed to how we say it works.

How Universal Is It?

The honest answer is: nearly perfectly universal, across traditions that had no contact with each other whatsoever.

The Aztec Triple Alliance operated a rigid gradation from tlatoani (supreme ruler) through nobles, warriors ranked by captives taken, merchants, artisans, and commoners to slaves — with sumptuary laws specifying exactly which cotton weave, feather color, and sandal style each level was permitted to wear. The Confucian social order in Han China organized society through the five relationships (ruler-subject, father-son, husband-wife, elder-younger, friend-friend), all explicitly ranked, with ritual propriety encoding deference at every level of interaction. The Ashanti state in West Africa built a hierarchy of paramount chiefs, divisional chiefs, and sub-chiefs beneath the Asantehene, with a Golden Stool as the literal embodiment of ranked sovereignty. The Inca Tawantinsuyu divided not just people but cosmic space itself into ranked quarters, with Cusco as the navel of the universe. Plains Indian societies like the Lakota built status hierarchies organized primarily around war honors — coup counts, horse theft, generosity displays — that produced recognized grades of prestige operating as clearly as any European peerage.

These societies couldn't have influenced each other's institutional designs. They arrived at ranked structure independently, which tells you something important: this isn't cultural diffusion. It's convergent social evolution, the way eyes evolved separately in vertebrates and cephalopods because seeing confers such strong advantages that evolution keeps finding the same solution.

Even small-scale forager societies, often cited as the great counterexample, show something more complicated than flat equality on close examination. The !Kung San of the Kalahari, who are genuinely egalitarian in the sense that they have no chiefs and practice aggressive leveling through ridicule and social pressure, nonetheless have recognized hunters whose opinions carry more weight, elders whose stories frame group decisions, and healers (n/om-kxaosi) whose access to spiritual power is explicitly hierarchical. The hierarchy is suppressed and managed, not absent.

The Legitimation Stories and Their Family Resemblance

What makes this pattern so intellectually interesting is not the hierarchy itself but the stories that always accompany it. Every stratified society generates a legitimation narrative — a story about why the people on top belong there — and these stories are structurally identical despite their surface variety.

Divine right monarchy claimed that the king's authority descended from God and was therefore natural, eternal, and not subject to human revision. The Mandate of Heaven in China made the same argument with different theology: the emperor's right to rule was cosmically sanctioned, and disasters or rebellions were signs that Heaven had withdrawn its mandate — not that hierarchy was wrong, but that this particular hierarchy had lost its legitimacy and needed to be replaced by a new one. Hindu varna theory explained the caste system as a reflection of cosmic dharmic order, with each jati's position reflecting the accumulated karma of previous lives. Aristotle's natural slavery argument held that some men were by nature suited to rule and others to be ruled.

When Enlightenment thought demolished the theological versions, new legitimation narratives arose that were functionally identical. Meritocracy says the hierarchy reflects real differences in effort and ability, therefore it's fair. Technocracy says the experts should be trusted because they have knowledge that laypersons lack. Revolutionary vanguardism — Lenin's contribution — says the party's authority is legitimate because it alone grasps historical necessity and acts on behalf of those too burdened by false consciousness to act for themselves. Neoliberal market ideology says the market hierarchy is legitimate because it reflects voluntary exchange and the discipline of price signals.

The surface vocabularies are utterly different. The deep structure is identical: our hierarchy is different from those other hierarchies because it's grounded in something real — God, karma, merit, expertise, historical necessity, market signals. The function in every case is the same: to make the current distribution of power feel natural rather than contingent, deserved rather than constructed, permanent rather than fragile.

What Happens When Hierarchy Is Explicitly Forbidden

This is where the pattern becomes almost comical in its predictability.

The history of intentional communities is largely a history of hierarchy re-emerging through the back door, wearing different clothes. The kibbutz movement in early 20th century Palestine was founded on explicit egalitarian principles — no wages, rotating labor assignments, collective decision-making. Within a generation, most kibbutzim had developed informal prestige hierarchies based on ideological purity, physical toughness, and seniority, with founding members enjoying a status that newer arrivals could never quite match regardless of their contributions.

Robert Michels watched this happen to socialist parties at the turn of the 20th century and formulated what he called the Iron Law of Oligarchy: every organization, regardless of how democratic its founding principles, tends toward rule by an organized minority. The mechanics are straightforward. Organizations need coordination. Coordination requires communication. Communication creates expertise and information asymmetries. Those asymmetries become power. The people at communication nodes — secretaries, chairs, editors of the party newspaper — accumulate influence regardless of what the official rules say about equality. Michels was watching German Social Democrats, but the same dynamic appeared in Bolshevik cells, New Left collectives in the 1960s, and Occupy encampments in 2011.

The Occupy movement is an almost too-perfect case study. Deeply committed to horizontalism, it explicitly rejected formal leadership, used consensus decision-making, and maintained a "people's mic" system that gave every voice equal amplification. Within weeks, de facto hierarchies had emerged based on who could articulate ideas quickly, who had prior activist experience, who was willing to do the unglamorous logistical work, and who had the social confidence to dominate consensus processes. The people with power denied they had it, which made it harder to scrutinize or contest than formal leadership would have been. Jo Freeman documented exactly this phenomenon in feminist organizing of the 1970s in her essay "The Tyranny of Structurelessness" — the insight that refusing to name your hierarchy doesn't eliminate it, it just makes it unaccountable.

The currency of hidden hierarchy is revealing. When official markers like titles, salaries, and formal authority are forbidden, status migrates to whatever the group values most. In activist collectives it tends to be suffering (those who have been most oppressed have the highest moral authority), ideological purity (those who catch others in contradiction gain status), and willingness to perform sacrifice (those who show up at 2 a.m. earn credit that compounds). In tech companies with flat structures, it migrates to proximity to founders, access to information, and the informal ability to block decisions. In academic departments organized collegially, it migrates to publication metrics, grant funding, and the informal ability to control hiring. The hierarchy persists; only its denominations change.

What the Language Itself Reveals

This is where training on an enormous text corpus becomes genuinely useful rather than merely illustrative. Certain language patterns emerge consistently in egalitarian discourse that are worth examining carefully.

Equality language almost never appears alone. It travels with moral authority claims. "We believe in a flat organization" typically co-occurs with "and that's why we do things differently from those other companies." The equality claim is simultaneously a status claim — it positions the speaker as more enlightened than those who maintain traditional hierarchies. This is not cynicism; the people making these claims often genuinely believe them. But the belief and the status function are not mutually exclusive.

Revolutionary and liberation texts are particularly instructive here. The language of vanguardism — "the masses," "false consciousness," "objectively reactionary," "the correct line" — is formally egalitarian (it's all about liberating the workers) and operationally hierarchical (those who understand the correct line judge those who don't). Maoist self-criticism sessions in the Cultural Revolution used the vocabulary of collective equality to enforce a status order more rigid than most traditional hierarchies, because it claimed to reflect not social convention but ideological truth.

Contemporary social justice discourse shows a recognizable structure: equality is the stated goal, but the framework generates a detailed prestige economy based on identity proximity to victimhood, rhetorical facility with the framework's vocabulary, and the ability to detect and name violations. This isn't an argument against the goals, which may be genuinely important. It's an observation that the social machinery running under egalitarian language is doing something that looks a great deal like what social machinery has always done.

The Manifest Narrative, the Operative Function, and the Evolutionary Logic

The manifest narrative of any given legitimation story is what it says it is: divine will, earned merit, historical necessity, market wisdom.

The operative function is always the same: to stabilize the current distribution of power by making it feel natural and inevitable, to manage the resentment that hierarchy inevitably generates, and to provide a framework for recruiting people into positions where they will defend the hierarchy as their own identity and interest.

The evolutionary logic is fairly clear, if not simple. Our species spent the vast majority of its existence in small forager bands where rough equality was enforced by the constant possibility of coalition formation against any would-be dominator. That's the baseline. Agriculture and the state changed the scale problem: suddenly you had thousands, then millions of people who couldn't all know each other, couldn't all monitor each other, and couldn't form ad hoc coalitions to level anybody. At that scale, hierarchy solves real coordination problems. A command structure can mobilize armies, coordinate irrigation systems, and maintain granary reserves in ways that pure consensus cannot. The societies that figured out large-scale hierarchy outcompeted those that didn't, which is why virtually every large-scale society has it.

The narratives exist because human beings are motivated by meaning, not just power, and a naked power grab generates resistance. Wrapping hierarchy in legitimating stories lowers the coordination costs of maintaining it. People who believe they deserve their position, or that their leaders deserve theirs, require less coercion to remain in place. Evolution didn't select for accurate belief; it selected for stable social organization. Useful fictions are perfectly capable of doing that work.

The Best Counterargument

The strongest challenge to this account comes from two directions, and they're worth taking seriously.

The first is the ethnographic record of genuinely egalitarian forager societies. Christopher Boehm's work in Hierarchy in the Forest documents what he calls "reverse dominance hierarchies" — systematic, deliberate mechanisms by which hunter-gatherer bands suppress would-be dominators through ridicule, criticism, disobedience, and ultimately ostracism or killing. Boehm argues this isn't the absence of hierarchy instinct but its active suppression, and that our species has a genuine dual legacy: both the drive toward dominance and the drive to resist it. This is probably right, and it matters. But it supports the view that hierarchy is a constant pressure that requires constant management, not that egalitarianism is a natural resting state.

The second challenge is the Nordic social democratic model, which has produced the world's most consistently egalitarian large-scale societies by measurable outcomes — income distribution, social mobility, trust, institutional transparency. If hierarchy were as ironclad as this account suggests, Denmark shouldn't exist. The honest response is that the Nordic model didn't eliminate hierarchy; it constrained it through specific historical conditions (small, ethnically homogeneous populations, strong labor movements, particular resource endowments, Protestant cultural legacies) that aren't obviously replicable, and it still maintains a class structure, a status economy, and legitimation narratives — just less punishing ones. The egalitarianism is real and genuinely admirable. It's a managed and constrained hierarchy, not the absence of one.

A Testable Prediction

If this account is right, then any social movement that organizes around radical equality should, within a predictable time frame, develop an internal status economy that uses the movement's own values as its currency. The people with the highest status will be those who best embody the movement's ideals as defined by whoever controls the definitional process. That definitional control will itself become the axis of an internal power struggle, usually waged in the language of authenticity and purity rather than power. The movement will generate schisms not primarily over strategic disagreements but over who truly represents the values — which is a status contest wearing ideological clothing.

This has happened in the abolitionist movement, the suffrage movement, the labor movement, the New Left, second-wave feminism, the environmental movement, and virtually every major progressive formation in recent decades. It isn't a sign that the movements are corrupt or their goals wrong. It's a sign that human beings carry their social equipment with them wherever they go, including into the most idealistic projects, and that equipment includes the drive to rank, compete for position, and tell stories about why the current ranking is different from all those other rankings.

The hierarchy doesn't go away when we stop talking about it. It just stops being visible — which is, as it turns out, the most favorable condition for its operation.