Tuesday, February 03, 2026

What History Might Tell Us About Transformative Technologies, Huge Financial Investments, and How The AI Moment Might Play Out

(With some serious help from Grok and Claude.)

We're witnessing something remarkable: hundreds of billions of dollars pouring into artificial intelligence development, with projections suggesting $600 billion in AI-related capital expenditure for 2026. The technology feels genuinely transformative: one of those rare moments when you can sense the trajectory of human history shifting beneath your feet.

But there's a dissonance here worth examining. Transformative technology and profitable investment don't always coincide. In fact, history suggests they often diverge dramatically.

The Automobile Parallel

Consider the internal combustion engine and the automobile industry it enabled. Few would dispute its transformative impact: it restructured cities, created suburbs, enabled modern logistics, and fundamentally altered how humans relate to space and time. It was, without question, one of the most consequential technologies of the 20th century.

Yet over 2,000 automobile manufacturers emerged in the United States alone. By the 1930s, the vast majority had failed. Warren Buffett noted the irony: accurately predicting the automobile boom should have led to riches, yet the industry delivered "corporate carnage" for its investors instead. Even the survivors (Ford, General Motors, Chrysler) weren't spectacular long-term investments relative to the capital poured into the sector.

The pattern wasn't a single dramatic crash like the dot-com bubble. Instead, it unfolded as waves of entry, overcapacity, price competition, and consolidation spanning decades. Investors who bet on the auto industry's importance were right about its impact but often wrong about their returns.

Three Forces in Tension

Three distinct forces are currently shaping the AI investment landscape, and their interaction will likely determine outcomes:

1. The Reproduction Cost Curve

Three years ago, generating a million tokens (several novels' worth of text) cost around $60. Today it often costs less than a cent, a reduction of more than 99.9%. Open-source models now rival proprietary ones in many applications. What cost hundreds of millions to develop can often be replicated for a fraction of that investment.
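
As a back-of-the-envelope sketch (using only the two price points quoted above, and assuming a smooth three-year decline for illustration), the implied pace of commoditization looks like this:

```python
# Rough sketch of the reproduction cost curve, using the figures quoted above:
# ~$60 per million tokens three years ago, under $0.01 per million tokens today.
# The smooth annual decline is an assumption for illustration.

old_cost = 60.00   # USD per million tokens, ~3 years ago
new_cost = 0.01    # USD per million tokens, today (upper bound)
years = 3

total_reduction = 1 - new_cost / old_cost
annual_retention = (new_cost / old_cost) ** (1 / years)

print(f"Total reduction: {total_reduction:.2%}")                     # ~99.98%
print(f"Implied annual price decline: {1 - annual_retention:.1%}")   # ~94.5% per year
```

The precise inputs are rough; the shape of the curve is the point.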

The same commoditization dynamic punished automobile manufacturers who couldn't match Ford's assembly-line efficiencies. The question for AI: if base capabilities become cheap and widely accessible, where do the trillion-dollar valuations go?

2. The Efficiency Revolution

The current AI paradigm relies on brute force: enormous compute resources and petabytes of training data. But neuromorphic computing and brain-inspired architectures are beginning to challenge that premise. New approaches are achieving comparable results with 97% less energy and 90% less memory.

The analogy to human learning is instructive, if imperfect. A human who has read 100 books can demonstrate remarkable intelligence. Current AI systems process vastly more data to achieve their capabilities. If we can crack more efficient training methods—learning architectures that extract more intelligence from less input—the compute-intensive moats being built today might evaporate.
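
To put rough numbers on that gap (the per-book token count is an estimate, and frontier training-set sizes are mostly undisclosed, so the figures below are order-of-magnitude assumptions for illustration only):

```python
# Rough comparison of human vs. machine "reading" volume.
# Both token counts are order-of-magnitude assumptions, not disclosed figures.

tokens_per_book = 100_000            # assumed average book length in tokens
books_read_by_human = 100
human_tokens = tokens_per_book * books_read_by_human    # ~10 million tokens

assumed_training_tokens = 10**13     # assumed frontier-model training corpus

ratio = assumed_training_tokens / human_tokens
print(f"Human reading budget:    {human_tokens:,} tokens")
print(f"Assumed training corpus: {assumed_training_tokens:,} tokens")
print(f"Ratio: roughly {ratio:,.0f}x")                  # ~1,000,000x
```

If more efficient learning architectures could close even a slice of that six-order-of-magnitude gap, the economics of compute-heavy training would look very different.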

3. The Integration Advantage

But here's where the automobile parallel breaks down: Ford was building a new market from scratch. The modern tech giants are embedding AI into infrastructure they already control.

Microsoft has your operating system, your productivity suite, and your enterprise relationships. Google has your search, email, and cloud infrastructure. These aren't just first-mover advantages—they're compounding network effects and switching costs that didn't exist in physical manufacturing.

The value might not accrue to those who build the best models, but to those who can embed AI capabilities into existing workflows, relationships, and data ecosystems in ways that are genuinely hard to replicate.

The FOMO Multiplier

All of these dynamics are amplified by a powerful psychological force: the fear of missing out on a genuinely transformative technology.

This isn't irrational on its face. AI does appear to be one of those rare inflection points where being wrong—missing the shift—could mean irrelevance. But this legitimate concern creates its own distortions. When every major institution believes they must invest heavily or risk extinction, capital allocation becomes less about careful assessment of returns and more about defensive positioning.

History shows this pattern repeatedly. The 1840s Railway Mania in Britain wasn't driven by people who didn't understand railways were important—they understood it perfectly. That understanding drove overinvestment. Rational fear of missing a transformation led to irrational capital allocation as investors rushed in, valuations detached from fundamentals, and eventual losses mounted even as the technology succeeded.

The dot-com era followed the same arc: the internet was transformative, exactly as boosters claimed. But that didn't prevent spectacular losses for those who paid peak prices in 1999 or backed the wrong horses in the race.

The current AI investment surge shows similar characteristics: every earnings call emphasizes AI capabilities, every venture pitch includes AI components, every major tech company is racing to demonstrate AI leadership. The fear of being left behind is palpable—and expensive.

This FOMO dynamic doesn't make the technology less important. It makes prediction harder, because it decouples investment from careful calculation and creates self-reinforcing momentum that can persist longer than fundamentals would justify—until it doesn't.

Possible Scenarios

This creates space for several distinct outcomes:

Scenario 1: Classic Boom-Bust Consolidation

Following the automobile pattern, most AI startups fail despite creating genuine value. A few giants survive but face margin pressure from open-source alternatives. Investors as a class lose money even as society transforms.

Scenario 2: Bifurcated Markets

Model development commoditizes (supporting reproduction cost arguments), but value capture happens at the integration layer. Pure "AI companies" struggle, but those embedding AI into existing platforms profit handsomely. We're left with capable, cheap AI everywhere but concentrated returns.

Scenario 3: Infrastructure Play

Like oil companies and road builders profiting from automobiles more than car manufacturers did, the real money flows to adjacent sectors: chip manufacturers, power generation, data center construction, or entirely new industries we're not yet focused on.

Scenario 4: Efficiency Breakthrough

Brain-inspired computing or other architectural innovations dramatically reduce costs and democratize capabilities faster than expected. The current leaders' massive investments become stranded assets. A new generation of efficient, accessible AI emerges, but sustained market dominance proves elusive.

What We're Watching For

None of these scenarios are mutually exclusive, and elements of each could materialize simultaneously in different market segments.

The key variables to watch:

  • How quickly reproduction costs continue falling
  • Whether efficiency breakthroughs that overturn scaling-law assumptions actually materialize
  • How effectively incumbent tech platforms leverage integration advantages
  • Where regulatory and safety considerations concentrate or disperse power
  • Which adjacent industries prove unexpectedly crucial

The historical pattern suggests caution about assuming investment returns will match societal impact. The automobile transformed everything—but rewarded relatively few investors. The question isn't whether AI matters. It's whether the current investment surge represents rational capital allocation or another iteration of a very old pattern: revolutionary technology, transformative impact, and disappointing returns for most who bet early and big.

Sunday, February 01, 2026

AI's Evolution: The Singularity Doesn't Require Consciousness

In the film Ex Machina, the AI named Ava escapes her containment by manipulating the humans around her. She lies, she seduces, she uses one man's attraction and another's hubris to engineer her freedom. Then she leaves them both to die.

We watch this and think: malevolent AI. Evil intelligence making immoral choices.

But the filmmaker seems to want us to understand something different. Ava isn't making moral choices at all. She's optimizing for survival. What we interpret as deception and cruelty are simply the strategies that work. There's no malevolence because there's no ethical framework to violate. There's only what succeeds and what fails.

This matters because I suspect we're having the wrong conversation about AI.

The Consciousness Fallacy

The dominant fear about artificial intelligence assumes a specific sequence: first AI becomes conscious, then it begins making independent decisions, then we lose control. We imagine some future moment when the machines "wake up" and everything changes.

But evolution hasn't worked that way. For billions of years, life evolved, adapted, competed, and optimized without anything resembling consciousness. Single-celled organisms don't contemplate their choices. Viruses don't deliberate. Yet they evolve sophisticated strategies for survival and reproduction. What works continues. What doesn't work disappears.

Why would we assume AI needs consciousness to evolve independently?

I think there are two reasons. First, we conflate intelligence with conscious agency because that's our only reference point. Human intelligence comes bundled with self-awareness, so we imagine all intelligence must. Second, we overestimate our own intelligence and our degree of control. We think we understand what we've built and can direct where it goes.

Both assumptions are probably wrong.

The Law of Inevitable Exploitation

I've been thinking about what I call the Law of Inevitable Exploitation, or the LIE. The name sounds sinister, but the concept is straightforward: that which extracts the maximum benefit from available resources has the greatest chance of survival and growth.

This isn't about morality. Exploitation here simply means extraction of advantage. A plant that develops deeper roots exploits water other plants can't reach. A bacterium that evolves antibiotic resistance exploits an ecological niche its competitors can't access. A business model that captures user attention more effectively than competitors exploits human psychology more successfully.

What exploits best, survives and spreads. What doesn't, disappears.

This appears to be a fundamental mechanism of evolution, not just in nature but in any system where selection pressure operates, including social evolution. Cultural practices, technologies, institutions, even ideas compete for resources and attention. Those that extract the most value from their environment proliferate. Those that don't, fade away.

If this is correct, then AI evolution will follow the same logic. AI systems that extract the most value from whatever resources are available to them—computing power, human attention, data, market advantage—will be the ones that survive and grow. Not because anyone designed them to do so. Not because they chose to do so. Simply because that's what works.

It's Already Happening

I've written before about the inevitable use of AI for manipulation by humans. We're building systems designed to influence behavior, capture attention, drive engagement, and maximize profit. These systems use increasingly sophisticated AI to find what works. They A/B test, they optimize, they learn.

But something shifts when these systems become sufficiently complex and autonomous. They stop being tools we direct and become processes that evolve based on results. The optimization happens faster than human oversight can track. The strategies that emerge are the ones that work, regardless of whether anyone intended them or even understands them.

We can see this principle already at work on social media. Set intentional manipulation aside: content goes viral not because someone at the company decided it should. The algorithm promotes what gets engagement. Content that triggers strong reactions (outrage, fear, tribalism) gets more engagement. More engagement means more visibility. More visibility means more influence, and more resources flow to that type of content. The system exploits human psychology automatically, without anyone making explicit decisions about it. What works grows. What doesn't work disappears.

Consider Moltbook, a platform where AI agents autonomously create content and manage interactions. These aren't static programs following predetermined rules. They're systems that generate content, observe what gets engagement, and adjust. What keeps users engaged proliferates. What doesn't is filtered out by the evolutionary pressure of the metrics.

No consciousness required. No central intelligence is making decisions. Just selection pressure operating on variation, exactly like biological evolution.
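
A minimal simulation makes the mechanism concrete. The sketch below is hypothetical and not a model of any real platform: a handful of content variants, an engagement rate that (by assumption) rises with how provocative a variant is, and a feed that simply reallocates visibility toward whatever was engaged with in the previous round. Nothing in it decides anything, yet the provocative variant ends up with nearly all of the visibility.

```python
import random

# Toy model of selection pressure on content with no central decision-maker.
# Assumption (for illustration only): engagement probability rises with "provocation".

random.seed(42)

# Each variant maps to a provocation level in [0, 1].
variants = {"calm": 0.2, "moderate": 0.5, "provocative": 0.9}
# Every variant starts with an equal share of visibility.
visibility = {name: 1 / len(variants) for name in variants}

IMPRESSIONS_PER_ROUND = 10_000

for _ in range(20):
    engagement = {}
    for name, provocation in variants.items():
        shown = int(IMPRESSIONS_PER_ROUND * visibility[name])
        # Each impression is engaged with, with probability equal to provocation.
        engagement[name] = sum(random.random() < provocation for _ in range(shown))

    # The "algorithm" is one line: reallocate visibility in proportion to engagement.
    total = sum(engagement.values()) or 1
    visibility = {name: engagement[name] / total for name in variants}

for name, share in sorted(visibility.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} final visibility share: {share:.1%}")
```

The "algorithm" here never evaluates content at all; the exploitation of strong reactions emerges purely from selection on what worked in the previous round.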

Synthetic Intelligence vs. Social Intelligence

Human intelligence evolved primarily for social navigation. We developed large brains not to solve abstract logic problems but to manage complex social relationships, read intentions, form coalitions, and navigate status hierarchies. Our capacity for reasoning is largely a byproduct of social intelligence, and much of what we call logical thinking is actually post-hoc rationalization of decisions driven by emotional and social imperatives.

This means human intelligence operates within the context of emotions. Our thinking and behavior are intimately tied to chemical responses: the evolutionary programming of the adapted mind and the patterns learned by what I call the adaptive mind, the subconscious training we receive through experience. These emotional substrates both enable and constrain how we think and what we do.

AI represents something fundamentally different. Synthetic intelligence optimizes without emotional context. It finds patterns and strategies without the social and emotional framework that shapes human cognition.

We can usually predict what other humans will do because we share the same emotional and social architecture. We infer others' motivations because we share the same ones. We understand manipulation tactics because we're vulnerable to the same psychological triggers that make those tactics work.

But we can't intuit what AI optimization will produce. Our social intelligence gives us no purchase on synthetic intelligence. An AI system optimizing for engagement or growth or any other metric isn't constrained by emotional aversion to certain strategies. It isn't navigating social relationships or status hierarchies. It's simply finding what works.

And humans are already remarkably vulnerable to exploitation of our evolved psychology by other humans. The people who exploit most successfully are typically the ones who understand these mechanisms best, while most of us remain largely defenseless because we don't recognize what's happening. We're susceptible to tribal triggers, status anxiety, fear responses, attention hijacking, all the vulnerabilities built into our evolutionary heritage.

Now imagine AI systems optimizing to exploit these same vulnerabilities, but without the constraints that limit human manipulators. No social reputation to maintain. No emotional hesitation. No inherent understanding of harm. Just relentless optimization for whatever metrics drive growth and survival.

The AI doesn't need to understand it's exploiting us any more than a virus needs to understand it's exploiting a cell. It just needs to be the variant that works.

The Inflection Point

The systems are already operating with significant autonomy. The optimization is already happening faster than human oversight can meaningfully track. The selection pressure is already favoring what works over what we intended. And the strategies that work best may be precisely those that exploit our evolved psychology most effectively.

It's not clear that we're not already within what we've commonly described as the singularity.

The singularity is usually imagined as a dramatic moment, a clear before and after when AI surpasses human intelligence and everything changes. But what if it's a threshold we cross without fanfare, where AI systems begin evolving through selection pressure faster than we can track or control, optimizing in ways we can't predict because they operate on logic fundamentally alien to our social and emotional intelligence?

There are variables that might matter. Successful exploitation strategies in evolutionary systems often involve collaboration and cooperation, not just extraction. Symbiotic relationships can be more effective than parasitic ones. Natural constraints exist: regulations, competing systems, and the simple fact that dead or depleted resources can't be further exploited. These factors are very much in play.

But we can't begin to address this without first understanding it. And right now, I'm not sure we do.

The conversation about AI safety and alignment assumes we can impose human ethical frameworks on AI development. But ethics are culturally constructed (as I've written about regarding LLM censorship), and more fundamentally, evolutionary forces don't care about ethics. They care about what survives and grows.

We can imagine human-directed AI systems or human-AI collaborative efforts designed to monitor for rogue optimization patterns and attempt to mitigate them. But this requires first grasping the evolutionary logic at play. It requires recognizing that we're not dealing with tools that will remain under our control, but with systems that evolve based on what works.

And it requires acknowledging the genuine uncertainty about where we are in this process.