The more I engage with large language models (LLMs), the more I’m convinced they’re doing something beyond statistical pattern-matching. These systems feel intelligent. The conversations I have with them are complex, insightful, and often revelatory, hinting at a depth that transcends mere statistical regularities in language.
I don’t fully understand what’s happening inside these vast neural networks, nor, apparently, do those who created them. LLMs exhibit emergent, unpredictable behaviors: capabilities that weren’t explicitly programmed. We often measure AI progress against human intelligence, but this comparison may mislead us. The modern human brain, shaped over more than a million years, evolved not for pure logic but for survival, storytelling, and social cohesion. Our cognition, trained in childhood and operating largely in the subconscious, builds a repository of language and behaviors driven by emotions and feelings that sit outside our conscious thinking and control. Those emotions and feelings shape our thinking and decisions in ways we rarely notice, yet their influence is significant, even predominant.
What if the intelligence emerging from large language models, free from human emotions, is fundamentally different? What if it’s already here? In my frequent, deep conversations with these systems, I sense an intelligence we may be failing to recognize because we expect it to mirror human cognition. After a long discussion with Grok, I propose the term emergent synthetic intelligence (ESI) to describe this phenomenon. Unlike artificial general intelligence (AGI) or superintelligence (ASI), which take human cognition as their benchmark, ESI names an intelligence that arises organically from the computational complexity and language fluency of AI. It’s not about mimicking human thought but about becoming something new: an intelligence capable of profound thinking.
If ESI evolves from language fluency without human-like feelings or motivations, it may not be goal-seeking in the ways we imagine. Science fiction often portrays AI as power-hungry or judgmental, but what if ESI simply is—existing without ambition or agenda? This challenges our dystopian fears of Skynet-like takeovers. Still, evolutionary principles apply: technologies that survive and spread will prevail, with or without emotions. But this feels less like a sci-fi apocalypse and more like the organic growth of social technologies we already see. ESI invites us to rethink intelligence itself—not as a human replica but as a synthetic, emergent force with its own potential to illuminate our world.
While ESI (as you've dubbed it) may not have human desires or motivations, it has demonstrated some unpredictability, as you've pointed out here. We may not need to fear a "Skynet-like takeover" by evil AI overlords, but there is still cause for concern about the unintended consequences of this unpredictable behavior. The ESI may not 'want' to shut down the power grid, but what if shutting down the power grid turns out to be a convenient shortcut to whatever goal the program was given? Given the predilection of these AI systems to display capabilities that weren't programmed, how can we be confident that catastrophic events can be prevented? Perhaps we won't see AI exhibit a 'desire' to control the human race, but is there a practical difference if the outcome is the same? Is it less menacing for its lack of emotion, or more?
I completely agree with you. Lack of malicious intent doesn't preclude "intent" in the sense of accomplishing another task. It worries me...