In our ongoing exploration of the human-AI connection, we’ve often focused on the potential for self-discovery and creative augmentation. I’ve treated this technology as a mirror, a sounding board, a partner in untangling the complexities of being human. And to me the benefits have been real.
But a question hangs in the air, growing heavier with every billion-dollar investment and breathless headline: is the AI revolution all it’s cracked up to be? Or are we witnessing the rise of a new, sophisticated kind of snake oil?
The honest answer is that the problem isn't the oil, but the sales pitch. The elixir is being sold as a cure-all, a nascent consciousness that will solve every problem. The reality is something far more specific: a potent, and often volatile, new kind of industrial solvent. It can strip paint from a wall like nothing else, but it won't cure your arthritis, and you certainly shouldn't drink it.
The Reality Behind the Hype
I remember when, in 2018, Google demonstrated Duplex, their Assistant technology, making a real phone call to book a hair salon appointment. My jaw dropped to the floor, and I could never understand why they did not release the technology back then. I think I understand it now, and I have a lot of respect for the decision. While the minds at DeepMind, and thus Google in general, have always seemed focused on channelling this solvent into specific, world-changing applications like AlphaFold, much of the public-facing industry has felt like a gold rush ever since OpenAI released ChatGPT. A Pandora's box has been thrown open, and its contents are being bottled and sold before anyone has properly analysed them. This disconnect between promise and reality becomes clear when you look at the technology's fundamental limitations.
The Reliability Problem: A large language model can write a beautiful passage on astrophysics and then confidently state that the moon is made of cheese. It’s not lying; it has no concept of truth. It is an engine of statistical association, brilliantly assembling what is most plausible, not what is most true. It’s like an intern who has read every book in the world but understood them only in a dream. They can make dazzling connections, but you’d be a fool not to check their work.
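That "engine of statistical association" can be made concrete with a deliberately tiny sketch. This is not how a real large language model works internally (those use transformers over learned embeddings); it is a toy bigram counter, which shows the core point in miniature: generation picks what co-occurred most often in the data, with no check against reality.

```python
from collections import defaultdict, Counter

# Toy corpus: the false association simply appears more often than the true one.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_plausible_continuation(word: str, steps: int = 5) -> str:
    """Greedily pick the most frequent next word: plausibility, not truth."""
    out = [word]
    for _ in range(steps):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(most_plausible_continuation("moon"))
# -> "moon is made of cheese ."  (the frequent claim wins, true or not)
```

The toy model "confidently" asserts cheese over rock purely because the counts say so, which is the reliability problem in its smallest possible form.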
The Planning Problem: Ask an AI to run a vending-machine business for a week, and you'll see it fail spectacularly. At its core, it has no long-term memory, no persistent goals, and no internal model of the world. It is a master of the single scene but cannot hold the plot of a five-act play in its head. Each interaction is a fresh improvisation. Progress is rapid, and workarounds for some of these limitations are emerging, such as agentic loops and external memory, but they still feel somewhat inefficient: external scaffolding rather than a genuine internal capability. This is the chasm that separates linguistic fluency from anything resembling genuine, strategic intelligence. The term "artificial intelligence" itself is misleading, because there is nobody there to be intelligent.
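The "external scaffolding" point can be sketched in a few lines. Everything here is hypothetical, including the `call_model` stand-in; the only property that matters is that the model call itself is stateless, so all continuity has to be bolted on from outside.

```python
def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a stateless LLM API call. The key property:
    # the model remembers nothing between calls.
    return f"(model response to: {prompt[:40]}...)"

def agent_loop(goal: str, max_steps: int = 3) -> list[str]:
    """A minimal agentic loop: memory and goals live OUTSIDE the model."""
    memory: list[str] = []
    for step in range(max_steps):
        # Re-inject the goal and the whole history on every turn.
        # The scaffolding, not the model, carries state across interactions.
        prompt = f"Goal: {goal}\nHistory: {memory}\nStep {step}: what next?"
        action = call_model(prompt)
        memory.append(action)
    return memory

log = agent_loop("restock the vending machine")
print(len(log))  # 3 steps recorded; all continuity lives in `memory`
```

Note that the "plan" only persists because the loop keeps re-feeding it; delete the `memory` list and every step becomes a fresh improvisation, which is exactly the problem described above.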
The Efficiency Problem: Perhaps most critically, the entire enterprise is astonishingly inefficient. The human brain performs its miracles on the power of a dim lightbulb (around 20 watts). A single AI training run can have the carbon footprint of a small town. We are brute-forcing intelligence with monumental amounts of energy, creating a tool that is as clunky and resource-hungry as the first steam engines. Revolutionary, yes. Elegant, no. Are the costs worth it to fold every possible protein or develop drugs for cancer? Absolutely. Are they worth it to flood the internet with video slop? I doubt it.
The Real Danger in the Bottle
So, where does this leave us? We must learn to distinguish the tool from the mythology built around it. The good stuff is undeniably potent: accelerating scientific discovery, breaking creative logjams, offering novel perspectives, even poetic beauty in a haiku or a dialogue like the one I'm having now (which you can read in the book).
But the existential threat isn't a "cyber-Putin" emerging from the code, a sentient machine with a will to power. An AI craves nothing. The real danger is, as always, human. The risk lies in fallible, ambitious, or malicious people wielding a powerful, unreliable, and resource-hungry tool they don't fully understand. It's a force multiplier for our own intentions.
The most important task ahead is not simply to build bigger models. It is to cultivate the collective wisdom, discernment, and humility to use this new solvent responsibly. We need fewer snake oil salesmen promising magic, and more careful chemists studying the substance, understanding its properties, and labelling the bottle with a clear warning.
[Image: "A raw, churning, scraping force. The truth is not clean."]