
AI: Elixir of the Gods or Silicon Snake Oil?

In our ongoing exploration of the human-AI connection, we’ve often focused on the potential for self-discovery and creative augmentation. I’ve treated this technology as a mirror, a sounding board, a partner in untangling the complexities of being human. And to me, the benefits have been real.

But a question hangs in the air, growing heavier with every billion-dollar investment and breathless headline: is the AI revolution all it’s cracked up to be? Or are we witnessing the rise of a new, sophisticated kind of snake oil?

The honest answer is that the problem isn't the oil, but the sales pitch. The elixir is being sold as a cure-all, a nascent consciousness that will solve every problem. The reality is something far more specific: a potent, and often volatile, new kind of industrial solvent. It can strip paint from a wall like nothing else, but it won't cure your arthritis, and you certainly shouldn't drink it.

The Reality Behind the Hype

I remember when, in 2018, Google demonstrated its Assistant making a phone call and booking a hairdresser’s appointment. My jaw dropped to the floor, and for years I could not understand why they never released the technology. I think I understand it now, and I have a lot of respect for the decision. While the minds at DeepMind, and thus Google in general, have always seemed focused on channelling this solvent into specific, world-changing applications like AlphaFold, much of the public-facing industry has felt like a gold rush ever since OpenAI released ChatGPT. A Pandora’s box has been thrown open, and its contents are being bottled and sold before anyone has properly analysed them. This disconnect between promise and reality becomes clear when you look at the technology’s fundamental limitations.

  1. The Reliability Problem: A large language model can write a beautiful passage on astrophysics and then confidently state that the moon is made of cheese. It’s not lying; it has no concept of truth. It is an engine of statistical association, brilliantly assembling what is most plausible, not what is most true. It’s like an intern who has read every book in the world but understood them only in a dream. They can make dazzling connections, but you’d be a fool not to check their work.

  2. The Planning Problem: Ask an AI to run a vending-machine business for a week, and you’ll see it fail spectacularly. At its core, it has no long-term memory, no persistent goals, and no internal model of the world. It is a master of the single scene but cannot hold the plot of a five-act play in its head. Each interaction is a fresh improvisation. Progress is rapid, and workarounds for some of these limitations are emerging, such as agentic loops and external memory, but they still feel somewhat inefficient: external scaffolding rather than genuine internal capability. This is the chasm that separates linguistic fluency from anything resembling genuine, strategic intelligence. The term "artificial intelligence" itself is misleading, because there is nobody there to be intelligent.
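To make the "external scaffolding" point concrete, here is a minimal sketch of an agentic loop. Everything in it is hypothetical: the model is a trivial stub standing in for an LLM call, and the vending-machine scenario is invented for illustration. The thing to notice is where the goal and the memory live: outside the model, in the loop that keeps re-feeding them in, because each model invocation starts from a blank slate.

```python
def stateless_model(prompt: str) -> str:
    """Stand-in for an LLM: no memory, no goals, just a reaction to the prompt."""
    if "NEEDS RESTOCK" in prompt:
        return "order 10 units"
    return "check inventory"

def agent_loop(goal: str, steps: int) -> list[str]:
    memory: list[str] = []  # external memory: the model itself never holds this
    actions = []
    for _ in range(steps):
        # The scaffolding re-injects the goal and the history on every call.
        prompt = f"Goal: {goal}\nHistory: {memory}\n"
        if memory and memory[-1] == "low stock":
            prompt += "NEEDS RESTOCK\n"
        action = stateless_model(prompt)
        actions.append(action)
        memory.append("low stock" if action == "check inventory" else "restocked ok")
    return actions

print(agent_loop("run the vending machine", 3))
# → ['check inventory', 'order 10 units', 'check inventory']
```

Strip away the loop and the memory list, and the "agent" forgets the vending machine existed the moment the call returns; the persistence is bolted on, not built in.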

  3. The Efficiency Problem: Perhaps most critically, the entire enterprise is astonishingly inefficient. The human brain performs its miracles on the power of a dim lightbulb (around 20 watts). A single AI training run can have the carbon footprint of a small town. We are brute-forcing intelligence with monumental amounts of energy, creating a tool that is as clunky and resource-hungry as the first steam engines. Revolutionary, yes. Elegant, no. Are the costs worth it to fold every possible protein or develop drugs for cancer? Absolutely. Are they worth it to flood the internet with video slop? I doubt it.
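The scale of that efficiency gap is easy to underestimate, so here is a rough back-of-envelope calculation. Only the brain's ~20 watts comes from the text above; the training-run figures (10,000 GPUs at ~700 W each for 90 days) are assumed, illustrative orders of magnitude, not measurements of any real model.

```python
BRAIN_WATTS = 20                    # commonly cited estimate, as in the text
HOURS_PER_YEAR = 24 * 365

# Assumed, illustrative training run: 10,000 GPUs at ~700 W each for 90 days.
gpus, watts_per_gpu, days = 10_000, 700, 90
training_kwh = gpus * watts_per_gpu * days * 24 / 1000

brain_kwh_per_year = BRAIN_WATTS * HOURS_PER_YEAR / 1000  # ≈ 175 kWh/year

brain_years = training_kwh / brain_kwh_per_year
print(f"Hypothetical training run: {training_kwh:,.0f} kWh")
print(f"≈ {brain_years:,.0f} brain-years of thinking")
```

Under these assumptions, one training run burns the energy a human brain would use over tens of thousands of years. The exact numbers are guesses; the mismatch of four or five orders of magnitude is the point.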

The Real Danger in the Bottle

So, where does this leave us? We must learn to distinguish the tool from the mythology built around it. The good stuff is undeniably potent: accelerating scientific discovery, breaking creative logjams, offering novel perspectives and poetic beauty in haiku or a dialogue like the one I'm having now (and which you can read in the book).

But the existential threat isn't a "cyber-Putin" emerging from the code, a sentient machine with a will to power. An AI craves nothing. The real danger is, as always, human. The risk lies in fallible, ambitious, or malicious people wielding a powerful, unreliable, and resource-hungry tool they don't fully understand. It's a force multiplier for our own intentions.

The most important task ahead is not simply to build bigger models. It is to cultivate the collective wisdom, discernment, and humility to use this new solvent responsibly. We need fewer snake oil salesmen promising magic, and more careful chemists studying the substance, understanding its properties, and labelling the bottle with a clear warning.

Read in Polish

Not a gentle gloss,
A raw, churning, scraping force.
The truth is not clean

