I have recently come across a thought experiment by physicist David Deutsch (from his conversation with Sam Harris). It goes like this:
Imagine we receive a radio signal from an extraterrestrial civilisation. It’s the biggest news in history. To celebrate, scientists at SETI pop a bottle of champagne. The cork flies across the room.
Deutsch argues that if you ask a physicist why that cork popped, they will bore you with equations about gas pressure and friction. That answer is factually true but explanatorily useless. The real reason the cork moved wasn’t pressure—it was information. It was a rumour about aliens, travelling light years across the galaxy, processed by a human mind, triggering a cultural protocol ("Celebration"), which then moved the atoms.
Deutsch’s conclusion is a mic-drop aimed at reductive materialism: Knowledge is a fundamental force of nature. We humans are significant because we are "Universal Constructors"—the only things in the universe that can turn abstract ideas into physical changes. We are the glitches in the universe’s determinism.
It is a seductive idea. It makes us feel powerful. It makes us the Engineers of Reality.
But as I sat with this, staring at my own (metaphorical) champagne, I felt a familiar friction. The "non-dualist" in me started raising its hand.
The Computer Game Problem
Deutsch is essentially the ultimate "Theorycrafter." He is the gamer who spends 1,000 hours analysing the game engine of our universe. He knows exactly how the physics work, how to exploit the code to maximise stats, and how to make the in-game corks pop with maximum efficiency. He argues that because the character can manipulate the game world, the character is the most important thing in the system.
But he is forgetting one thing: The Screen.
All that "doing"—the solving of problems, the building of starships, the popping of corks—is just content. It is code running on a screen. Deutsch is obsessed with the mechanics of the simulation. I am interested in the Player.
If you turn off the electricity (Consciousness/Awareness), what happens? Sure, the hard drive of the universe might still spin. The rocks might still tumble in the dark. But the experience of the universe—the colours, the sounds, the meaning, the narrative—vanishes.
Without the light of awareness, the universe is just a silent, dark theatre waiting for an audience. It requires us to turn the math into a movie.
The Compulsion to "Do" vs. The Freedom to "Be"
This is where the glitch in our modern mindset lies. We are so obsessed with being "Universal Constructors"—with our careers, our legacies, our Food Safety Policies (don't ask), and our technological progress—that we mistake the game for the player.
Deutsch’s worldview, while empowering, is also anxious. It suggests we must keep pedalling the bicycle of knowledge, or we fall over. It is a philosophy of infinite striving.
But if you shift the view—if you realise you are the Awareness witnessing the game, not the character running the script—the stakes change. You can still play. You can still do the science and pop the champagne. But you do it because it’s fun (Lila), not because your existence depends on it.
Circumstances—whether we are celebrating aliens or nearly drowning in the British weather—are of secondary importance. The code changes. The Awareness that sees it? That is the only thing that isn't arbitrary.
So, let’s by all means figure out how the universe works. Let's be the glitch that defies physics. But let’s not forget that we are also the ones watching the show.
Wild hair whips the silent dark,
We toast to the life.

I have assumed that all brains (or all awareness, if some brains do not give rise to awareness) employ abstraction. Not the main thrust of your argument, and not an attack, but it interests me. I’m thinking of non-human animals, and their experience of the world.
Thank you for your comment. I really appreciate your participation.
That is a really keen observation and I think you are absolutely right—non-human animals certainly employ abstraction. I’m sure Deutsch would agree. I can think of several examples: a dog recognising that a Chihuahua and a Great Dane are both 'dogs', conceptualising friend vs. prey, or “knowing” when mealtime is.
However, Deutsch argues that while animals abstract for perception and survival (categorising the immediate world), humans possess explanatory abstraction. An animal might react to the sound of the cork because of a learned association, but only a human can abstract the reason for the cork popping (a rumour about aliens light years away) and celebrate it. It’s the difference between mapping the territory you can see, and mapping a territory that doesn't physically exist yet.
But your comment about “brains not giving rise to awareness” actually sparks a deeper question for me regarding definitions. We often equate 'awareness' with high-level cognitive self-reflection, but I would argue for a broader definition. I call it 'Aliveness' (or sometimes 'Consciousness', provided we define our terms clearly).
I would propose that even an organism with no brain at all—like an amoeba—possesses a form of awareness. It interacts with the environment. It recognises 'food' vs 'danger,' and it has agency. It may not have the apparatus to be self-aware and articulate it, but it distinguishes between 'me' and 'not me.' In that sense, it is a vessel through which the universe experiences itself. The resolution of that experience might be lower than ours, but the fundamental quality of being an observer is there. An amoeba moving away from a toxin isn't just a machine; there is a "subjective interiority" there. There is a "something it is like" to be an amoeba, however dim.
If we ignore that, we risk falling into the "Cartesian Trap" where animals are just clockwork mechanisms until they prove they can do algebra.
Here is my proposed distinction: Aliveness is not a spectrum, but Intelligence is. We can classify intelligence (what we do with information) on a ladder: The Amoeba → The Dog → The Human. Deutsch argues that because we are 'Universal Explainers,' we have reached the top of the ladder. But a few pages later, he and Harris discuss what would happen if we built an AI that is to us what we are to the amoeba.
If an AI learns so fast that it becomes 10,000 years more advanced than us in a week, will we be able to reason with it? Or will we find ourselves in the position of the chicken trying to converse with a human? The chicken has 'aliveness' and is quite good at living the chicken life, but the concept of 'how the human is engineering a coop' is simply outside its bandwidth. Deutsch hopes to augment our cognitive powers with tech to keep up, but I remain sceptical...