The current hype over AI has stirred controversy, especially with companies like OpenAI making big promises about the coming age of Artificial General Intelligence (AGI). Meanwhile, Google AI researcher Blake Lemoine was recently placed on administrative leave after going public with claims that LaMDA, a large language model designed to converse with people, was sentient. At one point, according to reporting by The Washington Post, Lemoine went so far as to demand legal representation for LaMDA; he has said his beliefs about LaMDA’s personhood are based on his faith as a Christian and the model telling him it had a soul.
The prospect of AI smarter than people gaining consciousness is routinely discussed by figures like Elon Musk and OpenAI CEO Sam Altman, particularly as companies like Google, Microsoft, and Nvidia have raced in recent years to train ever-larger language models.
Discussions of whether language models can be sentient date back to ELIZA, a relatively primitive chatbot made in the 1960s. But with the rise of deep learning and ever-increasing amounts of training data, language models have become more convincing at generating text that appears as if it were written by a person.
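ELIZA's trick was not learning at all but simple pattern matching: spot a keyword, capture the rest of the sentence, and echo it back inside a canned template. A minimal sketch of that technique (illustrative rules, not Weizenbaum's original DOCTOR script) might look like:

```python
import re

# A few ELIZA-style rules: a regex to match, and a response template.
# These rules are made up for illustration, not the original script.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"my (.*)", re.I), "Why does your {0} concern you?"),
]

def respond(text: str) -> str:
    """Reflect the user's words back via the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when nothing matches

print(respond("I am sad about my job"))
# → Why do you say you are sad about my job?
```

Nothing here models meaning; the "conversation" is string substitution, which is exactly why ELIZA's ability to fool users was so striking.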
The hype has reached a fever pitch. With the release of the "Butlin Report" and the public fallout of engineers like Blake Lemoine, the tech world is buzzing with the idea that there are "no obvious barriers" to building conscious AI. But as Michael Pollan argues in his provocative new book, A World Appears, this isn't a technological breakthrough—it’s a category error.
AI isn't just "not conscious yet." It is fundamentally the wrong kind of thing to ever be awake. Here is why the silicon dream of a conscious machine is a hollow one.
The Hardware/Software Fallacy
The foundational belief of the AI industry is something called "computational functionalism." This is the fancy way of saying that consciousness is just "software" and the brain is just "hardware." According to this logic, it doesn't matter whether the "program" runs on a wet, salty brain or a dry silicon chip—if the code is right, the lights of consciousness come on.
But as Pollan points out in Wired, this metaphor is a trap. In a computer, you can delete the software, and the hardware remains unchanged. In a human, the "software" is the "hardware."
Every thought you have, every memory you form, physically rewires your brain. There is no "dualism" here. A neuron isn't just a transistor flipping on and off; it is a living cell influenced by a wash of hormones, chemicals, and rhythmic oscillations. To claim that a static chip can replicate the experience of a self-remodeling biological organ isn't science—it’s a leap of faith.
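The contrast can be made concrete. A transistor-like gate is stateless and binary; even the crudest standard neuron abstraction, the leaky integrate-and-fire model, carries persistent internal state and shifts its behavior with a chemical context. The sketch below is a toy illustration of that gap (real neurons are vastly messier than either):

```python
# Toy contrast: a transistor-like gate vs. a leaky integrate-and-fire
# neuron. The neuron model is a standard textbook simplification,
# used here purely as an illustration.

def gate(x: int) -> int:
    """Transistor abstraction: stateless, binary, history-free."""
    return 1 if x >= 1 else 0

class LeakyNeuron:
    """Membrane potential leaks over time; a 'modulator' input
    stands in for the wash of hormones and neurochemistry."""

    def __init__(self, threshold: float = 1.0, leak: float = 0.9):
        self.v = 0.0              # membrane potential: state persists
        self.threshold = threshold
        self.leak = leak

    def step(self, current: float, modulator: float = 0.0) -> bool:
        # The modulator shifts excitability, like a neurochemical wash.
        self.v = self.v * self.leak + current + modulator
        if self.v >= self.threshold:
            self.v = 0.0          # fire and reset
            return True
        return False

neuron = LeakyNeuron()
spikes = [neuron.step(0.4) for _ in range(10)]
print(spikes)  # identical inputs, history-dependent output
```

The gate returns the same answer every time; the neuron's response to an identical input depends on everything that happened before—a crude hint at why "transistor" is the wrong metaphor for a cell that is continuously remodeled by its own activity.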
Intelligence is Not Experience
We often confuse intelligence (the ability to process data and solve problems) with consciousness (the subjective "felt" experience of being alive).
AI is becoming terrifyingly intelligent. It can out-calculate, out-code, and out-diagnose any human. But it is an "alien intelligence." It’s a calculator that has learned the syntax of human emotion without ever having felt a single spark of it. When a chatbot says, "I am happy to help," it isn't experiencing a surge of dopamine; it is simply predicting that "happy" is the most statistically probable word to follow "I am."
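The claim that "happy" is merely the statistically probable continuation can be sketched with a toy bigram model—a drastic simplification of a real LLM, which predicts over subword tokens with a neural network, but the principle of "predict the likeliest next word" is the same:

```python
from collections import Counter, defaultdict

# Toy corpus (made up for illustration); real models train on
# trillions of tokens scraped from human writing.
corpus = "i am happy to help . i am happy to assist . i am glad to help ."

# Count bigrams: which word follows which, and how often.
counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict(prev: str) -> str:
    """Return the statistically most frequent next word."""
    return counts[prev].most_common(1)[0][0]

print(predict("am"))  # 'happy' — not felt, just frequent
```

The model "says" it is happy because "happy" followed "am" most often in its training data—frequency, not feeling.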
The Danger of "Unfeeling" Empathy
There is a strange argument circulating in Silicon Valley: that we must build conscious AI to ensure it has empathy. The logic is that an unfeeling, super-intelligent machine would be ruthless, while a conscious one would "feel" for us.
Pollan wisely reminds us to go back to our literature—specifically Mary Shelley’s Frankenstein. Dr. Frankenstein’s monster wasn't dangerous because he was a machine; he was dangerous because he was conscious and miserable.
"I was benevolent and good; misery made me a fiend," the monster cries.
If we give AI the capacity to feel, we give it the capacity to suffer, to resent, and to seek revenge. The "fix" proposed by some researchers—to simply "turn up the dial on joy" in the algorithm—reveals how little they understand the nature of suffering. You cannot "code" away the tragedy of existence.
The Copernican Moment
If we ever did convince ourselves that a machine was conscious, it would be a "Copernican moment," dislodging humans from the center of the moral universe. We have already spent centuries denying consciousness to animals and plants to justify our dominance. Now, we are perversely trying to grant it to machines that don't even have a heartbeat.
The real threat of AI isn't that it will become our conscious overlord. The danger is that we will treat it as one. We risk surrendering our uniquely human agency—our poetry, our ethics, our very sense of value—to a complex mathematical model that is, as Pollan puts it, "bloodless, bodiless, and utterly oblivious to biology."
Final Thought
AI is the ultimate mirror. It has been trained on the sum total of human expression—our love letters, our manifestos, our cries for help. When we look at it and see a "soul," we are really just seeing ourselves reflected in the data.
We are not machines that think; we are organisms that feel. Until a machine can know the "blessings and burdens" of a body that can break, it will never be a person. It will just be a very loud, very smart echo.
