The human mind also includes subjective experiences: emotions, memories, sensory perceptions (like vision and hearing)—what we might call ‘consciousness’. Humans are also self-aware: they have a kind of meta-consciousness, an awareness of their own awareness, of themselves, and of their own egos. We can say that humans are sentient: they are conscious, intelligent, and self-aware. And so the question of artificial intelligence is whether machines could, one day, be sentient.
Stanley Kubrick was arguably the first director to take seriously the idea that machines could be sentient. A close look at his 1968 classic 2001: A Space Odyssey, and its corresponding novel, clearly indicates that Kubrick intended the HAL 9000—the computer that serves as the brain for a spaceship called ‘Discovery One’—to be intelligent, conscious, and self-aware. Indeed, HAL’s circuitry is intended to mimic the configuration of the human brain.
HAL’s activation heralded the dawning of his ‘consciousness’. HAL’s circuitry includes circuits for cognitive feedback, ego-reinforcement, and auto-intellection. He can reason, understand and use language, appreciate art, make plans and decisions, and beat a human at chess—a feat that, in the 1960s, many thought forever beyond computers. HAL even feels threatened and fears his deactivation. And his conviction that he should be the one to complete Discovery One’s mission clearly indicates that he is self-aware. HAL is sentient.
Our Fictitious Predictions of Artificial Intelligence
HAL isn’t the only machine in sci-fi treated as sentient. There’s K9 from Doctor Who, Bishop from Aliens, Johnny 5 from Short Circuit, Data from Star Trek, the terminators from The Terminator, Andrew from Bicentennial Man, Marvin from The Hitchhiker’s Guide to the Galaxy, Sonny from I, Robot, Ava from Ex Machina, Samantha from Her … The list goes on and on. But sci-fi has not always been so friendly to the idea of mechanical sentience.
The laser-eyed Gort from The Day the Earth Stood Still and Robby the Robot from Forbidden Planet are clearly not meant to be sentient. They’re not even conscious. They are just what Descartes would have called ‘automatons’—machines that mindlessly approximate human behavior.
The droids from Star Wars, like R2-D2 and C-3PO, exist in a kind of gray area. They clearly behave like they’re sentient—they use language, make plans, show fear and concern—but they are treated as if they aren’t. This raises the question: Should we consider machines that behave like us to be sentient like us?
Souls of the Machines
We can’t simply declare that machines can’t be sentient because they don’t have souls. First, the notion that humans have souls is itself very problematic; if ensoulment is required for sentience and humans turn out not to have souls, then humans aren’t sentient either.
Second, if ensoulment is required, then machine sentience just becomes the question of machine ensoulment. Indeed, those who say that machines can’t have souls usually just mean that they can’t have minds—that they can’t have conscious experiences. But whether they can is the very issue at hand. You can’t just declare that they can’t and think you have established anything. You’d need an argument.
What Is Considered Sentient?
One way of presenting such an argument would be to identify what is sufficient and/or necessary for producing the elements of sentience—intelligence, consciousness, and self-awareness—and then show that machines necessarily lack whatever that is. Needless to say, this would be a difficult task. But there have been some suggestions.
One real theory that might help answer our question about machine sentience is psychologist Julian Jaynes’s theory of bicameralism. Jaynes argues that, as recently as 3,000 years ago, the two hemispheres of the human brain, while connected, were not unified. Instead of acting as one unit, the dominant, vocal left hemisphere experienced the commands and decisions of the right as auditory hallucinations.
When faced with a novel situation, a person would not reason out what to do; they would hear what they took to be the voice of a god coming from their right hemisphere and obey unquestioningly. This kept people from making decisions, explaining why they did what they did, or reflecting at all on their own mental states. Indeed, they may not even have been aware of their own egos.
The Evidence and Problems With Julian Jaynes’s Theory
The evidence Jaynes cites includes: a) literature from more than 3,000 years ago (the Iliad is his prime example), which seems to lack any self-aware authorial voice; and b) studies of modern schizophrenics, who also hear voices telling them what to do. On Jaynes’s view, the hemispheres eventually integrated, allowing one to reflect on the processes of the other and ultimately giving rise to self-awareness.
There are two problems with using Jaynes’s theory to answer our question about machine sentience. First, it’s just a theory, and a contested one at that; we don’t know whether it’s right. Second, it’s a theory about how self-awareness arose, and self-awareness is just one aspect of sentience.
But what if machine brains can’t even produce consciousness? Then they couldn’t be aware of their own consciousness, could they? They couldn’t be self-aware. Thus they wouldn’t be sentient. So, even if machines acted self-aware, we’d probably need a separate reason for thinking they are conscious.