The Australian philosopher David Chalmers famously asked whether “philosophical zombies” are conceivable—people who behave like you and me yet lack subjective experience. It’s an idea that has drawn many scholars, myself included, to the study of consciousness. The reasoning is that, if such zombies, or sophisticated unfeeling robots, are conceivable, then physical properties alone—of the brain or a brain-like mechanism—cannot explain the experience of consciousness. Instead, some additional mental properties must account for the what-it-is-like feeling of being conscious. Figuring out how these mental properties arise has become known as the “hard problem” of consciousness.
By Joel Frohlich | NAUTILUS
But I have a slight problem with Chalmers’ zombies. Zombies are supposed to be capable of asking any question about the nature of experience. It’s worth wondering, though, how a person or machine devoid of experience could reflect on experience it doesn’t have. In an episode of the “Making Sense” (formerly known as “Waking Up”) podcast with neuroscientist and author Sam Harris, Chalmers addressed this puzzle. “I don’t think it’s particularly hard to at least conceive of a system doing this,” Chalmers told Harris. “I mean, I’m talking to you now, and you’re making a lot of comments about consciousness that seem to strongly suggest that you have it. Still, I can at least entertain the idea that you’re not conscious and that you’re a zombie who’s in fact just making all these noises without having any consciousness on the inside.”
This is not a strictly academic matter—if Google’s DeepMind develops an AI that starts asking, say, why the color red feels like red and not something else, there are only a few possible explanations. Perhaps it heard the question from someone else; an AI might learn to ask questions about consciousness simply by reading papers on the subject. It also could have been programmed to ask that question, like a character in a video game, or it could have burped the question out of random noise. Clearly, asking questions about consciousness does not prove anything per se. But could an AI zombie formulate such questions by itself, without hearing them from another source or belching them out of random outputs? To me, the answer is clearly no. If I’m right, then we should seriously consider that an AI might be conscious if it asks questions about subjective experience unprompted. And since we can’t know whether it’s ethical to unplug such an AI without knowing whether it’s conscious, we’d better start listening for those questions now.