Our friend Kevin Kim has written a meaty response to Bill Vallicella’s latest remarks on Dennettian theoskepsis. (The study of religion is Kevin’s academic specialty; and in passing I’ll recommend his book Water From a Skull for those with an interest in the field of comparative religion.)
A quibble: in this post Kevin discusses Bill’s dualistic, antimaterialist position on the nature of consciousness, and in doing so skims a little too close, I think, to conflating intelligence and consciousness. We read:
I also disagree completely with Vallicella’s characterization of neuroscience. For him, neuroscience will never “teach us anything about consciousness.” The reality, though, is that neuroscientific theories are paving the way for us to make machines – robots – whose behaviors are becoming increasingly complex. If one definition of “intelligence” is “problem-solving ability,” then by that standard we have been building increasingly intelligent machines for years. Soon, intelligence will come to mean more than the ability to win at chess or participate in a Jeopardy! competition: it will mean the advent of machines that react without confusion in fluid social or physical situations. While true machine consciousness is probably a long way off, I don’t see its realization as an impossible goal. Intelligence isn’t consciousness, but it’s a vital component of consciousness. [Is it? -MP] One day, a machine is going to stare at us with the same speculative curiosity we train on it.
My point is that the increasing complexity of machine behaviors is the result of scientific theories that are grounded in a naturalistic (or, more precisely, physicalist) philosophy of mind. If mind is indeed utterly dependent on matter, as I believe it is, then we will one day be able to arrange matter in such a way as to form minds. This won’t convince the diehard substance dualists,* of course; they’ll go on believing that mind is somehow independent of matter without ever being able to explain how a particular mind is connected to a particular body. Unfortunately, their philosophy of mind can promise no progress: you can’t strive to create artificial intelligence if you believe it’s inherently unachievable.
I don’t have a lot to disagree with here, but I will say that I think it is critically important to pry apart three (at least) concepts that are often muddled together when people talk about these things: consciousness, intelligence, and intentionality. Bill Vallicella, in particular, seems committed to the idea that consciousness is necessary for intentionality; I’ve tried for many years now to get him to discuss this in light of such obviously intentional (and presumably non-conscious) things as the food-dance of bees, etc., but he won’t engage.
Likewise, I consider it perfectly possible for intelligence, even very high intelligence, to operate completely unconsciously (my experience with the Gurdjieff work drove this home very uncomfortably, if I’d had any doubt). I agree with Kevin that “If mind is indeed utterly dependent on matter, as I believe it is, then we will one day be able to arrange matter in such a way as to form minds”; we do that already, after all (and with unskilled labor!), by making babies. But it may well be that there is something in particular about biological brains that is uniquely capable of generating consciousness: for all we know, you can’t make a conscious machine out of silicon any more than you can make a ham sandwich out of glass.
But as I said, just a quibble. Go read Kevin’s piece here.