Know Your Limitations

The computer scientist David Gelernter has just posted an essay about the aggressiveness and overreach of contemporary scientism and transhumanism. In particular, he focuses on what he perceives to be an assault on the essence of our humanity — our subjectivity, which so far remains an impenetrable mystery.

We read:

Today science and the “philosophy of mind”—its thoughtful assistant, which is sometimes smarter than the boss—are threatening Western culture with the exact opposite of humanism. Call it roboticism. Man is the measure of all things, Protagoras said. Today we add, and computers are the measure of all men.

Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo. That is their right. But when scientists use this locker-room braggadocio to belittle the human viewpoint, to belittle human life and values and virtues and civilization and moral, spiritual, and religious discoveries, which is all we human beings possess or ever will, they have outrun their own empiricism. They are abusing their cultural standing. Science has become an international bully.

Nowhere is its bullying more outrageous than in its assault on the phenomenon known as subjectivity.

Your subjective, conscious experience is just as real as the tree outside your window or the photons striking your retina—even though you alone feel it. Many philosophers and scientists today tend to dismiss the subjective and focus wholly on an objective, third-person reality—a reality that would be just the same if men had no minds. They treat subjective reality as a footnote, or they ignore it, or they announce that, actually, it doesn’t even exist.

Dr. Gelernter seems to have the same view of all this that I do, namely that a) our subjectivity is ineliminably, ontologically real; b) the conscious mind is, somehow, the product of the substance and activity of our brains; and c) science and philosophy have as yet, within existing paradigms, absolutely no idea how, or in virtue of what quality or property, our brains give rise to our consciousness.

In this context he talks about the harrying of philosopher Thomas Nagel for his recent book Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False:

The modern “mind fields” encompass artificial intelligence, cognitive psychology, and philosophy of mind. Researchers in these fields are profoundly split, and the chaos was on display in the ugliness occasioned by the publication of Thomas Nagel’s Mind & Cosmos in 2012. Nagel is an eminent philosopher and professor at NYU. In Mind & Cosmos, he shows with terse, meticulous thoroughness why mainstream thought on the workings of the mind is intellectually bankrupt. He explains why Darwinian evolution is insufficient to explain the emergence of consciousness—the capacity to feel or experience the world. He then offers his own ideas on consciousness, which are speculative, incomplete, tentative, and provocative—in the tradition of science and philosophy.

Nagel was immediately set on and (symbolically) beaten to death by all the leading punks, bullies, and hangers-on of the philosophical underworld. Attacking Darwin is the sin against the Holy Ghost that pious scientists are taught never to forgive. Even worse, Nagel is an atheist unwilling to express sufficient hatred of religion to satisfy other atheists. There is nothing religious about Nagel’s speculations; he believes that science has not come far enough to explain consciousness and that it must press on. He believes that Darwin is not sufficient.

The intelligentsia was so furious that it formed a lynch mob.

Dr. Gelernter next takes up transhumanism — a central theme of Ray Kurzweil’s Singularity University, where I have friends and connections (and where I spent a week-long session, in 2012, as the resident pessimist):

The voice most strongly associated with what I’ve termed roboticism is that of Ray Kurzweil, a leading technologist and inventor. The Kurzweil Cult teaches that, given the strong and ever-increasing pace of technological progress and change, a fateful crossover point is approaching. He calls this point the “singularity.” After the year 2045 (mark your calendars!), machine intelligence will dominate human intelligence to the extent that men will no longer understand machines any more than potato chips understand mathematical topology. Men will already have begun an orgy of machinification—implanting chips in their bodies and brains, and fine-tuning their own and their children’s genetic material. Kurzweil believes in “transhumanism,” the merging of men and machines. He believes human immortality is just around the corner. He works for Google.

Whether he knows it or not, Kurzweil believes in and longs for the death of mankind. Because if things work out as he predicts, there will still be life on Earth, but no human life. To predict that a man who lives forever and is built mainly of semiconductors is still a man is like predicting that a man with stainless steel skin, a small nuclear reactor for a stomach, and an IQ of 10,000 would still be a man. In fact we have no idea what he would be.

Each change in him might be defended as an improvement, but man as we know him is the top growth on a tall tree in a large forest: His kinship with his parents and ancestors and mankind at large, the experience of seeing his own reflection in human history and his fellow man—those things are the crucial part of who he is. If you make him grossly different, he is lost, with no reflection anywhere he looks. If you make lots of people grossly different, they are all lost together—cut adrift from their forebears, from human history and human experience. Of course we do know that whatever these creatures are, untransformed men will be unable to keep up with them. Their superhuman intelligence and strength will extinguish mankind as we know it, or reduce men to slaves or dogs. To wish for such a development is to play dice with the universe.

This was my own worry at SU. I did not doubt that the radical developments they forecast are coming — and I agree that many of them hold great promise — but I was deeply concerned by the blithe and all-encompassing optimism about them that everyone at SU seemed to share. In discussion after discussion there I was the skunk at the garden party, the turd in the punchbowl, the scowling reactionary raining on every parade. (The ambient optimism at SU is highly infectious and energizing, and I certainly absorbed some of that contagious enthusiasm myself — but now, almost two years later, what lingers more in my mind is the worry.)

On to subjectivity itself, and how by its intrinsically private, unshareable nature it irritates the modern, scientistic mind:

Many wish to banish subjectivity altogether. “The history of philosophy of mind over the past one hundred years,” the eminent philosopher John Searle has written, “has been in large part an attempt to get rid of the mental”—i.e., the subjective—“by showing that no mental phenomena exist over and above physical phenomena.”

Why bother? Because to present-day philosophers, Searle writes, “the subjectivist ontology of the mental seems intolerable.” That is, your states of mind (your desire for adventure, your fear of icebergs, the ship you imagine, the girl you recall) exist only subjectively, within your mind, and they can be examined and evaluated by you alone. They do not exist objectively. They are strictly internal to your own mind. And yet they do exist. This is intolerable! How in this modern, scientific world can we be forced to accept the existence of things that can’t be weighed or measured, tracked or photographed—that are strictly private, that can be observed by exactly one person each? Ridiculous! Or at least, damned annoying.

And yet your mind is, was, and will always be a room with a view. Your mental states exist inside this room you can never leave and no one else can ever enter. The world you perceive through the window of mind (where you can never go—where no one can ever go) is the objective world. Both worlds, inside and outside, are real.

Dr. Gelernter then addresses functionalism and computationalism, neither of which I have ever found compelling. He also raises the “zombie argument”:

By zombie, philosophers mean a creature who looks and behaves just like a human being, but happens to be unconscious. He does everything an ordinary person does: walks and talks, eats and sleeps, argues, shouts, drives his car, lies on the beach. But there’s no one home: He (meaning it) is actually a robot with a computer for a brain. On the outside he looks like any human being: This robot’s behavior and appearance are wonderfully sophisticated.

No evidence makes you doubt that your best friend is human, but suppose you did ask him: Are you human? Are you conscious? The robot could be programmed to answer no. But it’s designed to seem human, so more likely its software makes an answer such as, “Of course I’m human, of course I’m conscious!—talk about stupid questions. Are you conscious? Are you human, and not half-monkey? Jerk.”

So that’s a robot zombie. Now imagine a “human” zombie, an organic zombie, a freak of nature: It behaves just like you, just like the robot zombie; it’s made of flesh and blood, but it’s unconscious. Can you imagine such a creature? Its brain would in fact be just like a computer: a complex control system that makes this creature speak and act exactly like a man. But it feels nothing and is conscious of nothing.

Many philosophers (on both sides of the argument about software minds) can indeed imagine such a creature. Which leads them to the next question: What is consciousness for? What does it accomplish? Put a real human and the organic zombie side by side. Ask them any questions you like. Follow them over the course of a day or a year. Nothing reveals which one is conscious. (They both claim to be.) Both seem like ordinary humans.

So why should we humans be equipped with consciousness? Darwinian theory explains that nature selects the best creatures on wholly practical grounds, based on survivable design and behavior. If zombies and humans behave the same way all the time, one group would be just as able to survive as the other. So why would nature have taken the trouble to invent an elaborate thing like consciousness, when it could have got off without it just as well?

Such questions have led the Australian philosopher of mind David Chalmers to argue that consciousness doesn’t “follow logically” from the design of the universe as we know it scientifically. Nothing stops us from imagining a universe exactly like ours in every respect except that consciousness does not exist.

I’ve always thought that the zombie argument goes too far, because the fact that we can imagine something doesn’t mean it’s actually possible; it might just mean that we don’t fully understand the relevant physical facts. (It may in fact be impossible for biological “zombies” — creatures physically indistinguishable from conscious humans — to exist. See my own mention of the “Floating Iron Bar” argument in this same context, back in 2007.) But I do believe that human consciousness is not simply “on or off”, and that we can do almost everything we do quite unconsciously, including maintaining intentional states. So the question itself, of what consciousness is for and how it could be the result of natural selection, remains very much a valid one.

I won’t excerpt any more of Dr. Gelernter’s article; you should go and read it yourself, here.

  1. I wonder if Dr. Gelernter and the Unabomber would have anything to discuss these days . . .

    Jeffery Hodges

    * * *

    Posted January 4, 2014 at 5:04 pm
  2. Bill says

    Have you ever noticed that the ones that want human immortality never really understand what it means to be human, nor do they understand the reality of immortality? They apparently think that they can fill thousands of years with a decade of projects.

    Posted January 7, 2014 at 3:04 pm
  3. Bill says

    Oh, yes, thanks for the article. I saved it for future reference.

    Posted January 7, 2014 at 3:08 pm