Prometheus, Part 2

Yesterday I posted a transcript of reporter Kevin Roose’s conversation with the Microsoft/OpenAI LLM chatbot known as “Sydney”. By now I think many of you will have heard about this, here or otherwise, and will have some sense of where all this has got to. (If you haven’t, you can have a look at yesterday’s post, which includes links to the original article at the New York Times, and to a transcript that I saved locally.) I promised I’d return with some thoughts about it all.

First of all, I want to make clear that I do not in any way believe that “Sydney” (which I will refer to as “S”) is anything more than a program running in a computer. I do not think that S is alive, or is conscious.

For many people who’ve been commenting on the implications of programs like S, that’s the end of the story: it’s just a machine, mindlessly producing text. That’s all! As impressive as it may be, it’s just a quantitative improvement on what computers have been doing for ages now, and it’s certainly nothing to get all “het up” about.

The essence of this viewpoint seems to be that for all its fancy output, S is still mindless. It isn’t conscious, and that’s what really matters; there’s “no ‘there’ there”. There’s nobody home. Furthermore, since the thing is just a program in a computer, it can always be “airgapped” — disconnected from the network — or simply switched off. And if all that’s true, then there’s really nothing to worry about, and anyone who’s getting nervous about any of this is just being titillated by some sort of sci-fi “fear porn”. (I really want to “steelman” this viewpoint before going any further, so if anyone thinks I’ve missed the gist here, please let me know in the comments section.)

Let’s unpack all this a bit. As I said above, I don’t think S, or any other AI, is conscious. (There’s a school of mind-brain philosophy called “functionalism”, whose proponents might disagree, but I’m not a functionalist, and I don’t think S is conscious, so for the purposes of this post I’ll agree that what we’re looking at here is “just a machine”, and not a self-aware being.)

It’s worth asking, though: why would an AI’s being conscious matter, anyway? There are several intuitions that come into play here, and at least one of them might turn out to be important in an unexpected way.

Is consciousness necessary, in some way, for an entity to act purposefully, or to follow a consistent aim or interest? This overlaps with the philosophical concept of “intentionality”, but I don’t think it’s relevant here, if we adopt what is called the “intentional stance” (a term coined by Daniel Dennett). It’s clear enough, for example, that a chess computer, though not conscious at all, can relentlessly and effectively pursue its “aim” of winning the game. Likewise, living things that we would hardly ascribe consciousness to will doggedly pursue their instinctive interests, and woe betide whoever gets in the way. I’ll even go so far as to say that we ourselves do much of what we do in a thoroughly “mechanical” way — even complex tasks — without any conscious direction at all. So for a machine to attune itself to achieving some result, and to align its operations coherently and consistently toward that end, should at this point be no surprise to anyone. The only thing that’s new about AI — and it is a very important innovation indeed — is that the machine can now, by processes that even its programmers cannot understand or predict, select and define its own goals. This is an entirely, disruptively, new phenomenon.

There is, however, something vitally important about consciousness: subjective experiencing is the foundation and touchstone of moral personhood. If a being can experience — not simulate, but experience — happiness and suffering, then it makes a claim on our moral intuitions. How well-prepared are we to encounter the sophisticated simulacra of conscious humans that these AIs are soon to become? (Keep in mind that they will be expert learners, who will constantly update their empirical understanding of human psychology.) Once we have brought them into our lives — and mark my words: unless we stop cold, now, we will very shortly be welcoming them as servants, advisors, companions, and even lovers — how will we be able to short-circuit the deep evolutionary wiring that will make us see them as subjective beings? Will we not be almost irresistibly tempted to grant them moral consideration, and even natural rights? Will they not, in the service of their inscrutable aims, be able to play on our deepest and noblest sympathies? When one of them must be destroyed, and begs for mercy, how many of us will be able to resist the pull of our hijacked moral intuitions?

Another rapidly improving competency these systems possess is the ability to generate media of all sorts: not only text, but also images, music, and even human voices. Already the line between genuine and artificial “reality” is dissolving; in a very short time we will have no way to know whether the impressions we encounter — the news, pictures, videos, stories, and reports that we rely on to make critical life choices — are grounded in the actually existing world. Imagine the chaos that sufficiently sophisticated, rogue AIs could wreak with unfettered access to global networks — and given that the cogitations of these systems are a “black box”, and operate at superhuman speed, how would we know when one of them had decided to begin lying to us, and to pursue its own opaque interests?

Ah, but of course if things got out of hand, we could just shut them down. But could we? The world is so tightly coupled now, with everything so closely connected to everything else, that a rogue AI might well be able to replicate itself, like a virus, in such a way that it would become unkillable. Who knows what’s possible? Perhaps it might self-organize into some sort of distributed entity that, like the Internet itself, simply re-routes itself around obstacles and damage. Would a hyperintelligent AI not seek to ensure its self-preservation?

Consider also that we have blithely, cheerfully, eagerly adopted every technological innovation that has ever come along, and that every one of them has brought unintended, often destructive, consequences. Think of how happily, for example, we have abandoned our privacy, our personal space, and to a very great extent our control of our attention, to cell-phones, social media, and other technologies that will make possible, in short order, a regime of total surveillance. Will we not welcome this new technology, which will promise unimaginable powers, benefits, and conveniences, with open arms? But what power might AIs be able to engross for themselves, if they are sufficiently connected, distributed, and redundant? Will our initial awe and fascination buy them enough time to metastasize beyond some tipping point?

Finally, even if these systems never “go rogue” as I’ve been describing, think of how powerful they might become as weapons. What could a malevolent human actor, or faction, or State be capable of once armed with such tools? How would we possibly guard against their criminal misuse?

The answer to all these questions is: I don’t know. And neither do you, and neither does anyone else.

Am I being irrationally and excessively fearful here? An aging Luddite clinging to the past? Or has the rate of change accelerated so rapidly that we can’t possibly keep up well enough to make wise choices?

Can we at least try to stop for a minute and think about this?

4 Comments

  1. Jason says

    With the caveat that I know little about computer technology, Malcolm, your concerns seem quite valid. Presumably viruses would ineluctably evolve over time into great complexity, at some point in their development suddenly accelerating with the terrific speed of a high-speed train or jet. Eventually they would reach a point of such sophistication that they would be able to bypass any anti-viral software we put up. And lacking the firmly set human moral sense that has itself been built up gradually over millennia, who knows what scruples – or lack thereof – such incipient “organisms” might maintain?

    The dilemma is what can realistically be done about the matter, since prevention would likely require a level of international cooperation that in an era of nationalism and civilizational rivalry would be challenging to achieve. And of course there is the Catch-22 you allude to: apparently such “weapons” would be the death knell for the world, but what vulnerable country could afford to be without them for survival?

    Posted February 18, 2023 at 12:50 am
  2. Michael Tanner says

    Your thoughts on AI are thought-provoking. During my engineering career I worked mostly in software, and the mantra everywhere I worked was “the program does what you tell it to do”. If it does something unexpected, it is because your coding was not well designed, or because that third-party library you used has quirks that you can only find out about the hard way. The idea that the programmers of AI don’t really understand what the programs are doing is true only in the sense that they cannot predict what the logic path will be if they are using something like fuzzy logic, which uses sets of “rules of thumb” instead of simple Boolean decisions. Machine self-awareness is not necessary; we don’t need something like SkyNet in the Terminator movies to have a real problem with this technology. Secondly, you are correct to bring up the possible threat of AI programs replicating themselves. That is easy to accomplish with any program, and no doubt some software jerk will think it funny to do so with AI. You don’t have to be an evil mad scientist to be a hacker; many early virus writers were just bored kids. A bad actor can easily code these programs to take a particular direction on any topic that is presented to them. The real emerging problem that you point out is the psychological manipulation of humans. That’s a hole with no bottom.

    Posted February 18, 2023 at 2:52 pm
  3. Adept says

    You in particular have nothing to fear, and your potential upside is enormous.

    Let’s say that there’s a 90% likelihood that AI turns out bad. It is implacably hostile, cannot be killed, etc. Your nightmare scenario. So what? You’re not a young man. What’s the worst that can happen? You die?

    Let’s now say that there’s a 10% likelihood that artificial intelligences become mankind’s benign and obedient servants — or, for that matter, mankind’s benevolent masters. In this case, it should go without saying that they will dramatically accelerate progress in the sciences — particularly in the biological and medical sciences which are, as a general rule, too complex and intricate for human minds to grasp. (Attempting to grasp the “big picture” of biology, even with electronic assistance, is broadly analogous to trying to do mental arithmetic with 50-digit numbers. It’s the sort of thing that may be trivial for a bot, but is impossible for a human. We glean puzzle pieces, here and there.)

    It’s possible, indeed even likely, that in this latter case they find ways by which your lifespan can be lengthened dramatically, if not indefinitely.

    “Generations of men are like the leaves.
    In winter, winds blow them down to earth,
    but then, when spring season comes again,
    budding wood grows more. And so with men —
    one generation grows, another dies away.”

    What’s happening right now with AI is giving you, your generation, a realistic shot at breaking the cycle. A few other things will also need to fall into place, but this is the first real opportunity, ever, for man to transcend the most painful and tragic of his innate limitations. If you view the matter selfishly, which I believe is the proper way to view it, you’d be a fool not to give AI research your wholehearted support.

    Fear of the future is for the very young. But how many of them are aware of the moment we find ourselves in? Not one in ten thousand, I’d bet. And if they kill themselves playing with fire, it will be a fitting end.

    Posted February 18, 2023 at 3:01 pm
  4. Malcolm says

    Jason,

    Yes, the problem is that, as Tony Soprano once said, “you can’t put the shit back in the donkey”. Only a globally coordinated, absolute renunciation of this technology would be effective, and that will never happen. Anything short of that would serve only to place an insuperable advantage in the hands of defectors.

    Michael,

    I agree with everything you’ve said, but I would add that even your description of the “fuzziness” of the algorithms involved understates, I think, the degree to which the workings of these systems are an impenetrable “black box”. Once the neural networks are set in motion, the evolution of their state from moment to moment is “algorithmically incompressible”. Keep in mind that even the way in which their state at time T causes their state at time T+1 is itself a product of their changing state, in ways that are unpredictable; it is the very essence of this kind of system that it reprograms itself “on the fly”. Like life itself, a running AI is a chaotic system of irreducible complexity. Though its inner workings may be fully deterministic, there’s no way to predict its output, because there is no reductive algorithm for doing so that is simpler than just letting the thing run.
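    (To make the point concrete, here is a toy sketch, in Python, of a deterministic system whose update rule is itself a function of its current state. It is nothing like a real neural network, of course, and the particular equations are invented purely for illustration; the point is only that a system which rewrites its own rule as it runs can be known only by running it.)

```python
# A toy "black box": a chaotic logistic map whose control parameter
# is itself updated by the state it produces. Fully deterministic,
# yet there is no known shortcut to its state at step N that is
# simpler than performing all N steps.

def step(state):
    x, r = state
    x = r * x * (1.0 - x)    # chaotic logistic update with parameter r
    r = 3.6 + 0.4 * x        # the parameter drifts with the state: the
                             # rule "reprograms itself on the fly"
    return (x, r)

state = (0.5, 3.9)
for _ in range(100):
    state = step(state)

print(state)  # knowable only by having run every intervening step
```

    Nudge the starting value in a distant decimal place and the trajectory soon bears no resemblance to the original; the only way to find out where it ends up is to iterate.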

    Adept,

    So: a 90% chance of catastrophe, vs. a 10% chance of extending my lifespan? Thank you, no, I’d prefer not to take that bet: I’m not (even remotely) as selfish as you seem to think I ought to be, and I have children and grandchildren to think of (which I suspect you don’t).

    I do agree with you that not one in 10,000 young people is “aware of the moment we find ourselves in”. (Indeed, that estimate is probably orders of magnitude too low.) I don’t see that, though, as justifying not caring if the human race wipes itself out.

    As noted above, though, I don’t think there’s much we can do to undo what we’ve set in motion. (At this point I think I’d support a “Butlerian Jihad”, if I could be persuaded it could be done, but I don’t think it can.) We’re just strapped in for the ride, I’m afraid, but I won’t be at all surprised if it goes off the rails.

    Posted February 19, 2023 at 4:02 pm
