Following on my previous — and alarmist — post about AI, I think I should add some further remarks about what may or may not be possible.
Back in 2014 I wrote a post, as part of a linked series on free will and determinism, about the idea of what the English scientist Stephen Wolfram has called “computational irreducibility” (also sometimes called “algorithmic incompressibility”). Simply put, it draws a distinction between two subsets of deterministic systems: those whose behavior is describable by simplifying formulas that, taking initial conditions as inputs, can predict the system’s future state, and those for which no such reduction is possible.
An example of the former is the movement of two bodies under mutual gravitational attraction, such as a planet and its moon, or the earth and a ballistic projectile. Given the masses of the two, and their initial positions and velocities, it is possible to calculate their positions for any future time.
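To make that concrete: the projectile case can be jumped to directly, with no step-by-step simulation. A minimal Python sketch (the function name and the numbers are just illustrative):

```python
# Reducible system: a ballistic projectile under constant gravity.
# The closed-form kinematic formula jumps straight to any future time t;
# no intermediate moments need to be computed.

G = 9.81  # gravitational acceleration, m/s^2

def projectile_height(h0: float, v0: float, t: float) -> float:
    """Height at time t, given initial height h0 and upward velocity v0."""
    return h0 + v0 * t - 0.5 * G * t**2

# Predict the height a full minute ahead in a single evaluation:
print(projectile_height(h0=0.0, v0=300.0, t=60.0))
```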
A good example of the latter is what Wolfram examined at length in his book A New Kind of Science (which I labored through when it came out in 2002): the behavior of “cellular automata”, simple systems whose behavior is defined by a small set of rules, but for which, given the system’s state at time t, the only way of determining its precise configuration at time t+n is actually to iterate over every step between t and t+n. Chaotic systems, such as weather and turbulent flow, are of this kind. So is biological evolution.
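Rule 30, the one-dimensional cellular automaton Wolfram features prominently in the book, is a handy illustration of the irreducible case: in practice the only known way to learn its configuration at step t+n is to apply the update rule n times. A minimal sketch in Python (the wrap-around edges and grid size are my own choices):

```python
# Irreducible system: Wolfram's Rule 30 cellular automaton.
# Each cell's next value depends on itself and its two neighbors;
# the rule number 30 encodes the output bit for each of the 8 neighborhoods.

RULE = 30

def step(cells: list) -> list:
    """Apply one Rule 30 update to a row of 0/1 cells (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# To know the state at step 100 there is no shortcut:
# we must actually run all 100 intermediate steps.
row = [0] * 40 + [1] + [0] * 40   # a single black cell in the middle
for _ in range(100):
    row = step(row)
print("".join("#" if c else "." for c in row))
```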
I’m mentioning this because I’ve just watched a conversation between Wolfram and the physicist Brian Greene about all of this, and how it relates to AI. A key point in this conversation is the idea that the algorithmic simplifications of science simply don’t apply to many, or even most, of the systems we are interested in. There is no quicker way, no shortcut, for predicting the future state of such systems than simply letting them run and seeing what they do. This limitation, Wolfram says, is a limitation in principle that even a superintelligence must confront, one that even a vastly superhuman insight into the workings of computationally irreducible systems will be unable to overcome.
It is far beyond my powers (or anybody’s, I imagine) to say just how limiting this constraint — that some kinds of predictions must always be achieved by brute computational force, rather than reductive simplifications — will be on AI’s ascent toward apotheosis. In practical terms, when considering the disruptive and transformative effects of AI on human life, it may mean very little. Your guess is as good as mine.
The interview is fairly short, at less than 45 minutes. You can watch it here.
One Comment
The effect of AI on the human world is addressed mainly in the last few minutes. Here’s a transcript, beginning where Wolfram blithely posits a whole society of AIs:
‘Another thing that’s of considerable interest I think is, y’know, when you think about a whole, sort of, society of of AIs, how will it behave, w-what will happen in, y’know, AIs…’
Host: ‘Interacting with AIs…’
Wolfram: ‘Yeah, right, I mean they’re a pretty good proxy for humans in many ways. What will happen when you put a billion of them together, and, y’know, how will that evolve, like human societies have evolved, y’know, what will be its, sort of, what will be the emergent ethics of those systems? All those kinds of things.’
The host asks whether human input will always be essential to scientific progress, or whether it will be usurped by AI systems.
Wolfram: ‘Depends on who the consumers are. If the consumers are human it matters what the humans think we should do. I mean if, if you’re saying what’s out there in the computational universe for the AIs to find then, they’ll just find it and it’s not… so in other words this, this question is like people say “well what can be automated?”. One of the things you can’t automate is to know what you should be trying to do. So that’s a, y’know, there’s an infinite set of things you could be trying to do. We get to choose particular things we, we want to do and this also relates to, kind of “what will”, y’know, “what will the jobs for humans be?” and the typical thing that you see in the history of automation is that, y’know, the whole, a thing that became quite mechanical got automated. That opened up a bunch of new possibilities, and a lot of those new jobs had to do with people making choices about what should happen and that was the job – was to make those choices. I think the same thing, we’re going to see the same thing.’