Stephen Wolfram On AI And Computational Irreducibility

Following on my previous — and alarmist — post about AI, I think I should add some further remarks about what may or may not be possible.

Back in 2014 I wrote a post, as part of a linked series on free will and determinism, about the idea of what the English scientist Stephen Wolfram has called “computational irreducibility” (which is also sometimes called “algorithmic incompressibility”). Simply put, it draws a distinction between two subsets of deterministic systems: those whose behavior is describable by simplifying formulas that can be used, taking their initial conditions as inputs, to predict their future state, and those for which no such reduction is possible.

An example of the former is the movement of two bodies under mutual gravitational attraction, such as a planet and its moon, or the earth and a ballistic projectile. Given the masses of the two, and their initial positions and velocities, it is possible to calculate their positions for any future time.
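To make the reducible case concrete, here is a minimal sketch in Python (the numbers and the function name are mine, purely for illustration). A projectile in uniform gravity can be jumped to any future time with a single evaluation of the closed-form kinematic equations; no step-by-step simulation of the intervening trajectory is needed.

```python
G = 9.81  # gravitational acceleration near the earth's surface, m/s^2

def projectile_position(x0, y0, vx0, vy0, t):
    """Position at time t, computed directly from the initial conditions."""
    x = x0 + vx0 * t
    y = y0 + vy0 * t - 0.5 * G * t**2
    return x, y

# Where is the projectile ten seconds from now? One formula evaluation,
# not ten seconds' worth of simulated steps.
print(projectile_position(0.0, 0.0, 30.0, 40.0, 10.0))  # (300.0, -90.5)
```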

A good example of the latter is what Wolfram examined at length in his book A New Kind Of Science (which I labored through when it came out in 2002): the behavior of “cellular automata”, simple systems whose behavior is defined by a small set of rules, but for which, given the system’s state at time t, the only way of determining its precise configuration at time t+n is actually to iterate through every step between t and t+n. Chaotic systems, such as weather and turbulent flow, are of this kind. So is biological evolution.
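For contrast, here is a sketch of one of Wolfram’s own examples, the rule 30 cellular automaton (the fixed row width and zero-padded edges are simplifications of mine). So far as anyone knows there is no formula for row n; the only way to learn the automaton’s configuration at step t+n is to compute every row in between.

```python
RULE = 30  # Wolfram's rule 30, encoded as a byte: bit k gives the new
           # cell value for the three-cell neighborhood whose bits spell k

def step(cells):
    """Apply rule 30 once to a row of 0/1 cells (edges treated as 0)."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

# Start from a single live cell; reaching row 15 requires computing
# rows 1 through 14 first. There is no shortcut.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```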

I’m mentioning this because I’ve just watched a conversation between Wolfram and physicist Brian Greene about all of this, and how it relates to AI. A key point in this conversation is the idea that the algorithmic simplifications of science simply don’t apply to many, or even most, of the systems we are interested in. There is no quicker way, no shortcut, for predicting the future state of such systems than simply letting them run, and seeing what they do. This limitation, Wolfram says, is a limitation in principle that even a superintelligence must confront, one that even a vastly superhuman insight into the workings of computationally irreducible systems will be unable to overcome.

It is far beyond my powers (or anybody’s, I imagine) to say just how limiting this constraint — that some kinds of predictions must always be achieved by brute computational force, rather than reductive simplifications — will be on AI’s ascent toward apotheosis. In practical terms, when considering the disruptive and transformative effects of AI on human life, it may mean very little. Your guess is as good as mine.

The interview is fairly short, at less than 45 minutes. You can watch it here.
