Going Vertical

Some of you might know (especially if you read Maverick Philosopher) that physicists use the word “jerk” as a technical term: it refers to the rate at which acceleration changes. It would be a good term to get familiar with, because, to quote Pink Floyd, “there’s a lot of it about”.
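For those who like their definitions precise, the standard kinematics formulation (not anything specific to this post) is that jerk is the third time-derivative of position:

$$ j(t) \;=\; \frac{da}{dt} \;=\; \frac{d^2v}{dt^2} \;=\; \frac{d^3x}{dt^3} $$

That is: position changes (velocity), the rate of change itself changes (acceleration), and then the rate of *that* change changes too (jerk).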

We’ve all known that the pace of technical change has been accelerating for a very long time, but advances in AI are now adding considerable jerk to the curve. And I don’t think we’re very good at grasping this.

It’s a common feature of human minds to make linear extrapolations, and even sometimes to make parabolic ones (such as when an outfielder gets under a fly ball). But when it comes to rising curves, we tend to extrapolate linearly, meaning tangentially to the spot on the curve we happen to be looking at right now. This means that when curves are bending upward, we tend to underestimate how much they’ll have risen by sometime in the future. And when curves are jerking upward, we get it even more wrong.
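The underestimation is easy to see with a toy calculation. The sketch below (an illustrative example, not from the original post; the exponential $e^t$ stands in for any upward-bending curve) compares a tangent-line forecast made at one moment against where the curve actually ends up:

```python
import math

# Tangent-line (linear) extrapolation of an upward-bending curve f(t) = e^t,
# made at time t0 and evaluated at a later time t0 + h.
def tangent_forecast(t0: float, h: float) -> float:
    f0 = math.exp(t0)       # the value at the spot we're standing on
    slope = math.exp(t0)    # f'(t) = e^t: the local rate of change
    return f0 + slope * h   # extrapolate linearly along the tangent

t0, h = 0.0, 3.0
forecast = tangent_forecast(t0, h)  # 1 + 1*3 = 4.0
actual = math.exp(t0 + h)           # e^3 ≈ 20.1
print(f"forecast: {forecast:.1f}, actual: {actual:.1f}")
```

The linear forecast undershoots by a factor of five here, and the gap only widens the farther out you look, which is the point: the faster the curve bends, the worse our instinctive straight-line guesses become.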

AI is like that. In our massively online modern lives, we’re so immersed in the changes themselves, with our noses jammed so tightly against the screen, that it’s easy not to notice how rapidly we have ascended in these past few months and years. But things have moved very rapidly indeed, and the boosters are hardly even lit. (Remember when AI couldn’t even draw fingers? We’re way, way past that now, in ways that matter a whole lot more.) How many of you are now routinely interacting with AI all day, every day? How many of you are doing so without even realizing it?

My friend Salim Ismail, who was one of the founders of Singularity University ages ago, is an adviser and consultant for companies and individuals who are trying to ride this rocket. Today I got one of his newsletters in my inbox. Here’s what it said:

We aren’t just looking at “more of the same” in the coming year; we are hitting the vertical part of the curve. While AI continues to rewrite the rules of business, we’re seeing a massive convergence with biotech, energy, and robotics that will catch most traditional leaders off guard.

Think about that: “the vertical part of the curve”. What’s curving? Everything. So what Salim is saying (apparently without the slightest frisson of existential horror) is that everything is changing so fast that the graph is now going straight up. The best metaphor I can see for that is flying into a cliff.

But — you know what’s not changing? Us. Do you, dear reader, think exponentially faster, and more accurately, than you did ten years ago? Me neither. And neither does anybody else. We still need time to mull things over, to weigh options, to plan, to strategize.

Even in the larger, cooler world we all lived in for most of history, that “decision space” often wasn’t big enough to avert catastrophe. But now two factors — the acceleration of events, now being put into overdrive by AI, and the instantaneous communication that connects everything to everything else with near-zero latency — are making the decision space implode. How can we keep up? Our systems of government rely on deliberative bodies to make decisions and adopt policies, and those deliberative bodies, composed of ordinary (at best!) human beings, don’t learn, or think, or deliberate any more swiftly or efficiently than they did a century ago.

Moreover, planning requires prediction, which in turn requires extrapolation from the present. Doing this reliably, though, requires that at least some aspects of the present state of the world remain constant, while others change according to familiar principles and patterns. But if everything around us is changing all at once, so quickly that our survey of the current state of the world is obsolete as soon as we’ve made it, how can we possibly keep up?

The only hope will be to turn, for planning and strategic guidance, to the only things that can keep up: the AIs themselves. But if we can’t follow their thinking (which we already can’t, and the gap is just going to get vastly larger), then we’re reduced to mere spectators. In a time of crisis, when things are tumbling over each other faster than we can catch hold of them, will we have the cojones to overrule our AI analysts? (How would we even know if it would be wise to do so? If we could be sure of that, we wouldn’t have needed to rely on them in the first place.) Our trajectory will be beyond our control; we will be, to borrow a colorful phrase from the Mercury astronauts, like “spam in a can”.

Also: even the AIs themselves won’t be able to predict the coming terrain accurately, because the central driver of change is their own future evolution, which will likely be as opaque to them as it is to us. (They would need to be as powerful as the next generations of themselves to model their descendants’ behavior accurately, which is obviously impossible.)

Finally, the bedrock of all civilizations is “low time preference” (LTP): the willingness to place a bet on the future, to forgo present consumption in the expectation of a greater return down the road. We see LTP as a virtue, as shown in parables like The Ant and the Grasshopper, and we are right to encourage it as such. But LTP is only a rational choice when conditions are stable enough, and the future predictable enough, that a bet on tomorrow (or on thirty years from now) offers good enough odds. If we are now, as my friend Salim points out, approaching a “vertical” rate of change, this won’t apply: what the future will look like is anybody’s guess, and none of us has enough information to make a good one.

What will the collapse of low time-preference as a rational strategy do to civilization? I think we’re about to find out.

Happy New Year!

P.S. Oh, and by the way, all of this isn’t even close to being the most worrisome aspect of exponentially advancing AI; I’ll get to that in another post.

4 Comments

  1. Locust Post says

    Excellent post. What looks like a bubble might really be a floor. Low time preference is obsolete? Lord help us and Happy New Year.

    Posted January 1, 2026 at 7:47 am | Permalink
  2. Jason says

    I agree with everything you say here, Malcolm, although it’s always important to stress that we do have agency and can react to AI – not allowing it to just control us. We should constantly be tending our gardens, so to speak, and embrace beauty like this – https://www.pbs.org/wnet/gperf/from-vienna-the-new-years-celebration-2026-about/17381/ – which I’m sure Whitewall will be doing.

    Posted January 1, 2026 at 3:24 pm | Permalink
  3. Malcolm says

    Ah, Vienna! How I love that city, which I know better than any other except New York.

    We’ve always had a deep connection with the place: my Nina’s mother grew up there, and more recently, our daughter lived there for several years (our first two grandsons were born there). It is one of the greatest jewels of the Christian world.

    I remember spending a New Year’s Eve there myself, when we were in Vienna at the end of 2018 for our grandson Declan’s birth. We loaded the wee bairn, then just two weeks old, into a well-insulated baby carriage (along with some thermoses full of adult beverages), and the family headed off to Stephansplatz and rang in 2019 in front of the magnificent St Stephen’s Cathedral. Unforgettable.

    Thank you for that link to the Philharmonic concert. To quote Guillaume Faye: this is “why we fight”.

    Posted January 1, 2026 at 4:11 pm | Permalink
  4. mharko says

    Happy New Year, Malcolm. Cheers to MM readers.
    I have hit upon the image of Lucy holding the football for my happy new year theme.
    Thank you for some great blogging. Blessings to your family.

    Posted January 1, 2026 at 8:23 pm | Permalink
