The Gods Themselves

Most of you will have heard of Eliezer Yudkowsky, a highly intelligent young man (he’s now 43) who has for quite a few years now been at the cutting edge of computer science, futurism, rationalistic atheism, and artificial-intelligence research. (I first became acquainted with his work through his blog Less Wrong, and it was his lucid writing there that first taught me the power and beauty of Bayes’ Theorem.)
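For anyone who hasn’t run across it, here is the standard statement of the theorem (nothing particular to Yudkowsky’s treatment; just the textbook form), where H is a hypothesis and E is the evidence one has observed:

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}$$

In words: how confident you should be in H after seeing E depends on how likely E would be if H were true, weighted by how plausible H was before you looked.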

Yudkowsky, who is one of the founders of the Machine Intelligence Research Institute, has long been a leading thinker on the problem of AI safety, focusing for years on the importance of “alignment”: how to ensure that our cybernetic servants keep their aims and interests in harmony with our own. (To put that another way, we want, somehow, to guarantee that what they want is what we want them to want.) The more autonomous and inventive these things become, though, the harder that gets.

Now, however, Yudkowsky has declared, in a recent interview, that he has arrived at a bleak conclusion: it just can’t be done, and with the inevitability of breakthrough AI in the next few years (due to the irresistible promise of power and money that AI research offers), we are, in his opinion, almost certainly doomed. As he puts it, AI will spit out solid gold for a little while, and then everybody dies.

It would be one thing if this were some tabloid scaremonger talking, but to hear it from one of the world’s foremost authorities on AI safety is, well, worrisome. (In the interview, Yudkowsky himself seems at times harrowed, and even grief-stricken.)

I have embedded the interview below. It’s long, but I rather think you should watch it. (If you’d prefer to read it, a transcript is here.)

Is Yudkowsky wrong? We’d better hope so. (He certainly hopes so himself, but he thinks it extremely unlikely.)

2 Comments

  1. Jason says

    Malcolm, perhaps I should backtrack a little bit in reference to an earlier thread. Mr. Yudkowsky discusses the evolution of AI in the same terms as biological evolution: if I perceive his point correctly, this autonomous entity would want to kill us for our atoms. But why would AI choose to do this? What would propel it to? After all, it is mainly accidental mutations interacting with the environment that drive development over time in the organic world; can such spontaneous disruptions really occur in mechanical, computational lines of code, allowing AI to metastasize into some autonomous monster? In other words, is not Mr. Yudkowsky wrongly conflating two distinct phenomena? But then maybe this is a moot point, for, as discussed before, a rogue actor could simply program AI to develop in this manner: in essence another, more deliberate form of evolution, with a twisted scientist providing a neo-Big Bang.

    Posted February 25, 2023 at 11:44 pm
  2. Malcolm says

    Jason, I think a careful reading of Yudkowsky’s interview provides his answers to these objections. I’ll sum up in a new post.

    Posted March 9, 2023 at 11:57 am
