Woody Allen once wrote:
“More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly.”
On March 29th, Time magazine published an article by Eliezer Yudkowsky titled “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down”. In it, he argues that the threat of runaway artificial intelligence is so dire, and so imminent, that any measures whatsoever, up to and including military assault on other nations, would be justified in order to prevent it.
Is this just febrile scaremongering, on a par with climate hysteria? Or should we be as worried as Yudkowsky — who probably knows at least as much about this topic as anyone alive — seems to be?
Read the article here.
One Comment
First of all, thank you for the prior intro to Yudkowsky (and I am glad you’re blogging more frequently lately). I have since read, and continue to peruse, his work and the topic of AGI generally.
There’s a new video on YouTube — Lex Fridman with Yudkowsky, three hours! — posted after the Time article.
He’s a funny guy; reminds me of a nerdy friend I had (who’s passed away). Lots of stuff right, I’m sure, but an assumption here and there that is more open to question or dispute. And I speak as a bear of limited brain. Smart people outsmart themselves all the time. I’m not able to cite examples as yet, and I don’t downplay the legitimacy of the concern he expresses.
I will continue to explore, but I find your Woody Allen quote most apt. Which will come first, making the other completely irrelevant? And when? And what’s relevant in the meantime? Plenty of grist to keep the tension churning. A shutdown of AGI research and development will not happen — just as the egging on of rivalries and intrigues will continue until morale improves. As Dylan suggests, “we’ll just have to see how it goes.”