Here is a link to a detailed survey of the current status of AI research, including a clear-eyed assessment of what we should expect in the near future.
I won’t lie: I find this extremely alarming. The linked report makes it very clear that we are just a few years away from creating entities that are not just more intelligent than we are, but vastly so — and that we can no more predict what such an intelligence will be capable of, or what it will choose to do, than a mouse or a fish could understand and predict the motives, strategies, and actions of a human being.
The report also makes clear that this is inevitable, because the development of these systems has now become an arms race. Whoever wields this power (insofar as it can be controlled at all, once it really gets going) will have an insuperable advantage over those who don’t — and so the only rational strategy for nations or other agents who don’t want to be on the losing side of that equation is to push the research forward as aggressively as possible. So they will.
The time horizon is also terribly short: a decade at most, but almost certainly much less, because there is a cascading effect as intelligent systems themselves begin to design their successors.
This is going to be a rupture in human history unlike any that has come before — even the end, perhaps, of history being driven by humans at all — and what leaps from every page of this document (despite the author’s wholly unconvincing declarations of optimism scattered throughout) is the fact that nobody has the slightest idea what’s going to happen, and there’s no way to slow down.
Am I overreacting here? Frankly, I have never been so worried about anything in my life, and I think most people are just blithely chugging along, with no real inkling of what’s about to happen.