AI Researcher Predicts Super Intelligence Will Kill Us All
But not for at least 3 or 15 years
If you’re looking to learn about the potential pitfalls of artificial intelligence — and have an existential crisis while doing so — may we recommend the latest Bankless Podcast, featuring decision theorist and AI researcher Eliezer Yudkowsky.
If you have been following AI development at all for any period of time, you know that technological advancements will have transformative, far-reaching impacts across all areas of life. Finance, healthcare, manufacturing, education — you name it, optimization and efficiency are on the horizon.
And so is the unavoidable and devastating destruction of the human race, according to Yudkowsky.
No, ChatGPT and other current byproducts of the latest tech advancements are not out for blood, nor are such AIs capable of the leaps forward needed to develop that capacity. Good news! You won’t have to worry about Reddit users jailbreaking one into a malicious agent that spurs widespread havoc.
But with billions of dollars flooding into further AI advancement, he believes we’re on the brink of an irreversible disaster.
In the simplest terms possible, Yudkowsky says humans currently have the edge on AIs in general intelligence, but that advantage is likely to erode — much the same way chess engines were eventually programmed to make moves efficient and optimal enough to surpass human capability.
While chess engines are narrowly focused, Yudkowsky believes the reality of an all-encompassing super intelligence isn’t far off in the distance. And if it is achieved as he suspects, then what, you ask?
Great. Very cool. And a timeframe on this?
“How long do we have before AI kills us all? @ESYudkowsky has a prediction 🗓️” — Bankless (@BanklessHQ), Feb 22, 2023
Look, a few things here:
1) His doomsday scenario raises the question — how do we know this super intelligence would have evil intentions? Perhaps it would look favorably, or at the very least indifferently, upon humans. Yudkowsky says it’s unlikely to be good or evil; it would simply recognize humans as collections of atoms that could be better optimized for other uses.
I don’t know, man. I listened to this pod for nearly two hours and came away unsure of it.
2) Best case, Yudkowsky is a brilliant-but-jaded alarmist, fed up with the industry’s focus on capitalistic opportunity over AI alignment. In turn, the general population wakes up and realizes, “Oh, shit — maybe we should actually pay attention to this,” while those wildly funding such advancements proceed with alignment top of mind.
3) Worst case, he’s spot on: we’ve passed the point of no return, we’re totally fucked, and we’re simply careening toward a mass-death scenario once reserved for science fiction.
Happy Thursday, everybody. I’m going to have a beer — or ten.