
AI Researcher Predicts Super Intelligence Will Kill Us All

But not for at least 3 or 15 years

If you’re looking to learn about the potential pitfalls of artificial intelligence and have an existential crisis while doing so, may we recommend the latest Bankless Podcast (watch it here) featuring decision theorist and AI researcher Eliezer Yudkowsky.

If you have been following AI development at all for any period of time, you know that technological advancements will have transformative, far-reaching impacts across all areas of life. Finance, healthcare, manufacturing, education — you name it, optimization and efficiency are on the horizon.

And so is the unavoidable and devastating destruction of the human race, according to Yudkowsky.

Pause.

No, ChatGPT and other current byproducts of the latest tech advancements are not out for blood or other evildoing, nor are such AIs capable of the leaps forward necessary to get there. Good news! You won’t have to worry about Reddit users jailbreaking them into malicious agents that wreak widespread havoc.

But with billions of dollars flooding into the further advancement of AI, he believes we’re on the brink of an irreversible disaster.

In the simplest terms possible, Yudkowsky says humans currently have the edge on AIs in terms of general intelligence, but that advantage is likely to erode, much the same way chess engines were ultimately programmed to make the most efficient and optimal moves and surpass human capability.

While chess engines are narrowly focused, Yudkowsky believes the reality of an all-encompassing super intelligence isn’t far off in the distance. And if it is achieved as he suspects, then what, you ask?

“If it's better than you at everything, it's better than you at building AIs. That snowballs. It gets an immense technological advantage. If it's smart, it doesn't announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA, and gets some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a vial. Like, smart people will not do this for any sum of money. Many people are not smart. It builds the ribosome, but a ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces, and builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second.”

Great. Very cool. And a timeframe on this? Yudkowsky won’t commit to one, other than suggesting it could be three years, or it could be fifteen.

Neat.

Look, a few things here:

1) His doomsday scenario raises the question: how do we know this super intelligence would have evil intentions? Perhaps it would look favorably, or at the very least indifferently, upon humans. Yudkowsky says it’s unlikely to be good or bad, simply recognizing humans as collections of atoms that could be optimized for other uses.

I don’t know, man. I listened to this pod for nearly two hours and came away unsure what to make of it.

2) Best case, Yudkowsky is a brilliant-but-jaded alarmist, fed up with an industry chasing capitalistic opportunity at the expense of AI alignment. In turn, the general population wakes up and realizes, “Oh, shit, maybe we should actually pay attention to this,” while those wildly funding these advancements proceed with alignment top of mind.

3) Worst case, he’s spot on: we’ve passed the point of no return, we’re totally fucked, and we’re simply careening toward a mass-death scenario once reserved for science fiction.

Happy Thursday, everybody. I’m going to have a beer — or ten.