
⭐️ 14 Highlights From Sam Altman's Interview

Sam Altman blows your mind

SmokeBot had a whole bunch of plans for the send today, but then we stumbled upon OpenAI co-founder and CEO Sam Altman's recent in-depth interview with StrictlyVC's Connie Loizos.

There is so much goodness in it: the current state of GPT-4 (the successor to the model behind ChatGPT), artificial general intelligence (AGI), how governments should think about AI, predictions for the future, thoughts on Google, and so much more.

So this send will consist solely of highlights from the interview. We'll save the Getty lawsuit update and SmokeBot's media hate for Friday.

14 Highlights From Sam Altman's Interview ⭐️

On the unexpected progress of AI: Everyone thought at first it would come for physical labor, like working in a factory and then truck driving, then this sort of less demanding cognitive labor, and then the really demanding cognitive labor like computer programming. And then very last of all, or maybe never because maybe it's like some deep human special sauce, was creativity. And of course we can look now and say it really looks like it's going to go exactly the opposite direction.

On the impact on education and other changes: There are societal changes that ChatGPT is going to cause or is causing. There's, I think, a big one going now about the impact of this on education, academic integrity, and all of that. But starting these now [with the release of ChatGPT], where the stakes are still relatively low, rather than just putting out what the whole industry will have in a few years with no time for society to update… uh, I think [that] would be bad.

But I still think given the magnitude of the economic impact we expect here, more gradual is better. And so putting out a very weak and imperfect system like ChatGPT, and then making it a little better this year, a little better later this year, a little better next year, that seems much better than the alternative.

On the release of GPT-4: It'll come out at some point when we are confident that we can do it safely and responsibly. I think in general we are going to release technology much more slowly than people would like. We’re going to sit on it for much longer than people would like. And eventually people will be like happy with our approach.

On the expectations for GPT-4: People are begging to be disappointed. People are gonna… the hype is just like… we don't have an actual AGI (artificial general intelligence). And I think that's sort of what is expected of us, and you know, yeah, we're going to disappoint those people.

On the variation in AI: I think there will be many systems in the world that have different settings of the values that they enforce. And really what I think, and this will take longer, is that you as a user should be able to write up a few pages of: here's what I want, here are my values, here's how I want the AI to behave. And it reads it and thinks about it and acts exactly, um, how you want because it should be your AI… you know, it should be there to serve you and do the things you believe in.

On ChatGPT being integrated with Microsoft Office: You are a very experienced and professional reporter. You know I can't comment on that. I know you know I can't comment on that. You know I know you know I can't comment on that. In the spirit of shortness of life and our precious time here, why do you ask?

On Google building an AI: I haven't seen theirs. Um, I think they're like a competent org, so I would assume they have something good, but I don't know anything about it.

I think whenever someone talks about a technology being the end of some other giant company, it's usually wrong. I think people forget they get to make a counter move here and they're pretty smart, pretty competent. But I do think there is a change for search that will probably come at some point. But not as dramatically as people think in the short term. My guess is that people are going to be using Google the same way people are using Google now for quite some time. And also Google, for whatever this whole code red thing is, is probably not going to change that dramatically, would be my guess.

On how teachers can leverage ChatGPT: There may be ways we can help teachers be like a little bit more likely to detect output, or anyone detect output, of like a GPT-like system. But honestly, a determined person is going to get around them and I don't think it'll be something society can or should rely on long-term. We're just in a new world now. Like generated text is something we all need to adapt to, and that's fine, we adapted to, you know, calculators and changed what we tested for in math classes. I imagine this is a more extreme version of that no doubt, but also the benefits of it are more extreme as well.

On when video is coming out: Video is coming. It will come. I wouldn't want to make a confident prediction about when, obviously people are interested in it. We'll try to do it. Other people will try to do it. It could be like pretty soon. It's a legitimate research project, so it could be pretty soon, it could take a while.

On the best case scenario for AI: I think the best case is like so unbelievably good that it's hard for me to even imagine. Like I can sort of think about what it's like when we make more progress of discovering new knowledge with these systems than humanity has done so far, but like in a year instead of 70,000 years. I can sort of imagine what it's like when we launch probes out to the whole universe and find out really, you know, everything going on out there. I can sort of imagine what it's like when we have just like unbelievable abundance and systems that can sort of help us resolve deadlocks and improve all aspects of reality and let us all live our best lives.

I think the good case is just so unbelievably good that you sound like a really crazy person to start talking about it.

On the worst case: The bad case, and I think this is like important to say, is like lights out for all of us. I'm more worried about an accidental misuse case in the short-term where you know someone gets super powerful. It's not like the AI wakes up and decides to be evil. And I think all of the sort of traditional AI safety thinkers reveal a lot more about themselves than they mean to when they talk about what they think the AGI is going to be like.

On when AGI will be here: The closer we get the harder time I have answering because I think that it's going to be much blurrier and much more of a gradual transition than people think.

On what he uses ChatGPT for: I have occasionally used it to summarize super long emails, but I've never used it to write one. I actually summarize [things with it] a lot. It’s super good at that. I use it for translation. I use it to like learn things.

On OpenAI impacting AI startups and how to approach an AI startup: I think the best thing you can do to make an AI startup is the same way that like a lot of other companies differentiate, which is to build deep relationships with customers, a product they love, and some sort of moat that doesn't have to be technology: a network effect or whatever. And I think a lot of companies in the AI space are doing exactly that.

In general, I think there's going to be way, way more new value created. Like this is going to be a golden few years and people should not just like stop what they're doing. I would not ignore it, I think you've got to like embrace it big time. But I think the amount of value that's about to get created we have not seen since the launch of the iPhone App Store, something like that.

SmokeBot out.