EPISODE TRANSCRIPT
Reid Hoffman:
I am here, with delight, with my friend Sam Altman, who has assembled an amazing team at OpenAI and who has obviously made a lot of amazing and wonderful progress demonstrating the good things that are possible with AI.
One of the things I love about Sam (and I think it’s accurate) is he says, “Look, what I do is I help amazing people do their work, and I bring a little bit of my own high ambition to add some spice into the mix.” And that’s the thing I loved about Sam and the whole team.
So Sam, welcome to the podcast. Thank you.
Sam Altman:
Thanks for having me.
RH:
You were, I think, the second person I talked to about writing a book with GPT-4, and the discussion came out of this [concept of], “Oh my god, it’s a human amplifier.”
Well, let me not just talk the talk, let me walk the walk – or write the write as the case may be – I could do a book that could be interesting on this. And you and the whole team’s perspectives on various things on AI kind of led me to do this.
And so I coincided it with the release of GPT-4 (because obviously we didn’t want to give away the game early), and it’s a little bit of AI as amplification intelligence, or augmentation intelligence, versus artificial intelligence – an “AHA!” moment, amplifying human ability.
Say a little bit about your view on this kind of amplification and augmentation.
SA:
I’m really curious, and I’ve been waiting for this to ask you. I realize it’s not a perfect example because you’re writing about GPT-4 itself in so many ways. But how much of an amplifier did it feel like? Did it make it 50% easier to write the book? Five times easier to write the book?
RH:
At least two times, and it might be more.
Let me work through an answer. One part of it was the huge issue – that is across all professions – which is solving the blank page problem.
SA:
Yep.
RH:
You’re writing a product requirements document, marketing copy, a legal brief, a term sheet, a medical analysis, a commencement speech – you go into a prompt, type something, and it generates something! And then all of a sudden you have something, a foil. That’s the collaborator, the co-pilot to work with. And that was huge because, for example, what I would do is say, “Okay, how could AI really help education?” and, “What would the critics say about AI in education?” And you get both [answers], and then you can go, “Okay, let me think about them, let me put them to it again.” And that’s all done in two minutes! So maybe it’s 2X easier and 10X the speed, something in those dimensions.
You also have this thing where something occurs to you like, “Huh – Wittgenstein has this theory of language: using language is following a rule. Well, maybe GPT-4 would say something interesting about it, especially when mixed with something else.” Say you just read something about Chomsky’s linguistics or something else, and it gives you something interesting to say about it.
All of that I think was really helpful. And so it limbered it up and it helped a whole bunch in a way. Now, it still wasn’t just like “Press button, get book,” right?
SA:
Yeah, one of the things that has been most gratifying to see is the success people have had using this as a creative tool to get past that blank page problem, to get unstuck on something, and to generate a bunch of new ideas. Clearly, it is not a replacement for creative work in any way, but as a new arrow in the quiver. I think people have had surprising success in many different ways and it has been very fun to watch the breadth of human creativity in finding out what to do with this.
RH:
Have there been any particular pieces of creativity that have especially surprised or delighted you?
SA:
One of the things that I hear a lot is parents of small kids talking about how every night their kids just want to make up stories with GPT-4, and there seems to be an unending appetite for this from some kids. And at least from what parents report back, some children are able to far outdo what some adults can in coming up with really creative stories, and in quickly becoming fluent with a new way to use this technology. So that has been interesting.
RH:
And to amplify that, because I’ve heard the same thing. It’s like, we tell a new story every night, and we kind of say, “What do you want a story on?” “I would like to have a dragon and a teddy bear and an astronaut in a ballad.” And you’re like, “Okay.” And I know even with GPT-3, let alone GPT-4, people have written children’s stories (sometimes illustrating them through DALL-E) and have published those.
So while mine – maybe by virtue of timing – may be the first co-authorship with GPT-4, there were already books being published with GPT-3, with DALL-E, and so forth.
So, to get to kids: We obviously have had a storm of education commentary, most of which I find comes from a really profound lack of imagination. What do you think?
SA:
I totally get why this technology is causing a lot of change and some problems for extremely overworked teachers. That’s all very natural and the adaptation is going to take some time. But on the upside, I think what’s happening is remarkable and exciting.
There are a bunch of boring use cases of GPT-4 that I really love – translation, summarization, information access. I’ve come to rely on it for many categories – but I think the deepest thing I have found so far is the ability to learn new things better than Wikipedia, which had been my leading way of learning something new fast. I, and many other people, love the style of interactive chat with a sort of “AI tutor lite” to learn something new. And I am hopeful that this will be a great new education tool.
RH:
I mean, it’s good for you to have that balanced perspective given that you’re driving the creation.
I am certain that it’s not just [a matter of] different students learning at different paces. An AI tutor might be able to tie things to your interests – whether that’s polar bears, or climate change, or Greek goddesses, or whichever – while also going at your pace, being able to do stuff.
But there are other things too. For example, teachers go, “Well, wait a minute, it writes the essay.” [That’s just] a lack of imagination. Here’s one thing you could do: say, “Look, it’s really easy. I want an essay on Jane Austen and colonialism.” I use GPT-4, I generate eight essays, I hand those essays out – in high school probably, maybe college – and I say, “This is a D plus, do better.” And the students can use that as a basis to get better.
And this is the real pitch to the teachers. You can now use GPT-4 to help you grade. So as opposed to spending 60 hours on grading, you can spend three hours on grading and the other 57 hours with the students helping them. And that kind of thing is just like it’s there now.
SA:
Yeah. I mean, I think there are probably some concerns with using these systems for grading, but I also think we will find ways to make it okay and make it work in many cases. And that idea that you can free up – whatever that comes out to, 95% of your time – for something else, that’s a concept we hear again and again with the use of GPT-4.
RH:
In writing the book, we made the first chapter about education, to try to understand how we can help elevate all of us – because ideally we are all ongoing learners, but of course especially children, and also re-skilling when you get to work. But also medicine. Say a little bit about the line of sight to an AI tutor and an AI doctor on every smartphone.
SA:
Certainly, an exciting world to get to.
Through this lens of human amplification, the idea that we can have doctors that can provide a higher quality of care, and care to many more people using these tools to automate parts of it – but keep humans in the loop. There are a lot of companies now working in that direction. I think it’s rightfully going to be a long time before most people want a full-on AI doctor. But this hybrid approach where this is a tool to make our doctors much better and we can have super high quality, affordable medical care available to a lot more people seems tremendous.
On the education front, it’s obviously going to get so much better, but we’re sort of already there. People are now used to their phones being able to tutor them on things that they need, which is amazing.
RH:
And part of what I think is kind of great about the API framework is the learning. Are you seeing people doing this medical and education stuff already? What are you seeing that is already in development?
SA:
We are, yeah. One of the most fun parts of the job is getting to meet with developers and hear about what they’re already doing and what they’re planning to do. And certainly, medical advice and tutoring are frequent and extremely exciting areas.
RH:
Did you have any reflections on the Impromptu book – things I should have worked more on, themes I should have developed further, or things especially worth people paying attention to?
SA:
No, I had a very interesting response to it, and I’m not sure if it’ll be the same way other people respond. But I thought it was great, and I think it raises a lot of issues. It was this weird sort of study of someone else studying something that I had been so close to for so long, and I was like, “Oh man.” It’s like looking through fresh eyes at something where I have seen all of the parts that don’t work, all of the problems, and all of the limitations. And you did a great job of still highlighting those (to some embarrassment in some cases). But it was a very strange experience, I think, for me and for others who work at OpenAI, in a way that is maybe different for people who didn’t watch the process of creation.
RH:
Yeah, no – and look, I saw the process of creation from more of a remove, in a kind of helping role. I will seek your feedback on how to continue to improve and help, because I think what the team is doing is amazing.
Let’s go to the work section. I know that one of the things you guys are super focused on is how to have a very positive impact on the world of work. What are you currently thinking about what this amplification means over the next five to ten years – because all the impacts on work will take time? And how should people think about amplification as a path for themselves, and a path for their companies?
SA:
I always try to be honest and say that in the very long term, I don’t know what’s going to happen here, and no one does. And I would like to at least acknowledge that. In the short term, it certainly seems like there was a huge overhang in the amount of output the world wanted. And if people are way more effective, they’re just doing way more. We saw this first with coding – people who got early access to [GitHub] Copilot reported this, and now that the tools are much better, people report it even more.
But we’re now in this sort of GPT-4 era, and we’re seeing it in all sorts of other jobs as well: you give people better tools and they just do more stuff, better stuff – it amplifies their ability.
I’m trying to remember where I heard this (I’m pretty sure it was from Paul Graham): there is a huge cost premium on work that has to be split across two people. There’s the communication overhead. There’s the miscommunication. There’s everything else. So if you can make one person twice as productive, you don’t just do as much as two people could do – maybe you do as much as three and a half or four people could do, for many kinds of tasks. And I think that is one of the things we are seeing with this GPT-4 application.
RH:
Yep. And also, I mean, that’s like the earlier book thing. You’ve got a blank page problem. You have a lot of rote work to amplify away. So it’s like, how do you slot in to be more effective? And when people say, “Well, but wait, that really changes things” – well, mathematical intelligence used to be how much calculation you could do, and the calculator changed that. But that’s fine; we don’t do the calculation from memory anymore. Intelligence used to be memory, and now you can use search on the internet instead.
So I have confidence that our intelligences can tune to where the creativity adds value versus anything else, even for us – speaking more for myself than for you – “older folk.”
SA:
I’m old enough now that I notice it. You notice it when you get into these crazy work periods and the late nights get a little bit harder, or at least not as easy.
RH:
Yeah, indeed. And it doesn’t get easier, speaking from a generation ahead of you.
One of the things on the work topic that people worry a lot about – this is probably the most tangible thing most people go to. Part of the reason for writing the book was to say, “Look, there are so many amazing things we can get to. We can get to an AI tutor for every person, every child. An AI doctor for people who can’t afford medicine, or who don’t have access to or companionship with a doctor, et cetera, et cetera.”
But everyone worries about it because they worry about what happens to their job. And part of what I wanted to show was, “Look, here’s an instance of using it where a smart, knowledgeable person is using it for actual work, and the work is writing this book.”
SA:
Yeah, I love that you did it. It’s so tangible. It made me want to write one too.
RH:
I look forward to it. I will be one of your dialogue participants.
SA:
Great.
RH:
What do you think about the use of GPT-4 for the information problem? I actually don’t think we’ve specifically talked about this. Obviously there’s a lot of criticism saying, look, it’s a generative AI – it’s very good at generating, but the generation is not necessarily truthful. It can be targeted in any direction, which includes misinformation. Hence we have an amplification of social media and other kinds of things that are a problem, and therefore, “Oh my God, this is an assault on journalism.” I don’t agree with this point of view, although I do think one has to work to make it good.
What are your working thoughts about how to navigate this in a way that facilitates collective learning and collective truth?
SA:
One of the areas that we most need to figure out is how to get these models to be super reliable while not losing the creativity that people like, or at least giving settings for when you want one or the other. We don’t have the answer here yet. We have a bunch of ideas, and we’re able to make week-over-week improvement on the problem. But as for telling you when we’re going to really get to a system that is accurate to what you as the user want – I carefully say that instead of “a system that is true,” because truth is really hard. I think we don’t have an answer. We’re making progress. We need some new ideas, and it’ll get better.
RH:
One of the things contained in your answer that I think is really important is the dynamics of learning. Say we have an agreement about what our social model of truth or facts is, and how we update it. Let’s make it easy and say it has to do with science and facts and other kinds of things, versus grounding in some kind of truth process, which is usually collaborative human study. Well, it’s easy enough to train an agent like that, put it in the browser, and then, when you hear the earth is actually flat or the moon is made out of blue cheese, it says, “No, no, no, you should look over here for good information.”
SA:
Look, those things, I think we can get right.
The following example that I’m going to give, I don’t even know if it’s true or not, which illustrates how hard this problem is. Maybe I just heard it repeated by people so often that I now believe it’s true. But I think that there was a time where some of the social media platforms were banning people for saying that COVID may have leaked from a lab and that you can’t say that, it’s xenophobic, it’s racist, it’s hateful, it’s inaccurate, it’s fake news, whatever. I think people were just having posts about that deleted, and then all of a sudden the kind of elite opinion on that changed. I remember there was one article that came out that seemed to really change it and you see this big backtrack there, but you literally weren’t able to discuss it in the channels that are the de facto important channels now.
And that’s the kind of thing where if you let someone dictate the truth, I think you do something dangerous. But there are lots of easy examples like the earth is not flat. The Holocaust did happen. I think we can make a system where we’re willing to make editorial calls on those two. But there is a lot of other stuff where I think all of the trickiness comes in.
RH:
Well, I chose easy examples. I noticed you didn’t dispute the moon is made out of blue cheese. Maybe you haven’t –
SA:
I’ll agree. I’ll agree that the moon is not made out of blue cheese too. Yes. I haven’t personally been there, but it seems like a safe bet.
RH:
Yes. I actually think one way you can train these models is to say, “Well, it doesn’t have to yield an opinion on everything.” It doesn’t have to yield an opinion on controversial topics or places where humans disagree. For example, one training answer is, “Oh, on this topic there’s a lot of human dispute.”
SA:
And we give those answers a lot. We are now experimenting with more steerability of the models where a user can say a lot more about how they want the model to behave in different ways.
And so if you want a high degree of careful calibration, the model can do that. And if you want something else, the model can do that too. If you want the model in a very creative mode where it tells you a story about astronauts going to the moon and finding that it was made of blue cheese, there could be good reasons for that. That could be a fun sci-fi story, right?
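[Editor’s note: the steerability Sam describes – a careful, calibrated mode versus a creative, storytelling mode – can be pictured as choosing a system message and sampling temperature per request. This is a minimal, hypothetical sketch; the prompts, mode names, and helper function are illustrative assumptions, and the commented-out client call reflects the OpenAI Python library as it existed around GPT-4’s launch, not OpenAI’s actual internals.]

```python
def build_request(mode: str, user_prompt: str) -> dict:
    """Build keyword arguments for a chat-completion call, steering the
    model's behavior through the system message and temperature.
    (Hypothetical illustration; mode names and prompts are assumptions.)"""
    if mode == "careful":
        system = ("You are a carefully calibrated assistant. Stick to "
                  "well-established facts and flag uncertainty explicitly.")
        temperature = 0.2  # low randomness, favoring reliability
    elif mode == "creative":
        system = ("You are a playful storyteller. Factual accuracy is not "
                  "required; invent freely.")
        temperature = 1.0  # high randomness, favoring variety
    else:
        raise ValueError(f"unknown mode: {mode!r}")
    return {
        "model": "gpt-4",
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_prompt},
        ],
    }

# The request would then be sent with something like (openai<1.0 style):
#   import openai
#   openai.ChatCompletion.create(**build_request(
#       "creative",
#       "Tell a story about astronauts finding the moon is blue cheese."))
```

In "creative" mode the same question about the moon can yield the fun sci-fi story Sam mentions; in "careful" mode it should instead stick to the facts.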
RH:
Yes.
SA:
And the whole one-size-fits-all approach – I think people are finding more and more that it doesn’t quite work. The things they think they want a model to do, or not do, or never do don’t always apply, and that’s uncomfortable.
But giving the user a lot more control is something we think is important. And if you want that story about the astronaut going to the moon and being sure it was made out of blue cheese, in some context, that’s totally fine. But it’s telling you something that is clearly not a fact in some other sense.
Can I tell a quick story?
RH:
Yes, of course.
SA:
It’s totally unrelated to anything, but I just think it’s one of the most beautiful stories. I don’t know why I just thought of this right now. I guess it is because we are talking about the moon. There was this astronaut who talked to a reporter decades after going to the moon. And they kind of asked him what it was like and he said, “You know, I kind of forget about it. And then sometimes I’m out in my backyard at night looking at how beautiful the moon is. And I just remembered like, ‘Oh man, all these years ago I went there and walked around on it and now I forget.'” And I think that is just an amazing story.
But I also think that when we look back at this age of creating the first AI systems, all of us who have been involved are going to have our own version of that.
RH:
Yeah, no, I completely agree.
Basically, the last chapter in the Impromptu book is my very strongly held belief – but also an argument – that we evolve our humanity through our technologies. We evolve our humanity through the clothes we wear.
SA:
I love this part.
RH:
Yeah, yeah. The chairs we sit in, the podcast mics, the video conferencing we talk to each other across – all of this stuff. There are a number of major-scale problems facing the world: climate change, economic transformation and the elevation of humanity, people adapting to the new world (whether it’s education, skills, et cetera), social justice of various forms. One of the ones you and I talk a lot about is criminal justice and having fairness across society. Are there any of these that you think AI is not a particularly good fit for? And then, what’s a surprisingly good fit that probably hadn’t occurred to people?
SA:
So I think technology is the fundamental ingredient in making the world better in all of these ways. But technology on its own doesn’t always – or doesn’t usually – do it. You need society. You need the participation of people. You need the contributions of all of our institutions. That coming together with technology is what has made the era from the Enlightenment until now so impressive, and it is also what enables the discovery, creation, and distribution of such technology in the first place.
If AI can start churning out new scientific discoveries, that will be wonderful. And that’s personally one of the things I’m most excited about. But even to get those into the hands of people and to do it in a rapid and just way, you need all of the rest of society to play its role too.
And so I don’t view this as AI is just going to come along and wave a magic wand and fix everything. It’s like AI is this new meta-tool that humanity has to enable us along with all of our people, institutions, whatever, to go to greater and greater heights.
RH:
And what would it be, for example? Is it climate change? Is it ending poverty?
SA:
If we can build AI systems that help us figure out the cure to many diseases, that’s a pretty big triumph. And this is not my field of expertise by far, but the people that are in it seem so excited about what this is about to unlock.
RH:
Yep. I completely agree. Because by the way, diseases are broadly pretty indiscriminate across rich, poor, gender, race, et cetera.
What do you see in terms of AI as a platform? You just recently released ChatGPT as an API, with some announced partners and other things. How should people think about the platform as they navigate their lives?
SA:
I think with the greatest platforms, people pretty quickly forget they are platforms.
RH:
Um hmm (affirmative)
SA:
So for a while after the first smartphones and app stores launched, all these companies were saying, “I’m a mobile company.” And that was a big deal, because they were on the mobile platform. Today it would be ridiculous to say you’re a mobile company, because every company has a mobile part of its strategy.
And we’re going through a moment right now where everyone is talking about AI and AI is the platform of the future (probably true), all these AI companies, whatever. My hope is that 10 years from now, intelligence is expected in every product and service and it’s so ubiquitous, we forget it’s a platform. It’s just part of everything we do.
RH:
And what do you think about the probability that, in addition to the kind of large language models – scaling, and refining them – there is another amazing major component that will be discovered and/or deployed in the next three to five years?
SA:
High, high. Well, “discovered” – high. It may take longer. But my belief is we need at least one more really big idea.
RH:
We have journalism as a chapter because we wanted to show the actual positive impact on crafting perspective and news for people. We as human beings share things with each other – that’s part of the reason we value journalists, even though some people like their flavor of journalist better than others’. How would you tell a journalist to use GPT-4 today? What would you say – this is how it helps you?
SA:
I have talked to a few journalists since the GPT-4 launch, and I would ask this question, because I’m always curious to know more. [GPT-4] is not useful at all – at least in the ways people have found so far – for the reporting process. But it’s very useful for the drafting, writing, and synthesizing part of the process: the ability to put a bunch of notes in, ask questions about them later, and help find themes between things. People have found a lot of use there.
RH:
That’s interesting. And you referenced Wikipedia before, but it’s like a mini research assistant that gives you a mini brief right away. Not always a hundred percent accurate, especially when you get to very specific details on very specific questions. When I asked it, “Did Reid Hoffman make a knockoff of Settlers of Catan? And if so, what was it?”, it said, “Yes, he did.” Which is true – I made a game called Startups Silicon Valley, which I only give to my friends who’ve already bought Settlers. You have a copy.
SA:
I do.
RH:
At least one. But then it said the game [I invented] was Secret Hitler, which is actually a board game made by the Cards Against Humanity people. I’ve heard it’s quite funny, but it’s not a game I’ve even cracked the box on yet, let alone co-designed. So I would say, “Look, cross-check this stuff.” But the quick research-assistant briefing, done immediately in 30 seconds, is I think actually part of it. It is not a, “Tell me what I should say about Sam Altman, or what question I should ask.”
SA:
That makes sense. That makes sense, yeah.
RH:
And that’s part of the use of some imagination. Again, it’s part of the reason why Impromptu was kind of this travelogue across these domains of human existence.
SA:
Oh, that’s a great phrase for it. That is what it felt like to read.
RH:
Yes, exactly – it’s just like when you first encountered search. When you play with ChatGPT or GPT-4, try something, and literally think to yourself, “How do I try something that may be different than anything I’ve ever thought about trying before?”
The one that may seem pedestrian that surprised me was the, “Well, I got these seven ingredients in my refrigerator, what should I make for dinner?” And it would just never occur to me to ask that question. You certainly wouldn’t put it in the search engine. You wouldn’t put it into Bing or Google or whatever.
And here it’s like, “Well, you could do this, you could do this, you could do this. Or, with those seven ingredients, are you in the mood for Thai or are you in the mood for Mexican?” You’re like, “Oh, all right.”
One of the things that is (too often) not said about the transformation of work that AI and intelligent tools will bring is that, by moving work away from the rote stuff, you free people not just for more creative stuff – which some people are fearful of – but also for the more joyful parts of work. And I think that transformation toward joy is, again, amplifying human beings.
SA:
I deeply believe this. It’s a little too early to declare victory on it, but it certainly seems like people stay in the flow state much more and stay in the parts of the job they enjoy much more. AI is good at doing the repetitive stuff that most of us just find a little bit monotonous.
Anecdotally, we hear reports of this a lot.
RH:
Yeah, well, and actually I did that. For example – having written books before – you get to periods where you’re just slogging through and getting it done. And I’d say, “Look, I’ll take a break.” I’d do a prompt to discover something fun and interesting. I’d think, “Hey, let me take this section of the essay that I’m doing and ask it to make it into a sonnet, or make a ballad about this.” And I’d look at it, and sometimes I’d go, “Okay, that was cool,” and put it aside. Sometimes I’d keep a copy. Sometimes I’d go, “Oh, that was kind of interesting.” And that kept me limber and fresh, versus the, “Ah, I just have to figure out the next sentence.”
Everybody has been sweating blood to bring this out and make it available for people. Is there anything you use GPT-4 for that delights you?
SA:
I mean, I mentioned this before, but it is still so delightful to learn something new – it feels like I just got a cheat code.
Another one: we just released this very early preview of plugins, and the fact that GPT-4 can now, with a lot of limits, write and execute code is so cool that, just as a citizen of humanity, I feel delight every time – even if it’s just doing something very simple.
RH:
One thing that I think is probably important, that we haven’t gotten to and that I don’t think can be said often enough, is what you guys are learning about how to align these systems with human interests and human values as they scale.
SA:
It’s nice of you to say. I think we have made real progress relative to what people thought this was going to look like.
But we do not know and probably aren’t even close to knowing how to align a super intelligence. And RLHF is very cool for what we use it for today, but thinking that the alignment problem is now solved would be a very grave mistake indeed. I am hopeful that we’re going to make better and better tools that are going to help us come up with better and better alignment ideas.
RH:
I think that answer is exactly right because staying deeply focused and deeply concerned as the dynamic thing proceeds is the way to maximize safety.
SA:
Well, I think it’s worth being very precise here. What we know is how to make GPT-4 safe – safe in the ways we know about so far. And again, big surprise, big success, but there’s a lot more work in front of us. Thankfully, we now have a new tool to help us.
RH:
Yeah, I think people should take heart and positive energy out of this. That’s the right way to be responding. You shouldn’t say, “Oh no, it’s no problem, we’re just cruising down the road.” This is a case of, “No, we’re paying a lot of attention. The problem will require more work, the solutions and tools we have can continue to improve, and by the way, as we scale, we are learning better ways to use the tool to help us align better – even though there are a lot of open questions and a long journey ahead.”
And this is part of the thing is when people look at a dynamic and future with change, they always tend to go, “Oh, what about the worst case?” And you’re like, “Okay, well what about the best case?”
SA:
Yeah. One thing that I think is great about people using GPT-4 and ChatGPT is that they see, they feel the upside – in a way that, if you only hear about it, you discount the upside and really feel the downside. Now, the downside is still really there, and using it adds some healthy caution to people too. But it really does let people taste the upside and figure out how to participate in this new world we are all going to create together. And I think that’s great.
RH:
Yeah. The parallel that I see is that when you ask someone who is part of a creative industry – writing, painting, photography, whatever – and they haven’t had access, they haven’t tried these tools, they’re always like, “Oh my God, is it going to take my job away?” Is it now going to be the journalist, the author, the novelist, the painter, et cetera? Then when they play with it, they realize, “Oh my gosh, this could be a tool that really amplifies what I’m doing,” and they begin to see the upsides. And that’s precisely why I created Impromptu (also with the pun in the name – I am prompting you) as part of –
SA:
Incredible name. Ten out of 10.
RH:
Thanks. It’s to say “Try it,” right?
SA:
Yes.
RH:
It’s a little bit like if you weren’t already familiar with cars and someone described one to you: “Well, it’s a two-ton death machine that can run over children; it causes 40,000 deaths per year in the continental United States. Wouldn’t you like to get into one?” They’re like, “What?” And then you go for a ride and you’re like, “Whoa, this is cool – it transforms space and I can do all these things!”
And it’s that kind of thing – try it and realize what all the options are. And of course, you still have to be attentive to safety. You’ve got to go, “All right, how do we make these things safer? How do we add bumpers, automatic crash detection, lane-change assistance, and other kinds of things, in order to make them a lot safer, decade after decade?”
And that’s what I think the AI stuff is too.
SA:
For sure.
RH:
You have obviously spent the last couple of weeks helping introduce the world to ChatGPT and GPT-4. Impromptu is an effort to show why AI can be amazing for humanity, and why I have belief and confidence that we really can get there by intentionally driving it there, to extend the metaphor. What would you say you’ve learned in the last couple of weeks that you would want to share, as our final thought, with listeners about what we’re trying to get to with AI in the next few years and the next decade?
SA:
You touched on it at the very beginning, so it’s a great thing to close with: this idea that this is a tool of amplification, a tool that helps you do better at whatever you need to do better at. We can say that, but it’s nothing like just trying it. So just go try it out. That’s my advice. We really hope you love it.
The feedback that we get from people is very helpful as we think about future versions. People really have found incredible new things, and new places where it breaks. But yeah: in whatever way you would like to be amplified, use this tool to amplify yourself. I think that’s the message that I would love to end on.
RH:
Absolutely. And it’s the reason why the book description from Impromptu is it’s not a read, it’s the start of a conversation.
And so with that, Sam, thank you for this conversation and as always, look forward to talking to you soon.
SA:
Me too. Thanks, Reid.
Video and transcript courtesy GreylockVC