Our Guest Peter Norvig Discusses
AGI Is Here: AI Legend Peter Norvig on Why It Doesn't Matter Anymore
Are we chasing the wrong goal with artificial general intelligence and missing the breakthroughs that matter now?
On this episode of Digital Disruption, we’re joined by former research director at Google and AI legend, Peter Norvig.
Peter is an American computer scientist and a Distinguished Education Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He is also a researcher at Google, where he previously served as Director of Research and led the company’s core search algorithms group. Before joining Google, Norvig headed NASA Ames Research Center’s Computational Sciences Division, where he served as NASA’s senior computer scientist and received the NASA Exceptional Achievement Award in 2001. He is best known as the co-author, alongside Stuart J. Russell, of Artificial Intelligence: A Modern Approach — the world’s most widely used textbook in the field of artificial intelligence.
Peter sits down with Geoff to separate facts from fiction about where AI is really headed. He explains why the hype around Artificial General Intelligence (AGI) misses the point, how today’s models are already “general,” and what truly matters most: making AI safer, more reliable, and human-centered. He discusses the rapid evolution of generative models, the risks of misinformation, AI safety, open-source regulation, and the balance between democratizing AI and containing powerful systems. This conversation explores the impact of AI on jobs, education, cybersecurity, and global inequality, and how organizations can adapt, not by chasing hype, but by aligning AI to business and societal goals. If you want to understand where AI actually stands, beyond the headlines, this is the conversation you need to hear.
00;00;00;05 - 00;00;15;17
Speaker 1
Hey, everyone. I'm super excited to be sitting down with AI legend Peter Norvig. Peter is a former research director at Google, an AI fellow at Stanford, and the author of the most important text on AI of the past 30 years, Artificial Intelligence: A Modern Approach.
00;00;15;20 - 00;00;35;18
Speaker 1
What's so special about Peter is that he's sat at the forefront of AI research and teaching for over three decades. So he hasn't just pushed the technology forward; he's educated an entire generation of AI leaders. I want to ask him how close we are to unlocking the true potential of AI, what he's most worried about, and what we need to do to build the future we want.
00;00;35;20 - 00;00;36;24
Speaker 1
Let's find out.
00;00;39;27 - 00;01;02;15
Geoff
Peter, you're the author of, you know, the preeminent, if I can call it that, textbook in the AI space, Artificial Intelligence: A Modern Approach, which has recently turned 30. And so this is an area that you've been thinking about since at least 1995, and I'm sure a lot longer, thinking about where the technology was then versus where it is today.
00;01;02;15 - 00;01;18;05
Geoff
One of the things I'm hearing a lot about these days is a lot of hype around, you know, we're only one to two years out from artificial intelligence reaching its final form, or being AGI and having this full potential. Do you believe that?
00;01;18;05 - 00;01;27;17
Geoff
How far has this technology come since, you know, you wrote the first edition of this book, and, you know, how close are we to the modern version actually achieving that?
00;01;27;22 - 00;01;32;01
Geoff
What is, you know, the complete promise of this technology?
00;01;32;01 - 00;01;59;16
Peter Norvig
So we have seen amazing progress in the last couple of years. I do think it's ironic that, you know, 30 years ago we titled this book A Modern Approach, and we've kept the same title, so I don't know how it can be modern both 30 years ago and today. And it does seem like, you know, textbooks seem obsolete now, because they come out on a cycle of several years and AI is advancing on a cycle of several weeks.
00;01;59;19 - 00;02;20;28
Peter Norvig
And so it is exciting what's been happening the last couple of years, I think unanticipated by most, certainly unanticipated by me. And just this idea that scaling up the data and the processing power, with a few very clever ideas for algorithms, made such a difference.
00;02;20;28 - 00;02;25;08
Peter Norvig
So I think that's really different. Now, in terms of AGI,
00;02;25;10 - 00;02;28;14
Peter Norvig
I think, you know, I don't really like the term,
00;02;28;14 - 00;02;55;12
Peter Norvig
I think there's no clear definition of it. Everybody has a different idea of what it was, what it is, and depending on what counts as achieving it, estimates will vary by five, six, seven orders of magnitude. And I guess I feel like there's not going to be a moment when we say AGI is here.
00;02;55;15 - 00;03;18;19
Peter Norvig
I don't believe in sort of this hard takeoff idea. I think it'll get better, and we'll just get used to it. And I think past technologies have been like that, right? Imagine if we had all of a sudden gone from the days when, if you wanted to learn something, you had to drive to the library, to the days where you have a machine in your pocket that gives you access to all the world's information.
00;03;18;21 - 00;03;37;22
Peter Norvig
If that had happened in one day, people would have said, this is an incredible singularity and transformation. But it happened gradually, and we just got used to it. And so I think it'll be the same with AI. It'll get better and better, and there won't be one point when we say this is the transition. It'll just do more and more.
00;03;37;25 - 00;03;47;02
Peter Norvig
Now, Blaise Agüera y Arcas and I wrote this article a year or so ago in which we said AGI is already here,
00;03;47;02 - 00;04;16;04
Peter Norvig
What we meant by that was not that the machines we have now are perfect. They're certainly flawed in many, many, many ways. But if you take the G seriously, we made a transition in, say, 2022 or so, going from writing programs that were specific, programmed to play Go, programmed to recognize images, and so on, to programs that are general.
00;04;16;06 - 00;04;41;09
Peter Norvig
So ChatGPT and the like can do lots of things that the inventors never realized. And we liken that to the invention of the computer, you know, maybe going back, say, to the ENIAC in 1945, or von Neumann's MANIAC a few years later, where they were 100% general. Now, they're terrible computers by today's standards.
00;04;41;13 - 00;05;06;01
Peter Norvig
They're big and clunky and slow, with almost no memory and slow processing speeds. But if you have a conditional, branching statement and a sequential statement, and you can read and write memory, then you're 100% general; you're as general as a Turing machine. You can't get more general than that. And so in that sense, we now have programs that are general.
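To make that claim concrete, here is a toy sketch (not from the episode; the instruction set and program are invented for illustration) of how sequencing, one conditional branch, and readable and writable memory are already enough to compute things the machine's designer never listed as features:

```python
def run(program, memory):
    pc = 0                          # program counter: sequential execution
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":             # write a constant into memory
            memory[args[0]] = args[1]
        elif op == "add":           # read memory, write memory
            memory[args[0]] += memory[args[1]]
        elif op == "jump_if_pos":   # the one conditional branch
            if memory[args[0]] > 0:
                pc = args[1]
                continue
        pc += 1
    return memory

# Multiply 6 * 7 with nothing but a loop of additions:
result = run(
    [("set", "acc", 0),         # 0: acc = 0
     ("add", "acc", "x"),       # 1: acc += x
     ("add", "i", "neg1"),      # 2: i -= 1
     ("jump_if_pos", "i", 1)],  # 3: if i > 0, go back to step 1
    {"x": 7, "i": 6, "neg1": -1},
)
print(result["acc"])  # 42
```

Multiplication was never "programmed in"; it emerges from the three general primitives, which is the sense of generality being invoked here.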
00;05;06;02 - 00;05;26;08
Peter Norvig
We write them, and they can do things we didn't think of before. So that's general. They're imperfect, and they'll get better, and we'll play with that technology. But I don't see having AGI as the focus being that helpful right now. I'd rather focus on: how can we make them better? How can we make them more reliable?
00;05;26;09 - 00;05;29;27
Peter Norvig
How can we make them safer? What else can they do?
00;05;30;29 - 00;05;39;23
Geoff
I appreciate that distinction. And so with that in mind, as we look at, you know, generative AI and tools like, you know, these chatbots,
00;05;39;23 - 00;05;46;06
Geoff
what's your reaction? You used the word imperfect, and these are sort of, you know, imperfect chatbots, and
00;05;46;06 - 00;05;50;25
Geoff
maybe that's something that will never go away and something that we have to get used to.
00;05;50;25 - 00;05;57;03
Geoff
But it sounds like you're leaning toward acknowledging that they are, AGI
00;05;57;03 - 00;06;01;29
Geoff
in the way that we would have described it 10 or 20 years ago. You know.
00;06;01;29 - 00;06;05;14
Peter Norvig
I think that's right. So, you know, just as I was saying with the phone in your pocket:
00;06;05;14 - 00;06;26;05
Peter Norvig
if we had gone to 1990 and, you know, all of a sudden said, here's a chatbot of today, I think everybody would say, okay, AGI is here. You know, this is an incredible leap. This is AGI. But because we got it gradually, there's been resistance to that.
00;06;26;16 - 00;06;35;09
Geoff
Yeah, that makes sense. And so, I mean, if you look at the progress, and you mentioned that you yourself were surprised by this,
00;06;35;09 - 00;06;43;16
Geoff
what specifically did you find, you know, maybe most surprising, or something that you didn't see coming?
00;06;43;16 - 00;07;04;29
Peter Norvig
As you said, Stuart and I started writing the book in 1995, but I went to grad school in AI in 1980, and my topic was natural language processing. And the way I looked at it, I said, well, there are two problems. One is there are written words on the page, and we have to figure out what they mean or how to process them.
00;07;04;29 - 00;07;24;08
Peter Norvig
And we saw that as a problem in linguistics. We had to figure out: what's the syntax of this language, what are the definitions of these words, how do they relate to each other. And we said, okay, we think we have a handle on how to do that. But then we said, then there's a lot more that's going on up here in the head.
00;07;24;10 - 00;07;40;02
Peter Norvig
And we said, I don't know how to do that. That's going to be the hard part, right? We're going to get the linguistics part right. But then, you know, how do you react to a sentence? What does it mean? How does it relate to another sentence? That's going to be hard, and that's going to require figuring out what's going on up in the head.
00;07;40;04 - 00;08;01;00
Peter Norvig
And then we built these LLMs by saying: the thing we're going to put in the head is very, sort of, broad priors that have the capability to pay attention and learn, but not much else. Otherwise, it's kind of a blank slate. And then we're just going to push billions of words past it. And it worked.
00;08;01;00 - 00;08;03;28
Peter Norvig
And I think nobody really anticipated that that would work.
00;08;04;01 - 00;08;12;18
Peter Norvig
We thought there was going to have to be a lot more going on in figuring out how thinking works, not just passing a lot of words past it.
00;08;13;19 - 00;08;22;17
Geoff
So I'm glad you said that. It's aligned with kind of my thinking, and some of the thinking I've heard from other, you know, AI leaders in this space.
00;08;22;17 - 00;08;40;21
Geoff
The approach of, you know, passing a lot of words through this tool: do you see that as kind of a continued trajectory to where we need to get to in terms of the next generation of, you know, AI? The phrase that comes to mind is AI as agents, or agentic AI, which is something, you know, you and Stuart have been talking about for a long time.
00;08;40;21 - 00;08;46;23
Peter Norvig
Yeah. I guess, you know, you can look at the picture and react in different ways.
00;08;46;23 - 00;09;02;07
Peter Norvig
Some people say, well, we have to tear it all down and start over again. And some people say, well, we just have to continue to make it better. I think it's interesting that Yann LeCun is on the tear-it-all-down side,
00;09;02;10 - 00;09;06;01
Peter Norvig
and I'm on the let's-continue-to-make-it-better side. But
00;09;06;01 - 00;09;09;27
Peter Norvig
other than that broad philosophy, I think we're very aligned in our views.
00;09;09;27 - 00;09;18;00
Peter Norvig
So we both say, here's some things that are missing. We have to be able to do reasoning better in certain ways. We have to have a better connection to the physical world and so on.
00;09;18;00 - 00;09;23;10
Peter Norvig
And I see that as evolution, and he sees it as revolution.
00;09;23;10 - 00;09;25;08
Geoff
That makes sense, and
00;09;25;08 - 00;09;44;06
Geoff
I think it's a useful distinction there, to say, okay, well, we're still broadly talking about the same thing. So when you look out over the horizon at where this is going, and, you know, there's no shortage of benefits or challenges on each side, what's sort of top of mind for you in terms of what we need to get right next?
00;09;44;12 - 00;09;49;10
Geoff
And how is that influencing, you know, where you're spending your time and mental energy?
00;09;49;10 - 00;10;01;17
Peter Norvig
Yeah. So we want, we want to build tools that are useful, and don't make dangerous mistakes.
00;10;01;17 - 00;10;14;25
Peter Norvig
So I think we're on the right track. A language model by itself seems like it's not all that useful, right? And, you know, there's a very specific mathematical definition of what a language model is.
00;10;14;27 - 00;10;35;27
Peter Norvig
It's a probability distribution over the next word, given the previous words, the context. And just having that doesn't seem that useful. And, you know, when we had the first chatbots, it was like you give it a prompt and then it says, okay, what's the next word I'm going to spit out? And then what's the word after that?
00;10;35;27 - 00;10;48;07
Peter Norvig
And then what's the word after that? To me, that didn't seem like artificial intelligence. That seemed more like an artificial politician, right? They're really good at spitting out one word after another without pausing to think.
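To make that definition concrete, here is a minimal toy sketch of "a probability distribution over the next word, given the context," together with the spit-out-one-word-at-a-time loop Norvig describes. Every word and probability below is invented for illustration; no real model works from a table this small:

```python
import random

# Toy conditional distribution P(next word | previous word).
MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "<end>": 0.3},
    "dog": {"ran": 0.7, "<end>": 0.3},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(first_word, max_words=10):
    """Sample the next word, append it, repeat: the whole chatbot loop."""
    words = [first_word]
    for _ in range(max_words):
        dist = MODEL.get(words[-1], {"<end>": 1.0})
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat"
```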
00;10;48;07 - 00;11;07;19
Peter Norvig
But now we have these models that do more reflection. So rather than just saying, what's the next word I'm going to spit out, it says, well, let me try ten different lines of approach, see where they go, criticize them, compare them, vote.
00;11;07;21 - 00;11;27;02
Peter Norvig
And, you know, maybe in the future, run some experiments, look something up, see how it relates, and only when I've done all that, now I'll start responding. And that seems much more like intelligence. And we're starting to see those kinds of approaches, but we've got to figure out how to do it better.
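Here is a hedged sketch of that "try several lines of approach, compare them, vote" idea (often called self-consistency in the research literature). The ask_model function is a hypothetical stand-in for a real model API, simulated with a toy distribution so the example runs:

```python
import random
from collections import Counter

def ask_model(question):
    # Hypothetical stand-in for one sampled LLM answer: a noisy process
    # that usually reaches the right answer but sometimes goes astray.
    return random.choices(["17", "21", "12"], weights=[0.6, 0.25, 0.15])[0]

def answer_by_voting(question, n_samples=10):
    # Try several independent lines of approach...
    answers = [ask_model(question) for _ in range(n_samples)]
    # ...then compare them and take a majority vote before responding.
    return Counter(answers).most_common(1)[0][0]

print(answer_by_voting("What is 8 + 9?"))  # usually "17"
```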
00;11;27;29 - 00;11;54;11
Speaker 1
If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe!
00;11;56;00 - 00;12;01;10
Geoff
So, coming back to, you know, I guess the concern piece: you mentioned
00;12;01;10 - 00;12;07;22
Geoff
we have to make sure that things don't go too off the rails with these technologies. What do you see as some of the,
00;12;07;22 - 00;12;13;06
Geoff
I guess, more and less realistic risks for this technology in the next handful of years?
00;12;13;06 - 00;12;17;10
Peter Norvig
Yeah. So I think there's a lot of issues. Right. And I think,
00;12;17;10 - 00;12;19;26
Peter Norvig
we don't have the best record with that. Right?
00;12;21;17 - 00;13;01;05
Peter Norvig
You know, as an industry, we invented social media. We saw some of the advantages of bringing people together, but I think we didn't do a good job of foreseeing all the problems of addiction and misinformation and so on. And I think anytime you have a powerful tool, it can be used for good or for bad. And, you know, maybe it's a flaw of the tech industry that they tend to be more idealistic and less connected to the real world, and optimistic, and see the good uses and don't defend enough against the bad uses.
00;13;01;08 - 00;13;08;03
Peter Norvig
I, I think we're in a pretty good place in AI because right from the start, there's been all this, talk of AI safety.
00;13;08;06 - 00;13;22;18
Peter Norvig
You know, I wish we had done a better job of that with other technologies, right? So when the internal combustion engine was invented, that was a great thing for humankind, to provide all these services and transportation and get food to people.
00;13;22;21 - 00;13;27;23
Peter Norvig
But we didn't really foresee all the effects of pollution and so on.
00;13;27;23 - 00;13;31;05
Peter Norvig
But here, sort of right from the start, it seems like AI has been concerned with that.
00;13;31;05 - 00;13;36;06
Peter Norvig
I'm worried about misinformation-type issues.
00;13;36;06 - 00;13;57;16
Peter Norvig
To some degree, I guess, for a large part of it, I feel like, well, we've already got that. We've already got cheap labor that can generate junk and push it out there. And the bottleneck doesn't really seem to be creation of the junk; the bottleneck is building up the networks that can get it propagated to others.
00;13;57;16 - 00;14;11;27
Peter Norvig
So I don't see AI as fundamentally changing that. It's just another tool that the bad guys can use. So that's an issue I worry about.
00;14;12;24 - 00;14;36;28
Peter Norvig
That's sort of empowering bad actors to have more powerful technology, right? So, you know, a lot of work has gone into: if you ask a chatbot, how do I make a pathogen that will kill a billion people, it's supposed to say, no, I won't do that for you. But we have open source models that allow you to get around that.
00;14;36;28 - 00;15;02;19
Peter Norvig
And fortunately, Anthropic and others have done this research and said, well, the kinds of things you need to know aren't really out there. And so neither the search engines nor the chatbots really have access to that. So that's a little comforting. But you can easily imagine, you know, an AI in the future saying, that's an interesting question.
00;15;02;19 - 00;15;25;19
Peter Norvig
I don't know the answer to that, but from my knowledge of biochemistry in general, here are some ideas; you could try this. And maybe it could guess right. So that would be a bad thing. And I think we're seeing this in general, not just with AI: we're seeing sort of the dumbing down, or the cheapening, of the ability to impose your will on the world.
00;15;25;26 - 00;15;53;14
Peter Norvig
Right? So 50 years ago, if you wanted to impose your will, you kind of needed a big aircraft carrier group that you would send off to threaten other people. Now a much cheaper set of drones can be as effective. So technology has given smaller groups the ability to do more attacks. And that's a danger.
00;15;53;14 - 00;15;58;08
Peter Norvig
And AI helps make that even more powerful.
00;15;58;08 - 00;16;17;15
Peter Norvig
I'm also worried about income inequality. And again, that's not uniquely an AI problem. It's an inherent problem in digital technologies, where the cost of reproduction is near zero, and that tends to concentrate wealth in the hands of a few.
00;16;17;17 - 00;16;19;29
Peter Norvig
And I think that's dangerous for society.
00;16;21;08 - 00;16;41;05
Geoff
I think that's well said. And, you know, as I process both sides of what you said there, starting with: yes, we seem better positioned than with a lot of new technologies to actually be investing in and caring about the safety. However, there are, you know, a number of varied risks that we can't necessarily get around.
00;16;41;05 - 00;17;01;17
Geoff
Certainly compared to some of the people I speak to here, it sounds like, on balance, you're gently optimistic about this and our ability to get our arms around it, versus, you know, some of your peers, and the one that comes to mind is, you know, Geoffrey Hinton, being in the we're-completely-screwed-here camp. Is that kind of gentle optimism
00;17;01;17 - 00;17;05;01
Geoff
reflective, would you say, of where you're at?
00;17;05;05 - 00;17;09;18
Peter Norvig
I think so. Yeah. I mean, I think there's real dangers and I think bad things are going to happen.
00;17;09;18 - 00;17;25;15
Peter Norvig
But I think overall the good will outweigh the bad. And Geoff Hinton's got his, you know, set of concerns. There are certainly people like Eliezer Yudkowsky who are even farther out on the danger side.
00;17;25;15 - 00;17;42;25
Geoff
I understand you're doing some work now with the Human-Centered AI Institute, if I've got that right, and, you know, working with folks over there like Fei-Fei Li. What sort of mandate does that institute have? And is there work being done over there that you think is helping to address some of these challenges?
00;17;42;25 - 00;17;50;17
Peter Norvig
Right. So this is an institute at Stanford, founded by Fei-Fei Li and some colleagues,
00;17;50;17 - 00;18;19;18
Peter Norvig
looking at how AI affects society and how we can make it work for people. And so, you know, there's also this design school at Stanford, and I see HAI as kind of a continuation of that: how do we design products that use AI that will be useful for people, that will augment them rather than replace them,
00;18;19;25 - 00;18;48;24
Peter Norvig
that will be fair and unbiased, and will be easy to use. So that's what the charter of the institute is, sort of. I'm teaching a class there. The institute also does a lot of policy-type work; we recently did a boot camp for congressional aides and tried to teach them, here's where the current state of AI is.
00;18;48;24 - 00;19;01;11
Peter Norvig
Here are the kinds of things you might be worried about, both in terms of promises and threats, and in terms of what possible legislative role Congress can play.
00;19;01;11 - 00;19;10;07
Peter Norvig
So I think that kind of, not really advocacy, more education, is part of the role of HAI.
00;19;11;11 - 00;19;17;00
Geoff
On that note, what kind of role do you see Congress potentially playing?
00;19;17;24 - 00;19;44;02
Peter Norvig
I think most of the issues could already be covered by existing law, right? So most of the time when you do something bad, it's the fact that you did something bad, not how you did it. We have rules that murder is bad; we don't have specific rules for murder with a particular technology.
00;19;44;05 - 00;20;08;23
Peter Norvig
Although, that said, there are places where we single out specific technologies. So we do say that if you use a gun in certain cases, that changes the nature of the crime. So I think, you know, government has a role to say, how are we going to use this in a way that doesn't take advantage of people?
00;20;08;26 - 00;20;32;07
Peter Norvig
How are we going to share those benefits broadly? And how are we going to let it grow while also keeping it under control? And I kind of feel like there are a lot of players involved, and I don't want to put all the emphasis on government. So, yes, they have a role to play.
00;20;32;10 - 00;20;56;05
Peter Norvig
But in my experience, government tends to react at a speed slower than the speed AI is going, so I worry about that. So I think other players are important too. Self-governance is important: all the big AI companies have their AI policies, and from my experience, that's taken pretty seriously.
00;20;56;05 - 00;21;22;06
Peter Norvig
So, you know, internal to a company, they say, hey, can we try doing this? Well, no, we can't, because we have to clear it first, because of our policies. And I think that works well. I think there might be a role for professional societies. We haven't had that before in computing, right? So I get to call myself a computer scientist, and, you know, I have some degrees and some experience, but I don't have anything official.
00;21;22;09 - 00;21;41;29
Peter Norvig
And anybody can just say, all right, I'm a computer scientist, or I'm a software engineer, and I'm going to release some software, and they let you do it. It's great. In other fields, they don't do that. I couldn't go out tomorrow and say, you know what, I'm going to call myself a civil engineer and go build a bridge. They don't let you do that.
00;21;41;29 - 00;21;58;06
Peter Norvig
You need to be certified in order to do those kinds of things. And I don't want to slow down the software industry, but I think there might be a role to say, if you get to a certain level of power in these models, maybe there should be some certification of the engineers involved.
00;21;58;06 - 00;22;04;23
Peter Norvig
And then finally, I think, external third parties, can play an important role.
00;22;04;26 - 00;22;26;14
Peter Norvig
So I actually joined an AI safety board with Underwriters Laboratories, and I thought they were interesting, because the last time we had a technology that the public thought was going to kill everybody, it was electricity, and everybody was worried they were going to get electrocuted. And you can see some of these vintage cartoons showing death and destruction.
00;22;26;16 - 00;22;43;04
Peter Norvig
And then Underwriters Laboratories came along and said, we're going to put a little sticker on your toaster or your microwave, and it means it's not going to kill you. And consumers trusted that. And because consumers trusted it, companies voluntarily submitted themselves for certification.
00;22;43;04 - 00;22;52;29
Peter Norvig
And that seemed like a good thing. And I think maybe these third-party nonprofits can be more agile than a government can be in setting regulation.
00;22;53;16 - 00;23;24;27
Geoff
The comparison to electricity is a really interesting one. And, you know, I won't drain the whole story here, but I'm sure you're as familiar, if not more familiar, than most with the story of alternating current and direct current, and Edison and Tesla. And it's funny, I've never reflected on it before, but it seems like you could draw some parallels between those competing standards and some of the big players in AI, and some of the narratives they have about their competitors right now and how safe or unsafe their models are.
00;23;25;02 - 00;23;31;03
Peter Norvig
A little bit, although, you know, it isn't to the point where we have to say, you know, we have to choose one standard that everybody's going to use.
00;23;31;25 - 00;23;40;28
Geoff
Well, that's where I was going with this: one of the narratives is like, oh, this is an arms race, and one person will get there and everybody else will lose out.
00;23;40;29 - 00;23;48;09
Geoff
Do you see it as being winner take all, or do you see it being more of kind of a long tail of different models and technologies that are more fit for purpose?
00;23;48;09 - 00;24;19;02
Peter Norvig
I see it as not being winner-take-all, but I see it as a few winners take most. So I guess there are two issues. One is, you know, some of these futurists or Singularitarians are saying, well, there's this hard takeoff scenario where, you know, one team figures out the magic so that its model doubles in a week, and then it doubles again in a day, and then it doubles again in an hour, and then the rest of the world is left behind.
00;24;19;02 - 00;24;50;13
Peter Norvig
I don't really believe that. You know, so far we've seen parity among the top groups, and I think we'll continue to see that. So I don't worry so much about one group dominating because they have a technology that leaves everybody else behind. If you'd asked me three or four years ago, I would have said, well, it's going to be a very small number of players, because there are only a few providers who have enough data centers to build these really huge models.
00;24;50;16 - 00;25;13;20
Peter Norvig
And I think those few companies will do really well, but I don't think they're going to capture everything, for two reasons. One, we've seen these much smaller models become very capable. And two, we've seen more demand for privacy, say, people saying, well, I want something on premises because I don't want my data or my queries to go outside.
00;25;13;20 - 00;25;22;25
Peter Norvig
So I think, you know, yes, the big companies are going to capture a lot of the market, but there are going to be lots of other ones as well.
00;25;23;23 - 00;25;49;12
Geoff
There's an interesting tension in my mind between, you know, fully democratizing some of this technology versus keeping the really important stuff contained within a few different companies. And I think I've heard you say before that, you know, one of the pieces that concerns you the most is open source AI, and what, you know, a bad actor or an organization could do with some of these models
00;25;49;15 - 00;25;52;03
Geoff
if there isn't enough safety or regulation there.
00;25;52;11 - 00;26;02;15
Geoff
Where does that balance out in your mind? When we have to balance it being broader versus more contained, how would you answer that question in terms of where we should draw the line?
00;26;02;15 - 00;26;06;24
Peter Norvig
I mean, I guess my feeling is it doesn't matter what I think.
00;26;07;29 - 00;26;26;20
Peter Norvig
And I think you were right that I was hesitant. You know, again, I mentioned Yann LeCun; he's really pushing hard for these open models. I was saying, you know, wait a minute, maybe it'd be good if, when somebody is making a query to do something terrible, it gets logged somewhere.
00;26;26;22 - 00;26;46;18
Peter Norvig
And I guess another person I can mention that I've seen the shift in is my colleague Eric Schmidt, who was very adamant, you know, two or three years ago, in saying we can't have open models because of the threat from bad actors. And now he's switched and said, it's too late;
00;26;46;18 - 00;26;48;15
Peter Norvig
these models are powerful enough.
00;26;48;20 - 00;26;53;20
Peter Norvig
If the bad actors want to use them, they can create them. So we might as well harvest the good
00;26;53;20 - 00;27;02;06
Peter Norvig
of the open models, because the bad guys have got them anyway. And I think that's right. There's nothing you can do about that now.
00;27;02;29 - 00;27;19;09
Geoff
So what are the implications there? Given that the cat is out of the bag, what do we need to do, I guess, as, you know, calling it an industry is a bit strange, but as kind of an ecosystem, to reap the most benefits, or at least, you know, mitigate our risk?
00;27;19;14 - 00;27;27;04
Peter Norvig
I mean, I guess what we've got to do is say, beware that here's another attack vector, right? And I think,
00;27;27;04 - 00;27;44;27
Peter Norvig
in some senses, in some areas, we've done a really bad job. So in terms of cybersecurity, as an industry, we haven't really focused on that, and there are a lot of losses from bad guys coming in and stealing data and extorting and so on.
00;27;44;29 - 00;28;12;01
Peter Norvig
And we've accepted that trade-off. We've said, we want the industry to move fast, we want to provide you with all these tools, and we'll do that without guaranteeing that they're safe, and we'll accept those losses. And maybe that was a good choice, but I think now maybe we can think a little bit harder and say, well, if there are more powerful attacks, maybe we should build our systems to a more reliable standard.
00;28;12;03 - 00;28;28;23
Peter Norvig
And I guess I'm kind of optimistic there. I'm not an expert in cybersecurity, but I talk to the people who are, and they kind of feel like, yes, these would be good tools for the attackers, but maybe they're better tools for the defenders.
00;28;28;23 - 00;28;35;06
Peter Norvig
the reason the attackers are successful is because we build software that's just full of holes.
00;28;35;09 - 00;28;58;25
Peter Norvig
And, you know, the systems we build are too complex for a human to understand, but it seems like maybe an AI system could understand them. And we could ask the AI, you know, here's this million lines of software; analyze where the holes are and tell me how to fix them. And if we can do that, then the attackers have a much harder job.
00;28;59;19 - 00;29;19;13
Geoff
Right. So it's almost like AI can raise the tide for all ships there. And, you know, it's an interesting perspective, and it's one I appreciate, because I feel like there's so much fear these days that, well, every year in all things cyber is going to be riskier than the last. And you're saying, well, maybe that doesn't have to be the case.
00;29;19;17 - 00;29;22;01
Peter Norvig
Yeah, I think we can make things more secure.
00;29;23;28 - 00;29;26;19
Geoff
So, yeah, you know, kind of adjacent to that,
00;29;26;19 - 00;29;58;01
Geoff
one of the undercurrents of all of this, thematically, is just speed. Everything is accelerating, it's going faster and faster, and who knows what model will be out tomorrow that's not out today. And I think about that in relation to, you know, an article that you wrote a number of years ago, Teach Yourself Programming in Ten Years, which was, you know, in some ways a response to this, you know, addiction to speed, and everything is now, now, now. Given when that article was written and where we're at,
00;29;58;03 - 00;30;03;19
Geoff
do you still hold on to those, you know, principles, or do you think something has fundamentally changed?
00;30;06;08 - 00;30;23;00
Peter Norvig
So, yes and no, right? I still think if you want to really understand a field, and software engineering could be one example of that, but any field, you do have to put in that work. And, you know, it may not take exactly ten years.
00;30;23;00 - 00;30;28;10
Peter Norvig
But you're going to have to put in a lot of time and you're going to have to study, and you're going to have to deeply understand things.
00;30;28;10 - 00;30;49;04
Peter Norvig
On the other hand, there are a lot of things you can do without that deep level of understanding. And now it seems like programming is one of them. So I have no issue with, you know, many people, maybe even the majority of people, saying, you know, it'd be really cool if I had a piece of software that did this.
00;30;49;06 - 00;31;13;00
Peter Norvig
Let me chat with this chatbot and build something that seems to work. Let's go. I think that's going to be fine. I do see kind of a generational schism on this, right? So this has happened a couple of times with me working with a younger colleague at work: we discover there's this new software package that seems to do what we need.
00;31;13;02 - 00;31;33;05
Peter Norvig
Great, you know, let's figure it out. And so I sit down, and, you know, I'm reading through the manuals, and I'm taking some time, and I have questions. And then my colleague comes back in and says, okay, I'm done, let's go. And I say, what do you mean, you're done? And he said, well, I figured it out: we call this method A, and then we take this result and we call B, and then we have it.
00;31;33;05 - 00;31;55;28
Peter Norvig
And that's the answer; let's move on to the next problem. And I say, but, you know, I don't understand this package. How does it do x, y, and z? And they say, no idea, but I know this will give me the right answer. And I feel like there is this trade-off: sometimes it's really important to deeply understand something, or else you're going to get bitten later.
00;31;56;01 - 00;32;17;27
Peter Norvig
You know, sort of, you're building up technical debt if you don't understand what's going on. Other times, completely understanding it is not important, and just getting the right answer is important. And it's hard to make that trade-off. And I feel like in these kinds of instances, neither of us is making a rational trade-off, right?
00;32;18;00 - 00;32;36;17
Peter Norvig
So I like to study and understand, because that's what I was used to, and my younger colleague likes to go fast, because that's what they were used to. And, you know, maybe one of us was right, maybe the other was right, maybe the right place is somewhere in the middle. But it's hard to get the experience to know.
00;32;36;19 - 00;32;53;04
Peter Norvig
Right? You've only got so much time. What are you going to understand completely, and what are you going to just accept and move on from, saying, if I get the right answer, it's okay, even if I don't understand it? And I think it's really hard to get that right.
00;32;53;26 - 00;32;59;14
Geoff
That makes sense. It's a tricky balance, and when is good enough good enough?
00;32;59;14 - 00;33;13;20
Geoff
So when I think about that, when we talk about computer science as a discipline, you know, we've already talked about everything from it's easier and faster than ever, to maybe we need to start thinking about credentials in this more
00;33;13;20 - 00;33;15;08
Geoff
for the safety of the greater good.
00;33;15;11 - 00;33;28;08
Geoff
Well, in your perfect world, how would you like to see, you know, AI and some of these advancements in technology, where would you like to see them take computer science in a way that's going to be most beneficial for everybody?
00;33;32;06 - 00;34;02;21
Peter Norvig
You know, we're starting to see some advances; there have already been some AI-assisted discoveries in computer science and math. There have been announcements recently from Terence Tao, you know, one of our foremost mathematicians, and others, saying, you know, here's an AI-assisted proof that I came up with. And Tao has said the systems seemed to be at the level of a not-incompetent grad student.
00;34;04;04 - 00;34;30;22
Peter Norvig
So that's kind of promising. And, you know, maybe soon it'll be a pretty good grad student that can do things we couldn't do before. And I feel like it's kind of the first time we've had a technology like this, because all our other tools have been: if I can formalize something, then the tool can help me work through these formal calculations,
00;34;30;22 - 00;34;55;15
Peter Norvig
once I've done the hard work of describing what I want it to do. And it feels like the current AI systems are the first ones that can help us in that process of going from a messy semi-understanding to something that actually works, without having to do all the formalization myself. Right? And so, you know, we've been burned by that in the past.
00;34;55;15 - 00;35;18;25
Peter Norvig
Right? So you go back to the 1980s, and there were all these predictions saying, oh, you know, worker productivity is going to go up so much because we have PCs now and, you know, we have spreadsheets. So, you know, 90% of the accountants will go away because it'll all be automated. Well, that didn't happen. And why didn't it happen?
00;35;18;28 - 00;35;39;16
Peter Norvig
Well, it's true that if you want to add up a long column of numbers, a spreadsheet is a great tool. But most of what the accountant was doing was not adding up the numbers. That was a very small part of their job. A big part of their job was knowing what numbers go where. And here's this expense and what column does it go in, and so on.
00;35;39;16 - 00;35;51;25
Peter Norvig
And we couldn't automate that part, and so therefore productivity did not go up that much. But the AI systems we have now seem like the first tools that maybe can do that messy kind of thing.
00;35;52;26 - 00;36;05;08
Geoff
So what is the implication there? You know, I want to tease this out a little bit more, because these days it certainly feels like there's a lot riding on, you know, productivity is going to skyrocket across all these organizations.
00;36;05;08 - 00;36;13;13
Geoff
And, you know, on the one hand, there's not so fast, it hasn't before; and on the other hand, there's, well, this time may be different.
00;36;13;16 - 00;36;22;18
Geoff
What's kind of your prediction? Is this, again, something that will gradually trickle out over time? What will the productivity gains, if any, look like?
00;36;22;22 - 00;36;24;07
Peter Norvig
I think it'll be gradual.
00;36;24;07 - 00;36;47;27
Peter Norvig
You know, if you look at GDP by year: well, first we have, you know, tens of thousands of years where it was pretty flat, but then we got the Industrial Revolution and it started to go up. And if you look at, like, GDP in the U.S. over the last hundred years, it's a pretty steady line with little wiggles in it.
00;36;47;29 - 00;37;20;15
Peter Norvig
And you can really only see two events in the last hundred years, and that's the Great Depression and World War II; everything else is a little dip, but then it goes back to the trend line. So it feels like technology is giving us faster progress than we had before technology, but no one technology has been, kind of, instrumental. You know, as a computer scientist, maybe I would have thought, well, when you get PCs in offices, that would make a big difference.
00;37;20;18 - 00;37;39;01
Peter Norvig
You can't see that on the chart of GDP. And maybe GDP is measuring the wrong thing in certain ways, but basically, you can't see that much difference. Now, another chart that I think is interesting to look at is comparing China to the U.S.
00;37;39;05 - 00;38;03;07
Peter Norvig
So if you compare China to the U.S. in, say, the 90s or so, they were up at like 10% GDP growth, and we were at 2 or 3. And I think what was happening is they were kind of reaping these technology benefits all at once, right? So we kind of slowly said, well, computers are coming, in the 70s and 80s and 90s.
00;38;03;09 - 00;38;26;27
Peter Norvig
China was not deploying any of that technology, and then they kind of did it all at once, and that gave them about 10% annual GDP growth. So I think you could see a case that AI could do a similar kind of thing. I was at this meeting of economists and AI people, and they took a poll of, what do you think GDP is going to be?
00;38;26;29 - 00;38;34;20
Peter Norvig
I think it was 20 years from now; I forget the exact number. And the median was about 10%,
00;38;34;20 - 00;38;44;28
Peter Norvig
but the range was from 1,000% to the complete destruction of civilization. So we've got some wide error bars on that.
00;38;45;25 - 00;39;27;10
Geoff
That's a very generous way of putting it. So let's stay on the China piece for a minute, and what can happen when you have these more open source technologies, and when you can have these kind of leapfrog technologies that, I guess, help China bend the curve upwards. And so I'll ask the question broadly, but, you know, what's, I guess, your concern level, or the concern level that you think the average American or Westerner should have, both in terms of the overall health of the economy as well as the health of the job market, given that, you know, these technologies are increasingly available outside the walled gardens
00;39;27;10 - 00;39;28;08
Geoff
of the West.
00;39;28;08 - 00;39;28;21
Peter Norvig
Yeah.
00;39;29;14 - 00;39;34;19
Peter Norvig
Yeah, we're certainly seeing shocks to the job market.
00;39;34;19 - 00;39;51;17
Peter Norvig
I think mostly what we're seeing so far has not that much to do with AI. I think it has more to do with sort of the recovery from Covid. We had a big up and down, and we're still reverberating from that.
00;39;51;20 - 00;40;13;05
Peter Norvig
And so I see, you know, students at Stanford come to me and say, well, I didn't get a job offer yet, what's happening? And, you know, in the past, they would come and say, well, I got a job offer from each of the top six companies and from these four startups; how do I choose between them?
00;40;13;07 - 00;40;42;27
Peter Norvig
Right. So that's a big difference. So far, everyone eventually gets a job. So the difference is, you know, rather than getting ten job offers, they're getting one, and it's taking a little bit longer to do that. And I think that's because, you know, during Covid everybody was online and companies over-hired; now we're coming out of that, and there are some threats to the economy, and companies are cutting back.
00;40;43;00 - 00;41;08;16
Peter Norvig
I don't think we've seen the full effects of AI yet, certainly. You do see it in small places. You know, I was just talking to someone who said, well, we used to have an artist on staff to, like, make up logos and PowerPoint slides and so on, and now we don't have that.
00;41;08;19 - 00;41;32;11
Peter Norvig
We've seen that kind of thing before, right? It used to be, you went into a typical office in the Mad Men, 1950s era, and there'd be a lot of people whose job was typing. Now we have much less of that, and most people are expected to do their own typing. And so now maybe people will be doing art on their own with the help of these tools.
00;41;32;11 - 00;41;55;00
Peter Norvig
So I don't think that's a huge effect on the economy. I guess there will still be plenty of jobs; it might be harder to find some of them. But I think the main issue will be the speed of the disruption. And, you know, we've seen this disruption before: it used to be that the majority of Americans were farmers.
00;41;55;00 - 00;42;21;10
Peter Norvig
And now we're down to only a couple percent farmers. But that happened over generations, and now we're seeing changes happen over months or years, and that may be too fast for people to adapt to. Right? It was okay to say, well, my grandparents and my parents and I were all farmers, but it looks like my kids are going to go off to college and take a job in the city.
00;42;21;12 - 00;42;48;11
Peter Norvig
Good for them. It's much harder to say, I had one job and I was laid off, and then another, and then another, and then another, and now I'm pretty pissed. So I think we're going to have to deal with that. I mean, there are a lot of these proposals for some kind of universal basic income or something like that. I don't know exactly what the right thing is, but I think we do need better kinds of social safety nets, because there will be this kind of disruption.
00;42;49;06 - 00;43;10;29
Geoff
So it sounds like you're concerned, then, that this isn't just part of the cycle and, as you said, reverberation from Covid, but that we may see, at least in the medium term, a higher unemployment floor before we get back to whatever the new version of farmers is. Is that fair?
00;43;10;29 - 00;43;28;03
Peter Norvig
Yeah. I mean, I don't know if it's about total employment or unemployment, but I think there'll be more disruption, right? So, you know, it may be that you're employed, but you may have to move from job to job faster.
00;43;28;03 - 00;43;36;16
Peter Norvig
So, you know, you could see the economy going up but everybody feeling worse, because they're nervous, right?
00;43;36;23 - 00;43;59;26
Peter Norvig
Yeah. And, you know, I kind of feel like we had this invention of the full-time job, and I see that as kind of a mutual insurance policy. Right? So why do we get insurance? Well, you know, the expected value of insurance has to be negative for the individual and positive for the insurer.
00;43;59;29 - 00;44;26;28
Peter Norvig
But the value to the individual is to even out the ups and downs. And I see, you know, a full-time job as like that, right? It's probably not the case that the most value I could provide to the world would come from staying at one company permanently. You know, probably if I split my work between multiple companies, that would be more effective for the world as a whole.
00;44;27;01 - 00;44;53;27
Peter Norvig
But it would be a cost on me to have to go out and find these gigs and never know where my next paycheck was coming from. And so we accept this sort of suboptimal use of resources to have this steadiness and even things out. And if we're going to start losing that steadiness, we're going to need some other type of insurance, or guilds, or UBI, or something to make people feel more secure.
00;44;54;29 - 00;45;11;28
Geoff
It's an interesting point. And, you know, as you were talking about it, I was thinking, I really like the analogy. And I guess there are benefits to the employer as well for full-time labor; I'm thinking about, you know, trust, and switching costs in terms of hiring costs and firing costs.
00;45;11;28 - 00;45;26;16
Geoff
And if you just bring in all these fractional people, how useful is that? But it does sound like, in this world where it's a combination of people plus machines, if I can broadly call them that,
00;45;26;16 - 00;45;35;07
Geoff
if I can just kind of throw that back at you. It sounds like when you think about the future of work, you see it as less rigid, more flexible.
00;45;35;07 - 00;45;46;14
Geoff
if I can choose an optimistic word, but not necessarily in a way that's fully beneficial for the employee. Is that fair? Or how would you color that?
00;45;46;14 - 00;45;52;27
Peter Norvig
That's right. So less rigid, I think, is an important part of it. And I think there's a kind of
00;45;52;27 - 00;46;08;08
Peter Norvig
communication that AI enables that is an important part of that. Right? So, you know, why do we have hierarchical structure? Because, you know, it is impossible to have all n-squared interactions.
00;46;08;08 - 00;46;39;16
Peter Norvig
And so we have the interactions flow up and down through the hierarchy, and now there are only order-n interactions instead of order n-squared. So that was important. But if we have these AI systems that can route the right information to the right person, then we need less hierarchical structure. And so that's a possibility. But if that less hierarchical structure means, you know, today you have a job and tomorrow you don't, then that's stress on the individual.
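The arithmetic behind that point, as a quick sketch: with n people, all-pairs communication grows quadratically, while a reporting hierarchy needs only about one link per person:

```python
# All-pairs communication grows as O(n^2); a tree of reporting links is O(n).
for n in (10, 100, 1000):
    all_pairs = n * (n - 1) // 2  # everyone talks to everyone
    tree_links = n - 1            # links in a reporting hierarchy
    print(f"n={n:4d}  all_pairs={all_pairs:7d}  hierarchy_links={tree_links:4d}")
```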
00;46;39;16 - 00;47;02;04
Geoff
Well, and I want to come back to the question of social safety nets and universal basic income, because there's an implication there, and I'm not coming out one way or the other on it right now, but there's an implication that the economics of artificial intelligence are creating a world where there's more value capture, if I can call it that, by the firms.
00;47;02;04 - 00;47;16;04
Geoff
And then there's got to be some sort of redistribution of wealth that helps the people at the bottom who are struggling. Is that the world you envision in terms of winners and losers, or would you paint it differently?
00;47;18;04 - 00;47;45;11
Peter Norvig
Yeah. So I do think that, in general, automation allows firms to capture more value. And then I think we want to, you know, sort of figure out who gets credit for that value. And there are a lot of cases we're working through now in things like copyright law.
00;47;45;14 - 00;47;59;03
Peter Norvig
I'm not sure copyright is exactly the right thing to be worried about, but, you know, who deserves credit for this stuff that we've built up, and how do we allocate that fairly?
00;47;59;19 - 00;48;00;18
Unknown
Right.
00;48;00;21 - 00;48;17;12
Geoff
So maybe let's zoom out for a minute. Who deserves credit is such a big, difficult question, so I'll ask maybe a slightly easier one, which is, just broadly, who do you see as being the winners and losers of this disruption?
00;48;17;12 - 00;48;38;05
Peter Norvig
I guess, you know, the losers are people who had a safe position that was protected by various kinds of moats, and now we'll see challengers come in. And then the winners will be those who are agile enough to exploit that, to use the technologies and see opportunities.
00;48;40;03 - 00;49;04;22
Geoff
So, and I don't know if you've been in this position, but if you had a CEO or a CTO come knocking on your door and say, Peter, I'm really interested in AI, you know, basically in the advancement of some of these technologies, and how my firm can capitalize on this, and what we need to get ahead, either in our industry or in terms of, you know, modernizing our organization,
00;49;04;24 - 00;49;07;16
Geoff
what advice would you give them and what would you tell them to watch out for?
00;49;07;29 - 00;49;16;15
Peter Norvig
Yeah, so I do a lot of that. One of the roles at HAI is educating companies as well, so we've done a bunch of that.
00;49;16;15 - 00;49;27;21
Peter Norvig
And at Google for Startups, we mentor startup companies that are usually a little bit more technologically savvy but are asking some of those questions.
00;49;27;21 - 00;49;32;23
Peter Norvig
I look at it as saying: listen, don't think of AI as being unique.
00;49;32;26 - 00;49;54;27
Peter Norvig
Let's think about what your market opportunities are, what tools you have to address them, how you can make your organization more efficient, what goals you're trying to achieve. And if you can lay that out clearly, then we can start looking at saying, here's a place where AI can play a role within that workflow.
00;49;56;16 - 00;50;01;13
Geoff
I like that, and it makes a lot of sense to me. And, you know, in some ways it's
00;50;01;13 - 00;50;07;09
Geoff
seeing tools as tools, I guess, versus seeing them as being the be-all and end-all. Are there any
00;50;07;09 - 00;50;18;07
Geoff
views or assumptions you're seeing people come to you with, as patterns, about AI that you think are just totally wrongheaded? Or, I guess, if I can ask that another way,
00;50;18;07 - 00;50;24;29
Geoff
are there particular flavors of the hype where you're saying, just look out for that, ignore that, that's going to put you on the wrong track?
00;50;24;29 - 00;50;43;05
Peter Norvig
Yeah. So I think we've seen, you know, pretty fast evolution in sort of the public eye. If you go back, like, three years, every article about AI had a picture of the Terminator robot with glowing red eyes, right? And it was, robots are going to kill us off. Now we're seeing less of that; now the robots are only going to steal your job.
00;50;43;05 - 00;50;45;19
Peter Norvig
They're not going to kill you.
00;50;45;19 - 00;51;03;10
Peter Norvig
One thing I see a lot, you know, talking to the CTOs and so on: they come to me and say, well, I need to hire a PhD in AI, but I can't get any, because Google and Meta already hired them all.
00;51;03;12 - 00;51;29;20
Peter Norvig
And my colleague Cassie Kozyrkov has a great analogy. She says that's like saying, well, I'm the owner of a restaurant, and I need to hire a PhD in stove design. And the answer is, no, you don't need that. What you need is a chef who will tell you what stove to buy, understands how to operate the stove, knows what the customers want to eat, and can make that.
00;51;29;22 - 00;51;47;23
Peter Norvig
And so I think that is a flaw: some people see sort of the cutting edge of AI research and say, you know, every company has to be doing that, instead of saying every company should learn how to use the appropriate tools and fit them into what they're trying to do.
00;51;49;17 - 00;52;09;15
Geoff
I really like that. And I'm chuckling too, because I just spoke with Cassie very recently as well, and we had a great conversation; she's fantastic, and a lot of fun. So just taking that analogy to its logical end, though, if we think about who the chefs are in this AI world,
00;52;09;15 - 00;52;19;03
Geoff
are those still technologists? Are they business people? Are they somewhere in between? What, you know, skill set or role should we be looking at if we're a CTO, a CIO, someone like that?
00;52;19;03 - 00;52;26;20
Peter Norvig
That's really interesting, right? So certainly somebody who's savvy in technology can do more, faster.
00;52;26;20 - 00;52;41;26
Peter Norvig
But I think a really exciting thing is for the non-technologists to be able to get ahead, right? And I think particularly in small business. So say, you know, you want to automate your workflow in your business.
00;52;41;28 - 00;53;02;22
Peter Norvig
If you're a big company, you have an internal IT team and you give them projects. If you're a medium company, you hire Salesforce; it's more expensive, and you pay these consultants, and they do it for you. If you're a tiny company, there's nothing you can do, right? You don't have any programmers on staff; it's too expensive to afford that.
00;53;02;22 - 00;53;27;14
Peter Norvig
You can't afford Salesforce or any of their competitors, so you're stuck. But it does feel like now, if I'm in that small company and we've only got two salespeople, they can sit down together and prompt and say, here's what we do on a daily basis, and they can build something that will automate their workflow. And that was never possible before; before, you had to be a real programmer to do that.
00;53;27;14 - 00;53;37;22
Peter Norvig
Now it seems like you can have pretty good luck doing that kind of thing, and this is the first time we've seen that sort of possibility.
00;53;38;11 - 00;53;52;11
Geoff
Yeah. And it seems like a real boon, frankly, not just to any one organization but to a lot of different roles. If what everybody can do with technology increases, everybody's more enabled individually.
00;53;52;11 - 00;54;09;22
Geoff
On that note, and staying with enterprise IT, since we're talking about CTOs: one of the big trends we've certainly seen over the last couple of decades, or even beyond, is the proliferation and sprawl of corporate technology functions, corporate IT.
00;54;10;00 - 00;54;32;28
Geoff
Suddenly we have to add a whole data governance function and a whole cybersecurity function and a whole infrastructure function. In my mind, there are competing futures: as technology becomes more central, do these functions continue to grow, because suddenly we need a new layer of AI people, or do they shrink, because we can disintermediate
00;54;32;28 - 00;54;40;25
Geoff
and now the business people are their own technology experts? Or somewhere in between? What do you see as the future of this function?
00;54;40;25 - 00;54;45;21
Peter Norvig
I'm hopeful that it can shrink. And
00;54;45;21 - 00;55;03;18
Peter Norvig
as we build our tools, we make them better over time, right? So, you know, if I think back 30 years, when I was programming in C, it was easy for me to make lots of mistakes, right? I could pass the wrong type into a function and it wouldn't complain.
00;55;03;21 - 00;55;27;25
Peter Norvig
Now I get a nice compile-time error saying, you made a mistake, here's how to fix it. And so I think we're building tools that do a better job of helping us along the way. And I think with AI, it'll be possible to do that at the business level as well as just the compiler-of-code level.
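(A minimal sketch of the kind of mistake Norvig describes, using a hypothetical C function; with a prototype in scope, a modern compiler catches the bad call at compile time:)

    #include <stdio.h>

    /* With a prototype in scope, the compiler checks every call's argument types. */
    double circle_area(double radius) {
        return 3.141592653589793 * radius * radius;
    }

    int main(void) {
        double a = circle_area(10.0);  /* fine: the types match */

        /* double b = circle_area("10"); */
        /* Uncommenting the line above is the kind of mistake he means:
           pre-ANSI C, with no prototypes, would accept it silently and
           compute garbage at run time; a modern compiler rejects it at
           compile time with an incompatible-type error pointing at the
           offending call. */

        printf("area = %f\n", a);
        return 0;
    }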
00;55;27;28 - 00;55;56;15
Peter Norvig
So that suggests that maybe there'll be less of this. And I hear this at Google, from friends who have been here for decades, saying, you know, it's so much more complicated now. It used to be we just launched a product; now we have to go through the security review and the privacy review and these five other reviews, and it takes so long.
00;55;56;17 - 00;56;18;24
Peter Norvig
And, you know, one of my reactions to that was, well, how much longer does it take? And they'd say, you know, like five times longer, it's terrible. And then I'd say, well, how many more users do you reach on day one? They'd say, oh, like 100 times more users. So I'd say, well, then you're saying it's 20 times more efficient per user.
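(To make the arithmetic explicit: if launch time goes up about 5x while day-one reach goes up about 100x, then users reached per unit of launch effort rises by roughly 100 / 5 = 20x, which is the "20 times more efficient per user" figure.)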
00;56;18;27 - 00;56;37;26
Peter Norvig
And I think that's one way of looking at it. And I think we can get to the point where maybe the AI is automating more of this, where there are just company-wide policies, and you write code and it works within those policies, rather than having to have human teams reviewing everything.
00;56;38;26 - 00;56;51;04
Geoff
Right. So potentially you can get a lot more done, you can be a lot more efficient. And I want to continue down that path, because it's interesting to me that there is still a happy
00;56;51;04 - 00;56;58;22
Geoff
story and a concerning story. The happy story is, you know, we're more efficient than ever. We're able to do more with less.
00;56;58;24 - 00;57;32;28
Geoff
The concerning story is, and you touched on this earlier, Peter, that if you're Joe technologist or Jane computer scientist, well, maybe the enterprise just doesn't need you anymore, because they're running a leaner function. So if you're someone who finds themselves in this computer science field and is worried about being a winner in this space versus being disrupted, let's say, what's your guidance for those people to make sure they're still relevant and able to continue to be valued members of the team?
00;57;33;21 - 00;58;03;21
Peter Norvig
Yeah. I guess what you want to do to be valued is to understand what the goals of your organization are, fit into that, and use your knowledge to do that. And I think we had this period where, you know, people were told, well, if you become a computer science major, then you automatically get a good, high-paying job, because there's this scarcity of the ability to write code that will compile.
00;58;03;24 - 00;58;17;28
Peter Norvig
And now maybe that's not the scarcity. Instead, the scarcity is being able to understand the business need, understand the environment in which you're operating, and design something that will solve that need.
00;58;17;28 - 00;58;31;19
Geoff
I like that framing of it, and I think you and I are fairly aligned there. There's another idea that's been floating around, and I'm curious what you think of it, because we're having this conversation in the context of one organization, one enterprise.
00;58;31;20 - 00;58;53;05
Geoff
There's this sort of seductive idea of AI being able to enable, you know, a one-person, billion-dollar shop. And whether we take it to that extreme, or we just say, if you're a computer scientist, you can now be an army of one: you don't have to work for Google or, you know, name your big tech company.
00;58;53;12 - 00;59;00;02
Geoff
You could be working for yourself and, you know, now have the tools in your toolkit to do an awful lot more.
00;59;00;04 - 00;59;13;07
Geoff
Do you buy that future? Is that aligned with your view of more and more flexible, independent work, or is that kind of fanciful thinking?
00;59;14;03 - 00;59;19;25
Peter Norvig
Certainly one person can do a lot more. But
00;59;19;25 - 00;59;35;00
Peter Norvig
there's a lot to get done that one person just doesn't know, and it's not clear to me that the AI is going to know all those things. So, you know, why do companies have thousands or hundreds of thousands of people? It's because there are all these exceptions, right?
00;59;35;00 - 01;00;02;14
Peter Norvig
You've got to operate in all these countries, and they have different rules and so on. So I remember Google bought this airline travel company called ITA, and I was sort of doing due diligence on that. And one of the guys in the company was a colleague I had worked with before, and I remember him telling me, look, I'll talk honestly.
01;00;02;14 - 01;00;25;22
Peter Norvig
Google's got a bunch of smart programmers; you could build what we built, but it's going to take you a really long time. You'd get 90% of it done right away, and then there's all these exceptions, right? There's all these weird time zones, and all these weird fees and regulations that different airlines and different countries have.
01;00;25;22 - 01;00;44;16
Peter Norvig
And we've done all that, and it's not written down anywhere; you have to discover it. So, you know, the value of our company is that we've worked through this long tail of exceptions and figured it out, and there's no easy way to figure that out from scratch. And I think there's a lot of that going on.
01;00;44;16 - 01;01;07;06
Peter Norvig
And so it's going to be easy for one person to write something that will solve the easy part of the problem. The rest of it, you know, maybe some of it can get done by the AI if it's written down somewhere, but a lot of it isn't written down anywhere. And, you know, I think we're still a generation away.
01;01;07;08 - 01;01;28;00
Peter Norvig
Right. You could imagine that being solved by an agent that goes out and discovers all these exceptions by taking actions in the world. We don't have anything close to that yet. So I think, you know, for the next number of years, there's still going to be value in having a lot of people who know a lot of these sort of exceptional things.
01;01;28;00 - 01;01;44;25
Geoff
Right. No, I really like that. It sounds like it's basically a parable for the current state of anyone working with AI in an enterprise capacity: sure, it can get that 90%, but there's an awful lot of asterisks that you still need all these people for.
01;01;44;25 - 01;02;12;07
Peter Norvig
Yeah. And I remember I just read Joel Spolsky's article on why you shouldn't rewrite your software. Because he says, you know, you look at it and you say, here's this mess of stuff that seems obsolete and not worth it, and 90% of it really isn't worth it and you could throw it out, but 10% of it solves a problem that you don't understand and have never seen before.
01;02;12;10 - 01;02;13;20
Peter Norvig
But it's still a problem.
01;02;13;20 - 01;02;18;03
Peter Norvig
And you don't know which 10% is important and which 90% is junk.
01;02;18;24 - 01;02;19;18
Unknown
Yeah.
01;02;19;20 - 01;02;40;07
Geoff
Oh, that makes complete sense. It's interesting, and it certainly resonates with me. Peter, I wanted to ask about something completely different while I have you. I've heard through the grapevine that Her is your favorite movie about AI, and it's certainly a movie that I love as well.
01;02;40;07 - 01;02;51;04
Geoff
It's a movie that, when I first saw it, I remember walking out of the theater thinking, I need to think a lot more about that. And certainly it seems like the world has moved in that direction.
01;02;51;04 - 01;02;56;08
Geoff
Do you see this as a reality that we're in right now, or about to be in?
01;02;56;08 - 01;03;07;11
Geoff
And do you worry that tech companies are seeing something like that and saying, yeah, we should build that, versus taking it as a cautionary tale?
01;03;07;11 - 01;03;30;27
Peter Norvig
I mean, yeah, that's right. Anytime somebody writes a cautionary tale, there'll be some tech leader who says, yeah, I want that. And I think we already have that, right? There are lots of people whose boyfriend or girlfriend is a computer that they chat with. So we're already there to some extent.
01;03;31;00 - 01;03;58;14
Peter Norvig
I don't know if Her is my favorite movie, but you might have seen this: I did this event where we had a screening of Her, and then we had a Q&A afterwards. And I remember one of the questions was, what other science fiction movie does this remind you of? And my answer was, well, not a science fiction movie, but it really reminds me of Life of Brian, because both of them are about faith.
01;03;58;16 - 01;04;29;25
Peter Norvig
Right? So in Life of Brian, here's this schmuck, and everybody wants to believe he's the Messiah. And in Her, here's this piece of software, and the protagonist wants to believe, this is my girlfriend. And I think we're just built that way, right? Humans want to do that. So, you know, talk about whether we can build this interactive entity that people will think of as a real person and companion?
01;04;29;28 - 01;04;53;12
Peter Norvig
Well, we did that decades ago. My daughter loved her teddy bear, and it was not very interactive, and yet she loved it completely. And so we just want to project our feelings onto other people, onto other machines, onto other pieces of software. And I think that's just the way humans are built.
01;04;53;12 - 01;04;55;19
Peter Norvig
And I think Her touched on some of that.
01;04;56;09 - 01;05;06;05
Geoff
Yeah. So it's in our nature. It's not something we can avoid or do anything about; it's just something we need to understand and live with, it sounds like.
01;05;06;11 - 01;05;11;26
Peter Norvig
Yeah. I mean, I think we do want to worry. You know, there are lots of
01;05;11;26 - 01;05;23;12
Peter Norvig
things that can give you pleasure in moderation and lead to bad results when you're overly addicted to them. And we could certainly see this as being in that range.
01;05;23;12 - 01;05;26;06
Geoff
Peter, I wanted to say a big thank you for being on the program today.
01;05;26;06 - 01;05;30;08
Geoff
This has been really interesting and really insightful, and I really appreciate your time.
01;05;30;22 - 01;05;32;11
Peter Norvig
Yeah. Great talking with you, Geoff. Thank you.
01;05;33;19 - 01;06;00;21
Speaker 1
If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe!