AI Expert Dr. Michael Littman: This Is Why Everything You Know About AI Is Wrong
What if everything you thought you knew about AI was wrong?
Today on Digital Disruption, we’re joined by Dr. Michael Littman, Division Director for Information and Intelligent Systems at the National Science Foundation.
The Information and Intelligent Systems division is home to the programs and program officers that support researchers in artificial intelligence, human-centered computing, data management, and assistive technologies, as well as those exploring the impact of intelligent information systems on society. He is also Professor of Computer Science at Brown University, where he studies machine learning and decision-making under uncertainty. He has earned multiple university-level awards for teaching, and his research has been recognized with three best paper awards and three influential paper awards. Littman is a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. In the Summer of 2025, he will become Brown University's inaugural Associate Provost for Artificial Intelligence.
Dr. Michael Littman sits down with Geoff Nielson to explore the misconceptions about AI, its real-world applications, and what the future holds. Dr. Littman challenges common fears about AI replacing humans, emphasizing the importance of human-AI collaboration rather than competition. He talks about ethical considerations, the evolution of AI decision-making, and the balance between automation and human oversight. They unpack what is next in the world of artificial intelligence and why understanding the technology's limitations is just as important as embracing its potential.
00;00;00;10 - 00;00;20;28
Geoff Nielson
I'm super excited to talk to Dr. Littman today. This guy is an educator, and he works for the National Science Foundation in the US, so he sits at an intersection of understanding what needs to be done: where are we going with AI, and how do we actually disseminate that knowledge? How do we teach people and bring everyone along for the ride?
00;00;20;29 - 00;00;50;08
Geoff Nielson
So it should be a fantastic discussion. Welcome to Digital Disruption. I'm Geoff Nielson, and joining me today is Dr. Michael Littman, who is a computer scientist, educator, researcher, and author. Michael, thanks so much for joining today. Oh, it's a pleasure to be here. Amazing. You know, I wanted to ask you: you've been in the machine learning game for a long time, certainly enough time that it predates this modern bonanza, if I can call it that.
00;00;50;11 - 00;00;56;29
Geoff Nielson
What are you seeing that's different now? And what's surprised you about how this technology has evolved?
00;00;57;01 - 00;01;13;26
Dr. Michael Littman
Yeah, that's a great question. I've been tracking AI as an intense area of interest since the late '70s and early '80s. I wasn't in the field yet, I was a high school student, but nonetheless it was something that was really interesting to me.
00;01;13;29 - 00;01;42;11
Dr. Michael Littman
And when I first became a researcher, it was during the previous wave of interest in neural networks and artificial intelligence, and it was very exciting. A lot of people jumped on the bandwagon, and it had a lot in common with what we're seeing now. But then it all fizzled out. And when it all started up again in the last, I don't know, five, ten years, I had the same kind of thought that you have, which is: how is this similar to and different from, let's say, the last time?
00;01;42;14 - 00;02;05;23
Dr. Michael Littman
And what struck me is that each time we've seen AI gain a lot of attention and then lose that attention, often what happened is there was some kind of really neat breakthrough in the lab that people started to rush to apply to real-world problems. And when AI and the real world met, basically what we saw is that the real world would win and AI would lose.
00;02;05;23 - 00;02;25;04
Dr. Michael Littman
And so it would recede, because it just wasn't ready to take on the messiness of the real world. And I think what we're seeing this time is that AI and the real world are coming into contact and AI is winning. And I mean that in two ways. One is that
00;02;25;07 - 00;02;49;00
Dr. Michael Littman
the current wave of technology does seem up to the challenge of being able to work with the messy real world, which I think is great. It's fascinating, it's exciting. But also, what we're finding is that the real world wasn't quite ready for AI to be effective in that way. And I think what a lot of people are focused on now is: what can we do in the real world to make it so that things are more in equilibrium
00;02;49;00 - 00;02;58;29
Dr. Michael Littman
again, so that AI can be useful and helpful to people, but at the same time we're not undermining the foundations of society.
00;02;59;01 - 00;03;23;14
Geoff Nielson
So that AI readiness, or public sentiment toward AI, is a really interesting area. And you talk about who's winning, AI or the real world. You participated in AI100, and there was some content there at the time about public sentiment, and one of the key challenges being getting the public ready for AI.
00;03;23;17 - 00;03;32;02
Geoff Nielson
Where are we on that journey? Are we ready? How has it evolved over the last few years? And what's left to do?
00;03;32;04 - 00;03;52;07
Dr. Michael Littman
Yeah, outstanding question. I think this is very much up in the air, and it's very much a focus of a lot of people's attention right now. There are concerns that the real benefits of bringing AI into solving various kinds of real-world problems, of helping people do what they're doing but be more effective in doing it,
00;03;52;07 - 00;04;15;06
Dr. Michael Littman
do it more accurately, do it more efficiently, won't materialize. To the extent that people as a whole are wary and concerned, maybe that won't happen, right? So we'll lose this opportunity if we don't make sure that people are on board. On the other hand, there are some real concerns, and so making sure that people are appropriately wary is the challenge. There shouldn't be this sort of blind
00;04;15;06 - 00;04;33;09
Dr. Michael Littman
"let's all rush into this because the glorious future awaits." But at the same time, if we're so afraid that we're not willing to even try out some of these ideas, and risk maybe some bumps early on but then figure out how to get through them, then I think we really are missing out on an exciting opportunity.
00;04;33;09 - 00;05;05;12
Dr. Michael Littman
So one of the things that I do in my current role at the National Science Foundation in the US is look at various ways of supporting AI research as a whole, but we're also very concerned about education around AI. And that goes all the way from educating medical doctors, to help them figure out how they can be more effective in their use of AI in their practice, down to elementary school students and how they can be learning what they need to learn so they can contribute to this.
00;05;05;14 - 00;05;27;14
Dr. Michael Littman
And one of the areas that I personally am really excited about, and one of the reasons I was so glad that you guys reached out to have me on the podcast, is the notion that we need to be getting the word out to the public. At NSF we call it informal education: the idea that there's stuff that's important for us to learn that we're not learning in school.
00;05;27;17 - 00;05;44;24
Dr. Michael Littman
And my institution is really focused on trying to make that happen, but I think there are a lot of people worldwide who are realizing that this is a critical time for people to find out what they need to find out. So where are we on that journey? I think we're really close to the beginning of it.
00;05;44;24 - 00;06;07;14
Dr. Michael Littman
We're seeing lots of news stories about AI, but we're not seeing a lot of, I don't know, kids' shows about AI. I'm not really seeing people laying the foundations the way folks have traditionally done in other science areas. There's a lot more that we could do there.
00;06;07;16 - 00;06;31;25
Geoff Nielson
Yeah. And I'm just processing this absence of education right now, and this reluctance for people to adopt it, or maybe an angst about it. My sense, Michael, is that if you've been following AI since the '80s, you probably started with almost a sci-fi view of AI.
00;06;31;26 - 00;06;35;06
Geoff Nielson
Is that fair?
00;06;35;08 - 00;07;05;29
Dr. Michael Littman
I do love a good sci-fi, that's for sure. But that was never really my motivator. I mean, early on, and this is maybe saying too much about myself, I read a book in the '80s that was, in retrospect, not quite inflammatory, but it was getting people excited in a way that probably people didn't need to be excited. It was about how folks in Japan were embracing the idea of artificial intelligence and how they were going to get ahead of us in the West.
00;07;05;29 - 00;07;25;23
Dr. Michael Littman
And we were going to be left behind, and all these amazing things that were going to happen weren't going to happen. So from my perspective, it's always been about the excitement of the technology, but also the impact that it has on our society. I always thought it seemed like a really cool, impactful idea.
00;07;25;29 - 00;07;30;17
Dr. Michael Littman
And now I think other people agree with me.
00;07;30;20 - 00;07;50;05
Geoff Nielson
That's really interesting. And the reason I asked, and it's awesome to hear about that excitement, it's contagious, is that it feels like there's this underlying fear, right? Of, what is the dark side of this? How worried do I have to be?
00;07;50;07 - 00;08;22;03
Geoff Nielson
And the excitement is, in some ways, really refreshing. So to what degree is this kind of existential fear a barrier? And what is fact versus fiction here? Are you worried about us getting to, like, a singularity, or about an AI overthrow? Or is that so far removed from reality that it's not a rational point of discussion?
00;08;22;06 - 00;08;47;12
Dr. Michael Littman
No, I definitely don't want to call it irrational. I have some very close colleagues who are focused on that particular issue. So the concern, just to put it on the table, the concern that a lot of people have, is that what makes the human species so powerful in the history of our planet is our intelligence, and arguably our intelligence isn't that much greater than, say, that of chimps.
00;08;47;14 - 00;09;13;14
Dr. Michael Littman
But at the same time, if human beings decide that they don't really care about chimps anymore, chimps are going to go extinct in favor of us, right? You only need a little bit of extra intelligence to make it so that you control the destiny of other entities on our planet. And so the concern is, well, if AI is really all that smart, and it's heading in a direction where it's going to be able to outthink us and maybe even become something that we don't completely understand,
00;09;13;16 - 00;09;32;23
Dr. Michael Littman
where does that leave us as human beings in terms of our ability to control our own destiny? So I think it's fine to think about that question. And you were making a connection to sci-fi before; I think science fiction has been a remarkable way for people to grapple with some of those fears.
00;09;32;25 - 00;09;52;17
Dr. Michael Littman
I personally have always been more drawn to things like R2-D2 and C-3PO than, say, the Terminator. Right? The notion that these are machines that are here to help us do what it is that we do, and not something that's trying to subvert us and get above us. In principle, it's a possibility.
00;09;52;17 - 00;10;23;12
Dr. Michael Littman
I don't think it's something that we should be very worked up about. In 2016, which was kind of at the very beginning of this current round of excitement around neural networks, it was obvious, at least within the field, that something new was here, something exciting. And folks at New York University in the US put together a workshop where they tried to bring a bunch of AI thinkers together to talk about this concern, this fear.
00;10;23;14 - 00;10;27;23
Dr. Michael Littman
And.
00;10;27;26 - 00;10;46;29
Dr. Michael Littman
My impression at that time was, first of all, that a lot of the people who were really pushing this idea were doing so because they wanted to get attention around it. They wanted people to be thinking about it, but they were overplaying their hand in some ways, and they knew it. They were making this out to be an imminent existential threat.
00;10;47;01 - 00;11;07;19
Dr. Michael Littman
And when you actually talked to them in detail, they weren't really sure it was, but they thought it was very important, from an attention-gathering standpoint, to frame it that way. But my reaction at the time was: if you think this problem is important, and if you think our best minds should be thinking about how to solve it, fear is not a good motivator for creativity, right?
00;11;07;19 - 00;11;33;02
Dr. Michael Littman
Playfulness, I think, is a much better mindset to be in if you're trying to make conceptual breakthroughs. And so I don't think it's really smart for any of us to be so worked up and fearful about this that we stop thinking creatively. So my perspective is: yes, we should be paying attention to issues like this, but we shouldn't let them keep us up at night and dominate our thinking.
00;11;33;02 - 00;11;47;16
Dr. Michael Littman
We should be thinking about all the various kinds of implications you get when you think about, oh my gosh, intelligence is such a defining feature of who we are as a species. What does it mean for something other than us to be intelligent?
00;11;47;18 - 00;11;55;15
Geoff Nielson
With that lens of excitement, though, right? Which is so interesting, versus coming at it from a place of fear.
00;11;55;18 - 00;12;14;26
Dr. Michael Littman
Yeah, exactly right. And I think there are plenty of things that we should be concerned about. I don't think this is party time, where we should just be thinking, oh my gosh, all our problems are solved, we can just ride it out and have a great time. I think we have to be very, very thoughtful, and I think engaging people in that process is critical for us as a society.
00;12;14;29 - 00;12;36;24
Dr. Michael Littman
But I don't think it's as bad as the people who think things are super bad believe, and that's across the board. In terms of these kinds of sci-fi outcomes, I think those are overblown, and I think some of the societal outcomes are overblown as well. We're actually very adaptive as a species and as a society.
00;12;36;24 - 00;12;47;15
Dr. Michael Littman
And as long as we take things in stride and figure out how to integrate them with what we actually care about, we're going to be fine. We're going to be great. We're going to be better than we've been up to this point.
00;12;47;18 - 00;12;58;29
Geoff Nielson
So let me maybe push on that for a minute, Michael. Which societal outcomes do you think are overblown? And are there any that you don't think are overblown, that we should actually be concerned about, for better or for worse?
00;12;59;01 - 00;13;19;12
Dr. Michael Littman
Yeah. So one of the stories that I think people are grappling with tremendously is the notion that if humanity were to create a kind of artificial general intelligence, something that was smart about the world the way that people are smart about the world, then in principle it could do the same kinds of work that people do, right?
00;13;19;12 - 00;13;44;22
Dr. Michael Littman
And we'd be in a situation where there wouldn't really be any point in us working anymore, which could be great, or it could be completely horrible, depending on how things are framed and how things are structured. What I'm seeing is that there's a lot of misunderstanding about what people bring to work, what it is that they're actually doing, and we are not in a situation now where much of that can be replaced.
00;13;44;22 - 00;14;04;11
Dr. Michael Littman
I think that for the foreseeable future, the tools that we have are much better suited to supporting people who are doing work than to supplanting them. And so that's an example where I feel like every time a new idea comes out, some journalist stands up, projects it forward, and says, oh my gosh,
00;14;04;16 - 00;14;16;23
Dr. Michael Littman
there's nothing for us to do anymore, humanity is doomed. And I think that's really not appreciating the complexity of what people need to do to act in the real world.
00;14;16;26 - 00;14;43;22
Geoff Nielson
So I'm glad you brought up the work angle, because certainly in my line of work this is a conversation that comes up again and again: fear of replacement, or, why does an organization need employees if they can have AI? Now, as a computer scientist, I know you teach and you work with developers, and certainly with code generation, QA, you name it, there are a number of augmentative capabilities that AI can provide right now.
00;14;43;25 - 00;14;56;20
Geoff Nielson
How are you seeing AI transform what it means to be in the computer science space? What's changed in the outlook for that role?
00;14;56;22 - 00;15;16;00
Dr. Michael Littman
Yeah. Well, let me start with the smallest thing that is kind of a big deal, which is that the way we organize our classes in computer science generally involves explaining to students: we would like you to write a piece of code that carries out the following functionality. It's now possible to take these homework assignments verbatim and hand them to a chatbot.
00;15;16;00 - 00;15;32;05
Dr. Michael Littman
And it will write the code for you. So that, at the very least, needs to have an impact on how we're doing this teaching. No one's going to learn anything if all they're doing is cutting and pasting the homework assignment into a chatbot and then cutting and pasting the program back in the other direction.
00;15;32;11 - 00;15;50;26
Dr. Michael Littman
But that's a small thing, and there are plenty of ways to work around it, I think. My personal interest in programming has always been about thinking of it as a way of empowering people to take the tasks that they want carried out and convey them to another entity that can then carry them out on their behalf.
00;15;50;28 - 00;16;15;10
Dr. Michael Littman
That's always been my focus; that's always been what's exciting to me about computing. And I think this current round of AI gives us a whole new suite of tools that are super exciting for making it possible for all of us to express ideas to the computer and have the computer carry them out on our behalf. So that's one of the things I'm studying in my own research: okay, what does that actually look like?
00;16;15;10 - 00;16;37;22
Dr. Michael Littman
What does it mean? For 40-plus years, there have been people sending the message that, wow, programming is hard. We have to work really hard to teach people programming. It's very arcane in its communicative style, right? There are lots of parentheses and braces and commas that all have to be put into the structure very, very precisely.
00;16;37;24 - 00;16;56;01
Dr. Michael Littman
I think what we're starting to see, and this is, again, what I'm focused on in my research, is the notion that if you take all that off the table, programming is still hard. It's still hard for people to conceptualize what they want done, and then to convey it accurately enough that another person could do it, let alone a computer.
00;16;56;04 - 00;17;16;19
Dr. Michael Littman
So being able to converse with a computer in natural language is extremely powerful. But it raises the question: how good are people at telling other people what to do to carry out a task on their behalf? And one of the neat things we're finding, at least for the kinds of tasks that we've been experimenting with, tasks the computers are really good at carrying out, is that
00;17;16;22 - 00;17;37;05
Dr. Michael Littman
people who are trained in programming are better at expressing in English, in natural language, what they want to another person and having that other person carry it out, than people who have not gone through that kind of instruction, who have not really reflected on the process of breaking something down into tasks and then conveying it to, well, the computer.
00;17;37;08 - 00;17;52;29
Dr. Michael Littman
So I think it's actually going to be really neat, because it's going to be easier for people to program, since it's going to be less arcane and less symbolic. But what's revealed by that is that it's something we all have to actually work on. We have to practice.
00;17;53;06 - 00;18;05;01
Dr. Michael Littman
We have to think hard. It doesn't make the difficulty go away. It's not like, just because the computer can understand you, you don't have to put in effort to figure out what it is that you want done.
00;18;05;01 - 00;18;24;29
Geoff Nielson
Well, yeah. And that understanding part is so interesting, right? Because, as I said, I've done a bit of coding, and to me there's always been kind of a misconception, at least in the business world, that 90% of coding is just hands on keyboard, typing in lines of code.
00;18;24;29 - 00;18;44;14
Geoff Nielson
How many lines of code can you type? When really, what separates the best from the rest is the caliber of that logical reasoning, and how you frame what you want. And if I understand you correctly, it sounds like in some ways that's actually more important than ever in this world of AI.
00;18;44;16 - 00;19;04;16
Dr. Michael Littman
That's exactly what I'm starting to see. Yeah. And as you say, logical thinking is part of it, but part of it is just being really organized, right? My daughter, for example, is a theater and television production person, and I've watched her organize a group of people to put on a musical.
00;19;04;18 - 00;19;22;14
Dr. Michael Littman
And so much of what she had to do just to organize that group looked so much like programming to me. Because you have to conceptualize what all the different modules are, what their responsibilities are, what they're tasked with, what they need to worry about, what's different from what others have to worry about, and how they're going to communicate with each other.
00;19;22;21 - 00;19;45;09
Dr. Michael Littman
So much of what it means to build big, complicated programs is just getting that organizational structure right. And so, again, as a species, programming is just one of the things that some of us do, but organizing and working with other people is something we all do. It's sort of inherent in the DNA of our species.
00;19;45;12 - 00;20;05;25
Dr. Michael Littman
And so I'm hoping that if we put more effort not into teaching people to code in the sense of getting all your parentheses to balance, but into teaching people to code in the sense of creating organizations that are effective at doing whatever it is you need them to do, that's going to be good for us on the tech side, but also really good for us on the social side.
00;20;05;28 - 00;20;15;07
Dr. Michael Littman
It'll help us work together, and all the really hard problems in our world right now boil down to: we need to figure out how to work together more effectively.
00;20;15;09 - 00;20;31;01
Geoff Nielson
Yeah. Wow. It's such a mindset shift from what you traditionally think of as computer science. Or maybe you've been this way all along and the rest of us are just catching up. But as you put that into practice as an educator, has that already started?
00;20;31;01 - 00;20;36;11
Geoff Nielson
And what are you finding as students go through this new process?
00;20;36;14 - 00;20;56;22
Dr. Michael Littman
Yeah, fantastic question. In my own journey, a lot of this stuff really started to become a big deal when ChatGPT was released, and it became obvious that anybody could interact with this sort of technology and get something interesting out of it. I was already doing my rotation at the National Science Foundation at that time.
00;20;56;22 - 00;21;22;28
Dr. Michael Littman
So I haven't taught a single class since ChatGPT came out, but I am in touch with my colleagues back at the university, and it's challenging. There's the nuts and bolts of, as I mentioned before, how we can restructure our homework assignments so that they're still meaningful. But then it really does, as I think you're raising, raise the question of what we need to teach people and how we can teach it most effectively.
00;21;22;28 - 00;21;47;00
Dr. Michael Littman
And I think those conversations are still quite early. I know my colleagues at the National Science Foundation are really interested in understanding that as well, so that we can help support those efforts with grants and various kinds of organizational support, so that people can study this. But I think across the world, people are trying to address that question in their own way.
00;21;47;00 - 00;22;07;20
Dr. Michael Littman
So I don't have any answers yet. It is the case, though, that when I rotate out of the National Science Foundation this summer, I'm going to be going back to my university, and I will be serving as the inaugural Associate Provost for Artificial Intelligence. As few people as know what AI is, even fewer know what a provost is.
00;22;07;23 - 00;22;28;11
Dr. Michael Littman
So I don't know exactly what that's going to entail, but the effort is about trying to coordinate all these kinds of questions at the university level. What is it that people need to know in the computing area? But what do they need to know in history as well? How does AI impact that? I am super jazzed about that.
00;22;28;13 - 00;22;42;22
Dr. Michael Littman
Working with my colleagues across the whole planet to try to figure out the best way of making this transition we're having now as effective as possible. Really bringing out our humanity: what is it that we want, and how can we use this technology to bring it about?
00;22;42;25 - 00;23;03;29
Geoff Nielson
Yeah. So my wheels are turning, and I'm thinking about how you put that into action, and about the intersection of the education piece with the workforce and what organizations need in this new world. One of the phrases or roles that's gotten a decent amount of attention these days is prompt engineering.
00;23;04;04 - 00;23;16;01
Geoff Nielson
In this age of AI, suddenly we need prompt engineers. Is that a role? Is that a skill? Is it technical? Is it non-technical? Is it a fad? What's your take on that?
00;23;16;03 - 00;23;37;26
Dr. Michael Littman
Yeah, it's really interesting. So what we've seen on the technical side over the past maybe seven-ish years, and this is sort of pre-ChatGPT, once these language models were starting to get built, is that they were surprising, really amazing from a technical standpoint early on.
00;23;38;01 - 00;23;56;10
Dr. Michael Littman
But they weren't really good at what they were doing in the beginning. It wasn't anything that was ready for prime time, for regular people to use. But what the researchers started to discover is that phrasing questions to them in different forms could have a huge impact on how effective they were. They were actually, in some ways, very, very brittle.
00;23;56;10 - 00;24;15;29
Dr. Michael Littman
If you didn't ask the question the right way, you got nothing good. But if you could just phrase it, prompt it, in the right way, suddenly it unlocked all this potential, all this power. And that was visible to people, I don't know, maybe ten years ago, when that concept started to come into play.
00;24;16;01 - 00;24;34;07
Dr. Michael Littman
We're still seeing it today. It is still the case that there are people who can make better use of a chatbot than others by being smart about the way that they phrase things. But in the long run, I think it comes back to what I was saying before: what is it about the prompting that's really helpful?
00;24;34;07 - 00;24;55;26
Dr. Michael Littman
Some of it is just a trick; some of these models have certain words that they're just very responsive to. But part of it is taking what you want it to do and laying it out really clearly. So I think in the long run, what we now call prompt engineering, or what is starting to be called prompt engineering, just turns into expressing yourself clearly.
00;24;55;28 - 00;25;10;27
Dr. Michael Littman
Yeah, right. And if you've ever tried to be a writer (as you mentioned before, I wrote a book), well, a lot of people in journalism have spent a lot of time trying to hone that craft. Writing is hard. What is it that's hard about writing?
00;25;11;02 - 00;25;32;22
Dr. Michael Littman
It's hard because you're actually taking all this complicated stuff that's going on in your head and linearizing it, right? Turning it into a string of tokens so that when they land in the head of someone else, they trigger the right thoughts, the right ideas, the right expectations. That's remarkably hard. Writing is hard. Prompting is hard.
00;25;32;25 - 00;25;53;06
Dr. Michael Littman
Prompting other people is hard, and I think, in the limit, prompting machines is hard in the same way. It's really about putting together your thoughts in a way that paints the right picture. It's mind-blowing to me that we can have this conversation now, because it wasn't that many years ago that people in my field didn't have to think about that.
00;25;53;12 - 00;26;17;02
Dr. Michael Littman
It was not the case that we needed to figure out how to tell the computer something in a way that would land, that would be expressive to the machine. But that's how people are thinking about it now. It's very, very similar to, or maybe the best model we have is that it's like, talking to a person, where you have to be clear and articulate and potentially bring up the right metaphors.
00;26;17;04 - 00;26;25;08
Dr. Michael Littman
And that's a new space for us to be in. But I think, at the end of the day, that's where all this is heading.
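To make that last point concrete, here is a minimal sketch of the vague-versus-clear prompting contrast described above, using the openai Python client. The model name and the example task are illustrative assumptions, not details from the episode; any chat-style model and task would show the same effect.

    # Minimal sketch: "prompt engineering" as expressing yourself clearly.
    # Assumes the openai Python package is installed and OPENAI_API_KEY is set;
    # the model name and the task below are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()

    # A vague request: the model has to guess the scope, audience, and format.
    vague = "Tell me about sorting."

    # The same request laid out clearly: task, audience, structure, length.
    clear = (
        "Explain merge sort to a first-year programming student. Cover "
        "(1) the divide-and-conquer idea, (2) the worst-case time complexity, "
        "and (3) a short Python example. Keep it under 150 words."
    )

    for prompt in (vague, clear):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; substitute any available model
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)
        print("---")

Both calls hit the same API; the only thing that changes is how clearly the task is laid out, which is the skill Littman argues prompt engineering reduces to.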
00;26;25;10 - 00;26;45;19
Geoff Nielson
Yeah, it's interesting, because when you were talking about the brittleness of it, and that you have to ask the right question, reflecting on it, some of it feels new and some of it feels not new. Like, garbage in, garbage out, right? We've been saying that probably since the '80s, if not longer.
00;26;45;21 - 00;27;18;08
Geoff Nielson
But it's just never been like this before, right? It's always been this kind of abstract technical thing, versus this conversational, non-technical piece. So it's probably too early to ask you this question, but as you settle into your new role, and as you think about the field of computer science and what it means to be a computer scientist or someone who uses these technologies, does that field need to expand?
00;27;18;09 - 00;27;25;24
Geoff Nielson
Does it need to shift? What could that look like two to five years from now?
00;27;25;26 - 00;27;52;03
Dr. Michael Littman
Yeah. So one of the ways that we conceptualize that very question at the US National Science Foundation is to think about education: different kinds of education, different audiences for the education. There's always going to be a need for people who really understand the guts of the technology, because the technology isn't going to mature, grow, or turn into other wonderful things unless some people really know it at that nuts-and-bolts level.
00;27;52;05 - 00;28;15;02
Dr. Michael Littman
Then there are folks who need to understand and appreciate the technology because they're using it to solve other highly technical, difficult problems. We have to get them to understand it well enough that they know its limitations and its affordances, sort of what it's good at and what you can lean into. So that's another level of education we have to pay attention to.
00;28;15;02 - 00;28;35;28
Dr. Michael Littman
What do biologists need to know about this? What do doctors need to know about this? And then there's: what does everybody need to know? They're not necessarily doing technical work, but so much of the infrastructure of our society is coded in this way, and you can't really be an effective person unless you have some level of understanding about what's going on.
00;28;36;00 - 00;28;50;17
Dr. Michael Littman
So all those levels are really important, and they're all different, because the expectations of what people know coming into the conversation are different, and what they want to use it for is different. And that means really tailoring the education to these different levels.
00;28;50;20 - 00;29;15;21
Geoff Nielson
Yeah. So as you tailor that education to different levels, as you said, you've got everything from people who just need general education to specialized education. You've got people who are afraid of AI and people who are excited about AI. How are you clustering this? What are the key ways you're conceptualizing the different groups to be educated?
00;29;15;21 - 00;29;24;04
Geoff Nielson
Is it age brackets? Is it use case? What are the macro-level patterns we're seeing?
00;29;24;06 - 00;29;39;22
Dr. Michael Littman
Now, that's really interesting. To me, the top level is what I described before: what level of depth about the technology do you really need to have mastered to be effective at what you're trying to do? But you're absolutely right that cutting across that are things like age, right?
00;29;39;22 - 00;29;58;27
Dr. Michael Littman
So it is the case that K-12, elementary-level, primary school people need some exposure to these things so that they have the right conceptual structures moving forward, so that they can bifurcate into one of these, say, three groups as adults. So there is a difference at the kid level.
00;29;59;00 - 00;30;27;26
Dr. Michael Littman
I think there are potentially issues at the nation level as well. Different countries, different socioeconomic groups, require different interventions to really help them get on board with some of this stuff. And then one of the things that I've personally been really excited about, and hope to find some way to support, is the notion that it's not like you educate people about AI and then they know AI and we never have to talk to them again.
00;30;27;28 - 00;30;49;00
Dr. Michael Littman
If nothing else, you know, disruption, right? Digital disruption. I really believe very deeply in the idea of digital disruption. And one of the things that AI is providing is a set of capabilities that are useful today, but also the ability to do continual digital disruption. The world can keep changing, keep being updated.
00;30;49;00 - 00;31;21;28
Dr. Michael Littman
New ways of applying this are coming out all the time. So the idea that we can just educate people and then stop doesn't make any sense to me. We need almost a knowledge distribution network, a two-way knowledge distribution network, where really important ideas about those three levels, the technology, the use of the technology, and the engagement with the technology, are constantly being updated as new ideas come out, and information is flowing back from all those groups to say: these are the things that aren't really working for us.
00;31;21;28 - 00;31;39;08
Dr. Michael Littman
You know, this change happened, but now suddenly we can't do this other thing anymore. So I think of it as an analogy with the way that we distribute software today. In the old days, it used to be: you write a piece of software, you send it out, people use it, and maybe someday someone else will write another piece of software.
00;31;39;15 - 00;31;58;26
Dr. Michael Littman
But that's not how it's done today. Now a piece of software goes out, and then security bugs are discovered, or just behavioral bugs, and there's a way for the developers to make a change to the code and then make sure that change propagates out through the network, so that the people who are depending on that code have the best version of it.
00;31;58;28 - 00;32;27;05
Dr. Michael Littman
And I see an analogy between that and the AI knowledge that needs to go out there. We need a way of distributing it, and we need a way of doing bug reports, right? Things that aren't working need to make it back to the developers, the people who are actually creating this technology, so that there's a constant dialog, so that we can always be disrupted in the sense that things change, but not disrupted in the sense that it derails us and makes us ineffective.
00;32;27;07 - 00;32;47;08
Geoff Nielson
Right. And that dissemination, that ongoing updating: has there been any discussion about how you apply that? What does that look like in practice? Is it public sector? Is it higher ed? Is it the actual tech companies? How do you make that work?
00;32;47;11 - 00;32;48;11
Geoff Nielson
I don't know.
00;32;48;14 - 00;33;11;18
Dr. Michael Littman
It's a really good question. I've been taking every opportunity I have to bounce that idea off people, because I'm hoping somebody will see an angle of it that will be really helpful. I'm not sure. I think we have various mechanisms today, things like journalism, newspapers and magazines and podcasts and scientific papers.
00;33;11;18 - 00;33;33;23
Dr. Michael Littman
We have all these mechanisms for getting information out, and they show us some of the ways that ideas can actually propagate, because ideas do propagate. But I think the time constant is off relative to how fast this technology can change. We don't have good dissemination mechanisms that can get the news out as quickly as we want.
00;33;33;23 - 00;33;51;27
Dr. Michael Littman
So I think that's on all of us to be thinking about. One of the ways I think about it is: could we have intermediate experts? I talked before about three classes of people, three levels of depth with the technology.
00;33;51;29 - 00;34;13;04
Dr. Michael Littman
Could we have folks who sit in the interstices between some of those, folks who are not developing the technology but know it well enough that they can serve as intermediaries to the people who are putting it into practice? Something to fall into that crack. Could that be a new, almost professional sector, where we support people to be those intermediaries?
00;34;13;07 - 00;34;24;17
Dr. Michael Littman
We have something like that in computer networks: there are computers that just sit there helping to move information back and forth. They're great computers, but their job is to make that connection. And so maybe we can have people like that.
00;34;24;22 - 00;34;47;03
Geoff Nielson
I'm going to take this in a weird direction, Michael, but the parallel I saw is: are there, like, high priests of AI? Is it almost like a religious distribution network, where you've got the pope and you've got cardinals, but then you have local churches and people who in some way get it? Is that kind of what you're thinking, these kinds of nodes?
00;34;47;06 - 00;35;05;13
Dr. Michael Littman
I had not made the connection to religion before; I will have to think on that a little bit. But I think a lot of our human organizations are like that, right? We have nation-level governance, but then there's region-level governance as well. There are provinces in Canada, there are states in the United States.
00;35;05;15 - 00;35;36;26
Dr. Michael Littman
The European Union has countries, or whatever it is, and then it goes down to cities, and to local areas within those cities, and the organizations within them. So yes, we absolutely have things like that. I think that's what we have to leverage: the fact that people do seem to naturally organize themselves into those kinds of patterns. But we need to really enable those patterns, to make the communication between the levels faster, more efficient, and more accurate, so that we really can activate people relatively quickly.
00;35;36;28 - 00;35;50;19
Dr. Michael Littman
I mean, there are some things that we just don't know how to speed up. It takes a while for people to absorb a new idea. It will take me a while to think about your religion comment, right? I can't do it on the fly. And that's going to set the pace in many ways.
00;35;50;21 - 00;36;07;24
Dr. Michael Littman
But I think we are far from that limit at the moment. We're not at the level of how quickly individuals can absorb things; we're at the level of how quickly an organization can distribute the knowledge and absorb things. And I think we can make that considerably faster than it is now.
00;36;07;26 - 00;36;28;15
Geoff Nielson
So there's a flip side there, and the whole area is super interesting. What triggered me to it was that you were talking about journalism, and about information and knowledge in general. To what degree is there an AI literacy angle that needs to be in place here?
00;36;28;15 - 00;36;45;19
Geoff Nielson
We talk about sharing the information; are you concerned about it being misappropriated or something? Do we need to instill these baseline AI literacies in people so they're able to absorb this?
00;36;45;21 - 00;37;09;16
Dr. Michael Littman
Yeah, I definitely believe that. I definitely think this is the role of informal education to some degree, early education to another degree, continuing education to another degree. But it has to be the case that our public institutions are supportive of that sort of thing. And I don't want to get too close to, how do you organize a society to be...
00;37;09;19 - 00;37;12;03
Geoff Nielson
Effective? Since there's no political slope there, right?
00;37;12;03 - 00;37;29;26
Dr. Michael Littman
Yeah. But I think it really does come down to that: how do we organize ourselves as people? And if we do that in a way that is conducive to this kind of knowledge sharing, then I think we're okay. But all these technologies are really powerful, all the way down to language itself, right?
00;37;29;26 - 00;37;50;10
Dr. Michael Littman
Language itself can be used for good and for evil, and it's really important that we all, as individuals, are constantly on guard: am I ready to absorb this language? Is it really going to make things better, or is it going to do something bad to me? The same thing is true of knowledge about these kinds of technologies.
00;37;50;10 - 00;38;00;23
Dr. Michael Littman
So I think it's maybe more acute now, but it's not new. It's not unheard of. This is something we've always had to be watchful about.
00;38;00;25 - 00;38;19;20
Geoff Nielson
Yeah, that makes complete sense. I could follow this trail for a very long time, Michael, but I'm trying to be conscious of your time as well. I wanted to shift gears a little bit and talk more about the work you're doing with the National Science Foundation. You've got a natural excitement about all of this through your work there.
00;38;19;22 - 00;38;27;16
Geoff Nielson
Were there any findings that were particularly exciting for you, that you were maybe not exposed to before you got into this line of work?
00;38;27;18 - 00;39;05;28
Dr. Michael Littman
Yeah, that's a really interesting question. The way the National Science Foundation supports AI research is quite broad-spectrum. There are individual projects, where a single researcher and that researcher's students are funded to study something connected with AI in some way; that's at a very fine-grained level. At a much broader level, there are things like the national AI institutes, which kicked off around 2020, when language models were really starting to become a thing and it was sort of obvious that AI was going to be disruptive.
00;39;06;05 - 00;39;31;04
Dr. Michael Littman
And there wasn't really a way for folks outside of computing to participate in the process. So the NSF created these large-scale, 20-million-US-dollar institutes, each focused on some kind of thematic area. And that's been really neat, because there are folks, for example, studying AI in agriculture; we have multiple AI institutes studying agriculture.
00;39;31;10 - 00;40;02;10
Dr. Michael Littman
They're studying AI in weather, in meteorology and communicating about meteorology, and AI and decision-making in the context of natural disasters. There are all sorts of really interesting angles, and it's been exciting to see how some of these institutes have really taken the ball and run with it and done very exciting things. Another thing I've been very pumped about is the notion that, well, my understanding is that Canada has already been doing something like this, and I need to understand more about it.
00;40;02;17 - 00;40;30;09
Dr. Michael Littman
But in the US, we haven't really had a way for researchers to say: hey, I'm doing this kind of breakthrough AI work, but I need more computers, I need more data, I need more resources to be able to do it. And I think lots of folks have noticed that some of the splashy new findings have been coming out of industry, where they have the resources to just throw at the problem and scale it up massively.
00;40;30;11 - 00;40;51;27
Dr. Michael Littman
What can we do to help academics, researchers, people in the public eye, keep pace with that and maybe even drive some of the direction of that research? And so a national AI research resource was envisioned: basically, an ability for people to apply for resources and then get those resources to apply to the work that they're doing.
00;40;52;00 - 00;41;14;15
Dr. Michael Littman
And in my time at the NSF, which will be coming up on three years, I've seen it go from basically an idea written down in a report to a pilot that we're actually running right now, jointly with a bunch of US government agencies and also tech companies, working together to try to make resources available to the research public.
00;41;14;18 - 00;41;22;21
Dr. Michael Littman
And I'm hoping, before I go, to see it actually turn into a fully funded, scaled resource.
00;41;22;23 - 00;41;35;18
Geoff Nielson
That's super cool. I had no idea. So it's almost like a partnership, almost like a marketplace: where do people need resources? Who's willing to help them out? And how can we collaborate to drive the technology forward?
00;41;35;18 - 00;42;00;17
Dr. Michael Littman
Yeah. And it's really interesting to me as a computing person, because it turns out that computing has been supporting other sciences, biology and geology and engineering, mathematics, physics, for a long time. There's actually a sub-community of computing researchers who just help run the computers that the other scientists need to make their breakthroughs, and who help analyze their data, move the data around, get insight into the data.
00;42;00;20 - 00;42;22;16
Dr. Michael Littman
And computing, the people doing research on computing itself, has generally said: no, thank you, we don't need that. For various reasons. Some of it is, well, that's not us, and we could do it better than you guys can. But I think what we're starting to realize is that this kind of scaling up of data, and extracting value out of the data, really requires shared resources.
00;42;22;16 - 00;42;41;13
Dr. Michael Littman
And so we're seeing, just in the three years that I've been at the NSF, kind of a culture shift. We're moving towards the notion that we could actually be sharing large-scale resources, and that we can invest as a large community in creating things kind of like giant telescopes. The astronomers are really good at sharing their resources.
00;42;41;13 - 00;42;47;28
Dr. Michael Littman
So they have a telescope that everybody can make use of to make their discoveries. We're starting to learn that we need that in computing as well.
00;42;48;00 - 00;43;12;16
Geoff Nielson
I love that. For me, one of the most exciting pieces of that is, one of my personal concerns with AI is that it feels like it's become more and more walled gardens inside some of these tech megacorps, if you'll indulge me and call it that. And the idea that we can, as a broader community, start to get those economies of scale, that's massive.
00;43;12;16 - 00;43;16;16
Geoff Nielson
And then it really feels like a force for good.
00;43;16;18 - 00;43;21;17
Dr. Michael Littman
I feel the same way. So I'm hopeful that this is something that's going to move forward.
00;43;21;19 - 00;43;41;11
Geoff Nielson
Oh, that's awesome. That's so amazing. Before I forget, we did briefly talk about AGI as something that's maybe proverbially around the corner, maybe not. Do you have an outlook? Is it realistic? Is it coming soon? Is it time to think about it seriously? Where are we sitting?
00;43;41;14 - 00;44;04;06
Dr. Michael Littman
It is definitely an exciting time. When I first became an AI researcher, let's call it decades ago, it was never talked about. In the very early days of AI, when the community first named itself, there was a sense in which, yeah, obviously the goal is that we're going to create something that is just generally intelligent.
00;44;04;06 - 00;44;21;23
Dr. Michael Littman
It has perception and decision-making and language, and it's all going to be integrated into one system. And then the field went out and sort of balkanized. It was too hard to solve that whole problem all at once, and it made much more sense to drill down and say, I'm just going to make one that's really good at math, right?
00;44;21;25 - 00;44;50;06
Dr. Michael Littman
It's not going to be able to recognize a picture of a dog in a photo, but it's going to be really great at mathematical reasoning. And so all these different kinds of stovepiped research and discoveries took place. What we're starting to see today is that the same technology, this neural net technology that underlies chatbots, is actually really useful for perception and decision-making and action and language.
00;44;50;06 - 00;45;17;21
Dr. Michael Littman
It's one kind of fabric that cuts across all these different areas of artificial intelligence. So it's very sensible to start asking: well, maybe we can bring all these pieces together again and have the kind of unified intelligence the founders of the field first envisioned. And that's what I think a lot of people in the field think of when they hear AGI. It's absolutely the right time to be thinking about that kind of integration.
00;45;17;21 - 00;45;44;12
Dr. Michael Littman
It's perfect. Is this something that's going to change society and the way that people live? Possibly; some people should be thinking about that too, for sure. But is this really right around the corner? What we've seen time and time again in the history of artificial intelligence is that there's a lot more to being an intelligent person than it appears on the surface.
00;45;44;12 - 00;46;07;06
Dr. Michael Littman
So much of what we do is outside our ability to kind of reflect on it, that it just seems like it's just there, it's just easy, right? For the longest time in AI, it was easier to make a chess program that could play with the strongest players in the history of humanity than it was to make a program that could read a children's story and understand what the moral was.
00;46;07;12 - 00;46;32;00
Dr. Michael Littman
Right? Things that we think of as really easy have traditionally been very hard for machines, and vice versa. So this notion that we've actually maybe started to crack that, and that we can start to address some of the harder problems, is valid. But I don't think we've gotten to the bottom of, well, how hard is it really to do this in a way that is as rich and sophisticated as the way people do it?
00;46;32;03 - 00;46;33;15
Dr. Michael Littman
I don't think we're there yet.
00;46;33;18 - 00;46;54;27
Geoff Nielson
Well, you know, what I'm hearing as well, Michael, is that it sounds like we don't even understand our own intelligence and how that works well enough to make an educated guess. Like, to be able to say this is when we'll have, you know, AGI, to answer that question we'd have to know exactly what intelligence is.
00;46;54;27 - 00;46;56;24
Geoff Nielson
And we're not close enough to that answer to...
00;46;56;26 - 00;47;16;02
Dr. Michael Littman
I think that's right. And I think there are some people, some sort of boosters of the idea, who are like, well, we don't really have to understand it at all, we just have to build the machine bigger and it's going to happen. Yeah, and that may be true, but I think the way you said it is great, which is that we won't really know that it's happened unless we have some understanding of what this landscape actually is.
00;47;16;08 - 00;47;38;18
Dr. Michael Littman
Right. So we may create it by accident. But the more likely scenario is that we're going to fall well short of it, because we don't know what the terrain actually looks like. And so to me, that's kind of the next wave of exciting breakthroughs that we're going to see: now we have this artifact, right, this machine that can do so much of what we think of as human intelligence, but not everything.
00;47;38;20 - 00;47;59;27
Dr. Michael Littman
It allows us to ask in a more scientific way, okay, what did we really mean when we said intelligent, right? It's been the kind of purview of philosophers; it can now become something much more empirical. And so I'm hoping that we come out of that with a better understanding of ourselves, because I think for a lot of us, that's why we do science in the first place.
00;48;00;03 - 00;48;16;20
Geoff Nielson
Yeah, yeah, it's a crazy prospect. It's so interesting. And, you know, to think that we're kind of at that frontier point where, whether we get there or not, we're still pushing the boundaries of understanding ourselves. It's super, super cool that we're...
00;48;16;25 - 00;48;27;11
Dr. Michael Littman
Yeah, that we're at a place where we can start to formulate the question, right? Which is not all the way to an answer, but it's way farther than we were not too long ago.
00;48;27;14 - 00;48;42;24
Geoff Nielson
Right. Do you have any predictions, Michael, for, you know, what is the next ChatGPT? Not necessarily the exact piece of technology, but what kind of functionality or approach is going to be the next kind of breakthrough in this space?
00;48;42;26 - 00;49;10;21
Dr. Michael Littman
Yeah, I mean, a lot of people these days are talking about agents as if that were a new thing, though colleagues of mine have been working in the area of intelligent agents for decades. I think the notion of agency is about these chatbots not just having a conversation with you, but being able to actually make a decision and then act in the world in some way, like help you make a reservation or help you kind of organize more real-world things.
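
To make that concrete, here is a minimal sketch of the agent pattern being described: a model that does not just produce a reply, but chooses an action and executes it. This is purely illustrative; chat_model, book_restaurant, and run_agent are hypothetical names, and the model call is stubbed out rather than a real LLM API.

    import json

    def chat_model(prompt: str) -> str:
        # Stand-in for a real LLM call; here it always "decides" to book a table.
        return 'book_restaurant {"name": "Luigi\'s", "time": "19:00"}'

    def book_restaurant(name: str, time: str) -> str:
        # Pretend side effect; a real agent would call an actual booking API.
        return f"Reserved a table at {name} for {time}."

    TOOLS = {"book_restaurant": book_restaurant}

    def run_agent(user_request: str) -> str:
        # 1. The model decides on an action, not just a sentence.
        decision = chat_model(f"Request: {user_request}\nPick a tool and arguments.")
        tool_name, _, raw_args = decision.partition(" ")
        # 2. The agent acts in the world by executing the chosen tool.
        #    A fuller agent would loop: observe the result, re-plan, act again.
        return TOOLS[tool_name](**json.loads(raw_args))

    print(run_agent("Get me dinner reservations for tonight."))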
00;49;10;23 - 00;49;25;28
Dr. Michael Littman
And so I do think that that's something we're moving towards. I think the bigger breakthrough is when we start to have these machines that are structured in a way... I mean, think about starting a new job. So, you're smart.
00;49;25;28 - 00;49;26;27
Geoff Nielson
You're...
00;49;27;00 - 00;49;42;18
Dr. Michael Littman
You're educated. Say you even went to college. You're now going to go out into the world and get a job. It's not like you walk in the door and you just start to do that job, right? It's not like somehow you knew all the pieces of the job. There's kind of a getting on board with the job.
00;49;42;18 - 00;50;07;21
Dr. Michael Littman
Like, how do we talk about it here? What is it that we're actually trying to do? How do we weigh these things against one another? And we don't yet have these machines that can kind of enculturate like that, right? That can actually be dropped into a new circumstance and then start to take on the values and the goals and the objectives of the organization that they sit in, because that involves a kind of intermediate level of learning.
00;50;07;21 - 00;50;35;10
Dr. Michael Littman
We're really good now at pre-training, where we actually create a neural network that's trained on, like, the entire internet, all this data. But that doesn't learn anything new, right? Because once it's trained, it's fixed; it knows what it knows at that point. But then some of these systems can actually kind of learn. They call it in-context: in the context of going back and forth with them, you can give them a piece of information, and they can use that piece of information for some time.
00;50;35;12 - 00;51;05;08
Dr. Michael Littman
But what we don't really have is this kind of intermediate level of learning, where you can learn about a thing, and then six months from now, you still know that thing, right? A new fact or a new process or a new relationship can be maintained. It's just not there yet. And so I think that's going to require the companies, and the researchers, who are doing this sort of stuff to really engage with what it means for an organization to want to onboard a new person.
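
As a rough illustration of the three levels of learning being contrasted here, consider the toy sketch below. The Assistant class and all of its names are hypothetical, not any real system's API; the point is only that today's systems have the first two stores but not a reliable third.

    class Assistant:
        def __init__(self, pretrained_knowledge: dict):
            # Level 1: pre-training. Fixed once training ends; the system
            # "knows what it knows at that point."
            self.weights = dict(pretrained_knowledge)
            # Level 2: in-context learning. Facts given mid-conversation are
            # usable now but vanish when the context resets.
            self.context: dict = {}
            # Level 3, the missing piece: durable memory that survives across
            # sessions, like an employee who stays onboarded.
            self.long_term: dict = {}

        def tell(self, key, value):
            self.context[key] = value  # remembered for this session only

        def new_session(self):
            # In-context knowledge is lost; consolidating it into long_term
            # before clearing is exactly the open research problem.
            self.context.clear()

        def recall(self, key):
            for store in (self.context, self.long_term, self.weights):
                if key in store:
                    return store[key]
            return None

    a = Assistant({"capital_of_france": "Paris"})
    a.tell("office_wifi", "Guest-5G")
    print(a.recall("office_wifi"))  # found: within the session it's known
    a.new_session()
    print(a.recall("office_wifi"))  # None: six months later, it's gone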
00;51;05;11 - 00;51;23;16
Dr. Michael Littman
I think that's going to be, I don't know, much less sci-fi, right? It's just sort of blah, like, oh yeah, onboarding, like with HR, like the most boring thing possible. But that's where the value actually starts to come into play, and it's not just a cool toy but something that is helping people get their work done.
00;51;23;18 - 00;51;26;27
Dr. Michael Littman
That's a stage I think we haven't reached yet.
00;51;27;00 - 00;51;46;10
Geoff Nielson
Yeah. And it's super, super interesting. And you taught me a new word along the way there as well, which is, like you said, enculturation, which to me is like that cultural localization of AI, right? It's not just, oh, I operate out here and you can have that in your organization. It's: what do you need?
00;51;46;12 - 00;52;03;24
Geoff Nielson
What does it actually mean here? And yeah, I mean, if we can get to that point, and that's part of, like, agentic AI, that's going to be huge. Michael, do you want to talk at all about what you're going to be doing in your new mandate at Brown, or is that still a bit too early?
00;52;03;27 - 00;52;24;13
Dr. Michael Littman
No, I'm happy to mention it. I don't think there's much to say, in the sense that it's very much like the conversation that we've been having. I think a lot of organizations are excited about AI, maybe a little bit afraid of AI, and they're not really sure how to take it on. And so all the individuals in the organization are doing, you know, what they need to do to get their work done.
00;52;24;15 - 00;52;43;16
Dr. Michael Littman
But there isn't a lot of coordination at the organization level. And so that's mainly what I see my role as: to help the organization as a whole make the best use of this moment, right? And the questions you've been asking are fantastic questions, right?
00;52;43;16 - 00;53;06;05
Dr. Michael Littman
These are the exact things that we're all trying to grapple with. I'd like to say that Brown's take on it, my home institution's take on it, is very holistic, right? So it includes not just the research about AI, which I think is obviously critical and is what I've spent my career doing, but also the implications of AI for teaching, and the implications of AI for other fields of study.
00;53;06;05 - 00;53;26;20
Dr. Michael Littman
Right? So, if you're not studying AI, how can AI help you do the work that you do better? And then there are things about just the operations of the campus as a whole, right? So there's admissions, there's, I don't even know all the names yet, but there's the physical plant, there's maintenance of the buildings.
00;53;26;20 - 00;53;45;24
Dr. Michael Littman
There's the energy use of the buildings. These are all things that could benefit from better decision making, better data flow, AI essentially. And so I'm, yeah, really excited to see some of these ideas that we've talked about and been motivated by in the field for so long. We might actually get to see some of the benefits soon.
00;53;45;27 - 00;54;01;29
Geoff Nielson
Yeah, that's so fantastic, and it's so exciting. And yeah, I'm excited to catch up with you, I don't know how long down the road, when you can talk a little bit more about the impact that it's having, because it's just such an exciting time, and there's no shortage, it seems, of opportunities.
00;54;01;29 - 00;54;11;14
Geoff Nielson
And yeah, I just love the education piece that you're doing. It just seems like it's ripe for the opportunity for impact.
00;54;11;17 - 00;54;12;29
Dr. Michael Littman
Outstanding. Thanks.
00;54;13;02 - 00;54;31;10
Geoff Nielson
So, having been working in this space, or at least interested in the space, since the 80s, could you ever have envisioned the trajectory of this, the pace of this? How does it compare with your wildest dreams when you first got into this space?
00;54;31;13 - 00;54;53;15
Dr. Michael Littman
Artificial intelligence is a field that was forever ten years away from the big breakthrough, right? And that was something I noticed after I had been in the field for ten years: wait a second, that wasn't true, it's still ten years away. The fact that that horizon actually does seem to have meaningfully shrunk in the last few years is stunning.
00;54;53;17 - 00;55;21;02
Dr. Michael Littman
That said, it's really interesting, because if you read some of the early work of Alan Turing, who was one of the founders of all of computer science, but also a leading thinker about what artificial intelligence was envisioned to be back in the 40s, he lays out in a couple of places, well, I don't know how to do this, but if I had to do it, here's how I would do it.
00;55;21;08 - 00;55;37;19
Dr. Michael Littman
It's so similar to what actually is happening, right? And that wasn't the norm. The field spent a long time trying to take the ideas of what we think of as intelligence and build them in directly. This is the way we build a chess-playing program: we take what we think the strategy is, and we kind of build it into the program, right?
00;55;37;19 - 00;55;54;18
Dr. Michael Littman
We actually just program it. We say, okay, well, you should kind of look ahead, play out the game a bunch of steps in advance and see what that leads to, and then propagate back to figure out which move you should take right now. Like, we took our very literal thought process and tried to encode it as programs.
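
What's being described is classic minimax search. Here is a bare-bones sketch on a deliberately tiny game (one-pile Nim: take one or two stones, and whoever takes the last stone wins), so the look-ahead-then-propagate-back structure is easy to see; nothing here is specific to any real chess engine.

    def minimax(stones: int, maximizing: bool) -> int:
        # Value of the position from the maximizing player's point of view.
        if stones == 0:
            # The player who just moved took the last stone and won.
            return -1 if maximizing else 1
        moves = [1, 2] if stones >= 2 else [1]
        # Play each move out, then propagate values back up the tree:
        # each side is assumed to pick its best option.
        values = [minimax(stones - m, not maximizing) for m in moves]
        return max(values) if maximizing else min(values)

    def best_move(stones: int) -> int:
        moves = [1, 2] if stones >= 2 else [1]
        return max(moves, key=lambda m: minimax(stones - m, False))

    print(best_move(7))  # 1: leaving the opponent a multiple of 3 stones wins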
00;55;54;21 - 00;56;12;12
Dr. Michael Littman
And what Turing said early on is, well, that's going to be too hard. There's just no way that we're going to get all the details of that right. So what we need to do is, we the people write the evolution, we write kind of the creation of the overall machine, which then becomes like a baby, and it learns everything.
00;56;12;14 - 00;56;35;22
Dr. Michael Littman
And so in this notion, what we the programmers are responsible for is kind of the platform on which learning appears, and then it's through exposure to actual experiences in the world that intelligence really starts to arise. And, you know, again, that's kind of the structure that we're using: people, as human beings, wrote code that is really focused on how to learn from data.
00;56;35;24 - 00;56;56;22
Dr. Michael Littman
And then we just hit it with all the data that we can find in the world, you know. And, I mean, it's tricky to organize that in exactly the right way so that it all ends up coming up with something that's interesting. But nonetheless, yeah, it was a long time in coming, but the form of it, we saw in advance, and this is really starting to play out.
00;56;56;23 - 00;57;06;17
Dr. Michael Littman
And Turing even spelled out, like, computers would need to be about this powerful for that to happen.
00;57;06;17 - 00;57;07;18
Geoff Nielson
Wow. To that order of magnitude?
00;57;07;18 - 00;57;31;11
Dr. Michael Littman
He got it, yeah. So it's not shocking, right? But it is very exciting, and it did take a lot of us by surprise. I talked to some other folks in the field, like the first person who taught me machine learning. He happened to have been making a presentation at the National Science Foundation, so I pulled him aside at the end of it, and I was like, you know, when you were teaching that class, why did we not see this?
00;57;31;11 - 00;57;47;02
Dr. Michael Littman
Why did we not see how this was going to play out? Right? Like, we talked about it, and we even suggested this model of, we should just predict the next word; if you can predict the next word accurately enough, in principle, it could sound intelligent. Like, why didn't we see it? And he's like, we just didn't believe it.
00;57;47;03 - 00;57;57;15
Dr. Michael Littman
We didn't have machines that had enough data, that had enough compute, to actually play out the implications of that. And so we were just thinking too small.
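
The "just predict the next word" idea can be seen at toy scale with a simple bigram model: count which word follows which, then generate by repeatedly predicting the next word. A modern chatbot does conceptually the same thing with a neural network, vastly more data, and vastly more compute; this sketch is only meant to show the shape of the idea.

    from collections import Counter, defaultdict

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat saw the dog .").split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(word: str, n: int = 8) -> str:
        out = [word]
        for _ in range(n):
            if word not in follows:
                break
            word = follows[word].most_common(1)[0][0]  # greedy next-word pick
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # repeatedly predicting the next word yields text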
00;57;57;17 - 00;58;04;18
Geoff Nielson
That's so cool. So yeah, it's like it was waiting for the technology to catch up with the principle.
00;58;04;21 - 00;58;06;04
Dr. Michael Littman
A little bit. Yeah.
00;58;06;06 - 00;58;13;27
Geoff Nielson
Amazing. Michael, I want to say thanks so much for joining today. This has been a really, really enlightening conversation, and I really appreciate your time.
00;58;13;27 - 00;58;24;12
Dr. Michael Littman
Thanks for all you do in helping all of us grapple with these questions and kind of figure out what it means to us. Thanks.


