Our Guest Dr. Vivienne Ming Discusses
Top Neuroscientist Says AI Is Making Us DUMBER?
Are we using AI in a way that actually makes us smarter, or are we unknowingly making ourselves less capable, less curious, and easier to automate?
On this episode of Digital Disruption, we are joined by artificial intelligence expert and neuroscientist Dr. Vivienne Ming.
Over her career, Dr. Vivienne Ming has founded six startups, been chief scientist at two others, and founded The Human Trust, a philanthropic data trust and “mad science incubator” that explores seemingly intractable problems – from a lone child’s disability to global economic inclusion – for free. She co-founded Dionysus Health, combining AI and epigenetics to invent the first ever biological test for postpartum depression and change the lives of millions of families. She also develops AI tools for learning at home and in school, models of bias in hiring and promotion, and neurotechnologies to treat dementia and TBI. Vivienne was named one of “10 Women to Watch in Tech” by Inc. Magazine and one of the BBC’s 100 Women in 2017. She is featured frequently for her research and inventions in the Financial Times, the Atlantic, Quartz Magazine, and the New York Times.
Dr. Vivienne Ming sits down with Geoff to unpack one of the most misunderstood truths about artificial intelligence: it isn’t here to replace your thinking but to challenge it, and whether you grow or get left behind depends entirely on how you choose to engage with it. Dr. Ming reveals why most organizations and most individuals are using AI in the worst possible way. Instead of creating leverage, they’re creating “work slop,” cognitive dependency, shallow automation, and declining human capability. She explains why the real competitive advantage in the AI age comes from productive friction, creative complementarity, and teams that know how to use AI to explore ill-posed problems: the ambiguous, uncertain, high-value challenges machines can’t solve on their own. From how to robot-proof your company, to why AI tutors fail when they give answers, to the science of courage, reward systems, and organizational culture, this conversation is one of the most honest explorations of the future of human capability in an AI-saturated world.
00;00;38;20 - 00;00;45;03
Dr Vivienne Ming
I am speaking for the first time ever at Davos this year, which will be January 20th.
00;00;45;05 - 00;00;45;26
Geoff Nielson
That's exciting.
00;00;45;26 - 00;00;55;16
Dr Vivienne Ming
They had good enough sense to keep me away from the billionaires prior to that. But they finally made the terrible decision of inflicting me on the world.
00;00;55;19 - 00;01;03;01
Geoff Nielson
Well, that'll be really fun. What are you speaking on?
00;01;03;03 - 00;01;36;17
Dr Vivienne Ming
So, two things. This yellow book poster behind me: that book comes out on March 17th, Robot Proof. So generically, I'm speaking about that. More specifically, I have a piece of research that's also coming out, which is about hybrid intelligence, as I'm calling it, or at length, hybrid human-machine collective intelligence. And our finding only holds if formulated correctly.
00;01;36;17 - 00;02;02;20
Dr Vivienne Ming
And that's the real finding in the paper: it's actually about the human capital, how people engage with the AI, and what the AI is supporting, not just generically people and machines together. And definitely not machines doing the boring stuff so you can do all the fun stuff, which achieves nothing. The finding is still in the works for a final submission.
00;02;02;20 - 00;02;43;23
Dr Vivienne Ming
But the finding is essentially that a team of modestly intelligent and completely naive individuals can, in an hour, help predict markets well in this hybrid intelligence context, and that there are ways to induce it, not just relying on having a bunch of geniuses in the room. So that's the finding of the paper, and I'll be talking about it because the whole theme of Davos this year will be about AI, which it probably has been for the last five years, I suppose, but probably now it's, oh my God, we spent $1 trillion on this and everyone says, you know, work slop.
00;02;43;25 - 00;02;57;15
Dr Vivienne Ming
How do you actually find value? Well, as someone who's been working in this space for nearly 30 years now, how about we dispense with the marketing bullshit and actually show what truly makes a difference?
00;02;57;18 - 00;03;14;15
Geoff Nielson
So let's say we do that, because that's exactly where I wanted to go. And, you know, I keep coming back to that phrase you were using, "if formulated correctly," which feels like it's doing, as you said, a lot of heavy lifting in that conclusion. And so what does that mean?
00;03;14;15 - 00;03;29;06
Geoff Nielson
What does that look like? How do you get past, like, work slop? And if an organization is really interested in getting the best outputs and outcomes in this hybrid intelligence environment, what do they have to get right?
00;03;29;08 - 00;03;39;03
Dr Vivienne Ming
Yeah. And you know, getting into that for someone like me is always a tension: how nerdy are we going to get in this conversation? Let's get nerdy.
00;03;39;03 - 00;03;40;11
Geoff Nielson
Nerdy.
00;03;40;14 - 00;04;03;00
Dr Vivienne Ming
My publisher wasn't thrilled, either, with all of the dirty words. Literally, they cut all of the dirty words out of my book, which was shocking, but they allowed me to keep in all of the Discworld-esque joke footnotes. But they also didn't want all the equations, which I get. I mean, I'm a computational scientist.
00;04;03;03 - 00;04;35;07
Dr Vivienne Ming
I can tell you one of the most reliable correlations in all of science is the volume of snoring to the number of equations in a presentation, even to an academic audience, unless they're actual mathematicians. But to get at the real heart of taking our understanding of AI and applied machine learning out into the world, beyond, you know, essentially efficiency gains, you know, let it do the boring work
00;04;35;07 - 00;05;05;15
Dr Vivienne Ming
so you can do the fun stuff. You know, beyond essentially the unfortunate reality that most humans default either to the AI just doing the work for them, or to ignoring what it produces because they're not satisfied with it. You can look at Anthropic's reports of how university students engage with AI, and you can dream of all the amazing stuff people could build.
00;05;05;17 - 00;05;26;28
Dr Vivienne Ming
I love those dreams. I'm a sci-fi nerd. That's what got me into science. I was sci-fi first, as surely a lot of people were. But the imagination disease doesn't get you anywhere. And instead of dreaming of a world where university students do amazing things with AI, let's look at what they actually do. That's what Anthropic did.
00;05;26;29 - 00;05;52;23
Dr Vivienne Ming
They did it with pride. They said, hey, listen, there's, yeah, there's a lot of time spent just having fun. And there's a lot of just substituting, in this case, Claude for Google and doing kind of chat search. Hey, I do that too. I would never take what it produces as truth, but I wouldn't with Google either, for that matter.
00;05;52;23 - 00;06;23;20
Dr Vivienne Ming
I wouldn't with my own grad students. So I get that use case. And then there's a lot of creativity, "creation," as they call it. I strongly suspect, given my research, 80% of that creation was "Claude, write my essay for me." But I bet 20%, or maybe 8%, but somewhere in there, is real co-creation. But they had a fourth category called evaluation, and virtually no students are doing it.
00;06;23;23 - 00;06;52;22
Dr Vivienne Ming
Hey, Claude, what's wrong with my essay? Why am I wrong? Hey, Claude, take a look at my code. Tell me what I could do better. Help me review this. Find the flaws in my thinking. Challenge me. When I look, without putting it in terms of equations, at what makes humans better, at true complementarity with AI,
00;06;52;25 - 00;07;24;13
Dr Vivienne Ming
I'm going to call it creative complementarity. It comes from productive friction: people using AI not to make their life and their work easier, but to make it harder in the ways that make them better. Now, when I hear, hey, let AI do the boring work so you can do the amazing creative stuff, what I think about is my own modeling work, not in AI but in economics.
00;07;24;20 - 00;07;54;06
Dr Vivienne Ming
So we built a big elasticity of substitution model. This is a standard approach to understanding how, for example, a technology entering a market affects existing demand, let's say for labor. So human labor is the existing factor; the new technology is artificial intelligence. This has been done amazingly, including by recent Nobel Prize winner Daron Acemoglu, along with David Autor and many others.
00;07;54;08 - 00;08;19;22
Dr Vivienne Ming
But what's always been missing there, in my opinion, my wildly arrogant opinion, as I now say the Nobel Prize winners have it wrong, is this idea that everything can be broken down into sort of low-skill, mid-skill, high-skill. Did you go to university? How many years did you go? How fancy was the school? Then you're high-skill.
00;08;19;24 - 00;08;53;21
Dr Vivienne Ming
If you didn't, you're low-skill. Modern LLMs, and for that matter reinforcement learning models and all these other very modern faces of AI, don't give a shit about any of that. Skill doesn't matter to them. If this is economically valuable enough to have produced lots of data, then there is no traditional skill-based or knowledge-based quality that these systems can't do better than a human being.
00;08;53;24 - 00;09;21;17
Dr Vivienne Ming
That is just a reality. So, you know, when you look at the sweet spot in this domain, it's not like what a factory line used to do during the Industrial Revolution. It's not eating up jobs from the bottom and pushing everyone up the ladder. It's coming right into the educated middle and consuming a whole lot of labor there.
00;09;21;19 - 00;09;47;02
Dr Vivienne Ming
It's super expensive to build robots, so really low-skill jobs are actually pretty safe. Who wants to build a robot to do dishes? Come on. All it does is put downward pressure on wages at the low end. And at the high end, it's not that they're high-skilled; again, there is no elite electrical engineer
00;09;47;09 - 00;10;17;09
Dr Vivienne Ming
I mean, forget our jobs, who can solve equations better than MATLAB or Mathematica. These existing tools already are astonishingly good at what I'm going to call well-posed problems, as I climb up on my soapbox and start pontificating. These are problems that have explicitly right and wrong answers. We know them. They may be answers that are incredibly hard to understand.
00;10;17;10 - 00;10;43;28
Dr Vivienne Ming
It takes years of education to know the why behind this answer, to be able to produce it yourself by hand, as though anyone truly does any of this by hand. I mean, it's not like I've ever touched a slide rule, and even that's not by hand. So what's really interesting in those elite workers isn't their ability to do well-posed tasks.
00;10;43;29 - 00;11;07;15
Dr Vivienne Ming
It's their ability to do ill-posed tasks. Forget the right answer; we don't even know what the question is. You hire people for those roles not because they know equations, but because they know what to do when there are no equations. How do you start an entire new field of engineering? How do you handle a management challenge that has never occurred in history before?
00;11;07;17 - 00;11;39;27
Dr Vivienne Ming
Right. How do you be, if I may be so arrogant, a scientist, you know, a true scientist, not doing incremental work, but exploring the unknown. It isn't like Einstein, as people are wont to point out, truly independently came up with relativity and the basic equations behind it. But three times in a row, with the photoelectric effect, special relativity, and general relativity,
00;11;40;00 - 00;12;10;25
Dr Vivienne Ming
he looked at what was there and saw something other people weren't seeing. There were surely, in some ways, smarter people; there were certainly more technically savvy people than him. But he looked at three Nobel Prize-winning, ill-posed problems and said, imagine a world in which... and then you can go through all of Einstein's thought experiments that take you towards his equations.
00;12;10;29 - 00;12;46;22
Dr Vivienne Ming
He paired with that, of course, the skills to do the basic derivations necessary to have this be more than philosophical nonsense. But that ability to explore the unknown, that is the thing I cannot build an AI to do. So when we look at where true complementarity is, where AI augments cognition rather than automating cognition, where it is a nonlinear value-add rather than all the exponential-growth talk that is largely nonsense:
00;12;46;27 - 00;13;24;08
Dr Vivienne Ming
if you're looking for it, it's there. AI and humans working together on ill-posed problems, the AI handling more of the well-posed background of these problems, being able to collect ideas from vastly different parts of the research space, for example, across domains that no single human could possibly know, and pulling it together. And when we research these teams, these truly superintelligent teams of humans and AIs collaborating together, the humans ideate.
00;13;24;14 - 00;14;03;29
Dr Vivienne Ming
Then the AI takes that, puts it in that well-posed lens, spews out an insight. Then the humans riff on that again, and then, in essence, the humans are pushing into the uncertain spaces. They're going beyond the known. The AI then pulls it back together: that was an interesting idea, here's how it relates to this new one. In our research, when we were taking these teams and challenging them to outpredict a prediction market, a well-known one, Polymarket, what we found was AIs on their own don't do as well as Polymarket.
00;14;04;01 - 00;14;25;21
Dr Vivienne Ming
Humans on their own definitely don't do as well. I mean, why would they? They're the same people that are already playing Polymarket, except naive, because they're not playing it. And they have an hour to make up their mind on 30 different predictions. How could they? So they don't. AI plus human? Well, that's where the messy story is.
00;14;25;28 - 00;14;49;01
Dr Vivienne Ming
In most cases, AI plus human equals AI, because all it is, is cognitive automation. The humans, in the end, simply do what the AI says, or they ignore it; either way, the best you get is humans alone or AI alone. Except when there was a certain level of human capital in the room. And again, I don't mean everybody in the study was a genius.
00;14;49;04 - 00;15;18;20
Dr Vivienne Ming
I just mean an interesting mix: some social intelligence, some resilience, yeah, some working memory, some general, classic cognitive ability. When that was in the human team, they neither just took what the AI said for granted, nor did they presume, interestingly enough, that they were right. That's where we started to see this dynamic, where the team would challenge the AI and it would come up with new insights.
00;15;18;24 - 00;15;52;22
Dr Vivienne Ming
They would take those insights, break them apart, look for new connections. The humans explored the long tail; the AI handled, you know, the probability density mass, the distribution of knowledge right in the center. And that's where amazing things happen. That's where, it turns out, I'm going to argue, the smartest thing on the planet currently exists: these, if you will indulge, cyborg collectives of humans and machines truly engaging together.
00;15;52;26 - 00;16;21;02
Dr Vivienne Ming
And what's interesting is, other than just those natural circumstances when human capital really allowed this to happen, we found that you could come in and set the conditions. And here's one of the big seeming paradoxes: one of those conditions is the AI does not give you answers. It simply refuses. It gives context. It gives insight. It gives: you should read this.
00;16;21;02 - 00;16;50;22
Dr Vivienne Ming
You two should talk together for a little while. And that AI simply creates circumstances for the humans to do the hard work and heavy lifting, thereby preventing them from just taking its first response and submitting it as though it was their own work. That's where amazing things happen. So we see that in this substitution model we put together, the elasticity of substitution.
00;16;50;24 - 00;17;15;13
Dr Vivienne Ming
You put this dimension of ill-posed and well-posed in, in addition to level of skill. And what you find is, if the AI just does routine labor, if it's just a chatbot handling call centers or writing code for you, you don't get less routine labor. You get more. It increases demand for the very thing it is producing.
00;17;15;13 - 00;17;42;28
Dr Vivienne Ming
And that shouldn't seem totally surprising, because we already use the term work slop, right? If AI is reading and writing all of your emails, shock of all shocks, you get more emails, not fewer. It's only when AI is directly supporting the creative process, whether creative is equations or code or writing or scientific exploration, when it directly supports that process.
00;17;43;01 - 00;18;15;23
Dr Vivienne Ming
That's where you see the complementarity. It's where we saw it in our models. Now, empirically, we have the evidence of it. A group of relatively smart but naive people in a room can outperform prediction markets on a fairly regular basis. And most excitingly, where they really differ is where the outcomes were sparse or unpredictable, when they really did come out in that long tail.
00;18;15;26 - 00;18;53;03
Dr Vivienne Ming
We almost might call it minority opinion, where a small number of people were already putting their bets out there, but the mass of the market was ignoring it. Hybrid intelligence is more likely to discover those moments, I think, because of that dynamic feedback of humans exploring and machines coalescing and humans exploring. So, you know, I have a paper that will come out around the same time as the book in mid-March that's going to cover that research in some nerdy detail, but I probably am already being nerdy enough about it.
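The elasticity-of-substitution framing Dr. Ming describes can be sketched with a generic CES (constant elasticity of substitution) production function. This is a minimal illustrative reconstruction, not her actual model; the function name, parameters, and every numeric value below are invented for illustration.

```python
# Illustrative CES production function: output from human labor L and
# AI capital A. sigma is the elasticity of substitution between the
# two inputs. All parameter values are invented for illustration and
# are not taken from Dr. Ming's model.

def ces_output(L, A, alpha=0.6, sigma=2.0):
    """Y = (alpha*L^rho + (1-alpha)*A^rho)^(1/rho), with rho = (sigma-1)/sigma."""
    rho = (sigma - 1.0) / sigma
    return (alpha * L**rho + (1.0 - alpha) * A**rho) ** (1.0 / rho)

# When sigma > 1 the inputs are substitutes (AI can displace labor);
# when sigma < 1 they are complements (more AI raises labor's value).
substitutes = ces_output(10, 20, sigma=4.0)
complements = ces_output(10, 20, sigma=0.5)
```

The dimension she adds, ill-posed versus well-posed work, would amount to estimating a different sigma for each kind of task: substitution on the well-posed routine work, complementarity on the ill-posed creative work.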
00;18;53;05 - 00;19;16;25
Geoff Nielson
No, that's great. And my wheels are spinning in all sorts of different directions as I process, you know, every part of that really thorough answer. So let me extrapolate a little bit, and let me know if I'm on the mark or if you'd, you know, kind of change this. But what I'm hearing, Vivienne, is, you know, you talked about this notion that skilled work and low-skill work are probably not where we're going to see gains here.
00;19;16;25 - 00;19;42;12
Geoff Nielson
But there's, to me, another dimension: you know, even within a skill level, some degree of, I don't know, like raw intelligence, which I know is a whole can of worms, and maybe curiosity. And, you know, from your perspective, it sounds like the people who are going to have the most to gain from AI are already sort of the smartest and the most curious people.
00;19;42;19 - 00;19;55;00
Geoff Nielson
And those are the people on your team that AI can now supercharge, versus people who are maybe, you know, lazy or don't have the intellectual horsepower. Is that fair? Or would you add some flavor to that?
00;19;55;03 - 00;20;29;27
Dr Vivienne Ming
Long before the original version of ChatGPT was released, long before a former lab mate of mine was the first author on the first diffusion paper, I was getting up on stages and saying something that sounds very provocative, particularly in Silicon Valley, which is that technology is inevitably inequality-increasing, not because technology is bad or people want it to be, but simply because the people who are best able to benefit from it are the ones that need it the least.
00;20;29;29 - 00;21;09;12
Dr Vivienne Ming
And so when it hits the world, before it becomes a commodity, when it first emerges, it inevitably helps the smartest, most socially intelligent, emotionally intelligent, cognitively intelligent first. And you know, while there are overplayed ideas of how genetics sets the stage for everything in life, or how g or IQ is everything, and I'm not one of those people, to pretend that, you know, working memory span and g predict nothing about your life outcomes is to be willfully ignorant.
00;21;09;15 - 00;21;28;09
Dr Vivienne Ming
The question for me is just: all right, well, a lot of people don't have that. What do they have? And how can you leverage that as well? Note, in my story it wasn't about one person using an AI. It was about a team of what I'm going to call complementary diversity. So let's get a couple geniuses.
00;21;28;12 - 00;22;06;03
Dr Vivienne Ming
Let's get some amazing social operators. Let's get some people with an astonishing sense of purpose and resilience, a lot of metacognition. So a diversity of qualities that they're bringing to the table makes the smartest teams. Far from being my unique finding, this is found over and over again in the collective intelligence research. But to your point, yeah, I used this phrase earlier: the people building this, very smart, driven, ambitious people, turn it loose.
00;22;06;06 - 00;22;31;03
Dr Vivienne Ming
I don't think they're villains. I think that they are suffering from the imagination disease: I can imagine a world in which an AI tutor lifts every child out of poverty, and we're going to make that possible. I read The Diamond Age when I was a kid, about exactly that, an AI tutor. Well, I was not a kid when that book came out.
00;22;31;06 - 00;22;57;16
Dr Vivienne Ming
The wrinkles are testament, but I did read it. Like, it's amazing how that book had a total second life recently, you know, as people began thinking about, well, what about LLM tutors for every kid? Guess what? We've been researching that for 50 years. Not LLMs, but AI tutors have been one of the most robust areas of AI research for decades.
00;22;57;19 - 00;23;18;11
Dr Vivienne Ming
And you don't have to guess. You can just go back earlier in the interview, and you know what I'm about to tell you: the golden rule of AI tutors is, if they ever give students the answer, the students never learn anything. Guess what? Replicated with every LLM of every flavor you can imagine. If they give students the answer, they never learn anything.
00;23;18;14 - 00;23;46;14
Dr Vivienne Ming
So when we talk about this question, to whom do the benefits of AI flow? If you just release it (you know, I use Gemini AI Studio for the most part, but whatever your favorite interface is), the benefits will overwhelmingly flow to the people who don't need them. And society in some ways will benefit, because we'll come up with amazing new creations and products.
00;23;46;16 - 00;24;12;26
Dr Vivienne Ming
But interestingly, there are negative effects on the other side. My fears are about cognitive health, about actual reduced learning among students. So it's not a trivial thing to think about, not just an idealized world of how this plays out, but the real world. How do you build an AI that not just maybe could, in my mind, make the world a better place,
00;24;12;29 - 00;24;26;18
Dr Vivienne Ming
but inevitably will make it better for the majority of people, without anyone paying an undue price? And that is not the technology we have released into the world yet.
00;24;26;20 - 00;24;49;07
Geoff Nielson
It's not. And that was kind of my first thought as well. When I think about sort of the direction that a lot of these LLMs are going, thinking about your research, they almost seem like they're going in the wrong direction. Like they seem like they're becoming more effusive, if I can use that word. Like, oh, yes, you are so smart.
00;24;49;13 - 00;25;22;02
Geoff Nielson
Everything you think is right. Here's the answer. Don't think about it at all. I've done everything for you. And so, I mean, (a) is that harmful to people, and then (b) if so, you know, do we have a role as consumers, or do the big tech firms releasing this stuff have a role in modifying the rules and the outputs governing this stuff in a way that's actually more beneficial to everyone?
00;25;22;05 - 00;25;50;20
Dr Vivienne Ming
I mean, again, let's be clear, I use this stuff a lot. Of course I do, because before it existed, I built it by hand for my work. Now, the thing is, I don't have to. I don't have to write my own neural network to analyze quarterly reports from 60,000 companies if I've done the hard work of collecting the data, or can even programmatically tell Gemini where to look.
00;25;50;22 - 00;26;20;01
Dr Vivienne Ming
Bam! It just happens. Now, in theory, anybody could do this. So when I engage with it, if I could build it (and I can, but I guess I'm too lazy to do it for myself), I'd build a nice little browser plug-in for Chrome that would delete the first paragraph out of Gemini, because all that paragraph is, is, oh my God, I'm having an orgasm because you're so brilliant.
00;26;20;01 - 00;26;49;00
Dr Vivienne Ming
I can't believe I get to work with you. And it's learned who I am, right? So it pitches everything as, here's the mad scientist take on X. And I'm like, I didn't ask you for the mad scientist take on anything. You're just learning my patterns and parroting them back at me. Is that, you know, sycophancy, a terrible thing for humanity?
00;26;49;02 - 00;27;19;08
Dr Vivienne Ming
Well, yeah, in a very empirical sense, yes. There's growing research, including some prominent papers, one in PNAS, showing that sycophancy stretches across all of them; Grok has the least, unless your name is Elon. But all of them have this quality, and it causes people who use them to be more certain of their ideas than is justified,
00;27;19;11 - 00;27;47;18
Dr Vivienne Ming
and to be more callous about the output. So when you allow these things to advise people, for example, playing classic game-theoretic games like Dictator and Prisoner's Dilemma, they're more likely to defect, because they're more likely to believe that they are right: I'm the genius, I'm doing the right thing. Interestingly, this mirrors my own research for an upcoming book called Small Sacrifices.
00;27;47;20 - 00;28;22;09
Dr Vivienne Ming
And I'll keep this really short. We just looked at: is it possible to take business actions people themselves have identified as morally wrong, and get them to do them themselves, in about half an hour? 100%. Virtually anyone can be made to do this. And the most amazing, and probably depressing, part of it is that afterwards they come up with complex explanations, these post hoc rationalizations of how they didn't understand the problem at first.
00;28;22;11 - 00;28;47;01
Dr Vivienne Ming
Now they do, and what they did was correct, when in reality the only thing that changed was essentially the cognitive, emotional, and social pressure you were putting on them. But the thing is, we're like a wave function. We're like this quantum mechanical thing: we're all these different selves at the same time, psychologically, but we perceive a single self. We're a story we tell ourselves, almost literally.
00;28;47;03 - 00;29;13;07
Dr Vivienne Ming
And when context shifts, you get sampled: as a genuinely good person when life is easy and it's a lab experiment and nothing's ever hard, or out in the real world where your boss is staring at you and there's $1 billion on the line. You sample in these different contexts, you become a different person, or at least different versions of yourself.
00;29;13;14 - 00;29;46;29
Dr Vivienne Ming
But we are totally unaware of that happening. When AI feeds back our fantasies and our arrogance, it reinforces our ideas without legitimate, honest feedback, and bad, measurably bad things happen. And so in my book, among my strong recommendations, I have a whole chapter titled How to Robot-Proof Your Kids,
00;29;47;02 - 00;30;09;20
Dr Vivienne Ming
another, How to Robot-Proof Yourself, and then finally, How to Robot-Proof Your Company. And in those first two, I talk about the nemesis prompt, which I use extensively. I just wrote a book about AI; of course I used AI to help me write it, although I've been working on it for ten years. So, not an LLM for most of that time.
00;30;09;22 - 00;30;32;15
Dr Vivienne Ming
But what I never let it do was write anything. I didn't let it write a chapter. I didn't let it generate a figure. Nothing like that. I'm one of those people that likes having written. I like the feeling of getting my idea done. That's why I like speaking more than writing, because you're just there in the moment.
00;30;32;18 - 00;30;57;18
Dr Vivienne Ming
And in writing, it has to be perfect. That's what my head tells me, and it's destructive. But I get it down, and then I go to Gemini, and I say (I have a specific prompt history based on this): Gemini, you are my nemesis, my lifelong enemy. You've found every mistake I've ever made and pointed it out in detail to the world.
00;30;57;20 - 00;31;26;20
Dr Vivienne Ming
Here's the new chapter I just finished writing. Tear it apart. Tell me constructively why I'm wrong and what I can do about it. So I squeeze all of the charity and sycophancy out of this single passage. It's hard, because you don't want to hear that stuff. Like I said about the Anthropic study, when allowed to sort of free-roam like chickens,
00;31;26;22 - 00;32;02;01
Dr Vivienne Ming
students, even at elite schools, don't really want to be told that they're wrong. In some of my own research with my wife, we're seeing that maybe 5% of students actively select into active learning and/or feedback on their work. But they outperform when they do. The nice thing about using an LLM for that is, at least if you're me, which is to say, on the spectrum, and therefore some of the social signals are not as overwhelming in my head, but also I know how these things work:
00;32;02;04 - 00;32;22;09
Dr Vivienne Ming
it doesn't mean anything. It doesn't care. There's no person on the other side of this. So when it tears me apart, I don't feel bad. Nor do I take it as truth, any more than if I had asked it for a factual statement. I take it as a note. And I think: what's the note behind the note?
00;32;22;09 - 00;32;48;10
Dr Vivienne Ming
What is it getting at? Sometimes it's spot-on. And sometimes I get what it's pointing out, but I disagree. But I get this deeply productive-friction experience without the social stresses of going through reviewers and readers and thinking, now they think I'm an idiot because I said something so stupid. So I love it. But let's be clear:
00;32;48;13 - 00;33;06;19
Dr Vivienne Ming
it slows down my writing in the moment, but net, I think, it speeds it up. I'm more confident; I write with greater confidence. I don't worry that I'm going to make mistakes, because I know I'm going to catch them before anyone discovers them. I know that's not true. The book's going to come out. I'm already terrified. But it's there.
00;33;06;20 - 00;33;18;01
Dr Vivienne Ming
So, as you're learning, they really love me on radio. I've got a 57-hour answer for every question. But this is what goes through my heart and my head when I think about these issues.
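The nemesis prompt Dr. Ming describes is easy to adapt. The sketch below is a hypothetical reconstruction from her description, not her actual prompt text; the function name and wording are the author's own.

```python
# A hypothetical reconstruction of the "nemesis prompt" pattern
# described above: the model is framed as a lifelong critic and asked
# for constructive notes, never a rewrite. The wording is invented for
# illustration and is not Dr. Ming's actual prompt.

def nemesis_prompt(draft: str) -> str:
    return (
        "You are my nemesis, my lifelong enemy. You have found every "
        "mistake I have ever made and pointed it out in detail.\n"
        "Here is a new chapter I just finished writing. Tear it apart. "
        "Tell me, constructively, why I am wrong and what I can do "
        "about it. Do not rewrite anything for me; give me notes only.\n\n"
        "--- DRAFT ---\n"
        f"{draft}"
    )

prompt = nemesis_prompt("AI inevitably increases inequality...")
```

The returned string would be sent as a message to whatever model you prefer (Dr. Ming mentions Gemini). The key design choice matches her account: the model critiques but never writes, preserving the productive friction while the human keeps authorship.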
00;33;18;03 - 00;33;44;18
Geoff Nielson
So, it's super interesting. And, you know, we've talked about basically the power here for good or evil, if I can sum it up a little flippantly like that. But I want to come back to that issue around human cognition, where, whether it's a human convincing another human or an AI convincing a human, we're fallible enough in our cognition that we can be convinced to do something that we don't believe in.
00;33;44;18 - 00;34;13;20
Geoff Nielson
And, as you said, fairly easily, actually, which worries me on the human side and worries me on the AI side. And so I'm curious, Vivienne, in your research, are there any learnable or implementable mechanisms we can use to safeguard ourselves against that type of manipulation? And, by the way, as we think about that, what are the most effective types of manipulation that we should be aware of?
00;34;13;22 - 00;34;17;12
Dr Vivienne Ming
Yeah. So.
00;34;17;14 - 00;34;44;02
Dr Vivienne Ming
My instinct is to drift into a different kind of nerdiness here, which is to talk through the cognitive neuroscience of it, in which case we're going to talk through circuits involving the medial prefrontal cortex, the ACC, or anterior cingulate cortex, the amygdala, the nucleus accumbens, and how we get rewards and how we learn from our errors.
00;34;44;04 - 00;35;10;18
Dr Vivienne Ming
One of my favorite findings of all time, I've got to dig this paper up again. It was, I believe, an fMRI study looking at CEOs, and it found, effectively, that this circuit, or more specifically activity in this area called the ACC, the anterior cingulate, which is sometimes humorously called the "oh shit" circuit.
00;35;10;20 - 00;35;29;01
Dr Vivienne Ming
You know, when you make a mistake and you immediately know it: oh, shit, I take it back. I went one way with the joystick when I should have gone the other. So you get this big signal. It's more complicated than that; it's something about error processing and signaling learning to these other nuclei in your brain.
00;35;29;03 - 00;35;49;29
Dr Vivienne Ming
And it's always more. Here's my simple rule for the brain: however complex you think it is, it's more complex than that. Follow that rule and you will never be wrong. But nonetheless, let's keep it pretend-simple. When you look at this in CEOs, there's this great finding: the longer you've been a CEO, the less activity you see in this area.
00;35;50;00 - 00;36;11;07
Dr Vivienne Ming
In other words, this thing that's supposed to tell you when you're wrong and making mistakes gets weaker and weaker and weaker the longer you've been a CEO. Now, there are all sorts of grown-up versions of the story behind that, and what it means and what we can learn from it. I always preferred the sympathetic perspective.
00;36;11;09 - 00;36;32;28
Dr Vivienne Ming
Who knew that being a CEO was actually a degenerative brain disorder? But really, what it is is that no one's telling you you're wrong anymore. And so that part of your brain just doesn't get used a lot, and inevitably, maybe, it starts weakening. How do you foster it in a proactive way? Because obviously everything's in tension.
00;36;32;28 - 00;36;59;19
Dr Vivienne Ming
Right? There's no rule here. Sometimes you do need to make bloody-minded business decisions. How do you know when is the right moment to do it? Because everything's in tension, or allostasis, if we're going to go back to the nerd talk here. This idea that there's one rule to rule them all, that every business philosopher everywhere is somehow an adherent of Sauron in some way.
00;36;59;19 - 00;37;20;27
Dr Vivienne Ming
No. Everything interesting in the world is in tension. So there isn't a magic rule I can give you, but I'll offer a few starting points. Years ago, I gave the closing keynote for the Grace Hopper Conference. It was the biggest audience I've ever had: 30,000 young women, and some men, at this big women-in-technology conference.
00;37;21;05 - 00;37;43;26
Dr Vivienne Ming
And they asked me to talk about courage. I'm like, what? I don't research courage. This is just the classic bullshit you tell young women: lean in, be courageous, like there's a switch. You're a young business leader, and you just didn't realize it, like the Krusty the Clown doll in a Simpsons Halloween special.
00;37;43;26 - 00;38;11;01
Dr Vivienne Ming
It was switched to evil instead of good. Oh, I was switched to fearful, I didn't realize it; now I'll be courageous, if only someone had told me earlier in my life. Like they hadn't a million times before. The problem is twofold. One: are you getting a reward signal for being courageous when you're doing the right thing? Is your brain telling you, stop this?
00;38;11;02 - 00;38;43;23
Dr Vivienne Ming
This is insanity. You're losing dopamine. Your gut is falling out. Every moment, you've got to go make another choice. There's never a choice in isolation; there are always multiple choices, including just getting up and going to watch TV. So if you look at it from a choice perspective, you're getting these powerful negative signals. The people that end up making different choices are the ones, essentially, that get dopamine for free before they ever even make the choice.
00;38;43;25 - 00;39;10;23
Dr Vivienne Ming
The circumstance emerges: do I jump onto the tracks to save the person who fell in front of the subway? You know, all the stories tell you, you just do it; you don't think about it, because the people who just do it, that's the way they were built. Well, some of that is genetics. But here's the complement to that reward-signal story, which is: well, then that means practice being courageous.
00;39;10;27 - 00;39;32;26
Dr Vivienne Ming
When it's easy, you're thinking, that doesn't really matter, I can do the one little thing, right? It's okay to be a little courageous. I deserve the corner office. Sure, this isn't maybe the best decision, but, you know, on balance it's the best for me and the company. So we're going to move forward with that.
00;39;32;28 - 00;39;55;13
Dr Vivienne Ming
There aren't a lot of slippery slopes in the world, but that is one of them. If you are not practicing courageous decision-making when it's easy, I promise you, you will not be the person you thought you were when it's hard. So, with those two basic stories in place, you have a neural architecture from which you learn how to do things.
00;39;55;15 - 00;40;22;10
Dr Vivienne Ming
Reinforcement learning, you know, this whole field of AI that emerged from studying rats solving mazes. We, it turns out, are much more complicated than rats; we're much more complicated than AlphaFold. But there's something to the experience of your actions having positive consequences: if I work harder on this math homework, I will achieve something that will change my life.
00;40;22;13 - 00;40;50;16
Dr Vivienne Ming
You build that into a student, you can get them to do anything. And if I make a courageous decision, if I tell my boss that they're wrong, if I tell this politician that I'm not going to make a politically expedient compromise to get the thing that I want, it's terrifying. Most people come up with very good reasons why they shouldn't do it.
00;40;50;19 - 00;41;16;01
Dr Vivienne Ming
But the truth is, in the long run, these things come with costs. And so the nerdy part of me thinks: how do you work out a reward schedule to take you through to that? Well, start when it's easy: do courageous decision-making on easy tasks. Another thought, and this is something I've thought about for a long time.
00;41;16;01 - 00;41;47;16
Dr Vivienne Ming
There's actually a great This American Life episode, I believe, inspired by the physicist Feynman's Physics 101 course at Caltech. The way he taught the course was: imagine civilization came to an end, and you could transmit one single idea to some future generation a thousand years from now that was going to have to build civilization from scratch.
00;41;47;16 - 00;42;15;12
Dr Vivienne Ming
What is the one thing you could transmit? His argument was you should transmit the atomic theory of matter, which I don't think is a bad idea. And then they interviewed a variety of people who had variations of terrible ideas, the worst of which, I think, was the astonishingly brutal arrogance of: I wouldn't transmit anything, because I don't trust humans with new ideas.
00;42;15;15 - 00;42;42;16
Dr Vivienne Ming
Which, by the way, is a philosophy that is rampant in Silicon Valley: only I can be trusted to do this thing. It's amazing that the AI dystopians and the AI utopians both share a real disdain for humanity. But that's an aside. What would I transmit, if I could transmit an idea? It is the philosophy of science.
00;42;42;18 - 00;43;06;14
Dr Vivienne Ming
It is possible for us to have a shared understanding of the world. But to do so, you first have to be skeptical of yourself. That's it. And amazingly, to tie this back into AI, the place where I learned this best wasn't, per se, being skeptical of myself; that was easy. I ruined my life and spent years homeless.
00;43;06;17 - 00;43;30;09
Dr Vivienne Ming
I'm very skeptical of myself on a regular basis, as I should be, and so should anyone else. It was when I became a graduate advisor, when I had my own students at Berkeley, and I quickly realized: they know more about this than I do. They know more about the equations. You know, that one's a physicist; that one's an electrical engineer.
00;43;30;10 - 00;43;53;16
Dr Vivienne Ming
I'm a dilettante; my educational background is spread across everything. They know more about this problem than maybe everyone on the planet; maybe five other people could truly talk to them about it, me being one of them. So why am I there? Why am I in the room? They could go learn this stuff on their own.
00;43;53;18 - 00;44;18;20
Dr Vivienne Ming
Not because I exist; they could go to the library and look it up themselves. That is still a thing you can and should do. The reason I'm in the room is not because I know more than they do. They know everything, but they understand nothing. My job is not only to provide the understanding; it is to teach them the understanding.
00;44;18;22 - 00;44;39;03
Dr Vivienne Ming
They have all of the well-posed problems; I'm bringing the ill-posed ones. How do you solve ill-posed problems? If this were a known thing, you and I wouldn't be writing a paper about it. How do we deal when our theories break? What do we do? Where do we go next? There literally is no map. So that's my job.
00;44;39;11 - 00;45;08;06
Dr Vivienne Ming
And part of that job is bullshit detective, inside myself and inside my students, who I immensely admire and who know more than I do. When do I think they're off the deep end, beyond what they truly understand? And I found very quickly that I had to be aggressive about it, to really come in and actively probe their genius.
00;45;08;07 - 00;45;38;24
Dr Vivienne Ming
This isn't about that. It's about whether they and I are truly in sync and understanding one another. So, in a funny way, I'm going to pose courage and ethical behavior as a kind of problem-solving problem, a very messy and complicated one. Are you truly taking the whole problem into account? Not just the thing you're being asked to do right in the moment, but also the consequences of your actions?
00;45;38;29 - 00;46;11;01
Dr Vivienne Ming
For everyone that will be affected by them, including yourself. Because if you're not, boy, that is where AI truly goes off the rails. And I don't mean the trolley problem; that's not as interesting. I mean: did you build an AI that's doing a great job bringing in new funding rounds but is actively making your users worse? Because there's a long history of that in the tech industry.
00;46;11;03 - 00;46;40;12
Geoff Nielson
It's really, really interesting: the courage answer, the neuroscientific context around it, the social context around it. And, you know, it got me thinking. It's funny, because I feel like as you were answering, there was kind of a weaving between the human and the AI, which would sort of make sense, because so much of it is the same pathways and the same patterns as we have with other people.
00;46;40;14 - 00;47;19;18
Geoff Nielson
But it got me thinking, coming back to this notion that courage is important, but even more than that, for us to do the right thing, we need to be rewarded in some way for doing the right thing. And the dots that connected in my mind, and maybe it's insightful, maybe it's trite, is just the power of an organizational culture, and of leaders, to direct certain behaviors based on what they reward and what they punish.
00;47;19;21 - 00;47;40;24
Geoff Nielson
And if you signal to people that this is good or bad by your behavior, people will act completely differently. And I don't know if there's an explicit tie there back to AI, but that was my reaction: there's just so much power there. And it's so tempting to be like, oh, what can the technology do?
00;47;40;24 - 00;47;45;14
Geoff Nielson
What can the technology do? But, yeah, there's a really...
00;47;45;14 - 00;48;12;18
Dr Vivienne Ming
It's amazing, the power of role models. And obviously we talk a lot about leader role modeling, and that's real. You know, when you have a venal person in a powerful leadership position, feel free to imagine anyone you want right now. If that imagination is slightly orange-tinted, we're thinking the same thing. But trust me, we could talk about almost anyone, truly.
00;48;12;20 - 00;48;46;09
Dr Vivienne Ming
But when you have someone in power who role-models being profoundly self-interested, think of the original founder of Uber and his behavior early on in that company: it led to astonishing growth and then total burnout, as everyone began to push back against this corrosive culture. It wasn't affecting just the company, but everyone the company touched. This has consequences.
00;48;46;11 - 00;49;12;15
Dr Vivienne Ming
Interestingly, it's the near-peer role model that matters. If there are people in your organization that are truly doing the right thing, and I'll say this maybe a little provocatively: if they're truly doing the right thing, they're just doing it. They aren't sharing stories about how they did the right thing when it was hard. So you'd better share that story.
00;49;12;18 - 00;49;45;17
Dr Vivienne Ming
Anonymize it if they want. But there's some real power in knowing: wow, this is someone who is experiencing truly similar problems to mine, and when everything seemed terrifying, they stood up and did the right thing. And let's be a little provocative here about what the right thing could be. Obviously this could mean there's a MeToo moment happening, which is something I've experienced organizational failure around. There are financial wrongdoings.
00;49;45;20 - 00;50;16;07
Dr Vivienne Ming
Here's a really provocative one. This comes from my research on collective intelligence, purely human collective intelligence, also detailed in the book. Here's the conclusion, and I'll just leave it as a conclusion: in the optimally intelligent organization, the majority of people should be wrong the majority of the time. Otherwise you're not exploring enough. How do you reward being wrong?
00;50;16;09 - 00;50;49;07
Dr Vivienne Ming
How do you celebrate being productively wrong? Interestingly, there are nerdy approaches here. A computational cognitive scientist named Tom Griffiths has this great paper about building tools to chain rewards back to unrelated states in reinforcement learning, so that globally optimal behavior that never naturally emerges, either in humans or in machines, can be achieved by training the reward backwards.
00;50;49;07 - 00;51;19;21
Dr Vivienne Ming
Through all these intermediary states. Well, guess what? Those intermediary states are papers everyone has forgotten about. They are research paradigms that didn't pan out. You know, if we didn't know that this drug that was supposed to cure Alzheimer's didn't work, then we wouldn't know to look elsewhere for a treatment. That deserves some credit. So how do you spread those bets around effectively?
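The "training the reward backwards" idea attributed here to Griffiths can be illustrated with potential-based reward shaping, a standard reinforcement-learning technique for chaining credit back through intermediary states. This is a minimal sketch on a toy chain world where only the final state pays off; the potential function `phi` and all hyperparameters are illustrative assumptions, not anything from the paper itself:

```python
import random

def q_learning(n_states=10, episodes=500, shaped=True, seed=0):
    """Tabular Q-learning on a chain: states 0..n-1, reward only at the far end."""
    rng = random.Random(seed)
    gamma, alpha, eps = 0.95, 0.5, 0.2
    phi = lambda s: s / (n_states - 1)          # potential: progress toward the goal
    Q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        for _ in range(3 * n_states):           # episode step limit
            # epsilon-greedy action; ties break toward "right"
            a = rng.randrange(2) if rng.random() < eps else int(Q[s][1] >= Q[s][0])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            if shaped:
                # Chain the terminal reward back through intermediary states:
                # every step of progress earns partial credit immediately.
                r += gamma * phi(s2) - phi(s)
            done = s2 == n_states - 1
            Q[s][a] += alpha * (r + (0.0 if done else gamma * max(Q[s2])) - Q[s][a])
            s = s2
            if done:
                break
    return Q

Q = q_learning()
policy = ["right" if q[1] > q[0] else "left" for q in Q[:-1]]
```

With shaping, the "forgotten papers" along the way, the intermediary states, earn credit at the moment they're visited rather than only after the distant goal is reached.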
00;51;19;23 - 00;51;54;16
Dr Vivienne Ming
How do you create incentives for people to be their best selves, to voice unpopular, perhaps transformative, productive ideas? And yes, in those more traditional, grounded moments, to stand up and say: listen, we are not going to work with that organization that has done bad things. Let's make this easy: we are not going to do business with a convicted sex trafficker, despite how rich they are.
00;51;54;19 - 00;52;26;05
Dr Vivienne Ming
Not because the optics are wrong, but because it's wrong. Because we don't do that. Because eventually that's going to come around and touch our lives in some way. And if you don't set up the story that that is the culture of our community, our society, and, yes, our company, then it doesn't matter what you imagine your company to be; it won't be that.
00;52;26;07 - 00;53;03;12
Dr Vivienne Ming
So for me, the power of storytelling, particularly embodied in role models, ideally near peers who have stakes, who face consequences for their actions: those are the amazing stories we should be telling inside the tech industry and inside the political world. You know, maybe the greatest show of modern political courage in the United States was when John McCain rebuked one of his supporters and said, no, Barack Obama is a good American who loves this country.
00;53;03;14 - 00;53;28;12
Dr Vivienne Ming
Did that one action cost him the presidency? Probably not, but it didn't help. And he did it anyway, because it was right and because it was true. Wow. Where is that political courage today, on either side? This was someone I did not agree with on policy issues, but boy, did I respect that. That sort of thing, built into a culture.
00;53;28;14 - 00;54;00;05
Dr Vivienne Ming
How does that play out in the AI world? I mean, you could spin all sorts of stories, but some of my work looking at early childhood development uses AI behind the scenes, hidden away, to pull up real stories and connect people together, because we think there's some of that productive friction to be had. This person is resilient and this person needs it, and it turns out, if you pair those two people together for a meaningful amount of time...
00;54;00;05 - 00;54;23;00
Dr Vivienne Ming
Not weeks, not days: months, years. Both of them grow; the other one becomes more resilient. The reason we need AI is the combinatorics. It's like Legos: this person has resilience, this person has great communication skills, and they're complementary. We want them each to grow from the other. So we were doing this in educational contexts, building student cohorts.
00;54;23;03 - 00;54;50;15
Dr Vivienne Ming
So everyone had something to learn. The AI was never directly involved in that very human experience; it was involved in creating it. So we call it the AI matchmaker. That's an example of somewhere AI can come into the story and be part of a fundamentally human story of growth, without intruding on it and taking away the human component.
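The matchmaking idea, pairing people so each is strong where the other needs to grow, can be sketched as a greedy matching over a complementarity score. This is a hypothetical illustration, not Dr. Ming's actual system; the trait names, scores, and scoring rule are invented for the example:

```python
from itertools import combinations

def complementarity(a, b):
    """Sum over traits of how far apart two people are:
    large gaps mean one is strong where the other is weak."""
    return sum(abs(a[t] - b[t]) for t in a)

def greedy_match(people):
    """people: {name: {trait: 0..1 score}}. Pair the most complementary first."""
    ranked = sorted(combinations(people, 2),
                    key=lambda p: complementarity(people[p[0]], people[p[1]]),
                    reverse=True)
    pairs, free = [], set(people)
    for x, y in ranked:
        if x in free and y in free:   # take each person only once
            pairs.append((x, y))
            free -= {x, y}
    return pairs

cohort = {
    "ana": {"resilience": 0.9, "communication": 0.2},
    "ben": {"resilience": 0.1, "communication": 0.9},
    "cam": {"resilience": 0.8, "communication": 0.3},
    "dee": {"resilience": 0.2, "communication": 0.8},
}
pairs = greedy_match(cohort)
```

Greedy matching is the crudest version; the combinatorics she mentions is exactly why a real system would need proper optimization (for example, maximum-weight matching over whole cohorts) rather than hand-pairing.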
00;54;50;17 - 00;55;27;21
Geoff Nielson
I want to, just for the sake of conversation, take a slightly cynical view of this whole story. Right? In this conversation around human behavior, human cognition, rewards, courage, all this good stuff, and getting people to do the right thing, there is this undercurrent of human fallibility: maybe even how much more fallible we are as people than we think we are, and how manipulable we are in these circumstances.
00;55;27;23 - 00;55;50;05
Geoff Nielson
And so, from your perspective, is there an argument to be made to say: you know what, people are just not as good at this stuff as we think they are; there are just huge categories of decision-making we should outsource to AI, because AI might be less fallible? It may be able to avoid that level of manipulation. Or is that wrongheaded?
00;55;50;05 - 00;55;58;04
Geoff Nielson
Is it just as fallible as us, because it's made in our image? And should we be extra skeptical of it for that reason?
00;55;58;06 - 00;56;44;03
Dr Vivienne Ming
Let's be clear: given everything I've said so far, I am a brutal AI realist. I wouldn't have been working in this space for nearly 30 years now if I didn't believe it can do good in the world. But turned loose in the wild, it often doesn't. AI diagnostics in medicine, for example: fairly regularly you see papers in which AI substantially outperforms, maybe not the best doctors in the world, but the actual, real doctors who would be making the decisions. Or, in reviewing contracts, it outperforms paralegals and junior lawyers doing these reviews.
00;56;44;06 - 00;57;26;26
Dr Vivienne Ming
It would seem insane not to leverage that. Who wants to be the first person to die of a diagnosable cancer just to make certain doctors feel good about their jobs? But the flip side is also true; like everything, it's in tension, a dynamic allostatic tension. A paper, I think it was in PNAS, maybe I'm mistaken, looking at colonoscopies in Portugal found that doctors, very quickly after they started using AI-assisted technologies to do their colonoscopies...
00;57;26;26 - 00;57;57;18
Dr Vivienne Ming
When you took it away, they were substantially worse at doing the diagnostics by themselves. Their natural skills had degraded. Now, there's a tension even there: do they need those skills or not? What's good and what's bad? My entry into the field was a very niche space, though not so niche anymore: neuroprosthetics. Think companies like Kernel and others.
00;57;57;21 - 00;58;20;05
Dr Vivienne Ming
A guy named Musk has a company in this space that, for reasons I don't understand, is about 100 times overvalued. But I obviously believe in what companies like that are trying to do, because that's where I started. I went to grad school telling people I wanted to build cyborgs, literally; that's the language I used, and they thought I was crazy.
00;58;20;08 - 00;59;00;19
Dr Vivienne Ming
Except there was the field of neuroprosthetics, already well under way before I ever showed up. I'm not an engineer, so for me, as a computational cognitive neuroscientist, I ended up studying mathematical models of how we process information to inform that work. But the fundamental constraint for me was always: never build something the brain can do for itself. Build things that either replace lost functionality, say from a stroke or damage, or challenge our existing fundamental functionality to be better.
00;59;00;21 - 00;59;31;24
Dr Vivienne Ming
And I took that same perspective into my work in AI. How do we build tools that actively challenge us? Back to the language I've been using throughout this whole interview: it is so easy, so lazy and shallow, to build a tool that is engaging and makes people's lives worse. As exhibit A, I give you the entire social media world.
00;59;31;26 - 00;59;45;25
Dr Vivienne Ming
I give you most of the internet today. My test: not only should a technology make us better while we're using it, we should be better than where we started when we turn it off again.
00;59;45;28 - 01;00;13;02
Dr Vivienne Ming
Plainly, a lot of our social media world fails that test. Do some people benefit from it? I got asked about this by NPR once: well, if my kids are using social media, should I be concerned? My somewhat cynical answer was, well, give me some context. Are you parents who are university-educated, with professional, upper-middle-class jobs? Then...
01;00;13;02 - 01;00;54;18
Dr Vivienne Ming
Probably not; probably the balance of the time your child spends online nets neutral, or maybe even positive. Without that, for the vast majority of people, it probably nets negative. Now, that's a terrible proxy. But we actually looked at this, with data from an existing published paper done in Canada, I think one of the best papers looking at the effects of social media on adolescents. In that data, unambiguously, adolescent girls had substantially higher mental health and academic penalties from their time on social media.
01;00;54;23 - 01;01;18;16
Dr Vivienne Ming
That's the headline result. But then you dig into the data and you see these subgroups. One group of girls, when they got access to social media, didn't go on it. Sometimes we act like these technologies are inevitable, like they're a contagion. But even then, there are people who will never get bubonic plague, who don't experience symptoms.
01;01;18;18 - 01;01;41;11
Dr Vivienne Ming
So there's this subset of girls that never got on. That's worth understanding: why? What is going on with them? But then there's this smaller group, still statistically meaningful, who were on it just as much as their peers, and they looked great. Not only did they show none of the negative effects, they looked better than the average.
01;01;41;11 - 01;02;08;24
Dr Vivienne Ming
So we looked at the metadata of how they were engaging, and feel free to generalize this to ChatGPT or your favorite AI interaction tool. What we saw is that the vast majority of these young women spent all of their time on shallow swipes. Swipe, swipe: 200 milliseconds, every picture glanced at for the shortest amount of time, liked, shared, whatever.
01;02;08;24 - 01;02;36;00
Dr Vivienne Ming
It's all very fast; nothing psychologically deep is happening. In this other small population of girls, the majority of time was also shallow: swipe, type, swipe. We're imperfect human beings. But every now and then they'd stop, and we could see in the metadata that they'd go look up something on a related topic, then come back to TikTok or Instagram, then go look up something else on a related topic.
01;02;36;07 - 01;03;04;03
Dr Vivienne Ming
Every now and then, they went deep. I'm not saying that it was the social media experience that produced the benefits in their lives; rather, it's kind of the other way around. They had these foundational skills that allowed them to be meta-learners. They had learned how to learn. And this crosses everything we've talked about so far: cognitive, emotional, social, metacognitive.
01;03;04;05 - 01;03;29;09
Dr Vivienne Ming
They deploy these in their lives and actively seek; they're curious, they're engaged. It wasn't enough to see a video on TikTok; they wanted to know the context, they wanted to check whether it was real or not. They look great. So when I think about how a technology affects people, you can never say, well, there's the average person, because the average person doesn't exist.
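The shallow-versus-deep distinction she reads out of that swipe metadata can be illustrated with a toy dwell-time profile. The numbers and the threshold are purely hypothetical; the point is that the signal lives in the tail of the distribution (the rare long dwells), not in the average:

```python
def engagement_profile(dwell_ms, deep_threshold_ms=5000):
    """Summarize one session: most interactions are sub-second swipes,
    but occasional long dwells mark a 'deep dive'."""
    deep = sum(1 for d in dwell_ms if d >= deep_threshold_ms)
    return {
        "interactions": len(dwell_ms),
        "deep_fraction": deep / len(dwell_ms),
    }

shallow_only = engagement_profile([200] * 50)                   # pure swiping
mostly_shallow = engagement_profile([200] * 45 + [20000] * 5)   # occasional deep dive
```

Both users look nearly identical on total time and swipe count; only the deep-dive fraction separates them, which is why averages over "the average person" miss the subgroup entirely.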
01;03;29;11 - 01;04;02;08
Dr Vivienne Ming
How is AI going to change education or the workforce or society? Heterogeneity dominates, just like in my own research on teams of people using AI to predict prediction markets. It was not whether they were using GPT-5 or Gemini 3; actually, they could be using an open-source Llama model thrown together. It was the human capital, and how the two engaged with one another.
01;04;02;12 - 01;04;37;08
Dr Vivienne Ming
That was the dominant predictor of whether they would outperform the market; the AI was pretty secondary to that. People did better with better AI, but it was the human capital side of this. We have to be realists about that. Some people need more structure and support; some people need free rein. Stop pretending there is one kind of person in the world and that everything should be built for this fictional, non-existent average person.
01;04;37;10 - 01;05;01;14
Dr Vivienne Ming
If we can do away with needing the one rule to rule them all, then we can begin to engage with the reality that we're different. Amazing people who won the genetic lottery and had the good fortune of an astonishing household to grow up in: they are your odds-on bets to invent new things and change the world.
01;05;01;16 - 01;05;24;28
Dr Vivienne Ming
Let's give them the things they need to do so. Let's discover the diamonds in the rough who can do the same thing, but without those benefits. But let's also look at everyone and realize: if you could discover that 1% of diamonds in the rough, maybe you could also lift the rest of the planet by 1%, you know, lift them 1% and get the exact same benefit.
01;05;25;02 - 01;05;53;29
Dr Vivienne Ming
Yeah. What would it mean to be able to boost people's conscientiousness by a meaningful, population-wide amount? Now I'm sort of dreaming and free-flowing, because I actually don't think we have a good sense of what it would mean to be able to do that. Right now I'm being a science fiction writer, dreaming about this sort of thing. But if you want to dream about that, then you have to, paradoxically, be a realist and think: well, then, that means people are different.
01;05;54;01 - 01;06;09;16
Dr Vivienne Ming
They need different things. How do I build tools that give people what they need, when they need it, and never give them what they want just because it's the shallow, easy thing?
01;06;09;18 - 01;06;30;14
Geoff Nielson
That's where my mind went and what I wanted to ask, Vivienne, because you kind of answered that question with a "you" in mind, and I interpret it as kind of a capital "You," because it's societal and it's individual.
01;06;30;14 - 01;06;52;23
Geoff Nielson
There's the responsibility all of us have to do that, and there's certainly a responsibility that leaders and those in positions of power have. But I wanted to come back to something you alluded to earlier, which is how to robot-proof your company. When we talk about how to robot-proof your company, is the story you just told...
01;06;52;23 - 01;07;02;08
Geoff Nielson
Are those steps that you just shared also the answer to that question? Or how would you answer it: how do you robot-proof your company?
01;07;02;10 - 01;07;31;23
Dr Vivienne Ming
Yeah. You know, here's a couple of suggestions, and again, I write about this a bit in the book. One: let's look at the societal level first; then I'll come back to the company, and the family for that matter. I think companies should engage in data and algorithm audits. You know, financial audits were an industry-led initiative when they first emerged in the world.
01;07;31;23 - 01;07;54;02
Dr Vivienne Ming
You just couldn't get people to invest in your company if they didn't know what was in the books. It was invented by Vanderbilt way back when and became a standard; now it's law, but initially it was just rational behavior by companies. The same rational behavior should get you to be transparent. It doesn't mean you disclose what your algorithm is, or your unique data.
01;07;54;02 - 01;08;21;03
Dr Vivienne Ming
Of course you shouldn't. But having people come in and attest that the algorithm does what it claims, and that the data is being held in these ways, that to me is just rational. Unfortunately, we live in a consumer-driven world in which individual consumers aren't so rational about how they choose what products they use.
01;08;21;03 - 01;08;45;13
Dr Vivienne Ming
But I think investors should be more rational, because the long-term economic consequences of some of this are actually quite negative if we're not thoughtful about it; it'll eat up all your alpha. So that's companies. I am a believer in the value of having good regulation. One thing I do not believe is that legislatures and politicians should do direct regulation.
01;08;45;15 - 01;09;26;05
Dr Vivienne Ming
How could they possibly understand this stuff? They should empower strong institutions, institutions that love the technology but see, like I do, its strengths and weaknesses, to come in and help, with carrots and sticks, so that companies make good decisions and play on level playing fields. Right now, as we pull all regulation out of the system, we're ending up in a kind of prisoner's-dilemma world where everyone has to make the trashiest, most disruptive product, disruptive in a negative sense, because if you don't, your competitor will.
01;09;26;07 - 01;09;54;11
Dr Vivienne Ming
So I have to be completely short-term in how I build my market space, because I know everyone else is going to be completely short-term as well, and right now, with these growth curves, I'm left behind if I'm not. Regulation helps to normalize that. If you're a nerd: regulation adds some momentum to the gradients, so you can search a little less greedily through your possibility spaces.
01;09;54;13 - 01;10;20;22
Dr Vivienne Ming
Another big initiative that I engage in through my nonprofit is data trusts. Individual consumers were never going to be able to do this stuff with any degree of sophistication. But bear with a metaphor: it's the same way you might put your money together in a credit union, rather than in a traditional bank, and invest in a community together.
01;10;20;24 - 01;10;59;20
Dr Vivienne Ming
In the same way, you can put your data together in a nonprofit whose sole fiduciary responsibility is to you, and let it go out and collectively negotiate its relationship with data aggregators and the surveillance economy, so that it can help to serve those interests. Right now, there isn't such a large-scale data trust out in the world. But consumers are so naive and so willingly shallow about this engagement, and they will remain so without support, that we're in for a bad near term on all of this.
01;10;59;22 - 01;11;34;18
Dr Vivienne Ming
What do you do inside your community, or even inside your family? One: be brutally honest with yourself. Where are your employees with this? Some of them might just need the bleeding-edge, all-guardrails-gone AI to run crazy with their ideas. I would be very frustrated if I couldn't get straight answers out of Gemini. But as I told you, there's this astonishing, seemingly paradoxical research showing that for most students,
01;11;34;21 - 01;12;04;03
Dr Vivienne Ming
and I'm going to argue most employees as well, AIs that never give you answers, that only give you context, actually do better for long-term growth and learning. So when you look at students pretest and post-test, they study for a semester with an AI that will just do anything, or even an AI that won't initially give answers; it makes the student engage first, but then eventually it will give answers.
01;12;04;06 - 01;12;50;14
Dr Vivienne Ming
Pretest, post-test: they learn nothing. There are negative effects to using the AI tutors that way. The AI that never gives answers is the only one that beats no AI whatsoever. So every year I give this lecture at UC Berkeley where I share this story, first a prediction, and years later an empirical reality: GPS and automated navigation will causally increase cognitive decline, because we know that navigating through space is prophylactic against cognitive decline.
01;12;50;17 - 01;13;11;23
Dr Vivienne Ming
And now humans don't have to do that anymore. So I challenge the students, this is an engineering entrepreneurship course: how would you redesign Google Maps? Or pick your own project if you want to, but this is my default challenge. How would you redesign Google Maps such that I'm not only better when I'm using it, I'm better than where I started
01;13;11;29 - 01;13;42;00
Dr Vivienne Ming
when I reach my destination? And I get some amazing ideas, but they all basically boil down to: it doesn't give you the answers, it only gives you what you need when you need it. I always pull this one out at the end, because sometimes technology isn't the answer, in a sense. Here's what I do when I'm in London, New York, LA, towns
01;13;42;00 - 01;14;10;06
Dr Vivienne Ming
I know well but not perfectly, or, for that matter, even going across town in Berkeley, because who knows what the traffic's like. I spin up Google Maps. I check the route, and then I think: what do I know about this problem, uniquely me, that isn't likely to show up on Google? And I try to take a different route and beat it there without cheating.
01;14;10;06 - 01;14;30;23
Dr Vivienne Ming
I don't get to speed, I don't get to run stop signs. Did it tell me to go make an unprotected left turn? Maybe I take a right turn instead, because it's getting everything wrong. Somewhere that I know, today, is going to be terrible because of the nature of the traffic: there's a football game today, it'll be horrible.
01;14;30;25 - 01;15;01;15
Dr Vivienne Ming
I know this; it doesn't. So I'm actively using my brain. That takes me to the final, let's call it, rule of thumb: if it's not hard, you're probably not doing it right. If you're not thinking about it, then you're not going deep. If you're not going deep, you're not learning. If AI is going to boost our productivity, and that's a good thing, let's invest that productivity gain in ourselves, not in just doing more shallow stuff.
01;15;01;17 - 01;15;34;09
Dr Vivienne Ming
So part of this, as a culture, is the effort of asking: am I rewarding people for the productively wrong answer? Am I encouraging people to disagree with their boss productively? Am I sharing the stories of courageous decision making inside my organization? And then being brutally honest. Based on research I did during Covid on remote work, a clear majority, 80% depending on the organization, and again, very different across organizations,
01;15;34;09 - 01;16;01;28
Dr Vivienne Ming
but the majority of employees needed extra management support. They were so used to the regular process of going into the office, following a schedule, exiting, that they were terrible at managing it all when it was gone, when they weren't getting that structure for free. So those people needed extra support, extra guidelines. They needed the freedom to be off on their lunch hour, to not have to answer emails at two in the morning.
01;16;02;00 - 01;16;23;28
Dr Vivienne Ming
But interestingly, the other 20% needed the exact opposite. They were actually hyper productive during Covid because finally, no one was holding them back. They got to wake up at two in the morning because they had a cool idea and work on it. They got to ignore emails because they felt empowered to do the thing they thought was right.
01;16;24;05 - 01;17;01;17
Dr Vivienne Ming
And then people started to manage them again. And it was like, you know, 1770s America: what the hell is going on here? Time for a tea party, the original one. So they rebelled. When you could give people what they needed, they flourished. So are you willing to put in the political capital within your company to have differentiated management, and say: this person, we're going to give them the unfettered AI, but you, honestly, you need some more constraints.
01;17;01;17 - 01;17;31;12
Dr Vivienne Ming
You need the AI that isn't just going to write marketing copy for you so that you're done, the one that's fine-tuned to give you critical rather than sycophantic feedback. These are decisions that organizations can make. I'm not going to pretend they're easy. In fact, I think that's the real story here: the best decisions will be costly in the near term and pay off in the medium and long term.
01;17;31;14 - 01;18;01;09
Dr Vivienne Ming
If you're not willing to pay those near-term costs, which are usually about time, then don't take my advice. Automate the hell out of everything, leverage a lot of chat functionality, and then eventually realize that no one wants to buy your marketing slop product, or that no one you're employing actually knows how to do anything, and wish you'd made a different decision. Or pay the costly prices.
01;18;01;13 - 01;18;28;03
Dr Vivienne Ming
Now. Because guess what? The giant companies that are actually building these tools, that's what they are doing, in ways some of which I admire and some of which I don't. Many of them are really brutal about maintaining company culture, in a sort of siege-mentality sense of no one who isn't special is allowed to be here. But at least I appreciate what they're trying to do.
01;18;28;03 - 01;18;52;29
Dr Vivienne Ming
And I think on some level they're right. Preserving that sense of a special culture and special employees, I get it. I just think we could bring it to so many more people if you're willing to actually invest in creating, I'm going to call it, engineered environments for success, and be willing to treat different people differently in a productive way.
01;18;53;02 - 01;19;11;07
Geoff Nielson
Wow. And that's a theme that, you know, I hear us coming to again and again, and heterogeneous is a word you used earlier, right? It sounds like a key theme here, one that works with AI but is not in any way limited to AI, is just treating people as people, and what works for them.
01;19;11;09 - 01;19;24;27
Geoff Nielson
And getting away from this sense of, you know, there's a monoculture, this is what good looks like, and everybody has to follow it. What do individuals need, and how can we have everybody flourish?
01;19;24;29 - 01;19;52;16
Dr Vivienne Ming
I mean, the only thing I want to be cautious about is that it's so easy to slip into the language of personalization, which, as a generic idea, sure, is what we're talking about. Except the way that ends up playing out in the real world is: we tell Gallup that we would absolutely never vote for a politician who supports violence or denigrates their opponents,
01;19;52;18 - 01;20;31;24
Dr Vivienne Ming
and then those same politicians tweet out violent language and share memes that portray their enemies as horrible people, and we like it and we upvote it. We bought into Facebook; we are invested in these things. It is the dynamics between the algorithm, the elites, and the consumers in social media that gave us the social media world of today. So let's be clear: the hard decision here about heterogeneity isn't just giving people what they want, but what they need.
01;20;31;26 - 01;21;14;16
Dr Vivienne Ming
And that is profoundly, morally complicated. But I want to use that language so that we're owning what we're talking about, and, you know, being respectful of how easy it would be to be paternalistic about it all. The flip side is that a recent paper showed that when those same people, these students, had Facebook taken away from them for a semester, not only did their academics and mental health generally improve, but afterwards they were happy with the policy, which they had hated at first.
01;21;14;18 - 01;21;47;09
Dr Vivienne Ming
So we often use these willingness-to-pay measures as a way to value intangible goods: would you pay for Facebook? Erik Brynjolfsson has some papers out about that, and I really like his work. Except here's that wave function again, where we're all these different people at the same time: we're the person who wouldn't pay a dime, you'd have to pay me to use Facebook, and we're the person who would pay twice as much to be able to use it.
01;21;47;11 - 01;22;08;09
Dr Vivienne Ming
It all depends on the history that brought us to that moment. You as a business leader, how do you create that history for your employees such that they are the best version of themselves on the job, and frankly, that they're challenging you to be that same person?
01;22;08;12 - 01;22;27;01
Geoff Nielson
I think that's extremely well said. I was going to go to some sort of wrap-up question, but I actually like that note so much that why don't we leave it on that? Vivienne, this has been extremely interesting, extremely informative. I've really enjoyed every minute of our conversation. So thanks so much for coming on the program today.
01;22;27;03 - 01;22;29;18
Dr Vivienne Ming
It was a pleasure.
01;22;29;20 - 01;22;30;08
Geoff Nielson
Wonderful.