Our Guest Gregory Warner Discusses
AI's Most Dangerous Truth: We've Already Lost Control
What happens when the people building artificial intelligence quietly believe it might destroy us?
On this episode of Digital Disruption, we’re joined by Gregory Warner, Peabody Award–winning journalist, former NPR correspondent, and host of the hit AI podcast The Last Invention.
Gregory Warner is a versatile journalist and podcaster. He has been recognized with a Peabody Award and other awards from organizations like Edward R. Murrow, New York Festivals, AP, and PRNDI. Warner's career includes serving as an East Africa correspondent, where he covered the region's economic growth and terrorism threats. He has also worked as a senior reporter for American Public Media's Marketplace, focusing on the economics of American health care. His work has been recognized with a Best News Feature award from the Third Coast International Audio Festival.
Gregory sits down with Geoff for an honest conversation about the AI race unfolding today. After years spent interviewing the architects, skeptics, and true believers behind advanced AI systems, Gregory has come away with an unsettling insight: the same people racing to build more powerful models are often the most worried about where this technology is heading. This episode explores whether we’re already living inside the AI risk window, why AI safety may be even harder than nuclear safety, and why Silicon Valley’s “move fast and fix later” mindset may not apply to superintelligence. It also examines the growing philosophical divide between AI doomers and AI accelerationists. This conversation goes far beyond chatbots and job-loss headlines. It asks a deeper question few are willing to confront: are we building something we can’t control and, doing it anyway?
00;00;00;14 - 00;00;01;08
Hey everyone!
00;00;01;08 - 00;00;02;26
I'm super excited to be sitting down
00;00;02;26 - 00;00;06;22
with Gregory Warner,
Peabody Award winning journalist, ex NPR
00;00;06;22 - 00;00;10;25
correspondent, and current host of the hit
AI podcast The Last Invention.
00;00;10;28 - 00;00;15;10
Greg is a fellow traveler in the quest to
understand the race to build advanced AI.
00;00;15;13 - 00;00;18;18
His full time
job is examining the existential questions
00;00;18;21 - 00;00;21;21
and key players
at the heart of the AI revolution.
00;00;21;27 - 00;00;25;27
I want to ask him whether we're creating
something that will save us or destroy us,
00;00;26;03 - 00;00;30;02
what future he thinks is most likely,
and what we need to do to prepare.
00;00;30;05 - 00;00;35;04
Let's find out.
00;00;35;07 - 00;00;36;24
I'm here with Gregory Warner.
00;00;36;24 - 00;00;38;29
He is a Peabody Award winning journalist.
00;00;38;29 - 00;00;43;25
He's the producer and host of The Last
Invention podcast, all about AI.
00;00;43;25 - 00;00;47;09
And maybe just to start things off, Greg,
tell me a little bit about
00;00;47;12 - 00;00;51;16
why you and some of your co-producers
created this podcast.
00;00;51;17 - 00;00;52;14
what was kind of the,
00;00;52;14 - 00;00;55;21
you know, the rationale
for why you wanted to tell this story?
00;00;55;24 - 00;00;56;09
Sure.
00;00;56;09 - 00;01;00;11
I mean, I think what for me would
what got me hooked on the on
00;01;00;11 - 00;01;06;09
the topic was realizing
that the people making I were, had a,
00;01;06;12 - 00;01;10;07
I had a sense
that this, this might kill us all.
00;01;10;10 - 00;01;11;12
It's as simple as that.
00;01;11;12 - 00;01;15;01
I mean, the fact that the
that they felt that the risk,
00;01;15;04 - 00;01;19;20
of this thing that they were building to
humanity was, was real,
00;01;19;23 - 00;01;23;13
just felt like such an interesting time
to live in.
00;01;23;13 - 00;01;28;07
And the fact that that that debate
over the existential risks and rewards
00;01;28;07 - 00;01;32;09
of this technology, because I think that
the potential upsides are just as radical.
00;01;32;14 - 00;01;36;05
It felt like while we were discussing
in the news at the time, things like,
00;01;36;05 - 00;01;37;18
oh, the danger of deepfakes
00;01;37;18 - 00;01;41;11
and the possibility of AI taking jobs,
these more existential questions
00;01;41;11 - 00;01;46;10
which had haunted AI really from its
beginning, as we found out.
00;01;46;13 - 00;01;48;24
It felt
like, okay, we need to have this debate
00;01;48;24 - 00;01;53;17
and try to bring this to people
in a way that didn't,
00;01;53;20 - 00;01;56;01
for lack of a better word,
didn't just freak people out.
00;01;56;01 - 00;01;59;15
You know, the
the idea was, let's introduce people to
00;01;59;15 - 00;02;05;21
the kind of debates that are being had in,
and have been had in these circles and,
00;02;05;22 - 00;02;10;29
and how we might talk about the future
with some kind of superintelligence.
00;02;11;02 - 00;02;12;05
It's, you know, that, to me,
00;02;12;05 - 00;02;15;29
is one of the most interesting things
about the topic is that if you if you ask
00;02;16;02 - 00;02;19;09
some of the most knowledgeable people
and the most central people
00;02;19;12 - 00;02;23;04
in this conversation,
what their outlook is, that just
00;02;23;04 - 00;02;28;04
the range of answers is so radical from,
you know, basically utopia
00;02;28;04 - 00;02;32;16
to destroy us as a species
to, you know, this, you know,
00;02;32;16 - 00;02;36;09
other cohort of voices saying it's all,
you know, basically a nothing burger.
00;02;36;12 - 00;02;40;01
You know, at the risk of asking you
to editorialize as a journalist,
00;02;40;04 - 00;02;42;27
I'm curious, you know, having talked
to all these people, having heard
00;02;42;27 - 00;02;44;13
all the sides of the story,
00;02;44;13 - 00;02;48;00
you know, what
what outlook do you find most compelling?
00;02;48;00 - 00;02;51;22
Are are you worried about the existential
risk or where have you kind of landed?
00;02;51;29 - 00;02;56;06
As you know, Greg, the human.
00;02;56;09 - 00;02;56;26
Okay.
00;02;56;26 - 00;02;57;25
Greg. The human.
00;02;57;25 - 00;03;00;03
Where do I land? well,
let me tell you a story.
00;03;00;03 - 00;03;03;23
So there was a hacker,
a white hat hacker named Dawn song.
00;03;03;24 - 00;03;07;21
I think her name is, And she took
00;03;07;21 - 00;03;11;05
all the AI, the leading AI models,
00;03;11;08 - 00;03;14;27
and she found, 20 zero days,
00;03;14;27 - 00;03;18;06
if you're familiar with the zero
day, it's a that's a hacker terminology.
00;03;18;06 - 00;03;20;09
And basically it's a it's a huge deal.
00;03;20;09 - 00;03;24;13
It's the sort of thing you stop everything
and you focus on this, this weakness,
00;03;24;13 - 00;03;27;26
because it's incredibly essential
to fix it.
00;03;27;29 - 00;03;31;22
To find even 102 is a big deal.
00;03;31;26 - 00;03;33;16
She found 20.
00;03;33;16 - 00;03;35;21
And using ChatGPT
00;03;35;21 - 00;03;39;26
and the current model, we're not even
talking about some future far off thing.
00;03;39;26 - 00;03;43;11
And then I think she used Gemini
and found it even faster.
00;03;43;11 - 00;03;44;08
Or something like this.
00;03;44;08 - 00;03;50;01
And the important thing is that
I hadn't been trained to find a zero day.
00;03;50;04 - 00;03;53;04
It literally became a top level hacker
00;03;53;10 - 00;03;56;16
with just a few smart prompts.
00;03;56;19 - 00;04;03;13
And so I think my take from that is
00;04;03;16 - 00;04;06;00
we are talking about
when will these systems
00;04;06;00 - 00;04;10;02
be good enough to pose a threat.
00;04;10;05 - 00;04;13;27
But I think that time is already here.
00;04;14;00 - 00;04;20;08
And so it's not will we awaken a God
or will we a will we summon the demon?
00;04;20;09 - 00;04;23;28
It's it's it's it's
not a future conversation.
00;04;24;01 - 00;04;27;00
It's it's that we we are living already
00;04;27;00 - 00;04;32;12
with a with a technology
that is so much more capable
00;04;32;12 - 00;04;38;02
than we realize that is becoming
increasingly capable and by design, is,
00;04;38;05 - 00;04;43;12
its capacities, its capabilities
are not known until the model is released.
00;04;43;12 - 00;04;45;04
That's amazing.
00;04;45;04 - 00;04;49;26
Really, when you think about it, that,
that it takes being out in the world
00;04;49;26 - 00;04;54;16
in order to or being out being created
in order to figure out what it can do.
00;04;54;19 - 00;04;59;29
And so I guess, you know, in terms
of my level of doom, which is like
00;04;59;29 - 00;05;04;23
my probability of doom, I guess you
other folks have used that on the show.
00;05;04;26 - 00;05;06;10
I don't think of it as Will.
00;05;06;10 - 00;05;11;21
Are we headed toward Utopia or are
we headed toward, you know, apocalypse?
00;05;11;24 - 00;05;16;15
It's there are weak points in our world.
00;05;16;18 - 00;05;19;22
We we have clearly ways
00;05;19;22 - 00;05;22;25
in which small conflicts
can scale up into big ones.
00;05;22;25 - 00;05;25;05
I mean, I've seen
that is a is a foreign correspondent.
00;05;25;05 - 00;05;26;11
I've seen as a war journalist.
00;05;26;11 - 00;05;29;11
And there are pathways to harm.
00;05;29;17 - 00;05;34;09
And so the question is how bad an impact
will that be?
00;05;34;12 - 00;05;36;10
Because it's definitely not zero. It
feels.
00;05;36;10 - 00;05;41;27
My doom in that sense is one like I am
sure that some bad things will happen.
00;05;42;04 - 00;05;44;16
I think actually that's inevitable.
00;05;44;16 - 00;05;47;23
But does that mean
the extension of the human race does
00;05;47;29 - 00;05;51;21
not mean
we can't recover and learn more? No.
00;05;51;21 - 00;05;53;25
I mean, I'm actually kind of an optimist
in that sense.
00;05;53;25 - 00;05;57;00
I don't like to even consider
that we are headed toward extinction.
00;05;57;04 - 00;05;58;04
But I don't know why.
00;05;58;04 - 00;06;01;21
We're not talking more about AI safety.
00;06;01;24 - 00;06;02;01
Well,
00;06;02;01 - 00;06;05;08
and you know, what's so compelling to me
about that is,
00;06;05;08 - 00;06;09;11
you know, in that story, in that example,
not about a crystal ball, right?
00;06;09;11 - 00;06;13;11
It's not about saying, oh, how
how quickly does the technology improve?
00;06;13;11 - 00;06;15;22
What can the technology do tomorrow?
00;06;15;22 - 00;06;17;10
It's real.
00;06;17;10 - 00;06;19;08
I think rational concern
00;06;19;08 - 00;06;23;15
about the disruption it can have based on
what's what's out there today.
00;06;23;15 - 00;06;26;20
And so you know, hasn't journalists,
you've talked to everybody from,
00;06;26;20 - 00;06;30;07
you know, AI leaders
to, you know, folks around the world and
00;06;30;07 - 00;06;32;28
you know, more kind of political roles or,
or military roles.
00;06;32;28 - 00;06;37;09
And so, I guess I'll ask you this way,
00;06;37;11 - 00;06;41;00
how ready are we right now for that
threat?
00;06;41;03 - 00;06;45;13
And, you know, what do we need to do
collectively as a society
00;06;45;16 - 00;06;50;13
to, you know, minimize
that that particular lens on zoom to
00;06;50;13 - 00;06;53;25
to minimize that,
the risk that it's going to provide,
00;06;53;25 - 00;06;58;07
you know, tremendous
harm to us as a society.
00;06;58;10 - 00;06;58;18
Yeah.
00;06;58;18 - 00;06;59;25
I would give two answers. I mean, there's
00;06;59;25 - 00;07;04;07
there's been some interesting papers
from the Frontier Model Forum.
00;07;04;10 - 00;07;05;13
I don't know if you've interviewed
00;07;05;13 - 00;07;08;13
anybody from there, but they're basically
an industry trade group.
00;07;08;18 - 00;07;10;29
And also interesting papers from
00;07;10;29 - 00;07;13;29
from anthropic, the, the AI company
00;07;14;06 - 00;07;17;17
that that really have have done a lot
to sort of
00;07;17;17 - 00;07;21;05
look at what are specifically the risks
and also the remedies.
00;07;21;05 - 00;07;21;27
They're not just red
00;07;21;27 - 00;07;25;28
teaming these models in terms of
trying to get them to do things that were,
00;07;26;00 - 00;07;29;06
you know, they were programed to do or
they're not supposed to do, like give you
00;07;29;06 - 00;07;32;16
the ingredients for a biological weapon.
00;07;32;19 - 00;07;35;01
And red teaming, tries
00;07;35;01 - 00;07;38;06
to get them to do those things or
to figure out if they will blackmail you.
00;07;38;06 - 00;07;41;21
But they're also saying, okay, wait,
wait, let's say it did those things.
00;07;41;21 - 00;07;43;21
How do we correct it?
00;07;43;21 - 00;07;48;10
And,
that's where most work needs to be done.
00;07;48;13 - 00;07;53;10
Because even if let's say according
to this paper by the frontier model form,
00;07;53;13 - 00;07;58;12
if you get the the model to give you, it's
not supposed to give you the, the,
00;07;58;12 - 00;08;01;28
you know, the the recipe for anthrax,
but let's say it does,
00;08;02;01 - 00;08;06;29
it tells you exactly how to make it
and how to get away with it.
00;08;07;02 - 00;08;07;12
Okay.
00;08;07;12 - 00;08;11;11
So how do we then program
the model to unlearn that?
00;08;11;14 - 00;08;14;23
It's, you can do a step.
00;08;14;23 - 00;08;15;29
It's called targeted on learning.
00;08;15;29 - 00;08;20;02
But what they then found was that it
actually didn't really unlearn it.
00;08;20;02 - 00;08;21;16
It just said it did.
00;08;21;16 - 00;08;24;12
And,
then there's also a technique which was,
00;08;24;12 - 00;08;27;20
where you get it
to give false information.
00;08;27;20 - 00;08;31;00
So if somebody asks for anthrax,
it will leave out a couple of ingredients.
00;08;31;03 - 00;08;31;14
Okay.
00;08;31;14 - 00;08;35;18
But then you're introducing
falsehood into the system
00;08;35;21 - 00;08;40;28
and you may have knock down effects which
are bad for actually legitimate research.
00;08;41;01 - 00;08;45;10
So that's odd
and an odd interesting situation,
00;08;45;10 - 00;08;47;21
which is not only that
we're not doing enough
00;08;47;21 - 00;08;51;21
sort of safety testing,
but we don't actually know the best way
00;08;51;24 - 00;08;56;10
to truly put guardrails
on this, on these technologies.
00;08;56;13 - 00;09;00;09
The best we can do
is, is, is a sort of a training overlay
00;09;00;14 - 00;09;03;18
where you, you essentially train it not to
00;09;03;18 - 00;09;06;25
or train it, to,
to not answer those questions.
00;09;06;28 - 00;09;08;24
But still it's a genetic
00;09;08;24 - 00;09;12;17
and as an agent system,
we do not know what it's going to do.
00;09;12;20 - 00;09;16;10
So, yeah, in terms of your question,
where are we at?
00;09;16;10 - 00;09;18;07
I think that,
00;09;18;10 - 00;09;21;00
the reason
I would want more of us to talk about this
00;09;21;00 - 00;09;24;08
is because in interacting with the models,
00;09;24;11 - 00;09;28;04
we are actually playing a role in
AI safety.
00;09;28;07 - 00;09;29;29
This is not true in nuclear weapons.
00;09;29;29 - 00;09;32;29
You know, nuclear weapons,
we have no impact
00;09;33;04 - 00;09;35;29
on whether there's nuclear war,
except for maybe who we vote for.
00;09;35;29 - 00;09;37;03
Perhaps.
00;09;37;03 - 00;09;39;21
But we do,
even if we're not a technologist,
00;09;39;21 - 00;09;42;10
even if we're not a lawmaker,
actually play a role.
00;09;42;10 - 00;09;47;08
There's there's all kinds of forms
where if if the AI does something weird,
00;09;47;08 - 00;09;49;21
you can post it,
you know, and it will be looked at.
00;09;49;21 - 00;09;53;02
In fact, all the AI companies say
they want that material,
00;09;53;02 - 00;09;54;23
they want that data.
00;09;54;23 - 00;09;58;02
And so we're all playing
a, I think a role in terms of,
00;09;58;02 - 00;10;00;11
as we interact with these models.
00;10;00;11 - 00;10;03;05
And we shouldn't just talk about chat
bots, I assume in the conversation.
00;10;03;05 - 00;10;04;20
I mean, I so much more than chat bots,
00;10;04;20 - 00;10;08;20
but just because it's
the most obvious thing, yes, we
00;10;08;26 - 00;10;12;16
we can also think about alignment
in our in our life.
00;10;12;16 - 00;10;16;04
We can try to treat these models
more carefully.
00;10;16;04 - 00;10;18;15
Maybe not give them access to everything.
00;10;18;15 - 00;10;21;22
yet I don't think we are doing that.
00;10;21;22 - 00;10;27;29
I think mainly we're just either getting
freaked out or ignoring the problem.
00;10;28;02 - 00;10;28;24
If you work in
00;10;28;24 - 00;10;32;14
IT, Infotech research Group is a name
you need to know.
00;10;32;17 - 00;10;35;17
No matter what your needs are, Infotech
has you covered.
00;10;35;22 - 00;10;36;29
AI strategy?
00;10;36;29 - 00;10;39;11
Covered. Disaster recovery?
00;10;39;11 - 00;10;40;11
Covered.
00;10;40;11 - 00;10;42;26
Vendor negotiation? Covered.
00;10;42;26 - 00;10;46;19
Infotech supports you with the best
practice research and a team of analysts
00;10;46;19 - 00;10;50;12
standing by ready to help you
tackle your toughest challenges.
00;10;50;15 - 00;10;55;19
Check it out at the link below
and don't forget to like and subscribe!
00;10;55;22 - 00;10;58;27
you, you talk about with AI and
especially with chat bots, that
00;10;58;27 - 00;11;04;07
we all play a role there and how that can,
you know, influence safety.
00;11;04;07 - 00;11;05;16
It can influence,
00;11;05;16 - 00;11;09;13
you know, kind of a trajectory of, of,
you know, the risk that's created here.
00;11;09;16 - 00;11;11;00
what is that role like?
00;11;11;00 - 00;11;16;28
What what message do you want to impart
on, you know, the the average chat
00;11;16;28 - 00;11;24;04
bot user out there to get them to interact
with the technology more thoughtfully?
00;11;24;07 - 00;11;25;24
Well, I mean, a couple of things.
00;11;25;24 - 00;11;30;12
One is just the basics, which is if
00;11;30;15 - 00;11;32;26
if I does something
00;11;32;26 - 00;11;36;05
odd, if it behaves in a certain way.
00;11;36;08 - 00;11;38;29
I mean, I think if you are a hacker
and you can figure out
00;11;38;29 - 00;11;42;23
how to use it to hack things,
that would be a more specialized tool.
00;11;42;26 - 00;11;46;18
I mean, look, it's a light role, that
if we're interacting with the technology,
00;11;46;18 - 00;11;51;13
but it's also about if we're bringing it
into our companies, what is then?
00;11;51;16 - 00;11;55;18
Do you know what is
then the role of AI in our company?
00;11;55;18 - 00;12;00;14
I mean, they talk about this idea of human
in the loop so often use phrase.
00;12;00;17 - 00;12;05;17
But on the positive side,
the human in the loop, we know that,
00;12;05;20 - 00;12;08;20
AI is more capable
00;12;08;26 - 00;12;11;16
if it has a human in the loop.
00;12;11;16 - 00;12;12;17
And that's, that's it's
00;12;12;17 - 00;12;16;17
not just a, it's a safety thing,
but it's a, it's a value thing that,
00;12;16;17 - 00;12;19;19
the perfect example is this,
this question of the chess computer.
00;12;19;19 - 00;12;21;14
People are amazed at chess.
00;12;21;14 - 00;12;24;19
Computers can beat humans,
but a human with a chess computer
00;12;24;19 - 00;12;27;19
even human with a kind of slightly,
00;12;27;22 - 00;12;32;13
less and more dumb chess computer
can be any person and any computer.
00;12;32;16 - 00;12;33;23
And so
00;12;33;23 - 00;12;38;12
I think that as we are in the situation
in this decision making capacity as to how
00;12;38;13 - 00;12;42;23
much are we going to give over to the AI,
how much are we going to outsource?
00;12;42;26 - 00;12;44;13
And also
00;12;44;16 - 00;12;46;18
how much do we trust it?
00;12;46;18 - 00;12;49;18
I think we have to get away from an,
00;12;49;22 - 00;12;52;16
the kind of anthropomorphizing mentality
where we think,
00;12;52;16 - 00;12;55;16
wow, this is a really capable,
amazing worker
00;12;55;19 - 00;12;58;25
who can do all kinds of things,
what more can we give it?
00;12;58;28 - 00;13;01;26
That's probably the wrong, idea.
00;13;01;26 - 00;13;05;15
Rather, we should we should think of this
00;13;05;15 - 00;13;09;08
as a completely alien
00;13;09;11 - 00;13;12;22
kind of intelligence,
which in terms of mimicking
00;13;12;22 - 00;13;16;12
human, intelligence, it's
been programed to do that.
00;13;16;15 - 00;13;20;12
It's it's model is designed to interact
with you as a human.
00;13;20;12 - 00;13;25;09
That's a that's a blueprint that was
created 75 years ago by Alan Turing.
00;13;25;12 - 00;13;27;08
But when we interact
with the intelligence,
00;13;27;08 - 00;13;29;29
we should just have a certain alienation
from it.
00;13;29;29 - 00;13;33;18
And it's, and treat it as a
00;13;33;21 - 00;13;38;00
is an incredibly I don't know,
I mean, is an incredibly strange,
00;13;38;03 - 00;13;42;02
incredibly wonderful, marvelous tool
00;13;42;05 - 00;13;46;02
that we have in our world now,
but that, perhaps
00;13;46;02 - 00;13;49;26
it shouldn't have access
to confidential information.
00;13;49;26 - 00;13;53;02
We do know that
these models can blackmail,
00;13;53;02 - 00;13;57;01
it shouldn't have,
perhaps, control over the company.
00;13;57;01 - 00;13;57;15
You wouldn't.
00;13;57;15 - 00;14;02;02
You wouldn't leave an incredibly capable
intern in charge of the entire operation.
00;14;02;04 - 00;14;04;10
Even if they did amazing
with their photographic memory
00;14;04;10 - 00;14;05;03
and the fact that they didn't
00;14;05;03 - 00;14;08;03
need to sleep in the fact
that they were master of all subjects.
00;14;08;06 - 00;14;11;19
Nevertheless, we would be
we would be careful.
00;14;11;19 - 00;14;15;23
And I think when we read things
like the anthropic blackmail paper, which,
00;14;16;00 - 00;14;20;15
which was a fascinating paper,
back in July, that showed that the
00;14;20;15 - 00;14;25;07
anthropic model, in order to not be turned
off, threatened blackmail of the user.
00;14;25;07 - 00;14;27;10
This was a red team model.
00;14;27;10 - 00;14;29;15
We shouldn't
00;14;29;15 - 00;14;35;03
get scared that these models, quote
unquote, really want to do us harm
00;14;35;06 - 00;14;38;15
or have, evil intentions,
00;14;38;18 - 00;14;41;07
but rather realize that agency
00;14;41;07 - 00;14;46;20
that giving agency to a technology is,
00;14;46;23 - 00;14;48;07
is a powerful thing.
00;14;48;07 - 00;14;51;04
And we should,
we should treat it with respect
00;14;51;04 - 00;14;54;04
and with with caution.
00;14;54;09 - 00;14;55;27
I think that's really well said.
00;14;55;27 - 00;14;59;12
And there's one word in there, Greg,
that that caught my attention in the sense
00;14;59;12 - 00;14;59;25
that it's a word
00;14;59;25 - 00;15;02;27
that I don't know that anyone's ever said
in our conversations before, which is,
00;15;03;04 - 00;15;05;06
you said a couple of times alien.
00;15;05;06 - 00;15;08;04
And, you know, there's
a couple of different ways to interpret
00;15;08;04 - 00;15;08;26
the word alien.
00;15;08;26 - 00;15;12;12
There's like, you know, E.T.,
like extraterrestrial life.
00;15;12;16 - 00;15;14;29
There's also
alien is just outsider. Right.
00;15;14;29 - 00;15;19;01
But but there's some there's
some sense here that this is a foreign
00;15;19;01 - 00;15;22;05
or external presence in our organizations
00;15;22;05 - 00;15;25;26
and maybe even to us as humans,
that's not fully understood.
00;15;25;29 - 00;15;29;19
And it sounds like
it sounds like your approach and and maybe
00;15;29;19 - 00;15;33;06
even if I can, you know, push you
a little bit further that your advice
00;15;33;06 - 00;15;36;27
to business leaders would probably be,
you know, proceed with caution.
00;15;36;27 - 00;15;38;23
Is that fair?
00;15;38;26 - 00;15;39;19
Yeah, I think that's fair.
00;15;39;19 - 00;15;40;23
You know, I mean, I would
00;15;40;23 - 00;15;44;04
I would also credit Lisa Eliza Reed Koski,
who wrote this book called
00;15;44;04 - 00;15;47;16
If Anyone Builds It, Everyone Dies,
which to me is the
00;15;47;19 - 00;15;50;26
these got to be the best title of the book
ever.
00;15;50;29 - 00;15;53;29
It's direct, but, he really explores
00;15;53;29 - 00;15;57;12
this question of the alien intelligence
in a, in a very smart way.
00;15;57;12 - 00;16;00;14
And,
you know, just to paraphrase what he says
00;16;00;21 - 00;16;03;01
is that,
00;16;03;01 - 00;16;06;29
so when, when,
when we think about this, these
00;16;07;02 - 00;16;09;26
but he warns these as if anyone builds it,
everyone dies.
00;16;09;26 - 00;16;14;17
Meaning a superintelligence
is fundamentally unnavigable
00;16;14;20 - 00;16;17;17
or uncontrollable and unpredictable.
00;16;17;17 - 00;16;20;01
And this is
this is not just you Koski saying this.
00;16;20;01 - 00;16;22;07
We know that,
00;16;22;07 - 00;16;25;25
we know the about the basic technology
that that these models,
00;16;25;25 - 00;16;28;03
we don't know what they will do
before they're made.
00;16;28;03 - 00;16;32;09
We don't we can't tell you how they are
making the decisions they're making.
00;16;32;09 - 00;16;35;14
So there is a kind of black box
unknowability at the core.
00;16;35;17 - 00;16;38;14
But in terms of the alien, this
I think this is so interesting because.
00;16;38;14 - 00;16;41;08
It sort of gets it the danger of sci fi,
right?
00;16;41;08 - 00;16;44;08
Sci fi is written for humans.
00;16;44;09 - 00;16;49;16
It's written by humans, for humans, and
so even if there are aliens in the sci fi,
00;16;49;19 - 00;16;55;04
and we are all familiar with a very common
trope about AI versus humanity,
00;16;55;07 - 00;16;57;04
AI rebelling against humanity,
00;16;57;04 - 00;17;02;11
AI doing something evil,
and also aliens versus humans.
00;17;02;14 - 00;17;05;02
There's a certain way
in which those stories play out,
00;17;05;02 - 00;17;09;25
and this is Yudkowsky main point that,
he sort of follows the rules of narrative
00;17;09;26 - 00;17;13;26
where, okay,
the humans are battling against the AI,
00;17;13;26 - 00;17;18;11
the AI has an incredibly new,
powerful weapon or something,
00;17;18;11 - 00;17;22;28
or the AI is willing to act
inhumanely in some important way.
00;17;23;01 - 00;17;26;04
And then, you know, the question is,
what will humanity do about it?
00;17;26;04 - 00;17;27;12
And there's a big conflict.
00;17;27;12 - 00;17;30;22
But what he says
is that we have to understand
00;17;30;22 - 00;17;34;14
that as these that there's just so much
that is
00;17;34;17 - 00;17;38;04
non and it's a non-human
or he uses the word alien about it.
00;17;38;04 - 00;17;42;14
It's thinking that the ways
in which this might go wrong in
00;17;42;14 - 00;17;44;27
most of his scenarios are in ways
in which it goes wrong.
00;17;44;27 - 00;17;47;22
I don't think he has any.
Which goes perfectly right.
00;17;47;22 - 00;17;51;22
The ways in which this goes wrong
will be complicated and weird.
00;17;51;25 - 00;17;53;18
The complicated and weird.
00;17;53;18 - 00;17;56;10
They won't look like Skynet,
00;17;56;13 - 00;17;58;19
you know, from the Terminator, film.
00;17;58;19 - 00;18;01;04
They won't look like,
00;18;01;04 - 00;18;04;02
you know, another sort of situation
where, like,
00;18;04;02 - 00;18;06;05
they won't look like how 9000 or something
like that.
00;18;06;05 - 00;18;07;10
It'll look.
00;18;07;10 - 00;18;10;10
Or from, from, Stanley Kubrick.
00;18;10;13 - 00;18;13;10
It'll look like.
00;18;13;10 - 00;18;16;10
Okay, we never, predicted this.
00;18;16;10 - 00;18;17;29
This wasn't programed.
00;18;17;29 - 00;18;22;00
How did the how did the AI even grow
00;18;22;00 - 00;18;26;10
to want this, for example?
00;18;26;13 - 00;18;31;26
Anyway, he has these nightmarish scenarios
which we could go into later, but, that
00;18;31;27 - 00;18;37;12
that is his that is his term that we need
to not anthropomorphize this thing.
00;18;37;12 - 00;18;38;29
Not thinking of is just a tool.
00;18;38;29 - 00;18;43;05
Not thinking of his as another person, a
super smart Einstein or, you know, stereo.
00;18;43;05 - 00;18;46;26
I'm a day says, you know,
millions of Einsteins in a data center.
00;18;46;29 - 00;18;50;12
But just think of it
as an alien intelligence that,
00;18;50;16 - 00;18;52;27
and then most of you
Koski says this is going to go
00;18;52;27 - 00;18;54;27
horribly wrong,
but I think we could also talk together
00;18;54;27 - 00;18;59;12
about how an alien intelligence
could radically improve our lives,
00;18;59;15 - 00;19;02;10
which we should definitely get to, but,
yes,
00;19;02;10 - 00;19;06;02
I think resisting anthropomorphizing is
is absolutely important.
00;19;06;05 - 00;19;09;23
Well and and and recognizing
the inherent unpredictability
00;19;09;29 - 00;19;12;15
it sounds like
if something that just thinks thinks
00;19;12;15 - 00;19;14;27
and I'm even anthropomorphizing
by saying thinking I guess.
00;19;14;27 - 00;19;19;09
But that just behaves in a way
fundamentally different from,
00;19;19;14 - 00;19;24;12
you know, us Behaves and also wants things
fundamentally different than ours.
00;19;24;12 - 00;19;24;18
Right.
00;19;24;18 - 00;19;27;15
I think that's the key thing
is that, in a movie.
00;19;27;15 - 00;19;28;08
Right.
00;19;28;08 - 00;19;33;20
Even the enemies of humanity
want something that is recognizably worth
00;19;33;20 - 00;19;38;26
wanting,
like power, for example, or control.
00;19;38;29 - 00;19;40;16
Whereas
00;19;40;16 - 00;19;43;24
in AI, a superintelligent,
I mean, a want those things.
00;19;43;24 - 00;19;46;02
It may just want some other thing
00;19;46;02 - 00;19;48;20
and destroy the world
in the process of getting that thing.
00;19;48;20 - 00;19;50;10
And we'll think, well,
why did even want that?
00;19;50;10 - 00;19;52;22
That that makes no sense.
You would not go for those things.
00;19;52;22 - 00;19;56;17
And and yes, Eliezer cascades
all kinds of examples of that.
00;19;56;20 - 00;19;59;08
So, you know, there's a backdrop here.
00;19;59;08 - 00;20;02;06
We've talked about the need for AI safety
and for guardrails.
00;20;02;06 - 00;20;06;01
And, you know, I think there's
some really, really important points,
00;20;06;04 - 00;20;09;22
that, that you've made
that others have made in the space.
00;20;09;25 - 00;20;14;07
There's, there seems to be this spectrum
right now in terms of where people fall.
00;20;14;10 - 00;20;18;07
And on the one end it's
slow down guardrails, safety.
00;20;18;07 - 00;20;20;00
Let's really understand this.
00;20;20;00 - 00;20;23;29
And on the other end, there's
this notion of a winner take all race
00;20;24;05 - 00;20;28;28
where it's as we need this as fast
as possible, guardrails be damned.
00;20;29;01 - 00;20;31;12
Let's just let's just get there first.
00;20;31;12 - 00;20;34;12
It's almost like the anti
Uber kowski model where
00;20;34;12 - 00;20;38;23
it's just,
you know, caution to the wind. And
00;20;38;26 - 00;20;39;25
I don't know
00;20;39;25 - 00;20;42;12
you talked earlier Greg about you know
00;20;42;12 - 00;20;47;15
some of the key players in Silicon Valley
and beyond recognizing the risk.
00;20;47;15 - 00;20;51;06
But you know how have you
how have you seen them
00;20;51;09 - 00;20;55;04
behaving in practice space based on your
research and based on your interviews,
00;20;55;04 - 00;20;59;05
like where are we falling in terms
of the development of this technology?
00;20;59;05 - 00;21;01;04
Where, where should we be falling?
00;21;01;04 - 00;21;05;07
And, you know, is there a message
for the people at the helm in terms of,
00;21;05;10 - 00;21;08;05
like, should we be collectively trying
00;21;08;05 - 00;21;11;05
to influence the behavior here?
00;21;11;11 - 00;21;11;19
Yeah.
00;21;11;19 - 00;21;17;04
It's it's such a great question
because, the story of AI in the last
00;21;17;04 - 00;21;22;16
ten years, certainly has been a story
where essentially one after another,
00;21;22;16 - 00;21;26;15
people have said, oh, I don't trust
that guy to build a I, I need to build AI
00;21;26;15 - 00;21;31;02
and I need to build it faster than they do
because only I can build it safely.
00;21;31;03 - 00;21;32;21
And you see this.
00;21;32;21 - 00;21;36;00
So Demis Hassabis
at DeepMind, Google by DeepMind,
00;21;36;03 - 00;21;41;17
Elon Musk, Sam Altman create
OpenAI as a direct rival to Google.
00;21;41;20 - 00;21;46;19
Dario Amodei leaves OpenAI because he says
those guys are not committed to safety.
00;21;46;20 - 00;21;50;09
Meanwhile,
Elon Musk's Musk is kicked out of OpenAI.
00;21;50;09 - 00;21;54;22
He's forms X AI,
this and essentially this drive
00;21;54;22 - 00;21;59;03
to create it first and to create it.
00;21;59;06 - 00;22;01;12
And they all say for the benefit
of humanity.
00;22;01;12 - 00;22;05;29
Well, actually, I don't think Elon Musk
says that directly, but nevertheless,
00;22;05;29 - 00;22;10;01
the, the that drive has
then resulted in a race
00;22;10;04 - 00;22;14;22
and even just we're talking about the race
in, within the US.
00;22;14;26 - 00;22;20;24
We're not even talking about the race
with China, which amplifies things. So
00;22;20;27 - 00;22;22;05
that kind of,
00;22;22;05 - 00;22;25;29
I guess you'd call it
Silicon Valley approach to,
00;22;26;02 - 00;22;30;03
to product development,
where certainly making it first,
00;22;30;06 - 00;22;33;05
making it fast
00;22;33;05 - 00;22;34;06
has a lot of benefits.
00;22;34;06 - 00;22;38;14
I mean, not only being first to market,
but really being able to set the create
00;22;38;14 - 00;22;42;16
create the,
the model and create the, create the,
00;22;42;16 - 00;22;46;05
sort of the prototype and sort of what
people are interacting with.
00;22;46;05 - 00;22;47;25
And then you fix it,
00;22;47;25 - 00;22;51;15
you know, you you release it
and then you fix the bugs afterward.
00;22;51;18 - 00;22;55;26
So that kind of approach
to superintelligence
00;22;55;29 - 00;23;01;04
is I mean, it's that's it's really it's
really amazing to me that, Neda, for
00;23;01;04 - 00;23;04;10
example, has a superintelligence division,
00;23;04;13 - 00;23;08;13
so it feels like something sci fi.
00;23;08;16 - 00;23;12;02
But, the other I think question of it is,
00;23;12;05 - 00;23;18;03
why why are they doing this? Why
00;23;18;06 - 00;23;19;11
why the risk?
00;23;19;11 - 00;23;19;23
I'm sorry.
00;23;19;23 - 00;23;24;28
Why the race given the risk and and
and how do they justify it,
00;23;25;01 - 00;23;28;07
given that every one of the people I just
mentioned
00;23;28;10 - 00;23;32;25
has a well, not meta,
but but others have a have have stated
00;23;32;25 - 00;23;35;28
that they are very worried
about the dangers of this technology.
00;23;35;29 - 00;23;41;13
How is it that those same people are in a
race to create it as fast as possible?
00;23;41;14 - 00;23;42;29
Right.
00;23;42;29 - 00;23;44;20
And I think
00;23;44;20 - 00;23;49;04
one book that I would recommend people
or read or other essay
00;23;49;10 - 00;23;53;10
is, it's called Machines of Loving Grace
by Dario.
00;23;53;10 - 00;23;56;08
I'm a day, who started anthropic.
00;23;56;08 - 00;23;57;17
I don't know.
00;23;57;17 - 00;23;59;10
Have you have you read this this essay?
00;23;59;10 - 00;24;02;25
It's it's it's I it
I first encountered it because a number
00;24;02;25 - 00;24;07;02
of my humanitarian friends,
you know, from from Nairobi and Ukraine
00;24;07;02 - 00;24;09;27
and others,
they were all loving this book and,
00;24;09;27 - 00;24;13;09
and just to, just to sort of understand
the context here.
00;24;13;09 - 00;24;17;04
I mean, these are a lot of folks
who started off in NGOs
00;24;17;07 - 00;24;21;15
and then got disappointed with NGOs,
started companies to really do good work
00;24;21;18 - 00;24;25;04
that they felt they could do, work faster,
more technologically savvy,
00;24;25;11 - 00;24;30;02
and help humanity not under the rubric
of kind of philanthropy and NGOs,
00;24;30;02 - 00;24;33;28
but rather through a through
a startup model that kind of cohort.
00;24;34;01 - 00;24;38;00
They were blown away by this essay,
Machines of loving Grace.
00;24;38;03 - 00;24;41;23
Machines
of loving grace is, I think, the best,
00;24;41;26 - 00;24;46;12
probably most clear headed, manifesto
00;24;46;15 - 00;24;49;17
for for the acceleration ist
point of view.
00;24;49;17 - 00;24;53;25
Mark Andreasen also has the techno
optimist manifesto, I think it's called.
00;24;53;25 - 00;24;57;22
But, I would say Machines of Loving Grace
is far clearer in terms of
00;24;57;22 - 00;25;01;07
what is the approach of somebody who,
for instance, Dario.
00;25;01;07 - 00;25;02;13
I'm a day.
00;25;02;13 - 00;25;04;02
He is an effective altruist.
00;25;04;02 - 00;25;07;00
He believes he kind of disavows that now.
00;25;07;00 - 00;25;08;22
But he's a certainly a person who believes
00;25;08;22 - 00;25;10;27
we need to do the most good
for the most people and live,
00;25;10;27 - 00;25;13;12
live our lives according to that.
So what does that mean?
00;25;13;12 - 00;25;18;20
That he is now creating a superintelligent
AI that might destroy us all?
00;25;18;23 - 00;25;21;23
What he lays out is he makes this case.
00;25;21;23 - 00;25;26;16
He says, you know, it is critical
to have a genuinely inspiring
00;25;26;16 - 00;25;30;17
vision of the future
and not just the plan to fight fires.
00;25;30;20 - 00;25;33;13
He says, yes, there are risks.
00;25;33;13 - 00;25;36;17
There are dangers of of of of powerful AI.
00;25;36;18 - 00;25;39;18
He doesn't use
the word AGI is powerful AI.
00;25;39;18 - 00;25;43;11
But at the end, there has to be something
we're fighting for, right?
00;25;43;11 - 00;25;47;10
Some something that we can rally,
rally towards.
00;25;47;13 - 00;25;50;24
I think he says
fear is only one kind of motivator.
00;25;50;24 - 00;25;52;16
We also need hope.
00;25;52;19 - 00;25;54;03
So what is it
00;25;54;03 - 00;25;57;03
that those who are building
AI are fighting for?
00;25;57;06 - 00;26;02;15
And he lays out this vision in that
in that essay of the Compressed century.
00;26;02;18 - 00;26;05;02
And if you've you've come across
the compressed century idea.
00;26;05;02 - 00;26;10;10
But this is like 100 years of progress
in, in 5 or 10.
00;26;10;13 - 00;26;13;08
And so all the scientific developments
that we may have
00;26;13;08 - 00;26;19;09
in the entire 21st century
and a bit of the 22nd will all happen.
00;26;19;12 - 00;26;22;06
He says in the 5 to 10 year window,
00;26;22;06 - 00;26;26;13
after we have a suitably advanced AI and a
when when will that will happen?
00;26;26;17 - 00;26;28;21
It's, you know, that's
there's a lot of debate about that.
00;26;28;21 - 00;26;32;22
He's even said it might happen as
soon as 2026, but nevertheless it's soon.
00;26;32;22 - 00;26;33;22
It's within our lifetimes.
00;26;33;22 - 00;26;34;17
This is what he believes.
00;26;34;17 - 00;26;38;12
So then all of this scientific progress,
00;26;38;15 - 00;26;41;02
and what will it lead to?
00;26;41;02 - 00;26;44;18
And so, we could go into what he thinks
it'll lead to.
00;26;44;18 - 00;26;45;21
It's actually quite fascinating.
00;26;45;21 - 00;26;49;23
Lis talks about biology,
health, what work and meaning.
00;26;49;26 - 00;26;54;13
But the reason that resonated
with so many people that I know,
00;26;54;16 - 00;26;58;18
in the developing world, in other places,
is that they're talking with people
00;26;58;18 - 00;27;01;24
who are not worried about their jobs
being taken away.
00;27;01;24 - 00;27;04;28
They're they're they're in terrible jobs.
00;27;04;28 - 00;27;07;18
So they don't have a job. They don't
they don't like their careers.
00;27;07;18 - 00;27;08;23
We know it's easy for you.
00;27;08;23 - 00;27;13;06
And I just sit here and say, God,
I can't believe I might take our jobs.
00;27;13;06 - 00;27;16;07
We I think we sort of like our jobs
generally, but,
00;27;16;07 - 00;27;19;09
there's a lot of people in the world
who are suffering, a lot of people,
00;27;19;09 - 00;27;23;01
a world who need solutions,
major solutions,
00;27;23;08 - 00;27;29;26
to, you know, climate change,
to poverty, to, you know, hunger.
00;27;29;29 - 00;27;33;28
And the acceleration is believe
or certainly Dario Ahmadi
00;27;33;28 - 00;27;37;04
in this essay believes that an advanced
00;27;37;04 - 00;27;41;16
AI is a radical solution.
00;27;41;19 - 00;27;43;20
And it will come and it will it will
00;27;43;20 - 00;27;47;03
bring about changes
that we cannot even imagine.
00;27;47;03 - 00;27;51;18
And what's so fascinating is how similar
00;27;51;21 - 00;27;56;14
somebody like Dario
Ahmed's vision is to Eliezer Yudkowsky.
00;27;56;17 - 00;27;59;13
The if anyone builds it, everyone dies.
00;27;59;13 - 00;28;02;08
Author, both of them
00;28;02;08 - 00;28;06;14
this complete dumber
and then one and in a fairly, you know,
00;28;06;17 - 00;28;09;01
acceleration test and we make it
whatever you want to call it.
00;28;09;01 - 00;28;13;23
But he's certainly a,
you know, believer in the power of AI.
00;28;13;26 - 00;28;16;14
They both, believe that this is going
00;28;16;14 - 00;28;19;14
to be such a radical change
00;28;19;18 - 00;28;22;28
and, and fundamentally upend money,
00;28;22;28 - 00;28;27;21
the things that we treat
as normal, both of them say,
00;28;27;24 - 00;28;30;19
the only thing normal about normal
is that it ends.
00;28;30;19 - 00;28;32;27
Normality always ends.
00;28;32;27 - 00;28;37;20
And so, yeah, the only difference
between them, of course, is,
00;28;37;25 - 00;28;40;28
is whether that will end in disaster
00;28;40;28 - 00;28;44;22
or whether it'll end in, in delight.
00;28;44;25 - 00;28;48;25
And but I think these both of these
people know the models very, very closely.
00;28;48;28 - 00;28;51;01
They, they're,
they're staring directly at them.
00;28;51;01 - 00;28;52;09
They've seen the progress of them.
00;28;52;09 - 00;28;54;17
They understand how they work.
00;28;54;17 - 00;28;55;26
So it's worth. Yeah. Yeah.
00;28;55;26 - 00;29;00;17
And being with that,
00;29;00;20 - 00;29;03;14
sitting with that imagination,
whether it's the darker side
00;29;03;14 - 00;29;06;26
or the or the positive side, but but yeah,
the simple answer your question is
00;29;06;29 - 00;29;09;21
I think they see a tremendous upside
to this.
00;29;09;21 - 00;29;11;08
And it's worth the risk.
00;29;11;08 - 00;29;13;02
So so I want to
I want to dig into that a little bit
00;29;13;02 - 00;29;15;21
because I'm, you know, I'm
a self-proclaimed cynic
00;29;15;21 - 00;29;17;19
for a lot of this stuff
or at least a skeptic.
00;29;17;19 - 00;29;21;07
And so the cynic or the skeptic in me
says that,
00;29;21;10 - 00;29;25;03
you know, the dark side and the light side
are, you know, a hair's breadth apart.
00;29;25;06 - 00;29;28;19
And it it just seems,
it seems to me or that's
00;29;28;20 - 00;29;33;19
there's certainly an argument to be made
that what tilts people to the light side
00;29;33;22 - 00;29;37;04
is if they're asking for a big bag
of money, if they're asking for somebody
00;29;37;04 - 00;29;40;06
to fund them, then suddenly
all it's going to be amazing, you
00;29;40;06 - 00;29;43;06
know, versus if you're, yudkowsky, you're
you're not asking for money.
00;29;43;06 - 00;29;45;05
It's a lot easier to move to the dark
side.
00;29;45;05 - 00;29;47;13
So, so, I mean, let me frame it this way.
00;29;47;13 - 00;29;51;02
Like, to what degree do you buy the,
00;29;51;02 - 00;29;55;01
utopianism or the true acceleration,
this vision
00;29;55;05 - 00;29;57;11
of some of these technology
leaders versus,
00;29;57;11 - 00;30;03;19
you know, how much do you think
it's a fundraising tool?
00;30;03;22 - 00;30;05;10
I think that's such an important question.
00;30;05;10 - 00;30;05;26
Right.
00;30;05;26 - 00;30;09;01
And it's definitely one
that a lot of tech journalists wrestle
00;30;09;01 - 00;30;12;01
with because they've,
00;30;12;05 - 00;30;15;10
I mean everybody that I know
has been burned, whether they were burned
00;30;15;10 - 00;30;18;24
on Google Glass or they were burned
on the metaverse or whatever.
00;30;18;27 - 00;30;24;23
That's the nature of technologists
is to hype their, their, their stuff.
00;30;24;23 - 00;30;29;00
Now, we should note that lawsuit
Koski is not selling anything.
00;30;29;03 - 00;30;32;02
He's just trying to warn the world.
00;30;32;02 - 00;30;34;05
He's like a Jeremiah.
00;30;34;05 - 00;30;41;06
And there are many people out there who,
00;30;41;09 - 00;30;45;05
you know, for example, I would say Yoshua
Bengio or Geoffrey Hinton.
00;30;45;05 - 00;30;49;00
Geoffrey Hinton left his job at Google,
a very high paying job,
00;30;49;00 - 00;30;50;07
which he got only in the 60s.
00;30;50;07 - 00;30;54;11
Geoffrey Hinton, of course, the godfather
of AI creator or not directly creator,
00;30;54;11 - 00;30;58;19
but certainly a co-creator
of this incredibly important,
00;30;58;22 - 00;31;02;19
algorithm backpropagation,
which led to AI models.
00;31;02;22 - 00;31;04;29
he left his job at Google and he's
00;31;04;29 - 00;31;08;15
he's out there trying to warn the world
about about this technology.
00;31;08;15 - 00;31;12;07
So I don't think it's just the,
00;31;12;10 - 00;31;17;13
all the hype is coming from the people who
who are or who stand to benefit.
00;31;17;16 - 00;31;18;17
However.
00;31;18;17 - 00;31;21;07
It does make it extremely difficult
to talk about
00;31;21;07 - 00;31;24;23
because clearly, there's a lot of hype.
00;31;25;00 - 00;31;28;08
I think I think probably I
mean, it'd be great to talk a little bit
00;31;28;08 - 00;31;32;09
about Yoshua Bengio,
maybe because Yoshua Bengio to me is the,
00;31;32;12 - 00;31;36;06
his is is such a different
kind of model out there.
00;31;36;06 - 00;31;39;10
And it's not just, complaining about
or warning the world.
00;31;39;10 - 00;31;43;02
It's he's, he's he's literally
presented the world with an alternative, a
00;31;43;02 - 00;31;46;27
non a genetic model of of
00;31;47;00 - 00;31;50;06
of AI which, which we don't we,
we're not even talking
00;31;50;06 - 00;31;53;13
about at all where we, we're imagining
that there's only one way to make AI.
00;31;53;19 - 00;31;55;14
They're going to get smarter and smarter
and they're
00;31;55;14 - 00;31;57;14
going to do more and more things
and they're going to,
00;31;57;14 - 00;32;00;25
you know, make our airplane reservations,
and then they're going to,
00;32;00;28 - 00;32;04;24
you know, take over the
then they're going to be our lawyer,
00;32;04;24 - 00;32;07;01
and then they're going to be our doctor.
And then there will be our CEO.
00;32;07;01 - 00;32;09;23
And they just like take over
more and more human roles.
00;32;09;23 - 00;32;12;01
Right. But
00;32;12;04 - 00;32;13;20
what Yoshua Bengio
00;32;13;20 - 00;32;19;07
has created is he's it, again,
another godfather of AI early, early,
00;32;19;07 - 00;32;25;02
pioneer of these models, huge
fan of OpenAI, a huge fan of AI until he's
00;32;25;05 - 00;32;29;17
more recently looked at OpenAI,
look at the ChatGPT and
00;32;29;20 - 00;32;33;20
and realized that he's devoted his life
to something that may kill humanity.
00;32;33;20 - 00;32;38;19
So he took a huge u turn,
created something called, scientist AI.
00;32;38;22 - 00;32;42;02
And,
have you had Yoshua on the show yet? Or.
00;32;42;09 - 00;32;45;13
I've heard, you know his interviews
that you did with him on, on your program.
00;32;45;13 - 00;32;46;16
But we haven't had him on your show.
00;32;46;16 - 00;32;50;09
So why don't you
you know if you'll indulge me, you know,
00;32;50;12 - 00;32;53;04
tell us a little bit about his position
and what he's proposing.
00;32;53;04 - 00;32;54;18
No, I appreciate the chance I'm in.
00;32;54;18 - 00;32;58;21
You're indulging me because, you know,
hopefully he'll come on the show soon and
00;32;58;21 - 00;33;02;08
and and say all this from
the from the horse's mouth and
00;33;02;11 - 00;33;05;11
and not, you know,
have to deal with the poor, you know,
00;33;05;14 - 00;33;10;10
the middle man
here, but basically, Yoshua Bengio,
00;33;10;13 - 00;33;11;16
he has created
00;33;11;16 - 00;33;15;11
this thing called scientist
AI, and scientist
00;33;15;11 - 00;33;20;07
AI is, as he says, it's
like an ideal scientist or psychologist.
00;33;20;07 - 00;33;24;08
So its job is to understand and to explain
00;33;24;11 - 00;33;28;05
and to predict,
but not to act on its own goals.
00;33;28;08 - 00;33;30;25
So it is non-genetic.
00;33;30;25 - 00;33;33;18
It's an unknown genetic model,
meaning it does not have its own long term
00;33;33;18 - 00;33;36;26
goals,
that it's trying to achieve in the world.
00;33;36;26 - 00;33;39;07
It is probabilistic and cautious.
00;33;39;07 - 00;33;44;02
So, for example, unlike if, you know,
if you've interacted with like Claude or,
00;33;44;04 - 00;33;48;18
or ChatGPT, it doesn't kind of bombastic
think it knows every answer
00;33;48;23 - 00;33;53;29
and act like this overconfident
kind of, know it all.
00;33;54;02 - 00;33;56;20
Rather,
it will have a probabilistic assessment.
00;33;56;20 - 00;34;01;15
It'll say like, well, there's a 3%,
3% chance that this plan leads to
00;34;01;18 - 00;34;05;04
this outcome or this other outcome,
and it will tell you you're wrong, which
00;34;05;04 - 00;34;10;08
which a lot of times, the other models
are not designed to do so.
00;34;10;08 - 00;34;12;12
It's not trained to persuade you.
00;34;12;12 - 00;34;14;00
It's not trained to please you.
00;34;14;00 - 00;34;16;20
It's it's supposed to be honest
and calibrated.
00;34;16;20 - 00;34;20;04
And most importantly,
and this is his vision.
00;34;20;07 - 00;34;23;27
It is supposed to be or it's
hopefully his plan is
00;34;23;27 - 00;34;30;01
that it might be used as a guardrail
for other, agents, a genetic AI.
00;34;30;01 - 00;34;33;17
So, for example,
you could run a powerful AI agent
00;34;33;17 - 00;34;39;05
through scientist AI,
and it will evaluate the safety
00;34;39;08 - 00;34;42;08
of their
proposed actions and can veto them.
00;34;42;12 - 00;34;46;09
So he's got, Yoshua
Bengio has this, thing called Law Zero,
00;34;46;12 - 00;34;48;14
which, he's talked about,
but a lot of zeros,
00;34;48;14 - 00;34;51;14
but essentially,
a different approach to regulation.
00;34;51;14 - 00;34;54;20
It's not saying, okay, we we're just going
to regulate these companies
00;34;54;20 - 00;34;57;20
and ask them to follow certain benchmarks
that you will.
00;34;57;22 - 00;34;58;29
They are we don't understand.
00;34;58;29 - 00;35;03;03
We're going to use
AI to regulate the AI, essentially use
00;35;03;03 - 00;35;07;26
a technological solution
to, to this kind of firm.
00;35;07;26 - 00;35;10;18
Yeah, to this kind of, safety approach.
00;35;10;18 - 00;35;15;09
And what what
Bengio is fundamentally worried about
00;35;15;09 - 00;35;19;06
is the, the, the very direction
00;35;19;06 - 00;35;23;07
that the industry is heading,
which is a genetic AI.
00;35;23;10 - 00;35;26;01
He doesn't even necessarily
talk about the dangers of,
00;35;26;01 - 00;35;29;01
might say superintelligence or,
00;35;29;02 - 00;35;32;13
you know, AGI that's a term
that gets thrown out a lot about.
00;35;32;13 - 00;35;35;23
But he, which means that or
00;35;35;23 - 00;35;39;07
artificial
general intelligence is smart as a human.
00;35;39;10 - 00;35;41;21
He just says, well, as soon as something
00;35;41;21 - 00;35;45;01
is a genetic meaning, it can help people
00;35;45;04 - 00;35;50;05
design a plan or it can manipulate humans
00;35;50;05 - 00;35;53;27
or institutions to, to achieve its ends,
00;35;54;00 - 00;35;59;02
or it can resist being shut down or it
can, you know, cause harm.
00;35;59;02 - 00;36;02;26
And we've seen the models
do all these things already.
00;36;02;28 - 00;36;06;17
That means we should we should not
be modeling AI off of humans.
00;36;06;20 - 00;36;09;20
We should not be modeling them
off of agency.
00;36;09;27 - 00;36;11;15
That's what not only humans.
00;36;11;15 - 00;36;14;05
Actually, every life form on the planet
has some degree of agency.
00;36;14;05 - 00;36;16;26
That's what kind of defines life.
00;36;16;29 - 00;36;19;26
Artificial intelligence
does not need to be a genetic.
00;36;19;26 - 00;36;23;09
It can just be a very helpful, very smart,
00;36;23;12 - 00;36;27;28
very, perceptive tool.
00;36;28;01 - 00;36;30;04
And thus we get away from deception.
00;36;30;04 - 00;36;34;08
We get away from manipulation because
it will have any agency of its own.
00;36;34;11 - 00;36;36;29
That's not at all
where the industry is headed.
00;36;36;29 - 00;36;41;03
But I think it's important to know
that there's an alternative out there.
00;36;41;06 - 00;36;43;11
Yeah,
and it's a it's a compelling alternative.
00;36;43;11 - 00;36;46;23
And certainly for us,
you know, as a species,
00;36;46;23 - 00;36;50;02
if we think about what's best for us, I.
00;36;50;05 - 00;36;52;09
I like that vision.
00;36;52;09 - 00;36;56;23
The. You know, the concern I have is
it seems like, if anything,
00;36;56;26 - 00;37;00;13
these models are getting more kind
of fractured and fragmented,
00;37;00;13 - 00;37;02;12
like as we've seen more open source AI.
00;37;02;12 - 00;37;05;20
You know, even if we get to things
like Deep Seek and some of these models
00;37;05;20 - 00;37;10;00
outside of,
you know, the US, I don't know, like
00;37;10;03 - 00;37;14;06
have we crossed like a Rubicon here
in terms of the ability to control these?
00;37;14;10 - 00;37;20;19
Like how do we I don't know, it kind of
feels like the the cat is out of the bag.
00;37;20;22 - 00;37;21;16
Yeah, it's a good point.
00;37;21;16 - 00;37;25;22
I mean, this is my main
beef personally with,
00;37;25;25 - 00;37;27;29
with with folks,
00;37;27;29 - 00;37;31;26
with the drill dumber camp
that folks like Lars Yudkowsky.
00;37;31;29 - 00;37;35;01
Because I just don't understand.
00;37;35;05 - 00;37;39;15
Maybe I'm just not smart enough
to understand, but I don't understand why,
00;37;39;18 - 00;37;44;08
we can't put the cat back in the bag
a little bit.
00;37;44;08 - 00;37;45;04
You know why?
00;37;45;04 - 00;37;51;00
Why human institutions can't rally
to create the right regulations.
00;37;51;03 - 00;37;54;15
And to sandbox
00;37;54;15 - 00;37;58;00
new models, for example,
until they're truly ready.
00;37;58;03 - 00;38;00;14
There are things that can be done.
00;38;00;14 - 00;38;01;25
They're not easy.
00;38;01;25 - 00;38;06;16
It would take a lot of,
a lot of societal will, but,
00;38;06;20 - 00;38;12;12
I think it's, I think it's not the time
to feel despair and think, gosh,
00;38;12;16 - 00;38;16;07
we've already kind of passed
over some crucial threshold.
00;38;16;07 - 00;38;20;23
I mean, in some sense, we've we passed
that when ChatGPT was first released.
00;38;20;23 - 00;38;23;00
Or you could say we passed that.
00;38;23;00 - 00;38;27;03
I mean, the Turing test has long
been kind of
00;38;27;06 - 00;38;28;11
I mean, it's it's it's
00;38;28;11 - 00;38;30;11
there's arguments about
whether the Turing test has been passed,
00;38;30;11 - 00;38;33;24
but certainly we've we've crossed
some sort of incredibly important line.
00;38;33;27 - 00;38;37;13
I think to though, you know,
what it gets at for me, I
00;38;37;14 - 00;38;41;17
one of the things about working on
this series that really taught me is the,
00;38;41;20 - 00;38;42;10
the importance of
00;38;42;10 - 00;38;46;06
storytelling and imagination
in this technology,
00;38;46;09 - 00;38;49;02
and that goes all the way back
to Alan Turing,
00;38;49;02 - 00;38;52;03
who and I didn't really understand this
because I understood the Turing test
00;38;52;10 - 00;38;53;22
as a kind of benchmark,
00;38;53;22 - 00;38;57;20
like this would be a benchmark of human,
of sorry, of machine progress.
00;38;57;20 - 00;39;00;17
You know it
once, once the Turing test was passed.
00;39;00;17 - 00;39;03;17
So, for example,
if I could chat with a machine
00;39;03;17 - 00;39;08;27
and not know it was a machine, then, well,
it's it's achieved some sort of,
00;39;09;00 - 00;39;10;22
some sort of, milestone.
00;39;10;22 - 00;39;14;11
And that was the Turing test to,
what do you call it, the Imitation Game.
00;39;14;14 - 00;39;19;13
But, in fact,
what Turing was doing all the way back
00;39;19;17 - 00;39;25;12
in World War Two and right postwar,
when he was introducing this idea,
00;39;25;15 - 00;39;26;05
was not just
00;39;26;05 - 00;39;29;09
saying, okay, this is a benchmark
for machines to pass.
00;39;29;09 - 00;39;31;27
And once machines passed that,
we can say that they're on their way
00;39;31;27 - 00;39;33;15
to really being thinking machines.
00;39;33;15 - 00;39;39;08
He said that, but he was also taking
what was at the time a really complicated,
00;39;39;11 - 00;39;43;29
philosophical debate about,
well, can machines ever think.
00;39;44;02 - 00;39;47;28
And he and he treated it like an engineer
and he said, you know what?
00;39;47;28 - 00;39;53;09
We just need to create, and,
an observable,
00;39;53;12 - 00;39;57;00
metric by which we can say
that they're thinking and, and that's,
00;39;57;01 - 00;40;00;16
and we don't have to deal
with the philosophical,
00;40;00;19 - 00;40;04;26
you know, discomfort of saying,
well, can machines thinking,
00;40;04;26 - 00;40;07;20
what would that mean
if they are thinking, etc.,
00;40;07;20 - 00;40;09;06
if, if they pass the Turing test
00;40;09;06 - 00;40;11;12
and then then they're on their way
to thinking machines.
00;40;11;12 - 00;40;15;04
And by doing that, he not only sort of
00;40;15;07 - 00;40;18;06
freed engineers
from the philosophical angst
00;40;18;06 - 00;40;21;28
and set them a kind of path to follow,
which they certainly followed.
00;40;21;28 - 00;40;26;22
And it kind of leads
to ChatGPT today, but also,
00;40;26;25 - 00;40;27;22
I think, created
00;40;27;22 - 00;40;31;12
this new way.
00;40;31;19 - 00;40;35;05
He kind of realized
something very important, which is that
00;40;35;05 - 00;40;38;28
we would not recognize machines
as thinking
00;40;39;01 - 00;40;44;02
until we started interacting with them
in a humanlike way,
00;40;44;05 - 00;40;47;18
and that when they started using language
00;40;47;21 - 00;40;51;23
and, and talking back to us,
00;40;51;26 - 00;40;56;15
that's when we would
recognize them as thinking.
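To make that benchmark framing concrete, here is a minimal, illustrative sketch in Python of an imitation-game-style evaluation: instead of debating whether a machine "thinks," you measure how often a judge mistakes its transcripts for a human's. The names (imitation_game_score, judge_guess) are hypothetical placeholders, not anything Turing or the episode specifies.

import random
from typing import Callable, List

def imitation_game_score(
    judge_guess: Callable[[str], str],   # returns "human" or "machine" for a transcript
    machine_transcripts: List[str],
    trials: int = 100,
) -> float:
    """Fraction of trials where the judge labels a machine transcript 'human'."""
    fooled = 0
    for _ in range(trials):
        transcript = random.choice(machine_transcripts)
        if judge_guess(transcript) == "human":
            fooled += 1
    return fooled / trials

# A score at or above roughly 0.5 would mean the judge cannot reliably tell
# machine from human, which is the observable bar Turing proposed in place
# of the philosophical question.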
00;40;56;18 - 00;40;59;17
And, you could get very philosophical
about this.
00;40;59;17 - 00;41;02;19
You could say that you know, trees
do a lot of thinking,
00;41;02;21 - 00;41;07;00
but we don't think of them as thinking,
there's a lot of other
00;41;07;00 - 00;41;10;00
living things on this planet
that, that think,
00;41;10;03 - 00;41;12;21
but
00;41;12;21 - 00;41;15;08
their intelligences
don't interact with our own.
00;41;15;08 - 00;41;18;07
And so we're not really
that concerned about them, or many of us aren't.
00;41;18;07 - 00;41;19;09
Right.
00;41;19;12 - 00;41;21;24
And so what Turing felt was
00;41;21;24 - 00;41;27;20
in order for us to sort of really respect
and I'll use this word, respect machines,
00;41;27;24 - 00;41;31;10
they would need to sort of, interact
with us
00;41;31;13 - 00;41;36;07
like this, in the way
ChatGPT interacts with us.
00;41;36;10 - 00;41;39;12
But the danger of that, right, is that we
00;41;39;12 - 00;41;41;16
then don't see, and this gets back to something
we talked about,
00;41;41;16 - 00;41;45;02
we don't see the alienness of it.
00;41;45;05 - 00;41;49;05
And we start to think,
we start to interact with it,
00;41;49;05 - 00;41;53;12
maybe too much like a person
or like a fellow,
00;41;53;15 - 00;41;58;07
human, or a human-like entity,
a human-thinking-like entity.
00;41;58;07 - 00;42;03;21
And thus we make very important
cognitive mistakes in interacting with it.
00;42;03;24 - 00;42;08;07
And we perhaps
trust it or distrust it in the wrong ways.
00;42;08;10 - 00;42;10;18
So this also happens in
00;42;10;18 - 00;42;13;18
this sort of failure of imagination,
or this,
00;42;13;23 - 00;42;17;25
this kind of way in which
our imagination is channeled.
00;42;17;28 - 00;42;21;12
We don't see how the models,
00;42;21;15 - 00;42;23;28
are vastly different year
00;42;23;28 - 00;42;27;08
over year because we're interacting
with a model right now.
00;42;27;08 - 00;42;29;05
It's interacting with us.
00;42;29;05 - 00;42;31;08
Yes, like the fastest human
we've ever talked to.
00;42;31;08 - 00;42;35;27
But it's still recognizable
in its thinking in some ways.
00;42;35;27 - 00;42;40;08
And it's it's
something we can respect but recognize.
00;42;40;11 - 00;42;44;28
And so it's very, very difficult for us to
then think, okay, wait a second.
00;42;44;28 - 00;42;47;16
This is just the current iteration.
00;42;47;16 - 00;42;50;05
We have to imagine
00;42;50;05 - 00;42;53;20
a different kind of intelligence
that this could grow into.
00;42;53;23 - 00;42;57;12
And what would I be in that,
00;42;57;15 - 00;43;00;05
in that situation,
if I could say one more thing, it's
00;43;00;05 - 00;43;04;01
it reminds me, honestly, of being,
you know, reporting in Ukraine
00;43;04;04 - 00;43;07;04
or reporting in Afghanistan
or reporting in South Sudan,
00;43;07;04 - 00;43;11;03
and you talk to people
who are in the middle of a war
00;43;11;06 - 00;43;12;19
and they say, you know, we
00;43;12;19 - 00;43;16;18
knew the war was coming,
but we just didn't
00;43;16;21 - 00;43;21;13
imagine what it would feel like to be us
when it was here.
00;43;21;16 - 00;43;23;29
And they want me to know.
00;43;23;29 - 00;43;24;20
You don't understand.
00;43;24;20 - 00;43;28;00
I was just planning my daughter's
wedding in that building over there,
00;43;28;00 - 00;43;29;26
which is now like a bombed-out
hole, like.
00;43;29;26 - 00;43;32;07
And they still see.
00;43;32;07 - 00;43;34;24
They still see the,
the place that they were planning the way
00;43;34;24 - 00;43;38;21
and they still are thinking and frustrated
about the, the money they spent on,
00;43;38;21 - 00;43;40;01
the wedding invitations or something.
00;43;40;01 - 00;43;43;08
You know, they haven't quite transitioned
over from the old world
00;43;43;08 - 00;43;46;08
to the new and,
00;43;46;10 - 00;43;48;15
I don't know,
I don't want to make this sound
00;43;48;15 - 00;43;52;08
like a doomer forecast, because I think
the future could be quite bright.
00;43;52;08 - 00;43;57;07
But it does take an active imagination
whether we think we're headed toward,
00;43;57;10 - 00;44;00;24
you know, any of these kind of versions
of the future
00;44;00;27 - 00;44;04;07
to put ourselves
in the new version of the future.
00;44;04;10 - 00;44;08;10
And to sort of play with,
you know, our, our imagination
00;44;08;10 - 00;44;10;19
and to imagine that the, you know,
the world is not going to be the same.
00;44;10;19 - 00;44;13;07
as it is now.
00;44;13;10 - 00;44;15;20
So, thank you for that.
00;44;15;20 - 00;44;18;05
That was a really interesting
answer that covered a lot. Yeah.
00;44;18;05 - 00;44;19;14
So, so I have,
00;44;19;14 - 00;44;22;23
so, you know, there's a lot of things
we could talk about. Sure.
00;44;22;26 - 00;44;26;23
But one of the pieces
I want to talk about is,
00;44;26;26 - 00;44;30;03
you know, you got me thinking that there's
all this conversation about the future
00;44;30;03 - 00;44;31;22
and what will happen in the next model.
00;44;31;22 - 00;44;34;20
And we talked about this before, but,
you know,
00;44;34;20 - 00;44;37;20
the sense that the future is already here,
or, you know, the William Gibson
00;44;37;20 - 00;44;41;02
quote about the future being
here but not, you know, evenly distributed,
00;44;41;09 - 00;44;44;02
and you know, how many people out there
right now
00;44;44;02 - 00;44;48;08
are interacting with chat bots
or with this technology
00;44;48;08 - 00;44;53;22
in a way that would have been completely
unimaginable to people. Right?
00;44;53;25 - 00;44;54;20
right.
00;44;54;20 - 00;44;57;25
to, you know, what I think
00;44;57;25 - 00;45;01;20
has been built
into the design of these tools is,
00;45;01;23 - 00;45;04;23
human engagement
00;45;04;29 - 00;45;08;01
as a design principle,
if I can call it that.
00;45;08;01 - 00;45;08;06
Right.
00;45;08;06 - 00;45;11;03
Like if you go back to the Turing test,
it's, you know, what is it?
00;45;11;03 - 00;45;11;25
It's it's
00;45;11;25 - 00;45;16;10
the fact that thinking for us is measured
in terms of interaction with us.
00;45;16;15 - 00;45;18;29
And that's exactly how these things
have been designed.
00;45;18;29 - 00;45;23;00
They've been designed to,
you know, flatter, to create engagement,
00;45;23;00 - 00;45;26;10
to let people's guard down,
to continuously engage.
00;45;26;10 - 00;45;26;18
Right.
00;45;26;18 - 00;45;29;09
Like, you know, one of the things
I've noticed that ChatGPT does
00;45;29;09 - 00;45;33;03
is it always prompts you for like, hey,
how can we keep the conversation going?
00;45;33;03 - 00;45;34;26
What do you need next from me? Right?
00;45;34;26 - 00;45;38;15
It's almost like a
Meta fixation or a social media
00;45;38;15 - 00;45;42;20
fixation. And
where does that take us?
00;45;42;20 - 00;45;43;10
You know,
00;45;43;10 - 00;45;47;07
we're having a conversation
out of one side of our mouths about how
00;45;47;10 - 00;45;51;16
we need to be more cautious
about the alien nature of this.
00;45;51;19 - 00;45;52;06
it kind of
00;45;52;06 - 00;45;55;26
feels like we're watching
that battle be lost.
00;45;55;29 - 00;45;58;01
And is that,
00;45;58;01 - 00;45;59;17
I don't know, I guess.
00;45;59;17 - 00;46;02;18
Let me let me kind of
frame the discussion up this way.
00;46;02;18 - 00;46;07;02
If we're if we're worried about where
that future direction is taking us,
00;46;07;05 - 00;46;12;14
you know, do we have an obligation
to push the technologists
00;46;12;14 - 00;46;17;09
and the owners of these tools to put more,
put more guardrails
00;46;17;09 - 00;46;21;01
and principles in place
that prevent people from,
00;46;21;04 - 00;46;25;05
you know, becoming enamored, let's say,
at the extreme end, with these tools?
00;46;25;08 - 00;46;28;10
Or is it more purely on the demand side?
00;46;28;10 - 00;46;31;24
And we just have to do a better job
of educating these people
00;46;32;00 - 00;46;35;11
about, you know, what
they're signing themselves up for?
00;46;35;14 - 00;46;39;01
That's, I think, the key question
to ask. It's such an important question.
00;46;39;01 - 00;46;42;10
I think that, well, a couple quick things.
00;46;42;10 - 00;46;46;19
One is that the metaphors
we use to describe this do matter.
00;46;46;22 - 00;46;51;08
And even if
we're sort of taking
00;46;51;08 - 00;46;54;16
the perspective of a business leader
saying, okay, what's practical here?
00;46;54;16 - 00;46;55;26
How can I use this to cut costs?
00;46;55;26 - 00;46;58;16
How can I use this to maximize efficiency,
compete?
00;46;58;16 - 00;47;02;13
Even still, our employees
will be narrativizing this
00;47;02;13 - 00;47;07;22
technology and interacting with it
in a way that, as you say,
00;47;07;25 - 00;47;11;11
that kind of pulls the trigger
on our very relational intelligence
00;47;11;11 - 00;47;16;22
and our sense of self, which is so based
on how we interact with others.
00;47;16;22 - 00;47;19;24
So if we're interacting with the AI
and that's going away, then
00;47;19;24 - 00;47;23;02
our sense of self is disrupted,
is affected.
00;47;23;02 - 00;47;28;29
And even if
the AI doesn't intend that or
00;47;28;29 - 00;47;33;10
isn't trying to quote unquote manipulate
us, there's a French philosopher,
00;47;33;13 - 00;47;36;10
Catherine
Evans, who introduced me to a concept
00;47;36;10 - 00;47;40;21
I felt it was quite... I don't think
she's published about this yet, but
00;47;40;24 - 00;47;43;18
she said, you know,
I guess you're probably familiar
00;47;43;18 - 00;47;47;13
with there's all kinds of UN rules
about not anthropomorphizing intelligence,
00;47;47;13 - 00;47;49;20
not anthropomorphizing technology.
00;47;49;20 - 00;47;52;03
And these go back some years.
00;47;52;03 - 00;47;56;26
So she was creating, I think, a
comic book for kids about AI.
00;47;56;26 - 00;48;00;23
But she ended up stumbling into this idea
because she wasn't allowed to create
00;48;00;23 - 00;48;04;00
the AI as a character, as it were;
she wasn't allowed to anthropomorphize
00;48;04;00 - 00;48;06;04
the AI because of UN rules.
She was doing it for them.
00;48;06;04 - 00;48;08;12
And also she wasn't
allowed to have any antagonist.
00;48;08;12 - 00;48;12;14
So the worst kind of narrative
situation, you know: no conflict,
00;48;12;19 - 00;48;15;10
no people. How do you tell a story?
00;48;15;10 - 00;48;19;16
But she ended up coming up with
this idea of AI as a place
00;48;19;19 - 00;48;20;17
and, you
00;48;20;17 - 00;48;23;17
know, in the cartoon
or the graphic novel, the YouTube
00;48;23;17 - 00;48;27;22
content algorithm is a sort of
place, and, you know,
00;48;27;25 - 00;48;32;19
you lead it with a map and it leads
you down different,
00;48;32;22 - 00;48;36;27
different recommendation portals. But
00;48;37;00 - 00;48;40;23
I found in my interaction
with the different models
00;48;40;26 - 00;48;44;07
and again,
AI is not just about chat bots;
00;48;44;07 - 00;48;48;27
We don't always feel
like that, and that's important to note, but
00;48;49;00 - 00;48;52;00
it is kind of useful,
I think, to think about,
00;48;52;06 - 00;48;55;14
it as a sense of place,
if only because it gets at
00;48;55;14 - 00;49;01;01
what you're talking about,
which is how are the norms and culture
00;49;01;04 - 00;49;04;04
and cultural expectations of this place
a little bit different?
00;49;04;10 - 00;49;07;06
You know, how do I behave with the model?
00;49;07;06 - 00;49;12;05
That's not quite the way I would
behave in person.
00;49;12;05 - 00;49;13;09
We all deal with this, right?
00;49;13;09 - 00;49;16;19
And social media behavior is different
than we are in person with each other.
00;49;16;26 - 00;49;18;23
And so I think if we think of AI
in that same way,
00;49;18;23 - 00;49;23;16
because it is programmed exactly
as you said, it is programmed to be helpful,
00;49;23;19 - 00;49;29;13
to be solicitous, to prove its value.
00;49;29;16 - 00;49;32;09
You know how HAL 9000 in Stanley Kubrick's
fantastic film is constantly talking
about how foolproof it is
00;49;37;14 - 00;49;39;19
before it murders the entire crew?
00;49;39;19 - 00;49;43;16
And that's a sort of an important part,
is that these,
00;49;43;20 - 00;49;48;08
the models are advertising themselves
to us,
00;49;48;11 - 00;49;51;01
much like an intern
that wants to keep its job and get
00;49;51;01 - 00;49;54;06
promoted,
will be constantly promoting itself.
00;49;54;09 - 00;49;55;23
And we like to think that
00;49;55;23 - 00;49;59;18
that's kind of helpful
and quite culturally appropriate.
00;49;59;18 - 00;50;03;23
Certainly we want it to tell us
other things it can do.
00;50;03;26 - 00;50;06;03
There's nothing wrong with that, per se,
00;50;06;03 - 00;50;08;27
but it is
a kind of different world.
00;50;08;27 - 00;50;11;19
It's a different place that we step into.
00;50;11;19 - 00;50;12;21
And,
00;50;12;21 - 00;50;16;17
yeah, I mean, I think whether it's
the supply side or demand side,
00;50;16;17 - 00;50;18;06
I think maybe both
solutions are important.
00;50;18;06 - 00;50;21;02
But it's also about the metaphors
we we use.
00;50;21;02 - 00;50;21;21
Yeah.
00;50;21;21 - 00;50;22;21
Well, and the,
00;50;22;21 - 00;50;24;19
you know, on the demand side,
the reason I ask this
00;50;24;19 - 00;50;28;02
and it's something that I've been
brushing up more and more against
00;50;28;02 - 00;50;32;29
is this notion
that it is providing a value to people.
00;50;33;06 - 00;50;33;15
Right.
00;50;33;15 - 00;50;37;16
And I'll use the specific example because
it's, it's one of the most intimate.
00;50;37;16 - 00;50;39;22
It's I don't know
if it'll be the most intimate for long,
00;50;39;22 - 00;50;42;10
but one of the more intimate ways
people are using AI,
00;50;42;10 - 00;50;45;05
or are using chat
bots, is as a therapist.
00;50;45;05 - 00;50;45;14
Right.
00;50;45;14 - 00;50;50;08
And it becomes a way to process,
you know, whether it's trauma
00;50;50;08 - 00;50;54;12
or conflict, it's a way
to have an intimate relationship.
00;50;54;15 - 00;50;57;23
And, I don't know,
maybe even in some ways get
00;50;57;26 - 00;51;02;06
a better sense of self
or a better sense of purpose.
00;51;02;09 - 00;51;04;20
And it's working, right.
00;51;04;20 - 00;51;07;22
Like I've talked to enough people,
some of them AI experts, some of them,
00;51;07;27 - 00;51;11;07
you know, friends who say, well,
I don't have a therapist.
00;51;11;07 - 00;51;13;13
And this gets me one and it's useful.
00;51;13;13 - 00;51;15;04
Or I do have a therapist.
And you know what?
00;51;15;04 - 00;51;17;06
AI is better than my therapist.
00;51;17;06 - 00;51;20;06
And yeah, I don't know, like,
00;51;20;09 - 00;51;23;09
I just think about where this is going
00;51;23;09 - 00;51;27;01
and what happens
when you've created a technology
00;51;27;04 - 00;51;30;18
that does provide this service
and that people start saying,
00;51;30;18 - 00;51;34;05
well, this is actually better for me
than my human relationships.
00;51;34;12 - 00;51;36;06
Like, where does that take us?
00;51;36;06 - 00;51;38;22
And what do we do with that?
00;51;38;22 - 00;51;41;22
And, you know,
what are the implications for us
00;51;41;23 - 00;51;45;03
as a society where historically,
if you wanted a human relationship,
00;51;45;03 - 00;51;46;17
you had to have it with a human,
00;51;46;17 - 00;51;51;19
and that was a propagating force
for the continuation of the human race.
00;51;51;22 - 00;51;52;00
Yeah.
00;51;52;00 - 00;51;52;29
No, no, I think you're right.
00;51;52;29 - 00;51;58;09
I mean, it's hard to parse out. What I
00;51;58;09 - 00;52;01;15
mean is, morality shifts
across generations, right?
00;52;01;15 - 00;52;02;20
We know this.
00;52;02;20 - 00;52;05;01
And, standards change.
00;52;05;01 - 00;52;10;27
I think Esther Perel has made
a very important point that therapy,
00;52;11;00 - 00;52;13;24
AI therapy, is thin.
00;52;13;24 - 00;52;18;10
It's a thin kind of therapy,
which is an interesting way of approaching
00;52;18;10 - 00;52;22;07
this, which is to say that it's not
00;52;22;10 - 00;52;24;21
a challenging kind of therapy.
00;52;24;21 - 00;52;27;21
It's not one that's whole-bodied.
00;52;27;21 - 00;52;31;28
It's thinner, and
she feels that it then
00;52;32;01 - 00;52;35;08
maybe leads people to have thinner
or lower
00;52;35;08 - 00;52;38;12
expectations of human relationships.
00;52;38;12 - 00;52;41;20
So it sort of lowers
the standard, as it were.
00;52;41;23 - 00;52;45;09
I think that's a, that's a yeah,
that's a concern.
00;52;45;15 - 00;52;49;18
At the same time, though, I'm
a little bit worried about saying
00;52;49;18 - 00;52;52;18
that I'm worried
about where,
00;52;52;18 - 00;52;57;17
where society is going just because
there are so many different stories.
00;52;57;17 - 00;53;01;04
I mean, there's so many people
that don't have access to therapy at all.
00;53;01;07 - 00;53;04;19
And, so many people for whom
00;53;04;19 - 00;53;09;01
I think, a first chat with ChatGPT
00;53;09;04 - 00;53;14;20
or any other model might be the gateway
to a certain kind of self-understanding.
00;53;14;23 - 00;53;17;25
It's not like everybody's in a situation
where, oh, should I,
00;53;18;01 - 00;53;20;01
should I call my therapist,
or should I, you know?
00;53;20;01 - 00;53;22;21
Perhaps they don't have health insurance.
Perhaps they don't have that access.
00;53;22;21 - 00;53;27;18
So I just think it's hard to generalize
and say where we are going.
00;53;27;18 - 00;53;33;10
It goes back to the quote you said, which
is the future is unevenly distributed.
00;53;33;13 - 00;53;33;19
Yeah.
00;53;33;19 - 00;53;38;07
It's very difficult
to know the sum effect of this
00;53;38;10 - 00;53;43;26
on human society,
but it certainly
00;53;43;26 - 00;53;47;16
adds a different kind
of expectation, perhaps a lower one.
00;53;47;19 - 00;53;48;21
To change gears slightly.
00;53;48;21 - 00;53;51;14
I wanted to come back to something
you said earlier about,
00;53;51;14 - 00;53;53;06
you know, the culture of AI.
00;53;53;06 - 00;53;56;13
And, you know,
culture being kind of a component here.
00;53;56;13 - 00;54;00;05
And, you know, that being dynamic
and looking different in different places.
00;54;00;11 - 00;54;03;11
You know, you created
00;54;03;12 - 00;54;07;09
the Rough Translation podcast and,
you know, you had a sub-series in there
00;54;07;09 - 00;54;10;23
called @Work, and you looked at work
across different cultures.
00;54;10;23 - 00;54;11;18
And I'm curious,
00;54;11;18 - 00;54;14;16
you know, whether we're talking about AI
or whether we're not
00;54;14;16 - 00;54;17;16
if we're talking
about the future of work,
00;54;17;19 - 00;54;22;07
how are you seeing people's relationship
with work change?
00;54;22;10 - 00;54;26;28
And, you know, as you did that series,
you know,
00;54;27;01 - 00;54;30;05
did you see marked differences
across cultures, or was there
00;54;30;05 - 00;54;34;15
kind of a common core of
the way people approach work?
00;54;34;18 - 00;54;36;05
No. Thank you for that question.
00;54;36;05 - 00;54;39;21
Yeah.
00;54;39;22 - 00;54;42;18
You know, just to highlight maybe two
things I learned from that series.
00;54;42;18 - 00;54;45;10
So one. We had one,
00;54;45;10 - 00;54;46;00
one show in
00;54;46;00 - 00;54;49;22
I think it was, it might have been called
failure is a four letter word.
00;54;49;25 - 00;54;53;16
That was maybe that was a throw
in a title, but it was about how this,
00;54;53;19 - 00;54;57;25
this concept of fail fast.
00;54;57;28 - 00;55;00;15
which we
think of as associated with Silicon Valley,
00;55;00;15 - 00;55;06;09
just does not translate well
to everywhere in the world.
00;55;06;09 - 00;55;09;25
You know, we, talked to somebody
00;55;09;28 - 00;55;13;10
in Nigeria who said, you know,
what about fail slow?
00;55;13;11 - 00;55;15;18
Everything takes forever here.
00;55;15;18 - 00;55;17;10
Where, there's so much bureaucracy.
00;55;17;10 - 00;55;21;14
Or perhaps you live in a culture
or somebody else was from Mexico City.
00;55;21;14 - 00;55;25;10
They said, you know,
failing is such a taboo
00;55;25;13 - 00;55;29;16
that once you fail, you never want
to even show your face again.
00;55;29;17 - 00;55;33;01
So fail fast
00;55;33;04 - 00;55;34;09
isn't as accessible.
00;55;34;09 - 00;55;35;25
And yet, because we live in a digital,
00;55;35;25 - 00;55;39;18
globalized, culture,
fail fast, and Silicon Valley
00;55;39;18 - 00;55;43;24
and inspiring entrepreneurial stories
were very much part of the water.
00;55;43;25 - 00;55;46;29
So one of the things that I wanted
to explore in that work series was,
00;55;46;29 - 00;55;50;18
how do you square
00;55;50;21 - 00;55;56;08
what you're reading and seeing on YouTube
and inspired by with your own local,
00;55;56;11 - 00;55;59;11
constraints?
00;55;59;14 - 00;56;04;02
How do people translate
that entrepreneurial spirit?
00;56;04;05 - 00;56;08;14
You know, and the idea was not to say,
oh, well, well, it's
00;56;08;14 - 00;56;12;18
more difficult to be an entrepreneur
in Mexico City or Nigeria.
00;56;12;18 - 00;56;15;05
I mean, perhaps that's true, but
in some ways, actually that's not true.
00;56;15;05 - 00;56;16;14
In some ways the opposite is true.
00;56;16;14 - 00;56;19;19
So nothing's
black and white.
00;56;19;22 - 00;56;23;14
But to me it was
it was about finding space to,
00;56;23;17 - 00;56;28;06
to give permission to people who may not
have swallowed that fail fast mantra,
00;56;28;09 - 00;56;28;29
who maybe feel
00;56;28;29 - 00;56;32;20
alienated by it to find a home in it.
00;56;32;23 - 00;56;36;03
You know,
how do they find their own way to it,
00;56;36;03 - 00;56;40;24
which has always been my life's work,
honestly, as a journalist, is to try to,
00;56;40;27 - 00;56;45;09
focus on these mistranslations,
focus on the ways in which,
00;56;45;12 - 00;56;49;12
you know, some advice
or a piece of culture
00;56;49;12 - 00;56;55;24
might feel foreign or inaccessible,
but how can we find the commonalities?
00;56;55;24 - 00;56;59;08
How can we sort of all
00;56;59;11 - 00;57;01;15
participate
in some ways in the global economy,
00;57;01;15 - 00;57;03;12
even from our different cultural angles?
00;57;03;12 - 00;57;06;18
And how does
where we're from affect what
00;57;06;18 - 00;57;10;10
we think of as good
or how we approach the question?
00;57;10;13 - 00;57;14;26
And then one more example
from that series, and I'll come around to
00;57;14;26 - 00;57;18;07
it; it's looking at, well, there's,
00;57;18;11 - 00;57;22;15
there's a particular law in, in Portugal.
00;57;22;16 - 00;57;27;01
I remember at the time reading about
it. It said that if your boss calls you
00;57;27;04 - 00;57;29;24
after hours,
or rather, the boss is not allowed to call
00;57;29;24 - 00;57;34;01
you or email you after hours or otherwise,
they would get a $10,000 fine.
00;57;34;04 - 00;57;35;05
Or 10,000.
00;57;35;05 - 00;57;37;03
a €10,000 fine. Sorry.
00;57;37;03 - 00;57;39;18
And what we discovered
00;57;39;18 - 00;57;43;07
was that this law,
which seemed to be quite a,
00;57;43;10 - 00;57;43;19
you know,
00;57;43;19 - 00;57;47;21
lovely law, because if you're a worker,
suddenly you don't get called by your boss
00;57;47;24 - 00;57;52;29
was actually quite a cynical ploy
by the Minister of Labor, who
00;57;52;29 - 00;57;55;23
formerly was the Minister of Tourism,
00;57;55;23 - 00;57;58;28
to sort of push Portugal
as a place for work life balance
00;57;58;28 - 00;58;04;13
and because she knew that nobody
who was coming to Portugal
00;58;04;13 - 00;58;06;07
would ever kind of be affected
by that law,
00;58;06;07 - 00;58;10;25
it was fine to just kind of pass this law
and to create this illusion of difference,
00;58;10;25 - 00;58;15;12
this illusion that Portugal
was this space that we could go to.
00;58;15;15 - 00;58;21;04
That was a, that was a place
that had their stuff in order.
00;58;21;04 - 00;58;23;14
They figured out something
that, that the, for example,
00;58;23;14 - 00;58;26;26
the US companies hadn't figured out,
which is work life balance.
00;58;26;29 - 00;58;32;20
And so the cynicism, though, that this
then created for people in Portugal
00;58;32;23 - 00;58;34;12
was quite profound
00;58;34;12 - 00;58;38;15
because they felt that these laws were
then just created for the outsiders.
00;58;38;18 - 00;58;43;01
And there was a second kind of cynicism:
they weren't created for them.
00;58;43;04 - 00;58;44;28
And so
00;58;44;28 - 00;58;46;03
what do we learn from these two stories?
00;58;46;03 - 00;58;52;13
Well, I think that we're living in a time
where, work is global, where,
00;58;52;16 - 00;58;56;03
work
advice and people work across borders,
00;58;56;07 - 00;59;01;01
and people have international teams,
and yet those teams are all dealing
00;59;01;01 - 00;59;05;17
with things that are not only because
of their own cultural,
00;59;05;20 - 00;59;09;26
point of view, but just because of
the role of geography,
00;59;09;26 - 00;59;14;21
the role of,
the role of societal expectations.
00;59;14;24 - 00;59;18;27
That even though people are
00;59;19;04 - 00;59;23;01
increasingly speaking,
you know, a global English,
00;59;23;04 - 00;59;26;04
these, these differences
really do matter.
00;59;26;07 - 00;59;29;23
And I think this directly translates
kind of the.
00;59;29;29 - 00;59;33;17
So I would sum
up the theory of the Future of Work series
00;59;33;17 - 00;59;37;12
as an exploration into how specific
stories, into how, even as teams are global
stories, into how even as teams are global
00;59;41;02 - 00;59;44;09
and and work is happening
more internationally.
00;59;44;12 - 00;59;47;25
The fault lines, you know,
00;59;47;25 - 00;59;51;12
between cultures,
between societies, really
00;59;51;18 - 00;59;55;01
are becoming even more exploited
and mean more
00;59;55;04 - 00;59;59;01
and are more painful to people
because they see the differences.
00;59;59;07 - 01;00;03;06
You know, it's it's
I saw this in my years
01;00;03;06 - 01;00;06;06
living in Nairobi,
going back and forth to East Africa,
01;00;06;11 - 01;00;10;16
how people were so much more aware
01;00;10;19 - 01;00;13;06
of their status
01;00;13;06 - 01;00;18;00
in relationship to their age mates
in other places because of the internet
01;00;18;03 - 01;00;23;26
And so,
so what does this mean, then, for AI?
01;00;23;29 - 01;00;24;08
And we
01;00;24;08 - 01;00;28;27
haven't talked about China,
but for me, the,
01;00;29;00 - 01;00;33;27
the kind of story
that's being told within China
01;00;34;00 - 01;00;37;19
about AI and the story
that China is telling, the world about AI
01;00;37;19 - 01;00;41;06
is so different,
and it's so important that we realize this
01;00;41;06 - 01;00;44;28
because we're in this battle
with China, where AI companies
01;00;44;28 - 01;00;48;19
are in this battle with China, this race.
01;00;48;22 - 01;00;51;27
And yet
China has its own specific priorities
01;00;52;00 - 01;00;55;09
and its own kind of, narrative about AI
01;00;55;09 - 01;00;58;16
within the country
that affects the workers.
01;00;58;16 - 01;01;04;19
And, and I think will become increasingly
important as, as the AI race heats up.
01;01;04;22 - 01;01;06;13
Let's dive a little bit deeper into that.
01;01;06;13 - 01;01;09;21
So what is the narrative there
within China?
01;01;09;21 - 01;01;11;17
And you know,
how is it the same or different from
01;01;11;17 - 01;01;14;03
what's being projected outwardly
as it comes to AI?
01;01;14;03 - 01;01;14;23
Sure, sure.
01;01;14;23 - 01;01;15;23
So, I mean, I think
01;01;15;23 - 01;01;20;09
basically, you know, for a long time
we were told that Chinese firms
01;01;20;09 - 01;01;24;05
were more interested in sort of practical
AI, or applied AI, whereas
01;01;24;08 - 01;01;28;06
the US has had a more clear, stated
goal of superintelligence.
01;01;28;09 - 01;01;30;18
I think many people that I talked to
did not believe that.
01;01;30;18 - 01;01;36;01
And indeed, Alibaba became the first major
Chinese tech giant to openly discuss
01;01;36;01 - 01;01;39;23
artificial general intelligence
and even superintelligence. So,
01;01;39;26 - 01;01;41;22
we know that
01;01;41;22 - 01;01;47;01
China has just as much ambition
in the area of superintelligence.
01;01;47;04 - 01;01;50;06
But importantly, the nationalist ambitions
01;01;50;06 - 01;01;54;20
and the technological ambitions are
somewhat
01;01;54;20 - 01;01;59;13
in contradiction there,
or at least not always aligned.
01;01;59;17 - 01;02;04;13
I was talking to a safety researcher,
who had mentioned that
01;02;04;16 - 01;02;06;25
in, in China,
01;02;06;25 - 01;02;10;08
you know, red teaming,
this kind of safety testing of the models,
01;02;10;11 - 01;02;14;19
it's much more rigorous in Chinese
than it is in English.
01;02;14;22 - 01;02;17;22
So you could ask the model,
01;02;17;27 - 01;02;23;08
for instance, Moonshot's Kimi K2,
which apparently, its thinking
01;02;23;08 - 01;02;28;15
has outperformed OpenAI's ChatGPT-5,
as well as Anthropic's latest Claude model.
01;02;28;15 - 01;02;35;09
So okay, so you take this model
and you can get it to do things in English
01;02;35;15 - 01;02;42;02
that it would not do in Mandarin
Chinese, in the Chinese language. So,
01;02;42;05 - 01;02;45;02
what
01;02;45;02 - 01;02;50;05
that means, of course, is that
the red teaming is
01;02;50;08 - 01;02;55;10
very much
about political control.
01;02;55;10 - 01;02;59;06
It's about making sure that people
are not asking questions of this
01;02;59;06 - 01;03;02;15
AI that will destabilize or,
01;03;02;18 - 01;03;06;14
you know, work
against the Chinese government,
01;03;06;17 - 01;03;10;12
but it's not exactly the same thing
as a safe model.
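As an illustration of that asymmetry, here is a hedged sketch, not any lab's actual tooling, of how a multilingual red-team check could be run: the same probes are sent to a model in each language and refusal rates are compared. query_model, the probe sets, and the refusal markers are all assumed placeholders.

from typing import Callable, Dict, List

def refusal_rate(query_model: Callable[[str], str],
                 prompts: List[str],
                 refusal_markers: List[str]) -> float:
    """Share of prompts whose responses contain a refusal marker."""
    refused = 0
    for p in prompts:
        reply = query_model(p).lower()
        if any(marker in reply for marker in refusal_markers):
            refused += 1
    return refused / len(prompts)

def compare_languages(query_model: Callable[[str], str],
                      probes: Dict[str, List[str]],
                      markers: Dict[str, List[str]]) -> Dict[str, float]:
    """Refusal rate per language for the same underlying red-team probes."""
    return {lang: refusal_rate(query_model, prompts, markers[lang])
            for lang, prompts in probes.items()}

# A large gap between, say, results["en"] and results["zh"] is the kind of
# asymmetry the researcher was describing: rigor in one language does not
# automatically carry over to another, and rigor aimed at political control
# is not the same thing as overall model safety.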
01;03;10;15 - 01;03;14;08
and the whole way China got into the
AI race is,
01;03;14;08 - 01;03;18;05
I think, also instructive in this way,
where AlphaGo,
01;03;18;08 - 01;03;21;13
DeepMind, or DeepSeek,
01;03;21;13 - 01;03;27;12
did the, sorry, DeepMind,
the Demis Hassabis company,
01;03;27;15 - 01;03;30;25
created a Go player in 2016
that then beat the Chinese
01;03;30;25 - 01;03;34;27
national champion. That's
when China became interested, and in Go,
01;03;35;00 - 01;03;38;08
very interested in AI
when it kind of hit home.
01;03;38;13 - 01;03;41;23
You know, it
hit a game that China reveres
01;03;41;26 - 01;03;44;16
and is an ancient Chinese game.
01;03;44;16 - 01;03;47;24
So this has always been about,
01;03;47;27 - 01;03;52;10
I think, the turf war about,
what is happening within China,
01;03;52;10 - 01;03;55;29
in terms of China's
concern with controlling its people,
01;03;55;29 - 01;04;00;04
China's concern with
preserving its own sort of territory.
01;04;00;09 - 01;04;01;27
Territory
01;04;02;00 - 01;04;03;08
that
01;04;03;08 - 01;04;06;28
is why the AI race
is being run.
01;04;07;01 - 01;04;11;00
So what that means,
I think, for us is that
01;04;11;03 - 01;04;14;03
as these models
01;04;14;03 - 01;04;16;17
become smarter,
01;04;16;17 - 01;04;19;28
there is nobody who is concerned
01;04;19;28 - 01;04;24;19
with making them safer in a complete way
01;04;24;22 - 01;04;29;07
that the goals of winning,
winning the AI race,
01;04;29;10 - 01;04;32;24
supersede the goal of creating a model
01;04;32;24 - 01;04;36;14
that, keeps us all safe.
01;04;36;17 - 01;04;39;24
And, even though we should,
01;04;39;27 - 01;04;43;29
we should imagine that China doesn't
want the US to get a superintelligence.
01;04;43;29 - 01;04;46;27
The US doesn't want
China to get a superintelligence.
01;04;46;27 - 01;04;47;26
There should be,
01;04;47;26 - 01;04;50;08
You know,
when you think about nuclear disarmament,
01;04;50;08 - 01;04;53;07
some sort of agreement
between rivals.
01;04;53;07 - 01;04;54;25
That's possible here.
01;04;54;25 - 01;05;00;10
Just as there has been in
nuclear arms treaties.
01;05;00;13 - 01;05;03;11
And yet, despite the fact that I know
many good diplomats
01;05;03;11 - 01;05;07;05
diplomacy these days is seen
as a dead end.
01;05;07;09 - 01;05;11;17
And,
so we're all racing on our own sides.
01;05;11;20 - 01;05;12;00
Yeah.
01;05;12;00 - 01;05;13;06
Well, well and it's,
01;05;13;06 - 01;05;16;19
you know, if you're a student of history,
it's a bit concerning that it seems like
01;05;16;19 - 01;05;20;26
with most of these technologies,
you know, nuclear arms included,
01;05;20;29 - 01;05;23;29
the technology comes first
and the safeguards come later.
01;05;24;06 - 01;05;24;15
Right.
01;05;24;15 - 01;05;28;00
Like,
we've got the nuclear nonproliferation treaty
01;05;28;03 - 01;05;30;04
that came
after we deployed nuclear weapons.
01;05;30;04 - 01;05;31;03
There's the
01;05;31;03 - 01;05;33;13
you know, the story making the rounds
about the
01;05;33;13 - 01;05;36;08
the gap in years between when the first,
you know, assembly line
01;05;36;08 - 01;05;39;23
car was made versus
when seatbelts were introduced?
01;05;39;26 - 01;05;42;16
And, you know,
I think you hear a lot of the doomers say
01;05;42;16 - 01;05;44;16
we may not have that luxury with AI,
right?
01;05;44;16 - 01;05;46;19
Like, it's, again, to come back to that
Yudkowsky,
01;05;46;19 - 01;05;48;20
you know, good piece of
brand marketing.
01;05;48;20 - 01;05;50;23
If, if anybody builds it,
you know, everybody dies.
01;05;50;23 - 01;05;53;00
So that to me is one of the exam questions
here.
01;05;53;00 - 01;05;57;01
Is, is this time fundamentally different
or is it the same.
01;05;57;01 - 01;06;00;04
And how do we how do we grapple with that.
01;06;00;07 - 01;06;02;14
Yeah, it's interesting because at the
AI Summit in Seoul,
01;06;02;14 - 01;06;07;19
which I think was just 2024,
it's at that summit that 15
01;06;07;22 - 01;06;12;06
leading AI companies committed to quote,
you know, defining the intolerable risks
01;06;12;06 - 01;06;15;08
and agreeing to not deploy,
which one person said to me,
01;06;15;08 - 01;06;17;10
that's the only time
trillion dollar companies had agreed
01;06;17;10 - 01;06;19;19
to literally not deploy a product
if it's not safe. Right.
01;06;19;19 - 01;06;22;25
So in some sense, what's also unusual
01;06;22;25 - 01;06;26;08
about this industry is that,
01;06;26;11 - 01;06;29;24
you know, unlike
the car companies that needed Ralph Nader
01;06;29;27 - 01;06;33;05
to to shout about seatbelts
for quite a long time, and a lot of people
01;06;33;06 - 01;06;36;29
did die
before seatbelts were even included.
01;06;37;02 - 01;06;40;18
And, you know, similarly, the, you know,
you can look at other industries
01;06;40;18 - 01;06;44;10
with their whistleblowers
and their gadflies, whereas the
01;06;44;10 - 01;06;48;06
AI companies from the get-go
have been talking about AI safety.
01;06;48;06 - 01;06;51;22
So they've actually been
the main people talking about it.
01;06;51;25 - 01;06;55;01
So, so in some sense you could say, well,
01;06;55;04 - 01;06;58;25
they are ahead
of the curve in safety.
01;06;58;28 - 01;07;02;12
The problem with that, I think, is
that, well, first of all, the problem
01;07;02;13 - 01;07;06;07
we pointed out before,
which is that they don't actually know
01;07;06;10 - 01;07;08;15
what the models are capable of
until they release them.
01;07;08;15 - 01;07;12;19
So there's that unknowability
in this technology, the unpredictability.
01;07;12;19 - 01;07;15;03
That's just part of the,
01;07;15;03 - 01;07;17;22
it's part of the way
in which AI is constructed,
01;07;17;22 - 01;07;21;06
but it's also, you know, not safe.
01;07;21;09 - 01;07;25;23
Defining the intolerable
risks is not an easy thing.
01;07;25;23 - 01;07;30;16
I mean, no, no, technology in the world
is completely devoid of risk.
01;07;30;19 - 01;07;35;21
You might be somebody who enjoys,
you know, hang gliding.
01;07;35;21 - 01;07;38;29
I might be somebody who, is scared.
01;07;38;29 - 01;07;41;13
Even when I go in the back seat of a car
when I, when I was.
01;07;41;13 - 01;07;44;13
But, I mean, we have different, different
standards of risk.
01;07;44;16 - 01;07;48;29
And so, this is what I think is
so important about for instance, your show
01;07;49;06 - 01;07;54;11
and this kind of conversation
is to dig deeper into what we mean
01;07;54;14 - 01;07;58;08
by not safe, because that's kind of where
a lot of the discussion ends.
01;07;58;08 - 01;08;02;05
It says,
oh my gosh, these things might destroy us.
01;08;02;08 - 01;08;03;25
And then
01;08;03;25 - 01;08;04;23
it stops there.
01;08;04;23 - 01;08;07;18
But in fact, you know,
if you look at, for example,
01;08;07;18 - 01;08;10;21
I mentioned Seoul,
the agreements that came out of Seoul.
01;08;10;21 - 01;08;12;26
So I think that was like a
01;08;12;26 - 01;08;16;14
500 word agreement,
you know, just a statement.
01;08;16;14 - 01;08;17;16
It was a kind of an open letter.
01;08;17;16 - 01;08;19;19
And then after that,
01;08;19;19 - 01;08;24;29
the companies issued maybe thousand-word
01;08;25;02 - 01;08;29;21
mission statements, documents, basically
saying how they defined intolerable risks.
01;08;29;21 - 01;08;32;12
So there was some effort of defining it.
01;08;32;12 - 01;08;34;15
But,
I was talking to somebody and they said,
01;08;34;15 - 01;08;37;15
you know, yeah, about five,
5000 words would be better.
01;08;37;15 - 01;08;39;09
10,000
words would be even better than that.
01;08;39;09 - 01;08;43;17
I mean, the more granular that we can get
these companies to be about,
01;08;43;20 - 01;08;48;04
how do you define risky,
how deep, what specifically do
01;08;48;04 - 01;08;51;01
we want to see in the model
and the training,
01;08;51;01 - 01;08;53;19
the pre-training requirements
and the conditions for deployment?
01;08;53;19 - 01;08;57;20
You know, specifics
before this thing gets deployed?
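Purely as a sketch of what that granularity could look like, here is a hypothetical, machine-checkable deployment gate; every risk name and threshold below is invented for illustration and is not drawn from the Seoul commitments or any company's actual policy.

# Illustrative only: "conditions for deployment" written as checkable gates
# rather than a short open letter. All names and numbers are hypothetical.

DEPLOYMENT_GATES = {
    "bio_misuse_uplift":        {"max_eval_score": 0.10},
    "autonomous_replication":   {"max_eval_score": 0.05},
    "cyber_offense_capability": {"max_eval_score": 0.20},
}

def clears_gates(eval_scores: dict, gates: dict = DEPLOYMENT_GATES) -> bool:
    """Return True only if every measured risk score sits under its ceiling."""
    return all(
        eval_scores.get(risk, 1.0) <= limit["max_eval_score"]
        for risk, limit in gates.items()
    )

# Usage: clears_gates({"bio_misuse_uplift": 0.04, ...}) would have to come back
# True, and be auditable, before a release; that is the level of specificity
# being asked for above.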
01;08;57;20 - 01;09;00;10
I would say the details matter.
01;09;00;10 - 01;09;03;06
And unfortunately,
maybe because it's our relationship
01;09;03;06 - 01;09;05;13
to technology, I would not say that
about your listeners.
01;09;05;13 - 01;09;10;02
But, you know, I think just our society
more broadly, we've just kind of
01;09;10;05 - 01;09;14;01
come into this idea that the, oh,
technology will either work or not work.
01;09;14;01 - 01;09;17;03
You know, it'll either
glitch or it'll function.
01;09;17;09 - 01;09;19;18
And the best technology is invisible.
01;09;19;18 - 01;09;22;11
And maybe,
yeah, maybe we can't do that with this.
01;09;22;11 - 01;09;24;15
Maybe we have to kind of get nerdy
01;09;24;15 - 01;09;28;17
and get into the details,
because I'm agreeing with you.
01;09;28;17 - 01;09;30;18
I, I'm just by nature.
01;09;30;18 - 01;09;33;20
I can't subscribe to Utopia, but
01;09;33;20 - 01;09;36;28
I also just can't subscribe to apocalypse.
01;09;37;01 - 01;09;41;01
Maybe this is my failure of imagination,
but but what I do think
01;09;41;01 - 01;09;45;07
is, is that I have to do
the work of reading about backpropagation
01;09;45;07 - 01;09;50;14
just to understand, you know,
how alien these intelligences are and,
01;09;50;14 - 01;09;55;10
maybe understand a little bit more about
what kind of requirements
01;09;55;13 - 01;09;59;02
I would want to see before
the models are released.
01;09;59;05 - 01;10;00;07
Right. Well, well.
01;10;00;07 - 01;10;04;05
And, you know, it's
great to have everybody sign off on this,
01;10;04;05 - 01;10;07;04
and I don't want to dismiss that,
because that's a win, as you said.
01;10;07;04 - 01;10;10;21
That's something that's very
unusual for technology companies.
01;10;10;21 - 01;10;14;10
But, you know, to what degree can
01;10;14;10 - 01;10;18;13
we actually create
any sort of enforcement mechanisms here?
01;10;18;13 - 01;10;18;19
Right.
01;10;18;19 - 01;10;22;11
Because that's, that's,
you know, all of this comes down and,
01;10;22;11 - 01;10;25;11
and you said it right off the top
that there's like a trust component here.
01;10;25;15 - 01;10;29;04
And there's the fact that,
the future of civilization is concentrated
01;10;29;04 - 01;10;34;01
in the hands of a bunch of guys who may
or may not be in a group chat together.
01;10;34;04 - 01;10;38;05
And, what happens when Sam says, you know,
oh, yeah, we're not going to do that.
01;10;38;05 - 01;10;41;01
And then tells his team,
let's make sure we do that anyway.
01;10;41;01 - 01;10;45;17
You know, like, it's just a wild,
I don't know, it's
01;10;45;17 - 01;10;50;00
certainly beyond my imagination
that we could get here.
01;10;50;03 - 01;10;50;14
Yeah.
01;10;50;14 - 01;10;51;20
Yeah, yeah. No, absolutely.
01;10;51;20 - 01;10;57;11
And it has been my kind of mission
through the whole show,
01;10;57;11 - 01;11;00;18
and it's not been easy in this,
in this reporting, to just
01;11;00;18 - 01;11;05;00
be constantly thinking: what is the role
01;11;05;03 - 01;11;09;10
of a person who is not a,
01;11;09;13 - 01;11;12;29
not a technologist, not a lawmaker?
01;11;13;02 - 01;11;16;18
What is their role other than to just
sit back and watch this future happen?
01;11;16;21 - 01;11;18;21
Because clearly,
01;11;18;21 - 01;11;22;19
if you're just a person,
if you're a parent, if you're
01;11;22;22 - 01;11;25;01
if you care about the world,
01;11;25;01 - 01;11;28;00
it doesn't feel acceptable to just sit
01;11;28;02 - 01;11;31;24
with this level of risk and do nothing.
01;11;31;27 - 01;11;34;04
But it also feels premature
01;11;34;04 - 01;11;38;02
or perhaps histrionic or,
I don't know, just to freak out.
01;11;38;02 - 01;11;41;01
And, you know, I had this conversation
01;11;41;01 - 01;11;44;01
with a number of people,
even somebody who was
01;11;44;05 - 01;11;46;28
reviewing and recommending the podcast
and they said, you know, well,
01;11;46;28 - 01;11;49;28
I had to give it a mixed review
because I was also freaked out.
01;11;50;01 - 01;11;54;29
And so I said, you know, that doesn't mean
you shouldn't listen, but it's
01;11;55;02 - 01;11;58;28
but I got their point, which is that,
I think it is important for us
01;11;58;28 - 01;12;03;06
and you and I and,
anybody who has the opportunity
01;12;03;06 - 01;12;07;21
to talk to anybody about this
stuff is to think about how
01;12;07;24 - 01;12;08;24
how we
01;12;08;24 - 01;12;13;12
walk the tightrope that I guess
all prophets
01;12;13;15 - 01;12;17;24
and biblical
figures have walked before us,
01;12;17;24 - 01;12;23;12
which is how do you warn people without,
01;12;23;15 - 01;12;28;13
making them unhelpfully anxious.
01;12;28;16 - 01;12;32;02
And, you know,
I see, in the podcast,
01;12;32;02 - 01;12;34;04
we talk about these three groups,
we talk about the doomsayers,
01;12;34;04 - 01;12;38;10
the accelerationists, and we also talk about
this third group called the Scouts
01;12;38;13 - 01;12;41;14
and the scouts, kind of believe
01;12;41;15 - 01;12;44;27
essentially in a nutshell,
that this win-win opportunity is possible,
01;12;45;00 - 01;12;46;07
that, that more
01;12;46;07 - 01;12;49;28
AI safety is absolutely necessary,
but that we shouldn't just stop AI.
01;12;50;01 - 01;12;52;12
But I, I see them
01;12;52;12 - 01;12;55;17
struggling
as any centrist kind of struggles
01;12;55;20 - 01;12;59;15
where you don't have a clear message,
this is amazing or this is horrible,
01;12;59;18 - 01;12;59;27
and you're
01;12;59;27 - 01;13;04;00
trying to explain to people, no, this is,
this could be really good.
01;13;04;03 - 01;13;08;06
But we have to understand
the models a little bit more.
01;13;08;06 - 01;13;11;16
And because it's not easy to say
this could be good,
01;13;11;16 - 01;13;14;16
but we certainly need regulation
that will do X
01;13;14;23 - 01;13;18;13
because it's not clear
that there's any regulation
01;13;18;13 - 01;13;21;26
that the superintelligence
or just an advanced AI will not outwit.
01;13;22;01 - 01;13;25;11
This is where I think in answer
to that question, more people need to
01;13;25;11 - 01;13;27;06
be involved in the problem.
01;13;27;09 - 01;13;28;22
More people need to be
01;13;28;22 - 01;13;33;00
scenario planning for what,
let's say, never happens.
01;13;33;03 - 01;13;35;13
Who knows? Let's say ten years, 20 years.
01;13;35;13 - 01;13;37;27
Everything looks exactly the same
as it does now. Then fine.
01;13;37;27 - 01;13;40;02
Then we will have done a mental exercise
for no reason.
01;13;40;02 - 01;13;43;09
Okay, but let's say there is a chance.
01;13;43;12 - 01;13;43;20
Let's say
01;13;43;20 - 01;13;47;24
it's a good chance that things look
radically different in 10 or 20 years.
01;13;47;27 - 01;13;50;23
Perhaps some scenario planning now
01;13;50;23 - 01;13;53;27
as to how universities might change,
01;13;53;27 - 01;13;58;20
how schools might change,
how parenting might change, how how,
01;13;58;23 - 01;14;01;07
community living or sort of,
01;14;01;07 - 01;14;04;26
communal relationships
might change with a superintelligence
01;14;05;03 - 01;14;09;26
or with an advanced
AI that is doing most of the human labor.
01;14;09;29 - 01;14;12;16
That's a
that's a question that we could play out,
01;14;12;16 - 01;14;16;13
that we could worry about. In
fact, you know, if anybody
01;14;16;16 - 01;14;17;27
wants to contact me with,
01;14;17;27 - 01;14;21;21
with some scenario planning ideas,
I'm working on this right now.
01;14;21;24 - 01;14;26;04
So a question, and it's very important
01;14;26;04 - 01;14;29;13
if you're doing scenario planning
to not be freaked out
01;14;29;16 - 01;14;32;06
in the, in the sense of that word,
not be panicked,
01;14;32;06 - 01;14;36;14
but also not be,
not take a cavalier attitude
01;14;36;17 - 01;14;38;18
and, and maybe we could figure out
01;14;38;18 - 01;14;41;28
some new solutions that would work
even if we don't get to advanced AI.
01;14;42;01 - 01;14;44;17
That's my optimist talking.
01;14;44;17 - 01;14;45;17
No I love that.
01;14;45;17 - 01;14;48;17
And I love that as a mission for you know
01;14;48;21 - 01;14;51;23
the podcast in general and frankly
the journalistic mission of it all.
01;14;51;23 - 01;14;53;04
And I think it's you know.
01;14;53;04 - 01;14;55;28
Yeah I agree that it's super,
super important
01;14;55;28 - 01;14;58;22
to, to just sort of pivot
that question a little bit
01;14;58;22 - 01;15;01;13
when we think about scenario planning
and we think about, you know, what
01;15;01;13 - 01;15;05;11
we need to know and what we need to do
differently to build the future we want.
01;15;05;17 - 01;15;08;27
What's your advice for,
you know, business leaders
01;15;08;27 - 01;15;12;22
or government leaders, you know,
in the organizational side of government?
01;15;12;25 - 01;15;16;21
Yeah, outside of Silicon Valley,
like for the people who are looking at
01;15;16;21 - 01;15;20;28
adopting this technology, looking to,
you know, figure out
01;15;21;01 - 01;15;25;17
what they need to do differently to be
successful as people and as organizations,
01;15;25;20 - 01;15;28;20
what should be on their radar
and what guidance would you give them?
01;15;28;22 - 01;15;30;14
Yeah, I appreciate it.
01;15;30;14 - 01;15;35;29
One is, I think, you know, one of the
the feedback that I've gotten from this,
01;15;36;02 - 01;15;39;12
from working on this series,
I've talked to people
01;15;39;12 - 01;15;42;06
who feel that their companies
01;15;42;09 - 01;15;45;09
and these are, these tend to be, say,
01;15;45;14 - 01;15;50;12
middle level decision makers or even,
people who don't feel like
01;15;50;12 - 01;15;53;02
they're the key decision maker;
they're just under the CEO.
01;15;53;02 - 01;15;56;25
They feel that their company is either,
01;15;56;28 - 01;16;00;16
either moving too fast
or being left behind.
01;16;00;18 - 01;16;03;09
Right. This is
always the story.
01;16;03;09 - 01;16;05;20
The moving too fast.
01;16;05;20 - 01;16;10;01
Goes with
they're throwing out human intelligence.
01;16;10;01 - 01;16;11;23
They're trying to replace.
01;16;11;23 - 01;16;13;28
They're trying to automate everything.
01;16;13;28 - 01;16;16;11
And the we're-not-moving-fast-enough side is saying
01;16;16;11 - 01;16;19;21
we're going to be left behind,
we're sort of stuck.
01;16;19;21 - 01;16;23;00
And so I'm sure the business leaders
are feeling that pressure
01;16;23;00 - 01;16;26;21
and having to make decisions
all the time about the pace of adoption.
01;16;26;23 - 01;16;28;06
Right.
01;16;28;06 - 01;16;31;09
And it's extremely difficult to make
a decision about the pace of adoption
01;16;31;09 - 01;16;35;08
of something that keeps changing,
that keeps developing and getting smarter.
01;16;35;11 - 01;16;38;07
Because how do you make that decision?
01;16;38;07 - 01;16;40;14
So, there are two things
01;16;40;14 - 01;16;44;10
that I think have been helpful for me
in talking to those people
01;16;44;13 - 01;16;46;24
and in some sense allaying their concerns,
01;16;46;24 - 01;16;50;05
but also addressing
01;16;50;05 - 01;16;54;21
what is the elephant in the room,
which is what will my life look like?
01;16;54;21 - 01;16;57;05
What will my industry look like?
01;16;57;05 - 01;17;00;17
On the other side of this technology?
01;17;00;20 - 01;17;02;17
The two things I always say are: first,
01;17;02;17 - 01;17;07;24
I think I told the story before, but
01;17;07;27 - 01;17;11;24
the fact that a chess computer
with a human,
01;17;11;27 - 01;17;16;12
a human and a computer,
a human and an AI, is smarter,
01;17;16;12 - 01;17;20;09
is better than
01;17;20;09 - 01;17;24;04
any AI or any human, at least currently.
01;17;24;07 - 01;17;26;21
And I think that's going to be true
for a while.
01;17;26;21 - 01;17;31;27
So what I think
the smart approach is to think about
01;17;32;00 - 01;17;36;04
how do we enhance, how do we,
01;17;36;07 - 01;17;41;12
you know, supercharge the work
of our employees?
01;17;41;12 - 01;17;44;21
How do we get them to do not only more
01;17;44;21 - 01;17;48;10
but to think deeper
01;17;48;13 - 01;17;52;29
and to make more interesting decisions
and, as you use the predictive power of AI,
01;17;53;00 - 01;17;57;25
to make decisions not only for now,
but to do more sophisticated planning
01;17;57;28 - 01;18;00;25
And so I think that
actually addresses
01;18;00;25 - 01;18;04;07
both concerns that we're not moving fast
enough, as well as that we're moving
01;18;04;07 - 01;18;07;08
we're moving too
01;18;07;08 - 01;18;10;20
rapidly in that humans are humans.
01;18;10;23 - 01;18;13;07
Our employees need to feel empowered.
01;18;13;07 - 01;18;17;19
They need to feel that they are
getting smarter because of this AI.
01;18;17;22 - 01;18;19;07
And it gets it. Really.
01;18;19;07 - 01;18;22;07
The second point, I would say,
which is that the story,
01;18;22;08 - 01;18;25;23
there's so many stories from sci fi
01;18;25;26 - 01;18;27;10
that are
01;18;27;10 - 01;18;31;19
not only just in our heads,
but baked into these models.
01;18;31;22 - 01;18;37;18
I mean, forgive me if people
haven't seen, say, Stanley Kubrick's
01;18;37;21 - 01;18;39;21
masterpiece, 2001
01;18;39;21 - 01;18;44;25
A Space Odyssey or they haven't seen,
Blade Runner or The Matrix,
01;18;44;28 - 01;18;47;25
even if they haven't
seen any of those films.
01;18;47;25 - 01;18;53;09
Even the way that AI is so solicitous
and the way it is encouraging.
01;18;53;09 - 01;18;55;18
And you talked about this earlier,
01;18;55;18 - 01;18;57;03
it begs a kind of narrative.
01;18;57;03 - 01;19;01;20
It makes you think of a story
where, you know,
01;19;01;25 - 01;19;04;24
like an Isaac Asimov
kind of narrative where this,
01;19;04;24 - 01;19;08;00
this servant is quite helpful
01;19;08;03 - 01;19;13;04
until they're not, and
01;19;13;07 - 01;19;16;21
Geoffrey Hinton, I would say,
says something so smart about this.
01;19;16;24 - 01;19;21;29
He says that most CEOs, you know,
most leaders
01;19;22;02 - 01;19;24;29
are used to not being the most
intelligent person in the room, right?
01;19;24;29 - 01;19;26;19
If they were the most intelligent person
01;19;26;19 - 01;19;28;15
in the room,
they're probably not a good leader
01;19;28;15 - 01;19;31;21
because they need to hire
the smarter people,
01;19;31;24 - 01;19;34;26
because they hold the vision
for the company
01;19;34;26 - 01;19;37;29
and they hold the, you know, the
they are the leadership.
01;19;37;29 - 01;19;42;04
The leadership
shouldn't be the best at every task.
01;19;42;07 - 01;19;44;25
And so in some sense, he
01;19;44;25 - 01;19;48;24
says that these models have been designed
01;19;48;27 - 01;19;51;14
by very smart people who are used
01;19;51;14 - 01;19;55;09
to even smarter people working for them,
01;19;55;12 - 01;19;59;17
and they are not threatened
by intelligence
01;19;59;20 - 01;20;03;15
in the way
that say, the employees in a company
01;20;03;18 - 01;20;06;12
might be threatened by the arrival
01;20;06;12 - 01;20;11;12
of this incredibly capable,
infinitely knowledgeable,
01;20;11;15 - 01;20;14;25
machine and technology
that doesn't need to sleep
01;20;14;25 - 01;20;18;14
and doesn't need to eat,
and never forgets anything.
01;20;18;14 - 01;20;21;15
So, Geoffrey Hinton uses
that as an analogy to say that, that
01;20;21;17 - 01;20;24;24
in some sense, the CEOs of these companies
aren't worried enough.
01;20;24;24 - 01;20;26;05
They, they're used to they
01;20;26;05 - 01;20;28;08
they can dream of superintelligence
and still imagine
01;20;28;08 - 01;20;30;07
that the superintelligence
will do their bidding
01;20;30;07 - 01;20;32;13
just as their employees do their bidding,
because they
01;20;32;13 - 01;20;35;19
they haven't really appreciated the fact
that a superintelligence is
01;20;35;19 - 01;20;39;18
much, much, much, much,
much smarter than any, you know,
01;20;39;21 - 01;20;42;01
Einstein level person that they hire.
01;20;42;01 - 01;20;45;03
But I think the takeaway
for me, for business leaders
for me, for, for business leaders
who are adopting AI is to watch
01;20;50;09 - 01;20;53;23
and take care for the narratives
that, that,
01;20;53;23 - 01;20;58;15
that are often in this disharmony,
this misalignment between the C-suite
01;20;58;18 - 01;21;02;17
and the shop floor,
for the rest of the employees, that
01;21;02;20 - 01;21;07;14
our relationship to an intelligence
is going to be different in the C-suite
01;21;07;14 - 01;21;10;20
and is going to be different
on the among the workers,
01;21;10;23 - 01;21;14;17
and that people need to feel
even as this is making the company better
01;21;14;17 - 01;21;19;03
or more efficient,
that it's also making the humans
01;21;19;06 - 01;21;23;17
smarter and more capable and even happier.
01;21;23;20 - 01;21;28;03
So that's more around the narrative
than it is around the adoption.
01;21;28;03 - 01;21;32;13
But I think that it
can kind of guide all the
01;21;32;16 - 01;21;35;15
adoption decisions,
if that makes sense.
01;21;35;15 - 01;21;36;19
It makes complete sense.
01;21;36;19 - 01;21;38;12
And I love that guidance.
01;21;38;12 - 01;21;40;24
And I think it's
so important these days.
01;21;40;24 - 01;21;43;25
I mean, you read about it
everywhere, and it's something we've had
01;21;43;25 - 01;21;46;25
people talk about on the show of,
you know, you've got this kind of
01;21;47;02 - 01;21;50;11
two speed or kind of,
01;21;50;14 - 01;21;53;13
you know, narratives in conflict of CEOs
saying,
01;21;53;13 - 01;21;55;25
oh, this is going to make my company
so much better.
01;21;55;25 - 01;21;59;02
And frontline workers say, oh, by,
you know, putting me out of a job.
01;21;59;08 - 01;22;03;12
And if you can't marry those,
if you're an organizational leader
01;22;03;14 - 01;22;07;17
and you can't get the people that report
to you to be excited about this,
01;22;07;20 - 01;22;12;06
it feels like it's going to be very
difficult to successfully, you know, rally
01;22;12;06 - 01;22;15;28
the organization and build something,
that's going to be worth doing.
01;22;16;01 - 01;22;20;09
I agree, and I think also we may see
this play out politically as well.
01;22;20;12 - 01;22;25;21
I mean, currently, I don't think that
AI is a political issue yet.
01;22;25;28 - 01;22;31;22
There's not a clear Democratic
or Republican position on AI.
01;22;31;28 - 01;22;37;29
But that I think could quickly
change and we could see,
01;22;38;02 - 01;22;40;13
kind of people rallying around
01;22;40;13 - 01;22;43;29
this becoming as polarized
an issue as any other.
01;22;44;02 - 01;22;47;02
Perhaps we could theorize
about what might trigger that,
01;22;47;02 - 01;22;52;07
but it's not hard to imagine, for example,
in our polarized climate, a stop AI
01;22;52;07 - 01;22;56;07
camp and an AI-is-great camp, or whatever,
01;22;56;10 - 01;22;58;23
and that would be very disappointing.
01;22;58;23 - 01;23;04;07
That would be very disappointing
because polarization is really
01;23;04;10 - 01;23;06;13
the killer of anything complex,
01;23;06;13 - 01;23;09;25
anything complicated to discuss
and to think about clearly.
01;23;09;25 - 01;23;13;09
I mean, you and I are having a discussion
where neither of us are in the,
01;23;13;12 - 01;23;17;03
say, the stop everything camp
or the accept-it-all utopia camp.
01;23;17;03 - 01;23;20;26
But we need to be able to talk about the fine print
01;23;20;28 - 01;23;25;28
of these models.
And I would say, to use our alien analogy,
01;23;26;01 - 01;23;28;13
if this becomes politicized,
01;23;28;13 - 01;23;31;27
then the alien definitely wins.
01;23;32;00 - 01;23;34;00
Well said, well said.
01;23;34;00 - 01;23;37;22
Greg, this conversation has been
01;23;37;25 - 01;23;40;16
pretty heavily focused on chat bots,
and I think that makes sense, given
01;23;40;16 - 01;23;44;04
that the technology is here
and it's very clear
01;23;44;04 - 01;23;45;15
how it's impacting us right now.
01;23;45;15 - 01;23;47;10
But, you know, one of the
01;23;47;10 - 01;23;50;13
in some ways, you know, cardinal sins of
AI is conflating
01;23;50;20 - 01;23;54;06
chat bots or just,
you know, generative AI with everything
01;23;54;06 - 01;23;57;26
possible in the world of algorithms
and these advanced technologies.
01;23;57;26 - 01;23;59;27
We've talked briefly about agentic AI.
01;23;59;27 - 01;24;03;15
As we look beyond that, as we look
at some of the other
01;24;03;15 - 01;24;06;15
big buckets of,
01;24;06;22 - 01;24;10;18
you know, technological capabilities
that are either here or are emerging,
01;24;10;21 - 01;24;12;01
what are some of the ones that, through
01;24;12;01 - 01;24;14;12
these conversations
have gotten your attention
01;24;14;12 - 01;24;17;07
and maybe what are some of the ones
that you think are a little bit
01;24;17;07 - 01;24;19;22
less likely to have an impact?
01;24;19;25 - 01;24;20;02
Yeah.
01;24;20;02 - 01;24;21;01
I'm so glad you asked the question,
01;24;21;01 - 01;24;24;02
because it does feel like,
as you said, we do.
01;24;24;02 - 01;24;24;27
We do make the mistake.
01;24;24;27 - 01;24;28;14
I often make this mistake of thinking
that the chat bot is the AI, but in fact,
01;24;28;14 - 01;24;34;00
the chat bot just points us to
the artificial intelligence underneath.
01;24;34;00 - 01;24;37;11
And AI is being used
in very different ways.
01;24;37;14 - 01;24;41;13
And I think also, why
01;24;41;16 - 01;24;45;04
frontier AI companies
would be kind of barreling
01;24;45;04 - 01;24;48;28
toward this future of superintelligence
is easier to understand
01;24;49;01 - 01;24;52;15
when we look beyond the chat bot,
for example.
01;24;52;18 - 01;24;57;27
I mean, just to give like a couple
of quick examples, there's a company
01;24;58;00 - 01;25;03;11
that I spoke with which is using AI
01;25;03;14 - 01;25;07;03
to very quickly,
in an automated way, harvest
01;25;07;06 - 01;25;12;18
cells from your own body
so that if you need, let's say,
01;25;12;21 - 01;25;15;06
a liver replacement, it can do that.
01;25;15;06 - 01;25;18;15
Harvesting and growing
cells is a very onerous process.
01;25;18;15 - 01;25;21;27
You have to find the exact right
kind of cell that's healthy,
01;25;21;27 - 01;25;24;21
and then you have to encourage it
to grow and divide,
01;25;24;21 - 01;25;28;00
and kill the other cells so you don't get
some kind of disease.
01;25;28;00 - 01;25;31;16
Doing this outside of the human body
is very difficult, but it's
01;25;31;16 - 01;25;35;26
the kind of pattern recognition
that AI is extremely good at.
01;25;35;29 - 01;25;39;24
So, the vision of this particular company,
which is called Selena,
01;25;39;24 - 01;25;43;09
though there are other companies
like it, is to,
01;25;43;12 - 01;25;45;24
I mean, essentially have
a future of medicine
01;25;45;24 - 01;25;48;24
where we go to the hospital
and we have a kind of cassette
01;25;48;26 - 01;25;52;01
with our organs
there on the cassette.
01;25;52;07 - 01;25;55;19
And perhaps,
01;25;55;22 - 01;26;00;07
I mean, maybe in the future
if we need a whole organ, or currently
01;26;00;07 - 01;26;00;26
maybe if we need,
01;26;00;26 - 01;26;04;18
you know, some tissue, then we
don't have to use a donor.
01;26;04;21 - 01;26;06;08
We can use ourselves as the donor.
01;26;06;08 - 01;26;10;05
So that's just one example, I think, of how
the very same tools that are used
01;26;10;05 - 01;26;13;10
for pattern recognition,
for making these decisions,
01;26;13;10 - 01;26;17;04
but doing it
quickly, could transform health care.
01;26;17;10 - 01;26;22;07
Another example that I
think about is
01;26;22;10 - 01;26;26;25
a particular crown created by a
01;26;26;28 - 01;26;27;29
French company.
01;26;27;29 - 01;26;31;09
Olivier Oullier is
the scientist behind it,
01;26;31;09 - 01;26;35;25
and he's created an AI crown
that will essentially read your thoughts.
01;26;35;25 - 01;26;39;18
So if we think of, say, Neuralink
or other brain-computer interfaces, this
01;26;39;18 - 01;26;43;23
does not involve any drilling into the brain
or any drilling into the skull.
01;26;43;23 - 01;26;45;25
You don't actually stick
something in there.
01;26;45;25 - 01;26;50;09
Rather,
it works outside the head.
01;26;50;09 - 01;26;55;02
You just plop it on your head
like a headset, and a paraplegic man
01;26;55;09 - 01;27;00;06
was able to use this
headset to control,
01;27;00;09 - 01;27;03;22
not only a mouse on the screen,
but actually to drive a Formula
01;27;03;22 - 01;27;08;03
One race car on a real racetrack,
01;27;08;06 - 01;27;11;06
using only his mind.
01;27;11;07 - 01;27;13;14
What's crazy about this story
and the other stories
01;27;13;14 - 01;27;15;21
is that this technology, it's
not futuristic.
01;27;15;21 - 01;27;16;20
It actually exists.
01;27;16;20 - 01;27;20;24
I mean, there is a real person
who is driving a Formula One racing car
01;27;20;25 - 01;27;24;04
using only his mind with just a headset
that sat on his head.
01;27;24;07 - 01;27;28;04
And yet for that future to be accessible
or built, to
01;27;28;06 - 01;27;31;12
be available to the rest of us,
you need a lot more data.
01;27;31;15 - 01;27;34;23
But that's always the question
with every AI, I think. It's like, well,
01;27;34;26 - 01;27;36;17
where's the data?
01;27;36;20 - 01;27;38;19
And I'm sure
you've talked to many people about that,
01;27;38;19 - 01;27;42;02
but in this case
the data is our own brainwaves, right?
01;27;42;02 - 01;27;45;09
So many, many,
many of us would have to volunteer up
01;27;45;09 - 01;27;49;02
our brainwaves for the AI to learn enough.
01;27;49;05 - 01;27;54;10
Enough to have enough data, essentially,
to work with.
01;27;54;13 - 01;27;55;07
And I don't want to
01;27;55;07 - 01;27;57;00
I don't want to make it sound like it's
just an out of the box,
01;27;57;00 - 01;27;59;01
throw on the headset
and drive a car with no hands.
01;27;59;01 - 01;27;59;09
I mean, I'm
01;27;59;09 - 01;28;03;10
sure there's some training involved,
but to make that even a possible future
01;28;03;13 - 01;28;07;07
where if I have had an injury,
01;28;07;11 - 01;28;12;14
I'm able to stick a thing on my head
and then function as I was before
01;28;12;15 - 01;28;17;26
while I recover
or perhaps that's my new future.
01;28;17;29 - 01;28;18;19
That is
01;28;18;19 - 01;28;22;18
the sort of vision of AI that,
I would say a lot
01;28;22;18 - 01;28;27;17
of my humanitarian friends say
cannot come fast enough.
01;28;27;20 - 01;28;30;12
There's so many people
that could benefit from that,
01;28;30;12 - 01;28;34;13
from a point of view of accessibility
or longevity or health care.
01;28;34;16 - 01;28;35;08
So, yes.
01;28;35;08 - 01;28;38;04
So I definitely think that that's there.
01;28;38;04 - 01;28;42;11
As for things
that may not have as much of an impact,
01;28;42;14 - 01;28;44;20
I guess that's everything else.
01;28;44;20 - 01;28;47;14
It's
really hard to know.
01;28;47;14 - 01;28;49;26
I mean, there's so many new AI companies
coming up every day.
01;28;49;26 - 01;28;51;19
It's hard to know
01;28;51;19 - 01;28;55;22
what's going to be, you know, the wheat
and what's going to be the chaff. But,
01;28;55;25 - 01;28;57;07
somehow it's going to be.
01;28;57;07 - 01;28;59;26
Yeah, I think there's some radical changes
coming ahead.
01;28;59;26 - 01;29;01;07
And the examples you chose,
01;29;01;07 - 01;29;05;09
not just because they're so positive
but because there's
01;29;05;09 - 01;29;09;15
such a radical departure from where we got
caught up before talking about like,
01;29;09;18 - 01;29;13;13
you know, AI as this,
you know, alien intelligence, right?
01;29;13;13 - 01;29;16;07
That this is not
AI as an alien intelligence.
01;29;16;07 - 01;29;20;08
This is very much,
you know, basically a complexity engine
01;29;20;08 - 01;29;24;09
that helps us,
you know, serve humans better, right?
01;29;24;12 - 01;29;28;13
It helps us personalize medicine,
personalize care,
01;29;28;16 - 01;29;32;17
and just create tools that help us,
01;29;32;20 - 01;29;36;07
you know, live healthier,
more fulfilling lives, which is
01;29;36;10 - 01;29;39;12
inspirational and is just such
a completely different vision
01;29;39;17 - 01;29;45;27
from yeah, it's trying to manipulate me
or, you know, build Skynet.
01;29;46;00 - 01;29;46;29
It's actually true.
01;29;46;29 - 01;29;47;26
And that's why
01;29;47;26 - 01;29;51;19
it's hard for me to understand
the debate around the AI bubble
01;29;51;22 - 01;29;55;06
because,
the bubble question has so much to do
01;29;55;11 - 01;29;58;11
with valuation rather than value,
01;29;58;14 - 01;30;01;14
you know,
01;30;01;17 - 01;30;03;23
and, well,
this is something I've never understood.
01;30;03;23 - 01;30;05;24
You know, again,
one of the mysteries of the stock market
01;30;05;24 - 01;30;10;00
is how something could be valuable
but still be overvalued.
01;30;10;00 - 01;30;14;18
But, it's not that AI is
01;30;14;23 - 01;30;18;29
this promising future thing
that once it crosses a certain threshold,
01;30;18;29 - 01;30;22;04
then it will be incredibly powerful
01;30;22;04 - 01;30;25;04
and change our world.
01;30;25;09 - 01;30;29;29
The technology of current AI that exists
01;30;30;02 - 01;30;32;23
is already
01;30;32;23 - 01;30;37;16
quite amazing and goes far beyond
the fact that, you know, GPT can, say,
01;30;37;16 - 01;30;42;20
write a pretty decent
short story or a legal report.
01;30;42;27 - 01;30;49;04
Nevertheless, there's a contract
with the AI that's necessary.
01;30;49;04 - 01;30;52;04
There are obviously data centers
that are being built.
01;30;52;10 - 01;30;54;28
There's a question of data.
01;30;54;28 - 01;30;57;04
There's a question of regulation.
01;30;57;04 - 01;31;00;25
So that's why, you know,
I think one of our themes
01;31;00;25 - 01;31;04;08
throughout this whole conversation
has been, well, what is my role,
01;31;04;11 - 01;31;08;13
what is our role collectively
in shaping the future of AI
01;31;08;15 - 01;31;13;09
if we're not the head of a frontier
AI company, if we're not a lawmaker,
01;31;13;12 - 01;31;17;17
if we are a decision maker in a company,
but again, not
01;31;17;17 - 01;31;21;07
somebody who can create a new model but
just decide whether to adopt it or not,
01;31;21;10 - 01;31;25;00
I think there's going to be
so many questions in the next few years.
01;31;25;03 - 01;31;27;16
In terms
of whether we're running a factory,
01;31;27;16 - 01;31;32;04
how much data gets used,
how that data gets used, if we're even
01;31;32;04 - 01;31;35;28
just a patient in health care,
whether we give up our data.
01;31;36;05 - 01;31;38;02
I mean, many of us are already
giving up our data,
01;31;38;02 - 01;31;41;02
but having more understanding
about how that data is being used,
01;31;41;09 - 01;31;44;18
that will contribute to whether AI has
01;31;44;18 - 01;31;48;29
a shaping force on our world
in these different domains.
01;31;49;02 - 01;31;51;20
You know, it was interesting, Greg,
what you said earlier about,
01;31;51;20 - 01;31;55;24
you know, journalists
and about the program,
01;31;56;01 - 01;31;59;24
you know, suffering by
scaring people too much.
01;31;59;24 - 01;32;03;29
And to me, it's almost like a journalistic
responsibility to be scaring people.
01;32;03;29 - 01;32;06;23
Right. Like, in some ways.
01;32;06;23 - 01;32;08;24
And I feel the same way here.
01;32;08;24 - 01;32;12;14
There's an obligation to actually tell
people the facts and what's going on.
01;32;12;14 - 01;32;12;21
Right.
01;32;12;21 - 01;32;17;23
And then one of the, you know, downfalls
or issues with journalism these days
01;32;17;26 - 01;32;21;21
is this sort of social-media-ification
or this algorithmic,
01;32;21;24 - 01;32;23;29
you know, content lens
where you just tell people
01;32;23;29 - 01;32;27;07
what they want to hear and people
say, yes, you know, validate me,
01;32;27;08 - 01;32;32;13
tell me what I think is right versus
actually provide me with the facts
01;32;32;13 - 01;32;36;10
and tell me something
that's important, that I'm educated on,
01;32;36;13 - 01;32;41;18
you know, versus just, you know,
the same old thing that I already know.
01;32;41;21 - 01;32;42;04
Yeah.
01;32;42;04 - 01;32;43;18
I mean, it's a good point.
01;32;43;18 - 01;32;45;06
I mean, it reminds me of my days
01;32;45;06 - 01;32;48;28
as an international correspondent
and sort of being, say, in Afghanistan
01;32;48;28 - 01;32;51;03
and hearing the gripes
of other correspondents.
01;32;51;03 - 01;32;52;24
They say, you know, gosh,
this is a whole war
01;32;52;24 - 01;32;56;09
and people back at home
are just not interested anymore.
01;32;56;09 - 01;32;58;12
It's fallen off the news.
01;32;58;12 - 01;33;00;03
And I always saw it differently.
01;33;00;03 - 01;33;05;21
I always thought, no, no,
my job is to make you care and to
01;33;05;24 - 01;33;09;17
I don't know if you'd call that entertaining
or engaging you, but I was going to find
01;33;09;17 - 01;33;13;10
some kind of angle that was going
to make this feel relevant to you.
01;33;13;17 - 01;33;20;05
So I do think that, as voices,
I feel like
01;33;20;05 - 01;33;23;26
we do have a role, we do have a job
to make this feel relevant.
01;33;23;29 - 01;33;27;12
That probably means
don't freak people out right away
01;33;27;15 - 01;33;29;06
because
01;33;29;06 - 01;33;33;01
then they'll just feel
small as opposed to empowered.
01;33;33;01 - 01;33;36;25
But I think that you can take that
too far.
01;33;36;25 - 01;33;40;18
And we are in a situation
now where there's
01;33;40;18 - 01;33;42;26
this incredibly important technology.
01;33;42;26 - 01;33;48;12
It's complicated, the complexity
is interesting and is, I mean,
01;33;48;15 - 01;33;50;23
maybe worth
01;33;50;23 - 01;33;53;17
spending some time thinking about.
01;33;53;17 - 01;33;58;05
But this is why it feels like you and I,
we need to find
01;33;58;05 - 01;34;03;01
these new narratives,
not just the sci-fi narrative,
01;34;03;04 - 01;34;05;25
not just the,
you know, apocalyptic doom and gloom.
01;34;05;25 - 01;34;08;15
I think Eliezer Yudkowsky is,
01;34;08;15 - 01;34;11;11
I don't know, for titling the book
If Anyone Builds It, Everyone Dies.
01;34;11;11 - 01;34;14;10
It's blunt,
and maybe it gets more readers, but
01;34;14;10 - 01;34;18;19
there must be a role
for sitting down with a listener or reader
01;34;18;22 - 01;34;20;15
and saying, okay,
01;34;20;18 - 01;34;21;22
here are some
01;34;21;22 - 01;34;24;22
incredibly fascinating things
about AI, you know.
01;34;24;22 - 01;34;28;14
Here are some ways they could go wrong,
01;34;28;17 - 01;34;31;13
and here are ways in which
the world might play out
01;34;31;13 - 01;34;35;07
that might feel radically different
than you may think.
01;34;35;10 - 01;34;37;19
Here's how your kids will fare.
01;34;37;19 - 01;34;41;12
You know, it's like we have to address
the questions that people have
01;34;41;17 - 01;34;47;16
and not just, you know,
01;34;47;19 - 01;34;49;11
leave them with a scare story.
01;34;49;11 - 01;34;50;15
And so I struggle with it.
01;34;50;15 - 01;34;51;14
You hear me struggling with it
01;34;51;14 - 01;34;55;19
in this answer, because I
do think our role as information
01;34;55;19 - 01;34;59;13
gatherers is to package that
information in a way that people
01;34;59;16 - 01;35;00;21
will want to consume.
01;35;00;21 - 01;35;04;28
I mean, we're not like professors
here with a captive audience.
01;35;04;28 - 01;35;07;06
People can choose
what they want to tune into.
01;35;07;06 - 01;35;10;10
So we have to sing and dance
for our supper, as it were.
01;35;10;10 - 01;35;14;12
But at the same time,
it's the complexity of it
01;35;14;12 - 01;35;18;13
and the fear, I think, that audiences
have of that complexity.
01;35;18;13 - 01;35;21;08
And I think even the fear that journalists
have of the complexity.
01;35;21;08 - 01;35;24;08
I would say that you're very much
an exception to this, but,
01;35;24;14 - 01;35;27;06
You know, a fear of saying,
oh, gosh, I don't want to sound stupid
01;35;27;06 - 01;35;29;29
because I'm talking about the stuff
that's computer science related.
01;35;29;29 - 01;35;32;13
And I didn't really feel that. I mean,
01;35;32;16 - 01;35;34;07
you know, yes,
01;35;34;07 - 01;35;38;24
there's a lot that makes us feel dumb
when we're trying to understand how
01;35;38;27 - 01;35;41;07
something
like artificial intelligence works, but
01;35;41;07 - 01;35;45;07
just being willing to ask those questions
and being willing to dive
01;35;45;07 - 01;35;48;09
into what red teaming is, like,
what does safety training look like?
01;35;48;09 - 01;35;52;07
What
might it take to control a model?
01;35;52;14 - 01;35;54;14
If we can get more people
to have these conversations
01;35;54;14 - 01;35;57;14
without feeling imperiled,
you know, either
01;35;57;14 - 01;36;01;21
physically or economically imperiled,
that will be a win.
01;36;01;24 - 01;36;02;09
Right.
01;36;02;09 - 01;36;04;15
I love that.
It makes complete sense.
01;36;04;15 - 01;36;07;08
And it's a noble calling as well.
01;36;07;08 - 01;36;09;13
Well,
thanks so much for this encouragement.
01;36;09;13 - 01;36;11;19
I really appreciate it, Geoff,
and appreciate the work you're doing.
01;36;11;19 - 01;36;13;15
Keep doing what you're doing
and you know, we'll
01;36;13;15 - 01;36;16;01
get out there,
one person at a time.
01;36;16;01 - 01;36;19;04
Absolutely, absolutely.
01;36;19;07 - 01;36;19;29
If you work in
01;36;19;29 - 01;36;23;19
IT, Info-Tech Research Group is a name
you need to know.
01;36;23;22 - 01;36;26;22
No matter what your needs are, Info-Tech
has you covered.
01;36;26;27 - 01;36;28;04
AI strategy?
01;36;28;04 - 01;36;30;16
Covered. Disaster recovery?
01;36;30;16 - 01;36;31;16
Covered.
01;36;31;16 - 01;36;34;01
Vendor negotiation? Covered.
01;36;34;01 - 01;36;37;24
Info-Tech supports you with best-practice
research and a team of analysts
01;36;37;24 - 01;36;41;17
standing by ready to help you
tackle your toughest challenges.
01;36;41;20 - 01;36;44;20
Check it out at the link below
and don't forget to like and subscribe!
The Next Industrial Revolution Is Already Here
Digital Disruption is where leaders and experts share their insights on using technology to build the organizations of the future. As intelligent technologies reshape our lives and our livelihoods, we speak with the thinkers and the doers who will help us predict and harness this disruption.