Our Guest Cassie Kozyrkov Discusses
Why AI Is Failing: Ex-Google Chief Cassie Kozyrkov Debunks "AI-First"
Is “AI-first” the future of business or just another tech buzzword?
On this episode of Digital Disruption, we’re joined by former Google Chief Decision Scientist and CEO of Kozyr, Cassie Kozyrkov.
Cassie is best known for founding the field of Decision Intelligence and serving as Google's first Chief Decision Scientist, where she helped lead the company's AI-first transformation. A sought-after advisor and keynote speaker, Cassie has guided organizations including Gucci, NASA, Meta, Spotify, Salesforce, and GSK on AI strategy. She combines deep technical expertise with theater-trained charisma to make complex concepts engaging and actionable for executive and general audiences alike, delighting audiences in over 40 countries across all seven continents, including stages at the UN, WEF, Web Summit, and SXSW.
Cassie sits down with Geoff to unpack the hidden cost of the “AI-first” hype, the dangers of AI infrastructure debt, and why real AI readiness starts with people, not technology. She reveals how leaders can architect their organizations for innovation, build human-in-the-loop systems, and create cultures that embrace experimentation instead of fearing mistakes.
Cassie exposes why 95% of organizations fail to achieve measurable ROI from AI and how leaders can finally bridge the AI value gap. This conversation dives into why AI success isn’t about tools, it’s about leadership, measurement, and mindset.
Most organizations chasing "AI transformation" see no measurable ROI, not because the technology fails, but because leaders are still measuring value the old way. Generative AI success is hard to quantify when there isn't a single "right answer," yet many businesses keep trying to apply outdated metrics to a completely new paradigm.
00;00;00;24 - 00;00;22;24
Geoff Nielson
Hey everyone! I'm super excited to be sitting down with Cassie Kozyrkov, former Chief Decision Scientist at Google and a game-changing founder, AI advisor, and keynote speaker. What I love about Cassie is not just that she's incredibly smart and thought-provoking, but how fearless she is at calling out bullshit and just how good she is at parsing what's useful from all the noise and hype.
00;00;22;26 - 00;00;43;13
Geoff Nielson
She believes that companies talking about going AI-first are getting it fundamentally wrong, and we need to completely change the conversation about what AI is capable of. I want to ask her what AI can really do for us and our jobs, and what the real future of work is. Let's find out.
00;00;43;16 - 00;01;05;13
Geoff Nielson
I'm here with Cassie Kozyrkov, former Chief Decision Scientist at Google. Really excited to connect today, Cassie. And maybe to kick things off, you know, you've talked recently about a question that I think is kind of on everybody's mind, which is what you've called the generative AI value gap. What does that mean? And what are you seeing in that space?
00;01;05;15 - 00;01;22;01
Cassie Kozyrkov
Yeah. So I'm sure that anyone who's been watching the various surveys and numbers coming out about generative AI and generative AI deployments will have found that 95% number. You know the one, right?
00;01;22;03 - 00;01;25;09
Geoff Nielson
Sure do. The 95% not getting any value from AI.
00;01;25;09 - 00;01;56;07
Cassie Kozyrkov
Exactly, exactly. Except the phrasing I like in this phrasing is "measurable ROI." Right? So some part of what's going on is that companies are really getting no ROI, and there are fantastically foolish ways to do that: you just try to keep up with the Joneses, you have no idea what you want it for, you kind of send your people off to go sprinkle the magical AI on top of your business, and you hope better things happen.
00;01;56;07 - 00;02;23;17
Cassie Kozyrkov
And then you join the no-ROI bucket. But there is also some number of those 95% that are going to be no measurable ROI. And this breaks up into two pieces. One is that generative AI is fundamentally just more difficult to measure, and I want to double-click on that in a moment, because I know that's what you're asking me about.
00;02;23;19 - 00;02;52;25
Cassie Kozyrkov
But the other piece is that sometimes what we're actually getting is the ability to innovate next time. And I think that not enough companies appreciate that innovation demands waste. If you are doing something that you've done before and you know exactly how it's going to go, then of course you can have these KPIs you know you're going to hit for sure, because you've already done it.
00;02;52;28 - 00;03;20;00
Cassie Kozyrkov
Now you're trying a completely new technology, a completely new use case. You have no idea if it's going to work. You have to be willing to accept that it might be time and effort, you know, burned at the altar of innovation, so to speak. Right? That is just the nature of innovation. And I've had companies come and consult with me who really wanted to be innovators.
00;03;20;00 - 00;03;44;26
Cassie Kozyrkov
But when I ask them: so, what is your actual tolerance for getting no results back after you invest in innovation? How much bandwidth do you give your people beyond the very specific work product that you expect from them? Do you give them time and space to chase an idea? And quite often the answer is no, no we don't.
00;03;44;26 - 00;04;11;17
Cassie Kozyrkov
We have no tolerance for innovation. We have absolutely no slack for our people. And we need every project to be predictable. Okay, if you're dealing with that, you're just not going to be an innovator. Or you're going to be an accidental innovator, because you somehow accidentally hired somebody who's going to essentially work two jobs, the one you gave them and then, you know, the other one, where they'll spend nights in the office and maybe they'll come up with something. But there won't be a lot of these folks.
00;04;11;17 - 00;04;36;18
Cassie Kozyrkov
And, yeah, that's not a great lottery ticket. So if you don't have that tolerance for waste when you're trying to innovate, you just have to be a follower. Just wait for everybody else to share how it's done and follow them. But there's another piece: when you actually do this wasteful innovating, you learn how to innovate.
00;04;36;20 - 00;05;11;08
Cassie Kozyrkov
And we have Solow's paradox coming up again in AI. Solow's paradox came up with computers and productivity: you could see the computers everywhere except in the productivity numbers. Right? That was the paradox. So how is it that we can all feel so much more productive? How is it that we can have individual numbers like 90% of software engineers using generative AI to help them code?
00;05;11;10 - 00;05;38;00
Cassie Kozyrkov
Other numbers, like individual memberships: whether it's 90 or 70 or some big number of people personally use these tools in the surveyed population. For this 95% one? No, a different study, I think. Anyway, workers personally use the tools, and yet a tiny fraction of the employers, the companies, actually formally give access to these tools.
00;05;38;00 - 00;05;56;16
Cassie Kozyrkov
So you've got this shadow usage, I think, going on where people are using AI, but it's not sanctioned, it's not handed to them by their employers. Right? You've got this big disconnect. People really like it. They seem to be productive. I'm much more productive with it personally. And yet we don't see it in the ROI. We don't see it in the productivity numbers.
00;05;56;16 - 00;06;18;20
Cassie Kozyrkov
Sometimes what we're doing is just laying tracks to be able to innovate next time, to get the next project right. Sometimes this is the first pancake. In some sense, when you begin a batch of pancakes, the first one is an investment, and there is a return on that investment. It's just not measured the same way as the return on investment of your other pancakes.
00;06;18;25 - 00;06;38;06
Cassie Kozyrkov
So I just want to caution people: if they're in the innovation camp and they're just getting started, there's a lot of uncertainty. Don't expect that there's some magic here, some guarantee that the rules are now suddenly different. They're not. It's the same innovation game; the technology is different. But now, to answer your actual question, I'll land this plane.
00;06;38;09 - 00;07;17;12
Cassie Kozyrkov
Finally, let's get back to the difficulty of measuring ROI and the difficulty of talking about value, and why there's a value gap here. I'll say that if you look at how we thought about metrics before, with your classic machine learning ten, twenty years ago, when we deployed it we were thinking in terms of minimizing the loss or minimizing error. And when you have that philosophy of error, what you also have is a philosophy of a correct answer, a right answer, right?
00;07;17;12 - 00;07;36;29
Cassie Kozyrkov
Because if you don't have such a thing as a right answer, you can't have such a thing as a mistake, so you can't have such a thing as an error to minimize, so you can't have all the optimization stuff that we're very used to. So it'll be things like, you know, you'll have an image classifier and it's supposed to say cat and instead it says dog.
00;07;37;01 - 00;07;57;16
Cassie Kozyrkov
And we can say that that's an error, right? We can measure that. Or you're supposed to predict the weather. It was supposed to be 72 degrees, and we observed that it's 75 degrees, right? A three-degree error. It's all in terms of a single right answer that we are targeting. Now, of course, there are many wrong answers. If the weather is 72 degrees, all the other numbers are wrong.
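To make the contrast concrete, here is a minimal sketch (not from the episode) of single-right-answer scoring: a classification is either right or wrong, and a forecast's error is just its distance from the one observed value. The toy predictions and labels are illustrative assumptions.

```python
# A "single right answer" world: error is distance from the label.

# Classification: the prediction either matches the label or it doesn't.
predictions = ["cat", "dog", "cat", "cat"]
labels = ["cat", "cat", "cat", "dog"]
error_rate = sum(p != y for p, y in zip(predictions, labels)) / len(labels)
print(f"classification error rate: {error_rate:.2f}")  # 0.50

# Regression: the forecast said 72, we observed 75 -> a 3-degree error.
forecast, observed = 72.0, 75.0
print(f"forecast error: {abs(forecast - observed):.1f} degrees")  # 3.0
```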
00;07;57;23 - 00;08;37;14
Cassie Kozyrkov
Right. But there's only one right answer. Now think about generative AI, where we are essentially simulating from distributions, and anything out of that distribution could potentially be, if it's from the right distribution, a good-ish answer. Think about a customer service interaction, an email, a poem. If I ask for an email to set up this podcast with you on a Friday afternoon, I could write that email hundreds, thousands, an infinite number of different ways and it would still be a good email. And of course there's an infinite number of ways
00;08;37;14 - 00;08;56;07
Cassie Kozyrkov
it would be a bad email. I could, you know, start cursing in the middle of it. I could send you something that's not an email at all, but, you know, just a poem. And you would find that weird, though you'd probably be intrigued, like, definitely invite her on the podcast. Or, you know, I could make a classic mistake
00;08;56;07 - 00;09;18;06
Cassie Kozyrkov
and instead of Geoff, I could call you something else, right? Lots of different ways to be wrong. Lots of different ways to be right. What should my tone be? How would I know which email is better than which other email? Right? I've got infinite, endless ways to get it right. It's not cat, not-cat, or 72 degrees,
00;09;18;06 - 00;09;42;01
Cassie Kozyrkov
not 72 degrees. It's an infinity of ways that I could be solving that. And now we get to the big problem. So far these are just mathematical tools, or, you know, software based on mathematical tools based on data, doing what it was made to do.
00;09;42;04 - 00;10;07;13
Cassie Kozyrkov
But what it can't do for a leader is tell them what good enough actually means. How do you make this cut between completely awful emails all the way across to, I don't know how you would even think of it, the most perfect email you could get? Somewhere you're going to have to draw a line. You're going to have to create standards of some kind.
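One way to picture that line-drawing: evaluating generative output becomes a rubric plus a threshold that a human chooses. A minimal sketch, where every check and the cutoff are illustrative assumptions rather than any standard method:

```python
# With endless right answers, evaluation is a rubric plus a threshold.

def score_email(text: str, recipient: str) -> float:
    """Fraction of a hand-written rubric that a generated email passes."""
    words = text.split()
    checks = [
        recipient.lower() in text.lower(),                     # greets the right person
        not any(w in text.lower() for w in ("damn", "hell")),  # naive no-cursing check
        8 <= len(words) <= 200,                                # plausible email length
    ]
    return sum(checks) / len(checks)

GOOD_ENOUGH = 0.67  # where to "draw the line" is a human decision

draft = "Hi Geoff, confirming our podcast recording this Friday afternoon."
print(score_email(draft, "Geoff"))                 # 1.0
print(score_email(draft, "Geoff") >= GOOD_ENOUGH)  # True
```

The code can compute a score, but where GOOD_ENOUGH sits, and what belongs in the rubric, is exactly the leadership judgment being described here.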
00;10;07;13 - 00;10;32;13
Cassie Kozyrkov
You're going to have to talk about how you're going to measure this if you're going to have automated email, for example, if that's the system that you're going to put in. And if you're kind of squeamish about that, you could say, well, I will reduce it to a KPI I know about. I will maybe see how much time I can save my humans if I give them an email copilot.
00;10;32;15 - 00;10;50;07
Cassie Kozyrkov
But now I get some measurement issues as well, as a manager, because do I force them to use it or not? If I don't force them to use it, am I tracking whether they chose to use it or not? Now, how am I going to measure value if they're all ignoring it and continuing to write their own emails?
00;10;50;07 - 00;11;12;02
Cassie Kozyrkov
But maybe they are writing better for reasons unrelated to the AI? Maybe that'll look like results; maybe it won't. All right, we've got some potential issues here. Or maybe we force them all to use it. They hate it. They haven't learned how to use it yet. Maybe what we're going to see is decreased productivity, and eventually that productivity comes up.
00;11;12;02 - 00;11;29;21
Cassie Kozyrkov
But are we sure we're measuring the right thing? And how would we think about those strange edge cases where every now and then an email is a PR disaster, especially when we make systems where we take the human out of the loop, and now all those emails are going to be sent with no human oversight?
00;11;29;23 - 00;12;04;11
Cassie Kozyrkov
Maybe a bunch of them save a lot of time, but then there's the one that gets the media interested, and that tanks your company. So there are so many different ways you could think about setting up notions of what value is, how to measure it, and how to deal with this curse of endless right answers. And again, most MBA courses, most things we think about when we think about metrics, are about targeting a right answer and how wrong you are. This is a different paradigm, and I think it's snuck into our workplaces without us even realizing how much of a different paradigm it is.
00;12;04;13 - 00;12;32;00
Geoff Nielson
If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe!
00;12;32;03 - 00;12;53;04
Geoff Nielson
There's so much you said there, Cassie, that I want to follow up with, and so many interesting jump-off points. But, you know, to your last point, maybe I'll start with this one, which ties into something you said earlier: the absence of really understanding what we should be measuring or what return we're actually looking for on our investment, which is hard.
00;12;53;07 - 00;13;17;17
Geoff Nielson
And I think in some ways what we're seeing is an abdication of leadership decision-making, to not even try to do it and, to come back to what you said, to just sprinkle AI on everything, right? Just sprinkle AI and it'll make everything better. When you hear leaders say, you know, we need more AI in the business, everybody should be using AI...
00;13;17;21 - 00;13;32;29
Geoff Nielson
Yeah, what's your reaction to that? I'm assuming it's a negative one based on the expressions you just made there. And what better approach do you recommend so that they can, you know, create more impact and value?
00;13;33;01 - 00;14;01;18
Cassie Kozyrkov
Look, I am a huge fan, an enormous fan, of the word "why." And I would never presume that the surface read of a situation, the way that you described it, is in fact the reality. Right? Because sometimes you get a request that sounds silly, and that is because there are things underneath it that nobody is telling you.
00;14;01;18 - 00;14;26;19
Cassie Kozyrkov
You've got to check, all right? Sometimes folks are just on the hook to have to do something, anything, that calls itself AI, because there's a board member that must be appeased or something like this, and there's just no way out of this conundrum. And then what you do is you sprinkle AI as far away from the business as possible, where it won't actually touch anything or do anything important.
00;14;26;19 - 00;14;48;15
Cassie Kozyrkov
I mean, a very classic version of this would be, you know, adding some fun, harmless AI feature. Maybe it makes music on one of your web pages or something. Maybe it's awful, but, you know, it's instrumental, so there's no real way to mess it up, right? Like, who knows?
00;14;48;20 - 00;15;10;25
Cassie Kozyrkov
Just something. Keep it away, keep it away from anything important. So we've got to know why, why we're chasing this. And the other thing is, if the reason that you're doing this as an executive is that you have a sense that the world is going AI-first, and I would agree with you, but you don't yet know what you need it for,
00;15;10;27 - 00;15;37;05
Cassie Kozyrkov
but you want to make sure that if you needed it, you would be in an okay position to act quickly, that is the panic, and the thing to do there is not to actually deploy AI. In my newsletter, I put out a piece about AI infrastructure debt. This is a term that Cisco has just coined in their AI readiness report that came out this week,
00;15;37;05 - 00;16;10;18
Cassie Kozyrkov
their 2025 one. In there, and I'm going to forget the exact numbers, but it's something like 13% of the companies that they surveyed, and it's a survey of over 1,000 leaders, something like 13% have the preparedness to actually do AI and agentic AI at scale. Thirteen percent. And 83% is how many plan to release AI agents.
00;16;10;21 - 00;16;39;07
Cassie Kozyrkov
Right. These numbers compute poorly together, so to speak. Seems like there's a problem. And they refer to this AI infrastructure debt as what accrues every time you sloppily put AI into production without thinking about the infrastructure. It's sort of like a cousin, or an evolution, of technical debt. Right? You're just punting potential problems.
00;16;39;07 - 00;17;00;11
Cassie Kozyrkov
Oh, we don't really have the people to do this. Oh, we can't really set up proper guardrails or human-in-the-loop interventions. Oh, you know, if we had to scale this up, we wouldn't have the GPUs. Right? All of that stuff is debt, and it grows quite quickly, because you get a lot of pressure to be in the game of AI.
00;17;00;14 - 00;17;12;22
Cassie Kozyrkov
And that debt has a really, really high interest rate. It's a really, really high-interest-rate credit card. So what I would say instead is:
00;17;12;25 - 00;17;48;14
Cassie Kozyrkov
if you are able, as a leader, to somewhat blur the boundaries, and I think this is ethically fine, blur the boundaries between what AI infrastructure means to you, your board, and your leadership team, and what AI means, then you can start thinking about investing instead in what you would actually need when the time comes. You can watch others in your industry show you which use cases are good or not good, and you are set up to hit the ground running, scale quickly, and join that 13% from the
00;17;48;16 - 00;18;11;22
Cassie Kozyrkov
Cisco report. So that might be the smarter thing, right? Start investing in the capability of doing it. As you're doing this, you also have these pilots, and you can be in pilot purgatory, as it were, but it doesn't matter, because what you are trying to do is set yourself up to be able to use AI properly in the future. Because you don't know what you want it for yet, you want to watch.
00;18;11;24 - 00;18;38;19
Cassie Kozyrkov
You want to get inspired by what others are doing. You don't want to rush in, you don't want to be ballooning that AI infrastructure debt, you want to be setting a baseline. That said, also going back to that 95% study, one of the findings there is that companies that partner are twice as likely to deploy AI solutions successfully as those who reinvent the wheel and build things from scratch in-house.
00;18;38;21 - 00;19;12;20
Cassie Kozyrkov
And then I think there was a note in that report that said something like: most of them, or almost all of them that the researchers talked to, had considered or tried to build in-house. Right? So this is like an urge; everybody wants to reinvent the wheel. And the guidance, and this is guidance I've believed in for years, it has made sense to me during pretty much my entire career, including at Google
00;19;12;20 - 00;19;37;27
Cassie Kozyrkov
and now, is that you should focus on what your particular strengths are, whatever your business edge is, and let somebody else handle for you the parts that you are not an expert in. And you know what you are definitely not an expert in, if you are not a company that does this or a tech giant? You are not an expert in AI security.
00;19;38;00 - 00;20;15;06
Cassie Kozyrkov
So that's one you don't want to roll yourself at home. You have to go and partner with a vendor that actually knows how to do that. And if it's the teensiest, tiniest little vendor and a security breach could be catastrophic to your business... well, look, I am biased, because I spent ten years at a behemoth, at Google, but you might want to go to a larger company that actually has the staff, the talent pool, to do something like secure your client-facing AI, if that's the direction you're going in.
00;20;15;08 - 00;20;17;21
Cassie Kozyrkov
Yeah. I forgot what the question was, Geoff.
00;20;17;24 - 00;20;44;29
Geoff Nielson
So that's, that's all good. Yeah, we covered a lot of ground there. And one of the things I did want to unpack a little bit more is the phrase "AI infrastructure." When you say AI infrastructure, is that making sure that, you know, basically your data is primed and ready for AI ingestion? Is that working with the right stack of vendors in the space? Is that having enough AWS credits for all the processing
00;20;44;29 - 00;20;53;28
Geoff Nielson
you're going to do? What is in that bucket? And if organizations are truly serious about AI, what do they need to do foundationally to get ready?
00;20;54;05 - 00;21;17;27
Cassie Kozyrkov
Right. So, of the things that you mentioned, whether it's AWS or one of the other cloud providers, those make sense, but even there we would want to poke a little bit, which we will, you remind me. But there's a whole piece that you didn't mention, and that's the humans. That's the humans on the inside.
00;21;17;29 - 00;21;46;21
Cassie Kozyrkov
That's your leadership. And that's also the humans on the outside. One of the things that leaders, in my opinion, could spend more time thinking about is how, whatever AI system they would love to put into production, how that will actually be taken up by the people it touches. Right? You really have to think about that.
00;21;46;28 - 00;22;25;29
Cassie Kozyrkov
So when you think about the expectations of your users and what you are deploying into this pot of expectations, you can immediately be defensive against some bad situations. For example, if you have all of your users very well trained to expect a narrow but unimpeachable set of correct responses from your system, and now you offer them a generative AI system, well, it makes mistakes.
00;22;25;29 - 00;22;46;28
Cassie Kozyrkov
AI systems always make mistakes; it just sometimes takes quite a lot of skill to see them. Right? What happens with even a very functional AI system? We still say you will meet the long tail, find the outliers, the weirdos, the situations that you did not see coming. Even when it's highly performant, expect that something's going to happen.
00;22;47;01 - 00;23;19;18
Cassie Kozyrkov
And so you anticipate that there will be mistakes. Now the question is, when a mistake touches a user who has a particular kind of expectation, what then happens? How flammable is that? Your AI infrastructure is, of course, all the obvious things; you have actual infrastructure in your data pipelines and all the rest. But it's also intangible things, like: at what stage are my user expectations?
00;23;19;20 - 00;23;42;02
Cassie Kozyrkov
Have I managed them sufficiently that I could even be deploying to users? What about internally? If I'm doing some internal corporate engineering, if we're looking at the digital employee experience and I'm offering some digital tools to my employees, have I managed their expectations? Have I trained my staff?
00;23;42;04 - 00;24;01;26
Cassie Kozyrkov
Do they know how to think about these tools? Let's say I need humans in the loop. Am I sure my human will be in the loop, or might they be asleep at the wheel? How do I do the training? And, depending on the importance of the task, I might need to think about having multiple humans in the loop.
00;24;01;29 - 00;24;23;04
Cassie Kozyrkov
I might need to think about consensus. There are all kinds of measurement infrastructure things that we would need to put in place. In generative AI we've just seen this endless-right-answers thing, a nightmare challenge for management, because we've all got to change our paradigm and think differently about measurement and metrics. Have we done that?
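The multiple-humans-in-the-loop idea can be pictured as a consensus gate: the more important the task, the more reviewers must approve before an AI output ships. A minimal sketch, with invented policy names and thresholds:

```python
# Consensus human-in-the-loop gate: stakes determine how many humans
# must agree before an AI output is released. Policies are illustrative.

from dataclasses import dataclass

@dataclass
class ReviewPolicy:
    reviewers: int          # how many humans look at each output
    approvals_needed: int   # how many must approve before release

POLICIES = {
    "low_stakes": ReviewPolicy(reviewers=1, approvals_needed=1),
    "high_stakes": ReviewPolicy(reviewers=3, approvals_needed=2),
}

def release(importance: str, votes: list[bool]) -> bool:
    """Release the output only if enough reviewers approved it."""
    policy = POLICIES[importance]
    assert len(votes) == policy.reviewers, "wrong number of reviewers"
    return sum(votes) >= policy.approvals_needed

print(release("high_stakes", [True, True, False]))  # True: 2-of-3 agree
print(release("low_stakes", [False]))               # False: the human said no
```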
00;24;23;04 - 00;24;40;24
Cassie Kozyrkov
Have we put this in place? Do we have testing pipelines? Do we have experimentation pipelines? Do we know how we're going to roll things back if we need to? Do we know what versions we're going to go to? Do we actually know what will happen in what kind of scenario? Do we know how we're going to make our guardrails?
00;24;40;27 - 00;25;16;27
Cassie Kozyrkov
What sets those guardrails? How do we update them? How are we going to react to legal changes? Right. All this stuff... okay, I know it sounds hegemonic to say everything is AI infrastructure, but to be ready for AI, there is a lot of stuff that you would need to get right. And one of the ways that you can dodge a lot of this is to outsource some piece to a vendor who is supposed to do all of it for you, and you just check that you're getting precisely what you need. But you still have to articulate what it is that you need, and you have to worry,
00;25;16;27 - 00;25;38;01
Cassie Kozyrkov
measurement-wise, that there is going to be a gap, a hole, between what the vendor sees and what you see. There's going to be some bit in the middle that nobody sees. And that could be a huge risk, not just in terms of security, but in terms of your system slowly going sideways with neither party noticing.
00;25;38;03 - 00;25;57;19
Geoff Nielson
That's really interesting. I interpreted infrastructure, as you could tell, in a much narrower, more technological way. And you're saying, no, no, it's cultural infrastructure, it's the business architecture, it's everything you need internally to set yourself up for success with that type of adoption.
00;25;57;21 - 00;26;26;12
Cassie Kozyrkov
Look, Geoff, I also think, and maybe this is just a great one to pull out, but let's take things to their limit, to their logical conclusion. You and I, I imagine, nerdy folks that we are, have a certain love for technology, right? We still think of technology as this ugly duckling thing that we have to suffer with a little bit.
00;26;26;12 - 00;26;56;23
Cassie Kozyrkov
And after a while, we love it because we've done battle with it. It's a little bit fiddly and detail-oriented; you know, you learn some unnatural languages, that kind of stuff. Or it's physical hardware objects. And we still remember how annoying things were to set up. Right? We still have this feeling that the majority of technology is the fiddling, the long journey of fiddling, and then the payoff.
00;26;56;25 - 00;27;27;08
Cassie Kozyrkov
But what we will see is that technology, the actual experience of building it and interacting with it, becomes so much easier. We will all move to less fiddling and more architecture, more thinking about how everything fits together. And we will see that what everything fits together with is not just all the other complex technology, of which there's going to be plenty, but also all the ways in which it touches physical reality, quite often through humans.
00;27;27;10 - 00;28;01;00
Cassie Kozyrkov
So if you're not thinking of humans as now part of this infrastructure, because you're still used to wires and bits being what infrastructure is, then that way of thinking will have to crack at some point within the next decade or so, when we realize just how human and ambiguity-filled and unpredictable technology has become.
00;28;01;03 - 00;28;30;21
Geoff Nielson
Right. And, you know, as you were saying that, I was reflecting too. Part of me wonders if some of the hubris of the moment is people believing that the fiddling isn't here this time with AI: oh, AI is easy, you really can just sprinkle it in and it's all built. And even in this world where we get toward AI being able to remove some of the fiddly stuff, focusing more on integration, focusing more on, you know, sort of the magic that people bring to it,
00;28;30;24 - 00;28;47;27
Geoff Nielson
we still have to build that, right? And so, coming back to that initial report, and I love, by the way... so many people I talked to saw that report and were like, this is the end, it's a bubble, you know, AI is dead. And I love that you come in and you're like, I'm not even surprised.
00;28;47;27 - 00;28;52;20
Geoff Nielson
Like, of course it said that. People have the wrong expectations.
00;28;52;23 - 00;29;06;05
Cassie Kozyrkov
For me, I just need to make sure I publish first, right? I can just point: this was January, tap, tap, and yours came out in the middle of the year. Don't make me laugh at the board again.
00;29;06;07 - 00;29;28;27
Geoff Nielson
No, it's really good. But, you know, it's making me wonder as well, with this belief that AI is so easy, you know, just plug it into anything and magic will come out, and that bubble sort of being deflated... you know, it comes back to... I can't wait. Sorry, I can't... like, I can't either.
00;29;28;29 - 00;29;49;28
Geoff Nielson
I can't either. But I did want to ask you: there's another phrase that has been floating around, which is the AI-first enterprise, the AI-first organization. And I think part of why leaders are coming back to it is: oh, I want to get there, I need to be competitive.
00;29;50;00 - 00;30;15;05
Geoff Nielson
And there's this sort of race, and I deliberately choose the word "race" because it's: as fast as possible, I'm not budgeting for any architecting, I just want results now. Is that the right approach for everybody? Should everybody be an AI-first organization, or is that only right for some? And what do you need to do to answer that question and to get there properly?
00;30;15;07 - 00;30;41;16
Cassie Kozyrkov
Yeah, look, I love this question. And to the earlier question: it's going to be relevant, I promise, I land the plane eventually. The plane is taking off. But we'll start here. Part of why I want that "AI is easy" notion to die, but also to live,
00;30;41;18 - 00;31;05;04
Cassie Kozyrkov
is because I want people to see two separate things here, and the lessons from one don't translate to the other. So, the first: if I am always my own human in my immediate loop, and I am just dealing with language, right? I just have a large language model. It's not connected to anything, maybe the internet, but all it does for me is
00;31;05;04 - 00;31;28;20
Cassie Kozyrkov
give me language back on screen. So language can only hurt me in that setting if I let it, right? If I'm gullible: I ask a stupid question, I get a stupid answer, I don't realize it's a stupid answer, I go, you know, change my medical habits or something on the basis of it. Right? Bad. And it hurts me, but it hurts me because I took it and I did it.
00;31;28;22 - 00;31;48;14
Cassie Kozyrkov
But mostly what I can do as an individual is play back and forth and ask anything that's immediately of interest to me. If I'm bored, I leave; if I'm compelled, I stay. And as a worker, if it's making me more productive, I stick with it, right, if it's a choice; otherwise I put it aside in the moment, right?
00;31;48;18 - 00;32;17;22
Cassie Kozyrkov
But the human is generally, if they're using these tools well, shaping a whole bunch of output. And there's that version of AI-first, of saying: this is actually a pretty awesome tool if you're good at seeking advice, so everyone in the company should join the new economics of advice. So what are the skills for getting advice? Knowing what's important.
00;32;17;24 - 00;32;34;09
Cassie Kozyrkov
These are just the old judgment skills, leadership skills. They're actually kind of hard, though they sound easy. Knowing what's worth asking about, right? Those are your priorities. Knowing how to ask, that is context. If you're having marriage issues and you get the best advisor in the world, don't just run up to them like, should I get a divorce? Right?
00;32;34;09 - 00;32;52;07
Cassie Kozyrkov
They need a little more context than that. It's not going to work. It doesn't matter how good they are; they're going to give you a stupid answer if you don't supply the context. A lot of judgment is required here; there's not one right answer. What you choose to show is as important as what you choose not to reveal, and that's going to shape what we get next.
00;32;52;07 - 00;33;12;10
Cassie Kozyrkov
And that's judgment as well. And then the third piece of asking for advice is the skill of not taking bad advice, of knowing the quality. Right? So: knowing what's important, knowing how to ask, and knowing the quality. If you're pretty good at that, and you can get better at that, you can get so much out of just the language piece of generative AI. You can learn things fast, right?
00;33;12;10 - 00;33;31;21
Cassie Kozyrkov
Give it a bunch of context. You're interested in quantum physics, I don't know, but you know nothing about it. Tell it what you do know about. Tell it how you like to learn, how you want the output formatted. Get help in finding sources, rabbit holes to go down, right? You will learn quantum physics much faster that way than if you just try to consume it the old way.
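That context-first advice-seeking pattern can be captured as a reusable prompt template. A minimal sketch, where the field names and example values are all invented for illustration:

```python
# Context-rich "advice seeking" prompt: say what you know, how you
# learn, and what output you want, instead of asking a bare question.

def advice_prompt(topic: str, background: str,
                  learning_style: str, output_format: str) -> str:
    return (
        f"I want to learn about {topic}.\n"
        f"What I already know: {background}\n"
        f"How I like to learn: {learning_style}\n"
        f"Please respond as: {output_format}"
    )

print(advice_prompt(
    topic="quantum physics",
    background="high-school physics, comfortable with algebra",
    learning_style="analogies first, then the math",
    output_format="a two-week reading plan with sources and rabbit holes",
))
```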
00;33;31;23 - 00;34;12;17
Cassie Kozyrkov
So that version of AI-first, where as a leader you have quite a strict mandate for everybody, when it doesn't involve confidential information, to get a second opinion, for crying out loud, from a large language model. Before asking your manager, before submitting work, if it's non-confidential, again, screen it. Maybe you were asked to create some graphics; before showing them to the next person or the client, right?
00;34;12;17 - 00;34;35;07
Cassie Kozyrkov
Maybe screen it. Ask: how can this be better? And then ask how it can be better again, and then ask how it can be better again. Ask about your prompts themselves: how can this be better? And keep applying your judgment, keep applying your brain, take the good advice, leave the bad advice, and you'll see yourself supercharged. That version of AI-first is not going to harm anything.
00;34;35;10 - 00;34;53;02
Cassie Kozyrkov
We're just going to realize what a premium judgment is at. We're going to begin to value it more, hopefully. And we're going to realize that those with good judgment are going to supercharge themselves. This is on the individual level, where you are your own intense human in the loop. So that version of AI-first, right,
00;34;53;02 - 00;35;24;18
Cassie Kozyrkov
like, please do that. Train your people, set their expectations right, tell them not to put confidential stuff where it doesn't belong, and for the rest, go wild with AI. Take the junior admin that I hired: his manager, my executive assistant, has been very, very insistent that he do several rounds of checking things with AI before even passing them to her, let alone passing them to me.
00;35;24;21 - 00;35;50;12
Cassie Kozyrkov
The skills that he's managed to pick up are phenomenal, and how quickly. Within a week he was editing some of my R code for renaming files, right? That R code is not exactly a state secret, but this is a young, bright kid with zero engineering background jumping into editing R code, which is, you know, an arcane language that only the masochists can love.
00;35;50;12 - 00;36;19;03
Cassie Kozyrkov
But, yeah, what can you do? And then, you know, he's gotten really great at graphics in just a few weeks, right? What you can learn and do that you had no experience in before, and how quickly you can become what I call a chimeric worker, chimeric, right? You just pull in new skills and get AI to supercharge you. When you're doing it with this individual productivity lens, it's phenomenal, right?
00;36;19;03 - 00;36;58;16
Cassie Kozyrkov
You just have no excuse. So I'm singing that version of AI-first from the rooftops. But it does not translate, every lesson there does not translate, to automating with AI at scale, hands off the steering wheel. In fact, it teaches you the wrong lessons, bad lessons. If you're an executive who is intending only to boost the individual productivity of your workers, you should absolutely be, you know, banging the drum: hey, before you ask me, always ask the AI first, about everything not confidential.
00;36;58;16 - 00;37;23;17
Cassie Kozyrkov
If you wouldn't violate your NDA by asking an external friend about something, you know: hey, what kind of style is appropriate for a performance review? Give me a draft of how to express, in a kind way, the following things for a two-out-of-five-stars performer. Right? This type of stuff, without naming any names.
00;37;23;19 - 00;37;49;00
Cassie Kozyrkov
You can get that advice externally; get that advice internally as well, right? Be a champion of that. But when you see how easy that is, that kind of easiness, which is predicated on you really micromanaging your AI tool and really massaging where you point it and how you use it, you want to now take this thing and scale it up somehow and let it go.
00;37;49;00 - 00;38;20;25
Cassie Kozyrkov
It is not the same game. And that's where all your AI infrastructure debt issues come in, and that's where you have your meet-the-long-tail situations where you thought, oh, AI is smart, that's fine, let me put this chatbot in front of a customer. And then you have situations like the car dealership chatbot offering to sell a car for a dollar, or my personal favorite, when Virgin Money's chatbot scolded a user for using the word "virgin."
00;38;20;27 - 00;38;39;19
Cassie Kozyrkov
All right, you just get this stuff, and it's a magnet for the press making fun of you. And it is just such a different game. When we automate, this is a different thing. When we take our hands off, this is a different thing. And so you have to know which version of AI-first you're talking about.
00;38;39;22 - 00;38;51;05
Cassie Kozyrkov
There's the absolutely-do-that version: AI as advisor, where you don't take bad advice, knowing it could be bad advice. Do that before automation.
00;38;51;08 - 00;39;06;15
Cassie Kozyrkov
To automate, you're going to have to know how to measure value. You're going to have to say why you want to do the thing you're doing. You're going to have to put in all kinds of guardrails, and you are going to have to expect that your guardrails will be wrong, so you'd better have safety nets and plans in place for when things happen.
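The guardrails-plus-safety-net idea is often implemented as a validation wrapper with a human fallback. A minimal sketch, where the rules, names, and escalation path are all illustrative assumptions rather than any specific product:

```python
# Guardrail wrapper: validate an AI draft against explicit rules, and
# when a rule trips (or the guardrail itself fails), fall back to a
# human instead of shipping anyway.

BANNED_TOPICS = ("pricing promises", "legal advice")

def guardrail_check(reply: str) -> bool:
    """Return True only if the draft passes every rule we wrote."""
    return not any(topic in reply.lower() for topic in BANNED_TOPICS)

def escalate_to_human(draft: str) -> str:
    return f"[queued for human review] {draft}"

def respond(draft_reply: str) -> str:
    try:
        if guardrail_check(draft_reply):
            return draft_reply              # ship the AI's draft
    except Exception:
        pass                                # guardrails can be wrong too
    return escalate_to_human(draft_reply)   # the safety net

print(respond("We can offer legal advice on your warranty."))
```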
00;39;06;22 - 00;39;25;27
Cassie Kozyrkov
You're going to have to think about security. You're going to have to think about whether you can scale properly. You're going to have to think very carefully about how this is going to be received by people who were not the people who set it up. But there is a glorious side to this version of AI-first as well, which is why it smells like catnip.
00;39;25;29 - 00;39;56;25
Cassie Kozyrkov
And that is just the simple equation of AI making anything easier. Let's imagine it makes email easier, just that. I can ramble into the chatbot and say, you know, write a polite email to Geoff, we're doing this on Friday at 1 p.m., and something polite will come out.
00;39;56;28 - 00;40;17;24
Cassie Kozyrkov
I don't even have to prove that AI can make software engineering easier, which it can, right? Even just reducing email means software engineers have more time to write more software. But if we make them productive with software copilots, they have more time to write more, and then some of that time will be invested in software for making software writing easier.
00;40;17;26 - 00;40;44;02
Cassie Kozyrkov
So we just have this loop: all of the things that we can suddenly automate, that space is growing, the universe is expanding much faster than it used to. And so now, from the executive's point of view: the executive has probably been told recently that there is something they wanted that's impossible, technologically impossible, technologically infeasible, just can't be done.
00;40;44;04 - 00;41;04;23
Cassie Kozyrkov
And if it's teleportation, that's still infeasible. But there are a whole bunch of other things that are suddenly being put on the table for automation. And the version of AI-first that I would love executives to have for this automation sphere is not "let's automate it with AI no matter what." Right? That is bad news.
00;41;04;23 - 00;41;34;07
Cassie Kozyrkov
That is leadership abdicating its role, as you said, Geoff; don't be doing that. Instead, before you go do the thing the traditional way, or before you give up thinking that it is impossible and infeasible to run your business the way that you wish you could, because you think you've got an insurmountable technological constraint, just take the time to revisit that frequently, because it could just be that what you need has recently become possible.
00;41;34;09 - 00;42;07;07
Cassie Kozyrkov
It could be. And so before assuming and committing to doing things the old way, consider the possibility that AI might have given you the impossible made possible. That version of AI-first is what we want to see. But that version of AI-first is also a can of worms. Because let's say that you do see that your traditional processes now have an automation option with AI. That will change everything.
00;42;07;09 - 00;42;28;26
Cassie Kozyrkov
You're going to have to think differently about how to measure. You're going to have to think about how to have a foundationally probabilistic system play nice with what is probably a deterministic ecosystem. Are your users prepared for that? Are your workers prepared for that? Are any other systems going to break if you start putting this in place?
00;42;28;29 - 00;42;36;13
Cassie Kozyrkov
But things are moving so quickly that you can't, as an executive, afford not to be thinking about it in that way. You've got to be.
00;42;36;13 - 00;42;56;17
Geoff Nielson
I love the way you kind of teased that apart into the different definitions of AI-first. And I'm trying to, and this may be a dramatic oversimplification, but I'm trying to frame this up in my mind, Cassie. I'm happy for you to disagree with the framing, but I'm almost seeing three buckets here.
00;42;56;21 - 00;43;24;01
Geoff Nielson
There's AI for the individual worker, which is fairly easy, and it seems like there's definitely some reward or return on it. There's the opposite end, which is trying to just go after everything and re-architect your entire organization. And then there's, if I may, a Goldilocks option in the middle, which is: figure out specific cases or specific components of your business where there was too much complexity previously
00;43;24;01 - 00;43;50;04
Geoff Nielson
but now that can be unlocked. So, first of all, do you agree with that framing of the spectrum of difficulty? And then, across those, where would you place the value on each of them? Is it as simple as: one's really easy and low value, the other one's really hard and, you know, big value, and the Goldilocks one is just right?
00;43;50;04 - 00;43;51;08
Geoff Nielson
Or how would you categorize them?
00;43;51;09 - 00;43;58;00
Cassie Kozyrkov
I would think about this differently. Again, I wouldn't put them...
00;43;58;03 - 00;44;35;06
Cassie Kozyrkov
I wouldn't even want them to be juxtaposed. I would say: the year is 2025, it is almost 2026. If you still think that you can run your business without introducing your workers to the concept of personal productivity upgrades, it is like you were trying to run your business in the year 2015 ignoring that the internet is a thing. There are a few businesses where you could probably get away with just pretending there's no internet.
00;44;35;09 - 00;45;09;22
Cassie Kozyrkov
But what a wild thing to do, right? You don't have to be a technology company; just, you know, get with the program. So that version is really a different set of stuff. And when that gets delegated to the IT department, I find it very funny. That is not an IT department thing. Making sure they have access to the internet, making sure the WiFi works, making sure that the security system is there, that I get.
00;45;09;25 - 00;45;47;26
Cassie Kozyrkov
But the "how do we search the internet" question is not IT, right? Not now, and not in 2015. We have access to a store of knowledge, here's human knowledge, and we can go searching within it; what would we want to think about finding? Right, very domain-oriented. And so once you've got people past that first lesson, just use it for something, and again, do this every time you notice someone asking you something at work that they really could have asked an LLM, until they get the hang of it.
00;45;47;28 - 00;46;16;10
Cassie Kozyrkov
That shouldn't even be a question anymore. Now, the other two things, your other two buckets, here's how I see them. The thing you're calling the Goldilocks zone is actually the thing where leadership has not abdicated its responsibility, as in, leaders have to think about what is worth doing, what is worth having, at the scale they're supposed to operate, not at the individual scale.
00;46;16;10 - 00;46;49;04
Cassie Kozyrkov
Not "what's my priority and what should I ask advice on," but big: what do we need? What if we had this? Would it save us a lot of time or money, or allow us to open new lines of business, or completely change the way that we operate? If you do the exercise of thinking about what you actually wish you could do with your business, then once you have that inventory, you can go through it and see if any of those wishes might actually be granted.
00;46;49;06 - 00;47;12;17
Cassie Kozyrkov
So that work is not work for a technical PhD. That work is for somebody who's committed to making the time to be strategic, who's given the space to actually think about where they want to direct their company, their organization, and to articulate it. Right? And this work, this is the hard work, and this is going to be the work.
00;47;12;20 - 00;47;35;01
Cassie Kozyrkov
No matter how much we automate, it's still going to be the hard work. And I think of your third bucket as: don't worry about any of this, let's just get all our shiny toys in place. I see that as a very expensive way to skip doing the work.
00;47;35;04 - 00;47;56;14
Cassie Kozyrkov
Because doing the work is probably faster than setting everything up on a "maybe I'll need it." And you can begin setting things up, like beginning to train people on the possibilities, getting people to help you with your brainstorming. You can certainly begin
00;47;56;17 - 00;48;19;16
Cassie Kozyrkov
delegating some opportunities where individual teams would find their own quick wins and put in some quick-win automations, and maybe, you know, catch the bug that way. But that's just to get your organization even able to think about this: to know that there is a vendor ecosystem, to know what it actually means to connect these pipelines.
00;48;19;16 - 00;48;34;29
Cassie Kozyrkov
Right? But mostly the work is: what would we actually want? And again, trying to set yourself up for every possible future is a very, very expensive way to skip doing the work.
00;48;35;01 - 00;48;55;21
Geoff Nielson
So let's stick on that one for a minute, because it sounds like this is obviously the approach that organizations need to be taking if they're going to be successful in 2025 and beyond. And so in this world where you have to do the work, you have to ask the tough questions, you have to focus and prioritize as a leader...
00;48;55;24 - 00;49;19;26
Geoff Nielson
We've talked already about AI architecture, right? It's not just a leadership exercise; then you have to bring in IT, you have to actually figure out how you are going to do this. What does that recipe look like for you from end to end, at least abstractly? And then, you know, what do you see as the biggest success factors and the areas where people tend to go off the rails?
00;49;19;28 - 00;49;26;19
Cassie Kozyrkov
Okay, one question at a time, just so that, you know, those planes are all going to land. It's going to be a whole air show.
00;49;26;19 - 00;49;48;01
Geoff Nielson
Okay, let's start with the recipe. We know there's kind of a right approach here, if I can just cut the knot and call it that: there's an approach we need to be taking as leaders with IT, with technology, with probably an ecosystem of vendors. Can you just lay out that vision abstractly, from end to end?
00;49;48;04 - 00;50;19;03
Cassie Kozyrkov
Yeah. Well, first things first: the safest way to automate is to do it at home. By "at home" I mean internally, rather than client-facing AI. And so again, back to that 95% report, which is quoted to death now, but back-office automation was much more predictive of successful deployment than attempting to, you know, automate the salesperson or something like that.
00;50;19;05 - 00;50;55;09
Cassie Kozyrkov
And you can see why. First, automating the salesperson is an insult to humankind, right? Whenever we say "automate the human," instead ask: what do we want to automate? Repetitive drudgery things, right? Often things involving translation in a way that we're not used to thinking about as translation. We think translation is English to Spanish. But what about translation from image to text, or translation from very boring, long legal documents to a quickly parsable TL;DR, right?
00;50;55;15 - 00;51;22;12
Cassie Kozyrkov
Those types of translations. So it's repetitive, it's internal, it's back-office stuff, it involves a whole lot of drudgery, things that any human could do and AI will probably do better. If it doesn't matter who does it, if there's no holding of special context within your organization, and you're doing a lot of it, that is likely a great target for AI.
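That checklist for automation targets (repetitive, internal, drudgery, no special organizational context) can be read as a simple scoring heuristic. A minimal sketch, where the criteria, equal weighting, and example tasks are all illustrative assumptions, not a formal method:

```python
# "What to automate first" as a checklist: more boxes ticked, better
# the candidate. Equal weights are an arbitrary simplification.

CRITERIA = ("repetitive", "internal", "drudgery", "context_free")

def automation_score(task: dict) -> float:
    """Fraction of the automation-candidate criteria a task meets."""
    return sum(task[c] for c in CRITERIA) / len(CRITERIA)

invoice_entry = dict(repetitive=True, internal=True, drudgery=True, context_free=True)
sales_calls = dict(repetitive=False, internal=False, drudgery=False, context_free=False)

print(automation_score(invoice_entry))  # 1.0 -> strong candidate
print(automation_score(sales_calls))    # 0.0 -> keep the humans
```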
00;51;22;20 - 00;51;48;18
Cassie Kozyrkov
And so you really want to break it down: where are we wasting a lot of opportunities? Where are we spending our time where we don't want to be spending our time? And then, at the same time, you have to look at: if we could do anything, if we've now freed up half of the hours of some of our staff members,
00;51;48;21 - 00;52;18;02
Cassie Kozyrkov
does that mean that we cut half of our staff members, or does that mean that we get ambitious about what we're going to do tomorrow? Are we going to pinch yesterday's pennies because AI lets us do this? Or are we going to realize that if everybody is getting into AI and actually using it to transform their businesses, then the future is going to get interesting and weird, and we're going to need to compete with everyone else's moves?
00;52;18;02 - 00;52;51;14
Cassie Kozyrkov
So let's think about what that's going to look like and what our wishes would be, or how we would prepare for that. And we might want to retrain our people, whose time we've just saved, to be effective in that future. To give an example: IKEA reduced a lot of the basic customer service, think phone requests, and this was a client-facing deployment, an external-facing one, that seems to have worked.
00;52;51;16 - 00;53;21;16
Cassie Kozyrkov
They reduced those hours, that need for that stuff, and what they did was retrain those people to do AI-assisted interior design, interior decorating. So you have a complete change now in what you can offer as a business and how you can do business. And you don't need to hire specialists, because AI can upgrade many of your non-specialists into specialists.
00;53;21;16 - 00;53;40;08
Cassie Kozyrkov
What's more important is valuing the people who carry the context of your business, who know how your business actually works, who know what your clients need, and upskilling them aggressively with AI into an interesting future. So that's where we start, with all that vision stuff.
00;53;40;10 - 00;54;23;29
Cassie Kozyrkov
That is just the first part. Now we have to break down whatever we are imagining into what it would take to make it happen. And you ideally want a culture not of people saying no to you, but of people expressing: here's what it would take. Because again, when we go and verify those individual pieces, maybe it is cheaper or easier than we realize. And what it takes, when we are enabling individual workers, giving them tools with which to work, is that we have to figure out how to train them.
00;54;24;01 - 00;54;53;03
Cassie Kozyrkov
We have to figure out, if something goes wrong, if there are issues, what the escalation paths are. When should they override what the tool is doing? When shouldn't they? What guardrails are we putting in place? When it's something involving, I don't know, quality control on Warhammer figurines, that's a very different thing from, you know, using AI-assisted tools for surgery.
00;54;53;05 - 00;55;28;29
Cassie Kozyrkov
Right? These are different: cutting a human and cutting the plastic. And no disrespect meant to the Warhammer community, but these are different things, different applications. And so it really becomes very individual, what you are building. If it is, again, an internal system, not actually connected to any external systems, it's going to have different security requirements from things that are connected to the outside.
00;55;29;02 - 00;55;54;24
Cassie Kozyrkov
We see now that generative AI is, according to a very recent report, the top data exfiltration risk; it has now beaten email as the way in which corporate secrets leave the company. And the more directly customer-facing it is, the more you have to plan for all kinds of things going wrong.
00;55;54;27 - 00;56;21;21
Cassie Kozyrkov
So this is a setting up of guardrails and control structures before we're even talking about vendor selection and actually building the darn thing. And then, depending on what it is, either there's an obvious vendor, and the partnership will be one where they do a lot of the advising from here, because that's what they're supposed to do,
00;56;21;24 - 00;56;42;03
Cassie Kozyrkov
or it will be a case of evaluating and choosing among many, and that's its own journey. Or you may find that it doesn't exist externally and it makes a lot of sense for you to build it internally; you might have to build it all yourself from scratch, and in order to do that, you have to attract the talent to do it.
00;56;42;06 - 00;57;03;19
Cassie Kozyrkov
There's a whole lot of balancing and weighing there, always remembering that this system is not going to exist in isolation. Even if it is the most back-office application, it's still going to exist in the context of what other people already do and of other systems. And people are not used to software that is probabilistic rather than deterministic.
00;57;03;22 - 00;57;31;13
Cassie Kozyrkov
Like, if your spreadsheet gives you some issue, you typically think the problem is that some human typed something in wrong: the formula, the data. You don't think that every now and then the summation function just returns an average, and that just happens. And again, not always, not all the time, but eventually mistakes do come up, and you need a completely different way of thinking about how to handle that.
00;57;31;13 - 00;57;36;06
Cassie Kozyrkov
You need a completely different set of trainings for your AI users.
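What that "completely different way of thinking" can look like in code is treating the model's output as fallible by contract. Here is a minimal sketch, with a hypothetical fake_model() standing in for a real probabilistic model call: validate the output, retry a bounded number of times, then escalate to a human rather than pass a bad answer downstream.

```python
import json
import random

def fake_model(prompt: str) -> str:
    """Stand-in for a real model call: usually valid JSON, occasionally not."""
    if random.random() < 0.2:
        return "Sure! Here's what I think..."  # the occasional weird reply
    return json.dumps({"category": "refund", "confidence": 0.91})

def call_with_guardrails(prompt: str, max_retries: int = 3) -> dict:
    for _ in range(max_retries):
        raw = fake_model(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue                      # malformed output: just retry
        if {"category", "confidence"} <= parsed.keys():
            return parsed                 # contract satisfied
    raise RuntimeError("escalate: output failed validation; route to a human")

print(call_with_guardrails("Classify this support ticket: ..."))
```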
00;57;36;08 - 00;57;59;09
Geoff Nielson
So that's a point that's come up a few times now, Cassie, and I want to double-click on it a little, because the notion is that with AI in general, with any of this generative stuff, it's not black and white, it's not right and wrong, it's probabilistic. And, you know, you've said that has, I guess, cultural and mindset implications for everyone
00;57;59;09 - 00;58;19;03
Geoff Nielson
it touches in the organization, and it requires a new way of thinking. Can you share a little bit more: what is that new way of thinking? How do leaders need to communicate it? What are the right skills and mindsets to actually, you know, thrive in that world?
00;58;19;06 - 00;58;52;22
Cassie Kozyrkov
Yeah. So there's an answer that works for right now, and there's an answer that'll probably work for a near-ish future, because I'm of the opinion that we will all culturally adjust to being more tolerant of mistakes. In situations where mistakes are, let's say, life-threatening, we will have a lot more guardrails, safety nets, and so on than we usually do.
00;58;52;25 - 00;59;14;24
Cassie Kozyrkov
But for everything else, I think we might actually develop more of a sense of humor. We will just have situations where the chatbot does something funny. And, you know, I can sort of imagine a future in 50 years' time where, instead of everyone being amazed that a corporate system could do that,
00;59;14;24 - 00;59;41;29
Cassie Kozyrkov
people would laugh it off, understanding that as a civilization we get a lot out of being able to do more and automate more, but the price for this is the occasional mistake. We've got to accept that, we'll expect it, and for that reason we won't be alarmed by it.
00;59;42;01 - 01;00;12;20
Cassie Kozyrkov
Now, in the near term, and it's funny, I'm a recovering statistician, so probabilistic thinking has been beaten into me. I've been at it for more than half my life, so it's actually quite strange for me to think deterministically. But I'm always a little bit cautious when things are important.
01;00;12;23 - 01;00;45;14
Cassie Kozyrkov
There is a sort of trust issue, maybe, that we statisticians and recovering statisticians have, that the rest of us could adopt. The worst way to have trust issues is to be paranoid about everything all the time. The best way is to first have a filter, and one of your core skills as a decision-maker is to have a filter based on the importance of what's at stake.
01;00;45;16 - 01;01;28;28
Cassie Kozyrkov
You know, the reasonable best case and the reasonable worst case should inform how much cognitive effort you put into something. There are so many situations in life where, if something goes wrong, we just shrug it off; there's not much reason to overthink and check and obsess and worry. But, I mean, life is probabilistic, not deterministic. As an example, when I'm thinking about my travel plans, I'm always amazed that my friends expect things to actually go the way they're supposed to.
01;01;29;00 - 01;01;49;20
Cassie Kozyrkov
Right? I'm amazed by this. It blows my mind. I'll have been seeing fifty-plus possibilities of how things could go, adding time, subtracting time, which levers I can press, which ones I can't, how I can set myself up with more optionality. I just do that like breathing. And some people are like,
01;01;49;22 - 01;02;16;22
Cassie Kozyrkov
"The flight's late?!", surprised that this kind of thing happens. We're just going to have to start applying a little bit of this statistician's mindset to our tools. Same thing out there: we have to start thinking, let's imagine that mistakes are possible. Let's just really hold that thought. Sometimes it will do something weird.
01;02;16;24 - 01;02;48;16
Cassie Kozyrkov
Let's just imagine that that's true. Where can we apply this so that it's fine if that's the case? Maybe that's subtle, maybe it's very obvious, I don't know for listeners, but I think there's so much framing around how we use AI that suggests we put it in first and then worry about the quality of it afterwards, instead.
01;02;48;18 - 01;03;13;21
Cassie Kozyrkov
So you have a fallible tool, just like you have a fallible human being. You know that humans are fallible. You begin by designing the guardrails and control structures before you even get there. You deploy the humans where it's acceptable to have mistakes, and, you know, sometimes mistakes happen. So you set everything up with the expectation that mistakes are possible.
01;03;13;24 - 01;03;23;03
Cassie Kozyrkov
And just do the same thing with machines. You don't have to deploy AI every which where.
01;03;23;05 - 01;03;32;18
Cassie Kozyrkov
It actually takes some thinking from leadership: let's say this thing completely messes up. Where in my business would that be fine?
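That leadership filter can be written down almost literally. A minimal sketch, with illustrative thresholds and categories, of sorting use cases by the reasonable worst case before deciding how much autonomy to grant:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    worst_case_cost: float  # rough cost of one bad output, in dollars
    reversible: bool        # can a human cheaply catch and undo a mistake?

def deployment_mode(uc: UseCase) -> str:
    """Map the stakes of a use case to how much autonomy the AI gets."""
    if uc.worst_case_cost < 1_000 and uc.reversible:
        return "autonomous: mistakes here are cheap and fixable"
    if uc.reversible:
        return "human-in-the-loop: review everything before it ships"
    return "hold: build guardrails and escalation paths before deploying"

for uc in (
    UseCase("drafting internal meeting notes", 50, True),
    UseCase("customer refund decisions", 5_000, True),
    UseCase("surgical guidance", 10_000_000, False),
):
    print(f"{uc.name} -> {deployment_mode(uc)}")
```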
01;03;32;20 - 01;03;57;02
Geoff Nielson
Yeah, it's really, really interesting. And I don't know if this is a word you like or hate, but I keep coming back to the word architecture, architecting: actually understanding the entire system here, the entire series of processes, and thinking intelligently about how to redesign it, you know, with governance, with guardrails, with an understanding of where the value is and what could probabilistically happen.
01;03;57;07 - 01;04;20;02
Geoff Nielson
And just, as you said before, actually doing the work here. My sense is that one of the bigger challenges for organizations is that when you're talking about this across the scale of an entire enterprise, we're talking about a transformation, right? And how do you do that?
01;04;20;02 - 01;04;36;08
Geoff Nielson
Like, how do you muster, I guess, the organizational will and also the political will? How do you let people make the right decisions, and who are those people, if I'm not asking too big a question with that?
01;04;36;08 - 01;04;45;18
Cassie Kozyrkov
Oh, well. Is that just one question, or, hmm, how many questions are in there? Quite a bunch.
01;04;45;22 - 01;04;47;17
Geoff Nielson
Take that wherever you want, Cassie.
01;04;47;18 - 01;05;20;28
Cassie Kozyrkov
You know, look, transformation is hard. It doesn't matter if it's AI transformation or any other kind. And I think the more technically minded you are, the more you're just like, oh, it'll transform: you'll install the latest version and everything will be great. The more you've spent time with people, the more you understand that organizational will and political will are very real beasts that are very hard to wrangle.
01;05;21;01 - 01;06;00;21
Cassie Kozyrkov
If there were a magic spell that would make you win at that every time, that would be the number one bestseller for the rest of time. Unfortunately, it is exactly that tricky, because any time you have something working a particular way, there is a whole ecosystem that it feeds. And when you disrupt that ecosystem, you have to compassionately manage that change and figure out what's going to happen with all the various people and their skills and their relationships and their knowledge.
01;06;00;23 - 01;06;28;22
Cassie Kozyrkov
And we are a species that creates technology, builds upon its technology, and makes more interesting technology, starting with some sticks we rubbed together all the way through to now. We've somehow managed that change, and occasionally it hasn't gone great. But it takes good leadership, and good leadership, again, if there were just a formula, we could teach it to everyone.
01;06;28;22 - 01;06;53;19
Cassie Kozyrkov
Wouldn't the world be beautiful? But, you know, that's not how this works. What we have to understand, I guess, is that the human side of it is going to be at least as difficult as the technological side. And then, to pull back something you said way at the beginning about whether AI readiness means preparing your data,
01;06;53;25 - 01;07;21;27
Cassie Kozyrkov
you raised that, and now you've also said architecture would be knowing how all the systems work. I want to take these two concepts and combine them with another concept, to truly make this whole thing sound dismal, so that leaders know why that report says what it says. There are different channels of AI, as I like to think of them.
01;07;21;29 - 01;07;43;14
Cassie Kozyrkov
Sometimes it's easy to think you're on one channel, having a conversation with someone else who's on that same channel, and actually you're talking completely past each other. So, to come back to basics: there's the AI theory stuff. That's your researchers publishing papers. If you're an executive and you need that, you already know you need that.
01;07;43;21 - 01;08;09;28
Cassie Kozyrkov
If you don't know why you need that, skip it until you do. Then there are a few other channels, and you might be on one of them: data, generative AI, or agents. The data channel is what we've been doing and calling AI for a long time; that's machine learning, where we find patterns in data and turn them into recipes that a machine will follow.
01;08;10;01 - 01;08;29;24
Cassie Kozyrkov
A substitute for human-written code, human-written instructions. What we're doing is essentially expressing our wishes with data, with examples instead of instructions. And let's ask ourselves why. Why would we do this? Instructions are how we get control: if we know what instructions we wrote down, we know exactly what's going to happen next.
01;08;29;26 - 01;08;49;12
Cassie Kozyrkov
We get a nice deterministic solution. Why, for goodness' sake, would we give that up? It sounds mad. When it's low-stakes and it doesn't touch anyone, we can do what we like. But when it's important, when the business relies on it, when we need return on investment,
01;08;49;15 - 01;09;10;23
Cassie Kozyrkov
the reason we use data for automation is that we cannot come up with the instructions. Something about the instructions is hard. If it's just that we were wrong about them, and we checked, and we did some science, and now we can write them out again, that's not AI. That's good research. That's nice science.
01;09;10;25 - 01;09;34;27
Cassie Kozyrkov
If we are physically incapable of writing those instructions, that's where you use data. And the reason we're physically incapable is that the solution to our complex problem is so complex it doesn't fit in our heads. So even on that data channel, what we see is that data gives us the gift of memory.
01;09;35;00 - 01;09;59;25
Cassie Kozyrkov
That is what we are doing: we need the gift of memory to deal with complexity. We need machine memory for vast complexity that overwhelms humans, that makes us unable to write down instructions. So the gift of memory is a powerful thing. But we also know that we sign up for everything complexity brings, all its opportunities and all its threats, including that when things are complex, you don't know how they're going to go wrong.
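The instructions-versus-examples distinction is easy to see side by side. A toy sketch, assuming scikit-learn is installed and using made-up data: the same task solved with a hand-written rule (deterministic, auditable) and with wishes expressed as labeled examples (learned, probabilistic).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Option 1: instructions. Deterministic and fully auditable -- usable
# whenever the rule actually fits in your head.
def is_urgent_by_rule(ticket: str) -> bool:
    return any(word in ticket.lower() for word in ("outage", "down", "urgent"))

# Option 2: wishes expressed as examples. We hand over labeled data and
# let the machine find the recipe we couldn't write down ourselves.
tickets = ["the site is down", "please reset my password",
           "urgent: payments are failing", "how do I export a report"]
labels = [1, 0, 1, 0]  # 1 = urgent

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(tickets), labels)

new_ticket = "checkout is down again"
print(is_urgent_by_rule(new_ticket))                  # rule says: True
print(model.predict(vec.transform([new_ticket]))[0])  # model says: 1
```

On a problem this small the rule wins; the learned option earns its keep only when the real recipe is too complex for anyone to write down, which is exactly Cassie's point.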
01;09;59;28 - 01;10;45;07
Cassie Kozyrkov
So that's why you start to think about paranoid things like control structures and security guardrails and all of this. Generative AI, the next channel, is when someone else, not you, someone else, applies that automation-with-data principle to language, or video, or physics, or something sensory like that. Now, focusing on language, because that's the one I think is really valuable for folks to connect with: language is the universal interface for human collaboration.
01;10;45;09 - 01;11;10;03
Cassie Kozyrkov
It's how we collaborate with one another. So if we solve language understanding, any of us can communicate with the machine. Companies like Google, Anthropic, OpenAI, the providers of the foundation models, took the paradigm from the previous channel and applied it to solving language. And then they offer tools to developers and to end users. It's like: language,
01;11;10;03 - 01;11;29;09
Cassie Kozyrkov
here you go. And on this channel is all the "what could we do with language if we could all speak to machines without learning some unnatural language like Python?" First, what could we ask? So this is the gift of language, and it sits on top of the gift of memory. Language is pretty complex. What could possibly go wrong?
01;11;29;09 - 01;11;52;20
Cassie Kozyrkov
We're sitting on something built on data; there's complexity here. What could possibly go wrong with language? Well, our unnatural languages are pretty stiff and constrained, but at least we know what they're going to do. Our natural languages, our mother tongues, are filled with ambiguity. Half the time we don't know what we're saying. So when we automate that, and we're not watching, and we're not our own human in the loop, it will act based on, you know, something that sounded good at the time.
01;11;52;26 - 01;12;20;19
Cassie Kozyrkov
That's your genie story, with a potentially very unskilled wisher not even knowing what they're saying. At scale, that's kind of terrifying. And then the agentic piece: that is the gift of action, on top of the gifts of language and memory. And the gift of action is only a gift if the action is good and useful and makes your life better as opposed to worse.
01;12;20;21 - 01;12;45;28
Cassie Kozyrkov
And in the agentic paradigm, you realize that there was a very, very important piece in that genie story. It's not just the genie and the wisher; it's also the lamp. That lamp is really important. What is the actual control structure and security? How do you prevent an unauthorized wisher from making requests of a powerful genie? How do you deal with all this stuff?
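The lamp translates almost directly into code: an authorization and audit layer that sits between any wisher, human or agent, and a powerful action. A minimal sketch, with illustrative roles and actions:

```python
# Illustrative role-to-action permissions; a real lamp would also cover
# rate limits, spend caps, and approval workflows.
ALLOWED = {
    "analyst": {"read_report"},
    "manager": {"read_report", "issue_refund"},
}

AUDIT_LOG: list[str] = []

def lamp(wisher: str, role: str, action: str) -> str:
    """Grant the wish only if this wisher's role permits the action."""
    if action not in ALLOWED.get(role, set()):
        AUDIT_LOG.append(f"DENIED {wisher} ({role}) -> {action}")
        raise PermissionError(f"{role} may not {action}")
    AUDIT_LOG.append(f"GRANTED {wisher} ({role}) -> {action}")
    return f"executing {action}"

print(lamp("agent-42", "manager", "issue_refund"))
try:
    lamp("agent-17", "analyst", "issue_refund")  # the unauthorized wisher
except PermissionError as err:
    print("blocked:", err)
print(AUDIT_LOG)
```

Note that the lamp never inspects the genie at all: it constrains the wisher, which is the piece Cassie argues everyone skips when they talk only about the model.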
01;12;46;00 - 01;13;08;17
Cassie Kozyrkov
So that's what we've got now. You're like, what the hell, where the hell is she going? I'm bringing this back. First: you might not need anything to do with getting your data in order to do AI. That might not be part of your AI infrastructure. You might just be wanting to ride the wave of what it means to be able to play with language and plug language understanding into your business processes.
01;13;08;19 - 01;13;28;18
Cassie Kozyrkov
That gives you rather powerful ways of translating things, all kinds of things, if you have the creativity to see it. Or you could plug that into the gift of action as well and have autonomous actions taken, but be very careful about what kind of wisher and what kind of lamp you're dealing with. And if everybody's just talking about the genie, you're missing a big, big piece.
01;13;28;21 - 01;13;48;13
Cassie Kozyrkov
But all of this stuff, and this is why, Geoff, all of this stuff is built on top of complexity. So now you can't say things like, I'm going to be an architect and I'm going to know all of it. There's going to be so much you're not going to know.
01;13;48;19 - 01;14;16;21
Cassie Kozyrkov
All of these engines are complexity-based. That's why we start talking about this being probabilistic. Some of it might even be deterministic, but if it's too complex for the human mind to understand, it may as well be probabilistic, and generative AI is literally probabilistic. So it's very, very complicated. And these complicated systems that can go wrong in complicated ways are going to be put in.
01;14;16;23 - 01;14;43;13
Cassie Kozyrkov
With one complicated system, you can try to understand all the architecture around it. But once you've got a bunch of these in, plus a bunch of augmented employees, workers who are all augmented in their own way, that is a lot of complexity, all moving around. So this idea that you could understand all of it, that doesn't compute.
01;14;43;15 - 01;15;11;18
Cassie Kozyrkov
What you're going to have to get really, really good at is understanding your piece and the edges where it connects to the universe. And then it's going to be a game of trust: trusting other humans who are, for lack of a better way of putting it, tending their own gardens and how those connect to yours, and still being tolerant that there can be errors in the linkages. And the more it could hurt people, the stronger,
01;15;11;18 - 01;15;29;11
Cassie Kozyrkov
again, the lamp, the guardrails, and the rest of it need to be. So that's what it comes down to. The wisher is much more important than the genie, and the lamp might be more important than both of them, if the lamp just shuts everything down when we need it to.
01;15;29;14 - 01;15;46;27
Geoff Nielson
So first of all, I feel like I have to say thank you for landing that plane, because that one was a wild ride, and you had me for a minute, and then, oh, there it is, we've landed the plane. And I love the story that you told, and I love the way you framed the role of complexity and some of the implications.
01;15;47;00 - 01;16;08;17
Geoff Nielson
But I want to maybe take that up a level, and, you know, apologies, because I feel like I'm going to give you another multi-part question. When you think about this world with increased complexity, and we can start with an organization, but when we've got increased complexity where people can't see it all anymore, they're tending to their own garden,
01;16;08;20 - 01;16;31;20
Geoff Nielson
and, as you said, there are control implications here, right? We're giving up control in the name of being able to do more, or to tackle more complexity. What are the implications of that in terms of organizational leadership, in the scope of the future of work, the future of the organization?
01;16;31;22 - 01;16;38;09
Geoff Nielson
And then, maybe if you're feeling bold, what are some of the implications of that more broadly, do you think, societally, or outside of the organization?
01;16;38;15 - 01;16;59;24
Cassie Kozyrkov
Well, I mean, look, that cuts to the heart of it. The idea is that we will eventually, as a species, and we do already, if you think about it: how do you know that a skyscraper, when you go up to the 50th floor, how do you know that it all works?
01;16;59;26 - 01;17;31;00
Cassie Kozyrkov
I know nothing about this, but somehow there it is, right? We do end up having to collaborate with others who all hold their piece, their bit of something much greater than us. And occasionally there are mistakes and problems, and we really do all try our best. Well, not all of us; some of us don't try our best. We can talk about workslop any time you like,
01;17;31;02 - 01;17;59;16
Cassie Kozyrkov
the AI kind or the regular kind. But, you know, we try our best to put systems in place that limit the speed of the damage when things go wrong. And we do give up control over it. It's just that we are adding another order of magnitude, and it's going to be systems that are fundamentally probabilistic.
01;17;59;18 - 01;18;31;12
Cassie Kozyrkov
So that's why I said earlier: as a society, we are going to have to be more tolerant of mistakes, more aware of the possibility of mistakes, less gullible. I think we're going to go through some growing pains, for example now with propaganda applications, people who don't know that a video could be entirely fake and AI-generated; it might look real, but it's not evidence of anything.
01;18;31;12 - 01;19;07;23
Cassie Kozyrkov
Right? We just need to grow as a species to realize that we exist in a future, well, a present, where that's technologically possible, and so that is no longer our way of forming trust. Maybe trust will have to move back towards trusting humans, trust in individuals: individuals that we know, individuals that we know do their jobs well and take care of their piece of the complex sphere they're in charge of, knowing that there is no perfect way to do this.
01;19;07;23 - 01;19;42;09
Cassie Kozyrkov
But, you know, the one thing I just keep coming back to, and I hate making predictions, this is a statistician talking, so you've got a lovely irony, but I hate making predictions about the future, particularly when so much is changing. What can you really predict? How do we know what the world will be like? But my one controversial prediction that I feel pretty okay with is that we're all going to have more to do, not less, just because of that sheer complexity and all the different ways everything can fit together, right?
01;19;42;12 - 01;20;10;28
Cassie Kozyrkov
I mean, imagine a future where all the kids are on individualized learning and education, where workers are fundamentally chimeric and can pull in new skills with AI as needed and put those skills down, you know, rattling the concept of job ladders, bursting the boundaries of their roles, and taking on new challenges each week as needed.
01;20;10;28 - 01;20;37;22
Cassie Kozyrkov
And then you have technologies that are going to put in all kinds of probabilistic things, again, probabilistic automations at scale. All of these pieces moving around: how do we coordinate? How do we make sense of it? How do we plan? How do we regulate?
01;20;37;24 - 01;21;04;22
Cassie Kozyrkov
There is going to be so much. You know what I'm trying to say? The duck's little feet under the water are going to have to be paddling very, very quickly to keep up with this. I don't see us all going back to our natural state of grace, naked on the savanna, just pulling a drop-down menu from the sky, you know, back to our hunter-gatherer roots.
01;21;04;24 - 01;21;08;12
Cassie Kozyrkov
We're just going to have a lot of work, a lot of work, ahead of us.
01;21;08;14 - 01;21;27;01
Geoff Nielson
How do you see that work being distributed? Because there's a point of fear that I hear a lot about these days, and I love, by the way, that you brought the conversation to one about trust, because I think that's kind of the operative word in all of this. How do you see that orchestration, that work, that planning being distributed?
01;21;27;01 - 01;21;56;04
Geoff Nielson
Because the fear I hear about is that this is going to be overly concentrated in big tech, or in a few firms that, through the platforms they operate, have outsized power, and, you know, people become less important and less valuable in this new economy. Is that a concern to you, or do you see it being more decentralized than that?
01;21;56;07 - 01;22;24;17
Cassie Kozyrkov
Well, I guess when I put my economist hat on, I think about barriers to entry and which parts will have high barriers. The hardware and hardware-innovation parts, I think, have very high barriers, and I'm not sure whether it's going to be possible for many, many players to compete.
01;22;24;19 - 01;23;02;15
Cassie Kozyrkov
Right, and that's certainly a centralizing force. But then if we think about what keeps consumers loyal to, let's say, platforms, apps, et cetera, it's often laziness. It's not that it's the best thing; it's just that maybe it's hard to move to another one. Maybe you've given all your data away already and the system knows you really, really well,
01;23;02;15 - 01;23;22;23
Cassie Kozyrkov
so a competing product would be difficult to stand up, right? You could see those types of barriers and those types of capture. But technology is also going to make everything a little bit easier for everybody. And
01;23;22;25 - 01;24;03;23
Cassie Kozyrkov
I don't know to what extent that will counter natural inertia. But I do think it will be easier for anybody to make the attempt, at least in the digital space, to offer something nuanced and personalized, and to be matched. Right now we've got pretty bad matching. Like, you know, maybe somebody is designing the most perfect custom t-shirts for you, Geoff, and I don't know where they might be living.
01;24;03;23 - 01;24;37;00
Cassie Kozyrkov
Maybe they're in Jakarta. There's not a great system to connect you with them and with their art and their very individual offering to you. I imagine that the providers of that matchmaking, and that will be a high-barrier-to-entry thing, will make a lot of money. But all entrants into that, I think, have an opportunity to participate in interesting ways in the economy that they wouldn't have had before.
01;24;37;03 - 01;25;05;10
Cassie Kozyrkov
I think there will be a lot more very small businesses that start because of this, with very few people who can do some unique, interesting things. And while we will see more personalization from behemoth corporations, again, managing all the humans, and the humans being clear on what precisely they provide, there will be a sort of, how shall I say this?
01;25;05;10 - 01;25;09;00
Cassie Kozyrkov
I'm trying to find the word.
01;25;09;02 - 01;25;36;08
Cassie Kozyrkov
It will be hard for the humans to track and take responsibility for very different or very custom offerings, even if the system itself could do that. So I still expect that at the edges you get the interesting stuff, and if the barriers are lower there, you could have a lot more entrants at the edges. If you have good matchmaking, you can have a very interesting economy of digital and physical goods and services.
01;25;36;10 - 01;25;45;17
Geoff Nielson
Cassie, this has been super interesting and super insightful. I really appreciate everything you've shared today. You've certainly given me a lot to think about. Thanks so much for joining me.
01;25;45;19 - 01;25;47;03
Cassie Kozyrkov
Thank you for having me.
01;25;47;05 - 01;26;12;18
Geoff Nielson
If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe!