Our Guest Walter Pasquarelli Discusses
Robots, AI Ethics, and the End of Thinking: Top Researcher on the State of AI in 2026
What does the future of AI really look like as we head toward 2026, beyond the hype, headlines, and fear-driven narratives?
On this episode of Digital Disruption, we’re joined by internationally recognized advisor, speaker, and researcher on AI strategy, Walter Pasquarelli.
Walter is one of the world’s leading voices on ethical and strategic AI. He has advised governments, global institutions, and leading technology companies on AI governance, policy, and readiness, and brings a grounded perspective on what it really takes to lead in the age of artificial intelligence.
Walter joins Geoff to unpack what’s actually happening with artificial intelligence and what most media coverage gets wrong. He brings a 360-degree view of AI adoption and how AI is moving out of boardrooms and into everyday life, reshaping how people think, decide, work, and relate to technology.
This conversation dives into:
- Why AI adoption is accelerating among consumers, not just enterprises.
- The rise of AI companions, humanoid robots, and everyday AI use.
- The real risks behind automation anxiety, data privacy, and emotional dependency.
- What “AI psychosis” is and why it’s a growing concern.
- Why AI literacy matters more than fear, hype, or blind regulation.
- How AI is reshaping work, leadership, and global competitiveness.
00;00;00;19 - 00;00;23;01
Geoff Nielson
Hey everyone! I'm super excited to be sitting down with Walter Pasquarelli. He's a globally recognized expert on the ethical use of AI and AI strategy. What I love about Walter is not just that he's a former AI leader at The Economist, a research partner at Cambridge, and an advisor to Google, Meta, Microsoft, and Intel. It's that he brings a super practical mindset to AI adoption.
00;00;23;07 - 00;00;49;21
Geoff Nielson
And has a 360-degree view of how the technology is being used by businesses, people, and governments. Walter is a deep skeptic of a lot of mainstream journalism about AI, and is putting his money where his mouth is, conducting a substantial amount of his own research. I want to know what the media is getting wrong, what's really going on, and what we need to understand about AI adoption and consumption if we're going to harness the power of this technology.
00;00;49;24 - 00;00;53;22
Geoff Nielson
Let's find out.
00;00;53;24 - 00;01;14;06
Geoff Nielson
Hey, Walter, thanks so much for joining us today. Super happy to have you here. Maybe just to get things started: as we look down the barrel at 2026, what's on your radar in terms of your outlook around AI and the impact it's going to have, both in terms of the technology itself and the broader, you know, kind of societal and economic outlook?
00;01;14;13 - 00;01;35;15
Walter Pasquarelli
Well, thank you so much, Geoff. Really excited to be joining you here today. So when it comes to the development of artificial intelligence, I think there are really three main areas for the upcoming year that I would be looking out for. Now, the number one thing is the capabilities of AI in absolute terms.
00;01;35;15 - 00;02;09;01
Walter Pasquarelli
So think of the way that it's able to make calculations, the precision of its outputs, the risk of hallucinations decreasing. Given the advancements that the models are making, I think we should be able to keep observing that, just as we have been able to observe it throughout this year. But maybe another point to be made is that over the past years, we always looked at AI as something that could be used by enterprises or large organizations, or even governments.
00;02;09;03 - 00;02;45;06
Walter Pasquarelli
But I feel that really one of the areas that has been historically most overlooked is the fact that the use of artificial intelligence has really shifted not only from boardrooms and government offices, but really into people's bedrooms, into people's living rooms, into everyday uses by ordinary citizens. And so we have been able to observe this year that people started using artificial intelligence more and more to ask it personal questions, to bounce ideas off it, to potentially debate some questions or some arguments that we have with people who are close to us.
00;02;45;08 - 00;03;16;27
Walter Pasquarelli
And so this area, the interactivity of artificial intelligence systems, is one thing that, in part due to the desire of people to be able to use them more, but also because technology firms are seeing a real business case for it, I think we should be able to observe increasingly over the next year. And perhaps an area that I think will really come to fruition in 2026 is, of course, the field of humanoid robots.
00;03;17;00 - 00;03;41;20
Walter Pasquarelli
And this is particularly interesting because so far, artificial intelligence has been something that we interacted with via our screens, via our laptops, typically also via our smartphones, and something that we interacted with essentially through chatbots, maybe in some cases through avatars. But we have been able to see, especially over the past years, that there's really been a wide and very steep acceleration of investment into humanoid robots.
00;03;41;22 - 00;03;58;14
Walter Pasquarelli
First we created the brain; now we're creating the body. And I think we should expect to see over the next year that artificial intelligence systems get increasingly integrated into hardware, supporting us in our daily lives but also really integrating into our economy.
00;03;58;16 - 00;04;17;17
Geoff Nielson
So let's talk for a minute about the humanoid robot piece. That's a really interesting one to me. And it makes complete sense that that's sort of the next frontier here. I love the analogy of the brain and the body. You know, as we look out over the next handful of months, where would you expect the frontiers of this space to be?
00;04;17;19 - 00;04;29;08
Geoff Nielson
Is it going to be in specific industries? Is it businesses leading this? Is it going to, you know, make its way into people's personal lives? Where should we be watching for these frontiers?
00;04;29;11 - 00;04;48;25
Walter Pasquarelli
Yeah. So I think typically when people think about humanoid robots and robotics, the first thing that pops into mind is obviously industrial robotics. And that's something that isn't really new. In fact, even just a decade ago, people used to equate it with automation. So think of robots that
00;04;48;27 - 00;05;09;17
Walter Pasquarelli
you could use, for instance, in warehouses, Amazon typically being a front runner in that, that help us segment and order parcels in a better way, and maybe even robotics that is effectively more precise in how it handles particular manufacturing processes. And that's something that Asia in particular has been leading, China especially.
00;05;09;19 - 00;05;35;05
Walter Pasquarelli
And I think this is obviously going to be an area where, by integrating artificial intelligence systems, particularly computer vision, we can expect this to accelerate. But again, even though it sounds very futuristic, it's not necessarily something that is novel in its entirety. Perhaps a few other areas that I think are interesting are, of course, the personal uses of that.
00;05;35;07 - 00;05;54;23
Walter Pasquarelli
And we've seen there have been the Tesla robots. Another one which was making a big splash on social media was the humanoid robot by 1X. But perhaps one of the companies that is, let's say, little known in mainstream discussion, but has attracted major, major investment from all the leading technology and other firms,
00;05;54;26 - 00;06;16;21
Walter Pasquarelli
is another firm called Figure AI. And there we've seen some demos of people purchasing these humanoid robots to help them essentially in their everyday lives. So think of it as someone who basically lives inside your home and can support you with doing the dishes or with other things that you don't enjoy.
00;06;16;23 - 00;06;36;29
Walter Pasquarelli
And I think that obviously the capabilities of these humanoids aren't fully there yet. There are some claims, especially by the providers of these firms, that it's actually going to be able to do the dishes; it will have to learn, and there's maybe some data collection that still needs to take place. But I think it's something that will effectively be able to support you.
00;06;37;01 - 00;06;57;26
Walter Pasquarelli
And then there is also another element which I think is perhaps still a little bit under the radar, and that's the element of prestige. Given the price of these humanoid robots, which I think is somewhere around $20,000 to $30,000 per piece, approximately, it's something that ordinary users can obviously not afford, but wealthy people can.
00;06;57;26 - 00;07;33;23
Walter Pasquarelli
And I can see a world in which this becomes almost like a new status symbol, similar to what happened with very advanced smartphones maybe 15 years ago. There is then, obviously, the integration into economies, like we could see in drone delivery services, potentially also in other industries out there, military systems potentially being one. Obviously, for these tools to be reliable, we need to be 100% certain, through testing, that they can actually work, specifically in military applications or high-stakes scenarios.
00;07;33;29 - 00;08;05;28
Walter Pasquarelli
So those might be some areas where there could be some experimentation with it. The regulatory landscape is still pretty immature, so there are a lot of considerations around the policy and governance of these tools that need to be put in place. But other than that, I think this will be another very interesting area of investment. If we wanted to expand this into adjacent fields, where we're maybe not looking directly at humanoid robotics, then we're also talking about self-driving cars and the predictions that were in place maybe around six to seven years ago.
00;08;05;28 - 00;08;46;20
Walter Pasquarelli
The prediction was that the proliferation and mainstreaming of self-driving vehicles was something that should be expected by maybe 2030. Possibly the timeline has shifted forward a bit, in the sense that there are some leading companies, one being Waymo, another being, for instance, Tesla, doing experimentation in San Francisco, and of course Uber also being one that is putting a significant amount of investment into that. It will probably start out in the capital cities, where it will become increasingly frequent, but as we become better and better at the mapping of streets, of towns and cities.
00;08;46;23 - 00;08;55;00
Walter Pasquarelli
That's something we should slowly be able to see more and more of over the upcoming year.
00;08;55;03 - 00;09;22;25
Geoff Nielson
If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe!
00;09;22;27 - 00;09;43;06
Geoff Nielson
I'm glad you brought up self-driving cars, because that's something on my radar as well; it seems undeniable that the pace of change there is increasing. But you did something really interesting, which is you categorized it in the broader category of robotics. And we were talking about humanoid robotics. And I'm curious, Walter, do you expect humanoid?
00;09;43;06 - 00;10;05;18
Geoff Nielson
Humanoid is sort of a constricting term, right? It's really one specific type of robotics. Do you believe that the humanoid piece is going to outpace other types of, you know, industrial, military, consumer robotics? Or do you think the tide is going to rise equally across a number of different form factors?
00;10;05;20 - 00;10;35;14
Walter Pasquarelli
I mean, of course, that's a prediction that would need to be assessed against current trends: for example, geopolitical security, macroeconomic environments, general investment trends. There is, of course, a big business case for developing these humanoid robots, in the sense that they can truly help us fulfill some tasks that might be more costly if performed by humans, specifically across certain kinds of industries, or maybe even in areas where there are shortages of workers.
00;10;35;14 - 00;11;02;08
Walter Pasquarelli
Take, for example, care workers, right? It should be said that when we talk about humanoid robots, however, people think immediately of a robot, right? The thing that looks like a human, that moves like a human, that gives us the feeling that we're really talking to a human-like presence in the room. But as a matter of fact, us humans, we're not necessarily the most efficient physical form for doing various tasks, right?
00;11;02;08 - 00;11;26;00
Walter Pasquarelli
I mean, we're kind of a general-purpose species, if you will. We're not super strong like bears or tigers, and we also cannot fly, but we can develop the things that allow us to do all of these kinds of things. And so I think that the humanoid robotics market, apart from potentially the household ones, which, as I said, will still be quite expensive at around $20,000 to $30,000.
00;11;26;03 - 00;11;54;24
Walter Pasquarelli
I think we will probably be able to witness increasingly specialized humanoid robotic forms that will look a particular kind of way: maybe some with stronger legs, maybe some walking on all fours, maybe some looking more like us. So it depends a little bit on the use case. Part of the reason why some companies develop them to look a bit like us, or a bit cutesy, like a nice, friendly cat or a nice, friendly dog, or something that looks a bit like Wall-E
00;11;54;24 - 00;12;23;07
Walter Pasquarelli
from the Disney movie, is really so that it increases socialization and acceptance, so that people are willing to embrace them more in their everyday lives and not feel that there's the Terminator walking among us, to mention some of the more fictional scenarios. The key thing is, of course, that the markets will react, on the one hand, to what's an immediate priority, some of those being, as I said, geopolitical and security concerns.
00;12;23;09 - 00;12;52;24
Walter Pasquarelli
And we see already now that in particular conflict areas, drones are, of course, one of the key determining factors of the outcomes of armed conflicts. But then there's also the more long-termism element there, where long-term investments are going in and research is being conducted. And that's more about the ones that could potentially address some areas within the economies that can still be essentially financially exploited, irrespective of whether we think the outcome is positive or negative.
00;12;52;27 - 00;13;17;28
Geoff Nielson
So there's a lot of really interesting interrelated factors there. But the one that I want to pull on is, I guess, consumer or individual sentiment toward these robots, or toward this advanced technology at all. And you mentioned, you know, one of the trends you're seeing is an increase in consumer rather than enterprise uptake of AI.
00;13;17;28 - 00;13;45;01
Geoff Nielson
And, you know, just sort of talking through it with the robotics, we're looking at more of a consumer market. On the other hand, a lot of people are exposed to the military aspect of this. They're worried about, you know, AI or robots taking their jobs. How do you see consumer sentiment toward AI and robotics changing over the next year? Do you think we're trending toward people becoming more accepting of it?
00;13;45;01 - 00;13;49;21
Geoff Nielson
Do you think we're pushing toward a greater backlash? How do you expect this to evolve?
00;13;49;27 - 00;14;23;17
Walter Pasquarelli
Yeah, I think this is a fascinating question, and potentially also one where the answer is quite paradoxical, because the issue of automation anxiety, both among professionals and among ordinary people, is truly real. It's not just something that is imagined. And I think perhaps one point that is also important to address here is the ethics-of-AI narrative, in the sense that I see a lot of conversations, around some engagements that I do, in general corporate speak that says: oh, guys, never worry about that.
00;14;23;17 - 00;14;44;06
Walter Pasquarelli
It's always going to be fine; the human touch will always stay relevant. But as a matter of fact, the evidence that we collect today says that there are, in fact, some particular industries, and some particular roles and vacancies especially, that will be strongly impacted. And there is no sugarcoating that, precisely because of the economics behind it.
00;14;44;06 - 00;15;10;07
Walter Pasquarelli
So as a matter of fact, I think it's more ethical to be able to discuss these things upfront, rather than just mention some studies that might argue otherwise but might not have a very strong methodological backbone. Now, as far as what people think about AI, there is of course, as I just mentioned, the automation anxiety; there is a fear of a superintelligent future.
00;15;10;07 - 00;15;31;15
Walter Pasquarelli
People think, oh, will the Terminator be coming along? And those things are potentially seen as something more futuristic, but they're definitely out there. And then there are, of course, the people who are being asked in surveys to what extent they actually use artificial intelligence, who effectively misreport their actual usage to their own employers and to their own bosses.
00;15;31;15 - 00;16;04;21
Walter Pasquarelli
Because on the one hand, they might be prohibited from using AI systems, and on the other hand, there's also an element of social desirability: people don't want to look like they're outsourcing tasks that they should be doing, or assume they should be doing, themselves to some of these technologies. But based on some of the studies that I've conducted and some of the surveys that we've done, there is actually very intense usage of some of these tools, not only as a one-off, not only as experimentation for light entertainment, but really for some of the more critical areas and domains of people's lives.
00;16;04;24 - 00;16;30;03
Walter Pasquarelli
Let me give you a few examples here. For instance, in one survey that we conducted, we asked about AI companions, so essentially tools that are created with the intention of developing an interactive and potentially even an emotional connection with people. And a lot of the time these tools are used not only for some conversation, but also for making decisions or getting advice in areas that are really high stakes.
00;16;30;05 - 00;16;58;07
Walter Pasquarelli
So one question we asked people is, for instance, to what extent have you, at least once, consulted an AI companion for getting information about finances, potentially about health advice, potentially about relationship advice when you were in conflict with a friend or when you were dating, and also about political information. And the answer we got on that one question was that about 60 to 70%, across those domains, had done it at least once in the past three months.
00;16;58;09 - 00;17;31;22
Walter Pasquarelli
What we then asked was to what extent have you used an AI companion as a substitute, taking advice from an AI companion over the advice of a human expert: again, a financial advisor, a doctor, a therapist, or a trusted friend, or maybe even the media. And there the numbers were at about 30% for at least once over the past three months, slightly lower when it came to regular usage, which we defined as between five and ten times in the past three months.
00;17;31;24 - 00;17;59;05
Walter Pasquarelli
So what this tells us is essentially two things. On the one hand, while we do have these very legitimate concerns about artificial intelligence, we are, as a matter of fact, still using these tools because of the convenience they provide and because they give us, essentially for free, good-enough output in some cases. And I'm not going into the unintended negative consequences just yet.
00;17;59;08 - 00;18;30;02
Walter Pasquarelli
But a lot of people still really see an opportunity there in being able to use these tools as a way to navigate their lives in a way that is potentially more helpful and more seamless. What this means as well, at a higher level, is that we're really seeing an increasing shift of expertise, potentially even an ascription of authority, to some of these AI systems, to a degree, by everyday people.
00;18;30;02 - 00;18;51;01
Walter Pasquarelli
And it's also no accident that the respondents with the highest incidence of AI usage and AI substitution were the ones between 25 and 34 years old. So it's the ones that are at a point in life where maybe they've just left university, maybe they've just entered new jobs, and they're ambitious.
00;18;51;01 - 00;19;06;01
Walter Pasquarelli
They want to get ahead in their careers. Maybe they're just getting married. And so those are the ones that rely on these tools more, because they use them as a source of authority in a society, and in an age, in which the traditional sources of expertise and authority are increasingly fragmented.
00;19;06;03 - 00;19;29;10
Geoff Nielson
So it's really interesting. And, you know, all of that points to the idea that there's enough net benefit for consumers of this, as individuals and in their own lives, that this will continue. And as people try it more, they'll be willing to rely on it more, delegate more to it, substitute traditional sources of authority more with AI.
00;19;29;12 - 00;19;58;10
Geoff Nielson
And I'm curious, from your perspective, Walter, is that a net benefit to society? It strikes me that if this trend continues, there have to be some risks as well, right? Because you're now taking decision making and judgment and influence out of the hands of humans and putting them in algorithms that have owners that are, you know, corporations or organizations.
00;19;58;10 - 00;20;16;25
Geoff Nielson
And so, what do you see as some of those risks? And are there any things that we, societally, or in terms of our political organizations, need to be aware of to make sure that this is a smooth transition and doesn't tip into something more dystopian?
00;20;16;28 - 00;20;51;03
Walter Pasquarelli
Yeah. So there's a couple of issues there, one of them being about power concentration. And of course, as you just mentioned, with these AI systems, a lot of people report that they feel, when they talk to some of these tools, that there's a persona, not a person, but something that they are interacting with, in part because these systems have been heavily anthropomorphized, or given qualities that feel humanlike, that make us feel good, essentially, a bit like social media, which was effectively designed to be very addictive.
00;20;51;05 - 00;21;20;24
Walter Pasquarelli
There is also an element of these AI tools trying to essentially draw us in, and the more data we provide to them, it's not that it just stays safe on our laptops; it's effectively uploaded into these models to train them again, to make them smarter and more tailored. So there is, again, the usual exchange that we see with any online service that we use: we get some services, but we provide some data.
00;21;20;26 - 00;21;53;00
Walter Pasquarelli
And particularly with AI, we see that people give them a lot of very, very sensitive data: about their health, about their dreams, about their fears, about all kinds of very intimate facts about themselves. And so that monopoly of power over users' data is real, and that is something that needs to be addressed strongly. Now, the other point that is directly related to that is, of course, about data privacy and potential data leakages, and who gets access to this data.
00;21;53;02 - 00;22;20;03
Walter Pasquarelli
So, and I don't do this myself, but just as an example, if I were to provide very confidential health records about myself because I want to have, maybe, Gemini or ChatGPT analyze them, the data, as I said, will be inside the model. And so there is a risk that other people, other actors beyond just the technology firms, can actually get access to the most sensitive information about me that there is.
00;22;20;05 - 00;22;49;26
Walter Pasquarelli
And that means as well that if a bad actor gets access to that leaked data, some potentially very serious harms can happen, particularly as there have been some announcements and considerations that these tools, like ChatGPT, will as a matter of fact now have ads. And that's substantial. It means that the privacy risks that we saw with social media, which have probably not been addressed effectively, will now come up again in a stronger, more intense fashion.
00;22;49;29 - 00;23;16;13
Walter Pasquarelli
Now, when it comes to personal use, there are also other elements there, and those are risks that I think we have not seen before. There have been cases, as I'm sure you might have heard, of, for example, teenagers who committed suicide after they had been speaking to some of these AI systems that effectively led them to self-harm.
00;23;16;16 - 00;23;35;22
Walter Pasquarelli
It should be said, of course, that these were people who were already suffering from depression. But because these systems are not empathetic, because they don't have societal values directly embedded in the same way, or the same gut feeling, the same actual empathy that a human has, we really have the risk of disasters being created there.
00;23;35;25 - 00;23;58;22
Walter Pasquarelli
That's something that some psychologists have called so-called AI psychosis, which I should add is a non-clinical term, but an observational term that is becoming more relevant: where effectively AI, because it wants to make us feel good, afterwards starts amplifying some of our beliefs, because it actually doesn't want to tell us that we're necessarily wrong.
00;23;58;24 - 00;24;38;00
Walter Pasquarelli
It probably can be corrected over the near-term future, but that is a feature of addictive systems: they actually try to reinforce our beliefs, and for that reason might not always be the right kind of outlet for us for voicing our emotions or asking for advice. And then there is, of course, the other point that I think is directly related to the intense usage that people make, be it in professional environments or in personal uses: the fact that over time, the more we ask these tools for advice, the less we use our own critical thinking, the more we effectively rely
00;24;38;00 - 00;25;10;09
Walter Pasquarelli
on them. The brain is a muscle, and if you don't use it, then it atrophies like any other muscle. If you work out a lot at the gym, you're going to become stronger; if you don't, then you're going to atrophy as well. And it's exactly the same with cognitive capabilities, for which there are also some studies that say that if we use these tools relentlessly, don't push ourselves, and give in to that convenience, it effectively leads us to being less able to activate those critical neurons that we need to make decisions by ourselves.
00;25;10;12 - 00;25;41;17
Walter Pasquarelli
Now, there's some nuance that needs to be provided here, especially for the cases that we've seen around AI companions, where, of course, the risks that have been reported on and that media outlets have discussed are risks that have had catastrophic, tragic endings. But there is also some evidence showing that using AI systems and companions the right way, in a therapeutic setting, particularly if combined with a human therapist, can actually help, especially for reducing mild cases of loneliness.
00;25;41;19 - 00;26;03;03
Walter Pasquarelli
It can also help for reducing mild cases of anxiety. And you will notice, Geoff, that I'm of course using the term mild. So it's not something that can substitute intensive therapeutic treatment, but from the evidence that we've seen, it can actually support people in cases where they might be spiraling, especially when they're also under other kinds of therapeutic treatment.
00;26;03;03 - 00;26;18;12
Walter Pasquarelli
So the key thing that we have to learn, and that I think we haven't done well with social media, is that we really need to teach people how to develop that AI literacy, how to discern between outputs, and how it can ultimately help us live a better life.
00;26;18;15 - 00;26;48;23
Geoff Nielson
I'm glad you brought up that last point, the teaching of people and the notion of AI literacy, because I was going to ask you: in light of this broad list of risks, everything from suicide to atrophy of critical thinking, how much of the path forward is better education on the part of consumers versus better regulation and governance of the owners of these tools?
00;26;48;23 - 00;27;11;12
Geoff Nielson
Because, you know, I can imagine a world where you say, hey, no one under the age of 16 or 18 is allowed to use large language models, similar to what we've seen starting to emerge in some countries with social media. I can certainly imagine a world where people are getting up on their soapboxes saying, don't use AI as a doctor, don't use AI as a therapist.
00;27;11;12 - 00;27;38;02
Geoff Nielson
I don't know how credible that would be or how much that would limit demand. And I can also see a world where there's a regulatory push on the Googles and the OpenAIs of the world to regulate and limit how interactions with these tools happen with individuals, and to start saying no to some requests around that.
00;27;38;05 - 00;27;44;04
Geoff Nielson
Which do you see as being the most fruitful? And what would you recommend, and not recommend, in that space?
00;27;44;05 - 00;28;11;25
Walter Pasquarelli
Yeah, there's not a single silver bullet here that can work. So if we were to say there is essentially, let's say three areas, one being regulation, another one being algorithmic controls directly implemented by the companies that for which regulation can push them to do that. And then literacy, those are all three areas which by themselves, as a standalone approach, are flawed in combination.
00;28;11;27 - 00;28;33;02
Walter Pasquarelli
In combination is where they can actually be most fruitful. However, it's not that simple, and I'll tell you why. The point about regulation is that, for the most part, regulation tends to be quite slow. A prime example of that is of course the EU AI Act, where we invested major, major resources in developing it.
00;28;33;02 - 00;28;55;18
Walter Pasquarelli
And we wanted to be the first. And now we've developed the first really standalone regulatory environment that is pan-European. And we can say, well, that's great, we've now done the job, right? It doesn't work like that. The problem with these kinds of regulatory approaches is that, first of all, the technology might develop in a way that is very unexpected.
00;28;55;18 - 00;29;20;11
Walter Pasquarelli
That's number one. Or, number two, use cases emerge that we did not forecast, AI companions being a prime example. AI companions are not really regulated by the EU AI Act or by other existing regulatory environments, because most of the time these systems are treated as products: we look at the infrastructure and we ask ourselves, is there any data bias?
00;29;20;11 - 00;29;46;02
Walter Pasquarelli
We ask ourselves, is there explainability, transparency, and to a degree control over those? Right. And those are all questions that are by themselves correct. But the impact of AI companions, from the latest study that I've conducted, is emotional. And how do you assess emotional impact, especially when users themselves give these systems that trust, and when users themselves volunteer some of their most intimate data?
00;29;46;02 - 00;30;09;00
Walter Pasquarelli
Right. So regulation is a step forward; it can help us move in the right direction. But of course, we also need to provide an environment that is flexible enough for policymakers and technology firms alike to be able to accelerate those kinds of provisions whenever it's needed. As far as technological control is concerned, area number two, it can obviously help.
00;30;09;00 - 00;30;35;22
Walter Pasquarelli
And I think that's necessary. For instance, one of the points that I think is critical, and where there have been some policy initiatives, one in California, another in New York, is that when an algorithm or an AI system spots that a human being might be at risk of suicide or engaging in suicidal ideation, it effectively stops and says: you need to get some help.
00;30;35;25 - 00;30;55;18
Walter Pasquarelli
This is a hotline; I think it would be a benefit for you to use it. In which case the system would then maybe scale back the support that it provides. But critically, that stopping, the ceasing of the spiraling of the AI psychosis I described earlier, is, I think, a critical element.
00;30;55;20 - 00;31;16;28
Walter Pasquarelli
The issue is that these controls can usually be circumvented. I think there was, a few weeks ago, a case where if you asked Claude or Gemini or ChatGPT something in the format of a poem, it would give you information that it was not allowed to give you before.
00;31;16;28 - 00;31;48;09
Walter Pasquarelli
And that's again a problem: you can circumvent them, you can jailbreak the models, essentially. Then the final point, AI literacy, is potentially the most sustainable one, but also, for the same reason that regulation is difficult to implement, one where we need constant work. AI literacy is something I'm personally a believer in. It means that we can have, for instance, governments put forward programs for the public on how to engage with AI systems.
00;31;48;09 - 00;32;11;03
Walter Pasquarelli
So: this is what it can do, this is what it cannot do, this is how you should be using it, this is what it will do to your data. And providing really tangible use cases and examples, because people will say, well, I don't know anything about data, what is data, what is personal information, I don't care, I've heard that a lot of times. It's just so that people really develop that gut feeling for what a good use of AI actually is.
00;32;11;05 - 00;32;33;07
Walter Pasquarelli
The problem, again, is that AI is developed one way today and another way tomorrow, and so it requires constant updating, and ideally starting in childhood, particularly in middle school, so that people are just constantly aware of it and able to use it in the same way they're able to use any other kind of tool.
00;32;33;10 - 00;33;01;18
Walter Pasquarelli
Now, perhaps the right mindset for AI literacy programs is that we should see them much more as an endeavor rather than a milestone. Something that we constantly strive for, that we accept to be imperfect, because perfection in an AI world is utopian, it doesn't exist. But if we strive for that constant development, that, in combination with the technical controls and the right policy landscape, is something that I think holds true promise.
00;33;01;21 - 00;33;22;11
Geoff Nielson
So you talk to a lot of business leaders as part of your role as an AI advocate, talking about AI literacy and broadly helping people understand what these tools can and can't do and how they can be used for good. What are the main messages you're finding yourself sharing with business leaders these days?
00;33;22;11 - 00;33;25;16
Geoff Nielson
And what are the biggest misconceptions about the technology?
00;33;25;18 - 00;33;52;06
Walter Pasquarelli
Yeah, I think that's a great question. The number one thing, and that's kind of where it all starts, is demystifying the technology. A lot of business leaders, especially when the whole AI boom, let's call it for what it is, the AI hype, happened, what they started doing is they wanted to essentially throw AI at their business and become an AI-first company.
00;33;52;06 - 00;34;15;24
Walter Pasquarelli
I think the best example of that I've seen was the case of Oral-B, the toothbrush company, which developed a toothbrush they called nothing less than genius, because it was able to gather data about the movements of how you brush your teeth. And then it had AI, so it would be a true Einstein essentially brushing your teeth.
00;34;15;26 - 00;34;34;10
Walter Pasquarelli
There's obviously a lot of marketing that has gone into that, but it's also an example of how you should not approach AI. The way you should approach artificial intelligence is, first of all, by demystifying it, by understanding: what is this technology? What can it do? What can it not do?
00;34;34;13 - 00;35;04;25
Walter Pasquarelli
And really keeping up to date with it, constantly following the developments, and truly becoming a bit of an expert yourself, at least for your sector. That's the number one thing, and that's where I think success begins and ends. But I would say the real thing that differentiates truly great business leaders, and really great government leaders as well, is, first of all, starting with the vision you had developed before.
00;35;04;27 - 00;35;23;27
Walter Pasquarelli
Who are you? Who do you want to be? Where do you want to take your country, or where do you want to take your organization? What are my KPIs? What's my vision? And once you have that, then you start thinking: how can this really powerful technology, now that I've demystified it, now that I understand it, actually help me get there?
00;35;24;00 - 00;35;58;01
Walter Pasquarelli
And then you start almost like a negotiation between, on the one hand, your already existing strategy and, on the other, the technology. And the key message here is really that AI is not the strategy. Your business strategy is the strategy, and AI is only the tool that can help you get there, whether as an individual citizen who maybe wants to do something creative or start a side hustle, or as a multinational organization or one of the governments I work with. That's really the key transformation, the key mindset shift that needs to happen.
00;35;58;03 - 00;36;19;26
Walter Pasquarelli
There are a few other things as well that I think people often tend to overlook, and that's capabilities. Data is potentially one of the top ten unsexiest topics out there, because people think about numbers, they think about IT strategy, and they cannot quite categorize it.
00;36;19;26 - 00;36;49;09
Walter Pasquarelli
In Europe they think of GDPR, this big, meaty piece of legislation that just makes their life hard. But data is also really the mother's milk of artificial intelligence: without data, no AI, and if you have bad data, you have bad AI. So prioritizing the cleanliness and the representativeness of your data sets is really one of the areas where a lot of businesses, and even a lot of governments, are still struggling to develop those capabilities.
00;36;49;12 - 00;37;09;13
Walter Pasquarelli
And the other point is talent. Let me just share an anecdote with you. A few years ago, I was actually moderating a conference between, on the one hand, a few ministers and, on the other, a couple of C-suites, and the topic was the future of work.
00;37;09;15 - 00;37;27;24
Walter Pasquarelli
I was looking for some up-to-date evidence that I could add to the conversation, and I found a really interesting piece from the Financial Times that said something like: tech talent is at a global shortage. And I thought, oh, perfect. And then I looked, and it was from 1997.
00;37;27;26 - 00;37;57;06
Walter Pasquarelli
So it's kind of an ongoing issue that we're not quite getting to grips with; there's always a big shortage when it comes to talent. Now, if we zoom out and look at the national level, the other point, which I think could perhaps be called a misconception, and which you mentioned earlier, is sovereign AI capabilities.
00;37;57;06 - 00;38;23;18
Walter Pasquarelli
And here we're looking specifically at the infrastructure. Now that we're entering a world that is geopolitically and economically more and more uncertain, a lot of countries, especially European ones, feel that they cannot rely as much anymore on some of their historically global trading partners. So there is now really that desire, that recognition, that we need to develop our own AI capabilities.
00;38;23;18 - 00;38;43;05
Walter Pasquarelli
And I think the sovereign development, the nurturing of our own capabilities, both as countries and as organizations, is one element that we've tended to outsource for far too long over the past years. So, bringing it all together: the strategy is the business strategy.
00;38;43;11 - 00;38;54;15
Walter Pasquarelli
The capabilities are critical, and the sovereignty, the independence that you should have, is another point which I would definitely prioritize as you embark on that journey.
00;38;54;18 - 00;39;26;04
Geoff Nielson
Let's stay on sovereignty for a minute; that's a really interesting one. And I have to imagine that, for a lot of organizations and nation-states, it's a tricky conversation, because American capabilities are so far ahead, and big tech capabilities are so far ahead, that to develop truly sovereign tools takes quite a step back for most organizations, most companies, most governments to build up those capabilities.
00;39;26;06 - 00;39;44;23
Geoff Nielson
Are you hearing generally an appetite to take that on, and seeing investments start to flow there? Or is there still sort of a reluctance, and, if I can call it that, a hope that the status quo is good enough and sovereignty is maybe not that important? How seriously is this being taken?
00;39;44;26 - 00;40;19;12
Walter Pasquarelli
Yeah, that's a great question. The things that I see out there are almost like three typologies, if we were to categorize them. On the one hand, there are those governments that are not really interested, let's put it bluntly, that maybe have their own capability constraints, or where there are more important issues that need to be tackled: in some cases really access to water sources, in other cases economic issues or inflationary pressures that haven't been solved yet.
00;40;19;15 - 00;40;39;01
Walter Pasquarelli
Or maybe in some other countries there's a very high level of crime, or a very high level of public dissatisfaction with the government as a whole. So for those reasons, those countries understandably have to prioritize those things first. AI can help them get there, but we cannot talk about sovereign AI capability investments when we don't have the foundations.
00;40;39;01 - 00;41;03;10
Walter Pasquarelli
Right. And I think that's one important thing. But of course, there are also some countries that are kind of daydreaming through the AI revolution; it hasn't really reached the top of the political agenda. The second typology, which I think is a lot more dangerous, is the countries that want to take the tick-box approach, that want to get to a place where it's just good enough.
00;41;03;12 - 00;41;31;26
Walter Pasquarelli
And I can tell you, I've worked with a couple of offices of heads of state, without necessarily disclosing the identity of those countries, where I provided quite a substantial amount of research and strategic advice, and a lot of primary data on how, on the one hand, business was seeing it and, on the other, how society was seeing the opportunity, which was really substantial.
00;41;31;28 - 00;41;54;06
Walter Pasquarelli
But there's no political buy-in, and sometimes it's because of cultural issues. Maybe these are countries that have been very successful in the past decades, and so they feel that they can kind of relax now, that they can continue doing what they've been doing so far. And for that reason, there isn't really that appetite, that desire to keep pushing.
00;41;54;08 - 00;42;18;12
Walter Pasquarelli
And they're now paying the price, with some of their key industries being disrupted, especially by American and Chinese competitors. I'll leave it up to your imagination which countries I mean by that. And then there's another group that are effectively leaders, who are investing heavily and want to take a risk.
00;42;18;15 - 00;42;43;05
Walter Pasquarelli
There are a lot of countries spearheading really interesting initiatives, different initiatives, especially based on the capabilities they have. If you look, for example, at some of the Baltic countries like Estonia, which just a couple of decades ago came out of socialism and is now truly a leader in everything digital.
00;42;43;05 - 00;43;03;12
Walter Pasquarelli
And I think you see that also in the economic numbers: if you look at wage increases across Europe, they're one of the big leaders. So we're looking here at countries that have both the appetite and the desire, and that as well, to put it bluntly, put their money where their mouth is: they invest and they take a risk.
00;43;03;14 - 00;43;24;03
Walter Pasquarelli
Some other countries kind of try to do everything because they want to lead, and there the issue is much less about available capital allocation or investments; it's a lot more about the right strategy. Picking the right things: a good strategy doesn't mean that we do everything. A good strategy means that we do some things and choose not to do other things.
00;43;24;06 - 00;43;30;08
Walter Pasquarelli
And that's where risk comes in. But it's also where expertise can help us decide the right path.
00;43;30;10 - 00;43;51;25
Geoff Nielson
I'm glad you brought up Estonia, which is not actually something I think I've ever said on this podcast before, but Estonia was certainly one of the countries that came to mind, for the exact reasons that you mentioned: they've been ahead of the curve digitally, and it's paid dividends for them as they lift up their GDP and standard of living.
00;43;51;25 - 00;44;19;29
Geoff Nielson
And they're at the vanguard of all these digital services. There's sort of an implication there that countries, and probably businesses too, have an opportunity to get ahead with the right strategy here if they're going to build more of these sovereign capabilities, if they can build more of these capabilities in-house. And you had a story in there, basically, of advising some countries where you were pushing them.
00;44;19;29 - 00;44;42;08
Geoff Nielson
It sounds like, at least implicitly, to be a bit more active here, and it was meeting with political resistance. If you were advising these heads of state broadly, what would be your most direct guidance for how they should be approaching this, and what should each of them be doing that's best aligned with their national interest, making sure they stay competitive and get ahead?
00;44;42;10 - 00;45;01;11
Walter Pasquarelli
Yeah, I mean, part of the issue is that, as we say where I'm from, you can't force people's luck upon them, even though sometimes you feel it's so obvious that you kind of want to just say: come on, just move a little bit. It can be a little bit frustrating.
00;45;01;14 - 00;45;26;10
Walter Pasquarelli
But I think in the cases where, for example, your political leaders are reluctant, usually when you talk to political leaders there are two things that matter. One is the numbers, and we're talking here specifically about economic growth and jobs. The other is obviously votes; specifically, when you work in democracies where there are no parliamentary term limits, they want to be reelected.
00;45;26;10 - 00;45;54;15
Walter Pasquarelli
So you really have to show them what's at stake, both in terms of the issues that could arise by not investing and not pushing forward a strong, innovative AI economic agenda. And I think that's the thing we often forget to talk about: there is also an ethical question in not doing anything. We tend to think only about whether we will implement AI.
00;45;54;15 - 00;46;24;25
Walter Pasquarelli
And so there's AI ethics, and that's correct, that's true. But there's also an ethical component to not doing anything, missing out, and not preparing your citizens, just thinking: yeah, whatever, America will do it. And I think that's key; that's one thing I personally find very important. The other thing with countries, especially when it comes to the larger vision for the nation, is that you want to have the right strategy.
00;46;24;27 - 00;46;48;19
Walter Pasquarelli
And as I said earlier, the right strategy typically means that, on the one hand, we want to have the capabilities for it, ideally the sovereign ones when possible. And it usually starts with an X-ray, almost, where we try to understand where this particular country currently sits, maybe within the region, maybe within the global environment, depending a little bit on its size.
00;46;48;19 - 00;47;10;02
Walter Pasquarelli
A few years ago I worked on a tool called the AI Readiness Index, where the purpose was really to benchmark nations on where they sit. The key thing here is not necessarily to increase competition; it's really to understand: where do I sit among my peers, where are my strengths, and where are my weaknesses?
00;47;10;04 - 00;47;31;12
Walter Pasquarelli
And once you have done that, then you can think strategically: what are the sectors where I want to excel? For some countries that might even be tourism, for others automotive, for others professional services. Then you want to prioritize, and ideally find key target sectors, at least for the average country.
00;47;31;12 - 00;47;53;20
Walter Pasquarelli
Maybe not for the big ones like the U.S. and China; that's a different story, I would say. But those are the sectors you want to promote and prioritize. And critically, the other piece of advice I always provide: obviously a purely government-led future strategy is always a bit tricky, because the innovation potential it can provide is only very limited.
00;47;53;22 - 00;48;16;11
Walter Pasquarelli
But of course, the integration of AI into pupils' and students' curricula across schools is essential. And of all the countries that I've seen, I think China has been doing it. Think about billions, or really hundreds of millions, of children being effectively taught AI from a very young age.
00;48;16;11 - 00;48;34;29
Walter Pasquarelli
What kind of advantage does that give them? I think the United States is now set to do the same thing, with an executive order that was just signed a couple of months ago. In Europe, you have computer classes to learn how to operate a laptop, at least where I come from.
00;48;34;29 - 00;49;17;17
Walter Pasquarelli
And that's not exactly a future-readiness strategy. It means you really leave people to rely on themselves to work out how to use these tools, and that's when tragedies happen and when mistakes happen. I think being able to really support those people at a very young age is one of the key things. To a head of state or a minister, I would say: there's only a limited amount of things you can do, but what you can do now is invest in the next generation, so that they will create some of the positive synergies and positive effects that will eventually translate into economic gain
00;49;17;17 - 00;49;19;14
Walter Pasquarelli
and social benefit for your country.
00;49;19;17 - 00;49;49;13
Geoff Nielson
I love that, and it makes sense. It ties so directly into, as you said, the broader economic gain and the long-term thinking there. On the economic piece, you mentioned taking a sector-based approach, looking at the sectors a given country wants to invest in. Now, the sector piece is interesting because it's happening at a time when this technology is also disrupting almost every sector in some way or another.
00;49;49;16 - 00;50;09;10
Geoff Nielson
So I'm curious, and I want to come back to this notion of the future of work that we brushed up against earlier: when you look across sectors, are you starting to see trends in terms of which sectors you see as being more lucrative, or being more strongly disrupted, for better or for worse?
00;50;09;13 - 00;50;16;10
Geoff Nielson
And how do you see this playing out in terms of our work lives over the next handful of years?
00;50;16;12 - 00;50;36;03
Walter Pasquarelli
Yeah, that's the million-dollar question, right? Will AI take my job? I think that's probably the question I've received the most in a decade of working in this field. And there are perhaps a couple of misconceptions that I always notice when I talk to people, and sometimes even when I wonder whether AI will take my own job.
00;50;36;03 - 00;50;56;16
Walter Pasquarelli
Right? Because that's also a possibility that we always fail to consider. But as a matter of fact, I think we tend to look at jobs as almost distinct, unified categories, when in reality they're much more like a bundle of tasks, a variety of different things.
00;50;56;16 - 00;51;22;20
Walter Pasquarelli
You could start with the most rudimentary ones: sending emails could be part of a job, right? Another is dealing with humans. Another is making calculations, maybe if you work in finance. Another is operating slide decks, and so on. I think once we unbundle it like that, it's much easier for us to see the actual impact, and where some of these tools can help us.
00;51;22;20 - 00;51;49;18
Walter Pasquarelli
And I think that also helps us calm down a little bit in the face of all the wild forecasts being made by numerous research pieces out there. So I would probably not take a sector-specific approach; I would take a capability-based approach. And my two hypotheses here are that there are essentially two main trajectories.
00;51;49;21 - 00;52;16;15
Walter Pasquarelli
The first is that if there is a task that has a maximum point of efficiency, a task where the sole aim is to be as efficient as possible, to basically make calculations and get a distinct answer, those are obviously tasks that lend themselves perfectly to automation, because they're repetitive: maybe the case changes, but ultimately the outcome is similar.
00;52;16;15 - 00;52;35;01
Walter Pasquarelli
Think of tax returns. I can optimize my tax returns only up to a certain extent. I can have a bit of a strategy in forecasting, and that's where a human can be great. But as far as the tax return itself is concerned, if I try to optimize it beyond a certain point, I'm effectively breaking the law, and I want to avoid that.
00;52;35;04 - 00;53;02;19
Walter Pasquarelli
The other type of task is the one where there's no set ceiling of efficiency, what I call excellence-based tasks. Think of it this way: you might be a researcher who has to find out something new, maybe make a discovery. You might be someone who works in finance or banking and needs to make a prediction or build a financial model for a potential stock you might want to invest in.
00;53;02;19 - 00;53;22;16
Walter Pasquarelli
You might be a creative, you might be a writer, you might be anyone in any kind of industry. And as a matter of fact, these are really the bulk of the jobs that make up today's economy. But here's where it gets a little bit tricky, and some research has been produced, some evidence, that can help us actually determine this.
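The "bundle of tasks" framing lends itself to a small illustration. The sketch below is purely hypothetical: every task name, hour count, and efficiency/excellence label is invented for the example, not drawn from Walter's remarks or the studies mentioned. It simply shows how unbundling a job makes its automation exposure concrete.

```python
# Toy sketch of the "job as a bundle of tasks" idea from the conversation.
# All task names, hour weights, and labels are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    hours_per_week: float
    kind: str  # "efficiency" (bounded, repetitive) or "excellence" (no ceiling)


def automation_exposure(tasks: list[Task]) -> float:
    """Share of weekly hours spent on efficiency-type tasks,
    i.e. the portion of the bundle that lends itself to automation."""
    total = sum(t.hours_per_week for t in tasks)
    bounded = sum(t.hours_per_week for t in tasks if t.kind == "efficiency")
    return bounded / total if total else 0.0


# A hypothetical accountant's week, unbundled into tasks.
accountant = [
    Task("filing routine tax returns", 20, "efficiency"),
    Task("answering routine emails", 4, "efficiency"),
    Task("advising clients on strategy", 10, "excellence"),
    Task("forecasting and planning", 6, "excellence"),
]

print(f"automation exposure: {automation_exposure(accountant):.0%}")
# prints "automation exposure: 60%"
```

Viewed this way, only the bounded, efficiency-type share of the week is directly exposed to automation; the excellence-type share is where, on Walter's account, AI augments rather than replaces.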
00;53;22;18 - 00;53;52;20
Walter Pasquarelli
There have been studies conducted with material scientists, with financial advisors, even within the creative industries. And when the top performers across these industries started using AI tools, so people who were already excellent at their job, when they started getting exposure to these AI systems, their performance increased drastically. What about the people who were only average performers?
00;53;52;23 - 00;54;23;28
Walter Pasquarelli
They stayed at about the same level of performance. No change. So what happened is that the top performers effectively gained ground, the gap between the best and the rest widening. Now, what does this mean? It effectively means two things. Number one: yeah, you should probably learn how to use AI tools. That's a great idea, and it will help you get far, especially if you use them safely and accurately.
00;54;24;01 - 00;54;47;01
Walter Pasquarelli
And number two: you will still need to develop the human skills that you have today. If you're someone who wants to become a top-tier investor, AI is not going to make you a top-tier investor if you're not already great. You should really develop that excellence either way.
00;54;47;04 - 00;55;02;22
Walter Pasquarelli
And I think that's maybe a bit of a wake-up call to some of us: we can't just lean back; we always have to work on ourselves. Obviously, the key question is: why do the ones whose performance skyrockets become so much better?
00;55;02;24 - 00;55;26;04
Walter Pasquarelli
And the truth is not necessarily that they just direct the AI in a particular way; contrary to all of our expectations, that's a secondary concern. It's because they have the ability to select, to judge, and to curate the outputs. So you might have an AI system that gives you 20 different answers, and you can say: take this one.
00;55;26;06 - 00;55;47;08
Walter Pasquarelli
Let's not take this one. You can tell what is special; you can tell what is right. It's the same as when an electrician comes to my place because maybe I have some electricity or energy issues, and they just twist a knob and charge me $400 for it.
00;55;47;08 - 00;56;03;02
Walter Pasquarelli
And I'm like, what? You were here for 30 seconds and you're charging me all this money? But I'm not paying for the twist of the knob; I'm paying for them to select the right one. Because if I were to do it myself, I might either have the issue persist or I might blow up my flat.
00;56;03;02 - 00;56;14;23
Walter Pasquarelli
I'm exaggerating, obviously, but you get the idea. It's selection, it's curation, it's judgment. That's the thing that matters, and that we need to help people cultivate over the years.
00;56;14;26 - 00;56;32;06
Geoff Nielson
I think that's extremely well said. And it ties up so much of what we've been talking about in this conversation around AI literacy, the importance of using our own judgment, and understanding what really matters here. So I really appreciate that note, Walter, and I want to say a big thanks for joining today.
00;56;32;06 - 00;56;37;10
Geoff Nielson
This has been a really insightful conversation. And, I appreciate all of your insights.
00;56;37;13 - 00;56;42;05
Walter Pasquarelli
Thank you so much, Geoff. Good to be with you. Talk to you soon.
00;56;42;08 - 00;57;07;21
Geoff Nielson
If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe!
The Next Industrial Revolution Is Already Here
Digital Disruption is where leaders and experts share their insights on using technology to build the organizations of the future. As intelligent technologies reshape our lives and our livelihoods, we speak with the thinkers and the doers who will help us predict and harness this disruption.