Google and IBM Are Calling for AI Regulation

Last week, Google’s CEO, Sundar Pichai, called for new artificial intelligence (AI) regulations. The next day, IBM called for rules to eliminate AI biases that can discriminate against consumers, citizens, and employees based on their gender, age, and ethnicity, among other characteristics.

In an editorial for The Financial Times, Mr. Pichai wrote, “There is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it,” as reported by The Verge. (The FT’s site was not accessible at the time of writing this note.)

He called for a cautious and nuanced approach, tailored to the technologies and sectors in which AI is used. In some areas, such as autonomous vehicles, new rules are needed. Others, such as financial services, insurance, and healthcare, are already regulated, and the existing frameworks should be extended to cover AI-powered products and services.

“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,” wrote Mr. Pichai. “It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”

The sentiment is echoed by IBM, which issued policy proposals in preparation for the AI panel hosted by its CEO, Ginni Rometty, at the World Economic Forum in Davos last week.

IBM recommends that companies work with governments to develop standards to avoid discrimination by AI systems, that they conduct assessments to determine risks and harms, and that they maintain documentation to be able to explain decisions that adversely impact individuals.
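
IBM’s proposals are framed at the policy level, but a bias assessment can start very simply. Below is a minimal, hypothetical sketch in Python of one common check: comparing favourable-outcome rates across demographic groups and flagging large gaps. The sample data, the group labels, and the 0.8 “four-fifths” threshold are illustrative assumptions on our part, not part of IBM’s proposal.

```python
# Minimal, illustrative bias check: compare approval rates by group and flag
# groups whose rate falls well below the best-performing group's rate.
from collections import defaultdict

# Hypothetical decision log: (group, approved) pairs from some AI-driven process.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: {"approved": 0, "total": 0})
for group, approved in decisions:
    counts[group]["total"] += 1
    counts[group]["approved"] += int(approved)

rates = {g: c["approved"] / c["total"] for g, c in counts.items()}
best = max(rates.values())

# Disparate-impact heuristic: flag any group whose approval rate is below
# 80% of the best group's rate (the common "four-fifths" rule of thumb).
for group, rate in rates.items():
    ratio = rate / best if best else 0.0
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.2f}, ratio to best {ratio:.2f} [{flag}]")
```

In practice, an organization would run checks like this on its real decision logs and retain the results as part of the documentation trail IBM describes, so that decisions adversely affecting individuals can later be explained.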

Our Take

As consumers become increasingly aware of the degree to which AI controls and shapes our lives and society, and of the harms caused by biased applications, pressure is intensifying on technology firms and governments alike to put guardrails in place and, in some cases, to apply the brakes so that society can catch up with the speed of innovation and work out what needs to be regulated and how.

Of particular concern are facial recognition technologies (FRTs), which are used by law enforcement agencies around the world to identify and track (potential) criminals and by governments for social surveillance and social engineering, including monitoring and persecuting ethnic minorities.

These technologies and the pervasive surveillance they create violate basic human rights, such as the right to privacy. (See Amnesty International’s report “Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights,” and our note Amnesty International Calls Google and Facebook a Threat to Human Rights.)

A while back, The Economist compared AI to the ancient Roman god Janus, who is depicted with two faces, one looking into the past and the other into the future. (He is the god of “beginnings, gates, transitions, time, duality, doorways, passages, and endings.”) Janus, writes The Economist, “contained both beginnings and endings within him. That duality characterizes AI, too.”

There is “good” AI and “bad” AI, and then there is “good” AI with some bad mixed in, such as biases. There is also “good” technology that could lead to unanticipated, undesired, and potentially horrific outcomes, just like numerous other things in life and many technologies before AI. As a society, we need time to anticipate and sort this out, hopefully before we build the AI equivalent of the atomic bomb. And I don’t mean the singularity (i.e., hypothetical uncontrolled and irreversible technological growth).

And it is our responsibility as technology and business leaders to think through the consequences of using new technologies and the impact they may have on individuals, communities, society, and the world at large.


Want to Know More?

To get educated on AI biases and start eliminating them, see Info-Tech’s blueprint Mitigate Machine Bias.

To learn about AI guardrails and the controls we recommend putting in place even if you are just getting your feet wet with AI, look out for our upcoming blueprint on AI governance, or reach out to our analysts to get a kick-start.
