

Prepare for AI Regulation

Understand current AI legislative initiatives and prepare your organization for future AI regulations.

Use this research to:

  • Protect users from the possible risks that AI can introduce in areas such as misinformation, unfair bias, malicious uses, and cybersecurity threats.
  • Provide guidance and best practices in the development and deployment of AI applications.
  • Plan and anticipate the resources required to address AI risk and comply with AI regulation initiatives.

Our Advice

Critical Insight

Responsible AI guiding principles provide the safeguards needed to mitigate the risks introduced by AI applications. They serve as a foundation for policies and governance practices, and they are an integral component of an organization’s AI strategy to maximize the benefits and minimize the risks of AI.

Impact and Result

  • Develop an AI strategy that balances innovation and risk involved with generative AI.
  • Implement an AI risk management system that leverages and integrates with your organization’s risk management system and governance programs.
  • Establish and operationalize responsible AI principles to govern AI development and deployments.
  • Integrate AI governance with the organization’s enterprise-wide governance programs.


Prepare for AI Regulation Research & Tools

1. Prepare for AI Regulation Storyboard – A guide to help you understand the current state of AI regulation initiatives around the world and the actions you can take to mitigate the risks that come with generative AI.



Prepare for AI Regulation

Understand current AI legislative initiatives and prepare your organization for future AI regulations.

Analyst perspective

A global effort is underway to mitigate the risks that AI can bring.


Generative AI is changing the world we live in. It represents the most disruptive and transformative technology of our lifetime. It will revolutionize how we interact with technology and how we work. However, along with the benefits of AI, this technology introduces new risks. Generative AI has demonstrated the ease of creating misinformation and deepfakes, and it can be misused to threaten the integrity of elections.

Organizations around the world are seeking guidance, and some are asking governments to regulate AI to provide safeguards for the use of this technology. As a result, AI legislation is emerging around the world. A key challenge with any legislation is to find the balance between the need for regulation to protect the public and the need to provide an environment that fosters innovation.

Some governments and regions (e.g. the US and UK) are taking a context- and market-driven approach, relying on self-regulation and introducing little, if any, new legislation. Contrast that with the EU, which has introduced comprehensive legislation to govern the use of AI technology and protect the public from potential harm. International cooperation across governments and regions will likely be required for the effective regulation of AI around the world.

Regardless of what legislation exists, organizations can take many steps to mitigate the potential risks from AI. One of the first steps and best practices for any organization is to establish and adopt responsible AI guiding principles.

Bill Wong

AI Research Fellow
Info-Tech Research Group

Executive summary

Your Challenge

Generative AI introduces new risks, and several institutions are developing guidance and/or legislation to mitigate the risks involved in deploying AI-based technologies.

  • Protect users from the possible risks that AI can introduce in areas such as misinformation, unfair bias, malicious uses, and cybersecurity threats.
  • Provide guidance and best practices in the development and deployment of AI applications.
  • Plan and anticipate the resources required to address AI risk and comply with AI regulation initiatives.

Common Obstacles

The current state of the organization’s risk and governance programs has not anticipated the introduction of AI applications and their impact:

  • A risk-based categorization of AI applications is not yet in place.
  • Data and AI governance programs lack the maturity needed to support responsible AI initiatives.

Organizations will need to upgrade their data and AI governance programs to address voluntary or legislated AI regulations.

Info-Tech’s Approach

Recommendations:

  • Develop an AI strategy that balances innovation and risk involved with generative AI.
  • Implement an AI risk management system that leverages and integrates with your organization’s existing risk management system and governance programs.

  • Establish and operationalize responsible AI principles to govern AI development and deployments.
  • Integrate AI governance with the organization’s enterprise-wide governance programs.
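
To make the second recommendation more concrete, here is a minimal sketch of how an AI risk entry might plug into an existing, dict-based enterprise risk register. The structure and field names (AIRiskEntry, risk_tier, controls, and so on) are illustrative assumptions, not part of Info-Tech’s methodology.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class AIRiskEntry:
    """Hypothetical AI risk register record (illustrative only)."""
    system_name: str      # e.g. "Resume-screening model"
    use_case: str         # business context of the AI application
    risk_tier: str        # e.g. "minimal", "limited", "high"
    harms: List[str]      # potential harms (bias, misinformation, privacy)
    owner: str            # accountable business owner
    controls: List[str]   # mitigations mapped to responsible AI principles
    review_date: date     # next governance review

def add_to_enterprise_register(register: List[dict], entry: AIRiskEntry) -> None:
    """Append an AI risk entry to an existing (dict-based) enterprise risk register."""
    register.append({
        "category": "AI",
        "title": f"{entry.system_name}: {entry.use_case}",
        "tier": entry.risk_tier,
        "owner": entry.owner,
        "mitigations": entry.controls,
        "next_review": entry.review_date.isoformat(),
    })

# Example usage: register a high-risk HR application
register: List[dict] = []
add_to_enterprise_register(register, AIRiskEntry(
    system_name="Resume-screening model",
    use_case="HR candidate shortlisting",
    risk_tier="high",
    harms=["unfair bias", "lack of explainability"],
    owner="VP, Human Resources",
    controls=["bias testing", "human-in-the-loop review", "model documentation"],
    review_date=date(2025, 1, 15),
))
```

The point of the sketch is that AI risks are tracked in the same register, with the same owners and review cadence, as other enterprise risks rather than in a separate, disconnected inventory.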

Info-Tech Insight

Responsible AI guiding principles provide the safeguards needed to mitigate the risks introduced by AI applications. They serve as a foundation for policies and governance practices and as an integral component of an organization’s AI strategy to maximize the benefits and minimize the risks of AI.

Info-Tech offers various levels of support to best suit your needs

DIY Toolkit

“Our team has already made this critical project a priority, and we have the time and capability, but some guidance along the way would be helpful.”

Guided Implementation

“Our team knows that we need to fix a process, but we need assistance to determine where to focus. Some check-ins along the way would help keep us on track.”

Workshop

"We need to hit the ground running and get this project kicked off immediately. Our team has the ability to take this over once we get a framework and strategy in place."

Executive & Technical Counseling

“Our team and processes are maturing; however, to expedite the journey we’ll need a seasoned practitioner to coach and validate approaches, deliverables, and opportunities.”

Consulting

“Our team does not have the time or the knowledge to take this project on. We need assistance through the entirety of this project.”

Diagnostics and consistent frameworks are used throughout all five options.

AI incidents and risks continue to grow

Some AI incidents that occurred in 2023-2024

Elections/politics

  • New Hampshire’s presidential primary election was targeted by a robocall urging Democrats not to vote.
  • UK opposition leader targeted by AI-generated fake audio smear.
  • South Korea saw a swirl of deepfakes ahead of its general elections.
  • Turkey’s opposition claimed Russia created deepfakes during election.
  • Pakistan’s elections were impacted by AI and fake news.
  • Russia funded a Latin America–wide anti-Ukraine disinformation drive.

AI and the law

  • A Colorado Springs attorney was fired for citing fake cases created by ChatGPT.
  • Porcha Woodruff was eight months pregnant when Detroit police mistakenly arrested her for robbery and carjacking based on a faulty facial recognition match.
  • DoNotPay (world’s first “robot lawyer”) is facing a class action lawsuit over allegations that it misled customers and misrepresented its product.

Fraud/deepfakes

  • In Hong Kong, attackers used a fake video conference populated by simulations of the CFO and other personnel to convince an employee of an unnamed company to transfer about US$25 million.
  • A rash of deepfakes hijacked the images of trusted news personalities in spurious ads, undermining confidence in the news media (CNN’s Wolf Blitzer, CBS Mornings host Gayle King, BBC hosts Matthew Amroliwala and Sally Bundock, and CBC host Ian Hanomansing).
  • A couple was scammed out of US$15,449 after receiving a call from someone who used an AI voice clone of their son to convince them he was in jail for killing a diplomat in a car accident.
  • and many, many more to come…

Find the balance between regulation and innovation

Regulation

Protect users or citizens from unintended consequences of AI applications by requiring organizations to ensure applications are developed and deployed in a manner that addresses data privacy, safety and security, explainability and transparency, fairness and bias detection, validity and reliability, and accountability.

Deliver a framework where users or citizens have the right to file complaints against AI providers, and where compensation can be enforced.

Innovation

Promote and enable rapid and agile development and deployment of AI applications.

Minimize bureaucratic oversight and compliance costs.

Deliver an AI ecosystem or framework that promotes innovation and competition.

AI regulatory initiatives around the world

[World map with flag icons for Canada, the US, Brazil, Argentina, the EU, the UK, Germany, France, Italy, China, Japan, India, and Australia]
  • US – Executive Order 14110
  • Canada – Artificial Intelligence and Data Act
  • Brazil – Bill 2338
  • Argentina – Agency of Access to Public Information
  • EU – EU AI Act
  • UK – AI Bill
  • Germany – EU AI Act proposal
  • France – EU AI Act proposal
  • Italy - EU AI Act
  • China – Deep Synthesis Provisions
  • Japan – Safe and responsible AI
  • India – Digital India Act (updates proposed for AI)
  • Australia – Safe and responsible AI

Understand the current EU regulation landscape

[Map of the European Union]
  • Global standard, risk-based approach to regulate AI.
  • Aligns and supports democratic values, human rights, and the rule of law.
  • Leverages and builds on existing legislation:
    • Digital Services Act
    • Digital Markets Act
    • Product Liability Directive
    • General Data Protection Regulation (GDPR)

High-level overview of the EU AI Act

The EU AI Act classifies AI according to its risk. The Future of Life Institute summarizes the main points as follows:

  • Unacceptable risk is prohibited (e.g. social scoring systems and manipulative AI).
  • High-risk systems, the main focus of the legislation, are regulated.
  • Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware that they are interacting with AI (e.g. chatbots and deepfakes).
  • Minimal risk is unregulated (including the majority of AI applications currently available on the EU single market, such as AI-enabled video games and spam filters).

The majority of obligations fall on providers (developers) of high-risk AI systems.

  • This includes providers that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country.

Source: “High-Level Summary,” EU AI Act, 2024.

The EU uses a risk-based approach

The EU’s risk-based approach to AI regulation:

  • Unacceptable risk – Violation of EU fundamental values; prohibited.
  • High risk – Impact on health, safety, or fundamental rights; requires a conformity assessment.
  • Limited risk – Risk of impersonation, manipulation, or deception; requires transparency obligations.
  • Minimal risk – Common AI systems; no additional obligations.

Source: “Artificial intelligence Act Briefing,” European Parliament, 2024.
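
To make the four-tier model concrete, here is a minimal sketch that maps a few system attributes to the risk tiers summarized above. It illustrates the risk-based approach only and is not a legal determination under the EU AI Act; the attribute names and the classify_ai_system helper are hypothetical simplifications.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

def classify_ai_system(*, social_scoring: bool, manipulative: bool,
                       affects_health_safety_or_rights: bool,
                       interacts_with_end_users: bool) -> RiskTier:
    """Simplified mapping of system attributes to the EU AI Act's risk tiers.

    Illustrative only -- a real assessment requires the Act's detailed
    use-case lists and legal review.
    """
    if social_scoring or manipulative:
        return RiskTier.UNACCEPTABLE          # prohibited practices
    if affects_health_safety_or_rights:
        return RiskTier.HIGH                  # conformity assessment required
    if interacts_with_end_users:
        return RiskTier.LIMITED               # e.g. chatbots, deepfake generators
    return RiskTier.MINIMAL                   # e.g. spam filters, AI-enabled games

# Example: a customer-facing chatbot with no impact on fundamental rights
tier = classify_ai_system(social_scoring=False, manipulative=False,
                          affects_health_safety_or_rights=False,
                          interacts_with_end_users=True)
print(tier.name, "->", tier.value)            # LIMITED -> transparency obligations
```

A categorization of this kind, however rough, gives an organization a starting point for deciding which AI applications need conformity-style reviews and which only need transparency measures.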


About Info-Tech

Info-Tech Research Group is the world’s fastest-growing information technology research and advisory company, proudly serving over 30,000 IT professionals.

We produce unbiased and highly relevant research to help CIOs and IT leaders make strategic, timely, and well-informed decisions. We partner closely with IT teams to provide everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.

What Is a Blueprint?

A blueprint is designed to be a roadmap, containing a methodology and the tools and templates you need to solve your IT problems.

Each blueprint can be accompanied by a Guided Implementation that provides you access to our world-class analysts to help you get through the project.

Talk to an Analyst

Our analyst calls are focused on helping our members use the research we produce, and our experts will guide you to successful project completion.

Book an Analyst Call on This Topic

You can start as early as tomorrow morning. Our analysts will explain the process during your first call.

Get Advice From a Subject Matter Expert

Each call will focus on explaining the material and helping you to plan your project, interpret and analyze the results of each project step, and set the direction for your next project step.


Author

Bill Wong
