

Prepare to Negotiate Your Generative AI Vendor Contract

Build risk awareness: You can’t begin to negotiate until you understand where your real risk points are.

Your organization has built its AI strategy, developed some high-value use cases, and begun the process of acquiring and licensing a Gen AI platform. A crucial part of that process will be identifying and mitigating risk, and the Gen AI vendor contract is a good place to start.

Our Advice

Critical Insight

  • As you prepare for contract negotiation, take the opportunity to build risk awareness about the nature of these offerings and how you may be impacted.

Impact and Result

  • Understand how the major areas of risk in Gen AI products may manifest in the contracts for these products.
  • Come to a consensus on your level of risk tolerance before entering into negotiations.
  • Determine which risks can be addressed in negotiations, which are to be mitigated operationally, and which cannot be mitigated.

Prepare to Negotiate Your Generative AI Vendor Contract Research & Tools

1. Prepare to Negotiate Your Generative AI Vendor Contract Deck – Work through the major risks that may affect your generative AI contract negotiation.

Use this research to ensure your organization enters contract negotiations reasonably informed of the risks inherent in Gen AI and aligned on the position it will take on these risks in the negotiation process.

2. Prepare to Negotiate Your Generative AI Contract Risk Assessment Tool – Identify areas of risk and roadblocks in the negotiation process for a Gen AI tool.

Use this tool to understand your risk across the following areas: data privacy, AI bias, organizational readiness, reliability of outputs, vendor commitment to AI governance, liability, vendor transparency, ownership and use of customer data, market volatility and vendor lock-in, and safety and security.
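The actual deliverable is a spreadsheet-style tool, but the kind of scoring such an assessment performs across these areas can be sketched in Python. The 1–5 scale, equal weighting, and function names below are illustrative assumptions, not part of the Info-Tech tool:

```python
# Illustrative risk-register scoring -- NOT the Info-Tech tool itself.
# The 1-5 scale (1 = low risk, 5 = high risk) and equal weighting are assumptions.

RISK_AREAS = [
    "data privacy",
    "AI bias",
    "organizational readiness",
    "reliability of outputs",
    "vendor commitment to AI governance",
    "liability",
    "vendor transparency",
    "ownership and use of customer data",
    "market volatility and vendor lock-in",
    "safety and security",
]

def overall_risk(scores: dict[str, int]) -> float:
    """Average the 1-5 scores across every assessed area."""
    missing = [a for a in RISK_AREAS if a not in scores]
    if missing:
        raise ValueError(f"unscored areas: {missing}")
    return sum(scores[a] for a in RISK_AREAS) / len(RISK_AREAS)

def hotspots(scores: dict[str, int], threshold: int = 4) -> list[str]:
    """Areas scored at or above the threshold deserve negotiation focus."""
    return [a for a in RISK_AREAS if scores[a] >= threshold]
```

A team could score each area before negotiations and focus preparation on the returned hotspots rather than treating all ten areas as equally urgent.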


Prepare to Negotiate Your Generative AI Vendor Contract

Build risk awareness: You can’t begin to negotiate until you understand where your real risk points are.

EXECUTIVE BRIEF

Analyst Perspective


Generative AI (Gen AI) has arrived on the scene and your organization has decided to leverage this technology. The business sees value in exploiting Gen AI products, derived from large language models, for a variety of use cases such as data analysis and summary, copywriting, image generation, or code-writing. However, excitement around the potential of these tools is tempered by awareness of their risk landscape. You need to build risk awareness and understand what to consider when entering into negotiations.

You can’t begin to negotiate until you understand where your real risk points are. What considerations must be surfaced as you prepare to negotiate for a Gen AI contract? Which risks can be addressed within the contract, and which will be mitigated operationally?

Next, assess your risk around prominent Gen AI concerns and identify where you will mitigate or otherwise respond to those risks. Throughout, employ a risk-based approach so that you understand how far you are leaning into this space.

Emily Sugerman
Senior Research Analyst
Info-Tech Research Group

Executive Summary

Your Challenge

Common Obstacles

Info-Tech’s Approach

  • Your organization has built its AI strategy, developed some high-value use cases, and begun the process of acquiring and licensing a Gen AI platform.
  • A crucial part of that process will be the identification and mitigation of risk, and the negotiation of the Gen AI platform will be one area where risk is identified and addressed.
  • Users applying AI tools without clear guardrails or guidelines set by IT.
  • Lack of understanding of the status and safety of your organization’s data when it is entered into the tool.
  • Lack of understanding of how you, or the provider, are responsible in the event of third-party copyright claims.
  • Uncertainty around how open lawsuits against these new technologies would affect your use of the product.
  • Understand how the major areas of risk in Gen AI products may manifest in the contracts for these products.
  • Come to a consensus on your level of risk tolerance before entering into negotiations.
  • Determine which risks can be addressed in negotiations, which are to be mitigated operationally, and which cannot be mitigated.

Info-Tech Insight
In preparation for contract negotiation, take the opportunity to build risk awareness about the nature of these offerings and how you may be impacted.

Your challenge

This research is designed to help organizations that want to:

  • Contract with a Gen AI platform provider, or a provider incorporating AI into their suite of existing products, in a rapidly growing yet still volatile market.
  • Secure a tool/platform that will enable the Gen AI-enabled use cases that the organization has determined are part of its roadmap.
  • Ensure that the organization enters contract negotiations reasonably informed of the risks currently understood as inherent in this technology, and aligned on the position it will take on these risks in the negotiation process.

The Gen AI market is projected to grow to $1.3 trillion by 2032 (Bloomberg, 2023).

ChatGPT received 1.4 billion visits in August 2023 (Similarweb, 2023).

But it’s not clear whether these tools are profitable yet:

GitHub Copilot was estimated to be losing approximately $20 per user every month in early 2023 (The Wall Street Journal, 2023).

Common obstacles

Gen AI risks will factor into your negotiation preparation.

  • These tools present potential risk if users are already using AI without clear guardrails or guidelines set by IT.
  • Organizations are unclear on whether the company’s data is safe in a Gen AI tool. What control do you potentially relinquish when providing inputs into the tool?
  • Class action lawsuits are now hitting Gen AI providers, especially related to copyright infringement. If third-party copyright violations do occur through your company’s use of the tool, will your provider indemnify you against them or will you be responsible?
  • Open lawsuits and an immature regulatory environment mean the Gen AI providers’ legal obligations might change: how will this affect the long-term viability of the product and your ability to use it as anticipated?

Entering the era of Gen AI

Contextualizing Gen AI in the broader AI landscape.

Artificial Intelligence (AI)
A field of computer science that focuses on building systems to imitate human behavior. Not all AI systems have learning behavior; many systems operate on preset rules, such as customer service chatbots.

Machine Learning (ML) and Deep Learning (DL)
An approach to implementing AI, whereby the AI system is instructed to search for patterns in a data set and then make predictions based on that set. In this way, the system “learns” to provide accurate content over time (think of Google’s search recommendations). DL is a subset of ML algorithms that leverages artificial neural networks to develop relationships among the data.

Generative AI (Gen AI)
A form of ML whereby, in response to prompts, a Gen AI platform can generate new outputs based on the data it has been trained on. Depending on its foundational model, a Gen AI platform will provide different modalities and thereby use case applications.

Key concepts

Artificial Intelligence (AI)
A combination of technologies that can include ML. AI systems perform tasks that mimic human intelligence, such as learning from experience and problem solving. Most importantly, AI makes its own decisions without human intervention.

Machine Learning (ML)
ML systems learn from experience and without explicit instructions. They learn patterns from data then analyze and make predictions based on past behavior and the patterns learned.

Responsible AI
Refers to guiding principles to govern the development, deployment, and maintenance of AI applications. In addition, these principles also provide human-based requirements that AI applications should address. Requirements include safety and security, privacy, fairness and bias detection, explainability and transparency, governance, and accountability.

Generative AI (Gen AI)
Is a subfield of AI that focuses on creating models and algorithms capable of generating new content such as images, text, music, or even videos. It involves training AI models to learn patterns and characteristics from existing data and then using that knowledge to generate new content that resembles the original data.

Natural Language Processing (NLP)
NLP is a subset of AI that involves machine interpretation and replication of human language. NLP focuses on the study and analysis of linguistics as well as other principles of AI to create an effective method of communication between humans and machines or computers.

ChatGPT
An AI-powered chatbot application built on OpenAI’s GPT-3.5 implementation, ChatGPT accepts text prompts to generate text-based output. Other Gen AI applications exist for other modalities (e.g. Midjourney and Stability AI’s Stable Diffusion, which generate images).

Inputs/Prompts
The phrases, queries, requests, and orders put into the Gen AI tool intended to generate the content. Prompts can take the form of an interactive conversation, where the user refines the prompt in order to move closer to the desired output.

Outputs
The new content created by the Gen AI system in response to the prompt, which can be in the form of text, images, audio, video, etc.

Hallucination
A term that has arisen to describe a Gen AI system’s tendency to generate content not grounded in its training data or in fact and to present it as a legitimate response to a prompt (e.g. false or nonexistent citations of sources).

Understand Gen AI and its commercial models

What kind of platform will you be using?

What is Gen AI?

A form of ML whereby, in response to prompts, a Gen AI platform can generate new outputs based on the data it has been trained on. Its outputs include text, code, images, audio, and video.

Direct Access

Product Extensions

  • Licensing direct access to a large language model (LLM) provider
  • Licensing a set of tools to build one’s own solution
  • Billing may be granular and consumption-based in nature (e.g. data, queries, etc.)
  • Purchasing Gen AI capabilities through product extensions of Tier 1 vendors
    • E.g. Microsoft Copilot from Microsoft, Einstein GPT from Salesforce, Joule from SAP, Sensei from Adobe
  • Likely a user-based model, billed per user per month

For more on foundational AI concepts and industry use cases, see Info-Tech’s An AI Primer for Business Leaders.

Info-Tech Insight
The direct access and product extension models present the consumer with some equivalent risks and some distinct ones. Even where the consumer has little negotiation leverage, understanding and evaluating the contract risks is still required.

Derive your contract negotiation position from preexisting responsible AI principles

Your organization should have already defined its responsible AI principles and have a reasonable understanding of AI capabilities, opportunities, and risks before you begin negotiating with providers.

Develop Responsible AI Guiding Principles: Use this guide to establish foundational responsible AI guiding principles that provide a framework of safeguards for technical and nontechnical staff working with AI technologies, so the organization can leverage their innovative potential while protecting shareholder value from risk.

Where do you fall on the vendor manager Gen AI risk evaluation continuum?

Use this deck to help identify where you fall on the continuum.

The image contains a screenshot of the Vendor Manager Gen AI Risk Evaluation Continuum.

Source: Info-Tech's Adopt a Structured Acquisition Process to Ensure Excellence in Gen AI Outcomes blueprint

Identify areas of contract risk aligned with responsible AI principles

The image contains a screenshot of Info-Tech's Foundational Responsible AI Principles.

Insight summary

Assess the space

In preparation for contract negotiation, take the opportunity to build risk awareness about the nature of these offerings and how you may be impacted.

Manage risk

Learn the difference between the vendor’s standard consumer license terms and enterprise terms and assume initial terms will favor the vendor. In such an unsettled space, establish clarity beforehand about your risk tolerance profile and work to secure terms more favorable to you.

Know when you want to walk away

The terms focused on liability and security will likely be rigid, necessitating a risk analysis around a take-it-or-leave-it standard. Other terms may be more negotiable (e.g. around the levers of solution governance required by the customer to allow the purchase of a vendor’s solution).

Deliverable

The key deliverable where you will document the outcomes from the activity in this deck is:

Prepare to Negotiate Your Generative AI Contract Risk Assessment Tool

Use this tool to help you identify the major areas of risk and roadblocks you will want to pay attention to in the negotiation process for a Gen AI tool.

Understand Gen AI risks

Especially as they pertain to your negotiation process.

Issues related to intellectual property (IP) are the most prominent concerns about the development and use of Gen AI tools. If an organization starts down the road incorrectly, the problem of IP violations could scale rapidly.

Lawsuits are raising the question of whether the methods of training existing large language models violate copyright law and open-source licenses.

If violations do occur, Gen AI customers must understand who is liable for third-party claims of infringement – the provider or you, the customer?

This step could involve the following participants:

  • CIO
  • Chief Data Officer
  • AI Ethics Officer
  • Data Governance Specialist
  • AI Strategy Manager
  • Vendor Manager
  • AI Governance Manager
  • Risk & Compliance Analyst
  • Security Analyst

Understand risks and roadblocks

Risk

  • Something that could potentially go wrong.
  • You can respond to risks by mitigating them:
    • Eliminate: take action to prevent the risk from causing issues.
    • Reduce: take action to minimize the likelihood/severity of the risk.
    • Transfer: shift responsibility for the risk away from IT, toward another division of the company.
    • Accept: where the likelihood or severity is low, it may be prudent to accept that the risk could come to fruition.

Roadblock

  • Things that are not strictly “risks” but that we must still address when acquiring the Gen AI tool.
  • We respond to roadblocks by generating work items.
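The risk/roadblock taxonomy above can be sketched as a simple register. The class names, example entries, and work items below are illustrative assumptions, not part of the Info-Tech tool:

```python
# A minimal sketch of a risk/roadblock register following the taxonomy above.
# All names and example entries are illustrative, not from the Info-Tech tool.
from dataclasses import dataclass, field
from enum import Enum

class Response(Enum):
    ELIMINATE = "eliminate"  # prevent the risk from causing issues
    REDUCE = "reduce"        # minimize the likelihood/severity of the risk
    TRANSFER = "transfer"    # shift responsibility away from IT
    ACCEPT = "accept"        # low likelihood/severity: live with it

@dataclass
class Risk:
    description: str
    response: Response

@dataclass
class Roadblock:
    description: str
    work_items: list[str] = field(default_factory=list)  # roadblocks generate work items

register = [
    Risk("Third-party copyright claims from tool outputs", Response.TRANSFER),
    Risk("Hallucinated outputs reach customers", Response.REDUCE),
    Roadblock("No internal Gen AI usage guidelines yet",
              work_items=["Draft acceptable-use policy", "Train users"]),
]
```

Keeping risks (with a chosen response) and roadblocks (with their work items) in one register makes the distinction explicit when preparing the negotiation position.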

Info-Tech Insight
The terms focused on liability and security will likely be rigid, necessitating a risk analysis around a take-it-or-leave-it standard. Other terms may be more negotiable (e.g. around the levers of solution governance required by the customer to allow the purchase of a vendor’s solution).

Understand the source of the tool’s training data

How transparent is the vendor on the sources of its training data? Did it include copyrighted or protected material?

The excitement around new Gen AI technology and its potential use cases is tempered by the launch of several lawsuits against the companies developing these large language models and offering products derived from them: OpenAI, Meta, Stability AI, Midjourney, Microsoft, and GitHub (ABA Journal, 2023).

The unsettled nature of ongoing lawsuits from the creators of works that made up Gen AI training data means that using the output of these tools produces a level of risk you may or may not be comfortable with. These tools require a massive amount of training data scraped from the internet, and commentators suggest it’s likely these sources include copyrighted material, not just material in the public domain, and data from websites whose terms of use explicitly prohibit this kind of data scraping. As a result, “a court could find Gen AIs problematic under either (i) copyright infringement or (ii) breach of contract” (Zuva, 2023). On the other hand, courts may find that this use of data is defensible under fair use (ABA Journal, 2023). Until these claims have been tested in court, certainty is not possible.

Translate into action items/vendor questions

  1. Do you know the copyright status of the tool’s training data?
  2. Can the provider make assurances that its training data was not copyrighted and/or that any copyrighted material was used with permission?
  3. Will the outcome of the lawsuits served against these companies impact your ability to use the product? If so, how?
  4. Does the vendor anticipate producing audit trails for its outputs? If not now, is it on the roadmap?
  5. Is the user expected to do their own due diligence (e.g. reverse image searches of outputs)? Will this be feasible for you? Will a human be reviewing the outputs to mitigate unintended consequences?

Case Study: Getty sues Stability AI for copyright infringement

SOURCE: US District Court for the District of Delaware. Getty Images (US), Inc. v. Stability AI, Inc. 1:23-cv-00135-UNA. 3 Feb. 2023.

From innovation to lawsuits

As expected, the rise of Gen AI brings scrutiny of the provenance of its training data and the legality of its creation.

In February 2023, Getty Images sued Stability AI for copyright infringement, providing false copyright management information, removal or alteration of copyright management information, trademark infringement, unfair competition, trademark dilution, and deceptive trade practices. In the suit, Getty claims that Stability AI copied over 12 million of its visual assets and their metadata, without permission or remuneration and against Getty’s terms of use, in order to create a product that Getty claims operates as a direct competitor to it. The suit also claims that Stability AI commits trademark infringement and dilutes Getty’s trademark when its outputs incorporate images resembling the Getty watermark. As of February 2024, the case is still working its way through the courts.

Clarify what the vendor will do to avoid copyright infringement

How does the vendor anticipate mitigating claims of IP violation?

Some vendors are relying on prospectively favorable legal outcomes; others have mitigated the risk at the source by curating their training data. Intellectual property law researcher Andres Guadamuz points to Adobe’s assertion that its Firefly model was trained entirely on legal inputs: “This is an indication that they have conducted a thorough investigation of their training sources and are happy that they will not get sued” (Fast Company, 2023).

Customer demand, and even insurer demand, may grow for vendors to protect customers against infringement claims by developing “audit trails” of AI outputs, which “recor[d] the platform that was used to develop the content, details on the settings that were employed, tracking of seed-data’s metadata, and tags to facilitate AI reporting, including the generative seed, and the specific prompt that was used to create the content” (Harvard Business Review, 2023).

“I think it’s really simple. AI systems are not magical black boxes that are exempt from the law, and the only way we’re going to have a responsible AI is if it’s fair and ethical for everyone. So the owners of these systems need to remain accountable. This isn’t a principle we’re making out of whole cloth and just applying to AI. It’s the same principle we apply to all kinds of products, whether it’s food, pharmaceuticals, or transportation.”

– Matthew Butterick, who, with the Joseph Saveri Law Firm, is filing class action lawsuits against Gen AI vendors based on the large language models’ training on copyrighted and open-source material (The Verge, 2022).

Pin down who indemnifies whom

If infringement or damages are claimed by a third party as the result of your organization’s use of the tool, who will be responsible for covering legal costs: you or the provider?

After an analysis of existing public Gen AI terms of service, Waisberg and Lash note that vendors’ terms around indemnification and liability tend to favor the vendor over the customer and rarely offer the customer proactive remedies (e.g. refunds, replacements) in the event of a third-party infringement (Zuva, 2023).

This should be a major area of attention in contract negotiations: “unless you have negotiated a more customer-favorable approach with the provider, you and your colleagues’ use of the tool may subject your company to broad liability and, should your use of the tool result in liability to the company, the terms of use are unlikely to offer much protection from the provider” (Cooley GO, 2023). Where possible, aim “to shift risk to the tool vendor, and reserve rights to remedies in contract terms if a claim is brought based on decisions made using the tool” (Bloomberg Law, 2023).

If you do receive indemnity from the vendor, they will likely have certain requirements that must be met in order to qualify for it.

Translate into action items/vendor questions

  1. Are we, the customer, indemnified against third-party claims of IP infringement by the provider?
  2. Do the terms of service require us, instead, to indemnify the provider against infringement claims created through our use of the tool?


About Info-Tech

Info-Tech Research Group is the world’s fastest-growing information technology research and advisory company, proudly serving over 30,000 IT professionals.

We produce unbiased and highly relevant research to help CIOs and IT leaders make strategic, timely, and well-informed decisions. We partner closely with IT teams to provide everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.

What Is a Blueprint?

A blueprint is designed to be a roadmap, containing a methodology and the tools and templates you need to solve your IT problems.

Each blueprint can be accompanied by a Guided Implementation that provides you access to our world-class analysts to help you get through the project.

Talk to an Analyst

Our analyst calls are focused on helping our members use the research we produce, and our experts will guide you to successful project completion.

Book an Analyst Call on This Topic

You can start as early as tomorrow morning. Our analysts will explain the process during your first call.

Get Advice From a Subject Matter Expert

Each call will focus on explaining the material and helping you to plan your project, interpret and analyze the results of each project step, and set the direction for your next project step.


Author

Emily Sugerman

Search Code: 104338
Last Revised: April 12, 2024
