AI Transformation Brief – March 2026

News and analysis from the major AI vendors – Anthropic, OpenAI, AWS, Google, and Microsoft Azure – plus broader industry news and a brief review of an organization benefiting from its implementation of AI.

Author(s): Bill Wong

AI Transformation Brief

VOL 3 MARCH 2026

Featuring AI best practices and insights to enable our members to strategize, plan, develop, deploy, manage, and govern AI-based technologies and solutions.

In This Issue – AI in the News | AI Research Highlights | Vendor Spotlight | Upcoming Events & Resources

AI IN THE NEWS

Anthropic challenges the US administration in court, releases previews of Claude Code Review and Claude Code Security

Read the Anthropic response to the US administration

Claude Code Review

Claude Code Security

On March 9, 2026, Anthropic filed two lawsuits claiming the US government labeled it a "supply chain risk" as a punitive response after Anthropic declined to remove contractual restrictions preventing its AI model, Claude, from being used for mass surveillance or lethal autonomous weapons.

Anthropic released a preview of Claude Code Security on February 20 and a preview of Claude Code Review, a multi-agent code review system, on March 9.

ANALYST ANALYSIS

Anthropic claims the designation violates several legal standards, arguing that the “supply chain risk” label is potentially devastating, jeopardizing hundreds of millions of dollars in existing and future contracts with businesses and government agencies that use Anthropic’s services.

Claude Code Review is a multi-agent, team-based review system modeled after the one used internally by Anthropic.

Claude Code Security can scan your codebase to identify vulnerabilities and generate targeted patches for human review. It helps teams close security gaps that often slip past traditional tools.

These two products showcase Anthropic’s continued investment in AI agents.

OpenAI releases GPT-5.4, hires OpenClaw founder, and acquires Promptfoo

GPT-5.4 announcement

OpenAI hires OpenClaw founder

Promptfoo acquisition

On February 14, 2026, OpenAI hired the founder of OpenClaw, Peter Steinberger, to drive the company’s personal agent strategy.

On March 5, 2026, OpenAI released its most capable model, GPT-5.4, with improved agentic AI capabilities and the power to interact directly with desktop interfaces and professional software suites.

On March 9, 2026, OpenAI acquired Promptfoo, a security and vulnerability platform that proactively identifies and mitigates risks such as prompt injections and data leaks.

ANALYST ANALYSIS

The GPT-5.4 release represents the continued shift from conversational AI to autonomous agency. With the introduction of native computer use, the AI model can interpret a computer’s visual interface via screenshots and execute commands using a virtual mouse and keyboard. This new capability allows it to use legacy software and navigate complex web portals.

While Peter Steinberger will be an OpenAI employee, OpenClaw will continue to operate as an open-source project under an independent foundation that OpenAI will support.

Promptfoo acts as an automated red teamer, constantly testing the AI agent’s actions to ensure it does not leak data or fall victim to malicious instructions hidden on the web.

Together, GPT-5.4, OpenClaw, and Promptfoo support the delivery of an enterprise agentic AI platform designed to support AI coworkers.

INFO-TECH AI INSIGHTS


Microsoft introduces Frontier Suite, Copilot Cowork (integration of Claude 4.6), and Azure Site Reliability Engineering Agent

Frontier Suite

Copilot Cowork

Azure Site Reliability Engineering Agent

On March 9, 2026, Microsoft released the M365 E7 Frontier Suite, a platform designed for agentic AI that allows agents to navigate enterprise systems with the same authentication as human employees.

On the same day, Microsoft announced Copilot Cowork, which represents the availability of Claude 4.6 within Microsoft 365 Copilot.

On March 10, Microsoft released the Azure Site Reliability Engineering (SRE) Agent, designed to autonomously manage cloud infrastructure.

ANALYST ANALYSIS

Microsoft Frontier Suite bundles the existing E5 security foundation with Microsoft 365 Copilot and two brand-new governance tools: Agent 365 and the Entra Suite for AI identities. Its core capability is providing a unified control plane that allows IT administrators to manage, secure, and audit autonomous AI agents as if they were human employees.

The integration of Claude 4.6 (Opus and Sonnet) includes "model-diverse design," allowing Copilot to intelligently route tasks to either OpenAI or Claude 4.6 based on which model offers superior reasoning for the specific context.
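The routing concept behind a model-diverse design can be sketched as a simple lookup from task category to preferred model. This is an illustrative sketch only; the task categories and model preferences below are assumptions, not Microsoft's published routing logic:

```python
# Illustrative sketch of model-diverse task routing, NOT Microsoft's
# actual Copilot implementation. Categories and preferences are
# hypothetical.

ROUTING_TABLE = {
    "deep_reasoning": "claude-4.6-opus",     # long-horizon analysis
    "code_generation": "claude-4.6-sonnet",  # coding tasks
    "general_chat": "gpt-5.4",               # everyday requests
}

def route(task_category: str, default: str = "gpt-5.4") -> str:
    """Pick the model for a task category, falling back to a default."""
    return ROUTING_TABLE.get(task_category, default)

print(route("deep_reasoning"))   # claude-4.6-opus
print(route("meeting_summary"))  # gpt-5.4 (fallback)
```

In practice such a router would likely score models on the fly rather than consult a static table, but the key design point is the same: the caller asks for a task to be handled, not for a specific model.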

The Azure SRE Agent acts as an autonomous operations teammate designed to maintain cloud uptime and reduce firefighting for IT teams. Unlike standard chatbots, this agent operates with deep context, meaning it is natively grounded in an organization's specific source code, logs, and deployment configurations to proactively diagnose and mitigate incidents.

Amazon invests $50B into OpenAI, Amazon Bedrock AgentCore, and Cerebras partnership

$50B announcement

AgentCore

Cerebras

On February 27, 2026, Amazon announced a $50 billion multiyear partnership with OpenAI. As part of this deal, AWS and OpenAI are co-creating a Stateful Runtime Environment that will allow OpenAI models to be integrated directly into Amazon Bedrock.

On March 3, Amazon announced the general availability of policy management in Amazon Bedrock AgentCore to enable centralized, fine-grained controls for agent-tool interactions.

On March 13, Amazon announced a partnership with Cerebras to deliver accelerated inference compute performance.

ANALYST ANALYSIS

The Stateful Runtime Environment is expected to allow AI models to maintain persistent memory, identity, and context across long-running tasks, enabling them to navigate complex workflows across various software tools and data sources without losing track of previous steps.

AWS Bedrock AgentCore Policy functions independently of your agent code, allowing security, compliance, and operations teams to define tool access and input validation rules without modifying the agent itself. Development teams can create policies using natural language that is converted automatically into Cedar, the AWS open-source policy language.
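For illustration, a Cedar policy of the kind such natural-language rules might compile to could look like the following. The entity types, identifiers, and attribute names here are hypothetical, not AgentCore's actual schema; only the permit/when structure reflects Cedar itself:

```cedar
// Hypothetical example: permit an agent to invoke a payments tool
// only for amounts at or below a set threshold.
permit (
  principal == Agent::"invoice-assistant",
  action == Action::"invokeTool",
  resource == Tool::"payments-api"
) when {
  context.amount <= 1000
};
```

Because policies like this live outside the agent code, a compliance team can tighten the threshold without redeploying the agent.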

The Cerebras partnership is intended to deliver some of the fastest AI inference performance available for generative AI applications and LLM workloads.

Google releases Gemini 3.1 Pro, Flash Image, and Flash-Lite

Gemini 3.1 Pro

Gemini 3.1 Flash Image

Gemini 3.1 Flash-Lite

On February 19, 2026, Google released its most capable model, Gemini 3.1 Pro, which introduces "thinking levels," allowing users to choose how much internal reasoning the AI model performs before responding.

On February 26, Google released Gemini 3.1 Flash Image (aka Nano Banana 2), its most capable image generator.

On March 3, Google released Gemini 3.1 Flash-Lite, positioned as the company’s most cost-efficient and fastest model in the Gemini 3 family.

ANALYST ANALYSIS

Gemini 3.1 Pro delivers improved software engineering capabilities and usability, with agentic improvements in domains like finance and spreadsheet applications. It’s best suited for high-precision reasoning, complex logic, and multistep orchestration.

Gemini 3.1 Flash Image enables professional-grade visual generation and conversational editing at an accessible speed and price.

Gemini 3.1 Flash-Lite is for high-volume, repetitive, or latency-sensitive tasks where "good enough" intelligence at massive scale is the goal.

Google’s AI models function as a tiered "intelligence stack" where Gemini 3.1 Pro acts as the high-reasoning architect for complex problem-solving, Flash-Lite serves as the high-speed agent for mass-scale multimodal workflows, and Flash Image provides the generative engine for high-fidelity visual production and iterative editing.


India AI Impact Summit 2026 and sovereign AI

Read India AI Impact Summit 2026 resources

From February 16 to 21, 2026, the India AI Impact Summit 2026 was held in New Delhi. It was positioned as the first global AI event hosted in the Global South and attracted approximately 600,000 onsite attendees from over 100 countries.

Major announcements included the New Delhi Declaration, an agreement endorsed by 92 countries and international organizations, committing to the principle that AI's benefits must be shared by all humanity rather than concentrated in a few nations or corporations.

ANALYST ANALYSIS

Over $200 billion in investment pledges were announced, much of it in support of India’s sovereign AI strategy, IndiaAI Mission, which includes the government’s $1.2 billion investment to build its domestic AI ecosystem. Key announcements included:

  • The development of 12 indigenous foundation models, including Param2 (a 17-billion-parameter model supporting all 22 scheduled Indian languages)
  • The addition of 20,000 GPUs to strengthen national infrastructure, bringing the total to 38,000 GPUs
  • The MANAV vision (moral, accountable, national sovereignty, accessible, valid), unveiled by Prime Minister Modi as India's AI governance framework

This event marks India’s strategic shift in focus from the safety of AI to deploying AI for the "welfare of all" and becoming a leading voice for digital and AI sovereignty.

NVIDIA and Palantir joint sovereign AI Operating System Reference Architecture

Read the Palantir announcement

On March 12, 2026, Palantir and NVIDIA announced a complete sovereign AI operating system – a turnkey AI supercomputer-in-a-box positioned to address strict data sovereignty, security, and performance requirements for sensitive government and enterprise workloads.

According to McKinsey & Company, the sovereign AI ecosystem is projected to reach $600 billion by 2030.

ANALYST ANALYSIS

The sovereign AI Operating System Reference Architecture (AIOS-RA) is composed of the following technology layers:

  • NVIDIA AI Infrastructure (Hardware Layer): NVIDIA servers
  • Palantir Compute Infrastructure (Orchestration Layer): Hardened Kubernetes substrate running Foundry services (catalog, build, multipass, etc.)
  • Unified Management Plane (Control Layer): Rubix (zero trust Kubernetes) and Apollo (autonomous deployment and lifecycle management)
  • Full-Stack NVIDIA Software Acceleration (Optimization Layer): NVIDIA AI Enterprise, NVIDIA CUDA-X libraries, NVIDIA Nemotron open models, and NVIDIA Magnum IO
  • AI Platform (Application Layer): Enterprise AI platform connecting LLMs to organizational data and operational systems

These layers integrate NVIDIA’s hardware platform with Palantir's software platforms to enable nations to build, deploy, and maintain complete control over their own domestic AI infrastructure and data.

Lloyds Banking Group expects over £100 million in value from next-generation AI in 2026

Read the Lloyds Banking Group announcement

On January 29, 2026, the Lloyds Banking Group announced that generative AI delivered around £50 million of value for its organization in 2025, with more than £100 million in additional value expected this year from generative and agentic AI as the Group extends its AI leadership position.

ANALYST ANALYSIS

In 2025, the Group implemented over 50 generative AI solutions, transforming how customers interact with the bank. These include faster, more intuitive in-app search and quicker, more accurate responses across operations, enabling colleagues to offer improved support in branches and remotely.

In 2026, the Group will expand agentic AI bank-wide, optimizing current tools and introducing numerous new applications. This includes major strategic investments to deliver faster, seamless experiences for its 28 million customers.


AI RESEARCH HIGHLIGHTS

RESEARCH SPOTLIGHT:
Establish Your Adaptive AI Governance Roadmap (New!)

Sample of the Establish Your Adaptive AI Governance Roadmap.

Source: Establish Your Adaptive AI Governance Roadmap: From Principles to Practice

Info-Tech Research Group

VENDOR SPOTLIGHT:

UPCOMING AND RECENT EVENTS

LEVEL-UP Series: AI Workforce Development

  • Regina, SK, April 8, 2026
  • Austin, TX, May 14, 2026

Webinar: Building the Sovereign AI Strategy With a Responsible AI Mandate – May 7, 2026, 10:00-11:00 a.m. ET. Registration

AI AND DATA ANALYTICS SOLUTIONS – RESOURCES

AI EDITOR-IN-CHIEF

Bill Wong – Info-Tech AI Research Fellow

INFO-TECH RESEARCH GROUP