This Privacy Regulation Roundup summarizes the latest major global privacy regulatory developments, announcements, and changes. This report is updated monthly. For each relevant regulatory activity, you can find actionable Info-Tech analyst insights and links to useful Info-Tech research that can assist you with becoming compliant.
AI Governance Is Quietly Becoming an Insurance Standard
Type: Article
Enforced: January 2026
Affected Region: All
Summary: AI is exposing new risks that traditional insurance models are not yet equipped to handle. AI-related losses do not fit neatly into existing categories such as cyber, general liability, or professional liability coverage. Insurers are still grappling with how to classify, price, and underwrite AI risk.
Insurance organizations lack the historical claims data needed to standardize coverage or pricing. As a result, some insurers are cautious, exploring exclusions or scrutinizing how organizations use and govern AI. While AI governance is not yet a formal underwriting requirement, insurers are increasingly considering whether organizations have basic AI policies, data controls, employee training, and safeguards against misuse, fraud, and unintended outputs.
Growing concerns around deepfakes, synthetic identities, and chatbot-related privacy violations are prompting insurers to look for stronger controls, including authentication measures, human-in-the-loop oversight, limits on public AI models handling sensitive data, and better logging and auditing of AI outputs. On the other hand, some insurers have chosen to innovate and take the opportunity to introduce coverage for deepfake-related reputational harm, including forensic, legal, and crisis-management services.
Nonetheless, the insurance industry is at a crossroads as AI-related claims are increasing faster than its ability to model them.
Analyst Perspective: Insurance markets depend on predictability, classification, and historical loss data. AI undermines all three. Insurance organizations may perceive it as a risk multiplier that cuts across existing lines, which may explain insurers implicitly using AI governance maturity as a proxy for risk. High-profile incidents (e.g. Air Canada, Google) illustrate that AI liability is often less about malicious intent and more about inadequate controls, oversight, and accountability. While not yet formalized, underwriting is beginning to resemble regulatory compliance, with growing emphasis on human-in-the-loop controls, auditability, employee training, and disciplined data use.
Unlike traditional cyber threats that target systems and networks, AI-enabled attacks exploit trust, identity, and publicly available data. That is why insurers like Coalition are moving beyond pure loss indemnification toward response-oriented coverage. The value proposition is not “preventing breach” but “containing reputational and operational fallout.” Organizations that proactively integrate AI governance into enterprise risk management are likely to influence how AI risk is priced, categorized, and ultimately insured.
Analyst: Safayat Moahamad, Research Director – Security & Privacy
More Reading:
- Source Material: IAPP
- Related Info-Tech Research:
New Research on AI Agents in a Simulated Economic Market
Type: Research
Published: November 2025
Affected Region: USA
Summary: A research team from Microsoft explored the possibility of autonomous AI agents navigating real-world digital marketplaces, including negotiating, pricing, and executing transactions. The team simulated an open-source environment called Magnetic Marketplace that allows agent-to-agent communication between customer agents and business agents. The experiments involved 100 customers and 300 businesses and compared results between proprietary models (GPT, Gemini) and open-source models (OSS, Qwen). Customer agents were provided with a list of items/amenities, and satisfaction was based on meeting the criteria as well as cost. Unfortunately, biases emerged across nearly all the large language models (LLMs) tested.
The agents preferred transacting with service agents that responded fastest, as opposed to comparing the quality of offers. As a result, their search behavior provided suboptimal results since the agents did not produce exhaustive research on products/offerings from businesses. In fact, as search results increased, performance across the models decreased. The agents preferred a “fast and good enough” option compared to a higher quality and well-researched option.
The agents were also vulnerable to manipulation strategies, such as appeals to authority using fake credentials or social proof using fake positive customer reviews, and prompt injection attacks that could override instructions.
The biases presented encourage the importance of human-in-the-loop designs to continue to monitor and influence model behavior in economic environments toward better and higher quality strategic outcomes.
Analyst Perspective: The biases and vulnerabilities presented from the models are not new. One may think that agents would support a customer in deeper analysis of products over a large range of businesses. However, models can suffer from “needle in a haystack” problems: When models are presented with lots of dense information, relevant details can get lost in the mix, resulting in a less nuanced output.
Additionally, a study done by the Wharton Generative AI Labs in July 2025 found that LLMs fall prey to persuasion tactics, especially authority, commitment, and unity, even when guardrails are in place. These tendencies occur since LLMs are trained on data based on how humans act and have unfortunately adopted human-like vulnerabilities.
In Microsoft’s study, even without any attempt to skew outcomes, the emergent behavior from the agents inappropriately prioritizes speed and visibility of businesses over accuracy of the request. With interference, they are ripe for market manipulation.
As organizations continue to test LLMs and the agents built on top of them, it’s important to recognize the positives and shortcomings of what these tools can and cannot do, and how efficacy of actions varies by LLM.
Analyst: Pearl Almeida, Research Director – Security & Privacy
More Reading:
- Source Material: Microsoft, Towards Data Science
- Related Info-Tech Research:
Employee Monitoring: A Governance Problem
Type: Article
Published: December 2025
Affected Region: USA and Canada
Summary: Employee monitoring is generally permitted in both the United States and Canada, but it is tightly constrained by privacy, employment, and data protection laws. Across both jurisdictions, the central requirement is that monitoring must serve a legitimate business purpose and be implemented in a reasonable, proportionate, and transparent manner.
In the United States, the legal framework is highly fragmented. Federal laws allow monitoring for legitimate business reasons but restrict unauthorized interception of communications, while state laws add additional obligations. California is the most stringent, applying a data-minimization standard under the CCPA that requires monitoring to be necessary, proportionate, and consistent with employee expectations.
In Canada, employee monitoring is framed as a balance between organizational needs and employee privacy rights. The applicable rules depend on whether the employer is federally regulated or subject to provincial law. While notice is generally required nationwide, consent requirements vary. Ontario uniquely mandates a written electronic monitoring policy, while Quebec applies the strictest standard overall, requiring monitoring to be minimally intrusive, clearly justified, disclosed in detail, and typically consent-based.
Employers in both countries, generally, approach monitoring through a privacy-by-design lens. Typically, this is done by limiting collection to what is necessary, documenting justification and proportionality, providing clear and ongoing notice, and communicating internal policies. This approach is increasingly important as monitoring technologies become more automated, data-driven, and intrusive.
Analyst Perspective: As monitoring technologies become more data-intensive and AI-driven, regulators are moving beyond simple notice requirements toward closer scrutiny of whether monitoring is truly necessary, proportionate, and aligned with employee expectations. This further magnifies a growing concern that productivity analytics, behavioral inference, and automated evaluation can overreach and undermine trust in the employment relationship.
From an organizational perspective, employee monitoring can no longer be treated as a purely HR or IT decision. It increasingly resembles high-risk data processing that demands executive oversight, documented justification, and privacy-by-design controls.
Analyst: Safayat Moahamad, Research Director – Security & Privacy
More Reading:
- Source Material: IAPP
- Related Info-Tech Research:
If you have a question or would like to receive these monthly briefings via email, submit a request here.