Privacy Regulation Roundup


This Privacy Regulation Roundup summarizes the latest major global privacy regulatory developments, announcements, and changes. This report is updated monthly. For each relevant regulatory activity, you will find actionable Info-Tech analyst insights and links to useful Info-Tech research that can help you become compliant.

Author(s): Safayat Moahamad, Carlos Rivera, Ahmad Jowhar, Horia Rosian

  • Privacy Regulation Roundup – October 2025

  • Privacy Regulation Roundup – November 2025

  • Privacy Regulation Roundup – December 2025

  • Privacy Regulation Roundup – January 2026

  • Privacy Regulation Roundup – February 2026

  • Privacy Regulation Roundup – March 2026

State of State Privacy Laws in the US

Type: Article

Published: February 2026

Affected Region: USA

Summary: Connecticut has emerged as a key player in US state privacy regulations. The Connecticut Data Privacy Act took effect in July 2023. State officials recently released the 2025 enforcement report during a joint press conference, detailing actions over the past year.

The report covers 70 complaints and over 1,830 data breach notifications, offering insights into areas like universal opt-out signals, privacy notices, and handling of genetic and health data. Interestingly, a significant focus was on insufficient responses to data deletion requests, with Attorney General William Tong highlighting challenges in exercising deletion rights amid a broader surveillance economy.

The report also emphasized children's privacy, especially in messaging apps, gaming platforms, and AI chatbots, following 2024 amendments that enhanced protections. Lawmakers plan to address exemptions for deletion rights and AI chatbot interactions with children, noting that 75% of teenagers have engaged with companion chatbots.

Florida Attorney General James Uthmeier has established the Consumer Harm from International Nefarious Actors Prevention Unit (CHINAPU) to tackle data privacy risks from foreign companies, particularly those linked to China. The unit will investigate data practices and has already subpoenaed e-commerce platform Shein and sought audits of medical device manufacturers with Chinese ties.

In South Carolina, Governor Henry McMaster signed House Bill 3431, the Age-Appropriate Design Code, which includes unique elements like broad coverage thresholds, data minimization standards, opt-out rights for personalized recommendations, and third-party audit requirements submitted to the attorney general. Industry groups urged a veto, and litigation over similar laws continues elsewhere.

Meanwhile, Minnesota's comprehensive privacy law is now fully enforceable after its cure provision expired on January 31. Attorney General Keith Ellison is encouraging residents to assert their rights and report issues; his office has already received 200 complaints, mostly concerning deletion rights, and issued dozens of enforcement notices that led to quick fixes.

Analyst Perspective: These developments show a maturing state-level approach to privacy enforcement, where initial laws are being refined based on real-world data and emerging threats like AI and foreign data access. Organizations should prioritize robust deletion processes, children's data protections, and supply chain audits to avoid scrutiny as enforcement ramps up without mandatory cure periods in places like Minnesota. Proactive compliance reviews, especially around health and genetic data or international ties, will be essential to navigate this patchwork of regulations effectively.

Analyst: Carlos Rivera, Principal Advisory Director – Security & Privacy

More Reading:


Ontario’s IPC Addresses Key Risks and Guardrails for AI in Healthcare

Type: Article

Published: February 2026

Affected Region: Canada

Summary: The security and privacy risks of AI and their implications for the healthcare sector were the main topic of a data privacy workshop held in January, which included the senior health policy advisor from Ontario’s Office of the Information and Privacy Commissioner (IPC). The workshop addressed the IPC’s aim to keep pace with the AI landscape and implement the necessary guardrails.

The IPC first developed principles for the responsible use of AI; its guidance then expanded to address AI notetakers and scribes in healthcare settings. This will help healthcare professionals adopt a privacy-first approach and focus on core governance and accountability measures to protect PHI and reduce risks of bias. The IPC also provided checklists for procurement professionals, developers, and users in the healthcare sector, covering considerations throughout the AI lifecycle when evaluating potential AI solutions.

The workshop also addressed key considerations for drafting AI policies, particularly around data provenance and the degree of consent given for that data. The importance of establishing roles and responsibilities for handling data that might be used for AI initiatives was also addressed, with emphasis on the role of data custodians. The insights gained from the workshop will help drive further discussions and inform the agency’s strategy to monitor AI integration within the healthcare sector.

Analyst Perspective: The proliferation of AI systems has the potential to revolutionize the healthcare sector and enable more efficient services and processes. With more than 90% of physicians reporting a significant administrative burden from manual tasks such as filling out paperwork, and almost half identifying AI as a potential solution, it is evident that AI technologies can enable healthcare professionals to focus more on providing care to their patients.

That said, key risks of AI should not be overlooked, and ensuring appropriate guardrails are in place for high-risk domains like healthcare should be prioritized. The development of guiding principles by the IPC could be viewed as a regulatory benchmark for healthcare organizations in Ontario. This could also influence audits and, by extension, public trust.

Analyst: Ahmad Jowhar, Senior Research Analyst – Security & Privacy

More Reading:


Machine Unlearning: Training AI to Forget

Type: Article

Published: February 2026

Affected Region: All

Summary: Generative AI technologies challenge the traditional application of data deletion rights and practices because these models encode information as diffused statistical associations rather than discrete records. In response, machine unlearning techniques have seen significant advances, such as:

  • Source‑free unlearning technique
  • Example‑tied dropout (ETD)
  • Redirection for erasing memory (REM)

These techniques offer promising pathways for removing specific data influences without full model retraining, though their scalability to large‑scale language models remains uncertain. Given that generative systems may still infer or reconstruct information even after targeted deletion efforts, absolute erasure is technically unattainable.
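To make the intuition concrete, the sketch below illustrates the idea behind example-tied dropout as we read it: a reserved slice of hidden units is tied to each group of training examples so that memorization is steered into that slice, and "forgetting" a group amounts to permanently zeroing its slice, with no retraining. The module, names, and dimensions here are illustrative assumptions, not the published method's API.

  import torch
  import torch.nn as nn

  class ExampleTiedDropout(nn.Module):
      """Toy example-tied dropout: hidden units are split into a shared
      block plus one reserved slice per example group; unlearning a group
      permanently disables its slice."""

      def __init__(self, shared_dim: int, num_groups: int, slice_dim: int):
          super().__init__()
          self.shared_dim = shared_dim
          self.slice_dim = slice_dim
          # Tracks which groups have been "deleted."
          self.forgotten = torch.zeros(num_groups, dtype=torch.bool)

      def forward(self, h: torch.Tensor, group_ids: torch.Tensor) -> torch.Tensor:
          # h: (batch, shared_dim + num_groups * slice_dim)
          shared, reserved = h[:, : self.shared_dim], h[:, self.shared_dim :]
          mask = torch.zeros_like(reserved)
          for i, g in enumerate(group_ids.tolist()):
              if not self.forgotten[g]:
                  # Each example activates only its own group's slice, so
                  # example-specific memorization concentrates there.
                  mask[i, g * self.slice_dim : (g + 1) * self.slice_dim] = 1.0
          return torch.cat([shared, reserved * mask], dim=1)

      def unlearn(self, group_id: int) -> None:
          # "Delete" a group's influence by switching off its slice.
          self.forgotten[group_id] = True

  layer = ExampleTiedDropout(shared_dim=64, num_groups=10, slice_dim=4)
  h = torch.randn(2, 64 + 10 * 4)
  out = layer(h, group_ids=torch.tensor([3, 7]))
  layer.unlearn(3)  # subsequent passes zero group 3's reserved units

The appeal of approaches like this is that deletion becomes a constant-time switch rather than a retraining job; the open question, as noted above, is whether such isolation survives at the scale of large language models.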

As a result, policymakers and regulators are increasingly oriented toward a technically grounded right to unlearn, emphasizing feasible deletion standards, rigorous statistical guarantees, and enhanced transparency and accountability measures to safeguard individual autonomy within the constraints of contemporary AI architectures.

Analyst Perspective: Today’s data deletion laws don’t line up with how generative AI functions, since these models blend training data into their internal statistical patterns rather than storing anything you can cleanly erase. New unlearning techniques represent encouraging early steps toward making AI systems forget specific information without retraining everything from scratch, but they’re still far from ready to scale.

At the same time, regulators continue to expect companies to honor deletion rights, leaving organizations in a tricky spot where compliance is required but technically difficult. The result is growing uncertainty, especially for companies handling cross‑border data or relying heavily on third‑party models.

Organizations should start preparing now by investing in architectures, documentation, and governance practices that support measurable influence removal, because the regulatory environment is moving toward clearer expectations, and a new generation of AI governance tools focused on unlearning is likely not far behind.

Analyst: Horia Rosian, Director – Cybersecurity & Privacy, Workshops

More Reading:


Disney’s CCPA Settlement: A Case for Privacy Engineering

Type: Enforcement

Published: February 2026

Affected Region: USA

Summary: California's Attorney General announced a US$2.75 million settlement with the Walt Disney Company, the largest to date under the California Consumer Privacy Act (CCPA). The case centered on failures to properly implement consumer opt-out rights related to the sale or sharing of personal information for targeted advertising. Regulators found that opt-out mechanisms were fragmented and global privacy control signals were not consistently honored across user accounts.

The settlement reinforces a key enforcement principle: if a company can unify user identity for advertising and analytics, it must also unify opt-out rights across its ecosystem. The agreement requires injunctive relief, ongoing compliance reporting, and heightened consequences for future violations, signaling that regulators are scrutinizing not just privacy notices but the actual functionality of opt-out systems. Regulators are testing whether:

  • Suppression mechanisms truly stop downstream data flows.
  • Opt-out processes are easy to use.
  • Controls operate consistently across platforms.

The settlement makes it evident that consent compliance is now an operational and architectural discipline, not merely a policy exercise.
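As a rough illustration of what a functioning opt-out means in practice, here is a minimal server-side sketch of honoring the Global Privacy Control (GPC) signal, which browsers send as the Sec-GPC: 1 request header. The Flask framing and the record_opt_out helper are assumptions for the sketch; the helper stands in for whatever consent store actually suppresses downstream sale/sharing flows.

  from flask import Flask, g, request

  app = Flask(__name__)

  def record_opt_out(user_id: str) -> None:
      """Hypothetical hook: persist the opt-out and suppress downstream
      sale/sharing flows (ad pixels, analytics exports, broker feeds)."""
      ...

  @app.before_request
  def apply_gpc_signal():
      # Per the GPC spec, "Sec-GPC: 1" expresses an opt-out of sale/sharing.
      g.gpc_opt_out = request.headers.get("Sec-GPC") == "1"
      user_id = request.cookies.get("user_id")
      if g.gpc_opt_out and user_id:
          # The enforcement principle above turns on exactly this point: if
          # identity is unified for advertising, the opt-out must be too.
          record_opt_out(user_id)

  @app.route("/")
  def index():
      return {"sale_sharing_opted_out": g.gpc_opt_out}

The hard part regulators probed is not reading the header but the propagation step: demonstrating that one signal reliably suppresses every downstream flow tied to the same identity.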

Analyst Perspective: Regulators are no longer focused primarily on privacy notices or formal disclosures. They are testing whether technical mechanisms function across systems. This reflects a shift from legal compliance to technical accountability, where data flows, tracking technologies, and identity resolution systems are subject to investigations and audits. They expect opt-out preferences to apply as seamlessly as unifying user data across devices and services for advertising and analytics.

The case also underscores the importance of privacy to user experience, and that ease of use is enforceable. Opt-out tools must be clear, conspicuous, and simple, aligning with broader regulatory concern around dark patterns and friction-based design.

Strategically, the settlement reinforces that privacy governance has become an operational discipline. Organizations must treat consent compliance as an architectural and engineering priority, particularly as multi-state coordination increases enforcement risk.

Analyst: Safayat Moahamad, Research Director – Security & Privacy

More Reading:


If you have a question or would like to receive these monthly briefings via email, submit a request here.