This Privacy Regulation Roundup summarizes the latest major global privacy regulatory developments, announcements, and changes. This report is updated monthly. For each relevant regulatory activity, you can find actionable Info-Tech analyst insights and links to useful Info-Tech research that can help you achieve compliance.
New Rules of Engagement for Platform-Level Data Access
Type: Enforcement
Announced: March 2026
Affected Region: EU
Summary: The conflict originated when LinkedIn suspended accounts associated with Teamfluence, a company that developed a browser tool to collect and analyze LinkedIn user activity for sales and business purposes. Teamfluence challenged the suspensions, arguing that LinkedIn’s actions violated the EU Digital Markets Act (DMA) and constituted an abuse of market power.
BrowserGate, an EU-based digital-fairness advocacy group and campaign initiative, amplified the dispute by framing LinkedIn’s actions as part of a broader pattern of unlawful tracking. It further alleged that LinkedIn was secretly conducting illegal browser surveillance and undermining its obligations under EU competition law.
In March 2026, the Munich Regional Court ruled on a request for a preliminary injunction, rejecting the plaintiffs’ application and finding that LinkedIn’s actions were lawful. The court found no evidence that LinkedIn engaged in covert or illegal browser spying, and it held that the Digital Markets Act does not grant third parties an unrestricted right to access platform data outside authorized channels. Most importantly, the court found that Teamfluence’s software involved data processing practices that conflicted with GDPR requirements, which justified LinkedIn’s decision to intervene.
While the German court validated LinkedIn’s authority to restrict automated data extraction to protect users and enforce platform rules, recent lawsuits in the US argue that LinkedIn’s own anti-scraping measures may cross privacy boundaries by probing users’ devices without sufficiently clear disclosure or consent.
Analyst Perspective: The ruling signals that European courts are likely to support platforms that restrict automated data harvesting when they can plausibly link organization-level policy enforcement actions to consumer protection and regulatory compliance. In this case, the court affirmed that platforms are regulated environments with legitimate authority to control how data is accessed, extracted, and reused.
This also reflects an emerging European legal view that the technical capability to access data does not equate to a legal entitlement to use it, especially when automation changes the scale and risk profile of processing. That points to another takeaway from this case: courts will look to balance terms-of-service enforcement with privacy law objectives. The restrictions LinkedIn imposed on Teamfluence were not viewed merely as commercial self-interest but as a defensible mechanism for managing GDPR risk, recognizing that uncontrolled third-party automation can undermine purpose limitation and transparency.
The ruling makes it clear that platforms are expected to actively prevent misuse of personal data, and that automated data collection is increasingly evaluated through privacy and accountability lenses. LinkedIn’s legal victory in Germany strengthens its enforcement rights, while the US lawsuits will test how far those enforcement mechanisms can extend before they are themselves perceived as privacy intrusions.
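To make the enforcement mechanics concrete, the sketch below shows one simple way a platform might distinguish human-scale browsing from automated harvesting: a sliding-window request-rate check. This is a minimal illustration under assumed thresholds, not LinkedIn’s actual detection logic; every name and number here is hypothetical.

```python
# Hypothetical sketch: flagging likely automated harvesting with a
# sliding-window request-rate check. Thresholds are illustrative only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60        # assumed observation window
MAX_HUMAN_REQUESTS = 30    # assumed ceiling for human-scale browsing

_request_log: dict[str, deque] = defaultdict(deque)

def is_likely_automated(client_id: str, now: float | None = None) -> bool:
    """Record a request and return True if the client's rate in the last
    window exceeds what a person plausibly generates by hand."""
    now = time.time() if now is None else now
    window = _request_log[client_id]
    window.append(now)
    # Evict timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_HUMAN_REQUESTS
```

Real platforms layer many such signals (headless-browser fingerprints, navigation patterns, and more), but the underlying principle is the same: rate and scale, not identity, are what separate delegated human use from bulk extraction.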
Analyst: Safayat Moahamad, Research Director – Security & Privacy
More Reading:
- Source Material: Ars Technica, BrowserGate
- Court Ruling: Landgericht München I [Munich Regional Court I], 11 March 2026, 37 O 104/26.
- Related Info-Tech Research:
Limitations of Automated Cybersecurity Decisions
Type: Article
Published: April 2026
Affected Region: All
Summary: Since the dawn of civilization and the invention of writing, humans have offloaded memory onto external supports, freeing up cognitive space for other forms of reasoning. Technical revolutions such as printing, computing, and the internet reshaped society and advanced this process of human augmentation. It is argued that the emergence of AI and LLMs has provided a scalable version of human cognition, in the sense of analyzing inputs and generating outputs that resemble a decision. However, AI and most machines may lack the ability to form judgment, an important human element of reasoning.
Human judgment is embodied and contextual, often drawing on perception, emotion, and social understanding to formulate a decision. LLMs generate probabilistic output rather than reasoned decisions, which becomes problematic as the technology is increasingly leveraged not only to assist with decisions but also to automate them. Cybersecurity is a domain where ambiguity is ever-present and where humans must analyze with reflective judgment, revisiting alerts and questioning whether observed patterns are meaningful. LLMs may not have a clear understanding of organizational risk tolerance or regulatory exposure; they generate output, not accountable decisions. This matters as cybersecurity incidents increase in scale and cost and more organizations explore automated cybersecurity capabilities.
Analyst Perspective: The proliferation of AI systems raises the important question of how much autonomy should be given to technologies for cybersecurity decision-making. AI can process vast amounts of threat data faster than humans can and flag anomalies, but it can’t truly understand the context or the ethical implications of its actions. Relying solely on AI introduces operational and governance risks, which are exacerbated when accountability is unclear and human oversight is removed from critical decisions.
Hence, organizations should explore adopting an augmented intelligence approach, investing in human-in-the-loop AI systems. Under this model, the core AI capabilities organizations seek, such as handling repetitive, data-intensive tasks, are delivered by technology, while analysts interpret outputs and make the final decisions, keeping accountability with humans. When analysts apply reflective judgment while leveraging AI to improve productivity, AI is experienced as empowering people rather than replacing them.
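As a concrete illustration of the augmented intelligence pattern described above, the sketch below gates automated handling on model confidence: only clearly benign alerts are closed automatically, and everything else lands in front of an analyst, so the accountable decision stays human. The thresholds, field names, and routing labels are assumptions for illustration, not a reference design.

```python
# Hypothetical human-in-the-loop triage gate for security alerts.
# The model triages; humans make any decision that carries risk.
from dataclasses import dataclass

AUTO_CLOSE_THRESHOLD = 0.05   # assumed: model near-certain the alert is benign
ESCALATE_THRESHOLD = 0.90     # assumed: model near-certain it is malicious

@dataclass
class Alert:
    alert_id: str
    description: str
    risk_score: float  # 0.0 (benign) .. 1.0 (malicious), from any classifier

def triage(alert: Alert) -> str:
    """Route an alert: automation absorbs the clear noise, humans decide the rest."""
    if alert.risk_score < AUTO_CLOSE_THRESHOLD:
        return "auto-close"             # repetitive, low-risk work offloaded to AI
    if alert.risk_score > ESCALATE_THRESHOLD:
        return "urgent-analyst-review"  # even confident detections get a human decision
    return "analyst-review-queue"       # ambiguity requires reflective judgment
```

The design choice worth noting is that no path lets the model take a containment action on its own; the model’s score only changes how quickly a human looks at the alert.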
Analyst: Ahmad Jowhar, Senior Research Analyst – Security & Privacy
More Reading:
- Source Material: IAPP
- Related Info-Tech Research:
Digital Agency and the Future of Online Responsibility
Type: Article
Published: April 2026
Affected Region: USA
Summary: Amazon’s lawsuit against Perplexity marks one of the first major legal confrontations over agentic AI. It is alleged that Perplexity’s Comet AI agent violated the Computer Fraud and Abuse Act (CFAA) by accessing Amazon accounts while masking its identity and failing to comply with platform conditions of use. A US District Court granted Amazon a preliminary injunction in March 2026.
Based on two landmark cases, hiQ Labs v. LinkedIn and Van Buren v. United States, it is argued that Amazon’s CFAA claims are likely misplaced. Reportedly, Perplexity’s AI accessed Amazon accounts only after users logged in and delegated authority to the agent, meaning no technical access barrier was breached.
It is further suggested that legal responsibility may lie more appropriately with users who deploy AI agents, drawing on the law of agency and prior litigation involving automated scraping. Perplexity’s conduct, such as disguising its digital fingerprint, raises legitimate concerns; however, similar behavior in earlier cases was not sufficient to establish CFAA liability.
Analyst Perspective: Traditional access laws assume direct interaction between a human and a platform, while AI agents introduce a new intermediary acting under delegated authority. This raises a foundational question courts have yet to fully answer: does authorization flow from the platform, the user, or both? If courts treat AI agents as extensions of users, liability may shift toward the individuals who deploy them. If, instead, platform authorization is independently required, platforms gain significant power to restrict autonomous technologies through contractual controls.
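The two competing authorization readings can be made concrete with a short sketch. Both functions below are hypothetical, written only to contrast the models; no real platform API is implied.

```python
# Hypothetical sketch contrasting the two authorization models courts must
# choose between. All names and data structures are illustrative assumptions.

def user_delegation_model(user_logged_in: bool, delegated_to_agent: bool) -> bool:
    """Agent-as-extension-of-user reading: the user's own access plus
    explicit delegation is sufficient authorization for the agent."""
    return user_logged_in and delegated_to_agent

def platform_gate_model(user_logged_in: bool, delegated_to_agent: bool,
                        agent_id: str, platform_allowlist: set[str]) -> bool:
    """Independent-platform-authorization reading: delegation alone is not
    enough; the platform must also have approved this specific agent."""
    return (user_logged_in and delegated_to_agent
            and agent_id in platform_allowlist)
```

Under the first model, a logged-in user who delegates to an agent authorizes its access; under the second, the platform’s allowlist decides, which is roughly the position the preliminary injunction reflects.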
The case is probably less about scraping and more about who governs automation online. The CFAA, originally designed to combat hacking, appears ill-suited to resolve these tensions. This may suggest that courts will need new frameworks to allocate responsibility among users, AI developers, and platforms.
Legal debates appear to be shifting from questions of data access to questions of digital agency. This case may signal a coming wave of litigation as courts are asked to clarify who is legally responsible when autonomous AI agents act online on users' behalf.
Analyst: Safayat Moahamad, Research Director – Security & Privacy
More Reading:
- Source Material: IAPP
- Related Info-Tech Research:
If you have a question or would like to receive these monthly briefings via email, submit a request here.