Privacy Regulation Roundup

Author(s): Horia Rosian, Carlos Rivera, Ahmad Jowhar, Erik Avakian

This Privacy Regulation Roundup summarizes the latest major global privacy regulatory developments, announcements, and changes. This report is updated on a monthly basis. For each relevant regulatory activity, you can find actionable Info-Tech analyst insights and links to useful Info-Tech research that can assist you with becoming compliant.

Building Strong AI Governance: Collaboration Between Privacy, Security, and Governance Teams

Canada USA Europe APAC Rest of World


Type: Article
Date: June 2024

Summary: The widespread adoption of AI brings immense benefits, but also raises significant concerns regarding privacy, security, and reputational risks for organizations. AI systems, if not carefully managed, can be manipulated to generate offensive content, exhibit bias against certain groups, or even leak sensitive data. As this article points out, the multifaceted risks associated with AI require a collaborative approach to governance, involving privacy, security, and governance teams.

The article identifies several privacy risks, including potential violations due to improper data handling, regulatory challenges posed by the evolving legal landscape, and a lack of transparency around data usage. Cybersecurity concerns include the manipulation of AI models through adversarial attacks, model extraction for intellectual property theft, and the creation of new attack vectors for malicious actors. Additionally, unethical or biased AI outputs can damage an organization's reputation, leading to public distrust and loss of credibility.

Analyst Perspective: I agree with the emphasis on collaboration between privacy, security, and governance teams because each team has specific roles to play in risk mitigation. For instance, governance teams are responsible for establishing AI policies, conducting ethical assessments, and educating employees. Privacy teams handle risk assessments, implement privacy-by-design principles, and manage user consent. Security teams focus on data security, preventing model theft, and educating employees on AI-related threats.

As effective governance requires open communication and collaboration, a cross-functional AI task force is crucial. This task force can develop guidelines, approve new AI technologies, and ensure AI safety is prioritized in new system development. Regular meetings will keep the task force updated on industry trends and threats, and dedicated communication channels will inform employees about approved AI tools and their use.

AI has undeniable potential, but organizations cannot afford to ignore the associated risks. A proactive approach to AI governance that involves collaboration between privacy, security, and governance teams is essential to risk mitigation and responsible AI development. The creation of an AI task force and the use of AI governance tools are steps in the right direction.

Analyst: Carlos Rivera, Principal Advisory Director – Security & Privacy



Apple's AI Leap Reinforces Privacy Commitment

Canada USA Europe APAC Rest of World



Type: Announcement
Announcement Date: June 2024

Summary: Apple's latest AI initiative, Apple Intelligence, is a testament to the company's commitment to privacy. By introducing this system at its developers’ conference, Apple has set a precedent for privacy-centric AI. The system is designed to enhance user experiences while ensuring that personal data remains secure. This is particularly significant given the industry's heightened sensitivity to data privacy. Apple's partnership with OpenAI and the integration of ChatGPT has indeed sparked privacy debates, but Apple has been clear that any use of ChatGPT will be contingent on explicit user consent. This approach aligns with Apple's longstanding philosophy of user privacy as a fundamental right.

The core of Apple's privacy strategy is processing AI tasks directly on the user's device whenever possible. For more complex tasks that require cloud computing, its Private Cloud Compute (PCC) technology ensures that data is processed in a way that prevents personal information from being compromised. This method not only secures user data but also opens the door for third-party audits, offering an additional layer of transparency and trust. Apple's cautious and deliberate entry into the generative AI space, with a strong emphasis on proprietary technology, means it can offer these advanced features without sacrificing user privacy. The company's focus on keeping most AI processing on-device, rather than relying on cloud servers, minimizes the risk of data breaches and unauthorized access.

Apple Intelligence makes a bold statement about the importance of privacy in the age of AI. It's a move that could redefine user expectations and industry standards, potentially sparking a new wave of innovation focused on privacy and security in technology.

Analyst Perspective: The announcement of Apple Intelligence is a strategic maneuver that positions Apple as a frontrunner in the privacy domain. The company's decision to process AI tasks on-device, with cloud computing as a secondary option, is a direct response to growing data security concerns. This approach not only differentiates Apple from its competitors but also serves as a potential catalyst for market disruption.

The partnership with OpenAI seems to be a calculated risk that balances the pursuit of innovation with the maintenance of privacy standards. It could solidify Apple's reputation as an innovator while upholding its commitment to user privacy and compelling other tech giants to reassess their AI and privacy strategies. Apple's insistence on explicit user consent for ChatGPT's use and the provision for third-party verification are likely to reinforce consumer trust and loyalty. The implications for app developers are significant as well, with the potential for a wave of new apps that leverage Apple's privacy-centric AI, setting a higher bar for app security.

If Apple's focus on privacy leads to a surge in smartphone upgrades, the ripple effects could be substantial, benefiting not just Apple but its supply chain partners as well. In essence, Apple's emphasis on privacy in its AI offerings is a bold strategy that could reshape industry standards and elevate user expectations regarding privacy and security in technology.

Analyst: Horia Rosian, Director – Cybersecurity & Privacy, Workshops



Microsoft Receives Privacy Complaints Over Use of Its 365 Education Suite in the EU

Canada USA Europe APAC Rest of World


Type: Announcement
Announcement Date: June 2024

Summary: Tech giant Microsoft is facing GDPR complaints over its Microsoft 365 Education suite. Privacy rights group NOYB filed two complaints with the Austrian data protection authority (DSB), alleging that the suite fails to comply with GDPR requirements.

The first complaint alleges the unlawful processing of children’s data, citing the ambiguous information Microsoft shares about how children’s data is used. It also alleges that Microsoft shifts the legal responsibilities of a data controller for children’s information onto the schools.

The second complaint concerns Microsoft’s use of cookies to track children’s data and software usage. NOYB alleges that Microsoft installed tracking cookies in its 365 Education software without the complainant’s consent or the school’s knowledge. The cookies were found to analyze user behavior and collect browser data used for advertising. NOYB has asked the DSB to investigate the allegations; the investigation is still pending.

Analyst Perspective: The violation of GDPR rights is a breach of consumer data privacy and protection, which could result in severe penalties and fines. The violations are exacerbated by the inclusion of children’s data, given its sensitivity and the children’s inability to consent to the processing of such data.

GDPR complaints related to children’s data privacy have resulted in large fines for organizations, including the €405 million fine the Irish DPA imposed on Meta in 2022 for Instagram’s failure to protect children’s data privacy. Microsoft has also faced recent GDPR scrutiny over its 365 cloud product suite: the European Data Protection Supervisor found that the EU’s own use of Microsoft 365 violated some data protection rules.

These examples showcase the severe consequences an organization can face for GDPR non-compliance, especially when minors’ data is involved. Understanding the requirements of GDPR, identifying which of your business units fall within a data regulator’s scope, and documenting your record of processing activities are initiatives that can help your organization mitigate the risk of GDPR non-compliance.

As organizations continue to expand operations globally, demonstrating due diligence in protecting consumer data assures consumers that they can use an organization’s services with peace of mind and protects the business from reputational and financial risk.

Analyst: Ahmad Jowhar, Research Analyst – Security & Privacy



FTC and DOJ Investigating TikTok's Privacy Practices

Canada USA Europe APAC Rest of World



Type: Article
Date: July 2024

Summary: TikTok is currently under scrutiny from the Federal Trade Commission (FTC), whose investigation has been referred to the Department of Justice (DOJ), for possible violations of the Children’s Online Privacy Protection Act (COPPA). The investigation, which came to light in March, began after an anonymous source revealed that the FTC was examining TikTok’s data management practices. Concerns grew when the FTC discovered that Chinese engineers had access to US user data until 2023, despite TikTok's promises to isolate data geographically.

The matter traces back to a 2019 settlement with TikTok's predecessor, Musical.ly, which involved a $5.7 million fine for collecting personal information from minors without parental consent since 2017. COPPA requires digital platforms aimed at children under 13 to obtain parental consent before collecting personal information, to safeguard minors' data from targeted advertising, and to ensure that any advertising content shown to minors is appropriate and not harmful.

TikTok has expressed disappointment with the referral to the DOJ and highlighted its efforts over the past year to address data access issues involving Chinese employees. The company states that it has bolstered its privacy measures for children in recent years, adding features like family account pairing, screen time limits for users under 16, and improved detection and removal of underage users.

Despite these measures, TikTok still faces significant global challenges concerning children's privacy. In 2023, the UK fined TikTok $16 million for allowing 14 million children under 13 to use the app in 2020. The Irish Data Protection Commission later imposed a €345 million fine for similar infractions.

Analyst Perspective: The new developments and the escalation of the FTC investigation highlight the importance of privacy compliance. It remains crucial to regularly review and assess policies and practices to ensure services and products align with COPPA and other relevant regulations and to avoid potential legal and financial penalties.

The revelation that Chinese engineers may have accessed US user data despite promises of data localization stresses the importance for organizations to implement and enforce appropriate data access controls and policies to ensure compliance with data localization requirements. TikTok’s reported enhancements in privacy measures are a step in the right direction, but privacy controls and policies should be evaluated and assessed regularly to avoid future issues. Implementing advanced age verification and preventing underage access to inappropriate content through parental consent mechanisms that are difficult to bypass are critical to protecting younger users.

The global implications stress that organizations need to meet policy standards across all jurisdictions in which they operate. Changes to policies, products, or services should be communicated regularly and transparently to the customer base. These measures can help maintain public trust and user confidence.

The current situation with TikTok is a reminder of the many complexities of the ever-changing privacy landscape. It is more important than ever for organizations to employ a multilayered approach to privacy that includes governance, policy, compliance, ongoing assessment of services, and regular customer communication.

Analyst: Erik Avakian, Technical Counselor – Security & Privacy



If you have a question or would like to receive these monthly briefings via email, submit a request here.

Related Content

Visit our Exponential IT Research Center
Over 100 analysts waiting to take your call right now: 1-519-432-3550 x2019