Comprehensive Software Reviews to make better IT decisions
Amnesty International Calls Google and Facebook a Threat to Human Rights
We recently covered Google’s lackadaisical approach to data privacy in the context of its partnership with Ascension, a US healthcare giant. (See our tech briefs Google Builds AI-Powered Tools for Patient Care: Project Nightingale and Google Has Personal Medical Data of Up to 50 Million Americans: Project Nightingale.) Last month, Google came under fire again, along with Facebook, when Amnesty International published a report, “Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights.”
In this report, Amnesty International criticizes Google and Facebook for their business practices. The report says that their pervasive surveillance machinery violates core human rights, such as the right to dignity, autonomy, and privacy, the right to control information about ourselves, and the right to a space where we can freely express our identities.
The report concludes with a ten-point list of recommendations urging governments to take action, including:
- Enact and enforce strong data protection laws.
- Ensure access to digital services and infrastructure that is not conditional on ubiquitous surveillance.
- Put in place regulation and oversight over design, development, and implementation of algorithmic decision-making systems.
- Create mechanisms for remediation of harm to human rights resulting from such systems.
- Strengthen regulatory bodies’ powers to investigate violations and impose sanctions.
- Create regulatory frameworks to support data portability and interoperability.
- Create effective digital education programs to ensure citizens understand and exercise their rights.
The report stops short of calling for a breakup of Google and Facebook, but it recommends that governments take a stronger stance on universal access to digital services and protection of human rights, including “taking measures to disrupt the market” – essentially an indirect way of suggesting just that.
The report contains recommendations for businesses, too, urging them to:
- Replace the current surveillance-based business model with a model that is respectful of human rights.
- Stop lobbying to relax data protection and privacy legislation.
- Take action to remediate human rights abuses they have caused or contributed to.
While these recommendations are aimed at Google and Facebook, as well as at the other members of the Big Six – Apple, Amazon, IBM, and Microsoft – this is a wake-up call for anyone collecting and using customer, consumer, patient, partner, and employee data, which is effectively every organization these days.
The general public is increasingly aware of how its personal, private, and in many cases intimate data is being used and abused – through manipulation, exploitation, discrimination, restricted access to information and economic opportunities, and the weakening of civic society and democratic institutions. As a result, pressure is mounting on governments to become more active and proactive in controlling, and in some cases restricting, the application of artificial intelligence (AI) and machine learning technologies and harmful business practices.
The race to control data privacy, ownership, and usage in AI applications is just starting, and it is only going to intensify. Is your organization getting ready?
The UN Human Rights Council states that “companies have a responsibility to respect all human rights.” And the UN Guiding Principles on Business and Human Rights require companies to “take ongoing, proactive, and reactive steps to ensure they do not cause or contribute to human rights abuses – a process called human rights due diligence.”
Want to Know More?
To get educated on data-related risks with AI and machine learning and get started with this due diligence, download our blueprint Mitigate Machine Bias.
The EU plans to invest €6 billion to build a single European data space, reports EURACTIV. The envisioned space will house personal, business, and “high-quality industrial data” and create the infrastructure for data sharing and use across businesses and nations.
“Facebook quietly acquired another UK AI startup and almost no one noticed,” reported TechCrunch on February 10. We looked into why.
In a landmark ruling, a Dutch court has ordered an immediate halt to the government’s use of an automated system for detection of welfare fraud.
Databricks, a data processing and analytics platform with a strong focus on AI and ML, has partnered with Immuta to deliver automated end-to-end data governance for AI, data science, and ML projects.
CognitiveScale has been named one of the 50 Smartest Companies of the Year 2019 by The Silicon Review. The recognition is for “transforming customer engagement and lifetime value with Artificial Intelligence.”
Facebook agreed to pay $550 million to settle a class action lawsuit with a group of users in Illinois over its use of facial recognition technology (FRT) to tag individuals in photographs, reports the BBC.
AI has been making headlines in healthcare for some time, and the current coronavirus outbreak in Wuhan, China (with cases now in other parts of the world) – or, more specifically, the early warning of that outbreak – is another example.
Google founders Larry Page and Sergey Brin are stepping down as CEO and President of Alphabet, respectively. Google CEO Sundar Pichai will take over as Alphabet’s CEO. Both Page and Brin will remain actively involved as board members, shareholders, and cofounders.
I recently had an opportunity to speak with a partner in KPMG’s Canadian risk consulting practice and with KPMG’s head of data science for Canada about several topics, including KPMG Ignite. This is what I learned.