Dutch Court Halts Use of AI for Detecting Welfare Fraud
In a landmark ruling, a Dutch court has ordered an immediate halt to the government’s use of an automated system for detecting welfare fraud.
The case was filed two years ago in the District Court of The Hague by a coalition of civil society groups and civil rights activists who claimed that the system violated human rights, in particular the right to privacy.
The system, called SyRI (Dutch: Systeem Risico Indicatie, or in English, Risk Indication System) was built by the Dutch Ministry of Social Affairs and Employment to predict which individuals may be committing tax or benefits fraud or violating labor laws (collecting benefits while working).
The system uses rich data from government agencies and state institutions (e.g. benefits and employment records, tax information, history of naturalization, housing and education history, information about personal debt, and health insurance status) to predict “increased risk of irregularities” and identify individuals involved. It then passes this information to the relevant government agency for action.
[Diagram: How SyRI Works. Not reproduced here; for a detailed explanation, see the attribution link below the diagram in the original article.]
SyRI has been deployed in three cities. One of them – Rotterdam – is the second largest city in the country and also has the highest poverty rate. In one predominantly low-income neighborhood there, the system reported 113 violations, 62 of which turned out to be false positives (more on that below). These reports “have resulted in the discontinuation or recovery of state benefits and allowances in a total volume of 496,000 euros (including subsequent savings)”. (That’s savings of approximately US$544,236.)
False positives can happen, for example, when the risk model is applied to data that differs from the data on which the model was trained, or when it does not account for some use cases, such as people living in retirement homes collecting the basic state pension. Given such a high error rate, it is not surprising that alerts are examined by a human analyst before a risk report is generated. (We strongly support this approach of keeping human experts in the decision-making loop.)
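The Rotterdam figures cited above let us put a number on that error rate. A quick check (illustrative arithmetic only, using the 113 reported violations and 62 false positives from the article):

```python
# False-positive rate from the Rotterdam figures cited above:
# 113 reported violations, of which 62 were false positives.
reported = 113
false_positives = 62
fp_rate = false_positives / reported
print(f"{fp_rate:.1%}")  # roughly 54.9% -- more than half of the alerts were wrong
```

In other words, a majority of the system’s alerts in that neighborhood did not hold up, which underscores why human review before any report is generated matters so much here.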
Why did the system receive so much attention? There are at least four reasons:
- It has been deployed primarily in low-income, predominantly immigrant neighborhoods, where people are struggling with unemployment, poverty, and housing. So it really smacks of ethnic and poverty profiling.
- It was rolled out without meaningful consultation with those who may be impacted by it or with the general public.
- It is just one recent example in a series of experiments by the Dutch government to use technology for fraud detection. (And they are not unique in doing that.)
- It is also part of a larger debate around data privacy, ownership, and human rights in the context of AI, machine learning, and decision systems using these technologies.
As governments around the world turn to AI to automate services – from benefits to immigration decisions – we are going to see more such cases, unless governments pause to consult their citizens. The Guardian recently ran a series of articles, “Automating Poverty - Digital Dystopia: How Algorithms Punish the Poor,” that highlighted several of these cases.
And while I understand the desire to combat fraud, reduce costs, and direct benefits to those who truly need them, I also wonder: what if the Dutch government instead built a system like SyRI to identify which social programs and incentives work best to reduce poverty, encourage employment, and better integrate immigrants into society?
The government of Finland, for example, thinks that the best way to reduce homelessness is by providing people with housing. And it seems to be working. Or maybe the Dutch have already built such models, and it’s only the negative examples that generate the publicity (as always).
Still, if you are looking to leverage AI for any kind of automation or augmentation of decision making – and this is not unique to governments – it is prudent to proceed with caution. As the SyRI example indicates, all use cases need to be considered, stakeholders must be consulted, and consequences must be evaluated.
If governments deploy AI apps that violate human rights – and let’s give the Dutch the benefit of the doubt and say they have done it unintentionally, although “carelessness” is a more appropriate description – how can we expect businesses to respect human rights? See our note “Amnesty International Calls Google and Facebook a Threat to Human Rights” for further discussion on this matter.
Want to Know More?
To learn more about harms you can unleash on your citizens, employees, and customers and how to prevent them, consult Info-Tech’s blueprint Mitigate Machine Bias.
To learn about the guardrails and the controls we recommend you start putting in place even if you are just getting your feet wet with AI, look out for our upcoming blueprint on AI governance, or reach out to the analysts to get a kick-start.