
Dutch Court Halts Use of AI for Detecting Welfare Fraud

In a landmark ruling, a Dutch court has ordered an immediate halt to the government’s use of an automated system for detecting welfare fraud.

The case was filed two years ago in the District Court of The Hague by a coalition of civil society groups and civil rights activists who claimed that the system violated human rights, in particular the right to privacy.

The system, called SyRI (Dutch: Systeem Risico Indicatie, or in English, Risk Indication System), was built by the Dutch Ministry of Social Affairs and Employment to predict which individuals may be committing tax or benefits fraud or violating labor laws (collecting benefits while working).

The system uses rich data from government agencies and state institutions (e.g. benefits and employment records, tax information, history of naturalization, housing and education history, information about personal debt, and health insurance status) to predict “increased risk of irregularities” and identify individuals involved. It then passes this information to the relevant government agency for action.
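The description above covers what data SyRI consumes and what it outputs, not how it actually scores anyone, so the sketch below is purely illustrative. It shows the general shape of such a risk-indication pipeline (link records per person, compute a score, flag people above a threshold, hand the flags to the relevant agency); every field name, weight, and threshold is invented for the example and says nothing about SyRI's real model.

```python
# Purely illustrative sketch of a risk-indication pipeline of the kind
# described above. All field names, weights, and thresholds are invented.
from dataclasses import dataclass

@dataclass
class LinkedRecord:
    """One person's data, linked across agency datasets (hypothetical fields)."""
    person_id: str
    collecting_benefits: bool
    employment_income: float   # from employment and tax records
    outstanding_debt: float    # from debt registers
    housing_flags: int         # e.g. mismatched address records

def risk_score(record: LinkedRecord) -> float:
    """Combine linked data into a single, entirely hypothetical risk score."""
    score = 0.0
    if record.collecting_benefits and record.employment_income > 0:
        score += 0.6                                   # benefits plus undeclared work
    score += min(record.outstanding_debt / 50_000, 0.2)
    score += 0.1 * record.housing_flags
    return min(score, 1.0)

def flag_for_agency(records: list[LinkedRecord], threshold: float = 0.7) -> list[str]:
    """Return the IDs whose score crosses the threshold, for agency follow-up."""
    return [r.person_id for r in records if risk_score(r) >= threshold]
```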

How SyRI Works

[Diagram not reproduced here. For a detailed explanation, see the source: “High-Risk Citizens,” AlgorithmWatch. CC BY. Accessed 6 February 2020.]

SyRI has been deployed in three cities. One of them – Rotterdam – is the second-largest city in the country and also has the highest poverty rate. In one predominantly low-income neighborhood there, the system reported 113 violations (of which 62 turned out to be false positives – more on that below), which “have resulted in the discontinuation or recovery of state benefits and allowances in a total volume of 496,000 euros (including subsequent savings)”. (That’s savings of approximately US$544,236.)
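For a sense of scale, the arithmetic behind those figures is straightforward; the exchange rate below is simply the one implied by the two amounts quoted above.

```python
# Rough arithmetic on the figures quoted above.
violations = 113
false_positives = 62
false_positive_rate = false_positives / violations    # ~0.55, i.e. roughly 55%

savings_eur = 496_000
savings_usd = 544_236
implied_rate = savings_usd / savings_eur              # ~1.10 USD per EUR

print(f"False positive rate: {false_positive_rate:.0%}")      # 55%
print(f"Implied exchange rate: {implied_rate:.2f} USD/EUR")   # 1.10
```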

False positives can happen, for example, when the risk model is used on data that differs from the data on which the model had been trained, or when it does not account for some use cases, such as people living in retirement homes who collect the basic state pension. Given such a high error rate, it is not surprising that alerts are examined by a human analyst before a risk report is generated. (We strongly support this approach of keeping human experts in the decision-making loop.)
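That review step is essentially a gate between the model and any official action: an alert only becomes a risk report once an analyst has examined and confirmed it. A minimal sketch of that pattern, with names of our own choosing rather than anything SyRI-specific, might look like this:

```python
# Minimal sketch of a human-in-the-loop gate: model alerts become risk
# reports only after a human analyst has reviewed and confirmed them.
# All names here are illustrative, not SyRI's.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    person_id: str
    score: float
    evidence: str

@dataclass
class RiskReport:
    person_id: str
    reviewed_by: str
    rationale: str

def review_alerts(alerts: list[Alert], analyst: str,
                  decide: Callable[[Alert], tuple[bool, str]]) -> list[RiskReport]:
    """Turn model alerts into risk reports only after human review.

    `decide` stands in for the analyst's interactive examination of each
    alert and returns (confirmed, rationale); unconfirmed alerts are
    dropped as false positives and never acted on.
    """
    reports = []
    for alert in alerts:
        confirmed, rationale = decide(alert)
        if confirmed:
            reports.append(RiskReport(alert.person_id,
                                      reviewed_by=analyst,
                                      rationale=rationale))
    return reports
```

The important property is that nothing downstream (discontinuation or recovery of benefits) can be triggered directly by the model’s score; it always passes through a human decision first.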

Why did the system receive so much attention? There are at least four reasons:

  • It has been deployed primarily in low-income, predominantly immigrant neighborhoods, where people are already struggling with unemployment, poverty, and housing insecurity. So it really smacks of ethnic and poverty profiling.
  • It was rolled out without meaningful consultation with those who may be impacted by it or with the general public.
  • It is just one recent example in a series of experiments by the Dutch government to use technology for fraud detection. (And they are not unique in doing that.)
  • It is also part of a larger debate around data privacy, ownership, and human rights in the context of AI, machine learning, and decision systems using these technologies.

Our Take

As governments around the world are turning to AI to automate services – from benefits to immigration decisions – we are going to see more such cases, unless governments pause to consult their citizens. The Guardian recently ran a series of articles, “Automating Poverty - Digital Dystopia: How Algorithms Punish the Poor,” that highlighted several of these cases.

And while I understand the desire to combat fraud, reduce costs, and direct benefits to those who truly need them, I also wonder: what if the Dutch government instead built a system like SyRI to identify which social programs and incentives work best to reduce poverty, encourage employment, and better integrate immigrants into society?

The government of Finland, for example, thinks that the best way to reduce homelessness is to provide people with housing. And it seems to be working. Or maybe the Dutch have already built such models, and it’s only the negative examples that generate the publicity (as always).

Still, if you are looking to leverage AI for any kind of automation or augmentation of decision making – and this is not unique to governments – it is prudent to proceed with caution. As the SyRI example indicates, all use cases need to be considered, stakeholders must be consulted, and consequences must be evaluated.

If governments deploy AI apps that violate human rights – and let’s give the Dutch the benefit of the doubt and say they have done it unintentionally, although “carelessness” is a more appropriate description – how can we expect businesses to respect human rights? See our note “Amnesty International Calls Google and Facebook a Threat to Human Rights” for further discussion on this matter.


Want to Know More?

To learn more about the harms you can unleash on your citizens, employees, and customers, and how to prevent them, consult Info-Tech’s blueprint Mitigate Machine Bias.

To learn about the guardrails and controls we recommend you start putting in place even if you are just getting your feet wet with AI, look out for our upcoming blueprint on AI governance, or reach out to our analysts to get a kick-start.

Related Research

Facebook Will Pay Illinois Users $550-Million Settlement Over Its Use of FRT

Google and IBM Are Calling for AI Regulation

Clearview AI Demonstrates the Dangers of Facial Recognition
