The bait and switch behind AI risk prediction tools
Sayash Kapoor, Arvind Narayanan
In 2013, the Netherlands deployed an algorithm to detect welfare fraud among people receiving childcare benefits. The algorithm found statistical correlations in the data, but these correlations were used to make serious accusations of guilt without any other evidence.
The algorithm wrongly accused 30,000 parents, sending many into financial and mental ruin. Those accused were often asked to pay back hundreds of thousands of euros. In many cases, the accusation resulted from incorrect data about them, but they had no way to find out.
Shockingly, one of the inputs to the algorithm was whether someone had dual nationality; simply having a Turkish, Moroccan, or Eastern European nationality would make a person more likely to be flagged as a fraudster.
Worse, those accused had no recourse. Before the algorithm was deployed, each case was reviewed by humans; afterward, no human was in the loop to override the algorithm's flawed decisions.
Despite these issues, the algorithm remained in use for over six years.
In the fallout over the algorithm's use, the Prime Minister and his entire cabinet resigned. The tax authority that deployed the algorithm was fined EUR 3.7 million for the lapses that occurred during the model's creation, the largest such fine ever imposed in the country.
This serves as a cautionary example of over-automation: an untested algorithm was deployed without oversight and caused massive financial and emotional harm to people for six years before it was shut down.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma... Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'
- Bertrand Russell