The bait and switch behind AI risk prediction tools



Sayash Kapoor, Arvind Narayanan

Quote:In 2013, the Netherlands deployed an algorithm to detect welfare fraud by people receiving childcare benefits. The algorithm found statistical correlations in the data, but these correlations were used to make serious accusations of guilt—without any other evidence. 

The algorithm was used to wrongly accuse 30,000 parents. It sent many into financial and mental ruin. People accused by the algorithm were often asked to pay back hundreds of thousands of euros. In many cases, the accusation resulted from incorrect data about people—but they had no way to find out.

Shockingly, one of the inputs to the algorithm was whether someone had dual nationality; simply having a Turkish, Moroccan, or Eastern European nationality would make a person more likely to be flagged as a fraudster.

Worse, those accused had no recourse. Before the algorithm was developed, each case was reviewed by humans. After its deployment, no human was in the loop to override the algorithm’s flawed decisions.

Despite these issues, the algorithm was used for over 6 years.

In the fallout over the algorithm’s use, the Prime Minister and his entire cabinet resigned. Tax authorities that deployed the algorithm had to pay a EUR 3.7 million fine for the lapses that occurred during the model’s creation. This was the largest such fine imposed in the country.

This serves as a cautionary example of over-automation: an untested algorithm was deployed without any oversight and caused massive financial and emotional harm to people for six years before it was discontinued.
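The article doesn't describe the Dutch model's internals, but the mechanical effect of including nationality as an input is easy to illustrate. In this minimal sketch (all weights, features, and the threshold are invented for illustration), two applicants who are identical in every legitimate respect get different outcomes purely because of the nationality flag:

```python
import math

# Invented weights for an illustrative logistic risk model; the real
# Dutch model's form and parameters are not public in this detail.
WEIGHTS = {
    "income_change": 0.8,
    "claim_amount": 0.5,
    "dual_nationality": 1.2,  # the problematic input
}
BIAS = -2.0
THRESHOLD = 0.5

def risk_score(features):
    """Logistic score in (0, 1); above THRESHOLD means 'flag for fraud'."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

# Two applicants identical except for the nationality flag.
applicant_a = {"income_change": 1.0, "claim_amount": 1.0, "dual_nationality": 0.0}
applicant_b = {"income_change": 1.0, "claim_amount": 1.0, "dual_nationality": 1.0}

score_a = risk_score(applicant_a)  # stays below the flagging threshold
score_b = risk_score(applicant_b)  # pushed over it by nationality alone

print(f"A flagged: {score_a > THRESHOLD}, B flagged: {score_b > THRESHOLD}")
```

The point of the sketch is that the disparity requires no biased intent at decision time: once the feature is in the model with a positive weight, disparate flagging follows automatically.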
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


The following 1 user Likes Sciborg_S_Patel's post:
  • Laird
Sci, when I read the quote in your OP, I thought of mentioning the related scandal here in Australia, and when I read the linked article, I saw that it was already briefly mentioned in the "Further reading" section:

Quote:In another fraud detection scandal, the Australian government stole AUD 721 million from its citizens from 2016-2020. Citizens were accused of welfare fraud using an algorithm; this is often called the “robodebt” scandal.

Strictly speaking, the algorithm was a crude one: simple income averaging based on annual tax return data, far too simplistic to qualify as artificial intelligence. It did, though, have devastating effects, including (it is claimed) people on benefits taking their own lives.
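The income-averaging flaw is easy to demonstrate: the scheme took a person's annual income reported to the tax office, smeared it evenly across the fortnights of the year, and raised a "debt" for any apparent overpayment relative to that average. For anyone with uneven earnings, this manufactures debts that never existed. A minimal sketch (the dollar figures and the benefit rule are invented for illustration; they are not the actual Centrelink rules):

```python
# Hypothetical illustration of the robodebt income-averaging flaw.
# A casual worker earns all of their $26,000 annual income in the
# first 13 fortnights, then is unemployed and on benefits after that.

FORTNIGHTS = 26
annual_income = 26_000

# Actual fortnightly earnings: $2,000 for half the year, then nothing.
actual = [2_000] * 13 + [0] * 13

def benefit(earnings):
    """Invented benefit rule for the sketch: $500/fortnight base,
    reduced by 50 cents per dollar earned above $300."""
    reduction = max(0, earnings - 300) * 0.5
    return max(0.0, 500 - reduction)

# Correct entitlement, assessed fortnight by fortnight.
correct = sum(benefit(e) for e in actual)

# Averaging-style assessment: smear the annual total evenly.
averaged = annual_income / FORTNIGHTS  # $1,000 every fortnight
assessed = sum(benefit(averaged) for _ in range(FORTNIGHTS))

# The averaging method concludes the person was overpaid, even though
# every fortnightly payment was correct at the time it was made.
false_debt = correct - assessed
print(f"correct entitlement:  ${correct:,.0f}")
print(f"averaged entitlement: ${assessed:,.0f}")
print(f"manufactured 'debt':  ${false_debt:,.0f}")
```

In this sketch the person was entitled to every dollar they received, yet averaging reports a sizeable "overpayment", which is exactly the pattern the Australian courts later found unlawful.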
The following 1 user Likes Laird's post:
  • Sciborg_S_Patel
AI cannot predict the future. But companies keep trying (and failing).

Sayash Kapoor & Arvind Narayanan

Quote:Governments, banks, and employers use predictive optimization to make life-changing decisions such as whom to investigate for child maltreatment, whom to approve for a loan, and whom to hire. Companies sell predictive optimization with the promise of more accurate, fair, and efficient decisions. They claim it does away with human discretion entirely. And since predictive optimization relies entirely on existing data, it is cheap: no additional data is needed.

But do these claims hold up? Our hunch was: no. But hunches are merely the beginning of a research project. Over the last year, together with Angelina Wang and Solon Barocas, we investigated predictive optimization in depth: 
  • We read 387 news articles, Kaggle competition descriptions, and reports to find 47 real-world applications of predictive optimization. From these 47, we chose the eight most consequential applications. 
  • We then read over 100 papers on the shortcomings of AI in making decisions about people and selected seven critiques that challenged developers' claims of accuracy, fairness, and efficiency. 
  • Finally, we checked if these seven critiques apply to our chosen applications by reviewing past literature and giving our own arguments where necessary.
The table below presents our main results. Each row in the table is one of the eight applications we chose; each column is a critique. Our main empirical finding is that each critique applies to every application we selected.


[Image: table of the eight applications (rows) against the seven critiques (columns)]


