The Tech Dystopia Thread

Started by Sheilbh, April 13, 2022, 04:58:52 PM

Sheilbh

Extraordinary story - I'd heard about this causing the Dutch government to fall and some details, but nothing as in-depth in this. The consequences don't seem sufficient and it feels like there's a lot of warnings here for governments looking at modernisations like this:
Quote
Dutch scandal serves as a warning for Europe over risks of using algorithms
The Dutch tax authority ruined thousands of lives after using an algorithm to spot suspected benefits fraud — and critics say there is little stopping it from happening again.
By Melissa Heikkilä
March 29, 2022 6:14 pm

Chermaine Leysner's life changed in 2012, when she received a letter from the Dutch tax authority demanding she pay back her child care allowance going back to 2008. Leysner, then a student studying social work, had three children under the age of 6. The tax bill was over €100,000.

"I thought, 'Don't worry, this is a big mistake.' But it wasn't a mistake. It was the start of something big," she said.

The ordeal took nine years of Leysner's life. The stress caused by the tax bill and her mother's cancer diagnosis drove Leysner into depression and burnout. She ended up separating from her children's father. "I was working like crazy so I could still do something for my children like give them some nice things to eat or buy candy. But I had times that my little boy had to go to school with a hole in his shoe," Leysner said.

Leysner is one of the tens of thousands of victims of what the Dutch have dubbed the "toeslagenaffaire," or the child care benefits scandal.

In 2019 it was revealed that the Dutch tax authorities had used a self-learning algorithm to create risk profiles in an effort to spot child care benefits fraud.

Authorities penalized families over a mere suspicion of fraud based on the system's risk indicators. Tens of thousands of families — often with lower incomes or belonging to ethnic minorities — were pushed into poverty because of exorbitant debts to the tax agency. Some victims committed suicide. More than a thousand children were taken into foster care.


The Dutch tax authorities now face a new €3.7 million fine from the country's privacy regulator. In a statement released April 12, the agency outlined several violations of the EU's data protection rulebook, the General Data Protection Regulation, including not having a legal basis to process people's data and hanging on to the information for too long.

Aleid Wolfsen, the head of the Dutch privacy authority, called the violations unprecedented.

"For over 6 years, people were often wrongly labeled as fraudsters, with dire consequences ... some did not receive a payment arrangement or you were not eligible for debt restructuring. The tax authorities have turned lives upside down," he said, according to the statement.

As governments around the world are turning to algorithms and AI to automate their systems, the Dutch scandal shows just how utterly devastating automated systems can be without the right safeguards. The European Union, which positions itself as the world's leading tech regulator, is working on a bill that aims to curb algorithmic harms.

But critics say the bill misses the mark and would fail to protect citizens from incidents such as what happened in the Netherlands.

No checks and balances

The Dutch system — which was launched in 2013 — was intended to weed out benefits fraud at an early stage. The criteria for the risk profile were developed by the tax authority, reports Dutch newspaper Trouw. Having dual nationality was marked as a big risk indicator, as was a low income.
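The mechanics described above can be made concrete with a purely illustrative sketch. The real tax-authority model, its criteria and its weights were never published; the field names, weights and threshold below are invented. The point is only to show how a rule-based risk profile that scores indicators like dual nationality and low income mechanically flags people on demographics rather than evidence:

```python
# Illustrative sketch only: the actual model was never made public.
# Field names, weights and the threshold are invented for demonstration.

def risk_score(applicant: dict) -> float:
    """Toy risk score built from the kinds of criteria Trouw reported."""
    score = 0.0
    if applicant.get("dual_nationality"):    # reported as a "big risk indicator"
        score += 0.5
    if applicant.get("income", 0) < 25_000:  # low income was also penalized
        score += 0.3
    return score

def flagged(applicant: dict, threshold: float = 0.6) -> bool:
    # Anyone over the threshold was treated as a suspected fraudster,
    # without any review of actual evidence of fraud.
    return risk_score(applicant) > threshold

# Two applicants with identical benefits claims but different demographics:
a = {"dual_nationality": True,  "income": 20_000}
b = {"dual_nationality": False, "income": 20_000}
print(flagged(a), flagged(b))  # a is flagged, b is not
```

Nothing in such a score measures whether fraud occurred; it only encodes who resembles past suspects, which is the failure the article describes.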

Why Leysner ended up in the situation is unclear. One reason could be that she had twins, which meant she needed more support from the government. Leysner, who was born in the Netherlands, also has Surinamese roots.

In 2020, Trouw and another Dutch news outlet, RTL Nieuws, revealed that the tax authorities had also kept secret blacklists of people for two decades, which tracked both credible and unsubstantiated "signals" of potential fraud. Citizens had no way of finding out why they were on the list or of defending themselves.

An audit showed that the tax authorities focused on people with "a non-Western appearance," while having Turkish or Moroccan nationality was a particular focus. Being on the blacklist also led to a higher risk score in the child care benefits system.


A parliamentary report into the child care benefits scandal found several grave shortcomings, including institutional biases and authorities hiding information or misleading the parliament about the facts. Once the full scale of the scandal came to light, Prime Minister Mark Rutte's government resigned, only to regroup 225 days later.

In addition to the penalty announced April 12, the Dutch data protection agency also fined the Dutch tax administration €2.75 million in December 2021 for the "unlawful, discriminatory and therefore improper manner" in which the tax authority processed data on the dual nationality of child care benefit applicants.

"There was a total lack of checks and balances within every organization of making sure people realize what was going on," said Pieter Omtzigt, an independent member of the Dutch parliament who played a pivotal role in uncovering the scandal and grilling the tax authorities.

"What is really worrying me is that I'm not sure that we've taken even vaguely enough preventive measures to strengthen our institutions to handle the next derailment," he continued.

The new Rutte government has pledged to create a new algorithm regulator under the country's data protection authority. Dutch Digital Minister Alexandra van Huffelen — who was previously the finance minister in charge of the tax authority — told POLITICO that the data authority's role will be "to oversee the creation of algorithms and AI, but also how it plays out when it's there, how it's treated, make sure that is human-centered, and that it does apply to all the regulations that are in use." The regulator will scrutinize algorithms in both the public and private sectors.

Van Huffelen stressed the need to make sure humans are always in the loop. "What I find very important is to make sure that decisions, governmental decisions based on AI are also always treated afterwards by a human person," she said. 

A warning to the rest of Europe

Europe's top digital official, European Commission Executive Vice President Margrethe Vestager, said the Dutch scandal is exactly what every government should be scared of.

"We have huge public sectors in Europe. There are so many different services where decision-making supported by AI could be really useful, if you trust it," Vestager told the European Parliament in March. The EU's new AI Act is aimed at creating that trust, she argued, "so that this big public sector market will be open also for artificial intelligence."

The Commission's proposal for the AI Act restricts the use of so-called high-risk AI systems and bans certain "unacceptable" uses. Companies providing high-risk AI systems have to meet certain EU requirements. The AI Act also creates a public EU register of such systems in an effort to improve transparency and help with enforcement.

That's not good enough, argues Renske Leijten, a Socialist member of the Dutch parliament and another key politician who helped uncover the true scale of the scandal. Leijten argues that the AI Act should also apply to those using high-risk AI systems in both the private and public sectors.

In the AI Act, "we see that there are more guarantees for your rights when companies and private enterprises are working with AI. But the important thing we must learn out of the child care benefit scandal is that this was not an enterprise or private sector ... This was the government," she said.

As it is now, the AI Act will not protect citizens from similar dangers, said Dutch Green MEP Kim van Sparrentak, a member of the European Parliament's AI Act negotiating team on the internal market committee. Van Sparrentak is pushing for the AI Act to have fundamental rights impact assessments that will also be published in the EU's AI register. Parliament is also proposing adding obligations to the users of high-risk AI systems, including in the public sector.

"Fraud prediction and predictive policing based on profiling should just be banned. Because we have seen only very bad outcomes and not a single person can be determined based on some of their data," van Sparrentak said.

In a report detailing how the Dutch government used ethnic profiling in the child care benefits scandal, Amnesty International calls on governments to ban the "use of data on nationality and ethnicity when risk-scoring for law enforcement purposes in the search of potential crime or fraud suspects."

The Netherlands is still reckoning with the aftermath of the scandal. The government has promised to pay back victims of the incident €30,000. But for those like Leysner, that doesn't even begin to cover the years she lost — justice seems like a long way off.


"If you go through things like this, you also lose your trust in the government. So it's very difficult to trust what [authorities] say right now," Leysner said.

Clothilde Goujard and Vincent Manancourt contributed reporting.

This article has been updated with the results of the Dutch tax authorities' investigation released in April.
Let's bomb Russia!

Barrister

That's remarkable.

I can understand using an AI to develop subjects.  I can even see how that'd be useful.  But surely you then need to actually uncover evidence of fraud itself?

I'd liken it to the use of facial recognition software.  It's great to help ID suspects - but then you still need to prove you have the right person.  The software itself isn't admissible in court.
Posts here are my own private opinions.  I do not speak for my employer.

The Brain

So many things that I don't understand here.
Women want me. Men want to be with me.

Admiral Yi

What's your source Shelf?  I.e. advocacy group or straight reporting?

The comment about racial profiling brings up what is going to be an issue in the data driven age.


Jacob

Quote from: Admiral Yi on April 13, 2022, 05:55:23 PM
What's your source Shelf?  I.e. advocacy group or straight reporting?

The comment about racial profiling brings up what is going to be an issue in the data driven age.

Racial profiling is going to be one issue in the data driven age. I don't think being targeted for additional scrutiny and assumed guilty based on demographics and trained AI is a non-issue, even if it's based on factors other than race.

Admiral Yi

Quote from: Jacob on April 13, 2022, 06:15:17 PM
Racial profiling is going to be one issue in the data driven age. I don't think being targeted for additional scrutiny and assumed guilty based on demographics and trained AI is a non-issue, even if it's based on factors other than race.

I agree with what you've written, but the devil is in the details.

There's a big gray area between "assumed guilty" and "don't worry about it" that includes things like flagged for further scrutiny, or sent a letter that says something like "we have concerns about your benefits, please respond with some information."

Jacob

Quote from: Admiral Yi on April 13, 2022, 06:39:46 PM
Quote from: Jacob on April 13, 2022, 06:15:17 PM
Racial profiling is going to be one issue in the data driven age. I don't think being targeted for additional scrutiny and assumed guilty based on demographics and trained AI is a non-issue, even if it's based on factors other than race.

I agree with what you've written, but the devil is in the details.

There's a big gray area between "assumed guilty" and "don't worry about it" that includes things like flagged for further scrutiny, or sent a letter that says something like "we have concerns about your benefits, please respond with some information."

And I agree with what you've written too :cheers:

What I'm unsure about is how race is a particular special case in that context, though. Or do you simply mean that because race is such an inflamed issue, this will inflame it further?

DGuller

Quote from: Barrister on April 13, 2022, 05:05:27 PM
That's remarkable.

I can understand using an AI to develop subjects.  I can even see how that'd be useful.  But surely you then need to actually uncover evidence of fraud itself?

I'd liken it to the use of facial recognition software.  It's great to help ID suspects - but then you still need to prove you have the right person.  The software itself isn't admissible in court.
These are my thoughts as well.  It doesn't surprise me at all that an algorithm would classify things incorrectly sometimes, or even more than sometimes.  Depending on the use, AI algorithms can get things wrong a lot of the time and still be an improvement over the next best alternative.

What really surprises me is that an AI algorithm was used without any human oversight, and doubly so in an application involving a government function where a careless error is simply unacceptable.  It's one thing to show an ad for tampons to a 60 year old guy; it's another thing to accuse a person of wrongdoing.  The model is also described as designed to develop a risk profile of suspected fraud, so to me the very description implies that the model is just a first step in a process that ends with human investigation and determination of facts.

So much about this story is so wrong that I'm also wondering how objective and complete the article's description is of what actually went on.  It seems much more likely to me that if bad things did happen, they happened because the humans didn't do the job they were required to do and just rubber-stamped the model's indications (as opposed to not doing the job because no one thought it was necessary).

Admiral Yi

Quote from: Jacob on April 13, 2022, 06:47:20 PM
What I'm unsure about is how race is a particular special case in that context, though. Or do you simply mean that because race is such an inflamed issue, this will inflame it further?

Because "no racial profiling" has already become a sacred cow in some circles, and that's exactly what this system does: assign risk points on the basis of the historical performance of racial groups (or more specifically national origin).

Jacob

Quote from: Admiral Yi on April 13, 2022, 07:00:25 PM
Quote from: Jacob on April 13, 2022, 06:47:20 PM
What I'm unsure about is how race is a particular special case in that context, though. Or do you simply mean that because race is such an inflamed issue, this will inflame it further?

Because "no racial profiling" has already become a sacred cow in some circles, and that's exactly what this system does: assign risk points on the basis of the historical performance of racial groups (or more specifically national origin).

Gotcha. Makes sense.

DGuller

In the US, I find it hard to imagine that any model in production would directly use race as a predictor.  I'm not sure there are cases where it wouldn't be outright illegal, but even if there are such cases, I imagine that a legal department or some other compliance-related department would be deathly scared of the bad PR that would result if such models became public knowledge.

When it comes to algorithms and race, what everyone is worried about is implicit bias, where the model is accidentally discriminating on race without race being used explicitly as a predictor.
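A minimal sketch of that proxy effect, with entirely invented data: here "postal code" stands in for any feature correlated with group membership. A naive frequency-based scorer trained on historical labels never sees race, yet still assigns systematically higher risk to one neighborhood, because past enforcement concentrated there:

```python
# Hedged illustration of implicit bias via a proxy feature.
# The data and the "model" are toys invented for this sketch.
from collections import defaultdict

# Training data: (postal_code, past_fraud_label). Postal code 1 is where a
# minority group historically lived and where past checks were concentrated,
# so its labels skew positive. Race itself appears nowhere.
train = [(1, 1), (1, 1), (1, 0), (2, 0), (2, 0), (2, 1)]

# "Model": per-postal-code positive rate, the score a naive
# frequency-based classifier would assign to new applicants.
counts = defaultdict(lambda: [0, 0])  # code -> [positives, total]
for code, label in train:
    counts[code][0] += label
    counts[code][1] += 1

def score(code: int) -> float:
    pos, total = counts[code]
    return pos / total

# Residents of code 1 get a higher risk score without race ever being
# used as a predictor -- the feature simply stood in for it.
print(score(1), score(2))
```

Removing the protected attribute from the feature set does not remove the bias; the correlated feature carries it through, which is why audits look at outcomes by group rather than at the input columns.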

PDH

The Computer is your friend.
I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth.
-Umberto Eco

-------
"I'm pretty sure my level of depression has nothing to do with how much of a fucking asshole you are."

-CdM

Jacob

Quote from: PDH on April 13, 2022, 07:53:44 PM
The Computer is your friend.

Have another Bubbly Bouncy Beverage!

The Brain

Just some of the weirdness:

  • The system was launched in 2013, but the example case was already being hit by its negative effects in 2012.
  • In the Netherlands, the government apparently has the right to demand its money back on a whim, without any process. Also, this would seem to be a problem that urgently needs fixing regardless of any AI system.
Women want me. Men want to be with me.