Languish.org

General Category => Off the Record => Topic started by: Sheilbh on April 13, 2022, 04:58:52 PM

Title: The Tech Dystopia Thread
Post by: Sheilbh on April 13, 2022, 04:58:52 PM
Extraordinary story - I'd heard about this causing the Dutch government to fall and some details, but nothing as in-depth as this. The consequences don't seem sufficient and it feels like there are a lot of warnings here for governments looking at modernisations like this:
QuoteDutch scandal serves as a warning for Europe over risks of using algorithms
The Dutch tax authority ruined thousands of lives after using an algorithm to spot suspected benefits fraud — and critics say there is little stopping it from happening again.
By Melissa Heikkilä
March 29, 2022 6:14 pm

Chermaine Leysner's life changed in 2012, when she received a letter from the Dutch tax authority demanding she pay back her child care allowance going back to 2008. Leysner, then a student studying social work, had three children under the age of 6. The tax bill was over €100,000.

"I thought, 'Don't worry, this is a big mistake.' But it wasn't a mistake. It was the start of something big," she said.

The ordeal took nine years of Leysner's life. The stress caused by the tax bill and her mother's cancer diagnosis drove Leysner into depression and burnout. She ended up separating from her children's father. "I was working like crazy so I could still do something for my children like give them some nice things to eat or buy candy. But I had times that my little boy had to go to school with a hole in his shoe," Leysner said.

Leysner is one of the tens of thousands of victims of what the Dutch have dubbed the "toeslagenaffaire," or the child care benefits scandal.

In 2019 it was revealed that the Dutch tax authorities had used a self-learning algorithm to create risk profiles in an effort to spot child care benefits fraud.

Authorities penalized families over a mere suspicion of fraud based on the system's risk indicators. Tens of thousands of families — often with lower incomes or belonging to ethnic minorities — were pushed into poverty because of exorbitant debts to the tax agency. Some victims committed suicide. More than a thousand children were taken into foster care.


The Dutch tax authorities now face a new €3.7 million fine from the country's privacy regulator. In a statement released April 12, the agency outlined several violations of the EU's data protection rulebook, the General Data Protection Regulation, including not having a legal basis to process people's data and hanging on to the information for too long.

Aleid Wolfsen, the head of the Dutch privacy authority, called the violations unprecedented.

"For over 6 years, people were often wrongly labeled as fraudsters, with dire consequences ... some did not receive a payment arrangement or you were not eligible for debt restructuring. The tax authorities have turned lives upside down," he said, according to the statement.

As governments around the world are turning to algorithms and AI to automate their systems, the Dutch scandal shows just how utterly devastating automated systems can be without the right safeguards. The European Union, which positions itself as the world's leading tech regulator, is working on a bill that aims to curb algorithmic harms.

But critics say the bill misses the mark and would fail to protect citizens from incidents such as what happened in the Netherlands.

No checks and balances

The Dutch system — which was launched in 2013 — was intended to weed out benefits fraud at an early stage. The criteria for the risk profile were developed by the tax authority, reports Dutch newspaper Trouw. Having dual nationality was marked as a big risk indicator, as was a low income.

Why Leysner ended up in the situation is unclear. One reason could be that she had twins, which meant she needed more support from the government. Leysner, who was born in the Netherlands, also has Surinamese roots.

In 2020, Trouw and another Dutch news outlet, RTL Nieuws, revealed that the tax authorities also kept secret blacklists of people for two decades, which tracked both credible and unsubstantiated "signals" of potential fraud. Citizens had no way of finding out why they were on the list or defending themselves.

An audit showed that the tax authorities focused on people with "a non-Western appearance," while having Turkish or Moroccan nationality was a particular focus. Being on the blacklist also led to a higher risk score in the child care benefits system.


A parliamentary report into the child care benefits scandal found several grave shortcomings, including institutional biases and authorities hiding information or misleading the parliament about the facts. Once the full scale of the scandal came to light, Prime Minister Mark Rutte's government resigned, only to regroup 225 days later.

In addition to the penalty announced April 12, the Dutch data protection agency also fined the Dutch tax administration €2.75 million in December 2021 for the "unlawful, discriminatory and therefore improper manner" in which the tax authority processed data on the dual nationality of child care benefit applicants.

"There was a total lack of checks and balances within every organization of making sure people realize what was going on," said Pieter Omtzigt, an independent member of the Dutch parliament who played a pivotal role in uncovering the scandal and grilling the tax authorities.

"What is really worrying me is that I'm not sure that we've taken even vaguely enough preventive measures to strengthen our institutions to handle the next derailment," he continued.

The new Rutte government has pledged to create a new algorithm regulator under the country's data protection authority. Dutch Digital Minister Alexandra van Huffelen — who was previously the finance minister in charge of the tax authority — told POLITICO that the data authority's role will be "to oversee the creation of algorithms and AI, but also how it plays out when it's there, how it's treated, make sure that is human-centered, and that it does apply to all the regulations that are in use." The regulator will scrutinize algorithms in both the public and private sectors.

Van Huffelen stressed the need to make sure humans are always in the loop. "What I find very important is to make sure that decisions, governmental decisions based on AI are also always treated afterwards by a human person," she said. 

A warning to the rest of Europe

Europe's top digital official, European Commission Executive Vice President Margrethe Vestager, said the Dutch scandal is exactly what every government should be scared of.

"We have huge public sectors in Europe. There are so many different services where decision-making supported by AI could be really useful, if you trust it," Vestager told the European Parliament in March. The EU's new AI Act is aimed at creating that trust, she argued, "so that this big public sector market will be open also for artificial intelligence."

The Commission's proposal for the AI Act restricts the use of so-called high-risk AI systems and bans certain "unacceptable" uses. Companies providing high-risk AI systems have to meet certain EU requirements. The AI Act also creates a public EU register of such systems in an effort to improve transparency and help with enforcement.

That's not good enough, argues Renske Leijten, a Socialist member of the Dutch parliament and another key politician who helped uncover the true scale of the scandal. Leijten argues that the AI Act should also apply to those using high-risk AI systems in both the private and public sectors.

In the AI Act, "we see that there are more guarantees for your rights when companies and private enterprises are working with AI. But the important thing we must learn out of the child care benefit scandal is that this was not an enterprise or private sector ... This was the government," she said.

As it is now, the AI Act will not protect citizens from similar dangers, said Dutch Green MEP Kim van Sparrentak, a member of the European Parliament's AI Act negotiating team on the internal market committee. Van Sparrentak is pushing for the AI Act to have fundamental rights impact assessments that will also be published in the EU's AI register. Parliament is also proposing adding obligations to the users of high-risk AI systems, including in the public sector.

"Fraud prediction and predictive policing based on profiling should just be banned. Because we have seen only very bad outcomes and not a single person can be determined based on some of their data," van Sparrentak said.

In a report detailing how the Dutch government used ethnic profiling in the child care benefits scandal, Amnesty International calls on governments to ban the "use of data on nationality and ethnicity when risk-scoring for law enforcement purposes in the search of potential crime or fraud suspects."

The Netherlands is still reckoning with the aftermath of the scandal. The government has promised to pay back victims of the incident €30,000. But for those like Leysner, that doesn't even begin to cover the years she lost — justice seems like a long way off.


"If you go through things like this, you also lose your trust in the government. So it's very difficult to trust what [authorities] say right now," Leysner said.

Clothilde Goujard and Vincent Manancourt contributed reporting.

This article has been updated with the results of the Dutch tax authorities' investigation released in April.
Title: Re: The Tech Dystopia Thread
Post by: Barrister on April 13, 2022, 05:05:27 PM
That's remarkable.

I can understand using an AI to develop subjects.  I can even see how that'd be useful.  But surely you then need to actually uncover evidence of fraud itself?

I'd liken it to the use of facial recognition software.  It's great to help ID suspects - but then you still need to prove you have the right person.  The software itself isn't admissible in court.
Title: Re: The Tech Dystopia Thread
Post by: The Brain on April 13, 2022, 05:06:24 PM
So many things that I don't understand here.
Title: Re: The Tech Dystopia Thread
Post by: Admiral Yi on April 13, 2022, 05:55:23 PM
What's your source Shelf?  I.e. advocacy group or straight reporting?

The comment about racial profiling brings up what is going to be an issue in the data driven age.
Title: Re: The Tech Dystopia Thread
Post by: Sheilbh on April 13, 2022, 05:56:54 PM
Politico EU:
https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/
Title: Re: The Tech Dystopia Thread
Post by: Jacob on April 13, 2022, 06:15:17 PM
Quote from: Admiral Yi on April 13, 2022, 05:55:23 PMWhat's your source Shelf?  I.e. advocacy group or straight reporting?

The comment about racial profiling brings up what is going to be an issue in the data driven age.

Racial profiling is going to be one issue in the data driven age. I don't think being targeted for additional scrutiny and assumed guilty based on demographics and trained AI is a non-issue, even if it's based on factors other than race.
Title: Re: The Tech Dystopia Thread
Post by: Admiral Yi on April 13, 2022, 06:39:46 PM
Quote from: Jacob on April 13, 2022, 06:15:17 PMRacial profiling is going to be one issue in the data driven age. I don't think being targeted for additional scrutiny and assumed guilty based on demographics and trained AI is a non-issue, even if it's based on factors other than race.

I agree with what you've written, but the devil is in the details.

There's a big gray area between "assumed guilty" and "don't worry about it" that includes things like flagged for further scrutiny, or sent a letter that says something like "we have concerns about your benefits, please respond with some information."
Title: Re: The Tech Dystopia Thread
Post by: Jacob on April 13, 2022, 06:47:20 PM
Quote from: Admiral Yi on April 13, 2022, 06:39:46 PM
Quote from: Jacob on April 13, 2022, 06:15:17 PMRacial profiling is going to be one issue in the data driven age. I don't think being targeted for additional scrutiny and assumed guilty based on demographics and trained AI is a non-issue, even if it's based on factors other than race.

I agree with what you've written, but the devil is in the details.

There's a big gray area between "assumed guilty" and "don't worry about it" that includes things like flagged for further scrutiny, or sent a letter that says something like "we have concerns about your benefits, please respond with some information."

And I agree with what you've written too :cheers:

What I'm unsure about is how race is a particular special case in that context, though. Do you simply mean that because race is such an inflamed issue, this will inflame it further?
Title: Re: The Tech Dystopia Thread
Post by: DGuller on April 13, 2022, 06:56:42 PM
Quote from: Barrister on April 13, 2022, 05:05:27 PMThat's remarkable.

I can understand using an AI to develop subjects.  I can even see how that'd be useful.  But surely you then need to actually uncover evidence of fraud itself?

I'd like it to use of facial recognition software.  It's great to help ID suspects - but then you still need to prove you have the right person.  The software itself isn't admissible in court.
These are my thoughts as well.  It doesn't surprise me at all that an algorithm would classify things incorrectly sometimes, or even more than sometimes.  Depending on the use, AI algorithms can get things wrong a lot of the time and still be an improvement over the next best alternative.

What really surprises me is that an AI algorithm is used without any human oversight, and doubly so in an application involving a government function where a careless error is simply unacceptable.  It's one thing to show an ad for tampons to a 60 year old guy; it's another thing to accuse a person of wrongdoing.  The model is also described as designed to develop a risk profile of suspected fraud, so to me the very description implies that the model is just a first step in a process that ends with human investigation and determination of facts.

So much about this story is so wrong that I'm also wondering how objective and complete the article's description is of what actually went on.  It seems much more likely to me that if bad things did happen, they happened because the humans didn't do the job they were required to do and just rubber-stamped the model indications (as opposed to not doing the job because no one thought it was necessary).
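The process described above - a risk model as the first step, with humans making the actual determination - can be sketched as a minimal pipeline. All names here are hypothetical; this is an illustration of the safeguard, not anyone's actual system:

```python
from dataclasses import dataclass

@dataclass
class Case:
    applicant_id: int
    risk_score: float
    reviewed: bool = False
    fraud_confirmed: bool = False

def triage(cases, threshold=0.7):
    """Model output only decides what investigators look at first."""
    return sorted((c for c in cases if c.risk_score >= threshold),
                  key=lambda c: c.risk_score, reverse=True)

def issue_penalty(case):
    # The safeguard: refuse to act on a score that no human has reviewed.
    if not case.reviewed:
        raise RuntimeError("no human review yet - cannot act on a score alone")
    return case.fraud_confirmed

cases = [Case(1, 0.95), Case(2, 0.40), Case(3, 0.80)]
queue = triage(cases)
print([c.applicant_id for c in queue])  # -> [1, 3]
```

The point of the guard in `issue_penalty` is that the score can only reorder the investigators' queue; it can never be the basis of a sanction by itself. The scandal, on this reading, is what happens when that guard is missing or rubber-stamped.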
Title: Re: The Tech Dystopia Thread
Post by: Admiral Yi on April 13, 2022, 07:00:25 PM
Quote from: Jacob on April 13, 2022, 06:47:20 PMWhat I'm unsure about is how race is a particular special case in that context, though. Do you simply mean that because race is such an inflamed issue, this will inflame it further?

Because "no racial profiling" has already become a sacred cow in some circles, and that's exactly what this system does, assign risk points on the basis of historical performance of racial groups (or more specifically national origin).
Title: Re: The Tech Dystopia Thread
Post by: Jacob on April 13, 2022, 07:03:46 PM
Quote from: Admiral Yi on April 13, 2022, 07:00:25 PM
Quote from: Jacob on April 13, 2022, 06:47:20 PMWhat I'm unsure about is how race is a particular special case in that context, though. Do you simply mean that because race is such an inflamed issue, this will inflame it further?

Because "no racial profiling" has already become a sacred cow in some circles, and that's exactly what this system does, assign risk points on the basis of historical performance of racial groups (or more specifically national origin).

Gotcha. Makes sense.
Title: Re: The Tech Dystopia Thread
Post by: DGuller on April 13, 2022, 07:10:50 PM
In the US, I find it hard to imagine that any model in production would directly use race as a predictor.  I'm not sure there are cases where it wouldn't be outright illegal, but even if there are, I imagine that a legal department or some other compliance-related department would be deathly scared of the bad PR that would result if such models became public knowledge.

When it comes to algorithms and race, what everyone is worried about is implicit bias, where the model is accidentally discriminating on race without race being used explicitly as a predictor.
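To make the implicit-bias point concrete: a model can reproduce a racial disparity almost exactly without ever seeing race, as long as some innocuous-looking predictor correlates with it. A toy simulation (all numbers, group labels, and feature names invented for illustration):

```python
import random

random.seed(42)

def make_person():
    group = random.choice(["A", "A", "A", "B"])  # B is a 25% minority
    # In this toy world, group B is concentrated in postcode 2.
    p_high_risk_postcode = 0.8 if group == "B" else 0.1
    postcode = 2 if random.random() < p_high_risk_postcode else 1
    return {"group": group, "postcode": postcode}

def risk_score(person):
    # The "model" uses only postcode - no group feature at all.
    return 0.9 if person["postcode"] == 2 else 0.1

people = [make_person() for _ in range(10_000)]
flagged = [p for p in people if risk_score(p) > 0.5]

def flag_rate(grp):
    total = sum(p["group"] == grp for p in people)
    return sum(p["group"] == grp for p in flagged) / total

print(f"flag rate, group A: {flag_rate('A'):.0%}")  # low
print(f"flag rate, group B: {flag_rate('B'):.0%}")  # far higher, with no group feature
```

Here postcode stands in for any correlated proxy - income, family size, language of correspondence. Dropping the sensitive column from the training data does not drop it from the model's behaviour.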
Title: Re: The Tech Dystopia Thread
Post by: PDH on April 13, 2022, 07:53:44 PM
The Computer is your friend.
Title: Re: The Tech Dystopia Thread
Post by: Jacob on April 13, 2022, 08:35:56 PM
Quote from: PDH on April 13, 2022, 07:53:44 PMThe Computer is your friend.

Have another Bubbly Bouncy Beverage!
Title: Re: The Tech Dystopia Thread
Post by: The Brain on April 14, 2022, 01:37:21 AM
Just some of the weirdness:

Title: Re: The Tech Dystopia Thread
Post by: Syt on April 14, 2022, 01:48:05 AM
(https://www.smbc-comics.com/comics/1649857386-20220413.png)
 :P
Title: Re: The Tech Dystopia Thread
Post by: Syt on April 14, 2022, 02:37:56 AM
More seriously, John Oliver had a segment about data brokers last weekend:

Title: Re: The Tech Dystopia Thread
Post by: Maladict on April 14, 2022, 12:38:58 PM
The AI angle is only part of the story, and didn't get all that much coverage here.
There are actually several issues that created this terrible mess, some dating back a decade before the system was introduced.

First off, lots of errors were made by people claiming benefits, which prompted the investigations, as it seemed mass fraud was taking place. The benefit system is very complicated, and the child care providers often misinformed parents as well, sometimes for their own benefit. Although most of these errors were not made intentionally, the tax department held that it was entitled to reclaim all benefits even if a single, small error had been made. The disproportionate clamping down on innocent mistakes, and then fighting every appeal tooth and nail, is what made it such a scandal. That, and the apparent discriminatory practice of selecting dual nationalities and low-income families as high risk factors for fraud. And finally the downright criminal behavior of trying to cover up the mistake and destroying the evidence.
While the AI situation is problematic, those risk analyses were already being done well before the system was introduced, iirc. The problem is a rotten tax department with a terrible work culture that has been unable to carry out major reforms for a long time. It's ironic that most IT projects the tax department started were abandoned after spending millions, and the one that actually became operational helped create a disaster.
Title: Re: The Tech Dystopia Thread
Post by: Jacob on April 14, 2022, 12:50:41 PM
Thanks for the additional context Maladict.
Title: Re: The Tech Dystopia Thread
Post by: The Brain on April 14, 2022, 12:51:20 PM
Thanks. Makes more sense.
Title: Re: The Tech Dystopia Thread
Post by: Darth Wagtaros on April 14, 2022, 12:54:37 PM
I think some areas of the US use predictive AI for crime, and harass people whom the algorithm considers criminals.
Title: Re: The Tech Dystopia Thread
Post by: Sheilbh on April 14, 2022, 12:57:02 PM
Reflects as well that what you put into the system (like a secret blacklist) is going to result in those characteristics becoming what it flags. Which I think is normally the issue with these stories.

What have the consequences been because a GDPR fine doesn't seem like enough (or high enough)? Plus I never know how much one bit of government fining another bit of government helps.
Title: Re: The Tech Dystopia Thread
Post by: Josquius on April 14, 2022, 01:16:57 PM
Algorithms to flag risk factors, especially complex stuff where multiple unconnected factors combine intersectionally, should be a good thing.
Spot people to be investigated far more efficiently than through old school methods. Take away the menial sorting-through-heaps-of-data part of the job to give civil servants more time to actually do the human stuff of investigating specifics and dealing with people.

Instead it tends to just be taken as an excuse to cut costs. Don't move people away from menial work to more value-added stuff. Just downsize them.
And those flagged people who have factors behind them that warrant investigation? Guilty until proven innocent. Investigation is as good as guilt.
Title: Re: The Tech Dystopia Thread
Post by: Sheilbh on April 14, 2022, 01:28:05 PM
I remember working with a company that had an anti-fraud AI product and there was so much work involved in making sure that we never accidentally process data about race or ethnicity or nationality or in any way infer that data (because, in European law, if you infer it you're processing it) - so I just find it mindblowing that the tax authority looked at their racist secret list of dual citizens, especially with "a non-Western appearance" and decided to add that :blink:

I've never read of a secret blacklist that wasn't really dodgy. It feels like if you have to keep what you're doing super-secret maybe you shouldn't be doing it.
Title: Re: The Tech Dystopia Thread
Post by: Maladict on April 14, 2022, 03:42:39 PM
Quote from: Sheilbh on April 14, 2022, 12:57:02 PMReflects as well that what you put into the system (like a secret blacklist) is going to result in those characteristics becoming what it flags. Which I think is normally the issue with these stories.

What have the consequences been because a GDPR fine doesn't seem like enough (or high enough)? Plus I never know how much one bit of government fining another bit of government helps.

I believe they're still looking into criminal charges against a number of tax dept employees. And a new proportionality principle was introduced, which can be used to appeal against disproportionate fines.

Title: Re: The Tech Dystopia Thread
Post by: Josephus on April 14, 2022, 04:00:53 PM
I'm becoming more and more luddite each passing day.
Title: Re: The Tech Dystopia Thread
Post by: Jacob on April 14, 2022, 04:04:45 PM
Quote from: Josephus on April 14, 2022, 04:00:53 PMI'm becoming more and more luddite each passing day.

Natural result of getting old while new tech is developing :hug:
Title: Re: The Tech Dystopia Thread
Post by: PJL on April 14, 2022, 04:32:47 PM
Quote from: Jacob on April 14, 2022, 04:04:45 PM
Quote from: Josephus on April 14, 2022, 04:00:53 PMI'm becoming more and more luddite each passing day.

Natural result of getting old while new tech is developing :hug:

I think I got to that point in 2010 with mobile phones. To be honest, I seem to have been lagging behind everything tech-wise since then.
Title: Re: The Tech Dystopia Thread
Post by: Josephus on April 15, 2022, 06:08:45 AM
Quote from: Jacob on April 14, 2022, 04:04:45 PM
Quote from: Josephus on April 14, 2022, 04:00:53 PMI'm becoming more and more luddite each passing day.

Natural result of getting old while new tech is developing :hug:

Yup.  :(
Title: Re: The Tech Dystopia Thread
Post by: crazy canuck on April 15, 2022, 10:03:14 AM
Quote from: Josephus on April 15, 2022, 06:08:45 AM
Quote from: Jacob on April 14, 2022, 04:04:45 PM
Quote from: Josephus on April 14, 2022, 04:00:53 PMI'm becoming more and more luddite each passing day.

Natural result of getting old while new tech is developing :hug:

Yup.  :(

If it makes you feel any better, my older son just mentioned to me that he feels like he's starting to get out of touch with technology. Actually, that might not make you feel any better.
Title: Re: The Tech Dystopia Thread
Post by: Josephus on April 15, 2022, 01:27:16 PM
Quote from: crazy canuck on April 15, 2022, 10:03:14 AM
Quote from: Josephus on April 15, 2022, 06:08:45 AM
Quote from: Jacob on April 14, 2022, 04:04:45 PM
Quote from: Josephus on April 14, 2022, 04:00:53 PMI'm becoming more and more luddite each passing day.

Natural result of getting old while new tech is developing :hug:

Yup.  :(

If it makes you feel any better, my older son just mentioned to me that he feels like he's starting to get out of touch with technology. Actually, that might not make you feel any better.

LOL....No.
Title: Re: The Tech Dystopia Thread
Post by: viper37 on April 15, 2022, 04:27:41 PM
Quote from: Barrister on April 13, 2022, 05:05:27 PMThat's remarkable.

I can understand using an AI to develop subjects.  I can even see how that'd be useful.  But surely you then need to actually uncover evidence of fraud itself?
If it's like Canada, they don't really bother themselves with evidence and stuff like that.  They send you a bill; you pay them or you pay your lawyer.
Title: Re: The Tech Dystopia Thread
Post by: DGuller on April 27, 2022, 06:45:43 PM
Speaking of dystopia, I guess I'm the victim of one as well, thankfully far more minor than the one in the first post.  Two days ago I was clearing my e-mails, saw some really, really old e-mail from eBay, so I decided to log in just to see that I still could.  That was probably the first thing I've done on that account in literally 10 years.

Today I get an e-mail that I'm permanently suspended, "because of activity that we believe was putting the eBay community at risk".  "We understand that this must be frustrating, but this decision was not made lightly and it's important that we keep our marketplace safe for everyone."

I beg to differ about the last part.  It sure sounds like the decision has been made because of some flag automatically acted on.  I got on the support chat, and apparently my account has been permanently suspended because eBay believed it was putting the eBay community at risk.  Shockingly, I actually got some additional detail out of them eventually, and it turns out that I had no address or phone number on file.  :huh:

Can I give it to them now?  Nope.  Can they unban me?  No.  So what the fuck do I do?  One agent suggested starting a new account.  I'm kind of skeptical that starting another account after being permanently suspended is a wise choice, and I don't have another e-mail anyway.

I Googled around, and it turns out that being permanently suspended is done only for very serious violations, and there is no way to get it reversed.  :hmm:  Obviously I don't need this account badly if I haven't used it for 10 years, but at this point I'm just pursuing this matter out of stubbornness, and also out of curiosity as to how far this Kafkaesque saga can go.  Supposedly some supervisor will contact me by e-mail within a day.
Title: Re: The Tech Dystopia Thread
Post by: DGuller on April 27, 2022, 11:46:46 PM
This saga got me thinking about how our society should deal with private companies that are nonetheless natural monopolies.  It's true that private companies have a right to choose whom to do business with, even if their choice is made poorly.  The thinking is that customers have other choices, and companies which are dumb enough to piss off too many customers for bad reasons will lose out over time.

Does this logic still work for natural monopolies?  In the Internet age, a lot of businesses naturally gravitate towards one competitor snowballing ahead, as the network effect dominates.  At that point, getting banned can have a non-negligible impact on customers that cannot be mitigated.  If Microsoft for some reason decides to ban me from purchasing their products, I would be screwed in a lot of ways.  For all we know, I could be banned because of a personal vendetta by someone working at Microsoft.  Or what if Google bars me from using their search engine?

Should there be consumer protections for citizens to protect them from the actions of companies with dominant market shares?  I think there should be; my general view is that a democratic society should give citizens recourse against all kinds of coercion, not just coercion originating from the government.
Title: Re: The Tech Dystopia Thread
Post by: Josquius on April 28, 2022, 02:41:05 AM
Ebay is a mess. Their whole setup is simply a shambles. All emails I get from them are in German with no way to change it purely because I first made the account from a Swiss IP.
Title: Re: The Tech Dystopia Thread
Post by: Syt on April 28, 2022, 05:17:33 AM
https://globalnews.ca/news/8791036/freshii-percy-virtual-cashier-job-outsourcing/

(https://images.thestar.com/RY9Raa9cS7MnFUKk0I8GoVccJaU=/605x807/smart/filters:cb(1651096406300)/https://www.thestar.com/content/dam/thestar/business/2022/04/26/meet-the-freshii-virtual-cashier-who-works-from-nicaragua-for-375-an-hour/percy_digitized.jpg)

QuoteFreshii introduces 'Percy' virtual cashier, outsourcing jobs to Central America

Ordering at your local Freshii may look a little different next time you stop in at the healthy fast-food alternative. The Toronto-based company has launched "Percy," a virtual cashier who takes your order and payment.


"Unlike a kiosk or a pre-ordering app, which removes human jobs entirely, Percy allows for the face-to-face customer experience, that restaurant owners and operators want to provide their guests, by mobilizing a global and eager workforce," explained the company in a statement signed by "Percy."

The virtual cashier is a SaaS technology platform which the company says is aimed at helping the restaurant industry grapple with its biggest crisis ever – staffing shortages. Designed by "Thomas and Friends," Freshii says the technology will help with labour shortages by creating "a human solution for the ordering/cashier process."

Freshii operates more than 340 stores in North America and abroad and while "Percy" may appear as a method to circumvent Ontario's employment standards, employment lawyers say the practice is entirely legal.

"It's just like any other outsourcing by a company, so, if you hire workers in another country your only obligation as an employer is to ensure that you are in compliance with minimum employment standards legislation of that particular country," explains Fiona Martin of Samfiru Tumarkin LLP, a law firm based in Toronto.

But Freshii is facing criticism for the decision to outsource cashier jobs as the Canadian Labour Congress expresses concern over eliminating important front-facing jobs that are usually staffed by students breaking into the job market.

"You're taking a situation where you're exploiting workers in another country who have a lesser working standard, who have a much smaller minimum wage," explains Bea Bruske of the Canadian Labour Congress. "That's not acceptable, nor should it be acceptable."

The Ontario Minister of Labour, Training and Skills Development called the outsourcing of cashiers "outrageous," adding that it "moves entirely in the wrong direction."

"I expect better from a Toronto-based company and know customers will vote with their feet," said minister Monte McNaughton.

The official Opposition is also voicing concern, explaining in a one-on-one interview with Global News that these types of jobs need to be protected.

"To strengthen employment standards legislation to deal with the fact that there are new kinds of technology that are revolutionising our labour market and the legislation has to reflect that," says Patty Sattler, NDP labour critic.

But Freshii maintains the virtual cashier helps redirect staff to "higher value work," pointing to automation in general including self-checkouts and app-based companies like Amazon who have taken people out of the sales process.

If you think your cashiers are overpaid, and you don't trust your customers enough for self service, I guess.
Title: Re: The Tech Dystopia Thread
Post by: Josquius on April 28, 2022, 05:24:39 AM
I've noticed the UK is really horrific for not trusting customers with self service.
The way the machines are set up to weigh your bags and scream at you for not packing right is simply ridiculous.
Contrast to elsewhere where you just scan and go. None of this invalid item in bagging area crap.

Anyway.
This thing looks like something from 1960s sci-fi - you know, when they imagine one area of tech advancing a tonne but completely fail to consider another. In this case, video chat is super advanced, but the barcode was never invented so you can't automatically add up the prices yourself.
Title: Re: The Tech Dystopia Thread
Post by: The Brain on April 28, 2022, 05:25:57 AM
Fast food QOL improved so much with automatic kiosks and apps.
Title: Re: The Tech Dystopia Thread
Post by: Sheilbh on April 28, 2022, 05:55:19 AM
Quote from: Josquius on April 28, 2022, 05:24:39 AMI've noticed the UK is really horrific for not trusting customers with self service.
The way the machines are set up to weigh your bags and scream at you for not packing right is simply ridiculous.
Contrast to elsewhere where you just scan and go. None of this invalid item in bagging area crap.
Because the UK - there's a class angle :lol:

Waitrose doesn't have those machines to weigh bags. There's a few other big supermarkets where they don't - I think it might be in their Metro/city centre stores where a huge chunk of their business is people getting a meal deal at lunch and the queue needs to move quickly.
Title: Re: The Tech Dystopia Thread
Post by: Admiral Yi on April 28, 2022, 06:33:38 AM
My supermarket has that weighing thing.
Title: Re: The Tech Dystopia Thread
Post by: Syt on April 28, 2022, 06:34:47 AM
Some have them, some don't, even within the same chain.
Title: Re: The Tech Dystopia Thread
Post by: Admiral Yi on April 28, 2022, 06:39:49 AM
Interestingly my place gives you a "skip bagging" option that turns off the weighing.
Title: Re: The Tech Dystopia Thread
Post by: Josquius on April 28, 2022, 06:52:35 AM
Quote from: Admiral Yi on April 28, 2022, 06:39:49 AMInterestingly my place gives you a "skip bagging" option that turns off the weighing.
I thought I'd discovered a hack with that. But here it just blocks you and calls an attendant after a few items doing it.
Title: Re: The Tech Dystopia Thread
Post by: Grey Fox on April 28, 2022, 08:24:12 AM
All self checkout I've seen here have weighters(:hmm:)

Walmart, grocery stores & dollar stores.
Title: Re: The Tech Dystopia Thread
Post by: ulmont on April 28, 2022, 09:14:36 AM
Quote from: DGuller on April 27, 2022, 11:46:46 PMThis saga got me thinking about how our society should deal with private companies that are nonetheless natural monopolies. 

Based on my past experience with the gas and electric companies, what we will do is let you choose your ebay marketing provider, so that you pay Shiny-Ebay who then pays eBay for the service access and pockets the difference, while ignoring any requests for help with "we put in a ticket with eBay and we haven't heard back."

...what we should do is, well, not that. 
Title: Re: The Tech Dystopia Thread
Post by: Jacob on April 28, 2022, 12:00:51 PM
Quote from: Grey Fox on April 28, 2022, 08:24:12 AMAll self checkout I've seen here have weighters(:hmm:)

Walmart, grocery stores & dollar stores.

None of my local ones that I use do. Though, I've come across some in the past... and it's annoying enough that I stop using the self-checkout terminals in those places (or stop going to the store altogether, depending on the convenience of alternatives).
Title: Re: The Tech Dystopia Thread
Post by: Barrister on April 28, 2022, 12:18:43 PM
Quote from: Josquius on April 28, 2022, 06:52:35 AM
Quote from: Admiral Yi on April 28, 2022, 06:39:49 AMInterestingly my place gives you a "skip bagging" option that turns off the weighing.
I thought I'd discovered a hack with that. But here it just blocks you and calls an attendant after a few items doing it.

Most annoying thing about self-bagging at my local Safeway:

They stopped using disposable plastic bags about a year ago.  They encourage you to bring re-usable bags.

But the moment you put a re-usable bag on the scale it sets the scale off.  You can hit a "I'm using my own bag" button but that then requires a clerk to check.
Title: Re: The Tech Dystopia Thread
Post by: grumbler on May 03, 2022, 09:02:47 PM
The local markets here have all dropped the weight thing.  Thank Hod.