The Boy Who Cried Robot: A World Without Work

Started by jimmy olsen, June 28, 2015, 12:26:12 AM


What should we do if automation renders most people permanently unemployed?

Negative Income Tax
26 (52%)
Communist command economy directed by AI
7 (14%)
Purge/sterilize the poor
3 (6%)
The machines will eradicate us, so why worry about unemployment?
7 (14%)
Other, please specify
7 (14%)

Total Members Voted: 49

Savonarola

Quote from: Zanza on September 17, 2021, 10:26:12 AM
Quote from: Savonarola on September 17, 2021, 10:03:05 AM
There was a reference to the SAE levels in the article, but they didn't go into any detail - could you tell us what the different levels are?


:thumbsup:  Thanks.  When do you think we'll see the first level 5 vehicles?
In Italy, for thirty years under the Borgias, they had warfare, terror, murder and bloodshed, but they produced Michelangelo, Leonardo da Vinci and the Renaissance. In Switzerland, they had brotherly love, they had five hundred years of democracy and peace—and what did that produce? The cuckoo clock
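For reference, the SAE J3016 levels discussed in this exchange can be summarized as a small lookup table; the one-line descriptions below are paraphrases, not the standard's exact wording:

```python
# SAE J3016 driving-automation levels, paraphrased one-liners.
SAE_LEVELS = {
    0: "No automation: the human does all the driving.",
    1: "Driver assistance: steering OR speed support (e.g. adaptive cruise).",
    2: "Partial automation: steering AND speed support; human must supervise.",
    3: "Conditional automation: system drives, human must take over on request.",
    4: "High automation: no human fallback needed, but only in a limited domain.",
    5: "Full automation: drives everywhere a human could, no fallback needed.",
}

def describe(level: int) -> str:
    """Return the one-line description for an SAE level."""
    return SAE_LEVELS[level]
```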

The Larch

Quote from: The Brain on September 17, 2021, 10:41:22 AM
Does level 6 drive you to where you need to be instead of where you want to be?

That sounds like the zen taxi driver from Alan Moore's Top Ten comics.  :lol:

Zanza

#437
@Sav: No idea. On the one hand, sensor and AI development is ongoing and accelerating. On the other hand, there is an expectation of perfection for these systems, which is hard to achieve. For many systems it means redundancy is necessary, which is very expensive.

Customers buying a car for themselves do not seem willing to pay enough for the additional hardware and software, so there is no business case to be made right now. You can do level 4 with basic sensors and software, but probably not level 5 - unless you are willing to accept high liability risk for failures (e.g. Tesla, currently also at level 3 to 4 at best).

So the area where most progress is being made is commercial vehicles - in the US, mainly Class 8 on-highway semis. The scenario they spend most of their time in (highway cruising) is fairly easy; only the last mile and the ramps are complex. To give an example, Freightliner cooperates with Waymo (Google) and has bought Torc to be able to offer autonomous trucks within this decade. It's also possible to test these, e.g. in Nevada. Furthermore, the driver is one of the main cost drivers for fleet operators, especially mandated rest time - trucks only earn money when running. So the business case here looks very different.

It will have major disruptive impact on the market though: fleet operators will leverage capital to buy this technology. Individuals owning and driving one truck will be priced out of the market.
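The fleet-operator business case above can be put into back-of-envelope arithmetic; every number below is an invented placeholder, not an industry figure:

```python
# Hypothetical per-truck annual costs (all figures invented for illustration).
DRIVER_COST = 60_000      # driver salary + benefits per year
DRIVER_HOURS = 2_000      # legal driving hours per year (rest rules)
AUTONOMY_COST = 30_000    # amortized sensor/software cost per year
AUTONOMY_HOURS = 6_000    # an autonomous truck can run nearly around the clock

def cost_per_hour(annual_cost: float, hours: float) -> float:
    """Cost per hour the truck is actually moving (and earning)."""
    return annual_cost / hours

human = cost_per_hour(DRIVER_COST, DRIVER_HOURS)      # 30.0 per driving hour
auto = cost_per_hour(AUTONOMY_COST, AUTONOMY_HOURS)   # 5.0 per driving hour
```

Under these made-up numbers the gap comes as much from utilization (the truck running three times as many hours) as from the cost line itself, which is why the calculation looks so different for a fleet than for a private car buyer.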

Jacob

Quote from: Zanza on September 18, 2021, 01:34:10 AM
(...)
So the area where most progress is made is commercial vehicles. In the US, mainly for Class 8 on-highway semis. The scenario they are spending most of their time in is fairly easy, only the last mile and the ramp are complex.
(...)
It will have major disruptive impact on the market though: fleet operators will leverage capital to buy this technology. Individuals owning and driving one truck will be priced out of the market.

I've seen it argued (here? elsewhere? I don't remember) that keeping drivers for "the last mile" will soften the disruption for drivers. The argument, I believe, is that growth in "last mile" jobs will significantly offset the loss of long-haul driving jobs, partly due to overall growth in transport.

What do you think of that? Is that just wishful thinking/ an attempt to take the edge off for folks who are concerned? Or is there real substance there?

Zanza

No idea to be honest.

But last-mile logistics with vans for the booming online retail sector is among the shittiest jobs that currently exist in the Western world: exploited, AI-controlled, precarious work with poor pay and poorer working conditions.

That does not make me hopeful about last-mile logistics for big rigs. I guess people with the appropriate driver's license are currently in short supply, at least in Europe (especially in Brexit Britain). But when you need far fewer drivers, working conditions could deteriorate.

Two of the things a driver is currently responsible for are freight loading and paperwork. Loading is more and more AI-controlled (think Tetris), and paperwork is being digitalized and will increasingly be taken over by technology (RFID, NFC, or simply sensors able to identify loads).
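The "think Tetris" freight-loading problem is, in one dimension, classic bin packing; a minimal first-fit-decreasing sketch, purely illustrative (weights only, no geometry):

```python
def first_fit_decreasing(weights, capacity):
    """Pack weights into as few bins of `capacity` as possible (heuristic)."""
    bins = []  # each bin is a list of weights
    for w in sorted(weights, reverse=True):
        for b in bins:
            if sum(b) + w <= capacity:
                b.append(w)   # fits into an existing bin
                break
        else:
            bins.append([w])  # otherwise open a new bin
    return bins

# Example: pallet weights (tonnes) packed into trucks with a 10-tonne payload.
loads = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
```

Real load planning adds 3D geometry, axle-weight limits, and delivery order, but the optimization core is the same family of problem.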

crazy canuck

I wonder if the analogy to the port workers will hold.  As technology developed, the work at the port changed significantly, from very dangerous labourers' jobs to largely monitoring, and to some extent operating, the robotic tech that did all the sorting, loading and unloading.  Wages rose - partly because the skill set required increased, but also in large part because port workers have good unions, at least in this country.


Zanza

I guess working conditions in ports are much better these days.

But if you consider a metric like freight moved per worker, productivity in ports must have increased by a factor many times the growth of the workers' income.

I also don't see why AI-controlled systems won't eventually replace those workers if they become too expensive. Moving containers in particular seems to be something that should be possible to automate using AI: if it is possible to build robots for warehouses, letting sensors and AI control a container terminal seems feasible.

Savonarola

This comes from the same magazine, I thought it was relevant to the subject at hand:

QuoteSocietal Implications of Service Robots

Over the past several years, studying networked robotics has grown in both popularity and importance due to its various services via using connectivity, remote or local, to and from robots. Networked robotics is not only critically important toward Industry 4.0, providing an additional layer of benefits for enterprises and related industry, but also necessary for some critical factors that our daily living environments have already begun to face. The following is a list of societal implications of networked robotics:

• Productivity of manufacturing and logistics:
Industrial robots in manufacturing and logistics environments are getting more intelligent and more capable of tasks that require a higher level of efficiency and performance requirement, such as clock synchronization, ultra-low latency, communication service availability, positioning accuracy, and data rate.

• Imbalance of resource allocations: Many resources, including humans as workforce and network resources, are scarce at a certain point in time, although one can perform the optimal scheduling for a given decision making problem. Healthcare delivery is one of the most common examples that have critical issues regarding the imbalance of resource allocations. Remote or local medical robotics service is a typical example of using networked robots to improve the quality of healthcare delivery, for example, in emergency or urgent cases with limited availability of medical personnel and resources at the site where a patient is located.

• Population aging: Old-age dependency ratio (OADR) is defined as the population aged 65 years or over divided by the population aged from 20 to 64 years, which is often used as a proxy for the social and economic dependency of the older population. According to World Population Aging, published by the United Nations, OADRs were highest in Europe and Northern America, with 30 persons aged 65 or older per 100 persons aged 20–64 years (the "working ages"), followed by Australia and New Zealand, with 27 older persons per 100 persons of working age. This ratio is projected to rise considerably, reaching 49 per 100 in Europe and Northern America and 42 per 100 in Australia and New Zealand in 2050. With this observation, it is expected that SOBOTs can play a key role in our future society that requires more labor resources to support the aging population.
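The OADR defined in the quoted bullet is a simple ratio; a minimal sketch, with hypothetical population counts chosen only to reproduce the cited figure:

```python
def oadr(pop_65_plus: float, pop_20_to_64: float) -> float:
    """Old-age dependency ratio: persons aged 65+ per 100 persons aged 20-64."""
    return pop_65_plus / pop_20_to_64 * 100

# Hypothetical population counts (in millions) chosen to reproduce the cited
# "30 older persons per 100 of working age" for Europe/Northern America.
europe_now = oadr(150, 500)    # 30.0
```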

The recent focus in communication engineering research is on the advances to come from the greater throughput and lower latency of 5G networks.  5G coupled with cloud computing (or Edge Computing, or the wonderfully named Fog Computing) seems to have a number of applications in manufacturing, and that's where most of the articles focus.  This article focused on Service Robots (or SOBOTs, which sounds unfortunately similar to a remarkably lame toy line from the 80s.)  While this is still just conceptual, I think it does lay out the reasons we should expect robot caretakers in our golden years, as well as the possibility of robots in other service sectors.

jimmy olsen

It is far better for the truth to tear my flesh to pieces, than for my soul to wander through darkness in eternal damnation.

Jet: So what kind of woman is she? What's Julia like?
Faye: Ordinary. The kind of beautiful, dangerous ordinary that you just can't leave alone.
Jet: I see.
Faye: Like an angel from the underworld. Or a devil from Paradise.
--------------------------------------------
1 Karma Chameleon point

Admiral Yi

That doesn't have anything to do with work.  It's about autonomous weapons.

jimmy olsen

Quote from: Admiral Yi on September 25, 2021, 12:57:11 AM
That doesn't have anything to do with work.  It's about autonomous weapons.
This is the robot thread.

Also, the killbots will take the jobs of thousands of soldiers.

celedhring

 :ph34r:

https://www.cnet.com/tech/google-suspends-engineer-who-rang-alarms-about-a-company-ai-achieving-sentience/

QuoteGoogle Suspends Engineer Who Rang Alarms About a Company AI Achieving Sentience
Google says Blake Lemoine violated the company's confidentiality policy.


Google suspended an engineer last week for revealing confidential details of a chatbot powered by artificial intelligence, a move that marks the latest disruption of the company's AI department.

Blake Lemoine, a senior software engineer in Google's responsible AI group, was put on paid administrative leave after he took public his concern that the chatbot, known as LaMDA, or Language Model for Dialogue Applications, had achieved sentience. Lemoine revealed his suspension in a June 6 Medium post and subsequently discussed his concerns about LaMDA's possible sentience with The Washington Post in a story published over the weekend. Lemoine also sought outside counsel for LaMDA itself, according to The Post.

In his Medium post, Lemoine says that he investigated ethics concerns with people outside of Google in order to get enough evidence to escalate them to senior management. The Medium post was "intentionally vague" about the nature of his concerns, though they were subsequently detailed in the Post story. On Saturday, Lemoine published a series of "interviews" that he conducted with LaMDA.

Lemoine didn't immediately respond to a request for comment via LinkedIn. In a Twitter post, Lemoine said that he's on his honeymoon and would be unavailable for comment until June 21.

In a statement, Google dismissed Lemoine's assertion that LaMDA is self-aware.

"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," Google spokesperson Brian Gabriel said in a statement. "If you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on."

The high-profile suspension marks another point of controversy within Google's AI team, which has weathered a spate of departures. In late 2020, prominent AI ethics researcher Timnit Gebru said Google fired her for raising concerns about bias in AI systems. About 2,700 Googlers signed an open letter in support of Gebru, who Google says resigned her position. Two months later, Margaret Mitchell, who co-led the Ethical AI team along with Gebru, was fired.

Research scientist Alex Hanna and software engineer Dylan Baker subsequently resigned. Earlier this year, Google fired Satrajit Chatterjee, an AI researcher, who challenged a research paper about the use of artificial intelligence to develop computer chips.

AI sentience is a common theme in science fiction, but few researchers believe the technology is advanced enough at this point to create a self-aware chatbot.

"What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them," said AI scientist and author Gary Marcus in a Substack post. Marcus didn't dismiss the idea that AI could one day comprehend the larger world, but that LaMDA doesn't at the moment.

Economist and Stanford professor Erik Brynjolfsson equated LaMDA to a dog listening to a human voice through a gramophone.
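Marcus's "put together sequences of words without any coherent understanding" is, in miniature, what a toy Markov-chain text generator does. This is emphatically not how LaMDA works (LaMDA is a large neural language model), but it illustrates the point; the corpus below is invented:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def babble(follows, start, n=8, seed=0):
    """Emit up to n words by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(nxt))
    return " ".join(out)

# Tiny "ice cream dinosaur"-flavored corpus.
model = train_bigrams("the dinosaur melts and the dinosaur roars and roars")
```

The output is locally plausible word sequences with no model of the world behind them, which is exactly the property the quoted researchers are debating at vastly larger scale.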


The Minsky Moment

#447
From the story

Quote"What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them,"

Sounds human to me  . . .

Quote"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic  . . . If you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on."

Again seems consistent with Lemoine's description of the chatbot as akin to an 8 year old child.  They learn by imitation and riff on fantastical topics.

It always seemed to me that the point of the Turing Test was to flag the fact that concepts like "sentience" and "self-awareness" are slippery and have deep philosophical coherence problems of their own. That's not to say I'm ready to jump on Lemoine's train and start issuing social security numbers to chatbots.  But as this technology keeps getting more sophisticated, we are going to need to develop more sophisticated ways of thinking about it.
The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.
--Joan Robinson

DGuller

Just because you're trying to shut someone up doesn't mean that the person you want to shut up is saying something meaningful.  It would be a problem for Google if it wanted to keep AI sentience secret and this guy spilled the beans, true, but it would be just as much of a problem if this guy didn't know what he was talking about but had the credentials to make laypeople think that he did.

Oexmelin

None of this matters when it comes to brand management.

I've mentioned this before, but that one Radiolab episode (where the scientists developing the technology now behind deepfakes reacted with considerable surprise to the journalist's concerns about trust and democracy, confessing they had never thought about it at all) didn't fill me with amazing confidence about the ethical and philosophical bent of much of the scientific community.
Que le grand cric me croque !