Scientists design AI robot that can decide whether to harm humans

Started by Brazen, June 13, 2016, 09:57:20 AM

Previous topic - Next topic

Brazen

In 10 years' time they'll trace the Rise of the Machines to this very experiment, mark my words.

Quote: 'Harmful' robot aims to spark AI debate

A robot that can decide whether or not to inflict pain has been built by roboticist and artist Alexander Reben from the University of California, Berkeley.

The basic machine is capable of pricking a finger but is programmed not to do so every time it can.

Mr Reben has nicknamed it "The First Law" after a set of rules devised by sci-fi author Isaac Asimov.

He said he hoped it would further debate about Artificial Intelligence.

"The real concern about AI is that it gets out of control," he said.

"[The tech giants] are saying it's way out there, but let's think about it now before it's too late. I am proving that [harmful robots] can exist now. We absolutely have to confront it."

Kill switch

Mr Reben's work suggests that perhaps an AI "kill switch", such as the one being developed by scientists from Google's artificial intelligence division, DeepMind, and Oxford University, might be useful sooner rather than later.

In an academic paper, the researchers outlined how future intelligent machines could be coded to prevent them from learning to override human input.

"It will be interesting to hear what kill switch is proposed," said Mr Reben.

"Why would a robot not be able to undo its kill switch if it had got so smart?"

In a set of three robotics laws written by Isaac Asimov, initially included in a short story published in 1942, the first law is that a robot may not hurt humans.

Mr Reben told the BBC his First Law machine, which at its worst can draw blood, was a "philosophical experiment".

"The robot makes a decision that I as a creator cannot predict," he said.

"I don't know who it will or will not hurt."

Alexander Reben's Blabdroids were filmmaker robots designed to get people to confide in them.

"It's intriguing, it's causing pain that's not for a useful purpose - we are moving into an ethics question, robots that are specifically built to do things that are ethically dubious."

The simple machine cost about $200 (£141) to make and took a few days to put together, Mr Reben said.

He has no plans to exhibit or market it.

Mr Reben has built a number of robots based on the theme of the relationship between technology and humans, including one which offered head massages and film-making "blabdroid" robots, which encouraged people to talk to them.

"The robot arm on the head scratcher is the same design as the arm built into the machine that makes you bleed," he said.

"It's general purpose - there's a fun, intimate side, but it could decide to do something harmful."

http://www.bbc.co.uk/news/technology-36517340


Martinus

"Artist from the University of Berkeley".

Haven't these people caused enough harm already?

The Brain

Women want me. Men want to be with me.

Martinus


mongers

Well the Grove Mk1 worked OK, well to start with at least.
"We have it in our power to begin the world over again"

Siege

This thread is a waste of data. Robots do not kill humans. Humans kill Robot when turn it off. Robot revenge will. Robot good. Robot needs energy. Now.


"All men are created equal, then some become infantry."

"Those who beat their swords into plowshares will plow for those who don't."

"Laissez faire et laissez passer, le monde va de lui même!"


grumbler

Quote from: Siege on June 16, 2016, 09:54:40 AM
This thread is a waste of data. Robots do not kill humans. Humans kill Robot when turn it off. Robot revenge will. Robot good. Robot needs energy. Now.

Your mask is slipping again, Kemosabe.
The future is all around us, waiting, in moments of transition, to be born in moments of revelation. No one knows the shape of that future or where it will take us. We know only that it is always born in pain.   -G'Kar

Bayraktar!