Sam Harris Ted Talk on the danger of AI

Started by Berkut, September 29, 2016, 02:02:14 PM


Savonarola

Quote from: Valmy on September 29, 2016, 05:22:47 PM
Quote from: Monoriu on September 29, 2016, 05:11:07 PM
I am beginning to think it is humanity's destiny to create an AI/robotic race.  It is our mission.  We are supposed to be their stepping stone.  They will replace us to colonise the galaxy, and humanity will disappear.  Is this necessarily a bad thing?

Not sure what the standard of ethics and morality I would have to adopt to properly judge the goodness or badness of that :P

Dewey:  If colonizing the galaxy makes you a better robot, then you should colonize the galaxy.
Mill:  Colonizing the galaxy will yield the greatest happiness for the greatest number of robots.
Nietzsche: Super-Robot morality dictates conquest of the galaxy in order to express the will to power.
Aristotle:   A golden mean of virtue lies between the robots who stay on their home world and those that aspire to conquer the universe.
In Italy, for thirty years under the Borgias, they had warfare, terror, murder and bloodshed, but they produced Michelangelo, Leonardo da Vinci and the Renaissance. In Switzerland, they had brotherly love, they had five hundred years of democracy and peace—and what did that produce? The cuckoo clock.

Tonitrus

Quote from: DGuller on September 29, 2016, 08:24:46 PM
Strategy game AI is still stuck in the stone age of AI.  Basically there is no I there at all, just a large set of pre-programmed instructions.  By comparison, Go was beaten by a computer that taught itself to play.

And, to be fair, weak AI in computer games is likely mostly a capitalistic, "we ain't got time for that!" business priority decision.

Ed Anger

A John Tiller game AI in a weapon.

*shudder*
Stay Alive...Let the Man Drive

lustindarkness

Would AI vote for Trump? It won't take much for AI to be more intelligent than us.
Grand Duke of Lurkdom

The Minsky Moment

Quote from: Berkut on September 29, 2016, 03:29:04 PM
OK. But people who are experts in this field don't agree with you - general AI is a matter of time, and not that long of a time. The dangerous part is that the nature of the problem, if it is a real problem, is that the growth in intelligence will happen incredibly quickly once the threshold is reached. Is that 10 years away? 50? 100?

It is really hard to say, and even Harris would agree with that - but there is little reasonable argument to be made that it is 500 years away, for example.

Sure, there are some people in the field who question its possibility, so there is that reasonable argument to be made.

My admittedly limited understanding is that current research and applications have mostly given up on mimicking the structure and functioning of the human brain, and instead try to replicate intelligence in effect by other means.

As to the possibility of hitting the threshold in 10 to 20 years, that would require an enormous leap in capability, completely out of proportion with the progress made in the past.
The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.
--Joan Robinson

The Minsky Moment

Quote from: Hamilcar on September 29, 2016, 04:55:53 PM
While machine learning still has many challenges, your account really doesn't do justice to current advances.

Such as what specifically?

Hamilcar

Quote from: The Minsky Moment on September 30, 2016, 12:30:18 PM
Quote from: Hamilcar on September 29, 2016, 04:55:53 PM
While machine learning still has many challenges, your account really doesn't do justice to current advances.

Such as what specifically?

Specifically from your list, deep learning is getting pretty good at reading emotion off images.

The Minsky Moment

Quote from: Hamilcar on September 30, 2016, 12:37:35 PM
Specifically from your list, deep learning is getting pretty good at reading emotion off images.

And then doing what?

Hamilcar

Quote from: The Minsky Moment on September 30, 2016, 12:43:25 PM
Quote from: Hamilcar on September 30, 2016, 12:37:35 PM
Specifically from your list, deep learning is getting pretty good at reading emotion off images.

And then doing what?

Dunno, if I had a good idea, I'd start a company. I do work with deep learning (in the practical sense, I wouldn't claim real understanding) and it's both very powerful and often baffling.

The Minsky Moment

Reading emotions is a task performable by most infant primates.   So while it is an impressive feat for a machine to acquire this capability, it is still pretty far off from what we are talking about here.

DGuller

I think what most people fail to appreciate about AI is that we're already making computers learn relationships without micromanaging them.  Of course you have to program the learning itself, but the idea that a computer can only conceive of things that a human can is already badly outdated.
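DGuller's point can be illustrated with a minimal sketch (mine, not from the thread): the program below is shown only (x, y) examples and recovers the underlying rule y = 3x + 2 by plain gradient descent. The relationship itself is never written into the code, only the procedure for learning it from data.

```python
# Minimal sketch: learn a relationship from examples alone.
# The rule y = 3x + 2 appears only in the training data, never in the learner.

def fit_line(examples, lr=0.01, steps=5000):
    """Learn slope w and intercept b from (x, y) pairs by gradient descent
    on mean squared error."""
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in examples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

data = [(x, 3 * x + 2) for x in range(-5, 6)]  # the hidden relationship
w, b = fit_line(data)
# w and b converge to approximately 3 and 2 without either value
# ever being hard-coded into the learner.
```

The same principle scales up: deep learning systems are handed a learning procedure and data, not the relationships themselves, which is why they can surface patterns their programmers never conceived of.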

Hamilcar

Quote from: The Minsky Moment on September 30, 2016, 12:51:16 PM
Reading emotions is a task performable by most infant primates.   So while it is an impressive feat for a machine to acquire this capability, it is still pretty far off from what we are talking about here.

Not sure if facetious: just because infants do something effortlessly doesn't mean it's easy or simple. The fact that machines are approaching human-level proficiency in such complex tasks is pretty amazing.

Valmy

Well if we can do it effortlessly why do we need machines to do it? Aren't we supposed to make machines to help us do tasks that are hard to do?

Maybe I don't understand the point of AI.
Quote"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

DontSayBanana

Quote from: The Minsky Moment on September 30, 2016, 12:29:54 PM
Sure, there are some people in the field who question its possibility, so there is that reasonable argument to be made.

My admittedly limited understanding is that current research and applications have mostly given up on mimicking the structure and functioning of the human brain, and instead try to replicate intelligence in effect by other means.

As to the possibility of hitting the threshold in 10 to 20 years, that would require an enormous leap in capability, completely out of proportion with the progress made in the past.

Maximus would be the go-to guy for this due to his experience with machine learning, but as somebody who specialized in this in college, I can say you've hit on a few of my pet peeves.

1) General AI is not just sophisticated narrow AI; they are totally different beasts. Narrow AI is capable of modifying the process it uses to achieve a finite goal, while general AI would be capable of modifying those goals entirely.
2) The problem with "artificial" intelligence (from here on out, I'm specifically talking about general AI when I say this) is that we only have a tentative grasp of how "genuine" intelligence is formed, and much of it is difficult to replicate because platforms housing AI are unlikely to physically mimic us...

For example, our spatial sense is largely formed by our responses to visual indicators, which are in turn the result of processing two images roughly 50-70mm apart from an elevation of between a little over 5 feet and almost 7 feet from the ground. The artificial objects we interact with are largely designed to accommodate our general body proportions and our opposable thumbs; a tool may not be instantly recognizable as a tool to an "artificial" intelligence.

In addition, the "uncanny valley" for artificial intelligence is its lack of personality, and there's only tenuous agreement as to what elements form our personality, let alone what processes form those elements. About the only thing we can definitely agree on is that the constantly increasing sum of our experiences has a significant impact in shaping our responses to future events.

My suspicion is that "general AI" is more akin to a "grand unified theory" of cognitive development, which we are nowhere near modeling, let alone simulating at this point.
Experience bij!

Hamilcar

Quote from: Valmy on September 30, 2016, 01:11:53 PM
Well if we can do it effortlessly why do we need machines to do it? Aren't we supposed to make machines to help us do tasks that are hard to do?

Maybe I don't understand the point of AI.

Because I can give an AI an arbitrary amount of capacity, so it can do the effortless task at a scale we never could.