Sam Harris Ted Talk on the danger of AI

Started by Berkut, September 29, 2016, 02:02:14 PM


The Minsky Moment

Hami:

First of all - it's not a skill limited to humans.  It's general to primates.  The point is that it is a very useful skill to have beginning at a very early age, and thus primates evolved in a way that basic emotional expressions are easily recognizable.

Now I would still agree that it's still a very impressive feat for a machine to pull this off.  But that just shows what a vast gulf exists between present day capabilities - which are just starting to reach the most basic skills of human intelligence - and the goals of generalized human intelligence.
The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.
--Joan Robinson

DGuller

I don't think the point is that AI is already there.  The point is that there is no credible argument that it can't get there one day.

Berkut

Quote from: The Minsky Moment on September 30, 2016, 12:29:54 PM
Quote from: Berkut on September 29, 2016, 03:29:04 PM
OK. But people who are experts in this field don't agree with you - general AI is a matter of time, and not that long of a time. The dangerous part is that the nature of the problem, if it is a real problem, is that the growth in intelligence will happen incredibly quickly once the threshold is reached. Is that 10 years away? 50? 100?

It is really hard to say, and even Harris would agree with that - but there is little reasonable argument to be made that it is 500 years away, for example.

Sure, there are some people in the field who question its possibility, so there is that reasonable argument to be made.

There are people in the AI field who question the possibility of a general AI?

On what grounds? I've never heard such a thing. I've heard people argue that the concerns are misplaced, but never that there is something intrinsically impossible about AI.

The argument on the face of it makes no sense, really. We know that intelligence is possible, because we are intelligent. Is there something about the process by which the human mind cogitates that makes it impossible to replicate?

Unless there is something magic about the human brain and how it thinks, there is no reason to conclude that non-biological intelligence is impossible.

Quote

My admittedly limited understanding is that current research and applications have mostly given up on trying to mimic the structure and functioning of the human brain, and instead aim to replicate intelligence in effect by other means.

Well, sort of. The human brain, to the extent that we can understand what makes it so special, is special because it has a staggeringly huge number of connections between neurons. That in and of itself, however, is just a problem of scale. The real difficulty is not the number of connections, but understanding which of them actually matter.
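
The scale point can be made concrete with some toy arithmetic: in a fully connected network, the number of possible connections grows quadratically with the number of units, so the raw connection count explodes far faster than the unit count. This is illustration only, not a claim about actual brain wiring:

```python
# Illustration only: possible directed connections among n fully
# connected units grow as n * (n - 1), i.e. roughly quadratically.

def dense_connections(n_units: int) -> int:
    """Directed pairwise connections among n_units fully connected units."""
    return n_units * (n_units - 1)

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} units -> {dense_connections(n):,} possible connections")
```

Which is why "just count the connections" stops being a useful strategy long before brain-like scales: the hard part is picking out the connections that matter.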

Take a look at this article:

https://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/

It is really very interesting.

Modern AI is not really about understanding how humans think, and trying to replicate it. It is entirely possible that humans, in fact, might not be smart enough to understand how we think.

Modern AI is about teaching computers how to make themselves smarter, and doing so in ways that leverage *their* unique advantages - advantages that they already have, and that no human can ever possibly match: the ability to process faster, with more memory (perfect memory at that), and to access data at a scale that no human can even imagine, much less use.

You should not be afraid of AI because it might think like a human better than a human; you should be afraid because it is going to think like a machine, in a fashion that we won't even be able to understand, and in some ways already do not.

Quote

As to the possibility of hitting the threshold in 10 to 20 years, that would require an enormous leap in capability completely out of proportion with the progress made in the past.

I think that would be true if we were trying to make a computer think like a human, but I don't think that is the direction anymore.

I suspect 10 to 20 years is very much unlikely...but I suspect that I also have no real idea, and I suspect when it does happen, it will come as a surprise.
"If you think this has a happy ending, then you haven't been paying attention."

select * from users where clue > 0
0 rows returned

Hamilcar

Quote from: The Minsky Moment on September 30, 2016, 01:23:07 PM
Now I would still agree that it's still a very impressive feat for a machine to pull this off.  But that just shows what a vast gulf exists between present day capabilities - which are just starting to reach the most basic skills of human intelligence - and the goals of generalized human intelligence.

The vast gulf is no longer at the level of individual capabilities. Machine learning tools can match or exceed human capability at many tasks, and soon probably most. What they still lack is the "generalized" part, but I think we are on the verge of making significant advances there sooner rather than later.

frunk

Quote from: DGuller on September 30, 2016, 12:51:42 PM
Of course you have to program the learning itself, but the idea that a computer can only conceive of things that a human can is already badly outdated.

What humans can conceive of has always been limited by what we are exposed to, except in the most general sense.  No one conceived of huge cities with hundreds of thousands of people before farming and stone working.  Technology is what enables us to conceive of new things.  I don't see how AI is different in that regard.

Hamilcar

ITT: lots of people who haven't the faintest idea about machine learning pontificate like experts. 

The Brain

When an intelligent human self-identifies as a machine we'll have AI. Anyone who says otherwise is a Nazi or worse.
Women want me. Men want to be with me.

Hamilcar

Quote from: The Brain on September 30, 2016, 01:32:54 PM
When an intelligent human self-identifies as a machine we'll have AI. Anyone who says otherwise is a Nazi or worse.

Oh, Tay, rest in peace.

The Minsky Moment

Roger Penrose is closely associated with the impossibility claim.

DGuller

Quote from: Hamilcar on September 30, 2016, 01:31:57 PM
ITT: lots of people who haven't the faintest idea about machine learning pontificate like experts.
Who are these people?

The Minsky Moment

Quote from: Berkut on September 30, 2016, 01:28:57 PM
Take a look at this article:

https://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/

It is really very interesting.

OK - that's about the Go AI.  Go, like chess, is a highly formalized game with a relatively simple set of rules, devoid of all social content.  Chess is also amenable to game-theoretic solution - not sure if the same is true of Go.  That's an area where AI has made very nice progress after decades of intensive work, but it is a pretty limited area of human experience.
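
The game-theoretic framing can be shown with a toy sketch. This is minimax search on a made-up two-ply tree - the classical "solution" framework for games like chess - and emphatically not how AlphaGo works (which combines tree search with learned neural networks):

```python
# A minimal sketch of game-theoretic search (minimax) on a toy game tree.
# The tree and its payoffs are invented for illustration; real chess and
# Go engines add pruning, evaluation functions, and learned policies.

def minimax(node, maximizing: bool) -> int:
    # Leaves are plain integers: the payoff to the maximizing player.
    if isinstance(node, int):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A two-ply toy position: the maximizer picks a branch, the minimizer replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # best guaranteed payoff: 3
```

The point of the formalization is that a game with fixed rules and a well-defined payoff can, in principle, be ground through by exactly this kind of search - which is what makes board games such a tractable (and limited) corner of intelligence.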

Quote

Modern AI is not really about understanding how humans think, and trying to replicate it. It is entirely possible that humans, in fact, might not be smart enough to understand how we think.

Modern AI is about teaching computers how to make themselves smarter, and doing so in ways that leverage *their* unique advantages - advantages that they already have, and that no human can ever possibly match: the ability to process faster, with more memory (perfect memory at that), and to access data at a scale that no human can even imagine, much less use.

You should not be afraid of AI because it might think like a human better than a human; you should be afraid because it is going to think like a machine, in a fashion that we won't even be able to understand, and in some ways already do not.

I accept all of this.  But the implication is that there are areas where it will be very difficult to devise AIs that match human capability.  As long as that is true, there will be demand for human labor and economies will adjust accordingly, just as occurred historically with other large-scale episodes of capital replacing labor.

Razgovory

Quote from: DGuller on September 30, 2016, 01:43:34 PM
Quote from: Hamilcar on September 30, 2016, 01:31:57 PM
ITT: lots of people who haven't the faintest idea about machine learning pontificate like experts.
Who are these people?

Hamilcar.
I've given it serious thought. I must scorn the ways of my family, and seek a Japanese woman to yield me my progeny. He shall live in the lands of the east, and be well tutored in his sacred trust to weave the best traditions of Japan and the Sacred South together, until such time as he (or, indeed his house, which will periodically require infusion of both Southern and Japanese bloodlines of note) can deliver to the South it's independence, either in this world or in space.  -Lettow April of 2011

Raz is right. -MadImmortalMan March of 2017

DGuller

Quote from: The Minsky Moment on September 30, 2016, 01:49:58 PM
I accept all of this.  But the implication is that there are areas where it will be very difficult to devise AIs that match human capability.  As long as that is true, there will be demand for human labor and economies will adjust accordingly, just as occurred historically with other large-scale episodes of capital replacing labor.
One of the dangerous arguments is "this time it's different".  But just as dangerous is "this time won't be different".  Just because job creation has kept up with technological advancement doesn't imply that such capacity is endless.  Maybe we're just getting closer and closer towards saturating it.

The Minsky Moment

The key component of a "this time won't be different" argument is specifying the mechanism that makes it different and making a persuasive case for how that will change the result.

A generalized human+ level AI would do it, assuming sufficiently low marginal cost to produce and propagate.

But mechanization that can replace some functions but not others?  There is lots of historical experience with that across a variety of domains.

DGuller

Quote from: The Minsky Moment on September 30, 2016, 01:55:41 PM
The key component of a "this time won't be different" argument is specifying the mechanism that makes it different and making a persuasive case for how that will change the result.

A generalized human+ level AI would do it, assuming sufficiently low marginal cost to produce and propagate.

But mechanization that can replace some functions but not others?  There is lots of historical experience with that across a variety of domains.
The mechanism may not be different.  It just may not be perfectly understood.  If I blindly reach into a bag of candies, for some time each reach will pull out a piece of candy.  I may start to think that reaching into the bag will always get me a piece of candy.  But one day there will be no more candy left.  The mechanism of getting the candy didn't change from the first reach to the last, but the results eventually changed.
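
The candy-bag point fits in a few lines of code: the draw rule never changes, but a finite bag eventually produces a different outcome. Bag size and names here are arbitrary, purely for illustration:

```python
# Toy model of the candy-bag argument: an unchanging mechanism whose
# results change once a finite resource runs out.

def draw(bag: list) -> str:
    """One blind reach: the rule never changes, the outcome eventually does."""
    return bag.pop() if bag else "nothing"

bag = ["candy"] * 3
print([draw(bag) for _ in range(5)])
# ['candy', 'candy', 'candy', 'nothing', 'nothing']
```

Extrapolating "every reach so far got candy" to "every reach will get candy" is exactly the inference the analogy warns against.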