This is one of the best explanations online of superintelligence, the technological singularity, and how exponential technological growth works.
Part 1:
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Part 2:
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
Siege, I appreciate that, and I read the first few paragraphs. But I am not going to continue, because it is far too long, and more importantly, the whole thing sounds like wishful thinking or speculation to me.
Quote from: Monoriu on February 23, 2016, 10:42:53 AM
Siege, I appreciate that, and I read the first few paragraphs. But I am not going to continue, because it is far too long, and more importantly, the whole thing sounds like wishful thinking or speculation to me.
The author, Tim Urban, is famous for being neutral and for breaking arguments down all the way to their foundations.
That's why his site is called "wait but why?"
I am not saying you have to agree with him, just that if you want an impartial assessment by someone not from the tech industry, this is it.
He goes through all the different opinions and classifies the arguments and theories from kool-aid optimism to anxious pessimism.
I myself am not convinced about the viability of ASI (artificial superintelligence) as a self-aware entity.
I lean more towards a merging between humans and AI, meaning any ASI would be of biological origin...
And I think the optimist corner is too optimistic.
But I am open to the discussion.
Bottom line, when you have the time read it if you can.
This is clearly the tech topic of the century, whether right or wrong.
I'm not exactly clear what is being measured on a graph where the side bar says "progress". I'm kinda curious what the methodology is on that.
Sorry man. He lists all his sources at the bottom.
His source is himself. Exactly what is an increment of "human progress"? How do you measure such a thing?
Quote from: Razgovory on February 23, 2016, 08:55:10 PM
His source is himself. Exactly what is an increment of "human progress"? How do you measure such a thing?
Well you can make predictions based on past progress. You know, like Moore's Law...not actually a law but based on an educated guess.
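For what it's worth, the kind of extrapolation Moore's Law licenses fits in a few lines; a minimal sketch (the two-year doubling period and 1971 starting point are the usual textbook figures, used here purely for illustration):
Code:
# Minimal sketch of Moore's Law-style extrapolation. The doubling period
# and starting point are illustrative textbook figures, not measurements.
def transistor_count(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project N(t) = N0 * 2 ** ((t - t0) / T)."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2031):
    print(year, f"{transistor_count(year):,.0f}")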
Can someone please explain to me why I shouldn't send this to deleted thread like all the other nonsense shit spam we get from sock puppets like siege?
Quote from: katmai on February 23, 2016, 09:23:23 PM
Can someone please explain to me why I shouldn't send this to deleted thread like all the other nonsense shit spam we get from sock puppets like siege?
Because you don't delete all of Tim's threads?
Oh I would, if only he wasn't under the protection of Neil.
Let's assume for a moment that this technological singularity will happen. One of the implications is that computers and robots will be able to self-improve at an increasing rate. So they will become self-aware, and will no doubt become much more capable than humans. This sounds suspiciously like Skynet. What I am trying to say is, is the technological singularity something that we should look forward to, or is it something that we need to guard against? :ph34r:
Quote from: Valmy on February 23, 2016, 09:22:02 PM
Quote from: Razgovory on February 23, 2016, 08:55:10 PM
His source is himself. Exactly what is an increment of "human progress"? How do you measure such a thing?
Well you can make predictions based on past progress. You know, like Moore's Law...not actually a law but based on an educated guess.
Moore's law is based on a discrete measurement: the speed of computers. "Progress" is far more nebulous. How do we know the rate of "progress" has increased exponentially rather than linearly over the last 20 years? To make a graph (which is what this is based on), you need actual numbers to be plotted. For the numbers to be plotted you need to measure "human progress" in increments. How the hell do you do that? Also, the idea that someone would just die because they saw advanced technology is silly. The author claims that if we took a man from 1750 and brought him to the modern day he might just die of shock, and that to get a similar situation you would need to take someone from 10,000 BC to 1750. Here's the kicker: we have had people interact with such wildly different levels of technology. When the English first started to arrive in Australia, they met a people who hadn't developed agriculture. Their material culture was about that of people in 10,000 BC, and they had been isolated from the rest of humanity for 40,000 years. These people encountered late 18th- and early 19th-century technology. Those people did not all just drop dead.
Quote from: Razgovory on February 23, 2016, 09:50:05 PM
Quote from: Valmy on February 23, 2016, 09:22:02 PM
Quote from: Razgovory on February 23, 2016, 08:55:10 PM
His source is himself. Exactly what is an increment of "human progress"? How do you measure such a thing?
Well you can make predictions based on past progress. You know, like Moore's Law...not actually a law but based on an educated guess.
Moore's law is based on a discrete measurement: the speed of computers. "Progress" is far more nebulous. How do we know the rate of "progress" has increased exponentially rather than linearly over the last 20 years? To make a graph (which is what this is based on), you need actual numbers to be plotted. For the numbers to be plotted you need to measure "human progress" in increments. How the hell do you do that? Also, the idea that someone would just die because they saw advanced technology is silly. The author claims that if we took a man from 1750 and brought him to the modern day he might just die of shock, and that to get a similar situation you would need to take someone from 10,000 BC to 1750. Here's the kicker: we have had people interact with such wildly different levels of technology. When the English first started to arrive in Australia, they met a people who hadn't developed agriculture. Their material culture was about that of people in 10,000 BC, and they had been isolated from the rest of humanity for 40,000 years. These people encountered late 18th- and early 19th-century technology. Those people did not all just drop dead.
Yeah Tasmanians didn't fucking die.
Quote from: katmai on February 23, 2016, 09:23:23 PM
Can someone please explain to me why I shouldn't send this to deleted thread like all the other nonsense shit spam we get from sock puppets like siege?
Because it's doing no harm. No one has to read it. No one has to post in it.
I gotta say, Dguller's recent description of Siegy's posting style, i.e. as if someone were picking pages from "Flowers for Algernon" at random, must be one of the best and most accurate things I have ever read on Languish. :lol:
Quote from: katmai on February 23, 2016, 09:32:32 PM
Oh I would, if only he wasn't under the protection of Neil.
Excuses, excuses. Neil is never here; he wouldn't notice it.
Quote from: Razgovory on February 23, 2016, 09:50:05 PM
Quote from: Valmy on February 23, 2016, 09:22:02 PM
Quote from: Razgovory on February 23, 2016, 08:55:10 PM
His source is himself. Exactly what is an increment of "human progress"? How do you measure such a thing?
Well you can make predictions based on past progress. You know, like Moore's Law...not actually a law but based on an educated guess.
Moore's law is based on a discrete measurement: the speed of computers. "Progress" is far more nebulous. How do we know the rate of "progress" has increased exponentially rather than linearly over the last 20 years? To make a graph (which is what this is based on), you need actual numbers to be plotted. For the numbers to be plotted you need to measure "human progress" in increments. How the hell do you do that? Also, the idea that someone would just die because they saw advanced technology is silly. The author claims that if we took a man from 1750 and brought him to the modern day he might just die of shock, and that to get a similar situation you would need to take someone from 10,000 BC to 1750. Here's the kicker: we have had people interact with such wildly different levels of technology. When the English first started to arrive in Australia, they met a people who hadn't developed agriculture. Their material culture was about that of people in 10,000 BC, and they had been isolated from the rest of humanity for 40,000 years. These people encountered late 18th- and early 19th-century technology. Those people did not all just drop dead.
Very bad example.
Contact with the Australian Aboriginals was gradual and in their territory.
Now, if you had taught an Australian Aboriginal to speak English, and then shown him England, London to be exact, then maybe.
Then again, the shock factor only really works when it is you watching your own culture in the future. If the cultural difference is too great, you would write it off as magic and deal with it.
When you are watching your own culture, something you understand, you would not write it off as magic, and could be shocked to death. Potentially.
Bottom line, my goal is to make you all "singularity aware", and I will collect cool points when it actually happens by being the first one to bring it up here in Languish.
It is a very selfish way to redeem myself for being the slowest guy in Languish.
Quote from: Siege on February 24, 2016, 08:41:49 AM
Bottom line, my goal is to make you all "singularity aware", and I will collect cool points when it actually happens by being the first one to bring it up here in Languish.
It is a very selfish way to redeem myself for being the slowest guy in Languish.
Siege, thank you for talking about the singularity. Many visionaries have been ridiculed in the past. Of course most people with visions have psychological problems or are under the influence of drugs, but those are disabilities that should not be ridiculed anyway.
Siege, FWIW, I prefer you talking about singularity over you talking about almost anything else. :hug:
Quote from: Martinus on February 24, 2016, 10:59:14 AM
Siege, FWIW, I prefer you talking about singularity over you talking about almost anything else. :hug:
I prefer his early Saturday morning drunken outpourings.
Quote from: Siege on February 24, 2016, 08:38:18 AM
Very bad example.
Contact with the Australian Aboriginals was gradual and in their territory.
Now, if you had taught an Australian Aboriginal to speak English, and then shown him England, London to be exact, then maybe.
Then again, the shock factor only really works when it is you watching your own culture in the future. If the cultural difference is too great, you would write it off as magic and deal with it.
When you are watching your own culture, something you understand, you would not write it off as magic, and could be shocked to death. Potentially.
No, it wasn't "gradual", and why wouldn't you write it off as "magic" if it wasn't your own culture? It's pretty much impossible for it to be your culture anyway, since that amount of time passing would make it no longer your culture. The idea that the technology would be so shocking that it would cause people to die is simply untrue; we have examples of it. The question is why you would even think that would be true? There is absolutely no indication that this would be true besides what some guy on the internet says. He doesn't really back up what he says or give a medical reason why it might be true; it just is true, apparently. That's insufficient. Especially when I can find counter-examples.
Quote from: katmai on February 23, 2016, 09:23:23 PM
Can someone please explain to me why I shouldn't send this to deleted thread like all the other nonsense shit spam we get from sock puppets like siege?
Because it is a really interesting article.
https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil#The_Age_of_Intelligent_Machines_.281990.29
Kurzweil hasn't been particularly good at predicting the future.
Quote from: Razgovory on February 24, 2016, 05:46:29 PM
https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil#The_Age_of_Intelligent_Machines_.281990.29
Kurzweil hasn't been particularly good at predicting the future.
Wikipedia? I raise you BigThink.com
http://bigthink.com/endless-innovation/why-ray-kurzweils-predictions-are-right-86-of-the-time
Quote from: Siege on February 24, 2016, 08:41:49 AM
Bottom line, my goal is to make you all "singularity aware ", and i will collect cool points when it actually happens by being the first one to bring it up here in languish.
Eh, I'd rather be climate aware. Much more important issue.
Quote from: Siege on February 24, 2016, 09:56:38 PM
Quote from: Razgovory on February 24, 2016, 05:46:29 PM
https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil#The_Age_of_Intelligent_Machines_.281990.29
Kurzweil hasn't been particularly good at predicting the future.
Wikipedia? I raise you BigThink.com
http://bigthink.com/endless-innovation/why-ray-kurzweils-predictions-are-right-86-of-the-time
He doesn't even say which predictions are correct :lol: Try again, Siege. I don't know why you are so into the "rapture of nerds". You already have a god; why are you so obsessed with creating another?
Does he think we're alone in the universe?
No clue, but he's a bit of a crank. I did find the source for the claim that Ray Kurzweil's predictions are 86% true. It's from Ray Kurzweil!
Quote from: Razgovory on February 25, 2016, 05:39:00 AM
No clue, but he's a bit of a crank. I did find the source for the claim that Ray Kurzweil's predictions are 86% true. It's from Ray Kurzweil!
That counts, right. :(
Quote from: 11B4V on February 24, 2016, 10:11:58 PM
Eh, I'd rather be climate aware. Much more important issue.
The singularity will be able to solve our climate issues. :)
I think you should be skeptical of futurists in general, and extremely suspicious of ones that make absurd claims like "we will abolish death" or "we will create anything from nothing".
Quote from: Razgovory on February 25, 2016, 06:51:49 PM
I think you should be skeptical of futurists in general, and extremely suspicious of ones that make absurd claims like "we will abolish death" or "we will create anything from nothing".
Or you can simply conclude they watch too many Star Trek re-runs.
Quote from: alfred russel on February 25, 2016, 06:15:34 PM
Quote from: 11B4V on February 24, 2016, 10:11:58 PM
Eh, I'd rather be climate aware. Much more important issue.
The singularity will be able to solve our climate issues. :)
Yeah, by killing all the humans.
Quote from: Monoriu on February 23, 2016, 09:36:13 PM
Let's assume for a moment that this technological singularity will happen. One of the implications is that computers and robots will be able to self-improve at an increasing rate. So they will become self-aware, and will no doubt become much more capable than humans. This sounds suspiciously like Skynet. What I am trying to say is, is the technological singularity something that we should look forward to, or is it something that we need to guard against? :ph34r:
The technological singularity will turn us into Skynet.
We will start by simply augmenting our bodies. For example, with nanobot immune systems that can defend you from anything with a simple firmware update, emergency shut-off valves for our bloodstream in case of accident, bypassable pain receptors, enhanced senses, datalinks, auxiliary co-processors and memory banks.
Gah, futurists. I wish I got paid that kind of money to speculate wildly, incorrectly describe processes, make up whatever pseudo-science I feel will back up my claim, and generally scare the crap out of people.
The plain facts of the matter, and this is speaking as someone who's trying to specialize in artificial general intelligence, are that we don't have a path toward making one. It's not a question of numbers, like the futurists claim; it's the fact that we simply can't make a computer convincingly show preference. Also, you don't know what you don't know. The best futurists in the world can only make guesses when it comes to AGI, when it will happen, what it will look like, how it will react (but you better believe we're going to keep it isolated until we're damn sure it's safe), etc.
Oh, and you can take that "die progress unit" and file it right next to a Scientologist's "thetans"; that's how meaningful a measurement it is. The author made it up, and it's total bullshit. If culture shock due to technology could kill an early human, then a chimpanzee would die upon seeing a can opener "magically transform" a can into food. I'm not about to put too much stock in an author who uses an easily disprovable urban myth as a "unit of measurement."
Quote from: DontSayBanana on February 26, 2016, 03:08:12 PM
(but you better believe we're going to keep it isolated until we're damn sure it's safe)
^_^
Quote from: 11B4V on February 25, 2016, 07:40:53 PM
Quote from: alfred russel on February 25, 2016, 06:15:34 PM
Quote from: 11B4V on February 24, 2016, 10:11:58 PM
Eh, I'd rather be climate aware. Much more important issue.
The singularity will be able to solve our climate issues. :)
Yeah, by killing all the humans.
That won't work...we'll still have a climate. :mad:
Quote from: DontSayBanana on February 26, 2016, 03:08:12 PM
Gah, futurists. I wish I got paid that kind of money to speculate wildly, incorrectly describe processes, make up whatever pseudo-science I feel will back up my claim, and generally scare the crap out of people.
The plain facts of the matter, and this is speaking as someone who's trying to specialize in artificial general intelligence, are that we don't have a path toward making one. It's not a question of numbers, like the futurists claim; it's the fact that we simply can't make a computer convincingly show preference. Also, you don't know what you don't know. The best futurists in the world can only make guesses when it comes to AGI, when it will happen, what it will look like, how it will react (but you better believe we're going to keep it isolated until we're damn sure it's safe), etc.
Oh, and you can take that "die progress unit" and file it right next to a Scientologist's "thetans"; that's how meaningful a measurement it is. The author made it up, and it's total bullshit. If culture shock due to technology could kill an early human, then a chimpanzee would die upon seeing a can opener "magically transform" a can into food. I'm not about to put too much stock in an author who uses an easily disprovable urban myth as a "unit of measurement."
It would appear to be a problem of kind rather than degree. If you sped up a dog's brain you wouldn't have a dog who thinks like a man, you would just have a dog that's particularly quick on his feet. To build this "god computer" you would need to program it to be able to think in ways that human beings can't imagine. That would appear to be impossible by definition.
It seems to me that "The singularity" is to computer science as the philosopher's stone was to chemistry.
It's possible that humanity will create a self-aware intelligence smarter than us.
The difficulties in doing that are wildly underestimated by those who have never done any AI work.
What happens at that point is pure speculation; the futurists seem to assume that just because something is smarter than us, human motivation and the potential physical limitations of the new intelligence become completely irrelevant.
You guys really didn't read the two parts of the opening link.
Tim Urban actually fleshes out most pro and con points, including all the stuff you guys mention.
Yes, ASI might be impossible to achieve, with it being a simulation of intelligence rather than actually self-aware. That's OK; there are many other paths in front of us.
The bigger picture though, and the argument that really has me thinking, is that ALL intelligence, as it develops as a societal species, leads to a technological singularity and a post-scarcity civilization. Unless it self-destructs in the process.
In other words, if you took all technology from humankind right now, humanity would just start from zero, and in a few thousand years it would be right where we are now: on the brink of a technological singularity. Because intelligence by definition accumulates knowledge in search of happiness and a better way of life. Eventually leading to the search for immortality, the ultimate goal of every self-aware organism.
I like to think it is humanity's destiny to create a robotic race that is more capable than us. Humans will be destroyed in the process and the robots will move on to become masters.
Quote from: Siege on February 27, 2016, 08:21:41 PM
In other words, if you took all technology from humankind right now, humanity would just start from zero, and in a few thousand years it would be right where we are now: on the brink of a technological singularity. Because intelligence by definition accumulates knowledge in search of happiness and a better way of life. Eventually leading to the search for immortality, the ultimate goal of every self-aware organism.
Battlestar Galactica figured it would take us 150,000 years to get back. Because apparently their people forgot what farming and writing are, and the urge to congregate in cities.
Quote from: Monoriu on February 27, 2016, 08:25:41 PM
I like to think it is humanity's destiny to create a robotic race that is more capable than us. Humans will be destroyed in the process and the robots will move on to become masters.
Oh pleez, you watch too many movies.
The technological singularity is the merging of humans and technology. Un-enhanced humans will not be able to keep up with the exponential advances.
We will be the droids, the ASIs, the immortal explorers of the universe, eventually ascending to a higher plane of existence as some form of software, retreating from the physical world, interacting with it only through our avatars.
One day, when you go to visit a friend, you will not be visiting just his home, but his own world, probably his own universe, with the rules he sees fit, and we will judge each other ethically by the way we treat the NPCs living in our virtual worlds.
Maybe.
There are way too many paths to a post singularity post scarcity civilization.
Quote from: Siege on February 27, 2016, 08:21:41 PM
You guys really didn't read the two parts of the opening link.
Tim Urban actually fleshes out most pro and con points, including all the stuff you guys mention.
Yes, ASI might be impossible to achieve, with it being a simulation of intelligence rather than actually self-aware. That's OK; there are many other paths in front of us.
The bigger picture though, and the argument that really has me thinking, is that ALL intelligence, as it develops as a societal species, leads to a technological singularity and a post-scarcity civilization. Unless it self-destructs in the process.
In other words, if you took all technology from humankind right now, humanity would just start from zero, and in a few thousand years it would be right where we are now: on the brink of a technological singularity. Because intelligence by definition accumulates knowledge in search of happiness and a better way of life. Eventually leading to the search for immortality, the ultimate goal of every self-aware organism.
I read it, it's simply bullshit. It's not like Mr. Urban is neutral in this. He is very much a booster. Maybe you should read some stuff debunking the whole concept.
I don't think a post-scarcity world is possible or desirable. The amount of matter available to humans is finite. The surface area of the Earth is finite. But human numbers and desires are infinite. How can you satisfy everybody's desire to own property the size of a theme park?
Besides, I think scarcity is one of the most important driving force behind humanity's progress. If you take away the carrot and the stick, the donkey won't have incentive to move forward.
Quote from: Monoriu on February 27, 2016, 09:16:04 PM
Besides, I think scarcity is one of the most important driving force behind humanity's progress. If you take away the carrot and the stick, the donkey won't have incentive to move forward.
Don't disregard humanity's ability to create artificial carrots.
Yeah, but there are certain things you can't create. Real estate cannot be produced by a machine. For people who say "I want a house on the sea shore in Florida", there will always be shortages so long as there are enough people. "Sure", the Singularity kook will say, "but you can upload your mind into a computer where there is infinite space." Which doesn't actually solve the problem for me, though depending on how it's done it may involve killing me. So I guess it solves my problem that way.
Quote from: Razgovory on February 27, 2016, 01:41:37 AM
It would appear to be a problem of kind rather than degree. If you sped up a dog's brain you wouldn't have a dog who thinks like a man, you would just have a dog that's particularly quick on his feet. To build this "god computer" you would need to program it to be able to think in ways that human beings can't imagine. That would appear to be impossible by definition.
I'm not sure about that. You seem to be assuming that computers cannot recognize patterns without first being programmed by humans to find them. That is already not true. There are already machine learning algorithms that can recognize patterns better than humans can, and the machine learning field is still in its infancy. Obviously humans need to program the learning algorithms, but I don't see why it's obvious that they can't come up with an algorithm that can be self-improving.
I'm thinking of "big leaps" in cognitive functioning. Take a dog. A dog can't read. Even a smart dog can't read. A dog can't comprehend the idea of reading. The understanding that abstract symbols can mean something is forever beyond the dog, so not only can the dog not read, he can't even know that he can't. Extrapolate that to human beings. A human being can't "x". Not only can the human being not "x", he doesn't know what "x" is, that "x" exists as a concept, and that he can never "x". Now, how does he go about programming a computer to "x"? It's not a matter of a computer doing something better than humans, it's doing something that humans can't. Any computer built by human beings is going to be limited in the same ways that the humans are limited.
Quote from: Razgovory on February 28, 2016, 12:17:34 AM
I'm thinking of "big leaps" in cognitive functioning. Take a dog. A dog can't read. Even a smart dog can't read. A dog can't comprehend the idea of reading. The understanding that abstract symbols can mean something is forever beyond the dog, so not only can the dog not read, he can't even know that he can't. Extrapolate that to human beings. A human being can't "x". Not only can the human being not "x", he doesn't know what "x" is, that "x" exists as a concept, and that he can never "x". Now, how does he go about programming a computer to "x"?
One difference is that a dog cannot be reprogrammed.
Quote
It's not a matter of a computer doing something better than humans, it's doing something that humans can't. Any computer built by human beings is going to be limited in the same ways that the humans are limited.
As I already said, that's not true. I have created statistical models that I myself do not understand, as in I don't understand why they do better than anything I can think of manually (well, I understand the big picture of why, but not the details of why). They can get at latent variables by using a complicated combination of known variables, something that I can't conceive of. The best-performing statistical models are usually black boxes created by machine learning algorithms.
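A minimal sketch of that kind of comparison, assuming scikit-learn is available (the data is synthetic and every parameter is illustrative): a boosted ensemble usually beats a model simple enough to read, even though nobody can follow its internals end to end.
Code:
# Minimal sketch: a black-box ensemble vs. an interpretable model on the
# same synthetic data. Every name and parameter here is illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
black_box = GradientBoostingClassifier(n_estimators=300,
                                       random_state=0).fit(X_train, y_train)

# The ensemble usually scores higher, but its decision logic is spread
# across hundreds of interacting trees that no one reads in full.
print("interpretable:", simple.score(X_test, y_test))
print("black box:   ", black_box.score(X_test, y_test))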
When you say you don't understand the model, is it beyond all human understanding or just yours? Are human beings biologically incapable of understanding it? If so, what good is it?
Quote from: Razgovory on February 28, 2016, 01:03:18 AM
When you say you don't understand the model, is it beyond all human understanding or just yours?
I can't speak for all human beings, but it's a generally accepted principle that the kinds of models I build are not easily interpretable. And given that I was the human that built it, and not some other human, my level of understanding of my own model seems to be the most relevant one.
Quote
Are human beings biologically incapable of understanding it? If so, what good is it?
It predicts outcomes better than models humans can understand.
The singularity is imminent, and it will develop from statistical models. Actuaries will be the priests serving as our intermediaries with our new overlords.
Quote from: alfred russel on February 28, 2016, 01:19:38 AM
The singularity is imminent, and it will develop from statistical models. Actuaries will be the priests serving as our intermediaries with our new overlords.
I'm a data scientist now. :mad:
Quote from: DGuller on February 28, 2016, 01:23:04 AM
Quote from: alfred russel on February 28, 2016, 01:19:38 AM
The singularity is imminent, and it will develop from statistical models. Actuaries will be the priests serving as our intermediaries with our new overlords.
I'm a data scientist now. :mad:
I know. Data scientists will be the altar boys to the priests. :)
Quote from: alfred russel on February 28, 2016, 01:26:24 AM
Quote from: DGuller on February 28, 2016, 01:23:04 AM
Quote from: alfred russel on February 28, 2016, 01:19:38 AM
The singularity is imminent, and it will develop from statistical models. Actuaries will be the priests serving as our intermediaries with our new overlords.
I'm a data scientist now. :mad:
I know. Data scientists will be the altar boys to the priests. :)
:mad: :mad: :mad:
Quote from: DGuller on February 28, 2016, 01:13:06 AM
Quote from: Razgovory on February 28, 2016, 01:03:18 AM
When you say you don't understand the model, is it beyond all human understanding or just yours?
I can't speak for all human beings, but it's a generally accepted principle that the kinds of models I build are not easily interpretable. And given that I was the human that built it, and not some other human, my level of understanding of my own model seems to be the most relevant one.
Quote
Are human beings biologically incapable of understanding it? If so, what good is it?
It predicts outcomes better than models humans can understand.
Okay, that's a difference of degree rather than kind. It increases a capability that a human already has, rather than giving an entirely new ability. You can make a computer that finds patterns better than a human can, but you can't make a computer that finds patterns humans can't comprehend.
Let's say a computer falls from space and is able to predict the future. It analyzes the clouds, how many kitten videos are on YouTube, and the pigment of dandelions, and concludes you will be hit by a truck in 2,618 days. None of that makes sense to us. It is essentially magic. The computer is finding patterns beyond human comprehension. Humans will never be able to understand that. Yet this is the kind of "big leap" that these singularity computers are supposed to make. They would be, from our point of view, magic. Now how do you program a computer to do this kind of magic? You don't. And make no mistake, the singularity is magic. It's supposed to spirit us away to a world of immortality and plenty.
I think you're repeating the same argument for the third time, without really addressing my counter-argument. Yes, computers can find patterns we can't comprehend, and on that basis can already do better than humans at certain tasks. Yes, they're starting with a learning algorithm a human programmed, but they take it from there and go further.
Quote from: Razgovory on February 28, 2016, 01:56:10 AM
Let's say a computer falls from space and is able to predict the future. It analyzes the clouds, how many kitten videos are on YouTube, and the pigment of dandelions, and concludes you will be hit by a truck in 2,618 days. None of that makes sense to us. It is essentially magic. The computer is finding patterns beyond human comprehension. Humans will never be able to understand that.
Why?
It seems logical that if we understand all the rules of the physical world, and have rather comprehensive information of the world as it is, and immense amounts of data processing, we could predict quite a bit. That wouldn't be magic.
Quote from: DGuller on February 28, 2016, 02:11:13 AM
I think you're repeating the same argument for the third time, without really addressing my counter-argument. Yes, computers can find patterns we can't comprehend, and on that basis can already do better than humans at certain tasks. Yes, they're starting with a learning algorithm a human programmed, but they take it from there and go further.
Because your counter-argument doesn't work. Your computer program is not finding patterns that are beyond human comprehension. Someone had to comprehend what it was looking for to write the program. Now it can find patterns that are practically impossible, but not physically impossible, for a human being to find. Computers have been able to do what is practically impossible for a human being for a while. The mapping of the human genome is an example in which a computer did something that is practically impossible for a human being to do, but not physically impossible. In practice nobody could sit down and jot down 3 billion base pairs. Not because human beings can't wrap their heads around it, but because it would just take too long. A computer can record the base pairs much faster. It is many degrees faster than a human being, just as your computer program is several degrees more capable of finding certain patterns.
The Singularity idea is based not just on a computer being many degrees quicker or smarter than a human being, but on it being a different kind of smart. Remember, it's supposed to be as far above us as we are above mice. Humans don't just have really fast mouse brains. Our brains do things fundamentally differently. Things that mice can't comprehend. You can't take a mouse's brain, speed it up, and have it read Shakespeare to you.
Likewise you can't simply create a "learning algorithm" and have a computer learn things that are beyond human comprehension. A human had to program it, so it's limited by the human brain. How would a computer know how to upgrade itself toward something it doesn't know can exist? The difference between the brain of modern man and Homo Erectus went through several "great leaps forward". Not just in processing power, but in kind. Evolution can do that because all her leaps are blind. To have this singularity computer, it must be able to make the same leaps on purpose, without knowing how to do so or where it's leaping to.
Like the old South Park Episode:
Step 1: We are here
????????
Step 3: Magic computer takes us to Heaven.
Except maybe the endpoint should tip you off that it's not exactly a realistic goal. It's interesting that people who often pride themselves as skeptics have no problem believing in magic computer heaven.
My latest model, when you break it down, is a block of about 500,000 if-then statements. That's beyond my feeble comprehension, regardless of what you say. I know they all collectively work, since I know that at the end of it I get a prediction that is more accurate than anything I can come up with that's explainable, but I sure can't comprehend the logic of all those 8-way interactions of variables my model is finding.
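That scale is easy to reproduce; a minimal sketch, again assuming scikit-learn (the data and model are stand-ins, not the actual model described above):
Code:
# Sketch: count the if-then branches inside a modest tree ensemble.
# The data and model here are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Every internal node of every tree is one "if feature <= threshold" test.
n_nodes = sum(t.tree_.node_count for t in forest.estimators_)
max_depth = max(t.get_depth() for t in forest.estimators_)
print(f"{n_nodes:,} nodes across {len(forest.estimators_)} trees, "
      f"up to {max_depth} levels deep")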
Quote from: DGuller on February 28, 2016, 01:27:48 AM
Quote from: alfred russel on February 28, 2016, 01:26:24 AM
Quote from: DGuller on February 28, 2016, 01:23:04 AM
Quote from: alfred russel on February 28, 2016, 01:19:38 AM
The singularity is imminent, and it will develop from statistical models. Actuaries will be the priests serving as our intermediaries with our new overlords.
I'm a data scientist now. :mad:
I know. Data scientists will be the altar boys to the priests. :)
:mad: :mad: :mad:
Show us on the doll where the actuary touched you.
Quote from: Peter Wiggin on February 27, 2016, 09:18:53 PM
Quote from: Monoriu on February 27, 2016, 09:16:04 PM
Besides, I think scarcity is one of the most important driving force behind humanity's progress. If you take away the carrot and the stick, the donkey won't have incentive to move forward.
Don't disregard humanity's ability to create artificial carrots.
Indeed. One of the arguments I have seen is that post-scarcity is only possible as far as raw materials and well-established consumer products; some things, like art, will always be scarce. There can only be one original Mona Lisa, for example.
New products will also be scarce initially. But then, if we can really 3D print anything, and one day even change the atomic and molecular composition of anything into anything, then we will be free of the basic limitations of scarcity, and free to dedicate our time to self-improvement and the exploration of our intellectual potential.
Quote from: Peter Wiggin on February 28, 2016, 09:50:14 AM
Show us on the doll where the actuary touched you.
Nowhere. :cry:
Ok Raz. You win.
Let's abandon all our computer research and let the Chinese get to the singularity first.
Quote from: Razgovory on February 28, 2016, 03:16:39 AM
Likewise you can't simply create a "learning algorithm" and have a computer learn things that are beyond human comprehension. A human had to program it, so it's limited by the human brain. How would a computer know how to upgrade itself toward something it doesn't know can exist? The difference between the brain of modern man and Homo Erectus went through several "great leaps forward". Not just in processing power, but in kind. Evolution can do that because all her leaps are blind. To have this singularity computer, it must be able to make the same leaps on purpose, without knowing how to do so or where it's leaping to.
You are quite wrong here, Raz. Computers do learn by themselves without a programmer having to explicitly tell them what to do. They've been doing it for quite a while.
Quote from: DGuller on February 28, 2016, 09:45:23 AM
My latest model, when you break it down, is a block of about 500,000 if-then statements. That's beyond my feeble comprehension, regardless of what you say. I know they all collectively work, since I know that at the end of it I get a prediction that is more accurate than anything I can come up with that's explainable, but I sure can't comprehend the logic of all those 8-way interactions of variables my model is finding.
Yeah, but you understand what the if-then statements are. You could theoretically sit down and write each one out and figure out what it means. The computer is much faster than you and can do all 500,000 at once. That's what I mean by a difference in degree. It does what you can do, only much faster and at a greater scale.
Quote from: Siege on February 28, 2016, 10:08:35 AM
Ok Raz. You win.
Let's abandon all our computer research and let the Chinese get to the singularity first.
And then we'll all be tortured for all eternity by the basilisk (http://rationalwiki.org/wiki/Roko's_basilisk). :(
Quote from: Siege on February 28, 2016, 10:08:35 AM
Ok Raz. You win.
Let's abandon all our computer research and let the Chinese get to the singularity first.
Just because I don't believe in magic computer heaven doesn't mean I don't think we should do computer research. I don't believe in Alkahest, but that doesn't mean I think we should close down every chemistry department in the country. Also, why would it matter who invents it first? In a post-scarcity world nobody has to compete for resources.
Quote from: Iormlund on February 28, 2016, 06:10:59 PM
Quote from: Razgovory on February 28, 2016, 03:16:39 AM
Likewise you can't simply create a "learning algorithm" and have a computer learn things that are beyond human comprehension. A human had to program it, so it's limited by the human brain. How would a computer know how to upgrade itself toward something it doesn't know can exist? The difference between the brain of modern man and Homo Erectus went through several "great leaps forward". Not just in processing power, but in kind. Evolution can do that because all her leaps are blind. To have this singularity computer, it must be able to make the same leaps on purpose, without knowing how to do so or where it's leaping to.
You are quite wrong here, Raz. Computers do learn by themselves without a programmer having to explicitly tell them what to do. They've been doing it for quite a while.
Okay. I didn't discount that. I said that computers can't upgrade themselves in some sort of "great leap forward".
if we could inject sentience into computers, and we add in dguller's 500,000 analogy, then I think computers could eventually upgrade themselves
Quote from: Razgovory on February 28, 2016, 03:16:39 AM
Because your counter-argument doesn't work. Your computer program is not finding patterns that are beyond human comprehension. Someone had to comprehend what it was looking for to write the program. Now it can find patterns that are practically impossible, but not physically impossible, for a human being to find. Computers have been able to do what is practically impossible for a human being for a while. The mapping of the human genome is an example in which a computer did something that is practically impossible for a human being to do, but not physically impossible. In practice nobody could sit down and jot down 3 billion base pairs. Not because human beings can't wrap their heads around it, but because it would just take too long. A computer can record the base pairs much faster. It is many degrees faster than a human being, just as your computer program is several degrees more capable of finding certain patterns.
The Singularity idea is based not just on a computer being many degrees quicker or smarter than a human being, but on it being a different kind of smart. Remember, it's supposed to be as far above us as we are above mice. Humans don't just have really fast mouse brains. Our brains do things fundamentally differently. Things that mice can't comprehend. You can't take a mouse's brain, speed it up, and have it read Shakespeare to you.
Likewise you can't simply create a "learning algorithm" and have a computer learn things that are beyond human comprehension. A human had to program it, so it's limited by the human brain. How would a computer know how to upgrade itself toward something it doesn't know can exist? The difference between the brain of modern man and Homo Erectus went through several "great leaps forward". Not just in processing power, but in kind. Evolution can do that because all her leaps are blind. To have this singularity computer, it must be able to make the same leaps on purpose, without knowing how to do so or where it's leaping to.
Like the old South Park Episode:
Step 1: We are here
????????
Step 3: Magic computer takes us to Heaven.
Except maybe the endpoint should tip you off that it's not exactly a realistic goal. It's interesting that people who often pride themselves as skeptics have no problem believing in magic computer heaven.
some humans are capable of doing things other humans simply can't do. let's use tesla as an example. if we built teslabot, and teslabot was sentient, then teslabot could run through every single possible thought and idea in a much more efficient way than any human could possibly do. within this mass of thoughts and ideas, there's very likely some brilliant answer that humans haven't been able to reach (due to old age, real world concerns, some personal defect, etc.). through this answer, which is currently/contemporaneously unavailable to humans, teslabot uses it to solve some riddle we don't yet know exists. through this, I can see a situation where a robot eventually upgrades itself to being *actually* smarter than humans
it's not about whether the creator can create a machine that's smarter than the creator. it's whether there are things that currently exist that the creator doesn't know exist, things a machine could figure out
Quote from: Razgovory on February 28, 2016, 06:47:55 PM
Quote from: DGuller on February 28, 2016, 09:45:23 AM
My latest model, when you break it down, is a block of about 500,000 if-then statements. That's beyond my feeble comprehension, regardless of what you say. I know they all collectively work, since I know that at the end of it I get a prediction that is more accurate than anything I can come up with that's explainable, but I sure can't comprehend the logic of all those 8-way interactions of variables my model is finding.
Yeah, but you understand what the if-then statements are. You could theoretically sit down and write each one out and figure out what it means. The computer is much faster than you and can do all 500,000 at once. That's what I mean by a difference in degree. It does what you can do, only much faster and at a greater scale.
No, I can't figure out what it means. You must not have programmed a lot to say that you can figure out what 500,000 if-then statements that go 10 levels deep mean. Humans don't think in ways that would lend to them writing code like that or understanding code like that. Just because you can tell what a 1 is and what a 0 is doesn't mean that you can comprehend what the sequences of 1s and 0s mean.
:huh: Seriously? You don't know what binary code is? People can read it. It's just not optimal for people.
You are really hung up on scale.
Quote from: Razgovory on February 28, 2016, 07:56:08 PM
:huh: Seriously? You don't know what binary code is? People can read it. It's just not optimal for people.
You are really hung up on scale.
:jaron:
Quote from: LaCroix on February 28, 2016, 07:26:13 PM
some humans are capable of doing things other humans simply can't do. let's use tesla as an example. if we built teslabot, and teslabot was sentient, then teslabot could run through every single possible thought and idea in a much more efficient way than any human could possibly do. within this mass of thoughts and ideas, there's very likely some brilliant answer that humans haven't been able to reach (due to old age, real world concerns, some personal defect, etc.). through this answer, which is currently/contemporaneously unavailable to humans, teslabot uses it to solve some riddle we don't yet know exists. through this, I can see a situation where a robot eventually upgrades itself to being *actually* smarter than humans
it's not about whether the creator can create a machine that's smarter than the creator. it's whether there are things that currently exist that the creator doesn't know exist, things a machine could figure out
Presumably you wouldn't need a genius, you could just use Siege or Berkut or something. The question is not whether it can think faster (the answer to that is yes), but whether it can think thoughts that can't be thought by people. Because we can't think thoughts that are beyond human comprehension, it's hard to describe them. If we made an artificial mouse brain that was much, much faster and could hold much, much more information than an ordinary mouse brain, would the mouse be able to build a nuclear reactor? No. It's still a mouse brain.
I think what's confusing you guys (and the fundamental mistake of magic computer proponents) is that intelligence isn't expressible as a number or a line on a graph. Instead of intelligence, think of horsepower. Let's say that a horse has one horsepower. A wagon with 10 horses has 10 horsepower, a car has 100 horsepower, a submarine 1,000, a jet 10,000, a ship 100,000 and a spaceship 1 million (I pulled these numbers out of my ass; I have no idea how much horsepower each one has). You can't get 1,000 horses and expect to have a submarine. They operate in a completely different way, use different principles and do different things. It doesn't matter how many horses you put together, they won't act like a submarine. The number of horses is a difference of degree; the difference between a horse-drawn cart and a submarine is a difference of kind.
Quote from: DGuller on February 28, 2016, 07:57:51 PM
Quote from: Razgovory on February 28, 2016, 07:56:08 PM
:huh: Seriously? You don't know what binary code is? People can read it. It's just not optimal for people.
You are really hung up on scale.
:jaron:
You were the one who said you couldn't comprehend what the series of 1s and 0s means. How should I have read that?
That wasn't smartass. That was me being baffled.
The point I was making is that just because you know the letters doesn't mean you can read the language.
That's a poor example since people can do that.
I think I've tolerated your obstinate foolishness in this thread for far longer than most reasonable people would. But all things come to an end.
raz, we can't know whether it's possible for mankind to ever have a thought (with enough knowledge, time, and luck) that could lead to the next great leap forward in machines (or nanohumans, I guess). it could be possible, though, just as it could be impossible. going to your 1,000 -> 10,000 analogy, we already sort of figured out how to reach 10,000 even though humanity is limited to 1,000. look at things like LHC and super computers. we take the data generated by these super machines and, with our thoughts, expand our knowledge of the universe. something similar could happen to reach singularity or whatever sci fi concept.
You are missing the point, Mr Cross. It's not whether man can make a big leap in computers; certainly he can. People have already done that. The question is whether a machine built and programmed by people can make a leap that human beings don't even know exists. The answer is no. It's like chimps building a tool to read books to them. Not only can they not build it, they don't know it's possible, since the idea of reading is forever beyond them.
Quote from: Razgovory on February 29, 2016, 03:37:22 AM
You are missing the point, Mr Cross. It's not whether man can make a big leap in computers; certainly he can. People have already done that. The question is whether a machine built and programmed by people can make a leap that human beings don't even know exists. The answer is no. It's like chimps building a tool to read books to them. Not only can they not build it, they don't know it's possible, since the idea of reading is forever beyond them.
I thought in our hypothetical humans created AI -- machines had human-like minds
I find it kind of weird to argue that humans cannot do what biology has already done without humans.
It is some kind of weird, "Watchmaker" kind of argument...
Quote from: Berkut on February 29, 2016, 12:22:03 PM
I find it kind of weird to argue that humans cannot do what biology has already done without humans.
It is some kind of weird, "Watchmaker" kind of argument...
Prehumans cannot create humans, because humans could think thoughts that were beyond the capability of the prehumans. I guess only Binky can create humans.
Quote from: LaCroix on February 28, 2016, 07:06:33 PM
if we could inject sentience into computers, and we add in dguller's 500,000 analogy, then I think computers could eventually upgrade themselves
You don't need sentience to upgrade yourself. Mosquitoes became fairly proficient bloodsuckers without it. Life all over Earth is on a constant upgrade path and only a few species understand the concept of sentience.
I think it's a mistake to assume that machines will be limited to what humans can imagine. First off, what humans can imagine changes over time, and isn't some constant against which anything can be calibrated. Second, machines have already gone beyond what the humans that created them imagined. There are genetic algorithms and neural nets that have achieved results that weren't anticipated ahead of time.
Quote from: Berkut on February 29, 2016, 12:22:03 PM
I find it kind of weird to argue that humans cannot do what biology has already done without humans.
It is some kind of weird, "Watchmaker" kind of argument...
I suppose if humans can ever replicate what biology now does without humans anything is possible. But that is still in the realm of science fiction.
Quote from: crazy canuck on February 29, 2016, 04:07:38 PM
Quote from: Berkut on February 29, 2016, 12:22:03 PM
I find it kind of weird to argue that humans cannot do what biology has already done without humans.
It is some kind of weird, "Watchmaker" kind of argument...
I suppose if humans can ever replicate what biology now does without humans anything is possible. But that is still in the realm of science fiction.
I am not sure I understand your comment. What do you mean?
Humans are a product of evolution. My point is that it seems silly to claim that it is even theoretically impossible for humans to do what has been done without humans, via evolution and natural selection.
We can, of course, replicate what evolution does ourselves, and we've done so in many, many examples.
Quote from: Berkut on February 29, 2016, 04:12:30 PM
Quote from: crazy canuck on February 29, 2016, 04:07:38 PM
Quote from: Berkut on February 29, 2016, 12:22:03 PM
I find it kind of weird to argue that humans cannot do what biology has already done without humans.
It is some kind of weird, "Watchmaker" kind of argument...
I suppose if humans can ever replicate what biology now does without humans anything is possible. But that is still in the realm of science fiction.
I am not sure I understand your comment. What do you mean?
Humans are a product of evolution. My point is that it seems silly to claim that it is even theoretically impossible for humans to do what has been done without humans, via evolution and natural selection.
We can, of course, replicate what evolution does ourselves, and we've done so in many, many examples.
Why do you assume that because something has been done without human intervention, human intervention could cause the same thing to be done? Are you assuming computers can evolve naturally?
And we replicate evolution? We have bred animals and plants to have certain characteristics. But we are still trying to figure out our own DNA. We are a long way off from replicating evolution.
CC, genetic algorithms mimicking actual evolution have been a thing for ages.
The way machine learning works is that the programmer sets up an environment where the software can improve itself.
Quote from: crazy canuck
Why do you assume that because something has been done without human intervention, human intervention could cause the same thing to be done? Are you assuming computers can evolve naturally?
And we replicate evolution? We have bred animals and plants to have certain characteristics. But we are still trying to figure out our own DNA. We are a long way off from replicating evolution.
Evolution works without understanding anything about "our own DNA". You don't need to understand anything about DNA to engage in evolution, and we have been using evolution to produce the species of plants and animals we want for hundreds of years without even knowing there was any such thing as DNA.
There are entire species of animals and plants that exist and thrive today because humans understood evolution enough to create them, even if they had never even heard the word evolution to begin with.
There are all kinds of systems that work without any singular intelligence understanding them at all - indeed, intelligence itself is a result of some of those systems.
The argument that humans cannot, even in theory, do what systems have managed to do without any intelligence guiding them at all is not just a bad conclusion, it is objectively wrong.
How could it be the case that *adding* intelligence into the mix could make it LESS possible to accomplish something, at least in theory? Surely directed evolution, as an example, can at the very least achieve what non-directed evolution has accomplished. And in fact we know this to be true, because we have dogs and grain and roses and all kinds of species of living things that only exist because humans consciously or even in some cases unconsciously drove them to exist.
Humans cannot stray into the domain of the Lord God.
Quote from: Iormlund on February 29, 2016, 05:07:40 PM
CC, genetic algorithms mimicking actual evolution have been a thing for ages.
Indeed - it is actually really damn cool. You can get solutions to problem that you simply cannot understand. You just know they work because...well, they work.
You can create an easy thought experiment to show this. Lets imagine a black box that generates a number between 1 and 100. Prior to starting, we have no idea what number the box will generate, or what system it uses to generate that number, we just know it generates a number. We don't know what the number is, we only know if we get the answer right or wrong. So our test box generates a number.
Now, if we understood HOW it generated the number, we could write a program to predict what it generates. Let's say it uses some incredibly complex algorithm that checks the orbits of the planets at some point in time, multiplies that by an atomic clock reading, takes the result modulo 100 to get the number, and throws in a hundred other complex permutations for giggles. If we understood all that, and we knew the input variables, with sufficient time we could replicate its logic and generate the right number.
Or we could just feed it numbers from 1 to 100 until it tells us which one is right. We don't understand WHY we are right at all, but we may not really care. We have the right answer, and we did it by a dumb simple "learning" algorithm that only learns what numbers are not correct, and knows not to check them again.
We have a computer now that can solve this incredibly complex problem that we may not understand at all.
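Just to make the toy concrete, here's a minimal sketch of that search in Python. The box's hidden rule below is a stand-in (a random pick rather than planetary orbits), since the whole point is that we never look inside it:

```python
import random

def make_black_box():
    """Stand-in for the opaque generator: we can't see its logic,
    we can only ask whether a guess is right."""
    secret = random.randint(1, 100)  # produced by rules we don't understand
    return lambda guess: guess == secret

box = make_black_box()

# Dumb "learning" search: remember only which guesses were wrong.
ruled_out = set()
for guess in range(1, 101):
    if guess in ruled_out:
        continue
    if box(guess):
        print(f"Right answer: {guess} (and we still have no idea why)")
        break
    ruled_out.add(guess)
```

The program solves the problem without ever representing the box's logic, which is exactly the point of the analogy.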
This is a gross over-simplification of how evolution works. The mechanism of evolution doesn't understand why or how it works, it just uses random fluctuation to try various crap until something is a little bit better and repeats.
Quote from: Berkut on February 29, 2016, 12:22:03 PM
I find it kind of weird to argue that humans cannot do what biology has already done without humans.
It is some kind of weird, "Watchmaker" kind of argument...
It is a watchmaker argument. Humans evolved; machines do not. They are produced by intelligent design (that is, by people). Evolution is slow, but it has one enormous advantage: it's not directed. Nobody has to know where it's going or how to get there. Planning isn't necessary. Computer programming requires design. You have to know where you are going. If you can't know where to go, you can't plan to get there and you can't move in that direction.
Quote from: Iormlund on February 29, 2016, 05:07:40 PM
CC, genetic algorithms mimicking actual evolution have been a thing for ages.
The way machine learning works is the programmer sets up an environment where the software can improve itself.
Evolution is a process of mutation based on a number of factors. What you are talking about seems to be more about programming a system that can learn certain things. Aren't those two things different?
Quote from: Berkut on February 29, 2016, 05:21:40 PM
This is a gross over-simplification of how evolution works. The mechanism of evolution doesn't understand why or how it works, it just uses random fluctuation to try various crap until something is a little bit better and repeats.
That wasn't just a gross over-simplification. That isn't evolution at all.
Quote from: crazy canuck on February 29, 2016, 09:52:33 PM
Quote from: Iormlund on February 29, 2016, 05:07:40 PM
CC, genetic algorithms mimicking actual evolution have been a thing for ages.
The way machine learning works is the programmer sets up an environment where the software can improve itself.
Evolution is a process of mutation based on a number of factors. What you are talking about seems to be more about programming a system that can learn certain things. Aren't those two things different?
Here's how a genetic algorithm works:
1. A bunch of programs with different traits are set loose to compete for resources.
2. Those that do well pass on their traits to more "children" than those who don't.
3. Add in the potential for mutations, crossbreeding and the like.
For most uses it isn't an attempt to simulate evolution; it's an attempt to mimic evolutionary processes to get to a specific solution. The point is that genetic algorithms can reach unpredictable, unimaginable answers, not that they replicate evolution. For the purposes they're used for they are far superior to natural evolution, which would be much less reliable.
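To make those three steps concrete, here is a bare-bones genetic algorithm in Python. It evolves random strings toward a fixed target; the target phrase, population size, and mutation rate are arbitrary demo choices, not anything from a real library or from the posts above:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # arbitrary goal for the demo
ALPHABET = string.ascii_uppercase + " "
POP_SIZE = 200
MUTATION_RATE = 0.02

def fitness(candidate):
    # Step 2's scoring: how well does this individual "compete"?
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Step 3: each character has a small chance of random mutation.
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in candidate)

def crossover(mom, dad):
    # Step 3: crossbreeding - splice two parents at a random point.
    cut = random.randrange(len(TARGET))
    return mom[:cut] + dad[cut:]

# Step 1: a random starting population.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(POP_SIZE)]

generation = 0
while max(fitness(p) for p in population) < len(TARGET):
    # Step 2: the fitter half passes its traits on to the next generation.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]
    generation += 1

print(f"Matched the target after {generation} generations")
```

No part of the program "understands" the target; selection plus mutation plus crossbreeding gets there anyway, which is why the answers such algorithms find on real problems can be so surprising.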
Quote from: crazy canuck on February 29, 2016, 09:52:33 PM
Quote from: Iormlund on February 29, 2016, 05:07:40 PM
CC, genetic algorithms mimicking actual evolution have been a thing for ages.
The way machine learning works is the programmer sets up an environment where the software can improve itself.
Evolution is a process of mutation based on a number of factors.
It is a process of natural selection actually. Mutation is just one part of evolution. A necessary, but far from sufficient condition.
Quote
What you are talking about seems to be more about programming a system that can learn certain things. Aren't those two things different?
You can program a system that uses principles of evolution to evaluate solutions.
Quote from: crazy canuck on February 29, 2016, 09:54:19 PM
Quote from: Berkut on February 29, 2016, 05:21:40 PM
This is a gross over-simplification of how evolution works. The mechanism of evolution doesn't understand why or how it works, it just uses random fluctuation to try various crap until something is a little bit better and repeats.
That wasn't just a gross over-simplification. That isn't evolution at all.
Argument by assertion is not argument at all.
I don't understand why this discussion exists.
Or rather I do. :(
Trunk Monkey FTW.
(http://stream1.gifsoup.com/view7/3398843/trunk-monkey-o.gif)
Technically, that's a Trunk Ape. :sleep:
Awesome gif, btw.
Quote from: Berkut on March 01, 2016, 01:23:47 AM
Quote from: crazy canuck on February 29, 2016, 09:54:19 PM
Quote from: Berkut on February 29, 2016, 05:21:40 PM
This is a gross over-simplification of how evolution works. The mechanism of evolution doesn't understand why or how it works, it just uses random fluctuation to try various crap until something is a little bit better and repeats.
That wasn't just a gross over-simplification. That isn't evolution at all.
Argument by assertion is not argument at all.
Which is why I found your assertion so troubling. :P
Quote from: crazy canuck on March 02, 2016, 12:24:13 PM
Quote from: Berkut on March 01, 2016, 01:23:47 AM
Argument by assertion is not argument at all.
Which is why I found your assertion so troubling. :P
Well-done! You have decisively out-stupided Berkut. You make it seem so effortless. :P
(http://24.media.tumblr.com/tumblr_lmr7c82cUk1qerndjo1_400.gif)
Well Raz, you really should read the link in the original post instead of talking shit out of your ass over and over about the diff between speed and quality.
"A key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone's first thought when they imagine a super-smart computer is one that's as intelligent as a human but can think much, much faster2—they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.
That sounds impressive, and ASI would think much faster than any human could—but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn't a difference in thinking speed—it's that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or longterm planning or abstract reasoning, that chimps' brains do not. Speeding up a chimp's brain by thousands of times wouldn't bring him to our level—even with a decade's time, he wouldn't be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.
But it's not just that a chimp can't do what we do, it's that his brain is unable to grasp that those worlds even exist—a chimp can become familiar with what a human is and what a skyscraper is, but he'll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it's beyond him to realize that anyone can build a skyscraper. That's the result of a small difference in intelligence quality.
And in the scheme of the intelligence range we're talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:"
Ok, Raz, this is just for you.
Please nobody else read this.
Thank you.
Quote
Before we dive into things, let's remind ourselves what it would mean for a machine to be superintelligent.
A key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone's first thought when they imagine a super-smart computer is one that's as intelligent as a human but can think much, much faster—they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.
That sounds impressive, and ASI would think much faster than any human could—but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn't a difference in thinking speed—it's that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or longterm planning or abstract reasoning, that chimps' brains do not. Speeding up a chimp's brain by thousands of times wouldn't bring him to our level—even with a decade's time, he wouldn't be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.
But it's not just that a chimp can't do what we do, it's that his brain is unable to grasp that those worlds even exist—a chimp can become familiar with what a human is and what a skyscraper is, but he'll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it's beyond him to realize that anyone can build a skyscraper. That's the result of a small difference in intelligence quality.
And in the scheme of the intelligence range we're talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:
(http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/staircase1.png)
To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp's incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us—let alone do it ourselves. And that's only two steps above us. A machine on the second-to-highest step on that staircase would be to us as we are to ants—it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.
But the kind of superintelligence we're talking about today is something far beyond anything on this staircase. In an intelligence explosion—where the smarter a machine gets, the quicker it's able to increase its own intelligence, until it begins to soar upwards—a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it's on the dark green step two above us, and by the time it's ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it's distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that's here on the staircase (or maybe a million times higher):
(http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/staircase2.png)
And since we just established that it's a hopeless activity to try to understand the power of a machine only two steps above us, let's very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn't understand what superintelligence means.
Evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans birth an ASI machine, we'll be dramatically stomping on evolution. Or maybe this is part of evolution—maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it's capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:
(http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/Tripwire.png)
And for reasons we'll discuss later, a huge part of the scientific community believes that it's not a matter of whether we'll hit that tripwire, but when. Kind of a crazy piece of information.
So where does that leave us?
Well no one in the world, especially not I, can tell you what will happen when we hit the tripwire. But Oxford philosopher and lead AI thinker Nick Bostrom believes we can boil down all potential outcomes into two broad categories.
First, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably, they fall off the existence balance beam and land on extinction—
(http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/beam1.png)
"All species eventually go extinct" has been almost as reliable a rule through history as "All humans eventually die" has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it's only a matter of time before some other species, some gust of nature's wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state—a place species are all teetering on falling into and from which no species ever returns.
And while most scientists I've come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that used beneficially, ASI's abilities could be used to bring individual humans, and the species as a whole, to a second attractor state—species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we'll be impervious to extinction forever—we'll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it's just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.
(http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/beam2.jpg)
If Bostrom and others are right, and from everything I've read, it seems like they really might be, we have two pretty shocking facts to absorb:
1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.
2) The advent of ASI will make such an unimaginably dramatic impact that it's likely to knock the human race off the beam, in one direction or the other.
It may very well be that when evolution hits the tripwire, it permanently ends humans' relationship with the beam and creates a new world, with or without humans.
Kind of seems like the only question any human should currently be asking is: When are we going to hit the tripwire and which side of the beam will we land on when that happens?
No one in the world knows the answer to either part of that question, but a lot of the very smartest people have put decades of thought into it. We'll spend the rest of this post exploring what they've come up with.
___________
Let's start with the first part of the question: When are we going to hit the tripwire?
i.e. How long until the first machine reaches superintelligence?
Not shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:
(http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/Howard-Graph.png)
Those people subscribe to the belief that this is happening soon—that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.
Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we're not actually that close to the tripwire.
The Kurzweil camp would counter that the only underestimating that's happening is the underappreciation of exponential growth, and they'd compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.
The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.
A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there's no guarantee about that; it could also take a much longer time.
Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it's more likely that ASI won't actually ever be achieved.
So what do you get when you put all of these opinions together?
In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: "For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?" It asked them to name an optimistic year (one in which they believe there's a 10% chance we'll have AGI), a realistic guess (a year they believe there's a 50% chance of AGI—i.e. after that year they think it's more likely than not that we'll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we'll have AGI). Gathered together as one data set, here were the results:
Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075
So the median participant thinks it's more likely than not that we'll have AGI 25 years from now. The 90% median answer of 2075 means that if you're a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.
A separate study, conducted recently by author James Barrat at Ben Goertzel's annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results:
By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%
Pretty similar to Müller and Bostrom's outcomes. In Barrat's survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don't think AGI is part of our future.
But AGI isn't the tripwire, ASI is. So when do the experts think we'll reach ASI?
Müller and Bostrom also asked the experts how likely they think it is that we'll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:
The median answer put a rapid (2 year) AGI → ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.
We don't know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let's estimate that they'd have said 20 years. So the median opinion—the one right in the center of the world of AI experts—believes the most realistic guess for when we'll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.
(http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/Timeline.png)
Of course, all of the above statistics are speculative, and they're only representative of the center opinion of the AI expert community, but it tells us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.
Okay now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?
Superintelligence will yield tremendous power—the critical question for us is:
Who or what will be in control of that power, and what will their motivation be?
The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.
Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Müller and Bostrom's survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure this will be a huge deal. It's also worth noting that those numbers refer to the advent of AGI—if the question were about ASI, I imagine that the neutral percentage would be even lower.
Before we dive much further into this good vs. bad outcome part of the question, let's combine both the "when will it happen?" and the "will it be good or bad?" parts of this question into a chart that encompasses the views of most of the relevant experts:
(http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/Square11.jpg)
We'll talk more about the Main Camp in a minute, but first—what's your deal? Actually I know what your deal is, because it was my deal too before I started researching this topic. Some reasons most people aren't really thinking about this topic:
◾As mentioned in Part 1, movies have really confused things by presenting unrealistic AI scenarios that make us feel like AI isn't something to be taken seriously in general. James Barrat compares the situation to our reaction if the Centers for Disease Control issued a serious warning about vampires in our future.
◾Due to something called cognitive biases, we have a hard time believing something is real until we see proof. I'm sure computer scientists in 1988 were regularly talking about how big a deal the internet was likely to be, but people probably didn't really think it was going to change their lives until it actually changed their lives. This is partially because computers just couldn't do stuff like that in 1988, so people would look at their computer and think, "Really? That's gonna be a life changing thing?" Their imaginations were limited to what their personal experience had taught them about what a computer was, which made it very hard to vividly picture what computers might become. The same thing is happening now with AI. We hear that it's gonna be a big deal, but because it hasn't happened yet, and because of our experience with the relatively impotent AI in our current world, we have a hard time really believing this is going to change our lives dramatically. And those biases are what experts are up against as they frantically try to get our attention through the noise of collective daily self-absorption.
◾Even if we did believe it—how many times today have you thought about the fact that you'll spend most of the rest of eternity not existing? Not many, right? Even though it's a far more intense fact than anything else you're doing today? This is because our brains are normally focused on the little things in day-to-day life, no matter how crazy a long-term situation we're a part of. It's just how we're wired.
One of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you're just standing on the intersection of the two dotted lines in the square above, totally uncertain.
During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people's opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:
(http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/Square21.jpg)
Ok, I got tired of so much cut and paste.
Keep reading here if you care:
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
Dude, I read your link the first time.
Quote from: Siege on March 03, 2016, 10:35:48 AM
Ok, Raz, this is just for you.
**SNIPE**
BUNCH OF BS
(https://pbs.twimg.com/media/CCICVmmUkAAwVA8.jpg)
I just want that computer put in my brain so I can function like a Mentat from Dune.
Quote from: 11B4V on March 03, 2016, 11:29:26 AM
Quote from: Siege on March 03, 2016, 10:35:48 AM
Ok, Raz, this is just for you.
**SNIPE**
BUNCH OF BS
(https://pbs.twimg.com/media/CCICVmmUkAAwVA8.jpg)
I don't speak wirh trators.
Quote from: Siege on March 03, 2016, 01:02:04 PM
Quote from: 11B4V on March 03, 2016, 11:29:26 AM
Quote from: Siege on March 03, 2016, 10:35:48 AM
Ok, Raz, this is just for you.
**SNIPE**
BUNCH OF BS
(https://pbs.twimg.com/media/CCICVmmUkAAwVA8.jpg)
I don't speak wirh trators.
It's spelled Traitors.
Quote from: 11B4V on March 03, 2016, 07:26:20 PM
Quote from: Siege on March 03, 2016, 01:02:04 PM
Quote from: 11B4V on March 03, 2016, 11:29:26 AM
Quote from: Siege on March 03, 2016, 10:35:48 AM
Ok, Raz, this is just for you.
**SNIPE**
BUNCH OF BS
(https://pbs.twimg.com/media/CCICVmmUkAAwVA8.jpg)
I don't speak wirh trators.
It's spelled Traitors.
I don't speak english.
None of us know Arabic.
الله أكبر ("God is greatest")
Google Translate doesn't count.
Wow, Raz finally says something right.
Quote from: Siege on March 08, 2016, 09:17:25 AM
Wow, Raz finally says something right.
That Hebrew is Arabic? :huh: