Languish.org

General Category => Off the Record => Topic started by: jimmy olsen on July 26, 2009, 10:04:43 AM

Title: What kind of limits should we put on Artificial Intelligences?
Post by: jimmy olsen on July 26, 2009, 10:04:43 AM
I think we all know where this is headed.
(http://www.solarnavigator.net/images/terminator_robot.jpg)
http://www.msnbc.msn.com/id/32147267/ns/technology_and_science-the_new_york_times/
Quote
Scientists fear machines will outsmart us
Researchers worry that diverse technologies could cause social disruption

By John Markoff
updated 12:21 a.m. ET, Sun., July 26, 2009

A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone.

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a "cockroach" stage of machine intelligence.

While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in "2001: A Space Odyssey," they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.

The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?

The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world.

A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.

The conference was organized by the Association for the Advancement of Artificial Intelligence, and in choosing Asilomar for the discussions, the group purposefully evoked a landmark event in the history of science. In 1975, the world's leading biologists also met at Asilomar to discuss the new ability to reshape life by swapping genetic material among organisms. Concerned about possible biohazards and ethical questions, scientists had halted certain experiments. The conference led to guidelines for recombinant DNA research, enabling experimentation to continue.

The meeting on the future of artificial intelligence was organized by Eric Horvitz, a Microsoft researcher who is now president of the association.

Dr. Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems run amok.

The idea of an "intelligence explosion" in which smart machines would design even more intelligent machines was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the "human era will be ended." He called this shift the Singularity.

This vision, embraced in movies and literature, is seen as plausible and unnerving by some scientists like William Joy, co-founder of Sun Microsystems. Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation.

"Something new has taken place in the past five to eight years," Dr. Horvitz said. "Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture."

The Kurzweil version of technological utopia has captured imaginations in Silicon Valley. This summer an organization called the Singularity University began offering courses to prepare a "cadre" to shape the advances and help society cope with the ramifications.

"My sense was that sooner or later we would have to make some sort of statement or assessment, given the rising voice of the technorati and people very concerned about the rise of intelligent machines," Dr. Horvitz said.

The A.A.A.I. report will try to assess the possibility of "the loss of human control of computer-based intelligences." It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?

Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory.

The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition becomes unshakable.

"If you wait too long and the sides become entrenched like with G.M.O.," he said, referring to genetically modified foods, "then it is very difficult. It's too complex, and people talk right past each other."

Tom Mitchell, a professor of artificial intelligence and machine learning at Carnegie Mellon University, said the February meeting had changed his thinking. "I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions," he said. But, he added, "The meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives."

Despite his concerns, Dr. Horvitz said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, "Oh no, sorry to hear that."

A physician told him afterward that it was wonderful that the system responded to human emotion. "That's a great idea," Dr. Horvitz said he was told. "I have no time for that."

Copyright © 2009 The New York Times
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: The Brain on July 26, 2009, 10:15:19 AM
Luddites are still around. Film at 11.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Faeelin on July 26, 2009, 10:27:54 AM
Quote
The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world.

Oh, what bullshit. None of these peeps cared about all the calligraphers who lost their jobs thanks to movable type, did they?
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Habbaku on July 26, 2009, 10:33:57 AM
Self-driving cars would save thousands of lives a year and untold millions in damage.  :mellow:
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Josquius on July 26, 2009, 10:37:05 AM
AIs are the way forward. It's stupid robots we need to be more worried about.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: jimmy olsen on July 26, 2009, 11:01:08 AM
Quote from: Habbaku on July 26, 2009, 10:33:57 AM
Self-driving cars would save thousands of lives a year and untold millions in damage.  :mellow:
It's not the self-driving cars that worry me.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Neil on July 26, 2009, 11:09:20 AM
Quote from: jimmy olsen on July 26, 2009, 11:01:08 AM
Quote from: Habbaku on July 26, 2009, 10:33:57 AM
Self-driving cars would save thousands of lives a year and untold millions in damage.  :mellow:
It's not the self-driving cars that worry me.
You worry too much about everything.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Strix on July 26, 2009, 11:14:11 AM
I think there have to be limits on AI. I don't think the concern is that AI robots would take over the world or turn on humans, as in Westworld or Terminator. I think the major concern is that liberal nut jobs will start demanding that AI robots be given their own freedom and rights as they become more human-like, claiming slavery.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Neil on July 26, 2009, 11:16:05 AM
Quote from: Strix on July 26, 2009, 11:14:11 AM
I think there have to be limits on AI. I don't think the concern is that AI robots would take over the world or turn on humans, as in Westworld or Terminator. I think the major concern is that liberal nut jobs will start demanding that AI robots be given their own freedom and rights as they become more human-like, claiming slavery.
An excellent point.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Josephus on July 26, 2009, 11:17:42 AM
Quote from: Strix on July 26, 2009, 11:14:11 AM
I think there have to be limits on AI. I don't think the concern is that AI robots would take over the world or turn on humans, as in Westworld or Terminator. I think the major concern is that liberal nut jobs will start demanding that AI robots be given their own freedom and rights as they become more human-like, claiming slavery.

Captain Picard ruled quite successfully that Data was a sentient being. :contract:
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Carolus on July 26, 2009, 12:32:03 PM
Will Smith will save us.  :bowler:
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Ed Anger on July 26, 2009, 12:44:00 PM
Quote from: Josephus on July 26, 2009, 11:17:42 AM
Quote from: Strix on July 26, 2009, 11:14:11 AM
I think there have to be limits on AI. I don't think the concern is that AI robots would take over the world or turn on humans, as in Westworld or Terminator. I think the major concern is that liberal nut jobs will start demanding that AI robots be given their own freedom and rights as they become more human-like, claiming slavery.

Captain Picard ruled quite successfully that Data was a sentient being. :contract:

Picard is an inferior captain.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Grey Fox on July 26, 2009, 12:49:10 PM
Planes' autopilots have been around for ages & we still have 2 pilots per plane.

And contrary to common belief, the autopilot can take off & land by itself. Pilots usually do it themselves because it's "fun".
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Martinus on July 26, 2009, 01:07:18 PM
Whatever limits the creators put on the spambot TimOrtiz2009 have already proven to be too restrictive.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Siege on July 26, 2009, 01:52:21 PM
I don't think real intelligence, as in self-awareness, is possible for computers.

You might be able to build a computer with massive processing power, capable of simulating intelligence, but real intelligence? I don't think so.

Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Iormlund on July 26, 2009, 02:28:34 PM
[Turing]
If a computer could accurately simulate intelligence, it would be by definition intelligent.
[/Turing]
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Tonitrus on July 26, 2009, 02:53:15 PM
Quote from: Neil on July 26, 2009, 11:16:05 AM
Quote from: Strix on July 26, 2009, 11:14:11 AM
I think there have to be limits on AI. I don't think the concern is that AI robots would take over the world or turn on humans, as in Westworld or Terminator. I think the major concern is that liberal nut jobs will start demanding that AI robots be given their own freedom and rights as they become more human-like, claiming slavery.
An excellent point.

Indeed, that is the clinching factor that could inhibit the sex-bot industry.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: FunkMonk on July 26, 2009, 03:16:52 PM
Hopefully America's war machine will one day be fully automated, so we can all enjoy a little warfare from the comfort of our televisions at home without risk. :cool:
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Razgovory on July 26, 2009, 04:04:51 PM
I have a question: why did they put teeth on those robots in Terminator? Do they eat something? What do robots eat?
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: ulmont on July 26, 2009, 04:07:59 PM
Quote from: Razgovory on July 26, 2009, 04:04:51 PM
I have a question: why did they put teeth on those robots in Terminator?

Those robots are the same model used for infiltration, after having the human flesh and skin slapped on top of them.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Habbaku on July 26, 2009, 04:08:07 PM
Quote from: Razgovory on July 26, 2009, 04:04:51 PM
I have a question: why did they put teeth on those robots in Terminator? Do they eat something? What do robots eat?

I don't know all that much about the Terminator story, but I assume they did so in case they wanted to later apply the human/camouflage skin over the same bot.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Razgovory on July 26, 2009, 04:20:16 PM
Oh okay.  That makes sense.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Siege on July 26, 2009, 04:23:23 PM
No it doesn't.

A terminator with armor unrestricted by the need for human camouflage would be far more effective in combat.  Its offensive weapon systems could be attached to its arms, instead of carrying human looking little assault rifles.

Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Razgovory on July 26, 2009, 04:25:47 PM
They carry guns that look like people?  Now that's weird!
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: FunkMonk on July 26, 2009, 04:27:27 PM
Because it's a movie and it looked cool.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Neil on July 26, 2009, 06:18:20 PM
Quote from: Martinus on July 26, 2009, 01:07:18 PM
Whatever limits the creators put on the spambot TimOrtiz2009 have already proven to be too restrictive.
Let's be fair.  Tim's posts are both more clever and informative than yours.  Constant newsposts are better than concentrated faggotry and junior-high girl drama.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Neil on July 26, 2009, 06:20:54 PM
Quote from: Siege on July 26, 2009, 01:52:21 PM
I don't think real intelligence, as in self-awareness, is possible for computers.

You might be able to build a computer with massive processing power, capable of simulating intelligence, but real intelligence? I don't think so.
Simulated intelligence is identical to real intelligence.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: HVC on July 26, 2009, 07:43:39 PM
Quote from: FunkMonk on July 26, 2009, 03:16:52 PM
Hopefully America's war machine will one day be fully automated, so we can all enjoy a little warfare from the comfort of our televisions at home without risk. :cool:
I can do that right now :P
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Neil on July 26, 2009, 07:51:59 PM
Quote from: HVC on July 26, 2009, 07:43:39 PM
Quote from: FunkMonk on July 26, 2009, 03:16:52 PM
Hopefully America's war machine will one day be fully automated, so we can all enjoy a little warfare from the comfort of our televisions at home without risk. :cool:
I can do that right now :P
Indeed.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Darth Wagtaros on July 26, 2009, 07:52:38 PM
Quote from: Josephus on July 26, 2009, 11:17:42 AM
Quote from: Strix on July 26, 2009, 11:14:11 AM
I think there have to be limits on AI. I don't think the concern is that AI robots would take over the world or turn on humans, as in Westworld or Terminator. I think the major concern is that liberal nut jobs will start demanding that AI robots be given their own freedom and rights as they become more human-like, claiming slavery.

Captain Picard ruled quite successfully that Data was a sentient being. :contract:
And that's why Captain Ben Sisko was teh superior captain.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: saskganesh on July 26, 2009, 08:37:25 PM
Quote from: Siege on July 26, 2009, 04:23:23 PM
No it doesn't.

A terminator with armor unrestricted by the need for human camouflage would be far more effective in combat.  Its offensive weapon systems could be attached to its arms, instead of carrying human looking little assault rifles.

A robot does not even need a body. It can sit in an armor-plated square block command post and just send out weapon systems.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Darth Wagtaros on July 26, 2009, 08:40:05 PM
The real threat is the Cybermen. 

Worrying about losing American jobs to AIs is silly, really. They are being outsourced to other countries anyway.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Neil on July 27, 2009, 08:50:32 AM
Quote from: Darth Wagtaros on July 26, 2009, 08:40:05 PM
Worrying about losing American jobs to AIs is silly, really. They are being outsourced to other countries anyway.
Don't be so sure. What if someone develops a robot that can pump gas or flip burgers?
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Josquius on July 27, 2009, 08:56:19 AM
Quote from: Neil on July 27, 2009, 08:50:32 AM
Quote from: Darth Wagtaros on July 26, 2009, 08:40:05 PM
Worrying about losing American jobs to AIs is silly, really. They are being outsourced to other countries anyway.
Don't be so sure. What if someone develops a robot that can pump gas or flip burgers?
The silly North American obsession with 'service' will damn you all to Luddism.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Neil on July 27, 2009, 09:06:03 AM
Quote from: Tyr on July 27, 2009, 08:56:19 AM
Quote from: Neil on July 27, 2009, 08:50:32 AM
Quote from: Darth Wagtaros on July 26, 2009, 08:40:05 PM
Worrying about losing American jobs to AIs is silly, really. They are being outsourced to other countries anyway.
Don't be so sure. What if someone develops a robot that can pump gas or flip burgers?
The silly North American obsession with 'service' will damn you all to Luddism.
It gives shitty people with no skills something to do, as opposed to collecting welfare.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Caliga on July 27, 2009, 09:12:25 AM
We should put no limits on artificial intelligence. That will only delay the singularity. Our destiny is to be superseded by machine life of our own creation.

It... is... inevitable.
(http://livingromcom.typepad.com/my_weblog/images/agent_smith_poses04.jpg)
(it amuses me to be able to use this twice in a single day)
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: jimmy olsen on July 27, 2009, 09:14:45 AM
Quote from: Darth Wagtaros on July 26, 2009, 07:52:38 PM
Quote from: Josephus on July 26, 2009, 11:17:42 AM
Quote from: Strix on July 26, 2009, 11:14:11 AM
I think there have to be limits on AI. I don't think the concern is that AI robots would take over the world or turn on humans, as in Westworld or Terminator. I think the major concern is that liberal nut jobs will start demanding that AI robots be given their own freedom and rights as they become more human-like, claiming slavery.

Captain Picard ruled quite successfully that Data was a sentient being. :contract:
And that's why Captain Ben Sisko was teh superior captain.
Did Sisko rule that Data wasn't a sentient being? Wouldn't he be constrained by Picard's precedent, unless Starfleet command overruled it?
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: The Brain on July 27, 2009, 09:15:57 AM
IIRC Imelda Marcos suggested that all government problems could be solved by letting a computer make all decisions. But she didn't win that election so we'll never know.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: PDH on July 27, 2009, 09:36:03 AM
I am still skeptical of the existence of non-artificial intelligence.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: Ed Anger on July 27, 2009, 09:37:36 AM
Quote from: PDH on July 27, 2009, 09:36:03 AM
I am still skeptical of the existence of non-artificial intelligence.

I'm still skeptical on the issue of Timmay.
Title: Re: What kind of limits should we put on Artificial Intelligences?
Post by: PDH on July 27, 2009, 09:40:14 AM
Quote from: Ed Anger on July 27, 2009, 09:37:36 AM
I'm still skeptical on the issue of Timmay.
That was decided LONG ago in my book.