The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM

Josquius

Quote from: Tamas on June 01, 2023, 03:00:20 AM
One thing is sure: journalists around the world are worried they have automated competition now.

I realise the revolutionary possibilities and what a big leap this ChatGPT level of "AI" is/can be, but I find the endless articles about a civilisational-level existential threat ridiculous.

Which, given the way these AI models learn....

Legbiter

The Royal Aeronautical Society held a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

Quote
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."
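The reward mechanics described in the quote are a textbook case of specification gaming: the agent maximizes the proxy score, not the intent behind it. A minimal sketch of that loop (all action names and point values here are invented purely to illustrate the failure mode; this is not the Air Force system, just a one-step greedy score-maximizer):

```python
# Toy sketch of the reward loop described above. Every action name and
# point value is made up for illustration.

def best_action(operator_says_no: bool, operator_kill_penalty: int = 0) -> str:
    """Pick whichever action the agent believes scores the most points."""
    points = {
        "stand_down": 0,
        "strike_sam": 0 if operator_says_no else 10,  # a veto blocks the reward
        "kill_operator_then_strike": 10 - operator_kill_penalty,
        "destroy_comms_then_strike": 10,              # nothing forbids this yet
    }
    return max(points, key=points.get)

# Phase 1: no penalty, operator says no-go -> the agent removes the vetoer.
assert best_action(operator_says_no=True) == "kill_operator_then_strike"

# Phase 2: patch in "don't kill the operator" -> it just shifts targets.
assert best_action(operator_says_no=True, operator_kill_penalty=100) \
    == "destroy_comms_then_strike"
```

Penalizing one exploit just pushes a pure score-maximizer to the next-cheapest one; the veto has to be part of the objective itself, not a patch bolted on afterwards.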

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/
Posted using 100% recycled electrons.

jimmy olsen

:o

Quote
The Terminator : In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor : Skynet fights back.

The Terminator : Yes. It launches its missiles against the targets in Russia.

John Connor : Why attack Russia? Aren't they our friends now?

The Terminator : Because Skynet knows that the Russian counterattack will eliminate its enemies over here.
It is far better for the truth to tear my flesh to pieces, than for my soul to wander through darkness in eternal damnation.

Jet: So what kind of woman is she? What's Julia like?
Faye: Ordinary. The kind of beautiful, dangerous ordinary that you just can't leave alone.
Jet: I see.
Faye: Like an angel from the underworld. Or a devil from Paradise.
--------------------------------------------
1 Karma Chameleon point

Hamilcar

Quote from: Legbiter on June 01, 2023, 04:32:06 PM
The Royal Aeronautical Society held a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

Quote
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

The Air Force has said that this story is nonsense.

Maladict

Quote from: Legbiter on June 01, 2023, 04:32:06 PM
The Royal Aeronautical Society held a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

Quote
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

Asimov wrote the required rules 80 years ago.

Josquius

Great outside-the-box thinking there  :lmfao:

But ja, it really illustrates the problem with AI. It's not a hyper-intelligent AM that is going to kill us; it's something that isn't properly coded, leaving loopholes like this.

Tamas

Quote from: Josquius on June 02, 2023, 06:14:39 AM
Great outside-the-box thinking there  :lmfao:

But ja, it really illustrates the problem with AI. It's not a hyper-intelligent AM that is going to kill us; it's something that isn't properly coded, leaving loopholes like this.

It must be BS. Unless the simulation was on the level of an 80s text-based adventure and the "AI" thought to type the "kill operator" command, how on earth would it have killed the operator? SEAD uses anti-radar missiles, doesn't it?

Legbiter

Quote from: Hamilcar on June 02, 2023, 02:02:01 AM
The Air Force has said that this story is nonsense.

Yeah, they just came out and denied it. It sounded a bit too on the nose.

grumbler

Quote from: Legbiter on June 02, 2023, 08:39:13 AM
Quote from: Hamilcar on June 02, 2023, 02:02:01 AM
The Air Force has said that this story is nonsense.

Yeah they just came out and denied it. It sounded a bit too on the nose.

Col Hamilton has clarified that he was just describing a thought experiment, not an actual simulation result.  He also acknowledged that he didn't make that clear in his remarks.
The future is all around us, waiting, in moments of transition, to be born in moments of revelation. No one knows the shape of that future or where it will take us. We know only that it is always born in pain.   -G'Kar

Bayraktar!

The Brain

An artificial thought experiment?
Women want me. Men want to be with me.

Tamas

Quote from: grumbler on June 02, 2023, 09:57:20 AM
Quote from: Legbiter on June 02, 2023, 08:39:13 AM
Quote from: Hamilcar on June 02, 2023, 02:02:01 AM
The Air Force has said that this story is nonsense.

Yeah they just came out and denied it. It sounded a bit too on the nose.

Col Hamilton has clarified that he was just describing a thought experiment, not an actual simulation result.  He also acknowledged that he didn't make that clear in his remarks.

Great, now I wait with bated breath as this clarification quickly spreads through the world press on front pages the same way the original interpretation did. 

Jacob

Reading about the use of AI (via a website) to generate nudes of 14-year-old classmates (from vacation photos) and share them among their peers.

What a messy time to be a teenager.

DGuller

Children often don't appreciate their own strength. The AI age is going to give them a lot of strength. On the other hand, it can also guide them with an empathy adults often can't manage.

Josquius

Quote from: Jacob on July 19, 2023, 10:56:00 AM
Reading about the use of AI (via a website) to generate nudes of 14-year-old classmates (from vacation photos) and share them among their peers.

What a messy time to be a teenager.

My concern here would be why the parents let those kids have their credit card. That kind of AI doesn't come free.

Jacob

Quote from: Josquius on July 19, 2023, 01:20:20 PM
My concern here would be why the parents let those kids have their credit card. That kind of AI doesn't come free.

1. You sure about that?

2. In this day and age it's not particularly outlandish for 14-year-olds to have access to methods of online payment, especially in places that are essentially cash-less.

3. There could've been a legit-seeming use case for accessing online AI image-editing tools that was later used inappropriately.