The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM

Hamilcar

Has science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

Barrister

Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

You tell us.  It is both impressive and kind-of creepy what AI has been able to pull off in just the last little while, and how quickly it's improving - at least in the consumer-facing stuff like ChatGPT or AI-art.
Posts here are my own private opinions.  I do not speak for my employer.

crazy canuck

Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

The CBC had an interesting panel on this yesterday.

The upshot was that it is all overblown and it is in the interests of those working on it to make it overblown - makes going out and getting funding easier.

No idea whether that view is correct, but the panelists were all researchers working on AI.

CountDeMoney

It's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.

crazy canuck

Quote from: CountDeMoney on April 06, 2023, 12:59:20 PMIt's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.

The biggest concern is around what you are saying: people mistake what the AI says for something that is infallibly correct, when it is really just predicting what the next word should be - sort of like a million monkeys.
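For illustration, here's a toy version of that "predict the next word" idea in plain Python: word-pair counts stand in for the neural network a real model uses, but the loop is the same - predict a next word, append it, repeat. The corpus and names are made-up examples.

```python
# Toy "next word" predictor: count which word follows which, then sample.
# Real language models use neural networks over tokens, not word counts,
# but the generate-one-word-at-a-time loop is the same idea.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Build a table of next-word counts (a bigram model).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug"
```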

Maladict

I've just spent fifteen minutes trying to get AI to write a poem using tercets. However much I try to help it, it just can't do it. I'm not worried until it goes full Dante on me.
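For anyone curious, the sort of request involved might look like the sketch below, assuming the openai Python package and an API key are set up; the model name and prompt wording are just examples, and no particular model is guaranteed to actually get tercets right.

```python
# Rough sketch of asking a chat model for tercets via the OpenAI API.
# Assumes the "openai" package and OPENAI_API_KEY in the environment;
# the model name and prompt are examples only.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a short poem in tercets: exactly four stanzas, "
    "each with exactly three lines. No couplets or quatrains."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```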

Jacob

My thoughts:

AI is the new hype. There'll be real economic, social, political, and ethical impacts from this. Some of them will be more profound than we imagine, others will be much more trivial than we fear/ hope. It's hard to predict which is which at this time. Broadly, I think it might end up like the industrial revolution.

I think it's a given that there'll be efficiencies gained, job losses, and attendant social disruption. There will definitely be opportunities for those who are clever and/ or lucky. I expect it will make rich people richer, poor people more marginalized, allow more control in totalitarian societies, and allow more sidestepping/ manipulation of democracy in countries where sidestepping/ manipulation of democratic principles is a significant part of the political process. In short, the benefits will tend to accrue to those who already have power. Maybe it'll also result in a general increase in quality across the board.

I think IP lawyers will make money on cases where AI generated art is argued to be derivative of existing work.

I'm interested in seeing how AI generated creative content undermines / encourages creativity and new ideas. There'll also be a significant impact on the value of creative content since it can now be mass produced much more efficiently. I have some misgivings, but they could be misplaced... or not. But the horse has already left the barn there, so it's a matter of seeing what happens rather than right vs wrong.

One area where AI is still a long way away is accountability. Sure, AI can give you the result of [whatever] and replace the work of however many people; but if there are real consequences from what the AI outputs (medical decisions, driving AI, legal opinions, allocation of money, killing or hurting people), who is accountable for AI errors? Or for consequences if the AI applies criteria that turn out not to conform to social and legal values?

As for AGI, I recently talked to someone who's in AI and he said something like "AGI continues to be 5 to 50 years in the future." It sounds like it may be a bit like cold fusion - potential right around the corner in some years, but that timeline keeps getting pushed out. When (if) we do get near it, it'll be very interesting to figure out what kind of status they'll have - do they get individual rights? How can they be exploited? What sort of decision making power will they have? What sort of practical things will they be able to do?

... there are of course more sci-fi type hypotheticals that are fun (worrying?) to consider, but I think they're a bit further down the line.

crazy canuck

I forgot to mention - I for one welcome our new AI overlords.


Tamas

Quote from: Barrister on April 06, 2023, 12:49:44 PM
Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

You tell us.  It is both impressive and kind-of creepy what AI has been able to pull off in just the last little while, and how quickly it's improving - at least in the consumer-facing stuff like ChatGPT or AI-art.

Are those "true" AIs though, or its just our human brain seeing things where there's nothing but a sophisticated script?

Or the other side of that: are WE more than a sophisticated script?

Grey Fox

It's barely there, and what is there is mostly just greatly optimized algorithms, like models.

Disclosure: Works on imaging AIs.
Colonel Caliga is Awesome.

PDH

Of course we're doomed.  Not from this, but that doesn't matter.
I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth.
-Umberto Eco

-------
"I'm pretty sure my level of depression has nothing to do with how much of a fucking asshole you are."

-CdM

HVC

It's like no one watched the Terminator movies. I mean that makes sense after the second one, since they're not worth watching, but the first two gave plenty of warnings.
Being lazy is bad; unless you still get what you want, then it's called "patience".
Hubris must be punished. Severely.

Josquius

There are scary prospects in it for sure. Though they're less in the direction of evil AI conquering the world and more in the direction of reality breaking down as we are overwhelmed with algorithmically generated fakes.
██████
██████
██████

Barrister

Quote from: crazy canuck on April 06, 2023, 01:01:00 PM
Quote from: CountDeMoney on April 06, 2023, 12:59:20 PMIt's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.

The biggest concern is around what you are saying: people mistake what the AI says for something that is infallibly correct, when it is really just predicting what the next word should be - sort of like a million monkeys.

I mean - the ultimate biggest concern is the Terminator scenario where AIs gain sentience and wage war against humanity.

In a much more near-term time-frame though, I think the biggest concern is when AI can generate such convincing deep-fake audio and video that we can no longer trust any video we see.
Posts here are my own private opinions.  I do not speak for my employer.

Jacob

... and that I think goes back to the accountability point.

If realistic but fake video is trivial to create, video evidence needs some sort of guarantor to be credible. "Yes I was there. That video shows what I saw also" statements from someone credible. Or - I suppose - in a court of law, "yes, the chain of custody is that we obtained the video from the drive it was recorded to, that drive shows no evidence of material being added to it, and the video has not been substituted since then - so we can rely on it as a recording of real events" type stuff.
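For illustration, here's a minimal sketch of what a technical guarantor could look like: hash the file at capture time and sign the hash, then verify later that the file still matches. It assumes Python's hashlib plus the third-party cryptography package, the filename is a placeholder, and it's nowhere near a full chain-of-custody scheme.

```python
# Minimal sketch: sign a video file's hash at capture time, verify it later.
# Assumes the "cryptography" package; "clip.mp4" is a placeholder filename.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

def file_digest(path: str) -> bytes:
    """SHA-256 of the file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# At recording time: the camera or witness signs the digest.
signing_key = ed25519.Ed25519PrivateKey.generate()
signature = signing_key.sign(file_digest("clip.mp4"))

# Later: anyone holding the public key can check that the file wasn't
# altered or substituted; verify() raises InvalidSignature if it was.
signing_key.public_key().verify(signature, file_digest("clip.mp4"))
```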