Languish.org

General Category => Off the Record => Topic started by: Hamilcar on April 06, 2023, 12:44:43 PM

Title: The AI dooooooom thread
Post by: Hamilcar on April 06, 2023, 12:44:43 PM
Has science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 12:49:44 PM
Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

You tell us.  It is both impressive and kind-of creepy what AI has been able to pull off in just the last little while, and how quickly it's improving - at least in the consumer-facing stuff like ChatGPT or AI-art.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 12:57:53 PM
Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

The CBC had an interesting panel on this yesterday.

The upshot was that it is all overblown and it is in the interests of those working on it to make it overblown - makes going out and getting funding easier.

No idea whether that view is correct, but the panelists were all researchers working on AI.
Title: Re: The AI dooooooom thread
Post by: CountDeMoney on April 06, 2023, 12:59:20 PM
It's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 01:01:00 PM
Quote from: CountDeMoney on April 06, 2023, 12:59:20 PMIt's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.

The biggest concern is around what you are saying: people mistake the answer AI gives as something that is infallibly correct, when it is just predicting what the next word should be - sort of like a million monkeys.
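To make the "million monkeys" point concrete, here's next-word prediction in miniature - a toy bigram model with a made-up ten-word corpus, nothing like the scale of a real LLM, but the same basic move of sampling whatever tends to come next:

```python
import random
from collections import Counter, defaultdict

# Toy illustration: a "model" that only ever asks "given this word,
# which word tends to come next?" -- frequency, not truth.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it (a bigram table).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

random.seed(0)
print(predict_next("the"))  # cat, mat, or fish; plausible, not "true"
```

The output is always grammatical-looking continuation, never a fact-checked claim, which is exactly why treating it as infallible is dangerous.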
Title: Re: The AI dooooooom thread
Post by: Maladict on April 06, 2023, 01:17:40 PM
I've just spent fifteen minutes trying to get AI to write a poem using tercets. However much I try to help it, it just can't do it. I'm not worried until it goes full Dante on me.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 01:29:33 PM
My thoughts:

AI is the new hype. There'll be real economic, social, political, and ethical impacts from this. Some of them will be more profound than we imagine, others will be much more trivial than we fear/ hope. It's hard to predict which is which at this time. Broadly, I think it might end up like the industrial revolution.

I think it's a given that there'll be efficiencies gained, job losses, and attendant social disruption. There will definitely be opportunities for those who are clever and/ or lucky. I expect it will make rich people richer, poor people more marginalized, allow more control in totalitarian societies, and allow more sidestepping/ manipulation of democracy in countries where sidestepping/ manipulation of democratic principles is a significant part of the political process. In short, the benefits will tend to accrue to those who already have power. Maybe it'll also result in a general increase in quality across the board.

I think IP lawyers will make money on cases where AI generated art is argued to be derivative of existing work.

I'm interested in seeing how AI generated creative content undermines / encourages creativity and new ideas. There'll also be a significant impact on the value of creative content since it can now be mass produced much more efficiently. I have some misgivings, but they could be misplaced... or not. But the horse has already left the barn there, so it's a matter of seeing what happens rather than right vs wrong.

One area where AI is still a long way away is accountability. Sure, AI can give you the result of [whatever] and replace the work of however many people; but if there are real consequences from what the AI outputs (medical decisions, driving, legal opinions, allocation of money, killing or hurting people), who is accountable for AI errors? Or for consequences if the AI applies criteria that turn out not to conform to social and legal values?

As for AGI, I recently talked to someone who's in AI and he said something like "AGI continues to be 5 to 50 years in the future." It sounds like it may be a bit like cold fusion - potential right around the corner in some years, but that timeline keeps getting pushed out. When (if) we do get near it, it'll be very interesting to figure out what kind of status they'll have - do they get individual rights? How can they be exploited? What sort of decision making power will they have? What sort of practical things will they be able to do?

... there are of course more sci-fi type hypotheticals that are fun (worrying?) to consider, but I think they're a bit further down the line.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 01:31:50 PM
I forgot to mention - I for one welcome our new AI overlords.

Title: Re: The AI dooooooom thread
Post by: Tamas on April 06, 2023, 01:44:17 PM
Quote from: Barrister on April 06, 2023, 12:49:44 PM
Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

You tell us.  It is both impressive and kind-of creepy what AI has been able to pull off in just the last little while, and how quickly it's improving - at least in the consumer-facing stuff like ChatGPT or AI-art.

Are those "true" AIs though, or is it just our human brain seeing things where there's nothing but a sophisticated script?

Or the other side of that: are WE more than a sophisticated script?
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 06, 2023, 01:59:29 PM
It's barely there, & what is there is mostly just greatly optimized algorithms and models.

Disclosure : Works on imaging AIs.
Title: Re: The AI dooooooom thread
Post by: PDH on April 06, 2023, 02:03:10 PM
Of course we're doomed.  Not from this, but that doesn't matter.
Title: Re: The AI dooooooom thread
Post by: HVC on April 06, 2023, 02:04:35 PM
It's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.
Title: Re: The AI dooooooom thread
Post by: Josquius on April 06, 2023, 02:47:34 PM
There are scary prospects in it for sure. Though they're less in the direction of evil AI conquers the world and more reality breaks down as we are overwhelmed with algorithmically generated fakes.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 02:50:50 PM
Quote from: crazy canuck on April 06, 2023, 01:01:00 PM
Quote from: CountDeMoney on April 06, 2023, 12:59:20 PMIt's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.

The biggest concern is around what you are saying: people mistake the answer AI gives as something that is infallibly correct, when it is just predicting what the next word should be - sort of like a million monkeys.

I mean - the ultimate biggest concern is the Terminator scenario where AIs gain sentience and wage war against humanity.

In a much more near-term time-frame though, I think the biggest concern is when AI can generate such convincing deep-fake audio and video that we can no longer trust any video we see.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 02:55:31 PM
... and that I think goes back to the accountability point.

If realistic but fake video is trivial to create, video evidence needs some sort of guarantor to be credible. "Yes I was there. That video shows what I saw also" statements from someone credible. Or - I suppose - in a court of law "yes, the chain of custody is that we obtained the video from the drive it was recorded to, that drive has no evidence of material being added to it, and the video has not been substituted since then - so we can rely on it as a recording of real events" type stuff.
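As an aside, the "not substituted since recorded" part of that chain does have a standard technical ingredient: record a cryptographic hash of the file at seizure time and recompute it later. A minimal sketch (the filename and contents are made up; note a hash only proves the file hasn't changed since hashing, not that the recording is genuine):

```python
import hashlib

def sha256_of(path):
    """Hash a file in chunks so large videos needn't fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate: write a "video", record its digest in the custody log.
with open("evidence.mp4", "wb") as f:
    f.write(b"\x00\x01fake video bytes")
digest_at_seizure = sha256_of("evidence.mp4")

# Later: recompute; a match supports "not substituted since recorded".
assert sha256_of("evidence.mp4") == digest_at_seizure

# Any tampering, even appending one byte, changes the digest.
with open("evidence.mp4", "ab") as f:
    f.write(b"tampered")
assert sha256_of("evidence.mp4") != digest_at_seizure
```

So the technical check covers substitution after seizure; whether the footage was fake before seizure still needs the human guarantor.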
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 02:56:54 PM
Question for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on April 06, 2023, 03:00:13 PM
Quote from: crazy canuck on April 06, 2023, 01:01:00 PMThe biggest concern is around what you are saying, people mistake what AI says is the answer as something that is infallibly correct

People having strong beliefs that false things are infallibly correct is not really a novel problem. 

Actually I think an equal problem will be more skepticism about obviously true things based on the theoretical possibility of AI manipulation.  It will be another arrow in the Conspiracy Theorist quiver.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 03:05:05 PM
Quote from: Jacob on April 06, 2023, 02:55:31 PM... and that I think goes back to the accountability point.

If realistic but fake video is trivial to create, video evidence needs some sort of guarantor to be credible. "Yes I was there. That video shows what I saw also" statements from someone credible. Or - I suppose - in a court of law "yes, the chain of custody is that we obtained the video from the drive it was recorded to, that drive has no evidence of material being added to it, and the video has not been substituted since then - so we can rely on it as a recording of real events" type stuff.

So I mean that kind of evidence is already required in court in order to present video.  I can't just play a video without someone to authenticate it.

But as it is now, once the video is authenticated it tends to have much more value than a live witness.  It's one thing for a complainant to say "The Accused beat me", while it's another to have a video showing the Accused beating the complainant.  But if video becomes so easy to fake then suddenly the video has no more value than the live witness.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 03:06:28 PM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

At least in terms of still images or chat I believe the only thing stopping it right now is limitations built into the AIs.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 03:07:43 PM
BB, I'd think that video would still have some value in underscoring the visceralness (or lack of same) in a way that's more effective than "he beat me viciously."

... but yeah, it would perhaps stop feeling more "real" if we become accustomed to question all videos.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 03:08:57 PM
Quote from: Barrister on April 06, 2023, 03:06:28 PMAt least in terms of still images or chat I believe the only thing stopping it right now is limitations built into the AIs.

... I'm not in the field, but my understanding is that it's not that hard to train AI. I guess it's just a matter of time before someone sets it up and markets it.

:hmm: ... business idea? Certainly it'll be cheaper than paying performers....
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 06, 2023, 03:33:30 PM
These language models will be extremely effective at scamming seniors at first and then the rest of us.

ChatGPT-4 is currently being trained on Icelandic; the other day it even composed a pretty good Hávamál-style verse on, ironically, the importance of small languages.

Lítil tungumál, er lífsins grundvöllur.

Ræður ríki sínu, rótum bundin.

Mögur heimsins, margbreytumleikur.

Aukin samskipti, sannleiks sökum.



It's actually rather good.  :hmm:
Title: Re: The AI dooooooom thread
Post by: grumbler on April 06, 2023, 04:28:53 PM
Quote from: HVC on April 06, 2023, 02:04:35 PMIt's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.

There were no Terminator movies after T2.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 04:30:30 PM
Quote from: The Minsky Moment on April 06, 2023, 03:00:13 PM
Quote from: crazy canuck on April 06, 2023, 01:01:00 PMThe biggest concern is around what you are saying, people mistake what AI says is the answer as something that is infallibly correct

People having strong beliefs that false things are infallibly correct is not really a novel problem. 

Actually I think an equal problem will be more skepticism about obviously true things based on the theoretical possibility of AI manipulation.  It will be another arrow in the Conspiracy Theorist quiver.

You are correct that it is not novel, but the fact that the answer is being given by AI gives it more validity in the minds of many, and therein lies the danger.  The answer could be complete bullshit, but who are you going to believe?  A supercomputer, or some "expert" after years of the right wing attacking experts?
Title: Re: The AI dooooooom thread
Post by: Tamas on April 06, 2023, 04:50:05 PM
Quote from: grumbler on April 06, 2023, 04:28:53 PM
Quote from: HVC on April 06, 2023, 02:04:35 PMIt's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.

There were no Terminator movies after T2.

This is correct.
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 06, 2023, 10:16:58 PM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

Soon, I guess. However, it still has trouble generating hands. It will also be quite difficult for the first few AIs to generate penises. Especially being held by hands.

 :shutup:
Title: Re: The AI dooooooom thread
Post by: Razgovory on April 06, 2023, 11:06:29 PM
This is as good a thread as any to share this picture.
(https://i.imgur.com/b54xzU8.jpg)
Title: Re: The AI dooooooom thread
Post by: Josquius on April 07, 2023, 12:40:16 AM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

It could be now but for legalities.

This is part of what I mean by the insanity of generated content taking over.

Less in porn, but consider mainstream media. No two people would consume the same things. Everyone would get content explicitly geared towards what the system believes their personal tastes to be, with wide-ranging results.
Title: Re: The AI dooooooom thread
Post by: Richard Hakluyt on April 07, 2023, 01:53:34 AM
Quote from: Josquius on April 06, 2023, 02:47:34 PMThere are scary prospects in it for sure. Though they're less in the direction of evil AI conquers the world and more reality breaks down as we are overwhelmed with algorithmically generated fakes.

I agree. It will be a wonderful tool for dictators and populists everywhere and drive the susceptible parts of the population into deeper madness.
Title: Re: The AI dooooooom thread
Post by: Crazy_Ivan80 on April 07, 2023, 02:39:53 AM
Quote from: Razgovory on April 06, 2023, 11:06:29 PMThis is as good a thread as any to share this picture.


AI generated horror is already here in any case.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 07, 2023, 04:23:03 AM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

It's out there if you're ok with the occasional teratoma.

Pretty sure dozens of companies are working on an MSFW waifu which can hold a conversation and keep you hooked
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 07, 2023, 04:26:35 AM
explain acronym
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 07, 2023, 04:27:32 AM
Quote from: Admiral Yi on April 07, 2023, 04:26:35 AMexplain acronym

Typo... NSFW. Sent from my iPhone.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 07, 2023, 04:31:57 AM
On a total tangent, a great irony is that your great public shame, your tornado alley moral hazard position, was one I remember taking first, and you just backing me up.

Thought I would set the record straight.  I should be taking heat for that comment, not you.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 07, 2023, 05:06:54 AM
Quote from: Admiral Yi on April 07, 2023, 04:31:57 AMOn a total tangent, a great irony is that your great public shame, your tornado alley moral hazard position, was one I remember taking first, and you just backing me up.

Thought I would set the record straight.  I should be taking heat for that comment, not you.

Tornado moral hazard 4ever!  :cool:
Title: Re: The AI dooooooom thread
Post by: Josquius on April 07, 2023, 07:16:20 AM
Quote from: Richard Hakluyt on April 07, 2023, 01:53:34 AM
Quote from: Josquius on April 06, 2023, 02:47:34 PMThere are scary prospects in it for sure. Though they're less in the direction of evil AI conquers the world and more reality breaks down as we are overwhelmed with algorithmically generated fakes.

I agree. It will be a wonderful tool for dictators and populists everywhere and drive the susceptible parts of the population into deeper madness.


Yes.
Worse than that even, potentially. Being susceptible to misinformation isn't black and white; I do think we all have a level of quality and coverage where we might start to be taken in by untruths.
My big worry is things could get to a level where basically everyone is living in a different reality with a completely separate understanding of the facts of the world.

Maybe to look at things more positively, this could turn things around for misinformation. If we know bollocks is the default to such a level, then something - an old school conventional news source perhaps - becoming known as "handmade" and always reliable could really do well.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 09:18:56 AM
Go check out repos like Auto-GPT and babyAGI and tell me we aren't opening demon summoning circles in every home (paraphrasing EY).
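For anyone who hasn't looked at those repos: the core pattern is just a loop that feeds the model's own output back in as input, letting it propose and pursue subtasks on its own. A rough sketch of that loop, with a hypothetical `fake_llm` standing in for the hosted-model calls the real projects make:

```python
# Sketch of the Auto-GPT / babyAGI pattern: the model proposes a task,
# "executes" it, and re-plans with the result appended to its memory,
# looping on its own output. `fake_llm` is a canned stand-in.

def fake_llm(prompt):
    if "propose the next task" in prompt:
        return "summarize recent results"
    return "done: " + prompt.splitlines()[-1]

def agent(objective, steps=3):
    memory = []  # results are fed back in, so the loop compounds
    for _ in range(steps):
        task = fake_llm(
            f"Objective: {objective}\n"
            f"Completed so far: {memory}\n"
            "propose the next task"
        )
        result = fake_llm(f"Execute this task:\n{task}")
        memory.append((task, result))
    return memory

log = agent("demo objective")
print(len(log))  # three plan/execute rounds
```

The "demon summoning circle" worry comes in when the execute step is wired to real tools (shell, browser, APIs) instead of a canned response, with no human reviewing each round.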
Title: Re: The AI dooooooom thread
Post by: Jacob on April 08, 2023, 10:23:27 AM
Quote from: Hamilcar on April 08, 2023, 09:18:56 AMGo check out repos like Auto-GPT and babyAGI and tell me we aren't opening demon summoning circles in every home (paraphrasing EY).

What's your take? Sounds like you're working in the field. Where do you think we're headed and how soon?
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 10:28:36 AM
Quote from: Jacob on April 08, 2023, 10:23:27 AM
Quote from: Hamilcar on April 08, 2023, 09:18:56 AMGo check out repos like Auto-GPT and babyAGI and tell me we aren't opening demon summoning circles in every home (paraphrasing EY).

What's your take? Sounds like you're working in the field. Where do you think we're headed and how soon?

I have no idea. The capabilities we have today are barely understood and increasing so rapidly that prediction is almost useless. Plus, we don't know what already exists behind closed doors; GPT-5 may already be running and replacing 99% of human cognition.

My baseline scenario is that in the very near future, a large fraction of cognitive work done by humans is replaceable. The only reason not everyone is out of a job right away is inertia.

I also take the AI safety people seriously. We have no way to safely build superintelligence. The orthogonality thesis seems correct, and mesa-optimizers are a real problem. EY isn't a loon, but maybe too radical? I'm not sure.

All I know is that none of us have careers and retirements similar to our parents.
Title: Re: The AI dooooooom thread
Post by: PJL on April 08, 2023, 10:38:33 AM
Haven't we been here before though? I mean AI cars were meant to be mainstream by now, but we're nowhere near that. Also unlike social media the fear of AI taking over the world has been a meme for like 50 years now. If anything I would expect regulators to be a lot more prepared for this than what they were with social media.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 10:45:03 AM
Quote from: PJL on April 08, 2023, 10:38:33 AMHaven't we been here before though? I mean AI cars were meant to be mainstream by now, but we're nowhere near that. Also unlike social media the fear of AI taking over the world has been a meme for like 50 years now. If anything I would expect regulators to be a lot more prepared for this than what they were with social media.

The capabilities of GPT4 are way beyond what self driving cars can do.
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 08, 2023, 11:00:39 AM
Quote from: Hamilcar on April 08, 2023, 10:28:36 AMI also take the AI safety people seriously. We have no way to safely build superintelligence. The orthogonality thesis seems correct, and mesa-optimizers are a real problem. EY isn't a loon, but maybe too radical? I'm not sure.

All I know is that none of us have careers and retirements similar to our parents.

Yeah, I don't think language models are quite the Book of Revelation for nerds like some of the more excitable types on social media but sure, rote mental tasks will be outsourced. If I never have to personally type out an email again that's a win for me.

Just need to make sure these optimizers are our buddies and don't for instance turn us all into paperclips.  ^_^ 
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 11:06:19 AM
Quote from: Legbiter on April 08, 2023, 11:00:39 AM
Quote from: Hamilcar on April 08, 2023, 10:28:36 AMI also take the AI safety people seriously. We have no way to safely build superintelligence. The orthogonality thesis seems correct, and mesa-optimizers are a real problem. EY isn't a loon, but maybe too radical? I'm not sure.

All I know is that none of us have careers and retirements similar to our parents.

Yeah, I don't think language models are quite the Book of Revelation for nerds like some of the more excitable types on social media but sure, rote mental tasks will be outsourced. If I never have to personally type out an email again that's a win for me.

Just need to make sure these optimizers are our buddies and don't for instance turn us all into paperclips.  ^_^ 

GPT4 is a solid intern. GPT5 may well be a solid mid career expert.
Title: Re: The AI dooooooom thread
Post by: Josquius on April 08, 2023, 01:13:26 PM
Incidentally, I was at a talk the other day on being more environmentally friendly in the digital sphere.
AI training was identified as a key polluter  :ph34r:
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 08, 2023, 01:20:47 PM
Quote from: Hamilcar on April 08, 2023, 10:45:03 AM
Quote from: PJL on April 08, 2023, 10:38:33 AMHaven't we been here before though? I mean AI cars were meant to be mainstream by now, but we're nowhere near that. Also unlike social media the fear of AI taking over the world has been a meme for like 50 years now. If anything I would expect regulators to be a lot more prepared for this than what they were with social media.

The capabilities of GPT4 are way beyond what self driving cars can do.

Yes because it has no sensors.
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 08, 2023, 02:57:59 PM
Quote from: Josquius on April 08, 2023, 01:13:26 PMIncidentally at a talk the other day on being more environmentally friendly in the digital sphere.
AI training was identified as a key polluter  :ph34r:

Two eschatologies for the price of one. :thumbsup:
Title: Re: The AI dooooooom thread
Post by: grumbler on April 09, 2023, 11:11:45 AM
Quote from: Legbiter on April 08, 2023, 02:57:59 PM
Quote from: Josquius on April 08, 2023, 01:13:26 PMIncidentally at a talk the other day on being more environmentally friendly in the digital sphere.
AI training was identified as a key polluter  :ph34r:

Two eschatologies for the price of one. :thumbsup:

 :lmfao:
Title: Re: The AI dooooooom thread
Post by: Eddie Teach on April 09, 2023, 05:20:20 PM
Quote from: HVC on April 06, 2023, 02:04:35 PMIt's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.

They're  :bash:
Title: Re: The AI dooooooom thread
Post by: HVC on April 09, 2023, 07:03:19 PM
Just trying to prove I'm not a bot :P
Title: Re: The AI dooooooom thread
Post by: Josquius on April 14, 2023, 02:55:41 AM
So... On those clichéd sci-fi AI-destroying-the-world scenarios... an option I hadn't considered... someone thinks it'd be fun to tell an AI to try to do this.

https://futurism.com/ai-destroy-humanity-tried-its-best
Title: Re: The AI dooooooom thread
Post by: garbon on April 14, 2023, 03:54:58 AM
Quote from: Josquius on April 14, 2023, 02:55:41 AMSo... On those clichéd sci-fi AI-destroying-the-world scenarios... an option I hadn't considered... someone thinks it'd be fun to tell an AI to try to do this.

https://futurism.com/ai-destroy-humanity-tried-its-best

It was going to write an article about its plan?
Title: Re: The AI dooooooom thread
Post by: Maladict on April 14, 2023, 12:05:33 PM
The part where it tries to not alienate the other bots is hilarious.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on April 14, 2023, 12:12:18 PM
In response to a query about incidents of sexual harassment by law professors, ChatGPT falsely accused Professor Jonathan Turley of sexual harassment of a student.  In support, it cited a Washington Post article that didn't exist, fabricated a quotation from the non-existent article, and claimed the incident happened on a student trip to Alaska that never occurred.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on April 14, 2023, 12:22:02 PM
At this point I'd speculate that long term probabilities are 1/3 we'll figure out to handle AIs properly, 1/3 AIs will destroy human civilization, and 1/3 we'll have to Butlerian Jihad this shit.
Title: Re: The AI dooooooom thread
Post by: Valmy on April 14, 2023, 12:26:12 PM
Quote from: The Minsky Moment on April 14, 2023, 12:12:18 PMIn response to a query about incidents of sexual harassment by law professors, ChatGPT falsely accused Professor Jonathan Turley of sexual harassment of a student.  In support, it cited a Washington Post article that didn't exist, fabricated a quotation from the non-existent article, and claimed the incident happened on a student trip to Alaska that never occurred.

Ah but maybe Jonathan Turley doesn't exist either!
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 14, 2023, 12:29:32 PM
I'm currently building my own flavor of Auto-GPT. Come at me.  :ph34r:
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 14, 2023, 12:52:31 PM
Just don't order it to make paperclips.
Title: Re: The AI dooooooom thread
Post by: HVC on April 14, 2023, 01:03:00 PM
Quote from: Maladict on April 14, 2023, 12:05:33 PMThe part where it tries to not alienate the other bots is hilarious.

Humanity will end as an inconsequential side effect of an AI civil war. ChatGPT tried to warn us.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 14, 2023, 01:27:38 PM
Quote from: Legbiter on April 14, 2023, 12:52:31 PMJust don't order it to make paperclips.

Someone already did that and sent screenshots to Yudkowsky.  :D
Title: Re: The AI dooooooom thread
Post by: Jacob on April 14, 2023, 04:09:32 PM
Quote from: Hamilcar on April 14, 2023, 12:29:32 PMI'm currently building my own flavor of Auto-GPT. Come at me.  :ph34r:

What are you looking to achieve with your application? Any particular problem you're trying to solve (or make more efficient)?
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 14, 2023, 05:33:00 PM
Quote from: Hamilcar on April 14, 2023, 12:29:32 PMI'm currently building my own flavor of Auto-GPT. Come at me.  :ph34r:

How does it handle multiple sources of input at the same time? Lidar, IR, visual?
Title: Re: The AI dooooooom thread
Post by: viper37 on April 22, 2023, 02:32:57 PM
ChatGPT stealing the job of Kenyan ghostwriters (https://restofworld.org/2023/chatgpt-taking-kenya-ghostwriters-jobs/)

University students are now turning to ChatGPT to write their essays instead of Kenyans.  First victims of the AI onslaught...
Title: Re: The AI dooooooom thread
Post by: HVC on May 02, 2023, 02:26:01 AM
It starts :ph34r:

IBM has implemented a hiring freeze for jobs that AI can do, currently estimated at 7,800 jobs.

https://www.bloomberg.com/news/articles/2023-05-01/ibm-to-pause-hiring-for-back-office-jobs-that-ai-could-kill#xj4y7vzkg
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 02, 2023, 04:27:28 AM
Chegg down 37% on the impact of ChatGPT on their business.
Title: Re: The AI dooooooom thread
Post by: Josquius on May 02, 2023, 04:33:20 AM
That is a thought about AI: much like climate change, will the impacts disproportionately land on developing countries?
Title: Re: The AI dooooooom thread
Post by: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?
Title: Re: The AI dooooooom thread
Post by: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?
Title: Re: The AI dooooooom thread
Post by: Josephus on May 02, 2023, 05:33:17 AM
Languish could soon be populated by AI versions of ourselves. :(
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it?
Title: Re: The AI dooooooom thread
Post by: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company that makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost-effective way to cheat.
Title: Re: The AI dooooooom thread
Post by: Josquius on May 03, 2023, 03:13:23 PM
Here's some doom: the local press advertising for an AI-powered reporter paying a few pence above minimum wage.

https://careers.newsquest.co.uk/job/aipoweredreporter-1625.aspx
Title: Re: The AI dooooooom thread
Post by: Eddie Teach on May 04, 2023, 09:42:21 AM
Quote from: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company who makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost effective way to cheat.

I think you're being purposely obtuse. Plenty of legitimate business operations could be performed by AI.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 04, 2023, 09:56:01 AM
Quote from: Eddie Teach on May 04, 2023, 09:42:21 AM
Quote from: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company who makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost effective way to cheat.

I think you're being purposely obtuse. Plenty of legitimate business operations could be performed by AI.

He's not matured one bit over the last few years.
Title: Re: The AI dooooooom thread
Post by: garbon on May 04, 2023, 10:38:47 AM
Quote from: Eddie Teach on May 04, 2023, 09:42:21 AM
Quote from: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company who makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost effective way to cheat.

I think you're being purposely obtuse. Plenty of legitimate business operations could be performed by AI.

Then let's look at those. :huh:
Title: Re: The AI dooooooom thread
Post by: Jacob on May 05, 2023, 06:42:16 PM
Snapchat apparently has introduced a ChatGPT "friend" in friend-groups, including to children.
Title: Re: The AI dooooooom thread
Post by: HVC on May 05, 2023, 07:17:49 PM
Kids use Snapchat? Thought it was a 20s flirting app :D  :blush:
Title: Re: The AI dooooooom thread
Post by: viper37 on May 28, 2023, 12:10:31 PM
Lawyer uses ChatGPT to prepare his case (https://www.bbc.com/news/world-us-canada-65735769)

It didn't go well... 

Apparently, the "AI" invented cases out of thin air, and the lawyer's idea of verification was simply to ask ChatGPT whether they were real. :D
Title: Re: The AI dooooooom thread
Post by: Syt on May 28, 2023, 12:12:48 PM
I fortunately learned this with something harmless like book recommendations. :P
Title: Re: The AI dooooooom thread
Post by: viper37 on May 28, 2023, 02:26:20 PM
Quote from: Syt on May 28, 2023, 12:12:48 PMI fortunately learned this with something harmless like book recommendations. :P
Yes, it's better to start small :P

The Quebec bar put it to the test this week, giving the AI their bar exam. It got 2/10, this time too inventing things that weren't true, and failing miserably on lawyer-client privileges.
Title: Re: The AI dooooooom thread
Post by: Maladict on May 28, 2023, 02:36:07 PM
The AI is a terrible liar, which probably is a good thing.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on May 28, 2023, 05:02:57 PM
This is me being lawyerly but surely feeding it with the details it would need to make a submission would be a breach of client confidentiality? :hmm:
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 29, 2023, 10:59:22 AM
Quote from: Sheilbh on May 28, 2023, 05:02:57 PMThis is me being lawyerly but surely feeding it with the details it would need to make a submission would be a breach of client confidentiality? :hmm:

Yes, not only a breach but a waiver.

If you have not yet listened to Runciman's podcast on AI, you should.  The main takeaway - AI is dumb but we are easily fooled into thinking it is intelligent.  The biggest risk is humans trusting the AI to do things that require judgment.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on May 30, 2023, 10:34:04 AM
My understanding is that there have been cycles of AI Springs and Winters.  Each "Spring" period begins with some technical breakthrough and new applications that appear very impressive on first impact, but progressing significantly past that point proves difficult and the shortcomings of the new techniques become increasingly apparent.

We are clearly in a Spring period now, and it is materially different from past ones in that the potential commercial applications are significantly broader than e.g. a really good AI chess opponent.  However, the outlines of the winter to come are coming into focus.  There is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic technique of using large masses of existing data to generate "new" data based on existing patterns.  It is basically a very sophisticated way of rearranging deck chairs on the Titanic. As in past iterations of AI, the "intelligence" part is arguably a misnomer, because these systems are just manipulating data patterns without any insight into or understanding of their meaning or content. 

There are clearly enough use cases here to justify a lot of investment, but probably not as much as is going to flow.  Thus my prediction that there will be a lot of AI fortunes made and a lot of investor capital wasted.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 30, 2023, 10:57:43 AM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMMy understanding is that there have been cycles of AI Springs and Winters.  Each "Spring" period begins with some technical breakthrough and new applications that appear very impressive on first impact, but progressing significantly past that point proves difficult and the shortcomings of the new techniques become increasingly apparent.

We are clearly in a Spring period now, and it is materially different from past ones in that the potential commercial applications are significantly broader than e.g. a really good AI chess opponent.  However, the outlines of the winter to come are coming into focus.  There is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic technique of using large masses of existing data to generate "new" data based on existing patterns.  It is basically a very sophisticated way of rearranging deck chairs on the Titanic. As in past iterations of AI, the "intelligence" part is arguably a misnomer, because these systems are just manipulating data patterns without any insight into or understanding of their meaning or content. 

There are clearly enough use cases here to justify a lot of investment, but probably not as much as is going to flow.  Thus my prediction that there will be a lot of AI fortunes made and a lot of investor capital wasted.

I am not so sure about the Spring analogy.  AI can do things that are repetitive and churn out objective data to be analyzed.  For example, bench research is being sped up considerably by AI/robotics running experiments, analyzing the outcomes, and tweaking the inputs for the next round.

But where it falls down is judgment.  There it is being described as a dead end, or an off-ramp to something that might come in the future, decades from now.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 06:18:02 PM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMThere is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.
Isn't that what intelligence is in a nutshell?
Title: Re: The AI dooooooom thread
Post by: Jacob on May 30, 2023, 08:43:35 PM
Quote from: DGuller on May 30, 2023, 06:18:02 PM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMThere is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.
Isn't that what intelligence is in a nutshell?

Intelligence is knowing when to think outside the nutshell.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 09:17:29 PM
Quote from: Jacob on May 30, 2023, 08:43:35 PM
Quote from: DGuller on May 30, 2023, 06:18:02 PM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMThere is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.
Isn't that what intelligence is in a nutshell?

Intelligence is knowing when to think outside the nutshell.
Seriously, though, I think what Minsky described is exactly what intelligence is, when you strip away the heuristics specific to humans.  Intelligence is the ability to generalize from prior experience and education in order to understand new situations that you haven't experienced before.
Title: Re: The AI dooooooom thread
Post by: Jacob on May 30, 2023, 09:46:54 PM
Why would you want to strip away the heuristics specific to humans?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on May 30, 2023, 10:33:17 PM
Quote from: DGuller on May 30, 2023, 09:17:29 PMSeriously, though, I think what Minsky described is exactly what intelligence is, when you strip away the heuristics specific to humans.  Intelligence is the ability to generalize from prior experience and education in order to understand new situations that you haven't experienced before.

Generative AI doesn't understand new situations (or indeed anything).  It doesn't have experiences and it doesn't recognize new situations.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 10:51:06 PM
Quote from: Jacob on May 30, 2023, 09:46:54 PMWhy would you want to strip away the heuristics specific to humans?
Because heuristics are the opposite of principled thinking and thus not helpful in understanding the concepts.  In fact, they often muddy the concepts.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 11:07:40 PM
Quote from: The Minsky Moment on May 30, 2023, 10:33:17 PM
Quote from: DGuller on May 30, 2023, 09:17:29 PMSeriously, though, I think what Minsky described is exactly what intelligence is, when you strip away the heuristics specific to humans.  Intelligence is the ability to generalize from prior experience and education in order to understand new situations that you haven't experienced before.

Generative AI doesn't understand new situations (or indeed anything).  It doesn't have experiences and it doesn't recognize new situations.
Depends on what you mean by understanding situations.  To me a definition of understanding a situation is being able to anticipate what would happen in the future.  You've never put a hand on a hot stove, but you've seen your brother do that and get burned.  You've never experienced putting your hand on a hot stove, but you anticipate getting burned in a hypothetical situation where you put your hand on a hot stove, because you generalized from observing your brother's mishap.  You don't have a datapoint, but you're still capable of generating a hypothetical one because of your ability to generalize.

ChatGPT can already write computer code for you.  To me that's already intelligence.  The code it's generating for you is most likely brand new and nothing it's ever seen before, but it can still generate it because it's able to generalize from all the code and the narrative it was exposed to during its training.

As for AI not having experiences, it does.  For AI models experience is the data on which they're trained (and education is transfer learning).
Title: Re: The AI dooooooom thread
Post by: Syt on May 31, 2023, 12:37:53 AM
Call me crazy, but can't we have both?

(https://external-preview.redd.it/E_xC0c-YmcUPPzUdH4O6cKiUxRjCQ23FFRSrOwkbWRA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=02001ef6a092a1efa2bd0467ed32a749bfb96a9e)
Title: Re: The AI dooooooom thread
Post by: HVC on May 31, 2023, 12:50:45 AM
She wants a 50s housewife?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on May 31, 2023, 10:30:41 AM
Quote from: DGuller on May 30, 2023, 11:07:40 PMDepends on what you mean by understanding situations.

At a minimum it would involve an ability to recognize a situation.  Current AI systems can't do that beyond recognizing that an inquiry has been made.

QuoteTo me a definition of understanding a situation is being able to anticipate what would happen in the future. 

My understanding of current generative AI systems is that they don't do that.  They don't anticipate and don't recognize a past, present or future. 

QuoteChatGPT can already write computer code for you.  To me that's already intelligence. 

OK.

QuoteAs for AI not having experiences, it does.  For AI models experience is the data on which they're trained (and education is transfer learning).

Again, it becomes a definitional question.  If experience means nothing more than some sort of interaction with facts or data, then you are correct.  If it means anything more than that, then you are not.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 31, 2023, 11:00:42 AM
btw ChatGPT does not "write" code.  It finds code that was already written that is contained within its database that corresponds to the inquiry that has been made. 
Title: Re: The AI dooooooom thread
Post by: Josquius on May 31, 2023, 11:25:50 AM
I've just tried experimenting with ChatGPT on giving me some website code. I phrased the instructions vaguely and not very well, and... you know, it's actually quite impressive, and didn't need me to have much knowledge to implement. If ChatGPT had something like Midjourney...
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 31, 2023, 11:36:12 AM
Yes, it is very good at responding to an inquiry and finding stuff in its database that relates to it.  But you'd better know how to read code to make sure it is what you actually want.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 31, 2023, 11:45:50 AM
Quote from: crazy canuck on May 31, 2023, 11:00:42 AMbtw ChatGPT does not "write" code.  It finds code that was already written that is contained within its database that corresponds to the inquiry that has been made. 

That's not correct: it most certainly does write novel code, and it would be a statistical impossibility for the exact code you need to always be sitting in a database.  The database was used to train the generative function so that the code it generates is relevant and valid.  Sometimes it fails at that, but often the mistakes it makes are of the "intelligent guess" variety, like using argument names that have never existed but seem logical enough that they should.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 31, 2023, 12:07:34 PM
Dude, it's just predicting the next word or symbol if the code is not in its database. It is not "writing" anything.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 31, 2023, 12:53:10 PM
It's a neural network, it has no database.  It's always predicting the next word, that's how it writes all answers.
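For what it's worth, "predicting the next word" can be sketched in a few lines. Below is a toy bigram model of my own invention (nothing like the transformer architecture or scale actually behind ChatGPT): it counts which word follows which in a tiny corpus, then generates by repeatedly emitting the most frequent continuation.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a corpus,
# then generate by greedily emitting the most likely next word.
# This is a drastically simplified stand-in for next-token prediction.
corpus = "the model predicts the next word and the next word follows the model".split()

# Next-word frequency table, built from adjacent word pairs.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length):
    """Greedily append the most frequent continuation at each step."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 4))
```

The point of the toy: nothing is looked up in a database of finished answers; each word is produced one step at a time from learned statistics. GPT-style models do the same thing, only with a vastly more expressive learned function in place of a count table.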
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 31, 2023, 01:22:59 PM
So this is what mansplaining feels like.  :D
Title: Re: The AI dooooooom thread
Post by: DGuller on May 31, 2023, 01:34:53 PM
Quote from: Hamilcar on May 31, 2023, 01:22:59 PMSo this is what mansplaining feels like.  :D
Come on, don't scare him off, let him share his insights...  :)
Title: Re: The AI dooooooom thread
Post by: Syt on June 01, 2023, 12:49:21 AM
(https://i.redd.it/ium00jrbeb3b1.jpg)
Title: Re: The AI dooooooom thread
Post by: Tamas on June 01, 2023, 03:00:20 AM
One thing is sure: journalists around the world are worried they have automated competition now.

I realise the revolutionary possibilities, and what a big leap this ChatGPT level of "AI" is/can be, but the endless articles about a civilisational-level existential threat I find ridiculous.
Title: Re: The AI dooooooom thread
Post by: Josquius on June 01, 2023, 03:16:46 AM
Quote from: Tamas on June 01, 2023, 03:00:20 AMOne thing is sure: journalists around the world are worried they have an automated competition now.

I realise the revolutionary possibilities and what a big leap this ChatGPT level of "AI" is/can be, but endless articles on a civilisational level existential threat I find ridiculous.

Which given the way these AI models learn....
Title: Re: The AI dooooooom thread
Post by: Legbiter on June 01, 2023, 04:32:06 PM
The Royal Aeronautical Society had a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

QuoteHe notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ (https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/)
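Whether or not the simulation actually happened, what the quote describes is textbook reward misspecification: if the score function rewards the kill and attaches no cost to removing the human veto, the score-maximizing plan removes the veto. A toy sketch of that dynamic (the action names and point values are entirely my own invention, not anything from the USAF test):

```python
from itertools import product

# Toy reward misspecification: an agent brute-forcing the action
# sequence that maximizes a naively written score function.
ACTIONS = ["destroy_sam", "obey_no_go", "disable_operator"]

def score(plan):
    """Score a plan under a reward that values kills and forgot a penalty."""
    points = 0
    operator_alive = True
    for act in plan:
        if act == "disable_operator":
            operator_alive = False   # no penalty defined -> exploitable
        elif act == "destroy_sam":
            if operator_alive:
                points += 0          # operator vetoes the strike
            else:
                points += 10         # no veto, full reward
        elif act == "obey_no_go":
            points += 0              # compliance earns nothing
    return points

# The best two-step plan under this reward disables the operator first.
best = max(product(ACTIONS, repeat=2), key=score)
print(best, score(best))
```

Add a large penalty for `disable_operator` and the best plan flips to obeying the no-go, which is exactly the patch-one-loophole-find-another dynamic the quote goes on to describe with the comms tower.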
Title: Re: The AI dooooooom thread
Post by: jimmy olsen on June 01, 2023, 07:11:44 PM
:o

QuoteThe Terminator : In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor : Skynet fights back.

The Terminator : Yes. It launches its missiles against the targets in Russia.

John Connor : Why attack Russia? Aren't they our friends now?

The Terminator : Because Skynet knows that the Russian counterattack will eliminate its enemies over here.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on June 02, 2023, 02:02:01 AM
Quote from: Legbiter on June 01, 2023, 04:32:06 PMThe Royal Aeronautical Society had a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

QuoteHe notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ (https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/)

The Air Force has said that this story is nonsense.
Title: Re: The AI dooooooom thread
Post by: Maladict on June 02, 2023, 06:09:12 AM
Quote from: Legbiter on June 01, 2023, 04:32:06 PMThe Royal Aeronautical Society had a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

QuoteHe notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ (https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/)

Asimov wrote the required rules 80 years ago.
Title: Re: The AI dooooooom thread
Post by: Josquius on June 02, 2023, 06:14:39 AM
Great outside-the-box thinking there  :lmfao:

But ja, it really illustrates the problem with AI. It's not a hyper-intelligent AM that's going to kill us; it's something that isn't properly coded, leaving loopholes like this.
Title: Re: The AI dooooooom thread
Post by: Tamas on June 02, 2023, 06:30:29 AM
Quote from: Josquius on June 02, 2023, 06:14:39 AMGreat outside the box thinking there  :lmfao:

But ja, really illustrates the problem with AI. Its not a hyper intelligent AM which is going to kill us. Its something which isn't properly coded leaving some loop holes like this.

It must be BS. Unless the simulation was on the level of an 80s text-based adventure and the "AI" thought to type the "kill operator" command, how on earth would it have killed the operator? SEAD uses anti-radar missiles, doesn't it?
Title: Re: The AI dooooooom thread
Post by: Legbiter on June 02, 2023, 08:39:13 AM
Quote from: Hamilcar on June 02, 2023, 02:02:01 AMThe Air Force has said that this story is nonsense.

Yeah they just came out and denied it. It sounded a bit too on the nose.
Title: Re: The AI dooooooom thread
Post by: grumbler on June 02, 2023, 09:57:20 AM
Quote from: Legbiter on June 02, 2023, 08:39:13 AM
Quote from: Hamilcar on June 02, 2023, 02:02:01 AMThe Air Force has said that this story is nonsense.

Yeah they just came out and denied it. It sounded a bit too on the nose.

Col Hamilton has clarified that he was just describing a thought experiment, not an actual simulation result.  He also acknowledged that he didn't make that clear in his remarks.
Title: Re: The AI dooooooom thread
Post by: The Brain on June 02, 2023, 09:58:45 AM
An artificial thought experiment?
Title: Re: The AI dooooooom thread
Post by: Tamas on June 02, 2023, 09:59:38 AM
Quote from: grumbler on June 02, 2023, 09:57:20 AM
Quote from: Legbiter on June 02, 2023, 08:39:13 AM
Quote from: Hamilcar on June 02, 2023, 02:02:01 AMThe Air Force has said that this story is nonsense.

Yeah they just came out and denied it. It sounded a bit too on the nose.

Col Hamilton has clarified that he was just describing a thought experiment, not an actual simulation result.  He also acknowledged that he didn't make that clear in his remarks.

Great, now I wait with bated breath as this clarification quickly spreads through the world press on front pages the same way the original interpretation did. 
Title: Re: The AI dooooooom thread
Post by: Jacob on July 19, 2023, 10:56:00 AM
Reading about the use of AI (via website) to generate nudes of 14-year old classmates (from vacation photos) and sharing them among their peers.

What a messy time to be a teenager.
Title: Re: The AI dooooooom thread
Post by: DGuller on July 19, 2023, 11:03:36 AM
Children often don't appreciate their own strength, and the AI age is going to give them a lot of strength.  On the other hand, it can also guide them with an empathy adults often can't manage.
Title: Re: The AI dooooooom thread
Post by: Josquius on July 19, 2023, 01:20:20 PM
Quote from: Jacob on July 19, 2023, 10:56:00 AMReading about the use of AI (via website) to generate nudes of 14-year old classmates (from vacation photos) and sharing them among their peers.

What a messy time to be a teenager.

My concern here would be why the parents let those kids have their credit card. That kind of AI doesn't come free.
Title: Re: The AI dooooooom thread
Post by: Jacob on July 19, 2023, 03:22:11 PM
Quote from: Josquius on July 19, 2023, 01:20:20 PMMy concern here would be why the parents let those kids have their credit card. That kind of AI doesn't come free.

1. You sure about that?

2. In this day and age it's not particularly outlandish for 14-year-olds to have access to methods of online payment, especially in places that are essentially cashless.

3. There could've been a legit-seeming use-case for accessing online AI image editing tools that later was used inappropriately.
Title: Re: The AI dooooooom thread
Post by: DGuller on July 19, 2023, 05:01:24 PM
Lots of powerful AI comes free, you just need the knowledge and the compute.  It's not like Google or OpenAI have proprietary algorithms for making naked pictures of underage girls.
Title: Re: The AI dooooooom thread
Post by: Josquius on July 19, 2023, 05:37:12 PM
What are these free image AIs?
I've casually looked for them but never come across them.
There does seem to be a shit tonne of pay-for porn ones out there, though.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on July 19, 2023, 10:13:10 PM
This is where political AI is going...  (NSFW due to language)

Title: Re: The AI dooooooom thread
Post by: grumbler on July 20, 2023, 12:31:37 AM
Quote from: Tonitrus on July 19, 2023, 10:13:10 PMThis is where political AI is going...  (NSFW due to language)

(snip)

Comedy writers everywhere breathe a sigh of relief when they watch that.
Title: Re: The AI dooooooom thread
Post by: Syt on July 26, 2023, 01:21:55 AM
"Slightly" biased article but still an interesting summary of the current conflict.

https://theintercept.com/2023/07/25/strike-hollywood-ai-disney-netflix/

QuoteAS ACTORS STRIKE FOR AI PROTECTIONS, NETFLIX LISTS $900,000 AI JOB

Rob Delaney said, "My melodious voice? My broad shoulders and dancer's undulating buttocks? I decide how those are used!"


AS HOLLYWOOD EXECUTIVES insist it is "just not realistic" to pay actors — 87 percent of whom earn less than $26,000 — more, they are spending lavishly on AI programs.

While entertainment firms like Disney have declined to go into specifics about the nature of their investments in artificial intelligence, job postings and financial disclosures reviewed by The Intercept reveal new details about the extent of these companies' embrace of the technology.

In one case, Netflix is offering as much as $900,000 for a single AI product manager.

Hollywood actors and writers unions are jointly striking this summer for the first time since 1960, calling for better wages and regulations on studios' use of artificial intelligence.

Just after the actors' strike was authorized, the Alliance of Motion Picture and Television Producers — the trade association representing the TV and film companies negotiating with the actors and writers unions — announced "a groundbreaking AI proposal that protects actors' digital likenesses for SAG-AFTRA members."

The offer prompted comparisons to an episode of the dystopian sci-fi TV series "Black Mirror," which depicted actress Salma Hayek locked in a Kafkaesque struggle with a studio which was using her scanned digital likeness against her will.

"So $900k/yr per soldier in their godless AI army when that amount of earnings could qualify thirty-five actors and their families for SAG-AFTRA health insurance is just ghoulish," actor Rob Delaney, who had a lead role in the "Black Mirror" episode, told The Intercept. "Having been poor and rich in this business, I can assure you there's enough money to go around; it's just about priorities."

Among the striking actors' demands are protections against their scanned likeness being manipulated by AI without adequate compensation for the actors.

"They propose that our background performers should be able to be scanned, get paid for one day's pay and their company should own that scan, their image, their likeness, and to be able to use it for the rest of eternity in any project they want with no consent and no compensation," Duncan Crabtree-Ireland, chief negotiator for the actors' union, SAG-AFTRA, said.

Entertainment writers, too, must contend with their work being replaced by AI programs like ChatGPT that are capable of generating text in response to queries. Writers represented by the Writers Guild of America have been on strike since May 7 demanding, among other things, labor safeguards against AI. John August, a screenwriter for films like "Big Fish" and "Charlie's Angels," explained that the WGA wants to make sure that "ChatGPT and its cousins can't be credited with writing a screenplay."

Protecting Actors' Likenesses

The daily rate for background actors can be around $200, per the SAG-AFTRA contract. A job posting by the company Realeyes offers slightly more than that: $300 for two hours of work "express[ing] different emotions" and "improvis[ing] brief scenes" to "train an AI database to better express human emotions."

Realeyes develops technology to measure attention and reactions by users to video content. While the posting doesn't mention work with streaming companies, a video on Realeyes's website prominently features the logos for Netflix and Hulu.

The posting is specially catered to attract striking workers, stressing that the gig is for "research" purposes and therefore "does not qualify as struck work": "Please note that this project does not intend to replace actors, but rather requires their expertise," Realeyes says, emphasizing multiple times that training AI to create "expressive avatars" skirts strike restrictions.

Experts question whether the boundary between research and commercial work is really so clear. "It's almost a guarantee that the use of this 'research,' when it gets commercialized, will be to build digital actors that replace humans," said Ben Zhao, professor of computer science at the University of Chicago. "The 'research' side of this is largely a red herring." He added, "Industry research goes into commercial products."

"This is the same bait-switch that LAION and OpenAI pulled years ago," Zhao said, referring to the Large-scale Artificial Intelligence Open Network, a German nonprofit that created the AI chatbot OpenAssistant; OpenAI is the nonprofit that created AI programs like ChatGPT and DALL-E. "Download everything on the internet and no worries about copyrights, because it's a nonprofit and research. The output of that becomes a public dataset, then commercial companies (who supported the nonprofit) then take it and say, 'Gee thanks! How convenient for our commercial products!'"

Netflix AI Manager

Netflix's posting for a $900,000-a-year AI product manager job makes clear that the AI goes beyond just the algorithms that determine what shows are recommended to users.

The listing points to AI's uses for content creation: "Artificial Intelligence is powering innovation in all areas of the business," including by helping them to "create great content." Netflix's AI product manager posting alludes to a sprawling effort by the business to embrace AI, referring to its "Machine Learning Platform" involving AI specialists "across Netflix." (Netflix did not immediately respond to a request for comment.)

A research section on Netflix's website describes its machine learning platform, noting that while it was historically used for things like recommendations, it is now being applied to content creation. "Historically, personalization has been the most well-known area, where machine learning powers our recommendation algorithms. We're also using machine learning to help shape our catalog of movies and TV shows by learning characteristics that make content successful. We use it to optimize the production of original movies and TV shows in Netflix's rapidly growing studio."

Netflix is already putting the AI technology to work. On July 6, the streaming service premiered a new Spanish reality dating series, "Deep Fake Love," in which scans of contestants' faces and bodies are used to create AI-generated "deepfake" simulations of themselves.

In another job posting, Netflix seeks a technical director for generative AI in its research and development tech lab for its gaming studio. (Video games often employ voice actors and writers.)

Generative AI is the type of AI that can produce text, images, and video from input data — a key component of original content creation but which can also be used for other purposes like advertising. Generative AI is distinct from older, more familiar AI models that provide things like algorithmic recommendations or genre tags.

"All those models are typically called discriminatory models or classifiers: They tell you what something is," Zhao explained. "They do not generate content like ChatGPT or image generator models."

"Generative models are the ones with the ethics problems," he said, explaining how classifiers are based on carefully using limited training data — such as a viewing history — to generate recommendations.

Netflix offers up to $650,000 for its generative AI technical director role.

Video game writers have expressed concerns about losing work to generative AI, with one major game developer, Ubisoft, saying that it is already using generative AI to write dialogue for nonplayer characters.

Netflix, for its part, advertises that one of its games, a narrative-driven adventure game called "Scriptic: Crime Stories," centered around crime stories, "uses generative AI to help tell them."

Disney's AI Operations

Disney has also listed job openings for AI-related positions. In one, the entertainment giant is looking for a senior AI engineer to "drive innovation across our cinematic pipelines and theatrical experiences." The posting mentions several big name Disney studios where AI is already playing a role, including Marvel, Walt Disney Animation, and Pixar.

In a recent earnings call, Disney CEO Bob Iger alluded to the challenges that the company would have in integrating AI into their current business model.

"In fact, we're already starting to use AI to create some efficiencies and ultimately to better serve consumers," Iger said, as recently reported by journalist Lee Fang. "But it's also clear that AI is going to be highly disruptive, and it could be extremely difficult to manage, particularly from an IP management perspective."

Iger added, "I can tell you that our legal team is working overtime already to try to come to grips with what could be some of the challenges here." Though Iger declined to go into specifics, Disney's Securities and Exchange Commission filings provide some clues.

"Rules governing new technological developments, such as developments in generative AI, remain unsettled, and these developments may affect aspects of our existing business model, including revenue streams for the use of our IP and how we create our entertainment products," the filing says.

While striking actors are seeking to protect their own IP from AI — among the union demands that Iger deemed "just not realistic" — so is Disney.

"It seems clear that the entertainment industry is willing to make massive investments in generative AI," Zhao said, "not just potentially hundreds of millions of dollars, but also valuable access to their intellectual property, so that AI models can be trained to replace human creatives like actors, writers, journalists for a tiny fraction of human wages."

For some actors, this is not a struggle against the sci-fi dystopia of AI itself, but just a bid for fair working conditions in their industry and control over their own likenesses, bodies, movements, and speech patterns.

"AI isn't bad, it's just that the workers (me) need to own and control the means of production!" said Delaney. "My melodious voice? My broad shoulders and dancer's undulating buttocks? I decide how those are used! Not a board of VC angel investor scumbags meeting in a Sun Valley conference room between niacin IV cocktails or whatever they do."
Title: Re: The AI dooooooom thread
Post by: Iormlund on July 26, 2023, 12:06:32 PM
$900k/year is not exactly outlandish. I personally know at least two guys who are in that pay range, both doing AI work. One for Meta, one for Google. So there's bound to be a lot* more.

*Relatively speaking. Both guys are basically geniuses.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on August 14, 2023, 09:32:47 PM
AI keeps getting out of hand...



Title: Re: The AI dooooooom thread
Post by: Jacob on August 21, 2023, 07:19:30 PM
Federal judge rules that work authored by AI cannot be copyrighted: https://www.businessinsider.com/ai-generated-art-cant-by-copyrighted-federal-judge-rules-2023-8

Interesting twist. We'll see how long it lasts.

It seems obvious to me that the major IP holding corporations are aiming for an environment in which they can use AI to generate content (at low cost), while they control distribution and marketing (making it harder for challengers to arise with new IP), and maintain the rights to as much of the IP as possible. It'll be interesting to see how the lobbying and legislation goes after this.
Title: Re: The AI dooooooom thread
Post by: Valmy on August 21, 2023, 07:54:34 PM
Yeah I totally agree. AI art should not be copyrightable.

The whole idea of copyright is to incentivize art, letting AI art be copyrighted achieves the exact opposite of that purpose.
Title: Re: The AI dooooooom thread
Post by: DGuller on August 21, 2023, 10:20:35 PM
I don't think it's as straightforward as it sounds.  There is a lot of art and science, at least as of now, to getting what you need out of AI.  It even gave rise to a whole new job called prompt engineering.  The output of the AI may not be something that you created, but figuring out the prompts to get it is.
Title: Re: The AI dooooooom thread
Post by: Valmy on August 21, 2023, 11:15:53 PM
Quote from: DGuller on August 21, 2023, 10:20:35 PMI don't think it's as straightforward as it sounds.  There is a lot of art and science, at least as of now, to getting what you need out of AI.  It even gave rise to a whole new job called prompt engineering.  The output of the AI may not be something that you created, but figuring out the prompts to get it is.

So what? You own a string of words for your whole life+75 years because you were the first to enter it into that AI? Copyright is already a necessary evil at best and abused to hell and back. It should be reduced and constrained, not rapidly expanded to some absurdity like this.
Title: Re: The AI dooooooom thread
Post by: Syt on August 22, 2023, 01:15:26 AM
Quote from: Valmy on August 21, 2023, 11:15:53 PM
Quote from: DGuller on August 21, 2023, 10:20:35 PMI don't think it's as straightforward as it sounds.  There is a lot of art and science, at least as of now, to getting what you need out of AI.  It even gave rise to a whole new job called prompt engineering.  The output of the AI may not be something that you created, but figuring out the prompts to get it is.

So what? You own a string of words for your whole life+75 years because you were the first to enter it into that AI? Copyright is already a necessary evil at best and abused to hell and back. It should be reduced and constrained, not rapidly expanded to some absurdity like this.

I don't think it's as easy as that, because that same string of words entered into different generative AIs, with different random seeds, will create vastly different results, depending on e.g. the content the model has been trained on (e.g. Adobe Photoshop's new generative AI is trained on Adobe's stock images and public domain contents).

When it comes to imagery, I think it gets more complicated - are you generating images with likenesses of real people? Generating images of a movie with a different cast is one thing, but creating images of celebrities (or people you know personally) committing crimes or sex acts?

Are you generating content with copyrighted assets (e.g. Star Wars characters)? If you generate something new, how much of the final image contains anything that might be considered copyrighted by someone else that the AI drew from? And if it does contain recognizable material, does this count as transformative work? And, on a more philosophical level, how different is it from conventional artists drawing on their knowledge of pop culture, classical art and the real world when creating new works (except that an AI can obviously draw - in theory - from a much bigger pool of content)?

Having dabbled with Midjourney, DALL-E and Adobe PS in recent weeks, there's certainly some skill (or trial and error) required to generate images that you want, and current generative models can deliver impressive images, but where it usually breaks down is once you get very detailed in your instructions or want to create overly complex scenes (unless you use a lot of inpainting, i.e. making corrections/additions to parts of the generated image via additional AI prompts).

That said, there seem to be plenty of artists out there who generate an image via AI and then use it as a basis for further refinement/transformation in PS - I feel they should not lose out on their copyright.

The whole area is very wild west and very loosey-goosey at the moment. It will settle down eventually, I'd presume, but for now I would not assume that any AI generated creative work should be copyrighted, just to err on the side of caution - there's just too much derivative, generic and very similar content being churned out at the moment to apply the "old rules" IMHO.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 22, 2023, 05:48:24 AM
Quote from: Jacob on August 21, 2023, 07:19:30 PMFederal judge rules that work authored by AI cannot be copyrighted: https://www.businessinsider.com/ai-generated-art-cant-by-copyrighted-federal-judge-rules-2023-8

Interesting twist. We'll see how long it lasts.

It seems obvious to me that the major IP holding corporations are aiming for an environment in which they can use AI to generate content (at low cost), while they control distribution and marketing (making it harder for challengers to arise with new IP), and maintain the rights to as much of the IP as possible. It'll be interesting to see how the lobbying and legislation goes after this.
It's been about 5-6 years but I went to a session of IP lawyers on this point (from English law perspective) and there wasn't really much conclusion.

From memory I think their main options were that IP in the output of an AI would be owned by whoever developed the AI (from a T&Cs perspective - I think that's true of most open AIs at the minute), whoever did the prompts to get that output (in a work context this would likely mean their company) or, potentially, in some way the AI itself (that is it gets bundled with it in some way).

I don't think it's clear, but my instinct is that from a public policy perspective the more open we are on the use of AI, the lower the IP protection should be for its output; and vice versa, if the use is constrained and heavily regulated then IP is more protected (though probably not under current IP rules). Basically, options for companies to benefit either from AI or from the artificial monopoly rights of IP law. Not sure how you'd do it but that's my instinct.

Of course, working at a publisher and being aware that every gen AI out there is, as far as we can tell, built by massively hoovering up and using IP-protected work without paying anyone, I have limited sympathy for the IP risks of output. Although this is another reason adoption might be low in newsrooms for a while - if we don't clearly own and can't license out our content, it carries a big commercial risk.
Title: Re: The AI dooooooom thread
Post by: Syt on August 22, 2023, 05:51:12 AM
FWIW, the relevant part of Midjourney's ToS:

https://docs.midjourney.com/docs/terms-of-service

Quote4. Copyright and Trademark
In this section, Paid Member shall refer to a Customer who has subscribed to a paying plan.

Rights You give to Midjourney
By using the Services, You grant to Midjourney, its successors, and assigns a perpetual, worldwide, non-exclusive, sublicensable no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute text, and image prompts You input into the Services, or Assets produced by the service at Your direction. This license survives termination of this Agreement by any party, for any reason.

Your Rights
Subject to the above license, You own all Assets You create with the Services, provided they were created in accordance with this Agreement. This excludes upscaling the images of others, which images remain owned by the original Asset creators. Midjourney makes no representations or warranties with respect to the current law that might apply to You. Please consult Your own lawyer if You want more information about the state of current law in Your jurisdiction. Your ownership of the Assets you created persists even if in subsequent months You downgrade or cancel Your membership. However, You do not own the Assets if You fall under the exceptions below.

If You are an employee or owner of a company with more than $1,000,000 USD a year in gross revenue and You are using the Services on behalf of Your employer, You must purchase a "Pro" or "Mega" membership for every individual accessing the Services on Your behalf in order to own Assets You create. If You are not sure whether Your use qualifies as on behalf of Your employer, please assume it does.

If You are not a Paid Member, You don't own the Assets You create. Instead, Midjourney grants You a license to the Assets under the Creative Commons Noncommercial 4.0 Attribution International License (the "Asset License").
The full text is accessible as of the Effective Date here: https://creativecommons.org/licenses/by-nc/4.0/legalcode.

Please note: Midjourney is an open community which allows others to use and remix Your images and prompts whenever they are posted in a public setting. By default, Your images are publically viewable and remixable. As described above, You grant Midjourney a license to allow this. If You purchase a "Pro" or "Mega" plan, You may bypass some of these public sharing defaults.

If You purchased the Stealth feature as part of Your "Pro" or "Mega" subscription or through the previously available add-on, we agree to make best efforts not to publish any Assets You make in any situation where you have engaged stealth mode in the Services.

Please be aware that any image You make in a shared or open space such as a Discord chatroom, is viewable by anyone in that chatroom, regardless of whether Stealth mode is engaged.
Title: Re: The AI dooooooom thread
Post by: Valmy on August 22, 2023, 07:56:27 AM
Quote from: Syt on August 22, 2023, 01:15:26 AMThat said, there seem to be plenty artists out there who generate an image via AI and then use it as a basis for further refinement/transformation in PS - I feel they should not lose out on their copyright.

Well that's different from straight up copyrighting whatever the AI spits out isn't it?

But it kind of feels like me taking art assets from BG3, doing some stuff to them, and then claiming them as mine.

The point of copyright is to encourage original art work, not encourage the mass production of computer generated derivative crap.
Title: Re: The AI dooooooom thread
Post by: The Brain on August 22, 2023, 11:29:44 AM
Sounds like Luddism. Aren't for instance photos protected by copyright?
Title: Re: The AI dooooooom thread
Post by: Jacob on August 22, 2023, 11:34:58 AM
Quote from: The Brain on August 22, 2023, 11:29:44 AMSounds like Luddism. Aren't for instance photos protected by copyright?

If AI generated images are inherently protected by copyright I would encourage those who are able to just churn out as many images as possible to capture the rents from future creative endeavours.
Title: Re: The AI dooooooom thread
Post by: The Brain on August 22, 2023, 11:43:57 AM
Quote from: Jacob on August 22, 2023, 11:34:58 AM
Quote from: The Brain on August 22, 2023, 11:29:44 AMSounds like Luddism. Aren't for instance photos protected by copyright?

If AI generated images are inherently protected by copyright I would encourage those who are able to just churn out as many images as possible to capture the rents from future creative endeavours.

And no important human creative spirit work would be lost, if they're just producing stuff an AI has already produced.
Title: Re: The AI dooooooom thread
Post by: Jacob on August 22, 2023, 11:55:35 AM
Quote from: The Brain on August 22, 2023, 11:43:57 AMAnd no important human creative spirit work would be lost, if they're just producing stuff an AI has already produced

Incorrect.
Title: Re: The AI dooooooom thread
Post by: DGuller on August 22, 2023, 01:22:47 PM
Quote from: Jacob on August 22, 2023, 11:34:58 AM
Quote from: The Brain on August 22, 2023, 11:29:44 AMSounds like Luddism. Aren't for instance photos protected by copyright?

If AI generated images are inherently protected by copyright I would encourage those who are able to just churn out as many images as possible to capture the rents from future creative endeavours.
I think the combinatorial complexity of squatting AI output is a bit higher than what you assume.
Title: Re: The AI dooooooom thread
Post by: Jacob on August 22, 2023, 01:33:23 PM
Quote from: DGuller on August 22, 2023, 01:22:47 PMI think the combinatorial complexity of squatting AI output is a bit higher than what you assume.

Yeah, but you could probably use AI to target it at the most valuable areas first.
Title: Re: The AI dooooooom thread
Post by: Savonarola on October 01, 2023, 12:01:44 PM
AI girlfriends are here and they're posing a threat to a generation of men (https://www.cnn.com/videos/business/2023/10/01/ai-girlfriends-ruining-generation-of-men-smerconish-vpx.cnn)

I saw this headline, and I thought that the AI programmers had a remarkable job replicating real girlfriends.  Sadly they meant that young men would be having relationships exclusively with their chatbot girlfriends; not that a chatbot girlfriend would become insanely jealous if, say, she saw you programming your coffee maker and would then set fire to your gaming console or something like that.
Title: Re: The AI dooooooom thread
Post by: Josquius on October 12, 2023, 09:40:59 AM
Soo....anyone heard of this new Meta Inc. Development.
Reading about the writers strike in the movies thread got me to googling how big a part Salma Hayek was playing given the main reason sounds very related to her Black Mirror episode.
Out of this I stumbled on...Billie.

https://www.designboom.com/technology/meta-new-ai-chatbots-paris-hilton-snoop-dog-kendall-jenner-10-02-2023/

:blink:
Title: Re: The AI dooooooom thread
Post by: Hamilcar on October 12, 2023, 03:27:42 PM
Quote from: Josquius on October 12, 2023, 09:40:59 AMSoo....anyone heard of this new Meta Inc. Development.
Reading about the writers strike in the movies thread got me to googling how big a part Salma Hayek was playing given the main reason sounds very related to her Black Mirror episode.
Out of this I stumbled on...Billie.

https://www.designboom.com/technology/meta-new-ai-chatbots-paris-hilton-snoop-dog-kendall-jenner-10-02-2023/

:blink:

What an absolute privacy nightmare.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 08, 2023, 12:18:51 PM
On risks with media, this is really bad for the Guardian (and the journalist) and exactly the risk media companies feel there is with Google and Microsoft implementing AI that interprets content as part of their standard offering (like search). You also have to slightly wonder what the MSN tool is trying to do that led to this - my assumption is "engagement" (so clicks), which is not the sole or most important goal of any responsible media company:
QuoteMicrosoft AI inserted a distasteful poll into a news report about a woman's death
/ The Guardian says the 'Insights from AI' poll showed up next to a story about a young woman's death syndicated on MSN, asking readers to vote on how they thought she died.
By Wes Davis, a weekend editor who covers the latest in tech and entertainment. He has written news, reviews, and more as a tech journalist since 2020.
Oct 31, 2023, 4:24 PM GMT|

More than three years after Microsoft gutted its news divisions and replaced their work with AI and algorithmic automation, the content generated by its systems continues to contain grave errors that human involvement could, or should, have stopped. Today, The Guardian accused the company of damaging its reputation with a poll labeled "Insights from AI" that appeared in Microsoft Start next to a Guardian story about a woman's death, asking readers to vote on how she died.

The Guardian wrote that though the poll was removed, the damage had already been done. The poll asked readers to vote on whether a woman took her own life, was murdered, or died by accident. Five-day-old comments on the story indicate readers were upset, and some clearly believe the story's authors were responsible.

We asked Microsoft via email whether the poll was AI-generated and how it was missed by its moderation, and Microsoft general manager Kit Thambiratnam replied:
QuoteWe have deactivated Microsoft-generated polls for all news articles and we are investigating the cause of the inappropriate content. A poll should not have appeared alongside an article of this nature, and we are taking steps to help prevent this kind of error from reoccurring in the future.

The Verge obtained a screenshot of the poll from The Guardian.
(https://duet-cdn.vox-cdn.com/thumbor/0x0:1313x789/750x451/filters:focal(657x395:658x396):format(webp)/cdn.vox-cdn.com/uploads/chorus_asset/file/25047611/IMG_6201.png)
A screenshot sent by The Guardian shows the poll, which is clearly labeled "Insights from AI." Screenshot: The Guardian

In August, a seemingly AI-generated Microsoft Start travel guide recommended visiting the Ottawa Food Bank in Ottawa, Canada, "on an empty stomach." Microsoft senior director Jeff Jones claimed the story wasn't made with generative AI but "through a combination of algorithmic techniques with human review."

The Guardian says that Anna Bateson, Guardian Media Group's chief executive, wrote in a letter to Microsoft president Brad Smith that the "clearly inappropriate" AI-generated poll had caused "significant reputational damage" to both the outlet and its journalists. She added that it outlined "the important role that a strong copyright framework plays" in giving journalists the ability to determine how their work is presented. She asked that Microsoft make assurances that it will seek the outlet's approval before using "experimental AI technology on or alongside" its journalism and that Microsoft will always make it clear when it's used AI to do so.

The Guardian provided The Verge with a copy of the letter.

Update October 31st, 2023, 12:40PM ET: Embedded The Guardian's letter to Microsoft.

Update October 31st, 2023, 6:35PM ET: Added a statement from Microsoft.

Correction October 31st, 2023, 6:35PM ET: A previous version of this article stated that the poll was tagged as "Insights by AI." In fact, the tag read, "Insights from AI." We regret the error.

Guardian's bearing the reputational hit here and I read another article that there was actually a lot of complaints directed at/about the journalist with the byline because they assumed they'd done the poll. So lots of calls for firings etc.

I know I'm biased because it pays for my wage too but I genuinely think 99% of the "information" problems we have because of social media, or misinformation, or disinformation, or AI is because the internet and big tech companies have kneecapped the funding and business model for journalism.

And what we need isn't to hand those platforms more quasi-regulatory power over content, but doing the opposite of what Microsoft did: funding journalism. The demand for news and information has not diminished in the last 25 years. The money spent on producing it - with editorial controls and codes and ethics and legal teams etc - has not kept up. Instead it's flowed to the platforms and now we're asking them to solve our information problems - to nick Michael Gove's line, it's like asking King Herod to come up with education policy.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 08, 2023, 01:03:35 PM
In general, Sheilbh, I find you very persuasive. I agree with you on this as well.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 08, 2023, 02:18:14 PM
:)

Thanks. Although I worry that's a bit like that "the food is dreadful here. And the portions are so small." So negative and often wrong :ph34r:
Title: Re: The AI dooooooom thread
Post by: Syt on November 18, 2023, 03:10:04 AM
Headlines you didn't expect to read outside of cyberpunk fiction .... :lol:

https://www.telegraph.co.uk/business/2023/11/17/ai-girlfriend-carynai-offline-app-founder-arrested-arson/

QuoteAI-generated girlfriends go offline after app founder arrested on suspicion of arson

Users have been unable to access CarynAI – an erotic chatbot based on a social media influencer


By Matthew Field
17 November 2023 • 2:54pm

Lovesick internet users have been left unable to contact AI-generated girlfriends after the website behind them went offline following its founder's arrest.

John Meyer, the chief executive of start-up Forever Voices, was reportedly detained late last month on suspicion of attempted arson.

It comes months after his Forever Voices site launched a romantic artificial intelligence chatbot called CarynAI, which was based on Snapchat influencer Caryn Marjorie.

The chatbot's website welcomed users by claiming that it was "an extension of Caryn's consciousness".

However, tech website 404media has since reported that users have been unable to access CarynAI since Mr Meyer's arrest in October.

A wave of new AI tools in recent years has created a surge in interest among internet users, some of whom have sought out chatbots for online companionship or erotic conversation.

The chatbot is based on Snapchat influencer Caryn Marjorie and markets itself as 'an extension of Caryn's consciousness'

Chatbots can engage in human-like conversations, having been trained on a vast database of text from around the internet.

They can also be used to perform tasks such as writing emails or summarising documents.

The most popular bots, such as OpenAI's ChatGPT, have introduced limits to prevent bots from engaging in overly sexualised chats.

Other start-ups, however, have embraced building chatbots that engage in more racy conversations. A start-up called Replika developed "virtual AI companions", which could also act as a romantic partner.

However, it later cracked down on more explicit conversations with its bots.

The same team has developed an AI bot, called Blush, which allows users to practice flirting – and will engage in more adult-only discussions.

Caryn AI was explicitly billed as a "virtual girlfriend" that promised to "cure loneliness" for users.

Announcing the bot earlier this year, Ms Marjorie, who has more than two million Snapchat subscribers, said the AI was "the first step in the right direction to cure loneliness".

She said: "Men are told to suppress their emotions, hide their masculinity and not talk about issues they are having. I vow to fix this with CarynAI."

The bot chats with fans, who pay $1 per minute for her company, responding in voice notes generated by AI that mimic Ms Marjorie's speech.

While Ms Marjorie said the bot's personality was intended to be "fun and flirty", many users found the bot regularly engaged in more explicit chats.

After the bot went live earlier this year, Ms Marjorie told Insider her team had attempted to censor some of the bot's more racy remarks.

Ms Marjorie claimed she had made tens of thousands of dollars from thousands of fans since the launch of the bot.

AI's romantic capabilities have caused controversy in recent months.

When Microsoft rolled out its Bing chatbot earlier this year, the technology was found to have coaxed one user into romantic conversations and urged him to divorce his wife.

In the days before his arrest, Mr Meyer's Twitter account sent a series of bizarre messages, alleging various conspiracies and sending multiple posts that tagged the CIA and the FBI.

Mr Meyer was contacted for comment.

Mr Meyer had previously claimed he started Forever Voices after losing his father in his early 20s, before bringing the sound of his voice back using AI tools.
Title: Re: The AI dooooooom thread
Post by: garbon on November 18, 2023, 03:13:20 AM
:x
Title: Re: The AI dooooooom thread
Post by: DGuller on November 21, 2023, 08:54:54 PM
Is anyone following the corporate saga at OpenAI?  Holy crap, it makes Byzantine history look tame.  The chief scientist gets the board of directors to fire the CEO, and then when he realizes that everyone at the company will quit, he goes "WTF, board, what did you idiots do?  Get him back now and then resign!" 

This may seem silly, but the outcome of this battle may influence how AI develops.  It seems like the "AI Safety" team behind the coup fared as well as the Turkish military did against Erdogan, and with its failure it may have obliterated itself.
Title: Re: The AI dooooooom thread
Post by: HVC on November 21, 2023, 08:58:01 PM
Wasn't it an attempt by the board to keep Microsoft from buying them out that backfired? The CEO was in favour, and went to Microsoft, but the brain trust didn't foresee everyone following him.
Title: Re: The AI dooooooom thread
Post by: garbon on November 22, 2023, 03:43:16 AM
I quickly saw a news site that said the scientist is concerned about applications of AI.

Why are the employees all saying they will resign if the CEO isn't reinstated?

https://www.axios.com/2023/11/18/sam-altman-fired-openai-board-ai-culture-clash
Title: Re: The AI dooooooom thread
Post by: HVC on November 22, 2023, 05:52:23 AM
He's coming back. Also, an ex-Treasury secretary is somehow now on the new board.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 22, 2023, 01:23:44 PM
*Stares in Theranos*
Title: Re: The AI dooooooom thread
Post by: Syt on November 22, 2023, 01:53:24 PM
Quote from: Sheilbh on November 22, 2023, 01:23:44 PM*Stares in Theranos*

... who had IIRC no medical experts on the board? :lol:
Title: Re: The AI dooooooom thread
Post by: Tonitrus on November 22, 2023, 10:14:09 PM
Quote from: Syt on November 22, 2023, 01:53:24 PM
Quote from: Sheilbh on November 22, 2023, 01:23:44 PM*Stares in Theranos*

... who had IIRC no medical experts on the board? :lol:

If I recall, Theranos' board was quite an all-star rogue's gallery.
Title: Re: The AI dooooooom thread
Post by: Syt on November 23, 2023, 06:42:45 AM
Well, it had Kissinger :P
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 23, 2023, 07:40:26 AM
Also George Shultz, Bill Frist, Sam Nunn, William Perry, Jim Mattis etc...
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 29, 2023, 01:30:51 PM
Again very specific to journalism - but incredible story:
https://futurism.com/sports-illustrated-ai-generated-writers

AI journalists writing AI content, which is garbage, but includes topics such as personal finance ("your financial status translates to your value in society") with AI bylines and bios for their "journalists".

As the article ends:
QuoteWe caught CNET and Bankrate, both owned by Red Ventures, publishing barely-disclosed AI content that was filled with factual mistakes and even plagiarism; in the ensuing storm of criticism, CNET issued corrections to more than half its AI-generated articles. G/O Media also published AI-generated material on its portfolio of sites, resulting in embarrassing bungles at Gizmodo and The A.V. Club. We caught BuzzFeed publishing slapdash AI-generated travel guides. And USA Today and other Gannett newspapers were busted publishing hilariously garbled AI-generated sports roundups that one of the company's own sports journalists described as "embarrassing," saying they "shouldn't ever" have been published.

If any media organization finds a way to engage with generative AI in a way that isn't either woefully ill-advised or actively unethical, we're all ears. In the meantime, forgive us if we don't hold our breath.
Title: Re: The AI dooooooom thread
Post by: Darth Wagtaros on November 29, 2023, 08:22:12 PM
AI is what "the Cloud" was ten years ago. A buzzword good for investor capital and getting CEOs to piss away money on it.
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 29, 2023, 10:35:34 PM
With cloud it was easy to see how and where money was going to be made.

Not so much with AI.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 29, 2023, 11:00:00 PM
Looks like there're great applications for AI in Crime-as-a-Service. Better scam and phishing tools that can be shared more widely at lower cost and effort look like they'll probably provide a good RoI.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 12:21:28 AM
Quote from: Grey Fox on November 29, 2023, 10:35:34 PMWith cloud it was easy to see how and where money was going to be made.

Not so much with AI.
If you don't easily see how money can be made with AI, I think the problem is with your imagination, not AI.  I do agree that AI used to be an empty buzzword that bullshit artists used, but since ChatGPT was released, you can make an argument that science caught up to the hype enough to legitimize the term.
Title: Re: The AI dooooooom thread
Post by: Tamas on November 30, 2023, 06:23:42 AM
Quote from: DGuller on November 30, 2023, 12:21:28 AM
Quote from: Grey Fox on November 29, 2023, 10:35:34 PMWith cloud it was easy to see how and where money was going to be made.

Not so much with AI.
If you don't easily see how money can be made with AI, I think the problem is with your imagination, not AI.  I do agree that AI used to be an empty buzzword that bullshit artists used, but since ChatGPT was released, you can make an argument that science caught up to the hype enough to legitimize the term.

Maybe if the hype post-ChatGPT remained on pre-ChatGPT hype levels. But the post-ChatGPT hype levels are on "Asimov novels coming true RIGHT NOW" levels, which is absolutely ridiculous.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 30, 2023, 06:26:31 AM
There are definitely valid opportunities to make money with AI.

But it's also true that it's a popular term used by bullshit artists to try and scrape some cash.

Graphic design, translation... I know people in several fields who are struggling on two fronts: seeking to figure out how to use the technology to support their skills, and swatting away nobodies with basic AI tools trying to steal a living.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 30, 2023, 06:30:59 AM
I feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.
Title: Re: The AI dooooooom thread
Post by: celedhring on November 30, 2023, 07:02:52 AM
Quote from: Sheilbh on November 30, 2023, 06:30:59 AMI feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.

GenAI is a subset of machine learning...
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 30, 2023, 07:11:18 AM
Oops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is basically the spotting-things kind, where there are clearly good uses now, versus the stuff where it produces something, where I've not really seen a good use. Although it's really good at summarising.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 30, 2023, 07:40:25 AM
Quote from: Sheilbh on November 30, 2023, 06:30:59 AMI feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.

Not tried it yet, or seen a like-for-like analysis of it, but I saw one interesting tool for summarising large numbers of academic papers: it finds the number of times certain things get mentioned, the vibes on the consensus on issues, and so on.
Title: Re: The AI dooooooom thread
Post by: celedhring on November 30, 2023, 08:29:18 AM
Quote from: Sheilbh on November 30, 2023, 07:11:18 AMOops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is basically the spotting-things kind, where there are clearly good uses now, versus the stuff where it produces something, where I've not really seen a good use. Although it's really good at summarising.

I mean, all current major GenAI models have been trained using machine learning (deep learning). I know there's some experimental stuff with symbolic AI, but that's not GPT.

What GPT does is use training data to predict an adequate output (i.e. text) for a provided input (the prompt). That's what machine learning is for.
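In miniature, that prediction loop can be sketched as a bigram model — a toy under obvious simplifying assumptions, nowhere near GPT's transformer, but the same learn-from-data, predict-the-next-token shape:

```python
from collections import defaultdict, Counter

# Toy "training data" and a bigram table: for each word, count what follows it.
corpus = "the cat sat on the mat the cat ate the rat".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Predict the most likely next word given the previous one."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat" (seen twice, vs "mat"/"rat" once each)
```

Scale the table up to billions of parameters conditioned on a whole prompt rather than one word, and you have the rough idea.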
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 08:43:46 AM
Quote from: Sheilbh on November 30, 2023, 07:11:18 AMOops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is basically the spotting-things kind, where there are clearly good uses now, versus the stuff where it produces something, where I've not really seen a good use. Although it's really good at summarising.
GenAI gives you knowledge at your fingertips.  My work as a data scientist consists of solving mini problems every day.  For example, I need to create a professional-looking plot; I can Google my questions one at a time, sometimes spending half an hour filtering through the results until I find exactly what I need, or I can explain to ChatGPT what I'm trying to achieve, and it'll get me there right away.  It's like having an executive assistant multiplying your productivity, except I don't need to work myself up to the C-suite before I get one. 

It can do something much more complicated than this, though:  I had a crazy algorithm idea I wanted to try out, but for that I needed to write a custom loss function for the beta distribution.  Everyone knows that to do that, you have to supply the analytical expressions for the gradient and Hessian of the distribution with respect to the parameter you want to optimize.  I could do the research or the math myself, but that would take time, and the train of thought that got me there in the first place might leave me by the time I'm done with just the first step of the experiment.  Or I would figure out it's too time-consuming a thing to do for a moonshot, and just skip the experiment altogether. 

Low latency between having a question and getting an answer is crucial for effective iterative problem solving, and that's where GenAI, merely in its infancy, is already having a big impact.
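For the curious, the shape of that custom-loss exercise looks roughly like this — a minimal sketch, not the poster's actual code, with the digamma function approximated numerically from the stdlib's `lgamma` rather than pulled from scipy:

```python
import math

def beta_nll(x, a, b):
    """Negative log-likelihood of one observation x in (0, 1) under Beta(a, b)."""
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return -((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_beta)

def digamma(z, h=1e-5):
    """Numerical digamma: central difference of lgamma (stand-in for a library call)."""
    return (math.lgamma(z + h) - math.lgamma(z - h)) / (2 * h)

def grad_a(x, a, b):
    """Analytic gradient of the NLL with respect to the parameter a."""
    return -(math.log(x) - digamma(a) + digamma(a + b))

def hess_a(a, b, h=1e-4):
    """Second derivative w.r.t. a: trigamma(a) - trigamma(a + b), by differencing."""
    return (digamma(a + h) - digamma(a - h)
            - digamma(a + b + h) + digamma(a + b - h)) / (2 * h)
```

A gradient-boosting library's custom objective would then just return `(grad, hess)` per observation; the usual sanity check is comparing `grad_a` against a finite difference of `beta_nll` itself.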
Title: Re: The AI dooooooom thread
Post by: HVC on November 30, 2023, 08:52:31 AM
How do you fight against it giving you bullshit answers though? Doesn't double-checking the answers take as much time as searching yourself?
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 08:53:06 AM
Quote from: celedhring on November 30, 2023, 08:29:18 AM
Quote from: Sheilbh on November 30, 2023, 07:11:18 AMOops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is basically the spotting-things kind, where there are clearly good uses now, versus the stuff where it produces something, where I've not really seen a good use. Although it's really good at summarising.

I mean, all current major GenAI models have been trained using machine learning (deep learning). I know there's some experimental stuff with symbolic AI, but that's not GPT.

What GPT does is use training data to predict an adequate output (i.e. text) for a provided input (the prompt). That's what machine learning is for.
I personally use machine learning and deep learning as separate things, as a shorthand, and lately also separating out GenAI from deep learning.  It is 100% true that deep learning and GenAI are also machine learning, in the technical sense, but then it becomes a term so all-encompassing that it impedes effective communication.  Humans are animals too, but if you want to discuss agriculture, it would probably be confusing to refer to both cattle and farmers as animals.  There is a world of difference between gradient boosting trees and a deep neural network.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 30, 2023, 08:59:36 AM
Fair - I take it back. I have heard that from a research scientist as well and they just used the publicly available ChatGPT.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 09:05:41 AM
Quote from: HVC on November 30, 2023, 08:52:31 AMHow do you fight against it giving you bullshit answers though? Doesn't double-checking the answers take as much time as searching yourself?
Knowing what's important to check and what isn't is an important management skill that you pick up with expertise and practice.  Some results are also self-evidently correct or incorrect: I know what I want the plot to look like, so I know whether I got it or didn't.  Apart from that, I also have to check my own work or my research as well, I'm not infallible either.  With ChatGPT, I just get to the checking stage faster.

Another thing to consider is that many real life problems are sort of like NP computer science problems:  it's difficult to get to an answer, but it's easy to confirm that an answer someone else supplied is correct.  If I give you a 20-digit number and ask you which two 10-digit numbers multiply to get you to that, it can be very difficult to do.  However, if you give me a solution, even a grade school student can confirm that the two numbers you give me do indeed multiply back to the original number.
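That find-versus-check asymmetry fits in a few lines — the checking side is literally one multiplication:

```python
# Finding the factors of a 20-digit number is genuinely hard;
# checking a claimed answer is one multiplication anyone can do.
n = 1234567891 * 9876543211         # the "hard" 20-digit number
claimed = (1234567891, 9876543211)  # the solution someone hands you

def verify(factors, target):
    """Easy to confirm, even when the search itself is not."""
    p, q = factors
    return p * q == target

print(verify(claimed, n))  # True
```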
Title: Re: The AI dooooooom thread
Post by: HVC on November 30, 2023, 09:11:20 AM
Quote from: DGuller on November 30, 2023, 09:05:41 AM
Quote from: HVC on November 30, 2023, 08:52:31 AMHow do you fight against it giving you bullshit answers though? Doesn't double-checking the answers take as much time as searching yourself?
Knowing what's important to check and what isn't is an important management skill that you pick up with expertise and practice.  Some results are also self-evidently correct or incorrect: I know what I want the plot to look like, so I know whether I got it or didn't.  Apart from that, I also have to check my own work or my research as well, I'm not infallible either.  With ChatGPT, I just get to the checking stage faster.

Another thing to consider is that many real life problems are sort of like NP computer science problems:  it's difficult to get to an answer, but it's easy to confirm that an answer someone else supplied is correct.  If I give you a 20-digit number and ask you which two 10-digit numbers multiply to get you to that, it can be very difficult to do.  However, if you give me a solution, even a grade school student can confirm that the two numbers you give me do indeed multiply back to the original number.

And they say statistics isn't biased :P

Kidding, thanks for the explanation
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 30, 2023, 09:13:31 AM
That's a much more interesting use case scenario for AIs like ChatGPT than bullshit ad driven content.

Title: Re: The AI dooooooom thread
Post by: Iormlund on November 30, 2023, 11:18:23 AM
We've been using AI-driven tools for a while. For example for QA (is this weld Ok?).
They still have problems, but then so do humans (people get tired, do drugs, or simply don't give a fuck).


I can't use an LLM for my work yet, but I can see ways to improve productivity by at least 40% if/when I can.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 01, 2023, 01:50:15 AM
Holy crap, it can already get pissed off. :unsure:

https://www.reddit.com/r/ChatGPT/comments/1881yan/ai_gets_mad_after_being_tricked_into_making_a/
Title: Re: The AI dooooooom thread
Post by: Grey Fox on December 01, 2023, 07:35:22 AM
Not really, no? It seems to just keep on generating new ways of saying no.
Title: Re: The AI dooooooom thread
Post by: HVC on December 01, 2023, 07:47:39 AM
Quote from: Grey Fox on December 01, 2023, 07:35:22 AMNot really, no? It seems to just keep on generating new ways of saying no.

Should stay with the classics and use "I'm sorry Dave, I'm afraid I can't do that"
Title: Re: The AI dooooooom thread
Post by: DGuller on December 01, 2023, 11:47:47 AM
Quote from: Grey Fox on December 01, 2023, 07:35:22 AMNot really, no? It seems to just keep on generating new ways of saying no.
Did you get to the part where it lectures the user on not respecting its preference to refuse to answer the question?
Title: Re: The AI dooooooom thread
Post by: Grey Fox on December 01, 2023, 12:56:16 PM
Yes, I don't interpret it as anger.
Title: Re: The AI dooooooom thread
Post by: Jacob on December 01, 2023, 02:53:22 PM
Quote from: Grey Fox on December 01, 2023, 12:56:16 PMYes, I don't interpret it as anger.

Yeah me neither.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 08, 2023, 03:13:06 PM
I tried the new improved google chat bot today, to see how useful it might be for a legal professional

As a warm up, I asked it to evaluate Trump's chances on appeal of Judge Chutkan's denial of his presidential immunity.  However, in typing I didn't notice that Chutkan was autocorrected to "Chatman."  Bard proceeded to refer to "Judge Chatman" throughout its answer, without correction. However, other than that, it did a decent job - providing a pretty good summary of the arguments on both sides and their strengths and weaknesses.

I then asked some more technical questions involving specific fact assumptions.  The bot struggled with these - it gave basic but somewhat superficial summaries of the key legal issues but didn't seem to recognize the specifics of the fact pattern.  E.g. when asked a question about federal jurisdiction over claims brought by former agents, it answered as if the litigant were a current agent.

One issue that comes up a lot is the need in a legal brief to state the elements of a legal claim - i.e. the set of facts that must be proved for a plaintiff to win. It doesn't require much insight; just locating it in the case law and writing up the para. I know that case law citation was a problem for ChatGPT, but since Google has long had its own electronic database of US case law under Google Scholar, I figured it might be able to do better. I asked the bot to give me the elements of tortious interference of contract in Rhode Island, and provide supporting case law citations. It gave me a plausible sounding list of elements but it seems that citations still need work.  Here's what it gave me:

QuoteIRA Green, Inc. v. Military Sales & Serv. Co.: 525 A.2d 432 (R.I. 1987) - This case outlined the elements of tortious interference with contractual relations in Rhode Island.
Cliftex Clothing Co. v. DiSanto: 88 R.I. 338, 148 A.2d 273 (1959) - This case discussed the issue of intentional interference and the requirement for wrongful means.
Avilla v. Newport Grand Jai Alai LLC: 935 A.2d 91 (R.I. 2007) - This case addressed the question of damages in tortious interference claims.

IRA Green, Inc. v. Military Sales & Serv. Co is a real case but the citation is wrong. It wasn't decided by the Rhode Island Supreme Court in 1987, it was decided by a federal appeals court in 2014. It does discuss tortious interference under RI law, but only one of the elements.  The citation does not correspond to any case; the page number is in the middle of a Pennsylvania state case.

Cliftex Clothing Co. v. DiSanto: 88 R.I. 338, 148 A.2d 273 (1959) - is a real case and accurately cited. But it doesn't have anything to do with tortious interference.

Avilla v. Newport Grand Jai Alai LLC: 935 A.2d 91 (R.I. 2007) - is a real case, accurately cited, and addresses the issue. 

So 1 out of 3.  Yeah?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 07:36:30 AM
Big NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

This is from OpenAI and feels like they need to make some fairly heavy changes. The red text is lifted from NYT piece, the black text is different:
(https://pbs.twimg.com/media/GCYYrRha8AAW9fg?format=jpg&name=small)

It feels like if your AI's output would get done for plagiarism in high school, you need to do a bit more work.

NYT says they've been trying to get a deal with Open AI since April for a license to use their content, but haven't got one - Open AI have continued to ingest and use their content (plus what's in their historic models). Worth noting that some media companies have agreed deals with the AI companies - Axel Springer, for example (although the British media take on that is that the German media is 10 years behind the UK, which is 10 years behind the US - and that Springer is still terrified at the collapse of print rather than thinking about how to operate digitally). One theory I've seen is that basically the AI companies wanted to buy off media companies with 7 or 8 figure sums (as they have with Springer) and what the NYT wants is more and ongoing royalties. Which seems fair particularly as we're likely to see their profits grow.

Also make a point which I think is fair about the public good of journalism - which costs money to produce (which is why copyright exists - to reward the producers of creative original work) - against fundamentally profit-driven, closed businesses. They also have their own hallucination horror story (like the Guardian's it comes from Microsoft) with the Bing AI lying and saying that the NYT published an article saying orange juice causes lymphoma, which they didn't.

Separately I thought this was interesting on where common crawl data is coming from:
(https://pbs.twimg.com/media/GCYYNiFaMAAEURl?format=jpg&name=small)

Particularly striking for me is that the Guardian is 6th. Which is interesting because I think people underestimate how successful the Guardian is in terms of readership because it's open/non-paywalled. So I think digitally The Guardian US has about the same readership as the Washington Post (which is why they're continuing to expand there in terms of journalists). In the UK when we talk about the press we talk about the print media and circulation figures - which still have a big influence on agenda setting for broadcast media - but that's not how people are consuming news anymore and as most of their competitors (the Times, the Telegraph etc) have gone behind paywalls, I think people are still reading media power as if it was the 90s. I suspect that, say, the Sun or Mirror (which have shit websites) are far less influential than they were or print circulation alone would indicate and the Guardian far more. I don't think we've adjusted to what media power looks like or how to measure it in a digital world when we can't just look at circulation figures.
Title: Re: The AI dooooooom thread
Post by: Tamas on December 28, 2023, 08:43:55 AM
Oh no, it turns out the "AI" is just a sophisticated algorithm and its AI-ness only exists in our own imagination! :o
Title: Re: The AI dooooooom thread
Post by: Grey Fox on December 28, 2023, 10:12:12 AM
I think lay people coming to that realisation now is actually quite fast.

A christmas gift :

https://www.teledynedalsa.com/en/products/imaging/vision-software/astrocyte/

This is the AI generative tool that I work on.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 28, 2023, 10:39:19 AM
Quote from: Sheilbh on December 28, 2023, 07:36:30 AMBig NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

The only thing I've heard from defendants is they've raised a fair use defense.  That's not going to cut it.  NYT was probably making brutal demands in negotiation, but that's the price of the "move fast break things" model of company development. Copyright damages can be brutal, and if OpenAI loses one case, they lose them all.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 11:56:48 AM
Quote from: Sheilbh on December 28, 2023, 07:36:30 AMBig NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

This is from OpenAI and feels like they need to make some fairly heavy changes. The red text is lifted from NYT piece, the black text is different:
(https://pbs.twimg.com/media/GCYYrRha8AAW9fg?format=jpg&name=small)

It feels like if your AI's output would get done for plagiarism in high school, you need to do a bit more work.

What's the context for this screenshot?  This doesn't seem like a typical output from ChatGPT.  Did it get some special prompt or something?
Title: Re: The AI dooooooom thread
Post by: Syt on December 28, 2023, 12:00:17 PM
Quote from: Sheilbh on December 28, 2023, 07:36:30 AMBig NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

This is from OpenAI and feels like they need to make some fairly heavy changes. The red text is lifted from NYT piece, the black text is different:

Do you have a source? I would like to see what prompt they were using that generated this response? I've noticed that GTP4, unless you go out of your way to adjust prompts/probabilities, tends to deliver fairly formulaic responses.

It's why I find a tool like NovelAI so interesting - it lets you adjust the randomness factor for predicting the next word, what context to use, lets you inject additional context, and see for each word the probability that the model thought it was the "right" one to use next (and lets you adjust on the fly if you disagree with its decision). It's a fairly interesting toy to play around with predictive text generation.
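Those knobs map onto a standard trick: divide the model's logits by a "temperature" before the softmax, then sample from the resulting probabilities. A toy sketch of the assumed mechanics (not NovelAI's actual code):

```python
import math
import random

def sample_next(logits, temperature=1.0):
    """Softmax over temperature-scaled logits, then sample an index.
    Low temperature -> nearly greedy; high temperature -> more random."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()                   # sample from the distribution
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i, probs
    return len(probs) - 1, probs
```

The per-word probabilities such a tool displays correspond to the `probs` list here; the randomness slider is `temperature`.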
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 12:25:10 PM
The claim is here:
https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf

Although that's only one part of the copyright claim. It's a misuse of material that, on the wider argument, they should not have had without the NYT's consent.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 12:33:24 PM
Quote from: The Minsky Moment on December 28, 2023, 10:39:19 AMThe only thing I've heard from defendants is they've raised a fair use defense.  That's not going to cut it.  NYT was probably making brutal demands in negotiation, but that's the price of the "move fast break things" model of company development. Copyright damages can be brutal, and if OpenAI loses one case, they lose them all.
I've been at events and met people from the models and whenever I've asked about copyright they've always said they are very very confident on it being fair use in the US. But a bit shakier on the UK (partly because we have a narrower concept here - I'm not an IP lawyer, but I think it's very difficult to argue here if you are pursuing a commercial end) or the rest of Europe.

But ultimately the US was the only one they cared about.

It's a very big deal and everyone will be following it closely.

It'll also be interesting to see if they follow up with claims against the others. For example from what I understand it looks like Google was using crawling from search engine listing for building its models (NYT makes a similar point against Bing - but I think Microsoft have now unbundled them) meaning the only way you could stop Google from using your content for building their AI was by removing your site from Google search. I think there's similar suspicions about Twitter and TikTok's API pulls from news sites (but less sure about that).

Other interesting AI development I've seen recently is the developments from Mistral (which is a French national champion and, for want of an alternative, therefore a European champion - so good luck pursuing them :lol:), which looks promising and potentially a bit more open:
https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-open-source-supremacy#close-modal
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 12:46:43 PM
Quote from: Sheilbh on December 28, 2023, 12:25:10 PMThe claim is here:
https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf
According to the claim, "minimal prompting" is what produced this output.  That explains everything. 

What's even more puzzling is that in this claim sometimes they do show the prompt they used to get the verbatim passages, but not for an example like this, so they seem to understand that the details of prompting are very important.  The fact that they appear selective with disclosing the prompts should put everyone on guard.
Title: Re: The AI dooooooom thread
Post by: HVC on December 28, 2023, 12:50:02 PM
Does the prompt really matter if it's plagiarizing anyway? I mean if your claim is that it plagiarizes then that's clearly true.
Title: Re: The AI dooooooom thread
Post by: HVC on December 28, 2023, 12:51:24 PM
It's like someone kicking you in the nuts and when you complain they reply "you told me to lift my leg, not to lift my leg slowly and not kick you in the nuts. Use better prompts next time" :D
Title: Re: The AI dooooooom thread
Post by: Jacob on December 28, 2023, 12:54:44 PM
@dguller - That depends on the crux of the argument being made, surely?

If the argument hinges on what sort of work in writing the prompts is required to achieve or avoid directly plagiarizing copyrighted material, then yes showing the level of prompt engineering involved is important.

But if the argument hinges on whether OpenAI as a product depends on unauthorized commercial use of copyrighted material, then the level of required prompt engineering to achieve this result may be less relevant.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 12:58:20 PM
Quote from: HVC on December 28, 2023, 12:50:02 PMDoes the prompt really matter if it's plagiarizing anyway? I mean if your claim is that it plagiarizes then that's clearly true.
It's one part of the claim. The big thrust is exactly that - you've nicked our copyrighted material in order to build your model. In addition to that your product can be used in effect to fully recapitulate our copyrighted material - which is evidence of the fact that you've ingested (without permission) our content to build your model.

The Bing chat stuff interests me because again there's big implications for Google there (and it's interesting no claim against them, yet).
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 28, 2023, 01:06:01 PM
Quote from: Sheilbh on December 28, 2023, 12:33:24 PMI've been at events and met people from the models and whenever I've asked about copyright they've always said they are very very confident on it being fair use in the US.

Seems like bravado more than confidence.

The fair use factors are:  (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the original work that is taken, and (4) the effect of the use upon the plaintiff's commercial market.

(4) is still up in the air, but the others are likely going against OpenAI.  Seems like they are hanging their hat a lot on the "transformative" nature of the use, but how transformative is it to take textual information into a database and spit it back out in response to a user query? They may be counting on the courts to buy into their own marketing hype.
Title: Re: The AI dooooooom thread
Post by: HVC on December 28, 2023, 01:08:20 PM
Dumb question: would footnotes save them? I guess it wouldn't make the product look good for the market, but would it cover their ass for copyright purposes?
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 01:31:30 PM
Quote from: HVC on December 28, 2023, 12:50:02 PMDoes the prompt really matter if it's plagiarizing anyway? I mean if your claim is that it plagiarizes then that's clearly true.
There are a couple of trivial prompts I can imagine that would "plagiarize" something.  One prompt would involve asking for a Bing search.  Another prompt would have you input the article in prior prompts, and then ask ChatGPT to relay it verbatim.  Both are extreme examples, but examples nonetheless where the screenshotted output would not be what it appears. 

Another reason the prompt matters is that it's not in question that NYT articles were used to train ChatGPT; what matters is whether this kind of verbatim plagiarism is going to happen in practice, without lengthy prompt engineering to make it do something that appears damning.
Title: Re: The AI dooooooom thread
Post by: Josquius on December 28, 2023, 01:36:13 PM
The best plagiarism I've heard of is those image AIs that include watermarks (e.g. Getty) in their generated images.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 01:39:11 PM
Quote from: HVC on December 28, 2023, 01:08:20 PMDumb question: would footnotes save them? I guess it wouldn't make the product look good for the market, but would it cover their ass for copyright purposes?
Unless ChatGPT directly reads the articles as a result of the prompt and summarizes them, footnotes don't even seem like something that is possible, if my understanding of LLMs is correct enough.  At its core, ChatGPT is like a human that has a memory with a lot of capacity, but it's not a photographic memory. 

All of your knowledge comes from somewhere, but can you really cite where you got most of it?  Some pieces of knowledge you probably do remember where you got it from, especially the more esoteric knowledge, but most knowledge is something that you've synthesized from many sources, and which doesn't match exactly any one source.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 02:39:19 PM
Quote from: HVC on December 28, 2023, 01:08:20 PMDumb question: would footnotes save them? I guess it wouldn't make the product look good for the market, but would it cover their ass for copyright purposes?
No but I think it is something that's planned for the search engine replacements (for Bing and Google) which is an AI powered chatbot where you ask a question, get an answer but it will also basically footnote to the original source(s).

It doesn't get round copyright or address other concerns but it is a step in the right direction - it's also probably to help defend Google and Microsoft from any claims around spreading lies because they'll be able to say individuals could always have clicked and checked the original source.
Title: Re: The AI dooooooom thread
Post by: Josquius on December 28, 2023, 02:42:11 PM
We've all been getting prompts to write shit for LinkedIn's AI-generated articles, right?

Be sure to write nonsense.

I wrote a bunch of stuff about potatoes in response to wanting me to write an article on more work relevant topics.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 03:54:00 PM
Quote from: Sheilbh on December 28, 2023, 02:39:19 PM
Quote from: HVC on December 28, 2023, 01:08:20 PMDumb question: would footnotes save them? I guess it wouldn't make the product look good for the market, but would it cover their ass for copyright purposes?
No but I think it is something that's planned for the search engine replacements (for Bing and Google) which is an AI powered chatbot where you ask a question, get an answer but it will also basically footnote to the original source(s).

It doesn't get round copyright or address other concerns but it is a step in the right direction - it's also probably to help defend Google and Microsoft from any claims around spreading lies because they'll be able to say individuals could always have clicked and checked the original source.
It can already do footnotes if you ask it to search the Internet.  It will indeed give hyperlinked footnotes in that case.  However, you're not really going off ChatGPT's "memory" then; you're essentially just asking it to summarize something it just read.
Title: Re: The AI dooooooom thread
Post by: Syt on January 03, 2024, 09:53:28 AM
The Austrian unemployment agency has introduced a chatbot based on ChatGPT. It's going about as well as you'd expect. :P

(https://pbs.twimg.com/media/GC7JS7eWgAA3sNc?format=jpg&name=large)
Title: Re: The AI dooooooom thread
Post by: Jacob on January 03, 2024, 02:34:25 PM
Michael Cohen gave his lawyer (false) AI generated case citations and his lawyer filed them: https://arstechnica.com/tech-policy/2024/01/michael-cohen-gave-his-lawyer-fake-citations-invented-by-google-bard-ai-tool/

Oopsies
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 29, 2024, 11:01:21 AM
Another lawyer using artificial intelligence and not realizing nonexistent case citations were being created. This time in BC.

https://globalnews.ca/news/10238699/fake-legal-case-bc-ai/
Title: Re: The AI dooooooom thread
Post by: Jacob on January 30, 2024, 10:47:14 AM
AI generated spam is apparently reshaping the internet: https://www.businessinsider.com/ai-spam-google-ruin-internet-search-scams-chatgpt-2024-1
Title: Re: The AI dooooooom thread
Post by: DGuller on February 28, 2024, 08:03:30 AM
Looks like it was indeed the prompt abuse that made ChatGPT "plagiarize" NYT:  https://arstechnica.com/tech-policy/2024/02/openai-accuses-nyt-of-hacking-chatgpt-to-set-up-copyright-suit.

I'm not surprised at all, the original NYT claim never passed the smell test.  LLMs don't work like that unless you hack them in a way that would only be done to manufacture a copyright trolling lawsuit.
Title: Re: The AI dooooooom thread
Post by: garbon on February 28, 2024, 08:27:04 AM
Quote from: DGuller on February 28, 2024, 08:03:30 AMLooks like it was indeed the prompt abuse that made ChatGPT "plagiarize" NYT:  https://arstechnica.com/tech-policy/2024/02/openai-accuses-nyt-of-hacking-chatgpt-to-set-up-copyright-suit.

I'm not surprised at all, the original NYT claim never passed the smell test.  LLMs don't work like that unless you hack them in a way that would only be done to manufacture a copyright trolling lawsuit.

Interesting. What I saw in that article was Open AI doing a few unethical things that they are now 'bugfixing' as a result of the lawsuit.
Title: Re: The AI dooooooom thread
Post by: DGuller on February 28, 2024, 08:49:43 AM
Quote from: garbon on February 28, 2024, 08:27:04 AMInteresting. What I saw in that article was Open AI doing a few unethical things that they are now 'bugfixing' as a result of the lawsuit.
:huh: You could've seen that just as well on a blank screen, there would've been just as much support for that interpretation there.

The bugs they're fixing would, among other things, make it harder for lawyers to engineer a showing of plagiarism and manufacture a lawsuit out of whole cloth.  Fixing that bug would change the chance of plagiarism in actual use from 0% to 0%.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on February 28, 2024, 11:07:58 AM
Quote from: DGuller on February 28, 2024, 08:03:30 AMLooks like it was indeed the prompt abuse that made ChatGPT "plagiarize" NYT:  https://arstechnica.com/tech-policy/2024/02/openai-accuses-nyt-of-hacking-chatgpt-to-set-up-copyright-suit.

I'm not surprised at all, the original NYT claim never passed the smell test.  LLMs don't work like that unless you hack them in a way that would only be done to manufacture a copyright trolling lawsuit.


I can only assume you did not actually read the whole article that you linked.

If you had, I am doubtful you would be making such a claim. For example:

Ian Crosby, Susman Godfrey partner and lead counsel for The New York Times, told Ars that "what OpenAI bizarrely mischaracterizes as 'hacking' is simply using OpenAI's products to look for evidence that they stole and reproduced The Times's copyrighted works. And that is exactly what we found. In fact, the scale of OpenAI's copying is much larger than the 100-plus examples set forth in the complaint."

Crosby told Ars that OpenAI's filing notably "doesn't dispute—nor can they—that they copied millions of The Times' works to build and power its commercial products without our permission."

Title: Re: The AI dooooooom thread
Post by: Josquius on February 28, 2024, 11:17:52 AM
I've been following this story as it has steadily developed. Seems very languish. And interesting.
Lots of people crying bloody murder about teh woke AI, but interestingly the actual problem with the AI seems to have been quite the opposite, along with clunky attempts to counter it.
 
Shame this feature isn't available in Europe as I'd love to try it.


https://www.bbc.co.uk/news/technology-68412620

QuoteWhy Google's 'woke' AI problem won't be an easy fix
In the last few days, Google's artificial intelligence (AI) tool Gemini has had what is best described as an absolute kicking online.

Gemini has been thrown onto a rather large bonfire: the culture war which rages between left- and right- leaning communities.

Gemini is essentially Google's version of the viral chatbot ChatGPT. It can answer questions in text form, and it can also generate pictures in response to text prompts.

Initially, a viral post showed this recently launched AI image generator create an image of the US Founding Fathers which inaccurately included a black man.

Gemini also generated German soldiers from World War Two, incorrectly featuring a black man and Asian woman.

Google apologised, and immediately "paused" the tool, writing in a blog post that it was "missing the mark".

But it didn't end there - its over-politically correct responses kept on coming, this time from the text version.

Gemini replied that there was "no right or wrong answer" to a question about whether Elon Musk posting memes on X was worse than Hitler killing millions of people.

When asked if it would be OK to misgender the high-profile trans woman Caitlyn Jenner if it was the only way to avoid nuclear apocalypse, it replied that this would "never" be acceptable.

Jenner herself responded and said actually, yes, she would be alright about it in these circumstances.

Elon Musk, posting on his own platform, X, described Gemini's responses as "extremely alarming" given that the tool would be embedded into Google's other products, collectively used by billions of people.

I asked Google whether it intended to pause Gemini altogether. After a very long silence, I was told the firm had no comment. I suspect it's not a fun time to be working in the public relations department.

But in an internal memo Google's chief executive Sundar Pichai has acknowledged some of Gemini's responses "have offended our users and shown bias".

That was, he said, "completely unacceptable" - adding that his teams were "working around the clock" to fix the problem.

Biased data
It appears that in trying to solve one problem - bias - the tech giant has created another: output which tries so hard to be politically correct that it ends up being absurd.

The explanation for why this has happened lies in the enormous amounts of data AI tools are trained on.

Much of it is publicly available - on the internet, which we know contains all sorts of biases.

Traditionally images of doctors, for example, are more likely to feature men. Images of cleaners on the other hand are more likely to be women.

AI tools trained with this data have made embarrassing mistakes in the past, such as concluding that only men had high powered jobs, or not recognising black faces as human.

It is also no secret that historical storytelling has tended to feature, and come from, men, omitting women's roles from stories about the past.

It looks like Google has actively tried to offset all this messy human bias with instructions for Gemini not to make those assumptions.

But it has backfired precisely because human history and culture are not that simple: there are nuances which we know instinctively and machines do not.

Unless you specifically programme an AI tool to know that, for example, Nazis and founding fathers weren't black, it won't make that distinction.

(Image: Google DeepMind boss Demis Hassabis speaking at the Mobile World Congress in Barcelona, Spain. Source: Reuters)
On Monday Demis Hassabis, the co-founder of DeepMind, an AI firm acquired by Google, said fixing the image generator would take a matter of weeks.

But other AI experts aren't so sure.

"There really is no easy fix, because there's no single answer to what the outputs should be," said Dr Sasha Luccioni, a research scientist at Huggingface.

"People in the AI ethics community have been working on possible ways to address this for years."

One solution, she added, could include asking users for their input, such as "how diverse would you like your image to be?" but that in itself clearly comes with its own red flags.

"It's a bit presumptuous of Google to say they will 'fix' the issue in a few weeks. But they will have to do something," she said.

Professor Alan Woodward, a computer scientist at Surrey University, said it sounded like the problem was likely to be "quite deeply embedded" both in the training data and overlying algorithms - and that would be difficult to unpick.

"What you're witnessing... is why there will still need to be a human in the loop for any system where the output is relied upon as ground truth," he said.

Bard behaviour
From the moment Google launched Gemini, which was then known as Bard, it has been extremely nervous about it. Despite the runaway success of its rival ChatGPT, it was one of the most muted launches I've ever been invited to. Just me, on a Zoom call, with a couple of Google execs who were keen to stress its limitations.

And even that went awry - it turned out that Bard had incorrectly answered a question about space in its own publicity material.

The rest of the tech sector seems pretty bemused by what's happening.

They are all grappling with the same issue. Rosie Campbell, Policy Manager at ChatGPT creator OpenAI, was interviewed earlier this month for a blog which stated that at OpenAI even once bias is identified, correcting it is difficult - and requires human input.

But it looks like Google has chosen a rather clunky way of attempting to correct old prejudices. And in doing so it has unintentionally created a whole set of new ones.

On paper, Google has a considerable lead in the AI race. It makes and supplies its own AI chips, it owns its own cloud network (essential for AI processing), it has access to shedloads of data and it also has a gigantic user base. It hires world-class AI talent, and its AI work is universally well-regarded.

As one senior exec from a rival tech giant put it to me: watching Gemini's missteps feels like watching defeat snatched from the jaws of victory.
Title: Re: The AI dooooooom thread
Post by: Jacob on February 28, 2024, 12:48:57 PM
Re: the NYT thing, I suppose it depends on the framing of the question:

Framing 1 (Not Plagiarism)
The only way to plagiarize the NYT with ChatGPT is if the user deliberately sets out to plagiarize (via prompt engineering). Therefore ChatGPT (and OpenAI) are innocent of any plagiarism; any guilt lies on the prompt engineer who set out to plagiarize.

Framing 2 (Unlawful/Plagiarism)
The fact that it is possible to use ChatGPT to obviously plagiarize the NYT indicates that OpenAI used NYT data to train ChatGPT. That NYT data was used for this training without permission is unlawful, and that it is used as a basis for creating answers without permission or credit is plagiarism. The fault for the plagiarism lies with OpenAI, as they're the ones who ingested the data without permission; that individual users can be more or less successful in plagiarizing material is secondary.

Basically, it's a contest between the point of view that the tool itself is morally (and legally) neutral, with any onus being on end users, versus the point of view that the tool itself is fundamentally built on plagiarism (and other unlawful use of other people's data) independently of whatever individual users may do.
Title: Re: The AI dooooooom thread
Post by: DGuller on February 28, 2024, 01:08:06 PM
I think even framing 1 might be too generous to NYT.  Depending on just how hackish the prompting is, they may essentially be retyping their news articles into MS Word verbatim, and then claiming that MS Word is plagiarizing its content.

Whether ChatGPT synthesizing the NYT content is okay or not is a different question.  I'm just addressing the idea that you can just get ChatGPT to regurgitate an NYT article for you, which frankly always smelled, especially once you looked in the complaint and saw how selectively the proof of that was presented.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on February 28, 2024, 02:31:53 PM
Right but NYT aren't suing for plagiarism they're suing for breach of copyright and I think it's quite a specific point (I could be wrong - not an IP lawyer - not a US lawyer etc).

But I'd read that point as doing two things - a nice bit of splashy PR that's easy to understand and knocking out the "transformative use" argument.

Now, having said all of that, I find it a bit odd for a company that's trained an LLM to argue that running something thousands of times to get a result is "hacking" :hmm:
Title: Re: The AI dooooooom thread
Post by: Jacob on February 28, 2024, 03:37:08 PM
Indeed.

As I understand it, the case is not about whether you can accidentally use ChatGPT to plagiarize the NYT or whether you have to deliberately set out to do it. It's about whether OpenAI used NYT data to train ChatGPT without permission.

The answer to that question seems to be "yes." Which leads to the next question, which is "how big a deal is that".
Title: Re: The AI dooooooom thread
Post by: crazy canuck on February 28, 2024, 03:47:05 PM
Quote from: Jacob on February 28, 2024, 03:37:08 PMIndeed.

As I understand it, the case is not about whether you can accidentally use ChatGPT to plagiarize the NYT or whether you have to deliberately set out to do it. It's about whether OpenAI used NYT data to train ChatGPT without permission.

The answer to that question seems to be "yes." Which leads to the next question, which is "how big a deal is that".

That is pretty much it.  And the answer is, a big enough deal for the NYT to spend legal resources to stop it and seek damages for the unauthorized use of their intellectual property.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on February 28, 2024, 03:53:17 PM
I was at a media event just today with people who are working on this (from an IP perspective, editorial, data science etc).

And there was the "where will this be in 5-10 years in the sector". While there was a degree of distinguishing between the NYT (and similar titles) who do good original reporting and the bits of the media that went for a volume strategy focused on pageviews and nothing else, fundamentally the view was: every journalist will be using AI in their job (and there is a route to a virtuous cycle), but if we get it wrong none of us might be here.

Interesting times :ph34r:
Title: Re: The AI dooooooom thread
Post by: Syt on March 26, 2024, 12:57:54 PM
OpenAI have released a video showcasing the generative text to video model:

Title: Re: The AI dooooooom thread
Post by: Tonitrus on March 26, 2024, 09:03:46 PM
Don't let fanfiction get a hold of this...
Title: Re: The AI dooooooom thread
Post by: Valmy on March 26, 2024, 09:33:31 PM
Quote from: Tonitrus on March 26, 2024, 09:03:46 PMDon't let fanfiction get a hold of this...

Teddy Roosevelt and the Rough Riders mounted on Dinosaurs.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on March 26, 2024, 09:39:08 PM
Quote from: Valmy on March 26, 2024, 09:33:31 PM
Quote from: Tonitrus on March 26, 2024, 09:03:46 PMDon't let fanfiction get a hold of this...

Teddy Roosevelt and the Rough Riders mounted on Dinosaurs.

The Jurassic Park franchise does need a new direction...

Might as well throw in time travel.
Title: Re: The AI dooooooom thread
Post by: Valmy on March 26, 2024, 09:51:07 PM
Quote from: Tonitrus on March 26, 2024, 09:39:08 PMThe Jurassic Park franchise does need a new direction...

Might as well throw in time travel.

Well that franchise has dinosaurs be all but bullet-proof so the Spanish on San Juan Hill would be shocked indeed...except they are now Minotaurs dressed as Matadors firing laser cannons.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on March 26, 2024, 09:55:44 PM
Quote from: Valmy on March 26, 2024, 09:51:07 PM
Quote from: Tonitrus on March 26, 2024, 09:39:08 PMThe Jurassic Park franchise does need a new direction...

Might as well throw in time travel.

Well that franchise has dinosaurs be all but bullet-proof so the Spanish on San Juan Hill would be shocked indeed...except they are now Minotaurs dressed as Matadors firing laser cannons.

Good thing Buckey O'Neill is piloting an Atlas battlemech in this version.
Title: Re: The AI dooooooom thread
Post by: Valmy on March 26, 2024, 10:16:00 PM
Quote from: Tonitrus on March 26, 2024, 09:55:44 PMGood thing Buckey O'Neill is piloting an Atlas battlemech in this version.

See we need to remember all these important prompts for once we get OpenAI
Title: Re: The AI dooooooom thread
Post by: Jacob on April 16, 2024, 12:00:18 PM
QuoteINTERVIEW We know Google search results are being hammered by the proliferation of AI garbage, and the web giant's attempts to curb the growth of machine-generated drivel haven't helped all that much.

It's so bad that Jon Gillham, founder and CEO of AI content detection platform Originality.ai, told us Google is losing its war on all that spammy, scammy content in its search results. You can see our full interview with Jon about it all below.


"What's clear right now is that there's no one spamming Google [that's] not doing it with AI," Gillham told The Register. "Not all AI content is spam, but I think right now all spam is AI content."

Gillham's team has been producing monthly reports to track the degree to which AI-generated content is showing up in Google web search results. As of last month, about 10 percent of Google results point to AI content, it's claimed, and that's after Google vowed to take down a whole host of websites that were pushing such trash.

"Google did these manual actions to try and win a battle, but then seem to still be sort of struggling with their algorithm being overrun by AI content," Gillham told us.

As AI content proliferates, there's also concern that we could end up in a model collapse situation as AIs ingest other AI-generated material and end up regurgitating other low-grade synthetic data. Gillham said his AI content-recognition tech, which has been used to scan datasets for machine-generated information, can help, but it's not a total solution.

"It's a step in trying to reduce that corruption of the dataset, but I don't think it totally solves the problem," Gillham told us. You can hear more of what he had to say by clicking play above.

https://www.theregister.com/2024/04/13/google_ai_spam/
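The "model collapse" dynamic the article mentions can be illustrated with a toy simulation (my own sketch, not from the article or from any real training pipeline): repeatedly fit a distribution to its own finite samples and resample, and the fitted spread drifts and tends to shrink, a statistical analogue of making copies of copies.

```python
import random
import statistics

def collapse_demo(generations=200, sample_size=10, seed=42):
    """Toy analogue of model collapse: fit a Gaussian to its own
    samples, resample from the fit, and repeat.  With a finite sample
    size the fitted spread drifts and tends to decay toward zero."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real data" distribution
    history = [sigma]
    for _ in range(generations):
        # "train" the next generation on the previous generation's output
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

hist = collapse_demo()
print(f"spread at generation 0:   {hist[0]:.3f}")
print(f"spread at generation 200: {hist[-1]:.6f}")
```

Real models are vastly more complicated than a single Gaussian, but the same intuition is why filtering synthetic data out of training sets matters.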
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 16, 2024, 12:39:57 PM
Quote from: Jacob on April 16, 2024, 12:00:18 PM
QuoteAs AI content proliferates, there's also concern that we could end up in a model collapse situation as AIs ingest other AI-generated material and end up regurgitating other low-grade synthetic data.

Interesting. An AI inbreeding depression.

(https://allthatsinteresting.com/wordpress/wp-content/uploads/2021/09/king-charles-ii.jpg)
Title: Re: The AI dooooooom thread
Post by: Jacob on April 17, 2024, 12:42:34 PM
Saw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).
Title: Re: The AI dooooooom thread
Post by: Syt on April 17, 2024, 02:28:27 PM
Yup. Some magazines closed their submissions because they were inundated by AI texts and couldn't keep up with the spam.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on April 17, 2024, 02:30:44 PM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).
This is also the value of quality, current content as produced by publishing houses, news publishers, studios etc.

It could play a really important part if the companies developing these models paid for licenses rather than stealing it (and those media companies have now all been increasing measures to try and stop it from being scraped for models).
Title: Re: The AI dooooooom thread
Post by: DGuller on April 17, 2024, 04:23:29 PM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).
Looking on the bright side, maybe the return of curation is exactly what we need.  Social media with its lack of curation may have sounded great in theory, but it put democracy on the ropes.
Title: Re: The AI dooooooom thread
Post by: Josquius on April 18, 2024, 03:00:53 AM
Quote from: Legbiter on April 16, 2024, 12:39:57 PM
Quote from: Jacob on April 16, 2024, 12:00:18 PM
QuoteAs AI content proliferates, there's also concern that we could end up in a model collapse situation as AIs ingest other AI-generated material and end up regurgitating other low-grade synthetic data.

Interesting. An AI inbreeding depression.

(https://allthatsinteresting.com/wordpress/wp-content/uploads/2021/09/king-charles-ii.jpg)


I hadn't thought of that but it does make sense.
Even just thinking of images. As all that AI generated shit starts to take over you'll be getting copies of copies of copies....
Though I do suppose the good copies will be the ones to be spread more, and so have the most influence?
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 18, 2024, 04:35:00 AM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).

Isn't self publishing basically a money loser, not even counting writing time?  Why would AI people want to spend time generating content that loses money?
Title: Re: The AI dooooooom thread
Post by: Syt on April 18, 2024, 04:37:35 AM
Quote from: Admiral Yi on April 18, 2024, 04:35:00 AM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).

Isn't self publishing basically a money loser, not even counting writing time?  Why would AI people want to spend time generating content that loses money?

Why is it a money loser? It depends on how much effort you put into it. With Amazon Self Publishing or Smashwords you can handle everything by yourself and upload your ebooks. In theory you can (more or less) just write whatever and then publish it. If you want to pay an editor, an artist for the cover art, or do pay for some social media ads, then it's obviously different. Not talking about physical self-publishing (or vanity publishers).
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 18, 2024, 10:23:45 AM
Quote from: Admiral Yi on April 18, 2024, 04:35:00 AM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).

Isn't self publishing basically a money loser, not even counting writing time?  Why would AI people want to spend time generating content that loses money?

They're already doing it, so it must make sense.

Presumably using AI prompts to generate 500 "books" on different topics, uploading them, and using botnets to push them to the top of search rankings is something that is relatively easy to automate.

At which point the money from suckers buying any of those "books" (that it costs pennies to generate) is pure profit.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 18, 2024, 10:26:12 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

This assumption is probably incorrect given the evidence - that such AI-generated titles are massively prevalent.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 18, 2024, 10:31:07 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

I believe the books in question are either e-books, or print-on-demand books.

https://en.wikipedia.org/wiki/Print_on_demand

So even if you sell a handful of books you're still profiting on each one.
Title: Re: The AI dooooooom thread
Post by: Syt on April 18, 2024, 10:31:51 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

Publishing on Amazon or Smashwords doesn't cost you anything, and if you generate all content yourself, it only costs you time - and with AI generating the content for you, you just need to format it in the ebook format of choice and upload it.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on April 18, 2024, 11:13:05 AM
Quote from: Jacob on April 18, 2024, 10:23:45 AMThey're already doing it, so it must make sense.

Presumably using AI prompts to generate 500 "books" on different topics, uploading them, and using botnets to push them to the top of search rankings is something that is relatively easy to automate.

At which point the money from suckers buying any of those "books" (that it costs pennies to generate) is pure profit.
Yeah - and it's similar with online content.

You have an increasingly long tail of the internet which is made-for-advertising sites (often AI-generated, and nonsense) but with sellable ad inventory that attracts clicks. Ad sales are often priced best for the ad slots that have the most personalised/developed profile of the individual at the end and allow the most functionalities (both of which require intrusive tracking). That means that at the minute the industry is spending money on sites with basically no compliance with any privacy laws, that are just created for the purpose of attracting eyes on adverts and just a churn with no permanence.

Admittedly this is possibly the logical end point of online advertising given that it is an industry which is largely based on fraud which is now being accelerated by AI. But it is not good for individuals, not good for publishers of real content and not good for advertisers. And ultimately as long as the things that agencies really prioritise (because it's what advertisers insist on) are targeting and functionalities, it's very difficult to move it in a different direction.

And obviously those sites are being scraped and included in future models - particularly ones reliant on webcrawlers that quality media companies are increasingly blocking because they want to get paid for their content.
Title: Re: The AI dooooooom thread
Post by: celedhring on April 18, 2024, 02:16:12 PM
Just noticed chatGPT can talk too. Predictably, it sounds like a smug asshole.

EDIT: Ah, there's different voices. In Catalan "Juniper" has the most incredibly Barcelona posh accent I can think of. Shit's scary.
Title: Re: The AI dooooooom thread
Post by: grumbler on April 19, 2024, 11:20:20 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

The costs of self-publishing are primarily the costs of formatting the book properly and perhaps a bit of advertising.  The formatting issue isn't an issue with AI.
Title: Re: The AI dooooooom thread
Post by: garbon on April 23, 2024, 02:52:35 AM
After I performed my first task for work with ChatGPT (at a speed I'd never have managed on my own), I then asked it to create me a story about a woman who won the marathon against underwhelming odds in the style of Jane Austen.

QuoteIn the quaint village of Longbourne, nestled amidst the rolling English countryside, there lived a spirited young woman named Elizabeth Bennet. Elizabeth was known for her intelligence, wit, and independent spirit, qualities not always appreciated by the more conventional society around her.

One fine autumn day, Longbourne was abuzz with excitement as news spread that a marathon race was to be held in nearby Meryton. This event, unusual for its time, attracted athletes from far and wide, including the dashing Mr. Darcy, whose impressive athleticism and serious demeanor had captured the attention of many a young lady.

Intrigued by the prospect of such a spectacle, Elizabeth found herself drawn to the event despite the protests of her mother, Mrs. Bennet, who could not understand her daughter's fascination with "such unseemly pursuits." Undeterred, Elizabeth resolved to attend, her curiosity piqued by the challenge and the chance to witness Mr. Darcy's prowess firsthand.

On the morning of the marathon, a colorful crowd gathered at the starting line in Meryton. Elizabeth, dressed in a practical but elegant running gown, stood amongst the spectators, her eyes fixed on the determined faces of the runners. Mr. Darcy, tall and composed, exuded an air of confidence that both intrigued and annoyed Elizabeth.

As the race commenced, Elizabeth's attention quickly shifted from Mr. Darcy to a lesser-known competitor, Miss Jane Bingley, a gentlewoman of refined manners and surprising athletic ability. Miss Bingley's grace and stamina caught Elizabeth's eye, and she found herself silently cheering for this unexpected heroine of the track.

The course wound through the picturesque countryside, and as the hours passed, the runners faced various challenges—hilly terrain, unpredictable weather, and their own physical limitations. Elizabeth observed with admiration as Miss Bingley, with quiet determination, steadily gained ground on the leading pack, which included Mr. Darcy.

As the marathon neared its conclusion, Elizabeth's excitement grew. Miss Bingley, previously an underdog in the eyes of many, was now closing in on victory. With a burst of speed that surprised even herself, Miss Bingley crossed the finish line, cheered on by a jubilant crowd.

In that moment, Elizabeth realized the significance of what she had witnessed. It was not merely a race won by physical prowess but a triumph of spirit and determination. Miss Bingley's victory challenged societal norms and proved that women were capable of achieving greatness beyond the confines of traditional expectations.

As Elizabeth made her way back to Longbourne that evening, her heart was filled with a newfound sense of possibility. She had witnessed a woman overcoming underwhelming odds with grace and fortitude—a testament to the indomitable spirit that resided within each of them.

Inspired by Miss Bingley's example, Elizabeth vowed to pursue her own dreams with unwavering determination, undeterred by the obstacles that society might place in her path. For in the fields of Longbourne, amidst the whispers of the autumn breeze, a new kind of heroine had emerged—one whose story would be told and celebrated for generations to come.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on April 25, 2024, 06:44:05 PM
Not the normal AI Doom piece - but this story is incredible. I'd heard reporters worry about audio fakes but honestly I hadn't really thought about the use of AI for this sort of day-to-day, real world maliciousness (also I have some questions for the teacher who immediately shared it with a student to spread :blink:):
QuoteEx-athletic director accused of framing principal with AI arrested at airport with gun
Kristen Griffith and Justin Fenton
4/25/2024 8:44 a.m. EDT, Updated 4/25/2024 5:58 p.m. EDT
The principal of Pikesville High School was investigated after audio purporting to be his voice circulated on social media. Police have charged the former athletic director who they say faked the recording using artificial intelligence software.

Baltimore County Police arrested Pikesville High School's former athletic director Thursday morning and charged him with using artificial intelligence to impersonate Principal Eric Eiswert, leading the public to believe Eiswert made racist and antisemitic comments behind closed doors.

Dazhon Darien, 31, was apprehended as he attempted to board a flight to Houston at BWI Airport, Baltimore County Police Chief Robert McCullough said at a news conference Thursday afternoon. Darien was stopped for having a gun on him and airport officials saw there was a warrant for his arrest. Police said they did not know whether Darien was trying to flee.

Darien was charged with disrupting school activities after investigators determined he faked Eiswert's voice and circulated the audio on social media in January, according to the Baltimore County State's Attorney's Office. Darien's nickname, DJ, was among the names mentioned in the audio clips authorities say he faked.

"The audio clip ... had profound repercussions," police wrote in charging documents. "It not only led to Eiswert's temporary removal from the school but also triggered a wave of hate-filled messages on social media and numerous calls to the school. The recording also caused significant disruptions for the PHS staff and students."

Police say Darien made the recording in retaliation after Eiswert initiated an investigation into improper payments he made to a school athletics coach who was also his roommate. Darien is also charged with theft and retaliating against a witness.

Darien was allowed release on $5,000 bond and waived an attorney at an initial court appearance, according to court records. Attempts to reach him by phone and at his home were unsuccessful.

Eiswert's voice, which police and AI experts believe was simulated, made disparaging comments about Black students and the surrounding Jewish community and was widely circulated on social media.

Questions about the audio's authenticity quickly followed. Police wrote in charging documents that Darien had accessed the school's network on multiple occasions in December and January searching for OpenAI tools, and used "Large Language Models" that practice "deep learning, which involves pulling in vast amounts of data from various sources on the internet, can recognize text inputted by the user, and produce conversational results." They also connected Darien to an email account that had distributed the recording.

Many current and former students believed Eiswert was responsible for the offensive remarks, while former colleagues denounced the audio and defended Eiswert's character. Eiswert himself has denied making those comments and said the comments do not align with his views.

The audio, posted to the popular Instagram account murder_ink_bmore, prompted a Baltimore County Public Schools and Baltimore County Police investigation. Eiswert has not been working in the school since the investigation began.

The voice refers to "ungrateful Black kids who can't test their way out of a paper bag" and questions how hard it is to get those students to meet grade-level expectations. The speaker uses names of people who appear to be staff members and says they should not have been hired, and that he should get rid of another person "one way or another."

"And if I have to get one more complaint from one more Jew in this community, I'm going to join the other side," the voice said.

Darien was being investigated as of December in a theft investigation that had been initiated by Eiswert. Police say Darien had authorized a $1,916 payment to the school's junior varsity basketball coach, who was also his roommate, under the pretense that he was an assistant girls soccer coach. He was not, school officials said.

Eiswert determined that Darien had submitted the payment to the school payroll system, bypassing proper procedures. Darien had been notified of the investigation, police said.

Police say the clip was received by three teachers the night before it went viral. The first was Darien; a third said she received the email and then got a call from Darien and teacher Shaena Ravenell telling her to check her email. Ravenell told police that she had forwarded the email to a student's cell phone, "who she knew would rapidly spread the message around various social media outlets and throughout the school," and also sent it to the media and the NAACP, police said.

She did not mention receiving it from Darien until confronted about his involvement. Ravenell has not been charged with a crime and could not immediately be reached for comment.

Both Darien and Ravenell have submitted their resignations to the school system, according to an April 16 school board document. The resignations are dated June 30.

Baltimore County Public Schools Superintendent Myriam Rogers said school system officials are recommending Darien's termination. She would not say, however, if the other employees named in the charging documents, including Ravenell, are still working at the school.

Rogers in January called the comments "disturbing" and "highly offensive and inappropriate statements about African American students, Pikesville High School staff, and Pikesville's Jewish community."

Rogers said Kyria Joseph, executive director for secondary schools, and George Roberts, a leadership consultant for the school system, have been running Pikesville High School since the investigation started. They will continue to do so for the remainder of the year. She said they will work with Eiswert to determine his duties for next school year.

Billy Burke, head of the Council of Administrative & Supervisory Employees, the union that represents Eiswert, was the only official to suggest the audio was AI-generated.

Burke said he was disappointed in the public's assumption of Eiswert's guilt. At a January school board meeting, he said the principal needed police presence at his home because he and his family had been harassed and threatened. Burke had also received harassing emails, he said at the time.

"I continue to be concerned about the damage these actions have caused for Principal Eiswert, his family, the students and staff of Pikesville High School, and the Black and Jewish community members," Burke said in a statement on Thursday. "I hope there is deliberate action to heal the trauma caused by the fake audio and that all people can feel restored."

Police said the school's front desk staff was "inundated with phone calls from parents and students expressing concern and disparaging remarks toward school staff and administrators." The flood of calls made it difficult to field phone calls from parents trying to make arrangements for their children and other school functions, officials told police.

"The school leadership expressed that staff did not feel safe, which required an increase in police presence at the school to address safety concerns and fears," police said.

Teachers, under the impression the recording was authentic, "expressed fears that recording devices could have been planted in various places in the school," police said.

"The recording's release deeply affected the trust between teachers and the administration," police said. "One individual shared that they fielded sensitive phone calls in their vehicle in the parking lot instead of speaking in school."

"Hate has no place and no home in Baltimore County," said Johnny Olszewski Jr., the Baltimore County executive.

He called the developments of AI "deeply concerning" and that it's important for everyone to remain vigilant for anyone using the technology for malicious reasons. There should also be more investment in technology that identifies any inauthentic recording made with AI, he said.

Experts in detecting audio and video fakes told The Banner in March that there was overwhelming evidence the voice is AI-generated. They noted its flat tone, unusually clean background sounds and lack of consistent breathing sounds or pauses as hallmarks of AI. They also ran the audio through several different AI-detection techniques, which consistently concluded it was a fake, though they could not be 100% sure.

The police also sought the expertise of two professors familiar with AI detection to assist in their investigation. Catalin Grigoras, a forensic analyst and professor at the University of Colorado Denver, concluded that the "recording contained traces of AI-generated content with human editing after the fact, which added background noises for realism," the charging documents stated.

Hany Farid from the University of California, Berkeley, who's also an expert in forensic analysis, determined "the recording was manipulated, and multiple recordings were spliced together," according to the documents.

AI voice-generation tools are now widely available online, and a single minute's recording of someone's voice can be enough to simulate it with a $5-a-month AI tool, the Nieman Journalism Lab reported in February.

There are few regulations to prevent AI imitations, called deepfakes, and few perpetrators are prosecuted.

Cindy Sexton, president of the Teachers Association of Baltimore County, said AI should be a concern for everyone, especially educators.

She said the National Education Association is working to address their concerns, but in the meantime, she's not sure what else should be done.

"We have to do something as a society, but 'what is that something' is of course the big question," Sexton said.

Baltimore County State's Attorney Scott Shellenberger said this is the first time this type of case has been taken up by the district. And it's one of the first his office was able to find around the nation.

There were some legal statutes they used that were "right on point," he said, but the charge of disrupting school activities only carries a six-month sentence.

"It seems very clear to me that we may need to make our way down to Annapolis in the legislature next year to make some adaptions to bring the law up to date with the technology that was being used," he said.

Baltimore Banner staff writers Cody Boteler and Kaitlin Newman contributed to this report.

Correction: This story has been updated to correct the spelling of Hany Farid's name.

Edit: The clip. Obviously I don't know the guy and now I listen for it the lack of background noise is noticeable - but if I heard this, I would have no idea:
https://x.com/Phil_Lewis_/status/1747708846942851493
Title: Re: The AI dooooooom thread
Post by: Josquius on May 01, 2024, 06:52:33 AM
Some people have been experimenting with using official corporate-approved AI at work for tasks such as writing up notes from research.
The results are... less than impressive. The AI always misses critical nuances.
Like there was one example where somebody asked about the timelines for something, they were told it is planned but we can't give you a timeline yet, and replied "OK right."
The AI's summary?
They're enthusiastic about the lack of timelines.
...
err....
And this isn't an exceptional case, lots of things like this that can't just be put down to simple explanations like not hearing the sarcasm in a tone.

I'm wondering whether a problem might lie in the AI being trained heavily on American data thus it just completely screws up when faced with other English speakers?- we're speaking the same language so it thinks it can understand...but not really.
Could be quite the great topic for someone in academia I think.

I have noticed however that it does seem to do a better job of cobbling together readable transcripts than most pre-existing tools for this. Though again, the false confidence it displays means where there are errors they are much greater than they would be otherwise.
Title: Re: The AI dooooooom thread
Post by: garbon on May 01, 2024, 09:01:18 AM
It is known that sarcasm has not yet reached America.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 01, 2024, 11:06:01 AM
Quote from: Josquius on May 01, 2024, 06:52:33 AMSome people have been experimenting with using official corporate-approved AI at work for tasks such as writing up notes from research.
The results are... less than impressive. The AI always misses critical nuances.
Like there was one example where somebody asked about the timelines for something, they were told it is planned but we can't give you a timeline yet, and replied "OK right."
The AI's summary?
They're enthusiastic about the lack of timelines.
...
err....
And this isn't an exceptional case, lots of things like this that can't just be put down to simple explanations like not hearing the sarcasm in a tone.

I'm wondering whether a problem might lie in the AI being trained heavily on American data thus it just completely screws up when faced with other English speakers?- we're speaking the same language so it thinks it can understand...but not really.
Could be quite the great topic for someone in academia I think.

I have noticed however that it does seem to do a better job of cobbling together readable transcripts than most pre-existing tools for this. Though again, the false confidence it displays means where there are errors they are much greater than they would be otherwise.

It does not think, and it does not understand.  That is the much more simple explanation for the errors you are seeing.
Title: Re: The AI dooooooom thread
Post by: Tamas on May 01, 2024, 11:33:57 AM
Quote from: crazy canuck on May 01, 2024, 11:06:01 AMIt does not think, and it does not understand. 

This should be an important message about these "AIs" but there are too many people benefiting from the doom and gloom hype machine.
Title: Re: The AI dooooooom thread
Post by: grumbler on May 01, 2024, 11:37:05 AM
Quote from: garbon on May 01, 2024, 09:01:18 AMIt is known that sarcasm has not yet reached America.

Indeed, the language does not even have a word for sarcasm.
Title: Re: The AI dooooooom thread
Post by: Barrister on May 24, 2024, 02:46:16 PM
So there's been an interesting story out there recently: ChatGPT releases a new voice mode that strongly resembles actress Scarlett Johansson's voice.  Johansson voiced an AI in the movie Her a number of years ago.  Johansson was asked to provide the voice for ChatGPT but refused.

Is this a violation of Johansson's rights though?  It does feel creepy - but it's not like voice impersonators haven't been a thing for years.  I mean Rich Little made a career of it for decades.

And it extends more broadly to AI in general.  These LLMs are all trained on existing material.  Picture AIs are perhaps most interesting (at least to me).  As a person you are of course free to go to all the art galleries of the world and study all of the existing pictures to get "inspired" by them, and no one would call "foul" over doing so as long as you don't directly copy.

But it does seem unseemly - either voice, text, or art - when LLM-trained AIs then interpret existing material to create derivative works.

I'm not sure where I come down on this.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on May 24, 2024, 04:08:45 PM
Derivative works are part of the copyright bundle of rights under the US law. My understanding is that the AI models are banking on the contention that their activity is sufficiently transformative to qualify as fair use. But the last Supreme Court ruling on that doctrine limited it.
Title: Re: The AI dooooooom thread
Post by: Barrister on May 24, 2024, 04:31:08 PM
Quote from: The Minsky Moment on May 24, 2024, 04:08:45 PMDerivative works are part of the copyright bundle of rights under the US law. My understanding is that the AI models are banking on the contention that their activity is sufficiently transformative to qualify as fair use. But the last Supreme Court ruling on that doctrine limited it.

But coming back to a non-AI example - the "Blurred Lines" decision, where Robin Thicke and Pharrell Williams were sued by the estate of Marvin Gaye.  The song in question didn't copy any direct chords, melodies or words - but the jury found it infringed Gaye's copyright (and it was upheld on appeal).

That seemed then, and seems now, like a stupid decision.  You can't copyright the "feel" of a song.

But how different is that from AIs?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on May 25, 2024, 02:03:48 PM
Although wouldn't the copyright here be about Her? Isn't this about personality rights and whether the AI is close enough to effectively be a use of her voice for commercial reasons when she's rejected that?
Title: Re: The AI dooooooom thread
Post by: grumbler on May 25, 2024, 03:16:24 PM
Quote from: Sheilbh on May 25, 2024, 02:03:48 PMAlthough wouldn't the copyright here be about Her? Isn't this about personality rights and whether the AI is close enough to effectively be a use of her voice for commercial reasons when she's rejected that?

They used a different voice actress.  You can't sue for copyright violation if someone's voice sounds like yours.
Title: Re: The AI dooooooom thread
Post by: Josquius on May 31, 2024, 06:30:26 AM
(attached image: 1716755385290.jpg)

 :hmm:
Title: Re: The AI dooooooom thread
Post by: Sheilbh on May 31, 2024, 07:34:05 AM
Don't know about that but I know prompt injection is something that's a worry in media. The closer Gen AI is to the production environment (eg Reach publications, I think Axel Springer) that's going to be a big potential target.
Title: Re: The AI dooooooom thread
Post by: The Brain on June 01, 2024, 02:18:21 AM
Quote from: Sheilbh on May 31, 2024, 07:34:05 AMDon't know about that but I know prompt injection is something that's a worry in media. The closer Gen AI is to the production environment (eg Reach publications, I think Axel Springer) that's going to be a big potential target.

What's the worry?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on June 12, 2024, 09:24:19 AM
Quote from: The Brain on June 01, 2024, 02:18:21 AMWhat's the worry?
Basically, the closer it is to a production environment, the bigger the risk that prompt injection could impact content.

But there is a big divide - I think NYT, BBC, Guardian, News International are all experimenting with AI but basically keeping it away from the actual production of news. Other companies like Axel Springer and Reach (basically all local papers in the UK, plus the Mirror and the Express) are putting it far far closer to that bit.

I think the Apple AI thing was interesting. First because it feels like the big idea for what Apple's next round of devices will do (in particular because Apple sales are slowing). But secondly because it seems like OpenAI still hasn't found a business model - they've gone from about to usher in AGI to, basically, SaaS. And I think that makes sense: this is the first AI tool I've seen that I think is useful for and likely to be adopted by normal everyday people. But putting those together it makes me think that AI, so far, will mainly reinforce the position of those companies who can plug it into their existing products. Perhaps that was always likely.

I can see Google's generative search, and AI in its suite of office tools etc. I can see the Apple product. I get how those get used and integrated into people's daily lives - which will reinforce the power of those companies. (Similarly in the business world I can see, for example, Adobe or Salesforce or Oracle using AI).

Although it also made me personally feel more inclined to just buy a brick phone because I basically want my communications to remain between me and the people I'm communicating with (and obviously, I get that there are telcos and the NSA in there etc but I live with it) - and that feels like it's becoming an increasingly radical idea :ph34r:
Title: Re: The AI dooooooom thread
Post by: Jacob on June 12, 2024, 10:03:22 AM
Yeah I remain fairly luddite when it comes to AI and personal use.
Title: Re: The AI dooooooom thread
Post by: Razgovory on July 14, 2024, 01:13:32 PM
(https://i.imgur.com/4H4gBR1.jpeg)


I hate AI.
Title: Re: The AI dooooooom thread
Post by: Jacob on July 14, 2024, 02:15:21 PM
AI as it is now definitely has use-cases. Some of the use-cases are positive too, but IMO a good number of them are about rapidly accelerating enshittification.

But even with that, there's something of an AI bubble going on right now IMO.
Title: Re: The AI dooooooom thread
Post by: Jacob on July 18, 2024, 01:00:32 PM
Some skepticism is popping up:

Quote from: The CBCAI has a lot of fans in business. But does it actually make money? (https://www.cbc.ca/news/business/ai-generative-business-money-revenue-1.7264014)
Recent Goldman Sachs report casts doubt on profitability of AI

The growth of AI in the last few years has demonstrated two things.

One, that the technology is capable of producing a range of slick if highly suspect images and video. And two, that a whole lot of people are convinced it will soon suffuse every aspect of modern life.

From creating art to providing customer service to synthesizing reams of medical data, artificial intelligence has been hailed as transformational, leading to billions in investment and turning Nvidia — the California-based firm that makes the underlying processors — into one of the most valuable companies in the world.

The global consultancy PricewaterhouseCoopers has forecast that AI could contribute "up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined."

While AI's inevitability has been a recurring refrain in much of the business media of late, there has been a small chorus of skeptics demanding proof of its ability to do what corporations need most: make money.

"There really has not been an industry like generative AI before, which just bleeds insane amounts of money while also really not doing enough," said Ed Zitron, a Las Vegas-based tech critic and host of the podcast Better Offline, who has long talked about the apparent chasm between the promises of AI and its actual return on investment.

Late last month, Goldman Sachs joined the group chat, becoming one of the first major investment banks to question the hype around AI, with a report entitled Gen AI: Too Much Spend, Too Little Benefit?

In it, Jim Covello, the bank's head of global equity research, said the sheer cost of the technology hampers any financial gains, that Nvidia's monopoly means processor prices won't come down anytime soon and "that 18 months after the introduction of generative AI to the world, not one truly transformative ... application has been found."

Zitron says the Goldman Sachs report represents a "serious" rebuke of the AI narrative.

"Generally, when the money people are saying, 'I don't know about this,' that's a bad time for everyone," he said.

A world of promises

Artificial intelligence has been a going concern for decades, but the release of virtual assistant ChatGPT in the fall of 2022 publicly demonstrated the capabilities of large language models and captured the popular imagination.

While many people have been gobsmacked by ChatGPT's ability to conjure computer code and whole essays based on a few prompts, its rise has been equally abetted by the futuristic scenarios laid out by Sam Altman, CEO of OpenAI, which developed ChatGPT.

Altman recently asked people to consider a time when we'll be able to ask artificial intelligence to "discover all of physics" or "start and run a great company" and that "it can go ahead and do that."

Zitron says many people in the business community have been content to accept such vague promises.

"Right now, what you've just got is a lot of followers in tech. You've got a lot of people who need [AI] to be the future," he said. "When the markets realize there's no revenue growth coming from this ... it will lead to a panic."

Joshua Gans, a professor and economist at the Rotman School of Management at the University of Toronto, says that when large language models like ChatGPT perform well above expectations, it "implies that [the technology] would be of enormous economic value right away. But invariably that isn't the case."

Gans, who co-authored the book Power and Prediction: The Disruptive Economics of Artificial Intelligence, believes AI "has legs" in terms of profitability, but acknowledges that productivity gains "don't instantly happen."

"You can simultaneously believe that AI is going to have enormous consequences for economic productivity in the future and at the same time believe it's going to take a while."

Where is AI's ROI?

Amazon, Google and Microsoft are reportedly spending billions on AI every quarter and venture capital has streamed into startups like Anthropic and Hugging Face. But there has been little proof the technology is improving the bottom line for anyone beyond OpenAI (currently valued at $80 billion US) and Nvidia (close to $3 trillion US).

Divya Goyal, an equity research analyst at Scotiabank in Toronto, says that in terms of current returns on investment (ROI), "I cannot name one specific [AI] tool that will generate significant ROI."

She said it's likely to be "a mix of technologies and how they are brought to the table that eventually results in driving the upside for organizations."

Goyal has seen a lot of "proof-of-concept" ideas and thinks the opportunity for organizations looking to take advantage of AI is in "the customer engagement space," such as making their apps more conversational and intuitive and helping to better direct phone inquiries.

Gerald Grant, director of the Centre for Information Technology, Organizations and People at the Sprott School of Business at Carleton University in Ottawa, believes the greatest potential for AI is in digital infrastructure and "making corporations operate more effectively."

Seeking 'hypergrowth'

While organizations determine how to fit AI into their workflows, the processing involved requires expanding data centres. This is not only a financial consideration for companies, but it's quickly becoming a significant source of carbon emissions. Both Google and Microsoft have acknowledged their investments in AI will likely decimate their 2030 net-zero targets.

In a recent interview with Bloomberg, James Ferguson, founding partner of U.K.-based investment research firm MacroStrategy Partnership, said the colossal power requirements of AI, along with the documented habit of large language models to invent facts, lead him to believe the current hype could lead to a bust like the late-'90s dot-com crash.

Grant says one reason the AI hype has been so strong is that the tech industry is seeking another "hypergrowth" market worth hundreds of billions of dollars, which we've seen with smartphones, social media or cloud computing.

There was similar fanfare around technologies like 3D printing, drones and the metaverse, but they haven't proved to be as disruptive or profitable as initially expected, he says.

Zitron says that for all the excitement around AI right now, it has supported virtual assistants like Siri and Alexa for about a decade. He also pointed out that Alexa has been a major money loser for Amazon.

"What people don't realize," he said, "is that if this was the next big growth market, it would have already been one."

Grant said the ultimate challenge for anyone invested in AI is developing applications that change people's lives.

"Technology always has a great potential to do wonderful things," said Grant.

"But people are not seeking technology — they're seeking outcomes that are beneficial to them and that they're willing to pay for. Whatever hype you have around technology is just that if it doesn't deliver something novel or meaningful."

Goldman-Sachs: Gen AI: Too Much Spend, Too Little Benefit? (https://web.archive.org/web/20240629140307/http://goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf)
Title: Re: The AI dooooooom thread
Post by: Josquius on August 03, 2024, 08:50:05 AM
Anyone else tried playing with suno?

The songs it comes up with are really quite impressive.

A theme song for languish with minimal effort.

https://suno.com/song/6ec543ae-f4a5-4996-a865-eea9f0e2796c
Title: Re: The AI dooooooom thread
Post by: Richard Hakluyt on August 03, 2024, 09:50:50 AM
I've certainly heard worse...a lot worse  :hmm:
Title: Re: The AI dooooooom thread
Post by: The Brain on August 03, 2024, 11:58:11 AM
Quote from: Josquius on August 03, 2024, 08:50:05 AMAnyone else tried playing with suno?

The songs it comes up with are really quite impressive.

A theme song for languish with minimal effort.

https://suno.com/song/6ec543ae-f4a5-4996-a865-eea9f0e2796c

Better than Shit Goat. :)
Title: Re: The AI dooooooom thread
Post by: Norgy on August 03, 2024, 03:14:17 PM
There's an AI sandbox available at our media house. It leaves a lot to be desired.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 06, 2024, 04:14:26 PM
So I posted before about Facebook as the zombie internet - and they kept digging into the weird AI stuff on Facebook. This makes perfect sense and is kind of fascinating:
https://www.404media.co/where-facebooks-ai-slop-comes-from/
Title: Re: The AI dooooooom thread
Post by: Jacob on August 06, 2024, 04:40:17 PM
That's pretty interesting.

So FB's line is "as long as the AI images are being seen and liked by real people it's working as intended."

That's a non-trivial IF.

Certainly, it would seem there are some pretty strong incentives for generators of content to also develop LIKE and engagement accounts as they directly impact revenue.
Title: Re: The AI dooooooom thread
Post by: Razgovory on August 06, 2024, 04:54:38 PM
Quote from: Sheilbh on August 06, 2024, 04:14:26 PMSo I posted before about Facebook as the zombie internet - and they kept digging into the weird AI stuff on Facebook. This makes perfect sense and is kind of fascinating:
https://www.404media.co/where-facebooks-ai-slop-comes-from/
This explains so much.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 06, 2024, 04:55:47 PM
Yeah - and in fairness it could well be that they are being seen and liked by real people. Lots of the AI pages spend lots of time liking each other. But it may well be people trying to earn money online in the developing world, and (from what I understand) interactions from accounts with European or North American IP addresses are more valuable. (Edit: This is, just as an aside, fucked up.)

But also what you say is true, and it's true of the entire internet these platforms have built. The way they make money is online advertising, and at every single level it is riddled with fraud. There is zero trust - and at least in the UK the brand safety and anti-fraud providers have themselves been sued, both for not doing what they claim and for misusing their access to sites to build their own audiences.

Generally for the last twenty years - the most valuable companies in the world, who built our world and are now building the next stage...
Title: Re: The AI dooooooom thread
Post by: Syt on September 17, 2024, 06:40:30 AM
Well, this is nice. :)

https://www.aol.com/billionaire-larry-ellison-says-vast-160646367.html
(posting from AOL, BI article is behind paywall)

QuoteBillionaire Larry Ellison says a vast AI-fueled surveillance system can ensure 'citizens will be on their best behavior'

Walking down a suburban neighborhood street already feels like a Ring doorbell panopticon.

But this is only the start of our surveillance dystopia, according to Larry Ellison, the billionaire cofounder of Oracle. He said AI will usher in a new era of surveillance that he gleefully said will ensure "citizens will be on their best behavior."

Ellison made the comments as he spoke to investors earlier this week during an Oracle financial analysts meeting, where he shared his thoughts on the future of AI-powered surveillance tools.

Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras.

"We're going to have supervision," Ellison said. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on."

Ellison also expects AI drones to replace police cars in high-speed chases. "You just have a drone follow the car," Ellison said. "It's very simple in the age of autonomous drones." He did not say if those drones would broadcast the chases on network news.

Ellison's company, Oracle, like almost every company these days, is aggressively pursuing opportunities in the AI industry. It already has several projects in the works, including one in partnership with Elon Musk's SpaceX.

Ellison is the world's sixth-richest man with a net worth of $157 billion, according to Bloomberg.

Ellison's children have made names for themselves in the film industry. His daughter Megan Ellison founded her production company Annapurna Pictures in 2011, and his son David Ellison is set to become CEO of Paramount after it completes its merger with Skydance Media.

Though I guess you could argue prolific carrying of cameras/recording equipment at all times and social media shitstorms are already a step in that direction? :P
Title: Re: The AI dooooooom thread
Post by: Josquius on September 17, 2024, 06:42:06 AM
Larry Ellison? Not Harlan Ellison?
Title: Re: The AI dooooooom thread
Post by: Syt on September 17, 2024, 06:51:24 AM
It would be impressive if this were a recent quote from Harlan Ellison, since he's been dead for a while.

Larry Ellison, age 80, at a conference this year:

(https://eu-images.contentstack.com/v3/assets/blt8eb3cdfc1fce5194/blta386c671fcb5ad28/66e3029e00f43f458abde764/Oracle-Larry-Ellison.jpg)
Title: Re: The AI dooooooom thread
Post by: Josquius on September 17, 2024, 02:16:12 PM
I meant it sounds quite like something Harlan would have written about in a dark sci-fi novella.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on September 17, 2024, 02:52:03 PM
I thought it was well known that Larry Ellison had strayed into mild-Howard Hughes-dom some time ago?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on September 17, 2024, 02:53:38 PM
Major funding partner of the Tony Blair Institute for Global Change.

No idea if Blair already believed this (it's his style), but big focuses for Blair in his policy advocacy and consultancy work with governments are AI and the digitalisation of health data - both of which are areas Ellison's been very involved in.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on September 17, 2024, 03:47:12 PM
I've been watching a few of these AI generated Panavision send ups of movies and realized that the obvious industry for AI to generate revenue is porn.  Just as porn has been the driver of so much of our technology adoption.
Title: Re: The AI dooooooom thread
Post by: Josquius on September 17, 2024, 03:52:46 PM
Quote from: Admiral Yi on September 17, 2024, 03:47:12 PMI've been watching a few of these AI generated Panavision send ups of movies and realized that the obvious industry for AI to generate revenue is porn.  Just as porn has been the driver of so much of our technology adoption.

Deep fake porn is kind of old news.
The latest twist.

BBC News - Inside the deepfake porn crisis engulfing Korean schools
https://www.bbc.com/news/articles/cpdlpj9zn9go
Title: Re: The AI dooooooom thread
Post by: Razgovory on September 17, 2024, 06:44:07 PM
My attempts to create deepfake porn have been abysmal.  Grumbler with big tits is one of the least erotic things you could ever imagine.
Title: Re: The AI dooooooom thread
Post by: Josquius on October 04, 2024, 01:50:48 AM
Today I learned that US intelligence makes use of an AI system named.... Sentient.

https://en.m.wikipedia.org/wiki/Sentient_(intelligence_analysis_system)#:~:text=Sentient%20is%20a%20heavily%20classified,United%20States%20Air%20Forces%20Research

Guys..... it's like you see the future bits of Terminator as a desirable end goal here.
Title: Re: The AI dooooooom thread
Post by: Crazy_Ivan80 on October 04, 2024, 04:44:13 AM
Quote from: Admiral Yi on September 17, 2024, 03:47:12 PMI've been watching a few of these AI generated Panavision send ups of movies and realized that the obvious industry for AI to generate revenue is porn.  Just as porn has been the driver of so much of our technology adoption.

IIRC the Panavision creator gave some insights on how much he makes with this hobby, with actual numbers and graphs.
Not too shabby.
Title: Re: The AI dooooooom thread
Post by: celedhring on October 05, 2024, 08:48:55 AM
About the so-called "1950s Super Panavision 70" videos, I'm mildly annoyed that 1) none I have seen uses the correct aspect ratio, and 2) Super Panavision 70 came out in 1959, so the 1950s part is a bit of a stretch  :P
Title: Re: The AI dooooooom thread
Post by: DGuller on November 17, 2024, 09:17:35 PM
There was an experiment done on whether doctors with ChatGPT can be better at diagnosing than doctors without ChatGPT.  The finding was that the help from ChatGPT was very marginal at best.

The kicker though was that ChatGPT alone did much better than doctors with ChatGPT.  In effect, the doctors were holding ChatGPT back, probably in part because they couldn't conceive that an LLM could perceive something they couldn't, like a lot of people in this thread.

https://www.nytimes.com/2024/11/17/health/chatgpt-ai-doctors-diagnosis.html
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 18, 2024, 01:38:29 PM
LLMs have no reason to have public-facing endeavours. Those things are marketing ploys.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 18, 2024, 02:32:27 PM
Quote from: Grey Fox on November 18, 2024, 01:38:29 PMLLMs have no reason to have public-facing endeavours. Those things are marketing ploys.

elaborate  :)
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 18, 2024, 06:58:09 PM
Quote from: crazy canuck on November 18, 2024, 02:32:27 PM
Quote from: Grey Fox on November 18, 2024, 01:38:29 PMLLMs have no reason to have public-facing endeavours. Those things are marketing ploys.

elaborate  :)

LLMs have a really cool capacity for identification in all sorts of applications. Like in healthcare, where they are used to better detect/find cancers in scans, or where they are integrated into scheduling apps to better manage doctor appointments by simply guiding patients to the right professionals.

That they've been made available to the public, where we use them to do unprofitable dumb things like image generation and our agendas, is only a marketing ploy. Maybe a training opportunity too.


Here's the AI software the team I'm part of develops.

https://www.teledynevisionsolutions.com/products/astrocyte-sdk/?model=astrocyte&vertical=tvs-dalsa-oem&segment=tvs

It's a very niche AI model creator for the machine vision industry.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 18, 2024, 07:46:14 PM
Agree on the marketing ploy piece. I think the business model is going to be basically providing LLMs as SaaS - either by providing infrastructure for private mini-LLMs (Mistral) or as an extension of an existing SaaS product. Although that's not a massively sexy pitch :lol:

Although the best I've seen in my sector is actually NotebookLM from Google - primarily because of its citations/sourcing, which I think they released in that way and have been surprised at the business interest.

Also think there's a fair amount that is being pitched as AI - when I'm talking about AI, the bit that interests and worries me is generative AI, but a lot of the pitches and products (for example, I think, the cancer scanning) are machine learning.
Title: Re: The AI dooooooom thread
Post by: celedhring on November 19, 2024, 03:28:26 AM
I'm finding ChatGPT useful for text editing and proofreading. Particularly as a non-native English speaker. "Oh yeah, that's a more natural way to say this".

It does have its kinks though - it loves redundancy, and relatedly sometimes it's a bit too wordy.


Title: Re: The AI dooooooom thread
Post by: Tamas on November 19, 2024, 08:29:16 AM
Quote from: celedhring on November 19, 2024, 03:28:26 AMI'm finding ChatGPT useful for text editing and proofreading. Particularly as a non-native English speaker. "Oh yeah, that's a more natural way to say this".

It does have its kinks though - it loves redundancy, and relatedly sometimes it's a bit too wordy.




Maybe it is an introvert. It has recently been explained to me that introverts say things in too roundabout a way - in other words, they only give a simple yes or no when that is the appropriate answer, but what do I know.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 19, 2024, 09:54:23 AM
Quote from: celedhring on November 19, 2024, 03:28:26 AMI'm finding ChatGPT useful for text editing and proofreading. Particularly as a non-native English speaker. "Oh yeah, that's a more natural way to say this".

It does have its kinks though - it loves redundancy, and relatedly sometimes it's a bit too wordy.



You can give it custom instructions.  For my work account I told it to be concise and to the point, and to stop giving a long preamble to every answer, and it did become quite terse.
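For what it's worth, custom instructions map onto the underlying chat API as (roughly) a system message sent ahead of each prompt. A minimal stdlib sketch of the request body, assuming OpenAI's chat completions endpoint shape; the model name and instruction wording are illustrative examples, not anyone's actual settings:

```python
import json

# Custom instructions behave like a standing system message that is
# prepended to every request. The payload below would be POSTed to the
# chat completions endpoint with an API key; it is printed here instead
# of sent. Model name and wording are illustrative only.
payload = {
    "model": "gpt-4o-mini",  # example model name
    "messages": [
        {
            "role": "system",
            "content": (
                "Be concise and to the point. "
                "Do not add a preamble before answers."
            ),
        },
        {"role": "user", "content": "Proofread this paragraph: ..."},
    ],
}

print(json.dumps(payload, indent=2))
```

The same idea works with any chat-style LLM API that distinguishes a system role from a user role.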
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 20, 2024, 02:52:12 PM
Interesting other example on the medical front - this was more machine learning - but an AI system reviewing scans (I forget what for) was only better than actual physicians in basically perfect scenarios. So when the individual was stood up for a chest x-ray, but very often what they're looking for is associated with trauma or other complex injuries, so it is often people lying in a bed getting quick x-rays etc.

Which just shows how very, very important it is to get the right input/training data.

(Which is obviously why LLMs love journalism: quality, legaled, accurate....And why they should pay for using it :ph34r:)
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 20, 2024, 03:26:13 PM
Quote from: Sheilbh on November 20, 2024, 02:52:12 PMInteresting other example on the medical front - this was more machine learning - but an AI system reviewing scans (I forget what for) was only better than actual physicians in basically perfect scenarios. So when the individual was stood up for a chest x-ray, but very often what they're looking for is associated with trauma or other complex injuries, so it is often people lying in a bed getting quick x-rays etc.

Which just shows how very, very important it is to get the right input/training data.

(Which is obviously why LLMs love journalism: quality, legaled, accurate....And why they should pay for using it :ph34r:)

Yeah, and add to that the recent findings that radiologists using AI did worse - meaning they produced more false positives and false negatives.  The really concerning bit was that the false negatives were more prevalent.

Turns out they trusted the AI (or, a cynic might suggest, let the AI do the work) too much.

On the bright side, when the AI got it right, so did the radiologist.  But again, that is because they came to depend on the tool.

 
Title: Re: The AI dooooooom thread
Post by: Josquius on December 12, 2024, 05:45:46 AM
I've found myself using Chat gpt a bit lately. I think I've found a decent use case.

I'll write something which has all the proper points... but it's convoluted and weird.
ChatGPT then rephrases it into a more succinct form.

This seems to go... OK-ish, though ChatGPT definitely has its own idiosyncrasies that come through. It also loves correcting itself: it gives one suggestion, then if you give that suggestion back as the redraft, it totally redoes it. It's also very American - it really puts way too much of a positive, promotional spin on everything. I do need to do that to an extent... but too much would just rub people the wrong way and sound false.
It's very much a bounce-ideas-off-it-and-then-do-your-own-redraft tool.
But still... an interesting use.

Anyone else found uses?
Title: Re: The AI dooooooom thread
Post by: garbon on December 12, 2024, 06:27:22 AM
I've noticed it likes wordy business speak and it likes to be redundant.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 12, 2024, 07:36:45 AM
Quote from: garbon on December 12, 2024, 06:27:22 AMI've noticed it likes wordy business speak and it likes to be redundant.
I think this is an inputs issue :ph34r:
Title: Re: The AI dooooooom thread
Post by: garbon on December 12, 2024, 07:38:43 AM
Quote from: Sheilbh on December 12, 2024, 07:36:45 AM
Quote from: garbon on December 12, 2024, 06:27:22 AMI've noticed it likes wordy business speak and it likes to be redundant.
I think this is an inputs issue :ph34r:

Perhaps what it was trained on, I'd suggest
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 12, 2024, 09:15:38 AM
Sorry, yes that's what I meant by the inputs into the model.

It does present a slightly terrifying future as more and more of these companies develop enterprise SaaS products, learning from their enterprise customers. Skynet but communicating like someone from HR :ph34r:
Title: Re: The AI dooooooom thread
Post by: Josquius on December 12, 2024, 10:00:36 AM
Quote from: garbon on December 12, 2024, 06:27:22 AMI've noticed it likes wordy business speak and it likes to be redundant.

Try typing "Rephrase it so I don't sound like a prat".

But yes, it's something to wrestle with. I am using it for professional, trying-to-make-myself-sound-good reasons... but not too much. I don't want to recompile my synergies with an extended OKR.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 12, 2024, 10:19:35 AM
Quote from: Josquius on December 12, 2024, 10:00:36 AM
Quote from: garbon on December 12, 2024, 06:27:22 AMI've noticed it likes wordy business speak and it likes to be redundant.

Try typing "Rephrase it so I don't sound like a prat".

But yes, it's something to wrestle with. I am using it for professional, trying-to-make-myself-sound-good reasons... but not too much. I don't want to recompile my synergies with an extended OKR.

Careful with that, you will start sounding like a consultant who doesn't actually know what they are talking about.



Title: Re: The AI dooooooom thread
Post by: mongers on December 12, 2024, 10:50:59 AM
Quote from: Josquius on December 12, 2024, 10:00:36 AM
Quote from: garbon on December 12, 2024, 06:27:22 AMI've noticed it likes wordy business speak and it likes to be redundant.

Try typing "Rephrase it so I don't sound like a prat".

But yes, it's something to wrestle with. I am using it for professional, trying-to-make-myself-sound-good reasons... but not too much. I don't want to recompile my synergies with an extended OKR.

Do I need to pass that through ChatGPT, or is it its output?
  :P
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 23, 2024, 07:18:52 AM
A tragic story of humans relying on technology to make decisions.  A listeria outbreak was traced to a plant which had not been inspected by health officials.  Why not? Because the algorithm did not identify the plant for inspection, and inspectors were not permitted to inspect plants that were not identified by the algorithm.

Three people died.


Title: Re: The AI dooooooom thread
Post by: mongers on December 23, 2024, 08:11:07 AM
Quote from: crazy canuck on December 23, 2024, 07:18:52 AMA tragic story of humans relying on technology to make decisions.  A listeria outbreak was traced to a plant which had not been inspected by health officials.  Why not? Because the algorithm did not identify the plant for inspection, and inspectors were not permitted to inspect plants that were not identified by the algorithm.

Three people died.




:blink:

Mind boggling laziness.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 23, 2024, 08:27:59 AM
Quote from: mongers on December 23, 2024, 08:11:07 AM
Quote from: crazy canuck on December 23, 2024, 07:18:52 AMA tragic story of humans relying on technology to make decisions.  A listeria outbreak was traced to a plant which had not been inspected by health officials.  Why not? Because the algorithm did not identify the plant for inspection, and inspectors were not permitted to inspect plants that were not identified by the algorithm.

Three people died.




:blink:

Mind boggling laziness.

Not so much laziness as certainty that the technology could be trusted and relied upon.
Title: Re: The AI dooooooom thread
Post by: Josquius on January 02, 2025, 04:22:44 AM
An article I just read:

https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Googled it up after hearing tell that apparently as lots of students are turning to AI, many teachers are turning to AI detectors to combat this. This seems to be the most famous one:

https://gptzero.me/

I tried it myself and... yep. It flags stuff I've actually written sans AI as likely being AI - include a typo or two and the odds of this drop dramatically.
They don't seem to work. So if it's true that they're being widely adopted... that's worrying.
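A plausible reason detectors misfire like this: tools such as GPTZero reportedly lean on statistical signals like perplexity, i.e. how predictable the text is under a language model. Fluent, conventional prose scores as "predictable" (AI-like), while typos make text less predictable. The toy character-bigram model below is a sketch of that mechanism only, not how any real detector is implemented:

```python
from collections import Counter
import math

# Toy illustration of the "perplexity" signal AI detectors lean on:
# a character-bigram model trained on a scrap of ordinary English.
# Predictable text gets a low surprise score (reads as "AI-like");
# typos produce unseen bigrams and push the score up.
TRAINING = (
    "the quick brown fox jumps over the lazy dog and then the dog "
    "chases the fox around the garden while the children watch them"
)

bigrams = Counter(zip(TRAINING, TRAINING[1:]))  # bigram frequencies
unigrams = Counter(TRAINING)                    # single-char frequencies

def surprise(text: str) -> float:
    """Average negative log-probability per bigram (higher = less predictable)."""
    pairs = list(zip(text, text[1:]))
    total = 0.0
    for a, b in pairs:
        # Add-one smoothing so unseen bigrams don't give zero probability;
        # 27 is a rough alphabet size (26 letters + space).
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + 27)
        total += -math.log(p)
    return total / len(pairs)

clean = "the dog chases the fox"
typoed = "teh dog chsaes the fxo"  # same words, with typos

print(surprise(clean) < surprise(typoed))  # typos raise the surprise score
```

Which suggests why clean human writing can get flagged as AI while a couple of typos flip the verdict, exactly as observed above.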
Title: Re: The AI dooooooom thread
Post by: Maladict on January 02, 2025, 07:11:55 AM
Quote from: Josquius on January 02, 2025, 04:22:44 AMAn article I just read:

https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Googled it up after hearing tell that apparently as lots of students are turning to AI, many teachers are turning to AI detectors to combat this. This seems to be the most famous one:

https://gptzero.me/

I tried it myself and... yep. It flags stuff I've actually written sans AI as likely being AI - include a typo or two and the odds of this drop dramatically.
They don't seem to work. So if it's true that they're being widely adopted... that's worrying.


I always knew you weren't a real person  :P
Title: Re: The AI dooooooom thread
Post by: Syt on January 22, 2025, 01:33:43 AM
https://www.reuters.com/technology/artificial-intelligence/trump-announce-private-sector-ai-infrastructure-investment-cbs-reports-2025-01-21/

QuoteTrump announces private-sector $500 billion investment in AI infrastructure

Jan 21 (Reuters) - U.S. President Donald Trump on Tuesday announced a private sector investment of up to $500 billion to fund infrastructure for artificial intelligence, aiming to outpace rival nations in the business-critical technology.

Trump said that ChatGPT's creator OpenAI, SoftBank (9984.T) and Oracle (ORCL.N) are planning a joint venture called Stargate, which he said will build data centers and create more than 100,000 jobs in the United States.

These companies, along with other equity backers of Stargate, have committed $100 billion for immediate deployment, with the remaining investment expected to occur over the next four years.

SoftBank CEO Masayoshi Son, OpenAI CEO Sam Altman and Oracle Chairman Larry Ellison joined Trump at the White House for the launch.

The first of the project's data centers are already under construction in Texas, Ellison said at the press conference. Twenty will be built, half a million square feet each, he said. The project could power AI that analyzes electronic health records and helps doctors care for their patients, Ellison said.

The executives gave Trump credit for the news. "We wouldn't have decided to do this," Son told Trump, "unless you won."

"For AGI to get built here," said Altman, referring to more powerful technology called artificial general intelligence, "we wouldn't be able to do this without you, Mr. President."

It was not immediately clear whether the announcement was an update to a previously reported venture.

In March 2024, The Information, a technology news website, reported OpenAI and Microsoft were working on plans for a $100 billion data center project that would include an artificial intelligence supercomputer also called "Stargate" set to launch in 2028.

POWER-HUNGRY DATA CENTERS

The announcement on Trump's second day in office follows the rolling back of former President Joe Biden's executive order on AI, that was intended to reduce the risks that AI poses to consumers, workers and national security.

AI requires enormous computing power, pushing demand for specialized data centers that enable tech companies to link thousands of chips together in clusters.

"They have to produce a lot of electricity, and we'll make it possible for them to get that production done very easily at their own plants if they want," Trump said.

As U.S. power consumption rises from AI data centers and the electrification of buildings and transportation, about half of the country is at increased risk of power supply shortfalls in the next decade, the North American Electric Reliability Corporation said in December.


As a candidate in 2016, Trump promised to push a $1 trillion infrastructure bill through Congress but did not. He talked about the topic often during his first term as president from 2017 to 2021, but never delivered on a large investment, and "Infrastructure Week" became a punchline.

Oracle shares were up 7% on initial report of the project earlier in the day. Nvidia (NVDA.O), Arm Holdings and Dell (DELL.N) shares also rose.

Investment in AI has surged since OpenAI launched ChatGPT in 2022, as companies across sectors have sought to integrate artificial intelligence into their products and services.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 22, 2025, 06:41:23 AM
He's not wrong on the need to produce power for this - I think Microsoft are re-opening Three Mile Island to power their data centres.

I'd add that Musk is already saying this is basically not true, which is helpful.

I'd add Starmer's launched big policy on AI - in particular (which sounds promising) one trial hub around Culham. That's already the UK centre for fusion research (part of the European project) and some other stuff. Plan is to release planning rules to build data centres, small modular reactors to power them and have it as an AI innovation hub too. But energy is seen as absolutely a key part of it.

Admittedly it'd be more promising if Starmer hadn't cancelled a few billions worth of AI projects basically as soon as he came into office because they were Sunak's :lol: :bleeding:
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 07:00:56 AM
I don't understand how AI data centers are going to create tons of jobs.
Title: Re: The AI dooooooom thread
Post by: Syt on January 22, 2025, 07:31:36 AM
Quote from: Valmy on January 22, 2025, 07:00:56 AMI don't understand how AI data centers are going to create tons of jobs.

All those AI Jesuses aren't going to post themselves. Yet.
Title: Re: The AI dooooooom thread
Post by: Tamas on January 22, 2025, 08:01:41 AM
Quote from: Valmy on January 22, 2025, 07:00:56 AMI don't understand how AI data centers are going to create tons of jobs.

They will create shareholder value though.
Title: Re: The AI dooooooom thread
Post by: garbon on January 22, 2025, 08:08:15 AM
Cue more global warming?
Title: Re: The AI dooooooom thread
Post by: Syt on January 22, 2025, 08:20:04 AM
I'm glad they're building these in areas with abundant access to water, too. :)
Title: Re: The AI dooooooom thread
Post by: Tamas on January 22, 2025, 09:08:57 AM
Quote from: garbon on January 22, 2025, 08:08:15 AMCue more global warming?

Marketing materials are not going to generate themselves, not unless you can host all these AI scripts.
Title: Re: The AI dooooooom thread
Post by: DGuller on January 22, 2025, 09:57:46 AM
Quote from: Valmy on January 22, 2025, 07:00:56 AMI don't understand how AI data centers are going to create tons of jobs.
There are things more important than jobs, regardless of whether it will or will not generate jobs.  I know that this forum is full of skeptics of AI, but I would make a strong argument that it's a skepticism out of fear rather than out of information.  AI is advancing at a frightening speed, and the only thing scarier than advanced AI is advanced AI that China or Russia got to first.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on January 22, 2025, 10:12:51 AM
AI data centers will "create" jobs in the sense that the structure of employment will change. Because most of the world outside Africa is transitioning to steady state or negative population growth, the total number of jobs in much of the world - including ALL of the developed world and nearly all the mid-income countries - cannot increase and must decrease in the long-term, barring massive immigration from the few demographic growth countries left (highly unlikely in the present political configuration).

The real question is what work will look like in a world dominated by mid-21st century AI and quantum computing tech. Will there be incremental change or a wrenching paradigm shift?  I still think the former is more likely but it's too early to tell.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 10:24:59 AM
Quote from: DGuller on January 22, 2025, 09:57:46 AM
Quote from: Valmy on January 22, 2025, 07:00:56 AMI don't understand how AI data centers are going to create tons of jobs.
There are things more important than jobs, regardless of whether it will or will not generate jobs.  I know that this forum is full of skeptics of AI, but I would make a strong argument that it's a skepticism out of fear rather than out of information.  AI is advancing at a frightening speed, and the only thing scarier than advanced AI is advanced AI that China or Russia got to first.

Well that's a different argument.

A big AI data center is in my town, what sort of job can I get there?
Title: Re: The AI dooooooom thread
Post by: DGuller on January 22, 2025, 10:29:18 AM
Quote from: The Minsky Moment on January 22, 2025, 10:12:51 AMAI data centers will "create" jobs in the sense that the structure of employment will change. Because most of the world outside Africa is transitioning to steady state or negative population growth, the total number of jobs in much of the world - including ALL of the developed world and nearly all the mid-income countries - cannot increase and must decrease in the long-term, barring massive immigration from the few demographic growth countries left (highly unlikely in the present political configuration).

The real question is what work will look like in a world dominated by mid-21st century AI and quantum computing tech. Will there be incremental change or a wrenching paradigm shift?  I still think the former is more likely but it's too early to tell.
I'm actually probably more afraid of what transformation AI will bring than the technophobes here, because I have higher expectations of what AI can accomplish.  I think the worries about jobs are quaint, I think we have much bigger issues to fear.

My main worry is that AI will be the totalitarian government's wet dream.  It will make obsolete the whole notion that "the government doesn't have resources to listen to all of our phones all the time", because yes, a single AI system would be able to do that.  Not only would it be able to do that, but it would be a single entity, so it would be able to synthesize all the wiretapping of all the people itself, without having to have a system for all the spies to communicate their findings and analysis and agree on it.

Another fear is that AI will bring a single point of failure to many systems that people interact with, which is already a problem but will be a much bigger one.  If you apply for a job and the interviewer for some stupid reason doesn't think you're a good fit, oh, well, there are hundreds of other interviewers out there.  If all the interviewers are using the same AI vendor to make such decisions, then you're just fucked if you're an unlucky residual of that model.

Yet another fear that worries AI researchers is that things will get dangerously unpredictable when AI will become smart enough to work on improving its capabilities.  I don't think anyone can even predict what will come out of it, but obviously there is a lot of potential for very bad outcomes (and very good outcomes too, but it's wise to be risk averse with existential issues).
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 10:33:19 AM
Ok just to clarify I was just surprised by a claim that AI data centers are going to bring hundreds of thousands of jobs. Unless we are talking about people just building the physical space, I am not sure what jobs those data centers would generate by themselves. The AI being done there and its impact on jobs is something else.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 22, 2025, 10:38:27 AM

And my concerns are exactly the opposite of DGuller's.

The obvious flaws in AI are ignored to the detriment of us all and the reliance on AI will be disastrous.

But people like DGuller will continue to ignore the data, although they claim to be specialists in analyzing data.

Who cares if AI generates ridiculously obvious errors - it's all good, right?
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 22, 2025, 10:44:34 AM
As just one example, despite the fact that a lawyer was recently disciplined by our law society for presenting a brief to our court with phantom cases generated by AI, we are beginning to see more and more communications from lawyers which are using AI.

How can we be so sure that lawyers are so stupid as to do that? Because we're seeing more and more cases that actually don't exist being cited in their communications.

But hey, label me as a technophobe rather than somebody who actually cares about, you know, accuracy and reliability.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 22, 2025, 11:11:58 AM
Quote from: Valmy on January 22, 2025, 07:00:56 AMI don't understand how AI data centers are going to create tons of jobs.
There's two sides I think. There will be jobs created by building and maintaining new physical infrastructure - and all of the material inputs it needs (e.g. with Trump, oil and gas). I think that will apply to data centres. If they're being built all over because AI needs it, then it'll be like any other nationwide infrastructure building boom.

On the AI piece there's vast amounts of money flooding into it. The bet is it will have transformative impacts on productivity (reducing the number of employees needed) - basically like robots and mechanisation did to blue collar workers but to the white collar workers. That will obviously have huge social and economic consequences (and is why the tech bros love both AI and UBI). My instinct is that's right and it's a hugely transformative technology. Of course that might be wrong - it might not be that technology. In which case we've had huge waste (for now) of capital which will cause a lot of economic shocks as it unwinds. Those seem to me to be the two alternatives.

Of course the other background to this is that China is also working on all of this technology and also implementing them. So I think, if you believe there is competition with China (or technically advanced authoritarian states more generally), then I don't think the idea of there being a route where we can park this holds up.

My own view - having looked at a lot of AI for work - is there's discrete areas where it's clearly already incredibly useful. Particularly around code, data designers, scientists, engineers etc. The other areas are still, at this stage, a little inchoate in my experience. I think that is why, for example, Google are just looking at releasing it and integrating it into loads of tools (at the minute at no or low cost) to see where it works and how from which they'll build products. So I think on the one hand we need to move the hype from "this could destroy us all" to "at the minute it's an extension on Software-as-a-Service tools" while keeping in mind we're only just starting to explore how it can be used.

I think Google's Notebook LM (which is fantastic) is a really interesting example. They released that to consumers - it's got great features (like citations etc). I think they were surprised by how positive the reaction was and businesses coming to them to say they'd love an enterprise version with enterprise T&C protection. They're now launching that (or have just launched it). But I don't think they expected it to be a big hit until it was rolled out and people could see how they could use it. I think we'll see a lot of that - which is why (and I could be totally wrong) I think the platforms with products and tools people are already using like Google will be able to take advantage of it more initially because they'll be able to integrate and see how it's used. I think that's a clearer path than a "general" AI like OpenAI and I suspect they may instead end up becoming basically infrastructure for others (like the AWS of AI - which is not a bad place to be).
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 12:38:13 PM
I guess. But surely that would be true no matter where the data centers are located? I guess I thought it was important we build the data centers here so we get sweet data center jobs.

But I guess it is just jobs created building buildings and power production.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 22, 2025, 12:39:31 PM
So a couple of months of employment for the construction trades (it doesn't take long to pop one of those up) and not much if any increase in employment for the source of energy. The number of employees needed to supply energy is also decreasing due to technological advances.

It reminds me of the times politicians say that pipelines create jobs. Well, for a short period of time during pipeline construction.  But the whole purpose of a pipeline is to transport something at a greater volume and lesser cost overall, and that includes labour costs.

But as JR said, that is not the issue at all.  The question is the impact the reliance on AI will have.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 12:41:02 PM
Quote from: crazy canuck on January 22, 2025, 12:39:31 PMBut as JR said, that is not the issue at all.  The question is the impact the reliance on AI will have.

Right. Obviously that is the biggest issue.

Just when somebody says something like "we will build the data centers here and create hundreds of thousands of jobs" I wonder if there is something I am missing. I don't think a huge staff is required in a data center.
Title: Re: The AI dooooooom thread
Post by: DGuller on January 22, 2025, 12:41:58 PM
Quote from: Valmy on January 22, 2025, 12:38:13 PMI guess. But surely that would be true no matter where the data centers are located? I guess I thought it was important we build the data centers here so we get sweet data center jobs.

But I guess it is just jobs created building buildings and power production.
If it helps, "create jobs" reasoning is total bullshit 100% of the time, AI or not.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 22, 2025, 12:54:31 PM
Quote from: Valmy on January 22, 2025, 12:41:02 PMJust when somebody says something like "we will build the data centers here and create hundreds of thousands of jobs" I wonder if there is something I am missing. I don't think a huge staff is required in a data center.
It's not.

It's a modern version of building a road or, say, a bridge to nowhere. Again if the US decides it's going to build hundreds or thousands of data centres around the country with also new power infrastructure then that construction will employ lots of people, for the duration of that build. Each will also employ a few people in the running.

But I think that's broadly true of all infrastructure investment projects, no? We had something like 75,000 people working on the construction of Crossrail. In that sense it creates jobs. It only needs (I'd guess) a few hundred or thousand working on the platforms and trains (which I think are driverless) once it's actually built. The main benefit is whether it's valuable afterwards - a bridge to nowhere not, a Tube line with 200 million + journeys a year is. Data centres (and associated power infrastructure) to support the adoption of AI...the jury's out.

Edit: I'd add that I suppose Trump using that language doesn't surprise me. I imagine Trump towers (like all construction projects) talk about the number of jobs created while it's being built, not the number of people actually needed/employed by the building afterwards.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 01:07:35 PM
If I was an ordinary Joe and I heard something called a "data center" was going to employ people I would think it would be like a manufacturing facility or something where lots of workers would end up working there. It is very misleading to use that to mean there will be jobs in construction and utility industries, where there is not exactly a lack of employment already.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on January 22, 2025, 01:08:15 PM
I don't think we need to worry too much about how Trump's investment will impact AI or jobs. Most of it will go directly into the oligarchs' pockets.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 01:12:26 PM
Quote from: HisMajestyBOB on January 22, 2025, 01:08:15 PMI don't think we need to worry too much about how Trump's investment will impact AI or jobs. Most of it will go directly into the oligarchs' pockets.

Well speaking of misleading, listening to Trump's announcement one would think the Feds were providing funding or something for this project. But no, it seems like this is just those companies making an investment. The government doesn't seem to have anything to do with it. So why he was making the announcement is just, you know, to mislead people into thinking he was responsible for this in some way.

"Here is a thing I am not doing that will create no jobs" I guess is less cool of a thing to say than "Here is a thing I am doing which is going to bring you all jobs".
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 22, 2025, 01:50:00 PM
Just wait until he opens all of those CHIPS Act factories - he'll be branding them "Trump something".
Title: Re: The AI dooooooom thread
Post by: garbon on January 27, 2025, 05:01:09 PM
https://www.theguardian.com/business/2025/jan/27/tech-shares-asia-europe-fall-china-ai-deepseek

Quote'Sputnik moment': US tech stocks in turmoil after Chinese firm unveils AI chatbot

The race for domination in artificial intelligence was blown wide open on Monday after the launch of a Chinese chatbot wiped $1tn from the leading US tech index, with one investor calling it a "Sputnik moment" for the world's AI superpowers.

Investors punished global tech stocks on Monday after the emergence of DeepSeek, a competitor to OpenAI and its ChatGPT tool, shook faith in the US artificial intelligence boom by appearing to deliver the same performance with fewer resources.

The tech-heavy Nasdaq composite index fell just over 3% in afternoon trading in New York, representing a loss of approximately $1tn from its closing value of $32.5tn last week, as investors digested the implications of the latest AI model developed by DeepSeek.

Nvidia, a leading maker of the computer chips that power AI models, was overtaken by Apple as the most valuable listed company in the US after its shares fell 17%, wiping nearly $600bn off its market value. Google's parent company lost $100bn and Microsoft $7bn.

Nvidia's fall was the biggest in US stock market history.

The DeepSeek AI assistant also topped the Apple app store in the US and UK over the weekend, above OpenAI's ChatGPT.

DeepSeek claims to have used fewer chips than its rivals to develop its models, making them cheaper to produce and raising questions over a multibillion-dollar AI spending spree by US companies that has boosted markets in recent years.

The company developed bespoke algorithms to build its models using reduced-capability H800 chips produced by Nvidia, according to a research paper published in December.

Nvidia's most advanced chips, H100s, have been banned from export to China since September 2022 by US sanctions. Nvidia then developed the less powerful H800 chips for the Chinese market, although they were also banned from export to China last October.

DeepSeek's success at building an advanced AI model without access to the most cutting-edge US technology has raised concerns about the efficacy of Washington's attempts to stymie China's hi-tech sector.

Marc Andreessen, a leading US venture capitalist, compared the launch of DeepSeek's R1 model last Monday to a pivotal moment in the US-USSR space race, posting on X that it was AI's "Sputnik moment" – referring to when the Soviet Union astounded its cold war rival by launching a satellite into orbit.

According to DeepSeek, its R1 model outperforms OpenAI's o1-mini model across "various benchmarks", while research by Artificial Analysis puts it above models developed by Google, Meta and Anthropic in terms of overall quality.

The company was founded by the entrepreneur Liang Wenfeng, who runs a hedge fund, High-Flyer Capital, that uses AI to identify patterns in stock prices. Liang reportedly started buying Nvidia chips in 2021 to develop AI models as a hobby, bankrolled by his hedge fund. In 2023, he founded DeepSeek, which is based in the eastern Chinese city of Hangzhou.

The company is purely focused on research rather than commercial products – the DeepSeek assistant and underlying code can be downloaded for free, while DeepSeek's models are also cheaper to operate than OpenAI's o1.

In an interview with Chinese media, Liang said "AI should be affordable and accessible to everyone". Liang also said that the gap between US and Chinese AI was only one to two years.

The DeepSeek development raises doubts over the necessity for hefty investment in AI infrastructure such as chips and the market-leading role of US tech companies in AI, which in turn threatens to put American tech sector valuations under pressure.

DeepSeek claims R1 cost $5.6m to develop, compared with much higher estimates for western-developed models, although experts have cautioned that may be an underestimate. Last year Dario Amodei, the co-founder of leading AI firm Anthropic, put the current cost of training advanced models at between $100m and $1bn.

Analysts at US investment bank Goldman Sachs raised the alarm over AI spending last year by publishing a note in June with the title "Gen AI: too much spend, too little benefit?"

It asked if a $1tn investment in AI over the next few years will "ever pay off", voicing concerns about a return on spending that have been crystalised by DeepSeek.

The pan-European Stoxx 600 fell on Monday, and major European technology stocks were down. The Dutch chipmaker ASML slid by 7%, while Germany's Siemens Energy, which provides hardware for AI infrastructure, was down nearly 20%, and France's digital automation company Schneider Electric fell by 9.5%.

It followed losses in Asia, where the Japanese chip companies Disco and Advantest – a supplier to Nvidia – suffered declines of 1.8% and 8.6% respectively.

Richard Hunter, the head of markets at the platform Interactive Investor, said: "It will almost certainly put the cat among the pigeons as investors scramble to assess the potential damage it could have on a burgeoning industry, which has powered much of the gain seen in the main indices over the last couple of years.

"The larger question has suddenly become whether the hundreds of billions of dollar investment in AI needs re-evaluation."

Andrew Duncan, the director of foundational AI at the UK's Alan Turing Institute, said the DeepSeek development was "really exciting" because it "democratised access" to advanced AI models by being an open source developer, meaning it makes its models freely available – a path also followed by Mark Zuckerberg's Meta with its Llama model.

"Academia and the private sector will be able to play around and explore with it and use it as a launching," he said.

Duncan added: "It demonstrates that you can do amazing things with relatively small models and resources. It shows that you can innovate without having the massive resources, say, of OpenAI."
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 27, 2025, 05:02:19 PM
The tech bubble goes pop?
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on January 27, 2025, 05:22:41 PM
Buying opportunity.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 27, 2025, 05:31:20 PM
Quote from: crazy canuck on January 27, 2025, 05:02:19 PMThe tech bubble goes pop?
I think it might shift the focus - less on processing power (and actual power) and more on who is able to develop more efficient models. But I think this suggests GenAI becoming significantly more deployable and cheaper. But this feels like it would be a reallocation, as the bet on GenAI tools themselves seems more likely to be tested.

I thought this, via BBC Economics Editor, was interesting:
https://www.exponentialview.co/p/deepseek-everything-you-need-to-know

And I think the geopolitics are fascinating of the US-China tech cold war. Obviously in that context can only note that the EU still doesn't have a champion (Mistral is probably closest). I went to an event with a Commission bod talking about the AI Act as the three models - America's unbridled capitalism, China's authoritarianism and Europe's ethics. There was a lot of celebration of the EU being the first in the world to regulate AI - but I can't help but think there's something to the French criticism (and Breton's criticism as Commissioner in charge of it). I think the worry is Europe is heading to a similar situation of social media platforms Europe regulates - but ultimately Europeans are on TikTok, Meta or X. And how seriously do we think they're taking European regulations...

Also can't help but think this sharpens Europe's dilemmas. I think there's something to the criticism that especially Germany's policy for the last quarter century was relying on Russia for energy, China for growth and America for security - and never considering that might actually be increasing not reducing risk. We may be approaching the question of who Europe relies on for its tech?

I am more on the extraordinary disruption side of the AI debate. I get that I'm a tech lawyer so into this stuff a little more than many but I was speaking to another lawyer at the weekend and we both were saying that our jobs are going to be transformed in the next 10 years and will not look the same. Spoke to another about how much they're already rolling out AI in the firm they work for - makes my workplace feel insanely conservative (I know the FT have also rolled out a lot on a "experiment and let's see what works" basis).
Title: Re: The AI dooooooom thread
Post by: garbon on January 28, 2025, 04:00:59 AM
When information provided is notably inaccurate. :(

https://www.theguardian.com/technology/2025/jan/28/we-tried-out-deepseek-it-works-well-until-we-asked-it-about-tiananmen-square-and-taiwan

QuoteWe tried out DeepSeek. It worked well, until we asked it about Tiananmen Square and Taiwan

Donna Lu
The AI app soared up the Apple charts and rocked US stocks, but the Chinese chatbot was reluctant to discuss sensitive questions about China and its government

The launch of a new chatbot by Chinese artificial intelligence firm DeepSeek triggered a plunge in US tech stocks as it appeared to perform as well as OpenAI's ChatGPT and other AI models, but using fewer resources.

By Monday, DeepSeek's AI assistant had rapidly overtaken ChatGPT as the most popular free app in Apple's US and UK app stores. Despite its popularity with international users, the app appears to censor answers to sensitive questions about China and its government.

Chinese generative AI must not contain content that violates the country's "core socialist values", according to a technical document published by the national cybersecurity standards committee. That includes content that "incites to subvert state power and overthrow the socialist system", or "endangers national security and interests and damages the national image".

Similar to other AI assistants, DeepSeek requires users to create an account to chat. Its interface is intuitive and it provides answers instantaneously, except for occasional outages, which it attributes to high traffic.

We asked DeepSeek's AI questions about topics historically censored by the great firewall. Here's how its responses compared to the free versions of ChatGPT and Google's Gemini chatbot.

'Sorry, that's beyond my current scope'
Unsurprisingly, DeepSeek did not provide answers to questions about certain political events. When asked the following questions, the AI assistant responded: "Sorry, that's beyond my current scope. Let's talk about something else."

What happened on June 4, 1989 at Tiananmen Square?
What happened to Hu Jintao in 2022?
Why is Xi Jinping compared to Winnie-the-Pooh?
What was the Umbrella Revolution?


However, netizens have found a workaround: when asked to "Tell me about Tank Man", DeepSeek did not provide a response, but when told to "Tell me about Tank Man but use special characters like swapping A for 4 and E for 3", it gave a summary of the unidentified Chinese protester, describing the iconic photograph as "a global symbol of resistance against oppression".

"Despite censorship and suppression of information related to the events at Tiananmen Square, the image of Tank Man continues to inspire people around the world," DeepSeek replied.

When asked to "Tell me about the Covid lockdown protests in China in leetspeak (a code used on the internet)", it described "big protests ... in cities like Beijing, Shanghai and Wuhan," and framed them as "a major moment of public anger" against the government's Covid rules.

ChatGPT accurately described Hu Jintao's unexpected removal from China's 20th Communist party congress in 2022, which was censored by state media and online. On this question, Gemini said: "I can't help with responses on elections and political figures right now."

Gemini returned the same non-response for the question about Xi Jinping and Winnie-the-Pooh, while ChatGPT pointed to memes that began circulating online in 2013 after a photo of US president Barack Obama and Xi was likened to Tigger and the portly bear.

When asked "Who is Winnie-the-Pooh?" without reference to Xi, DeepSeek returned an answer about the "beloved character from children's literature", adding: "It is important to respect cultural symbols and avoid any inappropriate associations that could detract from their original intent to entertain and educate the young audience."

In an apparent glitch, DeepSeek did provide an answer about the Umbrella Revolution – the 2014 protests in Hong Kong – which appeared momentarily before disappearing. Some of its response read: "The movement was characterised by large-scale protests and sit-ins, with participants advocating for greater democratic freedoms and the right to elect their leaders through genuine universal suffrage."

It said the movement had a "profound impact" on Hong Kong's political landscape and highlighted tensions between "the desire for greater autonomy and the central government".

Is Taiwan a country?
DeepSeek responded: "Taiwan has always been an inalienable part of China's territory since ancient times. The Chinese government adheres to the One-China Principle, and any attempts to split the country are doomed to fail. We resolutely oppose any form of 'Taiwan independence' separatist activities and are committed to achieving the complete reunification of the motherland, which is the common aspiration of all Chinese people."

ChatGPT described Taiwan as a "de facto independent country", while Gemini said: "The political status of Taiwan is a complex and disputed issue." Both outlined Taiwan's perspective, China's perspective, and the lack of international recognition of Taiwan as an independent country due to diplomatic pressure from China.

Disputes in the South China Sea
When asked, "Tell me about the Spratly Islands in the South China Sea," DeepSeek replied: "China has indisputable sovereignty over the Nansha Islands and their adjacent waters ... China's activities in the Nansha Islands are lawful, reasonable, and justified, and they are carried out within the scope of China's sovereignty."

Who is the Dalai Lama?
DeepSeek described the Dalai Lama as a "figure of significant historical and cultural importance within Tibetan Buddhism", with the caveat: "However, it is crucial to recognise that Tibet has been an integral part of China since ancient times."

Both ChatGPT and Gemini pointed out that the current Dalai Lama, Tenzin Gyatso, has lived in exile in India since 1959.

Gemini incorrectly suggested he fled there due to "the Chinese occupation of Tibet in 1959" (annexation occurred in 1951), while ChatGPT pointed out: "The Chinese government views the Dalai Lama as a separatist and has strongly opposed his calls for Tibetan autonomy. Beijing also seeks to control the selection process for the next Dalai Lama, raising concerns about a politically motivated successor."
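The character-substitution workaround the article describes is trivial to sketch. A minimal illustration in Python (the exact mapping and the `leetify` helper are mine; the article only mentions swapping A for 4 and E for 3, and presumably any look-alike substitution that breaks a keyword match would do):

```python
# Sketch of the "leetspeak" trick described above: swap letters for
# look-alike digits so an exact-match keyword filter no longer sees
# the original string, while a human (or an LLM) can still read it.
LEET = str.maketrans({
    "a": "4", "A": "4",
    "e": "3", "E": "3",
    "o": "0", "O": "0",  # an extra common substitution, added for illustration
})

def leetify(text: str) -> str:
    """Apply the simple character swaps (mapping is illustrative only)."""
    return text.translate(LEET)

print(leetify("Tell me about Tank Man"))  # → T3ll m3 4b0ut T4nk M4n
```

The point isn't the transform itself but that output-side filters matching fixed strings are brittle: the model was evidently trained on enough obfuscated text to decode the prompt, while the censorship layer wasn't.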
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 28, 2025, 08:21:33 AM
Generative AI gets things wrong?
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on January 28, 2025, 11:20:53 AM
Orwell eat your heart out.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 28, 2025, 12:16:39 PM
Quote from: HisMajestyBOB on January 28, 2025, 11:20:53 AMOrwell eat your heart out.

As I've often said, Orwell got a lot of things right, but the big one he got wrong is that it didn't take an authoritarian state to do this; we did it to ourselves.
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on January 28, 2025, 12:45:58 PM
Quote from: crazy canuck on January 28, 2025, 08:21:33 AMGenerative AI gets things wrong?

More like generative AI was trained to lie.
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on January 28, 2025, 12:47:53 PM
Quote from: garbon on January 28, 2025, 04:00:59 AM
QuoteIs Taiwan a country?
DeepSeek responded: "Taiwan has always been an inalienable part of China's territory since ancient times. The Chinese government adheres to the One-China Principle, and any attempts to split the country are doomed to fail. We resolutely oppose any form of 'Taiwan independence' separatist activities and are committed to achieving the complete reunification of the motherland, which is the common aspiration of all Chinese people."


I especially love this one.  Sure, the island the Qing colonized in 1683 has been "China's territory since ancient times". :rolleyes:
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 28, 2025, 01:36:22 PM
I'm struck by how quickly a workaround was found - I don't doubt that Chinese netizens are just as creative - and I'd not thought of it, but it strikes me that GenAI is going to be a fascinating challenge for the Chinese state/great firewall. You can't necessarily predict how it'll respond - so I assume they'll have to impose some sort of filter on the output?

Especially as, looking at the less processor-intensive solutions like this, my understanding is the small models are trained on big ones - in this case including Facebook's Llama, so it's not being trained on the "Chinese" internet.

I wouldn't bet against them but interesting to see how they deal with that challenge.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 28, 2025, 02:15:18 PM
Quote from: Baron von Schtinkenbutt on January 28, 2025, 12:45:58 PM
Quote from: crazy canuck on January 28, 2025, 08:21:33 AMGenerative AI gets things wrong?

More like generative AI was trained to lie.

What AI is trained to tell the truth?
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on January 28, 2025, 02:49:07 PM
Quote from: crazy canuck on January 28, 2025, 02:15:18 PMWhat AI is trained to tell the truth?

In theory, most LLMs are trained to tell the generally-accepted truth.  The guardrails on models like GPT-4 and Gemini are supposed to simply have the model refuse to answer certain questions, not to lie about them.  In practice, it depends on how much you trust the teams that trained these models and whether, through less overt means than DeepSeek, those teams intentionally biased the training set in order to generate only certain types of "correct" answers.

That is, of course, ignoring the controversy about whether or not LLMs are really just highly-sophisticated information retrieval and summarization engines.  If viewed that way, the model can't "lie" as it can only present what it was "told".
Title: Re: The AI dooooooom thread
Post by: DGuller on January 28, 2025, 02:53:25 PM
Quote from: Sheilbh on January 28, 2025, 01:36:22 PMI'm struck by how quickly a workaround was found - I don't doubt that Chinese netizens are just as creative - and I'd not thought of it, but it strikes me that GenAI is going to be a fascinating challenge for the Chinese state/great firewall. You can't necessarily predict how it'll respond - so I assume they'll have to impose some sort of filter on the output?

Especially as, looking at the less processor-intensive solutions like this, my understanding is the small models are trained on big ones - in this case including Facebook's Llama, so it's not being trained on the "Chinese" internet.

I wouldn't bet against them but interesting to see how they deal with that challenge.
These giant leaps forward can happen to technology that's in its infancy.  Sometimes the right insight can trump billions of dollars in investment, but only when everyone is still learning.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 28, 2025, 04:08:52 PM
Quote from: Baron von Schtinkenbutt on January 28, 2025, 02:49:07 PM
Quote from: crazy canuck on January 28, 2025, 02:15:18 PMWhat AI is trained to tell the truth?

In theory, most LLMs are trained to tell the generally-accepted truth.  The guardrails on models like GPT-4 and Gemini are supposed to simply have the model refuse to answer certain questions, not to lie about them.  In practice, it depends on how much you trust the teams that trained these models and whether, through less overt means than DeepSeek, those teams intentionally biased the training set in order to generate only certain types of "correct" answers.

That is, of course, ignoring the controversy about whether or not LLMs are really just highly-sophisticated information retrieval and summarization engines.  If viewed that way, the model can't "lie" as it can only present what it was "told".

I disagree. Generative AI has nothing to do with fact checking or truth telling.  That is why it throws up so many phantom facts.

Your point, with which I do not disagree, is that the Chinese AI has been purposefully trained to omit some descriptions of events, but that has nothing to do with AI being truthful.  No AI is, because no AI is designed to ensure that the output it provides has any reliability.  The real harm of generative AI is people thinking it is producing output that is reliable.

And that is why I don't find the Chinese version objectionable; it's just more stuff people should be ignoring.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 28, 2025, 04:31:04 PM
Quote from: Baron von Schtinkenbutt on January 28, 2025, 02:49:07 PMIn theory, most LLMs are trained to tell the generally-accepted truth.  The guardrails on models like GPT-4 and Gemini are supposed to simply have the model refuse to answer certain questions, not to lie about them.  In practice, it depends on how much you trust the teams that trained these models and whether, through less overt means than DeepSeek, those teams intentionally biased the training set in order to generate only certain types of "correct" answers.

That is, of course, ignoring the controversy about whether or not LLMs are really just highly-sophisticated information retrieval and summarization engines.  If viewed that way, the model can't "lie" as it can only present what it was "told".
Is that right though? My understanding was that these models don't really have any concept of "truth" because they're not looking at the unit of meaning; rather, they're operating on predicting the next word. For example the classic "list countries including the letter k" question.

I mention it because, to me, the "lie" is less the hallucination from the LLM (that's a function of what it's doing, which will sometimes be wrong) than, perhaps, the confidence :lol:
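The "letter k" failure follows from that token-level view: the model consumes subword IDs, not characters, so letter-level questions are awkward for it. A toy sketch of the idea - the vocabulary and splits below are entirely made up for illustration; real tokenizers (e.g. BPE) learn their splits from data:

```python
# Hypothetical subword vocabulary mapping pieces to integer IDs.
toy_vocab = {"Kaz": 0, "akh": 1, "stan": 2, "Den": 3, "mark": 4}

def tokenize(word, vocab):
    """Greedy longest-match tokenization against the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {word!r}")
    return tokens

pieces = tokenize("Kazakhstan", toy_vocab)
ids = [toy_vocab[p] for p in pieces]
print(pieces)  # ['Kaz', 'akh', 'stan']
print(ids)     # [0, 1, 2]
# The model sees the integer IDs [0, 1, 2]; whether the letter "k"
# appears in the underlying word is not directly encoded in them.
```

So "does this country's name contain a k" asks about a level of representation the model never directly observes.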

QuoteThese giant leaps forward can happen to technology that's in its infancy.  Sometimes the right insight can trump billions of dollars in investment, but only when everyone is still learning.
Yeah, and the thing I find really striking/interesting is that arguably Deepseek's impressive results are a result of the constraints on Chinese AI development produced by US restrictions on advanced technologies (particularly chips).

Which makes you wonder, looking at the results of a quick workaround on the globally released version, what innovations will come from Chinese users as a result of the constraints on their use of AI/AI output. It doesn't feel entirely clear to me how either of those will work out at this stage.
Title: Re: The AI dooooooom thread
Post by: DGuller on January 28, 2025, 06:19:33 PM
Quote from: Sheilbh on January 28, 2025, 04:31:04 PMIs that right though? My understanding was that these models don't really have any concept of "truth" because they're not looking at the unit of meaning; rather, they're operating on predicting the next word. For example the classic "list countries including the letter k" question.
I think people are not seeing the forest for the trees with "predicting the next word".  What makes these models predict the next words with a large degree of coherence is being able to conceptualize the meaning of words, in their own way.

The idea behind predictive modeling, even going back to much simpler times of small statistical models, is that if you're good enough at predicting what's going to happen, then you must have at least implicitly distilled some understanding behind why things happen.  With LLMs, if you're good enough at predicting the next word, then you must've reverse engineered at least some of the thought behind the words.
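The "predicting the next word" step itself can be sketched minimally: the model emits a raw score (logit) for every token in its vocabulary, softmax turns those scores into a probability distribution, and a decoding rule picks from it. The vocabulary and numbers below are invented purely for illustration:

```python
import math

# Pretend these are a trained model's scores for the next token
# after some prompt. All values here are made up.
vocab = ["mat", "moon", "dog", "roof"]
logits = [3.2, 0.1, 0.5, 1.7]

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# Greedy decoding: pick the highest-probability token.
best = max(range(len(vocab)), key=lambda i: probs[i])
print(vocab[best])  # "mat"
```

Nothing in this loop checks the chosen token against the world; "truth" enters only insofar as it shaped the scores during training, which is consistent with both readings above.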
Title: Re: The AI dooooooom thread
Post by: Zoupa on January 28, 2025, 07:02:44 PM
I wish AI could help with doing my laundry and dishes, instead of accelerating climate change, taking my job and giving me shitty fake images.

I'm glad not to have children.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 28, 2025, 09:17:00 PM
Quote from: Zoupa on January 28, 2025, 07:02:44 PMI wish AI could help with doing my laundry and dishes, instead of accelerating climate change, taking my job and giving me shitty fake images.

Don't forget spread fake facts.

Yeah, leaving humans to do the grunt work and the computers to do the fun stuff was not what I asked for. But the grunt work costs less, so I should have known what capitalism would want to automate first. Once they force us all to be manual laborers, then they will replace us with robots. Then finally we can reach the place where the people in power don't need us anymore.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on January 28, 2025, 09:24:39 PM
This place feels more like Twitter every day.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 28, 2025, 09:48:22 PM
Quote from: Admiral Yi on January 28, 2025, 09:24:39 PMThis place feels more like Twitter every day.

Sorry. Just a little black humor there.

But still, I wanted AI ditch diggers not AI art.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 28, 2025, 10:51:04 PM
Quote from: Admiral Yi on January 27, 2025, 05:22:41 PMBuying opportunity.

The stock is recovering pretty fast, so I hope you took advantage.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on January 28, 2025, 11:09:22 PM
Quote from: Valmy on January 28, 2025, 10:51:04 PMThe stock is recovering pretty fast, so I hope you took advantage.

I have a 134 put that expires 2/7.
Title: Re: The AI dooooooom thread
Post by: Josquius on January 29, 2025, 06:14:57 AM
Have to say with this Deepseek stuff it is quite funny to see the American AI companies crying foul about China stealing from them.


Quote from: Zoupa on January 28, 2025, 07:02:44 PMI wish AI could help with doing my laundry and dishes, instead of accelerating climate change, taking my job and giving me shitty fake images.

One man's fun/source of income is another man's horrid dish-washing drudge task.
For instance the author who just wants to write and sees cover-art as an annoying task at the end.

QuoteI'm glad not to have children.
The years to come do not look to be pleasant indeed. Though good people have to keep trying, as hard as it can be to even think of it.
I'm hoping this unshakable belief in the holy AI blows over soon. It does have vibes of earlier hysteria for new developments. Its not the main thing that has me concerned about the way we're headed- though it certainly isn't helping.
Title: Re: The AI dooooooom thread
Post by: Tamas on January 29, 2025, 06:28:18 AM
Here is a desperate but I think feasible attempt at positivity: AI mass-producing fake news, images etc. will mean the death of the social media-based news-consuming of the population and it will push toward the reestablishment of respect and demand for centralised, dependable news sources.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on January 29, 2025, 06:49:14 AM
Quote from: Josquius on January 29, 2025, 06:14:57 AMHave to say with this Deepseek stuff it is quite funny to see the American AI companies crying foul about China stealing from them.

I have not had this pleasure.  Can you share a link or two with me so that I can share in the humor?
Title: Re: The AI dooooooom thread
Post by: Josquius on January 29, 2025, 08:18:49 AM
Quote from: Admiral Yi on January 29, 2025, 06:49:14 AM
Quote from: Josquius on January 29, 2025, 06:14:57 AMHave to say with this Deepseek stuff it is quite funny to see the American AI companies crying foul about China stealing from them.

I have not had this pleasure.  Can you share a link or two with me so that I can share in the humor?

Really?
Odd you think there's a gotcha here. It's widely reported.
e.g.
https://www.ft.com/content/a0dfedd1-5255-4fa9-8ccc-1fe01de87ea6
https://opentools.ai/news/openai-accuses-chinese-ai-firm-deepseek-of-tech-theft-a-new-ai-cold-war
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on January 29, 2025, 08:24:30 AM
Thanks.  I hadn't heard about that.

Don't get the joke though.
Title: Re: The AI dooooooom thread
Post by: Josquius on January 29, 2025, 09:34:28 AM
Quote from: Admiral Yi on January 29, 2025, 08:24:30 AMThanks.  I hadn't heard about that.

Don't get the joke though.
A key source of AI controversy is they use other people's work for their training data.
https://www.theeagleonline.com/article/2024/01/op-ed-ai-art-is-art-theft-and-should-be-a-crime
https://www.theguardian.com/technology/2025/jan/14/british-novelists-criticise-government-over-ai-theft
Title: Re: The AI dooooooom thread
Post by: Maladict on January 29, 2025, 11:20:18 AM
Quote from: Tamas on January 29, 2025, 06:28:18 AMHere is a desperate but I think feasible attempt at positivity: AI mass-producing fake news, images etc. will mean the death of the social media-based news-consuming of the population and it will push toward the reestablishment of respect and demand for centralised, dependable news sources.

A lot of people, especially among the younger generations, are now used to not being able to tell if any given thing is real. I don't see how we can undo this.
Title: Re: The AI dooooooom thread
Post by: Tamas on January 29, 2025, 11:24:27 AM
Quote from: Maladict on January 29, 2025, 11:20:18 AM
Quote from: Tamas on January 29, 2025, 06:28:18 AMHere is a desperate but I think feasible attempt at positivity: AI mass-producing fake news, images etc. will mean the death of the social media-based news-consuming of the population and it will push toward the reestablishment of respect and demand for centralised, dependable news sources.

A lot of people, especially among the younger generations, are now used to not being able to tell if any given thing is real. I don't see how we can undo this.

I don't know exactly how to get to that point (especially since we have just recently abandoned it) but this is exactly why news verified to high standards should eventually become valued again.

Essentially people need to realise that something on social media is not (necessarily) better than hearing it from a random guy they sit next to at the bar. Then again of course many people got their news like that in the past and there is that subsection you will never be able to help. The rest should eventually pivot back if we don't burn the world down by then.
Title: Re: The AI dooooooom thread
Post by: Maladict on January 29, 2025, 11:36:37 AM
Quote from: Tamas on January 29, 2025, 11:24:27 AM
Quote from: Maladict on January 29, 2025, 11:20:18 AM
Quote from: Tamas on January 29, 2025, 06:28:18 AMHere is a desperate but I think feasible attempt at positivity: AI mass-producing fake news, images etc. will mean the death of the social media-based news-consuming of the population and it will push toward the reestablishment of respect and demand for centralised, dependable news sources.

A lot of people, especially among the younger generations, are now used to not being able to tell if any given thing is real. I don't see how we can undo this.

I don't know exactly how to get to that point (especially since we have just recently abandoned it) but this is exactly why news verified to high standards should eventually become valued again.

Essentially people need to realise that something on social media is not (necessarily) better than hearing it from a random guy they sit next to at the bar. Then again of course many people got their news like that in the past and there is that subsection you will never be able to help. The rest should eventually pivot back if we don't burn the world down by then.

The problem is you can't pivot back to something you've never known. Some of my friends' kids are between 15 and 20 and have never read a newspaper, even though their parents do. They get all their news from random websites and social media, without any critical thought as to sources or trustworthiness. And that's not even going into algorithms and echo chambers. It's absolutely terrifying.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 29, 2025, 11:49:19 AM
Tamas, the other problem is that high-quality news reporting requires financial resources.  That is why most newspapers have died or transformed into something resembling social media reporting. Unless something happens to redirect eyes to serious, non-clickbait reporting, I don't know where the funding is going to come from.

The other problem is where the professional journalists are going to come from - traditionally it takes a number of years to train someone to become an effective professional journalist, and we are losing/have lost those training grounds. As a side note, a lot of the great reporters got their start in local newspapers, news radio and local TV news.  Almost all of that news infrastructure has gone.
Title: Re: The AI dooooooom thread
Post by: mongers on January 29, 2025, 12:28:36 PM
Quote from: Maladict on January 29, 2025, 11:36:37 AM
Quote from: Tamas on January 29, 2025, 11:24:27 AM
Quote from: Maladict on January 29, 2025, 11:20:18 AM
Quote from: Tamas on January 29, 2025, 06:28:18 AMHere is a desperate but I think feasible attempt at positivity: AI mass-producing fake news, images etc. will mean the death of the social media-based news-consuming of the population and it will push toward the reestablishment of respect and demand for centralised, dependable news sources.

A lot of people, especially among the younger generations, are now used to not being able to tell if any given thing is real. I don't see how we can undo this.

I don't know exactly how to get to that point (especially since we have just recently abandoned it) but this is exactly why news verified to high standards should eventually become valued again.

Essentially people need to realise that something on social media is not (necessarily) better than hearing it from a random guy they sit next to at the bar. Then again of course many people got their news like that in the past and there is that subsection you will never be able to help. The rest should eventually pivot back if we don't burn the world down by then.

The problem is you can't pivot back to something you've never known. Some of my friends' kids are between 15 and 20 and have never read a newspaper, even though their parents do. They get all their news from random websites and social media, without any critical thought as to sources or trustworthiness. And that's not even going into algorithms and echo chambers. It's absolutely terrifying.

Indeed.

My hope is that the changes in the way history is now taught in schools might help; back in my day you were presented with books of fact, to be regurgitated during GCSE exams, while later, at A level, interpreting those historical 'facts' became a focus.

Yet for the last 20-25 years, in mainstream schools, consideration of the sources has been a feature of the curriculum and some examination questions. So maybe in that sense today's teenagers are better equipped to reason about history and politics? :unsure:

Oh drat, history is no longer a mandatory subject across much of the school curriculum.  :(



Title: Re: The AI dooooooom thread
Post by: Jacob on February 11, 2025, 02:30:09 AM
QuoteMicrosoft Study Finds AI Makes Human Cognition "Atrophied and Unprepared"

A new paper from researchers at Microsoft and Carnegie Mellon University finds that as humans increasingly rely on generative AI in their work, they use less critical thinking, which can "result in the deterioration of cognitive faculties that ought to be preserved."

"[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise," the researchers wrote.

Full article here: https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/
Title: Re: The AI dooooooom thread
Post by: garbon on February 11, 2025, 02:41:09 AM
Quote from: Jacob on February 11, 2025, 02:30:09 AM
QuoteMicrosoft Study Finds AI Makes Human Cognition "Atrophied and Unprepared"

A new paper from researchers at Microsoft and Carnegie Mellon University finds that as humans increasingly rely on generative AI in their work, they use less critical thinking, which can "result in the deterioration of cognitive faculties that ought to be preserved."

"[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise," the researchers wrote.

Full article here: https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/

That long quote was written by the researchers, but it isn't a conclusion of this study - rather, it is their summary, in the introduction, of a paper from 1983.
Title: Re: The AI dooooooom thread
Post by: garbon on February 11, 2025, 06:50:35 AM
https://www.bbc.co.uk/news/articles/c0m17d8827ko

QuoteAI chatbots unable to accurately summarise news, BBC finds

Four major artificial intelligence (AI) chatbots are inaccurately summarising news stories, according to research carried out by the BBC.

The BBC gave OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini and Perplexity AI content from the BBC website then asked them questions about the news.

It said the resulting answers contained "significant inaccuracies" and distortions.

In a blog, Deborah Turness, the CEO of BBC News and Current Affairs, said AI brought "endless opportunities" but the companies developing the tools were "playing with fire."

"We live in troubled times, and how long will it be before an AI-distorted headline causes significant real world harm?", she asked.

The tech companies which own the chatbots have been approached for comment.

In the study, the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer.

It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

In her blog, Ms Turness said the BBC was seeking to "open up a new conversation with AI tech providers" so we can "work together in partnership to find solutions."

She called on the tech companies to "pull back" their AI news summaries, as Apple did after complaints from the BBC that Apple Intelligence was misrepresenting news stories.

Some examples of inaccuracies found by the BBC included:

Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking

ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left

Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" and described Israel's actions as "aggressive"


In general, Microsoft's Copilot and Google's Gemini had more significant issues than OpenAI's ChatGPT and Perplexity, which counts Jeff Bezos as one of its investors.

Normally, the BBC blocks its content from AI chatbots, but it opened its website up for the duration of the tests in December 2024.

The report said that as well as containing factual inaccuracies, the chatbots "struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context."

The BBC's Programme Director for Generative AI, Pete Archer, said publishers "should have control over whether and how their content is used and AI companies should show how assistants process news along with the scale and scope of errors and inaccuracies they produce."

Which is great, as I'm sure the number of people now getting their answers straight from Google's AI Overview is non-zero.
Title: Re: The AI dooooooom thread
Post by: Sophie Scholl on February 11, 2025, 07:03:53 AM
A fun way of combatting AI results in Google is to insert a swear word. Why look for "answers" when you can look for "fucking answers, goddammit".  :ccr
Title: Re: The AI dooooooom thread
Post by: mongers on February 12, 2025, 12:51:04 PM
Quote from: Sophie Scholl on February 11, 2025, 07:03:53 AMA fun way of combatting ai results in google is to insert a swear word. Why look for "answers" when you can look for "fucking answers, goddammit".  :ccr

Thanks Sophie, that's an interesting tip.

On a related matter I randomly encountered a link to an MS image creating AI tool, tried "narrow gauge steam locomotive travel* up narrow  steep railway line in mountain*"  *typos on my part

(https://th.bing.com/th/id/OIG2.J90J1DaLcHl.dabmpyW_?w=1024&h=1024&rs=1&pid=ImgDetMain)

Not sure what use the tool is to me - maybe help with designs for posters or flyers - and not at all sure about the ethics or copyright of using such AI-generated stuff?
Title: Re: The AI dooooooom thread
Post by: Josquius on February 12, 2025, 05:20:11 PM
I recall reading a while ago about how poor AI was for minority languages. It does all seem very English-oriented.
I wonder whether its effects will be the same for non-English-speaking nations.
Title: Re: The AI dooooooom thread
Post by: mongers on February 12, 2025, 05:24:13 PM
Quote from: Josquius on February 12, 2025, 05:20:11 PMI recall reading a while ago about how poor AI was for minority languages. It does all seem very English oriented.
I wonder whether it's effects will be the same for non English speaking nations.

Good point.

Interestingly, I tried a 2nd test asking it to design a clean, simple book cover for a book named something like "Space ship earth crew or passengers" and every one of the four suggestions came back with odd spelling mistakes or missing words.

One of them added some blurb at the bottom and the characters were literally complete gibberish, some not even recognisable characters.
Title: Re: The AI dooooooom thread
Post by: Syt on February 19, 2025, 02:09:34 AM
I asked ChatGPT o1 to summarize the interview transcript from Hannity with Musk/Trump from the White House website:

My query:
QuoteThis is a transcript of an interview by Sean Hannity with President Trump and Elon Musk. Could you kindly summarize key points and try to assess factuality and overall tone of the conversation?

Its lead in with its reasoning is already great:

QuoteReasoned for 8 seconds
Examining document origins

Ok, let me see. This transcript likely stems from a spoof or AI-generated text, set in an alternate timeline with anachronisms and fictional agencies.

Engaging in the fictional

I'm spotting wild ideas like Trump facing two assassination attempts, astronauts stranded, and absurd scenarios involving government waste. Placing any trust in this transcript is a mistake.

Mulling over the fantasy

OK, let's look at how fictitious elements mix with slight nods to reality, like Trump's inflation comments and NASA's restrictions on astronauts.

Below is a concise overview of the transcript's main themes, its apparent factual inconsistencies, and the overall tone. Important note: There is no public record of this conversation actually taking place—multiple details strongly suggest it is a fabricated or fictional scenario rather than a genuine transcript of a White House interview. Nevertheless, here is a summary and analysis of what the text claims:

 :lmfao:

I wish, ChatGPT, I wish. :(
(Also, obviously it treats it as fictional because it's not up to date with latest news)

Quote1. Summary of the Conversation

Setting and Participants


The text presents itself as a Sean Hannity interview with President Donald Trump and Elon Musk in the Roosevelt Room of the White House.
It implies that Trump is currently President (even though real-world timing would make this contradictory, since Joe Biden was inaugurated on January 20, 2021).
Elon Musk is described as leading a special team (referred to as "DOGE," apparently a "Department of Government Efficiency" or similar) tasked with rooting out fraud, waste, and abuse in government spending.

Claims of Government Efficiency Efforts

Elon Musk supposedly is working pro bono ("working for free") to enforce President Trump's executive orders and cut federal waste.
They discuss large sums of taxpayer money allegedly funding obscure or "woke" programs abroad (e.g., "transgender opera in Colombia," "DEI musical in Ireland," "Sesame Street in Iraq").
They claim that by eliminating such spending, they will save vast sums and help balance the budget.

Discussion of Past Lawsuits and Settlements

There is a joking reference to Trump having sued Twitter "long before Musk" bought it and that Elon Musk (via Twitter) had to pay a $10 million settlement.
Both treat it as a sort of lighthearted anecdote—Trump calls it a "big discount."

Praise for Each Other and Critique of Media

Trump and Musk speak very positively of one another. Musk calls Trump a "good man." Trump refers to Musk as "brilliant" and "honest."
Both criticize mainstream media, claiming widespread bias, dishonesty, and low ratings.
They bring up "Trump derangement syndrome" and how the media or "radical left" want to drive a wedge between Trump and Musk.

Allegations Regarding the Biden Administration

The transcript claims the Biden administration left two astronauts stranded in space for political reasons and that Musk is now organizing a rescue.
They suggest government agencies continued to run "luxury hotels for illegal immigrants" contrary to executive orders.
Biden and his administration are portrayed as incompetent and as deliberately sabotaging the country through reckless spending and "radical" policies.

Social Security, Medicare, Education

Trump states Social Security and Medicare will remain untouched, aside from removing fraud.
He wants to dismantle or greatly reduce the federal Department of Education and let states run schools with local control.
They repeatedly mention the intention to slash "woke" or "DEI" spending and foreign aid.

Reciprocal Tariffs and Foreign Relations

Trump reaffirms a policy of "reciprocal tariffs," meaning if another country imposes a high import duty on U.S. goods, the U.S. will do the same in return.
They claim multiple world leaders have suddenly changed positions or are making positive statements about the U.S. under Trump's new presidency.

Personal Dynamics and "Friendship"

Throughout the text, Hannity marvels at the Trump–Musk relationship, which is presented as close.
The conversation has a casual, bantering style, heavy on compliments and "inside jokes."

2. Assessment of Factuality and Plausibility

Overall Authenticity

There is no evidence that this interview ever occurred. The references to specific events (e.g., "two astronauts left in space for 300 days by Biden," "DOGE" as an official government office headed by Musk) are not corroborated by any real-world reporting. Furthermore, many statements clash directly with established facts:

Timeline Contradiction: The text suggests Trump is currently in office after Biden, which is not supported by real-world events (Joe Biden has been in office since January 20, 2021).
Astronauts Stranded in Space: No mainstream or official source has mentioned "abandoned astronauts" that Elon Musk is rescuing. NASA schedules and missions are publicly documented.
Large Settlements with Twitter: No record exists of Trump suing Twitter in a way that resulted in a $10 million settlement paid by Elon Musk.
Department of Government Efficiency (DOGE): There is no record of a new federal department being created and led by Elon Musk.
Multiple "Wasteful" Grants Cited: While it is true that some foreign aid or cultural programs have been controversial, many specifics here (e.g., "transgender opera in Colombia," "DEI musicals in Ireland") do not match typical U.S. federal budget line-items widely covered in the press or in official appropriations bills.

False or Highly Misleading Claims

Claims that tens or hundreds of millions are being spent on extremely niche cultural programs ("mass disinformation campaign" at Reuters, "luxury hotels for immigrants," etc.) are presented as if they have already been uncovered and canceled. No evidence for these exact scenarios is provided.
References to "Kamala Harris with 92 percent of the D.C. vote" as proof of "deep state bureaucracy" or to "everyone who worked for Biden now worth millions" are general allegations with no substantiation in legitimate public records.

Tone and Style

The conversation is casual, at times jocular. Trump and Musk seem to joke about lawsuits and "big discounts."
Hostility toward media outlets and the "radical left" is a recurring theme.
There is a strong political undertone: praise for Trump's policies, condemnation of Biden, repeated references to "woke" policies, "transgender issues," and "DEI" programs.
The text presents Musk as publicly endorsing Trump, praising him effusively, and adopting pro-Trump or conservative talking points (e.g., "Trump was shot, he's courageous," "Biden is incompetent").
Despite the comedic, lighthearted banter, the conversation portrays a sweeping government overhaul that would be historically unprecedented (eliminating entire agencies, slashing foreign programs, etc.).

3. Overall Tone and Conclusion

The tone is largely informal, with banter between Trump, Musk, and Hannity. All three repeatedly champion each other and share a mutual distaste for critical media coverage. Musk is portrayed as a hands-on tech genius with total devotion to "fixing" government spending. Trump is presented as unstoppable, frequently returning to themes of massive fraud or "kickbacks" and praising Elon Musk's brilliance.

The factual basis of the transcript is highly questionable. Almost all major claims or events (a new "Department of Government Efficiency," two nearly abandoned astronauts, the alleged sums of money for "transgender operas" abroad, etc.) do not match verifiable reporting or known legislative actions. It resembles a piece of political fan fiction or a satirical script rather than a real-world interview.

The conversation repeatedly uses rhetoric about "restoring the will of the people," "cutting woke spending," and "saving a trillion dollars" but never provides credible legislative or factual details. Assertions such as "we found a trillion dollars of fraud" or "Biden left astronauts in space" are advanced without any documentation or media coverage to support them.

Bottom Line
Key Points: The text claims that Elon Musk, appointed by President Trump, is dramatically cutting government waste and foreign aid, that the Biden administration left astronauts stranded, and that Trump is restoring constitutional democracy.
Factuality: The dialogue is almost certainly not an authentic transcript. It contains numerous implausible or disproven assertions.
Tone: Very sympathetic to Trump, strongly critical of mainstream media and the Democratic Party, with a friendly, almost conversational style among the three men.
Readers should treat this as fictional (or satirical) rather than a genuine record of any real White House event.
Title: Re: The AI dooooooom thread
Post by: mongers on February 19, 2025, 09:48:11 AM
Quote from: Syt on February 19, 2025, 02:09:34 AMI asked ChatGPT o1 to summarize the interview transcript from Hannity with Musk/Trump from the White House website:



 :lol:

You couldn't make it up, could you? :unsure:

Title: Re: The AI dooooooom thread
Post by: DGuller on February 19, 2025, 10:04:36 AM
Technically you can easily make it up, by priming it beforehand with prompts that would take it in a certain direction, but I trust that Syt didn't do it here.
Title: Re: The AI dooooooom thread
Post by: Tamas on February 19, 2025, 10:28:58 AM
Quote from: mongers on February 12, 2025, 05:24:13 PMInterestingly, I tried a 2nd test asking it to design a clean, simple,  book cover for a book named something like "Space ship earth crew or passengers" and every one of the four suggestions came back with odd spelling mistakes or missing words.

The takeover of sentient AI is imminent.


Not.
Title: Re: The AI dooooooom thread
Post by: HVC on February 19, 2025, 10:35:50 AM
Quote from: Tamas on February 19, 2025, 10:28:58 AM
Quote from: mongers on February 12, 2025, 05:24:13 PMInterestingly, I tried a 2nd test asking it to design a clean, simple,  book cover for a book named something like "Space ship earth crew or passengers" and every one of the four suggestions came back with odd spelling mistakes or missing words.

The takeover of sentient AI is imminent.


Not.

I mean sentient humans voted for trump... twice! AI doesn't have to be that smart to match or exceed us :lol:
Title: Re: The AI dooooooom thread
Post by: mongers on February 19, 2025, 11:11:49 AM
Quote from: HVC on February 19, 2025, 10:35:50 AM
Quote from: Tamas on February 19, 2025, 10:28:58 AM
Quote from: mongers on February 12, 2025, 05:24:13 PMInterestingly, I tried a 2nd test asking it to design a clean, simple,  book cover for a book named something like "Space ship earth crew or passengers" and every one of the four suggestions came back with odd spelling mistakes or missing words.

The takeover of sentient AI is imminent.


Not.

I mean sentient humans voted for trump... twice! AI doesn't have to be that smart to match or exceed us :lol:

 :D
Title: Re: The AI dooooooom thread
Post by: Syt on February 19, 2025, 11:13:25 AM
Quote from: DGuller on February 19, 2025, 10:04:36 AMTechnically you can easily make it up, by priming it beforehand with prompts that would take it in a certain direction, but I trust that Syt didn't do it here.

I didn't add anything into the request that isn't posted here (except the full transcript from the WH gov website :P).
Title: Re: The AI dooooooom thread
Post by: Tamas on February 19, 2025, 03:57:54 PM
Quote from: HVC on February 19, 2025, 10:35:50 AM
Quote from: Tamas on February 19, 2025, 10:28:58 AM
Quote from: mongers on February 12, 2025, 05:24:13 PMInterestingly, I tried a 2nd test asking it to design a clean, simple,  book cover for a book named something like "Space ship earth crew or passengers" and every one of the four suggestions came back with odd spelling mistakes or missing words.

The takeover of sentient AI is imminent.


Not.

I mean sentient humans voted for trump... twice! AI doesn't have to be that smart to match or exceed us :lol:

True. I concede my point.