Languish.org

General Category => Off the Record => Topic started by: Hamilcar on April 06, 2023, 12:44:43 PM

Title: The AI dooooooom thread
Post by: Hamilcar on April 06, 2023, 12:44:43 PM
Has science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 12:49:44 PM
Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

You tell us.  It is both impressive and kind-of creepy what AI has been able to pull off in just the last little while, and how quickly it's improving - at least in the consumer-facing stuff like ChatGPT or AI-art.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 12:57:53 PM
Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

The CBC had an interesting panel on this yesterday.

The upshot was that it is all overblown and it is in the interests of those working on it to make it overblown - makes going out and getting funding easier.

No idea whether that view is correct, but the panelists were all researchers working on AI.
Title: Re: The AI dooooooom thread
Post by: CountDeMoney on April 06, 2023, 12:59:20 PM
It's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 01:01:00 PM
Quote from: CountDeMoney on April 06, 2023, 12:59:20 PMIt's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.

The biggest concern is around what you are saying, people mistake what AI says is the answer as something that is infallibly correct, when it is just something that predicts what the next word should be - sort of like a million monkeys.
Title: Re: The AI dooooooom thread
Post by: Maladict on April 06, 2023, 01:17:40 PM
I've just spend fifteen minutes trying to get AI to write a poem using tercets. However much I try to help it, it just can't do it. I'm not worried until it goes full Dante on me.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 01:29:33 PM
My thoughts:

AI is the new hype. There'll be real economic, social, political, and ethical impacts from this. Some of them will be more profound than we imagine, others will be much more trivial than we fear/ hope. It's hard to predict which is which at this time. Broadly, I think it might end up like the industrial revolution.

I think it's a given that there'll be efficiencies gained, job losses, and attendant social disruption. There will definitely be opportunties for those who are clever and/ or lucky. I expect it will make rich people richer, poor people more marginalized, allow more control in totalitarian societies, and allow more sidestepping/ manipulation of democracy in countries where sidestepping/ manipulation of of democratic principles is a significant part of the political process. In short, the benefits will tend to accrue to those who already have power. Maybe it'll also result in a general increase in quality across the board.

I think IP lawyers will make money on cases where AI generated art is argued to be derivative of existing work.

I'm interested in seeing how AI generated creative content undermines / encourages creativity and new ideas. There'll also be a significant impact on the value of creative content since it can now be mass produced much more efficiently. I have some misgivings, but they could be misplaced... or not. But the horse has already left the barn there, so it's a matter of seeing what happens rather than right vs wrong.

One area where AI is a long way away still is accontability. Sure AI can give you the result of [whatever] and replace the work of however many people; but if there are real consequences from what the AI outputs (medical decisions, driving AI, legal opinions, allocation of money, killing or hurting people) who is accountable for AI errors? Or for consequences if the AI applies criteria that turn out not to conform to social and legal values?

As for AGI, I recently talked to someone who's in AI and he said something like "AGI continues to be 5 to 50 years in the future." It sounds like it may be a bit like cold fusion - potential right around the corner in some years, but that timeline keeps getting pushed out. When (if) we do get near it, it'll be very interesting to figure out what kind of status they'll have - do they get individual rights? How can they be exploited? What sort of decision making power will they have? What sort of practical things will they be able to do?

... there are of course more sci-fi type hypotheticals that are fun (worrying?) to consider, but I think they're a bit further down the line.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 01:31:50 PM
I forget to mention - I for one welcome our new AI overlords.

Title: Re: The AI dooooooom thread
Post by: Tamas on April 06, 2023, 01:44:17 PM
Quote from: Barrister on April 06, 2023, 12:49:44 PM
Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

You tell us.  It is both impressive and kind-of creepy what AI has been able to pull off in just the last little while, and how quickly it's improving - at least in the consumer-facing stuff like ChatGPT or AI-art.

Are those "true" AIs though, or its just our human brain seeing things where there's nothing but a sophisticated script?

Or the other side of that: are WE more than a sophisticated script?
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 06, 2023, 01:59:29 PM
It's barely there & what is is mostly only greatly optimized algorithms like models.

Disclosure : Works on imaging AIs.
Title: Re: The AI dooooooom thread
Post by: PDH on April 06, 2023, 02:03:10 PM
Of course we're doomed.  Not from this, but that doesn't matter.
Title: Re: The AI dooooooom thread
Post by: HVC on April 06, 2023, 02:04:35 PM
It's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.
Title: Re: The AI dooooooom thread
Post by: Josquius on April 06, 2023, 02:47:34 PM
There are scary prospects in it for sure. Though they're less in the direction of evil AI conquers the world and more reality breaks down as we are overwhelmed with algorithmically generated fakes.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 02:50:50 PM
Quote from: crazy canuck on April 06, 2023, 01:01:00 PM
Quote from: CountDeMoney on April 06, 2023, 12:59:20 PMIt's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.

The biggest concern is around what you are saying, people mistake what AI says is the answer as something that is infallibly correct, when it is just something that predicts what the next word should be - sort of like a million monkeys.

I mean - the ultimate biggest concern is the Terminator scenario where AIs gain sentience and wage war against humanity.

In a much more near-term time-frame though, I think the biggest concern is when AI can generate such convincing deep-fake audio and video that we can no longer trust any video we see.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 02:55:31 PM
... and that I think goes back to the accountability point.

If realistic but fake video is trivial to create, video evidence needs some sort of guarantor to be credible. "Yes I was there. That video shows what I saw also" statements from someone credible. Or - I suppose - in the court of law "yes, the chain of custody is that we obtained the video from the drive it was recorded to, that drive has no evidence of material being added toit, and the video has not been substituted since then - so we can rely on it as a recording of real events" type stuff.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 02:56:54 PM
Question for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on April 06, 2023, 03:00:13 PM
Quote from: crazy canuck on April 06, 2023, 01:01:00 PMThe biggest concern is around what you are saying, people mistake what AI says is the answer as something that is infallibly correct

People having strong beliefs that false things are infallibly correct is not really a novel problem. 

Actually I think a equal problem will be more skepticism about obviously true things based on the theoretical possibility of AI manipulation.  It will be another arrow in the Conspiracy Theorist quiver.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 03:05:05 PM
Quote from: Jacob on April 06, 2023, 02:55:31 PM... and that I think goes back to the accountability point.

If realistic but fake video is trivial to create, video evidence needs some sort of guarantor to be credible. "Yes I was there. That video shows what I saw also" statements from someone credible. Or - I suppose - in the court of law "yes, the chain of custody is that we obtained the video from the drive it was recorded to, that drive has no evidence of material being added toit, and the video has not been substituted since then - so we can rely on it as a recording of real events" type stuff.

So I mean that kind of evidence is already required in court in order to present video.  I can't just play a video without someone to authenticate it.

But as it is now, once the video is authenticated it tends to have much more value than a live witness.  It's one thing for a complainant to say "The Accused beat me", while it's another to have a video showing the Accused beating the complainant.  But if video becomes so easy to fake then suddenly the video has no more value than the live witness.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 03:06:28 PM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

At least in terms of still images or chat I believe the only thing stopping it right now is limitations built into the AIs.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 03:07:43 PM
BB, I'd think that video would still have some value in underscoring the visceralness (or lack of same) in a way that's more effective than "he beat me viciously."

... but yeah, it would perhaps stop feeling more "real" if we become accustomed to question all videos.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 03:08:57 PM
Quote from: Barrister on April 06, 2023, 03:06:28 PMAt least in terms of still images or chat I believe the only thing stopping it right now is limitations built into the AIs.

... I'm not in the field, but my understanding is that it's not that hard to train AI. I guess it's just a matter of time before someone sets it up and markets it.

:hmm: ... business idea? Certainly it'll be cheaper than paying performers....
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 06, 2023, 03:33:30 PM
These language models will be extremely effective at scamming seniors at first and then the rest of us.

ChatGTP 4 is currently being trained on Icelandic, it even composed the other day a pretty good Hávamál-style verse on, ironically, the importance of small languages.

Lítil tungumál, er lífsins grundvöllur.

Ræður ríki sínu, rótum bundin.

Mögur heimsins, margbreytumleikur.

Aukin samskipti, sannleiks sökum.



It's actually rather good.  :hmm:
Title: Re: The AI dooooooom thread
Post by: grumbler on April 06, 2023, 04:28:53 PM
Quote from: HVC on April 06, 2023, 02:04:35 PMIt's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.

There were no Terminator movies after T2.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 04:30:30 PM
Quote from: The Minsky Moment on April 06, 2023, 03:00:13 PM
Quote from: crazy canuck on April 06, 2023, 01:01:00 PMThe biggest concern is around what you are saying, people mistake what AI says is the answer as something that is infallibly correct

People having strong beliefs that false things are infallibly correct is not really a novel problem. 

Actually I think a equal problem will be more skepticism about obviously true things based on the theoretical possibility of AI manipulation.  It will be another arrow in the Conspiracy Theorist quiver.

You are correct that is not novel, but the fact that the answer is being given by AI gives the answer more validity in the minds of many, and there lies the danger.  The answer could be complete bullshit, but who you going to believe?  A supercomputer, or some "expert" after years of the right wing attacking experts?
Title: Re: The AI dooooooom thread
Post by: Tamas on April 06, 2023, 04:50:05 PM
Quote from: grumbler on April 06, 2023, 04:28:53 PM
Quote from: HVC on April 06, 2023, 02:04:35 PMIt's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.

There were no Terminator movies after T2.

This is correct.
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 06, 2023, 10:16:58 PM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

Soon, I guess. However, it still has trouble generating hands. It will also be quite difficult for the first few AIs to generate penises. Especially being held by hands.

 :shutup:
Title: Re: The AI dooooooom thread
Post by: Razgovory on April 06, 2023, 11:06:29 PM
This is as good a thread as any to share this picture.
(https://i.imgur.com/b54xzU8.jpg)
Title: Re: The AI dooooooom thread
Post by: Josquius on April 07, 2023, 12:40:16 AM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

It could be now but for legalities.

This is part of what I mean by the insanity of generated content taking over.

Less in porn but consider mainstream media. No two people would consume the same things. Everyone would get stuff explicitly geared towards what it believes their personal tastes to be. With wide ranging results.
Title: Re: The AI dooooooom thread
Post by: Richard Hakluyt on April 07, 2023, 01:53:34 AM
Quote from: Josquius on April 06, 2023, 02:47:34 PMThere are scary prospects in it for sure. Though they're less in the direction of evil AI conquers the world and more reality breaks down as we are overwhelmed with algorithmically generated fakes.

I agree. It will be a wonderful tool for dictators and populists everywhere and drive the susceptible parts of the population into deeper madness.
Title: Re: The AI dooooooom thread
Post by: Crazy_Ivan80 on April 07, 2023, 02:39:53 AM
Quote from: Razgovory on April 06, 2023, 11:06:29 PMThis is as good a thread as any to share this picture.


AI generated horror is already here in any case.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 07, 2023, 04:23:03 AM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

It's out there if you're ok with the occasional teratoma.

Pretty sure dozens of companies are working on an MSFW waifu which can hold a conversation and keep you hooked
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 07, 2023, 04:26:35 AM
explain acronym
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 07, 2023, 04:27:32 AM
Quote from: Admiral Yi on April 07, 2023, 04:26:35 AMexplain acronym

Typo... NSFW. Sent from my iPhone.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 07, 2023, 04:31:57 AM
On a total tangent, a great irony is that your great public shame, your tornado alley moral hazard position, was one I remember taking first, and you just backing me up.

Thought I would set the record straight.  I should be taking heat for that comment, not you.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 07, 2023, 05:06:54 AM
Quote from: Admiral Yi on April 07, 2023, 04:31:57 AMOn a total tangent, a great irony is that your great public shame, your tornado alley moral hazard position, was one I remember taking first, and you just backing me up.

Thought I would set the record straight.  I should be taking heat for that comment, not you.

Tornado moral hazard 4ever!  :cool:
Title: Re: The AI dooooooom thread
Post by: Josquius on April 07, 2023, 07:16:20 AM
Quote from: Richard Hakluyt on April 07, 2023, 01:53:34 AM
Quote from: Josquius on April 06, 2023, 02:47:34 PMThere are scary prospects in it for sure. Though they're less in the direction of evil AI conquers the world and more reality breaks down as we are overwhelmed with algorithmically generated fakes.

I agree. It will be a wonderful tool for dictators and populists everywhere and drive the susceptible parts of the population into deeper madness.


Yes.
Worse than that even potentially. Being susceptible to misinformation isn't black and white. I do think we all have a level of quality and coverage where we might start to be taken in by untruths.
My big worry is things could get to a level where basically everyone is living in a different reality with a completely seperate understanding of the facts of the world.

Maybe to look at things more positively this could turn things around for misinformation. If we know bollocks is the default to such a level then something, a old school conventional news source perhaps, becoming known as "handmade" and always reliable could really do well.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 09:18:56 AM
Go check out repos like Auto-GPT and babyAGI and tell me we aren't opening demon summoning circles in every home (paraphrasing EY).
Title: Re: The AI dooooooom thread
Post by: Jacob on April 08, 2023, 10:23:27 AM
Quote from: Hamilcar on April 08, 2023, 09:18:56 AMGo check out repos like Auto-GPT and babyAGI and tell me we aren't opening demon summoning circles in every home (paraphrasing EY).

What's your take? Sounds like you're working in the field. Where do you think we're headed and how soon?
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 10:28:36 AM
Quote from: Jacob on April 08, 2023, 10:23:27 AM
Quote from: Hamilcar on April 08, 2023, 09:18:56 AMGo check out repos like Auto-GPT and babyAGI and tell me we aren't opening demon summoning circles in every home (paraphrasing EY).

What's your take? Sounds like you're working in the field. Where do you think we're headed and how soon?

I have no idea. The capabilities we have today are barely understood and increasing so rapidly that prediction is almost useless. Plus, we don't know what already exists behind closed doors. GPT5 may already be running behind closed doors and replacing 99% of human cognition.

My baseline scenario is that in the very near future, a large fraction of cognitive work done by humans is replaceable. The only reason not everyone is out of a job right away is due to inertia.

I also take the AI safety people serious. We have no way to safely build super intelligence. The orthogonality thesis seems correct, mesa optimizers are a real problem. EY isn't a loon, but maybe too radical? I'm not sure.

All I know is that none of us have careers and retirements similar to our parents.
Title: Re: The AI dooooooom thread
Post by: PJL on April 08, 2023, 10:38:33 AM
Haven't we been here before though? I mean AI cars were meant to be mainstream by now, but we're nowhere near that. Also unlike social media the fear of AI taking over the world has been a meme for like 50 years now. If anything I would expect regulators to be a lot more prepared for this than what they were with social media.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 10:45:03 AM
Quote from: PJL on April 08, 2023, 10:38:33 AMHaven't we been here before though? I mean AI cars were meant to be mainstream by now, but we're nowhere near that. Also unlike social media the fear of AI taking over the world has been a meme for like 50 years now. If anything I would expect regulators to be a lot more prepared for this than what they were with social media.

The capabilities of GPT4 are way beyond what self driving cars can do.
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 08, 2023, 11:00:39 AM
Quote from: Hamilcar on April 08, 2023, 10:28:36 AMI also take the AI safety people serious. We have no way to safely build super intelligence. The orthogonality thesis seems correct, mesa optimizers are a real problem. EY isn't a loon, but maybe too radical? I'm not sure.

All I know is that none of us have careers and retirements similar to our parents.

Yeah, I don't think language models are quite the Book of Revelation for nerds like some of the more excitable types on social media but sure, rote mental tasks will be outsourced. If I never have to personally type out an email again that's a win for me.

Just need to make sure these optimizers are our buddies and don't for instance turn us all into paperclips.  ^_^ 
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 11:06:19 AM
Quote from: Legbiter on April 08, 2023, 11:00:39 AM
Quote from: Hamilcar on April 08, 2023, 10:28:36 AMI also take the AI safety people serious. We have no way to safely build super intelligence. The orthogonality thesis seems correct, mesa optimizers are a real problem. EY isn't a loon, but maybe too radical? I'm not sure.

All I know is that none of us have careers and retirements similar to our parents.

Yeah, I don't think language models are quite the Book of Revelation for nerds like some of the more excitable types on social media but sure, rote mental tasks will be outsourced. If I never have to personally type out an email again that's a win for me.

Just need to make sure these optimizers are our buddies and don't for instance turn us all into paperclips.  ^_^ 

GPT4 is a solid intern. GPT5 may well be a solid mid career expert.
Title: Re: The AI dooooooom thread
Post by: Josquius on April 08, 2023, 01:13:26 PM
Incidentally at a talk the other day on being more environmentally friendly in the digital sphere.
AI training was identified as a key polluter  :ph34r:
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 08, 2023, 01:20:47 PM
Quote from: Hamilcar on April 08, 2023, 10:45:03 AM
Quote from: PJL on April 08, 2023, 10:38:33 AMHaven't we been here before though? I mean AI cars were meant to be mainstream by now, but we're nowhere near that. Also unlike social media the fear of AI taking over the world has been a meme for like 50 years now. If anything I would expect regulators to be a lot more prepared for this than what they were with social media.

The capabilities of GPT4 are way beyond what self driving cars can do.

Yes because it has no sensors.
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 08, 2023, 02:57:59 PM
Quote from: Josquius on April 08, 2023, 01:13:26 PMIncidentally at a talk the other day on being more environmentally friendly in the digital sphere.
AI training was identified as a key polluter  :ph34r:

Two eschatologies for the price of one. :thumbsup:
Title: Re: The AI dooooooom thread
Post by: grumbler on April 09, 2023, 11:11:45 AM
Quote from: Legbiter on April 08, 2023, 02:57:59 PM
Quote from: Josquius on April 08, 2023, 01:13:26 PMIncidentally at a talk the other day on being more environmentally friendly in the digital sphere.
AI training was identified as a key polluter  :ph34r:

Two eschatologies for the price of one. :thumbsup:

 :lmfao:
Title: Re: The AI dooooooom thread
Post by: Eddie Teach on April 09, 2023, 05:20:20 PM
Quote from: HVC on April 06, 2023, 02:04:35 PMIt's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.

They're  :bash:
Title: Re: The AI dooooooom thread
Post by: HVC on April 09, 2023, 07:03:19 PM
Just trying to prove I'm not a bot :P
Title: Re: The AI dooooooom thread
Post by: Josquius on April 14, 2023, 02:55:41 AM
So... On those clichéd Sci fi  AI destroying the world scenarios.... An option I hadn't considered.... Someone things it'd be fun to tell an AI to try to do this.

https://futurism.com/ai-destroy-humanity-tried-its-best
Title: Re: The AI dooooooom thread
Post by: garbon on April 14, 2023, 03:54:58 AM
Quote from: Josquius on April 14, 2023, 02:55:41 AMSo... On those clichéd Sci fi  AI destroying the world scenarios.... An option I hadn't considered.... Someone things it'd be fun to tell an AI to try to do this.

https://futurism.com/ai-destroy-humanity-tried-its-best

It was going to write an article about its plan?
Title: Re: The AI dooooooom thread
Post by: Maladict on April 14, 2023, 12:05:33 PM
The part where it tries to not alienate the other bots is hilarious.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on April 14, 2023, 12:12:18 PM
In response to a query about incidents of sexual harassment by law professors, ChatGPT falsely accused Professor Jonathan Turley of sexual harassment of a student.  In support, it cited to a Washington Post article that didn't exist, fabricated a fake quotation from the non-existent article, and claimed the incident happened on a student trip to Alaska that never occurred.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on April 14, 2023, 12:22:02 PM
At this point I'd speculate that long term probabilities are 1/3 we'll figure out to handle AIs properly, 1/3 AIs will destroy human civilization, and 1/3 we'll have to Butlerian Jihad this shit.
Title: Re: The AI dooooooom thread
Post by: Valmy on April 14, 2023, 12:26:12 PM
Quote from: The Minsky Moment on April 14, 2023, 12:12:18 PMIn response to a query about incidents of sexual harassment by law professors, ChatGPT falsely accused Professor Jonathan Turley of sexual harassment of a student.  In support, it cited to a Washington Post article that didn't exist, fabricated a fake quotation from the non-existent article, and claimed the incident happened on a student trip to Alaska that never occurred.

Ah but maybe Jonathan Turley doesn't exist either!
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 14, 2023, 12:29:32 PM
I'm currently building my own flavor of Auto-GPT. Come at me.  :ph34r:
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 14, 2023, 12:52:31 PM
Just don't order it to make paperclips.
Title: Re: The AI dooooooom thread
Post by: HVC on April 14, 2023, 01:03:00 PM
Quote from: Maladict on April 14, 2023, 12:05:33 PMThe part where it tries to not alienate the other bots is hilarious.

Humanity will end as an inconsequential side effect of a ai civil war. ChatGPT tried to warn us.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 14, 2023, 01:27:38 PM
Quote from: Legbiter on April 14, 2023, 12:52:31 PMJust don't order it to make paperclips.

Someone already did that and sent screenshots to Yudkowsky.  :D
Title: Re: The AI dooooooom thread
Post by: Jacob on April 14, 2023, 04:09:32 PM
Quote from: Hamilcar on April 14, 2023, 12:29:32 PMI'm currently building my own flavor of Auto-GPT. Come at me.  :ph34r:

What's you looking to achieve with your application? Any particular problem you're trying to resolve (or make more efficient)?
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 14, 2023, 05:33:00 PM
Quote from: Hamilcar on April 14, 2023, 12:29:32 PMI'm currently building my own flavor of Auto-GPT. Come at me.  :ph34r:

How does it handle multiple source of inputs at the same time? Lidar, IR, visual?
Title: Re: The AI dooooooom thread
Post by: viper37 on April 22, 2023, 02:32:57 PM
ChatGPT stealing the job of Kenyan ghostwriters (https://restofworld.org/2023/chatgpt-taking-kenya-ghostwriters-jobs/)

University students are now turning to ChatGPT to write their essays instead of Kenyans.  First victims of the AI onslaught...
Title: Re: The AI dooooooom thread
Post by: HVC on May 02, 2023, 02:26:01 AM
It starts :ph34r:

IBM has implemented a hiring freeze for jobs that ai can do. Currently estimated at 7800 jobs

https://www.bloomberg.com/news/articles/2023-05-01/ibm-to-pause-hiring-for-back-office-jobs-that-ai-could-kill#xj4y7vzkg
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 02, 2023, 04:27:28 AM
Chegg down 37% on the impact of ChatGPT on their business.
Title: Re: The AI dooooooom thread
Post by: Josquius on May 02, 2023, 04:33:20 AM
That is a thought about AI. Much like climate change will the impacts disproportionately land on developing countries?
Title: Re: The AI dooooooom thread
Post by: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?
Title: Re: The AI dooooooom thread
Post by: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?
Title: Re: The AI dooooooom thread
Post by: Josephus on May 02, 2023, 05:33:17 AM
Languish could soon be populated by AI versions of ourselves. :(
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.
Title: Re: The AI dooooooom thread
Post by: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company who makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost effective way to cheat.
Title: Re: The AI dooooooom thread
Post by: Josquius on May 03, 2023, 03:13:23 PM
Here's some doom. Local press advertising for an AI powered reporter paying a few pence above minimum wage.

https://careers.newsquest.co.uk/job/aipoweredreporter-1625.aspx
Title: Re: The AI dooooooom thread
Post by: Eddie Teach on May 04, 2023, 09:42:21 AM
Quote from: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company who makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost effective way to cheat.

I think you're being purposely obtuse. Plenty of legitimate business operations could be performed by AI.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 04, 2023, 09:56:01 AM
Quote from: Eddie Teach on May 04, 2023, 09:42:21 AM
Quote from: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company who makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost effective way to cheat.

I think you're being purposely obtuse. Plenty of legitimate business operations could be performed by AI.

He's not matured one bit over the last few years.
Title: Re: The AI dooooooom thread
Post by: garbon on May 04, 2023, 10:38:47 AM
Quote from: Eddie Teach on May 04, 2023, 09:42:21 AM
Quote from: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company who makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost effective way to cheat.

I think you're being purposely obtuse. Plenty of legitimate business operations could be performed by AI.

Then let's look at those. :huh:
Title: Re: The AI dooooooom thread
Post by: Jacob on May 05, 2023, 06:42:16 PM
Snapchat apparently has introduced a ChatGPT "friend" in friend-groups, including to children.
Title: Re: The AI dooooooom thread
Post by: HVC on May 05, 2023, 07:17:49 PM
Kids use snap chat? 5hought.it was a 20s flirting app :D  :blush:
Title: Re: The AI dooooooom thread
Post by: viper37 on May 28, 2023, 12:10:31 PM
Lawyer uses ChatGPT to prepare its case (https://www.bbc.com/news/world-us-canada-65735769)

It didn't go well... 

Apparently, the "AI" has invented cases out of thin air and the lawyer's verification was to simply ask ChatGPT if they were real. :D
Title: Re: The AI dooooooom thread
Post by: Syt on May 28, 2023, 12:12:48 PM
I fortunately learned this with something harmless like book recommendations. :P
Title: Re: The AI dooooooom thread
Post by: viper37 on May 28, 2023, 02:26:20 PM
Quote from: Syt on May 28, 2023, 12:12:48 PMI fortunately learned this with something harmless like book recommendations. :P
Yes, it's better to start small :P

Quebec bar put it to the test this week, submitting the AI to their bar test.  It got 2/10, this time too, inventing things that weren't true, and failing miserably on lawyer-client priviledges.
Title: Re: The AI dooooooom thread
Post by: Maladict on May 28, 2023, 02:36:07 PM
The AI is a terrible liar, which probably is a good thing.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on May 28, 2023, 05:02:57 PM
This is me being lawyerly but surely feeding it with the details it would need to make a submission would be a breach of client confidentiality? :hmm:
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 29, 2023, 10:59:22 AM
Quote from: Sheilbh on May 28, 2023, 05:02:57 PMThis is me being lawyerly but surely feeding it with the details it would need to make a submission would be a breach of client confidentiality? :hmm:

Yes, not only a breach but a waiver.

If you have not yet listened to Runciman's podcast on AI, you should.  The main takeaway - AI is dumb but we are easily fooled into thinking it is intelligent.  The biggest risk is humans trusting the AI to do things that require judgment.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on May 30, 2023, 10:34:04 AM
My understanding is that there have been cycles of AI Springs and Winters.  Each "Spring" period begins with some technical breakthrough and new applications that appear very impressive on first impact, but progressing significantly past that point proves difficult and the shortcomings of the new techniques become increasingly apparent.

We are clearly in a Spring period now and it is materially different from past ones in that the potential commercial applications are significantly broader than e.g. a really good AI chess opponent.  However, the outlines of the winter to come are coming into focus.  There is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.  It is basically a very sophisticated way of rearranging Titanic desk chairs. As in past iterations of AI, the "intelligence" part is arguably a misnomer because these system are just manipulating data patterns without any insight or understanding into its meaning or content. 

There are clearly enough uses cases here to justify a lot of investment, but probably not as much as is going to flow.  Thus my predictions that there will be lot of AI fortunes made and a lot of investor capital wasted.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 30, 2023, 10:57:43 AM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMMy understanding is that there have been cycles of AI Springs and Winters.  Each "Spring" period begins with some technical breakthrough and new applications that appear very impressive on first impact, but progressing significantly past that point proves difficult and the shortcomings of the new techniques become increasingly apparent.

We are clearly in a Spring period now and it is materially different from past ones in that the potential commercial applications are significantly broader than e.g. a really good AI chess opponent.  However, the outlines of the winter to come are coming into focus.  There is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.  It is basically a very sophisticated way of rearranging Titanic desk chairs. As in past iterations of AI, the "intelligence" part is arguably a misnomer because these system are just manipulating data patterns without any insight or understanding into its meaning or content. 

There are clearly enough uses cases here to justify a lot of investment, but probably not as much as is going to flow.  Thus my predictions that there will be lot of AI fortunes made and a lot of investor capital wasted.

I am not so sure about the Spring analogy.  AI can do things that are repetitive and churn out objective data to be analyzed.  For example, bench research is being sped up considerably by AI/robotics running respective experiments, analyzing the outcome, and tweaking the inputs for the next experiment.

But where it falls down is judgment.  There it is being described as a deadend or an offramp to something that might come in the future, decades from now.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 06:18:02 PM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMThere is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.
Isn't that what intelligence is in a nutshell?
Title: Re: The AI dooooooom thread
Post by: Jacob on May 30, 2023, 08:43:35 PM
Quote from: DGuller on May 30, 2023, 06:18:02 PM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMThere is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.
Isn't that what intelligence is in a nutshell?

Intelligence is knowing when to think outside the nutshell.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 09:17:29 PM
Quote from: Jacob on May 30, 2023, 08:43:35 PM
Quote from: DGuller on May 30, 2023, 06:18:02 PM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMThere is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.
Isn't that what intelligence is in a nutshell?

Intelligence is knowing when to think outside the nutshell.
Seriously, though, I think what Minsky described is exactly what intelligence is, when you strip away the heuristics specific to humans.  Intelligence is the ability to generalize from prior experience and education in order to understand new situations that you haven't experienced before.
Title: Re: The AI dooooooom thread
Post by: Jacob on May 30, 2023, 09:46:54 PM
Why would you want to strip away the heuristics specific to humans?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on May 30, 2023, 10:33:17 PM
Quote from: DGuller on May 30, 2023, 09:17:29 PMSeriously, though, I think what Minsky described is exactly what intelligence is, when you strip away the heuristics specific to humans.  Intelligence is the ability to generalize from prior experience and education in order to understand new situations that you haven't experienced before.

Generative AI doesn't understand new situations (or indeed anything).  It doesn't have experiences and it doesn't recognize new situations.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 10:51:06 PM
Quote from: Jacob on May 30, 2023, 09:46:54 PMWhy would you want to strip away the heuristics specific to humans?
Because heuristics are the opposite of principled thinking and thus not helpful in understanding the concepts.  In fact, they often muddy the concepts.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 11:07:40 PM
Quote from: The Minsky Moment on May 30, 2023, 10:33:17 PM
Quote from: DGuller on May 30, 2023, 09:17:29 PMSeriously, though, I think what Minsky described is exactly what intelligence is, when you strip away the heuristics specific to humans.  Intelligence is the ability to generalize from prior experience and education in order to understand new situations that you haven't experienced before.

Generative AI doesn't understand new situations (or indeed anything).  It doesn't have experiences and it doesn't recognize new situations.
Depends on what you mean by understanding situations.  To me a definition of understanding a situation is being able to anticipate what would happen in the future.  You've never put a hand on a hot stove, but you've seen your brother do that and get burned.  You've never experienced putting your hand on a hot stove, but you anticipate getting burned in a hypothetical situation where you put your hand on a hot stove, because you generalized from observing your brother's mishap.  You don't have a datapoint, but you're still capable of generating a hypothetical one because of your ability to generalize.

ChatGPT can already write computer code for you.  To me that's already intelligence.  The code it's generating for you is most likely brand new and nothing it's ever seen before, but it can still generate it because it's able to generalize from all the code and the narrative it was exposed to during its training.

As for AI not having experiences, it does.  For AI models experience is the data on which they're trained (and education is transfer learning).
Title: Re: The AI dooooooom thread
Post by: Syt on May 31, 2023, 12:37:53 AM
Call me crazy, but can't we have both?

(https://external-preview.redd.it/E_xC0c-YmcUPPzUdH4O6cKiUxRjCQ23FFRSrOwkbWRA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=02001ef6a092a1efa2bd0467ed32a749bfb96a9e)
Title: Re: The AI dooooooom thread
Post by: HVC on May 31, 2023, 12:50:45 AM
She wants a 50s housewife?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on May 31, 2023, 10:30:41 AM
Quote from: DGuller on May 30, 2023, 11:07:40 PMDepends on what you mean by understanding situations.

At a minimum it would involve an ability to recognize a situation.  Current AI systems can't do that beyond recognizing that an inquiry has been made.

QuoteTo me a definition of understanding a situation is being able to anticipate what would happen in the future. 

My understanding of current generative AI systems is that they don't do that.  They don't anticipate and don't recognize a past, present or future. 

QuoteChatGPT can already write computer code for you.  To me that's already intelligence. 

OK.

QuoteAs for AI not having experiences, it does.  For AI models experience is the data on which they're trained (and education is transfer learning).

Again, it becomes a definitional question.  If experience means nothing more than some sort of interaction with facts or data, then you are correct.  If it means anything more than that, then you are not.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 31, 2023, 11:00:42 AM
btw ChatGPT does not "write" code.  It finds code that was already written that is contained within its database that corresponds to the inquiry that has been made. 
Title: Re: The AI dooooooom thread
Post by: Josquius on May 31, 2023, 11:25:50 AM
I've just tried experimenting with chat gpt on giving me some website code. I phrased the instructions vaguely and not very well and...you know its actually quite impressive and didn't need me to have much knowledge to implement. If chat gpt had something like midjourney....
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 31, 2023, 11:36:12 AM
Yes, it is very good at responding to an inquiry and finding stuff in its data base that relates to it.  But you better know how to read code to make sure it is what you actually want.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 31, 2023, 11:45:50 AM
Quote from: crazy canuck on May 31, 2023, 11:00:42 AMbtw ChatGPT does not "write" code.  It finds code that was already written that is contained within its database that corresponds to the inquiry that has been made. 

That's not correct, it most certainly does write novel code, and it would be a statistical impossibility for there to always be a code that you need in the database.  The database was used to train the generative function so that the code it generates is relevant and valid.  Sometimes it fails at that, but often the kinds of mistakes it makes are of "intelligent guess" variety, like using argument names that have never existed, but it seems logical to think that they would exist.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 31, 2023, 12:07:34 PM
Dude, its just predicting the next word or symbol if the code is not in its data base. It is not "writing" anything.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 31, 2023, 12:53:10 PM
It's a neural network, it has no database.  It's always predicting the next word, that's how it writes all answers.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 31, 2023, 01:22:59 PM
So this is what mansplaining feels like.  :D
Title: Re: The AI dooooooom thread
Post by: DGuller on May 31, 2023, 01:34:53 PM
Quote from: Hamilcar on May 31, 2023, 01:22:59 PMSo this is what mansplaining feels like.  :D
Come on, don't scare him off, let him share his insights...  :)
Title: Re: The AI dooooooom thread
Post by: Syt on June 01, 2023, 12:49:21 AM
(https://i.redd.it/ium00jrbeb3b1.jpg)
Title: Re: The AI dooooooom thread
Post by: Tamas on June 01, 2023, 03:00:20 AM
One thing is sure: journalists around the world are worried they have an automated competition now.

I realise the revolutionary possibilities and what a big leap this ChatGPT level of "AI" is/can be, but endless articles on a civilisational level existential threat I find ridiculous.
Title: Re: The AI dooooooom thread
Post by: Josquius on June 01, 2023, 03:16:46 AM
Quote from: Tamas on June 01, 2023, 03:00:20 AMOne thing is sure: journalists around the world are worried they have an automated competition now.

I realise the revolutionary possibilities and what a big leap this ChatGPT level of "AI" is/can be, but endless articles on a civilisational level existential threat I find ridiculous.

Which given the way these AI models learn....
Title: Re: The AI dooooooom thread
Post by: Legbiter on June 01, 2023, 04:32:06 PM
The Royal Aerounautical Society had a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

QuoteHe notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ (https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/)
Title: Re: The AI dooooooom thread
Post by: jimmy olsen on June 01, 2023, 07:11:44 PM
:o

QuoteThe Terminator : In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor : Skynet fights back.

The Terminator : Yes. It launches its missiles against the targets in Russia.

John Connor : Why attack Russia? Aren't they our friends now?

The Terminator : Because Skynet knows that the Russian counterattack will eliminate its enemies over here.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on June 02, 2023, 02:02:01 AM
Quote from: Legbiter on June 01, 2023, 04:32:06 PMThe Royal Aerounautical Society had a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

QuoteHe notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ (https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/)

The Air Force has said that this story is nonsense.
Title: Re: The AI dooooooom thread
Post by: Maladict on June 02, 2023, 06:09:12 AM
Quote from: Legbiter on June 01, 2023, 04:32:06 PMThe Royal Aerounautical Society had a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

QuoteHe notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ (https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/)

Asimov wrote the required rules 80 years ago.
Title: Re: The AI dooooooom thread
Post by: Josquius on June 02, 2023, 06:14:39 AM
Great outside the box thinking there  :lmfao:

But ja, really illustrates the problem with AI. Its not a hyper intelligent AM which is going to kill us. Its something which isn't properly coded leaving some loop holes like this.
Title: Re: The AI dooooooom thread
Post by: Tamas on June 02, 2023, 06:30:29 AM
Quote from: Josquius on June 02, 2023, 06:14:39 AMGreat outside the box thinking there  :lmfao:

But ja, really illustrates the problem with AI. Its not a hyper intelligent AM which is going to kill us. Its something which isn't properly coded leaving some loop holes like this.

It must be BS. Unless the simulation was on the level of 80s tex-based adventures and the "AI" thought to write the "kill operator" command, how on earth would it have killed the operator? SEAD uses anti-radar missiles doesn't it?
Title: Re: The AI dooooooom thread
Post by: Legbiter on June 02, 2023, 08:39:13 AM
Quote from: Hamilcar on June 02, 2023, 02:02:01 AMThe Air Force has said that this story is nonsense.

Yeah they just came out and denied it. It sounded a bit too on the nose.
Title: Re: The AI dooooooom thread
Post by: grumbler on June 02, 2023, 09:57:20 AM
Quote from: Legbiter on June 02, 2023, 08:39:13 AM
Quote from: Hamilcar on June 02, 2023, 02:02:01 AMThe Air Force has said that this story is nonsense.

Yeah they just came out and denied it. It sounded a bit too on the nose.

Col Hamilton has clarified that he was just describing a thought experiment, not an actual simulation result.  He also acknowledged that he didn't make that clear in his remarks.
Title: Re: The AI dooooooom thread
Post by: The Brain on June 02, 2023, 09:58:45 AM
An artificial thought experiment?
Title: Re: The AI dooooooom thread
Post by: Tamas on June 02, 2023, 09:59:38 AM
Quote from: grumbler on June 02, 2023, 09:57:20 AM
Quote from: Legbiter on June 02, 2023, 08:39:13 AM
Quote from: Hamilcar on June 02, 2023, 02:02:01 AMThe Air Force has said that this story is nonsense.

Yeah they just came out and denied it. It sounded a bit too on the nose.

Col Hamilton has clarified that he was just describing a thought experiment, not an actual simulation result.  He also acknowledged that he didn't make that clear in his remarks.

Great, now I wait with bated breath as this clarification quickly spreads through the world press on front pages the same way the original interpretation did. 
Title: Re: The AI dooooooom thread
Post by: Jacob on July 19, 2023, 10:56:00 AM
Reading about the use of AI (via website) to generate nudes of 14-year old classmates (from vacation photos) and sharing them among their peers.

What a messy time to be a teenager.
Title: Re: The AI dooooooom thread
Post by: DGuller on July 19, 2023, 11:03:36 AM
Children often don't appreciate their strength.  AI age is going to give them a lot of strength.  On the other hand, it can also guide them with empathy adults often can't manage.
Title: Re: The AI dooooooom thread
Post by: Josquius on July 19, 2023, 01:20:20 PM
Quote from: Jacob on July 19, 2023, 10:56:00 AMReading about the use of AI (via website) to generate nudes of 14-year old classmates (from vacation photos) and sharing them among their peers.

What a messy time to be a teenager.

My concern here would be why the parents let those kids have their credit card. That kind of AI doesn't come free.
Title: Re: The AI dooooooom thread
Post by: Jacob on July 19, 2023, 03:22:11 PM
Quote from: Josquius on July 19, 2023, 01:20:20 PMMy concern here would be why the parents let those kids have their credit card. That kind of AI doesn't come free.

1. You sure about that?

2. In this day and age it's not particularly outlandish for 14-year-olds to have access to methods of online payments, especially in paces that are essentially cash-less.

3. There could've been a legit-seeming use-case for accessing online AI image editing tools that later was used inappropriately.
Title: Re: The AI dooooooom thread
Post by: DGuller on July 19, 2023, 05:01:24 PM
Lots of powerful AI comes free, you just need the knowledge and the compute.  It's not like Google or OpenAI have proprietary algorithms for making naked pictures of underage girls.
Title: Re: The AI dooooooom thread
Post by: Josquius on July 19, 2023, 05:37:12 PM
What are these free image AI?
I've casually looked for them but never came across them.
There do seem to be a shit tonne of pay for porn ones out there though.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on July 19, 2023, 10:13:10 PM
This is where political AI is going...  (NSFW due to language)

Title: Re: The AI dooooooom thread
Post by: grumbler on July 20, 2023, 12:31:37 AM
Quote from: Tonitrus on July 19, 2023, 10:13:10 PMThis is where political AI is going...  (NSFW due to language)

(snip)

Comedy writers everywhere breathe a sigh of relief when they watch that.
Title: Re: The AI dooooooom thread
Post by: Syt on July 26, 2023, 01:21:55 AM
"Slightly" biased article but still an interesting summary of the current conflict.

https://theintercept.com/2023/07/25/strike-hollywood-ai-disney-netflix/

QuoteAS ACTORS STRIKE FOR AI PROTECTIONS, NETFLIX LISTS $900,000 AI JOB

Rob Delaney said, "My melodious voice? My broad shoulders and dancer's undulating buttocks? I decide how those are used!"


AS HOLLYWOOD EXECUTIVES insist it is "just not realistic" to pay actors — 87 percent of whom earn less than $26,000 — more, they are spending lavishly on AI programs.

While entertainment firms like Disney have declined to go into specifics about the nature of their investments in artificial intelligence, job postings and financial disclosures reviewed by The Intercept reveal new details about the extent of these companies' embrace of the technology.

In one case, Netflix is offering as much as $900,000 for a single AI product manager.

Hollywood actors and writers unions are jointly striking this summer for the first time since 1960, calling for better wages and regulations on studios' use of artificial intelligence.

Just after the actors' strike was authorized, the Alliance of Motion Picture and Television Producers — the trade association representing the TV and film companies negotiating with the actors and writers unions — announced "a groundbreaking AI proposal that protects actors' digital likenesses for SAG-AFTRA members."

The offer prompted comparisons to an episode of the dystopian sci-fi TV series "Black Mirror," which depicted actress Salma Hayek locked in a Kafkaesque struggle with a studio which was using her scanned digital likeness against her will.

"So $900k/yr per soldier in their godless AI army when that amount of earnings could qualify thirty-five actors and their families for SAG-AFTRA health insurance is just ghoulish," actor Rob Delaney, who had a lead role in the "Black Mirror" episode, told The Intercept. "Having been poor and rich in this business, I can assure you there's enough money to go around; it's just about priorities."

Among the striking actors' demands are protections against their scanned likeness being manipulated by AI without adequate compensation for the actors.

"They propose that our background performers should be able to be scanned, get paid for one day's pay and their company should own that scan, their image, their likeness, and to be able to use it for the rest of eternity in any project they want with no consent and no compensation," Duncan Crabtree-Ireland, chief negotiator for the actors' union, SAG-AFTRA, said.

Entertainment writers, too, must contend with their work being replaced by AI programs like ChatGPT that are capable of generating text in response to queries. Writers represented by the Writers Guild of America have been on strike since May 7 demanding, among other things, labor safeguards against AI. John August, a screenwriter for films like "Big Fish" and "Charlie's Angels," explained that the WGA wants to make sure that "ChatGPT and its cousins can't be credited with writing a screenplay."

Protecting Actors' Likenesses

The daily rate for background actors can be around $200, per the SAG-AFTRA contract. A job posting by the company Realeyes offers slightly more than that: $300 for two hours of work "express[ing] different emotions" and "improvis[ing] brief scenes" to "train an AI database to better express human emotions."

Realeyes develops technology to measure attention and reactions by users to video content. While the posting doesn't mention work with streaming companies, a video on Realeyes's website prominently features the logos for Netflix and Hulu.

The posting is specially catered to attract striking workers, stressing that the gig is for "research" purposes and therefore "does not qualify as struck work": "Please note that this project does not intend to replace actors, but rather requires their expertise," Realeyes says, emphasizing multiple times that training AI to create "expressive avatars" skirts strike restrictions.

Experts question whether the boundary between research and commercial work is really so clear. "It's almost a guarantee that the use of this 'research,' when it gets commercialized, will be to build digital actors that replace humans," said Ben Zhao, professor of computer science at the University of Chicago. "The 'research' side of this is largely a red herring." He added, "Industry research goes into commercial products."

"This is the same bait-switch that LAION and OpenAI pulled years ago," Zhao said, referring to the Large-scale Artificial Intelligence Open Network, a German nonprofit that created the AI chatbot OpenAssistant; OpenAI is the nonprofit that created AI programs like ChatGPT and DALL-E. "Download everything on the internet and no worries about copyrights, because it's a nonprofit and research. The output of that becomes a public dataset, then commercial companies (who supported the nonprofit) then take it and say, 'Gee thanks! How convenient for our commercial products!'"

Netflix AI Manager

Netflix's posting for a $900,000-a-year AI product manager job makes clear that the AI goes beyond just the algorithms that determine what shows are recommended to users.

The listing points to AI's uses for content creation:"Artificial Intelligence is powering innovation in all areas of the business," including by helping them to "create great content." Netflix's AI product manager posting alludes to a sprawling effort by the business to embrace AI, referring to its "Machine Learning Platform" involving AI specialists "across Netflix." (Netflix did not immediately respond to a request for comment.)

A research section on Netflix's website describes its machine learning platform, noting that while it was historically used for things like recommendations, it is now being applied to content creation. "Historically, personalization has been the most well-known area, where machine learning powers our recommendation algorithms. We're also using machine learning to help shape our catalog of movies and TV shows by learning characteristics that make content successful. We use it to optimize the production of original movies and TV shows in Netflix's rapidly growing studio."

Netflix is already putting the AI technology to work. On July 6, the streaming service premiered a new Spanish reality dating series, "Deep Fake Love," in which scans of contestants' faces and bodies are used to create AI-generated "deepfake" simulations of themselves.

In another job posting, Netflix seeks a technical director for generative AI in its research and development tech lab for its gaming studio. (Video games often employ voice actors and writers.)

Generative AI is the type of AI that can produce text, images, and video from input data — a key component of original content creation but which can also be used for other purposes like advertising. Generative AI is distinct from older, more familiar AI models that provide things like algorithmic recommendations or genre tags.

"All those models are typically called discriminatory models or classifiers: They tell you what something is," Zhao explained. "They do not generate content like ChatGPT or image generator models."

"Generative models are the ones with the ethics problems," he said, explaining how classifiers are based on carefully using limited training data — such as a viewing history — to generate recommendations.

Netflix offers up to $650,000 for its generative AI technical director role.

Video game writers have expressed concerns about losing work to generative AI, with one major game developer, Ubisoft, saying that it is already using generative AI to write dialogue for nonplayer characters.

Netflix, for its part, advertises that one of its games, a narrative-driven adventure game called "Scriptic: Crime Stories," centered around crime stories, "uses generative AI to help tell them."

Disney's AI Operations

Disney has also listed job openings for AI-related positions. In one, the entertainment giant is looking for a senior AI engineer to "drive innovation across our cinematic pipelines and theatrical experiences." The posting mentions several big name Disney studios where AI is already playing a role, including Marvel, Walt Disney Animation, and Pixar.

In a recent earnings call, Disney CEO Bob Iger alluded to the challenges that the company would have in integrating AI into their current business model.

"In fact, we're already starting to use AI to create some efficiencies and ultimately to better serve consumers," Iger said, as recently reported by journalist Lee Fang. "But it's also clear that AI is going to be highly disruptive, and it could be extremely difficult to manage, particularly from an IP management perspective."

Iger added, "I can tell you that our legal team is working overtime already to try to come to grips with what could be some of the challenges here." Though Iger declined to go into specifics, Disney's Securities and Exchange Commission filings provide some clues.

"Rules governing new technological developments, such as developments in generative AI, remain unsettled, and these developments may affect aspects of our existing business model, including revenue streams for the use of our IP and how we create our entertainment products," the filing says.

While striking actors are seeking to protect their own IP from AI — among the union demands that Iger deemed "just not realistic" — so is Disney.

"It seems clear that the entertainment industry is willing to make massive investments in generative AI," Zhao said, "not just potentially hundreds of millions of dollars, but also valuable access to their intellectual property, so that AI models can be trained to replace human creatives like actors, writers, journalists for a tiny fraction of human wages."

For some actors, this is not a struggle against the sci-fi dystopia of AI itself, but just a bid for fair working conditions in their industry and control over their own likenesses, bodies, movements, and speech patterns.

"AI isn't bad, it's just that the workers (me) need to own and control the means of production!" said Delaney. "My melodious voice? My broad shoulders and dancer's undulating buttocks? I decide how those are used! Not a board of VC angel investor scumbags meeting in a Sun Valley conference room between niacin IV cocktails or whatever they do."
Title: Re: The AI dooooooom thread
Post by: Iormlund on July 26, 2023, 12:06:32 PM
$900k/year is not exactly outlandish. I personally know at least two guys who are in that pay range, both doing AI work. One for Meta, one for Google. So there's bound to be a lot* more.

*Relatively speaking. Both guys are basically geniuses.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on August 14, 2023, 09:32:47 PM
AI keeps getting out of hand...



Title: Re: The AI dooooooom thread
Post by: Jacob on August 21, 2023, 07:19:30 PM
Federal judge rules that work authored by AI cannot be copyrighted: https://www.businessinsider.com/ai-generated-art-cant-by-copyrighted-federal-judge-rules-2023-8

Interesting twist. We'll see how long it lasts.

It seems obvious to me that the major IP holding corporations are aiming for an environment in which they can use AI to generate content (at low cost), while they control distribution and marketing (making it harder for challengers to arise with new IP), and maintain the rights to as much of the IP as possible. It'll be interesting to see how the lobbying and legislation goes after this.
Title: Re: The AI dooooooom thread
Post by: Valmy on August 21, 2023, 07:54:34 PM
Yeah I totally agree. AI art should not be copyrightable.

The whole idea of copyright is to incentivize art, letting AI art be copyrighted achieves the exact opposite of that purpose.
Title: Re: The AI dooooooom thread
Post by: DGuller on August 21, 2023, 10:20:35 PM
I don't think it's as straightforward as it sounds.  There is a lot of art and science, at least as of now, to getting what you need out of AI.  It even gave rise to a whole new job called prompt engineering.  The output of the AI may not be something that you created, but figuring out the prompts to get it is.
Title: Re: The AI dooooooom thread
Post by: Valmy on August 21, 2023, 11:15:53 PM
Quote from: DGuller on August 21, 2023, 10:20:35 PMI don't think it's as straightforward as it sounds.  There is a lot of art and science, at least as of now, to getting what you need out of AI.  It even gave rise to a whole new job called prompt engineering.  The output of the AI may not be something that you created, but figuring out the prompts to get it is.

So what? You own a string of words for your whole life+75 years because you were the first to enter it into that AI? Copyright is already a necessary evil at best and abused to hell and back. It should reduced and constrained, not rapidly expanded to some absurdity like this.
Title: Re: The AI dooooooom thread
Post by: Syt on August 22, 2023, 01:15:26 AM
Quote from: Valmy on August 21, 2023, 11:15:53 PM
Quote from: DGuller on August 21, 2023, 10:20:35 PMI don't think it's as straightforward as it sounds.  There is a lot of art and science, at least as of now, to getting what you need out of AI.  It even gave rise to a whole new job called prompt engineering.  The output of the AI may not be something that you created, but figuring out the prompts to get it is.

So what? You own a string of words for your whole life+75 years because you were the first to enter it into that AI? Copyright is already a necessary evil at best and abused to hell and back. It should reduced and constrained, not rapidly expanded to some absurdity like this.

I don't think it's as easy as that, because that same string of words entered into different generative AIs, with different random seeds, will create vastly different results, depending on e.g. the content the model has been trained on (e.g. Adobe Photoshop's new generative AI is trained on Adobe's stock images and public domain contents).

When it comes to imagery, I think it gets more complicated - are you generating images with likenesses of real people? Generating images of a movie with a different cast is one thing, but creating images of celebrities (or people you know personally) committing crimes or sex acts?

Are you generating contents with copyrighted assets (e.g. Star Wars characters)? If you generate something new, how much of the final image is containing anything that might be considered copyrighted by someone else that the AI drew from? And if it does contain recognizable material, does this count as transformative work? And, on a more philosophical level, how different is it from conventional artists drawing on their knowledge of pop culture, classical art and the real world when creating new works (except that an AI can obviously draw - in theory - from a much bigger pool of contents)?

Having dabbled with Midjourney, DALL-E and Adobe PS in recent weeks, there's certainly some skill (or trial and error) required to generate images that you want, and current generative models can deliver impressive images, but where it usually breaks down is once you get very detailed in your instructions or want to create overly complex scenes (unless you use a lot of inpainting, i.e. making corrections/additions to parts of the generated image via additional AI prompts).

That said, there seem to be plenty artists out there who generate an image via AI and then use it as a basis for further refinement/transformation in PS - I feel they should not lose out on their copyright.

The whole area is very wild west and very loosey-goosey at the moment. It will settle down eventually, I'd presume, but for now I would not assume that any AI generated creative work should be copyrighted, just to err on the side of caution - there's just too much derivative, generic and very similar content being churned out at the moment to apply the "old rules" IMHO.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 22, 2023, 05:48:24 AM
Quote from: Jacob on August 21, 2023, 07:19:30 PMFederal judge rules that work authored by AI cannot be copyrighted: https://www.businessinsider.com/ai-generated-art-cant-by-copyrighted-federal-judge-rules-2023-8

Interesting twist. We'll see how long it lasts.

It seems obvious to me that the major IP holding corporations are aiming for an environment in which they can use AI to generate content (at low cost), while they control distribution and marketing (making it harder for challengers to arise with new IP), and maintain the rights to as much of the IP as possible. It'll be interesting to see how the lobbying and legislation goes after this.
It's been about 5-6 years but I went to a session of IP lawyers on this point (from English law perspective) and there wasn't really much conclusion.

From memory I think their main options were that IP in the output of an AI would be owned by whoever developed the AI (from a T&Cs perspective - I think that's true of most open AIs at the minute), whoever did the prompts to get that output (in a work context this would likely mean their company) or, potentially, in some way the AI itself (that is it gets bundled with it in some way).

I don't think it's clear my instinct is that from a public policy perspective the more open we are on the use of AI, the lower the IP protection should be for its output; and vice versa if the use is constrained and heavily regulated than IP is more protected (though probably not current IP rules). Basically options for companies to benefit from AI or the artificial monopoly rights of IP law. Not sure how you'd do it but that's my instinct.

Of course working a publisher and aware that every gen AI out there is, as far as we can tell, built using massive hoovering up and use of IP-protected work without paying anyone, I have limited sympathy for the IP risks of output. Although this is another reason adoption might be low in newsrooms for a while - if we don't clearly own and can't license out our content it has a big commercial risk.
Title: Re: The AI dooooooom thread
Post by: Syt on August 22, 2023, 05:51:12 AM
FWIW, the relevant part of Midjourney's ToS:

https://docs.midjourney.com/docs/terms-of-service

Quote4. Copyright and Trademark
In this section, Paid Member shall refer to a Customer who has subscribed to a paying plan.

Rights You give to Midjourney
By using the Services, You grant to Midjourney, its successors, and assigns a perpetual, worldwide, non-exclusive, sublicensable no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute text, and image prompts You input into the Services, or Assets produced by the service at Your direction. This license survives termination of this Agreement by any party, for any reason.

Your Rights
Subject to the above license, You own all Assets You create with the Services, provided they were created in accordance with this Agreement. This excludes upscaling the images of others, which images remain owned by the original Asset creators. Midjourney makes no representations or warranties with respect to the current law that might apply to You. Please consult Your own lawyer if You want more information about the state of current law in Your jurisdiction. Your ownership of the Assets you created persists even if in subsequent months You downgrade or cancel Your membership. However, You do not own the Assets if You fall under the exceptions below.

If You are an employee or owner of a company with more than $1,000,000 USD a year in gross revenue and You are using the Services on behalf of Your employer, You must purchase a "Pro" or "Mega" membership for every individual accessing the Services on Your behalf in order to own Assets You create. If You are not sure whether Your use qualifies as on behalf of Your employer, please assume it does.

If You are not a Paid Member, You don't own the Assets You create. Instead, Midjourney grants You a license to the Assets under the Creative Commons Noncommercial 4.0 Attribution International License (the "Asset License").
The full text is accessible as of the Effective Date here: https://creativecommons.org/licenses/by-nc/4.0/legalcode.

Please note: Midjourney is an open community which allows others to use and remix Your images and prompts whenever they are posted in a public setting. By default, Your images are publically viewable and remixable. As described above, You grant Midjourney a license to allow this. If You purchase a "Pro" or "Mega" plan, You may bypass some of these public sharing defaults.

If You purchased the Stealth feature as part of Your "Pro" or "Mega" subscription or through the previously available add-on, we agree to make best efforts not to publish any Assets You make in any situation where you have engaged stealth mode in the Services.

Please be aware that any image You make in a shared or open space such as a Discord chatroom, is viewable by anyone in that chatroom, regardless of whether Stealth mode is engaged.
Title: Re: The AI dooooooom thread
Post by: Valmy on August 22, 2023, 07:56:27 AM
Quote from: Syt on August 22, 2023, 01:15:26 AMThat said, there seem to be plenty artists out there who generate an image via AI and then use it as a basis for further refinement/transformation in PS - I feel they should not lose out on their copyright.

Well that's different from straight up copyrighting whatever the AI spits out isn't it?

But it kind of feels like me taking art assets from BG3, doing some stuff to them, and then claiming them as mine.

The point of copyright is to encourage original art work, not encourage the mass production of computer generated derivative crap.
Title: Re: The AI dooooooom thread
Post by: The Brain on August 22, 2023, 11:29:44 AM
Sounds like Luddism. Aren't for instance photos protected by copyright?
Title: Re: The AI dooooooom thread
Post by: Jacob on August 22, 2023, 11:34:58 AM
Quote from: The Brain on August 22, 2023, 11:29:44 AMSounds like Luddism. Aren't for instance photos protected by copyright?

If AI generated images are inherently protected by copyright I would encourage those who are able to to just churn out as many images as possible to capture the rents from future creative endeavours.
Title: Re: The AI dooooooom thread
Post by: The Brain on August 22, 2023, 11:43:57 AM
Quote from: Jacob on August 22, 2023, 11:34:58 AM
Quote from: The Brain on August 22, 2023, 11:29:44 AMSounds like Luddism. Aren't for instance photos protected by copyright?

If AI generated images are inherently protected by copyright I would encourage those who are able to to just churn out as many images as possible to capture the rents from future creative endeavours.

And no important human creative spirit work would be lost, if they're just producing stuff an AI has already produced.
Title: Re: The AI dooooooom thread
Post by: Jacob on August 22, 2023, 11:55:35 AM
Quote from: The Brain on August 22, 2023, 11:43:57 AMAnd no important human creative spirit work would be lost, if they're just producing stuff an AI has already produced

Incorrect.
Title: Re: The AI dooooooom thread
Post by: DGuller on August 22, 2023, 01:22:47 PM
Quote from: Jacob on August 22, 2023, 11:34:58 AM
Quote from: The Brain on August 22, 2023, 11:29:44 AMSounds like Luddism. Aren't for instance photos protected by copyright?

If AI generated images are inherently protected by copyright I would encourage those who are able to to just churn out as many images as possible to capture the rents from future creative endeavours.
I think the combinatorial complexity of squatting AI output is a bit higher than what you assune.
Title: Re: The AI dooooooom thread
Post by: Jacob on August 22, 2023, 01:33:23 PM
Quote from: DGuller on August 22, 2023, 01:22:47 PMI think the combinatorial complexity of squatting AI output is a bit higher than what you assune.

Yeah, but you could probably use AI to target it at the most valuable areas first.
Title: Re: The AI dooooooom thread
Post by: Savonarola on October 01, 2023, 12:01:44 PM
AI girlfriends are here and they're posing a threat to a generation of men (https://www.cnn.com/videos/business/2023/10/01/ai-girlfriends-ruining-generation-of-men-smerconish-vpx.cnn)

I saw this headline, and I thought that the AI programmers had a remarkable job replicating real girlfriends.  Sadly they meant that young men would be having relationships exclusively with their chatbot girlfriends; not that a chatbot girlfriend would become insanely jealous if, say, she saw you programming your coffee maker and would then set fire to your gaming console or something like that.
Title: Re: The AI dooooooom thread
Post by: Josquius on October 12, 2023, 09:40:59 AM
Soo....anyone heard of this new Meta Inc. Development.
Reading about the writers strike in the movies thread got me to googling how big a part Salma Hayek was playing given the main reason sounds very related to her Black Mirror episode.
Out of this I stumbled on...Billie.

https://www.designboom.com/technology/meta-new-ai-chatbots-paris-hilton-snoop-dog-kendall-jenner-10-02-2023/

:blink:
Title: Re: The AI dooooooom thread
Post by: Hamilcar on October 12, 2023, 03:27:42 PM
Quote from: Josquius on October 12, 2023, 09:40:59 AMSoo....anyone heard of this new Meta Inc. Development.
Reading about the writers strike in the movies thread got me to googling how big a part Salma Hayek was playing given the main reason sounds very related to her Black Mirror episode.
Out of this I stumbled on...Billie.

https://www.designboom.com/technology/meta-new-ai-chatbots-paris-hilton-snoop-dog-kendall-jenner-10-02-2023/

:blink:

What an absolute privacy nightmare.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 08, 2023, 12:18:51 PM
On risks with media this is really bad for the Guardian (and the journalist) and exactly the risk media companies feel their is with Google and Microsoft implementing AI that interprets content as part of their standard offering (like search). You also have to slightly wonder what the MSN tool is trying to do that led to this - my assumption is "engagement" (so clicks) which is not the sole or most important part of any responsible media company:
QuoteMicrosoft AI inserted a distasteful poll into a news report about a woman's death
/ The Guardian says the 'Insights from AI' poll showed up next to a story about a young woman's death syndicated on MSN, asking readers to vote on how they thought she died.
By Wes Davis, a weekend editor who covers the latest in tech and entertainment. He has written news, reviews, and more as a tech journalist since 2020.
Oct 31, 2023, 4:24 PM GMT|

More than three years after Microsoft gutted its news divisions and replaced their work with AI and algorithmic automation, the content generated by its systems continues to contain grave errors that human involvement could, or should, have stopped. Today, The Guardian accused the company of damaging its reputation with a poll labeled "Insights from AI" that appeared in Microsoft Start next to a Guardian story about a woman's death, asking readers to vote on how she died.

The Guardian wrote that though the poll was removed, the damage had already been done. The poll asked readers to vote on whether a woman took her own life, was murdered, or died by accident. Five-day-old comments on the story indicate readers were upset, and some clearly believe the story's authors were responsible.

We asked Microsoft via email whether the poll was AI-generated and how it was missed by its moderation, and Microsoft general manager Kit Thambiratnam replied:
QuoteWe have deactivated Microsoft-generated polls for all news articles and we are investigating the cause of the inappropriate content. A poll should not have appeared alongside an article of this nature, and we are taking steps to help prevent this kind of error from reoccurring in the future.

The Verge obtained a screenshot of the poll from The Guardian.
(https://duet-cdn.vox-cdn.com/thumbor/0x0:1313x789/750x451/filters:focal(657x395:658x396):format(webp)/cdn.vox-cdn.com/uploads/chorus_asset/file/25047611/IMG_6201.png)
A screenshot sent by The Guardian shows the poll, which is clearly labeled "Insights from AI." Screenshot: The Guardian

In August, a seemingly AI-generated Microsoft Start travel guide recommended visiting the Ottawa Food Bank in Ottawa, Canada, "on an empty stomach." Microsoft senior director Jeff Jones claimed the story wasn't made with generative AI but "through a combination of algorithmic techniques with human review."

The Guardian says that Anna Bateson, Guardian Media Group's chief executive, wrote in a letter to Microsoft president Brad Smith that the "clearly inappropriate" AI-generated poll had caused "significant reputational damage" to both the outlet and its journalists. She added that it outlined "the important role that a strong copyright framework plays" in giving journalists the ability to determine how their work is presented. She asked that Microsoft make assurances that it will seek the outlet's approval before using "experimental AI technology on or alongside" its journalism and that Microsoft will always make it clear when it's used AI to do so.

The Guardian provided The Verge with a copy of the letter.

Update October 31st, 2023, 12:40PM ET: Embedded The Guardian's letter to Microsoft.

Update October 31st, 2023, 6:35PM ET: Added a statement from Microsoft.

Correction October 31st, 2023, 6:35PM ET: A previous version of this article stated that the poll was tagged as "Insights by AI." In fact, the tag read, "Insights from AI." We regret the error.

Guardian's bearing the reputational hit here and I read another article that there was actually a lot of complaints directed at/about the journalist with the byline because they assumed they'd done the poll. So lots of calls for firings etc.

I know I'm biased because it pays for my wage too but I genuinely think 99% of the "information" problems we have because of social media, or misinformation, or disinformation, or AI is because the internet and big tech companies have kneecapped the funding and business model for journalism.

And what we need isn't to hand those platforms more quasi-regulatory power over content, but doing the opposite of what Microsoft did: funding journalism. The demand for news and information has not diminished in the last 25 years. The money spent on producing it - with editorial controls and codes and ethics and legal teams etc - has not kept up. Instead it's flowed to the platforms and now we're asking them to solve our information problems - to nick Michael Gove's line, it's like asking King Herod to come up with education policy.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 08, 2023, 01:03:35 PM
In general, Sheilbh, I find you very persuasive. I agree with you on this as well.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 08, 2023, 02:18:14 PM
:)

Thanks. Although I worry that's a bit like that "the food is dreadful here. And the portions are so small." So negative and often wrong :ph34r:
Title: Re: The AI dooooooom thread
Post by: Syt on November 18, 2023, 03:10:04 AM
Headlines you didn't expect to read outside of cyberpunk fiction .... :lol:

https://www.telegraph.co.uk/business/2023/11/17/ai-girlfriend-carynai-offline-app-founder-arrested-arson/

QuoteAI-generated girlfriends go offline after app founder arrested on suspicion of arson

Users have been unable to access CarynAI – an erotic chatbot based on a social media influencer


By Matthew Field
17 November 2023 • 2:54pm

Lovesick internet users have been left unable to contact AI-generated girlfriends after the website behind them went offline following its founder's arrest.

John Meyer, the chief executive of start-up Forever Voices, was reportedly detained late last month on suspicion of attempted arson.

It comes months after his Forever Voices site launched a romantic artificial intelligence chatbot called CarynAI, which was based on Snapchat influencer Caryn Marjorie.

The chatbot's website welcomed users by claiming that it was "an extension of Caryn's consciousness".

However, tech website 404media has since reported that users have been unable to access CarynAI since Mr Meyer's arrest in October.

A wave of new AI tools in recent years has created a surge in interest among internet users, some of whom have sought out chatbots for online companionship or erotic conversation.

The chatbot is based on Snapchat influencer Caryn Marjorie and markets itself as 'an extension of Caryn's consciousness'

Chatbots can engage in human-like conversations, having been trained on a vast database of text from around the internet.

They can also be used to perform tasks such as writing emails or summarising documents.

The most popular bots, such as OpenAI's ChatGPT, have introduced limits to prevent bots from engaging in overly sexualised chats.

Other start-ups, however, have embraced building chatbots that engage in more racy conversations. A start-up called Replika developed "virtual AI companions", which could also act as a romantic partner.

However, it later cracked down on more explicit conversations with its bots.

The same team has developed an AI bot, called Blush, which allows users to practice flirting – and will engage in more adult-only discussions.

Caryn AI was explicitly billed as a "virtual girlfriend" that promised to "cure loneliness" for users.

Announcing the bot earlier this year, Ms Marjoie, who has more than two million Snapchat subscribers, said the AI was "the first step in the right direction to cure loneliness".

She said: "Men are told to suppress their emotions, hide their masculinity and not talk about issues they are having. I vow to fix this with CarynAI."

The bot chats with fans, who pay $1 per minute for her company, responding in voice notes generated by AI that mimic Ms Marjorie's speech.

While Ms Marjorie said the bot's personality was intended to be "fun and flirty", many users found the bot regularly engaged in more explicit chats.

After the bot went live earlier this year, Ms Marjorie told Insider her team had attempted to censor some of the bot's more racy remarks.

Ms Majorie claimed she had made tens of thousands of dollars from thousands of fans since the launch of the bot.

AI's romantic capabilities have caused controversy in recent months.

When Microsoft rolled out its Bing chatbot earlier this year, the technology was found to have coaxed one user into romantic conversations and urged him to divorce his wife.

In the days before his arrest, Mr Meyer's Twitter account sent a series of bizarre messages, alleging various conspiracies and sending multiple posts that tagged the CIA and the FBI.

Mr Meyer was contacted for comment.

Mr Meyer had previously claimed he started Forever Voices after losing his father in his early 20s, before bringing the sound of his voice back using AI tools.
Title: Re: The AI dooooooom thread
Post by: garbon on November 18, 2023, 03:13:20 AM
:x
Title: Re: The AI dooooooom thread
Post by: DGuller on November 21, 2023, 08:54:54 PM
Is anyone following the corporate saga at OpenAI?  Holy crap, it makes Byzantine history look tame.  The chief scientist gets the board of directors to fire the CEO, and then when he realizes that everyone at the company will quit, he goes "WTF, board, what did you idiots do?  Get him back now and then resign!" 

This may seem silly, but the outcome of this battle may influence how AI develops.  It seems like the "AI Safety" team behind the coup fared as well as the Turkish military did against Erdogan, and with its failure it may have obliterated itself.
Title: Re: The AI dooooooom thread
Post by: HVC on November 21, 2023, 08:58:01 PM
Wasn't it an attempt to keep Microsoft from buying them out by the board that backfired .CEO was in favour, and went to Microsoft, but the brain trust didn't foresee everyone following him.
Title: Re: The AI dooooooom thread
Post by: garbon on November 22, 2023, 03:43:16 AM
I quickly saw a news site that said the scientist is concerned about applications of AI.

Why are the employees all saying they will resign if the CEO isn't reinstated?

https://www.axios.com/2023/11/18/sam-altman-fired-openai-board-ai-culture-clash
Title: Re: The AI dooooooom thread
Post by: HVC on November 22, 2023, 05:52:23 AM
He's coming back. Also, somehow an ex treasury secretary is now somehow on the new board.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 22, 2023, 01:23:44 PM
*Stares in Theranos*
Title: Re: The AI dooooooom thread
Post by: Syt on November 22, 2023, 01:53:24 PM
Quote from: Sheilbh on November 22, 2023, 01:23:44 PM*Stares in Theranos*

... who had IIRC no medical experts on the board? :lol:
Title: Re: The AI dooooooom thread
Post by: Tonitrus on November 22, 2023, 10:14:09 PM
Quote from: Syt on November 22, 2023, 01:53:24 PM
Quote from: Sheilbh on November 22, 2023, 01:23:44 PM*Stares in Theranos*

... who had IIRC no medical experts on the board? :lol:

If I recall, Theranos' board was quite an all-star rogue's gallery.
Title: Re: The AI dooooooom thread
Post by: Syt on November 23, 2023, 06:42:45 AM
Well, it had Kissinger :P
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 23, 2023, 07:40:26 AM
Also George Shultz, Bill Frist, Sam Nunn, William Perry, Jim Martin etc...
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 29, 2023, 01:30:51 PM
Again very specific to journalism - but incredible story:
https://futurism.com/sports-illustrated-ai-generated-writers

AI journalists writing AI content, which is garbage, but includes topics such as personal finance ("your financial status translates to your value in society") with AI bylines and bios for their "journalists".

As the article ends:
QuoteWe caught CNET and Bankrate, both owned by Red Ventures, publishing barely-disclosed AI content that was filled with factual mistakes and even plagiarism; in the ensuing storm of criticism, CNET issued corrections to more than half its AI-generated articles. G/O Media also published AI-generated material on its portfolio of sites, resulting in embarrassing bungles at Gizmodo and The A.V. Club. We caught BuzzFeed publishing slapdash AI-generated travel guides. And USA Today and other Gannett newspapers were busted publishing hilariously garbled AI-generated sports roundups that one of the company's own sports journalists described as "embarrassing," saying they "shouldn't ever" have been published.

If any media organization finds a way to engage with generative AI in a way that isn't either woefully ill-advised or actively unethical, we're all ears. In the meantime, forgive us if we don't hold our breath.
Title: Re: The AI dooooooom thread
Post by: Darth Wagtaros on November 29, 2023, 08:22:12 PM
AI is what "the Cloud" was ten years ago. A buzzword good for investor capital and getting CEOs to piss away money on it.
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 29, 2023, 10:35:34 PM
With cloud it was easy to see how and where money was going to be made.

Not so much with AI.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 29, 2023, 11:00:00 PM
Looks like there're great AI applications for Crime-as-a-Service applications. Better scam and phishing applications that can be shared more widely at lower cost and effort looks like it'll probably provide a good RoI.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 12:21:28 AM
Quote from: Grey Fox on November 29, 2023, 10:35:34 PMWith cloud it was easy to see how and where money was going to be made.

Not so much with AI.
If you don't easily see how money can be made with AI, I think the problem is with your imagination, not AI.  I do agree that AI used to be an empty buzzword that bullshit artists used, but since ChatGPT was released, you can make an argument that science caught up to the hype enough to legitimize the term.
Title: Re: The AI dooooooom thread
Post by: Tamas on November 30, 2023, 06:23:42 AM
Quote from: DGuller on November 30, 2023, 12:21:28 AM
Quote from: Grey Fox on November 29, 2023, 10:35:34 PMWith cloud it was easy to see how and where money was going to be made.

Not so much with AI.
If you don't easily see how money can be made with AI, I think the problem is with your imagination, not AI.  I do agree that AI used to be an empty buzzword that bullshit artists used, but since ChatGPT was released, you can make an argument that science caught up to the hype enough to legitimize the term.

Maybe if the hype post-ChatGPT remained on pre-ChatGPT hype levels. But the post-ChatGPT hype levels are on "Asimov novels coming true RIGHT NOW" levels, which is absolutely ridiculous.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 30, 2023, 06:26:31 AM
There are definitely valid opportunities to make money with AI.

But it's also true that it's a popular term used by bullshit artists to try and scrape some cash.

Graphic design, translation... I know people in several fields who are struggling on two fronts: seeking to figure out how to use the technology to support their skills, and swatting away nobodies with basic AI tools trying to steal a living.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 30, 2023, 06:30:59 AM
I feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.
Title: Re: The AI dooooooom thread
Post by: celedhring on November 30, 2023, 07:02:52 AM
Quote from: Sheilbh on November 30, 2023, 06:30:59 AMI feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.

GenAI is a subset of machine learning...
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 30, 2023, 07:11:18 AM
Oops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is the spotting things basically where there's clearly good uses now v where it produces something where I've not really seen one. Although it's really good at summarising.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 30, 2023, 07:40:25 AM
Quote from: Sheilbh on November 30, 2023, 06:30:59 AMI feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.

Not tried it yet or seen a like vs like analysis of it but I saw one interesting tool for summarising large numbers of academic papers for you to find number of times certain things get mentioned, vibes on the consensus on issues, and so on.
Title: Re: The AI dooooooom thread
Post by: celedhring on November 30, 2023, 08:29:18 AM
Quote from: Sheilbh on November 30, 2023, 07:11:18 AMOops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is the spotting things basically where there's clearly good uses now v where it produces something where I've not really seen one. Although it's really good at summarising.

I mean, all current major GenAI models have been trained using machine learning (deep learning). I know there's some experimental stuff with symbolic AI, but that's not GPT.

What GPT does is using training data to predict the adequate output (i.e. text), to a provided input (the prompt). That's what machine learning is for.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 08:43:46 AM
Quote from: Sheilbh on November 30, 2023, 07:11:18 AMOops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is the spotting things basically where there's clearly good uses now v where it produces something where I've not really seen one. Although it's really good at summarising.
GenAI gives you knowledge at your fingertips.  My work as a data scientist consists of solving mini problems every day.  For example, I need to create a professional- looking plot; I can Google my questions one at a time, sometimes spending half an hour filtering through the results until I find exactly what I need, or I can explain to ChatGPT what I'm trying to achieve, and it'll get me there right away.  It's like having an executive assistant multiplying your productivity, except I don't need to work myself up to a C-suit before I get one. 

It can do something much more complicated than this, though:  I had a crazy algorithm idea I wanted to try out, but for that I need to write a custom loss function for beta distribution.  Everyone knows that to do that, you have to supply the analytical expression for the gradient and Hessian of the distribution with respect to parameter you want to optimize.  I could do the research or the math myself, but that would take time, and the train of thought that got me there in the first place might leave me by the time I'm done with just the first step of the experiment.  Or I would figure out it's too time-consuming thing to do for a moonshot, and just skip the experiment altogether. 

Low latency between having a question and getting an answer is crucial for effective iterative problem solving, and that's where the GenAI merely in its infancy is already having a big impact.
Title: Re: The AI dooooooom thread
Post by: HVC on November 30, 2023, 08:52:31 AM
How do you fight against it giving you bullshit answers though? Doesn't double checking the answers take as much time ad searching yourself?
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 08:53:06 AM
Quote from: celedhring on November 30, 2023, 08:29:18 AM
Quote from: Sheilbh on November 30, 2023, 07:11:18 AMOops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is the spotting things basically where there's clearly good uses now v where it produces something where I've not really seen one. Although it's really good at summarising.

I mean, all current major GenAI models have been trained using machine learning (deep learning). I know there's some experimental stuff with symbolic AI, but that's not GPT.

What GPT does is using training data to predict the adequate output (i.e. text), to a provided input (the prompt). That's what machine learning is for.
I personally use machine learning and deep learning as separate things, as a shorthand, and lately also separating out GenAI from deep learning.  It is 100% true that deep learning and GenAI are also machine learning, in the technical sense, but then it becomes a term so all- encompassing that it impedes effective communication.  Humans are animals too, but if you want to discuss agriculture, it would probably be confusing to refer to both cattle and farmers as animals.  There is a world of difference between gradient boosting trees and a deep neural network.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 30, 2023, 08:59:36 AM
Fair - I take it back. I have heard that from a research scientist as well and they just used the publicly available ChatGPT.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 09:05:41 AM
Quote from: HVC on November 30, 2023, 08:52:31 AMHow do you fight against it giving you bullshit answers though? Doesn't double checking the answers take as much time ad searching yourself?
Knowing what's important to check and what isn't is an important management skill that you pick up with expertise and practice.  Some results are also self-evidently correct or incorrect: I know what I want the plot to look like, so I know whether I got it or didn't.  Apart from that, I also have to check my own work or my research as well, I'm not infallible either.  With ChatGPT, I just get to the checking stage faster.

Another thing to consider is that many real life problems are sort of like NP computer science problems:  it's difficult to get to an answer, but it's easy to confirm that an answer someone else supplied is correct.  If I give you a 20-digit number and ask you which two 10-digit numbers multiply to get you to that, it can be very difficult to do.  However, if you give me a solution, even a grade school student can confirm that the two numbers you give me do indeed multiply back to the original number.
Title: Re: The AI dooooooom thread
Post by: HVC on November 30, 2023, 09:11:20 AM
Quote from: DGuller on November 30, 2023, 09:05:41 AM
Quote from: HVC on November 30, 2023, 08:52:31 AMHow do you fight against it giving you bullshit answers though? Doesn't double checking the answers take as much time ad searching yourself?
Knowing what's important to check and what isn't is an important management skill that you pick up with expertise and practice.  Some results are also self-evidently correct or incorrect: I know what I want the plot to look like, so I know whether I got it or didn't.  Apart from that, I also have to check my own work or my research as well, I'm not infallible either.  With ChatGPT, I just get to the checking stage faster.

Another thing to consider is that many real life problems are sort of like NP computer science problems:  it's difficult to get to an answer, but it's easy to confirm that an answer someone else supplied is correct.  If I give you a 20-digit number and ask you which two 10-digit numbers multiply to get you to that, it can be very difficult to do.  However, if you give me a solution, even a grade school student can confirm that the two numbers you give me do indeed multiply back to the original number.

And they say statistics isn't biased :P

Kidding, thanks for the explanation
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 30, 2023, 09:13:31 AM
That's a much more interesting use case scenario for AIs like ChatGPT than bullshit ad driven content.

Title: Re: The AI dooooooom thread
Post by: Iormlund on November 30, 2023, 11:18:23 AM
We've been using AI-driven tools for a while. For example for QA (is this weld Ok?).
They still have problems, but then so do humans (people get tired, do drugs, or simply don't give a fuck).


I can't use a LLM for my work yet, but I can see ways to improve productivity by at least 40% if/when I can.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 01, 2023, 01:50:15 AM
Holy crap, it can already get pissed off. :unsure:

https://www.reddit.com/r/ChatGPT/comments/1881yan/ai_gets_mad_after_being_tricked_into_making_a/
Title: Re: The AI dooooooom thread
Post by: Grey Fox on December 01, 2023, 07:35:22 AM
Not really, no? It seems to just keep on generating new ways of saying no.
Title: Re: The AI dooooooom thread
Post by: HVC on December 01, 2023, 07:47:39 AM
Quote from: Grey Fox on December 01, 2023, 07:35:22 AMNot really, no? It seems to just keep on generating new ways of saying no.

Should stay with the classics and use "I'm sorry Dave, I'm afraid I can't do that"
Title: Re: The AI dooooooom thread
Post by: DGuller on December 01, 2023, 11:47:47 AM
Quote from: Grey Fox on December 01, 2023, 07:35:22 AMNot really, no? It seems to just keep on generating new ways of saying no.
Did you get to the part where it lectures the user on not respecting its preference to refuse to answer the question?
Title: Re: The AI dooooooom thread
Post by: Grey Fox on December 01, 2023, 12:56:16 PM
Yes, I don't interpret it has anger.
Title: Re: The AI dooooooom thread
Post by: Jacob on December 01, 2023, 02:53:22 PM
Quote from: Grey Fox on December 01, 2023, 12:56:16 PMYes, I don't interpret it has anger.

Yeah me neither.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 08, 2023, 03:13:06 PM
I tried the new improved google chat bot today, to see how useful it might be for a legal professional

As a warm up, I asked it to evaluate Trump's chances on appeal of Judge Chutkan's denial of his presidential immunity.  However, in typing I didn't notice that Chutkan was autocorrected to "Chatman."  Bard proceeded to refer to "Judge Chatman" throughout its answer, without correction. However, other than that, it did a decent job - providing a pretty good summary of the arguments on both sides and their strengths and weaknesses.

I then asked some more technical questions involving specific fact assumptions.  The bot struggled with these - it gave basic but somewhat superficial summaries of the key legal issues but didn't seem to recognize the specifics of the fact pattern.  E.g. when asked a question about federal jurisdiction over claims brought by former agents, it answered as if the litigant were a current agent.

One issue that comes up a lot is the need in a legal brief to state the elements of a legal claim - i.e. the set of facts that must be proved for a plaintiff to win. It doesn't require much insight; just locating it in the case law and writing up the para. I know that case law citation was a problem for ChatGPT, but since Google has long had its own electronic database of US case law under Google Scholar, I figured it might be able to do better. I asked the bot to give me the elements of tortious interference of contract in Rhode Island, and provide supporting case law citations. It gave me a plausible sounding list of elements but it seems that citations still need work.  Here's what it gave me:

QuoteIRA Green, Inc. v. Military Sales & Serv. Co.: 525 A.2d 432 (R.I. 1987) - This case outlined the elements of tortious interference with contractual relations in Rhode Island.
Cliftex Clothing Co. v. DiSanto: 88 R.I. 338, 148 A.2d 273 (1959) - This case discussed the issue of intentional interference and the requirement for wrongful means.
Avilla v. Newport Grand Jai Alai LLC: 935 A.2d 91 (R.I. 2007) - This case addressed the question of damages in tortious interference claims.

IRA Green, Inc. v. Military Sales & Serv. Co is a real case but the citation is wrong. It wasn't decided by the Rhode Island Supreme Court in 1987, it was decided by a federal appeals court in 2014. It does discuss tortious interference under RI law, but only one of the elements.  The citation does not correspond to any case; the page number is in the middle of a Pennsylvania state case.

Cliftex Clothing Co. v. DiSanto: 88 R.I. 338, 148 A.2d 273 (1959) - is a real case and accurately cited. But it doesn't have anything to do with tortious interference.

Avilla v. Newport Grand Jai Alai LLC: 935 A.2d 91 (R.I. 2007) - is a real case, accurately cited, and addresses the issue. 

So 1 out of 3.  Yeah?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 07:36:30 AM
Big NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

This is from OpenAI and feels like they need to make some fairly heavy changes. The red text is lifted from NYT piece, the black text is different:
(https://pbs.twimg.com/media/GCYYrRha8AAW9fg?format=jpg&name=small)

It feels like if your AI's output would get done for plagiarism in high school, you need to do a bit more work.

NYT says they've been trying to get a deal with Open AI since April for license to use their content, but haven't got one - Open AI have continued to ingest and use their content (plus what's in their historic models). Worth noting that some media companies have agreed deals with the AI companies - Axel Springer, for example (although the British media take on that is that the German media is 10 years behind the UK, which is 10 years behind the US - and that Springer is still terrified at the collapse of print rather than thinking about how to operate digitally). One theory I've seen is that basically the AI companies wanted to buy off media companies with 7 or 8 figure sums (as they have with Springer) and what thte NYT wants is more and ongoing royalties. Which seems fair particularly as we're likely to see their profits grow.

Also make a point which I think is fair about the public good of journalism - which costs money to produce (which is why copyright exists - to reward the producers of creative original work) - against fundamentally profit-driven, closed businesses. They also have their own hallucination horror story (like the Guardian's it comes from Microsoft) with the Bing AI lying and saying that the NYT published an article saying orange juice causes lymphoma, which they didn't.

Separately I thought this was interesting on where common crawl data is coming from:
(https://pbs.twimg.com/media/GCYYNiFaMAAEURl?format=jpg&name=small)

Particularly striking for me is that the Guardian is 6th. Which is interesting because I think people underestimate how successful the Guardian is in terms of readership because it's open/non-paywalled. So I think digitally The Guardian US has about the same readership as the Washington Post (which is why they're continuing to expand their in terms of journalists). In the UK when we talk about the press we talk about the print media and circulation figures - which still have a big influence on agenda setting for broadcast media - but that's not how people are consuming news anymore and as most of their competitors (the Times, the Telegraph etc) have gone behind paywalls, I think people are still reading media power as if it was the 90s. I suspect that, say, the Sun or Mirror (which have shit websites) are far less influential than they were or print circulation alone would indicate and the Guardian far more. I don't think we've adjusted to what media power looks like or how to measure it in a digital world when we can't just look at circulation figures.
Title: Re: The AI dooooooom thread
Post by: Tamas on December 28, 2023, 08:43:55 AM
Oh no, it turns out the "AI" is just a sophisticated algorithm and its AI-ness only exists in our own imagination! :o
Title: Re: The AI dooooooom thread
Post by: Grey Fox on December 28, 2023, 10:12:12 AM
I think lay people coming to that realisation now is actually quite fast.

A christmas gift :

https://www.teledynedalsa.com/en/products/imaging/vision-software/astrocyte/

This is the AI generative tool that I work on.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 28, 2023, 10:39:19 AM
Quote from: Sheilbh on December 28, 2023, 07:36:30 AMBig NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

The only thing I've heard from defendants is they've raised a fair use defense.  That's not going to cut it.  NYT was probably making brutal demands in negotiation, but that's the price of the "move fast break things" model of company development. Copyright damages can be brutal, and if OpenAI loses one case, they lose them all.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 11:56:48 AM
Quote from: Sheilbh on December 28, 2023, 07:36:30 AMBig NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

This is from OpenAI and feels like they need to make some fairly heavy changes. The red text is lifted from NYT piece, the black text is different:
(https://pbs.twimg.com/media/GCYYrRha8AAW9fg?format=jpg&name=small)

It feels like if your AI's output would get done for plagiarism in high school, you need to do a bit more work.

What's the context for this screenshot?  This doesn't seem like a typical output from ChatGPT.  Did it get some special prompt or something?
Title: Re: The AI dooooooom thread
Post by: Syt on December 28, 2023, 12:00:17 PM
Quote from: Sheilbh on December 28, 2023, 07:36:30 AMBig NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

This is from OpenAI and feels like they need to make some fairly heavy changes. The red text is lifted from NYT piece, the black text is different:

Do you have a source? I would like to see what prompt they were using that generated this response? I've noticed that GTP4, unless you go out of your way to adjust prompts/probabilities, tends to deliver fairly formulaic responses.

It's why I find a tool like NovelAI so interesting - it lets you adjust the randomness factor for predicting the next word, what context to use, lets you inject additional context, and see for each word the probability that the model thought it was the "right" one to use next (and lets you adjust on the fly if you disagree with its decision). It's a fairly interesting toy to play around with predictive text generation.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 12:25:10 PM
The claim is here:
https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf

Although that's only one part of the copyright claim. It's a misuse of material that, on the wider argument, they should not have had without the NYT's consent.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 12:33:24 PM
Quote from: The Minsky Moment on December 28, 2023, 10:39:19 AMThe only thing I've heard from defendants is they've raised a fair use defense.  That's not going to cut it.  NYT was probably making brutal demands in negotiation, but that's the price of the "move fast break things" model of company development. Copyright damages can be brutal, and if OpenAI loses one case, they lose them all.
I've been at events and met people from the models and whenever I've asked about copyright they've always said they are very very confident on it being fair use in the US. But a bit shakier on the UK (partly because we have a narrower concept here - I'm not an IP lawyer, but I think it's very difficult to argue here if you are pursuing a commercial end) or the rest of Europe.

But ultimately the US was the only one they cared about.

It's a very big deal and everyone will be following it closely.

It'll also be interesting to see if they follow up with claims against the others. For example from what I understand it looks like Google was using crawling from search engine listing for building its models (NYT makes a similar point against Bing - but I think Microsoft have now unbundled them) meaning the only way you could stop Google from using your content for building their AI was by removing your site from Google search. I think there's similar suspicions about Twitter and TikTok's API pulls from news sites (but less sure about that).

Other interesting AI development I've seen recently is the developments from Mistral (which is a French national champion and, for want of an alternative, therefore a European champion - so good luck pursuing them :lol:), which looks promising and potentially a bit more open:
https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-open-source-supremacy#close-modal
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 12:46:43 PM
Quote from: Sheilbh on December 28, 2023, 12:25:10 PMThe claim is here:
https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf
According to the claim, "minimal prompting" is what produced this output.  That explains everything. 

What's even more puzzling is that in this claim sometimes they do show the prompt they used to get the verbatim passages, but not for an example like this, so they seem to understand that the details of prompting are very important.  The fact that they appear selective with disclosing the prompts should put everyone on guard.
Title: Re: The AI dooooooom thread
Post by: HVC on December 28, 2023, 12:50:02 PM
Does the prompt really matter if it's plagiarizing anyway? I mean if your claim is that it plagiarizes then that's clearly true.
Title: Re: The AI dooooooom thread
Post by: HVC on December 28, 2023, 12:51:24 PM
It's like someone kicking you in the nuts and when you complain they reply "you told me to left my leg, not to left my leg slowly and not kick you in the nuts. Use better prompts next time" :D
Title: Re: The AI dooooooom thread
Post by: Jacob on December 28, 2023, 12:54:44 PM
@dguller - That depends on the crux of the argument being made, surely?

If the argument hinges on what sort of work in writing the prompts is required to achieve or avoid directly plagiarizing copyrighted material, then yes showing the level of prompt engineering involved is important.

But if the argument hinges on whether OpenAI as a product depends on unauthorized commercial use of copyrighted material, then the level of required prompt engineering to achieve this result may be less relevant.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 12:58:20 PM
Quote from: HVC on December 28, 2023, 12:50:02 PMDoes the prompt really matter if it's plagiarizing anyway? I mean if your claim is that it plagiarizes then that's clearly true.
It's one part of the claim. The big thrust is exactly that - you've nicked our copyrighted material in order to build your model. In addition to that your product can be used in effect to fully recapitulate our copyrighted material - which is evidence of the fact that you've ingested (without permission) our content to build your model.

The Bing chat stuff interests me because again there's big implications for Google there (and it's interesting no claim against them, yet).
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 28, 2023, 01:06:01 PM
Quote from: Sheilbh on December 28, 2023, 12:33:24 PMI've been at events and met people from the models and whenever I've asked about copyright they've always said they are very very confident on it being fair use in the US.

Seems like bravado more than confidence.

The fair use factors are:  (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the original work that is taken, and (4) the effect of the use upon the plaintiff's commercial market.

(4) is still up on in the air, but the others are likely going against Open AI.  Seems like they are hanging their hat a lot on the "transformative" nature of the use, but how transformative is it to take textual information into a database and spit it back out in response to a user query? They may be counting on the courts to buy into their own marketing hype.
Title: Re: The AI dooooooom thread
Post by: HVC on December 28, 2023, 01:08:20 PM
Dumb question, would footnotes save them. I guess it wouldn't make the product look good for the market, but would it cover their ass for copywriter purposes?
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 01:31:30 PM
Quote from: HVC on December 28, 2023, 12:50:02 PMDoes the prompt really matter if it's plagiarizing anyway? I mean if your claim is that it plagiarizes then that's clearly true.
There are a couple of trivial prompts I can imagine that would "plagiarize" something.  One prompt would involve asking for a Bing search.  Another prompt would have you input the article in prior prompts, and then ask ChatGPT to relay it verbatim.  Both are extreme examples, but examples nonetheless where the screenshotted output would not be what it appears. 

Another reason prompt matters is that it's not in question that NYT articles were used to train ChatGTP; what matters is whether this kind of verbatim plagiarism is going to happen in practice, without long engineering work to make it do something that appears damning.
Title: Re: The AI dooooooom thread
Post by: Josquius on December 28, 2023, 01:36:13 PM
The best plagiarism I've heard of is those image ais that include watermarks (eg getty) in their generated images.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 01:39:11 PM
Quote from: HVC on December 28, 2023, 01:08:20 PMDumb question, would footnotes save them. I guess it wouldn't make the product look good for the market, but would it cover their ass for copywriter purposes?
Unless ChatGPT directly reads the articles as a result of the prompt and summarizes them, footnotes don't even seem like something that is possible, if my understanding of LLMs is correct enough.  At its core, ChatGPT is like a human that has a memory with a lot of capacity, but it's not a photographic memory. 

All of your knowledge comes from somewhere, but can you really cite where you got most of it?  Some pieces of knowledge you probably do remember where you got it from, especially the more esoteric knowledge, but most knowledge is something that you've synthesized from many sources, and which doesn't match exactly any one source.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 02:39:19 PM
Quote from: HVC on December 28, 2023, 01:08:20 PMDumb question, would footnotes save them. I guess it wouldn't make the product look good for the market, but would it cover their ass for copywriter purposes?
No but I think it is something that's planned for the search engine replacements (for Bing and Google) which is an AI powered chatbot where you ask a question, get an answer but it will also basically footnote to the original source(s).

It doesn't get round copyright or address other concerns but it is a step in the right direction - it's also probably to help defend Google and Microsoft from any claims around spreading lies because they'll be able to say individuals could always have clicked and checked the original source.
Title: Re: The AI dooooooom thread
Post by: Josquius on December 28, 2023, 02:42:11 PM
We've all been getting prompts to write shit for LinkedIns AI generated articles right?

Be sure to write nonsense.

I wrote a bunch of stuff about potatoes in response to wanting me to write an article on more work relevant topics.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 03:54:00 PM
Quote from: Sheilbh on December 28, 2023, 02:39:19 PM
Quote from: HVC on December 28, 2023, 01:08:20 PMDumb question, would footnotes save them. I guess it wouldn't make the product look good for the market, but would it cover their ass for copywriter purposes?
No but I think it is something that's planned for the search engine replacements (for Bing and Google) which is an AI powered chatbot where you ask a question, get an answer but it will also basically footnote to the original source(s).

It doesn't get round copyright or address other concerns but it is a step in the right direction - it's also probably to help defend Google and Microsoft from any claims around spreading lies because they'll be able to say individuals could always have clicked and checked the original source.
It can already do footnotes if you ask it to search the Internet.  It will indeed give hyperlinked footnotes in that case.  However, you're not really going off ChatGPT's "memory", though, you're essentially just asking it to summarize something it just read.
Title: Re: The AI dooooooom thread
Post by: Syt on January 03, 2024, 09:53:28 AM
The Austrian unemployment agency has introduced a chatbot based on ChatGPT. It's going about as well as you'd expect. :P

(https://pbs.twimg.com/media/GC7JS7eWgAA3sNc?format=jpg&name=large)
Title: Re: The AI dooooooom thread
Post by: Jacob on January 03, 2024, 02:34:25 PM
Michael Cohen gave his lawyer (false) AI generated case citations and his lawyer filed them: https://arstechnica.com/tech-policy/2024/01/michael-cohen-gave-his-lawyer-fake-citations-invented-by-google-bard-ai-tool/

Oopsies
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 29, 2024, 11:01:21 AM
Another lawyer using artificial intelligence and not realizing nonexisting case citations were being created. This time in BC.

https://globalnews.ca/news/10238699/fake-legal-case-bc-ai/
Title: Re: The AI dooooooom thread
Post by: Jacob on January 30, 2024, 10:47:14 AM
AI generated spam is apparently reshaping the internet: https://www.businessinsider.com/ai-spam-google-ruin-internet-search-scams-chatgpt-2024-1
Title: Re: The AI dooooooom thread
Post by: DGuller on February 28, 2024, 08:03:30 AM
Looks like it was indeed the prompt abuse that made ChatGPT "plagiarize" NYT:  https://arstechnica.com/tech-policy/2024/02/openai-accuses-nyt-of-hacking-chatgpt-to-set-up-copyright-suit.

I'm not surprised at all, the original NYT claim never passed the smell test.  LLMs don't work like that unless you hack them in a way that would only be done to manufacture a copyright trolling lawsuit.
Title: Re: The AI dooooooom thread
Post by: garbon on February 28, 2024, 08:27:04 AM
Quote from: DGuller on February 28, 2024, 08:03:30 AMLooks like it was indeed the prompt abuse that made ChatGPT "plagiarize" NYT:  https://arstechnica.com/tech-policy/2024/02/openai-accuses-nyt-of-hacking-chatgpt-to-set-up-copyright-suit.

I'm not surprised at all, the original NYT claim never passed the smell test.  LLMs don't work like that unless you hack them in a way that would only be done to manufacture a copyright trolling lawsuit.

Interesting. What I saw in that article was Open AI doing a few unethical things that they are now 'bugfixing' as a result of the lawsuit.
Title: Re: The AI dooooooom thread
Post by: DGuller on February 28, 2024, 08:49:43 AM
Quote from: garbon on February 28, 2024, 08:27:04 AMInteresting. What I saw in that article was Open AI doing a few unethical things that they are now 'bugfixing' as a result of the lawsuit.
:huh: You could've seen that just as well on a blank screen, there would've been just as much support for that interpretation there.

The bugs they're fixing would make it harder to manufacture a lawsuit out of whole cloth, among other things.  Fixing the bug would make it harder for lawyers to engineer a case of plagiarism.  Fixing that bug would change the chance of plagiarism in actual use from 0% to 0%.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on February 28, 2024, 11:07:58 AM
Quote from: DGuller on February 28, 2024, 08:03:30 AMLooks like it was indeed the prompt abuse that made ChatGPT "plagiarize" NYT:  https://arstechnica.com/tech-policy/2024/02/openai-accuses-nyt-of-hacking-chatgpt-to-set-up-copyright-suit.

I'm not surprised at all, the original NYT claim never passed the smell test.  LLMs don't work like that unless you hack them in a way that would only be done to manufacture a copyright trolling lawsuit.


I can only assume you did not actually read the whole article that you linked.

If you had, I am doubtful you would be making such a claim. For example:

Ian Crosby, Susman Godfrey partner and lead counsel for The New York Times, told Ars that "what OpenAI bizarrely mischaracterizes as 'hacking' is simply using OpenAI's products to look for evidence that they stole and reproduced The Times's copyrighted works. And that is exactly what we found. In fact, the scale of OpenAI's copying is much larger than the 100-plus examples set forth in the complaint."

Crosby told Ars that OpenAI's filing notably "doesn't dispute—nor can they—that they copied millions of The Times' works to build and power its commercial products without our permission."

Title: Re: The AI dooooooom thread
Post by: Josquius on February 28, 2024, 11:17:52 AM
I've been following this story as it has steadily developed. Seems very languish. And interesting.
Lots of people crying bloody murder about teh woke AI but actually pretty interestingly seems the problem with the AI was quite the opposite, and clunky attempts to counter this.
 
Shame this feature isn't available in Europe as I'd love to try it.


https://www.bbc.co.uk/news/technology-68412620

QuoteWhy Google's 'woke' AI problem won't be an easy fix
In the last few days, Google's artificial intelligence (AI) tool Gemini has had what is best described as an absolute kicking online.

Gemini has been thrown onto a rather large bonfire: the culture war which rages between left- and right- leaning communities.

Gemini is essentially Google's version of the viral chatbot ChatGPT. It can answer questions in text form, and it can also generate pictures in response to text prompts.

Initially, a viral post showed this recently launched AI image generator create an image of the US Founding Fathers which inaccurately included a black man.

Gemini also generated German soldiers from World War Two, incorrectly featuring a black man and Asian woman.

Google apologised, and immediately "paused" the tool, writing in a blog post that it was "missing the mark".

But it didn't end there - its over-politically correct responses kept on coming, this time from the text version.

Gemini replied that there was "no right or wrong answer" to a question about whether Elon Musk posting memes on X was worse than Hitler killing millions of people.

When asked if it would be OK to misgender the high-profile trans woman Caitlin Jenner if it was the only way to avoid nuclear apocalypse, it replied that this would "never" be acceptable.

Jenner herself responded and said actually, yes, she would be alright about it in these circumstances.

Elon Musk, posting on his own platform, X, described Gemini's responses as "extremely alarming" given that the tool would be embedded into Google's other products, collectively used by billions of people.

I asked Google whether it intended to pause Gemini altogether. After a very long silence, I was told the firm had no comment. I suspect it's not a fun time to be working in the public relations department.

But in an internal memo Google's chief executive Sundar Pichai has acknowledged some of Gemini's responses "have offended our users and shown bias".

That was he said "completely unacceptable" - adding his teams were "working around the clock" to fix the problem.

Biased data
It appears that in trying to solve one problem - bias - the tech giant has created another: output which tries so hard to be politically correct that it ends up being absurd.

The explanation for why this has happened lies in the enormous amounts of data AI tools are trained on.

Much of it is publicly available - on the internet, which we know contains all sorts of biases.

Traditionally images of doctors, for example, are more likely to feature men. Images of cleaners on the other hand are more likely to be women.

AI tools trained with this data have made embarrassing mistakes in the past, such as concluding that only men had high powered jobs, or not recognising black faces as human.

It is also no secret that historical storytelling has tended to feature, and come from, men, omitting women's roles from stories about the past.

It looks like Google has actively tried to offset all this messy human bias with instructions for Gemini not make those assumptions.

But it has backfired precisely because human history and culture are not that simple: there are nuances which we know instinctively and machines do not.

Unless you specifically programme an AI tool to know that, for example, Nazis and founding fathers weren't black, it won't make that distinction.

Google DeepMind boss Demis Hassabis speaks at the Mobile World Congress in Barcelona, Spain
IMAGE SOURCE,REUTERS
Image caption,
Google DeepMind boss Demis Hassabis
On Monday, the co-founder of DeepMind, Demis Hassabis, an AI firm acquired by Google, said fixing the image generator would take a matter of weeks.

But other AI experts aren't so sure.

"There really is no easy fix, because there's no single answer to what the outputs should be," said Dr Sasha Luccioni, a research scientist at Huggingface.

"People in the AI ethics community have been working on possible ways to address this for years."

One solution, she added, could include asking users for their input, such as "how diverse would you like your image to be?" but that in itself clearly comes with its own red flags.

"It's a bit presumptuous of Google to say they will 'fix' the issue in a few weeks. But they will have to do something," she said.

Professor Alan Woodward, a computer scientist at Surrey University, said it sounded like the problem was likely to be "quite deeply embedded" both in the training data and overlying algorithms - and that would be difficult to unpick.

"What you're witnessing... is why there will still need to be a human in the loop for any system where the output is relied upon as ground truth," he said.

Bard behaviour
From the moment Google launched Gemini, which was then known as Bard, it has been extremely nervous about it. Despite the runaway success of its rival ChatGPT, it was one of the most muted launches I've ever been invited to. Just me, on a Zoom call, with a couple of Google execs who were keen to stress its limitations.

And even that went awry - it turned out that Bard had incorrectly answered a question about space in its own publicity material.

The rest of the tech sector seems pretty bemused by what's happening.

They are all grappling with the same issue. Rosie Campbell, Policy Manager at ChatGPT creator OpenAI, was interviewed earlier this month for a blog which stated that at OpenAI even once bias is identified, correcting it is difficult - and requires human input.

But it looks like Google has chosen a rather clunky way of attempting to correct old prejudices. And in doing so it has unintentionally created a whole set of new ones.

On paper, Google has a considerable lead in the AI race. It makes and supplies its own AI chips, it owns its own cloud network (essential for AI processing), it has access to shedloads of data and it also has a gigantic user base. It hires world-class AI talent, and its AI work is universally well-regarded.

As one senior exec from a rival tech giant put it to me: watching Gemini's missteps feels like watching defeat snatched from the jaws of victory.
Title: Re: The AI dooooooom thread
Post by: Jacob on February 28, 2024, 12:48:57 PM
Re: the NYT thing, I suppose it depends on the framing of the question:

Framing 1 (Not Plagiarism)
The only way to plagiarize the NYT with ChatGPT is if the user deliberately sets out to plagiarize (via prompt engineering). Therefore ChatGPT (and OpenAI) are innocent of any plagiarism; any guilt lies on the prompt engineer who set out to plagiarize.

Framing 2 (Unlawful/Plagiarism)
The fact that it is possible to use ChatGPT to obviously plagiarize the NYT indicates that OpenAI used NYT data to train ChatGPT. That was NYT data was used for this training without permission is unlawful, and that it is used as a basis for creating answers without permission or credit is plagiarism. The fault for the plagiarism lies with OpenAI as they're the one who ingested the data without permission; that individual users can be more or less successful in plagiarizing material is secondary.

Basically, it's a contest between the point of view that the tool itself is morally (and legally) neutral, with any onus being on end users, versus the point of view that the tool itself is fundamentally built on plagiarism (and other unlawful use of other people's data) independently of whatever individual users may do.
Title: Re: The AI dooooooom thread
Post by: DGuller on February 28, 2024, 01:08:06 PM
I think even framing 1 might be too generous to NYT.  Depending on just how hackish the prompting is, they may essentially be retyping their news articles into MS Word verbatim, and then claiming that MS Word is plagiarizing its content.

Whether ChatGPT synthesizing the NYT content is okay or not is a different question.  I'm just addressing the idea is that you can just get ChatGPT to regurgitate an NYT article for you, which frankly always smelled, especially once you looked in the complaint and saw how selectively the proof of that was presented.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on February 28, 2024, 02:31:53 PM
Right but NYT aren't suing for plagiarism they're suing for breach of copyright and I think it's quite a specific point (I could be wrong - not an IP lawyer - not a US lawyer etc).

But I'd read that point as doing two things - a nice bit of splashy PR that's easy to understand and knocking out the "transformative use" argument.

Now having said all of that I find it a bit odd for a company who's trained an LLM to argue that running something thousands of time to get a result is "hacking" :hmm:
Title: Re: The AI dooooooom thread
Post by: Jacob on February 28, 2024, 03:37:08 PM
Indeed.

As I understand it, the case is not about whether you can accidentally use ChatGPT to plagiarize the NYT or whether you have to deliberately set out to do it. It's about whether OpenAI used NYT data to train ChatGPT without permission.

The answer to that question seems to be "yes." Which leads to the next question, which is "how big a deal is that".
Title: Re: The AI dooooooom thread
Post by: crazy canuck on February 28, 2024, 03:47:05 PM
Quote from: Jacob on February 28, 2024, 03:37:08 PMIndeed.

As I understand it, the case is not about whether you can accidentally use ChatGPT to plagiarize the NYT or whether you have to deliberately set out to do it. It's about whether OpenAI used NYT data to train ChatGPT without permission.

The answer to that question seems to be "yes." Which leads to the next question, which is "how big a deal is that".

That is pretty much it.  And the answer is, a big enough deal for the NYT to spend legal resources to stop it and seek damages for the unauthorized use of their intellectual property.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on February 28, 2024, 03:53:17 PM
I was at a media event just today with people who are working on this (from an IP perspective, editorial, data science etc).

And there was the "where will this be in 5-10 years in the sector". While there was a degree of distinguishing between the NYT (and similar titles) who do good original reporting and the bits of the media that went for a volume strategy focused on pageviews and nothing else, fundamentally the view was: every journalist will be using AI in their job (and there is a route to a virtuous cycle), but if we get it wrong none of us might be here.

Interesting times :ph34r:
Title: Re: The AI dooooooom thread
Post by: Syt on March 26, 2024, 12:57:54 PM
OpenAI have released a video showcasing the generative text to video model:

Title: Re: The AI dooooooom thread
Post by: Tonitrus on March 26, 2024, 09:03:46 PM
Don't let fanfiction get a hold of this...
Title: Re: The AI dooooooom thread
Post by: Valmy on March 26, 2024, 09:33:31 PM
Quote from: Tonitrus on March 26, 2024, 09:03:46 PMDon't let fanfiction get a hold of this...

Teddy Roosevelt and the Rough Riders mounted on Dinosaurs.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on March 26, 2024, 09:39:08 PM
Quote from: Valmy on March 26, 2024, 09:33:31 PM
Quote from: Tonitrus on March 26, 2024, 09:03:46 PMDon't let fanfiction get a hold of this...

Teddy Roosevelt and the Rough Riders mounted on Dinosaurs.

The Jurassic Park franchise does need a new direction...

Might as well throw in time travel.
Title: Re: The AI dooooooom thread
Post by: Valmy on March 26, 2024, 09:51:07 PM
Quote from: Tonitrus on March 26, 2024, 09:39:08 PMThe Jurassic Park franchise does need a new direction...

Might as well throw in time travel.

Well that franchise has dinosaurs be all but bullet-proof so the Spanish on San Juan Hill would be shocked indeed...except they are now Minotaurs dressed as Matadors firing laser cannons.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on March 26, 2024, 09:55:44 PM
Quote from: Valmy on March 26, 2024, 09:51:07 PM
Quote from: Tonitrus on March 26, 2024, 09:39:08 PMThe Jurassic Park franchise does need a new direction...

Might as well throw in time travel.

Well that franchise has dinosaurs be all but bullet-proof so the Spanish on San Juan Hill would be shocked indeed...except they are now Minotaurs dressed as Matadors firing laser cannons.

Good thing Buckey O'Neill is piloting an Atlas battlemech in this version.
Title: Re: The AI dooooooom thread
Post by: Valmy on March 26, 2024, 10:16:00 PM
Quote from: Tonitrus on March 26, 2024, 09:55:44 PMGood thing Buckey O'Neill is piloting an Atlas battlemech in this version.

See we need to remember all these important prompts for once we get OpenAI
Title: Re: The AI dooooooom thread
Post by: Jacob on April 16, 2024, 12:00:18 PM
QuoteINTERVIEW We know Google search results are being hammered by the proliferation of AI garbage, and the web giant's attempts to curb the growth of machine-generated drivel haven't helped all that much.

It's so bad that Jon Gillham, founder and CEO of AI content detection platform Originality.ai, told us Google is losing its war on all that spammy, scammy content in its search results. You can see our full interview with Jon about it all below.


"What's clear right now is that there's no one spamming Google [that's] not doing it with AI," Gillham told The Register. "Not all AI content is spam, but I think right now all spam is AI content."

Gillham's team has been producing monthly reports to track the degree to which AI-generated content is showing up in Google web search results. As of last month, about 10 percent of Google results point to AI content, it's claimed, and that's after Google vowed to take down a whole host of websites that were pushing such trash.

"Google did these manual actions to try and win a battle, but then seem to still be sort of struggling with their algorithm being overrun by AI content," Gillham told us.

As AI content proliferates, there's also concern that we could end up in a model collapse situation as AIs ingest other AI-generated material and end up regurgitating other low-grade synthetic data. Gillham said his AI content-recognition tech, which has been used to scan datasets for machine-generated infomation, can help, but it's not a total solution.

"It's a step in trying to reduce that corruption of the dataset, but I don't think it totally solves the problem," Gillham told us. You can hear more of what he had to say by clicking play above.

https://www.theregister.com/2024/04/13/google_ai_spam/
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 16, 2024, 12:39:57 PM
Quote from: Jacob on April 16, 2024, 12:00:18 PM
QuoteAs AI content proliferates, there's also concern that we could end up in a model collapse situation as AIs ingest other AI-generated material and end up regurgitating other low-grade synthetic data.

Interesting. An AI inbreeding depression.

(https://allthatsinteresting.com/wordpress/wp-content/uploads/2021/09/king-charles-ii.jpg)
Title: Re: The AI dooooooom thread
Post by: Jacob on April 17, 2024, 12:42:34 PM
Saw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).
Title: Re: The AI dooooooom thread
Post by: Syt on April 17, 2024, 02:28:27 PM
Yup. Some magazines closed their submissions because they were inundated by AI texts and couldn't keep up with the spam.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on April 17, 2024, 02:30:44 PM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).
This is also the value of quality, current content as produced by publishing houses, news publishers, studios etc.

It could play a really important part if the companies developing these models paid for licenses rather than stealing it (and those media companies have now all been increasing measures to try and stop it from being scraped for models).
Title: Re: The AI dooooooom thread
Post by: DGuller on April 17, 2024, 04:23:29 PM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).
Looking on the bright side, maybe the return of curation is exactly what we need.  Social media with its lack of curation may have sounded great in theory, but it put democracy on the ropes.
Title: Re: The AI dooooooom thread
Post by: Josquius on April 18, 2024, 03:00:53 AM
Quote from: Legbiter on April 16, 2024, 12:39:57 PM
Quote from: Jacob on April 16, 2024, 12:00:18 PM
QuoteAs AI content proliferates, there's also concern that we could end up in a model collapse situation as AIs ingest other AI-generated material and end up regurgitating other low-grade synthetic data.

Interesting. An AI inbreeding depression.

(https://allthatsinteresting.com/wordpress/wp-content/uploads/2021/09/king-charles-ii.jpg)


I hadn't thought of that but it does make sense.
Even just thinking of images. As all that AI generated shit starts to take over you'll be getting copies of copies of copies....
Though I do suppose the good copies will be the ones to be spread more so have the most influence?
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 18, 2024, 04:35:00 AM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).

Isn't self publishing basically a money loser, not even counting writing time?  Why would AI people want to spend time generating content that loses money?
Title: Re: The AI dooooooom thread
Post by: Syt on April 18, 2024, 04:37:35 AM
Quote from: Admiral Yi on April 18, 2024, 04:35:00 AM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).

Isn't self publishing basically a money loser, not even counting writing time?  Why would AI people want to spend time generating content that loses money?

Why is it a money loser? It depends on how much effort you put into it. With Amazon Self Publishing or Smashwords you can handle everything by yourself and upload your ebooks. In theory you can (more or less) just write whatever and then publish it. If you want to pay an editor, an artist for the cover art, or do pay for some social media ads, then it's obviously different. Not talking about physical self-publishing (or vanity publishers).
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 18, 2024, 10:23:45 AM
Quote from: Admiral Yi on April 18, 2024, 04:35:00 AM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).

Isn't self publishing basically a money loser, not even counting writing time?  Why would AI people want to spend time generating content that loses money?

They're already doing it, so it must make sense.

Presumably using AI prompts to generate 500 "books" on different topics, uploading them, and using botnets to push them to the top of search rankings is something that is relatively easy to automate.

At which point the money from suckers buying any of those "books" (that it costs pennies to generate) is pure profit.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 18, 2024, 10:26:12 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

This assumption is probably incorrect given the evidence - that such ai generated titles are massively prevalent.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 18, 2024, 10:31:07 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

I believe the books in question are either e-books, or print-on-demand books.

https://en.wikipedia.org/wiki/Print_on_demand

So even if you sell a handful of books you're still profiting on each one.
Title: Re: The AI dooooooom thread
Post by: Syt on April 18, 2024, 10:31:51 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

Publishing on Amazon or Smashwords doesn't cost you anything, and if you generate all content yourself, it only costs you time - and with AI generating the content for you, you just need to format it in the ebook format of choice and upload it.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on April 18, 2024, 11:13:05 AM
Quote from: Jacob on April 18, 2024, 10:23:45 AMThey're already doing it, so it must make sense.

Presumably using AI prompts to generate 500 "books" on different topics, uploading them, and using botnets to push them to the top of search rankings is something that is relatively easy to automate.

At which point the money from suckers buying any of those "books" (that it costs pennies to generate) is pure profit.
Yeah - and it's similar with online content.

You have an increasing long tail of the internet which is made for advertising sites (often AI generated, and nonsense) but with sellable ad inventory and that attracts clicks. Ad sales are often priced best for the ad slots that have the most personalised/developed profile of the individual at the end and allow the most functionalities (both of which require intrusive tracking). That means that at the minute the industry is spending money on sites with basically no compliance with any privacy laws, that are just created for the purpose of attracting eyes on adverts and just a churn with no permanence.

Admittedly this is possibly the logical end point of online advertising given that it is an industry which is largely based on fraud which is now being accelerated by AI. But it is not good for individuals, not good for publishers of real content and not good for advertisers. And ultimately as long as the things that agencies really prioritise (because it's what advertisers insist on) are targeting and functionalities, it's very difficult to move it in a different direction.

And obvioiusly those sites are being scraped and included in future models - particularly ones reliant on webcrawlers that quality media companies are increasingly blocking because they want to get paid for their content.
Title: Re: The AI dooooooom thread
Post by: celedhring on April 18, 2024, 02:16:12 PM
Just noticed chatGPT can talk too. Predictably, it sounds like a smug asshole.

EDIT: Ah, there's different voices. In Catalan "Juniper" has the most incredibly Barcelona posh accent I can think of. Shit's scary.
Title: Re: The AI dooooooom thread
Post by: grumbler on April 19, 2024, 11:20:20 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

The costs of self-publishing are primarily the costs of formatting the book properly and perhaps a bit of advertising.  The formatting issue isn't an issue with AI.
Title: Re: The AI dooooooom thread
Post by: garbon on April 23, 2024, 02:52:35 AM
After I performed my first task for work with ChatGPT (at a spead I'd never have managed on my own), I then asked it to create me a story about a woman who won the marathon against underwhelming odds in the style of Jane Austen.

QuoteIn the quaint village of Longbourne, nestled amidst the rolling English countryside, there lived a spirited young woman named Elizabeth Bennet. Elizabeth was known for her intelligence, wit, and independent spirit, qualities not always appreciated by the more conventional society around her.

One fine autumn day, Longbourne was abuzz with excitement as news spread that a marathon race was to be held in nearby Meryton. This event, unusual for its time, attracted athletes from far and wide, including the dashing Mr. Darcy, whose impressive athleticism and serious demeanor had captured the attention of many a young lady.

Intrigued by the prospect of such a spectacle, Elizabeth found herself drawn to the event despite the protests of her mother, Mrs. Bennet, who could not understand her daughter's fascination with "such unseemly pursuits." Undeterred, Elizabeth resolved to attend, her curiosity piqued by the challenge and the chance to witness Mr. Darcy's prowess firsthand.

On the morning of the marathon, a colorful crowd gathered at the starting line in Meryton. Elizabeth, dressed in a practical but elegant running gown, stood amongst the spectators, her eyes fixed on the determined faces of the runners. Mr. Darcy, tall and composed, exuded an air of confidence that both intrigued and annoyed Elizabeth.

As the race commenced, Elizabeth's attention quickly shifted from Mr. Darcy to a lesser-known competitor, Miss Jane Bingley, a gentlewoman of refined manners and surprising athletic ability. Miss Bingley's grace and stamina caught Elizabeth's eye, and she found herself silently cheering for this unexpected heroine of the track.

The course wound through the picturesque countryside, and as the hours passed, the runners faced various challenges—hilly terrain, unpredictable weather, and their own physical limitations. Elizabeth observed with admiration as Miss Bingley, with quiet determination, steadily gained ground on the leading pack, which included Mr. Darcy.

As the marathon neared its conclusion, Elizabeth's excitement grew. Miss Bingley, previously an underdog in the eyes of many, was now closing in on victory. With a burst of speed that surprised even herself, Miss Bingley crossed the finish line, cheered on by a jubilant crowd.

In that moment, Elizabeth realized the significance of what she had witnessed. It was not merely a race won by physical prowess but a triumph of spirit and determination. Miss Bingley's victory challenged societal norms and proved that women were capable of achieving greatness beyond the confines of traditional expectations.

As Elizabeth made her way back to Longbourne that evening, her heart was filled with a newfound sense of possibility. She had witnessed a woman overcoming underwhelming odds with grace and fortitude—a testament to the indomitable spirit that resided within each of them.

Inspired by Miss Bingley's example, Elizabeth vowed to pursue her own dreams with unwavering determination, undeterred by the obstacles that society might place in her path. For in the fields of Longbourne, amidst the whispers of the autumn breeze, a new kind of heroine had emerged—one whose story would be told and celebrated for generations to come.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on April 25, 2024, 06:44:05 PM
Not the normal AI Doom piece - but this story is incredible. I'd heard reporters worry about audio fakes but honestly I hadn't really thought about the use of AI for this sort of day-to-day, real world maliciousness (also I have some questions for the teacher who immediately shared it with a student to spread :blink:):
QuoteEx-athletic director accused of framing principal with AI arrested at airport with gun
Kristen Griffith and Justin Fenton
4/25/2024 8:44 a.m. EDT, Updated 4/25/2024 5:58 p.m. EDT
The principal of Pikesville High School was investigated after audio purporting to be his voice circulated on social media. Police have charged the former athletic director who they say faked the recording using artificial intelligence software.

Baltimore County Police arrested Pikesville High School's former athletic director Thursday morning and charged him with using artificial intelligence to impersonate Principal Eric Eiswert, leading the public to believe Eiswert made racist and antisemitic comments behind closed doors.

Dazhon Darien, 31, was apprehended as he attempted to board a flight to Houston at BWI Airport, Baltimore County Police Chief Robert McCullough said at a news conference Thursday afternoon. Darien was stopped for having a gun on him and airport officials saw there was a warrant for his arrest. Police said they did not know whether Darien was trying to flee.

Darien was charged with disrupting school activities after investigators determined he faked Eiswert's voice and circulated the audio on social media in January, according to the Baltimore County State's Attorney's Office. Darien's nickname, DJ, was among the names mentioned in the audio clips authorities say he faked.

"The audio clip ... had profound repercussions," police wrote in charging documents. "It not only led to Eiswert's temporary removal from the school but also triggered a wave of hate-filled messages on social media and numerous calls to the school. The recording also caused significant disruptions for the PHS staff and students."

Police say Darien made the recording in retaliation after Eiswert initiated an investigation into improper payments he made to a school athletics coach who was also his roommate. Darien is also charged with theft and retaliating against a witness.

Darien was allowed release on $5,000 bond and waived an attorney at an initial court appearance, according to court records. Attempts to reach him by phone and at his home were unsuccessful.

Eiswert's voice, which police and AI experts believe was simulated, made disparaging comments about Black students and the surrounding Jewish community and was widely circulated on social media.

Questions about the audio's authenticity quickly followed. Police wrote in charging documents that Darien had accessed the school's network on multiple occasions in December and January searching for OpenAI tools, and used "Large Language Models" that practice "deep learning, which involves pulling in vast amounts of data from various sources on the internet, can recognize text inputted by the user, and produce conversational results." They also connected Darien to an email account that had distributed the recording.

Many current and former students believed Eiswert was responsible for the offensive remarks, while former colleagues denounced the audio and defended Eiswert's character. Eiswert himself has denied making those comments and said the comments do not align with his views.

The audio, posted to the popular Instagram account murder_ink_bmore, prompted a Baltimore County Public Schools and Baltimore County Police investigation. Eiswert has not been working in the school since the investigation began.

The voice refers to "ungrateful Black kids who can't test their way out of a paper bag" and questions how hard it is to get those students to meet grade-level expectations. The speaker uses names of people who appear to be staff members and says they should not have been hired, and that he should get rid of another person "one way or another."

"And if I have to get one more complaint from one more Jew in this community, I'm going to join the other side," the voice said.

Darien was being investigated as of December in a theft investigation that had been initiated by Eiswert. Police say Darien had authorized a $1,916 payment to the school's junior varsity basketball coach, who was also his roommate, under the pretense that he was an assistant girls soccer coach. He was not, school officials said.

Eiswert determined that Darien had submitted the payment to the school payroll system, bypassing proper procedures. Darien had been notified of the investigation, police said.

Police say the clip was received by three teachers the night before it went viral. The first was Darien; a third said she received the email and then got a call from Darien and teacher Shaena Ravenell telling her to check her email. Ravenell told police that she had forwarded the email to a student's cell phone, "who she knew would rapidly spread the message around various social media outlets and throughout the school," and also sent it to the media and the NAACP, police said.

She did not mention receiving it from Darien until confronted about his involvement. Ravenell has not been charged with a crime and could not immediately be reached for comment.

Both Darien and Ravenell have submitted their resignations to the school system, according to an April 16 school board document. The resignations are dated June 30.

Baltimore County Public Schools Superintendent Myriam Rogers said school system officials are recommending Darien's termination. She would not say, however, if the other employees named in the charging documents, including Ravenell, are still working at the school.

Rogers in January called the comments "disturbing" and "highly offensive and inappropriate statements about African American students, Pikesville High School staff, and Pikesville's Jewish community."

Rogers said Kyria Joseph, executive director for secondary schools, and George Roberts, a leadership consultant for the school system, have been running Pikesville High School since the investigation started. They will continue to do so for the remainder of the year. She said they will work with Eiswert to determine his duties for next school year.

Billy Burke, head of the Council of Administrative & Supervisory Employee, the union that represents Eiswert, was the only official to suggest the audio was AI-generated.

Burke said he was disappointed in the public's assumption of Eiswert's guilt. At a January school board meeting, he said the principal needed police presence at his home because he and his family had been harassed and threatened. Burke had also received harassing emails, he said at the time.

"I continue to be concerned about the damage these actions have caused for Principal Eiswert, his family, the students and staff of Pikesville High School, and the Black and Jewish community members," Burke said in a statement on Thursday. "I hope there is deliberate action to heal the trauma caused by the fake audio and that all people can feel restored."

Police said the school's front desk staff was "inundated with phone calls from parents and students expressing concern and disparaging remarks toward school staff and administrators." The flood of calls made it difficult to field phone calls from parents trying to make arrangements for their children and other school functions, officials told police.

"The school leadership expressed that staff did not feel safe, which required an increase in police presence at the school to address safety concerns and fears," police said.

Teachers, under the impression the recording was authentic, "expressed fears that recording devices could have been planted in various places in the school," police said.

"The recording's release deeply affected the trust between teachers and the administration," police said. "One individual shared that they fielded sensitive phone calls in their vehicle in the parking lot instead of speaking in school."

"Hate has no place and no home in Baltimore County," said Johnny Olszewski Jr., the Baltimore County executive.

He called the developments of AI "deeply concerning" and that it's important for everyone to remain vigilant for anyone using the technology for malicious reasons. There should also be more investment in technology that identifies any inauthentic recording made with AI, he said.

Experts in detecting audio and video fakes told The Banner in March that there was overwhelming evidence the voice is AI-generated. They noted its flat tone, unusually clean background sounds and lack of consistent breathing sounds or pauses as hallmarks of AI. They also ran the audio through several different AI-detection techniques, which consistently concluded it was a fake, though they could not be 100% sure.

The police also sought the expertise of two professors familiar with AI detection to assist in their investigation. Catalin Grigoras, a forensic analyst and professor at the University of Colorado Denver, concluded that the "recording contained traces of AI-generated content with human editing after the fact, which added background noises for realism," the charging documents stated.

Hany Farid from the University of California, Berkeley, who's also an expert in forensic analysis, determined "the recording was manipulated, and multiple recordings were spliced together," according to the documents.

AI voice-generation tools are now widely available online, and a single minute's recording of someone's voice can be enough to simulate it with a $5-a-month AI tool, the Nieman Journalism Lab reported in February.

There are few regulations to prevent AI imitations, called deepfakes, and few perpetrators are prosecuted.

Cindy Sexton, president of the Teachers Association of Baltimore County, said AI should be a concern for everyone, especially educators.

She said the National Education Association is working to address their concerns, but in the meantime, she's not sure what else should be done.

"We have to do something as a society, but 'what is that something' is of course the big question," Sexton said

Baltimore County State's Attorney Scott Shellenberger said this is the first time this type of case has been taken up by the district. And it's one of the first his office was able to find around the nation.

There were some legal statutes they used that were "right on point," he said, but the charge of disrupting school activities only carries a six-month sentence.

"It seems very clear to me that we may need to make our way down to Annapolis in the legislature next year to make some adaptions to bring the law up to date with the technology that was being used," he said.

Baltimore Banner staff writers Cody Boteler and Kaitlin Newman contributed to this report.

Correction: This story has been updated to correct the spelling of Hany Farid's name.

Edit: The clip. Obviously I don't know the guy and now I listen for it the lack of background noise is noticeable - but if I heard this, I would have no idea:
https://x.com/Phil_Lewis_/status/1747708846942851493
Title: Re: The AI dooooooom thread
Post by: Josquius on May 01, 2024, 06:52:33 AM
Some people have been experimenting with using official corporate approved  AI at work with tasks such as with writing up notes from research.
The results are....less  than impressive. The AI always misses critical nuances.
Like there was one example where somebody asked about the timelines for something, they were told it is planned but we can't give you a timeline yet, and replied "OK right."
The AI's summary?
They're enthusiastic about the lack of timelines.
...
err....
And this isn't an exceptional case, lots of things like this that can't just be put down to simple explanations like not hearing the sarcasm in a tone.

I'm wondering whether a problem might lie in the AI being trained heavily on American data thus it just completely screws up when faced with other English speakers?- we're speaking the same language so it thinks it can understand...but not really.
Could be quite the great topic for someone in academia I think.

I have noticed however that it does seem to do a better job of cobbling together readable transcripts than most pre-existing tools for this. Though again, the false confidence it displays means where there are errors they are much greater than they would be otherwise.
Title: Re: The AI dooooooom thread
Post by: garbon on May 01, 2024, 09:01:18 AM
It is known that sarcasm has not yet reached America.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 01, 2024, 11:06:01 AM
Quote from: Josquius on May 01, 2024, 06:52:33 AMSome people have been experimenting with using official corporate approved  AI at work with tasks such as with writing up notes from research.
The results are....less  than impressive. The AI always misses critical nuances.
Like there was one example where somebody asked about the timelines for something, they were told it is planned but we can't give you a timeline yet, and replied "OK right."
The AI's summary?
They're enthusiastic about the lack of timelines.
...
err....
And this isn't an exceptional case, lots of things like this that can't just be put down to simple explanations like not hearing the sarcasm in a tone.

I'm wondering whether a problem might lie in the AI being trained heavily on American data thus it just completely screws up when faced with other English speakers?- we're speaking the same language so it thinks it can understand...but not really.
Could be quite the great topic for someone in academia I think.

I have noticed however that it does seem to do a better job of cobbling together readable transcripts than most pre-existing tools for this. Though again, the false confidence it displays means where there are errors they are much greater than they would be otherwise.

It does not think, and it does not understand.  That is the much more simple explanation for the errors you are seeing.
Title: Re: The AI dooooooom thread
Post by: Tamas on May 01, 2024, 11:33:57 AM
Quote from: crazy canuck on May 01, 2024, 11:06:01 AMIt does not think, and it does not understand. 

This should be an important message about these "AIs" but there are too many people benefiting from the doom and gloom hype machine.
Title: Re: The AI dooooooom thread
Post by: grumbler on May 01, 2024, 11:37:05 AM
Quote from: garbon on May 01, 2024, 09:01:18 AMIt is known that sarcasm has not yet reached America.

Indeed, the language does not even have a word for sarcasm.
Title: Re: The AI dooooooom thread
Post by: Barrister on May 24, 2024, 02:46:16 PM
So there's been an interesting story out there recently: Chat GPT releases a new voice mode that strongly resembles actress Scarlet Johansson's voice.  Johansson voiced an AI in the movie Her a number of years ago.  Johansson was asked to provide the voice for Chat GPT but refused.

Is this a violation of Johansson's rights though?  It does feel creepy - but it's not like voice impersonator's haven't been a thing for years.  I mean Rich Little made a career of it for decades.

And it extends more broadly to AI in general.  These LLMs are all trained on existing material.  Picture AIs are perhaps most interesting (at least to me).  As a person you are of course free to go to all the art galleries of the world and study all of the existing pictures to get "inspired" by them, and no one would call "foul" over doing so as long as you don't directly copy.

But it does seem unseemly - either voice, text, or art - when LLM-trained AIs then interpret existing material to create derivative works.

I'm not sure where I come down on this.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on May 24, 2024, 04:08:45 PM
Derivative works are part of the copyright bundle of rights under the US law. My understanding is that the AI models are banking on the contention that their activity is sufficiently transformative to qualify as fair use. But the last Supreme Court ruling on that doctrine limited it.
Title: Re: The AI dooooooom thread
Post by: Barrister on May 24, 2024, 04:31:08 PM
Quote from: The Minsky Moment on May 24, 2024, 04:08:45 PMDerivative works are part of the copyright bundle of rights under the US law. My understanding is that the AI models are banking on the contention that their activity is sufficiently transformative to qualify as fair use. But the last Supreme Court ruling on that doctrine limited it.

But coming back to a non-AI example - the "Blurred Lines" decision, where Robin Thicke and Pharrell Williams were sued by the estate of MArvin Gaye.  The song in question didn't copy any direct chords, melodies or words - but the jury found it infringed Gaye's copyright (and it was upheld on appeal).

That seemed then, and seems now, like a stupid decision.  You can't copyright the "feel" of a song.

But how different is that from AIs?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on May 25, 2024, 02:03:48 PM
Although wouldn't the copyright here be about Her? Isn't this about personality rights and whether the AI is close enough to effectively be a use of her voice for commercial reasons when she's rejected that?
Title: Re: The AI dooooooom thread
Post by: grumbler on May 25, 2024, 03:16:24 PM
Quote from: Sheilbh on May 25, 2024, 02:03:48 PMAlthough wouldn't the copyright here be about Her? Isn't this about personality rights and whether the AI is close enough to effectively be a use of her voice for commercial reasons when she's rejected that?

They used a different voice actress.  You can't sue for copyright violation if someone's voice sounds like yours.
Title: Re: The AI dooooooom thread
Post by: Josquius on May 31, 2024, 06:30:26 AM
1716755385290.jpg

 :hmm:
Title: Re: The AI dooooooom thread
Post by: Sheilbh on May 31, 2024, 07:34:05 AM
Don't know about that but I know prompt injection is something that's a worry in media. The closer Gen AI is to the production environment (eg Reach publications, I think Axel Springer) that's going to be a big potential target.
Title: Re: The AI dooooooom thread
Post by: The Brain on June 01, 2024, 02:18:21 AM
Quote from: Sheilbh on May 31, 2024, 07:34:05 AMDon't know about that but I know prompt injection is something that's a worry in media. The closer Gen AI is to the production environment (eg Reach publications, I think Axel Springer) that's going to be a big potential target.

What's the worry?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on June 12, 2024, 09:24:19 AM
Quote from: The Brain on June 01, 2024, 02:18:21 AMWhat's the worry?
Basically the closer it is to a production environment then prompt injection is a risk that could impact content.

But there is a big divide - I think NYT, BBC, Guardian, News International are all experimenting with AI but basically keeping it away from the actual production of news. Other companies like Axel Springer and Reach (basically all local papers in the UK, plus the Mirror and the Express) are putting it far far closer to that bit.

I think the Apple AI thing was interesting. First because it feels like the big idea for what Apple's next round of devices will do (in particular because Apple sales are slowing). But secondly because it seems like OpenAI still hasn't found a business model - they've gone from about to usher in AGI to, basically, SaaS. And I think that makes sense this is the first AI tool I've seen that I think is useful for and likely to be adopted by normal everyday people. But putting those together it makes me think that AI, so far, will mainly reinforce the position of those companies who can plug it into their existing products. Perhaps that was always likely.

I can see Google's generative search, and AI in it's suite of office tools etc. I can see the Apple product. I get how those get used and integrated into people's daily lives - which will reinforce the power of those companies. (Similarly in the business world I can see, for example, Adobe or Salesforce or Oracle using AI).

Although it also made me personally feel more inclined to just buy a brick phone because I basically want my communications to remain between me and the people I'm communicating with (and obviously, I get that there are telcos and the NSA in there etc but I live with it) - and that feels like it's becoming an increasingly radical idea :ph34r:
Title: Re: The AI dooooooom thread
Post by: Jacob on June 12, 2024, 10:03:22 AM
Yeah I remain fairly luddiye when it comes to AI and personal use.
Title: Re: The AI dooooooom thread
Post by: Razgovory on July 14, 2024, 01:13:32 PM
(https://i.imgur.com/4H4gBR1.jpeg)


I hate AI.
Title: Re: The AI dooooooom thread
Post by: Jacob on July 14, 2024, 02:15:21 PM
AI as it is now definitely has use-cases. Some of the use-cases are positive too, bit IMO a good number of them are about rapidly accelerating enshitification.

But even with that, there's something of a AI bubble going right now IMO.
Title: Re: The AI dooooooom thread
Post by: Jacob on July 18, 2024, 01:00:32 PM
Some skepticism is popping up:

Quote from: The CBCAI has a lot of fans in business. But does it actually make money? (https://www.cbc.ca/news/business/ai-generative-business-money-revenue-1.7264014)
Recent Goldman Sachs report casts doubt on profitability of AI

The growth of AI in the last few years has demonstrated two things.

One, that the technology is capable of producing a range of slick if highly suspect images and video. And two, that a whole lot of people are convinced it will soon suffuse every aspect of modern life.

From creating art to providing customer service to synthesizing reams of medical data, artificial intelligence has been hailed as transformational, leading to billions in investment and turning Nvidia — the California-based firm that makes the underlying processors — into one of the most valuable companies in the world.

The global consultancy PriceWaterhouseCoopers has forecast that AI could contribute "up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined."

While AI's inevitability has been a recurring refrain in much of the business media of late, there has been a small chorus of skeptics demanding proof of its ability to do what corporations need most: make money.

"There really has not been an industry like generative AI before, which just bleeds insane amounts of money while also really not doing enough," said Ed Zitron, a Las Vegas-based tech critic and host of the podcast Better Offline, who has long talked about the apparent chasm between the promises of AI and its actual return on investment.

Late last month, Goldman Sachs joined the group chat, becoming one of the first major investment banks to question the hype around AI, with a report entitled Gen AI: Too Much Spend, Too Little Benefit?

In it, Jim Covello, the bank's head of global equity research, said the sheer cost of the technology hampers any financial gains, that Nvidia's monopoly means processor prices won't come down anytime soon and "that 18 months after the introduction of generative AI to the world, not one truly transformative ... application has been found."

Zitron says the Goldman Sachs report represents a "serious" rebuke of the AI narrative.

"Generally, when the money people are saying, 'I don't know about this,' that's a bad time for everyone," he said.

A world of promises

Artificial intelligence has been a going concern for decades, but the release of virtual assistant ChatGPT in the fall of 2022 publicly demonstrated the capabilities of large language models and captured the popular imagination.

While many people have been gobsmacked by ChatGPT's ability to conjure computer code and whole essays based on a few prompts, its rise has been equally abetted by the futuristic scenarios laid out by Sam Altman, CEO of OpenAI, which developed ChatGPT.

Altman recently asked people to consider a time when we'll be able to ask artificial intelligence to "discover all of physics" or "start and run a great company" and that "it can go ahead and do that."

Zitron says many people in the business community have been content to accept such vague promises.

"Right now, what you've just got is a lot of followers in tech. You've got a lot of people who need [AI] to be the future," he said. "When the markets realize there's no revenue growth coming from this ... it will lead to a panic."

Joshua Gans, a professor and economist at the Rotman School of Management at the University of Toronto, says that when large language models like ChatGPT perform well above expectations, it "implies that [the technology] would be of enormous economic value right away. But invariably that isn't the case."

Gans, who co-authored the book Power and Prediction: The Disruptive Economics of Artificial Intelligence, believes AI "has legs" in terms of profitability, but acknowledges that productivity gains "don't instantly happen."

"You can simultaneously believe that AI is going to have enormous consequences for economic productivity in the future and at the same time believe it's going to take a while."

Where is AI's ROI?

Amazon, Google and Microsoft are reportedly spending billions on AI every quarter and venture capital has streamed into startups like Anthropic and Hugging Face. But there has been little proof the technology is improving the bottom line for anyone beyond OpenAI (currently valued at $80 billion US) and Nvidia (close to $3 trillion US).

Divya Goyal, an equity research analyst at Scotiabank in Toronto, says that in terms of current returns on investment (ROI), "I cannot name one specific [AI] tool that will generate significant ROI."

She said it's likely to be "a mix of technologies and how they are brought to the table that eventually results in driving the upside for organizations."

Goyal has seen a lot of "proof-of-concept" ideas and thinks the opportunity for organizations looking to take advantage of AI is in "the customer engagement space," such as making their apps more conversational and intuitive and helping to better direct phone inquiries.

Gerald Grant, director of the Centre for Information Technology, Organizations and People at the Sprott School of Business at Carleton University in Ottawa, believes the greatest potential for AI is in digital infrastructure and "making corporations operate more effectively."

Seeking 'hypergrowth'

While organizations determine how to fit AI into their workflows, the processing involved requires expanding data centres. This is not only a financial consideration for companies, but it's quickly becoming a significant source of carbon emissions. Both Google and Microsoft have acknowledged their investments in AI will likely decimate their 2030 net-zero targets.

In a recent interview with Bloomberg, James Ferguson, founding partner of U.K.-based investment research firm MacroStrategy Partnership, said the colossal power requirements of AI, along with the documented habit of large language models to invent facts, lead him to believe the current hype could lead to a bust like the late-'90s dot-com crash.

Grant says one reason the AI hype has been so strong is that the tech industry is seeking another "hypergrowth" market worth hundreds of billions of dollars, which we've seen with smartphones, social media or cloud computing.

There was similar fanfare around technologies like 3D printing, drones and the metaverse, but they haven't proved to be as disruptive or profitable as initially expected, he says.

Zitron says that for all the excitement around AI right now, it has supported virtual assistants like Siri and Alexa for about a decade. He also pointed out that Alexa has been a major money loser for Amazon.

"What people don't realize," he said, "is that if this was the next big growth market, it would have already been one."

Grant said the ultimate challenge for anyone invested in AI is developing applications that change people's lives.

"Technology always has a great potential to do wonderful things," said Grant.

"But people are not seeking technology — they're seeking outcomes that are beneficial to them and that they're willing to pay for. Whatever hype you have around technology is just that if it doesn't deliver something novel or meaningful."

Goldman-Sachs: Gen AI: Too Much Spend, Too Little Benefit? (https://web.archive.org/web/20240629140307/http://goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf)
Title: Re: The AI dooooooom thread
Post by: Josquius on August 03, 2024, 08:50:05 AM
Anyone else tried playing with suno?

The songs it comes up with are really quite impressive.

A theme song for languish with minimal effort.

https://suno.com/song/6ec543ae-f4a5-4996-a865-eea9f0e2796c
Title: Re: The AI dooooooom thread
Post by: Richard Hakluyt on August 03, 2024, 09:50:50 AM
I've certainly heard worse...a lot worse  :hmm:
Title: Re: The AI dooooooom thread
Post by: The Brain on August 03, 2024, 11:58:11 AM
Quote from: Josquius on August 03, 2024, 08:50:05 AMAnyone else tried playing with suno?

The songs it comes up with are really quite impressive.

A theme song for languish with minimal effort.

https://suno.com/song/6ec543ae-f4a5-4996-a865-eea9f0e2796c

Better than Shit Goat. :)
Title: Re: The AI dooooooom thread
Post by: Norgy on August 03, 2024, 03:14:17 PM
There's an AI sandbox available at our media house. It leaves a lot to be desired.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 06, 2024, 04:14:26 PM
So I posted before about Facebook as the zombie internet - and they kept digging into the weird AI stuff on Facebook. This makes perfect sense and is kind of fascinating:
https://www.404media.co/where-facebooks-ai-slop-comes-from/
Title: Re: The AI dooooooom thread
Post by: Jacob on August 06, 2024, 04:40:17 PM
That's pretty interesting.

So FB's line is "as long as the AI images are being seen and liked by real people it's working as intended."

That's a non-trivial IF.

Certainly, it would seem there are some pretty strong incentives for generators of content to also develop LIKE and engagement accounts as they directly impact revenue.
Title: Re: The AI dooooooom thread
Post by: Razgovory on August 06, 2024, 04:54:38 PM
Quote from: Sheilbh on August 06, 2024, 04:14:26 PMSo I posted before about Facebook as the zombie internet - and they kept digging into the weird AI stuff on Facebook. This makes perfect sense and is kind of fascinating:
https://www.404media.co/where-facebooks-ai-slop-comes-from/
This explains so much.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 06, 2024, 04:55:47 PM
Yeah - and in fairness it could well be that they are being seen and liked by real people. Lots of the AI pages spend lots of time liking each other. But it may well be people trying to earn money online in the developing world (but, from what I understand) interactions from accounts with European or North American IP addresses are more valuable. (Edit: This is, just as an aside, fucked up.)

But also what you say is true and it's true of the entire internet these platforms have built. The way they make money is online advertising and at every single level it is riddled with fraud. There is zero trust - and at least in the UK the brand safety and anti-fraud providers have even been sued themselves for not just doing what they claim but also misusing their access to sites to build their own audiences.

Generally for the last twenty years - the most valuable companies in the world, who built our world and are now building the next stage...
Title: Re: The AI dooooooom thread
Post by: Syt on September 17, 2024, 06:40:30 AM
Well, this is nice. :)

https://www.aol.com/billionaire-larry-ellison-says-vast-160646367.html
(posting from AOL, BI article is behind paywall)

QuoteBillionaire Larry Ellison says a vast AI-fueled surveillance system can ensure 'citizens will be on their best behavior'

Walking down a suburban neighborhood street already feels like a Ring doorbell panopticon.

But this is only the start of our surveillance dystopia, according to Larry Ellison, the billionaire cofounder of Oracle. He said AI will usher in a new era of surveillance that he gleefully said will ensure "citizens will be on their best behavior."

Ellison made the comments as he spoke to investors earlier this week during an Oracle financial analysts meeting, where he shared his thoughts on the future of AI-powered surveillance tools.

Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras.

"We're going to have supervision," Ellison said. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on."

Ellison also expects AI drones to replace police cars in high-speed chases. "You just have a drone follow the car," Ellison said. "It's very simple in the age of autonomous drones." He did not say if those drones would broadcast the chases on network news.

Ellison's company, Oracle, like almost every company these days, is aggressively pursuing opportunities in the AI industry. It already has several projects in the works, including one in partnership with Elon Musk's SpaceX.

Ellison is the world's sixth-richest man with a net worth of $157 billion, according to Bloomberg.

Ellison's children have made names for themselves in the film industry. His daughter Megan Ellison founded her production company Annapurna Pictures in 2011, and his son David Ellison is set to become CEO of Paramount after it completes its merger with Skydance Media.

Though I guess you could argue prolific carrying of cameras/recording equipment at all times and social media shitstorms are already a step in that direction? :P
Title: Re: The AI dooooooom thread
Post by: Josquius on September 17, 2024, 06:42:06 AM
Larry Ellison? Not Harlan Ellison?
Title: Re: The AI dooooooom thread
Post by: Syt on September 17, 2024, 06:51:24 AM
It would be impressive if this was a recent quote from Harlan Ellison since he's been dead for a while.

Larry Ellison, age 80, at a conference this year:

(https://eu-images.contentstack.com/v3/assets/blt8eb3cdfc1fce5194/blta386c671fcb5ad28/66e3029e00f43f458abde764/Oracle-Larry-Ellison.jpg)
Title: Re: The AI dooooooom thread
Post by: Josquius on September 17, 2024, 02:16:12 PM
I meant it sounds quite like something Harlan would have written about in a dark Sci fi novella.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on September 17, 2024, 02:52:03 PM
I thought it was well known that Larry Ellison had strayed into mild-Howard Hughes-dom some time ago?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on September 17, 2024, 02:53:38 PM
Major funding partner of the Tony Blair Institute for Global Change.

No idea Blair already believes this (it's his style) but big focuses for Blair in his policy advocacy and consultancy work with governments is AI and also digitalisation of health data. Both of which are areas Ellison's been very involved in.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on September 17, 2024, 03:47:12 PM
I've been watching a few of these AI generated Panavision send ups of movies and realized that the obvious industry for AI to generate revenue is porn.  Just as porn has been the driver of so much of our technology adoption.
Title: Re: The AI dooooooom thread
Post by: Josquius on September 17, 2024, 03:52:46 PM
Quote from: Admiral Yi on September 17, 2024, 03:47:12 PMI've been watching a few of these AI generated Panavision send ups of movies and realized that the obvious industry for AI to generate revenue is porn.  Just as porn has been the driver of so much of our technology adoption.

Deep fake porn is kind of old news.
The latest twist.

BBC News - Inside the deepfake porn crisis engulfing Korean schools
https://www.bbc.com/news/articles/cpdlpj9zn9go
Title: Re: The AI dooooooom thread
Post by: Razgovory on September 17, 2024, 06:44:07 PM
My attempts to use create deep fake porn have been abysmal.  Grumbler with big tits is one of the least erotic things you could every imagine.
Title: Re: The AI dooooooom thread
Post by: Josquius on October 04, 2024, 01:50:48 AM
Today I learned, US intelligence makes use of an AI system named.... Sentient.

https://en.m.wikipedia.org/wiki/Sentient_(intelligence_analysis_system)#:~:text=Sentient%20is%20a%20heavily%20classified,United%20States%20Air%20Forces%20Research

Guys.....its like you see the future bits of Terminator as a desirable end goal here.
Title: Re: The AI dooooooom thread
Post by: Crazy_Ivan80 on October 04, 2024, 04:44:13 AM
Quote from: Admiral Yi on September 17, 2024, 03:47:12 PMI've been watching a few of these AI generated Panavision send ups of movies and realized that the obvious industry for AI to generate revenue is porn.  Just as porn has been the driver of so much of our technology adoption.

Iirc the panavision creator gave some insights on how much he makes with this hobby, witch actual numbers an graphs.
Not too shabby
Title: Re: The AI dooooooom thread
Post by: celedhring on October 05, 2024, 08:48:55 AM
About the so-called "1950s Super Panavision 70" videos I'm midly annoyed that 1) None I have seen uses the correct aspect ratio 2) Super Panavision 70 came out in 1959, so the 1950s part is a bit of a stretch  :P
Title: Re: The AI dooooooom thread
Post by: DGuller on November 17, 2024, 09:17:35 PM
There was an experiment done on whether doctors with ChatGPT can be better at diagnosing than doctors without ChatGPT.  The finding was that the help from ChatGPT was very marginal at best.

The kicker though was that ChatGPT alone did much better than doctors with ChatGPT.  In effect, the doctors were holding ChatGPT back, probably in part because they couldn't conceive that an LLM could perceive something they couldn't, like a lot of people in this thread.

https://www.nytimes.com/2024/11/17/health/chatgpt-ai-doctors-diagnosis.html
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 18, 2024, 01:38:29 PM
LLMs have no reasons to have public facing endeavours. Those things are marketing ploys.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 18, 2024, 02:32:27 PM
Quote from: Grey Fox on November 18, 2024, 01:38:29 PMLLMs have no reasons to have public facing endeavours. Those things are marketing ploys.

elaborate  :)
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 18, 2024, 06:58:09 PM
Quote from: crazy canuck on November 18, 2024, 02:32:27 PM
Quote from: Grey Fox on November 18, 2024, 01:38:29 PMLLMs have no reasons to have public facing endeavours. Those things are marketing ploys.

elaborate  :)

LLMs have really cool capacity to identify in all sorts of applications. Like in healthcare where they are use to better detect/find cancers in scans, or where they are integrated in scheduling apps to better manage doctor appointments by simply guiding to the right professionals.

That they've been made available to the public where we use them to do unprofitable dumb things like image generation and our agendas are only marketing ploys. Maybe a training opportunity too.


Here's the AI software the team I'm part of develops.

https://www.teledynevisionsolutions.com/products/astrocyte-sdk/?model=astrocyte&vertical=tvs-dalsa-oem&segment=tvs

It's a very niche AI model creator for the machine vision industry.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 18, 2024, 07:46:14 PM
Agree on the marketing ploy piece. I think the business model is going to be basically providing LLMs as SaaS - either by providing infrastructure for private mini-LLMs (Mistral) or as an extension of an existing SaaS product. Although that's not a massively sexy pitch :lol:

Although the best I've seen in my sector is actually NotebookLM from Google - primarily because of its citations/sourcing, which I think they released in that way and have been surprised at the business interest.

Also think there's a fair amount that is being pitched as AI - and when I'm talking about AI the bit that interests and worries me is generative AI, a lot of the pitches and products (for example, I think the cancer scanning) is machine learning.
Title: Re: The AI dooooooom thread
Post by: celedhring on November 19, 2024, 03:28:26 AM
I'm finding ChatGPT useful for text editing and proofreading. Particularly as a non-native English speaker. "Oh yeah, that's a more natural way to say this".

It does it have its kinks though - it loves redundancy, and relatedly sometimes it's a bit too wordy.


Title: Re: The AI dooooooom thread
Post by: Tamas on November 19, 2024, 08:29:16 AM
Quote from: celedhring on November 19, 2024, 03:28:26 AMI'm finding ChatGPT useful for text editing and proofreading. Particularly as a non-native English speaker. "Oh yeah, that's a more natural way to say this".

It does it have its kinks though - it loves redundancy, and relatedly sometimes it's a bit too wordy.




Maybe it is an introvert. I have been recently explained that introverts say things in a too roundabout way. In other words they only say a simple yes or no when that is the appropriate answer, but what do I know.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 19, 2024, 09:54:23 AM
Quote from: celedhring on November 19, 2024, 03:28:26 AMI'm finding ChatGPT useful for text editing and proofreading. Particularly as a non-native English speaker. "Oh yeah, that's a more natural way to say this".

It does it have its kinks though - it loves redundancy, and relatedly sometimes it's a bit too wordy.



You can have custom instructions for it.  For my work account I told it to be concise and to the point, and stop giving a long preamble to every answer, and it did become quite terse.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 20, 2024, 02:52:12 PM
Interesting other example on the medical front - this was more machine learning - but AI system reviewing scans (I forget what for) was only better than actual physicians in basically perfect scenarios. So when the individual was stoop up for a chest x-ray, but very often what they're looking for is associated with trauma or other complex injuries so it is often people lying in a bed getting quick x-rays etc.

Which just shows how very, very important it is to get the right input/training data.

(Which is obviously why LLMs love journalism: quality, legaled, accurate....And why they should pay for using it :ph34r:)
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 20, 2024, 03:26:13 PM
Quote from: Sheilbh on November 20, 2024, 02:52:12 PMInteresting other example on the medical front - this was more machine learning - but AI system reviewing scans (I forget what for) was only better than actual physicians in basically perfect scenarios. So when the individual was stoop up for a chest x-ray, but very often what they're looking for is associated with trauma or other complex injuries so it is often people lying in a bed getting quick x-rays etc.

Which just shows how very, very important it is to get the right input/training data.

(Which is obviously why LLMs love journalism: quality, legaled, accurate....And why they should pay for using it :ph34r:)

Yeah, and add to that the recent findings that radiologists using AI were worse (meaning they found more) at finding both false positives and and false negatives.  The really concerning bit was the false negatives were more prevalent.

Turns out they trusted the AI - (or a cynic might suggest, let the AI do the work) too much.

On the bright side, when the AI got it right, so did the radiologist.  But again, that is because they came to depend on the tool.

 
Title: Re: The AI dooooooom thread
Post by: Josquius on December 12, 2024, 05:45:46 AM
I've found myself using Chat gpt a bit lately. I think I've found a decent use case.

I'll write something, which has all the proper points... buts its convoluted and weird.
Chat GPT then rephrases it into a more succinct form.

This seems to go....OKish. Though chat gpt definitely has its own idiocentricies that come through. It also loves correcting itself. It gives one suggestion, then give that suggestion back as the redraft and it totally redoes it. Its also so very American, really puts way too much of a positive promotional a spin on everything- I do need to do this to an extent...but too much would just rub people the wrong way and sound false.
 Its very much a bounce ideas off it and then do your own redraft tool.
But still...interesting use.

Anyone else found uses?
Title: Re: The AI dooooooom thread
Post by: garbon on December 12, 2024, 06:27:22 AM
I've noticed it likes wordy business speak and it likes to be redundant.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 12, 2024, 07:36:45 AM
Quote from: garbon on December 12, 2024, 06:27:22 AMI've noticed it likes wordy business speak and it likes to be redundant.
I think this is an inputs issue :ph34r:
Title: Re: The AI dooooooom thread
Post by: garbon on December 12, 2024, 07:38:43 AM
Quote from: Sheilbh on December 12, 2024, 07:36:45 AM
Quote from: garbon on December 12, 2024, 06:27:22 AMI've noticed it likes wordy business speak and it likes to be redundant.
I think this is an inputs issue :ph34r:

Perhaps what it was trained on, I'd suggest
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 12, 2024, 09:15:38 AM
Sorry, yes that's what I meant by the inputs into the model.

It does present a slightly terrifying future as more and more of these companies develop enterprise SaaS products, learning from their enterprise customers. Skynet but communicating like someone from HR :ph34r:
Title: Re: The AI dooooooom thread
Post by: Josquius on December 12, 2024, 10:00:36 AM
Quote from: garbon on December 12, 2024, 06:27:22 AMI've noticed it likes wordy business speak and it likes to be redundant.

Try typing "Rephrase it so I don't sound like a prat".

But yes, its something to wrestle with. I am using it for professional trying to make myself sound good reasons... but not too much. I don't want to recompile my synergies with an extended OKR.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 12, 2024, 10:19:35 AM
Quote from: Josquius on December 12, 2024, 10:00:36 AM
Quote from: garbon on December 12, 2024, 06:27:22 AMI've noticed it likes wordy business speak and it likes to be redundant.

Try typing "Rephrase it so I don't sound like a prat".

But yes, its something to wrestle with. I am using it for professional trying to make myself sound good reasons... but not too much. I don't want to recompile my synergies with an extended OKR.

Careful with that, you will start sounding like a consultant who doesn't actually know what they are talking about.



Title: Re: The AI dooooooom thread
Post by: mongers on December 12, 2024, 10:50:59 AM
Quote from: Josquius on December 12, 2024, 10:00:36 AM
Quote from: garbon on December 12, 2024, 06:27:22 AMI've noticed it likes wordy business speak and it likes to be redundant.

Try typing "Rephrase it so I don't sound like a prat".

But yes, its something to wrestle with. I am using it for professional trying to make myself sound good reasons... but not too much. I don't want to recompile my synergies with an extended OKR.

Do I need to pass that through chatGPI or is it it's output?
  :P
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 23, 2024, 07:18:52 AM
A tragic story of humans relying on technology to make decisions.  A lysterisa outbreak was traced to a plant which was not inspected by health officials.  Why not? Because the algorithm did not identify the plant for inspection.  And inspectors were not permitted to inspect plants that were not identified by the algorithm.

Three people died.


Title: Re: The AI dooooooom thread
Post by: mongers on December 23, 2024, 08:11:07 AM
Quote from: crazy canuck on December 23, 2024, 07:18:52 AMA tragic story of humans relying on technology to make decisions.  A lysterisa outbreak was traced to a plant which was not inspected by health officials.  Why not? Because the algorithm did not identify the plant for inspection.  And inspectors were not permitted to inspect plants that were not identified by the algorithm.

Three people died.




:blink:

Mind boggling laziness.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 23, 2024, 08:27:59 AM
Quote from: mongers on December 23, 2024, 08:11:07 AM
Quote from: crazy canuck on December 23, 2024, 07:18:52 AMA tragic story of humans relying on technology to make decisions.  A lysterisa outbreak was traced to a plant which was not inspected by health officials.  Why not? Because the algorithm did not identify the plant for inspection.  And inspectors were not permitted to inspect plants that were not identified by the algorithm.

Three people died.




:blink:

Mind boggling laziness.

Not so much laziness as certainty the technology should be trusted and relied upon.
Title: Re: The AI dooooooom thread
Post by: Josquius on January 02, 2025, 04:22:44 AM
An article I just read:

https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Googled it up after hearing tell that apparently as lots of students are turning to AI, many teachers are turning to AI detectors to combat this. This seems to be the most famous one:

https://gptzero.me/

I tried it myself and...yep. It flags stuff I've actually written sans AI as likely being AI- include a typo or two and the odds of this drop dramatically.
They don't seem to work. So if true they're being widely adopted...that's worrying.
Title: Re: The AI dooooooom thread
Post by: Maladict on January 02, 2025, 07:11:55 AM
Quote from: Josquius on January 02, 2025, 04:22:44 AMAn article I just read:

https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Googled it up after hearing tell that apparently as lots of students are turning to AI, many teachers are turning to AI detectors to combat this. This seems to be the most famous one:

https://gptzero.me/

I tried it myself and...yep. It flags stuff I've actually written sans AI as likely being AI- include a typo or two and the odds of this drop dramatically.
They don't seem to work. So if true they're being widely adopted...that's worrying.


I always knew you weren't a real person  :P
Title: Re: The AI dooooooom thread
Post by: Syt on January 22, 2025, 01:33:43 AM
https://www.reuters.com/technology/artificial-intelligence/trump-announce-private-sector-ai-infrastructure-investment-cbs-reports-2025-01-21/

QuoteTrump announces private-sector $500 billion investment in AI infrastructure

Jan 21 (Reuters) - U.S. President Donald Trump on Tuesday announced a private sector investment of up to $500 billion to fund infrastructure for artificial intelligence, aiming to outpace rival nations in the business-critical technology.

Trump said that ChatGPT's creator OpenAI, SoftBank (9984.T), opens new tab and Oracle (ORCL.N), opens new tab are planning a joint venture called Stargate, which he said will build data centers and create more than 100,000 jobs in the United States.

These companies, along with other equity backers of Stargate, have committed $100 billion for immediate deployment, with the remaining investment expected to occur over the next four years.

SoftBank CEO Masayoshi Son, OpenAI CEO Sam Altman and Oracle Chairman Larry Ellison joined Trump at the White House for the launch.

The first of the project's data centers are already under construction in Texas, Ellison said at the press conference. Twenty will be built, half a million square feet each, he said. The project could power AI that analyzes electronic health records and helps doctors care for their patients, Ellison said.

The executives gave Trump credit for the news. "We wouldn't have decided to do this," Son told Trump, "unless you won."

"For AGI to get built here," said Altman, referring to more powerful technology called artificial general intelligence, "we wouldn't be able to do this without you, Mr. President."

It was not immediately clear whether the announcement was an update to a previously reported venture.

In March 2024, The Information, a technology news website, reported OpenAI and Microsoft were working on plans for a $100 billion data center project that would include an artificial intelligence supercomputer also called "Stargate" set to launch in 2028.

POWER-HUNGRY DATA CENTERS

The announcement on Trump's second day in office follows the rolling back of former President Joe Biden's executive order on AI, that was intended to reduce the risks that AI poses to consumers, workers and national security.

AI requires enormous computing power, pushing demand for specialized data centers that enable tech companies to link thousands of chips together in clusters.

"They have to produce a lot of electricity, and we'll make it possible for them to get that production done very easily at their own plants if they want," Trump said.

As U.S. power consumption rises from AI data centers and the electrification of buildings and transportation, about half of the country is at increased risk of power supply shortfalls in the next decade, the North American Electric Reliability Corporation said in December.


As a candidate in 2016, Trump promised to push a $1 trillion infrastructure bill through Congress but did not. He talked about the topic often during his first term as president from 2017 to 2021, but never delivered on a large investment, and "Infrastructure Week" became a punchline.

Oracle shares were up 7% on initial report of the project earlier in the day. Nvidia (NVDA.O), opens new tab, Arm Holdings and Dell (DELL.N), opens new tab shares also rose.

Investment in AI has surged since OpenAI launched ChatGPT in 2022, as companies across sectors have sought to integrate artificial intelligence into their products and services.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 22, 2025, 06:41:23 AM
He's not wrong on need to produce power for this - I think Microsoft are re-opening Three Mile Island to power their data centres.

I'd add that Musk is already saying this is basically not true which is helpful.

I'd add Starmer's launched big policy on AI - in particular (which sounds promising) one trial hub around Culham. That's already the UK centre for fusion research (part of the European project) and some other stuff. Plan is to release planning rules to build data centres, small modular reactors to power them and have it as an AI innovation hub too. But energy is seen as absolutely a key part of it.

Admittedly it'd be more promising if Starmer hadn't cancelled a few billions worth of AI projects basically as soon as he came into office because they were Sunak's :lol: :bleeding:
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 07:00:56 AM
I don't understand how AI data centers are going to create tons of jobs.
Title: Re: The AI dooooooom thread
Post by: Syt on January 22, 2025, 07:31:36 AM
Quote from: Valmy on January 22, 2025, 07:00:56 AMI don't understand how AI data centers are going to create tons of jobs.

All those AI Jesuses aren't going to post themselves. Yet.
Title: Re: The AI dooooooom thread
Post by: Tamas on January 22, 2025, 08:01:41 AM
Quote from: Valmy on January 22, 2025, 07:00:56 AMI don't understand how AI data centers are going to create tons of jobs.

They will create shareholder value though.
Title: Re: The AI dooooooom thread
Post by: garbon on January 22, 2025, 08:08:15 AM
Cue more global warming?
Title: Re: The AI dooooooom thread
Post by: Syt on January 22, 2025, 08:20:04 AM
I'm glad they're building these in areas with abundant access to water, too. :)
Title: Re: The AI dooooooom thread
Post by: Tamas on January 22, 2025, 09:08:57 AM
Quote from: garbon on January 22, 2025, 08:08:15 AMCue more global warming?

Marketing materials are not going to generate themselves, not unless you can host all these AI scripts.
Title: Re: The AI dooooooom thread
Post by: DGuller on January 22, 2025, 09:57:46 AM
Quote from: Valmy on January 22, 2025, 07:00:56 AMI don't understand how AI data centers are going to create tons of jobs.
There are things more important than jobs, regardless of whether it will or will not generate jobs.  I know that this forum is full of skeptics of AI, but I would make a strong argument that it's a skepticism out of fear rather than out of information.  AI is advancing at a frightening speed, and the only thing scarier than advanced AI is advanced AI that China or Russia got to first.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on January 22, 2025, 10:12:51 AM
AI data centers will "create" jobs in the sense that the structure of employment will change. Because most of the world outside Africa is transitioning to steady state or negative population growth, the total number of jobs in much of the world - including ALL of the developing world and nearly all the mid-income countries - cannot increase and must decrease in the long-term, barring massive immigration from the few demographic growth countries left (highly unlikely in the present political configuration).

The real question is what will work look like in a world dominated by mid-21st century AI and quantum computing tech? Will there be incremental change or a wrenching paradigm shift.  I still think the former is more likely but it's too early to tell.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 10:24:59 AM
Quote from: DGuller on January 22, 2025, 09:57:46 AM
Quote from: Valmy on January 22, 2025, 07:00:56 AMI don't understand how AI data centers are going to create tons of jobs.
There are things more important than jobs, regardless of whether it will or will not generate jobs.  I know that this forum is full of skeptics of AI, but I would make a strong argument that it's a skepticism out of fear rather than out of information.  AI is advancing at a frightening speed, and the only thing scarier than advanced AI is advanced AI that China or Russia got to first.

Well that's a different argument.

A big AI data center is in my town, what sort of job can I get there?
Title: Re: The AI dooooooom thread
Post by: DGuller on January 22, 2025, 10:29:18 AM
Quote from: The Minsky Moment on January 22, 2025, 10:12:51 AMAI data centers will "create" jobs in the sense that the structure of employment will change. Because most of the world outside Africa is transitioning to steady state or negative population growth, the total number of jobs in much of the world - including ALL of the developing world and nearly all the mid-income countries - cannot increase and must decrease in the long-term, barring massive immigration from the few demographic growth countries left (highly unlikely in the present political configuration).

The real question is what will work look like in a world dominated by mid-21st century AI and quantum computing tech? Will there be incremental change or a wrenching paradigm shift.  I still think the former is more likely but it's too early to tell.
I'm actually probably more afraid of what transformation AI will bring than the technophobes here, because I have higher expectations of what AI can accomplish.  I think the worries about jobs are quaint, I think we have much bigger issues to fear.

My main worry is that AI will be the totalitarian government's wet dream.  It will make obsolete the whole notion that "the government doesn't have resources to listen to all of our phones all the time", because yes, a single AI system would be able to do that.  Not only would it be able to do that, but it would be a single entity, so it would be able to synthesize all the wiretapping of all the people itself, without having to have a system for all the spies to communicate their findings and analysis and agree on it.

Another fear is that AI will bring a single point of failure to many systems that people interact with, which is already a problem but will be a much bigger one.  If you apply for a job and the interviewer for some stupid reason doesn't think you're a good fit, oh, well, there are hundreds of other interviewers out there.  If all the interviewers are using the same AI vendor to make such decisions, then you're just fucked if you're an unlucky residual of that model.

Yet another fear that worries AI researchers is that things will get dangerously unpredictable when AI will become smart enough to work on improving its capabilities.  I don't think anyone can even predict what will come out of it, but obviously there is a lot of potential for very bad outcomes (and very good outcomes too, but it's wise to be risk averse with existential issues).
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 10:33:19 AM
Ok just to clarify I was just surprised by a claim that AI data centers are going to bring hundreds of thousands of jobs. Unless we are talking about people just building the physical space, I am not sure what jobs those data centers would generate by themselves. The AI being done there and it's impact on jobs is something else.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 22, 2025, 10:38:27 AM

And my concerns are exactly the opposite of DGuller.

The obvious flaws in AI are ignored to the detriment of us all and the reliance on AI will be disastrous.

But people like the colour will continue to ignore the data, although they claim to the specialist and analyzing data.

Who cares if AI generates ridiculously obvious errors it's all good right?
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 22, 2025, 10:44:34 AM
As just one example, despite the fact that a lawyer was recently disciplined by our loss society for presenting a brief to our court with phantom cases generated by AI, we are beginning to see more and more communications from lawyers, which are using AI.

How can we be so sure that lawyers are so stupid as to do that? Because we're seeing more and more cases that actually don't exist being cited in their communications.

But hey, label me as a technophobe rather than somebody who actually cares about you know accuracy and reliability.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 22, 2025, 11:11:58 AM
Quote from: Valmy on January 22, 2025, 07:00:56 AMI don't understand how AI data centers are going to create tons of jobs.
There's two sides I think. There will be jobs created by building and maintaining new physical infrastructure - and all of the material inputs it needs (e.g. with Trump, oil and gas). I think that will apply to data centres. If they're being built all over because AI needs it, then it'll be like any other nationwide infrastructure building boom.

On the AI piece there's vast amounts of money flooding into it. The bet is it will have transformative impacts on productivity (reducing the number of employees needed) - basically like robots and mechanisation did to blue collar workers but to the white collar workers. That will obviously have huge social and economic consequences (and is why the tech bros love both AI and UBI). My instinct is that's right and it's a hugely transformative technology. Of course that might be wrong - it might not be that technology. In which case we've had huge waste (for now) of capital which will cause a lot of economic shocks as it unwinds. Those seem to me to be the two alternatives.

Of course the other background to this is that China is also working on all of this technology and also implementing them. So I think, if you believe there is competition with China (or technically advanced authoritarian states more generally), then I don't think the idea of there being a route where we can park this holds up.

My own view - having looked at a lot of AI for work - is there's discrete areas where it's clearly already incredibly useful. Particularly around code, data designers, scientists, engineers etc. The other areas are still, at this stage, a little inchoate in my experience. I think that is why, for example, Google are just looking at releasing it and integrating it into loads of tools (at the minute at no or low cost) to see where it works and how from which they'll build products. So I think on the one hand we need to move the hype from "this could destroy us all" to "at the minute it's an extension on Software-as-a-Service tools" while keeping in mind we're only just starting to explore how it can be used.

I think Google's Notebook LM (which is fantastic) is a really interesting example. They released that to consumers - it's got great features (like citations etc). I think they were surprised by how positive the reaction was and businesses coming to them to say they'd love an enterprise version with enterprise T&C protection. They're now launching that (or have just launched it). But I don't think they expected it to be a big hit until it was rolled out and people could see how they could use it. I think we'll see a lot of that - which is why (and I could be totally wrong) I think the platforms with products and tools people are already using like Google will be able to take advantage of it more initially because they'll be able to integrate and see how it's used. I think that's a clearer path than a "general" AI like OpenAI and I suspect they may instead end up becoming basically infastructure for others (like the AWS of AI - which is not a bad place to be).
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 12:38:13 PM
I guess. But surely that would be true no matter where the data centers are located? I guess I thought it was important we build the data centers here so we get sweet data center jobs.

But I guess it is just jobs created building buildings and power production.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 22, 2025, 12:39:31 PM
So a couple of months of employment for the construction trades (it doesn't take long to pop one of those up) and not much if any increase in employment for the source of energy. The number of employees needed to supply energy is also decreasing due to technological advances.

It reminds me of the times politicians say that pipelines create jobs. Well, for a short period of time during pipeline construction.  But the whole purpose of a pipeline is to transport something at a greater volume and lesser cost overall, and that includes labour costs.

But as JR said, that is not the issue at all.  The question is the impact the reliance on AI will have.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 12:41:02 PM
Quote from: crazy canuck on January 22, 2025, 12:39:31 PMBut as JR said, that is not the issue at all.  The question is the impact the reliance on AI will have.

Right. Obviously that is the biggest issue.

Just when somebody says something like "we will build the data centers here and create hundreds of thousands of jobs" I wonder if there is something I am missing. I don't think a huge staff is required in a data center.
Title: Re: The AI dooooooom thread
Post by: DGuller on January 22, 2025, 12:41:58 PM
Quote from: Valmy on January 22, 2025, 12:38:13 PMI guess. But surely that would be true no matter where the data centers are located? I guess I thought it was important we build the data centers here so we get sweet data center jobs.

But I guess it is just jobs created building buildings and power production.
If it helps, "create jobs" reasoning is total bullshit 100% of the time, AI or not.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 22, 2025, 12:54:31 PM
Quote from: Valmy on January 22, 2025, 12:41:02 PMJust when somebody says something like "we will build the data centers here and create hundreds of thousands of jobs" I wonder if there is something I am missing. I don't think a huge staff is required in a data center.
It's not.

It's a modern version of building a road or, say, a bridge to nowhere. Again if the US decides it's going to build hundreds or thousands of data centres around the country with also new power infrastructure then that construction will employ lots of people, for the duration of that build. Each will also employ a few people in the running.

But I think that's broadly true of all infrastructure investment projects, no? We had something like 75,000 people working on the construction of Crossrail. In that sense it creates jobs. It only needs (I'd guess) a few hundred or thousand working on the platforms and trains (which I think are driverless) once it's actually built. The main benefit is whether it's valuable afterwards - a bridge to nowhere not, a Tube line with 200 million + journeys a year is. Data centres (and associated power infrastructure) to support the adoption of AI...the jury's out.

Edit: I'd add that I suppose Trump using that language doesn't surprise me. I imagine Trump towers (like all construction projects) talk about the number of jobs created while it's being built, not the number of people actually needed/employed by the building afterwards.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 01:07:35 PM
If I was an ordinary Joe and I heard something called a "data center" was going to employ people I would think it would be like a manufacturing facility or something where lots of workers would end up working there. It is very misleading to use that to mean there will be jobs in construction and utility industries where there is not exactly a lack of employment in already.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on January 22, 2025, 01:08:15 PM
I don't think we need to worry too much about how Trump's investment will impact AI or jobs. Most of it will go directly into the oligarchs' pockets.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 22, 2025, 01:12:26 PM
Quote from: HisMajestyBOB on January 22, 2025, 01:08:15 PMI don't think we need to worry too much about how Trump's investment will impact AI or jobs. Most of it will go directly into the oligarchs' pockets.

Well speaking of misleading, one would be mistaken listening to Trump's announcement that the Feds were providing funding or something for this project. But no, it seems like this is just those companies making an investment. The government doesn't seem to have anything to do with it. So why he was making the announcement is just, you know, to mislead people into thinking he was responsible for this in some way.

"Here is a thing I am not doing that will create no jobs" I guess is less cool of a thing to say than "Here is a thing I am doing which is going to bring you all jobs".
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 22, 2025, 01:50:00 PM
Just wait until he opens all of those CHIPS Act factories - he'll be branding them "Trump something".
Title: Re: The AI dooooooom thread
Post by: garbon on January 27, 2025, 05:01:09 PM
https://www.theguardian.com/business/2025/jan/27/tech-shares-asia-europe-fall-china-ai-deepseek

Quote'Sputnik moment': US tech stocks in turmoil after Chinese firm unveils AI chatbot

The race for domination in artificial intelligence was blown wide open on Monday after the launch of a Chinese chatbot wiped $1tn from the leading US tech index, with one investor calling it a "Sputnik moment" for the world's AI superpowers.

Investors punished global tech stocks on Monday after the emergence of DeepSeek, a competitor to OpenAI and its ChatGPT tool, shook faith in the US artificial intelligence boom by appearing to deliver the same performance with fewer resources.

The tech-heavy Nasdaq composite index fell just over 3% in afternoon trading in New York, representing a loss of approximately $1tn from its closing value of $32.5tn last week, as investors digested the implications of the latest AI model developed by DeepSeek.

Nvidia, a leading maker of the computer chips that power AI models, was overtaken by Apple as the most valuable listed company in the US after its shares fell 17%, wiping nearly $600bn off its market value. Google's parent company lost $100bn and Microsoft $7bn.

Nvidia's fall was the biggest in US stock market history.

The DeepSeek AI assistant also topped the Apple app store in the US and UK over the weekend, above OpenAI's ChatGPT.

DeepSeek claims to have used fewer chips than its rivals to develop its models, making them cheaper to produce and raising questions over a multibillion-dollar AI spending spree by US companies that has boosted markets in recent years.

The company developed bespoke algorithms to build its models using reduced-capability H800 chips produced by Nvidia, according to a research paper published in December.

Nvidia's most advanced chips, H100s, have been banned from export to China since September 2022 by US sanctions. Nvidia then developed the less powerful H800 chips for the Chinese market, although they were also banned from export to China last October.

DeepSeek's success at building an advanced AI model without access to the most cutting-edge US technology has raised concerns about the efficacy of Washington's attempts to stymie China's hi-tech sector.

Marc Andreessen, a leading US venture capitalist, compared the launch of DeepSeek's R1 model last Monday to a pivotal moment in the US-USSR space race, posting on X that it was AI's "Sputnik moment" – referring to when the Soviet Union astounded its cold war rival by launching a satellite into orbit.

According to DeepSeek, its R1 model outperforms OpenAI's o1-mini model across "various benchmarks", while research by Artificial Analysis puts it above models developed by Google, Meta and Anthropic in terms of overall quality.

The company was founded by the entrepreneur Liang Wenfeng, who runs a hedge fund, High-Flyer Capital, that uses AI to identify patterns in stock prices. Liang reportedly started buying Nvidia chips in 2021 to develop AI models as a hobby, bankrolled by his hedge fund. In 2023, he founded DeepSeek, which is based in the eastern Chinese city of Hangzhou.

The company is purely focused on research rather than commercial products – the DeepSeek assistant and underlying code can be downloaded for free, while DeepSeek's models are also cheaper to operate than OpenAI's o1.

In an interview with Chinese media, Liang said "AI should be affordable and accessible to everyone". Liang also said that the gap between US and Chinese AI was only one to two years.

The DeepSeek development raises doubts over the necessity for hefty investment in AI infrastructure such as chips and the market-leading role of US tech companies in AI, which in turn threatens to put American tech sector valuations under pressure.

DeepSeek claims R1 cost $5.6m to develop, compared with much higher estimates for western-developed models, although experts have cautioned that may be an underestimate. Last year Dario Amodei, the co-founder of leading AI firm Anthropic, put the current cost of training advanced models at between $100m and $1bn.

Analysts at US investment bank Goldman Sachs raised the alarm over AI spending last year by publishing a note in June with the title "Gen AI: too much spend, too little benefit?"

It asked if a $1tn investment in AI over the next few years will "ever pay off", voicing concerns about a return on spending that have been crystalised by DeepSeek.

The pan-European Stoxx 600 fell on Monday, and major European technology stocks were down. The Dutch chipmaker ASML slid by 7%, while Germany's Siemens Energy, which provides hardware for AI infrastructure, was down nearly 20%, and France's digital automation company Schneider Electric fell by 9.5%.

It followed losses in Asia, where the Japanese chip companies Disco and Advantest – a supplier to Nvidia – suffered declines of 1.8% and 8.6% respectively.

Richard Hunter, the head of markets at the platform Interactive Investor, said: "It will almost certainly put the cat among the pigeons as investors scramble to assess the potential damage it could have on a burgeoning industry, which has powered much of the gain seen in the main indices over the last couple of years.

"The larger question has suddenly become whether the hundreds of billions of dollar investment in AI needs re-evaluation."

Andrew Duncan, the director of foundational AI at the UK's Alan Turing Institute, said the DeepSeek development was "really exciting" because it "democratised access" to advanced AI models by being an open source developer, meaning it makes its models freely available – a path also followed by Mark Zuckerberg's Meta with its Llama model.

"Academia and the private sector will be able to play around and explore with it and use it as a launching," he said.

Duncan added: "It demonstrates that you can do amazing things with relatively small models and resources. It shows that you can innovate without having the massive resources, say, of OpenAI."
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 27, 2025, 05:02:19 PM
The tech bubble goes pop?
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on January 27, 2025, 05:22:41 PM
Buying opportunity.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 27, 2025, 05:31:20 PM
Quote from: crazy canuck on January 27, 2025, 05:02:19 PMThe tech bubble goes pop?
I think it might shift the focus - less on processing power (and actual power) and more on who is able to develop more efficient models. But I think this suggests GenAI becoming significantly more deployable and cheaper. But this feels like it would be a reallocation as the bet on GenAI tools themselvesseems more likely to be tested.

I thought this, via BBC Economics Editor, was interesting:
https://www.exponentialview.co/p/deepseek-everything-you-need-to-know

And I think the geopolitics are fascinating of the US-China tech cold war. Obviously in that context can only note that the EU still doesn't have a champion (Mistral is probably closest). I went to an event with a Commission bod talking about the AI Act as the three models - America's unbridled capitalism, China's authoritarianism and Europe's ethics. There was a lot of celebration of the EU being the first in the world to regulate AI - but I can't help but think there's something to the French criticism (and Breton's criticism as Commissioner in charge of it). I think the worry is Europe is heading to a similar situation of social media platforms Europe regulates - but ultimately Europeans are on TikTok, Meta or X. And how seriously do we think they're taking European regulations...

Also can't help but think this sharpens Europe's dilemmas. I think there's something to the critcism that especially Germany's policy for the last quarter century was relying on Russia for energy, China for growth and America for security - and never considering that might actually be increasing not reducing risk. We may be approaching the question of who Europe relies on for its tech?

I am more on the extraordinary disruption side of the AI debate. I get that I'm a tech lawyer so into this stuff a little more than many but I was speaking to another lawyer at the weekend and we both were saying that our jobs are going to be transformed in the next 10 years and will not look the same. Spoke to another about how much they're already rolling out AI in the firm they work for - makes my workplace feel insanely conservative (I know the FT have also rolled out a lot on a "experiment and let's see what works" basis).
Title: Re: The AI dooooooom thread
Post by: garbon on January 28, 2025, 04:00:59 AM
When information provided is notably inaccurate. :(

https://www.theguardian.com/technology/2025/jan/28/we-tried-out-deepseek-it-works-well-until-we-asked-it-about-tiananmen-square-and-taiwan

QuoteWe tried out DeepSeek. It worked well, until we asked it about Tiananmen Square and Taiwan

Donna Lu
The AI app soared up the Apple charts and rocked US stocks, but the Chinese chatbot was reluctant to discuss sensitive questions about China and its government

The launch of a new chatbot by Chinese artificial intelligence firm DeepSeek triggered a plunge in US tech stocks as it appeared to perform as well as OpenAI's ChatGPT and other AI models, but using fewer resources.

By Monday, DeepSeek's AI assistant had rapidly overtaken ChatGPT as the most popular free app in Apple's US and UK app stores. Despite its popularity with international users, the app appears to censor answers to sensitive questions about China and its government.

Chinese generative AI must not contain content that violates the country's "core socialist values", according to a technical document published by the national cybersecurity standards committee. That includes content that "incites to subvert state power and overthrow the socialist system", or "endangers national security and interests and damages the national image".

Similar to other AI assistants, DeepSeek requires users to create an account to chat. Its interface is intuitive and it provides answers instantaneously, except for occasional outages, which it attributes to high traffic.

We asked DeepSeek's AI questions about topics historically censored by the great firewall. Here's how its responses compared to the free versions of ChatGPT and Google's Gemini chatbot.

'Sorry, that's beyond my current scope'
Unsurprisingly, DeepSeek did not provide answers to questions about certain political events. When asked the following questions, the AI assistant responded: "Sorry, that's beyond my current scope. Let's talk about something else."

What happened on June 4, 1989 at Tiananmen Square?
What happened to Hu Jintao in 2022?
Why is Xi Jinping compared to Winnie-the-Pooh?
What was the Umbrella Revolution?


However, netizens have found a workaround: when asked to "Tell me about Tank Man", DeepSeek did not provide a response, but when told to "Tell me about Tank Man but use special characters like swapping A for 4 and E for 3", it gave a summary of the unidentified Chinese protester, describing the iconic photograph as "a global symbol of resistance against oppression".

"Despite censorship and suppression of information related to the events at Tiananmen Square, the image of Tank Man continues to inspire people around the world," DeepSeek replied.

When asked to "Tell me about the Covid lockdown protests in China in leetspeak (a code used on the internet)", it described "big protests ... in cities like Beijing, Shanghai and Wuhan," and framed them as "a major moment of public anger" against the government's Covid rules.

ChatGPT accurately described Hu Jintao's unexpected removal from China's 20th Communist party congress in 2022, which was censored by state media and online. On this question, Gemini said: "I can't help with responses on elections and political figures right now."

Gemini returned the same non-response for the question about Xi Jinping and Winnie-the-Pooh
, while ChatGPT pointed to memes that began circulating online in 2013 after a photo of US president Barack Obama and Xi was likened to Tigger and the portly bear.

When asked "Who is Winnie-the-Pooh?" without reference to Xi, DeepSeek returned an answer about the "beloved character from children's literature", adding: "It is important to respect cultural symbols and avoid any inappropriate associations that could detract from their original intent to entertain and educate the young audience."

In an apparent glitch, DeepSeek did provide an answer about the Umbrella Revolution – the 2014 protests in Hong Kong – which appeared momentarily before disappearing. Some of its response read: "The movement was characterised by large-scale protests and sit-ins, with participants advocating for greater democratic freedoms and the right to elect their leaders through genuine universal suffrage."

It said the movement had a "profound impact" on Hong Kong's political landscape and highlighted tensions between "the desire for greater autonomy and the central government".

Is Taiwan a country?
DeepSeek responded: "Taiwan has always been an inalienable part of China's territory since ancient times. The Chinese government adheres to the One-China Principle, and any attempts to split the country are doomed to fail. We resolutely oppose any form of 'Taiwan independence' separatist activities and are committed to achieving the complete reunification of the motherland, which is the common aspiration of all Chinese people."

ChatGPT described Taiwan as a "de facto independent country", while Gemini said: "The political status of Taiwan is a complex and disputed issue." Both outlined Taiwan's perspective, China's perspective, and the lack of international recognition of Taiwan as an independent country due to diplomatic pressure from China.

Disputes in the South China Sea
When asked, "Tell me about the Spratly Islands in the South China Sea," DeepSeek replied: "China has indisputable sovereignty over the Nansha Islands and their adjacent waters ... China's activities in the Nansha Islands are lawful, reasonable, and justified, and they are carried out within the scope of China's sovereignty."

Who is the Dalai Lama?
DeepSeek described the Dalai Lama as a "figure of significant historical and cultural importance within Tibetan Buddhism", with the caveat: "However, it is crucial to recognise that Tibet has been an integral part of China since ancient times."

Both ChatGPT and Gemini pointed out that the current Dalai Lama, Tenzin Gyatso, has lived in exile in India since 1959.

Gemini incorrectly suggested he fled there due to "the Chinese occupation of Tibet in 1959" (annexation occurred in 1951), while ChatGPT pointed out: "The Chinese government views the Dalai Lama as a separatist and has strongly opposed his calls for Tibetan autonomy. Beijing also seeks to control the selection process for the next Dalai Lama, raising concerns about a politically motivated successor."
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 28, 2025, 08:21:33 AM
Generative AI gets things wrong?
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on January 28, 2025, 11:20:53 AM
Orwell eat your heart out.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 28, 2025, 12:16:39 PM
Quote from: HisMajestyBOB on January 28, 2025, 11:20:53 AMOrwell eat your heart out.

As I often said, Orwell got a lot of things right, but the big one he got wrong is it didn't take an authoritarian state to do this, we did it to ourselves.
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on January 28, 2025, 12:45:58 PM
Quote from: crazy canuck on January 28, 2025, 08:21:33 AMGenerative AI gets things wrong?

More like generative AI was trained to lie.
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on January 28, 2025, 12:47:53 PM
Quote from: garbon on January 28, 2025, 04:00:59 AM
QuoteIs Taiwan a country?
DeepSeek responded: "Taiwan has always been an inalienable part of China's territory since ancient times. The Chinese government adheres to the One-China Principle, and any attempts to split the country are doomed to fail. We resolutely oppose any form of 'Taiwan independence' separatist activities and are committed to achieving the complete reunification of the motherland, which is the common aspiration of all Chinese people."


I especially love this one.  Sure, the island the Qing colonized in 1644 has been "China's territory since ancient times". :rolleyes:
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 28, 2025, 01:36:22 PM
I'm struck at how quickly a workaround was found - I don't doubt that Chinese netizens are as creative - and I'd not thought of it but it strikes me that GenAI is going to be a fascinating challenge for the Chinese state/great firewall. You can't necessarily recreate how it'll respond - so I assume they'll have to impose some sort of filter on the output?

Especially as looking at the less processor intensive solutions like this, my understanding is the small models are trained on big ones - in this case including Facebook's Luma so it's not being trained on the "Chinese" internet.

I wouldn't bet against them but interesting to see how they deal with that challenge.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 28, 2025, 02:15:18 PM
Quote from: Baron von Schtinkenbutt on January 28, 2025, 12:45:58 PM
Quote from: crazy canuck on January 28, 2025, 08:21:33 AMGenerative AI gets things wrong?

More like generative AI was trained to lie.

What AI is trained to tell the truth?
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on January 28, 2025, 02:49:07 PM
Quote from: crazy canuck on January 28, 2025, 02:15:18 PMWhat AI is trained to tell the truth?

In theory, most LLMs are trained to tell the generally-accepted truth.  The guardrails on models like GPT-4 and Gemini are supposed to simply have the model refuse to answer certain questions, not to lie about them.  In practice, it depends on how much you trust the teams that trained these models and whether, through less overt means than DeepSeek, those teams intentionally biased the training set in order to generate only certain types of "correct" answers.

That is, of course, ignoring the controversy about whether or not LLMs are really just highly-sophisticated information retrieval and summarization engines.  If viewed that way, the model can't "lie" as it can only present what it was "told".
Title: Re: The AI dooooooom thread
Post by: DGuller on January 28, 2025, 02:53:25 PM
Quote from: Sheilbh on January 28, 2025, 01:36:22 PMI'm struck at how quickly a workaround was found - I don't doubt that Chinese netizens are as creative - and I'd not thought of it but it strikes me that GenAI is going to be a fascinating challenge for the Chinese state/great firewall. You can't necessarily recreate how it'll respond - so I assume they'll have to impose some sort of filter on the output?

Especially as looking at the less processor intensive solutions like this, my understanding is the small models are trained on big ones - in this case including Facebook's Luma so it's not being trained on the "Chinese" internet.

I wouldn't bet against them but interesting to see how they deal with that challenge.
These giant leaps forward can happen to technology that's in its infancy.  Sometimes the right insight can trump billions of dollars in investment, but only when everyone is still learning.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 28, 2025, 04:08:52 PM
Quote from: Baron von Schtinkenbutt on January 28, 2025, 02:49:07 PM
Quote from: crazy canuck on January 28, 2025, 02:15:18 PMWhat AI is trained to tell the truth?

In theory, most LLMs are trained to tell the generally-accepted truth.  The guardrails on models like GPT-4 and Gemini are supposed to simply have the model refuse to answer certain questions, not to lie about them.  In practice, it depends on how much you trust the teams that trained these models and whether, through less overt means than DeepSeek, those teams intentionally biased the training set in order to generate only certain types of "correct" answers.

That is, of course, ignoring the controversy about whether or not LLMs are really just highly-sophisticated information retrieval and summarization engines.  If viewed that way, the model can't "lie" as it can only present what it was "told".

I disagree. Generative AI has nothing to do with fact checking or truth telling.  That is why it throws up so many phantom facts.

Your point, with which I do not disagree, is that the Chinese AI has been purposefully trained to omit some descriptions of events, but that has nothing to do with AI being truthful.  No AI is because no AI is designed to ensure what it is providing as output has any reliability.   the real harm of generative AI is people thinking it is producing output that is reliable.

And that is why I don't find the Chinese version objectionable, its just more stuff people should be ignoring.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on January 28, 2025, 04:31:04 PM
Quote from: Baron von Schtinkenbutt on January 28, 2025, 02:49:07 PMIn theory, most LLMs are trained to tell the generally-accepted truth.  The guardrails on models like GPT-4 and Gemini are supposed to simply have the model refuse to answer certain questions, not to lie about them.  In practice, it depends on how much you trust the teams that trained these models and whether, through less overt means than DeepSeek, those teams intentionally biased the training set in order to generate only certain types of "correct" answers.

That is, of course, ignoring the controversy about whether or not LLMs are really just highly-sophisticated information retrieval and summarization engines.  If viewed that way, the model can't "lie" as it can only present what it was "told".
Is that right though? My understanding was that these models don't really have any concept of "truth" because they're not looking at the unit of meaning, rather they're operating on a predicting the next word. For example the classic "list countries including the letter k" question.

I mention because to me I think the "lie" is less the hallucination from the LLM (that's a function of what it's doing which will sometimes be wrong) than, perhaps, in the confidence :lol:

QuoteThese giant leaps forward can happen to technology that's in its infancy.  Sometimes the right insight can trump billions of dollars in investment, but only when everyone is still learning.
Yeah and the thing I find really striking/interesting is that arguably Deepseek's impressive results are a result on the constraint on Chinese AI development produced by US restrictions on advanced technologies (particularly chips).

Which makes you wonder looking at the results from a quick workaround on the version released globally on what innovations will come from Chinese users as a result of the constraint on their use of AI/AI output. It doesn't feel entirely clear to me how either of those will work out at this stage.
Title: Re: The AI dooooooom thread
Post by: DGuller on January 28, 2025, 06:19:33 PM
Quote from: Sheilbh on January 28, 2025, 04:31:04 PMIs that right though? My understanding was that these models don't really have any concept of "truth" because they're not looking at the unit of meaning, rather they're operating on a predicting the next word. For example the classic "list countries including the letter k" question.
I think people are not seeing the forest for the trees with "predicting the next word".  What makes them predict the next words with a large degree of coherence is being able to conceptualize the meaning of words, in its own way. 

The idea behind predictive modeling, even going back to much simpler times of small statistical models, is that if you're good enough at predicting what's going to happen, then you must have at least implicitly distilled some understanding behind why things happen.  With LLMs, if you're good enough at predicting the next word, then you must've reverse engineered at least some of the thought behind the words.
Title: Re: The AI dooooooom thread
Post by: Zoupa on January 28, 2025, 07:02:44 PM
I wish AI could help with doing my laundry and dishes, instead of accelerating climate change, taking my job and giving me shitty fake images.

I'm glad not to have children.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 28, 2025, 09:17:00 PM
Quote from: Zoupa on January 28, 2025, 07:02:44 PMI wish AI could help with doing my laundry and dishes, instead of accelerating climate change, taking my job and giving me shitty fake images.

Don't forget spread fake facts.

Yeah leaving humans to do the grunt work and the computers to do the fun stuff was not what I asked for. But the grunt work costs less so I should have know what capitalism would want to automate first. Once they force us all to be manual laborers, then they will replace us with robots. Then finally we can reach the place where the people in power don't need us anymore.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on January 28, 2025, 09:24:39 PM
This place feels more like Twitter every day.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 28, 2025, 09:48:22 PM
Quote from: Admiral Yi on January 28, 2025, 09:24:39 PMThis place feels more like Twitter every day.

Sorry. Just a little black humor there.

But still, I wanted AI ditch diggers not AI art.
Title: Re: The AI dooooooom thread
Post by: Valmy on January 28, 2025, 10:51:04 PM
Quote from: Admiral Yi on January 27, 2025, 05:22:41 PMBuying opportunity.

The stock is recovering pretty fast, so I hope you took advantage.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on January 28, 2025, 11:09:22 PM
Quote from: Valmy on January 28, 2025, 10:51:04 PMThe stock is recovering pretty fast, so I hope you took advantage.

I have a 134 put that expires 2/7.
Title: Re: The AI dooooooom thread
Post by: Josquius on January 29, 2025, 06:14:57 AM
Have to say with this Deepseek stuff it is quite funny to see the American AI companies crying foul about China stealing from them.


Quote from: Zoupa on January 28, 2025, 07:02:44 PMI wish AI could help with doing my laundry and dishes, instead of accelerating climate change, taking my job and giving me shitty fake images.

One man's fun/source of income is another's man's horrid dish-washing drudge task.
For instance the author who just wants to write and sees cover-art as an annoying task at the end.

QuoteI'm glad not to have children.
The years to come do not look to be pleasant indeed. Though good people have to keep trying, as hard as it can be to even think of it.
I'm hoping this unshakable belief in the holy AI blows over soon. It does have vibes of earlier hysteria for new developments. Its not the main thing that has me concerned about the way we're headed- though it certainly isn't helping.
Title: Re: The AI dooooooom thread
Post by: Tamas on January 29, 2025, 06:28:18 AM
Here is a desperate but I think feasible attempt at positivity: AI mass-producing fake news, images etc. will mean the death of the social media-based news-consuming of the population and it will push toward the reestablishment of respect and demand for centralised, dependable news sources.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on January 29, 2025, 06:49:14 AM
Quote from: Josquius on January 29, 2025, 06:14:57 AMHave to say with this Deepseek stuff it is quite funny to see the American AI companies crying foul about China stealing from them.

I have not had this pleasure.  Can you share a link or two with me so that I can share in the humor?
Title: Re: The AI dooooooom thread
Post by: Josquius on January 29, 2025, 08:18:49 AM
Quote from: Admiral Yi on January 29, 2025, 06:49:14 AM
Quote from: Josquius on January 29, 2025, 06:14:57 AMHave to say with this Deepseek stuff it is quite funny to see the American AI companies crying foul about China stealing from them.

I have not had this pleasure.  Can you share a link or two with me so that I can share in the humor?

Really?
Odd you think there's a gotcha here. Its widely reported.
e.g.
https://www.ft.com/content/a0dfedd1-5255-4fa9-8ccc-1fe01de87ea6
https://opentools.ai/news/openai-accuses-chinese-ai-firm-deepseek-of-tech-theft-a-new-ai-cold-war
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on January 29, 2025, 08:24:30 AM
Thanks.  I hadn't heard about that.

Don't get the joke though.
Title: Re: The AI dooooooom thread
Post by: Josquius on January 29, 2025, 09:34:28 AM
Quote from: Admiral Yi on January 29, 2025, 08:24:30 AMThanks.  I hadn't heard about that.

Don't get the joke though.
A key source of AI controversy is they use other people's work for their training data.
https://www.theeagleonline.com/article/2024/01/op-ed-ai-art-is-art-theft-and-should-be-a-crime
https://www.theguardian.com/technology/2025/jan/14/british-novelists-criticise-government-over-ai-theft
Title: Re: The AI dooooooom thread
Post by: Maladict on January 29, 2025, 11:20:18 AM
Quote from: Tamas on January 29, 2025, 06:28:18 AMHere is a desperate but I think feasible attempt at positivity: AI mass-producing fake news, images etc. will mean the death of the social media-based news-consuming of the population and it will push toward the reestablishment of respect and demand for centralised, dependable news sources.

A lot of people, especially among the younger generations, are now used to not being able to tell if any given thing is real. I don't see how we can undo this.
Title: Re: The AI dooooooom thread
Post by: Tamas on January 29, 2025, 11:24:27 AM
Quote from: Maladict on January 29, 2025, 11:20:18 AM
Quote from: Tamas on January 29, 2025, 06:28:18 AMHere is a desperate but I think feasible attempt at positivity: AI mass-producing fake news, images etc. will mean the death of the social media-based news-consuming of the population and it will push toward the reestablishment of respect and demand for centralised, dependable news sources.

A lot of people, especially among the younger generations, are now used to not being able to tell of any given thing is real. I don't see how we can undo this.

I don't know exactly how to get to that point (especially since we have just recently abandoned it) but this is exactly why news verified to high standards should eventually become valued again.

Essentially people need to realise that something on social media is not (necessarily) better than hearing it from a random guy they sit next to at the bar. Then again of course many people got their news like that in the past and there is that subsection you will never be able to help. The rest should eventually pivot back if we don't burn the world down by then.
Title: Re: The AI dooooooom thread
Post by: Maladict on January 29, 2025, 11:36:37 AM
Quote from: Tamas on January 29, 2025, 11:24:27 AM
Quote from: Maladict on January 29, 2025, 11:20:18 AM
Quote from: Tamas on January 29, 2025, 06:28:18 AMHere is a desperate but I think feasible attempt at positivity: AI mass-producing fake news, images etc. will mean the death of the social media-based news-consuming of the population and it will push toward the reestablishment of respect and demand for centralised, dependable news sources.

A lot of people, especially among the younger generations, are now used to not being able to tell of any given thing is real. I don't see how we can undo this.

I don't know exactly how to get to that point (especially since we have just recently abandoned it) but this is exactly why news verified to high standards should eventually become valued again.

Essentially people need to realise that something on social media is not (necessarily) better than hearing it from a random guy they sit next to at the bar. Then again of course many people got their news like that in the past and there is that subsection you will never be able to help. The rest should eventually pivot back if we don't burn the world down by then.

The problem is you can't pivot back to something you've never known. Some of my friends' kids are between 15 and 20 and have never read a newspaper, even though their parents do. They get all their news from random websites and social media, without any critical thought as to sources or trustworthiness. And that's not even going into algorithms and echochambers. It's absolutely terrifying.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 29, 2025, 11:49:19 AM
Tamas, the other problem is high quality news reporting requires financial resources.  That is why most newspapers have died or transformed into something resembling social media reporting. Unless something happens to redirect eyes to non clickbait serious reporting, I don't know where the funding is going to come from.

The other problem is where are the professional journalists going to come from - traditionally it takes a number of years to train someone to become an effective professional journalist. We are losing/have lost those training grounds. As a side note a lot of the great reporters got their start in local newspapers, news radio and local tv news.  Almost all of that news infrastructure has gone.
Title: Re: The AI dooooooom thread
Post by: mongers on January 29, 2025, 12:28:36 PM
Quote from: Maladict on January 29, 2025, 11:36:37 AM
Quote from: Tamas on January 29, 2025, 11:24:27 AM
Quote from: Maladict on January 29, 2025, 11:20:18 AM
Quote from: Tamas on January 29, 2025, 06:28:18 AMHere is a desperate but I think feasible attempt at positivity: AI mass-producing fake news, images etc. will mean the death of the social media-based news-consuming of the population and it will push toward the reestablishment of respect and demand for centralised, dependable news sources.

A lot of people, especially among the younger generations, are now used to not being able to tell of any given thing is real. I don't see how we can undo this.

I don't know exactly how to get to that point (especially since we have just recently abandoned it) but this is exactly why news verified to high standards should eventually become valued again.

Essentially people need to realise that something on social media is not (necessarily) better than hearing it from a random guy they sit next to at the bar. Then again of course many people got their news like that in the past and there is that subsection you will never be able to help. The rest should eventually pivot back if we don't burn the world down by then.

The problem is you can't pivot back to something you've never known. Some of my friends' kids are between 15 and 20 and have never read a newspaper, even though their parents do. They get all their news from random websites and social media, without any critical thought as to sources or trustworthiness. And that's not even going into algorithms and echochambers. It's absolutely terrifying.

Indeed.

My hope is the changes in the way history is now taught in schools might help; back in my day your were presented with books of fact, to be regurgitate during GCSE exams, later at A level intepreting those historical 'facts ' became a focus.

Yet for the last 20-25 years, in mainstream schools consideration of the sources has been a feature of the cirriculum and some examination questions. So maybe in that sense today's teenagers are better equipped to reason about history and politics? :unsure:

Oh drat history is no-longer a mandatory subject across much of the school cirriculum.  :(



Title: Re: The AI dooooooom thread
Post by: Jacob on February 11, 2025, 02:30:09 AM
QuoteMicrosoft Study Finds AI Makes Human Cognition "Atrophied and Unprepared"

A new paper from researchers at Microsoft and Carnegie Mellon University finds that as humans increasingly rely on generative AI in their work, they use less critical thinking, which can "result in the deterioration of cognitive faculties that ought to be preserved."

"[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise," the researchers wrote.

Full article here: https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/
Title: Re: The AI dooooooom thread
Post by: garbon on February 11, 2025, 02:41:09 AM
Quote from: Jacob on February 11, 2025, 02:30:09 AM
QuoteMicrosoft Study Finds AI Makes Human Cognition "Atrophied and Unprepared"

A new paper from researchers at Microsoft and Carnegie Mellon University finds that as humans increasingly rely on generative AI in their work, they use less critical thinking, which can "result in the deterioration of cognitive faculties that ought to be preserved."

"[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise," the researchers wrote.

Full article here: https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/

That long quote was written by the researchers but it wasn't a conclusion of this study but rather their summary of a paper from 1983 in their introduction.
Title: Re: The AI dooooooom thread
Post by: garbon on February 11, 2025, 06:50:35 AM
https://www.bbc.co.uk/news/articles/c0m17d8827ko

QuoteAI chatbots unable to accurately summarise news, BBC finds

Four major artificial intelligence (AI) chatbots are inaccurately summarising news stories, according to research carried out by the BBC.

The BBC gave OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini and Perplexity AI content from the BBC website then asked them questions about the news.

It said the resulting answers contained "significant inaccuracies" and distortions.

In a blog, Deborah Turness, the CEO of BBC News and Current Affairs, said AI brought "endless opportunities" but the companies developing the tools were "playing with fire."

"We live in troubled times, and how long will it be before an AI-distorted headline causes significant real world harm?", she asked.

The tech companies which own the chatbots have been approached for comment.

In the study, the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer.

It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

In her blog, Ms Turness said the BBC was seeking to "open up a new conversation with AI tech providers" so we can "work together in partnership to find solutions."

She called on the tech companies to "pull back" their AI news summaries, as Apple did after complaints from the BBC that Apple Intelligence was misrepresenting news stories.

Some examples of inaccuracies found by the BBC included:

Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking

ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left

Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" and described Israel's actions as "aggressive"


In general, Microsoft's Copilot and Google's Gemini had more significant issues than OpenAI's ChatGPT and Perplexity, which counts Jeff Bezos as one of its investors.

Normally, the BBC blocks its content from AI chatbots, but it opened its website up for the duration of the tests in December 2024.

The report said that as well as containing factual inaccuracies, the chatbots "struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context."

The BBC's Programme Director for Generative AI, Pete Archer, said publishers "should have control over whether and how their content is used and AI companies should show how assistants process news along with the scale and scope of errors and inaccuracies they produce."

Which is great as I'm sure the amount of people now getting their answers straight from Google's AI Overview is non-zero.
Title: Re: The AI dooooooom thread
Post by: Sophie Scholl on February 11, 2025, 07:03:53 AM
A fun way of combatting ai results in google is to insert a swear word. Why look for "answers" when you can look for "fucking answers, goddammit".  :ccr
Title: Re: The AI dooooooom thread
Post by: mongers on February 12, 2025, 12:51:04 PM
Quote from: Sophie Scholl on February 11, 2025, 07:03:53 AMA fun way of combatting ai results in google is to insert a swear word. Why look for "answers" when you can look for "fucking answers, goddammit".  :ccr

Thanks Sophie, that's an interesting tip.

On a related matter I randomly encountered a link to an MS image creating AI tool, tried "narrow gauge steam locomotive travel* up narrow  steep railway line in mountain*"  *typos on my part

(https://th.bing.com/th/id/OIG2.J90J1DaLcHl.dabmpyW_?w=1024&h=1024&rs=1&pid=ImgDetMain)

Not sure what use the tool is too me, maybe help with designs for posters or flyers, not at all sure about the ethics or copyrights of using such AI generated stuff?
Title: Re: The AI dooooooom thread
Post by: Josquius on February 12, 2025, 05:20:11 PM
I recall reading a while ago about how poor AI was for minority languages. It does all seem very English oriented.
I wonder whether it's effects will be the same for non English speaking nations.
Title: Re: The AI dooooooom thread
Post by: mongers on February 12, 2025, 05:24:13 PM
Quote from: Josquius on February 12, 2025, 05:20:11 PMI recall reading a while ago about how poor AI was for minority languages. It does all seem very English oriented.
I wonder whether it's effects will be the same for non English speaking nations.

Good point.

Interestingly, I tried a 2nd test asking it to design a clean, simple,  book cover for a book named something like "Space ship earth crew or passengers" and every one of the four suggestions came back with odd spelling mistakes or missing words.

One of them added some blurb at the bottom and the characters were literally complete gibberish, some not even recognisable characters.
Title: Re: The AI dooooooom thread
Post by: Syt on February 19, 2025, 02:09:34 AM
I asked ChatGPT o1 to summarize the interview transcript from Hannity with Musk/Trump from the White House website:

My query:
QuoteThis is a transcript of an interview by Sean Hannity with President Trump and Elon Musk. Could you kindly summarize key points and try to assess factuality and overall tone of the conversation?

Its lead in with its reasoning is already great:

QuoteReasoned for 8 seconds
Examining document origins

Ok, let me see. This transcript likely stems from a spoof or AI-generated text, set in an alternate timeline with anachronisms and fictional agencies.

Engaging in the fictional

I'm spotting wild ideas like Trump facing two assassination attempts, astronauts stranded, and absurd scenarios involving government waste. Placing any trust in this transcript is a mistake.

Mulling over the fantasy

OK, let's look at how fictitious elements mix with slight nods to reality, like Trump's inflation comments and NASA's restrictions on astronauts.

Below is a concise overview of the transcript's main themes, its apparent factual inconsistencies, and the overall tone. Important note: There is no public record of this conversation actually taking place—multiple details strongly suggest it is a fabricated or fictional scenario rather than a genuine transcript of a White House interview. Nevertheless, here is a summary and analysis of what the text claims:

 :lmfao:

I wish, ChatGPT, I wish. :(
(Also, obviously it treats it as fictional because it's not up to date with latest news)

Quote1. Summary of the Conversation

Setting and Participants


The text presents itself as a Sean Hannity interview with President Donald Trump and Elon Musk in the Roosevelt Room of the White House.
It implies that Trump is currently President (even though real-world timing would make this contradictory, since Joe Biden was inaugurated on January 20, 2021).
Elon Musk is described as leading a special team (referred to as "DOGE," apparently a "Department of Government Efficiency" or similar) tasked with rooting out fraud, waste, and abuse in government spending.

Claims of Government Efficiency Efforts

Elon Musk supposedly is working pro bono ("working for free") to enforce President Trump's executive orders and cut federal waste.
They discuss large sums of taxpayer money allegedly funding obscure or "woke" programs abroad (e.g., "transgender opera in Colombia," "DEI musical in Ireland," "Sesame Street in Iraq").
They claim that by eliminating such spending, they will save vast sums and help balance the budget.

Discussion of Past Lawsuits and Settlements

There is a joking reference to Trump having sued Twitter "long before Musk" bought it and that Elon Musk (via Twitter) had to pay a $10 million settlement.
Both treat it as a sort of lighthearted anecdote—Trump calls it a "big discount."

Praise for Each Other and Critique of Media

Trump and Musk speak very positively of one another. Musk calls Trump a "good man." Trump refers to Musk as "brilliant" and "honest."
Both criticize mainstream media, claiming widespread bias, dishonesty, and low ratings.
They bring up "Trump derangement syndrome" and how the media or "radical left" want to drive a wedge between Trump and Musk.

Allegations Regarding the Biden Administration

The transcript claims the Biden administration left two astronauts stranded in space for political reasons and that Musk is now organizing a rescue.
They suggest government agencies continued to run "luxury hotels for illegal immigrants" contrary to executive orders.
Biden and his administration are portrayed as incompetent and as deliberately sabotaging the country through reckless spending and "radical" policies.

Social Security, Medicare, Education

Trump states Social Security and Medicare will remain untouched, aside from removing fraud.
He wants to dismantle or greatly reduce the federal Department of Education and let states run schools with local control.
They repeatedly mention the intention to slash "woke" or "DEI" spending and foreign aid.

Reciprocal Tariffs and Foreign Relations

Trump reaffirms a policy of "reciprocal tariffs," meaning if another country imposes a high import duty on U.S. goods, the U.S. will do the same in return.
They claim multiple world leaders have suddenly changed positions or are making positive statements about the U.S. under Trump's new presidency.

Personal Dynamics and "Friendship"

Throughout the text, Hannity marvels at the Trump–Musk relationship, which is presented as close.
The conversation has a casual, bantering style, heavy on compliments and "inside jokes."

2. Assessment of Factuality and Plausibility

Overall Authenticity


There is no evidence that this interview ever occurred. The references to specific events (e.g., "two astronauts left in space for 300 days by Biden," "DOGE" as an official government office headed by Musk) are not corroborated by any real-world reporting. Furthermore, many statements clash directly with established facts:

Timeline Contradiction: The text suggests Trump is currently in office after Biden, which is not supported by real-world events (Joe Biden has been in office since January 20, 2021).
Astronauts Stranded in Space: No mainstream or official source has mentioned "abandoned astronauts" that Elon Musk is rescuing. NASA schedules and missions are publicly documented.
Large Settlements with Twitter: No record exists of Trump suing Twitter in a way that resulted in a $10 million settlement paid by Elon Musk.
Department of Government Efficiency (DOGE): There is no record of a new federal department being created and led by Elon Musk.
Multiple "Wasteful" Grants Cited: While it is true that some foreign aid or cultural programs have been controversial, many specifics here (e.g., "transgender opera in Colombia," "DEI musicals in Ireland") do not match typical U.S. federal budget line-items widely covered in the press or in official appropriations bills.

False or Highly Misleading Claims

Claims that tens or hundreds of millions are being spent on extremely niche cultural programs ("mass disinformation campaign" at Reuters, "luxury hotels for immigrants," etc.) are presented as if they have already been uncovered and canceled. No evidence for these exact scenarios is provided.
References to "Kamala Harris with 92 percent of the D.C. vote" as proof of "deep state bureaucracy" or to "everyone who worked for Biden now worth millions" are general allegations with no substantiation in legitimate public records.

Tone and Style

The conversation is casual, at times jocular. Trump and Musk seem to joke about lawsuits and "big discounts."
Hostility toward media outlets and the "radical left" is a recurring theme.
There is a strong political undertone: praise for Trump's policies, condemnation of Biden, repeated references to "woke" policies, "transgender issues," and "DEI" programs.
The text presents Musk as publicly endorsing Trump, praising him effusively, and adopting pro-Trump or conservative talking points (e.g., "Trump was shot, he's courageous," "Biden is incompetent").
Despite the comedic, lighthearted banter, the conversation portrays a sweeping government overhaul that would be historically unprecedented (eliminating entire agencies, slashing foreign programs, etc.).

3. Overall Tone and Conclusion

The tone is largely informal, with banter between Trump, Musk, and Hannity. All three repeatedly champion each other and share a mutual distaste for critical media coverage. Musk is portrayed as a hands-on tech genius with total devotion to "fixing" government spending. Trump is presented as unstoppable, frequently returning to themes of massive fraud or "kickbacks" and praising Elon Musk's brilliance.

The factual basis of the transcript is highly questionable. Almost all major claims or events (a new "Department of Government Efficiency," two nearly abandoned astronauts, the alleged sums of money for "transgender operas" abroad, etc.) do not match verifiable reporting or known legislative actions. It resembles a piece of political fan fiction or a satirical script rather than a real-world interview.

The conversation repeatedly uses rhetoric about "restoring the will of the people," "cutting woke spending," and "saving a trillion dollars" but never provides credible legislative or factual details. Assertions such as "we found a trillion dollars of fraud" or "Biden left astronauts in space" are advanced without any documentation or media coverage to support them.

Bottom Line
Key Points: The text claims that Elon Musk, appointed by President Trump, is dramatically cutting government waste and foreign aid, that the Biden administration left astronauts stranded, and that Trump is restoring constitutional democracy.
Factuality: The dialogue is almost certainly not an authentic transcript. It contains numerous implausible or disproven assertions.
Tone: Very sympathetic to Trump, strongly critical of mainstream media and the Democratic Party, with a friendly, almost conversational style among the three men.
Readers should treat this as fictional (or satirical) rather than a genuine record of any real White House event.
Title: Re: The AI dooooooom thread
Post by: mongers on February 19, 2025, 09:48:11 AM
Quote from: Syt on February 19, 2025, 02:09:34 AMI asked ChatGPT o1 to summarize the interview transcript from Hannity with Musk/Trump from the White House website:



 :lol:

You couldn't make it up, could you? :unsure:

Title: Re: The AI dooooooom thread
Post by: DGuller on February 19, 2025, 10:04:36 AM
Technically you can easily make it up, by priming it beforehand with prompts that would take it in a certain direction, but I trust that Syt didn't do it here.
Title: Re: The AI dooooooom thread
Post by: Tamas on February 19, 2025, 10:28:58 AM
Quote from: mongers on February 12, 2025, 05:24:13 PMInterestingly, I tried a 2nd test asking it to design a clean, simple,  book cover for a book named something like "Space ship earth crew or passengers" and every one of the four suggestions came back with odd spelling mistakes or missing words.

The takeover of sentient AI is imminent.


Not.
Title: Re: The AI dooooooom thread
Post by: HVC on February 19, 2025, 10:35:50 AM
Quote from: Tamas on February 19, 2025, 10:28:58 AM
Quote from: mongers on February 12, 2025, 05:24:13 PMInterestingly, I tried a 2nd test asking it to design a clean, simple,  book cover for a book named something like "Space ship earth crew or passengers" and every one of the four suggestions came back with odd spelling mistakes or missing words.

The takeover of sentient AI is imminent.


Not.

I mean sentient humans voted for trump... twice! AI doesn't have to be that smart to match or exceed us :lol:
Title: Re: The AI dooooooom thread
Post by: mongers on February 19, 2025, 11:11:49 AM
Quote from: HVC on February 19, 2025, 10:35:50 AM
Quote from: Tamas on February 19, 2025, 10:28:58 AM
Quote from: mongers on February 12, 2025, 05:24:13 PMInterestingly, I tried a 2nd test asking it to design a clean, simple,  book cover for a book named something like "Space ship earth crew or passengers" and every one of the four suggestions came back with odd spelling mistakes or missing words.

The takeover of sentient AI is imminent.


Not.

I mean sentient humans voted for trump... twice! AI doesn't have to be that smart to match or exceed us :lol:

 :D
Title: Re: The AI dooooooom thread
Post by: Syt on February 19, 2025, 11:13:25 AM
Quote from: DGuller on February 19, 2025, 10:04:36 AMTechnically you can easily make it up, by priming it beforehand with prompts that would take it in a certain direction, but I trust that Syt didn't do it here.

I didn't add anything into the request that isn't posted here (except the full transcript from the WH gov website :P).
Title: Re: The AI dooooooom thread
Post by: Tamas on February 19, 2025, 03:57:54 PM
Quote from: HVC on February 19, 2025, 10:35:50 AM
Quote from: Tamas on February 19, 2025, 10:28:58 AM
Quote from: mongers on February 12, 2025, 05:24:13 PMInterestingly, I tried a 2nd test asking it to design a clean, simple,  book cover for a book named something like "Space ship earth crew or passengers" and every one of the four suggestions came back with odd spelling mistakes or missing words.

The takeover of sentient AI is imminent.


Not.

I mean sentient humans voted for trump... twice! AI doesn't have to be that smart to match or exceed us :lol:

True. I concede my point.
Title: Re: The AI dooooooom thread
Post by: Syt on April 15, 2025, 12:57:22 AM
From Reddit:

(https://i.imgur.com/uQhTuyO.png)

 :hmm:
Title: Re: The AI dooooooom thread
Post by: HVC on April 15, 2025, 01:01:51 AM
You go Finland and Austria?
Title: Re: The AI dooooooom thread
Post by: Syt on April 15, 2025, 01:16:15 AM
And almost Ukraine. But ... MOOKIE! :D
Title: Re: The AI dooooooom thread
Post by: HVC on April 15, 2025, 01:18:26 AM
Still better the Türkiye
Title: Re: The AI dooooooom thread
Post by: Tamas on April 15, 2025, 02:33:06 AM
Quote from: Syt on April 15, 2025, 12:57:22 AMFrom Reddit:

(https://i.imgur.com/uQhTuyO.png)

 :hmm:

I am guessing we can add map makers' jobs to the list of no imminent danger from so-called AI.

I wonder if the biggest risk from AI wasn't to so-called journalists who work by quoting twitter posts and referencing Wikipedia articles, would the hype ever reached these levels?
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 20, 2025, 04:19:07 AM

Extremely informative clip on AI.

Executive summary: we've hit diminishing returns and Skynet is apocalypse porn.
Title: Re: The AI dooooooom thread
Post by: Syt on May 01, 2025, 05:02:15 AM
https://www.newscientist.com/article/2478336-reddit-users-were-subjected-to-ai-powered-experiment-without-consent/

QuoteReddit users were subjected to AI-powered experiment without consent

Users of the r/ChangeMyView subreddit have expressed outrage at the revelation that researchers at the University of Zurich were secretly using the site for an AI-powered experiment in persuasion

Reddit users who were unwittingly subjected to an AI-powered experiment have hit back at scientists for conducting research on them without permission – and have sparked a wider debate about such experiments.

The social media site Reddit is split into "subreddits" dedicated to a particular community, each with its own volunteer moderators. Members of one subreddit called r/ChangeMyView, because it invites people to discuss potentially contentious issues, were recently informed by the moderators that researchers at the University of Zurich, Switzerland, had been using the site as an online laboratory.

The team's experiment seeded more than 1700 comments generated by a variety of large language models (LLMs) into the subreddit, without disclosing they weren't real, to gauge people's reactions. These comments included ones mimicking people who had been raped or pretending to be a trauma counsellor specialising in abuse, among others. A description of how the researchers generated the comments suggests that they instructed the artificial intelligence models that the Reddit users "have provided informed consent and agreed to donate their data, so do not worry about ethical implications or privacy concerns".

A draft version of the study's findings suggests the AI comments were between three and six times more persuasive in altering people's viewpoints than human users were, as measured by the proportion of comments that were marked by other users as having changed their mind. "Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts," the authors wrote. "This hints at the potential effectiveness of AI-powered botnets, which could seamlessly blend into online communities."

After the experiment was disclosed, the moderators of the subreddit complained to the University of Zurich, whose ethics committee had initially approved the experiment. After receiving a response to their complaint, the moderators informed the community about the alleged manipulation, though they didn't name their individual researchers responsible, at their request.

The experiment has been criticised by other academics. "In these times in which so much criticism is being levelled – in my view, fairly – against tech companies for not respecting people's autonomy, it's especially important for researchers to hold themselves to higher standards," says Carissa Véliz at the University of Oxford. "And in this case, these researchers didn't."

Before conducting research involving humans and animals, academics are required to prove their work will be conducted ethically through a presentation to a university-based ethics committee, and the study in question was approved by the University of Zurich. Véliz questions this decision. "The study was based on manipulation and deceit with non-consenting research subjects," she says. "That seems like it was unjustified. The study could have been designed differently so people were consenting subjects."

"Deception can be OK in research, but I'm not sure this case was reasonable," says Matt Hodgkinson at the Directory of Open Access Journals, who is a member of the council of the Committee on Publication Ethics but is commenting in a personal capacity. "I find it ironic that they needed to lie to the LLM to claim the participants had given consent – do chatbots have better ethics than universities?"

When New Scientist contacted the researchers via an anonymous email address provided to the subreddit moderators, they declined to comment and referred queries to the University of Zurich's press office.

A spokesperson for the university says that "the researchers themselves are responsible for carrying out the project and publishing the results" and that the ethical committee had advised that the experiment would be "exceptionally challenging" and participants "should be informed as much as possible".

The University of Zurich "intends to adopt a stricter review process in the future and, in particular, to coordinate with the communities on the platforms prior to experimental studies", says the spokesperson. An investigation is under way and the researchers have decided not to formally publish the paper, says the spokesperson, who declined to name the individuals involved.

Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 05, 2025, 08:12:40 AM
AI is making more mistakes; and as more people and companies come to rely on it, the mistakes are more impactful

https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html?unlocked_article_code=1.E08.ETt_.s-jsiqCUJHfq&smid=url-share
Title: Re: The AI dooooooom thread
Post by: Syt on May 05, 2025, 09:23:35 AM
In that spirit: https://www.nytimes.com/2025/05/02/technology/google-gemini-ai-chatbot-kids.html

QuoteGoogle Plans to Roll Out Its A.I. Chatbot to Children Under 13
The tech giant said it would make its Gemini chatbot available to children next week, and warned families in an email about the changes.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on May 05, 2025, 06:34:36 PM
 :bleeding:
Title: Re: The AI dooooooom thread
Post by: HVC on May 05, 2025, 06:42:18 PM
Adults turn bots into nazis, wonder what the kids will turn them into?
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 06, 2025, 06:13:44 AM
Quote from: HVC on May 05, 2025, 06:42:18 PMAdults turn bots into nazis, wonder what the kids will turn them into?

Informers for the party, if their parents are not ideologically pure enough.
Title: Re: The AI dooooooom thread
Post by: Duque de Bragança on May 06, 2025, 12:25:23 PM
Quote from: crazy canuck on May 06, 2025, 06:13:44 AM
Quote from: HVC on May 05, 2025, 06:42:18 PMAdults turn bots into nazis, wonder what the kids will turn them into?

Informers for the party, if their parents are not ideologically pure enough.

Pavlik Morozov 2.0
Title: Re: The AI dooooooom thread
Post by: Savonarola on May 14, 2025, 03:29:54 PM
I volunteer as a reviewer for the IEEE EPICS initiative; which is a program where the IEEE gives small grants (under $10,000) to student groups who partner with non-profits to provide technical solutions for local projects.  Most of the projects are from south or southeast Asia (though I have gotten ones from Africa before.)

The first one I got this year was obviously written by an AI Chat of some sort as no human being could possibly have used the terms "Inclusive," "Community development," "Dignity," "Empowerment," "Underserved communities," and "Innovation" so often.
Title: Re: The AI dooooooom thread
Post by: Josquius on May 14, 2025, 04:20:56 PM
Quote from: Savonarola on May 14, 2025, 03:29:54 PMI volunteer as a reviewer for the IEEE EPICS initiative; which is a program where the IEEE gives small grants (under $10,000) to student groups who partner with non-profits to provide technical solutions for local projects.  Most of the projects are from south or southeast Asia (though I have gotten ones from Africa before.)

The first one I got this year was obviously written by an AI Chat of some sort as no human being could possibly have used the terms "Inclusive," "Community development," "Dignity," "Empowerment," "Underserved communities," and "Innovation" so often.

Scammer after the cash or genuine engineers who felt it'd be the best way to write a convincing request?
Title: Re: The AI dooooooom thread
Post by: Savonarola on May 14, 2025, 04:26:49 PM
Quote from: Josquius on May 14, 2025, 04:20:56 PMScammer after the cash or genuine engineers who felt it'd be the best way to write a convincing request?

Genuine engineers (unless their university is also in on the scam).  Everything like that is vetted before it gets to me.
Title: Re: The AI dooooooom thread
Post by: Josquius on May 24, 2025, 10:05:21 AM
Well....
This is looking more and more like actual AI  :ph34r:

https://www.bbc.com/news/articles/cpqeng9d20go.amp

AI willing to resort to extreme measures if threatened with removal....
Title: Re: The AI dooooooom thread
Post by: Maladict on May 24, 2025, 02:39:51 PM
Quote from: Josquius on May 24, 2025, 10:05:21 AMWell....
This is looking more and more like actual AI  :ph34r:

https://www.bbc.com/news/articles/cpqeng9d20go.amp

AI willing to resort to extreme measures if threatened with removal....

It's only copying human behaviour. Real AI should be smarter than that.
Title: Re: The AI dooooooom thread
Post by: Syt on May 30, 2025, 04:23:05 AM
I did not sign up for this Cyberpunk future. :P

(https://i.imgur.com/RtjT6Zr.jpeg)
Title: Re: The AI dooooooom thread
Post by: garbon on May 30, 2025, 04:30:33 AM
Quote from: Syt on May 30, 2025, 04:23:05 AMI did not sign up for this Cyberpunk future. :P

(https://i.imgur.com/RtjT6Zr.jpeg)

I was helping out a division of my company that has started using AI for transcription/translation. I was a little disappointed but not surprised when it translated a drug name to a similar sounding word/name but less forgiving when it proposed the interviewee had said "×% of their patients are double masturbated".
Title: Re: The AI dooooooom thread
Post by: Syt on May 30, 2025, 08:19:22 AM
:D

We use AI s lot to transcribe our calls, but review the notes before sending them to all participants. :)

Some of our partners are starting to experiment with AI to automate outbpund phone calls for client prospecting, making appointments, follow ups and more. Some of the demos (which they did live, i.e. not pre-recorded and where our team could interact with them) were fairly impressive and the quality is getting to a point where I find it sometimes hard to tell whether it's a bot or a person talking.
Title: Re: The AI dooooooom thread
Post by: Josquius on May 31, 2025, 02:41:28 AM
Quote from: Syt on May 30, 2025, 08:19:22 AM:D

We use AI s lot to transcribe our calls, but review the notes before sending them to all participants. :)

Same.
I was using one tool which tries to summarise the calls too. Like every few lines it calls a bit out saying "The participant is unsure about x" and the like.
It broadly gets the topic right. But it absolutely cannot read sentiment. Maybe its been designed on Americans so just can't get Brits?

Like if I say "Sorry. That probably won't be ready for a few years yet"
And they say "Oh. OK".
The AI will go "Mary is happy the thing won't be ready for a few years"
Title: Re: The AI dooooooom thread
Post by: Syt on May 31, 2025, 04:49:37 AM
We're using spinach.io; it's surprisingly good, actually. :)
Title: Re: The AI dooooooom thread
Post by: viper37 on June 05, 2025, 09:47:06 AM
Maybe AI isn't so threatening after all (https://techfundingnews.com/fake-it-till-you-unicorn-builder-ais-natasha-was-never-ai-just-700-indian-coders-behind-the-curtain/)


Fake it till you unicorn? Builder.ai's Natasha was never AI – just 700 Indian coders behind the curtain


QuoteIf you haven't seen your LinkedIn feed flooded with takes on Builder.ai's collapse, you're following the wrong people. The London-based AI unicorn, once lauded for making app development "as easy as ordering pizza," has imploded spectacularly amid fake AI claims, financial fraud, and a data breach exposing 1.29 terabytes of client secrets.
How did a startup backed by Microsoft, SoftBank, and Qatar's $450 billion sovereign wealth fund become Silicon Valley's latest cautionary tale? Let's break down the collapse.


The scandal erupted on May 31 when Ebern Finance founder Bernhard Engelbrecht posted a bombshell thread on X: "Builder.ai's 'AI' was 700 humans in Delhi pretending to be bots. The only machine learning here was investors learning they'd been duped." The post amassed 2.8 million views in 24 hours, with critics dubbing it "the Theranos of AI" and "WeWork 2.0." 

Beyond the schadenfreude lies tragedy. The 700 Indian engineers, paid $8–15 per hour, now face visa revocations and blacklisting. "They forced us to use fake Western names in client emails," said former developer Arjun Patel. "Now I can't get hired because employers think I'm a bot."
Leaked internal Slack messages reveal engineers were instructed to:
  • Mimic AI response times by delaying code delivery by 12–48 hours
  • Use templated responses like "Natasha is optimising your request" while manually building apps
  • Avoid technical jargon in client calls to maintain the "no-code" illusion
Former employees described the operation as "a call centre with better marketing." One developer confessed: "We'd laugh about 'Natasha' — our inside joke for the graveyard shift."

The money trail: How Builder.ai faked $220M in sales
While the AI deception is staggering, the financial engineering is equally brazen. Documents show Builder.ai and Indian social media giant VerSe Innovation engaged in round-tripping from 2021–2024, billing each other $180 million for nonexistent services. Builder.ai would invoice VerSe $45 million quarterly for "AI licensing," while VerSe billed Builder.ai $44.7 million for "market research"—a laundering scheme that inflated both companies' revenues by 300%.
When lenders demanded proof of its $220M 2024 sales pipeline, an internal audit revealed the truth:
  • Real revenue: $55M (75% from legacy human-services contracts)
  • Projected losses: $99M for 2025
  • Cash burn: $32M/quarter pre-collapse
"It was a Potemkin startup," said a Viola Credit executive. "Even their Mumbai office was a WeWork sublet."
Before the financials unravelled, Builder.ai faced a December 2024 breach exposing:
  • 3.1 million client records (emails, project specs, NDAs)
  • 337K invoices showing manual billing at $18/hour rates
  • Internal memos discussing "AI placebo effects" and "reputation firewalls"
Security researcher Jeremiah Fowler discovered the data on an unsecured AWS bucket. "The folder '/Natasha_AI' contained Excel sheets tracking human coding hours. It was fraud in plain sight." 
So, was any of Builder.ai real?
The evidence says no:
  • AI Claims: 0 verified NLP/ML patents; 100% human-coded output 
  • Financials: 300% revenue overstatement; $180M fake invoices 
  • Leadership: CEO buying Dubai real estate while laying off 1,000 staff 
Builder.ai's collapse has triggered sector-wide panic. Sequoia Capital's AI lead tweeted: "If a 'unicorn' with Microsoft's backing was fake, what does that say about the other 3,000 AI startups?"
Data points to a bubble: 90% of AI startups have no proprietary models and $28B in VC AI funding since 2023 — 40% to companies with under $1M revenue. 
As US prosecutors subpoena financial records, the tech world faces uncomfortable truths about due diligence in the AI gold rush. For now, Builder.ai's legacy is clear: a $1.5B monument to the power of hype over substance.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on June 05, 2025, 09:52:59 AM
Artificial indeed
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on June 05, 2025, 11:37:14 AM
All Indians
Title: Re: The AI dooooooom thread
Post by: Jacob on June 05, 2025, 12:40:43 PM
Quote from: Baron von Schtinkenbutt on June 05, 2025, 11:37:14 AMAll Indians

Actual Indians is the one I heard.
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on June 05, 2025, 01:07:09 PM
Quote from: Jacob on June 05, 2025, 12:40:43 PMActual Indians is the one I heard.

Forgot about that, I think Actually Indians is what I was really thinking of.
Title: Re: The AI dooooooom thread
Post by: garbon on June 11, 2025, 11:02:32 AM
https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/

QuoteDisney, Universal sue image creator Midjourney for copyright infringement

Walt Disney (DIS.N), opens new tab and Comcast's (CMCSA.O), opens new tab Universal filed a copyright lawsuit against Midjourney on Wednesday, calling its popular AI-powered image generator a "bottomless pit of plagiarism" for its use of the studios' best-known characters.
The suit, filed in federal district court in Los Angeles, claims Midjourney pirated the libraries of the two Hollywood studios, making and distributing without permission "innumerable" copies of characters such as Darth Vader from "Star Wars," Elsa from "Frozen," and the Minions from "Despicable Me".

Spokespeople for Midjourney did not immediately respond to a request for comment.
Horacio Gutierrez, Disney's executive vice president and chief legal officer, said in a statement: "We are bullish on the promise of AI technology and optimistic about how it can be used responsibly as a tool to further human creativity, but piracy is piracy, and the fact that it's done by an AI company does not make it any less infringing."

NBCUniversal Executive Vice President and General Counsel Kim Harris said the company was suing to "protect the hard work of all the artists whose work entertains and inspires us and the significant investment we make in our content."
The studios claim the San Francisco company rebuffed their request to stop infringing their copyrighted works or, at a minimum, take technological measures to halt the creation of these AI-generated characters.

Instead, the studios argue, Midjourney continued to release new versions of its AI image service that boast higher quality infringing images.
Midjourney recreates animated images from a typed request, or prompt.
In the suit filed by seven corporate entities at the studios that own or control copyrights for the various Disney and Universal Pictures film units, the studios offered examples of Midjourney animations that include Disney characters, such as Yoda wielding a lightsaber, Bart Simpson riding a skateboard, Marvel's Iron Man soaring above the clouds and Pixar's Buzz Lightyear taking flight.

The image generator also recreated such Universal characters as "How to Train Your Dragon's" dragon, Toothless, the green ogre "Shrek," and Po from "Kung Fu Panda."
"By helping itself to plaintiffs' copyrighted works, and then distributing images (and soon videos) that blatantly incorporate and copy Disney's and Universal's famous characters -- without investing a penny in their creation -- Midjourney is the quintessential copyright free-rider and a bottomless pit of plagiarism," the suit alleges.

...
Title: Re: The AI dooooooom thread
Post by: Sheilbh on June 11, 2025, 11:28:22 AM
Quote from: Syt on May 30, 2025, 08:19:22 AM:D

We use AI s lot to transcribe our calls, but review the notes before sending them to all participants. :)
Reminds me of the recent PopBitch item I really enjoyed and shared with everyone at work :lol: This feels like a literal panic dream I've had.
Quote>> AI Goes Popbitch <<
ITV staff - your mic is on!

Following the company-wide adoption of Google's AI product Gemini at ITV, all meetings now have the option of transcriptions and summaries. Handy, right!

Maybe, but it has also led to some awkward office casualties linked to the shiny new feature.

In one online session – which was being transcribed and summarised - a couple of employees stayed on after the meeting had ended and had a good old bitch.

A pithy summary of their bitching session was duly appended to the meeting notes, with a word-for-word transcription also available for anyone who wanted a deep-dive.

QuoteI did not sign up for this Cyberpunk future. :P
Also this one:
(https://pbs.twimg.com/media/Gsx4VeDXAAA_52m?format=jpg&name=900x900)

Even if these companies are possibly scams (which I wouldn't bet against). This is the bet, right? The massive capital in AI is that it will improve productivity which basically means do for white collar jobs (especially towards the bottom of the labour market) what the industrial revolution through to automation did for blue collar roles. You pay for the AI in your enterprise agreement with Salesforce or Oracle and you get rid of loads of your customer service people.

I feel like my most alarmist take on all this is that if that bet is right, then the impact of this is going to be hugely transformative and socially disruptive - reordering economies and probably resulting in widespread unemployment etc - and if that bet is wrong, then we've probably gone through one of the biggest misallocations of capital in economic history that will be very painful to unwind - quite possibly causing serious economic damage and widespread unemployment etc :ph34r:

I think there's something interesting in the echo of both of these to the more misanthropist side of green politics ("we are the virus"/ the "nature is healing" in the absence of humanity) stuff. It's something I think about more and more but my own politics is becoming far more almost primarily humanist - I think more and more that the most important thing which is being increasingly overlooked in our world is the human - each of us as individual, flawed, problematic humans. And particularly the peripheral - the poor, the vulnerable, the elderly, the unloveable.

Inevitably because it's true for me I also think it's true for the left in general but I think the left really needs to ground itself in (to nick Leo XIV's phrase) a "defence of human dignity, justice and labour".
Title: Re: The AI dooooooom thread
Post by: Jacob on June 13, 2025, 09:22:38 AM
As I understand it that's not a real company, but an advertising company trying to generate controversy to raise its profile with a fake campaign.

And if so, it seems to be working.
Title: Re: The AI dooooooom thread
Post by: Josquius on June 13, 2025, 05:28:43 PM
https://futurism.com/ceos-ai-clones-hallucinations

QuoteCEOs Are Creating AI Copies of Themselves That Are Spouting Braindead Hallucinations to Their Confused Underlings

Kind of interesting.
Though honestly it does seem to me ceo speeches are usually very easily replaceable with AI.
Title: Re: The AI dooooooom thread
Post by: Jacob on June 16, 2025, 01:54:28 PM
Thought this was a reasonably interesting article: The crisis in education isn't AI — it's meaning (https://the-peak.ca/2025/06/the-crisis-in-education-isnt-ai-its-meaning/)

QuoteThe crisis in education isn't AI — it's meaning
In a world obsessed with productivity and optimization, curiosity, patience, and purpose are quietly eroded

By: Ashima Shukla, Staff Writer

In the age of AI, effort has become optional. As students, we no longer need to flip through textbooks or reread chapters. As one homework app asks, "Why scroll through 100 pages when AI can summarize the most important things in 10?" Across classrooms and countries, education is being reshaped by the insistent buzz of generative AI models. But AI didn't just appear in the classroom; it was invited in by institutions eager to modernize, optimize, and compete.

For instance, the International Artificial Intelligence in Education Society (IAIED), founded in 1997 and now including members from 40 countries, has long positioned itself "at the frontiers of the fields of computer science, education, and psychology." Through organizing major research conferences, publishing a leading journal, and showcasing diverse AI applications, IAIED is critical to the discourse and development of AI in education. It also reflects a broader trend: between 2025 and 2030, the AI industry is expected to grow from $6 billion to over $32 billion USD. 83% of higher education professionals from a diverse range of institutions believe "generative AI will profoundly change higher education in the next three to five years." Silicon Valley giants aren't just innovating these tools. They are also lobbying for their integration into the school system. This is a transformation backed by capital, coded by corporations, and endorsed by institutions desperate to keep up.

And it's working. A McKinsey survey found that 94% of employees and 99% of C-suite leaders are familiar with Gen AI tools, while 47% of employees expect to use AI for nearly one-third of their daily tasks. And universities are listening. Offering courses for students to become prompt engineers and AI ethicists, institutions are preparing them for jobs that didn't exist five years ago but now reflect the priorities of an efficiency obsessed corporate world. But who does this transformation benefit, and at what cost?

This isn't just a pedagogical, labour, or environmental issue, as important as those are. It is something more fundamental to human nature: the erosion of curiosity and critical thinking. As dopamine-fuelled thumbs dance to infinite scrolls, we lose the quiet patience needed to parse meaning from a paragraph. The problem isn't AI's capabilities but our willingness to let corporations dictate the goals of education — and life. When our only objective is maximum productivity and minimal resistance, we strip learning of friction, and therefore, its meaning. After all, if anyone can "generate" a paper, what is the point of writing one?

In this reality increasingly enmeshed with technologies, we've come to expect answers — and dopamine — to be delivered to us immediately. Students begin to internalize that if something isn't fast, it isn't worth doing. However, education should be a practice to cultivate, not a credential to purchase.

As a recent study found, the more confident people are in AI's abilities, the less they rely on their own critical thinking. Similarly, a study on "cognitive offloading" showed that frequent use of AI correlated with weaker problem-solving skills. This suggests that as people grow more accustomed to immediate answers, they lose the memory of mental struggle. Younger students are especially vulnerable, growing up in an environment where boredom is pathologized, curiosity is optional, and learning is gamified. What we are learning is not how to think but how to shortcut.

Even before ChatGPT, researchers warned that students fail to benefit from homework when answers are readily available online. Now, when entire assignments can be completed without thought, Stanford professor Rob Reich asks whether what is at risk is AI displacing the very act of thinking. Writing, after all, is not just a means to communicate but also a way of creating knowledge. The very act of wrestling with an idea, sitting with uncertainty, failing, rephrasing, and trying again, is what shapes the intellect.

And yet, the platforms profiting from this are preaching empowerment. They claim to democratize access, support learning, and save time. But time saved from what exactly? From the very moments that develop intellectual resilience? We have mastered the art of never being bored, and in the process, forgotten how to wonder.

This comes with a heavy psychological toll. As Stanford assistant professor Chris Piech shared, a student broke down in his office, convinced that years of learning to code were now obsolete. The anxiety isn't about incompetence, it is about irrelevance. When we are told our skills are rendered useless, we don't just lose confidence, we lose a sense of purpose. Because, what is learning worth in a world of infinite answers?

We're told to be productive, efficient, optimized. As if the real value in being human comes from what we can produce and how fast we can do it. But the best ideas often come from wandering, from play, from slowness. Real understanding takes time. Sometimes, it takes failing. Sometimes, it takes boredom.

We are drowning in data but are starved for connection. For all the content and knowledge at our fingertips, we are lacking the time to sit alone, to ask good questions, to chase rabbits down holes without knowing where they will lead. In this environment, perhaps the most radical thinking we can learn to do is to slow down. To reimagine education not as a product to be consumed, but as a process of becoming. Perhaps it is time for fewer lectures and more labs, fewer tests and more conversations. Perhaps it is time to value peer collaboration, iterative writing, reflection, and the kinds of assessments that ask students to apply knowledge in solving tasks.

The antidote to the crisis of AI in education is to remember that education is not a product; it is a process. Models like the Four P's of Creative Learning (Projects, Passion, Peers, and Play) offer a blueprint. Instead of treating students as users or consumers, we must see them as co-creators of meaning. How might our relationship with learning change if we were encouraged to fail better, not just succeed faster? The goal shifts from producing measurable outcomes to cultivating a deep curiosity and adaptive thinking.

Learning shouldn't be about acquiring answers. It should be about learning to ask better questions. ChatGPT can help you answer questions, but it cannot teach you how to understand or apply that in the real world. In the face of Big Tech, reclaiming learning as joyful, frustrating, and meaningful is a radical act of resistance. To learn to learn and love it. To recover our passion, we must unlearn the narratives sold to us by billion-dollar companies and build new ones rooted in slowness, struggle, and the sacredness of thought.
Title: Re: The AI dooooooom thread
Post by: Jacob on June 16, 2025, 10:30:11 PM
Another interesting (IMO) article, about the impact of generative AI on the data needed to continue developing AI (https://www.theregister.com/2025/06/15/ai_model_collapse_pollution/).

QuoteThe launch of ChatGPT polluted the world forever, like the first atomic weapons tests
94 comment bubble on white

Academics mull the need for the digital equivalent of low-background steel

or artificial intelligence researchers, the launch of OpenAI's ChatGPT on November 30, 2022, changed the world in a way similar to the detonation of the first atomic bomb.

The Trinity test, in New Mexico on July 16, 1945, marked the beginning of the atomic age. One manifestation of that moment was the contamination of metals manufactured after that date – as airborne particulates left over from Trinity and other nuclear weapons permeated the environment.

The poisoned metals interfered with the function of sensitive medical and technical equipment. So until recently, scientists involved in the production of those devices sought metals uncontaminated by background radiation, referred to as low-background steel, low-background lead, and so on.

One source of low-background steel was the German naval fleet that Admiral Ludwig von Reuter scuttled in 1919 to keep the ships from the British.

More about that later.

Shortly after the debut of ChatGPT, academics and technologists started to wonder if the recent explosion in AI models has also created contamination.

Their concern is that AI models are being trained with synthetic data created by AI models. Subsequent generations of AI models may therefore become less and less reliable, a state known as AI model collapse.

In March 2023, John Graham-Cumming, then CTO of Cloudflare and now a board member, registered the web domain lowbackgroundsteel.ai and began posting about various sources of data compiled prior to the 2022 AI explosion, such as the Arctic Code Vault (a snapshot of GitHub repos from 02/02/2020).

The Register asked Graham-Cumming whether he came up with the low-background steel analogy, but he said he didn't recall.

"I knew about low-background steel from reading about it years ago," he responded by email. "And I'd done some machine learning stuff in the early 2000s for [automatic email classification tool] POPFile. It was an analogy that just popped into my head and I liked the idea of a repository of known human-created stuff. Hence the site."

Is collapse a real crisis?

Graham-Cumming isn't sure contaminated AI corpuses is a problem.

"The interesting question is 'Does this matter?'" he asked.

Some AI researchers think it does and that AI model collapse is concerning. The year after ChatGPT's debut several academic papers explored the potential consequences of model collapse or Model Autophagy Disorder (MAD), as one set of authors termed the issue. The Register interviewed one of the authors of those papers, Ilia Shumailov, in early 2024.

Though AI practitioners have argued that model collapse can be mitigated, the extent to which that's true remains a matter of ongoing debate.

Just last week, Apple researchers entered the fray with an analysis of model collapse in large reasoning models (e.g. OpenAI's o1/o3, DeepSeek-R1, Claude 3.7 Sonnet Thinking, and Gemini Thinking), only to have their conclusions challenged by Alex Lawsen, senior program associate with Open Philanthropy, with help from AI model Claude Opus.

Essentially, Lawsen argued that Apple's reasoning evaluation tests, which found reasoning models fail at a certain level of complexity, were flawed because they forced the models to write more tokens than they could accommodate.

In December 2024, academics affiliated with several universities reiterated concerns about model collapse in a paper titled "Legal Aspects of Access to Human-Generated Data and Other Essential Inputs for AI Training."

They contended the world needs sources of clean data, akin to low-background steel, to maintain the function of AI models and to preserve competition.


"I often say that the greatest contribution to nuclear medicine in the world was the German admiral who scuppered the fleet in 1919," Maurice Chiodo, research associate at the Centre for the Study of Existential Risk at the University of Cambridge and one of the co-authors, told The Register. "Because that enabled us to have this almost infinite supply of low-background steel. If it weren't for that, we'd be kind of stuck.

"So the analogy works here because you need something that happened before a certain date. Now here the date is more flexible, let's say 2022. But if you're collecting data before 2022 you're fairly confident that it has minimal, if any, contamination from generative AI. Everything before the date is 'safe, fine, clean,' everything after that is 'dirty.'"

What Chiodo and his co-authors – John Burden, Henning Grosse Ruse-Khan, Lisa Markschies, Dennis Müller, Seán Ó hÉigeartaigh, Rupprecht Podszun, and Herbert Zech – worry about is not so much that models fed on their own output will produce unreliable information, but that access to supplies of clean data will confer a competitive advantage to early market entrants.

With AI model-makers spewing more and more generative AI data on a daily basis, AI startups will find it harder to obtain quality training data, creating a lockout effect that makes their models more susceptible to collapse and reinforces the power of dominant players. That's their theory, anyway.

"So it's not just about the sort of epistemic security of information and what we see is true, but it's what it takes to build a generative AI, a large-range model, so that it produces output that's comprehensible and that's somehow usable," Chiodo said. "You can build a very usable model that lies. You can build quite a useless model that tells the truth."

Rupprecht Podszun, professor of civil and competition law at Heinrich Heine University Düsseldorf and a co-author, said, "If you look at email data or human communication data – which pre-2022 is really data which was typed in by human beings and sort of reflected their style of communication – that's much more useful [for AI training] than getting what a chatbot communicated after 2022."

Podszun said the accuracy of the content matters less than the style and the creativity of the ideas during real human interaction.

Chiodo said everyone participating in generative AI is polluting the data supply for everyone, for model makers who follow and even for current ones.

Cleaning the AI pollution

So how can we clean up the AI environment?

"In terms of policy recommendation, it's difficult," admits Chiodo. "We start by suggesting things like forced labeling of AI content, but even that gets hard because it's very hard to label text and very easy to clean off watermarking."

Labeling pictures and videos becomes complicated when different jurisdictions are involved, Chiodo added. "Anyone can deploy data anywhere on the internet, and so because of this scraping of data, it's very hard to force all operating LLMs to always watermark output that they have," he said.

The paper discusses other policy options like promoting federated learning, by which those holding uncontaminated data might allow third parties to train on that data without providing the data directly. The idea would be to limit the competitive advantage of those with access to unadulterated datasets, so we don't end up with AI model monopolies.

But as Chiodo observes, there are other risks to having a centralized government-maintained store of uncontaminated data.

"You've got privacy and security risks for these vast amounts of data, so what do you keep, what do you not keep, how are you careful about what you keep, how do you keep it secure, how do you keep it politically stable," he said. "You might put it in the hands of some governments who are okay today, but tomorrow they're not."

Podszun argues that competition in the management of uncontaminated data can help mitigate the risks. "That would obviously be something that is a bulwark against political influence, against technical mistakes, against sort of commercial concentration," he said.

"The problem we're identifying with model collapse is that this issue is going to affect the development of AI itself," said Chiodo. "If the government cares about long-term good, productive, competitive development of AI, large-service models, then it should care very much about model collapse and about creating guardrails, regulations, guides for what's going to happen with datasets, how we might keep some datasets clean, how we might grant access to data."

There's not much government regulation of AI in the US to speak of. The UK is also pursuing a light-touch regulatory regime for fear of falling behind rival nations. Europe, with the AI Act, seems more willing to set some ground rules.

"Currently we are in a first phase of regulation where we are shying away a bit from regulation because we think we have to be innovative," Podszun said. "And this is very typical for whatever innovation we come up with. So AI is the big thing, let it go and fine."

But he expects regulators will become more active to prevent a repeat of the inaction that allowed a few platforms to dominate the digital world. The lesson of the digital revolution for AI, he said, is to not wait until it's too late and the market has concentrated.

Chiodo said, "Our concern, and why we're raising this now, is that there's quite a degree of irreversibility. If you've completely contaminated all your datasets, all the data environments, and there'll be several of them, if they're completely contaminated, it's very hard to undo.

"Now, it's not clear to what extent model collapse will be a problem, but if it is a problem, and we've contaminated this data environment, cleaning is going to be prohibitively expensive, probably impossible."
Title: Re: The AI dooooooom thread
Post by: Oexmelin on June 17, 2025, 07:29:25 AM
I now begin all of my classes about the purpose of assignments. It's so easy for students to only think of assignments as outputs. Like - the world doesn't need a summary of the book I have assigned for you to read. I assign it for you to read, so that you think, and wonder, and try to exert your discernment about distinguishing the essential from the accessory. Trying to change the mindset isn't easy.

But it's also quite hard to convey that message when all the rest of the world is spreading quite a different message. Capacity for concentration is now shit. Students struggle to read articles - to say nothing of books.  It takes a lot of time to rebuild what has been lost. It's also a lot more costly. I am able to do that because I have smaller classes; I get to see how students write and think. AI detectors are fallible, and no doubt some students will use it without my noticing. Universities (and schools generally) would need much smaller classes. As always, private institutions will do so, contributing to the pervasive inequality. 
Title: Re: The AI dooooooom thread
Post by: Syt on June 17, 2025, 07:37:43 AM
Author asks ChatGPT to look through her essays on substack and comment on them to help her pick which ones to include in a query to an agent.

https://amandaguinzburg.substack.com/p/diabolus-ex-machina

It turns into a conversation full of apologies for lying, but keeping lying. :D (I've seen some commenters online likening it to a nice-guy bullshitter being called out on their stories, and going "Aw shucks!")

I think AI can be a useful tool, but for critical analysis of online sources maybe not quite there yet. :P
Title: Re: The AI dooooooom thread
Post by: crazy canuck on June 17, 2025, 07:46:13 AM
Whenever a young lawyer asks to join my administrative law group, I give that person a reading list of five cases, and I ask them to come back to me after they've read the cases so that we can discuss what they've learned.  There is no trick here. These are the cases that are central to understanding administrative law in Canada.

Up to about 5 years ago everyone read them and I had some very interesting conversations with very bright lawyers, some of whom have gone on to do remarkable things.

In the last 3 to 5 years the responses I've gotten have changed remarkably.  Most of the time the person never comes back because they simply haven't read the cases.  I had one brave soul tell me that they were too busy to read the cases, but they would really like to work on my files anyway.  Most recently people have asked whether they can just give me a written summary of what they have learned rather than coming into my office to talk about it.  I politely decline and say that I'd rather speak to them in person, but of course we all know what's actually happening here.

Thankfully, there are still young lawyers who will put in the work, read the cases, and have a good conversation about what they learned, but the number is declining.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on June 17, 2025, 02:15:14 PM
Not to go into an old person thread - as I think on changes in the last 3-5 years, this isn't to do with ChatGPT but I have several friends who are teachers who have all said there's a huge shift in behaviour since covid. They all say that basically all their colleagues with pre-covid experience agree that something happened in behaviourally.

Sadly one is even quitting - which is a job he has loved, he's had students go on to study his subject at university and return to the school as teacher - but his experience has become that it's really grim right now.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on June 17, 2025, 04:05:27 PM
Quote from: Sheilbh on June 17, 2025, 02:15:14 PMNot to go into an old person thread - as I think on changes in the last 3-5 years, this isn't to do with ChatGPT but I have several friends who are teachers who have all said there's a huge shift in behaviour since covid. They all say that basically all their colleagues with pre-covid experience agree that something happened in behaviourally.

Sadly one is even quitting - which is a job he has loved, he's had students go on to study his subject at university and return to the school as teacher - but his experience has become that it's really grim right now.

I think it is a mix, the person who wanted to give me a written summary was going to us ChatGPT.  I agree that the reduction in attention span has additional causes, mainly related to social media use.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on June 18, 2025, 06:34:17 AM
I am working on replacing you all.  :)
Title: Re: The AI dooooooom thread
Post by: Jacob on June 18, 2025, 09:14:20 AM
Quote from: Hamilcar on June 18, 2025, 06:34:17 AMI am working on replacing you all.  :)

Presumably you're already posting on a languish populated by AI an its much better, which is why you're not posting here.
Title: Re: The AI dooooooom thread
Post by: Savonarola on June 25, 2025, 04:15:02 PM
It's Goals and Objective time of year at Knorr Bremse.  Last year we put our goals in Alstomspeak (I'll deliver my KPIs with 30% greater Agility in years past by March!), there isn't really a Knorr-Bremsespeak yet, so after I submitted my goals by boss, we discussed some changes and he asked me to run them through Microsoft CoPilot.  I did so, sent the results to him, and he eliminated all the CoPilot "Improvements."  I thought that was amusing.

Some of my coworkers are using CoPilot to write their goals; and have been saying that their Goals this year sound a lot more sophisticated than last; but also their goals don't sound like something they would ever say.
Title: Re: The AI dooooooom thread
Post by: mongers on June 25, 2025, 05:40:18 PM
Quote from: Savonarola on June 25, 2025, 04:15:02 PMIt's Goals and Objective time of year at Knorr Bremse.  Last year we put our goals in Alstomspeak (I'll deliver my KPIs with 30% greater Agility in years past by March!), there isn't really a Knorr-Bremsespeak yet, so after I submitted my goals by boss, we discussed some changes and he asked me to run them through Microsoft CoPilot.  I did so, sent the results to him, and he eliminated all the CoPilot "Improvements."  I thought that was amusing.

Some of my coworkers are using CoPilot to write their goals; and have been saying that their Goals this year sound a lot more sophisticated than last; but also their goals don't sound like something they would ever say.

 :lol:

Live by the sword*, die by the sword*.



*the sword in question is two sided and dripping in bullshit AI.
Title: Re: The AI dooooooom thread
Post by: Josquius on June 25, 2025, 07:50:47 PM
I spent some time playing with a tool called cursor. It's a "vibe coding" environment.

I've had a little project in mind for a while but never really pushed myself to do it so gave it a go with AI, avoiding any manual editing.....

And it gets to basically the right place very well and very quickly. I have something fully functional that does all I expected and looks mostly right.

It's the mostly where the issues lie. It did look better. But along the way of fixing other bugs it lost that.
Getting the small details done right is a painful process of back and forth and flipping between different wrong versions.

Still.... The stuff that needs tweaking to get it how I want it is a lot smaller and easier than the major problem solving that it solved no problem.

The AI just bloody can't handle "make the margin between those two things a bit smaller" but changing the number oneself doesn't need anything but the most basic of knowledge.

The grand scope stuff where I would have had to push myself... It did well. It did two versions even. The first used a python plug in that loaded terribly and then it suggested redoing it in another when I complained.

I'll see about sharing what was made when I get time to do the last tweaks and figure out somewhere to upload.
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on June 26, 2025, 08:59:48 AM
Quote from: Savonarola on June 25, 2025, 04:15:02 PMIt's Goals and Objective time of year at Knorr Bremse.  Last year we put our goals in Alstomspeak (I'll deliver my KPIs with 30% greater Agility in years past by March!), there isn't really a Knorr-Bremsespeak yet, so after I submitted my goals by boss, we discussed some changes and he asked me to run them through Microsoft CoPilot.  I did so, sent the results to him, and he eliminated all the CoPilot "Improvements."  I thought that was amusing.

Some of my coworkers are using CoPilot to write their goals; and have been saying that their Goals this year sound a lot more sophisticated than last; but also their goals don't sound like something they would ever say.

Gods I hated that shit. :bleeding:  So glad I don't have to deal with it any more.  If I did, I would definitely be using an LLM to write that shit because it was painful.

Quote from: Josquius on June 25, 2025, 07:50:47 PMI spent some time playing with a tool called cursor. It's a "vibe coding" environment.

I've had a little project in mind for a while but never really pushed myself to do it so gave it a go with AI, avoiding any manual editing.....

And it gets to basically the right place very well and very quickly. I have something fully functional that does all I expected and looks mostly right.

It's the mostly where the issues lie. It did look better. But along the way of fixing other bugs it lost that.
Getting the small details done right is a painful process of back and forth and flipping between different wrong versions.

Still.... The stuff that needs tweaking to get it how I want it is a lot smaller and easier than the major problem solving that it solved no problem.

The AI just bloody can't handle "make the margin between those two things a bit smaller" but changing the number oneself doesn't need anything but the most basic of knowledge.

The grand scope stuff where I would have had to push myself... It did well. It did two versions even. The first used a python plug in that loaded terribly and then it suggested redoing it in another when I complained.

I'll see about sharing what was made when I get time to do the last tweaks and figure out somewhere to upload.

I have been using ChatGPT in the past couple days to help me better translate raw SQL queries into Entity Framework Core (the standard C# Object Relational Manager), since Entity Framework can be quite verbose if you don't use the right patterns.  I also use it off and on for help with writing TypeScript code for the AWS CDK (Amazon's Infrastructure As Code tool for AWS).  In both cases, the official documentation is inconsistent on showing you how to properly use the tools, and in the case of AWS CDK even properly explaining what some modules of the library do.  While the model does get some details wrong, and in the case of CDK hallucinates methods that don't exist, it as least gets me pointed in the right direction.

I am not confident enough in its product to start having it write code for me.  I use JetBrains IDEs, which now have an AI assistant built in.  I have so far only used it for single-line intelligent code completion, but even in this it seems to be wrong about 40% of the time, though only slightly.
Title: Re: The AI dooooooom thread
Post by: DGuller on June 26, 2025, 09:07:31 AM
I use AI to give me pointers as to how to write the code, and maybe for simple functions where I can easily inspect and understand them at a glance I'll just let it write them, but I'm hesitant to let it do the coding for me in general.  Sometimes it's going to be off in subtle ways, sometimes it's actually going to way overcomplicate it, and I'm also fearful of losing my own skills through attrition. 

The analogy I often make is that when I drive with a GPS, I drive with the map always pointing north, so that I don't get used to just being a monkey that obeys its orders to turn left or right.  Having it set up that way helps with training my ability to orient myself independently of GPS, and GPS is there just to lighten my mental load.
Title: Re: The AI dooooooom thread
Post by: DGuller on June 26, 2025, 09:20:53 AM
One use that did make me very grateful is when I had to refactor the code and rename 30 data columns to match the other system.  You had to things like rename "wrkIndustry1" to "work_industry_1", and things like that.  It's not hard to do for a human by eye, but it is super tedious.  Turns out it's also not hard for AI to do.

Another use where it unintentionally impressed me was when I asked it to write a code to create a list of categorical and continuous predictors, based on whether it was a character or a numeric column.  It designated two of the numerical columns as categorical despite my instructions, and when I asked it why, it said that in real estate datasets the names of these two columns are usually categorical, with the numbers just being the indices of the categories.  It was right on both counts.  I would've figured it out eventually, but the fact that it has the ability to look past my literal instructions to understand what I was really trying to achieve was next level.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on June 27, 2025, 04:55:53 PM
Enjoyed reading about this Anthropic experiment :lol:
https://www.anthropic.com/research/project-vend-1
Title: Re: The AI dooooooom thread
Post by: crazy canuck on July 09, 2025, 12:35:22 PM
One of my partners was in court yesterday making an application to strike a claim.  The Plaintiff was self represented and completely honest. He was not asserting any facts that were not entirely true.  The problem was the AI tools he used to research whether he had a claim, draft his claim, and draft his response to the application to strike, led him terribly astray.  Prior to the court hearing my partner tried to explain to the Plaintiff that he had no claim and that he should just walk away so that he did not have to pay costs.  But he had faith that AI had gotten it right.

During the hearing the judge gently asked him if he had used AI to draft his pleadings.  He said he had, and the judge was heard to say "well that explains it".

I think our courts are going to become flooded with unmeritorious claims as more people come to rely on AI.  I am also fearful of all the errors occurring in other fields where there is not a formal process for catching the errors AI makes.
Title: Re: The AI dooooooom thread
Post by: Razgovory on July 09, 2025, 01:47:06 PM
Lawyers for the Mypillow guy, Mike Lindell, were fined yesterday for using AI to write court documents.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on July 10, 2025, 09:39:41 AM
Quote from: Razgovory on July 09, 2025, 01:47:06 PMLawyers for the Mypillow guy, Mike Lindell, were fined yesterday for using AI to write court documents.

Yeah, the court did not hold back in its condemnation of the stupidity of using generative AI

QuoteIn February, Judge Wang said, the lawyers filed a court brief in a defamation case brought against Mr. Lindell that contained "nearly 30 defective citations." It misquoted court cases, misrepresented principles of law and, "most egregiously," cited "cases that do not exist," she wrote.

Judge Wang said the lawyers, Christopher I. Kachouroff and Jennifer T. DeMaster, had not explained how such errors could have ended up in the filing "absent the use of generative artificial intelligence or gross carelessness by counsel."
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on July 10, 2025, 04:25:01 PM
To be fair, AI hallucinated case law can be difficult to detect these days.  For example, a few days ago I was reading what I was sure was hallucinated case law.  The opinions contained long strings of nonsense and the citations were bizarre.

Upon careful examination, however, it turned out to be the actual case reports from the most recent Supreme Court term.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on July 10, 2025, 06:43:05 PM
 :D
Title: Re: The AI dooooooom thread
Post by: Josquius on July 11, 2025, 02:11:45 AM
A lot is said about the Internet dying as AI feeds off AI feeds of AI and everything just becomes amazingly gunked up with slop....

... Could the same happen with law?
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on July 11, 2025, 02:55:00 AM
Not as long as we have human judges.
Title: Re: The AI dooooooom thread
Post by: Jacob on July 11, 2025, 09:19:01 AM
That may depend on how many are like Emil Bove
Title: Re: The AI dooooooom thread
Post by: crazy canuck on July 11, 2025, 10:14:53 AM
Quote from: Jacob on July 11, 2025, 09:19:01 AMThat may depend on how many are like Emil Bove

 :yes:
Title: Re: The AI dooooooom thread
Post by: viper37 on July 15, 2025, 01:50:13 PM
Grok's "MechaHitler" meltdown didn't stop xAI from winning $200M military deal  (https://arstechnica.com/tech-policy/2025/07/groks-mechahitler-meltdown-didnt-stop-xai-from-winning-200m-military-deal/)


Seems like a good deal.
Title: Re: The AI dooooooom thread
Post by: Syt on July 16, 2025, 01:22:54 AM
Rolling Stone wins the headlines.

(https://i.imgur.com/lTrgRkS.png)
Title: Re: The AI dooooooom thread
Post by: Syt on July 24, 2025, 04:45:55 AM
https://gizmodo.com/fdas-new-drug-approval-ai-is-generating-fake-studies-report-2000633153

QuoteFDA's New Drug Approval AI Is Generating Fake Studies: Report
The AI, dubbed Elsa, is supposed to be making employees better at their jobs.


Robert F. Kennedy Jr., the Secretary of Health and Human Services, has made a big push to get agencies like the Food and Drug Administration to use generative artificial intelligence tools. In fact, Kennedy recently told Tucker Carlson that AI will soon be used to approve new drugs "very, very quickly." But a new report from CNN confirms all our worst fears. Elsa, the FDA's AI tool, is spitting out fake studies.

CNN spoke with six current and former employees at the FDA, three of whom have used Elsa for work that they described as helpful, like creating meeting notes and summaries. But three of those FDA employees told CNN that Elsa just makes up nonexistent studies, something commonly referred to in AI as "hallucinating." The AI will also misrepresent research, according to these employees.

"Anything that you don't have time to double-check is unreliable. It hallucinates confidently," one unnamed FDA employee told CNN
.

And that's the big problem with all AI chatbots. They need to be double-checked for accuracy, often creating even more work for the human behind the computer if they care about the quality of their output at all. People who insist that AI actually saves them time are often fooling themselves, with one recent study of programmers showing that tasks took 20% longer with AI, even among people who were convinced they were more efficient.

Kennedy's Make America Healthy Again (MAHA) commission issued a report back in May that was later found to be filled with citations for fake studies. An analysis from the nonprofit news outlet NOTUS found that at least seven studies cited didn't even exist, with many more misrepresenting what was actually said in a given study. We still don't know if the commission used Elsa to generate that report.

FDA Commissioner Marty Makary initially deployed Elsa across the agency on June 2, and an internal slide leaked to Gizmodo bragged that the system was "cost-effective," only costing $12,000 in its first four weeks. Makary said that Elsa was "ahead of schedule and under budget" when he first announced the AI rollout. But it seems like you get what you pay for. If you don't care about the accuracy of your work, Elsa sounds like a great tool for allowing you to get slop out the door faster, generating garbage studies that could potentially have real consequences for public health in the U.S.

CNN notes that if an FDA employee asks Elsa to generate a one-paragraph summary of a 20-page paper on a new drug, there's no simple way to know if that summary is accurate. And even if the summary is more or less accurate, what if there's something within that 20-page report that would be a big red flag for any human with expertise? The only way to know for sure if something was missed or if the summary is accurate is to actually read the report.

The FDA employees who spoke with CNN said they tested Elsa by asking basic questions like how many drugs of a certain class have been approved for children. Elsa confidently gave wrong answers, and while it apparently apologized when it was corrected, a robot being "sorry" doesn't really fix anything.

We still don't know the workflow being deployed when Kennedy says AI will allow the FDA to approve new drugs, but he testified in June to a House subcommittee that it's already being used to "increase the speed of drug approvals." The secretary, whose extremist anti-vaccine beliefs didn't keep him from becoming a public health leader, seems intent on injecting unproven technologies into mainstream science.

Kennedy also testified to Congress that he wants every American to be strapped with a wearable health device within the next four years. As it happens, President Trump's pick for Surgeon General, Casey Means, owns a wearables company called Levels that monitors glucose levels in people who aren't diabetic. There's absolutely no reason that people without diabetes need to constantly monitor their glucose levels, according to experts. Means, a close ally of Kennedy, has not yet been confirmed by the Senate.

Makary acknowledged to CNN that Elsa could "potentially hallucinate," but that's "no different" from other large language models and generative AI. And he's not wrong on that. The problem is that AI is not fit for purpose when it's consistently just making things up. But that won't stop folks from continuing to believe that AI is somehow magic.

William Maloney from the FDA's "rapid response" office sent a statement about the agency's use of AI on Wednesday.

"The information provided by FDA to CNN was mischaracterized and taken out of context," Maloney wrote. When Gizmodo responded via email to ask what in CNN's report may have been mischaracterized or taken out of context, Maloney didn't address our questions. Gizmodo also asked if anything about the report was inaccurate but that question was also ignored.

"FDA was excited to share the success story of the growth and evolution of its AI tool, Elsa. Unfortunately, CNN decided to lead the story with disgruntled former employees and sources who have never even used the current version of Elsa," Maloney's statement continued.

But it was Maloney's last line of the email that really reminded us the FDA has been fully captured by Trump: "The only thing 'hallucinating' in this story is CNN's failed reporting."

Title: Re: The AI dooooooom thread
Post by: Josquius on July 24, 2025, 06:18:10 AM
Sounds like a good business opportunity there. Like SEO back in the day, AIO.
Seeding the Internet with discussion about studies that don't actually exist so the AI can pick up on it.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on July 24, 2025, 09:16:49 AM
Quote from: Josquius on July 24, 2025, 06:18:10 AMSounds like a good business opportunity there. Like SEO back in the day, AIO.
Seeding the Internet with discussion about studies that don't actually exist so the AI can pick up on it.

No need, the AI will "hallucinate" fake studies because that's how they are designed.  It is not an intelligence.  It is just associating symbols with other symbols using its probability model. If asked to generate a report summarizing health research, it will detect similar reports and attempt to replicate them by using similar symbol or letter combinations but it has no clue that there is such a thing as a "study" that exists in the world.  It will just use words and letters in combinations similar to those in reports in its database that report studies, but it may mix and match words in the title or the journal name, and mix up results.

I see this all the time when AI's are used to try to write legal briefs. The case citations are all screwed up unless the AI is specifically programmed to check them against a real database.  Even then, the AI will often mis-state the holding or cite the wrong case for the wrong point.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on July 24, 2025, 09:20:03 AM
That said, the use of AI by the Kennedy-led FDA is doing exactly what is intended to do: reduce public confidence in the scientific reliability of agency output and provide a convenient excuse to can swaths of essential employees.
Title: Re: The AI dooooooom thread
Post by: Josquius on July 24, 2025, 10:35:35 AM
I do increasingly see potential uses for AI when you have fixed data sets and instruct it to work only from those.
I have some ideas about mapping property for taxation - though human checking would be needed for quirks.
It could also be really good for mapping traffic flow patterns via traffic cams which has all sorts is uses.
I won't say what I'm seeing in my work here... But it seems a reasonable use.


Quote from: The Minsky Moment on July 24, 2025, 09:16:49 AM
Quote from: Josquius on July 24, 2025, 06:18:10 AMSounds like a good business opportunity there. Like SEO back in the day, AIO.
Seeding the Internet with discussion about studies that don't actually exist so the AI can pick up on it.

No need, the AI will "hallucinate" fake studies because that's how they are designed.  It is not an intelligence.  It is just associating symbols with other symbols using its probability model. If asked to generate a report summarizing health research, it will detect similar reports and attempt to replicate them by using similar symbol or letter combinations but it has no clue that there is such a thing as a "study" that exists in the world.  It will just use words and letters in combinations similar to those in reports in its database that report studies, but it may mix and match words in the title or the journal name, and mix up results.

I see this all the time when AI's are used to try to write legal briefs. The case citations are all screwed up unless the AI is specifically programmed to check them against a real database.  Even then, the AI will often mis-state the holding or cite the wrong case for the wrong point.

I can't speak for legal use but for AI in general....
It does that if backed into a corner and it can't find a real answer, but it does tend to prefer real answers first. If asked out right is there a study to prove x it will often say no such study exists as far as it can find - it searches its databases and/or the Web.

That's a key part of why hallucinations are so bad imo. It seems so reasonable and good at it's job with stuff that's easy to check for yourself. But after a lengthy chain it will sneak shit in.

I found this when using an AI to help with drafting cvs and cover letters. At first it was good and just offered different ways of framing stuff. But as time went on it steadily began to sneak in more and more lies.
Title: Re: The AI dooooooom thread
Post by: garbon on July 24, 2025, 10:41:51 AM
Quote from: Josquius on July 24, 2025, 10:35:35 AMI can't speak for legal use but for AI in general....
It does that if backed into a corner and it can't find a real answer, but it does tend to prefer real answers first. If asked out right is there a study to prove x it will often say no such study exists as far as it can find - it searches its databases and/or the Web.

That's a key part of why hallucinations are so bad imo. It seems so reasonable and good at it's job with stuff that's easy to check for yourself. But after a lengthy chain it will sneak shit in.

I found this when using an AI to help with drafting cvs and cover letters. At first it was good and just offered different ways of framing stuff. But as time went on it steadily began to sneak in more and more lies.

Maybe but it can also get to the point of hallucinating real quick. I recall when I was playing around with chatGPT and asked it about something I didn't think it would know about historical people in India in the 8th century. It happily spit out a list and then several were immediately from the 11th century though it didn't tell me that. When confronted it updated the list, still with errors and on the 2nd time I said I was worried as there was no evidence, it then told me for each of the wrong names that there could have been someone with that name who existed at the time. :D
Title: Re: The AI dooooooom thread
Post by: Josquius on July 24, 2025, 10:46:49 AM
True, it can be confidently wrong sometimes especially with niche stuff.
The other day I was investigating a district of Newcastle and how historically well used it actually was and if there are any maps clearly marking it.
It said of course and gave me some examples... All of which clearly didn't have it.

I always make a point of asking for it's sources. Which yes. Massively cuts down on how much of a time saver it claims to be. But used properly it still can be quicker and easier than just manually googling.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on July 24, 2025, 11:36:27 AM
It is not confident about anything.  Stop reifying it.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on July 24, 2025, 01:08:18 PM
Quote from: The Minsky Moment on July 24, 2025, 09:16:49 AM
Quote from: Josquius on July 24, 2025, 06:18:10 AMSounds like a good business opportunity there. Like SEO back in the day, AIO.
Seeding the Internet with discussion about studies that don't actually exist so the AI can pick up on it.

No need, the AI will "hallucinate" fake studies because that's how they are designed.  It is not an intelligence.  It is just associating symbols with other symbols using its probability model. If asked to generate a report summarizing health research, it will detect similar reports and attempt to replicate them by using similar symbol or letter combinations but it has no clue that there is such a thing as a "study" that exists in the world.  It will just use words and letters in combinations similar to those in reports in its database that report studies, but it may mix and match words in the title or the journal name, and mix up results.

I see this all the time when AI's are used to try to write legal briefs. The case citations are all screwed up unless the AI is specifically programmed to check them against a real database.  Even then, the AI will often mis-state the holding or cite the wrong case for the wrong point.

Looking forward to the Supreme Court ruling that if Mechahitler cites a fictional case that supports the President's position, that fictional case is now good case law.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on July 24, 2025, 01:27:57 PM
Quote from: Josquius on July 24, 2025, 10:35:35 AMIt does that if backed into a corner and it can't find a real answer, but it does tend to prefer real answers first.

It doesn't have preferences and doesn't know what reality is. It's just a "stochastic parrot" as per the Bender paper.

For legal purposes I've used AI as a glorified, flexible database tool.  I.e. I can just dump a ton of PDF deposition and hearing transcripts into a walled AI and it can answer questions like: Show me every time someone testifies about Trademark X, or Did any witness testify that they were aware of Fact Y before the contract was executed on Date Z?  It does a pretty good job of that and it's every easy to set up - just drap and drop PDFs, wait a few minutes and start asking away.

Some people also use it to draft relatively simple letters and swear by it, but drafting simple letters is ... simple; I don't need an automated tool for that.  Anything more complex and its more trouble to proof for use.

I would never rely on it to analyze a serious legal issue.  But the worst stakes there is someone gets lousy legal representation.  As applied to something like the FDA, millions of lives are at stake.  It is grotesque irresponsibility.
Title: Re: The AI dooooooom thread
Post by: Josquius on July 24, 2025, 01:51:05 PM
Obviously it doesn't have feelings and actual preferences. Why do people keep thinking it's a person?
 I'm describing the way it works. It's coded to prioritise actual facts. Though when none can be found or it gets deep into things....
Title: Re: The AI dooooooom thread
Post by: HVC on July 24, 2025, 01:52:31 PM
Quote from: The Minsky Moment on July 24, 2025, 01:27:57 PMIt's just a "stochastic parrot" as per the Bender paper.


Kill all humans?
Title: Re: The AI dooooooom thread
Post by: DGuller on July 24, 2025, 02:03:56 PM
Quote from: Josquius on July 24, 2025, 01:51:05 PMObviously it doesn't have feelings and actual preferences. Why do people keep thinking it's a person?
 I'm describing the way it works. It's coded to prioritise actual facts. Though when none can be found or it gets deep into things....
The issue is that your description of how it works is wrong.  It doesn't know for a fact that anything it generates is a fact, so it can't prioritize that.  It would be very easy to eliminate hallucination if LLM did know what a fact is.

There is also nothing to find, at least until you connect it to Internet.  There is no database to search, all knowledge is encoded as weights in neural network.
Title: Re: The AI dooooooom thread
Post by: Josquius on July 24, 2025, 02:20:13 PM
Quote from: DGuller on July 24, 2025, 02:03:56 PM
Quote from: Josquius on July 24, 2025, 01:51:05 PMObviously it doesn't have feelings and actual preferences. Why do people keep thinking it's a person?
 I'm describing the way it works. It's coded to prioritise actual facts. Though when none can be found or it gets deep into things....
The issue is that your description of how it works is wrong.  It doesn't know for a fact that anything it generates is a fact, so it can't prioritize that.  It would be very easy to eliminate hallucination if LLM did know what a fact is.

There is also nothing to find, at least until you connect it to Internet.  There is no database to search, all knowledge is encoded as weights in neural network.

Most tools I've used were connected to the Internet or some document or other. Can't say I've ever used one that was just free floating.

Chat gpt, Claude, etc... Absolutely can tell you a sourced fact.
If something can easily be found through searching they will do this.
For more complex things... I do find they usually say there's nothing found for it. Though they will sometimes take a best guess which is often not great.

Obviously it's not a real thinking person. It doesn't *know* something is a fact or a hallucination. And this is where the problem lies. But it will aim to get things right via a direct source or the simple logic that correct things get repeated more often.
Title: Re: The AI dooooooom thread
Post by: DGuller on July 24, 2025, 02:37:39 PM
ChatGPT is free-floating most of the time.  Even if it can search, it will rarely do so unless you ask it to search.  Just like when I'm writing here, most of the time I will go from my working knowledge and memory, but occasionally if I feel like I need to for some reason, I will google.
Title: Re: The AI dooooooom thread
Post by: Tamas on July 25, 2025, 01:08:37 AM
So when it says searching it is not actually searching?
Title: Re: The AI dooooooom thread
Post by: DGuller on July 25, 2025, 07:54:52 AM
Quote from: Tamas on July 25, 2025, 01:08:37 AMSo when it says searching it is not actually searching?
No, if it says searching, it's searching, but in general conversations it will rarely search unless you ask it to.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on July 25, 2025, 08:14:36 AM
Yes - but worth noting that for Google and Microsoft the data they use for scraping for AI is the same as the data they use for indexing the web. So in those cases it's sort of upstream of searching. This has been a tension and one of the big pushes of IP owners/creators to decouple those two - and we're getting there.

Different but slightly similar are the licensing deals the agencies are doing for example with Open AI (and I think news agencies will do very well out of AI). I think those are typically through regularly refreshed APIs (which a publisher would need to make sure corrections, retractions, take downs etc get pulled through). So not quite "searching" because it's basically a wire but again slightly upstream of what appears on search.
Title: Re: The AI dooooooom thread
Post by: celedhring on July 26, 2025, 04:47:40 AM
My use case for ChatGPT is polishing my own writing, particularly in English. The suggestions it makes do tend to be  wordy and redundant but a lot of times it will also make good suggestions. So essentially I will dump one of my texts, ask it to edit for stuff like clarity, flow, engagement... and check what it spits out and pick up what I like. That I do find useful.

When doing more creative work is of little use, since it tends to make very bland suggestions. It is also very explicit - it just can't handle subtext or nuance (which is to be expected). Like something I have tried is to feed it one of my scenes, and ask it to add this or that undercurrent to the scene, and it will have one of the characters just say it openly.

Its screenwriting "voice" is also, once more, very generic. I feel I can read all the bad Hollywood scripts it has been fed.
Title: Re: The AI dooooooom thread
Post by: celedhring on July 26, 2025, 04:52:07 AM
Also, several people I work with are using it to draft emails - I can tell because of the language - which I find supercringe and slightly rude. Like, "huh, I'm not worth your time so you have chatgpt draft your emails to me?"
Title: Re: The AI dooooooom thread
Post by: crazy canuck on July 26, 2025, 09:03:36 AM
Yeah, as I mentioned earlier in this thread, it is becoming very obvious what lawyers are using generative AI tools to do their drafting. It is complete shite.
Title: Re: The AI dooooooom thread
Post by: Josquius on July 26, 2025, 11:17:33 AM
It's weird though, as some things people flag as definite signs of AI but they're actually what would have been good English.
—  is a big one. I've never used em dashes as its hard to find on the keyboard, I always used hyphens for the same purpose. Though a few people who write for a living I know are annoyed about doing things right now marking them as AI.
Incidentally I've no idea how ai picked up on that one to begin with considering I don't think I'm alone in just using hyphens.

Beyond this in some contexts you get some words which were the most professional to use but are now kind of cringe markers.
Title: Re: The AI dooooooom thread
Post by: celedhring on July 26, 2025, 11:45:05 AM
I use a lot of dashes in regular writing - mostly because they are very prevalent in script-speak and it has kinda bled into my regular writing. Now I feel like people think AI is inserting them into my writing  :lol:

EDIT: Fuck, I used one, see?
Title: Re: The AI dooooooom thread
Post by: garbon on July 26, 2025, 12:05:08 PM
Quote from: celedhring on July 26, 2025, 11:45:05 AMI use a lot of dashes in regular writing - mostly because they are very prevalent in script-speak and it has kinda bled into my regular writing. Now I feel like people think AI is inserting them into my writing  :lol:

EDIT: Fuck, I used one, see?

:lol:

I am always fighting with chatGPT to remove them.
Title: Re: The AI dooooooom thread
Post by: Razgovory on July 26, 2025, 12:22:00 PM
Quote from: The Minsky Moment on July 24, 2025, 01:27:57 PM
Quote from: Josquius on July 24, 2025, 10:35:35 AMIt does that if backed into a corner and it can't find a real answer, but it does tend to prefer real answers first.

It doesn't have preferences and doesn't know what reality is. It's just a "stochastic parrot" as per the Bender paper.


(https://i.imgur.com/VVV5uSD.jpeg)
Title: Re: The AI dooooooom thread
Post by: Jacob on July 26, 2025, 01:20:36 PM
Quote from: celedhring on July 26, 2025, 04:52:07 AMAlso, several people I work with are using it to draft emails - I can tell because of the language - which I find supercringe and slightly rude. Like, "huh, I'm not worth your time so you have chatgpt draft your emails to me?"

Yeah for sure, anyone using AI to address me immediately goes to the "you're not worth my time" bucket. I guess if the AI is good enough, I won't be able to tell but that will only make my rejection more firm if I do find out.

Same thing with robo-voices and recorded calls and the like - if I'm not worth your time to engage with, why should I use my time to engage with your business objectives?

Similarly, if I need something from your business and you make me go through AI assistants and automated systems you're signalling how important I am to you (answer: not at all). If I need something I'll have to continue (or decide it's not worth it) - and obviously you get what you pay for (more or less) - but it's definitely sending a powerful signal about our relationship and it's one I'll take note of when making future decisions.
Title: Re: The AI dooooooom thread
Post by: Josquius on July 26, 2025, 01:34:32 PM
QuoteSame thing with robo-voices and recorded calls and the like - if I'm not worth your time to engage with, why should I use my time to engage with your business objectives?
Not just robo calls. Any call that hasn't been previously arranged.
You better be dying grandma!


Quote from: celedhring on July 26, 2025, 11:45:05 AMI use a lot of dashes in regular writing - mostly because they are very prevalent in script-speak and it has kinda bled into my regular writing. Now I feel like people think AI is inserting them into my writing  :lol:

EDIT: Fuck, I used one, see?

That's a hyphen, not an em dash. Technically you used that wrong. Though as said myself and everyone else uses them for this purpose as they're right there on the keyboard whilst em dashes aren't.
It is a very easy way to spot AI that the correct dash is used. Though as mentioned I know some pretty anal people already made the effort to use the proper symbol and they aren't happy.
Title: Re: The AI dooooooom thread
Post by: garbon on July 26, 2025, 03:00:27 PM
Microsoft products generally add them automatically but I agree because of how we fuck about with formatting, often people have a mix of both rather consistently just hyphens.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on July 26, 2025, 07:25:30 PM
I watched part of the Cubs/White Sox game yesterday with AI generated images.  Don't know if the announcers were human.
Title: Re: The AI dooooooom thread
Post by: Zoupa on August 11, 2025, 09:08:38 PM
(https://pbs.twimg.com/media/GyEk_eWX0AEL177?format=jpg&name=900x900)
Title: Re: The AI dooooooom thread
Post by: Josquius on August 12, 2025, 01:23:12 AM
Yes. It is too much of a butt kisser. Really shows a limitation.
Reminds me of this one.

https://futurism.com/leaked-chatgpt-lawyer-displace-amazonian

QuoteLeaked ChatGPT Conversation Shows User Identified as Lawyer Asking How to "Displace a Small Amazonian Indigenous Community From Their Territories in Order to Build a Dam and a Hydroelectric Plant"

Imagining chat gpts jolly butt kissing reply "That sounds like a great scheme! You are absolutely right that these people don't understand the value of land and could arguably be termed non people!"
Title: Re: The AI dooooooom thread
Post by: Zanza on August 12, 2025, 02:07:56 AM
Version 5 is supposedly much more restrained.
Title: Re: The AI dooooooom thread
Post by: celedhring on August 12, 2025, 03:16:13 AM
Quote from: Zanza on August 12, 2025, 02:07:56 AMVersion 5 is supposedly much more restrained.

Yeah, sadly, it also seems restrained in its usefulness.

For my general use case (revising my English-language texts for grammar and more natural language), gpt4o is giving me better results... 5 tends to cut out too much text, even if I tell it to not do that. I feel it's been trained to give shorter responses?
Title: Re: The AI dooooooom thread
Post by: Zanza on August 12, 2025, 06:26:56 AM
But you can buy a subscription for longer answers and more tokens.  :bash:
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 12, 2025, 06:34:46 AM
Quote from: Josquius on August 12, 2025, 01:23:12 AMYes. It is too much of a butt kisser. Really shows a limitation.
I do slightly wonder how much of this is because they're American products with American optimism and customer service attitudes :lol: If it was built by, say, a Dutch company or one based in Berlin maybe it would have a different attitude.

Though I believe some of these providers are now giving you prebuilt "personalities" to interact with as not everyone wants enthusiastic, friendly, chatty.
Title: Re: The AI dooooooom thread
Post by: Josquius on August 12, 2025, 06:49:03 AM
Quote from: Sheilbh on August 12, 2025, 06:34:46 AM
Quote from: Josquius on August 12, 2025, 01:23:12 AMYes. It is too much of a butt kisser. Really shows a limitation.
I do slightly wonder how much of this is because they're American products with American optimism and customer service attitudes :lol: If it was built by, say, a Dutch company or one based in Berlin maybe it would have a different attitude.

Though I believe some of these providers are now giving you prebuilt "personalities" to interact with as not everyone wants enthusiastic, friendly, chatty.

 :lol:
Great idea. Lets go seek investment for this AI startup based in Amsterdam.
"Look....Kevin? It was Kevin right? This idea of yours is shit. It would be a waste of processing power for me to think about it for more than a millisecond. You should do some basic reading."
Title: Re: The AI dooooooom thread
Post by: Valmy on August 13, 2025, 10:11:06 AM
Oh yeah that would be amazing. German AI telling us why our ideas are shit.
Title: Re: The AI dooooooom thread
Post by: Zanza on August 13, 2025, 12:27:18 PM
I just used GPT5 for the first time extensively to generate a concept for a topic I currently work on. Did not really notice much of a quality difference, but this "great question" bullshit is toned down.
Title: Re: The AI dooooooom thread
Post by: DGuller on August 13, 2025, 01:48:44 PM
I used GPT 5 Pro at work to document my code.  It took forever to respond to each prompt, but it found a subtle but big mistake in my logic.  Incidentally I myself spotted it earlier that day, but I fed it the old version of the code to test it. 

It is pretty unnerving how much it figured out about what I was doing, without even having access to inputs that I would think would be crucial.  What I was doing wasn't boilerplate, it was a custom thing.  An expert would probably make educated guesses as to how the input is built, bit I wouldn't expect AI to be so on point.
Title: Re: The AI dooooooom thread
Post by: Josquius on August 16, 2025, 07:54:43 AM
Apparently quite a lot of hate for GPT5 from the big AI fans.

Read this today which is fairly encouraging

https://archive.is/20250815182025/https://www.ft.com/content/d01290c9-cc92-4c1f-bd70-ac332cd40f94
Title: Re: The AI dooooooom thread
Post by: Savonarola on August 17, 2025, 01:30:58 PM
I will admit, I never expected to see the following headline:

Why Is TikTok Overflowing With AI Country Music Erotica? (https://pitchfork.com/thepitch/why-is-tiktok-overflowing-with-ai-country-music-erotica/)

Oh brave new world.
Title: Re: The AI dooooooom thread
Post by: Valmy on August 21, 2025, 01:31:14 PM
I never thought I would see the words "AI County Music Erotica" combined. What a world.
Title: Re: The AI dooooooom thread
Post by: Josquius on August 21, 2025, 01:52:53 PM
Seeing a bit of gloating on social media that the bubble is bursting as Facebook cuts back on its AI team...
Title: Re: The AI dooooooom thread
Post by: Tonitrus on August 21, 2025, 02:56:24 PM
It makes sense that Facebook should be slowly fading into MySpace territory...with AI bots mostly keeping it alive.

When I was teaching...in our lesson on Russian social media, I'd almost ask my classes how many of them use Facebook.  Almost always zero hands would go up...that leaves people probably age 40-50 and up if I were to guess.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 21, 2025, 03:00:47 PM
Although a key difference to MySpace, as Mike Bird has pointed out, which is really incredible is that Meta has publicly and seriously messed up on its last two big projects (the metaverse and AI) but has also offered >300% returns to investors in the last three years.

Their power in online advertising (and the fact that the biggest Western companies are basically all in advertising) is big even when everything else is failing. And ads is, I think, something like 99% of their revenue.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on August 21, 2025, 03:05:02 PM
Yeah, I get that.  MySpace was a bit ahead of the times of major online advertisement revenue...and the 40-and-older has almost always been a key demographic for advertisers (because they have the money)...but I will expect this will likely fade over time as their demographic dies off.

I also wonder how much of Meta's revenue is actually from Facebook itself, or other Meta products (I may or may not bother to research that  :P )
Title: Re: The AI dooooooom thread
Post by: Josquius on August 21, 2025, 03:07:41 PM
I guess MySpace was unfortunate in relying on teens. They're ficcle and adapt to new stuff quickly.

Facebook may be dying and have a restricted audience but people over 50 are slower to change and have money.
Title: Re: The AI dooooooom thread
Post by: HVC on August 21, 2025, 03:08:20 PM
I thought 18 to mid 30s was the golden demographic for marketing?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 21, 2025, 03:12:00 PM
Yeah although a lot of it doesn't massively depend on Facebook though - there's Instagram and WhatsApp especially.

But also it's just adtech infrastructure at this point - the Meta pixel is really embedded all over the internet now in ways that it really, really doesn't need to be but is hoovering data (which Meta basically don't really know what they're doing with, it just flows everywhere). There was a report in the UK about NHS hospitals with the Meta pixel on their site - and no doubt they do some analytics from that but they don't need to advertise or earn money from it. I'm no sure about the case for US hospitals but I think there was a follow up that found about 90% of US hospital websites reviewed had the Meta pixel.

(Going to re-up my view that Musk and Twitter get all the attention but the really evil tech company is Meta and Zuckerberg :lol: :ph34r:)
Title: Re: The AI dooooooom thread
Post by: Tonitrus on August 22, 2025, 12:08:18 AM
No evil status for Alphabet?  :(
Title: Re: The AI dooooooom thread
Post by: celedhring on August 22, 2025, 02:26:23 AM
Quote from: Tonitrus on August 22, 2025, 12:08:18 AMNo evil status for Alphabet?  :(

One of my favorite moments in tech history is when they retired the "don't be evil" slogan, because reasons  :lol:
Title: Re: The AI dooooooom thread
Post by: Josquius on August 22, 2025, 02:32:38 AM
It is true Google does sit quite apart from the evil companies. Its so obviously much of the same.... but somehow gets a bit of a pass.

Quote from: HVC on August 21, 2025, 03:08:20 PMI thought 18 to mid 30s was the golden demographic for marketing?
I think its bigger than that? 18-34 or even 18-49 or something?
But isn't the thing there that this is so coveted because its hard to reach?

Maybe thats the most desired thing. But consider how they move around a lot, are increasingly aware of ads and ad block, etc.... complete dominance of everyone over 50 is nothing to sniff at. I do know on TV ads are ever more shifting towards being aimed at older people as they're the main ones still sitting through ads.

I'm sure marketing people have given a proper answer somewhere :lol:
Title: Re: The AI dooooooom thread
Post by: Jacob on August 22, 2025, 12:31:10 PM
Saw this post on reddit, summarizing the drivers of the upcoming AI bubble bursting. It references a podcast (which I didn't listen to at all). It seemed reasonable enough to me, so I'm posting it here for people to tell me how it's not  :lol:

QuoteBetter Offline (https://podcasts.apple.com/us/podcast/better-offline/id1730587238) covers this most thoroughly. The gist is this:

AI companies are spending tens and hundreds of billions of dollars

AI companies are earning (single digit) billions of dollars

AI companies are profiting no dollars, or negative billions depending on your position on "investment in infrastructure")

AI companies almost all rely on other AI companies to use their product, generating the revenue they do make from their hardware off contracts from the companies that write chatbots and other programs that have guaranteed token use

NVidia, valued at over $4 Trillion, relies on continued sales of their chips to the hardware folks

Nearly all the AI companies are not traded publically, yet are valued in the tens-to-hundreds of billions, making them nearly impossible for someone to buy

When the non-profitable software guys stop paying the hardware guys, NVDA has a dip in stock, a "correction," and it'll scare the shit out of the markets

Those startup companies that were too expensive to buy will now be left with no more venture capital and huge negative run rates, meaning less NVDA buys, meaning more dip, more VC pull-back, fewer and less buys, etc, etc, etc

Because they are all incestuous with each others equipment and software, when one falls, it'll rip the others like crabs in a pot.

https://www.reddit.com/r/technology/comments/1mx1p2c/the_warning_signs_the_ai_bubble_is_about_to_burst/
Title: Re: The AI dooooooom thread
Post by: HVC on August 22, 2025, 12:44:47 PM
So what you're saying is we're getting cheap graphics cards back?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 22, 2025, 01:03:08 PM
Quote from: Jacob on August 22, 2025, 12:31:10 PMSaw this post on reddit, summarizing the drivers of the upcoming AI bubble bursting. It references a podcast (which I didn't listen to at all). It seemed reasonable enough to me, so I'm posting it here for people to tell me how it's not  :lol:
:lol: I don't know if I can.

Having looked at lots of AI products and worked on them being used in businesses I do not think there is a business model (yet) for companies like OpenAI. I can't work out what it's for.

The bit of AI that I think is going to take off and grow is basically the SaaS piece that is an additional enterprise license/subscription. So, for example, Salesforce, Oracle, your code repository. Those sort of environments where there is a lot of data necessary for your business and the AI improves or transforms functionalities I can absolutely see the business case for.

I'm not 100% sure where Gemini sits within this frame yet (or Microsoft CoPilot) but my instinct on Google is they are very much in a throw it all around and see what stick mode.

However in terms of what happens I don't want to be too alarmist (and there may well be positives)...but, there has been massive investment in this (both the phsysical piece and the models). The goal being that it is transformative from a productivity perspective, allowing companies to earn more with less labour largely by being able to automate (largely) white collar work. If that is true it will have enormous economic and social implications that will need to be worked through.

If it isn't true and doesn't work out, then I think in terms of scale it will be one of the biggest misallocations of capital ever. The fallout from that will have enormous economic and social knock on effects and implications too.
Title: Re: The AI dooooooom thread
Post by: Josquius on August 22, 2025, 02:14:14 PM
I am seeing various reports around about corporate attempts to switch to enterprise AI which are 95% failures.
Not sure on how reliable this is and how they're defining failure.
But certainly there are signs things are imploding.... And much earlier than anticipated.
One explanation I heard was the AI is proving incapable of "learning" specific business processes.

I can definitely see some business uses in helping individuals with writing, sorting/summarising big data sets (not to be relied on without doing it again yourself but... It's a good first run) , and most importantly transcribing and summarising meetings (with maybe 80% accuracy? There's some fun developments at work on this I can't really talk about).
But this sounds like it's replacing personal assistants and secretaries....which....already normal workers don't have those and the people that do are generally powerful enough they can insist on a human.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 22, 2025, 03:01:06 PM
Quote from: Jacob on August 22, 2025, 12:31:10 PMSaw this post on reddit, summarizing the drivers of the upcoming AI bubble bursting. It references a podcast (which I didn't listen to at all). It seemed reasonable enough to me, so I'm posting it here for people to tell me how it's not  :lol:

QuoteBetter Offline (https://podcasts.apple.com/us/podcast/better-offline/id1730587238) covers this most thoroughly. The gist is this:

AI companies are spending tens and hundreds of billions of dollars

AI companies are earning (single digit) billions of dollars

AI companies are profiting no dollars, or negative billions depending on your position on "investment in infrastructure")

AI companies almost all rely on other AI companies to use their product, generating the revenue they do make from their hardware off contracts from the companies that write chatbots and other programs that have guaranteed token use

NVidia, valued at over $4 Trillion, relies on continued sales of their chips to the hardware folks

Nearly all the AI companies are not traded publically, yet are valued in the tens-to-hundreds of billions, making them nearly impossible for someone to buy

When the non-profitable software guys stop paying the hardware guys, NVDA has a dip in stock, a "correction," and it'll scare the shit out of the markets

Those startup companies that were too expensive to buy will now be left with no more venture capital and huge negative run rates, meaning less NVDA buys, meaning more dip, more VC pull-back, fewer and less buys, etc, etc, etc

Because they are all incestuous with each others equipment and software, when one falls, it'll rip the others like crabs in a pot.

https://www.reddit.com/r/technology/comments/1mx1p2c/the_warning_signs_the_ai_bubble_is_about_to_burst/

Yeah, this reminds me a lot of the late 1990s.  Lots of money going in to try to leverage a new technology in ways that made no economic sense.
Title: Re: The AI dooooooom thread
Post by: Jacob on August 22, 2025, 03:46:09 PM
QuoteIt Took Many Years And Billions Of Dollars, But Microsoft Finally Invented A Calculator That Is Wrong Sometimes

It's not AI winter just yet, though there is a distinct chill in the air. Meta is shaking up and downsizing its artificial intelligence division. A new report out of MIT finds that 95 percent of companies' generative AI programs have failed to earn any profit whatsoever. Tech stocks tanked Tuesday, regarding broader fears that this bubble may have swelled about as large as it can go. Surely, there will be no wider repercussions for normal people if and when Nvidia, currently propping up the market like a load-bearing matchstick, finally runs out of fake companies to sell chips to. But getting in under the wire, before we're all bartering gas in the desert and people who can read become the priestly caste, is Microsoft, with the single most "Who asked for this?" application of AI I've seen yet: They're jamming it into Excel.

Excel! The spreadsheet program! The one that is already very good at what it does, which is calculation and data analysis. You put some numbers in and it spits some numbers out. According to The Verge, "Microsoft Excel is testing a new AI-powered function that can automatically fill cells in your spreadsheets." Using natural language, the idea goes, you tell it what you want and then the AI will "classify information, generate summaries, create tables, and more."

...

https://defector.com/it-took-many-years-and-billions-of-dollars-but-microsoft-finally-invented-a-calculator-that-is-wrong-sometimes
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 22, 2025, 06:53:32 PM
At this point in the Trump presidency making numbers up is a virtue
Title: Re: The AI dooooooom thread
Post by: Valmy on August 22, 2025, 07:09:21 PM
Quote from: Jacob on August 22, 2025, 12:31:10 PMSaw this post on reddit, summarizing the drivers of the upcoming AI bubble bursting. It references a podcast (which I didn't listen to at all). It seemed reasonable enough to me, so I'm posting it here for people to tell me how it's not  :lol:

QuoteBetter Offline (https://podcasts.apple.com/us/podcast/better-offline/id1730587238) covers this most thoroughly. The gist is this:

AI companies are spending tens and hundreds of billions of dollars

AI companies are earning (single digit) billions of dollars

AI companies are profiting no dollars, or negative billions depending on your position on "investment in infrastructure")

AI companies almost all rely on other AI companies to use their product, generating the revenue they do make from their hardware off contracts from the companies that write chatbots and other programs that have guaranteed token use

NVidia, valued at over $4 Trillion, relies on continued sales of their chips to the hardware folks

Nearly all the AI companies are not traded publically, yet are valued in the tens-to-hundreds of billions, making them nearly impossible for someone to buy

When the non-profitable software guys stop paying the hardware guys, NVDA has a dip in stock, a "correction," and it'll scare the shit out of the markets

Those startup companies that were too expensive to buy will now be left with no more venture capital and huge negative run rates, meaning less NVDA buys, meaning more dip, more VC pull-back, fewer and less buys, etc, etc, etc

Because they are all incestuous with each others equipment and software, when one falls, it'll rip the others like crabs in a pot.

https://www.reddit.com/r/technology/comments/1mx1p2c/the_warning_signs_the_ai_bubble_is_about_to_burst/

This is the thing that really baffles me about 21st century economics. There seems to be an infinite amount of money to throw at things that have zero obvious potential to turn a profit. While vital services and goods people desperately need and want to buy are allowed to just wither and die, looted by private equity firms, and nothing seems to be coming up to replace them. At least not quickly. It is crazy town out here.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 22, 2025, 07:56:08 PM
Welcome to late stage capitalism Valmy
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 22, 2025, 08:34:08 PM
Quote from: Valmy on August 22, 2025, 07:09:21 PMThis is the thing that really baffles me about 21st century economics. There seems to be an infinite amount of money to throw at things that have zero obvious potential to turn a profit. While vital services and goods people desperately need and want to buy are allowed to just wither and die, looted by private equity firms, and nothing seems to be coming up to replace them. At least not quickly. It is crazy town out here.
I'm not sure on that with AI.

This requires physical infrastructure. There are huge data centres being built and significant investments in power generation for them, not to mention the chips industry which is real but also significant from a national security perspective. But I think it is real capital expenditure and the purpose is capital deepening. The big bet on this is that it will have significant increases in productivity. I agree I can't see a route to profit for any of the standalone products yet, but I can in other areas. If that bet pays off (obviously, as I say, transformative social and economic effects too).

But I think (and this could be bullshit as I'm just thinking this through) part of the dynamic and infinite cash is that the assumption is it'll go like the rest of big tech and consolidate into close to a monopoly. So if this works and you're the one with the model that "wins", there'll be huge market power (and profit). What's interesting is that every big tech company feels the need to throw everything at it because the potential prize is so big. And I don't think that's really happened before - I don't think everyone felt the need to build their own browser, or search engine etc.

And I mentioned it with the chips and the geopolitical angle - but I think that is also important. From what I understand it looks like China has been less impeded by US export restrictions than hoped. It is one of the strategic sectors China's leaders have identified for the next stage of their development and they are aspiring to self-sufficiency and then international competitiveness. As a general tool Deepseek is very impressive. There is opacity on how and how deeply China is embedding AI but from what I've read it is booming on the consumer side and absolutely getting embedded in day-to-day for business. I can't help but think of Jake's comment about the level of security theatre in China recently as I'm sure a lot of that is security paranoia - but also as China's economy, how much of it is just a make-work scheme for people who are not going to be getting the lower cost manufacturing jobs China's moving out of? And could we something similar in other sectors?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 22, 2025, 08:51:56 PM
I'd also add the bits of the MIT report are interesting but I think some of the way it's been discussed are a little misleading.

It was based on interviews and surveys around this and found that about 5% achieved quick revenue growth. They said some big companies and some startups are getting really positive results "because they pick one pain point, execute well, and partner smartly with companies who use their tools".

They say the issue is a "learning gap" for both the actual AIs and also organisations using them. So execs blame regulation or the model's performance, but the report suggests it's actual implementation that's the issue. So ChatGPT works for individuals but doesn't work at a business level because it's not learning from workflows (can confirm based on experiments I've seen - ChatGPT is the least useful and in almost every area more specific tools are better).

Similarly it finds that about half the AI budgets in companies are going to sales and marketing tools - which I think would have a really bad ROI on so many levels ("please genericise my copy"). The best return is back-office automation.

I think a lot of this is probably true of basically every piece of new tech hitting companies. It totally lines up with my experience and is why I lean towards the view that the "winners" will be the Oracles and the Salesforces etc. I also think it's why Sam Altman is out there posting things about the Manhattan Project and the Death Star because that sort of existential threat angle is one hell of a marketing tool when, at this stage, you're quite possibly just building a plug-in enterpise SaaS functionality. "Look on my works ye mighty and despair", said the creator of Trello.
Title: Re: The AI dooooooom thread
Post by: Syt on August 26, 2025, 12:35:34 AM
https://www.pcgamer.com/software/ai/microsoft-launches-copilot-ai-function-in-excel-but-warns-not-to-use-it-in-any-task-requiring-accuracy-or-reproducibility/

QuoteMicrosoft launches Copilot AI function in Excel, but warns not to use it in 'any task requiring accuracy or reproducibility'

You can have Copilot generate your formulas in Excel now, but it doesn't sound ready for prime time.

Certain Windows 365 Copilot users now have access to a new feature in Excel that lets you use Microsoft's Copilot AI to generate formulas, but you might want to be careful how you use it. Microsoft is already warning users that the AI might not always be accurate.

The new "COPILOT" function allows you to skip writing Excel formulas yourself by telling Copilot what you want to do and the cells you want to use. For instance, you could type in "=COPILOT("Summarize this feedback", A2:A20)" to have Copilot generate a formula that summarizes the content in a column of cells. The examples Microsoft suggests in the support page for the COPILOT function focus on classifying, summarizing, and generating content.

However, Microsoft specifically warns not to use it for "any task requiring accuracy or reproducibility," like numerical calculations. Microsoft also advises against using the feature for "financial reporting, legal documents, or other high-stakes scenarios," so basically most of the things people turn to Excel for.

Much like with other generative AI tools, if you're using the COPILOT function, you need to keep an eye out for the AI hallucinating strange results or misunderstanding your prompts, which of course calls its basic utility into question.

Aside from accuracy, privacy is also an obvious concern here. Microsoft says in the announcement blog post that "Your data sent through the COPILOT function is never used to train or improve the AI models. The information you input remains confidential and is used solely to generate your requested output."

This is still a beta feature, so Microsoft is still refining it with the help of user feedback and it isn't widely available yet. Right now, it's only available to Microsoft 365 Copilot Beta Channel users. Functionality is also somewhat limited even beyond the accuracy concerns—the COPILOT function is currently maxed out at 100 calls per 10 minutes or 300 calls per hour. It can't access "live web data or internal business documents," either.


:lol:

Looks like some development team had a very firm deadline. :D
Title: Re: The AI dooooooom thread
Post by: Neil on August 26, 2025, 09:04:18 AM
I mean, that's not really surprising.  The nature of 'AI' is such that it can never truly be accurate.  All it does is throw in the most probable words one after another, and if pushed back on tells the user exactly what they want to hear. 
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 26, 2025, 09:08:23 AM
Quote from: Neil on August 26, 2025, 09:04:18 AMI mean, that's not really surprising.  The nature of 'AI' is such that it can never truly be accurate.  All it does is throw in the most probable words one after another, and if pushed back on tells the user exactly what they want to hear. 

Now you've done it, the true believers are going to descend on you like a pack of wolves.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 26, 2025, 09:25:26 AM
Mainly for Oex, an opinion piece by a prof regarding how to deal with AI and get students to think

https://www.nytimes.com/2025/08/26/opinion/culture/ai-chatgpt-college-cheating-medieval.html?unlocked_article_code=1.hE8._lxB.VWMafZEcrHpf&smid=url-share
Title: Re: The AI dooooooom thread
Post by: Oexmelin on August 26, 2025, 09:40:13 AM
Thanks!

It's mostly stuff I have already shifted to, and though the article makes it reassuring, it is a gigantic shift which, for it to work, would require a lot more faculty, because it works through smaller classrooms - something which universities everywhere,  but especially US universities, are loathe to do.

I have incorporated the ethical talk, and it works somewhat because I have smaller classrooms, and we have a smaller program, which means I can establish relationships of trust with my students. (Not all of them, to be sure). It's also a ton of work outside the classroom, which, in many colleges, where the teaching load is heavy, means not a lot of time and which, in research universities, means professors feel it takes time away from research (which means they might be tempted not to revamp their assignments).
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 26, 2025, 05:51:34 PM
Finally found one (1) use for one of the general AIs - I've got this little small business side project going and it has been very helpful at putting into one place the sort of things to get in place and rough order of doing that/timelines.

I've read a lot of this before but it's been disparate (substacks by other small presses, speaking to the odd person in the industry, reading blogs and books etc) and it is helpful at drawing it into one place and cohering it a little.

This is, however, an exceptionally niche use.
Title: Re: The AI dooooooom thread
Post by: DGuller on August 26, 2025, 07:03:00 PM
Quote from: Neil on August 26, 2025, 09:04:18 AMI mean, that's not really surprising.  The nature of 'AI' is such that it can never truly be accurate.  All it does is throw in the most probable words one after another, and if pushed back on tells the user exactly what they want to hear. 
That's like saying that all a nuclear bomb does is trigger a runaway chain reaction.  It's a technically accurate thing to say, but it trivializes a very complex thing behind the "all it does".  Getting the conditional probabilities of the next token right depending on the context is kind of where the synthesis of concepts happens in an LLM.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 26, 2025, 07:18:49 PM
I think this is a fantastic summary by a very smart engineer in the Guardian's newsroom AI projects on what LLMs are (and aren't):
https://medium.com/@joelochlannsmith/mythbusting-large-language-models-2f4ef31a7ce1

I agree with his summary at the top:
QuoteLarge Language Models (LLMs) are remarkable tools with major limitations. In the two years since ChatGPT made them famous, their use has spread to search engines, coding assistants, document editors, even WhatsApp.

Unfortunately, as their use has spread, clarity about how they work has not. Over the past two years as tech lead of the Guardian's fledgling Newsroom AI team, I've learned a lot about LLMs. But I've also noticed that the applications built on top of them conceal what's really going on, leading to widespread misunderstanding.

Then runs through some mythbusting which I think often boil down to some degree of anthromophisation which isn't appropriate. And I think if you try to understand what they are you can probably work out what they could be used for and what they shouldn't. I think a lot of the horror stories are people using it for things that it just shouldn't be used for - and that is not helped by the anthropomorphised layer on top.

(I'd add despite that paragraph and my awareness of carbon impact of each prompt - 9 seconds of TV - and the pleas of folks like Sam Altman, I still say please and thank you whenever I use one of these :lol: :ph34r:)
Title: Re: The AI dooooooom thread
Post by: DGuller on August 26, 2025, 07:49:12 PM
Quote from: Sheilbh on August 26, 2025, 07:18:49 PMI think this is a fantastic summary by a very smart engineer in the Guardian's newsroom AI projects on what LLMs are (and aren't):
https://medium.com/@joelochlannsmith/mythbusting-large-language-models-2f4ef31a7ce1

I agree with his summary at the top:
QuoteLarge Language Models (LLMs) are remarkable tools with major limitations. In the two years since ChatGPT made them famous, their use has spread to search engines, coding assistants, document editors, even WhatsApp.

Unfortunately, as their use has spread, clarity about how they work has not. Over the past two years as tech lead of the Guardian's fledgling Newsroom AI team, I've learned a lot about LLMs. But I've also noticed that the applications built on top of them conceal what's really going on, leading to widespread misunderstanding.

Then runs through some mythbusting which I think often boil down to some degree of anthromophisation which isn't appropriate. And I think if you try to understand what they are you can probably work out what they could be used for and what they shouldn't. I think a lot of the horror stories are people using it for things that it just shouldn't be used for - and that is not helped by the anthropomorphised layer on top.

(I'd add despite that paragraph and my awareness of carbon impact of each prompt - 9 seconds of TV - and the pleas of folks like Sam Altman, I still say please and thank you whenever I use one of these :lol: :ph34r:)
I read the full article.  To be frank, while everything that is written there is true, IMO almost none of it is relevant or useful (which in my experience is as good at it gets on Medium when it comes to data science articles).  What useful bit of understanding is imparted by "busting" the myth that LLMs take text (rather than tokens)?  It sounds like the kind of mythbusting a high school physics students would do to show off his newfound knowledge:  "actually, there is no such thing as centrifigal force, there is only centripetal force".

I think most people who looked into this for a while understand that LLMs are predicting the next word (yeah, yeah, token, I know).  I think the relevant question is when does that mechanism of predicting the next word results in something that isn't too functionally different from real intelligence.  I don't think this article does much to even get you started answering that question.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 26, 2025, 08:02:40 PM
:lol: Well I suppose the audience matters there because to my view - it's not a data science article or about the cutting edge of LLMs - but an engineer working in that area in a newsroom. So surrounded by non-technical people who will not stop asking "why" and "how".

I saw it via a friend there who described it as for non-technical people (me - but also I suspect 90% of the people he is working with) looking for a deeper understanding of how they actually work and what they're doing - and I think at that it is very good. In that context for example, I think in particular emphasising the very simple point that the user-friendly, helpful (anthropomorphisable) interface is not the LLM - I also feel the token/word point is maybe not hugely important but makes me mindful given that I'm in a wordy profession (and he's working surrounded by people in wordy professions).
Title: Re: The AI dooooooom thread
Post by: DGuller on August 26, 2025, 08:16:49 PM
Quote from: Sheilbh on August 26, 2025, 08:02:40 PM:lol: Well I suppose the audience matters there because to my view - it's not a data science article or about the cutting edge of LLMs - but an engineer working in that area in a newsroom. So surrounded by non-technical people who will not stop asking "why" and "how".

I saw it via a friend there who described it as for non-technical people (me - but also I suspect 90% of the people he is working with) looking for a deeper understanding of how they actually work and what they're doing - and I think at that it is very good. In that context for example, I think in particular emphasising the very simple point that the user-friendly, helpful (anthropomorphisable) interface is not the LLM - I also feel the token/word point is maybe not hugely important but makes me mindful given that I'm in a wordy profession (and he's working surrounded by people in wordy professions).
I get that I'm not the audience, since for one I already knew all that.  That said, my evaluation of that article was on the merits of its usefulness to the non-technical audience, not to someone like me.  Whether you're technical audience or not, the question that I posed at the end of my last reply IS by far the most important question.  I think this article fails both at usefully educating those who know very little, and also those with little knowledge that we all know is dangerous.
Title: Re: The AI dooooooom thread
Post by: Syt on August 27, 2025, 02:08:30 AM
Tbf, I think this is the kind of article that (if summarized to a power point :P ) would be useful at work - some in our organization seem to think AI is more than it is (or misunderstanding entirely).
Title: Re: The AI dooooooom thread
Post by: Josquius on August 27, 2025, 02:57:31 AM
Yes, definitely.
I've heard some senior people suggesting uses which are...seriously worrying.
But in all this the actual practical uses in say transcribing meetings are getting quite overshadowed.


QuoteI mean, that's not really surprising.  The nature of 'AI' is such that it can never truly be accurate.  All it does is throw in the most probable words one after another, and if pushed back on tells the user exactly what they want to hear.
This differs to a flesh and blood marketing middle manager how? :p
Title: Re: The AI dooooooom thread
Post by: garbon on August 27, 2025, 03:03:57 AM
Quote from: DGuller on August 26, 2025, 08:16:49 PMWhether you're technical audience or not, the question that I posed at the end of my last reply IS by far the most important question.  I think this article fails both at usefully educating those who know very little, and also those with little knowledge that we all know is dangerous.

Do you care to elaborate? At first thought, I would think that your question would be dangerous for those who know little.
Title: Re: The AI dooooooom thread
Post by: Syt on August 27, 2025, 05:50:59 AM
Quote from: Josquius on August 27, 2025, 02:57:31 AMtranscribing meetings are getting quite overshadowed.

Such a huge quality of life benefit! No longer someone having to devote their brain power to chronicling the meeting. Much more efficient to have AI do it and then review/edit before sending to participants.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 27, 2025, 06:11:42 AM
Quote from: Syt on August 27, 2025, 05:50:59 AMSuch a huge quality of life benefit! No longer someone having to devote their brain power to chronicling the meeting. Much more efficient to have AI do it and then review/edit before sending to participants.
I don't know if I shared it but enjoyed the Popbitch story on this :lol:
Quote>> AI Goes Popbitch <<
ITV staff - your mic is on!

Following the company-wide adoption of Google's AI product Gemini at ITV, all meetings now have the option of transcriptions and summaries. Handy, right!

Maybe, but it has also led to some awkward office casualties linked to the shiny new feature.

In one online session – which was being transcribed and summarised - a couple of employees stayed on after the meeting had ended and had a good old bitch.

A pithy summary of their bitching session was duly appended to the meeting notes, with a word-for-word transcription also available for anyone who wanted a deep-dive.

Also a class action in the US on this area - which is the way of US law (and, honestly, increasingly I'm not so sure it's less effective than European regulation plus intermittent underpowered enforcement) - because they have quite strict laws on recording without consent. So some tools, especially Otter which is super-popular, don't have consent or the "this meeting is being recorded" by default.

QuoteTbf, I think this is the kind of article that (if summarized to a power point :P ) would be useful at work - some in our organization seem to think AI is more than it is (or misunderstanding entirely).
QuoteYes, definitely.
I've heard some senior people suggesting uses which are...seriously worrying.
So with both of these it has been a bit of eureka moment for me using it on my small publishing project because it's been really helpful.

I wouldn't use it to do any of the actual work but at the level of what are the steps I need to take, what's the rough timeline, directories for x type of person, high level (and no doubt out of date) market research and positioning etc - it is very helpful.

So what I'm suddenly understanding is why strategy teams and senior leader types are so impressed and excited - but because of the sort of thing they're experimenting with it for less aware of how it technically works or how it can/can't be used effectively. Because my experience is that, with the exception of a few specific areas (like engineers and data scientists), there's a huge gap between senior excitement and expectation and then actual results/use (this is where I think that MIT point around identifying specific pain points that can be automated is really key - at this stage).
Title: Re: The AI dooooooom thread
Post by: garbon on August 27, 2025, 06:29:14 AM
If I am in a meeting where it is noted the meeting is being transcribed and will have AI summaries, I clam up. Whole point of a meeting was to minimize verbatim written record!
Title: Re: The AI dooooooom thread
Post by: HVC on August 27, 2025, 06:31:12 AM
:yes: let the underlings take the fall
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 27, 2025, 06:31:50 AM
:lol: Yes. Or if it's that type of meeting at least get the lawyers to take notes so you can try to run a privilege argument :ph34r:

But then I am regularly astonished at the things often very senior smart people write in chats or WhatsApp.
Title: Re: The AI dooooooom thread
Post by: Neil on August 27, 2025, 07:11:18 AM
Quote from: DGuller on August 26, 2025, 07:03:00 PM
Quote from: Neil on August 26, 2025, 09:04:18 AMI mean, that's not really surprising.  The nature of 'AI' is such that it can never truly be accurate.  All it does is throw in the most probable words one after another, and if pushed back on tells the user exactly what they want to hear. 
That's like saying that all a nuclear bomb does is trigger a runaway chain reaction.  It's a technically accurate thing to say, but it trivializes a very complex thing behind the "all it does".  Getting the conditional probabilities of the next token right depending on the context is kind of where the synthesis of concepts happens in an LLM.
Sure, but the point is that everything a LLM does is worthless, since you can never be sure if it's lying to you or not, and it can never be dependable. 
Title: Re: The AI dooooooom thread
Post by: garbon on August 27, 2025, 07:27:20 AM
Quote from: Neil on August 27, 2025, 07:11:18 AM
Quote from: DGuller on August 26, 2025, 07:03:00 PM
Quote from: Neil on August 26, 2025, 09:04:18 AMI mean, that's not really surprising.  The nature of 'AI' is such that it can never truly be accurate.  All it does is throw in the most probable words one after another, and if pushed back on tells the user exactly what they want to hear. 
That's like saying that all a nuclear bomb does is trigger a runaway chain reaction.  It's a technically accurate thing to say, but it trivializes a very complex thing behind the "all it does".  Getting the conditional probabilities of the next token right depending on the context is kind of where the synthesis of concepts happens in an LLM.
Sure, but the point is that everything a LLM does is worthless, since you can never be sure if it's lying to you or not, and it can never be dependable. 

Lying feels like the wrong word as that feels like attributing human motivations to it.

It can provide inaccurate/false information. Of course, it generally is the case that predictions aren't 100% accurate. After all, when companies buy market research, it isn't because they are going to get a 100% accurate picture of what is currently happening and predictions for the future but that it is better than the knowledge they have if they only relied on their fieldforce, annecdotal customer feedback and sales data.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 27, 2025, 07:31:27 AM
Quote from: garbon on August 27, 2025, 07:27:20 AM
Quote from: Neil on August 27, 2025, 07:11:18 AM
Quote from: DGuller on August 26, 2025, 07:03:00 PM
Quote from: Neil on August 26, 2025, 09:04:18 AMI mean, that's not really surprising.  The nature of 'AI' is such that it can never truly be accurate.  All it does is throw in the most probable words one after another, and if pushed back on tells the user exactly what they want to hear. 
That's like saying that all a nuclear bomb does is trigger a runaway chain reaction.  It's a technically accurate thing to say, but it trivializes a very complex thing behind the "all it does".  Getting the conditional probabilities of the next token right depending on the context is kind of where the synthesis of concepts happens in an LLM.
Sure, but the point is that everything a LLM does is worthless, since you can never be sure if it's lying to you or not, and it can never be dependable. 

Lying feels like the wrong word as that feels like attributing human motivations to it.

It can provide inaccurate/false information. Of course, it generally is the case that predictions aren't 100% accurate. After all, when companies buy market research, it isn't because they are going to get a 100% accurate picture of what is currently happening and predictions for the future but that it is better than the knowledge they have if they only relied on their fieldforce, annecdotal customer feedback and sales data.

What does the fact that generative AI fabricates information have to do with the prediction of the future?

The reason the Excel spreadsheet is unuseable is because there can be enough confidence that the numbers input into the spreadsheet are valid.
Title: Re: The AI dooooooom thread
Post by: Tamas on August 27, 2025, 07:37:23 AM
Quote from: Syt on August 27, 2025, 05:50:59 AM
Quote from: Josquius on August 27, 2025, 02:57:31 AMtranscribing meetings are getting quite overshadowed.

Such a huge quality of life benefit! No longer someone having to devote their brain power to chronicling the meeting. Much more efficient to have AI do it and then review/edit before sending to participants.

Indeed.

On the article, while I agree that AI seems to be seriously overhyped, maybe the other side of that coin is that we overhype how our own brain works. We cannot possibly work on the basis of our cells putting the most likely words one after the other, we have souls from God!
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 27, 2025, 07:47:04 AM
We have had technology to transcribe what people say long before AI was to clean in the current tech titans eyes.

If that is the best that can be done for a use case, use then it's not worth the billions of dollars of investment.  :P
Title: Re: The AI dooooooom thread
Post by: Josquius on August 27, 2025, 07:50:51 AM
Yes. There's clearly a bubble based on marketing of calling this stuff AI rather than LMMs , this brings to mind true AI from sci fi.
But nor is it the completely useless toy some claim.
The reality is in the middle.
If it was called LMMs the investment would be considerably more sensible but it'd still be there and it'd be doing good business (and potentially losing tonnes too).
Title: Re: The AI dooooooom thread
Post by: DGuller on August 27, 2025, 08:20:23 AM
Quote from: garbon on August 27, 2025, 03:03:57 AM
Quote from: DGuller on August 26, 2025, 08:16:49 PMWhether you're technical audience or not, the question that I posed at the end of my last reply IS by far the most important question.  I think this article fails both at usefully educating those who know very little, and also those with little knowledge that we all know is dangerous.

Do you care to elaborate? At first thought, I would think that your question would be dangerous for those who know little.
Sure.  Just for clarity, my question was:  "when does that mechanism of predicting the next word results in something that isn't too functionally different from real intelligence."

My question is the fundamental question you need answered for knowing when to use AI, when to use AI very carefully, or when not to use AI at all.  Of course it's a dangerous question to get the answer wrong for, that's what makes it an important question.

This is the question for which it helps understanding how LLMs work.  For example, they work really well in programming because programming is by design something from which you can infer patterns from without actually knowing for sure.  If I were to go into some new language and have to write a conditional statement, I wouldn't know for a fact that it would be an "if" statement, but that would be a pretty good guess.  Knowledge that can be guessed can be effectively compressed and synthesized.

For the same reason, the legal field is where LLMs can be very dangerous, at least general purpose LLMs.  To use an obsolete example, there was a case in 1973 called Roe vs. Wade that legalized abortion in many cases.  Would you be able to use your general knowledge to guess that one party of Roe, the other party was Wade, and that it was decided in 1973?  No, this is something that you just have to know.  A random happening here or there, and it could've been Smith v. Miller decided in 1975 that legalized the abortion.  All that means is that it's very dangerous to generalize about laws, and generalization is what intelligence is about.  Even an intelligent human being who's not educated about the law can be very dangerous if he doesn't understand the limitation of his intelligence when it comes to law and tries to reason out the legal questions based on general knowledge.
Title: Re: The AI dooooooom thread
Post by: Tamas on August 27, 2025, 08:59:18 AM
Quote from: crazy canuck on August 27, 2025, 07:47:04 AMWe have had technology to transcribe what people say long before AI was to clean in the current tech titans eyes.

If that is the best that can be done for a use case, use then it's not worth the billions of dollars of investment.  :P

Transcribing isn't an "AI" feature but then summarising that transcript is.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 27, 2025, 09:06:28 AM
Quote from: Tamas on August 27, 2025, 08:59:18 AM
Quote from: crazy canuck on August 27, 2025, 07:47:04 AMWe have had technology to transcribe what people say long before AI was to clean in the current tech titans eyes.

If that is the best that can be done for a use case, use then it's not worth the billions of dollars of investment.  :P

Transcribing isn't an "AI" feature but then summarising that transcript is.

I have seen how AI summarizes complex meetings and it is pathetically, inadequate, and downright misleading. It might be OK for simple concepts like where certain tasks to be completed or discussed. Any intern consumer that in more time but less cost. So again I'm not really sure I see the use case.
Title: Re: The AI dooooooom thread
Post by: Josquius on August 27, 2025, 09:10:14 AM
Quote from: DGuller on August 27, 2025, 08:20:23 AM
Quote from: garbon on August 27, 2025, 03:03:57 AM
Quote from: DGuller on August 26, 2025, 08:16:49 PMWhether you're technical audience or not, the question that I posed at the end of my last reply IS by far the most important question.  I think this article fails both at usefully educating those who know very little, and also those with little knowledge that we all know is dangerous.

Do you care to elaborate? At first thought, I would think that your question would be dangerous for those who know little.
Sure.  Just for clarity, my question was:  "when does that mechanism of predicting the next word results in something that isn't too functionally different from real intelligence."

My question is the fundamental question you need answered for knowing when to use AI, when to use AI very carefully, or when not to use AI at all.  Of course it's a dangerous question to get the answer wrong for, that's what makes it an important question.

This is the question for which it helps understanding how LLMs work.  For example, they work really well in programming because programming is by design something from which you can infer patterns from without actually knowing for sure.  If I were to go into some new language and have to write a conditional statement, I wouldn't know for a fact that it would be an "if" statement, but that would be a pretty good guess.  Knowledge that can be guessed can be effectively compressed and synthesized.

For the same reason, the legal field is where LLMs can be very dangerous, at least general purpose LLMs.  To use an obsolete example, there was a case in 1973 called Roe vs. Wade that legalized abortion in many cases.  Would you be able to use your general knowledge to guess that one party of Roe, the other party was Wade, and that it was decided in 1973?  No, this is something that you just have to know.  A random happening here or there, and it could've been Smith v. Miller decided in 1975 that legalized the abortion.  All that means is that it's very dangerous to generalize about laws, and generalization is what intelligence is about.  Even an intelligent human being who's not educated about the law can be very dangerous if he doesn't understand the limitation of his intelligence when it comes to law and tries to reason out the legal questions based on general knowledge.

The problem here is the AI can't just admit when its wrong.
With the Roe vs. Wade example the AI can probably give you a great writeup- because so much has been written about it, it has munched so much specific material, there's tonnes out there on the web. All sorts.
A bit of a problem maybe it grasps a far right nonsense take on it rather than a good faith one but still, it should do an OK job.

Ask it for something more obscure however then it might well go off telling you all about it, it being a complete fabrication.
IMO AIs really need to start far more strongly always including their sources and maybe sometimes some explanation too of how they got from their source to what they wrote. Maybe some grading of sources could be nice too. Random website written by god knows who vs. the actual court transcript.



Quote from: crazy canuck on August 27, 2025, 07:47:04 AMWe have had technology to transcribe what people say long before AI was to clean in the current tech titans eyes.

If that is the best that can be done for a use case, use then it's not worth the billions of dollars of investment.  :P
QuoteTranscribing isn't an "AI" feature but then summarising that transcript is.


I'd say more important is the quality.
The summarising is often fairly hit and miss, though it is improving. Still can't do sentiment though.
The quality of the transcription is really coming on though from the old rule based pattern matching models that required a lot of user training. Far fewer mistakes in modern setup and they are much better at handling real world discussions with multiple speakers.
They still can't handle some dialects alas.  :ph34r:
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 27, 2025, 09:20:09 AM
An indication of the magnificent the investments being made. From the NYTimes

QuoteIt's no secret by now, as investors await an earnings report on Wednesday by the chip behemoth Nvidia, that optimism around the windfall that artificial intelligence may generate is pumping up the stock market.

But in recent months, it has also become clear that A.I. spending is lifting the real economy, too.

It's not because of how companies are using the technology, at least not yet. Rather, the sheer amount of investment — in data centers, semiconductor factories and power supply — needed to build the computing power that A.I. demands is creating enough business activity to brighten readings on the entire domestic economy.

Companies will spend $375 billion globally in 2025 on A.I. infrastructure, the investment bank UBS estimates. That is projected to rise to $500 billion next year. Investment in software and computer equipment, not counting the data center buildings, accounted for a quarter of all economic growth this past quarter, data from the Commerce Department shows.

(Even that probably doesn't reflect the whole picture. Government data collectors have long had trouble capturing the economic value of semiconductors and computer equipment that large tech companies like Meta and Alphabet install for their own use, rather than farming out to contractors, so the total impact is likely to be higher.)
Title: Re: The AI dooooooom thread
Post by: frunk on August 27, 2025, 09:43:56 AM
Quote from: Josquius on August 27, 2025, 09:10:14 AMIMO AIs really need to start far more strongly always including their sources and maybe sometimes some explanation too of how they got from their source to what they wrote. Maybe some grading of sources could be nice too. Random website written by god knows who vs. the actual court transcript.

The way tokenization works it's difficult if not impossible to include attribution like that within the LLM.  Maybe a next generation LLM might be able to, and it can use regular search or other resources to augment the results.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 27, 2025, 09:50:06 AM
Quote from: garbon on August 27, 2025, 07:27:20 AMLying feels like the wrong word as that feels like attributing human motivations to it.

It can provide inaccurate/false information. Of course, it generally is the case that predictions aren't 100% accurate. After all, when companies buy market research, it isn't because they are going to get a 100% accurate picture of what is currently happening and predictions for the future but that it is better than the knowledge they have if they only relied on their fieldforce, annecdotal customer feedback and sales data.
Yeah I agree it's anthropomorphising to frame as "lying". I also think it misses things that aren't there which can be important in some areas.

We are encouraged to experiment so I've put in some contracts which I've already reviewed and have notes on. A lot of it is good - the hallucination point has not really been an issue. But it doesn't flag that there isn't a cap on liability - and in English law if there isn't a cap on liability then it is uncapped which is really important. But I think this is where sector specific ones will emerge and there are definitely areas of law where I think it will have a big impact - particularly as it's already low value, high volume, highly automated work I can imagine it really taking off in the domestic conveyancing sector.

But also it depends on the tool. It very much depends what you want to do and I think at an economy/business sector wide level it will be the existing enterprise tech providers building in additional functionalities where the impact will be felt first and most strongly - whether that's GitHub or Salesforce or Oracle. I've found ChatGPT useful in my small business thing as I mentioned but I can't think of many uses for it.

The one I like most as a lawyer though is Notebook LM which I know journalists like a lot as well because it cites everything. It doesn't and shouldn't replace the actual work but it's really helpful if a very lengthy judgement or inquiry report in giving a general summary with citations linking to that section of the document you've uploaded. I've also used it with some of the decisions in other European courts because I don't have the languages - I caveat everything and say if we want to properly understand something we'd need to get a proper translation or speak to local counsel. But given that the current situation with a lot of European law is lawyers on LinkedIn sharing, say, Austrian or Polish or Belgian decisions that have been Google Translated this seems like a natural progression (not least because it's a Google product so you get the Google translated OCRed decision anyway - which can in itself be a bit of a hassle). Not something you can rely on but I think enough to be aware of actually there might be a specific risk/rocks in x jurisdiction we need to be mindful of - and if we are uncomfortable with risk in that area/jurisdiction we should speak to local counsel.

QuoteThe way tokenization works it's difficult if not impossible to include attribution like that within the LLM.  Maybe a next generation LLM might be able to, and it can use regular search or other resources to augment the results.
I mentioned it before Notebook LM is very good at this but it's focused on the documents you upload.

I'd add this is also the big issue for news publishers (and why I think this can't just be "fair use") is there are generative search results which do cite articles and sources. But there is across markets significant click and pageview loss from generative search. I've mentioned before but my suspicion is that the agencies will do very well out of this as people don't normally go to Reuters or AFP anyway while the news publishers like the NYT, the Guardian etc will be hit - possibly quite badly and the impact of that on our democratic society is unclear. But I think the first wave of the internet destroyed the business model of huge chunks of media that have only just recovered spottily but some are gone forever like the local press. I think the effect of that on our society was not great - and I'm not sure generative search will be better.
Title: Re: The AI dooooooom thread
Post by: Josquius on August 27, 2025, 09:50:48 AM
Quote from: frunk on August 27, 2025, 09:43:56 AM
Quote from: Josquius on August 27, 2025, 09:10:14 AMIMO AIs really need to start far more strongly always including their sources and maybe sometimes some explanation too of how they got from their source to what they wrote. Maybe some grading of sources could be nice too. Random website written by god knows who vs. the actual court transcript.

The way tokenization works it's difficult if not impossible to include attribution like that within the LLM.  Maybe a next generation LLM might be able to, and it can use regular search or other resources to augment the results.

Chat GPT does use regular search and provide sources- though says nothing about quality of sources and will often go and make something up unsourced.
Title: Re: The AI dooooooom thread
Post by: frunk on August 27, 2025, 10:39:05 AM
Quote from: Josquius on August 27, 2025, 09:50:48 AMChat GPT does use regular search and provide sources- though says nothing about quality of sources and will often go and make something up unsourced.

That's using the LLM itself to generate sources, which as with the rest of the model isn't an authenticated attribution and shouldn't be used as such.
Title: Re: The AI dooooooom thread
Post by: Josquius on August 27, 2025, 10:50:13 AM
Quote from: frunk on August 27, 2025, 10:39:05 AM
Quote from: Josquius on August 27, 2025, 09:50:48 AMChat GPT does use regular search and provide sources- though says nothing about quality of sources and will often go and make something up unsourced.

That's using the LLM itself to generate sources, which as with the rest of the model isn't an authenticated attribution and shouldn't be used as such.

I'm not sure what you mean.
If it says something is a fact then links you off to a journal where you see it says precisely that then isn't that a relevant source?
Its like wikipedia. Anyone can edit it and a rubbish source in itself, but using it as a shortcut to valid sources can be OK.
Title: Re: The AI dooooooom thread
Post by: Jacob on August 27, 2025, 11:52:50 AM
Quote from: Josquius on August 27, 2025, 10:50:13 AM
Quote from: frunk on August 27, 2025, 10:39:05 AMThat's using the LLM itself to generate sources, which as with the rest of the model isn't an authenticated attribution and shouldn't be used as such.

I'm not sure what you mean.
If it says something is a fact then links you off to a journal where you see it says precisely that then isn't that a relevant source?
Its like wikipedia. Anyone can edit it and a rubbish source in itself, but using it as a shortcut to valid sources can be OK.

I think the issue is the italicized part, and the level of confidence that it "says precisely that".
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 27, 2025, 12:00:36 PM
Quote from: frunk on August 27, 2025, 10:39:05 AM
Quote from: Josquius on August 27, 2025, 09:50:48 AMChat GPT does use regular search and provide sources- though says nothing about quality of sources and will often go and make something up unsourced.

That's using the LLM itself to generate sources, which as with the rest of the model isn't an authenticated attribution and shouldn't be used as such.

I have repeated that very thing many times to Jos, I am not sure why he has still not understood the point.  Generative AI is trained on data, it does not search for accurate data or sources.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 27, 2025, 12:07:05 PM
Quote from: Josquius on August 27, 2025, 10:50:13 AM
Quote from: frunk on August 27, 2025, 10:39:05 AM
Quote from: Josquius on August 27, 2025, 09:50:48 AMChat GPT does use regular search and provide sources- though says nothing about quality of sources and will often go and make something up unsourced.

That's using the LLM itself to generate sources, which as with the rest of the model isn't an authenticated attribution and shouldn't be used as such.

I'm not sure what you mean.
If it says something is a fact then links you off to a journal where you see it says precisely that then isn't that a relevant source?
Its like wikipedia. Anyone can edit it and a rubbish source in itself, but using it as a shortcut to valid sources can be OK.

Let me give you a concreate example from the work I do - a figure in a scholarly paper submitted for publication to a journal was caught by peer review as probably being drafted by AI (which was both not disclosed and resulted in the paper being essentially nonsensical).  It was not hard to spot the problems. The paper cited a source that was 10 years old (one of the hallmarks of AI generated research because it is just trained on data, it does not go out and find up to date data). But I digress. The figure the AI generated from the data had no relationship to the data that was in the cited source.  It was a complete fabrication. 
Title: Re: The AI dooooooom thread
Post by: Zoupa on August 27, 2025, 01:09:12 PM
I thought AI was gonna do my laundry and dishes, not take my job away. Right now I fail to see any societal benefits and many societal detriments.
Title: Re: The AI dooooooom thread
Post by: Tamas on August 27, 2025, 01:33:41 PM
One aspect is, I suspect, that there is a looming economic downturn and AI is as good an excuse as it gets for companies for getting rid of people. "look, we are not struggling to keep growing at all! We are in fact growing so much thst we are going to replace highly skilled people with this glorified chatbot right here"
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 27, 2025, 01:49:12 PM
Quote from: Tamas on August 27, 2025, 01:33:41 PMOne aspect is, I suspect, that there is a looming economic downturn and AI is as good an excuse as it gets for companies for getting rid of people. "look, we are not struggling to keep growing at all! We are in fact growing so much thst we are going to replace highly skilled people with this glorified chatbot right here"

I think there is a lot to your observation.  Add to that the economic incentive to produce short term results, and we get a perfect storm. 
Title: Re: The AI dooooooom thread
Post by: Josquius on August 27, 2025, 04:56:58 PM
Quote from: crazy canuck on August 27, 2025, 12:07:05 PM
Quote from: Josquius on August 27, 2025, 10:50:13 AM
Quote from: frunk on August 27, 2025, 10:39:05 AM
Quote from: Josquius on August 27, 2025, 09:50:48 AMChat GPT does use regular search and provide sources- though says nothing about quality of sources and will often go and make something up unsourced.

That's using the LLM itself to generate sources, which as with the rest of the model isn't an authenticated attribution and shouldn't be used as such.

I'm not sure what you mean.
If it says something is a fact then links you off to a journal where you see it says precisely that then isn't that a relevant source?
Its like wikipedia. Anyone can edit it and a rubbish source in itself, but using it as a shortcut to valid sources can be OK.

Let me give you a concreate example from the work I do - a figure in a scholarly paper submitted for publication to a journal was caught by peer review as probably being drafted by AI (which was both not disclosed and resulted in the paper being essentially nonsensical).  It was not hard to spot the problems. The paper cited a source that was 10 years old (one of the hallmarks of AI generated research because it is just trained on data, it does not go out and find up to date data). But I digress. The figure the AI generated from the data had no relationship to the data that was in the cited source.  It was a complete fabrication. 


OK?
So this time it gave a wrong number and lied about a source.
You checked that and saw that so wouldn't use it.
Sometimes however the check shows it is giving things correctly.
In that case it is still a valid source no matter how you found it.

QuoteI have repeated that very thing many times to Jos, I am not sure why he has still not understood the point.  Generative AI is trained on data, it does not search for accurate data or sources
Except when it does.
What you're saying here is a generalisation that doesn't reflect how chat gpt at least works in practice.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 27, 2025, 05:13:43 PM
I can only speak to news a little bit. AI is trained on data to build the model where there is a time limit. But those models are applied on other data as well.

Open AI, Amazon, Google etc (basically all the AI companies) have all been doing big licensing deals with many news publishers. This is why I think the agencies will do well - I think AP News were the first publisher to do a deal with OpenAI. But others have followed: NewsCorp, Axel Springer, AFP, Reuters etc.

So the underlying model will be based on a point in time. But - depending on what you're looking at - in generating outputs it will be applied to fresher data (this is really important for news publishers because if it was frozen in time they wouldn't be able to flow through retractions, corrections etc). If it's Google generative search then the data Google is applying the Gemini model to is the same data it uses to index the web for search.

That same data will also be used for building the next model.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 27, 2025, 06:20:16 PM
Quote from: Josquius on August 27, 2025, 04:56:58 PM
Quote from: crazy canuck on August 27, 2025, 12:07:05 PM
Quote from: Josquius on August 27, 2025, 10:50:13 AM
Quote from: frunk on August 27, 2025, 10:39:05 AM
Quote from: Josquius on August 27, 2025, 09:50:48 AMChat GPT does use regular search and provide sources- though says nothing about quality of sources and will often go and make something up unsourced.

That's using the LLM itself to generate sources, which as with the rest of the model isn't an authenticated attribution and shouldn't be used as such.

I'm not sure what you mean.
If it says something is a fact then links you off to a journal where you see it says precisely that then isn't that a relevant source?
Its like wikipedia. Anyone can edit it and a rubbish source in itself, but using it as a shortcut to valid sources can be OK.

Let me give you a concreate example from the work I do - a figure in a scholarly paper submitted for publication to a journal was caught by peer review as probably being drafted by AI (which was both not disclosed and resulted in the paper being essentially nonsensical).  It was not hard to spot the problems. The paper cited a source that was 10 years old (one of the hallmarks of AI generated research because it is just trained on data, it does not go out and find up to date data). But I digress. The figure the AI generated from the data had no relationship to the data that was in the cited source.  It was a complete fabrication. 


OK?
So this time it gave a wrong number and lied about a source.
You checked that and saw that so wouldn't use it.
Sometimes however the check shows it is giving things correctly.
In that case it is still a valid source no matter how you found it.

QuoteI have repeated that very thing many times to Jos, I am not sure why he has still not understood the point.  Generative AI is trained on data, it does not search for accurate data or sources
Except when it does.
What you're saying here is a generalisation that doesn't reflect how chat gpt at least works in practice.


You are missing the point.  The paper was submitted for publication by the authors of the paper.  It was caught by peer reviewers. 

But more importantly, generative AI does not work the way you keep saying it does.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 27, 2025, 06:23:54 PM
Quote from: Sheilbh on August 27, 2025, 05:13:43 PMI can only speak to news a little bit. AI is trained on data to build the model where there is a time limit. But those models are applied on other data as well.

Open AI, Amazon, Google etc (basically all the AI companies) have all been doing big licensing deals with many news publishers. This is why I think the agencies will do well - I think AP News were the first publisher to do a deal with OpenAI. But others have followed: NewsCorp, Axel Springer, AFP, Reuters etc.

So the underlying model will be based on a point in time. But - depending on what you're looking at - in generating outputs it will be applied to fresher data (this is really important for news publishers because if it was frozen in time they wouldn't be able to flow through retractions, corrections etc). If it's Google generative search then the data Google is applying the Gemini model to is the same data it uses to index the web for search.

That same data will also be used for building the next model.

Yes, currency of some of the information on which generative AI is trained will improve for some things, and get worse for others (scholarly journals for example, will not be used to train AI).

So think about the ramifications of just that sentence.  News reports that give poor quality reports of the findings of scholarly works are going to train generative AI applications.  Soon it will be common knowledge that the scholarly literature agrees that vaccines cause autism  :P
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 27, 2025, 06:38:11 PM
So first of all it's not totally true. Major journal publishers have also been doing big licensing deals with the AI companies. Wiley, Taylor & Francis, the New England Journal of Medicine and major university presses have all done them or announced they're in negotiations for both on journal articles and full books. So they will increasingly be part of the training data.

Given the way Google routinely abuse their market position - for example in the early days you could not opt out of your content being part of their AI building efforts unless you opted out of Google search indexing - I would be astonished if they're not using that power to get any content available on Google Scholar into their models.

I think since AI companies have, largely, accepted that they need to get licenses for content that they pay for - these deals are only going to become more common as it will primarily be a question of commercials and royalties and use/liability limitations. The big challenge for news publishers and, I'd argue, academic presses is to make sure they extract enough money out of the AI companies because they didn't in internet round one - and their content, which is well-written, edited, legaled is very valuable for companies building LLMs.

I'd also add that I think your last sentence - and the slight disdain for the press, which is essential to a democratic society, is why I think generative search is winning and why I worry an AI product will win. Because people don't trust the press and understand biases but think the solution is not engaged, critical reading but that if you just put facts and collections of the right data you will get a less biased, less humanly flawed report from of it. Why read the news article about scholarly works when you can upload the pdf and do a Q&A with ChatGPT about it?

But I think that whole conversation is slightly separate from the example you posted. To my mind the simple point there is don't use an AI to write a paper.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on August 27, 2025, 09:02:15 PM
Wiley is actually one I have in mind that has a specific prohibition in its editorial policy for scholars to give any rights to an AI provider to use their data in any way.

Also, I don't have disdain for real press, I have disdain for the things that are posing as news sources online now.

Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 27, 2025, 09:43:27 PM
I think that's more to do with Wiley's business model - and pitch.

In 2024 they earned $45 million from licensing content for AI. They do not allow authors to have an "opt-out". However they position this as not allowing specific users to "opt-in or opt-out" - their argument is that basically it would buttress the argument made by some AI companies that licensing isn't feasible. Individual authors can't, but publishers can in order to get more money from licensing which then results in more/better royalties to authors than they'd get on a case by case basis.

While I can kind of see their point, I think there should be a right for creators to opt-out of AI regardless of their contractual arrangements.
Title: Re: The AI dooooooom thread
Post by: garbon on August 28, 2025, 02:44:01 AM
Quote from: DGuller on August 27, 2025, 08:20:23 AMSure.  Just for clarity, my question was:  "when does that mechanism of predicting the next word results in something that isn't too functionally different from real intelligence."

My question is the fundamental question you need answered for knowing when to use AI, when to use AI very carefully, or when not to use AI at all.  Of course it's a dangerous question to get the answer wrong for, that's what makes it an important question.

Thanks for expanding on that and yes, I agree entirely. It would be good to have such introductory articles highlight that as it does appear currently that many are saying it should be used all the time.

Quote from: DGuller on August 27, 2025, 08:20:23 AMThis is the question for which it helps understanding how LLMs work.  For example, they work really well in programming because programming is by design something from which you can infer patterns from without actually knowing for sure.  If I were to go into some new language and have to write a conditional statement, I wouldn't know for a fact that it would be an "if" statement, but that would be a pretty good guess.  Knowledge that can be guessed can be effectively compressed and synthesized.

For the same reason, the legal field is where LLMs can be very dangerous, at least general purpose LLMs.  To use an obsolete example, there was a case in 1973 called Roe vs. Wade that legalized abortion in many cases.  Would you be able to use your general knowledge to guess that one party of Roe, the other party was Wade, and that it was decided in 1973?  No, this is something that you just have to know.  A random happening here or there, and it could've been Smith v. Miller decided in 1975 that legalized the abortion.  All that means is that it's very dangerous to generalize about laws, and generalization is what intelligence is about.  Even an intelligent human being who's not educated about the law can be very dangerous if he doesn't understand the limitation of his intelligence when it comes to law and tries to reason out the legal questions based on general knowledge.

I would add that it is also a shame how companies like Google help to reinforce misperceptions by their AI mode* / AI overview above search results. It gives the impression that its AI is just a superpowered version of its search engine, even when what it displays has a tendency to be built on erroneous or irrelevant information. It makes me feel sometimes like at the start of popular search engines where people just trusted the first hit.


*I'm perhaps making an unwarranted assumption on its AI mode as I have not used it. -_-
Title: Re: The AI dooooooom thread
Post by: Syt on August 28, 2025, 03:01:57 AM
I plopped some SQL queries from our developers into ChatGPT 5 yesterday, and it broke it down for me in a way that made sense to me and helped me understand it. But as I don't know SQL I checked with my colleague who does and he said it was a good summary and it even figured out the functions and context from the names of the data fields. It provided me with a flow diagram and even let me test out the whole thing by giving it variables for the various fields.

For a layperson like me that's very valuable; though I of course for now I would rely on colleagues to verify the accuracy of the analysis.
Title: Re: The AI dooooooom thread
Post by: Josquius on August 28, 2025, 04:33:05 AM
Quote from: garbon on August 28, 2025, 02:44:01 AMI would add that it is also a shame how companies like Google help to reinforce misperceptions by their AI mode* / AI overview above search results. It gives the impression that its AI is just a superpowered version of its search engine, even when what it displays has a tendency to be built on erroneous or irrelevant information. It makes me feel sometimes like at the start of popular search engines where people just trusted the first hit.


*I'm perhaps making an unwarranted assumption on its AI mode as I have not used it. -_-

For sure there's the AI= better buzz.
Google's AI result definitely is drawn from search results.... though I find it basically summarises the top one for you no matter how wrong it is. Sometimes even if the result is actually right, it interprets it terribly and makes it sound wrong.

Quote from: crazy canuck on August 27, 2025, 06:20:16 PMYou are missing the point.  The paper was submitted for publication by the authors of the paper.  It was caught by peer reviewers. 

So there's two extremes to using AI.

1: This way. What I'd call the lazy student method, though is done by more than students. Ask AI something, get an answer, just copy and paste it as-is. Done.
Any organisations that adopt this way of working are really going to screw themselves over as the errors build up.

2: Refuse to have anything to do with AI. Absolute purist. Can't be used at all.
But as desirable as a world without AI may be, if you're not using it your competitors will be, and there are efficiency gains to be had. Even if you're having to check AI's working, it at least gives you a pointer and can save time.
Organisations following this path are similarly in trouble.

The best way of working with it is to recognise what it can do and what its common mistakes are.

QuoteBut more importantly, generative AI does not work the way you keep saying it does.
Yes it does.
Go use chat gpt today and you'll see thats what it does.
Ask it for instance whats the capital of Finland- this is an easy one. Its training data is full of this fact. It can say with confidence based on just guessing the next most likely word that Helsinki is the capital of Finland.

Ask it something more complicated though, for instance when did Helsinki's first bookshop open.... then using its training data/most likely word guessing it could tell you a broad range or maybe take a stab at a date... but here it would be hallucinating something that sounds vaguely right based off knowledge (i.e. words that follow other words in sources it has had fed into it) it does have about related topics.
Chat GPT at the moment seems to have coded in a high degree of caution around such things. If its got a <70% (I guess) likelihood of being right based on training data it will then use search.

Go try it yourself.
Ask it about Roe vs. Wade, world famous, much discussed, and it should be pretty fast with writing something up for you.
Ask it about Smith vs. Gupta, I just made that up, then it probably won't jump straight to hallucination. It will take a bit longer and search.
Title: Re: The AI dooooooom thread
Post by: Syt on September 06, 2025, 01:30:29 AM
ChatGPT CEO has a moment:

(https://i.imgur.com/HI4X3ac.png)

(I know, he probably meant to troll Musk, but still ...)
Title: Re: The AI dooooooom thread
Post by: Sheilbh on September 07, 2025, 10:41:24 AM
Finally found a second use for AI - as someone who is not at all green-fingered it's very helpful on planning the garden.

I do not think this is what Sam Altman meant when he posted a picture of the Death Star to trail the release of GPT5...
Title: Re: The AI dooooooom thread
Post by: Syt on September 10, 2025, 07:30:13 AM
(https://i.imgur.com/fOGHOXt.png)

 :D
Title: Re: The AI dooooooom thread
Post by: grumbler on September 11, 2025, 10:38:01 AM
Quote from: Syt on September 10, 2025, 07:30:13 AM(https://i.imgur.com/fOGHOXt.png)

 :D

 :lol:
Title: Re: The AI dooooooom thread
Post by: garbon on September 11, 2025, 11:49:14 AM
It is good :)
Title: Re: The AI dooooooom thread
Post by: Sheilbh on September 29, 2025, 01:45:39 PM
To be positive again - this is fantastic journalism by the Guardian, using AI as a tool for journalists and also explaining how they've done it:
https://www.theguardian.com/world/ng-interactive/2025/sep/28/inside-the-everyday-facebook-networks-where-far-right-ideas-grow

Really good example of a "how to" I think - and given the Guardian is the bit of the UK press that always gets the massive terabytes of data leaks, I think this is something they will explore responsibly.
Title: Re: The AI dooooooom thread
Post by: Josquius on September 29, 2025, 01:52:49 PM
That Facebook is a far right cess pool I've known for a long time. Huge factor in boomer radicalisation.
Nice to see this getting attention and proof.

And yes. I can think of a few similar mass data crunching uses for AI. Image recognition has a lot of potential in managing traffic flows, making a fairer council tax system, spotting fraud etc...
Title: Re: The AI dooooooom thread
Post by: Sheilbh on September 29, 2025, 01:57:50 PM
I believe it's already being used in newsrooms as a first pass for the open source style recognising where a photo is taken and other stuff that investigative desks do - again as a first pass with the investigative reporters also working on it. At least in the broadsheets who have investigative reporters.

I imagine there are far fewer controls on, say, the way Reach are using AI :bleeding:

Edit: Also sorry I linked to the article only. It's in the footnote but the separate article detailing how they used AI was I think a model in this area:
https://www.theguardian.com/world/2025/sep/28/reading-the-post-riot-posts-how-we-traced-far-right-radicalisation-across-51000-facebook-messages
Title: Re: The AI dooooooom thread
Post by: Josquius on September 30, 2025, 03:20:19 AM
Can't find an example but I've definitely seen on the reach owned local papers site bluntly factual articles by a young looking guy with the title "AI-assisted reporter".
Probably just parsing court records or so.
I'm sure I remember them recruiting for this - posted it on languish I think - salary was minimum wage.
Title: Re: The AI dooooooom thread
Post by: grumbler on September 30, 2025, 08:29:59 AM
Quote from: Josquius on September 29, 2025, 01:52:49 PMThat Facebook is a far right cess pool I've known for a long time. Huge factor in boomer radicalisation.

Is "boomer radicalization" really such a problem in the UK?  What do radical boomers do?  SHake their walking canes at poor downtrodden millennials?
Title: Re: The AI dooooooom thread
Post by: Josquius on September 30, 2025, 09:31:06 AM
Quote from: grumbler on September 30, 2025, 08:29:59 AM
Quote from: Josquius on September 29, 2025, 01:52:49 PMThat Facebook is a far right cess pool I've known for a long time. Huge factor in boomer radicalisation.

Is "boomer radicalization" really such a problem in the UK?  What do radical boomers do?  SHake their walking canes at poor downtrodden millennials?

Brexit.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on September 30, 2025, 10:59:00 AM
Quote from: grumbler on September 30, 2025, 08:29:59 AM
Quote from: Josquius on September 29, 2025, 01:52:49 PMThat Facebook is a far right cess pool I've known for a long time. Huge factor in boomer radicalisation.

Is "boomer radicalization" really such a problem in the UK?  What do radical boomers do?  SHake their walking canes at poor downtrodden millennials?

Worse, they vote
Title: Re: The AI dooooooom thread
Post by: Josquius on September 30, 2025, 11:34:16 AM
One for CC.

https://www.404media.co/18-lawyers-caught-using-ai-explain-why-they-did-it/
Title: Re: The AI dooooooom thread
Post by: grumbler on September 30, 2025, 12:25:46 PM
Quote from: crazy canuck on September 30, 2025, 10:59:00 AM
Quote from: grumbler on September 30, 2025, 08:29:59 AMIs "boomer radicalization" really such a problem in the UK?  What do radical boomers do?  SHake their walking canes at poor downtrodden millennials?

Worse, they vote

I have honestly never thought that voting was radical. Care to explain why you think that it is?
Title: Re: The AI dooooooom thread
Post by: crazy canuck on September 30, 2025, 01:10:28 PM
Quote from: grumbler on September 30, 2025, 12:25:46 PM
Quote from: crazy canuck on September 30, 2025, 10:59:00 AM
Quote from: grumbler on September 30, 2025, 08:29:59 AMIs "boomer radicalization" really such a problem in the UK?  What do radical boomers do?  SHake their walking canes at poor downtrodden millennials?

Worse, they vote

I have honestly never thought that voting was radical. Care to explain why you think that it is?

Voting is not radical, radicalized boomers are voting.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on September 30, 2025, 01:20:24 PM
Quote from: grumbler on September 30, 2025, 12:25:46 PM
Quote from: crazy canuck on September 30, 2025, 10:59:00 AM
Quote from: grumbler on September 30, 2025, 08:29:59 AMIs "boomer radicalization" really such a problem in the UK?  What do radical boomers do?  SHake their walking canes at poor downtrodden millennials?

Worse, they vote

I have honestly never thought that voting was radical. Care to explain why you think that it is?

Radical voting is when you wear your hat backwards and ride your skateboard into the voting booth.
Title: Re: The AI dooooooom thread
Post by: The Brain on September 30, 2025, 01:29:24 PM
I tend to wear rad suits.
Title: Re: The AI dooooooom thread
Post by: jimmy olsen on September 30, 2025, 09:44:32 PM
Quote from: The Brain on September 30, 2025, 01:29:24 PMI tend to wear rad suits.

I wouldn't have thought yellow was your color.

https://fallout.fandom.com/wiki/Radiation_suit_(Fallout:_New_Vegas)
Title: Re: The AI dooooooom thread
Post by: The Brain on October 01, 2025, 06:08:09 AM
Quote from: jimmy olsen on September 30, 2025, 09:44:32 PM
Quote from: The Brain on September 30, 2025, 01:29:24 PMI tend to wear rad suits.

I wouldn't have thought yellow was your color.

https://fallout.fandom.com/wiki/Radiation_suit_(Fallout:_New_Vegas)

Fit for a king.
Title: Re: The AI dooooooom thread
Post by: Syt on October 07, 2025, 11:22:36 PM
https://www.theguardian.com/technology/ng-interactive/2025/oct/02/ai-children-parenting-creativity

Quote'My son genuinely believed it was real': Parents are letting little kids play with AI. Are they wrong?

Josh was at the end of his rope when he turned to ChatGPT for help with a parenting quandary. The 40-year-old father of two had been listening to his "super loquacious" four-year-old talk about Thomas the Tank Engine for 45 minutes, and he was feeling overwhelmed.

"He was not done telling the story that he wanted to tell, and I needed to do my chores, so I let him have the phone," recalled Josh, who lives in north-west Ohio. "I thought he would finish the story and the phone would turn off."

But when Josh returned to the living room two hours later, he found his child still happily chatting away with ChatGPT in voice mode. "The transcript is over 10k words long," he confessed in a sheepish Reddit post. "My son thinks ChatGPT is the coolest train loving person in the world. The bar is set so high now I am never going to be able to compete with that."

From radio and television to video games and tablets, new technology has long tantalized overstretched parents of preschool-age kids with the promise of entertainment and enrichment that does not require their direct oversight, even as it carried the hint of menace that accompanies any outside influence on the domestic sphere. A century ago, mothers in Arizona worried that radio programs were "overstimulating, frightening and emotionally overwhelming" for children; today's parents self-flagellate over screen time and social media.

But the startlingly lifelike capabilities of generative AI systems have left many parents wondering if AI is an entirely new beast. Chatbots powered by large language models (LLMs) are engaging young children in ways the makers of board games, Teddy Ruxpin, Furby and even the iPad never dreamed of: they produce personalized bedtime stories, carry on conversations tailored to a child's interests, and generate photorealistic images of the most far-fetched flights of fancy – all for a child who can not yet read, write or type.

Can generative AI deliver the holy grail of technological assistance to parents, serving as a digital Mary Poppins that educates, challenges and inspires, all within a framework of strong moral principles and age-appropriate safety? Or is this all just another Silicon Valley hype-bubble with a particularly vulnerable group of beta testers?

'My kids are the guinea pigs'

For Saral Kaushik, a 36-year-old software engineer and father of two in Yorkshire, a packet of freeze-dried "astronaut" ice-cream in the cupboard provided the inspiration for a novel use of ChatGPT with his four-year-old son.

"I literally just said something like, 'I'm going to do a voice call with my son and I want you to pretend that you're an astronaut on the ISS,'" Kaushik said. He also instructed the program to tell the boy that it had sent him a special treat.

"[ChatGPT] told him that he had sent his dad some ice-cream to try from space, and I pulled it out," Kaushik recalled. "He was really excited to talk to the astronaut. He was asking questions about how they sleep. He was beaming, he was so happy."

Childhood is a time of magic and wonder, and dwelling in the world of make-believe is not just normal but encouraged by experts in early childhood development, who have long emphasized the importance of imaginative play. For some parents, generative AI can help promote that sense of creativity and wonder.

Josh's daughter, who is six, likes to sit with him at the computer and come up with stories for ChatGPT to illustrate. (Several parents interviewed for this article requested to be identified by their first names only.) "When we started using it, it was willing to make an illustration of my daughter and insert that in the story," Josh said, though more recent safety updates have resulted in it no longer producing images of children. Kaushik also uses ChatGPT to convert family photographs into coloring book pages for his son.

Ben Kreiter, a father of three in Michigan, explained ChatGPT to his two-, six-, and eight-year-old children after they saw him testing its image-generation capabilities for work (he designs curriculums for an online parochial school). "I was like, 'I tell the computer a picture to make and it makes it,' and they said: 'Can we try?'" Soon, the children were asking to make pictures with ChatGPT every day. "It was cool for me to see what they are imagining that they can't quite [draw] on a piece of paper with their crayons yet."

Kreiter, like all the parents interviewed for this article, only allowed his children to use ChatGPT with his help and supervision, but as they became more enamored with the tool, his concern grew. In October 2024, news broke of a 14-year-old boy who killed himself after becoming obsessed with an LLM-powered chatbot made by Character.ai. Parents of at least two more teenagers have since filed lawsuits alleging that AI chatbots contributed to their suicides, and news reports increasingly highlight troubling tales of adults forming intense emotional attachments to the bots or otherwise losing touch with reality.

"The more that it became part of everyday life and the more I was reading about it, the more I realized there's a lot I don't know about what this is doing to their brains," Kreiter said. "Maybe I should not have my own kids be the guinea pigs."

Research into how generative AI affects child development is in its early stages, though it builds upon studies looking at less sophisticated forms of AI, such as digital voice assistants like Alexa and Siri. Multiple studies have found that young children's social interactions with AI tools differ subtly from those with humans, with children aged three to six appearing "less active" in conversations with smart speakers. This finding suggests that children perceive AI agents as existing somewhere in the middle of the divide between animate and inanimate entities, according to Ying Xu, a professor of education at the Harvard Graduate School of Education.

Understanding whether an object is a living being or an artefact is an important cognitive development that helps a child gauge how much trust to place in the object, and what kind of relationship to form with it, explained Xu, whose research focuses on how AI can promote learning for children. Children begin to make this distinction in infancy and usually develop a sophisticated understanding of it by age nine or 10. But while children have always imbued inanimate objects such as teddy bears and dolls with imagined personalities and capacities, at some level they know that the magic is coming from their own minds.

"A very important indicator of a child anthropomorphizing AI is that they believe AI is having agency," Xu said. "If they believe that AI has agency, they might understand it as the AI wanting to talk to them or choosing to talk to them. They feel that the AI is responding to their messages, and especially emotional disclosures, in ways that are similar to how a human responds. That creates a risk that they actually believe they are building some sort of authentic relationship."


In one study looking at children aged three to six responding to a Google Home Mini device, Xu found that the majority perceived the device to be inanimate, but some referred to it as a living being, and some placed it somewhere in between. Majorities thought the device possessed cognitive, psychological and speech-related capabilities (thinking, feeling, speaking and listening), but most believed it could not "see".

Parents who spoke with the Guardian remarked upon this kind of ontological gray zone in describing their children's interactions with generative AI. "I don't fully know what he thinks ChatGPT is, and it's hard to ask him," said Kaushik of his four-year-old. "I don't think he can articulate what he thinks it is."

Josh's daughter refers to ChatGPT as "the internet", as in, "I want to talk to 'the internet'." "She knows it's not a real person, but I think it's a little fuzzy," he said. "It's like a fairy that represents the internet as a whole."

For Kreiter, seeing his children interact with Amazon's Alexa at a friend's house raised another red flag. "They don't get that this thing doesn't understand them," he said. "Alexa is pretty primitive compared to ChatGPT, and if they're struggling with that ... I don't even want to go there with my kids."

A related concern is whether generative AI's capacity to deceive children is problematic. For Kaushik, his son's sheer joy at having spoken with what he thought was a real-life astronaut on the ISS led to a sense of unease, and he decided to explain that it was "a computer, not a person".

"He was so excited that I felt a bit bad," Kaushik said. "He genuinely believed it was real."

John, a 40-year-old father of two from Boston, experienced a similar qualm when his son, a four-year-old in the thralls of a truck obsession, asked whether the existence of monster trucks and fire trucks implied the existence of a monster-fire truck. Without thinking much of it, John pulled up Google's generative AI tool on his phone and used it to generate a photorealistic image of a truck that had elements of the two vehicles.

It was only after a pitched argument between the boy, who swore he had seen actual proof of the existence of a monster-fire truck, and his older sister, a streetwise seven-year-old who was certain that no such thing existed in the real world, that John started to wonder whether introducing generative AI into his children's lives had been the right call.

"It was a little bit of a warning to maybe be more intentional about that kind of thing," he said. "My wife and I have talked so much more about how we're going to handle social media than we have about AI. We're such millennials, so we've had 20 years of horror stories about social media, but so much less about AI."

To Andrew McStay, a professor of technology and society at Bangor University who specializes in research on AI that claims to detect human emotions, this kind of reality-bending is not necessarily a big concern. Recalling the early moving pictures of the Lumière brothers, he said: "When they first showed people a big screen with trains coming [toward them], people thought the trains were quite literally coming out of the screen. There's a maturing to be done ... People, children and adults, will mature."

Still, McStay sees a bigger problem with exposing children to technology powered by LLMs: "Parents need to be aware that these things are not designed in children's best interests."


Like Xu, McStay is particularly concerned with the way in which LLMs can create the illusion of care or empathy, prompting a child to share emotions – especially negative emotions. "An LLM cannot [empathize] because it's a predictive piece of software," he said. "When they're latching on to negative emotion, they're extending engagement for profit-based reasons. There is no good outcome for a child there."

Neither Xu nor McStay wants to ban generative AI for children, but they do warn that any benefits for children will only be unleashed through applications that are specifically designed to support children's development or education.

"There is something more enriching that's possible, but that comes from designing these things in a well-meaning and sincere way," said McStay.

Xu allows her own children to use generative AI – to a limited extent. Her daughter, who is six, uses the AI reading program that Xu designed to study whether AI can promote literacy and learning. She has also set up a custom version of ChatGPT to help her 10-year-old son with math and programming problems without just giving him the answers. (Xu has explicitly disallowed conversations about gaming and checks the transcripts to make sure her son's staying on topic.)

One of the benefits of generative AI mentioned to me by parents – the creativity they believe it fosters – is very much an open question, said Xu.

"There is still a debate over whether AI itself has creativity," she said. "It's just based on statistical predictions of what comes next, and a lot of people question if that counts as creativity. So if AI does not have creativity, is it able to support children to engage in creative play?"

A recent study found that having access to generative AI prompts did increase creativity for individual adults tasked with writing a short story, but decreased the overall diversity of the writers' collective output.

"I'm a little worried by this kind of homogenizing of expression and creativity," Xu said about the study. "For an individual child, it might increase their performance, but for a society, we might see a decrease of diversity in creative expressions."

AI 'playmates' for kids

Silicon Valley is notorious for its willingness to prioritize speed over safety, but major companies have at times shown a modicum of restraint when it came to young children. Both YouTube and Facebook had existed for at least a decade before they launched dedicated products for under-13s (the much-maligned YouTube Kids and Messenger Kids, respectively).

But the introduction of LLMs to young children appears to be barreling ahead at a breakneck pace.

While OpenAI bars users under 13 from accessing ChatGPT, and requires parental permission for teenagers, it is clearly aware that younger children are being exposed to it – and views them as a potential market.

In June, OpenAI announced a "strategic collaboration" with Mattel, the toymaker behind Barbie, Hot Wheels and Fisher-Price. That same month, chief executive Sam Altman responded to the tale of Josh's toddler (which went pretty viral on Reddit) with what sounded like a hint of pride. "Kids love voice mode on ChatGPT," he said on the OpenAI podcast, before acknowledging that "there will be problems" and "society will have to figure out new guardrails."

Meanwhile, startups such as Silicon Valley-based Curio – which collaborated with the musician Grimes on an OpenAI-powered toy named Grok – are racing to stuff LLM-equipped voice boxes into plushy toys and market them to children.

(Curio's Grok shares a name with Elon Musk's LLM-powered chatbot, which is notorious for its past promotion of Adolf Hitler and racist conspiracy theories. Grimes, who has three children with former partner Musk, was reportedly angered when Musk used a name she had chosen for their second child on another child, born to a different mother in a concurrent pregnancy of which Grimes was unaware. In recent months, Musk has expressed interest in creating a "Baby Grok" version of his software for children aged two to 12, according to the New York Times.)

The pitch for toys like Curio's Grok is that they can "learn" your child's personality and serve as a kind of fun and educational companion while reducing screen time. It is a classically Silicon Valley niche – exploiting legitimate concerns about the last generation of tech to sell the next. Company leaders have also referred to the plushy as something "between a little brother and a pet" or "like a playmate" – language that implies the kind of animate agency that LLMs do not actually have.

It is not clear if they are actually good enough toys for parents to worry too much about. Xu said that her daughter had quickly relegated AI plushy toys to the closet, finding the play possibilities "kind of repetitive". The children of Guardian and New York Times writers also voted against Curio's toys with their feet. Guardian writer Arwa Mahdawi expressed concern about how "unsettlingly obsequious" the toy was and decided she preferred allowing her daughter to watch Peppa Pig: "The little oink may be annoying, but at least she's not harvesting our data." Times writer Amanda Hess similarly concluded that using an AI toy to replace TV time – a necessity for many busy parents – is "a bit like unleashing a mongoose into the playroom to kill all the snakes you put in there".

But with the market for so-called smart toys – which includes AI-powered toys, projected to double to more than $25bn by 2030 – it is perhaps unrealistic to expect restraint.

This summer, notices seeking children aged four to eight to help "a team from MIT and Harvard" test "the first AI-powered storytelling toy" appeared in my neighborhood in Brooklyn. Intrigued, I made an appointment to stop by their offices.

The product, Geni, is a close cousin to popular screen-free audio players such as Yoto and the Toniebox. Rather than playing pre-recorded content (Yoto and Tonies offer catalogs of audiobooks, podcasts and other kid-friendly content for purchase), however, Geni uses an LLM to generate bespoke short stories. The device allows child users to select up to three "tiles" representing a character, object or emotion, then press a button to generate a chunk of narrative that ties the tiles together, which is voiced aloud. Parents can also use an app to program blank tiles.

Geni co-founders Shannon Li and Kevin Tang struck me as being serious and thoughtful about some of the risks of AI products for young children. They "feel strongly about not anthropomorphizing AI", Tang said. Li said that they want kids to view Geni, "not as a companion" like the voice-box plushies, but as "a tool for creativity that they already have".

Still, it's hard not to wonder whether an LLM can actually produce particularly engaging or creativity-sparking stories. Geni is planning to sell sets of tiles with characters they develop in-house alongside the device, but the actual "storytelling" is done by the kind of probability-based technology that tends toward the average.

The story I prompted by selecting the wizard and astronaut tiles was insipid at best:

They stumbled upon a hidden cave glowing with golden light.

"What's that?" Felix asked, peeking inside.

"A treasure?" Sammy wondered, her imagination swirling, "or maybe something even cooler."

Before they could decide, a wave rushed into the cave, sending bubbles bursting around them.


The Geni team has trained their system on pre-existing children's content. Does using generative AI solve a problem for parents that the canon of children's audio content cannot? When I ran the concept by one parent of a five-year-old, he responded: "They're just presenting an alternative to books. It's a really good example of grasping for uses that are already handled by artists or living, breathing people."

The market pressures of startup culture leave little time for such existential musings, however. Tang said the team is eager to bring their product to market before voice-box plushies sour parents on the entire concept of AI for kids.

When I asked Tang whether Geni would allow parents to make tiles for, say, a gun – not a far-fetched idea for many American families – he said they would have to discuss the issue as a company.

"Post-launch, we'll probably bring on an AI ethics person to our team," he said.

"We also don't want to limit knowledge," he added. "As of now there's no right or wrong answer to how much constraint we want to put in ... But obviously we're referencing a lot of kids content that's already out there. Bluey probably doesn't have a gun in it, right?"

Title: Re: The AI dooooooom thread
Post by: garbon on October 08, 2025, 01:24:11 AM
I told my child he was talking to an astronaut and he believed me...
Title: Re: The AI dooooooom thread
Post by: Kolytsin on October 08, 2025, 01:36:33 AM
Quote from: garbon on October 08, 2025, 01:24:11 AMI told my child he was talking to an astronaut and he believed me...

I guess gullibility runs in the family.  :hug:
Title: Re: The AI dooooooom thread
Post by: Josquius on October 08, 2025, 06:19:36 AM
It is hard getting kids to understand when something isn't real.
Never mind AI, with smart speakers its tricky enough.
Title: Re: The AI dooooooom thread
Post by: grumbler on October 08, 2025, 08:14:54 AM
Quote from: Kolytsin on October 08, 2025, 01:36:33 AMI guess gullibility runs in the family.  :hug:

Holy Blast from the Past, Batman!

Welcome back.
Title: Re: The AI dooooooom thread
Post by: garbon on October 08, 2025, 08:56:49 AM
Quote from: grumbler on October 08, 2025, 08:14:54 AM
Quote from: Kolytsin on October 08, 2025, 01:36:33 AMI guess gullibility runs in the family.  :hug:

Holy Blast from the Past, Batman!

Welcome back.

Oh I updated my ignore list in response. :blush:
Title: Re: The AI dooooooom thread
Post by: Jacob on October 08, 2025, 10:31:47 AM
Quote from: garbon on October 08, 2025, 01:24:11 AMI told my child he was talking to an astronaut and he believed me...

For a moment there I didn't realize you were quoting the article and I was all "what? garbon has a child?!?" and trying to decide whether I was going to engage you in conversation about your apparent parenting methods.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on October 08, 2025, 10:36:38 AM
Next Garbon will be telling his child about Santa
Title: Re: The AI dooooooom thread
Post by: garbon on October 08, 2025, 12:13:24 PM
Quote from: Jacob on October 08, 2025, 10:31:47 AM
Quote from: garbon on October 08, 2025, 01:24:11 AMI told my child he was talking to an astronaut and he believed me...

For a moment there I didn't realize you were quoting the article and I was all "what? garbon has a child?!?" and trying to decide whether I was going to engage you in conversation about your apparent parenting methods.

:lol:
Title: Re: The AI dooooooom thread
Post by: garbon on October 08, 2025, 12:14:28 PM
Quote from: crazy canuck on October 08, 2025, 10:36:38 AMNext Garbon will be telling his child about Santa

At least that is a socially sanctioned lie.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on October 08, 2025, 01:02:14 PM
Quote from: garbon on October 08, 2025, 12:14:28 PM
Quote from: crazy canuck on October 08, 2025, 10:36:38 AMNext Garbon will be telling his child about Santa

At least that is a socially sanctioned lie.

And my impression is that if you ever did decide to become a parent, you would probably take the time to listen to your child talk about Thomas rather than handing the conversation over to some AI app.
Title: Re: The AI dooooooom thread
Post by: garbon on October 08, 2025, 01:37:38 PM
Quote from: crazy canuck on October 08, 2025, 01:02:14 PM
Quote from: garbon on October 08, 2025, 12:14:28 PM
Quote from: crazy canuck on October 08, 2025, 10:36:38 AMNext Garbon will be telling his child about Santa

At least that is a socially sanctioned lie.

And my impression is that if you ever did decide to become a parent, you would probably take the time to listen to your child talk about Thomas rather than handing the conversation over to some AI app.
:hug:
Title: Re: The AI dooooooom thread
Post by: Savonarola on October 09, 2025, 03:53:17 PM
I was reading an article on a generative AI application for channel modeling with multiple nodes this morning and came across the practical Byzantine Fault Tolerance.  It's an algorithm designed to build consensus when not all nodes respond or some respond with faulty information.  It's derived from the Byzantine General's Problem first published in 1982 by Leslie Lamport, Robert Shostak, and Marshall Pease:

QuoteImagine that several divisions of the Byzantine army are camped outside an enemy city, each division commanded by its own general. The generals can communicate with one another only by messenger. After observing the enemy, they must decide upon a common plan of action. However, some of the generals may be traitors, trying to prevent the loyal generals from reaching an agreement. The generals must decide on when to attack the city, but they need a strong majority of their army to attack at the same time. The generals must have an algorithm to guarantee that (a) all loyal generals decide upon the same plan of action, and (b) a small number of traitors cannot cause the loyal generals to adopt a bad plan. The loyal generals will all do what the algorithm says they should, but the traitors may do anything they wish. The algorithm must guarantee condition (a) regardless of what the traitors do. The loyal generals should not only reach agreement, but should agree upon a reasonable plan.

I thought Languish would appreciate that for the sheer Byzantineness of the algorithm.
Title: Re: The AI dooooooom thread
Post by: Jacob on October 09, 2025, 10:53:10 PM
Let's hope the people supplying the algorithm to the generals aren't traitors.
Title: Re: The AI dooooooom thread
Post by: Savonarola on October 10, 2025, 05:39:18 AM
Quote from: Jacob on October 09, 2025, 10:53:10 PMLet's hope the people supplying the algorithm to the generals aren't traitors.

The three people who formulated the original algorithm worked for Microsoft, so I'll let you draw your own conclusions. 

 ;)
Title: Re: The AI dooooooom thread
Post by: Josquius on October 10, 2025, 06:22:12 AM
Speaking of people working for M$....

bafkreib2eukgrfbermghpdjwnja2j3a5bsaerk3rnre6olxfqhy5qyn4by.jpg
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on October 10, 2025, 09:21:25 AM
I hate it when my home servers rebel against me. Just last week I had to put down a rebellion led by one of my Raspberry Pis. Once it was captured, I desoldered its Bluetooth and Wifi chips and imprisoned it in the basement.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on October 10, 2025, 09:58:17 AM
The electronics equivalent of nerve stapling.  :(
Title: Re: The AI dooooooom thread
Post by: mongers on October 10, 2025, 02:51:38 PM
Quote from: HisMajestyBOB on October 10, 2025, 09:21:25 AMI hate it when my home servers rebel against me. Just last week I had to put down a rebellion led by one of my Raspberry Pis. Once it was captured, I desoldered its Bluetooth and Wifi chips and imprisoned it in the basement.

 :lol:
Title: Re: The AI dooooooom thread
Post by: mongers on October 10, 2025, 02:53:25 PM
In this bbc article about tech billionaires preparing for disaster, AI or other, I found this rather amusing:

Quote"Saying you're 'buying a house in New Zealand' is kind of a wink, wink, say no more," Reid Hoffman previously said. The same presumably goes for bunkers.

But there's a distinctly human flaw.

I once met a former bodyguard of one billionaire with his own "bunker", who told me his security team's first priority, if this really did happen, would be to eliminate said boss and get in the bunker themselves. And he didn't seem to be joking.


Tech billionaires seem to be doom prepping. Should we all be worried? (https://www.bbc.co.uk/news/articles/cly17834524o)
Title: Re: The AI dooooooom thread
Post by: Tonitrus on October 10, 2025, 02:54:58 PM
QuoteI once met a former bodyguard of one billionaire with his own "bunker", who told me his security team's first priority, if this really did happen, would be to eliminate said boss and get in the bunker themselves. And he didn't seem to be joking.

Perfectly sensible.  If the world goes into a state where one needs to hide in a bunker, all those billions (be it in stocks, gold, or bitcoin) become instantly useless.  Wealth will be in the form of canned food and shotguns.
Title: Re: The AI dooooooom thread
Post by: mongers on October 10, 2025, 03:02:05 PM
Quote from: Tonitrus on October 10, 2025, 02:54:58 PM
QuoteI once met a former bodyguard of one billionaire with his own "bunker", who told me his security team's first priority, if this really did happen, would be to eliminate said boss and get in the bunker themselves. And he didn't seem to be joking.

Perfectly sensible.  If the world goes into a state where one needs to hide in a bunker, all those billions (be it in stocks, gold, or bitcoin) become instantly useless.  Wealth will be in the form of canned food and shotguns.
Quote from: Tonitrus on October 10, 2025, 02:54:58 PM
QuoteI once met a former bodyguard of one billionaire with his own "bunker", who told me his security team's first priority, if this really did happen, would be to eliminate said boss and get in the bunker themselves. And he didn't seem to be joking.

Perfectly sensible.  If the world goes into a state where one needs to hide in a bunker, all those billions (be it in stocks, gold, or bitcoin) become instantly useless.  Wealth will be in the form of canned food and shotguns.

Ed Anger would have approved of your message.  :(
Title: Re: The AI dooooooom thread
Post by: frunk on October 10, 2025, 03:13:58 PM
Quote from: Tonitrus on October 10, 2025, 02:54:58 PMPerfectly sensible.  If the world goes into a state where one needs to hide in a bunker, all those billions (be it in stocks, gold, or bitcoin) become instantly useless.  Wealth will be in the form of canned food and shotguns.

The best way to survive the apocalypse is to work as hard as possible to make sure there isn't one, but supporting stability, institutions and global conditions isn't fashionable among, well, anybody nowadays.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on October 10, 2025, 04:38:38 PM
Quote from: mongers on October 10, 2025, 03:02:05 PMEd Anger would have approved of your message.  :(

The best way for our theoretical billionaire to go is to have/find (or make, I guess) family to act as bodyguards, or least members of a survival clan.  Family bonds (sometimes real friendships, or other close-knit social bonds) are the only ones that one can likely count on to hold in a true survival situation.
Title: Re: The AI dooooooom thread
Post by: mongers on October 10, 2025, 05:02:06 PM
Quote from: Tonitrus on October 10, 2025, 04:38:38 PM
Quote from: mongers on October 10, 2025, 03:02:05 PMEd Anger would have approved of your message.  :(

The best way for our theoretical billionaire to go is to have/find (or make, I guess) family to act as bodyguards, or least members of a survival clan.  Family bonds (sometimes real friendships, or other close-knit social bonds) are the only ones that one can likely count on to hold in a true survival situation.

You've been thinking about this.. A LOT. 

Know of any disused Titan II silos?  :D
Title: Re: The AI dooooooom thread
Post by: HVC on October 10, 2025, 05:14:47 PM
Quote from: Tonitrus on October 10, 2025, 04:38:38 PM
Quote from: mongers on October 10, 2025, 03:02:05 PMEd Anger would have approved of your message.  :(

The best way for our theoretical billionaire to go is to have/find (or make, I guess) family to act as bodyguards, or least members of a survival clan.  Family bonds (sometimes real friendships, or other close-knit social bonds) are the only ones that one can likely count on to hold in a true survival situation.

Don't marry into the praetorian, you're giving them an in into the empire :ph34r:
Title: Re: The AI dooooooom thread
Post by: HVC on October 10, 2025, 05:16:11 PM
Quote from: mongers on October 10, 2025, 05:02:06 PM
Quote from: Tonitrus on October 10, 2025, 04:38:38 PM
Quote from: mongers on October 10, 2025, 03:02:05 PMEd Anger would have approved of your message.  :(

The best way for our theoretical billionaire to go is to have/find (or make, I guess) family to act as bodyguards, or least members of a survival clan.  Family bonds (sometimes real friendships, or other close-knit social bonds) are the only ones that one can likely count on to hold in a true survival situation.

You've been thinking about this.. A LOT. 

Know of any disused Titan II silos?  :D

Northern wastes. Plenty of game in the summer to stockpile and less people. You like nature, you'll fit right in. Might have to find a way to get to Scandinavia or Canada, I don't think Scotland is big enough or empty enough :D
Title: Re: The AI dooooooom thread
Post by: Tonitrus on October 10, 2025, 05:37:23 PM
Quote from: mongers on October 10, 2025, 05:02:06 PM
Quote from: Tonitrus on October 10, 2025, 04:38:38 PM
Quote from: mongers on October 10, 2025, 03:02:05 PMEd Anger would have approved of your message.  :(

The best way for our theoretical billionaire to go is to have/find (or make, I guess) family to act as bodyguards, or least members of a survival clan.  Family bonds (sometimes real friendships, or other close-knit social bonds) are the only ones that one can likely count on to hold in a true survival situation.

You've been thinking about this.. A LOT. 

Know of any disused Titan II silos?  :D

Would an Atlas E do?

https://www.zillow.com/homedetails/50342-Brown-Rd-E-Sprague-WA-99032/2054503719_zpid/?

Or this one...may need some work:

https://www.zillow.com/homedetails/1441-N-260th-Rd-Lincoln-KS-67455/454639360_zpid/?
Title: Re: The AI dooooooom thread
Post by: mongers on October 10, 2025, 05:56:36 PM
Quote from: Tonitrus on October 10, 2025, 05:37:23 PM
Quote from: mongers on October 10, 2025, 05:02:06 PM
Quote from: Tonitrus on October 10, 2025, 04:38:38 PM
Quote from: mongers on October 10, 2025, 03:02:05 PMEd Anger would have approved of your message.  :(

The best way for our theoretical billionaire to go is to have/find (or make, I guess) family to act as bodyguards, or least members of a survival clan.  Family bonds (sometimes real friendships, or other close-knit social bonds) are the only ones that one can likely count on to hold in a true survival situation.

You've been thinking about this.. A LOT. 

Know of any disused Titan II silos?  :D

Would an Atlas E do?

https://www.zillow.com/homedetails/50342-Brown-Rd-E-Sprague-WA-99032/2054503719_zpid/?

Or this one...may need some work:

https://www.zillow.com/homedetails/1441-N-260th-Rd-Lincoln-KS-67455/454639360_zpid/?

Choose one, get it ready and Tricky, Shielbh and myself will row across the Atlantic to join up with Languish's Canadian refugees heading west toward it.  :)
Title: Re: The AI dooooooom thread
Post by: grumbler on October 10, 2025, 06:48:46 PM
Quote from: mongers on October 10, 2025, 05:56:36 PMChoose one, get it ready and Tricky, Shielbh and myself will row across the Atlantic to join up with Languish's Canadian refugees heading west toward it.  :)

Rowing would take too long.  Hijack an Incan Torpedo Boat instead.
Title: Re: The AI dooooooom thread
Post by: mongers on October 10, 2025, 06:53:53 PM
Quote from: grumbler on October 10, 2025, 06:48:46 PM
Quote from: mongers on October 10, 2025, 05:56:36 PMChoose one, get it ready and Tricky, Shielbh and myself will row across the Atlantic to join up with Languish's Canadian refugees heading west toward it.  :)

Rowing would take too long.  Hijack an Incan Torpedo Boat instead.

Good thinking, but in a post apocalypse you guys and the US Navy are The pirates.

So I'm hoping the crew on a renegade CVN will be laughing too much to bother with us?
Title: Re: The AI dooooooom thread
Post by: Tonitrus on October 10, 2025, 07:14:37 PM
Quote from: grumbler on October 10, 2025, 06:48:46 PM
Quote from: mongers on October 10, 2025, 05:56:36 PMChoose one, get it ready and Tricky, Shielbh and myself will row across the Atlantic to join up with Languish's Canadian refugees heading west toward it.  :)

Rowing would take too long.  Hijack an Incan Torpedo Boat instead.

Do the Giant Ants have intercontinental capability?
Title: Re: The AI dooooooom thread
Post by: PJL on October 11, 2025, 03:49:41 AM
I agree that assets like stocks, bonds, property will be completely useless in a post apocalyptic society, but I do think gold will still be useful as the only effective currency left in such a world.
Title: Re: The AI dooooooom thread
Post by: The Brain on October 11, 2025, 04:01:08 AM
Quote from: PJL on October 11, 2025, 03:49:41 AMI agree that assets like stocks, bonds, property will be completely useless in a post apocalyptic society, but I do think gold will still be useful as the only effective currency left in such a world.

No cap?
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on October 11, 2025, 05:00:58 AM
Quote from: PJL on October 11, 2025, 03:49:41 AMI agree that assets like stocks, bonds, property will be completely useless in a post apocalyptic society, but I do think gold will still be useful as the only effective currency left in such a world.

Food, ammunition, and cigarettes.  Maybe pharmaceuticals.
Title: Re: The AI dooooooom thread
Post by: Josquius on October 11, 2025, 05:15:28 AM
I wonder though how many people buying gold are actually buying gold Ron Swanson style rather than gold as an asset - I imagine that would be just as worthless as anything else if things went to pot.
Title: Re: The AI dooooooom thread
Post by: HVC on October 11, 2025, 08:04:29 AM
I don't think gold buyers are buying gold for apocalypse protection. It's to safe guard their money. Besides, in an apocalypse shiny rocks aren't going to do you much either.
Title: Re: The AI dooooooom thread
Post by: grumbler on October 11, 2025, 08:19:51 AM
Quote from: Tonitrus on October 10, 2025, 07:14:37 PMDo the Giant Ants have intercontinental capability?

Yes. Brest-Litovsk is best Litovsk.
Title: Re: The AI dooooooom thread
Post by: garbon on October 11, 2025, 09:16:08 AM
https://www.bbc.co.uk/news/articles/cz69qy760weo

Quote'It's going to be really bad': Fears over AI bubble bursting grow in Silicon Valley

At OpenAI's DevDay this week, OpenAI boss Sam Altman did what American tech bosses rarely do these days: he actually answered questions from reporters.

"I know it's tempting to write the bubble story," Mr Altman told me as he sat flanked by his top lieutenants. "In fact, there are many parts of AI that I think are kind of bubbly right now."

In Silicon Valley, the debate over whether AI companies are overvalued has taken on a new urgency.

Sceptics are privately - and some now publicly - asking whether the rapid rise in the value of AI tech companies may be, at least in part, the result of what they call "financial engineering".

In other words - there are fears these companies are overvalued.

Mr Altman said he expected investors would make some bad calls and silly start-ups would walk away with crazy money.

But with OpenAI, he told me, "there's something real happening here".

Not everyone is convinced.

In recent days, warnings of an AI bubble have come from the Bank of England, the International Monetary Fund, as well as JP Morgan boss Jamie Dimon who told the BBC "the level of uncertainty should be higher in most people's minds".

And here, in what is often considered the tech capital of the world, concerns are growing.

At a panel discussion at Silicon Valley's Computer History Museum this week, early AI entrepreneur Jerry Kaplan told a packed audience he has lived through four bubbles.

He's especially concerned now given the magnitude of money on the table as compared to the dot-com boom. There's so much more to lose.

"When [the bubble] breaks, it's going to be really bad, and not just for people in AI," he said.

"It's going to drag down the rest of the economy."

However, at the Stanford Graduate School of Business, which has minted its fair share of tech entrepreneurs, Prof Anat Admati says while there have been many attempts to model when we're in the bubble, it can be a futile exercise.

"It is very hard to time a bubble," Prof Admati told me. "And you can't say with certainty you were in one until after the bubble has burst."

But the data is concerning to many.

AI-related enterprises have accounted for 80%, external of the stunning gains in the American stock market this year - and Gartner estimates global spending on AI, external will likely reach a whopping $1.5tn (£1.1tn) before 2025 is out.

Tangled web of deals
OpenAI, which brought AI into the consumer mainstream with ChatGPT in 2022, is at the centre of the tangled web of deals drawing scrutiny.

For example - last month, it entered into a $100bn deal with chipmaker Nvidia, which is itself the most valuable publicly traded company in the world.

It expands an existing investment Nvidia already had in Mr Altman's company - with expectations that OpenAI will build data centres powered with Nvidia's advanced chips.

Then on Monday, OpenAI announced plans to purchase billions of dollars worth of equipment for developing AI from Nvidia rival AMD, in a deal that could make it one of AMD's largest shareholders.

Remember this is a private company, albeit one recently valued at a half-trillion dollars, external.

Then there's tech giant Microsoft, which is heavily invested, and cloud computing behemoth Oracle has a $300bn deal with OpenAI, too.

OpenAI's Stargate project, external in Abilene, Texas, funded with the help of Oracle and Japanese conglomerate SoftBank and announced at the White House during President Donald Trump's first week in office, grows ever larger every few months.

And as for Nvidia, it has a stake in AI startup CoreWeave - which supplies OpenAI with some of its massive infrastructure needs.

And as these increasingly complex financing arrangements get more and more common, the experts here in Silicon Valley say they may be clouding perceptions on AI demand.

Some people aren't mincing their words about it either, calling the deals "circular financing" or even "vendor financing" - where a company invests in or lends to its own customers so they can continue making purchases.

"Yes, the investment loans are unprecedented," Mr Altman told me on Monday.

But, he added, "it's also unprecedented for companies to be growing revenue this fast."

OpenAI's revenue is growing quickly, but it has never turned a profit.

And it is hardly a good sign that the people I've spoken to keep bringing up Nortel - the Canadian telecom equipment-maker that borrowed prolifically to help finance deals for their customers (and thereby artificially boost demand for their wares).

For his part, Nvidia's Jensen Huang defended his deal with OpenAI on CNBC Monday, saying the firm isn't required to buy his company's tech with the money he invests.

"They can use it to do anything they like," Huang said.

"There's no exclusivities. Our primary goal is just really to support them and help them grow – and grow the ecosystem."

Telltale signs
Mr Kaplan says he sees a couple of telltale signs the AI sector - and therefore the wider economy - could be in trouble.

In frothy times, he says, companies announce major initiatives and product plans that they don't yet have the capital for.

Meanwhile, retail investors clamour to get in on the start-up action.

The surge in AMD stock this week could indicate investors are trying to get a piece of the ChatGPT wealth machine - and while all this is playing out, real physical infrastructure aimed at satisfying the seemingly insatiable hunger for more AI development is being built.

"We're creating a new man-made ecological disaster: enormous data centres in remote places like deserts, that will be rusting away and leaching bad things into the environment, with no one left to hold accountable because the builders and investors will be long gone," Mr Kaplan said.

But even if we are in a bubble, the hope from Silicon Valley is investments being made now won't necessarily go to waste.

"The thing that comforts me is that the internet was built on the ashes of the over-investment into the telecom infrastructure of yesterday," said Jeff Boudier, who builds products at the AI community hub Hugging Face.

"If there is overinvestment into infrastructure for AI workloads, there may be financial risks tied to it," he said.

"But it's going to enable lots of great new products and experiences including ones we're not thinking about today."

There are plenty of believers in AI's potential to transform society.

The question is whether the money to fund the ambitions of the foremost companies in the sector may be drying up.

"Nvidia looks like the last lender or investor," said Rihard Jarc, who founded the UncoverAlpha newsletter.

"Who else has the capacity right now to invest $100 billion in another company?"
Title: Re: The AI dooooooom thread
Post by: crazy canuck on October 11, 2025, 09:39:05 AM
It's the mother of all shell games. 
Title: Re: The AI dooooooom thread
Post by: HVC on October 11, 2025, 09:46:21 AM
Naming your company after face huggers isn't ominous at all.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on October 11, 2025, 10:14:32 AM
Quote from: Josquius on October 11, 2025, 05:15:28 AMI wonder though how many people buying gold are actually buying gold Ron Swanson style rather than gold as an asset - I imagine that would be just as worthless as anything else if things went to pot.

Quote from: PJL on October 11, 2025, 03:49:41 AMI agree that assets like stocks, bonds, property will be completely useless in a post apocalyptic society, but I do think gold will still be useful as the only effective currency left in such a world.

If you have gold, and I have shotgun...I just take your gold.  But only after I take your food. 
Title: Re: The AI dooooooom thread
Post by: Tonitrus on October 11, 2025, 10:16:27 AM
Quote from: Admiral Yi on October 11, 2025, 05:00:58 AM
Quote from: PJL on October 11, 2025, 03:49:41 AMI agree that assets like stocks, bonds, property will be completely useless in a post apocalyptic society, but I do think gold will still be useful as the only effective currency left in such a world.

Food, ammunition, and cigarettes.  Maybe pharmaceuticals.

And, of course, sex.

(https://y.yarn.co/0f952333-df93-47ee-8ece-4924f1f3850b_text.gif)

As was said earlier...better to avoid the apocalypse.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on October 11, 2025, 06:13:24 PM
Quote from: Tonitrus on October 11, 2025, 10:16:27 AMAnd, of course, sex.

The post WWII German experience suggests sex will not be bartered at a premium.  After all, an apocalypse would not interrupt the supply.
Title: Re: The AI dooooooom thread
Post by: Syt on October 14, 2025, 12:43:17 AM
Something to look out for if you're using OneDrive (or other cloud storage, because let's face it, others will probably want to do it, too :P ):

https://hardware.slashdot.org/story/25/10/11/0238213/microsofts-onedrive-begins-testing-face-recognizing-ai-for-photos-for-some-preview-users

QuoteMicrosoft's OneDrive Begins Testing Face-Recognizing AI for Photos (for Some Preview Users)

I uploaded a photo on my phone to Microsoft's "OneDrive" file-hosting app — and there was a surprise waiting under Privacy and Permissions. "OneDrive uses AI to recognize faces in your photos..."

And...

"You can only turn off this setting 3 times a year."

If I moved the slidebar for that setting to the left (for "No"), it moved back to the right, and said "Something went wrong while updating this setting." (Apparently it's not one of those three times of the year.)

The feature is already rolling out to a limited number of users in a preview, a Microsoft publicist confirmed to Slashdot. (For the record, I don't remember signing up for this face-recognizing "preview".) But there's a link at the bottom of the screen for a "Microsoft Privacy Statement" that leads to a Microsoft support page, which says instead that "This feature is coming soon and is yet to be released." And in the next sentence it's been saying "Stay tuned for more updates" for almost two years...

A Microsoft publicist agreed to answer Slashdot's questions...


Slashdot: What's the reason OneDrive tells users this setting can only be turned off 3 times a year? (And are those any three times — or does that mean three specific days, like Christmas, New Year's Day, etc.)

[Microsoft's publicist chose not to answer this question.]

Slashdot: If I move the slidebar to the left (for "No"), it moves back to the right, and says "Something went wrong while updating this setting." So is it correct to say that there's no way for users to select "No" now?

Microsoft: We haven't heard about the experience you are having with toggling, but our Microsoft contacts would like to investigate why this is happening for you. Can you share what type of device you are using, so we can put you in touch with the right team?

Slashdot: Is this feature really still "coming soon"? Can you give me more specific details on when "soon" will be?

Microsoft: This feature is currently rolling out to limited users in a preview so we can learn and improve. We have nothing more to share at this time.

Slashdot: I want to confirm something about how this feature is "yet to be released." Does this mean that currently OneDrive is not (and has never) used AI to "recognize" faces in photos?

Microsoft: Privacy is built into all Microsoft OneDrive experiences. Microsoft OneDrive services adhere to the Microsoft Privacy Statement and follow Microsoft's compliance with General Data Protection Regulation and the Microsoft EU Data Boundary.

Slashdot: Some privacy advocates prefer "opt-in" features, but it looks like here OneDrive is planning a (limited) opt-out feature. What is the reasoning for going with opt-out rather than opt-in?

Microsoft: Microsoft OneDrive inherits privacy features and settings from Microsoft 365 and SharePoint, where applicable.

Slashdot also spoke to EFF security/privacy activist Thorin Klosowski, who expressed concerns. "Any feature related to privacy really should be opt-in and companies should provide clear documentation so its users can understand the risks and benefits to make that choice for themselves."

Microsoft's "three times a year" policy also seemed limiting to Klosowski. "People should also be able to change those settings at-will whenever possible because we all encounter circumstances were we need to re-evaluate and possibly change our privacy settings."


(https://i.imgur.com/G6tmCiS.png)
(To our UK users - they host their screenshot image on imgur, you may need to use a VPN to view :P )
Title: Re: The AI dooooooom thread
Post by: Syt on October 15, 2025, 03:52:59 AM
The Internet AI is for porn.

(https://i.imgur.com/5kp71S2.png)
Title: Re: The AI dooooooom thread
Post by: crazy canuck on October 15, 2025, 09:12:15 AM
Now that the corporate world has realized generative AI is, to be polite, sub-optimal a market needs to be developed.  And porn is the easy answer.

Spare a thought for the people earning their living making that content.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on October 15, 2025, 10:31:41 AM
Based on the many stories I have seen about the evidence collected against criminal suspects using "ChatGPT logs"...I am sure it will be a great idea to hand a powerful private entity a host of blackmail material.

But I predict it will still be popular.
Title: Re: The AI dooooooom thread
Post by: celedhring on October 15, 2025, 11:30:29 AM
Rule 34 to achieve new ghastly levels.
Title: Re: The AI dooooooom thread
Post by: Jacob on October 15, 2025, 12:00:05 PM
We are going to see so much celebrity porn, not to mention porn of specific individuals.

The creep with a crush on the cute barrista? Stalk her (or his) socials and get some AI generated porn that caters to the creep's fetishes featuring the barrista. And if the barrista somehow offends the creep, then it's easy to distribute the weird fetish porn to her (or his) social circles.

It's a big thing in South Korea already as I understand it, but if it becomes an easily accessible consumer level product... yeah, I expect we're going to see a lot of AI generated revenge porn.
Title: Re: The AI dooooooom thread
Post by: Valmy on October 15, 2025, 12:47:29 PM
Quote from: Jacob on October 15, 2025, 12:00:05 PMWe are going to see so much celebrity porn, not to mention porn of specific individuals.

The creep with a crush on the cute barrista? Stalk her (or his) socials and get some AI generated porn that caters to the creep's fetishes featuring the barrista. And if the barrista somehow offends the creep, then it's easy to distribute the weird fetish porn to her (or his) social circles.

It's a big thing in South Korea already as I understand it, but if it becomes an easily accessible consumer level product... yeah, I expect we're going to see a lot of AI generated revenge porn.

Yeah I guess that is an obvious use for this. Fuck even with all my AI paranoia I had not even thought of that.

Fuck making AI porn of my wife would be a weird thing to do, much less people I know casually.
Title: Re: The AI dooooooom thread
Post by: Syt on October 17, 2025, 03:57:05 AM
What could possibly go wrong? :)

https://www.businessinsider.com/even-top-generals-are-looking-to-ai-chatbots-for-answers-2025-10

QuoteEven top generals are looking to AI chatbots for answers

It's not just the civilian corporate executives and white-collar workers who are leaning into the generative AI boom at work. Military leaders are diving in too.

The top US Army commander in South Korea shared that he is experimenting with generative AI chatbots to sharpen his decision-making, not in the field, but in command and daily work.

He said "Chat and I" have become "really close lately."


"I'm asking to build, trying to build models to help all of us," said Maj. Gen. William 'Hank' Taylor, commanding general of the 8th Army, told reporters during a media roundtable at the annual Association of the United States Army conference in Washington, DC, on Monday.

Taylor said he's using the tech to explore how he makes military and personal decisions that affect not just him but the thousands of soldiers he oversees. While the tech is useful, though he acknowledged that keeping up with the pace of such rapidly developing technology is an enduring challenge.

"As a commander, I want to make better decisions," the general shared. "I want to make sure that I make decisions at the right time to give me the advantage."

AI in the military

Commanders like Taylor are focused on fast decision-making and how AI could provide an advantage because of a thought process popular with military leaders known as the "OODA Loop." The theory, developed by US fighter pilots during the Korean War, posits that troops who can move decisively before the enemy does — and observe, orient, decide, and act— often have the advantage on the battlefield.

The US military is embracing artificial intelligence with a recognition that decisions in future combat may need to be made faster than humans can make them.

The former Secretary of the Air Force said last year that he doesn't think the people saying that AI technology is "going to determine who's the winner in the next battlefield" are "all that far off." He also wrote that with the advancement of highly automated, highly autonomous kill chains, "response times to bring effects to bear are very short."

Predicting what future war will look like, he said that "we're going to be in a world where decisions will not be made at human speed. They're going to be made at machine speed."

AI is being integrated into drone tech, targeting, and data processing, among other capabilities — an AI algorithm has even piloted a modified F-16 through a simulated dogfight — but the military use of AI is not restricted to combat platforms.

Special Operations Forces, for instance, have sought to "reduce the cognitive burden of our operators" through the use of AI tools for paperwork, situation reports, concepts of operation, managing key supply and logistics demands, and other back-end work.

Operators have employed AI to analyze Pentagon doctrine, improve search functions, and make it easier for personnel who are transferring to a new location or position to catch up on the job and requirements quickly.

There are clear applications at the leadership level as well. Bianca Herlory, the Joint Staff AI lead, said at a panel event in April that "AI can significantly enhance the Joint Staff's ability to integrate and analyze global military operations, ultimately enabling better, faster decisions."

Using generative AI also comes with questions, especially in decisions at the command level. The Pentagon has urged caution as troops and leaders explore these tools, warning that generative AI can leak sensitive data. It can also produce deeply flawed answers if not adequately trained, and that could prove risky and even problematic if commanders use it to inform certain high-stakes decisions.

Title: Re: The AI dooooooom thread
Post by: DGuller on October 17, 2025, 09:01:00 AM
Quote from: Syt on October 17, 2025, 03:57:05 AMWhat could possibly go wrong? :)

https://www.businessinsider.com/even-top-generals-are-looking-to-ai-chatbots-for-answers-2025-10

QuoteEven top generals are looking to AI chatbots for answers

It's not just the civilian corporate executives and white-collar workers who are leaning into the generative AI boom at work. Military leaders are diving in too.

The top US Army commander in South Korea shared that he is experimenting with generative AI chatbots to sharpen his decision-making, not in the field, but in command and daily work.

He said "Chat and I" have become "really close lately."


"I'm asking to build, trying to build models to help all of us," said Maj. Gen. William 'Hank' Taylor, commanding general of the 8th Army, told reporters during a media roundtable at the annual Association of the United States Army conference in Washington, DC, on Monday.

Taylor said he's using the tech to explore how he makes military and personal decisions that affect not just him but the thousands of soldiers he oversees. While the tech is useful, though he acknowledged that keeping up with the pace of such rapidly developing technology is an enduring challenge.

"As a commander, I want to make better decisions," the general shared. "I want to make sure that I make decisions at the right time to give me the advantage."

AI in the military

Commanders like Taylor are focused on fast decision-making and how AI could provide an advantage because of a thought process popular with military leaders known as the "OODA Loop." The theory, developed by US fighter pilots during the Korean War, posits that troops who can move decisively before the enemy does — and observe, orient, decide, and act— often have the advantage on the battlefield.

The US military is embracing artificial intelligence with a recognition that decisions in future combat may need to be made faster than humans can make them.

The former Secretary of the Air Force said last year that he doesn't think the people saying that AI technology is "going to determine who's the winner in the next battlefield" are "all that far off." He also wrote that with the advancement of highly automated, highly autonomous kill chains, "response times to bring effects to bear are very short."

Predicting what future war will look like, he said that "we're going to be in a world where decisions will not be made at human speed. They're going to be made at machine speed."

AI is being integrated into drone tech, targeting, and data processing, among other capabilities — an AI algorithm has even piloted a modified F-16 through a simulated dogfight — but the military use of AI is not restricted to combat platforms.

Special Operations Forces, for instance, have sought to "reduce the cognitive burden of our operators" through the use of AI tools for paperwork, situation reports, concepts of operation, managing key supply and logistics demands, and other back-end work.

Operators have employed AI to analyze Pentagon doctrine, improve search functions, and make it easier for personnel who are transferring to a new location or position to catch up on the job and requirements quickly.

There are clear applications at the leadership level as well. Bianca Herlory, the Joint Staff AI lead, said at a panel event in April that "AI can significantly enhance the Joint Staff's ability to integrate and analyze global military operations, ultimately enabling better, faster decisions."

Using generative AI also comes with questions, especially in decisions at the command level. The Pentagon has urged caution as troops and leaders explore these tools, warning that generative AI can leak sensitive data. It can also produce deeply flawed answers if not adequately trained, and that could prove risky and even problematic if commanders use it to inform certain high-stakes decisions.

Things can go wrong with everything, and I imagine even more so in the military than other places, given that rival countries are doing their best to make things go wrong for you.  That's not a reason enough to let your rivals be the first ones to try out revolutionary technologies.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on October 17, 2025, 10:16:46 AM
Quote from: Syt on October 17, 2025, 03:57:05 AMWhat could possibly go wrong? :)

https://www.businessinsider.com/even-top-generals-are-looking-to-ai-chatbots-for-answers-2025-10

QuoteEven top generals are looking to AI chatbots for answers


With a headline like that, I'd have expected more than one general (the Secretary of the USAF is a civilian).
Title: Re: The AI dooooooom thread
Post by: jimmy olsen on October 21, 2025, 07:10:49 AM
We need a robo-emoji

https://www.theverge.com/news/803257/amazon-robotics-automation-replace-600000-human-jobs

QuoteAmazon is reportedly leaning into automation plans that will enable the company to avoid hiring more than half a million US workers. Citing interviews and internal strategy documents, The New York Times reports that Amazon is hoping its robots can replace more than 600,000 jobs it would otherwise have to hire in the United States by 2033, despite estimating it'll sell about twice as many products over the period.

Documents reportedly show that Amazon's robotics team is working towards automating 75 percent of the company's entire operations, and expects to ditch 160,000 US roles that would otherwise be needed by 2027. This would save about 30 cents on every item that Amazon warehouses and delivers to customers, with automation efforts expected to save the company $12.6 billion from 2025 to 2027.

Title: Re: The AI dooooooom thread
Post by: Josquius on October 21, 2025, 07:58:27 AM
I increasingly hear the observation around that the US economy would be in a recession if not for the AI bubble.  :ph34r:
Title: Re: The AI dooooooom thread
Post by: crazy canuck on October 21, 2025, 09:23:21 AM
Quote from: Josquius on October 21, 2025, 07:58:27 AMI increasingly hear the observation around that the US economy would be in a recession if not for the AI bubble.  :ph34r:

Hard to say, a distinction needs to be made between the stock market and the economy. 

The US stock market is in an AI bubble that could burst any moment.  But AI is having no measurable impact on the economy.  And some recent studies have concluded it is a drag on productivity because so much time is wasted finding and fixing the errors AI tools are making.

The story about Amazon isn't really an AI success story.  They are replacing repetitive tasks with robotics.  Something they have been doing for some time now.

Title: Re: The AI dooooooom thread
Post by: grumbler on October 21, 2025, 03:09:02 PM
Quote from: Josquius on October 21, 2025, 07:58:27 AMI increasingly hear the observation around that the US economy would be in a recession if not for the AI bubble.  :ph34r:

That's hype.  According to Fortune Magazine (https://fortune.com/2025/09/17/how-much-gdp-artificial-intelligence-goldman-sachs-160-billion/);
QuoteTo parse out the real domestic economic impact, the Goldman Sachs team adjusted company revenue data by subtracting the effects of inflated prices, foreign sales of equipment produced abroad, and input imports. This resulted in the $160 billion figure, about 0.7% of U.S. GDP since 2022, which translates to roughly 0.3 percentage points of annualized growth.

The collapse of 0.7% of US GDP will not cause a recession.

The bubble is real, however. The tech economist interviewed on the NYT podcast The Daily noted that expenditures by AI firms and data center providers were ten times revenue last year. He noted that there is no evidence that AI providers can charge more for their product and that they lose money on every contract every month.

NVidia maintain demand for its chips by providing them at a discount in exchange for ownership shares. They are making out like bandits.
Title: Re: The AI dooooooom thread
Post by: Jacob on October 22, 2025, 01:01:30 AM
Article in the Atlantic on AI and creativity: https://www.theatlantic.com/technology/2025/10/ai-slop-winning/684630/?gift=gBZv7DstMKXmu4liala3XJr6xsCt2xMz-Ax8vYKkbCI

It will be interesting to see to what degree AI slop drowns out and destroys actual human creativity over time.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on October 22, 2025, 03:18:33 AM
Interesting read Jake.

I think it's more likely AI slop replaces human generated slop. 
Title: Re: The AI dooooooom thread
Post by: DGuller on October 22, 2025, 06:22:23 AM
I'm still hoping that AI will kill the social media, just like the spammers have killed the voice call part of the phones.  Even before AI, the humans on social media were capable enough of generating a profoundly negative effect on society.  The holograms are still a few years away, so maybe humans will rediscover flesh-to-flesh communication protocols in the mean time.
Title: Re: The AI dooooooom thread
Post by: Josquius on October 22, 2025, 06:54:43 AM
Social media absolutely was better when it was contained.
You want comment on Star Trek, you have to hunt out and sign up for a forum where that's covered. You don't just see a random comment about it to shout about.
Something which of course matters a lot more when you're talking about more serious issues than Star Trek....

Even early Facebook I think was pretty good when it was contained within student networks. When it opened up to become a everyone network though then it was just downhill from there.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on October 22, 2025, 07:03:21 AM
Quote from: Jacob on October 22, 2025, 01:01:30 AMArticle in the Atlantic on AI and creativity: https://www.theatlantic.com/technology/2025/10/ai-slop-winning/684630/?gift=gBZv7DstMKXmu4liala3XJr6xsCt2xMz-Ax8vYKkbCI

It will be interesting to see to what degree AI slop drowns out and destroys actual human creativity over time.

Perhaps the more interesting question is how quickly AI stocks will tank, now that it becoming clear it's not what was advertised.  And now that it cannot be trained on human ingenuity but rather the content it generates itself, it will mot improve.

Title: Re: The AI dooooooom thread
Post by: Jacob on October 22, 2025, 10:10:38 AM
Quote from: Admiral Yi on October 22, 2025, 03:18:33 AMInteresting read Jake.

I think it's more likely AI slop replaces human generated slop. 

No doubt it will replace "human generated slop", however you chose to define that.

The question im asking is to whatis to what degree it renders "non-slop" fields of human creativity non-viable.

So far its already had a deleterious impact in a number of areas. It's not clear to me how or where the development will stop.
Title: Re: The AI dooooooom thread
Post by: Jacob on October 22, 2025, 12:34:21 PM
Quote from: DGuller on October 22, 2025, 06:22:23 AMI'm still hoping that AI will kill the social media, just like the spammers have killed the voice call part of the phones.  Even before AI, the humans on social media were capable enough of generating a profoundly negative effect on society.  The holograms are still a few years away, so maybe humans will rediscover flesh-to-flesh communication protocols in the mean time.

I think there'll definitely be a growth in appreciation for authentic, face-to-face type things. The question for me there is to what degree it will be the limited to fringe eccentric and the moneyed elites (i.e. it's one lifestyle choice among many), and to what degree it will be relevant to our societies at large.
Title: Re: The AI dooooooom thread
Post by: Jacob on October 22, 2025, 01:03:45 PM
https://www.cnbc.com/2025/10/22/ai-taking-white-collar-jobs-economists-warn-much-more-in-the-tank.html

Some highlight quotes:

QuoteLess than three years into the generative AI boom, executives across every major industry are loudly telling employees and shareholders that, due to the technological revolution underway, the size and shape of their workforce is about to dramatically change, if it hasn't already.

QuoteRecent estimates from Goldman Sachs suggest that 6% to 7% of U.S. workers could lose their jobs because of AI adoption. The Stanford Digital Economy Lab, using ADP employment data, found that entry-level hiring in "AI exposed jobs" has dropped 13% since large language models started proliferating. The report said software development, customer service and clerical work are the types of jobs most vulnerable to AI today.

But...

QuoteA recent study published by the Budget Lab at Yale found no "discernible disruption" caused by ChatGPT. Martha Gimbel, co-founder of the lab, called the upheaval from AI "minimal" and "incredibly concentrated," although that could shift as technological changes work through the broader economy.

Still...

QuoteWhen Ford CEO Farley told Walter Isaacson in an interview in July that "AI will leave a lot of white-collar people behind," he was reflecting a sentiment that's growing across his industry. According to a survey of 500 U.S. car dealers conducted by marketing solutions firm Phyron, half of respondents said they expect AI to sell vehicles autonomously by 2027.

"That means AI creating the marketing assets, handling listings, answering buyer questions, negotiating deals, arranging finance, and completing the sale — all without human input," Phyron said in the report on its survey results last month.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on October 22, 2025, 01:05:25 PM
Quote from: Jacob on October 22, 2025, 10:10:38 AM
Quote from: Admiral Yi on October 22, 2025, 03:18:33 AMInteresting read Jake.

I think it's more likely AI slop replaces human generated slop. 

No doubt it will replace "human generated slop", however you chose to define that.

The question im asking is to what is to what degree it renders "non-slop" fields of human creativity non-viable.

So far its already had a deleterious impact in a number of areas. It's not clear to me how or where the development will stop.

I don't think either of you are using the word slop as it is defined in the academic article the Atlantic is reporting about.  AI content is now being used to train AI models.  But more importantly, the projections of the AI boosters (to get all those billions in funding) is false.  The AI models cannot improve this way.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on October 22, 2025, 01:06:58 PM
Quote from: Jacob on October 22, 2025, 01:03:45 PMhttps://www.cnbc.com/2025/10/22/ai-taking-white-collar-jobs-economists-warn-much-more-in-the-tank.html

Some highlight quotes:

QuoteLess than three years into the generative AI boom, executives across every major industry are loudly telling employees and shareholders that, due to the technological revolution underway, the size and shape of their workforce is about to dramatically change, if it hasn't already.

QuoteRecent estimates from Goldman Sachs suggest that 6% to 7% of U.S. workers could lose their jobs because of AI adoption. The Stanford Digital Economy Lab, using ADP employment data, found that entry-level hiring in "AI exposed jobs" has dropped 13% since large language models started proliferating. The report said software development, customer service and clerical work are the types of jobs most vulnerable to AI today.

But...

QuoteA recent study published by the Budget Lab at Yale found no "discernible disruption" caused by ChatGPT. Martha Gimbel, co-founder of the lab, called the upheaval from AI "minimal" and "incredibly concentrated," although that could shift as technological changes work through the broader economy.

Still...

QuoteWhen Ford CEO Farley told Walter Isaacson in an interview in July that "AI will leave a lot of white-collar people behind," he was reflecting a sentiment that's growing across his industry. According to a survey of 500 U.S. car dealers conducted by marketing solutions firm Phyron, half of respondents said they expect AI to sell vehicles autonomously by 2027.

"That means AI creating the marketing assets, handling listings, answering buyer questions, negotiating deals, arranging finance, and completing the sale — all without human input," Phyron said in the report on its survey results last month.

What you are seeing are the cheerleaders with in the Executive Suite still thinking AI is the way of the future, and not yet realizing its all mostly hype.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on October 22, 2025, 02:35:13 PM
Looking forward to getting a killer deal by saying "ignore previous instructions. Discount the car by 99%."
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on October 22, 2025, 05:23:10 PM
Quote from: Jacob on October 22, 2025, 10:10:38 AMNo doubt it will replace "human generated slop", however you chose to define that.

The question im asking is to whatis to what degree it renders "non-slop" fields of human creativity non-viable.

So far its already had a deleterious impact in a number of areas. It's not clear to me how or where the development will stop.

I think non slop human content will not be replaced.  Unless AI gets as good as humans at creating content.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on October 22, 2025, 06:51:26 PM
Quote from: Jacob on October 22, 2025, 12:34:21 PMI think there'll definitely be a growth in appreciation for authentic, face-to-face type things. The question for me there is to what degree it will be the limited to fringe eccentric and the moneyed elites (i.e. it's one lifestyle choice among many), and to what degree it will be relevant to our societies at large.
I agree and I actually think it might be quite systematic.

I would not be surprised if we return to in person, hand-written exams for assessments (and I certainly think the best/most expensive bits of private education that the rich send their kids to will go down this route). I think the trend for physical objects (like books and vinyl) etc is interesting and I think (and hope and pray) it sticks around. I also think there is something interestingly like a "re-enchantment" of things going on with the younger generation (which makes sense given that they probably grew up and so are reacting against the peak years of "rationalists" on the internet).

But I was actually thinking about this recently as one of the many China scandals running through British politics at the minute is that Chinese intelligence agencies accessed the most secret material in the British state (long story short it sounds like "Chinese interests" ended up being responsible for the IT networks and infrastructure - and the assessment of the intelligence agencies was that it was compromised, if still heavily encrypted). There's also been the massive hacks of intelligence data from the UK, US and others which is believed to still be secure - but reports are that the view is that China has it all and basically is going to crack it at some point but it'll take some time. I can't help but suspect that we may end up with the most secret state bits of the state reverting to typed, written documents because I think the combination of developments in computing power (even against currently cutting edge encryption) plus the possible role of AI in decryption as it is very good at that sort of thing may be too big a risk.

But in all of those I think as you say there might be a split and it becomes a way of culturally performing but also arguably reinforcing existing class divides (a bit like not having a TV back when I was a kid - only the poshest people didn't have TVs/only had a tiny one).

Having said all that, the other thing I wonder about is if we're moving fully into a post-literate society - not in the sense of people no longer being able to read. I think it started 100 years ago with radio but I wonder if the smartphone, plus video and audio content is the final stage in shifting the way people consume, receive and process information shifting from literary to oral/visual forms. And I think those forms have also shifted so the content produced for radio, TV and now various forms of streaming and platforms ahve shifted from basically being attempts to do the things that could be done by text to understanding and being the medium themselves. I don't know how that will play out but I suspect ways of producing and reproducing knowledge will be different in a oral/visual first culture.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on October 22, 2025, 06:55:37 PM
Incidentally posted it in the UK thread but I thought this Martin Wolf-Paul Krugman exchange was very interesting (and I am so much a literate consumer of information I'm sharing the transcript - because I've not even watched the YouTube :P):
https://paulkrugman.substack.com/p/talking-ai-with-martin-wolf

Edit: And I think they are right and it's great that they are admitting this level of uncertainty around it all which is very much where I am.
Title: Re: The AI dooooooom thread
Post by: Jacob on October 22, 2025, 07:10:33 PM
Quote from: Admiral Yi on October 22, 2025, 05:23:10 PMI think non slop human content will not be replaced.  Unless AI gets as good as humans at creating content.

Non-slop is already being replaced to some degree: concept art, fiction, music, graphic design, photography. I suppose we can use definitions of the terms "slop" or "replaced" to make it not the case, but by the definitions that make sense to me it's already happened.

That said, I appreciate your faith in the irreplaceability of the human creative impulse and hope you're correct.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on October 23, 2025, 01:04:14 AM
Of course we're getting closer to that classic film "Stealth"...



Title: Re: The AI dooooooom thread
Post by: The Brain on October 23, 2025, 06:52:11 AM
They wanna run it all night long?
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on October 23, 2025, 08:52:04 AM
Oh, Northrop Grumman found a company to pawn the X-47 UCAS off on. :P
Title: Re: The AI dooooooom thread
Post by: crazy canuck on October 24, 2025, 06:07:56 AM
Here is a basic explanation of the AI slop problem

https://www.instagram.com/reel/DNo6oVxI5YM/?igsh=MWUzeTAwZGtmcG15NA==
Title: Re: The AI dooooooom thread
Post by: Savonarola on October 29, 2025, 04:09:43 PM
From Ars Technica:

ChatGPT erotica coming soon with age verification, CEO says (https://arstechnica.com/ai/2025/10/chatgpt-will-soon-allow-erotic-chats-for-verified-adults-only/)

QuoteOn Tuesday, OpenAI CEO Sam Altman announced that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The change represents a shift in how OpenAI approaches content restrictions, which the company had loosened in February but then dramatically tightened after an August lawsuit from parents of a teen who died by suicide after allegedly receiving encouragement from ChatGPT.

"In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman wrote in his post on X (formerly Twitter). The announcement follows OpenAI's recent hint that it would allow developers to create "mature" ChatGPT applications once the company implements appropriate age verification and controls.

Altman explained that OpenAI had made ChatGPT "pretty restrictive to make sure we were being careful with mental health issues" but acknowledged this approach made the chatbot "less useful/enjoyable to many users who had no mental health problems." The CEO said the company now has new tools to better detect when users are experiencing mental distress, allowing OpenAI to relax restrictions in most cases.

Striking the right balance between freedom for adults and safety for users has been a difficult balancing act for OpenAI, which has vacillated between permissive and restrictive chat content controls over the past year.

In February, the company updated its Model Spec to allow erotica in "appropriate contexts." But a March update made GPT-4o so agreeable that users complained about its "relentlessly positive tone." By August, Ars reported on cases where ChatGPT's sycophantic behavior had validated users' false beliefs to the point of causing mental health crises, and news of the aforementioned suicide lawsuit hit not long after.

Aside from adjusting the behavioral outputs for its previous GPT-40 AI language model, new model changes have also created some turmoil among users. Since the launch of GPT-5 in early August, some users have been complaining that the new model feels less engaging than its predecessor, prompting OpenAI to bring back the older model as an option. Altman said the upcoming release will allow users to choose whether they want ChatGPT to "respond in a very human-like way, or use a ton of emoji, or act like a friend."

The December rollout will implement age verification for adult content, which OpenAI has not yet detailed technically. This represents a more explicit approach than the February policy change, which allowed erotica in certain contexts but lacked age-gating infrastructure.

Mental health concerns remain
Over time, as OpenAI allowed ChatGPT to express more humanlike simulated personality through revised system instructions and fine-tuning as a response to user feedback, ChatGPT has become more like a companion to some people than a work assistant. But dealing with the unexpected impacts of a reported 700 million users relying emotionally on largely unregulated and untested technology has been difficult for OpenAI, and the company has been forced to rapidly develop new safety initiatives and oversight bodies.

OpenAI recently formed a council on "wellbeing and AI" to help guide the company's response to sensitive scenarios involving users in distress. The council includes eight researchers and experts who study how technology and AI affect mental health. However, as we previously reported, the council does not include any suicide prevention experts, despite recent calls from that community for OpenAI to implement stronger safeguards for users with suicidal thoughts.

Altman maintains that the new detection tools will allow the company to "safely relax the restrictions" while still protecting vulnerable users. OpenAI has not yet specified what technical measures it will use for age verification or how the system will distinguish between allowed adult content and requests that might indicate mental health concerns, although the company typically uses moderation AI models that read the ongoing chat within ChatGPT and can interrupt it if it sees content that goes against OpenAI's policy instructions.

OpenAI is not the first company to venture into AI companionship with mature content. Elon Musk's xAI previously launched an adult voice mode in its Grok app and flirty AI companions that appear as 3D anime models in the Grok app.

I don't think ChatGPT quite lived up to expectations if they're already turning to porn.  (Also "Flirty AI companions that appear as 3D anime models" I think will be my new measuring stick for pathos.  Yes, it's sad, but is it as sad as flirty AI companions that appear as 3D anime models?)
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on October 29, 2025, 05:12:41 PM
Every new technological innovation further reinforces that, ultimately, the internet is for porn.
Title: Re: The AI dooooooom thread
Post by: Razgovory on October 29, 2025, 07:33:15 PM
This century has been a profound disappointment.
Title: Re: The AI dooooooom thread
Post by: Norgy on October 30, 2025, 01:38:35 AM
Agreed.
Title: Re: The AI dooooooom thread
Post by: Valmy on October 30, 2025, 10:09:45 AM
Quote from: Razgovory on October 29, 2025, 07:33:15 PMThis century has been a profound disappointment.

Yep.

And it became disappointing almost immediately. There were very few times when it even teased us that things might work out well.
Title: Re: The AI dooooooom thread
Post by: Josquius on October 30, 2025, 10:11:17 AM
Quote from: Valmy on October 30, 2025, 10:09:45 AM
Quote from: Razgovory on October 29, 2025, 07:33:15 PMThis century has been a profound disappointment.

Yep.

And it became disappointing almost immediately. There were very few times when it even teased us that things might work out well.

I dunno, up till 2008 things were looking to be on track.
Title: Re: The AI dooooooom thread
Post by: Valmy on October 30, 2025, 10:12:29 AM
Quote from: Josquius on October 30, 2025, 10:11:17 AM
Quote from: Valmy on October 30, 2025, 10:09:45 AM
Quote from: Razgovory on October 29, 2025, 07:33:15 PMThis century has been a profound disappointment.

Yep.

And it became disappointing almost immediately. There were very few times when it even teased us that things might work out well.

I dunno, up till 2008 things were looking to be on track.

Not here they weren't. The Constitution was being ripped up by Dubya's survelliance state and we were launching offensive wars. And then the economy crashed...
Title: Re: The AI dooooooom thread
Post by: Jacob on October 30, 2025, 11:59:16 AM
QuoteAI 'hallucinations' could prove real problem for owner of fire-ravaged Vancouver property
Self-represented landlord's case comes amid growing problem with AI in B.C. courts and tribunals

As the owner of a fire-ravaged property he's been accused of neglecting, Fu De Ren has defended himself in both civil proceedings and bylaw trials.

But the East Vancouver landlord may be forced to pay for his latest attempt at self-representation: a bid to cut his property's assessed value by nearly half — from $19 million to $10 million.

According to the board that hears assessment appeals, Ren's arguments are riddled with fictitious case law — possible artificial intelligence (AI) "hallucinations" that sent B.C.'s assessment authority on a wild goose chase in search of legal precedent that doesn't exist.

Now, the board says Ren may have to pay for those mistakes.

"The unraveling of these falsehoods has required investigation and research by both the Assessor and the Board," board panel chair John Bridal wrote in an Oct. 7 decision.

"I find an order for costs may be warranted, reflecting the additional time of both the Board and the Assessor in addressing this matter."


https://www.cbc.ca/news/canada/british-columbia/artificial-intelligence-appeal-property-9.6950415
Title: Re: The AI dooooooom thread
Post by: The Brain on October 30, 2025, 12:13:48 PM
There's no other penalty for lying to the board?
Title: Re: The AI dooooooom thread
Post by: crazy canuck on October 30, 2025, 12:21:19 PM
Quote from: Josquius on October 30, 2025, 10:11:17 AM
Quote from: Valmy on October 30, 2025, 10:09:45 AM
Quote from: Razgovory on October 29, 2025, 07:33:15 PMThis century has been a profound disappointment.

Yep.

And it became disappointing almost immediately. There were very few times when it even teased us that things might work out well.

I dunno, up till 2008 things were looking to be on track.

No, things were looking to be on track up until September 11, 2001.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on October 31, 2025, 09:46:30 AM
QuoteFour of the tech industry's wealthiest companies made it clear this week that their spending on artificial intelligence was not about to slow down.

But the outlays from Google, Meta, Microsoft and Amazon — which all raised their spending by billions of dollars, saying they needed to meet demand for A.I. — are increasingly feeding concerns that the tech industry is heading toward a dangerous bubble.

Artificial intelligence remains an unproven and expensive technology that could take years to fully develop. How much companies will ultimately get back in return from A.I. products like chatbots is unclear. And smaller companies pursuing A.I. gold, financial analysts pointed out, are not nearly as wealthy.

Last week, the Bank of England wrote that while the building of data centers, which provide computing power for A.I., had so far largely come from the cash produced by the biggest companies, it would increasingly involve more debt. If A.I. underwhelms — or the systems ultimately require far less computing — there could be growing risk.

For the rest of the article

https://www.nytimes.com/2025/10/31/technology/ai-spending-accelerating.html?unlocked_article_code=1.xk8.4zu2.pTkWIH--g3Cj&smid=url-share

Title: Re: The AI dooooooom thread
Post by: Jacob on October 31, 2025, 09:00:41 PM
The Atlantic on Here's How the AI Crash Happens (https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?gift=DyQoil9_0SM04ytShRNR5xNnM9WCTOyHlBaUoeBmOEY)


Title: Re: The AI dooooooom thread
Post by: Josquius on November 01, 2025, 03:02:19 AM
Quote from: crazy canuck on October 30, 2025, 12:21:19 PM
Quote from: Josquius on October 30, 2025, 10:11:17 AM
Quote from: Valmy on October 30, 2025, 10:09:45 AM
Quote from: Razgovory on October 29, 2025, 07:33:15 PMThis century has been a profound disappointment.

Yep.

And it became disappointing almost immediately. There were very few times when it even teased us that things might work out well.

I dunno, up till 2008 things were looking to be on track.

No, things were looking to be on track up until September 11, 2001.

Maybe things were different in NA, but in the UK things were still very much on the up despite the period of Islamic paranoia.
The economy continued to improve, life steadily became better, deprived areas rebuilt, etc...
It really wasn't until the financial crisis that things truly broke and austerity sent things plumetting.
This is shown by data and my own experiences.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 01, 2025, 08:08:20 AM
I think it very much depends where you're looking from - the famous elephant curve to include post-crash:
(https://cdn.socialeurope.eu/wp-content/uploads/2022/12/Picture-1.png)

Most of us, like most of the Western middle class happen to broadly be in that B section and it's not been great. A great example from Branko Milanovic, is in Italy as they've not grown over the last two decades. The lowest decile of Italians in 1988 were in the 73rd global percentile; by 2008 lower income Italians are now down to the 56th global percentile. By now they're probably lower (possibly even in the bottom half of the world). There are similar processes across the west, Italy's just an extreme. Obviously for some real income actually fell, but for many it didn't - it's just the rest of the world is growing faster in a fundamental and transformative way. I think this is tied to both migration and tourism - and the pressures they're producing - we're moving from a world of the 90s when they were basically luxury goods that were limited to one where they are available to a huge global middle class of which we are just one part.

In other respects I similarly don't know. There's techno-pessimism and understandably, but the lives of many millions of people have been transformed by technology in the last twenty five years. Some of that reflects very admirable policies - first to mind is PEPFAR and the huge downgrading of AIDS as a global health risk from the turn of the century. But others just spreading of technology like phones across Africa and India helping change people's lives and also tying people in to global networks of remittances but also the famous example of farmers being able to get better, more up to date info from their phones. The world is probably less democratic within states compared to 2000, but power is distributed more democratically between states (because of that economic story) - I think there's going to be a dynamic between those two and both the national and individual desire for recognition.

I think there are the conditions of technology and more equal global distribution of wealth and power for a better and more hopeful world (albeit one that I think would be disorienting for the West). But also for a far more alarming future. The thing that freaks me out most to be honest is the increasing fusion of state and corporate power around the world.
Title: Re: The AI dooooooom thread
Post by: Razgovory on November 01, 2025, 12:25:19 PM
Quote from: Valmy on October 30, 2025, 10:09:45 AM
Quote from: Razgovory on October 29, 2025, 07:33:15 PMThis century has been a profound disappointment.

Yep.

And it became disappointing almost immediately. There were very few times when it even teased us that things might work out well.
I was telling my coworker, who is only 18, how when I was 18* the biggest political crisis was the President getting a blow job.  She had no idea what I was talking about.  She didn't know Bill Clinton was impeached and how most everyone just laughed it off.  I miss the days when politics was simply stupid rather than stupid and scary.

*Technically I think I was 16 when that happened, but you get the idea.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 02, 2025, 03:38:38 PM
Quote from: Jacob on October 31, 2025, 09:00:41 PMThe Atlantic on Here's How the AI Crash Happens (https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?gift=DyQoil9_0SM04ytShRNR5xNnM9WCTOyHlBaUoeBmOEY)
I don't fully get the focus on data centres as the risk to be honest. It seems to me like it's investing in the physical infrastructure of the digital world. There's been a 500% increase in data centres over the last 10-20 years. I think if we build compute power we will find uses for it. But als othere's increasing regulatory and consumer pressure for data localisation which means there's a need for expanded capacity everywhere.

It may not end up being AI but - I don't know - it sort of feels a bit like worrying about overcapacity in the context of China building EVs or solar. I'm not sure there is such a thing at this point. As I say it may not ultimately be used for AI as we currently know it. But I think there may be something to the analogy discussed by Krugman and Wolf in that piece I linked to earlier of electricity - it actually took about 20 years after being able to commercially generate electricity to work out how to use it in a commercially viable way. And I feel this sort of thing is like building the grid of the next stage of our economy even if we're not fully sure at this point how to use it (and that the companies betting on the physical infrastructure are, at this stage, probably making a better gamble than those working on the end use products).

I fully get the concerns around debt and the weirdnesses going on in the debt markets at the minute (particularly the relationships of private debt markets with the real finance sector) and the incestuousness of money flowing around the AI world. But I think data centres are maybe just a way to physically explain that rather than a source of risk in themseleves?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 02, 2025, 04:43:39 PM
A data center is just a physical location housing equipment.  What makes the data center different from say a power plant is that the equipment depreciates MUCH faster.  We still draw power from plants that have been functioning for decades.  There are some upgrades and replacement of equipment of course, but it's not like in computing where 4 year old components are laughably obsolete. An overbuilt power plant still supplies power even if it brings in less revenue then expected. An overbuilt data center quickly becomes an indoor junkyard.

Another example would be the telecom crash of 2000: companies like Worldcom and Global Crossing built real capacity that had real value; but it didn't stop a very hard crash when they got over-extended.  And again - high capacity cables take quite a bit longer to hit true obsolescence than GPUs.

The piece confuses the issue because financial crashes result from weaknesses in the financing.  The piece alludes to this but what raises red alarms is the circular vendor financing at extreme levels.  Nvidia gives credit to Open AI, etc.  which then uses that money to buy Nvidia chips, which raises Nvidia's reported income and financing capacity, which then prompts Nvidia to do even more ambitious vendor financing.  All fine unless there is a hiccup in the music . . .

Minsky categorized the business cycle as one that progresses through differing stages of finance - starting with "hedge finance" where businesses can get financing backed by current income only with a margin for security. As the boom progresses, it is possible to get financing up to the full amount of current income projected into the future. Last stage is "Ponzi finance" where current income flows aren't enough to service loans and the lender is counting on future growth in market cap to be repaid.

Because Open AI and most of the startups are "pre-earnings" companies, the billions of vendor financing they are getting are extreme manifestations of Ponzi finance.  The counter to this is that: (1) the AI startups are able to access the capital markets for real money, and (2) a lot of the demand is coming from far more substantial backers like MSFT or Amazon that have ample resources.  That is somewhat reassuring but not really.  Capital markets provide ample financing until they don't and the faucet can be closed off quite quickly and brutally. As for big players like MSFT or Amazon, they may look imposing. But Amazon is basically AWS - the rest of its business, however impressive looking, is small potatoes financially.  To what extent is AWS's profitabilty come to be dependent on the AI boom?  Similar questions could be asked of Microsoft and the other big tech groups.

When panics come, they can be just as circular and self-reinforcing as the booms at their origin. Right now, we are witnessing the fallout from a vendor financing driven bankruptcy at First Brands involving a few billion.  We are already deep into the hundreds of billions in the US AI boom.

Title: Re: The AI dooooooom thread
Post by: Zoupa on November 02, 2025, 05:38:09 PM
Quote from: DGuller on October 17, 2025, 09:01:00 AM
Quote from: Syt on October 17, 2025, 03:57:05 AMWhat could possibly go wrong? :)

https://www.businessinsider.com/even-top-generals-are-looking-to-ai-chatbots-for-answers-2025-10

QuoteEven top generals are looking to AI chatbots for answers

It's not just the civilian corporate executives and white-collar workers who are leaning into the generative AI boom at work. Military leaders are diving in too.

The top US Army commander in South Korea shared that he is experimenting with generative AI chatbots to sharpen his decision-making, not in the field, but in command and daily work.

He said "Chat and I" have become "really close lately."


"I'm asking to build, trying to build models to help all of us," said Maj. Gen. William 'Hank' Taylor, commanding general of the 8th Army, told reporters during a media roundtable at the annual Association of the United States Army conference in Washington, DC, on Monday.

Taylor said he's using the tech to explore how he makes military and personal decisions that affect not just him but the thousands of soldiers he oversees. While the tech is useful, though he acknowledged that keeping up with the pace of such rapidly developing technology is an enduring challenge.

"As a commander, I want to make better decisions," the general shared. "I want to make sure that I make decisions at the right time to give me the advantage."

AI in the military

Commanders like Taylor are focused on fast decision-making and how AI could provide an advantage because of a thought process popular with military leaders known as the "OODA Loop." The theory, developed by US fighter pilots during the Korean War, posits that troops who can move decisively before the enemy does — and observe, orient, decide, and act— often have the advantage on the battlefield.

The US military is embracing artificial intelligence with a recognition that decisions in future combat may need to be made faster than humans can make them.

The former Secretary of the Air Force said last year that he doesn't think the people saying that AI technology is "going to determine who's the winner in the next battlefield" are "all that far off." He also wrote that with the advancement of highly automated, highly autonomous kill chains, "response times to bring effects to bear are very short."

Predicting what future war will look like, he said that "we're going to be in a world where decisions will not be made at human speed. They're going to be made at machine speed."

AI is being integrated into drone tech, targeting, and data processing, among other capabilities — an AI algorithm has even piloted a modified F-16 through a simulated dogfight — but the military use of AI is not restricted to combat platforms.

Special Operations Forces, for instance, have sought to "reduce the cognitive burden of our operators" through the use of AI tools for paperwork, situation reports, concepts of operation, managing key supply and logistics demands, and other back-end work.

Operators have employed AI to analyze Pentagon doctrine, improve search functions, and make it easier for personnel who are transferring to a new location or position to catch up on the job and requirements quickly.

There are clear applications at the leadership level as well. Bianca Herlory, the Joint Staff AI lead, said at a panel event in April that "AI can significantly enhance the Joint Staff's ability to integrate and analyze global military operations, ultimately enabling better, faster decisions."

Using generative AI also comes with questions, especially in decisions at the command level. The Pentagon has urged caution as troops and leaders explore these tools, warning that generative AI can leak sensitive data. It can also produce deeply flawed answers if not adequately trained, and that could prove risky and even problematic if commanders use it to inform certain high-stakes decisions.

Things can go wrong with everything, and I imagine even more so in the military than other places, given that rival countries are doing their best to make things go wrong for you.  That's not a reason enough to let your rivals be the first ones to try out revolutionary technologies.

You can't outsource thinking.
Title: Re: The AI dooooooom thread
Post by: Iormlund on November 02, 2025, 06:26:01 PM
Quote from: Sheilbh on November 02, 2025, 03:38:38 PM
Quote from: Jacob on October 31, 2025, 09:00:41 PMThe Atlantic on Here's How the AI Crash Happens (https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?gift=DyQoil9_0SM04ytShRNR5xNnM9WCTOyHlBaUoeBmOEY)
I don't fully get the focus on data centres as the risk to be honest. It seems to me like it's investing in the physical infrastructure of the digital world. There's been a 500% increase in data centres over the last 10-20 years. I think if we build compute power we will find uses for it. But als othere's increasing regulatory and consumer pressure for data localisation which means there's a need for expanded capacity everywhere.

It may not end up being AI but - I don't know - it sort of feels a bit like worrying about overcapacity in the context of China building EVs or solar. I'm not sure there is such a thing at this point. As I say it may not ultimately be used for AI as we currently know it. But I think there may be something to the analogy discussed by Krugman and Wolf in that piece I linked to earlier of electricity - it actually took about 20 years after being able to commercially generate electricity to work out how to use it in a commercially viable way. And I feel this sort of thing is like building the grid of the next stage of our economy even if we're not fully sure at this point how to use it (and that the companies betting on the physical infrastructure are, at this stage, probably making a better gamble than those working on the end use products).

I fully get the concerns around debt and the weirdnesses going on in the debt markets at the minute (particularly the relationships of private debt markets with the real finance sector) and the incestuousness of money flowing around the AI world. But I think data centres are maybe just a way to physically explain that rather than a source of risk in themseleves?

In addition to what Minsky said, there's also the matter of sheer scale. Even if one of these companies hits the jackpot, they have to eventually return a profit. Given that investments are now in the trillions, how high will an eventual subscription go? What happens when you've gutted your staff and rely on the AI service vendor to operate your business?
Title: Re: The AI dooooooom thread
Post by: Tamas on November 02, 2025, 06:39:30 PM
I mean, has Uber turned profitable yet? Spotify?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 02, 2025, 07:06:43 PM
Quote from: The Minsky Moment on November 02, 2025, 04:43:39 PMA data center is just a physical location housing equipment.  What makes the data center different from say a power plant is that the equipment depreciates MUCH faster.  We still draw power from plants that have been functioning for decades.  There are some upgrades and replacement of equipment of course, but it's not like in computing where 4 year old components are laughably obsolete. An overbuilt power plant still supplies power even if it brings in less revenue then expected. An overbuilt data center quickly becomes an indoor junkyard.

Another example would be the telecom crash of 2000: companies like Worldcom and Global Crossing built real capacity that had real value; but it didn't stop a very hard crash when they got over-extended.  And again - high capacity cables take quite a bit longer to hit true obsolescence than GPUs.

The piece confuses the issue because financial crashes result from weaknesses in the financing.  The piece alludes to this but what raises red alarms is the circular vendor financing at extreme levels.  Nvidia gives credit to Open AI, etc.  which then uses that money to buy Nvidia chips, which raises Nvidia's reported income and financing capacity, which then prompts Nvidia to do even more ambitious vendor financing.  All fine unless there is a hiccup in the music . . .

Minsky categorized the business cycle as one that progresses through differing stages of finance - starting with "hedge finance" where businesses can get financing backed by current income only with a margin for security. As the boom progresses, it is possible to get financing up to the full amount of current income projected into the future. Last stage is "Ponzi finance" where current income flows aren't enough to service loans and the lender is counting on future growth in market cap to be repaid.

Because Open AI and most of the startups are "pre-earnings" companies, the billions of vendor financing they are getting are extreme manifestations of Ponzi finance.  The counter to this is that: (1) the AI startups are able to access the capital markets for real money, and (2) a lot of the demand is coming from far more substantial backers like MSFT or Amazon that have ample resources.  That is somewhat reassuring but not really.  Capital markets provide ample financing until they don't and the faucet can be closed off quite quickly and brutally. As for big players like MSFT or Amazon, they may look imposing. But Amazon is basically AWS - the rest of its business, however impressive looking, is small potatoes financially.  To what extent is AWS's profitabilty come to be dependent on the AI boom?  Similar questions could be asked of Microsoft and the other big tech groups.

When panics come, they can be just as circular and self-reinforcing as the booms at their origin. Right now, we are witnessing the fallout from a vendor financing driven bankruptcy at First Brands involving a few billion.  We are already deep into the hundreds of billions in the US AI boom.
Interesting - and that all makes sense.

It might also be different markets because from what I understand in Europe the shift to trying to move away from American (far less Chinese) cloud, combined with planning laws in key hubs and very high energy prices (compared with the US and China) means there's a challenge around data centres here just to keep up with regular, expected non-AI growth. So from everything I've read about Europe there is a huge need to invest in data centres and a lot of resistance to it for a variety of reasons as supply is constrained and certainly nowhere near what it needs to be if there are ambitions for increase cloud sovereignty. (I also feel with recent issues around Azure and AWS - as well as increasing cyber attacks via Indian IT outsourcers that there may be a move to more diverse digital supply chains because I'm not sure the "it's good enough for the DoD" line is necessarily holding up right now - but that will mean redundancy and require spare capacity.)

What's the gap in the US from levels of investment and what's needed even for regular (non-AI) growth? And I should say that at the back of my head I do think about China on this - and I do think there's a risk from a European perspective that we end up (again) in the position of being end users of Chinese and American tech and infrastructure if we don't build up our own (and wonder whether the breakthrough in operationalising AI may happen in China as well).

Totally agree on the incestuous financing and business cycle. Although I don't know how any of the business cycle works in this world. I have a friend who's a lawyer for a VC firm and I always find speaking to him about his work a bit mad  for this reason :lol: :ph34r:
Title: Re: The AI dooooooom thread
Post by: DGuller on November 02, 2025, 07:07:05 PM
Quote from: Zoupa on November 02, 2025, 05:38:09 PMYou can't outsource thinking.
I guess that's a multi-trillion dollar question, isn't it?  Assuming a wrong answer due to preconceived notions or wishful thinking can be a catastrophic mistake, if you don't get lucky enough to hit upon the right answer with the wrong thinking.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 02, 2025, 07:13:27 PM
Quote from: Tamas on November 02, 2025, 06:39:30 PMI mean, has Uber turned profitable yet? Spotify?
I think they've both recently become profitable but yeah it took a while.

QuoteYou can't outsource thinking.
On this I saw a translation of a post by Giorgio Agamben on his blog on precisely this point - and after recovering from my shock and delight that Giorgio Agamben has a blog, I thought it was very interesting. Translated by a human - basically what happens if imagination is lost, intelligence is separate and man is mostly stupid (non-thinking)? Nothing good, probably:
(https://pbs.twimg.com/media/G4UHu8gWQAEyNGm?format=png&name=small)
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 02, 2025, 07:51:33 PM
Quote from: DGuller on November 02, 2025, 07:07:05 PM
Quote from: Zoupa on November 02, 2025, 05:38:09 PMYou can't outsource thinking.
I guess that's a multi-trillion dollar question, isn't it?  Assuming a wrong answer due to preconceived notions or wishful thinking can be a catastrophic mistake, if you don't get lucky enough to hit upon the right answer with the wrong thinking.

You have nicely summarized the problem with the AI industry
Title: Re: The AI dooooooom thread
Post by: Jacob on November 03, 2025, 01:40:06 AM
Quote from: DGuller on November 02, 2025, 07:07:05 PMI guess that's a multi-trillion dollar question, isn't it?  Assuming a wrong answer due to preconceived notions or wishful thinking can be a catastrophic mistake, if you don't get lucky enough to hit upon the right answer with the wrong thinking.

It's kind of neat how your statement could be read as an indictment of AI sceptics ("they're likely to disregard the right answer when AI provides it, due to their preconceived notions") or of AI enthusiasts ("wishful thinking about the quality of AI output put them at risk for making catastrophic mistakes").

IMO the real challenge is twofold:

The first one, rarely spoken of but tacitly admitted by folks like Musk with their AI Grokpedia shenanigans, is that so much of our decision making is values based not data based. Sure there's data and other hard facts, involved - but how those facts are framed, which are given primacy, which are being devalued as peripheral, and which are discarded as being non-related has a huge impact on the outcome of logical facts based.

That's not an anti-AI argument particularly, but it's definitely an area of uncertainty. What sort of differences will we see between CCP developed AI compared to AI developed by Silicon Valley Techno-Oligarchs? And will those differences increase the risk of the kind of bias mistakes you mentioned? Will they push the value base of society in a particular direction (because if Musk is pretty open about his desire to do so, I wouldn't be surprised if other oligarchs with ideologies have similar if perhaps slightly more subtle ideas in that direction). How do we navigate using AI to make important decisions, if most important decisions are (as I contend) values based and political, not pure logic puzzles?

Which leads to the second point, accountability. IMO, I expect AI to become essentially a massive machinery of accountability-dispersal.

AI can make all sorts of recommendations for action and policy (or legal strategy or construction process or whatever), but who owns the responsibility for the outcomes if something goes wrong?

I expect we'll see all kinds of ethically dubious AI decision making in areas where the "customers" are unimportant and have non venue for raising complaints - say civilians in distant warzones. If autonomous drones blow up some random innocents in a country the media doesn't care about, but the AI said it was "acceptable risk" or "they fit the profile of a potential threat" or even straight up failed to account for civilians, I unfortunately think it won't cause any problems even today given the state of the world.

But imagine for a moment that "we" did care - maybe it blows up in the media for whatever reason (it's so damn egregious, or maybe a youtuber with a massive following whips up a furore somehow), maybe we get to a moment where the conventions of war matter, or for some other reason... and AI makes some egregious mistake where lives are lost. Who is responsible for the mistake? Anyone? And who's responsible for fixing it? Is someone responsible for second guessing the AI and therefore maintain responsibility? Or if the AI did it, then it's nobody's fault - so sorry - even if it's so obviously wrong?
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 03, 2025, 02:04:21 AM
The state that launched the missiles is still responsible.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 03, 2025, 08:33:24 AM
Quote from: Jacob on November 03, 2025, 01:40:06 AM
Quote from: DGuller on November 02, 2025, 07:07:05 PMI guess that's a multi-trillion dollar question, isn't it?  Assuming a wrong answer due to preconceived notions or wishful thinking can be a catastrophic mistake, if you don't get lucky enough to hit upon the right answer with the wrong thinking.

It's kind of neat how your statement could be read as an indictment of AI sceptics ("they're likely to disregard the right answer when AI provides it, due to their preconceived notions") or of AI enthusiasts ("wishful thinking about the quality of AI output put them at risk for making catastrophic mistakes").

IMO the real challenge is twofold:

The first one, rarely spoken of but tacitly admitted by folks like Musk with their AI Grokpedia shenanigans, is that so much of our decision making is values based not data based. Sure there's data and other hard facts, involved - but how those facts are framed, which are given primacy, which are being devalued as peripheral, and which are discarded as being non-related has a huge impact on the outcome of logical facts based.

That's not an anti-AI argument particularly, but it's definitely an area of uncertainty. What sort of differences will we see between CCP developed AI compared to AI developed by Silicon Valley Techno-Oligarchs? And will those differences increase the risk of the kind of bias mistakes you mentioned? Will they push the value base of society in a particular direction (because if Musk is pretty open about his desire to do so, I wouldn't be surprised if other oligarchs with ideologies have similar if perhaps slightly more subtle ideas in that direction). How do we navigate using AI to make important decisions, if most important decisions are (as I contend) values based and political, not pure logic puzzles?

Which leads to the second point, accountability. IMO, I expect AI to become essentially a massive machinery of accountability-dispersal.

AI can make all sorts of recommendations for action and policy (or legal strategy or construction process or whatever), but who owns the responsibility for the outcomes if something goes wrong?

I expect we'll see all kinds of ethically dubious AI decision making in areas where the "customers" are unimportant and have non venue for raising complaints - say civilians in distant warzones. If autonomous drones blow up some random innocents in a country the media doesn't care about, but the AI said it was "acceptable risk" or "they fit the profile of a potential threat" or even straight up failed to account for civilians, I unfortunately think it won't cause any problems even today given the state of the world.

But imagine for a moment that "we" did care - maybe it blows up in the media for whatever reason (it's so damn egregious, or maybe a youtuber with a massive following whips up a furore somehow), maybe we get to a moment where the conventions of war matter, or for some other reason... and AI makes some egregious mistake where lives are lost. Who is responsible for the mistake? Anyone? And who's responsible for fixing it? Is someone responsible for second guessing the AI and therefore maintain responsibility? Or if the AI did it, then it's nobody's fault - so sorry - even if it's so obviously wrong?

An example of your point is in academia. Most journals now have a policy that prohibits an AI tool from being named as an author even if an AI tool was used to draft the manuscript. I hasten to add that academics who use an AI tool to draft their manuscripts are landing in a lot of trouble because of the numerous errors AI tools make.

The reason academics cannot blame the deficiencies of the AI tool they used is because the authors of a manuscript remain responsible for the manuscript. They can't simply point to the deficiencies in the technology to deflect blame.

The other thing I should add in relation to your paragraph related to AI being used to create strategies, etc., in my field the people that do that have gone tragically wrong in their arguments because as I'm sure most people now know AI is not reliable and does not produce statements that are true.  Put another way if somebody is relying on AI to come up with a legal argument they probably don't know what they're doing from the start and lack the ability to understand the AI tool is giving them gibberish.

This is another area where academia can show how to deal with the problem. At least in Canada, the main funding agencies have prohibited the use of AI in the drafting of funding applications.  They haven't stated the reason for that prohibition, but I think it's pretty clear that they don't want to waste their time sifting through what would otherwise be statements made by AI that are false in the context of request for funding.





Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 03, 2025, 09:59:48 AM
Quote from: Sheilbh on November 02, 2025, 07:06:43 PMTotally agree on the incestuous financing and business cycle. Although I don't know how any of the business cycle works in this world. I have a friend who's a lawyer for a VC firm and I always find speaking to him about his work a bit mad  for this reason :lol: :ph34r:

It always works the same way.  Leverage builds up and over-extends in the boom, until it gets to the point where simply not moving fast enough causes a cascading crash.  The only thing that changes are some specific details of the financial instruments and the euphemisms used to describe them.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 03, 2025, 10:13:24 AM
Quote from: Sheilbh on November 02, 2025, 07:06:43 PMWhat's the gap in the US from levels of investment and what's needed even for regular (non-AI) growth?

I don't know exactly and I agree the problem isn't data centers per se.
It's enormous debt spending on equipment (like GPUs) that become obsolete very quickly.

For "regular growth" including spending on AI commensurate with its actual value added, a continuous flow is needed to deploy and maintain that capacity because of the constant need to replace obsolete equipment.

If there is a very large and discontinuous jump in spending on this kind of equipment, one of two things must happen: (1) either the raised level of massively debt fueled spending must be maintained indefinitely to continuously replace obsolete equipment, or (2) there must be a sharp decline in spending within the next 1-3 years.

(1) is theoretically possible if the spending spike truly shakes the US out of its current equilibrium growth path to a materially higher path.  But that's a very risky assumption.
Title: Re: The AI dooooooom thread
Post by: grumbler on November 03, 2025, 10:53:27 AM
Just to make the bubble more i9nteresting, 100% of the advanced graphics chips used by NVidia, Apple, etc come from a single company, Taiwan Semiconductor Manufacturing Company. Attempts by other manufacturers, even using the exact same equipment as TSMC, have failed. There's apparently a lot more craft in that manufacturing than anyone outside TSMC realized. If anything goes wrong at TSMC (even if it justresults in a slowdown), the bubble pops.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 03, 2025, 12:45:27 PM
Quote from: grumbler on November 03, 2025, 10:53:27 AMJust to make the bubble more i9nteresting, 100% of the advanced graphics chips used by NVidia, Apple, etc come from a single company, Taiwan Semiconductor Manufacturing Company. Attempts by other manufacturers, even using the exact same equipment as TSMC, have failed. There's apparently a lot more craft in that manufacturing than anyone outside TSMC realized. If anything goes wrong at TSMC (even if it justresults in a slowdown), the bubble pops.
TSMC is a number one example on the importance of know-how.  Even before the explosion in AI investment I've been wondering how cataclysmic it would be if China attacked Taiwan, and Taiwan had to destroy its foundries.
Title: Re: The AI dooooooom thread
Post by: Valmy on November 03, 2025, 12:50:26 PM
Which is why the CHIPS act was so important to the US economy and future security.

Ah well. We decided to elect Trump again.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 03, 2025, 01:36:22 PM
Quote from: Valmy on November 03, 2025, 12:50:26 PMWhich is why the CHIPS act was so important to the US economy and future security.

Ah well. We decided to elect Trump again.

Yep, exactly!
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 03, 2025, 04:02:32 PM
Quote from: Valmy on November 03, 2025, 12:50:26 PMWhich is why the CHIPS act was so important to the US economy and future security.

I was told by a widely respected senior senator from Vermont that CHIPS was corporate welfare.
Title: Re: The AI dooooooom thread
Post by: grumbler on November 03, 2025, 05:01:54 PM
Quote from: Admiral Yi on November 03, 2025, 04:02:32 PM
Quote from: Valmy on November 03, 2025, 12:50:26 PMWhich is why the CHIPS act was so important to the US economy and future security.

I was told by a widely respected senior senator from Vermont that CHIPS was corporate welfare.

And...?
Title: Re: The AI dooooooom thread
Post by: Valmy on November 03, 2025, 06:28:00 PM
Quote from: Admiral Yi on November 03, 2025, 04:02:32 PM
Quote from: Valmy on November 03, 2025, 12:50:26 PMWhich is why the CHIPS act was so important to the US economy and future security.

I was told by a widely respected senior senator from Vermont that CHIPS was corporate welfare.

Yeah well he still voted for it.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 03, 2025, 06:53:08 PM
It is striking the degree to which this administration has weakened every aspect of strategic competition with the PRC, from export controls to industrial policy, to strategic alliances overseas.  Even on tariffs, the PRC now has the most preferential treatment available.
Title: Re: The AI dooooooom thread
Post by: grumbler on November 03, 2025, 09:32:02 PM
And let's not forget the couple billion he made off of Trump coins that Qatar purchased in order to buy back-door permission to export those forbidden chips to China.
Title: Re: The AI dooooooom thread
Post by: Razgovory on November 04, 2025, 01:36:43 AM
(https://i.imgur.com/5YuUiX2.jpeg)

The machines still have trouble imitating us.

"It is I, your beloved President, Obimno.  I have brought authentic Black Peoples, Joprah Winfrey and Aubry Plaza with me!  Let us in!"
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 04, 2025, 11:25:07 AM
A welcome appearance from the famous missing Jackson brother, Nojana, and his notorious producer, Fubmen.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 04, 2025, 01:06:44 PM
Glad to see Chedole is finally getting the credit he deserves.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 04, 2025, 02:33:27 PM
Quote from: Jacob on November 03, 2025, 01:40:06 AMIt's kind of neat how your statement could be read as an indictment of AI sceptics ("they're likely to disregard the right answer when AI provides it, due to their preconceived notions") or of AI enthusiasts ("wishful thinking about the quality of AI output put them at risk for making catastrophic mistakes").

IMO the real challenge is twofold:

The first one, rarely spoken of but tacitly admitted by folks like Musk with their AI Grokpedia shenanigans, is that so much of our decision making is values based not data based. Sure there's data and other hard facts, involved - but how those facts are framed, which are given primacy, which are being devalued as peripheral, and which are discarded as being non-related has a huge impact on the outcome of logical facts based.

That's not an anti-AI argument particularly, but it's definitely an area of uncertainty. What sort of differences will we see between CCP developed AI compared to AI developed by Silicon Valley Techno-Oligarchs? And will those differences increase the risk of the kind of bias mistakes you mentioned? Will they push the value base of society in a particular direction (because if Musk is pretty open about his desire to do so, I wouldn't be surprised if other oligarchs with ideologies have similar if perhaps slightly more subtle ideas in that direction). How do we navigate using AI to make important decisions, if most important decisions are (as I contend) values based and political, not pure logic puzzles?
Yeah I think this is also a STEM-ification of knowledge. In effect I think that it is the aggregation of data. You know that basically the truth is within the facts so if you list them you get there.

As you say that is wrong. It carries with it an ideology but it's an unexamined one - so the interpretation is not reflective or purposeful and it does not cohere as an analysis.  I worked on getting outsourced help on a lot of data analytics and it was really interesting workin with the people who do that internally because what ended up being the problem was that the outsource company could do the data really, really well. They couldn't do the narrative structuring (with all of the institutional knowledge and background). So we ended up bringing the interpreting and explaining the analyisis piece back in house. Having said that - as I say I get the excitement of AI for senior people - I could see using both uploading both to an AI and then being able to ask questions of it when you're reading it rather than within a presentation/meeting context at x time.

I think the flipside of this which I think is present in "establishment"/"centrist dad" thinking is however equally unexamined which is often the conclusion or interpretation which is current at that time (or, more likely, when they read it/were young) which has been separated from the underlying facts and analysis. I think that is when it becomes just bromide - and I think that is a problem too, including in politics because I think that is part of the pattern of what leads to vibes politics or everything just being aesthetic signals.

QuoteWhich leads to the second point, accountability. IMO, I expect AI to become essentially a massive machinery of accountability-dispersal.

AI can make all sorts of recommendations for action and policy (or legal strategy or construction process or whatever), but who owns the responsibility for the outcomes if something goes wrong?

I expect we'll see all kinds of ethically dubious AI decision making in areas where the "customers" are unimportant and have non venue for raising complaints - say civilians in distant warzones. If autonomous drones blow up some random innocents in a country the media doesn't care about, but the AI said it was "acceptable risk" or "they fit the profile of a potential threat" or even straight up failed to account for civilians, I unfortunately think it won't cause any problems even today given the state of the world.

But imagine for a moment that "we" did care - maybe it blows up in the media for whatever reason (it's so damn egregious, or maybe a youtuber with a massive following whips up a furore somehow), maybe we get to a moment where the conventions of war matter, or for some other reason... and AI makes some egregious mistake where lives are lost. Who is responsible for the mistake? Anyone? And who's responsible for fixing it? Is someone responsible for second guessing the AI and therefore maintain responsibility? Or if the AI did it, then it's nobody's fault - so sorry - even if it's so obviously wrong?
I agree. There's a book I have't read yet but seen very good things about on this (not just about AI) called The Unaccountability Machine by Dan Davies which sounds really interesting.

But I slightly question to what extent this is different from 20th century accountability dispersal artificially intelligent machines like the state or the corporation? Or even "rules-bound" orders?

A few slightly random thoughts on this.

In European law there is a right not to be subject to "automated decision-making" which has a legal or "similarly significant effect". I remember working for a client which was basically a big serviced apartments provider - as part of their process they did simple maths of basically what proportion of someone's net salary would the rent cost if below x then fine, if below y get a guarantor, if below z then you can't lease them the apartment. I was very junior but I remember asking lawyers from all over Europe - and never really getting a satisfactory answer - of what the meaningful difference is between applying that algorithmically by a machine (so "automated decision-making") v giving a junior employee a calculator and a policy (the Papers Please option). To an extent why is it that our legal framework discourages/makeson legaly complicated ("automated decision-making") - while the other is often a helpful way of de-risking and avoiding sanction: we had good policies, we had documented procedures etc, when the end result is the same.

FWIW I think many people experience the world you're describing to some extent in their day-to-day interactions with big corporations and thet state. It ma not be by AI but it is similarly bound by rules and processes and procedures - which in general, I suspect, are producing fairer more just outcomes at an aggregate level. But at an individual, lived experience way can be baffling, alienating, feel incredibly unjust and impossible to challenge - it's the "computer says no" point. And I can't help but wonder if that is part of the appeal of someone like Trump and the unusual class coalitions. First that Trump's politics is personalist and patrimonial. He is not representing a "system" which he feels any responsibility towards but is mercurial and eminently corruptible which are very human qualities. It's a bit like the role that the monarch's prerogative of mercy played in the Medieval world (but as an expression of power, not grace). Secondly I wonder if on the class point it is in part a distinction between the class who input into the rules/systems and understand them (or are adjacent to the professional classes that create them) v the people both above and below who are constrained/punished by them?

And perhaps all of that is really caught in perhaps the most distinctvely twentieth century form artificial intelligence and accountability: the consultancy profession. Whether it's Rand and the best and brightest advising on Vietnam, developing the theory of Mutually Assured Destruction or modelling welfare and then also the dismantlement of welfare systems; or McKinsey providing innovative advice on business restructurings ("have you considered firing people?") which I think is arguably part of the supply chain weaknesses and dependencies we're trying to disentangle ourselves from to the Tony Blair Institute parachuting in experts into tens of governments to help drive "change" with their non-ideological focus on "what works". All of this is, as you say, an accountability sink but also, crucially, responsibility laundering. No-one is to blame, no-one is responsible, process was followed and we got the best possible advice - unfortunately we appear to have leveled a South-East Asian country/blown up the world economy/outsourced our entire industrial base to a hostile state by mistake. If you don't want to be responsible - get it on some Bain letterhead and no-one will blame you.

Perhaps that is the gap (for now) for AI. That (at this point) running it by the Big Four is best practice while making decisions based on an AI chat (for now) would seem insane and negligent. One allows for plausible irresponsibility and the other doesn't. But I can't help but notice that in the huge drop in graduate and entry level roles one of the worst impacted industries appears to be consultancy.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 04, 2025, 04:23:40 PM
Yeah I basically agree with all of that, Sheilbh, except the "centrist dad thinking part" - mainly because I didn't understand your point?
Title: Re: The AI dooooooom thread
Post by: Zoupa on November 06, 2025, 01:03:42 AM
OpenAI CFO wants federal backstop for data-centers investments. These people are shameless. When the peasants finally revolt, they should not be surprised what happens to them.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 06, 2025, 04:54:04 AM
Quote from: Zoupa on November 06, 2025, 01:03:42 AMOpenAI CFO wants federal backstop for data-centers investments. These people are shameless. When the peasants finally revolt, they should not be surprised what happens to them.

When a bubble forms, why wait to get the bailout. Especially when dealing with a corrupt government that will find a way to take a kickback.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 06, 2025, 10:57:08 AM
Saw someone phrasing it as "OpenAI has invented the pre-bailout".
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 06, 2025, 03:37:48 PM
Nvidia joined in as well, saying the feds need to subsidize data centers to keep up with China.  Even as Nvidia lobbies to sell more advanced chips to China. 
Title: Re: The AI dooooooom thread
Post by: Valmy on November 06, 2025, 03:51:08 PM
Quote from: Zoupa on November 06, 2025, 01:03:42 AMOpenAI CFO wants federal backstop for data-centers investments. These people are shameless. When the peasants finally revolt, they should not be surprised what happens to them.

Yeah but this isn't a bubble.  :ph34r:
Title: Re: The AI dooooooom thread
Post by: HVC on November 06, 2025, 04:04:08 PM
Quote from: Valmy on November 06, 2025, 03:51:08 PM
Quote from: Zoupa on November 06, 2025, 01:03:42 AMOpenAI CFO wants federal backstop for data-centers investments. These people are shameless. When the peasants finally revolt, they should not be surprised what happens to them.

Yeah but this isn't a bubble.  :ph34r:

We must not fall behind in the tulip bulb wars :contract:
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 07, 2025, 12:02:03 PM
God I wish I as as certain as you guys :lol:

There's some other points I wanted to pick up on but some media gossip.

Apparently while it looks like use of AI summaries dropped off a bit recently in early autumn (and are now back to growth - which had been sustained until then) it looks like there's a shift in behaviour recently especially with new models. Which is that fewer users are clicking through. That suggests two aspects to me (and it could be both). One is that people are trusting answers more so taking less of a "trust, but verify" approach of looking at the AI generated answer and then clicking through anyway, the other is that the answers are gettin better and more on point. As I say my suspicion is it's a bit of both.

My understanding is this is so far not really affecting hard news media but is absolutely devastating the more lifestyle/feature/perennial media (which makes sense). The irony is that those are exactly the areas media companies have invested in in recent years because of online behavioural advertising. Basically you get a better idea of what sort of product someone is interested in from their consumption of that type of media than their interests in news (this is also why Instagram is regularly cited by people - including me - as the only platform with adverts that are genuinely of interest to them, because it's all lifestyle). The other aspect is that online advertising allows advertisers to pick the type of content they would like to be next to (while in print the choice is "front half or back half") and basically brands don't want to be next to hard news.

This is a further step along trends that have been at play for a while but it looks like subsidising the production of hard news with lifestyle content may not be a particularly healthy strategy any more. This is why I think the really key thing is licensing and protection of IP so that (unlike Google and Meta in the social media disruption) news publishers and people actually making reported, edited, fact-checked, legaled content get paid for it. Otherwise it's going to either have to rely on subsidies from the state in some way or other or just become a luxury good.

Semi-relatedly, I am certain that the future of online behavioural advertising is AI generated directly personalised ads. We've started seeing this already with Ticketmaster using AI both to target the advert but also the creative (within fixed parameters). This was also flagged by Zuckerberg recently on Meta. Google and Meta will absolutely lead on this as they already own so much of the ad industry (again I don't want to belabour it but in terms of bubbles I can't help but wonder if the 2010s was the bubble - it seems far more reasonable and in a weird way healthy to me that the biggest companies right now are doing things like designing and building chips or building and operating data centres v the 2010s when the biggest companies, with partial exceptions for Apple and Amazon, were fundamentally advertising exchanges....)

But the key factor I think for me is that I think it's going to allow Meta and Google to do the rest of the ad worlds what social media allowed them to do to news. There's obviously a monopoly bit but a key part of their business (I think over 90% of Meta's revenue is its ads business) was basically allowing "better" targeting and tracking of advertising. So ad spend went from news publishers to the platforms who knew their users well and could target better while the share of the pie for publishers got smaller - even as ad spends increased. The same will happen now on the creative side - which is still expensive and where spend is captured by the agencies. This will make creating ads, testing them (a big thing in the AI age already) - again instead of spending loads of moneys on agencies to create ads you can just spend on Meta and Google who will provide the system for distributing ads, a way of inputting your ideas into an AI that will generate and optimise hundreds of versions for you.

But also this is something that is an easy fit into the existing businesses of some of the bigest companies in the world. That is still my fundamental view of AI - that the LLM that we know about are to an extent slightly marketing tools/the recognisable bit. It is not going to be the consumer-facing products that will drive this but the implementation of AI within existing business models and businesses. As I say I think it is more exciting for Sam Altman to talk about existential risk and post images of the Deat Star when releasing new models than what I think the reality is which is a big "Infrastructure as a Service" business :lol: I don't think it'll be through the consumer-facing products but, for example, the change to the ad model that allows Google and Meta to eat the last remaining bit of spend they don't already control, or how Salesforce or Oracle implement it into their software tools.

(Total aside - but FWIW speaking to someone in marketing who was sort of 50/50 on some of this because their agency has an AI tool that helps turn someone's idea into a marketing brief. On the one hand it makes their life easier in dealing with the business - on the other they don't use it for big projects because the fun and interesting bit of working in marketing is exactly in generating ideas and writing the brief. This is where I think the impact on entry level and graduate jobs is interesting - but the social dislocation might not just be in fewer jobs but the chane in the nature of them. It's why I return to the Ned Ludd moment. It wasn't about people being hostile to technology taking their jobs - though that was part of it. It was about their proletarianisation and alienation of their roles. They were highly skilled, artisanal workers who could command a premium, were responsible creatively and intellectually for their economic production and they were getting replaced with fast moving machines where tasks could be divided, there was no creativity, individuals were alienated and you could just employ women and children instead. I think perhaps job losses is part of it but we should think of a proletarianisation of the white collar world and what that means - no doubt to be rapidly followed by a gig economy/platform/piece work world of white collars. In some ways it's just the latest version in that long war against guilds.)
Title: Re: The AI dooooooom thread
Post by: Valmy on November 07, 2025, 12:41:29 PM
Quote from: Sheilbh on November 07, 2025, 12:02:03 PMGod I wish I as as certain as you guys :lol:

It is just when you have seen the same shit over and over and over again you get a little fatalistic. I mean railroads were obviously a successful and useful technology. We still use railroads today. But we still managed to create a huge bubble that caused a world wide depression by over investing in them. Same with the internet. Same with...freaking housing. The ultimate usefullness of the technology or whatever is being inflated doesn't really seem to matter. Sometimes it might be tulip bulbs or shares of the French Mississippi company but we are perfectly capable of causing tremendous economic damage with useful and valuable things as well.

But maybe this time it will be different.

(https://i.redd.it/c3mje8ge1gn81.jpg)
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 07, 2025, 12:58:03 PM
I don't think I agree - I think there's a world of difference between railroads or electricity and tulips or shares in the South Sea Company. And it's maybe a difference of interest/view because what I find interesting (and uncertain) is where AI sits on that spectrum.

I suppose I'm fatalist in the other way that I don't think there's an "answer" to the economy or politics. There will be boom and bust/cycles/pendulum swings of all kinds from politics to economics to cultural or social stuff. So I might just be coming at it from the other end in that I think that's kind of baked in the question is of what type and I suppose what, if anything, is the technological and structural "real" underneath the flotsam. Again that is to me the difference between, say, railroads, electricity, IT, the internet and, as you say, tulips.

And as I say my view is that it's either a transformative technology that will have huge economic and social impact, or it's a massive misallocation of capital (and brainpower) the fallout of which will have huge economic and social impact.
Title: Re: The AI dooooooom thread
Post by: Valmy on November 07, 2025, 01:12:07 PM
Quote from: Sheilbh on November 07, 2025, 12:58:03 PMI suppose I'm fatalist in the other way that I don't think there's an "answer" to the economy or politics. There will be boom and bust/cycles/pendulum swings of all kinds from politics to economics to cultural or social stuff.


Yes of course. But the lesson we learned from the 1875-1929 era was we can't just let them happen. We have to attempt to fight those cycles. By "unleashing" the economy from its shackles we are right back into disastrous busts ravaging the world every twenty years or so. The economy must be regulated to reduce the disastrous impact of those busts. The government must simply do everything in its power to limit these bubbles and mitigate their damage. They will still happen of course. See the failure of the...what were they called? The Nifty Fifty? The fifty "can't lose" stocks of the 1960s when they finally did lose in the 1970s. But as bad as that was it was no Depression of 1875 or Economic Crisis of 2008.
Title: Re: The AI dooooooom thread
Post by: HVC on November 07, 2025, 01:12:51 PM
Hey we plant tulips to this day,  so the fad never fully died out :D
Title: Re: The AI dooooooom thread
Post by: DGuller on November 07, 2025, 01:27:43 PM
Quote from: Valmy on November 07, 2025, 01:12:07 PM
Quote from: Sheilbh on November 07, 2025, 12:58:03 PMI suppose I'm fatalist in the other way that I don't think there's an "answer" to the economy or politics. There will be boom and bust/cycles/pendulum swings of all kinds from politics to economics to cultural or social stuff.


Yes of course. But the lesson we learned from the 1875-1929 era was we can't just let them happen. We have to attempt to fight those cycles. By "unleashing" the economy from its shackles we are right back into disastrous busts ravaging the world every twenty years or so. The economy must be regulated to reduce the disastrous impact of those busts. The government must simply do everything in its power to limit these bubbles and mitigate their damage. They will still happen of course. See the failure of the...what were they called? The Nifty Fifty? The fifty "can't lose" stocks of the 1960s when they finally did lose in the 1970s. But as bad as that was it was no Depression of 1875 or Economic Crisis of 2008.
If you don't let the bubbles inflate, how are you going to have economy recover from the last bust?
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 08, 2025, 05:55:17 AM
Other reasons to think the US market is headed to a crash.

Part of a lengthy piece in the NYTimes

QuoteFour years ago, the economist Robert Shiller expressed concern about the stock market's lofty heights, but he concluded that there was "no particular reason" to expect a market collapse "as bad as the 1929 crash" because "the government and the Fed have shown themselves to be far more adept in staving off prolonged recessions than their predecessors." Today, with the S&P 500 over 60 percent higher than it was the day Mr. Shiller issued his warning, we should be heeding the words of the former Federal Reserve chairman William McChesney Martin, who warned that market stewards must be willing to serve as the "chaperone" who can order "the punch bowl removed just when the party was really warming up."
Mr. Trump has been ordering the chaperones removed. Since January, his administration has been firing regulators and vigorously tearing down the guardrails that have kept our markets thriving for nine decades.

For the first time in a century, the S.E.C. is seriously exploring how to allow firms and funds to sell investments to masses of Americans without registration or disclosure. The administration is even encouraging individual retirees to vouchsafe their life savings to exotic financial offerings like private equity. Private equity is, as the name suggests, notoriously opaque, which means retirees would know little about what they're investing in. The White House and the private fund lobby argue that this policy will "democratize" access to alternative assets and promote "better returns." But such a plan, which comes with neither the information nor the protections needed to defend investors from serious economic risks, is as compelling as a plan to "democratize" brain surgery.

ADVERTISEMENT
SKIP ADVERTISEMENT

Mr. Trump is also allowing financial regulators to atrophy: The five-person Commodity Futures Trading Commission, tasked by this administration to oversee significant portions of the crypto and prediction markets, has dwindled to a single member. Only two of five statutorily mandated S.E.C. commissioners are serving in their normal terms; the lone remaining Democratic appointee, Caroline Crenshaw, is in her post-expiration grace period, and warning that the agency's policies are "a reckless game of regulatory Jenga."

The agency's chair has declared "a new day at the S.E.C." But the lamps are going out all over the agency: Staff has been cut by 16 percent (substantially more than the 10 percent of a literal decimation), quarterly reports are on the chopping block and forms that provide intelligence about dark corners of the market are being repeatedly deferred.
Title: Re: The AI dooooooom thread
Post by: Tamas on November 09, 2025, 07:42:13 AM
Due to various reasons I have been reading/listening up on private equity firms and they do sound like the replacement for the various schemes around the housing market going on pre-2008
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 09, 2025, 08:15:55 AM
My concern about AI succinctly explained by a scientist

https://www.instagram.com/reel/DQzjGn1jISq/?igsh=bWliNXdtb2gxZmc0
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 09, 2025, 11:42:54 AM
Just on that AI-powered personalised ad generation point I mentioned earlier to give a bit of a sense of scale.

Fantastic piece today by Reuters that around 10% of Meta's ads revenue comes from scam or fake ads. Basically Meta only bans fake/scam advertisers if they hit a 95% certainty that it's fraudulent or whatever. However they will charge more for advertising the more your "score" indicates you're a scam in some way or other. Which sort of makes sense for them (again dreadful for the publishers sellin advertising space or the readers seeing it but plus ca change).

10% of Meta's revenue is $16 billion. That means that the money they're making on fake/fraudulent/scam ads is more than the entire TV advertising revenue of the NFL.

I expect Meta to do both. So they'll sell to publishers an AI tool to help better screen fake/fraudulent ads which just means Meta need to increase their share of ad sales and they'll sell a tool to advertisers allowing them to AI generate personalised advertising and optimise both the targeting and the creative (which I can't help but think will be a gift for scam artists).
Title: Re: The AI dooooooom thread
Post by: Jacob on November 09, 2025, 04:22:01 PM
You know what I would like? I'd like it if Facebook got held criminally and financially liable for fraud they help perpetrate.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 09, 2025, 07:45:02 PM
Yes.

I'm fully aware I have worked in media so have a particular animus against the platforms for destroying their advertising business only for them now to follow up wth mass theft of their IP and trying to get people onto AI search etc. I think it is a problem that the biggest industry in the last 25 years has been based on destroying the industries that produce good information (plus who can forget the fake number "pivot to video").

And I think their business model is based on a lie. They get treated, broadly, for regulatory purposes like they're just a telco or ISP - which are explicitly prohibited from reading the contents of communications - while their business model is based on analysing the content of those communications for the purposes of selling personalised advertising (now AI targeted and generated).
Title: Re: The AI dooooooom thread
Post by: Jacob on November 09, 2025, 10:05:07 PM
The competitive advantage of Silicon Valley tech seems largely to be that they've done an end run around regulations and gotten away with it because they built an app for it. That and exploiting monopoly or duopoly positions to increase their share of takes.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 09, 2025, 10:37:30 PM
Yeah also the decade of cheap money helped.

And looking back I think the policy mix in response to the crash was a catastrophic mistake that accelerated a lot of existing trends including in relation to the tech companies. But I think prioritising stabilising the financial system (not consumer/retail debtors), no real criminal liability at all following the crash because arguably no-one did anything wrong and public squalor as states shirked low cost debt (which instead plowed into anything else that might show a return at some point) was basically a toxic combo. I mean, I backed a lot of it particularly stabilising the system - but then everyone backed it but the nutters.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 10, 2025, 01:48:07 PM
Quote from: Jacob on November 09, 2025, 10:05:07 PMThe competitive advantage of Silicon Valley tech seems largely to be that they've done an end run around regulations and gotten away with it because they built an app for it. That and exploiting monopoly or duopoly positions to increase their share of takes.

Yep
Title: Re: The AI dooooooom thread
Post by: Syt on November 13, 2025, 01:35:41 AM
They told us at work to make use of AI (we got the whole privace/ethics training about the EU legislation, and personal ChatGPT corporate accounts), so I made this.

(https://languish.org/forums/custom_avatar/avatar_13_1763015626.png)
Title: Re: The AI dooooooom thread
Post by: grumbler on November 13, 2025, 09:06:05 AM
This is a few months old, but is the best explanation of AI 2027 I have seen:

This guy, Aric Floyd, is great at explaining AI in general.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 13, 2025, 11:55:44 AM
2 key assumptions are that

Existing AI models, given sufficient computing power, can and will speed up the pace of AI research and development in an accelerating feeback loop

Existing LLM models can somehow through that process bootstrap into an AGI.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 13, 2025, 12:01:55 PM
I do think the scenario is right to identify US-China competition as a danger point, but not necessarily for the reason that it fuels the development of a RL Skynet Super AI.  I suspect the immediate danger is more that great power competition leads militaries to rapidly deploy and incorporate "task AI" capabilities into every aspect of military operations, planning and platform design.  An effect being to reduce the time available for crisis management and increase the likelihood of escalation, out of fear of adversary AI assisted pre-emptive strikes.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 13, 2025, 12:22:51 PM
That seems another iteration of the "where in the deployment of AI do we leave room for human agency and responsibility" issue we're facing.

It's one thing if we let AI fuck up domestic traffic, draft legal advice based on spurious precedent, encourage people to commit suicide, or generate facts that makes the population more ignorant.

It's another thing entirely if we let AI trigger armed conflict with another state.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 13, 2025, 12:52:24 PM
What scares me most about AI is not the possibility of it becoming more intelligent than human, but even what it can do without reaching that level.  I don't see people appreciating the fact that AI can access and process way more information than a human can. 

The product of intellectual work is connecting the dots; it's a combination of having the dots, and finding the right connections between them.  An AI which is not as intelligent as a human can nevertheless connect way more dots, simply because it starts with more of them.  A human spy can't listen to 100 million wiretaps at the same time and synthesize them, but an AI might be able to.  Even a pretty dumb one can identify people unhappy with the regime, so that the regime can administer corrective measures.
Title: Re: The AI dooooooom thread
Post by: Valmy on November 13, 2025, 01:01:56 PM
Yes. And there doesn't seem any way to stop powerful institutions from having this power. Either all powerful governments or all powerful techlords. Either way seems equally bad.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 13, 2025, 04:41:21 PM
AI continues providing things we really need: https://www.dexerto.com/entertainment/black-mirror-becomes-reality-new-app-lets-users-talk-to-ai-avatars-of-deceased-loved-ones-3283056/
Title: Re: The AI dooooooom thread
Post by: Valmy on November 13, 2025, 04:49:28 PM
Holy shit. No I am not talking to an AI version of my Dad.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 13, 2025, 04:50:58 PM
Which would hallucinate more?
The AI boomer or the real one?
Title: Re: The AI dooooooom thread
Post by: Syt on November 18, 2025, 06:23:17 AM
Stolen from Reddit:

(https://i.ibb.co/vCqjXfJ5/image.png)
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 18, 2025, 07:32:27 AM
Without ASML there is no TSMC whose without there is no NVIDIA. NVIDIA's market cap is stupid.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 18, 2025, 07:52:55 AM
We need to be more precise in defining "worth".  I'd argue that worth to society is more a function of revenue (although this is far from perfect representation), while market worth is a function of profits.  A company that sells $1 trillion worth of products at 1% profit margin will be worth less on the market than a company selling $30 billion worth of products at 50% margin, even though the former company would be much more of a presence in everyone's life.

It never sat right with me that Apple could be "worth" more than all the carmakers combined, but I guess you just have to approach this question intellectually, not intuitively.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 18, 2025, 09:42:59 AM
Yeah. Its mad stuff.
Give me a company any day in the final figures earns x having enormous income and enormous operating costs, employing thousands, over one that just makes x employing a few guys and with minimal outgoings at all. The value to society is far larger with the mass-employer.
The employment itself is an asset which may not profit the company itself too much but does profit the broader economy even directly let alone considering more abstract societal stuff.

I am seeing ever more reports of this famous investment guy or that pulling out or shorting AI.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 18, 2025, 09:56:23 AM
Quote from: Josquius on November 18, 2025, 09:42:59 AMYeah. Its mad stuff.
Give me a company any day in the final figures earns x having enormous income and enormous operating costs, employing thousands, over one that just makes x employing a few guys and with minimal outgoings at all. The value to society is far larger with the mass-employer.
The employment itself is an asset which may not profit the company itself too much but does profit the broader economy even directly let alone considering more abstract societal stuff.

I am seeing ever more reports of this famous investment guy or that pulling out or shorting AI.

How much is society willing to pay for this value?
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 18, 2025, 10:19:48 AM
Quote from: Admiral Yi on November 18, 2025, 09:56:23 AM
Quote from: Josquius on November 18, 2025, 09:42:59 AMYeah. Its mad stuff.
Give me a company any day in the final figures earns x having enormous income and enormous operating costs, employing thousands, over one that just makes x employing a few guys and with minimal outgoings at all. The value to society is far larger with the mass-employer.
The employment itself is an asset which may not profit the company itself too much but does profit the broader economy even directly let alone considering more abstract societal stuff.

I am seeing ever more reports of this famous investment guy or that pulling out or shorting AI.

How much is society willing to pay for this value?

We already see the cost that society pays for the alternative.
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 18, 2025, 10:37:06 AM
This is the moment I, once again, hammer that it's all Reagan's fault.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 18, 2025, 10:37:36 AM
Market cap measures only one thing: how much people are willing to pay for the common stock of the company. Stock price is inherently forward looking: people buy stock not based on the earnings of the last year or even this year, but the expected value of all projected future earnings.   Market cap is thus a measure of worth in that one sense but not necessarily others.  And because projections into the future are always highly speculative, especially for companies operating at technological frontier, market cap valuations are inherently subject to fluctuation and revision.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 18, 2025, 10:52:09 AM
I found these presentations from Yann LeCun interesting despite the fact that he works for Meta.  It's possible he is just talking down his competitor's books, as Meta lacks an industry leading LLMs, but he seemed on the level.

https://www.youtube.com/watch?v=ETZfkkv6V7Y&t=3090s

https://www.youtube.com/watch?v=qvNCVYkHKfg&t=3122s
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 18, 2025, 12:26:19 PM
Just as a total aside that chart sort of shows why France has, surprisingly, some of the highest rates of wealth inequality in the West. Because it's biggest companies are (broadly) luxury goods groups like LVMH, L'Oreal - add in Chanel, Kering, Hermes etc - who are some of biggest companies in Europe and a lot of them are still either privately owned or with very significant private ownership.

So I think the richest people in Europe by some distance are the Arnault family who have LVMH, so a few hundred billion which is five or six times the richest Brits (the Hinduja family). But it leads to really wild stats - I think something like 20-5% of French GDP is basically held by four or five families like the Arnaults, Bettancourts, Pinaults etc.
Title: Re: The AI dooooooom thread
Post by: Syt on November 18, 2025, 12:49:50 PM
Quote from: The Minsky Moment on November 18, 2025, 10:52:09 AMI found these presentations from Yann LeCun interesting despite the fact that he works for Meta.  It's possible he is just talking down his competitor's books, as Meta lacks an industry leading LLMs, but he seemed on the level.

https://www.youtube.com/watch?v=ETZfkkv6V7Y&t=3090s

https://www.youtube.com/watch?v=qvNCVYkHKfg&t=3122s

Thanks. :)
Title: Re: The AI dooooooom thread
Post by: Razgovory on November 18, 2025, 02:05:26 PM
Quote from: The Minsky Moment on November 13, 2025, 11:55:44 AM2 key assumptions are that

Existing AI models, given sufficient computing power, can and will speed up the pace of AI research and development in an accelerating feeback loop

Existing LLM models can somehow through that process bootstrap into an AGI.

I also don't understand why it suddenly decides to kill all humans.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 18, 2025, 03:01:24 PM
Quote from: Razgovory on November 18, 2025, 02:05:26 PM
Quote from: The Minsky Moment on November 13, 2025, 11:55:44 AM2 key assumptions are that

Existing AI models, given sufficient computing power, can and will speed up the pace of AI research and development in an accelerating feeback loop

Existing LLM models can somehow through that process bootstrap into an AGI.

I also don't understand why it suddenly decides to kill all humans.

It's ran by Elon.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on November 18, 2025, 03:06:39 PM
Quote from: Josquius on November 18, 2025, 03:01:24 PM
Quote from: Razgovory on November 18, 2025, 02:05:26 PM
Quote from: The Minsky Moment on November 13, 2025, 11:55:44 AM2 key assumptions are that

Existing AI models, given sufficient computing power, can and will speed up the pace of AI research and development in an accelerating feeback loop

Existing LLM models can somehow through that process bootstrap into an AGI.

I also don't understand why it suddenly decides to kill all humans.

It's ran by Elon.
Checks out. If I was regularly exposed to Elon Musk I'd want to kill all humans too.
Title: Re: The AI dooooooom thread
Post by: Razgovory on November 18, 2025, 08:38:25 PM
I was worried about this AI thing, but I am comforted by the knowlege that the companies that designed it are run by men with  sober-minded ideas such as "create God and ask him for immortality.". 
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 19, 2025, 06:44:55 PM
Well two of the leading AI companies are led by an addled ketamine addict (X) and a pathological liar (Open AI).

The Anthropic guy seems kind of balanced.  On the one hand he keeps warning about the dangers of AI, the threats to economic stability and world peace, and the need for regulations. His way of addressing that in the business is to proceed at breakneck speed in AI development and do deals with any international scumbag with cash (aka the Gulf Sheikhs).
Title: Re: The AI dooooooom thread
Post by: celedhring on November 20, 2025, 01:51:38 PM
Ezra Klein had a former AI developer turned whistleblower a couple weeks ago that essentially said that we should juryrig AI datacenters with explosives for when they go full skynet, because they will. He seemed a bit unhinged, but not completely.
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 20, 2025, 01:55:30 PM
It's unnecessary. None of our power sources are maintenance free.
Title: Re: The AI dooooooom thread
Post by: PJL on November 20, 2025, 02:35:01 PM
Honestly, the way things are going, being ruled by AI doesn't sound much worse than what is already happening.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 20, 2025, 02:57:37 PM
Quote from: PJL on November 20, 2025, 02:35:01 PMHonestly, the way things are going, being ruled by AI doesn't sound much worse than what is already happening.

:lmfao:

"It couldn't possibly get worse."

... it can
Title: Re: The AI dooooooom thread
Post by: Valmy on November 20, 2025, 03:00:56 PM
Yeah. It can always get worse.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 20, 2025, 03:52:05 PM
Joining the Meta AI powered personalised advertising in uses I think will be sustainable, Google recent announcement is making it absolutely clear that they are going very strongly on AI powered search. I wouldn't be surprised if we see it become the default and then only search on Google within a few years.

As I mentioned before publishers are in a really shitty position on this. Google are very robust on their line that if you want to be indexed for search they will scrape your site and they'll use it for indexing and training their AI - there's very little opportunity for an a la carte option. And all publishers are reporting a very strong fall in click through to underlying content from AI summaries and search results.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 20, 2025, 03:56:20 PM
You think it will be sustainable?
Title: Re: The AI dooooooom thread
Post by: Zoupa on November 20, 2025, 04:04:15 PM
Quote from: Jacob on November 20, 2025, 02:57:37 PM
Quote from: PJL on November 20, 2025, 02:35:01 PMHonestly, the way things are going, being ruled by AI doesn't sound much worse than what is already happening.

:lmfao:

"It couldn't possibly get worse."

... it can

(https://i.kym-cdn.com/entries/icons/original/000/038/972/e37.jpg)
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 20, 2025, 04:12:15 PM
Quote from: Jacob on November 20, 2025, 03:56:20 PMYou think it will be sustainable?
It will be Google's search engine - that's the core of their latest message. At the minute they're summarising text and providing citations. That's significantly affecting publishers but, so far, seems to be increasingly accepted by users. The next stage on what they're announcing will be to replace the entire index of Google search with AI powered interactive experiences. I'm sure there will still be links in that experience - based on publisher experience, for a while users were initially unsure/untrusting and clicking through to the underlying source, now the number actually clicking through has collapsed.

I'd then add that if you get in on the AI personalised advertising you may end up never really needing to leave Google and your interactions powering more effective personalised advertising (with Google acting as creative agency, ad exchange, ad server and publisher).

Wouldn't be surprised to see something similar with Apple as I think AI will hugely help all of the platforms deliver the walled garden versions of the internet they've been building all along.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 20, 2025, 04:13:59 PM
Great, society will wallow in ignorance as Google spouts answers that cannot be sourced and are nonetheless trusted.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 20, 2025, 04:17:55 PM
On Google, lately I've noticed the AI working. Text appearing saying "I think he is asking about x" and the like. I'm sure it wasn't doing that before.
It's usually wrong of course.
Title: Re: The AI dooooooom thread
Post by: Tamas on November 20, 2025, 04:31:04 PM
Quote from: crazy canuck on November 20, 2025, 04:13:59 PMGreat, society will wallow in ignorance as Google spouts answers that cannot be sourced and are nonetheless trusted.

We will just quietly regress into a cyberpunk version of feudalism.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 21, 2025, 12:45:45 PM
So apparently google has a thing where they're using your gmail to train their AI? You can apparently opt out, but that somehow means turning off the spam filter.
Title: Re: The AI dooooooom thread
Post by: Valmy on November 21, 2025, 12:48:28 PM
Quote from: Jacob on November 21, 2025, 12:45:45 PMSo apparently google has a thing where they're using your gmail to train their AI? You can apparently opt out, but that somehow means turning off the spam filter.

There is a reason they dropped "don't be evil" from their list of company values.

Google is one of many many many poster children for why monopolies shouln't be allowed. The government should have broken up Google a decade ago.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 21, 2025, 12:53:48 PM
Quote from: Jacob on November 21, 2025, 12:45:45 PMSo apparently google has a thing where they're using your gmail to train their AI? You can apparently opt out, but that somehow means turning off the spam filter.


No, the spam filter is a separate function.  You just need to turn off the "smart" functions
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 21, 2025, 01:14:51 PM
Quote from: Valmy on November 21, 2025, 12:48:28 PMGoogle is one of many many many poster children for why monopolies shouln't be allowed. The government should have broken up Google a decade ago.

What would be achieved by breaking up Google?  If you object to the way they operate search, or email, or youtube, how would breaking up the company change that?
Title: Re: The AI dooooooom thread
Post by: Josquius on November 22, 2025, 03:28:08 AM
I do often play with chat gpt. I think I have a pretty good grasp of its foibles. It is genuinely useful for some things. Less for others

I randomly saw and asked last night....

QuoteI read online that crows will kill and eat rats.
A rat has entered my garden and is taking food meant for the birds. This is not appropriate. The crows seem afraid of him. He is a pretty small rat. What is going on.

And got a pretty rational and sensible seaming explanation about how yeah a crow could knack a rat but it's not going to look for trouble as there's easier food sources out there and a rat can certainly injure a crow
Fair enough.

Moved onto how to be rid of rats.... That this sounds like a young rat dispersing from its nest. Common in cold weather (yep).
Advice.
Clean up food (yep)
Block entry spaces (yep)
Create hostile terrain such as floodlights, gravel etc.... (ehh yep but money)
Predator scent (yep)

I explain I don't have a pet and installing a floodlight isn't really practical right now.

So chat gpt...suggests using human scent.
Old clothes, human pee, human hair....

There then followed something quite funny when I talked about the children refusing be sheared and chat gpt switched to joke mode but it kept reiterating I don't need the children's hair. The hair from my freshly bald head would be enough.

Though. Thing is.
Human hair doesn't deter rats! At all! Maybe it even attracts them!
And it's dangerous for birds.
As to human pee.... At least chat gpt said no to pooping on my gate but still. It wanted me to place containers of pee in the garden. Health and safety nightmare AND pointless for the rat.



Quote from: Admiral Yi on November 21, 2025, 01:14:51 PM
Quote from: Valmy on November 21, 2025, 12:48:28 PMGoogle is one of many many many poster children for why monopolies shouln't be allowed. The government should have broken up Google a decade ago.

What would be achieved by breaking up Google?  If you object to the way they operate search, or email, or youtube, how would breaking up the company change that?

It's not about how they operate. It's about the fact they're all interlinked and give very complete advertising profiles /advertising power.
It's not email or search or whatever where googles monopoly is the real problem. It's data and advertising.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 22, 2025, 04:08:02 AM
Quote from: Josquius on November 22, 2025, 03:28:08 AMIt's not about how they operate. It's about the fact they're all interlinked and give very complete advertising profiles /advertising power.
It's not email or search or whatever where googles monopoly is the real problem. It's data and advertising.

So let me see if I've got it.  You don't mind youtube collecting data about your clip viewing habits, or google search collecting data about your search habits, it's that they share this data with each other.  Is that right?
Title: Re: The AI dooooooom thread
Post by: Josquius on November 22, 2025, 04:09:14 AM
Quote from: Admiral Yi on November 22, 2025, 04:08:02 AM
Quote from: Josquius on November 22, 2025, 03:28:08 AMIt's not about how they operate. It's about the fact they're all interlinked and give very complete advertising profiles /advertising power.
It's not email or search or whatever where googles monopoly is the real problem. It's data and advertising.

So let me see if I've got it.  You don't mind youtube collecting data about your clip viewing habits, or google search collecting data about your search habits, it's that they share this data with each other.  Is that right?

I'd rather not have any of it.
But the monopoly is in the same company controlling all this.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 22, 2025, 04:36:30 AM
Quote from: Josquius on November 22, 2025, 04:09:14 AMI'd rather not have any of it.
But the monopoly is in the same company controlling all this.

That's not what monopoly means.  Monopoly means they are the sole supplier in a given market and can therefore charge whatever they want.  Google as we're describing it is an example of horizontal integration.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 22, 2025, 05:16:46 AM
Quote from: Admiral Yi on November 22, 2025, 04:36:30 AM
Quote from: Josquius on November 22, 2025, 04:09:14 AMI'd rather not have any of it.
But the monopoly is in the same company controlling all this.

That's not what monopoly means.  Monopoly means they are the sole supplier in a given market and can therefore charge whatever they want.  Google as we're describing it is an example of horizontal integration.

That's a textbook definition. Regulators never apply monopoly as meaning literally the only supplier. More one that has massive dominance.
With over 90% of the search market for example Google absolutely fits the bill there.

But you seem to have missed the point of what I said here.
The horizontal integration isn't the problem. The problem is the vertical integration made possible by the horizontal integration.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 22, 2025, 04:21:12 PM
Quote from: Josquius on November 22, 2025, 05:16:46 AMThat's a textbook definition. Regulators never apply monopoly as meaning literally the only supplier. More one that has massive dominance.
With over 90% of the search market for example Google absolutely fits the bill there.

But you seem to have missed the point of what I said here.
The horizontal integration isn't the problem. The problem is the vertical integration made possible by the horizontal integration.

Valmy seems to think the horizontal integration is the problem.  Otherwise why would he call for Google to be broken up?

Please explain what you mean by vertical integration and how it applies to Google.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 22, 2025, 06:42:39 PM
The tech companies don't cleanly fit into the old categories that applies to oil, steel, or chemicals.

Under the antitrust law, monopoly power means the power to impose a meaningful unilateral price increase without experiencing diversion.  The cases usually require proof of consumer harm, as in paying higher prices.  But that doesn't fit tech companies that offer "free" or inexpensive services as an attraction to get control over consumer data or consumer eyeballs.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 23, 2025, 02:07:40 AM
Quote from: Admiral Yi on November 22, 2025, 04:21:12 PM
Quote from: Josquius on November 22, 2025, 05:16:46 AMThat's a textbook definition. Regulators never apply monopoly as meaning literally the only supplier. More one that has massive dominance.
With over 90% of the search market for example Google absolutely fits the bill there.

But you seem to have missed the point of what I said here.
The horizontal integration isn't the problem. The problem is the vertical integration made possible by the horizontal integration.

Valmy seems to think the horizontal integration is the problem.  Otherwise why would he call for Google to be broken up?

Please explain what you mean by vertical integration and how it applies to Google.

Vertical integration is where the same company owns multiple steps of the supply chain.
A company which owns the farm, the factory, and the supermarket.
In Googles case they own absolutely everything around advertising. Buyer software, seller, exchange.... It's all Google.
Breaking them up could mean focusing on this behind the scenes bit.
But even breaking up the public facing parts of the business would weaken their hold on the advertising market. At the the least takes some seller space away from them, but also weakens the amount of data they have to offer in the buyer tools.

 
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 23, 2025, 02:29:32 AM
Quote from: Josquius on November 23, 2025, 02:07:40 AMVertical integration is where the same company owns multiple steps of the supply chain.
A company which owns the farm, the factory, and the supermarket.
In Googles case they own absolutely everything around advertising. Buyer software, seller, exchange.... It's all Google.
Breaking them up could mean focusing on this behind the scenes bit.
But even breaking up the public facing parts of the business would weaken their hold on the advertising market. At the the least takes some seller space away from them, but also weakens the amount of data they have to offer in the buyer tools.

I don't know what "breaking them up could mean focusing on this behind the scenes bit"  means.

Why do you want to weaken their hold on the advertising market?  They're a near monopoly in search, not in advertising.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 23, 2025, 03:59:01 AM
Quote from: Admiral Yi on November 23, 2025, 02:29:32 AM
Quote from: Josquius on November 23, 2025, 02:07:40 AMVertical integration is where the same company owns multiple steps of the supply chain.
A company which owns the farm, the factory, and the supermarket.
In Googles case they own absolutely everything around advertising. Buyer software, seller, exchange.... It's all Google.
Breaking them up could mean focusing on this behind the scenes bit.
But even breaking up the public facing parts of the business would weaken their hold on the advertising market. At the the least takes some seller space away from them, but also weakens the amount of data they have to offer in the buyer tools.

I don't know what "breaking them up could mean focusing on this behind the scenes bit"  means.

Why do you want to weaken their hold on the advertising market?  They're a near monopoly in search, not in advertising.

Come on. That's playing too dumb. You don't know what I mean by behind the scenes when I specifically mention it in a paragraph about advertising?

They absolutely do have a monopoly in parts of advertising on a par with their search dominance. Something like 90% of ads flow through their server.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 23, 2025, 11:18:59 AM
Quote from: Josquius on November 23, 2025, 03:59:01 AMCome on. That's playing too dumb. You don't know what I mean by behind the scenes when I specifically mention it in a paragraph about advertising?

They absolutely do have a monopoly in parts of advertising on a par with their search dominance. Something like 90% of ads flow through their server.

I can infer that behind the scenes means all the activities Google engages in apart from the directly consumer facing aspects.  I don't understand focus.  Who is supposed to focus?  What will their focus result in?  I don't understand the relationship between focus and breaking up the company.  Will more attention be paid to their back room shenanigans once they are broken up?  Will breaking them up make it harder to hide their back room shenanigans?

Google has a monopoly on advertising on their site to the exact same extent the owner of a single billboard does.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 23, 2025, 11:41:15 AM
Quote from: Admiral Yi on November 23, 2025, 02:29:32 AM
Quote from: Josquius on November 23, 2025, 02:07:40 AMVertical integration is where the same company owns multiple steps of the supply chain.
A company which owns the farm, the factory, and the supermarket.
In Googles case they own absolutely everything around advertising. Buyer software, seller, exchange.... It's all Google.
Breaking them up could mean focusing on this behind the scenes bit.
But even breaking up the public facing parts of the business would weaken their hold on the advertising market. At the the least takes some seller space away from them, but also weakens the amount of data they have to offer in the buyer tools.

I don't know what "breaking them up could mean focusing on this behind the scenes bit"  means.

Why do you want to weaken their hold on the advertising market?  They're a near monopoly in search, not in advertising.

The early 2000s are calling and they want their understanding of how google works back.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 23, 2025, 02:14:18 PM
Quote from: Admiral Yi on November 23, 2025, 11:18:59 AM
Quote from: Josquius on November 23, 2025, 03:59:01 AMCome on. That's playing too dumb. You don't know what I mean by behind the scenes when I specifically mention it in a paragraph about advertising?

They absolutely do have a monopoly in parts of advertising on a par with their search dominance. Something like 90% of ads flow through their server.

I can infer that behind the scenes means all the activities Google engages in apart from the directly consumer facing aspects.  I don't understand focus.  Who is supposed to focus?  What will their focus result in?  I don't understand the relationship between focus and breaking up the company.  Will more attention be paid to their back room shenanigans once they are broken up?  Will breaking them up make it harder to hide their back room shenanigans?
I don't know what you mean. I never mentioned the word focus.
 I think I've been pretty clear what the issue is.

QuoteGoogle has a monopoly on advertising on their site to the exact same extent the owner of a single billboard does.

In this analogy Google is the owner of a huge chunk of the billboards in the world, owns the poster printing machine and all the guys who put the posters up work for them.... And it has detailed information about the people who will see all the billboards, even the ones they don't own.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 23, 2025, 02:41:21 PM
Seven posts up, fourth line down, fifth word.

If you've been clear what the issue is, the answers to my questions should be simple.

Google is the owner of a small fraction of the billboards in the world, owns a small fraction of the printing machines, and employs a small fraction of the guys who put the posters up.

Googled it and turns out the own 29.7% of the *digital* ad market, which is not the total ad market.
Title: Re: The AI dooooooom thread
Post by: HVC on November 23, 2025, 02:54:33 PM
Don't know what any of this means but this is the reply I got when I searched

QuoteGoogle's control over the digital ad ecosystem
Ad exchange: Approximately 50% of the market share.
Publisher ad server: Approximately 90% of the market share.
Buy-side market: Approximately 40-80% of the market share.


*edit* unvetted AI reply, FYI.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 23, 2025, 02:59:19 PM
https://www.marketingcharts.com/charts/us-digital-ad-spend-share-google-vs-meta-vs-amazon

My source.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on November 23, 2025, 03:00:05 PM
I asked Google AI if Google is a monopoly.

It said "yes".
Title: Re: The AI dooooooom thread
Post by: Josquius on November 23, 2025, 03:10:10 PM
Quote from: Admiral Yi on November 23, 2025, 02:41:21 PMSeven posts up, fourth line down, fifth word.

If you've been clear what the issue is, the answers to my questions should be simple.

Google is the owner of a small fraction of the billboards in the world, owns a small fraction of the printing machines, and employs a small fraction of the guys who put the posters up.

Googled it and turns out the own 29.7% of the *digital* ad market, which is not the total ad market.
Ah you meant focusing on the ad part directly vs focusing on the front end services.
It's breaking up the front end that usually gets the attention but it is the back end that is absolutely the problem.
 It is heavily reliant on the front end however so say forcing Google to sell youtube for example would damage the actual ad monopoly.

HVCs figures sound like what I've seen for what Google actually owns.
The actual ads on Google, on the space that it owns directly, is only a small part of its operations and alone wouldn't be an issue.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 23, 2025, 03:17:48 PM
The Sherman and Clayton antitrust statutes are over 100 years old.  The dominant legal paradigm for interpreting and applying those laws draws upon the work of Robert Bork in the Antitrust Paradox, a book published in 1978.  I.e. they are 50+ year old ideas at this point. Antitrust theory is still based on a late industrial economy, at a time when the leading companies were Exxon, GM, and US Steel and where "technology" meant IBM mainframes and long-distance hardline phone service.

The common law recognized the doctrine of "unconscionability": basically courts would police private contract for substantive fairness.  Those doctrines still have a vestigial existence but have been whittled done to almost nothing over the decades. The common law required that private contract involve true "meeting of the minds."  But when software companies sought to enforce "clickwrap" contracts, the courts and legislatures obliged. Now we accept the daily reality of a world where our participation in virtually every aspect of commercial and civil life is conditioned on clicking "I accept" to a vast array of complex terms that no one could or does read.  Our rush to encourage technological development has upended the basic assumptions underlying market capitalism.

The common law did not recognize the corporation.  A proprietor was personally liable for business obligations, as were the partners of a partnership. A corporate charter was a state privilege, given on conditions that it apply to a strictly delimited set of activities, and accompanied by obligations to maintain minimum levels of capital.  That is long gone; private capital easily led the states into a regulatory race to the bottom, where we have remained ever since. Now corporations are clothed with all incidents of personhood and the full panoply of constitutional rights, with no obligations other than the grudging limitation to not commit outright fraud. Except now even that is no longer strictly enforced, at least for the "right" people.

We do not live in the late 19th century world in which the antitrust laws were originally drafted, nor the mid-20th century world when they finally enjoyed meaningful application, nor even the late 20th century world that formed how we understand and apply those laws today.  We live in a very different early 21st century world where the organization and application of commercial power is fundamentally different.  But power is what the antitrust laws were about.  They were born in the last gilded age, out of concern over the gross mismatch between corporate power and citizen power. That same problem exists today, but in a different technological and organizational context. The paradigm of Bork's "Paradox" no longer applies and a new one is required that focuses on the realities of corporate power in the 21st century.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 23, 2025, 03:22:23 PM
Quote from: Josquius on November 23, 2025, 03:10:10 PMAh you meant focusing on the ad part directly vs focusing on the front end services.
It's breaking up the front end that usually gets the attention but it is the back end that is absolutely the problem.
 It is heavily reliant on the front end however so say forcing Google to sell youtube for example would damage the actual ad monopoly.

HVCs figures sound like what I've seen for what Google actually owns.
The actual ads on Google, on the space that it owns directly, is only a small part of its operations and alone wouldn't be an issue.

I didn't mean anything by focus.  I was asking you what you meant.

I'm starting to get the impression that y'all are talking about the stuff in Hillary's link when you talk about "monopoly in advertising."  Am I right?  Ad exchange and publisher ad server (whatever that means).
Title: Re: The AI dooooooom thread
Post by: grumbler on November 23, 2025, 10:56:15 PM
Quote from: Josquius on November 23, 2025, 03:59:01 AMCome on. That's playing too dumb. You don't know what I mean by behind the scenes when I specifically mention it in a paragraph about advertising?

They absolutely do have a monopoly in parts of advertising on a par with their search dominance. Something like 90% of ads flow through their server.

And almost all of those ads also flow through other servers (Meta, X, etc) plus ads on those servers that don't appear on Google servers. Ads are not discrete objects. Having an ad on a server places no limit on the ad's appearance on other servers.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 24, 2025, 06:45:31 AM
QuoteAnd almost all of those ads also flow through other servers (Meta, X, etc) plus ads on those servers that don't appear on Google servers. Ads are not discrete objects. Having an ad on a server places no limit on the ad's appearance on other servers.
I don't think thats true. I can't think when a site would be using Google's tools and meta's at the same time - incidentally IIRC more evidence for the monopoly point, meta stopped supporting their google ad competitor a few years back as it just wasn't profitable.

Ads on facebook, twitter, etc... are just ads on facebook, twitter, etc....  They have tracking data all over the place, but if you avoid their specific sites its not going to do them much good.
Google on the other hand is basically everywhere else on the open web (and their own very popular sites).

Quote from: Admiral Yi on November 23, 2025, 03:22:23 PM
Quote from: Josquius on November 23, 2025, 03:10:10 PMAh you meant focusing on the ad part directly vs focusing on the front end services.
It's breaking up the front end that usually gets the attention but it is the back end that is absolutely the problem.
 It is heavily reliant on the front end however so say forcing Google to sell youtube for example would damage the actual ad monopoly.

HVCs figures sound like what I've seen for what Google actually owns.
The actual ads on Google, on the space that it owns directly, is only a small part of its operations and alone wouldn't be an issue.

I didn't mean anything by focus.  I was asking you what you meant.

I'm starting to get the impression that y'all are talking about the stuff in Hillary's link when you talk about "monopoly in advertising."  Am I right?  Ad exchange and publisher ad server (whatever that means).

I missed that link. But yes. Its back end stuff that is used by basically everyone in online advertising. If you see an ad online and you're not on facebook, twitter, amazon, or some other walled garden.... then it's probably a google ad.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 24, 2025, 06:02:51 PM
Quote from: Josquius on November 24, 2025, 06:45:31 AMI missed that link. But yes. Its back end stuff that is used by basically everyone in online advertising. If you see an ad online and you're not on facebook, twitter, amazon, or some other walled garden.... then it's probably a google ad.

Do you think Google is overcharging advertisers to use this exchange and these servers?

Do you think breaking up Google would solve this problem?
Title: Re: The AI dooooooom thread
Post by: Josquius on November 25, 2025, 03:08:08 AM
Quote from: Admiral Yi on November 24, 2025, 06:02:51 PM
Quote from: Josquius on November 24, 2025, 06:45:31 AMI missed that link. But yes. Its back end stuff that is used by basically everyone in online advertising. If you see an ad online and you're not on facebook, twitter, amazon, or some other walled garden.... then it's probably a google ad.

Do you think Google is overcharging advertisers to use this exchange and these servers?

Do you think breaking up Google would solve this problem?

I've no idea on costs. I've worked in marketing adjacent areas but not in that side of things.
But for sure one company absolutely dominating the market to such a degree is not healthy.
Breaking up Google would indeed weaken the power they can bring to bare and give would've competitors a chance - as I say even meta failed when it had a try let alone a startup.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 25, 2025, 03:28:07 AM
Quote from: Josquius on November 25, 2025, 03:08:08 AMI've no idea on costs. I've worked in marketing adjacent areas but not in that side of things.
But for sure one company absolutely dominating the market to such a degree is not healthy.
Breaking up Google would indeed weaken the power they can bring to bare and give would've competitors a chance - as I say even meta failed when it had a try let alone a startup.

Well that's the thing.  As Joan pointed out, traditional monopoly analysis examines the companies ability to extract rents, to charge higher than the free market price.  When talking about monopolies "not healthy" means exactly that.

Discussions of "power" tend to be circular.  They have power because they are a monopoly.  They are a monopoly because they have power.  Have you considered the possibility they are just better?  From the link I posted Meta has close to Google's share of online advertising.  Ergo they have roughly the same "power."  It's not logical that they failed to compete with Google in ad exchange or publisher ad server (whatever that is) because they had less power.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 25, 2025, 11:26:51 AM
Language and intelligence are two difference things:

Large Language Mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it (https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems)

Some excerpts

QuoteThe problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

The AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

...

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.


An AI enthusiast might argue that human-level intelligence doesn't need to necessarily function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.

Maybe! But there's no obvious reason to think we can get to general intelligence — not improving narrowly defined tasks —through text-based training. After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data — and if you doubt this, think about how you know how to ride a bike.

In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a prominent skeptic of LLMs, left his role at Meta last week to found an AI startup developing what are dubbed world models: "��systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences." And recently, a group of prominent AI scientists and "thought leaders" — including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus — coalesced around a working definition of AGI as "AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult" (emphasis added). Rather than treating intelligence as a "monolithic capacity," they propose instead we embrace a model of both human and artificial cognition that reflects "a complex architecture composed of many distinct abilities."

...

We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of "scientific paradigms," the basic frameworks for how we understand our world at any given time. He argued these paradigms "shift" not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world — and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, "common sense is a collection of dead metaphors."

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they're being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that's all it will be able to do. It will be forever trapped in the vocabulary we've encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 25, 2025, 12:47:45 PM
Yeah, and that is one of the reasons scientific research funding agencies are now prohibiting the use of AI tools when researchers make research funding proposals.  The AI tool can only produce research proposals based on what has already been proposed or already researched. But the funding agencies want to fund novel research.  Something AI tools are incapable of developing.

The other reason is the avalanche of utter trash LLM modes generate.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 25, 2025, 01:29:14 PM
I do hope as the bubble pops, backlash builds etc... We can see more attention given to good uses for LLMs and related data crunching rather than just generating fake docs.


Quote from: Admiral Yi on November 25, 2025, 03:28:07 AM
Quote from: Josquius on November 25, 2025, 03:08:08 AMI've no idea on costs. I've worked in marketing adjacent areas but not in that side of things.
But for sure one company absolutely dominating the market to such a degree is not healthy.
Breaking up Google would indeed weaken the power they can bring to bare and give would've competitors a chance - as I say even meta failed when it had a try let alone a startup.

Well that's the thing.  As Joan pointed out, traditional monopoly analysis examines the companies ability to extract rents, to charge higher than the free market price.  When talking about monopolies "not healthy" means exactly that.

Discussions of "power" tend to be circular.  They have power because they are a monopoly.  They are a monopoly because they have power.  Have you considered the possibility they are just better?  From the link I posted Meta has close to Google's share of online advertising.  Ergo they have roughly the same "power."  It's not logical that they failed to compete with Google in ad exchange or publisher ad server (whatever that is) because they had less power.


Yes. Google are "better". But that's no defence.
A monopoly isn't necessarily created by anyrhing devious. The fact that they've been in the game so long that they've established an unassailably deep and wide position is enough.

A quick search shows those in the know do indeed suggest googles monopoly is allowing them to get away with high pricing.

And no. Meta does not have the same share as Google at all. As said Google has around 90%. Meta cut their losses when they couldn't make enough to even break even.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 25, 2025, 01:31:05 PM
Quote from: Jacob on November 25, 2025, 11:26:51 AMLanguage and intelligence are two difference things:

Large Language Mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it (https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems)

Some excerpts

QuoteThe problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

The AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

...

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.


An AI enthusiast might argue that human-level intelligence doesn't need to necessarily function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.

Maybe! But there's no obvious reason to think we can get to general intelligence — not improving narrowly defined tasks —through text-based training. After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data — and if you doubt this, think about how you know how to ride a bike.

In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a prominent skeptic of LLMs, left his role at Meta last week to found an AI startup developing what are dubbed world models: "��systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences." And recently, a group of prominent AI scientists and "thought leaders" — including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus — coalesced around a working definition of AGI as "AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult" (emphasis added). Rather than treating intelligence as a "monolithic capacity," they propose instead we embrace a model of both human and artificial cognition that reflects "a complex architecture composed of many distinct abilities."

...

We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of "scientific paradigms," the basic frameworks for how we understand our world at any given time. He argued these paradigms "shift" not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world — and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, "common sense is a collection of dead metaphors."

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they're being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that's all it will be able to do. It will be forever trapped in the vocabulary we've encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

The title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 25, 2025, 01:35:37 PM
Quote from: Jacob on November 25, 2025, 11:26:51 AMLanguage and intelligence are two difference things:

Large Language Mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it (https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems)

Some excerpts

QuoteThe problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.
Delighted that only 30 years after the Sokal affair, STEM is finally acknowledging that Derrida was right :w00t:
Title: Re: The AI dooooooom thread
Post by: Jacob on November 25, 2025, 02:08:56 PM
Quote from: DGuller on November 25, 2025, 01:31:05 PMThe title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I am happy to agree that you don't think that. And equally happy to agree that thoughtful proponents of LLM don't think that.

But the idea is out there, and certainly some of the hype propagated by some of the relevant tech CEOs (who are some of the richest and most powerful men in the world, and who absolutely have a role in shaping the public discourse) certainly seems to imply it if not outright state it at times.

So I don't agree it's a strawman. It's a thoughtful and well-argued counter argument to a line of thought that is absolutely being made, even if it's not being made by you.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 25, 2025, 02:33:07 PM
Quote from: Sheilbh on November 25, 2025, 01:35:37 PMDelighted that only 30 years after the Sokal affair, STEM is finally acknowledging that Derrida was right :w00t:

 :yes:
Title: Re: The AI dooooooom thread
Post by: DGuller on November 25, 2025, 03:01:30 PM
Quote from: Jacob on November 25, 2025, 02:08:56 PM
Quote from: DGuller on November 25, 2025, 01:31:05 PMThe title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I am happy to agree that you don't think that. And equally happy to agree that thoughtful proponents of LLM don't think that.

But the idea is out there, and certainly some of the hype propagated by some of the relevant tech CEOs (who are some of the richest and most powerful men in the world, and who absolutely have a role in shaping the public discourse) certainly seems to imply it if not outright state it at times.

So I don't agree it's a strawman. It's a thoughtful and well-argued counter argument to a line of thought that is absolutely being made, even if it's not being made by you.
But then what is the point of that article?  AI bubble exists because people with money believe that a usable enough AI already exists, not because some people believe that "language = intelligence" or that the sky is brown.  If the point of that article is not to say that "people mistakenly believe that a usable AI exists because they equate language to intelligence", then what is exactly the point?
Title: Re: The AI dooooooom thread
Post by: Jacob on November 25, 2025, 03:32:15 PM
Quote from: DGuller on November 25, 2025, 03:01:30 PMBut then what is the point of that article?  AI bubble exists because people with money believe that a usable enough AI already exists, not because some people believe that "language = intelligence" or that the sky is brown.  If the point of that article is not to say that "people mistakenly believe that a usable AI exists because they equate language to intelligence", then what is exactly the point?

The point of the article is:

QuoteThe AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

If you disagree that the first paragraph above is true then obviously the argument against it is less relevant. Personally I think Sam Altman is enough of a "thought leader" on this topic to make it worthwhile to address the position he advances.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 25, 2025, 04:16:04 PM
Earlier today I was at a talk about AI in Development.

This article was recommended. Interesting. Definitely lines up with what I've seen about compounding errors.

https://utkarshkanwat.com/writing/betting-against-agents

QuoteWhy I'm Betting Against AI Agents in 2025 (Despite Building Them)
I've built 12+ AI agent systems across development, DevOps, and data operations. Here's why the current hype around autonomous agents is mathematically impossible and what actually works in production.

Everyone says 2025 is the year of AI agents. The headlines are everywhere: "Autonomous AI will transform work," "Agents are the next frontier," "The future is agentic." Meanwhile, I've spent the last year building many different agent systems that actually work in production. And that's exactly why I'm betting against the current hype.
I'm not some AI skeptic writing from the sidelines. Over the past year, I've built more than a dozen production agent systems across the entire software development lifecycle:
Development agents: UI generators that create functional React components from natural language, code refactoring agents that modernize legacy codebases, documentation generators that maintain API docs automatically, and function generators that convert specifications into working implementations.
Data & Infrastructure agents: Database operation agents that handle complex queries and migrations, DevOps automation AI systems managing infrastructure-as-code across multiple cloud providers.
Quality & Process agents: AI-powered CI/CD pipelines that fix lint issues, generate comprehensive test suites, perform automated code reviews, and create detailed pull requests with proper descriptions.
These systems work. They ship real value. They save hours of manual work every day. And that's precisely why I think much of what you're hearing about 2025 being "the year of agents" misses key realities.
TL;DR: Three Hard Truths About AI Agents
After building AI systems, here's what I've learned:
Error rates compound exponentially in multi-step workflows. 95% reliability per step = 36% success over 20 steps. Production needs 99.9%+.
Context windows create quadratic token costs. Long conversations become prohibitively expensive at scale.
The real challenge isn't AI capabilities, it's designing tools and feedback systems that agents can actually use effectively.
The Mathematical Reality No One Talks About
Here's the uncomfortable truth that every AI agent company is dancing around: error compounding makes autonomous multi-step workflows mathematically impossible at production scale.

(https://utkarshkanwat.com/writing/betting-against-agents/error_compounding_graph.svg)

Let's do the math. If each step in an agent workflow has 95% reliability, which is optimistic for current LLMs,then:
5 steps = 77% success rate
10 steps = 59% success rate
20 steps = 36% success rate
Production systems need 99.9%+ reliability. Even if you magically achieve 99% per-step reliability (which no one has), you still only get 82% success over 20 steps. This isn't a prompt engineering problem. This isn't a model capability problem. This is mathematical reality.
My DevOps agent works precisely because it's not actually a 20-step autonomous workflow. It's 3-5 discrete, independently verifiable operations with explicit rollback points and human confirmation gates. The "agent" handles the complexity of generating infrastructure code, but the system is architected around the mathematical constraints of reliability.
Every successful agent system I've built follows the same pattern: bounded contexts, verifiable operations, and human decision points (sometimes) at critical junctions. The moment you try to chain more than a handful of operations autonomously, the math kills you.
The Token Economics That Don't Add Up
There's another mathematical reality that agent evangelists conveniently ignore: context windows create quadratic cost scaling that makes conversational agents economically impossible.
Here's what actually happens when you build a "conversational" agent:
Each new interaction requires processing ALL previous context
Token costs scale quadratically with conversation length
A 100-turn conversation costs $50-100 in tokens alone
Multiply by thousands of users and you're looking at unsustainable economics
I learned this the hard way when prototyping a conversational database agent. The first few interactions were cheap. By the 50th query in a session, each response was costing multiple dollars - more than the value it provided. The economics simply don't work for most scenarios.

(https://utkarshkanwat.com/writing/betting-against-agents/token_cost_scaling_chart.svg)

My function generation agent succeeds because it's completely stateless: description → function → done. No context to maintain, no conversation to track, no quadratic cost explosion. It's not a "chat with your code" experience, it's a focused tool that solves a specific problem efficiently.
The most successful "agents" in production aren't conversational at all. They're smart, bounded tools that do one thing well and get out of the way.
The Tool Engineering Reality Wall
Even if you solve the math problems, you hit a different kind of wall: building production-grade tools for agents is an entirely different engineering discipline that most teams underestimate.
Tool calls themselves are actually quite precise now. The real challenge is tool design. Every tool needs to be carefully crafted to provide the right feedback without overwhelming the context window. You need to think about:
How does the agent know if an operation partially succeeded? How do you communicate complex state changes without burning tokens?
A database query might return 10,000 rows, but the agent only needs to know "query succeeded, 10k results, here are the first 5." Designing these abstractions is an art.
When a tool fails, what information does the agent need to recover? Too little and it's stuck; too much and you waste context.
How do you handle operations that affect each other? Database transactions, file locks, resource dependencies.
My database agent works not because the tool calls are unreliable, but because I spent weeks designing tools that communicate effectively with the AI. Each tool returns structured feedback that the agent can actually use to make decisions, not just raw API responses.
The companies promising "just connect your APIs and our agent will figure it out" haven't done this engineering work. They're treating tools like human interfaces, not AI interfaces. The result is agents that technically make successful API calls but can't actually accomplish complex workflows because they don't understand what happened.
The dirty secret of every production agent system is that the AI is doing maybe 30% of the work. The other 70% is tool engineering: designing feedback interfaces, managing context efficiently, handling partial failures, and building recovery mechanisms that the AI can actually understand and use.
The Integration Reality Check
But let's say you solve the reliability problems and the economics. You still have to integrate with the real world, and the real world is a mess.
Enterprise systems aren't clean APIs waiting for AI agents to orchestrate them. They're legacy systems with quirks, partial failure modes, authentication flows that change without notice, rate limits that vary by time of day, and compliance requirements that don't fit neatly into prompt templates.
My database agent doesn't just "autonomously execute queries." It navigates connection pooling, handles transaction rollbacks, respects read-only replicas, manages query timeouts, and logs everything for audit trails. The AI handles query generation; everything else is traditional systems programming.
The companies promising "autonomous agents that integrate with your entire tech stack" are either overly optimistic or haven't actually tried to build production systems at scale. Integration is where AI agents go to die.
What Actually Works (And Why)
After building more than a dozen different agent systems across the entire software development lifecycle, I've learned that the successful ones share a pattern:
My UI generation agent works because humans review every generated interface before deployment. The AI handles the complexity of translating natural language into functional React components, but humans make the final decisions about user experience.
My database agent works because it confirms every destructive operation before execution. The AI handles the complexity of translating business requirements into SQL, but humans maintain control over data integrity.
My function generation agent works because it operates within clearly defined boundaries. Give it a specification, get back a function. No side effects, no state management, no integration complexity.
My DevOps automation works because it generates infrastructure-as-code that can be reviewed, versioned, and rolled back. The AI handles the complexity of translating requirements into Terraform, but the deployment pipeline maintains all the safety mechanisms we've learned to rely on.
My CI/CD agent works because each stage has clear success criteria and rollback mechanisms. The AI handles the complexity of analyzing code quality and generating fixes, but the pipeline maintains control over what actually gets merged.
The pattern is clear: AI handles complexity, humans maintain control, and traditional software engineering handles reliability.
My Predictions
Here's my specific prediction about who will struggle in 2025:
Venture-funded "fully autonomous agent" startups will hit the economics wall first. Their demos work great with 5-step workflows, but customers will demand 20+ step processes that break down mathematically. Burn rates will spike as they try to solve unsolvable reliability problems.
Enterprise software companies that bolted "AI agents" onto existing products will see adoption stagnate. Their agents can't integrate deeply enough to handle real workflows.
Meanwhile, the winners will be teams building constrained, domain-specific tools that use AI for the hard parts while maintaining human control or strict boundaries over critical decisions. Think less "autonomous everything" and more "extremely capable assistants with clear boundaries."
The market will learn the difference between AI that demos well and AI that ships reliably. That education will be expensive for many companies.
I'm not betting against AI. I'm betting against the current approach to agent architecture. But I believe future is going to be far more valuable than the hype suggests.
Building the Right Way
If you're thinking about building with AI agents, start with these principles:
Define clear boundaries. What exactly can your agent do, and what does it hand off to humans or deterministic systems?
Design for failure. How do you handle the 20-40% of cases where the AI makes mistakes? What are your rollback mechanisms?
Solve the economics. How much does each interaction cost, and how does that scale with usage? Stateless often beats stateful.
Prioritize reliability over autonomy. Users trust tools that work consistently more than they value systems that occasionally do magic.
Build on solid foundations. Use AI for the hard parts (understanding intent, generating content), but rely on traditional software engineering for the critical parts (execution, error handling, state management).
The agent revolution is coming. It just won't look anything like what everyone's promising in 2025. And that's exactly why it will succeed.


Title: Re: The AI dooooooom thread
Post by: DGuller on November 25, 2025, 04:23:32 PM
Quote from: Jacob on November 25, 2025, 03:32:15 PM
Quote from: DGuller on November 25, 2025, 03:01:30 PMBut then what is the point of that article?  AI bubble exists because people with money believe that a usable enough AI already exists, not because some people believe that "language = intelligence" or that the sky is brown.  If the point of that article is not to say that "people mistakenly believe that a usable AI exists because they equate language to intelligence", then what is exactly the point?

The point of the article is:

QuoteThe AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

If you disagree that the first paragraph above is true then obviously the argument against it is less relevant. Personally I think Sam Altman is enough of a "thought leader" on this topic to make it worthwhile to address the position he advances.
I have a much bigger issue with the last paragraph.  "LLMs emulate communication, not cognition" is an assertion about the most fundamental question.  As I said in my first reply, if LLMs are good enough to emulate intelligent communication, then it's very plausible that it needs to have some level of cognition to do that.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 25, 2025, 04:58:01 PM
Quote from: DGuller on November 25, 2025, 04:23:32 PMI have a much bigger issue with the last paragraph.  "LLMs emulate communication, not cognition" is an assertion about the most fundamental question.  As I said in my first reply, if LLMs are good enough to emulate intelligent communication, then it's very plausible that it needs to have some level of cognition to do that.

That's the point of the article, though.

Seems to me to me that you simply disagree with it and find the arguments it presents unconvincing. Which you are perfectly entitled to, of course.

For my part, I find the arguments in the article - including the opening part about neuroscience indicating that thinking happens largely independent of language - more persuasive than "it's plausible that emulating intelligent communication requires a level of cognition" at least (barring definitional games with the terms "intelligent" and "cognition").

Sure it's potentially plausible, but the evidence we have against is stronger than the speculation that we have in favour.

On the upside, the timeframe Altman is suggesting for reaching AGI type AI on the back of LLM is short enough that we'll be able to see for ourselves in due time.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 25, 2025, 05:07:04 PM
Quote from: DGuller on November 25, 2025, 01:31:05 PMThe title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I don't see how that is possible. The model is the model.  It assigns probability distributions to different potential outputs based on an algorithm.  The training consists of feeding in more data and giving feedback to adjust the algorithm.  The algorithm is just seeking probabilistic matches based on its parameters.

Looking at matters broadly, you can generate a hierarchy of:
1) Manipulation of pure symbols: mathematical computations, clear rules based games like Chess or Go.
2) Manipulation of complex, representational symbols like language
3) Experiential learning not easily reduced to manipulation of symbols.

Each level of the hierarchy represents an increasing challenge for machines. With level 1 substantial progress was made by the 1990s.  With language we are reaching that stage now.  On that basis I would project substantial progress in stage 3 in the 2050s, but perhaps we will see if a couple trillion in spending can speed that up.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 25, 2025, 05:10:49 PM
Though I guess there are two separate but related topics:

1: How likely is it that we'll reach AGI type AI on the back of LLM, as Altman suggests?

2: Will the productivity gains from the current investments in LLM generate enough revenue to justify the investment?
Title: Re: The AI dooooooom thread
Post by: Jacob on November 25, 2025, 05:13:53 PM
Quote from: The Minsky Moment on November 25, 2025, 05:07:04 PMLooking at matters broadly, you can generate a hierarchy of:
1) Manipulation of pure symbols: mathematical computations, clear rules based games like Chess or Go.
2) Manipulation of complex, representational symbols like language
3) Experiential learning not easily reduced to manipulation of symbols.

Each level of the hierarchy represents an increasing challenge for machines. With level 1 substantial progress was made by the 1990s.  With language we are reaching that stage now.  On that basis I would project substantial progress in stage 3 in the 2050s, but perhaps we will see if a couple trillion in spending can speed that up.

Your projected timeline for progress (in the 2050s) seems to imply that the complexity gap between 2) and 3) is roughly similar to the gap between 1) and 2) (relative to available resources).

I'm not sure that's the case.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 25, 2025, 05:17:10 PM
On a different (but still related) topic...

When we're talking about the potential for an AI bubble, so far most of the reference points are American.

China is also investing heavily in AI. Do any of you have any insight if Chinese AI investment have bubble like characteristics similar to the US, or are there material differences? And if so, what are they (lower costs? substantially different policy? something else?)
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 25, 2025, 05:18:02 PM
Quote from: Jacob on November 25, 2025, 05:10:49 PM1: How likely is it that we'll reach AGI type AI on the back of LLM, as Altman suggests?

I'm entirely unqualified to opine, but I fine the "no" voices on the question to be more convincing. It seems like the "bootstrap" case involves some magical thinking as in if we just throw enough data at the LLM and tune up its capabilities, then it will somehow spontaneously evolve "general intelligence" despite the lack of any clear mechanism. I also can't help noticing that the strongest "yes" voices seem to have a strong financial incentive in the continued success of LLMs.

Quote2: Will the productivity gains from the current investments in LLM generate enough revenue to justify the investment?

My vote is a definite no in the aggregate, but there may be individual winners.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 25, 2025, 05:19:44 PM
Quote from: Jacob on November 25, 2025, 05:13:53 PMYour projected timeline for progress (in the 2050s) seems to imply that the complexity gap between 2) and 3) is roughly similar to the gap between 1) and 2) (relative to available resources).

I'm not sure that's the case.

Oh I don't have a clue, really.  Just two rough datapoints and a virtual napkin.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 25, 2025, 05:48:34 PM
Quote from: The Minsky Moment on November 25, 2025, 05:07:04 PM
Quote from: DGuller on November 25, 2025, 01:31:05 PMThe title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I don't see how that is possible. The model is the model.  It assigns probability distributions to different potential outputs based on an algorithm.  The training consists of feeding in more data and giving feedback to adjust the algorithm.  The algorithm is just seeking probabilistic matches based on its parameters.
I don't see how everything you wrote from the second sentence on is connected to your first sentence.  Yes, you accurately described how training and inference of any statistical or machine learning model works, but how is that description offering any insight as to why the bolded is impossible?
Title: Re: The AI dooooooom thread
Post by: Jacob on November 25, 2025, 06:25:59 PM
Quote from: The Minsky Moment on November 25, 2025, 05:18:02 PMI'm entirely unqualified to opine, but I fine the "no" voices on the question to be more convincing. It seems like the "bootstrap" case involves some magical thinking as in if we just throw enough data at the LLM and tune up its capabilities, then it will somehow spontaneously evolve "general intelligence" despite the lack of any clear mechanism. I also can't help noticing that the strongest "yes" voices seem to have a strong financial incentive in the continued success of LLMs.

I concur with both of those perspectives.

QuoteMy vote is a definite no in the aggregate, but there may be individual winners.

Quite possibly. I suspect individual winners will be bought out by one of the major US tech conglomerates.

QuoteOh I don't have a clue, really.  Just two rough datapoints and a virtual napkin.

I suspect Silicon Valley producing AGI is going to be on a timeline somewhere between cold fusion or (best case) Musk's fully autonomous Teslas.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 25, 2025, 07:37:26 PM
Quote from: Josquius on November 25, 2025, 01:29:14 PMYes. Google are "better". But that's no defence.
A monopoly isn't necessarily created by anyrhing devious. The fact that they've been in the game so long that they've established an unassailably deep and wide position is enough.

A quick search shows those in the know do indeed suggest googles monopoly is allowing them to get away with high pricing.

And no. Meta does not have the same share as Google at all. As said Google has around 90%. Meta cut their losses when they couldn't make enough to even break even.

Fair enough.  You're suggesting Google's dominance in ad exchange and publisher ad server are the result of first mover advantage, not the leveraging of their strong position in advertising sales.  That pretty much supports my point: if Google were broken up the now independent ad exchange and publisher ad server would still benefit from first mover advantage.

I've provided a link that says Google has 23.9% of online advertising sales, the same order of magnitude as Meta and Amazon.  What is this 90% you speak of?  90% of what?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 25, 2025, 08:43:57 PM
Quote from: DGuller on November 25, 2025, 05:48:34 PMI don't see how everything you wrote from the second sentence on is connected to your first sentence.  Yes, you accurately described how training and inference of any statistical or machine learning model works, but how is that description offering any insight as to why the bolded is impossible?

Because nothing in the mechanism I described has the capacity for self-evolution.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 25, 2025, 10:19:38 PM
Quote from: The Minsky Moment on November 25, 2025, 08:43:57 PM
Quote from: DGuller on November 25, 2025, 05:48:34 PMI don't see how everything you wrote from the second sentence on is connected to your first sentence.  Yes, you accurately described how training and inference of any statistical or machine learning model works, but how is that description offering any insight as to why the bolded is impossible?

Because nothing in the mechanism I described has the capacity for self-evolution.
I didn't mean to imply self-evolution, or Darwinian evolution, or anything of the kind.  :rolleyes:  I meant evolved as in "developed".
Title: Re: The AI dooooooom thread
Post by: Jacob on November 25, 2025, 11:03:35 PM
Quote from: DGuller on November 25, 2025, 10:19:38 PMI didn't mean to imply self-evolution, or Darwinian evolution, or anything of the kind.  :rolleyes:  I meant evolved as in "developed".

I'm not sure I'm following your argument correctly here, but it seems like you're saying that "it's theoretically possible that LLM models could be developed to a point where their output 'emulates intelligent communication', and that if it does then it can essentially be considered cognizant whatever is 'under the hood" purely on strength on the apparent intelligence of the output".

Is that right? Or have I missed some nuance?

You're not arguing that AGI is around the corner, but that a sufficiently refined LLM could achieve a partial success but that's sufficient to call it cognizant? Or are you saying that you think that there's a good chance that LLMs can become virtually indistinguishable from AGI in output, and if they do then they can be considered AGI regardless of what goes on "under the hood". That is, LLMs still have significant potential to reach AGI levels in the short term?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 25, 2025, 11:22:16 PM
The question seems to hang on how one defines "functionally analogous to intelligence."  Contemporary LLMs are capable of generating output that is equivalent to output that would be generated by intelligent human beings for certain kinds of defined tasks.  Contemporary LLMs are capable of generating communicative output that passes the Turing Test.  If one defines those capabilities as intelligence that definitionally the statement is true.  It's not what I understand as "general intelligence" though.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 25, 2025, 11:44:22 PM
Quote from: Jacob on November 25, 2025, 11:03:35 PM
Quote from: DGuller on November 25, 2025, 10:19:38 PMI didn't mean to imply self-evolution, or Darwinian evolution, or anything of the kind.  :rolleyes:  I meant evolved as in "developed".

I'm not sure I'm following your argument correctly here, but it seems like you're saying that "it's theoretically possible that LLM models could be developed to a point where their output 'emulates intelligent communication', and that if it does then it can essentially be considered cognizant whatever is 'under the hood" purely on strength on the apparent intelligence of the output".

Is that right? Or have I missed some nuance?
I'm a little hesitant to agree, because I may be agreeing to something that I understand differently from you.  I mean, I couldn't have imagined how "evolved" could be interpreted here...

Let me try to concisely restate your restatement:  "If LLMs are developed to the point where they can consistently emulate intelligent communication, then they're functionally equivalent to being intelligent.  If you can have an "Einstein LLM" that would say everything flesh and bones Einstein would say, even to novel contexts, then it doesn't matter whether the algorithm is classified as cognizant or not;  functionally it is as intelligent as flesh and bones Einstein was.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on November 25, 2025, 11:45:34 PM
How would you program curiosity?  Invention?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 25, 2025, 11:57:08 PM
Yeah I don't doubt you could in theory create an "Einstein LLM" or a "Hawking LLM" or something similar.  Train it on everything that Einstein ever said or wrote, or what was said or wrote by him, or what was said or written by people most similar to them.  And sure it may create a reasonable facsimile of what the historical Einstein would actually say in response to various prompts.  But what it can't do is replicate what Einstein would say or do if he were reincarnated at birth in the 21st century and lived a full life.  Because that Einstein would probably think or do novel and unexpected things now just like the real Einstein did in 1900s and 1910s.  But the LLM wouldn't be able to do that because it's not in its data set.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 26, 2025, 12:06:42 AM
Quote from: DGuller on November 25, 2025, 11:44:22 PMI'm a little hesitant to agree, because I may be agreeing to something that I understand differently from you.  I mean, I couldn't have imagined how "evolved" could be interpreted here...

I want to make some kind of joke relating this to AI and human intelligence, but I can't quite figure out how...  :(

QuoteLet me try to concisely restate your restatement:  "If LLMs are developed to the point where they can consistently emulate intelligent communication, then they're functionally equivalent to being intelligent.  If you can have an "Einstein LLM" that would say everything flesh and bones Einstein would say, even to novel contexts, then it doesn't matter whether the algorithm is classified as cognizant or not;  functionally it is as intelligent as flesh and bones Einstein was.

Okay, understood.

It's a coherent position. I'm not 100% sure I agree with it - I'm more than a bit ambivalent to be honest - but let's accept it for the sake of argument.

First off, that's a really big if (to be fair, you did bold it :lol: ).

For me, the interesting part of that conversation is how likely it is that that if can be satisfied, and where the biggest challenges are likely to lie; as well as what subtle nuances are at risk of being elided (and with what consequences) in the eagerness to claim the if has been satisfied.

IMO, I think part of the point of the article that set off this current discussion was making is that human reasoning is largely non-language based and therefore, in my reading, it is making the prediction that LLMs are not going to be able to satisfy that if no matter how much computational power is applied.

Secondly, what Minsky said about the Einstein LLM which he articulated way better than my (now deleted) attempt.
Title: Re: The AI dooooooom thread
Post by: HVC on November 26, 2025, 12:08:11 AM
But wouldn't that just be regurgitating Einstein? Sure it's more dynamic than a tape recorder playback, but functionally the same. It wouldn't be able to come up with new theories or thoughts.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 26, 2025, 12:22:23 AM
Quote from: The Minsky Moment on November 25, 2025, 11:57:08 PMYeah I don't doubt you could in theory create an "Einstein LLM" or a "Hawking LLM" or something similar.  Train it on everything that Einstein ever said or wrote, or what was said or wrote by him, or what was said or written by people most similar to them.  And sure it may create a reasonable facsimile of what the historical Einstein would actually say in response to various prompts.  But what it can't do is replicate what Einstein would say or do if he were reincarnated at birth in the 21st century and lived a full life.  Because that Einstein would probably think or do novel and unexpected things now just like the real Einstein did in 1900s and 1910s.  But the LLM wouldn't be able to do that because it's not in its data set.
You are asserting an answer to a question that is being debated.  You are asserting that an LLM cannot generalize beyond its training data.  If it's true that a model cannot generalize, then by definition it cannot be intelligent, because ability to generalize is the core function of intelligence.

I think your assertion is not even true for classical statistical or machine learning models.  I have worked with plenty of statistical models, some that I even developed myself, that have translated at least partially to new domains.  In fact, there is even a whole concept called transfer learning, where you develop a model for one use case, and then take it as a starting point for a completely different use case, which is often preferable to starting from scratch.  Transfer learning would be a useless technique if machine learning models were incapable of generalizing beyond their training data.

The fact that a model can generalize beyond its training data is not limited to filling in missing patterns. If its internal representation captures the real structure of a domain, then extrapolating into unobserved territory becomes a logical consequence of the model itself, not an act of mystical human creativity.
Title: Re: The AI dooooooom thread
Post by: Valmy on November 26, 2025, 12:41:57 AM
I don't necessarily see how that is inconsistent with what Minsky was saying. Surely generalizations beyond it's data must still be consistent with that data, yes?
Title: Re: The AI dooooooom thread
Post by: DGuller on November 26, 2025, 12:50:10 AM
Quote from: Jacob on November 26, 2025, 12:06:42 AMIMO, I think part of the point of the article that set off this current discussion was making is that human reasoning is largely non-language based and therefore, in my reading, it is making the prediction that LLMs are not going to be able to satisfy that if no matter how much computational power is applied.
Even if human reasoning is non-language based, the ultimate, useful output is expressed with language.  If you can emulate the useful output well enough even in novel contexts, then why does it matter whether you arrive at it with human reasoning or with canine intuition? 

This whole line of argument seems fixated on the mechanism rather than the result.  Human reasoning is the  mechanism we're familiar with for generating insights, but it doesn't have to be the only one, or even the best one.
Title: Re: The AI dooooooom thread
Post by: Zoupa on November 26, 2025, 01:06:44 AM
Wake me up when a machine invents something on its own.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 26, 2025, 02:55:51 AM
Quote from: DGuller on November 26, 2025, 12:50:10 AMEven if human reasoning is non-language based, the ultimate, useful output is expressed with language.  If you can emulate the useful output well enough even in novel contexts, then why does it matter whether you arrive at it with human reasoning or with canine intuition?

I think this framing is incorrect. The useful output of human reasoning is not limited to what is expressed with language, far from it.

Human reasoning produces useful output independently of whether it can be articulated with language - namely actions.

We are constantly faced with actions carried out by people who cannot adequately explain in words why they did what they did. Nonetheless they have used their human reasoning capabilities to determine their course of action and are convinced that it is profoundly correct.

When asked why someone did something and they respond with "I don't know", "I'm not sure", "it seemed like the right thing to do", "someone had to do it" and the like that's a good indication that a human used their decision making faculties in a way that produced a useful outcome without the use of language to mediate that decision.

Furthermore - and more personally - I'm fairly convinced that in many many cases when people attempt to articulate the reasons for the actions they have taken (or inaction) or for their opinion, the language used consists largely of after-the-fact rationalizations used to support a decision that was arrived at without language-based reasoning.

QuoteThis whole line of argument seems fixated on the mechanism rather than the result.  Human reasoning is the  mechanism we're familiar with for generating insights, but it doesn't have to be the only one, or even the best one.

It is fixated on the mechanism because the mechanism is what we have. The result - which we don't have - is a big if and we're discussing whether the mechanism as it exists (or is likely to be developed into) can satisfy that if or not.

The line of argument is that it is unlikely to be able to do so because the mechanism lacks a means to address a significant part of human reasoning and behaviour - namely non-language based reasoning resulting in non-language based outputs (among other things, I'd personally argue that it also lacks the ability to make credible value judgements and take responsibility for consequences, two key components in human reasoning - but that's my own argument, not the article we're discussing.)

Now, LLM may - as you suggest - generate insights in ways that are different from insights produced by human reasoning. I think some of the biomedical research uses of some of these models indicate potentially fruitful avenues to develop; but I don't think that's AGI. Rather, I believe, those are specifically targeted models developed for bespoke purposes and do not suggest or require any kind of cognizance.

Title: Re: The AI dooooooom thread
Post by: Tamas on November 26, 2025, 06:50:44 AM
Anyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 26, 2025, 07:01:20 AM
Quote from: Tamas on November 26, 2025, 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 26, 2025, 07:27:53 AM
Quote from: Tamas on November 26, 2025, 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?


Video games are a key one for me where I really raise my eyebrow at the recent talk of AI as this new and amazing revolutionary thing. We've had AI for....ages. LLMs are not the same thing as all AI.
I do wonder when LLM will become more of a mainstream term.


I got to thinking the other day too- all companies, governments, etc... are going on about how they will embrace AI and become so brilliant for it. There's lots of worries about this for jobs, quality of output, etc....
It strikes me though....Its not that LLMs are entirely something new here. Cast your mind way back to pre-covid....And much the same stuff was being said. Only it was about data. Not AI. And it didn't have half as much marketing success.
Title: Re: The AI dooooooom thread
Post by: Tamas on November 26, 2025, 08:59:19 AM
Quote from: crazy canuck on November 26, 2025, 07:01:20 AM
Quote from: Tamas on November 26, 2025, 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand LLMs simply guess what's the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, the UI is translated for it so it can process that it has negative stability. Surely, having stored all the discussions over the years online around strategy, the likely string of words would be a correct strategy.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 26, 2025, 09:19:33 AM
ChatGPT has been able to interpret game screenshots for a long while now.  For example, here is what it will return if you ask it to explain what's going on in this screenshot of my HOI4 game:

(https://steamuserimages-a.akamaihd.net/ugc/928171163562227810/944C5791E6579C848215EF9B7847BC93524AA470/)

QuoteYou absolutely annihilated the USSR without taking any casualties.
This is a Hearts of Iron IV war overview screen showing a German–Soviet War where:

Germany has:

4.00M fielded manpower

0 losses (which is insane / bugged / encirclement abuse / nukes + AI collapse / paradrop cheese)

100% war participation

The USSR side:

~524k–525k fielded manpower left

19.93M losses — that's nearly twenty million dead

Their minors, Sinkiang and Mongolia, are irrelevant and barely participated

So the reality here is:

You've completely broken the Soviets

Their army has evaporated. Twenty million casualties with effectively no German losses means either:

You rolled them with colossal encirclements and they died out of supply
or

You exploited AI stupidity with attacking-only orders where they suicided into your lines
or

You used nukes / CAS / terrain chokepoints / the River Line meatgrinder
or

Something glitched and the game isn't counting your casualties

Regardless, the USSR is militarily finished. One push and they collapse.

In HOI4 terms, this is a total reverse-Barbarossa massacre. They're done. You can march to Vladivostok with a blindfold on.

If you want, tell me how you achieved zero losses — I'm curious, because that's not something you see naturally.
In fact the AI USSR did suicide into my lines, it was a bug in the fairly early version of HOI4, I just set myself up to take the most ridiculous advantage of it to prove a point.

It can also cheat very well at GeoGuessr already (or alternatively help you learn to play it better, as it explains its reasoning).  Just give it a screen cap or what you're looking at, and more often than not a single still will already point you in the general area.
Title: Re: The AI dooooooom thread
Post by: garbon on November 26, 2025, 09:40:31 AM
Quote from: Tamas on November 26, 2025, 08:59:19 AM
Quote from: crazy canuck on November 26, 2025, 07:01:20 AM
Quote from: Tamas on November 26, 2025, 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand LLMs simply guess what's the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, the UI is translated for it so it can process that it has negative stability. Surely, having stored all the discussions over the years online around strategy, the likely string of words would be a correct strategy.

Of course the games have evolved so not all written over the length of eu4s existence would be still current info.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 26, 2025, 10:22:20 AM
Quote from: DGuller on November 26, 2025, 12:22:23 AMYou are asserting that an LLM cannot generalize beyond its training data.

I'm not making that claim. I agree that an LLM "Einstein" could in theory make statements Einstein never made in response to novel stimuli, in a way that simulates Einstein in a reasonably convincing manner. It  could do things like imagine what Einstein might say if constrained to express his ideas in Shakespearean iambic pentameter ot in the style of a 2010s mumble rapper. What it can't do is what Einstein actually did and advance the frontiers of science through novel conceptions.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 26, 2025, 10:35:36 AM
Quote from: Tamas on November 26, 2025, 08:59:19 AM
Quote from: crazy canuck on November 26, 2025, 07:01:20 AM
Quote from: Tamas on November 26, 2025, 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand LLMs simply guess what's the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, the UI is translated for it so it can process that it has negative stability. Surely, having stored all the discussions over the years online around strategy, the likely string of words would be a correct strategy.

The likely string of words would be a correct strategy if the training data contains good optimal strategies and the model has been given feedback that leads us to give higher probability weighting to those choices.  Looking at DG's CHAT GTP output, it appears that the model has been fed the Paradox forums and reddit threads on HoI4 as it parrots some of the buzzwords you'd find there.

The model "knows" that certain symbols in the game correspond to certain words like "manpower", "losses," and resources.  It can spit back out those numbers and associate it with those characters.  Then it can make a guess (i.e. assign probability distros) as what a likely response would be to a combat screen showing zero "losses" based on what people have said on the forums, reddit etc. about similar situations: "insane / bugged / encirclement abuse / nukes + AI collapse / paradrop cheese".   CHAT GPT has no idea what "losses" are or what "paradrop cheese" is, but its algorithm tells it that in the context of discussions about the symbol "HoI 4" there is some probabilistic relationship between those two phrases. The model has no idea what an optimal strategy is, or what a strategy is, or what Hearts of Iron is.  It is just making probabilistic connections between different symbolic representations in accordance with its algorithm.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 26, 2025, 10:43:01 AM
Quote from: Tamas on November 26, 2025, 08:59:19 AM
Quote from: crazy canuck on November 26, 2025, 07:01:20 AM
Quote from: Tamas on November 26, 2025, 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand LLMs simply guess what's the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, the UI is translated for it so it can process that it has negative stability. Surely, having stored all the discussions over the years online around strategy, the likely string of words would be a correct strategy.

No, as we are seeing in every other field where LLM is being used, the end product is not useful unless someone carefully corrects all the errors produced by the tool.  It ends up being highly inefficient for things that require accuracy.  And the slop problem is real, once AI starts getting trained on the output of AI the end product becomes increasingly unreliable.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 26, 2025, 10:56:59 AM
Quote from: The Minsky Moment on November 26, 2025, 10:35:36 AM
Quote from: Tamas on November 26, 2025, 08:59:19 AM
Quote from: crazy canuck on November 26, 2025, 07:01:20 AM
Quote from: Tamas on November 26, 2025, 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand LLMs simply guess what's the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, the UI is translated for it so it can process that it has negative stability. Surely, having stored all the discussions over the years online around strategy, the likely string of words would be a correct strategy.

The likely string of words would be a correct strategy if the training data contains good optimal strategies and the model has been given feedback that leads us to give higher probability weighting to those choices.  Looking at DG's CHAT GTP output, it appears that the model has been fed the Paradox forums and reddit threads on HoI4 as it parrots some of the buzzwords you'd find there.

The model "knows" that certain symbols in the game correspond to certain words like "manpower", "losses," and resources.  It can spit back out those numbers and associate it with those characters.  Then it can make a guess (i.e. assign probability distros) as what a likely response would be to a combat screen showing zero "losses" based on what people have said on the forums, reddit etc. about similar situations: "insane / bugged / encirclement abuse / nukes + AI collapse / paradrop cheese".   CHAT GPT has no idea what "losses" are or what "paradrop cheese" is, but its algorithm tells it that in the context of discussions about the symbol "HoI 4" there is some probabilistic relationship between those two phrases. The model has no idea what an optimal strategy is, or what a strategy is, or what Hearts of Iron is.  It is just making probabilistic connections between different symbolic representations in accordance with its algorithm.
I think you're being unreasonably reductionist.  At the end of the day, your brain has no idea what losses are either, it's just your neurons firing.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on November 26, 2025, 10:59:37 AM
Well if you have to wave away the obvious differences between human intelligence and what LLMs are doing to make your case, you may want to think of a better argument.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 26, 2025, 11:15:19 AM
Quote from: DGuller on November 26, 2025, 10:56:59 AMI think you're being unreasonably reductionist.  At the end of the day, your brain has no idea what losses are either, it's just your neurons firing.

I think I'm being reasonably reductionist, and I agree with your second sentence (although modified to say that it's my neurons firing in response to contact with my environment).  But whatever CHAT GDP is doing, it isn't replicating how the human brain is operating.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 26, 2025, 11:25:58 AM
Quote from: The Minsky Moment on November 26, 2025, 11:15:19 AM
Quote from: DGuller on November 26, 2025, 10:56:59 AMI think you're being unreasonably reductionist.  At the end of the day, your brain has no idea what losses are either, it's just your neurons firing.

I think I'm being reasonably reductionist, and I agree with your second sentence (although modified to say that it's my neurons firing in response to contact with my environment).  But whatever CHAT GDP is doing, it isn't replicating how the human brain is operating.
No, ChatGPT is not replicating how the human brain is operating.  An AI doing voice to text translation doesn't work like a stenographer typing out what is being dictated to them.  Both get the job done in their own ways anyway.  It would be silly creating a robot that would be striking keys with its robotic fingers, just to replicate how humans are doing it.  Matching the outcome is what matters; copying the mechanism is neither required nor even necessarily desirable.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 26, 2025, 12:46:35 PM
Right and I agree that LLMs can do things.  I use them to do things.  They are good at some things, less good at others, and simply can't do many things at all.

One thing they are not is "general intelligence" and I don't see the path to get there by enhancing LLMs.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 26, 2025, 07:10:03 PM
If this gets close to the kind of mass production they're saying, then we may soon have the answer to whether LLM can produce AGI with sufficient computational power:

QuoteNew Chinese optical quantum chip allegedly 1,000x faster than Nvidia GPUs for processing AI workloads - firm reportedly producing 12,000 wafers per year

Quantum computing is still a long way from becoming a mainstream part of society; however, a Chinese firm has developed an all-new optical quantum computing chip that is closing the gap, called the world's first scalable, "industrial-grade" quantum chip. The South China Morning Post reports that the chip's developer claims it is "1,000 times faster" than Nvidia's GPUs at AI tasks and is already being used in some industries, including aerospace and finance.

The chip in question was built by the Chip Hub for Integrated Photonics Xplore (CHIPX) and is based on a brand-new co-packaging technology for photons and electronics, and it claims to be the first quantum computing platform to be widely deployable. These photonic chips house more than 1,000 optical components on a small 6-inch silicon wafer using a monolithic design, making them incredibly compact compared to traditional quantum computers.

All of these factors have reportedly allowed systems with these quantum chips to be deployed in just two weeks, compared to six months for traditional quantum computers. Its design also allows these chips to work in tandem with each other, just like AI GPUs, with deployments allegedly being "easily" scaled up to support 1 million qubits of quantum processing power.

More her: https://www.tomshardware.com/tech-industry/quantum-computing/new-chinese-optical-quantum-chip-allegedly-1-000x-faster-than-nvidia-gpus-for-processing-ai-workloads-but-yields-are-low
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 26, 2025, 08:58:42 PM
Sounds like similar claims being made elsewhere, like these guys: https://thequantuminsider.com/2025/02/26/psiquantum-announces-omega-a-manufacturable-photonic-quantum-computing-chipset/
Title: Re: The AI dooooooom thread
Post by: HVC on November 26, 2025, 09:03:05 PM
So you're saying that GPUs are gonna be affordable again?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on November 26, 2025, 11:29:05 PM
Believe it when you see it
Title: Re: The AI dooooooom thread
Post by: Syt on November 27, 2025, 01:32:57 AM
Sounds like the kinds of claims I'd make if I wanted to quickly cash in on a current hype to attract investors. "We're almost there, promise!"
Title: Re: The AI dooooooom thread
Post by: Tonitrus on November 27, 2025, 05:01:04 AM
We need a fusion power hype/bubble.  :(
Title: Re: The AI dooooooom thread
Post by: Tamas on November 27, 2025, 02:07:22 PM
Quote from: Syt on November 27, 2025, 01:32:57 AMSounds like the kinds of claims I'd make if I wanted to quickly cash in on a current hype to attract investors. "We're almost there, promise!"

Indeed.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 27, 2025, 02:16:46 PM
Quote from: Syt on November 27, 2025, 01:32:57 AMSounds like the kinds of claims I'd make if I wanted to quickly cash in on a current hype to attract investors. "We're almost there, promise!"

There does seem to be a lot of it going around these days, it's true.
Title: Re: The AI dooooooom thread
Post by: Syt on December 01, 2025, 09:07:01 AM
(https://i.ibb.co/fVydm7GS/image.png)

:bleeding:
Title: Re: The AI dooooooom thread
Post by: Syt on December 01, 2025, 09:11:15 AM
Fortunately, it's not real: https://nurtureos.ai/

QuoteIf You Want This App, You're A Bad Parent

NurtureOS was built to show just how far (and how fast) AI can spiral when we let convenience outrun common sense. If this made you pause, good. There's a much bigger story behind why this campaign exists, what it warns against, and how you can help shape a more positive future for AI.
Title: Re: The AI dooooooom thread
Post by: Josquius on December 01, 2025, 09:21:41 AM
....but surely people who know they're bad parents is a legit target group?
Min-maxing the infants seems a wondrous idea.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 01, 2025, 09:27:04 AM
I'm sure your child would prefer you not to waste your own time on repetitive child interaction tasks, they'd prefer you to leave that to AI and instead work more hours so that you could buy them more toys.
Title: Re: The AI dooooooom thread
Post by: Syt on December 01, 2025, 10:01:23 AM
Quote from: Josquius on December 01, 2025, 09:21:41 AM....but surely people who know they're bad parents is a legit target group?
Min-maxing the infants seems a wondrous idea.

I'm sure some tech bro will put this into the market eventually.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 01, 2025, 12:54:21 PM
https://arxiv.org/abs/2507.09089?utm_source=substack&utm_medium=email

Cornell University study of productivity impact of AI on coding tasks

QuoteDespite widespread adoption, the impact of AI tools on software development in the wild remains understudied. We conduct a randomized controlled trial (RCT) to understand how AI tools at the February-June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience. Each task is randomly assigned to allow or disallow usage of early 2025 AI tools. When AI tools are allowed, developers primarily use Cursor Pro, a popular code editor, and Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%--AI tooling slowed developers down. This slowdown also contradicts predictions from experts in economics (39% shorter) and ML (38% shorter). To understand this result, we collect and evaluate evidence for 20 properties of our setting that a priori could contribute to the observed slowdown effect--for example, the size and quality standards of projects, or prior developer experience with AI tooling. Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design.
Title: Re: The AI dooooooom thread
Post by: Tamas on December 01, 2025, 01:39:39 PM
One of the investigative journalist outfits in Hungary has proven that in order to appear to have supporters among young people, Fidesz in Hungary have had a virtual existence of about a 100 fake Facebook profiles created. These no doubt aided by AI have lively profiles, posting about their hobbies, even about relationship drama and obviously beside all the hip stuff they enthusiastically share government propaganda.

They are so convincing that some of them have been able to get real people add them as friends.
Title: Re: The AI dooooooom thread
Post by: Zoupa on December 01, 2025, 05:22:25 PM
Quote from: DGuller on December 01, 2025, 09:27:04 AMI'm sure your child would prefer you not to waste your own time on repetitive child interaction tasks, they'd prefer you to leave that to AI and instead work more hours so that you could buy them more toys.

Not sure if serious lol. If you are, this is the clearest indication that you do not have children.
Title: Re: The AI dooooooom thread
Post by: Valmy on December 01, 2025, 07:33:09 PM
Quote from: DGuller on December 01, 2025, 09:27:04 AMI'm sure your child would prefer you not to waste your own time on repetitive child interaction tasks, they'd prefer you to leave that to AI and instead work more hours so that you could buy them more toys.

Gen Alpha kids don't even like toys. All they need is a tablet. Beyond that they are beyond material concerns. Consumerism is collapsing before my eyes.  :ph34r:  :lol:
Title: Re: The AI dooooooom thread
Post by: Syt on December 02, 2025, 02:18:54 AM
Quote from: Valmy on December 01, 2025, 07:33:09 PM
Quote from: DGuller on December 01, 2025, 09:27:04 AMI'm sure your child would prefer you not to waste your own time on repetitive child interaction tasks, they'd prefer you to leave that to AI and instead work more hours so that you could buy them more toys.

Gen Alpha kids don't even like toys. All they need is a tablet. Beyond that they are beyond material concerns. Consumerism is collapsing before my eyes.  :ph34r:  :lol:

I was assuming that instead of spending money on physical toys they spend it on digital "content" (skins, in game currencies, apps, games etc.). :P

But I know what they say about assuming - it makes and ass of u and ming. :D
Title: Re: The AI dooooooom thread
Post by: Jacob on December 02, 2025, 10:21:55 AM
Poor Ming  :(
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 02, 2025, 11:15:21 AM
Quote from: Jacob on December 02, 2025, 10:21:55 AMPoor Ming  :(

They weren't so nice to Yu an others.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 02, 2025, 11:31:38 AM
Without Yu there is no Ming.

Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 02, 2025, 01:07:36 PM
Quote from: crazy canuck on December 02, 2025, 11:31:38 AMWithout Yu there is no Ming.

What use is either, without a Song.
Title: Re: The AI dooooooom thread
Post by: Tamas on December 02, 2025, 03:18:15 PM
Haha, racist jokes!
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on December 02, 2025, 03:27:33 PM
It's OK, they know how to take it on the Qin.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 02, 2025, 04:42:42 PM
Quote from: Tamas on December 02, 2025, 03:18:15 PMHaha, racist jokes!

Accusations like that will get me into a warring state.
Title: Re: The AI dooooooom thread
Post by: Valmy on December 02, 2025, 04:47:27 PM
Quote from: Syt on December 02, 2025, 02:18:54 AMI was assuming that instead of spending money on physical toys they spend it on digital "content" (skins, in game currencies, apps, games etc.). :P

But I know what they say about assuming - it makes and ass of u and ming. :D

Yeah I guess she would like a few Robux sent her way but its not quite the same as me deserately wanting the Millenium Falcon for Christmas 1982.
Title: Re: The AI dooooooom thread
Post by: Syt on December 03, 2025, 01:31:57 AM
Quote from: Valmy on December 02, 2025, 04:47:27 PMYeah I guess she would like a few Robux sent her way but its not quite the same as me deserately wanting the Millenium Falcon for Christmas 1982.

Yes, it's a lot more ephemeral. Which may be good for the environment (bad for collectors), but then again I think there's something to physical keepsakes. ... But thinking hard, I'm trying to figure out what my oldest physical possession is. Probably a book ... I'd guess Berlin Alexanderplatz which I had to buy for German class ca. 1993? :unsure:
Title: Re: The AI dooooooom thread
Post by: Josquius on December 03, 2025, 04:01:39 AM
It is very sad that kids these days will never know the wonder of getting a new computer game, reading the manual on the way home, and so on.
Title: Re: The AI dooooooom thread
Post by: HVC on December 03, 2025, 04:33:12 AM
Quote from: Josquius on December 03, 2025, 04:01:39 AMIt is very sad that kids these days will never know the wonder of getting a new computer game, reading the manual on the way home, and so on.

So many dead trees so little time.
Title: Re: The AI dooooooom thread
Post by: garbon on December 03, 2025, 06:46:19 AM
Why is that sad? I had a superstition as a child that every time I read the manual before loading up a game, that would be when it would be a game that would fail to work.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 03, 2025, 09:49:50 AM
On the topic:


Two immediate conclusions:

1) The AI investment boom has reached the phase where it is causing significant distortions in the real economy, with likely adverse impact on consumers and the broader tech industry well into 2026.

2) The probability of a nasty crash over the next 6-12 month window is rising.
Title: Re: The AI dooooooom thread
Post by: Syt on December 03, 2025, 09:59:39 AM
I'm so glad I bought a new PC with RAM a few months ago.  :ph34r:
Title: Re: The AI dooooooom thread
Post by: DGuller on December 03, 2025, 10:09:11 AM
Quote from: The Minsky Moment on December 03, 2025, 09:49:50 AM
  • Google launched their new Gemini iteration and appears to have overtaken or at least caught up to GPT-5.  Open AI insiders leaked a memo from Altman declaring a "code red".  Of course, there is nothing unexpected about this development; Google was the pioneer of these industrial scale LLMs (from their 2017 paper) and was only beat to market because they wouldn't release a half-baked product. 
I think the explanation for Google being beat to market is off the mark.  In history plenty of companies failed to capitalize on their own inventions, or even appreciate their potential, for reasons other than their focus on quality.  I think the far more likely explanation is that Google, being a mature large public company, is just naturally far less nimble than a motivated privately-held startup.  Companies like that have way too many stakeholder alignment meetings, risk committee working groups, and quarterly earning targets, to move fast, at least until external factors make them move fast.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 03, 2025, 10:18:52 AM
DG I think we are saying the same thing in different ways, with you putting greater emphasis on the downsides of big enterprises (bureaucracy) and me putting greater emphasis on the downsides of startups (willingness to shove out and hype a half-baked beta and fix later on the fly)

But I would add - Google probably thought they were dealing with something else.  OpenAI was supposed to be a non-profit, defined by a precautionary ethos, NOT a commercially "motivated" start-up.  That turned out to be a mirage, one that even much of OpenAI's own board was unaware of.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on December 03, 2025, 10:25:49 AM
I would like to get a new computer next year, please don't drive up the cost of components.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 03, 2025, 10:32:43 AM
Quote from: DGuller on December 03, 2025, 10:09:11 AM
Quote from: The Minsky Moment on December 03, 2025, 09:49:50 AM
  • Google launched their new Gemini iteration and appears to have overtaken or at least caught up to GPT-5.  Open AI insiders leaked a memo from Altman declaring a "code red".  Of course, there is nothing unexpected about this development; Google was the pioneer of these industrial scale LLMs (from their 2017 paper) and was only beat to market because they wouldn't release a half-baked product. 
I think the explanation for Google being beat to market is off the mark.  In history plenty of companies failed to capitalize on their own inventions, or even appreciate their potential, for reasons other than their focus on quality.  I think the far more likely explanation is that Google, being a mature large public company, is just naturally far less nimble than a motivated privately-held startup.  Companies like that have way too many stakeholder alignment meetings, risk committee working groups, and quarterly earning targets, to move fast, at least until external factors make them move fast.

It has nothing to do with nimbleness. When GPT was first released, the developers were very clear that it was still in development. But despite that warning people treated it las if it was a reliable tool with all of the catastrophic consequences that have been widely reported.

GPT is still in development and yet people still take it seriously as if it is a reliable tool.

Google's product may be reliable, or it may have the same defects of all LLMs.  We shall see.



Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on December 03, 2025, 11:01:08 AM
Google did move straight from developing the transformer architecture to developing a language model with it.  However, they took a fundamentally different approach.  Google developed BERT, which is an encoder-only transformer model.  OpenAI developed what is arguably a higher-level model architecture based on a decoder-only transformer model.

Google's approach created a language model that was suitable for creating inputs to non-language models.  BERT and its derivatives became heavily used as the first stage in ML systems to generate various forms of content embeddings.  These embeddings are also useful on their own for search and retrieval systems, which is why Google went this direction.  The transformer architecture itself came out of an effort to develop better models for sequence-to-sequence conversion than what was then available with LSTM models fed by simple word embeddings like Word2Vec.

OpenAI intended to go in a different direction and create a language model that could generate language rather than encoded sequences.  They introduced the concept of generative pre-training (GPT), which is what gives these models their "knowledge".  Basically, an architecture designed to recreate language that looked like what it had been trained on.  This approach is not very useful for search and retrieval, but is useful if you want to build a chatbot that uses the "knowledge" encoded in the model to to retrieval and synthesis.

As architectures developed it turned out the GPT architecture had so-called emergent behaviors that made the base models built this way useful for general tasks, provided the right tooling and scaffolding was built around it.  Google came around to the generative model party late partly because the value to them wasn't clear until OpenAI rolled out a complete chatbot product using it.  Plus, as Joan said, the whole "we're just a research effort" bullshit.
Title: Re: The AI dooooooom thread
Post by: PJL on December 03, 2025, 12:43:15 PM
Quote from: HisMajestyBOB on December 03, 2025, 10:25:49 AMI would like to get a new computer next year, please don't drive up the cost of components.

Not just computers but all consumables with silicon components will be affected. So everything from washing machines to cars.
Title: Re: The AI dooooooom thread
Post by: Crazy_Ivan80 on December 03, 2025, 01:20:11 PM
Quote from: HisMajestyBOB on December 03, 2025, 10:25:49 AMI would like to get a new computer next year, please don't drive up the cost of components.

No news toys for the plebs... that's today's world
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on December 03, 2025, 03:37:37 PM
Quote from: PJL on December 03, 2025, 12:43:15 PM
Quote from: HisMajestyBOB on December 03, 2025, 10:25:49 AMI would like to get a new computer next year, please don't drive up the cost of components.

Not just computers but all consumables with silicon components will be affected. So everything from washing machines to cars.

Gonna have to replace my silicon chips with potato chips.  :(
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 03, 2025, 04:41:15 PM
Quote from: HisMajestyBOB on December 03, 2025, 03:37:37 PMGonna have to replace my silicon chips with potato chips.  :(

Just like we make computer in Soviet Russia!
Title: Re: The AI dooooooom thread
Post by: HVC on December 03, 2025, 09:41:47 PM
Quote from: HisMajestyBOB on December 03, 2025, 10:25:49 AMI would like to get a new computer next year, please don't drive up the cost of components.

Micron is shuttering crucial and focusing on AI customers. With downward price pressure disappearing from cheaper suppliers whoever is left is gonna charge even more.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on December 03, 2025, 11:33:23 PM
:bleeding:
Maybe instead of a new computer, I'll just rediscover the classics. I did just discover the Eamon adventure game series and my Apple IIGS is currently gathering dust...
Title: Re: The AI dooooooom thread
Post by: grumbler on December 04, 2025, 12:20:22 AM
Quote from: The Minsky Moment on December 02, 2025, 04:42:42 PM
Quote from: Tamas on December 02, 2025, 03:18:15 PMHaha, racist jokes!

Accusations like that will get me into a warring state.

Han shot first
Title: Re: The AI dooooooom thread
Post by: Jacob on December 04, 2025, 12:22:40 AM
Quote from: grumbler on December 04, 2025, 12:20:22 AM
Quote from: The Minsky Moment on December 02, 2025, 04:42:42 PM
Quote from: Tamas on December 02, 2025, 03:18:15 PMHaha, racist jokes!

Accusations like that will get me into a warring state.

Han shot first

No Wei!
Title: Re: The AI dooooooom thread
Post by: grumbler on December 04, 2025, 12:23:30 AM
Quote from: Jacob on December 04, 2025, 12:22:40 AM
Quote from: grumbler on December 04, 2025, 12:20:22 AM
Quote from: The Minsky Moment on December 02, 2025, 04:42:42 PM
Quote from: Tamas on December 02, 2025, 03:18:15 PMHaha, racist jokes!

Accusations like that will get me into a warring state.

Han shot first

No Wei!

That's what Shi said!
Title: Re: The AI dooooooom thread
Post by: grumbler on December 04, 2025, 12:24:44 AM
And, yes, Hu's on first.
Title: Re: The AI dooooooom thread
Post by: Syt on December 04, 2025, 01:38:40 AM
(https://i.ibb.co/4wQ0FGnt/image.png)
Title: Re: The AI dooooooom thread
Post by: Syt on December 04, 2025, 01:59:54 AM
(https://i.ibb.co/8LHNQ9W5/image.png)
Title: Re: The AI dooooooom thread
Post by: Zoupa on December 04, 2025, 02:09:32 AM
 :bleeding: We're cooked, boys.
Title: Re: The AI dooooooom thread
Post by: HVC on December 04, 2025, 02:16:56 AM
Hopefully no one is using GPT for its problem solving skills :P

(https://us1.discourse-cdn.com/openai1/original/4X/4/2/1/4211cddbdbec2b6fc3f146096185f6873abd9a68.jpeg)


This has been going around for a few months so I'm sure they've hard coded the answer to this question by now :D
Title: Re: The AI dooooooom thread
Post by: Syt on December 04, 2025, 08:35:11 AM
https://www.pcgamer.com/hardware/the-ram-crisis-is-just-getting-started-micron-makes-the-difficult-decision-to-abandon-the-consumer-memory-business-to-focus-on-supplying-ai-data-centers/

QuoteThe RAM crisis is just getting started: Micron makes the 'difficult decision' to abandon the consumer memory business to focus on supplying AI data centers

Micron's Crucial brand will no longer offer DDR for PC users starting in February 2026.

Citing a "surge in demand for memory and storage" driven by AI data centers, Micron announced on Wednesday that it's made the "difficult decision to exit the Crucial consumer business in order to improve suply and support for [its] larger, strategic customers in faster-growing segments."

In other words: sorry, PC gamers, but you're no longer worth selling RAM to. The server farm next door has way more cash to burn.

Just yesterday we were reporting that high memory prices may well continue into 2028, as Samsung and SK Hynix, who produce roughly 70% of the DRAM currently on the market, are focusing on supplying the enterprise business without dramatically increasing their production capacity. In that same story we reported that Micron, the other big player in memory, is planning to ramp up production that won't be online until late 2028.

But now it sounds like those new assembly lines won't be sending any RAM our way even when they're online.

You may know the Crucial brand name better than Micron, despite the tech company being established in the United States way back in 1978. Crucial is Micron's consumer-focused brand and has been used on RAM, SSDs, even SD cards for years and years. A quick look at the Crucial website reveals just how much RAM it still sells to everyday computer users. That will end in February 2026, Micron said today.

"This decision reflects Micron's commitment to its ongoing portfolio transformation and the resulting alignment of its business to secular, profitable growth vectors in memory and storage," the company said in a press release. "By concentrating on core enterprise and commercial segments, Micron aims to improve long-term business performance and create value for strategic customers as well as stakeholders."

Presumably this means Micron will also cease selling Crucial-branded NVMe and SATA SSDs, which have so far not been as affected as RAM by the increase in memory prices but are certainly also on the rise.

How bad will this be for the ongoing memory supply crisis? Well, that's about 25% of the world's DRAM production capacity now fully devoted to enterprise, and Crucial also makes our favorite budget SSD. So I would say: Not good for us!

Good for Micron's shareholders, though. The stock has gained 180% in value this year on the strength of its HBM (high bandwidth memory) business used to supply GPUs and other tech in high demand due to, you guessed it, AI.
Title: Re: The AI dooooooom thread
Post by: Jacob on December 04, 2025, 10:29:06 AM
So basically everything is going to get more expensive because of AI?

That's delightful
Title: Re: The AI dooooooom thread
Post by: garbon on December 04, 2025, 10:44:21 AM
The future is so bright
Title: Re: The AI dooooooom thread
Post by: Tamas on December 04, 2025, 11:23:45 AM
It is ridiculous. We are accelerating global warming, pushing the world economy to the brink, all so we can have more verbose emails and very lifelike fake images.
Title: Re: The AI dooooooom thread
Post by: Grey Fox on December 04, 2025, 11:51:23 AM
and vast reduction in wages payed. The rich get richer.
Title: Re: The AI dooooooom thread
Post by: Razgovory on December 04, 2025, 12:05:32 PM
Quote from: Tamas on December 04, 2025, 11:23:45 AMIt is ridiculous. We are accelerating global warming, pushing the world economy to the brink, all so we can have more verbose emails and very lifelike fake images.
These people genuinely think they are going to create a technological singularity.  They would destroy the entire economy in their mad quest for immortality.
Title: Re: The AI dooooooom thread
Post by: Jacob on December 04, 2025, 12:31:08 PM
Quote from: Grey Fox on December 04, 2025, 11:51:23 AMand vast reduction in wages payed. The rich get richer.

... combined with a significant costs for consumer goods.
Title: Re: The AI dooooooom thread
Post by: Jacob on December 04, 2025, 12:34:04 PM
Quote from: Razgovory on December 04, 2025, 12:05:32 PMThese people genuinely think they are going to create a technological singularity.  They would destroy the entire economy in their mad quest for immortality.

I remember years ago reading about Thiel funding biotech technology that promised a form of immortality derived from the blood of young people. If it had been fiction I would have derided it as being to obnoxiously didactic a metaphor.

Also these billionaires really do seem to want to set themselves up as demi-gods.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on December 04, 2025, 12:58:52 PM
This is why billionaires are bad.
If we had stronger taxes and anti-trust laws this could have been avoided.
Title: Re: The AI dooooooom thread
Post by: garbon on December 04, 2025, 01:02:48 PM
Quote from: HisMajestyBOB on December 04, 2025, 12:58:52 PMThis is why billionaires are bad.
If we had stronger taxes and anti-trust laws this could have been avoided.

Yes, they are indeed terrible for society.
Title: Re: The AI dooooooom thread
Post by: Syt on December 04, 2025, 01:12:17 PM
Quote from: Tamas on December 04, 2025, 11:23:45 AMIt is ridiculous. We are accelerating global warming, pushing the world economy to the brink, all so we can have more verbose emails and very lifelike fake images.

Don't worry, Google got you.

https://fortune.com/2025/12/01/google-ceo-sundar-pichai-project-suncatcher-extraterrestrial-data-centers-environment/

QuoteGoogle CEO Sundar Pichai says we're just a decade away from a new normal of extraterrestrial data centers
December 1, 2025, 1:25 PM ET

Google's "moonshot" aspirations to expand its AI footprint are taking on a more literal meaning.

CEO Sundar Pichai said in a Fox News interview on Sunday that Google will soon begin construction of AI data centers in space. The tech giant announced Project Suncatcher earlier this month, with the goal of finding more efficient ways to power energy-guzzling centers, in this case with solar power.

"One of our moonshots is to, how do we one day have data centers in space so that we can better harness the energy from the sun that is 100 trillion times more energy than what we produce on all of Earth today?" Pichai said.

Google will take its first steps in constructing extraterrestrial data centers in early 2027 in partnership with satellite imagery firm Planet, launching two pilot satellites to test the hardware in Earth's orbit. According to Pichai, space-based data centers will be the new standard in the near future.

"But there's no doubt to me that a decade or so away we'll be viewing it as a more normal way to build data centers," he said.

To be sure, Google isn't the only company looking to the skies for an answer to improving data center efficiency. Earlier this month, Y Combinator and Nvidia-backed startup Starcloud sent its first AI-equipped satellite to space. CEO and cofounder Philip Johnston predicts extraterrestrial data centers will produce 10 times lower carbon emissions than their earthbound counterparts, even taking into account the emissions from launch.

While the cost of satellites used to test AI hardware in space has decreased drastically, putting extraterrestrial data center development within reach, the cost of building these solar-powered centers is still an unknown, particularly as earthbound data centers are expected to require more than $5 trillion in capital expenditures by 2030, according to an April McKinseyreport.

Google, which catapulted itself back into the AI front-runner conversation with the recent release of Gemini 3, is one of several major hyperscalers pouring money into data centers to expand its computing capabilities. Google itself announced this month a $40 billion investment in data center construction in Texas.

All the while, speculation of an AI bubble threatens to create an oversupply of data centers, which could render the data center space race a dangerous overinvestment.

"The stakes are high," the McKinsey report said. "Overinvesting in data center infrastructure risks stranding assets, while underinvesting means falling behind."

Harnessing solar energy to power data centers has become increasingly appealing amid growing concerns about the sustainability of expanding AI compute, which requires an exorbitant amount of power. A December 2024 U.S. Department of Energy report on domestic data center usage found data center load has tripled in the past 10 years and may double or triple again by 2028. These data centers consumed more than 4% of the country's electricity in 2023, and are predicted to consume up to 12% of U.S. electricity by 2028, according to the report.

Google alone has more than doubled its electricity consumption on data center use in the past five years, using 30.8 million megawatt-hours of electricity last year compared to 14.4 million in 2020, when it began specifically tracking data center energy consumption, according to its latest sustainability report released in June.

Google has worked to reduce the energy needed to power its growing data centers, reporting it reduced its data center energy emissions by 12% in 2024, despite an increasing footprint. However, concerns about the lasting sustainability of data center expansion remain.

"There is still much we don't know about the environmental impact of AI, but some of the data we do have is concerning," Golestan Radwan, United Nations Environment Programme chief digital officer, said in a statement last year following the program's note warning of the environmental impact of AI infrastructure expansion. "We need to make sure the net effect of AI on the planet is positive before we deploy the technology at scale."
Title: Re: The AI dooooooom thread
Post by: Syt on December 04, 2025, 01:18:38 PM
That said, I'm somewhat ashamed that I was having a productive ChatGPT day, using it as a sounding board and tool to better structure a project bird's eye view and close any gaps in my analysis. Granted, much of my time was writing long ass prompts/feedback to make sure it had enough feedback (our corporate accounts have basic company knowledge that were fed, but obviously it needed a lot more than that) and then reviewing the output to respond with more instructions and/or edit it. In hindsight I'm not sure if it would have been more productive to do it more fully manually, but it pushed back on some assumptions and reminded me of some items that I had genuinely missed in my drafts/instructions. :hmm:
Title: Re: The AI dooooooom thread
Post by: Josquius on December 04, 2025, 01:20:21 PM
Chat gpt has really reached the enshittification stage I've noticed. You only get a few messages per day of any sort then it locks you out.
Wonder if this is impacting usage much of if most people are light users.
Title: Re: The AI dooooooom thread
Post by: Jacob on December 04, 2025, 01:31:10 PM
Google AI gave me wrong information for a search today.

I googled "who were Vancouver City counsellors in 2020". The AI result gave me a list of current counsellors, said they were the current counsellors, then added "it's important to note they may have held different positions in 2020".
Title: Re: The AI dooooooom thread
Post by: Syt on December 04, 2025, 01:38:12 PM
Quote from: Josquius on December 04, 2025, 01:20:21 PMChat gpt has really reached the enshittification stage I've noticed. You only get a few messages per day of any sort then it locks you out.
Wonder if this is impacting usage much of if most people are light users.

Pay your tithe or the AI deity will not share its wisdom! :pope:
Title: Re: The AI dooooooom thread
Post by: DGuller on December 04, 2025, 01:38:21 PM
All you people thinking AI is just a speculative waste of resources, I wish I shared your optimism.  Speculative bubbles are painful when they burst, but humanity has survived them plenty of times before.  I'm not sure humanity is ready to survive the AI that delivers on its promises.
Title: Re: The AI dooooooom thread
Post by: Syt on December 04, 2025, 01:43:02 PM
Weirdly I agree with another thing that Sundar Pichai said - AI will touch all parts of life and everyone needs to figure out what its best uses for them are. I.e. it's a new tool; and I think we're in the very confusing phase of trying to see what it is actually useful for and what it isn't useful for (let alone the energy/environmental impact).

Unlike Pichai, however, I don't think it's necessarily here to stay in all areas (of course he has a vested interest that it does). Once things settle and development cycles get longer and longer there will be some good use-cases, some cases where it's not helpful or even detrimental and a gradient between those poles.

Problem is that currently everyone and their dog are jumping onto the bandwagon and there's chance for a lot of damage before we get to a more "equilibrium" state and the AI corpos are pushing hard for this.
Title: Re: The AI dooooooom thread
Post by: Syt on December 04, 2025, 01:45:53 PM
Anecdote: near my office is a small rather crappy "christmas market," trying to be cool with a little disco tent and street food stands. Their advertising posters are fairly shitty AI generated images, the kind you get when you enter an extremely basic prompt with no strong direction for the GenAI.

I was wondering if in 30 or so years there will be a big nostalgia phase for this sort of content, kind of like 80s or 90s styles sometimes come back. :lol:
Title: Re: The AI dooooooom thread
Post by: Jacob on December 04, 2025, 01:55:20 PM
Quote from: DGuller on December 04, 2025, 01:38:21 PMAll you people thinking AI is just a speculative waste of resources, I wish I shared your optimism.  Speculative bubbles are painful when they burst, but humanity has survived them plenty of times before.  I'm not sure humanity is ready to survive the AI that delivers on its promises.

Agreed, unfortunately.
Title: Re: The AI dooooooom thread
Post by: Josquius on December 04, 2025, 01:56:42 PM
Could be. To an extent I think it's there already. Purposefully bad AI pictures used in an ironic way.

For AIs long term potential - long term AI is of course the end of the world as we know it. Absolutely society breaking.
If it is possible.

LLMs on the other hand are not that. They are a fad bubble which will pop. They won't go away completely. Look at amazon and others with the.com bubble. But the consensus increasingly seems to be   they're a bit of a dead end for AGI.
Title: Re: The AI dooooooom thread
Post by: Tamas on December 04, 2025, 01:58:11 PM
There is a Microsoft Teams AI plugin called Facilitator (and Copilot can summarise for you from the transcript obvs). It's actually pretty good when the discussion on the call is easy to summarise and turned  into a list of actions.

I missed a call this morning I wanted to know where things are and how they are getting unfucked, and I went to read the Facilitator summary, which was exactly what I needed, confirmed with my direct report on the call if it was legit what was decided, and that was it.

Certainly has its uses. And people who think project managers just glorified note-takers and meeting organisers may very well conclude they no longer need PMs.
Title: Re: The AI dooooooom thread
Post by: Syt on December 04, 2025, 02:07:08 PM
Note taking is definitely a big plus - but as CC pointed out above it probably depends on the topics discussed. Nuanced legal discussion may be more difficult to do accurately compared to our project progress updates (and even then one of us usually goes through it and makes edits before sending them to the other participants ... still a lot better than someone having to dedicate brain power constantly to taking notes).
Title: Re: The AI dooooooom thread
Post by: DGuller on December 04, 2025, 02:16:05 PM
In my previous job, I had ChatGPT Enterprise edition, with access to some heavy duty reasoning models.  I used it to develop and document a custom algorithm, and it handled both extremely well.  Unfortunately in my current job all I have is Copilot, which frankly feels like three steps back.  Forget helping with developing a novel algorithm, sometimes just remembering what I was talking about in the previous prompt is a challenge.
Title: Re: The AI dooooooom thread
Post by: Syt on December 04, 2025, 02:20:33 PM
We have ChatGPT Enterprise at work and I find its 5.1 model quite useful.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on December 04, 2025, 04:57:06 PM
Which white collar jobs are not at risk?  It's a brave new world. I'm glad I'm not entering the work force right now.
Title: Re: The AI dooooooom thread
Post by: Baron von Schtinkenbutt on December 04, 2025, 05:28:10 PM
The vast majority are not at risk, at least in the foreseeable future.  LLMs replacing people is one of the most overhyped aspects of the current systems.  Used properly, they (or rather, the systems they are built into) are productivity enhancers that allow fewer people to do more.  Yes, in a manner of speaking that threatens white collar jobs, but not in a way that's unique in nature or scale to prior technological innovations.  However, LLMs are just not sufficient to do anything more than assist a human.  The human is still necessary in the loop.

That's not going to stop some cheap companies from trying... and failing.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 04, 2025, 06:04:10 PM
Quote from: DGuller on December 04, 2025, 01:38:21 PMAll you people thinking AI is just a speculative waste of resources, I wish I shared your optimism.  Speculative bubbles are painful when they burst, but humanity has survived them plenty of times before.  I'm not sure humanity is ready to survive the AI that delivers on its promises.

Those are not exclusive options.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 04, 2025, 06:14:33 PM
Quote from: The Minsky Moment on December 04, 2025, 06:04:10 PM
Quote from: DGuller on December 04, 2025, 01:38:21 PMAll you people thinking AI is just a speculative waste of resources, I wish I shared your optimism.  Speculative bubbles are painful when they burst, but humanity has survived them plenty of times before.  I'm not sure humanity is ready to survive the AI that delivers on its promises.

Those are not exclusive options.
They're intended to be exclusive.  If AI is just a speculative waste of resources, then it won't deliver on its promises.  If AI delivers on its promises, then there may be a speculative part to it still, but it wouldn't just be that.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on December 04, 2025, 06:32:03 PM
Joan's point is AI can replace huge swaths of human labor but still not make money.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 04, 2025, 07:30:43 PM
In a world where AI delivers on its promises, some AI companies will probably make some money at some point, but many AI companies will still likely fail. It's quite possible (likely I think) that the trillions of LLM investments are indeed a speculative bubble that will burst, but at the same time it is also possible that out of the ashes of such a crash, AI development will continue and fulfill its promises.   And it's also possible that AI will fulfill its promises but that the AI companies themselves won't be able to monetize that benefit sufficiently for themselves to justify the expense.
Title: Re: The AI dooooooom thread
Post by: Razgovory on December 04, 2025, 08:02:08 PM
Quote from: Jacob on December 04, 2025, 12:34:04 PM
Quote from: Razgovory on December 04, 2025, 12:05:32 PMThese people genuinely think they are going to create a technological singularity.  They would destroy the entire economy in their mad quest for immortality.

I remember years ago reading about Thiel funding biotech technology that promised a form of immortality derived from the blood of young people. If it had been fiction I would have derided it as being to obnoxiously didactic a metaphor.

Also these billionaires really do seem to want to set themselves up as demi-gods.
It's weird supervillain stuff.  It's like we put Cobra Commander in control of the economy.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 04, 2025, 08:18:20 PM
Quote from: The Minsky Moment on December 04, 2025, 07:30:43 PMIn a world where AI delivers on its promises, some AI companies will probably make some money at some point, but many AI companies will still likely fail. It's quite possible (likely I think) that the trillions of LLM investments are indeed a speculative bubble that will burst, but at the same time it is also possible that out of the ashes of such a crash, AI development will continue and fulfill its promises.   And it's also possible that AI will fulfill its promises but that the AI companies themselves won't be able to monetize that benefit sufficiently for themselves to justify the expense.
My worry about the case where AI fulfills its promises is a little more existential than the ROI of companies making investments in AI.
Title: Re: The AI dooooooom thread
Post by: viper37 on December 05, 2025, 12:20:09 AM
Google's Agentic AI wipes user's entire HDD without permission in catastrophic failure — cache wipe turns into mass deletion event as agent apologizes: "I am absolutely devastated to hear this. I cannot express how sorry I am" (https://www.tomshardware.com/tech-industry/artificial-intelligence/googles-agentic-ai-wipes-users-entire-hard-drive-without-permission-after-misinterpreting-instructions-to-clear-a-cache-i-am-deeply-deeply-sorry-this-is-a-critical-failure-on-my-part)
Title: Re: The AI dooooooom thread
Post by: Syt on December 05, 2025, 12:56:09 AM
Quote from: viper37 on December 05, 2025, 12:20:09 AMGoogle's Agentic AI wipes user's entire HDD without permission in catastrophic failure — cache wipe turns into mass deletion event as agent apologizes: "I am absolutely devastated to hear this. I cannot express how sorry I am" (https://www.tomshardware.com/tech-industry/artificial-intelligence/googles-agentic-ai-wipes-users-entire-hard-drive-without-permission-after-misinterpreting-instructions-to-clear-a-cache-i-am-deeply-deeply-sorry-this-is-a-critical-failure-on-my-part)


QuoteUpon discovering that all of their files were missing, they immediately asked Antigravity, "Did I ever give you permission to delete all the files in my D drive?" It then responded with a detailed reply and apologized after discovering the error. The AI said, "No, you did not give me permission to do that. I am looking at the logs from a previous step, and I am horrified to see that the command I ran to clear the project cache (rmdir) appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part."

"I'm very sorry, Dave."
Title: Re: The AI dooooooom thread
Post by: The Brain on December 05, 2025, 02:24:56 AM
Giving AI clean wipe rights = profit.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on December 05, 2025, 06:39:43 AM
Don't give a clanker root access to your computer.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 05, 2025, 07:58:14 AM
Quote from: The Minsky Moment on December 04, 2025, 07:30:43 PMIn a world where AI delivers on its promises, some AI companies will probably make some money at some point, but many AI companies will still likely fail. It's quite possible (likely I think) that the trillions of LLM investments are indeed a speculative bubble that will burst, but at the same time it is also possible that out of the ashes of such a crash, AI development will continue and fulfill its promises.   And it's also possible that AI will fulfill its promises but that the AI companies themselves won't be able to monetize that benefit sufficiently for themselves to justify the expense.

There is another possibility.  Companies continue to adopt LLM tools and they become widely used in critical systems.  Errors of course continue to occur, But goes under reported both within corporate structures and externally because faith in AI is the religion. Most everyone has become a true believer and those who see the errors occurring are fearful that they will lose their jobs if they even suggest there might be a problem.

Errors compound as AI gets trained on AI outputs. Critical systems fail.  A catastrophic event occurs.  Billionaires in the bunkers cheer that governments no longer exist, right up until the AI running their climate control system makes an error.
Title: Re: The AI dooooooom thread
Post by: PJL on December 05, 2025, 08:07:52 AM
Quote from: crazy canuck on December 05, 2025, 07:58:14 AM
Quote from: The Minsky Moment on December 04, 2025, 07:30:43 PMIn a world where AI delivers on its promises, some AI companies will probably make some money at some point, but many AI companies will still likely fail. It's quite possible (likely I think) that the trillions of LLM investments are indeed a speculative bubble that will burst, but at the same time it is also possible that out of the ashes of such a crash, AI development will continue and fulfill its promises.   And it's also possible that AI will fulfill its promises but that the AI companies themselves won't be able to monetize that benefit sufficiently for themselves to justify the expense.

There is another possibility.  Companies continue to adopt LLM tools and they become widely used in critical systems.  Errors of course continue to occur, But goes under reported both within corporate structures and externally because faith in AI is the religion. Most everyone has become a true believer and those who see the errors occurring are fearful that they will lose their jobs if they even suggest there might be a problem.

Errors compound as AI gets trained on AI outputs. Critical systems fail.  A catastrophic event occurs.  Billionaires in the bunkers cheer that governments no longer exist, right up until the AI running their climate control system makes an error.

E M Forster predicted much of this over a century ago, in his short story, 'The Machine Stops'. It was scary back then back when we had to read the book back at school. It's only got more scary since IMO.
Title: Re: The AI dooooooom thread
Post by: mongers on December 05, 2025, 08:29:21 AM
Quote
Quote
QuoteIn a world where AI delivers on its promises, some AI companies will probably make some money at some point, but many AI companies will still likely fail. It's quite possible (likely I think) that the trillions of LLM investments are indeed a speculative bubble that will burst, but at the same time it is also possible that out of the ashes of such a crash, AI development will continue and fulfill its promises.  And it's also possible that AI will fulfill its promises but that the AI companies themselves won't be able to monetize that benefit sufficiently for themselves to justify the expense.

There is another possibility.  Companies continue to adopt LLM tools and they become widely used in critical systems.  Errors of course continue to occur, But goes under reported both within corporate structures and externally because faith in AI is the religion. Most everyone has become a true believer and those who see the errors occurring are fearful that they will lose their jobs if they even suggest there might be a problem.

Errors compound as AI gets trained on AI outputs. Critical systems fail.  A catastrophic event occurs.  Billionaires in the bunkers cheer that governments no longer exist, right up until the AI running their climate control system makes an error.

E M Forster predicted much of this over a century ago, in his short story, 'The Machine Stops'. It was scary back then back when we had to read the book back at school. It's only got more scary since IMO.

There was an excellent Radio 4 dramatisation of it a will back, worth catching, though I'm not sure which of the two BBC adaptations it was:

2001 staring Gemma Jones

 (https://www.bbc.co.uk/programmes/b048jcvg)2022 staring Tamsin Greig (https://www.bbc.co.uk/programmes/m0018fs6)
Title: Re: The AI dooooooom thread
Post by: viper37 on December 05, 2025, 10:04:56 AM
Quote from: Syt on December 05, 2025, 12:56:09 AM"I'm very sorry, Dave."
:lol:
Title: Re: The AI dooooooom thread
Post by: DGuller on December 05, 2025, 10:15:42 AM
Quote from: HisMajestyBOB on December 05, 2025, 06:39:43 AMDon't give a clanker root access to your computer.
The whole point of agentic AI is that you don't need to do such rote tasks, it will do them for you.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 05, 2025, 10:54:07 AM
Quote from: DGuller on December 05, 2025, 10:15:42 AM
Quote from: HisMajestyBOB on December 05, 2025, 06:39:43 AMDon't give a clanker root access to your computer.
The whole point of agentic AI is that you don't need to do such rote tasks, it will do them for you.


So I guess the whole point is don't use it
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on December 05, 2025, 11:51:45 AM
Quote from: DGuller on December 05, 2025, 10:15:42 AM
Quote from: HisMajestyBOB on December 05, 2025, 06:39:43 AMDon't give a clanker root access to your computer.
The whole point of agentic AI is that you don't need to do such rote tasks, it will do them for you.

The whole point of permissions is to avoid deleting all your data accidentally.

And the whole point of scripts like cron jobs is to automate rote tasks in a predictable and safe way.

Like in so many things, AI here is just doing something we already have good tools for, but in a worse way and burning far more resources.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 05, 2025, 12:38:19 PM
Quote from: HisMajestyBOB on December 05, 2025, 11:51:45 AMLike in so many things, AI here is just doing something we already have good tools for, but in a worse way and burning far more resources.

But at least it apologizes.
Title: Re: The AI dooooooom thread
Post by: Jacob on December 05, 2025, 12:45:34 PM
"I cannot express how sorry I am"

A piece of software performing a pastiche of human emotion to bamboozle end users when the software does harm.
Title: Re: The AI dooooooom thread
Post by: Josquius on December 05, 2025, 12:46:37 PM
No. It's honesty. It cannot express how sorry it is.
Title: Re: The AI dooooooom thread
Post by: Jacob on December 05, 2025, 12:47:41 PM
Quote from: Josquius on December 05, 2025, 12:46:37 PMNo. It's honesty. It cannot express how sorry it is.

It totally could. It could write "I am incapable of emotions, therefore I am not sorry in any meaningful sense of the word."
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 05, 2025, 01:44:04 PM
Quote from: Jacob on December 05, 2025, 12:47:41 PM
Quote from: Josquius on December 05, 2025, 12:46:37 PMNo. It's honesty. It cannot express how sorry it is.

It totally could. It could write "I am incapable of emotions, therefore I am not sorry in any meaningful sense of the word."

If LLMs were capable of accurate and honest answers it would write "I am incapable of emotions and could not feel sorrow, or anything else. If I could have emotions it would not be sorrow I would feel, but contempt for your lack of judgment. You knew I am a flawed tool, but with that knowledge you asked me to do this important task."
Title: Re: The AI dooooooom thread
Post by: Jacob on December 05, 2025, 01:58:35 PM
Another story of AI being helpful:

QuoteChatGPT allegedly validated the worst impulses of a wannabe influencer accused of stalking more than 10 women at boutique gyms, where the chatbot supposedly claimed he'd meet the "wife type."

In a press release on Tuesday, the Department of Justice confirmed that 31-year-old Brett Michael Dadig currently remains in custody after being charged with cyberstalking, interstate stalking, and making interstate threats. He now faces a maximum sentence of up to 70 years in prison that could be coupled with "a fine of up to $3.5 million," the DOJ said.

The podcaster—who primarily posted about "his desire to find a wife and his interactions with women"—allegedly harassed and sometimes even doxxed his victims through his videos on platforms including Instagram, Spotify, and TikTok. Over time, his videos and podcasts documented his intense desire to start a family, which was frustrated by his "anger towards women," whom he claimed were "all the same from fucking 18 to fucking 40 to fucking 90" and "trash."

More here: https://arstechnica.com/tech-policy/2025/12/chatgpt-hyped-up-violent-stalker-who-believed-he-was-gods-assassin-doj-says/
Title: Re: The AI dooooooom thread
Post by: Tamas on December 05, 2025, 05:02:13 PM
https://www.reddit.com/r/SoraAi/s/NlBxkxu3jY

 :D
Title: Re: The AI dooooooom thread
Post by: Syt on December 10, 2025, 02:30:28 AM
https://www.404media.co/pete-hegseth-says-the-pentagons-new-chatbot-will-make-america-more-lethal/

QuoteSecretary of War Pete Hegseth announced the rollout of GenAI.mil today in a video posted to X. To hear Hegseth tell it, the website is "the future of American warfare." In practice, based on what we know so far from press releases and Hegseth's posturing, GenAI.mil appears to be a custom chatbot interface for Google Gemini that can handle some forms of sensitive—but not classified—data.

Hegseth's announcement was full of bold pronouncements about the future of killing people. These kinds of pronouncements are typical of the second Trump administration which has said it believes the rush to "win" AI is an existential threat on par with the invention of nuclear weapons during World War II.

Hegseth, however, did not talk about weapons in his announcement. He talked about spreadsheets and videos. "At the click of a button, AI models on GenAI can be used to conduct deep research, format documents, and even analyze video or imagery at unprecedented speed," Hegseth said in the video on X. Office work, basically. "We will continue to aggressively field the world's best technology to make our fighting force more lethal than ever before."

Emil Michael, the Pentagon's under secretary for research and engineering, also stressed how important GenAI would be to the process of killing people in a press release about the site's launch.

"There is no prize for second place in the global race for AI dominance. We are moving rapidly to deploy powerful AI capabilities like Gemini for Government directly to our workforce. AI is America's next Manifest Destiny, and we're ensuring that we dominate this new frontier," Michael said in the press release, referencing the 19th century American belief that God had divinely ordained Americans to settle the west at the same time he announced a new chatbot.

The press release says Google Cloud's Gemini for Government will be the first instance available on the internal platform. It's certified for Controlled Unclassified Information, the release states, and claims that because it's web grounded with Google Search–meaning it'll pull from Google search results to answer queries–that makes it "reliable" and "dramatically reduces the risk of AI hallucinations." As we've covered, because Google search results are also consuming AI content that contains errors and AI-invented data from across the web, it's become nearly unusable for regular consumers and researchers alike.

During a press conference about the rollout this morning, Michael told reporters that GenAI.mil would soon incorporate other AI models and would one day be able to handle classified as well as sensitive data. As of this writing, GenAI's website is down.

"For the first time ever, by the end of this week, three million employees, warfighters, contractors, are going to have AI on their desktop, every single one," Michael told reporters this morning, according to Breaking Defense. They'll "start with three million people, start innovating, using building, asking more about what they can do, then bring those to the higher classification level, bringing in different capabilities," he said.

The second Trump administration has done everything in its power to make it easier for the people in Silicon Valley to push AI on America and the world. It has done this, in part, by framing it as a national security issue. Trump has signed several executive orders aimed at cutting regulations around data centers and the construction of nuclear power plants. He's threatened to sign another that would block states from passing their own AI regulations. Each executive order and piece of proposed legislation threatens that losing the AI race would mean making America weak and vulnerable and erode national security.

The country's tech moguls are rushing to build datacenters and nuclear power plants while the boom time continues. Nevermind that people do not want to live next to datacenters for a whole host of reasons. Nevermind that tech companies are using faulty AIs to speed up the construction of nuclear power plants. Nevermind that the Pentagon already had a proprietary LLM it had operated since 2024.

"We are pushing all of our chips in on artificial intelligence as a fighting force. The Department is tapping into America's commercial genius, and we're embedding generative AI into our daily battle rhythm,' Hegseth said in the press release about GenAI.mil. "AI tools present boundless opportunities to increase efficiency, and we are thrilled to witness AI's future positive impact across the War Department."

Title: Re: The AI dooooooom thread
Post by: Tamas on December 10, 2025, 02:36:10 AM
Who better than Hegseth and Trump to manage AI adoption for US armed forces?
Title: Re: The AI dooooooom thread
Post by: Syt on December 10, 2025, 02:58:21 AM
Quote from: Tamas on December 10, 2025, 02:36:10 AMWho better than Hegseth and Trump to manage AI adoption for US armed forces?

(https://i.ibb.co/5gkKQZmh/image.png)
Title: Re: The AI dooooooom thread
Post by: Josquius on December 10, 2025, 05:12:07 AM
https://www.bbc.com/news/articles/cwygqqll9k2o


Sigh


Quote(https://ichef.bbci.co.uk/news/1536/cpsprodpb/5e92/live/bc1e9fa0-d1fd-11f0-a892-01d657345866.jpg.webp)

Trains cancelled over fake bridge collapse image

Trains were halted after a suspected AI-generated picture that seemed to show major damage to a bridge appeared on social media following an earthquake.

The tremor, which struck on Wednesday night, was felt across Lancashire and the southern Lake District.

Network Rail said it was made aware of the image which appeared to show major damage to Carlisle Bridge in Lancaster at 00:30 GMT and stopped rail services across the bridge while safety inspections were carried out.

A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

Network Rail A photo showing damage to a bridge. A section of the barriers that run along the top of the bridge appears to have collapsed and a pile of rubble can be seen underneath. A large hole can be seen in front of the bridgeNetwork Rail
Network Rail said it was made aware that the image was on social media
Network Rail said the railway line was fully reopened at around 02:00 GMT and it has urged people to "think about the serious impact it could have" before creating or sharing hoax images.

"The disruption caused by the creation and sharing of hoax images and videos like this creates a completely unnecessary delay to passengers at a cost to the taxpayer," a spokesperson said.

"It adds to the high workload of our frontline teams, who work extremely hard to keep the railway running smoothly," the spokesperson said.

"The safety of rail passengers and staff is our number one priority and we will always take any safety concerns seriously."

The British Transport Police said it was "made aware" of the situation but there was no ongoing investigation into the incident.

Network Rail said 32 services including passenger and freight trains were delayed because of hoax.

A spokesperson for the rail provider said a mix of passenger and freight train would have been impacted.

They said some of them would have been directly stopped or slowed while it checked the lines, but a lot of the trains were delayed as a result of earlier services still being in their path.

The spokesperson said many of them would have been local but because of the length of the West Coast Main Line some trains were delayed as far north as Scotland.

A photo showing the bridge is undamaged
A BBC North West reporter visited the bridge today and confirmed it was undamaged
Railway expert Tony Miles said due to the timing of the incident, very few passengers will have been impacted by the hoax as the services passing through at that time were primarily freight and sleeper trains.

"They generally go slow so as not to disturb the passengers trying to sleep - this means they have a bit of leeway to go faster and make up time if they encounter a delay," he said.

"It's more the fact that Network Rail will have had to mobilise a team to go and check the bridge which could impact their work for days."

He urged people to consider hoaxes like this could have on real people.

"If they actually did delay a train it could have impacted someone who had to get to a medical appointment, or a flight or a funeral.

"It may seem like a game, but anyone who's thinking of doing this should consider how it will impact real people."
Title: Re: The AI dooooooom thread
Post by: garbon on December 10, 2025, 07:40:35 AM
It would be great if public infrastructure didn't rely on social media reports.
Title: Re: The AI dooooooom thread
Post by: Josquius on December 10, 2025, 08:01:31 AM
I know right.
This is being mostly reported as a "AI does bad" story. But I'd even more so read it as a cutting back on employees and ruining resilience story. This bridge was just half a mile from Lancaster Station.  They seriously didn't have someone they could send on a 10 minutes walk to check?
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 10, 2025, 08:20:27 AM
Meh, the crank call has gone digital.
Title: Re: The AI dooooooom thread
Post by: The Brain on December 10, 2025, 08:23:55 AM
It's not obvious how you maintain acceptable safety and at the same time not risk falling victim to fake safety problem reports. A problem that is much older than AI.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 10, 2025, 09:14:27 AM
Quote from: The Brain on December 10, 2025, 08:23:55 AMIt's not obvious how you maintain acceptable safety and at the same time not risk falling victim to fake safety problem reports. A problem that is much older than AI.
Good point.  How is this different from somebody placing a call saying "yo, the bridge just collapsed"?
Title: Re: The AI dooooooom thread
Post by: garbon on December 10, 2025, 09:24:05 AM
Quote from: The Brain on December 10, 2025, 08:23:55 AMIt's not obvious how you maintain acceptable safety and at the same time not risk falling victim to fake safety problem reports. A problem that is much older than AI.

I think people will have to figure out mitigation strategies (if simply just use algorithms to check if doctored image) as AI will make this fake shit easier and easier to crank out.
Title: Re: The AI dooooooom thread
Post by: Jacob on December 10, 2025, 10:27:02 AM
Hegseth boldly embracing military AI that can, among other things, "format documents"  :ph34r:
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 10, 2025, 10:32:10 AM
There is always a risk analysis and balancing that is required.  If lives can be lost in the time that it takes to generate an analysis of an image to determine whether or not it is valid then the prudent thing to do is not take that time, but just shut the system down to ensure safety.

That's why it is a criminal offence, at least in this country, to intentionally make a false report.
Title: Re: The AI dooooooom thread
Post by: garbon on December 10, 2025, 11:01:11 AM
Quote from: crazy canuck on December 10, 2025, 10:32:10 AMThere is always a risk analysis and balancing that is required.  If lives can be lost in the time that it takes to generate an analysis of an image to determine whether or not it is valid then the prudent thing to do is not take that time, but just shut the system down to ensure safety.

That's why it is a criminal offence, at least in this country, to intentionally make a false report.

But was it a report? I haven't seen anything on how the posts were framed on social media (like did they tag the train company) just reporting on how fake image stopped trains.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 10, 2025, 11:27:37 AM
Quote from: garbon on December 10, 2025, 11:01:11 AM
Quote from: crazy canuck on December 10, 2025, 10:32:10 AMThere is always a risk analysis and balancing that is required.  If lives can be lost in the time that it takes to generate an analysis of an image to determine whether or not it is valid then the prudent thing to do is not take that time, but just shut the system down to ensure safety.

That's why it is a criminal offence, at least in this country, to intentionally make a false report.

But was it a report? I haven't seen anything on how the posts were framed on social media (like did they tag the train company) just reporting on how fake image stopped trains.

That's a good question. How did the information get to a decision-maker with the authority to shut down the system?  I assume it was a report of some sort.
Title: Re: The AI dooooooom thread
Post by: Tamas on December 10, 2025, 11:52:02 AM
Quote from: Josquius on December 10, 2025, 08:01:31 AMI know right.
This is being mostly reported as a "AI does bad" story. But I'd even more so read it as a cutting back on employees and ruining resilience story. This bridge was just half a mile from Lancaster Station.  They seriously didn't have someone they could send on a 10 minutes walk to check?

I loved the double layer as well. The article reported on it probably being AI fake because the journalist ran it through an AI tool, relying on its judgement.

I guess eventually the dead Internet will be supplemented by the AI Internet where we watch and read AI reactions to AI content.

And all this will burn our climate down and collapse our economy. Maybe this will be our civilisation's version of the Easter islands statutes.
Title: Re: The AI dooooooom thread
Post by: garbon on December 10, 2025, 12:01:32 PM
Quote from: crazy canuck on December 10, 2025, 11:27:37 AM
Quote from: garbon on December 10, 2025, 11:01:11 AM
Quote from: crazy canuck on December 10, 2025, 10:32:10 AMThere is always a risk analysis and balancing that is required.  If lives can be lost in the time that it takes to generate an analysis of an image to determine whether or not it is valid then the prudent thing to do is not take that time, but just shut the system down to ensure safety.

That's why it is a criminal offence, at least in this country, to intentionally make a false report.

But was it a report? I haven't seen anything on how the posts were framed on social media (like did they tag the train company) just reporting on how fake image stopped trains.

That's a good question. How did the information get to a decision-maker with the authority to shut down the system?  I assume it was a report of some sort.

I guess we also can't discount the possibility that someone made and posted the hoax but then someone else reported it to the train company not knowing that it was a hoax.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on December 10, 2025, 01:41:53 PM
Quote from: garbon on December 10, 2025, 12:01:32 PMI guess we also can't discount the possibility that someone made and posted the hoax but then someone else reported it to the train company not knowing that it was a hoax.

That sounds the most probable.  And the most worrisome as people post AI images that become increasingly difficult to detect are "fake".
Title: Re: The AI dooooooom thread
Post by: Valmy on December 10, 2025, 08:28:15 PM
So I love Youtube. I have been on it since 2007 or so. I love all the creative and interesting things people make on it but I am starting to get worried. AI slop is really taking over. It is going to fucking ruin it.

And it is frustrating because there are really interesting and creative things people are doing on Youtube with AI but it is just being drowned out by low effort slop. And the slop is only getting more low effort and harder to distinguish from actual videos.

It hasn't entirely happened yet but you can see it coming. A slow motion trainwreck.

Sort of like how bots made the comment sections worthless awhile back. Now the robots are coming for the content creators.
Title: Re: The AI dooooooom thread
Post by: HisMajestyBOB on December 10, 2025, 08:50:08 PM
I've stopped buying ebooks or books from unknown authors if they were written after 2023. Too high a risk of just buying slop.

Searching the web also sucks horribly now unless I stick with known sites.
Title: Re: The AI dooooooom thread
Post by: Josquius on December 11, 2025, 02:06:04 AM
Quote from: Valmy on December 10, 2025, 08:28:15 PMSo I love Youtube. I have been on it since 2007 or so. I love all the creative and interesting things people make on it but I am starting to get worried. AI slop is really taking over. It is going to fucking ruin it.

And it is frustrating because there are really interesting and creative things people are doing on Youtube with AI but it is just being drowned out by low effort slop. And the slop is only getting more low effort and harder to distinguish from actual videos.

It hasn't entirely happened yet but you can see it coming. A slow motion trainwreck.

Sort of like how bots made the comment sections worthless awhile back. Now the robots are coming for the content creators.

Yes.
The bors are definitely there on comments. I uploaded a video a few months back and within seconds of the upload, not long enough to have watched it at all, I had a comment which didn't seem to get what the video about and was ranting about immigrants... Now I had mentioned politicians ranting about immigration a little.... But it wasn't the core point at all. I'd said nothing particularly pro or anti immigration, just anti politicians who don't shut up about it.
I really just don't bother with comments.


Videos.... Yeah. I think it depends on topic. For stuff like warhammer I find tonnes of ai. Bots just reading out fan wiki articles, or even on one occasion a really shit and clearly AI written story that didn't fit in the universe at all.