The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Jacob

Quote from: DGuller on November 02, 2025, 07:07:05 PMI guess that's a multi-trillion dollar question, isn't it?  Assuming a wrong answer due to preconceived notions or wishful thinking can be a catastrophic mistake, if you don't get lucky enough to hit upon the right answer with the wrong thinking.

It's kind of neat how your statement could be read as an indictment of AI sceptics ("they're likely to disregard the right answer when AI provides it, due to their preconceived notions") or of AI enthusiasts ("wishful thinking about the quality of AI output puts them at risk of making catastrophic mistakes").

IMO the real challenge is twofold:

The first one, rarely spoken of but tacitly admitted by folks like Musk with their AI Grokpedia shenanigans, is that so much of our decision making is values based, not data based. Sure, there are data and other hard facts involved - but how those facts are framed, which are given primacy, which are devalued as peripheral, and which are discarded as unrelated has a huge impact on the outcome of any supposedly logical, facts-based process.

That's not an anti-AI argument particularly, but it's definitely an area of uncertainty. What sort of differences will we see between CCP-developed AI and AI developed by Silicon Valley techno-oligarchs? Will those differences increase the risk of the kind of bias mistakes you mentioned? Will they push the value base of society in a particular direction? (Musk is pretty open about his desire to do so, and I wouldn't be surprised if other oligarchs with ideologies have similar, if perhaps slightly more subtle, ambitions.) How do we navigate using AI to make important decisions, if most important decisions are (as I contend) values based and political, not pure logic puzzles?

Which leads to the second point: accountability. I expect AI to become essentially a massive machinery of accountability-dispersal.

AI can make all sorts of recommendations for action and policy (or legal strategy or construction process or whatever), but who owns the responsibility for the outcomes if something goes wrong?

I expect we'll see all kinds of ethically dubious AI decision making in areas where the "customers" are unimportant and have no venue for raising complaints - say, civilians in distant warzones. If autonomous drones blow up some random innocents in a country the media doesn't care about, but the AI said it was "acceptable risk" or "they fit the profile of a potential threat" - or even straight up failed to account for civilians - I unfortunately think it won't cause any problems, even today, given the state of the world.

But imagine for a moment that "we" did care - maybe it blows up in the media for whatever reason (it's so damn egregious, or a youtuber with a massive following whips up a furore somehow), maybe we get to a moment where the conventions of war matter again - and AI makes some egregious mistake where lives are lost. Who is responsible for the mistake? Anyone? Who's responsible for fixing it? Is someone responsible for second-guessing the AI, and does that person therefore retain responsibility? Or if the AI did it, is it nobody's fault - so sorry - even if it's so obviously wrong?

Admiral Yi

The state that launched the missiles is still responsible.

crazy canuck

Quote from: Jacob on Today at 01:40:06 AM
Quote from: DGuller on November 02, 2025, 07:07:05 PMI guess that's a multi-trillion dollar question, isn't it?  Assuming a wrong answer due to preconceived notions or wishful thinking can be a catastrophic mistake, if you don't get lucky enough to hit upon the right answer with the wrong thinking.

It's kind of neat how your statement could be read as an indictment of AI sceptics ("they're likely to disregard the right answer when AI provides it, due to their preconceived notions") or of AI enthusiasts ("wishful thinking about the quality of AI output puts them at risk of making catastrophic mistakes").

IMO the real challenge is twofold:

The first one, rarely spoken of but tacitly admitted by folks like Musk with their AI Grokpedia shenanigans, is that so much of our decision making is values based, not data based. Sure, there are data and other hard facts involved - but how those facts are framed, which are given primacy, which are devalued as peripheral, and which are discarded as unrelated has a huge impact on the outcome of any supposedly logical, facts-based process.

That's not an anti-AI argument particularly, but it's definitely an area of uncertainty. What sort of differences will we see between CCP-developed AI and AI developed by Silicon Valley techno-oligarchs? Will those differences increase the risk of the kind of bias mistakes you mentioned? Will they push the value base of society in a particular direction? (Musk is pretty open about his desire to do so, and I wouldn't be surprised if other oligarchs with ideologies have similar, if perhaps slightly more subtle, ambitions.) How do we navigate using AI to make important decisions, if most important decisions are (as I contend) values based and political, not pure logic puzzles?

Which leads to the second point: accountability. I expect AI to become essentially a massive machinery of accountability-dispersal.

AI can make all sorts of recommendations for action and policy (or legal strategy or construction process or whatever), but who owns the responsibility for the outcomes if something goes wrong?

I expect we'll see all kinds of ethically dubious AI decision making in areas where the "customers" are unimportant and have no venue for raising complaints - say, civilians in distant warzones. If autonomous drones blow up some random innocents in a country the media doesn't care about, but the AI said it was "acceptable risk" or "they fit the profile of a potential threat" - or even straight up failed to account for civilians - I unfortunately think it won't cause any problems, even today, given the state of the world.

But imagine for a moment that "we" did care - maybe it blows up in the media for whatever reason (it's so damn egregious, or a youtuber with a massive following whips up a furore somehow), maybe we get to a moment where the conventions of war matter again - and AI makes some egregious mistake where lives are lost. Who is responsible for the mistake? Anyone? Who's responsible for fixing it? Is someone responsible for second-guessing the AI, and does that person therefore retain responsibility? Or if the AI did it, is it nobody's fault - so sorry - even if it's so obviously wrong?

An example of your point is in academia. Most journals now have a policy that prohibits an AI tool from being named as an author even if an AI tool was used to draft the manuscript. I hasten to add that academics who use an AI tool to draft their manuscripts are landing in a lot of trouble because of the numerous errors AI tools make.

The reason academics cannot blame the deficiencies of the AI tool they used is that the authors of a manuscript remain responsible for it. They can't simply point to deficiencies in the technology to deflect blame.

The other thing I should add, in relation to your paragraph about AI being used to create strategies: in my field, the people who have done that have gone tragically wrong in their arguments because, as I'm sure most people now know, AI is not reliable and does not always produce statements that are true. Put another way, if somebody is relying on AI to come up with a legal argument, they probably didn't know what they were doing from the start and lack the ability to recognize that the AI tool is giving them gibberish.

This is another area where academia can show how to deal with the problem. At least in Canada, the main funding agencies have prohibited the use of AI in the drafting of funding applications. They haven't stated the reason for that prohibition, but I think it's pretty clear: they don't want to waste their time sifting through AI-generated statements that are false in the context of a request for funding.

Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

The Minsky Moment

Quote from: Sheilbh on November 02, 2025, 07:06:43 PMTotally agree on the incestuous financing and business cycle. Although I don't know how any of the business cycle works in this world. I have a friend who's a lawyer for a VC firm and I always find speaking to him about his work a bit mad  for this reason :lol: :ph34r:

It always works the same way.  Leverage builds up and over-extends in the boom, until it gets to the point where simply not moving fast enough causes a cascading crash.  The only things that change are the specific details of the financial instruments and the euphemisms used to describe them.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

The Minsky Moment

Quote from: Sheilbh on November 02, 2025, 07:06:43 PMWhat's the gap in the US from levels of investment and what's needed even for regular (non-AI) growth?

I don't know exactly and I agree the problem isn't data centers per se.
It's enormous debt spending on equipment (like GPUs) that become obsolete very quickly.

For "regular growth" - including spending on AI commensurate with its actual value added - a continuous flow of investment is needed to deploy and maintain that capacity, because obsolete equipment must constantly be replaced.

If there is a very large and discontinuous jump in spending on this kind of equipment, one of two things must happen: (1) either the raised level of massively debt fueled spending must be maintained indefinitely to continuously replace obsolete equipment, or (2) there must be a sharp decline in spending within the next 1-3 years.

(1) is theoretically possible if the spending spike truly shakes the US out of its current equilibrium growth path to a materially higher path.  But that's a very risky assumption.
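The two branches can be made concrete with a toy replacement-spending model. All numbers here, including the three-year equipment lifetime, are illustrative assumptions, not real capex figures:

```python
# Toy model: annual spend needed to follow a target capacity path,
# assuming 1/lifetime of the existing equipment stock goes obsolete
# each year and must be replaced. Illustrative numbers only.

def required_spend(capacity_path, lifetime=3):
    """Return annual spending (net new deployments + replacement)
    needed to hold capacity on the given path."""
    spend = []
    for t, cap in enumerate(capacity_path):
        prev = capacity_path[t - 1] if t > 0 else cap
        net_new = max(cap - prev, 0)   # newly deployed capacity
        replacement = prev / lifetime  # replacing obsolete units
        spend.append(net_new + replacement)
    return spend

flat  = required_spend([100, 100, 100, 100, 100, 100])
spike = required_spend([100, 100, 300, 300, 300, 300])

# flat:  spending stays at ~33/year (pure replacement).
# spike: spending jumps to ~233 in the spike year, then must stay at
#        100/year - triple the old baseline - indefinitely just to
#        maintain the new capacity.
```

Option (1) corresponds to holding spending at that new, tripled replacement level forever; option (2) is what happens if spending reverts toward the old baseline, since capacity then shrinks as the spike cohort of equipment goes obsolete.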

grumbler

Just to make the bubble more interesting, 100% of the advanced graphics chips used by NVidia, Apple, etc. come from a single company, Taiwan Semiconductor Manufacturing Company. Attempts by other manufacturers, even using the exact same equipment as TSMC, have failed. There's apparently a lot more craft in that manufacturing than anyone outside TSMC realized. If anything goes wrong at TSMC (even if it just results in a slowdown), the bubble pops.
The future is all around us, waiting, in moments of transition, to be born in moments of revelation. No one knows the shape of that future or where it will take us. We know only that it is always born in pain.   -G'Kar

Bayraktar!

DGuller

Quote from: grumbler on Today at 10:53:27 AMJust to make the bubble more interesting, 100% of the advanced graphics chips used by NVidia, Apple, etc. come from a single company, Taiwan Semiconductor Manufacturing Company. Attempts by other manufacturers, even using the exact same equipment as TSMC, have failed. There's apparently a lot more craft in that manufacturing than anyone outside TSMC realized. If anything goes wrong at TSMC (even if it just results in a slowdown), the bubble pops.
TSMC is the number one example of the importance of know-how.  Even before the explosion in AI investment, I was wondering how cataclysmic it would be if China attacked Taiwan and Taiwan had to destroy its foundries.

Valmy

Which is why the CHIPS act was so important to the US economy and future security.

Ah well. We decided to elect Trump again.
Quote"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

crazy canuck

Quote from: Valmy on Today at 12:50:26 PMWhich is why the CHIPS act was so important to the US economy and future security.

Ah well. We decided to elect Trump again.

Yep, exactly!