The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Jacob

Quote from: DGuller on November 02, 2025, 07:07:05 PMI guess that's a multi-trillion dollar question, isn't it?  Assuming a wrong answer due to preconceived notions or wishful thinking can be a catastrophic mistake, if you don't get lucky enough to hit upon the right answer with the wrong thinking.

It's kind of neat how your statement could be read as an indictment of AI sceptics ("they're likely to disregard the right answer when AI provides it, due to their preconceived notions") or of AI enthusiasts ("wishful thinking about the quality of AI output put them at risk for making catastrophic mistakes").

IMO the real challenge is twofold:

The first one, rarely spoken of but tacitly admitted by folks like Musk with their AI Grokpedia shenanigans, is that so much of our decision making is values-based, not data-based. Sure, there's data and other hard facts involved - but how those facts are framed, which are given primacy, which are devalued as peripheral, and which are discarded as unrelated has a huge impact on the outcome of any supposedly logical, fact-based decision.

That's not an anti-AI argument particularly, but it's definitely an area of uncertainty. What sort of differences will we see between CCP-developed AI and AI developed by Silicon Valley Techno-Oligarchs? Will those differences increase the risk of the kind of bias mistakes you mentioned? Will they push the value base of society in a particular direction? (Musk is pretty open about his desire to do so, and I wouldn't be surprised if other oligarchs with ideologies have similar, if perhaps slightly subtler, ideas in that direction.) How do we navigate using AI to make important decisions, if most important decisions are (as I contend) values-based and political, not pure logic puzzles?

Which leads to the second point, accountability. IMO, I expect AI to become essentially a massive machinery of accountability-dispersal.

AI can make all sorts of recommendations for action and policy (or legal strategy or construction process or whatever), but who owns the responsibility for the outcomes if something goes wrong?

I expect we'll see all kinds of ethically dubious AI decision making in areas where the "customers" are unimportant and have no venue for raising complaints - say, civilians in distant warzones. If autonomous drones blow up some random innocents in a country the media doesn't care about, but the AI said it was "acceptable risk" or "they fit the profile of a potential threat", or even straight up failed to account for civilians, I unfortunately think it won't cause any problems even today, given the state of the world.

But imagine for a moment that "we" did care - maybe it blows up in the media for whatever reason (it's so damn egregious, or a youtuber with a massive following whips up a furore somehow), maybe we get to a moment where the conventions of war matter, or for some other reason... and AI makes some egregious mistake where lives are lost. Who is responsible for the mistake? Anyone? And who's responsible for fixing it? Is someone responsible for second-guessing the AI, and does that person therefore retain responsibility? Or if the AI did it, is it nobody's fault - so sorry - even if it's so obviously wrong?

Admiral Yi

The state that launched the missiles is still responsible.

crazy canuck

#677
Quote from: Jacob on Today at 01:40:06 AM

An example of your point is in academia. Most journals now have a policy prohibiting an AI tool from being named as an author, even if an AI tool was used to draft the manuscript. I hasten to add that academics who use AI tools to draft their manuscripts are landing in a lot of trouble because of the numerous errors those tools make.

Academics cannot blame the deficiencies of the AI tool they used, because the authors of a manuscript remain responsible for it. They can't simply point to deficiencies in the technology to deflect blame.

The other thing I should add, in relation to your paragraph about AI being used to create strategies: in my field, the people who do that have gone tragically wrong in their arguments because, as I'm sure most people now know, AI is not reliable and does not always produce statements that are true. Put another way, if somebody is relying on AI to come up with a legal argument, they probably didn't know what they were doing from the start, and they lack the ability to recognize that the AI tool is giving them gibberish.

This is another area where academia can show how to deal with the problem. At least in Canada, the main funding agencies have prohibited the use of AI in the drafting of funding applications. They haven't stated the reason for the prohibition, but I think it's pretty clear: they don't want to waste time sifting through AI-generated statements that are false in the context of a request for funding.


In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.