
The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Jacob

Quote from: DGuller on November 02, 2025, 07:07:05 PM
I guess that's a multi-trillion dollar question, isn't it?  Assuming a wrong answer due to preconceived notions or wishful thinking can be a catastrophic mistake, if you don't get lucky enough to hit upon the right answer with the wrong thinking.

It's kind of neat how your statement could be read as an indictment of AI sceptics ("they're likely to disregard the right answer when AI provides it, due to their preconceived notions") or of AI enthusiasts ("wishful thinking about the quality of AI output puts them at risk of making catastrophic mistakes").

IMO the real challenge is twofold:

The first one, rarely spoken of but tacitly admitted by folks like Musk with their AI Grokipedia shenanigans, is that so much of our decision making is values-based, not data-based. Sure, there's data and other hard facts involved - but how those facts are framed, which are given primacy, which are devalued as peripheral, and which are discarded as unrelated has a huge impact on the outcome of any supposedly logical, facts-based process.

That's not an anti-AI argument particularly, but it's definitely an area of uncertainty. What sort of differences will we see between CCP-developed AI and AI developed by Silicon Valley techno-oligarchs? Will those differences increase the risk of the kind of bias mistakes you mentioned? Will they push the value base of society in a particular direction? (Musk is pretty open about his desire to do exactly that, and I wouldn't be surprised if other oligarchs with ideologies have similar, if perhaps slightly more subtle, ambitions.) How do we navigate using AI to make important decisions, if most important decisions are (as I contend) values-based and political, not pure logic puzzles?

Which leads to the second point: accountability. I expect AI to become, essentially, a massive machinery of accountability-dispersal.

AI can make all sorts of recommendations for action and policy (or legal strategy or construction process or whatever), but who owns the responsibility for the outcomes if something goes wrong?

I expect we'll see all kinds of ethically dubious AI decision making in areas where the "customers" are unimportant and have no venue for raising complaints - say, civilians in distant warzones. If autonomous drones blow up some random innocents in a country the media doesn't care about, but the AI said it was "acceptable risk" or "they fit the profile of a potential threat" - or the AI simply failed to account for civilians at all - I unfortunately think it won't cause any problems for anyone involved, even today, given the state of the world.

But imagine for a moment that "we" did care - maybe it blows up in the media for whatever reason (it's so damn egregious, or a youtuber with a massive following whips up a furore somehow), maybe we get to a moment where the conventions of war matter again - and AI makes some egregious mistake where lives are lost. Who is responsible for the mistake? Anyone? And who's responsible for fixing it? Is someone responsible for second-guessing the AI, and does that person therefore retain responsibility? Or if the AI did it, is it nobody's fault - so sorry - even when it's so obviously wrong?

Admiral Yi

The state that launched the missiles is still responsible.