The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM

The Minsky Moment

A welcome appearance from the famous missing Jackson brother, Nojana, and his notorious producer, Fubmen.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

crazy canuck

Glad to see Chedole is finally getting the credit he deserves.
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

Sheilbh

Quote from: Jacob on November 03, 2025, 01:40:06 AM
It's kind of neat how your statement could be read as an indictment of AI sceptics ("they're likely to disregard the right answer when AI provides it, due to their preconceived notions") or of AI enthusiasts ("wishful thinking about the quality of AI output puts them at risk of making catastrophic mistakes").

IMO the real challenge is twofold:

The first one, rarely spoken of but tacitly admitted by folks like Musk with their AI Grokpedia shenanigans, is that so much of our decision making is values based, not data based. Sure, there's data and other hard facts involved - but how those facts are framed, which are given primacy, which are devalued as peripheral, and which are discarded as unrelated has a huge impact on the outcome of logical, facts-based decision making.

That's not an anti-AI argument particularly, but it's definitely an area of uncertainty. What sort of differences will we see between CCP-developed AI and AI developed by Silicon Valley Techno-Oligarchs? And will those differences increase the risk of the kind of bias mistakes you mentioned? Will they push the value base of society in a particular direction? (Because while Musk is pretty open about his desire to do so, I wouldn't be surprised if other oligarchs with ideologies have similar, if perhaps slightly more subtle, ideas in that direction.) How do we navigate using AI to make important decisions, if most important decisions are (as I contend) values based and political, not pure logic puzzles?
Yeah, I think this is also a STEM-ification of knowledge: in effect, the idea that knowledge is the aggregation of data - that basically the truth is within the facts, so if you list them you get there.

As you say, that is wrong. It carries with it an ideology, but it's an unexamined one - so the interpretation is not reflective or purposeful and it does not cohere as an analysis. I worked on getting outsourced help on a lot of data analytics, and it was really interesting working with the people who do that internally, because what ended up being the problem was that the outsource company could do the data really, really well. They couldn't do the narrative structuring (with all of the institutional knowledge and background). So we ended up bringing the interpreting-and-explaining-the-analysis piece back in house. Having said that - as I say, I get the excitement of AI for senior people - I could see uploading both to an AI and then being able to ask questions of it when you're reading it, rather than within a presentation/meeting context at x time.

I think the flipside of this, which is present in "establishment"/"centrist dad" thinking, is equally unexamined: the conclusion or interpretation that was current at the time (or, more likely, when they read it/were young), separated from the underlying facts and analysis. I think that is when it becomes just bromide - and I think that is a problem too, including in politics, because I think that is part of the pattern that leads to vibes politics or everything just being aesthetic signals.

Quote from: Jacob
Which leads to the second point, accountability. IMO, I expect AI to become essentially a massive machinery of accountability-dispersal.

AI can make all sorts of recommendations for action and policy (or legal strategy or construction process or whatever), but who owns the responsibility for the outcomes if something goes wrong?

I expect we'll see all kinds of ethically dubious AI decision making in areas where the "customers" are unimportant and have no venue for raising complaints - say, civilians in distant warzones. If autonomous drones blow up some random innocents in a country the media doesn't care about, but the AI said it was "acceptable risk" or "they fit the profile of a potential threat", or even straight up failed to account for civilians, I unfortunately think it won't cause any problems even today, given the state of the world.

But imagine for a moment that "we" did care - maybe it blows up in the media for whatever reason (it's so damn egregious, or maybe a youtuber with a massive following whips up a furore somehow), maybe we get to a moment where the conventions of war matter, or for some other reason... and AI makes some egregious mistake where lives are lost. Who is responsible for the mistake? Anyone? And who's responsible for fixing it? Is someone responsible for second-guessing the AI, and does that someone therefore retain responsibility? Or if the AI did it, then it's nobody's fault - so sorry - even if it's so obviously wrong?
I agree. There's a book on this (not just about AI) that I haven't read yet but have seen very good things about, called The Unaccountability Machine by Dan Davies, which sounds really interesting.

But I slightly question to what extent this is different from the 20th century's accountability-dispersing, artificially intelligent machines, like the state or the corporation? Or even "rules-bound" orders?

A few slightly random thoughts on this.

In European law there is a right not to be subject to "automated decision-making" which has a legal or "similarly significant" effect. I remember working for a client which was basically a big serviced-apartments provider - as part of their process they did simple maths on what proportion of someone's net salary the rent would cost: if below x, fine; if below y, get a guarantor; and otherwise you can't lease them the apartment. I was very junior, but I remember asking lawyers from all over Europe - and never really getting a satisfactory answer - what the meaningful difference is between applying that algorithmically by a machine (so "automated decision-making") v giving a junior employee a calculator and a policy (the Papers, Please option). To an extent, why is it that our legal framework discourages/makes legally complicated one of these ("automated decision-making") while the other is often a helpful way of de-risking and avoiding sanction (we had good policies, we had documented procedures, etc.), when the end result is the same?
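For illustration, the whole policy amounts to something like this - a minimal sketch in Python, with made-up thresholds and labels standing in for the client's actual numbers:

# Hypothetical version of the serviced-apartments affordability policy.
# The thresholds and outcome labels are invented for illustration.
FINE = 0.30       # rent under 30% of net salary: lease approved
GUARANTOR = 0.45  # under 45%: approved, but a guarantor is required

def affordability_decision(monthly_rent: float, net_monthly_salary: float) -> str:
    """Return the lease decision under the rent-to-net-salary policy."""
    ratio = monthly_rent / net_monthly_salary
    if ratio < FINE:
        return "approve"
    if ratio < GUARANTOR:
        return "approve with guarantor"
    return "decline"

# Example: 900 rent against 2,500 net salary is a ratio of 0.36 -> guarantor.
print(affordability_decision(900, 2500))

Whether that runs as code or a junior employee works through the same policy with a calculator, the decision rule is identical - yet only one of them is "automated decision-making" in the legal sense.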

FWIW I think many people experience the world you're describing to some extent in their day-to-day interactions with big corporations and the state. It may not be by AI, but it is similarly bound by rules and processes and procedures - which in general, I suspect, are producing fairer, more just outcomes at an aggregate level. But at an individual, lived-experience level they can be baffling, alienating, and feel incredibly unjust and impossible to challenge - it's the "computer says no" point. And I can't help but wonder if that is part of the appeal of someone like Trump and the unusual class coalitions. First, Trump's politics is personalist and patrimonial. He is not representing a "system" which he feels any responsibility towards, but is mercurial and eminently corruptible, which are very human qualities. It's a bit like the role that the monarch's prerogative of mercy played in the Medieval world (but as an expression of power, not grace). Secondly, I wonder if on the class point it is in part a distinction between the class who input into the rules/systems and understand them (or are adjacent to the professional classes that create them) v the people both above and below who are constrained/punished by them?

And perhaps all of that is really caught in the most distinctively twentieth-century form of artificial intelligence and accountability: the consultancy profession. Whether it's Rand and the best and brightest advising on Vietnam, developing the theory of Mutually Assured Destruction, or modelling welfare and then also the dismantlement of welfare systems; or McKinsey providing innovative advice on business restructurings ("have you considered firing people?"), which I think is arguably part of the supply chain weaknesses and dependencies we're trying to disentangle ourselves from; or the Tony Blair Institute parachuting experts into tens of governments to help drive "change" with their non-ideological focus on "what works". All of this is, as you say, an accountability sink but also, crucially, responsibility laundering. No-one is to blame, no-one is responsible, process was followed and we got the best possible advice - unfortunately we appear to have leveled a South-East Asian country/blown up the world economy/outsourced our entire industrial base to a hostile state by mistake. If you don't want to be responsible - get it on some Bain letterhead and no-one will blame you.

Perhaps that is the gap (for now) for AI: running it by the Big Four is best practice, while making decisions based on an AI chat would seem insane and negligent. One allows for plausible irresponsibility and the other doesn't. But I can't help but notice that, in the huge drop in graduate and entry-level roles, one of the worst-impacted industries appears to be consultancy.
Let's bomb Russia!

Jacob

Yeah, I basically agree with all of that, Sheilbh, except the "centrist dad" thinking part - mainly because I didn't understand your point?