The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


The Minsky Moment

A welcome appearance from the famous missing Jackson brother, Nojana, and his notorious producer, Fubmen.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

crazy canuck

Glad to see Chedole is finally getting the credit he deserves.
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

Sheilbh

Quote from: Jacob on November 03, 2025, 01:40:06 AM
It's kind of neat how your statement could be read as an indictment of AI sceptics ("they're likely to disregard the right answer when AI provides it, due to their preconceived notions") or of AI enthusiasts ("wishful thinking about the quality of AI output put them at risk for making catastrophic mistakes").

IMO the real challenge is twofold:

The first one, rarely spoken of but tacitly admitted by folks like Musk with their AI Grokpedia shenanigans, is that so much of our decision making is values-based, not data-based. Sure, there's data and other hard facts involved - but how those facts are framed, which are given primacy, which are devalued as peripheral, and which are discarded as unrelated has a huge impact on the outcome of any facts-based logic.

That's not an anti-AI argument particularly, but it's definitely an area of uncertainty. What sort of differences will we see between CCP-developed AI compared to AI developed by Silicon Valley techno-oligarchs? And will those differences increase the risk of the kind of bias mistakes you mentioned? Will they push the value base of society in a particular direction? (Musk is pretty open about his desire to do so, and I wouldn't be surprised if other oligarchs with ideologies have similar, if perhaps slightly subtler, ideas in that direction.) How do we navigate using AI to make important decisions, if most important decisions are (as I contend) values-based and political, not pure logic puzzles?
Yeah I think this is also a STEM-ification of knowledge. In effect it treats knowledge as the aggregation of data: the truth is within the facts, so if you list them you get there.

As you say, that is wrong. It carries with it an ideology, but an unexamined one - so the interpretation is not reflective or purposeful and it does not cohere as an analysis. I worked on getting outsourced help on a lot of data analytics, and it was really interesting working with the people who do that internally, because what ended up being the problem was that the outsource company could do the data really, really well. They couldn't do the narrative structuring (with all of the institutional knowledge and background). So we ended up bringing the interpreting-and-explaining-the-analysis piece back in house. Having said that - as I say, I get the excitement of AI for senior people. I could see uploading both to an AI and then being able to ask questions of it when you're reading it, rather than within a presentation/meeting context at x time.

I think the flipside of this, present in "establishment"/"centrist dad" thinking, is equally unexamined: it is often the conclusion or interpretation that was current at the time (or, more likely, when they read it/were young), which has been separated from the underlying facts and analysis. I think that is when it becomes just bromide - and I think that is a problem too, including in politics, because I think that is part of the pattern that leads to vibes politics, or everything just being aesthetic signals.

Quote
Which leads to the second point: accountability. IMO, I expect AI to become essentially a massive machinery of accountability-dispersal.

AI can make all sorts of recommendations for action and policy (or legal strategy or construction process or whatever), but who owns the responsibility for the outcomes if something goes wrong?

I expect we'll see all kinds of ethically dubious AI decision making in areas where the "customers" are unimportant and have no venue for raising complaints - say, civilians in distant warzones. If autonomous drones blow up some random innocents in a country the media doesn't care about, but the AI said it was "acceptable risk" or "they fit the profile of a potential threat" or even straight up failed to account for civilians, I unfortunately think it won't cause any problems even today, given the state of the world.

But imagine for a moment that "we" did care - maybe it blows up in the media for whatever reason (it's so damn egregious, or maybe a youtuber with a massive following whips up a furore somehow), maybe we get to a moment where the conventions of war matter, or for some other reason... and AI makes some egregious mistake where lives are lost. Who is responsible for the mistake? Anyone? And who's responsible for fixing it? Is someone responsible for second-guessing the AI, and does that person therefore retain responsibility? Or if the AI did it, is it then nobody's fault - so sorry - even if it's so obviously wrong?
I agree. There's a book I haven't read yet but have seen very good things about on this (not just about AI) called The Unaccountability Machine by Dan Davies, which sounds really interesting.

But I slightly question to what extent this is different from the 20th century's accountability-dispersing, artificially intelligent machines like the state or the corporation? Or even "rules-bound" orders?

A few slightly random thoughts on this.

In European law there is a right not to be subject to "automated decision-making" which has a legal or "similarly significant" effect. I remember working for a client which was basically a big serviced-apartments provider - as part of their process they did simple maths on what proportion of someone's net salary the rent would cost: if below x then fine, if below y get a guarantor, if below z then you can't lease them the apartment. I was very junior, but I remember asking lawyers from all over Europe - and never really getting a satisfactory answer - what the meaningful difference is between applying that algorithmically by a machine (so "automated decision-making") v giving a junior employee a calculator and a policy (the Papers, Please option). To an extent, why is it that our legal framework discourages/makes legally complicated the one ("automated decision-making"), while the other is often a helpful way of de-risking and avoiding sanction - we had good policies, we had documented procedures etc - when the end result is the same?
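The policy described above is mechanically identical either way it is applied, which is the point. A minimal sketch in Python - the threshold values and the reading of "below x/y/z" as affordability bands are my illustrative assumptions, since the post leaves the actual figures unspecified:

```python
def assess_applicant(monthly_rent: float, net_monthly_salary: float,
                     approve_max: float = 0.30,
                     guarantor_max: float = 0.45) -> str:
    """Rent-to-net-salary affordability check.

    The 0.30 / 0.45 thresholds are made-up placeholders for the
    'x' and 'y' in the post; the real policy values aren't given.
    """
    ratio = monthly_rent / net_monthly_salary
    if ratio <= approve_max:
        return "approve"            # comfortably affordable: fine
    if ratio <= guarantor_max:
        return "require guarantor"  # marginal: get a guarantor
    return "refuse"                 # too high: can't lease

# Whether a machine runs this or a junior employee follows the same
# policy with a calculator, the decision logic is identical.
print(assess_applicant(900, 3600))   # ratio 0.25 -> approve
```

The legal question in the post is precisely that nothing in the code distinguishes the "automated decision-making" case from the calculator-and-policy case.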

FWIW I think many people experience the world you're describing to some extent in their day-to-day interactions with big corporations and the state. It may not be AI, but it is similarly bound by rules and processes and procedures - which in general, I suspect, are producing fairer, more just outcomes at an aggregate level. But at the level of individual, lived experience it can be baffling, alienating, feel incredibly unjust and impossible to challenge - it's the "computer says no" point. And I can't help but wonder if that is part of the appeal of someone like Trump and the unusual class coalitions. First, Trump's politics is personalist and patrimonial. He is not representing a "system" which he feels any responsibility towards, but is mercurial and eminently corruptible, which are very human qualities. It's a bit like the role that the monarch's prerogative of mercy played in the Medieval world (but as an expression of power, not grace). Secondly, I wonder if on the class point it is in part a distinction between the class who input into the rules/systems and understand them (or are adjacent to the professional classes that create them) v the people both above and below who are constrained/punished by them?

And perhaps all of that is really caught in perhaps the most distinctively twentieth-century form of artificial intelligence and accountability: the consultancy profession. Whether it's Rand and the best and brightest advising on Vietnam, developing the theory of Mutually Assured Destruction or modelling welfare and then also the dismantlement of welfare systems; or McKinsey providing innovative advice on business restructurings ("have you considered firing people?"), which I think is arguably part of the supply-chain weaknesses and dependencies we're trying to disentangle ourselves from; or the Tony Blair Institute parachuting experts into tens of governments to help drive "change" with their non-ideological focus on "what works". All of this is, as you say, an accountability sink but also, crucially, responsibility laundering. No-one is to blame, no-one is responsible, process was followed and we got the best possible advice - unfortunately we appear to have levelled a South-East Asian country/blown up the world economy/outsourced our entire industrial base to a hostile state by mistake. If you don't want to be responsible - get it on some Bain letterhead and no-one will blame you.

Perhaps that is the gap (for now) for AI. That (at this point) running it by the Big Four is best practice while making decisions based on an AI chat (for now) would seem insane and negligent. One allows for plausible irresponsibility and the other doesn't. But I can't help but notice that in the huge drop in graduate and entry level roles one of the worst impacted industries appears to be consultancy.
Let's bomb Russia!

Jacob

Yeah I basically agree with all of that, Sheilbh, except the "centrist dad thinking part" - mainly because I didn't understand your point?

Zoupa

OpenAI's CFO wants a federal backstop for data-center investments. These people are shameless. When the peasants finally revolt, they should not be surprised what happens to them.

crazy canuck

Quote from: Zoupa on November 06, 2025, 01:03:42 AM
OpenAI's CFO wants a federal backstop for data-center investments. These people are shameless. When the peasants finally revolt, they should not be surprised what happens to them.

When a bubble forms, why wait to get the bailout? Especially when dealing with a corrupt government that will find a way to take a kickback.
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

Jacob

Saw someone phrasing it as "OpenAI has invented the pre-bailout".

The Minsky Moment

Nvidia joined in as well, saying the feds need to subsidize data centers to keep up with China.  Even as Nvidia lobbies to sell more advanced chips to China. 
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

Valmy

Quote from: Zoupa on November 06, 2025, 01:03:42 AM
OpenAI's CFO wants a federal backstop for data-center investments. These people are shameless. When the peasants finally revolt, they should not be surprised what happens to them.

Yeah but this isn't a bubble.  :ph34r:
Quote"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

HVC

Quote from: Valmy on November 06, 2025, 03:51:08 PM
Quote from: Zoupa on November 06, 2025, 01:03:42 AM
OpenAI's CFO wants a federal backstop for data-center investments. These people are shameless. When the peasants finally revolt, they should not be surprised what happens to them.

Yeah but this isn't a bubble.  :ph34r:

We must not fall behind in the tulip bulb wars :contract:
Being lazy is bad; unless you still get what you want, then it's called "patience".
Hubris must be punished. Severely.

Sheilbh

God I wish I was as certain as you guys :lol:

There's some other points I wanted to pick up on but some media gossip.

Apparently use of AI summaries dropped off a bit in early autumn (they're now back to the growth that had been sustained until then), but there looks to be a shift in behaviour recently, especially with new models: fewer users are clicking through. That suggests two things to me (and it could be both). One is that people are trusting answers more, so taking less of a "trust, but verify" approach of looking at the AI-generated answer and then clicking through anyway; the other is that the answers are getting better and more on point. As I say, my suspicion is it's a bit of both.

My understanding is this is so far not really affecting hard news media but is absolutely devastating the more lifestyle/feature/perennial media (which makes sense). The irony is that those are exactly the areas media companies have invested in in recent years because of online behavioural advertising. Basically you get a better idea of what sort of product someone is interested in from their consumption of that type of media than from their interest in news (this is also why Instagram is regularly cited by people - including me - as the only platform with adverts that are genuinely of interest to them, because it's all lifestyle). The other aspect is that online advertising allows advertisers to pick the type of content they would like to be next to (while in print the choice is "front half or back half"), and basically brands don't want to be next to hard news.

This is a further step along trends that have been at play for a while but it looks like subsidising the production of hard news with lifestyle content may not be a particularly healthy strategy any more. This is why I think the really key thing is licensing and protection of IP so that (unlike Google and Meta in the social media disruption) news publishers and people actually making reported, edited, fact-checked, legaled content get paid for it. Otherwise it's going to either have to rely on subsidies from the state in some way or other or just become a luxury good.

Semi-relatedly, I am certain that the future of online behavioural advertising is AI-generated, directly personalised ads. We've started seeing this already with Ticketmaster using AI both to target the advert and to shape the creative (within fixed parameters). This was also flagged by Zuckerberg recently at Meta. Google and Meta will absolutely lead on this as they already own so much of the ad industry (again, I don't want to belabour it, but in terms of bubbles I can't help but wonder if the 2010s was the bubble - it seems far more reasonable and in a weird way healthy to me that the biggest companies right now are doing things like designing and building chips or building and operating data centres, v the 2010s when the biggest companies, with partial exceptions for Apple and Amazon, were fundamentally advertising exchanges...)

But the key factor for me is that I think it's going to allow Meta and Google to do to the rest of the ad world what social media allowed them to do to news. There's obviously a monopoly bit, but a key part of their business (I think over 90% of Meta's revenue is its ads business) was basically allowing "better" targeting and tracking of advertising. So ad spend went from news publishers to the platforms, who knew their users well and could target better, while the share of the pie for publishers got smaller - even as ad spend increased. The same will happen now on the creative side - which is still expensive and where spend is captured by the agencies. Instead of spending loads of money on agencies to create and test ads (testing is already a big thing in the AI age), you can just spend on Meta and Google, who will provide the system for distributing ads and a way of inputting your ideas into an AI that will generate and optimise hundreds of versions for you.

But also this is something that fits easily into the existing businesses of some of the biggest companies in the world. That is still my fundamental view of AI - that the LLMs we know about are, to an extent, slightly marketing tools/the recognisable bit. It is not going to be the consumer-facing products that drive this, but the implementation of AI within existing business models and businesses. As I say, I think it is more exciting for Sam Altman to talk about existential risk and post images of the Death Star when releasing new models than what I think the reality is, which is a big "Infrastructure as a Service" business :lol: I don't think it'll be through the consumer-facing products but, for example, the change to the ad model that allows Google and Meta to eat the last remaining bit of spend they don't already control, or how Salesforce or Oracle implement it into their software tools.

(Total aside - but FWIW I was speaking to someone in marketing who was sort of 50/50 on some of this because their agency has an AI tool that helps turn someone's idea into a marketing brief. On the one hand it makes their life easier in dealing with the business - on the other they don't use it for big projects, because the fun and interesting bit of working in marketing is exactly in generating ideas and writing the brief. This is where I think the impact on entry-level and graduate jobs is interesting - but the social dislocation might not just be in fewer jobs but in the change in the nature of them. It's why I return to the Ned Ludd moment. It wasn't about people being hostile to technology taking their jobs - though that was part of it. It was about their proletarianisation and the alienation of their roles. They were highly skilled, artisanal workers who could command a premium, were responsible creatively and intellectually for their economic production, and they were getting replaced with fast-moving machines where tasks could be divided, there was no creativity, individuals were alienated and you could just employ women and children instead. I think perhaps job losses are part of it, but we should think of a proletarianisation of the white-collar world and what that means - no doubt to be rapidly followed by a gig economy/platform/piece-work world of white collars. In some ways it's just the latest version of that long war against guilds.)
Let's bomb Russia!

Valmy

Quote from: Sheilbh on November 07, 2025, 12:02:03 PM
God I wish I was as certain as you guys :lol:

It is just that when you have seen the same shit over and over and over again, you get a little fatalistic. I mean, railroads were obviously a successful and useful technology. We still use railroads today. But we still managed to create a huge bubble that caused a worldwide depression by over-investing in them. Same with the internet. Same with... freaking housing. The ultimate usefulness of the technology or whatever is being inflated doesn't really seem to matter. Sometimes it might be tulip bulbs or shares of the French Mississippi Company, but we are perfectly capable of causing tremendous economic damage with useful and valuable things as well.

But maybe this time it will be different.

Quote"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

Sheilbh

I don't think I agree - I think there's a world of difference between railroads or electricity and tulips or shares in the South Sea Company. And it's maybe a difference of interest/view because what I find interesting (and uncertain) is where AI sits on that spectrum.

I suppose I'm fatalist in the other direction, in that I don't think there's an "answer" to the economy or politics. There will be booms and busts/cycles/pendulum swings of all kinds, from politics to economics to cultural or social stuff. So I might just be coming at it from the other end: I think that's kind of baked in; the question is of what type, and what, if anything, is the technological and structural "real" underneath the flotsam. Again, that is to me the difference between, say, railroads, electricity, IT, the internet and, as you say, tulips.

And as I say my view is that it's either a transformative technology that will have huge economic and social impact, or it's a massive misallocation of capital (and brainpower) the fallout of which will have huge economic and social impact.
Let's bomb Russia!

Valmy

Quote from: Sheilbh on November 07, 2025, 12:58:03 PM
I suppose I'm fatalist in the other way that I don't think there's an "answer" to the economy or politics. There will be boom and bust/cycles/pendulum swings of all kinds from politics to economics to cultural or social stuff.


Yes of course. But the lesson we learned from the 1875-1929 era was that we can't just let them happen. We have to attempt to fight those cycles. By "unleashing" the economy from its shackles we are right back to disastrous busts ravaging the world every twenty years or so. The economy must be regulated to reduce the disastrous impact of those busts. The government must simply do everything in its power to limit these bubbles and mitigate their damage. They will still happen, of course. See the failure of the... what were they called? The Nifty Fifty? The fifty "can't lose" stocks of the 1960s, when they finally did lose in the 1970s. But as bad as that was, it was no Depression of 1875 or Economic Crisis of 2008.
Quote"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

HVC

Hey, we plant tulips to this day, so the fad never fully died out :D
Being lazy is bad; unless you still get what you want, then it's called "patience".
Hubris must be punished. Severely.