The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


The Minsky Moment

Quote from: Tamas on December 02, 2025, 03:18:15 PM
Haha, racist jokes!

Accusations like that will get me into a warring state.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

Valmy

Quote from: Syt on December 02, 2025, 02:18:54 AM
I was assuming that instead of spending money on physical toys they spend it on digital "content" (skins, in-game currencies, apps, games etc.). :P

But I know what they say about assuming - it makes an ass of u and ming. :D

Yeah I guess she would like a few Robux sent her way but it's not quite the same as me desperately wanting the Millennium Falcon for Christmas 1982.
Quote"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

Syt

Quote from: Valmy on December 02, 2025, 04:47:27 PM
Yeah I guess she would like a few Robux sent her way but it's not quite the same as me desperately wanting the Millennium Falcon for Christmas 1982.

Yes, it's a lot more ephemeral. Which may be good for the environment (bad for collectors), but then again I think there's something to physical keepsakes. ... But thinking hard, I'm trying to figure out what my oldest physical possession is. Probably a book ... I'd guess Berlin Alexanderplatz which I had to buy for German class ca. 1993? :unsure:
We are born dying, but we are compelled to fancy our chances.
- hbomberguy

Proud owner of 42 Zoupa Points.

Josquius

It is very sad that kids these days will never know the wonder of getting a new computer game, reading the manual on the way home, and so on.
██████
██████
██████

HVC

Quote from: Josquius on Today at 04:01:39 AM
It is very sad that kids these days will never know the wonder of getting a new computer game, reading the manual on the way home, and so on.

So many dead trees, so little time.
Being lazy is bad; unless you still get what you want, then it's called "patience".
Hubris must be punished. Severely.

garbon

Why is that sad? As a child I had a superstition that every time I read the manual before loading up a game, that would be the game that failed to work.
"I've never been quite sure what the point of a eunuch is, if truth be told. It seems to me they're only men with the useful bits cut off."
I drank because I wanted to drown my sorrows, but now the damned things have learned to swim.

The Minsky Moment

On the topic:

  • OpenAI signed huge procurement deals with both Samsung and SK Hynix, the duopoly that effectively dominates DRAM supply.  It was reported that the deals account for 40% of global supply.
  • DRAM prices have exploded as everyone else scrambles to grab supply.  There are immediate knock-on effects for retail consumers, as anything that uses RAM is going to go way up in price once stockpiles are exhausted.  I don't think we can expect a significant supply response, because "Samnix" has to be concerned about what would happen if OpenAI implodes (see below) and dumps its massive stockpile back onto the market.
  • Within the industry, increasing questions are being raised about LLM scalability and whether the existing strategy of throwing more and more computing power and data at the models is hitting diminishing returns.  That is: the incremental cost of improving the models is rising fast due to supply constraints, but the returns to model effectiveness may be diminishing.
  • Google launched their new Gemini iteration and appears to have overtaken, or at least caught up to, GPT-5.  OpenAI insiders leaked a memo from Altman declaring a "code red".  Of course, there is nothing unexpected about this development; Google was the pioneer of these industrial-scale LLMs (from their 2017 paper) and was only beaten to market because they wouldn't release a half-baked product.
  • Google generates massive profits off its core business; it can afford to take a bath on LLMs. OpenAI has no business other than raising VC capital from supercharged hype and grabbing a few shekels here and there by vending the more advanced iteration of GPT-5. The true state of OpenAI's finances is unknown because they are private, but it is clear that operating losses are large and capital costs are staggering, on a country-level scale.

Two immediate conclusions:

1) The AI investment boom has reached the phase where it is causing significant distortions in the real economy, with likely adverse impact on consumers and the broader tech industry well into 2026.

2) The probability of a nasty crash over the next 6-12 month window is rising.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

Syt

I'm so glad I bought a new PC with RAM a few months ago.  :ph34r:
We are born dying, but we are compelled to fancy our chances.
- hbomberguy

Proud owner of 42 Zoupa Points.

DGuller

Quote from: The Minsky Moment on Today at 09:49:50 AM
  • Google launched their new Gemini iteration and appears to have overtaken, or at least caught up to, GPT-5.  OpenAI insiders leaked a memo from Altman declaring a "code red".  Of course, there is nothing unexpected about this development; Google was the pioneer of these industrial-scale LLMs (from their 2017 paper) and was only beaten to market because they wouldn't release a half-baked product.
I think the explanation for Google being beaten to market is off the mark.  In history, plenty of companies have failed to capitalize on their own inventions, or even to appreciate their potential, for reasons other than their focus on quality.  I think the far more likely explanation is that Google, being a mature large public company, is just naturally far less nimble than a motivated privately-held startup.  Companies like that have way too many stakeholder alignment meetings, risk committee working groups, and quarterly earnings targets to move fast, at least until external factors make them move fast.

The Minsky Moment

DG, I think we are saying the same thing in different ways, with you putting greater emphasis on the downsides of big enterprises (bureaucracy) and me putting greater emphasis on the downsides of startups (willingness to shove out and hype a half-baked beta and fix it later on the fly).

But I would add - Google probably thought they were dealing with something else.  OpenAI was supposed to be a non-profit, defined by a precautionary ethos, NOT a commercially "motivated" start-up.  That turned out to be a mirage, one that even much of OpenAI's own board was unaware of.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

HisMajestyBOB

I would like to get a new computer next year, please don't drive up the cost of components.
Three lovely Prada points for HoI2 help

crazy canuck

Quote from: DGuller on Today at 10:09:11 AM
Quote from: The Minsky Moment on Today at 09:49:50 AM
  • Google launched their new Gemini iteration and appears to have overtaken, or at least caught up to, GPT-5.  OpenAI insiders leaked a memo from Altman declaring a "code red".  Of course, there is nothing unexpected about this development; Google was the pioneer of these industrial-scale LLMs (from their 2017 paper) and was only beaten to market because they wouldn't release a half-baked product.
I think the explanation for Google being beaten to market is off the mark.  In history, plenty of companies have failed to capitalize on their own inventions, or even to appreciate their potential, for reasons other than their focus on quality.  I think the far more likely explanation is that Google, being a mature large public company, is just naturally far less nimble than a motivated privately-held startup.  Companies like that have way too many stakeholder alignment meetings, risk committee working groups, and quarterly earnings targets to move fast, at least until external factors make them move fast.

It has nothing to do with nimbleness. When GPT was first released, the developers were very clear that it was still in development. But despite that warning, people treated it as if it were a reliable tool, with all of the catastrophic consequences that have been widely reported.

GPT is still in development, and yet people still take it seriously, as if it were a reliable tool.

Google's product may be reliable, or it may have the same defects as all LLMs.  We shall see.



Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

Baron von Schtinkenbutt

Google did move straight from developing the transformer architecture to developing a language model with it.  However, they took a fundamentally different approach.  Google developed BERT, which is an encoder-only transformer model.  OpenAI developed what is arguably a higher-level model architecture based on a decoder-only transformer model.

Google's approach created a language model that was suitable for creating inputs to non-language models.  BERT and its derivatives became heavily used as the first stage in ML systems to generate various forms of content embeddings.  These embeddings are also useful on their own for search and retrieval systems, which is why Google went this direction.  The transformer architecture itself came out of an effort to develop better models for sequence-to-sequence conversion than what was then available with LSTM models fed by simple word embeddings like Word2Vec.
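To make the encoder-only use case concrete: here's a minimal sketch (mine, not from the post) of how a BERT-style model turns text into embeddings that can be ranked for retrieval. It assumes the Hugging Face transformers library, PyTorch, and the public bert-base-uncased checkpoint; the sentences are just made-up examples.

# Minimal sketch: encoder-only model (BERT) producing sentence embeddings
# for similarity-based retrieval. Assumes transformers + torch are installed.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last hidden states into a single sentence vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)                # shape (768,)

query = embed("DRAM prices are exploding")
docs = ["Memory chip costs have gone way up",
        "The Millennium Falcon was the best Christmas toy"]
scores = [torch.cosine_similarity(query, embed(d), dim=0).item() for d in docs]
print(scores)  # the memory-related sentence should score higher

Nothing generative happens here: the model only encodes, and the ranking logic lives outside it, which is exactly why this shape of model slots in as the first stage of a search pipeline.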

OpenAI intended to go in a different direction and create a language model that could generate language rather than encoded sequences.  They introduced the concept of generative pre-training (GPT), which is what gives these models their "knowledge": basically, an architecture designed to recreate language that looks like what it had been trained on.  This approach is not very useful for search and retrieval, but it is useful if you want to build a chatbot that uses the "knowledge" encoded in the model to do retrieval and synthesis.
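For contrast, here's the decoder-only direction in the same spirit: a hedged sketch using the small public gpt2 checkpoint via the same transformers library (not anything OpenAI ships today). The model just continues a prompt token by token.

# Minimal sketch: decoder-only model (GPT-2) generating a continuation
# of a prompt, rather than producing embeddings.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Encode a prompt and let the model extend it greedily.
inputs = tokenizer("The transformer architecture was introduced", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Same underlying transformer machinery, but the output is text instead of a vector, which is why this is the shape of model you wrap a chatbot around.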

As architectures developed, it turned out the GPT architecture had so-called emergent behaviors that made base models built this way useful for general tasks, provided the right tooling and scaffolding were built around them.  Google came around to the generative-model party late partly because the value to them wasn't clear until OpenAI rolled out a complete chatbot product using it.  Plus, as Joan said, the whole "we're just a research effort" bullshit.

PJL

Quote from: HisMajestyBOB on Today at 10:25:49 AM
I would like to get a new computer next year, please don't drive up the cost of components.

Not just computers but all consumer goods with silicon components will be affected. So everything from washing machines to cars.