Recent posts

#51
Off the Record / Re: The AI dooooooom thread
Last post by Jacob - November 25, 2025, 06:25:59 PM
Quote from: The Minsky Moment on November 25, 2025, 05:18:02 PM
I'm entirely unqualified to opine, but I find the "no" voices on the question to be more convincing. It seems like the "bootstrap" case involves some magical thinking: if we just throw enough data at the LLM and tune up its capabilities, then it will somehow spontaneously evolve "general intelligence", despite the lack of any clear mechanism. I also can't help noticing that the strongest "yes" voices seem to have a strong financial incentive in the continued success of LLMs.

I concur with both of those perspectives.

Quote
My vote is a definite no in the aggregate, but there may be individual winners.

Quite possibly. I suspect individual winners will be bought out by one of the major US tech conglomerates.

Quote
Oh I don't have a clue, really.  Just two rough datapoints and a virtual napkin.

I suspect Silicon Valley producing AGI is going to be on a timeline somewhere between cold fusion and (best case) Musk's fully autonomous Teslas.
#52
Off the Record / Re: The AI dooooooom thread
Last post by DGuller - November 25, 2025, 05:48:34 PM
Quote from: The Minsky Moment on November 25, 2025, 05:07:04 PM
Quote from: DGuller on November 25, 2025, 01:31:05 PM
The title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I don't see how that is possible. The model is the model.  It assigns probability distributions to different potential outputs based on an algorithm.  The training consists of feeding in more data and giving feedback to adjust the algorithm.  The algorithm is just seeking probabilistic matches based on its parameters.
I don't see how everything you wrote from the second sentence on is connected to your first sentence.  Yes, you accurately described how training and inference of any statistical or machine learning model works, but how does that description offer any insight into why the bolded claim is impossible?
#53
Off the Record / Re: The AI dooooooom thread
Last post by The Minsky Moment - November 25, 2025, 05:19:44 PM
Quote from: Jacob on November 25, 2025, 05:13:53 PM
Your projected timeline for progress (in the 2050s) seems to imply that the complexity gap between 2) and 3) is roughly similar to the gap between 1) and 2) (relative to available resources).

I'm not sure that's the case.

Oh I don't have a clue, really.  Just two rough datapoints and a virtual napkin.
#54
Off the Record / Re: The AI dooooooom thread
Last post by The Minsky Moment - November 25, 2025, 05:18:02 PM
Quote from: Jacob on November 25, 2025, 05:10:49 PM
1: How likely is it that we'll reach AGI-type AI on the back of LLMs, as Altman suggests?

I'm entirely unqualified to opine, but I find the "no" voices on the question to be more convincing. It seems like the "bootstrap" case involves some magical thinking: if we just throw enough data at the LLM and tune up its capabilities, then it will somehow spontaneously evolve "general intelligence", despite the lack of any clear mechanism. I also can't help noticing that the strongest "yes" voices seem to have a strong financial incentive in the continued success of LLMs.

Quote
2: Will the productivity gains from the current investments in LLMs generate enough revenue to justify the investment?

My vote is a definite no in the aggregate, but there may be individual winners.
#55
Off the Record / Re: The AI dooooooom thread
Last post by Jacob - November 25, 2025, 05:17:10 PM
On a different (but still related) topic...

When we're talking about the potential for an AI bubble, so far most of the reference points are American.

China is also investing heavily in AI. Do any of you have any insight into whether Chinese AI investment has bubble-like characteristics similar to the US, or whether there are material differences? And if so, what are they (lower costs? substantially different policy? something else?)
#56
Off the Record / Re: The AI dooooooom thread
Last post by Jacob - November 25, 2025, 05:13:53 PM
Quote from: The Minsky Moment on November 25, 2025, 05:07:04 PM
Looking at matters broadly, you can generate a hierarchy of:
1) Manipulation of pure symbols: mathematical computations, clear rules-based games like Chess or Go.
2) Manipulation of complex, representational symbols like language.
3) Experiential learning not easily reduced to manipulation of symbols.

Each level of the hierarchy represents an increasing challenge for machines. With level 1, substantial progress was made by the 1990s.  With language, we are reaching that stage now.  On that basis I would project substantial progress on level 3 in the 2050s, but perhaps we will see if a couple trillion in spending can speed that up.

Your projected timeline for progress (in the 2050s) seems to imply that the complexity gap between 2) and 3) is roughly similar to the gap between 1) and 2) (relative to available resources).

I'm not sure that's the case.
#57
Off the Record / Re: The AI dooooooom thread
Last post by Jacob - November 25, 2025, 05:10:49 PM
Though I guess there are two separate but related topics:

1: How likely is it that we'll reach AGI-type AI on the back of LLMs, as Altman suggests?

2: Will the productivity gains from the current investments in LLMs generate enough revenue to justify the investment?
#58
Off the Record / Re: The AI dooooooom thread
Last post by The Minsky Moment - November 25, 2025, 05:07:04 PM
Quote from: DGuller on November 25, 2025, 01:31:05 PM
The title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I don't see how that is possible. The model is the model.  It assigns probability distributions to different potential outputs based on an algorithm.  The training consists of feeding in more data and giving feedback to adjust the algorithm.  The algorithm is just seeking probabilistic matches based on its parameters.
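To make that description concrete, here is a minimal toy sketch in Python of the kind of loop I mean (the vocabulary, weights, and learning rate are all invented for illustration; real models condition on context and have vastly more parameters, but the train-and-adjust mechanism is the same in kind):

import math
import random

# A toy "model": one parameter per candidate output token. The parameters
# define a probability distribution over outputs; nothing else is stored.
VOCAB = ["cat", "dog", "sat", "mat"]
weights = {tok: random.uniform(-0.1, 0.1) for tok in VOCAB}

def predict():
    """Assign a probability distribution to the potential outputs (softmax)."""
    exps = {tok: math.exp(w) for tok, w in weights.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def train_step(observed, lr=0.5):
    """Feedback: nudge the parameters toward the output seen in the data."""
    probs = predict()
    for tok in VOCAB:
        target = 1.0 if tok == observed else 0.0
        weights[tok] += lr * (target - probs[tok])  # cross-entropy gradient

# "Training" is just feeding in more data and adjusting the parameters.
for _ in range(100):
    train_step("sat")

print(predict())  # probability mass now concentrates on "sat"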

Looking at matters broadly, you can generate a hierarchy of:
1) Manipulation of pure symbols: mathematical computations, clear rules-based games like Chess or Go.
2) Manipulation of complex, representational symbols like language.
3) Experiential learning not easily reduced to manipulation of symbols.

Each level of the hierarchy represents an increasing challenge for machines. With level 1, substantial progress was made by the 1990s.  With language, we are reaching that stage now.  On that basis I would project substantial progress on level 3 in the 2050s, but perhaps we will see if a couple trillion in spending can speed that up.
#59
Off the Record / Re: The AI dooooooom thread
Last post by Jacob - November 25, 2025, 04:58:01 PM
Quote from: DGuller on November 25, 2025, 04:23:32 PM
I have a much bigger issue with the last paragraph.  "LLMs emulate communication, not cognition" is an assertion about the most fundamental question.  As I said in my first reply, if LLMs are good enough to emulate intelligent communication, then it's very plausible that they need to have some level of cognition to do that.

That's the point of the article, though.

Seems to me that you simply disagree with it and find the arguments it presents unconvincing. Which you are perfectly entitled to, of course.

For my part, I find the arguments in the article - including the opening part about neuroscience indicating that thinking happens largely independently of language - more persuasive than "it's plausible that emulating intelligent communication requires a level of cognition", at least barring definitional games with the terms "intelligent" and "cognition".

Sure, it's potentially plausible, but the evidence we have against it is stronger than the speculation we have in its favour.

On the upside, the timeframe Altman is suggesting for reaching AGI-type AI on the back of LLMs is short enough that we'll be able to see for ourselves in due time.
#60
Off the Record / Re: The AI dooooooom thread
Last post by DGuller - November 25, 2025, 04:23:32 PM
Quote from: Jacob on November 25, 2025, 03:32:15 PM
Quote from: DGuller on November 25, 2025, 03:01:30 PM
But then what is the point of that article?  The AI bubble exists because people with money believe that a usable enough AI already exists, not because some people believe that "language = intelligence" or that the sky is brown.  If the point of that article is not to say that "people mistakenly believe that a usable AI exists because they equate language to intelligence", then what exactly is the point?

The point of the article is:

Quote
The AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

If you disagree that the first paragraph above is true, then obviously the argument against it is less relevant. Personally, I think Sam Altman is enough of a "thought leader" on this topic to make it worthwhile to address the position he advances.
I have a much bigger issue with the last paragraph.  "LLMs emulate communication, not cognition" is an assertion about the most fundamental question.  As I said in my first reply, if LLMs are good enough to emulate intelligent communication, then it's very plausible that they need to have some level of cognition to do that.