The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Jacob

Though I guess there are two separate but related topics:

1: How likely is it that we'll reach AGI-type AI on the back of LLMs, as Altman suggests?

2: Will the productivity gains from the current investments in LLMs generate enough revenue to justify the investment?

Jacob

Quote from: The Minsky Moment on Today at 05:07:04 PMLooking at matters broadly, you can generate a hierarchy of:
1) Manipulation of pure symbols: mathematical computations, clear rules based games like Chess or Go.
2) Manipulation of complex, representational symbols like language
3) Experiential learning not easily reduced to manipulation of symbols.

Each level of the hierarchy represents an increasing challenge for machines. With level 1 substantial progress was made by the 1990s.  With language we are reaching that stage now.  On that basis I would project substantial progress in stage 3 in the 2050s, but perhaps we will see if a couple trillion in spending can speed that up.

Your projected timeline for progress (in the 2050s) seems to imply that the complexity gap between 2) and 3) is roughly similar to the gap between 1) and 2) (relative to available resources).

I'm not sure that's the case.

Jacob

On a different (but still related) topic...

When we talk about the potential for an AI bubble, most of the reference points so far have been American.

China is also investing heavily in AI. Does anyone have insight into whether Chinese AI investment has bubble-like characteristics similar to the US, or are there material differences? And if so, what are they (lower costs? substantially different policy? something else?)

The Minsky Moment

Quote from: Jacob on Today at 05:10:49 PM1: How likely is it that we'll reach AGI-type AI on the back of LLMs, as Altman suggests?

I'm entirely unqualified to opine, but I find the "no" voices on the question to be more convincing. It seems like the "bootstrap" case involves some magical thinking: if we just throw enough data at the LLM and tune up its capabilities, then it will somehow spontaneously evolve "general intelligence" despite the lack of any clear mechanism. I also can't help noticing that the strongest "yes" voices seem to have a strong financial incentive in the continued success of LLMs.

Quote2: Will the productivity gains from the current investments in LLMs generate enough revenue to justify the investment?

My vote is a definite no in the aggregate, but there may be individual winners.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

The Minsky Moment

Quote from: Jacob on Today at 05:13:53 PMYour projected timeline for progress (in the 2050s) seems to imply that the complexity gap between 2) and 3) is roughly similar to the gap between 1) and 2) (relative to available resources).

I'm not sure that's the case.

Oh I don't have a clue, really.  Just two rough datapoints and a virtual napkin.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

DGuller

Quote from: The Minsky Moment on Today at 05:07:04 PM
Quote from: DGuller on Today at 01:31:05 PMThe title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I don't see how that is possible. The model is the model.  It assigns probability distributions to different potential outputs based on an algorithm.  The training consists of feeding in more data and giving feedback to adjust the algorithm.  The algorithm is just seeking probabilistic matches based on its parameters.
I don't see how everything you wrote from the second sentence on is connected to your first sentence.  Yes, you accurately described how training and inference of any statistical or machine learning model works, but how is that description offering any insight as to why the bolded is impossible?
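To make that concrete, here's a toy sketch (purely illustrative, nothing like a real LLM architecture, and the data and names are made up): a model that assigns a probability distribution over the next character and has its parameters adjusted by feedback during training. Even in this trivial case, "just fitting probability distributions" means the parameters end up encoding structure from the data.

import numpy as np

text = "the model is the model"
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

W = np.zeros((V, V))              # one logit per (previous char, next char) pair

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

pairs = [(idx[a], idx[b]) for a, b in zip(text, text[1:])]

for _ in range(200):              # training: feedback adjusts the parameters
    for prev, nxt in pairs:
        p = softmax(W[prev])      # probability distribution over the next char
        grad = p.copy()
        grad[nxt] -= 1.0          # cross-entropy gradient
        W[prev] -= 0.1 * grad     # parameter update

p = softmax(W[idx["e"]])          # what does the model predict after "e"?
print({c: round(float(p[idx[c]]), 2) for c in chars})

Obviously a real transformer is vastly more complicated than this, but the point is that describing training as "adjusting parameters to match probabilities" doesn't tell you what structure the parameters had to develop in order to make those matches.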

Jacob

Quote from: The Minsky Moment on Today at 05:18:02 PMI'm entirely unqualified to opine, but I find the "no" voices on the question to be more convincing. It seems like the "bootstrap" case involves some magical thinking: if we just throw enough data at the LLM and tune up its capabilities, then it will somehow spontaneously evolve "general intelligence" despite the lack of any clear mechanism. I also can't help noticing that the strongest "yes" voices seem to have a strong financial incentive in the continued success of LLMs.

I concur with both of those perspectives.

QuoteMy vote is a definite no in the aggregate, but there may be individual winners.

Quite possibly. I suspect individual winners will be bought out by one of the major US tech conglomerates.

QuoteOh I don't have a clue, really.  Just two rough datapoints and a virtual napkin.

I suspect Silicon Valley producing AGI is going to be on a timeline somewhere between cold fusion and (best case) Musk's fully autonomous Teslas.