The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Jacob

Though I guess there are two separate but related topics:

1: How likely is it that we'll reach AGI-type AI on the back of LLMs, as Altman suggests?

2: Will the productivity gains from the current investments in LLMs generate enough revenue to justify the investment?

Jacob

Quote from: The Minsky Moment on November 25, 2025, 05:07:04 PM
Looking at matters broadly, you can generate a hierarchy of:
1) Manipulation of pure symbols: mathematical computations, clear rules based games like Chess or Go.
2) Manipulation of complex, representational symbols like language
3) Experiential learning not easily reduced to manipulation of symbols.

Each level of the hierarchy represents an increasing challenge for machines. With level 1 substantial progress was made by the 1990s.  With language we are reaching that stage now.  On that basis I would project substantial progress in stage 3 in the 2050s, but perhaps we will see if a couple trillion in spending can speed that up.

Your projected timeline for progress (in the 2050s) seems to imply that the complexity gap between 2) and 3) is roughly similar to the gap between 1) and 2) (relative to available resources).

I'm not sure that's the case.

Jacob

On a different (but still related) topic...

When we're talking about the potential for an AI bubble, so far most of the reference points are American.

China is also investing heavily in AI. Do any of you have any insight into whether Chinese AI investment has bubble-like characteristics similar to the US, or are there material differences? And if so, what are they (lower costs? substantially different policy? something else?)

The Minsky Moment

Quote from: Jacob on November 25, 2025, 05:10:49 PM
1: How likely is it that we'll reach AGI-type AI on the back of LLMs, as Altman suggests?

I'm entirely unqualified to opine, but I find the "no" voices on the question to be more convincing. It seems like the "bootstrap" case involves some magical thinking: if we just throw enough data at the LLM and tune up its capabilities, then it will somehow spontaneously evolve "general intelligence" despite the lack of any clear mechanism. I also can't help noticing that the strongest "yes" voices seem to have a strong financial incentive in the continued success of LLMs.

Quote
2: Will the productivity gains from the current investments in LLMs generate enough revenue to justify the investment?

My vote is a definite no in the aggregate, but there may be individual winners.

The Minsky Moment

Quote from: Jacob on November 25, 2025, 05:13:53 PM
Your projected timeline for progress (in the 2050s) seems to imply that the complexity gap between 2) and 3) is roughly similar to the gap between 1) and 2) (relative to available resources).

I'm not sure that's the case.

Oh I don't have a clue, really.  Just two rough datapoints and a virtual napkin.

DGuller

Quote from: The Minsky Moment on November 25, 2025, 05:07:04 PM
Quote from: DGuller on November 25, 2025, 01:31:05 PM
The title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I don't see how that is possible. The model is the model.  It assigns probability distributions to different potential outputs based on an algorithm.  The training consists of feeding in more data and giving feedback to adjust the algorithm.  The algorithm is just seeking probabilistic matches based on its parameters.
I don't see how everything you wrote from the second sentence on is connected to your first sentence.  Yes, you accurately described how training and inference of any statistical or machine learning model works, but how does that description offer any insight as to why the bolded part is impossible?
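
For concreteness, here is a minimal sketch of the mechanism being described, reduced to a toy bigram model in Python. This is an illustration only, not anything either poster wrote: a real LLM has billions of learned parameters rather than raw counts, but the shape is the same, in that training adjusts parameters from data and inference returns a probability distribution over potential next tokens.

    # Toy bigram "language model": training adjusts parameters (here, counts),
    # inference returns a probability distribution over potential next tokens.
    from collections import Counter, defaultdict

    class BigramLM:
        def __init__(self):
            # For each token, counts of the tokens observed to follow it.
            self.counts = defaultdict(Counter)

        def train(self, tokens):
            # "Feeding in more data" just adjusts the model's parameters.
            for prev, nxt in zip(tokens, tokens[1:]):
                self.counts[prev][nxt] += 1

        def next_token_distribution(self, prev):
            # Inference: a probability distribution over potential outputs.
            c = self.counts[prev]
            total = sum(c.values())
            return {tok: n / total for tok, n in c.items()}

    lm = BigramLM()
    lm.train("the cat sat on the mat".split())
    print(lm.next_token_distribution("the"))  # {'cat': 0.5, 'mat': 0.5}

The question the posters are debating is whether anything "functionally analogous to intelligence" can emerge inside such a mechanism once it is scaled up by many orders of magnitude.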

Jacob

#801
Quote from: The Minsky Moment on November 25, 2025, 05:18:02 PM
I'm entirely unqualified to opine, but I find the "no" voices on the question to be more convincing. It seems like the "bootstrap" case involves some magical thinking: if we just throw enough data at the LLM and tune up its capabilities, then it will somehow spontaneously evolve "general intelligence" despite the lack of any clear mechanism. I also can't help noticing that the strongest "yes" voices seem to have a strong financial incentive in the continued success of LLMs.

I concur with both of those perspectives.

Quote
My vote is a definite no in the aggregate, but there may be individual winners.

Quite possibly. I suspect individual winners will be bought out by one of the major US tech conglomerates.

Quote
Oh I don't have a clue, really.  Just two rough datapoints and a virtual napkin.

I suspect Silicon Valley producing AGI is going to be on a timeline somewhere between cold fusion and (best case) Musk's fully autonomous Teslas.

Admiral Yi

Quote from: Josquius on November 25, 2025, 01:29:14 PM
Yes. Google are "better". But that's no defence.
A monopoly isn't necessarily created by anything devious. The fact that they've been in the game so long that they've established an unassailably deep and wide position is enough.

A quick search shows those in the know do indeed suggest Google's monopoly is allowing them to get away with high pricing.

And no, Meta does not have the same share as Google at all. As I said, Google has around 90%. Meta cut their losses when they couldn't make enough to even break even.

Fair enough.  You're suggesting Google's dominance in ad exchange and publisher ad server is the result of first-mover advantage, not the leveraging of their strong position in advertising sales.  That pretty much supports my point: if Google were broken up, the now-independent ad exchange and publisher ad server would still benefit from first-mover advantage.

I've provided a link that says Google has 23.9% of online advertising sales, the same order of magnitude as Meta and Amazon.  What is this 90% you speak of?  90% of what?

The Minsky Moment

Quote from: DGuller on November 25, 2025, 05:48:34 PM
I don't see how everything you wrote from the second sentence on is connected to your first sentence.  Yes, you accurately described how training and inference of any statistical or machine learning model works, but how does that description offer any insight as to why the bolded part is impossible?

Because nothing in the mechanism I described has the capacity for self-evolution.

DGuller

Quote from: The Minsky Moment on November 25, 2025, 08:43:57 PM
Quote from: DGuller on November 25, 2025, 05:48:34 PM
I don't see how everything you wrote from the second sentence on is connected to your first sentence.  Yes, you accurately described how training and inference of any statistical or machine learning model works, but how does that description offer any insight as to why the bolded part is impossible?

Because nothing in the mechanism I described has the capacity for self-evolution.
I didn't mean to imply self-evolution, or Darwinian evolution, or anything of the kind.  :rolleyes:  I meant evolved as in "developed".

Jacob

#805
Quote from: DGuller on November 25, 2025, 10:19:38 PM
I didn't mean to imply self-evolution, or Darwinian evolution, or anything of the kind.  :rolleyes:  I meant evolved as in "developed".

I'm not sure I'm following your argument correctly here, but it seems like you're saying that "it's theoretically possible that LLM models could be developed to a point where their output 'emulates intelligent communication', and that if it does then it can essentially be considered cognizant, whatever is 'under the hood', purely on the strength of the apparent intelligence of the output".

Is that right? Or have I missed some nuance?

You're not arguing that AGI is around the corner, but that a sufficiently refined LLM could achieve a partial success that's nonetheless sufficient to call it cognizant? Or are you saying that you think there's a good chance that LLMs can become virtually indistinguishable from AGI in output, and that if they do then they can be considered AGI regardless of what goes on "under the hood"? That is, that LLMs still have significant potential to reach AGI levels in the short term?

The Minsky Moment

The question seems to hang on how one defines "functionally analogous to intelligence."  Contemporary LLMs are capable of generating output equivalent to what intelligent human beings would generate for certain kinds of defined tasks.  Contemporary LLMs are capable of generating communicative output that passes the Turing Test.  If one defines those capabilities as intelligence, then definitionally the statement is true.  It's not what I understand as "general intelligence", though.

DGuller

Quote from: Jacob on November 25, 2025, 11:03:35 PM
Quote from: DGuller on November 25, 2025, 10:19:38 PM
I didn't mean to imply self-evolution, or Darwinian evolution, or anything of the kind.  :rolleyes:  I meant evolved as in "developed".

I'm not sure I'm following your argument correctly here, but it seems like you're saying that "it's theoretically possible that LLM models could be developed to a point where their output 'emulates intelligent communication', and that if it does then it can essentially be considered cognizant, whatever is 'under the hood', purely on the strength of the apparent intelligence of the output".

Is that right? Or have I missed some nuance?
I'm a little hesitant to agree, because I may be agreeing to something that I understand differently from you.  I mean, I couldn't have imagined how "evolved" could be interpreted here...

Let me try to concisely restate your restatement:  "If LLMs are developed to the point where they can consistently emulate intelligent communication, then they're functionally equivalent to being intelligent.  If you can have an 'Einstein LLM' that would say everything flesh and bones Einstein would say, even in novel contexts, then it doesn't matter whether the algorithm is classified as cognizant or not;  functionally it is as intelligent as flesh and bones Einstein was."

Admiral Yi

How would you program curiosity?  Invention?

The Minsky Moment

Yeah, I don't doubt you could in theory create an "Einstein LLM" or a "Hawking LLM" or something similar.  Train it on everything Einstein ever said or wrote, everything that was said or written about him, and what was said or written by people most similar to him.  And sure, it may create a reasonable facsimile of what the historical Einstein would actually say in response to various prompts.  But what it can't do is replicate what Einstein would say or do if he were reincarnated at birth in the 21st century and lived a full life.  Because that Einstein would probably think or do novel and unexpected things now, just like the real Einstein did in the 1900s and 1910s.  But the LLM wouldn't be able to do that, because it's not in its data set.
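
In the same toy spirit, and with the caveat that a real LLM generalizes through learned representations rather than failing outright on unseen input, the "not in its data set" point can be made concrete with a few lines of Python (again purely illustrative, not any poster's code):

    # Self-contained variant of the toy counting model above: a context absent
    # from the training data yields an empty distribution. (A real LLM
    # interpolates via learned representations, so it fails more gracefully,
    # but it is still limited by what its training distribution covers.)
    from collections import Counter, defaultdict

    counts = defaultdict(Counter)
    tokens = "light bends near mass and mass curves spacetime".split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

    print(dict(counts["mass"]))     # seen context: {'and': 1, 'curves': 1}
    print(dict(counts["quantum"]))  # unseen context: {}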