Quote from: The Minsky Moment on November 25, 2025, 05:18:02 PM
I'm entirely unqualified to opine, but I find the "no" voices on the question to be more convincing. It seems like the "bootstrap" case involves some magical thinking, as in: if we just throw enough data at the LLM and tune up its capabilities, then it will somehow spontaneously evolve "general intelligence", despite the lack of any clear mechanism. I also can't help noticing that the strongest "yes" voices seem to have a strong financial incentive in the continued success of LLMs.
Quote
My vote is a definite no in the aggregate, but there may be individual winners.
Quote
Oh I don't have a clue, really. Just two rough datapoints and a virtual napkin.
Quote from: The Minsky Moment on November 25, 2025, 05:07:04 PM
I don't see how everything you wrote from the second sentence on is connected to your first sentence. Yes, you accurately described how training and inference of any statistical or machine learning model works, but how does that description offer any insight as to why the bolded is impossible?
Quote from: DGuller on November 25, 2025, 01:31:05 PM
The title of the article is knocking down a strawman. People who think LLMs have some artificial intelligence don't equate language with intelligence. They see language as one of the "outputs" of intelligence. If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.
I don't see how that is possible. The model is the model. It assigns probability distributions to different potential outputs based on an algorithm. The training consists of feeding in more data and giving feedback that adjusts the algorithm's parameters. The algorithm is just seeking probabilistic matches based on its parameters.
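To make that description concrete, here is a minimal toy sketch in Python (my own illustration, not anyone's actual model or anything posted in the thread): a one-step "language model" whose only parameters are a table of scores. It does exactly what the post describes: map a context to a probability distribution over possible next tokens, with training nudging the parameters so that observed continuations become more probable. The vocabulary, corpus, function names, and learning rate are all invented for the example.

# Toy illustration (editor's example), not an actual LLM.
import math
from collections import defaultdict

VOCAB = ["the", "cat", "sat", "on", "mat", "<end>"]
weights = defaultdict(float)   # (context_token, next_token) -> learned score

def predict(context):
    # The model is just its parameters: a softmax over the scores gives
    # P(next token | context).
    scores = [weights[(context, t)] for t in VOCAB]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    return {t: e / total for t, e in zip(VOCAB, exps)}

def train_step(context, observed_next, lr=0.5):
    # "Feedback" here is the gradient of the log-likelihood: raise the score
    # of the continuation that actually appeared, lower the rest in proportion.
    probs = predict(context)
    for t in VOCAB:
        target = 1.0 if t == observed_next else 0.0
        weights[(context, t)] += lr * (target - probs[t])

corpus = ["the", "cat", "sat", "on", "the", "mat", "<end>"]
for _ in range(200):
    for ctx, nxt in zip(corpus, corpus[1:]):
        train_step(ctx, nxt)

print(predict("the"))  # probability mass shifts toward "cat" and "mat"

Whether a vastly scaled-up version of this kind of probabilistic matching amounts to cognition is exactly the point being argued in the thread; the sketch only shows the mechanics both sides are describing.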
Quote from: Jacob on November 25, 2025, 05:13:53 PM
Your projected timeline for progress (in the 2050s) seems to imply that the complexity gap between 2) and 3) is roughly similar to the gap between 1) and 2) (relative to available resources).
I'm not sure that's the case.
Quote from: Jacob on November 25, 2025, 05:10:49 PM
1: How likely is it that we'll reach AGI-type AI on the back of LLMs, as Altman suggests?
Quote
2: Will the productivity gains from the current investments in LLMs generate enough revenue to justify the investment?
Quote from: The Minsky Moment on November 25, 2025, 05:07:04 PM
Looking at matters broadly, you can generate a hierarchy of:
1) Manipulation of pure symbols: mathematical computations, games with clear, explicit rules like Chess or Go.
2) Manipulation of complex, representational symbols like language
3) Experiential learning not easily reduced to manipulation of symbols.
Each level of the hierarchy represents an increasing challenge for machines. At level 1, substantial progress had been made by the 1990s. With language, we are reaching that stage now. On that basis I would project substantial progress on level 3 in the 2050s, but perhaps we will see whether a couple trillion dollars in spending can speed that up.
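As a concrete illustration of level 1 in the hierarchy above (my own toy example in Python, not something from the thread): a game with fully explicit rules can be handled by exhaustive manipulation of symbols alone, with no experiential learning involved. The sketch assumes a simplified Nim variant (take 1-3 counters, whoever takes the last one wins); the function names and the game choice are mine.

# Toy illustration (editor's example) of "level 1" symbol manipulation.
from functools import lru_cache

@lru_cache(maxsize=None)
def player_to_move_wins(counters: int) -> bool:
    # Brute-force search of the whole game tree of a simple Nim variant.
    if counters == 0:
        return False  # the previous player took the last counter and won
    # The mover wins if some legal move leaves the opponent in a losing spot.
    return any(not player_to_move_wins(counters - take)
               for take in (1, 2, 3) if take <= counters)

def best_move(counters: int):
    # Return a winning move if one exists, otherwise None.
    for take in (1, 2, 3):
        if take <= counters and not player_to_move_wins(counters - take):
            return take
    return None

print([n for n in range(1, 13) if not player_to_move_wins(n)])  # losing piles: [4, 8, 12]
print(best_move(10))  # take 2, leaving the opponent a multiple of 4

Nothing outside the rules of the game enters the computation, which is why this level was tractable for machines decades before anything resembling level 2 or 3.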
Quote from: DGuller on November 25, 2025, 01:31:05 PM
The title of the article is knocking down a strawman. People who think LLMs have some artificial intelligence don't equate language with intelligence. They see language as one of the "outputs" of intelligence. If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.
Quote from: DGuller on November 25, 2025, 04:23:32 PM
I have a much bigger issue with the last paragraph. "LLMs emulate communication, not cognition" is an assertion about the most fundamental question. As I said in my first reply, if LLMs are good enough to emulate intelligent communication, then it's very plausible that they need to have some level of cognition to do that.
Quote from: Jacob on November 25, 2025, 03:32:15 PM
Quote from: DGuller on November 25, 2025, 03:01:30 PM
But then what is the point of that article? The AI bubble exists because people with money believe that a usable enough AI already exists, not because some people believe that "language = intelligence" or that the sky is brown. If the point of that article is not to say that "people mistakenly believe that a usable AI exists because they equate language to intelligence", then what exactly is the point?
The point of the article is:
Quote
The AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.
But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.
If you disagree that the first paragraph above is true, then obviously the argument against it is less relevant. Personally, I think Sam Altman is enough of a "thought leader" on this topic to make it worthwhile to address the position he advances.