The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Josquius

Quote from: Admiral Yi on November 24, 2025, 06:02:51 PM
Quote from: Josquius on November 24, 2025, 06:45:31 AMI missed that link. But yes. It's back-end stuff that is used by basically everyone in online advertising. If you see an ad online and you're not on Facebook, Twitter, Amazon, or some other walled garden.... then it's probably a Google ad.

Do you think Google is overcharging advertisers to use this exchange and these servers?

Do you think breaking up Google would solve this problem?

I've no idea on costs. I've worked in marketing adjacent areas but not in that side of things.
But for sure one company absolutely dominating the market to such a degree is not healthy.
Breaking up Google would indeed weaken the power they can bring to bear and give would-be competitors a chance - as I say, even Meta failed when it had a try, let alone a startup.

Admiral Yi

Quote from: Josquius on Today at 03:08:08 AMI've no idea on costs. I've worked in marketing adjacent areas but not in that side of things.
But for sure one company absolutely dominating the market to such a degree is not healthy.
Breaking up Google would indeed weaken the power they can bring to bear and give would-be competitors a chance - as I say, even Meta failed when it had a try, let alone a startup.

Well that's the thing.  As Joan pointed out, traditional monopoly analysis examines the company's ability to extract rents, to charge higher than the free market price.  When talking about monopolies, "not healthy" means exactly that.

Discussions of "power" tend to be circular.  They have power because they are a monopoly.  They are a monopoly because they have power.  Have you considered the possibility they are just better?  From the link I posted Meta has close to Google's share of online advertising.  Ergo they have roughly the same "power."  It's not logical that they failed to compete with Google in ad exchange or publisher ad server (whatever that is) because they had less power.

Jacob

Language and intelligence are two different things:

Large Language Mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it


Some excerpts

QuoteThe problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

The AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

...

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.


An AI enthusiast might argue that human-level intelligence doesn't need to necessarily function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.

Maybe! But there's no obvious reason to think we can get to general intelligence — not improving narrowly defined tasks — through text-based training. After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data — and if you doubt this, think about how you know how to ride a bike.

In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a prominent skeptic of LLMs, left his role at Meta last week to found an AI startup developing what are dubbed world models: "systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences." And recently, a group of prominent AI scientists and "thought leaders" — including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus — coalesced around a working definition of AGI as "AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult" (emphasis added). Rather than treating intelligence as a "monolithic capacity," they propose instead we embrace a model of both human and artificial cognition that reflects "a complex architecture composed of many distinct abilities."

...

We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of "scientific paradigms," the basic frameworks for how we understand our world at any given time. He argued these paradigms "shift" not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world — and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, "common sense is a collection of dead metaphors."

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they're being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that's all it will be able to do. It will be forever trapped in the vocabulary we've encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

crazy canuck

Yeah, and that is one of the reasons scientific research funding agencies are now prohibiting the use of AI tools when researchers make research funding proposals.  The AI tool can only produce research proposals based on what has already been proposed or already researched. But the funding agencies want to fund novel research.  Something AI tools are incapable of developing.

The other reason is the avalanche of utter trash LLM models generate.
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

Josquius

I do hope that as the bubble pops, backlash builds, etc., we can see more attention given to good uses for LLMs and related data crunching rather than just generating fake docs.


Quote from: Admiral Yi on Today at 03:28:07 AM
Quote from: Josquius on Today at 03:08:08 AMI've no idea on costs. I've worked in marketing adjacent areas but not in that side of things.
But for sure one company absolutely dominating the market to such a degree is not healthy.
Breaking up Google would indeed weaken the power they can bring to bear and give would-be competitors a chance - as I say, even Meta failed when it had a try, let alone a startup.

Well that's the thing.  As Joan pointed out, traditional monopoly analysis examines the company's ability to extract rents, to charge higher than the free market price.  When talking about monopolies, "not healthy" means exactly that.

Discussions of "power" tend to be circular.  They have power because they are a monopoly.  They are a monopoly because they have power.  Have you considered the possibility they are just better?  From the link I posted Meta has close to Google's share of online advertising.  Ergo they have roughly the same "power."  It's not logical that they failed to compete with Google in ad exchange or publisher ad server (whatever that is) because they had less power.


Yes. Google are "better". But that's no defence.
A monopoly isn't necessarily created by anything devious. The fact that they've been in the game so long that they've established an unassailably deep and wide position is enough.

A quick search shows those in the know do indeed suggest Google's monopoly is allowing them to get away with high pricing.

And no, Meta does not have the same share as Google at all. As said, Google has around 90%. Meta cut their losses when they couldn't make enough to even break even.

DGuller

Quote from: Jacob on Today at 11:26:51 AMLanguage and intelligence are two different things:

Large Language Mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it


Some excerpts

QuoteThe problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

The AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

...

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.


An AI enthusiast might argue that human-level intelligence doesn't need to necessarily function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.

Maybe! But there's no obvious reason to think we can get to general intelligence — not improving narrowly defined tasks — through text-based training. After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data — and if you doubt this, think about how you know how to ride a bike.

In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a prominent skeptic of LLMs, left his role at Meta last week to found an AI startup developing what are dubbed world models: "systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences." And recently, a group of prominent AI scientists and "thought leaders" — including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus — coalesced around a working definition of AGI as "AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult" (emphasis added). Rather than treating intelligence as a "monolithic capacity," they propose instead we embrace a model of both human and artificial cognition that reflects "a complex architecture composed of many distinct abilities."

...

We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of "scientific paradigms," the basic frameworks for how we understand our world at any given time. He argued these paradigms "shift" not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world — and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, "common sense is a collection of dead metaphors."

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they're being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that's all it will be able to do. It will be forever trapped in the vocabulary we've encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

The title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

Sheilbh

Quote from: Jacob on Today at 11:26:51 AMLanguage and intelligence are two different things:

Large Language Mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it


Some excerpts

QuoteThe problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.
Delighted that only 30 years after the Sokal affair, STEM is finally acknowledging that Derrida was right :w00t:
Let's bomb Russia!

Jacob

Quote from: DGuller on Today at 01:31:05 PMThe title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I am happy to agree that you don't think that. And equally happy to agree that thoughtful proponents of LLMs don't think that.

But the idea is out there, and some of the hype propagated by the relevant tech CEOs (who are some of the richest and most powerful men in the world, and who absolutely have a role in shaping the public discourse) certainly seems to imply it, if not outright state it at times.

So I don't agree it's a strawman. It's a thoughtful and well-argued counter-argument to a line of thought that is absolutely being made, even if it's not being made by you.

crazy canuck

Quote from: Sheilbh on Today at 01:35:37 PMDelighted that only 30 years after the Sokal affair, STEM is finally acknowledging that Derrida was right :w00t:

 :yes:
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

DGuller

Quote from: Jacob on Today at 02:08:56 PM
Quote from: DGuller on Today at 01:31:05 PMThe title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I am happy to agree that you don't think that. And equally happy to agree that thoughtful proponents of LLMs don't think that.

But the idea is out there, and some of the hype propagated by the relevant tech CEOs (who are some of the richest and most powerful men in the world, and who absolutely have a role in shaping the public discourse) certainly seems to imply it, if not outright state it at times.

So I don't agree it's a strawman. It's a thoughtful and well-argued counter-argument to a line of thought that is absolutely being made, even if it's not being made by you.
But then what is the point of that article?  The AI bubble exists because people with money believe that a usable enough AI already exists, not because some people believe that "language = intelligence" or that the sky is brown.  If the point of that article is not to say that "people mistakenly believe that a usable AI exists because they equate language to intelligence", then what exactly is the point?

Jacob

Quote from: DGuller on Today at 03:01:30 PMBut then what is the point of that article?  The AI bubble exists because people with money believe that a usable enough AI already exists, not because some people believe that "language = intelligence" or that the sky is brown.  If the point of that article is not to say that "people mistakenly believe that a usable AI exists because they equate language to intelligence", then what exactly is the point?

The point of the article is:

QuoteThe AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

If you disagree that the first paragraph above is true then obviously the argument against it is less relevant. Personally I think Sam Altman is enough of a "thought leader" on this topic to make it worthwhile to address the position he advances.

Josquius

Earlier today I was at a talk about AI in Development.

This article was recommended. Interesting. Definitely lines up with what I've seen about compounding errors.

https://utkarshkanwat.com/writing/betting-against-agents

QuoteWhy I'm Betting Against AI Agents in 2025 (Despite Building Them)
I've built 12+ AI agent systems across development, DevOps, and data operations. Here's why the current hype around autonomous agents is mathematically impossible and what actually works in production.

Everyone says 2025 is the year of AI agents. The headlines are everywhere: "Autonomous AI will transform work," "Agents are the next frontier," "The future is agentic." Meanwhile, I've spent the last year building many different agent systems that actually work in production. And that's exactly why I'm betting against the current hype.
I'm not some AI skeptic writing from the sidelines. Over the past year, I've built more than a dozen production agent systems across the entire software development lifecycle:
Development agents: UI generators that create functional React components from natural language, code refactoring agents that modernize legacy codebases, documentation generators that maintain API docs automatically, and function generators that convert specifications into working implementations.
Data & Infrastructure agents: Database operation agents that handle complex queries and migrations, DevOps automation AI systems managing infrastructure-as-code across multiple cloud providers.
Quality & Process agents: AI-powered CI/CD pipelines that fix lint issues, generate comprehensive test suites, perform automated code reviews, and create detailed pull requests with proper descriptions.
These systems work. They ship real value. They save hours of manual work every day. And that's precisely why I think much of what you're hearing about 2025 being "the year of agents" misses key realities.
TL;DR: Three Hard Truths About AI Agents
After building AI systems, here's what I've learned:
Error rates compound exponentially in multi-step workflows. 95% reliability per step = 36% success over 20 steps. Production needs 99.9%+.
Context windows create quadratic token costs. Long conversations become prohibitively expensive at scale.
The real challenge isn't AI capabilities, it's designing tools and feedback systems that agents can actually use effectively.
The Mathematical Reality No One Talks About
Here's the uncomfortable truth that every AI agent company is dancing around: error compounding makes autonomous multi-step workflows mathematically impossible at production scale.



Let's do the math. If each step in an agent workflow has 95% reliability, which is optimistic for current LLMs, then:
5 steps = 77% success rate
10 steps = 59% success rate
20 steps = 36% success rate
Production systems need 99.9%+ reliability. Even if you magically achieve 99% per-step reliability (which no one has), you still only get 82% success over 20 steps. This isn't a prompt engineering problem. This isn't a model capability problem. This is mathematical reality.
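A minimal sketch of that compounding math (illustrative only, not code from the article): overall success is just per-step reliability raised to the number of steps.

# Per-step reliability compounds multiplicatively across a workflow.
for per_step in (0.95, 0.99, 0.999):
    for steps in (5, 10, 20):
        print(f"{per_step:.1%} per step over {steps} steps -> {per_step ** steps:.0%} end-to-end")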
My DevOps agent works precisely because it's not actually a 20-step autonomous workflow. It's 3-5 discrete, independently verifiable operations with explicit rollback points and human confirmation gates. The "agent" handles the complexity of generating infrastructure code, but the system is architected around the mathematical constraints of reliability.
Every successful agent system I've built follows the same pattern: bounded contexts, verifiable operations, and human decision points (sometimes) at critical junctions. The moment you try to chain more than a handful of operations autonomously, the math kills you.
The Token Economics That Don't Add Up
There's another mathematical reality that agent evangelists conveniently ignore: context windows create quadratic cost scaling that makes conversational agents economically impossible.
Here's what actually happens when you build a "conversational" agent:
Each new interaction requires processing ALL previous context
Token costs scale quadratically with conversation length
A 100-turn conversation costs $50-100 in tokens alone
Multiply by thousands of users and you're looking at unsustainable economics
I learned this the hard way when prototyping a conversational database agent. The first few interactions were cheap. By the 50th query in a session, each response was costing multiple dollars - more than the value it provided. The economics simply don't work for most scenarios.
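A minimal sketch of the quadratic growth, using assumed numbers (the token size and price below are hypothetical placeholders):

# Each turn re-sends the whole history, so total tokens processed grow ~quadratically.
TOKENS_PER_TURN = 1_000      # assumed average prompt + reply size
PRICE_PER_1K = 0.01          # assumed price per 1,000 tokens; varies by model

context = 0
total_tokens = 0
for turn in range(100):
    context += TOKENS_PER_TURN   # conversation history keeps growing
    total_tokens += context      # and gets reprocessed on every turn
print(f"100 turns: {total_tokens:,} tokens, roughly ${total_tokens / 1_000 * PRICE_PER_1K:,.2f}")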



My function generation agent succeeds because it's completely stateless: description → function → done. No context to maintain, no conversation to track, no quadratic cost explosion. It's not a "chat with your code" experience, it's a focused tool that solves a specific problem efficiently.
The most successful "agents" in production aren't conversational at all. They're smart, bounded tools that do one thing well and get out of the way.
The Tool Engineering Reality Wall
Even if you solve the math problems, you hit a different kind of wall: building production-grade tools for agents is an entirely different engineering discipline that most teams underestimate.
Tool calls themselves are actually quite precise now. The real challenge is tool design. Every tool needs to be carefully crafted to provide the right feedback without overwhelming the context window. You need to think about:
How does the agent know if an operation partially succeeded? How do you communicate complex state changes without burning tokens?
A database query might return 10,000 rows, but the agent only needs to know "query succeeded, 10k results, here are the first 5." Designing these abstractions is an art.
When a tool fails, what information does the agent need to recover? Too little and it's stuck; too much and you waste context.
How do you handle operations that affect each other? Database transactions, file locks, resource dependencies.
My database agent works not because the tool calls are unreliable, but because I spent weeks designing tools that communicate effectively with the AI. Each tool returns structured feedback that the agent can actually use to make decisions, not just raw API responses.
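As a rough illustration of what structured feedback like that could look like (hypothetical helper, not the author's actual tooling):

# Summarize a large query result into compact feedback instead of dumping raw rows.
def summarize_query_result(rows, preview=5):
    return {
        "status": "success",
        "row_count": len(rows),
        "preview": rows[:preview],   # just enough for the agent to choose its next step
    }

print(summarize_query_result([{"id": i} for i in range(10_000)]))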
The companies promising "just connect your APIs and our agent will figure it out" haven't done this engineering work. They're treating tools like human interfaces, not AI interfaces. The result is agents that technically make successful API calls but can't actually accomplish complex workflows because they don't understand what happened.
The dirty secret of every production agent system is that the AI is doing maybe 30% of the work. The other 70% is tool engineering: designing feedback interfaces, managing context efficiently, handling partial failures, and building recovery mechanisms that the AI can actually understand and use.
The Integration Reality Check
But let's say you solve the reliability problems and the economics. You still have to integrate with the real world, and the real world is a mess.
Enterprise systems aren't clean APIs waiting for AI agents to orchestrate them. They're legacy systems with quirks, partial failure modes, authentication flows that change without notice, rate limits that vary by time of day, and compliance requirements that don't fit neatly into prompt templates.
My database agent doesn't just "autonomously execute queries." It navigates connection pooling, handles transaction rollbacks, respects read-only replicas, manages query timeouts, and logs everything for audit trails. The AI handles query generation; everything else is traditional systems programming.
The companies promising "autonomous agents that integrate with your entire tech stack" are either overly optimistic or haven't actually tried to build production systems at scale. Integration is where AI agents go to die.
What Actually Works (And Why)
After building more than a dozen different agent systems across the entire software development lifecycle, I've learned that the successful ones share a pattern:
My UI generation agent works because humans review every generated interface before deployment. The AI handles the complexity of translating natural language into functional React components, but humans make the final decisions about user experience.
My database agent works because it confirms every destructive operation before execution. The AI handles the complexity of translating business requirements into SQL, but humans maintain control over data integrity.
My function generation agent works because it operates within clearly defined boundaries. Give it a specification, get back a function. No side effects, no state management, no integration complexity.
My DevOps automation works because it generates infrastructure-as-code that can be reviewed, versioned, and rolled back. The AI handles the complexity of translating requirements into Terraform, but the deployment pipeline maintains all the safety mechanisms we've learned to rely on.
My CI/CD agent works because each stage has clear success criteria and rollback mechanisms. The AI handles the complexity of analyzing code quality and generating fixes, but the pipeline maintains control over what actually gets merged.
The pattern is clear: AI handles complexity, humans maintain control, and traditional software engineering handles reliability.
My Predictions
Here are my specific predictions about who will struggle in 2025:
Venture-funded "fully autonomous agent" startups will hit the economics wall first. Their demos work great with 5-step workflows, but customers will demand 20+ step processes that break down mathematically. Burn rates will spike as they try to solve unsolvable reliability problems.
Enterprise software companies that bolted "AI agents" onto existing products will see adoption stagnate. Their agents can't integrate deeply enough to handle real workflows.
Meanwhile, the winners will be teams building constrained, domain-specific tools that use AI for the hard parts while maintaining human control or strict boundaries over critical decisions. Think less "autonomous everything" and more "extremely capable assistants with clear boundaries."
The market will learn the difference between AI that demos well and AI that ships reliably. That education will be expensive for many companies.
I'm not betting against AI. I'm betting against the current approach to agent architecture. But I believe the future is going to be far more valuable than the hype suggests.
Building the Right Way
If you're thinking about building with AI agents, start with these principles:
Define clear boundaries. What exactly can your agent do, and what does it hand off to humans or deterministic systems?
Design for failure. How do you handle the 20-40% of cases where the AI makes mistakes? What are your rollback mechanisms?
Solve the economics. How much does each interaction cost, and how does that scale with usage? Stateless often beats stateful.
Prioritize reliability over autonomy. Users trust tools that work consistently more than they value systems that occasionally do magic.
Build on solid foundations. Use AI for the hard parts (understanding intent, generating content), but rely on traditional software engineering for the critical parts (execution, error handling, state management).
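A toy sketch of the "AI handles complexity, humans maintain control" pattern (the function names and generated SQL are made up for illustration):

# The generation step stands in for an AI call; the destructive step sits behind a human gate.
def generate_sql(request: str) -> str:
    return "DELETE FROM orders WHERE status = 'stale_test';"  # pretend this came from a model

def human_confirms(sql: str) -> bool:
    return input(f"Run this statement?\n{sql}\n[y/N] ").strip().lower() == "y"

statement = generate_sql("clean up stale test orders")
if human_confirms(statement):
    print("executing:", statement)   # execution stays in ordinary, reviewable code
else:
    print("aborted by human reviewer")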
The agent revolution is coming. It just won't look anything like what everyone's promising in 2025. And that's exactly why it will succeed.



DGuller

Quote from: Jacob on Today at 03:32:15 PM
Quote from: DGuller on Today at 03:01:30 PMBut then what is the point of that article?  The AI bubble exists because people with money believe that a usable enough AI already exists, not because some people believe that "language = intelligence" or that the sky is brown.  If the point of that article is not to say that "people mistakenly believe that a usable AI exists because they equate language to intelligence", then what exactly is the point?

The point of the article is:

QuoteThe AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

If you disagree that the first paragraph above is true then obviously the argument against it is less relevant. Personally I think Sam Altman is enough of a "thought leader" on this topic to make it worthwhile to address the position he advances.
I have a much bigger issue with the last paragraph.  "LLMs emulate communication, not cognition" is an assertion about the most fundamental question.  As I said in my first reply, if LLMs are good enough to emulate intelligent communication, then it's very plausible that they need to have some level of cognition to do that.

Jacob

Quote from: DGuller on Today at 04:23:32 PMI have a much bigger issue with the last paragraph.  "LLMs emulate communication, not cognition" is an assertion about the most fundamental question.  As I said in my first reply, if LLMs are good enough to emulate intelligent communication, then it's very plausible that they need to have some level of cognition to do that.

That's the point of the article, though.

Seems to me that you simply disagree with it and find the arguments it presents unconvincing. Which you are perfectly entitled to, of course.

For my part, I find the arguments in the article - including the opening part about neuroscience indicating that thinking happens largely independently of language - more persuasive than "it's plausible that emulating intelligent communication requires a level of cognition", at least barring definitional games with the terms "intelligent" and "cognition".

Sure, it's potentially plausible, but the evidence we have against it is stronger than the speculation we have in favour.

On the upside, the timeframe Altman is suggesting for reaching AGI type AI on the back of LLM is short enough that we'll be able to see for ourselves in due time.

The Minsky Moment

Quote from: DGuller on Today at 01:31:05 PMThe title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I don't see how that is possible. The model is the model.  It assigns probability distributions to different potential outputs based on an algorithm.  The training consists of feeding in more data and giving feedback to adjust the algorithm.  The algorithm is just seeking probabilistic matches based on its parameters.
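As a toy picture of what "assigns probability distributions to different potential outputs" means in practice (made-up numbers, nothing like a real model):

# A single language-model step boils down to a probability distribution over next tokens.
import random

next_token_probs = {"cat": 0.55, "dog": 0.30, "pelican": 0.15}   # illustrative weights only
tokens, weights = zip(*next_token_probs.items())
print("sampled continuation:", random.choices(tokens, weights=weights, k=1)[0])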

Looking at matters broadly, you can generate a hierarchy of:
1) Manipulation of pure symbols: mathematical computations, clear rules based games like Chess or Go.
2) Manipulation of complex, representational symbols like language
3) Experiential learning not easily reduced to manipulation of symbols.

Each level of the hierarchy represents an increasing challenge for machines. With level 1 substantial progress was made by the 1990s.  With language we are reaching that stage now.  On that basis I would project substantial progress in stage 3 in the 2050s, but perhaps we will see if a couple trillion in spending can speed that up.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson