
The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Josquius

Quote from: Admiral Yi on November 24, 2025, 06:02:51 PM
Quote from: Josquius on November 24, 2025, 06:45:31 AMI missed that link. But yes. It's back-end stuff that is used by basically everyone in online advertising. If you see an ad online and you're not on Facebook, Twitter, Amazon, or some other walled garden... then it's probably a Google ad.

Do you think Google is overcharging advertisers to use this exchange and these servers?

Do you think breaking up Google would solve this problem?

I've no idea on costs. I've worked in marketing-adjacent areas but not on that side of things.
But for sure one company absolutely dominating the market to such a degree is not healthy.
Breaking up Google would indeed weaken the power they can bring to bear and give would-be competitors a chance - as I say, even Meta failed when it tried, let alone a startup.

Admiral Yi

Quote from: Josquius on Today at 03:08:08 AMI've no idea on costs. I've worked in marketing-adjacent areas but not on that side of things.
But for sure one company absolutely dominating the market to such a degree is not healthy.
Breaking up Google would indeed weaken the power they can bring to bear and give would-be competitors a chance - as I say, even Meta failed when it tried, let alone a startup.

Well that's the thing.  As Joan pointed out, traditional monopoly analysis examines the company's ability to extract rents, to charge more than the free-market price.  When talking about monopolies, "not healthy" means exactly that.

Discussions of "power" tend to be circular.  They have power because they are a monopoly.  They are a monopoly because they have power.  Have you considered the possibility they are just better?  From the link I posted Meta has close to Google's share of online advertising.  Ergo they have roughly the same "power."  It's not logical that they failed to compete with Google in ad exchange or publisher ad server (whatever that is) because they had less power.

Jacob

Language and intelligence are two different things:

Large Language Mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it


Some excerpts

QuoteThe problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

The AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

...

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.


An AI enthusiast might argue that human-level intelligence doesn't need to necessarily function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.

Maybe! But there's no obvious reason to think we can get to general intelligence — not improving narrowly defined tasks — through text-based training. After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data — and if you doubt this, think about how you know how to ride a bike.

In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a prominent skeptic of LLMs, left his role at Meta last week to found an AI startup developing what are dubbed world models: "systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences." And recently, a group of prominent AI scientists and "thought leaders" — including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus — coalesced around a working definition of AGI as "AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult" (emphasis added). Rather than treating intelligence as a "monolithic capacity," they propose instead we embrace a model of both human and artificial cognition that reflects "a complex architecture composed of many distinct abilities."

...

We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of "scientific paradigms," the basic frameworks for how we understand our world at any given time. He argued these paradigms "shift" not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world — and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, "common sense is a collection of dead metaphors."

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they're being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that's all it will be able to do. It will be forever trapped in the vocabulary we've encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

crazy canuck

Yeah, and that is one of the reasons scientific research funding agencies are now prohibiting the use of AI tools when researchers prepare funding proposals.  The AI tool can only produce research proposals based on what has already been proposed or already researched. But the funding agencies want to fund novel research, something AI tools are incapable of developing.

The other reason is the avalanche of utter trash LLM models generate.
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

Josquius

I do hope that as the bubble pops and the backlash builds, we can see more attention given to good uses for LLMs and related data crunching rather than just generating fake docs.


Quote from: Admiral Yi on Today at 03:28:07 AM
Quote from: Josquius on Today at 03:08:08 AMI've no idea on costs. I've worked in marketing-adjacent areas but not on that side of things.
But for sure one company absolutely dominating the market to such a degree is not healthy.
Breaking up Google would indeed weaken the power they can bring to bear and give would-be competitors a chance - as I say, even Meta failed when it tried, let alone a startup.

Well that's the thing.  As Joan pointed out, traditional monopoly analysis examines the company's ability to extract rents, to charge more than the free-market price.  When talking about monopolies, "not healthy" means exactly that.

Discussions of "power" tend to be circular.  They have power because they are a monopoly.  They are a monopoly because they have power.  Have you considered the possibility they are just better?  From the link I posted Meta has close to Google's share of online advertising.  Ergo they have roughly the same "power."  It's not logical that they failed to compete with Google in ad exchange or publisher ad server (whatever that is) because they had less power.


Yes. Google are "better". But that's no defence.
A monopoly isn't necessarily created by anything devious. The fact that they've been in the game so long that they've established an unassailably deep and wide position is enough.

A quick search shows those in the know do indeed suggest Google's monopoly is allowing them to get away with high pricing.

And no, Meta does not have the same share as Google at all. As said, Google has around 90%. Meta cut their losses when they couldn't make enough to even break even.

DGuller

Quote from: Jacob on Today at 11:26:51 AMLanguage and intelligence are two different things:

Large Language Mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it


Some excerpts

QuoteThe problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

The AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

...

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.


An AI enthusiast might argue that human-level intelligence doesn't need to necessarily function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.

Maybe! But there's no obvious reason to think we can get to general intelligence — not improving narrowly defined tasks — through text-based training. After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data — and if you doubt this, think about how you know how to ride a bike.

In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a prominent skeptic of LLMs, left his role at Meta last week to found an AI startup developing what are dubbed world models: "systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences." And recently, a group of prominent AI scientists and "thought leaders" — including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus — coalesced around a working definition of AGI as "AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult" (emphasis added). Rather than treating intelligence as a "monolithic capacity," they propose instead we embrace a model of both human and artificial cognition that reflects "a complex architecture composed of many distinct abilities."

...

We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of "scientific paradigms," the basic frameworks for how we understand our world at any given time. He argued these paradigms "shift" not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world — and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, "common sense is a collection of dead metaphors."

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they're being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that's all it will be able to do. It will be forever trapped in the vocabulary we've encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

The title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

Sheilbh

Quote from: Jacob on Today at 11:26:51 AMLanguage and intelligence are two different things:

Large Language Mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it


Some excerpts

QuoteThe problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.
Delighted that only 30 years after the Sokal affair, STEM is finally acknowledging that Derrida was right :w00t:
Let's bomb Russia!

Jacob

Quote from: DGuller on Today at 01:31:05 PMThe title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I am happy to agree that you don't think that. And equally happy to agree that thoughtful proponents of LLMs don't think that.

But the idea is out there, and some of the hype propagated by the relevant tech CEOs (who are some of the richest and most powerful men in the world, and who absolutely have a role in shaping public discourse) certainly seems to imply it, if not outright state it at times.

So I don't agree it's a strawman. It's a thoughtful and well-argued counter to a line of argument that is absolutely being made, even if it's not being made by you.

crazy canuck

Quote from: Sheilbh on Today at 01:35:37 PMDelighted that only 30 years after the Sokal affair, STEM is finally acknowledging that Derrida was right :w00t:

 :yes:
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

DGuller

Quote from: Jacob on Today at 02:08:56 PM
Quote from: DGuller on Today at 01:31:05 PMThe title of the article is knocking down a strawman.  People who think LLMs have some artificial intelligence don't equate language with intelligence.  They see language as one of the "outputs" of intelligence.  If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.

I am happy to agree that you don't think that. And equally happy to agree that thoughtful proponents of LLMs don't think that.

But the idea is out there, and some of the hype propagated by the relevant tech CEOs (who are some of the richest and most powerful men in the world, and who absolutely have a role in shaping public discourse) certainly seems to imply it, if not outright state it at times.

So I don't agree it's a strawman. It's a thoughtful and well-argued counter to a line of argument that is absolutely being made, even if it's not being made by you.
But then what is the point of that article?  The AI bubble exists because people with money believe that a usable enough AI already exists, not because some people believe that "language = intelligence" or that the sky is brown.  If the point of that article is not to say that "people mistakenly believe that a usable AI exists because they equate language with intelligence", then what exactly is the point?