The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


crazy canuck

Quote from: Tamas on November 26, 2025, 08:59:19 AM
Quote from: crazy canuck on November 26, 2025, 07:01:20 AM
Quote from: Tamas on November 26, 2025, 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand it, LLMs simply guess the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, and the UI is translated for it so it can process that it has negative stability. Surely, having stored all the strategy discussions posted online over the years, the likely string of words would be a correct strategy.

No, as we are seeing in every other field where LLMs are being used, the end product is not useful unless someone carefully corrects all the errors produced by the tool.  It ends up being highly inefficient for things that require accuracy.  And the slop problem is real: once AI starts getting trained on the output of AI, the end product becomes increasingly unreliable.
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.
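The "trained on AI slop" worry above has a stylized version that can be sketched in a few lines: when a model is refit on its own output, rare continuations tend to fall below some effective sampling threshold and vanish, so each generation is less diverse than the last. The hard probability cutoff below is an invented illustration, not how any real training pipeline works, and the phrase frequencies are made up:

```python
# Toy sketch of "model collapse" by tail loss: each generation refits on
# its own filtered output, so rare phrases drop out and diversity shrinks.
# The hard cutoff rule and the starting frequencies are invented.

def next_generation(dist, cutoff=0.15):
    """Keep only phrases the 'model' emits often enough, then renormalize."""
    kept = {tok: p for tok, p in dist.items() if p >= cutoff}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# Generation 0: frequencies of phrases scraped from (hypothetical) forums.
dist = {"insane": 0.5, "bugged": 0.3, "paradrop cheese": 0.1, "nukes": 0.1}

for gen in range(3):
    dist = next_generation(dist)

# After a few rounds of retraining on its own output, the rare phrases
# are gone for good and the remaining mass concentrates on the head.
```

One pass through the cutoff is enough to lose the tail permanently; further generations only reinforce what survived.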

DGuller

Quote from: The Minsky Moment on November 26, 2025, 10:35:36 AM
Quote from: Tamas on November 26, 2025, 08:59:19 AM
Quote from: crazy canuck on November 26, 2025, 07:01:20 AM
Quote from: Tamas on November 26, 2025, 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand it, LLMs simply guess the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, and the UI is translated for it so it can process that it has negative stability. Surely, having stored all the strategy discussions posted online over the years, the likely string of words would be a correct strategy.

The likely string of words would be a correct strategy if the training data contains good strategies and the model has been given feedback that leads it to give higher probability weighting to those choices.  Looking at DG's ChatGPT output, it appears that the model has been fed the Paradox forums and reddit threads on HoI4, as it parrots some of the buzzwords you'd find there.

The model "knows" that certain symbols in the game correspond to certain words like "manpower", "losses", and "resources".  It can spit those numbers back out and associate them with those characters.  Then it can make a guess (i.e. assign probability distros) as to what a likely response would be to a combat screen showing zero "losses", based on what people have said on the forums, reddit, etc. about similar situations: "insane / bugged / encirclement abuse / nukes + AI collapse / paradrop cheese".   ChatGPT has no idea what "losses" are or what "paradrop cheese" is, but its algorithm tells it that in the context of discussions about the symbol "HoI 4" there is some probabilistic relationship between those two phrases. The model has no idea what an optimal strategy is, or what a strategy is, or what Hearts of Iron is.  It is just making probabilistic connections between different symbolic representations in accordance with its algorithm.
I think you're being unreasonably reductionist.  At the end of the day, your brain has no idea what losses are either, it's just your neurons firing.
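The mechanism Minsky describes (assigning probabilities to likely continuations based on frequency in the training text, with no notion of what the words mean) can be reduced to a toy bigram counter. This is purely illustrative: real LLMs learn neural representations rather than raw counts, and the "corpus" here is an invented stand-in for scraped forum chatter.

```python
# Toy sketch: next-token prediction as a frequency table over continuations.
# Real LLMs use learned neural networks, not bigram counts; the "corpus"
# below is an invented stand-in for scraped forum discussion.
from collections import Counter

corpus = (
    "zero losses encirclement abuse "
    "zero losses paradrop cheese "
    "zero losses encirclement abuse"
).split()

# Count which word follows each word in the training text.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word_distribution(word):
    """P(next | word): pure frequency, no understanding of 'losses'."""
    follows = {b: c for (a, b), c in bigrams.items() if a == word}
    total = sum(follows.values())
    return {w: c / total for w, c in follows.items()}

dist = next_word_distribution("losses")
# "encirclement" follows "losses" twice as often as "paradrop" in this
# corpus, so it gets twice the probability -- nothing more is going on.
```

Whether the high-probability continuation is also a *correct* strategy depends entirely on whether the training text happened to contain correct strategies, which is exactly the point of contention above.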

crazy canuck

Well, if you have to wave away the obvious differences between human intelligence and what LLMs are doing to make your case, you may want to think of a better argument.

The Minsky Moment

Quote from: DGuller on November 26, 2025, 10:56:59 AMI think you're being unreasonably reductionist.  At the end of the day, your brain has no idea what losses are either, it's just your neurons firing.

I think I'm being reasonably reductionist, and I agree with your second sentence (although modified to say that it's my neurons firing in response to contact with my environment).  But whatever ChatGPT is doing, it isn't replicating how the human brain is operating.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

DGuller

Quote from: The Minsky Moment on November 26, 2025, 11:15:19 AM
Quote from: DGuller on November 26, 2025, 10:56:59 AMI think you're being unreasonably reductionist.  At the end of the day, your brain has no idea what losses are either, it's just your neurons firing.

I think I'm being reasonably reductionist, and I agree with your second sentence (although modified to say that it's my neurons firing in response to contact with my environment).  But whatever ChatGPT is doing, it isn't replicating how the human brain is operating.
No, ChatGPT is not replicating how the human brain is operating.  An AI doing voice-to-text transcription doesn't work like a stenographer typing out what is being dictated to them.  Both get the job done in their own way.  It would be silly to build a robot that strikes keys with its robotic fingers just to replicate how humans do it.  Matching the outcome is what matters; copying the mechanism is neither required nor even necessarily desirable.

The Minsky Moment

Right and I agree that LLMs can do things.  I use them to do things.  They are good at some things, less good at others, and simply can't do many things at all.

One thing they are not is "general intelligence" and I don't see the path to get there by enhancing LLMs.

Jacob

If this gets close to the kind of mass production they're claiming, then we may soon have the answer to whether LLMs can produce AGI given sufficient computational power:

QuoteNew Chinese optical quantum chip allegedly 1,000x faster than Nvidia GPUs for processing AI workloads - firm reportedly producing 12,000 wafers per year

Quantum computing is still a long way from becoming a mainstream part of society; however, a Chinese firm has developed an all-new optical quantum computing chip that is closing the gap, billed as the world's first scalable, "industrial-grade" quantum chip. The South China Morning Post reports that the chip's developer claims it is "1,000 times faster" than Nvidia's GPUs at AI tasks and is already being used in some industries, including aerospace and finance.

The chip in question was built by the Chip Hub for Integrated Photonics Xplore (CHIPX) and is based on a brand-new co-packaging technology for photons and electronics, and it claims to be the first quantum computing platform to be widely deployable. These photonic chips house more than 1,000 optical components on a small 6-inch silicon wafer using a monolithic design, making them incredibly compact compared to traditional quantum computers.

All of these factors have reportedly allowed systems with these quantum chips to be deployed in just two weeks, compared to six months for traditional quantum computers. Its design also allows these chips to work in tandem with each other, just like AI GPUs, with deployments allegedly being "easily" scaled up to support 1 million qubits of quantum processing power.

More here: https://www.tomshardware.com/tech-industry/quantum-computing/new-chinese-optical-quantum-chip-allegedly-1-000x-faster-than-nvidia-gpus-for-processing-ai-workloads-but-yields-are-low

The Minsky Moment


HVC

So you're saying that GPUs are gonna be affordable again?
Being lazy is bad; unless you still get what you want, then it's called "patience".
Hubris must be punished. Severely.

The Minsky Moment
