
The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


crazy canuck

Quote from: Tamas on Today at 08:59:19 AM
Quote from: crazy canuck on Today at 07:01:20 AM
Quote from: Tamas on Today at 06:50:44 AM
Anyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand it, LLMs simply guess the most likely next word (or word segment) in what they are "writing". So let's say it's playing EU4, and the UI is translated for it so it can process that it has negative stability. Surely, having stored all the strategy discussions from years of online posts, the likely string of words would be a correct strategy.
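The "most likely next word" mechanism being described can be sketched with a toy example. This is purely illustrative: real LLMs use neural networks over subword tokens, not a lookup table, and the tiny "corpus" of strategy-advice phrases below is invented for the sketch.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each word in a
# tiny "training corpus" of (invented) strategy advice, then always
# pick the most frequent follower.
corpus = ("negative stability raise stability "
          "negative stability reduce war exhaustion "
          "negative stability raise stability").split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict(word):
    # Return the statistically most likely next word.
    return followers[word].most_common(1)[0][0]

print(predict("negative"))   # "stability" (always followed "negative")
print(predict("stability"))  # "raise" (seen twice vs. "reduce" once)
```

Note the model here has no concept of what "stability" is; it only reproduces whichever continuation was most common in the text it counted, which is the crux of the disagreement in the thread.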

No. As we are seeing in every other field where LLMs are being used, the end product is not useful unless someone carefully corrects all the errors produced by the tool.  It ends up being highly inefficient for things that require accuracy.  And the slop problem is real: once AI starts getting trained on the output of other AI, the end product becomes increasingly unreliable.
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

DGuller

Quote from: The Minsky Moment on Today at 10:35:36 AM
The likely string of words would be a correct strategy if the training data contains good strategies and the model has been given feedback that leads it to assign higher probability weighting to those choices.  Looking at DG's ChatGPT output, it appears that the model has been fed the Paradox forums and Reddit threads on HoI4, as it parrots some of the buzzwords you'd find there.

The model "knows" that certain symbols in the game correspond to certain words like "manpower," "losses," and resources.  It can spit those numbers back out and associate them with those characters.  Then it can make a guess (i.e., assign probability distributions) as to what a likely response would be to a combat screen showing zero "losses," based on what people have said on the forums, Reddit, etc. about similar situations: "insane / bugged / encirclement abuse / nukes + AI collapse / paradrop cheese."  ChatGPT has no idea what "losses" are or what "paradrop cheese" is, but its algorithm tells it that, in the context of discussions about the symbol "HoI4", there is some probabilistic relationship between those two phrases.  The model has no idea what an optimal strategy is, or what a strategy is, or what Hearts of Iron is.  It is just making probabilistic connections between different symbolic representations in accordance with its algorithm.
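The "probabilistic connections between symbols" described in the quote amount to estimating a conditional probability of a response phrase given a context phrase. A minimal sketch, with entirely made-up co-occurrence data standing in for scraped forum text:

```python
from collections import Counter

# Hypothetical (context, response) pairs harvested from forum text.
# The model never interprets the phrases; it only counts them.
cooccurrences = [
    ("zero losses", "encirclement abuse"),
    ("zero losses", "paradrop cheese"),
    ("zero losses", "encirclement abuse"),
    ("negative stability", "raise stability"),
]

joint = Counter(cooccurrences)          # count of each (context, response)
by_context = Counter()                  # count of each context overall
for (context, _), n in joint.items():
    by_context[context] += n

def p(response, context):
    # P(response | context), estimated purely from co-occurrence counts.
    return joint[(context, response)] / by_context[context]

print(p("encirclement abuse", "zero losses"))  # 0.666... (2 of 3 pairs)
print(p("raise stability", "negative stability"))  # 1.0
```

The point of the sketch is that nothing in it requires understanding what "losses" or "cheese" mean; the relationship is entirely statistical, which is exactly the claim being debated.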
I think you're being unreasonably reductionist.  At the end of the day, your brain has no idea what losses are either; it's just your neurons firing.

crazy canuck

Well, if you have to wave away the obvious differences between human intelligence and what LLMs are doing in order to make your case, you may want to think of a better argument.

The Minsky Moment

Quote from: DGuller on Today at 10:56:59 AM
I think you're being unreasonably reductionist.  At the end of the day, your brain has no idea what losses are either, it's just your neurons firing.

I think I'm being reasonably reductionist, and I agree with your second sentence (although modified to say that it's my neurons firing in response to contact with my environment).  But whatever ChatGPT is doing, it isn't replicating how the human brain operates.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

DGuller

Quote from: The Minsky Moment on Today at 11:15:19 AM
I think I'm being reasonably reductionist, and I agree with your second sentence (although modified to say that it's my neurons firing in response to contact with my environment).  But whatever ChatGPT is doing, it isn't replicating how the human brain operates.
No, ChatGPT is not replicating how the human brain operates.  An AI doing voice-to-text transcription doesn't work like a stenographer typing out what is being dictated to them; both get the job done in their own ways.  It would be silly to build a robot that strikes keys with robotic fingers just to replicate how humans do it.  Matching the outcome is what matters; copying the mechanism is neither required nor even necessarily desirable.

The Minsky Moment

Right, and I agree that LLMs can do things.  I use them to do things.  They are good at some things, less good at others, and simply can't do many things at all.

One thing they are not is "general intelligence," and I don't see a path to get there by enhancing LLMs.