Quote from: The Minsky Moment on Today at 11:15:19 AM
No, ChatGPT is not replicating how the human brain is operating. An AI doing voice-to-text transcription doesn't work like a stenographer typing out what is being dictated to them, yet both get the job done in their own ways. It would be silly to build a robot that strikes keys with robotic fingers just to replicate how humans do it. Matching the outcome is what matters; copying the mechanism is neither required nor even necessarily desirable.

Quote from: DGuller on Today at 10:56:59 AM
I think you're being unreasonably reductionist. At the end of the day, your brain has no idea what losses are either; it's just your neurons firing.
I think I'm being reasonably reductionist, and I agree with your second sentence (though I'd modify it to say that it's my neurons firing in response to contact with my environment). But whatever ChatGPT is doing, it isn't replicating how the human brain operates.
Quote from: DGuller on Today at 10:56:59 AM
I think you're being unreasonably reductionist. At the end of the day, your brain has no idea what losses are either; it's just your neurons firing.

Quote from: The Minsky Moment on Today at 10:35:36 AM
Quote from: Tamas on Today at 08:59:19 AM
Quote from: crazy canuck on Today at 07:01:20 AM
Quote from: Tamas on Today at 06:50:44 AM
Anyways, we have been ignoring a key question:
How can LLMs be made to play computer games?
I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess), then they would be able to learn optimal strategies from online sources, wouldn't they?
Yes, but it would not be able to distinguish good strategies from bad. So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.
Would they, though? As I understand it, LLMs simply guess the most likely next word (or word segment) in what they are "writing". So let's say it's playing EU4 and the UI is translated for it, so it can process that it has negative stability. Surely, having been trained on all the strategy discussion posted online over the years, the likely string of words would be a correct strategy.
The likely string of words would be a correct strategy if the training data contains good, optimal strategies and the model has been given feedback that leads it to give higher probability weighting to those choices. Looking at DG's ChatGPT output, it appears that the model has been fed the Paradox forums and the reddit threads on HoI4, as it parrots some of the buzzwords you'd find there.

The model "knows" that certain symbols in the game correspond to certain words like "manpower", "losses", and resources. It can spit those numbers back out and associate them with those characters. Then it can make a guess (i.e., assign probability distributions) as to what a likely response would be to a combat screen showing zero "losses", based on what people have said on the forums, reddit, etc. about similar situations: "insane / bugged / encirclement abuse / nukes + AI collapse / paradrop cheese". ChatGPT has no idea what "losses" are or what "paradrop cheese" is, but its algorithm tells it that in the context of discussions about the symbol "HoI4" there is some probabilistic relationship between those two phrases. The model has no idea what an optimal strategy is, or what a strategy is, or what Hearts of Iron is. It is just making probabilistic connections between different symbolic representations in accordance with its algorithm.
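Tamas's idea above of "translating" the UI into text is straightforward to make concrete. Below is a minimal sketch, assuming a hypothetical snapshot of EU4-style state; the GameState fields, the prompt wording, and the whole interface are invented for illustration, since no Paradox game actually exposes anything like this.

```python
# Minimal sketch of the "UI translation" layer: turn a snapshot of
# game state into plain text an LLM could consume. All field names
# and the prompt format are hypothetical.

from dataclasses import dataclass

@dataclass
class GameState:
    country: str
    stability: int      # EU4 stability runs from -3 to +3
    treasury: float     # ducats
    manpower: int
    at_war: bool

def state_to_prompt(state: GameState) -> str:
    """Flatten the game state into text, the way a UI scraper might."""
    return "\n".join([
        f"You are playing {state.country} in a grand strategy game.",
        f"Stability: {state.stability} (scale -3 to +3)",
        f"Treasury: {state.treasury:.0f} ducats",
        f"Manpower: {state.manpower}",
        f"At war: {'yes' if state.at_war else 'no'}",
        "What should the next action be, and why?",
    ])

print(state_to_prompt(GameState("Austria", -2, 150.0, 12000, False)))
```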
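Tamas's "guess the most likely next word" summary is also accurate to what happens at a model's output layer: every vocabulary entry gets a score, the scores are turned into probabilities, and one token is sampled. A toy sketch of that single step follows; the five-word vocabulary and the logits are made up, and real models do this over tens of thousands of subword tokens.

```python
# One toy next-token step: logits -> softmax -> sample.
# Vocabulary and scores are invented for illustration.

import math
import random

vocab = ["stability", "war", "ducats", "rebels", "coring"]
logits = [2.1, 0.3, 1.4, -0.5, 0.9]   # made-up scores for one context

def softmax(xs):
    m = max(xs)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
for word, p in sorted(zip(vocab, probs), key=lambda t: -t[1]):
    print(f"{word:10s} {p:.3f}")

# Sampling (rather than always taking the top word) is why the same
# prompt can produce different answers on different runs.
print("sampled:", random.choices(vocab, weights=probs, k=1)[0])
```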
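Minsky's point about a "probabilistic relationship between those two phrases" can be shown with toy counts: estimate how often "paradrop cheese" turns up given an "HoI4" context versus an "EU4" one. The snippets below are invented stand-ins for scraped forum and reddit text.

```python
# Toy estimate of P(phrase | game context) from labeled snippets.
# The snippets are fabricated examples, not real forum data.

from collections import Counter

snippets = [
    ("HoI4", "zero losses here is encirclement abuse or paradrop cheese"),
    ("HoI4", "zero losses usually means paradrop cheese or a bug"),
    ("HoI4", "zero losses looks bugged, maybe nukes plus AI collapse"),
    ("EU4",  "zero losses just means the siege ticked over without assault"),
]

phrase = "paradrop cheese"
totals = Counter(game for game, _ in snippets)
hits = Counter(game for game, text in snippets if phrase in text)

for game in totals:
    print(f"P({phrase!r} | {game}) = {hits[game] / totals[game]:.2f}")
```

The model never needs to know what a paradrop is for this statistic to exist; the phrase simply co-occurs with the "HoI4" symbol in the training text.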