
Recent posts

#31
Off the Record / Re: The EU thread
Last post by crazy canuck - Today at 11:20:01 AM
Even prior to January 2025 it was not obvious there were only two, and as you say, now that claim is entirely invalid.
#32
Off the Record / Re: The AI dooooooom thread
Last post by The Minsky Moment - Today at 11:15:19 AM
Quote from: DGuller on Today at 10:56:59 AMI think you're being unreasonably reductionist.  At the end of the day, your brain has no idea what losses are either, it's just your neurons firing.

I think I'm being reasonably reductionist, and I agree with your second sentence (although modified to say that it's my neurons firing in response to contact with my environment).  But whatever ChatGPT is doing, it isn't replicating how the human brain is operating.
#33
Off the Record / Re: The EU thread
Last post by The Minsky Moment - Today at 11:11:32 AM
Martin Wolf's editorial in the FT, worth reading: https://on.ft.com/48iyoOD

The subject matter is a review of a book by the US economist Neil Shearing, The Fractured Age. Shearing argues that the world is splitting into two competing economic blocs, one led by the US and the other by China.  He argues that the US-led bloc will triumph because it is far stronger economically.

Wolf points out an obvious flaw in Shearing's argument: he assumes that Europe is "strongly aligned" with the US bloc, along with other traditionally US-aligned countries like Japan, Canada, and Australia, and that other countries like Mexico, South Korea, and Turkey "lean" US.

All that would have seemed completely justifiable at most points from 1945 until January 20, 2025.  But it seems ludicrous to argue in November 2025 that the EU and the USA are "strongly aligned." And that is why the Trump administration has been so catastrophic diplomatically; it has weakened both the US and Europe drastically as against China.

What is interesting is that Shearing's "US-aligned" bloc has significantly more combined GDP than either the US alone or the "China bloc".  It does not seem inevitable that a split of the world economy into blocs would have to produce exactly two blocs . . .
#34
Off the Record / Re: The AI dooooooom thread
Last post by crazy canuck - Today at 10:59:37 AM
Well if you have to wave away the obvious differences between human intelligence and what LLMs are doing to make your case, you may want to think of a better argument.
#35
Off the Record / Re: The AI dooooooom thread
Last post by DGuller - Today at 10:56:59 AM
Quote from: The Minsky Moment on Today at 10:35:36 AM
Quote from: Tamas on Today at 08:59:19 AM
Quote from: crazy canuck on Today at 07:01:20 AM
Quote from: Tamas on Today at 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand it, LLMs simply guess the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, and the UI is translated for it so it can process that it has negative stability. Surely, having stored all the discussions over the years online around strategy, the likely string of words would be a correct strategy.

The likely string of words would be a correct strategy if the training data contains good optimal strategies and the model has been given feedback that leads it to give higher probability weighting to those choices.  Looking at DG's ChatGPT output, it appears that the model has been fed the Paradox forums and Reddit threads on HoI4, as it parrots some of the buzzwords you'd find there.

The model "knows" that certain symbols in the game correspond to certain words like "manpower", "losses", and "resources".  It can spit those numbers back out and associate them with those characters.  Then it can make a guess (i.e., assign probability distros) as to what a likely response would be to a combat screen showing zero "losses", based on what people have said on the forums, Reddit, etc. about similar situations: "insane / bugged / encirclement abuse / nukes + AI collapse / paradrop cheese".  ChatGPT has no idea what "losses" are or what "paradrop cheese" is, but its algorithm tells it that in the context of discussions about the symbol "HoI4" there is some probabilistic relationship between those two phrases. The model has no idea what an optimal strategy is, or what a strategy is, or what Hearts of Iron is.  It is just making probabilistic connections between different symbolic representations in accordance with its algorithm.
I think you're being unreasonably reductionist.  At the end of the day, your brain has no idea what losses are either, it's just your neurons firing.
#36
Off the Record / Re: Brexit and the waning days...
Last post by crazy canuck - Today at 10:48:03 AM
Quote from: Richard Hakluyt on Today at 07:28:54 AMI'm arguing for the right to call for a trial by jury, not mandatory trial by jury. For most offences it will just be trial by magistrate, as it is now. But if someone is in a politically sensitive trial (such as those silly buggers with the orange powder at Stonehenge, or the "we support Palestine Action" crowd), I think it is only right that they can call for a trial by jury.


I agree; jury trials are essential to the administration of justice in a common law jurisdiction.


I also agree that it is not helpful for someone to argue that civil law jurisdictions don't have jury trials so it's all OK. Criminal law and procedure are fundamentally different in those countries.

#37
I would guess most likely it was leaked from inside the US.  Given all the risks, Bloomberg would not have published unless they were pretty certain the recording was authentic.  That would rule out hackers.  A reputable European intel agency source would be possible, but that would be extremely risky if ever traced back.  We know there are divisions on Russia policy within the GOP camp and there are still people in the US intel community that aren't happy about the policy.
#38
Off the Record / Re: The AI dooooooom thread
Last post by crazy canuck - Today at 10:43:01 AM
Quote from: Tamas on Today at 08:59:19 AM
Quote from: crazy canuck on Today at 07:01:20 AM
Quote from: Tamas on Today at 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand it, LLMs simply guess the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, and the UI is translated for it so it can process that it has negative stability. Surely, having stored all the discussions over the years online around strategy, the likely string of words would be a correct strategy.

No, as we are seeing in every other field where LLMs are being used, the end product is not useful unless someone carefully corrects all the errors produced by the tool.  It ends up being highly inefficient for things that require accuracy.  And the slop problem is real: once AI starts getting trained on the output of AI, the end product becomes increasingly unreliable.
#39
Off the Record / Re: The Off Topic Topic
Last post by crazy canuck - Today at 10:38:36 AM
Interesting comment on how the Germans viewed the Allied forces: the Americans kill you with their equipment, the Brits kill you with discipline, the Canadians kill you because they want to.

https://www.instagram.com/reel/DRfdWk3kpz6/?igsh=a213N2QyejBhNnpk
#40
Off the Record / Re: The AI dooooooom thread
Last post by The Minsky Moment - Today at 10:35:36 AM
Quote from: Tamas on Today at 08:59:19 AM
Quote from: crazy canuck on Today at 07:01:20 AM
Quote from: Tamas on Today at 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand it, LLMs simply guess the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, and the UI is translated for it so it can process that it has negative stability. Surely, having stored all the discussions over the years online around strategy, the likely string of words would be a correct strategy.

The likely string of words would be a correct strategy if the training data contains good optimal strategies and the model has been given feedback that leads it to give higher probability weighting to those choices.  Looking at DG's ChatGPT output, it appears that the model has been fed the Paradox forums and Reddit threads on HoI4, as it parrots some of the buzzwords you'd find there.

The model "knows" that certain symbols in the game correspond to certain words like "manpower", "losses", and "resources".  It can spit those numbers back out and associate them with those characters.  Then it can make a guess (i.e., assign probability distros) as to what a likely response would be to a combat screen showing zero "losses", based on what people have said on the forums, Reddit, etc. about similar situations: "insane / bugged / encirclement abuse / nukes + AI collapse / paradrop cheese".  ChatGPT has no idea what "losses" are or what "paradrop cheese" is, but its algorithm tells it that in the context of discussions about the symbol "HoI4" there is some probabilistic relationship between those two phrases. The model has no idea what an optimal strategy is, or what a strategy is, or what Hearts of Iron is.  It is just making probabilistic connections between different symbolic representations in accordance with its algorithm.
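[Editor's note: the "probabilistic connections" idea above can be sketched as a toy next-phrase table. Everything here is invented for illustration — the context keys, the candidate phrases, and the probabilities are made up, and a real LLM learns distributions over subword tokens from training data rather than using a hand-built lookup table.]

```python
import random

# Toy stand-in for a language model: a hand-built table mapping a context to a
# probability distribution over candidate next phrases. All values are invented.
next_phrase_probs = {
    ("HoI4", "zero losses"): {
        "encirclement abuse": 0.35,
        "paradrop cheese": 0.30,
        "bugged": 0.20,
        "insane": 0.15,
    },
}

def most_likely_continuation(context):
    """Greedy decoding: return the single highest-probability continuation."""
    dist = next_phrase_probs[context]
    return max(dist, key=dist.get)

def sample_continuation(context, rng=random):
    """Stochastic decoding: sample a continuation in proportion to its probability."""
    dist = next_phrase_probs[context]
    phrases = list(dist)
    weights = [dist[p] for p in phrases]
    return rng.choices(phrases, weights=weights, k=1)[0]

print(most_likely_continuation(("HoI4", "zero losses")))  # -> encirclement abuse
```

The point of the sketch: nothing in either function understands what "losses" or "paradrop cheese" mean; the output is entirely determined by the stored probabilities, which is the reductionist picture being debated above.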