Recent posts

#41
Off the Record / Re: The AI dooooooom thread
Last post by Crazy_Ivan80 - December 03, 2025, 01:20:11 PM
Quote from: HisMajestyBOB on December 03, 2025, 10:25:49 AM
I would like to get a new computer next year, please don't drive up the cost of components.

No new toys for the plebs... that's today's world
#42
Gaming HQ / Re: Europa Universalis V confi...
Last post by crazy canuck - December 03, 2025, 01:11:17 PM
Quote from: DGuller on December 03, 2025, 12:02:39 PM
Looks like my CtDs were a blessing in disguise; frankly, I think the complexity of EU5 far outstrips the abilities of Paradox to ever balance it.

This reminds me of a case in a very different genre:  iRacing in sim-racing.  Their lead physics modeler couldn't give up on the ambition of building a tire behavior model entirely from first principles, despite the fact that even for tire companies, tire behavior is still a bit of a black box.  The more he tried to fix it from first principles, the more he uncovered different modes where tire behavior went completely off the rails.

No, the game is great.  You are missing out.
#43
Off the Record / Re: The AI dooooooom thread
Last post by PJL - December 03, 2025, 12:43:15 PM
Quote from: HisMajestyBOB on December 03, 2025, 10:25:49 AM
I would like to get a new computer next year, please don't drive up the cost of components.

Not just computers but all consumer goods with silicon components will be affected. So everything from washing machines to cars.
#44
Gaming HQ / Re: Europa Universalis V confi...
Last post by Valmy - December 03, 2025, 12:27:58 PM
Quote from: DGuller on December 03, 2025, 12:02:39 PM
Looks like my CtDs were a blessing in disguise; frankly, I think the complexity of EU5 far outstrips the abilities of Paradox to ever balance it.

We'll see. I think it is really good.

But the main issue is that there are so many aspects to the game that could be explored as a player, but that damn thing just plays so slowly and there is so much to do that I feel like I will never have time to really come to grips with it like I could with EU2. You can't just put it on fast speed and zoom to the next point of interest. You have to constantly be doing shit.

I do like the ability to automate aspects but...I don't know. It is hard to just give up the levers to key things.

Also the character system. Too many damn characters that take up too much of my time and they don't die enough. And you benefit from making lots of them. But it makes your game borderline unplayable.
#45
Gaming HQ / Re: Europa Universalis V confi...
Last post by Josephus - December 03, 2025, 12:11:03 PM
Ugh....I have an erratic ruler giving me an event each bloody year that drops my legitimacy by 5. Now I'm getting coups, which happen when legitimacy drops below 30. And he's still fucking 50 years old.
#46
Gaming HQ / Re: Europa Universalis V confi...
Last post by DGuller - December 03, 2025, 12:02:39 PM
Looks like my CtDs were a blessing in disguise; frankly, I think the complexity of EU5 far outstrips the abilities of Paradox to ever balance it.

This reminds me of a case in a very different genre:  iRacing in sim-racing.  Their lead physics modeler couldn't give up on the ambition of building a tire behavior model entirely from first principles, despite the fact that even for tire companies, tire behavior is still a bit of a black box.  The more he tried to fix it from first principles, the more he uncovered different modes where tire behavior went completely off the rails.
#47
Off the Record / Re: Russo-Ukrainian War 2014-2...
Last post by The Minsky Moment - December 03, 2025, 11:19:28 AM
Quote from: Baron von Schtinkenbutt on December 03, 2025, 09:37:13 AM
I think the spirit of "[t]he market can stay irrational longer than you can stay solvent" applies to war economies like Russia's as well.  It is fundamentally unhealthy, but trying to time its collapse is a fool's errand.  I think this point is missed in the "Russia's economy will collapse any day now" versus "Russia can keep this up forever" debate.

The chestnut that best fits here IMO is Smith's "There is a great deal of ruin in a nation."  Germany and Japan kept going for years in WW2 despite horrific bombings and blockades far worse than anything Russia is dealing with.

Russia started the war in a good position in terms of low levels of public debt, decent reserves, and a stable (if stagnant) macro outlook; Putin was also smart enough to put a pro in charge of the central bank, not just a crony.  That, plus long experience with sanctions dodging and financial improv, plus the fact that Europe couldn't easily wean itself off the Russian gas addiction (thanks Gerhard), got Russia through the crucial first 12 months.

As for how long it can go: as long as the Russian people are willing to tolerate the slow but steady deterioration in their economic prospects, as compared to the personal costs of anti-regime mobilization.
#48
Off the Record / Re: The AI dooooooom thread
Last post by Baron von Schtinkenbutt - December 03, 2025, 11:01:08 AM
Google did move straight from developing the transformer architecture to developing a language model with it.  However, they took a fundamentally different approach.  Google developed BERT, which is an encoder-only transformer model.  OpenAI developed what is arguably a higher-level model architecture based on a decoder-only transformer model.

Google's approach created a language model that was suitable for creating inputs to non-language models.  BERT and its derivatives became heavily used as the first stage in ML systems to generate various forms of content embeddings.  These embeddings are also useful on their own for search and retrieval systems, which is why Google went this direction.  The transformer architecture itself came out of an effort to develop better models for sequence-to-sequence conversion than what was then available with LSTM models fed by simple word embeddings like Word2Vec.
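
Rough sketch of what that looks like in practice (using Hugging Face's transformers library purely for illustration; the model name and the mean-pooling step are placeholder choices of mine, not Google's actual setup): run text through an encoder-only model, pool the token vectors into one embedding per sentence, and compare embeddings the way a search/retrieval pipeline would.

import torch
from transformers import AutoTokenizer, AutoModel

# Encoder-only model: text goes in, contextual token vectors come out.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["how do tires actually behave", "tire physics modeling"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state  # shape: [batch, tokens, 768]

# Mean-pool the token vectors (ignoring padding) to get one embedding per sentence.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# A retrieval system would rank documents by a similarity score like this one.
score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(float(score))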

OpenAI intended to go in a different direction and create a language model that could generate language rather than encoded sequences.  They introduced the concept of generative pre-training (GPT), which is what gives these models their "knowledge".  Basically, it's an architecture designed to recreate language that looks like what it had been trained on.  This approach is not very useful for search and retrieval, but it is useful if you want to build a chatbot that uses the "knowledge" encoded in the model to do retrieval and synthesis.
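
And the decoder-only side, equally sketchy (GPT-2 here only because it's small and public, not because it's what OpenAI actually ships): the model just keeps predicting the next token from the prompt, which is the behavior all the chatbot tooling gets built on top of.

from transformers import AutoTokenizer, AutoModelForCausalLM

# Decoder-only, generatively pre-trained model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The transformer architecture was originally developed for"
inputs = tokenizer(prompt, return_tensors="pt")

# Autoregressive generation: predict the next token, append it, repeat.
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))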

As architectures developed, it turned out the GPT architecture had so-called emergent behaviors that made the base models built this way useful for general tasks, provided the right tooling and scaffolding was built around them.  Google came around to the generative model party late partly because the value to them wasn't clear until OpenAI rolled out a complete chatbot product using it.  Plus, as Joan said, the whole "we're just a research effort" bullshit.
#49
Off the Record / Re: The AI dooooooom thread
Last post by crazy canuck - December 03, 2025, 10:32:43 AM
Quote from: DGuller on December 03, 2025, 10:09:11 AM
Quote from: The Minsky Moment on December 03, 2025, 09:49:50 AM
  • Google launched their new Gemini iteration and appears to have overtaken or at least caught up to GPT-5.  OpenAI insiders leaked a memo from Altman declaring a "code red".  Of course, there is nothing unexpected about this development; Google was the pioneer of these industrial-scale LLMs (from their 2017 paper) and was only beaten to market because they wouldn't release a half-baked product.
I think the explanation for Google being beaten to market is off the mark.  Throughout history, plenty of companies have failed to capitalize on their own inventions, or even appreciate their potential, for reasons other than their focus on quality.  I think the far more likely explanation is that Google, being a mature large public company, is just naturally far less nimble than a motivated privately-held startup.  Companies like that have way too many stakeholder alignment meetings, risk committee working groups, and quarterly earnings targets to move fast, at least until external factors make them move fast.

It has nothing to do with nimbleness. When GPT was first released, the developers were very clear that it was still in development. But despite that warning, people treated it as if it were a reliable tool, with all of the catastrophic consequences that have been widely reported.

GPT is still in development and yet people still take it seriously as if it is a reliable tool.

Google's product may be reliable, or it may have the same defects as all LLMs.  We shall see.
#50
Off the Record / Re: Russo-Ukrainian War 2014-2...
Last post by Crazy_Ivan80 - December 03, 2025, 10:30:49 AM
Russia will stay solvent as long as China desires