Recent posts

#31
Off the Record / Re: Russo-Ukrainian War 2014-2...
Last post by Josquius - Today at 11:49:40 AM
Didn't they already turn down the American proposal?
#32
Off the Record / Re: Russo-Ukrainian War 2014-2...
Last post by Crazy_Ivan80 - Today at 11:42:41 AM
Quote from: Legbiter on November 24, 2025, 09:02:22 PM
The wishlist seems to have been watered down by the Europeans and Rubio in Geneva so the Russians can then predictably turn it down.

Which they have now done.
#33
Off the Record / Re: The AI dooooooom thread
Last post by Jacob - Today at 11:26:51 AM
Language and intelligence are two different things:

Large Language Mistake: Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it


Some excerpts

Quote
The problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

The AI hype machine relentlessly promotes the idea that we're on the verge of creating something as intelligent as humans, or even "superintelligence" that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we'll have AGI. Scaling is all we need.

But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.

...

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.


An AI enthusiast might argue that human-level intelligence doesn't need to necessarily function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.

Maybe! But there's no obvious reason to think we can get to general intelligence — as opposed to improved performance on narrowly defined tasks — through text-based training. After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data — and if you doubt this, think about how you know how to ride a bike.

In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a prominent skeptic of LLMs, left his role at Meta last week to found an AI startup developing what are dubbed world models: "systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences." And recently, a group of prominent AI scientists and "thought leaders" — including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus — coalesced around a working definition of AGI as "AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult" (emphasis added). Rather than treating intelligence as a "monolithic capacity," they propose instead we embrace a model of both human and artificial cognition that reflects "a complex architecture composed of many distinct abilities."

...

We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of "scientific paradigms," the basic frameworks for how we understand our world at any given time. He argued these paradigms "shift" not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world — and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, "common sense is a collection of dead metaphors."

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they're being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that's all it will be able to do. It will be forever trapped in the vocabulary we've encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.
#34
Off the Record / Re: The China Thread
Last post by crazy canuck - Today at 10:36:54 AM
If AI were making the decisions, they would be based on what people said their needs were in online conversations.

#35
Off the Record / Re: [Canada] Canadian Politics...
Last post by crazy canuck - Today at 10:33:43 AM
A retired CBC reporter has been active on Instagram. A lot of his stuff is hilarious, and this is one of his best bits: America, we have moved on, stop being so cringe.

https://www.instagram.com/reel/DRd1-J7CUco/?igsh=a2ZtZmd0OGE1dXJ5

#36
Off the Record / Re: Russo-Ukrainian War 2014-2...
Last post by crazy canuck - Today at 10:27:10 AM
That's fine, then the pressure is off Ukraine.
#37
Off the Record / Re: The Off Topic Topic
Last post by crazy canuck - Today at 10:25:51 AM
Quote from: Tonitrus on Today at 03:28:04 AM
With all the crap we've given Musk about screwing up Twitter/X, why should we be so eager to straight up trust the location data that their system spits out? :hmm:

I am not sure why somebody posing as an Alberta patriot would purposely say that they are originating from the United States, or the Philippines, etc.
#38
Off the Record / Re: Climate Change/Mass Extinc...
Last post by crazy canuck - Today at 08:08:13 AM
Extreme warming event in the Arctic is disrupting the jet stream. Polar vortex incoming. Alberta and Manitoba will be hit and then likely the American Mid-West and then likely the East Coast. Brace yourselves.
#39
Gaming HQ / Re: Europa Universalis V confi...
Last post by Tamas - Today at 06:53:50 AM
My biggest gripe with the decentralisation changes isn't the majority's (omg, I have learned what to click to win, now I have to re-learn it - GAME IS BROKEN), but that a lot of laws give nice bonuses AND a decentralisation push. Earlier in the design there was clearly a "centralisation = good, decentralisation = bad" direction. Johan's latest change removes that. If I decide to "play wide" and just accept decentralisation, using it to keep lots of strong vassals eternally happy, why shouldn't I take all those nice law-based bonuses as well?
#40
Off the Record / Re: Brexit and the waning days...
Last post by Tamas - Today at 06:41:37 AM
Quote
UK bank shares jump after 'avoiding budget tax raid'
Shares in UK banks have jumped at the start of trading, following reports that they will be spared from a tax raid in the budget.

Splendid, a tax rise for me is looking more guaranteed by the day!

One thing Labour could perhaps take from Fidesz's playbook is sector taxes. Orban's people would raise taxes on banks and telecom companies. It was very obvious, and it came to pass, that those companies would need to increase the prices and interest they charged, so at the end of the day it was the consumers paying the tax, but the majority of the electorate could not make that "complex" connection between the banks being told to pay more and their own personal financial situation getting a bit worse.