The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Jacob

#810
Quote from: DGuller on November 25, 2025, 11:44:22 PMI'm a little hesitant to agree, because I may be agreeing to something that I understand differently from you.  I mean, I couldn't have imagined how "evolved" could be interpreted here...

I want to make some kind of joke relating this to AI and human intelligence, but I can't quite figure out how...  :(

QuoteLet me try to concisely restate your restatement:  "If LLMs are developed to the point where they can consistently emulate intelligent communication, then they're functionally equivalent to being intelligent.  If you can have an "Einstein LLM" that would say everything flesh and bones Einstein would say, even in novel contexts, then it doesn't matter whether the algorithm is classified as cognizant or not; functionally it is as intelligent as flesh and bones Einstein was."

Okay, understood.

It's a coherent position. I'm not 100% sure I agree with it - I'm more than a bit ambivalent to be honest - but let's accept it for the sake of argument.

First off, that's a really big if (to be fair, you did bold it :lol: ).

For me, the interesting part of that conversation is how likely it is that that if can be satisfied, and where the biggest challenges are likely to lie; as well as what subtle nuances are at risk of being elided (and with what consequences) in the eagerness to claim the if has been satisfied.

IMO, part of the point the article that set off this current discussion was making is that human reasoning is largely non-language based; therefore, in my reading, it is predicting that LLMs are not going to be able to satisfy that if, no matter how much computational power is applied.

Secondly, there's what Minsky said about the Einstein LLM, which he articulated way better than my (now deleted) attempt.

HVC

But wouldn't that just be regurgitating Einstein? Sure, it's more dynamic than a tape recorder playback, but functionally the same. It wouldn't be able to come up with new theories or thoughts.
Being lazy is bad; unless you still get what you want, then it's called "patience".
Hubris must be punished. Severely.

DGuller

Quote from: The Minsky Moment on November 25, 2025, 11:57:08 PMYeah I don't doubt you could in theory create an "Einstein LLM" or a "Hawking LLM" or something similar.  Train it on everything that Einstein ever said or wrote, or what was said or written about him, or what was said or written by people most similar to him.  And sure it may create a reasonable facsimile of what the historical Einstein would actually say in response to various prompts.  But what it can't do is replicate what Einstein would say or do if he were reincarnated at birth in the 21st century and lived a full life.  Because that Einstein would probably think or do novel and unexpected things now, just like the real Einstein did in the 1900s and 1910s.  But the LLM wouldn't be able to do that, because it's not in its data set.
You are asserting an answer to a question that is being debated.  You are asserting that an LLM cannot generalize beyond its training data.  If it's true that a model cannot generalize, then by definition it cannot be intelligent, because the ability to generalize is the core function of intelligence.

I think your assertion is not even true for classical statistical or machine learning models.  I have worked with plenty of statistical models, some that I even developed myself, that have translated at least partially to new domains.  In fact, there is even a whole concept called transfer learning, where you develop a model for one use case, and then take it as a starting point for a completely different use case, which is often preferable to starting from scratch.  Transfer learning would be a useless technique if machine learning models were incapable of generalizing beyond their training data.
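
To make it concrete, here's a minimal sketch of what transfer learning looks like in practice (assuming PyTorch/torchvision; the five-class problem at the end is made up for illustration):

Code:
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on ImageNet (the "old" use case).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone; its learned features carry over.
for param in model.parameters():
    param.requires_grad = False

# Swap in a fresh head for the new use case (hypothetical 5-class problem).
model.fc = nn.Linear(model.fc.in_features, 5)

# From here you would train only model.fc on the new domain's data,
# which is often preferable to training the whole network from scratch.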

The fact that a model can generalize beyond its training data is not limited to filling in missing patterns. If its internal representation captures the real structure of a domain, then extrapolating into unobserved territory becomes a logical consequence of the model itself, not an act of mystical human creativity.
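
A toy illustration of that point: if the model's internal representation captures the real structure of the domain (here, a simple linear law), extrapolating far outside the training range just falls out of the model. A minimal sketch, assuming scikit-learn and made-up numbers:

Code:
import numpy as np
from sklearn.linear_model import LinearRegression

# Train only on inputs between 0 and 10.
X_train = np.linspace(0, 10, 50).reshape(-1, 1)
y_train = 3.0 * X_train.ravel() + 2.0  # the true underlying structure

model = LinearRegression().fit(X_train, y_train)

# Predict far outside the training data; the captured structure,
# not memorized examples, drives the answer (~302).
print(model.predict([[100.0]]))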

Valmy

I don't necessarily see how that is inconsistent with what Minsky was saying. Surely generalizations beyond its data must still be consistent with that data, yes?
Quote"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

DGuller

Quote from: Jacob on Today at 12:06:42 AMIMO, part of the point the article that set off this current discussion was making is that human reasoning is largely non-language based; therefore, in my reading, it is predicting that LLMs are not going to be able to satisfy that if, no matter how much computational power is applied.
Even if human reasoning is non-language based, the ultimate, useful output is expressed with language.  If you can emulate the useful output well enough even in novel contexts, then why does it matter whether you arrive at it with human reasoning or with canine intuition? 

This whole line of argument seems fixated on the mechanism rather than the result.  Human reasoning is the mechanism we're familiar with for generating insights, but it doesn't have to be the only one, or even the best one.

Zoupa

Wake me up when a machine invents something on its own.

Jacob

#816
Quote from: DGuller on Today at 12:50:10 AMEven if human reasoning is non-language based, the ultimate, useful output is expressed with language.  If you can emulate the useful output well enough even in novel contexts, then why does it matter whether you arrive at it with human reasoning or with canine intuition?

I think this framing is incorrect. The useful output of human reasoning is not limited to what is expressed with language, far from it.

Human reasoning produces useful output independently of whether it can be articulated with language - namely actions.

We are constantly faced with actions carried out by people who cannot adequately explain in words why they did what they did. Nonetheless, they have used their human reasoning capabilities to determine their course of action and are convinced that it is profoundly correct.

When someone is asked why they did something and responds with "I don't know", "I'm not sure", "it seemed like the right thing to do", "someone had to do it", or the like, that's a good indication that they used their decision-making faculties in a way that produced a useful outcome without language mediating that decision.

Furthermore - and more personally - I'm fairly convinced that in many, many cases, when people attempt to articulate the reasons for the actions they have taken (or not taken) or for their opinions, the language used consists largely of after-the-fact rationalizations supporting a decision that was arrived at without language-based reasoning.

QuoteThis whole line of argument seems fixated on the mechanism rather than the result.  Human reasoning is the mechanism we're familiar with for generating insights, but it doesn't have to be the only one, or even the best one.

It is fixated on the mechanism because the mechanism is what we have. The result - which we don't have - is a big if, and we're discussing whether the mechanism as it exists (or as it is likely to be developed) can satisfy that if or not.

The line of argument is that it is unlikely to be able to do so, because the mechanism lacks a means to address a significant part of human reasoning and behaviour - namely, non-language based reasoning resulting in non-language based outputs. (Among other things, I'd personally argue that it also lacks the ability to make credible value judgements and take responsibility for consequences, two key components of human reasoning - but that's my own argument, not that of the article we're discussing.)

Now, LLMs may - as you suggest - generate insights in ways that are different from insights produced by human reasoning. I think some of the biomedical research uses of some of these models indicate potentially fruitful avenues to develop; but I don't think that's AGI. Rather, I believe those are specifically targeted models, developed for bespoke purposes, that do not suggest or require any kind of cognizance.


Tamas

Anyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?
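
Something like this toy sketch is what I have in mind - every field and the prompt wording are made up, just to show the "translation" step:

Code:
# Hypothetical sketch: flatten a Paradox-style game state into text an LLM can read.
game_state = {
    "country": "France",
    "stability": -2,
    "treasury": -150,
    "at_war_with": ["Austria", "Spain"],
}

def state_to_prompt(state):
    lines = [f"{key}: {value}" for key, value in state.items()]
    return (
        "You are playing EU4. Current situation:\n"
        + "\n".join(lines)
        + "\nWhat should the next move be?"
    )

print(state_to_prompt(game_state))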

crazy canuck

Quote from: Tamas on Today at 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

Josquius

Quote from: Tamas on Today at 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?


Video games are a key area where I really raise an eyebrow at the recent talk of AI as this new and amazing revolutionary thing. We've had AI for... ages. LLMs are not the same thing as all AI.
I do wonder when "LLM" will become more of a mainstream term.


I got to thinking the other day too - all companies, governments, etc. are going on about how they will embrace AI and become so brilliant for it. There's lots of worry about this for jobs, quality of output, and so on.
It strikes me, though... it's not that LLMs are entirely something new here. Cast your mind back to pre-COVID, and much the same stuff was being said. Only it was about data, not AI. And it didn't have half as much marketing success.

Tamas

Quote from: crazy canuck on Today at 07:01:20 AM
Quote from: Tamas on Today at 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand it, LLMs simply guess the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, and the UI is translated for it so it can process that it has negative stability. Surely, having stored all the strategy discussions posted online over the years, the likely string of words would be a correct strategy.
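
That "guess the next word" step, stripped down to a toy example (the vocabulary and the scores are entirely made up; numpy only):

Code:
import numpy as np

# Toy scores a model might assign to candidate next tokens
# after "stability is negative, you should...".
vocab = ["increase", "declare", "stability", "war", "banana"]
logits = np.array([4.2, 1.1, 3.8, 0.9, -2.0])

# Softmax turns the scores into a probability distribution.
probs = np.exp(logits) / np.exp(logits).sum()

# Greedy decoding: pick the most likely next token.
print(vocab[int(np.argmax(probs))])  # -> "increase"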

DGuller

ChatGPT has been able to interpret game screenshots for a long while now.  For example, here is what it will return if you ask it to explain what's going on in this screenshot of my HOI4 game:

[screenshot: HOI4 war overview screen]
QuoteYou absolutely annihilated the USSR without taking any casualties.
This is a Hearts of Iron IV war overview screen showing a German–Soviet War where:

Germany has:

4.00M fielded manpower

0 losses (which is insane / bugged / encirclement abuse / nukes + AI collapse / paradrop cheese)

100% war participation

The USSR side:

~524k–525k fielded manpower left

19.93M losses — that's nearly twenty million dead

Their minors, Sinkiang and Mongolia, are irrelevant and barely participated

So the reality here is:

You've completely broken the Soviets

Their army has evaporated. Twenty million casualties with effectively no German losses means either:

You rolled them with colossal encirclements and they died out of supply
or

You exploited AI stupidity with attacking-only orders where they suicided into your lines
or

You used nukes / CAS / terrain chokepoints / the River Line meatgrinder
or

Something glitched and the game isn't counting your casualties

Regardless, the USSR is militarily finished. One push and they collapse.

In HOI4 terms, this is a total reverse-Barbarossa massacre. They're done. You can march to Vladivostok with a blindfold on.

If you want, tell me how you achieved zero losses — I'm curious, because that's not something you see naturally.
In fact, the AI USSR did suicide into my lines; it was a bug in a fairly early version of HOI4, and I just set myself up to take the most ridiculous advantage of it to prove a point.

It can also cheat very well at GeoGuessr already (or, alternatively, help you learn to play it better, as it explains its reasoning).  Just give it a screen cap of what you're looking at, and more often than not a single still will already point you to the general area.
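
For anyone curious, feeding it a screenshot programmatically is only a few lines. A sketch assuming the OpenAI Python SDK and a vision-capable model; the file path and prompt are placeholders:

Code:
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("hoi4_war_overview.png", "rb") as f:  # placeholder path
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model works here
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Explain what's going on in this game screenshot."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)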

garbon

Quote from: Tamas on Today at 08:59:19 AM
Quote from: crazy canuck on Today at 07:01:20 AM
Quote from: Tamas on Today at 06:50:44 AMAnyways, we have been ignoring a key question:

How can LLMs be made to play computer games?

I was thinking: if the UI of something like a Paradox game could be "translated" into a format processable by these "AIs" (so, text, I guess) then they would be able to learn optimal strategies from online sources, wouldn't they?

Yes, but it would not be able to distinguish good strategies from bad.  So the same problems exist and are likely worse as AI inevitably gets trained on AI slop.

Would they, though? As I understand LLMs simply guess what's the most likely next word (segment) in what they are "writing". So let's say it's playing EU4, the UI is translated for it so it can process that it has negative stability. Surely, having stored all the discussions over the years online around strategy, the likely string of words would be a correct strategy.

Of course, the games have evolved, so not everything written over the length of EU4's existence would still be current info.
"I've never been quite sure what the point of a eunuch is, if truth be told. It seems to me they're only men with the useful bits cut off."
I drank because I wanted to drown my sorrows, but now the damned things have learned to swim.