The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Jacob

#810
Quote from: DGuller on November 25, 2025, 11:44:22 PM
I'm a little hesitant to agree, because I may be agreeing to something that I understand differently from you.  I mean, I couldn't have imagined how "evolved" could be interpreted here...

I want to make some kind of joke relating this to AI and human intelligence, but I can't quite figure out how...  :(

Quote
Let me try to concisely restate your restatement:  "If LLMs are developed to the point where they can consistently emulate intelligent communication, then they're functionally equivalent to being intelligent.  If you can have an "Einstein LLM" that would say everything flesh and bones Einstein would say, even to novel contexts, then it doesn't matter whether the algorithm is classified as cognizant or not;  functionally it is as intelligent as flesh and bones Einstein was."

Okay, understood.

It's a coherent position. I'm not 100% sure I agree with it - I'm more than a bit ambivalent to be honest - but let's accept it for the sake of argument.

First off, that's a really big if (to be fair, you did bold it :lol: ).

For me, the interesting part of that conversation is how likely it is that that if can be satisfied, and where the biggest challenges are likely to lie; as well as what subtle nuances are at risk of being elided (and with what consequences) in the eagerness to claim the if has been satisfied.

IMO, part of the point the article that set off this current discussion was making is that human reasoning is largely non-language based; therefore, in my reading, it is predicting that LLMs are not going to be able to satisfy that if no matter how much computational power is applied.

Secondly, there's what Minsky said about the Einstein LLM, which he articulated way better than my (now deleted) attempt.

HVC

But wouldn't that just be regurgitating Einstein? Sure it's more dynamic than a tape recorder playback, but functionally the same. It wouldn't be able to come up with new theories or thoughts.
Being lazy is bad; unless you still get what you want, then it's called "patience".
Hubris must be punished. Severely.

DGuller

Quote from: The Minsky Moment on November 25, 2025, 11:57:08 PM
Yeah I don't doubt you could in theory create an "Einstein LLM" or a "Hawking LLM" or something similar.  Train it on everything that Einstein ever said or wrote, or what was said or written about him, or what was said or written by people most similar to them.  And sure it may create a reasonable facsimile of what the historical Einstein would actually say in response to various prompts.  But what it can't do is replicate what Einstein would say or do if he were reincarnated at birth in the 21st century and lived a full life.  Because that Einstein would probably think or do novel and unexpected things now, just like the real Einstein did in the 1900s and 1910s.  But the LLM wouldn't be able to do that, because it's not in its data set.
You are asserting an answer to a question that is being debated.  You are asserting that an LLM cannot generalize beyond its training data.  If it's true that a model cannot generalize, then by definition it cannot be intelligent, because the ability to generalize is the core function of intelligence.

I think your assertion is not even true for classical statistical or machine learning models.  I have worked with plenty of statistical models, some that I even developed myself, that have translated at least partially to new domains.  In fact, there is even a whole concept called transfer learning, where you develop a model for one use case, and then take it as a starting point for a completely different use case, which is often preferable to starting from scratch.  Transfer learning would be a useless technique if machine learning models were incapable of generalizing beyond their training data.
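To make that concrete, here's a minimal sketch of what transfer learning typically looks like in practice (assuming PyTorch and torchvision are available; the 10-class target task is just a placeholder I made up):

import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet (the original use case).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the learned feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and attach a fresh head for a completely different use case.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head gets trained; the transferred representation does the rest.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

The pretrained layers never saw the new task's data, yet they still carry most of the useful structure, which is the whole point.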

A model's ability to generalize beyond its training data is not limited to filling in missing patterns. If its internal representation captures the real structure of a domain, then extrapolating into unobserved territory becomes a logical consequence of the model itself, not an act of mystical human creativity.
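As a toy illustration of that point (my own example, using scikit-learn; the numbers are arbitrary): fit two models on x between 0 and 10, where the true relationship is y = 2x, then ask both about x = 100, far outside the training data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

x_train = np.arange(0, 11).reshape(-1, 1)
y_train = 2 * x_train.ravel()

# A model whose internal structure matches the domain (linear) extrapolates fine:
linear = LinearRegression().fit(x_train, y_train)
print(linear.predict([[100]]))  # ~200, correct well outside the training range

# A model that only memorizes nearby examples cannot:
knn = KNeighborsRegressor(n_neighbors=3).fit(x_train, y_train)
print(knn.predict([[100]]))  # ~18, stuck at the edge of its data

The linear model gets x = 100 right not because it saw anything like it, but because its representation captures the actual structure of the domain.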

Valmy

I don't necessarily see how that is inconsistent with what Minsky was saying. Surely generalizations beyond its data must still be consistent with that data, yes?
Quote"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

DGuller

Quote from: Jacob on Today at 12:06:42 AM
IMO, part of the point the article that set off this current discussion was making is that human reasoning is largely non-language based; therefore, in my reading, it is predicting that LLMs are not going to be able to satisfy that if no matter how much computational power is applied.
Even if human reasoning is non-language based, the ultimate, useful output is expressed with language.  If you can emulate the useful output well enough even in novel contexts, then why does it matter whether you arrive at it with human reasoning or with canine intuition? 

This whole line of argument seems fixated on the mechanism rather than the result.  Human reasoning is the mechanism we're familiar with for generating insights, but it doesn't have to be the only one, or even the best one.

Zoupa

Wake me up when a machine invents something on its own.