Quote from: The Minsky Moment on November 25, 2025, 11:57:08 PM
Yeah, I don't doubt you could in theory create an "Einstein LLM" or a "Hawking LLM" or something similar. Train it on everything that Einstein ever said or wrote, on what was said or written about him, or on what was said or written by the people most similar to him. And sure, it may create a reasonable facsimile of what the historical Einstein would actually say in response to various prompts. But what it can't do is replicate what Einstein would say or do if he were reincarnated at birth in the 21st century and lived a full life. Because that Einstein would probably think or do novel and unexpected things now, just like the real Einstein did in the 1900s and 1910s. But the LLM wouldn't be able to do that, because it's not in its data set.

You are asserting an answer to a question that is being debated. You are asserting that an LLM cannot generalize beyond its training data. If it's true that a model cannot generalize, then by definition it cannot be intelligent, because the ability to generalize is the core function of intelligence.
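For what it's worth, the "Einstein LLM" described in the quote is just ordinary fine-tuning of a pretrained causal language model on a persona corpus. Here's a minimal sketch, assuming a Hugging Face transformers/datasets stack; the base model, the corpus file einstein_corpus.txt, and the hyperparameters are placeholders for illustration, not anything anyone in the thread specified:

```python
# Sketch of the "Einstein LLM" idea: fine-tune a pretrained causal LM
# on a corpus of Einstein's writings via next-token prediction.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)
from datasets import load_dataset

model_name = "gpt2"  # stand-in; any pretrained causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus: everything Einstein said or wrote, one passage per line.
dataset = load_dataset("text", data_files={"train": "einstein_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="einstein-llm", num_train_epochs=3),
    train_dataset=tokenized,
    # mlm=False -> plain next-token prediction, as for any causal LM
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Whether a model trained this way merely interpolates its corpus or genuinely generalizes to novel prompts is exactly the question the two posts above are disputing.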
Quote from: Zanza on November 25, 2025, 11:18:32 PM
Treason
Quote from: HisMajestyBOB on November 25, 2025, 11:41:38 PM
Though on the other hand, at this point it's clear that JD Vance would be far worse for Ukraine than Trump is.
Quote from: DGuller on November 25, 2025, 11:44:22 PM
I'm a little hesitant to agree, because I may be agreeing to something that I understand differently from you. I mean, I couldn't have imagined how "evolved" could be interpreted here...
Quote
Let me try to concisely restate your restatement: "If LLMs are developed to the point where they can consistently emulate intelligent communication, then they're functionally equivalent to being intelligent. If you can have an 'Einstein LLM' that would say everything flesh-and-bones Einstein would say, even in novel contexts, then it doesn't matter whether the algorithm is classified as cognizant or not; functionally it is as intelligent as flesh-and-bones Einstein was."
Quote from: Jacob on November 25, 2025, 11:03:35 PM
I'm a little hesitant to agree, because I may be agreeing to something that I understand differently from you. I mean, I couldn't have imagined how "evolved" could be interpreted here...

Quote from: DGuller on November 25, 2025, 10:19:38 PM
I didn't mean to imply self-evolution, or Darwinian evolution, or anything of the kind. I meant evolved as in "developed".
I'm not sure I'm following your argument correctly here, but it seems like you're saying that "it's theoretically possible that LLMs could be developed to a point where their output emulates intelligent communication, and that if that happens, they can essentially be considered cognizant, whatever is 'under the hood', purely on the strength of the apparent intelligence of the output".
Is that right? Or have I missed some nuance?