Electric cars

Started by Threviel, October 31, 2021, 01:18:25 AM


crazy canuck

Quote from: DGuller on July 07, 2023, 09:04:50 PM
Quote from: Razgovory on July 07, 2023, 08:58:38 PM
Quote from: DGuller on July 07, 2023, 08:20:08 PM
Quote from: crazy canuck on July 07, 2023, 05:48:44 PM
I don't think you are right about that. Upthread, I posted a link to a podcast in which Runciman was interviewing two experts in AI. They described what it does very differently, and their explanation actually provides a more compelling reason why it just makes shit up.
I strongly suspect that their description wasn't actually very different, but just a different way of saying what I said.  I wouldn't expect you to be able to recognize that, as this isn't your domain of expertise.
Ooooh, sick burn.
That wasn't meant to be a burn.  It's legitimately hard to understand when someone is saying equivalent things in a field where you're not an expert.  It doesn't mean that you shouldn't be discussing them; it would be a very boring place if we all just stuck to discussing our areas of expertise, but it would help to say things in a way that allows for the possibility that the experts in the field might know something you don't.

No, they were real experts and so they were able to explain complex ideas using plain language.  I find that is the hallmark of someone who knows what they are talking about.  People who pretend to be experts, on the other hand, tend to retreat into the realm of, "this is too complicated for you to understand or for me to explain it in a way you will understand."

Since you can't be bothered to listen to the actual experts before determining whether you agree with what they said, I am not sure why I should continue to have this discussion with you, if all you are going to do is go on a personal attack.

Good day.

DGuller

So you make an argument of the form "some experts on some podcast said something differently from you, and I think they're right", and your expectation is that I would listen to the entire podcast to figure out what it is that they said, and how it differs from what I described, just for me to figure out what I'm supposed to rebut?  Does that strike you as a reasonable expectation?  I just assumed it was an appeal to authority not meant to be taken as an argument.

crazy canuck

Quote from: DGuller on July 10, 2023, 03:52:24 PM
So you make an argument of the form "some experts on some podcast said something differently from you, and I think they're right", and your expectation is that I would listen to the entire podcast to figure out what it is that they said, and how it differs from what I described, just for me to figure out what I'm supposed to rebut?  Does that strike you as a reasonable expectation?  I just assumed it was an appeal to authority not meant to be taken as an argument.

No, my argument is that ChatGPT has a well-known track record of spitting out completely made-up information as fact.  The reason it does so is that it is a program that is essentially just guessing what the next word should be.  It strings words together that make sense to the ill-informed but are complete rubbish to the knowledgeable.

We have already drawn to your attention the case of the legal brief which cited principles of law and cases that did not actually exist but sounded good.

This is not a system that tries to be correct.  It was not designed to be correct, just to string words together that sound good.

Your argument in this thread is that it is akin to a well-informed colleague who is an expert in their field.  It is definitely not that.  No lawyer I have ever worked with simply made case names and principles up.  There are things that reasonable people can disagree about and debate. Making shit up is not one of them.

DGuller

Quote from: crazy canuck on July 10, 2023, 04:10:33 PM
No, my argument is that ChatGPT has a well-known track record of spitting out completely made-up information as fact.  The reason it does so is that it is a program that is essentially just guessing what the next word should be.  It strings words together that make sense to the ill-informed but are complete rubbish to the knowledgeable.
This paragraph is an example of why I said what I said earlier, about you not being qualified to identify whether the different words are describing the same concept, or a different concept.  It's clear to me that you have not grasped the meaning behind the technical description of how large language models work.

It is true that ChatGPT hallucinates, but your "because" does not follow.  Yes, you are correct that the way ChatGPT writes is by predicting one word at a time, but it seems like you've taken that knowledge in without really understanding the concept or the context.  As I'm writing this reply, I'm also writing one word at a time, and I think most people likewise write one word at a time. 

Even though I'm writing one word at a time, I have a general idea in my mind of what I want to convey, and that general idea is a big part of determining what the next word I'm typing is going to be.  I still care what words I already typed, because obviously I want the next word to play nice with everything I've already written before it, but that doesn't mean that I'm just stringing words together.  You seem to think that predicting which word to write one at a time somehow precludes the possibility that a larger idea of what is being written exists.
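
To make the mechanism concrete, here is a minimal sketch of greedy next-word decoding with GPT-2 via the Hugging Face transformers library (the prompt and the 20-step limit are arbitrary choices for illustration). Each prediction is conditioned on everything generated so far, not just the previous word:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The court held that the defendant"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits              # scores for every vocabulary token
        next_id = logits[0, -1].argmax()              # greedy: pick the most probable next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))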

I don't know what these experts said on the podcast.  More likely than not, they said something insightful and correct.  That doesn't necessarily mean that what you got out of listening to them is anywhere near what they wanted to convey.  Clearly some other communication about LLMs didn't land the way it was intended, because no expert would want you to believe that LLMs hallucinate because they write one word at a time.

crazy canuck

You are not simply predicting what your next word should be.  You are thinking (I hope) in full sentences/thoughts toward the goal of making an argument.

That is something that is not happening with ChatGPT, and that is why it just makes shit up.

Let's try this a different way: what is your explanation for why ChatGPT makes up case references and legal principles that do not exist?

DGuller

Quote from: crazy canuck on July 10, 2023, 05:40:35 PM
You are not simply predicting what your next word should be.  You are thinking (I hope) in full sentences/thoughts toward the goal of making an argument.

That is something that is not happening with ChatGPT, and that is why it just makes shit up.
Predicting the next word is the mechanism of converting "thoughts" into sentences for ChatGPT.  How else would it know which word is likely to be next, and which is unlikely to be next, if there is no sense of what it's trying to say?
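
A similar sketch, again with GPT-2 via Hugging Face transformers, shows where that "sense of what it's trying to say" lives: in the probability the model assigns to each candidate next token (the prompt is just an example):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]   # scores for the next token, given the whole prompt
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)

# Print the five most likely continuations and their probabilities.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.3f}")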

This reminds me of a conversation I had with my dad a week ago about chess engines, although one difference is that my dad was asking me questions rather than telling me how they worked.  He did not understand how chess engines today could beat the very best human in the world every single time.  He understood that computers could calculate so much better than humans, but he knew that there are some positions in chess which are very hard to calculate, and that the best players intuitively know whether they're good or not.

My response to my dad was that he was wrong to implicitly assume that chess engines have no intuition, and that they're only beating humans by thinking deeper.  Some chess engines actually have incredibly strong "intuition", if you define it as vaguely knowing from experience what is good and what isn't without being able to elucidate why.  In fact you can say that all deep learning neural network models are "intuitive", which in some contexts is actually a weakness if you need a concise understanding of how it got to the prediction that it did.
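
As a toy sketch of that kind of "intuition" (the network, features, and weights below are invented purely for illustration, and this is not how any real engine is built): a learned evaluation function scores positions directly, and a move is chosen with no look-ahead at all.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for weights that, in a real engine, would be learned from millions of games.
W1 = rng.normal(size=(64, 32))
W2 = rng.normal(size=32)

def evaluate(position_features: np.ndarray) -> float:
    """Score a position from its features, with no calculation of future moves."""
    hidden = np.tanh(position_features @ W1)
    return float(hidden @ W2)

def pick_move(candidate_positions: dict) -> str:
    """Prefer the move whose resulting position the network scores highest."""
    return max(candidate_positions, key=lambda m: evaluate(candidate_positions[m]))

# Invented feature vectors for the positions after three candidate moves.
candidates = {m: rng.normal(size=64) for m in ["e4", "d4", "Nf3"]}
print(pick_move(candidates))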

I think one reason why we're failing to connect here in our competing explanations of how LLMs work is that you seem to reject the notion that it's possible for a model to have an "idea" behind its predictions.  While technically they don't have "ideas" (it's all just a complex formula with literally billions of parameters), when formulas get so complex that humans can't possibly understand where they came from, there is no practical difference between mathematical calculations and the worshipped intangible human ideas.
Quote
Let's try this a different way: what is your explanation for why ChatGPT makes up case references and legal principles that do not exist?
I already gave my explanation:  it overgeneralized, because it either wasn't strong enough, or it wasn't exposed enough to the legal content.  What statistical models of all kinds do is find generalizations in observations:  they want to capture transferable patterns.  Capturing patterns is how they learn (and I would argue that's also how humans learn).  The more data and the more capacity you give the model, the more it will be able to find the patterns in all their complexity, and with all the right exceptions to them.

When it comes to legal references, there is no reason why legal cases have the names that they do:  whatever name a legal case has is essentially a random accident of history.  Not to delve too deep into a field I have no expertise in, but as far as I know, there is no obvious reason why Ernesto Miranda had to be the guy whose case gave rise to the rights named after him:  somebody had to win the lottery to be immortalized in such a way, and he happened to be the guy.  You can't intuitively suss out random facts:  all you can do is memorize them.  It's not impossible for large language models to learn random facts, but they need enough capacity and exposure to the right training data to learn them well enough.
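
A toy illustration of that point, using only the Python standard library and a handful of real case names as stand-in training data: even a trivial character-level model picks up the surface pattern of a case name well enough to generate plausible-looking names that do not exist.

import random
from collections import defaultdict

# A handful of real case names used as stand-in training data.
cases = [
    "Miranda v. Arizona",
    "Brown v. Board of Education",
    "Roe v. Wade",
    "Marbury v. Madison",
    "Gideon v. Wainwright",
]

# Record which character tends to follow each pair of characters.
followers = defaultdict(list)
for name in cases:
    padded = "^^" + name + "$"
    for i in range(len(padded) - 2):
        followers[padded[i:i + 2]].append(padded[i + 2])

def generate(seed: int) -> str:
    """Sample a 'case name' one character at a time from the learned pattern."""
    random.seed(seed)
    out = "^^"
    while not out.endswith("$") and len(out) < 60:
        out += random.choice(followers[out[-2:]])
    return out.strip("^$")

print([generate(s) for s in range(5)])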

DGuller

By the way, I think this is a good discussion that deserves its own thread.  Can this tangent be split off?

crazy canuck

OK, you must be playing a joke, because your response was nonsensical. For sure this is something ChatGPT created.

Case names are critically important; they are not random.

DGuller

Quote from: crazy canuck on July 10, 2023, 09:31:51 PM
OK, you must be playing a joke, because your response was nonsensical. For sure this is something ChatGPT created.

Case names are critically important; they are not random.
What a nice feeling to spend a lot of time thinking and crafting a reply, only to realize that you were replying to a setup for a "ChatGPT must've written that" one-liner.

I don't know why I expected something different, but it's a gut punch, I'm not going to lie.  Hopefully it's going to be the last gut punch I'll allow myself to take.  What an utterly worthless poster you are.

crazy canuck

You are the one who claimed case names are random, in an attempt to avoid the fact that ChatGPT just makes up the names.

Then when called on making such an absurd claim, you resort to yet another personal attack.

If that really was your idea to claim that case names were of no import, you might want to ponder just how ridiculous that was before lashing out.