Electric cars

Started by Threviel, October 31, 2021, 01:18:25 AM

Jacob

Quote from: DGuller on July 07, 2023, 11:44:28 AMI think I know quite a bit about data science, and I have some quantifiable accomplishments to show for it.  ChatGPT is adding enough for me that I use it every day at work.  Some of my colleagues also know quite a bit about data science, and they also use ChatGPT for work. 

Maybe data scientists are just more comfortable with uncertainty, I'm not at all bothered that the answer that I get can be wrong.  It's easy to deal with wrong answers, it's very difficult to deal with stuff that you don't know that you don't know.  ChatGPT makes it infinitely easier to be introduced to concepts that will make your life much easier, but that you couldn't know about unless you were lucky or very highly educated.

Sounds like a good use case for ChatGPT then :hug:

Jacob

Quote from: OttoVonBismarck on July 07, 2023, 12:49:15 PMFWIW the "accuracy" issue around ChatGPT isn't that novel. Wikipedia--which in every real sense is an amazing human achievement of knowledge, is often mocked when people "cite it" in formal research or school papers because it...does have inaccuracies in it.

The "problem" with such inaccuracies is pretty manageable, but one reason they often rear their ugly head is a lot of unsophisticated users simply assume "if it is written in Wikipedia, it is true."

When what they should be assuming is "well, someone is saying this on Wikipedia, I should dig deeper in the article citations to confirm it unless I already know to a high reliability it is true based on other knowledge I have."

That is similar to how one should approach ChatGPT, but unsophisticated people approach neither Wikipedia nor ChatGPT with this sort of attitude, and instead simply consume it as unambiguous black letter facts.

Which...is also okay, right? Like it's unfortunate, but we've trundled along pretty far as a society with a huge % of the population dumbly believing half truths and inaccuracies in every aspect of their daily life.

That's a good point. Though with Wikipedia at least there's a link to the alleged sources so you can dig into them to assess the quality of the content. That's a bit harder with ChatGPT - though I guess you can ask for sources (though then you have the question of whether the sources actually say what ChatGPT says they say).

crazy canuck

Quote from: OttoVonBismarck on July 07, 2023, 12:49:15 PMFWIW the "accuracy" issue around ChatGPT isn't that novel. Wikipedia--which in every real sense is an amazing human achievement of knowledge, is often mocked when people "cite it" in formal research or school papers because it...does have inaccuracies in it.

The "problem" with such inaccuracies is pretty manageable, but one reason they often rear their ugly head is a lot of unsophisticated users simply assume "if it is written in Wikipedia, it is true."

When what they should be assuming is "well, someone is saying this on Wikipedia, I should dig deeper in the article citations to confirm it unless I already know to a high reliability it is true based on other knowledge I have."

That is similar to how one should approach ChatGPT, but unsophisticated people approach neither Wikipedia nor ChatGPT with this sort of attitude, and instead simply consume it as unambiguous black letter facts.

Which...is also okay, right? Like it's unfortunate, but we've trundled along pretty far as a society with a huge % of the population dumbly believing half truths and inaccuracies in every aspect of their daily life.

Yes, that is one problem, but another problem with ChatGPT, as illustrated in the case of a lawyer relying on it to write a court brief, is that it just makes things up.  Not because it is trying to be dishonest; it has no ability to be accurate.  It is literally just predicting which word should come next and stringing text together that, to the uninformed reader, sounds like it might be right.

As people might recall, in the court brief, it made up cases and principles of law that did not exist.

At least with Wiki, actual human beings are trying (presumably) to be accurate.  But they make mistakes or oversimplify.

DGuller

Quote from: OttoVonBismarck on July 07, 2023, 12:08:12 PMData scientists are probably some of the worst people to act as a representative use case for using ChatGPT precisely because they have a lot of domain expertise at dealing with unreliability.

A huge portion of people using ChatGPT assume every word it emits is 100% true, and don't understand literally anything about it whatsoever.


Even if that is the case, how is that worse than the alternative?  If you're that cartoonishly lacking in critical thinking, you're going to believe other false things as well that don't come from ChatGPT.  It's not like there is currently a reliable way for people with no critical thinking to get 100% accurate information.

OttoVonBismarck

As CC mentions, one way it is worse on that front is if it has produced incorrect caselaw (as an example) and you ask for citations...it will give them to you, literally fabricating fake case citations. Wikipedia at least doesn't "proactively" generate fake answers for you; it may have existing fake answers due to vandals or general idiocy, but it doesn't generate them on the fly.

DGuller

Quote from: Jacob on July 07, 2023, 12:46:19 PM
Quote from: DGuller on July 07, 2023, 11:36:05 AMI don't think it does depend on one's objective.

... and then you go on to state your objective.
I'm stating an objective that I think applies to all humans.  If an objective applies to all humans, then there is nothing conditional (i.e. depending on) about the advice being bad; it's bad for everyone.

QuoteWhat's the benefit?

I mean, yes having access to knowledge is probably a good in and of itself - but is the combination of convenience and precision offered by ChatGPT superior to alternate forms of accessing knowledge for any given specific use case? That's not a given.
It doesn't have to be superior to be useful, it merely has to be adding something.  Using ChatGPT does not prevent you from using other forms of accessing knowledge, but I argue it acts as a force multiplier on them.

QuoteThat gets significantly murkier if the low latency is combined with imprecision or direct factual errors, especially if you're unable to investigate and critique sources and underlying processes and/or detect and address likely inaccuracies in the output - whether because you don't have access to them, or because you lack the knowledge to do it at all.
I'll turn it around and ask:  so what if from time to time it's going to tell you something false?  We learn wrong facts all the time, did we double check everything before ChatGPT?  You can always double check what ChatGPT is saying, what is much harder is being introduced to something to begin with that you would then be double-checking.

DGuller

Quote from: OttoVonBismarck on July 07, 2023, 01:37:35 PMAs CC mentions, one way it is worse on that front is if it has produced incorrect caselaw (as an example) and you ask for citations...it will give them to you, literally fabricating fake case citations. Wikipedia at least doesn't "proactively" generate fake answers for you; it may have existing fake answers due to vandals or general idiocy, but it doesn't generate them on the fly.
Okay, granted, it gives new modes of failure to people who lack critical thinking.  That said, new modes of failure always come with new technology.  The invention of airplanes drastically increased the number of people killed in plane crashes, the invention of the computer drastically increased the number of hacks, and so on...

I think in the past it was more productive to embrace the possibilities of the new technology and work on mitigating the failure modes, either by education or by further technological iteration.

The Minsky Moment

Quote from: DGuller on July 07, 2023, 08:51:33 AMIt's not like asking a 4 year old, it's like asking a superstar colleague for ideas.

A superstar colleague who also happens to be a pathological liar and suffers from severe hallucinations.
The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.
--Joan Robinson

crazy canuck

Quote from: DGuller on July 07, 2023, 01:51:13 PM
Quote from: OttoVonBismarck on July 07, 2023, 01:37:35 PMAs CC mentions, one way it is worse on that front is if it has produced incorrect caselaw (as an example) and you ask for citations...it will give them to you, literally fabricating fake case citations. Wikipedia at least doesn't "proactively" generate fake answers for you; it may have existing fake answers due to vandals or general idiocy, but it doesn't generate them on the fly.
Okay, granted, it gives new modes of failure to people who lack critical thinking.  That said, new modes of failure always come with new technology.  The invention of airplanes drastically increased the number of people killed in plane crashes, the invention of the computer drastically increased the number of hacks, and so on...

I think in the past it was more productive to embrace the possibilities of the new technology and work on mitigating the failure modes, either by education or by further technological iteration.

It's not a failure of critical thinking.  It is a failure of having knowledge.  If CHAT spit out a fictitious case citation and the legal principle for which it stands, you would have no idea that was wrong, despite your undoubted prowess at critical thought.

The Minsky Moment

Quote from: DGuller on July 07, 2023, 01:51:13 PMOkay, granted, it gives new modes of failure to people who lack critical thinking.  That said, new modes of failure always come with new technology.  The invention of airplanes drastically increased the number of people killed in plane crashes, the invention of the computer drastically increased the number of hacks, and so on...

The analogy doesn't quite work.  I don't think airplanes would have caught on as quickly if they commonly blew up in midair despite no mechanical fault or pilot error, or if 20% of the time the airfoil caused the plane to go down instead of up despite the control surfaces being set for climbing.

Airplanes are designed to fly - they may fail to do so, but that is a problem with the execution or implementation of the design.  ChatGPT, on the other hand, is not designed to provide correct answers.  It is designed to convincingly simulate the appearance of someone claiming to have that answer.  So the analogy would hold better if airplanes were designed to simulate the appearance of the experience of flight as opposed to flight itself.
The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.
--Joan Robinson

DGuller

Quote from: crazy canuck on July 07, 2023, 02:03:34 PMIt's not a failure of critical thinking.  It is a failure of having knowledge.  If CHAT spit out a fictitious case citation and the legal principle for which it stands, you would have no idea that was wrong, despite your undoubted prowess at critical thought.
It's definitely a failure of critical thinking.  I'm sure there is a way to search for the actual cases being cited, which a person with even moderate critical thinking would do before submitting a brief.  Knowing what facts to check and when is definitely part of critical thinking.
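That kind of check can be sketched in a few lines of Python. The case list below is invented for illustration - in practice it would be a lookup against a real legal database, which is the part that actually matters:

```python
# Hypothetical index of known cases, standing in for a real legal database query.
KNOWN_CASES = {
    "Donoghue v Stevenson [1932] AC 562",
    "Carlill v Carbolic Smoke Ball Co [1893] 1 QB 256",
}

def verify_citations(citations):
    """Split model-produced citations into (verified, suspect) against a trusted index."""
    verified = [c for c in citations if c in KNOWN_CASES]
    suspect = [c for c in citations if c not in KNOWN_CASES]
    return verified, suspect

ok, suspect = verify_citations([
    "Donoghue v Stevenson [1932] AC 562",
    # the kind of fabricated citation that appeared in the brief
    "Varghese v China Southern Airlines, 925 F.3d 1339",
])
```

Anything landing in the suspect list gets looked up by hand before the brief goes anywhere near a court.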

DGuller

Quote from: The Minsky Moment on July 07, 2023, 02:12:21 PMChatGPT on the other hand is not designed to provide correct answers. It is designed to convincingly simulate the appearance of someone claiming to have that answer.
I don't think that's true at all.  It's designed to simulate a person knowing the correct answer.  The problem is that it's not always successful at that simulation, although even at this stage it's a manageable problem.

crazy canuck

Quote from: DGuller on July 07, 2023, 02:19:59 PM
Quote from: The Minsky Moment on July 07, 2023, 02:12:21 PMChatGPT on the other hand is not designed to provide correct answers. It is designed to convincingly simulate the appearance of someone claiming to have that answer.
I don't think that's true at all.  It's designed to simulate a person knowing the correct answer.  The problem is that it's not always successful at that simulation, although even at this stage it's a manageable problem.

No, it does not simulate knowing the correct answer.  There is no part of what it does that checks accuracy.

It just makes stuff up.

crazy canuck

Quote from: DGuller on July 07, 2023, 02:14:43 PM
Quote from: crazy canuck on July 07, 2023, 02:03:34 PMIts not a failure of critical thinking.  It is a failure of having knowledge.  If CHAT spit out a fictitious case citation and the legal principle for which it stands, you would have no idea that was wrong.  Despite your undoubted prowess at critical thought.
It's definitely a failure of critical thinking.  I'm sure there is a way to search for the actual cases being cited, which a person with even moderate critical thinking would do before submitting a brief.  Knowing what facts to check and when is definitely part of critical thinking.

You say "definitely" in circumstances where your view is not beyond doubt.  Are you using CHAT to script your languish posts?  :P


DGuller

Quote from: crazy canuck on July 07, 2023, 02:38:21 PMNo, it does not simulate knowing the correct answer.  There is no part of what it does that checks accuracy.

It just makes stuff up.
The concept of any statistical model is that it will give an accurate enough answer if it captures the patterns with a low enough error rate.  ChatGPT is a very complex statistical model, but it is still a statistical model.  Conceptually, if it does a good enough job capturing the patterns in human knowledge, it will give an accurate enough answer.  Hallucination happens when, during training, the model didn't identify enough exceptions to the patterns it picked up.
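The "predict the next word" picture both sides are arguing about can be sketched with a toy model. Everything below is invented for illustration - real models condition on long contexts with billions of parameters - but the sampling loop has the same shape:

```python
import random

# Toy conditional distribution P(next word | previous word), invented for illustration.
BIGRAMS = {
    "the":   {"court": 0.6, "case": 0.4},
    "court": {"held": 0.7, "found": 0.3},
    "held":  {"that": 1.0},
}

def generate(start, max_len, seed=0):
    """Repeatedly sample the next word from the learned conditional distribution."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len and out[-1] in BIGRAMS:
        dist = BIGRAMS[out[-1]]
        words = list(dist)
        out.append(rng.choices(words, weights=[dist[w] for w in words], k=1)[0])
    return " ".join(out)

text = generate("the", 5)
```

Note that nothing in the loop consults any notion of truth: the model only reproduces the patterns it captured, which is exactly why a perfectly fluent continuation can still be false.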