Electric cars

Started by Threviel, October 31, 2021, 01:18:25 AM


DGuller

Quote from: crazy canuck on July 07, 2023, 02:40:17 PM
Quote from: DGuller on July 07, 2023, 02:14:43 PM
Quote from: crazy canuck on July 07, 2023, 02:03:34 PMIt's not a failure of critical thinking.  It is a failure of having knowledge.  If CHAT spit out a fictitious case citation and the legal principle for which it stands, you would have no idea that was wrong.  Despite your undoubted prowess at critical thought.
It's definitely a failure of critical thinking.  I'm sure there is a way to search for the actual cases being cited, which a person with even moderate critical thinking would do before submitting a brief.  Knowing what facts to check and when is definitely part of critical thinking.

You say definitely in circumstances when your view is not beyond doubt.  Are you using CHAT to script your languish posts?  :P


Dude, you just totally confidently hallucinated a page ago about Formula E, and that was despite having a very easy way to verify what you thought was true.  Maybe this is not the best time to make such lame attempts at such lame jokes?

Jacob

Quote from: DGuller on July 07, 2023, 01:39:18 PMI'm stating an objective that I think applies to all humans.  If an objective applies to all humans, then there is nothing conditional (i.e. depending on) about the advice being bad, it's bad for everyone.

But it does not apply to all humans, as you yourself said "it gives new modes of failure to people who lack critical thinking." That's a non-zero number of people, and potentially a non-trivial number of people.

QuoteIt doesn't have to be superior to be useful, it merely has to be adding something.  Using ChatGPT does not prevent you from using other forms of accessing knowledge, but I argue it acts as a force multiplier on them.

I'm happy to concede that ChatGPT can be a force multiplier in any number of use cases, and for a number of people. But not for all use cases (you can use force multipliers to increase the impact of your mistakes and lapses, for example), and not if used incorrectly (because people lack the correct mindset/knowledge base to use it correctly).

QuoteI'll turn it around and ask:  so what if from time to time it's going to tell you something false?

If you treat it as a truthful oracle - and that does seem to be the case in any number of situations - then it can lead you astray. The consequences of being led astray will vary based on what is being divined (and on the circumstances).

QuoteWe learn wrong facts all the time, did we double check everything before ChatGPT?  You can always double check what ChatGPT is saying, what is much harder is being introduced to something to begin with that you would then be double-checking.

Your use case is pretty good, no doubt about it. Anyone with similar objectives and levels of critical thinking skills and/or relevant knowledge certainly can benefit from using ChatGPT.

DGuller

Okay, I'll concede that one.  If you're lacking in critical thinking skills and you're happy to continue lacking them, ChatGPT in your hands would probably be a dangerous weapon.  I do still think that the real problem is with the lack of critical thinking skills, though, and I also think that it must've been a substantial problem for them even before ChatGPT came along.

crazy canuck

Quote from: DGuller on July 07, 2023, 02:47:25 PM
Quote from: crazy canuck on July 07, 2023, 02:38:21 PMNo, it does not simulate knowing the correct answer.  There is no part of what it does that checks accuracy.

It  just makes stuff up.
The concept of any statistical model is that it will give an accurate enough answer if it captures the patterns with low enough error rate.  ChatGPT is a very complex statistical model, but it is still a statistical model.  Conceptually, if it does a good enough job capturing the patterns in human knowledge, it will give an accurate enough answer.  Hallucination happens because during model training it didn't identify sufficient exceptions to the pattern that it picked up.
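To make that concrete, here's a toy sketch of the idea (a deliberately tiny bigram model, nothing like GPT's actual architecture, and the corpus and output are made up purely for illustration). It only learns which word tends to follow which, so it can produce fluent, pattern-consistent output that happens to be false:

import random
from collections import defaultdict

# Toy bigram "language model": it only learns which word tends to follow which.
corpus = ("the capital of france is paris . "
          "the capital of italy is rome . "
          "the capital of spain is madrid .").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=8):
    word, out = start, [start]
    for _ in range(n):
        if word not in follows:
            break
        word = random.choice(follows[word])  # pick a statistically plausible next word
        out.append(word)
    return " ".join(out)

print(generate("the"))
# Possible output: "the capital of france is rome ." -- fluent and wrong.
# Nothing in the model checks facts; it only reproduces the patterns it saw,
# so "hallucinations" fall out naturally wherever the patterns overlap.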

I don't think you are right about that. Up thread, I posted a link to a podcast in which Runciman was interviewing two experts in AI. They described what it does very differently, and their explanation actually provides a more compelling reason why it just makes shit up.

crazy canuck

Quote from: DGuller on July 07, 2023, 02:50:46 PM
Quote from: crazy canuck on July 07, 2023, 02:40:17 PM
Quote from: DGuller on July 07, 2023, 02:14:43 PM
Quote from: crazy canuck on July 07, 2023, 02:03:34 PMIt's not a failure of critical thinking.  It is a failure of having knowledge.  If CHAT spit out a fictitious case citation and the legal principle for which it stands, you would have no idea that was wrong.  Despite your undoubted prowess at critical thought.
It's definitely a failure of critical thinking.  I'm sure there is a way to search for the actual cases being cited, which a person with even moderate critical thinking would do before submitting a brief.  Knowing what facts to check and when is definitely part of critical thinking.

You say definitely in circumstances when your view is not beyond doubt.  Are you using CHAT to script your languish posts?  :P


Dude, you just totally confidently hallucinated a page ago about Formula E, and that was despite having a very easy way to verify what you thought was true.  Maybe this is not the best time to make such lame attempts at such lame jokes?

Please tell me once you have actually watched a Formula E race.  Then after you have done that, please explain to me why you think there is any difference between the cars. If you actually watch a Formula E race, you will see that there is really no measurable difference in performance amongst the cars. Then watch a Formula One race and you will see that there are significant performance differences between the cars.

The problem with having discussions with you is that you take a little bit of knowledge gleaned from a very cursory scan of the Internet, and then you pretend to be an expert. It's very difficult to have a civil discussion with you.

Jacob

#410
Quote from: DGuller on July 07, 2023, 04:36:00 PMOkay, I'll concede that one.  If you're lacking in critical thinking skills and you're happy to continue lacking them, ChatGPT in your hands would probably be a dangerous weapon.  I do still think that the real problem is with the lack of critical thinking skills, though, and I also think that it must've been a substantial problem for them even before ChatGPT came along.

:cheers:

To try to push it just a little bit further... the question for any given use case is whether the effort of applying critical thinking skills or doing due diligence where facts are important is worth it given the additional value that ChatGPT brings to the table.

You've given some good examples of where ChatGPT is a massive force multiplier in your own work and the work of your colleagues.

Now, if you're using ChatGPT to answer your comp. sci. assignments and exams maybe it's not as good an idea. I mean, you can make the argument that "you'll always have access to ChatGPT, so why not use the available tools to be more efficient." But at the same time, if using ChatGPT retards the process of you developing the domain specific critical thinking skills and expertise to interrogate ChatGPT results appropriately (which is the point of a comp. sci. education), maybe it's counterproductive.

And personally I see almost no use for ChatGPT in my own direct work* - it's all highly context sensitive, risk assessment and mitigation, planning, and interpersonal. While I can probably find places to crowbar in ChatGPT usage (if I put aside risks associated with proprietary or personally identifiable information), the work to craft the inputs or massage the output is greater than or equal to the work potentially saved.

Which takes me back to my original argument - that whether "stop using ChatGPT" is good or bad advice depends on your objective.


*there're likely good applications for ChatGPT-like AI in game dev all up, but it's going to take a bit IMO before they're fruitful and somewhat longer before they're widely applicable as per my post on the topic earlier (in the games forum maybe?).

mongers

I considered getting an electric car, but couldn't justify the environmental impact.
"We have it in our power to begin the world over again"

DGuller

Quote from: crazy canuck on July 07, 2023, 05:48:44 PMI don't think you are right about that. Up thread, I posted a link to a podcast in which Runciman was interviewing two experts in AI. They described what it does very differently, and their explanation actually provides a more compelling reason why it just makes shit up.
I strongly suspect that their description wasn't actually very different, but just a different way of saying what I said.  I wouldn't expect you to be able to recognize that, as this isn't your domain of expertise.

DGuller

Quote from: crazy canuck on July 07, 2023, 05:52:11 PMThe problem with having discussions with you is that you take a little bit of knowledge gleaned from a very cursory scan of the Internet, and then you pretend to be an expert.
I'm not an expert on Formula E, and never pretended to be one.  What little I did know was enough to make me strongly suspect that you were dead wrong about what you said.  Of course, I did understand that I was not an expert in Formula E, so I did what any person with critical thinking skills would do:  double check my knowledge and recollection on the Internet before writing anything.

Don't you think that it can also be a little problematic to discuss things with a person who, with very smug, patronizing confidence, says things that are just dead wrong?  It happens to everyone, although it happens much more often to smug, patronizing people, but what really doesn't happen to anyone I know other than you is pivoting to saying some other thing with smug, patronizing confidence and again getting it dead wrong.  And then pivoting again and again getting it dead wrong.

At least when it comes to machine learning, when the model gets a training example wrong, it learns from it, and the more confident it was in its wrong prediction, the more it learns from it.  It doesn't change the subject and pretend that it was actually fitting to a different target variable, to stretch the analogy.
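To put a number on "the more confident it was, the more it learns from it", here's a minimal sketch using ordinary cross-entropy loss (the probabilities below are made up for illustration, not from any real model):

import math

def cross_entropy(p_correct):
    # Loss when the model assigned probability p_correct to the true answer.
    return -math.log(p_correct)

# Wrong but unsure: the true answer only got 40% probability.
print(round(cross_entropy(0.40), 3))  # ~0.916

# Wrong and very confident: the true answer only got 1% probability.
print(round(cross_entropy(0.01), 3))  # ~4.605

# The confidently-wrong prediction produces a much larger loss, so the
# training update that corrects it is correspondingly larger.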

Razgovory

Quote from: DGuller on July 07, 2023, 08:20:08 PM
Quote from: crazy canuck on July 07, 2023, 05:48:44 PMI don't think you are right about that. Up thread, I posted a link to a podcast in which Runciman was interviewing two experts in AI. They described what it does very differently, and their explanation actually provides a more compelling reason why it just makes shit up.
I strongly suspect that their description wasn't actually very different, but just a different way of saying what I said.  I wouldn't expect you to be able to recognize that, as this isn't your domain of expertise.
Ooooh, sick burn.
I've given it serious thought. I must scorn the ways of my family, and seek a Japanese woman to yield me my progeny. He shall live in the lands of the east, and be well tutored in his sacred trust to weave the best traditions of Japan and the Sacred South together, until such time as he (or, indeed his house, which will periodically require infusion of both Southern and Japanese bloodlines of note) can deliver to the South it's independence, either in this world or in space.  -Lettow April of 2011

Raz is right. -MadImmortalMan March of 2017

DGuller

Quote from: Jacob on July 07, 2023, 06:18:33 PMNow, if you're using ChatGPT to answer your comp. sci. assignments and exams maybe it's not as good an idea. I mean, you can make the argument that "you'll always have access to ChatGPT, so why not use the available tools to be more efficient." But at the same time, if using ChatGPT retards the process of you developing the domain specific critical thinking skills and expertise to interrogate ChatGPT results appropriately (which is the point of a comp. sci. education), maybe it's counterproductive.
I think it's a different instance of the old debate about the use of calculators in math classes.  One side says that calculators in math classes would make students not understand the basics, while the other side says that using calculators lets you get more quickly to learning something deeper.  I personally fell into the second camp, and in my opinion it's often easier to learn backwards:  start with the application, and then gradually increase the first-principles knowledge as it becomes necessary (and it may never become necessary).  That said, reasonable people can disagree on this.

QuoteAnd personally I see almost no use for ChatGPT in my own direct work* - it's all highly context sensitive, risk assessment and mitigation, planning, and interpersonal. While I can probably find places to crowbar in ChatGPT usage (if I put aside risks associated with proprietary or personal identifiable information), the work to craft the inputs or massage the output is greater than or equal to the work potentially saved.
I can't talk with any confidence about your work, for obvious reasons, but I would put out there the possibility that you may not be thinking of all the ways you can make use of ChatGPT.  Every job that I know of has a whole bunch of micro-tasks, sometimes only tangentially related, in addition to the big-ticket ones.  Maybe ChatGPT can't do A-Z for you, but maybe when you ask it for ideas on how to do D, F, P, and Q, you'll find out ways to do them that you never suspected existed and improve your quality of life tremendously.  You do have to have that intuition that surely there must be a better way to do D, F, P, or Q, and not just assume that plowing through them as you always do is the best way, but I think people who have the mindset to look for better ways to do things have that intuition well-developed.

DGuller

Quote from: Razgovory on July 07, 2023, 08:58:38 PM
Quote from: DGuller on July 07, 2023, 08:20:08 PM
Quote from: crazy canuck on July 07, 2023, 05:48:44 PMI don't think you are right about that. Up thread, I posted a link to a podcast in which Runciman was interviewing two experts in AI. They described what it does very differently, and their explanation actually provides a more compelling reason why it just makes shit up.
I strongly suspect that their description wasn't actually very different, but just a different way of saying what I said.  I wouldn't expect you to be able to recognize that, as this isn't your domain of expertise.
Ooooh, sick burn.
That wasn't meant to be a burn.  It's legitimately hard to understand when someone is saying equivalent things in a field where you're not an expert.  It doesn't mean that you shouldn't be discussing them, it would be a very boring place if we all just stuck to discussing our areas of expertise, but it would help to say things in a way that allows for the possibility that the experts in the field might know something you don't.

Grey Fox

Quote from: Josquius on July 07, 2023, 09:29:48 AMWhen asking ChatGPT for factual stuff it overwhelmingly lies with confidence, I find.


Quote from: Grey Fox on July 07, 2023, 08:22:15 AM
Quote from: Josquius on July 07, 2023, 07:37:44 AM
Quote from: Grey Fox on July 06, 2023, 04:33:11 PM
Quote from: Josquius on July 06, 2023, 03:50:40 PMLooks like finally going hybrid is too expensive for the moment. I don't want to wipe out my bank account paying 30k for one. I just don't get how these new car sellers are in business. Who has that much for a bloody car? I'm probably in the top 20% of income and lol no.

Looking like a Citroën C5 is what we are getting.

Loans? That's how America does it.

Maybe. There are an awful lot of people living in debt in this country.

Having no debt means you are not leveraging properly tho.
I don't understand.
Why is it better to pay x a week at 10% interest than to just pay Y now?
If it's some kind of deal where the interest rate is lower than what you get from the bank I can see it, but not how paying more for a fancy car helps.

Because you don't have Y?
Colonel Caliga is Awesome.

Josquius

#418
Quote from: Grey Fox on July 07, 2023, 09:12:34 PM

Because you don't have Y?

I don't have 44 grand for a shiny new car.
I do have <20 grand for something a few years old.
I get that not everyone is so capable of saving and emergencies do happen. I once had to loan my parents 14k as they needed a car at short notice.
But then why go for the luxury new car rather than borrowing the smaller amount if that's all you can afford? If it's not an amount you could conceivably save up in a reasonable amount of time, then taking out a loan seems unwise.
██████
██████
██████

Admiral Yi

Quote from: Josquius on July 08, 2023, 12:22:15 AMIf it's not an amount you could conceivably save up in a reasonable amount of time, then taking out a loan seems unwise.

That's the whole point.  You finance future consumption out of future income.  If you hold off on a purchase because you don't want to finance it, you are delaying gratification and discounting it at a rate greater than the one you were charged.
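With made-up numbers (not anyone's actual price or loan terms), the trade-off looks like this:

price = 30_000      # cost of the car today (made-up figure)
loan_rate = 0.10    # annual interest on the financing (made-up figure)

# Finance now: pay the interest, get the car a year earlier.
interest_paid = price * loan_rate          # 3,000

print(f"A year of having the car now costs {interest_paid:,.0f} in interest.")
# Waiting only makes sense if a year's use of the car is worth less to you
# than that 3,000 -- i.e. if you discount it more steeply than the 10%
# the lender charges.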