The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM

Sheilbh

I feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.
Let's bomb Russia!

celedhring

Quote from: Sheilbh on November 30, 2023, 06:30:59 AM
I feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.

GenAI is a subset of machine learning...

Sheilbh

Oops, this may be shorthand from work that is basically wrong :lol:

I mean the stuff where it works is the spotting-things side, where there are clearly good uses now, versus the side where it produces something, where I've not really seen a good use. Although it's really good at summarising.
Let's bomb Russia!

Josquius

Quote from: Sheilbh on November 30, 2023, 06:30:59 AM
I feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.

Not tried it yet or seen a like-for-like analysis of it, but I saw one interesting tool for summarising large numbers of academic papers - to find the number of times certain things get mentioned, the vibes on the consensus on issues, and so on.
██████
██████
██████

celedhring

Quote from: Sheilbh on November 30, 2023, 07:11:18 AM
Oops, this may be shorthand from work that is basically wrong :lol:

I mean the stuff where it works is the spotting-things side, where there are clearly good uses now, versus the side where it produces something, where I've not really seen a good use. Although it's really good at summarising.

I mean, all current major GenAI models have been trained using machine learning (deep learning). I know there's some experimental stuff with symbolic AI, but that's not GPT.

What GPT does is use training data to predict an adequate output (i.e. text) for a provided input (the prompt). That's what machine learning is for.
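For concreteness, here's a minimal sketch of that prompt-in, predicted-text-out loop using the small, openly available GPT-2 model through Hugging Face's transformers library - just an illustration of the idea, not how ChatGPT itself is served:

```python
# pip install transformers torch
from transformers import pipeline

# GPT-2 is a small GPT-style model trained to predict the next token
# given the tokens seen so far.
generator = pipeline("text-generation", model="gpt2")

prompt = "Machine learning is"
# Generation just applies that next-token prediction repeatedly,
# turning the input prompt into a predicted continuation.
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```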

DGuller

Quote from: Sheilbh on November 30, 2023, 07:11:18 AM
Oops, this may be shorthand from work that is basically wrong :lol:

I mean the stuff where it works is the spotting-things side, where there are clearly good uses now, versus the side where it produces something, where I've not really seen a good use. Although it's really good at summarising.
GenAI gives you knowledge at your fingertips.  My work as a data scientist consists of solving mini problems every day.  For example, I need to create a professional-looking plot; I can Google my questions one at a time, sometimes spending half an hour filtering through the results until I find exactly what I need, or I can explain to ChatGPT what I'm trying to achieve, and it'll get me there right away.  It's like having an executive assistant multiplying your productivity, except I don't need to work my way up to the C-suite before I get one.
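To give an idea of the kind of answer it hands back, here's roughly the sort of snippet such a conversation ends with (the data and styling below are made up for illustration, not the actual answer I got):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in data for whatever is actually being plotted.
x = np.linspace(0, 12, 200)
y = np.exp(-x / 4) * np.cos(2 * x)

fig, ax = plt.subplots(figsize=(8, 4.5))
ax.plot(x, y, linewidth=2, label="response")
ax.axhline(0, color="grey", linewidth=0.8)

# The "professional-looking" part: clear labels, a title, a legend,
# light gridlines, and no chart junk.
ax.set_xlabel("Time (s)")
ax.set_ylabel("Amplitude")
ax.set_title("Damped response over time")
ax.legend(frameon=False)
ax.grid(alpha=0.3)
for side in ("top", "right"):
    ax.spines[side].set_visible(False)

fig.tight_layout()
plt.show()
```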

It can do something much more complicated than this, though:  I had a crazy algorithm idea I wanted to try out, but for that I needed to write a custom loss function for the beta distribution.  Everyone knows that to do that, you have to supply the analytical expressions for the gradient and Hessian of the loss with respect to the parameter you want to optimize.  I could do the research or the math myself, but that would take time, and the train of thought that got me there in the first place might leave me by the time I'm done with just the first step of the experiment.  Or I would decide it's too time-consuming a thing to do for a moonshot, and just skip the experiment altogether.
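For anyone curious, here's a sketch of what that ends up looking like as a custom objective for a gradient-boosting library. The parameterisation below (a sigmoid link for the beta mean with a fixed precision phi) is just one illustrative way to set it up, not necessarily what my actual experiment used:

```python
import numpy as np
from scipy.special import digamma, polygamma

def beta_nll_grad_hess(y, raw_score, phi=10.0):
    """Analytic gradient and Hessian of the beta-distribution negative
    log-likelihood with respect to the raw score z, where
    mu = sigmoid(z), alpha = mu * phi, beta = (1 - mu) * phi,
    and phi is a fixed precision.  Targets y must lie strictly in (0, 1)."""
    mu = 1.0 / (1.0 + np.exp(-raw_score))            # sigmoid link
    alpha, beta = mu * phi, (1.0 - mu) * phi
    s1 = mu * (1.0 - mu)                             # d mu / d z

    # d NLL / d mu = phi * (digamma(alpha) - digamma(beta) - log(y / (1 - y)))
    g_mu = phi * (digamma(alpha) - digamma(beta) - np.log(y / (1.0 - y)))
    grad = s1 * g_mu                                 # chain rule through the link

    # Exact second derivative w.r.t. z (product rule on s1 * g_mu):
    #   d s1 / d z   = (1 - 2 mu) * s1
    #   d g_mu / d z = phi**2 * s1 * (trigamma(alpha) + trigamma(beta))
    trig = polygamma(1, alpha) + polygamma(1, beta)
    hess = (1.0 - 2.0 * mu) * s1 * g_mu + (phi * s1) ** 2 * trig
    return grad, hess

# Hypothetical hook-up to xgboost's custom-objective interface:
# def beta_obj(preds, dtrain):
#     return beta_nll_grad_hess(dtrain.get_label(), preds)
# booster = xgboost.train(params, dtrain, obj=beta_obj)
```

In practice you might replace the exact Hessian with its expectation (the Fisher information), which here is just the (phi * s1)**2 * trig term and has the advantage of always being positive.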

Low latency between having a question and getting an answer is crucial for effective iterative problem solving, and that's where GenAI, still merely in its infancy, is already having a big impact.

HVC

How do you fight against it giving you bullshit answers, though? Doesn't double-checking the answers take as much time as searching yourself?
Being lazy is bad; unless you still get what you want, then it's called "patience".
Hubris must be punished. Severely.

DGuller

Quote from: celedhring on November 30, 2023, 08:29:18 AM
Quote from: Sheilbh on November 30, 2023, 07:11:18 AM
Oops, this may be shorthand from work that is basically wrong :lol:

I mean the stuff where it works is the spotting-things side, where there are clearly good uses now, versus the side where it produces something, where I've not really seen a good use. Although it's really good at summarising.

I mean, all current major GenAI models have been trained using machine learning (deep learning). I know there's some experimental stuff with symbolic AI, but that's not GPT.

What GPT does is use training data to predict an adequate output (i.e. text) for a provided input (the prompt). That's what machine learning is for.
I personally use machine learning and deep learning as separate things, as a shorthand, and lately I also separate out GenAI from deep learning.  It is 100% true that deep learning and GenAI are also machine learning, in the technical sense, but then it becomes a term so all-encompassing that it impedes effective communication.  Humans are animals too, but if you want to discuss agriculture, it would probably be confusing to refer to both cattle and farmers as animals.  There is a world of difference between gradient-boosted trees and a deep neural network.

Sheilbh

Fair - I take it back. I have heard that from a research scientist as well and they just used the publicly available ChatGPT.
Let's bomb Russia!

DGuller

Quote from: HVC on November 30, 2023, 08:52:31 AM
How do you fight against it giving you bullshit answers, though? Doesn't double-checking the answers take as much time as searching yourself?
Knowing what's important to check and what isn't is an important management skill that you pick up with expertise and practice.  Some results are also self-evidently correct or incorrect: I know what I want the plot to look like, so I know whether I got it or not.  Apart from that, I have to check my own work and research as well; I'm not infallible either.  With ChatGPT, I just get to the checking stage faster.

Another thing to consider is that many real-life problems are sort of like NP problems in computer science: it's difficult to get to an answer, but it's easy to confirm that an answer someone else supplied is correct.  If I give you a 20-digit number and ask you which two 10-digit numbers multiply to give it, that can be very difficult to do.  However, if you give me a solution, even a grade-school student can confirm that the two numbers you supplied do indeed multiply back to the original number.
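In code, the checking side of that asymmetry is a one-liner, no matter how much work the factoring side took (the numbers below are made up for illustration):

```python
n = 12193263122374638001       # a 20-digit number
p, q = 1234567891, 9876543211  # two 10-digit numbers claimed to be its factors

# Finding p and q from n alone is hard; checking the claim is grade-school arithmetic.
print(p * q == n)              # True
```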

HVC

Quote from: DGuller on November 30, 2023, 09:05:41 AM
Quote from: HVC on November 30, 2023, 08:52:31 AM
How do you fight against it giving you bullshit answers, though? Doesn't double-checking the answers take as much time as searching yourself?
Knowing what's important to check and what isn't is an important management skill that you pick up with expertise and practice.  Some results are also self-evidently correct or incorrect: I know what I want the plot to look like, so I know whether I got it or not.  Apart from that, I have to check my own work and research as well; I'm not infallible either.  With ChatGPT, I just get to the checking stage faster.

Another thing to consider is that many real-life problems are sort of like NP problems in computer science: it's difficult to get to an answer, but it's easy to confirm that an answer someone else supplied is correct.  If I give you a 20-digit number and ask you which two 10-digit numbers multiply to give it, that can be very difficult to do.  However, if you give me a solution, even a grade-school student can confirm that the two numbers you supplied do indeed multiply back to the original number.

And they say statistics isn't biased :P

Kidding, thanks for the explanation
Being lazy is bad; unless you still get what you want, then it's called "patience".
Hubris must be punished. Severely.

Grey Fox

That's a much more interesting use case for AIs like ChatGPT than bullshit ad-driven content.

Colonel Caliga is Awesome.

Iormlund

We've been using AI-driven tools for a while - for example, for QA (is this weld OK?).
They still have problems, but then so do humans (people get tired, do drugs, or simply don't give a fuck).


I can't use an LLM for my work yet, but I can see ways to improve productivity by at least 40% if/when I can.


Grey Fox

Not really, no? It seems to just keep on generating new ways of saying no.
Colonel Caliga is Awesome.