The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Baron von Schtinkenbutt

Quote from: crazy canuck on January 28, 2025, 02:15:18 PM
What AI is trained to tell the truth?

In theory, most LLMs are trained to tell the generally-accepted truth.  The guardrails on models like GPT-4 and Gemini are supposed to simply have the model refuse to answer certain questions, not to lie about them.  In practice, it depends on how much you trust the teams that trained these models and whether, through less overt means than DeepSeek, those teams intentionally biased the training set in order to generate only certain types of "correct" answers.

That is, of course, ignoring the controversy about whether or not LLMs are really just highly-sophisticated information retrieval and summarization engines.  If viewed that way, the model can't "lie" as it can only present what it was "told".

DGuller

Quote from: Sheilbh on January 28, 2025, 01:36:22 PM
I'm struck at how quickly a workaround was found - I don't doubt that Chinese netizens are as creative - and I'd not thought of it but it strikes me that GenAI is going to be a fascinating challenge for the Chinese state/great firewall. You can't necessarily recreate how it'll respond - so I assume they'll have to impose some sort of filter on the output?

Especially as, looking at the less processor-intensive solutions like this, my understanding is the small models are trained on big ones - in this case including Facebook's Llama, so it's not being trained on the "Chinese" internet.

I wouldn't bet against them but interesting to see how they deal with that challenge.
These giant leaps forward can happen to technology that's in its infancy.  Sometimes the right insight can trump billions of dollars in investment, but only when everyone is still learning.

crazy canuck

Quote from: Baron von Schtinkenbutt on January 28, 2025, 02:49:07 PM
Quote from: crazy canuck on January 28, 2025, 02:15:18 PM
What AI is trained to tell the truth?

In theory, most LLMs are trained to tell the generally-accepted truth.  The guardrails on models like GPT-4 and Gemini are supposed to simply have the model refuse to answer certain questions, not to lie about them.  In practice, it depends on how much you trust the teams that trained these models and whether, through less overt means than DeepSeek, those teams intentionally biased the training set in order to generate only certain types of "correct" answers.

That is, of course, ignoring the controversy about whether or not LLMs are really just highly-sophisticated information retrieval and summarization engines.  If viewed that way, the model can't "lie" as it can only present what it was "told".

I disagree. Generative AI has nothing to do with fact checking or truth telling.  That is why it throws up so many phantom facts.

Your point, with which I do not disagree, is that the Chinese AI has been purposefully trained to omit some descriptions of events, but that has nothing to do with AI being truthful.  No AI is, because no AI is designed to ensure that what it provides as output has any reliability.  The real harm of generative AI is people thinking it is producing output that is reliable.

And that is why I don't find the Chinese version objectionable; it's just more stuff people should be ignoring.

Sheilbh

Quote from: Baron von Schtinkenbutt on January 28, 2025, 02:49:07 PM
In theory, most LLMs are trained to tell the generally-accepted truth.  The guardrails on models like GPT-4 and Gemini are supposed to simply have the model refuse to answer certain questions, not to lie about them.  In practice, it depends on how much you trust the teams that trained these models and whether, through less overt means than DeepSeek, those teams intentionally biased the training set in order to generate only certain types of "correct" answers.

That is, of course, ignoring the controversy about whether or not LLMs are really just highly-sophisticated information retrieval and summarization engines.  If viewed that way, the model can't "lie" as it can only present what it was "told".
Is that right though? My understanding was that these models don't really have any concept of "truth" because they're not looking at units of meaning; rather, they're operating by predicting the next word. For example, the classic "list countries including the letter k" question.

I mention it because, to me, the "lie" is less the hallucination from the LLM (that's a function of what it's doing, which will sometimes be wrong) than, perhaps, the confidence :lol:
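The "letter k" failure is usually attributed to tokenization: the model never sees individual characters, only opaque subword token IDs, so a letter can be invisible to it. A minimal sketch of the idea, using a tiny made-up vocabulary (real tokenizers like BPE learn their vocabularies from data; this is purely illustrative):

```python
# Toy illustration of why letter-level questions are hard for LLMs:
# the model operates on subword token IDs, not characters.
toy_vocab = {"Den": 101, "mark": 102, "Ka": 201, "zakh": 202, "stan": 203}

def tokenize(word, vocab):
    """Greedy longest-match tokenizer over a tiny hypothetical vocabulary."""
    ids, rest = [], word
    while rest:
        for piece in sorted(vocab, key=len, reverse=True):
            if rest.startswith(piece):
                ids.append(vocab[piece])
                rest = rest[len(piece):]
                break
        else:
            raise ValueError(f"no token for {rest!r}")
    return ids

print(tokenize("Denmark", toy_vocab))     # [101, 102]
print(tokenize("Kazakhstan", toy_vocab))  # [201, 202, 203]
```

From the model's point of view, "Kazakhstan" is just the sequence 201, 202, 203 - whether any of those IDs "contains a k" is not directly represented anywhere in its input.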

Quote
These giant leaps forward can happen to technology that's in its infancy.  Sometimes the right insight can trump billions of dollars in investment, but only when everyone is still learning.
Yeah, and the thing I find really striking/interesting is that arguably DeepSeek's impressive results are a result of the constraints on Chinese AI development produced by US restrictions on advanced technologies (particularly chips).

Which, looking at the results of a quick workaround on the globally released version, makes you wonder what innovations will come from Chinese users as a result of the constraints on their use of AI/AI output. It doesn't feel entirely clear to me how either of those will work out at this stage.
Let's bomb Russia!

mongers

Quote from: Sheilbh on January 28, 2025, 04:31:04 PM
...snip....

Quote
These giant leaps forward can happen to technology that's in its infancy.  Sometimes the right insight can trump billions of dollars in investment, but only when everyone is still learning.
Yeah, and the thing I find really striking/interesting is that arguably DeepSeek's impressive results are a result of the constraints on Chinese AI development produced by US restrictions on advanced technologies (particularly chips).

Which, looking at the results of a quick workaround on the globally released version, makes you wonder what innovations will come from Chinese users as a result of the constraints on their use of AI/AI output. It doesn't feel entirely clear to me how either of those will work out at this stage.

Not dissimilar to the experience of computer programmers/engineers in the East Bloc during the 1980s.
"We have it in our power to begin the world over again"

DGuller

Quote from: Sheilbh on January 28, 2025, 04:31:04 PM
Is that right though? My understanding was that these models don't really have any concept of "truth" because they're not looking at units of meaning; rather, they're operating by predicting the next word. For example, the classic "list countries including the letter k" question.
I think people are not seeing the forest for the trees with "predicting the next word".  What makes these models predict the next words with a large degree of coherence is being able to conceptualize the meaning of words, in their own way.

The idea behind predictive modeling, even going back to much simpler times of small statistical models, is that if you're good enough at predicting what's going to happen, then you must have at least implicitly distilled some understanding of why things happen.  With LLMs, if you're good enough at predicting the next word, then you must have reverse-engineered at least some of the thought behind the words.
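The point can be made concrete with the crudest possible next-word predictor: a bigram counter over a toy corpus (entirely illustrative, and nothing like a real LLM in scale). Even this sketch "distills" some word-order structure purely by learning to predict the next word:

```python
from collections import Counter, defaultdict

# Minimal bigram "language model": predict the next word as the one most
# often seen after the current word in the training text.
corpus = "the model predicts the next word and the next word follows".split()

counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    return counts[word].most_common(1)[0][0] if counts[word] else None

print(predict_next("the"))   # "next" -- the counts encode word order
print(predict_next("next"))  # "word"
```

An LLM replaces the raw counts with a learned representation over billions of parameters, which is exactly where the "implicit understanding" argument comes in.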

Zoupa

I wish AI could help with doing my laundry and dishes, instead of accelerating climate change, taking my job and giving me shitty fake images.

I'm glad not to have children.

Valmy

Quote from: Zoupa on January 28, 2025, 07:02:44 PM
I wish AI could help with doing my laundry and dishes, instead of accelerating climate change, taking my job and giving me shitty fake images.

Don't forget spreading fake facts.

Yeah, leaving humans to do the grunt work and the computers to do the fun stuff was not what I asked for. But the grunt work costs less, so I should have known what capitalism would want to automate first. Once they force us all to be manual laborers, then they will replace us with robots. Then finally we can reach the place where the people in power don't need us anymore.
Quote
"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

Admiral Yi

This place feels more like Twitter every day.

Valmy

Quote from: Admiral Yi on January 28, 2025, 09:24:39 PM
This place feels more like Twitter every day.

Sorry. Just a little black humor there.

But still, I wanted AI ditch-diggers, not AI art.

Valmy

Quote from: Admiral Yi on January 27, 2025, 05:22:41 PM
Buying opportunity.

The stock is recovering pretty fast, so I hope you took advantage.

Admiral Yi

Quote from: Valmy on January 28, 2025, 10:51:04 PM
The stock is recovering pretty fast, so I hope you took advantage.

I have a 134 put that expires 2/7.

Josquius

Have to say with this Deepseek stuff it is quite funny to see the American AI companies crying foul about China stealing from them.


Quote from: Zoupa on January 28, 2025, 07:02:44 PM
I wish AI could help with doing my laundry and dishes, instead of accelerating climate change, taking my job and giving me shitty fake images.

One man's fun/source of income is another man's horrid dish-washing drudge task.
For instance, the author who just wants to write and sees cover art as an annoying task at the end.

Quote
I'm glad not to have children.
The years to come do not look pleasant, indeed. Though good people have to keep trying, as hard as it can be to even think of it.
I'm hoping this unshakable belief in the holy AI blows over soon. It has vibes of earlier hysteria over new developments. It's not the main thing that has me concerned about the way we're headed, though it certainly isn't helping.

Tamas

Here is a desperate but, I think, feasible attempt at positivity: AI mass-producing fake news, images, etc. will mean the death of social-media-based news consumption, and it will push toward the re-establishment of respect and demand for centralised, dependable news sources.

Admiral Yi

Quote from: Josquius on January 29, 2025, 06:14:57 AM
Have to say with this Deepseek stuff it is quite funny to see the American AI companies crying foul about China stealing from them.

I have not had this pleasure.  Can you share a link or two with me so that I can share in the humor?