The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Josquius

A lot is said about the Internet dying as AI feeds off AI feeding off AI, and everything just becomes amazingly gunked up with slop...

... Could the same happen with law?

Admiral Yi

Not as long as we have human judges.

Jacob

That may depend on how many are like Emil Bove


viper37

I don't do meditation.  I drink alcohol to relax, like normal people.

If Microsoft Excel decided to stop working overnight, the world would practically end.

Syt

Rolling Stone wins the headlines.

We are born dying, but we are compelled to fancy our chances.
- hbomberguy

Proud owner of 42 Zoupa Points.

Syt

https://gizmodo.com/fdas-new-drug-approval-ai-is-generating-fake-studies-report-2000633153

Quote
FDA's New Drug Approval AI Is Generating Fake Studies: Report
The AI, dubbed Elsa, is supposed to be making employees better at their jobs.


Robert F. Kennedy Jr., the Secretary of Health and Human Services, has made a big push to get agencies like the Food and Drug Administration to use generative artificial intelligence tools. In fact, Kennedy recently told Tucker Carlson that AI will soon be used to approve new drugs "very, very quickly." But a new report from CNN confirms all our worst fears. Elsa, the FDA's AI tool, is spitting out fake studies.

CNN spoke with six current and former employees at the FDA, three of whom have used Elsa for work that they described as helpful, like creating meeting notes and summaries. But three of those FDA employees told CNN that Elsa just makes up nonexistent studies, something commonly referred to in AI as "hallucinating." The AI will also misrepresent research, according to these employees.

"Anything that you don't have time to double-check is unreliable. It hallucinates confidently," one unnamed FDA employee told CNN
.

And that's the big problem with all AI chatbots. They need to be double-checked for accuracy, often creating even more work for the human behind the computer if they care about the quality of their output at all. People who insist that AI actually saves them time are often fooling themselves, with one recent study of programmers showing that tasks took 20% longer with AI, even among people who were convinced they were more efficient.

Kennedy's Make America Healthy Again (MAHA) commission issued a report back in May that was later found to be filled with citations for fake studies. An analysis from the nonprofit news outlet NOTUS found that at least seven studies cited didn't even exist, with many more misrepresenting what was actually said in a given study. We still don't know if the commission used Elsa to generate that report.

FDA Commissioner Marty Makary initially deployed Elsa across the agency on June 2, and an internal slide leaked to Gizmodo bragged that the system was "cost-effective," only costing $12,000 in its first four weeks. Makary said that Elsa was "ahead of schedule and under budget" when he first announced the AI rollout. But it seems like you get what you pay for. If you don't care about the accuracy of your work, Elsa sounds like a great tool for allowing you to get slop out the door faster, generating garbage studies that could potentially have real consequences for public health in the U.S.

CNN notes that if an FDA employee asks Elsa to generate a one-paragraph summary of a 20-page paper on a new drug, there's no simple way to know if that summary is accurate. And even if the summary is more or less accurate, what if there's something within that 20-page report that would be a big red flag for any human with expertise? The only way to know for sure if something was missed or if the summary is accurate is to actually read the report.

The FDA employees who spoke with CNN said they tested Elsa by asking basic questions like how many drugs of a certain class have been approved for children. Elsa confidently gave wrong answers, and while it apparently apologized when it was corrected, a robot being "sorry" doesn't really fix anything.

We still don't know the workflow being deployed when Kennedy says AI will allow the FDA to approve new drugs, but he testified in June to a House subcommittee that it's already being used to "increase the speed of drug approvals." The secretary, whose extremist anti-vaccine beliefs didn't keep him from becoming a public health leader, seems intent on injecting unproven technologies into mainstream science.

Kennedy also testified to Congress that he wants every American to be strapped with a wearable health device within the next four years. As it happens, President Trump's pick for Surgeon General, Casey Means, owns a wearables company called Levels that monitors glucose levels in people who aren't diabetic. There's absolutely no reason that people without diabetes need to constantly monitor their glucose levels, according to experts. Means, a close ally of Kennedy, has not yet been confirmed by the Senate.

Makary acknowledged to CNN that Elsa could "potentially hallucinate," but that's "no different" from other large language models and generative AI. And he's not wrong on that. The problem is that AI is not fit for purpose when it's consistently just making things up. But that won't stop folks from continuing to believe that AI is somehow magic.

William Maloney from the FDA's "rapid response" office sent a statement about the agency's use of AI on Wednesday.

"The information provided by FDA to CNN was mischaracterized and taken out of context," Maloney wrote. When Gizmodo responded via email to ask what in CNN's report may have been mischaracterized or taken out of context, Maloney didn't address our questions. Gizmodo also asked if anything about the report was inaccurate but that question was also ignored.

"FDA was excited to share the success story of the growth and evolution of its AI tool, Elsa. Unfortunately, CNN decided to lead the story with disgruntled former employees and sources who have never even used the current version of Elsa," Maloney's statement continued.

But it was Maloney's last line of the email that really reminded us the FDA has been fully captured by Trump: "The only thing 'hallucinating' in this story is CNN's failed reporting."

We are born dying, but we are compelled to fancy our chances.
- hbomberguy

Proud owner of 42 Zoupa Points.

Josquius

Sounds like a good business opportunity there. Like SEO back in the day, AIO.
Seeding the Internet with discussion about studies that don't actually exist so the AI can pick up on it.

The Minsky Moment

Quote from: Josquius on July 24, 2025, 06:18:10 AM
Sounds like a good business opportunity there. Like SEO back in the day, AIO.
Seeding the Internet with discussion about studies that don't actually exist so the AI can pick up on it.

No need, the AI will "hallucinate" fake studies because that's how they are designed.  It is not an intelligence.  It is just associating symbols with other symbols using its probability model. If asked to generate a report summarizing health research, it will detect similar reports and attempt to replicate them by using similar symbol or letter combinations but it has no clue that there is such a thing as a "study" that exists in the world.  It will just use words and letters in combinations similar to those in reports in its database that report studies, but it may mix and match words in the title or the journal name, and mix up results.
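
To make that concrete, here's a toy Python sketch (purely illustrative - a real model samples tokens from a learned distribution, not fields from hand-written lists, but the failure mode is the same): each piece of the citation is sampled independently from things that look plausible, so every output is fluent and citation-shaped, and none of it has to exist.

import random

# Toy "probability model" over citation fragments - fit to nothing real,
# purely to illustrate the failure mode. Each field is sampled independently,
# so the pieces are all familiar but the combination never existed.
AUTHORS = ["Smith", "Chen", "Garcia", "Okafor"]
YEARS = [2014, 2017, 2019, 2021]
TOPICS = ["Pediatric statin safety", "GLP-1 adherence", "Biologic dosing"]
JOURNALS = ["J Clin Pharmacol", "Lancet", "NEJM", "Pediatrics"]

def sample_citation(rng: random.Random) -> str:
    """Emit a citation-shaped string one plausible field at a time, the way
    a next-token sampler emits one plausible token at a time."""
    return (f"{rng.choice(AUTHORS)} et al. ({rng.choice(YEARS)}). "
            f"{rng.choice(TOPICS)}. {rng.choice(JOURNALS)}.")

rng = random.Random(42)
for _ in range(3):
    print(sample_citation(rng))  # fluent, well-formatted, and entirely fictional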

I see this all the time when AIs are used to try to write legal briefs. The case citations are all screwed up unless the AI is specifically programmed to check them against a real database. Even then, the AI will often misstate the holding or cite the wrong case for the wrong point.
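
A minimal sketch of that kind of check, assuming a hypothetical KNOWN_CASES table standing in for a real citation database (Westlaw, CourtListener, and so on in a real system). Note what it can and cannot catch:

import re

# Hypothetical stand-in for a real citation database. Maps reporter
# citations to the actual case names they belong to.
KNOWN_CASES = {
    "347 U.S. 483": "Brown v. Board of Education",
    "5 U.S. 137": "Marbury v. Madison",
}

# Matches strings like "Brown v. Board of Education, 347 U.S. 483".
CITE_RE = re.compile(r"(?P<name>[\w.' ]+ v\. [\w.' ]+), (?P<cite>\d+ U\.S\. \d+)")

def audit_brief(text: str) -> list[str]:
    """Flag citations that don't exist, or where the case name and the
    reporter citation don't belong together (mix-and-match). It cannot
    catch a real case cited for the wrong proposition - that still
    needs a human reader, exactly as described above."""
    problems = []
    for m in CITE_RE.finditer(text):
        name, cite = m["name"].strip(), m["cite"]
        actual = KNOWN_CASES.get(cite)
        if actual is None:
            problems.append(f"{cite}: no such citation in the database")
        elif actual not in name:  # loose match: 'name' may carry leading prose
            problems.append(f"{cite}: cited as '{name}', database says '{actual}'")
    return problems

brief = ("As held in Brown v. Board of Education, 347 U.S. 483 (1954), and "
         "in Miller v. Yates, 212 U.S. 591 (1909), the rule is settled.")
print(audit_brief(brief))  # -> ['212 U.S. 591: no such citation in the database']

A real checker would use a dedicated citation parser rather than a regex, and verifying the holding - the part Minsky flags - is exactly the part no string match can do.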
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

The Minsky Moment

That said, the use of AI by the Kennedy-led FDA is doing exactly what it is intended to do: reduce public confidence in the scientific reliability of agency output and provide a convenient excuse to can swaths of essential employees.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

Josquius

I do increasingly see potential uses for AI when you have fixed data sets and instruct it to work only from those.
I have some ideas about mapping property for taxation - though human checking would be needed for quirks.
It could also be really good for mapping traffic flow patterns via traffic cams, which has all sorts of uses.
I won't say what I'm seeing in my work here... but it seems a reasonable use.
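
For what it's worth, a bare-bones Python sketch of that "fixed data set only" pattern (the parcel records here are invented for illustration): every answer is a verbatim row from the data set, so the worst failure mode is a miss that gets escalated to a human, never an invention.

from dataclasses import dataclass

# Hypothetical fixed data set - the only thing the tool is allowed to cite.
@dataclass(frozen=True)
class Parcel:
    parcel_id: str
    address: str
    assessed_value: int  # e.g. in pounds

PARCELS = [
    Parcel("NE-0001", "12 Grainger St", 180_000),
    Parcel("NE-0002", "3 Ouseburn Rd", 240_000),
]

def lookup(query: str) -> list[Parcel]:
    """Extractive-only search: every result is a real row from PARCELS,
    so there is nothing to hallucinate - only retrieval misses, and the
    quirky cases still go to a human reviewer."""
    q = query.lower()
    return [p for p in PARCELS if q in p.address.lower() or q == p.parcel_id.lower()]

hits = lookup("ouseburn")
print(hits or "no match - escalate to a human reviewer")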


Quote from: The Minsky Moment on July 24, 2025, 09:16:49 AM
Quote from: Josquius on July 24, 2025, 06:18:10 AM
Sounds like a good business opportunity there. Like SEO back in the day, AIO.
Seeding the Internet with discussion about studies that don't actually exist so the AI can pick up on it.

No need, the AI will "hallucinate" fake studies because that's how they are designed.  It is not an intelligence.  It is just associating symbols with other symbols using its probability model. If asked to generate a report summarizing health research, it will detect similar reports and attempt to replicate them by using similar symbol or letter combinations but it has no clue that there is such a thing as a "study" that exists in the world.  It will just use words and letters in combinations similar to those in reports in its database that report studies, but it may mix and match words in the title or the journal name, and mix up results.

I see this all the time when AIs are used to try to write legal briefs. The case citations are all screwed up unless the AI is specifically programmed to check them against a real database. Even then, the AI will often misstate the holding or cite the wrong case for the wrong point.

I can't speak for legal use, but for AI in general...
It does that if backed into a corner and it can't find a real answer, but it does tend to prefer real answers first. If asked outright whether there is a study to prove x, it will often say no such study exists as far as it can find - it searches its databases and/or the Web.

That's a key part of why hallucinations are so bad, imo. It seems so reasonable and good at its job with stuff that's easy to check for yourself. But after a lengthy chain it will sneak shit in.

I found this when using an AI to help with drafting CVs and cover letters. At first it was good and just offered different ways of framing stuff. But as time went on it steadily began to sneak in more and more lies.

garbon

Quote from: Josquius on July 24, 2025, 10:35:35 AM
I can't speak for legal use, but for AI in general...
It does that if backed into a corner and it can't find a real answer, but it does tend to prefer real answers first. If asked outright whether there is a study to prove x, it will often say no such study exists as far as it can find - it searches its databases and/or the Web.

That's a key part of why hallucinations are so bad, imo. It seems so reasonable and good at its job with stuff that's easy to check for yourself. But after a lengthy chain it will sneak shit in.

I found this when using an AI to help with drafting CVs and cover letters. At first it was good and just offered different ways of framing stuff. But as time went on it steadily began to sneak in more and more lies.

Maybe, but it can also get to the point of hallucinating real quick. I recall when I was playing around with ChatGPT and asked it about something I didn't think it would know: historical people in India in the 8th century. It happily spat out a list, and several entries were actually from the 11th century, though it didn't tell me that. When confronted it updated the list, still with errors, and the second time, when I said I was worried as there was no evidence, it told me that for each of the wrong names there could have been someone with that name who existed at the time. :D
"I've never been quite sure what the point of a eunuch is, if truth be told. It seems to me they're only men with the useful bits cut off."
I drank because I wanted to drown my sorrows, but now the damned things have learned to swim.

Josquius

True, it can be confidently wrong sometimes, especially with niche stuff.
The other day I was investigating a district of Newcastle - how historically well used it actually was, and whether there are any maps clearly marking it.
It said of course and gave me some examples... all of which clearly didn't have it.

I always make a point of asking for its sources. Which, yes, massively cuts down on how much of a time saver it really is. But used properly it can still be quicker and easier than just manually googling.
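
A crude way to automate that source check, sketched in Python with just the standard library (the URL and phrase below are hypothetical): fetch each source the model names and confirm the page actually mentions what was attributed to it. A pass is weak evidence; a failure is a strong hint the "source" was invented.

from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def check_source(url: str, must_mention: str, timeout: float = 10.0) -> str:
    """Does the URL resolve, and does the page text actually contain the
    phrase the model attributed to it? Crude, but it catches outright
    invented sources quickly."""
    try:
        req = Request(url, headers={"User-Agent": "source-checker/0.1"})
        with urlopen(req, timeout=timeout) as resp:
            body = resp.read(500_000).decode("utf-8", errors="replace")
    except (URLError, HTTPError, TimeoutError) as exc:
        return f"FAIL: could not fetch ({exc})"
    return "ok" if must_mention.lower() in body.lower() else "FAIL: phrase not on page"

# Hypothetical model-supplied citation for the Newcastle map question:
print(check_source("https://example.com/old-maps", "Shieldfield"))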

crazy canuck

It is not confident about anything.  Stop reifying it.

HisMajestyBOB

Quote from: The Minsky Moment on July 24, 2025, 09:16:49 AM
Quote from: Josquius on July 24, 2025, 06:18:10 AM
Sounds like a good business opportunity there. Like SEO back in the day, AIO.
Seeding the Internet with discussion about studies that don't actually exist so the AI can pick up on it.

No need, the AI will "hallucinate" fake studies because that's how they are designed.  It is not an intelligence.  It is just associating symbols with other symbols using its probability model. If asked to generate a report summarizing health research, it will detect similar reports and attempt to replicate them by using similar symbol or letter combinations but it has no clue that there is such a thing as a "study" that exists in the world.  It will just use words and letters in combinations similar to those in reports in its database that report studies, but it may mix and match words in the title or the journal name, and mix up results.

I see this all the time when AIs are used to try to write legal briefs. The case citations are all screwed up unless the AI is specifically programmed to check them against a real database. Even then, the AI will often misstate the holding or cite the wrong case for the wrong point.

Looking forward to the Supreme Court ruling that if Mechahitler cites a fictional case that supports the President's position, that fictional case is now good case law.
Three lovely Prada points for HoI2 help