
The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


garbon

Wouldn't this headline be better described as 'partially restrict' or 'limit' rather than stop? :hmm:

https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo

Quote
X to stop Grok AI from undressing images of real people after backlash

Elon Musk's AI tool Grok will no longer be able to edit photos of real people to show them in revealing clothing in jurisdictions where it is illegal, after widespread concern over sexualised AI deepfakes.

"We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing," reads an announcement on X.

Reacting to the ban, the UK government claimed "vindication" after Prime Minister Sir Keir Starmer called on X to control its AI tool.

A spokesperson for UK regulator Ofcom said it was a "welcome development" - but added its investigation into whether the platform had broken UK laws "remains ongoing".

"We are working round the clock to progress this and get answers into what went wrong and what's being done to fix it," they said.

Technology Secretary Liz Kendall also welcomed the move but said she would "expect the facts to be fully and robustly established by Ofcom's ongoing investigation".

X's change was announced hours after California's top prosecutor said the state was probing the spread of sexualised AI deepfakes, including of children, generated by the AI model.

"We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it's illegal," X said in its statement.

It also reiterated that only paid users will be able to edit images using Grok on its platform.

This will add an extra layer of protection by helping to ensure that those who try and abuse Grok to violate the law or X's policies are held accountable, according to the statement.

With NSFW (not safe for work) settings enabled, Grok is supposed to allow "upper body nudity of imaginary adult humans (not real ones)" consistent with what can be seen in R-rated films, Musk wrote online on Wednesday.

"That is the de facto standard in America. This will vary in other regions according to the laws on a country by country basis," said the tech multi-billionaire.

Musk had earlier defended X, posting that critics "just want to suppress free speech" along with two AI-generated images of UK Prime Minister Sir Keir Starmer in a bikini.

It is unclear how his platform will implement location-based blocks on Grok's ability to edit images of real people to create sexualised imagery, and whether users would be able to get around them.

People seeking to circumvent similar geoblocks on features or content often look to tools such as virtual private networks (VPNs), which can disguise their location online to allow them to use the internet as if they are in a different country.

VPN app downloads spiked in the UK last year after porn sites were required to start checking the age of visitors to comply with the Online Safety Act.

...
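The geoblock the article describes boils down to an IP-to-country lookup before the feature is enabled. A minimal sketch, assuming a toy prefix table (real platforms use full IP-geolocation databases such as MaxMind; every name, prefix, and country code here is an illustrative stand-in):

```python
# Hypothetical sketch of IP-based geoblocking. The prefix table and the
# set of blocked jurisdictions are made-up stand-ins, not X's actual logic.

BLOCKED_JURISDICTIONS = {"GB", "MY"}  # illustrative: where the feature is illegal

# Toy IP-prefix -> country mapping; a real geolocation service resolves
# full CIDR ranges, not string prefixes.
GEO_DB = {
    "81.2.": "GB",
    "175.139.": "MY",
    "8.8.": "US",
}

def country_for(ip: str) -> str:
    """Look up the country for an IP address in the toy table."""
    for prefix, country in GEO_DB.items():
        if ip.startswith(prefix):
            return country
    return "UNKNOWN"

def may_generate_image(ip: str) -> bool:
    """Allow the image-edit feature only outside blocked jurisdictions."""
    return country_for(ip) not in BLOCKED_JURISDICTIONS
```

This is also why the VPN point in the article matters: the server only ever sees the VPN exit node's address, so a user tunnelling through a US endpoint looks like a US request to any check of this shape.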
"I've never been quite sure what the point of a eunuch is, if truth be told. It seems to me they're only men with the useful bits cut off."
I drank because I wanted to drown my sorrows, but now the damned things have learned to swim.

celedhring

Huh, so only in places where it's expressly forbidden? Way to go, Elon.

garbon

I do not understand how that will work.

Woman in UK makes a tweet with a photo. Paying X user in US requests grok to put her in a bikini.

Would the woman in UK be unable to see the image as she is in UK? Otherwise seems like what is already happening will still occur.

I thought maybe it could block using the AI on her, as X knows she is in the UK, but surely the US user could just download the photo, post it in reply to her, and then ask Grok to do its thing.

Sheilbh

I assume a bit like the Online Safety Act blocks at the minute? The functionality and output will not be available in the UK (and other countries - I think Malaysia, for example, shut Twitter down over this).

But as you say someone could do it elsewhere and then just post a screenshot or downloaded image. Unless there is a way for the platform to identify it was Grok generated.

From what I understand on the OSA, Twitter takes a fairly broad brush approach to restricting content.
Let's bomb Russia!

Tamas

I guess it's enough for Starmer to declare victory without having to anger the whole of British media by cutting their access to the one and only tool they use in their job.

Syt

We are born dying, but we are compelled to fancy our chances.
- hbomberguy

Proud owner of 42 Zoupa Points.

The Brain

Women want me. Men want to be with me.

Valmy

That seems like it simplifies it to the point of changing the meaning of the sentence. Advice is different from simply something that somebody tells you.
Quote
"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

Tonitrus

I first read that as Magabook...


celedhring

Quote from: Valmy on January 18, 2026, 03:31:23 PM
That seems like it simplifies it to the point of changing the meaning of the sentence. Advice is different from simply something that somebody tells you.

I'm gonna play devil's advocate, and say that as a kid I read quite a few simplified language versions of literary classics. There's also all the easy-language versions for language learners.

So there's a legitimate use case for this. However, automating it seems foolish given that - as you say - you risk changing the meaning.

Baron von Schtinkenbutt

Quote from: Valmy on January 18, 2026, 03:31:23 PM
That seems like it simplifies it to the point of changing the meaning of the sentence. Advice is different from simply something that somebody tells you.

Apparently it's deliberately whitewashing some works:

Quote from: New York Post
New app uses AI to dumb down, whitewash classic books

By Chris Harris   
Published July 13, 2024, 1:16 p.m. ET

It's McLiterature.

A newly-launched iPhone and iPad artificial intelligence app is abbreviating iconic literary works like "Moby Dick" and "A Tale of Two Cities" — while whitewashing classics like "The Adventures of Huckleberry Finn."

Magibook's website claims it utilizes artificial intelligence to simplify the language of books like "The Count of Monte Cristo" and "Crime and Punishment," making them more accessible to all readers, "no matter your English level."

Ultimately, though, the app strips away the potency of the original writings, and the emotions their authors were attempting to convey with their prose.

Seminal lines such as "It was the best of times, it was the worst of times" are reduced to "It was a time when things were very good and very bad."

The 219 now-controversial occurrences of the N-word in "The Adventures of Huckleberry Finn" are replaced on Magibook with the noun "Helper."

At this point, users of the free app, which launched July 1, can access five different versions of 10 classic books, including "Dracula," "Robinson Crusoe," "The Three Musketeers," "The Picture of Dorian Gray," and "The Great Gatsby" — from their original versions down to an "elementary version."

Cassandra Jacobs, a linguistics professor at the University of Buffalo, called the new app "alarming," noting exposure to complicated text "makes us smarter."

She also noted authors chose specific words "very deliberately" when they write, and believes ideas will get lost via AI.

The app says it was created to "democratize books and their ideas," and is suggested for "English learners," children, parents, teachers and people with dyslexia and severe ADHD.

The app's developer, Louis Gachot, couldn't be reached for comment.

Jacob

The main question I have is whether IngSoc is going to be replaced with TrumpSoc, MuskSoc, or some other term.

garbon

I feel like this is all just free publicity for this app?

Jacob

Uh oh scientific publishing...

Quote
For more than a century, scientific journals have been the pipes through which knowledge of the natural world flows into our culture. Now they're being clogged with AI slop.

Scientific publishing has always had its plumbing problems. Even before ChatGPT, journal editors struggled to control the quantity and quality of submitted work. Alex Csiszar, a historian of science at Harvard, told me that he has found letters from editors going all the way back to the early 19th century in which they complain about receiving unmanageable volumes of manuscripts. This glut was part of the reason that peer review arose in the first place. Editors would ease their workload by sending articles to outside experts. When journals proliferated during the Cold War spike in science funding, this practice first became widespread. Today it's nearly universal.

But the editors and unpaid reviewers who act as guardians of the scientific literature are newly besieged. Almost immediately after large language models went mainstream, manuscripts started pouring into journal inboxes in unprecedented numbers. Some portion of this effect can be chalked up to AI's ability to juice productivity, especially among non-English-speaking scientists who need help presenting their research. But ChatGPT and its ilk are also being used to give fraudulent or shoddy work a new veneer of plausibility, according to Mandy Hill, the managing director of academic publishing at Cambridge University Press & Assessment.

...

Adam Day runs a company in the United Kingdom called Clear Skies that uses AI to help scientific publishers stay ahead of scammers. He told me that he has a considerable advantage over investigators of, say, financial fraud because the people he's after publish the evidence of their wrongdoing where lots of people can see it. Day knows that individual scientists might go rogue and have ChatGPT generate a paper or two, but he's not that interested in these cases. Like a narcotics detective who wants to take down a cartel, he focuses on companies that engage in industrialized cheating by selling papers in large quantities to scientist customers.

...

Unfortunately, many are fields that society would very much like to be populated with genuinely qualified scientists—cancer research, for one. The mills have hit on a very effective template for a cancer paper, Day told me. Someone can claim to have tested the interactions between a tumor cell and just one protein of the many thousands that exist, and as long as they aren't reporting a dramatic finding, no one will have much reason to replicate their results.

AI can also generate the images for a fake paper. A now-retracted 2024 review paper in Frontiers in Cell and Developmental Biology featured an AI-generated illustration of a rat with hilariously disproportionate testicles, which not only passed peer review but was published before anyone noticed. As embarrassing as this was for the journal, little harm was done. Much more worrying is the ability of generative AI to conjure up convincing pictures of thinly sliced tissue, microscopic fields, or electrophoresis gels that are commonly used as evidence in biomedical research.

Day told me that waves of LLM-assisted fraud have recently hit faddish tech-related fields in academia, including blockchain research. Now, somewhat ironically, the problem is affecting AI research itself. It's easy to see why: The job market for people who can credibly claim to have published original research in machine learning or robotics is as strong as, if not stronger than, the one for cancer biologists. There's also a fraud template for AI researchers: All they have to do is claim to have run a machine-learning algorithm on some kind of data, and say that it produced an interesting outcome. Again, so long as the outcome isn't too interesting, few people, if any, will bother to vet it.

...

A similar influx of AI-assisted submissions has hit bioRxiv and medRxiv, the preprint servers for biology and medicine. Richard Sever, the chief science and strategy officer at the nonprofit organization that runs them, told me that in 2024 and 2025, he saw examples of researchers who had never once submitted a paper sending in 50 in a year. Research communities have always had to sift out some junk on preprint servers, but this practice makes sense only when the signal-to-noise ratio is high. "That won't be the case if 99 out of 100 papers are manufactured or fake," Sever said. "It's potentially an existential crisis."

Given that it's so easy to publish on preprint servers, they may be the places where AI slop has its most powerful diluting effect on scientific discourse. At scientific journals, especially the top ones, peer reviewers like Quintana will look at papers carefully. But this sort of work was already burdensome for scientists, even before they had to face the glut of chatbot-made submissions, and the AIs themselves are improving, too. Easy giveaways, such as the false citation that Quintana found, may disappear completely. Automated slop-detectors may also fail. If the tools become too good, all of scientific publishing could be upended.

https://www.theatlantic.com/science/2026/01/ai-slop-science-publishing/685704/
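The "easy giveaways" the article mentions, such as citations that don't exist, are the kind of thing a trivial automated screen can flag. A minimal sketch, and only a sketch: this checks DOI syntax with a regex, whereas a real checker (the article names Clear Skies as one commercial example, though this code is not theirs) would also resolve each DOI against a registry like Crossref:

```python
import re

# Toy screen for obviously malformed references. It only validates DOI
# syntax; a real detector would also confirm each DOI resolves to a
# genuine record, catching fabricated-but-well-formed citations.

DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def suspicious_references(refs: list[str]) -> list[str]:
    """Return references whose DOI field is missing or fails the syntax check."""
    bad = []
    for ref in refs:
        # Pull whatever follows a "doi:" marker, if present.
        doi = ref.split("doi:")[-1].strip() if "doi:" in ref else ""
        if not DOI_RE.match(doi):
            bad.append(ref)
    return bad
```

As the article notes, giveaways this easy may disappear as the models improve, which is exactly why a syntax-only screen like this is already insufficient on its own.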

crazy canuck

Yeah, I have been talking about this problem here for a while now  :)
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.