New AI fake text generator may be too dangerous to release, say creators

Started by Syt, February 15, 2019, 04:17:19 AM


Syt

https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction

Quote
New AI fake text generator may be too dangerous to release, say creators

The Elon Musk-backed nonprofit company OpenAI declines to release research publicly for fear of misuse

The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public, in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.
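The predict-the-next-word loop the article describes can be sketched in a few lines using the small GPT-2 model OpenAI did release publicly, here loaded through the Hugging Face transformers library; the prompt and sampling settings below are only illustrative, and this is not the withheld full-size model.

```python
# Minimal sketch of autoregressive text generation with the small, publicly
# released GPT-2 model (via the Hugging Face "transformers" library).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "It was a bright cold day in April, and the clocks were striking thirteen."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts the next token given everything written so far;
# sampling (rather than always taking the most likely word) is what gives the
# varied, plausible continuations described in the article.
output_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```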

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.


Feed it the opening line of George Orwell's Nineteen Eighty-Four – "It was a bright cold day in April, and the clocks were striking thirteen" – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with:

"I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science."


Feed it the first few paragraphs of a Guardian story about Brexit, and its output is plausible newspaper prose, replete with "quotes" from Jeremy Corbyn, mentions of the Irish border, and answers from the prime minister's spokesman.

One such, completely artificial, paragraph reads: "Asked to clarify the reports, a spokesman for May said: 'The PM has made it absolutely clear her intention is to leave the EU as quickly as is possible and that will be under her negotiating mandate as confirmed in the Queen's speech last week.'"

From a research standpoint, GPT2 is groundbreaking in two ways. One is its size, says Dario Amodei, OpenAI's research director. The models "were 12 times bigger, and the dataset was 15 times bigger and much broader" than the previous state-of-the-art AI model. It was trained on a dataset containing about 10m articles, selected by trawling the social news site Reddit for links with more than three votes. The vast collection of text weighed in at 40 GB, enough to store about 35,000 copies of Moby Dick.

The amount of data GPT2 was trained on directly affected its quality, giving it a deeper knowledge of how written language works. It also led to the second breakthrough: GPT2 is far more general purpose than previous text models. By structuring the text that is input, it can perform tasks including translation and summarisation, and pass simple reading comprehension tests, often performing as well as or better than other AIs built specifically for those tasks.
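The "structuring the text that is input" part is easier to see with an example: the same model is steered towards different tasks purely by the shape of the prompt. A rough sketch with the small released model follows; the "TL;DR:" and English/French patterns are illustrative assumptions about how such prompting is typically done, not OpenAI's exact evaluation setup.

```python
# Steering one language model towards different tasks by structuring the prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

article = (
    "The prime minister's spokesman said talks on the Irish border would "
    "continue next week, while opposition leaders demanded a second vote."
)

# Summarisation: append "TL;DR:" and let the model continue.
summary_prompt = article + "\nTL;DR:"

# Translation: show a pattern of English/French pairs and leave the last one open.
translation_prompt = (
    "English: The clocks were striking thirteen. "
    "French: Les horloges sonnaient treize heures.\n"
    "English: It was a bright cold day in April. French:"
)

for prompt in (summary_prompt, translation_prompt):
    result = generator(prompt, max_length=80, do_sample=True, top_k=40)
    print(result[0]["generated_text"])
```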

That quality, however, has also led OpenAI to go against its remit of pushing AI forward and keep GPT2 behind closed doors for the immediate future while it assesses what malicious users might be able to do with it. "We need to perform experimentation to find out what they can and can't do," said Jack Clark, the charity's head of policy. "If you can't anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously."

To show what that means, OpenAI made one version of GPT2 with a few modest tweaks that can be used to generate infinite positive – or negative – reviews of products. Spam and fake news are two other obvious potential downsides, as is the AI's unfiltered nature. As it is trained on the internet, it is not hard to encourage it to generate bigoted text, conspiracy theories and so on.
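For a sense of what "a few modest tweaks" might look like in practice, here is a conceptual sketch: fine-tune a small language model on reviews prefixed with a sentiment tag, then sample with the tag you want. The tags, toy data and hyperparameters are assumptions for illustration only; OpenAI has not published the details of its tweaked version.

```python
# Conceptual sketch of sentiment-controlled review generation via fine-tuning.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Toy training data: each review starts with a control tag.
reviews = [
    "<positive> Works perfectly and arrived a day early. Five stars.",
    "<negative> Broke within a week and support never replied.",
]

model.train()
for epoch in range(3):
    for text in reviews:
        ids = tokenizer(text, return_tensors="pt").input_ids
        loss = model(ids, labels=ids).loss  # standard next-word objective
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# After fine-tuning, prompting with a tag steers the sentiment of the output.
model.eval()
prompt_ids = tokenizer("<positive> This blender", return_tensors="pt").input_ids
sample = model.generate(prompt_ids, max_length=60, do_sample=True, top_k=40,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(sample[0], skip_special_tokens=True))
```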

Instead, the goal is to show what is possible, in order to prepare the world for what will be mainstream in a year or two's time.
"I have a term for this. The escalator from hell," Clark said. "It's always bringing the technology down in cost and down in price. The rules by which you can control technology have fundamentally changed.

"We're not saying we know the right thing to do here, we're not laying down the line and saying 'this is the way' ... We are trying to develop more rigorous thinking here. We're trying to build the road as we travel across it."

I am, somehow, less interested in the weight and convolutions of Einstein's brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.
—Stephen Jay Gould

Proud owner of 42 Zoupa Points.

celedhring

Fuck, I'm losing my job too  :lol:

Syt

Quote from: celedhring on February 15, 2019, 04:20:22 AM
Fuck, I'm losing my job too  :lol:

Eventually, maybe. For now it seems it can only create plausible continuations of existing texts. However, it's probably going to be a while till it can e.g. plot story and character arcs ... maybe in a year or two? :P
I am, somehow, less interested in the weight and convolutions of Einstein's brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.
—Stephen Jay Gould

Proud owner of 42 Zoupa Points.

mongers

Quote from: Syt on February 15, 2019, 04:17:19 AM
https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction

Quote
New AI fake text generator may be too dangerous to release, say creators
.....

Feed it the first few paragraphs of a Guardian story about Brexit, and its output is plausible newspaper prose, replete with "quotes" from Jeremy Corbyn, mentions of the Irish border, and answers from the prime minister's spokesman.

One such, completely artificial, paragraph reads: "Asked to clarify the reports, a spokesman for May said: 'The PM has made it absolutely clear her intention is to leave the EU as quickly as is possible and that will be under her negotiating mandate as confirmed in the Queen's speech last week.'"
....


Clearly it has already been released and has been in use at No.10 for some time.  :bowler:
"We have it in our power to begin the world over again"


Grey Fox

Colonel Caliga is Awesome.


Syt

https://www.dpreview.com/news/9597199609/this-website-uses-ai-to-generate-portraits-of-people-who-don-t-actually-exist

Quote
This website uses AI to generate portraits of people who don't actually exist

A new website called This Person Does Not Exist went viral this week, and it has one simple function: displaying a portrait of a random person each time the page is refreshed. The website seems pointless at first glance, but there's a secret behind its seemingly endless stream of images. According to a Facebook post detailing the website, the images are generated using a generative adversarial network (GAN) algorithm.

In December, NVIDIA published research detailing the use of style-based GANs (StyleGAN) to generate very realistic portraits of people who don't exist. The same technology is powering This Person Does Not Exist, which was created by Uber software engineer Phillip Wang to 'raise some public awareness for this technology.'

In his Facebook post, Wang said:

Faces are most salient to our cognition, so I've decided to put that specific pretrained model up. Their research group have also included pretrained models for cats, cars, and bedrooms in their repository that you can immediately use.

Each time you refresh the site, the network will generate a new facial image from scratch from a 512 dimensional vector.
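The mechanics behind "a new facial image from scratch from a 512 dimensional vector" can be sketched briefly: sample a random 512-dimensional latent vector and push it through a trained generator network. The toy generator below is only a placeholder to show that interface; it is not NVIDIA's actual StyleGAN architecture, which would instead be loaded with its pretrained weights.

```python
# Toy sketch of latent-vector-to-image generation (placeholder generator,
# not the real StyleGAN network).
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    def __init__(self, latent_dim=512):
        super().__init__()
        # Map the latent vector to a low-resolution RGB image (64x64 here).
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, 3 * 64 * 64),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

generator = ToyGenerator()

# "Each time you refresh the site": a fresh random vector yields a fresh image.
z = torch.randn(1, 512)
image = generator(z)
print(image.shape)  # torch.Size([1, 3, 64, 64])
```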


Generative adversarial networks were first introduced in 2014 as a way to generate images from datasets, but the resulting content was less than realistic. The technology has improved drastically in only a few years, with major breakthroughs in 2017 and again last year with NVIDIA's introduction of StyleGAN.

This Person Does Not Exist underscores the technology's growing ability to produce life-like images that, in many cases, are indistinguishable from portraits of real people.

As described by NVIDIA last year, StyleGAN can be used to generate more than just portraits. In the video above, the researchers demonstrate the technology being used to generate images of rooms and vehicles, and to modify 'fine styles' in images, such as the color of objects. Results were, in most cases, indistinguishable from images of real settings.


https://www.youtube.com/watch?time_continue=1&v=kSLJriaOumA

https://thispersondoesnotexist.com/
I am, somehow, less interested in the weight and convolutions of Einstein's brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.
—Stephen Jay Gould

Proud owner of 42 Zoupa Points.

mongers

Quote from: Syt on February 17, 2019, 08:28:10 AM

Quote
This website uses AI to generate portraits of people who don't actually exist



I should get one of those.  :secret:
"We have it in our power to begin the world over again"

Valmy

Sometimes it fucks up and gives you a really freaky looking image.

Still...it looks like the internet will be flooded by computer generated fake people and fake news in huge quantities soon enough.
Quote"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

jimmy olsen

Clearly this should have been posted in the robots are going to take our jobs thread.
It is far better for the truth to tear my flesh to pieces, then for my soul to wander through darkness in eternal damnation.

Jet: So what kind of woman is she? What's Julia like?
Faye: Ordinary. The kind of beautiful, dangerous ordinary that you just can't leave alone.
Jet: I see.
Faye: Like an angel from the underworld. Or a devil from Paradise.
--------------------------------------------
1 Karma Chameleon point

dps

Quote from: jimmy olsen on February 17, 2019, 09:56:27 PM
Clearly this should have been posted in the robots are going to take our jobs thread.

Well, if your job is creating fake news.  :)

The Brain

Women want me. Men want to be with me.

Syt

https://techcrunch.com/2019/03/18/nvidia-ai-turns-sketches-into-photorealistic-landscapes-in-seconds/?fbclid=IwAR1Kt3htdXt_avqQ5XME-moe7B5l02jbhnZg10CZlDr62XYBHRTst8pvEkg&guccounter=1&guce_referrer_us=aHR0cHM6Ly90LmNvL3c4U09WR0RpT0U&guce_referrer_cs=mFXp9ALnCJ5LN8wu3gxJ6A



Quote
Nvidia AI turns sketches into photorealistic landscapes in seconds

Today at Nvidia GTC 2019, the company unveiled a stunning image creator. Using generative adversarial networks, users of the software are able, with just a few clicks, to sketch images that are nearly photorealistic. The software will instantly turn a couple of lines into a gorgeous mountaintop sunset. This is MS Paint for the AI age.

Called GauGAN, the software is just a demonstration of what's possible with Nvidia's neural network platforms. It's designed to compose an image the way a human would paint one, with the goal of turning a sketch into a photorealistic image in seconds. In an early demo, it seems to work as advertised.

GauGAN has three tools: a paint bucket, pen and pencil. At the bottom of the screen is a series of objects. Select the cloud object and draw a line with the pencil, and the software will produce a wisp of photorealistic clouds. But these are not image stamps. GauGAN produces results unique to the input. Draw a circle and fill it with the paint bucket and the software will make puffy summer clouds.

Users can draw the shape of a tree with the input tools and the software will produce a tree. Draw a straight line and it will produce a bare trunk; draw a bulb at the top and the software will fill it in with leaves, producing a full tree.

GauGAN is also multimodal: if two users create the same sketch with the same settings, random numbers built into the project ensure that the software creates different results.
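Conceptually, GauGAN's interface is a map of semantic labels (which pixel is sky, water, tree and so on) plus a random noise vector, and an image comes out; the noise is what makes the same sketch "multimodal". The miniature network below is a stand-in for NVIDIA's real SPADE-based generator, included only to show the shape of that interface; the class labels and sizes are illustrative assumptions.

```python
# Minimal sketch of sketch-to-image generation: semantic label map + noise in,
# RGB image out. This is a placeholder network, not NVIDIA's SPADE generator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySketchToImage(nn.Module):
    def __init__(self, num_classes=4, noise_dim=64):
        super().__init__()
        self.num_classes = num_classes
        self.noise_dim = noise_dim
        # One-hot label planes plus broadcast noise channels in, RGB image out.
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + noise_dim, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, label_map, noise):
        # label_map: (B, H, W) integer class IDs, i.e. the user's painted sketch.
        one_hot = F.one_hot(label_map, self.num_classes).permute(0, 3, 1, 2).float()
        b, _, h, w = one_hot.shape
        noise_planes = noise.view(b, self.noise_dim, 1, 1).expand(b, self.noise_dim, h, w)
        return self.net(torch.cat([one_hot, noise_planes], dim=1))

model = ToySketchToImage()
# Illustrative labels: 0 = sky, 1 = cloud, 2 = water, 3 = tree.
sketch = torch.zeros(1, 64, 64, dtype=torch.long)  # start with all sky
sketch[:, 40:, :] = 2                               # paint the bottom as water
sketch[:, 10:20, 45:60] = 1                         # add a wisp of cloud

# "Multimodal": the same sketch with different noise gives different images.
img_a = model(sketch, torch.randn(1, 64))
img_b = model(sketch, torch.randn(1, 64))
print(img_a.shape, torch.allclose(img_a, img_b))  # torch.Size([1, 3, 64, 64]) False
```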

In order to produce real-time results, GauGAN has to run on a Tensor computing platform. Nvidia demonstrated the software on an RTX Titan GPU, which allowed it to produce results in real time. The operator of the demo was able to draw a line and the software instantly produced results. However, Bryan Catanzaro, VP of Applied Deep Learning Research, stated that with some modifications, GauGAN can run on nearly any platform, including CPUs, though the results might take a few seconds to display.

In the demo, the boundaries between objects are not perfect, and the team behind the project says this will improve: there is a slight line where two objects touch. Nvidia calls the results photorealistic, but under scrutiny they don't quite stand up. Neural networks still struggle when asked to produce objects that differ from the ones they were trained on; this project hopes to narrow that gap.

Nvidia turned to 1 million images on Flickr to train the neural network. Most came from Flickr's Creative Commons, and Catanzaro said the company only uses images with permission. The company says this program can synthesize hundreds of thousands of objects and their relation to other objects in the real world. In GauGAN, change the season and the leaves will disappear from the branches. Or if there's a pond in front of a tree, the tree will be reflected in the water.

Nvidia will release the white paper today. Catanzaro noted that it was previously accepted to CVPR 2019.

Catanzaro hopes this software will be available on Nvidia's new AI Playground, but says there is a bit of work the company needs to do in order to make that happen. He sees tools like this being used in video games to create more immersive environments, but notes Nvidia does not directly build software to do so.

It's easy to bemoan the ease with which this software could be used to produce inauthentic images for nefarious purposes. And Catanzaro agrees this is an important topic, noting that it's bigger than one project and company. "We care about this a lot because we want to make the world a better place," he said, adding that this is a trust issue instead of a technology issue and that we, as a society, must deal with it as such.

Even in this limited demo, it's clear that software built around these abilities would appeal to everyone from video game designers to architects to casual gamers. The company does not have any plans to release it commercially, but could soon release a public trial to let anyone use the software.


https://www.youtube.com/watch?v=p5U4NgVGAwg
I am, somehow, less interested in the weight and convolutions of Einstein's brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.
—Stephen Jay Gould

Proud owner of 42 Zoupa Points.

Valmy

Heh. I wonder if it will even be possible to create something that could absolutely be verified to not be fake.
Quote"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."