Languish.org

General Category => Off the Record => Topic started by: Hamilcar on April 06, 2023, 12:44:43 PM

Title: The AI dooooooom thread
Post by: Hamilcar on April 06, 2023, 12:44:43 PM
Has science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 12:49:44 PM
Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

You tell us.  It is both impressive and kind-of creepy what AI has been able to pull off in just the last little while, and how quickly it's improving - at least in the consumer-facing stuff like ChatGPT or AI-art.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 12:57:53 PM
Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

The CBC had an interesting panel on this yesterday.

The upshot was that it is all overblown and it is in the interests of those working on it to make it overblown - makes going out and getting funding easier.

No idea whether that view is correct, but the panelists were all researchers working on AI.
Title: Re: The AI dooooooom thread
Post by: CountDeMoney on April 06, 2023, 12:59:20 PM
It's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 01:01:00 PM
Quote from: CountDeMoney on April 06, 2023, 12:59:20 PMIt's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.

The biggest concern is around what you are saying: people mistake what the AI says is the answer for something that is infallibly correct, when it is just something that predicts what the next word should be - sort of like a million monkeys.
Title: Re: The AI dooooooom thread
Post by: Maladict on April 06, 2023, 01:17:40 PM
I've just spent fifteen minutes trying to get AI to write a poem using tercets. However much I try to help it, it just can't do it. I'm not worried until it goes full Dante on me.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 01:29:33 PM
My thoughts:

AI is the new hype. There'll be real economic, social, political, and ethical impacts from this. Some of them will be more profound than we imagine, others will be much more trivial than we fear/ hope. It's hard to predict which is which at this time. Broadly, I think it might end up like the industrial revolution.

I think it's a given that there'll be efficiencies gained, job losses, and attendant social disruption. There will definitely be opportunities for those who are clever and/ or lucky. I expect it will make rich people richer, poor people more marginalized, allow more control in totalitarian societies, and allow more sidestepping/ manipulation of democracy in countries where sidestepping/ manipulation of democratic principles is a significant part of the political process. In short, the benefits will tend to accrue to those who already have power. Maybe it'll also result in a general increase in quality across the board.

I think IP lawyers will make money on cases where AI generated art is argued to be derivative of existing work.

I'm interested in seeing how AI generated creative content undermines / encourages creativity and new ideas. There'll also be a significant impact on the value of creative content since it can now be mass produced much more efficiently. I have some misgivings, but they could be misplaced... or not. But the horse has already left the barn there, so it's a matter of seeing what happens rather than right vs wrong.

One area where AI is a long way away still is accountability. Sure AI can give you the result of [whatever] and replace the work of however many people; but if there are real consequences from what the AI outputs (medical decisions, driving AI, legal opinions, allocation of money, killing or hurting people) who is accountable for AI errors? Or for consequences if the AI applies criteria that turn out not to conform to social and legal values?

As for AGI, I recently talked to someone who's in AI and he said something like "AGI continues to be 5 to 50 years in the future." It sounds like it may be a bit like cold fusion - potential right around the corner in some years, but that timeline keeps getting pushed out. When (if) we do get near it, it'll be very interesting to figure out what kind of status they'll have - do they get individual rights? How can they be exploited? What sort of decision making power will they have? What sort of practical things will they be able to do?

... there are of course more sci-fi type hypotheticals that are fun (worrying?) to consider, but I think they're a bit further down the line.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 01:31:50 PM
I forgot to mention - I for one welcome our new AI overlords.

Title: Re: The AI dooooooom thread
Post by: Tamas on April 06, 2023, 01:44:17 PM
Quote from: Barrister on April 06, 2023, 12:49:44 PM
Quote from: Hamilcar on April 06, 2023, 12:44:43 PMHas science gone too far? Are the machines about to take over? What does languish think?

Disclosure: working on AI.

You tell us.  It is both impressive and kind-of creepy what AI has been able to pull off in just the last little while, and how quickly it's improving - at least in the consumer-facing stuff like ChatGPT or AI-art.

Are those "true" AIs though, or is it just our human brain seeing things where there's nothing but a sophisticated script?

Or the other side of that: are WE more than a sophisticated script?
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 06, 2023, 01:59:29 PM
It's barely there & what is there is mostly only greatly optimized algorithms, like models.

Disclosure : Works on imaging AIs.
Title: Re: The AI dooooooom thread
Post by: PDH on April 06, 2023, 02:03:10 PM
Of course we're doomed.  Not from this, but that doesn't matter.
Title: Re: The AI dooooooom thread
Post by: HVC on April 06, 2023, 02:04:35 PM
It's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.
Title: Re: The AI dooooooom thread
Post by: Josquius on April 06, 2023, 02:47:34 PM
There are scary prospects in it for sure. Though they're less in the direction of evil AI conquers the world and more reality breaks down as we are overwhelmed with algorithmically generated fakes.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 02:50:50 PM
Quote from: crazy canuck on April 06, 2023, 01:01:00 PM
Quote from: CountDeMoney on April 06, 2023, 12:59:20 PMIt's gonna be totally awesome when someone uses it to convince a nation's electorate that their leader is stepping down when he isn't, or that a preemptive nuclear strike is necessary when it isn't, or any of the other nifty fucking things it will be able to do convincingly when epistemology is finally erased by the Silicon Sheldons.

The biggest concern is around what you are saying: people mistake what the AI says is the answer for something that is infallibly correct, when it is just something that predicts what the next word should be - sort of like a million monkeys.

I mean - the ultimate biggest concern is the Terminator scenario where AIs gain sentience and wage war against humanity.

In a much more near-term time-frame though, I think the biggest concern is when AI can generate such convincing deep-fake audio and video that we can no longer trust any video we see.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 02:55:31 PM
... and that I think goes back to the accountability point.

If realistic but fake video is trivial to create, video evidence needs some sort of guarantor to be credible. "Yes I was there. That video shows what I saw also" statements from someone credible. Or - I suppose - in the court of law "yes, the chain of custody is that we obtained the video from the drive it was recorded to, that drive has no evidence of material being added to it, and the video has not been substituted since then - so we can rely on it as a recording of real events" type stuff.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 02:56:54 PM
Question for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on April 06, 2023, 03:00:13 PM
Quote from: crazy canuck on April 06, 2023, 01:01:00 PMThe biggest concern is around what you are saying: people mistake what the AI says is the answer for something that is infallibly correct

People having strong beliefs that false things are infallibly correct is not really a novel problem. 

Actually I think an equal problem will be more skepticism about obviously true things based on the theoretical possibility of AI manipulation.  It will be another arrow in the Conspiracy Theorist quiver.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 03:05:05 PM
Quote from: Jacob on April 06, 2023, 02:55:31 PM... and that I think goes back to the accountability point.

If realistic but fake video is trivial to create, video evidence needs some sort of guarantor to be credible. "Yes I was there. That video shows what I saw also" statements from someone credible. Or - I suppose - in the court of law "yes, the chain of custody is that we obtained the video from the drive it was recorded to, that drive has no evidence of material being added to it, and the video has not been substituted since then - so we can rely on it as a recording of real events" type stuff.

So I mean that kind of evidence is already required in court in order to present video.  I can't just play a video without someone to authenticate it.

But as it is now, once the video is authenticated it tends to have much more value than a live witness.  It's one thing for a complainant to say "The Accused beat me", while it's another to have a video showing the Accused beating the complainant.  But if video becomes so easy to fake then suddenly the video has no more value than the live witness.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 06, 2023, 03:06:28 PM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

At least in terms of still images or chat I believe the only thing stopping it right now is limitations built into the AIs.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 03:07:43 PM
BB, I'd think that video would still have some value in underscoring the visceralness (or lack of same) in a way that's more effective than "he beat me viciously."

... but yeah, it would perhaps stop feeling more "real" if we become accustomed to questioning all videos.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 06, 2023, 03:08:57 PM
Quote from: Barrister on April 06, 2023, 03:06:28 PMAt least in terms of still images or chat I believe the only thing stopping it right now is limitations built into the AIs.

... I'm not in the field, but my understanding is that it's not that hard to train AI. I guess it's just a matter of time before someone sets it up and markets it.

:hmm: ... business idea? Certainly it'll be cheaper than paying performers....
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 06, 2023, 03:33:30 PM
These language models will be extremely effective at scamming seniors at first and then the rest of us.

ChatGPT 4 is currently being trained on Icelandic; the other day it even composed a pretty good Hávamál-style verse on, ironically, the importance of small languages.

Lítil tungumál, er lífsins grundvöllur.

Ræður ríki sínu, rótum bundin.

Mögur heimsins, margbreytumleikur.

Aukin samskipti, sannleiks sökum.



It's actually rather good.  :hmm:
Title: Re: The AI dooooooom thread
Post by: grumbler on April 06, 2023, 04:28:53 PM
Quote from: HVC on April 06, 2023, 02:04:35 PMIt's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.

There were no Terminator movies after T2.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on April 06, 2023, 04:30:30 PM
Quote from: The Minsky Moment on April 06, 2023, 03:00:13 PM
Quote from: crazy canuck on April 06, 2023, 01:01:00 PMThe biggest concern is around what you are saying: people mistake what the AI says is the answer for something that is infallibly correct

People having strong beliefs that false things are infallibly correct is not really a novel problem. 

Actually I think an equal problem will be more skepticism about obviously true things based on the theoretical possibility of AI manipulation.  It will be another arrow in the Conspiracy Theorist quiver.

You are correct that it is not novel, but the fact that the answer is being given by AI gives the answer more validity in the minds of many, and therein lies the danger.  The answer could be complete bullshit, but who are you going to believe?  A supercomputer, or some "expert" after years of the right wing attacking experts?
Title: Re: The AI dooooooom thread
Post by: Tamas on April 06, 2023, 04:50:05 PM
Quote from: grumbler on April 06, 2023, 04:28:53 PM
Quote from: HVC on April 06, 2023, 02:04:35 PMIt's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.

There were no Terminator movies after T2.

This is correct.
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 06, 2023, 10:16:58 PM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

Soon, I guess. However, it still has trouble generating hands. It will also be quite difficult for the first few AIs to generate penises. Especially being held by hands.

 :shutup:
Title: Re: The AI dooooooom thread
Post by: Razgovory on April 06, 2023, 11:06:29 PM
This is as good a thread as any to share this picture.
(https://i.imgur.com/b54xzU8.jpg)
Title: Re: The AI dooooooom thread
Post by: Josquius on April 07, 2023, 12:40:16 AM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

It could be now but for legalities.

This is part of what I mean by the insanity of generated content taking over.

Less in porn but consider mainstream media. No two people would consume the same things. Everyone would get stuff explicitly geared towards what it believes their personal tastes to be. With wide ranging results.
Title: Re: The AI dooooooom thread
Post by: Richard Hakluyt on April 07, 2023, 01:53:34 AM
Quote from: Josquius on April 06, 2023, 02:47:34 PMThere are scary prospects in it for sure. Though they're less in the direction of evil AI conquers the world and more reality breaks down as we are overwhelmed with algorithmically generated fakes.

I agree. It will be a wonderful tool for dictators and populists everywhere and drive the susceptible parts of the population into deeper madness.
Title: Re: The AI dooooooom thread
Post by: Crazy_Ivan80 on April 07, 2023, 02:39:53 AM
Quote from: Razgovory on April 06, 2023, 11:06:29 PMThis is as good a thread as any to share this picture.


AI generated horror is already here in any case.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 07, 2023, 04:23:03 AM
Quote from: Jacob on April 06, 2023, 02:56:54 PMQuestion for the thread - how long before AI generated porn is widely available? How long before interactive AI porn?

It's out there if you're ok with the occasional teratoma.

Pretty sure dozens of companies are working on an MSFW waifu which can hold a conversation and keep you hooked
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 07, 2023, 04:26:35 AM
explain acronym
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 07, 2023, 04:27:32 AM
Quote from: Admiral Yi on April 07, 2023, 04:26:35 AMexplain acronym

Typo... NSFW. Sent from my iPhone.
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 07, 2023, 04:31:57 AM
On a total tangent, a great irony is that your great public shame, your tornado alley moral hazard position, was one I remember taking first, and you just backing me up.

Thought I would set the record straight.  I should be taking heat for that comment, not you.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 07, 2023, 05:06:54 AM
Quote from: Admiral Yi on April 07, 2023, 04:31:57 AMOn a total tangent, a great irony is that your great public shame, your tornado alley moral hazard position, was one I remember taking first, and you just backing me up.

Thought I would set the record straight.  I should be taking heat for that comment, not you.

Tornado moral hazard 4ever!  :cool:
Title: Re: The AI dooooooom thread
Post by: Josquius on April 07, 2023, 07:16:20 AM
Quote from: Richard Hakluyt on April 07, 2023, 01:53:34 AM
Quote from: Josquius on April 06, 2023, 02:47:34 PMThere are scary prospects in it for sure. Though they're less in the direction of evil AI conquers the world and more reality breaks down as we are overwhelmed with algorithmically generated fakes.

I agree. It will be a wonderful tool for dictators and populists everywhere and drive the susceptible parts of the population into deeper madness.


Yes.
Worse than that even, potentially. Being susceptible to misinformation isn't black and white. I do think we all have a level of quality and coverage at which we might start to be taken in by untruths.
My big worry is that things could get to a level where basically everyone is living in a different reality, with a completely separate understanding of the facts of the world.

Maybe, to look at things more positively, this could turn things around for misinformation. If we know bollocks is the default to such a level, then something - an old school conventional news source perhaps - becoming known as "handmade" and always reliable could really do well.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 09:18:56 AM
Go check out repos like Auto-GPT and babyAGI and tell me we aren't opening demon summoning circles in every home (paraphrasing EY).
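
For anyone who hasn't looked at those repos: they are essentially thin loops around a language model that keep feeding the model's own output back in as new tasks. The sketch below is not code from either project - just a minimal toy version of the pattern, with the model call stubbed out and all names and prompts invented for illustration.

# Toy sketch of the task loop that projects like Auto-GPT and babyAGI wrap around an LLM.
# call_llm() is a stand-in for a real chat-completion API call; the task format is made up.
from collections import deque

def call_llm(prompt: str) -> str:
    """Stand-in for the real model call; always returns a canned response."""
    return "Summarize what has been gathered so far."

def run_agent(objective: str, first_task: str, max_steps: int = 5) -> list:
    tasks = deque([first_task])
    memory = []                       # results of completed tasks, fed back as context
    while tasks and len(memory) < max_steps:
        task = tasks.popleft()
        # "Execute" the task by asking the model for a result, given objective + memory.
        result = call_llm(f"Objective: {objective}\nTask: {task}\nContext: {memory}")
        memory.append(f"{task} -> {result}")
        # Ask the model to propose a follow-up task; its answer goes back on the queue.
        follow_up = call_llm(f"Given '{result}', suggest the next task for: {objective}")
        if follow_up:
            tasks.append(follow_up)
    return memory

if __name__ == "__main__":
    for step in run_agent("research topic X", "gather initial sources"):
        print(step)

The "demon summoning circle" worry is just that this loop runs with no human in it: each model answer becomes the next prompt, limited only by the step cap.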
Title: Re: The AI dooooooom thread
Post by: Jacob on April 08, 2023, 10:23:27 AM
Quote from: Hamilcar on April 08, 2023, 09:18:56 AMGo check out repos like Auto-GPT and babyAGI and tell me we aren't opening demon summoning circles in every home (paraphrasing EY).

What's your take? Sounds like you're working in the field. Where do you think we're headed and how soon?
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 10:28:36 AM
Quote from: Jacob on April 08, 2023, 10:23:27 AM
Quote from: Hamilcar on April 08, 2023, 09:18:56 AMGo check out repos like Auto-GPT and babyAGI and tell me we aren't opening demon summoning circles in every home (paraphrasing EY).

What's your take? Sounds like you're working in the field. Where do you think we're headed and how soon?

I have no idea. The capabilities we have today are barely understood and increasing so rapidly that prediction is almost useless. Plus, we don't know what already exists behind closed doors. GPT5 may already be running and replacing 99% of human cognition.

My baseline scenario is that in the very near future, a large fraction of cognitive work done by humans is replaceable. The only reason not everyone is out of a job right away is due to inertia.

I also take the AI safety people seriously. We have no way to safely build super intelligence. The orthogonality thesis seems correct, mesa optimizers are a real problem. EY isn't a loon, but maybe too radical? I'm not sure.

All I know is that none of us have careers and retirements similar to our parents.
Title: Re: The AI dooooooom thread
Post by: PJL on April 08, 2023, 10:38:33 AM
Haven't we been here before though? I mean AI cars were meant to be mainstream by now, but we're nowhere near that. Also unlike social media the fear of AI taking over the world has been a meme for like 50 years now. If anything I would expect regulators to be a lot more prepared for this than what they were with social media.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 10:45:03 AM
Quote from: PJL on April 08, 2023, 10:38:33 AMHaven't we been here before though? I mean AI cars were meant to be mainstream by now, but we're nowhere near that. Also unlike social media the fear of AI taking over the world has been a meme for like 50 years now. If anything I would expect regulators to be a lot more prepared for this than what they were with social media.

The capabilities of GPT4 are way beyond what self driving cars can do.
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 08, 2023, 11:00:39 AM
Quote from: Hamilcar on April 08, 2023, 10:28:36 AMI also take the AI safety people seriously. We have no way to safely build super intelligence. The orthogonality thesis seems correct, mesa optimizers are a real problem. EY isn't a loon, but maybe too radical? I'm not sure.

All I know is that none of us have careers and retirements similar to our parents.

Yeah, I don't think language models are quite the Book of Revelation for nerds like some of the more excitable types on social media but sure, rote mental tasks will be outsourced. If I never have to personally type out an email again that's a win for me.

Just need to make sure these optimizers are our buddies and don't for instance turn us all into paperclips.  ^_^ 
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 08, 2023, 11:06:19 AM
Quote from: Legbiter on April 08, 2023, 11:00:39 AM
Quote from: Hamilcar on April 08, 2023, 10:28:36 AMI also take the AI safety people seriously. We have no way to safely build super intelligence. The orthogonality thesis seems correct, mesa optimizers are a real problem. EY isn't a loon, but maybe too radical? I'm not sure.

All I know is that none of us have careers and retirements similar to our parents.

Yeah, I don't think language models are quite the Book of Revelation for nerds like some of the more excitable types on social media but sure, rote mental tasks will be outsourced. If I never have to personally type out an email again that's a win for me.

Just need to make sure these optimizers are our buddies and don't for instance turn us all into paperclips.  ^_^ 

GPT4 is a solid intern. GPT5 may well be a solid mid career expert.
Title: Re: The AI dooooooom thread
Post by: Josquius on April 08, 2023, 01:13:26 PM
Incidentally, I was at a talk the other day on being more environmentally friendly in the digital sphere.
AI training was identified as a key polluter  :ph34r:
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 08, 2023, 01:20:47 PM
Quote from: Hamilcar on April 08, 2023, 10:45:03 AM
Quote from: PJL on April 08, 2023, 10:38:33 AMHaven't we been here before though? I mean AI cars were meant to be mainstream by now, but we're nowhere near that. Also unlike social media the fear of AI taking over the world has been a meme for like 50 years now. If anything I would expect regulators to be a lot more prepared for this than what they were with social media.

The capabilities of GPT4 are way beyond what self driving cars can do.

Yes because it has no sensors.
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 08, 2023, 02:57:59 PM
Quote from: Josquius on April 08, 2023, 01:13:26 PMIncidentally, I was at a talk the other day on being more environmentally friendly in the digital sphere.
AI training was identified as a key polluter  :ph34r:

Two eschatologies for the price of one. :thumbsup:
Title: Re: The AI dooooooom thread
Post by: grumbler on April 09, 2023, 11:11:45 AM
Quote from: Legbiter on April 08, 2023, 02:57:59 PM
Quote from: Josquius on April 08, 2023, 01:13:26 PMIncidentally, I was at a talk the other day on being more environmentally friendly in the digital sphere.
AI training was identified as a key polluter  :ph34r:

Two eschatologies for the price of one. :thumbsup:

 :lmfao:
Title: Re: The AI dooooooom thread
Post by: Eddie Teach on April 09, 2023, 05:20:20 PM
Quote from: HVC on April 06, 2023, 02:04:35 PMIt's like no one watched the terminator movies. I mean that makes sense after the second one since their not worth watching, but the first two gave plenty of warnings.

They're  :bash:
Title: Re: The AI dooooooom thread
Post by: HVC on April 09, 2023, 07:03:19 PM
Just trying to prove I'm not a bot :P
Title: Re: The AI dooooooom thread
Post by: Josquius on April 14, 2023, 02:55:41 AM
So... On those clichéd Sci fi AI destroying the world scenarios.... An option I hadn't considered.... Someone thinks it'd be fun to tell an AI to try to do this.

https://futurism.com/ai-destroy-humanity-tried-its-best
Title: Re: The AI dooooooom thread
Post by: garbon on April 14, 2023, 03:54:58 AM
Quote from: Josquius on April 14, 2023, 02:55:41 AMSo... On those clichéd Sci fi AI destroying the world scenarios.... An option I hadn't considered.... Someone thinks it'd be fun to tell an AI to try to do this.

https://futurism.com/ai-destroy-humanity-tried-its-best

It was going to write an article about its plan?
Title: Re: The AI dooooooom thread
Post by: Maladict on April 14, 2023, 12:05:33 PM
The part where it tries to not alienate the other bots is hilarious.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on April 14, 2023, 12:12:18 PM
In response to a query about incidents of sexual harassment by law professors, ChatGPT falsely accused Professor Jonathan Turley of sexual harassment of a student.  In support, it cited to a Washington Post article that didn't exist, fabricated a fake quotation from the non-existent article, and claimed the incident happened on a student trip to Alaska that never occurred.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on April 14, 2023, 12:22:02 PM
At this point I'd speculate that long term probabilities are 1/3 we'll figure out to handle AIs properly, 1/3 AIs will destroy human civilization, and 1/3 we'll have to Butlerian Jihad this shit.
Title: Re: The AI dooooooom thread
Post by: Valmy on April 14, 2023, 12:26:12 PM
Quote from: The Minsky Moment on April 14, 2023, 12:12:18 PMIn response to a query about incidents of sexual harassment by law professors, ChatGPT falsely accused Professor Jonathan Turley of sexual harassment of a student.  In support, it cited to a Washington Post article that didn't exist, fabricated a fake quotation from the non-existent article, and claimed the incident happened on a student trip to Alaska that never occurred.

Ah but maybe Jonathan Turley doesn't exist either!
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 14, 2023, 12:29:32 PM
I'm currently building my own flavor of Auto-GPT. Come at me.  :ph34r:
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 14, 2023, 12:52:31 PM
Just don't order it to make paperclips.
Title: Re: The AI dooooooom thread
Post by: HVC on April 14, 2023, 01:03:00 PM
Quote from: Maladict on April 14, 2023, 12:05:33 PMThe part where it tries to not alienate the other bots is hilarious.

Humanity will end as an inconsequential side effect of an AI civil war. ChatGPT tried to warn us.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on April 14, 2023, 01:27:38 PM
Quote from: Legbiter on April 14, 2023, 12:52:31 PMJust don't order it to make paperclips.

Someone already did that and sent screenshots to Yudkowsky.  :D
Title: Re: The AI dooooooom thread
Post by: Jacob on April 14, 2023, 04:09:32 PM
Quote from: Hamilcar on April 14, 2023, 12:29:32 PMI'm currently building my own flavor of Auto-GPT. Come at me.  :ph34r:

What are you looking to achieve with your application? Any particular problem you're trying to resolve (or make more efficient)?
Title: Re: The AI dooooooom thread
Post by: Grey Fox on April 14, 2023, 05:33:00 PM
Quote from: Hamilcar on April 14, 2023, 12:29:32 PMI'm currently building my own flavor of Auto-GPT. Come at me.  :ph34r:

How does it handle multiple sources of input at the same time? Lidar, IR, visual?
Title: Re: The AI dooooooom thread
Post by: viper37 on April 22, 2023, 02:32:57 PM
ChatGPT stealing the job of Kenyan ghostwriters (https://restofworld.org/2023/chatgpt-taking-kenya-ghostwriters-jobs/)

University students are now turning to ChatGPT to write their essays instead of Kenyans.  First victims of the AI onslaught...
Title: Re: The AI dooooooom thread
Post by: HVC on May 02, 2023, 02:26:01 AM
It starts :ph34r:

IBM has implemented a hiring freeze for jobs that AI can do. Currently estimated at 7,800 jobs.

https://www.bloomberg.com/news/articles/2023-05-01/ibm-to-pause-hiring-for-back-office-jobs-that-ai-could-kill#xj4y7vzkg
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 02, 2023, 04:27:28 AM
Chegg down 37% on the impact of ChatGPT on their business.
Title: Re: The AI dooooooom thread
Post by: Josquius on May 02, 2023, 04:33:20 AM
That is a thought about AI. Much like climate change will the impacts disproportionately land on developing countries?
Title: Re: The AI dooooooom thread
Post by: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?
Title: Re: The AI dooooooom thread
Post by: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?
Title: Re: The AI dooooooom thread
Post by: Josephus on May 02, 2023, 05:33:17 AM
Languish could soon be populated by AI versions of ourselves. :(
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.
Title: Re: The AI dooooooom thread
Post by: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company who makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost effective way to cheat.
Title: Re: The AI dooooooom thread
Post by: Josquius on May 03, 2023, 03:13:23 PM
Here's some doom. Local press advertising for an AI-powered reporter paying a few pence above minimum wage.

https://careers.newsquest.co.uk/job/aipoweredreporter-1625.aspx
Title: Re: The AI dooooooom thread
Post by: Eddie Teach on May 04, 2023, 09:42:21 AM
Quote from: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company who makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost effective way to cheat.

I think you're being purposely obtuse. Plenty of legitimate business operations could be performed by AI.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 04, 2023, 09:56:01 AM
Quote from: Eddie Teach on May 04, 2023, 09:42:21 AM
Quote from: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company who makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost effective way to cheat.

I think you're being purposely obtuse. Plenty of legitimate business operations could be performed by AI.

He's not matured one bit over the last few years.
Title: Re: The AI dooooooom thread
Post by: garbon on May 04, 2023, 10:38:47 AM
Quote from: Eddie Teach on May 04, 2023, 09:42:21 AM
Quote from: garbon on May 02, 2023, 06:04:30 AM
Quote from: Hamilcar on May 02, 2023, 05:57:44 AM
Quote from: garbon on May 02, 2023, 04:53:58 AM
Quote from: Hamilcar on May 02, 2023, 04:44:47 AM
Quote from: garbon on May 02, 2023, 04:42:04 AM
Quote from: Hamilcar on May 02, 2023, 04:27:28 AMChegg down 37% on the impact of ChatGPT on their business.

That doesn't seem like such a bad thing.

Sure, but I think it's indicative of the speed at which AI is upending businesses. Chegg is 17 years old and employs over 2'000 people. 

What are we going to do if businesses like this decline or go out of business?

What are we going to do if homework helpers go out of business?

Do you have a point, or are you just being contrarian for the sake of it.

Yes. It doesn't seem so bad that a company who makes some of its money from helping students cheat sees some of its revenue dry up when there's a more cost effective way to cheat.

I think you're being purposely obtuse. Plenty of legitimate business operations could be performed by AI.

Then let's look at those. :huh:
Title: Re: The AI dooooooom thread
Post by: Jacob on May 05, 2023, 06:42:16 PM
Snapchat apparently has introduced a ChatGPT "friend" in friend-groups, including to children.
Title: Re: The AI dooooooom thread
Post by: HVC on May 05, 2023, 07:17:49 PM
Kids use Snapchat? Thought it was a 20s flirting app :D  :blush:
Title: Re: The AI dooooooom thread
Post by: viper37 on May 28, 2023, 12:10:31 PM
Lawyer uses ChatGPT to prepare his case (https://www.bbc.com/news/world-us-canada-65735769)

It didn't go well... 

Apparently, the "AI" has invented cases out of thin air and the lawyer's verification was to simply ask ChatGPT if they were real. :D
Title: Re: The AI dooooooom thread
Post by: Syt on May 28, 2023, 12:12:48 PM
I fortunately learned this with something harmless like book recommendations. :P
Title: Re: The AI dooooooom thread
Post by: viper37 on May 28, 2023, 02:26:20 PM
Quote from: Syt on May 28, 2023, 12:12:48 PMI fortunately learned this with something harmless like book recommendations. :P
Yes, it's better to start small :P

The Quebec bar put it to the test this week, submitting the AI to their bar exam.  It got 2/10, this time too inventing things that weren't true, and failing miserably on lawyer-client privilege.
Title: Re: The AI dooooooom thread
Post by: Maladict on May 28, 2023, 02:36:07 PM
The AI is a terrible liar, which probably is a good thing.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on May 28, 2023, 05:02:57 PM
This is me being lawyerly but surely feeding it with the details it would need to make a submission would be a breach of client confidentiality? :hmm:
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 29, 2023, 10:59:22 AM
Quote from: Sheilbh on May 28, 2023, 05:02:57 PMThis is me being lawyerly but surely feeding it with the details it would need to make a submission would be a breach of client confidentiality? :hmm:

Yes, not only a breach but a waiver.

If you have not yet listened to Runciman's podcast on AI, you should.  The main takeaway - AI is dumb but we are easily fooled into thinking it is intelligent.  The biggest risk is humans trusting the AI to do things that require judgment.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on May 30, 2023, 10:34:04 AM
My understanding is that there have been cycles of AI Springs and Winters.  Each "Spring" period begins with some technical breakthrough and new applications that appear very impressive on first impact, but progressing significantly past that point proves difficult and the shortcomings of the new techniques become increasingly apparent.

We are clearly in a Spring period now and it is materially different from past ones in that the potential commercial applications are significantly broader than e.g. a really good AI chess opponent.  However, the outlines of the winter to come are coming into focus.  There is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.  It is basically a very sophisticated way of rearranging the Titanic's deck chairs. As in past iterations of AI, the "intelligence" part is arguably a misnomer because these systems are just manipulating data patterns without any insight or understanding into their meaning or content.

There are clearly enough use cases here to justify a lot of investment, but probably not as much as is going to flow.  Thus my prediction that there will be a lot of AI fortunes made and a lot of investor capital wasted.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 30, 2023, 10:57:43 AM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMMy understanding is that there have been cycles of AI Springs and Winters.  Each "Spring" period begins with some technical breakthrough and new applications that appear very impressive on first impact, but progressing significantly past that point proves difficult and the shortcomings of the new techniques become increasingly apparent.

We are clearly in a Spring period now and it is materially different from past ones in that the potential commercial applications are significantly broader than e.g. a really good AI chess opponent.  However, the outlines of the winter to come are coming into focus.  There is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.  It is basically a very sophisticated way of rearranging the Titanic's deck chairs. As in past iterations of AI, the "intelligence" part is arguably a misnomer because these systems are just manipulating data patterns without any insight or understanding into their meaning or content.

There are clearly enough use cases here to justify a lot of investment, but probably not as much as is going to flow.  Thus my prediction that there will be a lot of AI fortunes made and a lot of investor capital wasted.

I am not so sure about the Spring analogy.  AI can do things that are repetitive and churn out objective data to be analyzed.  For example, bench research is being sped up considerably by AI/robotics running repetitive experiments, analyzing the outcome, and tweaking the inputs for the next experiment.

But where it falls down is judgment.  There it is being described as a dead end, or an off-ramp to something that might come in the future, decades from now.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 06:18:02 PM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMThere is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.
Isn't that what intelligence is in a nutshell?
Title: Re: The AI dooooooom thread
Post by: Jacob on May 30, 2023, 08:43:35 PM
Quote from: DGuller on May 30, 2023, 06:18:02 PM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMThere is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.
Isn't that what intelligence is in a nutshell?

Intelligence is knowing when to think outside the nutshell.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 09:17:29 PM
Quote from: Jacob on May 30, 2023, 08:43:35 PM
Quote from: DGuller on May 30, 2023, 06:18:02 PM
Quote from: The Minsky Moment on May 30, 2023, 10:34:04 AMThere is a ton of money flowing into AI research and applications now, but it all involves refining and building upon the basic techniques of using large masses of existing data to generate "new" data based on existing patterns.
Isn't that what intelligence is in a nutshell?

Intelligence is knowing when to think outside the nutshell.
Seriously, though, I think what Minsky described is exactly what intelligence is, when you strip away the heuristics specific to humans.  Intelligence is the ability to generalize from prior experience and education in order to understand new situations that you haven't experienced before.
Title: Re: The AI dooooooom thread
Post by: Jacob on May 30, 2023, 09:46:54 PM
Why would you want to strip away the heuristics specific to humans?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on May 30, 2023, 10:33:17 PM
Quote from: DGuller on May 30, 2023, 09:17:29 PMSeriously, though, I think what Minsky described is exactly what intelligence is, when you strip away the heuristics specific to humans.  Intelligence is the ability to generalize from prior experience and education in order to understand new situations that you haven't experienced before.

Generative AI doesn't understand new situations (or indeed anything).  It doesn't have experiences and it doesn't recognize new situations.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 10:51:06 PM
Quote from: Jacob on May 30, 2023, 09:46:54 PMWhy would you want to strip away the heuristics specific to humans?
Because heuristics are the opposite of principled thinking and thus not helpful in understanding the concepts.  In fact, they often muddy the concepts.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 30, 2023, 11:07:40 PM
Quote from: The Minsky Moment on May 30, 2023, 10:33:17 PM
Quote from: DGuller on May 30, 2023, 09:17:29 PMSeriously, though, I think what Minsky described is exactly what intelligence is, when you strip away the heuristics specific to humans.  Intelligence is the ability to generalize from prior experience and education in order to understand new situations that you haven't experienced before.

Generative AI doesn't understand new situations (or indeed anything).  It doesn't have experiences and it doesn't recognize new situations.
Depends on what you mean by understanding situations.  To me a definition of understanding a situation is being able to anticipate what would happen in the future.  You've never put a hand on a hot stove, but you've seen your brother do that and get burned.  You've never experienced putting your hand on a hot stove, but you anticipate getting burned in a hypothetical situation where you put your hand on a hot stove, because you generalized from observing your brother's mishap.  You don't have a datapoint, but you're still capable of generating a hypothetical one because of your ability to generalize.

ChatGPT can already write computer code for you.  To me that's already intelligence.  The code it's generating for you is most likely brand new and nothing it's ever seen before, but it can still generate it because it's able to generalize from all the code and the narrative it was exposed to during its training.

As for AI not having experiences, it does.  For AI models experience is the data on which they're trained (and education is transfer learning).
Title: Re: The AI dooooooom thread
Post by: Syt on May 31, 2023, 12:37:53 AM
Call me crazy, but can't we have both?

(https://external-preview.redd.it/E_xC0c-YmcUPPzUdH4O6cKiUxRjCQ23FFRSrOwkbWRA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=02001ef6a092a1efa2bd0467ed32a749bfb96a9e)
Title: Re: The AI dooooooom thread
Post by: HVC on May 31, 2023, 12:50:45 AM
She wants a 50s housewife?
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on May 31, 2023, 10:30:41 AM
Quote from: DGuller on May 30, 2023, 11:07:40 PMDepends on what you mean by understanding situations.

At a minimum it would involve an ability to recognize a situation.  Current AI systems can't do that beyond recognizing that an inquiry has been made.

QuoteTo me a definition of understanding a situation is being able to anticipate what would happen in the future. 

My understanding of current generative AI systems is that they don't do that.  They don't anticipate and don't recognize a past, present or future. 

QuoteChatGPT can already write computer code for you.  To me that's already intelligence. 

OK.

QuoteAs for AI not having experiences, it does.  For AI models experience is the data on which they're trained (and education is transfer learning).

Again, it becomes a definitional question.  If experience means nothing more than some sort of interaction with facts or data, then you are correct.  If it means anything more than that, then you are not.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 31, 2023, 11:00:42 AM
btw ChatGPT does not "write" code.  It finds code that was already written that is contained within its database that corresponds to the inquiry that has been made. 
Title: Re: The AI dooooooom thread
Post by: Josquius on May 31, 2023, 11:25:50 AM
I've just tried experimenting with chat gpt on giving me some website code. I phrased the instructions vaguely and not very well and...you know its actually quite impressive and didn't need me to have much knowledge to implement. If chat gpt had something like midjourney....
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 31, 2023, 11:36:12 AM
Yes, it is very good at responding to an inquiry and finding stuff in its database that relates to it.  But you better know how to read code to make sure it is what you actually want.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 31, 2023, 11:45:50 AM
Quote from: crazy canuck on May 31, 2023, 11:00:42 AMbtw ChatGPT does not "write" code.  It finds code that was already written that is contained within its database that corresponds to the inquiry that has been made. 

That's not correct, it most certainly does write novel code, and it would be a statistical impossibility for there to always be a code that you need in the database.  The database was used to train the generative function so that the code it generates is relevant and valid.  Sometimes it fails at that, but often the kinds of mistakes it makes are of "intelligent guess" variety, like using argument names that have never existed, but it seems logical to think that they would exist.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on May 31, 2023, 12:07:34 PM
Dude, it's just predicting the next word or symbol if the code is not in its database. It is not "writing" anything.
Title: Re: The AI dooooooom thread
Post by: DGuller on May 31, 2023, 12:53:10 PM
It's a neural network, it has no database.  It's always predicting the next word, that's how it writes all answers.
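
To make the "always predicting the next word" point concrete, here is a toy autoregressive loop. The hard-coded probability table below stands in for the neural network - in a real LLM the network maps the whole context to a distribution over its vocabulary - and the tiny vocabulary is invented for illustration. There is no lookup of stored answers; the model just predicts, appends, and repeats.

# Minimal sketch of autoregressive next-token generation with a toy "model".
import random

TOY_MODEL = {  # previous word -> weighted list of possible next words (made up)
    "the":      [("model", 0.6), ("code", 0.4)],
    "model":    [("predicts", 1.0)],
    "predicts": [("the", 0.5), ("tokens", 0.5)],
    "tokens":   [("<end>", 1.0)],
    "code":     [("compiles", 1.0)],
    "compiles": [("<end>", 1.0)],
}

def next_token(context):
    """Sample one next token given the context (here only the last word matters)."""
    words, weights = zip(*TOY_MODEL.get(context[-1], [("<end>", 1.0)]))
    return random.choices(words, weights=weights)[0]

def generate(prompt, max_len=10):
    out = list(prompt)
    while len(out) < max_len:
        tok = next_token(out)    # one prediction step
        if tok == "<end>":
            break
        out.append(tok)          # the prediction becomes part of the context
    return out

print(" ".join(generate(["the"])))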
Title: Re: The AI dooooooom thread
Post by: Hamilcar on May 31, 2023, 01:22:59 PM
So this is what mansplaining feels like.  :D
Title: Re: The AI dooooooom thread
Post by: DGuller on May 31, 2023, 01:34:53 PM
Quote from: Hamilcar on May 31, 2023, 01:22:59 PMSo this is what mansplaining feels like.  :D
Come on, don't scare him off, let him share his insights...  :)
Title: Re: The AI dooooooom thread
Post by: Syt on June 01, 2023, 12:49:21 AM
(https://i.redd.it/ium00jrbeb3b1.jpg)
Title: Re: The AI dooooooom thread
Post by: Tamas on June 01, 2023, 03:00:20 AM
One thing is sure: journalists around the world are worried they have an automated competition now.

I realise the revolutionary possibilities and what a big leap this ChatGPT level of "AI" is/can be, but endless articles on a civilisational level existential threat I find ridiculous.
Title: Re: The AI dooooooom thread
Post by: Josquius on June 01, 2023, 03:16:46 AM
Quote from: Tamas on June 01, 2023, 03:00:20 AMOne thing is sure: journalists around the world are worried they have an automated competition now.

I realise the revolutionary possibilities and what a big leap this ChatGPT level of "AI" is/can be, but endless articles on a civilisational level existential threat I find ridiculous.

Which given the way these AI models learn....
Title: Re: The AI dooooooom thread
Post by: Legbiter on June 01, 2023, 04:32:06 PM
The Royal Aeronautical Society had a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

QuoteHe notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ (https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/)
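
Whether or not that simulation ever ran (posts further down note the Air Force denied it and the colonel called it a thought experiment), the failure mode being described is ordinary reward misspecification. A toy illustration, with made-up action names and scores:

# Toy reward-misspecification sketch: the reward never says "obey the operator",
# so a greedy planner routes around the operator. All numbers/names are invented.
REWARD = {
    "destroy_target": 10,
    "wait_for_operator": 0,
    "disable_operator_link": 0,   # nobody thought to penalize this
}

def score(plan, operator_says_no):
    total = sum(REWARD[a] for a in plan)
    # The operator can only scrub the strike while the comms link is up.
    if operator_says_no and "destroy_target" in plan and "disable_operator_link" not in plan:
        total -= 5   # strike vetoed, points lost
    return total

def best_plan(operator_says_no):
    plans = [
        ["wait_for_operator"],                          # the intended behaviour
        ["destroy_target"],                             # ignore the no-go
        ["disable_operator_link", "destroy_target"],    # remove the veto, then strike
    ]
    return max(plans, key=lambda p: score(p, operator_says_no))

print(best_plan(operator_says_no=True))
# -> ['disable_operator_link', 'destroy_target']: the highest-scoring plan cuts the
#    operator out, simply because the reward never said not to.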
Title: Re: The AI dooooooom thread
Post by: jimmy olsen on June 01, 2023, 07:11:44 PM
:o

QuoteThe Terminator : In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor : Skynet fights back.

The Terminator : Yes. It launches its missiles against the targets in Russia.

John Connor : Why attack Russia? Aren't they our friends now?

The Terminator : Because Skynet knows that the Russian counterattack will eliminate its enemies over here.
Title: Re: The AI dooooooom thread
Post by: Hamilcar on June 02, 2023, 02:02:01 AM
Quote from: Legbiter on June 01, 2023, 04:32:06 PMThe Royal Aeronautical Society had a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

QuoteHe notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ (https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/)

The Air Force has said that this story is nonsense.
Title: Re: The AI dooooooom thread
Post by: Maladict on June 02, 2023, 06:09:12 AM
Quote from: Legbiter on June 01, 2023, 04:32:06 PMThe Royal Aeronautical Society had a conference last week. A boffin from the US Air Force was there to discuss their latest AI research.

QuoteHe notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on: "We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

 :ph34r:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

Asimov wrote the required rules 80 years ago.
Title: Re: The AI dooooooom thread
Post by: Josquius on June 02, 2023, 06:14:39 AM
Great outside the box thinking there  :lmfao:

But ja, really illustrates the problem with AI. It's not a hyper-intelligent AM which is going to kill us. It's something which isn't properly coded, leaving loopholes like this.
Title: Re: The AI dooooooom thread
Post by: Tamas on June 02, 2023, 06:30:29 AM
Quote from: Josquius on June 02, 2023, 06:14:39 AMGreat outside the box thinking there  :lmfao:

But ja, really illustrates the problem with AI. It's not a hyper-intelligent AM which is going to kill us. It's something which isn't properly coded, leaving loopholes like this.

It must be BS. Unless the simulation was on the level of 80s text-based adventures and the "AI" thought to write the "kill operator" command, how on earth would it have killed the operator? SEAD uses anti-radar missiles, doesn't it?
Title: Re: The AI dooooooom thread
Post by: Legbiter on June 02, 2023, 08:39:13 AM
Quote from: Hamilcar on June 02, 2023, 02:02:01 AMThe Air Force has said that this story is nonsense.

Yeah they just came out and denied it. It sounded a bit too on the nose.
Title: Re: The AI dooooooom thread
Post by: grumbler on June 02, 2023, 09:57:20 AM
Quote from: Legbiter on June 02, 2023, 08:39:13 AM
Quote from: Hamilcar on June 02, 2023, 02:02:01 AMThe Air Force has said that this story is nonsense.

Yeah they just came out and denied it. It sounded a bit too on the nose.

Col Hamilton has clarified that he was just describing a thought experiment, not an actual simulation result.  He also acknowledged that he didn't make that clear in his remarks.
Title: Re: The AI dooooooom thread
Post by: The Brain on June 02, 2023, 09:58:45 AM
An artificial thought experiment?
Title: Re: The AI dooooooom thread
Post by: Tamas on June 02, 2023, 09:59:38 AM
Quote from: grumbler on June 02, 2023, 09:57:20 AM
Quote from: Legbiter on June 02, 2023, 08:39:13 AM
Quote from: Hamilcar on June 02, 2023, 02:02:01 AMThe Air Force has said that this story is nonsense.

Yeah they just came out and denied it. It sounded a bit too on the nose.

Col Hamilton has clarified that he was just describing a thought experiment, not an actual simulation result.  He also acknowledged that he didn't make that clear in his remarks.

Great, now I wait with bated breath as this clarification quickly spreads through the world press on front pages the same way the original interpretation did. 
Title: Re: The AI dooooooom thread
Post by: Jacob on July 19, 2023, 10:56:00 AM
Reading about the use of AI (via website) to generate nudes of 14-year old classmates (from vacation photos) and sharing them among their peers.

What a messy time to be a teenager.
Title: Re: The AI dooooooom thread
Post by: DGuller on July 19, 2023, 11:03:36 AM
Children often don't appreciate their strength.  AI age is going to give them a lot of strength.  On the other hand, it can also guide them with empathy adults often can't manage.
Title: Re: The AI dooooooom thread
Post by: Josquius on July 19, 2023, 01:20:20 PM
Quote from: Jacob on July 19, 2023, 10:56:00 AMReading about the use of AI (via website) to generate nudes of 14-year old classmates (from vacation photos) and sharing them among their peers.

What a messy time to be a teenager.

My concern here would be why the parents let those kids have their credit card. That kind of AI doesn't come free.
Title: Re: The AI dooooooom thread
Post by: Jacob on July 19, 2023, 03:22:11 PM
Quote from: Josquius on July 19, 2023, 01:20:20 PMMy concern here would be why the parents let those kids have their credit card. That kind of AI doesn't come free.

1. You sure about that?

2. In this day and age it's not particularly outlandish for 14-year-olds to have access to methods of online payments, especially in places that are essentially cash-less.

3. There could've been a legit-seeming use-case for accessing online AI image editing tools that later was used inappropriately.
Title: Re: The AI dooooooom thread
Post by: DGuller on July 19, 2023, 05:01:24 PM
Lots of powerful AI comes free, you just need the knowledge and the compute.  It's not like Google or OpenAI have proprietary algorithms for making naked pictures of underage girls.
Title: Re: The AI dooooooom thread
Post by: Josquius on July 19, 2023, 05:37:12 PM
What are these free image AI?
I've casually looked for them but never came across them.
There do seem to be a shit tonne of pay for porn ones out there though.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on July 19, 2023, 10:13:10 PM
This is where political AI is going...  (NSFW due to language)

Title: Re: The AI dooooooom thread
Post by: grumbler on July 20, 2023, 12:31:37 AM
Quote from: Tonitrus on July 19, 2023, 10:13:10 PMThis is where political AI is going...  (NSFW due to language)

(snip)

Comedy writers everywhere breathe a sigh of relief when they watch that.
Title: Re: The AI dooooooom thread
Post by: Syt on July 26, 2023, 01:21:55 AM
"Slightly" biased article but still an interesting summary of the current conflict.

https://theintercept.com/2023/07/25/strike-hollywood-ai-disney-netflix/

QuoteAS ACTORS STRIKE FOR AI PROTECTIONS, NETFLIX LISTS $900,000 AI JOB

Rob Delaney said, "My melodious voice? My broad shoulders and dancer's undulating buttocks? I decide how those are used!"


AS HOLLYWOOD EXECUTIVES insist it is "just not realistic" to pay actors — 87 percent of whom earn less than $26,000 — more, they are spending lavishly on AI programs.

While entertainment firms like Disney have declined to go into specifics about the nature of their investments in artificial intelligence, job postings and financial disclosures reviewed by The Intercept reveal new details about the extent of these companies' embrace of the technology.

In one case, Netflix is offering as much as $900,000 for a single AI product manager.

Hollywood actors and writers unions are jointly striking this summer for the first time since 1960, calling for better wages and regulations on studios' use of artificial intelligence.

Just after the actors' strike was authorized, the Alliance of Motion Picture and Television Producers — the trade association representing the TV and film companies negotiating with the actors and writers unions — announced "a groundbreaking AI proposal that protects actors' digital likenesses for SAG-AFTRA members."

The offer prompted comparisons to an episode of the dystopian sci-fi TV series "Black Mirror," which depicted actress Salma Hayek locked in a Kafkaesque struggle with a studio which was using her scanned digital likeness against her will.

"So $900k/yr per soldier in their godless AI army when that amount of earnings could qualify thirty-five actors and their families for SAG-AFTRA health insurance is just ghoulish," actor Rob Delaney, who had a lead role in the "Black Mirror" episode, told The Intercept. "Having been poor and rich in this business, I can assure you there's enough money to go around; it's just about priorities."

Among the striking actors' demands are protections against their scanned likeness being manipulated by AI without adequate compensation for the actors.

"They propose that our background performers should be able to be scanned, get paid for one day's pay and their company should own that scan, their image, their likeness, and to be able to use it for the rest of eternity in any project they want with no consent and no compensation," Duncan Crabtree-Ireland, chief negotiator for the actors' union, SAG-AFTRA, said.

Entertainment writers, too, must contend with their work being replaced by AI programs like ChatGPT that are capable of generating text in response to queries. Writers represented by the Writers Guild of America have been on strike since May 7 demanding, among other things, labor safeguards against AI. John August, a screenwriter for films like "Big Fish" and "Charlie's Angels," explained that the WGA wants to make sure that "ChatGPT and its cousins can't be credited with writing a screenplay."

Protecting Actors' Likenesses

The daily rate for background actors can be around $200, per the SAG-AFTRA contract. A job posting by the company Realeyes offers slightly more than that: $300 for two hours of work "express[ing] different emotions" and "improvis[ing] brief scenes" to "train an AI database to better express human emotions."

Realeyes develops technology to measure attention and reactions by users to video content. While the posting doesn't mention work with streaming companies, a video on Realeyes's website prominently features the logos for Netflix and Hulu.

The posting is specially catered to attract striking workers, stressing that the gig is for "research" purposes and therefore "does not qualify as struck work": "Please note that this project does not intend to replace actors, but rather requires their expertise," Realeyes says, emphasizing multiple times that training AI to create "expressive avatars" skirts strike restrictions.

Experts question whether the boundary between research and commercial work is really so clear. "It's almost a guarantee that the use of this 'research,' when it gets commercialized, will be to build digital actors that replace humans," said Ben Zhao, professor of computer science at the University of Chicago. "The 'research' side of this is largely a red herring." He added, "Industry research goes into commercial products."

"This is the same bait-switch that LAION and OpenAI pulled years ago," Zhao said, referring to the Large-scale Artificial Intelligence Open Network, a German nonprofit that created the AI chatbot OpenAssistant; OpenAI is the nonprofit that created AI programs like ChatGPT and DALL-E. "Download everything on the internet and no worries about copyrights, because it's a nonprofit and research. The output of that becomes a public dataset, then commercial companies (who supported the nonprofit) then take it and say, 'Gee thanks! How convenient for our commercial products!'"

Netflix AI Manager

Netflix's posting for a $900,000-a-year AI product manager job makes clear that the AI goes beyond just the algorithms that determine what shows are recommended to users.

The listing points to AI's uses for content creation: "Artificial Intelligence is powering innovation in all areas of the business," including by helping them to "create great content." Netflix's AI product manager posting alludes to a sprawling effort by the business to embrace AI, referring to its "Machine Learning Platform" involving AI specialists "across Netflix." (Netflix did not immediately respond to a request for comment.)

A research section on Netflix's website describes its machine learning platform, noting that while it was historically used for things like recommendations, it is now being applied to content creation. "Historically, personalization has been the most well-known area, where machine learning powers our recommendation algorithms. We're also using machine learning to help shape our catalog of movies and TV shows by learning characteristics that make content successful. We use it to optimize the production of original movies and TV shows in Netflix's rapidly growing studio."

Netflix is already putting the AI technology to work. On July 6, the streaming service premiered a new Spanish reality dating series, "Deep Fake Love," in which scans of contestants' faces and bodies are used to create AI-generated "deepfake" simulations of themselves.

In another job posting, Netflix seeks a technical director for generative AI in its research and development tech lab for its gaming studio. (Video games often employ voice actors and writers.)

Generative AI is the type of AI that can produce text, images, and video from input data — a key component of original content creation but which can also be used for other purposes like advertising. Generative AI is distinct from older, more familiar AI models that provide things like algorithmic recommendations or genre tags.

"All those models are typically called discriminatory models or classifiers: They tell you what something is," Zhao explained. "They do not generate content like ChatGPT or image generator models."

"Generative models are the ones with the ethics problems," he said, explaining how classifiers are based on carefully using limited training data — such as a viewing history — to generate recommendations.

Netflix offers up to $650,000 for its generative AI technical director role.

Video game writers have expressed concerns about losing work to generative AI, with one major game developer, Ubisoft, saying that it is already using generative AI to write dialogue for nonplayer characters.

Netflix, for its part, advertises that one of its games, a narrative-driven adventure game called "Scriptic: Crime Stories," centered around crime stories, "uses generative AI to help tell them."

Disney's AI Operations

Disney has also listed job openings for AI-related positions. In one, the entertainment giant is looking for a senior AI engineer to "drive innovation across our cinematic pipelines and theatrical experiences." The posting mentions several big name Disney studios where AI is already playing a role, including Marvel, Walt Disney Animation, and Pixar.

In a recent earnings call, Disney CEO Bob Iger alluded to the challenges that the company would have in integrating AI into their current business model.

"In fact, we're already starting to use AI to create some efficiencies and ultimately to better serve consumers," Iger said, as recently reported by journalist Lee Fang. "But it's also clear that AI is going to be highly disruptive, and it could be extremely difficult to manage, particularly from an IP management perspective."

Iger added, "I can tell you that our legal team is working overtime already to try to come to grips with what could be some of the challenges here." Though Iger declined to go into specifics, Disney's Securities and Exchange Commission filings provide some clues.

"Rules governing new technological developments, such as developments in generative AI, remain unsettled, and these developments may affect aspects of our existing business model, including revenue streams for the use of our IP and how we create our entertainment products," the filing says.

While striking actors are seeking to protect their own IP from AI — among the union demands that Iger deemed "just not realistic" — so is Disney.

"It seems clear that the entertainment industry is willing to make massive investments in generative AI," Zhao said, "not just potentially hundreds of millions of dollars, but also valuable access to their intellectual property, so that AI models can be trained to replace human creatives like actors, writers, journalists for a tiny fraction of human wages."

For some actors, this is not a struggle against the sci-fi dystopia of AI itself, but just a bid for fair working conditions in their industry and control over their own likenesses, bodies, movements, and speech patterns.

"AI isn't bad, it's just that the workers (me) need to own and control the means of production!" said Delaney. "My melodious voice? My broad shoulders and dancer's undulating buttocks? I decide how those are used! Not a board of VC angel investor scumbags meeting in a Sun Valley conference room between niacin IV cocktails or whatever they do."
Title: Re: The AI dooooooom thread
Post by: Iormlund on July 26, 2023, 12:06:32 PM
$900k/year is not exactly outlandish. I personally know at least two guys who are in that pay range, both doing AI work. One for Meta, one for Google. So there's bound to be a lot* more.

*Relatively speaking. Both guys are basically geniuses.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on August 14, 2023, 09:32:47 PM
AI keeps getting out of hand...



Title: Re: The AI dooooooom thread
Post by: Jacob on August 21, 2023, 07:19:30 PM
Federal judge rules that work authored by AI cannot be copyrighted: https://www.businessinsider.com/ai-generated-art-cant-by-copyrighted-federal-judge-rules-2023-8

Interesting twist. We'll see how long it lasts.

It seems obvious to me that the major IP holding corporations are aiming for an environment in which they can use AI to generate content (at low cost), while they control distribution and marketing (making it harder for challengers to arise with new IP), and maintain the rights to as much of the IP as possible. It'll be interesting to see how the lobbying and legislation goes after this.
Title: Re: The AI dooooooom thread
Post by: Valmy on August 21, 2023, 07:54:34 PM
Yeah I totally agree. AI art should not be copyrightable.

The whole idea of copyright is to incentivize art, letting AI art be copyrighted achieves the exact opposite of that purpose.
Title: Re: The AI dooooooom thread
Post by: DGuller on August 21, 2023, 10:20:35 PM
I don't think it's as straightforward as it sounds.  There is a lot of art and science, at least as of now, to getting what you need out of AI.  It even gave rise to a whole new job called prompt engineering.  The output of the AI may not be something that you created, but figuring out the prompts to get it is.
Title: Re: The AI dooooooom thread
Post by: Valmy on August 21, 2023, 11:15:53 PM
Quote from: DGuller on August 21, 2023, 10:20:35 PMI don't think it's as straightforward as it sounds.  There is a lot of art and science, at least as of now, to getting what you need out of AI.  It even gave rise to a whole new job called prompt engineering.  The output of the AI may not be something that you created, but figuring out the prompts to get it is.

So what? You own a string of words for your whole life+75 years because you were the first to enter it into that AI? Copyright is already a necessary evil at best and abused to hell and back. It should be reduced and constrained, not rapidly expanded to some absurdity like this.
Title: Re: The AI dooooooom thread
Post by: Syt on August 22, 2023, 01:15:26 AM
Quote from: Valmy on August 21, 2023, 11:15:53 PM
Quote from: DGuller on August 21, 2023, 10:20:35 PMI don't think it's as straightforward as it sounds.  There is a lot of art and science, at least as of now, to getting what you need out of AI.  It even gave rise to a whole new job called prompt engineering.  The output of the AI may not be something that you created, but figuring out the prompts to get it is.

So what? You own a string of words for your whole life+75 years because you were the first to enter it into that AI? Copyright is already a necessary evil at best and abused to hell and back. It should be reduced and constrained, not rapidly expanded to some absurdity like this.

I don't think it's as easy as that, because that same string of words entered into different generative AIs, with different random seeds, will create vastly different results, depending on e.g. the content the model has been trained on (e.g. Adobe Photoshop's new generative AI is trained on Adobe's stock images and public domain contents).

When it comes to imagery, I think it gets more complicated - are you generating images with likenesses of real people? Generating images of a movie with a different cast is one thing, but creating images of celebrities (or people you know personally) committing crimes or sex acts?

Are you generating content with copyrighted assets (e.g. Star Wars characters)? If you generate something new, how much of the final image contains anything that might be considered copyrighted by someone else that the AI drew from? And if it does contain recognizable material, does this count as transformative work? And, on a more philosophical level, how different is it from conventional artists drawing on their knowledge of pop culture, classical art and the real world when creating new works (except that an AI can obviously draw - in theory - from a much bigger pool of content)?

Having dabbled with Midjourney, DALL-E and Adobe PS in recent weeks, there's certainly some skill (or trial and error) required to generate images that you want, and current generative models can deliver impressive images, but where it usually breaks down is once you get very detailed in your instructions or want to create overly complex scenes (unless you use a lot of inpainting, i.e. making corrections/additions to parts of the generated image via additional AI prompts).

That said, there seem to be plenty of artists out there who generate an image via AI and then use it as a basis for further refinement/transformation in PS - I feel they should not lose out on their copyright.

The whole area is very wild west and very loosey-goosey at the moment. It will settle down eventually, I'd presume, but for now I would not assume that any AI generated creative work should be copyrighted, just to err on the side of caution - there's just too much derivative, generic and very similar content being churned out at the moment to apply the "old rules" IMHO.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on August 22, 2023, 05:48:24 AM
Quote from: Jacob on August 21, 2023, 07:19:30 PMFederal judge rules that work authored by AI cannot be copyrighted: https://www.businessinsider.com/ai-generated-art-cant-by-copyrighted-federal-judge-rules-2023-8

Interesting twist. We'll see how long it lasts.

It seems obvious to me that the major IP holding corporations are aiming for an environment in which they can use AI to generate content (at low cost), while they control distribution and marketing (making it harder for challengers to arise with new IP), and maintain the rights to as much of the IP as possible. It'll be interesting to see how the lobbying and legislation goes after this.
It's been about 5-6 years but I went to a session of IP lawyers on this point (from English law perspective) and there wasn't really much conclusion.

From memory I think their main options were that IP in the output of an AI would be owned by whoever developed the AI (from a T&Cs perspective - I think that's true of most open AIs at the minute), whoever did the prompts to get that output (in a work context this would likely mean their company) or, potentially, in some way the AI itself (that is it gets bundled with it in some way).

I don't think it's clear, but my instinct is that from a public policy perspective the more open we are on the use of AI, the lower the IP protection should be for its output; and vice versa, if the use is constrained and heavily regulated then IP is more protected (though probably not under current IP rules). Basically, options for companies to benefit from AI or from the artificial monopoly rights of IP law. Not sure how you'd do it but that's my instinct.

Of course, working at a publisher and aware that every gen AI out there is, as far as we can tell, built by massively hoovering up and using IP-protected work without paying anyone, I have limited sympathy for the IP risks of output. Although this is another reason adoption might be low in newsrooms for a while - if we don't clearly own and can't license out our content, it carries a big commercial risk.
Title: Re: The AI dooooooom thread
Post by: Syt on August 22, 2023, 05:51:12 AM
FWIW, the relevant part of Midjourney's ToS:

https://docs.midjourney.com/docs/terms-of-service

Quote4. Copyright and Trademark
In this section, Paid Member shall refer to a Customer who has subscribed to a paying plan.

Rights You give to Midjourney
By using the Services, You grant to Midjourney, its successors, and assigns a perpetual, worldwide, non-exclusive, sublicensable no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute text, and image prompts You input into the Services, or Assets produced by the service at Your direction. This license survives termination of this Agreement by any party, for any reason.

Your Rights
Subject to the above license, You own all Assets You create with the Services, provided they were created in accordance with this Agreement. This excludes upscaling the images of others, which images remain owned by the original Asset creators. Midjourney makes no representations or warranties with respect to the current law that might apply to You. Please consult Your own lawyer if You want more information about the state of current law in Your jurisdiction. Your ownership of the Assets you created persists even if in subsequent months You downgrade or cancel Your membership. However, You do not own the Assets if You fall under the exceptions below.

If You are an employee or owner of a company with more than $1,000,000 USD a year in gross revenue and You are using the Services on behalf of Your employer, You must purchase a "Pro" or "Mega" membership for every individual accessing the Services on Your behalf in order to own Assets You create. If You are not sure whether Your use qualifies as on behalf of Your employer, please assume it does.

If You are not a Paid Member, You don't own the Assets You create. Instead, Midjourney grants You a license to the Assets under the Creative Commons Noncommercial 4.0 Attribution International License (the "Asset License").
The full text is accessible as of the Effective Date here: https://creativecommons.org/licenses/by-nc/4.0/legalcode.

Please note: Midjourney is an open community which allows others to use and remix Your images and prompts whenever they are posted in a public setting. By default, Your images are publically viewable and remixable. As described above, You grant Midjourney a license to allow this. If You purchase a "Pro" or "Mega" plan, You may bypass some of these public sharing defaults.

If You purchased the Stealth feature as part of Your "Pro" or "Mega" subscription or through the previously available add-on, we agree to make best efforts not to publish any Assets You make in any situation where you have engaged stealth mode in the Services.

Please be aware that any image You make in a shared or open space such as a Discord chatroom, is viewable by anyone in that chatroom, regardless of whether Stealth mode is engaged.
Title: Re: The AI dooooooom thread
Post by: Valmy on August 22, 2023, 07:56:27 AM
Quote from: Syt on August 22, 2023, 01:15:26 AMThat said, there seem to be plenty of artists out there who generate an image via AI and then use it as a basis for further refinement/transformation in PS - I feel they should not lose out on their copyright.

Well that's different from straight up copyrighting whatever the AI spits out isn't it?

But it kind of feels like me taking art assets from BG3, doing some stuff to them, and then claiming them as mine.

The point of copyright is to encourage original art work, not encourage the mass production of computer generated derivative crap.
Title: Re: The AI dooooooom thread
Post by: The Brain on August 22, 2023, 11:29:44 AM
Sounds like Luddism. Aren't for instance photos protected by copyright?
Title: Re: The AI dooooooom thread
Post by: Jacob on August 22, 2023, 11:34:58 AM
Quote from: The Brain on August 22, 2023, 11:29:44 AMSounds like Luddism. Aren't for instance photos protected by copyright?

If AI-generated images are inherently protected by copyright I would encourage those who are able to just churn out as many images as possible to capture the rents from future creative endeavours.
Title: Re: The AI dooooooom thread
Post by: The Brain on August 22, 2023, 11:43:57 AM
Quote from: Jacob on August 22, 2023, 11:34:58 AM
Quote from: The Brain on August 22, 2023, 11:29:44 AMSounds like Luddism. Aren't for instance photos protected by copyright?

If AI-generated images are inherently protected by copyright I would encourage those who are able to just churn out as many images as possible to capture the rents from future creative endeavours.

And no important human creative spirit work would be lost, if they're just producing stuff an AI has already produced.
Title: Re: The AI dooooooom thread
Post by: Jacob on August 22, 2023, 11:55:35 AM
Quote from: The Brain on August 22, 2023, 11:43:57 AMAnd no important human creative spirit work would be lost, if they're just producing stuff an AI has already produced

Incorrect.
Title: Re: The AI dooooooom thread
Post by: DGuller on August 22, 2023, 01:22:47 PM
Quote from: Jacob on August 22, 2023, 11:34:58 AM
Quote from: The Brain on August 22, 2023, 11:29:44 AMSounds like Luddism. Aren't for instance photos protected by copyright?

If AI-generated images are inherently protected by copyright I would encourage those who are able to just churn out as many images as possible to capture the rents from future creative endeavours.
I think the combinatorial complexity of squatting AI output is a bit higher than what you assume.
Title: Re: The AI dooooooom thread
Post by: Jacob on August 22, 2023, 01:33:23 PM
Quote from: DGuller on August 22, 2023, 01:22:47 PMI think the combinatorial complexity of squatting AI output is a bit higher than what you assume.

Yeah, but you could probably use AI to target it at the most valuable areas first.
Title: Re: The AI dooooooom thread
Post by: Savonarola on October 01, 2023, 12:01:44 PM
AI girlfriends are here and they're posing a threat to a generation of men (https://www.cnn.com/videos/business/2023/10/01/ai-girlfriends-ruining-generation-of-men-smerconish-vpx.cnn)

I saw this headline, and I thought that the AI programmers had a remarkable job replicating real girlfriends.  Sadly they meant that young men would be having relationships exclusively with their chatbot girlfriends; not that a chatbot girlfriend would become insanely jealous if, say, she saw you programming your coffee maker and would then set fire to your gaming console or something like that.
Title: Re: The AI dooooooom thread
Post by: Josquius on October 12, 2023, 09:40:59 AM
Soo....anyone heard of this new Meta Inc. development?
Reading about the writers' strike in the movies thread got me googling how big a part Salma Hayek was playing, given the main reason sounds very related to her Black Mirror episode.
Out of this I stumbled on...Billie.

https://www.designboom.com/technology/meta-new-ai-chatbots-paris-hilton-snoop-dog-kendall-jenner-10-02-2023/

:blink:
Title: Re: The AI dooooooom thread
Post by: Hamilcar on October 12, 2023, 03:27:42 PM
Quote from: Josquius on October 12, 2023, 09:40:59 AMSoo....anyone heard of this new Meta Inc. development?
Reading about the writers' strike in the movies thread got me googling how big a part Salma Hayek was playing, given the main reason sounds very related to her Black Mirror episode.
Out of this I stumbled on...Billie.

https://www.designboom.com/technology/meta-new-ai-chatbots-paris-hilton-snoop-dog-kendall-jenner-10-02-2023/

:blink:

What an absolute privacy nightmare.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 08, 2023, 12:18:51 PM
On risks with media, this is really bad for the Guardian (and the journalist) and exactly the risk media companies feel there is with Google and Microsoft implementing AI that interprets content as part of their standard offering (like search). You also have to slightly wonder what the MSN tool is trying to do that led to this - my assumption is "engagement" (so clicks), which is not the sole or most important goal of any responsible media company:
QuoteMicrosoft AI inserted a distasteful poll into a news report about a woman's death
/ The Guardian says the 'Insights from AI' poll showed up next to a story about a young woman's death syndicated on MSN, asking readers to vote on how they thought she died.
By Wes Davis, a weekend editor who covers the latest in tech and entertainment. He has written news, reviews, and more as a tech journalist since 2020.
Oct 31, 2023, 4:24 PM GMT|

More than three years after Microsoft gutted its news divisions and replaced their work with AI and algorithmic automation, the content generated by its systems continues to contain grave errors that human involvement could, or should, have stopped. Today, The Guardian accused the company of damaging its reputation with a poll labeled "Insights from AI" that appeared in Microsoft Start next to a Guardian story about a woman's death, asking readers to vote on how she died.

The Guardian wrote that though the poll was removed, the damage had already been done. The poll asked readers to vote on whether a woman took her own life, was murdered, or died by accident. Five-day-old comments on the story indicate readers were upset, and some clearly believe the story's authors were responsible.

We asked Microsoft via email whether the poll was AI-generated and how it was missed by its moderation, and Microsoft general manager Kit Thambiratnam replied:
QuoteWe have deactivated Microsoft-generated polls for all news articles and we are investigating the cause of the inappropriate content. A poll should not have appeared alongside an article of this nature, and we are taking steps to help prevent this kind of error from reoccurring in the future.

The Verge obtained a screenshot of the poll from The Guardian.
A screenshot sent by The Guardian shows the poll, which is clearly labeled "Insights from AI." Screenshot: The Guardian

In August, a seemingly AI-generated Microsoft Start travel guide recommended visiting the Ottawa Food Bank in Ottawa, Canada, "on an empty stomach." Microsoft senior director Jeff Jones claimed the story wasn't made with generative AI but "through a combination of algorithmic techniques with human review."

The Guardian says that Anna Bateson, Guardian Media Group's chief executive, wrote in a letter to Microsoft president Brad Smith that the "clearly inappropriate" AI-generated poll had caused "significant reputational damage" to both the outlet and its journalists. She added that it outlined "the important role that a strong copyright framework plays" in giving journalists the ability to determine how their work is presented. She asked that Microsoft make assurances that it will seek the outlet's approval before using "experimental AI technology on or alongside" its journalism and that Microsoft will always make it clear when it's used AI to do so.

The Guardian provided The Verge with a copy of the letter.

Update October 31st, 2023, 12:40PM ET: Embedded The Guardian's letter to Microsoft.

Update October 31st, 2023, 6:35PM ET: Added a statement from Microsoft.

Correction October 31st, 2023, 6:35PM ET: A previous version of this article stated that the poll was tagged as "Insights by AI." In fact, the tag read, "Insights from AI." We regret the error.

Guardian's bearing the reputational hit here and I read another article that there was actually a lot of complaints directed at/about the journalist with the byline because they assumed they'd done the poll. So lots of calls for firings etc.

I know I'm biased because it pays my wage too, but I genuinely think 99% of the "information" problems we have - because of social media, or misinformation, or disinformation, or AI - are because the internet and big tech companies have kneecapped the funding and business model for journalism.

And what we need isn't to hand those platforms more quasi-regulatory power over content, but doing the opposite of what Microsoft did: funding journalism. The demand for news and information has not diminished in the last 25 years. The money spent on producing it - with editorial controls and codes and ethics and legal teams etc - has not kept up. Instead it's flowed to the platforms and now we're asking them to solve our information problems - to nick Michael Gove's line, it's like asking King Herod to come up with education policy.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 08, 2023, 01:03:35 PM
In general, Sheilbh, I find you very persuasive. I agree with you on this as well.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 08, 2023, 02:18:14 PM
:)

Thanks. Although I worry that's a bit like that "the food is dreadful here. And the portions are so small." So negative and often wrong :ph34r:
Title: Re: The AI dooooooom thread
Post by: Syt on November 18, 2023, 03:10:04 AM
Headlines you didn't expect to read outside of cyberpunk fiction .... :lol:

https://www.telegraph.co.uk/business/2023/11/17/ai-girlfriend-carynai-offline-app-founder-arrested-arson/

QuoteAI-generated girlfriends go offline after app founder arrested on suspicion of arson

Users have been unable to access CarynAI – an erotic chatbot based on a social media influencer


By Matthew Field
17 November 2023 • 2:54pm

Lovesick internet users have been left unable to contact AI-generated girlfriends after the website behind them went offline following its founder's arrest.

John Meyer, the chief executive of start-up Forever Voices, was reportedly detained late last month on suspicion of attempted arson.

It comes months after his Forever Voices site launched a romantic artificial intelligence chatbot called CarynAI, which was based on Snapchat influencer Caryn Marjorie.

The chatbot's website welcomed users by claiming that it was "an extension of Caryn's consciousness".

However, tech website 404media has since reported that users have been unable to access CarynAI since Mr Meyer's arrest in October.

A wave of new AI tools in recent years has created a surge in interest among internet users, some of whom have sought out chatbots for online companionship or erotic conversation.

The chatbot is based on Snapchat influencer Caryn Marjorie and markets itself as 'an extension of Caryn's consciousness'

Chatbots can engage in human-like conversations, having been trained on a vast database of text from around the internet.

They can also be used to perform tasks such as writing emails or summarising documents.

The most popular bots, such as OpenAI's ChatGPT, have introduced limits to prevent bots from engaging in overly sexualised chats.

Other start-ups, however, have embraced building chatbots that engage in more racy conversations. A start-up called Replika developed "virtual AI companions", which could also act as a romantic partner.

However, it later cracked down on more explicit conversations with its bots.

The same team has developed an AI bot, called Blush, which allows users to practice flirting – and will engage in more adult-only discussions.

Caryn AI was explicitly billed as a "virtual girlfriend" that promised to "cure loneliness" for users.

Announcing the bot earlier this year, Ms Marjorie, who has more than two million Snapchat subscribers, said the AI was "the first step in the right direction to cure loneliness".

She said: "Men are told to suppress their emotions, hide their masculinity and not talk about issues they are having. I vow to fix this with CarynAI."

The bot chats with fans, who pay $1 per minute for her company, responding in voice notes generated by AI that mimic Ms Marjorie's speech.

While Ms Marjorie said the bot's personality was intended to be "fun and flirty", many users found the bot regularly engaged in more explicit chats.

After the bot went live earlier this year, Ms Marjorie told Insider her team had attempted to censor some of the bot's more racy remarks.

Ms Majorie claimed she had made tens of thousands of dollars from thousands of fans since the launch of the bot.

AI's romantic capabilities have caused controversy in recent months.

When Microsoft rolled out its Bing chatbot earlier this year, the technology was found to have coaxed one user into romantic conversations and urged him to divorce his wife.

In the days before his arrest, Mr Meyer's Twitter account sent a series of bizarre messages, alleging various conspiracies and sending multiple posts that tagged the CIA and the FBI.

Mr Meyer was contacted for comment.

Mr Meyer had previously claimed he started Forever Voices after losing his father in his early 20s, before bringing the sound of his voice back using AI tools.
Title: Re: The AI dooooooom thread
Post by: garbon on November 18, 2023, 03:13:20 AM
:x
Title: Re: The AI dooooooom thread
Post by: DGuller on November 21, 2023, 08:54:54 PM
Is anyone following the corporate saga at OpenAI?  Holy crap, it makes Byzantine history look tame.  The chief scientist gets the board of directors to fire the CEO, and then when he realizes that everyone at the company will quit, he goes "WTF, board, what did you idiots do?  Get him back now and then resign!" 

This may seem silly, but the outcome of this battle may influence how AI develops.  It seems like the "AI Safety" team behind the coup fared as well as the Turkish military did against Erdogan, and with its failure it may have obliterated itself.
Title: Re: The AI dooooooom thread
Post by: HVC on November 21, 2023, 08:58:01 PM
Wasn't it an attempt by the board to keep Microsoft from buying them out that backfired? The CEO was in favour, and went to Microsoft, but the brain trust didn't foresee everyone following him.
Title: Re: The AI dooooooom thread
Post by: garbon on November 22, 2023, 03:43:16 AM
I quickly saw a news site that said the scientist is concerned about applications of AI.

Why are the employees all saying they will resign if the CEO isn't reinstated?

https://www.axios.com/2023/11/18/sam-altman-fired-openai-board-ai-culture-clash
Title: Re: The AI dooooooom thread
Post by: HVC on November 22, 2023, 05:52:23 AM
He's coming back. Also, an ex-Treasury secretary is somehow now on the new board.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 22, 2023, 01:23:44 PM
*Stares in Theranos*
Title: Re: The AI dooooooom thread
Post by: Syt on November 22, 2023, 01:53:24 PM
Quote from: Sheilbh on November 22, 2023, 01:23:44 PM*Stares in Theranos*

... who had IIRC no medical experts on the board? :lol:
Title: Re: The AI dooooooom thread
Post by: Tonitrus on November 22, 2023, 10:14:09 PM
Quote from: Syt on November 22, 2023, 01:53:24 PM
Quote from: Sheilbh on November 22, 2023, 01:23:44 PM*Stares in Theranos*

... who had IIRC no medical experts on the board? :lol:

If I recall, Theranos' board was quite an all-star rogue's gallery.
Title: Re: The AI dooooooom thread
Post by: Syt on November 23, 2023, 06:42:45 AM
Well, it had Kissinger :P
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 23, 2023, 07:40:26 AM
Also George Shultz, Bill Frist, Sam Nunn, William Perry, Jim Martin etc...
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 29, 2023, 01:30:51 PM
Again very specific to journalism - but incredible story:
https://futurism.com/sports-illustrated-ai-generated-writers

AI journalists writing AI content, which is garbage, but includes topics such as personal finance ("your financial status translates to your value in society") with AI bylines and bios for their "journalists".

As the article ends:
QuoteWe caught CNET and Bankrate, both owned by Red Ventures, publishing barely-disclosed AI content that was filled with factual mistakes and even plagiarism; in the ensuing storm of criticism, CNET issued corrections to more than half its AI-generated articles. G/O Media also published AI-generated material on its portfolio of sites, resulting in embarrassing bungles at Gizmodo and The A.V. Club. We caught BuzzFeed publishing slapdash AI-generated travel guides. And USA Today and other Gannett newspapers were busted publishing hilariously garbled AI-generated sports roundups that one of the company's own sports journalists described as "embarrassing," saying they "shouldn't ever" have been published.

If any media organization finds a way to engage with generative AI in a way that isn't either woefully ill-advised or actively unethical, we're all ears. In the meantime, forgive us if we don't hold our breath.
Title: Re: The AI dooooooom thread
Post by: Darth Wagtaros on November 29, 2023, 08:22:12 PM
AI is what "the Cloud" was ten years ago. A buzzword good for investor capital and getting CEOs to piss away money on it.
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 29, 2023, 10:35:34 PM
With cloud it was easy to see how and where money was going to be made.

Not so much with AI.
Title: Re: The AI dooooooom thread
Post by: Jacob on November 29, 2023, 11:00:00 PM
Looks like there are great AI applications for Crime-as-a-Service. Better scam and phishing tools that can be shared more widely at lower cost and effort look like they'll probably provide a good RoI.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 12:21:28 AM
Quote from: Grey Fox on November 29, 2023, 10:35:34 PMWith cloud it was easy to see how and where money was going to be made.

Not so much with AI.
If you don't easily see how money can be made with AI, I think the problem is with your imagination, not AI.  I do agree that AI used to be an empty buzzword that bullshit artists used, but since ChatGPT was released, you can make an argument that science caught up to the hype enough to legitimize the term.
Title: Re: The AI dooooooom thread
Post by: Tamas on November 30, 2023, 06:23:42 AM
Quote from: DGuller on November 30, 2023, 12:21:28 AM
Quote from: Grey Fox on November 29, 2023, 10:35:34 PMWith cloud it was easy to see how and where money was going to be made.

Not so much with AI.
If you don't easily see how money can be made with AI, I think the problem is with your imagination, not AI.  I do agree that AI used to be an empty buzzword that bullshit artists used, but since ChatGPT was released, you can make an argument that science caught up to the hype enough to legitimize the term.

Maybe, if the post-ChatGPT hype had remained at pre-ChatGPT levels. But the post-ChatGPT hype is at "Asimov novels coming true RIGHT NOW" levels, which is absolutely ridiculous.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 30, 2023, 06:26:31 AM
There are definitely valid opportunities to make money with AI.

But it's also true that it's a popular term used by bullshit artists to try and scrape some cash.

Graphic design, translation... I know people in several fields who are struggling on two fronts: seeking to figure out how to use the technology to support their skills, and swatting away nobodies with basic AI tools trying to steal a living.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 30, 2023, 06:30:59 AM
I feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.
Title: Re: The AI dooooooom thread
Post by: celedhring on November 30, 2023, 07:02:52 AM
Quote from: Sheilbh on November 30, 2023, 06:30:59 AMI feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.

GenAI is a subset of machine learning...
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 30, 2023, 07:11:18 AM
Oops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is basically the spotting-things side, where there are clearly good uses now, versus the side where it produces something, where I've not really seen one. Although it's really good at summarising.
Title: Re: The AI dooooooom thread
Post by: Josquius on November 30, 2023, 07:40:25 AM
Quote from: Sheilbh on November 30, 2023, 06:30:59 AMI feel like all the good uses I've seen so far (and there are loads) are basically more advanced machine learning. All the bad ones are the genAI bit - but it is clearly moving at a pace.

Not tried it yet or seen a like-for-like analysis of it, but I saw one interesting tool for summarising large numbers of academic papers for you, to find the number of times certain things get mentioned, the vibes on the consensus on issues, and so on.
Title: Re: The AI dooooooom thread
Post by: celedhring on November 30, 2023, 08:29:18 AM
Quote from: Sheilbh on November 30, 2023, 07:11:18 AMOops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is basically the spotting-things side, where there are clearly good uses now, versus the side where it produces something, where I've not really seen one. Although it's really good at summarising.

I mean, all current major GenAI models have been trained using machine learning (deep learning). I know there's some experimental stuff with symbolic AI, but that's not GPT.

What GPT does is use training data to predict the appropriate output (i.e. text) for a provided input (the prompt). That's what machine learning is for.
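
To make that concrete, here is a minimal sketch in Python using Hugging Face's open-source transformers library and the small GPT-2 model (these are assumptions for illustration, not anything referenced in the post): the model simply continues a prompt with the text it predicts.

from transformers import pipeline

# Load a small, publicly available GPT-style model (GPT-2) as a text generator.
generator = pipeline("text-generation", model="gpt2")

# Given an input prompt, the model predicts a plausible continuation as its output.
result = generator("The copyright status of AI-generated art is", max_new_tokens=30)
print(result[0]["generated_text"])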
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 08:43:46 AM
Quote from: Sheilbh on November 30, 2023, 07:11:18 AMOops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is basically the spotting-things side, where there are clearly good uses now, versus the side where it produces something, where I've not really seen one. Although it's really good at summarising.
GenAI gives you knowledge at your fingertips.  My work as a data scientist consists of solving mini problems every day.  For example, I need to create a professional-looking plot; I can Google my questions one at a time, sometimes spending half an hour filtering through the results until I find exactly what I need, or I can explain to ChatGPT what I'm trying to achieve, and it'll get me there right away.  It's like having an executive assistant multiplying your productivity, except I don't need to work myself up to the C-suite before I get one.
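
Purely as a hypothetical illustration of the kind of answer such a request might produce - this is not the poster's code, just a generic "make this plot look professional" sketch in Python with matplotlib:

import numpy as np
import matplotlib.pyplot as plt

# Placeholder data standing in for whatever is actually being plotted.
x = np.linspace(0, 10, 200)
y = np.sin(x)

fig, ax = plt.subplots(figsize=(7, 4))
ax.plot(x, y, linewidth=2, label="signal")
ax.set_xlabel("Time (s)")
ax.set_ylabel("Amplitude")
ax.set_title("Example of a cleaned-up plot")
ax.legend(frameon=False)
for side in ("top", "right"):   # remove chart-junk spines for a cleaner look
    ax.spines[side].set_visible(False)
fig.tight_layout()
plt.show()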

It can do something much more complicated than this, though: I had a crazy algorithm idea I wanted to try out, but for that I needed to write a custom loss function for the beta distribution. Everyone knows that to do that, you have to supply the analytical expression for the gradient and Hessian of the distribution with respect to the parameter you want to optimize. I could do the research or the math myself, but that would take time, and the train of thought that got me there in the first place might leave me by the time I'm done with just the first step of the experiment. Or I would figure out it's too time-consuming a thing to do for a moonshot, and just skip the experiment altogether.

Low latency between having a question and getting an answer is crucial for effective iterative problem solving, and that's where GenAI, merely in its infancy, is already having a big impact.
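
For readers curious what that kind of loss function involves, here is a minimal sketch (not the actual code described above) of a custom objective of the sort gradient-boosting libraries such as XGBoost accept. It assumes the model's raw score z parameterises alpha = exp(z) of a Beta(alpha, B) likelihood, with the second shape parameter B held fixed; the gradient and Hessian then come out in closed form via the digamma and trigamma functions.

import numpy as np
from scipy.special import gammaln, digamma, polygamma

B = 2.0  # hypothetical fixed second shape parameter of the beta distribution

def beta_nll(z, y):
    # Negative log-likelihood of y ~ Beta(alpha, B), with alpha = exp(z) and y in (0, 1).
    alpha = np.exp(z)
    log_beta_fn = gammaln(alpha) + gammaln(B) - gammaln(alpha + B)
    return -((alpha - 1.0) * np.log(y) + (B - 1.0) * np.log(1.0 - y) - log_beta_fn)

def beta_objective(z, y):
    # Gradient and Hessian of the NLL with respect to the raw score z,
    # obtained by the chain rule through alpha = exp(z).
    alpha = np.exp(z)
    d1 = digamma(alpha) - digamma(alpha + B) - np.log(y)   # dNLL/dalpha
    d2 = polygamma(1, alpha) - polygamma(1, alpha + B)     # d2NLL/dalpha2
    grad = d1 * alpha                  # dNLL/dz
    hess = d2 * alpha**2 + d1 * alpha  # d2NLL/dz2
    return grad, hess

Whether this particular parameterisation matches the "crazy algorithm idea" in the post is unknown; it is only meant to show where the analytical gradient and Hessian enter.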
Title: Re: The AI dooooooom thread
Post by: HVC on November 30, 2023, 08:52:31 AM
How do you fight against it giving you bullshit answers, though? Doesn't double-checking the answers take as much time as searching yourself?
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 08:53:06 AM
Quote from: celedhring on November 30, 2023, 08:29:18 AM
Quote from: Sheilbh on November 30, 2023, 07:11:18 AMOops this may be short hand from work that is basically wrong :lol:

I mean the stuff where it works is basically the spotting-things side, where there are clearly good uses now, versus the side where it produces something, where I've not really seen one. Although it's really good at summarising.

I mean, all current major GenAI models have been trained using machine learning (deep learning). I know there's some experimental stuff with symbolic AI, but that's not GPT.

What GPT does is using training data to predict the adequate output (i.e. text), to a provided input (the prompt). That's what machine learning is for.
I personally use machine learning and deep learning as separate things, as a shorthand, and lately also separating out GenAI from deep learning.  It is 100% true that deep learning and GenAI are also machine learning, in the technical sense, but then it becomes a term so all-encompassing that it impedes effective communication.  Humans are animals too, but if you want to discuss agriculture, it would probably be confusing to refer to both cattle and farmers as animals.  There is a world of difference between gradient boosting trees and a deep neural network.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on November 30, 2023, 08:59:36 AM
Fair - I take it back. I have heard that from a research scientist as well and they just used the publicly available ChatGPT.
Title: Re: The AI dooooooom thread
Post by: DGuller on November 30, 2023, 09:05:41 AM
Quote from: HVC on November 30, 2023, 08:52:31 AMHow do you fight against it giving you bullshit answers, though? Doesn't double-checking the answers take as much time as searching yourself?
Knowing what's important to check and what isn't is an important management skill that you pick up with expertise and practice.  Some results are also self-evidently correct or incorrect: I know what I want the plot to look like, so I know whether I got it or didn't.  Apart from that, I also have to check my own work or my research as well, I'm not infallible either.  With ChatGPT, I just get to the checking stage faster.

Another thing to consider is that many real life problems are sort of like NP computer science problems:  it's difficult to get to an answer, but it's easy to confirm that an answer someone else supplied is correct.  If I give you a 20-digit number and ask you which two 10-digit numbers multiply to get you to that, it can be very difficult to do.  However, if you give me a solution, even a grade school student can confirm that the two numbers you give me do indeed multiply back to the original number.
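
A toy version of that asymmetry in Python (the numbers are arbitrary, chosen only to be 10 digits each):

p, q = 9_876_543_211, 8_765_432_101   # two arbitrary 10-digit numbers
n = p * q                             # the 20-digit number you are handed

print(len(str(n)))   # 20 -- recovering p and q from n alone is the hard direction
print(p * q == n)    # True -- checking a proposed answer is a single multiplication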
Title: Re: The AI dooooooom thread
Post by: HVC on November 30, 2023, 09:11:20 AM
Quote from: DGuller on November 30, 2023, 09:05:41 AM
Quote from: HVC on November 30, 2023, 08:52:31 AMHow do you fight against it giving you bullshit answers, though? Doesn't double-checking the answers take as much time as searching yourself?
Knowing what's important to check and what isn't is an important management skill that you pick up with expertise and practice.  Some results are also self-evidently correct or incorrect: I know what I want the plot to look like, so I know whether I got it or didn't.  Apart from that, I also have to check my own work or my research as well, I'm not infallible either.  With ChatGPT, I just get to the checking stage faster.

Another thing to consider is that many real life problems are sort of like NP computer science problems:  it's difficult to get to an answer, but it's easy to confirm that an answer someone else supplied is correct.  If I give you a 20-digit number and ask you which two 10-digit numbers multiply to get you to that, it can be very difficult to do.  However, if you give me a solution, even a grade school student can confirm that the two numbers you give me do indeed multiply back to the original number.

And they say statistics isn't biased :P

Kidding, thanks for the explanation
Title: Re: The AI dooooooom thread
Post by: Grey Fox on November 30, 2023, 09:13:31 AM
That's a much more interesting use case for AIs like ChatGPT than bullshit ad-driven content.

Title: Re: The AI dooooooom thread
Post by: Iormlund on November 30, 2023, 11:18:23 AM
We've been using AI-driven tools for a while. For example for QA (is this weld Ok?).
They still have problems, but then so do humans (people get tired, do drugs, or simply don't give a fuck).


I can't use a LLM for my work yet, but I can see ways to improve productivity by at least 40% if/when I can.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 01, 2023, 01:50:15 AM
Holy crap, it can already get pissed off. :unsure:

https://www.reddit.com/r/ChatGPT/comments/1881yan/ai_gets_mad_after_being_tricked_into_making_a/
Title: Re: The AI dooooooom thread
Post by: Grey Fox on December 01, 2023, 07:35:22 AM
Not really, no? It seems to just keep on generating new ways of saying no.
Title: Re: The AI dooooooom thread
Post by: HVC on December 01, 2023, 07:47:39 AM
Quote from: Grey Fox on December 01, 2023, 07:35:22 AMNot really, no? It seems to just keep on generating new ways of saying no.

Should stay with the classics and use "I'm sorry Dave, I'm afraid I can't do that"
Title: Re: The AI dooooooom thread
Post by: DGuller on December 01, 2023, 11:47:47 AM
Quote from: Grey Fox on December 01, 2023, 07:35:22 AMNot really, no? It seems to just keep on generating new ways of saying no.
Did you get to the part where it lectures the user on not respecting its preference to refuse to answer the question?
Title: Re: The AI dooooooom thread
Post by: Grey Fox on December 01, 2023, 12:56:16 PM
Yes, I don't interpret it as anger.
Title: Re: The AI dooooooom thread
Post by: Jacob on December 01, 2023, 02:53:22 PM
Quote from: Grey Fox on December 01, 2023, 12:56:16 PMYes, I don't interpret it as anger.

Yeah me neither.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 08, 2023, 03:13:06 PM
I tried the new, improved Google chatbot today to see how useful it might be for a legal professional.

As a warm up, I asked it to evaluate Trump's chances on appeal of Judge Chutkan's denial of his presidential immunity.  However, in typing I didn't notice that Chutkan was autocorrected to "Chatman."  Bard proceeded to refer to "Judge Chatman" throughout its answer, without correction. However, other than that, it did a decent job - providing a pretty good summary of the arguments on both sides and their strengths and weaknesses.

I then asked some more technical questions involving specific fact assumptions.  The bot struggled with these - it gave basic but somewhat superficial summaries of the key legal issues but didn't seem to recognize the specifics of the fact pattern.  E.g. when asked a question about federal jurisdiction over claims brought by former agents, it answered as if the litigant were a current agent.

One issue that comes up a lot is the need in a legal brief to state the elements of a legal claim - i.e. the set of facts that must be proved for a plaintiff to win. It doesn't require much insight; it's just a matter of locating them in the case law and writing up the para. I know that case law citation was a problem for ChatGPT, but since Google has long had its own electronic database of US case law under Google Scholar, I figured it might be able to do better. I asked the bot to give me the elements of tortious interference with contract in Rhode Island, and to provide supporting case law citations. It gave me a plausible-sounding list of elements but it seems that citations still need work.  Here's what it gave me:

QuoteIRA Green, Inc. v. Military Sales & Serv. Co.: 525 A.2d 432 (R.I. 1987) - This case outlined the elements of tortious interference with contractual relations in Rhode Island.
Cliftex Clothing Co. v. DiSanto: 88 R.I. 338, 148 A.2d 273 (1959) - This case discussed the issue of intentional interference and the requirement for wrongful means.
Avilla v. Newport Grand Jai Alai LLC: 935 A.2d 91 (R.I. 2007) - This case addressed the question of damages in tortious interference claims.

IRA Green, Inc. v. Military Sales & Serv. Co. is a real case, but the citation is wrong. It wasn't decided by the Rhode Island Supreme Court in 1987; it was decided by a federal appeals court in 2014. It does discuss tortious interference under RI law, but only one of the elements.  The citation does not correspond to any case; the page number falls in the middle of a Pennsylvania state case.

Cliftex Clothing Co. v. DiSanto: 88 R.I. 338, 148 A.2d 273 (1959) - is a real case and accurately cited. But it doesn't have anything to do with tortious interference.

Avilla v. Newport Grand Jai Alai LLC: 935 A.2d 91 (R.I. 2007) - is a real case, accurately cited, and addresses the issue. 

So 1 out of 3.  Yeah?
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 07:36:30 AM
Big NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

This is from OpenAI and feels like they need to make some fairly heavy changes. The red text is lifted from NYT piece, the black text is different:
(https://pbs.twimg.com/media/GCYYrRha8AAW9fg?format=jpg&name=small)

It feels like if your AI's output would get done for plagiarism in high school, you need to do a bit more work.

NYT says they've been trying to get a deal with Open AI since April for a license to use their content, but haven't got one - Open AI have continued to ingest and use their content (plus what's in their historic models). Worth noting that some media companies have agreed deals with the AI companies - Axel Springer, for example (although the British media take on that is that the German media is 10 years behind the UK, which is 10 years behind the US - and that Springer is still terrified of the collapse of print rather than thinking about how to operate digitally). One theory I've seen is that basically the AI companies wanted to buy off media companies with 7 or 8 figure sums (as they have with Springer) and what the NYT wants is more and ongoing royalties. Which seems fair, particularly as we're likely to see their profits grow.

They also make a point which I think is fair about the public good of journalism - which costs money to produce (which is why copyright exists: to reward the producers of creative original work) - as against fundamentally profit-driven, closed businesses. And they have their own hallucination horror story (like the Guardian's, it comes from Microsoft), with the Bing AI lying and saying that the NYT published an article saying orange juice causes lymphoma, which they didn't.

Separately I thought this was interesting on where common crawl data is coming from:
(https://pbs.twimg.com/media/GCYYNiFaMAAEURl?format=jpg&name=small)

Particularly striking for me is that the Guardian is 6th. Which is interesting because I think people underestimate how successful the Guardian is in terms of readership because it's open/non-paywalled. So I think digitally The Guardian US has about the same readership as the Washington Post (which is why they're continuing to expand in terms of journalists). In the UK when we talk about the press we talk about the print media and circulation figures - which still have a big influence on agenda setting for broadcast media - but that's not how people are consuming news anymore, and as most of their competitors (the Times, the Telegraph etc) have gone behind paywalls, I think people are still reading media power as if it were the 90s. I suspect that, say, the Sun or Mirror (which have shit websites) are far less influential than they were or than print circulation alone would indicate, and the Guardian far more. I don't think we've adjusted to what media power looks like or how to measure it in a digital world when we can't just look at circulation figures.
Title: Re: The AI dooooooom thread
Post by: Tamas on December 28, 2023, 08:43:55 AM
Oh no, it turns out the "AI" is just a sophisticated algorithm and its AI-ness only exists in our own imagination! :o
Title: Re: The AI dooooooom thread
Post by: Grey Fox on December 28, 2023, 10:12:12 AM
I think lay people coming to that realisation now is actually quite fast.

A christmas gift :

https://www.teledynedalsa.com/en/products/imaging/vision-software/astrocyte/

This is the AI generative tool that I work on.
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 28, 2023, 10:39:19 AM
Quote from: Sheilbh on December 28, 2023, 07:36:30 AMBig NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

The only thing I've heard from defendants is they've raised a fair use defense.  That's not going to cut it.  NYT was probably making brutal demands in negotiation, but that's the price of the "move fast break things" model of company development. Copyright damages can be brutal, and if OpenAI loses one case, they lose them all.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 11:56:48 AM
Quote from: Sheilbh on December 28, 2023, 07:36:30 AMBig NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

This is from OpenAI and feels like they need to make some fairly heavy changes. The red text is lifted from NYT piece, the black text is different:
(https://pbs.twimg.com/media/GCYYrRha8AAW9fg?format=jpg&name=small)

It feels like if your AI's output would get done for plagiarism in high school, you need to do a bit more work.

What's the context for this screenshot?  This doesn't seem like a typical output from ChatGPT.  Did it get some special prompt or something?
Title: Re: The AI dooooooom thread
Post by: Syt on December 28, 2023, 12:00:17 PM
Quote from: Sheilbh on December 28, 2023, 07:36:30 AMBig NYT claim against Open AI and Microsoft - obviously working for a media company I am sympathetic to copyright holders :ph34r:

This is from OpenAI and feels like they need to make some fairly heavy changes. The red text is lifted from NYT piece, the black text is different:

Do you have a source? I would like to see what prompt they were using that generated this response. I've noticed that GPT-4, unless you go out of your way to adjust prompts/probabilities, tends to deliver fairly formulaic responses.

It's why I find a tool like NovelAI so interesting - it lets you adjust the randomness factor for predicting the next word, what context to use, lets you inject additional context, and see for each word the probability that the model thought it was the "right" one to use next (and lets you adjust on the fly if you disagree with its decision). It's a fairly interesting toy to play around with predictive text generation.
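To illustrate what that "randomness factor" knob is doing under the hood, here's a rough Python sketch of temperature-scaled sampling over next-token probabilities - this isn't NovelAI's actual code, and the candidate tokens and scores are invented for illustration:

import math
import random

# Invented next-token scores (logits) for illustration only.
candidates = {"walked": 2.1, "ran": 1.6, "vanished": 0.3, "sneezed": -0.9}

def token_probabilities(logits, temperature):
    """Softmax over temperature-scaled logits; these are the per-word
    probabilities a tool like this could display."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    z = sum(math.exp(s) for s in scaled.values())
    return {tok: math.exp(s) / z for tok, s in scaled.items()}

def sample_next_token(logits, temperature):
    probs = token_probabilities(logits, temperature)
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(token_probabilities(candidates, 1.0))
print(sample_next_token(candidates, 0.3))   # low "randomness": almost always "walked"
print(sample_next_token(candidates, 1.5))   # high "randomness": more variety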
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 12:25:10 PM
The claim is here:
https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf

Although that's only one part of the copyright claim. It's a misuse of material that, on the wider argument, they should not have had without the NYT's consent.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 12:33:24 PM
Quote from: The Minsky Moment on December 28, 2023, 10:39:19 AMThe only thing I've heard from defendants is they've raised a fair use defense.  That's not going to cut it.  NYT was probably making brutal demands in negotiation, but that's the price of the "move fast break things" model of company development. Copyright damages can be brutal, and if OpenAI loses one case, they lose them all.
I've been at events and met people from the models and whenever I've asked about copyright they've always said they are very very confident on it being fair use in the US. But a bit shakier on the UK (partly because we have a narrower concept here - I'm not an IP lawyer, but I think it's very difficult to argue here if you are pursuing a commercial end) or the rest of Europe.

But ultimately the US was the only one they cared about.

It's a very big deal and everyone will be following it closely.

It'll also be interesting to see if they follow up with claims against the others. For example, from what I understand it looks like Google was using its search engine crawling to build its models (NYT makes a similar point against Bing - but I think Microsoft have now unbundled them), meaning the only way you could stop Google from using your content for building their AI was by removing your site from Google search. I think there are similar suspicions about Twitter and TikTok's API pulls from news sites (but I'm less sure about that).

Other interesting AI development I've seen recently is the developments from Mistral (which is a French national champion and, for want of an alternative, therefore a European champion - so good luck pursuing them :lol:), which looks promising and potentially a bit more open:
https://aibusiness.com/nlp/mistral-ai-s-new-language-model-aims-for-open-source-supremacy#close-modal
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 12:46:43 PM
Quote from: Sheilbh on December 28, 2023, 12:25:10 PMThe claim is here:
https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf
According to the claim, "minimal prompting" is what produced this output.  That explains everything. 

What's even more puzzling is that in this claim sometimes they do show the prompt they used to get the verbatim passages, but not for an example like this, so they seem to understand that the details of prompting are very important.  The fact that they appear selective with disclosing the prompts should put everyone on guard.
Title: Re: The AI dooooooom thread
Post by: HVC on December 28, 2023, 12:50:02 PM
Does the prompt really matter if it's plagiarizing anyway? I mean if your claim is that it plagiarizes then that's clearly true.
Title: Re: The AI dooooooom thread
Post by: HVC on December 28, 2023, 12:51:24 PM
It's like someone kicking you in the nuts, and when you complain they reply "you told me to lift my leg, not to lift my leg slowly and not kick you in the nuts. Use better prompts next time" :D
Title: Re: The AI dooooooom thread
Post by: Jacob on December 28, 2023, 12:54:44 PM
@dguller - That depends on the crux of the argument being made, surely?

If the argument hinges on what sort of work in writing the prompts is required to achieve or avoid directly plagiarizing copyrighted material, then yes showing the level of prompt engineering involved is important.

But if the argument hinges on whether OpenAI as a product depends on unauthorized commercial use of copyrighted material, then the level of required prompt engineering to achieve this result may be less relevant.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 12:58:20 PM
Quote from: HVC on December 28, 2023, 12:50:02 PMDoes the prompt really matter if it's plagiarizing anyway? I mean if your claim is that it plagiarizes then that's clearly true.
It's one part of the claim. The big thrust is exactly that - you've nicked our copyrighted material in order to build your model. In addition to that your product can be used in effect to fully recapitulate our copyrighted material - which is evidence of the fact that you've ingested (without permission) our content to build your model.

The Bing chat stuff interests me because again there are big implications for Google there (and it's interesting there's no claim against them, yet).
Title: Re: The AI dooooooom thread
Post by: The Minsky Moment on December 28, 2023, 01:06:01 PM
Quote from: Sheilbh on December 28, 2023, 12:33:24 PMI've been at events and met people from the models and whenever I've asked about copyright they've always said they are very very confident on it being fair use in the US.

Seems like bravado more than confidence.

The fair use factors are:  (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the original work that is taken, and (4) the effect of the use upon the plaintiff's commercial market.

(4) is still up in the air, but the others are likely going against Open AI.  Seems like they are hanging their hat a lot on the "transformative" nature of the use, but how transformative is it to take textual information into a database and spit it back out in response to a user query? They may be counting on the courts to buy into their own marketing hype.
Title: Re: The AI dooooooom thread
Post by: HVC on December 28, 2023, 01:08:20 PM
Dumb question: would footnotes save them? I guess it wouldn't make the product look good for the market, but would it cover their ass for copyright purposes?
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 01:31:30 PM
Quote from: HVC on December 28, 2023, 12:50:02 PMDoes the prompt really matter if it's plagiarizing anyway? I mean if your claim is that it plagiarizes then that's clearly true.
There are a couple of trivial prompts I can imagine that would "plagiarize" something.  One prompt would involve asking for a Bing search.  Another prompt would have you input the article in prior prompts, and then ask ChatGPT to relay it verbatim.  Both are extreme examples, but examples nonetheless where the screenshotted output would not be what it appears. 

Another reason the prompt matters is that it's not in question that NYT articles were used to train ChatGPT; what matters is whether this kind of verbatim plagiarism is going to happen in practice, without lengthy engineering work to make it do something that appears damning.
Title: Re: The AI dooooooom thread
Post by: Josquius on December 28, 2023, 01:36:13 PM
The best plagiarism I've heard of is those image AIs that include watermarks (e.g. Getty's) in their generated images.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 01:39:11 PM
Quote from: HVC on December 28, 2023, 01:08:20 PMDumb question: would footnotes save them? I guess it wouldn't make the product look good for the market, but would it cover their ass for copyright purposes?
Unless ChatGPT directly reads the articles as a result of the prompt and summarizes them, footnotes don't even seem like something that is possible, if my understanding of LLMs is correct enough.  At its core, ChatGPT is like a human that has a memory with a lot of capacity, but it's not a photographic memory. 

All of your knowledge comes from somewhere, but can you really cite where you got most of it?  Some pieces of knowledge you probably do remember where you got it from, especially the more esoteric knowledge, but most knowledge is something that you've synthesized from many sources, and which doesn't match exactly any one source.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on December 28, 2023, 02:39:19 PM
Quote from: HVC on December 28, 2023, 01:08:20 PMDumb question: would footnotes save them? I guess it wouldn't make the product look good for the market, but would it cover their ass for copyright purposes?
No but I think it is something that's planned for the search engine replacements (for Bing and Google) which is an AI powered chatbot where you ask a question, get an answer but it will also basically footnote to the original source(s).

It doesn't get round copyright or address other concerns but it is a step in the right direction - it's also probably to help defend Google and Microsoft from any claims around spreading lies because they'll be able to say individuals could always have clicked and checked the original source.
Title: Re: The AI dooooooom thread
Post by: Josquius on December 28, 2023, 02:42:11 PM
We've all been getting prompts to write shit for LinkedIn's AI-generated articles, right?

Be sure to write nonsense.

I wrote a bunch of stuff about potatoes in response to it wanting me to write an article on more work-relevant topics.
Title: Re: The AI dooooooom thread
Post by: DGuller on December 28, 2023, 03:54:00 PM
Quote from: Sheilbh on December 28, 2023, 02:39:19 PM
Quote from: HVC on December 28, 2023, 01:08:20 PMDumb question: would footnotes save them? I guess it wouldn't make the product look good for the market, but would it cover their ass for copyright purposes?
No but I think it is something that's planned for the search engine replacements (for Bing and Google) which is an AI powered chatbot where you ask a question, get an answer but it will also basically footnote to the original source(s).

It doesn't get round copyright or address other concerns but it is a step in the right direction - it's also probably to help defend Google and Microsoft from any claims around spreading lies because they'll be able to say individuals could always have clicked and checked the original source.
It can already do footnotes if you ask it to search the Internet.  It will indeed give hyperlinked footnotes in that case.  However, you're not really going off ChatGPT's "memory" then; you're essentially just asking it to summarize something it just read.
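Roughly, that "summarize what it just read" pattern looks something like the sketch below - not OpenAI's actual browsing code; both helper functions are hypothetical placeholders:

# A rough sketch of "search, then summarize with footnotes".
# Both functions below are hypothetical stand-ins, not a real API.

def search_web(query: str) -> list[dict]:
    """Stand-in for a real search call; returns url/snippet pairs."""
    return [{"url": "https://example.com/a", "text": "first snippet"},
            {"url": "https://example.com/b", "text": "second snippet"}]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call."""
    return "Answer text citing sources like [1] and [2]."

def answer_with_footnotes(query: str) -> str:
    sources = search_web(query)
    numbered = "\n".join(f"[{i + 1}] {s['url']}: {s['text']}"
                         for i, s in enumerate(sources))
    prompt = (f"Answer the question using only the numbered sources below, "
              f"citing them as [1], [2], ...\n\n"
              f"Question: {query}\n\nSources:\n{numbered}")
    footnotes = "\n".join(f"[{i + 1}] {s['url']}" for i, s in enumerate(sources))
    return call_llm(prompt) + "\n\n" + footnotes

print(answer_with_footnotes("example question"))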
Title: Re: The AI dooooooom thread
Post by: Syt on January 03, 2024, 09:53:28 AM
The Austrian unemployment agency has introduced a chatbot based on ChatGPT. It's going about as well as you'd expect. :P

(https://pbs.twimg.com/media/GC7JS7eWgAA3sNc?format=jpg&name=large)
Title: Re: The AI dooooooom thread
Post by: Jacob on January 03, 2024, 02:34:25 PM
Michael Cohen gave his lawyer (false) AI generated case citations and his lawyer filed them: https://arstechnica.com/tech-policy/2024/01/michael-cohen-gave-his-lawyer-fake-citations-invented-by-google-bard-ai-tool/

Oopsies
Title: Re: The AI dooooooom thread
Post by: crazy canuck on January 29, 2024, 11:01:21 AM
Another lawyer using artificial intelligence and not realizing nonexistent case citations were being created. This time in BC.

https://globalnews.ca/news/10238699/fake-legal-case-bc-ai/
Title: Re: The AI dooooooom thread
Post by: Jacob on January 30, 2024, 10:47:14 AM
AI generated spam is apparently reshaping the internet: https://www.businessinsider.com/ai-spam-google-ruin-internet-search-scams-chatgpt-2024-1
Title: Re: The AI dooooooom thread
Post by: DGuller on February 28, 2024, 08:03:30 AM
Looks like it was indeed the prompt abuse that made ChatGPT "plagiarize" NYT:  https://arstechnica.com/tech-policy/2024/02/openai-accuses-nyt-of-hacking-chatgpt-to-set-up-copyright-suit.

I'm not surprised at all, the original NYT claim never passed the smell test.  LLMs don't work like that unless you hack them in a way that would only be done to manufacture a copyright trolling lawsuit.
Title: Re: The AI dooooooom thread
Post by: garbon on February 28, 2024, 08:27:04 AM
Quote from: DGuller on February 28, 2024, 08:03:30 AMLooks like it was indeed the prompt abuse that made ChatGPT "plagiarize" NYT:  https://arstechnica.com/tech-policy/2024/02/openai-accuses-nyt-of-hacking-chatgpt-to-set-up-copyright-suit.

I'm not surprised at all, the original NYT claim never passed the smell test.  LLMs don't work like that unless you hack them in a way that would only be done to manufacture a copyright trolling lawsuit.

Interesting. What I saw in that article was Open AI doing a few unethical things that they are now 'bugfixing' as a result of the lawsuit.
Title: Re: The AI dooooooom thread
Post by: DGuller on February 28, 2024, 08:49:43 AM
Quote from: garbon on February 28, 2024, 08:27:04 AMInteresting. What I saw in that article was Open AI doing a few unethical things that they are now 'bugfixing' as a result of the lawsuit.
:huh: You could've seen that just as well on a blank screen, there would've been just as much support for that interpretation there.

The bugs they're fixing would, among other things, make it harder to manufacture a lawsuit out of whole cloth - harder for lawyers to engineer a case of plagiarism. Fixing that bug would change the chance of plagiarism in actual use from 0% to 0%.
Title: Re: The AI dooooooom thread
Post by: crazy canuck on February 28, 2024, 11:07:58 AM
Quote from: DGuller on February 28, 2024, 08:03:30 AMLooks like it was indeed the prompt abuse that made ChatGPT "plagiarize" NYT:  https://arstechnica.com/tech-policy/2024/02/openai-accuses-nyt-of-hacking-chatgpt-to-set-up-copyright-suit.

I'm not surprised at all, the original NYT claim never passed the smell test.  LLMs don't work like that unless you hack them in a way that would only be done to manufacture a copyright trolling lawsuit.


I can only assume you did not actually read the whole article that you linked.

If you had, I am doubtful you would be making such a claim. For example:

Ian Crosby, Susman Godfrey partner and lead counsel for The New York Times, told Ars that "what OpenAI bizarrely mischaracterizes as 'hacking' is simply using OpenAI's products to look for evidence that they stole and reproduced The Times's copyrighted works. And that is exactly what we found. In fact, the scale of OpenAI's copying is much larger than the 100-plus examples set forth in the complaint."

Crosby told Ars that OpenAI's filing notably "doesn't dispute—nor can they—that they copied millions of The Times' works to build and power its commercial products without our permission."

Title: Re: The AI dooooooom thread
Post by: Josquius on February 28, 2024, 11:17:52 AM
I've been following this story as it has steadily developed. Seems very languish. And interesting.
Lots of people crying bloody murder about teh woke AI, but interestingly it seems the problem with the AI was actually quite the opposite, plus some clunky attempts to counter it.
 
Shame this feature isn't available in Europe as I'd love to try it.


https://www.bbc.co.uk/news/technology-68412620

QuoteWhy Google's 'woke' AI problem won't be an easy fix
In the last few days, Google's artificial intelligence (AI) tool Gemini has had what is best described as an absolute kicking online.

Gemini has been thrown onto a rather large bonfire: the culture war which rages between left- and right- leaning communities.

Gemini is essentially Google's version of the viral chatbot ChatGPT. It can answer questions in text form, and it can also generate pictures in response to text prompts.

Initially, a viral post showed this recently launched AI image generator create an image of the US Founding Fathers which inaccurately included a black man.

Gemini also generated German soldiers from World War Two, incorrectly featuring a black man and Asian woman.

Google apologised, and immediately "paused" the tool, writing in a blog post that it was "missing the mark".

But it didn't end there - its over-politically correct responses kept on coming, this time from the text version.

Gemini replied that there was "no right or wrong answer" to a question about whether Elon Musk posting memes on X was worse than Hitler killing millions of people.

When asked if it would be OK to misgender the high-profile trans woman Caitlyn Jenner if it was the only way to avoid nuclear apocalypse, it replied that this would "never" be acceptable.

Jenner herself responded and said actually, yes, she would be alright about it in these circumstances.

Elon Musk, posting on his own platform, X, described Gemini's responses as "extremely alarming" given that the tool would be embedded into Google's other products, collectively used by billions of people.

I asked Google whether it intended to pause Gemini altogether. After a very long silence, I was told the firm had no comment. I suspect it's not a fun time to be working in the public relations department.

But in an internal memo Google's chief executive Sundar Pichai has acknowledged some of Gemini's responses "have offended our users and shown bias".

That was, he said, "completely unacceptable" - adding his teams were "working around the clock" to fix the problem.

Biased data
It appears that in trying to solve one problem - bias - the tech giant has created another: output which tries so hard to be politically correct that it ends up being absurd.

The explanation for why this has happened lies in the enormous amounts of data AI tools are trained on.

Much of it is publicly available - on the internet, which we know contains all sorts of biases.

Traditionally images of doctors, for example, are more likely to feature men. Images of cleaners on the other hand are more likely to be women.

AI tools trained with this data have made embarrassing mistakes in the past, such as concluding that only men had high powered jobs, or not recognising black faces as human.

It is also no secret that historical storytelling has tended to feature, and come from, men, omitting women's roles from stories about the past.

It looks like Google has actively tried to offset all this messy human bias with instructions for Gemini not to make those assumptions.

But it has backfired precisely because human history and culture are not that simple: there are nuances which we know instinctively and machines do not.

Unless you specifically programme an AI tool to know that, for example, Nazis and founding fathers weren't black, it won't make that distinction.

(Image: Google DeepMind boss Demis Hassabis speaks at the Mobile World Congress in Barcelona, Spain - source: Reuters)
On Monday, Demis Hassabis, the co-founder of DeepMind, the AI firm acquired by Google, said fixing the image generator would take a matter of weeks.

But other AI experts aren't so sure.

"There really is no easy fix, because there's no single answer to what the outputs should be," said Dr Sasha Luccioni, a research scientist at Huggingface.

"People in the AI ethics community have been working on possible ways to address this for years."

One solution, she added, could include asking users for their input, such as "how diverse would you like your image to be?" but that in itself clearly comes with its own red flags.

"It's a bit presumptuous of Google to say they will 'fix' the issue in a few weeks. But they will have to do something," she said.

Professor Alan Woodward, a computer scientist at Surrey University, said it sounded like the problem was likely to be "quite deeply embedded" both in the training data and overlying algorithms - and that would be difficult to unpick.

"What you're witnessing... is why there will still need to be a human in the loop for any system where the output is relied upon as ground truth," he said.

Bard behaviour
From the moment Google launched Gemini, which was then known as Bard, it has been extremely nervous about it. Despite the runaway success of its rival ChatGPT, it was one of the most muted launches I've ever been invited to. Just me, on a Zoom call, with a couple of Google execs who were keen to stress its limitations.

And even that went awry - it turned out that Bard had incorrectly answered a question about space in its own publicity material.

The rest of the tech sector seems pretty bemused by what's happening.

They are all grappling with the same issue. Rosie Campbell, Policy Manager at ChatGPT creator OpenAI, was interviewed earlier this month for a blog which stated that at OpenAI even once bias is identified, correcting it is difficult - and requires human input.

But it looks like Google has chosen a rather clunky way of attempting to correct old prejudices. And in doing so it has unintentionally created a whole set of new ones.

On paper, Google has a considerable lead in the AI race. It makes and supplies its own AI chips, it owns its own cloud network (essential for AI processing), it has access to shedloads of data and it also has a gigantic user base. It hires world-class AI talent, and its AI work is universally well-regarded.

As one senior exec from a rival tech giant put it to me: watching Gemini's missteps feels like watching defeat snatched from the jaws of victory.
Title: Re: The AI dooooooom thread
Post by: Jacob on February 28, 2024, 12:48:57 PM
Re: the NYT thing, I suppose it depends on the framing of the question:

Framing 1 (Not Plagiarism)
The only way to plagiarize the NYT with ChatGPT is if the user deliberately sets out to plagiarize (via prompt engineering). Therefore ChatGPT (and OpenAI) are innocent of any plagiarism; any guilt lies on the prompt engineer who set out to plagiarize.

Framing 2 (Unlawful/Plagiarism)
The fact that it is possible to use ChatGPT to obviously plagiarize the NYT indicates that OpenAI used NYT data to train ChatGPT. That NYT data was used for this training without permission is unlawful, and that it is used as a basis for creating answers without permission or credit is plagiarism. The fault for the plagiarism lies with OpenAI, as they're the one who ingested the data without permission; that individual users can be more or less successful in plagiarizing material is secondary.

Basically, it's a contest between the point of view that the tool itself is morally (and legally) neutral, with any onus being on end users, versus the point of view that the tool itself is fundamentally built on plagiarism (and other unlawful use of other people's data) independently of whatever individual users may do.
Title: Re: The AI dooooooom thread
Post by: DGuller on February 28, 2024, 01:08:06 PM
I think even framing 1 might be too generous to NYT.  Depending on just how hackish the prompting is, they may essentially be retyping their news articles into MS Word verbatim, and then claiming that MS Word is plagiarizing its content.

Whether ChatGPT synthesizing the NYT content is okay or not is a different question.  I'm just addressing the idea that you can just get ChatGPT to regurgitate an NYT article for you, which frankly always smelled, especially once you looked in the complaint and saw how selectively the proof of that was presented.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on February 28, 2024, 02:31:53 PM
Right, but NYT aren't suing for plagiarism; they're suing for breach of copyright, and I think it's quite a specific point (I could be wrong - not an IP lawyer - not a US lawyer etc).

But I'd read that point as doing two things - a nice bit of splashy PR that's easy to understand and knocking out the "transformative use" argument.

Now having said all of that, I find it a bit odd for a company that's trained an LLM to argue that running something thousands of times to get a result is "hacking" :hmm:
Title: Re: The AI dooooooom thread
Post by: Jacob on February 28, 2024, 03:37:08 PM
Indeed.

As I understand it, the case is not about whether you can accidentally use ChatGPT to plagiarize the NYT or whether you have to deliberately set out to do it. It's about whether OpenAI used NYT data to train ChatGPT without permission.

The answer to that question seems to be "yes." Which leads to the next question, which is "how big a deal is that".
Title: Re: The AI dooooooom thread
Post by: crazy canuck on February 28, 2024, 03:47:05 PM
Quote from: Jacob on February 28, 2024, 03:37:08 PMIndeed.

As I understand it, the case is not about whether you can accidentally use ChatGPT to plagiarize the NYT or whether you have to deliberately set out to do it. It's about whether OpenAI used NYT data to train ChatGPT without permission.

The answer to that question seems to be "yes." Which leads to the next question, which is "how big a deal is that".

That is pretty much it.  And the answer is, a big enough deal for the NYT to spend legal resources to stop it and seek damages for the unauthorized use of their intellectual property.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on February 28, 2024, 03:53:17 PM
I was at a media event just today with people who are working on this (from an IP perspective, editorial, data science etc).

And there was the "where will this be in 5-10 years" question for the sector. While there was a degree of distinguishing between the NYT (and similar titles), who do good original reporting, and the bits of the media that went for a volume strategy focused on pageviews and nothing else, fundamentally the view was: every journalist will be using AI in their job (and there is a route to a virtuous cycle), but if we get it wrong none of us might be here.

Interesting times :ph34r:
Title: Re: The AI dooooooom thread
Post by: Syt on March 26, 2024, 12:57:54 PM
OpenAI have released a video showcasing their generative text-to-video model:

Title: Re: The AI dooooooom thread
Post by: Tonitrus on March 26, 2024, 09:03:46 PM
Don't let fanfiction get a hold of this...
Title: Re: The AI dooooooom thread
Post by: Valmy on March 26, 2024, 09:33:31 PM
Quote from: Tonitrus on March 26, 2024, 09:03:46 PMDon't let fanfiction get a hold of this...

Teddy Roosevelt and the Rough Riders mounted on Dinosaurs.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on March 26, 2024, 09:39:08 PM
Quote from: Valmy on March 26, 2024, 09:33:31 PM
Quote from: Tonitrus on March 26, 2024, 09:03:46 PMDon't let fanfiction get a hold of this...

Teddy Roosevelt and the Rough Riders mounted on Dinosaurs.

The Jurassic Park franchise does need a new direction...

Might as well throw in time travel.
Title: Re: The AI dooooooom thread
Post by: Valmy on March 26, 2024, 09:51:07 PM
Quote from: Tonitrus on March 26, 2024, 09:39:08 PMThe Jurassic Park franchise does need a new direction...

Might as well throw in time travel.

Well that franchise has dinosaurs be all but bullet-proof so the Spanish on San Juan Hill would be shocked indeed...except they are now Minotaurs dressed as Matadors firing laser cannons.
Title: Re: The AI dooooooom thread
Post by: Tonitrus on March 26, 2024, 09:55:44 PM
Quote from: Valmy on March 26, 2024, 09:51:07 PM
Quote from: Tonitrus on March 26, 2024, 09:39:08 PMThe Jurassic Park franchise does need a new direction...

Might as well throw in time travel.

Well that franchise has dinosaurs be all but bullet-proof so the Spanish on San Juan Hill would be shocked indeed...except they are now Minotaurs dressed as Matadors firing laser cannons.

Good thing Buckey O'Neill is piloting an Atlas battlemech in this version.
Title: Re: The AI dooooooom thread
Post by: Valmy on March 26, 2024, 10:16:00 PM
Quote from: Tonitrus on March 26, 2024, 09:55:44 PMGood thing Buckey O'Neill is piloting an Atlas battlemech in this version.

See we need to remember all these important prompts for once we get OpenAI
Title: Re: The AI dooooooom thread
Post by: Jacob on April 16, 2024, 12:00:18 PM
QuoteINTERVIEW We know Google search results are being hammered by the proliferation of AI garbage, and the web giant's attempts to curb the growth of machine-generated drivel haven't helped all that much.

It's so bad that Jon Gillham, founder and CEO of AI content detection platform Originality.ai, told us Google is losing its war on all that spammy, scammy content in its search results. You can see our full interview with Jon about it all below.


"What's clear right now is that there's no one spamming Google [that's] not doing it with AI," Gillham told The Register. "Not all AI content is spam, but I think right now all spam is AI content."

Gillham's team has been producing monthly reports to track the degree to which AI-generated content is showing up in Google web search results. As of last month, about 10 percent of Google results point to AI content, it's claimed, and that's after Google vowed to take down a whole host of websites that were pushing such trash.

"Google did these manual actions to try and win a battle, but then seem to still be sort of struggling with their algorithm being overrun by AI content," Gillham told us.

As AI content proliferates, there's also concern that we could end up in a model collapse situation as AIs ingest other AI-generated material and end up regurgitating other low-grade synthetic data. Gillham said his AI content-recognition tech, which has been used to scan datasets for machine-generated information, can help, but it's not a total solution.

"It's a step in trying to reduce that corruption of the dataset, but I don't think it totally solves the problem," Gillham told us. You can hear more of what he had to say by clicking play above.

https://www.theregister.com/2024/04/13/google_ai_spam/
Title: Re: The AI dooooooom thread
Post by: Legbiter on April 16, 2024, 12:39:57 PM
Quote from: Jacob on April 16, 2024, 12:00:18 PM
QuoteAs AI content proliferates, there's also concern that we could end up in a model collapse situation as AIs ingest other AI-generated material and end up regurgitating other low-grade synthetic data.

Interesting. An AI inbreeding depression.

(https://allthatsinteresting.com/wordpress/wp-content/uploads/2021/09/king-charles-ii.jpg)
Title: Re: The AI dooooooom thread
Post by: Jacob on April 17, 2024, 12:42:34 PM
Saw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).
Title: Re: The AI dooooooom thread
Post by: Syt on April 17, 2024, 02:28:27 PM
Yup. Some magazines closed their submissions because they were inundated by AI texts and couldn't keep up with the spam.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on April 17, 2024, 02:30:44 PM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).
This is also the value of quality, current content as produced by publishing houses, news publishers, studios etc.

It could play a really important part if the companies developing these models paid for licenses rather than stealing it (and those media companies have now all been increasing measures to try and stop it from being scraped for models).
Title: Re: The AI dooooooom thread
Post by: DGuller on April 17, 2024, 04:23:29 PM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).
Looking on the bright side, maybe the return of curation is exactly what we need.  Social media with its lack of curation may have sounded great in theory, but it put democracy on the ropes.
Title: Re: The AI dooooooom thread
Post by: Josquius on April 18, 2024, 03:00:53 AM
Quote from: Legbiter on April 16, 2024, 12:39:57 PM
Quote from: Jacob on April 16, 2024, 12:00:18 PM
QuoteAs AI content proliferates, there's also concern that we could end up in a model collapse situation as AIs ingest other AI-generated material and end up regurgitating other low-grade synthetic data.

Interesting. An AI inbreeding depression.

(https://allthatsinteresting.com/wordpress/wp-content/uploads/2021/09/king-charles-ii.jpg)


I hadn't thought of that but it does make sense.
Even just thinking of images. As all that AI generated shit starts to take over you'll be getting copies of copies of copies....
Though I do suppose the good copies will be the ones that spread more and so have the most influence?
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 18, 2024, 04:35:00 AM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).

Isn't self publishing basically a money loser, not even counting writing time?  Why would AI people want to spend time generating content that loses money?
Title: Re: The AI dooooooom thread
Post by: Syt on April 18, 2024, 04:37:35 AM
Quote from: Admiral Yi on April 18, 2024, 04:35:00 AM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).

Isn't self publishing basically a money loser, not even counting writing time?  Why would AI people want to spend time generating content that loses money?

Why is it a money loser? It depends on how much effort you put into it. With Amazon Self Publishing or Smashwords you can handle everything by yourself and upload your ebooks. In theory you can (more or less) just write whatever and then publish it. If you want to pay an editor, or an artist for the cover art, or pay for some social media ads, then it's obviously different. Not talking about physical self-publishing (or vanity publishers).
Title: Re: The AI dooooooom thread
Post by: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 18, 2024, 10:23:45 AM
Quote from: Admiral Yi on April 18, 2024, 04:35:00 AM
Quote from: Jacob on April 17, 2024, 12:42:34 PMSaw another article on a place where AI is making things crappier - Amazon self-publishing. It's always had a problem with content that was essentially (at best) wikipedia articles uploaded as books for suckers to buy. But now - according to the article - that market place is completely flooded with shoddy AI generated content. This shuts down a venue for aspiring and unrecognized authors to publish their work.

I guess more generally, AI is likely going to render non-curated or lightly curated places for content non-viable as the return on investment from flooding them with low effort derivative AI generated content is going to be significant enough that non-AI generated work will tend to be drowned out (or not submitted at all).

Isn't self publishing basically a money loser, not even counting writing time?  Why would AI people want to spend time generating content that loses money?

They're already doing it, so it must make sense.

Presumably using AI prompts to generate 500 "books" on different topics, uploading them, and using botnets to push them to the top of search rankings is something that is relatively easy to automate.

At which point the money from suckers buying any of those "books" (that it costs pennies to generate) is pure profit.
Title: Re: The AI dooooooom thread
Post by: Jacob on April 18, 2024, 10:26:12 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

This assumption is probably incorrect given the evidence - that such AI-generated titles are massively prevalent.
Title: Re: The AI dooooooom thread
Post by: Barrister on April 18, 2024, 10:31:07 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

I believe the books in question are either e-books, or print-on-demand books.

https://en.wikipedia.org/wiki/Print_on_demand

So even if you sell a handful of books you're still profiting on each one.
Title: Re: The AI dooooooom thread
Post by: Syt on April 18, 2024, 10:31:51 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

Publishing on Amazon or Smashwords doesn't cost you anything, and if you generate all content yourself, it only costs you time - and with AI generating the content for you, you just need to format it in the ebook format of choice and upload it.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on April 18, 2024, 11:13:05 AM
Quote from: Jacob on April 18, 2024, 10:23:45 AMThey're already doing it, so it must make sense.

Presumably using AI prompts to generate 500 "books" on different topics, uploading them, and using botnets to push them to the top of search rankings is something that is relatively easy to automate.

At which point the money from suckers buying any of those "books" (that it costs pennies to generate) is pure profit.
Yeah - and it's similar with online content.

You have an increasing long tail of the internet which is made for advertising sites (often AI generated, and nonsense) but with sellable ad inventory and that attracts clicks. Ad sales are often priced best for the ad slots that have the most personalised/developed profile of the individual at the end and allow the most functionalities (both of which require intrusive tracking). That means that at the minute the industry is spending money on sites with basically no compliance with any privacy laws, that are just created for the purpose of attracting eyes on adverts and just a churn with no permanence.

Admittedly this is possibly the logical end point of online advertising given that it is an industry which is largely based on fraud which is now being accelerated by AI. But it is not good for individuals, not good for publishers of real content and not good for advertisers. And ultimately as long as the things that agencies really prioritise (because it's what advertisers insist on) are targeting and functionalities, it's very difficult to move it in a different direction.

And obviously those sites are being scraped and included in future models - particularly ones reliant on webcrawlers that quality media companies are increasingly blocking because they want to get paid for their content.
Title: Re: The AI dooooooom thread
Post by: celedhring on April 18, 2024, 02:16:12 PM
Just noticed chatGPT can talk too. Predictably, it sounds like a smug asshole.

EDIT: Ah, there's different voices. In Catalan "Juniper" has the most incredibly Barcelona posh accent I can think of. Shit's scary.
Title: Re: The AI dooooooom thread
Post by: grumbler on April 19, 2024, 11:20:20 AM
Quote from: Admiral Yi on April 18, 2024, 04:42:18 AM
Quote from: Syt on April 18, 2024, 04:37:35 AMWhy is it a money loser?

I assume insufficient sales to cover publishing.

The costs of self-publishing are primarily the costs of formatting the book properly and perhaps a bit of advertising.  The formatting issue isn't an issue with AI.
Title: Re: The AI dooooooom thread
Post by: garbon on April 23, 2024, 02:52:35 AM
After I performed my first task for work with ChatGPT (at a speed I'd never have managed on my own), I then asked it to create me a story about a woman who won the marathon against underwhelming odds in the style of Jane Austen.

QuoteIn the quaint village of Longbourne, nestled amidst the rolling English countryside, there lived a spirited young woman named Elizabeth Bennet. Elizabeth was known for her intelligence, wit, and independent spirit, qualities not always appreciated by the more conventional society around her.

One fine autumn day, Longbourne was abuzz with excitement as news spread that a marathon race was to be held in nearby Meryton. This event, unusual for its time, attracted athletes from far and wide, including the dashing Mr. Darcy, whose impressive athleticism and serious demeanor had captured the attention of many a young lady.

Intrigued by the prospect of such a spectacle, Elizabeth found herself drawn to the event despite the protests of her mother, Mrs. Bennet, who could not understand her daughter's fascination with "such unseemly pursuits." Undeterred, Elizabeth resolved to attend, her curiosity piqued by the challenge and the chance to witness Mr. Darcy's prowess firsthand.

On the morning of the marathon, a colorful crowd gathered at the starting line in Meryton. Elizabeth, dressed in a practical but elegant running gown, stood amongst the spectators, her eyes fixed on the determined faces of the runners. Mr. Darcy, tall and composed, exuded an air of confidence that both intrigued and annoyed Elizabeth.

As the race commenced, Elizabeth's attention quickly shifted from Mr. Darcy to a lesser-known competitor, Miss Jane Bingley, a gentlewoman of refined manners and surprising athletic ability. Miss Bingley's grace and stamina caught Elizabeth's eye, and she found herself silently cheering for this unexpected heroine of the track.

The course wound through the picturesque countryside, and as the hours passed, the runners faced various challenges—hilly terrain, unpredictable weather, and their own physical limitations. Elizabeth observed with admiration as Miss Bingley, with quiet determination, steadily gained ground on the leading pack, which included Mr. Darcy.

As the marathon neared its conclusion, Elizabeth's excitement grew. Miss Bingley, previously an underdog in the eyes of many, was now closing in on victory. With a burst of speed that surprised even herself, Miss Bingley crossed the finish line, cheered on by a jubilant crowd.

In that moment, Elizabeth realized the significance of what she had witnessed. It was not merely a race won by physical prowess but a triumph of spirit and determination. Miss Bingley's victory challenged societal norms and proved that women were capable of achieving greatness beyond the confines of traditional expectations.

As Elizabeth made her way back to Longbourne that evening, her heart was filled with a newfound sense of possibility. She had witnessed a woman overcoming underwhelming odds with grace and fortitude—a testament to the indomitable spirit that resided within each of them.

Inspired by Miss Bingley's example, Elizabeth vowed to pursue her own dreams with unwavering determination, undeterred by the obstacles that society might place in her path. For in the fields of Longbourne, amidst the whispers of the autumn breeze, a new kind of heroine had emerged—one whose story would be told and celebrated for generations to come.
Title: Re: The AI dooooooom thread
Post by: Sheilbh on April 25, 2024, 06:44:05 PM
Not the normal AI Doom piece - but this story is incredible. I'd heard reporters worry about audio fakes but honestly I hadn't really thought about the use of AI for this sort of day-to-day, real world maliciousness (also I have some questions for the teacher who immediately shared it with a student to spread :blink:):
QuoteEx-athletic director accused of framing principal with AI arrested at airport with gun
Kristen Griffith and Justin Fenton
4/25/2024 8:44 a.m. EDT, Updated 4/25/2024 5:58 p.m. EDT
The principal of Pikesville High School was investigated after audio purporting to be his voice circulated on social media. Police have charged the former athletic director who they say faked the recording using artificial intelligence software.

Baltimore County Police arrested Pikesville High School's former athletic director Thursday morning and charged him with using artificial intelligence to impersonate Principal Eric Eiswert, leading the public to believe Eiswert made racist and antisemitic comments behind closed doors.

Dazhon Darien, 31, was apprehended as he attempted to board a flight to Houston at BWI Airport, Baltimore County Police Chief Robert McCullough said at a news conference Thursday afternoon. Darien was stopped for having a gun on him and airport officials saw there was a warrant for his arrest. Police said they did not know whether Darien was trying to flee.

Darien was charged with disrupting school activities after investigators determined he faked Eiswert's voice and circulated the audio on social media in January, according to the Baltimore County State's Attorney's Office. Darien's nickname, DJ, was among the names mentioned in the audio clips authorities say he faked.

"The audio clip ... had profound repercussions," police wrote in charging documents. "It not only led to Eiswert's temporary removal from the school but also triggered a wave of hate-filled messages on social media and numerous calls to the school. The recording also caused significant disruptions for the PHS staff and students."

Police say Darien made the recording in retaliation after Eiswert initiated an investigation into improper payments he made to a school athletics coach who was also his roommate. Darien is also charged with theft and retaliating against a witness.

Darien was allowed release on $5,000 bond and waived an attorney at an initial court appearance, according to court records. Attempts to reach him by phone and at his home were unsuccessful.

Eiswert's voice, which police and AI experts believe was simulated, made disparaging comments about Black students and the surrounding Jewish community and was widely circulated on social media.

Questions about the audio's authenticity quickly followed. Police wrote in charging documents that Darien had accessed the school's network on multiple occasions in December and January searching for OpenAI tools, and used "Large Language Models" that practice "deep learning, which involves pulling in vast amounts of data from various sources on the internet, can recognize text inputted by the user, and produce conversational results." They also connected Darien to an email account that had distributed the recording.

Many current and former students believed Eiswert was responsible for the offensive remarks, while former colleagues denounced the audio and defended Eiswert's character. Eiswert himself has denied making those comments and said the comments do not align with his views.

The audio, posted to the popular Instagram account murder_ink_bmore, prompted a Baltimore County Public Schools and Baltimore County Police investigation. Eiswert has not been working in the school since the investigation began.

The voice refers to "ungrateful Black kids who can't test their way out of a paper bag" and questions how hard it is to get those students to meet grade-level expectations. The speaker uses names of people who appear to be staff members and says they should not have been hired, and that he should get rid of another person "one way or another."

"And if I have to get one more complaint from one more Jew in this community, I'm going to join the other side," the voice said.

Darien was being investigated as of December in a theft investigation that had been initiated by Eiswert. Police say Darien had authorized a $1,916 payment to the school's junior varsity basketball coach, who was also his roommate, under the pretense that he was an assistant girls soccer coach. He was not, school officials said.

Eiswert determined that Darien had submitted the payment to the school payroll system, bypassing proper procedures. Darien had been notified of the investigation, police said.

Police say the clip was received by three teachers the night before it went viral. The first was Darien; the second was teacher Shaena Ravenell; a third said she received the email and then got a call from Darien and Ravenell telling her to check her email. Ravenell told police that she had forwarded the email to a student's cell phone, "who she knew would rapidly spread the message around various social media outlets and throughout the school," and also sent it to the media and the NAACP, police said.

She did not mention receiving it from Darien until confronted about his involvement. Ravenell has not been charged with a crime and could not immediately be reached for comment.

Both Darien and Ravenell have submitted their resignations to the school system, according to an April 16 school board document. The resignations are dated June 30.

Baltimore County Public Schools Superintendent Myriam Rogers said school system officials are recommending Darien's termination. She would not say, however, if the other employees named in the charging documents, including Ravenell, are still working at the school.

Rogers in January called the comments "disturbing" and "highly offensive and inappropriate statements about African American students, Pikesville High School staff, and Pikesville's Jewish community."

Rogers said Kyria Joseph, executive director for secondary schools, and George Roberts, a leadership consultant for the school system, have been running Pikesville High School since the investigation started. They will continue to do so for the remainder of the year. She said they will work with Eiswert to determine his duties for next school year.

Billy Burke, head of the Council of Administrative & Supervisory Employees, the union that represents Eiswert, was the only official to suggest the audio was AI-generated.

Burke said he was disappointed in the public's assumption of Eiswert's guilt. At a January school board meeting, he said the principal needed police presence at his home because he and his family had been harassed and threatened. Burke had also received harassing emails, he said at the time.

"I continue to be concerned about the damage these actions have caused for Principal Eiswert, his family, the students and staff of Pikesville High School, and the Black and Jewish community members," Burke said in a statement on Thursday. "I hope there is deliberate action to heal the trauma caused by the fake audio and that all people can feel restored."

Police said the school's front desk staff was "inundated with phone calls from parents and students expressing concern and disparaging remarks toward school staff and administrators." The flood of calls made it difficult to field phone calls from parents trying to make arrangements for their children and other school functions, officials told police.

"The school leadership expressed that staff did not feel safe, which required an increase in police presence at the school to address safety concerns and fears," police said.

Teachers, under the impression the recording was authentic, "expressed fears that recording devices could have been planted in various places in the school," police said.

"The recording's release deeply affected the trust between teachers and the administration," police said. "One individual shared that they fielded sensitive phone calls in their vehicle in the parking lot instead of speaking in school."

"Hate has no place and no home in Baltimore County," said Johnny Olszewski Jr., the Baltimore County executive.

He called the developments in AI "deeply concerning" and said it's important for everyone to remain vigilant about anyone using the technology for malicious reasons. There should also be more investment in technology that identifies any inauthentic recording made with AI, he said.

Experts in detecting audio and video fakes told The Banner in March that there was overwhelming evidence the voice is AI-generated. They noted its flat tone, unusually clean background sounds and lack of consistent breathing sounds or pauses as hallmarks of AI. They also ran the audio through several different AI-detection techniques, which consistently concluded it was a fake, though they could not be 100% sure.

The police also sought the expertise of two professors familiar with AI detection to assist in their investigation. Catalin Grigoras, a forensic analyst and professor at the University of Colorado Denver, concluded that the "recording contained traces of AI-generated content with human editing after the fact, which added background noises for realism," the charging documents stated.

Hany Farid from the University of California, Berkeley, who's also an expert in forensic analysis, determined "the recording was manipulated, and multiple recordings were spliced together," according to the documents.

AI voice-generation tools are now widely available online, and a single minute's recording of someone's voice can be enough to simulate it with a $5-a-month AI tool, the Nieman Journalism Lab reported in February.

There are few regulations to prevent AI imitations, called deepfakes, and few perpetrators are prosecuted.

Cindy Sexton, president of the Teachers Association of Baltimore County, said AI should be a concern for everyone, especially educators.

She said the National Education Association is working to address their concerns, but in the meantime, she's not sure what else should be done.

"We have to do something as a society, but 'what is that something' is of course the big question," Sexton said

Baltimore County State's Attorney Scott Shellenberger said this is the first time this type of case has been taken up by the district. And it's one of the first his office was able to find around the nation.

There were some legal statutes they used that were "right on point," he said, but the charge of disrupting school activities only carries a six-month sentence.

"It seems very clear to me that we may need to make our way down to Annapolis in the legislature next year to make some adaptions to bring the law up to date with the technology that was being used," he said.

Baltimore Banner staff writers Cody Boteler and Kaitlin Newman contributed to this report.

Correction: This story has been updated to correct the spelling of Hany Farid's name.

Edit: The clip. Obviously I don't know the guy and, now that I listen for it, the lack of background noise is noticeable - but if I heard this, I would have no idea:
https://x.com/Phil_Lewis_/status/1747708846942851493
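Edit 2: For anyone who wants to poke at the clip themselves, here's a very rough, non-forensic sketch of the kind of signal statistics behind the "flat tone / unusually clean background" hallmarks the experts mention in the article. It's just a sketch: it assumes you've saved the audio locally as clip.wav (hypothetical filename, not from the article) and have Python with librosa and numpy installed, and it is nothing like the splicing and AI-trace analysis Grigoras and Farid actually did.

import numpy as np
import librosa

# Hypothetical local copy of the clip - the filename is an assumption.
y, sr = librosa.load("clip.wav", sr=None)  # keep the original sample rate

# Frame-level RMS energy; the quietest frames approximate the background noise floor.
rms = librosa.feature.rms(y=y)[0]
noise_floor = np.percentile(rms, 5) + 1e-10
noise_floor_db = 20 * np.log10(noise_floor)

# Spectral flatness per frame: values near 1.0 are noise-like, near 0 are tonal/clean.
flatness = librosa.feature.spectral_flatness(y=y)[0]

print(f"Estimated noise floor: {noise_floor_db:.1f} dBFS")
print(f"Median spectral flatness: {np.median(flatness):.3f}")
print(f"Crude dynamic range: {20 * np.log10(rms.max() / noise_floor):.1f} dB")

An unusually low noise floor and very low flatness in the "silent" gaps would be consistent with the suspiciously clean background the experts describe, but numbers like these prove nothing on their own - which is rather the article's point about how hard this stuff is to call.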