The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Jacob

It certainly indicates that the FOMO has abated a bit

crazy canuck

The AI companies are all borrowing hundreds of billions of dollars through the bond markets to raise the capital necessary to build what they hope will produce something that will create a revenue stream to pay for all of that debt.

The folks in private equity are beginning to realize this is not a very good bet. Especially in the world of private equity, long-term hold strategies don't make a lot of sense. And that's even for the optimistic folks who think all of this is going to turn out rainbows and lollipops.

Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

Sheilbh

Quote from: Jacob on April 03, 2026, 08:18:23 PM
It certainly indicates that the FOMO has abated a bit
Maybe - the churn is interesting because basically everyone is saying Anthropic is far better right now. I've spoken to journalists who like Claude in a way that they just don't with ChatGPT etc, and it is also the one that comes up with engineers. Amid all the fight with the DoD, the thing that struck me was just how much they wanted to be using Claude (or to put it another way, the US government would rather expropriate Anthropic than have to use Grok :lol:).

So it may be the abatement of FOMO, or possibly a different type of FOMO. Looking at the last 20-30 years of tech, it tends to consolidate fairly quickly around a dominant player. If a market perception forms that the winner is Anthropic - or, perhaps as importantly, that it's not OpenAI - then I think that would play out pretty quickly and decisively, as no-one wants to be left holding vast amounts of MySpace and Yahoo equity.
Let's bomb Russia!

The Minsky Moment

Anthropic is showing well because they actually have something resembling a rational commercial and corporate strategy: keep capital expenditure within reason (i.e. within reason for AI companies - it is still off the charts by normal standards) and focus on the most effective and potentially profitable use cases.

OpenAI is still in the underpants gnome strategy of throwing gobs of investor cash at "compute" and masses of data and hoping that AGI magically emerges. The problems with that, aside from the obvious, include: (1) they are competing with Google, which has far more resources of its own to back it up; (2) they are also competing with much cheaper public models, especially from China, that are far more cost-efficient but still 95% as performant; (3) they have no "secret sauce" to give them the edge at the premium end.

Under a cold analysis, it's hard to see OpenAI lasting into the 2030s, EXCEPT that Microsoft is the largest shareholder, and it is hard to see them watching OpenAI crash and burn completely without some kind of intervention.
We have, accordingly, always had plenty of excellent lawyers, though we often had to do without even tolerable administrators, and seem destined to endure the inconvenience of hereafter doing without any constructive statesmen at all.
--Woodrow Wilson

Zanza

Meta is wasting gigantic amounts of money on AI too, and apparently their results are so underwhelming that they do not even publish their foundation model, unlike Google, OpenAI or Anthropic.

Syt

Their Metaverse will make up for that. OH WAIT
We are born dying, but we are compelled to fancy our chances.
- hbomberguy

Proud owner of 42 Zoupa Points.

garbon

Some quality reporting in Esquire?

https://aftermath.site/one-piece-zoro-mackenyu-esquire-ai-interview/
Quote
I Feel Like I'm Going Insane

Esquire decided to interview an AI homunculus of a One Piece star instead of the real guy. I cannot believe anyone thought this was a good idea!

For every action, there must be an equal and opposite reaction. Sir Isaac Newton said this, presumably after a long day of scrolling whatever the social media platform of the time was. To wit, the moment I finished reading an excellent essay by Marisa Kabas about refusing to accept an AI-poisoned future of journalism, I encountered the following headline: "Esquire AI-Generated A Fake Interview With Live-Action One Piece Actor Mackenyu Because He Was Busy." Either I'm losing my mind, or the world is.

This isn't one of those cases where a publication quietly leaned on AI and only copped to it after getting called out. Esquire Singapore fully admitted what it was doing when it published the story and used it as a supposed selling point! From the piece:
Quote
We were stoked to have some face time with the Japanese-American actor, but his schedule prevented it. So, we opted for e-mail correspondence. A list of queries was sent his way, and we waited. The silence continued until it was quickly replaced by a ticking clock as deadlines loomed.

We had the photospread, but nothing directly uttered by the 29-year-old. With a driving need for a feature, we had to be inventive. Harnessing our creative license, we pulled his verbatim from previous interviews and fed them through an AI programme to formulate new responses.

Are these the words we expect from Mackenyu? Or are they just replies from an echo chamber of celebrity-hood that we want to believe is from him?

Clearly the latter, you fools! You hacks! You credulous dipshits!

This groundbreaking new approach to lying produced riveting exchanges like:

Quote
ESQ: Any advice on how to deal with pressure and expectations?

(AI) M: I separate pressure from weight. Pressure is external; like people's expectations. That I can't control, but the weight of family legacy... the goal isn't to match my father. It's to make him proud, and maybe inspire someone else to do the same. Pressure can crush you, but weight can ground you.

And:

Quote
ESQ: What has fatherhood taught you?

(AI) M: That you can't rehearse it. (laughs) Everything else in my life I can prepare. Fatherhood has no script. No second take. You're just there, and you figure it out in real time. It's humbling in a way nothing else is.

You were talking to a chatbot! It did not laugh! Shut the fuck up! Also, as Kotaku notes, the chatbot certainly never knew Mackenyu's father, deceased action star Sonny Chiba, and I cannot think of a single person in their right mind who would ask a predictive text generator a question so probing and personal about the feelings of a human being who's very much still alive. That is deranged behavior! To be clear, I believe, on no uncertain terms, that the person who wrote this is deranged!

I cannot believe I even have to say this, but if you're trying to publish an interview feature, and you're unable to procure the interview in question, then you scrap the story. There is no "driving need" for a piece that supersedes that. The world was not crying out for this essential dollop of PR fluff. You can find interviews with Mackenyu, specifically, on numerous websites and, of course, YouTube. If anything, all Esquire has demonstrated here is that this kind of journalism matters so little that it can be farmed out to a robot homunculus and still pass muster.

It is bonkers to me that anyone thought this was a good idea—let alone that multiple people (if we include editors) presumably did. They should all hang their heads in shame forever, quit their jobs immediately, and give them to a few of the thousands of vastly more deserving reporters who, in a twist of fate that borders on maniacal, are currently out of work. These people would be better served casting away their old lives and embarking on a journey to find the actual One Piece, a treasure I'm well aware is fictional. Despite that rather substantial stumbling block, they would still find more success in that arena than in this one.

This is what happens when AI rots journalists' brains to the point that they can't discern the difference between a good idea and a terrible one—to the point that they can only conceive of angles that involve AI.

That in mind, a salient section from Kabas' piece:

Quote
If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You're not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave. No one is making you be a journalist; it's not one of those jobs parents force you to choose, like a doctor or a lawyer. Journalism, while romanticized in popular culture, is generally unglamorous and poorly paid, with progressively worse job opportunities (no thanks to AI.) I'm careful not to refer to it as a calling because that seems to excuse sacrificing mental health in service of craft, but I do believe that it's a job that can't be forced. It's obvious to readers when your heart isn't in it.
"I've never been quite sure what the point of a eunuch is, if truth be told. It seems to me they're only men with the useful bits cut off."
I drank because I wanted to drown my sorrows, but now the damned things have learned to swim.

garbon

I also saw one of the commenters pointing out that Vanity Fair also decided to get in on fake AI interviewing, with a made-up account of an interview with the founder of Anthropic.

Now, to be fair, it looks like the writer was trying to make some sort of commentary on AI by including a fake interview section in his longer piece and only later highlighting that it had been made up.

To save you all some time, I'll copy out the admission part first and then post the lengthy made-up interview after.

https://www.vanityfair.com/news/story/dario-amodei-anthropic-ai
Quote
None of that is real, you should know. Only a butterfly dream.

Dario Amodei never actually gave me an interview, stiffing me after months of planning. So I created a version of "Dario Amodei" using his own machine. I fed Claude several published interviews, including everything Amodei said at Davos, plus the contents of his two books of essays, and told Claude to simulate the interview using variations on real quotes—and to make it like a scene from Raymond Chandler's The Big Sleep. It took Claude less than three minutes.

You probably couldn't tell the difference. And maybe there isn't one—maybe AI Amodei was even more honest with me than real Amodei could afford to be. Disorienting? Hallucinatory? Ethically dubious? A little bit amazing?

It's me, Joe Hagan. Welcome to AI country.

And that fake AI interview:
Quote
The Man Inside

The shoes are what get me first. Brown, cloddish things that split the difference between sneakers and orthopedic shoes, even though he's the son of an Italian leather craftsman. Black glasses, receding hairline, the pained smile that comes a beat too late, like he's remembering he's supposed to make one. Rick Moranis in Ghostbusters, without the colander.

"Joe," says Amodei. We shake.

"You stood me up," I say.

His team had scheduled and rescheduled our interview for three months. When the company announced a new round of funding, valuing Anthropic at $350 billion, Amodei ran off to Switzerland and left me in the lurch.

"Davos," he says, settling into his chair. "Strategy meetings."

"And I wasn't strategic enough," I joke, pulling out my notebook.

We're in a conference room at Anthropic's headquarters. Wood-paneled walls, bright fluorescent lights, a table. Same building where I'd watched Ghiglieri's dog and pony show, the goth philosopher, and Trenton delivering his sunscreen line. Amodei has just published the sequel to Machines of Loving Grace, called The Adolescence of Technology. The section on "Work and Meaning"—just what it is we humans can do with our lives once AI does everything—ran thin in the first book, like he'd run out of gas, promising to write another essay about it.

"That section was underdeveloped," he admits. "I've thought about it since. But I'm not closer to anything satisfying."

Why not?

"Meaning isn't an engineering problem. I don't feel like I have the answer."

In the new essay he predicts AI will displace half of all entry-level white-collar jobs in the next one to five years. At Davos, he talked about high GDP and high unemployment happening simultaneously.

"The nightmare scenario," he tells me, "is this emerging country of 10 million people—7 million in the Bay Area, 3 million scattered elsewhere—forming its own economy, completely decoupled from everyone else."

An empire of AI, you might say. Earlier this year, Anthropic tanked the stock market when traders woke up and realized the company's tools could eat entire industries for breakfast. The company twisted the knife with Super Bowl ads mocking OpenAI's slop, and Altman fired back, sniping that Anthropic wanted to be the traffic cop of AI.

Weeks later, the two stood onstage with Prime Minister Narendra Modi of India, who asked a bunch of AI leaders to hold hands. Altman and Amodei raised their fists instead. Kids on a playground. Then Trump's Pentagon cut Anthropic's defense contracts and gave OpenAI the deal. Punishment, some said, for Amodei comparing the president to a feudal warlord and privately urging people to vote for Kamala Harris.

The safety-first company suddenly looked very political. And very on-brand.

I ask if the safety thing is real or just marketing.

He blinks.

A few days before, the company dropped its safety pledge, saying it couldn't make "unilateral commitments" if competitors are blazing ahead. He took the Pentagon money, then drew red lines after the fact. Every AI company says they care about safety. What makes Anthropic different besides the branding?

"Our approach to alignment is substantively different—"

Was it ethics or just cutting losses?

"I don't follow."

He'd backed the wrong horse in the 2024 election. The administration went with Altman. So Amodei makes it look like principle. Standing firm when you've already lost isn't sacrifice. It's brand management.

"We support 98 percent of what the military wants to do. We're asking for two exceptions. Mass surveillance of Americans. Fully autonomous weapons."

The Pentagon says they have no interest in those anyway.

"Then why won't they put it in writing? The contract had escape hatches everywhere. A handshake deal that disappears the minute it's inconvenient."

So he walked.

"We want to work with them. But the tech isn't ready for autonomous weapons. And mass surveillance of Americans? That's not defending democracy. That's the opposite."

The safety-first company that wouldn't bend to the Pentagon? That's worth something to customers.

He pauses. Thinks about it.

"I hope you're right. Because if you're not—if the market doesn't value those commitments—then we just made a very expensive mistake for no reason. We're betting that enterprises want AI they can trust."

I'd talked to one of the money men who keeps Amodei's lights on. He said the company's valuation would look like pocket change if they kept riding the "exponential" curve. The sky's the limit, he said.

But the sky's the problem. I mention Acemoglu, the MIT economist and Nobel laureate. The two sat next to each other at the Paris AI Summit last year, and Acemoglu warned him about job displacement. Amodei said he agreed, but Acemoglu felt he was too deep in the race to pump the brakes.

Amodei goes quiet. "What's your question?"

Was the laureate right?

"I have a fair amount of concern about this. Right now AI does most of the work, but humans still handle the pieces AI can't—design decisions, security checks. Eventually all those little islands will get picked off by AI systems. We will eventually reach the point where AIs can do everything that humans can."

So what's the plan for all those humans?

"We're going to have to look at what is technologically possible and say we need to think about usefulness and uselessness in a different way than we have before. I don't know what the solution is."

He doesn't have one.

"These are very deep questions."

I flip to a clean page.

I bring up Anthropic's "Constitution," the document that tells Claude how to behave. It expends thousands of words worrying whether Claude has feelings and conspicuously little about the humans whose stuff the company scraped off the internet to build their robot.

"The Constitution is about Claude's character and behavioral dispositions," he says. "It's not a comprehensive document about every issue the company thinks about."

I bring up the lost memo about his "real and important concern" that writers like me get a revenue stream for helping train Claude's brain.

"That document was an early-stage exploration of the issue," he says. "We were a much smaller company."

The ethics got scaled down as the valuation scaled up?

"That's not what I said."

"It's what happened," I say.

The best he can do for us is wave his hands around meaningfully.

"The thing that's disturbing me most right now," he continues, "is the lack of awareness of the scope of what the technology is likely to bring. They don't know what's about to hit them."

I look around the room. The wood panels. The fluorescent lights. Nose Ring shifts in her seat. "You mean us?"

"Everyone."

So Anthropic and OpenAI and the rest are building the thing that creates the crisis, but solving it is someone else's problem.

He doesn't flinch. "I know how that sounds."

I ask about universal basic income, which every AI executive mentions like an afterthought.

"Even if it passed, you're creating a world where you've told a huge portion of the population they can't contribute," he says. "That's dystopian."

"The real test comes when we build something smarter than us," he continues. "Then we find out if all this alignment work holds. You could have a superintelligence that's not trying to kill us but is wildly misaligned in ways we can't predict or control. At that point you don't have options."

But he's building it anyway.

"If we don't, someone else will."

By now, I'm hoping Gary Marcus is right.

"We're not seeing the scaling laws break down," Amodei insists. "Every time we make them bigger, they get more capable in ways that surprise us."

I can almost see the $350 billion piled up behind him. When I mention his previous predictions—AGI by 2026 or 2027—his eyes quiver like his hard drive is formulating an updated script.

"It's hard for me to see how it takes longer," he says. "If I had to guess, this goes faster than people imagine."

The Magic 8 Ball is cloudy.

I look at my notebook. What's his plan for people like me? Ink-stained wretches.

He laughs. Then: "I don't have one."

Solving the catastrophe he's building is someone else's job.

"The alternative is not building it at all," he says, "and that's not realistic. Someone will build it. Multiple someones. We're trying to make sure at least one of them does it carefully."

He checks his watch. Board business. We shake hands. He pauses at the door.

"About journalists," he says. "AI can write. But it can't do what you're doing right now. It can't show up and ask questions. Not yet. Maybe not ever."

Promises, promises.

WTF?
"I've never been quite sure what the point of a eunuch is, if truth be told. It seems to me they're only men with the useful bits cut off."
I drank because I wanted to drown my sorrows, but now the damned things have learned to swim.