The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM

Valmy

Soon they will create a powerful AI to detect AI leading to an AI to circumvent that AI leading to an AI to detect the AI that circumvents the AI that detects the other AI leading to the creation of another AI to...

Having AI research detailed by AI slop would be very 21st century.
Quote"This is a Russian warship. I propose you lay down arms and surrender to avoid bloodshed & unnecessary victims. Otherwise, you'll be bombed."

Zmiinyi defenders: "Russian warship, go fuck yourself."

Sheilbh

I posted an article a little earlier that sort of touched on this. Not hard sciences but he was finishing three years as associate editor of Security Studies, an IR journal.

He noted that basically the desk reject rate (so literally the first screen) rose to 75%. In part because AI was emerging at that time but not very good yet, so it was easy to reject.

Now it's getting better. Someone did a bit of a POC of an AI paper and the guy basically said it was fine. Not slop: "they're technically proficient. They follow the form. They're adequate. They're easy to do and require little creativity, but also constitute the kind of legitimate incremental work that Thomas Kuhn called 'normal science'."

He thought that basically it might end up with original and elegant theory becoming more important/valued in journals as "good quant work is becoming cheap and plentiful; good theory remains hard." As I say there's almost a side of that which sounds possibly positive - I think the turn to the quantitative and data has been very bad for academia. So maybe a re-valuing of good and original theory wouldn't be a bad thing - I also wonder if, in addition to that, being a good stylist matters more, which would be very good news.

From the conclusion:
QuotePublish and Vanish

If discernment becomes the ultimate arbiter of quality, we are heading even more toward a two-tier system in academic publishing. Top journals will focus on papers that are strikingly original or make important theoretical or empirical breakthroughs, while everyone else will publish the AI-produced papers that incrementally advance our understanding of narrow things. And perhaps theory will gain increasing prestige over sophisticated methods of analyzing data. One can dream.

Both theory and empirics are of course key parts of science, but the danger is the flood of Automatenwissenschaft becomes a kind of scholarly dark matter that exists to pad CVs and satisfy bureaucratic metrics, but which no one actually reads or relies upon. This is already true to some extent, but AI greatly accelerates the process.

For many scholars this moves them from a "Publish or Perish" model to a "Publish and Vanish" one. And when consuming the literature, the bifurcation forces scholars to rely even more on prestige hierarchies as a heuristic for importance. Paradoxically, the leveling effect of AI might make academia more elitist.

Professors have been at the forefront of slop consumption. We were swimming in AI essays before most people knew what ChatGPT was. It's still all my colleagues complain about. We saw how annoyingly effective it was, so it's not surprising that we would turn to the same tools for our research, especially for coding or quantitative work. The technology that flooded us with student essays will now flood us with our own work, and we will need more of the same discernment we've been complaining our students lack.
Let's bomb Russia!

Jacob

Theory may be more valued in journals, which is great I suppose.

However the problem is that the incremental science - which is what produces the practical output in the long term - is very easy to fake. Sure, AI may be good at doing incremental science if applied to it, but it's also really good at producing fraudulent science disguised as incremental science and it seems there's a lot of that going around. If we can't distinguish between the two, then we may end up with adverse results.

Sheilbh

I do think in the world of science and AI that there's a scale here. AlphaFold is a phenomenal and positive achievement; people using free ChatGPT chats to generate an article, less so.

But I freely acknowledge I know nothing about science - for example my criticism that too many disciplines have become focused on quantitative and data with nowhere near enough theory is more valid for the social sciences :lol:

To an extent though I wonder if, as in other areas, this is just an acceleration of existing trends - particularly around the pressure to publish as the core metric, over-specialisation and all that comes with that. Is what AI is doing just reflecting what administrators have turned higher education into?

I could be wrong but I suspect he's probably right that journals will just bifurcate into the elite and the slop-plus (which is not significantly distinguishable from a lot of what is already published).
Let's bomb Russia!

Jacob

Sure, maybe the journals will bifurcate and from the POV of a publisher that may be one of those "it is what it is" things.

From the point of view of actually advancing science AI may - contrary to the promise of AI salespeople - be counter-productive.

Because if we can't distinguish between a paper that truthfully says "we've established that X and Y in combination creates Z effect in cancer cells, here's the evidence" and one that has manufactured the evidence completely and no such effect actually exists - then one of the fundamental building blocks of new therapies and new practical applications of science ends up being useless.

How can anyone - be they a human researcher or an AI model - generate new science of value if they can't distinguish between factual and spurious science to inform their work?

Admiral Yi

Quote from: Jacob on Today at 07:57:22 PMBecause if we can't distinguish between a paper that truthfully says "we've established that X and Y in combination creates Z effect in cancer cells, here's the evidence" and one that has manufactured the evidence completely and no such effect actually exists - then one of the fundamental building blocks of new therapies and new practical applications of science ends up being useless.

We can distinguish between those two papers, in the same way we could in the pre AI era. We replicate the experiment.

HisMajestyBOB

Quote from: Admiral Yi on Today at 08:12:20 PM
Quote from: Jacob on Today at 07:57:22 PMBecause if we can't distinguish between a paper that truthfully says "we've established that X and Y in combination creates Z effect in cancer cells, here's the evidence" and one that has manufactured the evidence completely and no such effect actually exists - then one of the fundamental building blocks of new therapies and new practical applications of science ends up being useless.

We can distinguish between those two papers, in the same way we could in the pre AI era. We replicate the experiment.

Sounds like a lot of work. Way easier to just ask "@grok is this true?"
Three lovely Prada points for HoI2 help

Jacob

Quote from: Admiral Yi on Today at 08:12:20 PMWe can distinguish between those two papers, in the same way we could in the pre AI era. We replicate the experiment.

Indeed.

But replicating every relevant incremental experiment since 2025 in your field is going to become an overwhelming burden on individual researchers very quickly.

Admiral Yi

Quote from: Jacob on Today at 08:35:22 PMIndeed.

But replicating every relevant incremental experiment since 2025 in your field is going to become an overwhelming burden on individual researchers very quickly.

Replicating every relevant incremental experiment since the dawn of time up to 2025 would have been an overwhelming burden as well.

Sheilbh

Yeah I think that's fair and I think there is a split again. Like an awful lot of what's going on in the world right now, I think it's Janus-faced and has a lot of possibility and risk. But I think it is likely to lead to new hierarchies and elites generally.

In some ways I think AI could help solve (or dissolve) current problems while in others accelerating them. So for example there has been in the last few decades a replication crisis in medicine (including cancer studies) and (especially) psychology, but in other areas too. The sheer volume of data that we can now collect, combined with the business model of academia and the whole "publish or perish" culture, means that lots of papers are being published that are not replicable.

Within medicine the ones that matter are, I think, replicated and the duds ignored. In psychology millions of books were sold on the basis of experiments no-one can reproduce, and those experiments also had a huge influence on politics (I think basically every Western state had a behavioural team in government - with a big impact, for example, on covid policy). I think the difference is maybe applying AI to a problem (or, indeed, to the code) v getting AI to write a paper.

In some cases I think AI will actually help unlock the knowledge in those vast stores of data (like the AlphaFold example), while in others it will further perpetuate the problem of research that just isn't true. In a way I think AI sharpens that, because there is a question of ethics and taste about it as much as anything, which may prompt more examination/discernment from academic publishing - in a way that decades of papers (and, indeed, entire branches of psychology and economics that appear to have no factual basis but were hugely influential on policymakers) kind of escaped. It was human, it was in the correct form and, perhaps, sometimes it was too good to check - it was also quite possibly not true. So it will, possibly, split more explicitly between the work that is original and the work that isn't.

I was thinking about this with drugs research too. For example my understanding is that a lot of what the big pharma companies spend money on is basically very minor changes to existing treatments - because there's a profitable seam there and it is lower cost to look at variation within the known - that is, the current state of the art. Again, with the massive datasets they have you can imagine that AI (like AlphaFold, not ChatGPT) could be helpful in suggesting distinct new areas for research.
Let's bomb Russia!

Zoupa

Quote from: Admiral Yi on Today at 08:43:23 PM
Quote from: Jacob on Today at 08:35:22 PMIndeed.

But replicating every relevant incremental experiment since 2025 in your field is going to become an overwhelming burden on individual researchers very quickly.

Replicating every relevant incremental experiment since the dawn of time up to 2025 would have been an overwhelming burden as well.

There was little need to replicate an experiment previously. That's the whole point of scientific journal publishing.

Jacob

Quote from: Admiral Yi on Today at 08:43:23 PMReplicating every relevant incremental experiment since the dawn of time up to 2025 would have been an overwhelming burden as well.

Yup. Which is why a trustworthy and well-functioning peer-review system is valuable, as it significantly lessens the need for everyone to do it themselves.