
The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


Zoupa

Quote from: DGuller on January 22, 2026, 11:59:17 PM
Quote from: Zoupa on January 22, 2026, 08:50:41 PM
Quote from: Admiral Yi on January 22, 2026, 08:43:23 PM
Quote from: Jacob on January 22, 2026, 08:35:22 PM
Indeed.

But replicating every relevant incremental experiment since 2025 in your field is going to become an overwhelming burden on individual researchers very quickly.

Replicating every relevant incremental experiment since the dawn of time up to 2025 would have been an overwhelming burden as well.

There was little need to replicate an experiment previously. That's the whole point of scientific journal publishing.
I guess you haven't heard of the replication crisis, which was a big thing years before AI.  Good thing there was little need to replicate experiments, because it turned out that most of the time you couldn't.  Humans were very much capable of bullshitting and willful lack of skepticism on their own, especially when an academic career was on the line.

I was unfazed. Crap studies and crap papers have always been around, but the explosion we're seeing now is orders of magnitude larger. I'm referring to medical journals, as I'm not familiar with other fields of research.

crazy canuck

Quote from: The Brain on Today at 02:08:48 AM
Peer-review doesn't involve checking if an experiment was actually performed or what the results actually were. The basic quality assurance method of the scientific system is replication.

That is accurate. One of the important functions of peer review of a scientific paper is to ensure both that the procedures used by the scientists are adequately explained, so that the experiment can be replicated by somebody reading the paper, and that the data derived from the experiments actually support the conclusions drawn and reported in the manuscript.

Quote from: DGuller on January 22, 2026, 11:59:17 PM
Quote from: Zoupa on January 22, 2026, 08:50:41 PM
Quote from: Admiral Yi on January 22, 2026, 08:43:23 PM
Quote from: Jacob on January 22, 2026, 08:35:22 PM
Indeed.

But replicating every relevant incremental experiment since 2025 in your field is going to become an overwhelming burden on individual researchers very quickly.

Replicating every relevant incremental experiment since the dawn of time up to 2025 would have been an overwhelming burden as well.

There was little need to replicate an experiment previously. That's the whole point of scientific journal publishing.
I guess you haven't heard of the replication crisis, which was a big thing years before AI.  Good thing there was little need to replicate experiments, because it turned out that most of the time you couldn't.  Humans were very much capable of bullshitting and willful lack of skepticism on their own, especially when an academic career was on the line.

Yi and DG have missed the point. 

AI exacerbates the problem. Nobody is claiming that a problem didn't exist prior to AI.
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

Baron von Schtinkenbutt

Quote from: Jacob on January 22, 2026, 08:35:22 PM
But replicating every relevant incremental experiment since 2025 in your field is going to become an overwhelming burden on individual researchers very quickly.

Going to be?  In many fields, it has been for decades.  The "replication crisis" well predates generative ML models.

Baron von Schtinkenbutt

#1023
Quote from: crazy canuck on Today at 07:56:07 AM
AI exacerbates the problem. Nobody is claiming that a problem didn't exist prior to AI.

You don't interpret "[t]here was little need to replicate an experiment previously" as a denial that there was a problem previously?  That's how I read it.

In many fields, like machine learning, human slop has been deluging reviewers for a couple of decades.  I worked for a company owned by a Computer Science professor, and she bemoaned the slop reviewers were being subjected to back in the mid-2010s.  LLMs have definitely made the problem worse, as they significantly improve the "productivity" of generating human research slop, but it's not "orders of magnitude" as Zoupa says unless you're in an area that was not already subject to high volumes of slop coming from questionable researchers in China and India.

Jacob

Okay that was wrong. Mea culpa.

crazy canuck

Quote from: Baron von Schtinkenbutt on Today at 09:40:58 AM
Quote from: crazy canuck on Today at 07:56:07 AM
AI exacerbates the problem. Nobody is claiming that a problem didn't exist prior to AI.

You don't interpret "[t]here was little need to replicate an experiment previously" as a denial that there was a problem previously?  That's how I read it.

In many fields, like machine learning, human slop has been deluging reviewers for a couple of decades.  I worked for a company owned by a Computer Science professor, and she bemoaned the slop reviewers were being subjected to back in the mid-2010s.  LLMs have definitely made the problem worse, as they significantly improve the "productivity" of generating human research slop, but it's not "orders of magnitude" as Zoupa says unless you're in an area that was not already subject to high volumes of slop coming from questionable researchers in China and India.

Where I say they are going wrong is in the suggestion that the solution to the problem is the same as it has ever been: replicating the experiment.

We are putting our heads in the sand if we pretend that the replication problem is the same now as it has ever been.  That is clearly ridiculous given the volume of fraudulent papers that are being submitted to the journals.
Awarded 17 Zoupa points

In several surveys, the overwhelming first choice for what makes Canada unique is multiculturalism. This, in a world collapsing into stupid, impoverishing hatreds, is the distinctly Canadian national project.

Sheilbh

Quote from: Baron von Schtinkenbutt on Today at 09:32:24 AM
Quote from: Jacob on January 22, 2026, 08:35:22 PM
But replicating every relevant incremental experiment since 2025 in your field is going to become an overwhelming burden on individual researchers very quickly.

Going to be?  In many fields, it has been for decades.  The "replication crisis" well predates generative ML models.
Yeah, I think this is the interesting thing, because in many ways I think the replication crisis was generated by our existing artificial intelligence: boards of administrators and funding criteria.

On the input side, for some time we have been collecting vastly more data than we are able to meaningfully or reliably extract information from, or the data is simply too complex or open-ended to "solve" (the protein folding problem). And on the output side we have turned academia itself into something that generates datapoints. The whole "publish or perish" approach is basically about ensuring you hit metrics in order to be reviewed by (non-expert, non-peer) funding bodies and administrators. That has, I think, directly fed into the replication crisis and also into other trends within academia, such as ever deeper and narrower specialisation (which I think is less likely to produce breakthroughs). We are already in a world of academic slop; it's just artisanal slop.

As I say, I think AI is actually going to dissolve some of the problems on the input side, as it may be better able to produce meaningful output from the vast datasets we've been collecting. At the same time - especially if the metrics that we monitor and care about from an administrative and funding-body perspective do not change - it will almost certainly exacerbate the academic slop problem.

But I think it is impossible to talk about the impact of AI on academia without having AlphaFold in that conversation. The team behind it literally won the Nobel Prize in Chemistry a year or two ago - from what I read there was some grumbling that it was too soon (because Nobel Prizes are often more of a lifetime achievement award), but that the breakthrough was deserving was not doubted. Again, I know nothing about this, but my understanding is that protein folding was seen as an intractable problem in the field because the number of possible conformations is basically infinite. Within that area people did not expect there to be a solution in their lifetime, if ever (I read an article that noted that just working through it sequentially, which is what we had done up to that point, for all the proteins we know of would take longer than the age of the universe).

The first iteration of AlphaFold came out in 2018, and it was already a revolutionary breakthrough over our existing methods in its ability to predict protein structures. There were still accuracy issues, particularly with more complex proteins, but it has continued to improve and the latest version is significantly better. There are still limitations and issues - which means you keep working, re-iterating and building new versions - but from what I've read it is a seismic shift in that area of research. We'll see the impact play out in the coming years as other research and discovery that is based on or requires structural biology now has this new foundational tool to build on.

But I think this is what I mean by AI being able to help on one side while also accelerating the slop on the other, and, you know, you kind of hope that humans and academia will be able (and increasingly better able) to discriminate between the two.
Let's bomb Russia!