Recent posts

#1
Off the Record / Re: The AI dooooooom thread
Last post by Jacob - Today at 12:33:02 PM
Quote from: Baron von Schtinkenbutt on Today at 11:28:01 AMThis is where I partially disagree.  The nature of the replication problem is the same as it has been (unethical researchers flooding journals with fraudulent papers).  The magnitude has increased significantly, though.  For some fields, this shifts the dynamic from "annoyed by frauds" to "overwhelmed by frauds", which is significant.  For others, it shifts the dynamic from "overwhelmed by frauds" to "very overwhelmed by frauds", which (in my opinion) isn't significant.

A pernicious problem that I think LLMs will greatly exacerbate is instances of high-quality, targeted fraud.  The ability of LLMs to craft well-worded bullshit makes it easier to craft papers with an agenda, where it will be harder to detect the fraud.  It's an extension of using LLMs to craft misinformation.  The peer review system has long had a problem here, since it really isn't set up to assume submitters are high-effort liars.  Making this significantly easier is something I think could change the nature of the replication problem.

This seems a reasonable take to me.
#2
Off the Record / Re: Facebook Follies of Friend...
Last post by Syt - Today at 12:21:49 PM
 :lol:
#3
Off the Record / Re: Facebook Follies of Friend...
Last post by DGuller - Today at 11:54:39 AM
Way too convenient of a setup.
#4
Off the Record / Re: Facebook Follies of Friend...
Last post by Crazy_Ivan80 - Today at 11:51:10 AM
 :lmfao:
#5
Off the Record / Re: Facebook Follies of Friend...
Last post by Sophie Scholl - Today at 11:40:16 AM
#6
Off the Record / Re: The AI dooooooom thread
Last post by garbon - Today at 11:29:29 AM
Quote from: Sheilbh on Today at 11:08:02 AMYeah I think this is the interesting thing because in many ways I think that replication crisis was generated by our existing artificial intelligence: boards of administrators and funding criteria.

On the input side for some time we have been collecting vastly more data than we are able to meaningfully or reliably extract information from, or it is simply too complex or open to "solve" (the protein fold problem). And on the output side we have turned academia itself into something that is generating datapoints. The whole "publish or perish" approach is basically about ensuring you hit metrics in order to be reviewed by (non-expert, non-peer) funding bodies and administrators. That has I think directly fed into the replication crisis and also other trends within academia such as ever deeper and narrower specialisation (which I think is less likely to produce breakthroughs). We are already in a world of academic slop, it's just artisanal slop.

As I say I think AI is actually going to dissolve some of the problems from the input, as it may be better able to produce meaningful output from the vast datasets we've been collecting. At the same time - especially if the metrics that we monitor and care about from an administrative and funding body perspective do not change - it will almost certainly exacerbate the academic slop problem.

But I think it is impossible to talk about the impact of AI on academia without having AlphaFold in that conversation. The team behind that literally won the Nobel Prize in Chemistry a year or two ago - from what I read there was some grumbling that it was too soon (because Nobel Prizes are often more lifetime achievement), but that the breakthrough was deserving was not doubted. Again I know nothing about this but my understanding is that protein folding was seen as an intractable problem in the field because the number of possible conformations was basically infinite. Within that area people did not expect there to be a solution to this in their lifetime, if ever (I read an article that noted that just doing it sequentially, which is what we've done so far, for all the proteins we have would take longer than the age of the universe).

The first iteration of AlphaFold came out in 2018 and was already a revolutionary breakthrough in its ability to predict protein folds. But there were still accuracy issues, particularly with more complex proteins, though it has continued to improve and the latest version is significantly better. There are still limitations and issues - which means you keep working and re-iterating and building new versions - but from what I've read it is a seismic shift in that area of research. We'll see the impact play out in the coming years as other research and discovery that is based on or requires structural biology now has this new foundational tool to build on.

But I think this is what I mean by AI being able to help on one side while also accelerating on the other and, you know, you kind of hope that humans and academia will be able to (and increasingly better able to) discriminate between the two.

I don't really understand what you are talking about when you are distinguishing between quant and theory and also input and output. Using AI to help examine the quant data points (as an input) won't likely diminish quant outputs if defined as 'The whole "publish or perish" approach is basically about ensuring you hit metrics in order to be reviewed by (non-expert, non-peer) funding bodies and administrators'. I don't see why that would disappear.  It just might help increase the robustness of papers as we can now efficiently see connections between the inputs that we could never have managed by human brainpower alone.  Nothing there feels less quanty.

AI on the outputs (aka writing papers) then seems like it will only increase the slop...so while there may be more robust papers coming out, they also may get lost in some of the noise that humans will need to sort through. Other than using AI to detect AI, I don't see why humans would suddenly get better at discriminating between the robust papers and the AI slop given the sheer volume of papers that will/do exist.
#7
Off the Record / Re: The AI dooooooom thread
Last post by Baron von Schtinkenbutt - Today at 11:28:01 AM
Quote from: crazy canuck on Today at 10:35:08 AMWhere I say they are going wrong is the suggestion of that the solution to the problem is the same as it has ever been, replicating the experiment.

I agree there.  My point was just that some fields reached the point where they were so overwhelmed with fraud that "just replicate" was infeasible well before LLMs.

Quote from: crazy canuck on Today at 10:35:08 AMWe are putting our heads in the sand if we pretend that the replication problem is the same now as it has ever been.  That is clearly ridiculous given the volume of fraudulent papers that are being submitted to the journals.

This is where I partially disagree.  The nature of the replication problem is the same as it has been (unethical researchers flooding journals with fraudulent papers).  The magnitude has increased significantly, though.  For some fields, this shifts the dynamic from "annoyed by frauds" to "overwhelmed by frauds", which is significant.  For others, it shifts the dynamic from "overwhelmed by frauds" to "very overwhelmed by frauds", which (in my opinion) isn't significant.

A pernicious problem that I think LLMs will greatly exacerbate is instances of high-quality, targeted fraud.  The ability of LLMs to craft well-worded bullshit makes it easier to craft papers with an agenda, where it will be harder to detect the fraud.  It's an extension of using LLMs to craft misinformation.  The peer review system has long had a problem here, since it really isn't set up to assume submitters are high-effort liars.  Making this significantly easier is something I think could change the nature of the replication problem.
#8
Off the Record / Re: What does a TRUMP presiden...
Last post by Richard Hakluyt - Today at 11:09:17 AM
Bonespurs McBonespurs projecting again.
#9
Off the Record / Re: The AI dooooooom thread
Last post by Sheilbh - Today at 11:08:02 AM
Quote from: Baron von Schtinkenbutt on Today at 09:32:24 AM
Quote from: Jacob on January 22, 2026, 08:35:22 PMBut replicating every relevant incremental experiment since 2025 in your field is going to become an overwhelming burden on individual researchers very quickly.

Going to be?  In many fields, it has been for decades.  The "replication crisis" well predates generative ML models.
Yeah I think this is the interesting thing because in many ways I think that replication crisis was generated by our existing artificial intelligence: boards of administrators and funding criteria.

On the input side for some time we have been collecting vastly more data than we are able to meaningfully or reliably extract information from, or it is simply too complex or open to "solve" (the protein fold problem). And on the output side we have turned academia itself into something that is generating datapoints. The whole "publish or perish" approach is basically about ensuring you hit metrics in order to be reviewed by (non-expert, non-peer) funding bodies and administrators. That has I think directly fed into the replication crisis and also other trends within academia such as ever deeper and narrower specialisation (which I think is less likely to produce breakthroughs). We are already in a world of academic slop, it's just artisanal slop.

As I say I think AI is actually going to dissolve some of the problems from the input, as it may be better able to produce meaningful output from the vast datasets we've been collecting. At the same time - especially if the metrics that we monitor and care about from an administrative and funding body perspective do not change - it will almost certainly exacerbate the academic slop problem.

But I think it is impossible to talk about the impact of AI on academia without having AlphaFold in that conversation. The team behind that literally won the Nobel Prize in Chemistry a year or two ago - from what I read there was some grumbling that it was too soon (because Nobel Prizes are often more lifetime achievement), but that the breakthrough was deserving was not doubted. Again I know nothing about this but my understanding is that protein folding was seen as an intractable problem in the field because the number of possible conformations was basically infinite. Within that area people did not expect there to be a solution to this in their lifetime, if ever (I read an article that noted that just doing it sequentially, which is what we've done so far, for all the proteins we have would take longer than the age of the universe).
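(The "longer than the age of the universe" point is essentially Levinthal's paradox, and the back-of-envelope arithmetic is easy to sketch. Every number below - states per residue, protein length, sampling rate - is an illustrative assumption, not a figure from the article I read:)

```python
# Rough Levinthal-style estimate: exhaustively enumerating protein
# conformations blows up combinatorially. Illustrative assumptions only.

CONFORMATIONS_PER_RESIDUE = 3   # assume ~3 backbone states per residue
RESIDUES = 100                  # a smallish protein
SAMPLES_PER_SECOND = 1e12       # assume we test a trillion states/second

total_states = CONFORMATIONS_PER_RESIDUE ** RESIDUES  # about 5.2e47 states
seconds_needed = total_states / SAMPLES_PER_SECOND
AGE_OF_UNIVERSE_S = 4.35e17     # ~13.8 billion years, in seconds

print(f"states to check: {float(total_states):.2e}")
print(f"exhaustive search: ~{seconds_needed / AGE_OF_UNIVERSE_S:.2e} "
      "ages of the universe")
```

Even with deliberately generous sampling speed, the ratio is astronomically large for a single small protein - which is why a predictive model rather than brute-force search was the breakthrough.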

The first iteration of AlphaFold came out in 2018 and was already a revolutionary breakthrough in its ability to predict protein folds. But there were still accuracy issues, particularly with more complex proteins, though it has continued to improve and the latest version is significantly better. There are still limitations and issues - which means you keep working and re-iterating and building new versions - but from what I've read it is a seismic shift in that area of research. We'll see the impact play out in the coming years as other research and discovery that is based on or requires structural biology now has this new foundational tool to build on.

But I think this is what I mean by AI being able to help on one side while also accelerating on the other and, you know, you kind of hope that humans and academia will be able to (and increasingly better able to) discriminate between the two.
#10
Off the Record / Re: What does a TRUMP presiden...
Last post by Norgy - Today at 10:59:48 AM
Well, he pulled out of Afghanistan like it was Stormy Daniels.