My main gripe, though, is with the oversight (or lack thereof) in the peer review process. If a journal can’t even spot AI-generated images, it raises red flags about the entire paper’s credibility, regardless of the content’s origin.
The crux of the matter is the robustness of the review process.
The pace at which AI can generate bullshit not only vastly outstrips the ability of individual humans to vet it, but is actually accelerating. We cannot solve this manually by saying “people just need to catch it.” Look at YouTube with CSAM or other federal violations: they literally can’t keep up with the content coming in despite having armies of people (with insane turnover, I might add) trying to do it. So the bar has been changed from “you can’t have any of this stuff” to “you must put in reasonable effort to minimize it,” because we’ve simply accepted it can’t be done with humans, and that’s with the assistance of their current algorithms constantly scouring their content for red flags. Bear in mind this is a massive international company with resources these journals can’t even dream of, and almost all of this content has been generated and uploaded by individual people.
I’m sure these people are perfectly capable of catching AI-generated nonsense most of the time. But as the content gets more sophisticated and voluminous, the problem is only going to get worse. Stuff is going to get through. So we are at a crossroads: either we throw up our hands and say “well, there’s not much we can do, good luck separating the wheat from the chaff,” or we get creative. And this isn’t just in academic journals either. This is crossing into more and more industries, particularly any that involve writing. Someone out there is throwing money and resources at getting AI to do it faster and cheaper than people can.
I feel like two different problems are being conflated into one, though:
1. The academic review process is broken.
2. AI generated bullshit is going to cause all sorts of issues.
Point two can contribute to point 1, but for that a bunch of stuff needs to happen. Correct me if I am wrong, but as far as my understanding of the peer-review process goes, it is something along the lines of:
1. A researcher submits their manuscript to a journal.
2. An editor of that journal validates that the paper fits within the scope and aims of the journal. It might get rejected here, or it gets sent out for review.
3. If it does get sent out for review, it goes to several experts in the field, the actual peer reviewers. These are supposed to be knowledgeable about the specific topic the paper is about. They then **read the paper closely and evaluate things like methodology, results, (lack of) data, and conclusions**.
4. Feedback goes to the editor, who then makes a call about the paper. It either gets accepted, revisions are required, or it gets rejected.
If at point 3 people don’t do the things I highlighted in bold, then to me it seems a bit silly to make this about AI.
If at point 4 the editor ignores most feedback from the peer reviewers, then it again has very little to do with AI and everything to do with the base process being broken.
To summarize, yes AI is going to fuck up a lot of information, it already has. But by just shouting, “AI is at it again with its antics!” at every turn instead of looking further and at other core issues we will only make things worse.
Edit:
To be clear, I am not even saying that peer reviewers or editors should “just do their job already”. But fake papers have been increasingly an issue for well over a decade as far as I am aware. The way the current peer review process works simply doesn’t seem to scale to where we are today. And yes, AI is not going to help with that, but it is still building upon something that already was broken before AI was used to abuse it.
> But by just shouting, “AI is at it again with its antics!” at every turn instead of looking further and at other core issues we will only make things worse.
I think this is a very unfair characterization of what I and others have voiced. This has always been a fundamental issue when talking to AI evangelists, which you may not be, but your argument seems to fall in line with theirs. There is an inherently defensive posture I find whenever a critique is levied at AI; if I were so protective of something like the internal combustion engine, people would (rightfully) raise eyebrows. I agree that AI is a tool and often it is just widening cracks that already exist, but we need to deal with these issues on multiple fronts and acknowledge that reckless adoption exacerbates them. And the new front that AI has opened up is scale. The ability for even someone with a modest, home-rolled LLM to flood the internet with crappy blog spam is outrageous and wasn’t even possible 5 years ago. One person can do the damage of a thousand. Run a cursory Google search and see what SEO + AI blog spam has wrought.
Characterizing it as “this was already an issue, it’s not AI’s fault” is overly reductionist at its core. It’s passing the buck and saying that AI in no way, shape, or form bears any responsibility for the problem. That just means we aren’t looking critically at what is ultimately a tool and how it can be used for harm.
> But fake papers have been increasingly an issue for well over a decade as far as I am aware.
Yes, but these articles were not nearly as prolific. We are talking orders of magnitude more crap to sift through, already occurring across many industries. It has never been this bad. Give the journals 1000 people and 100x the budget and eventually they will still be overcome. It’s not just “fix the review process.” It’s a complicated issue that is exploited in multiple ways.
I feel like this is the third time people are selectively reading what I have said.
I specifically acknowledged that AI is already causing all sorts of issues. I am also saying that there is another issue at play, one that might be exacerbated by the use of AI but at its root isn’t caused by AI.
In fact, in this very thread people have pointed out that *in this case* the journal in question is simply the issue. https://beehaw.org/comment/2416937
In fact, the only reason people likely noticed is, ironically, the fact that AI was being used.
And again, I fully agree: AI is causing massive issues already and disrupting a lot of things in destructive ways. But that doesn’t mean all bullshit out there is caused by AI, even if AI is tangibly involved.
If that still, in your view, somehow makes me sound like a defensive AI evangelist, then I don’t know what to tell you…
If you feel several people are selectively reading what you’re writing then you should consider what about your writing is perhaps contributing to the misinterpretation/selective reading. It’s not like we are working in concert.
> but that doesn’t mean all bullshit out there is caused by AI
Again, you are mischaracterizing what I and others have said. No one asserted that. Quote where I said anything remotely like that.
The only irony I’m seeing is that you’re seemingly engaging in the behavior you’re decrying.
Would you like me to quote every single one of your lines, line by line, and respond to them? Is that the kind of conversation you want to have? Or can you use common sense and infer that I am reading everything and responding to the things I think are worth responding to, which is pretty standard behavior in human conversations?
I am taking issue with elements of your comments. You are wholesale claiming I - and others - said things I/they did not, and then ignoring when I ask you to stop doing it or show me where I said whatever you accused me of.
When did anyone say

> But by just shouting, “AI is at it again with its antics!” at every turn instead of looking further and at other core issues we will only make things worse

Or

> that doesn’t mean all bullshit out there is caused by AI
Who said those lines? Where are they? I’m not hiding behind “I didn’t personally say it.” I understand basic Internet thread etiquette. If you are reaffirming somebody else’s comment, you are generally standing behind most if not all of what they said. But nobody here is saying or doing the things you are claiming. You are tilting at windmills.
You can infer that either I consider the thing I did not specifically mention not worth mentioning, or I agree enough not to warrant debating it. This is like basic social etiquette, dude. I am pointing out the specific elements I find objectionable and want to discuss. How meta do I need to get here?
The fact that you specifically responded to this one highly specific thing, while I clearly wrote more, is exactly what I mean.
*shrugs*