Wouldn’t you want a pediatric hepatobiliary surgeon? A four month old is going to be a tricky case, I’d think.
the chatbot couldn’t even recommend the right specialist 😑
Probs recommend a ‘Paedophile Hobgoblin’.
Radiology Case Reports seems to be a low quality journal. https://www.scimagojr.com/journalrank.php?category=2741&page=5&total_size=335
It’s OK, nobody will be able to read it anyway because it’s on Elsevier.
Dude. Couldn’t even proofread the easy way out they took
This is what baffles me about these papers. Assuming the authors are actually real people, these AI-generated mistakes in publications should be pretty easy to catch and edit.
It does make you wonder how many people are successfully putting AI-generated garbage out there if they’re careful enough to remove obviously AI-generated sentences.
I definitely utilize AI to assist me in writing papers/essays, but never to just write the whole thing.
Mainly use it for structuring or rewording sections to flow better or sound more professional, and always go back to proofread and ensure that any information stays correct.
Basically, I provide any data/research and get a rough layout down, and then use AI to speed up the refining process.
EDIT: I should note that I am not writing scientific papers using this method, and doing so is probably a bad idea.
There are perfectly ethical ways to use it, even for papers, as your example fits. It’s been a great help for my ADHD ass to get some structure in my writing.
https://www.oneusefulthing.org/p/my-class-required-ai-heres-what-ive
Yeah, same. I’m good at getting my info together and putting my main points down, but structuring everything in a way that flows well just isn’t my strong suit, and I struggle to sit there for long periods of time writing something I could just explain in a few short points, especially if there’s an expectation for a certain length.
AI tools help me to get all that done whilst still keeping any core information my own.
This almost makes me think they’re trying to fully automate their publishing process. So, no editor in that case.
Editors are expensive.
If they really want to do it, they can just run a local language model trained to proofread stuff like this. Would be way better
This is exactly the line of thinking that led to papers like this being generated.
I don’t think so. They are using AI from a 3rd party. If they train their own specialized version, things will be better.
Here is a better idea: have some academic integrity and actually do the work instead of using incompetent machine learning to flood the industry with inaccurate trash papers whose only real impact is getting in the way of real research.
There is nothing wrong with using AI to proofread a paper. It’s just a grammar checker but better.
You can literally use tools to check grammar perfectly without using AI. What an LLM does is predict what word comes next in a sequence, and when the AI is wrong, as it often is, you’ve just attempted to publish a paper full of hallucinations, wasting the time and effort of so many people because you’re greedy and lazy.
Proofreading involves more than just checking grammar, and AIs aren’t perfect. I would never put my name on something to get published publicly like this without reading it through at least once myself.
Hold up. That actually got through to publishing??
Yep. And AI will totally help.
Ooh I mean not help. It’ll make it much worse. Particularly with the social sciences. Which were already pretty fuX0r3d anyway due to the whole “your emotions equal this number” thing.
It’s Elsevier, so this probably isn’t even the lowest quality article they’ve published
It’s because nobody was there to highlight the text for them.
The entire abstract is AI. Even without the explicit mention in one sentence, the rest of the text should’ve been rejected as nonspecific nonsense.
Maybe a big red circle around the entire abstract would have helped
That’s not actually the abstract; it’s a piece from the discussion that someone pasted nicely with the first page in order to name and shame the authors. I looked at it in depth when I saw this circulate a little while ago.
Ah, that makes more sense. I looked up the original abstract and indeed it looks more like what you’d expect (hard to comprehend for someone who’s not in the field).
Though to clarify (for others reading this) they still did use generative AI to (help?) write the paper, which is only part of why it was withdrawn.
Many journals are absolute garbage that will accept anything. Keep that in mind the next time someone links a study to prove a point. You have to actually read the thing and judge the methodology to know if their conclusions have any merits.
Full disclosure: I don’t intend to be condescending.
Research Methods during my graduate studies forever changed the way I interpret just about any claim, fact, or statement. I’m obnoxiously skeptical and probably cynical, to be honest. It annoys the hell out of my wife but it beats buying into sensationalist headlines and miracle research. Then you get into the real world and see how data gets massaged and thrown around haphazardly…believe very little of what you see.
I have this problem too. My wife gets so annoyed because I question things I notice as biases or statistical irregularities instead of just accepting that they knew what they were doing. I have tried to explain it to her: skepticism is not dismissal, and it is not saying I am smarter than them; it is recognizing that they are human, and that I may be more proficient than they were in the one spot where they made a mistake.
I will acknowledge that laypeople need to stop trying to argue with scientists because “they did their own research”, but the actually informed and educated need to do a better job of calling each other out.
A good tactic, though not perfect, is to look at the journal impact factor.
We are in top dystopia mode right now. Students have AI write articles that are proofread and edited by AI, submitted to automated systems that are AI vetted for publishing, then posted to platforms where no one ever reads the articles posted but AI is used to scrape them to find answers or train all the other AIs.
How generative AI is clouding the future of Google search
The search giant doesn’t just face new competition from ChatGPT and other upstarts. It also has to keep AI-powered SEO from damaging its results.
More or less the same phenomenon of signal pollution:
“Google is shifting its responsibility for maintaining the quality of results to moderators on Reddit, which is dangerous,” says Ray of Amsive. Search for “kidney stone pain” and you’ll see Quora and Reddit ranking in the top three positions alongside sites like the Mayo Clinic and the National Kidney Foundation. Quora and Reddit use community moderators to manually remove link spam. But with Reddit’s traffic growing exponentially, is a human line of defense sustainable against a generative AI bot army?
We’ll end up using year 2022 as a threshold for reference criteria. Maybe not entirely blocked, but like a ratio… you must have 90% pre-2022 and 10% post-2022.
Perhaps this will spur some culture shift to publish all the data, all the notes, everything - which will be great to train more AI on. Or we’ll get to some type of anti-AI or anti-crawler medium.
How did this make it past review? I guess case reports might not have a peer review process
Elsevier
Fun fact! In the Netherlands, Elsevier publishes a weekly magazine about politics, which is basically the written version of Fox News for that country. Very nice that those people control like 50% of all academic publishing.
Military Industrial Publishing Complex. It isn’t tinfoil.
It is astounding to me that this happened. A complete failure of peer review, of the editors, and OF COURSE of the authors. Just absolutely bonkers that this made it to publication. Completely clown shoes.
It keeps happening across all fields. I think we are about to witness a complete overhaul of the publishing model.
Using AI to detect AI use in research papers: the research paper.
I’ve been saying it to everyone who’ll listen …
the journals should be run by universities as non-profits with close ties to the local research community (i.e., editors from local faculty and as much of the staff as possible drawn from the student/PhD/postdoc body). It’s really an obvious idea. In legal research, there’s a long tradition of having students run journals (Barack Obama, if you recall, was editor of the Harvard Law Review … and that was as a student). I personally did it too … it’s a great experience for a student to see how the sausage is made.
My field’s too small to have separate journals for each university, but we do have one in the Free Journal Network that’s run by the community
You don’t need one at each university; that wouldn’t scale. There’d be natural specialisations. And journals could even move from university to university as academic personnel change over time.
The main point is that they’re non-profit and run by researchers for researchers.
the entire paragraph after the highlight is still AI too
Raneem Bader, Ashraf Imam, Mohammad Alnees, Neta Adler, Joanthan ilia, Diaa Zugayar, Arbell Dan, Abed Khalaileh. You are all accused of using chatgpt or whatever else to write your paper. How do you plead?
How do you plead?
“I apologize, but I do not feel comfortable performing any pleas or participating in negative experiences. As an AI language model, I aim to help with document production. Perhaps you would like me to generate another article?”
How do you feel about using chatgpt as a translation tool?
Depends on what kind of translation we’re talking here. Translating some chatter? Translating a web page (most of these suck)? Translating a book for it to be published? Translating a book so you can read it yourself? Translating a scientific paper so you can publish it, without proofreading the translation?
Is it the personal vs. private vs. public use that is bothersome or is it just the fact that these fuckers didn’t proofread I guess is what I’m trying to figure out
They didn’t proofread, plus there’s a real chance that some other parts of the paper might be AI nonsense. If something so glaringly problematic got past, what smaller mistakes are also there? They effectively poisoned their own paper
My money is on non-existent. I bet one of those dudes is real, at best.
I started a business with a friend to automatically identify things like this, fraud like what happened with Alzheimer’s research, and mistakes like missing citations. If anyone is interested, has contacts or expertise in relevant domains or just wants to talk about it, hit me up.
Google Retraction Watch. Academia has good people already doing this.
https://www.crossref.org/blog/news-crossref-and-retraction-watch/
Legend right here.
What’s the business model? (How does that generate revenue?)
We’re providing review assistance and some types of automated replication to publishers for a yearly rate, and planning to sell subscriptions to individual researchers for $50 /mo.
Ah… welp, tis the AI era, I guess…
“but for specific cases, it is essential to consult a medical professional”
Foolish robot! I am the medical professional!
In Elsevier’s defense, reading is hard and they have so much money to count.
what if this was actually just a huge troll, and it wasn’t AI.
Now that would be fucking hilarious.