I disagree there - peer review as a system isn’t designed to catch fraud at all; it’s designed to ensure that studies that get published meet a minimum standard of competence. Reviewers aren’t asked to look for fake data, and in most cases aren’t trained to spot it either.
Whether we need to create a new system that is designed to catch fraud prior to publication is a whole different question.
We could earmark a certain percentage of grants for replication work, and grad students should be able to get degrees doing replication studies. Unfortunately everyone is chasing total paper count and impact factor rankings and shit.
Maybe we should count replication studies as “service to the community” when judging career accomplishments. Like, maybe you never chaired a conference, but you published several replication studies instead. You could get your Master’s students and/or undergrads to do the replications. We’d need journals that focus on replication studies, though.
Nah. Enough of this service-to-the-community stuff. It always ends up meaning us doing more work for free that someone else profits from. It should be incentivized with grant funds. The studies I’d most want to see replicated are the industry-sponsored ones. Industry-sponsored studies should have to pay into a pool, and certain studies would be selected for replication analysis with those funds.
> Whether we need to create a new system that is designed to catch fraud prior to publication is a whole different question.
That system already exists. It’s what replication studies are for. Whether we desperately need to massively increase the number of replication studies being done is the question, and the answer is ‘yes’.
Yeah, reviewing is about making sure the methods are sound and the conclusions are supported by the data. Whether or not the data are correct is largely something that the reviewer cannot determine.
If a machine spits out a reading of 5.3, but the paper says 6.2, the reviewer can’t catch that. If the numbers are too perfect, you might get suspicious, but it’s really not your job to go all forensic accountant on the data.
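For the curious: “too perfect” is actually somewhat checkable. Here’s a minimal sketch (Python, purely illustrative) of terminal-digit analysis, one crude forensic heuristic. The idea: when values are reported to a fixed number of decimal places, the last digit of genuinely measured readings should be roughly uniform, so a strong skew is a red flag (not proof). The readings below are made up for the example.

```python
# Toy terminal-digit check (assumption: readings reported to a fixed
# number of decimal places, so last digits should be roughly uniform).
from collections import Counter
from scipy.stats import chisquare

def terminal_digit_pvalue(readings):
    """Chi-square test of last-digit uniformity; a tiny p-value is suspicious."""
    digits = [str(r)[-1] for r in readings]           # e.g. 5.3 -> '3'
    counts = Counter(digits)
    observed = [counts.get(str(d), 0) for d in range(10)]
    expected = [len(readings) / 10] * 10              # uniform null hypothesis
    return chisquare(observed, expected).pvalue

# Hypothetical dataset where every reading ends in 0 or 5:
too_neat = [6.0, 5.5, 6.5, 7.0, 6.0, 5.5, 6.5, 6.0, 7.5, 6.0] * 10
print(terminal_digit_pvalue(too_neat))  # ~0 -> worth a closer look
```

A tiny p-value just means “worth a closer look,” not “fraud” - real forensic work (GRIM tests, image checks, raw-data requests) goes well beyond this.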
An institute for reproducibility would be awesome
Agree! Maybe effort spent on projects assigned by the IFR could be rewarded with grant funds or grant extensions for novel projects.
But that’s not S E X Y! We need new research, to earn grants and subsidize faculty pay!