Recently, several cases have come to light in academia of well-known scientists who fabricated their data.
In some instances these papers had been cited many times by other researchers, and some had even been widely praised. So when the truth emerged, it also made the public perception of peer review look poor.
Given these problems, how can a reviewer run at least some sanity check that the data is (most likely) not fabricated? Raising such a suspicion could do the researcher great harm, but I think there should be some kind of mechanism to control for this.
Answer
There is only one reliable way to do it, which is to try to replicate the results.
An unreliable, but not completely useless, way is to check whether the numbers fit Benford's Law. Benford's Law describes the distribution of the first digit in many very diverse real-world data sets: small leading digits are far more common than large ones (a 1 appears as the first digit about 30% of the time, a 9 less than 5%). This is the distribution:
(public domain chart from Wikipedia)
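As a rough sketch of how a reviewer might apply this, the snippet below tallies the first digits of a data set and compares them to the Benford expectation P(d) = log10(1 + 1/d) with a Pearson chi-square statistic. The function names are my own; a large statistic only suggests deviation from Benford's Law, not fabrication.

```python
import math
from collections import Counter

def benford_expected(d):
    # Benford's Law: P(first digit = d) = log10(1 + 1/d), for d in 1..9
    return math.log10(1 + 1 / d)

def first_digit(x):
    # Strip sign, leading zeros, and the decimal point to find the
    # first significant digit (e.g. 0.0042 -> 4).
    s = str(abs(x)).lstrip("0.")
    return int(s[0])

def first_digit_freqs(data):
    counts = Counter(first_digit(x) for x in data if x != 0)
    n = sum(counts.values())
    return {d: counts.get(d, 0) / n for d in range(1, 10)}

def chi_square_stat(data):
    # Pearson chi-square of observed vs. Benford-expected digit counts;
    # large values suggest the leading digits deviate from Benford's Law.
    n = sum(1 for x in data if x != 0)
    freqs = first_digit_freqs(data)
    return sum(
        (freqs[d] * n - benford_expected(d) * n) ** 2
        / (benford_expected(d) * n)
        for d in range(1, 10)
    )
```

For example, powers of 2 are known to follow Benford's Law closely, so `chi_square_stat([2**k for k in range(1, 300)])` stays well below the 5% critical value (about 15.5 at 8 degrees of freedom), while uniformly spaced data like `range(1, 1000)` does not.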
Andreas Diekmann describes this further in "Not the First Digit! Using Benford's Law to Detect Fraudulent Scientific Data", a paper in the Journal of Applied Statistics from 2007.
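As the paper's title suggests, Benford's Law also predicts a (flatter) distribution for the second digit, obtained by summing over all possible first digits. A minimal sketch of that formula, with a function name of my choosing:

```python
import math

def benford_second_digit(d):
    # Benford probability that the SECOND significant digit is d (0..9):
    # P(d) = sum over first digits k = 1..9 of log10(1 + 1/(10k + d))
    return sum(math.log10(1 + 1 / (10 * k + d)) for k in range(1, 10))
```

Note that, unlike the first-digit case, 0 is a possible second digit and the probabilities are much closer to uniform (about 0.12 for a 0 down to about 0.085 for a 9), which is part of why second-digit tests behave differently.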