Most conferences and journals don't ask for code to be submitted along with the paper. How common is it for fake or incomplete results to be detected during such conference/journal review? In my field of study in Computer Science, papers sometimes make overwhelming claims, but since the field is still growing, how do reviewers and editors determine the authenticity of the results? Is it common to ask for code or a working demo in such cases?
Answer
Since you're referring to computer science, I'll talk about conferences. Peer review, as Dave Clarke mentions, is the primary way papers are scrutinized. But conference reviews often operate under severe time constraints. So for an experimental paper, the reviewer looks at the main ideas, checks whether the experiments are convincing and cover the cases the reviewer thinks are important, and evaluates the overall work for "general excitement". But the reviewer has no real ability to verify that the work is correct.
For a theoretical paper, a reviewer might skim the proofs and see whether the main claims pass a "smell test". If a result is important or surprising, the reviewer might try to verify key claims in detail and expect the authors to argue why their techniques should work. But even then, no formal proof checking happens (that happens during journal review).
Ultimately, there are two (imperfect) safeguards against incorrect claims:
- author reputation - you don't want to get a reputation for writing faulty papers
- reproducibility - I might try to reproduce your work, or ask for your code, when writing my own papers.