When something is peer reviewed, how much trust are the reviewers saying you should give it?
I'm not talking about whether peer review works, I'm asking what peer review means ideally.
For example, I know you should trust a peer-reviewed article more than, say, a blog post. At the same time, if an article passes peer review, that doesn't mean the reviewers are saying it should be accepted as eternal truth and never questioned further.
So how much trust does peer review give?
- Should one trust that citations are claiming what is in the articles they cite?
- If there isn't anyone who has disagreed yet, should I assume it's true for the purpose of discussions, without further examination?
- Should public policy be based on it, without further examination?
- Should I trust data to have been collected correctly?
- Should I trust data to have been processed correctly?
This probably varies by field (for example, I imagine pure mathematics and related fields, which already have high standards for rigour, are endowed with a ton of trust once peer reviewed).
Answer
Should one trust that citations are claiming what is in the articles they cite?
No. Too many authors don't read their sources carefully enough (or even at all), and reviewers rarely check citations against the articles they cite.
If there isn't anyone who has disagreed yet, should I assume it's true for the purpose of discussions, without further examination?
There is a wide range of possible levels of trust between something I read in a blog post and something on the level of "the square root of 2 is irrational". I'd put a random peer-reviewed article somewhere between the two extremes, depending on the discipline, the journal, the state of the art and so forth.
"Without further examination" should not be part of a scientist's vocabulary. Except maybe for the square root of 2.
Should public policy be based on it, without further examination?
No. In particular, not if the article reports experimental findings in psychology, economics, sociology, medicine and so forth. Most published research findings are false. These disciplines always need replications and meta-analyses, because they cannot perform experiments as tightly controlled as, say, those in physics.
Should I trust data to have been collected correctly?
I do my best, but I wouldn't trust myself 100% to have collected my data correctly for published work. Stuff always happens.
In addition, peer review doesn't really enter into this question. Peer reviewers cannot easily assess your data collection - only your description of it. You could have made horrendous errors in good faith, and the reviewer wouldn't know.
Should I trust data to have been processed correctly?
See above. Would you trust software to be bug-free? You shouldn't. And again, reviewers don't review your data analysis as such - usually, your analysis scripts are not part of the bundle you submit.
Bottom line: peer review will increase my level of trust in an article, but not infinitely.
In addition, like a good Bayesian, I trust more surprising findings less - and surprising findings have a better chance of appearing in the more prestigious journals. Therefore, I usually expect articles in Nature and Science to replicate less readily than less "sexy" findings published in other venues.
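The Bayesian point can be made concrete with a toy calculation. All numbers below are illustrative assumptions, not measured values: the idea is only that if a surprising finding starts with a low prior probability of being true, passing the same peer-review filter leaves it far less credible than a mundane finding.

```python
# Toy Bayes'-rule update: how much does "passed peer review" shift the
# probability that a finding is true? All probabilities are illustrative.

def posterior_true(prior, p_pass_if_true, p_pass_if_false):
    """P(true | passed review), by Bayes' rule."""
    num = prior * p_pass_if_true
    den = num + (1 - prior) * p_pass_if_false
    return num / den

# A mundane finding: even prior odds; review passes most true findings
# and filters out some false ones.
mundane = posterior_true(prior=0.5, p_pass_if_true=0.8, p_pass_if_false=0.4)

# A surprising finding: low prior (surprise means it contradicts what we
# expected), run through the exact same review filter.
surprising = posterior_true(prior=0.1, p_pass_if_true=0.8, p_pass_if_false=0.4)

print(f"mundane:    {mundane:.2f}")    # 0.67
print(f"surprising: {surprising:.2f}") # 0.18
```

Under these assumptions, peer review raises trust in both cases, but the surprising result ends up well below even odds of being true - which is exactly why the surprising finding still needs replication.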