I know that power analysis is the statistically valid way to ensure you use the correct number of samples or repeats in an experiment. But I have never seen any biologist actually conduct a power analysis. Mostly, researchers seem to use a rule of thumb (three technical, three biological replicates is a common one).
Should I be doing a power analysis each time I design an experiment, or can I just use one of the common biology rules of thumb? If I rely on a rule of thumb, what are the consequences for the validity of my results? And are there situations where a power analysis will be required, which would make it advantageous to get used to doing one now?
Answer
You've already gotten a decent answer to this, but I'll provide my own thoughts on the subject.
Yes
It's necessary. It is absolutely something you should do before beginning an experiment, and preferably something you should do in collaboration with the person who will be helping you analyze your data. To address a couple of points:
- You'll see all kinds of researchers doing all manner of sloppy things when it comes to statistics and data analysis. Reading some journals makes me groan. Skipping power analysis may not get you in trouble in your field, but you should ask yourself whether the goal is merely not to get called out by your peers, or to actually run a well-designed experiment.
- The consequences for the validity of your results lie in an increased risk of Type II error - the incorrect failure to reject the null hypothesis, or in slightly clearer English, finding no effect when an effect exists in reality. Which means, if you run an underpowered study, you run the risk of doing the entire experiment, finding nothing, and being wrong. The consequences of that are myriad. First, it likely harms your chances of getting published, as null results are often quite difficult to get into press. Second, if it does get into press, you've managed to get an incorrect finding into the literature, which will then be propagated in meta-analyses, reviews, and the minds of impressionable future readers. And then there's the chance you'll abandon a potentially productive line of inquiry because you couldn't be bothered to do a power analysis.
- Also consider that not having conducted a power analysis limits your ability to chase after interesting sub-findings. If you've built your experiment on a shaky foundation and then want to do a second analysis on a subset of your results, you almost certainly don't have the power for it.
- There are times when you'll be required to do a power analysis. If you do research that's clinically relevant, and you ever want it to appear in one of those journals, you may very well be required to show you had a properly powered experiment. Many grant applications require it as well. Even if they don't, some reviewers will do a napkin-math estimate of your power if they don't see one anywhere in your application.
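To make the Type II error risk above concrete, here's a quick sketch of what the common "three replicates per group" rule buys you. This is my own illustration, not from the original answer: it uses the standard normal approximation for the power of a two-sided, two-sample comparison of means, which if anything overstates power at tiny sample sizes (the exact t-test is even worse off).

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sample_power(d, n1, n2, z_crit=1.959964):
    """Approximate power of a two-sided two-sample comparison of means.

    d is the assumed standardized effect size (Cohen's d); z_crit is the
    critical value for alpha = 0.05. Normal approximation, so this is an
    upper bound on the exact t-test power at small n.
    """
    z = abs(d) * math.sqrt(n1 * n2 / (n1 + n2))
    return normal_cdf(z - z_crit)

# Even assuming a large biological effect (d = 0.8), n = 3 per group
# gives roughly 16% power - an ~84% chance of missing a real effect.
print(round(two_sample_power(0.8, 3, 3), 2))
```

In other words, with three biological replicates per group you would miss even a large effect most of the time, which is exactly the "do the whole experiment and find nothing" scenario described above.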
sjcockell is partially correct. To do a power analysis, you at least need some notion of the effect size you're likely to see, and these are indeed just estimates. But in nearly all circumstances, you'll already have some idea. Are there similar experiments you can draw from? Your own pilot studies? A "feel" born of experience in your particular system?
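Once you've settled on a plausible effect size from those sources, turning it into a required sample size is a one-liner. A minimal sketch (my own, not from the answer), again using the normal approximation for a two-sided two-sample comparison of means; the exact t-test answer runs about one subject per group higher:

```python
import math

def n_per_group(d, z_alpha=1.959964, z_beta=0.841621):
    """Sample size per group for a two-sided two-sample comparison of means.

    d is the assumed standardized effect size (Cohen's d); z_alpha is the
    critical value for alpha = 0.05 and z_beta corresponds to the target
    power (0.8416 for 80% power). Normal approximation.
    """
    return math.ceil(2.0 * ((z_alpha + z_beta) / d) ** 2)

# Required n per group for 80% power at alpha = 0.05:
for d in (0.8, 0.5, 0.2):
    print(f"d = {d}: n = {n_per_group(d)} per group")
```

Note how fast the requirement grows as the assumed effect shrinks: a "large" effect needs about 25 per group, a "small" one nearly 400. That sensitivity is precisely why the effect-size estimate, however rough, is worth thinking about up front.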
It's also trivially easy to calculate power under a number of different scenarios, to ensure your experiment is sufficiently powered if things go considerably worse than expected. For example, in a study I once did the power calculations for, we weren't sure what the ratio of exposed to unexposed subjects would be. So I ran it over a large range:
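That kind of sweep takes only a few lines. Here's a hypothetical reconstruction of the exercise (the effect size, total sample size, and ratios below are illustrative placeholders, not the original study's numbers), using the normal approximation for a two-sided two-sample comparison of means:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_at_ratio(d, n_total, frac_exposed, z_crit=1.959964):
    """Power when a fraction of n_total subjects fall in the exposed group.

    d is the assumed effect size; z_crit is the two-sided alpha = 0.05
    critical value. Unequal group sizes enter through n1*n2/(n1+n2).
    """
    n1 = frac_exposed * n_total
    n2 = n_total - n1
    z = abs(d) * math.sqrt(n1 * n2 / (n1 + n2))
    return normal_cdf(z - z_crit)

# Sweep the exposed fraction for a hypothetical study of 200 subjects
# and an assumed moderate effect (d = 0.5):
for frac in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"{frac:.0%} exposed: power = {power_at_ratio(0.5, 200, frac):.2f}")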
That left me confident that even in my "worst case" scenario, I'd have reasonably good power at realistic effect sizes.
That's the true strength of power calculations. They'll tell you things about your study: what you need, what doesn't matter, and whether you have a reasonable chance at success before you spend time and money pursuing an idea. Sit down with someone, take an hour or two (at most, for a simple experiment) and do it right. Or ask CrossValidated for advice.