Statistical power in neuroscience

Today I’d like to highlight a post from a few weeks ago on Neurobonkers discussing a recent meta-analysis suggesting that most neuroscience studies are underpowered – that is, they don’t have enough subjects to draw reliable conclusions from their results.

Blog post here: http://bigthink.com/neurobonkers/the-neuroscience-power-crisis-whats-the-fallout

This really is a big issue in many fields, especially neuroscience and psychology, and it’s a problem in my own field, so I wanted to chip in. I can’t tell you the number of papers (and, even more so, posters) I’ve seen that claim seemingly interesting results based on… 2 people. It’s shockingly poor science (and irresponsible, in a way) to try to suggest something based on a very small number of people. Think of it this way: if you study two people and one of them happens to be some sort of freak, you can’t take their average on whatever you’re measuring and generalise it to the rest of the population. The more people you study, the less likely this is to happen (there is always a statistical chance, but it shrinks as the sample grows). The subtler problem with underpowered studies is what happens when they do find something: a small study has little chance of detecting a true effect, so if only a fraction of the effects we test are real, a “significant” result from an underpowered study is disproportionately likely to be a false positive.
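To make that concrete, here’s a minimal simulation sketch (mine, not from the post or the meta-analysis; the effect size, the base rate of real effects, and alpha are made-up illustrative numbers, and it assumes numpy and scipy are installed). It asks: among studies that reach p < .05, what fraction were actually testing a real effect?

    # Hypothetical simulation (illustrative numbers, not from the post):
    # only some fraction of the effects we test are real. For each sample
    # size, run many two-group comparisons and ask: among studies that
    # reached p < .05, how many were testing a real effect? (the PPV)
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def ppv(n_per_group, true_d=0.8, prop_real=0.2,
            n_studies=5000, alpha=0.05):
        true_pos = false_pos = 0
        for _ in range(n_studies):
            real = rng.random() < prop_real          # is this effect real?
            shift = true_d if real else 0.0          # group difference, in SDs
            a = rng.normal(shift, 1.0, n_per_group)
            b = rng.normal(0.0, 1.0, n_per_group)
            if stats.ttest_ind(a, b).pvalue < alpha: # a "significant" study
                if real:
                    true_pos += 1
                else:
                    false_pos += 1
        return true_pos / max(true_pos + false_pos, 1)

    for n in (2, 10, 50):
        print(f"n = {n:2d} per group: P(real effect | p < .05) = {ppv(n):.2f}")

With only two subjects per group, a “significant” result barely moves the odds that the effect is real above the base rate; with fifty per group, it climbs well above it.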

[Photo by Bob Jacobs, Laboratory of Quantitative Neuromorphology, Department of Psychology, Colorado College (Photo credit: Wikipedia)]

On the other hand, I feel like reviewers have an easy out when they want to reject a paper: they can simply declare the study underpowered without any evidence for this. Saying something is “underpowered” is only meaningful relative to the effect size you expect to see. For example, if the effect of whatever you are looking at, say the difference in inflammation between depressed and non-depressed persons, is very large (i.e., the difference in levels of inflammation between the groups is A LOT), then you need fewer people to reliably detect it. If the effect size is expected to be small, then you need more people. So be wary if someone tells you that a study is underpowered without explaining why – the quick calculation below shows just how different the required sample sizes can be.
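For a sense of the numbers involved, here’s a quick power calculation sketch (again mine, not from the post), using the statsmodels Python package and Cohen’s conventional “large” (d = 0.8) and “small” (d = 0.2) effect sizes:

    # How many subjects per group does a two-sample t-test need for 80%
    # power at alpha = .05? It depends entirely on the expected effect size.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for d in (0.8, 0.2):  # Cohen's "large" vs "small" effect
        n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
        print(f"d = {d}: ~{n:.0f} subjects per group")

A large effect needs roughly 26 subjects per group for 80% power; a small one needs nearly 400. “Underpowered” only means something relative to the effect you expect.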

In the case of the paper mentioned above, however, the authors’ concern is that many of the effects neuroscience looks for are expected to be very subtle (and small), so underpowered studies may be misrepresenting results or missing real differences entirely – which I do think is a fair call.

When will neuroscience research become cheaper? Or when will governments begin to fund more research into this area?



4 thoughts on “Statistical power in neuroscience”

  1. Dan says:

    You would hope that the peer-review process would ensure that any broad-reaching statements on the basis of results from such studies would be quashed. However, and do feel free to correct me, peer review does not have any safeguards to ensure that a publication’s findings are based on good science.

    • It depends on what you mean by “good science”. Intentional fraud can be very difficult to detect unless the author has been careless (which happens often!). Incorrect interpretation of results is often caught in peer review, and reviewers (in my experience) also focus on critiquing a) the justification for the hypotheses and the study itself and b) the methodology and data analysis. In my field, it is very important that statistical analyses are done correctly, because if they aren’t, the results are simply misleading. To answer the research question properly, it is also critical to control for factors that could be influencing the results. Reviewers often pick this up (but not always – so bad science can and does get through sometimes!).

      • Dan says:

        Thanks Dr Byrne. To me, good science is conducted in accordance with the best practices of the time and ensures that enough data are collected to support the resulting claims, including any wider implications hypothesised from them.

        I agree with you that, without access to the actual raw data, it is impossible for reviewers to prevent publication of falsified results, and that reviewers are operating on a merit system. Sometimes this results in papers being published prematurely, or rejected when they should not be.

        Keep up the great blog.

        • Thanks Dan! I have no doubt that under the single-blind review system (which is the norm) you get a lot of unnecessary rejections from direct competitors, and even reviewers stealing ideas and publishing them themselves – in fact I’ve seen it happen to colleagues. I do believe, however, that the majority of scientists are honest in this regard, and I hope it stays that way.
