#SFN2018 The New Age of Statistics in Neuroscience

sfn18

#1

The Age of New Stats in Neuroscience


You have heard this once. You have heard this twice. Many scientific disciplines, including psychology and neuroscience, are undergoing a reproducibility crisis. A substantial number of social science experiments published between 2010 and 2015 in prestigious peer-reviewed journals such as Science and Nature have failed to replicate [1]. A number of neuroscience-related studies have shown the same trend [2]. This is a major challenge for scientists and publishers alike, because "good" science is expected to meet three major conditions:

I. Repeatability: can the same lab using the same experimental setup obtain the same results?
II. Replicability: can a different lab using the same experimental setup obtain the same results?
III. Reproducibility: can a different team using a different experimental setup obtain the same results?

To get a better idea of what neuroscientists are considering as possible solutions to this challenge, I went to the Professional Development Workshop called "Improving Your Science: Better Inference, Reproducible Analyses, and the New Publication Landscape," and all I can say is that it was a religious experience. The panel included Robert Calin-Jageman, Christophe Bernard, Brian Wandell, and Marina Picciotto.

Here are some of the critical takeaways:

The need to publish negative results
"Statistical significance is the least interesting thing about the results"
(Gene Glass, 2004)

Confirmation bias and publication bias are common problems in scientific publishing because authors are more likely to submit, and publishers are more likely to accept, papers that show positive rather than negative or inconclusive results. This MUST change. If negative data cannot be published, many researchers will end up unknowingly repeating experiments that have already yielded null effects.

eNeuro is trying to start this trend, and it will hold such manuscripts to rigorous expectations as well. These include multiple experimental approaches to test the hypotheses, faithful reproduction of the experimental models of the labs whose findings you claim to refute, and meticulous use of positive and negative controls.

Use effect sizes and confidence intervals for better inference
"Hodgkin and Huxley did not p all over their manuscripts"
(Robert Calin-Jageman, SFN 2018)

In science, there are currently two main approaches to statistics: the testing approach and the estimation approach. Most of us are quite familiar with the testing approach; this is the one associated with the nefarious p-value. In this setup we ask the qualitative question “does X have an effect on Y?” and use a p-value or a Bayes factor to generate a definitive answer. Replication is rare in the testing approach.
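To make the contrast concrete, here is a minimal sketch of the testing approach in Python. The data here are made up purely for illustration (hypothetical firing-rate measurements for a control and a treatment group); the only output is a p-value that gives a yes/no verdict.

```python
# Minimal sketch of the "testing" approach, using made-up data for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10.0, 2.0, size=30)    # hypothetical control measurements
treatment = rng.normal(11.5, 2.0, size=30)  # hypothetical treatment measurements

# Testing question: "does X have an effect on Y?" -> answered with a single p-value
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a yes/no verdict, and little else
```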

In contrast, the estimation approach is quantitative in nature and asks, "how much of an effect does X have on Y?" This is a more reliable way of understanding and using data because the estimation approach focuses on uncertainty and practical significance, and it encourages repeatability and replication of studies. So instead of a p-value, we would report an effect size (the magnitude of the difference between two groups) and a confidence interval (an estimated range of values that is likely to include the unknown population parameter). This way, scientists can also perform meta-analyses that combine results from multiple studies and estimate the overall magnitude of the effect.
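And here is a minimal sketch of the estimation approach for the same hypothetical data: instead of a lone p-value, it reports the raw mean difference, a standardized effect size (Cohen's d), and a 95% confidence interval.

```python
# Minimal sketch of the "estimation" approach, using the same made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10.0, 2.0, size=30)    # same hypothetical data as above
treatment = rng.normal(11.5, 2.0, size=30)

# Estimation question: "how much of an effect does X have on Y?"
diff = treatment.mean() - control.mean()                      # raw effect: mean difference
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)  # equal group sizes
cohens_d = diff / pooled_sd                                    # standardized effect size

# 95% confidence interval for the mean difference (t distribution)
se = np.sqrt(treatment.var(ddof=1) / treatment.size + control.var(ddof=1) / control.size)
dof = treatment.size + control.size - 2
half_width = stats.t.ppf(0.975, dof) * se
print(f"difference = {diff:.2f}, "
      f"95% CI [{diff - half_width:.2f}, {diff + half_width:.2f}], "
      f"Cohen's d = {cohens_d:.2f}")
```

The point of the sketch is simply that the interval and the effect size carry information about magnitude and uncertainty that a bare p-value does not, which is also what makes results reported this way usable in later meta-analyses.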

I came out of this workshop repeating the slogan "DON'T TEST, ESTIMATE," coined by Dr. Robert Calin-Jageman, and completely sold on the idea. As scientists, we want to do the best research and, more importantly, to recognize the value of our data in ways that do not jeopardize the work done but also honestly convey the extent of its utility. As journals such as the Journal of Neuroscience begin to reformulate the statistics they expect from our work (as explained in detail by Dr. Marina Picciotto), this workshop shows how quickly scientists themselves are moving to correct biased statistical practices and to embrace open, transparent science. I, for one, am extremely grateful for this road less travelled.

[1] Camerer, C. F. et al. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour 2, 637–644 (2018).

[2] Gilmore, R. O., Diaz, M. T., Wyble, B. A. & Yarkoni, T. Progress toward openness, transparency, and reproducibility in cognitive neuroscience. Ann. N. Y. Acad. Sci. 1396, 5–18 (2017).


Prabarna Ganguly
Twitter: @prabarna


#2

I wasn’t able to attend this one but I wanted to!! Thanks for the highlights! I completely agree we need to get better at publishing negative data. A negative result is still a result, and we can save others time by not trying the same things over and over.