Live Chat: Maintaining Rigor When Research Operations Are in Flux 6/15/20 @ 2pm EDT


Recommended Posts

Kip Ludwig
15 minutes ago, Teresa Murray said:

Thank you for the suggestion. MATLAB is the platform that has been most problematic for us. I've been thinking about Python for some time, and I will also look into Microsoft Teams.

 

2 minutes ago, Kip Ludwig said:

Response here: My lab has used the virtual time to start posting our hypotheses and methods on our website 'before conducting the experiment', so we can be backtracked to ensure we aren't doing retrospective analyses. We've also done a lot of research into statistics during the downtime, working with local statisticians to understand how, in cases where we want to ask a new question using data already taken, to do that appropriately. In many cases it's using 'half the data' to develop the hypothesis while reserving the remaining half to 'test the hypothesis prospectively'. Most times we are just using retrospective analyses to 'identify a future hypothesis to be prospectively tested'. But the most important thing is to be transparent about all the analyses that you've performed, not just 'the ones that turned out'.
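To make the split-half idea above concrete, here is a minimal sketch (not from the discussion itself; the toy data, variable names, and the simple one-sample test are all illustrative assumptions): one half of an already-collected dataset is explored freely to generate a hypothesis, while the other half is held out untouched until a single pre-specified confirmatory test.

```python
# Sketch of the split-half exploratory/confirmatory approach described
# in the discussion. Toy data and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the split is reproducible

n_trials = 200
data = rng.normal(loc=0.3, scale=1.0, size=n_trials)  # toy measurements

# Shuffle indices once, then split in half; the confirmatory half is
# never looked at during the exploratory phase.
idx = rng.permutation(n_trials)
explore = data[idx[: n_trials // 2]]
confirm = data[idx[n_trials // 2:]]

# Exploratory phase: inspect the first half freely to form a hypothesis.
print("exploratory mean:", explore.mean())

# Confirmatory phase: one pre-specified one-sample t-statistic on the
# held-out half (computed by hand to stay dependency-free).
t = confirm.mean() / (confirm.std(ddof=1) / np.sqrt(len(confirm)))
print("confirmatory t-statistic:", round(t, 2))
```

The key discipline is procedural, not computational: the split happens once, before exploration, and the confirmatory analysis is fixed in advance, exactly as one would preregister a prospective hypothesis.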

I think for pilot proof-of-concept studies, both are equally serious.

Some resources on Neuronline to specifically address these details:

https://neuronline.sfn.org/scientific-research/minimizing-bias-in-experimental-design-and-execution

https://neuronline.sfn.org/scientific-research/ask-an-expert

https://neuronline.sfn.org/collection/resources-to-enhance-scientific-rigor


 

Teresa Murray
4 minutes ago, Kip Ludwig said:

 

Response here: Like Teresa, this has definitely changed the way we think about experiments going forward. We used the opportunity to map out our experiments and papers over the next year, to develop and troubleshoot code for automated analysis and 'real-time' visualization of key data, and most importantly to more fully embrace rigor. We are working towards posting all of our hypotheses, with proposed analyses, on our website before an experiment has been performed, so we can be 'backtracked' to make sure we aren't doing retrospective analyses that would require different statistical methods than prospectively tested hypotheses.

We've used the time to develop a multi-lab meeting to discuss experiments before they are conducted, to get outside perspectives and identify key confounds we might be missing via groupthink. This has been insanely helpful, so we've also implemented it as we draft papers, to make sure 'naive readers' can follow the story and figures before we submit.

We have found that the 'break' between animal experiments has been a good thing for improving our pipeline and tools, so we are going to deliberately build more 'breaks' into the schedule, allowing more time for data analysis between experiments and for refining procedures.

My collaborators and I have found ourselves taking more time to discuss our data analysis plans before collecting and analyzing data sets. Our collaborators have more time to meet and to mull over procedures, since they are not in the lab as often. We hope the extra planning time will be repaid by avoiding midstream changes to the analysis after the data are acquired. Of course, this should be the procedure at any time; in retrospect, the pressure to analyze and publish may lead to jumping into non-optimal plans. This could also lead to improved collaborations.

Kip Ludwig

Thank you for a great online discussion! Please keep using this forum to continue the conversation; Lique and I will check back periodically over the next couple of days to answer any remaining questions.

Teresa Murray
7 minutes ago, Kip Ludwig said:

Thank you for the resources. Since there are fewer results to discuss right now, I will take the time to present and discuss these during lab meetings.

Lique Coolen

Thank you all so much for a fantastic discussion. Please keep posting your questions and comments. Kip and I will check this page regularly and continue to answer questions.

Have a great rest of your day. 

Guest Santani Teng

For people working to implement remote psychophysical data collection, what are some ways you are controlling for things like display properties, the acoustic environment, etc.? I guess this wouldn't be pandemic-specific so much as remote-data-specific, but we'll probably see a proportional uptick in data collected this way.

E.g. 
Woods, Kevin JP, et al. "Headphone screening to facilitate web-based auditory experiments." Attention, Perception, & Psychophysics 79.7 (2017): 2064-2072.

Tran, Michelle, et al. "Online recruitment and testing of infants with Mechanical Turk." Journal of Experimental Child Psychology 156 (2017): 168-178.
