Wet-lab biology is hard. Experiments are complicated, replicates are expensive, observations are noisy, and there is so much else happening besides the pathways of interest that it’s amazing any data is obtained at all. Oh, and a lot of it (for example synaptic cell biology) is really, really small, which makes obtaining those observations even harder.

My doctorate is in computational neuroscience. My research used a lot of wet-lab data gathered by others, but I didn’t do wet-lab work myself. The option was available, but compared to writing image analysis code and building simulation models, the wet-lab work just looked like too much pain. Massive kudos to my lab mates and all the other neuroscientists out there who put the work in.

Discovering how biological organisms work at the cellular or molecular scale is extremely hard. You can’t isolate the cellular function you are interested in from all the other functions of life while you study it, because the cells would, y’know, be dead. That means that for every pathway of interest, there are thousands or millions of other pathway events happening simultaneously, competing for space, energy and resources, and confounding the observations made. All of this is also happening at or beyond the limits of our ability to observe - it’s all really small. Finally, the things we need to do to produce an observable effect - adding markers of various kinds, and simply getting the cells somewhere we can observe them under a microscope - all change what is happening inside the cell, so in vitro results may no longer be representative of in vivo behaviour. Like I said, getting observational data is challenging.

Because of this difficulty in obtaining clean data, all the possible confounding factors, and the sheer noisiness of the data, it is very hard to produce clear, publishable results. Usually the data will indicate a preference for a process in certain cells, in response to a certain stimulus, in a certain environment. Even third-party replication of these results is hard to do, and in many cases produces conflicting results. This is part of the reason why there are so many competing truths published out there in the literature. There’s enough noisy data and conflicting results to support many mutually exclusive stories. Even then, it could be that these apparently mutually exclusive stories are all happening, depending on some other factor that hasn’t been isolated yet. Or none of them is valid, and there is something else happening we haven’t discovered yet. Science at its messiest :)

Published data in this field is of variable quality. It’s all noisy, but there may be additional biases and errors inherent in the literature - positive-result bias is an obvious one. The overall effect is that the reliability of the conclusions is not as high as you’d hope. Statistical errors in research have been a discussion topic for years, for example Nuzzo (2014) and Ioannidis (2005).
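To get a feel for why this matters, here is a minimal simulation sketch (my own illustration, not taken from either paper, with made-up numbers for the prior, the power and the significance threshold) of how low statistical power combined with a bias towards publishing positive results can leave a surprisingly large fraction of “significant” findings as false positives:

```python
# Illustrative sketch only: the prior, power and alpha below are assumed
# numbers, chosen to show the effect, not taken from any real field.
import numpy as np

rng = np.random.default_rng(42)

n_hypotheses = 10_000   # hypotheses tested across a field
prior_true = 0.1        # assumed fraction of hypotheses that are really true
power = 0.3             # assumed chance a real effect reaches significance
alpha = 0.05            # conventional significance threshold

is_true = rng.random(n_hypotheses) < prior_true

# Real effects reach significance with probability `power`;
# null effects cross the threshold with probability `alpha`.
significant = np.where(is_true,
                       rng.random(n_hypotheses) < power,
                       rng.random(n_hypotheses) < alpha)

true_positives = np.sum(significant & is_true)
false_positives = np.sum(significant & ~is_true)

print(f"Significant results: {true_positives + false_positives}")
print(f"Fraction that are false positives: "
      f"{false_positives / (true_positives + false_positives):.2f}")
```

With those assumed numbers, roughly 60% of the significant results come from null effects - arithmetic of the kind behind Ioannidis’s argument.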

How do we cope with this, given that science builds on prior results, constructing theories like a house of cards? Not an easy question to answer. Good scientific practice copes to some extent: disproving theories on the basis of refuting evidence is core to the scientific method, and good experiment design requires listing the assumptions that went into building an experiment and into the results obtained from it. These assumptions are there for all to see and potentially challenge.

The pressure to publish is a problem. If lab income depends on the quantity of publications, then there is always the risk that inconclusive results get published. This recent cartoon from the excellent XKCD is spot on: https://www.xkcd.com/2268/

Requiring the raw data and the analysis performed to be published along with the results is a good strategy. It means others can verify the quality of the analysis, and it makes it harder for insufficiently supported conclusions to slip through. It does, however, depend on other labs taking the time to reanalyse the data. Still, inter-lab competition provides an incentive for labs to take a long, hard look at each other’s work. It may seem confrontational, but again, refutation of current theories in favour of one that better explains the evidence is how science works.

Shared at https://www.linkedin.com/pulse/neuroscience-research-publications-donal-stewart