In praise of negative results

The recent article in Nature on bias in research got me thinking again about an old chestnut: publication bias. It’s everywhere. It is a particular problem for negative results, where treatment and control groups don’t show statistically significant differences. People don’t publish them as often, and when they do, they tend to end up in lower-impact journals. This is widely known as the file drawer problem.

Why is this important? Put simply, without negative results we only get part of the picture of what is going on. This is a problem for all branches of ecology, but particularly for field-based work. For example, finding that management practice x did not significantly alter populations of species y compared to controls may not seem that exciting. However, if that result goes unpublished, and someone else investigating the same management elsewhere finds that it increases the population of species y and publishes that, the literature becomes biased. This can give us a completely skewed perception of reality.

The problem is most acute when people try to summarise large areas of research using techniques like meta-analysis. In the hypothetical case of management practice x and species y from earlier, a meta-analysis that excludes unpublished studies could overestimate the average effect of the management treatment. Much as I love meta-analysis and what you can do with it, this is a fatal flaw.
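A quick simulation sketches why this inflates effect sizes. The numbers below are invented purely for illustration (a small true effect, a simple significance filter standing in for the file drawer); they are not from any real dataset:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # hypothetical small true effect of management practice x
N_STUDIES = 1000    # many independent studies of the same practice
SAMPLE_SD = 0.5     # standard error of each study's effect estimate

# Each study observes the true effect plus sampling noise
observed = [random.gauss(TRUE_EFFECT, SAMPLE_SD) for _ in range(N_STUDIES)]

# Crude "file drawer": only significant positive results get published
# (roughly |estimate| > 1.96 * SE, positive direction for simplicity)
published = [e for e in observed if e > 1.96 * SAMPLE_SD]

print(f"True effect:                 {TRUE_EFFECT:.2f}")
print(f"Mean of all studies:         {statistics.mean(observed):.2f}")
print(f"Mean of 'published' studies: {statistics.mean(published):.2f}")
```

The mean of all studies recovers something close to the true effect, while the mean of the "published" subset is many times larger, because the filter only lets through the studies whose noise happened to push the estimate upwards.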

The Centre for Evidence-Based Conservation (CEBC), the authority on systematic reviews and meta-analysis in ecology, recommends that researchers hunt for unpublished work to improve their analyses. While this is vastly preferable to including only studies from ISI journals, it still doesn’t solve the underlying problem. Contrary to the way scientists normally think, we should actually be encouraging publication of negative results.

So how could we do this? A few journals dealing with applied subjects are already targeting the problem. The journal Restoration Ecology now has a section called “Set-backs and Surprises” which explicitly aims to publish negative results. As Richard Hobbs says in his editorial for the journal, these results are just as important as hearing about projects which have worked. The website Conservation Evidence also aims to publish the results of conservation management, negative or positive, in short, easy-to-understand articles. This approach should become more widespread. Synthesis of results is important for testing theory, and the more information we have to test theories with, the better.

Some people will undoubtedly read this and say “Hang on a minute! Surely positive results indicate good study design? We should only be considering the best research for testing theory or looking at the consequences of management.” Frankly, these people can jump off a cliff. Yes, a study with a near-infinite sample size will find a difference between group A and group B. But that difference may be solely a product of the sample size: ecological significance is not the same as statistical significance. And yes, studies with smaller sample sizes will have noisier results, but we can account for this. The best way to test a theory is to use as many different methodologies in as many different settings as possible. That is the true test of whether a theory fits. By excluding negative results we are, at best, slowing scientific progress. Given the pressures our natural world is facing, we do not have time for this.
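The point about sample size can be made concrete with a toy two-group comparison. Again the numbers are made up for illustration: a difference in means of 0.01 (ecologically trivial by assumption) becomes “statistically significant” once the groups are large enough:

```python
import math

effect = 0.01   # hypothetical, ecologically trivial difference between groups
sd = 1.0        # within-group standard deviation

for n in (100, 10_000, 1_000_000):
    se = sd * math.sqrt(2 / n)   # standard error of the difference in means
    z = effect / se
    # two-sided p-value from the normal approximation
    p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    print(f"n per group = {n:>9,}: z = {z:6.2f}, p = {p:.3g}")
```

With n = 100 per group the difference is nowhere near significant; with a million per group the p-value is vanishingly small, even though the effect itself has not changed at all.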

Do you have any ideas how we could improve the biases in the ecological literature? How could we encourage publication of negative results, given they are generally perceived as less interesting?

Please feel free to leave any thoughts below.


3 thoughts on “In praise of negative results”

  1. Ben Goldacre and others advocate registering clinical trials before they’re carried out. If the experiment is pre-planned and recorded somewhere, it should be much easier to chase up the results. This would be harder to do with ecological studies, since they don’t have the wider framework of ethics approval that clinical trials do.

    Another option would be a journal (or journals) of negative results; this should be easier to set up given the recent surge in open access publishing and “open access 2.0” journals.

    It’s also worth mentioning that alongside publication bias there’s a big issue with observation bias, i.e. you’re much more likely to spot some results than others. For example, you’d be much more likely to spot a particularly damaging invasive species than one which appears and then sits quietly in its niche in the undergrowth.

    This is something that has bothered me about the links drawn between ecosystem functioning and disease. There are a number of reports of increases in disease risk following damage or changes to ecosystems. I imagine that upon noticing an increase in disease transmission, a team of epidemiologists is dispatched to see what’s up. If there’s a link with ecosystem degradation, they publish their nice result. If in fact a change to the ecosystem causes a reduction in disease transmission, no one goes to investigate.

    So even before publication bias we’re missing half the picture.

    1. Hey Nick. I like the Goldacre idea, but it seems a bit infeasible in ecology, as you say. You could argue that universities should keep a copy of work done by BSc, MSc and PhD students that could be made available after a certain time. I know there is an attempt to do this with PhD students, but I don’t know about the other cases. This work is valuable, even if it isn’t published.

      I think the proliferation of web based journals and open access may hold the key to improving the situation in ecology. It is hard to envisage a world in which these would draw as much attention as the established journals, but at least it would be an option. I actually think this is a big enough problem that it would be worth talking to the established ecological societies like BES or ESA and seeing if they’d be interested in raising it as an issue, perhaps in an editorial or something. Obviously we need much more radical change than that, but it’s a start.

      The observation bias you mention is potentially even more of a problem. Like you said, if something is perceived as a problem, like invasive species, it is more likely to be studied, whereas there could be all sorts of things that we should be studying that slip through the net.

      I think the perceived links between disease and ecosystem function you mention are a great example of observation bias, particularly as the link is now being used as an advocacy tool. You can easily imagine this occurring in situations where causality is difficult to attribute, as is the case with many studies of ecosystem services. This is a reason long-term monitoring stations are so vital, and a good reason why we shouldn’t be cutting them, as we are at the moment in the UK.

      I have no answer for how to get round observation bias; it seems like a fairly intractable problem.
