Are we in danger of underestimating biodiversity loss?

Almost all ecological research on human impacts on biodiversity looks at changes after they have happened. To do this, researchers usually compare a site where some kind of disturbance has occurred with a nearby undisturbed site, a method known as space-for-time substitution. The assumption of this approach is that the only thing that differs between the sites is the disturbance. However, this assumption is often incorrect. Sites may have had very different biodiversity before any disturbance, which can lead to under- or over-estimates of biodiversity change as a result of human impacts. One consequence is that we aren’t really sure how tropical logging alters the composition of ecological communities. These problems are likely to be particularly acute when habitat fragmentation limits dispersal to some sites.

Until recently there had been little work examining how the results from space-for-time methods compare with those from methods that survey sites both before and after a disturbance. However, last week an elegantly designed study was published in the Journal of Applied Ecology that examined just this in the context of logging in Brazil. The paper compared space-for-time methods with before-after-control-impact (BACI) methods. Critically, BACI studies measure biodiversity at sites at least once before the disturbance of interest takes place. Researchers then return to the sites and remeasure them after the disturbance. Importantly, both sites impacted by the disturbance and control sites are surveyed on both occasions. This allows researchers to disentangle the effects of the disturbance from any differences between sites prior to it – a key advantage over space-for-time methods.
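To make the contrast concrete, here is a minimal sketch in Python of the BACI logic as a difference-in-differences, next to a space-for-time-style comparison of post-disturbance sites only. It is not the analysis used in the paper; the data and numbers are simulated purely for illustration.

```python
# A minimal sketch (not the authors' analysis) of the BACI logic as a
# difference-in-differences, on simulated data with invented numbers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_sites = 30

# Suppose the sites that end up logged happened to be richer to begin with:
# a pre-existing difference that space-for-time cannot see.
logged = rng.random(n_sites) < 0.8
baseline = np.where(logged, 55.0, 45.0) + rng.normal(0, 3, n_sites)
true_logging_effect = -10.0

before = baseline + rng.normal(0, 2, n_sites)
after = baseline + np.where(logged, true_logging_effect, 0.0) + rng.normal(0, 2, n_sites)

df = pd.DataFrame({
    "richness": np.concatenate([before, after]),
    "period": ["before"] * n_sites + ["after"] * n_sites,
    "logged": np.tile(logged.astype(int), 2),
})

# BACI: the period x logging interaction isolates the logging effect
# from the pre-existing differences between sites.
baci = smf.ols("richness ~ C(period, Treatment('before')) * logged", data=df).fit()

# Space-for-time analogue: compare logged and unlogged sites after logging only,
# which confounds the logging effect with the baseline differences.
sft = smf.ols("richness ~ logged", data=df[df["period"] == "after"]).fit()

print(baci.params)  # interaction term should recover roughly -10
print(sft.params)   # 'logged' coefficient is biased towards zero here
```

In this toy example the space-for-time comparison largely misses the logging effect because the logged sites started out richer, whereas the BACI interaction term recovers it.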

The paper by Filipe França and colleagues examined the differences between results obtained from space-for-time and BACI methods when looking at changes in dung beetle biodiversity in logged tropical forests in Brazil. To do this they surveyed 34 locations in a logging concession, 29 of which were subsequently logged at a variety of intensities. The intensity of logging (the number of trees/volume of wood removed per hectare) is a very important determinant of the impact of logging on biodiversity and carbon (see previous posts on this here and here). They then went back and re-surveyed these locations one year later. From the data collected, they calculated changes in species richness, community composition and total biomass of dung beetles.

Figure 1 – Differences between before-after-control-impact (BACI) approach and space-for-time substitution (SFT) for changes in dung beetle species richness, community composition, and biomass. For more details see França et al. (2016).

When the two approaches were compared, the paper found that BACI characterised changes in biodiversity significantly better than space-for-time substitution. Critically, space-for-time methods underestimated the relationship between logging intensity and biodiversity loss: BACI estimated changes in species richness twice as severe as those from space-for-time comparisons (see Figure 1). BACI methods also consistently provided higher explanatory power and steeper slopes for the relationships between logging intensity and biodiversity loss.

So what does this mean for how we do applied ecology? I think it is clear that we need to employ BACI methods more often in the future. However, BACI comes with logistical and financial constraints. Firstly, it is virtually impossible to predict where disturbances are going to happen before they occur. As a result, França and colleagues argue that if we want to carry out more BACI research in the future, we need to develop closer ties with practitioners. This will involve building relationships with logging and oil palm companies, as well as agricultural businesses and property developers. This may make some researchers uncomfortable, but we need to do it if we are to provide robust evidence for decision makers. Secondly, BACI studies take longer to carry out, so we need to convince those who hold the purse strings that they are worth investing in.

BACI is clearly something we should be using more often, but does this mean that the space-for-time approach is useless? Should we even be using space-for-time methods at all? I’m not being hyperbolic just to get some attention – some have argued that we should stop using chronosequences altogether because ecological succession is unpredictable. After momentarily going into a bit of a crisis about this when I read some papers on succession last year, I have come to a slightly different conclusion. Space-for-time substitution sometimes predicts temporal changes well, but sometimes it doesn’t. What we need to work out is when the use of space-for-time approaches is acceptable, and when it would be better to use temporal methods. Reviews have highlighted that as ecosystems increase in complexity, space-for-time methods become less useful for monitoring changes in biodiversity. For example, large local species pools mean that post-disturbance colonisation may be very variable between sites. This problem is compounded in fragmented landscapes where there are barriers to the dispersal of seeds and animals. Every additional layer of complexity makes post-disturbance dynamics more and more difficult to predict. Ultimately, the best way to address this problem is through some kind of synthesis.

Working out when space-for-time approaches are useful and when they are not is not something we are going to solve overnight. Before we can review the evidence, we need some evidence in the first place. This is part of the reason why papers like the one by França and colleagues that I’ve discussed here are vitally important. So next time you design a study, see if you can assess how the results from temporal methods compare with those from space-for-time methods. The results might just take you by surprise.


Filipe França & Hannah Griffiths have written a great post on the Journal of Applied Ecology blog going into more detail about the implications of their study. I strongly recommend you give it a look.

Local species richness may be declining after all

Recently two papers seemed to turn what we thought we knew about changes in biodiversity on their head. These papers by Vellend et al. and Dornelas et al. collated data from multiple sources and suggested that species richness at local scales is not currently declining. This was counter-intuitive because we all know that species are going extinct at unprecedented rates. However, it is possible that the introduction of non-native species and recovery of previously cultivated areas may offset extinctions leading to relatively little net change in local species richness.

This week a paper has been published that calls these findings into question. The paper, by Andy Gonzalez and colleagues and published in the journal Ecology, suggests that there are three major flaws with the analyses. These flaws mean that the question ‘Is local-scale species richness declining?’ currently remains unanswered and, with the data available, unanswerable.

The papers of Vellend et al. and Dornelas et al. were meta-analyses of previously published studies. One issue with meta-analysis is that it is very prone to bias. As with any study, if the samples (in this case ecological studies) are not representative of the population (in this case locations around the globe), then the results will be flawed. To test the representativeness of the datasets used by Vellend and Dornelas, Gonzalez et al. examined how well they represented biodiversity and the threats it faces. This analysis (see below) showed that the papers were representative of neither (though, curiously, the Dornelas et al. dataset over-represented areas heavily affected by human activity).

Figure 1 – Spatial bias of the Vellend et al. (2013) and Dornelas et al. (2014) data syntheses. For more information see the paper by Gonzalez et al. (2016).

The paper also suggests that using short time series can underestimate losses. By analysing the relationship between study duration and changes in species richness (see below), Gonzalez et al. claim that longer studies tended to show greater declines in species richness. This supports previous theory suggesting that there is often a time lag between disturbance events and species extinctions – termed ‘extinction debt’. However, I’d be intrigued to see the results of removing the studies with the longest duration from this analysis, since the authors admit that it is sensitive to their inclusion. I’ve seen recent similar work suggesting that the same kind of relationship might be seen for studies monitoring individual animal populations.

Figure 2 – The effect of study duration on apparent changes in species richness.
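Out of curiosity, here is a sketch of the kind of sensitivity check mentioned above – regressing richness change on study duration and then refitting without the longest studies. All the numbers are invented; it simply shows the shape of the analysis.

```python
# Sketch of the sensitivity check mentioned above: regress richness change on
# study duration, then refit after dropping the longest studies. All numbers
# here are invented purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
duration = rng.uniform(1, 40, 100)                          # study length, years
richness_change = -0.3 * duration + rng.normal(0, 5, 100)   # hypothetical trend

full = stats.linregress(duration, richness_change)

# Drop the 10% longest studies and refit to see how much the slope depends on them
keep = duration < np.quantile(duration, 0.9)
trimmed = stats.linregress(duration[keep], richness_change[keep])

print(f"all studies:      slope = {full.slope:.2f}, p = {full.pvalue:.3g}")
print(f"longest removed:  slope = {trimmed.slope:.2f}, p = {trimmed.pvalue:.3g}")
```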

Thirdly, Gonzalez et al. assert that estimates of change are biased when studies of ecosystems recovering from disturbance (e.g. regrowth on former agricultural fields) are included without taking into account the historical losses that occurred during or after the disturbance. The paper by Vellend et al. in particular combined studies of the immediate response of biodiversity to disturbances such as fire and grazing with studies of recovery from the very same disturbances. Gonzalez et al. show that once the studies of recovering systems are removed from Vellend et al.’s analysis, there is a negative trend in species richness changes.

The biases prevalent in the Vellend and Dornelas papers led Gonzalez et al. to suggest that those papers cannot conclude what the net change in local species richness is at a global scale. However, they note that the results of Dornelas and Vellend are in sharp contrast to other syntheses of biodiversity change that used undisturbed reference sites, such as those by Newbold et al. and Murphy and Romanuk, which reported average losses of species richness of 14% and 18% respectively.

In their conclusion Gonzalez et al. suggest that though meta-analysis is a powerful tool, it needs to be used with great care. Or to put it another way, with great power comes great responsibility. As someone who regularly uses meta-analysis to form generalisations about how nature works, I completely agree. Traditionally, scientists have used funnel plots (graphs with study sample size on the y-axis and effect size on the x-axis) to identify biases in their analyses. I’ve always been sceptical of this approach, especially in ecology, where there is always a large amount of variation between sites. In future, syntheses would do well to follow the advice of Gonzalez et al. and really interrogate the data they are using for any taxonomic, geographic, climatic or other biases that might limit their ability to generalise. I know it’s something I’ll be taking more seriously in the future.
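For readers who haven’t met them, here is a minimal simulated example of the kind of funnel plot described above. The data are made up; in an unbiased literature the points form a roughly symmetric funnel that narrows as sample size grows, and asymmetry hints at publication bias.

```python
# A quick sketch of the funnel plot described above: effect size on the x-axis,
# sample size on the y-axis. Data are simulated; in an unbiased literature the
# points form a symmetric funnel narrowing towards large samples.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect = 0.3
sample_sizes = rng.integers(5, 200, 80)
# Smaller studies scatter more widely around the true effect
effects = true_effect + rng.normal(0, 1, 80) / np.sqrt(sample_sizes)

plt.scatter(effects, sample_sizes)
plt.axvline(true_effect, linestyle="--")
plt.xlabel("Effect size")
plt.ylabel("Sample size")
plt.title("Funnel plot (simulated, unbiased literature)")
plt.show()
```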

Gonzalez et al. also point out that most ecological research is carried out in Europe and North America. If we want to monitor biodiversity we need to increase efforts in biodiverse tropical regions, as well as in boreal forests, tundra and deserts. We need to identify where these gaps most need filling, and then relevant organisations need to prioritise efforts to carry out monitoring. I am positive that this can be achieved, but it will cost a lot of money, needs to be highlighted as a priority and will need a lot of political goodwill. Even with this effort, some of the gaps in biodiverse regions, such as the Democratic Republic of Congo, will be extremely difficult to fill due to ongoing armed conflict.

My take-home message from this paper is that we need to be more careful about how we do synthesis. However, I also think that species richness isn’t the only metric that we should focus on when talking about biodiversity change. Studies have shown that measures of the traits of species present in a community are generally more useful for predicting changes in ecosystem function than just using species richness. Species richness is the iconic measure of biodiversity, but it probably isn’t the best. Ecologists should view species richness in the same way as doctors view a thermometer – it’s a useful tool but you still need to be able to monitor blood pressure, take biopsies and listen to a patient’s lungs before you diagnose them*.

*Thanks to Falko Bushke, whose analogy I stole from a comment he made on my blog post here.


Can bad reviews be useful?

Everyone who publishes has had bad experiences with peer review: reviewers who miss the point of what you are trying to say, or who just hate your paper.

A case in point is some work on tropical logging we just got published in Forest Ecology and Management (see my blog post on it here and the paper here). I don’t have loads of experience with peer review – I now have three papers under my belt, one currently in review, and have reviewed about ten papers in total – but one of the reviews I got for this paper was the worst I have ever been given. I’m not going to go into detail, but here are some choice quotes:

Unfortunately, This analysis does not bring any new results in comparison with others recently published synthesis…

Finally the assess of the impact of logging on tree species richness presented in this study is meaningless…

The straight forward conclusion of the authors does not bring much to the debate already closed….

Now, the first two comments may or may not be true. The thing that annoyed me more than anything was the last comment. Debate in science should never be closed. Scientists should not present a united front when there is contradictory evidence. If you disagree with a study, either (a) re-analyse the data of the paper you don’t like and write a letter to the editor, (b) produce another piece of work testing the same hypotheses, or (c) be really radical and offer to write a joint paper with the authors of the work you disagree with. Blocking the paper from publication should not be an option.

The review I got is nothing compared to what some others have had to deal with, but it was annoying to have my path blocked by a reviewer who didn’t want my work published simply because my results didn’t fit their world view.

However, I have taken some positive lessons from this. Firstly, try to be aware of your own biases when reviewing someone else’s work. Secondly, be fair and be careful with what you say. If you don’t like a paper, don’t go on and on about it – remember that someone spent months of their life on that work. Be constructive and concise. Thirdly, when you are writing a paper, don’t go out of your way to be controversial. I think some of our drafts of the paper came off as a bit combative, and that probably provoked this reaction. Getting this reaction from a reviewer probably means that some readers will have similar reactions. However, don’t shy away from controversy either. If your results support a controversial hypothesis, don’t let people who disagree with your view of things block you from publication.

Inexpert opinion

This post was inspired by an amazing workshop given by Mark Burgman at the recent Student Conference on Conservation Science in Cambridge. I have done my best to get across what I learnt from it here, but it is not the final word on this issue.

Some experts. Yesterday.

It turns out experts aren’t necessarily all that good at estimation. They are often wrong and overconfident in their ability to get stuff right. This matters. A lot.

It matters because experts, particularly scientists, are often asked to predict something based on their knowledge of a subject. These predictions can be used to inform policy or other responses. The consequences of bad predictions can be dramatic.

For example, seismologists in L’Aquila, Italy, were asked by the media whether earthquakes in the area posed a risk to human life. They famously told reporters there was ‘no danger.’ They were wrong.

Not all cases are so dramatic, but apparently experts make these mistakes all the time. This has profound implications for conservation.

Expert opinion is used all the time in ecology and conservation, where empirical data are often hard or impossible to collect. For example, the well-known IUCN Red List draws on large pools of expert knowledge to determine species’ ranges and population sizes. If these estimates are very inaccurate, then we have a problem.

Fortunately, there may be a solution.

This solution was first noticed in 1906 at a country fair, where people were taking part in a contest to guess the weight of a prize ox. Of the 800 or so people who took part, nobody guessed the correct weight. However, the average of the guesses was closer to the true weight than the guesses of most individuals in the crowd, and of most of the cattle experts. As a group, these non-experts outperformed the experts.

Apparently this is now a widely recognised phenomenon.

Building on this, a technique called the Delphi method has been developed. It aims to improve people’s estimates by getting them to make an estimate, discuss it with the other people in their assigned group, and then make another estimate. You then take the mean estimate of the group.

Mark Burgman and colleagues have come up with a modified version of the technique. Each person makes an estimate, gives the highest and lowest reasonable values for that estimate, and states their confidence (50–100%) that these limits contain the true value. The estimates are then discussed within the group and revised, and the revised values are used to derive a group mean. This can be done over several rounds, and it seems estimates improve with more iterations.

I think this is a great idea, but you can take it even further. You can pose a series of questions, some of which you already know the answer to. Using respondents’ answers to these questions, you can calibrate how expert your experts actually are, and then weight each person’s estimate according to the confidence you have in them, as in the example below.

Estimates of the time-to-failure of an earth dam, once the core starts to leak. Taken from Aspinall 2010.

This is an idea pretty similar to meta-analysis. We give more weight to the estimates we are more confident about.
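As a toy illustration of that weighting idea (a much-simplified sketch, not Burgman’s or Aspinall’s actual method, with all values invented), experts who do well on seed questions with known answers could be given more weight when pooling their estimates of an unknown quantity:

```python
# A much-simplified sketch of the calibration idea (not Burgman's actual
# protocol): experts answer seed questions with known answers, and those
# closer to the truth get more weight when estimates are pooled.
# All values are invented.
import numpy as np

truth = np.array([12.0, 3.5, 150.0])        # seed questions with known answers

seed_answers = np.array([                    # one row per expert
    [10.0, 4.0, 160.0],                      # expert A: fairly close
    [25.0, 1.0, 300.0],                      # expert B: way off
    [13.0, 3.0, 140.0],                      # expert C: close
])

target_estimates = np.array([40.0, 90.0, 45.0])  # estimates of the unknown quantity

# Mean relative error on the seed questions; weight = inverse error, normalised
rel_error = (np.abs(seed_answers - truth) / truth).mean(axis=1)
weights = (1 / rel_error) / (1 / rel_error).sum()

print("weights:", np.round(weights, 2))
print("unweighted mean:", target_estimates.mean())
print("calibration-weighted mean:", round(np.sum(weights * target_estimates), 1))
```

Here expert B’s wild answers on the seed questions drag their weight down, so the pooled estimate sits much closer to the better-calibrated experts than the simple average does.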

These approaches have been around for a while but appear to have been used only rarely in ecology and conservation. Given how often expert opinion is used in conservation, it is important that we think hard about how reliable it actually is. It will never be perfect, but it can be better. This work is a step in the right direction.

In praise of negative results

The recent article in Nature on bias in research got me thinking again about an old chestnut: publication bias. It’s everywhere. It is a particular problem with negative results, where treatment and control groups don’t show statistically significant differences. People don’t publish them as often, and when they do, they tend to end up in lower-impact journals. This is widely known as the file drawer problem.

Why is this important? Well, put simply, without negative results we only get part of the picture of what is going on. This is a problem for all branches of ecology, but particularly for field-based work. For example, finding that management practice x did not significantly alter populations of species y compared with controls may not seem that exciting. However, if that result goes unpublished, and someone else investigating the same management elsewhere finds that it increases the population of species y and publishes that, there is a bias in the literature. This can give us a completely skewed perception of reality.

The problem is most acute when people are trying to summarise large areas of research using techniques like meta-analysis. In the hypothetical case of management practice x and species y from earlier, without including unpublished studies we could overestimate the average effects of management treatments. Although meta-analysis is great and I love what you can do with it, this is a fatal flaw.
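A toy simulation makes the point; nothing below comes from a real dataset, it simply shows how a “publish only if p < 0.05” filter inflates the apparent average effect:

```python
# Toy simulation of the bias described above: if only studies reaching p < 0.05
# are published, the published mean effect overestimates the true effect.
# Numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_effect = 0.2                 # small true difference, treatment vs control
n_studies, n_per_group = 500, 20

all_effects, published = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    effect = treatment.mean() - control.mean()
    _, p = stats.ttest_ind(treatment, control)
    all_effects.append(effect)
    if p < 0.05:                  # the 'file drawer': only significant results appear
        published.append(effect)

print(f"true effect:            {true_effect}")
print(f"mean of all studies:    {np.mean(all_effects):.2f}")
print(f"mean of published only: {np.mean(published):.2f}")
```

In runs like this the published-only mean typically comes out several times larger than the true effect.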

The Centre for Evidence-Based Conservation (CEBC), the authority on systematic reviews and meta-analysis in ecology, recommends that researchers hunt for unpublished work to improve their analyses. While I agree that this is vastly preferable to including only studies from ISI journals, it still doesn’t solve the underlying problem. Contrary to the way scientists normally think, we should actually be encouraging the publication of negative results.

So how could we do this? A few journals dealing with applied subjects are already targeting the problem. The journal Restoration Ecology now has a section called “Set-backs and Surprises”, which explicitly aims to publish negative results. As Richard Hobbs says in his editorial for the journal, these results are just as important as hearing about projects that have worked. The website Conservation Evidence also aims to publish the results, negative or positive, of conservation management in short, easy-to-understand articles. This approach should become more widespread beyond these fields. Synthesis of results is important for testing theory, and the more information we have to test theories with, the better.

Some people will undoubtedly read this and say “Hang on a minute! Surely positive results indicate good study design? We should only be considering the best research for testing theory or looking at the consequences of management.” Frankly, these people can jump off a cliff. Yes, studies with near-infinite sample sizes will find a difference between group A and group B. However, these differences will be solely a product of sample size: ecological significance is not the same as statistical significance. Yes, some studies with smaller sample sizes will have noisier results, but we can account for this. The best test of a theory is to use as many different methodologies in as many different settings as possible; that is the true test of whether a theory holds. By excluding negative results we are, at best, slowing scientific progress. Given the pressures our natural world is facing, we do not have time for this.
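To see why statistical significance on its own can mislead, here is a trivial simulated example (arbitrary numbers) where an ecologically meaningless difference comes out as highly ‘significant’ purely because the sample is enormous:

```python
# Tiny illustration of the point above: with a big enough sample, even a
# negligible difference between groups becomes statistically 'significant'.
# The numbers are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 1_000_000
group_a = rng.normal(10.00, 1.0, n)
group_b = rng.normal(10.01, 1.0, n)   # a difference of 0.01: ecologically trivial

t, p = stats.ttest_ind(group_a, group_b)
print(f"difference in means: {group_b.mean() - group_a.mean():.4f}")
print(f"p-value: {p:.2e}")            # tiny p-value despite a meaningless effect
```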

Do you have any ideas how we could improve the biases in the ecological literature? How could we encourage publication of negative results, given they are generally perceived as less interesting?

Please feel free to leave any thoughts below.