# Are we in danger of underestimating biodiversity loss?

Almost all ecological research into the human impacts on biodiversity looks at changes after they have happened. To do this, researchers usually compare a site where some kind of disturbance has occurred to a nearby undisturbed site. This method is called space-for-time substitution. The assumption of this approach is that the only thing that differs between the sites is the disturbance. However, this assumption is often incorrect. Sites may have had very different biodiversity before any disturbance took place, which can lead to under- or over-estimates of biodiversity change as a result of human impacts. One consequence is that we aren’t really sure how tropical logging alters the composition of ecological communities. These problems are likely to be particularly acute when habitat fragmentation limits dispersal to some sites.

Until recently there had been little work examining how the results of space-for-time methods compare to those of methods that survey sites both before and after disturbances. However, last week an elegantly designed study was published in the Journal of Applied Ecology which aimed to examine just this in the context of logging in Brazil. The paper compared space-for-time methods to before-after-control-impact (BACI) methods. Critically, BACI studies measure biodiversity at sites at least once before the disturbance of interest takes place. Researchers then return to the sites and remeasure them after the disturbance. Importantly, both sites impacted by the disturbance and control sites are surveyed on both occasions. This allows researchers to disentangle the effects of disturbances from any differences between sites prior to disturbance – a key advantage over space-for-time methods.
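The logic of BACI boils down to a difference-in-differences calculation: the change at impacted sites minus the change at control sites. Here’s a minimal sketch in Python with invented numbers (real BACI analyses typically use mixed-effects models rather than simple means):

```python
# Minimal difference-in-differences sketch of the BACI logic.
# All numbers are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

# Species richness at each site, before and after logging
impact_before = [42, 38, 45]
impact_after = [30, 27, 33]
control_before = [40, 44, 41]
control_after = [38, 43, 40]

# Change at impacted sites minus change at control sites: this
# removes pre-existing differences between sites and any
# region-wide trend shared with the controls.
impact_change = mean(impact_after) - mean(impact_before)
control_change = mean(control_after) - mean(control_before)
baci_effect = impact_change - control_change

print(round(baci_effect, 2))  # → -10.33
```

Because the control change is subtracted, any pre-existing difference between sites and any region-wide trend shared with the controls drops out of the estimate – exactly the confounding that a space-for-time comparison cannot remove.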

The paper by Filipe França and colleagues examined the differences in results obtained for space-for-time and BACI methods when looking at changes in dung beetle biodiversity in tropical logged forests in Brazil. To do this they surveyed 34 locations in a logging concession, 29 of which were subsequently logged at a variety of intensities. The intensity of logging (the number of trees/volume of wood removed per hectare) is a very important determinant of the impact of logging on biodiversity and carbon (see previous posts on this here and here). They then went back and re-surveyed these locations one year later. From the data collected, they calculated changes in species richness, community composition and total biomass of dung beetles.

When comparing the two approaches, the paper found that BACI characterised changes in biodiversity significantly better than space-for-time methods. Critically, space-for-time methods underestimated the relationship between logging intensity and biodiversity loss: BACI estimated changes in species richness twice as severe as those from space-for-time (see Figure 1). BACI methods also consistently provided higher explanatory power and steeper slopes for the relationship between logging intensity and biodiversity loss.

So what does this mean for how we do applied ecology? I think it is clear that we need to employ BACI methods more often in the future. However, BACI comes with logistical and financial constraints. Firstly, it is virtually impossible to predict where disturbances are going to happen before they occur. As a result, França and colleagues think that if we want to carry out more BACI research in the future, we need to develop closer ties with practitioners. This will involve building relationships with logging and oil palm companies, as well as agricultural businesses and property developers. This may make some researchers uncomfortable, but we need to do it if we are to provide robust evidence for decision makers. Secondly, BACI studies take longer to carry out, so we need to convince those that hold the purse strings that they are worth investing in.

BACI is clearly something we should be using more often, but does this mean that the space-for-time approach is useless? Should we even be using space-for-time methods at all? I’m not being hyperbolic just to get some attention – some have argued that we should stop using chronosequences altogether because ecological succession is unpredictable. After momentarily going into a bit of a crisis when I read some papers on succession last year, I have come to a slightly different conclusion. Space-for-time substitution sometimes predicts temporal changes well, but sometimes it doesn’t. What we need is to work out when the use of space-for-time approaches is acceptable, and when it would be better to use temporal methods. Reviews have highlighted that as ecosystems increase in complexity, space-for-time methods become less useful for monitoring changes in biodiversity. For example, large local species pools mean that post-disturbance colonisation may be very variable between sites. This problem is compounded in fragmented landscapes where there are barriers to the dispersal of seeds and animals. Every additional layer of complexity makes post-disturbance dynamics more and more difficult to predict. Ultimately, the best way to address this problem is through some kind of synthesis.

Working out when space-for-time approaches are useful and when they are not is not something we are going to solve overnight. Before we can review the evidence, we need some evidence in the first place. This is part of the reason why papers like the one by França and colleagues that I’ve discussed here are vitally important. So next time you think about designing a study, see if you can assess how the results from temporal methods compare to those from space-for-time methods. The results might just take you by surprise.

Filipe França & Hannah Griffiths have written a great post on the Journal of Applied Ecology blog going into more detail about the implications of their study. I strongly recommend you give it a look.

# Local species richness may be declining after all

Recently two papers seemed to turn what we thought we knew about changes in biodiversity on its head. These papers by Vellend et al. and Dornelas et al. collated data from multiple sources and suggested that species richness at local scales is not currently declining. This was counter-intuitive because we all know that species are going extinct at unprecedented rates. However, it is possible that the introduction of non-native species and the recovery of previously cultivated areas may offset extinctions, leading to relatively little net change in local species richness.

This week a paper has been published that calls these findings into question. The paper by Andy Gonzalez and colleagues, published in the journal Ecology, suggests that there are three major flaws with the analyses. These flaws mean that the question ‘Is local-scale species richness declining?’ currently remains unanswered – and, with the available data, unanswerable.

The papers of Vellend et al. and Dornelas et al. were meta-analyses of previously published papers. One issue with meta-analysis is that it is very prone to bias. As with any study, if the samples (in this case ecological studies) are not representative of the population (in this case locations around the globe) then any results will be flawed. To test the representativeness of the datasets used by Vellend and Dornelas, Gonzalez et al. examined how well they represented biodiversity and threats to biodiversity. This analysis (see below) showed that the papers were representative of neither biodiversity nor the threats it faces (though curiously, the dataset of Dornelas et al. overrepresented areas heavily affected by human activity).

The paper also suggests that using short time series can underestimate losses. By analysing the effect of study duration and changes in species richness (see below) Gonzalez et al. claim that increases in study duration were correlated with a decline in species richness. This supports previous theory which suggests that there is often a time lag between disturbance events and species extinctions – termed ‘extinction debt.’ However, I’d be intrigued to see the results of removing the studies with the longest duration from this analysis since the authors admit that the analysis is sensitive to their inclusion. I’ve seen recent similar work that suggests the same kind of relationship might be seen for studies monitoring individual animal populations.

Thirdly, Gonzalez et al. assert that including studies in which ecosystems were recovering from disturbance (e.g. regrowth on former agricultural fields), without taking into account the historical losses that occurred during or after the disturbance, biases estimates of change. The paper by Vellend et al. in particular combined studies of the immediate response of biodiversity to disturbances such as fire and grazing with studies of recovery from the very same disturbances. Gonzalez et al. show that once studies of recovering systems are removed from Vellend et al.’s analysis there is a negative trend in species richness changes.

The biases prevalent in the Vellend and Dornelas papers led Gonzalez et al. to suggest that neither paper can draw conclusions about net changes in local species richness at a global scale. However, they note that the results of Dornelas and Vellend are in sharp contrast to other syntheses of biodiversity change that used undisturbed reference sites, such as those by Newbold et al. and Murphy and Romanuk, which reported average species richness losses of 14% and 18% respectively.

In their conclusion Gonzalez et al. suggest that though meta-analysis is a powerful tool, it needs to be used with great care. Or to put it another way, with great power comes great responsibility. As someone who regularly uses meta-analysis to form generalisations about how nature works I completely agree with this statement. Traditionally scientists have used funnel plots (graphs with study sample size on the y-axis and effect size on the x-axis) to identify biases in their analyses. I’ve always been skeptical of this approach, especially in ecology where there is always a large amount of variation between sites. In the future syntheses would do well to follow the advice of Gonzalez et al. and really interrogate the data they are using to find any taxonomic, geographic, climatic or any other biases that might limit their ability to generalise. I know it’s something I’ll be taking more seriously in the future.

Gonzalez et al. also point out that most ecological research is carried out in Europe and North America. If we want to monitor biodiversity we need to increase efforts in biodiverse tropical regions, as well as boreal forests, tundra and deserts. We need to identify where these gaps most need filling, and then relevant organisations need to prioritise efforts to carry out monitoring. I am positive that this can be achieved, but it will cost a lot of money, needs to be highlighted as a priority and will need a lot of political good will. Even with this effort some of the gaps in biodiverse regions, such as the Democratic Republic of Congo, will be extremely difficult to fill due to ongoing armed conflict.

My take-home message from this paper is that we need to be more careful about how we do synthesis. However, I also think that species richness isn’t the only metric that we should focus on when talking about biodiversity change. Studies have shown that measures of the traits of species present in a community are generally more useful for predicting changes in ecosystem function than just using species richness. Species richness is the iconic measure of biodiversity, but it probably isn’t the best. Ecologists should view species richness in the same way as doctors view a thermometer – it’s a useful tool but you still need to be able to monitor blood pressure, take biopsies and listen to a patient’s lungs before you diagnose them*.

*Thanks to Falko Bushke whose analogy I stole from a comment he made on my blog post here.

# “Like walking through an open cemetery”

“I have been working in human-modified tropical forests for the past 14 years, but seeing these fires first hand was devastating,” wrote Erika responding to one of my questions “The smell of wet soil was gone and I could only smell smoke…even the usual cacophony of forest sounds disappeared…it was like walking through an open cemetery.”

“Sorry, trying not to work weekends…not going very well though…Today I just learned that 9 of my 20 plots have burned.” 2 more plots. Aside from the wider situation, this was the stuff of a researcher’s nightmares.

Fires in Brazil reached record levels in 2015, with more than a quarter of a million separate fires recorded. However, these fires are not generally ‘natural’ – “Fires in the region always have a human ignition source.” Erika told me. “They are used in slash-and-burn agriculture, to clear pastures of weeds and also to burn downed timber in newly deforested areas.” This year’s strong El Niño has caused drier conditions than normal, making it “easier for agricultural fires to escape the targeted area and sweep through the forests.” Indonesia is facing a similar problem, where forests have been burned to clear space for new oil palm plantations, in what the Guardian’s George Monbiot has described as the ‘greatest environmental disaster of the 21st century – so far.’

When I queried why it matters that the forest is burning, Erika was clear about what the major issue is – the loss of unique biodiversity. “Every year over 100 new species are found in Amazonian forests. To see all this going up in smoke is a crime against humanity. It is a tragedy.”

“How are these fires likely to affect biodiversity?” I asked.

“The Amazon has not co-evolved with periodic fires…This means that Amazonian forests are not used to these events and…do not cope very well with it. In terms of plant communities, there is a sharp increase in the abundance of pioneer species, while high-wood density climax species disappear….Fires negatively affect…rare bird species, and the habitat specialists, such as the ant-following insectivores and the terrestrial gleaners. Overall, burned forests are significantly less diverse than their unburned counterparts.”

Amazonian forests that have burned repeatedly may eventually come to resemble more open savannahs and contain very different species to relatively undisturbed old-growth forest.

But it’s not just biodiversity that is affected by these fires – humans are as well. In Indonesia there were evacuations of children by the navy, although some of the children, according to reports, still died from breathing difficulties. In Brazil the fires have “affected many of the local people…who reported a number of respiratory problems, such as dry cough, difficulties in breathing, and sore throats,” according to Erika. “People had to spend days building fire breaks to protect their land, instead of directly working on their crops.” People working on these farms already have a tough life as it is, without having to worry whether their source of income will go up in smoke.

So what will happen to these forests in the future? Given time and, vitally, protection they can recover, but Erika thinks this is unlikely: “These burned forests may never recover. After the fire, several large trees die, creating a number of gaps in the forest canopy, through which more light and wind can reach the forest floor, making it drier and, as a consequence, more vulnerable to further fire events.”

The research Erika and her team are carrying out will help to answer the question of how burned forests recover but it is obvious that degraded forests, such as these, need to be seen as a greater conservation priority. More than 50% of the globe’s forests are degraded in one way or another. We cannot afford to only protect primary forests anymore.

Edit: I got an email from Erika a bit ago after I asked her what the best solution would be. I thought I should include it here:

“Funnily enough there are already quite a few good policies in place. The problem is that none is followed. For example, every year there is a ‘burning calendar’ establishing when farmers can use fire to burn their pastures or their croplands. During the peak of the dry season, the use of fire is forbidden. In 2015, given the extreme drought, some states even extended the prohibitive period. So all quite reasonable and good, right? The problem is that no one follows these rules and there is no law enforcement in place. So people carry on business as usual and the forests carry on burning. To put in practice the existing laws would be the best solution.”

If you want to read more about the situation in Brazil take a look at the excellent article Erika has written for ‘The Conversation.’

There are also a pair of videos that Erika’s team have made documenting the fires that you can see here and here.

# Beta-diversity – What is it good for?

A while ago I wrote a post asking whether everyone’s favourite measure of biodiversity, species richness, was useful. In it, I concluded that it is probably one of the bluntest, least informative measures of ecological communities we have and that we should try to use alternative metrics when possible. Recently, I started wondering about which other measures of biodiversity might be informative, and what they can be used for. And then a neat review of beta-diversity by Jacob Socolar (correction courtesy of James Gilroy on Twitter – thanks James!) and colleagues came out in Trends in Ecology and Evolution, so today I’ll focus on that, borrowing from some of their thoughts and hopefully adding some of my own along the way. At some point in the future I’ll write something about temporal changes in ecological communities at individual sites.

So, firstly what do I mean by beta-diversity? Beta-diversity broadly reflects the differences in community composition between sites.  Gamma diversity (regional diversity) is a product of both beta- and alpha-diversity (diversity at a single site). And there are lots* of different ways of measuring beta-diversity. The simplest metric for beta-diversity is termed ‘true beta-diversity’ and was defined by Whittaker in 1960 as:

$\beta = \frac{\gamma}{\alpha}$

This metric is perhaps the easiest to interpret, but it requires a reliable estimate of gamma diversity, so it may be difficult to use in practice. It does, however, allow the relationship between alpha and gamma diversity to be investigated. Other measures are based on dissimilarity matrices, quantifying pairwise differences between sites. These metrics can then be used to look at the drivers of those differences, such as the geographic distance and environmental differences between sites. However, dissimilarity-matrix methods don’t allow the relationship between alpha and gamma diversity to be investigated. All of this complexity probably explains the ubiquity of species richness as a metric in ecology – we can all (more-or-less) agree on what it means.
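For what it’s worth, Whittaker’s ‘true’ beta-diversity is trivial to compute once you have species lists. Here’s a minimal sketch in Python with invented communities, alongside a pairwise Jaccard dissimilarity of the kind a dissimilarity-matrix approach would use (the site names and species are made up for illustration):

```python
# Whittaker's multiplicative beta-diversity (beta = gamma / mean alpha)
# and a pairwise Jaccard dissimilarity, using invented species lists.

sites = {
    "site_a": {"sp1", "sp2", "sp3", "sp4"},
    "site_b": {"sp3", "sp4", "sp5", "sp6"},
    "site_c": {"sp1", "sp6", "sp7", "sp8"},
}

gamma = len(set.union(*sites.values()))           # regional richness
mean_alpha = sum(len(s) for s in sites.values()) / len(sites)
beta_whittaker = gamma / mean_alpha               # 1 = all sites identical

def jaccard_dissimilarity(a, b):
    # 0 = identical composition, 1 = no shared species
    return 1 - len(a & b) / len(a | b)

print(beta_whittaker)  # 8 species regionally / mean alpha of 4 → 2.0
print(jaccard_dissimilarity(sites["site_a"], sites["site_b"]))
```

Notice that the Whittaker value needs the full regional pool (gamma), whereas the Jaccard value only needs the two sites being compared – which is exactly the trade-off described above.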

Changes in beta-diversity when humans alter natural landscapes can be unpredictable. When human disturbances are patchy, such as in the case of selective logging, beta diversity has been shown to be stable or increase due to an influx of generalist species in forest gaps.

In contrast, when human land-use change results in the conversion of natural ecosystems to a relatively homogeneous system in which only a small subset of species can survive, beta-diversity tends to decrease. Examples of such drivers include agricultural conversion and urbanisation. However, even high-intensity farming can result in an increase in beta-diversity, particularly if species populations decrease, leading to greater dissimilarity purely as a result of random processes. In summary, the response of beta-diversity to anthropogenic change appears to be relatively idiosyncratic.

All of this is well and good, but what use is beta-diversity to practical conservation? At first inspection, this is not clear. The general perception of species richness is that more species = better**. Does higher beta-diversity = better? Well, no, not necessarily. Given that the aims of conservation vary from place to place, it is not surprising that how beta-diversity can be used also varies.

The most obvious use of beta-diversity is in the spatial planning of protected areas. In landscapes which show a high spatial turnover of species, managers might favour numerous distinct reserves to capture this variation. However, in a landscape in which beta-diversity results from differences in species richness, a single protected area might be favoured. Also, if a natural ecosystem is particularly distinct from other candidate sites, it may be considered a priority for protection.

High beta-diversity can also result from dispersal limitation in a landscape. For example, in secondary forests in fragmented landscapes, plants with wind-dispersed seeds may colonise sites more readily than animal-dispersed species whose dispersers may not cross non-forest areas. So where beta-diversity amongst patches of a similar habitat in a fragmented landscape is high, this may point to the need for restoration to increase connectivity. Successful restoration may result in a decrease in beta-diversity as dispersal between patches increases. For example, Renata Pardini’s work has shown that the small mammal communities of more highly connected fragments of Atlantic forest are more similar to other patches than those of unconnected fragments. However, as far as I know, there is relatively little empirical evidence that restoration has similar effects.

In the paper I mentioned earlier, Jacob Socolar and colleagues suggest that beta-diversity may also be useful in informing the land-sharing vs land-sparing debate (which I have previously written about here, here and here). They argue that the use of beta-diversity in this debate may show that heterogeneous landscapes that include agri-environment schemes, management of natural systems and high-intensity agriculture are better at maintaining alpha-, beta- and gamma-diversity. Thus, the incorporation of metrics other than species’ population sizes, the classic approach for such comparisons, may produce different conclusions from current studies, which largely favour land-sparing. As always with conservation, this depends on what you think we should try to protect. Should we focus on particular species? Or should we attempt to conserve the processes that maintain coarse-scale diversity?

For me, the key point the paper makes is that even though two recent high-profile studies suggested local-scale alpha-diversity is relatively constant***, global-scale gamma-diversity is declining. This suggests that rare species are getting rarer while common species are increasing in abundance. If we can work out how and why beta-diversity responds to land-use changes, we can better understand how to conserve gamma-diversity. However, before we do that we need to develop methods to upscale from alpha to gamma diversity and to determine how different disturbances alter beta-diversity. Novel approaches offer the potential to solve this problem, but substantial testing is needed to determine how useful they are.

*Patricia Koleff identified 24 metrics for use with presence-absence data and my old CEH office mate Louise Barwell tested 29 different beta-diversity metrics that incorporate abundance data. Give both of these papers a read, they’re well worth your time.

**I don’t agree with this perception, I’m just extrapolating based on things I have heard from a few people. Deeply unscientific, I know.

***I saw Andrew Gonzalez present some work on the problems of these two studies at the 2015 British Ecological Society annual meeting and hope to post something when the paper comes out. I can’t say much, but it was fascinating stuff.

# Tropical deforestation causes dramatic biotic homogenisation

Although species richness is most ecologists’ go-to metric to ‘take the temperature’ of an ecosystem, it is not always the most useful. Even when species richness doesn’t change much over time, many species may be added to or lost from a community. Changes in human land use can cause the loss of particular taxonomic or functional groups, which can have important implications for ecosystem processes such as pollination or seed dispersal. This non-random loss of species as a result of human impacts can result in biotic homogenisation – where the communities in different locations become more similar to each other. Biotic homogenisation has been seen all over the world in response to drivers like urbanisation, agricultural land-use change, and eutrophication.

However, until recently there had been little work on how biotic homogenisation affects multiple taxonomic groups across landscapes, and almost all of it had been carried out at a single spatial scale – despite taxonomic groups being likely to differ in their responses to disturbance, and landscape-scale processes potentially playing a critical role in species persistence. Fortunately, last week a paper was published by Ricardo (aka Bob) Solar and colleagues in Ecology Letters that attempted to fill these knowledge gaps.

Specifically the paper attempted to determine how much of the change in community composition as a result of changes in tropical forest land-use change were attributable to replacement of species (termed turnover) and loss of species (termed nestedness). Bob and his colleagues did this for birds, dung beetles, plants, orchid bees and ants at 335 sites (!) in 36 different landscapes in 2 regions of Brazil. The sites used were either primary forest experiencing varying degrees of human disturbance, secondary forests, cattle pasture or arable farmland.

In short the paper shows that:

• Species richness decreases as land-use intensity increases
• Differences in community composition between deforested sites were much lower than for forested areas
• Species turnover caused the majority of changes in community composition, but loss of species became more important as the intensity of disturbance increased

For me, the most interesting message of the paper is that changes in community composition were largely attributable to the replacement of species. This suggests that as species are lost following disturbance, colonisation by generalist species initially causes relatively little change in species richness. However, as land-use intensity increases, the contribution of species loss to changes in community composition becomes more important, suggesting that communities in these locations tend to be made up of generalist species that are tolerant of human disturbance.
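For readers who want the mechanics: a turnover/nestedness split of this kind can be illustrated with Baselga’s partition of pairwise Sørensen dissimilarity into a Simpson (turnover) component and a nestedness-resultant component. This sketch uses invented species lists, and I’m assuming a Baselga-style partition here, as the post doesn’t spell out the exact method used in the paper:

```python
# Baselga-style partition of pairwise Sorensen dissimilarity into a
# turnover (Simpson) component and a nestedness-resultant component.
# Species lists are invented for illustration.

def partition_beta(site1, site2):
    a = len(site1 & site2)             # shared species
    b = len(site1 - site2)             # unique to site1
    c = len(site2 - site1)             # unique to site2
    sor = (b + c) / (2 * a + b + c)    # total Sorensen dissimilarity
    sim = min(b, c) / (a + min(b, c))  # turnover (species replacement)
    sne = sor - sim                    # nestedness-resultant component
    return sor, sim, sne

forest = {"sp1", "sp2", "sp3", "sp4", "sp5"}
pasture = {"sp1", "sp2"}               # strict subset of the forest community

sor, sim, sne = partition_beta(forest, pasture)
print(sim)  # 0.0: no replacement, all dissimilarity is due to nestedness
```

In this example the deforested community is a strict subset of the forest one, so the turnover component is zero and all of the dissimilarity is nestedness; swap in two sites with no shared species and the whole of the Sørensen value becomes turnover instead.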

Interestingly, the paper also shows that, provided forest cover is maintained, there was relatively little biotic homogenisation. So while it is obvious from previous work that the maintenance of undisturbed forests is vital for conserving tropical forest biodiversity, it is also clear that degraded forest can play an important role in conservation. This is especially true where few undisturbed forests still exist or where degraded forest is widespread, such as in SE Asia and Central America.

This work effectively shows that taxonomic homogenisation is occurring at multiple scales as a result of human land-use change. The next step is to see what types of species are being lost/retained. This means looking at the interaction between species traits and the land-use gradient (see more on that here). Previous work has suggested that body size and feeding preferences may play an important role in determining whether bird species can persist in degraded forests. Looking at this will allow us to gain a greater understanding of how biodiversity change may alter ecosystem processes and ultimately the ecosystem services on which we all depend.

# Is ecological succession predictable?

Over the last few years I have written quite a lot about forest succession. I have published a paper on the topic, have a paper in review about the recovery of a forest under multiple stressors and will be starting more work on it over the next few weeks. All in all, I think I have a reasonable idea what I’m talking about when it comes to succession, at least in forests. However, I’ve just read a paper on tropical forest succession that caught me a bit unawares*.

The paper in question is Natalia Norden and colleagues’ work that was recently published in PNAS. The authors collected data from 72 secondary forest plots monitored for 7-24 years at 7 different sites across tropical South and Central America. They then used these data to look at whether we can predict trajectories of plot stem density, basal area and species density during forest succession after total clearance. On the whole, the paper found that trajectories were poorly predicted by models that treated change as a function of forest age. From the figure below, you can pick out some general trends in the direction of change with age – stem density might have a humped relationship with age, for example. However, it is also clear that there is a huge amount of variation and some trajectories bounce around all over the place.

It’s obvious from looking at the figure above that the age of a secondary forest doesn’t really act as a proxy for its successional stage. In fact, Norden and colleagues found that on average age explains only 20% of within-site variation. Even if that is better than the average ecology paper, it’s still not very good. To explain the rates of change of different variables, Norden et al. fitted a set of different non-linear models for each site. Again, their findings emphasised the large amount of variation between different sites. Due to these idiosyncrasies, the authors see space-for-time substitution as a flawed method for predicting the dynamics of forests. They also suggest that such approaches should not be used for studies of succession in any sort of vegetation, arguing that previous work using these methods has made succession appear deterministic when it is not.
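To make the ‘age explains only ~20% of variation’ result concrete, here’s a toy illustration in Python: noisy, site-specific trajectories give a low R² even when there is a weak underlying trend. The data are invented, and I’m using a simple linear fit for brevity, whereas Norden et al. fitted non-linear models:

```python
# Toy illustration of why stand age can explain little variation:
# noisy trajectories yield a low R^2 even with a weak underlying trend.
# Data are invented; this is a simple least-squares linear fit, not the
# non-linear models used by Norden et al.

def linear_r2(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot  # proportion of variance explained by age

ages = [2, 5, 8, 12, 15, 20, 24, 30]                      # years
basal_area = [4.0, 2.5, 9.0, 5.5, 14.0, 7.0, 16.5, 9.5]   # m^2/ha, invented

print(round(linear_r2(ages, basal_area), 2))
```

With this made-up series age explains well under half the variance, despite basal area loosely increasing with age; the within-site wobbles swamp the trend, which is the paper’s point.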

Now, I’m not sure how the numbers of studies that use chronosequences vs monitoring over time to study succession stack up, but I’d be willing to bet >80% of these papers use chronosequences, at least in forests. There are good reasons for using them: they take much less time than monitoring (especially in systems containing long-lived organisms), they are much less expensive, the logistics are less complex and, as a result of all of these things, they are easier to get funded than a 10-20 year research programme. Norden et al.’s warning against using chronosequences begs the question: “Do we have other evidence of how well chronosequences perform?” The answer is that we do, and it doesn’t look too good for chronosequences. For example, Ted Feldpausch and colleagues found that space-for-time substitution resulted in overestimates of biomass accumulation for young secondary forests in the Amazon. Recently, Mora and colleagues similarly suggested that chronosequences were poor predictors of forest characteristics.

So, is the chronosequence dead? Well, maybe not just yet. However, I think as researchers we need to be more circumspect about their use. In particular I think there are 4 questions that we need to answer to get a more well rounded view of the usefulness of chronosequences:

1. How much variation in future dynamics do they actually predict? – Chronosequences are far from perfect, but they still offer us some insight into future dynamics. Mora et al. showed that chronosequences can still account for 32-57% of the variance in future forest characteristics. There must be a reasonably large number of chronosequences that have been sampled more than once that could be used to test their predictive ability. We need more studies that address this head on. If it turns out that they are very poor at explaining future dynamics, then maybe it is time to switch to better methods.
2. What variables do they predict most effectively? – Structural components of a system (biomass, stem density, etc.) should be easier to predict than community composition, since changes in structure are less likely to depend on idiosyncrasies such as the identity of the initial colonising species. However, again, this has been tested relatively rarely.
3. Do chronosequences have more predictive power in some systems than others? – Predictive power should be greatest when abiotic conditions are relatively constant across a landscape, disturbance history at all sites is relatively similar and in regions with relatively small species pools. Under all of these conditions there should be less chance of wildly different successional trajectories occurring.
4. Where do animals fit into all this? – Predicting animal abundance and community composition is rarely attempted with chronosequences, probably because animals’ response to succession is much less predictable than that of plant communities. Even though chronosequences are likely to perform relatively poorly here, a comparison of their predictive ability for animal versus plant communities would be interesting.
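Question 1 could, in principle, be answered with a very simple calculation wherever a chronosequence has been resampled: compare the values the fitted chronosequence curve predicted for each site against the values actually measured at the later survey. A minimal sketch of that test in Python, with entirely made-up biomass numbers:

```python
def variance_explained(predicted, observed):
    """R^2 of space-for-time predictions against values measured
    when the same sites were actually resampled later."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for p, o in zip(predicted, observed))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Hypothetical biomass values (Mg/ha) for five resampled plots
predicted = [40, 55, 62, 70, 80]  # from a chronosequence-fitted curve
observed = [35, 60, 58, 75, 90]   # measured at the later survey
print(round(variance_explained(predicted, observed), 2))  # 0.89
```

Results in the 32–57% range reported by Mora et al. would correspond to a value between 0.32 and 0.57 here; the point is simply that the test is cheap wherever repeat surveys already exist.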

What do you think? Are there any other questions we need to answer to determine the value of chronosequences? Or do you have any views on the use of chronosequences in non-forest systems?

*To be fair, this probably shouldn’t have been that much of a surprise: review papers have been suggesting for a while that chronosequences are far from the best way to do things. That said, there are also papers suggesting that careful use of chronosequences is perfectly OK.

# Just what is resilience, anyway?

Last week I organised a workshop bringing together researchers interested in resilience from across the Biodiversity and Ecosystem Service Sustainability (BESS) programme run by NERC. I’ll write about things that came out of it over the next few weeks. Here is my first missive.

“Is it real? Is it an obscure object of desire?” my boss Adrian asked during our workshop. Given that nearly 20 years ago there were already 163 different definitions of ecosystem resilience, it is perhaps no wonder that we were having a few problems. Part of the problem is that resilience is a boundary object – a term that is interpreted differently by different communities. During our meeting it became clear, for example, that ecological researchers and policy-makers did not necessarily mean the same thing when they talked about resilience.

Generally, researchers see resilience as a property of a system. Being researchers, we want to quantify it. However, it turns out that resilience can’t really be viewed as a single thing, since it is made up of a number of different qualities. With the expert guidance of Volker Grimm, our workshop came up with three elements that are important when assessing resilience for research:

1. Recovery – The return of a variable to the reference state after a disturbance.
2. Resistance – A variable staying essentially unchanged despite disturbances.
3. Persistence – Persistence of the system over time.
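As an illustration of how these three elements pull apart, here is a toy sketch in Python (with invented numbers and thresholds, not anything agreed at the workshop) that scores each element for a single monitored variable, such as total biomass, sampled at regular intervals:

```python
def resilience_metrics(series, reference, disturbance_at, tol=0.1, floor=0.0):
    """Toy scores for the three elements of resilience for one state
    variable. `series` is the variable over time; the disturbance
    happens at index `disturbance_at`."""
    post = series[disturbance_at:]
    # Resistance: how little the variable departs from the reference
    # state at its worst point after the disturbance (1 = unchanged)
    resistance = 1 - max(abs(v - reference) for v in post) / reference
    # Recovery: surveys until the variable is back within `tol` of the
    # reference (None = not yet recovered)
    recovery_time = None
    for t, v in enumerate(post):
        if abs(v - reference) <= tol * reference:
            recovery_time = t
            break
    # Persistence: the variable never hits the floor (e.g. local
    # extinction) at any point in the series
    persistent = all(v > floor for v in series)
    return resistance, recovery_time, persistent

# Invented biomass series: disturbance at the third survey, then recovery
print(resilience_metrics([100, 100, 60, 70, 85, 95], reference=100,
                         disturbance_at=2))  # (0.6, 3, True)
```

The same time series can score well on one element and badly on another, which is exactly why a single number for “resilience” is hard to defend.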

Using these three properties allows researchers to look at different aspects of resilience and compare across systems. Making such comparisons is actually very difficult due to constraints on time and funding, as well as logistical problems. For example, to compare the resistance of different communities you would ideally apply different intensities of disturbance in different locations. This may be possible in some relatively ‘fast’ systems such as grasslands, but it is unlikely that you would get permission to do it in a woodland, where you might have to cut trees down. To get around this, mechanistic models can be exceptionally useful for investigating different scenarios of change. Combining them with empirical data collection in the same system can help us gain a more detailed understanding of resilience. This is something we are aiming to do in our current project as part of my post-doc work.

Policy makers on the other hand generally view resilience as a goal. Recently policy documents have begun to mention the importance of resilience. For example, one of the Convention on Biological Diversity’s 2020 aims is to:

> By 2020, ecosystem resilience and the contribution of biodiversity to carbon stocks has been enhanced, through conservation and restoration, including restoration of at least 15 per cent of degraded ecosystems, thereby contributing to climate change mitigation and adaptation and to combating desertification.

Similarly, the environment white paper published by the UK government in 2010 mentions resilience 36 times, and the Welsh government is aiming to create:

> A biodiverse natural environment with healthy functioning ecosystems that support social, economic and ecological resilience and the capacity to adapt to change.

It is also included in US and Australian policy. So in the case of policy-makers it becomes clear that resilience is seen as a target. While for researchers resilience can mean something very specific, policy-makers probably consider it to be closest to the definition of persistence given above. At our workshop there were plenty of anecdotes about policy-makers saying things like “resilience is the new sustainability” and telling civil servants to “stick some resilience in your report, it’s the new thing.” There were also reports that some policy-makers wanted maps of resilience produced. I think this is potentially dangerous. Given that the ratio of empirical work to conceptual stuff/reviews and perspectives pieces is about 1:1000, we simply don’t have enough evidence to produce these maps at the moment. If push came to shove we could probably come up with a best guess based on ecological theory, but even then there would be all sorts of caveats.

I think it’s clear we will never reach a point where there is one definition of resilience that fits everyone’s need. However, when we talk about resilience we need to be clearer about what we mean by it. So next time you use it in a paper, for the love of god, define it.

*Edit #1 – I just came across this nice post by Jeremy Fox on defining stability concepts in ecology, which might be a useful companion piece to what I said here.

*Edit #2 – Ambroise Baker who helped organise the workshop with me has a short summary of the meeting over on the Lake BESS blog, you can see that here.