This post was inspired by an amazing workshop given by Mark Burgman at the recent Student Conference on Conservation Science in Cambridge. I have done my best to convey what I learnt from it here, but this is not the final word on the issue.
It turns out experts aren’t necessarily all that good at estimation. They are often wrong and overconfident in their ability to get stuff right. This matters. A lot.
It matters because experts, particularly scientists, are often asked to predict something based on their knowledge of a subject. These predictions can be used to inform policy or other responses. The consequences of bad predictions can be dramatic.
For example, before the 2009 earthquake in L’Aquila, Italy, seismologists were asked by the media whether earthquakes in the area posed a threat to human life. They famously told reporters there was ‘no danger.’ They were wrong.
Not all cases are so dramatic, but apparently experts make these mistakes all the time. This has profound implications for conservation.
Expert opinion is used all the time in ecology and conservation, where empirical data are hard or impossible to collect. For example, the well-known IUCN Red List draws on large pools of expert knowledge to determine ranges and population sizes for species. If these estimates are badly inaccurate, then we have a problem.
Fortunately, there may be a solution.
This solution was first noticed by Francis Galton in 1906 at a country fair, where people were taking part in a contest to guess the weight of a prize ox. Of the 800 or so people who took part, nobody guessed the correct weight. However, the average of the guesses was closer to the true weight than the guesses of most individuals in the crowd, including most of the cattle experts. As a group, these non-experts outperformed the experts.
This phenomenon, often called the ‘wisdom of the crowd’, is now widely recognised.
Building on this, a technique called the Delphi method has been developed. It aims to improve people’s estimates by having them make an initial estimate, discuss it with the other members of their assigned group, and then make a second estimate. You then take the mean estimate of the group.
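To make that concrete, here is a minimal sketch of a Delphi-style exercise in Python. The expert names and numbers are entirely made up for illustration, and a real exercise would run the discuss-and-revise loop several times rather than once.

```python
from statistics import mean

# First-round estimates from a hypothetical group of four experts
# (say, population size of a species, in thousands of individuals).
round_1 = {"expert_a": 12.0, "expert_b": 30.0, "expert_c": 18.0, "expert_d": 25.0}

# After group discussion, each expert submits a revised estimate.
round_2 = {"expert_a": 16.0, "expert_b": 24.0, "expert_c": 19.0, "expert_d": 22.0}

# The Delphi output is simply the group mean of the final round.
group_estimate = mean(round_2.values())
print(f"Group estimate: {group_estimate:.1f}")  # 20.2
```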
Mark Burgman and colleagues have developed a modified version of the technique. Each person gives a best estimate, the highest reasonable value for that estimate, the lowest reasonable value, and a measure of their confidence (50–100%) that their limits contain the true value. The group then discusses the judgements, everyone revises their estimates, and these revised values are used to derive a group mean. This can be repeated over several rounds, and estimates seem to improve with more iterations.
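Here is a minimal sketch of how those four numbers might be combined. It assumes a simple linear rescaling so that every expert’s interval is expressed at the same confidence level (90% here) before averaging; the data, the 90% target, and the `standardise` helper are my own illustration, not a prescribed implementation.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Judgement:
    lowest: float      # lowest reasonable value
    best: float        # best estimate
    highest: float     # highest reasonable value
    confidence: float  # stated chance (0.5-1.0) the interval contains the truth

def standardise(j: Judgement, target: float = 0.9) -> tuple[float, float]:
    """Linearly stretch or shrink an interval around the best estimate
    so all experts are expressed at the same confidence level."""
    scale = target / j.confidence
    return (j.best - (j.best - j.lowest) * scale,
            j.best + (j.highest - j.best) * scale)

# Hypothetical final-round judgements from three experts.
judgements = [
    Judgement(lowest=10, best=18, highest=30, confidence=0.8),
    Judgement(lowest=15, best=22, highest=26, confidence=0.6),
    Judgement(lowest=12, best=20, highest=35, confidence=0.9),
]

intervals = [standardise(j) for j in judgements]
group_best = mean(j.best for j in judgements)
group_low = mean(lo for lo, _ in intervals)
group_high = mean(hi for _, hi in intervals)
print(f"Group estimate: {group_best:.1f} ({group_low:.1f}-{group_high:.1f})")
```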
I think this is a great idea, but you can take it even further. You can pose a series of questions, some of which you already know the answer to. Using respondents’ answers to these calibration questions, you can measure how expert your experts actually are, and then weight each person’s estimates by the confidence you have in them, as in the sketch below.
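A minimal sketch of that weighting step, again with made-up numbers: each expert’s weight comes from their accuracy on the calibration questions (here, crudely, the fraction of their calibration intervals that contained the known true value), and the group answer is the weight-adjusted mean. Real calibration scores are more sophisticated, but the idea is the same.

```python
def calibration_weight(intervals, truths):
    """Fraction of an expert's calibration intervals that contained the
    known true value (a crude stand-in for a proper calibration score)."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

# Hypothetical calibration questions with known answers.
truths = [14.0, 52.0, 7.5]

# Each expert's (lowest, highest) intervals for the calibration questions,
# plus their best estimate for the real question of interest.
experts = {
    "expert_a": {"calib": [(10, 20), (40, 60), (5, 9)], "estimate": 18.0},
    "expert_b": {"calib": [(13, 15), (30, 40), (8, 12)], "estimate": 25.0},
    "expert_c": {"calib": [(8, 16), (45, 70), (6, 10)], "estimate": 21.0},
}

weights = {name: calibration_weight(e["calib"], truths) for name, e in experts.items()}
total = sum(weights.values())
group_estimate = sum(weights[n] * experts[n]["estimate"] for n in experts) / total
print(f"Weights: {weights}")
print(f"Calibration-weighted estimate: {group_estimate:.1f}")
```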
This idea is pretty similar to meta-analysis: we give more weight to the estimates we are more confident about.
These approaches have been around for a while, yet they appear to have been used only rarely in ecology and conservation. Given how often expert opinion is relied on in conservation, it is important that we think hard about how reliable it actually is. It will never be perfect, but it can be better. This work is a step in the right direction.