Fear of commitment

This picture has nothing to do with the post – apart from the title.

Last week I went to a fantastic workshop run by the British Ecological Society. This blog post has almost nothing to do with the workshop itself, but concerns a conversation I had in the pub afterwards. I will not name any of the people involved since it doesn’t seem fair.

After the workshop we went to an agreeable pub that only slightly resembled those from the film “The World’s End.” On the plus side it had a reasonable selection of drinks, though you did have to re-mortgage your house to buy one.

While there I was discussing post-PhD career options with people. As far as I remember the conversation went a bit like this:

Me: “The lack of job security in academia scares me a bit.”

Person A : “Yeah it’s tough, I worked abroad for a post-doc and my wife and I found it really hard to live in a place I didn’t like to further my career”

Me: “For me it’s more the knowledge that most PhD students will never get a permanent academic position and that we might have to work 60 or 70 hour weeks for the best part of 10 years to get close. Personally that’s why I think it’s always worth having a ‘Plan B.’”

Person B: “But if you have a ‘Plan B’ you might not fully commit to the job. I mean look at [name redacted] it’s inspiring, she lives, eats and breathes science. That’s her life.”

Now, this last bit stuck with me.

Is that what people think?

That to be a committed scientist means you neglect all other things in your life? That you shouldn’t have children or even hobbies because they get in the way? That without that you shouldn’t be doing the job?

There was a lot of talk at the workshop about the fact that a career break to have a family would not be seen by funding agencies as a bad thing. That’s great in principle, but if other people hold the same attitude as ‘Person B’, what’s the point? Having a child is surely more of a distraction than my thinking that I might want to switch careers at some point in the future? If these people sit on funding boards, we should just give up talking about diversity in academia because it’s never going to happen.

My lack of confidence that I will get a permanent job in academia is not unfounded; it comes from personal experience and, more importantly given that I am allegedly a scientist, from real statistical evidence.

I have a few friends who are now well into their scientific careers and have about 10 years of post-doc experience. That puts them in their late 30s. These friends have young families and have moved countries a few times now. They have all published well and have Nature papers between them. Yet they are facing the prospect of being ‘retired’ by science because they can’t get a permanent position and are too expensive to hire as post-docs because of the amount of experience they have.

Getting out of science when you’ve reached that point is much more traumatic than doing it early on. Maybe a ‘Plan B’ would have been a good idea for these friends. What they’re going through is really fucking hard.

Reading other blogs has made me acutely aware that this is not a problem unique to my friends. The proportion of PhD students that get a permanent position has been decreasing as the number of PhDs rises. Because most of the people who run things – invariably white, middle-aged men – got their jobs at a time when it was easier to get a permanent position, they don’t recognise this problem. Better people than me have said previously that PhD training should acknowledge this and emphasise the ways in which our skills can be used outside of traditional academic roles – that is, after all, where most of us will end up, whether we like it or not.

 

Sorry if that was a bit of a rambly blog post this week, but I needed to get this off my chest. Please respond with your thoughts. Do you have experience of switching to a non-research job after a PhD? Or have you found it easy to hold down a job in research whilst having a satisfying life outside of work? Either way, let me know.


How can we value the studies used in meta-analysis?

Not by doing this to primary researchers. So let’s change things. Photo credit to Killer Cars on flickr.

I signed a letter this week asking ISI, Google Scholar and Scopus to recognise the articles used in meta-analyses as if they were regular citations. I, and many other people who use data we haven’t collected ourselves, feel that those who did the primary research are not being fully recognised or given enough credit. Research is ranked by citations, and it is perverse to award someone a gold star for a citation that may support a single statement, but not for supplying the data that forms the basis of an entire study.

I agree with what I signed, but even if the letter is successful it will take a while to implement.

My question is: What should I do about the problem that will make a difference now?

As far as I can see I have three options, none of them perfect:

  1. Continue as before, ignoring this issue
  2. Cite papers I used in the main text so credit is given to primary researchers
  3. Offer co-authorship to those authors that provided me with data

Really I don’t know which is best.

The first would be the easiest to do and I’m sure many researchers will continue to do this – their lives are already complicated enough. I’m not really happy doing that though – it undermines valuable work by people in the field, without whom I wouldn’t have a job.

The second, for me, will never really work. A meta-analysis I recently carried out has >80 papers as data sources. I couldn’t cite all of these in the main text unless I wanted a reference list of >100 papers, and that kind of thing doesn’t make publishers happy.

The third seems to me like a good compromise, but is the most difficult to do. For example, I am currently working on something that uses the data of others and has potentially controversial conclusions. What do I do in this case? Do I offer people co-authorship, even though they may well disagree with me about my analysis and conclusions?

It’s been running through my head for a while now and I’d like to get a few opinions from others about this. If you think any of these options is particularly appealing, tell me. Or do you have other ways to fix this problem in the short term? What would you do?

Whatever your thoughts, give me some feedback so I can work out the best path to take and please sign the open letter.

Guns, birds and squirrels

Tropical forests are getting ever more fragmented, the human population in the tropics is increasing and guns are now widely available.

All of this has led to an explosion in the number of people hunting for food in the tropics.

This hunting can cause local extinction of bird and mammal species, with large-bodied species being particularly at risk. This can lead to the loss of species that eat fruit and therefore act as dispersers of plant seeds.

This dispersal is important since it means that seeds are spread widely around the forest, rather than just being concentrated in small areas around their parent plant. However, species that eat seeds, damaging them so that they cannot germinate, may also be lost as a result of hunting. The balance between the losses of these two types of species will determine the overall effect of hunting on plant reproduction.

In general, it appears that the loss of animal species, particularly larger ones, as a result of hunting tends to lead to an increase in the abundance of plant species which don’t require animals for dispersal. However, the results of these studies can sometimes be unclear due to a lack of replication and because they have tended to cover relatively short periods of time.

A new study in Ecology Letters aims to tidy up our view of how hunting affects plant species. This work studied the dynamics of Lambir forest in Malaysian Borneo, which looks like this:

Lambir forest
photo credit: berniedup on flickr

Though it looks nice, this forest has been hunted for over 15 years, and this has caused the local extinction of seed-dispersing species like the white-crested hornbill, which looks like this:

White crested hornbill
photo credit: berniedup on flickr

as well as the red giant flying squirrel, which looks like this:

Red giant flying squirrel
photo credit: vil.sandi on flickr

However, seed predators such as the sambar deer have also become locally extinct:

Sambar deer
photo credit: Smithsonian Wild on flickr

This situation mirrors that of other study sites and makes it hard to determine how hunting will affect plant biodiversity.

For their study, Rhett Harrison and colleagues investigated changes in the diversity and distribution of plant species in Lambir by monitoring nearly 500,000 (!) individual trees. They found that the density of seedlings tended to increase – suggesting a reduction in seed predation as well as a reduction in dispersal by animals. They also found that tree species richness was reduced, though the reduction was relatively modest.

Figure 1 - richness and seedlings
Changes in the number of seedlings (a) and tree species richness (b) during the study period. Error bars are 95% confidence intervals.

Most interestingly the study also suggests that plant species that need animals to disperse their seed tended to become relatively more clustered than species which didn’t rely on animals.

Figure 2- seed mode
Degree of clustering by dispersal mode during study. Lines around dots represent 95% confidence intervals.

All of these results suggest that hunting can have marked effects on tropical forest plant biodiversity – in the long run leading to a potential decline in some animal dispersed species.

Reading this study reminded me of attempts to link the traits which determine a species’ probability of extinction with those which affect ecosystem functions and services. In this case, large body size is associated with a dietary preference for fruit or seeds – with obvious consequences for seed dispersal. What really sets this study apart is its length and size, which make it the most precise study of its kind. Linking these traits will allow us to generalise about the ecology of hunting in tropical forests, but this is only part of the solution.

Large areas of South East Asia, West Africa and the Atlantic forest in Brazil are facing similar pressures from hunting, so this phenomenon may be quite widespread. Though it is obviously less of a threat to biodiversity than deforestation and other more dramatic forms of degradation, the subtle effects of hunting may occur both inside and outside protected areas, going relatively unnoticed. To tackle this problem effectively we need to know the motivation for this hunting. Only then can we work out how to stop it.

In praise of negative results

The recent article in Nature on bias in research got me thinking again about an old chestnut: publication bias. It’s everywhere. It is a particular problem with negative results, where treatment and control groups don’t show statistically significant differences. People don’t publish them as often, and when they do they tend to end up in lower-impact journals. This is widely known as the file drawer problem.

Why is this important? Well, put simply, without negative results we are only getting part of the picture of what is going on. This is a problem for all branches of ecology, but particularly for field-based work. For example, finding that management practice x did not significantly alter populations of species y when compared to controls may not seem that exciting. However, if that result goes unpublished, and someone else investigates the same management elsewhere, finds that it increases the population of species y and goes on to publish this, there is a bias in the literature. This can give us a completely skewed perception of reality.

The problem is most acute when people are trying to summarise large areas of research using techniques like meta-analysis. In the hypothetical case of management practice x and species y from earlier, without including unpublished studies we could overestimate the average effects of management treatments. Although meta-analysis is great and I love what you can do with it, this is a fatal flaw.
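If you want a feel for how badly this can skew things, here is a rough sketch (entirely made-up numbers, just to illustrate the file drawer effect, not any real dataset) in which only the studies that reach p < 0.05 get ‘published’:

    # Toy simulation of the file drawer problem: if only statistically
    # significant studies are published, the average effect size in the
    # published literature overestimates the true effect.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    true_effect = 0.2      # small true difference between treatment and control
    n_per_group = 30       # sample size per group in each study
    n_studies = 1000       # number of hypothetical studies

    all_effects, published = [], []
    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(true_effect, 1.0, n_per_group)
        effect = treatment.mean() - control.mean()
        _, p_value = stats.ttest_ind(treatment, control)
        all_effects.append(effect)
        if p_value < 0.05:   # the "file drawer": only significant results get out
            published.append(effect)

    print(f"True effect:                 {true_effect:.2f}")
    print(f"Mean effect, all studies:    {np.mean(all_effects):.2f}")
    print(f"Mean effect, published only: {np.mean(published):.2f}")

With a small true effect and modest sample sizes, the ‘published’ studies are mostly the ones that got lucky, so the pooled estimate ends up well above the true effect.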

The Centre for Evidence-Based Conservation (CEBC), who are the authority on systematic reviews and meta-analysis in ecology, recommend that researchers hunt for unpublished work to improve their analyses. While I agree that this is vastly preferable to including only studies from ISI journals, it still doesn’t solve the underlying problem. Contrary to the way scientists normally think, we should actually be encouraging the publication of negative results.

So how could we do this? A few journals dealing with applied subjects are already targeting the problem. The journal Restoration Ecology now has a section called “Set-backs and Surprises” which explicitly aims to publish negative results. As Richard Hobbs says in his editorial for the journal, these results are just as important as hearing about projects which have worked. The website Conservation Evidence also aims to publish the results, negative or positive, of conservation management in short, easy-to-understand articles. This approach should become more widespread beyond these applied areas. Synthesis of results is important for testing theory, and the more information we have to test those theories the better.

Some people will undoubtedly read this and say “Hang on a minute! Surely positive results indicate good study design? We should only be considering the best research for testing theory or looking at the consequences of management.” Frankly these people can jump off a cliff. Yes, studies with near-infinite sample sizes will find a difference between group A and group B. However, those differences may be solely a product of sample size: ecological significance is not the same as statistical significance. Yes, some studies with smaller sample sizes will have noisier results, but we can account for that. The best way to test a theory is to use as many different methodologies in as many different settings as possible. That is the true test of whether a theory fits. By excluding negative results we are, at best, slowing scientific progress. Given the pressures our natural world is facing, we do not have time for this.
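To make that point concrete, here is a quick toy example (made-up numbers again, nothing from any real study) showing how a huge sample size can make a biologically trivial difference ‘statistically significant’:

    # Toy illustration that statistical significance is not ecological significance:
    # with a very large sample, a trivially small difference between two groups
    # still comes out "significant" at p < 0.05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    tiny_effect = 0.02     # trivially small true difference between groups
    n = 100_000            # very large sample size per group

    group_a = rng.normal(0.0, 1.0, n)
    group_b = rng.normal(tiny_effect, 1.0, n)

    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
    cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

    print(f"p-value:   {p_value:.2g}")    # usually far below 0.05
    print(f"Cohen's d: {cohens_d:.3f}")   # but the effect size is negligible

The p-value alone tells you almost nothing about whether the difference matters ecologically, which is exactly why small ‘negative’ studies still carry useful information.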

Do you have any ideas about how we could reduce the biases in the ecological literature? How could we encourage the publication of negative results, given that they are generally perceived as less interesting?

Please feel free to leave any thoughts below.