We have just released a list of frequently asked questions (FAQ) on E-SURGE, a software application developed by Rémi Choquet in our team. E-SURGE estimates demographic parameters from capture-recapture data. It relies on the multievent modeling framework developed by Roger Pradel to deal with uncertainty in the assignment of states to individuals. We will do our best to keep the list of applications updated. Do not hesitate to get back to us if you have any comments on the FAQ.
A new paper is out by Lucile Marescot, a former PhD student of the team, now a post-doc at UC Davis:
Marescot, L., Chapron, G., Chadès, I., Fackler, P., Duchamp, C., Marboutin, E. and O. Gimenez (2013). Complex decisions made simple: A primer on stochastic dynamic programming. Methods in Ecology and Evolution. In press. DOI: 10.1111/2041-210X.12082. PDF on request.
This review and tutorial paper (with R code) is about dynamic programming, a powerful mathematical technique for making decisions in the presence of uncertainty. This paper would not have seen the light of day without the help of Guillaume Chapron, population modeller and large carnivore specialist, as well as Iadine Chadès and Paul Fackler, both world experts in the field of decision making.
1. Under increasing environmental and financial constraints, ecologists are faced with making decisions about dynamic and uncertain biological systems. To do so, stochastic dynamic programming (SDP) is the most relevant tool for determining an optimal sequence of decisions over time.
2. Despite an increasing number of applications in ecology, SDP still suffers from a lack of widespread understanding. The required mathematical and programming knowledge as well as the absence of introductory material provide plausible explanations for this.
3. Here, we fill this gap by explaining the main concepts of SDP and providing useful guidelines to implement this technique, including R code.
4. We illustrate each step of SDP required to derive an optimal strategy using a wildlife management problem of the French wolf population. Our results show how the determination of optimal policies is sensitive to the incorporation of uncertainty.
5. SDP is a powerful technique for making decisions, in the presence of uncertainty, about stochastic biological systems changing through time. We hope this review will provide an entry point into the technical literature on SDP and will improve its application in ecology.
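To give a flavour of what SDP actually computes, here is a minimal sketch of finite-horizon stochastic dynamic programming by backward induction, loosely in the spirit of the paper's wolf-management example. The paper itself provides R code; this is an independent Python illustration, and every number below (states, transition probabilities, rewards) is a made-up assumption for demonstration, not taken from the paper.

```python
# Toy SDP: finite-horizon backward induction for a hypothetical
# wolf-management problem. All figures are illustrative only.

# States: population classes 0 (extinct), 1 (small), 2 (medium), 3 (large).
# Actions: 0 = do nothing, 1 = cull part of the population.
STATES = range(4)
ACTIONS = (0, 1)
HORIZON = 10

# P[a][s][s2] = probability of moving from state s to s2 under action a
# (hypothetical transition probabilities; each row sums to 1).
P = [
    [  # action 0: the population tends to grow
        [1.0, 0.0, 0.0, 0.0],
        [0.1, 0.3, 0.6, 0.0],
        [0.0, 0.1, 0.4, 0.5],
        [0.0, 0.0, 0.2, 0.8],
    ],
    [  # action 1: culling pushes the population down
        [1.0, 0.0, 0.0, 0.0],
        [0.4, 0.5, 0.1, 0.0],
        [0.1, 0.5, 0.3, 0.1],
        [0.0, 0.3, 0.5, 0.2],
    ],
]

# Reward: a viable but controlled population (class 1 or 2) is best;
# extinction and overabundance both score zero.
R = [0.0, 1.0, 1.0, 0.0]

def backward_induction():
    """Return the optimal policy (one action per state per time step)
    and the expected value of each starting state."""
    V = [R[s] for s in STATES]  # terminal values
    policy = []
    for _ in range(HORIZON):
        # Q[s][a]: expected cumulative reward of taking a in s, then acting optimally
        Q = [[R[s] + sum(P[a][s][s2] * V[s2] for s2 in STATES)
              for a in ACTIONS] for s in STATES]
        policy.append([max(ACTIONS, key=lambda a: Q[s][a]) for s in STATES])
        V = [max(Q[s]) for s in STATES]
    policy.reverse()  # index 0 = first decision of the horizon
    return policy, V

policy, V = backward_induction()
print(policy[0])  # optimal first action for each state
```

With these made-up numbers the optimal policy is state-dependent: culling is never optimal for a small population (it risks extinction, which is absorbing), while it is optimal for an overabundant one. That sensitivity of the policy to how uncertainty is specified is exactly the paper's point 4.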
I’m pretty sure you recall this paper, which, in a nutshell, listed everything that makes a paper boring (and proposed more constructive insights). Certainly, as a keen-motivated-enthusiastic-ambitious-original (etc.) researcher, you approved of the message beyond its ironic tone. But you never applied the recommendations yourself, fearing the ultra-academic minds of coauthors, Editors or Referees, which, roughly, consist in requiring that you remove any kind of originality from the writing of your manuscript. Honestly, I did the same, and definitively gave up on any fun writing no later than the first draft of my first manuscript (or so). Now that we can publicly own up to our will to have fun when doing / writing science without being systematically shamed by a chilling PhD comics-like comment (need examples?), let me add a positive update to Sand-Jensen’s “ten recommendations for boring scientific writing” (if you haven’t already, check the link above).
1. Be focused
“What did that paper deal with actually?”
To start simply, could you please ensure that we stupid readers fully understand what you’re talking about?
2. Be original and personal
“Does he really believe what he wrote?”
What about starting your paper with a personal field observation or research experience, rather than with platitudes such as “climatic changes affect biodiversity” or “competition is key to assembly rules”? We all remember that an apple falling on Newton’s head gave us the theory of gravitation (no matter whether that’s true). If you imagined your new metapopulation model while looking at a motorway interchange, or if your discussion includes experience acquired during sunny birding / hiking days, there’s no reason not to say so (together with standard scientific support, of course). We researchers are like anybody else: we love fun stories and recall them better than complicated sentences. And one or two such anecdotes won’t harm scientific rigor – they’ll just make it more digestible.
3. Write short contributions
“Will I still be there on page 15?”
We need to manage our literature reading accounting for (in order of priority) the shrinking size of coffee mugs, the high speed of public transport (don’t you also use your train or flight trips to check the >50 PDFs that permanently crowd your desktop?), the (not that bad) preference for hypothetico-deductive research, and the crazy amount of literature to go through in general. You’d save everybody’s time by striving to make your paper as concise as possible – which would also, somehow, make the reviews shorter… A wise journal editor once suggested to me a maximum of 3–4 questions / predictions to test and a maximum of 20% of the total text allocated to the discussion. A bit academic, but good to keep in mind.
4. Promote some implications and speculation
“Is my paper proposing anything novel at all?”
I liked this paper, and not just because it’s always better to like famous people’s stuff. Rather, there is something inspiring in proposing an original explanation for ecological patterns (here, that pathogens contribute to the spatial structure of bird communities). No one seriously thinks that science works like Lego, neatly building ideas upon ideas in a nicely imbricated way. Sometimes you need challenging ideas to move forward, and an unexplained / controversial result can be a nice occasion to make a proposal, even if you don’t have quantitative support for it – simply acknowledge that you’re speculating a bit, and no one will die of it.
5. Emphasize illustrations, particularly good ones
“Would I be able to explain that figure as part of an undergrad exam?”
It sounds stupid, but it isn’t that much. I won’t cite any example here, but we all know at least one of those papers with longish tables and weird figures (if any), and we wonder how the production staff let them be published. Make figures, and preferably figures that anyone can understand at a glance without an extensive look at the methods. A simple X and a simple Y is no shame, even for high-level statisticians – complexity should be in the ideas, not in the way we present them.
6. List all necessary steps of reasoning
“I don’t know where I’m going but I’m on my way”
Conceptual diagrams (like here) may critically determine readers’ ability to follow your predictions and methods – it’s also a good exercise to ensure that you haven’t missed a step somewhere yourself. I’m often criticized for unclear introduction outlines in which the flow is too implicit or blurred by unnecessary elements. I also make such comments very often when on the other side of the review process, so this is clearly one of the major points to check, even for self-proclaimed good writers.
7. Use few abbreviations and technical terms
… unless you really don’t want any reader to understand what you’ve done, forget this website when writing. And remember that not everybody in ecology knows what a PCA, an MCMC, an ML or a GLMM is…
8. Try humor and flowery language
“Conquer all mysteries by rule and line, / Empty the haunted air and gnomed mine” (J. Keats)
In a recent manuscript, I started a (rather standard) sentence with: “From the other side of the lens”. Although this is in no way comparable to Robert Burns’s or Shakespeare’s style, a coauthor (whom I value a lot) made this comment: “This is an unusual expression in scientific writing. I’d prefer something less poetic.” Should I conclude that because we are scientists, we should stick to trivial writing? Some (even good) papers are terribly flat, and this does nothing but make them unduly boring. Make the most of English literature just as you make the most of mathematics or stats. Have a look at the top-cited papers in the ecological literature. They are all nicely written, make good use of stylistic techniques, and some even attempt to be fun. In fact, many of us add cool stuff to our conference talks, and the audience appreciates it. Why should we refrain from doing the same in papers?
9. Recall that species and biology are not only statistical elements
“Fish counts are Poisson distributed”
ESA journals now encourage authors to publish photos of field designs as Supplementary Materials. It’s always a good idea to visualize what you’re talking about: one or two relevant plant, mammal or landscape photos can help illustrate your data, and they’re also a good way to meet point 2. Not only because you may learn something from a picture, but also because “you’ll only recall what you’ve actually seen in the wild” (wise advice from an entomology teacher – verified since then)…
10. Quote numerous papers for self-evident statements
“Science is fun (Boulet Team, 2013)”
Do we still need to cite many papers to justify that global changes are ongoing? Or that forest birds live in forests? Or that dispersal is a major parameter in metapopulation dynamics? Citing just to cite is nice for h-indices but not very informative. We should wonder whether a given citation will actually catch the reader’s attention. If we don’t expect anyone to look up the full reference in the bibliography, then the citation might not be that useful.
Well, it’s easier to state the above than to apply it within the highly static academic world of scientific publication. Maybe that’s also because we refrain from trying new, fun ways of publishing papers. We won’t get rejected because our paper is concise, well structured, understandable and well illustrated. A photo, a humorous sentence or a statement of personal experience can’t be that harmful – at worst, the Referees will ask us to trash it. There is a good chance, however, that a slightly different, personal, colorful paper will be remembered by the readership. Isn’t that what we actually want?
In a recent commentary, Hugh Possingham argued that conservation needs more analysts, not more field data – a statement which, he says, invariably elicits a hostile reception among field ecologists.
We took the opposite view and claimed: let’s collect data, and try to balance quantitative ecology with fieldwork! This was a very exciting experience during which we learnt a lot from each other by sharing, confronting, structuring and synthesizing ideas. We would like to pursue the venture through this blog. To our surprise, this short contribution got some coverage:
F1000 review – Good for Teaching, Interesting Hypothesis by J. Claudet and E. Darling: This commentary puts forward a vision that balances quantitative analyses of existing datasets with fieldwork in order to train young scientists in data-driven conservation. As a response to an equally invigorating commentary by Hugh Possingham, the authors acknowledge the need to train young scientists as ‘quantitative analysts’ that can monitor, model and evaluate conservation actions. At the same time, the authors emphasize that fieldwork experience is essential to remain biologically relevant – getting your feet wet and being out in the field helps understand both your study system and the value of collected data. This paper provides a compelling argument that desk-based analyses and field science are important for both new (and established) scientists in quantitative ecology and conservation.