
Are Some Scientists Overstating Predictions? Or, How Good Are Crystal Balls?

By Tom Stohlgren and Dan Binkley

The future is really hard to predict. Really. In fact, 95% of Americans think they can predict the future 90% better than they really can! We just made up those statistics, but that’s what we often do when predicting the future: we take our best personal insights, observations, data, and models, and pretend we can see the future.

It’s a bit humbling, humorous, and humiliating to consider some of the fantastic claims made by well-intentioned scientists around the first Earth Day in 1970: 100-200 million people dying of starvation each year, a massive global cooling trend, and an end to life as we know it by 1985 (Ronald Reagan’s second term!). None of it materialized. Lily Tomlin lamented that maybe if we started listening to history, it wouldn’t have to keep repeating itself. Yet many famous and well-intentioned scientists think the future can be predicted much more clearly now than it could be in the past. Without naming any names, consider the following predictions:

  1. In 1992, a leading naturalist claimed the Earth was losing 27,000 species per year to extinction since 1980 (that’s 74 each day, three species each hour).
  2. In 2004, modelers predicted that climate warming would leave 15-37% of 1,100 rare terrestrial species ‘committed to extinction’ by 2050.
  3. In 2013, climate modelers and ecologists predicted a 60-77% decline in aspen in Colorado by 2060.

First, we wondered why leading physicists don’t seem to make sweeping and alarming predictions. We found a particularly enlightening paper with the enticing title, “The Good, The Bad, and The Ugly of Predictive Science”[1]. It lays out what amounts to common knowledge in that field: large-scale prediction rests on a foundation of trade-offs. The authors remind us that any mathematical or numerical model gains credibility by understanding the trade-offs between:

  1. Improving the fidelity to test data,
  2. Studying the robustness of predictions to uncertainty and lack-of-knowledge, and
  3. Establishing the “prediction looseness” of the model. Prediction looseness here refers to the range of predictions expected from a model or family of models along the way. (A toy numerical sketch of all three quantities follows this list.)
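To make these three quantities concrete, here is a minimal, self-contained sketch in Python. The toy data, the linear model, and the ±10% parameter perturbation are all our own illustrative assumptions, not anything from Hemez and Ben-Haim[1]:

```python
import numpy as np

# Toy example: a small "family" of linear models fit to the same noisy
# test data, differing only in one uncertain parameter (the slope).
# All data and names here are illustrative assumptions, not from [1].
rng = np.random.default_rng(42)
x_test = np.linspace(0, 10, 20)
y_test = 2.0 * x_test + rng.normal(0.0, 1.0, size=x_test.size)  # synthetic observations

def model(x, slope):
    return slope * x

slopes = [1.8, 2.0, 2.2]  # a family of plausible parameterizations
x_future = 15.0           # a point beyond the calibration data

# 1. Fidelity to test data: root-mean-square error against observations.
fidelity = {s: np.sqrt(np.mean((model(x_test, s) - y_test) ** 2)) for s in slopes}

# 2. Robustness: how far the prediction moves when the uncertain parameter
#    is perturbed by +/-10% (a smaller shift means a more robust model).
robustness = {s: abs(model(x_future, 1.1 * s) - model(x_future, 0.9 * s)) for s in slopes}

# 3. Prediction looseness: the spread of predictions across the whole family.
predictions = [model(x_future, s) for s in slopes]
looseness = max(predictions) - min(predictions)

for s in slopes:
    print(f"slope {s}: RMSE {fidelity[s]:.2f}, shift under +/-10% {robustness[s]:.1f}")
print(f"prediction looseness at x = {x_future}: {looseness:.1f}")
```

The tension is visible even in this toy: tightening the fit to past data says nothing about how far the family’s predictions fan out at the point you actually care about.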

Whether we are talking about climate scenarios or extinction predictions, we found it helpful to assess model credibility in this way. Let’s evaluate the three claims again.

  1. In 1992, a leading naturalist claimed the Earth was losing 27,000 species per year to extinction since 1980 (that’s 74 each day, three species per hour).

Documented extinction rates for the past 500 years average fewer than 3 species per year[2,3]. This hints at a fidelity-to-data issue. But the lack-of-knowledge issue creeps in too. For past extinction rates, a spotty fossil record and an unknown total number of species on Earth make estimates of background extinction rates highly uncertain. Tracking current extinctions is equally uncertain, especially for tiny creatures in remote areas of the world. If we cannot even name 100 species that went extinct last year (or in any preceding year), is it wise to believe the true rate is 27,000 extinctions per year[2]?
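The arithmetic behind the headline figure, and its gap with the documented record, is easy to check (this snippet is just our own sanity check, not anyone’s published code):

```python
claimed_per_year = 27_000
per_day = claimed_per_year / 365          # ~74 species per day
per_hour = per_day / 24                   # ~3 species per hour
documented_per_year = 3                   # documented average is below this [2,3]
print(f"{per_day:.0f} per day, {per_hour:.1f} per hour; "
      f"at least {claimed_per_year / documented_per_year:,.0f}x the documented rate")
```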

The last two predictions can be looked at together.

  2. In 2004, modelers predicted that climate warming would leave 15-37% of 1,100 rare terrestrial species ‘committed to extinction’ by 2050, and
  3. In 2013, climate modelers and ecologists predicted a 60-77% decline in aspen in Colorado by 2060.

It’s become rather common to couple climate change scenarios with models of habitat loss, despite a host of uncertainties in both the climate models and the habitat suitability models[4]. In the case of climate models, it may not be enough to improve their calibration against past climate station data, since that addresses only the fidelity-to-data aspect[1]. The dearth of weather stations at high elevations and the difficulty of measuring snow make predictions of local precipitation highly uncertain in both magnitude and direction[5]. And if models selected for their robustness to uncertainty tend to make inconsistent predictions, that is even more discouraging[1]. We’d all like to make accurate climate predictions at local scales to assess species loss, while remaining robust to the sources of uncertainty and lack-of-knowledge. However, given that climate projections are “subject to large and unquantifiable uncertainty”[5], any vegetation predictions based on those climate models must be subject to the same large and unquantifiable uncertainties.
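A toy Monte Carlo sketch shows how this compounding works. Everything here is a made-up illustration (the spread of local precipitation change, the 3x habitat sensitivity); the point is only that a habitat model driven by a sign-uncertain climate input inherits, and can amplify, that uncertainty:

```python
import numpy as np

# Illustrative Monte Carlo only -- not any published model. If the local
# precipitation change is uncertain even in sign, a habitat-suitability
# model driven by it inherits (and here amplifies) that uncertainty.
rng = np.random.default_rng(0)

# Assumed local precipitation change by 2060: mean ~0, wide spread,
# so even the direction (wetter vs. drier) is unresolved.
delta_precip = rng.normal(loc=0.0, scale=0.15, size=10_000)  # fractional change

# A made-up habitat response: suitable area changes 3x as fast as
# precipitation (habitat models are often more sensitive than their inputs).
delta_habitat = 3.0 * delta_precip

lo_p, hi_p = np.percentile(delta_precip, [5, 95])
lo_h, hi_h = np.percentile(delta_habitat, [5, 95])
print(f"climate input:  5th-95th percentile = {lo_p:+.2f} to {hi_p:+.2f}")
print(f"habitat output: 5th-95th percentile = {lo_h:+.2f} to {hi_h:+.2f}")
# The output spread is 3x the input spread, and it still straddles zero:
# the stacked model cannot even say whether habitat grows or shrinks.
```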

The trade-offs here are enormous. If species distributions are driven by temperature and precipitation, but you can’t predict whether it’s going to be wetter or drier at local scales, then current climate models may be of very limited utility. In fact, a highly respected species-distribution modeler states, “The magnitude of uncertainties in species’ range modeling is currently so great that it might lead conservation planners, policy makers and other stakeholders to question the overall usefulness of science as an aid to solve real world problems”[4].

In the particular case of the extinction models, the trade-off between robustness of predictions to uncertainty and lack-of-knowledge[1] is simply ignored. In both the extinction predictions and the aspen decline predictions, the climate models, which were never designed for local-scale predictions, lack sufficient information to assess the persistence of rare species in favorable micro-climate refugia, the adaptations of species, or the bi-directional migrations of species to and from favorable microsites[6]. In the aspen decline predictions, the models also ignore other ecological factors important for local aspen persistence (e.g., fire, insect outbreaks, landslides)[7]. Indeed, if some plant species do more poorly under future climates, we should expect other species to do better as competitive pressures shift. These sorts of ecological interactions are not easy to characterize quantitatively in present landscapes, so it’s hard to believe we can see the future in much detail.

A more egregious oversight in these three predictions is that none of the authors quantified the “prediction looseness” of their models[1]. What milestones should be expected on the way to the predicted futures? What should we expect to see in 10, 20, and 30 years if the models are accurate? And what should we conclude about the predictions if the observations come in far below (or above) expectations?

Given the lessons from the first Earth Day in 1970, and the large and unquantifiable uncertainties in many long-term predictions, we think all predictions should be:

  1. stated as hypotheses,
  2. accompanied by short-term predictions with acceptance/rejection criteria (a sketch follows this list),
  3. accompanied by simple monitoring to verify and validate projections,
  4. carefully communicated with model caveats and estimates of uncertainties.
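As a concrete (and entirely hypothetical) illustration of point 2, here is what stating a prediction with short-term milestones and a rejection rule might look like in code. The aspen-style numbers and the 50% tolerance are our assumptions, not anything from the papers above:

```python
# A minimal sketch of recommendation 2: a long-term prediction restated as
# short-term milestones with explicit acceptance/rejection criteria.
# All numbers are hypothetical, loosely patterned on the aspen example.

def check_milestone(predicted_decline, observed_decline, tolerance=0.5):
    """Reject the model if the observation misses the milestone by more
    than `tolerance` (as a fraction of the predicted decline)."""
    error = abs(observed_decline - predicted_decline)
    return error <= tolerance * abs(predicted_decline)

# Hypothetical milestones: if aspen were to decline ~70% by 2060,
# a roughly linear path implies ~15% by 2023 and ~30% by 2033.
milestones = [(2023, 0.15), (2033, 0.30), (2043, 0.45)]
observed = {2023: 0.05}  # a monitoring result, made up for illustration

for year, predicted in milestones:
    if year in observed:
        ok = check_milestone(predicted, observed[year])
        verdict = "consistent with" if ok else "grounds to reject"
        print(f"{year}: predicted {predicted:.0%}, observed {observed[year]:.0%} "
              f"-> {verdict} the model")
```

The exact rule matters less than having one: a prediction published with its own rejection criteria can be credited when it passes and retired when it fails.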

We would go on to suggest that it is the scientist’s responsibility to assess the validity of their predictions over time, to follow up on news reports and uses of their model results, and to respond to misuses of those results. Overstating predictions is a great way to lose credibility. As Mark Twain said, “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

References

[1] Hemez, F. M., & Ben-Haim, Y. 2004. The good, the bad, and the ugly of predictive science. In 4th International Conference on Sensitivity Analysis of Model Output (pp. 8-11). http://www.tomcoyne.org/resources/Good_Bad_Ugly_of_Predictive-Science.pdf.

[2] Stork, N. E. 2010. Re-assessing current extinction rates. Biodiversity and Conservation, 19(2), 357-371.

[3] Loehle, C., & Eschenbach, W. 2012. Historical bird and terrestrial mammal extinction rates and causes. Diversity and Distributions, 18(1), 84-91.

[4] Thuiller, W., et al. 2008. Predicting global change impacts on plant species’ distributions: future challenges. Perspectives in Plant Ecology, Evolution and Systematics, 9(3), 137-152.

[5] Costa-Cabral, M., Coats, R., Reuter, J., Riverson, J., Sahoo, G., Schladow, G., … & Chen, L. 2013. Climate variability and change in mountain environments: some implications for water resources and water quality in the Sierra Nevada (USA). Climatic Change, 116(1), 1-14.

[6] Schwartz, M.W., L.R. Iverson, A.M. Prasad, S.N. Matthews, & R.J. O’Connor. 2006. Predicting extinctions as a result of climate change. Ecology, 87:1611-1615.

[7] Stone, W.E., & M.L. Wolfe. 1996. Response of understory vegetation to variable tree mortality following a mountain pine beetle epidemic in lodgepole pine stands in northern Utah. Plant Ecology, 122: 1-12.

Tom Stohlgren, Ph.D., is a Senior Research Ecologist at the Natural Resource Ecology Laboratory at Colorado State University. He conducts research on invasive species, teaches, trains graduate students, and wears Hawaiian shirts every day.

Dan Binkley, Ph.D., has been a forest ecology professor for over 30 years, studying how forests change over short and long periods of time and across gradients in space.

The featured image was borrowed from The Red Zone Blog.