Essay by Eric Worrall
According to the Technical University of Munich, poor-quality climate data and oversimplified model assumptions make it impossible to pinpoint the timing of future climate tipping events.
Not the day after tomorrow: Why we can’t predict the timing of climate tipping points
by Technical University of Munich
August 2, 2024

A study published in Science Advances reveals that uncertainties are currently too large to accurately predict exact tipping times for critical Earth system components like the Atlantic Meridional Overturning Circulation (AMOC), polar ice sheets, or tropical rainforests.
…
First, predictions rely on assumptions regarding the underlying physical mechanisms, as well as regarding future human actions to extrapolate past data into the future. These assumptions can be overly simplistic and lead to significant errors.
Second, long-term, direct observations of the climate system are rare and the Earth system components in question may not be suitably represented by the data. Third, historical climate data is incomplete.
…
To illustrate their findings, the authors examined the AMOC, a crucial ocean current system. Previous predictions from historical data suggested a collapse could occur between 2025 and 2095. However, the new study revealed that the uncertainties are so large that these predictions are not reliable.
…

Read more: https://phys.org/news/2024-08-day-tomorrow-climate.html
The study referenced by the article:
Uncertainties too large to predict tipping times of major Earth system components from historical data
Maya Ben-Yami, Andreas Morr, Sebastian Bathiany, and Niklas Boers
Science Advances
2 Aug 2024
Vol 10, Issue 31

Abstract
One way to warn of forthcoming critical transitions in Earth system components is using observations to detect declining system stability. It has also been suggested to extrapolate such stability changes into the future and predict tipping times. Here, we argue that the involved uncertainties are too high to robustly predict tipping times. We raise concerns regarding (i) the modeling assumptions underlying any extrapolation of historical results into the future, (ii) the representativeness of individual Earth system component time series, and (iii) the impact of uncertainties and preprocessing of used observational datasets, with focus on nonstationary observational coverage and gap filling. We explore these uncertainties in general and specifically for the example of the Atlantic Meridional Overturning Circulation. We argue that even under the assumption that a given Earth system component has an approaching tipping point, the uncertainties are too large to reliably estimate tipping times by extrapolating historical information.
Read more: https://www.science.org/doi/10.1126/sciadv.adl4841
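To make the quoted abstract concrete: the "extrapolation" approach the paper critiques typically works by estimating a stability indicator, such as the lag-1 autocorrelation AC(1), in sliding windows over the observational record, fitting a trend, and extrapolating to AC(1) = 1, the point where a linearly restoring system loses stability. The Python sketch below is my own illustration of that logic, not the authors' code; the AR(1) surrogate series, window size, and linear fit are all simplifying assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Surrogate "observations": an AR(1) process whose lag-1 coefficient drifts
# towards 1, mimicking a system losing stability. (Illustrative, not real data.)
n = 3000
phi = np.linspace(0.80, 0.97, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

def rolling_ac1(series, window):
    """Lag-1 autocorrelation in sliding windows: the classic stability indicator."""
    out = []
    for i in range(len(series) - window):
        w = series[i:i + window]
        w = w - w.mean()
        out.append(np.dot(w[:-1], w[1:]) / np.dot(w, w))
    return np.array(out)

ac1 = rolling_ac1(x, window=500)
t_idx = np.arange(len(ac1))

# Extrapolate a linear fit of AC(1) forward to the value 1, where a linearly
# restoring system loses stability, and call that the "tipping time".
slope, intercept = np.polyfit(t_idx, ac1, 1)
print(f"Extrapolated 'tipping' index: {(1.0 - intercept) / slope:.0f} "
      f"(record length: {n})")

On a surrogate like this, the fitted trend yields a single finite "tipping time", which is exactly the kind of point estimate the paper argues the uncertainties cannot support.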
One thing that really caught my eye is how sensitive the results are to the choice of model inputs and data processing.
… The most fundamental assumption made in all these methods is that the system in question can undergo tipping for a given forcing. However, not all systems can undergo tipping, and as the methods assume tipping, they are susceptible to false positives. We now apply the three methods to time series generated by a linear model without any bifurcation but with an added mean trend, forced with red noise that increases in correlation strength (see Materials and Methods). The first and third method above predict tipping for this system (Fig. 1). For such a linear system, the ideal method would give the tipping time as infinite (i.e., no tipping time). However, the MLE method always predicts a finite tipping time, and the AC(1) extrapolation only gives an infinite tipping time for about a quarter of the cases. The generalized least squares (GLS)–based regression method is designed to account for nonstationary correlated noise, and its results do not indicate a notable decrease in system stability. Despite that, it still gives a finite tipping time for about half of the cases. …
Read more: https://www.science.org/doi/10.1126/sciadv.adl4841
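The false positive described in that passage is easy to reproduce in a toy setting. The sketch below is again my own illustration with illustrative parameters, not the authors' code: a purely linear system with a fixed restoring rate (no bifurcation anywhere), an added mean trend, and red-noise forcing whose correlation strengthens over time. The ideal answer is "no tipping time", yet the same AC(1) extrapolation returns a finite one, because the indicator rises for reasons unrelated to any approaching tipping point.

import numpy as np

rng = np.random.default_rng(1)
n = 3000

# Red-noise forcing whose own correlation increases over time.
rho = np.linspace(0.0, 0.9, n)
eta = np.zeros(n)
for t in range(1, n):
    eta[t] = rho[t] * eta[t - 1] + rng.normal()

# Purely linear system: constant restoring rate, so no bifurcation anywhere,
# plus an added mean trend, as in the experiment described in the quote.
phi = 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eta[t]
x = x + np.linspace(0.0, 2.0, n)

def rolling_ac1(series, window):
    """Same sliding-window AC(1) estimator as in the previous sketch."""
    out = []
    for i in range(len(series) - window):
        w = series[i:i + window]
        w = w - w.mean()
        out.append(np.dot(w[:-1], w[1:]) / np.dot(w, w))
    return np.array(out)

# AC(1) rises here because the forcing grows more correlated, not because the
# system is approaching a tipping point; extrapolation to AC(1) = 1 still
# produces a finite "tipping time" where the right answer is "never".
ac1 = rolling_ac1(x, window=500)
slope, intercept = np.polyfit(np.arange(len(ac1)), ac1, 1)
print(f"False-positive 'tipping' index: {(1.0 - intercept) / slope:.0f}")

Because the method assumes tipping from the outset, any upward drift in the indicator, whatever its cause, gets converted into a tipping date.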
The scientists were careful to suggest the models might, if anything, underestimate the risk of tipping points.
But the models are clearly not fit for guiding public policy.
Trying to infer behaviour from models that contain degrees of freedom nobody knows about, built on patchy and unreliable data spanning too short a period, and so sensitive to noise that switching to a different statistical method produces wildly different outcomes, isn't science; it is superstition. An Ouija board could match the quality of the predictions such models provide.