Homogenisation algorithm skill testing with synthetic global benchmarks for the International Surface Temperature Initiative
Abstract
Our surface temperature data are good enough to give us confidence that the world has warmed since 1880. However, they are not perfect: we cannot be precise about the amount of warming for the globe, and especially for small regions or specific locations. Inhomogeneity (non-climate changes to the station record) is a major problem. While methods for detecting and adjusting for inhomogeneities continue to advance, monitoring their effectiveness on large networks and gauging the resulting improvements in climate data quality are non-trivial. There is currently no internationally recognised means of robustly assessing the effectiveness of homogenisation methods on real data, and thus the inhomogeneity uncertainty in those data. Here I present the work of the International Surface Temperature Initiative (ISTI; www.surfacetemperatures.org) Benchmarking working group. The aim is to quantify homogenisation algorithm skill on the global scale against realistic benchmarks. This involves the creation of synthetic worlds of surface temperature data, deliberate contamination of these with known errors, and then assessment of the ability of homogenisation algorithms to detect and remove those errors. The ultimate aim is threefold: quantifying uncertainties in surface temperature data; enabling more meaningful product intercomparison; and improving homogenisation methods. There are five components to this work:
1) Create ~30000 synthetic benchmark stations that look and feel like the real global temperature network, but do not contain any inhomogeneities: analog-clean-worlds.
2) Design a set of error models which mimic the main types of inhomogeneities found in practice, and combine them with the analog-clean-worlds to give analog-error-worlds (a toy contamination example is sketched after this list).
3) Engage with dataset creators to run their homogenisation algorithms blind on the analog-error-world stations, as they have done with the real data.
4) Design an assessment framework to gauge the degree to which analog-error-worlds are returned to the original analog-clean-worlds by homogenisation, and the detection/adjustment skill of the homogenisation algorithms (a toy scoring example follows the contamination sketch).
5) Present an assessment to the dataset creators of their method skill and of the estimated uncertainty remaining in the data due to inhomogeneity.
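To make component 2 concrete, here is a minimal sketch, in Python, of contaminating a synthetic clean station series with known step-change inhomogeneities (such as a station move or instrument swap). This is an illustrative toy, not the ISTI benchmarking code; the series model, function names, and parameter values are assumptions for demonstration only.

```python
# Toy sketch of ISTI benchmark component 2: injecting known errors
# into a synthetic "analog-clean-world" station series.
# All names and parameters here are illustrative, not ISTI choices.
import numpy as np

rng = np.random.default_rng(42)

def make_clean_station(n_months=1200, trend_per_decade=0.1):
    """Toy clean monthly anomaly series: linear trend plus weather noise."""
    months = np.arange(n_months)
    trend = trend_per_decade * months / 120.0
    noise = rng.normal(0.0, 0.5, n_months)
    return trend + noise

def inject_step_changes(series, n_breaks=3, max_shift=1.0):
    """Add abrupt step changes, mimicking station moves or instrument swaps.

    Returns the contaminated series plus the 'truth' (break positions and
    sizes) that a homogenisation algorithm should later recover.
    """
    contaminated = series.copy()
    positions = np.sort(rng.choice(len(series), n_breaks, replace=False))
    shifts = rng.uniform(-max_shift, max_shift, n_breaks)
    for pos, shift in zip(positions, shifts):
        contaminated[pos:] += shift  # shift everything after the break
    return contaminated, list(zip(positions.tolist(), shifts.tolist()))

clean = make_clean_station()
error_world, truth = inject_step_changes(clean)
print("True breakpoints (month, size):", truth)
```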
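Component 4 can be illustrated in the same toy setting. The sketch below scores a homogenisation attempt in two ways: residual error between the homogenised and clean series, and a hit rate / false-alarm count for detected breakpoints. The metric choices and the 12-month matching tolerance are assumptions for illustration, not the ISTI assessment specification.

```python
# Toy sketch of ISTI benchmark component 4: scoring how well homogenisation
# returns an analog-error-world to its analog-clean-world.
# Metrics and tolerance are illustrative assumptions, not ISTI choices.
import numpy as np

def rmse(homogenised, clean):
    """Residual error after homogenisation; 0 means full recovery."""
    return float(np.sqrt(np.mean((homogenised - clean) ** 2)))

def breakpoint_skill(detected, true_breaks, tolerance=12):
    """Hit rate and false-alarm count for detected break positions.

    A detection counts as a hit if it lies within `tolerance` months of
    an as-yet-unmatched true break; each true break can be matched once.
    """
    unmatched = list(true_breaks)
    hits = 0
    for d in detected:
        match = next((t for t in unmatched if abs(d - t) <= tolerance), None)
        if match is not None:
            hits += 1
            unmatched.remove(match)
    hit_rate = hits / len(true_breaks) if true_breaks else 1.0
    false_alarms = len(detected) - hits
    return hit_rate, false_alarms
```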
- Publication: EGU General Assembly Conference Abstracts
- Pub Date: May 2014
- Bibcode: 2014EGUGA..16.8479W