Robust calibration of a global aerosol model
Abstract
Comparison of models and observations is vital for evaluating how well computer models can simulate real-world processes. However, many current methods give only a limited assessment of model uncertainty, which raises questions about the robustness of the observationally constrained model. In most cases, models are evaluated against observations using a single baseline simulation considered to represent the model's best estimate. The model is then adjusted in some way so that its agreement with observations improves. Repeated adjustments of this kind may yield a model that compares better with observations, but the result may contain many compensating errors, making prediction with the newly calibrated model difficult to justify. Some model outputs may also compare worse with observations in certain regions or seasons as others improve. In such cases calibration cannot be considered robust. We present details of the calibration of a global aerosol model, GLOMAP, in which we consider not a single model setup but a perturbed physics ensemble spanning 28 uncertain parameters. We first quantify the uncertainty in several model outputs (CCN, CN) for the year 2008 and use statistical emulation to identify which of the 28 parameters contribute most to this uncertainty. We then compare the emulated model simulations across the entire parametric uncertainty space to observations. Regions where the entire ensemble lies outside the observational error indicate structural model error or gaps in current knowledge, allowing us to target future research areas. Where there is some agreement with the observations, we use the information on the sources of model uncertainty to identify geographical regions in which the important parameters are similar.
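The abstract does not specify the emulator form; a minimal sketch, using a linear least-squares surrogate in place of the Gaussian-process emulators typically used in this kind of study, and a toy three-parameter stand-in for the aerosol model, illustrates how emulation over a parameter design can attribute output variance to individual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_params = 3  # stand-in for the 28 uncertain parameters

def toy_model(x):
    # hypothetical CCN-like response: parameter 0 dominates the uncertainty
    return 2.0 * x[:, 0] + 0.5 * x[:, 1] + 0.1 * x[:, 2] \
        + 0.05 * rng.normal(size=len(x))

# design of simulations spanning the parametric uncertainty space
X = rng.uniform(0.0, 1.0, size=(200, n_params))
y = toy_model(X)

# fit the cheap surrogate ("emulator") to the ensemble of runs
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# for a linear emulator with independent inputs, each parameter's
# first-order share of the output variance is coef_i^2 * Var(x_i)
contrib = coef[1:] ** 2 * X.var(axis=0)
S = contrib / contrib.sum()  # normalised sensitivity indices
```

Here `S[0]` dominates, flagging parameter 0 as the main source of uncertainty; with a nonlinear emulator the same decomposition would instead be estimated by Monte Carlo sampling of the emulator.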
Identifying such regional calibration clusters lets us use information from observation-rich regions to calibrate regions with sparse observations and allows us to make recommendations for new observations. We use a technique called history matching with multiple outputs to constrain the uncertain parameters before attempting to calibrate the model. Applying history matching to several model outputs throughout the parametric uncertainty space reduces the likelihood of finding a good model for the wrong reasons and allows a more robust calibration.
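The history-matching step can be sketched as follows. This is an illustrative toy, not the GLOMAP implementation: the emulated responses, observed values, and variance terms are all hypothetical. For each candidate parameter setting, an implausibility measure compares the emulated output to the observation, inflated by observation, emulator, and model-discrepancy variances; settings whose maximum implausibility across outputs exceeds a cutoff (commonly 3) are ruled out:

```python
import numpy as np

rng = np.random.default_rng(1)

def emulated_outputs(x):
    # two hypothetical emulated responses (e.g. CCN and CN) to one parameter
    return np.column_stack([2.0 * x, 1.0 - x])

obs = np.array([1.0, 0.4])        # hypothetical observed values
total_var = np.array([0.035, 0.02])  # obs + emulator + discrepancy variance

# candidate parameter settings spanning the uncertainty space
xs = rng.uniform(0.0, 1.0, 5000)

# per-output implausibility: standardised distance from the observation
I = np.abs(obs - emulated_outputs(xs)) / np.sqrt(total_var)

# with multiple outputs, rule out a setting if its worst output fails
I_max = I.max(axis=1)
not_ruled_out = xs[I_max < 3.0]
```

The surviving settings form the "not ruled out yet" space within which calibration is then attempted; unlike tuning a single baseline run, every retained setting is consistent with all outputs at once.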
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2013
- Bibcode: 2013AGUFMGC31B1059L
- Keywords:
  - 3311 ATMOSPHERIC PROCESSES / Clouds and aerosols
  - 3325 ATMOSPHERIC PROCESSES / Monte Carlo technique
  - 3275 MATHEMATICAL GEOPHYSICS / Uncertainty quantification
  - 3333 ATMOSPHERIC PROCESSES / Model calibration