Improving Model Identification: Reconciling Theory with Observations & The Problem of Sufficient Statistics (Invited)
Abstract
Three decades of attempts to improve the procedures by which we reconcile models with observational data have been driven by efforts such as that of Johnston and Pilgrim (WRR 1976) who reported that “A true optimum set of (parameter) values was not found in over 2 years of full-time work concentrated on one watershed, although many apparent optimum sets were readily obtained.” Since that time, we have played with statistical theory (Likelihood, Bayesian and Multiple-Criteria methods) and optimization theory (Deterministic and Stochastic Global Search methods). But compared with the degree of effort expended, the improvements in model reliability have been relatively small, and the power to discriminate between alternative model hypotheses remains so weak that many people now prefer to talk about multiple ‘equally likely’ models. In this talk I will argue that for three decades we have effectively been putting the cart before the horse. With improved computational tools, our main focus has been on trying to improve model identification via improved mathematical rigor and through better and more robust statistics. However, the real problem of reconciling theory with observations (models with data) is not so much one of statistics as it is of information flow, a process that is bi-directional. On the one hand, the modeling problem is one of explicitly stating the hypothesis to be tested, along with a clear statement of what kinds of tests will unambiguously challenge the hypothesis. On the other hand, the observational problem is one of extracting diagnostically useful information from the data, information that directly supports or challenges the model hypothesis. 
The reconciliation problem is therefore more properly approached as one of a) making robust inferences regarding which aspects of the model hypothesis are (or are not) supported by the observations, b) diagnostically guiding improvements to the theory (model), and c) suggesting what would constitute improvements to the process of acquiring observations. Recent work suggests that a general and robust theory of “Diagnostic Model Evaluation & Improvement” can be achieved through an improved understanding of the role of “Sufficient Statistics”, which can be used to confront the model with relevant information extracted from the data. This task will require the active collaboration of process scientists, modelers and systems theorists alike.
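To make the notion of a sufficient statistic concrete (this example is not from the abstract; it is a standard textbook illustration using a Gaussian model), the sketch below shows that the Gaussian log-likelihood depends on the data only through three summaries: the sample size, the sum of the observations, and the sum of their squares. Any two data sets sharing these summaries carry identical information about the parameters, which is the sense in which well-chosen statistics can "confront the model with relevant information extracted from the data".

```python
import numpy as np

def gaussian_loglik_full(x, mu, sigma):
    """Log-likelihood of N(mu, sigma^2) computed from the full sample."""
    n = len(x)
    return (-n / 2 * np.log(2 * np.pi * sigma**2)
            - np.sum((x - mu) ** 2) / (2 * sigma**2))

def gaussian_loglik_sufficient(n, s1, s2, mu, sigma):
    """The same log-likelihood computed from the sufficient statistics
    alone: n, s1 = sum(x), s2 = sum(x**2).  Expanding the square gives
    sum((x - mu)^2) = s2 - 2*mu*s1 + n*mu^2."""
    return (-n / 2 * np.log(2 * np.pi * sigma**2)
            - (s2 - 2 * mu * s1 + n * mu**2) / (2 * sigma**2))

# Synthetic data; the specific values are illustrative only.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=0.8, size=500)

full = gaussian_loglik_full(x, mu=1.0, sigma=1.0)
suff = gaussian_loglik_sufficient(len(x), x.sum(), (x**2).sum(),
                                  mu=1.0, sigma=1.0)

# The full sample and the three summaries yield the same likelihood,
# so the summaries are sufficient for (mu, sigma).
assert np.isclose(full, suff)
```

The diagnostic point the abstract argues for goes further than this textbook case: for hydrologic models one must choose statistics whose disagreement with the model points to a specific process deficiency, not merely compress the data without loss.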
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2009
- Bibcode: 2009AGUFM.H23L..05G
- Keywords:
  - 1840 HYDROLOGY / Hydrometeorology
  - 1894 HYDROLOGY / Instruments and techniques: modeling
  - 1910 INFORMATICS / Data assimilation, integration and fusion
  - 1968 INFORMATICS / Scientific reasoning/inference