Tossing the Earth: How to Reliably Test Earthquake Prediction Methods
Abstract
One of the most consequential issues in earthquake prediction is the reliable testing of hypothetical prediction methods. The danger of self-deception by data overfitting is especially high here, due to both the scarcity of large earthquakes and the absence of a conventional, wide-reaching theoretical framework. This talk gives an overview of the methods currently employed to test prediction algorithms and draws connections between the commonly accepted approaches to the problem. The main focus is on the two most widely used approaches to assessing prediction methods. Both evaluate the amount of new information a prediction method reveals about impending earthquake activity. The first starts by estimating the expected spatio-temporal distribution of seismicity and uses the classical likelihood paradigm to evaluate predictive power; accordingly, it employs the nomenclature of statistical estimation. The second applies results of G. Molchan [Pure Appl. Geophys., 149: 233-247, 1997], which can be considered a time-dependent analog of the Neyman-Pearson lemma, to decide whether or not to expect an earthquake within a given spatio-temporal region; accordingly, it employs the nomenclature of hypothesis testing. Importantly, this approach does not require explicit knowledge of the earthquake hazard rate; in other words, the correct decision can be made with realistically imprecise data. Using the outcomes of real-time prediction experiments, we discuss how the choice of assessment method depends on the specific prediction situation. The best choice turns out to depend crucially on the specifics of the prediction problem: the set of target earthquakes, the prediction time span, the resolution, etc.
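To make the contrast between the two assessment styles concrete, here is a minimal sketch, not from the abstract itself, assuming a forecast given as independent Poisson rates on space-time bins and alarms given as a binary flag per bin. The function names `log_likelihood` and `molchan_point`, and the toy numbers, are illustrative assumptions; Molchan's error diagram plots the fraction of space-time occupied by alarms against the fraction of target earthquakes missed.

```python
# Hypothetical sketch: two ways to score a gridded space-time earthquake
# forecast. Assumes Poisson counts per bin (likelihood view) and binary
# per-bin alarms (Molchan error-diagram view).
import numpy as np
from scipy.stats import poisson

def log_likelihood(rates, counts):
    """Joint log-likelihood of observed counts under independent Poisson
    rates -- the 'statistical estimation' view of prediction power."""
    return poisson.logpmf(np.asarray(counts), np.asarray(rates, float)).sum()

def molchan_point(alarms, counts, weights=None):
    """One point (tau, nu) on the Molchan error diagram -- the
    'hypothesis testing' view. tau is the (weighted) fraction of
    space-time covered by alarms; nu is the fraction of target
    earthquakes missed, i.e. occurring outside any alarm."""
    alarms = np.asarray(alarms, dtype=bool)
    counts = np.asarray(counts)
    if weights is None:                      # equal bin sizes by default
        weights = np.ones(counts.shape, dtype=float)
    tau = weights[alarms].sum() / weights.sum()
    total = counts.sum()
    nu = counts[~alarms].sum() / total if total else 0.0
    return tau, nu

# Toy example: 6 space-time bins, a rate forecast, alarms declared in the
# two "hottest" bins, and observed earthquake counts.
rates = np.array([0.1, 0.05, 0.8, 0.02, 0.5, 0.03])
counts = np.array([0, 0, 1, 0, 1, 1])
alarms = rates >= 0.5

print("log-likelihood:", log_likelihood(rates, counts))
print("Molchan (tau, nu):", molchan_point(alarms, counts))
# A skill-free (random) alarm strategy lies on the diagonal nu = 1 - tau
# of the error diagram; informative predictions plot below it.
```

Note that `molchan_point` never uses the forecast rates themselves, only the alarm set, mirroring the abstract's point that this approach does not require explicit knowledge of the hazard rate.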
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2004
- Bibcode: 2004AGUFM.S23A0302Z
- Keywords:
  - 7223 Seismic hazard assessment and prediction
  - 7200 SEISMOLOGY
  - 3200 MATHEMATICAL GEOPHYSICS (New field)