Identifying the Hazard Before the Earthquake: How Far Have We Come, How Well Have We Done?
Abstract
With almost half a century of evolving understanding and tools at our disposal, how successful have we been at identifying active faults and correctly quantifying the future hazards associated with them? The characterization of seismic sources has multiple facets: location and geometry, frequency of rupture and slip rate, and amount of fault displacement and expected earthquake magnitude. From these parameters, site-specific design values, regional probabilistic fault rupture and ground motion hazard maps, and national building codes can be developed. The mid-1960s saw the initiation of investigations focused on identifying active faults, with the earliest efforts geared toward location and the potential for surface rupture; studies for critical facilities (power plants, dams, pipelines) were central to this development. These studies flourished in the 1970s, during which time the importance of fault slip rates was recognized, and the latter part of the decade saw the first major paleoseismic studies aimed at multiple-event earthquake chronologies. During the 1980s, paleoseismic data provided the basis for the development of fault-specific magnitude-frequency distributions and concepts such as fault segmentation, which advanced source characterization. The 1990s saw active fault and paleoseismic investigations flourish internationally; AMS radiocarbon dating became widely used, increasing the information available on earthquake recurrence for a great number of faults. In the late 1990s and the 2000s, advances in luminescence and cosmogenic radionuclide dating permitted slip rates to be routinely obtained from previously undatable deposits offset by faults, and the development of LiDAR led to the identification of previously unrecognized active structures. These data are finding their way into increasingly sophisticated probabilistic ground motion and fault displacement models. How have these developments affected our ability to correctly identify and quantify a hazard prior to the earthquake? Judging success at this stage is difficult because our societal timeframe is short relative to the recurrence of even the shortest recurrence-interval faults. Success may be defined in terms of: (1) quantification of a parameter (e.g., amount of offset) that is not exceeded when the fault fails, as exemplified early and prominently by the characterization of the Denali fault for the design of the Trans-Alaska pipeline; (2) avoidance, for which the State of California's Alquist-Priolo Special Studies Zone Act has set the standard by requiring building setbacks from identified active faults; and (3) changing the paradigm, as when the recognition, based on paleoseismic observations, that the Cascadia subduction zone has produced M9 earthquakes completely revised the understanding of seismic hazard in the Pacific Northwest. Future successes will require time to become known. There have also been "surprises": the occurrence of earthquakes on blind faults at Loma Prieta, Northridge, and Darfield, the rupture of the Shih-Kung dam by the Chelungpu fault during the Chi-Chi earthquake, and the unexpected magnitude of the Tohoku earthquake and associated tsunami serve as cautionary reminders of the limitations and uncertainties in identifying and characterizing earthquake sources.
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2011
- Bibcode: 2011AGUFM.S12A..01S
- Keywords:
  - 7221 SEISMOLOGY / Paleoseismology
  - 4302 NATURAL HAZARDS / Geological