Fast Simulation of Tsunamis in Real Time
Abstract
The U.S. Tsunami Warning Centers base their wave-height forecasts primarily on precomputed tsunami scenarios, such as those of the SIFT model (Short-term Inundation Forecasting for Tsunamis) developed by NOAA's Center for Tsunami Research. In SIFT, tsunami simulations for about 1,600 individual unit sources, each 100x50 km, cover the world's shallow subduction zones. These simulations are stored in a database and combined linearly to synthesize the tsunami from any great earthquake. Precomputation is necessary because the nonlinear shallow-water wave equations are too time consuming to solve during an event. While such scenario-based models are valuable, they tacitly assume that all the energy in a tsunami comes from thrust at the décollement. That assumption is often violated (e.g., 1933 Sanriku, 2007 Kurils, 2009 Samoa), and a significant number of tsunamigenic earthquakes are entirely unrelated to subduction (e.g., 1812 Santa Barbara, 1939 Accra, 1975 Kalapana). Finally, parts of some subduction zones are so poorly defined that precomputations may be of little value (e.g., 1762 Arakan, 1755 Lisbon). For all such sources, a fast means of estimating tsunami size is essential. At the Pacific Tsunami Warning Center we have been running our model RIFT (Real-time Inundation Forecasting of Tsunamis) experimentally for two years. RIFT is fast by design: it solves only the linearized form of the shallow-water equations. At 4 arc-minute resolution, calculations for the entire Pacific take just a few minutes on an 8-processor Linux box. Part of the rationale for developing RIFT was to handle earthquakes of M 7.8 or smaller, which approach the lower limit of the more complex SIFT's abilities. For such events we currently issue a fixed warning to areas within 1,000 km of the source, which typically means considerable over-warning. With sources defined by W-phase CMTs, exhaustive comparison with runup data shows that we can reduce the warning area significantly.
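The linear-combination step underlying a scenario database like SIFT can be sketched in a few lines. This is a minimal illustration, not SIFT's actual interface: the function name, array shapes, and nominal unit slip are all assumptions made for the example.

```python
import numpy as np

def combine_unit_sources(unit_waveforms, slips, unit_slip=1.0):
    """Linearly combine precomputed unit-source tsunami waveforms.

    unit_waveforms: array of shape (n_sources, n_times) holding the
        precomputed sea-surface time series at a forecast point for each
        100x50 km unit source, computed with a nominal slip of
        `unit_slip` meters.
    slips: estimated slip (m) on each unit source for the actual event.

    Because wave propagation is treated as linear, the forecast at the
    point is simply the slip-weighted sum of the stored unit responses.
    """
    unit_waveforms = np.asarray(unit_waveforms, dtype=float)
    weights = np.asarray(slips, dtype=float) / unit_slip
    return weights @ unit_waveforms
```

The same linearity is what lets RIFT superpose multiple sources of different orientations in a single run.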
Even before CMTs are available, we routinely run models based on the local tectonics; these provide a useful first estimate of the tsunami. Our runup comparisons show that Green's Law (i.e., 1-D runup estimation) works very well indeed, especially when computations are run at 2 arc-minute resolution. We are developing an experimental RIFT-based product showing expected runups on open coasts. While these estimates will necessarily be rather crude, they will be a great help to emergency managers trying to assess the hazard. RIFT is typically run with a single source, but it can already handle multiple sources. In particular, it can handle multiple sources of different orientations, as in 1993 Okushiri, or the décollement-splay combinations to be expected during major earthquakes on accretionary margins such as Nankai, Cascadia, and Middle America. Even as computers get faster and the number-crunching burden is off-loaded to GPUs, we are convinced there will still be a use for a fast, linearized modeling capability. Rather than applying scaling laws to a CMT, or distributing slip over 100x50 km sub-faults, for example, it would be preferable to model tsunamis using the output of a finite-fault analysis. To accomplish such a compute-bound task fast enough for warning purposes will demand a rapid, approximate technique like RIFT.
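Green's Law, the 1-D shoaling rule behind the runup estimates above, amounts to a one-line depth scaling: for a linear wave, amplitude varies as the inverse fourth root of water depth. The sketch below (function name hypothetical) shows the calculation.

```python
def greens_law_amplitude(a_offshore, h_offshore, h_coast):
    """Scale an offshore wave amplitude toward the coast via Green's Law.

    For a linear shoaling wave in one dimension, energy-flux conservation
    gives a2 = a1 * (h1 / h2) ** 0.25, where h1 and h2 are the water
    depths (m) at the offshore and coastal points.
    """
    return a_offshore * (h_offshore / h_coast) ** 0.25
```

For example, a 0.1 m wave at 4,000 m depth shoaled to 1 m depth amplifies by a factor of 4000 ** 0.25, roughly 8, giving an amplitude near 0.8 m on the open coast.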
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2011
- Bibcode: 2011AGUFMNH21C1525F
- Keywords:
  - 4564 OCEANOGRAPHY: PHYSICAL / Tsunamis and storm surges
  - 4341 NATURAL HAZARDS / Early warning systems
  - 4352 NATURAL HAZARDS / Interaction between science and disaster management authorities