Evaluating the performance of long short-term memory (LSTM) modeling for predicting streamflow in subcatchments of differing size and hydrologic characteristics in Reynolds Creek, Idaho
Abstract
With climate change driving more extreme and unprecedented events, it has become clear that accurate streamflow prediction will provide increasingly important information for managing water resources, for climatological studies, and for mitigating flooding impacts. Physical process-based models are useful tools for predicting streamflow in well-understood watersheds, but they have proven inadequate in areas where substantial calibration information is unavailable or incomplete. The long short-term memory (LSTM) model is a data-driven machine learning approach that has the potential to mitigate this deficiency by producing reliable streamflow predictions in ungauged basins (Kratzert 2019). While this method has been well tested as a lumped model on the CAMELS catchments, this study examines the performance of LSTM modeling for a distributed network of subcatchments in the Reynolds Creek Experimental Watershed in Idaho. In our experiment, we train the LSTM on the CAMELS catchments plus the largest stream gauge within the Reynolds Creek Experimental Watershed. This analysis shows that the LSTM outperforms the distributed, physically based National Water Model (NWM), but that its performance degrades as the drainage area of the test catchments decreases. We found that both the LSTM and the NWM tend to scale runoff predictions roughly with drainage area, and in doing so ignore sub-basin hydrologic processes such as snowmelt and a losing stream reach. We explore the possibility of training the LSTM to capture these sub-basin processes, with the goal of using it as a semi-distributed model.
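The abstract does not include code, but the training setup it describes (an LSTM driven by meteorological forcings and static catchment attributes, in the spirit of the CAMELS-based regional models of Kratzert et al.) can be sketched roughly as below. The framework (PyTorch), layer sizes, input dimensions, and variable names are illustrative assumptions, not details taken from the study.

```python
import torch
import torch.nn as nn

class StreamflowLSTM(nn.Module):
    """Minimal sketch of an LSTM for daily streamflow prediction.

    Dynamic meteorological forcings are concatenated with static
    catchment attributes at every time step, and the final hidden
    state is mapped to a single discharge value.
    """
    def __init__(self, n_dynamic: int, n_static: int, hidden_size: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_dynamic + n_static,
                            hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, dynamic: torch.Tensor, static: torch.Tensor) -> torch.Tensor:
        # dynamic: (batch, seq_len, n_dynamic) meteorological forcings
        # static:  (batch, n_static) catchment attributes, repeated over time
        static_rep = static.unsqueeze(1).expand(-1, dynamic.shape[1], -1)
        x = torch.cat([dynamic, static_rep], dim=-1)
        out, _ = self.lstm(x)
        # predict discharge from the final time step's hidden state
        return self.head(out[:, -1, :]).squeeze(-1)

# Hypothetical usage with random tensors standing in for CAMELS-style inputs
if __name__ == "__main__":
    model = StreamflowLSTM(n_dynamic=5, n_static=27)
    forcings = torch.randn(8, 365, 5)     # one year of daily forcings per sample
    attributes = torch.randn(8, 27)       # static catchment attributes
    q_pred = model(forcings, attributes)  # predicted discharge, shape (8,)
    print(q_pred.shape)
```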
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2021
- Bibcode: 2021AGUFM.H31B..06D