Understanding Differences in the Performance of a Precipitation-Runoff Model on the Regional Scale
Abstract
A common feature of precipitation-runoff models is that model performance usually deteriorates when moving from the calibration to the validation period, or from optimized to regionally estimated parameters. The Nash-Sutcliffe efficiency (NSE), probably the most widely used performance criterion, is a summary measure that gives no information about the type of errors that cause this deterioration. In this study we apply a daily, conceptual precipitation-runoff model in 49 Austrian basins for the cases of parameters optimized with NSE, regionally estimated parameters, and simulation of a validation period. For each case, NSE is split into three components representing the bias, the correlation, and a measure of the variability of flow. The results show that for optimized parameters the bias is well constrained by NSE, but trade-off problems arise between the correlation and the variability of flow, resulting in systematic deviations. In general, the correlation is the dominant component of NSE, but it does not explain the deterioration of model performance well. The biggest drop in NSE is observed in dry basins, where the bias is by far the most important component when moving from the calibration to the validation period, and especially when moving from optimized to regionally estimated parameters. In conclusion, this study demonstrates that the overall model performance is related to different sub-components, between which trade-off problems can arise, thus emphasizing the need for a multi-objective evaluation.
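The abstract does not state the exact decomposition used, but a standard way to split NSE into correlation, variability, and bias terms is the identity NSE = 2·α·r − α² − β_n², where r is the linear correlation, α the ratio of simulated to observed standard deviation, and β_n the bias normalized by the observed standard deviation (as in Gupta et al., 2009). A minimal sketch of that decomposition, assuming this formulation:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum of squared errors over
    the variance of the observations (times the sample size)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nse_components(obs, sim):
    """Split NSE into three interpretable components:
    r      - linear correlation between simulation and observation,
    alpha  - variability ratio sigma_sim / sigma_obs,
    beta_n - bias normalized by sigma_obs.
    They recombine exactly as NSE = 2*alpha*r - alpha**2 - beta_n**2
    (standard deviations with ddof=0, matching the NSE definition)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta_n = (sim.mean() - obs.mean()) / obs.std()
    return r, alpha, beta_n
```

Because NSE rewards matching the correlation term even at the cost of underestimating flow variability (the optimum of α given r < 1 lies at α = r), such a decomposition makes the trade-off between correlation and variability described in the abstract explicit.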
- Publication:
- AGU Fall Meeting Abstracts
- Pub Date:
- December 2008
- Bibcode:
- 2008AGUFM.H43D1036K
- Keywords:
- 1800 HYDROLOGY;
- 1804 Catchment;
- 1846 Model calibration (3333);
- 1847 Modeling;
- 1860 Streamflow