Hydrology Research In Press, Uncorrected Proof © IWA Publishing 2012 | doi:10.2166/nh.2012.038
Prediction intervals for rainfall–runoff models: raw error method and split-sample validation
John Ewen and Greg O'Donnell
School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK. E-mail: email@example.com
First received 9 February 2011; accepted in revised form 20 December 2011. Available online 3 May 2012
A method (the ghost method) is developed here that calculates prediction intervals for the discharge hydrograph for a river catchment. It uses a calibrated rainfall–runoff model and a dataset containing raw errors, such as residuals between observed and simulated discharge. When calculating prediction intervals, raw errors are selected from the dataset and applied to the simulated hydrograph. The selection is based on matching the simulated hydrological conditions to the hydrological conditions associated with the raw errors. To test the method, the split-sample calibration–validation approach advocated by Klemeš and widely used in hydrology is extended so that the data available for calibration and testing are divided into three parts, called periods A, B and C, rather than two. The rainfall–runoff model is calibrated on period A. On period B, the method for calculating prediction intervals is calibrated to give a specified high level of containment (e.g. 99% of observations lie within the prediction interval). Period C is used for testing, carried out in a way that reflects the performance expected under operational conditions for real-world problems. Prediction intervals are calculated for the Hodder catchment, northwest England.
Keywords: calibration; prediction interval; rainfall–runoff modelling; uncertainty
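The abstract's idea can be sketched in code: build a bank of raw errors from a calibration period, select errors whose associated conditions match the current simulation, and use their empirical quantiles to form an interval, then check containment on a held-out period. This is only an illustrative sketch, not the paper's actual algorithm: the nearest-neighbour matching on simulated flow, the function names, the parameter `k` and the synthetic data are all assumptions made here for demonstration.

```python
import numpy as np

def prediction_intervals(q_sim, q_bank, err_bank, k=25, coverage=0.99):
    """Raw-error prediction intervals (illustrative sketch).

    For each simulated flow, pick the k raw errors from the bank whose
    associated simulated flows are most similar (a simple stand-in for
    matching 'hydrological conditions'), then take empirical quantiles
    of those errors to bracket the simulation.
    """
    alpha = 1.0 - coverage
    lower, upper = [], []
    for q in q_sim:
        idx = np.argsort(np.abs(q_bank - q))[:k]       # nearest-neighbour match
        e = err_bank[idx]
        lower.append(q + np.quantile(e, alpha / 2.0))
        upper.append(q + np.quantile(e, 1.0 - alpha / 2.0))
    return np.array(lower), np.array(upper)

def containment(q_obs, lower, upper):
    """Fraction of observations lying inside the prediction interval."""
    return float(np.mean((q_obs >= lower) & (q_obs <= upper)))

# Synthetic stand-in for the split-sample scheme: the error bank plays the
# role of periods A/B; containment is then checked on a separate period C.
rng = np.random.default_rng(0)
q_bank = rng.gamma(2.0, 5.0, 2000)                     # simulated flows (bank)
err_bank = rng.normal(0.0, 0.1 * q_bank)               # heteroscedastic raw errors
q_sim_c = rng.gamma(2.0, 5.0, 300)                     # period C simulation
q_obs_c = q_sim_c + rng.normal(0.0, 0.1 * q_sim_c)     # period C "observations"

lo, hi = prediction_intervals(q_sim_c, q_bank, err_bank)
print(f"period C containment: {containment(q_obs_c, lo, hi):.2f}")
```

In the paper's scheme the interval-calculation step itself would first be tuned on period B (e.g. adjusting the selection rule or quantile levels until containment reaches the 99% target) before the one-shot test on period C.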