TY - JOUR
T1 - Worldwide evaluation of mean and extreme runoff from six global-scale hydrological models that account for human impacts
AU - Zaherpour, Jamal
AU - Gosling, Simon N.
AU - Mount, Nick
AU - Müller Schmied, Hannes
AU - Veldkamp, Ted I.E.
AU - Dankers, Rutger
AU - Eisner, Stephanie
AU - Gerten, Dieter
AU - Gudmundsson, Lukas
AU - Haddeland, Ingjerd
AU - Hanasaki, Naota
AU - Kim, Hyungjun
AU - Leng, Guoyong
AU - Liu, Junguo
AU - Masaki, Yoshimitsu
AU - Oki, Taikan
AU - Pokhrel, Yadu
AU - Satoh, Yusuke
AU - Schewe, Jacob
AU - Wada, Yoshihide
N1 - Generated from Scopus record by KAUST IRTS on 2023-09-18
PY - 2018/6/1
Y1 - 2018/6/1
AB - Global-scale hydrological models are routinely used to assess water scarcity, flood hazards and droughts worldwide. Recent efforts to incorporate anthropogenic activities in these models have enabled more realistic comparisons with observations. Here we evaluate simulations from an ensemble of six models participating in the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2a). We simulate monthly runoff in 40 catchments, spatially distributed across eight global hydrobelts. The performance of each model and the ensemble mean is examined with respect to their ability to replicate observed mean and extreme runoff under human-influenced conditions. Application of a novel integrated evaluation metric to quantify the models' ability to simulate time series of monthly runoff suggests that the models generally perform better in the wetter equatorial and northern hydrobelts than in drier southern hydrobelts. When model outputs are temporally aggregated to assess mean annual and extreme runoff, the models perform better. Nevertheless, we find a general trend in the majority of models towards the overestimation of mean annual runoff and all indicators of upper and lower extreme runoff. The models struggle to capture the timing of the seasonal cycle, particularly in northern hydrobelts, while in southern hydrobelts the models struggle to reproduce the magnitude of the seasonal cycle. It is noteworthy that over all hydrological indicators, the ensemble mean fails to perform better than any individual model - a finding that challenges the commonly held perception that model ensemble estimates deliver superior performance over individual models. The study highlights the need for continued model development and improvement. It also suggests that caution should be taken when summarising the simulations from a model ensemble based upon its mean output.
UR - https://iopscience.iop.org/article/10.1088/1748-9326/aac547
UR - http://www.scopus.com/inward/record.url?scp=85049770640&partnerID=8YFLogxK
U2 - 10.1088/1748-9326/aac547
DO - 10.1088/1748-9326/aac547
M3 - Article
SN - 1748-9326
VL - 13
JO - Environmental Research Letters
JF - Environmental Research Letters
IS - 6
ER -