Recent research has extended conventional hydrological algorithms to hexagonal grids and noted that hydrological modeling on a hexagonal mesh grid outperformed that on a rectangular grid. Among hydrological products, flow routing grids are the basis of many other hydrological simulations, such as flow accumulation, watershed delineation, and stream networks. However, most previous research adopted the D6 algorithm, which is analogous to the D8 algorithm on a rectangular grid, to produce flow routing. This paper explored four additional methods for generating flow directions on a hexagonal grid, based on four slope-aspect computation algorithms. We also developed and visualized hexagonal-grid-based hydrological operations, including flow accumulation, watershed delineation, and the computation of hydrological indices. Experiments were carried out across multiple grid resolutions and varying terrain roughness. The results showed that flow direction can vary among the different approaches, and that such variation can propagate to flow accumulation, watershed delineation, and hydrological index production, as reflected in cell-wise comparisons and visualizations. This research is of practical value for hydrological analysis on hexagonal hierarchical grids, such as Discrete Global Grid Systems, and the developed operations can be applied to real-world flood modeling.
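As a concrete illustration of the D6 idea, the following minimal Python sketch routes each hexagonal cell to its lowest neighbor and accumulates flow downstream. The axial-coordinate layout, function names, and data structures are our own assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed layout, not the paper's code) of D6 flow routing:
# each hexagonal cell drains to the lowest of its six neighbors, analogous
# to D8 steepest descent on a rectangular grid.

# Axial-coordinate offsets of the six neighbors of a hex cell.
HEX_NEIGHBORS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

def d6_flow_direction(elev, q, r):
    """Return the axial offset of the steepest-descent neighbor,
    or None for a pit or flat cell. `elev` maps (q, r) -> elevation."""
    best, best_drop = None, 0.0
    for dq, dr in HEX_NEIGHBORS:
        n = (q + dq, r + dr)
        if n in elev:
            drop = elev[(q, r)] - elev[n]  # all six neighbors are equidistant
            if drop > best_drop:
                best, best_drop = (dq, dr), drop
    return best

def flow_accumulation(elev):
    """Count the cells draining through each cell (itself included),
    processing cells from highest to lowest elevation."""
    acc = {cell: 1 for cell in elev}
    for (q, r) in sorted(elev, key=elev.get, reverse=True):
        d = d6_flow_direction(elev, q, r)
        if d is not None:
            acc[(q + d[0], r + d[1])] += acc[(q, r)]
    return acc
```

Because all six hexagonal neighbors share an edge and are equidistant from the cell center, the elevation drop alone ranks the candidate directions, avoiding the diagonal-distance ambiguity of D8.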
-
Abstract. Model intercomparison studies are carried out to test and compare the simulated outputs of various model setups over the same study domain. The Great Lakes region is such a domain of high public interest: it not only represents a challenging region to model, with its transboundary location, strong lake effects, and regions of strong human impact, but is also one of the most densely populated areas in the USA and Canada. This study brought together a wide range of researchers setting up their models of choice in a highly standardized experimental setup using the same geophysical datasets, forcings, common routing product, and locations of performance evaluation across the 1×10⁶ km² study domain. The study comprises 13 models covering a wide range of model types, from machine-learning-based, basin-wise, and subbasin-based to gridded models, that are either locally or globally calibrated or calibrated for each of six predefined regions of the watershed. Unlike most hydrologically focused model intercomparisons, this study not only compares models regarding their capability to simulate streamflow (Q) but also evaluates the quality of simulated actual evapotranspiration (AET), surface soil moisture (SSM), and snow water equivalent (SWE). The latter three outputs are compared against gridded reference datasets. The comparisons are performed in two ways: either by aggregating model outputs and the reference to basin level or by regridding all model outputs to the reference grid and comparing the model simulations at each grid cell. The main results of this study are as follows: The comparison of models regarding streamflow reveals the superior performance of the machine-learning-based model across all experiments; even for the most challenging spatiotemporal validation, the machine learning (ML) model outperforms every physically based model. While the locally calibrated models lead to good performance in calibration and temporal validation (even outperforming several regionally calibrated models), they lose performance when they are transferred to locations on which the model has not been calibrated. This is likely to improve with more advanced strategies for transferring these models in space. The regionally calibrated models, while losing less performance in spatial and spatiotemporal validation than locally calibrated models, exhibit low performance in highly regulated and urban areas and in agricultural regions of the USA. Comparisons of additional model outputs (AET, SSM, and SWE) against gridded reference datasets show that aggregating model outputs and the reference dataset to the basin scale can lead to different conclusions than a comparison at the native grid scale. The latter is deemed preferable, especially for variables with large spatial variability such as SWE. A multi-objective analysis of model performance across all variables (Q, AET, SSM, and SWE) reveals overall well-performing locally calibrated models (i.e., HYMOD2-lumped) and regionally calibrated models (i.e., MESH-SVS-Raven and GEM-Hydro-Watroute), for varying reasons. The machine-learning-based model was not included here, as it is not set up to simulate AET, SSM, and SWE. All basin-aggregated model outputs and observations for the model variables evaluated in this study are available on an interactive website that enables users to visualize the results and download the data and model outputs.
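To make the aggregation point concrete, the hypothetical Python sketch below (invented data, not the study's code) shows how aggregating to a basin mean can let compensating cell errors cancel, while a grid-cell comparison exposes them:

```python
import numpy as np

# Invented illustration: compare a simulated field against a gridded
# reference either cell by cell or after aggregating both to a basin mean.

rng = np.random.default_rng(0)
ref = rng.gamma(2.0, 50.0, size=(40, 40))          # reference SWE field (mm)
sim = ref + rng.normal(0.0, 40.0, size=ref.shape)  # simulation with cell-level noise

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

cellwise_error = rmse(sim, ref)            # keeps the spatial error structure
basin_bias = abs(sim.mean() - ref.mean())  # compensating cell errors cancel out
print(f"cell-wise RMSE: {cellwise_error:.1f} mm, basin-mean bias: {basin_bias:.1f} mm")
```

Here the basin-mean bias is near zero even though every cell is noisy, which mirrors why the grid-scale comparison is deemed preferable for spatially variable fields such as SWE.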
-
Abstract Streamflow sensitivity to different hydrologic processes varies in both space and time. This sensitivity is traditionally evaluated for the parameters specific to a given hydrologic model simulating streamflow. In this study, we apply a novel analysis over more than 3,000 basins across North America considering a blended hydrologic model structure, which includes not only parametric but also structural uncertainties. This enables seamless quantification of model process sensitivities and parameter sensitivities across a continuous set of models. It also leads to high-level conclusions about the importance of water cycle components for streamflow predictions, such as quickflow being the most sensitive process for streamflow simulations across the North American continent. The results for the 3,000 basins are used to derive an approximation of sensitivities based on physiographic and climatologic data, without the need to perform expensive sensitivity analyses. Detailed spatio-temporal inputs and results are shared through an interactive website.
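For orientation, the sketch below estimates a first-order variance-based sensitivity index, S_i = Var(E[Y|X_i]) / Var(Y), for a toy model by binning one input; the paper's blended-model analysis is far more elaborate, and all names and data here are illustrative.

```python
import numpy as np

# Toy variance-based sensitivity sketch: estimate the first-order index
# S_i = Var(E[Y | X_i]) / Var(Y) by conditioning on quantile bins of X_i.

def first_order_index(x, y, bins=20):
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    var_of_cond_mean = np.sum(weights * (cond_means - y.mean()) ** 2)
    return var_of_cond_mean / y.var()

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(size=(2, 10_000))
y = 4.0 * x1 + 0.5 * x2 + rng.normal(0.0, 0.1, 10_000)  # stand-in "model"
print(first_order_index(x1, y), first_order_index(x2, y))  # x1 dominates
```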
-
Abstract: A warmer climate impacts streamflows, and these changes need to be quantified to assess future risk and vulnerability and to implement efficient adaptation measures. The climate simulations from the fifth phase of the Coupled Model Intercomparison Project (CMIP5), which have been the basis of most such assessments over the past decade, are being gradually superseded by the more recent Coupled Model Intercomparison Project Phase 6 (CMIP6). Our study portrays the added value of the CMIP6 ensemble over CMIP5 in a first North America-wide comparison using 3,107 catchments. Results show a reduced spread of the CMIP6 ensemble compared to the CMIP5 ensemble for temperature and precipitation projections. In terms of flow indicators, the CMIP6-driven hydrological projections result in a smaller spread of future mean and high flow values, except for mountainous areas. Overall, we assess that the CMIP6 ensemble provides a narrower band of uncertainty of future climate projections, bringing more confidence to hydrological impact studies.
Plain Language Summary: Greenhouse gas emissions are causing the climate to warm significantly, which in turn impacts flows in rivers worldwide. To adapt to these changes, it is essential to quantify them and assess future risk and vulnerability. Climate models are the primary tools used to achieve this. The main data set that provides scientists with state-of-the-art climate model simulations is known as the Coupled Model Intercomparison Project (CMIP). The fifth phase of that project (CMIP5) has been used over the past decade in multiple hydrological studies to assess the impacts of climate change on streamflow. The more recent sixth phase (CMIP6) has started to generate projections, which raises the following question: is it necessary to update the hydrological impact studies performed using CMIP5 with the new CMIP6 models? To answer this question, a comparison between CMIP5 and CMIP6 using 3,107 catchments over North America was conducted. Results show that there is less spread in temperature and precipitation projections for CMIP6. This translates into a smaller spread of future mean, high, and low flow values, except for mountainous areas. Overall, we assess that using the CMIP6 data set would provide a narrower range of future climate projections, leading to more confident hydrological impact studies.
Key Points: A comparison of hydrological impacts using CMIP5 and CMIP6 ensembles is performed over 3,107 catchments in North America. The CMIP6 ensembles provide a narrower band of uncertainty for hydrological indicators in the future. It is recommended to update hydrological impact studies performed using CMIP5 with the CMIP6 ensemble.
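As a schematic of what a reduced ensemble spread means in practice (entirely invented numbers, not CMIP output), the sketch below contrasts the inter-model spread of two ensembles using a percentile range:

```python
import numpy as np

# Invented illustration of the spread comparison: a narrower inter-model
# percentile range means a narrower band of uncertainty for impact studies.

rng = np.random.default_rng(2)
cmip5_like = rng.normal(2.5, 1.2, size=25)  # projected warming per model (°C)
cmip6_like = rng.normal(2.6, 0.8, size=30)

def spread(ensemble):
    """Inter-model spread as the 5th-95th percentile range."""
    lo, hi = np.percentile(ensemble, [5, 95])
    return hi - lo

print(f"spread A: {spread(cmip5_like):.2f} °C, spread B: {spread(cmip6_like):.2f} °C")
```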
-
Whether disasters influence adaptation actions in cities is contested. Yet, the extant knowledge base primarily consists of single or small-N case studies, so there is no global overview of the evidence on disaster impacts and adaptation. Here, we use regression analysis to explore the effects of disaster frequency and severity on four adaptation action types in 549 cities. In countries with greater adaptive capacity, economic losses increase city-level actions targeting recently experienced disaster event types, as well as actions to strengthen general disaster preparedness. An increase in disaster frequency reduces actions targeting hazard types other than those that recently occurred, while human losses have few effects. Comparisons between cities across levels of adaptive capacity indicate a wealth effect. More affluent countries incur greater economic damages from disasters, but also have higher governance capacity, creating both incentives and opportunities for adaptation measures. While disaster frequency and severity had a limited impact on adaptation actions overall, results are sensitive to which disaster impacts, adaptation action types, and adaptive capacities are considered.
-
This is a review article invited by Atmosphere-Ocean to document the contributions of Recherche en Prévision Numérique (RPN) to Numerical Weather Prediction (NWP). It is structured as a historical review and documents RPN’s contributions to numerical methods, numerical modelling, data assimilation, and ensemble systems, with a look ahead to potential future systems. Through this review, we highlight the evolution of RPN’s contributions. We begin with early NWP efforts and continue through to environmental predictions with a broad range of applications. This synthesis is intended to be a helpful reference, consolidating developments and generating broader interest for future work on NWP in Canada.
-
Rangecroft et al. provide an important and interesting paper on the challenges of interdisciplinary research and fieldwork with participants in water resource management. The paper shows the challenges of interaction between their research areas and demonstrates the importance of how a researcher interacts with their selected study sites. My key points concern the use of different methodologies within and across the social and natural sciences, as well as the central challenge of who has the power to influence research directions. Research is not value-free and is strongly shaped by one's own training and knowledge, which needs to be acknowledged in research activities. Finally, an option might be to move beyond interdisciplinary constraints and to work within a stronger transdisciplinary framework. Water research very much needs to engage with non-academic people to understand the challenges and possible solutions.
-
Abstract The performance of adaptation measures depends on their robustness against various possible futures with varying climate change impacts. Such impacts are driven by both climatic and non-climatic drivers. Risk dynamics are therefore important, as the avoided risk determines the benefits of adaptation actions. It is argued that the integration of information on changing exposure and vulnerability is needed to make projections of future climate risk more realistic. In addition, many impact and vulnerability studies have used a top-down, rather technical approach. Whether adaptation action is feasible is determined by technical and physical possibilities on the ground, as well as local capacities, governance, and preferences. These determine the hard and soft limits of adaptation. Therefore, it is argued that risk metric outputs alone are not sufficient to predict adaptation outcomes, or to predict where adaptation is or is not feasible; they must be placed in the local context. Several current climate risk products would thus fall short of their promise to inform adaptation decision-making on the ground. Some steps are proposed to improve adaptation modelling in order to better incorporate these aspects.
-
For the past few decades, remote sensing has been a valuable tool for deriving global information on snow water equivalent (SWE), where products derived from space-borne passive microwave radiometers are favoured as they respond to snow depth, an important component of SWE. GlobSnow, a novel SWE product, has increased the accuracy of global-scale SWE estimates by combining remotely sensed radiometric data with other data sources, such as snow depth measured at climatic stations. However, research has demonstrated that passive microwave algorithms tend to underestimate SWE for deep snowpacks. Approaches have been proposed to correct for this underestimation; however, they are computationally intensive and complex to implement at the watershed scale. In this study, SWEmax information from the near-real-time 5-km GlobSnow product, provided by Copernicus and the European Space Agency (ESA), and the GlobSnow product at 25 km resolution were corrected using a simple bias correction approach for watershed-scale applications. This method, referred to as the Watershed Scale Correction (WSC) approach, estimates the bias based on the direct runoff that occurs during the spring melt season. Direct runoff is estimated on the one hand from SWEmax information as the main input, with infiltration also considered; on the other hand, an independent estimate of direct runoff is obtained from gauged stations. The discrepancy between these estimates yields the bias correction factor. This approach is advantageous because it exploits commonly available data, i.e., flow at gauged stations and remotely sensed/reanalysis products such as snow cover and precipitation. The WSC approach was applied to watersheds located in Eastern Canada. The average bias decreased from 33.5% with the existing GlobSnow product to 18% with the corrected product, using the recommended recursive filter coefficient β of 0.925 for baseflow separation. Results show the usefulness of integrating direct runoff in bias correction of the existing GlobSnow product at the watershed scale. In addition, the recursive filter approach to baseflow separation offers potential benefits for watersheds with limited in situ SWE measurements, further reducing overall uncertainty and bias. The WSC approach should be appealing for poorly monitored watersheds where SWE estimates are critical for hydropower production and where snowmelt can pose serious flood-related damages.
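The baseflow separation step uses a one-parameter recursive digital filter with β = 0.925. The sketch below implements the standard Lyne-Hollick form of such a filter as an illustration; the authors' exact implementation and pass scheme may differ.

```python
import numpy as np

# Sketch of a one-parameter recursive digital filter (Lyne-Hollick form)
# for baseflow separation; single forward pass, beta = 0.925 as recommended.

def baseflow_filter(q, beta=0.925):
    """Split total streamflow q into (baseflow, quickflow)."""
    quick = np.zeros_like(q, dtype=float)
    for k in range(1, len(q)):
        f = beta * quick[k - 1] + 0.5 * (1.0 + beta) * (q[k] - q[k - 1])
        quick[k] = min(max(f, 0.0), q[k])  # constrain quickflow to [0, q]
    return q - quick, quick

q = np.array([5.0, 5.2, 12.0, 30.0, 22.0, 14.0, 9.0, 7.0, 6.0])  # example hydrograph
baseflow, quickflow = baseflow_filter(q)
direct_runoff = quickflow.sum()  # quantity compared against the SWEmax-derived runoff
```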
-
Abstract: The estimation of sea levels corresponding to high return periods is crucial for coastal planning and for the design of coastal defenses. This paper deals with the use of historical observations, that is, events that occurred before the beginning of systematic tide gauge recordings, to improve the estimation of design sea levels. Most recent publications on statistical analyses of sea levels suggest that astronomical high tide levels and skew surges should be analyzed and modeled separately. Historical samples generally consist of observed record sea levels. Some extreme historical skew surges can easily remain unnoticed if they occur at low or moderate astronomical high tides and do not generate extreme sea levels. The exhaustiveness of historical skew surge series, which is an essential criterion for an unbiased statistical inference, can therefore not be guaranteed. This study proposes a model combining, in a single Bayesian inference procedure, information of two different natures for the calibration of the statistical distribution of skew surges: measured skew surges for the systematic period and extreme sea levels for the historical period. A data-based comparison of the proposed model with previously published approaches is presented, based on a large number of Monte Carlo simulations. The proposed model is applied to four locations on the French Atlantic and Channel coasts. Results indicate that the proposed model is more reliable and accurate than previously proposed methods for integrating historical records in coastal sea level or surge statistical analyses.
Plain Language Summary: Coastal facilities must be designed so as to be protected from extreme sea levels. Sea levels at high tide are the combination of astronomical high tides, which can be predicted, and skew surges. The estimation of the statistical distribution of skew surges is usually based on the skew surges measured by tide gauges and can be improved with the use of historical information, observations that occurred before the beginning of the tide gauge recordings. Extreme skew surges combined with low or moderate astronomical high tides would not necessarily generate extreme sea levels, and consequently some extreme historical skew surges could be missed. The exhaustiveness of historical information is an essential criterion for an unbiased estimation, but it cannot be guaranteed in the case of historical skew surges. The present study proposes to combine skew surges for the recent period and extreme sea levels for the historical period. The proposed model is compared to previously published approaches and appears to be more reliable and accurate. It is applied to four case studies on the French Atlantic and Channel coasts.
Key Points: The exhaustiveness of historical sea record information is demonstrated based on French Atlantic coast data. A comparative analysis of approaches to integrate historical information is carried out. The efficiency of a new method for the combination of systematic skew surges and historical records is verified.
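To illustrate the core idea of combining exact and censored information in a single likelihood (the paper does this within a Bayesian procedure; the maximum-likelihood sketch below, with an assumed Gumbel distribution and invented numbers, is only a schematic):

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Schematic combined likelihood: skew surges measured exactly during the
# systematic period, plus historical years where only exceedance (or not)
# of a perception threshold is known. Distribution and data are invented.

def neg_log_lik(params, systematic, hist_events, hist_years, threshold):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    d = stats.gumbel_r(loc=mu, scale=sigma)
    ll = d.logpdf(systematic).sum()                              # exact measurements
    ll += d.logpdf(hist_events).sum()                            # recorded historical extremes
    ll += (hist_years - len(hist_events)) * d.logcdf(threshold)  # censored non-exceedance years
    return -ll

systematic = np.array([0.35, 0.48, 0.29, 0.61, 0.44, 0.52, 0.38, 0.73])  # m
hist_events = np.array([0.95, 1.10])  # surges implied by historical records (m)
hist_years, threshold = 150, 0.90     # historical window and perception threshold

fit = minimize(neg_log_lik, x0=[systematic.mean(), systematic.std()],
               args=(systematic, hist_events, hist_years, threshold),
               method="Nelder-Mead")
print(fit.x)  # location and scale informed by both data sources
```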
-
In cold regions, ice jams frequently result in severe flooding due to a rapid rise in water levels upstream of the jam. Sudden floods resulting from ice jams threaten human safety and cause damage to property and infrastructure. Hence, ice-jam prediction tools can provide early warnings that increase response time and minimize possible damages. However, ice-jam prediction has always been a challenge, as no analytical method is available for this purpose. Nonetheless, ice jams form when certain hydro-meteorological conditions occur, from a few hours to a few days before the event. Ice-jam prediction can therefore be addressed as a binary multivariate time-series classification problem. Deep learning techniques have been widely used for time-series classification in many fields, such as finance, engineering, weather forecasting, and medicine. In this research, we successfully applied convolutional neural networks (CNN), long short-term memory (LSTM), and combined convolutional-long short-term memory (CNN-LSTM) networks to predict the formation of ice jams in 150 rivers in the province of Quebec (Canada). We also employed machine learning methods, including support vector machines (SVM), the k-nearest neighbors classifier (KNN), decision trees, and the multilayer perceptron (MLP), for this purpose. The hydro-meteorological variables (e.g., temperature, precipitation, and snow depth) along with the corresponding jam or no-jam events are used as model inputs. Ten percent of the data were set aside for testing, and 100 reshuffling and splitting iterations were applied to the remaining data, using 80% for training and 20% for validation. The deep learning models outperformed the machine learning models. The results show that the CNN-LSTM model yields the best results in validation and testing, with F1 scores of 0.82 and 0.92, respectively. This demonstrates that CNN and LSTM models are complementary, and that a combination of both further improves classification.
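For orientation, a minimal CNN-LSTM classifier of the kind described could look like the Keras sketch below; the window length, feature count, and layer sizes are illustrative assumptions, not the authors' architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Minimal CNN-LSTM sketch for binary time-series classification:
# convolutions extract local temporal patterns, the LSTM captures
# longer-range dependencies, and a sigmoid outputs P(jam).

N_TIMESTEPS, N_FEATURES = 30, 3  # e.g., 30 days of temperature, precipitation, snow depth

model = keras.Sequential([
    layers.Input(shape=(N_TIMESTEPS, N_FEATURES)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.Precision(), keras.metrics.Recall()])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)
```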
-
Abstract An intensity–duration–frequency (IDF) curve describes the relationship between rainfall intensity and duration for a given return period and location. Such curves are obtained through frequency analysis of rainfall data and are commonly used in infrastructure design, flood protection, water management, and urban drainage systems. However, they are typically available only at sparsely distributed locations, and data for other sites must be interpolated as the need arises. This paper describes how extreme precipitation of several durations can be interpolated to compute IDF curves over a large, sparse domain. In the absence of local data, a reconstruction of the historical meteorology is used as a covariate for interpolating extreme precipitation characteristics. This covariate is included in a hierarchical Bayesian spatial model for extreme precipitation. The model is especially well suited to a gridded covariate structure, enabling fast and precise computations. As an illustration, the methodology is used to construct IDF curves over Eastern Canada. An extensive cross-validation study shows that, at locations where data are available, the proposed method generally improves on the current practice of Environment and Climate Change Canada, which relies on a moment-based fit of the Gumbel extreme-value distribution.
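For reference, the baseline practice the paper compares against, a moment-based Gumbel fit, can be sketched as follows (data invented): the moment estimators are sigma = s*sqrt(6)/pi and mu = mean - gamma*sigma, with Euler's constant gamma ≈ 0.5772, and the T-year return level is the Gumbel quantile mu - sigma*ln(-ln(1 - 1/T)).

```python
import numpy as np

# Sketch of a moment-based Gumbel fit and return-level computation,
# the baseline practice cited in the abstract. Data are invented.

EULER_GAMMA = 0.5772156649

def gumbel_moments_fit(annual_max):
    """Method-of-moments estimates of the Gumbel location and scale."""
    sigma = np.std(annual_max, ddof=1) * np.sqrt(6.0) / np.pi
    mu = np.mean(annual_max) - EULER_GAMMA * sigma
    return mu, sigma

def return_level(mu, sigma, T):
    """Rainfall intensity with return period T years (Gumbel quantile)."""
    return mu - sigma * np.log(-np.log(1.0 - 1.0 / T))

annual_max = np.array([18.2, 25.1, 22.4, 31.0, 19.7, 27.3, 24.8, 35.6])  # mm/h
mu, sigma = gumbel_moments_fit(annual_max)
print({T: round(return_level(mu, sigma, T), 1) for T in (2, 10, 100)})
```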