Your search
Results: 76 resources
-
Coastal areas are particularly vulnerable to flooding from heavy rainfall, sea storm surge, or a combination of the two. Recent studies project higher intensity and frequency of heavy rains, and progressive sea level rise continuing over the next decades. Pre-emptive and optimal flood defence policies that adaptively address climate change are therefore needed. However, future climate projections carry significant uncertainty due to multiple factors: (a) future CO2 emission scenarios; (b) uncertainties in climate modelling; (c) discount factor changes due to market fluctuations; (d) uncertain migration and population growth dynamics. Here, a methodology is proposed to identify the optimal design and timing of flood defence structures in which uncertainties in 21st century climate projections are explicitly considered probabilistically. A multi-objective optimization model is developed to minimize both the cost of the flood defence infrastructure system and the flooding hydraulic risk expressed by the Expected Annual Damage (EAD). The decision variables of the multi-objective optimization problem are the size of the defence system and the timing of its implementation. The model accounts for the joint probability density functions of extreme rainfall, storm surge and sea level rise, as well as the damages, which are determined dynamically by the state of the defence system considering the probability and consequences of system failure, using a water depth–damage curve related to the land use (Corine Land Cover); water depths due to flooding are calculated with a hydraulic model. The non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. A case study is presented for the Pontina Plain (Lazio, Italy), a coastal region, originally a swamp reclaimed about a hundred years ago, that is rich in urban centers and farms. A set of optimal adaptation policies, quantifying the size and timing of flood defence constructions for different climate scenarios and belonging to the Pareto front obtained by NSGA-II, is identified for this case study to mitigate the risk of flooding and to aid decision makers.
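As a rough illustration of the risk objective in such a formulation, the Expected Annual Damage can be approximated by integrating a damage-versus-annual-exceedance-probability curve. The Python sketch below applies a trapezoidal rule to placeholder probability–damage pairs; it is not the paper's hydraulic or optimization model.

```python
import numpy as np

# Minimal sketch: EAD as the area under the damage vs. annual exceedance probability
# curve D(p), integrated with the trapezoidal rule. The probabilities and damages below
# are placeholders; in the paper they come from joint distributions of rainfall, storm
# surge and sea level rise, and from depth-damage curves per land-use class.
exceedance_prob = np.array([0.5, 0.2, 0.1, 0.02, 0.01, 0.002])   # 2- to 500-year events
damage_meur = np.array([0.0, 1.5, 4.0, 18.0, 35.0, 90.0])        # damage in million EUR

def expected_annual_damage(p, d):
    order = np.argsort(p)                   # integrate over increasing probability
    p, d = p[order], d[order]
    return np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(p))

print(f"EAD ~ {expected_annual_damage(exceedance_prob, damage_meur):.2f} M EUR/year")
```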
-
In recent years, understanding and improving the perception of flood risk has become an important aspect of flood risk management and flood risk reduction policies. The aim of this study was to explore perceptions of flood risk in the Petite Nation River watershed, located in southern Quebec, Canada. A survey was conducted with 130 residents living on a floodplain in this river watershed, which had been affected by floods in the spring of 2017. Participants were asked about different aspects related to flood risk, such as flood hazard experience, physical changes occurring in the environment, climate change, information accessibility, flood risk governance, adaptation measures, and the perception of losses. An analysis of these factors provided perspectives for improving flood risk communication and increasing public awareness of flood risk. The results indicated that the analyzed aspects are potentially important in terms of risk perception and showed that flood risk perceptions varied for each aspect analyzed. Overall, information regarding flood risk management is available and generally understandable, and the level of confidence in most authorities was good. However, the experiences of flood risk and the consequences of climate change on floods were not clear among the respondents. Regarding adaptation measures, the majority of participants tended to consider non-structural measures as more relevant than structural ones. Moreover, the long-term consequences of flooding on property values were of greatest concern. These results provide a snapshot of citizens' risk perceptions and their opinions on topics directly related to such risks.
-
Floods can be caused by heavy rainfall and the consequent overflow of rivers, which affects low-lying areas. Populated regions close to riverbeds are the sectors most affected by these disasters, which calls for modelling studies that generate different scenarios. This work presents a bibliometric analysis of the literature on flood modelling, focused on the research, risk, and assessment of these catastrophes, aiming to identify new trends and tools for their application in the prevention of these natural disasters. The methodology consists of: (i) search criteria and database selection, (ii) pre-processing of the selected data and software, and (iii) analysis and interpretation of the results. The results show a wide range of studies on dimensional analysis in different flood scenarios, which greatly benefit the development of flood prevention and risk strategies. In addition, this work provides insight into the different types of software and modelling approaches for flood analysis and simulation, and into the various trends and applications for future modelling.
-
Abstract Confluences are sites of intense turbulent mixing in fluvial systems. The large‐scale turbulent structures largely responsible for this mixing have been proposed to fall into three main classes: vertically orientated (Kelvin–Helmholtz) vortices, secondary flow helical cells and smaller, strongly coherent streamwise‐orientated vortices. Little is known concerning the prevalence and causal mechanisms of each class, their interactions with one another and their respective contributions to mixing. Historically, mixing processes have largely been interpreted through statistical moments derived from sparse pointwise flow field and passive scalar transport measurements, causing the contribution of the instantaneous flow field to be largely overlooked. To overcome the limited spatiotemporal resolution of traditional methods, herein we analyse aerial video of large‐scale turbulent structures made visible by turbidity gradients present along the mixing interface of a mesoscale confluence and complement our findings with eddy‐resolved numerical modelling. The fast, shallow main channel (Mitis) separates over the crest of the scour hole's avalanche face prior to colliding with the slow, deep tributary (Neigette), resulting in a streamwise‐orientated separation cell in the lee of the avalanche face. Nascent large‐scale Kelvin–Helmholtz instabilities form along the collision zone and expand as the high‐momentum, separated near‐surface flow of the Mitis pushes into them. Simultaneously, the strong downwelling of the Mitis is accompanied by strong upwelling of the Neigette. The upwelling Neigette results in ∼50% of the Neigette's discharge crossing the mixing interface over the short collision zone. Helical cells were not observed at the confluence. However, the downwelling Mitis, upwelling Neigette and separation cell interact to generate considerable streamwise vorticity on the Mitis side of the mixing interface. This streamwise vorticity is strongly coupled to the large‐scale Kelvin–Helmholtz instabilities, which greatly enhances mixing. Comparably complex interactions between large‐scale Kelvin–Helmholtz instabilities and coherent streamwise vortices are expected at other typical asymmetric confluences exhibiting a pronounced scour hole.
-
Abstract Large rivers can retain a substantial amount of nitrogen (N), particularly in submerged aquatic vegetation (SAV) meadows that may act as disproportionate control points for N retention. However, the temporal variation of N retention in large rivers remains unknown since past measurements were snapshots in time. Using high-frequency plant and NO3− measurements over the summers 2012–2017, we investigated how climate variation influenced N retention in a SAV meadow (∼10 km²) at the confluence zone of two agricultural tributaries entering the St. Lawrence River. Distinctive combinations of water temperature and level were recorded between years, ranging from extreme hot-low (2012) to cold-high (2017) summers (2°C and 1.4 m interannual range). Using an indicator of SAV biomass, we found that these extreme hot-low and cold-high years had reduced biomass compared to hot summers with intermediate levels. In addition, changes in main stem water levels were asynchronous with the tributary discharges that controlled NO3− inputs at the confluence. We estimated daily N uptake rates from a moored NO3− sensor and partitioned these into assimilatory and dissimilatory pathways. Measured rates were variable but among the highest reported in rivers (median 576 mg N m−2 d−1, range 60–3,893 mg N m−2 d−1), and SAV biomass promoted greater proportional retention and permanent N loss through denitrification. We estimated that the SAV meadow could retain up to 0.8 kt N per year and 87% of N inputs, but this valuable ecosystem service is contingent on how climate variations modulate both N loads and SAV biomass. Plain Language Summary: Large rivers remove significant amounts of nitrogen pollution generated by humans in waste waters and from fertilizers applied to agricultural lands. Underwater meadows of aquatic plants remove nitrogen particularly well. To keep the river clean, plants use the nitrogen themselves and promote conditions where bacteria can convert this pollution into a gas typically found in air. Measuring nitrogen removal in rivers is really difficult, and we do not know how climate conditions influence this removal or plant abundance. We successfully measured nitrogen pollution removal from an underwater plant meadow in a large river over six summers. We found that plant abundance and river nitrogen inputs were critical to determine how much pollution was removed, and that these were controlled by climatic conditions. Plant abundance was controlled by both water temperatures and levels. When water was warm and levels were neither too high nor too low, conditions were perfect for lots of plants to grow, which mainly stimulated bacteria that permanently eliminated nitrogen. We showed that the amount of nitrogen pollution removed over the summer by the meadow changes with climatic conditions but in general represents the amount produced by a city of half a million people. Key Points: Nitrogen retention and biomass were measured at a high resolution over six summers in a submerged aquatic vegetation meadow of a large river. Among the highest riverine nitrate uptake rates were recorded, and 47%–87% of loads were retained, with plants favoring denitrification. Interannual climate variations influenced nitrate retention by altering water levels, temperature, plant biomass, and tributary nitrate load.
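For readers unfamiliar with how areal uptake rates of this kind are derived, the following minimal sketch applies a simple input–output mass balance over a vegetated reach. The function name, numbers, and the 10 km² area are illustrative; the study's actual partitioning into assimilatory and dissimilatory pathways requires the high-frequency sensor and biomass data described above.

```python
# Minimal sketch, assuming a simple input-output mass balance: areal NO3- uptake in a
# vegetated reach estimated from discharge, inlet/outlet concentrations, and meadow area.
def areal_uptake_mg_n_m2_d(discharge_m3_s, c_in_mg_l, c_out_mg_l, area_m2):
    load_in = discharge_m3_s * c_in_mg_l * 1000.0 * 86400.0    # mg N per day (1 m3 = 1000 L)
    load_out = discharge_m3_s * c_out_mg_l * 1000.0 * 86400.0
    return (load_in - load_out) / area_m2

# Example: 300 m3/s, 0.45 -> 0.43 mg N/L across a ~10 km2 meadow
print(areal_uptake_mg_n_m2_d(300.0, 0.45, 0.43, 10e6))   # ~ 52 mg N m-2 d-1
```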
-
Atmospheric reanalysis data provide a numerical description of global and regional water cycles by combining models and observations. These datasets are increasingly valuable as a substitute for observations in regions where these are scarce. They could significantly contribute to reducing losses by feeding flood early warning systems that can inform the population and guide civil security action. We assessed the suitability of two precipitation and temperature reanalysis products readily available for predicting historic flooding of the La Chaudière River in Quebec: 1) Environment and Climate Change Canada's Regional Deterministic Reanalysis System (RDRS-v2) and 2) ERA5 from the Copernicus Climate Change Service. We exploited a multi-model hydrological ensemble prediction system that considers three sources of uncertainty (initial conditions, model structure, and weather forcing) to produce streamflow forecasts up to 5 days into the future with a time step of 3 hours. These results are compared to a provincial reference product based on gauge measurements from the Ministère de l'Environnement et de la Lutte contre les Changements Climatiques. Five conceptual hydrological models were calibrated with three different meteorological datasets (RDRS-v2, ERA5, and an observational gridded product) and fed with two ensemble weather forecast products: 1) the Regional Ensemble Prediction System (REPS) from Environment and Climate Change Canada and 2) the ensemble forecast issued by the European Centre for Medium-Range Weather Forecasts (ECMWF). Results reveal that calibrating the models with reanalysis data as input delivered higher accuracy in the streamflow simulations, providing a useful resource for flood modeling where no other data are available. However, although the selection of the reanalysis is a determinant in capturing flood volumes, the selection of the weather forecast is more critical in anticipating discharge threshold exceedances.
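A hedged sketch of how such an ensemble forecast supports threshold-exceedance anticipation: the fraction of ensemble members above a flood threshold gives the exceedance probability at each lead time. The member values, the threshold, and the 3-hour/5-day layout below are placeholders, not output of the study's prediction system.

```python
import numpy as np

# Minimal sketch: converting a multi-model ensemble streamflow forecast (members x lead
# times, 3-hour steps out to 5 days) into the probability of exceeding a flood threshold
# at each lead time. Values are synthetic placeholders.
rng = np.random.default_rng(42)
n_members, n_leads = 40, 40                    # 40 members, 40 x 3 h = 5 days
forecast = rng.gamma(shape=4.0, scale=60.0, size=(n_members, n_leads))  # m3/s

flood_threshold = 350.0                        # m3/s, hypothetical alert level
p_exceed = (forecast > flood_threshold).mean(axis=0)   # fraction of members above threshold

for lead in range(0, n_leads, 8):              # print once per 24 h
    print(f"+{(lead + 1) * 3:3d} h : P(Q > {flood_threshold:.0f} m3/s) = {p_exceed[lead]:.2f}")
```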
-
Recent research has extended conventional hydrological algorithms to hexagonal grids and noted that hydrological modeling on a hexagonal mesh grid outperformed that on a rectangular grid. Among hydrological products, flow routing grids are the basis of many other hydrological simulations, such as flow accumulation, watershed delineation, and stream networks. However, most previous research adopted the D6 algorithm, which is analogous to the D8 algorithm on a rectangular grid, to produce flow routing. This paper explores four other methods for generating flow directions in a hexagonal grid, based on four algorithms of slope aspect computation. We also developed and visualized hexagonal-grid-based hydrological operations, including flow accumulation, watershed delineation, and the computation of hydrological indices. Experiments were carried out across multiple grid resolutions with various degrees of terrain roughness. The results show that flow direction can vary among the different approaches, and that the impact of such variation can propagate to flow accumulation, watershed delineation, and hydrological index production, as reflected by the cell-wise comparisons and visualizations. This research is practical for hydrological analysis in hexagonal, hierarchical grids, such as Discrete Global Grid Systems, and the developed operations can be used in real-world flood modeling.
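The baseline D6 rule referred to above can be sketched as a steepest-descent search over the six hexagonal neighbours. The snippet below uses axial hex coordinates and a toy elevation dictionary; it illustrates only the D6 idea, not the four aspect-based variants investigated in the paper.

```python
# Minimal sketch of D6-style flow routing on a hexagonal grid: each cell drains to the
# neighbour (among its six) with the steepest downward slope. Cells are indexed with
# axial coordinates (q, r); `elev` maps (q, r) -> elevation.
AXIAL_NEIGHBOURS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def d6_flow_direction(elev, cell, spacing=1.0):
    q, r = cell
    best, best_slope = None, 0.0
    for dq, dr in AXIAL_NEIGHBOURS:
        nb = (q + dq, r + dr)
        if nb not in elev:
            continue
        slope = (elev[cell] - elev[nb]) / spacing   # centre-to-centre distance is uniform
        if slope > best_slope:
            best, best_slope = nb, slope
    return best    # None means a pit or a flat cell

elev = {(0, 0): 10.0, (1, 0): 9.2, (1, -1): 9.8, (0, -1): 10.5,
        (-1, 0): 10.1, (-1, 1): 9.9, (0, 1): 9.5}
print(d6_flow_direction(elev, (0, 0)))   # -> (1, 0), the steepest downslope neighbour
```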
-
Abstract. Model intercomparison studies are carried out to test and compare the simulated outputs of various model setups over the same study domain. The Great Lakes region is such a domain of high public interest as it not only resembles a challenging region to model with its transboundary location, strong lake effects, and regions of strong human impact but is also one of the most densely populated areas in the USA and Canada. This study brought together a wide range of researchers setting up their models of choice in a highly standardized experimental setup using the same geophysical datasets, forcings, common routing product, and locations of performance evaluation across the 1×10⁶ km² study domain. The study comprises 13 models covering a wide range of model types from machine-learning-based, basin-wise, subbasin-based, and gridded models that are either locally or globally calibrated or calibrated for one of each of the six predefined regions of the watershed. Unlike most hydrologically focused model intercomparisons, this study not only compares models regarding their capability to simulate streamflow (Q) but also evaluates the quality of simulated actual evapotranspiration (AET), surface soil moisture (SSM), and snow water equivalent (SWE). The latter three outputs are compared against gridded reference datasets. The comparisons are performed in two ways – either by aggregating model outputs and the reference to basin level or by regridding all model outputs to the reference grid and comparing the model simulations at each grid cell. The main results of this study are as follows: The comparison of models regarding streamflow reveals the superior quality of the machine-learning-based model in the performance of all experiments; even for the most challenging spatiotemporal validation, the machine learning (ML) model outperforms any other physically based model. While the locally calibrated models lead to good performance in calibration and temporal validation (even outperforming several regionally calibrated models), they lose performance when they are transferred to locations that the model has not been calibrated on. This is likely to be improved with more advanced strategies to transfer these models in space. The regionally calibrated models – while losing less performance in spatial and spatiotemporal validation than locally calibrated models – exhibit low performances in highly regulated and urban areas and agricultural regions in the USA. Comparisons of additional model outputs (AET, SSM, and SWE) against gridded reference datasets show that aggregating model outputs and the reference dataset to the basin scale can lead to different conclusions than a comparison at the native grid scale. The latter is deemed preferable, especially for variables with large spatial variability such as SWE. A multi-objective-based analysis of the model performances across all variables (Q, AET, SSM, and SWE) reveals overall well-performing locally calibrated models (i.e., HYMOD2-lumped) and regionally calibrated models (i.e., MESH-SVS-Raven and GEM-Hydro-Watroute) due to varying reasons. The machine-learning-based model was not included here as it is not set up to simulate AET, SSM, and SWE. All basin-aggregated model outputs and observations for the model variables evaluated in this study are available on an interactive website that enables users to visualize results and download the data and model outputs.
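For context, streamflow comparisons of this kind often rely on scores such as the Kling–Gupta efficiency; the sketch below shows that metric in Python, with the caveat that the exact metric set used in this study is not restated here.

```python
import numpy as np

# Minimal sketch of the Kling-Gupta efficiency (Gupta et al., 2009), a common score for
# comparing simulated and observed streamflow in model intercomparisons.
def kge(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
    alpha = sim.std() / obs.std()            # variability ratio
    beta = sim.mean() / obs.mean()           # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

print(kge([3.1, 4.0, 6.2, 5.1], [3.0, 4.2, 6.0, 5.5]))
```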
-
For the past few decades, remote sensing has been a valuable tool for deriving global information on snow water equivalent (SWE), where products derived from space-borne passive microwave radiometers are favoured as they respond to snow depth, an important component of SWE. GlobSnow, a novel SWE product, has increased the accuracy of global-scale SWE estimates by combining remotely sensed radiometric data with other physiographic characteristics, such as snow depth, as quantified by climatic stations. However, research has demonstrated that passive microwave algorithms tend to underestimate SWE for deep snowpack. Approaches have been proposed to correct for such underestimation; however, they are computer intensive and complex to implement at the watershed scale. In this study, SWEmax information from the near-real-time 5-km GlobSnow product, provided by Copernicus and the European Space Agency (ESA), and from the GlobSnow product at 25 km resolution was corrected using a simple bias correction approach for watershed-scale applications. This method, referred to as the Watershed Scale Correction (WSC) approach, estimates the bias based on the direct runoff that occurs during the spring melt season. Direct runoff is estimated, on the one hand, from SWEmax information as the main input, with infiltration also considered; on the other hand, an independent estimate of direct runoff is obtained from gauged stations. The discrepancy between these estimates yields the bias correction factor. This approach is advantageous as it exploits data that commonly exist, i.e., flow at gauged stations and remotely sensed/reanalysis data such as snow cover and precipitation. The WSC approach was applied to watersheds located in Eastern Canada. It was found that the average bias decreased from 33.5% with the existing GlobSnow product to 18% with the corrected product, using the recommended recursive filter coefficient β of 0.925 for baseflow separation. Results show the usefulness of integrating direct runoff for bias correction of the existing GlobSnow product at the watershed scale. In addition, the recursive filter approach for baseflow separation offers potential benefits for watersheds with limited in situ SWE measurements, further reducing overall uncertainties and bias. The WSC approach should be appealing for poorly monitored watersheds where SWE estimates are critical for hydropower production and where snowmelt can cause serious flood-related damage.
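The recursive digital filter mentioned above (coefficient β = 0.925) is commonly written in the one-parameter Lyne–Hollick form sketched below; whether the WSC approach uses exactly this single-pass formulation is an assumption, and the streamflow series is a placeholder.

```python
import numpy as np

# Minimal sketch of the one-parameter recursive digital filter (Lyne-Hollick form)
# frequently used for baseflow separation with a filter coefficient such as beta = 0.925.
# Direct runoff is the streamflow minus the filtered baseflow.
def baseflow_separation(q, beta=0.925):
    q = np.asarray(q, float)
    quick = np.zeros_like(q)
    for t in range(1, q.size):
        quick[t] = beta * quick[t - 1] + 0.5 * (1.0 + beta) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])      # constrain 0 <= quickflow <= Q
    baseflow = q - quick
    return baseflow, quick                             # quick = direct runoff component

q = [12, 14, 30, 55, 42, 28, 20, 16, 14, 13]           # m3/s, placeholder spring melt pulse
print(baseflow_separation(q)[1].round(1))
```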
-
Abstract The estimation of sea levels corresponding to high return periods is crucial for coastal planning and for the design of coastal defenses. This paper deals with the use of historical observations, that is, events that occurred before the beginning of the systematic tide gauge recordings, to improve the estimation of design sea levels. Most of the recent publications dealing with statistical analyses applied to sea levels suggest that astronomical high tide levels and skew surges should be analyzed and modeled separately. Historical samples generally consist of observed record sea levels. Some extreme historical skew surges can easily remain unnoticed if they occur at low or moderate astronomical high tides and do not generate extreme sea levels. The exhaustiveness of historical skew surge series, which is an essential criterion for an unbiased statistical inference, can therefore not be guaranteed. This study proposes a model combining, in a single Bayesian inference procedure, information of two different natures for the calibration of the statistical distribution of skew surges: measured skew surges for the systematic period and extreme sea levels for the historical period. A data-based comparison of the proposed model with previously published approaches is presented based on a large number of Monte Carlo simulations. The proposed model is applied to four locations on the French Atlantic and Channel coasts. Results indicate that the proposed model is more reliable and accurate than previously proposed methods that aim at the integration of historical records in coastal sea level or surge statistical analyses. Plain Language Summary: Coastal facilities must be designed so as to be protected from extreme sea levels. Sea levels at high tide are the combination of astronomical high tides, which can be predicted, and skew surges. The estimation of the statistical distribution of skew surges is usually based on the skew surges measured by tide gauges and can be improved with the use of historical information, observations that occurred before the beginning of the tide gauge recordings. Extreme skew surges combined with low or moderate astronomical high tides would not necessarily generate extreme sea levels, and consequently some extreme historical skew surges could be missed. The exhaustiveness of historical information is an essential criterion for an unbiased estimation, but it cannot be guaranteed in the case of historical skew surges. The present study proposes to combine skew surges for the recent period and extreme sea levels for the historical period. The proposed model is compared to previously published approaches and appears to be more reliable and accurate. The proposed model is applied to four case studies on the French Atlantic and Channel coasts. Key Points: The exhaustiveness of historical sea record information is demonstrated based on French Atlantic coast data. A comparative analysis of approaches to integrate historical information is carried out. The efficiency of a new method for the combination of systematic skew surges and historical records is verified.
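As a generic illustration of combining systematic and historical information in one likelihood (not the paper's joint skew-surge/tide Bayesian model), the sketch below mixes exact annual maxima with a censored contribution for historical years below a perception threshold, using a GEV distribution and placeholder values.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

# Generic sketch of a likelihood mixing systematic data (exact annual maxima) with
# historical information (h years during which the maximum is only known not to have
# exceeded a perception threshold, plus the recorded exceedances). This is the standard
# censored-likelihood idea, not the paper's skew-surge/tide model.
def neg_log_lik(params, systematic, hist_exceedances, h_years, threshold):
    shape, loc, scale = params
    if scale <= 0:
        return np.inf
    ll = genextreme.logpdf(systematic, shape, loc, scale).sum()
    ll += genextreme.logpdf(hist_exceedances, shape, loc, scale).sum()
    n_below = h_years - len(hist_exceedances)           # historical years with no record
    ll += n_below * genextreme.logcdf(threshold, shape, loc, scale)
    return -ll

systematic = np.array([0.62, 0.71, 0.55, 0.80, 0.67, 0.74, 0.90, 0.58])   # m, placeholders
hist_exc = np.array([1.05, 0.98])                                          # m, placeholders
res = minimize(neg_log_lik, x0=[0.0, 0.7, 0.1],
               args=(systematic, hist_exc, 150, 0.95), method="Nelder-Mead")
print(res.x)   # shape, location, scale of the fitted GEV
```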
-
In cold regions, ice jams frequently result in severe flooding due to a rapid rise in water levels upstream of the jam. Sudden floods resulting from ice jams threaten human safety and cause damage to properties and infrastructure. Hence, ice-jam prediction tools can give an early warning to increase response time and minimize the possible damages. However, ice-jam prediction has always been a challenge as there is no analytical method available for this purpose. Nonetheless, ice jams form when certain hydro-meteorological conditions occur a few hours to a few days before the event. Ice-jam prediction can therefore be addressed as a binary multivariate time-series classification problem. Deep learning techniques have been widely used for time-series classification in many fields such as finance, engineering, weather forecasting, and medicine. In this research, we successfully applied convolutional neural networks (CNN), long short-term memory (LSTM) networks, and combined convolutional–long short-term memory (CNN-LSTM) networks to predict the formation of ice jams in 150 rivers in the province of Quebec (Canada). We also employed machine learning methods including support vector machines (SVM), the k-nearest neighbors classifier (KNN), decision trees, and multilayer perceptrons (MLP) for this purpose. The hydro-meteorological variables (e.g., temperature, precipitation, and snow depth) along with the corresponding jam or no-jam events are used as model inputs. Ten percent of the data were set aside for testing, and 100 reshuffling and splitting iterations were applied to the remaining data, with 80% used for training and 20% for validation. The deep learning models outperformed the machine learning models. The results show that the CNN-LSTM model yields the best results in validation and testing, with F1 scores of 0.82 and 0.92, respectively. This demonstrates that CNN and LSTM models are complementary, and that a combination of both further improves classification.
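A minimal Keras sketch of the CNN-LSTM classifier idea is given below; the layer sizes, window length, and metrics are illustrative assumptions, not the study's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal sketch of a CNN-LSTM binary classifier for jam / no-jam prediction from a
# multivariate hydro-meteorological window (e.g., temperature, precipitation, snow depth
# over the preceding days). Layer sizes are illustrative.
def build_cnn_lstm(n_timesteps, n_features):
    model = models.Sequential([
        layers.Input(shape=(n_timesteps, n_features)),
        layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),        # probability of an ice-jam event
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_cnn_lstm(n_timesteps=30, n_features=3)   # 30 daily steps, 3 variables
model.summary()
```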
-
Abstract An intensity–duration–frequency (IDF) curve describes the relationship between rainfall intensity and duration for a given return period and location. Such curves are obtained through frequency analysis of rainfall data and are commonly used in infrastructure design, flood protection, water management, and urban drainage systems. However, they are typically available only at sparse locations. Data for other sites must be interpolated as the need arises. This paper describes how extreme precipitation of several durations can be interpolated to compute IDF curves on a large, sparse domain. In the absence of local data, a reconstruction of the historical meteorology is used as a covariate for interpolating extreme precipitation characteristics. This covariate is included in a hierarchical Bayesian spatial model for extreme precipitation. This model is especially well suited for a gridded covariate structure, thereby enabling fast and precise computations. As an illustration, the methodology is used to construct IDF curves over Eastern Canada. An extensive cross-validation study shows that, at locations where data are available, the proposed method generally improves on the current practice of Environment and Climate Change Canada, which relies on a moment-based fit of the Gumbel extreme-value distribution.
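The "current practice" baseline mentioned at the end, a moment-based Gumbel fit, can be sketched in a few lines; the annual-maxima values in the usage lines are placeholders.

```python
import numpy as np

EULER_GAMMA = 0.5772156649

# Minimal sketch of a method-of-moments Gumbel fit and the corresponding return level,
# i.e., one point on an IDF curve for a given duration.
def gumbel_moment_fit(annual_maxima):
    mean, std = np.mean(annual_maxima), np.std(annual_maxima, ddof=1)
    beta = std * np.sqrt(6.0) / np.pi          # scale
    mu = mean - EULER_GAMMA * beta             # location
    return mu, beta

def gumbel_return_level(mu, beta, T):
    """Intensity exceeded on average once every T years."""
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

amax = [22.0, 31.5, 27.8, 40.2, 25.1, 33.0, 29.4, 36.8]   # placeholder annual maxima (mm/h)
mu, beta = gumbel_moment_fit(amax)
print(gumbel_return_level(mu, beta, T=10))                 # 10-year intensity
```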
-
Extreme precipitation events can lead to disastrous floods, which are the most significant natural hazards in the Mediterranean regions. Therefore, a proper characterization of these events is crucial. Extreme events defined as annual maxima can be modeled with the generalized extreme value (GEV) distribution. Owing to spatial heterogeneity, the distribution of extremes is non-stationary in space. To take non-stationarity into account, the parameters of the GEV distribution can be viewed as functions of covariates that convey spatial information. Such functions may be implemented as a generalized linear model (GLM) or with a more flexible non-parametric non-linear model such as an artificial neural network (ANN). In this work, we evaluate several statistical models that combine the GEV distribution with a GLM or with an ANN for a spatial interpolation of the GEV parameters. Key issues are the proper selection of the complexity level of the ANN (i.e., the number of hidden units) and the proper selection of spatial covariates. Three sites are included in our study: a region in the French Mediterranean, the Cap Bon area in northeast Tunisia, and the Merguellil catchment in central Tunisia. The comparative analysis aims at assessing the genericity of state-of-the-art approaches for interpolating the distribution of extreme precipitation events.
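A hedged sketch of the GLM-type option (GEV location parameter as a linear function of a spatial covariate, fitted by maximum likelihood) is shown below on synthetic data; the ANN option would replace the linear predictor with a small network. Names and values are illustrative.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

# Minimal sketch of a GEV whose location parameter varies linearly with a spatial
# covariate (e.g., elevation), with all parameters estimated jointly by maximum likelihood.
def neg_log_lik(theta, maxima, covariate):
    b0, b1, log_scale, shape = theta
    loc = b0 + b1 * covariate                  # linear predictor for the location
    return -genextreme.logpdf(maxima, shape, loc, np.exp(log_scale)).sum()

rng = np.random.default_rng(1)
covariate = rng.uniform(0.0, 1.0, size=200)                  # standardized covariate
maxima = genextreme.rvs(-0.1, loc=20 + 15 * covariate, scale=5, random_state=1)

res = minimize(neg_log_lik, x0=[15.0, 5.0, np.log(4.0), 0.0],
               args=(maxima, covariate), method="Nelder-Mead")
print(res.x)   # intercept, covariate coefficient, log-scale, shape
```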
-
Extreme water temperatures have a significant impact on the physical, chemical, and biological properties of rivers. Environmental impact assessment requires accurate predictions of water temperature. The models used to estimate water temperature within this framework range from simple linear methods to more complex nonlinear models, at various spatial and temporal scales. However, water temperature has not been studied in a probabilistic (frequency) framework. It is therefore essential to estimate extreme thermal events using local frequency analysis (LFA). An LFA aims to predict the frequency and amplitude of these events at a given gauged location. When estimating quantiles, it is essential to consider the shape of the frequency distribution being considered. The first part of this thesis focuses on selecting the most appropriate probability distribution for river water temperatures. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are used to evaluate the goodness of fit of the statistical distributions. An L-moment ratio diagram (MRD) approach is also used to validate suitable candidate distributions. The results indicate that the Weibull distribution (W2) provides a good fit for extreme series from high-altitude stations, while the normal distribution (N) is most appropriate for low-altitude stations. This corresponds to the first article. In many parts of the world, river temperature data are limited in terms of spatial coverage and series length. Therefore, a regional frequency analysis (RFA) is needed to estimate river temperature extremes at ungauged or poorly monitored sites. Generally, RFA involves two main steps: the delineation of homogeneous regions (DHR), which identifies similar sites, and regional estimation (RE), which transfers information from the identified sites to the target site. The thermal index (TI) model is introduced in the context of RFA to estimate the extremes of the thermal regime. This method is analogous to the index flood (IF) model widely used in hydrology. The TI model accounts for the homogeneity of the appropriate frequency distribution within each region, which provides greater flexibility. This study compares the TI model with the multiple linear regression (MLR) approach. Results indicate that the TI model leads to better performance (Article 2). Then, the nonlinear canonical correlation analysis (NLCCA) approach is integrated into the DHR step, as presented in Chapter 4 of this manuscript (Article 3). It allows the complexity of thermal phenomena to be considered in the DHR step. A comparative study is then conducted to identify the more promising combinations (DHR-RE) that lead to the best estimation results; linear, semi-linear, and nonlinear combinations are considered in the two stages of the RFA procedure. The results of this study indicate that the nonlinear combination of NLCCA and the generalized additive model (GAM) produces the best overall performance. Finally, nonparametric models such as random forest (RF), extreme gradient boosting (XGBoost), and multivariate adaptive regression splines (MARS) are introduced in the context of RFA in order to estimate thermal quantiles and compare them to quantiles estimated using the semiparametric GAM model. The predictive potential of these models is determined by combining them with linear and nonlinear approaches, such as CCA and NLCCA, in the DHR step. The results indicate that NLCCA+GAM performs best, followed by CCA+MARS. This corresponds to Article 4.
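A minimal sketch of the index-type regional estimation idea (the thermal analogue of the index flood): the at-site quantile is a site index multiplied by a dimensionless regional growth curve fitted to pooled, index-scaled maxima. The Weibull choice below mirrors the W2 finding for high-altitude stations, but the helper functions and data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

# Regional growth curve: a 2-parameter Weibull (W2) fitted to pooled annual maxima that
# have already been divided by their at-site index (e.g., mean annual maximum temperature).
def regional_growth_curve(scaled_maxima, T):
    shape, loc, scale = weibull_min.fit(scaled_maxima, floc=0.0)
    return weibull_min.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)

def thermal_index_quantile(site_index, scaled_maxima, T=20):
    """Return-period-T water temperature quantile at a (possibly ungauged) site."""
    return site_index * regional_growth_curve(scaled_maxima, T)

rng = np.random.default_rng(0)
pooled = rng.weibull(8.0, size=300) * 1.1          # placeholder pooled, index-scaled maxima
print(thermal_index_quantile(site_index=24.5, scaled_maxima=pooled, T=20))
```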
-
This dissertation deals with quantifying uncertainty in wind resource assessment through the practices of uncertainty and sensitivity analyses. The objectives of this dissertation are to review and assess the quality of sensitivity analysis practices in wind resource assessment, to discourage the use of one-at-a-time sensitivity analysis and encourage the use of global sensitivity analysis instead, to introduce methods from other fields, and to showcase how uncertainty and global sensitivity analyses add value to the decision support process. This dissertation is organized in four articles: I. Review article of 102 feasibility studies: a review of sensitivity analysis practices in wind resource assessment, with a case study comparing one-at-a-time and global sensitivity analyses of the levelized cost of offshore wind energy; II. Research article: the quasi-Monte Carlo technique in global sensitivity analysis in wind resource assessment, with a case study on the United Arab Emirates; III. Research article: use of the Halphen distribution family for mean wind speed estimation, with a case study on Eastern Canada; IV. Application article: offshore wind resource assessment study of the Persian Gulf with QuikSCAT satellite data. Articles I-III have each resulted in a peer-reviewed publication, while Article IV is under submission. Article I offers classifications by sensitivity analysis output variable, method, application, country, and software. It reveals the lack of collective agreement on the definition of sensitivity analysis in the literature, the dominance of nonlinear models, and the prevalence of the one-at-a-time sensitivity analysis method, even though that method is only valid for linear models. Article I highlights gaps in the literature and provides evidence of the pitfalls that lead to costly, erroneous wind resource assessment results. Article II shows how global sensitivity analysis offers improvement by means of quasi-Monte Carlo, whose elaborate sampling designs enable faster convergence. Article III introduces the Halphen distribution family for the purpose of wind resource assessment. Article IV uses SeaWinds/QuikSCAT satellite data for offshore wind resource assessment of the Persian Gulf. The main contributions to the state of the art are as follows. To the best of the author's knowledge, no review of sensitivity analysis in wind resource assessment was previously available in the literature; Article I offers one. Article II bridges mathematical modelling and wind resource assessment by introducing the quasi-Monte Carlo technique to wind resource assessment. Article III introduces the Halphen distribution family from flood frequency analysis to wind resource assessment.
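As an illustration of the quasi-Monte Carlo ingredient, the sketch below draws a scrambled Sobol' sample with scipy.stats.qmc and propagates it through a toy levelized-cost function; the input ranges and the cost formula are placeholders, not those of the dissertation's case studies.

```python
import numpy as np
from scipy.stats import qmc

# Minimal sketch of quasi-Monte Carlo sampling (Sobol' sequence) for propagating input
# uncertainty through a simple, hypothetical levelized-cost model.
def lcoe(capex, opex, energy_gwh):
    return (capex + 20 * opex) / (20 * energy_gwh)        # cost per GWh over 20 years

sampler = qmc.Sobol(d=3, scramble=True, seed=0)
unit = sampler.random_base2(m=10)                          # 2**10 low-discrepancy points
lo, hi = [900.0, 20.0, 350.0], [1500.0, 45.0, 550.0]       # M$, M$/yr, GWh/yr (placeholders)
capex, opex, energy = qmc.scale(unit, lo, hi).T

costs = lcoe(capex, opex, energy)
print(f"mean LCOE ~ {costs.mean():.3f}, 95% range ~ "
      f"({np.percentile(costs, 2.5):.3f}, {np.percentile(costs, 97.5):.3f}) M$/GWh")
```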
-
Rivers are dynamic ecosystems that receive, transform, and export organic matter containing carbon (C), nitrogen (N), and phosphorus (P). Because of their large contact area between water and sediments, they offer a high potential for the transformation processes of these elements, in which the elements are often jointly involved. These transformations can remove elements from the water column and thus lower their concentrations, improving water quality. However, climatic conditions (discharge, temperature, light), land-use configuration (forest, urbanization, agriculture), and the duration of human activities on the land affect the quantity, composition, and proportion of C, N, and P delivered to receiving streams. In a context where a surplus of nutrients (N, P) can exceed the capacity of rivers to remove these elements from the water, and where climatic extremes are worsening because of climate change, this thesis highlights the role of rivers in C, N, and P dynamics for a better understanding of the response of lotic ecosystems to current and future pressures. The Rivière du Nord sequentially drains regions covered by forest, urbanization, and agriculture, and cycles through four distinct seasons, exposing it to contrasting land uses and climatic conditions. We sampled the forms of C, N, and P at 13 sites along the main stem (146 km), once per season for three years. In general, total N and P concentrations increased from upstream to downstream, consistent with the greater human activity in the lower half of the watershed, whereas total organic C concentrations remained constant regardless of season and year. The ecosystem C:N:P stoichiometry was therefore C-rich relative to N and P upstream and became nutrient-enriched downstream. The range (2319:119:1 to 368:60:1) covered almost the entire land-ocean continuum within a single river. Different forms of C, N, and P dominated the total stoichiometry depending on season and land use. In summer, N composition was dominated upstream by its dissolved organic form and downstream by nitrate, whereas in winter, ammonium and dissolved P prevailed along the entire continuum. Despite constant concentrations, the proportions of the molecules composing the C pool also differed with season and land use: summer was dominated by microbially degraded forms and winter by bio- and photo-labile forms. This points to a higher transformation potential of the river during the warm season than under ice, where the more reactive forms tended to accumulate. The composition of C upstream was also distinct from that downstream, with a single abrupt change occurring between the forested section and the urban and agricultural section. These compositional changes were not present during the typical spring freshet that was sampled, but during the flood of historical magnitude we observed new inputs of molecules originating either from terrestrial sources normally disconnected from the river network or from sewer overflows.
The influence of natural and anthropogenic factors was also reflected in the historical riverine fluxes of C, N, and P (1980-2020). Precipitation best explained the C fluxes, as well as the N fluxes in the pristine section. Historical anthropogenic N inputs to the land (required to support the human population and agricultural activities) strongly explained the upward temporal trend in riverine N fluxes in the urban section. Over the past four decades, slightly more than a third of the N inputs to the land were delivered to the river annually, suggesting that the urban source of anthropogenic N is still poorly managed. The lack of correlation between riverine P fluxes and precipitation or anthropogenic P inputs to the land can be explained by the wastewater treatment plants installed in the region in the late 1990s, which cut the P delivered to the river almost in half. The variation in these fluxes was reflected in the historical ecosystem stoichiometry, which shifted from 130:23:1 in 1980 to 554:87:1 in 2007-08 following the effect of the treatment plants and the increase in N. Across the historical, spatial, and seasonal axes, this thesis contributes to the understanding of the role of rivers in receiving, transforming, and exporting C, N, and P. Combined with concentrations, the ecosystem stoichiometry approach offers a way to integrate inputs and losses of the elements and to study them jointly at the watershed scale. Then, because certain forms of C, N, and P are associated with specific terrestrial sources or with certain types of transformations, including them in a conceptual framework that combines climatic extremes and contrasting land uses provides insight into the outcome of elemental sources and transformations. Finally, the decadal trends in riverine C, N, and P show the influence of natural and anthropogenic factors on the historical ecosystem stoichiometry of a river.
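For reference, ecosystem C:N:P ratios such as 368:60:1 are molar ratios normalized to phosphorus; the short sketch below converts mass concentrations into such a ratio, with placeholder concentrations.

```python
# Minimal sketch (standard atomic masses) converting mass concentrations (mg/L) of total
# organic C, total N, and total P into a molar C:N:P ratio normalized to P = 1, the
# convention used for ecosystem stoichiometry such as 368:60:1.
ATOMIC_MASS = {"C": 12.011, "N": 14.007, "P": 30.974}  # g/mol

def cnp_ratio(c_mg_l, n_mg_l, p_mg_l):
    molar = {el: conc / ATOMIC_MASS[el]
             for el, conc in zip("CNP", (c_mg_l, n_mg_l, p_mg_l))}
    return tuple(round(molar[el] / molar["P"], 1) for el in "CNP")

print(cnp_ratio(6.0, 1.1, 0.04))  # placeholder values for a C-rich upstream sample
```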
-
Given the need to update flood inundation maps and to minimize the associated costs (data collection and human resources), simplified alternative methods to classical hydrodynamic modelling are being developed. One of the simplified methods built to fulfill this need is the terrain-based Height Above the Nearest Drainage (HAND) approach, which relies solely on a digital elevation model (DEM) and a river network. This approach was implemented in PHYSITEL, a specialized GIS for distributed hydrological models. For a given river reach and water height, HAND can provide a first-order delineation of the inundated areas within a watershed. In addition, coupling the information provided by HAND with the Manning equation allows the construction of a synthetic rating curve for any homogeneous river reach where bathymetric data are not available. Since this synthetic rating curve approach had previously been validated in part on large watersheds, this study tested it on two small watersheds: the 133-km² La Raquette River watershed and the 552-km² Saint Charles River watershed. Three gauging stations on each basin provided the basic data needed for the calibration process. The effectiveness and adaptability of the approach were assessed as a function of available data, computational time, and accuracy, measured using the bias and the root mean squared error (RMSE). The uncertainties and sensitivities of the approach were analyzed with respect to spatial resolution and the lack of bathymetric data. In addition, innovative analyses were carried out for the application of the HAND synthetic rating curve approach: a global sensitivity analysis was performed to inform the calibration process, and a Froude-number-based criterion was applied to verify that the assumptions underlying the Manning equation hold on each river reach of a watershed. Using high-resolution DEMs (<5 m/pixel), synthetic rating curves were obtained with biases below ±20% relative to in-situ rating curves. Furthermore, applying a curve selection criterion of ±5% bias relative to the observed rating curve yielded synthetic rating curves with normalized mean squared errors between 0.03 and 0.62. The proposed approach was therefore deemed appropriate for deriving synthetic rating curves and supporting the delineation of flood-risk areas in small watersheds, while accounting for the uncertainties associated with applying a low-complexity model.
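A minimal sketch of the HAND-plus-Manning idea: for each trial water height, a HAND-derived geometry supplies a wetted area and perimeter, and the Manning equation converts them into a discharge, yielding a synthetic (h, Q) rating curve. The geometry function below is a hypothetical placeholder for the HAND-derived hydraulic properties.

```python
import numpy as np

# Minimal sketch, assuming a simplified HAND-derived geometry: geometry(h) returns the
# wetted cross-sectional area A(h) and wetted perimeter P(h) for a reach at water height h.
def manning_discharge(area, wetted_perimeter, slope, n=0.035):
    """Q = (1/n) * A * R^(2/3) * S^(1/2), with R = A/P the hydraulic radius."""
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope)

def synthetic_rating_curve(heights, geometry, slope, n=0.035):
    """geometry(h) -> (A, P); returns a list of (h, Q) pairs."""
    return [(h, manning_discharge(*geometry(h), slope, n)) for h in heights]

# Placeholder rectangular-channel geometry, 20 m wide, for illustration only
rect = lambda h: (20.0 * h, 20.0 + 2.0 * h)
print(synthetic_rating_curve([0.5, 1.0, 1.5, 2.0], rect, slope=0.001))
```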
-
The Chaudière River, located south of Québec City, is prone to flooding caused by ice jams. Floods have been recorded from 1896 to the present day despite the commissioning, in 1967, of an ice control structure (ICS) 3 kilometres upstream of the town of Saint-Georges-de-Beauce to reduce ice-related flooding in the most at-risk reach of the Chaudière River. Ice-jam flooding therefore remains a recurring problem that regularly affects 8 towns along the 90-kilometre reach downstream of the ICS. As part of a governmental flood hazard program initiated by the ministère des Affaires municipales et de l'Habitation (MAMH), Université Laval was mandated to evaluate ice-affected flood levels on the Chaudière River. Ice-jam modelling combined with historical ice-jam observations is used to determine ice-jam flood levels. The adopted approach consists of controlling a river hydraulic simulation model, specifically the HEC-RAS module, with an external Python script to generate a Monte Carlo (MOCA) distribution of ice-jam events along the river reach under study. Mechanical parameters such as the friction angle, porosity, and critical shear stress velocities are also randomly assigned by the script within a bounded range. The physical and hydrological parameters assigned to each event are chosen randomly according to probabilities estimated from historical observations, namely the discharge computed at the ICS, the location of the jam, the length of the jam, and the freezing degree-days (ice thickness). Flood levels for the 2-, 20-, 100- and 350-year return periods are then determined using the empirical Gringorten statistical formula, followed by an adjustment to account for external factors not considered by MOCA. These ice-affected flood levels are compared with the open-water levels determined by the classical method. The project shows that the water levels computed in the presence of ice exceed the open-water levels for the towns upstream of Saint-Joseph-de-Beauce. Combining the ice-affected and open-water levels using the FEMA equation shows that the return period associated with reaching a given elevation threshold decreases, and the associated flood probabilities therefore increase. This thesis is the first scientific work presenting a complete validation of the hydrotechnical approach using in-situ values of discharge, cumulative freezing degree-days, and ice-jam location and length for the determination of ice-jam flood levels. The flood levels computed with the MOCA method are compared with historical data in the Chaudière River study reach. The study highlights the limitations of, and the conditions required for, the use of this method. This research project also shows for the first time that the hydrotechnical approach can produce frequency curves of ice-affected water levels that can be used for regulatory purposes in Quebec.
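The Gringorten formula mentioned above assigns empirical exceedance probabilities to ranked events; a minimal sketch, with placeholder Monte Carlo water levels, is given below.

```python
import numpy as np

# Minimal sketch of the Gringorten plotting-position formula used to assign empirical
# exceedance probabilities (and hence return periods) to ranked annual maximum
# ice-affected water levels generated by a Monte Carlo procedure.
def gringorten_return_periods(levels):
    levels = np.sort(np.asarray(levels, float))[::-1]     # descending: rank 1 = largest
    ranks = np.arange(1, levels.size + 1)
    exceedance_prob = (ranks - 0.44) / (levels.size + 0.12)
    return levels, 1.0 / exceedance_prob                   # return period in years

levels = [172.4, 173.1, 171.8, 174.0, 172.9, 173.6]        # placeholder simulated levels (m)
lv, T = gringorten_return_periods(levels)
print(list(zip(lv.round(1), T.round(1))))
```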
-
Extreme flood events continue to be one of the most threatening natural disasters around the world due to their pronounced social, environmental and economic impacts. Changes in the magnitude and frequency of floods have been documented in recent years, and it is expected that a changing climate will continue to affect their occurrence. Therefore, understanding the impacts of climate change through hydroclimatic simulations has become essential to prepare adaptation strategies for the future. However, the confidence in flood projections is still low due to the considerable uncertainties associated with their simulations and the complexity of local features influencing these events. The main objective of this doctoral thesis is thus to improve our understanding of the modelling uncertainties associated with the generation of flood projections, and to evaluate strategies to reduce these uncertainties and increase our confidence in flood simulations. To address the main objective, this project aimed at (1) quantifying the uncertainty contributions of different elements involved in the modelling chain used to produce flood projections and (2) evaluating the effects of different strategies to reduce the uncertainties associated with climate and hydrological models in regions with diverse hydroclimatic conditions. A total of 96 basins located in Quebec (basins dominated by snow-related processes) and Mexico (basins dominated by rain-related processes), covering a wide range of climatic and hydrological regimes, were included in the study. The first stage consisted of decomposing the uncertainty contributions of four main uncertainty sources involved in the generation of flood projections: (1) climate models, (2) post-processing methods, (3) hydrological models, and (4) probability distributions used in flood frequency analyses. A variance decomposition method allowed quantifying and ranking the influence of each uncertainty source on floods over the two regions studied and by season. The results showed that the uncertainty contributions of each source vary over the different regions and seasons. Regions and seasons dominated by rain showed climate models as the main uncertainty source, while those dominated by snowmelt showed hydrological models as the main uncertainty contributor. These findings not only show the dangers of relying on single climate and hydrological models, but also underline the importance of regional uncertainty analyses. The second stage of this research project focused on evaluating strategies to reduce the uncertainties arising from hydrological models on flood projections. This stage includes two steps: (1) the analysis of the reliability of hydrological model calibration under a changing climate and (2) the evaluation of the effects of weighting hydrological simulations on flood projections. To address the first part, different calibration strategies were tested and evaluated using five conceptual lumped hydrological models under contrasting climate conditions, with dataset lengths varying from 2 to 21 years. The results revealed that the climatic conditions of the calibration data have larger impacts on hydrological model performance than the lengths of the climate time series. Moreover, changes in precipitation generally showed greater impacts than changes in temperature across all the different basins.
These results suggest that shorter calibration and validation periods that are more representative of possible changes in climatic conditions could be more appropriate for climate change impact studies. Following these findings, the effects of different weighting strategies based on the robustness of hydrological models (in contrasting climatic conditions) were assessed on flood projections of the different studied basins. Weighting the five hydrological models based on their robustness showed some improvements over the traditional equal-weighting approach, particularly under warmer and drier conditions. Moreover, the results showed that the difference between these approaches was more pronounced for flood projections, as contrasting flood magnitudes and climate change signals were observed between both approaches. Additional analyses performed over four selected basins using a semi-distributed and more physically based hydrological model suggested that this type of model might have added value when simulating low flows, and high flows in small basins (of about 500 km²). These results highlight once again the importance of working with ensembles of hydrological models and present the potential impacts of weighting hydrological models in climate change impact studies. The final stage of this study focused on evaluating the impacts of weighting climate simulations on flood projections. The different weighting strategies tested showed that weighting climate simulations can improve the mean hydrograph representation compared to the traditional model "democracy" approach. This improvement was mainly observed with a weighting approach proposed in this thesis that evaluates the skill of the seasonal simulated streamflow against observations. The results also revealed that weighting climate simulations based on their performance can: (1) impact the flood magnitudes, (2) impact the climate change signals, and (3) reduce the uncertainty spread of the resulting flood projections. These effects were particularly clear in rain-dominated basins, where climate modelling uncertainty plays a main role. These findings emphasize the need to reconsider the traditional climate model democracy approach, especially when studying processes with higher levels of climatic uncertainty. Finally, the implications of the obtained results were discussed. This section puts the main findings into perspective and identifies different ways forward to keep improving the understanding of climate change impacts in hydrology and increasing our confidence in the flood projections that are essential to guide adaptation strategies for the future.
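A hedged sketch of the variance decomposition idea used in the first stage: flood quantiles simulated for every climate-model/hydrological-model combination are split into main-effect and interaction variance shares. The array layout and the two-factor setup below are simplifications of the full modelling chain (which also includes post-processing methods and probability distributions).

```python
import numpy as np

# Minimal sketch (hypothetical setup): decompose the ensemble variance of flood quantiles
# into contributions from climate models, hydrological models, and their interaction.
def decompose_variance(q):
    """q: 2-D array of flood quantiles, shape (n_climate_models, n_hydro_models)."""
    grand = q.mean()
    climate_effect = q.mean(axis=1) - grand          # one value per climate model
    hydro_effect = q.mean(axis=0) - grand            # one value per hydrological model
    var_total = q.var()
    var_climate = np.mean(climate_effect ** 2)
    var_hydro = np.mean(hydro_effect ** 2)
    var_interaction = var_total - var_climate - var_hydro
    return {k: v / var_total for k, v in
            {"climate": var_climate, "hydrology": var_hydro,
             "interaction": var_interaction}.items()}

rng = np.random.default_rng(3)
q100 = (500 + rng.normal(0, 80, size=(6, 1))         # synthetic climate-model effect
            + rng.normal(0, 40, size=(1, 5))          # synthetic hydrological-model effect
            + rng.normal(0, 20, size=(6, 5)))         # residual/interaction noise
print(decompose_variance(q100))
```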