In snow-prone regions, snowmelt is one of the main drivers of runoff. Operational flood forecasting and mitigation require the spatial distribution of snow water equivalent (SWE) in near real time. In this context, in situ observations of SWE provide valuable information. Nonetheless, the high spatial variability of snowpack characteristics makes some form of snow modelling necessary to obtain a spatially continuous estimate. Data assimilation is thus a useful approach for combining information from observations and modelling in near real time.

For example, at the provincial government of Quebec (eastern Canada), the HYDROTEL Snowpack Model is applied on a daily basis over a 0.1-degree resolution mesh covering the whole province. The modelled SWE is corrected in real time with in situ manual snow surveys, which are assimilated using a spatial particle filter (Cantet et al., 2019). This assimilation method improves the reliability of SWE estimation at ungauged sites.

The availability of manual snow surveys is, however, limited in both space and time: these measurements are conducted on a bi-weekly basis at a limited number of sites. To further improve the temporal and spatial observation coverage, alternative sources of data should be considered.

In this research, it is hypothesized that data gathered by SR50 sonic sensors can be assimilated in the spatial particle filter to improve the SWE estimation. These automatic sensors provide hourly measurements of snow depth and have been deployed in Quebec since 2005. Beforehand, probabilistic SWE estimates were derived from the SR50 snow depth measurements using an ensemble of artificial neural networks (Odry et al., 2019). Given the nature of the data and the conversion process, the uncertainty associated with this dataset is assumed to be larger than for the manual snow surveys. The objective of the research is to evaluate the potential benefit of adding this lower-quality information to the assimilation framework.

The addition of frequent but uncertain data to the spatial particle filter required some adjustments in terms of assimilation frequency and particle resampling. A reordering of the particles was implemented to maintain spatial coherence between the particles; a minimal sketch of this update step is given after the references. With these changes, assimilating both manual snow surveys and SR50 data in the spatial particle filter reached performances comparable to those of the initial particle filter, which combines only the model and manual snow surveys for estimating SWE at ungauged sites. However, the addition of SR50 data in the particle filter provides information that is continuous in time, between manual snow surveys.

References:

Cantet, P., Boucher, M.-A., Lachance-Cloutier, S., Turcotte, R., Fortin, V. (2019). Using a particle filter to estimate the spatial distribution of the snowpack water equivalent. Journal of Hydrometeorology, 20.

Odry, J., Boucher, M.-A., Cantet, P., Lachance-Cloutier, S., Turcotte, R., St-Louis, P.-Y. (2019). Using artificial neural networks to estimate snow water equivalent from snow depth. Canadian Water Resources Journal (under review).
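To make the assimilation step concrete, the following minimal sketch illustrates a particle-filter update at a single grid cell, assuming Gaussian observation errors and rank-based reordering as a simple stand-in for the spatial-coherence step described above. It is an illustration, not the HYDROTEL implementation, and all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def assimilate_swe(particles, obs, obs_sigma):
    """One particle-filter update at a single grid cell.

    particles : (n,) array of modelled SWE values (mm)
    obs       : observed SWE (mm), e.g. converted from SR50 snow depth
    obs_sigma : observation error std dev (mm); larger for SR50-derived
                SWE than for manual snow surveys
    """
    # Gaussian likelihood of each particle given the observation
    weights = np.exp(-0.5 * ((particles - obs) / obs_sigma) ** 2)
    weights /= weights.sum()

    # Multinomial resampling: duplicate likely particles, drop unlikely ones
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    resampled = particles[idx]

    # Rank reordering: sorting every cell's ensemble keeps particle k
    # spatially coherent (it stays the "k-th wettest" scenario everywhere)
    return np.sort(resampled)

# Toy example: one prior ensemble updated with two observation types
ensemble = rng.normal(150.0, 30.0, size=100)
print(assimilate_swe(ensemble, obs=180.0, obs_sigma=10.0).mean())  # manual survey
print(assimilate_swe(ensemble, obs=180.0, obs_sigma=30.0).mean())  # SR50-derived
```

Note how the larger observation error assigned to the SR50-derived SWE pulls the ensemble toward the observation less strongly, which is the intended behaviour when adding lower-quality data.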
-
ABSTRACT: Intensity-Duration-Frequency (IDF) curves are the primary mathematical tool used by engineers for modeling extreme precipitation at a given location. Reliable IDF curves require robust statistical methods. The use of scaling models allows for more precise estimation of IDF curves by reducing the number of parameters needed to model the extreme behavior of the rainfall process. A scaling model assumes the existence of a relationship between the distributions of the annual intensity maxima across the various accumulation durations. Diverse scaling relationships exist, giving rise to a variety of models. In this master's thesis, a statistical testing procedure is developed to determine whether a given scaling model is suitable for constructing IDF curves at a given location, based on historical annual rain intensity maxima observed at that location. The developed test is an extension of the Anderson-Darling goodness-of-fit test. It involves splitting the database into training and validation sets. The training set is used to estimate the parameters of the model of interest, and the validation set is used in the calculation of the test statistic. The asymptotic distribution of the test statistic under the null hypothesis is established in a general framework. Quantiles of this theoretical distribution can be approximated, allowing for the analytical calculation of the rejection region as well as the p-value of the test. In the case of IDF curves, the null hypothesis states that a target scaling model is adequate. The data consist of precipitation intensity maxima. Data corresponding to a duration fixed by the user are selected to constitute the validation set. When the chosen duration is the smallest one, the performance of the test is validated through a simulation study. Under the null hypothesis, the test rejects at the nominal rate even for small samples. Under an alternative hypothesis (i.e., when the scaling model used to generate the data deviates from the target model), the rejection rate increases with the discrepancy between the models as well as with the sample size. When applied to historical data, the test suggests the use of different scaling models at Montréal-Pierre Elliott Trudeau International Airport and at the Vancouver Harbour aerodrome.
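A minimal sketch of the train/validation testing idea is given below, assuming a GEV model for the annual maxima and a parametric bootstrap in place of the asymptotic null distribution derived in the thesis; the data and parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def anderson_darling(sample, cdf):
    """Anderson-Darling statistic of `sample` against a fully specified `cdf`."""
    x = np.sort(sample)
    n = len(x)
    u = np.clip(cdf(x), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

# Hypothetical annual maxima, split into training and validation sets
train = stats.genextreme.rvs(c=-0.1, loc=20, scale=5, size=60, random_state=rng)
valid = stats.genextreme.rvs(c=-0.1, loc=20, scale=5, size=40, random_state=rng)

# Fit the model of interest on the training set only
c, loc, scale = stats.genextreme.fit(train)
fitted_cdf = lambda x: stats.genextreme.cdf(x, c, loc=loc, scale=scale)
a2 = anderson_darling(valid, fitted_cdf)

# Parametric bootstrap stands in for the asymptotic null distribution
null = []
for _ in range(999):
    b = stats.genextreme.rvs(c, loc=loc, scale=scale, size=len(valid),
                             random_state=rng)
    null.append(anderson_darling(b, fitted_cdf))
p_value = (1 + np.sum(np.array(null) >= a2)) / (len(null) + 1)
print(f"A² = {a2:.3f}, p ≈ {p_value:.3f}")
```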
-
Abstract Climate change is affecting freshwater systems, leading to increased water temperatures that pose a threat to freshwater ecological communities. In the Nechako River, a water management program has been in place since the 1980s to maintain water temperatures at 20°C during the migration of Sockeye salmon. However, the program's effectiveness in mitigating the impacts of climate change on the thermal exposure of resident species such as Chinook salmon is uncertain. In this study, we utilised the CEQUEAU hydrological model and life stage-specific physiological data to evaluate the consequences of the current program for Chinook salmon's thermal exposure under two contrasting climate change and socio-economic scenarios (SSP2-4.5 and SSP5-8.5). The results indicate that the thermal exposure risk is projected to be above the optimal threshold for the parr and adult life stages under both scenarios relative to the 1980s. Under the SSP5-8.5 scenario, these life stages could face increases in thermal exposure of up to 2 and 5 times, respectively, by the 2090s relative to the 1980s during the months in which they occur, including when the program is active (July 20th to August 20th). Additionally, our study shows that climate change will result in a substantial rise in cumulative heat degree days, ranging from 1.9 to 5.8 times (2050s) and 2.9 to 12.9 times (2090s) relative to the 1980s under SSP5-8.5. Our study highlights the need for a holistic approach that reviews the current Nechako management plan and considers all species in the Nechako River system in the face of climate change.
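The cumulative heat degree-day metric used above can be expressed compactly. The sketch below assumes a 20°C threshold (the management target) and hypothetical daily water temperatures; it is an illustration, not the CEQUEAU workflow.

```python
import numpy as np
import pandas as pd

# Hypothetical daily water temperatures for one summer (°C)
dates = pd.date_range("2090-06-01", "2090-09-30", freq="D")
rng = np.random.default_rng(0)
t_water = (16 + 6 * np.sin(np.pi * np.arange(len(dates)) / len(dates))
           + rng.normal(0, 1, len(dates)))
series = pd.Series(t_water, index=dates)

def cumulative_degree_days(temp, threshold=20.0):
    """Sum of daily exceedances above `threshold` (°C·day).

    The 20°C threshold mirrors the Nechako management target; each day
    contributes max(T - threshold, 0) degree days.
    """
    return (temp - threshold).clip(lower=0).sum()

# Compare exposure over the season vs. the managed window (July 20 - August 20)
managed = series["2090-07-20":"2090-08-20"]
print(f"Season total  : {cumulative_degree_days(series):.1f} °C·day")
print(f"Managed window: {cumulative_degree_days(managed):.1f} °C·day")
```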
-
The interaction of water flow, ice, and structures is common in fluvial ice processes, particularly around Ice Control Structures (ICSs), which are used to manage and prevent ice jam floods. Evaluating the effectiveness of ICSs requires an understanding of the complex interaction between water flow, ice, and the structure, and numerical modeling is a valuable tool for building such understanding. Until now, classical Eulerian mesh-based methods have not been evaluated for simulating ice interaction with ICSs. In this paper, we evaluate the capability, accuracy, and efficiency of a coupled Computational Fluid Dynamics (CFD) and multi-body motion numerical model, based on the mesh-based FLOW-3D V.2023 R1 software, for simulating ice-structure interactions in several benchmark cases. The model's performance was compared with results from meshless models (obtained by others) for the same laboratory test cases, which served as the reference for the comparison. To this end, simulation results from a range of dam-break laboratory experiments were analyzed, encompassing varying numbers of floating objects with distinct characteristics, both in the presence and absence of an ICS, and under different downstream water levels. The results show that the overall accuracy of the FLOW-3D model under the various experimental conditions corresponded to an RMSE of 0.0534, as opposed to an overall RMSE of 0.0599 for the meshless methods. Instabilities were observed in the FLOW-3D model for more complex phenomena involving open boundaries and a larger number of blocks. Although the FLOW-3D model exhibited a computational time similar to that of the GPU-accelerated meshless models, constraints on processor speed and on the number of cores available to the solver could limit its computational performance.
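The accuracy comparison above reduces to a root-mean-square error over tracked quantities. A minimal sketch, with hypothetical block positions standing in for the benchmark measurements:

```python
import numpy as np

def rmse(simulated, observed):
    """Root-mean-square error between simulated and observed values."""
    simulated, observed = np.asarray(simulated), np.asarray(observed)
    return np.sqrt(np.mean((simulated - observed) ** 2))

# Hypothetical block positions (m) from a dam-break benchmark
observed   = [0.00, 0.12, 0.25, 0.41, 0.55]   # laboratory measurements
mesh_based = [0.00, 0.10, 0.22, 0.38, 0.52]   # e.g. mesh-based CFD output
meshless   = [0.00, 0.09, 0.20, 0.36, 0.50]   # e.g. meshless model output

print(f"mesh-based RMSE: {rmse(mesh_based, observed):.4f}")
print(f"meshless RMSE  : {rmse(meshless, observed):.4f}")
```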
-
Abstract. Efficient adaptation strategies to climate change require the estimation of future impacts and the uncertainty surrounding this estimation. Over- or underestimating future uncertainty may lead to maladaptation. Hydrological impact studies typically use a top-down approach in which multiple climate models are used to assess the uncertainty related to the climate model structure and climate sensitivity. Despite ongoing debate, impact modelers have typically embraced the concept of “model democracy”, in which each climate model is considered equally fit. The newer Coupled Model Intercomparison Project Phase 6 (CMIP6) simulations, with several models showing a climate sensitivity larger than that of Phase 5 (CMIP5) and larger than the likely range based on past climate information and understanding of planetary physics, have reignited the model democracy debate. Some have suggested that “hot” models be removed from impact studies to avoid skewing impact results toward unlikely futures. Indeed, the inclusion of these models in impact studies carries a significant risk of overestimating the impact of climate change. This large-sample study looks at the impact of removing hot models on the projections of future streamflow over 3107 North American catchments. More precisely, the variability in future projections of mean, high, and low flows is evaluated using an ensemble of 19 CMIP6 general circulation models (GCMs), 5 of which are deemed hot based on their global equilibrium climate sensitivity (ECS). The results show that the reduced ensemble of 14 climate models provides streamflow projections with reduced future variability for Canada, Alaska, the Southeast US, and along the Pacific coast. Elsewhere, the reduced ensemble has either no impact or results in increased variability in future streamflow, indicating that global outlier climate models do not necessarily provide regional outlier projections of future impacts. These results emphasize the delicate nature of climate model selection, especially based on global fitness metrics that may not be appropriate for local and regional assessments.
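Screening an ensemble by equilibrium climate sensitivity is simple in principle. The sketch below uses hypothetical model names, ECS values, streamflow changes, and a 4.5°C screening threshold, none of which are taken from the study; it only illustrates how removing high-ECS members changes the ensemble spread.

```python
import numpy as np

# Hypothetical (model, ECS in °C) pairs; names and values are illustrative only
ecs = {
    "GCM-A": 2.9, "GCM-B": 3.2, "GCM-C": 4.8, "GCM-D": 5.3, "GCM-E": 3.7,
    "GCM-F": 2.6, "GCM-G": 4.1, "GCM-H": 5.6, "GCM-I": 3.0, "GCM-J": 4.6,
}
# Hypothetical projected change in mean flow (%) per model
delta_q = {"GCM-A": 4, "GCM-B": 7, "GCM-C": 15, "GCM-D": 21, "GCM-E": 9,
           "GCM-F": 2, "GCM-G": 12, "GCM-H": 25, "GCM-I": 5, "GCM-J": 14}

ECS_MAX = 4.5  # screening threshold for "hot" models (assumption)

kept = [m for m, s in ecs.items() if s <= ECS_MAX]
full = np.array(list(delta_q.values()), dtype=float)
reduced = np.array([delta_q[m] for m in kept], dtype=float)

print(f"removed as hot : {sorted(set(ecs) - set(kept))}")
print(f"full ensemble  : spread = {full.std(ddof=1):.1f} %")
print(f"reduced ensemble: spread = {reduced.std(ddof=1):.1f} %")
```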
-
Abstract. Measurements of the size and shape of frazil ice particles and flocs in saline water, and of frazil ice flocs in freshwater, are limited. This study consisted of a series of laboratory experiments producing frazil ice at salinities of 0 ‰, 15 ‰, 25 ‰ and 35 ‰ to address this lack of data. The experiments were conducted in a large tank in a cold room, with bottom-mounted propellers to create turbulence. A high-resolution camera system was used to capture images of frazil ice particles and flocs passing through cross-polarizing lenses. The high-resolution images were processed using a computer algorithm to differentiate particles from flocs and determine key properties including size, concentration and volume. The size and volume distributions of particles and flocs at all four salinities were found to closely fit log-normal distributions. The concentration, mean size, and standard deviation of flocs and particles were assessed at different times during the supercooling process to determine how these properties evolve with time. Comparisons were made to determine the effect of salinity on the properties of frazil ice particles and flocs. The overall mean size of frazil ice particles in saline water and freshwater was found to range from 0.45 to 0.52 mm, with particle sizes in freshwater ∼13 % larger than in saline water. However, qualitative observations showed that frazil ice particles in saline water tend to be more irregularly shaped. The overall mean size of flocs in freshwater was 2.57 mm, compared to a mean size of 1.47 mm for flocs in saline water. The average growth rate of frazil particles was found to be 0.174, 0.070, 0.033, and 0.024 mm min−1, and the average floc growth rate was 0.408, 0.118, 0.089, and 0.072 mm min−1, for the 0 ‰, 15 ‰, 25 ‰, and 35 ‰ salinities, respectively. Estimates of the porosity of frazil ice flocs were made by equating the volume of ice produced as estimated from thermodynamic conditions to the volume of ice determined from the digital images. The estimated porosities of frazil ice flocs were 0.86, 0.82, 0.8 and 0.75 for the 0 ‰, 15 ‰, 25 ‰ and 35 ‰ salinities, respectively.
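Two of the computations described, fitting a log-normal size distribution and estimating floc porosity by equating thermodynamic and imaged ice volumes, can be sketched as follows; all sizes and volumes below are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical particle sizes (mm) extracted from polarized images
sizes = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=2000)

# Fit a log-normal with location fixed at zero, as is usual for sizes
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)
print(f"fitted median = {scale:.2f} mm, sigma = {shape:.2f}")

def floc_porosity(v_ice_thermo, v_floc_imaged):
    """Porosity from equating thermodynamic ice volume with imaged floc volume.

    v_ice_thermo  : ice volume implied by the measured heat loss (m^3)
    v_floc_imaged : total floc volume measured from the images (m^3)
    """
    return 1.0 - v_ice_thermo / v_floc_imaged

# Hypothetical volumes consistent with a porosity near the reported 0.86
print(f"porosity ≈ {floc_porosity(1.4e-4, 1.0e-3):.2f}")
```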
-
A satisfactory performance of hydrological models under historical climate conditions is considered a prerequisite step in any hydrological climate change impact study. Despite the significant interest in global hydrological modeling, few systematic evaluations of global hydrological models (gHMs) at the catchment scale have been carried out. This study investigates the performance of four gHMs, driven by four global observation-based meteorological datasets, at simulating weekly discharge over 198 large North American catchments for the 1971–2010 period. The 16 discharge simulations serve as the basis for evaluating gHM accuracy at the catchment scale within the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2a). The discharges simulated by the four gHMs are compared against observations and against weekly discharges simulated by two regional hydrological models (rHMs) driven by a global meteorological dataset for the same period. We discuss the implications of both modeling approaches, as well as the influence of catchment characteristics and global meteorological forcing on model performance, through statistical criteria and visual hydrograph comparison for catchment-scale hydrological studies. Overall, the gHM discharge statistics exhibit poor agreement with observations at the catchment scale and manifest considerable bias and errors in seasonal flow simulations. We confirm that the gHM approach, as experimentally implemented through ISIMIP2a, must be used with caution for regional studies. We find the rHM approach to be more trustworthy and recommend using it for hydrological studies, especially if findings are intended to support operational decision-making.
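The abstract does not name its statistical criteria; the sketch below assumes two common discharge skill scores, Nash-Sutcliffe efficiency and Kling-Gupta efficiency, applied to hypothetical weekly series.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (1 is perfect, < 0 is worse than the mean)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta efficiency combining correlation, bias and variability."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    beta = sim.mean() / obs.mean()    # bias ratio
    alpha = sim.std() / obs.std()     # variability ratio
    return 1 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (alpha - 1) ** 2)

# Hypothetical weekly discharge series (m³/s) for one catchment
obs = np.array([120, 180, 260, 310, 240, 170, 130, 110])
ghm = np.array([90, 140, 200, 380, 300, 210, 160, 140])   # global model
rhm = np.array([115, 175, 255, 330, 250, 175, 135, 115])  # regional model

for name, sim in [("gHM", ghm), ("rHM", rhm)]:
    print(f"{name}: NSE = {nse(sim, obs):.2f}, KGE = {kge(sim, obs):.2f}")
```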
-
Summary Projected climate change effects on streamflow are investigated for the 2041–2070 horizon following the SRES A2 emissions scenario over two snowmelt-dominated catchments in Canada. A 16-member ensemble of SWAT hydrological model (HM) simulations is established per catchment, based on a comprehensive ensemble of Canadian Regional Climate Model (CRCM) simulations driven by two global climate models (GCMs): five realizations of the Canadian CGCM3 and three realizations of the German ECHAM5. This study aims to evaluate, once model bias has been removed by statistical post-processing (SP), how the RCM-simulated climate changes differ from those of the parent GCMs, and how they affect the assessment of climate change-induced hydrological impacts at the catchment scale. The variability of streamflow caused by the use of different SP methods (mean-based versus distribution-based) within each statistical post-processing pathway of climate model outputs (bias correction versus perturbation) is also evaluated, as well as the uncertainty of natural climate variability. The simulations cover 1971–2000 in the reference period and 2041–2070 in the future period. For a set of criteria, results based on raw and statistically post-processed model outputs for the reference climate are compared with observations. This process demonstrates that SP is important not only for GCM outputs but also for CRCM outputs. SP leads to a high level of agreement between the CRCM and the driving GCMs in reproducing patterns of observed climate. The ensemble spread of the climate change signal on streamflow is large and varies with catchments and hydrological periods (winter/summer flows). The results of various hydrological indicators show that most of the uncertainty arises from natural climate variability, followed by statistical post-processing. The uncertainty linked to the choice of statistical pathway is much larger than that associated with the choice of the method for quantifying the hydrological impacts. We find that the incorporation of dynamical downscaling of global models through the CRCM as an intermediate step in the GCM–RCM–SP–HM model chain does not lead to considerable differences in the assessment of the climate change impacts on streamflow for the study catchments.
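The distinction between mean-based and distribution-based statistical post-processing can be illustrated with a minimal sketch, assuming a simple additive mean adjustment and empirical quantile mapping on hypothetical temperature series; neither is claimed to be the exact method used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical daily temperatures (°C): observations and raw model output
# for the reference period, plus raw model output for the future period
obs_ref = rng.normal(10.0, 4.0, 3000)
mod_ref = rng.normal(8.0, 5.0, 3000)    # model is cold- and over-dispersed
mod_fut = rng.normal(11.0, 5.5, 3000)

def mean_bias_correction(mod_fut, mod_ref, obs_ref):
    """Mean-based method: shift the future series by the reference-period bias."""
    return mod_fut + (obs_ref.mean() - mod_ref.mean())

def quantile_mapping(mod_fut, mod_ref, obs_ref):
    """Distribution-based method: map each future value through the empirical
    model CDF onto the observed quantiles."""
    probs = np.interp(mod_fut, np.sort(mod_ref),
                      np.linspace(0, 1, len(mod_ref)))
    return np.quantile(obs_ref, probs)

for name, corr in [("mean-based", mean_bias_correction(mod_fut, mod_ref, obs_ref)),
                   ("quantile mapping", quantile_mapping(mod_fut, mod_ref, obs_ref))]:
    print(f"{name:>16}: mean = {corr.mean():.2f} °C, std = {corr.std():.2f} °C")
```

The mean-based method corrects only the central tendency, while quantile mapping also corrects the spread, which is the main practical difference between the two pathways compared above.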
-
Abstract Centennial‐to‐millennial temperature records of the past provide a context for the interpretation of current and future changes in climate. Quaternary climates have been relatively well studied in north‐east North America and the adjacent Atlantic Ocean over the last decades, and new research methods have been developed to improve reconstructions. We present newly inferred reconstructions of sea surface temperature for the north‐western Atlantic region, together with a compilation of published temperature records. The database thus comprises a total of 86 records from both marine and terrestrial sites, including lakes, peatlands, ice and tree rings, each covering at least part of the Holocene. For each record, we present details on seasons covered, chronologies and information on radiocarbon dates and analytical time steps. The 86 records contain a total of 154 reconstructions of temperature and temperature‐related variables. Main proxies include pollen and dinocysts, while summer was the season for which the highest number of reconstructions were available. Many records covered most of the Holocene, but many dinocyst records did not extend to the surface, due to sediment mixing, and dendroclimate records were limited to the last millennium. The database allows for the exploration of linkages between sea ice and climate and may be used in conjunction with other palaeoclimate and palaeoenvironmental records, such as wildfire records and peatland dynamics. This inventory may also aid the identification of gaps in the geographic distribution of past temperature records thus guiding future research efforts.
-
Abstract Groundwater quality modelling plays an important role in water resources management decision-making processes. Accordingly, models must be developed to account for the uncertainty inherent in the modelling process, from the sample measurement stage through to the data interpretation stages. Artificial intelligence models, particularly fuzzy inference systems (FIS), have been shown to be effective in groundwater quality evaluation for complex aquifers. In the current study, fuzzy set theory is applied to groundwater-quality-related decision-making in an agricultural production context; the Mamdani, Sugeno, and Larsen fuzzy logic-based models (MFL, SFL, and LFL, respectively) are used to develop a series of new, generalized, rule-based fuzzy models for water quality evaluation using widely accepted irrigation indices and hydrological data from the Sarab Plain, Iran. Rather than drawing upon physiochemical groundwater quality parameters, the present research employs widely accepted agricultural indices (e.g., irrigation criteria) when developing the MFL, SFL and LFL groundwater quality models. These newly developed models generated significantly more consistent results than the United States Salinity Laboratory (USSL) diagram, addressed the inherent uncertainty in threshold data, and were effective in assessing groundwater quality for agricultural uses. The SFL model is recommended, as it outperforms both MFL and LFL in terms of accuracy when assessing groundwater quality using irrigation indices.
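A Mamdani inference cycle (fuzzification, min-max rule aggregation, centroid defuzzification) can be sketched in a few lines. The inputs (sodium adsorption ratio and electrical conductivity), membership breakpoints, and rules below are illustrative assumptions, not the models developed in the study.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_quality(sar, ec):
    """Tiny two-input Mamdani FIS for irrigation water quality.

    sar : sodium adsorption ratio (dimensionless)
    ec  : electrical conductivity (µS/cm)
    Returns a defuzzified quality score on [0, 100] (higher is better).
    """
    # Fuzzification of the two inputs
    sar_low, sar_high = trimf(sar, -1, 0, 10), trimf(sar, 5, 18, 30)
    ec_low, ec_high = trimf(ec, -1, 250, 1500), trimf(ec, 750, 2250, 5000)

    # Rule firing strengths (min for AND, max for OR), Mamdani-style
    good = min(sar_low, ec_low)    # low SAR AND low EC -> good quality
    poor = max(sar_high, ec_high)  # high SAR OR high EC -> poor quality

    # Output memberships clipped by firing strength, aggregated by max
    z = np.linspace(0, 100, 501)
    mu = np.maximum(np.minimum(trimf(z, 50, 100, 150), good),
                    np.minimum(trimf(z, -50, 0, 50), poor))

    # Centroid defuzzification
    return float(np.sum(mu * z) / np.sum(mu)) if mu.sum() > 0 else 50.0

print(f"SAR=3,  EC=400  -> quality {mamdani_quality(3, 400):.0f}/100")
print(f"SAR=20, EC=3000 -> quality {mamdani_quality(20, 3000):.0f}/100")
```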
-
Abstract A new GCM land surface scheme, the Canadian Land Surface Scheme (CLASS), is introduced, incorporating three soil layers with physically based calculations of heat and moisture transfers at the surface and across the layer boundaries. Snow-covered and snow-free areas are treated separately. The energy balance equation is solved iteratively for the surface temperature; the surface infiltration rate is calculated using a simplified theoretical analysis allowing for surface ponding. Snow cover is modelled as a discrete 'soil' layer. The results generated by CLASS are compared with those of an older model incorporating the force-restore method for the calculation of surface temperature and a bucket-type formulation for the ground moisture. Several month-long test runs are carried out in stand-alone mode. It is shown that the surface temperature in the old scheme responds more slowly to diurnal forcing and more quickly to longer-term forcing than that modelled by CLASS, while its one-layer representation of soil moisture proves incapable of reproducing changes in the surface fluxes owing to surface variations of moisture content. Finally, the lumped treatment of snow and soil in the old scheme results in an extremely fast disappearance of the snow pack under certain conditions.
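The iterative solution of the energy balance for surface temperature can be sketched as follows, assuming a reduced balance (absorbed shortwave, longwave, sensible heat, and ground heat flux, with latent heat omitted for brevity) solved by Newton iteration; this is an illustration, not the CLASS formulation.

```python
# Physical constants
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m-2 K-4
RHO_A = 1.2       # air density, kg m-3
CP = 1005.0       # specific heat of air, J kg-1 K-1

def surface_temperature(sw_net, lw_down, t_air, r_a, g_flux,
                        emissivity=0.95, n_iter=50):
    """Iteratively solve the surface energy balance for Ts (K).

    Balance: sw_net + eps*lw_down - eps*sigma*Ts^4
             - rho*cp*(Ts - t_air)/r_a - g_flux = 0
    Newton iteration, starting from the air temperature.
    """
    ts = t_air
    for _ in range(n_iter):
        f = (sw_net + emissivity * lw_down - emissivity * SIGMA * ts ** 4
             - RHO_A * CP * (ts - t_air) / r_a - g_flux)
        df = -4 * emissivity * SIGMA * ts ** 3 - RHO_A * CP / r_a
        step = f / df
        ts -= step
        if abs(step) < 1e-6:
            break
    return ts

# Midday example: 500 W/m² absorbed shortwave, aerodynamic resistance 50 s/m
print(f"Ts = {surface_temperature(500, 300, t_air=283.15, r_a=50, g_flux=50):.2f} K")
```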
-
Canada has experienced some of the most rapid warming on Earth over the past few decades, with a warming rate about twice that of the global mean temperature since 1948. Long-term warming is observed in Canada's annual, winter and summer mean temperatures, and in the annual coldest and hottest daytime and nighttime temperatures. The causes of these changes are assessed by comparing observed changes with climate model simulated responses to anthropogenic and natural (solar and volcanic) external forcings. Most of the observed 1.7°C increase in annual mean temperature during 1948–2012 [90% confidence interval (1.1°C, 2.2°C)] can only be explained by external forcing on the climate system, with anthropogenic influence being the dominant factor. It is estimated that anthropogenic forcing has contributed 1.0°C (0.6°C, 1.5°C) and natural external forcing has contributed 0.2°C (0.1°C, 0.3°C) to the observed warming. Up to 0.5°C of the observed warming trend may be associated with low-frequency climate variability such as that represented by the Pacific decadal oscillation (PDO) and the North Atlantic oscillation (NAO). Overall, the influence of both anthropogenic and natural external forcing is clearly evident in Canada-wide mean and extreme temperatures, and can also be detected regionally over much of the country.
-
Abstract This study detected, for the first time, the long-term annual and seasonal rainfall trends over Bihar state, India, between 1901 and 2002. The shift change point was identified with the cumulative deviation test (cumulative sum, CUSUM) and linear regression. After the shift change point was detected, the time series was subdivided into two groups: before and after the change point. ArcMap 10.3 was used to evaluate the spatial distribution of the trends. Annual and monsoon rainfall decreased significantly, while no significant trends were observed in pre-monsoon, post-monsoon and winter rainfall. The average decline in rainfall was –2.17 mm·year−1 for the annual series and –2.13 mm·year−1 for the monsoon period. The probable change point was 1956. The number of negative extreme events was higher in the later period (1957–2002) than in the earlier period (1901–1956).
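The cumulative deviation (CUSUM) change-point detection and the before/after trend comparison can be sketched as follows, on hypothetical rainfall data with a shift placed near the reported 1956 change point.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
years = np.arange(1901, 2003)

# Hypothetical annual rainfall (mm) with a downward shift after 1956
rain = np.where(years <= 1956,
                rng.normal(1250, 120, years.size),
                rng.normal(1150, 120, years.size))

# Cumulative deviation from the overall mean; the extreme of |S_k|
# locates the most likely shift change point
s = np.cumsum(rain - rain.mean())
change_year = years[np.argmax(np.abs(s))]
print(f"probable change point: {change_year}")

# Linear trends before and after the detected change point
for label, mask in [("before", years <= change_year),
                    ("after", years > change_year)]:
    res = stats.linregress(years[mask], rain[mask])
    print(f"{label}: slope = {res.slope:+.2f} mm/year, p = {res.pvalue:.3f}")
```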