-
Abstract River confluences are characterized by a complex mixing zone with three‐dimensional (3D) turbulent structures which have been described as both streamwise‐oriented structures and vertically oriented Kelvin–Helmholtz (KH) structures. The latter are visible where there is a turbidity difference between the two tributaries, whereas the former are usually derived from mean velocity measurements or numerical simulations. Few field studies have recorded turbulent velocity fluctuations at high frequency to investigate these structures, particularly at medium‐sized confluences where logistical constraints make it difficult to use devices such as an acoustic Doppler velocimeter (ADV). This study uses the ice cover present at the confluence of the Mitis and Neigette Rivers in Quebec (Canada) to obtain long‐duration, fixed measurements along the mixing zone. The confluence is also characterized by a marked turbidity difference which makes it possible to investigate the mixing zone dynamics from drone imagery during ice‐free conditions. The aim of the study is to characterize and compare the flow structure in the mixing zone at a medium‐sized (~40 m) river confluence with and without an ice cover. Detailed 3D turbulent velocity measurements were taken under the ice along the mixing plane with an ADV through eight holes, at around 20 positions on each vertical. For ice‐free conditions, drone imagery results indicate that large (KH) coherent structures are present, occupying up to 50% of the width of the parent channel. During winter, the ice cover affects velocity profiles by moving the highest velocities towards the centre of the profiles. Large turbulent structures are visible in both the streamwise and lateral velocity components. The strong correlation between these velocity components indicates that KH vortices are the dominant coherent structures in the mixing zone. A spatio‐temporal conceptual model is presented to illustrate the main differences in the 3D flow structure at the river confluence with and without the ice cover. © 2019 John Wiley & Sons, Ltd.
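The correlation-based argument in this abstract can be illustrated with a short sketch: given ADV time series of streamwise (u) and lateral (v) velocity, compute the fluctuation cross-correlation and check that its peak sits near zero lag, as expected when both components respond to the same passing KH vortex. The 25 Hz sampling rate and the synthetic signal below are illustrative assumptions, not values from the study.

```python
# Sketch: correlating streamwise (u) and lateral (v) velocity fluctuations
# from an ADV record to look for KH-type coherent structures. The sampling
# rate and the synthetic signal are assumptions, not study values.
import numpy as np
from scipy.signal import correlate

fs = 25.0                      # assumed ADV sampling frequency (Hz)
t = np.arange(0, 600, 1 / fs)  # 10 min record

# Synthetic stand-in for measured velocities: a shared low-frequency
# oscillation (the coherent structure) plus independent noise.
coherent = np.sin(2 * np.pi * 0.05 * t)
u = 0.8 + 0.1 * coherent + 0.05 * np.random.randn(t.size)
v = 0.0 + 0.1 * coherent + 0.05 * np.random.randn(t.size)

# Work with fluctuations (Reynolds decomposition: u = U + u').
up, vp = u - u.mean(), v - v.mean()

# Zero-lag correlation coefficient between u' and v'.
r = np.corrcoef(up, vp)[0, 1]

# Full cross-correlation to check whether the peak sits near zero lag.
xcorr = correlate(up, vp, mode="full") / (up.std() * vp.std() * up.size)
lags = np.arange(-up.size + 1, up.size) / fs
peak_lag = lags[np.argmax(np.abs(xcorr))]

print(f"r(u', v') = {r:.2f}, peak correlation at lag {peak_lag:.2f} s")
```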
-
Abstract The mean transit time (MTT) is an important descriptor of water storage and release dynamics in watersheds. Although MTT studies are numerous for many regions around the world, they are rare for prairie watersheds, where seasonally cold or dry conditions require adequate methodological choices for MTT estimation, especially regarding the handling of sparse data records and tracer selection. To examine the impact of such choices, we used time series of δ18O and δ2H from two contrasting years (2014 and 2015) and relied on two metrics and two modelling methods to infer MTTs in prairie watersheds. Our focus was on nested outlets with different drainage areas, geologies, and known run‐off generation mechanisms. The damping ratio and young water fraction (i.e., the fraction of streamflow with transit times of less than 3 months) metrics, as well as the sine‐wave modelling and time‐based convolution modelling methods, were applied to year‐specific data. Results show that young water fractions were larger, and modelled MTT values smaller, in 2014, a wet year, than in 2015. In 2014, most outlets had young water fractions larger than 0.5 and MTT values of less than 6 months. The damping ratio, young water fraction, and sine‐wave modelling methods led to convergent conclusions about watershed water storage and release dynamics for some of the monitored sites. Contrasting results were, however, obtained when the same method was applied using δ2H instead of δ18O, due to differing evaporation fractionation, or when the time‐based convolution modelling method was used. Some methods also failed to provide any robust results during the dry year (i.e., 2015), highlighting the difficulty of inferring MTTs when data are sparse due to intermittent streamflow. This study therefore allowed the formulation of empirical recommendations for MTT estimation in prairie environments as a function of data availability and antecedent wetness conditions.
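As a hedged illustration of the sine-wave method mentioned above: fit a one-year sinusoid to the precipitation and streamflow δ18O series and take the ratio of the fitted amplitudes as the young water fraction (Fyw ≈ As/Ap). The sampling dates and isotope values below are placeholders, not study data.

```python
# Sketch of the sine-wave / young-water-fraction calculation: fit
# d18O(t) = a*cos(2*pi*t) + b*sin(2*pi*t) + k to each series and take
# the amplitude ratio. Values below are synthetic placeholders.
import numpy as np

def seasonal_amplitude(t_years, d18o):
    """Least-squares fit of a one-year sinusoid; returns its amplitude."""
    X = np.column_stack([np.cos(2 * np.pi * t_years),
                         np.sin(2 * np.pi * t_years),
                         np.ones_like(t_years)])
    (a, b, k), *_ = np.linalg.lstsq(X, d18o, rcond=None)
    return np.hypot(a, b)

t = np.linspace(0, 2, 48)  # two years of roughly fortnightly samples
d18o_precip = -12 + 4.0 * np.sin(2 * np.pi * t) + np.random.randn(t.size) * 0.5
d18o_stream = -12 + 1.2 * np.sin(2 * np.pi * t) + np.random.randn(t.size) * 0.3

A_p = seasonal_amplitude(t, d18o_precip)
A_s = seasonal_amplitude(t, d18o_stream)
print(f"Young water fraction ~ {A_s / A_p:.2f}")  # ratio of fitted amplitudes
```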
-
In time series of essential climatological variables, many discontinuities are created not by climate factors but by changes in the measuring system, including relocations, changes in instrumentation, exposure, or even observation practices. Some of these changes occur due to reorganization, cost-efficiency, or innovation. In the last few decades, station moves have often been accompanied by the introduction of an automatic weather station (AWS). Our study identifies the biases in daily maximum and minimum temperatures using parallel records of manual and automated observations. The station pairs are selected to minimize differences in surrounding environment, exposure, distance, and elevation. The type of instrumentation is therefore the most important biasing factor between the two measurements. The pairs of weather stations are located in Piedmont, a region of Italy, and in the Gaspé Peninsula, a region of Canada. They have six years of overlapping records on average, amounting to 5,110 daily values. The approach implemented for the comparison is divided into four main parts: a statistical characterization of the daily temperature series; a comparison between the daily series; a comparison between types of events (heat waves, cold waves, and normal events); and a verification of the homogeneity of the difference series. Our results show a higher frequency of warm (+10%) and extremely warm (+35%) days in the automated system compared with the parallel manual record. Consequently, the use of a composite record could significantly bias the calculation of extreme events.
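A minimal sketch of the parallel-series comparison described above: align the manual and automated daily Tmax records, build the difference series used for the homogeneity check, and compare warm-day frequencies. The percentile-based definitions of "warm" and "extremely warm" days are assumptions for illustration, not necessarily the paper's.

```python
# Sketch of a manual-vs-AWS parallel comparison. Data are synthetic; the
# 90th/99th percentile thresholds are assumed event definitions.
import numpy as np
import pandas as pd

dates = pd.date_range("2005-01-01", periods=5110, freq="D")
rng = np.random.default_rng(0)
seasonal = 15 * np.sin(2 * np.pi * dates.dayofyear / 365.25)
tmax_manual = pd.Series(10 + seasonal + rng.normal(0, 3, dates.size), index=dates)
tmax_aws = tmax_manual + rng.normal(0.2, 0.4, dates.size)  # small warm bias

diff = tmax_aws - tmax_manual  # homogeneity checks would run on this series

warm = tmax_manual.quantile(0.90)     # thresholds taken from the manual record
extreme = tmax_manual.quantile(0.99)

for label, thr in [("warm", warm), ("extremely warm", extreme)]:
    n_man = (tmax_manual > thr).sum()
    n_aws = (tmax_aws > thr).sum()
    print(f"{label}: manual={n_man}, AWS={n_aws}, change={100*(n_aws/n_man-1):+.0f}%")
```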
-
In response to extreme flood events and an increasing awareness that traditional flood control measures alone are inadequate to deal with growing flood risks, spatial flood risk management strategies have been introduced. These strategies aim not only to reduce the probability and consequences of floods but also to improve local and regional spatial qualities. To date, however, research has largely overlooked how spatial quality, as part of spatial flood risk management strategies, can be successfully achieved in practice. This research therefore aims to illuminate how spatial quality is achieved in planning practice. This is done by evaluating the configurations of policy instruments that have been applied in the Dutch Room for the River policy program to successfully achieve spatial quality. This policy program is well known for its dual objective of accommodating higher flood levels and improving the spatial quality of the riverine areas. Based on a qualitative comparative analysis, we identified three successful configurations of policy instruments. These constitute three distinct management strategies: the "program‐as‐guardian," the "project‐as‐driver," and the "going all‐in" strategies. These strategies provide important leads in furthering the development and implementation of spatial flood risk management, both in the Netherlands and abroad.
-
Placement of large wood is a common stream restoration technique in western North America and increasingly in other parts of the world. Considerable information exists on the response of anadromous salmonids in small (< 15 m bankfull width) coastal streams of western North America, but limited information exists on anadromous fish response to wood placement in larger streams or in the more arid interior Columbia River Basin. An extensive post-treatment design was used to sample 29 large wood placement projects to determine their physical and biological effectiveness. We sampled paired treatment and control reaches that were approximately 20 times longer than bankfull width and quantified fish abundance and habitat attributes during summer. The proportion of pool area, number of pools, large wood (LW), and pool-forming large wood were significantly higher in treatment than in control reaches. Juvenile Chinook salmon (Oncorhynchus tshawytscha), steelhead (O. mykiss), coho salmon (O. kisutch), and cutthroat trout (O. clarkii) abundances were significantly higher in treatment than in control reaches, but no significant responses were detected for mountain whitefish (Prosopium williamsoni) or dace (Rhinichthys spp.). Chinook and coho responses were positively correlated with LW and pool area, suggesting that wood placement produced reach-scale increases in juvenile salmonid abundance.
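The paired treatment/control comparison could look roughly like the following sketch, using a Wilcoxon signed-rank test on reach-scale abundances; the abundance values are fabricated placeholders for the 29 project pairs, and the choice of test is an assumption (the abstract does not name one).

```python
# Sketch of a paired treatment-vs-control comparison for 29 wood
# placement projects. Abundances are fabricated; test choice is assumed.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
control = rng.lognormal(mean=3.0, sigma=0.5, size=29)    # fish per control reach
treatment = control * rng.lognormal(mean=0.3, sigma=0.3, size=29)

stat, p = wilcoxon(treatment, control)  # paired, non-parametric
print(f"median diff = {np.median(treatment - control):.1f}, p = {p:.3f}")
```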
-
Stream restoration approaches most often quantify habitat degradation, and therefore recovery objectives, using aquatic habitat metrics based on a narrow range of species' needs (e.g., salmon and trout), as well as channel evolution models and channel design tools biased toward single-threaded, "sediment-balanced" channel patterns. Although this strategy addresses perceived habitat needs, it often fails to properly identify the underlying geomorphological and ecological processes limiting species recovery and ecosystem restoration. In this paper, a unique process-based approach to restoration is presented that strives to return degraded stream, river, or meadow systems to their pre-manipulation condition. The proposed, relatively simple Geomorphic Grade Line (GGL) design method is based on Geographic Information System (GIS) and field-based analyses and on the development of design maps using relative elevation models that expose the relic pre-disturbance valley surface. Several case studies are presented both to describe the development of the GGL method and to illustrate how the GGL method of evaluating valley surfaces has been applied to Stage 0 restoration design. The paper also summarizes the wide applicability of the GGL method, the advantages and limitations of the method, and key considerations for future designers of Stage 0 systems anywhere in the world. By presenting this ongoing Stage 0 restoration work, the authors hope to inspire other practitioners to embrace the restoration of dynamism and diversity by restoring the processes that create multifaceted river systems providing long-term resiliency, meta-stability, larger and more complex and diverse habitats, and optimal ecosystem benefits.
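A minimal sketch of the relative elevation model (REM) underlying the GGL design maps: subtract a fitted down-valley elevation trend from the DEM so that relic valley surfaces appear as near-constant bands. The grid, the linear trend model, and the use of row minima as a stand-in for surveyed grade-line points are illustrative assumptions.

```python
# Sketch of a relative elevation model (REM): detrend a DEM by the
# down-valley gradient. Grid and trend model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
ny, nx = 200, 400
x = np.arange(nx)                       # down-valley distance (cells)
dem = (50 - 0.02 * x)[None, :] + rng.normal(0, 0.2, (ny, nx))
dem[80:120, :] -= 1.5                   # incised channel along the valley

# Fit the down-valley trend from the valley-bottom profile (row minima
# as a crude stand-in for surveyed thalweg / grade-line points).
profile = dem.min(axis=0)
slope, intercept = np.polyfit(x, profile, 1)
trend = slope * x + intercept

rem = dem - trend[None, :]              # relative elevation model
print(f"valley trend: {slope:.4f} per cell; REM range "
      f"{rem.min():.2f} to {rem.max():.2f}")
```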
-
Abstract Youth exposed to traumatic events are at higher risk for negative developmental outcomes, including low academic performance, poor social skills, and mental health concerns. To best address these risks, school‐based intervention services and trauma‐informed practices can be provided. The goal of this study was to systematically review the intervention research conducted on school‐based trauma interventions, with specific attention to intervention effectiveness, feasibility, and acceptability across studies. It was found that feasibility and acceptability are not frequently examined, though the available data showed that Enhancing Resiliency Amongst Students Experiencing Stress (ERASE‐Stress) and school‐based cognitive behavioral therapy (CBT) had high rates of fidelity, and that school‐based CBT had high levels of acceptability. The review also examined demographic variables and found that U.S.‐based research reported racially/ethnically diverse samples, and that most samples were from low‐income populations. Most studies examined youth exposed to war‐ and terror‐related traumas or natural disaster‐related traumas. Additionally, this review provides future directions for research and reveals the need for further research on intervention feasibility and acceptability. A brief description of practice recommendations based on prior research has also been included. The review also highlights the need for studies that examine student demographic variables not currently considered, and for consistency in rating-scale use in school‐based trauma intervention research.
-
An emergency team of firefighters, blue-collar municipal workers, and soldiers managed to save about a hundred houses from flooding on rue De Gaulle yesterday, in the borough of Pierrefonds-Roxboro, by repairing in extremis the dike that was threatening to give way under the onslaught of the Rivière des Prairies.
-
The state of emergency has been extended for another five days in Pierrefonds-Roxboro, as decreed by Montreal's city council, which met outside city hall for the first time since 1926.
-
Information on the size of academic search engines and bibliographic databases (ASEBDs) is often outdated or entirely unavailable. Hence, it is difficult to assess the scope of specific databases, such as Google Scholar. While scientometric studies have estimated ASEBD sizes before, the methods employed were able to compare only a few databases. Consequently, there is no up-to-date comparative information on the sizes of popular ASEBDs. This study aims to fill this gap by providing a comparative picture of 12 of the most commonly used ASEBDs. In doing so, we build on and refine previous scientometric research by using query hit data as an indicator of the number of accessible records. Iterative query optimization makes it possible to identify a maximum number of hits for most ASEBDs. The results were validated in terms of their capacity to assess database size by comparing them with official information on database sizes or previous scientometric studies. The queries used here are replicable, so size information can be updated quickly. The findings provide first-time size estimates of ProQuest and EbscoHost and indicate that Google Scholar's size might have been underestimated so far by more than 50%. By our estimation, Google Scholar, with 389 million records, is currently the most comprehensive academic search engine.
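The hit-count logic can be sketched as follows: broaden a seed query step by step and keep the largest reported hit count as a lower-bound size estimate. The `hit_count` callable is a hypothetical stand-in for whatever interface reports "about N results" for a database; it is not a real API.

```python
# Sketch of size estimation from query hit counts. `hit_count` is a
# hypothetical stand-in, not a real search-engine API.
from typing import Callable, Iterable

def estimate_size(hit_count: Callable[[str], int],
                  candidate_queries: Iterable[str]) -> tuple[str, int]:
    """Return the query with the maximum reported hit count."""
    best_query, best_hits = "", 0
    for q in candidate_queries:
        hits = hit_count(q)
        if hits > best_hits:
            best_query, best_hits = q, hits
    return best_query, best_hits

# Toy stand-in: very broad queries (stop words, single characters) tend
# to match the most records, which is the intuition the method exploits.
fake_index = {"the": 350_000_000, "a": 389_000_000, "1": 120_000_000}
query, size = estimate_size(fake_index.get, ["the", "a", "1"])
print(f"broadest query {query!r} -> ~{size:,} records (lower bound)")
```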
-
Flood maps are the final products of dam failure studies required by dam safety regulations. A flood limit, which represents the maximum envelope reached by flood waves, is generally the result of a dam-break scenario simulated by a hydraulic numerical model. However, the numerical model uses only a limited portion of the available bathymetry data to build the terrain model (a 2D mesh plus topometric elevations at the nodes). This is particularly true where the topometric data recorded by LiDAR comprise several million points, whereas hydraulic numerical models rarely exceed hundreds of thousands of nodes, in particular because of the computational cost and time associated with running these models. The production of the final flood map requires consistency between projected levels and elevations for all points on the map. This verification can be tedious for a large area with several small secondary valleys of tributary streams that were not represented by the original hydraulic numerical model. The aim of this work is to propose an automatic remeshing strategy that uses the envelope of the maximum extent reached by the original model, coupled with the available LiDAR data, to produce an improved mesh that can accurately capture the wet/dry fronts and the overflows into the secondary valleys. This approach allows the maximum slope inside each element to be set on the basis of the real data, instead of limiting the slope to avoid negative depths or controlling the velocity. The algorithm is based on a few basic steps: (i) find the elements cut by the envelope of the wet/dry interfaces; (ii) project the topometric points onto the cut elements; (iii) if these points are very close to the interface, if they lie in a valley, or if they are more elevated than the corresponding cut elements, add them to the existing nodes and include them in a subsequent triangulation step; and (iv) re-run the simulation on the new mesh. This algorithm has been implemented and validated in the study of a dam-break flow with complex river topography on the Eastmain River and the Romaine-Puyjalon River.
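Steps (i)-(iv) can be sketched as follows, using scipy's Delaunay triangulation as a stand-in for the hydraulic model's mesher; the straight-line envelope, the distance threshold, and the synthetic LiDAR cloud are all illustrative assumptions.

```python
# Sketch of the remeshing loop (steps i-iv). Envelope geometry, thresholds,
# and the LiDAR cloud are synthetic assumptions for illustration.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
nodes = rng.uniform(0, 100, (300, 2))          # current model nodes (x, y)
mesh = Delaunay(nodes)

def crosses_envelope(tri_pts, envelope_x=50.0):
    """(i) Element is 'cut' if the wet/dry envelope passes through it
    (idealized here as the vertical line x = envelope_x)."""
    return tri_pts[:, 0].min() < envelope_x < tri_pts[:, 0].max()

cut = [s for s in mesh.simplices if crosses_envelope(nodes[s])]

# (ii)-(iii) Project LiDAR points onto cut elements and keep those near
# the interface or higher than the element's nodes.
lidar_xy = rng.uniform(0, 100, (5000, 2))
lidar_z = 10 - 0.05 * lidar_xy[:, 0] + rng.normal(0, 0.3, 5000)
node_z = 10 - 0.05 * nodes[:, 0]

keep = []
for s in cut:
    lo, hi = nodes[s].min(axis=0), nodes[s].max(axis=0)
    inside = np.all((lidar_xy >= lo) & (lidar_xy <= hi), axis=1)  # bbox test
    near = np.abs(lidar_xy[:, 0] - 50.0) < 2.0                    # near interface
    higher = lidar_z > node_z[s].max()                            # above element
    keep.append(lidar_xy[inside & (near | higher)])

new_pts = np.vstack([nodes] + keep)
refined = Delaunay(new_pts)                    # (iv) re-triangulate, re-run model
print(f"nodes: {len(nodes)} -> {len(new_pts)}; elements: "
      f"{len(mesh.simplices)} -> {len(refined.simplices)}")
```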
-
Climate change increasingly affects the life, development, and vulnerability of many communities around the world, which must increasingly mitigate natural hazards. In Quebec, risk management follows a "return to normal" philosophy that focuses mainly on the response and recovery phases. However, in light of the uncertainties brought by climate change, it is imperative that Quebec communities have the capacity to increase their resilience to rapidly intensifying natural hazards. Adaptive capacity must therefore be at the heart of risk management. That said, few tools exist in Quebec for assessing adaptive capacity, understood as the set of dynamic, available, and accessible resources that allow resilience to be increased and vulnerability to be decreased by positively transforming a community and its environment. This research therefore aims to develop a method for analyzing the adaptive capacity of Quebec individuals and communities affected by flooding using geographic information systems (GIS), with the Ville de Saint-Raymond de Portneuf as a case study. The project draws mainly on the concepts of vulnerability, resilience, and adaptation to identify indicators that can be used to characterize and assess the capacity of people and municipalities exposed to flooding to mobilize the resources needed not only to mitigate risks during such events but also to better prevent and prepare for them. Socioeconomic and land-use planning data are used for the analysis, together with data from a survey conducted in 2014, following a major flood, by CAPSA, the watershed organization for the Portneuf region, in collaboration with the comité Rivière of the Ville de Saint-Raymond.
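A hedged sketch of the kind of composite adaptive-capacity index such a GIS analysis might compute: min-max normalize a few indicators per spatial unit and average them with equal weights. The indicator names, values, and equal weighting are illustrative assumptions, not the study's actual choices.

```python
# Sketch of a composite adaptive-capacity index. Indicators, values, and
# equal weights are illustrative assumptions, not the study's.
import numpy as np
import pandas as pd

units = pd.DataFrame({
    "median_income": [42_000, 58_000, 35_000, 61_000],
    "pct_insured": [0.55, 0.72, 0.40, 0.80],
    "dist_to_services_km": [1.2, 0.5, 4.0, 0.8],   # higher = worse
}, index=["sector_A", "sector_B", "sector_C", "sector_D"])

norm = (units - units.min()) / (units.max() - units.min())
norm["dist_to_services_km"] = 1 - norm["dist_to_services_km"]  # invert: lower is better

units["adaptive_capacity"] = norm.mean(axis=1)  # would be mapped per unit in a GIS
print(units["adaptive_capacity"].round(2))
```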