Search results: 91 resources
-
For the past few decades, remote sensing has been a valuable tool for deriving global information on snow water equivalent (SWE), where products derived from space-borne passive microwave radiometers are favoured as they respond to snow depth, an important component of SWE. GlobSnow, a novel SWE product, has increased the accuracy of global-scale SWE estimates by combining remotely sensed radiometric data with other physiographic characteristics, such as snow depth, as quantified by climatic stations. However, research has demonstrated that passive microwave algorithms tend to underestimate SWE for deep snowpacks. Approaches have been proposed to correct for this underestimation; however, they are computationally intensive and complex to implement at the watershed scale. In this study, SWEmax information from the near-real-time 5 km GlobSnow product, provided by Copernicus and the European Space Agency (ESA), and from the 25 km GlobSnow product was corrected using a simple bias correction approach for watershed-scale applications. This method, referred to as the Watershed Scale Correction (WSC) approach, estimates the bias from the direct runoff that occurs during the spring melt season. Direct runoff is estimated, on the one hand, from SWEmax information as the main input, with infiltration also considered in the computation; on the other hand, an independent estimate of direct runoff is derived from gauged stations. The discrepancy between these two estimates yields the bias correction factor. This approach is advantageous as it exploits data that commonly exist, i.e., flows at gauged stations and remotely sensed/reanalysis data such as snow cover and precipitation. The WSC approach was applied to watersheds located in Eastern Canada. The average bias decreased from 33.5% with the existing GlobSnow product to 18% with the corrected product, using the recommended recursive filter coefficient β of 0.925 for baseflow separation. The results show the usefulness of integrating direct runoff in the bias correction of the existing GlobSnow product at the watershed scale. In addition, the recursive filter approach to baseflow separation offers potential benefits for watersheds with limited in situ SWE measurements, further reducing overall uncertainties and bias. The WSC approach should be appealing for poorly monitored watersheds where SWE estimates are critical for hydropower production and where snowmelt can cause serious flood-related damage.
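The abstract does not spell out the baseflow separation filter, but the quoted recursive filter coefficient β is characteristic of the Lyne-Hollick digital filter; a minimal single-pass sketch is given below (the single-pass form and the variable names are assumptions, not the study's exact implementation).

```python
import numpy as np

def baseflow_lyne_hollick(q, beta=0.925):
    """Single-pass Lyne-Hollick recursive digital filter.

    q    : streamflow series (e.g., daily, m^3/s)
    beta : recursive filter coefficient (0.925 is the value
           recommended in the abstract above)

    Returns (baseflow, quickflow), with baseflow constrained
    to never exceed total flow.
    """
    q = np.asarray(q, dtype=float)
    f = np.zeros_like(q)  # filtered quickflow (direct runoff)
    for t in range(1, len(q)):
        f[t] = beta * f[t - 1] + 0.5 * (1.0 + beta) * (q[t] - q[t - 1])
        f[t] = min(max(f[t], 0.0), q[t])  # keep 0 <= quickflow <= q
    return q - f, f

# Example: direct runoff fraction over a spring melt window
q = np.array([5.0, 6.2, 9.8, 14.5, 12.1, 9.0, 7.4, 6.3])
baseflow, quickflow = baseflow_lyne_hollick(q)
print("direct runoff fraction:", quickflow.sum() / q.sum())
```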
-
Abstract: The estimation of sea levels corresponding to high return periods is crucial for coastal planning and for the design of coastal defenses. This paper deals with the use of historical observations, that is, events that occurred before the beginning of systematic tide gauge recordings, to improve the estimation of design sea levels. Most recent publications dealing with statistical analyses applied to sea levels suggest that astronomical high tide levels and skew surges should be analyzed and modeled separately. Historical samples generally consist of observed record sea levels. Some extreme historical skew surges can easily remain unnoticed if they occur at low or moderate astronomical high tides and do not generate extreme sea levels. The exhaustiveness of historical skew surge series, which is an essential criterion for an unbiased statistical inference, can therefore not be guaranteed. This study proposes a model combining, in a single Bayesian inference procedure, information of two different natures for the calibration of the statistical distribution of skew surges: measured skew surges for the systematic period and extreme sea levels for the historical period. A data-based comparison of the proposed model with previously published approaches is presented, based on a large number of Monte Carlo simulations. The proposed model is applied to four locations on the French Atlantic and Channel coasts. Results indicate that the proposed model is more reliable and accurate than previously proposed methods that aim to integrate historical records into coastal sea level or surge statistical analyses.

Plain Language Summary: Coastal facilities must be designed so as to be protected from extreme sea levels. Sea levels at high tide are the combination of astronomical high tides, which can be predicted, and skew surges. The estimation of the statistical distribution of skew surges is usually based on the skew surges measured by tide gauges and can be improved with the use of historical information, that is, observations that occurred before the beginning of the tide gauge recordings. Extreme skew surges combined with low or moderate astronomical high tides would not necessarily generate extreme sea levels, and consequently some extreme historical skew surges could be missed. The exhaustiveness of historical information is an essential criterion for an unbiased estimation, but it cannot be guaranteed in the case of historical skew surges. The present study proposes to combine skew surges for the recent period and extreme sea levels for the historical period. The proposed model is compared to previously published approaches and appears to be more reliable and accurate. The proposed model is applied to four case studies on the French Atlantic and Channel coasts.

Key Points: The exhaustiveness of historical sea record information is demonstrated based on French Atlantic coast data. A comparative analysis of approaches to integrate historical information is carried out. The efficiency of a new method for the combination of systematic skew surges and historical records is verified.
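To illustrate the core idea of mixing exactly measured skew surges with less complete historical information in a single inference, here is a deliberately simplified sketch: it assumes a GEV distribution for annual maximum skew surges, a known perception threshold, and maximum-likelihood estimation, whereas the paper's actual model is Bayesian and couples skew surges with the astronomical tide distribution.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

def neg_log_lik(params, sys_maxima, n_hist_years, hist_exceedances, u):
    """Censored likelihood mixing systematic and historical data.

    sys_maxima       : annual max skew surges, systematic (gauged) period
    n_hist_years     : number of years covered by the historical period
    hist_exceedances : historical annual maxima known to exceed u
    u                : perception threshold (assumed known here)
    """
    c, loc, scale = params
    if scale <= 0:
        return np.inf
    # Exact observations contribute density terms.
    ll = genextreme.logpdf(sys_maxima, c, loc, scale).sum()
    ll += genextreme.logpdf(hist_exceedances, c, loc, scale).sum()
    # Historical years with no recorded extreme contribute P(max <= u).
    n_below = n_hist_years - len(hist_exceedances)
    ll += n_below * genextreme.logcdf(u, c, loc, scale)
    return -ll

sys_maxima = np.array([0.45, 0.61, 0.52, 0.70, 0.58, 0.66])  # metres (toy)
hist_exc = np.array([0.95, 1.10])  # reconstructed historical surges (toy)
res = minimize(neg_log_lik, x0=[0.1, 0.55, 0.1],
               args=(sys_maxima, 150, hist_exc, 0.9),
               method="Nelder-Mead")
print(res.x)  # GEV shape (scipy sign convention), location, scale
```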
-
In cold regions, ice jams frequently result in severe flooding due to a rapid rise in water levels upstream of the jam. Sudden floods resulting from ice jams threaten human safety and cause damage to property and infrastructure. Hence, ice-jam prediction tools can give an early warning to increase response time and minimize the possible damage. However, ice-jam prediction has always been a challenge, as no analytical method is available for this purpose. Nonetheless, ice jams form when certain hydro-meteorological conditions occur a few hours to a few days before the event. Ice-jam prediction can therefore be addressed as a binary multivariate time-series classification problem. Deep learning techniques have been widely used for time-series classification in many fields such as finance, engineering, weather forecasting, and medicine. In this research, we successfully applied convolutional neural networks (CNN), long short-term memory (LSTM) networks, and combined convolutional-long short-term memory (CNN-LSTM) networks to predict the formation of ice jams in 150 rivers in the province of Quebec (Canada). We also employed machine learning methods including support vector machines (SVM), the k-nearest neighbors classifier (KNN), decision trees, and the multilayer perceptron (MLP) for this purpose. The hydro-meteorological variables (e.g., temperature, precipitation, and snow depth) along with the corresponding jam or no-jam events were used as model inputs. Ten percent of the data were excluded from the model and set aside for testing, and 100 reshuffling and splitting iterations were applied to the remaining data, with 80% used for training and 20% for validation. The deep learning models outperformed the machine learning models. The results show that the CNN-LSTM model yields the best results in validation and testing, with F1 scores of 0.82 and 0.92, respectively. This demonstrates that CNN and LSTM models are complementary and that a combination of both further improves classification.
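A minimal sketch of the kind of CNN-LSTM classifier described above, in Keras; the window length, feature count, and layer sizes are illustrative assumptions, not the study's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Inputs: sliding windows of hydro-meteorological variables,
# e.g., 30 days x 3 features (temperature, precipitation, snow depth).
WINDOW, N_FEATURES = 30, 3

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    # Conv1D extracts short-term local patterns...
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # ...and the LSTM captures longer-range temporal dependence.
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),  # jam / no-jam probability
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.summary()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)
```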
-
Abstract: An intensity–duration–frequency (IDF) curve describes the relationship between rainfall intensity and duration for a given return period and location. Such curves are obtained through frequency analysis of rainfall data and are commonly used in infrastructure design, flood protection, water management, and urban drainage systems. However, they are typically available only at sparse locations, and data for other sites must be interpolated as the need arises. This paper describes how extreme precipitation of several durations can be interpolated to compute IDF curves on a large, sparse domain. In the absence of local data, a reconstruction of the historical meteorology is used as a covariate for interpolating extreme precipitation characteristics. This covariate is included in a hierarchical Bayesian spatial model for extreme precipitation. The model is especially well suited to a gridded covariate structure, enabling fast and precise computations. As an illustration, the methodology is used to construct IDF curves over Eastern Canada. An extensive cross-validation study shows that, at locations where data are available, the proposed method generally improves on the current practice of Environment and Climate Change Canada, which relies on a moment-based fit of the Gumbel extreme-value distribution.
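For reference, the moment-based Gumbel fit mentioned as current practice can be written in a few lines; this is a generic method-of-moments sketch, not ECCC's exact implementation.

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_moments_fit(annual_maxima):
    """Method-of-moments Gumbel fit: the scale comes from the sample
    standard deviation, the location from the sample mean."""
    x = np.asarray(annual_maxima, dtype=float)
    scale = np.sqrt(6.0) * x.std(ddof=1) / np.pi
    loc = x.mean() - EULER_GAMMA * scale
    return loc, scale

def gumbel_return_level(loc, scale, T):
    """Rainfall intensity with return period T (years)."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))

amax = np.array([22.1, 30.5, 18.9, 27.3, 35.2, 24.8, 29.0, 21.6])  # mm/h
loc, scale = gumbel_moments_fit(amax)
print(gumbel_return_level(loc, scale, T=25))  # 25-year intensity
```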
-
Extreme precipitation events can lead to disastrous floods, which are the most significant natural hazards in the Mediterranean regions. Therefore, a proper characterization of these events is crucial. Extreme events defined as annual maxima can be modeled with the generalized extreme value (GEV) distribution. Owing to spatial heterogeneity, the distribution of extremes is non-stationary in space. To take this non-stationarity into account, the parameters of the GEV distribution can be viewed as functions of covariates that convey spatial information. Such functions may be implemented as a generalized linear model (GLM) or with a more flexible non-parametric, non-linear model such as an artificial neural network (ANN). In this work, we evaluate several statistical models that combine the GEV distribution with a GLM or with an ANN for spatial interpolation of the GEV parameters. Key issues are the proper selection of the complexity level of the ANN (i.e., the number of hidden units) and the proper selection of the spatial covariates. Three sites are included in our study: a region in the French Mediterranean, the Cap Bon area in northeast Tunisia, and the Merguellil catchment in central Tunisia. The comparative analysis aims at assessing the genericity of state-of-the-art approaches for interpolating the distribution of extreme precipitation events.
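A rough sketch of the GLM variant of this idea: the GEV location parameter is written as a linear function of spatial covariates and all parameters are fitted jointly by maximum likelihood (the covariates, data, and the choice to vary only the location are illustrative assumptions; the ANN variant would replace the linear map by a small network).

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

def nll_gev_glm(theta, y, X):
    """Negative log-likelihood of a GEV whose location is a linear
    (GLM-style) function of spatial covariates.

    y : annual precipitation maxima, one entry per site-year
    X : covariate matrix aligned with y (e.g., intercept, elevation)
    """
    k = X.shape[1]
    beta = theta[:k]                      # location-model coefficients
    log_scale, shape = theta[k], theta[k + 1]
    mu = X @ beta                         # location varies in space
    # Note: scipy's shape parameter c equals minus the usual GEV xi.
    return -genextreme.logpdf(y, shape, loc=mu,
                              scale=np.exp(log_scale)).sum()

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.uniform(0, 1500, 200)])  # elev (m)
y = genextreme.rvs(-0.1, loc=20 + 0.01 * X[:, 1], scale=5,
                   random_state=rng)      # synthetic maxima
theta0 = np.array([y.mean(), 0.0, np.log(y.std()), -0.1])
res = minimize(nll_gev_glm, theta0, args=(y, X), method="Nelder-Mead")
print(res.x)
```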
-
This dissertation deals with aspects of quantifying uncertainty in wind resource assessment through the practices of uncertainty and sensitivity analysis. The objectives of this dissertation are to review and assess the quality of sensitivity analysis practices in wind resource assessment, to discourage the use of one-at-a-time sensitivity analysis and encourage the use of global sensitivity analysis instead, to introduce methods from other fields, and to showcase how uncertainty and global sensitivity analyses add value to the decision support process. This dissertation is organized in four articles: I. Review article of 102 feasibility studies: a review of sensitivity analysis practices in wind resource assessment, with a case study comparing one-at-a-time and global sensitivity analyses of the levelized cost of offshore wind energy; II. Research article: the quasi-Monte Carlo technique in global sensitivity analysis in wind resource assessment, with a case study on the United Arab Emirates; III. Research article: use of the Halphen distribution family for mean wind speed estimation, with a case study on Eastern Canada; IV. Application article: an offshore wind resource assessment study of the Persian Gulf with QuikSCAT satellite data. Articles I-III have each resulted in a peer-reviewed publication, while Article IV has been submitted. Article I offers classifications by sensitivity analysis output variable, method, application, country, and software. It reveals the lack of collective agreement on the definition of sensitivity analysis in the literature, the dominance of nonlinear models, and the prevalence of the one-at-a-time sensitivity analysis method, even though that method is only valid for linear models. Article I highlights gaps in the literature and provides evidence of pitfalls that lead to costly, erroneous wind resource assessment results. Article II shows how global sensitivity analysis offers an improvement by means of quasi-Monte Carlo, whose elaborate sampling designs enable faster convergence. Article III introduces the Halphen distribution family for the purpose of wind resource assessment. Article IV uses SeaWinds/QuikSCAT satellite data for an offshore wind resource assessment of the Persian Gulf. The main contributions to the state of the art follow. To the best of the author's knowledge, no review of sensitivity analysis in wind resource assessment was previously available in the literature; Article I offers one. Article II bridges mathematical modelling and wind resource assessment by introducing the quasi-Monte Carlo technique to wind resource assessment. Article III brings the Halphen distribution family from flood frequency analysis to wind resource assessment.
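To illustrate the quasi-Monte Carlo idea underlying Article II, the sketch below compares scrambled Sobol sampling with plain Monte Carlo on a toy integrand; the integrand and sample sizes are arbitrary illustrations, not the dissertation's cost model.

```python
import numpy as np
from scipy.stats import qmc

def model(u):
    """Toy stand-in for a wind-energy cost model: any smooth function
    of uniformly distributed inputs on [0, 1)^2."""
    return np.exp(u[:, 0]) * np.sin(np.pi * u[:, 1])

true_mean = (np.e - 1) * 2 / np.pi  # analytic value, for comparison

rng = np.random.default_rng(42)
for m in (6, 8, 10):                # 64, 256, 1024 samples
    n = 2 ** m
    u_mc = rng.random((n, 2))                        # plain Monte Carlo
    sobol = qmc.Sobol(d=2, scramble=True, seed=42)   # quasi-Monte Carlo
    u_qmc = sobol.random_base2(m=m)
    print(n,
          abs(model(u_mc).mean() - true_mean),    # MC error
          abs(model(u_qmc).mean() - true_mean))   # QMC error, smaller
```

The low-discrepancy Sobol points cover the input space more evenly than pseudo-random draws, which is what yields the faster convergence the dissertation exploits.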
-
The canalized southernmost section of Wonscotonach (the Don River) in Tkarón:to (Toronto), also known as The Narrows, is a highly disturbed urban natural landscape. Following the 1886 Don Improvement Project, the Keating Channel, and today the Port Lands Revitalization and Flood Protection Project, these Lands have been harnessed and developed through settler colonization to tame and control the once-winding river. This research-creation, in the form of a curated online exhibition and written thesis, presents a critical (re)reading of the notion of improvement, becoming allied to the pre-colonial landscape and the knowledge it carried. The exhibition and thesis develop the concept of the meander, inspired by the non-linear trajectory of the pre-canalized Don River, as a model for the curatorial. The curatorial process of 'improvement becomes a wall, and the river meanders still' began before the global COVID-19 pandemic and was subsequently derailed in March 2020. The exhibition's final form was unknowable throughout much of the curatorial process. Thus, following the meander as a research-creation technique, the curatorial process, exhibitionary structure, and content had to adapt through lingering uncertainty. This thesis, contributing to the theoretical and practical knowledge of research-creation, looks to intersections with the curatorial, following the theoretical underpinnings of Erin Manning and Brian Massumi, Natalie Loveless, and Stefanie Springgay and Sarah E. Truman. As a project untethered from institutional timelines and normative requirements to 'know a project in advance,' as well as from the conventions of a physical exhibition, this research-creation manifested through process-led, creative, and exploratory techniques (such as walking and drawing) and the slowed pace allowed by the COVID-19 pandemic's reframing of time. This research-creation exhibition and written thesis develop a responsive and resilient curatorial process deeply indebted to Land-based knowledge.
-
Extreme flood events continue to be among the most threatening natural disasters around the world due to their pronounced social, environmental, and economic impacts. Changes in the magnitude and frequency of floods have been documented in recent years, and it is expected that a changing climate will continue to affect their occurrence. Therefore, understanding the impacts of climate change through hydroclimatic simulations has become essential to prepare adaptation strategies for the future. However, confidence in flood projections is still low due to the considerable uncertainties associated with their simulation and the complexity of the local features influencing these events. The main objective of this doctoral thesis is thus to improve our understanding of the modelling uncertainties associated with the generation of flood projections and to evaluate strategies to reduce these uncertainties and increase our confidence in flood simulations. To address this objective, the project aimed at (1) quantifying the uncertainty contributions of the different elements of the modelling chain used to produce flood projections and (2) evaluating the effects of different strategies to reduce the uncertainties associated with climate and hydrological models in regions with diverse hydroclimatic conditions. A total of 96 basins located in Quebec (basins dominated by snow-related processes) and Mexico (basins dominated by rain-related processes), covering a wide range of climatic and hydrological regimes, were included in the study. The first stage consisted of decomposing the uncertainty contributions of four main uncertainty sources involved in the generation of flood projections: (1) climate models, (2) post-processing methods, (3) hydrological models, and (4) the probability distributions used in flood frequency analyses. A variance decomposition method allowed quantifying and ranking the influence of each uncertainty source over the two regions studied and by season. The results showed that the uncertainty contributions of each source vary across regions and seasons. Regions and seasons dominated by rain showed climate models as the main uncertainty source, while those dominated by snowmelt showed hydrological models as the main uncertainty contributor. These findings not only show the dangers of relying on single climate and hydrological models, but also underline the importance of regional uncertainty analyses. The second stage of this research project focused on evaluating strategies to reduce the uncertainties arising from hydrological models in flood projections. This stage included two steps: (1) analyzing the reliability of hydrological model calibration under a changing climate and (2) evaluating the effects of weighting hydrological simulations on flood projections. To address the first part, different calibration strategies were tested and evaluated using five conceptual lumped hydrological models under contrasting climate conditions, with dataset lengths varying from 2 to 21 years. The results revealed that the climatic conditions of the calibration data have larger impacts on hydrological model performance than the lengths of the climate time series. Moreover, changes in precipitation generally showed greater impacts than changes in temperature across all the basins.
These results suggest that shorter calibration and validation periods that are more representative of possible changes in climatic conditions could be more appropriate for climate change impact studies. Following these findings, the effects of different weighting strategies based on the robustness of hydrological models (under contrasting climatic conditions) were assessed on the flood projections of the studied basins. Weighting the five hydrological models based on their robustness showed some improvements over the traditional equal-weighting approach, particularly under warmer and drier conditions. Moreover, the results showed that the difference between these approaches was more pronounced for flood projections, as contrasting flood magnitudes and climate change signals were observed between the two approaches. Additional analyses performed over four selected basins using a semi-distributed, more physically based hydrological model suggested that this type of model might have added value when simulating low flows, and high flows on small basins (of about 500 km²). These results highlight once again the importance of working with ensembles of hydrological models and show the potential impacts of weighting hydrological models in climate change impact studies. The final stage of this study focused on evaluating the impacts of weighting climate simulations on flood projections. The different weighting strategies tested showed that weighting climate simulations can improve the mean hydrograph representation compared to the traditional model "democracy" approach. This improvement was mainly observed with a weighting approach proposed in this thesis that evaluates the skill of the seasonal simulated streamflow against observations. The results also revealed that weighting climate simulations based on their performance can (1) affect flood magnitudes, (2) affect the climate change signals, and (3) reduce the uncertainty spread of the resulting flood projections. These effects were particularly clear over rain-dominated basins, where climate modelling uncertainty plays the main role. These findings emphasize the need to reconsider the traditional climate model democracy approach, especially when studying processes with higher levels of climatic uncertainty. Finally, the implications of the obtained results are discussed: the main findings are put into perspective, and different ways forward are identified to keep improving our understanding of climate change impacts in hydrology and to increase our confidence in the flood projections that are essential to guide adaptation strategies for the future.
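A schematic of the kind of variance decomposition used in the first stage: given an ensemble of flood quantiles crossed over the chain's components, main-effect variance fractions can be estimated from group means. This is a simplified two-factor sketch on synthetic numbers; the thesis decomposes four sources within a formal framework.

```python
import numpy as np

rng = np.random.default_rng(1)
# Flood quantile (e.g., the 20-year flood) for every combination of
# 10 climate models x 5 hydrological models (toy ensemble).
n_cm, n_hm = 10, 5
q = (rng.normal(100, 15, size=(n_cm, 1))     # climate-model effect
     + rng.normal(0, 5, size=(1, n_hm))      # hydro-model effect
     + rng.normal(0, 3, size=(n_cm, n_hm)))  # interaction/residual

total_var = q.var()
var_cm = q.mean(axis=1).var()  # variance of climate-model means
var_hm = q.mean(axis=0).var()  # variance of hydro-model means

print("climate models :", var_cm / total_var)
print("hydro models   :", var_hm / total_var)
print("interaction    :", 1 - (var_cm + var_hm) / total_var)
```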
-
Globally, the number of people at risk from flooding has been increasing since 2000, with populations in the Global South being more vulnerable. Millions of households are displaced by disasters every year. In 2009, the city of Ouagadougou in Burkina Faso experienced its most disastrous flood ever recorded. In response, the government designed a permanent relocation plan in Yagma, a village located outside the city of Ouagadougou. The relocation plan disrupted the livelihoods of the households affected by the flood, leading many of them to return and rebuild their houses in flood-prone areas. This thesis contributes to a body of literature analyzing the legacy of postcolonialism in the flood vulnerability of poorer communities in Ouagadougou. Using a political ecology frame, it attempts to understand how the government of Burkina Faso and flood victims understand land and belongings, and how that understanding shaped the relocation program. Interviews with flood victims and government officials revealed that contrasting views are at work: on the one hand, a perspective based on technical calculations and a neo-colonialist vision of development; on the other, a grounded perspective based on relationships to the land and to each other.
-
In tropical environments such as Brazil, the frequency of rainfall-induced landslides is particularly high because of the rugged terrain, heavy rainfall, increasing urbanization, and the orographic effect of mountain ranges. Since such landslides repeatedly interfere with human activities and infrastructure, improved knowledge of the spatial and temporal prediction of the phenomenon is of interest for risk management. This study analyzes empirical rainfall thresholds, aiming to establish local- and regional-scale correlations between rainfall and the triggering of landslides in Angra dos Reis, in the State of Rio de Janeiro. A statistical analysis combining quantile regression and binary logistic regression was performed on 1640 and 526 landslides triggered by daily rainfall over a 6-year period in the municipality and the urban center of Angra dos Reis, respectively, in order to establish probabilistic rainfall-duration thresholds and assess the role of antecedent rainfall. The results show that the frequency of landslides is highly correlated with rainfall events and, surprisingly, that the thresholds in the dry season are lower than those in the wet season. The aspect of the slopes also seems to play an important role, as demonstrated by the different thresholds between the southern and northern regions. Finally, the results presented in this study provide new insight into the spatial and temporal dynamics of landslides and the rainfall conditions leading to their activation in this tropical and mountainous environment.
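Empirical rainfall thresholds of this kind are often expressed as power laws fitted at a low quantile, so that most triggering events plot above the curve. A minimal sketch with statsmodels is given below; the data are synthetic and the power-law form, quantile level, and variable names are assumptions rather than the study's exact specification.

```python
import numpy as np
import statsmodels.api as sm

# Threshold form E = a * D^b, fitted as a low quantile of
# log-rainfall against log-duration.
rng = np.random.default_rng(3)
duration = rng.uniform(1, 10, 300)                        # days (toy)
rain = 30 * duration ** 0.5 * rng.lognormal(0, 0.4, 300)  # mm (toy)

X = sm.add_constant(np.log(duration))
res = sm.QuantReg(np.log(rain), X).fit(q=0.05)  # 5% of events fall below

log_a, b = res.params
print(f"threshold: E = {np.exp(log_a):.1f} * D^{b:.2f}")
```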
-
Abstract: Resilience has become a cornerstone of risk management and disaster reduction. However, it has evolved extensively, both etymologically and conceptually, over time and across scientific disciplines, and the concept has been (re)shaped by the evolution of research and practice efforts. Long considered the opposite of vulnerability, resilience was first defined as the ability to resist, bounce back, cope with, and recover quickly from the impacts of hazards. To avoid a possible return to conditions of vulnerability and exposure to hazards, the notions of post-disaster development, transformation, and adaptation (build back better) and of anticipation, innovation, and proactivity (bounce forward) were then integrated. Today, resilience is characterized by a multitude of components and several classifications. We present a selection of 25 components used to define resilience, and an interesting linkage emerges between these components and the dimensions of risk management (prevention, preparedness, response, and recovery), offering a perspective for strengthening resilience through the development of capacities. Despite its potential, resilience is subject to challenges regarding its operationalization, effectiveness, measurement, credibility, equity, and even its nature. Nevertheless, it offers applicability and opportunities for local communities, as well as an interdisciplinary look at global challenges.
-
Seasonal snowpack deeply influences the distribution of meltwater among watercourses and groundwater. During rain-on-snow (ROS) events, the structure and properties of the different snow and ice layers dictate the quantity and timing of water flowing out of the snowpack, increasing the risk of flooding and ice jams. With ongoing climate change, a better understanding of the processes and internal properties influencing snowpack outflows is needed to predict the hydrological consequences of winter melting episodes and of increases in the frequency of ROS events. This study develops a multi-method approach to monitor key snowpack properties in a non-mountainous environment in a repeated and non-destructive way. Snowpack evolution during the winter of 2020-2021 was evaluated using a drone-based ground-penetrating radar (GPR) coupled with photogrammetry surveys conducted at the Ste-Marthe experimental watershed in Quebec, Canada. Drone-based surveys were performed over a 200 m² area with a flat and a sloped section. In addition, time domain reflectometry (TDR) measurements were used to follow water flow through the snowpack and identify drivers of the changes in snowpack conditions observed in the drone-based surveys. The experimental watershed is equipped with state-of-the-art automatic weather stations that, together with weekly snow pit measurements over the ablation period, served as a reference for the multi-method monitoring approach. Drone surveys conducted on a weekly basis were used to generate georeferenced maps of snow depth, density, snow water equivalent (SWE) and bulk liquid water content (LWC). Despite some limitations, the results show that the combination of drone-based GPR, photogrammetric surveys and TDR is very promising for assessing the spatiotemporal evolution of the key hydrological characteristics of the snowpack. For instance, the tested method detected marked differences in snowpack behaviour between the first and second weeks of the ablation period. A ROS event that occurred during the first week did not generate significant changes in snowpack density, LWC and SWE, while another that happened in the second week of ablation generated changes in all three variables. After the second week of ablation, differences in density, LWC and SWE between the flat and the sloped sections of the study area were detected by the drone-based GPR measurements. Comparison between different events was made possible by the contact-free nature of the drone-based measurements.
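One common way to turn such paired GPR and photogrammetry measurements into density and SWE estimates is sketched below; it assumes dry snow and the Kovacs permittivity-density relation, and it omits the wet-snow mixing model needed for LWC retrieval. The abstract does not specify the study's actual processing chain, so this is an illustration of the principle only.

```python
import numpy as np

C = 0.299  # speed of light in vacuum, m/ns

def swe_from_gpr(depth_m, twt_ns):
    """Estimate dry-snow density and SWE by combining photogrammetric
    snow depth with GPR two-way travel time (TWT).

    depth_m : snow depth from drone photogrammetry (m)
    twt_ns  : GPR two-way travel time through the snowpack (ns)
    """
    v = 2.0 * depth_m / twt_ns          # radar velocity in snow, m/ns
    eps = (C / v) ** 2                  # relative permittivity
    rho = (np.sqrt(eps) - 1.0) / 0.845  # Kovacs dry-snow relation, g/cm^3
    swe_mm = rho * 1000.0 * depth_m     # SWE in mm water equivalent
    return rho, swe_mm

rho, swe = swe_from_gpr(depth_m=0.80, twt_ns=7.0)
print(f"density ~ {rho:.3f} g/cm3, SWE ~ {swe:.0f} mm")
```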
-
Abstract: Background: Posttraumatic stress disorder (PTSD) has been hailed by some as the emblematic mental disorder of the COVID-19 pandemic, on the assumption that PTSD's life-threat criterion was met de facto. More plausible outcomes like adjustment disorder (AD) have been overlooked. Methods: An online cross-sectional survey was launched in the initial stage of the pandemic using a convenience sample of 5,913 adults to compare the prevalence of COVID-related probable PTSD with that of probable AD. The abridged Impact of Event Scale – Revised (IES-6) assessed the severity of trauma- and stressor-related symptoms over the previous week. Demographic and pandemic-related data (e.g., receiving a formal diagnosis of COVID-19, job loss, loss of a loved one, confinement, material hardship) were collected. A Classification and Regression Tree analysis was conducted to uncover the pandemic experiences leading to clinical 'caseness'. Caseness was defined by a score > 9 on the IES-6 symptom measure and further characterized as PTSD or AD depending on whether the Peritraumatic Distress Inventory's life-threat item was endorsed or not. Results: The participants were predominantly Caucasian (72.8%), women (79.2%), university educated (85%), with a mean age of 42.22 (SD = 15.24) years; 3,647 participants (61.7%; 95% CI [60.4, 63.0]) met the threshold for caseness. However, when perceived life-threat was accounted for, only 6.7% (95% CI [6.1, 7.4]) were classified as PTSD cases, and 55% (95% CI [53.7, 56.2]) as AD cases. Among the AD cases, three distinct profiles emerged, marked by (i) a worst personal pandemic experience eliciting intense fear, helplessness or horror (in the absence, however, of any life-threat), (ii) a pandemic experience eliciting sadness/grief, and (iii) worrying intensely about the safety of significant others. Conclusions: Studies treating the life-threat criterion as met de facto during the pandemic are, on most counts, mislabeling AD as PTSD. This misconception obscures the various AD-related idioms of distress that emerged during the pandemic and the actual treatment needs.
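The case-classification rule described above reduces to a simple decision procedure; the sketch below is a schematic rendering with hypothetical variable names, not the study's analysis code.

```python
def classify_case(ies6_score: float, life_threat_endorsed: bool) -> str:
    """Schematic caseness rule from the study: an IES-6 score > 9
    defines a clinical case; the Peritraumatic Distress Inventory's
    life-threat item then separates probable PTSD from probable AD."""
    if ies6_score <= 9:
        return "non-case"
    return "probable PTSD" if life_threat_endorsed else "probable AD"

print(classify_case(12, False))  # -> probable AD
print(classify_case(12, True))   # -> probable PTSD
```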
-
Recently, in situ YSI EXO2 phycocyanin fluorescence probes have been widely deployed as a means to determine cyanobacterial abundance in drinking water sources, yet few studies have evaluated the effects of natural organic matter (NOM) and ambient water temperature on the probe readings. In this study, Suwannee River NOM was added to laboratory-cultivated cyanobacterial species to test the performance of the phycocyanin probe. The impact of temperature on phycocyanin fluorescence was evaluated by monitoring the laboratory-cultivated cyanobacterial species and extracted phycocyanin pigment. Additionally, in situ phycocyanin fluorescence of field samples from the water intake of a drinking water treatment plant (DWTP) in 2018 was compared with laboratory taxonomic analyses of grab samples. We found that: (1) the presence of Suwannee River NOM decreases cell-bound cyanobacterial phycocyanin readings; (2) increasing ambient water temperature reduces dissolved and cell-bound cyanobacterial phycocyanin readings; (3) field phycocyanin probe readings correlated significantly with total cyanobacterial biovolume (R = 0.73, p < 0.1), and the relationship depends on the biovolume of the dominant cyanobacterial species; (4) phycocyanin probe readings have a strong positive correlation with natural light intensity; and (5) probe users should be fully aware of the sources of interference when interpreting the results and should use the other physico-chemical parameter data simultaneously generated by the instrument to improve the probe's measurements.
-
Health-related concerns about the cyanobacteria-laden sludge of drinking water treatment plants (DWTPs) have been raised in the past few years. Microscopic taxonomy, shotgun metagenomic sequencing, and microcystin (MC) measurement were applied to study the fate of cyanobacteria and cyanotoxins after controlled sludge storage (stagnation) in the dark, for 7 to 38 days, in a full-scale drinking water treatment plant. On four of the eight sampling dates, cyanobacterial cell growth during sludge stagnation was observed in the total taxonomic cell counts. The highest observed cell growth was 96% after 16 days of stagnation. Cell growth was dominated by potential MC producers such as Microcystis, Aphanocapsa, Chroococcus, and Dolichospermum. Shotgun metagenomic sequencing revealed that stagnation stress shifts the cyanobacterial communities from the stress-sensitive order Nostocales (e.g., Dolichospermum) towards less compromised orders and potential MC producers such as Chroococcales (e.g., Microcystis) and Synechococcales (e.g., Synechococcus). The relative increase in cyanotoxin producers presents a health challenge when the supernatant of the stored sludge is recycled to the head of the DWTP or discharged into the source. These findings emphasize the importance of a strategy to manage cyanobacteria-laden sludge and suggest that practical approaches be adopted to control the health and environmental impacts of cyanobacteria and cyanotoxins in sludge.
-
The excessive proliferation of cyanobacteria in surface waters is a widespread problem worldwide, leading to the contamination of drinking water sources. Short- and long-term solutions for managing cyanobacterial blooms are needed for drinking water supplies. The goal of this research was to investigate cyanobacterial community composition using shotgun metagenomics in a short-term, in situ mesocosm experiment in two lakes following coagulation with ferric sulfate (Fe2(SO4)3) as an option for source water treatment. Among the nutrient parameters, dissolved nitrogen was related to Microcystis in both Missisquoi Bay and Petit Lac St. François, while the presence of Synechococcus was related to total nitrogen, dissolved nitrogen, dissolved organic carbon, and dissolved phosphorus. Results from the shotgun metagenomic sequencing showed that Dolichospermum and Microcystis were the dominant genera in all of the mesocosms at the beginning of the sampling period in Missisquoi Bay and Petit Lac St. François, respectively. Potentially toxigenic genera such as Microcystis were correlated with intracellular microcystin concentrations. A principal component analysis showed a change in cyanobacterial composition at the genus level in the mesocosms after two days, which varied across the studied sites and sampling times. Cyanobacterial community richness and diversity did not change significantly after coagulation with Fe2(SO4)3 in any of the mesocosms at either site. The use of Fe2(SO4)3 for onsite source water treatment should take into account its impact on cyanobacterial community structure as well as the reduction of toxin concentrations.
-
Freshwater bodies and, consequently, the sources of drinking water treatment plants (DWTPs) are increasingly facing toxic cyanobacterial blooms. Even though conventional treatment processes, including coagulation, flocculation, sedimentation, and filtration, can control cyanobacteria and cell-bound cyanotoxins, these processes may encounter challenges such as inefficient removal of dissolved metabolites and cyanobacterial cell breakthrough. Furthermore, conventional treatment processes may lead to the accumulation of cyanobacterial cells and cyanotoxins in sludge. Pre-oxidation can enhance coagulation efficiency, as it provides the first barrier against cyanobacteria and cyanotoxins and decreases cell accumulation in DWTP sludge. This critical review aims to: (i) evaluate the state of the science of cyanobacteria and cyanotoxin management throughout DWTPs, including their associated sludge, and (ii) develop a decision framework to manage cyanobacteria and cyanotoxins in DWTPs and sludge. The review found that pre-oxidation studies based on laboratory cultures may not represent pre-oxidation efficacy for real blooms, and that expressing exposure with the common unit CT (residual oxidant concentration × contact time) allows a proper comparison of cyanobacteria pre-oxidation efficiency. Recently reported challenges concerning cyanobacterial survival and growth in sludge, alongside cell lysis and cyanotoxin release, have raised health and technical concerns with regard to sludge storage and the recycling of sludge supernatant to the head of DWTPs. According to the review, oxidation is not a feasible option for handling cyanobacteria-laden sludge, due to its low cell and cyanotoxin removal efficacy. Based on the reviewed literature, a decision framework is proposed to manage cyanobacteria, cyanotoxins, and their associated sludge in DWTPs.
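The CT exposure unit mentioned above accounts for oxidant decay during contact time. A minimal sketch of its computation is given below, assuming first-order decay of the oxidant residual; the decay model and the numbers are illustrative, not values from the review.

```python
import numpy as np

def ct_first_order(c0, k, t_contact):
    """CT (mg*min/L) for an oxidant residual decaying as
    C(t) = c0 * exp(-k * t).

    c0        : initial residual concentration (mg/L)
    k         : first-order decay constant (1/min)
    t_contact : contact time (min)
    """
    # CT is the integral of C(t) dt from 0 to t_contact.
    return c0 * (1.0 - np.exp(-k * t_contact)) / k

# e.g., a 2 mg/L residual decaying with k = 0.1 /min over 20 min
print(f"CT = {ct_first_order(2.0, 0.1, 20.0):.1f} mg*min/L")
```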