Complete bibliography (313 resources)
-
Abstract. We use a high-resolution regional climate model to investigate changes in Atlantic tropical cyclone (TC) activity during the mid-Holocene (MH; 6000 years BP), a period with a larger-amplitude seasonal cycle than today. This period was characterized by increased boreal summer insolation over the Northern Hemisphere, a vegetated Sahara, and reduced airborne dust concentrations. A set of sensitivity experiments was conducted in which solar insolation, vegetation, and dust concentrations were changed in turn to disentangle their impacts on TC activity in the Atlantic Ocean. Results show that the greening of the Sahara and reduced dust loadings (MHGS+RD) lead to a larger increase in the number of Atlantic TCs (27 %) relative to the pre-industrial (PI) climate than the orbital forcing alone (MHPMIP; 9 %). TC seasonality is also strongly modified in the MH climate: TC activity decreases at the beginning of the hurricane season (June to August), and its maximum shifts towards October and November in the MHGS+RD experiment relative to PI. The MH experiments simulate stronger hurricanes than PI, in line with future projections, and suggest longer-lasting cyclones. Our results also show that changes in African easterly waves do not substantially alter the frequency or intensity of TCs, although they may shift the location of TC genesis. This work highlights the importance of considering vegetation and dust changes over the Sahara region when investigating TC activity under a different climate state.
-
Abstract. The continental divide along the spine of the Canadian Rockies in southwestern Canada is a critical headwater region for hydrological drainages to the Pacific, Arctic, and Atlantic oceans. Major flooding events are typically attributed to heavy precipitation on its eastern side due to upslope (easterly) flows. Precipitation can also occur on the western side of the divide when moisture originating from the Pacific Ocean encounters the west-facing slopes of the Canadian Rockies. Often, storms propagating across the divide result in significant precipitation on both sides. Meteorological data over this critical region are sparse, with few stations located at high elevations. Given the importance of all these types of events, the Storms and Precipitation Across the continental Divide Experiment (SPADE) was initiated to enhance our knowledge of the atmospheric processes leading to storms and precipitation on either side of the continental divide. This was accomplished by installing specialized meteorological instrumentation on both sides of the continental divide and carrying out manual observations during an intensive field campaign from 24 April to 26 June 2019. On the eastern side, there were two field sites: (i) Fortress Mountain Powerline (2076 m a.s.l.) and (ii) Fortress Junction Service, located in a high-elevation valley (1580 m a.s.l.). On the western side, Nipika Mountain Resort, also located in a valley (1087 m a.s.l.), was chosen as a field site. Various meteorological instruments were deployed, including two Doppler light detection and ranging instruments (lidars), three vertically pointing micro rain radars, and three optical disdrometers. The three main sites were nearly identically instrumented, and observers were on site at Fortress Mountain Powerline and Nipika Mountain Resort during precipitation events to take manual observations of precipitation type and microphotographs of solid particles. The objective of the field campaign was to gather high-temporal-frequency meteorological data and to compare the different conditions on either side of the divide to study the precipitation processes that can lead to catastrophic flooding in the region. Details on the field sites, instrumentation used, and collection methods are discussed. Data from the study are publicly accessible from the Federated Research Data Repository at https://doi.org/10.20383/101.0221 (Thériault et al., 2020). This dataset will be used to study atmospheric conditions associated with precipitation events documented simultaneously on either side of a continental divide. This paper also provides a sample of the data gathered during a precipitation event.
-
Abstract Large-scale flood risk analyses are fundamental to many applications requiring national or international overviews of flood risk. While large-scale climate patterns such as teleconnections and climate change become important at this scale, it remains a challenge to represent the local hydrological cycle over various watersheds in a manner that is physically consistent with climate. As a result, global models tend to suffer from a lack of available scenarios and flexibility that are key for planners, relief organizations, regulators, and the financial services industry to analyze the socioeconomic, demographic, and climatic factors affecting exposure. Here we introduce a data-driven, global, fast, flexible, and climate-consistent flood risk modeling framework for applications that do not necessarily require high-resolution flood mapping. We use statistical and machine learning methods to examine the relationship between historical flood occurrence and impact from the Dartmouth Flood Observatory (1985–2017), and climatic, watershed, and socioeconomic factors for 4,734 HydroSHEDS watersheds globally. Using bias-corrected output from the NCAR CESM Large Ensemble (1980–2020), and the fitted statistical relationships, we simulate 1 million years of events worldwide along with the population displaced in each event. We discuss potential applications of the model and present global flood hazard and risk maps. The main value of this global flood model lies in its ability to quickly simulate realistic flood events at a resolution that is useful for large-scale socioeconomic and financial planning, yet we expect it to be useful to climate and natural hazard scientists who are interested in the socioeconomic impacts of climate.
Plain Language Summary: Floods are among the deadliest and most damaging natural disasters. To protect against large-scale flood risk, stakeholders need to understand how floods can occur and their potential impacts. Stakeholders rely on global flood models to provide them with plausible flood scenarios around the world. For a flood model to operate at the global scale, climate effects must be represented in addition to hydrological ones to demonstrate how rivers can overflow throughout the world each year. Global flood models often lack the flexibility and variety of scenarios required by many stakeholders because they are computationally demanding. Designed for applications where detailed local flood impacts are not required, we introduce a rapid and flexible global flood model that can generate hundreds of thousands of scenarios everywhere in the world in a matter of minutes. The model is based on a historical flood database from 1985 to 2017 that is represented using an algorithm that learns from the data. With this model, the output from a global climate model is used to simulate a large sample of floods for risk analyses that are coherent with the global climate. Maps of the annual average number of floods and the number of displaced people illustrate the model's results.
Key Points: We present a global flood model built using machine learning methods fitted with historical flood occurrences and impacts. Forced with a climate model, the global flood model is fast, flexible, and consistent with the global climate. We provide global flood hazard (occurrence) and risk (population displaced) maps over 4,734 watersheds.
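As a rough illustration of the kind of data-driven occurrence modeling described above (a sketch under assumptions, not the authors' actual framework; the covariates and data below are hypothetical placeholders), one could fit a Poisson regression of annual flood counts on watershed covariates and then simulate synthetic years from the fitted rates:

import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(42)
n_watersheds = 4734
# Hypothetical standardized covariates, e.g. annual max precipitation,
# soil moisture, population density.
X = rng.normal(size=(n_watersheds, 3))
true_rate = np.exp(0.3 * X[:, 0] + 0.1 * X[:, 1] - 0.2)
y = rng.poisson(true_rate)                  # observed annual flood counts

model = PoissonRegressor(alpha=1e-3).fit(X, y)
lam = model.predict(X)                      # expected floods per watershed-year

# Simulate many synthetic years of flood occurrence from the fitted rates.
n_years = 1000
events = rng.poisson(lam, size=(n_years, n_watersheds))
print("mean simulated floods/year worldwide:", events.sum(axis=1).mean())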
-
Abstract A fundamental issue when evaluating the simulation of precipitation is the difficulty of quantifying specific sources of errors and recognizing compensation of errors. We assess how well a large ensemble of high-resolution simulations represents the precipitation associated with strong cyclones. We propose a framework to break down precipitation errors according to different dynamical (vertical velocity) and thermodynamical (vertically integrated water vapor) regimes and the frequency and intensity of precipitation. This approach approximates the error in the total precipitation of each regime as the sum of three terms describing errors in the large-scale environmental conditions, the frequency of precipitation, and its intensity. We show that simulations produce precipitation too often, that its intensity is too weak, that errors are larger for weak than for strong dynamical forcing, and that biases in the vertically integrated water vapor can be large. Using the error breakdown presented above, we define four new error metrics differing in the degree to which they include the compensation of errors. We show that convection-permitting simulations consistently improve the simulation of precipitation compared to coarser-resolution simulations using parameterized convection, and that these improvements are revealed by our new approach but not by traditional metrics, which can be affected by compensating errors. These results suggest that convection-permitting models are more likely to produce better results for the right reasons. We conclude that the novel decomposition and error metrics presented in this study give a useful framework that provides physical insights about the sources of errors and a reliable quantification of errors.
Plain Language Summary: Simulations of complex physical processes always entail various sources of errors. These errors can be of different sign and can consequently cancel each other out when using traditional performance metrics such as the bias. We present a formal framework that decomposes precipitation errors into three terms describing different aspects of the rainfall field: the large-scale environmental conditions, the frequency of rainfall, and its intensity. We apply the methodology to a large ensemble of high-resolution simulations representing the precipitation associated with strong cyclones in eastern Australia. We show that simulations produce precipitation too often, with an intensity that is too weak, leading to strong error compensation. We further define new error metrics that explicitly quantify the degree of error compensation when simulating precipitation. We show that convection-permitting simulations consistently improve performance compared to coarser-resolution simulations using parameterized convection, and that these improvements are only revealed by the new error metrics and are not apparent in traditional metrics (e.g., bias).
Key Points: Multiple high-resolution simulations produce precipitation too often with underestimated intensity, leading to strong error compensation. Errors in precipitation are quantified using novel metrics that prevent error compensation, showing value compared with traditional metrics. Convection-permitting simulations represent precipitation better than simulations using parameterized convection.
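A minimal sketch of such a first-order error breakdown (our illustrative reading of the three-term decomposition; the paper's exact formulation may differ): writing the total precipitation of a regime as P = N * f * i, where N counts occurrences of the large-scale environment, f is the frequency of precipitation when the regime occurs, and i is its mean intensity, the model-minus-observation error splits into one term per factor plus a higher-order residual:

def error_terms(N_m, f_m, i_m, N_o, f_o, i_o):
    """First-order breakdown of the total precipitation error of one regime
    (subscripts: _m model, _o observed)."""
    dP = N_m * f_m * i_m - N_o * f_o * i_o          # total error
    env = (N_m - N_o) * f_o * i_o                   # environment term
    freq = N_o * (f_m - f_o) * i_o                  # frequency term
    inten = N_o * f_o * (i_m - i_o)                 # intensity term
    resid = dP - (env + freq + inten)               # higher-order residual
    return env, freq, inten, resid

# Example: a model that rains too often (f) but too weakly (i); the frequency
# and intensity terms partly cancel, which a plain bias metric would hide.
print(error_terms(N_m=100, f_m=0.6, i_m=2.0, N_o=100, f_o=0.4, i_o=3.0))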
-
Abstract The collection efficiency of a typical precipitation gauge-shield configuration decreases with increasing wind speed, with high scatter for a given wind speed. This scatter arises in part from variability in the characteristics of falling snow and in atmospheric turbulence. This study uses weighing gauge data collected at the Marshall Field Site near Boulder, Colorado, during the WMO Solid Precipitation Intercomparison Experiment (SPICE). Particle diameter and fall speed data from a laser disdrometer were used to show that the scatter in the collection efficiency can be reduced by considering the fall speed of solid precipitation particles. The collection efficiency was divided into two classes depending on the mean particle fall speed measured during precipitation events. Slower-falling particles were associated with a lower collection efficiency. A new transfer function (i.e., the relationship between collection efficiency and other meteorological variables, such as wind speed or air temperature) that includes the fall speed of the hydrometeors was developed. The root-mean-square error of the precipitation adjusted with the new transfer function, with respect to a weighing gauge placed in a double fence intercomparison reference, was lower than that obtained with previously developed transfer functions that consider only wind speed and air temperature. This shows that the fall speed of solid precipitation measured with a laser disdrometer accounts for a large part of the observed scatter in weighing gauge collection efficiency.
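To make the idea of a fall-speed-aware transfer function concrete, here is a hedged sketch; the functional form (an exponential decay in wind speed whose rate depends on fall speed) and all numbers are assumptions for illustration, not the function derived in the study:

import numpy as np
from scipy.optimize import curve_fit

def ce_model(X, a, b):
    wind, fall_speed = X
    # Slower-falling particles (small fall_speed) lose catch faster with wind.
    return np.exp(-a * wind / (fall_speed + b))

# Hypothetical per-event data: mean wind (m/s), mean particle fall speed (m/s),
# and the observed ratio of gauge catch to the double-fence reference.
wind = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 2.0, 4.0])
vf = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 2.0, 2.0])
ce_obs = np.array([0.90, 0.75, 0.60, 0.48, 0.40, 0.88, 0.72])

(a, b), _ = curve_fit(ce_model, (wind, vf), ce_obs, p0=(0.3, 0.5))
adjusted = 1.0 / ce_model((wind, vf), a, b)   # factor to apply to gauge totals
print(a, b, adjusted)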
-
Abstract In spring 2011, an unprecedented flood hit the complex eastern United States (U.S.)–Canada transboundary Lake Champlain–Richelieu River (LCRR) Basin, destroying property and negatively impacting agriculture and fish habitats. The damages, covered by the Governments of Canada and the U.S., were estimated at C$90M. This natural disaster motivated the study of mitigation measures to prevent such disasters from reoccurring. When evaluating flood risks, long-term climate change should be taken into account so that the mitigation measures adopted remain relevant in the future. To assess the impacts of climate change on flood risks in the LCRR basin, three bias-corrected multi-resolution ensembles of climate projections for two greenhouse gas concentration scenarios were used to force a state-of-the-art, high-resolution, distributed hydrological model. The analysis of the hydrological simulations indicates that the 20-year return period flood (corresponding to a medium flood) should decrease by 8% to 35% for the end-of-century (2070–2099) time horizon under the high-emission representative concentration pathway (RCP) 8.5 scenario. The reduction in flood risk is explained by a decrease in snow accumulation and an increase in evapotranspiration expected with the future warming of the region. Nevertheless, due to the large inter-annual climate variability, short-term flood probabilities should remain similar to those experienced in the recent past.
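As an illustration of how a 20-year return period flood can be estimated from simulated streamflow (an assumed generalized extreme value approach; the study's exact method may differ, and the flows below are synthetic):

import numpy as np
from scipy.stats import genextreme

# Hypothetical annual-maximum flows (m^3/s) for a 30-year period.
annual_max = genextreme.rvs(c=-0.1, loc=800.0, scale=120.0,
                            size=30, random_state=42)

c, loc, scale = genextreme.fit(annual_max)
q20 = genextreme.ppf(1.0 - 1.0 / 20.0, c, loc=loc, scale=scale)
print(f"20-year return level: {q20:.0f} m^3/s")

# Fitting one GEV to a reference period and another to a future period, then
# comparing the two q20 values, gives the percent change in the 20-year flood.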
-
Abstract Timothy (Phleum pratense L.) is expected to be more affected by climate change than other forage grasses. Therefore, alternatives to timothy, such as tall fescue [Schedonorus arundinaceus (Schreb.) Dumort.], meadow fescue [S. pratensis (Huds.) P. Beauv.], or meadow bromegrass (Bromus biebersteinii Roem. & Schult.), should be explored. Our objective was to simulate and compare the yield and nutritive value of four alfalfa (Medicago sativa L.)–grass mixtures and annual crops grown on two virtual dairy farms representative of eastern Canada under future climate conditions. The Integrated Farm System Model (IFSM) was used for these projections under the reference (1971–2000), near future (2020–2049), and distant future (2050–2079) climates for two climatically contrasting agricultural areas in eastern Canada (eastern Quebec; southwestern Quebec). In both future periods, annual forage dry matter (DM) yields of the four alfalfa–grass mixtures are projected to increase because of additional harvests, with greater DM yield increases projected in the colder area than in the warmer area. In both areas, the highest yield increase is projected for the alfalfa–tall fescue mixture and the lowest for the alfalfa–timothy mixture. The nutritive value of all mixtures should increase due to a greater proportion of alfalfa. In both areas, yields of silage and grain corn (Zea mays L.) and soybean [Glycine max (L.) Merr.] are projected to increase, but not those of wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.). Tall fescue, meadow bromegrass, and meadow fescue are adequate alternatives to timothy grown in association with alfalfa under future climate conditions.
Core Ideas: Forage yields of alfalfa–grass mixtures are projected to increase due to additional harvests. The mixture with tall fescue is projected to increase the most and that with timothy the least. Tall fescue, meadow fescue, and meadow bromegrass are valuable alternatives to timothy. Nutritive value is projected to increase due to more alfalfa in the mixture. Corn and soybean grain yields are projected to increase, but not those of wheat and barley.
-
Abstract Accurate snowfall measurement is challenging because it depends on the precipitation gauge used, meteorological conditions, and the precipitation microphysics. Upstream of weighing gauges, the flow field is disturbed by the gauge and any shielding used, usually creating an updraft that deflects solid precipitation away from the gauge and results in significant undercatch. Wind shields are often used with weighing gauges to reduce this updraft, and transfer functions are required to adjust the snowfall measurements for gauge undercatch. Using these functions reduces the bias in precipitation measurement but not the root-mean-square error (RMSE). In this study, the accuracy of the Hotplate precipitation gauge was compared to that of standard unshielded and shielded weighing gauges using data collected during the WMO Solid Precipitation Intercomparison Experiment (SPICE) program. The analysis shows that the bias of the Hotplate precipitation gauge after wind correction is near zero and similar to that of wind-corrected weighing gauges. The RMSE of the Hotplate precipitation gauge measurements is lower than that of weighing gauges (with or without an Alter shield) for wind speeds up to 5 m s⁻¹, the wind speed limit at which sufficient data were available. This study shows that the Hotplate precipitation gauge has a low bias and RMSE due to its aerodynamic shape, making its performance mostly independent of the type of solid precipitation.
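For concreteness, the two comparison metrics can be computed per wind-speed bin as follows (a small sketch with made-up numbers, not SPICE data):

import numpy as np

def bias_rmse(gauge, ref):
    err = gauge - ref
    return err.mean(), np.sqrt((err ** 2).mean())

# Hypothetical wind-corrected event totals (mm) and the DFIR reference.
wind = np.array([1.2, 2.5, 3.1, 4.4, 4.9])
hotplate = np.array([2.0, 1.6, 1.1, 0.9, 0.7])
reference = np.array([2.1, 1.5, 1.2, 1.0, 0.8])

for lo, hi in [(0.0, 2.5), (2.5, 5.0)]:
    m = (wind >= lo) & (wind < hi)
    b, r = bias_rmse(hotplate[m], reference[m])
    print(f"{lo}-{hi} m/s: bias={b:+.2f} mm, RMSE={r:.2f} mm")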
-
Abstract Atmospheric blockings are generally associated with large-scale high-pressure systems that interrupt the west-to-east atmospheric flow in mid and high latitudes. Blockings cause several days of quasi-stationary weather conditions and can therefore result in monthly or seasonal climate anomalies and extreme weather events in the affected regions. In this paper, the long-term coupled CERA-20C reanalysis data from 1901 to 2010 are used to evaluate the links between blocking events over the North Atlantic north of 35° N and atmospheric and oceanic modes of climate variability on decadal time scales. This study indicates more frequent and longer-lasting blocking events than previous studies using other reanalysis products. A strong relationship was found between North Atlantic blocking events and the North Atlantic Oscillation (NAO), Atlantic Multidecadal Oscillation (AMO), and Baffin Island–West Atlantic (BWA) indices in fall, winter, and spring. More blocking events occur during the negative phases of the NAO index and the positive phases of the BWA mode. In some situations, the BWA patterns provide clearer links with North Atlantic blocking occurrence than the NAO alone. The correlation between synchronous occurrences of AMO and blocking is generally weak, although it increases for a lag of about 6–10 years. Convergent cross mapping (CCM) furthermore demonstrates a significant two-way causal effect between blocking occurrences and the NAO and BWA indices. Finally, while we find no significant trends in blocking frequencies over the last 110 years in the Northern Hemisphere, these events have become longer lasting in summer and fall, and more intense in spring, in the North Atlantic.
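The blocking detection method itself is not detailed in the abstract; as a hedged sketch of one widely used approach, a Tibaldi and Molteni (1990) style reversal of meridional 500-hPa geopotential height gradients can be coded as follows (the height profile below is idealized):

import numpy as np

def is_blocked(z500, lats, lat0=60.0, lat_s=40.0, lat_n=80.0):
    """True when gradients indicate blocked flow at one longitude:
    reversed (positive) gradient to the south and a strong negative
    gradient to the north of the central latitude."""
    z = lambda lat: np.interp(lat, lats, z500)
    ghgs = (z(lat0) - z(lat_s)) / (lat0 - lat_s)   # southern gradient (m/deg)
    ghgn = (z(lat_n) - z(lat0)) / (lat_n - lat0)   # northern gradient (m/deg)
    return (ghgs > 0.0) and (ghgn < -10.0)

lats = np.arange(30.0, 85.1, 2.5)
z500 = 5600.0 + 2.0 * (lats - 30.0)        # height rising toward 60 N (reversal)
z500[lats > 60.0] -= 30.0 * (lats[lats > 60.0] - 60.0)  # sharp drop poleward
print(is_blocked(z500, lats))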
-
Abstract Digital leaf physiognomy (DLP) is considered one of the most promising methods for estimating past climate. However, current models built using the DLP data set still lack precision, especially for mean annual precipitation (MAP). To improve predictive power, we developed five machine learning (ML) models for mean annual temperature (MAT) and MAP, respectively, and then tested the precision of these models and some of their averaged combinations against that obtained from other models. The precision of all models was assessed using a repeated stratified 10-fold cross-validation. For MAT, three combinations of models (R² = 0.77) presented moderate improvements in precision over the multiple linear regression (MLR) model (R² = 0.68). For ln(MAP), the averaging of the support vector machine (SVM) and boosting models improved the R² from 0.19 to 0.63 compared with the MLR model. For MAP, the R² of this model combination was 0.49, much better than that of the artificial neural network (ANN) model (R² = 0.21). Even the bagging model, which had the lowest R² (0.37) for ln(MAP), demonstrated better precision (R² = 0.27) for MAP. Our palaeoclimate estimates for nine fossil floras were also more accurate, as they were in better agreement with independent paleoclimate evidence. Our study confirms that our ML models and their averaging can improve paleoclimatic reconstructions, providing a better understanding of the relationship between climate and leaf physiognomy.
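A hedged scikit-learn sketch of the general recipe (model averaging scored by repeated cross-validation) is shown below; the data are random stand-ins for leaf-physiognomy traits, and plain repeated K-fold is used here since scikit-learn's stratified splitter targets classification:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, VotingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                 # hypothetical leaf-trait predictors
mat = 15.0 + 5.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0.0, 2.0, 200)

# Average the predictions of an SVM and a boosting model.
avg = VotingRegressor([("svm", SVR(C=10.0)),
                       ("boost", GradientBoostingRegressor())])
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = cross_val_score(avg, X, mat, scoring="r2", cv=cv)
print(f"mean cross-validated R^2: {scores.mean():.2f}")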
-
Abstract. In the Arctic, during polar night and early spring, ice clouds fall into two leading types (TICs): (1) TIC1 clouds, characterized by a large concentration of very small ice crystals, and (2) TIC2 clouds, characterized by a low concentration of large ice crystals. Using a suitable parameterization of heterogeneous ice nucleation is essential for properly representing ice clouds in meteorological and climate models and subsequently understanding their interactions with aerosols and radiation. Here, we describe a new parameterization for ice crystal formation by heterogeneous nucleation in water-subsaturated conditions, coupled to aerosol chemistry in the Weather Research and Forecasting model coupled with chemistry (WRF-Chem). The parameterization is implemented in the Milbrandt and Yau (2005a, b) two-moment cloud microphysics scheme, and we assess how the WRF-Chem model responds to the run-time interaction between chemistry and the new parameterization. Well-documented reference cases provided us with in situ data from the spring 2008 Indirect and Semi-Direct Aerosol Campaign (ISDAC) over Alaska. Our analysis reveals that the new parameterization clearly improves the representation of the ice water content (IWC) in polluted or unpolluted air masses and shows the poor performance of the reference parameterization in representing ice clouds with low IWC. The new parameterization is able to represent TIC1 and TIC2 microphysical characteristics at the top of the clouds, where heterogeneous ice nucleation is most likely occurring, even with the known bias of simulated aerosols by WRF-Chem over the Arctic.
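The new parameterization itself is not reproduced here; as a generic illustration of how ice nucleating particle (INP) concentrations can be coupled to simulated aerosol concentrations, below is the widely used DeMott et al. (2010) relation (a different, immersion-freezing scheme; constants as commonly cited, to be verified against the original paper before use):

import numpy as np

def inp_demott2010(temp_k, n_aer05):
    """INP concentration (per standard liter) from temperature (K) and the
    number of aerosols larger than 0.5 um (per standard cm^3)."""
    a, b, c, d = 5.94e-5, 3.33, 0.0264, 0.0033
    dt = 273.16 - temp_k                    # supercooling (K)
    return a * dt ** b * n_aer05 ** (c * dt + d)

# Polluted vs. clean air mass at -25 C: more aerosols yield more INP, hence
# more numerous and smaller ice crystals (TIC1-like vs. TIC2-like).
print(inp_demott2010(248.15, 50.0), inp_demott2010(248.15, 1.0))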
-
Abstract Compound events (CEs) are weather and climate events that result from multiple hazards or drivers with the potential to cause severe socio-economic impacts. Compared with isolated hazards, the multiple hazards/drivers associated with CEs can lead to higher economic losses and death tolls. Here, we provide the first analysis of multiple multivariate CEs potentially causing high-impact floods, droughts, and fires. Using observations and reanalysis data during 1980–2014, we analyse 27 hazard pairs and provide the first spatial estimates of their occurrences on the global scale. We identify hotspots of multivariate CEs including many socio-economically important regions such as North America, Russia and western Europe. We analyse the relative importance of different multivariate CEs in six continental regions to highlight the CEs posing the highest risk. Our results provide initial guidance for assessing the regional risk of CEs and an observationally based dataset to aid the evaluation of climate models for simulating multivariate CEs.
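A minimal sketch of how joint occurrences of two hazard drivers can be counted against an independence baseline (synthetic data; the variables and thresholds are placeholders for the 27 hazard pairs analysed):

import numpy as np

rng = np.random.default_rng(7)
t = rng.normal(size=(420, 36, 72))           # monthly temperature anomalies
p = -0.4 * t + rng.normal(size=t.shape)      # correlated dryness indicator

hot = t > np.quantile(t, 0.9, axis=0)        # local 90th-percentile exceedance
dry = p > np.quantile(p, 0.9, axis=0)
compound = hot & dry

# Fraction of months with a joint exceedance at each grid cell; independence
# would give roughly 0.01, so higher values flag compound-event hotspots.
freq = compound.mean(axis=0)
print(freq.mean())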
-
Abstract. Several sets of reference regions have been used in the literature for the regional synthesis of observed and modelled climate and climate change information. A popular example is the series of reference regions used in the Intergovernmental Panel on Climate Change (IPCC) Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Adaptation (SREX). The SREX regions were slightly modified for the Fifth Assessment Report of the IPCC and used for reporting subcontinental observed and projected changes over a reduced number (33) of climatologically consistent regions encompassing a representative number of grid boxes. These regions are intended to allow analysis of atmospheric data over broad land or ocean regions and have been used as the basis for several popular spatially aggregated datasets, such as the Seasonal Mean Temperature and Precipitation in IPCC Regions for CMIP5 dataset. We present an updated version of the reference regions for the analysis of new observed and simulated datasets (including CMIP6) which offer an opportunity for refinement due to the higher atmospheric model resolution. As a result, the number of land and ocean regions is increased to 46 and 15, respectively, better representing consistent regional climate features. The paper describes the rationale for the definition of the new regions and analyses their homogeneity. The regions are defined as polygons and are provided as coordinates and a shapefile together with companion R and Python notebooks to illustrate their use in practical problems (e.g. calculating regional averages). We also describe the generation of a new dataset with monthly temperature and precipitation, spatially aggregated in the new regions, currently for CMIP5 and CMIP6, to be extended to other datasets in the future (including observations). The use of these reference regions, dataset and code is illustrated through a worked example using scatter plots to offer guidance on the likely range of future climate change at the scale of the reference regions. The regions, datasets and code (R and Python notebooks) are freely available at the ATLAS GitHub repository: https://github.com/SantanderMetGroup/ATLAS (last access: 24 August 2020), https://doi.org/10.5281/zenodo.3998463 (Iturbide et al., 2020).
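As a self-contained sketch of the kind of calculation the companion notebooks address (an area-weighted average of a gridded field over one polygon-defined region; the polygon and field below are made up, and the actual AR6 polygons are also distributed through packages such as regionmask):

import numpy as np
from matplotlib.path import Path

lons = np.arange(-179.5, 180.0, 1.0)
lats = np.arange(-89.5, 90.0, 1.0)
lon2d, lat2d = np.meshgrid(lons, lats)
field = 280.0 + 20.0 * np.cos(np.deg2rad(lat2d))    # synthetic temperature (K)

region = Path([(-10, 35), (40, 35), (40, 70), (-10, 70)])  # rough box, not an AR6 region
inside = region.contains_points(
    np.column_stack([lon2d.ravel(), lat2d.ravel()])).reshape(lon2d.shape)

weights = np.cos(np.deg2rad(lat2d)) * inside        # grid-cell area weighting
regional_mean = (field * weights).sum() / weights.sum()
print(f"regional mean: {regional_mean:.1f} K")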
-
Precipitation and temperature are among the major climatic variables used to characterize extreme weather events, which can have profound impacts on ecosystems and society. Accurate simulation of these variables at the local scale is essential for adapting urban systems and policies to future climatic changes. However, accurate simulation of these climatic variables is difficult due to possible interdependence and feedbacks among them. In this paper, the concept of copulas was used to model the seasonal interdependence between precipitation and temperature. Five copula functions were fitted to gridded (approximately 10 km × 10 km) climate data from 1960 to 2013 in southern Ontario, Canada. Theoretical and empirical copulas were then compared with each other to select the most appropriate copula family for this region. Results showed that, of the tested copulas, none consistently performed best over the entire region during all seasons. However, the Gumbel copula was the best performer during the winter season, and the Clayton copula performed best in the summer. More variability in terms of the best copula was found in the spring and fall seasons. By examining the likelihoods of concurrent extreme temperature and precipitation periods, including wet/cool in the winter and dry/hot in the summer, we found that ignoring the joint distribution and confounding impacts of precipitation and temperature leads to underestimating the occurrence probabilities of these two concurrent extreme modes. This underestimation can also lead to incorrect conclusions and flawed decisions regarding the severity of these extreme events.
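To make the copula argument concrete, here is a simplified sketch for a single family (a Gumbel copula with its parameter obtained from Kendall's tau); the data are synthetic, and the paper compares five families rather than this one fit:

import numpy as np
from scipy.stats import kendalltau, rankdata

rng = np.random.default_rng(3)
temp = rng.normal(size=500)
precip = 0.5 * temp + rng.normal(size=500)     # hypothetical dependent pair

u = rankdata(temp) / (len(temp) + 1)           # pseudo-observations in (0, 1)
v = rankdata(precip) / (len(precip) + 1)

tau, _ = kendalltau(temp, precip)
theta = 1.0 / (1.0 - tau)                      # Gumbel parameter from tau

def gumbel_cdf(u, v, theta):
    s = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-s ** (1.0 / theta))

# P(U > 0.9, V > 0.9) by inclusion-exclusion vs. the independence assumption;
# the copula-based value exceeds 0.01, illustrating how independence
# underestimates concurrent extremes.
joint = 1 - 0.9 - 0.9 + gumbel_cdf(0.9, 0.9, theta)
print(f"joint: {joint:.3f}  independence: {0.1 * 0.1:.3f}")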