Abstract. The first airborne measurements of the Far-InfraRed Radiometer (FIRR) were performed in April 2015 during the panarctic NETCARE campaign. Vertical profiles of spectral upwelling radiance in the range 8–50 µm were measured in clear and cloudy conditions from the surface up to 6 km. The clear sky profiles highlight the strong dependence of radiative fluxes on the temperature inversions typical of the Arctic. Measurements acquired at total column water vapour values from 1.5 to 10.5 mm also underline the sensitivity of the far-infrared greenhouse effect to specific humidity. The cloudy cases show that optically thin ice clouds increase the cooling rate of the atmosphere, making them important components of the Arctic energy balance. One such cloud exhibited a very complex spatial structure, characterized by large horizontal heterogeneities at the kilometre scale. This emphasizes the difficulty of obtaining representative cloud observations with airborne measurements, but also points out how challenging it is to model polar cloud radiative effects. These radiance measurements were successfully compared to simulations, suggesting that state-of-the-art radiative transfer models are suited to studying the cold and dry Arctic atmosphere. Although the FIRR's in situ performance compares well with its laboratory performance, complementary simulations show that upgrading the FIRR radiometric resolution would greatly increase its sensitivity to atmospheric and cloud properties. Improved instrument temperature stability in flight and expected technological progress should help meet this objective. Overall, the campaign highlights the potential of airborne far-infrared radiometry and constitutes a relevant reference both for future similar studies dedicated to the Arctic and for the development of spaceborne instruments.
-
Abstract. Increased temperatures will result in longer, more frequent, and more intense heat waves. Changes in temperature variability have been deemed necessary to account for future heat wave characteristics. However, this has been quantified only in Europe and North America, while the rest of the globe remains unexplored. Using late-century global climate projections, we show that the annual mean temperature increase is the key factor defining heat wave changes in most regions. We find that the commonly studied areas are the exception rather than the rule, and that the mean climate change signal generally outweighs any influence from variability changes. More importantly, differences in warming across seasons are responsible for most of the heat wave changes, and accounting for them relegates the contribution of variability to a marginal role. This reveals that accurately capturing mean seasonal changes is crucial to estimating future heat waves and reframes our interpretation of future temperature extremes. Key Points: (i) the influence of projected temperature variability changes on future heat waves varies across the globe; (ii) future heat waves are primarily controlled by annual mean changes, except in Europe and North America; (iii) mean seasonal warming is responsible for over 95% of projected heat wave changes in most regions.
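A minimal sketch (not the authors' code) of the decomposition idea: perturb a synthetic present-day daily temperature series with an annual mean shift, with season-dependent shifts, and with a variability change, then compare the resulting counts of days above a fixed heat wave threshold. All series, deltas, and the P90 threshold choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)
seasonal_cycle = 15 + 10 * np.sin(2 * np.pi * (days - 100) / 365)
anomalies = rng.normal(0, 3, 365)
present = seasonal_cycle + anomalies

annual_delta = 3.0                                                   # assumed annual mean warming (K)
seasonal_delta = 3.0 + 1.5 * np.sin(2 * np.pi * (days - 200) / 365)  # season-dependent warming
variability_scale = 1.1                                              # assumed +10% variability

threshold = np.percentile(present, 90)   # "heat wave day": above the present-day P90

scenarios = {
    "annual mean shift only     ": present + annual_delta,
    "seasonal mean shifts       ": present + seasonal_delta,
    "seasonal shifts + variance ": seasonal_cycle + seasonal_delta
                                   + anomalies * variability_scale,
}
for name, series in scenarios.items():
    print(f"{name}: {(series > threshold).sum():3d} days above present-day P90")
```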
-
Abstract. A far infrared radiometer (FIRR) dedicated to measuring radiation emitted by clear and cloudy atmospheres was developed in the framework of the Thin Ice Clouds in Far InfraRed Experiment (TICFIRE) technology demonstration satellite project. The FIRR detector is an array of 80 × 60 uncooled microbolometers coated with gold black to enhance absorptivity and responsivity. A filter wheel is used to select atmospheric radiation in nine spectral bands ranging from 8 to 50 µm. Calibrated radiances are obtained using two well-calibrated blackbodies. Images are acquired at a frame rate of 120 Hz and temporally averaged to reduce electronic noise. A complete measurement sequence takes about 120 s. With a field of view of 6°, the FIRR is not intended to be an imager. Hence a spatial average is computed over 193 illuminated pixels to increase the signal-to-noise ratio and consequently the detector resolution. This results in an improvement by a factor of 5 compared to individual pixel measurements. Another threefold increase in resolution is obtained by using 193 non-illuminated pixels to remove correlated electronic noise, leading to an overall resolution of approximately 0.015 W m−2 sr−1. Laboratory measurements performed on well-known targets suggest an absolute accuracy close to 0.02 W m−2 sr−1, which ensures atmospheric radiance is retrieved with an accuracy better than 1 %. Preliminary in situ experiments performed from the ground in winter and in summer, on clear and cloudy atmospheres, are compared to radiative transfer simulations. They point out the FIRR's ability to detect clouds and changes in relative humidity of a few percent in various atmospheric conditions, paving the way for the development of new algorithms dedicated to ice cloud characterization and water vapor retrieval.
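The two-point blackbody calibration and dark-pixel noise removal described above can be illustrated with a short sketch. This is a hedged toy model, not FIRR code: the blackbody radiances, detector gain and offset, pixel counts, and noise levels are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

L_HOT, L_COLD = 12.0, 4.0      # known blackbody radiances (W m-2 sr-1), illustrative
GAIN, OFFSET = 250.0, 80.0     # "true" detector response, unknown to the retrieval

def read_view(radiance, n_pixels=193, noise=2.0, common_mode=0.0):
    """Simulated counts for one view: signal, per-pixel noise, and a correlated term."""
    return OFFSET + GAIN * radiance + rng.normal(0, noise, n_pixels) + common_mode

# Two-point calibration from the hot and cold blackbody views
c_hot, c_cold = read_view(L_HOT).mean(), read_view(L_COLD).mean()
gain = (c_hot - c_cold) / (L_HOT - L_COLD)
offset = c_cold - gain * L_COLD

# Calibration-time level of the non-illuminated (dark) pixels, which see no scene
dark_ref = read_view(0.0).mean()

# Scene view with a common-mode drift shared by all pixels in the frame;
# subtracting the dark-pixel excursion removes the correlated electronic noise.
drift = rng.normal(0, 1.5)
scene = read_view(7.5, common_mode=drift).mean()
dark = read_view(0.0, common_mode=drift).mean()
scene_corrected = scene - (dark - dark_ref)

print(f"retrieved radiance: {(scene_corrected - offset) / gain:.3f} W m-2 sr-1 (truth 7.500)")
```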
-
Abstract. This study evaluates the added value in the representation of surface climate variables from an ensemble of regional climate model (RCM) simulations by comparing the relative skill of the RCM simulations and their driving data over a wide range of RCM experimental setups and climate statistics. The methodology is specifically designed to compare results across different variables and metrics, and it incorporates a rigorous approach to separate the added value occurring at different spatial scales. Results show that the RCMs' added value strongly depends on the type of driving data, the climate variable, and the region of interest, but depends rather weakly on the choice of the statistical measure, the season, and the RCM physical configuration. Decomposing climate statistics according to different spatial scales shows that improvements come from the small scales when considering the representation of spatial patterns, but from the large-scale contribution in the case of absolute values. Our results also show that a large part of the added value can be attained using some simple postprocessing methods. Key Points: (i) a rigorous methodology that allows evaluating the overall benefits of high-resolution simulations; (ii) the most reliable source of added value is the better representation of the spatial variability; (iii) substantial added value can also be attained using simple postprocessing methods.
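The skill comparison lends itself to a compact illustration. The sketch below assumes one common normalized added-value score, AV = (MSE_driver - MSE_RCM) / max(MSE_driver, MSE_RCM), positive when the RCM beats its driving data, plus a crude moving-average scale separation; the gridded fields are synthetic stand-ins, and the paper's exact metrics may differ.

```python
import numpy as np

def added_value(obs, rcm, driver):
    """Normalized added value: positive when the RCM is closer to observations."""
    mse_rcm = np.mean((rcm - obs) ** 2)
    mse_drv = np.mean((driver - obs) ** 2)
    return (mse_drv - mse_rcm) / max(mse_drv, mse_rcm)

def split_scales(field, k=5):
    """Crude scale separation: row-wise moving average (large scale) plus residual."""
    kernel = np.ones(k) / k
    large = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, field)
    return large, field - large

rng = np.random.default_rng(2)
obs = rng.normal(0, 1, (40, 40))
driver = obs + rng.normal(0, 0.8, obs.shape)   # coarse driving data: larger errors
rcm = obs + rng.normal(0, 0.5, obs.shape)      # RCM: smaller errors

print("total AV:", round(added_value(obs, rcm, driver), 3))
obs_l, obs_s = split_scales(obs)
rcm_l, rcm_s = split_scales(rcm)
drv_l, drv_s = split_scales(driver)
print("large-scale AV:", round(added_value(obs_l, rcm_l, drv_l), 3))
print("small-scale AV:", round(added_value(obs_s, rcm_s, drv_s), 3))
```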
-
Abstract. The use of windshields to reduce the impact of wind on snow measurements is common. This paper investigates the catching performance of shielded and unshielded gauges using numerical simulations. In Part II, the role of the windshield and gauge aerodynamics, as well as of the varying flow field due to the turbulence generated by the shield–gauge configuration, in reducing the catch efficiency is investigated. This builds on the computational fluid dynamics results obtained in Part I, where the airflow patterns in the proximity of an unshielded and a single Alter shielded Geonor T-200B gauge are obtained using both time-independent [Reynolds-averaged Navier–Stokes (RANS)] and time-dependent [large-eddy simulation (LES)] approaches. A Lagrangian trajectory model is used to track different types of snowflakes (wet and dry snow) and to assess the variation of the resulting gauge catching performance with wind speed. The collection efficiency obtained with the LES approach is generally lower than that obtained with the RANS approach, because of the impact of the LES-resolved turbulence above the gauge orifice rim. The comparison between the collection efficiency values obtained for the shielded and unshielded gauges validates the choice of installing a single Alter shield in a windy environment. However, the time-dependent simulations show that the propagating turbulent structures produced by the aerodynamic response of the upwind single Alter blades have an impact on the collection efficiency. Comparison with field observations provides the validation background for the model results.
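The Lagrangian trajectory approach can be sketched in a few lines: integrate particle positions through a prescribed air velocity field and count the fraction reaching the gauge orifice. The analytic updraft-over-the-orifice flow below is a toy stand-in for the paper's RANS/LES fields, and the geometry, fall speeds, and deflection strength are all illustrative.

```python
import numpy as np

def air_velocity(x, z, u_wind):
    """Toy disturbed flow: uniform wind plus an updraft bump above the orifice (x=0, z=1)."""
    w_up = 0.4 * u_wind * np.exp(-(x**2 + (z - 1.0) ** 2) / 0.05)
    return u_wind, w_up

def landing_x(x0, u_wind, v_fall, dt=2e-3):
    """Advect one snowflake from (x0, 3.0) until it descends to the orifice height z=1."""
    x, z = x0, 3.0
    while z > 1.0:
        u, w = air_velocity(x, z, u_wind)
        x += dt * u
        z += dt * (w - v_fall)
        if x > 1.0:       # far downstream of the orifice: cannot be captured
            break
    return x

def catch_efficiency(u_wind, v_fall, r=0.1, n=200):
    x_c = -u_wind * 2.0 / v_fall        # undisturbed start that hits the orifice centre
    starts = np.linspace(x_c - 4 * r, x_c + 4 * r, n)
    dx = starts[1] - starts[0]
    captured = sum(abs(landing_x(x0, u_wind, v_fall)) < r for x0 in starts)
    return captured * dx / (2 * r)      # equals 1.0 in undisturbed flow by construction

for u in (1.0, 4.0, 8.0):
    print(f"wind {u:.0f} m/s:  dry snow CE = {catch_efficiency(u, 1.0):.2f},"
          f"  wet snow CE = {catch_efficiency(u, 2.5):.2f}")
```

Consistent with the abstract, slower-falling dry snow is deflected more strongly than wet snow, so its catch efficiency degrades faster with wind speed.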
-
Abstract. The east coast of Australia is regularly influenced by midlatitude cyclones known as East Coast Lows. These form in a range of synoptic situations and are both a cause of severe weather and an important contributor to water security. This paper presents the first projections of future cyclone activity in this region using a regional climate model ensemble, with the use of a range of cyclone identification methods increasing the robustness of results. While there is considerable uncertainty in projections of cyclone frequency during the warm months, there is a robust agreement on a decreased frequency of cyclones during the winter months, when they are most common in the current climate. However, there is a potential increase in the frequency of cyclones with heavy rainfall and those closest to the coast, and accordingly those with potential for severe flooding. Key Points: (i) winter cyclones are projected to decrease on the Australian east coast; (ii) cyclones associated with heavy rainfall may increase in frequency; (iii) projections of warm season cyclones remain uncertain.
-
Abstract. An important source of uncertainty in climate models arises from unconfined model parameters in physical parameterizations. These parameters are commonly estimated on the basis of manual adjustments (expert tuning), which carries the risk of overtuning the parameters for a specific climate region or time period. This issue is particularly germane in the case of regional climate models (RCMs), which are often developed and used in one or a few geographical regions only. This study addresses the role of objective parameter calibration in this context. Using a previously developed objective calibration methodology, an RCM is calibrated over two regions (Europe and North America) and is used to investigate the transferability of the results. A total of eight different model parameters are calibrated, using a metamodel to account for parameter interactions. The study demonstrates that the calibration is effective in reducing model biases in both domains. For Europe, this concerns in particular a pronounced reduction of the summer warm bias and the associated overestimation of interannual temperature variability, which have persisted through previous expert tuning efforts and are common in many global and regional climate models. The key process responsible for this improvement is an increased hydraulic conductivity. Higher hydraulic conductivity increases the water availability at the land surface and leads to increased evaporative cooling, stronger low cloud formation, and associated reduced incoming shortwave radiation. The calibrated parameter values are found to be almost identical for both domains; that is, the parameter calibration is transferable between the two regions. This is a promising result and indicates that models may be more universal than previously considered.
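The metamodel step admits a small self-contained illustration: fit a quadratic response surface of a skill score to a handful of perturbed-parameter experiments (the quadratic and interaction terms capture parameter interactions), then minimise the cheap surface instead of rerunning the model. The "RCM" here is a synthetic score function, and the three normalised parameters are placeholders (the paper calibrates eight).

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

def rcm_score(params):
    """Stand-in for an expensive RCM evaluation: bias grows away from an optimum."""
    optimum = np.array([0.6, 0.3, 0.8])
    return np.sum((params - optimum) ** 2) + 0.1 * params[0] * params[1]

def design_matrix(X):
    """Quadratic metamodel terms: 1, x_i, x_i^2, and pairwise interactions x_i * x_j."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# Perturbed-parameter experiments over normalised parameter ranges [0, 1]
X = rng.uniform(0, 1, (30, 3))
y = np.array([rcm_score(x) for x in X])
coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Minimise the metamodel on a dense grid: cheap, no further "RCM" runs needed
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 25)] * 3), -1).reshape(-1, 3)
best = grid[np.argmin(design_matrix(grid) @ coef)]
print("calibrated parameters:", np.round(best, 2))
```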
-
Abstract. The aerodynamic response of snow gauges when exposed to the wind is responsible for a significant reduction of their collection performance. The modifications induced by the gauge and the windshield onto the space–time patterns of the undisturbed airflow deviate the snowflake trajectories. In Part I, the disturbed air velocity field in the vicinity of shielded and unshielded gauge configurations is investigated. In Part II, this airflow is the basis for a particle tracking model of snowflake trajectories to estimate the collection efficiency. A Geonor T-200B gauge inside a single Alter shield is simulated for wind speeds varying from 1 to 8 m s−1. Both time-averaged and time-dependent computational fluid dynamics simulations are performed, based on Reynolds-averaged Navier–Stokes (RANS) and large-eddy simulation (LES) models, respectively. A shear stress transport (SST) k–ω model (where k is the turbulent kinetic energy and ω is the specific turbulent dissipation rate) is used for the RANS formulation and solved within a finite-volume method. The LES is implemented with a Smagorinsky subgrid-scale method that models the subgrid stresses as a gradient-diffusion process. The RANS simulations confirm the attenuation of the airflow velocity above the gauge when using a single Alter shield, but the generated turbulence above the orifice rim is underestimated. The intensity and spatial extension of the LES-resolved turbulent region show a dependency on the wind speed that was not detected by the RANS. The time-dependent analysis shows the propagation of turbulent structures and their impact on the turbulent kinetic energy above the gauge collecting section.
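The Smagorinsky closure mentioned above reduces to a one-line formula, nu_t = (Cs * Delta)^2 * |S|, with |S| the magnitude of the resolved strain-rate tensor. The sketch below evaluates it on a synthetic 2-D velocity field; the Smagorinsky constant, filter width, and velocity components are illustrative assumptions, not values from the paper.

```python
import numpy as np

Cs, delta = 0.17, 0.02             # Smagorinsky constant and filter width (m), illustrative
n = 64
x = np.linspace(0, 1, n)
X, Z = np.meshgrid(x, x, indexing="ij")
u = np.sin(2 * np.pi * Z)          # synthetic resolved velocity components
w = 0.1 * np.cos(2 * np.pi * X)

dudx, dudz = np.gradient(u, x, x)  # gradients along x (axis 0) and z (axis 1)
dwdx, dwdz = np.gradient(w, x, x)
S11, S33 = dudx, dwdz
S13 = 0.5 * (dudz + dwdx)
S_mag = np.sqrt(2 * (S11**2 + S33**2 + 2 * S13**2))   # |S| = sqrt(2 S_ij S_ij)
nu_t = (Cs * delta) ** 2 * S_mag   # gradient-diffusion subgrid viscosity

print(f"subgrid viscosity range: {nu_t.min():.2e} to {nu_t.max():.2e} m^2/s")
```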
-
Summary. For decades, researchers have thought it difficult to remove the uncertainty from estimates of forest carbon storage and its changes at national scales. This is not only because of stochasticity in the data but also because of the bias to overcome in the computations. Most studies of such estimation, however, ignore quantitative analysis of the latter source of uncertainty. This bias primarily results from the widely used volume-biomass method, which scales up forest biomass from limited sample plots to large areas. This paper addresses (i) the mechanism by which scaling-up error occurs, and (ii) the quantitative effects of the statistical factors on the error. Error compensators were derived and expressed as ternary functions of three variables: the expectation, the variance, and the power in the volume-biomass equation. This is based on analysing the effect of power-law function convexity on scaling-up error by solving the difference between both sides of the weighted Jensen inequality. Simulated data and the national forest inventory of China were used for algorithm testing and application, respectively. Scaling-up error stems primarily from the effect of the distribution heterogeneity of volume density on the total biomass amount, and secondarily from the extent of the function's nonlinearity. In our experiments, on average 94.2% of the scaling-up error could be reduced for statistical populations of forest stands in a region. China's forest biomass carbon was estimated at approximately 6.0 PgC or less at the beginning of the 2010s, after on average 1.1% error compensation. The results of both the simulated-data experiment and the national-scale estimation suggest that biomass is overestimated for young forests more than for others. This implies the necessity of compensating for scaling-up error, especially in areas that have undergone extensive afforestation and reforestation in recent decades. This study highlights the importance of understanding how both the function nonlinearity and the statistics of the variables quantitatively affect the scaling-up error. More generally, the presented methods will help translate fine-scale ecological relationships to estimate coarser-scale ecosystem properties by correcting aggregation errors.
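The role of the Jensen inequality is easy to reproduce numerically: for a power-law volume-biomass model b = a * v^c, assumed here concave (c < 1), Jensen's inequality gives E[a V^c] < a E[V]^c, so applying the equation to the regional mean volume overestimates the plot-aggregated biomass. The coefficients and the lognormal volume distribution below are illustrative, not the fitted inventory values.

```python
import numpy as np

rng = np.random.default_rng(4)
a, c = 0.7, 0.9                     # illustrative allometric coefficients, c < 1 (concave)
v = rng.lognormal(mean=4.0, sigma=0.6, size=100_000)   # plot-level volume density

plot_level = a * np.mean(v ** c)    # aggregate of plot-level biomass (reference)
mean_based = a * np.mean(v) ** c    # naive scale-up from the regional mean volume

bias = (mean_based - plot_level) / plot_level
print(f"relative scaling-up error: {bias:+.1%}")   # positive, i.e. an overestimate
```

The error compensators described in the summary correct exactly this gap, using the expectation and variance of the volume distribution together with the power c.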
-
Abstract. The interdependence between climatic variables should be taken into account when developing climate scenarios. For example, temperature-precipitation interdependence in the Arctic is strong and impacts other physical characteristics, such as the extent and duration of snow cover. However, this interdependence is often misrepresented in climate simulations. Here we use two two-dimensional (2-D) methods for statistically adjusting climate model simulations to develop plausible local daily temperature (Tmean) and precipitation (Pr) scenarios. The first 2-D method is based on empirical quantile mapping (2Dqm) and the second on parametric copula models (2Dcopula). Both methods are improved here by forcing the preservation of the modeled long-term warming trend and by using moving windows to obtain an adjustment specific to each day of the year. These methods were applied to a representative ensemble of 13 global climate model simulations at 26 Canadian Arctic coastal sites and tested using an innovative cross-validation approach. Intervariable dependence was evaluated using correlation coefficients and empirical copula density plots. Results show that these 2-D methods, especially 2Dqm, adjust individual distributions of climatic time series as adequately as one common one-dimensional method (1Dqm) does. Furthermore, although 2Dqm outperforms the other methods in reproducing the observed temperature-precipitation interdependence over the calibration period, both 2Dqm and 2Dcopula perform similarly over the validation periods. For cases where temperature-precipitation interdependence is important (e.g., characterizing extreme events and the extent and duration of snow cover), both 2-D methods are good options for producing plausible local climate scenarios in Canadian Arctic coastal zones. Key Points: (i) we improved two methods for adjusting Tmean, Pr, and their dependence in scenarios; (ii) the methods are tested at Arctic coastal sites where Tmean-Pr dependence is crucial; (iii) both methods improve the plausibility of the local climate scenarios.
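The empirical quantile-mapping core shared by 1Dqm and 2Dqm can be sketched briefly: each simulated value is mapped to its quantile in the simulated reference distribution and replaced by the corresponding observed quantile. Only this 1-D building block is shown, on synthetic data; the paper's 2-D variants additionally adjust the Tmean-Pr dependence, preserve the long-term trend, and apply moving windows per calendar day.

```python
import numpy as np

rng = np.random.default_rng(5)
obs = rng.normal(-10, 8, 3000)    # observed daily Tmean (deg C), synthetic
sim = rng.normal(-13, 10, 3000)   # biased model Tmean over the same period

def quantile_map(values, sim_ref, obs_ref):
    """Empirical quantile mapping: value -> quantile in sim_ref -> same quantile in obs_ref."""
    quantiles = np.searchsorted(np.sort(sim_ref), values) / len(sim_ref)
    quantiles = np.clip(quantiles, 0, 1 - 1e-9)
    return np.quantile(obs_ref, quantiles)

adjusted = quantile_map(sim, sim, obs)
print(f"mean bias before: {sim.mean() - obs.mean():+.2f} K,"
      f" after: {adjusted.mean() - obs.mean():+.2f} K")
```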