Your search
Results: 405 resources
-
Analyses of marine and terrestrial palynomorphs of Ocean Drilling Program (ODP) Site 645 in Baffin Bay led us to define a new biostratigraphical scheme covering the late Miocene to Pleistocene based on dinocyst and acritarch assemblages. Four biozones were defined. The first one, from 438.6 m below sea floor (mbsf) to 388 mbsf, can be assigned a late Miocene to early Pliocene age (>4.5 Ma), based on the common occurrence of Cristadinium diminutivum and Selenopemphix brevispinosa. Biozone 2, spanning from an erosional unconformity to a recovery hiatus, is marked by the highest occurrences (HOs) of Veriplicidium franklinii and Cristadinium diminutivum, which suggest an early Pliocene age >3.6 Ma (∼4.5 to ∼3.6 Ma). Biozone 3, above the recovery hiatus and up to 220.94 mbsf, corresponds to a late Pliocene or early Pleistocene age based on occurrences of Bitectatodinium readwaldii, Cymatiosphaera? icenorum, and Lavradosphaera canalis. Finally, between 266.4 and 120.56 mbsf, Biozone 4, marked by the HOs of Filisphaera filifera, Filisphaera microornata, and Habibacysta tectata, has an early Pleistocene age (>1.4 Ma). Our biostratigraphy implies that horizon b1 of the Baffin Bay seismic stratigraphy corresponds to the recovery hiatus at ODP Site 645, which suggests a very thick Pliocene sequence along the Baffin Island slope. Dinocyst assemblages and terrestrial palynomorphs in our records indicate that the late Miocene and (or) early Pliocene were characterized by relatively warm coastal surface waters and boreal forest or forested tundra vegetation over adjacent lands. In contrast, the early Pleistocene dinocyst assemblages above the recovery hiatus indicate cold surface waters, while pollen data suggest reduced vegetation cover on adjacent lands.
-
Abstract Increasing forest soil organic carbon (SOC) storage is important for reducing carbon dioxide (CO2) emissions from terrestrial ecosystems and mitigating global climate change. Although the effects of altitude, temperature and rainfall on organic carbon have been studied extensively, it is difficult to increase SOC storage by changing these factors in actual forest management. This study determined the SOC, soil physical and chemical properties, nutrient elements, heavy metal elements, soil minerals and microbial biomass in the 0–140-cm soil layer of the monsoon broad-leaved forest in the acid red soil region of southwestern China by stratification. We tried to identify the soil factors affecting the SOC storage of the forest in the acid red soil region and determine the weights of the factors affecting the SOC, with the aim of improving the SOC retention capacity in forest management by changing the main soil factors affecting SOC storage. The results showed that the soil factors affecting the forest SOC storage in this area are total nitrogen (N, 22.7%) > soil water content (19.9%) > active iron (including poorly crystalline iron, Feo, 15.5%) > pH (9.5%) > phosphorus (P, 9.4%) > aluminium (Al, 8.9%) > silicon (Si, 7.1%) > sulphur (S, 6.8%). Of these factors, N, the water content, Feo, and P are practical factors for forest management, whereas the pH, Al, Si and S are not. SOC was significantly positively correlated with the soil N concentration, water content, active iron content and P concentration (p < .05). In acidic red soil areas, active iron is particularly important: N, soil water content, phosphorus and active iron jointly regulate the forest SOC storage capacity. Consequently, in actual forest management, measures that promote soil N and water content and activate inactive iron can enhance the storage of SOC, such as appropriate input of N and P fertiliser and irrigation in dry years and during the dry season. Highlights: The soil environmental factors affecting SOC storage in forest soil are quantified. Activation of inactive iron helps SOC storage in forest soil. Irrigation and N and P input are effective for helping SOC storage in forest soil. N, WC, P and Feo jointly regulate SOC in tropical acid red soil forest.
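As a rough illustration of how such percentage weights for soil factors could be derived, the sketch below uses standardized regression coefficients on synthetic data; this is an assumption for illustration only, not the weighting method reported in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-ins for layered soil measurements (N, water content,
# active iron Feo, P); both the data and the weighting scheme are
# illustrative assumptions, not the study's procedure.
factors = ["N", "water_content", "Feo", "P"]
X = rng.normal(size=(60, len(factors)))
soc = 2.0 * X[:, 0] + 1.7 * X[:, 1] + 1.3 * X[:, 2] + 0.8 * X[:, 3] + rng.normal(0, 0.5, 60)

# Standardize predictors, fit a linear model, and express the absolute
# coefficients as percentage weights.
Xs = StandardScaler().fit_transform(X)
coef = np.abs(LinearRegression().fit(Xs, soc).coef_)
weights = 100 * coef / coef.sum()
for name, w in zip(factors, weights):
    print(f"{name}: {w:.1f}%")
```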
-
Abstract Lightning climate change projections show large uncertainties caused by limited empirical knowledge and strong assumptions inherent to coarse-grid climate modeling. This study addresses the latter issue by implementing and applying the lightning potential index (LPI) parameterization in a fine-grid convection-permitting regional climate model (CPM). This setup takes advantage of the explicit representation of deep convection in CPMs and allows for process-oriented LPI inputs such as vertical velocity within convective cells and the coexistence of microphysical hydrometeor types, which are known to contribute to charge separation mechanisms. The LPI output is compared to output from a simpler flash rate parameterization, namely the CAPE × PREC parameterization, applied in a non-CPM on a coarser grid. The LPI's implementation in the regional climate model COSMO-CLM successfully reproduces the observed lightning climatology, including its latitudinal gradient, its daily and hourly probability distributions, and its diurnal and annual cycles. In addition, the simulated temperature dependence of lightning reflects the observed dependency. The LPI outperforms the CAPE × PREC parameterization in all applied diagnostics. Based on this satisfactory evaluation, we applied the LPI to a climate change projection under the RCP8.5 scenario. For the domain under investigation, centered over Germany, the LPI projects a decrease of 4.8% in flash rate by the end of the century, as opposed to an increase of 17.4% projected using the CAPE × PREC parameterization. The future decrease of the LPI occurs mostly during summer afternoons and is related to (i) a change in convection occurrence and (ii) changes in the microphysical mixing. The two parameterizations differ because of different convection occurrences in the CPM and non-CPM and because of changes in the microphysical mixing, which is only represented in the LPI lightning parameterization.
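For reference, a CAPE × PREC flash-rate proxy of the kind mentioned above is usually just the product of convective available potential energy and precipitation rate scaled by an empirical constant. The sketch below is a minimal illustration with a hypothetical calibration constant, not the formulation or coefficients used in the study.

```python
import numpy as np

def cape_times_prec_flash_rate(cape, prec, calibration=1.0e-4):
    """Illustrative CAPE x PREC flash-rate proxy.

    cape : convective available potential energy [J/kg]
    prec : precipitation rate [mm/h]
    calibration : hypothetical scaling constant; real applications
                  calibrate it against observed flash densities.
    Returns a flash-rate proxy in arbitrary units per grid cell and hour.
    """
    cape = np.asarray(cape, dtype=float)
    prec = np.asarray(prec, dtype=float)
    return calibration * cape * prec

# Example: one grid cell with CAPE = 1500 J/kg and 5 mm/h precipitation
print(cape_times_prec_flash_rate(1500.0, 5.0))
```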
-
Abstract Large-scale flood risk analyses are fundamental to many applications requiring national or international overviews of flood risk. While large-scale climate patterns such as teleconnections and climate change become important at this scale, it remains a challenge to represent the local hydrological cycle over various watersheds in a manner that is physically consistent with climate. As a result, global models tend to suffer from a lack of available scenarios and flexibility that are key for planners, relief organizations, regulators, and the financial services industry to analyze the socioeconomic, demographic, and climatic factors affecting exposure. Here we introduce a data-driven, global, fast, flexible, and climate-consistent flood risk modeling framework for applications that do not necessarily require high-resolution flood mapping. We use statistical and machine learning methods to examine the relationship between historical flood occurrence and impact from the Dartmouth Flood Observatory (1985–2017), and climatic, watershed, and socioeconomic factors for 4,734 HydroSHEDS watersheds globally. Using bias-corrected output from the NCAR CESM Large Ensemble (1980–2020), and the fitted statistical relationships, we simulate 1 million years of events worldwide along with the population displaced in each event. We discuss potential applications of the model and present global flood hazard and risk maps. The main value of this global flood model lies in its ability to quickly simulate realistic flood events at a resolution that is useful for large-scale socioeconomic and financial planning, yet we expect it to be useful to climate and natural hazard scientists who are interested in the socioeconomic impacts of climate. Plain Language Summary: Flooding is among the deadliest and most damaging natural disasters. To protect against large-scale flood risk, stakeholders need to understand how floods can occur and their potential impacts. Stakeholders rely on global flood models to provide them with plausible flood scenarios around the world. For a flood model to operate at the global scale, climate effects must be represented in addition to hydrological ones to demonstrate how rivers can overflow throughout the world each year. Global flood models often lack the flexibility and variety of scenarios required by many stakeholders because they are computationally demanding. Designed for applications where detailed local flood impacts are not required, we introduce a rapid and flexible global flood model that can generate hundreds of thousands of scenarios everywhere in the world in a matter of minutes. The model is based on a historical flood database from 1985 to 2017 that is represented using an algorithm that learns from the data. With this model, the output from a global climate model is used to simulate a large sample of floods for risk analyses that are coherent with the global climate. Maps of the annual average number of floods and the number of displaced people illustrate the model's results. Key Points: We present a global flood model built using machine learning methods fitted with historical flood occurrences and impacts. Forced with a climate model, the global flood model is fast, flexible and consistent with global climate. We provide global flood hazard (occurrence) and risk (population displaced) maps over 4,734 watersheds.
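A minimal sketch of the occurrence part of such a framework is given below; it assumes synthetic covariates and a Poisson regression for annual flood counts per watershed (the abstract does not specify the learning algorithm), so it stands in for the idea rather than the study's implementation.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)

# Hypothetical covariates for 4,734 watersheds (e.g., precipitation anomaly,
# log drainage area, log population density), standing in for the climatic,
# watershed, and socioeconomic factors described in the abstract.
n_watersheds = 4734
X = rng.normal(size=(n_watersheds, 3))

# Synthetic "historical" annual flood counts (placeholder for the
# Dartmouth Flood Observatory record).
true_rate = np.exp(0.3 * X[:, 0] + 0.2 * X[:, 1] - 0.1 * X[:, 2] - 1.0)
y = rng.poisson(true_rate)

# Fit the occurrence model.
model = PoissonRegressor(alpha=1e-3, max_iter=300).fit(X, y)

# Simulate many years of events by drawing Poisson counts from the fitted
# rates; in the study the covariates would come from bias-corrected
# climate-model output rather than being reused.
n_years = 1000
rates = model.predict(X)
simulated_counts = rng.poisson(np.tile(rates, (n_years, 1)))
print(simulated_counts.shape)  # (years, watersheds)
```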
-
Abstract The structure and “metabolism” (movement and conversion of goods and energy) of urban areas has caused cities to be identified as “super-organisms”, placed between ecosystems and the biosphere in the hierarchy of living systems. Yet most such analogies are weak, and render the super-organism model ineffective for sustainable development of cities. Via a cluster analysis of 15 shared traits of the hierarchical living system, we found that industrialized cities are more similar to eukaryotic cells than to multicellular organisms, with enclosed systems, such as factories and greenhouses, paralleling organelles in eukaryotic cells. We further developed a “super-cell” industrialized city model: a “eukarcity” with the citynucleus (urban area) as the regulating centre, organaras (enclosed systems, which provide the majority of goods and services) as the functional components, and cityplasm (natural ecosystems and farmlands) as the matrix. This model may improve the vitality and sustainability of cities through planning and management.
-
Abstract A fundamental issue when evaluating the simulation of precipitation is the difficulty of quantifying specific sources of errors and recognizing compensation of errors. We assess how well a large ensemble of high-resolution simulations represents the precipitation associated with strong cyclones. We propose a framework to break down precipitation errors according to different dynamical (vertical velocity) and thermodynamical (vertically integrated water vapor) regimes and the frequency and intensity of precipitation. This approach approximates the error in the total precipitation of each regime as the sum of three terms describing errors in the large-scale environmental conditions, the frequency of precipitation and its intensity. We show that simulations produce precipitation too often, that its intensity is too weak, that errors are larger for weak than for strong dynamical forcing and that biases in the vertically integrated water vapor can be large. Using the error breakdown presented above, we define four new error metrics differing in the degree to which they include the compensation of errors. We show that convection-permitting simulations consistently improve the simulation of precipitation compared to coarser-resolution simulations using parameterized convection, and that these improvements are revealed by our new approach but not by traditional metrics, which can be affected by compensating errors. These results suggest that convection-permitting models are more likely to produce better results for the right reasons. We conclude that the novel decomposition and error metrics presented in this study give a useful framework that provides physical insights about the sources of errors and a reliable quantification of errors. Plain Language Summary: Simulations of complex physical processes always entail various sources of errors. These errors can be of different sign and can consequently cancel each other out when using traditional performance metrics such as the bias error metric. We present a formal framework that allows us to approximate precipitation errors according to three terms that describe different aspects of the rainfall field, including large-scale environmental conditions and the frequency and intensity of rainfall. We apply the methodology to a large ensemble of high-resolution simulations representing the precipitation associated with strong cyclones in eastern Australia. We show that simulations produce precipitation too often, with an intensity that is too weak, leading to strong compensation. We further define new error metrics that explicitly quantify the degree of error compensation when simulating precipitation. We show that convection-permitting simulations consistently improve the performance compared to coarser-resolution simulations using parameterized convection and that these improvements are only revealed when using the new error metrics but are not apparent in traditional metrics (e.g., bias). Key Points: Multiple high-resolution simulations produce precipitation too often with underestimated intensity, leading to strong error compensation. Errors in precipitation are quantified using novel metrics that prevent error compensation, showing value compared with traditional metrics. Convection-permitting simulations improve the representation of precipitation compared to simulations using parameterized convection.
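In notation not taken from the paper (the symbols below are assumptions), a decomposition of this kind can be written for each regime by expressing total precipitation as environment occurrence times precipitation frequency times intensity, and expanding the error to first order:

```latex
% Notation assumed for illustration (requires amsmath); not taken from the paper.
% P_r: total precipitation in regime r, N_r: occurrence of the large-scale
% environment, f_r: precipitation frequency, I_r: precipitation intensity,
% \Delta: model-minus-reference error.
\[
  P_r \approx N_r \, f_r \, I_r
\]
\[
  \Delta P_r \approx
    \underbrace{\Delta N_r \, f_r \, I_r}_{\text{environment}}
  + \underbrace{N_r \, \Delta f_r \, I_r}_{\text{frequency}}
  + \underbrace{N_r \, f_r \, \Delta I_r}_{\text{intensity}}
\]
```

Summing the absolute values of the three terms, rather than their signed sum, is one way to build metrics that do not allow the errors to compensate; this is offered only as an illustration of the idea, not as the study's exact metric definitions.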
-
Abstract In sub-Saharan Africa (SSA), precipitation is an important driver of agricultural production. In Uganda, maize production is essentially rain-fed. However, due to changes in climate, projected maize yield targets have often not been met, as actual observed maize yields are often below simulated/projected yields. This outcome has often been attributed to parallel gaps in precipitation. This study aims at identifying maize yield and precipitation gaps in Uganda for the period 1998–2017. Time series of historical actual observed maize yield data (hg/ha/year) for the period 1998–2017 were collected from FAOSTAT. Actual observed maize growing season precipitation data were also collected from the climate portal of the World Bank Group for the period 1998–2017. The simulated or projected maize yield data and the simulated or projected growing season precipitation data were generated using a simple linear regression approach. The actual maize yield and actual growing season precipitation data were then compared with the simulated maize yield and simulated growing season precipitation data to establish the gaps. The results show that three key periods of maize yield gaps were observed (period one: 1998, period two: 2004–2007 and period three: 2015–2017), with parallel precipitation gaps. However, in the entire series (1998–2017), the years 2008–2009 had no yield gaps, yet precipitation gaps were observed. This implies that precipitation is not the only driver of maize yields in Uganda. In fact, this is supported by a low correlation between precipitation gaps and maize yield gaps of about 6.3%. For a better understanding of cropping systems in SSA, other potential drivers of maize yield gaps in Uganda, such as soils, farm inputs, crop pests and diseases, high-yielding varieties, literacy, and poverty levels, should be considered.
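As a sketch of the gap computation described above, the example below fits a linear trend as the "simulated/projected" series and flags years where the observation falls below it; the yield values are synthetic placeholders, not the FAOSTAT data used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical yearly yield series (placeholder for FAOSTAT yields, hg/ha).
years = np.arange(1998, 2018).reshape(-1, 1)
rng = np.random.default_rng(1)
observed_yield = 18000 + 250 * (years.ravel() - 1998) + rng.normal(0, 1500, 20)

# Fit a simple linear regression and treat the fitted values as the
# simulated/projected yields.
trend = LinearRegression().fit(years, observed_yield)
projected_yield = trend.predict(years)

# Yield gap: projected minus observed; positive values mark gap years.
yield_gap = projected_yield - observed_yield
gap_years = years.ravel()[yield_gap > 0]
print(gap_years)
```

The same procedure applied to growing-season precipitation gives precipitation gaps, and the correlation between the two gap series can then be computed (e.g., with np.corrcoef).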
-
Abstract The collection efficiency of a typical precipitation gauge-shield configuration decreases with increasing wind speed, with a high scatter for a given wind speed. The high scatter in the collection efficiency for a given wind speed arises in part from the variability in the characteristics of falling snow and atmospheric turbulence. This study uses weighing gauge data collected at the Marshall Field Site near Boulder, Colorado, during the WMO Solid Precipitation Intercomparison Experiment (SPICE). Particle diameter and fall speed data from a laser disdrometer were used to show that the scatter in the collection efficiency can be reduced by considering the fall speed of solid precipitation particles. The collection efficiency was divided into two classes depending on the measured mean-event particle fall speed during precipitation events. Slower-falling particles were associated with a lower collection efficiency. A new transfer function (i.e., the relationship between collection efficiency and other meteorological variables, such as wind speed or air temperature) that includes the fall speed of the hydrometeors was developed. The root-mean-square error of the precipitation adjusted with the new transfer function, with respect to a weighing gauge placed in a double fence intercomparison reference, was lower than that obtained using previously developed transfer functions that only consider wind speed and air temperature. This shows that the fall speed of solid precipitation measured with a laser disdrometer accounts for a large amount of the observed scatter in weighing gauge collection efficiency.
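To illustrate how a fall-speed-aware transfer function would be applied in practice, the sketch below computes a collection efficiency from wind speed and particle fall speed and uses it to adjust gauge totals; the functional form and coefficients are hypothetical, not those derived in the study.

```python
import numpy as np

def collection_efficiency(wind_speed, fall_speed, a=0.18, b=1.0):
    """Illustrative transfer function: collection efficiency decays with
    wind speed and increases with particle fall speed.

    The exponential form and the coefficients (a, b) are assumptions made
    only to show how the adjustment is applied.
    """
    wind_speed = np.asarray(wind_speed, dtype=float)
    fall_speed = np.asarray(fall_speed, dtype=float)
    ce = np.exp(-a * wind_speed / np.maximum(fall_speed, 0.1) ** b)
    return np.clip(ce, 0.1, 1.0)  # keep the adjustment bounded

# Adjust gauge totals: estimated precipitation = measured / collection efficiency
measured_mm = np.array([2.4, 1.1, 3.8])
wind_ms = np.array([3.0, 6.5, 1.5])
fall_ms = np.array([1.2, 0.8, 1.6])  # slower fall speed -> lower efficiency
adjusted_mm = measured_mm / collection_efficiency(wind_ms, fall_ms)
print(adjusted_mm)
```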