“How well do we know global Net Primary Production?”
Your answer should explain the processes that control these patterns and the uncertainties in our knowledge of these processes.
Length and core requirements
Your essay should be 3000 words or less (not including figure captions and references).
It should directly address the essay title and synthesise materials from reading around the subject of terrestrial carbon dynamics: modelling and monitoring, as well as course practicals and seminar materials.
It should demonstrate an understanding of core underlying theories, models and appropriate measurement methods and be able to express them simply and clearly.
Topic area
The essay should provide an in-depth analysis of the current state of knowledge on the topic.
It should ideally use concepts from the main aspects of the course: basic theory, models, and measurement, and the integration of models and measurement.
Essay style
The essay should have a clear structure, and be based around material from multiple (>10) sources.
Your essay should be a synthesis of information from multiple (>10) papers you have read and other sources. It should not simply be a list of information from a set of different papers or a regurgitation of course notes and figures. It should not simply be a re-hash of sections of IPCC report sections. A synthesis in this sense means bringing together material from multiple sources and presenting it in your own words and with your own critical analysis of the information. This means you should NOT be using large numbers of direct quotes or just paraphrasing slightly by changing a few words around. You must bring the information together and present it in your own words.
It should be based around the same focus as the course, i.e. terrestrial vegetation, rather than soils or atmospheric processes, although you may wish to touch upon such matters for context.
You must work individually on your submission. When using practical- and seminar-related items, these must be materials you have generated yourself (e.g. you can’t use a graph that another student has generated: you must use your own).
As for all coursework of this nature, to get a very high mark, you will be expected to show clear insight into the subject matter and make use of materials beyond the basics provided in the lectures.
Figures, citations and quotes
You must follow standard UCL procedures on citations, and fully attribute all materials used.
All figures and any quotes must be correctly captioned, giving the original source of any graphics used. Any quotes should have a page number associated with them, BUT you should not be using large numbers of direct quotes or long quotes in an essay of this nature.
Avoid citations of material as ‘geog0133 coursenotes’ and strictly avoid any quotation or figures from the course notes themselves.
You may develop your own figures, but make clear what they are based on (or where the information comes from).
The essay must be fully referenced. For references to individual figures or other specific information from long or complex sources, make sure you include a page number.
This is the first section of the MSc module ‘Terrestrial Carbon: modelling and monitoring’.
Students attending this course come from a range of different disciplines, so the major purpose of this introductory section is to introduce the basic concepts relating to climate and the terrestrial carbon cycle needed for the rest of this course.
In this lecture, we will:
consider the importance of understanding the science of climate change
look at basic principles of energy transfer in the earth system
examine greenhouse gases and their sources
You will need to do a considerable amount of reading as follow-up for this section, as it forms the basis of the rest of the course.
A good place to start is the latest IPCC Synthesis Report (SYR). At present, this is the Fifth (AR5), but AR6 is due in 2022.
This is a course on Terrestrial Carbon. Why then, of all of the factors affecting climate should we be interested in this?
What we mean by the term ‘terrestrial carbon’ is the carbon that is stored in the vegetation and soils of the Earth’s land surface (so, not that in the oceans or atmosphere for example). The Earth Systems Science view of the climate is that we must maintain a focus on interactions between the different spheres, but we must also understand what processes and interactions go on within each sphere.
Our focus here on terrestrial carbon is mainly because of the major role it plays in anthropogenic climate change. Additional parts of the context of this course are: the role that terrestrial vegetation plays in biodiversity; and the role of vegetation in providing food and fuel.
“(C)limate change is a defining issue of our generation. Our responses to the challenges of climate change - accurate prediction, equitable adaptation, and efficient mitigation - will influence the quality of life for … the world, for generations to come.” (NASA, 2010).
The best estimate of the human-induced contribution to warming is similar to the observed warming over this period (Figure SPM.3).
Figure SPM.3 | Assessed likely ranges (whiskers) and their mid-points (bars) for warming trends over the 1951–2010 period from well-mixed greenhouse gases, other anthropogenic forcings (including the cooling effect of aerosols and the effect of land use change), combined anthropogenic forcings, natural forcings and natural internal climate variability (which is the element of climate variability that arises spontaneously within the climate system even in the absence of forcings). The observed surface temperature change is shown in black, with the 5 to 95% uncertainty range due to observational uncertainty. The attributed warming ranges (colours) are based on observations combined with climate model simulations, in order to estimate the contribution of an individual external forcing to the observed warming. The contribution from the combined anthropogenic forcings can be estimated with less uncertainty than the contributions from greenhouse gases and from other anthropogenic forcings separately. This is because these two contributions partially compensate, resulting in a combined signal that is better constrained by observations. {Figure 1.9}
From the IPCC AR5 (synthesis report) we can state:
Observed changes and their causes
Human influence on the climate system is clear, and recent anthropogenic emissions of greenhouse gases are the highest in history. Recent climate changes have had widespread impacts on human and natural systems.
Future Climate Changes, Risks and Impacts
Continued emission of greenhouse gases will cause further warming and long-lasting changes in all components of the climate system, increasing the likelihood of severe, pervasive and irreversible impacts for people and ecosystems. Limiting climate change would require substantial and sustained reductions in greenhouse gas emissions which, together with adaptation, can limit climate change risks.
Future Pathways for Adaptation, Mitigation and Sustainable Development
Adaptation and mitigation are complementary strategies for reducing and managing the risks of climate change. Substantial emissions reductions over the next few decades can reduce climate risks in the 21st century and beyond, increase prospects for effective adaptation, reduce the costs and challenges of mitigation in the longer term, and contribute to climate-resilient pathways for sustainable development.
Adaptation and Mitigation
Many adaptation and mitigation options can help address climate change, but no single option is sufficient by itself. Effective implementation depends on policies and cooperation at all scales, and can be enhanced through integrated responses that link adaptation and mitigation with other societal objectives.
It is instructive to compare some of the materials between AR4 and AR5. For example, from the IPCC AR4 (synthesis report) we can state that there is general agreement among scientists with regard to the following:
Global GHG emissions due to human activities have grown since pre-industrial times, with an increase of 70% between 1970 and 2004.
Global atmospheric concentrations of CO2, methane (CH4) and nitrous oxide (N2O) have increased markedly as a result of human activities since 1750 and now far exceed pre-industrial values determined from ice cores spanning many thousands of years.
Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic GHG concentrations. It is likely that there has been significant anthropogenic warming over the past 50 years averaged over each continent (except Antarctica).
Anthropogenic warming over the last three decades has likely had a discernible influence at the global scale on observed changes in many physical and biological systems.
Not surprisingly, in AR5 you will find much the same material, but our scientific understanding of many of the processes and/or the scientific consensus has strengthened as new studies are performed and new evidence presented. For example, in AR5, for the attribution of climate changes to human influences, we are now able to state: “Anthropogenic greenhouse gas emissions have increased since the pre-industrial era, driven largely by economic and population growth, and are now higher than ever. This has led to atmospheric concentrations of carbon dioxide, methane and nitrous oxide that are unprecedented in at least the last 800,000 years. Their effects, together with those of other anthropogenic drivers, have been detected throughout the climate system and are extremely likely to have been the dominant cause of the observed warming since the mid-20th century.” In this context, ‘extremely likely’ means 95-100% confidence.
Advances since the Third Assessment Report (TAR) show that discernible human influences extend beyond average temperature to other aspects of climate. (IPCC AR4 Synthesis Report)
The Earth’s climate can be seen as the result of a complex set of process interactions. To be able to rise to the challenges facing us, we need to better understand and monitor the processes governing climate and the ways in which they interact.
One way of expressing and trying to understand this is through an Earth System Science approach, in which we recognise and stress the importance of interactions in the way we apply science to tackling this. The inclusion of the Anthrosphere (or Anthroposphere) (the part of the environment that is made or modified by humans for use in human activities and human habitats) is critical for this view of the Earth’s climate as a set of interacting spheres of influence.
It is clear then that the climate system and its dynamics are things that we as scientists need to better understand, particularly as climate change is something that will affect us all in some way or other.
Midnight sun, Reykjavik, Iceland. (Photo P. Lewis)
The Earth’s climate is driven by (shortwave) solar radiation. Around 31% of this incoming radiation is reflected by clouds, aerosols and gases in the atmosphere and by the land surface. The remaining 69% is absorbed, with almost 50% of the incoming radiation being absorbed at the Earth surface.
"Estimate of the Earth's annual and global mean energy balance. Over the long term, the amount of incoming solar radiation absorbed by the Earth and atmosphere is balanced by the Earth and atmosphere releasing the same amount of outgoing longwave radiation. About half of the incoming solar radiation is absorbed by the Earth's surface. This energy is transferred to the atmosphere by warming the air in contact with the surface (thermals), by evapotranspiration and by longwave radiation that is absorbed by clouds and greenhouse gases. The atmosphere in turn radiates longwave energy back to Earth as well as out to space. Kiehl and Trenberth (1997).
The shortwave radiation absorbed at the surface is, in the long term, transferred back to the atmosphere, so that around 69% of the incoming energy flux is re-radiated to space as longwave radiation.
The energy absorbed at the surface drives thermals (atmospheric convection) and evapo-transpiration (latent heat transfer: change of state of water). The rest of the energy balance is maintained by thermal (longwave) radiation emitted by the surface, the atmosphere and clouds.
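These broad fractions follow directly from the Kiehl and Trenberth (1997) global mean fluxes quoted in the caption above; a quick sketch, treating those published flux values as given:

```python
# Global mean energy fluxes from Kiehl and Trenberth (1997), in W/m^2.
INCOMING_SOLAR = 342.0    # average shortwave flux at the top of the atmosphere
REFLECTED = 107.0         # reflected by clouds, aerosols, gases and the surface
ABSORBED_SURFACE = 168.0  # shortwave absorbed at the Earth's surface

# Whatever is not reflected must be absorbed by the Earth-atmosphere system
absorbed_total = INCOMING_SOLAR - REFLECTED

print(f"Reflected fraction:  {REFLECTED / INCOMING_SOLAR:.0%}")          # ~31%
print(f"Absorbed fraction:   {absorbed_total / INCOMING_SOLAR:.0%}")     # ~69%
print(f"Absorbed at surface: {ABSORBED_SURFACE / INCOMING_SOLAR:.0%}")   # ~49%
```

These ratios reproduce the 31% / 69% / "almost 50%" partition described in the text.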
As part of the energy cycle illustrated above, a large proportion of the longwave radiation emitted by the surface is re-radiated back to the surface (and absorbed by it) by clouds and so-called greenhouse gases. This mechanism, the ‘trapping’ of longwave radiation in the atmosphere, is what naturally maintains the temperature on Earth – the ‘natural greenhouse effect’. Without it, the temperature at the Earth's surface and in the atmosphere would be much less than it presently is: if the Earth were an ideal thermally conductive blackbody (that still reflected around 31% of the incoming shortwave radiation), the effective temperature required to emit the energy flux that balances the incoming radiation would be around -19 C.
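The -19 C figure can be checked with the Stefan-Boltzmann law; a minimal sketch, assuming a total solar irradiance of roughly 1361 W/m^2 (a value not given in the text):

```python
# Effective (blackbody) temperature of an Earth that reflects 31% of sunlight
# but has no greenhouse atmosphere.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
TSI = 1361.0       # assumed total solar irradiance, W/m^2
ALBEDO = 0.31      # planetary albedo

# Absorbed flux averaged over the whole sphere
# (factor of 4 = sphere surface area / intercepted disc area)
absorbed = TSI * (1.0 - ALBEDO) / 4.0

# Balance absorbed and emitted flux: absorbed = sigma * T^4
t_eff_k = (absorbed / SIGMA) ** 0.25
print(f"Effective temperature: {t_eff_k - 273.15:.0f} C")  # about -19 C
```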
Radiation transmitted by the atmosphere at shortwave and longwave wavelengths
The figure above shows the major absorbing (and scattering, other than aerosols) constituents of the atmosphere for shortwave and longwave wavelengths and their impact on atmospheric transmission.
Obviously the atmospheric transmission depends on the concentrations of these constituents, but the figures given might be taken as typical. In the ultraviolet, ozone is primarily responsible for solar radiation absorption. At visible wavelengths, the main factors are Rayleigh scattering and aerosols. At thermal wavelengths, water vapour and CO2 are the most important constituents.
Clouds also affect atmospheric transmission. Low, thick clouds primarily reflect shortwave radiation, whereas high, thin clouds allow most shortwave radiation through but absorb longwave radiation.
Aerosols have a range of complicated effects on radiation. Whilst many aerosols such as sulfates and nitrates reflect most shortwave radiation, black carbon absorbs most of it. Another important role of aerosols is to act as cloud condensation nuclei which enable water vapour in the atmosphere to condense and coalesce. Interesting biogenic sources include volatile organic compounds (VOCs) and other materials emitted from forests (Spracklen et al., 2008).
Radiative forcing (RF) is a measure of the radiative impact of components of the climate system (e.g. Greenhouse Gases (GHGs)), with positive values indicating a warming impact. Formally, it is “a measure of the influence a factor has in altering the balance of incoming and outgoing energy in the Earth-atmosphere system and is an index of the importance of the factor as a potential climate change mechanism. … radiative forcing values are for changes relative to preindustrial conditions defined at 1750 and are expressed in watts per square meter (W/m^2).” (IPCC AR4 Synthesis Report). (See also “AR5 Climate Change 2013: Working Group I: The Physical Science Basis, Chapter 8: Anthropogenic and Natural Radiative Forcing”.) The related concept of Effective Radiative Forcing (ERF) is the “change in net TOA downward radiative flux after allowing for atmospheric temperatures, water vapour and clouds to adjust, but with surface temperature or a portion of surface conditions unchanged” (WG1AR5 Chapter08, Box 8.1).
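Although not covered in these notes, a widely used simplified expression for the RF of CO2 (Myhre et al., 1998) illustrates how forcing scales with concentration; a sketch, with an assumed pre-industrial concentration of 278 ppm:

```python
import math

# Simplified expression for CO2 radiative forcing (Myhre et al., 1998):
# dF = 5.35 * ln(C / C0) in W/m^2, where C0 is the reference concentration.
def co2_forcing(c_ppm, c0_ppm=278.0):
    """Radiative forcing (W/m^2) of CO2 at c_ppm relative to c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# A doubling of CO2 gives the familiar ~3.7 W/m^2
print(f"2xCO2 forcing: {co2_forcing(2 * 278.0):.2f} W/m^2")
```

The logarithmic form means each successive doubling of CO2 adds roughly the same forcing.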
Various important conclusions of IPCC AR5 are phrased in terms of (E)RF. For example:
The total anthropogenic ERF over the Industrial Era is positive, with a value of 2.3 (1.1 to 3.3) W m–2.
The best estimate of RF due to total solar irradiance (TSI) changes representative for the 1750 to 2011 period is 0.05 (to 0.10) W m–2. This is substantially smaller than the AR4 estimate. The contrast between this and the anthropogenic ERF stresses the role of anthropogenic influence.
Due to increased concentrations, RF from WMGHGs (well-mixed GHGs: CO2, CH4, N2O, SF6) has increased by 0.20 (0.18 to 0.22) W m–2 (8%) since the AR4 estimate for the year 2005.
The net forcing by WMGHGs other than CO2 shows a small increase since the AR4 estimate for the year 2005.
Ozone and stratospheric water vapour contribute substantially to RF.
There is robust evidence that anthropogenic land use change has increased the land surface albedo, which leads to an RF of –0.15 ± 0.10 W m–2
The RF of volcanic aerosols is well understood and is greatest for a short period (~2 years) following volcanic eruptions.
There is very high confidence that industrial-era natural forcing is a small fraction of the anthropogenic forcing except for brief periods following large volcanic eruptions.
Rockstrom et al. (2009) propose that “human changes to atmospheric CO2 concentrations should not exceed 350 parts per million by volume, and that radiative forcing should not exceed 1 watt per square metre above pre-industrial levels. Transgressing these boundaries will increase the risk of irreversible climate change, such as the loss of major ice sheets, accelerated sea-level rise and abrupt shifts in forest and agricultural systems. Current CO2 concentration stands at 387 p.p.m.v. and the change in radiative forcing is 1.5 W m^-2”
"Radiative forcing of climate change during the industrial era (1750–2011). Bars show radiative forcing from well-mixed greenhouse gases (WMGHG), other anthropogenic forcings, total anthropogenic forcings and natural forcings. The error bars indicate the 5 to 95% uncertainty. Other anthropogenic forcings include aerosol, land use surface reflectance and ozone changes. Natural forcings include solar and volcanic effects. The total anthropogenic radiative forcing for 2011 relative to 1750 is 2.3 W/m2 (uncertainty range 1.1 to 3.3 W/m2). This corresponds to a CO2-equivalent concentration (see Glossary) of 430 ppm (uncertainty range 340 to 520 ppm). {Data from WGI 7.5 and Table 8.6}"
The figure below from IPCC AR5 gives global mean radiative forcings (and 95% confidence intervals (CIs)) for some of the most significant GHGs and other components of the system. We see that the most significant anthropogenic positive RF term is CO2, followed by CH4, tropospheric O3, halocarbons, N2O, (natural) solar irradiance variations, and black carbon effects on snow (lowering snow albedo). On the other hand, there are significant negative RF effects from aerosols (both directly, in increasing the shortwave atmospheric albedo, and indirectly, through increasing cloud cover and cloud albedo) and surface albedo effects due to land use change (increasing albedo, e.g. through deforestation).
"Bar chart for RF (hatched) and ERF (solid) for the period 1750–2011, where the total ERF is derived from Figure 8.16. Uncertainties (5 to 95% confidence range) are given for RF (dotted lines) and ERF (solid lines)."
"Figure 8.16 | Probability density function (PDF) of ERF due to total GHG, aerosol forcing and total anthropogenic forcing. The GHG consists of WMGHG, ozone and stratospheric water vapour. The PDFs are generated based on uncertainties provided in Table 8.6. The combination of the individual RF agents to derive total forcing over the Industrial Era are done by Monte Carlo simulations and based on the method in Boucher and Haywood (2001). PDF of the ERF from surface albedo changes and combined con- trails and contrail-induced cirrus are included in the total anthropogenic forcing, but not shown as a separate PDF. We currently do not have ERF estimates for some forcing mechanisms: ozone, land use, solar, etc. For these forcings we assume that the RF is representative of the ERF and for the ERF uncertainty an additional uncertainty of 17% has been included in quadrature to the RF uncertainty. See Supplementary Material Sec- tion 8.SM.7 and Table 8.SM.4 for further description on method and values used in the calculations. Lines at the top of the figure compare the best estimates and uncertainty ranges (5 to 95% confidence range) with RF estimates from AR4."
Carbon, its name deriving from the Latin carbo for charcoal, is the 4th most abundant element in the universe. It is able to bond with itself and many other elements and forms over 10 million known compounds. It is present in the atmosphere as carbon dioxide (CO2) and other compounds such as methane (CH4), in all natural waters as dissolved CO2, in various carbonates in rocks, and as organic molecules in living and dead organisms in the biosphere (Encyclopedia of Earth). We have seen above that carbon is also important in radiative forcing directly in terms of Halocarbons in the atmosphere and black carbon deposits on snow, as well as indirectly elsewhere (e.g. land cover change).
Aerosol (E)RF is generally negative, although evidence is presented in AR5 for changing spatial patterns of RF (IPCC WG1 Ch 8)
Blasing (2016) “Recent Greenhouse Gas Concentrations” provides a table of greenhouse gases and their recent and pre-industrial atmospheric concentrations. It also provides an indication of the ‘Global Warming Potential (GWP)’, atmospheric lifetime and radiative forcing of the various gases. GWP is a measure of the radiative effects of emissions of a greenhouse gas relative to an equal mass of CO2 emissions (so the GWP for CO2 is 1). We see that methane has a significantly higher GWP (25) over a 100-year horizon than CO2, but a shorter residence time in the atmosphere.
GWP can be a useful tool, for instance for considering mitigation strategies, but it should be noted that various emission-based metrics of this nature can be used, and it is important to consider the time period over which the measure is considered (Box 3.2, AR5 <https://www.ipcc.ch/site/assets/uploads/2018/02/SYR_AR5_FINAL_full.pdf>)
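The GWP weighting described above amounts to a simple mass-weighted sum; a sketch using the 100-year GWP values quoted in the text (the emission masses are invented for illustration):

```python
# 100-year GWPs as quoted in the text (CO2 = 1 by definition, CH4 = 25).
GWP_100 = {"CO2": 1.0, "CH4": 25.0}

def co2_equivalent(emissions_tonnes):
    """Sum emissions (tonnes of each gas) weighted by their 100-year GWP."""
    return sum(mass * GWP_100[gas] for gas, mass in emissions_tonnes.items())

# On this metric, 1 tonne of CH4 counts the same as 25 tonnes of CO2
print(co2_equivalent({"CO2": 100.0, "CH4": 1.0}))  # 125.0 t CO2-equivalent
```

Note that the answer depends entirely on the chosen time horizon, which is the caveat made in Box 3.2 of AR5.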
"Global average abundances of the major, well-mixed, long-lived greenhouse gases - carbon dioxide, methane, nitrous oxide, CFC-12 and CFC-11 - from the NOAA global air sampling network are plotted since the beginning of 1979. These gases account for about 96% of the direct radiative forcing by long-lived greenhouse gases since 1750. The remaining 4% is contributed by an assortment of 15 minor halogenated gases (see text). Methane data before 1983 are annual averages from Etheridge et al. (1998), adjusted to the NOAA calibration scale [Dlugokencky et al., 2005]." This source: THE NOAA ANNUAL GREENHOUSE GAS INDEX (AGGI).
The figure above shows global abundances of CO2, CH4, N2O and major GHG chlorofluorocarbons (CFCs) in the atmosphere since 1979.
The temporal pattern of atmospheric CO2 shows a significant annual cycle, with a peak in Northern Hemisphere spring and a trough in autumn.
"The graph shows recent monthly mean carbon dioxide measured at Mauna Loa Observatory, Hawaii.
The last four complete years of the Mauna Loa CO2 record plus the current year are shown. Data are reported as a dry air mole fraction defined as the number of molecules of carbon dioxide divided by the number of all molecules in air, including CO2 itself, after water vapor has been removed. The mole fraction is expressed as parts per million (ppm). Example: 0.000400 is expressed as 400 ppm.
In the above figure, the dashed red line with diamond symbols represents the monthly mean values, centered on the middle of each month. The black line with the square symbols represents the same, after correction for the average seasonal cycle. The latter is determined as a moving average of SEVEN adjacent seasonal cycles centered on the month to be corrected, except for the first and last THREE and one-half years of the record, where the seasonal cycle has been averaged over the first and last SEVEN years, respectively.
The last year of data are still preliminary, pending recalibrations of reference gases and other quality control checks. The Mauna Loa data are being obtained at an altitude of 3400 m in the northern subtropics, and may not be the same as the globally averaged CO2 concentration at the surface."
Source: NOAA ESRL
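The seasonal correction described in the caption can be illustrated in simplified form. The sketch below removes an average seasonal cycle from a synthetic monthly series; the real NOAA procedure uses a moving average of seven adjacent seasonal cycles, and the trend and amplitude values here are invented:

```python
import math

# Synthetic monthly CO2-like record: linear trend plus an annual sine cycle
years, trend, amplitude = 7, 2.1, 3.0  # ppm/yr trend, ppm seasonal amplitude
series = [400.0 + trend * (i / 12.0) + amplitude * math.sin(2 * math.pi * i / 12.0)
          for i in range(years * 12)]

# Remove the known trend, average the residual by calendar month (climatology),
# then subtract that climatology from the original series
monthly_anomaly = [v - (400.0 + trend * (i / 12.0)) for i, v in enumerate(series)]
climatology = [sum(monthly_anomaly[m::12]) / years for m in range(12)]
corrected = [v - climatology[i % 12] for i, v in enumerate(series)]
# The corrected series reduces to the underlying trend line
```

With real data the seasonal cycle is not perfectly repeatable, which is why NOAA averages over a moving window of adjacent cycles rather than the whole record.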
Carbon dioxide is emitted as part of the carbon cycle and by anthropogenic activities such as the burning of fossil fuels. We will deal with the carbon cycle below, but briefly examine direct anthropogenic emissions here.
By far the largest direct anthropogenic source of CO2 emissions is fossil fuel combustion, which is in turn driven by economic and population growth (AR5 p.5 <https://www.ipcc.ch/site/assets/uploads/2018/02/SYR_AR5_FINAL_full.pdf>).
Figure 1.8 | Decomposition of the change in total annual carbon dioxide (CO2) emissions from fossil fuel combustion by decade and four driving factors: population, income (gross domestic product, GDP) per capita, energy intensity of GDP and carbon intensity of energy. The bar segments show the changes associated with each individual factor, holding the respective other factors constant. Total emission changes are indicated by a triangle. The change in emissions over each decade is measured in gigatonnes of CO2 per year (GtCO2/yr); income is converted into common units, using purchasing power parities. {WGIII SPM.3}
The figure below shows estimated global total CO2 emissions since 1750, by world region.
Looking at more recent years (post 1970) (Fig 5.2 p.358, AR5 WG3, Chapter 5) emphasises how this is in some ways tied to population, in that global per capita emissions are relatively constant over the last 50 years. Such broad-scale analysis however hides many regional disparities and trends.
Methane arises from both natural and anthropogenic sources.
The annual cycles seen in the figure above are attributed to removal by the hydroxyl radical OH (ECI, Oxford, “Climate science of methane”), which is the major mechanism for the breakdown of CH4 in the troposphere.
Anthropogenic activity accounts for around 44-53% of N2O emissions, with tropical soils and oceanic release accounting for the majority of the remainder (Davidson and Kanter, 2014).
To understand the role of carbon in the earth system, we must gain some understanding of the general processes at work. We will consider first the biogeochemical (concentrating on carbon), and then the biogeophysical (albedo and evapotranspiration) processes in the following sections.
This course cannot cover all aspects of climate science and related biological, chemical and physical/meteorological aspects in great detail. The emphasis of the course is on students developing an understanding of monitoring and modelling terrestrial carbon, so we provide only a brief overview of other aspects.
For further reading, some references are provided. Students are encouraged to fill the gaps in their knowledge in other areas using:
Myhre, G., D. Shindell, F.-M. Bréon, W. Collins, J. Fuglestvedt, J. Huang, D. Koch, J.-F. Lamarque, D. Lee, B. Mendoza, T. Nakajima, A. Robock, G. Stephens, T. Takemura and H. Zhang, 2013: Anthropogenic and Natural Radiative Forcing. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
"This diagram of the fast carbon cycle shows the movement of carbon between land, atmosphere, and oceans. Yellow numbers are natural fluxes, and red are human contributions in gigatons of carbon per year. White numbers indicate stored carbon. Diagram adapted from U.S. DOE, Biological and Environmental Research Information System.)". Source: NASA Earth Observatory
The carbon cycle describes the flow of carbon between reservoirs in the Earth system. The largest pools of carbon are fossil carbon, deep ocean and reactive sediments, soil carbon, carbon at the ocean surface, and that in the atmosphere. After these comes that stored in plant biomass. Processes of photosynthesis, respiration and decomposition, as well as gas interchange at the ocean surface, move carbon between the different pools. On top of that, we have the impact of anthropogenic emissions, which as we have seen above is injecting around 9 (8.749 using the 2008 figures above) gigatonnes of carbon a year into the atmosphere from fossil fuel combustion.
A small aside on units:
8.749 x 3.667 x 1000 = 32082.6 Tg CO2 equivalent, which is the figure quoted above (see CDIAC information on reporting units for details). Using 5.137 x 10^18 kg as the mass of the atmosphere (Trenberth, 1981, JGR 86:5238-46), 1 ppmv of CO2 = 2.13 Gt of carbon (CDIAC FAQ). So, 8.749 Gt of carbon is equivalent to 4.11 ppmv of CO2.
The annual mean growth rate of CO2 in the atmosphere is currently over 2 ppmv of CO2, from which it can be inferred that just over 2 ppmv of CO2 must enter other fast pools of carbon.
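The unit conversions in this aside can be checked directly, using the conversion factors quoted from CDIAC above (the assumed growth rate of 2.1 ppmv/yr is illustrative, standing in for the "over 2 ppmv" in the text):

```python
# CDIAC conversion factors quoted in the text:
GTC_TO_GTCO2 = 3.667  # Gt CO2 per Gt C (ratio of molecular to atomic mass)
GTC_PER_PPMV = 2.13   # Gt C per ppmv of atmospheric CO2

emissions_gtc = 8.749  # fossil fuel emissions, Gt C/yr (2008 figure above)
growth_ppmv = 2.1      # assumed atmospheric growth rate, ppmv/yr

print(f"{emissions_gtc * GTC_TO_GTCO2 * 1000:.1f} Tg CO2")  # 32082.6
print(f"{emissions_gtc / GTC_PER_PPMV:.2f} ppmv CO2")       # 4.11
# The difference between emitted and retained CO2 must enter other fast pools
print(f"Implied uptake elsewhere: {emissions_gtc / GTC_PER_PPMV - growth_ppmv:.1f} ppmv/yr")
```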
A view of the carbon cycle with more detail, from the IPCC AR5 is:
"Simplified schematic of the global carbon cycle. Numbers represent reservoir mass, also called ‘carbon stocks’ in PgC (1 PgC = 1015 gC) and annual carbon exchange fluxes (in PgC yr–1). Black numbers and arrows indicate reservoir mass and exchange fluxes estimated for the time prior to the Industrial Era, about 1750. Red arrows and numbers indicate annual ‘anthropogenic’ fluxes averaged over the 2000–2009 time period. These fluxes are a perturbation of the carbon cycle during Industrial Era post 1750. The uptake of anthropogenic CO2 by the ocean and by terrestrial ecosystems, often called ‘carbon sinks’ are the red arrows part of Net land flux and Net ocean flux. Red numbers in the reservoirs denote cumulative changes of anthropogenic carbon over the Industrial Period 1750–2011. By convention, a positive cumulative change means that a reservoir has gained carbon since 1750. Uncertainties are reported as 90% confidence intervals."
"Global anthropogenic CO2 budget, accumulated since the Industrial Revolution (onset in 1750) and averaged over the 1980s, 1990s, 2000s, as well as the last 10 years until 2011. By convention, a negative ocean or land to atmosphere CO2 flux is equivalent to a gain of carbon by these reservoirs. The table does not include natural exchanges (e.g., rivers, weathering) between reservoirs. The uncertainty range of 90% confidence interval presented here differs from how uncertainties were reported in AR4 (68%)."
The increase in atmospheric CO2 is around 4.3 (+/- 0.2) PgC yr^-1 (2002-2011 figures). The table above shows how this is partitioned between the main sources and sinks. Despite being the smallest flux, the land-to-atmosphere flux has the largest uncertainty. Note that the ‘residual land sink’ is calculated to balance the budget, rather than being a direct calculation.
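Because the residual land sink balances the budget, it can be computed from the other terms. A sketch, using illustrative values of the same order as the AR5 2002-2011 estimates (only the 4.3 PgC yr^-1 atmospheric growth rate is quoted in the text; the other numbers are assumptions):

```python
# Anthropogenic CO2 budget terms, Pg C/yr (illustrative, ~2002-2011 order)
fossil_fuel = 8.3         # fossil fuel and cement emissions (assumed)
land_use_change = 0.9     # net land use change emissions (assumed)
atmospheric_growth = 4.3  # observed atmospheric increase (quoted in text)
ocean_sink = 2.4          # ocean uptake (assumed)

# The residual land sink is whatever closes the budget
residual_land_sink = (fossil_fuel + land_use_change) - atmospheric_growth - ocean_sink
print(f"Residual land sink: {residual_land_sink:.1f} Pg C/yr")
```

Because it is a residual, this term inherits the combined uncertainty of every other term in the budget, which is why it is the least well constrained.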
Figure showing information from the above table for the global carbon cycle for the 1990s, figures in Gt carbon yr-1.
Source: S. Quegan, BIOMASS: ESA User Consultation Meeting, Lisbon, Portugal, 20-21 Jan 2009
The figure above illustrates what is currently known about both the magnitudes and uncertainties of the global carbon cycle fluxes in the 1990s. The increase in atmospheric carbon is less than that emitted from burning fossil fuels, as discussed above; the balance is made up of net flows to the ocean and land. The largest uncertainty is in the net terrestrial uptake, even though this is the smallest component of the flux. The land sink involves emissions from fire and land use change and a land carbon sink, which has the greatest uncertainty of the sub-components (0.9 to 4.3 Gt carbon yr-1). Estimated stocks of land carbon are also shown, which indicate a terrestrial vegetation pool of around 600 Gt of carbon (a similar order of magnitude to that in the atmosphere) and a much larger but less mobile (on annual to decadal time scales) soil and detritus pool.
This figure shows current estimates of the carbon cycle for the 1990s. Where available, error bars are given. The cycle is a balance between emissions from anthropogenic sources and changes in the pools of the components of the carbon cycle. The anthropogenic source list is incomplete here as it does not include land use change (mainly tropical deforestation). The AR4 contains no estimate of uncertainty on this (figures above), just a range. The figures illustrated here show both the 'low' estimate of land use change fluxes, implying a low(ish) residual carbon sink, and the 'high' estimate, implying a high residual sink. The large 'uncertainty' (range of estimates) for the land use change flux therefore dominates the total error budget. The residual sink term is mainly implied from estimates of the land use change term, although it certainly contains some uptake into global biomass.
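Because the residual land sink is a balancing term, its value (and uncertainty) follows directly from the other budget components. A minimal sketch of that bookkeeping, using approximate AR5 mean values for 2002–2011 (the numbers are illustrative, not definitive):

```python
# Sketch: the 'residual land sink' balances the anthropogenic CO2 budget.
# Values (PgC yr-1) are approximate AR5 means for 2002-2011, for
# illustration only.
fossil_fuel = 8.3        # fossil fuel and cement emissions
land_use_change = 0.9    # net land use change emissions
atmos_growth = 4.3       # observed atmospheric CO2 growth
ocean_sink = 2.4         # net ocean uptake

# The residual land sink is whatever is left over, so its uncertainty
# accumulates contributions from all the other terms.
residual_land_sink = fossil_fuel + land_use_change - atmos_growth - ocean_sink
print(round(residual_land_sink, 1))  # 2.5
```

This also makes clear why the residual term inherits the large uncertainty of the land use change estimate.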
Source: S. Quegan, BIOMASS: ESA User Consultation Meeting, Lisbon, Portugal, 20-21 Jan 2009
This figure gives an indication of the uncertainty in carbon emissions that is due to uncertainty in our knowledge of biomass (ca. 1 Gt carbon yr-1), arising from the way in which biomass and land use change fluxes are currently calculated.
Source: S. Quegan, BIOMASS: ESA User Consultation Meeting, Lisbon, Portugal, 20-21 Jan 2009
The figure below shows net and gross CO2 fluxes from the land, separating these into Agriculture, Forestry and Other Land Use (AFOLU) fluxes and indirect emissions and removals. The uncertainty on the gross indirect fluxes is particularly high. The total net land flux (a removal) is currently estimated to average around 6.0 +/- 2.0 GtCO2 yr^-1 (2007–2016). One tonne of carbon equals 3.67 tonnes of carbon dioxide, so this equates to 1.63 +/- 0.54 GtC yr^-1.
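The CO2-to-carbon conversion used above comes straight from the molar masses: a sketch, with the 1.63 figure recovered from 6.0 GtCO2 (small differences arise from rounding of the 3.67 factor):

```python
# Convert between mass of CO2 and mass of carbon via molar masses.
# 1 t CO2 contains 12/44 t C, i.e. 1 t C corresponds to 44/12 ~= 3.67 t CO2.
M_C, M_CO2 = 12.01, 44.01  # approximate molar masses, g/mol

def gtco2_to_gtc(gtco2):
    """Convert a flux in GtCO2 to GtC."""
    return gtco2 * M_C / M_CO2

# e.g. the ~6.0 GtCO2/yr net land removal quoted above:
print(round(gtco2_to_gtc(6.0), 2))  # 1.64
```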
Net and gross fluxes of CO2 from land (annual averages for 2008–2017). Left: The total net flux of CO2 between land and atmosphere (grey) is shown with its two component fluxes, (i) net AFOLU emissions (blue), and (ii) the net land sink (brown), due to indirect environmental effects and natural effects on managed and unmanaged lands. Middle: The gross emissions and removals contributing to the net AFOLU flux. Right: The gross emissions and removals contributing to the land sink
Reconciling the net land sink is complicated, then, partly because of the large and competing components, and partly because of a lack of real constraint on the indirect fluxes. Further, even though countries report their annual AFOLU (and other) fluxes to the UN (GHG inventory), these currently differ considerably from other estimates of the fluxes, whether from modelling or from bookkeeping models:
Global net CO2 emissions due to AFOLU from different approaches (in GtCO2 yr–1). Brown line: the mean and individual estimates (brown shading) from two bookkeeping models (Houghton and Nassikas 2017; Hansis et al. 2015). Blue line: the mean from DGVMs run with the same driving data, with the pale blue shading showing the ±1 standard deviation range. Yellow line: data downloaded from the FAOSTAT website (Tubiello et al. 2013); the dashed line is primarily forest-related emissions, while the solid yellow line also includes emissions from peat fires and peat draining. Orange line: Greenhouse Gas Inventories (GHGI) based on country reports to UNFCCC (Grassi et al. 2018); data are shown only from 2005 because reporting in many developing countries became more consistent/reliable after this date.
As we saw above, the global annual flux of carbon to the atmosphere from microbial respiration and decomposition is thought to be around 120 Gt of carbon per year. This is approximately balanced by the process of photosynthesis that currently draws down around 123 Gt of carbon per year, including around 3 Gt attributed to anthropogenic inputs into the atmosphere that goes into the land sink.
Arriving at figures like that is complex for many reasons, so we have to think carefully about what we measure and model and also precisely define the terms we use. Lovett et al., (2006) provides a useful discussion of carbon flows in an ecosystem and the terms you will need to be aware of.
We can consider CO2 fluxes at the ecosystem level:
Gross Primary Productivity (GPP) is the amount of carbon (per unit area per unit time) taken up by green vegetation in the ecosystem, which is simply the photosynthetic rate (at the ecosystem level). Photosynthesis involves the use of (solar) energy to convert CO2 and H2O to glucose (C6H12O6) and oxygen (O2).
Plants use (metabolise) energy (burn carbohydrates) to maintain growth, reproduction and other life processes. This is the process of (autotrophic) respiration (\(R_a\)), which releases CO2 (and water) to the atmosphere. Additional respiration by soil micro-organisms and soil animals (consumers and decomposers) in the decomposition of soil organic matter is known as heterotrophic respiration (\(R_h\)). The total ecosystem respiration then, \(R_e\) is the sum of \(R_a\) and \(R_h\).
The net ecosystem productivity (NEP) is defined:
\[NEP = GPP - R_e\]
In this way, it represents the organic C available for storage within the ecosystem or that can be transported out of the ecosystem as a loss (e.g. fire or harvest). The sign of NEP indicates whether there is a net input to (positive NEP) or output from (negative NEP) the system. Lovett et al., (2006) note that this is not quite the same as the change in C storage, as that should include any imports and exports of organic C (e.g. fertilizer, wood harvesting) and non-biological oxidation of C.
The related term, Net Ecosystem Exchange (NEE), is essentially equal to NEP but opposite in sign (i.e. NEE is negative when GPP is higher than \(R_e\)). Technically, it also includes sources and sinks of CO2 that do not involve conversion to or from organic C (e.g. weathering reactions), but these tend to be minor for terrestrial ecosystems, and NEE is mostly treated as \(-NEP\):
\[NEE \approx -NEP\]
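The flux definitions above can be collected into a small sketch; the values are purely illustrative (units could be, e.g., gC m-2 yr-1):

```python
# Minimal sketch of the ecosystem flux definitions used above.
# The numbers passed in below are illustrative only.
def ecosystem_fluxes(gpp, ra, rh):
    """Return (NPP, Re, NEP, NEE) from GPP and respiration terms."""
    npp = gpp - ra   # net primary productivity
    r_e = ra + rh    # total ecosystem respiration Re = Ra + Rh
    nep = gpp - r_e  # net ecosystem productivity
    nee = -nep       # NEE ~= -NEP by sign convention
    return npp, r_e, nep, nee

npp, r_e, nep, nee = ecosystem_fluxes(gpp=2000.0, ra=1000.0, rh=800.0)
print(npp, r_e, nep, nee)  # 1000.0 1800.0 200.0 -200.0
```

Note that a positive NEP (net gain by the ecosystem) corresponds to a negative NEE (net flux out of the atmosphere).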
Diurnal variations in NEE over some different vegetation canopies (source: Zhang et al., 2015)
The figure above shows typical diurnal variations in CO2 fluxes over some vegetation canopies. The measure given is Net Ecosystem Exchange (net CO2 flux) (units: mg CO2 m-2 s-1). During the daytime, the fluxes are negative (i.e. there is a flow from the atmosphere to the ecosystem) and at nighttime, the flux (to the atmosphere) is positive (so the ecosystem loses CO2 to the atmosphere).
Ecosystem-level measures are of great value as they integrate all of the processes involved with plants, animals and soil, over some specified spatial and temporal domain. They are also things that we can measure, e.g. using eddy covariance towers, as we shall see later.
However, we also need to be able to model, measure and understand the underlying processes, which means considering the plant level. Working with individual plants is often not practical, so we group together plants that we want to consider as a set, or that we believe operate in a similar manner. Several, or many, individual plants will of course form part of an ecosystem. These groups may sometimes be considered as a ‘layer’ of vegetation (e.g. a tree canopy) or as several layers (canopy and understory). Such ‘layer’ separation can be important if the responses and timing of events in the different layers vary, as is often the case for a tree canopy and understory.
Taking all green plants in the ecosystem together then, we can try to isolate and define some of the ‘plant-scale’ processes. A critical term in this sense is net primary productivity (NPP), which is effectively the rate of biomass production. This is defined as ecosystem GPP minus plant respiration losses (\(R_a\)):
\[NPP = GPP - R_a\]
The ratio of NPP to GPP is known as the carbon use efficiency (CUE). This is the fraction of carbon absorbed by an ecosystem that is used in biomass production, and is quite similar across ecosystems, typically assumed to be around 0.5. DeLucia et al. (2007) for example confirm an average value of 0.53 across many forest ecosystems, but they note that individual values of CUE can range from 0.23 to 0.83.
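The CUE definition is a simple ratio; a sketch with illustrative values (chosen here to fall near the DeLucia et al. mean, not taken from any dataset):

```python
# Carbon use efficiency: the fraction of absorbed carbon used in
# biomass production. Input values are illustrative.
def carbon_use_efficiency(npp, gpp):
    """CUE = NPP / GPP (dimensionless, typically ~0.5 for forests)."""
    return npp / gpp

print(round(carbon_use_efficiency(npp=1100.0, gpp=2100.0), 2))  # 0.52
```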
"The relationship between net primary production (NPP) and gross primary production (GPP) for different forest types. Closed symbols represent values of GPP that were derived from estimates of NPP and Ra; open symbols represent values of GPP that were estimated independently from NPP. Symbols for the different forest types are: boreal (circles), West Coast Maritime (triangles), temperate conifer (squares), temperate deciduous (diamonds), temperate mixed (inverted triangles), and Tropical (stars). The intercept of the relationship between NPP and derived estimates of GPP (solid line) was significantly lower than the intercept for the relationship between NPP and independent estimates of GPP (dashed line; see results in paper for details)"
(source: DeLucia et al., 2007, GCB)
Similarly, Zhang et al. (2009) showed that CUE exhibited a pattern depending on the main climatic characteristics such as temperature and precipitation and geographical factors such as latitude and altitude.
Global spatial pattern of the average NPP/GPP ratio.
This source: Zhang et al. GCB 2009
NPP varies over the year as the factors affecting the processes involved (essentially light, temperature and water availability) vary over the growing season. Nutrient availability also affects NPP, but this is likely to vary over longer time periods. NPP can vary quite significantly from one year to the next and over decadal timescales depending on climatic factors.
"Patterns of terrestrial NPP at different timescales in a temperate forest: Daily net primary production (NPP) changes during the growing season in response to climate variables including solar radiation and precipitation, while the duration of NPP during the growing season (i.e., spring green-up to autumn leaf fall) is largely a function of photoperiod. Annual NPP changes from one year to the next in response to longer-term trends in climate, including shifts in total solar radiation caused by differences in cloud cover from year to year. Decadal patterns of NPP track changes in ecological succession (Gough et al. 2007, 2008).". This source: Gough, C. M. (2011) Terrestrial Primary Production: Fuel for Life. Nature Education Knowledge 2(2):1
NPP varies quite considerably between biomes. The following table shows typical values of GPP, total Global NPP and NPP per unit area for the main biomes.
Globally then, the most productive biomes are tropical forests, savannah and grassland which together account for around half of global NPP, and the predominance of the tropics can be seen in the figure below. But per unit area, tropical and temperate forests are the most productive.
Global patterns of NPP, Net radiation, and rainfall, N. Hemisphere summer.
The figures above show the global NPP distribution and related climatic and land surface properties for Northern hemisphere summer. The ‘NPP’ dataset broadly relates to the abundance of vegetation, which in turn relates to the total capacity of vegetation to photosynthesise. The primary drivers of GPP (so of NPP, in broad terms) are the amount of vegetation and the amount of downwelling solar radiation. Although we do not have an image of the latter here, it is broadly in line with the net radiation shown. There are of course many more subtle controls on NPP that we will consider later, but clearly these would include temperature range and water availability.
In Northern hemisphere summer then, NPP is most strongly spatially weighted to the Northern hemisphere because of these various drivers.
Global patterns of NPP, Net radiation, and rainfall, N. Hemisphere winter.
In Northern hemisphere winter, the distribution of NPP shifts to the Southern hemisphere, for the same reasons as indicated above.
"Comparison of the latitudinal distribution of the median (solid line), and 10th and 90th percentiles (dotted lines) of areally-weighted mean annual net primary productivity estimated by fifteen models within a 0.5° latitudinal band."
Source: Cramer et al., 1995, "IGBP/GAIM REPORT SERIES REPORT #5"
Since the total landmass (and in particular the vegetated landmass) in the Southern hemisphere is less than that of the Northern hemisphere, global NPP comes predominantly from Northern latitudes. Referring back to the plots of the annual cycle of atmospheric CO2 above, we noted a peak in May and a trough in October, largely then in response to global NPP increases in Spring and decreases in Autumn: the larger NPP in Northern hemisphere summer gradually decreases the atmospheric CO2 concentration. This is however complicated by the timing and spatial distribution of other CO2 sources and sinks.
The NEP then is NPP minus other losses to the atmosphere. These will generally include respiration by heterotrophs (organisms such as fungi, animals and bacteria in the soil), but there may also be other losses from the ecosystem, such as through harvesting or fire.
Anthropogenic and wildfire carbon emissions (as well as ocean and soil fluxes) as well as atmospheric circulation also significantly affect the global distribution of CO2, so the global patterns of CO2 are not as ‘simple’ as just the NPP fluxes.
"CO2 weather. We depict the daily average of the pressure-weighted mean mole fraction of carbon dioxide in the free troposphere as modeled by CarbonTracker. Units are micromoles of CO2 per mole of dry air (μmol mol-1), and the values are given by the color scale depicted under the graphic. The "free troposphere" in this case is levels 5 through 10 of the TM5 model before 2005, and levels 6 through 10 after (due to an improvement in the vertical resolution for 2006 onwards). This corresponds to about 1.2km above the ground to about 5.5km above ground, or in pressure terms, about 850 hPa to about 500 hPa. Gradients in CO2 concentration in this layer are due to exchange between the atmosphere and the earth surface, including fossil fuel emissions, air-sea exchange, and the photosynthesis, respiration, and wildfire emissions of the terrestrial biosphere. These gradients are subsequently transported by weather systems, even as they are gradually erased by atmospheric mixing."
Source: NOAA carbontracker
Heimann, M., Reichstein, M. Terrestrial ecosystem carbon dynamics and climate feedbacks. Nature 451, 289–292 (2008). https://doi.org/10.1038/nature06591
Forests and Climate Change: Forcings, Feedbacks, and the Climate Benefits of Forests, G.B. Bonan, Science 320, 1444 (2008), DOI: 10.1126/science.1155121
Grace, J., (2001) Carbon Cycle, in Encyclopedia of Biodiversity, Vol. 1, Academic Press
DeLucia et al. (2007) Forest carbon use efficiency: is respiration a constant fraction of gross primary production? Global Change Biology (2007) 13, 1157–1167, doi: 10.1111/j.1365-2486.2007.01365.x
Lovett, G.M. et al., (2006) Is Net Ecosystem Production Equal to Ecosystem Carbon Accumulation? Ecosystems (2006) 9: 1–4 DOI: 10.1007/s10021-005-0036-3
What photosynthesis achieves is to fix solar energy. This is then used to support plant growth and produce organic matter that in turn supports animals and soil microbes. It is the primary mechanism for carbon (and chemical energy) input to ecosystems.
In photosynthesis, the plant uses water and nutrients from the soil, and carbon dioxide from the air with the sun's energy to create photosynthates using the primary photosynthetic enzyme Rubisco. Oxygen is released as a byproduct.
In essence, what it does is to split (proportionately) 12 water molecules (H2O) and produce 6 molecules of oxygen gas (O2) and 6 of H2O. Carbon dioxide is reduced to glucose (C6H12O6), which is the basic material from which other biochemical constituents of biomass are synthesised (Grace, 2001). Note that the additional 6H2O are omitted in the figure above, which does not include transpiration.
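The overall stoichiometry (6CO2 + 12H2O → C6H12O6 + 6O2 + 6H2O) can be verified with a small atom-balance check; the element counts below are just the molecular formulae written out:

```python
# Check the atom balance of the overall photosynthesis reaction:
#   6 CO2 + 12 H2O -> C6H12O6 + 6 O2 + 6 H2O
def count_atoms(species):
    """species: list of (coefficient, {element: count}) pairs."""
    totals = {}
    for coeff, formula in species:
        for elem, n in formula.items():
            totals[elem] = totals.get(elem, 0) + coeff * n
    return totals

CO2 = {"C": 1, "O": 2}
H2O = {"H": 2, "O": 1}
glucose = {"C": 6, "H": 12, "O": 6}
O2 = {"O": 2}

lhs = count_atoms([(6, CO2), (12, H2O)])
rhs = count_atoms([(1, glucose), (6, O2), (6, H2O)])
print(lhs == rhs)  # True
```

The 6 "extra" water molecules on each side are why the figure, which omits transpiration, can show the simpler 6H2O form.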
Transpiration in plants is part of the water cycle and provides around 10% of the moisture found in the atmosphere. Transpiration uses around 90% of the water that enters the plant (the rest being used in cell growth and photosynthesis). Most transpiration water loss takes place in the stomata of the leaves. The guard cells of the stomata open to allow CO2 diffusion from the air for photosynthesis.
In that sense, it can be thought of as the “cost” associated with the opening of the stomata to allow the diffusion of carbon dioxide gas from the air.
Stomatal conductance (e.g. in mmol m-2 s-1) is a measure of the rate of passage of carbon dioxide (CO2) entering, or water vapour exiting, through the stomata of a leaf. The term conductance comes from an analogy with electrical circuitry. It is controlled by the guard cells of the leaf stomata and controls transpiration rates and CO2 diffusion rates (along with the gradients of water vapour and CO2).
Besides this gas exchange, water movement through the plant supports:

- movement of minerals (from the roots: xylem) and sugars (from photosynthesis: phloem) throughout the plant;
- cooling (loss of heat energy through transpiration);
- maintenance of turgor pressure in plant cells for plant structure and the functioning of guard cells in the stomata to regulate water loss and CO2 uptake.
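The electrical analogy behind stomatal conductance amounts to flux = conductance × gradient (like current = conductance × voltage). A minimal sketch for CO2, with illustrative values (the conductance and mole fractions below are assumptions, not measurements):

```python
# Diffusion analogy for stomatal conductance:
# flux = conductance x mole-fraction gradient.
def co2_assimilation(g_c, ca, ci):
    """CO2 flux into the leaf (umol m-2 s-1), given stomatal conductance
    to CO2 g_c (mol m-2 s-1) and ambient/intercellular CO2 mole
    fractions ca, ci (umol mol-1)."""
    return g_c * (ca - ci)

# e.g. ambient CO2 of 400 umol/mol, intercellular CO2 of 280 umol/mol:
print(round(co2_assimilation(g_c=0.1, ca=400.0, ci=280.0), 3))  # 12.0
```

The same form, with the gradient reversed (inside wetter than outside), describes water loss, which is why transpiration is the "cost" of CO2 uptake.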
There are three types of photosynthesis mechanisms in plants: C3, C4 and CAM. Plants that use the CAM mechanism include cacti and orchids that are adapted to extremely hot and dry environments. We will concentrate on C3 and C4 plants here because of their greater global significance.
"In the first step of the cycle CO2 reacts with RuBP to produce two 3-carbon molecules of 3-phosphoglyceric acid (3-PGA). This is the origin of the designation C3 or C3 in the literature for the cycle and for the plants that use this cycle. The entire process, from light energy capture to sugar production occurs within the chloroplast. The light energy is captured by the non-cyclic electron transport process which uses the thylakoid membranes for the required electron transport."
Source: GSU
Around 85% of plant species use this mechanism, including wheat, barley, potatoes and sugar beet, and most trees. C3 plants are at a disadvantage in hot climates because the enzyme RuBisCO, which catalyses the carboxylation of RuBP, incorporates more oxygen into RuBP as temperatures increase (‘C2 photosynthesis’), leading to a process called photorespiration and a net loss of carbon (and nitrogen) that can act as a limit to growth. C3 plants are also sensitive to water availability.
C4 plants (and CAM plants) operate more efficiently than C3 plants under conditions of drought, high temperatures, and nitrogen or CO2 limitation. They do this by bypassing the photorespiration pathway and efficiently delivering CO2 to the RuBisCO enzyme.
"C4 plants almost never saturate with light and under hot, dry conditions much outperform C3 plants. They use a two-stage process were CO2 is fixed in thin-walled mesophyll cells to form a 4-carbon intermediate, typically malate (malic acid). The reaction involves phosphoenol pyruvate (PEP) which fixes CO2 in a reaction catalyzed by PEP-carboxylate. It forms oxaloacetic acid (OAA) which is quickly converted to malic acid. The 4-carbon acid is actively pumped across the cell membrane into a thick-walled bundle sheath cell where it is split to CO2 and a 3-carbon compound. This CO2 then enters the Calvin cycle in a chloroplast of the bundle sheath cell and produces G3P and subsequently sucrose, starch and other carbohydrates that enter the cells energy transport system. " Source: GSU
C4 plants include maize and sugarcane. They are rarely found outside of latitudes 60N to 46S. Although only a small proportion of flowering plants use this mechanism for carbon fixation, they are responsible for around 25% of photosynthesis on land.
In (autotrophic) respiration, plants convert the sugars made during photosynthesis back into CO2 and water, and release energy in the process.
Oxygen is taken up in this process. The energy released from respiration is used by the plant for growth and maintenance of existing material. It consumes between 25% and 75% of all of the carbohydrates generated in photosynthesis.
The main factors affecting net photosynthesis at the leaf level are: (i) light limitation; (ii) CO2 limitation; (iii) nitrogen limitation and photosynthetic capacity; (iv) water limitation; (v) temperature effects; and (vi) pollutants. (see pp.105-115 of Chapin et al. 2002)
light limitation
Light response curves measure a plant’s response to light intensity.
"Relationship of net photosynthetic rate to photosynthetically active radiation and the processes that limit photosynthesis at different irradiances. The linear increase in photosynthesis in response to increased light (in the range of light limitation) indicates relatively constant light use efficiency. The light compensation point is the minimum irradiance at which the leaf shows a net gain of carbon. "
Source: Chapin, 2011, fig. 5.5
At low to moderate light levels, leaves have a near linear response to light intensity. The rate of change of net photosynthesis in this region to irradiance is the quantum yield of photosynthesis. This is similar for all C3 plants (in the absence of environmental stresses) at around 6% (Chapin et al., 2002). At higher light levels, saturation occurs as the efficiency of the photosynthetic mechanism is reduced. At higher levels still, net photosynthesis can decline as a result of photorespiration as described above.
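The shape described above (linear at low light with slope equal to the quantum yield, saturating at high light) is often captured empirically with a rectangular hyperbola. A sketch, with illustrative parameter values (phi, Amax and Rd below are assumptions, not course data):

```python
import numpy as np

# Empirical light-response sketch:
#   A = phi*I*Amax / (phi*I + Amax) - Rd
# phi: quantum yield (initial slope, ~0.06 for C3 plants per Chapin et al.)
# Amax: light-saturated gross rate; Rd: dark respiration. Illustrative only.
def net_photosynthesis(I, phi=0.06, Amax=20.0, Rd=1.0):
    """Net photosynthetic rate (umol CO2 m-2 s-1) vs irradiance I."""
    I = np.asarray(I, dtype=float)
    return phi * I * Amax / (phi * I + Amax) - Rd

irradiance = np.array([0.0, 50.0, 500.0, 2000.0])  # umol photons m-2 s-1
A = net_photosynthesis(irradiance)
# In darkness the leaf respires (A < 0); A rises almost linearly at low
# light and saturates towards Amax - Rd at high light.
print(bool(A[0] < 0 and A[-1] > A[1]))  # True
```

The light compensation point is where this curve crosses zero; note this simple form does not represent the photoinhibition/photorespiration decline at very high irradiance.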
Plants have some capacity to respond to changes in light conditions over time scales of days to months by acclimation (or adaptation over longer time periods): for example, leaves in more direct sunlight have more cell layers and higher photosynthetic capacity than shade leaves. Respiration rate depends on tissue protein content (Chapin et al., 2002, ch. 6), so shade leaves with low photosynthetic capacity generally have a lower protein content to minimise respiration losses.
When upscaling light limitations to the canopy or ecosystem scale, the leaf area index (LAI) is a major constraint. In effect, the lower the LAI, the lower the radiation intercepted by the vegetation. Radiation interception is often approximated through Beer’s law:
\[I = I_0 \exp(-kL)\]

where \(I\) is the radiation intensity transmitted below the canopy, \(I_0\) is that at the top of the canopy, \(k\) is an extinction coefficient (a function of leaf angle distribution and clumping) and \(L\) is the LAI.

Assuming only first-order interactions, the proportion of radiation intercepted by the canopy, \(1 - I/I_0\), is then:

\[1 - I/I_0 = 1 - \exp(-kL)\]

so that for \(L = 0\), no radiation is intercepted, and as \(L\) increases, so does the proportion intercepted. We shall return to this in later lectures, but for the moment this simple relationship illustrates the general principles.
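The Beer’s law interception fraction is easily sketched; the extinction coefficient below (k = 0.5, roughly a spherical leaf angle distribution) is an illustrative assumption:

```python
import numpy as np

# Beer's-law approximation for the fraction of downwelling radiation
# intercepted by a canopy of leaf area index L. k = 0.5 is illustrative.
def fraction_intercepted(lai, k=0.5):
    """Proportion of radiation intercepted: 1 - exp(-k * LAI)."""
    return 1.0 - np.exp(-k * np.asarray(lai, dtype=float))

lai = np.array([0.0, 1.0, 3.0, 6.0])
f = fraction_intercepted(lai)
# Interception is 0 for a bare surface and saturates towards 1 as LAI
# grows, so adding leaf area to an already-dense canopy gains little.
print(np.round(f, 3))
```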
CO2 limitation
Although we have noted spatial and temporal variations in CO2 concentrations, the variation is in fact only quite small, being of the order of 4% (Chapin et al., 2002) and insufficient to cause significant regional variations in photosynthesis. Further, although photosynthesis locally depletes the CO2 pool, it is not to a sufficient extent that it significantly affects the amount available.
"Relationship of the net photosynthetic rate to the CO2 concentration inside the leaf. Photosynthetic rate is limited by the rate of CO2 diffusion into the chloroplast in the initial (left-hand side) linear portion of the CO2 response curve and by biochemical processes at higher CO2 concentrations. The CO2 compensation point is the minimum CO2 concentration at which the leaf shows a net gain of carbon. "
Source: Chapin, 2002
The response curve of net photosynthesis to CO2 concentration inside the leaf of a C3 plant is shown above. At low levels, CO2 diffusion limits photosynthesis. With current atmospheric CO2 levels of around 390 ppmv most C3 plants would show an increase in photosynthetic rate with further increases. The magnitude of this is however uncertain due to plant acclimation and other factors.
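The A–Ci shape described above (diffusion-limited and near-linear at low Ci, saturating at high Ci) can be sketched with a Michaelis–Menten-like form; the parameters below (Amax, compensation point, half-saturation constant) are illustrative assumptions, not fitted values:

```python
import numpy as np

# Sketch of a C3 CO2-response (A-Ci) curve:
#   A = Amax * (Ci - gamma) / (Ci + Km)
# gamma: CO2 compensation point; Km: half-saturation constant.
# All parameter values are illustrative.
def a_ci(ci, Amax=30.0, gamma=50.0, Km=400.0):
    """Net photosynthesis (umol m-2 s-1) vs intercellular CO2 (umol mol-1)."""
    ci = np.asarray(ci, dtype=float)
    return Amax * (ci - gamma) / (ci + Km)

ci = np.array([50.0, 280.0, 600.0])
A = a_ci(ci)
# A = 0 at the compensation point, and A keeps rising (while saturating)
# above Ci values typical of current ambient CO2, consistent with the text.
print(bool(A[0] == 0.0 and A[2] > A[1]))  # True
```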
Over the long term, it is likely that indirect effects of elevated CO2 concentrations will be more important than increased net photosynthesis rates (Chapin et al., 2002), such as those arising from changes to the water cycle.
C4 plants are relatively unresponsive to changing CO2 concentrations, which could affect their competitiveness relative to C3 plants as CO2 rises, but again, indirect effects are likely to be important and are hard to predict.
Nitrogen limitation and photosynthetic capacity
Photosynthetic capacity is the photosynthetic rate per unit leaf mass (in unstressed conditions). It is an important concept as it can be thought of as the ‘carbon gaining potential per unit of biomass invested in leaves’ (Chapin et al., 2002; p. 110). This measure is found to be highly positively correlated with leaf nitrogen concentration, because a large proportion of the nitrogen in leaves is used in photosynthetic enzymes:
"Relationship between leaf nitrogen concentration and maximum photosynthetic capacity (photosynthetic rate measured under favorable conditions) for plants from Earth's major biomes. Circles and the solid regression line are for 11 species from six biomes using a common methodology. Crosses and the dashed regression line are data from the literature. Redrawn from Reich et al. (1997)."
Source: Chapin, 2002
The leaf nitrogen concentration can be affected by factors such as high nitrogen concentrations in soils (which is why nitrogen fertilizers can be so effective), or by whether the plants fix nitrogen through symbiosis with nitrogen-fixing bacteria in the soil.
We can also note that plants with high photosynthetic capacities (associated with high leaf N concentrations) have higher maximum stomatal conductance. This allows such plants to gain carbon rapidly (in the absence of environmental stresses), at the cost of higher water loss.
"Relationship between leaf nitrogen concentration and maximum stomatal conductance of plants from Earth's major biomes. Each point and its standard error represent a different biome: bc, broad-leafed crops; ce, cereal crops; co, evergreen conifer forest; dc, deciduous conifer forest; df, tropical dry forest; gl, grassland; mo, monsoonal forest; sc, sclerophyllous shrub; sd, dry savanna; sw, wet savanna; tc, tropical tree crop; td, temperate deciduous broadleaved forest; te, temperate evergreen broadleaved forest; tr, tropical wet forest; tu, herbaceous tundra. Redrawn from Schulze et al. (1994)."
Source: Chapin, 2002
A further observation relating to leaf N is that the higher the leaf N concentration, the shorter the leaf lifespan. Leaves with shorter lifespans tend to have higher specific leaf area (SLA, the leaf surface area per unit of biomass) (i.e. long-lived leaves are more dense), so higher leaf N concentration correlates with higher SLA.
"The effect of leaf lifespan on photosynthetic capacity, leaf nitrogen concentration, and specific leaf area. Symbols as in Fig. 5.13. Redrawn from Reich et al. (1997)"
Source: Chapin, 2002
According to Chapin et al. (2002, p. 112) then, “there is only modest variation in photosynthetic capacity per unit leaf area because leaves with a high photosynthetic capacity per unit leaf biomass also have a high SLA”. The photosynthetic capacity per unit area (Parea) (g cm-2 s-1) is then a useful ecosystem-scale measure.
Water limitation
Water limitation reduces the capacity of leaves to match CO2 supply with light availability (Chapin et al., 2002, p. 113). Water stress is manifested as a decrease in leaf relative water content (RWC). Decreasing RWC progressively decreases stomatal conductance which slows CO2 assimilation (lower photosynthetic capacity) (Lawlor, 2002), although different studies show different responses for RWC between 100% and 70% (type 1 and 2 responses below).
"A, Schematic of the basic responses of actual photosynthetic rate (A) in air (360 umol CO2 m-2s-1) and potential photosynthetic rate (Apot) measured at elevated CO2 concentration, to relative water content (RWC). Type 1 and 2 responses of Apot are shown. In the Type 1 response, Apot is unaffected until a 20-30 % decrease in RWC occurs, when it becomes metabolically limited. In Type 2, the change is linear, showing progressive metabolic limitation. In both types in well-watered leaves, photosynthetic rate (A) is stimulated by elevated CO2. Elevated CO2 maintains A at the potential rate (Apot) in the Type 1 response as RWC decreases; but at RWC below approx. 80 % Apot decreases in Type 1. Elevated CO2 simulates A progressively less as RWC decreases in Type 2, showing that Apot is inhibited. B, Scheme of the changes in CO2 inside the leaf (Ci) during steady-state A, as stomatal conductance (gs) decreases with falling RWC, associated with Type 1 or Type 2 photosynthetic response (1 with Ci decreasing to compensation point; 2 with Ci decreasing but not to compensation point). The equilibrium compensation point, Gamma, associated with Type 1 response is indicated. There are differences between experiments, with Ci not decreasing, or decreasing somewhat, or substantially. This may reflect different methods of assessing Ci."
Source: Lawlor, 2002
Plants that are acclimated and adapted to dry conditions reduce photosynthetic capacity and leaf N concentration, giving a low stomatal conductance that conserves water (Chapin et al., 2002, p. 113). Such plants also minimise leaf area (by shedding leaves or lowering leaf production rates) to minimise water loss. Some also reduce shortwave radiation absorption through higher reflectance at the leaf surface and/or more vertically inclined (erectophile) leaves.
Temperature effects
Extreme temperatures limit carbon uptake (Chapin et al., 2002, p. 114-115).
"Temperature response of photosynthesis in plants from contrasting temperature regimes. Species include antarctic lichen (Neuropogon acromelanus), a cool coastal dune plant (Ambrosia chamissonis), an evergreen desert shrub (Atriplex hymenelytra), and a summer-active desert perennial (Tidestromia oblongifolia). Redrawn from Mooney (1986)."
Source: Chapin, 2002
The figure above shows some typical response curves for plants adapted to different temperature regimes, which means that what counts as ‘extreme’ varies considerably between plant types.
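One simple way to sketch such curves is as a peaked function of leaf temperature, with species differing mainly in their optimum. The Gaussian form and all parameter values below are assumptions for illustration only; real responses are asymmetric about the optimum:

```python
import numpy as np

# Qualitative sketch of a photosynthesis temperature-response curve as a
# Gaussian around an optimum temperature t_opt. Illustrative only:
# real curves are asymmetric, falling off faster above the optimum.
def temp_response(t, t_opt=25.0, sigma=10.0):
    """Relative photosynthetic rate (0-1) at leaf temperature t (deg C)."""
    t = np.asarray(t, dtype=float)
    return np.exp(-((t - t_opt) ** 2) / (2.0 * sigma ** 2))

# Plants adapted to different regimes differ mainly in t_opt:
temps = np.array([0.0, 25.0])
arctic = temp_response(temps, t_opt=5.0)   # e.g. a cold-adapted lichen
desert = temp_response(temps, t_opt=40.0)  # e.g. a hot-desert perennial
print(bool(arctic[0] > desert[0] and desert[1] > arctic[1]))  # True
```

This reproduces the qualitative behaviour in the figure: each plant type performs best near the temperatures it is adapted to.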
Pollutants
Finally, we mention pollutants, such as sulfur dioxide (SO2) and ozone (O3), in limiting photosynthesis. The main mechanism is the pollutants entering and damaging the photosynthetic machinery (Chapin et al., 2002, p. 115). Plants can respond by reducing stomatal conductance to balance CO2 uptake with this reduced capacity, which also reduces the entry of further pollutants.
This course cannot cover all aspects of climate science and related biological, chemical and physical/meteorological aspects in great detail. The emphasis of the course is on students developing an understanding of monitoring and modelling terrestrial carbon, so we provide only a brief overview of other aspects.
For further reading, some references are provided. Students are encouraged to fill the gaps in their knowledge in other areas using:
Lawlor, D.W. (2002) Limitation to Photosynthesis in Water-stressed Leaves: Stomata vs. Metabolism and the Role of ATP, Ann Bot (2002) 89 (7): 871-885. doi: 10.1093/aob/mcf110
Forests and Climate Change: Forcings, Feedbacks, and the Climate Benefits of Forests, G.B. Bonan, Science 320, 1444 (2008), DOI: 10.1126/science.1155121
Two critical driving variables we will need to understand patterns and responses of terrestrial vegetation are the incident shortwave solar radiation and temperature at the land surface. In this practical, we will explore the variations in shortwave solar radiation as a function of latitude and time of year, and associate surface temperature with these. In a future exercise, we will use such data to drive a model of carbon dynamics by vegetation.
Before running this practical, it is wise to update the original file, and then save a copy. To update manually (if GitHub is having issues), use this code in a terminal on JupyterHub (or on your own machine if that is your preference). First navigate to the original file:
Before running the notebook, complete the installation guide here. Then select the geog0113 kernel, from the kernel tab at the top of the page.
[1]:
# import codes we need into the notebook
import numpy as np
import matplotlib.pyplot as plt
from geog0133.solar import solar_model, radiation
import scipy.ndimage.filters
from geog0133.cru import getCRU, splurge
from datetime import datetime, timedelta
Solar radiation drives the Earth system and strongly impacts the terrestrial carbon cycle.
We will use a model of solar radiation to examine diurnal and seasonal variations. We use the PyEphem Python package, which allows us to calculate the position of the sun with respect to an observer on Earth. So we need time as well as longitude and latitude.
Before we do that, let’s make sure we understand the main factors causing these variations.
We will use the solar_model method, which returns a time axis in units of Julian days, the solar zenith angle (in degrees), the Earth-Sun distance (in AU), and the solar radiation in mol(photons)/m^2 s.
[2]:
help(solar_model)
Help on function solar_model in module geog0133.solar:
solar_model(secs, mins, hours, days, months, years, lats, longs, julian_offset='2000/1/1')
A function that calculates the solar zenith angle (sza, in
degrees), the Earth-Sun distance (in AU), and the instantaneous
downwelling solar radiation in mol(photons) per square meter per
second for a set of time(s) and geographical locations.
A small aside on defining dates. We will define the date by the day of year (doy) when we run these exercises, but the code we are using requires the date as day and month. To convert between these in Python, we use the datetime module to specify the 1st of January of the given year in the variable datetime_object, then add on the doy (actually doy-1, because doy 1 means January 1st). We then obtain day, month and year as:
datetime_object.day, datetime_object.month, datetime_object.year
[3]:
# set a date by year and doy
year = 2020
doy = 32
# convert to datetime object
datetime_object = datetime(year, 1, 1) + timedelta(doy - 1)
# extract day, month and year
datetime_object.day, datetime_object.month, datetime_object.year
[3]:
(1, 2, 2020)
Orbital mechanics impose a variation in the Earth-Sun distance over the year. This is typically expressed in Astronomical Units (AU), so relative to the mean Earth-Sun distance.
The exo-atmospheric solar radiation \(E_0\) (\(W/m^2\)) (total shortwave radiation) is defined as:

\[E_0 = \frac{G_{sc}}{r^2}\]

where \(G_{sc}\) is the Solar Constant (around 1361 \(W/m^2\)) and \(r\) is the Earth-Sun distance in AU (an inverse-square law).
Around half of the SW energy is in the PAR region, so to calculate PAR, we scale by 0.5.
The energy content of PAR quanta is around \(220 \times 10^{-3}\) \(MJ\,mol^{-1}\), so the top of atmosphere (TOA) PAR solar radiation is \(\frac{E_0}{2 \times 220 \times 10^{-3}}\) \(\mu mol\,(photons)/{m^2 s}\), i.e. around 3093 \(\mu mol/{m^2 s}\) at 1 AU.
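This arithmetic can be checked directly (a minimal sketch; the 0.5 PAR fraction and 220 kJ mol\(^{-1}\) photon energy content are the approximations stated above):

```python
# check the TOA PAR arithmetic: E0 -> PAR energy flux -> photon flux
G_sc = 1361.0        # solar constant, W/m^2
par_fraction = 0.5   # around half of SW energy is in the PAR region
e_par = 220.0e3      # energy content of PAR quanta, J/mol

r = 1.0                             # Earth-Sun distance, AU
E0 = G_sc / r ** 2                  # exo-atmospheric SW radiation, W/m^2
par_watts = E0 * par_fraction       # PAR energy flux, W/m^2
par_umol = 1e6 * par_watts / e_par  # PAR photon flux, umol/(m^2 s)
print(round(par_umol))              # 3093 at 1 AU
```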
We can use solar_model to calculate this distance and ipar variation, and visualise the orbit:
[15]:
year = 2020
doys = np.arange(1, 366)
dts = [datetime(year, 1, 1) + timedelta(int(doy) - 1) for doy in doys]
# solar distance in AU
distance = np.array([solar_model(0., [0], [12],
                                 dt.day, dt.month, dt.year, 0, 0)[2]
                     for dt in dts]).ravel()
swrad = np.array([solar_model(0., [0], [12],
                              dt.day, dt.month, dt.year, 0, 0)[3]
                  for dt in dts]).ravel()
print(f'distance range : {distance.min():6.4f} to {distance.max():6.4f} AU')
print(f'swrad range    : {swrad.min():6.0f} to {swrad.max():6.0f} mol/ (m^2 s)')
distance range : 0.9832 to 1.0167 AU
swrad range : 5985 to 6400 mol/ (m^2 s)
This animation shows the variation in Earth-Sun distance. The eccentricity of the Earth's orbit is really quite low (0.0167) (a circle at 1 AU is shown as a dashed line), so the solar radiation varies by only around 7% over the year. The ipar variation is illustrated by the size of the Sun in the animation, the diameter of which is made proportional to ipar but exaggerated (raised to the 4th power) for illustration.
We need to take account of this annual (7%) variation when we model PAR interception by vegetation, but it is not the strongest cause of variation over the year. Solar radiation is projected onto the Earth surface. We can express this projection by the cosine of the solar zenith angle. When the Sun is directly overhead (zenith angle zero degrees), then the projection is 1. As the zenith angle increases, the projected radiation decreases until it is 0 at a zenith angle of 90 degrees.
The seasons, then, are not primarily a result of Earth-Sun distance variations: they arise mainly because the Earth's rotation axis is tilted by 23.5 degrees (its obliquity).
We will use solar_model to calculate the amount of incoming PAR. To do this, we assume:
that PAR is around 50% of total downwelling (shortwave) radiation,
that the optical thickness (\(\tau\)) of the atmosphere in the PAR region is around 0.2
and that we multiply all of this by \(\cos(sza)\) to project onto a flat surface.
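Putting these three assumptions together, a minimal stand-alone sketch of the iPAR calculation (bypassing solar_model: the TOA photon flux swrad is simply fixed here at roughly its 1 AU value):

```python
import numpy as np

def ipar_simple(sza_deg, swrad=6186.0, tau=0.2, parprop=0.5):
    """Sketch of the iPAR model: PAR fraction x atmospheric
    attenuation exp(-tau/mu) x projection cos(sza).
    swrad: TOA shortwave photon flux, umol photons/(m^2 s)."""
    mu = np.cos(np.deg2rad(sza_deg))
    ipar = swrad * parprop * np.exp(-tau / mu) * mu
    return np.where(mu > 0, ipar, 0.0)  # zero when the Sun is below the horizon

# Sun overhead: attenuation only, exp(-0.2) ~ 0.82
print(ipar_simple(0.0))   # ~2532 umol/(m^2 s)
# Sun at 60 degrees: longer atmospheric path, and projection halves it again
print(ipar_simple(60.0))  # ~1037 umol/(m^2 s)
```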
Let’s look at how that varies over the year, for some different latitudes.
[16]:
year = 2020
# plot solstices and equinoxes
es = [datetime(year, 3, 21), datetime(year, 6, 21),
      datetime(year, 9, 21), datetime(year, 12, 21)]
des = [(e - datetime(year, 1, 1)).days for e in es]
print(des)
doys = np.arange(1, 366)
dts = [datetime(year, 1, 1) + timedelta(int(doy) - 1) for doy in doys]
tau = 0.2
parprop = 0.5
latitudes = [90, 90 - 23.5, 23.5, 0, -23.5, 23.5 - 90, -90]
fig, axs = plt.subplots(2, 1, figsize=(10, 10))
# equinox and solstice as dashed lines
axs[0].plot(np.array([des] * 3).T.ravel(),
            np.array([[0] * 4, [1000] * 4, [0] * 4]).T.ravel(), 'k--')
axs[1].plot(np.array([des] * 3).T.ravel(),
            np.array([[0] * 4, [3100] * 4, [0] * 4]).T.ravel(), 'k--')
for i, lat in enumerate(latitudes):
    swrad = np.array([solar_model(0., [0, 30], np.arange(25),
                                  dt.day, dt.month, dt.year, lat, 0)[3]
                      for dt in dts])
    sza = np.array([solar_model(0., [0, 30], np.arange(25),
                                dt.day, dt.month, dt.year, lat, 0)[1]
                    for dt in dts])
    mu = np.cos(np.deg2rad(sza))
    ipar = swrad * mu * np.exp(-tau / mu) * parprop
    ipar[ipar < 0] = 0
    axs[0].plot(doys, ipar.mean(axis=1), label=f'lat {lat:.1f}')
    axs[1].plot(doys, ipar.max(axis=1), label=f'lat {lat:.1f}')
axs[0].legend(loc='best', ncol=4)
axs[0].set_title('mean iPAR over day')
axs[0].set_xlabel('day of year')
axs[0].set_ylabel('ipar ($PAR_{inc}\,\sim$ $mol\, photons/ (m^2 s)$)')
axs[0].set_xlim(0, doys[-1])
axs[1].legend(loc='best', ncol=4)
axs[1].set_title('Noon iPAR over day')
axs[1].set_xlabel('day of year')
axs[1].set_ylabel('ipar ($PAR_{inc}\,\sim$ $mol\, photons/ (m^2 s)$)')
axs[1].set_xlim(0, doys[-1])
[80, 172, 264, 355]
[16]:
(0.0, 365.0)
Now, let's look at the variation over the day, for some selected days:
[7]:
tau = 0.2
parprop = 0.5
year = 2020
# equinox and solstice
es = [datetime(year, 3, 21), datetime(year, 6, 21),
      datetime(year, 9, 21), datetime(year, 12, 21)]
des = [(e - datetime(year, 1, 1)).days for e in es]
print(des)
latitudes = [90, 90 - 23.5, 23.5, 0, -23.5, 23.5 - 90, -90]
fig, axs = plt.subplots(len(latitudes), 1, figsize=(10, 3.5 * len(latitudes)))
legends = ['March Equinox', 'June Solstice', 'September Equinox', 'December Solstice']
for i, lat in enumerate(latitudes):
    axs[i].set_title(f'latitude: {lat:.1f}')
    axs[i].set_ylabel('ipar ')
    axs[i].set_xlim(0, 24)
    axs[i].set_ylim(0, 2700)
    for j, doy in enumerate(des):
        # datetime
        dt = datetime(year, 1, 1) + timedelta(doy - 1)
        # call solar_model
        jd, sza, distance, solar_radiation = solar_model(
            0., np.array([0., 30]), np.arange(25),
            dt.day, dt.month, dt.year, lat, 0)
        mu = np.cos(np.deg2rad(sza))
        s = solar_radiation * parprop * np.exp(-tau / mu) * mu
        axs[i].plot((jd - jd[0]) * 24, s, label=legends[j])
    axs[i].legend(loc='best')
axs[0].set_title(f'latitude: {latitudes[0]:.1f} iPAR ($mol\, photons/ (m^2 s)$)')
_ = axs[-1].set_xlabel('day hour')
To run our photosynthesis model, we need to know the temperature as well as the iPAR. Temperature is rather complicated to model, so we will use observational data to drive the model, choosing the interpolated CRU climate dataset for this.
You can select this using the getCRU() function (in geog0133/cru.py) for a given latitude, longitude and year (2011 to 2019 inclusive here). This gives access to monthly minimum and maximum temperatures (f['tmn'] and f['tmx'] respectively), as well as cloud cover percentage (f['cld']). The dataset covers the land surface only; if you specify somewhere in the ocean, you will get an interpolated result.
We will use the cloud percentage to reduce iPAR, applying a scale factor of 2/3 (a reduction of 1/3) on fully cloudy days (when f['cld'] is 100) and 1 when there is no cloud:
scale = 1 - f['cld']/300
[17]:
from geog0133.cru import getCRU

f = getCRU(2019, longitude=0, latitude=56)
tmx = f['tmx']
tmn = f['tmn']
cld = f['cld']
plt.figure(figsize=(10, 3))
plt.plot(tmx, 'r', label='tmx / C')
plt.plot(tmn, 'b', label='tmn / C')
plt.plot(cld, 'g', label='cld (percent)')
# irradiance reduced by 1/3 on cloudy days
plt.plot(100 * (1 - cld / 300.), 'k', label='100*scale')
plt.xlabel('month index')
plt.legend(loc='best')
_ = plt.xlim(0, 11)
Since we only have monthly minimum and maximum temperature, we need to estimate the diurnal variation of temperature from these.

* How can we synthesise a diurnal cycle in atmospheric surface temperature from the information we have available?
We can broadly achieve this by scaling normalised iPAR. The iPAR curves above have a diurnal cycle, which we can use to create a daily cycle in temperature. A simple model of diurnal temperature variation, then, is to scale the normalised iPAR cycle between the monthly minimum and maximum temperatures:
Tc = ipar_*(Tmax-Tmin)+Tmin
[24]:
import numpy as np
from datetime import datetime, timedelta

def radiation_simple(latitude, longitude, doy,
                     tau=0.2, parprop=0.5, year=2019):
    '''
    Simple model of solar radiation making call to solar_model(),
    calculating modelled ipar and a diurnal temperature cycle

    Arguments:
        latitude : latitude (degrees)
        longitude: longitude (degrees)
        doy      : day of year (integer)

    Keywords:
        tau      : optical thickness (0.2 default)
        parprop  : proportion of solar radiation in PAR (0.5 default)
        year     : int (2019 default)
    '''
    # Calculate solar position over a day, every 30 mins
    dt = datetime(year, 1, 1) + timedelta(doy - 1)
    jd, sza, distance, solar_radiation = solar_model(
        0., np.array([0., 30.]), np.arange(25),
        dt.day, dt.month, dt.year, latitude, 0,
        julian_offset=f'{year}/1/1')
    mu = np.cos(np.deg2rad(sza))
    ipar = solar_radiation * parprop * np.exp(-tau / mu) * mu  # u mol(photons) / (m^2 s)
    # monthly climate data from CRU
    fd = getCRU(year, longitude=longitude, latitude=latitude)
    Tmax = fd['tmx'][dt.month - 1]
    Tmin = fd['tmn'][dt.month - 1]
    cld = fd['cld'][dt.month - 1]
    scale = 1 - cld / 300.
    print(f'Tmin {Tmin:.2f} Tmax {Tmax:.2f} Cloud {cld:.2f} Scale {scale:.2f}')
    # reduce irradiance by 1/3 on fully cloudy day
    ipar = ipar * scale
    # normalise ipar and scale to the monthly temperature range
    ipar_ = (ipar - ipar.min()) / (ipar.max() - ipar.min())
    Tc = ipar_ * (Tmax - Tmin) + Tmin
    return jd - jd[0], ipar, Tc
The above function can be used to plot an example diurnal temperature cycle:
[25]:
tau = 0.2
parprop = 0.5
year = 2020
latitude = 51
longitude = 0
doy = [35]

fig, ax = plt.subplots(1, 1, figsize=(10, 5))
ax2 = ax.twinx()
jd, ipar, Tc = radiation_simple(latitude, longitude, doy[0])
ax.plot(jd - jd[0], ipar, label=f'ipar doy {doy}')
ax2.plot(jd - jd[0], Tc, '--', label=f'Tc doy {doy}')
# plotting refinements
ax.legend(loc='upper left')
ax2.legend(loc='upper right')
ax.set_ylabel('ipar ($PAR_{inc}\,\sim$ $\mu mol\, photons/ (m^2 s))$)')
ax2.set_ylabel('Tc (C)')
ax.set_xlabel("Fraction of day")
ax2.set_ylim(0, None)
_ = fig.suptitle(f"latitude {latitude} longitude {longitude}")
Tmin 4.20 Tmax 10.90 Cloud 67.70 Scale 0.77
We have now successfully created an example diurnal cycle in atmospheric surface temperature.

* How successful is this synthesis of temperature?
* When, in your experience, do the hottest and coldest surface temperatures occur in a day?
Typically there is a time lag between the peak radiation input and the peak temperature, caused by thermal inertia in the system. We can mimic this effect with a one-sided smoothing filter that 'smears' the temperature response forward in time. The degree of smearing is controlled by the filter width parameter f (e.g. 8 hours). This is implemented in the function splurge:
[26]:
def splurge(ipar, f=8.0, plot=False):
    '''
    Spread the quantity ipar forward in time using a
    1-sided exponential filter of width f * 2.

    Arguments:
        ipar  : array of 1/2 hourly time step ipar values (length 48)

    Keyword:
        f     : filter characteristic width. Filter is exp(-x/(f*2))

    Return:
        ipar_ : normalised, splurged ipar array
    '''
    if f == 0:
        return (ipar - ipar.min()) / (ipar.max() - ipar.min())
    # build filter over large extent
    nrf_x_large = np.arange(-100, 100)
    # filter function f * 2 as f is in hours
    # but data in 1/2 hour steps
    nrf_large = np.exp(-nrf_x_large / (f * 2))
    # 1-sided filter
    nrf_large[nrf_x_large < 0] = 0
    nrf_large /= nrf_large.sum()
    ipar_ = scipy.ndimage.filters.convolve1d(ipar, nrf_large, mode='wrap')
    if plot:
        plt.figure(figsize=(10, 5))
        plt.plot(nrf_x_large, nrf_large)
    return (ipar_ - ipar_.min()) / (ipar_.max() - ipar_.min())
[36]:
# Calculate solar position over a day, every 30 mins
year = 2020
doy = 80
latitude, longitude = 51, 0
dt = datetime(year, 1, 1) + timedelta(doy - 1)
# solar radiation
jd, sza, distance, solar_radiation = solar_model(
    0., np.array([0., 30.]), np.arange(0, 24),
    dt.day, dt.month, dt.year, latitude, longitude)
hour = (jd - jd[0]) * 24
# cosine of solar zenith angle
mu = np.cos(np.deg2rad(sza))
ipar = solar_radiation * parprop * np.exp(-tau / mu) * mu
# show filter
_ = splurge(ipar, f=8, plot=True)
[37]:
# smear ipar with a range of filter lengths f (hours)
for f in np.arange(0, 8):
    ipar_ = splurge(ipar, f=f)
    plt.plot(hour, ipar_)
    print(f'f={f}, peak time shifted from {hour[np.argmax(ipar)]:.1f} hrs '
          f'to {hour[np.argmax(ipar_)]:.1f} hrs')
f=0, peak time shifted from 12.0 hrs to 12.0 hrs
f=1, peak time shifted from 12.0 hrs to 13.0 hrs
f=2, peak time shifted from 12.0 hrs to 13.5 hrs
f=3, peak time shifted from 12.0 hrs to 14.0 hrs
f=4, peak time shifted from 12.0 hrs to 14.5 hrs
f=5, peak time shifted from 12.0 hrs to 15.0 hrs
f=6, peak time shifted from 12.0 hrs to 15.0 hrs
f=7, peak time shifted from 12.0 hrs to 15.0 hrs
A value of f of 8 gives a delay in the peak of around 2-3 hours, which is a reasonable average lag to use to mimic temperature variations. A simple model of diurnal temperature variations, then, is simply to scale this smeared, normalised iPAR by the monthly temperature minimum and maximum values:
Tc = ipar_*(Tmax-Tmin)+Tmin
One problem with this approach is that the variation we achieve in temperature (and iPAR) is significantly less than in reality, which can reduce the apparent variation in CO2 fluxes. In practice, when modellers estimate instantaneous (or short time-integral) values from means, they add random variation to match other sets of observed statistics (including correlations). We ignore that here for simplicity.
[29]:
import numpy as np
from datetime import datetime, timedelta

def radiation(latitude, longitude, doy,
              tau=0.2, parprop=0.5, year=2019, f=8.0):
    '''
    Simple model of solar radiation making call to solar_model(),
    calculating modelled ipar and a smeared diurnal temperature cycle

    Arguments:
        latitude : latitude (degrees)
        longitude: longitude (degrees)
        doy      : day of year (integer)

    Keywords:
        tau      : optical thickness (0.2 default)
        parprop  : proportion of solar radiation in PAR (0.5 default)
        year     : int (2019 default)
        f        : temperature smearing filter characteristic length (hours)
    '''
    # Calculate solar position over a day, every 30 mins
    dt = datetime(year, 1, 1) + timedelta(doy - 1)
    jd, sza, distance, solar_radiation = solar_model(
        0., np.array([0., 30.]), np.arange(25),
        dt.day, dt.month, dt.year, latitude, 0,
        julian_offset=f'{year}/1/1')
    mu = np.cos(np.deg2rad(sza))
    ipar = solar_radiation * parprop * np.exp(-tau / mu) * mu  # u mol(photons) / (m^2 s)
    # monthly climate data from CRU
    fd = getCRU(year, longitude=longitude, latitude=latitude)
    Tmax = fd['tmx'][dt.month - 1]
    Tmin = fd['tmn'][dt.month - 1]
    cld = fd['cld'][dt.month - 1]
    scale = 1 - cld / 300.
    print(f'Tmin {Tmin:.2f} Tmax {Tmax:.2f} Cloud {cld:.2f} Scale {scale:.2f}')
    # reduce irradiance by 1/3 on fully cloudy day
    ipar = ipar * scale
    # smear ipar forward in time and scale to the monthly temperature range
    ipar_ = splurge(ipar, f=f)
    Tc = ipar_ * (Tmax - Tmin) + Tmin
    return jd - jd[0], ipar, Tc
We can now run the driver synthesis for some given latitude/longitude and time period:
[30]:
tau = 0.2
parprop = 0.5
year = 2020
latitude = 51
longitude = 0

fig, ax = plt.subplots(1, 1, figsize=(10, 5))
ax2 = ax.twinx()
# loop over solstice and equinox doys
for doy in [80, 172, 264, 355]:
    jd, ipar, Tc = radiation(latitude, longitude, doy)
    ax.plot(jd - jd[0], ipar, label=f'ipar doy {doy}')
    ax2.plot(jd - jd[0], Tc, '--', label=f'Tc doy {doy}')
# plotting refinements
ax.legend(loc='upper left')
ax2.legend(loc='upper right')
ax.set_ylabel('ipar ($PAR_{inc}\,\sim$ $\mu mol\, photons/ (m^2 s))$)')
ax2.set_ylabel('Tc (C)')
ax.set_xlabel("Fraction of day")
ax2.set_ylim(0, None)
_ = fig.suptitle(f"latitude {latitude} longitude {longitude}")
In this practical, you should have developed an improved understanding of solar radiation and temperature. These are critical drivers for plants, and this sort of understanding is key to appreciating the overall patterns of vegetation growth and to developing models of terrestrial carbon dynamics. Of course, many other factors impinge on actual carbon dynamics, not least human modification of the environment.
In the following lectures and practicals, we will integrate these drivers with process models.
In the previous section, we reviewed some of the important mechanisms for understanding global terrestrial carbon state and dynamics.
In this lecture, we consider strategies for building mathematical models to represent these concepts.
The land surface component of a climate or Earth System model, or of a model used in weather forecasting, is called a 'land surface scheme' (LSS), implemented as a 'land surface model' (LSM). The purpose of such a scheme is to model energy and mass (e.g. water, carbon) flux interactions at the land surface. The main driver of an LSS is generally consideration of the surface energy balance:
\[R_n = S\downarrow (1 - \alpha) + L \downarrow - L \uparrow\]
where \(\alpha\) is the shortwave albedo, \(R_n\) is the net radiation, \(S \downarrow\) is the downwelling shortwave radiation and \(L \downarrow\) and \(L \uparrow\) are the downwelling and upwelling longwave radiation respectively. Although these terms balance globally over the long term, they do not do so locally and these energy dynamics drive the Earth climate system.
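A quick numerical illustration of this balance (all values illustrative, not from any dataset) shows how a change in albedo alone alters \(R_n\):

```python
def net_radiation(S_down, albedo, L_down, L_up):
    """Net radiation R_n = S_down * (1 - albedo) + L_down - L_up, W/m^2."""
    return S_down * (1.0 - albedo) + L_down - L_up

# same incoming radiation over two surfaces with different albedo
S_down, L_down, L_up = 600.0, 300.0, 400.0  # W/m^2, illustrative
print(round(net_radiation(S_down, 0.12, L_down, L_up), 1))  # darker forest: 428.0
print(round(net_radiation(S_down, 0.25, L_down, L_up), 1))  # brighter grassland: 350.0
```

The darker surface absorbs more shortwave energy, leaving more net radiation to be partitioned into the turbulent fluxes discussed below.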
Source: Pitman (2003)
In the above figure, showing global averages of components of the energy balance, the solar radiation is represented as 100 units. Around 31 units are exchanged as sensible and latent heat fluxes (\(H\) and \(\lambda E\) respectively), but the properties of the land surface significantly affect the way these fluxes are partitioned. The land surface also acts to store energy on various timescales (diurnal, seasonal, and longer).
Here, \(E\) is evapotranspiration (water loss from the soil, from leaf surfaces and from plant transpiration) (\(kg m^{-2} s^{-1}\)), and \(\lambda\) is the latent heat of vaporisation (\(J kg^{-1}\)).
The net radiation \(R_n\) must be balanced by \(H\) and \(\lambda E\) and by the soil flux \(G\) and the chemical energy flux \(F\) stored in photosynthesis:
\[R_n = H + \lambda E + G + F\]
We can already see from these equations that changes in e.g. albedo \(\alpha\) are likely to alter \(H\) and \(\lambda E\) by altering \(R_n\).
Source: Pitman (2003)
Being able to model the partition between \(H\) and \(\lambda E\) is important in climate / Earth system models because lower \(\lambda E\) implies a lower flux of water vapour to the atmosphere which implies lower cloudiness and rainfall. Lower \(H\) on the other hand tends to cool the planetary boundary layer and reduce convection.
\(H\) and \(\lambda E\) are turbulent heat fluxes (see Monteith and Unsworth, 1990, pp. 123-137 for more details), influenced by the turbulence of the airflow in the planetary boundary layer, the lowest part of the atmosphere, which is influenced by the aerodynamic roughness of the surface.
\(H\) can be represented as a quasi-diffusive process:
\[H = \frac{T_s - T_r}{r_a} \rho c_p\]
where \(T_s\) is the surface temperature, \(T_r\) is a reference temperature above the surface, \(r_a\) is the aerodynamic resistance , \(\rho\) is the density of air, and \(c_p\) is the specific heat of air.
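A minimal numeric sketch of this form (with typical near-surface values \(\rho \approx 1.2\ kg\,m^{-3}\) and \(c_p \approx 1005\ J\,kg^{-1}K^{-1}\); the temperatures and resistance are illustrative):

```python
def sensible_heat(T_s, T_r, r_a, rho=1.2, c_p=1005.0):
    """Sensible heat flux H = (T_s - T_r) / r_a * rho * c_p, W/m^2."""
    return (T_s - T_r) / r_a * rho * c_p

# surface 3 K warmer than reference air, aerodynamic resistance 30 s/m
print(round(sensible_heat(T_s=25.0, T_r=22.0, r_a=30.0), 1))  # 120.6 W/m^2
```

Note that halving \(r_a\) (a rougher surface) doubles the flux for the same temperature difference, which is the mechanism behind the roughness feedback discussed below.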
The latent heat flux is a more complex process, but can be represented (e.g. Sellers, 1992) as:

\[\lambda E = \frac{e^{*}(T_s) - e_r}{r_a + r_s} \frac{\rho c_p}{\gamma}\]
where \(e^* (T_s)\) is the saturated vapour pressure at \(T_s\), \(e_r\) is the vapour pressure at a reference height, \(\gamma\) is the psychrometric constant and \(r_s\) is the bulk surface resistance to the transfer of water from the surface to the air.
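With those definitions, the bulk latent heat flux \(\lambda E = \frac{\rho c_p}{\gamma}\frac{e^{*}(T_s) - e_r}{r_a + r_s}\) can be sketched numerically. The Tetens formula used below for \(e^{*}\) is a standard approximation, and the vapour pressures and resistances are illustrative values:

```python
import math

def esat(T_c):
    """Saturated vapour pressure (Pa) via the Tetens approximation."""
    return 610.8 * math.exp(17.27 * T_c / (T_c + 237.3))

def latent_heat(T_s, e_r, r_a, r_s, rho=1.2, c_p=1005.0, gamma=66.0):
    """lambda E = (rho c_p / gamma) (esat(T_s) - e_r) / (r_a + r_s), W/m^2.
    gamma is the psychrometric constant, ~66 Pa/K near sea level."""
    return (rho * c_p / gamma) * (esat(T_s) - e_r) / (r_a + r_s)

# well-watered surface at 25 C, moderately dry air (e_r = 1500 Pa)
print(round(latent_heat(T_s=25.0, e_r=1500.0, r_a=30.0, r_s=70.0)))  # ~305 W/m^2
```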
Simplified representation of the (bulk) surface and aerodynamic resistances for water vapour flow. Source: FAO
The aerodynamic resistance is inversely dependent upon the wind speed and the logarithm of the surface roughness length, which, in turn, is a function of the drag properties of the land surface (Pitman, 2003).
The roughness length over vegetated areas is strongly influenced by vegetation height, so if the vegetation is removed or replaced by shorter vegetation the roughness will decrease: a higher roughness length (e.g. over forest) permits a greater exchange of turbulent heat fluxes for given meteorological conditions than a lower one (e.g. over grass).
A roughness (positive – self-enhancing) feedback can exist if vegetation conditions are altered (e.g. removing forest and replacing with grass):
Source: Pitman (2003)
The factors that affect turbulent energy flows between the land surface and the atmosphere also affect fluxes of materials and gases, a significant one being the flux of CO2 between plants and the atmosphere. This can be given as:
\[F = \frac{c_i - c_a}{r_{st} + r_a}\]
where \(F\) is the CO2 flux density (\(kg m^{-2} s^{-1}\)), \(c_i\) is the internal leaf CO2 concentration, and \(c_a\) is the ambient CO2 concentration. Here, \(r_{st}\) is the stomatal resistance (see below), a measure of the difficulty (or ease) with which the vegetation transpires. The stomatal resistance is not the same as the bulk surface resistance \(r_s\) above, as \(r_s\) includes the resistance to moisture transfer from the soil and leaf surface. For a crop or grassland, the bulk surface resistance can be estimated as (FAO):
\[r_s = \frac{r_{st}}{LAI_{active}}\]
where \(LAI_{active}\) is the active (sunlit) leaf area index, and \(r_{st}\) is the ‘bulk’ stomatal resistance of a well-illuminated leaf. Note that this approximation does not include evaporative fluxes from the leaf surface or soil however: in land surface schemes this is usually treated more carefully and involves explicit calculation of the individual resistance terms that combine to make the bulk surface resistance.
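These resistance relationships are simple to compute. In the sketch below, the FAO reference values for clipped grass (\(r_{st} \approx 100\ s\,m^{-1}\), \(LAI_{active} \approx 1.44\)) and the CO2 concentrations are illustrative assumptions:

```python
def bulk_surface_resistance(r_st, lai_active):
    """r_s = r_st / LAI_active, s/m."""
    return r_st / lai_active

# FAO reference grass surface
print(round(bulk_surface_resistance(100.0, 1.44), 1))  # 69.4 s/m

def co2_flux(c_i, c_a, r_st, r_a):
    """F = (c_i - c_a) / (r_st + r_a): negative for uptake when c_i < c_a."""
    return (c_i - c_a) / (r_st + r_a)

# a photosynthesising leaf draws c_i below ambient c_a (kg/m^3, illustrative)
print(co2_flux(c_i=5.0e-4, c_a=7.3e-4, r_st=100.0, r_a=30.0))  # ~ -1.8e-6 kg/(m^2 s)
```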
Following from the considerations of water fluxes above, the water balance can be represented in a LSS:
\[P = E + R_{drain} + R_{surf} + \Delta S\]
where \(P\) is water input to the system (precipitation and snow melt generally, but also translocation of water e.g. in irrigation), \(E\) is evapotranspiration (water from the surface to the atmosphere through evaporation from the soil and leaf surfaces and transpiration from leaves), \(R_{drain}\) is a slow drainage component of water loss from an area, \(R_{surf}\) is surface runoff, and \(\Delta S\) is the change in soil moisture storage.
Note the use of \(E\) here when considering water fluxes, but \(\lambda E\) above when considering energy fluxes.
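A trivial closure check of this water balance (annual totals in mm; values illustrative):

```python
def storage_change(P, E, R_drain, R_surf):
    """Change in soil moisture storage implied by the water balance:
    delta_S = P - E - R_drain - R_surf (mm per year here)."""
    return P - E - R_drain - R_surf

# illustrative annual totals for a temperate catchment
print(storage_change(P=800.0, E=500.0, R_drain=150.0, R_surf=140.0))  # 10.0 mm gained
```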
A LSS then, implemented as a LSM, models the energy and water fluxes at the land surface and provides an interface of these to atmospheric modelling.
Usually, this will be done for a set of grid cells, where the inputs and outputs of each cell are considered separately. There is potential for a lateral flow of water between cells due to the runoff terms in the water balance, and also potentially for snowmelt or other water inputs to the cell.
The pedigree of most models of vegetation dynamics used to simulate carbon flows in terrestrial ecosystem models (TEMs) comes from attempts, from the 1960s onwards (Bouman et al., 1996), to describe the processes controlling crop growth in computer code. Such models express current knowledge of process as mathematical equations, coupled in (computer) simulation models. We will therefore consider crop models to outline the basic processes involved.
To understand the basic functioning of such models, we can look at the system diagram of Rabbinge (1993):
“The relationship among potential, attainable and actual yield and defining, limiting and growth reducing factors (Rabbinge, 1993).” Source: Bouman et al., 1996
A set of defining factors (plant type, CO2, radiation, etc.) describe how the vegetation would grow under conditions where it is not limited by water or nutrient constraints. It will of course respond to variations in the environmental conditions (CO2, radiation, temperature). This provides a model of the potential growth of the vegetation under these conditions. The attainable growth is then the potential growth modulated by limiting factors such as water or nutrient availability. The actual growth is then further modulated by pests, disease etc., or, when considering carbon stocks, perhaps other reducing factors such as grazing.
One might think at first sight that temperature (or even radiation) ought to be a 'limiting' factor rather than a 'defining' term: if plants have insufficient warmth or light, they will grow sub-optimally. The conceptual difference really lies in how the modelling of process takes place. For crops, the 'limiting resources' (water, nitrogen etc.) are things that intervention by the farmer can affect (e.g. irrigation or fertilisers: yield-increasing measures in the diagram). In crop models it therefore makes sense to separate the environmental factors that one has little ability to control once the crop is planted (except by expensive, experimental measures such as CO2 enrichment in FACE experiments) from those typically employed to improve yield. Similarly, the reducing factors are things in which the farmer can intervene (yield-protecting measures). Crop models tend to separate the processes in this way, and this feeds through into the philosophy of many of the more general ecosystem/carbon models.
The models we are interested in here are dynamic, as the state variables develop as a function of time. Many of these models are hierarchical because they consider the (eco)system as a hierarchy of organisational units (cell, leaf, canopy etc.) and the model then integrates these lower level processes. We can also call these models ‘state variable based’ in that they represent and store the dynamics of some set of weights (state variables, such as leaf area) that are updated at each time step of the system by rate variables (e.g. carbon flux).
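This state-variable/rate-variable structure can be sketched as a single pool updated at each time step. The toy model below is not any particular TEM: it has one carbon pool, a fixed assimilation input and first-order turnover, purely to show the update pattern:

```python
def run_pool_model(n_steps, dt=1.0, assimilation=2.0, turnover=0.01, c0=0.0):
    """Euler update of one state variable (a carbon pool, g C/m^2)
    by its rate variable (input flux minus first-order loss)."""
    c = c0
    for _ in range(n_steps):
        rate = assimilation - turnover * c  # net rate, g C/(m^2 day)
        c = c + rate * dt                   # state update
    return c

# the pool approaches its equilibrium, assimilation/turnover = 200 g C/m^2
print(round(run_pool_model(1000), 1))  # 200.0
```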
One main difference between crop growth models and those we will more generally consider in modelling terrestrial carbon are that the focus on the former is on estimating yield from the crop, rather than other terms that may be important in carbon modelling such as total CO2 stocks or fluxes. In a general sense though, the structure of most such models is similar.
“Diagram of the relations in a typical School of de Wit crop growth model (SUCROS) for potential production. Boxes indicate state variables, valves rate variables. circles auxiliary variables, solid lines (arrows) the flow of matter and dotted lines the flow of information.” Source: Bouman et al., 1996
The diagram above shows a typical crop model structure, although as we have said this is similar for most ecosystem models.
The main state variable here is the pool of structural biomass, which is also represented as a set of pools in different plant organs (leaves, stems, roots and storage organs). The fundamental process within such a model, then, is the production of this pool of structural biomass and its allocation to organ pools. The other state variable here is the development stage, which strongly affects the allocation rate from the 'structural biomass' pool to the different organs. It also impacts plant structure, although this is not generally explicitly considered in such models.
The biogeochemical processes represented here are photosynthesis, respiration (maintenance and growth) and the development rate which controls the ‘stage of development’ of the crop. Note that only states and processes directly involving the plant are considered in such a crop growth model (i.e. we do not consider what happens to/in the soil).
There have been several generations of LSMs. The 'first generation' (Pitman, 2003), from the late 1960s, treated the basic processes described above and was a major step forward in climate modelling. These models used simple bulk aerodynamic transfer formulations, uniform and prescribed surface parameters (including water-holding capacity, albedo and roughness length) and a static representation of vegetation, and tended not to include canopy resistance in the estimation of surface resistance.
One other weakness of these models was that they did not really provide a framework for some of the other flux transfers that were found to be important in climate models, such as CO2 flux.
There were various attempts to add complexity to these models, although this was not always done in a systematic way that demonstrated improved model performance from this added complexity (Pitman, 2003).
Source: Pitman (2003)
Second generation models
From the late 1970s, new concepts were added to these models such as multi-layer (two initially) representations of soil for moisture and temperature calculations, and the representation of vegetation as a single bulk layer.
Source: Pitman (2003)
These models represented the vegetation-soil system as something that interacts with the atmosphere, and had more complex representations of albedo (e.g. splitting the downwelling component into visible wavelengths, strongly absorbed by vegetation, and longer wavelengths). A significant advance in these models was the incorporation of satellite observations (e.g. Sellers et al., 1994). Other advances included the inclusion of momentum transfer concepts and some form of explicit biophysical control on evaporation (Pitman, 2003), connecting CO2 uptake and water loss through stomatal opening.
Most of these models, then, have a representation of stomatal conductance \(g_{st}\), the reciprocal of stomatal resistance \(r_{st}\), which, based on empirical evidence, is generally modelled as:
\[g_{st} = F(PAR) [ f(\delta e) f(T) f(\psi_l)]\]
where \(PAR\) is the 'photosynthetically active radiation', i.e. the downwelling shortwave radiation at visible wavelengths where it is absorbed by plants for use in photosynthesis, \(\delta e\) is the humidity deficit, \(T\) is temperature, and \(\psi_l\) is the leaf water potential.
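The multiplicative structure of this 'Jarvis-type' model is easy to sketch. The individual response functions below are hypothetical simple forms scaled to 0-1 (the \(\psi_l\) term is omitted for brevity), chosen only to show how the environmental scalars combine:

```python
import numpy as np

def g_stomatal(par, d_e, T, g_max=0.01):
    """Jarvis-type stomatal conductance (m/s): g_max scaled by
    hypothetical 0-1 response functions of PAR, humidity deficit
    d_e (Pa) and temperature T (C)."""
    f_par = par / (par + 200.0)                # saturating light response
    f_de = np.clip(1.0 - d_e / 4000.0, 0, 1)   # stomata close as air dries
    f_T = np.clip(1.0 - ((T - 25.0) / 15.0) ** 2, 0, 1)  # optimum at 25 C
    return g_max * f_par * f_de * f_T

# bright, humid, warm conditions: conductance approaches g_max
print(g_stomatal(par=1500.0, d_e=500.0, T=25.0))
# hot, dry afternoon: stomata close
print(g_stomatal(par=1500.0, d_e=3000.0, T=38.0))
```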
Another feature of these LSSs is their more complex soil moisture parameterisations and the incorporation of relatively sophisticated snow sub-models.
One of the major advances of the so-called third generation models, developed in the 1990s and beyond, is their connection of leaf stomatal conductance and carbon assimilation by leaves. In second generation models, \(g_{st}\) was only used to model transpiration, but it now became a key concept in estimating canopy photosynthesis, using the approach of Farquhar et al. (1980) and Farquhar and von Caemmerer (1982). Details of this model are dealt with below, but the essentials of the method are a semi-mechanistic model of stomatal conductance derived from the net leaf assimilation rate \(A_n\), with \(A_n\) calculated as the minimum of the potential assimilation rates under different limitations, minus leaf respiration.
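The essence of that calculation can be written in a few lines. This is a caricature of the Farquhar scheme: the two limiting rates are supplied here as numbers rather than computed from the biochemical equations:

```python
def net_assimilation(A_c, A_j, R_d):
    """Net leaf assimilation A_n: the minimum of the potential rates under
    different limitations (e.g. Rubisco-limited A_c, light-limited A_j)
    minus leaf (dark) respiration R_d. Units: umol CO2/(m^2 s)."""
    return min(A_c, A_j) - R_d

# light-limited (e.g. low light): A_j sets the rate
print(net_assimilation(A_c=20.0, A_j=8.0, R_d=1.5))   # 6.5
# Rubisco-limited at high light: A_c sets the rate
print(net_assimilation(A_c=20.0, A_j=30.0, R_d=1.5))  # 18.5
```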
Source: Pitman (2003)
Typical methods for scaling these processes from the leaf to canopy include assuming that leaf nitrogen concentrations (and the photosynthetic enzyme Rubisco) are proportional throughout the canopy (e.g. Sellers et al., 1992b).
Since carbon assimilation is calculated in these models, it is now possible to consider other canopy processes such as the allocation of carbon to different pools (leaves, wood etc.) and also to consider the turnover of these pools in emitting carbon to the atmosphere (e.g. accumulation or loss of soil carbon). It also provides the link to grow vegetation dynamically rather than specifying e.g. plant LAI. Thus, these models have the potential to respond to climate change due to e.g. increased CO2 concentrations, as well as variations in water (and to a limited extent, nutrient) availability.
As Pitman (2003) stresses, the most identifiable element of these third generation models is their ability to model carbon.
Source: Pitman (2003)
One particular feature one can note is the concentration of national resources into ‘national’ land surface schemes, perhaps because of the way these models are tied to national meteorological (as well as climate modelling) efforts. Such land surface schemes include `JULES (Joint UK Land Environment Simulator) <https://jules.jchmr.org>`_ in the UK, ORCHIDEE in France, and JSBACH (Jena Scheme for Biosphere Atmosphere Coupling in Hamburg) in Germany.
In this section, we have outlined the main processes in land surface schemes and models and highlighted the interplay of radiation and water.
We have introduced some core concepts in vegetation processes, in the context of crop models, and seen how vegetation growth can be modelled as a potential amount of carbon assimilation that is then limited by factors such as water and nutrient availability, as well as being reduced by pests, disease etc.
We have also traced the development of various land surface schemes, highlighting that the current suite of models importantly incorporates carbon assimilation and allocation and allows dynamic vegetation to be modelled.
Fisher, J.B., Huntzinger, D.N., Schwalm, C.R., Sitch, S. (2014) Modeling the Terrestrial Biosphere, Annual Review of Environment and Resources, 39:1, 91-123
Bouman, B.A.M., van Keulen, H., van Laar, H.H., Rabbinge, R. (1996) The School of de Wit crop growth simulation models: A pedigree and historical overview, Agricultural Systems, 52(2), 171-198
Sellers PJ. 1992. Biophysical models of land surface processes. In Climate System Modelling, Trenberth KE (ed.). Cambridge University Press.
D. B. Clark, L. M. Mercado, S. Sitch, C. D. Jones, N. Gedney, M. J. Best, M. Pryor, G. G. Rooney, R. L. H. Essery, E. Blyth, O. Boucher, R. J. Harding, C. Huntingford, and P. M. Cox (2011) The Joint UK Land Environment Simulator (JULES), model description: Part 2: Carbon fluxes and vegetation dynamics, Geosci. Model Dev., 4, 701-722, 2011, doi:10.5194/gmd-4-701-2011
Pitman, A.J. (2003) The Evolution Of, And Revolution In, Land Surface Schemes Designed For Climate Models, Int. J. Climatol. 23: 479-510
In this course, our focus is on linking measurements from Earth Observation and other sources with models of terrestrial carbon at regional and global scales. The motivation for models here then is to express current understanding of the controls on carbon dynamics as embedded in Earth System / Terrestrial Ecosystem models. The role of observations is to test and constrain these models to enable: (i) monitoring of terrestrial carbon dynamics; (ii) improved prognostic models. The main focus of the modelling and monitoring is on Net Primary Productivity (NPP).
We can distinguish several types of models: terrestrial biogeochemical models (TBMs), which simulate fluxes of carbon, water and nitrogen coupled within terrestrial ecosystems; and dynamic global vegetation models (DGVMs), which further couple these processes interactively with changes in ecosystem structure and composition (competition among different plant functional types) (Prentice et al., 2001).
A particular class of models, known as production efficiency models (PEMs), will be examined here, particularly because they can be closely aligned with global observations from satellites. Beyond these, we will concentrate on DGVMs, as these are the models most commonly used in large LSS modelling efforts.
Dynamic global vegetation models (DGVMs) seek to model global biogeochemical fluxes and vegetation dynamics to improve understanding of the dynamics of the terrestrial biosphere and its interactions with other components of the Earth system. They are generally process-based models, in that they attempt to develop a view of the system by modelling the underlying processes.
Source: PIK
The main components of a DGVM are: establishment, productivity and competition for resources, resource allocation, growth, disturbance, and mortality. DGVMs can be forced, i.e. run with trends of CO2, climate and land use from observations or scenarios, or run interactively within climate models to analyse feedbacks.
Clearly many simplifications have to be made to do this sort of modelling at a global scale. This is partly from having to have suitable sets of parameters to describe the processes globally (model parameters) and partly for computational reasons. The trend is to include more complexity as time goes by in these models, as computational resources increase and our understanding of processes and our ability to parameterise the models improves.
The ability to model vegetation dynamics is an important aspect of DGVMs in that it allows for prognostic and paleo use; it also means that the design of DGVMs is geared towards modelling potential vegetation. Anthropogenic influences, such as changes in land use, are then incorporated by forcing these effects (e.g. prescribing land cover/PFT) on top of the dynamic (‘natural’/‘potential’) modelling.
DGVMs have been developed from the 1990s in response to the need for models to investigate the responses of vegetation to environmental change at time scales from seconds to centuries and from the local to global scale (Woodward and Lomas, 2004). A non-exclusive list of DGVMs developed in the 1990s and early 2000s is:
LPJ - Germany, Sweden
IBIS - Integrated Biosphere Simulator - U.S.
MC1 - U.S.
HYBRID - U.K.
SDGVM - U.K.
SEIB-DGVM - Japan
TRIFFID - U.K.
VECODE - Germany
CLM-DGVM - U.S.
Peng (2000) provides a useful review of such models as they stood in the year 2000, which students should follow up for background information.
An overview of the main characteristics of some of the available DGVMs is provided in the table below:
One important abstraction made in DGVMs is the idea of plant functional types (PFTs). This is a way in which global parameterisations can be simplified, by assuming that the responses to resources and climate of groups of plant types will be similar, so they can be lumped together. It recognises that we cannot model all plant species at a global scale and is a pragmatic response to this, defining a more limited set of functional types that can be grouped together.
One might try to do this based on a simple biome description, but one soon finds that many difficulties arise when considering biomes with complex structures and mixtures, such as savanna or mixed forest where different plants in the biome have very different responses to light, different phenologies etc.
Box (1996) suggests the following requirements for PFTs. They should:
represent the world’s most important plant types;
characterize them through their functional behavior;
and provide complete, geographically representative coverage of the world’s land areas
Box also discusses the various schools of thought on how these groupings should be arrived at.
One set of PFTs arrived at by Box is:
Source: Box (1996)
Other groupings of PFTs have been used, for instance that suggested by Bonan et al. (2002), one motivation here being linking to land cover classes defined in global land cover datasets supplemented by climatic zoning based on precipitation, and temperature/Growing degree days (GDD, see below):
Source: Bonan et al. (2002)
Quaife et al. (2008) examined the impact of different land cover schemes for applying the mapping from land cover to PFT:
Proportions (from left to right) of the C3 grass, crop, deciduous broadleaf and evergreen needleleaf Plant Functional Types (PFTs) across Great Britain according to (a) LCM2000 (b) GLC2000 (c) MODIS-BGC (d) MODIS-PFT (e) MODIS-UMD (f) MODIS-IGBP (g) MODIS-LAI/fAPAR. Source: Quaife et al. (2008)
For context, the values of GPP, NEP and NPP are given for the UK, assuming the land surface is covered with a single PFT.
Differences in NPP (\(gC m^{-2}\)) are shown:
So, at any location, the impact of land cover uncertainties and the mapping used from land cover to PFT can have a significant impact on model calculations of carbon fluxes (the range of differences in NPP is 133 \(gC m^{-2}\)).
The current status of these models then includes a set of parameters describing vegetation functioning and other characteristics for each of these broad PFT classes (e.g. those used at NCAR).
There are many criticisms that can be applied to this approach and there are clearly complexities (those to do with the assignment of a particular PFT to a location, and those concerning the parameterisation of a PFT across broad areas for example), but it should be seen as a pragmatic response to the need to run the DGVMs globally.
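In a model, such a PFT parameter set amounts to little more than a lookup table keyed by PFT, applied uniformly to all grid cells assigned that PFT. The names and values below are purely illustrative, not taken from any particular model:

```python
# Hypothetical PFT parameter table: names and values are illustrative
# only, not drawn from any published parameter set.
PFT_PARAMS = {
    "broadleaf_deciduous":  {"sla_m2_per_kg": 16.0, "leaf_lifespan_months": 6},
    "needleleaf_evergreen": {"sla_m2_per_kg": 5.0,  "leaf_lifespan_months": 36},
    "c3_grass":             {"sla_m2_per_kg": 25.0, "leaf_lifespan_months": 4},
}

def pft_parameter(pft, name):
    """Look up a single parameter for a PFT, as a DGVM does when
    applying one parameter set across every grid cell of that PFT."""
    return PFT_PARAMS[pft][name]
```

The criticisms above follow directly from this structure: a single number per PFT necessarily ignores all within-PFT trait variation.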
One direction that this area of DGVM research is going in is in developing and using a global plant trait database: TRY which can allow the sorts of data that field ecologists measure on plant traits to be linked to what could be used in DGVMs (Kattge et al., 2011).
Source: Kattge et al., 2011
Because of the large number of samples involved, distributions of these traits can now be more fully explored. Early interesting findings include analysis of the fraction of variance explained by PFT or species for key traits:
Source: Kattge et al., 2011
Here we can see, for example, that for specific leaf area (SLA, one-sided leaf area per leaf dry mass) about 40% of the variation in SLA that exists in the database occurs between PFTs; the variation in the mean between PFTs is also similar to the within-PFT variation for this trait.
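The ‘fraction of variance explained by PFT’ can be computed with a standard one-way ANOVA decomposition (a simplified version of the partitioning behind the TRY analyses, shown here for illustration):

```python
import numpy as np

def between_pft_variance_fraction(groups):
    """Fraction of total trait variance explained by PFT membership:
    between-group sum of squares over total sum of squares.
    `groups` maps PFT name -> array of trait values."""
    arrays = [np.asarray(v, dtype=float) for v in groups.values()]
    values = np.concatenate(arrays)
    grand_mean = values.mean()
    ss_total = ((values - grand_mean) ** 2).sum()
    ss_between = sum(a.size * (a.mean() - grand_mean) ** 2 for a in arrays)
    return ss_between / ss_total
```

A value near 1 means PFTs capture most of the trait variation; a value near 0 means almost all variation is within PFTs, so a single per-PFT parameter is a poor summary.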
Source: Kattge et al., 2011
The figure above shows frequency distributions of SLA for different PFTs. For some (e.g. needleleaf deciduous) the distribution is quite narrow (though with a relatively small sample size), but for most it is wide. Interestingly, the figure also shows the values of SLA used in different global vegetation models (in red), revealing the very wide range of values assumed across different models.
Prentice (2011) provides a critique of current efforts in DGVMs, arguing particularly for more and better model benchmarking. This can be done by comparing model outputs (not just carbon stocks and fluxes, but other measures such as vegetation greenness or runoff) with measurements. A significant international effort to coordinate such benchmarking is the International Land Model Benchmarking project (iLAMB).
Our current understanding of how good these models are comes from sources such as the Carbon-Land Model Intercomparison Project (C-LAMP), which included an analysis of the biogeochemical models CASA′ and CN (Randerson et al., 2009). Among these models, global carbon sinks for the 1990s differed by a factor of 2, and the magnitude of net carbon uptake during the growing season in temperate and boreal forest ecosystems was under-estimated, probably due to delays in the timing of maximum LAI. In the tropics, the models overestimated carbon storage in woody biomass based on comparison with datasets from the Amazon.
Another source of information is straight model intercomparisons such as that of Sitch et al. (2008), who compared five DGVMs. Whilst such work does not include much comparison with measurements, it is useful for understanding the agreement and divergence of current DGVMs under future climate scenarios.
One comparison in this study was with the best current estimates of global land carbon budgets for the 1980s and 1990s:
Source: Sitch et al. (2008)
Major findings are:
All model estimates are within the range of current knowledge of these budgets and relatively close to the mean IPCC values. The models were also in general agreement about the cumulative land uptake over the last 50 years.
They also simulated the correct sign of response to ENSO events but differed markedly in magnitude.
All five DGVMs show a similar response of productivity to elevated atmospheric CO2, in agreement with field observations.
The DGVMs are in less agreement in the way they respond to changing climate.
All models suggest a release of land carbon in response to climate, implying a significant positive climate-carbon cycle feedback in each case. This response is mainly due to a reduction in NPP and a decrease in soil residence time in the tropics and extra-tropics, respectively.
Source: Sitch et al. (2008)
Zoom in of lower right panel of above. Source: Sitch et al. (2008)
We see, for example, significant scatter in the year-to-year responses of the models when considering land-atmosphere carbon exchange, particularly for ENSO years, although the general trends are similar (hence the level of agreement noted above for decadal or longer integrals).
Source: Sitch et al. (2008)
Significant disagreement exists between the models on NPP response to climate in the tropics and soil respiration response to climate in the extra-tropics. In the above figure we see the NPP response to elevated CO2 and the sensitivity of some other terms to temperature.
Uncertainty in future cumulative land uptake associated with land processes is large and equivalent to over 50 years of anthropogenic emissions at current levels.
We can see from these various analyses that whilst there are certain (important) areas of agreement among current DGVMs, significant uncertainty remains in estimating current carbon budgets and predicting future ones. Improving these models is of great importance to understanding possible climate changes and impacts.
In this section, we have reviewed DGVMs and their underlying concepts. We have seen that they represent a compromise between understanding and representation of process at the scales considered, and compute power and data requirements. Fundamental to their development have been simplifications that allow them to be used globally, over a large range of timescales. This includes the idea of Plant Functional Types (PFTs) to represent the responses of different ‘types’ of vegetation to their environment. DGVMs were originally developed to represent ‘potential vegetation’, so management constraints have to be included on top of the models to be able to compare them with measurements.
Scheiter, S., Langan, L. and Higgins, S.I. (2013), Next‐generation dynamic global vegetation models: learning from community ecology. New Phytol, 198: 957-969. https://doi.org/10.1111/nph.12210
Box, E.O. (1996) Plant Functional Types and Climate at the Global Scale, Journal of Vegetation Science, 7(3), 309-320
Sitch S, Huntingford C, Gedney N, Levy PE, Lomas M, Piao S, Betts R, Ciais P, Cox P, Friedlingstein P, Jones CD, Prentice IC, Woodward FI (2008) Evaluation of the terrestrial carbon cycle, future plant geography and climate-carbon cycle feedbacks using 5 Dynamic Global Vegetation Models (DGVMs). Global Change Biology 14:2015-2039, doi:10.1111/j.1365-2486.2008.01626.x
Randerson, James T., Forrest M. Hoffman, Peter E. Thornton, Natalie M. Mahowald, Keith Lindsay, Yen-Huei Lee, Cynthia D. Nevison, Scott C. Doney, Gordon Bonan, Reto Stockli, Curtis Covey, Steven W. Running, and Inez Y. Fung. September 2009. Systematic Assessment of Terrestrial Biogeochemistry in Coupled Climate-Carbon Models. Global Change Biology, 15(9):2462-2484. doi:10.1111/j.1365-2486.2009.01912.x. See also Supporting Information.
D. B. Clark, L. M. Mercado, S. Sitch, C. D. Jones, N. Gedney, M. J. Best, M. Pryor, G. G. Rooney, R. L. H. Essery, E. Blyth, O. Boucher, R. J. Harding, C. Huntingford, and P. M. Cox (2011) The Joint UK Land Environment Simulator (JULES), model description: Part 2: Carbon fluxes and vegetation dynamics, Geosci. Model Dev., 4, 701-722, 2011, doi:10.5194/gmd-4-701-2011
Prentice et al. The Carbon Cycle and Atmospheric Carbon Dioxide, 2001, IPCC AR3 WG1
Peng, C. (2000) From static biogeographical model to dynamic global vegetation model: a global perspective on modelling vegetation dynamics, Ecological Modelling, Volume 135, Issue 1, 25 November 2000, Pages 33–54
Kattge, J., et al. (2011), TRY: a global database of plant traits. Global Change Biology, 17: 2905-2935. doi: 10.1111/j.1365-2486.2011.02451.x
Cramer W, Kicklighter DW, Bondeau A, Moore Iii B, Churkina G, Nemry B, Ruimy A, Schloss AL: Comparing global models of terrestrial net primary productivity (NPP): Overview and key results. Global Change Biology 1999, 5:1-15.
Sellers PJ, Tucker CJ, Collatz GJ, Los SO, Justice CO, Dazlich DA, Randall DA. 1994. A global 1 degree by 1 degree NDVI data set for climate studies. Part 2: the generation of global fields of terrestrial biophysical parameters from the NDVI. International Journal of Remote Sensing 15: 3519-3545.
Sellers PJ. 1992. Biophysical models of land surface processes. In Climate System Modelling, Trenberth KE (ed.). Cambridge University Press.
One class of (data-driven) models used to understand and measure terrestrial carbon dynamics is Production Efficiency Models (PEMs). In this section, we review this class of model and set it in the context of the course objectives.
In the production efficiency approach to modelling NPP (or GPP) (the ‘Monteith’ approach; Monteith, 1972, 1977), a linear relationship is assumed between (non-limited) canopy photosynthesis and absorbed PAR (the fraction of downwelling Photosynthetically Active Radiation that the canopy absorbs). The model can be written:

\[GPP = \epsilon \cdot C_{drm} \cdot f_{PAR} \cdot PAR \cdot \prod_{i}{s_i}\]

\[NPP = GPP - R_a\]

Here, \(GPP\) is the Gross Primary Productivity (\(g C m^{-2}\)), \(\epsilon\) is the ‘light use efficiency’ (LUE) (g dry matter per MJ PAR), \(C_{drm}\) is the carbon content of dry matter (0.45 \(gC g^{-1}\)), \(f_{PAR}\) is the fraction of PAR absorbed by the canopy (also known as fAPAR), and \(PAR\) is Photosynthetically Active Radiation (\(MJ m^{-2}\)), i.e. the downwelling shortwave radiation multiplied by the PAR fraction of direct and diffuse illumination taken together.
The scalars \(s_i\) represent multiplicative environmental constraints that are typically meteorologically derived (i.e. limiting factors).
\(NPP\) is the Net Primary Productivity (\(g C m^{-2}\)) and \(R_a\) is the autotrophic respiration (\(g C m^{-2}\)).
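A minimal sketch of this production-efficiency calculation follows. The LUE value and the treatment of autotrophic respiration as a fixed fraction of GPP are illustrative assumptions (real PEMs handle \(R_a\) in various ways):

```python
def pem_npp(par, fapar, scalars=(1.0, 1.0), eps=1.8,
            c_drm=0.45, ra_frac=0.5):
    """Monteith-style production efficiency sketch:
    GPP = eps * C_drm * fAPAR * PAR * product(scalars), and
    NPP = GPP - Ra, with Ra taken here as a fixed fraction of GPP.
    eps (g dry matter per MJ PAR) and ra_frac are illustrative values."""
    s = 1.0
    for x in scalars:
        s *= x                       # multiplicative environmental limits
    gpp = eps * c_drm * fapar * par * s
    npp = gpp * (1.0 - ra_frac)
    return gpp, npp
```

For example, with \(PAR = 10\) MJ m\(^{-2}\), fAPAR = 0.5 and two scalars of 0.8 and 0.9, GPP is \(1.8 \times 0.45 \times 0.5 \times 10 \times 0.72\) \(gC m^{-2}\).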
The obvious attraction of this approach is that it is simple and that it captures the ‘main effect’: carbon assimilation increases with increasing PAR absorption in the absence of limiting factors (expressed here as scalars). An additional (strong) attraction, as we shall see, is that fAPAR is potentially accessible from satellite data, so a major part of the model can be driven by observations globally.
Although such a linear relationship between PAR and photosynthesis does not exist at the leaf level, at the canopy level it is generally observed and can be a reasonable and simple model of GPP.
For crops (with sufficient nutrients and water), the LUE can remain constant over the full range of light intensity. For other vegetation types such as forests, it can be more complex and depend on other factors such as species, stand age and soil fertility.
A useful review of some of the major current PEMs and an analysis of some of their deficiencies is provided by McCallum et al. (2009).
They review six PEMs (see McCallum et al. for brief descriptions of the unique features of each of the models):
LUE is often assumed constant in these models, e.g. constant globally in CASA or per biome via a land cover map as in MOD17. GLO-PEM does not assume a constant LUE.
Although all models make use of satellite data (primarily in the estimation of fAPAR), most also require climate data (for APAR and to drive limiting scalars). Only GLO-PEM runs on satellite data alone (with the exception of the attribution of C3 and C4 plants).
As we shall see in a following lecture, satellite-derived fAPAR is essentially a ‘measurement’ (though not a direct one). It is of necessity an estimate of fAPAR at the time of the satellite overpass and under the atmospheric conditions of the ‘measurement’. Since conditions need to be cloud-free for measurement, it may be thought of as primarily a clear-sky, or direct-radiation dominated, measure. It is known that under diffuse conditions the capacity of a canopy to absorb shortwave radiation (i.e. the fAPAR) tends to be higher, particularly for forest canopies. So, even though the PAR may be lower under diffuse/cloudy conditions, the fAPAR may be higher than the ‘clear sky’ fAPAR.
Key findings of the review by McCallum et al. are:
LUE should not be assumed constant, but should vary by PFTs
Results are strongly dependent on the climate drivers used for particular models (which also complicates intercomparison)
Further use of satellite data would alleviate the need for many or all climate drivers.
PEMs should consider incorporating diffuse radiation, especially at daily resolution
PEMs should also consider the need to account for GPP saturation when radiation is high
Cramer et al. (1999) compared seventeen global models of NPP, including PEMs and mechanistic models.
There was broad general agreement among the models in global seasonal variations in NPP (a range of variation around 50% of the lowest value, with two outliers excluded), and generally quite low coefficients of variation of NPP spatially, except for a few areas.
These results are claimed to be ‘relatively good’ considering that NPP is a relatively poorly understood variable, and one thing the study highlights is the degree of our current uncertainty in this quantity. Within this range of spread, the PEMs performed no more poorly than other models, and PEMs should be considered a viable semi-independent approach for NPP monitoring.
An overview of the PEM approach has been presented. The key idea is that non-limited carbon assimilation can be assumed to be a linear function of the capacity of a canopy to absorb shortwave (specifically PAR) radiation and the amount of downwelling PAR.
These models are particularly useful as they can be largely driven by observations (or rather fAPAR, derived from satellite observations).
Several key issues in the use of such models are highlighted, but these models seem to perform ‘quite well’ in comparison to mechanistic approaches.
Since these models are driven by observations, they cannot directly be used in prognostic mode.
If you’re interested in the Solar Induced Fluorescence discussed in the lectures, read more here
Monteith JL: Solar radiation and productivity in tropical ecosystems. J Appl Ecol 1972, 9:747-766.
Monteith JL: Climate and the efficiency of crop production in Britain. Philos Trans R Soc London, Ser B 1977, 281:277-294.
Li, X.; Xiao, J. Mapping Photosynthesis Solely from Solar-Induced Chlorophyll Fluorescence: A Global, Fine-Resolution Dataset of Gross Primary Production Derived from OCO-2. Remote Sens. 2019, 11, 2563. https://doi.org/10.3390/rs11212563
We have dealt with ideas of plant biomass (Carbon) accumulation. How this biomass is ‘used’ by a plant is responsive to environmental factors and generally happens in a sequence. This sequence and timing of events is known as phenology. This lecture will introduce some fundamental ideas of phenology, and discuss models.
For a field crop such as winter wheat, you will no doubt have seen fields being planted in the Autumn and a low crop cover appearing over the Winter as the seeds germinate and develop leaves to allow photosynthesis. As the weather warms, tillering starts, followed by stem elongation, and finally the wheat grains appear, develop and ripen. This process is described as plant phenology. In crop models, this is usually quite detailed. One example of this is the Zadok scale:
“the external Zadoks stages of the plant (red) and the two internal stages, double ridges and terminal spikelet (check the vertical text). It shows when the shoot components are initiated, grow and die (green boxes) and when the yield components are formed (bars)”.* Source: FAO
This describes the ‘timing’ (or rather sequencing) of the different stages of growth of a crop such as wheat. This sort of scale and diagram is useful for farmers in understanding and modellers in expressing the allocation of resources in the crop and when yield is determined.
A simplified version of a crop phenology model involves the following phases:
Germination
Seed germination has a strong temperature dependence in crops. The duration of this phase is usually expressed as the time over which the sum of growing degree days (GDD) must reach some threshold value. GDD is also useful for understanding some other stages of development.
The GDD for a particular day is usually modelled as the mean of the minimum and maximum temperature minus some base temperature (typically 10 C). This is then summed over time from planting and, in the absence of other ‘stress’ factors (e.g. lack of water or pests), will be characteristic of the length of the germination period.
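As a sketch, the GDD accumulation just described is a clipped running sum over daily temperature extremes:

```python
def growing_degree_days(tmin_series, tmax_series, t_base=10.0):
    """Accumulate growing degree days from daily minimum and maximum
    temperatures: each day contributes max(0, (Tmin + Tmax)/2 - Tbase),
    with Tbase = 10 C as in the text."""
    total = 0.0
    for tmin, tmax in zip(tmin_series, tmax_series):
        total += max(0.0, 0.5 * (tmin + tmax) - t_base)
    return total
```

Germination would then be predicted on the first day this sum crosses a crop-specific threshold.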
Initial spread
During this phase, the production of biomass is dependent on the proportion of radiation intercepted by green leaves, in the absence of other stress factors. The amount of biomass production per unit ground area is then a function of leaf area index (LAI, one-sided leaf area per unit ground area, dimensionless (\(m^2 m^{-2}\))). LAI is the product of the leaf biomass pool per unit area (in \(kg\ m^{-2}\)) and the specific leaf area (SLA, one-sided area of a fresh leaf divided by its oven-dry mass, expressed in \(m^2 kg^{-1}\)), assuming SLA is constant over the plant. As LAI grows, the rate of biomass production increases, so biomass production is close to exponential.
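This near-exponential behaviour can be illustrated with a toy daily time-stepping of the initial spread phase. The Beer's-law interception term, the fixed leaf allocation fraction, and all parameter values are illustrative assumptions:

```python
import math

def simulate_initial_spread(days=60, dt=1.0, biomass0=0.005,
                            sla=20.0, k=0.5, rue=3.0, par=8.0,
                            c_leaf=0.5):
    """Toy daily simulation of initial canopy spread:
    LAI = leaf biomass * SLA; the intercepted PAR fraction follows
    Beer's law (1 - exp(-k * LAI)); production is radiation-use
    efficiency rue (g per MJ) times intercepted PAR; a fixed fraction
    c_leaf of new biomass goes to leaves. All values illustrative."""
    leaf_biomass = biomass0                   # kg m-2
    lai_series = []
    for _ in range(int(days / dt)):
        lai = leaf_biomass * sla
        f_int = 1.0 - math.exp(-k * lai)      # fraction of PAR intercepted
        growth = rue * f_int * par * 1e-3     # kg biomass m-2 day-1 (g -> kg)
        leaf_biomass += c_leaf * growth * dt  # allocate a share to leaves
        lai_series.append(lai)
    return lai_series
```

While LAI is small, interception (and hence growth) is roughly proportional to LAI, so LAI grows near-exponentially; as cover closes, interception saturates, which is the transition to the ‘full coverage’ phase described next.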
Full coverage
When the canopy reaches full coverage, adding more leaf area does not greatly increase the radiation interception so growth is dependent more strongly on incident radiation and respiration rate. Some plants (crops especially) then switch resource allocation from leaf production to generative growth or storage.
Allocation to storage organs
In this stage, biomass allocation goes into storage organs. Whilst the leaves remain intact the biomass production rate will be similar to the previous phase.
Ripening
Although in many crops leaves continually die as new ones are produced at the top of the canopy, in the ripening phase, biomass production is essentially terminated as all leaves senesce and no new ones are produced.
All plants, to some extent, experience daily and seasonal variations in environmental conditions, and tend to adjust their behaviour to these variations. There are diurnal variations in light, temperature and water. Many plants therefore exhibit circadian rhythms (24-hour cycles), for example in stomatal opening (Chapin et al., 2002, Ch. 6).
Plants in temperate climates experience strong seasonal variations in environment and generally exhibit a predictable pattern of phenology: they put more resources into leaf production at certain times, flowers at others etc. Often this is a response to photoperiod where e.g. leaf senescence begins and photosynthesis is reduced when day length is reduced or other environmental factors give cues to the onset of winter. To prepare for this, plants will tend to shift resources (nutrients, carbohydrates, water) from leaves to other organs to prevent their loss from the plant. These resources can then be used to initiate growth in the next season.
This ‘phasing’ by plants covers many aspects of plant growth and development, for instance allocation rates. When carbohydrates are produced in primary production, they are allocated to different pools (leaves, roots, wood etc.). Different plants have different patterns of allocation: e.g. deciduous or annual plants must allocate to leaf production (for light) and root production (for water and nutrients) early on, but evergreen plants already have leaves and so can put more resources into root production in the early season. When the growing season is short (e.g. Arctic tundra), storage of resources from one season to more rapidly start photosynthesis in the next becomes more important.
Tissue turnover
Although plants gain carbohydrate through photosynthesis, they must also use some of it for respiration, both to maintain current organs and mechanisms and for new growth. In the previous lecture we saw that the balance between these gives what we call the net primary productivity. If this starts to become negative (i.e. it costs the plant more to keep some organs, such as leaves, than they contribute through photosynthesis) then it can be advantageous for plants to lose biomass, e.g. by shedding leaves. This is an important mechanism for plants as it gives them control over this balance.
Senescence is the programmed breakdown of tissues in a plant which allows plants to reduce the impact of e.g. unproductive leaves. One upshot of this is that it allows plants to explore new space. For example, in grasses a relatively constant rate of leaf senescence allows maintenance costs associated with lower (early) leaves to be reduced as new leaves are produced higher in the canopy.
There is a danger of the loss of important resources (e.g. nutrients) when shedding leaves or other organs. Plants therefore tend to transfer soluble nutrients out of senescing tissue (resorption) and are able to retain around half of the phosphorus, nitrogen and potassium from leaves at senescence (Chapin, 1992, Ch. 8).
It is interesting to note the relationship between Nitrogen Use Efficiency (NUE), the ratio of dry mass to nitrogen in litterfall (Chapin et al., Ch. 8), and the nitrogen lost in litterfall:
“Relationship between the amount of nitrogen in litterfall and nitrogen use efficiency (ratio of dry mass to nitrogen in that litterfall). Each symbol is a different stand, including conifer forests (C), temperate deciduous forests (D), tropical evergreen forests (T), mediterranean ecosystems (M), and temperate forests dominated by nitrogen fixers (N). Redrawn from Vitousek (1982).” Source: Chapin et al. 2002 , Ch. 8
Plants on less fertile soils tend to be more efficient at using nitrogen than those on more fertile soils. NUE can be increased by reducing tissue turnover times (e.g. keeping leaves longer: so plants with lower leaf turnover and higher NUE have a competitive advantage in nutrient-poor soils), but this tends to imply a trade-off with the capacity to capture nutrients and carbon (Chapin et al., 2002, Ch. 8), so is not a universal advantage.
Another part of the value of senescence is that it allows plants to shed parasites, pathogens and herbivores. By shedding leaves and roots, plants can respond to such attacks if their impact is likely to be damaging.
Other episodic factors that lead to loss of biomass include fire, wind storms etc. Whilst these can lead to severe impacts and nutrient losses to a plant, they also play a wider role in the ecosystem e.g. by providing gaps in a canopy or increasing heterogeneity of nutrient resources (Chapin, 1992, Ch. 6).
We have seen above that temperature and photoperiod (day length relative to night length) are important concepts for plant growth and development and it is hardly surprising that phenology is generally controlled by these environmental cues.
Using photoperiod as well as temperature is particularly important in humid extratropical areas where there are large temperature fluctuations from year to year as it stops plants picking up on temperature at the ‘wrong’ time of year (Korner and Basler, 2010).
Because the photoperiod is the same in winter and autumn, plants need a cue that winter is over. This is often obtained from the dose of low temperatures experienced by the plant, and amounts to a ‘chilling’ requirement by some plants before spring bud burst (Korner and Basler, 2010). The ‘phasing’ of the signals seems to be: a chilling requirement, which enables photoperiod sensitivity, then a response to temperature. For plants that have a chilling requirement, bud break can be delayed by mild temperatures.
Burrows et al. (2011) analysed global temperature trends over the period 1960-2009 and noted the following patterns of advance in spring temperatures and delay in autumn temperatures.
“Seasonal shift (days/decade) is the change in timing of monthly temperatures, shown for April (top), representing Northern Hemisphere spring and Southern Hemisphere fall and October (bottom): positive where timing advances, negative where timing is delayed. Cross-hatching shows areas with small seasonal temperature change (<0.2 C/month), where seasonal shifts may be large.” Source: Burrows et al. 2011
Plant responses to such climatic variables means that phenology is likely to change under climate change, and there is already much evidence for this.
Our evidence for phenological changes comes from a combination of ground observations, flux towers, and satellite observations.
Until relatively recently there has been little work linking ground and satellite observations, partly due to the spatial paucity of ground data until recent years and partly because of the complexities of matching the scales of the observations (Liang et al., 2011; Studer et al., 2007; White et al., 2009).
One of the simplest models for tracking phenology that has been extensively applied to satellite data is a logistic function of time:
\[y(t) = \frac{c}{1 + e^{a + bt}}+d\]
This is used, for instance, by Zhang et al. (2003) for tracking the dynamics of MODIS vegetation index (VI) data. The model is fitted to the VI data, transition date metrics (such as green-up onset and maturity) are calculated, and the dynamics of these phenology metrics can then be tracked over time. Generally a model of this sort is used to derive data that are then used to model phenology.
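As an illustrative sketch (using synthetic data in place of real VI observations, and illustrative parameter values), the logistic model above can be fitted by least squares, with the inflexion point giving a mid-greenup metric:

```python
import numpy as np
from scipy.optimize import curve_fit

np.random.seed(0)

def logistic_vi(t, a, b, c, d):
    # y(t) = c / (1 + exp(a + b t)) + d : the form given above
    return c / (1.0 + np.exp(a + b * t)) + d

# synthetic 8-day 'VI' composites over spring (DOY 60-180)
t = np.arange(60, 181, 8.0)
a0, b0, c0, d0 = 12.0, -0.1, 0.5, 0.2            # illustrative 'truth'
vi = logistic_vi(t, a0, b0, c0, d0) + np.random.normal(0, 0.005, t.size)

# fit the four parameters to the noisy observations
(a, b, c, d), _ = curve_fit(logistic_vi, t, vi, p0=(10.0, -0.08, 0.4, 0.15))

# the inflexion point, where a + b t = 0, is a mid-greenup date metric
print('mid-greenup DOY:', -a / b)
```

Here the mid-greenup date falls out analytically from the fitted parameters; other transition metrics can be derived from the curvature of the fitted function.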
(Growing) Degree day model
The simplest form of model that can be used prognostically is a growing degree day (GDD) model. Note that such models are only appropriate where temperature is a limiting factor in plant growth (the extratropics), and that this form does not include any account of chilling days or photoperiod. In this approach, some phenological metric such as spring green-up is used to calibrate the parameters of a GDD model. The model can be phrased as:
\[GDD = \sum_{T>T_{base}} (T - T_{base})\]
where \(T_{base}\) is a base or ‘critical’ temperature and \(T\) is the air temperature (C) (e.g. at 2 m).
Some options exist after this point for implementing such a model. The key is to identify some threshold value of GDD, \(F^*\), that corresponds to the metric of interest. Most typically this is simply a GDD threshold. The sum of GDD generally starts from 1 January in the Northern Hemisphere and 1 July in the Southern Hemisphere to act as a consistent baseline.
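A minimal sketch of such a GDD model (with a synthetic temperature cycle and illustrative values of \(T_{base}\) and \(F^*\)) is:

```python
import numpy as np

def gdd_onset(temps, t_base=5.0, f_star=150.0):
    """Return the day index at which accumulated growing degree days
    first reach the threshold F* (None if never reached).
    temps: daily mean air temperature (C), day 0 = 1 January."""
    excess = np.maximum(temps - t_base, 0.0)  # only days with T > T_base count
    gdd = np.cumsum(excess)                   # running GDD sum
    over = np.where(gdd >= f_star)[0]
    return int(over[0]) if over.size else None

# synthetic Northern Hemisphere seasonal temperature cycle
days = np.arange(365)
temps = 10.0 - 15.0 * np.cos(2.0 * np.pi * days / 365.0)
print('predicted green-up DOY:', gdd_onset(temps))
```

Calibration then amounts to choosing \(T_{base}\) and \(F^*\) (and possibly the start DOY) so that the predicted onset matches the observed phenological metric.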
Sobrino et al. (2011) used a double logistic function of the same form for mapping changes in spring date timing trends for global vegetation.
Another approach is to use day of year (DOY) as an additional model parameter. Fisher et al. (2007), for example, allowed the GDD threshold to vary spatially but attempted to calibrate a model where the starting DOY of the summation and the base temperature were constant over a wide area. An interesting feature of that study is its consideration of some of the complexities in using satellite data for phenology modelling. They calibrated a spring warming model to predict the date of half-maximum greenness using satellite data over deciduous forests in New England. Whilst a calibration of the model was achieved, a comparison against a null model (i.e. using mean values) showed that the model driven with climatological data offered little improvement.
A more detailed analysis showed that it was not the model per se at fault, but its calibration and application to a broad PFT. When the model was applied at a more specific level (northern (beech-maple-birch) and central (oak-hickory) hardwood forests), different responses to climate were observed, and they conclude that spatial location and species composition are critical factors for predicting the phenological response to climate change.
Generally, satellite observations based on visible and near infrared wavelengths are used for monitoring phenology, but Picard et al. (2005) used an index based on near infrared and middle infrared wavelengths, which is more sensitive to water content than to vegetation greenness.
This was found to be particularly useful for the study area (Siberia) as the metric used (NDWI) decreases during snowmelt then increases around the date of bud burst.
They found that uncertainty from the bud burst date calibration had an impact of only around 8% of mean NPP, and suggest that the calibrated model is accurate enough for carbon budget calculations.
In tropical areas, simulating and understanding phenology is complicated (Bradley et al., 2011): in places like the Amazon, temperature is generally not thought to be limiting. Rather, water availability is often more of a control where there is a strong seasonal cycle in precipitation. This is not a simple response, however, because trees with deep roots may have access to water reserves and show no pronounced annual variation.
There is evidence, however, of enhanced tree mortality and decreased growth when Amazonian rain forests are exposed to a longer and more intense moisture deficit than normal (Phillips et al., 2009).
The state of phenology models in DGVMs for tropical areas is at present rather weak, and this is an area of active research.
Unsurprisingly, phenology in DGVMs is more complex than just the determination of bud burst. In JULES, for example, it is dealt with in terms of a leaf mortality rate that is assumed to be a function of temperature. Additional rules are implemented, such as dropping leaves at a constant rate when the daily mean leaf turnover reaches twice its minimum value; bud burst occurs at the same rate when leaf mortality drops back below this threshold. This is a form of temperature dependence similar to those outlined above, with an implicit chilling requirement, but the parameters are clearly not the same as those considered above.
Clark et al. (2011) describe this in more detail. The model described above is found to be insufficient to produce realistic vegetation phenology, and so a variable \(p\) is introduced that describes the phenological status of the vegetation as the ratio \(LAI/LAI_{max}\), where \(LAI\) is the current LAI and \(LAI_{max}\) is the seasonal maximum LAI.
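One simple reading of the temperature-modulated leaf turnover described above can be sketched as follows (the linear increase in turnover below a threshold temperature is our interpretation of the description, and the parameter values are illustrative, not the JULES defaults for any PFT):

```python
import numpy as np

def leaf_turnover_rate(T, gamma_0=0.25, d_T=9.0, T_off=278.15):
    """Leaf mortality rate with a baseline value gamma_0 that
    increases linearly as temperature falls below a threshold
    T_off (K).  Rate in arbitrary per-time units; values illustrative."""
    T = np.asarray(T, dtype=float)
    return np.where(T <= T_off,
                    gamma_0 * (1.0 + d_T * (T_off - T)),
                    gamma_0)

T = np.array([268.15, 278.15, 288.15])      # -5, 5 and 15 C
rate = leaf_turnover_rate(T)
# leaves are dropped at a constant rate once daily turnover
# exceeds twice its minimum value
dropping = rate > 2.0 * 0.25
print(rate, dropping)
```

With this form, cold temperatures push the turnover rate above the 2x-minimum threshold and trigger leaf drop, while a return to mild temperatures brings it back below the threshold, triggering bud burst.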
Vegetation phenology is seen to be an important concept in monitoring, modelling and understanding vegetation dynamics and its response to climate variations. There is a growing amount of observational data on phenology at various scales and more recent attempts to reconcile measures at different scales.
It is likely that, for some areas at least, species-specific (or slightly broader groupings of species) parameterisations of phenology need to be considered, rather than just broad PFT definitions.
Most phenology analysis is done using simple degree day models, although some analyses also consider chilling requirements.
Phenology models in DGVMs may be phrased rather differently from those used in most analyses. Whilst maintaining a required ‘mechanistic’ approach, current DGVM phenology models are not entirely satisfactory.
Bradley et al., 2011, Relationships between phenology, radiation and precipitation in the Amazon region, Global Change Biology, 17(6), 2245-2260.
Zhang, X.Y., Friedl, M.A., Schaaf, C.B., Strahler, A.H., Hodges, J.C.F., Gao, F., et al., 2003, Monitoring vegetation phenology using MODIS, Remote Sensing of Environment, 84, 471-475.
White, M.A., et al., 2009, Intercomparison, interpretation, and assessment of spring phenology in North America estimated from remote sensing for 1982-2006, Global Change Biology, 15(10), 2335-2359.
Studer, S., Stockli, R., Appenzeller, C. and Vidale, P.L., 2007, A comparative study of satellite and ground-based phenology, International Journal of Biometeorology, 51(5), 405-414.
Sobrino, J.A., Julien, Y. and Morales, L., 2011, Changes in vegetation spring dates in the second half of the twentieth century, International Journal of Remote Sensing, 32(18).
Schwartz, M.D. and Hanes, J.M., 2010, Continental-scale phenology: warming and chilling, International Journal of Climatology, 30, 1595-1598.
Burrows, M.T., Schoeman, D.S., Buckley, L.B., Moore, P., Poloczanska, E.S., Brander, K.M., Brown, C., Bruno, J.F., Duarte, C.M., Halpern, B.S., Holding, J., Kappel, C.V., Kiessling, W., O'Connor, M.I., Pandolfi, J.M., Parmesan, C., Schwing, F.B., Sydeman, W.J. and Richardson, A.J., 2011, The pace of shifting climate in marine and terrestrial ecosystems, Science, 334(6056), 652-655.
Reed et al., 2009, Remote sensing phenology: status and the way forward, in Noormets, A. (ed.), Phenology of Ecosystem Processes: Application in Global Change Research, Springer, Dordrecht, p. 275.
Picard, G., Quegan, S., Delbart, N., Lomas, M.R., Le Toan, T. and Woodward, F.I., 2005, Bud-burst modelling in Siberia and its impact on quantifying the carbon budget, Global Change Biology, 11, 2164-2176.
Morisette, J. and Schwartz, M. (lead authors), Hogan, C.M. (topic editor), Phenology, in Cleveland, C.J. (ed.), Encyclopedia of Earth, Environmental Information Coalition, National Council for Science and the Environment, Washington, D.C.
Liang, L., Schwartz, M.D. and Fei, S., 2011, Validating satellite phenology through intensive ground observation and landscape scaling in a mixed seasonal forest, Remote Sensing of Environment, 115(1), 143-157.
Korner and Basler, 2010, Phenology under global warming, Science, 19 March 2010, 1461-1462, doi:10.1126/science.1186473.
Defila, C. and Clot, B., 2005, Phytophenological trends in the Swiss Alps, 1951-2002, Meteorologische Zeitschrift, 14(2), 191-196.
Fisher, J.I., Richardson, A.D. and Mustard, J.F., 2007, Phenology model from surface meteorology does not capture satellite-based greenup estimations, Global Change Biology, 13(3), 707-721.
In the previous sessions, we reviewed some of the important mechanisms for understanding global terrestrial carbon state and dynamics.
In this lecture, we consider strategies for building mathematical models to represent these concepts.
In the ‘Farquhar’ approach (Farquhar et al. 1980), the non-limited photosynthetic rate \(A_0\) is calculated for a fixed value of intercellular CO2 concentration \(C_{i,0}\) (Knorr, 1998). The actual carbon assimilation rate \(A\) at the leaf level depends on the actual intercellular CO2 concentration \(C_i\). For the C3 pathway it is calculated as a function of the electron transport limited rate \(J_E\) and a carboxylating rate \(J_C\) controlled by the enzyme Rubisco (units \(mol(CO_2)\,m^{-2}\,s^{-1}\)), with the leaf ‘dark’ respiration rate \(R_d\) subtracted:

\[A = \min(J_C, J_E) - R_d\]
“Relationship of net photosynthetic rate to photosynthetically active radiation and the processes that limit photosynthesis at different irradiances. The linear increase in photosynthesis in response to increased light (in the range of light limitation) indicates relatively constant light use efficiency. The light compensation point is the minimum irradiance at which the leaf shows a net gain of carbon.”Source: Chapin
“Leaflet photosynthetic rates (leaflet area basis) versus inter-cellular CO2 concentration for soybean plants grown at 330 and 660 uL CO2 L-1.” Source: Campbell et al., 1988
Here, \(\Gamma_{*}\) (\(mol(CO_2)\,mol(air)^{-1}\)) is the CO2 compensation point without leaf respiration, \(K_C\) is the Michaelis-Menten constant for CO2 (\(mol(CO_2)\,mol(air)^{-1}\)), \(O_x\) is the oxygen concentration (0.21 \(mol(O_2)\,mol(air)^{-1}\)), \(K_O\) is the Michaelis-Menten constant for O2 (\(mol(O_2)\,mol(air)^{-1}\)), and \(J\) is the electron transport rate:
\[J = \frac{\alpha I J_{max}}{\sqrt{J_{max}^2 + \alpha^2 I^2}}\]
with \(I = I_{PAR}/E_{PAR}\), where \(I_{PAR}\) (\(Wm^{-2}\)) is the PAR absorption rate, \(E_{PAR}\) the energy content of PAR quanta (220 \(kJ\,mol^{-1}\)), \(J_{max}\) the maximum electron transport rate (\(mol(CO_2)\,m^{-2}\,s^{-1}\)), and \(\alpha\) the efficiency of photon capture (0.28).
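The light response of the electron transport rate can be sketched directly from the equation above (with an illustrative \(J_{max}\)): \(J\) is linear in absorbed PAR at low light, with slope \(\alpha\), and saturates at \(J_{max}\) at high light.

```python
import numpy as np

def electron_transport(I, J_max=1e-4, alpha=0.28):
    """Non-rectangular light response of electron transport, from the
    equation above.  I: absorbed PAR (mol quanta m-2 s-1);
    J: mol(CO2) m-2 s-1.  J_max here is illustrative."""
    return alpha * I * J_max / np.sqrt(J_max**2 + alpha**2 * I**2)

I = np.array([1e-6, 1e-4, 1e-2])    # low, moderate and saturating light
J = electron_transport(I)
print(J)
```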
The ‘Michaelis-Menten’ constants depend on vegetation temperature (\(T\) in \(K\)):

\[K_C = K_{C,0} \exp\left(\frac{E_C T_0}{T_1 R T}\right) \qquad K_O = K_{O,0} \exp\left(\frac{E_O T_0}{T_1 R T}\right)\]
where \(K_{C,0}\) and \(K_{O,0}\) are the maximum Michaelis-Menten values for CO2 and O2 respectively (\(460 \times 10^{-6}\,mol(CO_2)\,mol(air)^{-1}\) and \(330 \times 10^{-3}\,mol(O_2)\,mol(air)^{-1}\)), \(E_C\) is the activation energy for \(K_C\) (59396 \(J\,mol^{-1}\)), \(E_O\) is the activation energy for \(K_O\) (35948 \(J\,mol^{-1}\)), \(T_1\) is 25 \(C\) in K (i.e. 25 + 273.15) and \(T_0\) is the vegetation temperature relative to 25 \(C\) (i.e. \(T - T_1\)). \(R\) is the gas constant (8.314 \(J\,mol^{-1}\,K^{-1}\)).
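These temperature dependencies can be sketched as follows, assuming the Arrhenius-type scaling about 25 C implied by the constants just given (the exact grouping of terms is our reconstruction):

```python
import numpy as np

R = 8.314            # gas constant, J mol-1 K-1
T1 = 25.0 + 273.15   # reference temperature, K

def michaelis_menten(T, K0, E):
    # Arrhenius-type scaling about 25 C, with T0 = T - T1:
    # K = K0 * exp(E * T0 / (T1 * R * T))
    T0 = T - T1
    return K0 * np.exp(E * T0 / (T1 * R * T))

K_C0, E_C = 460e-6, 59396.0   # CO2: constant and activation energy
K_O0, E_O = 330e-3, 35948.0   # O2:  constant and activation energy

for T in (283.15, 298.15, 313.15):
    print(T, michaelis_menten(T, K_C0, E_C), michaelis_menten(T, K_O0, E_O))
```

At 25 C the constants take their reference values exactly, and both increase with temperature, as expected for enzyme kinetics.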
The maximum electron transport rate \(J_{max}\) is modelled as the electron transport rate \(J\) at 25 \(C\) with a temperature dependence. \(J\) at 25 \(C\) is modelled as \(J_0 * NSCL\) where \(NSCL\) is a Nitrogen scaling factor at maximum carboxylation rate and maximum electron transport rate.
Plants regulate CO2 uptake and water loss by regulating the size of stomatal openings (mainly in leaves). The stomatal conductance, the reciprocal of stomatal resistance, is the flux of water vapour or CO2 per unit driving force. The usual interpretation is that when plants decrease stomatal conductance (increase resistance) to minimise water loss, photosynthesis declines, reducing the efficiency at which plants convert light to carbohydrates (Chapin et al., 2002).
"Cross-section of a leaf, showing the diffusion pathways of CO2 and H2O into and out of the leaf, respectively. Length of the horizontal arrows outside the leaf is proportional to wind speeds in the boundary layer" Source: Chapin
Jones (1998) reviews and criticises models of stomatal control of photosynthesis and transpiration. A number of possible (not necessarily exclusive) hypotheses for the role of stomata are considered in the various approaches. These include:
stomata operate in such a way as to minimize water loss relative to the amount of CO2 uptake (as above)
the prime role of stomata might be to avoid damaging plant water deficits (e.g. avoidance of cavitation)
stomatal control of transpiration has a role in maintaining leaf temperature within an optimal range
The hypotheses one puts forward about the potential role (or roles) of stomatal dynamics clearly have the potential to influence the conclusions one might draw about future climates, as these hypotheses lead to particular model forms being used in TEMs.
Source: Jones 1998
Understanding the control of stomata is complicated because of the various feedback mechanisms involved (figure above).
There are a number of models available to describe the response of leaf stomatal conductance (\(G\), mm s-1 or sometimes mmol m-2 s-1). These range from the empirical model of Jarvis (1976), which requires a large number of parameters, through to semi-empirical approaches such as that of Jones (1983).
Despite such debates, most TEMs seem to use semi-empirical approaches to modelling stomatal conductance as a compromise between complexity and the number of parameters and other mechanisms required. Typical of these is the approach used in the Bethy model (Knorr, 1997) and the subsequent JSBACH model, as well as the Triffid model (Cox, 2001) and the subsequent JULES implementation, following Jones (1983) for the maximum (i.e. unstressed) canopy stomatal conductance, \(G_{c,0}\) (Knorr, 2000):

\[G_{c,0} = 1.6\,\frac{R\,T_k}{p}\,\frac{A_{c,0}}{C_a - C_{i,0}}\]
which is considered a function of leaf temperature (\(T_k\), K), the non-water-limited net leaf CO2 uptake rate (\(A_{c,0}\), umol m-2 s-1), the atmospheric CO2 concentration \(C_a\) (355 umol(CO2)/mol(air)) and a standard non-stressed leaf-internal CO2 concentration (\(C_{i,0}\)). \(R\) is the gas constant (8.3145 J/mol K, so units: kg m-2 s-2 mol-1 K-1), and \(p\) is the air pressure of the standard atmosphere (Pa, i.e. kg m-1 s-2), so that \(G_{c,0}\) is given in m/s (Knorr, 2000). The factor of 1.6 accounts for the different molecular diffusivities of water and carbon dioxide.
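Assembling the quantities listed above gives a sketch of the unstressed conductance (the grouping of terms here is our reconstruction from the stated units, and the C3 assumption \(C_{i,0} = 0.87 C_a\) quoted later in the text is taken as given):

```python
R = 8.3145       # gas constant, J mol-1 K-1
p = 101325.0     # standard atmospheric pressure, Pa

def gc0(Tk, Ac0_umol, Ca_ppm=355.0, ci_ratio=0.87):
    """Unstressed canopy conductance (m s-1) from leaf temperature
    Tk (K), non-water-limited CO2 uptake Ac0 (umol m-2 s-1) and
    ambient CO2 Ca (ppmv).  The grouping of terms is our
    reconstruction; 1.6 converts CO2 to water vapour diffusivity."""
    Ac0 = Ac0_umol * 1e-6                      # -> mol(CO2) m-2 s-1
    dC = Ca_ppm * (1.0 - ci_ratio) * 1e-6      # Ca - Ci0 in mol/mol
    return 1.6 * (R * Tk / p) * Ac0 / dC

g = gc0(298.15, 20.0)   # 25 C, Ac0 = 20 umol m-2 s-1
print(g)                # of order 10 mm/s
```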
In Triffid, \(C_{i,0}\) is assumed to be a function of the internal partial pressure of CO2 and the leaf surface humidity deficit:

\[c_i = \Gamma + (c_a - \Gamma)\,F_0\left(1 - \frac{D^*}{D_c}\right)\]
where the symbol \(c_i\) is now used in place of \(C_{i,0}\), \(\Gamma\) is the internal partial pressure of CO2 at which photosynthesis just balances photorespiration (the photorespiration compensation point), \(D^*\) is the humidity deficit at the leaf surface, and \(F_0\) and \(D_c\) are vegetation-specific parameters.
In Bethy, the equation above is interpreted as a linear relationship between \(G_{c,0}\) and \(A_{c,0}\), i.e. between the maximum stomatal conductance and the maximum photosynthetic rate, and the following equation is used (after Schulze et al., 1994):
with \(G_{c,0}\) in mm s-1 and \(A_{c,0}\) in umol(CO2) m-2 s-1. This leads to the assumption (Knorr, 1998) that for C3 plants \(C_{i,0} = 0.87 C_a\) and for C4 plants \(C_{i,0} = 0.67 C_a\) (interpreting data from Schulze et al., 1994).
Most models place an additional limitation on \(A_{c,0}\), and thence on stomatal conductance, if soil water limits photosynthesis (so that stomata close). In Bethy/JSBACH, this is:
where be is assumed to change with soil water status in such a way that during the course of a day, the transpiration rate, Et, does not exceed a root supply rate, S, described by (Knorr, 2000):
\(b_e\) is thereby set each day either to 0, if \(E_t\) never exceeds \(S\), or to a value where \(S = E_t\) at the time of highest atmospheric demand, assumed to be at 13:00 h. \(W_s\) is the soil water content, adjusted to take soil freezing into account, and \(c_w\) is an empirical parameter representing root density (this description follows Knorr, 2000 closely).
Above, \(e_s(T_a)\) is the saturation vapour pressure at the actual temperature (\(T_a\)) and \(e_a\) is the actual vapour pressure exerted by the water in the air; the difference between the two is the vapour pressure deficit (VPD) or saturation deficit, a measure of the evaporative capacity of the air. These terms are usually expressed in kPa.
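As an example, VPD can be computed from temperature and humidity; the Tetens approximation for \(e_s(T_a)\) used here is our choice, as the text does not specify a particular formula:

```python
import math

def saturation_vp(Ta):
    """Saturation vapour pressure e_s (kPa) at air temperature Ta (C),
    using the Tetens approximation (our choice of formula)."""
    return 0.6108 * math.exp(17.27 * Ta / (Ta + 237.3))

def vpd(Ta, rh):
    # VPD = e_s(Ta) - e_a, with e_a from relative humidity (0-1)
    es = saturation_vp(Ta)
    return es - rh * es

print(vpd(20.0, 0.5))   # kPa, for half-saturated air at 20 C
```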
There are several hypotheses about how leaf stomatal conductance scales to an equivalent canopy stomatal conductance. In models such as SDGVM and Triffid/JULES, it is assumed that canopy stomatal conductance increases with increasing leaf area index (LAI). In Triffid/JULES and most other models:

\[G_c = G_{c,0}\,\frac{f_{PAR}}{k} \qquad f_{PAR} = 1 - e^{-kL}\]
where \(f_{PAR}\) is the fraction of incident ‘PAR’ radiation (shortwave radiation used in photosynthesis) absorbed by the canopy, \(L\) is the LAI and \(k\) is a geometric term representing leaf projection (and also clumping): an extinction coefficient for the canopy (typically set to 0.5).
The rationale for this is that incident PAR decreases over the vertical extent of the canopy so stomatal conductance might be considered to also decrease as it is linked to assimilation rates which will decrease in this way.
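This rationale can be sketched as follows (our reconstruction of the \(f_{PAR}/k\) big-leaf scaling, with illustrative values):

```python
import numpy as np

def canopy_conductance(g_leaf, lai, k=0.5):
    """Scale a top-leaf conductance to the canopy with the fPAR/k
    'big-leaf' assumption described above (our reconstruction):
    G_c = g_leaf * fPAR / k, where fPAR = 1 - exp(-k L)."""
    fpar = 1.0 - np.exp(-k * lai)
    return g_leaf * fpar / k

g_leaf = 5.0   # illustrative top-leaf conductance, mm s-1
for lai in (0.5, 2.0, 6.0):
    print(lai, canopy_conductance(g_leaf, lai))
```

Note that under this scaling the canopy conductance tends to \(g_{leaf}\,L\) for sparse canopies and saturates at \(g_{leaf}/k\) for dense ones, consistent with the declining contribution of shaded leaves.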
In this section, we have outlined the ‘Farquhar’ approach to modelling photosynthesis, which is used in this or related forms in most DGVMs.
The model relates the carbon assimilation rate to the minimum of two potentially limiting factors, the electron transport limited rate and a carboxylating rate, with leaf ‘dark’ respiration subtracted. This model has been seen to operate well at the leaf level and is relatively simple to implement and parameterise.
An important facet of this model for climate studies is that it relates carbon assimilation to ambient CO2 concentrations.
We have also outlined some concepts about what controls stomatal conductance. This is an important concept because it can limit carbon assimilation and relates to water use by the leaf (transpiration).
Farquhar, G.D., von Caemmerer, S. and Berry, J.A., 1980, A biochemical model of photosynthetic CO2 assimilation in leaves of C3 species, Planta, 149, 78-90.
Farquhar, G.D. and von Caemmerer, S., 1982, Modeling of photosynthetic response to environmental conditions, in Lange, O.L., Nobel, P.S., Osmond, C.B. and Ziegler, H. (eds), Physiological Plant Ecology II. Water Relations and Carbon Assimilation, Encyclopedia of Plant Physiology, vol. 12B, Springer-Verlag, New York, 549-587.
Knorr, W., 1997, Satellite remote sensing and modelling of the global CO2 exchange of land vegetation: a synthesis study, Max-Planck-Institut fur Meteorologie Examensarbeit Nr. 49, 185 pp., ISSN 0938-5177 (in German and English), Max-Planck-Institut fur Meteorologie, Hamburg, Germany.
Knorr, W., 2000, Annual and interannual CO2 exchanges of the terrestrial biosphere: process-based simulations and uncertainties, Global Ecology & Biogeography, 9, 225-252.
In this practical, we will be using such data to drive a leaf-scale photosynthesis/carbon model based on that in JULES. You should ideally have some understanding of JULES before attempting the practical, and ideally read Best et al. (2011) and Clark et al. (2011). You will also find Sellers et al. (1996) of use.
In this practical you will explore the characteristics and response of a model of terrestrial carbon assimilation. At the end of this session, you should better understand the theoretical material on carbon models covered in the lectures, and be able to explore how different types of vegetation respond to variations in environmental conditions.
The model implemented is based on that in JULES (Best et al., 2011; Clark et al., 2011) with some minor modifications. That model is in any case very similar to that of Sellers et al. (1996). You should probably refresh your memory of the Sellers paper.
We will be using driving data from 005_Solar_Practical.ipynb, so you should make sure you are familiar with that material before starting this.
We will be using the class photosynthesis from the Python code photJules.py. Embedded in the code, you will find a large number of parameters used to control the Carbon assimilation. These are grouped into ‘typical’ values (from the literature) for different Plant Functional Types (PFTs). The PFTs coded in this model are:
C3 grass
C4 grass
Broadleaf tree
Needleleaf tree
Shrub
Table 2 in Clark et al., 2011 provides a summary of the default PFT-dependent photosynthesis parameters:
Relating this to the previous practical, you might notice the variation in (leaf scale) single scattering albedo (omega) and the temperature ranges specified for the different PFTs.
Much of JULES is derived from Sellers et al. (1996). In this approach (for C3, Collatz et al. (1991)), the leaf-level carbon assimilation rate W is limited by the minimum of three rates:
the carboxylating rate Wc: the efficiency of the photosynthetic enzyme system (Rubisco-limited)
the light-limited rate We: set by the absorbed PAR
the transport rate Ws: the rate of transport (export) of photosynthetic products (for C3; a PEP carboxylase limitation for C4)
A dark respiration rate, Rd, is subtracted from the assimilation.
A scalar control on Ws and Wc is Vcmax, the maximum rate of carboxylation of Rubisco. This is in turn scaled by the leaf Nitrogen parameter (n0). It is modulated by temperature relative to the temperature range constraints.
The light-limited rate We is defined by the product of the quantum efficiency alpha, the PAR absorption rate (ipar) projected onto a leaf surface, and the leaf absorptance (1 - omega), where omega is the leaf single scattering albedo.
For C3 plants, We and Wc are modulated by internal leaf CO2 concentration effects that are functions of Gamma, the CO2 compensation point without leaf respiration. For Wc, additional parameters, the Michaelis-Menten constants for CO2 and O2, come into play; these are in turn functions of temperature.
For C4 plants, Ws is directly scaled by internal leaf CO2 concentration relative to surface pressure.
The product of Vcmax and the PFT-specific factor fdr gives the dark respiration.
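The structure described above can be summarised in a few lines (note that JULES/Collatz actually smooth the transitions between limiting rates using quadratics; a hard minimum is used here for simplicity):

```python
import numpy as np

def leaf_assimilation(Wc, We, Ws, Rd):
    """Net leaf assimilation as the minimum of the three limiting
    rates minus dark respiration (all rates in mol m-2 s-1).
    A hard minimum replaces the quadratic smoothing used in
    JULES/Collatz, for simplicity."""
    W = np.minimum(np.minimum(Wc, We), Ws)
    return W - Rd

# illustrative, light-limited case: We is the smallest rate
A = leaf_assimilation(Wc=20e-6, We=8e-6, Ws=15e-6, Rd=1e-6)
print(A)
```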
import numpy as np
import matplotlib.pyplot as plt
from geog0133.photJules import photosynthesis
from geog0133.photter import plotme, day_plot, gpp_plot
from geog0133.solar import radiation
from geog0133.pp import daily_PP, annual_npp
from geog0133.cru import getCRU, splurge
from matplotlib import animation
from datetime import datetime, timedelta
# import codes we need into the notebook
A starting point is to produce a function that uses the model in an easy way. The function do_photosynthesis does just that. It takes a large number of options, and allows us to do different plots, etc. The parameters are:
ipar: Incoming radiation in units of \(\mu mol\, m^{−2}s^{−1}\) (default: 200)
Tc: Temperature in Celsius
co2_ppmv: \(CO_2\) concentration in units of ppmv
n: Length of array (default value: 100 bins)
pft_type: type of PFT (see JULES paper for details)
plotter: None or dictionary of plotting options
x : array to be used for \(x\)-axis for plots (or None in which case the Tc array is used)
The plotting dictionary is of the form:
plot_dict = {
    'n_subplots': 1,      # number of sub-plots
    'subplot': 0,         # index of this sub-plot
    'title': 'title',     # subplot title
    'name': 'name',       # plot file name
    'xlabel': 'x label',  # x label
    'log': False          # use log scale for y axis
}
The function returns:
photo,plotter
where photo contains the model calculations and plotter an updated plotting dictionary.
The main outputs are (all in \(mol \ m^{-2} s^{-1}\)):
Wc : carboxylating rate
We : light-limiting rate
Ws : transport rate
W : combined rate
Al : assimilation rate
Rd : dark respiration rate
all accessible as photo.Wc etc.
[4]:
def do_photosynthesis(ipar=200., Tc=None, co2_ppmv=390, n=100,
                      pft_type='C3 grass', plotter=None, x=None,
                      Tnoise=0.0):
    ''' A function to run the photosynthesis model.

    Function allows the user to change a number of important
    photosynthesis parameters: incoming PAR radiation, canopy
    temperature, CO2 concentration, C3/C4 pathway and the PFT type.
    The first three parameters can be provided as arrays. The function
    will produce a plot of the variation of photosynthesis as it
    sweeps over the parameter range. If Tnoise is set to some +ve
    value, this is added to Tc as random noise. This can be used to
    mimic variations in Tc if Tc is some average (eg monthly) value.
    '''
    from geog0133.photJules import photosynthesis
    photo = photosynthesis()
    photo.data = np.zeros(n)
    # set plant type to C3
    if pft_type == 'C4 grass':
        photo.C3 = np.zeros([n]).astype(bool)
    else:
        photo.C3 = np.ones([n]).astype(bool)
    photo.Lcarbon = np.ones([n]) * 1
    photo.Rcarbon = np.ones([n]) * 1
    photo.Scarbon = np.ones([n]) * 1
    # set pft type
    # options are:
    # 'C3 grass', 'C4 grass', 'Needleleaf tree', 'Shrub'
    # 'Broadleaf tree'
    # Note that if C4 used, you must set the array
    # self.C3 to False
    photo.pft = np.array([pft_type] * n)
    # set up Ipar, incident PAR in (mol m-2 s-1)
    photo.Ipar = np.ones_like(photo.data) * ipar * 1e-6
    # set co2 (ppmv)
    photo.co2_ppmv = co2_ppmv * np.ones_like(photo.data)
    # set up a temperature range (C)
    try:
        if Tc is None:
            photo.Tc = Tc or np.arange(n) / (1. * n) * 100. - 30.
        else:
            photo.Tc = Tc
    except:
        photo.Tc = Tc
    # add noise Tnoise
    photo.Tc += np.random.normal(scale=Tnoise,
                                 size=photo.Tc.size).reshape(photo.Tc.shape)
    # initialise
    photo.initialise()
    # reset defaults
    photo.defaults()
    # calculate leaf and canopy photosynthesis
    photo.photosynthesis()
    try:
        if x == None:
            x = photo.Tc
    except:
        pass
    plotter = plotme(x, photo, plotter)
    return photo, plotter
We can run the photosynthesis for one or more PFT and plot results as a function of temperature by setting the PFT keyword to one of the following:
C3 grass
C4 grass
Broadleaf tree
Needleleaf tree
Shrub
[10]:
# list of all pfts
pfts = ['C3 grass', 'C4 grass',
        'Broadleaf tree', 'Needleleaf tree', 'Shrub']
plotter = {
    'n_subplots': len(pfts),  # number of sub-plots
    'name': 'default',        # plot name
    'ymax': 40                # max value for y set
}
# store the data for each PFT in a dictionary
output = {}
# loop over pfts
for pft in pfts:
    output[pft], plotter = do_photosynthesis(ipar=400, pft_type=pft,
                                             plotter=plotter)
>>> Saved result in photter_default.png
We can access the data generated from the variable photo, e.g. for We:
[8]:
print(output['C3 grass'].We * 1e6)
# or eg maximum value using .max()
print(f"max We value for C3 grass {output['C3 grass'].We.max()*1e6} umolCO2m-2s-1")
Using the table above and your understanding of the controls on photosynthesis in the model, answer the following questions, confirming your answers by running the model.
which PFT has the highest values of We, and why?
How does this change with increasing ipar?
When ipar is the limiting factor, how does assimilation change when ipar increases by a factor of k?
For C3 grasses, what are the limiting factors over the temperatures modelled for ipar=200?
For C3 grasses, what are the limiting factors over the temperatures modelled for ipar=400?
For C4 grasses, what are the limiting factors over the temperatures modelled?
We will now use some appropriate weather data from our work in 03-Solar_Practical.ipynb to run the photosynthesis model for what should be typical conditions in space and time.
[23]:
# list of all pfts
pfts = ['C3 grass', 'C4 grass',
        'Broadleaf tree', 'Needleleaf tree', 'Shrub']
# store the data for each PFT
output = {}
# loop over pfts
for pft in pfts:
    output[pft], plotter = do_photosynthesis(ipar=200, pft_type=pft)
    print(pft, output[pft].We.max()*1e6, 'umolCO2m-2s-1')
C3 grass 20.07264113525945 umolCO2m-2s-1
C4 grass 9.959999999999997 umolCO2m-2s-1
Broadleaf tree 13.381760756839634 umolCO2m-2s-1
Needleleaf tree 13.381760756839634 umolCO2m-2s-1
Shrub 13.381760756839634 umolCO2m-2s-1
This is an interesting figure, as we can now use our theory to produce plots of leaf-level assimilation that look similar to those we will find in the literature. The plots are also interesting because they show us which factor is limiting in these cases. For C3 grass at latitude 51 degrees, we see that Ws and Wc, the transport rate and carboxylating rate respectively, both functions of Vcmax, are the main limiting factors in the daytime. These are strong functions of temperature, relative to the optimal temperature range for the given PFT. In the morning and evening, we see that assimilation becomes light-limited. The temperature range for C3 grass is 0 to 36 C, so the temperatures we see are well away from the extremes of toleration.
Looking at the left-hand column, we see that the maximum assimilation rate very much follows the Tc patterns: it is higher for days 172 and 264, and lower for 80 and 355. But another factor that is apparent is the daylength: this is considerably longer for day 80 than 355, so even if the maximum assimilation rate is similar, the carbon assimilation would likely be larger in the spring because of the longer daylength.
All of the above experimentation was just at the leaf level. We have essentially looked at responses to temperature and light intensity. Of course, in a ‘real’ canopy, there will be varying amounts of leaf area, so we have to consider how to scale up the leaf-level assimilation to the canopy scale.
Although there are various ways to scale from leaf-level assimilation to the canopy level, we have only implemented what is perhaps the simplest here. This is based on the assumption that there is an acclimatisation of leaf \(N\) throughout the canopy (Sellers et al., 1992) giving:
\[V_m = V_{m0} \overline{f(L)}\]
where \(\overline{f(L)}\) is the average fraction of absorbed PAR (as opposed to instantaneous) at leaf area index (LAI) \(L\), \(V_{m0}\) is the ‘maximum’ (top leaf) assimilation, and \(V_m\) is the canopy-scale assimilation.
Assuming a homogeneous canopy, the canopy scale PAR use efficiency \(\Pi\) is:

\[\Pi = \Pi_0\,\frac{\overline{fAPAR}}{\overline{k}}\]

(with \(\Pi_0\) the corresponding leaf-level efficiency at the top of the canopy)
where \(\overline{fAPAR}\) is the (average) fraction of absorbed PAR by the canopy and \(\overline{k}\) is an effective extinction coefficient:
\[\overline{k} = \left[ \frac{G(\mu)}{\mu} \right] {(1-\omega_l)}^{\frac{1}{2}}\]

with \(\mu\) the cosine of the (time mean) solar zenith angle (a path length term), \(G(\mu)\) the ‘Ross’ or ‘\(G\)’-function giving the average normalised leaf projection in the direction of the (time mean) incoming radiation, and \(\omega_l\) the leaf single scattering albedo (unity minus leaf absorption) in the PAR region (see Sellers et al., 1992 for more details).
Under these assumptions then, we can calculate canopy scale photosynthesis.
Suppose we have an amount of leaf carbon of 0.07 \(kg\,C\,m^{−2}\) and a specific leaf density of 0.025 (\(kg\,C\,m^{−2}\) per unit of LAI) that is constant throughout the canopy (giving a LAI of 0.07/0.025 = 2.8), and a \(G\) function of 0.5 (e.g. a spherical leaf angle distribution). We can model this as:
[30]:
def GPP(p, verbose=False):
    # now we want the canopy level response
    p.LAI = p.Lcarbon / p.sigmal
    # leaf single scattering albedo
    p.omega = 0.2
    p.G = 0.5
    p.mubar = np.mean(p.mu_)
    p.kbar = (p.G / p.mubar) * np.sqrt(1 - p.omega)
    p.fapar = 1 - np.exp(-p.kbar * p.LAI)
    if verbose:
        print(f'doy {doy:03d} - mubar = {p.mubar:.2f}')
        print(f'doy {doy:03d} - kbar = {p.kbar:.2f}')
        print(f'doy {doy:03d} - fapar = {p.fapar.mean():.2f}')
    # kg C m-2 s-1: conversion factor from Clark et al. 2011
    p.PiG = 0.012 * (p.Al + p.Rd) * p.fapar / p.kbar
    return p
[5]:
latitude = 51.
longitude = 0.0
pft = "C3 grass"
Tnoise = 0.0
year = 2019
doys = [80, 172, 264, 355]
params = []
for i, doy in enumerate(doys):
    jd, ipar, Tc, mu = radiation(latitude, longitude, doy,
                                 domu=True, year=year)
    # run the leaf level model
    p = do_photosynthesis(n=len(ipar), pft_type=pft, Tc=Tc,
                          ipar=ipar, co2_ppmv=390,
                          x=ipar, plotter=None,
                          Tnoise=Tnoise)[0]
    del p.Pi
    p.Lcarbon = 0.07  # kg C m-2
    p.mu_ = mu
    p = GPP(p, verbose=True)
    params.append((jd, ipar, Tc, p, doy))
gpp_plot(params, info='', title=None)
The Net Primary Productivity (NPP) is obtained by subtracting the plant respiration terms from the GPP. This is typically split into maintenance and growth respiration: \(R_{pm}\) and \(R_{pg}\) respectively. In JULES, \(R_{pg}\) is assumed to be a fixed fraction of the difference between GPP and maintenance respiration:
\[R_{p} = R_{pm} + R_{pg}\]
\[R_{pg} = r_g\left(\Pi_{G} - R_{pm}\right)\]
where \(\Pi_{G}\) is the GPP (the canopy scale assimilation). In JULES, \(r_g\) is set to 0.25 for all PFTs (Clark et al., 2011). Leaf maintenance respiration in JULES is the (moisture-modified, through a term \(\beta\) that we have not dealt with here) canopy dark respiration (i.e. canopy-scaled). Root and stem respiration are set to depend on the nitrogen concentrations of the root and stem relative to the leaf nitrogen.
Since we have not introduced stem and root biomass yet, we will assume here that the leaf, root and (respiring) stem biomass terms (\(L\), \(R\) and \(S\) respectively) are equal for the moment, since we only require their relative amounts:

\[R_{pm} = 0.012 R_{dc} \beta \frac{N_l + N_r + N_s}{N_l}\]

where \(R_{dc}\) is the canopy-scale dark respiration, \(N_x\) is the Nitrogen concentration of biomass component \(x\), and the factor 0.012 converts units (see Clark et al., 2011).
\[N_l = n_m L\]
\[N_r = n_m R \mu_{rl}\]
\[N_s = n_m S \mu_{sl}\]
where \(\mu_{xl}\) is the relative Nitrogen concentration of biomass component \(x\) to leaf Nitrogen (assumed 1.0 here). \(\beta=1\) for unstressed conditions. So:
def NPP(p):
    p = GPP(p, verbose=False)
    # NPP calculation
    p.rg = 0.25
    # scale Rd (respiration in the light) up to canopy here
    p.Rpm = 0.036 * p.Rd * p.fapar / p.kbar
    # GPP from above, introducing beta
    p.PiG = 0.012 * (p.Al - p.beta * p.Rd) * p.fapar / p.kbar
    # Growth respiration is a fraction of (GPP - maint resp)
    p.Rpg = p.rg * (p.PiG - p.Rpm)
    # ensure Rpg is non negative
    p.Rpg[p.Rpg < 0] = 0.
    # total respiration
    p.Rp = p.Rpm + p.Rpg
    # NPP: calculated as the difference
    p.Pi = p.PiG - p.Rp
    return p
Now plot this along with GPP:
[7]:
latitude = 51.
longitude = 0.0
pft = "C3 grass"
Tnoise = 0.0
year = 2019
doys = [80, 172, 264, 355]
params = []
for i, doy in enumerate(doys):
    jd, ipar, Tc, mu = radiation(latitude, longitude, doy,
                                 domu=True, year=year)
    # run the leaf level model
    p = do_photosynthesis(n=len(ipar), pft_type=pft, Tc=Tc,
                          ipar=ipar, co2_ppmv=390,
                          x=ipar, plotter=None,
                          Tnoise=Tnoise)[0]
    # we need to store the cosine of the sun angle
    p.mu_ = mu
    # specify leaf carbon
    p.Lcarbon = 0.07  # kg C m-2
    # now we want the canopy level response
    p = NPP(p)
    params.append((jd, ipar, Tc, p, doy))
gpp_plot(params, info='', title=None)
We are now in a position to calculate the Carbon assimilation per day.
The values of GPP and NPP in the code are in units of \(kg\, C\, m^{-2} s^{-1}\) and simulated every 30 seconds, so to get the daily sum in units of \(kg\, C\, m^{-2} d^{-1}\) we can take the mean value and multiply by the number of seconds in a day:
[32]:
def daily_PP(pft="C3 grass", Lcarbon=0.07,
             latitude=51., longitude=0., year=2019,
             Tnoise=0.0):
    '''
    Daily GPP and NPP

    Returns numpy arrays of:
        doys
        gpp
        npp

    Keywords:
        pft       : PFT name, default "C3 grass"
        Lcarbon   : leaf carbon (kg C m-2), default 0.07
        latitude  : latitude (degrees), default 51.0
        longitude : longitude (degrees), default 0.0
        year      : year, default 2019 (2011-2019 allowed)
        Tnoise    : add Gaussian noise to Tc to mimic variations
    '''
    doys = np.arange(1, 366, dtype=int)
    gpp = np.zeros_like(doys).astype(float)
    npp = np.zeros_like(doys).astype(float)
    for i, doy in enumerate(doys):
        jd, ipar, Tc, mu = radiation(latitude, longitude,
                                     int(doy), domu=True, year=year)
        # run the leaf level model
        p = do_photosynthesis(n=len(ipar), pft_type=pft, Tc=Tc,
                              ipar=ipar, co2_ppmv=390,
                              x=ipar, plotter=None,
                              Tnoise=Tnoise)[0]
        # now we want the canopy level response
        p.mu_ = mu
        p.Lcarbon = Lcarbon  # kg C m-2
        p = NPP(p)
        nsec_in_day = 60 * 60 * 24
        gpp[i] = p.PiG.mean() * nsec_in_day
        npp[i] = p.Pi.mean() * nsec_in_day
    return doys, gpp, npp
# run the daily model (with the settings used above)
doys, gpp, npp = daily_PP(pft=pft, latitude=latitude, longitude=longitude,
                          year=year, Tnoise=Tnoise)

plt.figure(figsize=(10, 5))
plt.title(f'GPP and NPP: lat {latitude:.1f} lon {longitude:.1f}:' +
          f' {year}, {pft}')
plt.plot(doys, gpp * 1e3, label='GPP')
plt.plot(doys, npp * 1e3, label='NPP')
plt.ylabel('GPP/NPP $[g\,C m^{-2}d^{-1}]$')
plt.xlabel('day of year')
_ = plt.legend()
This looks like a reasonable form for the plot: at 51 degrees latitude, we get a strong variation from winter to summer.
The ‘jagged’ shape of the lines arises from assuming the same value of temperature for each month. You can reduce the impact of that by setting Tnoise to some positive value if you like. The value you set specifies the standard deviation of the noise added to the temperature. If you do, then you might want to examine the impact of the noise (daily variation) on the mean NPP/GPP. Why might (zero mean) random noise (variation) on temperature be expected to affect the average NPP/GPP?
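As a hint on that last question: the photosynthetic response to temperature is nonlinear, so the mean of the response is not the response of the mean temperature. A minimal sketch of the effect, using a hypothetical peaked temperature response (deliberately not the course model code):

```python
import numpy as np

# hypothetical Gaussian-shaped temperature response (a placeholder:
# any nonlinear response that is concave near its optimum will do)
def f(T, Topt=25.0, width=10.0):
    return np.exp(-((T - Topt) / width) ** 2)

rng = np.random.default_rng(0)
T = 25.0                               # mean temperature, at the optimum
noise = rng.normal(0.0, 5.0, 100000)   # zero-mean noise, sd = 5 (like Tnoise)

resp_of_mean = f(T)                    # response at the mean temperature
mean_of_resp = f(T + noise).mean()     # mean response with daily variation

print(resp_of_mean, mean_of_resp)
```

Near the optimum the response curve is concave, so adding zero-mean temperature noise reduces the average response (Jensen's inequality); away from the optimum, on a convex part of the curve, the sign of the effect can reverse.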
Since we have calculated NPP and GPP, we can integrate them over the year. We can report the summation in units of \(g\, C\, m^{-2} y^{-1}\) or equivalently \(t\, C\, km^{-2} y^{-1}\), with \(t\) metric tons (1000 kg).
Sometimes, we will see NPP reported in units of \(t\, C\, ha^{-1} y^{-1}\). Since \(1\, ha = 0.01 km^2\), this involves a scaling factor of 0.01:
[11]:
print('mean NPP = {:.4f} {:s}'.format(np.mean(npp) * 1e3, 'g C m-2 day-1'))
print('mean GPP = {:.4f} {:s}'.format(np.mean(gpp) * 1e3, 'g C m-2 day-1'))
integral = np.sum(npp) * 1e3  # g C m-2 yr-1
print(f"NPP = {integral*0.01:.2f} t C/ha/yr")
mean NPP = 1.7052 g C m-2 day-1
mean GPP = 3.2309 g C m-2 day-1
NPP = 6.22 t C/ha/yr
NPP is a key concept in terrestrial carbon dynamics. It expresses the ‘raw’ inputs of carbon (from the atmosphere) to vegetation. It is driven by solar radiation and so, not surprisingly, broadly follows global patterns of radiation, but it is also limited by temperature, water, nutrients, and the opportunity to grow.
We have used all of the theory we have learned about in the lectures to develop a model of NPP that we can apply to different biomes (PFTs), locations and times. The model we have here is intentionally limited and slightly simplified for illustration, but is in essence the same as you will find in any DGVM. We have seen that the emphasis here has been on leaf-level carbon exchange, with a somewhat simplistic scaling of process from the leaf to the canopy, but that is the state of the art in this
area. We have also seen that the PFT approach allows us to draw general conclusions about primary production and the impact of climate on this, but even in this case, there are quite a number of internal parameters that are assumed known. We have not, in this work, included any idea of these parameters being uncertain or poorly constrained, but that is most surely the case. How well this sort of modelling relates to reality has been the subject of a large number of studies and comparisons with
measurement, and it is only through such comparisons that we learn about any deficiencies in our models.
One of the simplifications made in the modelling here is that the leaf carbon (hence the leaf area index, LAI) is specified: even though we calculate the carbon input, we do not (in this model) go the final stage of allocating this carbon to plant organs. One reason for that is that we then need to consider phenology, which is beyond the scope of this practical. When you are thinking through what you have learned from this practical though, you should have the ideas to hand from the lecture
materials to be able to think through how we might implement plant growth and senescence in a model of this nature: once you have that, you have most of the ingredients of your own DGVM!
The notebook may have been downloaded as a copy with a modified extension, e.g. 011_Photosynthesis_Modelling_Practical.ipynb.1. Use a mv command to change its name. If you have previously modified this notebook, you may wish to keep another copy.
You may need to update some code within the geog0133 subdirectory. Use:
The modelling activities in the practical rely on a global dataset of temperature and cloud cover. We add to this a model of incoming radiation. These data are then used as input variables. The input data for a given doy and year can be plotted below. Can you make a similar map for GPP/NPP?
We can try to fill the map using a loop: a working example is below, though this has issues.
[85]:
#### remembering the doy and year from above we loop over the map
#### (this is inefficient use of python, but it is the most usable and understandable)
#### the loop only goes up to 60 in the lon index; this is slow enough,
#### you can alter this and wait
Tnoise = 0.0
pft = "C3 grass"
Lcarbon = 0.07
### first an empty array
array_gpp = np.zeros_like(data_map['tmx'][0]) * np.nan
array_npp = np.zeros_like(data_map['tmx'][0]) * np.nan
for l1, lat in enumerate(lats[:]):
    for l2, lon in enumerate(lons[:60]):
        ### ignore the ocean
        if np.isnan(ocean_mask[l1, l2]):
            continue
        #### don't do anything if no data
        if data_map['tmx'].mask[0, l1, l2]:
            continue
        jd, ipar, Tc, mu = radiation(lat, lon,
                                     int(doy), domu=True, year=year)
        # run the leaf level model
        p = do_photosynthesis(n=len(ipar), pft_type=pft, Tc=Tc,
                              ipar=ipar, co2_ppmv=390,
                              x=ipar, plotter=None,
                              Tnoise=Tnoise)[0]
        # now we want the canopy level response
        p.mu_ = mu
        p.Lcarbon = Lcarbon  # kg C m-2
        p = NPP(p)
        nsec_in_day = 60 * 60 * 24
        array_gpp[l1, l2] = p.PiG.mean() * nsec_in_day
        array_npp[l1, l2] = p.Pi.mean() * nsec_in_day
[88]:
#### plotting the results of the above loop
f = plt.figure(figsize=(7, 10))
ax = f.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.set_title(f'doy {doy}, {year}')
s = ax.pcolormesh(lons, lats, array_gpp)
ax.add_feature(cfeature.COASTLINE)
sax = plt.colorbar(s)
sax.set_label('GPP')
plt.show()
Here are some coding challenges you may wish to explore:
Sit and wait for the whole map to process and visualise it (not recommended).
Find a smaller area to plot. Use ax.set_extent([lon_min,lon_max,lat_min,lat_max]) to zoom in the projection. Can you adjust the loop above to only process the code in this area? (HINT: if lon > lon_min and lon < lon_max)
Reduce the size of the problem. Iterate over every 10 grid cells in each direction, fill these cells with the value from the centre.
If you really know how to use your python and numpy efficiently, and you REALLY want these detailed global maps of NPP/GPP, look in the geog0133 folder by this notebook. Here are all the functions that we use in this practical. Can you modify them to take in the whole CRU maps as arrays? This is the efficient way to use numpy.
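The ‘every 10 grid cells’ challenge above can be sketched as follows, with a synthetic temperature grid standing in for the real CRU data and a trivial placeholder function in place of the full radiation/photosynthesis chain:

```python
import numpy as np

# synthetic stand-ins for the real gridded inputs (placeholders only)
nlat, nlon = 60, 120
tmx = np.random.default_rng(1).uniform(0.0, 30.0, (nlat, nlon))

def cheap_npp(t):
    # placeholder for the full radiation/photosynthesis calculation
    return 0.1 * t

step = 10
array_npp = np.full((nlat, nlon), np.nan)
for l1 in range(0, nlat, step):
    for l2 in range(0, nlon, step):
        # evaluate only at the centre of each step x step block ...
        c1 = min(l1 + step // 2, nlat - 1)
        c2 = min(l2 + step // 2, nlon - 1)
        value = cheap_npp(tmx[c1, c2])
        # ... and fill the whole block with that single value
        array_npp[l1:l1 + step, l2:l2 + step] = value
```

This reduces the number of expensive model evaluations by a factor of step squared (100 here), at the cost of a blocky map.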
I can advise on these extra tasks, but they are extra material for those who want, so you are somewhat on your own!
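As a flavour of the numpy route mentioned in the last challenge: the canopy-scaling arithmetic applies elementwise to whole arrays, so the per-pixel loops can disappear. A minimal illustration on synthetic LAI and mean-\(\mu\) maps (stand-ins, not the real CRU inputs):

```python
import numpy as np

# synthetic whole-map inputs (placeholders for the real gridded data)
rng = np.random.default_rng(2)
LAI = rng.uniform(0.5, 6.0, (360, 720))    # leaf area index map
mubar = rng.uniform(0.2, 0.9, (360, 720))  # mean cosine of solar zenith

G, omega = 0.5, 0.2
# the same expressions as in GPP(), but applied to every pixel at once
kbar = (G / mubar) * np.sqrt(1 - omega)
fapar = 1 - np.exp(-kbar * LAI)
# one array expression replaces the double loop over lat/lon indices
```

The same idea applies to the rest of the chain, provided the functions are written in terms of array operations rather than scalars.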
In the previous lectures, we have dealt with the general context for wanting to monitor and model land surface processes and terrestrial carbon exchanges in particular. We have also dealt with how these carbon processes are currently modelled in DGVMs and similar tools. We have emphasised the terrestrial vegetation components throughout, as (as we shall see) this is where remote sensing data have most to directly offer (it is difficult at best to monitor soil carbon stocks and fluxes from Earth Observation (EO)).
In this section, we will introduce ideas of measurement of quantities pertinent to terrestrial carbon. The material presented will be an overview only that students are expected to read through. The ‘lecture’ session this week will be run as a set of student-led seminars on particular aspects of this material (details below).
Schimel et al. (2014) Observing terrestrial ecosystems and the carbon cycle from space: a review, https://doi.org/10.1111/gcb.12822
measurement of Carbon stocks (i.e. how much carbon is there in different pools (e.g. leaf, wood, roots) and what are the (longer term) changes in these stocks?);
measurement of Carbon fluxes (i.e. what are the rates (on short time scales, seconds, hours, days, or months) of carbon exchanges and how are these partitioned (e.g. GPP, respiration)?);
disturbance (e.g. fire, disease, logging, but concentrating on fire here)
We can more simply state the first two of these as ‘state’ and ‘rates’. Disturbance, then, is a sudden change of state.
This division gives us 8 topics:
Disturbance: Remote Sensing Methods
Disturbance: Emissions
Disturbance: Models
Carbon stocks: Biomass density from ground data and allometric relationships
Carbon stocks: Ground-based measurements of leaf area
Carbon stocks: Biomass density from remote sensing
Carbon fluxes: Flux tower measurements
Carbon fluxes: Remote Sensing measurements
You will work on the presentations in teams of 2-3 people per team. You will be asked to organise yourselves into these teams so that all topics are covered. If you do not do so, you will be allocated a team.
The material you prepare and distribute should be informative enough that you could use it as a basis for answering exam questions on the topic of carbon-related measurements. If you look at exam papers for previous years, you will notice that it typically includes a question on topics from this material.
For the presentations, you should prepare:
a 10 minute ppt presentation on the topic
1-2 pages of text, summarising the main points of your talk, as a pdf to be distributed to the whole class
a page of references used, as a pdf to be distributed to the whole class
You will have a chance to present your slides live to the class during class sessions in the ‘presentation week’.
One of the most important aspects of disturbance that Remote Sensing is able to have a significant impact on is fire, so we will concentrate on that here.
One result of wildfire is the injection of particulates and gases from the land surface into the atmosphere. There are several ways in which EO contributes to the monitoring of emissions from fires: (i) emissions accounting, where EO-based estimates of burned area (e.g. the MODIS burned area product) are used in the framework of Crutzen/Seiler to provide the IPCC Greenhouse Gas (GHG) Inventory Process; (ii) monitoring and modelling of biomass burning plumes; (iii) (atmospheric) remote sensing of trace gases and proxies; (iv) fire radiative energy (related to CO2 emissions from wildfire), being a temporal integral of fire radiative power (measured from thermal EO); and (v) optical remote sensing of land surface change, attributed to fire impacts (burned area mapping).
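The Seiler and Crutzen (1980) accounting referred to in (i) is essentially multiplicative: the emitted mass of a species is burned area times fuel load times combustion completeness times an emission factor. A minimal sketch with illustrative (made-up) input values:

```python
# Seiler & Crutzen (1980)-style emissions accounting: M = A * B * C * EF
# all numbers below are illustrative only, not real inventory values
A = 2.0e4   # burned area (ha), e.g. from a MODIS burned area product
B = 10.0    # fuel load / biomass density (t dry matter per ha)
C = 0.5     # combustion completeness (fraction of fuel consumed)
EF = 1.65   # emission factor for CO2 (t CO2 per t dry matter burned)

M_burned = A * B * C   # dry matter actually combusted (t)
M_co2 = M_burned * EF  # emitted CO2 (t)
print(M_burned, M_co2)
```

Each factor carries its own uncertainty, which is one reason EO-constrained burned area and fire radiative power approaches can give quite different emissions estimates.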
We define three categories to be investigated in the student presentations: emissions, models, and remote sensing methods. Some references to get you started in the subject area are provided, but you will need to read more widely than this and do your own research as well. Since this section is on measurements, you should concentrate on issues such as the physical basis for the measurements made, what we have learned from measurements, and how measurements are used. Clearly, the modelling section will have less on actual measurements, but is here to allow us to relate the measurement material to the modelling work we have been looking at.
Paugam, R. et al. 2016, A review of approaches to estimate wildfire plume injection height within large-scale atmospheric chemical transport models, 10.5194/acp-16-907-2016
Xu, W. et al. 2020, First Study of Sentinel-3 SLSTR Active Fire Detection and FRP Retrieval: Night-time Algorithm Enhancements and Global Intercomparison to MODIS and VIIRS AF Products, https://doi.org/10.1016/j.rse.2020.111947
Zhang, T. Et al. 2020, Trends in eastern China agricultural fire emissions derived from a combination of geostationary (Himawari) and polar (VIIRS) orbiter fire radiative power products, https://doi.org/10.5194/acp-20-10687-2020
Johnston, J et al, 2020, Development of the user requirements for the Canadian wildfiresat satellite mission, https://doi.org/10.3390/s20185081
Fisher D., Wooster, M. 2019, Multi-decade global gas flaring change inventoried using the ATSR-1, ATSR-2, AATSR and SLSTR data records, https://doi.org/10.1016/j.rse.2019.111298
Simpson, J. Et al., 2016, Tropical Peatland Burn Depth and Combustion Heterogeneity Assessed Using UAV Photogrammetry and Airborne LiDAR, https://doi.org/10.3390/rs8121000
Roberts, G and Wooster M, 2014, Development of a multi-temporal Kalman filter approach to geostationary active fire detection & fire radiative power (FRP) estimation, https://doi.org/10.1016/j.rse.2014.06.020
D.P. Roy, et al. (2005) Prototyping a global algorithm for systematic fire-affected area mapping using MODIS time series data, Remote Sensing of Environment Volume 97, Issue 2 , 30 July 2005, Pages 137-162
Roy, D.P. et al, (2019) Landsat-8 and Sentinel-2 burned area mapping - a combined sensor multi-temporal change detection approach, 10.1016/j.rse.2019.111254
Crutzen, P.J., L.E. Heidt, J.P. Krasnec, W.H. Pollock and W. Seiler, 1979: Biomass burning as a source of atmospheric gases CO, H2, N2O, NO, CH3Cl and COS. Nature, 282, 253-256.
Seiler, W. and P.J. Crutzen, 1980: Estimates of gross and net fluxes of carbon between the biosphere and the atmosphere from biomass burning. Climatic Change, 2, 207-247
Giglio, L. et al. 2013, Analysis of daily, monthly, and annual burned area using the fourth‐generation global fire emissions database (GFED4), https://doi.org/10.1002/jgrg.20042
Van der Werf, G et al., 2017, Global fire emissions estimates during 1997-2016, 10.5194/essd-9-697-2017
van Marle, M. J. E., Kloster, S., Magi, B. I., Marlon, J. R., Daniau, A.-L., Field, R. D., et al. (2017). Historic global biomass burning emissions based on merging satellite observations with proxies and fire models (1750 - 2015). Geoscientific Model Development, 10, 3329-3357. doi:10.5194/gmd-2017-32.
Li, F. Et al, 2012, A process-based fire parameterization of intermediate complexity in a Dynamic Global Vegetation Model, Biogeosciences, 9, 2761–2780, 2012, www.biogeosciences.net/9/2761/2012/
Li, F., Val Martin, M. , Andreae, M.O. et al., (2019) Historical (1700–2012) global multi-model estimates of the fire emissions from the Fire Modeling Intercomparison Project (FireMIP). Atmospheric Chemistry and Physics, 19 (19). pp. 12545-12567. ISSN 1680-7316
Johnston, J et al, 2018, Satellite detection limitations of sub-canopy smouldering wildfires in the north american boreal forest, https://doi.org/10.3390/fire1020028
Archibald et al., 2013, Defining pyromes and global syndromes of fire regimes, Proc. Natl. Acad. Sci. U.S.A., 110 (16) (2013), pp. 6442-6447
Biomass density from ground data and allometric relationships
The ‘simplest’ (or at least the most direct way) way of measuring carbon stocks is destructive measurement (i.e. chop down a tree and measure the carbon content of the leaves and wood: roots are in any case more tricky), but clearly it would be fruitless to remove all terrestrial carbon stocks just to measure them. Because of this, sampling is required. Because of the historical interest of ‘foresters’ in timber, various efficient ways have been devised to estimate timber volume. Examples of this include Brown et al. (1989) who attempt to establish protocols for above ground biomass (AGB) for tropical forests. Note that ‘foresters’ estimates of ‘useable timber’ AGB will always be lower than the total AGB.
Most of these methods use measures that are ‘easy’ to obtain in the field to calibrate ‘allometric’ relationships with timber volume or AGB. Most typically, the core measurement is DBH (trunk diameter at breast height). Sometimes, different allometric relationships are applied for different classes of tree (e.g. species for forestry, or other classifications for AGB estimates).
Scatterplot of AGB (kg) against DBH (cm) from Brown et al. (1989)
Note that there may be significant scatter in such relationships, especially at higher biomass levels. Other terms may be used to contribute to the allometric relationships, such as tree height and wood density or specific gravity. Since area-based estimates are typically required, some way of accounting for number density is also required. Since not all tree components are generally sampled or accounted for, ‘expansion factors’ may be required to account for these.
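Such allometric relationships are typically calibrated as a power law, \(AGB = a\,DBH^b\), fitted as a straight line in log-log space. A minimal sketch on synthetic ‘destructive harvest’ data (the coefficients and scatter here are invented, not Brown et al.’s):

```python
import numpy as np

# synthetic 'harvest' data: AGB = a * DBH^b with lognormal scatter
# (a_true and b_true are invented for illustration)
rng = np.random.default_rng(3)
dbh = rng.uniform(5.0, 80.0, 200)  # trunk diameter at breast height (cm)
a_true, b_true = 0.1, 2.4
agb = a_true * dbh ** b_true * np.exp(rng.normal(0.0, 0.2, 200))  # kg

# fit log(AGB) = log(a) + b * log(DBH) by least squares
b_fit, log_a_fit = np.polyfit(np.log(dbh), np.log(agb), 1)
a_fit = np.exp(log_a_fit)

# predict AGB for an unmeasured tree, e.g. DBH = 30 cm
agb_30 = a_fit * 30.0 ** b_fit
```

Note that back-transforming a log-log fit introduces a systematic bias that published allometries correct for, and that the absolute residual scatter grows with biomass, which is exactly the high-biomass scatter noted above.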
In reviewing this subject, you should consider the likely accuracy of such measurements and the complexities involved. You should also consider other ground-based methods that are applied and some of the methods applied to scaling up estimates (e.g. simple regression/GIS modelling). You will find the FAO primer by Brown (1997) useful for this.
You should also consider how these methods are/can be used to monitor biomass change.
It would also be of relevance to make yourself aware of REDD+.
You might also like to consider how the below ground components of vegetation carbon stocks can be estimated/measured, as well as soil carbon.
Brown, S. 1997, Estimating Biomass and Biomass Change of Tropical Forests: a Primer. (FAO Forestry Paper - 134)
Brown, S., Gillespie, A.J.R., Lugo, A.E. (1989) Biomass estimation methods for tropical forests with application to forest inventory data, Forest Science 35(4), 881-902.
Taylor, D., Hamilton, A.C., Lewis, S.L., Nantale, G. (2008) Thirty-Eight years of change in a tropical forest: plot data from Mpanga forest reserve, Uganda. African Journal of Ecology, 46, 655-667.
Malhi Y, Wood D, Baker TR, Wright J, Phillips OL, Cochrane T, Meir P, Chave J, Almeida S, Arroyo L, Higuchi N, Killeen TJ, Laurance SG, Laurance WF, Lewis SL, Monteagudo A, Neill DA, Vargas PN, Pitman NCA, Quesada CA, Salomao R, Silva JNM, Lezama AT, Terborgh J, Martinez RV, Vinceti B. (2006) The regional variation of aboveground live biomass in old-growth Amazonian forests. Global Change Biology, 12: 1107-1138.
Baker, T. R., Phillips, O. L., Malhi, Y., Almeida, S., Arroyo, L., Di Fiore, A., Killeen, T., Laurance, S. G., Laurance, W. F., Lewis, S. L., Lloyd, J., Monteagudo, A., Neill, D., Patino, S., Pitman, N., Silva, N. & Martinez, R. V. (2004a) Variation in wood density determines spatial patterns in Amazonian forest biomass. Global Change Biology, 10, 545-562.
Kristiina A. Vogt, Daniel J. Vogt and Janine Bloomfield (1998) Analysis of some direct and indirect methods for estimating root biomass and production of forests at an ecosystem level, Plant and Soil, 1998, Volume 200, Number 1, Pages 71-89
Clark and Kellner (2012) Tropical forest biomass estimation and the fallacy of misplaced concreteness. J. Veg. Sci. 23, 1191 – 1196. (doi:10. 1111/j.1654-1103.2012.01471.x).
Chave et al. (2014) Improved allometric models to estimate the aboveground biomass of tropical trees. Glob. Change Biol. 20, 3177 – 3190. (doi:10.1111/ gcb.12629)
Calders et al. (2015) Non-destructive estimates of above-ground biomass using terrestrial laser scanning. Methods Ecol. Evol. 6, 198 – 208. (doi: 10.1111/2041-210X.12301)
Gonzalez de Tanago et al (2017) Estimation of above-ground biomass of large tropical trees with Terrestrial LiDAR. Methods Ecol. Evol.
Disney MI, Boni Vicari M, Burt A, Calders K, Lewis SL, Raumonen P, Wilkes P (2018) Weighing trees with lasers: advances, challenges and opportunities. Interface Focus 8(2): 20170048. https://doi.org/10.1098/rsfs.2017.0048
As with biomass, measurement of LAI can be carried out destructively, but this is time consuming and costly. It is more usual therefore to apply methods that in some way rely on measurements of canopy transmission or gap fraction. A typical example of the former is the LAI-2000 instrument, in which a measurement is taken above and below a canopy and the transmission inferred. From this, LAI is then inferred. Care must be taken in using this instrument and interpreting the data, as factors such as clumping are not well-treated and the measure is essentially an ‘effective’ LAI.
More generally, some measure of gap fraction is used (see e.g. Welles & Cohen, 1996). In recent years, the use of digital photography (with a fisheye lens) has become commonplace.
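The gap fraction approach rests on the same Beer's-law attenuation used for fAPAR earlier in this practical: the probability of a gap at view zenith angle \(\theta\) is \(P_{gap} = \exp(-G(\theta)L/\cos\theta)\), which can be inverted for an effective LAI. A minimal sketch, assuming a spherical leaf angle distribution (\(G = 0.5\)):

```python
import numpy as np

def lai_from_gap(p_gap, theta_deg, G=0.5):
    # invert P_gap = exp(-G * L / cos(theta)) for effective LAI L
    mu = np.cos(np.radians(theta_deg))
    return -np.log(p_gap) * mu / G

# e.g. a measured gap fraction of 0.25 at 30 degrees view zenith
L_eff = lai_from_gap(0.25, 30.0)
```

Because clumping violates the random-foliage assumption behind this expression, the retrieved value is an ‘effective’ LAI, as noted above.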
You will find that there is much repetitive literature in this area.
Ground-based measurements of LAI (and related fractional cover) are also very important for the ‘validation’ of satellite products of these measures.
Stenberg et al. (1994) Performance of the LAI-2000 plant canopy analyzer in estimating leaf area index of some Scots pine stands, Tree Physiology, 14, 981-995.
Jon M. Welles and Shabtai Cohen (1996) Canopy structure measurement by gap fraction analysis using commercial instrumentation, J. Exp. Bot. (1996) 47 (9): 1335-1342. doi: 10.1093/jxb/47.9.1335
Nilson, T. and Kuusk, A., 2004, Improved algorithm for estimating canopy indices from gap fraction data in forest canopies, Agricultural and Forest Meteorology 124 (2004) 157-169
Nathalie J. J. Bréda (2003) Ground-based measurements of leaf area index: a review of methods, instruments and current controversies, J. Exp. Bot. 54 (392): 2403-2417. doi: 10.1093/jxb/erg263
Justice, C., Belward, A., Morisette, J., Lewis, P., Privette, J., Baret, F. (2000) Developments in the validation of satellite products for the study of the land surface. International Journal of Remote Sensing 21(17), 3383-3390.
Woodgate et al. (2015) An improved theoretical model of canopy gap probability for Leaf Area Index estimation in woody ecosystems, Forest Ecology and Management, 358, 303-320.
Whilst relationships can be calibrated between optical remote sensing measurements transformed to vegetation indices and (above ground) biomass (e.g. Samimi and Kraus, 2004), these tend to be only quite local in their application, partly due to factors such as non-green biomass, which is not accounted for in such data (Gamon et al., 1995). There are arguments that a time integral of measures such as NDVI can provide more robust estimates, but this is essentially based on a PEM view of GPP and NPP, discussed more below.
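The ‘PEM view’ referred to here is the Monteith light use efficiency approach: GPP is an efficiency term \(\epsilon\) multiplied by absorbed PAR, with fAPAR typically mapped from a vegetation index such as NDVI. A minimal sketch with illustrative numbers (the \(\epsilon\) value and the linear NDVI-to-fAPAR mapping are placeholder assumptions, not a calibrated model):

```python
import numpy as np

# illustrative daily time series (placeholders, not real data)
par = np.array([5.0, 8.0, 10.0, 7.0])   # incident PAR, MJ m-2 d-1
ndvi = np.array([0.3, 0.5, 0.7, 0.6])   # from optical EO

# crude linear NDVI -> fAPAR mapping (an assumption for illustration)
fapar = np.clip(1.25 * ndvi - 0.1, 0, 1)

eps = 1.8  # light use efficiency, g C per MJ APAR (illustrative)
gpp = eps * fapar * par   # g C m-2 d-1, Monteith-style PEM

annual_proxy = gpp.sum()  # time integral, cf. time-integrated NDVI
```

Summing such daily estimates over a season is why the time integral of NDVI is argued to track productivity better than any single-date index value.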
The most promising EO technologies for biomass estimation are radar and lidar. The main reason for radar being useful is that the longer wavelength SARs in particular are mainly responsive to scattering from particular tree branch/trunk components, so backscatter can be broadly related to biomass. A problem is the saturation of these relationships at high biomass volumes.
SAR backscatter data can be supplemented with height estimates from interferometry in some cases, but decoherence over vegetation canopies makes this difficult to achieve with repeat pass methods. If height can be estimated, then allometric relationships can be applied to estimate AGB. Height estimates however require some estimate of both the scattering height in the canopy and the ground scattering height. This can sometimes be achieved with polarimetric data. In fact, decoherence itself is seen as a source of information, the idea being essentially that the decoherence is greater the higher the trees.
Another technology of value here is lidar measurement, which aims to estimate tree or canopy height from the detection of ground and crown responses in a lidar waveform or the detection of ground and crown lidar ‘hits’ in discrete lidar data. Again, the translation to biomass relies on allometric relationships with height.
Starting points for reading:
Duncanson L, Rourke O, Dubayah R. 2015 Small sample sizes yield biased allometric equations in temperate forests. Nat. Sci. Rep. 5, 17153. (doi: 10.1038/srep17153
Houghton RA, Nassikas AA. 2017 Global and regional fluxes of carbon from land use and land cover change 1850 – 2015. Glob. Biogeochem. Cycles 31, 456 – 472. (doi:10.1002/2016GB005546)
Houghton RA, Byers B, Nassikas AA. 2015 A role for tropical forests in stabilizing atmospheric CO2. Nat. Clim. Change 5, 1022-1023. (doi:10.1038/nclimate2869)
Saatchi S et al. 2011 Benchmark map of forest carbon stocks in tropical regions across three continents. Proc. Natl Acad. Sci. USA 108, 9899-9904. (doi:10.1073/pnas.1019576108)
Baccini A et al. 2012 Estimated carbon dioxide emissions from tropical deforestation improved by carbon-density maps. Nat. Clim. Change 2, 182 – 185. (doi:10.1038/nclimate1354)
Mitchard ET, Saatchi SS, Baccini A, Asner GP, Goetz SJ, Harris NL, Brown S. 2013 Uncertainty in the spatial distribution of tropical forest biomass: a comparison of pan-tropical maps. Carbon Balance Manage. 8, 10. (doi:10.1186/1750-0680-8-10)
Mitchard ET et al. 2014 Markedly divergent estimates of Amazon forest carbon density from ground plots and satellite. Glob. Ecol. Biogeogr. 23, 935 – 946. (doi:10.1111/geb.12168)
John A. Gamon, Christopher B. Field, Michael L. Goulden, Kevin L. Griffin, Anne E. Hartley, Geeske Joel, Josep Penuelas and Riccardo Valentini (1995) Relationships Between NDVI, Canopy Structure, and Photosynthesis in Three Californian Vegetation Types, Ecological Applications, Vol. 5, No. 1, Feb., 1995
Lefsky, M. A, D. J Harding, M. Keller, W. B Cohen, C. C Carabajal, F. D.B Espirito-Santo, M. O Hunter, and R. de Oliveira Jr. 2005. Estimates of forest canopy height and aboveground biomass using ICESat. Geophysical Research Letters 32, no. 22: L22S02.
Koch, B. 2010. Status and future of laser scanning, synthetic aperture radar and hyperspectral remote sensing data for forest biomass assessment. ISPRS Journal of Photogrammetry and Remote Sensing 65, no. 6 (November): 581-590. doi:10.1016/j.isprsjprs.2010.09.001.
Dubayah, R. O, and J. B Drake. 2000. Lidar remote sensing for forestry. Journal of Forestry 98, no. 6: 44-46.
Balzter, H. 2001. Forest mapping and monitoring with interferometric synthetic aperture radar (INSAR). Progess in Physical Geography, 25(2):159-177.
Imhoff, M.L. (1995). Radar backscatter and biomass saturation: ramifications for global biomass inventory. IEEE Transactions on Geoscience and Remote Sensing, 33: 511-518.
Le Toan, T.; Beaudoin, A.; Guyon, D. (1992). Relating forest biomass to SAR data. IEEE Transactions on Geoscience and Remote Sensing, 30(2): 403-411.
Thuy Le Toan, Shaun Quegan, Ian Woodward, Mark Lomas and Nicolas Delbart, et al. (2004) Relating Radar Remote Sensing of Biomass to Modelling of Forest Carbon Budgets Climatic Change, 2004, Volume 67, Numbers 2-3, Pages 379-402
Elgene O. Box, Brent N. Holben and Virginia Kalb (1989) Accuracy of the AVHRR vegetation index as a predictor of biomass, primary productivity and net CO2 flux, Plant Ecology, 1989, Volume 80, Number 2, Pages 71-89
Cyrus Samimi and Tanja Kraus (2004) Biomass estimation using Landsat-TM and -ETM+. Towards a regional model for Southern Africa? GeoJournal, 2004, Volume 59, Number 3, Pages 177-187
A good deal of what has been learned about the processes involved in terrestrial carbon, most certainly when it comes to testing models, has been done on the back of flux tower measurements. The majority of these use ‘eddy covariance’ methods that, under turbulent wind conditions, allow Net Ecosystem Productivity (NEP) to be inferred from gas exchange measurements (water vapour and CO2 mainly, but also e.g. methane). NEP can be inferred from the integral of these measurements. Because they require turbulence, this method does not generally work well at night, so forms of ‘gap filling’ are applied. Other rate terms such as NPP or GPP can be inferred from the NEP data, usually through the application of a model.
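The eddy covariance idea itself can be stated compactly: the vertical flux of a scalar is the time-averaged covariance between fluctuations in vertical wind speed \(w\) and the scalar concentration \(c\), i.e. \(F = \overline{w'c'}\). A minimal sketch on synthetic high-frequency data (stand-ins, not real tower measurements):

```python
import numpy as np

# synthetic 10 Hz samples over 30 minutes, standing in for sonic
# anemometer (w) and gas analyser (c) data
rng = np.random.default_rng(4)
n = 10 * 60 * 30
w = rng.normal(0.0, 0.3, n)                    # vertical wind (m s-1)
c = 15.0 - 0.5 * w + rng.normal(0.0, 0.2, n)   # CO2 density (mmol m-3),
                                               # lower in updrafts (uptake)

# eddy covariance: flux = mean product of the fluctuations
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)  # mmol m-2 s-1

# a negative flux here means net downward transport: uptake by the surface
```

In practice, additional corrections (coordinate rotation, density/WPL corrections, despiking) are applied before fluxes are reported, and the 30-minute averaging period is typical of networks such as FLUXNET.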
For terrestrial ecosystems, instruments are generally mounted on a tower above the vegetation. They measure gas exchange from a ‘footprint’ around the tower, where the size of this footprint depends on factors such as vegetation roughness (but may typically be around 1 km), and its direction relative to the tower depends on the wind direction.
Other methods rely on measuring concentrations of gases, rather than fluxes. Fluxes can then be inferred by assuming some model of atmospheric transport and surface exchange. These methods tend to cover larger areas. Examples include the ‘tall towers’ network, e.g. the measurements on the Angus mast in Scotland.
Instruments such as the LiCor Li-8100A or other chamber instruments can be used for soil flux measurements or measurements over very short vegetation.
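As an illustration of the eddy covariance idea, the flux is the covariance between fluctuations in vertical wind speed and gas concentration over an averaging period. A minimal sketch on synthetic data (all values are illustrative, not from any real tower):

```python
import numpy as np

# Synthetic high-frequency data: 10 Hz sampling over a 30 minute
# averaging period. The numbers are illustrative only.
rng = np.random.default_rng(0)
n = 18000
w = rng.normal(0.0, 0.3, n)                 # vertical wind speed (m s-1)
c = 15.0 - 0.5*w + rng.normal(0.0, 0.1, n)  # CO2 density (mmol m-3)

# Reynolds decomposition: subtract the period means to get fluctuations
w_prime = w - w.mean()
c_prime = c - c.mean()

# the vertical flux is the mean product of the fluctuations, w'c'
flux = np.mean(w_prime*c_prime)
print('flux =', flux)  # negative here: net uptake in this synthetic case
```

In a real system the raw data would also need despiking, coordinate rotation and density corrections, but the core calculation is just this covariance.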
Rayner, P. J. et al. Two decades of terrestrial carbon fluxes from a carbon cycle data assimilation system (CCDAS). Global Biogeochem. Cy. 19, GB2026 (2005).
Rayner, P. J. The current state of carbon-cycle data assimilation. Curr. Opin. Env. Sust. 2, 289–296 (2010).
Prueger et al. (2005) Tower and Aircraft Eddy Covariance Measurements of Water Vapor, Energy, and Carbon Dioxide Fluxes during SMACEX, JOURNAL OF HYDROMETEOROLOGY, 6,954-960.
CarboEurope (also see AmeriFlux, AsiaFlux, KoFlux, OzFlux, ChinaFlux, FluxnetCanada)
Baldocchi, D.D . 2008. Breathing of the Terrestrial Biosphere: Lessons Learned from a Global Network of Carbon Dioxide Flux Measurement Systems. Australian Journal of Botany. 56, 1-26.
Baldocchi, D.; Falge, E.; Gu, L.; Olson, R.; Hollinger, D.; Running, S.; Anthoni, P.; Bernhofer, C.; Davis, K.; Evans, R.; Others, (2001). “FLUXNET: A New Tool to Study the Temporal and Spatial Variability of Ecosystem-Scale Carbon Dioxide”. Bulletin of the American Meteorological Society 82(11):2415-2434.
It is difficult to measure land surface rate terms directly from EO but, as reviewed by Grace et al. (2007), the closest we can get to these are probably measurements that relate directly to photosynthesis, such as solar-induced fluorescence and the Photochemical Reflectance Index (PRI). There are certainly some complexities to the interpretation of such data, but it is very exciting that we can now demonstrate that such measurements are feasible from space.
Another technology that can measure something related to CO2 rates is fire radiative power from thermal instruments. This can be directly related to the rate of carbon release by fire and integrated to obtain the amount of biomass consumed by the fire.
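As a sketch of that integration: the fire radiative energy (FRE) is the time integral of fire radiative power (FRP), and biomass consumed follows from a conversion factor (of the order of 0.368 kg of dry biomass per MJ, the kind of value reported by Wooster et al., 2005). The times and FRP values below are made up for illustration:

```python
import numpy as np

# Hypothetical FRP time series for a single fire pixel
t = np.array([0.0, 900.0, 1800.0, 2700.0, 3600.0])   # time (s)
frp = np.array([50.0, 220.0, 400.0, 150.0, 20.0])    # FRP (MW)

# fire radiative energy (MJ): trapezoidal integration of FRP over time
fre = np.sum(0.5*(frp[1:] + frp[:-1])*np.diff(t))

# biomass consumed (kg), assuming ~0.368 kg per MJ of radiative energy
biomass = 0.368*fre
print(fre, biomass)
```

In practice the sampling interval of the satellite overpasses (or the geostationary repeat cycle) limits how well the FRP curve is resolved, which is a significant source of uncertainty in the integral.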
The most common way of trying to estimate NPP and GPP from EO measurements involves the use of fAPAR or NDVI (or similar) measurements from optical data. We have seen earlier how fAPAR fits into estimates of GPP, both in the Sellers (1992) scaling of leaf photosynthesis and respiration and in the PEM approach. There is much literature on these subjects, but see Prince and Goward (1995) for one of the core papers on this. See also Potter et al. (1993). See e.g. the various Gobron et al. papers for some background on fAPAR data.
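The PEM logic can be written in a few lines: GPP is a maximum light use efficiency, downregulated by environmental scalars, multiplied by the PAR absorbed by the canopy (fAPAR × PAR). This is only a schematic sketch; the parameter values are illustrative, not from any particular model:

```python
# Minimal sketch of a production efficiency model (PEM): GPP modelled as
# light use efficiency times absorbed PAR, with temperature and water
# stress scalars downregulating the efficiency. All numbers illustrative.
def gpp_pem(par, fapar, eps_max=1.0, t_scalar=1.0, w_scalar=1.0):
    """GPP (gC m-2 d-1) from PAR (MJ m-2 d-1), fAPAR (unitless),
    maximum light use efficiency eps_max (gC MJ-1) and stress
    scalars in [0, 1]."""
    apar = fapar*par                    # absorbed PAR
    eps = eps_max*t_scalar*w_scalar     # downregulated efficiency
    return eps*apar

print(gpp_pem(par=10.0, fapar=0.6, eps_max=1.2, t_scalar=0.9, w_scalar=0.8))
```

Most of the differences between published PEMs lie in how eps_max is set per biome and how the stress scalars are derived from meteorological data.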
Direct or indirect inference of LAI from EO is also relevant to driving and testing carbon models, so you should investigate papers on this subject (e.g. Baret et al. 2007)
One problem that has faced the EO community for some time is that there can be quite large discrepancies between different fAPAR and LAI products. This is partly down to different ‘meanings’ of LAI etc. (e.g. whether clumping is included, what sun angle the fAPAR data are for, whether they represent absorbed or intercepted PAR). However, these same areas of ‘confusion’ also pervade the ecosystem modelling community.
Starting points for reading
Grace, J., Nichol, C., Disney, M., Lewis, P., Quaife, T., and Bowyer, P. (2007). Can we measure terrestrial photosynthesis from space directly, using spectral reflectance and fluorescence? Global Change Biology, 13(7): 1484-1497. doi:10.1111/j.1365-2486.2007.01352.x.
Joiner, J., Yoshida, Y., Vasilkov, A. P., Yoshida, Y., Corp, L. A., and Middleton, E. M. (2010). First observations of global and seasonal terrestrial chlorophyll fluorescence from space. Biogeosciences Discuss., 7: 8281-8318.
Frankenberg, C., Fisher, J. B., Worden, J., Badgley, G., Saatchi, S. S., Lee, J.-E., Toon, G. C., Butz, A., Jung, M., Kuze, A., and Yokota, T. (2011). New global observations of the terrestrial carbon cycle from GOSAT: patterns of plant fluorescence with gross primary productivity. Geophysical Research Letters, 38, L17706. doi:10.1029/2011GL048738.
Guanter, L., Alonso, L., Gómez-Chova, L., Amorós-López, J., Vila, J., and Moreno, J. (2007). Estimation of solar-induced vegetation fluorescence from space measurements. Geophysical Research Letters, 34, L08401. doi:10.1029/2007GL029289.
Justice, C. O., Giglio, L., Korontzi, S., Owens, J., Morisette, J. T., Roy, D., Descloitres, J., Alleaume, S., Petitcolin, F., & Kaufman, Y. (2002). The MODIS fire products. Remote Sensing of Environment, 83, 244-262.
Wooster, M. J., G. Roberts, G. L. W. Perry, and Y. J. Kaufman (2005), Retrieval of biomass combustion rates and totals from fire radiative power observations: FRP derivation and calibration relationships between biomass consumption and fire radiative energy release, J. Geophys. Res., 110, D24311, doi:10.1029/2005JD006318.
Roberts, G., M. J. Wooster, G. L. W. Perry, N. Drake, L.-M. Rebelo, and F. Dipotso (2005), Retrieval of biomass combustion rates and totals from fire radiative power observations: Application to southern Africa using geostationary SEVIRI imagery, J. Geophys. Res., 110, D21111, doi:10.1029/2005JD006018.
Stephen D. Prince and Samuel N. Goward (1995) Global Primary Production: A Remote Sensing Approach, Journal of Biogeography, Vol. 22, No. 4/5
Potter, C. S., et al. (1993). Terrestrial ecosystem production: a process model based on global satellite and surface data. Global Biogeochemical Cycles, 7: 811-841.
Gobron, N., Knorr, W., Belward, A. S., Pinty, B. (2010) Fraction of Absorbed Photosynthetically Active Radiation (FAPAR). Bulletin of the American Meteorological Society, 91(7):S50-S51.
Gobron, N., Pinty, B., Aussedat, O., Chen, J. M., Cohen, W. B., Fensholt, R., Gond, V., Lavergne, T., Mélin, F., Privette, J. L., Sandholt, I., Taberner, M., Turner, D. P., Verstraete, M. M., Widlowski, J.-L. (2006) Evaluation of Fraction of Absorbed Photosynthetically Active Radiation Products for Different Canopy Radiation Transfer Regimes: Methodology and Results Using Joint Research Center Products Derived from SeaWiFS Against Ground-Based Estimations. Journal of Geophysical Research Atmospheres, 111(13), D13110.
Gobron, N., Pinty, B., Verstraete, M. M., Widlowski, J.-L. (2000) Advanced Vegetation Indices Optimized for Up-Coming Sensors: Design, Performance and Applications. IEEE Transactions on Geoscience and Remote Sensing, 38(6):2489-2505. DOI: 10.1109/36.885197
Baret, F., O. Hagolle, B. Geiger, P. Bicheron, B. Miras, M. Huc, B. Berthelot, f. Nino, M. Weiss, O. Samain, J.L. Roujean, and M. Leroy, LAI, FAPAR, and FCover CYCLOPES global products derived from Vegetation. Part 1 : principles of the algorithm, Remote Sensing of Environment, 110:305-316, 2007.
Garrigues, S., R. Lacaze, F. Baret, J.T. Morisette, M. Weiss, J. Nickeson, R. Fernandes, S. Plummer, N.V. Shabanov, R. Myneni, W. Yang, Validation and Intercomparison of Global Leaf Area Index Products Derived From Remote Sensing Data, Journal of Geophysical Research, 113, G02028, doi:10.1029/2007JG000635, 2008.
Weiss, M., F. Baret, S. Garrigues, and R. Lacaze, LAI and FAPAR CYCLOPES global products derived from Vegetation. Part 2 : validation and comparison with MODIS C4 products, Remote Sensing of Environment, 110:317-331, 2007.
J.L. Widlowski, B. Pinty, M. Clerici, Y. Dai, M. De Kauwe, K. de Ridder, A. Kallel, H. Kobayashi, T. Lavergne, W. Ni-Meister, A. Olchev, T. Quaife, S. Wang, W. Yang, Y. Yang, and H. Yuan (2011), RAMI4PILPS: An intercomparison of formulations for the partitioning of solar radiation in land surface models, Journal of Geophysical Research, 116, G02019, 25, DOI: 10.1029/2010JG001511.
Disney et al. (2016) A new global fAPAR and LAI dataset derived from optimal albedo estimates: comparison with MODIS products, Remote Sensing, 8(4), 275; doi:10.3390/rs8040275.
Disney (2016) Remote sensing of vegetation: potentials, limitations, developments and applications. In: K. Hikosaka, K., Niinemets, U. and Anten, N. P. R. (eds) Canopy Photosynthesis: From Basics to Applications. Springer Series: Advances In Photosynthesis and Respiration, Springer, Berlin, pp289-331. ISBN: 978-94-017-7290-7. DOI: 10.1007/978-94-017-7291-4.
MacBean, N. et al. Using satellite data to improve the leaf phenology of a global terrestrial biosphere model. Biogeosciences 12, 7185–7208 (2015).
Joiner, J. et al. The seasonal cycle of satellite chlorophyll fluorescence observations and its relationship to vegetation phenology and ecosystem atmosphere carbon exchange. Remote Sens. Environ. 152, 375–391 (2014).
Sun, Y. et al. OCO-2 advances photosynthesis observation from space via solar-induced chlorophyll fluorescence. Science 358, doi: 10.1126/science.aam5747 (2017).
Frankenberg, C. et al. New global observations of the terrestrial carbon cycle from GOSAT: Patterns of plant fluorescence with gross primary productivity. Geophys. Res. Lett. 38, L17706 (2011).
Guanter, L. et al. Retrieval and global assessment of terrestrial chlorophyll fluorescence from GOSAT space measurements. Remote Sens. Environ. 121, 236–251 (2012).
Liu et al. (2011) Global long-term passive microwave satellite-based retrievals of vegetation optical depth, Geophysical Research Letters, 38 (2011)
Liu et al., 2015 Recent reversal in loss of global terrestrial biomass, Nature Climate Change, 5 (2015), pp. 470–474
In these notes, we have tried to be reasonably comprehensive, if brief, in pointing out some of the major technologies for measuring carbon stocks and fluxes. The text is deliberately brief, as the aim is for students to pick a topic from those covered here (or something related, in agreement with the course tutor) and to prepare a short seminar on the subject. This is likely best conducted in small teams (e.g. 2 or 3 people). You should prepare a small number of slides, but mostly you should be in a position to talk to the rest of the class about the subject and respond to questions.
One of the implicit aims here is to make sure that you read around the subject, so, although this seminar is not formally assessed, we are expecting some intellectual depth to your presentation and discussion.
Since you will, of necessity, be concentrating on one of the topics above (or even part of one of these), you should make sure that in further reading following the seminar you delve into some of the key literature for the other topics.
The amount of time available for each talk will depend on the number of students taking the course, so this will be discussed with the course tutor.
Data Assimilation (DA) is a set of statistical techniques to obtain estimates of some state vector \(x\) by merging information from observations \(y\) and any prior (background) information on the state, \(x_b\). The translation between the state vector and the observations is achieved through an observation operator \(H(x)\), which proposes an estimate of the observations \(\hat{y} = H(x)\).
In this lecture, you will be introduced to some of the underlying statistical concepts, some of the methods you can use for DA, and some applications.
Importantly, the uncertainties in all of these elements must be taken into account. If we assume a Gaussian (Normal) distribution of uncertainty, then the prior distribution (what we believed the state to be before we included information from any observations) is:

(1)\[P_b(x) = \frac{1}{(2 \pi)^{n/2} \sqrt{\det(B)}} \exp \left[ -\frac{1}{2} (x - x_b)^T B^{-1} (x - x_b) \right]\]
It is important that you understand the notation and concepts here, so we will pause to consider what equation (1) means for a moment.
The term \(P_b(x)\) expresses the probability of the state \(x\) being any particular value, where \(x\) in the general case is a vector. This is the probability distribution function (PDF). You would normally use this to describe what the probability is of \(x\) lying between some limits (i.e. you integrate the PDF over some limits).
As a probability, the PDF is normalised so that its integral (over the entire bounds of possibility) is unity. The term \([1/((2 \pi)^{n/2} \sqrt{\det(B)})]\) expresses the scaling required to make that so.
The matrix \(B\) expresses the variances and covariances between elements of \(x\). So, for example, if \(x\) had only two elements, \(x_0\) and \(x_1\), it would look something like:

\[B = \begin{bmatrix} b_{0,0} & b_{0,1} \\ b_{1,0} & b_{1,1} \end{bmatrix} = \begin{bmatrix} \sigma_0^2 & \rho_{0,1} \sigma_0 \sigma_1 \\ \rho_{0,1} \sigma_0 \sigma_1 & \sigma_1^2 \end{bmatrix}\]
where \(b_{i,j}\) is the covariance between state element \(i\) and \(j\) (remember that the covariance between element \(i\) and itself, is of course the variance). Note that the matrix is symmetric about the leading diagonal (\(b_{i,j} \equiv b_{j,i}\)).
We assume that students are at least familiar with the univariate concepts of the mean and standard deviation of a Gaussian distribution. For this multivariate case then, we have a standard deviation in the parameter (state vector element) \(x_0\), being \(\sigma_{0}\) and one in the parameter \(x_1\), being \(\sigma_{1}\). These of course define the spread of the distributions of each of these terms. These control the leading diagonal elements of the matrix \(B\) (they are squared in the matrix, so are stated as variance terms).
The term \(\rho_{0,1}\) is the correlation coefficient, which ranges from \(-1.0\) to \(1.0\) and expresses the degree of linear correlation between \(x_0\) and \(x_1\). This is just a normalised expression of the covariance. As an example, we consider a distribution with \(x = [ 0.2, 0.5 ]^T\) and \(B = [[0.3^2 , -0.5 \times 0.3 \times 0.2][-0.5 \times 0.2 \times 0.3,0.2^2 ]]\):
This might seem rather trivial to some students, but it is best to make sure that you have these concepts clear in your head when considering the statistics here.
The determinant of \(B\), used in equation (1), is a measure of the total variance expressed by the matrix. In the trivial example for which \(\sigma_{0} = \sigma_{1}\) and \(\rho_{0,1} = 0\), \(\det(B) = \sigma_{0}^4\), so \(\sqrt{\det(B)} = \sigma_{0}^2\), the product of the two standard deviations: a measure of the total ‘spread’ of the distribution. You should be able to see more clearly now how the scaling factor arises: it reduces to the product of the two univariate Gaussian normalisation terms.
In the univariate Gaussian case, you may be used to considering a term such as a Z-score, that is the normalised distance of some location \(x\) from the mean, so \(Z^2 = [(x - x_b)/\sigma_x]^2\) should be a familiar concept. The multivariate equivalent is just: \(Z^2 = (x - x_b)^T B^{-1} (x - x_b)\), where \(B^{-1}\) is the inverse of the matrix \(B\), the equivalent of \(1/\sigma_x^2\) in the univariate case.
Whilst you will generally use computer codes to calculate a matrix inverse, it is instructive to recall that, for the symmetric \(2 \times 2\) matrix \(B\) above, the inverse \(B^{-1}\) is given through:

\[B^{-1} = \frac{1}{\det(B)} \begin{bmatrix} \sigma_1^2 & -\rho_{0,1} \sigma_0 \sigma_1 \\ -\rho_{0,1} \sigma_0 \sigma_1 & \sigma_0^2 \end{bmatrix}, \qquad \det(B) = \sigma_0^2 \sigma_1^2 (1 - \rho_{0,1}^2)\]
import numpy as np

sd_0 = 0.3
sd_1 = 0.2
rho = -0.5

# Form B
B = np.matrix([[sd_0**2, rho*sd_0*sd_1],
               [rho*sd_0*sd_1, sd_1**2]])

# inverse
BI = B.I

# check:
print('B x B-1 = I')
print(B, 'x')
print(BI, '=')
print(BI*B)
We will see that in many DA exercises (and quite often when statistics are used) the off diagonal elements of a matrix (the covariance terms) are ignored (i.e. set to zero). Sometimes this is simply because nothing is really known of the covariance structure. In this case of course, we obtain a distribution of the form:
which is aligned to the axes. We can see immediately that this is quite different to that with a correlation coefficient of -0.5, so we must recognise that we cannot lightly ignore such correlation information, however pragmatic or convenient it might seem to be.
If, in the above examples, we consider the probability of the coordinate \([0,0]\) in these distributions, we can note that it is:
import numpy as np

mean_0 = 0.2
mean_1 = 0.5
sd_0 = 0.3
sd_1 = 0.2

def pdf_at(test, rho):
    '''Gaussian PDF value at the point test, for correlation rho'''
    dx0 = test[0] - mean_0
    dx1 = test[1] - mean_1
    B00 = sd_0**2
    B11 = sd_1**2
    B01 = sd_0*sd_1*rho
    detB = B00*B11 - B01**2
    # elements of the inverse of the 2 x 2 matrix B
    BI00, BI11, BI01 = B11/detB, B00/detB, -B01/detB
    # Z^2 = (x - xb)^T B^-1 (x - xb)
    Z2 = (dx0*BI00 + dx1*BI01)*dx0 + (dx0*BI01 + dx1*BI11)*dx1
    scale = (2.*np.pi)*np.sqrt(detB)
    return (1./scale)*np.exp(-0.5*Z2)

# case 1: with correlation
p0 = pdf_at([0., 0.], -0.5)
print('p0: rho = -0.5: p(0,0) =', p0)

# case 2: without correlation
p1 = pdf_at([0., 0.], 0.0)
print('p1: rho = 0.0: p(0,0) =', p1)
print('p1/p0 =', p1/p0)
Now that we have some appreciation of multivariate Gaussian statistics, we can think about how this all works when we combine distributions. This is at the heart of any DA approach. The most fundamental idea in this can be expressed by Bayes theorem:
\[P(b | a) = \frac{P(b) P(a|b)}{P(a)}\]
where we understand \(P(a | b)\) as a conditional probability, the probability of \(a\) given \(b\).
Suppose we have some observations \(y\), and we have a model (observation operator) that provides an estimate of \(y\), \(\hat{y}\) for some values of \(x\):
\[\hat{y} = H(x)\]
so
\[y = \hat{y} + \epsilon = H(x) + \epsilon\]
where \(\epsilon\) are the errors arising from the modelling of \(y\) and any errors in \(y\) itself (noise in the observations).
The PDF of the observations is the PDF of the observations given \(x\):
where \(R\) is the variance-covariance matrix expressing the uncertainty in the model and the observations (i.e. the summary statistics of \(\epsilon\)).
Using Bayes theorem:
\[P(x | y) = \frac{P(x) P( y | x)}{P(y)}\]
Here, \(P(y)\) is simply a normalising term that we could express as \(P(y) = \int p(y | x) p(x) dx\), so we can write more simply:
\[P(x | y) \propto P(x) P( y | x)\]
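This proportionality can be checked numerically: multiply a Gaussian prior by a Gaussian likelihood on a grid of \(x\) values and renormalise (the \(P(y)\) term). The means and standard deviations here are illustrative:

```python
import numpy as np

# Bayes' theorem on a discrete grid: posterior proportional to
# prior times likelihood, renormalised. Values are illustrative.
x = np.linspace(-2.0, 3.0, 5001)
dx = x[1] - x[0]

def gauss(x, mu, sd):
    return np.exp(-0.5*((x - mu)/sd)**2)/(sd*np.sqrt(2*np.pi))

prior = gauss(x, 0.2, 0.3)       # P(x)
likelihood = gauss(x, 0.5, 0.2)  # P(y | x)

post = prior*likelihood
post /= post.sum()*dx            # normalise: this is the P(y) term

# the posterior mean lies between the prior and likelihood means,
# pulled towards the lower-variance source of information
mean_post = np.sum(x*post)*dx
print(mean_post)
```

For two Gaussians this product is itself Gaussian, which is why the Gaussian assumption is so convenient in DA.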
The importance of this then is that we can combine probabilities by multiplication of the PDFs. If we have a previous estimate of \(x\), \(x_b\), and the observations \(y\), then we can get a new (improved) estimate of \(x\), \(x'\) through:
provided \(H(x)\) is linear (otherwise we don’t get a Gaussian distribution when we apply \(H(x)\)), we can write: \(H(x) = Hx\) where \(H\) is the linear observation operator.
The optimal estimate of \(x'\) can be found by finding the value of \(x\) which gives the maximum of the likelihood (probability) function (the maximum of equation (6)). Because of the negative exponential, this is the same as finding the value of \(x\) that gives the minimum of equation (5). This is found by solving for the value of \(x\) for which the partial differentials of \(J(x)\) with respect to \(x\) are zero.
The differential of \(J(x)\), \(J'(x)\), is known as the Jacobian.
We can recognise \(J(x)\) as a form of cost function which is itself the sum of other cost functions. Each of these cost functions provides a constraint on our (optimal) estimate of \(x\): \(J_b(x)\) constrains the solution by our former belief in its state (the background); \(J_o(x)\) provides a constraint based on observations.
We can estimate the posterior uncertainty (matrix, here) from the curvature of the cost function. This is found by the inverse of the second differential of \(J(x)\), \(J''(x)\), which is known as the Hessian.
For a diversion, we can calculate the Jacobian and Hessian terms from equations (3) and (4). For the Jacobian:
It is worth considering the Hessian in the linear case in a little more detail. The prior uncertainty (i.e. what we knew before we added any observations) was simply \(B\).
After we add information (observations) the posterior uncertainty, \(C_{post}\) is:
(15)\[C_{post} = \left( B^{-1} + H^T R^{-1} H \right)^{-1}\]
In the simplest case, we might suppose that we have a direct observation of the state vector \(x\), so \(H(x) = Ix = x\), where \(I\) is the identity operator, or more simply, \(H=I\). Then:

\[C_{post} = \left( B^{-1} + R^{-1} \right)^{-1}\]
In the univariate case, consider when we have two observations of \(x\), say \(x_a\) and \(x_b\), with uncertainties (standard deviations) \(\sigma_a\) and \(\sigma_b\) respectively. The uncertainty after combining these two observations, \(\sigma_{post}\), is expressed by:

\[\frac{1}{\sigma_{post}^2} = \frac{1}{\sigma_a^2} + \frac{1}{\sigma_b^2}\]

and the combined (posterior) estimate is:

\[x_{post} = \frac{x_a / \sigma_a^2 + x_b / \sigma_b^2}{1/\sigma_a^2 + 1/\sigma_b^2}\]
which is just a variance-weighted mean of the two observations: in the trivial case where the uncertainty in the observations is the same for both samples, we have simply the mean as the optimal estimate, which is what you would expect.
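A quick numeric check of this univariate combination, with illustrative values:

```python
# Variance-weighted combination of two observations of the same state.
# The values are illustrative.
x_a, sigma_a = 0.2, 0.3
x_b, sigma_b = 0.5, 0.2

# inverse-variance weights
w_a = 1.0/sigma_a**2
w_b = 1.0/sigma_b**2

# posterior mean is the variance-weighted mean; posterior variance
# is the harmonic combination of the two variances
x_post = (w_a*x_a + w_b*x_b)/(w_a + w_b)
sigma_post = (w_a + w_b)**-0.5

print(x_post, sigma_post)
```

Note that the posterior mean lies closer to the more certain observation, and the posterior standard deviation is smaller than either input.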
Clearly, this is a very simple example, but thinking through and understanding such cases can help you develop a deeper appreciation of the more complex (general) cases and also help you to develop some ‘mathematical intuition’ into such problems.
To illustrate this for a two-dimensional example:
importnumpyasnpimportscipy.optimize# priorxb=np.array([0.1,0.5])B=np.matrix([[0.2**2,0.5*0.2*0.3],[0.5*0.2*0.3,0.3**2]])# a direct observation: sd = 0.1xr=np.array([0.15,0.4])R=np.matrix([[0.1**2,0.0],[0.0,0.1**2]])BI=B.IRI=R.I# starting guessx=np.array([0.,0.])defcost(x,xb,BI,xr,RI):''' Return J and J' at x '''Jb=np.dot(np.array(0.5*(xb-x)*BI),(xb-x))[0]Jr=np.dot(np.array(0.5*(xr-x)*RI),(xr-x))[0]JbPrime=-(xb-x)*BIJrPrime=-(xr-x)*RIreturnJr+Jb,np.array(JrPrime+JbPrime)[0]defuncertainty(x,xb,BI,xr,RI):# inverse of Hessianreturn(BI+RI).Iretval=scipy.optimize.fmin_l_bfgs_b(cost,x,args=(xb,BI,xr,RI))# x newx=retval[0]# uncertaintyCpost=uncertainty(x,xb,BI,xr,RI)# print priorpsigma0=np.sqrt(B[0,0])psigma1=np.sqrt(B[1,1])prho12=B[0,1]/(psigma0*psigma1)print'prior: x0,x1 :',xb[0],xb[1]print'prior: sd0,sd1,rho:',psigma0,psigma1,prho12# print observationrsigma0=np.sqrt(R[0,0])rsigma1=np.sqrt(R[1,1])rrho12=R[0,1]/(rsigma0*rsigma1)print'observation: x0,x1 :',xr[0],xr[1]print'observation: sd0,sd1,rho:',rsigma0,rsigma1,rrho12sigma0=np.sqrt(Cpost[0,0])sigma1=np.sqrt(Cpost[1,1])rho12=Cpost[0,1]/(sigma0*sigma1)print'posterior: x0,x1 :',x[0],x[1]print'posterior: sd0,sd1,rho:',sigma0,sigma1,rho12
Here, we have used the theory laid out above, with an identity observation operator. Rather than trying anything fancy to find the minimum of the cost function, we simply call an optimisation routine (scipy.optimize.fmin_l_bfgs_b in this case) to which we provide a cost function that returns \(J\) and \(J'\) for a given \(x\). We then calculate the uncertainty from the inverse of the Hessian as described above. In fact, we will see that this is sometimes quite a practical approach.
Aside: to make a movie from this, which is quite interesting, it is probably easiest to use the unix tool convert, which is part of the ImageMagick package. To interface to this from python:
A measure of reduction in uncertainty is found from the ratio of the posterior uncertainty to the prior uncertainty in some form. One common measure that involves such ideas is the relative entropy.
The concept of relative entropy comes from information theory (Cover & Thomas, 1991). Entropy (in information theory) is a measure of the amount of information needed (on average) to describe a random variable (i.e. one with uncertainty). The relative entropy (or Kullback-Leibler distance) is then a measure of the ‘distance’ between two distributions, in this case the prior and the posterior distributions.
One part of the relative entropy can be expressed as the dispersion (Kleeman, 2002) of the posterior \(C_{post}\) relative to the prior \(C_{prior}\):

\[D = \frac{1}{2} \left[ \ln \left( \frac{\det(C_{prior})}{\det(C_{post})} \right) + tr(C_{post} C_{prior}^{-1}) - n \right]\]
where \(tr(C)\) denotes the *trace* operator (the sum of the diagonal elements) and \(n\) is the rank of the matrix (the number of elements in \(x\)).
Another metric is a form of ‘distance’ moved by the mean state, relative to the prior uncertainty, in going from the prior mean to the posterior (the ‘signal’):

\[S = \frac{1}{2} (x_{post} - x_{prior})^T C_{prior}^{-1} (x_{post} - x_{prior})\]
Taking our univariate example with equal variances above, we obtain: \(D = (1/2)(\ln{2} + (1/2) - 1) = 0.097\). Here, \(\det{C_{prior}}/\det{C_{post}} = 2\) (the reduction in uncertainty as expressed by the matrix determinants) whereas \(tr(C_{post} C_{prior}^{-1}) = 1/2\) gives a summary of relative variance terms. If there was no information added, then the posterior would be the same as the prior and we would get a value of \(D = \ln{(1)} + n - n = 0\). In ‘bits’, the relative entropy (ignoring the signal part) is \(0.097/\ln{2} = 0.14\) in this case. It is not very straightforward to interpret such units or such information measures, but they do at least provide even the novice DA person with a relative measure of information content after data have been added, compared to what was known before.
Looking at the solution that we provided illustrative examples for (two samples, but different variances) we can calculate:
# just remind ourselves of the values above
Cprior = np.matrix(B)
Cpost = np.matrix(Cpost)
xpre = xb
xpost = x

D = 0.5*(np.log(np.linalg.det(Cprior)/np.linalg.det(Cpost)) +
         (Cpost*Cprior.I).trace()[0, 0] - Cpost.shape[0])
print('Dispersion =', D)

S = 0.5*np.dot((xpost - xpre).T*Cprior.I, xpost - xpre)[0, 0]
print('Signal =', S)

print('relative entropy =', (D + S)/np.log(2.), 'bits')
We have laid out the theoretical framework for data assimilation above (assuming Gaussian statistics at the moment), and illustrated it with some simple linear examples.
Data assimilation is particularly powerful because it can be made to work with complex models and complex (large) datasets, provided appropriate methods are chosen.
In this section, we will consider some typical methods of solution.
In many ways, the simplest way of setting up a DA system is to use variational methods directly. Variational methods (the calculus of variations) essentially tell us how to apply constraints to our estimation. In DA language, you will come across methods such as 4DVar, which is an example of such a system.
The heart of this is simply the statement of a set of cost functions as we developed above:
\[J(x) = J_b(x) + \sum_{i=0}^{i=n} J_{oi}(x)\]
where \(J_b(x)\) is the background or prior term as before, and \(J_{oi}(x)\) are a set of observational cost functions, associated with some set of observations.
Most typically, a dynamic model is used as a strong constraint to the problem. What this means is that, for instance in Numerical Weather Prediction (NWP), the state vector \(x\) represents a set of parameters of an NWP model. This would normally include some set of initial conditions (the current state of the weather system) and some parameters controlling the NWP model. The state vector \(x\) may be very large in such a case, as it will include a representation of the state of the atmosphere (and land/ocean surface) in some gridded form. The background (prior) might contain a climatology (average conditions that we wish to improve on) or perhaps the results of a previous analysis.
The strong constraint means that the state estimate must belong to something that can be predicted by the model. In effect, we assume that the model is correct and only the initial conditions and parameters controlling its behaviour are in error.
Using a strong constraint, we can run the NWP model forward in time (over space) to predict what the conditions will be for any particular representation of \(x\). When we have an observation available, we transform from the state space (\(x\)) to the observational space (\(y\)) with an observation operator \(H(x)\) and a representation of model and observational uncertainties \(R\).
All we then need to do, is to solve for which values of \(x\) give the minimum of the cost function over the time and space windows that we consider.
To make the problem tractable, the model \(x_i = M_{0 \rightarrow i}(x)\) is often linearised. This means that we replace \(M(x)\) by a (first order) tangent linear approximation (the derivative) so that:

\[x_i \approx M'_{0 \rightarrow i} \, x\]
where we use \(M'\) here to represent the tangent linear approximation of \(M\). Note that this is a matrix.
So long as we use this linear approximation to \(M\), we can calculate the state \(x\) at any time step by applying multiple operators:
\[x_i = M'_i M'_{i-1} \cdots M'_0 x\]
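This chaining of linearised operators can be checked numerically: applying a step operator repeatedly is the same as applying the composed (matrix product) operator once. The operator and initial state here are arbitrary illustrative values:

```python
import numpy as np

# A 2x2 (tangent) linear step operator M' and an initial state.
# Both are illustrative only.
M = np.array([[0.9, 0.1],
              [0.0, 0.95]])
x0 = np.array([1.0, 2.0])

# apply the step operator three times: x_3 = M' M' M' x_0
x = x0.copy()
for _ in range(3):
    x = M @ x

# equivalent: a single composed operator applied once
x_direct = np.linalg.matrix_power(M, 3) @ x0
print(x, x_direct)
```

The same composition is what allows gradients of the cost function to be propagated back through the model in adjoint codes.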
Clearly this sort of approximation gets poorer the longer the time period over which it is applied because errors from the linearisation will stack up (even if the model and starting state were perfect).
The figure above, from Bouttier and Courtier (1999) <pdf/02EC-BouttierCourter-DAconcepts.pdf> illustrates how such a strong constraint 4DVar works.
We define some assimilation window in time, which we can sample over \(n\) steps from \(t_0\). The full 4D scheme involves one of the ‘temporal’ \(x\) vectors over a 3D space.
In this example, the state \(x\) represents the parameter trajectory over time. Our background (prior) might come from a previous forecast (or in some cases average conditions), which gives us \(x_b\) and \(B\).
We have some set of (5 here) observations that we wish to use to correct the forecast. We derive a cost function that is the sum of \(J_b\) and the observational cost terms, and solve for the trajectory \(x\) that minimises the combined cost function.
This is exactly the same, in principle, as the simple ‘two sample’ solution we solved above, and in fact the algorithm we used to find the cost function minimum (L_BFGS_B) is quite appropriate for this task, even for large state vectors.
The strong constraint is a useful and appropriate approach so long as the model is to be trusted, at least for short-term forecasts. This is why it has been of great value in the NWP community.
Sometimes though, we make a model that we interpret only to be a guide, and not something we want to strictly enforce.
An example of this is found in the concept of smoothing (Twomey, 2002; Hansen et al., 2006).
There are many geographical and physical phenomena that are correlated in space and/or time: any two samples close by are likely to be more similar than samples far apart.
One way of expressing this as a model is through the expectation that the change in the property is zero, with some uncertainty. This is a first difference model, and when applied as a weak constraint, provides a way of optimally smoothing and interpolating between observations.
Another way of phrasing this ‘first difference’ model is to say that it is a zero-order process model (i.e. we expect tomorrow to be the same as today, with some uncertainty).
If we phrase the model as:
\[D x = 0\]
with uncertainty \(C\), then we can phrase a constraint through a cost function as above:
\[J_D(x) = \frac{1}{2} (D x)^T C^{-1} (D x)\]
which is linear and has the derivatives:
\[J'_D(x) = (D^T C^{-1} D) x\]
\[J''_D(x) = (D^T C^{-1} D)\]
The matrix \(D\), then, expresses the differential, so if \(x\) were simply time samples, this would look something like:

\[D = \begin{bmatrix} 1 & -1 & 0 & \cdots & 0 \\ 0 & 1 & -1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & -1 \end{bmatrix}\]
Interestingly, there is an equivalent convolution filter for this, which is a Laplace function, the width of which is controlled by the uncertainty \(C\):
import numpy as np
import matplotlib.pyplot as plt

N = 1000
I = np.matrix(np.identity(N))

# generate the D matrix
D = (I - np.roll(I, 1))
# squared for the constraint
D2 = D.T*D

for sd in [1e-2, 2e-2, 4e-2]:
    # inverse
    F = (D2 + sd*sd*I).I
    # plot the central line
    y = np.array(F)[int(N/2)]
    plt.plot(y/y.max(), label=f'SD: {sd:.2f}')

plt.legend(loc='best', fancybox=True, shadow=True)
plt.show()
The relative uncertainty between the observations and the model (the first difference operator), expressed as sd in the code above, controls the degree of smoothing: the lower the uncertainty in the model, the more narrow the (effective) filter and the less smoothing is applied.
Applying this first difference as a strong constraint would make a poor model, but it can be very effective as a weak constraint.
Smoothing of this sort is also known as regularisation and has very wide application in science.
We can of course apply higher order derivative constraints and these are sometimes appropriate (see Twomey, 2002). A second-order difference constraint is equivalent to (weakly) constraining the slope of the state trajectory to be constant (a first-order process model).
The figure above shows an application of this to filter a noisy time series.
Synthetic observations (of NDVI) are generated and noise added to produce the blue dots (with given error bars). Applying a first difference constraint to the solution gives the red line as the mean estimate, with the grey shading as the posterior uncertainty. The green line is the original signal (from which the samples were taken). Regularisation (a first-order difference model as a weak constraint) is able to reconstruct the original signal better than the original samples.
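A minimal sketch of this kind of regularisation, using synthetic data, an identity observation operator and a first-difference weak constraint (all values illustrative):

```python
import numpy as np

# Synthetic 'truth' plus observation noise
rng = np.random.default_rng(1)
n = 200
t = np.linspace(0.0, 1.0, n)
signal = np.sin(2*np.pi*t)                # 'true' signal
sd_obs = 0.3
y = signal + rng.normal(0.0, sd_obs, n)   # noisy observations

# first-difference operator D (circulant form, as a simple sketch)
I = np.eye(n)
D = I - np.roll(I, 1, axis=1)
sd_model = 0.05                           # model (smoothness) uncertainty

# posterior mean solves (D^T D / sd_model^2 + I / sd_obs^2) x = y / sd_obs^2
A = D.T @ D/sd_model**2 + I/sd_obs**2
x_post = np.linalg.solve(A, y/sd_obs**2)

# the regularised estimate is closer to the truth than the raw data
print(np.std(y - signal), np.std(x_post - signal))
```

Lowering sd_model smooths more aggressively; raising it trusts the observations more, exactly as described for the effective Laplace filter above.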
Whilst regularisation is a good simple example of a weak constraint, the idea can of course be much more widely applied. The main advantage is that the state does not have to be something that the model predicts exactly: if a set of observations showed some phenomenon not dealt with by the model, in a weak-constraint system the state would still be affected by the observations. In a strong-constraint system this would likely show up as a bias, but in a weak-constraint system we can (at least partially) overcome weaknesses in model structure.
Although variational schemes have many attractions, they only really work efficiently if code for the tangent linear model (or, more efficiently, the adjoint) exists, or all components of the constraints are linear. This is required to make the minimisation of the cost function as efficient (and fast) as possible.
Additionally, we are normally limited to making the assumption that the statistics are Gaussian.
For many applications, particularly real-time applications, sequential methods should be considered.
Perhaps the best known of these is the Kalman Filter (KF), which arose from the need to correct trajectory estimates for spacecraft. The core idea is the same as the DA methods above: we have some model of state (e.g. if we know the velocity, we have some expectation of where an object is going), but we wish to update this with observations to improve our estimate.
In this approach, we start from some prior knowledge of the state \(x_{k-1|k-1}\) and its uncertainty \(P_{k-1|k-1}\).
Based on our (process) model, we predict what the state will be at the next time step and update the uncertainty, giving \(x_{k|k-1}\) and \(P_{k|k-1}\).
We then perform an update step, where we compare the prediction with an observation \(y_k\) (if available) and update the estimate giving \(x_{k|k}\) and \(P_{k|k}\).
We then repeat this process for further timesteps.
We can write the prediction step as:
\[x_{k|k-1} = M_k x_{k-1|k-1}\]
\[P_{k|k-1} = M_k P_{k-1|k-1} M_k^T + Q_k\]
where \(M_k\) is the model that transitions \(x_{k-1|k-1}\) to \(x_{k|k-1}\) (as a linear operator) and \(Q_k\) is the uncertainty in this prediction.
Thinking about the regularisation approach described above, we could suppose \(M_k\) to be an identity matrix, representing a zero-order process model (so \(x_{k|k-1} = x_{k-1|k-1}\)). The uncertainty matrix \(Q_k\) then plays the same role as \(C\) above.
This part is just straightforward propagation of uncertainty.
The ‘clever’ part of the KF is the update step:
We suppose a linear observation operator \(H_k\) and an observation \(y_k\). Using the predicted state, we model an approximation to the observation \(\hat{y}\):
\[\hat{y}_k = H_k x_{k|k-1}\]
which gives a measurement residual \(r_k\):
\[r_k = y_k - \hat{y}_k = y_k - H_k x_{k|k-1}\]
The innovation (residual) uncertainty is:
\[S_k = H_k P_{k|k-1} H_k^T + R_k\]
where \(R_k\) is the measurement (and observation operator) uncertainty.
The (optimal) Kalman gain, \(K_k\) is:
\[K_k = P_{k|k-1} H_k^T S_k^{-1}\]
which is essentially the ratio of the current state uncertainty and the residual uncertainty (mapped through the observation operator). Note that the Kalman gain is only a function of uncertainties.
The Kalman gain is applied to the residual to allow an update of the state:
\[x_{k|k} = x_{k|k-1} + K_k r_k\]
The updated posterior uncertainty then is:
\[P_{k|k} = (I - K_k H_k) P_{k|k-1}\]
which expresses a reduction in uncertainty.
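A single predict/update cycle with these equations (in their standard textbook form) can be sketched in numpy; the constant-velocity process model and all the numbers below are illustrative assumptions:

```python
import numpy as np

# One predict/update cycle of a two-state Kalman filter.
# State: position and velocity; we observe position only.
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # constant-velocity process model
Q = np.eye(2) * 0.01                 # process (model) uncertainty
H = np.array([[1.0, 0.0]])           # observation operator: position only
R = np.array([[0.25]])               # observation uncertainty

x = np.array([0.0, 1.0])             # prior state x_{k-1|k-1}
P = np.eye(2)                        # prior uncertainty P_{k-1|k-1}

# prediction step
x_pred = M @ x
P_pred = M @ P @ M.T + Q

# update step with an observation y_k
y = np.array([1.2])
r = y - H @ x_pred                   # innovation (residual)
S = H @ P_pred @ H.T + R             # innovation uncertainty
K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
x_post = x_pred + K @ r
P_post = (np.eye(2) - K @ H) @ P_pred
```

Note that the posterior uncertainty is reduced relative to the prediction, as the update equations require.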
It is an interesting exercise to go through the derivation of the Kalman gain (see e.g. Wikipedia) though we will not do that here.
Clearly then, this is a sequential method: we update the state estimate whenever we have an observation, and trust to the model otherwise.
If the model is linear and the noise known and Gaussian, the KF can be a very effective DA method. If this is not the case, it can become rather unstable.
There are many variants of the Kalman filter that attempt to overcome some of its shortcomings.
You will find the terms ‘smoothers’ and ‘filters’ in the DA literature. In essence, the distinction is whether the effective DA filter is applied only to time (or space) samples up to the current sample (a filter) or to samples both forward and backward in time (a smoother).
One way of thinking about this is to consider DA as a form of convolution filtering, with the filter being effectively a one-sided function, and the smoother being two-sided:
```python
import numpy as np
import matplotlib.pyplot as plt

N = 1000
I = np.matrix(np.identity(N))
# generate the D matrix
D = (I - np.roll(I, 1))
# squared for the constraint
D2 = D.T * D
sd = 1e-2
# inverse
F = (D2 + sd * sd * I).I
# plot the central line
y = np.array(F)[int(N / 2)]
plt.subplot(2, 1, 1)
plt.plot(y / y.max(), label='Smoother')
plt.legend(loc='best', fancybox=True, shadow=True)
plt.subplot(2, 1, 2)
y[int(N / 2):] = 0.
plt.plot(y / y.max(), label='Filter')
plt.legend(loc='best', fancybox=True, shadow=True)
plt.show()
```
The variational methods we have shown above are generally smoothers, as they apply the constraint to the whole time series. When this is limited to an assimilation window, they act as a smoother within the window but as a filter at the timescale of the window.
The estimate from a smoother will (in the absence of sudden changes) always be better than that from the equivalent filter. But implementing a smoother is sometimes more complex (in sequential cases, we need to make the model run backwards). Additionally, filters can be used for real-time applications, whereas smoothers cannot (we do not have observations from the future).
An extremely elegant, but often computationally expensive, alternative to the DA methods given above is the Markov chain Monte Carlo (MCMC) approach.
A common approach is the Metropolis-Hastings (MH) algorithm (see e.g. Chib and Greenberg, 1995 or the Wikipedia page).
There are clear step-by-step instructions for MH on Wikipedia.
At heart, an update of the state \(x\) is proposed, \(x'\), then a metric \(a\) is calculated which weights the likelihood of the new state relative to the old state. If the new state is an improvement, it is selected as an update. If it is not (the metric \(a\) is less than 1), the new state is accepted with probability \(a\).
This is a Markov chain as it only concerns transitions from one state estimate to the next. A set of these Markov chains is sampled, each chain starting from arbitrary values.
Two major advantages of the approach are: (i) it can deal with arbitrary distributions (i.e. not just Gaussian); (ii) it will generally solve for the global optimum distribution (whereas many other approaches may get trapped in local minima).
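A minimal Metropolis-Hastings sampler for a one-dimensional Gaussian target illustrates the accept/reject rule described above; the target distribution and proposal width are arbitrary choices for this sketch:

```python
import numpy as np

# Metropolis-Hastings sampling of a 1-D Gaussian target (purely illustrative).
rng = np.random.default_rng(42)

def log_likelihood(x, mu=2.0, sd=0.5):
    # log of an (unnormalised) Gaussian target density
    return -0.5 * ((x - mu) / sd) ** 2

x = 0.0                                      # arbitrary starting state
samples = []
for _ in range(20000):
    x_prop = x + rng.normal(0, 0.5)          # symmetric random-walk proposal
    a = np.exp(log_likelihood(x_prop) - log_likelihood(x))
    if a >= 1 or rng.uniform() < a:          # accept with probability min(1, a)
        x = x_prop
    samples.append(x)

burnt = np.array(samples[5000:])             # discard burn-in
```

After burn-in, the samples approximate the target distribution, so their mean and spread recover the target's mean and standard deviation.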
We have presented in this session some of the core ideas underlying data assimilation methods, starting from nothing beyond an appreciation of what a univariate mean and standard deviation are. An important part of any DA method is combining uncertainties, so we have been through that in some detail, both mathematically and graphically.
We have then reviewed some of the main methods used in DA, concentrating on variational methods and the Kalman filter, but mentioning other approaches. This should give you some awareness of the core methods out there to perform DA and some of the pros and cons of each approach.
The Carbon Cycle Data Assimilation System (CCDAS) (Scholze et al., 2007) is a project (and software system) that builds a DA system around the Bethy land surface process model (Knorr, 2000).
It has variously been used to assimilate fAPAR observations and atmospheric CO2 data to help constrain the model operation.
The link to atmospheric CO2 is through modelling of CO2 flux patterns (NEP) as a constraint to atmospheric inverse modelling. The surface model is coupled to an atmospheric transport model.
fAPAR is linked to the leaf carbon (and therefore LAI) state within Bethy. The assimilation process is used to optimise the parameters that control phenology and soil moisture.
In earlier work, CCDAS operated in a two-stage mode, first assimilating fAPAR data, then running this forward to constrain the atmospheric data interpretation.
A variational scheme is used, in which prior (Gaussian) distributions are characterised for the variables controlling Bethy and for initial estimates of atmospheric CO2 concentrations. An observation constraint is also used, as described above, with observation operators mapping from the state (Bethy parameters and atmospheric CO2) space to that of the observations. An adjoint of the code is used for efficient cost function minimisation.
In the new scheme (Scholze et al., 2007) fAPAR is more fully integrated into the CCDAS:
[Source: Scholze et al., 2007]
Once fAPAR data have been assimilated, diagnostic variables, such as CO2 fluxes, can be directly calculated by the system (along with their associated uncertainties).
It is worth noting that this is the first DA system of this sort, aimed at constraining CO2 fluxes and concentrations. The various studies using CCDAS have demonstrated the advantages of such an approach, perhaps one of the most important being the uncertainty framework that is applied. In most other runs of biogeochemical models, fluxes are predicted, and these may in some way have been constrained by observations (typically, ‘calibrations’ at flux tower sites), but only a DA system can really deal with the propagation of uncertainties from the assumed prior knowledge and observational constraints through to the diagnostics (fluxes etc.).
Various criticisms can be made of the approach used (e.g. the assumptions of Gaussian distributions, the lack of real uncertainty information on the fAPAR product data used in the assimilation, or the difficulty of making sure that fAPAR as represented in the model has the same meaning as that used to derive the satellite products), but the work has been truly ground-breaking in pointing the way forward in the use of data and models in terrestrial carbon modelling and monitoring.
Some typical results from Scholze et al., 2007 are shown here:
[Source: Scholze et al., 2007]
Even though the fAPAR data are rather partial in their coverage (clouds, snow), sometimes have rather large high-frequency variations, and the final match between observations and the model is not always within the (large) uncertainty bounds of the data, the assimilation is seen to have an often quite large impact on NPP estimates. The reduction in uncertainty can be as large as nearly 50%, but is typically smaller. This is due in part at least to the large observational uncertainties assumed, but is also an expression of the limited ability of such data to constrain NPP estimates (i.e. NPP depends on some factors that fAPAR cannot readily constrain).
Assimilating surface reflectance data into an ecosystem model with an EnKF
One of the criticisms of the CCDAS is the rather simplistic observation operator used for fAPAR. Part of the reason for this is that it is computationally cheap to make use of satellite data products of this sort. At the same time, these data do not make full use of the information content of the satellite data, and uncertainty information is very difficult to ascribe to data products that have no tracking of uncertainty.
An alternative approach is to try to use low level satellite data more directly in the DA scheme. This approach was adopted by Quaife et al. (2008) using the ‘simple’ ecosystem model DALEC:
The idea here is that rather than interpreting the satellite data and using that in the DA, we use a non-linear observation operator, based on radiative transfer considerations of tree crown shadowing/hiding and volumetric scattering within the crowns, as the operator \(H(x)\).
The model had been previously calibrated against measurements at a flux tower site, although uncertainty information on this was only directly available in a simplified form. A forward run of the model produced the following NEP data. Observations (from the flux tower) are shown as black dots (again no uncertainty information was available).
The DA method used here is an Ensemble Kalman filter, which is able to cope with the non-linear observation operator and doesn’t require an estimate of the derivatives (i.e. doesn’t need adjoint code).
The observation data used in the DA were MODIS surface reflectance data at red and near infrared wavelengths. Estimates of uncertainty were available for these, although there was no correlation information available.
Of course, the remote sensing observations depend on terms that are not directly considered in the (process) model, such as soil terms, canopy cover (and other clumping), leaf chlorophyll etc. In this case, Quaife et al. performed a first-pass estimate of these ‘ancillary’ variables and assumed they were fixed at these values in the DA.
The DA proceeded by affecting the foliar carbon pool, through a mapping from the LAI/leaf area density used in the observation operator. The EnKF was used to provide improved estimates of foliar carbon, which in turn affected the NEP estimates.
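The flavour of an EnKF analysis step with a non-linear observation operator can be sketched as below; the saturating function `h()` is a stand-in assumption, not the radiative transfer operator of Quaife et al., and all numbers are invented for illustration:

```python
import numpy as np

# Toy EnKF analysis step: update an ensemble of a scalar state (a foliar
# carbon / LAI proxy) using one observation mapped through a non-linear h().
rng = np.random.default_rng(1)

def h(x):
    # assumed saturating, reflectance-like response to the state
    return 1.0 - np.exp(-0.5 * x)

n_ens = 200
ens = rng.normal(3.0, 1.0, n_ens)        # prior ensemble
obs, obs_sd = h(2.0), 0.02               # observation generated by 'true' state 2.0

pred = h(ens)                            # map ensemble into observation space
cov_xy = np.cov(ens, pred)[0, 1]         # state-observation covariance
var_y = pred.var(ddof=1) + obs_sd**2     # innovation variance
K = cov_xy / var_y                       # ensemble Kalman gain

perturbed = obs + rng.normal(0, obs_sd, n_ens)   # perturbed observations
ens_post = ens + K * (perturbed - pred)          # analysis ensemble
```

The ensemble covariances replace the explicit derivatives of the observation operator, which is why no tangent linear or adjoint code is needed.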
The result of the DA is given here (actually, this is slightly different to that used in the paper, as this result also treats snow cover in the reflectance data in winter):
or as an animation:
We see that allowing the DA of surface reflectance to affect the leaf carbon improved the NEP estimate considerably (particularly when winter observations were better constrained by incorporating a snow term in the observation operator). The ability of such data to constrain NEP is much less than that of detailed flux and stock measurements at the site (see the Williams et al. reference), but this is hardly surprising, as we were only affecting leaf carbon and had no real impact on the slow soil turnover factors affecting soil respiration.
Also, although the DA produced a very good estimate of NEP, its modelling of GPP (which one might assume to be better, as this does not directly include soil components) was quite poor, if the figure of Williams et al. is correct.
As with the previous application, there are many criticisms that can be made of this DA exercise. What is important about this work is its attempt to provide a direct linkage from the process model through to the satellite observations. This might be made more fully consistent if top of atmosphere (ToA) data were used instead of surface reflectance, as it would then be more straightforward to track and more fully quantify the observation uncertainties. Whilst this is certainly feasible, it adds another layer of complexity to the DA. Further, it is clearly not a good idea to assume that all of the ‘ancillary’ parameters are fixed (or to set them to estimates in the rather simplistic manner taken here) and also not to account for uncertainty in these terms.
ORCHIDEE is a French DGVM that has been used in a wide range of cases for understanding carbon dynamics. Whilst a solid and useful model, it of course needs continuing validation and improved constraints to increase its reliability. A significant paper in this regard is that of MacBean et al. (2018), which uses solar-induced fluorescence (SIF) as a proxy for GPP to optimise the model GPP parameters in a strong-constraint DA scheme. Recall that a strong-constraint system is one where we trust the model, but believe there may be error/uncertainty associated with the model parameters. The paper is significant for this course in two respects: first, it provides a direct link to the course material on GPP and DGVMs, and second, it makes use of novel satellite-based estimates of GPP for the data assimilation. Previously, constraint of process (GPP) has only been possible at limited sites where fluxes are measured, or by attempting to constrain the vegetation state (NDVI, LAI etc.). Whilst the use of state variables is useful in helping constrain DGVMs (as in the CCDAS example above), the state in the models is a result of process (GPP, NPP), and it is thought that constraining with flux measurements provides a much more direct and robust route to model testing, calibration and improvement.
In this approach, the GOME SIF dataset is related to GPP via (empirical) linear models per biome. ORCHIDEE default parameter distributions provide the prior GPP in the figure below. The data assimilation with the SIF observations then results in an update of the model parameter distributions and a reduction in global GPP uncertainty of 83%.
Global mean annual sum (2007–2011) and spatial distribution of: (a) GOME-2 SIF; (b) JUNG up-scaled FLUXNET data-driven GPP product; (c) ORCHIDEE prior GPP; (d) ORCHIDEE posterior GPP; (e) difference in ORCHIDEE simulated GPP (posterior – prior); (f) reduction in GPP uncertainty (1σ). The maps were created from the ORCHIDEE model simulations performed in this study, GOME-2 SIF data, and the JUNG product, using the Python programming language v2.7.13 (Python Software Foundation – available at http://www.python.org), the Matplotlib (v2.0.2) plotting library with the Basemap Toolkit (http://matplotlib.org/basemap/). See the Section on Data Availability for GOME-2 SIF and JUNG product availability, the ORCHIDEE model licence information and ORCHIDEE code availability.
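The per-biome SIF-to-GPP calibration is essentially a linear regression; a sketch with invented numbers (not those of MacBean et al.) is:

```python
import numpy as np

# Hedged sketch of a per-biome linear SIF-to-GPP mapping.
# The 'true' slope/intercept and noise level are assumptions for illustration.
rng = np.random.default_rng(7)
sif = rng.uniform(0.0, 2.0, 100)            # synthetic SIF values for one biome
true_slope, true_intercept = 12.0, 0.5      # assumed linear relation
gpp = true_slope * sif + true_intercept + rng.normal(0, 1.0, 100)

# least-squares fit of the per-biome linear model GPP = a * SIF + b
A = np.vstack([sif, np.ones_like(sif)]).T
(slope, intercept), *_ = np.linalg.lstsq(A, gpp, rcond=None)
```

In the real system, a fit of this sort (one per biome) converts the SIF observations into GPP 'pseudo-observations' that then constrain the model parameters.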
The Earth Observation Land Data Assimilation System (EO-LDAS) project (Lewis et al., 2012) (see also the EO-LDAS website) aimed to develop a generic scheme for performing DA using EO data.
In the prototype tool developed, linear difference operators (regularisers) were used as the model constraint, along with a prior constraint and observational constraints. Whilst various configurations were implemented and demonstrated, perhaps the most interesting and useful is the non-linear radiative transfer model implemented as the main observation operator, along with associated adjoint code. The availability of adjoint code means that variational methods can be used efficiently, so the system was mainly designed as a weak-constraint variational DA system.
The observation operator models canopy reflectance as a function of leaf biochemistry (leaf chlorophyll, dry matter, water etc.), soil spectral properties (through some spectral basis functions), and canopy structural characteristics (LAI, leaf angle distribution, leaf size and canopy height (NB reflectance is only sensitive to the ratio of these latter two)).
Such a system can be used simply to estimate the (full) set of biophysical parameters controlling the EO signal at one particular time. An example, using MERIS reflectance data, is:
Here, observations in 15 wavebands in the visible and near infrared were used to solve for an estimate of the (7) biophysical parameters that the signal is sensitive to, assuming weak prior knowledge on these terms.
EO-LDAS is able to achieve this from a single MERIS observation but, not surprisingly, even though the uncertainty in the forward-modelled reflectance is reduced compared to the observation uncertainties (red are the original, green is modelled from the observation operator), the uncertainties in the biophysical parameter estimates are typically high (and cover e.g. the full range of possible values for LAI), although the uncertainty in the estimate of chlorophyll concentration here is relatively low.
If we then try to use the biophysical parameters estimated from the MERIS data to predict what another sensor (at the same place, on the same day) would see, the results are rather disappointing:
The green triangles show MODIS observations and the blue line the prediction from parameters constrained by the MERIS observation.
At first sight, this might seem odd: we were able to describe the MERIS data very well with the model, but it turns out that its predictive power (to other wavebands and angles) is poor. This is because of the high uncertainty in the state vector arrived at from MERIS and also because of correlations between the state vector elements.
Lewis et al. (2012) performed an experiment using EO-LDAS with synthetic observations for the forthcoming Sentinel-2 MSI sensor. A feature of Sentinel-2 MSI (when fully operational with two satellites) will be the capability for quite high repeat coverage at quite high spatial resolution: typically 5 days (less frequent in practice because of cloud cover) and a resolution of tens of metres.
To explore the sort of information we might be able to derive from such data, Lewis et al. used a regularisation constraint (or rather, separate first- and second-difference constraints) to compare with what might be obtained using sets of single observations.
The bottom-line result was that, even when just considering the biophysical properties at the times of observation, a reduction in uncertainty of around a factor of 2 could be achieved just by using such a weak-constraint model (i.e. assuming nothing about the form of the parameter trajectories other than smoothness).
Of course, in any such exercise, the degree of smoothness is not something that is known beforehand, so it must itself be estimated. This can be achieved in a number of ways, one of which is to perform a generalised cross-validation. This involves leaving out an observation, performing the DA for different degrees of smoothness, and seeing how well each of these is able to predict the ‘left out’ measurement.
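This leave-one-out selection of the smoothness parameter can be sketched as follows (the signal, noise level and candidate \(\gamma\) values are all assumptions for illustration):

```python
import numpy as np

# Leave-one-out cross-validation to choose the smoothness parameter gamma
# for a first-difference regulariser (toy setup).
rng = np.random.default_rng(3)
N = 60
t = np.linspace(0, 2 * np.pi, N)
truth = np.sin(t)
sd = 0.3
y = truth + rng.normal(0, sd, N)
D = np.diff(np.eye(N), axis=0)
DTD = D.T @ D

def loo_score(gamma):
    """Mean squared error in predicting each left-out sample."""
    err = 0.0
    for i in range(N):
        w = np.ones(N)
        w[i] = 0.0                            # drop sample i from the data term
        A = np.diag(w) / sd**2 + gamma * DTD  # Hessian of the regularised cost
        x_hat = np.linalg.solve(A, w * y / sd**2)
        err += (x_hat[i] - y[i]) ** 2
    return err / N

gammas = [1.0, 10.0, 100.0, 1000.0]
scores = [loo_score(g) for g in gammas]
best = gammas[int(np.argmin(scores))]
```

The chosen \(\gamma\) is the one whose smoothed solution best predicts the withheld samples, which is exactly the criterion described above.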
The figure below shows the average of this metric as a function of \(\gamma\) (which is effectively the smoothness parameter) over all samples (i.e. the average behaviour when leaving out each sample in turn):
Results for four scenarios are shown. Those labelled ‘O1’ and ‘O2’ refer to using first- and second-order constraints respectively. Those labelled ‘complete’ refer to assumed regular observations every 5 days, whereas ‘cloud’ has 50% of samples removed.
One thing to notice from this is that these functions have quite broad minima, which means that the result of the DA is not very strongly dependent on the degree of smoothness assumed. This is more true when the sampling rate is higher (‘Complete’) for obvious reasons.
The results for the ‘complete case’ are:
and for the ‘cloudy’ case:
The left hand column of these figures (‘single obs. inversion’) shows results obtained by considering each (day’s) observation independently. The dashed line on all graphs shows the true temporal trajectory for that parameter. A total of 73 observations are available for the ‘complete’ case, over a year.
The rows show the different model parameters: LAI, leaf chlorophyll, leaf water, leaf dry matter, leaf complexity (N), and soil brightness.
When the LAI is low (at the start and end of the time period), both LAI and soil properties are quite accurately estimated and have low uncertainties. As LAI reaches around 2, the ability to estimate soil properties is drastically reduced. The uncertainties (for ‘single obs.’) for all of the leaf terms are quite high, although, within the noise, the basic ‘shape’ of the parameter trajectory can be determined.
When regularisation constraints are applied, the uncertainty is dramatically reduced. Further, an estimate of the model state is available for all days. The regularisation results for the cloudy scenario are not very much poorer than those for the full sampling.
This experiment is effective in showing the power of DA methods to help constrain estimates of surface biophysical variables. Even if nothing is known of the likely form of the parameter dynamics, first- or second-order regularisation constraints can prove effective. The processing cost for this, using a variational approach (solving for all \(365 \times 6 = 2190\) states simultaneously) is, at present, quite high, though that could be greatly reduced with more efficient operators and less intensive methods of solving the cross-validation.
We have reviewed some applications using DA in land surface monitoring and linking to ecosystem models. The first of these, CCDAS, is in many ways very advanced, and there are similar efforts underway to develop systems of this sort for other land surface schemes. Probably the most important aspect of these systems is the incorporation of an uncertainty framework into the modelling. Before CCDAS (and still all too often), this was something missing in land surface models, whilst at the same time many papers and hours were spent ‘calibrating’ the models and proposing new mechanisms to overcome apparent shortcomings. CCDAS points the way to doing better science with terrestrial carbon models, and also allows a multitude of observations (well, fAPAR and CO2 concentrations at the moment) to test and update the model. There are still many flaws in the land surface models, and this work is scientifically and societally important.
The ORCHIDEE case is a clear example of the value of satellite observations in constraining DGVM behaviour. The model is set up in much the same way as CCDAS, but rather than state variables, the DA is driven by what are essentially global flux measurements (GPP). The impact on the model performance and uncertainty is dramatic and shows the great promise of such data.
At present, CCDAS and ORCHIDEE use only a very simple interface to observations. In the next example, the work of Quaife et al., we showed how (within an EnKF scheme) an interface could be built to map directly from the state variables of the process model through to EO observations (well, close to these: surface reflectance rather than ToA radiance measurements, but this should be seen as a stage on the way). There were several flaws in some of the details of the approach, but as a demonstrator of the potential of using such complex observation operators for linking to optical EO data, this is significant.
In the final example, we showed how a weak-constraint variational system can be used to estimate surface state properties. Although EO-LDAS is not, in this case, coupled to a process model (other than through empirical difference constraints), the utility of the DA was demonstrated, and the idea is readily applied to such coupled modelling. Further, it is unlikely that the process models will, for the foreseeable future at least, provide models of the expected dynamics of most of the terms that control the EO signal, so some method of dealing with the fact that these are not fixed (such as regularisation constraints) is important in its own right.
The state of the art in ‘practical’ merging of satellite data and terrestrial carbon models is not, at present, very advanced. Until recently, it mainly consisted of using NDVI data to constrain some phenology parameters of the models. This has now been extended by SIF-based estimates of GPP, which provide an extremely powerful source of information in this regard.
Also, when we do attempt to derive satellite products of surface biophysical properties, we do not, on the whole, provide reliable uncertainty information with the data. Even if we did, we find that there are really rather different interpretations of what such properties refer to both between different terrestrial models and between these and the information coming from satellite products. All of this is compounded by the rather large volumes of (raw) satellite data that we have access to, and many users feel they would like this to be simplified in some form.
That said, there is emerging, in the DA field, a set of techniques that will allow us to merge and test and calibrate the land surface schemes using a variety of EO (and other) data in a consistent manner, and it is likely that this is how EO data will be used ‘in the future’.
Lewis, P., Gomez–Dans, J., Kaminski, T., Settle, J., Quaife, T., Gobron, N., Styles, J., Berger, M. (2012) An Earth Observation Land Data Assimilation System (EO-LDAS). Remote Sensing of Environment
Quaife, T., Lewis, P., De Kauwe, M., Williams, M., Law, B., Disney, M., Bowyer, P. (2008), Assimilating canopy reflectance data into an ecosystem model with an Ensemble Kalman Filter, Remote Sensing of Environment, 112(4), 1347-1364.
Scholze, M., T. Kaminski, P. Rayner, W. Knorr, and R. Giering (2007), Propagating uncertainty through prognostic CCDAS simulations, J. Geophys. Res., 112, D17305, doi:10.1029/2007JD008642.
Knorr, W. (2000), Annual and interannual CO2 exchanges of the terrestrial biosphere: Process-based simulations and uncertainties, Global Ecol. Biogeogr., 9(3), 225-252.
Knorr, W., Kaminski, T., Scholze, M., Gobron, N., Pinty, B., Giering, R., Mathieu, P.-P. (2010), Carbon cycle data assimilation with a generic phenology model, J. Geophys. Res., doi:10.1029/2009JG001119.
Solar radiation and land surface temperature: Answers to exercises
At what time of year is PAR radiation incident on Earth the highest?
Why is this so?
[5]:
```python
#### ANSWERS
msg = '''At what time of year is PAR radiation incident on Earth the highest?
Why is this so?

Perihelion (the closest approach of the Earth to the Sun)
occurs in Northern Hemisphere winter,
so PAR incident on the Earth is *highest* in Northern Hemisphere winter.'''
print(msg)
```
At what time of year is PAR radiation incident on Earth the highest?
Why is this so?
Perihelion (the closest approach of the Earth to the Sun)
occurs in Northern Hemisphere winter,
so PAR incident on the Earth is *highest* in Northern Hemisphere winter.
Describe and explain the patterns of iPAR as a function of day of year and latitude.
Comment on the likely reality of these patterns.
[8]:
```python
# ANSWER
msg = '''Explain the patterns of modelled iPAR as a function of day of year
and latitude.

We have plotted iPAR as a function of DOY for various latitudes.

90 and -90 are the poles and only receive radiation during half of
the year, bounded by the spring and autumn equinox (dashed lines).
The peak occurs at the solstice (24 hours of daylight).

66.5 and -66.5 are the polar circles. At and above these latitudes
there is no iPAR (0 hours of daylight) at the winter (summer)
solstice for the Arctic (Antarctic) but 24 hours of daylight at
the other solstice (they peak at one solstice and have a minimum at the other).

23.5 (-23.5) are the tropics. These are significant because they
have the sun directly overhead (sza = 0) at noon on the solstice.

0 degrees is the equator. The sun is directly overhead
at noon on the equinox. We also see the smallest variation
in iPAR here, according to this model.

The peak magnitude of noon iPAR varies considerably with latitude,
but there is surprisingly little variation in the peak *mean* iPAR with
latitude. This is because of variations in daylength.

The explanation for all of this lies in the seasonal behaviour
of the solar zenith angle (you should expand on this in your answer!).'''
print(msg)
```
Explain the patterns of modelled iPAR as a function of day of year
and latitude.
We have plotted iPAR as a function of DOY for various latitudes.
90 and -90 are the poles and only receive radiation during half of
the year, bounded by the spring and autumn equinox (dashed lines).
The peak occurs at the solstice (24 hours of daylight).
66.5 and -66.5 are the polar circles. At and above these latitudes
there is no iPAR (0 hours of daylight) at the winter (summer)
solstice for the Arctic (Antarctic) but 24 hours of daylight at
the other solstice (they peak at one solstice and have a minimum at the other).
23.5 (-23.5) are the tropics. These are significant because they
have the sun directly overhead (sza = 0) at noon on the solstice.
0 degrees is the equator. The sun is directly overhead
at noon on the equinox. We also see that there is the smallest variation
in iPAR, according to this model.
The peak magnitide of noon iPAR varies considerably with latitude,
but there is surprisingly little variation in the peak *mean* iPAR with
latitude. This is because of variations in daylength.
The explanation for all of this lies in the seasonal behaviour
of the solar zenith angle (you should expand on this in your answer!).
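The seasonal behaviour of the solar zenith angle can be sketched numerically. This is a minimal illustration using the common cosine approximation for solar declination, not the course `solar_model()` function:

```python
import numpy as np

def declination(doy):
    """Approximate solar declination (degrees) for a given day of year,
    using the common cosine approximation."""
    return -23.44 * np.cos(np.radians(360.0 / 365.0 * (doy + 10)))

def noon_sza(lat, doy):
    """Solar zenith angle (degrees) at local solar noon."""
    return abs(lat - declination(doy))

# Noon sza is close to 0 at the equator at the March equinox (DOY ~80)
# and at the Tropic of Cancer at the June solstice (DOY ~172);
# at 51 N the noon sun is much lower in December than in June.
for lat in [0, 23.44, 51]:
    for doy in [80, 172, 355]:
        print(f"lat {lat:6.2f}  doy {doy:3d}  noon sza {noon_sza(lat, doy):6.2f}")
```

This reproduces the key features above: the tropics see an overhead sun at the solstice, the equator at the equinox, and high latitudes see large seasonal swings in noon sza and hence iPAR.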
[9]:
msg = '''Comment on the likely reality of these patterns.
These plots take no account of several important factors:
* altitude (less atmospheric path for attenuation) so
the radiation will be higher at altitude. Also, the
spectral nature of the SW radiation will vary with
altitude, so the proportion of PAR may also vary.
* cloud cover: In the tropics in particular, extensive
cloud cover will lower the irradiance at the surface.
* slope: the Earth here is assumed flat, relative to the
geoid, but the local terrain slope and aspect will strongly
affect local conditions (consider the projection term).
* Earth curvature: the airmass here in the attenuation term
is considered 1/cos(sza). That is a good approximation
up to around 70 degrees. Beyond that, Earth curvature
effects and refraction should normally be accounted for.
However, since the iPAR is low under those conditions,
it is often ignored. It may be significant towards the Poles.'''
print(msg)
Comment on the likely reality of these patterns.
These plots take no account of several important factors:
* altitude (less atmospheric path for attenuation) so
the radiation will be higher at altitude. Also, the
spectral nature of the SW radiation will vary with
altitude, so the proportion of PAR may also vary.
* cloud cover: In the tropics in particular, extensive
cloud cover will lower the irradiance at the surface.
* slope: the Earth here is assumed flat, relative to the
geoid, but the local terrain slope and aspect will strongly
affect local conditions (consider the projection term).
* Earth curvature: the airmass here in the attenuation term
is considered 1/cos(sza). That is a good approximation
up to around 70 degrees. Beyond that, Earth curvature
effects and refraction should normally be accounted for.
However, since the iPAR is low under those conditions,
it is often ignored. It may be significant towards the Poles.
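The breakdown of the plane-parallel airmass at large zenith angles can be illustrated by comparing 1/cos(sza) with a curvature-aware parameterisation. Here the Kasten and Young (1989) formula is used purely as an example correction; the course model itself uses the simple 1/cos(sza) form:

```python
import numpy as np

def airmass_flat(sza_deg):
    """Plane-parallel airmass, 1/cos(sza), as used in the plots above."""
    return 1.0 / np.cos(np.radians(sza_deg))

def airmass_kasten_young(sza_deg):
    """Kasten & Young (1989) airmass parameterisation, which stays
    finite at the horizon by accounting for Earth curvature."""
    return 1.0 / (np.cos(np.radians(sza_deg))
                  + 0.50572 * (96.07995 - sza_deg) ** -1.6364)

# The two agree closely up to around 70 degrees, then diverge
for sza in [0, 45, 70, 80, 85]:
    print(f"sza={sza:2d}  flat={airmass_flat(sza):6.2f}  "
          f"Kasten-Young={airmass_kasten_young(sza):6.2f}")
```

The divergence beyond ~70 degrees is exactly the regime flagged above: important towards the Poles, but at sun angles where iPAR is in any case small.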
Use the radiation() function to explore ipar and temperature for different locations and times of year
How are these patterns likely to affect what plants grow where?
[16]:
#### ANSWER
import numpy as np
import matplotlib.pyplot as plt
from geog0133.solar import solar_model, radiation
import scipy.ndimage.filters
from geog0133.cru import getCRU, splurge
from datetime import datetime, timedelta

msg = '''Use the radiation() function to explore ipar and temperature for different locations and times of year'''
print(msg)

tau = 0.2
parprop = 0.5
year = 2020
longitude = 30    # example -- you should do more locations

for lat in [0, 51]:
    fig, ax = plt.subplots(1, 1, figsize=(10, 5))
    ax2 = ax.twinx()
    # loop over solstice and equinox doys
    for doy in [80, 172, 264, 355]:
        jd, ipar, Tc = radiation(lat, longitude, doy)
        ax.plot(jd - jd[0], ipar, label=f'ipar doy {doy}')
        ax2.plot(jd - jd[0], Tc, '--', label=f'Tc doy {doy}')
    # plotting refinements
    ax.legend(loc='upper left')
    ax2.legend(loc='upper right')
    ax.set_ylabel(r'ipar ($PAR_{inc}\,\sim \mu mol\, photons / (m^2 s)$)')
    ax2.set_ylabel('Tc (C)')
    ax.set_xlabel("Fraction of day")
    ax2.set_ylim(0, None)
    _ = fig.suptitle(f"latitude {lat} longitude {longitude}")
Use the radiation() function to explore ipar and temperature for different locations and times of year
[17]:
msg = '''How are these patterns likely to affect what plants grow where?
We see the same main effects as in the exercises above with respect to
latitudinal variations and variations over the year.
Outside of the tropics, we note the large variations in iPAR
and temperature over the seasons. Plants respond to seasonal cues (phenology)
to optimise their operation. In the tropics, water is generally a more
significant driver than temperature or iPAR.'''
print(msg)
How are these patterns likely to affect what plants grow where?
We see the same main effects as in the exercises above with respect to
latitudinal variations and variations over the year.
Outside of the tropics, we note the large variations in iPAR
and temperature over the seasons. Plants respond to seasonal cues (phenology)
to optimise their operation. In the tropics, water is generally a more
significant driver than temperature or iPAR.
Carbon Modelling Practical: Answers to exercises
From the data in this table and your understanding of the controls on photosynthesis in the model, answer the following questions and confirm your answer by running the model.
which PFT has the highest values of We, and why?
How does this change with increasing ipar?
When ipar is the limiting factor, how does assimilation change when ipar increases by a factor of k?
For C3 grasses, what are the limiting factors over the temperatures modelled for ipar=200?
For C3 grasses, what are the limiting factors over the temperatures modelled for ipar=400?
For C4 grasses, what are the limiting factors over the temperatures modelled?
[1]:
#### ANSWER
msg = f'''which PFT has the highest values of We, and why?
From the notes, We is the product of the quantum efficiency alpha,
the PAR absorption rate (ipar) and the leaf absorptance
(1 - omega, where omega is the leaf single scattering albedo).
So, for a given ipar, it is controlled by the product of
alpha and (1 - omega).
The C3 plants have the same value of omega here (0.15)
so 1 - omega = 0.85. For C4, this is 0.83.
C3 grasses have the highest value of alpha (0.12).
So,
For C3 max, we have alpha * (1 - omega) = 0.85 * 0.12 = {0.85*0.12}
For C4, we have alpha * (1 - omega) = 0.83 * 0.06 = {0.83*0.06}
so, C3 grasses should have the highest We.
The other C3 plants should have the same We curves.
We can demonstrate this:
Maximum We for each PFT for default CO2 and ipar'''
print(msg)

# list of all pfts
pfts = ['C3 grass', 'C4 grass',
        'Broadleaf tree', 'Needleleaf tree', 'Shrub']
# store the data for each PFT
output = {}
# loop over pfts (do_photosynthesis is defined earlier in the practical)
for pft in pfts:
    output[pft], plotter = do_photosynthesis(ipar=200, pft_type=pft)
    print(pft, output[pft].We.max() * 1e6, 'umolCO2m-2s-1')
which PFT has the highest values of We, and why?
From the notes, We is the product of the quantum efficiency alpha,
the PAR absorption rate (ipar) and the leaf absorptance
(1 - omega, where omega is the leaf single scattering albedo).
So, for a given ipar, it is controlled by the product of
alpha and (1 - omega).
The C3 plants have the same value of omega here (0.15)
so 1 - omega = 0.85. For C4, this is 0.83.
C3 grasses have the highest value of alpha (0.12).
So,
For C3 max, we have alpha * (1 - omega) = 0.85 * 0.12 = 0.102
For C4, we have alpha * (1 - omega) = 0.83 * 0.06 = 0.0498
so, C3 grasses should have the highest We.
The other C3 plants should have the same We curves.
We can demonstrate this:
Maximum We for each PFT for default CO2 and ipar
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-1-45b02fe7642c> in <module>
     38 # loop over pfts
     39 for pft in pfts:
---> 40     output[pft], plotter = do_photosynthesis(ipar=200, pft_type=pft)
     41     print(pft, output[pft].We.max()*1e6, 'umolCO2m-2s-1')
     42 

NameError: name 'do_photosynthesis' is not defined
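The alpha * (1 - omega) reasoning above can be checked without the full model. This sketch uses only the parameter values quoted in the answer text; note that the full C3 formulation also includes a CO2-dependent term, so the C3 number differs slightly from the model output:

```python
# Light-limited rate We = alpha * ipar * (1 - omega), as stated in the notes.
# Parameter values below are those quoted in the answer text, not read
# from the model code.
params = {
    'C3 grass': {'alpha': 0.12, 'omega': 0.15},
    'C4 grass': {'alpha': 0.06, 'omega': 0.17},
}

ipar = 200.0  # umol photons m-2 s-1
efficiency = {pft: p['alpha'] * (1 - p['omega']) for pft, p in params.items()}
for pft in params:
    print(f"{pft}: alpha*(1-omega) = {efficiency[pft]:.4f}, "
          f"We ~ {efficiency[pft] * ipar:.1f} umol CO2 m-2 s-1")
```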
[6]:
#### ANSWER
msg = '''How does this change with increasing ipar?
It scales with ipar, so increasing ipar increases We'''
print(msg)

output = {}
pfts = ['C3 grass', 'C4 grass']
for pft in pfts:
    output[pft], plotter = do_photosynthesis(pft_type=pft, ipar=200.)
    print(pft, 'ipar=200', output[pft].We.max() * 1e6, 'umolCO2m-2s-1')
    output[pft], plotter = do_photosynthesis(pft_type=pft, ipar=400.)
    print(pft, 'ipar=400', output[pft].We.max() * 1e6, 'umolCO2m-2s-1')
How does this change with increasing ipar?
It scales with ipar, so increasing ipar increases We
C3 grass ipar=200 20.07264113525945 umolCO2m-2s-1
C3 grass ipar=400 40.1452822705189 umolCO2m-2s-1
C4 grass ipar=200 9.959999999999997 umolCO2m-2s-1
C4 grass ipar=400 19.919999999999995 umolCO2m-2s-1
[7]:
#### ANSWER
msg = '''When ipar is the limiting factor, how does assimilation change
when ipar increases by a factor of k?
This is almost the same question as above:
When ipar is the limiting factor (for all cases) it scales
directly with the value of ipar -- so increasing ipar by a factor of
k will increase assimilation by that same factor.'''
print(msg)
When ipar is the limiting factor, how does assimilation change
when ipar increases by a factor of k?
This is almost the same question as above:
When ipar is the limiting factor (for all cases) it scales
directly with the value of ipar -- so increasing ipar by a factor of
k will increase assimilation by that same factor.
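This linear scaling can be verified directly from the model values printed in the previous answer:

```python
# We values printed by the model runs above (umol CO2 m-2 s-1)
we_200 = {'C3 grass': 20.07264113525945, 'C4 grass': 9.959999999999997}
we_400 = {'C3 grass': 40.1452822705189,  'C4 grass': 19.919999999999995}

for pft in we_200:
    k = we_400[pft] / we_200[pft]
    print(f"{pft}: doubling ipar scales We by a factor of {k:.6f}")
```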
[8]:
### ANSWER
msg = '''For C3 grasses, what are the limiting factors over the temperatures modelled for ipar=200?'''

# list of all pfts
pfts = ['C3 grass']

plotter = {
    'n_subplots': len(pfts),   # number of sub-plots
    'name': 'default',         # plot name
    'ymax': None               # max value for y set
}
# store the data for each PFT
output = {}
# loop over pfts
for pft in pfts:
    output[pft], plotter = do_photosynthesis(ipar=200, pft_type=pft,
                                             plotter=plotter)

msg = '''With ipar=200 and co2_ppmv=390, we have the same graph we saw earlier.
The limiting factors are Ws up to around 17 C
(close to the We value), then We to around 36 C,
then Wc. At moderate temperatures then, and low ipar,
light is the main limiting factor. At lower temperatures
it is transport-limited (almost the same as carboxylation)
and at higher temperatures it is limited by carboxylation.'''
print(msg)
>>> Saved result in photter_default.png
With ipar=200 and co2_ppmv=390, we have the same graph we saw earlier.
The limiting factors are Ws up to around 17 C
(close to the We value), then We to around 36 C,
then Wc. At moderate temperatures then, and low ipar,
light is the main limiting factor. At lower temperatures
it is transport-limited (almost the same as carboxylation)
and at higher temperatures it is limited by carboxylation.
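The idea of reading off the limiting factor, i.e. taking whichever of Wc, We and Ws is smallest at each temperature, can be sketched with toy curves. These are illustrative shapes only, not the course parameterisation; which factor limits at which temperature depends on the PFT parameters, as the answers here show:

```python
import numpy as np

# Toy temperature-response curves (illustrative shapes only, NOT the
# course parameterisation): Wc peaks at moderate-high temperature,
# We is flat (fixed ipar), Ws rises slowly with temperature.
T = np.linspace(0, 45, 200)
Wc = 50 * np.exp(-((T - 28) / 10) ** 2)   # carboxylation-limited
We = np.full_like(T, 20.0)                # light-limited
Ws = 14 + 0.35 * T                        # transport-limited

# The gross rate is the minimum of the three at each temperature
rates = np.vstack([Wc, We, Ws])
labels = np.array(['Wc', 'We', 'Ws'])
limiting = labels[np.argmin(rates, axis=0)]

for t in [5, 20, 42]:
    i = int(np.argmin(np.abs(T - t)))
    print(f"T = {t:2d} C: limiting factor {limiting[i]}")
```

With these toy shapes, carboxylation limits at both temperature extremes and light at moderate temperatures, the pattern described for C4 grass below.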
[9]:
### ANSWER
msg = '''For C3 grasses, what are the limiting factors over the temperatures modelled for ipar=400?'''

# list of all pfts
pfts = ['C3 grass']

plotter = {
    'n_subplots': len(pfts),   # number of sub-plots
    'name': 'default',         # plot name
    'ymax': None               # max value for y set
}
# store the data for each PFT
output = {}
# loop over pfts
for pft in pfts:
    output[pft], plotter = do_photosynthesis(ipar=400, pft_type=pft,
                                             plotter=plotter)

msg = '''With ipar=400 and co2_ppmv=390, we have removed the
light limitation. The main shape follows closely
that of Wc, though up to around 17 C it is
actually Ws that is limiting here.'''
print(msg)
>>> Saved result in photter_default.png
With ipar=400 and co2_ppmv=390, we have removed the
light limitation. The main shape follows closely
that of Wc, though up to around 17 C it is
actually Ws that is limiting here.
[10]:
### ANSWER
msg = '''For C4 grasses, what are the limiting factors over the temperatures modelled?'''

# list of all pfts
pfts = ['C4 grass']

# set ymax here to be able to see the plots
plotter = {
    'n_subplots': len(pfts),   # number of sub-plots
    'name': 'default',         # plot name
    'ymax': 50                 # max value for y set
}
# store the data for each PFT
output = {}
# loop over pfts
for pft in pfts:
    output[pft], plotter = do_photosynthesis(ipar=200, pft_type=pft,
                                             plotter=plotter)

msg = '''With ipar=200 and co2_ppmv=390, we have the same graph we saw earlier.
The limiting factors are Wc up to around 17 C and after 36 C.
For moderate temperatures, it is light limited. The light-limited rate
defines the maximum assimilation rate.'''
print(msg)
>>> Saved result in photter_default.png
With ipar=200 and co2_ppmv=390, we have the same graph we saw earlier.
The limiting factors are Wc up to around 17 C and after 36 C.
For moderate temperatures, it is light limited. The light-limited rate
defines the maximum assimilation rate.
[11]:
#### ANSWER
# list of all pfts
pfts = ['C4 grass']
# store the data for each PFT
output = {}
# set ymax here to be able to see the plots
plotter = {
    'n_subplots': len(pfts),   # number of sub-plots
    'name': 'default',         # plot name
    'ymax': 50                 # max value for y set
}
# loop over pfts
for pft in pfts:
    output[pft], plotter = do_photosynthesis(ipar=400, pft_type=pft,
                                             plotter=plotter)

msg = '''As we increase ipar, we reduce the temperature range over which
light limitation kicks in, and increase the maximum rate
in proportion to the increase in ipar.'''
print(msg)
>>> Saved result in photter_default.png
As we increase ipar, we reduce the temperature range over which
light limitation kicks in, and increase the maximum rate
in proportion to the increase in ipar.
msg = '''Explain the factors limiting carbon assimilation for several PFTs, latitudes and times of year.
You should relate your answer to the plots of assimilation as a function of temperature we examined earlier.
We show an example of C3 and C4 grass at the Tropic of Cancer.
First, we notice that the assimilation rate is significantly higher
for the C4 grass than for a C3 grass.
Light limitation kicks in with the daytime, as we have previously seen.
The temperature ranges for C3 and C4 grasses are 0-36 C and 13-45 C
respectively. At days 172 and 264, we see the temperature is sometimes
above the upper limit for C3 grass; both Ws and Wc are very low
during this period, and the assimilation rate is close to zero all afternoon.
This is not the case for the more temperature-tolerant C4 grasses. For days
80 and 355, the temperature is more comfortable for both types of grass
and assimilation broadly follows the temperature increase over the day.'''
print(msg)
Explain the factors limiting carbon assimilation for several PFTs, latitudes and times of year.
You should relate your answer to the plots of assimilation as a function of temperature we examined earlier.
We show an example of C3 and C4 grass at the Tropic of Cancer.
First, we notice that the assimilation rate is significantly higher
for the C4 grass than for a C3 grass.
Light limitation kicks in with the daytime, as we have previously seen.
The temperature ranges for C3 and C4 grasses are 0-36 C and 13-45 C
respectively. At days 172 and 264, we see the temperature is sometimes
above the upper limit for C3 grass; both Ws and Wc are very low
during this period, and the assimilation rate is close to zero all afternoon.
This is not the case for the more temperature-tolerant C4 grasses. For days
80 and 355, the temperature is more comfortable for both types of grass
and assimilation broadly follows the temperature increase over the day.