Open access peer-reviewed chapter

Kalman Filter Harmonic Bank for Vostok Ice Core Data Analysis and Climate Predictions

By Migdat Hodzic and Ivan Kennedy

Submitted: June 22nd 2020. Reviewed: September 30th 2020. Published: January 4th 2021.

DOI: 10.5772/intechopen.94263


Abstract

The Vostok ice core data cover 420,000 years, indicating the natural regularity of Earth’s surface temperature and climate. Here, we consider four major cycles of similar duration, ranging from 86,000 to 128,000 years, with warming interglacials comprising some 15% of each cycle and cooling periods the remaining 85%. Globally, we are near the peak of a rapid warming period. We perform a detailed frequency analysis of temperature and CO2 cycles, as a primary stage in building a logical Climate Prediction Engine (CPE), illustrated with specific harmonics. This analysis can be repeated for all harmonics and various cycle combinations. Our time correlation analysis estimates the CO2 time lag behind temperature at 400–2300 years, depending on the cycle, longer on average than previously concluded. We also perform Fast Fourier Transform analysis, identifying a full harmonic spectrum for each cycle, plus an energy analysis to identify each harmonic amplitude, in order to achieve further prediction analysis using a Kalman filter harmonic bank. For the Vostok data we can use combinations of cycles other than the most recent for learning, and the current ongoing cycle for testing. Assuming causal time regularity, more cycles can be employed in training, hence reducing the prediction error for the next cycle. This results in prediction of climate data with both naturally occurring and human-forced CO2 values. We perform this detailed time and frequency analysis as a basis for improving the quality of our climate prediction methodologies, with particular attention to testing alternative hypotheses of the possible causes of climate change.
These include the effect on albedo of suspended dust and of water vapor increasing with temperature in initiating interglacial warming, the effect of the temperature and pH of surface water on the ambient level of CO2 in the atmosphere, and the finding of a larger latent heat capacity in the atmosphere required to sustain its circulatory motions, leading to friction and turbulent release of heat in the boundary layer. All these possibilities can be examined in an effective CPE.

Keywords

  • Vostok data
  • time and frequency analysis
  • Kalman filter harmonic bank
  • climate prediction engine
  • machine learning

1. Introduction

Extensive climatic data for the past four ice ages and earlier, the period in which Homo sapiens evolved, are available in various scientific reports analyzing ice cores, commencing from the mid-1950s. There are various sites on Antarctica and Greenland where intensive ice core drilling has occurred since 1956, with several countries supporting more than a score of different drilling projects. Currently, intensive ice core drilling is being conducted in other areas as well, so an even larger data set is anticipated. In previous papers [1, 2, 3, 4] the history and limitations of ice core drilling are described in detail. Our purpose in this paper is to employ Vostok ice core data for time and frequency related analyses to set a basis for improving prediction of climate variations. The data sets include derivations of relative temperature, carbon dioxide (CO2), methane (CH4), oxygen, dust and solar variation (insolation) during the past 420,000 years. Because isotopic fractionation of oxygen-18 and deuterium in snowfall is temperature dependent, and a strong spatial correlation exists between mean annual temperature and mean isotopic ratios, it is possible to derive ice-core climate records. References [5, 6] presented the first record of a full glacial–interglacial period from an ice core drilled at the Russian Vostok station in Antarctica. However, it is important to establish criteria for controlling the quality of this new science, which we claim could eventually lead to a Climate Prediction Engine (CPE) based on verified causes.

A 420,000-year record was constructed from the [5] study of a 3 km deep ice core (Figure 1). Another source of similar ice core data is the European EPICA drilling project [7], which lasted from 1998 until 2005. EPICA data are comparable with Vostok data. In this paper we focus on specific analysis of only two of the Vostok data variables, namely relative temperature and CO2 content. We also touch upon the possible effect of dust at various points along the Vostok timeline. We will include other data at a later date. The data sets used are from [8] and, with corrections, [9]. The variations of atmospheric CO2, temperature and dust are shown in Figure 1, together with our definition of four cycles (C1, C2, C3, C4) formed from the maxima of the variables. These cycles could also be defined starting from the variable minima. Before presenting a detailed time and frequency analysis of a specific subset of the Vostok data, we make some general observations related to climate fluctuations of CO2, temperature and dust.

Figure 1.

Vostok data variation in global relative temperature and CO2 content [8,9].

Variations in global climate measured by temperature change automatically involve the thermal energy content and heat capacity of the atmosphere. These highly variable systems can be contrasted with the energy content of conservative physical systems like planetary orbits, where the principle of least action defines the trajectories favored; conservative systems can be reliably modeled as an interaction between gravity and inertial forces, with the angular momentum and action held constant as a function of mass and the gravitational constant [10]. By contrast, the molecular states in the global ecosystem are dominated by transient energy flows, oscillating on a complex set of time scales from as short as hourly to as long as geological epochs. Such variations in energy flows invite frequency analysis, characterized by Fourier transforms, to provide underlying information.

A “snowball” Earth [11], more or less covered by ice and snow as the ice cores were fashioned, clearly has diminished total energy in terms of molecular quantum states affecting molecular vibration, rotation and translation, as well as in sustaining the more coherent circulatory motions in the atmosphere and the ocean that transfer thermal energy towards the poles. A warm Earth, as at present, has all these flows in higher quantum states, raising their entropy, as we have quantified for atmospheric molecules elsewhere [12]. Variation in physical parameters such as temperature is obvious but complex in its causes, with any longer-term trends overlying daily or seasonal trends as a function of latitude.

One state property that does not vary over time on Earth, except geologically, is atmospheric pressure, given that it is effectively based on the weight of the atmosphere, apart from variation for water, some 4% during the El Nino cycle [13]. The current IPCC consensus concludes that the most significant warming is from greenhouse gas content, such as that of CO2, methane and nitrous oxide, but this is not necessarily the major controlling cause of increasing temperature, given how negotiable thermal energy is. Globally, we are conducting a large uncontrolled experiment to see if CO2 is so important. Although it was originally thought that the CO2 data from ice cores might be considered as proof of a causal role in global warming, changes in CO2 lag temperature changes, so the release or absorption of CO2 from solution in sea water is more likely the cause of the correlations in the relationship.

An additional causative factor controlling the atmospheric pressure of CO2 that seems to have been overlooked is the varying acidity of waters, which can also respond to temperature. In warm periods with high oxygen levels, acidification by oxidation of reduced sulfur, nitrogen and carbon compounds is favored [14]. Alkaline conditions are favored anaerobically. The reduction of dimethyl sulfate, a major oxidant in sea water, can also lead to the evolution of reduced sulfur compounds, which can be converted to sulfuric acid in the atmosphere by ultraviolet oxidation. The alkaline pH of the ocean, about 8.2, is effectively the same as that obtained in aerated water equilibrated with limestone (CaCO3). Sources of alkaline carbonate salts in the ocean are basaltic volcanic rocks and soils, in contrast to more acidic granitic deposits that tend to acidify water.

We have shown using equilibrium theory that a lowering of surface ocean pH by 0.01 units can lead to an increased CO2 pressure in the atmosphere of 8–10 ppm by volume. So, a change of 100 ppm could occur locally if the ocean surface or water on land were acidified by 0.10 unit, say from pH 8.20 to 8.10 as observed recently. Much larger local changes could occur from a catastrophic event such as a major volcanic eruption. The annual oscillation in CO2 pressure at 3500 m altitude on Mauna Loa, Hawaii could be a result of the change in the high surface temperature of the nearby ocean between winter and summer, rather than an imbalance between photosynthetic assimilation and respiratory evolution of CO2, as often claimed in general climate models. The CO2 pressure in Hawaii peaks after winter in May, when the sea temperature is about 23°C, and reaches its minimum in late October, when the sea is about 27°C. A peculiarity of calcite is that its solubility declines with higher temperature [15], so its precipitation removing CO2 from the atmosphere could partly or fully explain the oscillation. By contrast, at the sampling site at Cape Grim in Tasmania, where the annual sea temperature has a mean of 15°C with variation between 11 and 19°C, a range which does not necessarily lead to precipitation of calcite, there is only slight evidence of an oscillation. While the burning of fossil fuels would be the main current cause of pH variation in the ocean, any other acidifying processes in the atmosphere or on land, such as from agriculture, could also be controlling influences.
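The arithmetic of this pH sensitivity can be sketched directly. The helper below is hypothetical (not from the chapter): it simply scales the quoted estimate of 8–10 ppm per 0.01 pH unit linearly, which is only a rough local approximation of the underlying carbonate equilibrium.

```python
# Hypothetical linear scaling of the quoted sensitivity: ~8-10 ppm of
# atmospheric CO2 per 0.01-unit drop in surface-water pH. A mid-range
# value of 9 ppm per 0.01 unit is assumed here for illustration.
def co2_rise_ppm(delta_ph, ppm_per_001=9.0):
    """Rough CO2 pressure increase (ppm) for a pH drop of delta_ph units."""
    return round((delta_ph / 0.01) * ppm_per_001, 1)

print(co2_rise_ppm(0.10))  # a 0.10-unit drop (pH 8.20 -> 8.10) gives ~90 ppm
```

Under this linear assumption, a 0.10-unit acidification reproduces the order of magnitude stated above, roughly 100 ppm.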

For all of the above reasons regarding the complexity of causes of climate change, we consider it would be beneficial if the ice core data could be subjected to careful Fourier frequency analysis, yielding detailed data regarding long term mechanisms of variation in climate, also pointing to an appropriate dynamic modeling of the underlying processes. In this chapter we focus specifically on CO2 Vostok data for C1 as defined in Figure 1, as an example of our CPE approach, extending this to CO2 for C2, C3 and C4, or their combinations. Other data such as temperature, methane, dust, and any other available ice core data such as EPICA, can be analyzed similarly.


2. Methodology

The eminent statistician Fisher [16] was an early exponent of testing statistical significance by harmonic analysis. Some preliminary spectral analysis has been conducted on the Vostok ice core data set, as reported in [17, 18, 19]. In this paper, we perform our own detailed time and frequency analysis in R and Excel, applied to the Barnola et al. data set [8, 9]. In such analysis, time series are decomposed into underlying sine and cosine functions to establish the most important frequencies. Various texts on Fast Fourier Transforms (FFT) were also sources for frequency determination. These approaches allow construction of periodograms quantifying the contributions of the individual frequencies to the time series regression. The methodology developed for bioinformatics [20] has general application for time series and was employed in this study. Trudinger et al. [21, 22] have also applied Kalman filter analysis to ice core data. This allowed a more rigorous analysis of CO2 variability for the Law Dome ice cores of Antarctica over the recent 1000 years than the usual deconvolution method. These authors pointed out that the Kalman filter allows better calculation of uncertainties in the deduced sources, the uncertainties corresponding to the selected range of frequencies. They claimed [22] that it allows investigation of statistical properties that are directly related to physical properties.

Our aim is to apply a Kalman filter based methodology to define the logic for a Climate Prediction Engine (Figure 2) based on the four data cycles of Figure 1. Cycle C1 is the most recent period, the current interglacial warming period nearing its maximum but still incomplete. The overall number of published data points for both variables (relative temperature and CO2) is 363. The individual cycles are determined by locating maximum absolute values for relative temperature and CO2, and they differ slightly in the number of data points. Also, because of a lag observed between the maxima for temperature and CO2, the number of data points differs slightly between the two data sets in each cycle. This is also confirmed in our time correlation analysis below. Table 1 summarizes the number of data points in each cycle. Note that each cycle is defined as top-to-top data values for both CO2 and temperature, resulting in a different number of data points for CO2 and temperature. However, the beginning of each interglacial, where temperature commences climbing, might prove superior for some analyses, given that the controlling causes for reversal may be more consistent for these periods. Obviously, the current cycle is still evolving and new data may be added as required for further analysis. We can achieve that by appending additional current points to the original Vostok data, though these might skew the previous natural data progression. Comparison of the two data sets including current data may be very useful in estimating near and far future global effects of factors like temperature and CO2 content, an ultimate goal of this research.

Figure 2.

Climate prediction engine (CPE) logic.

No. of data points | Cycle C1 | Cycle C2 | Cycle C3 | Cycle C4 | Total
Temperature | 74 | 136 | 99 | 54 | 363
CO2 | 80 | 135 | 99 | 49 | 363

Table 1.

Number of original Vostok data points for each cycle.

Boldfaced entries point to CO2 and Cycle 1 related values.

Note that one such additional current data point may not make a large difference to the current C1 as far as harmonic analysis is concerned, which has long term time behavior built in, but the short term time effects may be more prominent. Our approach covers both short and long term effects, and it can be used to perform sensitivity analysis using current climate data together with the original Vostok data. The uneven data sampling times within each cycle may also pose some numerical issues for the analysis in the context of Kalman filter (KF) prediction methodology. There are various methods for the missing data problem; one of the simpler yet effective ones is to fill in missing data by some (linear or nonlinear) approximation method. We inject additional data using linear interpolation, also satisfying Nyquist sampling requirements and in the process minimizing the number of required harmonics used by the Kalman filter harmonic bank (KFHB). For C1, which originally had 80 data points for CO2, we added additional points for a total of 128, suitable for FFT and also corresponding to the length of 128,000 years so far in the cycle.
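The data-filling step described above can be sketched with NumPy's linear interpolation. The ages and CO2 values below are illustrative placeholders, not the actual Vostok series; only the resampling mechanics (uneven samples onto a uniform 128-point grid) follow the text.

```python
import numpy as np

# Resample unevenly spaced ice-core samples onto a uniform grid by linear
# interpolation, padding the record to a power-of-two length (128 points)
# so an FFT can be applied directly. Values are illustrative only.
ages = np.array([0.0, 2362.0, 4050.0, 7500.0, 12000.0])  # years before present
co2 = np.array([350.0, 284.0, 272.0, 260.0, 255.0])      # ppm (illustrative)

n_uniform = 128                                  # target length, as for cycle C1
grid = np.linspace(ages[0], ages[-1], n_uniform)
co2_uniform = np.interp(grid, ages, co2)

print(len(co2_uniform))  # 128 evenly spaced samples
```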

An aim is to gain more insight into the Vostok data in the time and frequency domains, and to check the corresponding amplitude and energy content in order to reduce the number of significant harmonic components (Figure 2). Other authors have used similar energy considerations, but in our case we propose a simple and effective mathematical model of a stochastic harmonic oscillator, from which a KFHB can be built and used to further analyze Vostok data as well as predict near and far future data behavior. Note that besides C1, other cycles can be used to improve precision by combining two or three cycles based on past data for C1, C2 and C3. The amplitude and energy analysis performed here reduces the number of required individual Kalman Filter Harmonic Oscillators (KFHO), the individual blocks of the KFHB. The point of using this approach, rather than just combining amplitudes and harmonic components directly as described for example in [23], is to accommodate the stochasticity of the data and the overall probabilistic nature of the prediction problem using KFs. This approach adds to the robustness of the method and provides better probabilistic accuracy for both short and long term periods, as well as allowing for prediction sensitivity analysis using various values, such as CO2 levels as measured now and as estimated for future periods. Our methodology can be applied both for research and as a policy making tool for future climate related societal and technological decisions.
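The stochastic harmonic oscillator underlying each KFHO can be sketched as a two-state Kalman filter whose transition matrix is a rotation at the harmonic's frequency. This is a minimal generic linear-Gaussian illustration with hand-picked noise covariances, not the authors' Section 8 model or tuning.

```python
import numpy as np

# One Kalman Filter Harmonic Oscillator (KFHO): a harmonic at angular
# frequency w modeled as a 2-state rotation x_{k+1} = F x_k + w_k, observed
# through H = [1, 0]. A bank (KFHB) would run one filter per retained
# harmonic and sum the outputs. All numbers here are illustrative.
def kfho_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x                           # predict state
    P = F @ P @ F.T + Q                 # predict covariance
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # update with measurement z
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

w = 2 * np.pi / 128.0                   # harmonic H1 on a 128-sample cycle
F = np.array([[np.cos(w), np.sin(w)],
              [-np.sin(w), np.cos(w)]])
H = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[0.01]])

x, P = np.zeros((2, 1)), np.eye(2)
for k in range(128):                    # track a noiseless test sinusoid
    z = np.array([[np.sin(w * k)]])
    x, P = kfho_step(x, P, z, F, H, Q, R)
```

Because the rotation matrix reproduces the sinusoid exactly, the filter locks onto the test signal within one cycle; the Q/R ratio controls how strongly new measurements are trusted.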

In this paper we perform time analysis of all cycles and C1 CO2 frequency analysis (boldfaced in Table 1) to illustrate the KFHB approach (Section 7).

3. A climate prediction engine

The block diagram in Figure 2 summarizes our CPE. It seeks the ability to predict various quantities using both past data (from one or more cycles) and the currently available data. This also allows running various sensitivity analyses using a variety of future scenarios for CO2 and temperature, for example. The CPE can be applied to any ice core data, Vostok, EPICA or other, and to any variable: CO2, temperature, methane, O2 or dust.

The CPE consists of three major blocks. The first is a database of various ice core data, both original and conditioned, with data inserted as required to make the series more uniformly distributed across the time span of each cycle. The second block is data analysis, in the time and frequency domains, including various correlation measures. The third block is the prediction engine, which consists of a set of oscillators (KFHO) that form a KFHB (Section 8). Various predictions as well as sensitivity analyses are performed in this block. Finally, prediction parameters are fine-tuned by comparing original ice core data against the prediction errors; this is primarily done to fine-tune the KFHO gains. Section 8 contains all the mathematical details of the KFHB.

4. Average sampling time analysis

First, we considered the entire set of Vostok data with regard to time sampling. This yields some initial indication of important issues in dealing with the data and of how to perform further Vostok data filling to minimize the effects of non-uniform data distribution. As Table 1 indicates, the number of data points and the corresponding time periods for each cycle are quite different. This has to be taken into account when the analysis is performed, particularly for machine learning use of individual cycles to estimate ongoing C1 data values. Table 2 summarizes the approximate duration of each cycle based on our definition of the four cycles.

Duration in years | Cycle C1 | Cycle C2 | Cycle C3 | Cycle C4
Temperature | 127,726 | 115,156 | 86,462 | 96,782
CO2 | 128,399 | 109,800 | 86,148 | 95,587

Table 2.

Climatic cycles approximate duration in years.

Boldfaced entries point to CO2 and Cycle 1 related values.

Detailed visual inspection of maximum values in Figures 2 and 3 indicates that CO2 lags relative temperature in C4, C3 and C2. In C1, the data may be a little skewed due to the number of recent data points where maximum values are still not clearly identified in the Vostok data (left-most CO2 data, Figure 1), as C1 is still evolving. Better lag estimates can be obtained by cross correlation analysis, as done in Section 4 below. Another feature is the varied time differences at which the data were obtained from ice core readings. In some cases the difference between two data samples is only 400–500 years, but in others as much as 5000 years. To start the analysis, for each cycle we calculated average time sampling values, as summarized in Table 3. In Section 5 this will be corrected to some extent by data filling and by determining an ideal Nyquist data sampling rate.
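One plausible reconstruction of the Table 3 entries, consistent with the C1 CO2 values in Tables 1 and 2, simply divides each cycle's duration by its number of data points; the exact divisor convention is an assumption here.

```python
# Reconstructing the C1 CO2 entry of Table 3 from Tables 1 and 2:
# cycle duration divided by the number of data points (assumed convention).
duration_c1_co2 = 128_399   # years, from Table 2
points_c1_co2 = 80          # original data points, from Table 1

avg_ts = duration_c1_co2 / points_c1_co2
print(round(avg_ts))        # 1605 years, matching Table 3
```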

Figure 3.

Short term temperature and CO2 cross correlation.

Average sampling time in years | Cycle C1 | Cycle C2 | Cycle C3 | Cycle C4
Temperature | 1703 | 800 | 865 | 1792
CO2 | 1605 | 813 | 862 | 1911

Table 3.

Average data sampling time in years for each cycle.

Boldfaced entries point to CO2 and Cycle 1 related values.

The differences in average sampling times obviously come from the number of data points collected and the duration of each individual cycle. To facilitate our general approach, all other combinations of the available cycle data are also considered. The idea is to enrich the prediction, and the training of the machine learning approach, with the variety of all available data. For example, Table 4 indicates average sampling times for C1 and C2 combined (C12), C2 and C3 combined (C23), C2, C3 and C4 combined (C234), and the total data set C1234. Note from Table 4 that the more cycles we add, the more uniform the average data sampling times become, until they become equivalent for both CO2 and temperature. A further refinement of average sampling time is given in Section 5.2 below.

Average sampling time in years | Cycles C12 | Cycles C23 | Cycles C234 | Cycles C1234
Temperature | 1087 | 827 | 1008 | 1149
CO2 | 1108 | 834 | 1020 | 1149

Table 4.

Combined cycles C12, C23, C234 and C1234 mean data sampling times in years.

Boldfaced entries point to CO2 and combined Cycle C12.

5. Time correlation analysis

As a starting point in time cross correlation analysis, we examine correlation coefficients: single numbers that can be used as a simple measure of cross correlation intensity between two variables. There are several coefficients named after their inventors, such as Spearman, Pearson and Kendall [24], and they all indicate certain statistical properties that can relate two data series. Table 5 summarizes the standard cross correlation coefficient between relative temperature and CO2 for the individual cycles as well as for the entire Vostok data set. The intensity of the cross correlation is quite high, on average more than 0.8 for the entire set. If we split the cycles into up and down sub cycles we obtain Table 6, which indicates cross correlation coefficients for the up and down cycle parts. Overall these coefficients show a bigger spread between up and down sub cycles, and they are very sensitive to where the break between the up and down parts is chosen.

Cross correlation coefficient | Cycle C1 | Cycle C2 | Cycle C3 | Cycle C4 | Entire cycle C1234
Temperature vs. CO2 | 0.8389 | 0.8094 | 0.8069 | 0.8191 | 0.821

Table 5.

Cross correlation coefficient for individual and entire cycle.

Cycle | Part | Correlation | Years | Down/up ratio
Cycle 1 | Down | 0.869 | 108,328 | 7.05582
Cycle 1 | Up | 0.8262 | 15,353 |
Cycle 1 | Total | 0.8389 | 123,681 |
Cycle 2 | Down | 0.8008 | 97,087 | 8.653802
Cycle 2 | Up | 0.9014 | 11,219 |
Cycle 2 | Total | 0.8094 | 108,306 |
Cycle 3 | Down | 0.8558 | 72,931 | 5.961337
Cycle 3 | Up | 0.6503 | 12,234 |
Cycle 3 | Total | 0.8069 | 85,165 |
Cycle 4 | Down | 0.8519 | 83,533 | 6.824034
Cycle 4 | Up | 0.827 | 12,241 |
Cycle 4 | Total | 0.8191 | 95,774 |

Table 6.

Cross correlation coefficients for individual sub cycles.

In general, one of the coefficients (up or down) is considerably larger than the overall single cycle coefficient. This might indicate that the usefulness of the individual up and down cross correlation analysis may be limited for the current C1 cycle. In the context of machine learning methodology this points to putting less emphasis on cycles from longer in the past compared to the ongoing C1 cycle. To complete this analysis, the down/up period ratios in Table 6 indicate that the descending period is on average 6 to 8.6 times longer than the ascending period, the cooling part of each interglacial cycle being that much longer than the warming part. See also Figure 9 for C1 CO2.
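The coefficients in Tables 5 and 6 are standard correlation coefficients of the paired series. A minimal sketch with NumPy, using short synthetic stand-ins rather than the actual Vostok values, is:

```python
import numpy as np

# Pearson correlation between a "temperature" and a "CO2" series of one
# cycle. The arrays are synthetic stand-ins that co-vary, for illustration.
temp = np.array([-0.5, 0.3, 1.2, 2.0, 1.1, -0.2, -1.4, -2.1])
co2 = np.array([210.0, 230.0, 255.0, 280.0, 270.0, 240.0, 215.0, 200.0])

r = np.corrcoef(temp, co2)[0, 1]
print(round(r, 3))  # a value near +1 indicates strong positive correlation
```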

Time correlation analysis produces a variety of useful information about periodicity and correlation strength among data samples of a given quantity. In particular, autocorrelations provide a measure of the self correlation of a data series, and cross correlations indicate how two different data sets are correlated. We used standard programs such as R and Excel to generate these. One property of the cross correlation Rxy is very useful in the analysis of relative temperature vs. CO2 content, namely the estimate of the time delay between the two. In general, one needs to locate the maximum value of Rxy and read off the corresponding argument, in our case the time lag between relative temperature and CO2 content. Figure 3 (with numerical values around zero lag) illustrates short term calculations of the cross correlation between the two variables for the total C1234 cycle. We can read the value of the delay τdelay between temperature and CO2 as approximately equal to two lag units. Since the calculation is done for the entire data set, from Table 4 we can read an average sampling time for C1234 of 1149 years; hence we can make a rough upper limit approximation of the delay for the entire data set:

τdelay < 2 × 1,149 years ≈ 2,300 years

The calculation is approximate, primarily because of the non-uniform distribution of the Vostok data. Some data are separated by only hundreds of years rather than thousands. This points to a need to make the data more uniform by inserting additional data; we address how to harmonize these data in Section 5. The total delay appears to be of the order of 2000 or more years, not 100–200 years. That may be an important finding which can influence our thinking about the role of the CO2 increase caused by human actions. Figure 4 shows the auto and cross correlations of the entire data set. It is clear that the data exhibit some periodicity. To determine average time delays between relative temperature and CO2 for individual cycles we examine Figures 5–8, which also show numerical values around zero lag. The first two diagrams in Figure 5 are autocorrelations, and they also indicate certain periodicity within each cycle, though not as clearly as the entire data set. The third diagram shows the cross correlation between the two variables. The time delay can be read from the cross correlation, and for C1 it is less than one lag period but perhaps more than zero lag, due to the non-uniformity of the data. We can estimate it as less than half of one lag period. From Table 3, the average data sampling times for relative temperature and CO2 are 1703 and 1605 years, putting the absolute delay at around 800–850 years or less. Similar approximate estimates can be made for the other cycles. Reading from the C2 and C3 cross correlations (Figures 6 and 7 and Table 3), by the same consideration the delay is less than half a sampling period, which translates into 400–435 years on average, or less. For C4 (Figure 8 and Table 3) the delay appears to be of the order of one sampling period, around 1800 years. To get more precise approximations we would need more uniform data and finer resolution around the zero lag, where the cross correlation is at its maximum.
Note that in the individual cycle up and down parts this delay may differ from the average cycle level, as also indicated by the larger spread of correlation coefficients in Table 6. We can identify no specific pattern in these coefficients regarding up and down sub cycles having larger or smaller values. Overall, there is a significant CO2 delay across the total Vostok data C1234 compared to the relative temperature. For individual cycles a more detailed cross correlation analysis should be done, especially following data insertion; we made only very rough estimates above. For practical reasons, the most important figure to keep in mind is the current cycle C1 delay, which points to 800–850 years or less.
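The lag-reading procedure can be sketched as follows: locate the argmax of the cross correlation and convert that lag index to years via the average sampling time. The series here are synthetic, with "CO2" constructed as "temperature" delayed by two samples, so only the mechanics match the text.

```python
import numpy as np

# Synthetic test: "CO2" is "temperature" circularly delayed by 2 samples.
rng = np.random.default_rng(0)
n = 200
temp = np.sin(np.linspace(0, 8 * np.pi, n)) + 0.1 * rng.standard_normal(n)
co2 = np.roll(temp, 2)

# Cross-correlate the demeaned series; the argmax gives the lag in samples.
xcorr = np.correlate(co2 - co2.mean(), temp - temp.mean(), mode="full")
lag = int(np.argmax(xcorr) - (n - 1))

sampling_years = 1149             # C1234 average sampling time (Table 4)
print(lag, lag * sampling_years)  # 2 samples, i.e. about 2,300 years
```

The `n - 1` offset converts the index of NumPy's full-mode correlation output into a signed lag, so a positive value means the first series lags the second.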

Figure 4.

Total long term temperature and CO2 autocorrelations (left to right, above) and cross correlation (below), indicating inherent data periodicity.

Figure 5.

Cycle C1 temperature and CO2 autocorrelations (left to right, above) and cross correlation (below), indicating some periodicity within cycle C1 itself.

Figure 6.

Cycle C2 temperature and CO2 cross correlation indicating some periodicity within the cycle C2 itself.

Figure 7.

Cycle C3 temperature and CO2 cross correlation indicating some periodicity within the cycle C3 itself.

Figure 8.

Cycle C4 temperature and CO2 cross correlation indicating some periodicity within the cycle C4 itself.

We note at the end of this section that the way the various cycles are defined (Figure 1) also affects the analysis. In follow-up work we aim to repeat this analysis by defining cycles from minimum to minimum CO2 values. The choice of maximum or minimum values may assist in determining what triggers cycle reversals. We have early indications that dust might have a significant role in this reversal.


6. Cycle C1 CO2 frequency analysis

As an illustration of frequency analysis we examine the Cycle C1 CO2 data in detail. A similar approach can be used for other data, such as temperature, methane, oxygen and dust. The first concern is to make the C1 CO2 data more homogeneous, so that the sampling time can be more precise and useful.

6.1 Data filling

The original Cycle 1 Vostok data contain a total of 80 CO2 data points, very unevenly distributed in time. This poses issues for FFT or DFT harmonic analysis. Hence we corrected the situation somewhat by inserting a number of additional “data points”, using linear approximation between the original data points with the biggest time differences between neighboring points, generating a total of 128 original and inserted data points. In particular, the very first point we added to the original Vostok data is the current value of CO2, which we set at 350 ppm [25] for the year 1990, as an example. Current levels are estimated at over 400 ppm. Our CPE model can be used for a variety of sensitivity analyses by playing “what if” and processing the data through the CPE.

The next data point is the first original Vostok datum, 2362 years in the past. This will correspond to the 64 FFT harmonics described next. Note that the time period of the C1 CO2 data is 128,399 years (Table 2), the time difference between the oldest C1 datum and the current time (Figure 9).

Figure 9.

Cycle C1 CO2 content.

6.2 Sampling time

For the sampling time, we proceeded as follows. As stated in Section 2, the cycle duration is measured between the maximum (CO2 and temperature) points in each cycle. Hence we have an average of a bit over 1000 years sampling time for Cycle 1, i.e. T = 128,399/(128 − 1) ≈ 1011 years, which is used as the CO2 data sampling time. The FFT analysis on 128 total points produces 64 harmonics with the highest frequency fmax = fs/2 = 1/(2T) = 1/(2 × 1011 years) ≈ 0.000495 cycles per year. For the analysis we use T = 1.011, with the understanding that it is in thousands of years. Table 7 summarizes the Cycle 1 harmonic content ordered by amplitude, from the largest to the smallest, to identify each harmonic’s energy contribution. The first harmonic H1 is boldfaced. As shown in Section 7.1, for discretization purposes it is important to satisfy an additional sampling time requirement, given in inequality (7) below, which makes the resonant frequencies of the continuous and discrete models equivalent and in turn makes the discrete KFHO and KFHB models more precise. This increases the required sampling frequency and reduces the sampling period below 1011 years; we deal with it using the energy analysis described in Table 8. For other approaches to non-uniform data analysis see [26, 27], as well as the general literature on non-uniform and optimal Fourier analysis.

H No | Frequency f | Period 1/f | Phase | Amplitude | Cumulative %
0 | 0 | N/A | 0 | 231.0068 | 58.83917
1 | 0.007727498 | 129.408 | 0.754987 | 36.27814 | 68.07948
2 | 0.015454995 | 64.704 | 0.555383 | 15.87177 | 72.12214
4 | 0.03090999 | 32.352 | −1.11877 | 9.409128 | 74.51872
3 | 0.023182493 | 43.136 | 0.478204 | 6.815518 | 76.25468
7 | 0.054092483 | 18.48686 | 1.435168 | 5.576594 | 77.67508
10 | 0.077274975 | 12.9408 | 0.316939 | 5.098453 | 78.97369
5 | 0.038637488 | 25.8816 | −1.14182 | 4.847574 | 80.20841
12 | 0.09272997 | 10.784 | −0.20146 | 4.780349 | 81.426
6 | 0.046364985 | 21.568 | −2.02131 | 3.49376 | 82.31588
17 | 0.131367458 | 7.612235 | 0.471006 | 3.462987 | 83.19793
14 | 0.108184965 | 9.243429 | −0.58073 | 3.138693 | 83.99738
15 | 0.115912463 | 8.6272 | 0.489932 | 2.959198 | 84.75111
26 | 0.200914936 | 4.977231 | −0.08324 | 2.30261 | 85.3376
25 | 0.193187438 | 5.17632 | 0.527865 | 2.031371 | 85.85501
38 | 0.293644906 | 3.405474 | −0.48454 | 2.018039 | 86.36902
32 | 0.24727992 | 4.044 | −0.171 | 1.841643 | 86.8381
11 | 0.085002473 | 11.76436 | 0.041037 | 1.827369 | 87.30354
34 | 0.262734916 | 3.806118 | 0.371439 | 1.794662 | 87.76066
37 | 0.285917409 | 3.497514 | 0.402143 | 1.71814 | 88.19828
60 | 0.463649852 | 2.1568 | 0.050809 | 1.694368 | 88.62985
21 | 0.162277448 | 6.162286 | −1.80194 | 1.617602 | 89.04186
23 | 0.177732443 | 5.626435 | 0.231643 | 1.60276 | 89.4501
28 | 0.216369931 | 4.621714 | 0.236801 | 1.570401 | 89.85009
27 | 0.208642433 | 4.792889 | −0.21626 | 1.560695 | 90.24761

Table 7.

Summary of cycle 1 CO2 harmonic analysis.

Boldfaced entries correspond to the first harmonic H1.

| Highest frequency | Maximum frequency | Cumulative amplitude % | Total H’s | Sampling Ts < | Chosen Ts |
| --- | --- | --- | --- | --- | --- |
| 15 | 0.293645 | 86.369 | 15 | 1.083996 | 1.011 |
| 15 | 0.293645 | 88.1983 | 19 | 1.083996 | 1.011 |
| 20 | 0.46365 | 90.2476 | 24 | **0.686531** | 0.650 |
| 63 | 0.486832 | 96.6477 | 44 | **0.653839** | 0.650 |

Table 8.

Cycle 1 CO2 amplitude and sampling time.

Boldfaced entries point to the lowest limit on Sampling Time Ts.

6.3 Amplitude and energy analysis

Table 7 shows that the average signal value (the DC component, H0) is 231, and that the energy content of the remaining harmonics (reflected in their amplitudes) follows the highest-to-lowest amplitude ordering H1, H2, H4, H3, H7, H10, H5, and so on.

The cumulative column in Table 7 gives the amplitude sum percentage of successive harmonics relative to the sum of all harmonic amplitudes. For example, to reach 90%+ of the total amplitude we need a total of 25 harmonics, including H0; the last harmonic in this list is H27. For a somewhat smaller cumulative amplitude, say 88%+, we only need 19 harmonics, with H37 as the last one. Table 8 further summarizes the energy analysis. For example, the chosen sampling time of 1011 years satisfies the requirement in inequality (7) below all the way to a cumulative amplitude of 88.1983%. To reach the level of 90.2476%, a much shorter sampling time would be required, i.e. 650 years instead of 1011. To satisfy this requirement we would need to almost double the total number of data points and use more harmonics. Fortunately, not all the amplitudes are at their maxima all the time, so the above percentages are only very conservative lower bounds. In reality we can calculate the errors by MAPE (Mean Absolute Percentage Error), and these are much more realistic, on the order of 3–5% off of 100%, per Table 9. Also, the energy of the first seven harmonics is on the order of 99%+, since energy is computed from the squares of the amplitudes. This analysis illustrates the issues related to the non-uniformity of the Vostok data, as well as the number of harmonics to be used based on the energy analysis. Our aim is to show that a properly designed KFHB-based CPE can deal with these issues effectively. In Section 7 we illustrate the various points raised here using the first 7 harmonics, H1, H2, H4, H3, H7, H10, H5.
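The harmonic-selection and error computations above can be sketched as follows; the toy amplitude vector is hypothetical, shaped only loosely like Table 7:

```python
import numpy as np

def harmonics_for_threshold(amps, threshold=0.90):
    """Smallest set of harmonics (taken in descending amplitude order,
    H0 included) whose cumulative amplitude reaches the given fraction
    of the total amplitude sum."""
    order = np.argsort(amps)[::-1]
    cum = np.cumsum(amps[order]) / amps.sum()
    k = int(np.searchsorted(cum, threshold)) + 1
    return order[:k], cum[k - 1]

def mape(actual, approx):
    """Mean absolute percentage error, as used in Table 9."""
    actual, approx = np.asarray(actual), np.asarray(approx)
    return float(np.mean(np.abs((actual - approx) / actual)))

# toy amplitudes: index 0 is H0 (DC), then H1, H2, ...
amps = np.array([231.0, 36.3, 15.9, 6.8, 9.4, 4.8, 3.5, 5.6])
sel, reached = harmonics_for_threshold(amps, 0.90)
print(len(sel))  # 3 harmonics (H0, H1, H2) already pass 90% here
```

Because the DC term dominates the amplitude sum, a small set of harmonics reaches a high cumulative percentage; MAPE on the reconstructed signal is the more realistic accuracy measure, as noted above.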

| MAPE \ Harmonics | 1 | 1 + 2 | 1 + 2 + 4 | 1 + 2 + 4 + 3 | 1 + 2 + 4 + 3 + 7 | 1 + 2 + 4 + 3 + 7 + 10 | 1 + 2 + 4 + 3 + 7 + 10 + 5 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| y | 0.058 | 0.043 | 0.038 | 0.036 | 0.034 | 0.033 | 0.031 |
| x Corrected | 0.058 | 0.043 | 0.037 | 0.036 | 0.034 | 0.033 | 0.030 |
| x Predicted | 0.056 | 0.044 | 0.039 | 0.037 | 0.035 | 0.034 | 0.033 |

Table 9.

Kalman filter harmonic bank estimation errors.

7. Kalman filter harmonic bank

In the illustrative example in this paper we focus on the C1 CO2 harmonic analysis and the corresponding KFHO for the first harmonic H1 in Table 7, the strongest amplitude-wise. The same analysis can be repeated for any harmonic, as we elaborate in the sections below. We proceed with the development of the KFHB, which models a series of harmonic components obtained by the energy analysis and approximates the original Vostok data to a reasonable degree (say 90%+ cumulative amplitude percentage). The first step is to define a general Markov model [28] for a generic harmonic oscillator in discrete time, described in the following section. Other authors have also used the Kalman filter approach to analyze ice core data, but with a different emphasis [21, 22], as mentioned in the Introduction.

7.1 Discrete time harmonic oscillator

To start, we introduce a continuous model of a harmonic oscillator described in [29, 30]:

$$
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ -\omega_0^2 & 0 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \tag{1}
$$

where $\omega_0$ is the radial frequency, $\omega_0 = 2\pi f_0$, with $f_0$ the frequency in Hz. The solution to the above state equation is a sine or cosine function, depending on the initial conditions. The state variables $x_1$ and $x_2$ are the “position” and “velocity” of whatever variable we are dealing with, such as CO2 content and its rate of change. The first level of continuous-to-discrete approximation is based on a one-point derivative approximation and sampling time $T$, whereby we obtain:

$$
\begin{bmatrix} x_1(t+1) \\ x_2(t+1) \end{bmatrix} =
\begin{bmatrix} 1 & T \\ -\omega_0^2 T & 1 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} \tag{2}
$$

with “$t+1$” standing for $(t+1)T$, where $T$ is dropped for simplicity. To the above discretized model we can also add model uncertainty via additional stochastic zero-mean Gaussian white inputs $r_1$ and $r_2$, with variances $V_1$ and $V_2$ which can be fine-tuned. Hence we have:

$$
\begin{bmatrix} x_1(t+1) \\ x_2(t+1) \end{bmatrix} =
\begin{bmatrix} 1 & T \\ -\omega_0^2 T & 1 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} +
\begin{bmatrix} r_1(t) \\ r_2(t) \end{bmatrix} \tag{3}
$$

The initial conditions are given as a transposed vector $[x_1(0) \;\; x_2(0)]^T$ with:

$$
x_1(0) = A_0 \cos\theta_0, \qquad x_2(0) = A_0 \sin\theta_0 \tag{4}
$$

where $A_0$ and $\theta_0$ are the amplitude and phase of the harmonic $\omega_0$. A better discrete approximation can be obtained by a two-point derivative approximation, whereby we obtain:

$$
x_1(t+1) = a_0\, x_1(t) - x_1(t-1) \tag{5}
$$

where the parameter $a_0$ is calculated to match the discrete and continuous resonant frequencies:

$$
a_0 = 2\cos(\omega_0 T) \tag{6}
$$

It is important to note that in this case one needs to choose the sampling time $T = 1/f$ to satisfy:

$$
f > \pi f_0, \qquad \omega_0 = 2\pi f_0 \tag{7}
$$

which is higher than the standard Nyquist frequency $f > 2 f_0$. We show an example of this in the next section. Eq. (5) produces a cosine function with the proper initial condition $x_1(0)$ in (4). The corresponding equation for $x_2(t+1)$ is equivalent to (5), with the initial condition $x_2(0)$ in (4), to produce a sine function. Instead of proceeding with the two-state model (3), we can produce another two-state model using (5):

$$
\begin{bmatrix} x_1(t+1) \\ x_1(t) \end{bmatrix} =
\begin{bmatrix} a_0 & -1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_1(t-1) \end{bmatrix} +
\begin{bmatrix} w_1(t) \\ w_2(t) \end{bmatrix} \tag{8}
$$

$$
y_0(t) = x_1(t) + v_0(t) \tag{9}
$$

where $w_1(t)$ and $w_2(t)$ are zero-mean Gaussian white inputs with the joint symmetric covariance matrix $Q = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{12} & Q_{22} \end{bmatrix}$ [2]. The measurement $y_0(t)$ of $x_1(t)$, with the measurement error $v_0(t)$, is simply defined in (9) and corresponds to the harmonic $\omega_0$. The measurement error $v_0(t)$ is a zero-mean stochastic process with variance $R_0$, assumed constant across all time samples, which is reasonable unless there is a compelling reason to make it time varying. The model (8) and (9) is our starting point for the KFHO described next. The same consideration holds for any harmonic $\omega_i$.
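The noise-free part of the two-point recursion (5)–(6) reproduces a sampled cosine exactly, provided the sampling condition (7) holds; a minimal sketch, using the C1 first-harmonic frequency and our chosen T as example values:

```python
import math

def cosine_recursion(f0, T, n):
    """Two-point recursion x(t+1) = a0*x(t) - x(t-1) with a0 = 2*cos(w0*T),
    Eqs. (5)-(6), seeded with two exact cosine samples."""
    w0 = 2.0 * math.pi * f0
    a0 = 2.0 * math.cos(w0 * T)
    x = [math.cos(0.0), math.cos(w0 * T)]   # cosine initial conditions
    for _ in range(n - 2):
        x.append(a0 * x[-1] - x[-2])
    return x

f0, T = 0.0077275, 1.011        # H1 frequency (1/kyr) and T (kyr) from Section 6
assert 1.0 / T > math.pi * f0   # sampling condition, inequality (7)
x = cosine_recursion(f0, T, 128)
w0 = 2.0 * math.pi * f0
print(abs(x[50] - math.cos(w0 * 50 * T)) < 1e-9)
```

The recursion follows from the trigonometric identity cos(ω(t+1)T) = 2cos(ωT)cos(ωtT) − cos(ω(t−1)T), which is why it matches the closed-form cosine to machine precision.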

7.2 Kalman filter harmonic oscillator

To facilitate the next step, we rewrite (8) as:

$$
\begin{bmatrix} x_1(t+1) \\ x_2(t+1) \end{bmatrix} =
\begin{bmatrix} a_0 & -1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} +
\begin{bmatrix} w_1(t) \\ w_2(t) \end{bmatrix} \tag{10}
$$

where $x_2(t+1)$ is just auxiliary notation for $x_1(t)$ and is not the $x_2(t)$ in (3). The standard KF equations in this case produce [2]:

Prediction Step:

$$
\hat{x}_1(t+1/t) = a_0\, \hat{x}_1(t/t) - \hat{x}_2(t/t) \tag{11}
$$

$$
\hat{x}_2(t+1/t) = \hat{x}_1(t/t) \tag{12}
$$

Correction Step:

$$
\hat{x}_1(t+1/t+1) = \hat{x}_1(t+1/t) + K_{11}(t+1)\, \tilde{y}_0(t+1) \tag{13}
$$

$$
\hat{x}_2(t+1/t+1) = \hat{x}_2(t+1/t) + K_{21}(t+1)\, \tilde{y}_0(t+1) \tag{14}
$$

In (13) and (14) above, $\tilde{y}_0(t) = y_0(t) - \hat{x}_1(t/t-1)$ is the innovation sequence, and the filter gains are:

$$
K_{11}(t) = P_{11}(t/t-1)\, /\, [P_{11}(t/t-1) + R] \tag{15}
$$

$$
K_{21}(t) = P_{12}(t/t-1)\, /\, [P_{11}(t/t-1) + R] \tag{16}
$$

The corresponding prediction-step and correction-step variances and covariances of the estimation errors $\tilde{x}_1(t/t-1) = x_1(t) - \hat{x}_1(t/t-1)$ and $\tilde{x}_1(t/t) = x_1(t) - \hat{x}_1(t/t)$, and similarly for the state $x_2(t)$, are:

Prediction Step:

$$
p_{11}(t+1/t) = a_0^2\, p_{11}(t/t) - 2 a_0\, p_{12}(t/t) + p_{22}(t/t) + Q_{11} \tag{17}
$$

$$
p_{12}(t+1/t) = a_0\, p_{11}(t/t) - p_{12}(t/t) + Q_{12} \tag{18}
$$

$$
p_{22}(t+1/t) = p_{11}(t/t) + Q_{22} \tag{19}
$$

Correction Step:

$$
p_{11}(t+1/t+1) = [1 - K_{11}(t+1)]\, p_{11}(t+1/t) \tag{20}
$$

$$
p_{12}(t+1/t+1) = [1 - K_{11}(t+1)]\, p_{12}(t+1/t) \tag{21}
$$

$$
p_{22}(t+1/t+1) = -K_{21}(t+1)\, p_{12}(t+1/t) + p_{22}(t+1/t) \tag{22}
$$

The initial conditions for the above equations are:

$$
p_{11}(1/0), \quad p_{12}(1/0), \quad p_{22}(1/0) \tag{23}
$$

and they are determined by the initial Kalman filter design. One way to determine them is to use the matrix $Q$ values:

$$
p_{11}(1/0) = Q_{11}, \quad p_{12}(1/0) = Q_{12}, \quad p_{22}(1/0) = Q_{22} \tag{24}
$$

Here the values of $Q_{11}$ and $Q_{22}$ are assumed to be of the same order because they represent the uncertainty in modeling $x_1(t)$ and $x_2(t)$, which are just one-step-apart values of the same state. A simple correlation analysis of $x_1(t)$ and $x_2(t)$ indicates that $Q_{12}$ is on the order of $a_0 Q_{11}/4$. Figure 10 below shows a block diagram of a single-$\omega_0$ KFHO. Here we have the total state vector corresponding to a specific harmonic $\omega_0$ as:

Figure 10.

Kalman filter harmonic oscillator.

$$
\hat{x}_0(t/t-1) = [\hat{x}_1(t/t-1) \;\; \hat{x}_2(t/t-1)]^T \tag{25}
$$
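The single-harmonic filter described by Eqs. (11)–(24) can be sketched compactly; this is an illustrative implementation under our assumptions (Q and R values are placeholders to be tuned, and the tracked signal is a synthetic noisy H1-like cosine, not the Vostok series):

```python
import math, random

class KFHO:
    """Kalman filter harmonic oscillator for one harmonic, Eqs. (11)-(24)."""
    def __init__(self, f0, T, Q11, Q12, Q22, R, x1_0, x2_0):
        self.a0 = 2.0 * math.cos(2.0 * math.pi * f0 * T)   # Eq. (6)
        self.Q11, self.Q12, self.Q22, self.R = Q11, Q12, Q22, R
        self.x1, self.x2 = x1_0, x2_0                      # corrected estimates
        self.p11, self.p12, self.p22 = Q11, Q12, Q22       # Eq. (24)

    def step(self, y):
        a0 = self.a0
        # prediction step: Eqs. (11), (12) and (17)-(19)
        x1p = a0 * self.x1 - self.x2
        x2p = self.x1
        m11 = a0 * a0 * self.p11 - 2.0 * a0 * self.p12 + self.p22 + self.Q11
        m12 = a0 * self.p11 - self.p12 + self.Q12
        m22 = self.p11 + self.Q22
        # gains: Eqs. (15), (16)
        k11 = m11 / (m11 + self.R)
        k21 = m12 / (m11 + self.R)
        # correction step: Eqs. (13), (14) and (20)-(22)
        innov = y - x1p
        self.x1 = x1p + k11 * innov
        self.x2 = x2p + k21 * innov
        self.p11 = (1.0 - k11) * m11
        self.p12 = (1.0 - k11) * m12
        self.p22 = -k21 * m12 + m22
        return x1p, self.x1, k11, k21

# track a noisy H1-like cosine (amplitude and Q, R loosely from Tables 7 and 11)
random.seed(0)
f0, T = 0.0077275, 1.011
w0 = 2.0 * math.pi * f0
kf = KFHO(f0, T, Q11=100.0, Q12=49.759, Q22=100.0, R=25.0,
          x1_0=36.3, x2_0=36.3 * math.cos(-w0 * T))
for t in range(1, 60):
    y = 36.3 * math.cos(w0 * t * T) + random.gauss(0.0, 2.0)
    x1_pred, x1_corr, k11, k21 = kf.step(y)
```

Here `x2_0` is initialized as the one-step-delayed cosine sample, consistent with the auxiliary-state interpretation of model (10).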

7.3 Kalman filter harmonic bank

Once we define the single-harmonic KF as in Figure 10, we can proceed to construct a KFHB as an assemblage of individual harmonic filters in parallel, with the combined outputs forming the original signal (data). We assume a set of harmonics $\omega = (\omega_1, \omega_2, \omega_3, \ldots, \omega_N)$ and for each $\omega_i$, $i = 1, 2, \ldots, N$, we define a separate KFHO as described above. Note that the harmonics are related to each other via:

$$
\omega_i = i\,\omega_1, \quad i = 1, 2, \ldots, N \tag{26}
$$

and the total signal output (such as the Vostok data) is the sum of the individual harmonic $\omega_i = 2\pi f_i$ outputs:

$$
y(t) = \sum_{i=1}^{N} y_i(t) \tag{27}
$$

Figure 11 indicates this arrangement with KFHO where we have:

Figure 11.

Kalman filter harmonic bank.

$$
\hat{x}(t/t-1) = \sum_{i=1}^{N} \hat{x}_i(t/t-1) \tag{28}
$$

representing the total KFHB predicted state estimate. These can be used for short- and long-term prediction of CO2, temperature or other variables of interest. We can similarly define a set of corrected state estimates, which are more precise than the predicted ones once the real data are available:

$$
\hat{x}(t/t) = \sum_{i=1}^{N} \hat{x}_i(t/t) \tag{29}
$$

The general idea here, as compared to a simple sum of harmonic cosine signals (following the inverse Fourier transform) as in (27) with:

$$
y(t) = \sum_{i=1}^{N} y_i(t) = \sum_{i=1}^{N} A_i \cos(\omega_i t + \theta_i) \tag{30}
$$

is to accommodate the stochasticity of the underlying Vostok measurement data, as well as the simple linear structure of the Kalman filter harmonic oscillator and its ability to predict future signal values in a probabilistic (more realistic) setting.
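As a deterministic baseline for comparison with the KFHB, Eq. (30) can be evaluated directly with the seven strongest C1 CO2 harmonics; the amplitude, frequency and phase values below are transcribed from Table 7 (frequencies in cycles per thousand years):

```python
import math

H0 = 231.0068  # DC level from Table 7
# (amplitude, frequency in cycles/kyr, phase) for H1, H2, H4, H3, H7, H10, H5
HARMONICS = [
    (36.27814, 0.00772750, 0.754987),
    (15.87177, 0.01545500, 0.555383),
    (9.409128, 0.03090999, -1.118770),
    (6.815518, 0.02318249, 0.478204),
    (5.576594, 0.05409248, 1.435168),
    (5.098453, 0.07727498, 0.316939),
    (4.847574, 0.03863749, -1.141820),
]

def co2_approx(t_kyr):
    """Seven-harmonic reconstruction of C1 CO2, Eq. (30)."""
    return H0 + sum(A * math.cos(2.0 * math.pi * f * t_kyr + th)
                    for A, f, th in HARMONICS)

print(round(co2_approx(0.0), 1))  # ≈ 288.7 at t = 0
```

Evaluating the individual terms at t = 0 reproduces the per-harmonic initial values listed later in Table 12 (26.42, 13.49, 4.11, 6.05, 0.75, 4.84, 2.02), which serves as a useful cross-check of the transcription.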

8. Vostok cycle C1 CO2 Kalman filter harmonic oscillators

8.1 First harmonic H1

The harmonic analysis of the Cycle C1 Vostok data is presented in Section 5. In this section we proceed to build a specific first-harmonic H1 KFHO, as an example of a CPE setup, with the initial conditions:

$$
\hat{x}_1(0/0) = A_1 \cos\theta_1, \qquad \hat{x}_2(0/0) = 2\pi f_1 A_1 \sin\theta_1 \tag{31}
$$

where A1 and θ1 are the first harmonic amplitude and phase. In Table 10 we show the H1 KFHO gains from Eqs. (15), (16) for the first seven time samples. The boldfaced values indicate the moment (after 5 time samples) when the gains become constant. These constant gains can be used in the KFHO design for simplicity. Table 11 summarizes the various filter parameters. The results for the predicted and corrected state estimates for 128 time samples are shown in Figure 12. The reference harmonic H1 data is calculated using the standard cosine function and serves as the “measurement” as in (9) (Table 12).

| k1(+) | k2(+) |
| --- | --- |
| N/A | N/A |
| 0.96149 | 0.48075 |
| 0.93616 | 0.29119 |
| 0.93633 | 0.29926 |
| 0.93578 | 0.30137 |
| **0.9357** | **0.30151** |
| 0.9357 | 0.30151 |
| 0.9357 | 0.30151 |

Table 10.

Filter gains.

Boldfaced entries indicate constant Kalman Filter gains.

| Parameter | Value |
| --- | --- |
| Q11 | 100 |
| Q22 | 100 |
| Q12 | 49.759 |
| R | 25 |
| a0 | −1.99759 |

Table 11.

Parameters.

Figure 12.

Complete data, H1 harmonic, predicted and corrected states.

| x1(0) | x2(0) |
| --- | --- |
| 231.0068 | 0 |
| 26.42065 | −1.20707 |
| 13.48622 | −0.81265 |
| 4.1098 | 1.643843 |
| 6.050971 | −0.45685 |
| 0.754028 | −1.87793 |
| 4.844518 | −0.7715 |
| 2.016297 | 1.070199 |

Table 12.

Initial values.
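The convergence of the gains to constants, as reported in Table 10, can be verified by iterating the covariance and gain recursions (15)–(22) on their own, without data. The sketch below uses the Table 11 noise parameters but computes a0 from Eq. (6) with the C1 H1 frequency; the exact steady values depend on initialization details, so only the convergence itself should be read from it:

```python
import math

# Table 11 noise parameters; a0 from Eq. (6) with f1 = 0.0077275/kyr, T = 1.011 kyr
a0 = 2.0 * math.cos(2.0 * math.pi * 0.0077275 * 1.011)
R = 25.0
Q11, Q12, Q22 = 100.0, 49.759, 100.0
p11, p12, p22 = Q11, Q12, Q22      # initialization per Eq. (24)
gains = []
for _ in range(30):
    m11 = a0 * a0 * p11 - 2.0 * a0 * p12 + p22 + Q11   # Eq. (17)
    m12 = a0 * p11 - p12 + Q12                          # Eq. (18)
    m22 = p11 + Q22                                     # Eq. (19)
    k11 = m11 / (m11 + R)                               # Eq. (15)
    k21 = m12 / (m11 + R)                               # Eq. (16)
    gains.append((k11, k21))
    p11 = (1.0 - k11) * m11                             # Eq. (20)
    p12 = (1.0 - k11) * m12                             # Eq. (21)
    p22 = -k21 * m12 + m22                              # Eq. (22)
print(gains[-1])
```

The gains settle to constants after a handful of iterations, of the same order as the boldfaced Table 10 values, which is why constant-gain KFHOs are an acceptable simplification here.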

8.2 Other harmonics (H2, H4, H3, H7, H10, H5)

The other harmonic calculations are done likewise, with the specific amplitudes and phases used, and we will not repeat the obvious details here. The initial conditions are calculated in the way given in (31), using the different amplitude, frequency and phase values from Table 7. Once calculated, the initial conditions drive the KFHO for each harmonic, with the various KFHO parameters similar to Tables 11 and 12 above for H1, and with the other harmonic signals as in Figure 12 but with different frequencies and phases. Next, all of the predicted and corrected x1 and x2 estimates are combined per the KFHB output in Eqs. (28) and (29). The resulting values are compared to the known original and inserted Vostok data, and the corresponding MAPE values are generated to check the performance of the method. Table 9 summarizes the resulting MAPE for all 7 harmonics. Note that the results are based on constant filter gains in the KFHB, similar to the values in Table 10, which are equivalent to the optimal time-varying gains to at least the 2nd decimal point. The parameters in Table 11 were not optimized in any way and can be tuned further based on the errors obtained; this may be done by adjusting the Q values per specific harmonic amplitude. We made the simple assumption that the model standard deviations and variances are on the order of 100 for both states and all harmonics, with the cross variance on the order of a0Q11/4, as noted earlier. The table also shows what we expected, namely that the corrected errors are smaller than the predicted ones.

Note that the values for y are just the specific harmonic amplitudes calculated from the maximum amplitude and phase angle, per Eq. (30). The prediction errors correspond to one sampling interval, in our case 1011 years, which is large enough for long-term calculations and prediction. If we choose 2 or more sampling intervals, the prediction errors will obviously increase.

As for the corrected errors, they correspond to the situation where we have a specific CO2 value to use for the KFHB correction. For example, if we choose to consider a current CO2 value we can append it to the beginning of the Vostok data, and we actually did that by adding a 350 ppm CO2 value for ‘now’ in front of the ‘newest’ Vostok data point, which is more than 2000 years old. The CPE as we envision it allows for all sorts of sensitivity analyses and ‘what if’ scenarios for all the variables (CO2, temperature, methane, and so on), for EPICA and other ice core measurements, both short term (100+ years) and long term (1000+ years). To give a visual impression of the Vostok data approximated by the first seven harmonics, see Figure 13. The approximation by the 7 strongest-amplitude harmonics produces a very smoothed version of the Vostok data (in blue) without capturing the very abrupt changes on a smaller time scale present in the original Vostok data (in red). This can be improved by adding more harmonics. The advantage our CPE offers is its flexibility to treat both naturally induced values of CO2 and other variables as well as modern human-produced effects attributed to global warming.

Figure 13.

Comparison of Vostok data vs. approximation with 7 harmonics.

9. Further considerations

Other cycle contributions can be examined as a basis for KFHB-based machine learning training and testing. This is important due to the different durations of the various cycles and the uncertainty about the duration of the ongoing Cycle 1. Older cycles may carry less useful data for predictions about the current cycle, especially in view of current global warming effects due to human activities not present in the climatic past. On the other hand, the old data carry useful short- and long-term information which can be used judiciously in KFHB fine tuning, in particular due to its inherent periodicity. Hence multicycle analysis can be very useful for a machine learning implementation of our CPE based on the KFHB idea. A minimal choice of harmonics will allow us to devise a reasonably simple machine learning algorithm for training, testing and prediction based on the KFHB. For example, the longer data set C234 can be used as a training set, as can the shorter C23, in order to predict the completion of C1, calculating the prediction error E1,234 (the error in predicting C1 given cycles C2, C3 and C4) or E1,23 (predicting C1 given only C2 and C3 data). This applies to both temperature and CO2, and can be repeated for other components in the Vostok data set, such as methane, oxygen and insolation, and similarly for the European EPICA data set as well as the set of cycles indicated in Milankovich theory [31]. Hence, once we predict C1 using C234 or C23, we obtain the errors E1,234 and E1,23. Intuitively we expect E1,23 > E1,234, i.e. training on a larger data set would ideally produce smaller test and prediction errors. This has to be confirmed by further analysis, in particular due to the different lengths of the various cycles.
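The E1,234-versus-E1,23 comparison can be sketched with a deliberately simple stand-in predictor (averaging the training cycles after resampling them to a common length); the cycle data here are synthetic and the averaging predictor is our placeholder, not the KFHB itself:

```python
import numpy as np

def cycle_prediction_error(train_cycles, test_cycle):
    """MAPE of a naive prediction formed by averaging the training
    cycles, each linearly resampled to the test cycle's length."""
    n = len(test_cycle)
    s = np.linspace(0.0, 1.0, n)
    resampled = [np.interp(s, np.linspace(0.0, 1.0, len(c)), c)
                 for c in train_cycles]
    prediction = np.mean(resampled, axis=0)
    return float(np.mean(np.abs((test_cycle - prediction) / test_cycle)))

# toy cycles of unequal length sharing one underlying shape
grid = lambda n: np.linspace(0.0, 2.0 * np.pi, n)
c1 = 280.0 + 40.0 * np.cos(grid(100))   # "current" cycle to predict
c2 = 284.0 + 36.0 * np.cos(grid(90))
c3 = 277.0 + 43.0 * np.cos(grid(110))
c4 = 279.0 + 41.0 * np.cos(grid(95))
e_1_23 = cycle_prediction_error([c2, c3], c1)
e_1_234 = cycle_prediction_error([c2, c3, c4], c1)
print(e_1_234 < e_1_23)
```

In this constructed example the three-cycle average happens to match c1's level and amplitude, so E1,234 < E1,23 as the intuition above suggests; with real, unequally shaped cycles the ordering has to be checked rather than assumed.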

Besides our aim of producing an effective CPE methodology, the harmonic analysis spurred a variety of related thinking and ideas which we also summarize in this paper. Some further ones follow. Climate change on the time scales of the ice cores has been considered consistent with the IPCC’s hypotheses [11], focusing on permanent greenhouse gases, particularly CO2, methane and nitrous oxide. This has included the role of increasing water vapor, but viewed only as a secondary amplifying factor. The effect of temperature on water vapor pressure is shown by the Clausius-Clapeyron equation, dictating an exponential increase in vapor pressure as temperature rises. But the role of water vapor in modern global warming is considered in GCMs only as a derivative of primary warming by permanent greenhouse gases. This may be in error, as modern irrigation has been adding an extra 4% of water to land surfaces since 1960. It is possible to estimate the water vapor content of atmospheres of different eras from temperature data. We have also hypothesized a positive forcing from irrigation water [32, 33], in addition to other primary sources of warming such as the Milankovich astronomical cycles. This may prove a more reliable means of correlation, using the link already established with water vapor, which is responsible for more than 80% of the heating of air.

One feature of some of the ice core analyses is the irregular but rising level of dust as the planet became colder (Figure 1), possibly absorbing rather than scattering solar radiation. From frequency analysis, a marked impulse effect can be recognized, given that the peak of the dust samples in ice cores clearly coincides with the commencement of the interglacials, suggesting a role in initiating the warming process. Dust-climate feedbacks have recently been highlighted as having a role in the final stages of the glacial cooling process [34]; but we have surmised the opposite: that dust on the surface of snow and ice has decreased the albedo, capturing more radiant heat from the Sun. If true, a process can be proposed in which a decreasing albedo effect from widespread dust in very cold, dry conditions (similar to the Antarctic now) initiates warming that is accentuated by increasing water vapor. The rapid upward change of temperature in the interglacial would be directly consistent with the exponential increase in water vapor with temperature. In that case, the release of CO2 could properly be seen as an effect, under Henry’s Law [35], rather than a cause.

Consequently, the climate sensitivity for CO2 may be overestimated. It would be of interest to know the scale of wind velocities in the glacials, since the inertial force of wind motion is opposed to gravity in the case of dust suspension and its carriage to high-albedo surfaces. Dust storms are most prominent when dust particles are dry and disaggregated. Furthermore, we have proposed [33, 36] that the atmosphere can store an order of magnitude more thermal energy because of the additional degree of freedom in vertical motion, that is, the motion involved in circulating air in anticyclones and cyclones. In a tropical cyclone, condensing water vapor is considered to provide the energy driving the cyclonic motion, using heat derived from the surface of the ocean. This follows from our finding [12] that configurational entropy (the inverse of free energy) is a logarithmic function of the physical action, a scalar property related to angular momentum but including the dimensionless angular motion [10]. If confirmed, this could mean that weather extremes, such as very hot days with greater fire risk, could be caused by collisions between anticyclones, with the extra thermal energy released as heat as the laminar flow of air becomes turbulent.


10. Conclusion

In this paper we have analyzed the Vostok ice core data using (i) time correlations, (ii) harmonic analysis, and (iii) amplitude and energy considerations, and proposed (iv) a general prediction approach using the KFHB methodology. In particular, we focused on Cycle 1 of the CO2 data in the frequency domain as a representative example. The general approach is to split the Vostok data set into 4 smaller sets, per the climate periodicity indicated in the data. The outcome is a choice of a set of high-energy harmonics, for all cycles and any of their combinations, for designing a CPE based on the KFHB, which is a linear combination of several individual KFHOs, for effective data prediction. This can be incorporated into a practical machine learning methodology for training, testing and data prediction using the collected Vostok or other available climate data sets. We believe that our CPE-based approach offers advantages in its simplicity and in its short- and long-term prediction abilities via the KFHB, which can produce optimal results in the context of a stochastic data set. Several issues remain to be solved, in particular (i) the uneven cycle lengths and (ii) a better approach to the non-uniformity of ice core data; we are addressing both in our ongoing work. Our analysis also seeks evidence regarding causes and effects in climate science, an essential requirement for more certainty in weather and climate predictions. We consider that current GCMs have considerable uncertainties in such predictions.

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Migdat Hodzic and Ivan Kennedy (January 4th 2021). Kalman Filter Harmonic Bank for Vostok Ice Core Data Analysis and Climate Predictions. In: Glaciers and the Polar Environment, Masaki Kanao, Danilo Godone and Niccolò Dematteis (Eds.), IntechOpen. DOI: 10.5772/intechopen.94263.
