Open access peer-reviewed chapter

Predictability in Deterministic Dynamical Systems with Application to Weather Forecasting and Climate Modelling

Written By

Sergei Soldatenko and Rafael Yusupov

Submitted: 23 May 2016 Reviewed: 07 November 2016 Published: 15 March 2017

DOI: 10.5772/66752

From the Edited Volume

Dynamical Systems - Analytical and Computational Techniques

Edited by Mahmut Reyhanoglu


Abstract

The climate system, consisting of the atmosphere, ocean, cryosphere, land and biota, is considered as a complex adaptive dynamical system along with its essential physical properties. Since the climate system is a nonlinear dissipative dynamical system that possesses a global attractor and its dynamics on the attractor are chaotic, the prediction of weather and climate change has a finite time horizon. There are two kinds of predictability of the climate system: one is generated by uncertainties in the initial conditions (predictability of the first kind) and the other is produced by uncertainties in the parameters that describe the external forcing (predictability of the second kind). Using the concept of the ‘perfect’ climate model, the two kinds of predictability are considered from the standpoint of the mathematical theory of climate.

Keywords

  • climate system
  • deterministic chaos
  • predictability
  • stability

1. Introduction

High‐complexity computational models that simulate the earth's climate system (ECS) have earned well‐deserved recognition as the indispensable and primary instrument for numerical weather prediction (NWP) as well as for the study of climate change and variability caused by both natural processes and human activities [1–4]. In spite of the dramatic progress achieved over the past few decades in weather forecasting and climate simulation, thanks to advances in computing hardware and algorithms and to a substantial increase in the volume of climatological data, contemporary computational climate models can reconstruct the real world only with a certain degree of validity [3]. There are several major sources of discrepancy between climate model simulation results and reality. First of all, climate models remain an idealized mathematical abstraction of a real physical system, namely the ECS. These models ignore some physical, dynamical and chemical processes or, at least, represent them in a simplified fashion. As a result, various physical simplifications in the formulation of climate models substantially influence their adequacy [5]. Second, NWP and climate simulation are mathematically an initial‐value (Cauchy) and/or a boundary‐value (Dirichlet or Neumann) problem, which is solved numerically using a finite‐difference, spectral or another appropriate method. Consequently, uncertainties in the initial and boundary conditions, uncertainties in the climate model parameters and external forcing, and approximation, truncation and round‐off errors lead to distinctions between the model output and the observed real state of the ECS. Third, let us suppose that we have a ‘perfect’ model of the ECS, meaning that the governing equations are known exactly and can be solved. However, even in this hypothetically ideal case, the ability of climate models to predict the future remains limited.
This can be explained by the fact that the atmosphere, the most rapidly changing component of the ECS, is strongly nonlinear and exhibits irregular (chaotic) spatio‐temporal oscillations on all scales, ranging from millimetres and seconds (turbulent fluctuations) to thousands of kilometres and several years (climate variability). This phenomenon, known as deterministic chaos, was first discovered by Lorenz [6]. The chaotic nature of the atmosphere significantly limits our ability to successfully predict the weather and climate, since the predicted trajectory of the ECS is unstable with respect to infinitesimal errors in both the initial conditions and the external forcing [7]. Even with a perfect atmospheric model and accurate initial conditions, we cannot predict the weather beyond approximately two weeks.
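The sensitivity just described is easy to demonstrate numerically. The sketch below (an illustration, not part of the chapter's models: the classical Lorenz‐63 parameter values and a crude forward‐Euler integrator are assumptions made for brevity) follows two trajectories whose initial states differ by only 10^-9 and measures their separation:

```python
# Illustrative sketch: sensitive dependence on initial conditions in the
# Lorenz-63 system. Parameters sigma=10, rho=28, beta=8/3 (the classical
# chaotic regime) and forward-Euler time stepping are assumptions for brevity.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations one forward-Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def separation(n_steps, eps=1e-9):
    """Euclidean distance between two initially close trajectories."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + eps, 1.0, 1.0)
    for _ in range(n_steps):
        a, b = lorenz_step(a), lorenz_step(b)
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

early, late = separation(100), separation(2000)
```

After one model time unit (100 steps) the gap is still microscopic; after twenty units it has grown by many orders of magnitude, which is exactly the loss of first‐kind predictability discussed above.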

For further discussion, we need to clarify that the terms ‘weather’ and ‘climate’ have different meanings. Weather is defined as the daily condition of the atmosphere in terms of such atmospheric variables as temperature, humidity, wind direction and velocity, surface pressure, cloud cover and precipitation. In turn, climate represents an ensemble of states traversed by the climate system over a sufficiently long temporal interval (about 30 years, according to the World Meteorological Organization). Here, the ensemble includes not only a set of system states but also the probability measure defined on this set. Therefore, climate, roughly speaking, can be considered as the ‘average’ weather, in terms of mean and variance, in a certain geographical location over many years.

The time horizon of a forecast's usefulness and validity can be characterized by a specific measure known as predictability. Predictability is commonly understood as the degree to which it is possible to make an accurate qualitative or quantitative forecast of the future state of a system. The study of atmospheric predictability was initiated by Thompson [8] and Lorenz [6, 9] more than 50 years ago and has been extensively explored theoretically using various numerical and statistical models since then (e.g. [10–17]). One obvious measure of predictability that can be used to verify a weather forecast is the mean‐squared error (the average of the squared differences between forecasts and observations). This measure increases over time and asymptotically approaches some finite value known as the saturation value. Predictability is lost when the forecast errors become comparable in magnitude to the saturation value: once this happens, the forecast is no better than any randomly selected trajectory of the system. However, for a number of reasons, the mean‐squared error and other weather forecast verification metrics (e.g. mean absolute error and mean error) are rarely used in practice to estimate the predictability of the climate system (for details, see Ref. [18]).
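A toy experiment can make the saturation behaviour concrete. In the sketch below (illustrative only; the fully chaotic logistic map stands in for the atmosphere, and the error magnitudes are arbitrary assumptions) squared forecast errors are averaged over many cases, and the error climbs from a tiny analysis error towards a finite saturation level:

```python
# Illustrative sketch: growth and saturation of the mean-squared forecast error,
# with the chaotic logistic map x -> 4x(1-x) standing in for the atmosphere.
import random

def logistic(x):
    return 4.0 * x * (1.0 - x)

def mse_at_lead(lead, n_cases=2000, eps=1e-6, seed=1):
    """Mean-squared error at a given lead time, averaged over many forecasts."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_cases):
        truth = rng.uniform(0.01, 0.99)
        fcst = truth + eps * rng.uniform(-1.0, 1.0)   # imperfect analysis
        for _ in range(lead):
            truth, fcst = logistic(truth), logistic(fcst)
        total += (truth - fcst) ** 2
    return total / n_cases

short_lead, long_lead = mse_at_lead(5), mse_at_lead(60)
```

At short lead the error is still close to the analysis error; at long lead it has saturated near the mean‐squared distance between two unrelated states of the system, so the forecast carries no information.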

Predictability characterizes both the physical system itself and the model of this system that is used to make a forecast. However, in atmospheric and climate studies we are interested in the predictability of real dynamical processes rather than the predictability of the model used in simulations.

According to Lorenz [19], in weather and climate modelling we face predictability of two kinds, reflecting the internal and external variability of the climate system, respectively. Predictability of the first kind relates to the Cauchy (initial‐value) problem, namely the prediction of sequential states of the ECS for constant values of the external parameters and given variations in the initial conditions. In contrast, predictability of the second kind refers to a boundary‐value problem, specifically to the prediction of the response of the climate system in asymptotic equilibrium to perturbations in the external parameters (forcing).

This chapter considers both the predictability of atmospheric and climate processes with respect to errors in the initial data (predictability of the first kind) and the predictability with respect to external perturbations (predictability of the second kind). The stability of dynamical systems is also discussed, since stability is a key problem related to predictability in dynamical systems.


2. Climate system as a complex adaptive dynamical system

Let us begin with some preliminary notes and definitions which will be used in this chapter.

The term ‘system’ generally refers to a goal‐oriented set of interconnected and interdependent elements that operate together to achieve some objectives [20]. The system is called complex if it possesses such characteristics as emergent behaviour, nonlinearity and high sensitivity to initial conditions and/or to perturbations, self‐organization, chaotic behaviour, feedback loop, spontaneous order, robustness and hierarchical structure. Complexity in systems arises from nonlinear spatio‐temporal interactions between their components. These nonlinear interactions lead to the appearance of new dynamical properties (for example, synchronous oscillations and other structural changes) that cannot be observed by exploring constituent elements individually.

Complex systems include a special class of systems that have the capacity to adapt to system's environment. These systems are known as complex adaptive systems. In a complex adaptive system, parts are linked together in such a way that the entire system as a whole has the capacity to transform fundamentally the interrelations and interdependences between its components, the collective behaviour of a system and also the behaviour of individual components due to the external forcing. Complex adaptive systems are dynamical systems since they evolve and change over time. These systems have a number of properties that include the following [21, 22]: co‐evolution, connectivity, sub‐optimality, requisite variety and iteration, edge of chaos and, certainly, emergence and self‐organization.

The ECS ( S ) is understood as a complex, large‐scale physical system that consists of the following five basic and interacting constituent subsystems [23]:

  1. Atmosphere ( A ), the gaseous and aerosol envelope of the earth that extends from the land, water bodies and ice‐covered surface outwards to space.

  2. Hydrosphere ( H ), the ocean and other water bodies on the surface of our planet, and water that is underground and in the atmosphere.

  3. Cryosphere ( C ), the sea ice, freshwater ice, snow cover, glaciers, ice caps and ice sheets and permafrost.

  4. Lithosphere ( L ), the solid, external part of the earth.

  5. Biosphere ( B ), the part of our planet where life exists, i.e. $S = A \cup H \cup C \cup L \cup B$.

The ECS components are characterized by a finite set of distributed variables whose values at a given time determine their state. The most unstable and rapidly oscillating component of the ECS is the atmosphere.

The ECS is a large‐scale and unique physical system possessing a number of specific properties (e.g. [24–29]) that make its exploration a highly complex problem. In contradistinction to many problems in physics, the study of the climate system, its change and variability cannot be implemented as a direct physical experiment because of the climate system's essential features as a large‐scale physical system. Laboratory experiments and analytical approaches have very limited applicability to climate exploration by virtue of the extreme complexity of the ECS. As a result, computational simulation represents the primary instrument in climate studies and, as such, requires the development of appropriate mathematical models and numerical algorithms.

The utilization of mathematical models in climate research involves the development of a specific mathematical theory that allows one to explore the climate system along with its mathematical models. The contemporary mathematical theory of climate is based on methods of the qualitative theory of differential equations that enables us to explore the behaviour of climate system in its phase space [30]. In other words, the dynamical system theory is currently the theoretical foundation of mathematical climate theory. In this context, the ECS can be viewed as a complex adaptive dynamical system [21, 22].

The ECS belongs to the class of complex adaptive systems due to the following factors:

  1. The ECS is a complex large‐scale physical system combining the atmosphere, hydrosphere, cryosphere, land and biota together with global biochemical cycles (such as cycles of CO2, N2O and CH4) and aerosols. Components of the climate system are heterogeneous thermo‐dynamical subsystems characterized by specific variables that determine their states. Elements of the ECS have strong differences in their structure, dynamics, physics and chemistry. They cover processes with different temporal and spatial scales, and link together via numerous physical coupling mechanisms, which can be either weak or strong. Each subsystem of the ECS can in turn be viewed as being composed of subsystems, which are themselves composed of subsystems. For example, the atmosphere can be divided into several layers based on its vertical temperature distribution. These layers are respectively the troposphere, stratosphere, mesosphere and thermosphere. The atmosphere can also be divided into surface layer, boundary layer and free atmosphere based on the influence of surface friction.

  2. Each component of the ECS is characterized by a specific response time. This fact is very important for building models of the ECS. Whether a certain component is included in an ECS model is determined by the ratio between the temporal scale of the processes under consideration and the component's response time. For example, the atmosphere, which has a response time of about one month in the troposphere, can be considered the sole component of an ECS model for processes with temporal scales of days to weeks. In this case, the oceans, land surface and ice cover are treated as boundary conditions and/or external forcing. If we study processes with temporal scales of months to years, the atmosphere and ocean must be included in the ECS model together with sea ice. Thus, computational models of the ECS are built up from a hierarchy of models, finally forming a complex integrated model.

  3. The ECS has a large number of positive and negative feedback mechanisms which control the behaviour of the ECS. Some examples of these mechanisms are ice‐albedo feedback (positive feedback), water vapour feedback (positive feedback), cloud feedback (both positive and negative feedbacks), carbon cycle feedback (negative feedback), feedback due to Arctic methane release (positive feedback) and many others.

  4. Physical and dynamical processes in the ECS cover a broad spectrum of temporal and spatial scales. Time scales vary from seconds to decades, and the spatial spectrum of dynamical processes extends from molecular to planetary scales. Dynamical processes in the ECS and its components are nonlinear. Subsystems of the ECS interact with one another nonlinearly, producing, under certain conditions, chaotic behaviour of the subsystems and of the overall climate system.

  5. The ECS and its components inherently have emergent properties. Examples of atmospheric emergent phenomena include, but are not limited to, clouds, large‐scale eddies (cyclones and anticyclones) and small‐scale vortices such as tornados. Examples of climate emergent phenomena are the El Niño–Southern Oscillation, a quasi‐periodic irregular variation in the ocean surface temperature over the tropical Pacific that strongly influences the global climate, ocean circulation patterns and glacial‐interglacial cycles. Natural emergent phenomena appear spontaneously under certain favourable conditions.

  6. The ECS is a thermodynamically open, non‐isolated system because it exchanges energy with its surroundings. However, the ECS is a closed system with respect to the exchange of matter with its surroundings. The energy that drives the ECS is mainly solar energy. The ECS is affected by changes in external driving forces, which include natural causes such as variations in solar activity and volcanic activity, as well as man‐made changes in the chemical composition of the atmosphere. However, the impact of the ECS on outer space is insignificant. Currently, changes in climate are mostly driven by variations in the atmospheric composition of particles and gases. In the Arctic, the role of changes in albedo (reflection coefficient) is also tangible. The most influential gas affecting the climate is CO2, which accounts for about 70% of the global warming potential.

  7. The components of the ECS are also non‐isolated systems. They act as cascading systems and interact with each other in various ways including through the transfer of momentum, sensible and latent heat, gases and particles. All together they compose the climate system, which is a unique large‐scale natural system.

  8. Dynamical processes in the ECS fluctuate due to both internal factors (natural oscillations) and external forcing (forced oscillations). Natural fluctuations are caused by internal instability (for example, hydrodynamic instability such as barotropic and baroclinic) with respect to stochastic perturbations. Human impacts, both intentional and unintentional, belong to the category of external forcing.

Undoubtedly, there are other specific properties of the ECS that should be taken into account while studying climate as a complex adaptive system and building models of the ECS.

To simulate the ECS, we should assign some mathematical object that is an abstract representation of the real climate system, taking into account its essential features mentioned above. This object is known as a perfect model of the ECS. It is usually assumed that a perfect model is a deterministic semi‐dynamical system that is dissipative, ergodic and possesses a global attractor. It is also assumed that any trajectory generated by the model is unstable [30].

Formally, an abstract climate system model represents a set of multi‐dimensional nonlinear partial differential equations, which generates a finite‐dimensional deterministic semi‐dynamical system of the form [24, 30]

$$\frac{dx}{dt} = F(x, p, f), \quad x \in \mathbb{R}^n, \quad x|_{t=0} = x_0, \quad t \geq 0, \quad (E1)$$

where $x$ is the state vector, the components of which characterize the state of the system at a given time $t$, $x_0$ is a given initial state of the system, $n$ is the dimension of the dynamical system, $p \in \mathbb{R}^p$ is the vector of model parameters and $f$ is the external forcing. The solution to the climate model equations (1) cannot be found analytically, and one needs to employ available numerical methods. For that reason, in order to obtain a numerical solution, the original set of partial differential equations is replaced with discrete spatio‐temporal approximations using an appropriate technique (e.g. the finite‐difference method, the Galerkin approach, etc.). Thus, in weather and climate simulation we mainly deal with discrete dynamical systems.

Suppose that the set of $n$ real variables $x_1, x_2, \ldots, x_n$ defines the current state of a discrete‐time dynamical system representing the ECS. A certain particular state $x = (x_1, x_2, \ldots, x_n)$ corresponds to a point in an $n$‐dimensional space $Q \subseteq \mathbb{R}^n$, known as the system phase space. Let $t_m \in \mathbb{Z}^+$ ($m = 0, 1, 2, \ldots$) be the discrete time, and let $f = (f_1, f_2, \ldots, f_n)$ be a smooth vector‐valued function defined in the domain $Q \subseteq \mathbb{R}^n$. This function describes the evolution of the system state from one moment to another. Then, a deterministic discrete‐time semi‐dynamical system that approximates the continuous‐time dynamical system (1) can be specified by the following equation:

$$x(t_{m+1}) = f(x(t_m)), \quad x(t_0) = x_0, \quad m = 0, 1, 2, \ldots. \quad (E2)$$

The family of operators $\{f^m\}$ forms a semi‐group:

$$f^{s+p} = f^s \circ f^p, \quad f^0 = I, \quad s, p \in \mathbb{Z}^+, \quad (E3)$$

where $I$ is the identity operator. Therefore, the system state $x(t_m)$ at time $t_m$ can be explicitly expressed via the initial condition $x_0$:

$$x(t_m) = f^m(x_0), \quad (E4)$$

where $f^m$ denotes the $m$‐fold application of $f$ to $x_0$. The sequence $\{x(t_m)\}_{m=0}^{\infty}$ is a trajectory of system (2) in its phase space, which is uniquely defined by the initial values of the state variables $x_0$.
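Equations (E2)–(E4) can be mimicked in a few lines of code. The sketch below (the particular map is an arbitrary smooth choice made for illustration, not a climate model) iterates a discrete system and checks the semi‐group property (E3) numerically:

```python
# Illustrative sketch: the discrete evolution (E2) and the semi-group
# property f^{s+p} = f^s o f^p of (E3), for a toy one-dimensional map.
import math

def f(x):
    """One time step of a toy discrete-time system (arbitrary smooth map)."""
    return 0.9 * math.cos(x)

def iterate(x0, m):
    """m-fold application f^m(x0), cf. (E4)."""
    x = x0
    for _ in range(m):
        x = f(x)
    return x

x0, s, p = 0.3, 4, 7
lhs = iterate(x0, s + p)            # f^{s+p}(x0)
rhs = iterate(iterate(x0, p), s)    # (f^s o f^p)(x0)
```

Both orderings apply the same sequence of maps, so the two results coincide exactly.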

For reference, let us reproduce a couple of definitions [30].

Definition 1. The solution $x(t)$ to system (1) is Lyapunov stable if $\forall \varepsilon > 0$ $\exists \delta(\varepsilon) > 0$ such that

$$\|x_0 - x_0^*\| < \delta(\varepsilon) \Rightarrow \|x(t) - x^*(t)\| < \varepsilon, \quad \forall t \geq 0, \quad (E5)$$

where $x^*(t)$ is the solution to the system

$$dx^*/dt = F(x^*, p, f), \quad x^*|_{t=0} = x_0^*. \quad (E6)$$

Definition 2. The solution $x(t)$ to system (1) is stable with respect to the continuous perturbation $\delta F$ if $\forall \varepsilon > 0$ $\exists \delta(\varepsilon) > 0$ such that

$$\|\delta F\| < \delta(\varepsilon) \Rightarrow \|x(t) - x^*(t)\| < \varepsilon, \quad \forall t \geq 0, \quad (E7)$$

where $x^*(t)$ is the solution to the following perturbed equation:

$$dx^*/dt = F(x^*, p, f) + \delta F, \quad x^*|_{t=0} = x_0^*. \quad (E8)$$

These definitions are important when considering both kinds of predictability.
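Definition 1 can be illustrated with two elementary scalar systems (deliberately simple assumptions, not climate models): for $dx/dt = -x$ the gap between neighbouring solutions shrinks, so the solution is Lyapunov stable, whereas for $dx/dt = +x$ the same initial gap grows:

```python
# Illustrative sketch of Definition 1 with scalar linear systems dx/dt = r*x,
# integrated by forward Euler with a small time step.

def evolve(x0, rate, t_end=5.0, dt=1e-3):
    """Integrate dx/dt = rate * x from x0 up to t_end."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * rate * x
    return x

delta0 = 1e-3                                                       # ||x0 - x0*||
gap_stable = abs(evolve(1.0 + delta0, -1.0) - evolve(1.0, -1.0))    # shrinks
gap_unstable = abs(evolve(1.0 + delta0, +1.0) - evolve(1.0, +1.0))  # grows
```

For the stable case any tolerance $\varepsilon$ larger than the initial gap works for all $t \geq 0$; for the unstable case no such $\delta(\varepsilon)$ exists.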

The key point for further consideration is the assumption that the climate system model described by the set of nonlinear partial differential equations (1) is ‘perfect'. We suppose that system (1) is a nonlinear dissipative semi‐dynamical system ($t \geq 0$) that has an absorbing set in the phase space and that its solution exists and is unique for any $t \geq 0$. Next, we assume that system (1) possesses a global attractor of finite dimension, that is, a certain set in the system's phase space towards which the system tends to evolve for a wide variety of initial conditions. The global attractor is characterized by the attraction property and invariance [30]. So, the dynamics of system (1) can formally be divided into two phases: (1) movement towards the attractor and (2) motion on the attractor. When studying the stability and predictability of the climate system, we assume that the system trajectory is on the attractor and that its dynamics are chaotic. We also assume that system (1) possesses the property of ergodicity. Thus, statistical characteristics of the climate system (e.g. the first moment $\bar{x} = \langle x \rangle$ and the second moment $\mathrm{var}(x) = \langle x^2 \rangle - \bar{x}^2$) can be calculated by averaging along a certain trajectory.
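The ergodicity assumption, i.e. replacing ensemble averages by time averages along a single trajectory, can be checked on a system whose invariant measure is known. The sketch below (illustrative; the fully chaotic logistic map is the assumed stand‐in) recovers the ensemble mean 1/2 and variance 1/8 from one long orbit:

```python
# Illustrative sketch: time averages along one trajectory of the ergodic
# logistic map x -> 4x(1-x) approximate the ensemble moments of its
# invariant measure (mean 1/2, variance 1/8).

def logistic(x):
    return 4.0 * x * (1.0 - x)

def trajectory_moments(x0=0.123, n=200_000, burn=1_000):
    """First and second moments estimated by averaging along one orbit."""
    x = x0
    for _ in range(burn):          # discard the transient towards the attractor
        x = logistic(x)
    s = s2 = 0.0
    for _ in range(n):
        x = logistic(x)
        s += x
        s2 += x * x
    mean = s / n
    return mean, s2 / n - mean * mean

time_mean, time_var = trajectory_moments()
```

The time averages agree with the ensemble values to within sampling error, which is precisely what ergodicity licenses for the climate attractor.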

Structurally, any climate system model represents a set of interacting and interdependent lower‐level models (i.e. an atmospheric model, a model of the ocean, etc.). The number of these lower‐level models is determined by the objectives of the problem under consideration. For example, to study large‐scale climate variability the model can include the following major components: the tropical, mid‐latitude and polar troposphere, the stratosphere, the ocean and sea ice, land ice, the surface and boundary layers, the hydrological cycle, clouds (e.g. convective and stratiform), precipitation, aerosols, the CO2 and CH4 cycles, solar radiation, terrestrial emission, etc. Other subsystems of the ECS (e.g. vegetation, land surface and biota) can be treated as boundary conditions and external forcing. In the numerical weather prediction problem, some atmospheric model (global, regional or local) is the main component of the forecasting system, while the ocean, sea ice and land surface are used only to impose boundary conditions. Note that models of the general circulation of the atmosphere and the ocean represent the main computational instruments for simulating the ECS.


3. Climate model governing equations

The main energy source of the ECS is the Sun. Spatial inhomogeneity and temporal changes of the heat energy that the earth's surface receives from the Sun are the main cause of motions in the atmosphere and ocean. Equations that govern the atmospheric and oceanic circulation represent the mathematical expressions of fundamental laws of physics: conservation of momentum, conservation of mass, conservation of water and conservation of energy (the first law of thermodynamics). Some diagnostic relationships between variables are also used (e.g. the equation of state). Almost every model uses a slightly different set of equations tailored to a specific problem. However, all climate models include the following basic equations: two equations for horizontal motions (or equations for the vorticity and divergence), an equation for the vertical velocity (or the hydrostatic equation), the continuity equation, as well as the thermodynamic and moisture equations. Equations of motion are derived from the law of conservation of momentum applied to a rotating system. These equations describe all types and scales of atmospheric motions that are important for the formation of weather and climate (e.g. large‐scale Rossby waves, planetary waves and gravity waves). Conservation of mass is mathematically expressed in the form of the continuity equation, the equation for the conservation of moisture and equations for the conservation of other substances taken into account in a particular climate model.

The set of equations that describes the general circulation of the atmosphere can be written in the spherical co‐ordinate system ( λ , ϕ ) defined by longitude λ and latitude ϕ , with normalized pressure as a vertical coordinate σ = p / p s , where p is pressure and p s is the surface pressure [1, 31]. The set of the model equations includes two momentum equations:

$$\frac{\partial u}{\partial t} = \eta v - \frac{1}{a\cos\phi}\frac{\partial}{\partial\lambda}(\Phi + K) - \frac{R T_v}{a\cos\phi}\frac{\partial \ln p_s}{\partial\lambda} - \dot{\sigma}\frac{\partial u}{\partial\sigma} + F_u^V + F_u^H, \quad (E9)$$
$$\frac{\partial v}{\partial t} = -\eta u - \frac{1}{a}\frac{\partial}{\partial\phi}(\Phi + K) - \frac{R T_v}{a}\frac{\partial \ln p_s}{\partial\phi} - \dot{\sigma}\frac{\partial v}{\partial\sigma} + F_v^V + F_v^H, \quad (E10)$$

where $u$ and $v$ are the zonal and meridional velocities, $a$ is the earth's average radius, $\dot{\sigma} = d\sigma/dt$ is the vertical velocity in the $\sigma$ co‐ordinate system, $\Phi$ is the geopotential, $T$ is temperature, $R$ is the gas constant for dry air, $K = (u^2 + v^2)/2$ is the kinetic energy, $\eta = \zeta + f$ is the absolute vorticity, $f$ is the Coriolis parameter and $\zeta$ is the relative vorticity, which is given by

$$\zeta = \frac{1}{a\cos\phi}\left[\frac{\partial v}{\partial\lambda} - \frac{\partial}{\partial\phi}(u\cos\phi)\right]. \quad (E11)$$

The virtual temperature $T_v$ is represented as

$$T_v = T\left[1 + \left(\frac{R_v}{R} - 1\right)q\right], \quad (E12)$$

where $T$ is the temperature, $q$ is the specific humidity and $R_v$ is the gas constant for water vapour. The terms $F_u^V$ and $F_v^V$ describe the vertical friction, and the terms $F_u^H$ and $F_v^H$ the horizontal diffusion. Generally, however, the momentum equations are transformed into equations for the absolute vorticity $\eta$ and the divergence $D$ using the new independent variable $\mu = \sin\phi$:

$$\frac{\partial \eta}{\partial t} = \frac{1}{a(1-\mu^2)}\frac{\partial}{\partial\lambda}\left(N_v + \cos\phi\, F_v^V\right) - \frac{1}{a}\frac{\partial}{\partial\mu}\left(N_u + \cos\phi\, F_u^V\right) + F_\eta^H, \quad (E13)$$
$$\frac{\partial D}{\partial t} = \frac{1}{a(1-\mu^2)}\frac{\partial}{\partial\lambda}\left(N_u + \cos\phi\, F_u^V\right) + \frac{1}{a}\frac{\partial}{\partial\mu}\left(N_v + \cos\phi\, F_v^V\right) + F_D^H - \nabla^2\left(\Phi + K + R T_0 \ln p_s\right), \quad (E14)$$

where the horizontal divergence is given by

$$D = \frac{1}{a\cos\phi}\left[\frac{\partial u}{\partial\lambda} + \frac{\partial}{\partial\phi}(v\cos\phi)\right]. \quad (E15)$$

The spherical horizontal Laplacian can be written as

$$\nabla^2 = \frac{1}{a^2(1-\mu^2)}\frac{\partial^2}{\partial\lambda^2} + \frac{1}{a^2}\frac{\partial}{\partial\mu}\left[(1-\mu^2)\frac{\partial}{\partial\mu}\right]. \quad (E16)$$

To provide the computational effectiveness of the numerical integration scheme, the virtual temperature is partitioned into two parts, one of which, $T_0$, is a function of the vertical coordinate only, i.e. $T_v(\lambda, \mu, \sigma, t) = T_0(\sigma) + T_v'(\lambda, \mu, \sigma, t)$. Then, the nonlinear dynamical terms $N_u$ and $N_v$ can be represented in the following form:

$$N_u = \eta V - R T_v' \frac{1}{a}\frac{\partial \ln p_s}{\partial\lambda} - \dot{\sigma}\frac{\partial U}{\partial\sigma}, \quad (E17)$$
$$N_v = -\eta U - R T_v' \frac{(1-\mu^2)}{a}\frac{\partial \ln p_s}{\partial\mu} - \dot{\sigma}\frac{\partial V}{\partial\sigma}, \quad (E18)$$

where $U = u\cos\phi$ and $V = v\cos\phi$.

The thermodynamic equation, which represents the mathematical expression of the first law of thermodynamics, is written for the temperature perturbation $T'$ calculated with respect to the mean $T_0(\sigma)$ mentioned above:

$$\frac{\partial T'}{\partial t} = -\frac{1}{a(1-\mu^2)}\frac{\partial}{\partial\lambda}(U T') - \frac{1}{a}\frac{\partial}{\partial\mu}(V T') + T' D - \dot{\sigma}\frac{\partial T'}{\partial\sigma} + \frac{R T_v}{c_p^*}\frac{\omega}{p} + Q + F_T^V + F_T^H - \frac{1}{c_p^*}\left[u\left(F_u^V + F_u^H\right) + v\left(F_v^V + F_v^H\right)\right], \quad (E19)$$

where $Q$ is the diabatic heating rate, $\omega$ is the pressure vertical velocity and $c_p^*$ is given by

$$c_p^* = c_p\left[1 + \left(\frac{c_v}{c_p} - 1\right)q\right]. \quad (E20)$$

Here, $c_p$ is the specific heat of dry air at a constant pressure and $c_v$ is the specific heat of water vapour at a constant pressure.

The equation for specific humidity is used to describe the hydrological cycle in the atmosphere:

$$\frac{\partial q}{\partial t} = -\frac{1}{a(1-\mu^2)}\frac{\partial}{\partial\lambda}(U q) - \frac{1}{a}\frac{\partial}{\partial\mu}(V q) + q D - \dot{\sigma}\frac{\partial q}{\partial\sigma} + S + F_q^V + F_q^H, \quad (E21)$$

where the term $S$ describes the source/sink processes for water vapour, and $F_q^V$ and $F_q^H$ are the vertical and horizontal water vapour diffusion.

Let us now consider the continuity equation, which represents the law of conservation of mass:

$$\frac{\partial \ln p_s}{\partial t} = -\frac{U}{a(1-\mu^2)}\frac{\partial \ln p_s}{\partial\lambda} - \frac{V}{a}\frac{\partial \ln p_s}{\partial\mu} - D - \frac{\partial \dot{\sigma}}{\partial\sigma}. \quad (E22)$$

Integrating this equation from the top ($\sigma = 0$) to the bottom ($\sigma = 1$), with the vertical boundary conditions $\dot{\sigma} = 0$ at $\sigma = 0$ and $\sigma = 1$, one can obtain the equation for surface pressure prediction:

$$\frac{\partial \ln p_s}{\partial t} = -\int_0^1\left[D + \frac{U}{a(1-\mu^2)}\frac{\partial \ln p_s}{\partial\lambda} + \frac{V}{a}\frac{\partial \ln p_s}{\partial\mu}\right]d\sigma. \quad (E23)$$

Combining the continuity equation and the equation for the surface pressure, one can derive the diagnostic equation for the vertical velocity $\dot{\sigma}$:

$$\dot{\sigma} = \sigma\int_0^1\left[D + \frac{U}{a(1-\mu^2)}\frac{\partial \ln p_s}{\partial\lambda} + \frac{V}{a}\frac{\partial \ln p_s}{\partial\mu}\right]d\sigma - \int_0^\sigma\left[D + \frac{U}{a(1-\mu^2)}\frac{\partial \ln p_s}{\partial\lambda} + \frac{V}{a}\frac{\partial \ln p_s}{\partial\mu}\right]d\sigma. \quad (E24)$$

Two diagnostic equations, the hydrostatic equation and the equation of state, are also components of the set of equations used to simulate the atmospheric general circulation. The hydrostatic equation is

$$\partial\Phi/\partial\ln\sigma = -R T_v. \quad (E25)$$

In the integral form, this equation can be written as

$$\Phi = \Phi_s + \int_\sigma^1 R T_v\, d\ln\sigma, \quad (E26)$$

where $\Phi_s$ is the geopotential at the earth's surface. The equation of state is written as

$$p = \rho R T_v, \quad (E27)$$

where $\rho$ is the air density.

Boundary conditions in the longitudinal direction are periodic, and the solution to the atmospheric model equations is bounded at the north and south poles. Vertical boundary conditions represent the vanishing of the vertical velocity both at the bottom and at the top of the atmosphere: $\dot{\sigma} = 0$ at $\sigma = 1$ and $\sigma = 0$.
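The diagnostic relations lend themselves to a quick numerical check. The sketch below (illustrative; the linear temperature profile and flat surface are assumptions, not chapter data) integrates the hydrostatic equation in its integral form (E26) with the trapezoidal rule and compares the result with the analytic value:

```python
# Illustrative sketch: discrete vertical integration of the hydrostatic
# relation (E26), Phi = Phi_s + integral_sigma^1 R T_v d(ln sigma).
# The temperature profile and flat surface are toy assumptions.
import math

R = 287.0        # gas constant for dry air, J kg^-1 K^-1
phi_s = 0.0      # surface geopotential (flat lower boundary)

def T_v(sigma):
    """Toy virtual temperature profile, warmer near the surface."""
    return 220.0 + 80.0 * sigma

def geopotential(sigma_level, n=1000):
    """Trapezoidal integration of R*T_v over l = ln(sigma) up from the surface."""
    l0 = math.log(sigma_level)
    h = (0.0 - l0) / n
    total = 0.0
    for i in range(n):
        la, lb = l0 + i * h, l0 + (i + 1) * h
        total += 0.5 * h * (R * T_v(math.exp(la)) + R * T_v(math.exp(lb)))
    return phi_s + total

phi_500 = geopotential(0.5)                         # geopotential at sigma = 0.5
exact = R * (220.0 * math.log(2.0) + 80.0 * 0.5)    # analytic integral
```

For this linear profile the quadrature reproduces the closed‐form integral to high accuracy, mirroring how model levels are converted to geopotential in practice.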

Equations used in the ocean model are written in the Boussinesq hydrostatic approximation with a rigid lid in the spherical coordinate system, with depth $z$ as a vertical coordinate defined as negative downwards from $z = 0$, which denotes the ocean surface [1, 31]. The set of model equations includes the following:

  1. The horizontal equations of motion:

    $$\frac{\partial u}{\partial t} + L(u) - \left(f + \frac{u}{a}\tan\phi\right)v + \frac{1}{a\rho_0\cos\phi}\frac{\partial p}{\partial\lambda} = k_V\frac{\partial^2 u}{\partial z^2} + F_u, \quad (E28)$$
    $$\frac{\partial v}{\partial t} + L(v) + \left(f + \frac{u}{a}\tan\phi\right)u + \frac{1}{a\rho_0}\frac{\partial p}{\partial\phi} = k_V\frac{\partial^2 v}{\partial z^2} + F_v, \quad (E29)$$

    where $k_V$ is the vertical eddy viscosity coefficient, $\rho_0$ is the density of sea water and the advection operator, $L(\alpha)$, is given by

    $$L(\alpha) = \frac{1}{a\cos\phi}\left[\frac{\partial(u\alpha)}{\partial\lambda} + \frac{\partial(v\alpha\cos\phi)}{\partial\phi}\right] + \frac{\partial(w\alpha)}{\partial z}. \quad (E30)$$

    The horizontal viscosity terms, $F_u$ and $F_v$, are defined as

    $$F_u = k_H\left[\nabla^2 u + (1 - \tan^2\phi)\frac{u}{a^2} - \frac{2\sin\phi}{a^2\cos^2\phi}\frac{\partial v}{\partial\lambda}\right], \quad (E31)$$
    $$F_v = k_H\left[\nabla^2 v + (1 - \tan^2\phi)\frac{v}{a^2} + \frac{2\sin\phi}{a^2\cos^2\phi}\frac{\partial u}{\partial\lambda}\right], \quad (E32)$$

    where $k_H$ is the horizontal eddy viscosity coefficient. The given form of the diffusion terms, $F_u$ and $F_v$, is required to conserve angular momentum.

  2. The hydrostatic equation:

    $$\partial p/\partial z = -g\rho. \quad (E33)$$

  3. The thermodynamic equation:

    $$\frac{\partial T}{\partial t} + L(T) = \kappa_V\frac{\partial^2 T}{\partial z^2} + \kappa_H\nabla^2 T, \quad (E34)$$

    where $\kappa_V$ and $\kappa_H$ are, respectively, the vertical and horizontal eddy diffusivity coefficients.

  4. The equation for the mass continuity of salinity:

    $$\frac{\partial S}{\partial t} + L(S) = \kappa_V\frac{\partial^2 S}{\partial z^2} + \kappa_H\nabla^2 S. \quad (E35)$$

  5. The equation of continuity:

    $$\frac{\partial w}{\partial z} = -\frac{1}{a\cos\phi}\frac{\partial u}{\partial\lambda} - \frac{1}{a\cos\phi}\frac{\partial(v\cos\phi)}{\partial\phi}. \quad (E36)$$

  6. The equation of state:

    $$\rho = \rho(T, S, p). \quad (E37)$$

Due to their extreme complexity, weather and climate models can be implemented on computers only using numerical techniques. Since the models are based on partial differential equations, it is necessary, first, to ensure that the problem under consideration is well posed, i.e. it has a unique solution that depends continuously on the boundary and initial conditions. Thus, both the initial and boundary conditions must be properly specified. Next, weather and climate mathematical models should be transformed into numerical models that can be implemented on computers. The most widely used technique for solving the differential equations of weather and climate models is the finite‐difference method, according to which the derivatives in the partial differential equations are approximated on a certain temporal‐spatial grid. Thus, instead of continuous functions, which describe the state of the climate system and its components, we deal with discrete functions defined only at specific times separated by the time step Δt and at specific spatial locations separated by the spatial (horizontal Δs and vertical Δh) steps. As a result, instead of partial differential equations we obtain finite‐difference equations (a numerical model). It is very important that the numerical schemes used for the discretization of the model differential equations satisfy several fundamental requirements: the finite‐difference equations must be consistent with the model differential equations, the solution of the finite‐difference equations must converge to the solution of the differential equations and the numerical schemes must be computationally stable. In practice, finite differences are not the only method used to solve weather and climate problems. The most popular among the other methods are the family of Galerkin techniques, and spectral, finite‐volume and finite‐element approaches.
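The stability requirement can be demonstrated with the simplest possible scheme. The sketch below (illustrative; the 1‐D advection equation and first‐order upwind differencing are chosen for brevity, not taken from the chapter's models) shows that the solution stays bounded when the Courant number C = cΔt/Δx satisfies C ≤ 1 and blows up when it does not:

```python
# Illustrative sketch: computational stability of the first-order upwind
# scheme for u_t + c u_x = 0 on a periodic grid (CFL condition C <= 1).

def upwind_step(u, courant):
    """One upwind time step on a periodic grid (positive advection speed)."""
    return [u[i] - courant * (u[i] - u[i - 1]) for i in range(len(u))]

def max_amplitude(courant, n_steps=200, n=50):
    """Largest |u| after n_steps, starting from a square-wave profile."""
    u = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]
    for _ in range(n_steps):
        u = upwind_step(u, courant)
    return max(abs(v) for v in u)

stable = max_amplitude(0.8)    # C <= 1: amplitude never exceeds the initial one
unstable = max_amplitude(1.5)  # C > 1: the CFL condition is violated
```

With C ≤ 1 each new value is a convex combination of old values, so the amplitude cannot grow; with C > 1 short waves are amplified at every step and the numerical solution diverges even though the underlying differential equation is perfectly well behaved.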

In contemporary climate models, owing to their discrete spatial and temporal structure, a large number of physical processes and cycles cannot be explicitly represented by the model equations. Climate models are theoretically incapable of resolving processes with spatial scales smaller than about twice the model grid length [32]. Such thermodynamical, physical and chemical processes and cycles are parameterized, i.e. expressed parametrically using a simplified description. Studying the climate system by computer simulation requires extensive computational resources. As a result, the predictability problem is usually studied either on the basis of low-order models, which possess the main properties of the climate system (nonlinearity, chaoticity, dissipativity, etc.), or on the basis of complex climate models using the ensemble approach or the Monte Carlo method.
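A minimal illustration of the ensemble (Monte Carlo) approach can be given with the Lorenz-63 equations, a classical low-order model possessing the nonlinearity and chaoticity mentioned above; the time step, ensemble size and perturbation amplitude below are illustrative assumptions, not values from the chapter:

```python
import numpy as np

# Sketch of the ensemble (Monte Carlo) approach on the Lorenz-63 low-order
# model: integrate many initial conditions that differ by tiny perturbations
# and watch the ensemble spread grow on the attractor.
def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s[..., 0], s[..., 1], s[..., 2]
    ds = np.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z], axis=-1)
    return s + dt * ds          # forward-Euler step (adequate for a sketch)

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])
for _ in range(1000):           # spin-up: relax onto the attractor
    base = step(base)

members = base + 1e-6 * rng.standard_normal((50, 3))   # perturbed ensemble
spread0 = members.std(axis=0).max()
for _ in range(1500):           # 15 time units of integration
    members = step(members)
spread = members.std(axis=0).max()

print(spread0, spread)          # the tiny initial spread grows by orders of magnitude
```

In operational practice the same idea is applied to full climate models: the spread of the ensemble, rather than any single trajectory, carries the forecast information.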


4. Predictability of climate system

4.1. Predictability of the first kind

The first kind of predictability of climate processes (predictability with respect to the initial conditions) will be considered under the assumption that the climate system (1) evolves on its attractor. Since system (1) is a nonlinear dissipative dynamical system, its attractor, known as a strange attractor, has an extremely complex fractal structure and can be characterized by such parameters as its dimension, characteristic Lyapunov exponents and invariant measure. If some trajectory of system (1) is enclosed in a bounded phase volume (the attractor), then the system's dynamics demonstrate deterministic chaos: the behaviour of the simulated system resembles a stochastic process despite the fact that the system is described by deterministic laws and its evolution is governed by deterministic differential equations. Thus, all orbits of the system that start close enough diverge from one another but never depart from the attractor. The rate of separation of infinitesimally close orbits is characterized by the positive Lyapunov exponents. The number of directions along which an orbit is unstable is equal to the number of positive Lyapunov exponents $n_\lambda$ (note that $n_\lambda < n$, where $n$ is the system's dimension). Thus, trajectories of climate dynamical systems are Lyapunov unstable.

To examine the initial growth rate of errors in the initial conditions, let us linearize Eq. (1) around some trajectory to obtain the equation in variations:

$$d\mathbf{x}'/dt = J(\mathbf{x})\,\mathbf{x}', \qquad \mathbf{x}'|_{t=0} = \mathbf{x}'_0, \tag{38}$$

where $J = \partial F/\partial \mathbf{x}$ is the Jacobian matrix evaluated along the trajectory. The solution of Eq. (38) can be written as $\mathbf{x}'(t) = M_t\,\mathbf{x}'_0$, where $M_t$ is the tangent linear propagator that maps the initial error $\mathbf{x}'_0$ into the forecast error $\mathbf{x}'(t)$ at time $t$. Obviously, one can obtain

$$\|\mathbf{x}'(t)\|^2 = (M_t\mathbf{x}'_0,\; M_t\mathbf{x}'_0) = (M_t^{*}M_t\,\mathbf{x}'_0,\; \mathbf{x}'_0), \tag{39}$$

where $(\cdot\,,\cdot)$ is the inner product in $\mathbb{R}^n$ and $M_t^{*}$ is the transpose of $M_t$. Since the operator $M_t^{*}M_t$ is self-adjoint, for any $t$ one can consider the following eigenvalue problem:

$$M_t^{*}M_t\,\psi_i = \sigma_i\,\psi_i, \tag{40}$$

where $\sigma_i$ is the $i$th eigenvalue of the matrix $M_t^{*}M_t$ and $\psi_i$ is the corresponding eigenvector. Representing $\mathbf{x}'_0$ as the series $\mathbf{x}'_0 = \sum_i \alpha_i \psi_i$, one gets $\|\mathbf{x}'(t)\|^2 = \sum_i \sigma_i \alpha_i^2$. Hence the forecast error on the time interval $[0, t]$ depends on the projection of the initial error onto the eigenvectors $\psi_i$ and on the singular values of the tangent linear propagator $M_t$. Since system (1) is ergodic, we can also calculate the Lyapunov exponents $\lambda_i$ in accordance with the multiplicative ergodic theorem [33]:

$$\lambda_i = \lim_{t\to\infty} \frac{1}{2t}\,\ln \sigma_i(M_t^{*}M_t), \qquad i = 1,\dots,n. \tag{41}$$

The Lyapunov exponents define the exponential growth (decay) rates of linearly independent components of $\mathbf{x}'$ starting from $\mathbf{x}'_0$. Knowledge of the Lyapunov spectrum of a dynamical system allows one to estimate the attractor's fractal dimension, the rate of Kolmogorov-Sinai entropy production and the characteristic e-folding time. Knowledge of these parameters is very important for the stability and predictability analysis of dynamical systems. The fractal dimension of the attractors of dissipative dynamical systems can be estimated by applying the Kaplan-Yorke conjecture [34]:

$$D_{KY} = J + \frac{\sum_{i=1}^{J} \lambda_i}{|\lambda_{J+1}|}, \tag{42}$$

where $J$ is the maximum integer such that the sum of the $J$ largest exponents is still non-negative, i.e. $\sum_{i=1}^{J} \lambda_i \ge 0$. The sum of all positive Lyapunov exponents, according to Pesin's theorem [35], gives an estimate of the Kolmogorov-Sinai entropy, i.e. a value characterizing the mean divergence of trajectories on the attractor. The Lyapunov exponents in (42) are arranged in decreasing order: $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n$. The multiplicative inverse (reciprocal) of the largest Lyapunov exponent is referred to as the characteristic e-folding time.

Let $\delta_0$ be the magnitude of the initial perturbation $\mathbf{x}'_0$ used to integrate Eq. (38). Since the system is Lyapunov unstable, after some sufficiently long integration interval the distance between the two hyper-points in phase space reaches a value $\delta_t$. Let $\bar{\delta}$ be the accepted error tolerance; then the predictability time of the system can be roughly estimated as

$$T_p \approx \lambda_{\max}^{-1} \ln(\bar{\delta}/\delta_0), \tag{43}$$

where $\lambda_{\max}$ is the leading Lyapunov exponent. The error-doubling time can be calculated as $t_d = \ln 2/\lambda_{\max}$. Note, however, that Lyapunov exponents are a practical instrument mainly for estimating the predictability of low-order dynamical systems [36].
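As a rough numerical illustration of Eqs. (41)-(43), the leading Lyapunov exponent of a low-order system can be estimated by repeatedly renormalizing the separation between two nearby orbits. The sketch below uses the Lorenz-63 model (an assumption for illustration; its leading exponent is known from the literature to be about 0.9) and then evaluates the predictability-time estimate for hypothetical values of the initial error and the tolerance:

```python
import numpy as np

# Estimate lambda_max of Lorenz-63 by renormalizing the separation of two
# nearby orbits, then apply the rough predictability-time estimate (43).
def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, d0 = 0.01, 1e-8
a = np.array([1.0, 1.0, 1.0])
for _ in range(2000):                 # transient: relax onto the attractor
    a = step(a)
b = a + np.array([d0, 0.0, 0.0])

n, log_growth = 20000, 0.0
for _ in range(n):
    a, b = step(a), step(b)
    d = np.linalg.norm(b - a)
    log_growth += np.log(d / d0)
    b = a + (b - a) * (d0 / d)        # rescale the separation back to d0

lam_max = log_growth / (n * dt)       # literature value is about 0.9
T_p = np.log(1.0 / 1e-6) / lam_max    # Eq. (43) with delta_0 = 1e-6, tolerance 1
print(lam_max, T_p)
```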

Climate data observations are subject to measurement errors. The simplest way to represent the resulting uncertainty is to define a probability density function (PDF) $\rho(\mathbf{x}, t_0)$ or, more generally, a set of finite measure $\mu_0$ on which the initial state $\mathbf{x}_0$ is concentrated. The time evolution of the system leads to a divergence and mixing of the points of this set. Since the initial state $\mathbf{x}_0$ is concentrated on a set having the measure $\mu_0$, after some period of time the measure becomes $\mu_t$. Let $\bar{\mu}$ be the invariant ergodic measure and suppose that a convergence theorem $\mu_t \to \bar{\mu}$ holds. Hence, at a certain time $t \ge t_\varepsilon$ the measure $\mu_t$ falls into the $\varepsilon$-neighbourhood of $\bar{\mu}$. Consequently, the initial-data information characterized by $\mu_0$ will be completely lost, and one can say that the time $t_\varepsilon$ defines the potential predictability of the system under consideration [16]. Thus, a focal point of the predictability problem is to prove the existence of the ergodic measure and of the convergence theorem. This problem, however, is extremely difficult to solve because the structure of the invariant measure generated on the system's attractor is complicated and non-smooth. To circumvent this difficulty, stochastic regularization can be applied [37]. So, in lieu of system (1), the following stochastic dynamical system is considered [16]:

$$d\mathbf{x}/dt = F(\mathbf{x}) + \boldsymbol{\eta}(t), \tag{44}$$

where $\boldsymbol{\eta}$ is a Gaussian stochastic process: $\langle \eta_i(t)\,\eta_j(t')\rangle = 2 d_{ij}\,\delta(t - t')$, $d_{ij} \ge 0$. This procedure is justified because our knowledge of the model parameters is always limited; thus, real climate models contain random errors, which are represented by the term $\boldsymbol{\eta}$. Under the assumption that $d_{ij} = d\,\delta_{ij}$, one can write the Fokker-Planck equation for the PDF $\rho(\mathbf{x}, t)$, which describes the evolution of $\rho$ [30]:

$$\partial\rho/\partial t + \operatorname{div}(F(\mathbf{x})\rho) = d\,\Delta\rho, \qquad \rho \ge 0, \quad \int \rho\, d\mathbf{x} = 1. \tag{45}$$

Let $\bar{\rho}$ be a stationary solution to Eq. (45), i.e. $\operatorname{div}(F(\mathbf{x})\bar{\rho}) = d\,\Delta\bar{\rho}$. If $\mathbf{x}$ belongs to a compact manifold without boundary, then $\bar{\rho}$ is asymptotically stable [37]. The existence of a stationary solution on the whole (unbounded) phase space has been proved for finite-dimensional dynamical systems [38].

Suppose that the initial condition $\mathbf{x}_0$ is specified exactly; then the condition $\rho|_{t=0} = \delta(\mathbf{x} - \mathbf{x}_0)$ is also specified and enables us to solve Eq. (45). The numerical integration of Eq. (45) transforms the PDF $\rho(\mathbf{x}, t)$, which asymptotically approaches the stationary solution $\bar{\rho}$: $\rho \to \bar{\rho}$ as $t \to t_\varepsilon$. Thus, at a sufficiently large time $t_\varepsilon$ predictability is finally lost. The question is: how can we estimate the time $t_\varepsilon$? Let us consider the following one-variable stochastic dynamical equation [16]:

$$dx/dt = -\gamma x + \eta, \tag{46}$$

$$x|_{t=0} = x_0, \qquad \langle \eta(t)\,\eta(t')\rangle = 2\langle\eta^2\rangle\,\delta(t - t'), \quad \langle\eta\rangle = 0, \tag{47}$$

where $x_0$ is the known initial condition and $\eta$ is a Gaussian $\delta$-correlated process. Averaging Eq. (46), we obtain

$$d\langle x\rangle/dt = -\gamma\langle x\rangle, \qquad \langle x\rangle|_{t=0} = x_0, \tag{48}$$

thus $\langle x\rangle = x_0 e^{-\gamma t}$. For the newly introduced variable $\theta(t) = \langle (x - \langle x\rangle)^2\rangle$, we obtain the following equation:

$$d\theta/dt = -2\gamma\theta + 2\langle\eta x\rangle. \tag{49}$$

Since $x(t) = x_0 e^{-\gamma t} + \int_0^t e^{-\gamma(t-\tau)}\eta(\tau)\,d\tau$, then

$$d\theta/dt = -2\gamma\theta + 4\langle\eta^2\rangle. \tag{50}$$

The solution to this equation is

$$\theta(t) = \frac{2\langle\eta^2\rangle}{\gamma}\left(1 - e^{-2\gamma t}\right). \tag{51}$$

The equation for the PDF $\rho$ has the following form:

$$\partial\rho/\partial t = \gamma\,\partial(x\rho)/\partial x + \langle\eta^2\rangle\,\partial^2\rho/\partial x^2. \tag{52}$$

The stationary solution to Eq. (52) can be found by setting the left-hand side equal to zero. Then we have $\bar{\rho} = (1/\sqrt{\pi\bar{\theta}})\,e^{-x^2/\bar{\theta}}$, where $\bar{\theta} = 2\langle\eta^2\rangle/\gamma$. We seek the solution to Eq. (52) in the form

$$\rho(x, t) = \frac{1}{\sqrt{\pi\theta(t)}}\, e^{-(x - \langle x\rangle(t))^2/\theta(t)}. \tag{53}$$

By substituting (53) into (52), one can verify that if $\langle x\rangle(t)$ and $\theta(t)$ satisfy Eq. (48) and Eq. (50), respectively, then (53) is the solution to the Fokker-Planck equation (52). As a result, any normally distributed initial data are attracted to the steady solution of Eq. (52), which is also normally distributed. The dissipation parameter $\gamma$ determines the rate at which the PDF $\rho$ approaches $\bar{\rho}$. The auto-correlation function of the stationary stochastic process (46) can be written as

$$C(\tau) = \frac{2\langle\eta^2\rangle}{\gamma}\, e^{-\gamma\tau} = \bar{\theta}\, e^{-\gamma\tau}. \tag{54}$$

Thus, the potential predictability of system (46) can be characterized by the auto-correlation function of the process $x(t)$; therefore, the convergence of $\rho(t)$ to $\bar{\rho}$ can be explored based only on the function $C(\tau)$ with time lag $\tau$. This conclusion also holds for sets of multi-dimensional differential equations [16]; in that case, however, a covariance matrix is used instead of the auto-correlation function. It is very important that for climate models the convergence of the covariance matrix $C(t)$ to the covariance matrix $\bar{C}$ of the stationary process is determined only by climatological values of the climate model variables. As a result, potential predictability is also determined by climatological data.
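The role of the dissipation parameter $\gamma$ can be checked numerically. The following sketch integrates the one-variable model (46) by the Euler-Maruyama method (the noise amplitude, step size and $\gamma$ are illustrative choices) and verifies that the normalized autocorrelation at lag $\tau$ is close to $e^{-\gamma\tau}$, the decay that controls the loss of potential predictability:

```python
import numpy as np

# Euler-Maruyama integration of dx/dt = -gamma*x + eta (an Ornstein-Uhlenbeck
# process) to check that the normalized autocorrelation decays as exp(-gamma*tau).
gamma, dt, n = 0.5, 0.01, 400_000
rng = np.random.default_rng(1)

x = np.zeros(n)
noise = np.sqrt(2.0 * dt) * rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - gamma * x[i] * dt + noise[i]

xs = x[n // 10:]                          # discard the spin-up segment
lag = int(1.0 / dt)                       # time lag tau = 1
c0 = xs.var()
c_tau = np.mean(xs[:-lag] * xs[lag:])
print(c_tau / c0, np.exp(-gamma * 1.0))   # both close to exp(-0.5)
```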

Generally, the potential predictability can be defined as the convergence time of the initial distribution to the equilibrium one. To quantify the rate of convergence of one-dimensional distributions to the equilibrium ones, the concept of entropy can be used. If the information entropy $S = -\int \rho \ln \rho\, d\alpha$ is taken as a measure of predictability, then for the Gaussian distribution $\rho = (1/\sqrt{2\pi\sigma^2})\,e^{-(\alpha - \bar{\alpha})^2/(2\sigma^2)}$ the information entropy can be expressed as $S = \ln\sigma + C$. It can be shown that the variance and, therefore, the entropy depend directly on the Lyapunov exponents [39]. To study the predictability of the climate system, the relative entropy $S_r = \int \rho \ln(\rho/\bar{\rho})\, d\alpha$, where $\bar{\rho}$ is the equilibrium PDF, is a more suitable measure [40]. Relative entropy is invariant with respect to nonlinear transformations of $\alpha$ and tends to zero as $\rho \to \bar{\rho}$ at $t \to \infty$.
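For Gaussian densities, the relative entropy has a simple closed form, which makes the convergence $S_r \to 0$ easy to trace. The sketch below (all parameter values are illustrative assumptions) uses the moments of the linear model (46), whose mean and variance relax exponentially to their equilibrium values:

```python
import numpy as np

# Relative entropy of N(mu, v) with respect to the equilibrium N(mu_bar, v_bar):
# S_r = 0.5 * ( ln(v_bar/v) + (v + (mu - mu_bar)^2)/v_bar - 1 ).
def rel_entropy(mu, v, mu_bar, v_bar):
    return 0.5 * (np.log(v_bar / v) + (v + (mu - mu_bar) ** 2) / v_bar - 1.0)

gamma, v_eq, x0 = 0.5, 2.0, 3.0       # illustrative dissipation, variance, mean
s_r = []
for t in (0.5, 2.0, 8.0):
    mu = x0 * np.exp(-gamma * t)                  # mean relaxes to 0
    v = v_eq * (1.0 - np.exp(-2.0 * gamma * t))   # variance relaxes to v_eq
    s_r.append(rel_entropy(mu, v, 0.0, v_eq))

print(s_r)    # monotonically decreasing toward zero
```

Once $S_r$ is indistinguishable from zero, the forecast distribution carries no information beyond climatology, i.e. potential predictability is exhausted.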

4.2. Predictability of the second kind

Predictability of the second kind relates to the predictability of changes in the climate system caused by infinitesimal perturbations in the parameters that describe the external forcing. Climate prediction does not involve forecasting weather conditions at a certain geographical region or globally. On the contrary, climate prediction aims to forecast the statistics of the climate system averaged over a sufficiently long period of time. So, we are interested in how external perturbations affect certain aspects of the climate statistics, such as the first moment $\bar{x}$ (mean) and/or the second moment $\sigma_x^2$ (variance). One of the most important problems in the exploration of predictability of the second kind is to distinguish the response signal of the climate system to the perturbed external forcing from the noise in the model output. The signal-to-noise ratio can be used to judge the usefulness of the obtained climate system response. Thus, the predictability of the second kind is mathematically reduced to finding the response function of the climate system model [39].

Consider the following finite‐dimensional dynamical system that is controlled by some external forcing f (e.g. the concentration of carbon dioxide in the atmosphere):

$$d\mathbf{x}/dt = F(\mathbf{x}) + \mathbf{f}, \qquad \mathbf{x}|_{t=0} = \mathbf{x}_0. \tag{55}$$

Suppose that system (55) possesses the attractor $A$ and let $\mu$ be its invariant measure. The behaviour of this system will be explored on the attractor $A$. Since system (55) is assumed a priori to possess the property of ergodicity, its statistical characteristics can be calculated by averaging along a single, sufficiently long trajectory. Thus, the average state $\langle\mathbf{x}\rangle$ and the variance $\sigma_x^2$ of system (55) are defined, respectively, as

$$\langle\mathbf{x}\rangle = \lim_{T\to\infty}\frac{1}{T}\int_0^T \mathbf{x}(t)\,dt = \int_A \mathbf{x}\, d\mu, \qquad \sigma_x^2 = \int_A (\mathbf{x} - \langle\mathbf{x}\rangle)^2\, d\mu. \tag{56}$$

Let system (55) be perturbed by an infinitesimal disturbance $\delta\mathbf{f}$ in the external forcing, such that $\|\delta\mathbf{f}\| \ll \|\mathbf{f}\|$:

$$d\mathbf{x}^*/dt = F(\mathbf{x}^*) + \mathbf{f} + \delta\mathbf{f}. \tag{57}$$

For this system, $\langle\mathbf{x}^*\rangle = \int_{A^*} \mathbf{x}^*\, d\mu^*$ and $\sigma_{x^*}^2 = \int_{A^*} (\mathbf{x}^* - \langle\mathbf{x}^*\rangle)^2\, d\mu^*$. Let us introduce the new variable $\mathbf{x}'(t) = \mathbf{x}^*(t) - \mathbf{x}(t)$. Assuming that $\mathbf{x}'$ is sufficiently small, then, combining (55) and (57), one can obtain the following linear equation for the variable $\mathbf{x}'$:

$$d\mathbf{x}'/dt = J(\mathbf{x})\,\mathbf{x}' + \delta\mathbf{f}, \tag{58}$$

where $J(\mathbf{x}) = \partial F/\partial\mathbf{x}$ is the Jacobian matrix. Let $\delta\mathbf{f}$ be a step function activated at $t = 0$; then the solution to Eq. (58) can be written in terms of the Green's function:

$$\mathbf{x}'(t) = \int_0^t G(t, t')\,\delta\mathbf{f}(t')\, dt'. \tag{59}$$

The operator $R = \int_0^t G(t, t')\,dt'$ is the sought-for response function (operator). If at $t = 0$ the distribution of initial states is identical for both the unperturbed (55) and perturbed (57) systems, then one can calculate the average response operator:

$$\langle R\rangle = \int_0^t \langle G(t, t')\rangle\, dt' = \int_0^t \langle G(t - t')\rangle\, d(t - t'). \tag{60}$$

By averaging both sides of Eq. (59), one gets the following linear equation for calculating the system's mean response to the external forcing:

$$\langle\mathbf{x}'\rangle = \langle R\rangle\,\delta\mathbf{f}. \tag{61}$$

Suppose that system (55) is regular, i.e. the quadratic conservation law is valid for it and the system satisfies the Liouville equation for incompressibility in phase space. Assume also that the system is in statistical equilibrium. Taking into consideration the fluctuation-dissipation theorem [41], the average impulse response operator of a regular system in equilibrium is expressed via the system's statistics:

$$\langle G(t, t')\rangle = \langle G(t - t')\rangle = C(t - t')\, C^{-1}(0), \tag{62}$$

where $C(t - t') = \langle\mathbf{x}(t)\,\mathbf{x}^{\mathrm{T}}(t')\rangle$ is the system's auto-correlation matrix with time lag $\tau = t - t'$. Now we can combine (60) and (62) to get the following well-known formula [42]:

$$\langle\mathbf{x}'\rangle = \left[\int_0^\infty C(t)\, C^{-1}(0)\, dt\right] \delta\mathbf{f}. \tag{63}$$

Thus, the mean response of the climate system to an external forcing is determined by observations of the unperturbed climate variability.
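This statement can be illustrated with a one-variable linear 'climate': for $dx/dt = -\gamma x + f + \eta$ the true mean response to a constant forcing increment $\delta f$ is $\delta f/\gamma$, and the sketch below (all parameters are illustrative assumptions) recovers the same response operator from unperturbed statistics alone via formula (63):

```python
import numpy as np

# Fluctuation-dissipation sketch: estimate the response operator
# R = int_0^inf C(t)/C(0) dt from an unperturbed run of dx/dt = -gamma*x + eta.
# For this linear model the exact answer is 1/gamma.
gamma, dt, n = 0.5, 0.01, 400_000
rng = np.random.default_rng(2)
x = np.zeros(n)
noise = np.sqrt(2.0 * dt) * rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - gamma * x[i] * dt + noise[i]

xs = x[n // 10:]                          # discard the spin-up segment
c0 = xs.var()
max_lag = int(10.0 / dt)                  # integrate lags up to t = 10 >> 1/gamma
R = sum(np.mean(xs[:-k] * xs[k:]) for k in range(1, max_lag)) * dt / c0

delta_f = 0.1                             # hypothetical forcing increment
print(R, 1.0 / gamma, R * delta_f)        # estimated vs exact operator; mean response
```

The estimate uses only lagged covariances of the unperturbed run, i.e. exactly the climatological information that formula (63) says is sufficient.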


5. Concluding remarks

The prediction of climate change caused by natural processes and human-induced drivers is one of the most critical scientific issues facing mankind in the 21st century. Computer-simulated climate models represent a very powerful and, perhaps, the only research instrument for studying climate and its dynamics. One of the key components of climate models, namely the model of the atmospheric general circulation, currently also serves as the primary tool for numerical weather prediction all around the globe. However, the climate (atmospheric) system trajectory calculated via numerical integration of the multi-dimensional partial differential equations that describe the system's evolution is unstable with respect both to perturbations (errors) in the initial conditions and to infinitesimal external forcing expressed by some model parameters and/or boundary conditions. This instability limits the time horizon of the validity of a climate (weather) forecast and leads to the predictability problem.

In this chapter, the climate system is considered as a complex adaptive dynamical system that possesses a number of specific properties such as, for example, dissipativity, nonlinearity and chaoticity. From this perspective, the climate predictability problem is best discussed and analysed by formally examining two kinds of predictability. The first kind of predictability refers to the initial-value problem (estimating the impact of perturbations in the initial conditions on the forecast skill), while the second kind relates to the boundary-value problem (estimating the impact of external forcing on the system's behaviour).

References

  1. Washington W.M., Parkinson C.L. Introduction to Three-Dimensional Climate Modelling. 2nd ed. Sausalito, California: University Science Books; 2005. 368 pp.
  2. McGuffie K., Henderson-Sellers A. The Climate Modelling Primer. 4th ed. New York: J. Wiley & Sons; 2014. 456 pp.
  3. Randall D.A., Wood R.A., Bony S., et al. Climate models and their evaluation. In: Solomon S., Qin D., Manning M., et al., editors. Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge and New York: Cambridge University Press; 2007. 74 pp.
  4. Coiffier J. Fundamentals of Numerical Weather Prediction. Cambridge: Cambridge University Press; 2012. 368 pp.
  5. Parker W.S. Confirmation and adequacy-for-purpose in climate modelling. Proc. Aristotel. Soc. Suppl. Vol. 2009; 83: 233–249.
  6. Lorenz E.N. Deterministic nonperiodic flow. J. Atmos. Sci. 1963; 20: 130–141.
  7. Selvam A. Chaotic Climate Dynamics. Bristol, UK: Luniver Press; 2007. 156 pp.
  8. Thompson P.D. Uncertainty of initial state as a factor in the predictability of large-scale atmospheric flow patterns. Tellus. 1957; 9: 275–295.
  9. Lorenz E.N. A study of the predictability of a 28-variable atmospheric model. Tellus. 1965; 17: 321–333.
  10. Smagorinsky J. Problems and promises of deterministic extended range forecasting. Bull. Amer. Meteor. Soc. 1969; 50: 286–312.
  11. Leith C.E. Predictability in theory and practice. In: Hoskins B.J., Pearce R.P., editors. Large-Scale Dynamical Processes in the Atmosphere. New York: Academic Press; 1983. pp. 365–383.
  12. Fraedrich K. Estimating weather and climate predictability on attractors. J. Atmos. Sci. 1987; 44: 722–728.
  13. Dalcher A., Kalnay E. Error growth and predictability in operational ECMWF forecasts. Tellus. 1987; 39A: 474–491.
  14. Chou J.F. Predictability of the atmosphere. Adv. Atmos. Sci. 1989; 6: 335–346.
  15. Farrell B.F. Small error dynamics and the predictability of atmospheric flows. J. Atmos. Sci. 1990; 47: 2409–2416.
  16. Dymnikov V.P. Potential predictability of large-scale atmospheric processes. Atmos. Oceanic Phys. 2004; 40: 579–585.
  17. Palmer T.N. Predictability of weather and climate: From theory to practice. In: Palmer T., Hagedorn R., editors. Predictability of Weather and Climate. New York: Cambridge University Press; 2006. pp. 1–10.
  18. DelSole T. Predictability and information theory. Part I: Measures of predictability. J. Atmos. Sci. 2004; 61: 2425–2440.
  19. Lorenz E.N. Climate Predictability: The Physical Basis of Climate Modelling. Geneva: World Meteorological Organization; GARP Publication Series; 1975. Vol. 16, pp. 132–136.
  20. Meadows D., Wright D. Thinking in Systems: A Primer. Vermont: Chelsea Green Publishing; 2008.
  21. Waldrop M.M. Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon & Schuster; 1993. 380 pp.
  22. Gros C. Complex and Adaptive Dynamical Systems: A Primer. Berlin: Springer-Verlag; 2010. 326 pp.
  23. Baede A.P.M., Ahlonsou E., Ding Y., Schimel D. The climate system: An overview. In: Houghton J.T., Ding Y., Griggs D.J., et al., editors. Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press; 2001. pp. 85–98.
  24. Dijkstra H.A. Nonlinear Climate Dynamics. New York: Cambridge University Press; 2013. 367 pp.
  25. Trenberth K.E. Climate System Modelling. Cambridge: Cambridge University Press; 2010. 820 pp.
  26. Barry R.G., Hall-McKim E.A. Essentials of the Earth's Climate System. Cambridge: Cambridge University Press; 2014. 271 pp.
  27. Marshall J., Plumb R.A. Atmosphere, Ocean and Climate Dynamics. San Diego: Academic Press; 2016. 489 pp.
  28. Holton J.R. An Introduction to Dynamic Meteorology. New York: Academic Press; 2004. 535 pp.
  29. Pedlosky J. Geophysical Fluid Dynamics. New York: Springer-Verlag; 1992. 710 pp.
  30. Dymnikov V.P., Filatov A.N. Mathematics of Climate Modelling. Boston: Birkhäuser; 1997. 264 pp.
  31. Washington W.M., VerPlank L. A Description of Coupled General Circulation Models of the Atmosphere and Oceans Used for Carbon Dioxide Studies. Boulder, Colorado: NCAR (National Centre for Atmospheric Research)/TN-271; 1986. 34 pp.
  32. Mesinger F., Arakawa A. Numerical Methods Used in Atmospheric Models. Geneva: World Meteorological Organization; GARP Publication Series; 1979. Vol. 17, 64 pp.
  33. Oseledets V.I. Multiplicative ergodic theorem: Characteristic Lyapunov exponents of dynamical systems. Trans. Moscow Math. Soc. 1968; 19: 179–210.
  34. Kaplan J.L., Yorke J.A. Chaotic behaviour in multidimensional difference equations. In: Peitgen H.-O., Walther H.-O., editors. Functional Differential Equations and Approximations of Fixed Points. Lecture Notes in Mathematics. Berlin: Springer-Verlag; 1979. pp. 228–237.
  35. Pesin Ya.B. Characteristic Lyapunov exponents and smooth ergodic theory. Russian Math. Surveys. 1977; 32: 55–114.
  36. Palmer T.N. Predicting Uncertainty in Forecasts of Weather and Climate. ECMWF Technical Memorandum No. 294. Reading: ECMWF (European Centre for Medium-Range Weather Forecasts); 1999. 64 pp.
  37. Zeeman E.C. Stability of dynamical systems. Nonlinearity. 1988; 1: 115–155.
  38. Noarov A.I. Sufficient condition for the existence of a stationary solution to the Fokker-Planck equation. J. Comput. Math. Physics. 1997; 5: 587–598.
  39. Dymnikov V.P. Stability and Predictability of Large-Scale Atmospheric Processes. Moscow: INM RAS; 2007. 283 pp.
  40. Kleeman R. Measuring dynamical prediction utility using relative entropy. J. Atmos. Sci. 2002; 59: 2057–2072.
  41. Kraichnan R.H. Classical fluctuation-relaxation theorem. Phys. Rev. 1959; 113: 1181–1182.
  42. Leith C.E. Climate response and fluctuation dissipation. J. Atmos. Sci. 1975; 32: 2022–2026.
