Stochastic Theory of Coarse-Grained Deterministic Systems: Martingales and Markov Approximations

Written By

Michel Moreau and Bernard Gaveau

Submitted: 07 November 2020 Reviewed: 08 January 2021 Published: 18 February 2021

DOI: 10.5772/intechopen.95903

From the Edited Volume

Advances in Dynamical Systems Theory, Models, Algorithms and Applications

Edited by Bruno Carpentieri

Abstract

Many works have been devoted to showing that Thermodynamics and Statistical Physics can be rigorously deduced from an exact underlying classical Hamiltonian dynamics, and to resolving the related paradoxes. In particular, the concept of equilibrium state and the derivation of Master Equations should result from purely Hamiltonian considerations. In this chapter, we reexamine this problem, following the point of view developed by Kolmogorov more than 60 years ago, in great part known from the work published by Arnold and Avez in 1967. Our setting is a discrete time dynamical system, namely the successive iterations of a measure-preserving mapping on a measure space, generalizing Hamiltonian dynamics in phase space. Using the notion of Kolmogorov entropy and martingale theory, we prove that a coarse-grained description both in space and in time leads to an approximate Master Equation satisfied by the probability distribution of partial histories of the coarse-grained state.

Keywords

  • stochastic theory
  • coarse-grained deterministic systems
  • Markov processes
  • martingales

1. Introduction

It is generally admitted that Thermodynamics and Statistical Physics could be deduced from an exact classical or quantum Hamiltonian dynamics, so that the various paradoxes related to irreversibility could also be explained, and nonequilibrium situations could be rigorously studied as well. These questions have been and still are discussed by many authors (see, for instance, Refs. [1, 2, 3, 4] and many classical textbooks, for instance [5, 6, 7, 8, 9, 10]), who have introduced various plausible hypotheses [7, 8, 9, 10, 11, 12, 13, 14], related to the ergodic principle [8, 9, 10, 11], to solve them. It seems that there are two major kinds of problems. First, to justify that physical systems can reach an equilibrium state when they are isolated, or in contact with a thermal bath (which remains to be defined). Secondly, to justify various types of reduced stochastic dynamics, depending on the phenomena to be described: Boltzmann equations, Brownian motions, fluid dynamics, Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchies, etc.: see for instance Refs. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 15, 16]. Concerning the first type of problems (reaching an equilibrium, if any), very rough estimations show [17] that the time scales to reach equilibrium, using only Hamiltonian dynamics and measure inaccuracies, are extremely large, contrary to everyday experience, and quantum estimations are even worse [17]. Essentially, these times scale as Poincaré recurrence times and they increase as an exponential of the number of degrees of freedom (see Section 3 of this chapter for a brief discussion and references).

Here we concentrate on the second type of problems: is it possible to derive a stochastic Markovian process from an “exact” deterministic dynamics, just by coarse graining the microscopic state space? We generalize and complete the formalism recently presented [18] for Hamiltonian systems. Our framework is now more general and applies to all deterministic systems with a measure-preserving dynamics, which, by Liouville's theorem, include Hamiltonian dynamics.

Following Kolmogorov, we start with a measure space with a discrete time dynamics given by the successive iterations of a measure-preserving mapping. The Kolmogorov entropy, or trajectory entropy, was defined by Kolmogorov as an invariant of stationary dynamical systems (see the book by Arnold and Avez [19] for a pedagogical presentation). We follow his work and generalize part of his results. We also use martingale theory [20, 21, 22, 23] to show that the stationary coarse-grained process almost surely tends to a Markov process on partial histories including n successive times, when n tends to infinity. From this result, we show that in the nonstationary situation, the probability distribution of such partial histories approximately satisfies a Master Equation. Its transition probabilities can be computed from the stationary distribution, expressed in terms of the invariant measure. It follows that, with relevant hypotheses, the mesoscopic distribution indeed tends to the stationary distribution, as expected.

Our next step is to coarse grain time as well. The new, coarse-grained time step is now nτ, τ being the elementary time step of the microscopic description, and n being the number of elementary steps necessary to approximately “erase” the memory with a given accuracy. The microscopic dynamics induces a new dynamics on partial histories of length n. We show that it is approximately Markovian if n is large enough. This idea is a generalization of the concept of Brownian motion: a particle in a fluid is submitted to a white noise force which is the result of the coarse-graining of many collisions, and the time step is thus the coarse-graining of many microscopic time steps [8, 24]. The Brownian motion emerges as a time coarse-grained dynamics.

In Section 2, we recall various mathematical concepts (Kolmogorov entropy, martingale theory) and use them to derive the approximate Markov property of the partial histories, and eventually to obtain an approximate Master Equation for the time coarse-grained mesoscopic distribution [18].

In Section 3, we briefly consider the problem of relaxation times and recall very rough estimations showing that an exact Hamiltonian dynamics predicts unrealistic, excessively large relaxation times [17], unless the description is completed by introducing other sources of randomness than the measure inaccuracies leading to space coarse-graining. Note that, following Kolmogorov [19], we do not address the Quantum Mechanics formalism.

2. Microscopic and mesoscopic processes in deterministic dynamics

2.1 Microscopic dynamics: Definitions and notations

It has been shown recently [18] that coarse-grained Hamiltonian systems can be approximated by Markov processes provided that they satisfy reasonable properties, covering many realistic cases. These conclusions can be extended to a large class of deterministic systems generalizing classical Hamiltonian systems, which we now describe. We first specify our hypotheses and notations.

2.1.1 Deterministic microdynamics

Consider a deterministic system S. Its states x, belonging to a state space X, will be called “microstates”, in agreement with the usual vocabulary of Statistical Physics. The deterministic trajectory due to the microscopic dynamics transfers the microstate $x_0$ at time 0 to the microstate $x_t = \varphi^t x_0$ at time t. The evolution function $\varphi^t$ satisfies the usual properties of dynamical systems: $\varphi^t \varphi^s = \varphi^{t+s}$, $\varphi^0 = I$, t and s being real numbers and I being the identity function.

The dynamics is often invariant under time reversal, as assumed in many works on Statistical Physics: we refer to classical textbooks on the subject for details [5, 6, 7, 8], but we will not use such properties in this chapter.
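
As a concrete illustration (our own minimal sketch, not taken from the chapter), Arnold's cat map on the unit square is an invertible, Lebesgue-measure-preserving map that can play the role of the evolution function φ, with the iteration index as discrete time:

```python
import numpy as np

# Sketch of a measure-preserving deterministic dynamics (assumed example):
# Arnold's cat map on the unit square preserves the Lebesgue measure and is
# invertible, so it can stand in for the evolution function phi.
def phi(state):
    x, y = state
    return np.array([(x + y) % 1.0, (x + 2.0 * y) % 1.0])

def phi_inverse(state):
    x, y = state
    return np.array([(2.0 * x - y) % 1.0, (y - x) % 1.0])

# The semigroup property phi^t phi^s = phi^{t+s} is realized by iteration.
x = np.array([0.2, 0.7])          # microstate x_0
for _ in range(5):                # x_5 = phi^5 x_0
    x = phi(x)
for _ in range(5):                # back to x_0 (up to rounding)
    x = phi_inverse(x)
```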

2.1.2 Microscopic distribution

Assume that the exact microstate $x_0$ is unknown at time 0, but is distributed according to the probability measure μ on the phase space X. The microscopic probability distribution $\mu_t$ at time t is given by

$$\mu_t(A) = \mu(\varphi^{-t} A) \tag{1}$$

for any measurable subset A of X. If μ is stationary, it is preserved by the dynamics: $\mu_t(A) = \mu(A)$. This condition, however, is not necessarily satisfied, in particular for physical systems during their evolution.

We will focus on two important cases:

  1. the finite case: X is finite and consists of N microstates.

  2. the absolutely continuous case: $X \subset \mathbb{R}^n$, where (i) $\mathbb{R}$ is the set of real numbers and n is an integer (usually very large, and even in the case of Hamiltonian dynamics), and (ii) the measure μ is absolutely continuous with respect to the Lebesgue measure ω on $\mathbb{R}^n$: there exists an integrable probability density p(x) such that for any measurable subset A of X

$$\mu(A) = \int_A p(x)\, dx. \tag{2}$$

Furthermore, we assume that (iii) the Lebesgue measure of X (or volume of X) $V = \mathrm{vol}(X) = \int_X dx$ is finite, and (iv) the Lebesgue measure ω is preserved by the dynamics for any t and any measurable subset A of X:

$$\mathrm{vol}(A) = \mathrm{vol}(\varphi^t A). \tag{3}$$

The last two assumptions obviously generalize basic properties of Hamiltonian dynamics in a finite volume of phase space. Thus, by (1) and (3), the probability density is conserved along any trajectory: at time t the probability density is

$$p(x, t) = p_0(\varphi^{-t} x) \equiv p(\varphi^{-t} x). \tag{4}$$

2.1.3 Initial microscopic distribution: The stationary situation

Suppose that S is an isolated physical system and no observation was made on S at time 0 or before. Then, in the absence of any knowledge on S, we admit that at the initial time S is distributed according to the only unbiased probability law, which is the uniform law. This is clearly justified in the finite case, according to the physical meaning traditionally given to probability: in fact, attributing different probabilities to two distinct microstates of X would imply that some measurement would allow one to distinguish them objectively, which is not the case at time 0.

In the absolutely continuous case, initial uniformity is less obvious: it amounts to assuming that the system should be found with equal probability in two regions of the state space with equal volumes if no information allows one to give preference to any of these regions. This is of course a subjective assertion, but for Hamiltonian systems it agrees with the semi-quantum principle which asserts that, in canonical coordinates, equal volumes of the phase space correspond to equal numbers of quantum states.

Another way for choosing the initial probability distribution is to make use of Jaynes’ principle [25], which is to maximize the Shannon entropy of the distribution under the known constraints over this distribution: in the present case of an isolated system which has not been previously observed, this principle also leads to the uniform law. It is not really better founded than the previous, elementary reasoning, but it may be more satisfying and it can be safely used in more complex situations. We refer to most textbooks on statistical mechanics for discussing these well-known, basic questions.

The uniform distribution in a finite space, either discrete or absolutely continuous, is clearly stationary. In addition to the previous hypotheses, we will assume that the space X is indecomposable [26]: the only subsets of X which are preserved by the evolution function φt are the empty set ∅ and X itself. Then, the stationary probability distribution is unique [18].

For simplicity, we will henceforth assume that the phase space X is finite.

Initial, nonstationary situation. In certain situations, the system can be prepared by submitting it to specific constraints before the initial time 0. Then it may not be distributed uniformly in X at t = 0. We will consider this case in the next paragraph.

2.2 Mesoscopic distributions

2.2.1 Mesoscopic states

Because of the imprecision of physical observations, it is impossible to determine exactly the microstate of the system, but it is currently admitted that the available measuring instruments allow one to define a finite partition of X into M subsets $(i_k)$, k = 1, 2, …, M, such that it is impossible to distinguish two microstates belonging to the same subset i. So, in practice the best possible description of the system consists in specifying the subset i where its microstate x lies: i can be called the mesostate of the system. The probability for the system to be in the mesostate i at time t will be denoted p(i,t). It is not sure, however, that two microstates belonging to two different mesostates can always be distinguished: this point will be considered in Section 3.2.2.

Remark: for convenience, we use the same letter p to denote the probability in a countable state space, as well as the probability density in the continuous case. This creates no confusion when the variable type is explicitly mentioned. This is the case now since, as mentioned previously, we assume that the space X is discrete. The transposition to the continuous case is generally obvious, although the complete derivations may be more difficult.

2.2.2 The stationary situation

If time 0 is the beginning of all observations and actions, we assume that the initial microscopic distribution μ is uniform and stationary, as discussed previously, and the probability to find system S in the mesostate $i_0$ at time 0 is $p(i_0, 0) = \mu(i_0)$. The probability to be in i at time t is $p^0(i, t) = \mu_t(i) = \mu(\varphi^{-t} i)$. The stationary joint probability to find S in $i_0$ at time 0 and in i at time t is

$$p^0(i_0, 0; i, t) = \mu(\varphi^{-t} i \cap i_0) = \mu(i \cap \varphi^t i_0), \tag{5}$$

and the conditional probability of finding S in i at time t, knowing that it was in $i_0$ at time 0, is

$$p^0(i, t \mid i_0, 0) = \frac{p^0(i_0, 0; i, t)}{p^0(i_0, 0)} = \frac{\mu(\varphi^{-t} i \cap i_0)}{\mu(i_0)} = \frac{\mu(i \cap \varphi^t i_0)}{\mu(i_0)}. \tag{6}$$

Similarly, the stationary n-times joint probability and related conditional probabilities are readily obtained from

$$p^0(i_0, 0; i_1, t_1; \ldots; i_{n-1}, t_{n-1}) = \mu(\varphi^{-t_{n-1}} i_{n-1} \cap \cdots \cap \varphi^{-t_1} i_1 \cap i_0), \tag{7}$$

with, for any t: $p^0(i_0, t; i_1, t_1 + t; \ldots; i_{n-1}, t_{n-1} + t) = p^0(i_0, 0; i_1, t_1; \ldots; i_{n-1}, t_{n-1})$.

For the sake of simplicity, we will discretize the times $0 < t_1 < t_2 < \ldots$, and write $t_i = k_i \tau$, $k_i$ being a nonnegative integer and τ a constant time step, which will be taken as the time unit.
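
Under the assumption of a finite partition into equal-area cells, the stationary probabilities (5) and (6) can be estimated by direct Monte Carlo sampling of the invariant measure. The sketch below uses the hypothetical cat-map-on-a-grid setup introduced above; the names phi, cell and the grid size are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M_SIDE = 4                         # hypothetical 4 x 4 grid of mesostates

def phi(p):                        # cat map again (assumed example)
    x, y = p
    return np.array([(x + y) % 1.0, (x + 2.0 * y) % 1.0])

def cell(p):                       # mesostate index of a microstate
    return int(p[0] * M_SIDE) * M_SIDE + int(p[1] * M_SIDE)

def joint_probability(i0, i, t, samples=100_000):
    """Monte Carlo estimate of Eq. (5): mu(phi^{-t} i ∩ i0)."""
    hits = 0
    for _ in range(samples):
        x = rng.random(2)          # x_0 uniform: the stationary measure mu
        if cell(x) != i0:
            continue
        for _ in range(t):
            x = phi(x)
        hits += (cell(x) == i)
    return hits / samples

mu_i0 = 1.0 / M_SIDE**2                           # mu(i0) for equal-area cells
p_cond = joint_probability(0, 5, t=3) / mu_i0     # Eq. (6)
```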

2.2.3 Non stationary situation

If S is a physical system, interactions may exist before or at time 0, so that S can be constrained to lie in a certain subset A of X at time 0. However, since it is not possible to distinguish two microstates corresponding to the same mesostate, A should be a union of mesostates, or at least one mesostate. If it is known that at time 0 the microstate x of the system belongs to the mesostate i, we should assume that the initial microscopic distribution is uniform over i, since no available observation can give further information on x: so, in the discrete case, if n(i) is the number of microstates included in i and $\chi_i(x)$ the characteristic function of i,

$$p(x, 0 \mid x \in i) = \frac{1}{n(i)}\, \chi_i(x). \tag{8}$$

In the absolutely continuous case, the similar conditional density is obtained in the same way, replacing the number of microscopic states contained in the mesostate i by its volume v(i). For simplicity, we continue with the discrete case, with obvious adaptations to the continuous case.

If one only knows the mesoscopic initial distribution p(i,0), i.e., the probability that at time 0 the system belongs to i, for each mesostate i of M, the initial microscopic distribution becomes

$$p(x, 0) = \sum_i \frac{1}{n(i)}\, p(i, 0)\, \chi_i(x) = \sum_i \frac{p(i, 0)}{\mu(i)}\, \frac{\chi_i(x)}{N}, \tag{9}$$

N being the total number of microstates in X.

The n-times nonstationary mesoscopic probabilities are obtained from (9)

$$p_n(i_0, 0; i_1, 1; \ldots; i_{n-1}, n-1) = p(i_0 \cap \varphi^{-1} i_1 \cap \cdots \cap \varphi^{-n+1} i_{n-1}, 0) = \frac{p(i_0, 0)}{\mu(i_0)}\, \frac{n(i_0 \cap \varphi^{-1} i_1 \cap \cdots \cap \varphi^{-n+1} i_{n-1})}{N}, \tag{10}$$

where n(A) is the number of microstates belonging to some subset A of X. So

$$p_n(i_0, 0; i_1, 1; \ldots; i_{n-1}, n-1) = \mu(i_0 \cap \varphi^{-1} i_1 \cap \cdots \cap \varphi^{-n+1} i_{n-1})\, \frac{p(i_0, 0)}{\mu(i_0)}, \tag{11}$$

and all multiple probabilities follow, for instance

$$p_{n-1}(i_1, 1; \ldots; i_n, n) = \sum_{i_0} \mu(i_0 \cap \varphi^{-1} i_1 \cap \cdots \cap \varphi^{-n} i_n)\, \frac{p(i_0, 0)}{\mu(i_0)}. \tag{12}$$

The corresponding process is generally not Markovian. For instance, if $i_0 \cap \varphi^{-1} i_1 \neq \emptyset$, $i_1 \cap \varphi^{-1} i_2 \neq \emptyset$ and $i_0 \cap \varphi^{-2} i_2 = \emptyset$, it is easily seen that $p(i_2, 2 \mid i_1, 1; i_0, 0) = 0$ but $p(i_2, 2 \mid i_1, 1) \neq 0$.

From the definition of the relative probabilities, one can formally write

$$p(i_2, t_2) = \sum_{i_1} p(i_2, t_2 \mid i_1, t_1)\, p(i_1, t_1), \tag{13}$$

but in general this equation is useless, since the conditional probability p(i2, t2|i1, t1) cannot be computed independently of p(i1, t1).
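
The failure of the Markov property can be checked empirically by comparing the two conditional probabilities on a sampled trajectory ensemble; a minimal sketch, again with the assumed cat-map-on-a-grid example (mesostate indices chosen arbitrarily):

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(1)

def phi(p):
    x, y = p
    return np.array([(x + y) % 1.0, (x + 2.0 * y) % 1.0])

def cell(p, m=4):
    return int(p[0] * m) * m + int(p[1] * m)

triples, pairs = Counter(), Counter()
for _ in range(200_000):
    x0 = rng.random(2)
    x1, x2 = phi(x0), phi(phi(x0))
    i0, i1, i2 = cell(x0), cell(x1), cell(x2)
    triples[(i0, i1, i2)] += 1
    pairs[(i1, i2)] += 1

def p_cond2(i2, i1, i0):           # estimate of p(i2,2 | i1,1; i0,0)
    n01 = sum(c for (a, b, _), c in triples.items() if (a, b) == (i0, i1))
    return triples[(i0, i1, i2)] / n01 if n01 else float("nan")

def p_cond1(i2, i1):               # estimate of p(i2,2 | i1,1)
    n1 = sum(c for (b, _), c in pairs.items() if b == i1)
    return pairs[(i1, i2)] / n1 if n1 else float("nan")

# For a genuinely Markov process these would agree for every triple;
# here they generally differ, reflecting the memory of the coarse graining.
print(p_cond2(5, 1, 0), p_cond1(5, 1))
```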

It results from (11) that the nonstationary conditional probabilities, conditioned by the whole past up to time 0, are identical to the corresponding stationary probabilities: as an example

$$p(i_n, n \mid i_{n-1}, n-1; \ldots; i_0, 0) = p^0(i_n, n \mid i_{n-1}, n-1; \ldots; i_0, 0) = \frac{\mu(i_0 \cap \varphi^{-1} i_1 \cap \cdots \cap \varphi^{-n} i_n)}{\mu(i_0 \cap \varphi^{-1} i_1 \cap \cdots \cap \varphi^{-n+1} i_{n-1})}. \tag{14}$$

We will make use of this simple but important property later.

2.3 Entropy of the mesoscopic process resulting from deterministic, microscopic system

Kolmogorov and other authors [19] studied the entropy and ergodic properties of the stationary mesoscopic process defined previously, following methods introduced by Shannon in the framework of signal theory [27, 28, 29, 30]. These methods, and part of Kolmogorov’s results, can be extended to the nonstationary process (11).

2.3.1 The n-times entropy and the instantaneous entropy of the mesoscopic system

Following Kolmogorov, we consider the Shannon entropy [27, 28, 29, 30] of the trajectory $(i)_n = (i_0, \ldots, i_{n-1})$ in the phase space

$$S(p_n) = -\sum_{i_0, \ldots, i_{n-1}} p_n(i_0, 0; \ldots; i_{n-1}, n-1) \ln p_n(i_0, 0; \ldots; i_{n-1}, n-1). \tag{15}$$

On the other hand, the new information obtained by observing the system in the mesoscopic state $i_n$ at time n, knowing that it was in the respective states $i_0, \ldots, i_{n-1}$ at the prior times 0, …, n−1, will be called the instantaneous entropy

$$s_n(p) = S(p_{n+1}) - S(p_n) = -\sum_{i_0, \ldots, i_n} p(i_0, 0; \ldots; i_n, n) \ln p(i_n, n \mid i_{n-1}, n-1; \ldots; i_0, 0) = \sum_{i_0, \ldots, i_{n-1}} p(i_0, 0; \ldots; i_{n-1}, n-1)\, S(p_n \mid i_{n-1}, n-1; \ldots; i_0, 0), \tag{16}$$

where p denotes the infinite process. The properties of $S(p_n)$ and $s_n(p)$ have been extensively studied by Kolmogorov and other authors in the case of the stationary process (6) [19]: they are summarily mentioned in Section 2.4. They are not necessarily valid for the nonstationary process.
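
Assuming the stationary process is ergodic, the entropies (15) and (16) can be estimated from a single long mesostate trajectory by counting blocks of n successive symbols; a minimal sketch (function names are ours):

```python
from collections import Counter
import math

def block_entropy(symbols, n):
    """Estimate S(p_n), Eq. (15), from one long stationary trajectory."""
    counts = Counter(tuple(symbols[k:k + n])
                     for k in range(len(symbols) - n + 1))
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total)
                for c in counts.values())

def instantaneous_entropy(symbols, n):
    """s_n(p) = S(p_{n+1}) - S(p_n), Eq. (16); for increasing n this
    estimate decreases toward the Kolmogorov entropy of Section 2.4."""
    return block_entropy(symbols, n + 1) - block_entropy(symbols, n)
```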

2.3.2 Maximizing the n-times entropy of the mesoscopic system: The “Markov scheme”

If one knows the first two distributions $p_1$ and $p_2$, one can mimic the exact mesoscopic distributions $p_n$ by using Jaynes' principle, maximizing the entropy $S(q_n)$ of a distribution $q_n$ under the constraints $q_1 = p_1$ and $q_2 = p_2$. Then it is found that the optimal distribution $q_n$ is the Markov distribution $\bar{q}_n$ satisfying these constraints [18].

It is shown in Ref. [18] that for n > 2, both the n-times entropy $S_n(\bar{q})$ and the instantaneous entropy $s_n(\bar{q})$ are larger than the corresponding entropies $S_n(p)$ and $s_n(p)$ of the exact process p, except if p is Markov: $p = \bar{q}$.

The Markov process q ¯ n is not really an approximation of the mesoscopic process p, because q ¯ n does not tend to pn when n → ∞. Approximating the exact mesoscopic process by a Markov process will be the main purpose of the next section.

2.4 Entropy and memory in the stationary situation

2.4.1 Kolmogorov entropy of the stationary process

Here we consider the stationary process arising from the initial uniform microscopic distribution μ(x), when the n-times stationary probability is $p_n^0$, given by (7). For the sake of simplicity we omit the index 0 in the present section, unless otherwise specified. It can be shown [19] that the entropy $S_n(p)$ is an increasing, concave function of n:

$$s_n \equiv S_{n+1}(p) - S_n(p) \geq 0, \tag{17}$$
$$s_n - s_{n-1} = S_{n+1}(p) - 2 S_n(p) + S_{n-1}(p) \leq 0. \tag{18}$$

It results from (17) and (18), and also from 2.5.2, that the limits

$$\lim_{n \to \infty} \frac{1}{n} S_n(p) = \lim_{n \to \infty} s_n(p) = s(p) \tag{19}$$

exist: s(p) is the Kolmogorov entropy of the evolution function φ with respect to the partition (i) of the mesoscopic states [19]. More simply, we can call it the entropy of the mesoscopic process.

2.4.2 Memory decrease in the stationary mesoscopic process

It has been proved recently [18] that, although it is in general infinite, the memory of the mesoscopic process fades out with time: for n large enough, if N > n, the probability of $i_N$ at time N conditioned by the n last events is practically equal to the probability at time N conditioned by the whole past down to time 0:

$$p(i_N, N \mid i_{N-1}, N-1; \ldots; i_{N-n}, N-n) \to p(i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0) \quad \text{when } n \to \infty. \tag{20}$$

More precisely, for any ε > 0, there exists a positive integer n such that for any N > n

$$0 < s_n - s_N < \varepsilon, \tag{21}$$

where $s_n$ is the instantaneous entropy given by (16). In fact, let us write

$$\Pi_N(i_N) \equiv p(i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0) = \mu(\varphi^{-N} i_N \mid \varphi^{-N+1} i_{N-1} \cap \cdots \cap i_0); \tag{22}$$
$$\Pi_N^{(n)}(i_N) \equiv p(i_N, N \mid i_{N-1}, N-1; \ldots; i_{N-n}, N-n) = \mu(\varphi^{-N} i_N \mid \varphi^{-N+1} i_{N-1} \cap \cdots \cap \varphi^{-N+n} i_{N-n}). \tag{23}$$

For a given n, formula (23) allows one to define a new process $p^{(n)}$ from the original process p, which can be called “the approximate process of order n” of p (see Section 2.6). It results from (21) and from the stationarity of p that for any ε > 0, there is an integer n(ε), depending only on ε, such that for any integers N > n > n(ε)

$$0 < s_n(p) - s_N(p) = \sum_{i_0, \ldots, i_{N-1}} \mu(i_0 \cap \varphi^{-1} i_1 \cap \cdots \cap \varphi^{-N+1} i_{N-1})\, S_{0,N-1}(\Pi_N \mid \Pi_N^{(n)}) < \varepsilon, \tag{24}$$

where $S_{0,N-1}(\Pi_N \mid \Pi_N^{(n)})$ is the relative entropy of $\Pi_N$ with respect to $\Pi_N^{(n)}$: the middle member of Eq. (24) is the average of this relative entropy over the past of N. Because $s_N(p)$ decreases to a limit $\bar{s}(p)$ when N → ∞, it results that

$$0 < \delta s_n(p) \equiv s_n(p) - \bar{s}(p) \leq \varepsilon \quad \text{if } n > n(\varepsilon). \tag{25}$$

The total variation distance d(P,Q) between two distributions Pj and Qj over the states j of a finite set (j) is

$$d(P, Q) = \frac{1}{2} \sum_j |P_j - Q_j|. \tag{26}$$

Then, the total variation distance $d_{0,N-1}(\Pi_N, \Pi_N^{(n)})$ between $\Pi_N$ and $\Pi_N^{(n)}$ (for a given past trajectory between times 0 and N−1) is related to the relative entropy [18, 31], and it can be concluded that

$$\left\langle d_{0,N-1}(\Pi_N, \Pi_N^{(n)}) \right\rangle^2 \leq \left\langle d_{0,N-1}(\Pi_N, \Pi_N^{(n)})^2 \right\rangle < \varepsilon / 2 \quad \text{if } n(\varepsilon) < n < N. \tag{27}$$
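
The standard relation between total variation distance and relative entropy alluded to here is Pinsker's inequality, $2\, d(P,Q)^2 \leq S(P \mid Q)$; a minimal sketch of both sides (helper names are ours):

```python
import math

def total_variation(P, Q):
    """Eq. (26): d(P, Q) = (1/2) sum_j |P_j - Q_j|."""
    return 0.5 * sum(abs(p - q) for p, q in zip(P, Q))

def pinsker_bound(relative_entropy):
    """d(P, Q) <= sqrt(S(P|Q) / 2), the inequality behind Eq. (27)."""
    return math.sqrt(relative_entropy / 2.0)
```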

2.4.3 Convergence properties of the approximate process

Let us write m = N − n > 0. It follows [18] from (27) that for any fixed m, the total variation distance $d_{0,m+n-1}(\Pi_{m+n}, \Pi_{m+n}^{(n)})$ between the exact and the approximate probabilities tends to 0 in probability when n → ∞:

$$d_{0,m+n-1}(\Pi_{m+n}, \Pi_{m+n}^{(n)}) \xrightarrow{\;p\;} 0 \quad \text{if } n \to \infty. \tag{28}$$

So, the probability that this distance exceeds a given accuracy a > 0 can be made as small as desired by choosing n large enough.

Further results can be obtained by using again the stationarity of the process p. In fact, it can be shown [18] that the sequence of conditional probabilities $\Pi_N$ is a martingale [20, 21, 22]. Then, general results from martingale theory (see below) show that when n → ∞ the distance between the stationary conditional probability $\Pi_{m+n}$ and its approximation $\Pi_{m+n}^{(n)}$ tends to 0 almost surely [18], as well as in probability:

$$d_{0,m+n-1}(\Pi_{m+n}, \Pi_{m+n}^{(n)}) \xrightarrow{\;\text{a.s.}\;} 0 \quad \text{if } n \to \infty. \tag{29}$$

So, the approximation $\Pi_{m+n}^{(n)}$ converges to $\Pi_{m+n}$ for almost all trajectories [18].

We now sketch the derivation of this conclusion from martingale theory.

2.5 Martingale theory and almost sure convergence

For convenience, we first summarize some definitions and results of martingale theory [20, 21, 22], before applying them to the mesoscopic laws of deterministic systems. We refer to [20] for addressing more general cases.

2.5.1 Definitions

  1. simplified definition: a (discrete time) sequence of stochastic variables Xn is a martingale if for all n:

$$\langle |X_n| \rangle < \infty \quad \text{and} \quad \langle X_{n+1} \mid X_n, \ldots, X_1 \rangle = X_n, \tag{30}$$

where $\langle X \rangle$ denotes the average (mathematical expectation) of the stochastic variable X.

  2. more generally (see the general definition, for instance, in [20]):

    If:

    • (Ω, F, P) is a probability space (where Ω is the state space, P is the probability law, and F is the σ-algebra of subsets of Ω on which P is defined),

    • $F_n$ is an increasing sequence of σ-algebras extracted from F ($F_n \subset F_{n+1} \subset \ldots \subset F$), and

    • for all n ≥ 0, $X_n$ is a stochastic variable defined on (Ω, $F_n$, P),

    then the sequence $(X_n)$ is a martingale if $\langle |X_n| \rangle < \infty$ and $\langle X_{n+1} \mid \mathcal{F}_n \rangle = X_n$.

2.5.2 Convergence theorem for martingales

Among the remarkable properties of martingales, the following convergence theorem holds [20, 21]:

If (Xn ) is a positive martingale, the sequence Xn converges almost surely to a stochastic variable X.

So, for almost all trajectories ω, Xn (ω) → X(ω) with probability 1 when n → ∞.

Stronger and more general results can be found in the references.
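
The theorem can be illustrated on a classical textbook example (not from the chapter): in a Pólya urn the fraction of red balls is a positive martingale, and each simulated run settles almost surely to its own random limit:

```python
import numpy as np

rng = np.random.default_rng(2)

# Classical illustration: draw a ball, return it plus one of the same color.
# The fraction X_n of red balls is a positive martingale, so by the
# convergence theorem X_n converges almost surely to a random limit X.
def polya_run(steps=50_000, red=1, black=1):
    for _ in range(steps):
        if rng.random() < red / (red + black):
            red += 1
        else:
            black += 1
    return red / (red + black)

print([round(polya_run(), 3) for _ in range(5)])   # distinct a.s. limits
```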

2.5.3 Application to the nth approximation of the stationary mesoscopic process

The stochastic variable $Y_N = p^0(i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0)$ is a martingale. In fact, because of the stationarity of $p^0$ we have, renumbering the states,

$$p^0(i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0) = p^0(i_0, 0 \mid i_{-1}, -1; \ldots; i_{-N}, -N) \equiv p^0(i_0 \mid \mathcal{F}_N), \tag{31}$$

where $\mathcal{F}_N$ is the σ-algebra generated by $i_{-1}, \ldots, i_{-N}$. Let us write

$$\pi_N = p^0(i_0, 0 \mid i_{-1}, -1; \ldots; i_{-N}, -N) = p^0(i_0 \mid \mathcal{F}_N). \tag{32}$$

We have, because $\mathcal{F}_{N-1} \subset \mathcal{F}_N$,

$$\langle \pi_N \mid \mathcal{F}_{N-1} \rangle = \langle p^0(i_0 \mid \mathcal{F}_N) \mid \mathcal{F}_{N-1} \rangle = p^0(i_0 \mid \mathcal{F}_{N-1}) = \pi_{N-1}. \tag{33}$$

So, $\pi_N$ is a martingale on the σ-algebras $\mathcal{F}_N$ and, by the convergence theorem, it converges almost surely to a limit π when N → ∞.

Now if N > n, let us write m = N − n > 0. Because of the stationarity of $p^0$,

$$p^0(i, n+m \mid i_{n+m-1}, n+m-1; \ldots; i_m, m) = p^0(i, 0 \mid i_{-1}, -1; \ldots; i_{-n}, -n) = \pi_n(i). \tag{34}$$

Thus, for any fixed, positive m

$$\pi_{n+m} - \pi_n \xrightarrow{\;\text{a.s.}\;} 0. \tag{35}$$

The absolute value distance between $\pi_{n+m}$ and $\pi_n$ is obtained by summing $|\pi_{n+m}(i) - \pi_n(i)|$ over the M possible states i. So

$$d_{0,n+m-1}(\Pi_{n+m}, \Pi_{n+m}^{(n)}) = d(\pi_{m+n}, \pi_n) \xrightarrow{\;\text{a.s.}\;} 0 \quad \text{if } n \to \infty, \tag{36}$$

which is (29), one of our main, formal results.

2.6 n-times Markov approximation of the mesoscopic stationary process

Returning to inequalities (21), when the value ε is fixed for obtaining a required precision, the value n = n(ε) is determined and a satisfying approximation of the exact mesoscopic process is obtained by neglecting the memory effects at time differences larger than n [18]. Thus, one replaces $p(i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0)$ by

$$p^{(n)}(i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0) = \begin{cases} p(i_N, N \mid i_{N-1}, N-1; \ldots; i_{N-n}, N-n) & \text{if } N > n \\ p(i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0) & \text{if } N \leq n. \end{cases} \tag{37}$$

With the convention

$$p^{(n)}(i_0, 0; \ldots; i_N, N) = p(i_0, 0; \ldots; i_N, N) \quad \text{if } N \leq n, \tag{38}$$

all the probabilities related to the approximate process p (n) are defined from the probabilities of p: this defines p (n), the approximate process of order n of p. So, p (n) has a finite memory of size n, whereas p has in general an infinite memory.

The process $p^{(n)}$ is a Markov process on the partial trajectories $I_K$ consisting of groups of n successive mesoscopic states

$$I_K = (i_{Kn}, i_{Kn+1}, \ldots, i_{(K+1)n-1}) \in M^n. \tag{39}$$

Its probability distributions can be written in abbreviated notations

$$P_K^{(n)}(I_0, T_0; I_1, T_1; \ldots; I_{K-1}, T_{K-1}) \equiv p^{(n)}(I_0, (0, 1, \ldots, n-1); I_1, (n, n+1, \ldots, 2n-1); \ldots; I_{K-1}, ((K-1)n, \ldots, Kn-1)), \tag{40}$$

$T_K$ being the group of n successive times: $T_K = (Kn, Kn+1, \ldots, (K+1)n-1)$. From the approximation (37) it follows (see Appendix A) that

$$P^0(I_K, T_K \mid I_{K-1}, T_{K-1}; \ldots; I_0, T_0) \approx P^{0(n)}(I_K, T_K \mid I_{K-1}, T_{K-1}), \tag{41}$$

where we now use the upper index 0 in $P^0$ and $P^{0(n)}$ to recall that, in the present section, p is the stationary distribution. Note that, because of this stationarity,

$$P^{0(n)}(I_K, T_K \mid I_{K-1}, T_{K-1}) = P^{0(n)}(I_K, T_1 \mid I_{K-1}, T_0) = P^0(I_K, T_1 \mid I_{K-1}, T_0) \equiv W(I_K \mid I_{K-1}). \tag{42}$$

So, the transition matrix W is well defined from the known stationary distribution $p^0$.

From the approximate relation (41) it follows that the exact stationary process $P^0$ on the partial history $I_K$ during the time interval $T_K$ approximately obeys the n-times Markov Equation (see Section 2.7)

$$P^0(I_K, T_K) \approx \sum_{I_{K-1}} W(I_K \mid I_{K-1})\, P^0(I_{K-1}, T_{K-1}), \tag{43}$$

while the nth approximation $P^{0(n)}$ satisfies (43) exactly.
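
In practice, the transition matrix W of Eq. (42) can be estimated from a long stationary trajectory by counting pairs of consecutive, non-overlapping blocks of n mesostates, and the Master Equation (43) is then a single matrix-vector step over block states. A minimal sketch, under the same ergodicity assumption as above (all names are illustrative):

```python
from collections import Counter, defaultdict

def block_transition_matrix(symbols, n):
    """Sketch of Eq. (42): estimate W(I_K | I_{K-1}) by counting pairs of
    consecutive, non-overlapping blocks of n mesostates (Eq. 39)."""
    blocks = [tuple(symbols[k:k + n])
              for k in range(0, len(symbols) - n + 1, n)]
    counts = defaultdict(Counter)
    for prev, nxt in zip(blocks, blocks[1:]):
        counts[prev][nxt] += 1
    return {prev: {nxt: c / sum(nxts.values())
                   for nxt, c in nxts.items()}
            for prev, nxts in counts.items()}

def master_step(P, W):
    """One step of the n-times Master Equation (43)/(47):
    P(I_K) = sum over I_{K-1} of W(I_K | I_{K-1}) P(I_{K-1})."""
    out = defaultdict(float)
    for prev, p in P.items():
        for nxt, w in W.get(prev, {}).items():
            out[nxt] += w * p
    return dict(out)
```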

2.7 Markov approximations of the nonstationary mesoscopic process

We return to the nonstationary process p generated by the deterministic microscopic process from an arbitrary initial distribution of the mesoscopic states, given by (11). As in paragraph 2.6, it is now necessary to distinguish the stationary process $p^0$ by the upper index 0.

One can write the trivial equality

$$p(i_N, N; \ldots; i_{N+n-1}, N+n-1) = \sum_{i_{N-1}, \ldots, i_0} p(i_{N+n-1}, N+n-1; \ldots; i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0)\, p(i_0, 0; \ldots; i_{N-1}, N-1). \tag{44}$$

We now use remark (14): the conditional probabilities, conditioned by the whole past up to time 0, are identical in the stationary and nonstationary situations. The stationary distribution $p^0$ can be approximated by its nth approximation $p^{0(n)}$ introduced in Section 2.6. Thus we can write

$$\begin{aligned} p^0(i_{N+n-1}, N+n-1; \ldots; i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0) &= p^0(i_{N+n-1}, N+n-1 \mid i_{N+n-2}, N+n-2; \ldots; i_0, 0)\, p^0(i_{N+n-2}, N+n-2 \mid i_{N+n-3}, N+n-3; \ldots; i_0, 0) \cdots p^0(i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0) \\ &\approx p^{0(n)}(i_{N+n-1}, N+n-1; \ldots; i_N, N \mid i_{N-1}, N-1; \ldots; i_{N-n}, N-n). \end{aligned} \tag{45}$$

With (45), Eq. (44) yields the approximate n-times Markov Equation

$$p(i_N, N; \ldots; i_{N+n-1}, N+n-1) \approx \sum_{i_{N-1}, \ldots, i_{N-n}} p^0(i_{N+n-1}, N+n-1; \ldots; i_N, N \mid i_{N-1}, N-1; \ldots; i_{N-n}, N-n)\, p(i_{N-n}, N-n; \ldots; i_{N-1}, N-1). \tag{46}$$

Taking N = Kn for an integer K ≥ 1, using the condensed notations of § 2.6 and definition (42), Eq. (46) yields an approximate Master Equation for the probability $P(I_K, T_K)$ of the partial history $I_K$ during the time interval $T_K$:

$$P(I_K, T_K) \approx \sum_{I_{K-1}} W(I_K \mid I_{K-1})\, P(I_{K-1}, T_{K-1}), \tag{47}$$

which is the Eq. (43) obtained for the stationary probability $P^0(I_K, T_K)$. Let $P^{(n)}(I_K, T_K)$ be the exact solution of Eq. (47) that coincides with the exact P at the n first elementary times 0, 1, …, n−1 of the system history: $P^{(n)}(I_0, T_0) = P(I_0, T_0)$. Then, $P^{(n)}(I_K, T_K)$ defines the nth approximation of $P(I_K, T_K)$: in principle, it can be computed from Eq. (47), since the transition probabilities W are known by (42).

The stationary approximation $P^{0(n)}$ deduced from $p^0$ provides the stationary solution of (47):

$$P^{0(n)}(I_K, T_K) = p^0(i_{Kn}, Kn; \ldots; i_{(K+1)n-1}, (K+1)n-1) = p^0(i_{Kn}, 0; \ldots; i_{(K+1)n-1}, n-1). \tag{48}$$

So, when K → ∞,

$$P^{(n)}(I_K, T_K) \to P^{0(n)}(I_K, T_K), \tag{49}$$

and consequently, for any integer k ∈ [0, n-1], the n th approximation of the mesoscopic distribution p satisfies

$$p^{(n)}(i, Kn+k) \to \mu(i) \quad \text{if } K \to \infty, \tag{50}$$

for any initial mesoscopic distribution, which is the basic assumption of statistical thermodynamics. Supplementary assumptions allow one to conclude that, in realistic situations, the mesoscopic distribution p itself satisfies this property (see Appendix B).

2.8 Time averages and simple Markov approximation

Up to now, we took as time unit some time step τ which gives the time scale of microscopic phenomena. By considering some finite partition (i) of the phase space X and replacing the microscopic states x ∈ X by the mesoscopic states i ∈ $(i_k)$, we have performed a space coarse-graining, as necessary for taking practical observations into account. For the same purpose, one should also introduce [18] a time coarse-graining, since the time scale θ = nτ of current observations is much larger than τ: n ≫ 1.

All mesoscopic functions remaining practically constant on the time scale θ, their averages can be computed from the time averages $\bar{p}_K$ of the probabilities $p_k$ over θ:

$$\bar{p}_K = \frac{1}{n} \sum_{k \in T_K} p_k, \tag{51}$$

where K is an integer ≥ 1 and $T_K$ is the time interval (τ being the time unit) $T_K = ((K-1)n, (K-1)n+1, \ldots, Kn-1)$.

Suppose (a) that the mesoscopic probabilities p are slowly varying functions of the mesoscopic states (i.e., for any positive α, |p(i) − p(j)| < α if the distance between the mesostates i and j is small enough, with an appropriate metric in the space of mesostates), and (b) that discontinuous trajectories have low probabilities and can be neglected. Of course, these assumptions are not verified for some important, well-known processes such as Brownian processes, but they seem to be reasonable for modeling physical processes where the inertial effects are strong enough. Then, a simple approximation is to consider that

$$p^0(i_{Kn-1}, Kn-1; \ldots; i_{(K-1)n}, (K-1)n \mid i_{(K-1)n-1}, (K-1)n-1; \ldots; i_{(K-2)n}, (K-2)n) \approx p^0(i_{\bar{K}}, Kn-1; \ldots; i_{\bar{K}}, (K-1)n \mid i_{\bar{K}-1}, (K-1)n-1; \ldots; i_{\bar{K}-1}, (K-2)n) \equiv \bar{W}(i_{\bar{K}} \mid i_{\bar{K}-1}), \tag{52}$$

where

$$\bar{K} = \frac{1}{n} \sum_{k \in T_K} k = \frac{1}{n} \sum_{k=(K-1)n}^{Kn-1} k. \tag{53}$$

Consider the time-averaged probability

$$\bar{P}(i, \bar{K}) \equiv \frac{1}{n} \sum_{k \in T_K} p(i, k) \approx p(i, \bar{K}). \tag{54}$$

Using the Markov Eq. (47) and the complementary approximations (52) and (54), we obtain the new Master Equation

$$\bar{P}(i, K) \approx \sum_j \bar{W}(i \mid j)\, \bar{P}(j, K-1). \tag{55}$$

This equation is much simpler than Eq. (47), since it applies in the space M of the M mesostates (i), whereas (47) is valid in the space $M^n$ of n successive mesostates. However, Eq. (55) relies on several approximations that are difficult to control. In spite of these difficulties, which can only be precisely discussed for specific examples, Master Equations like (55), resulting from deterministic microscopic systems by coarse-graining both their states and time, are a practical way to study their evolution at a mesoscopic scale, used in innumerable works.

3. Discussion of the Markov representation derived from Hamiltonian dynamics, and estimation of the uniformization time

The previous results show that the coarse-grained mesoscopic dynamics can eventually be represented by a Master Equation, because the memory of this dynamics is gradually lost over time. However, they do not provide the time scale of this fading. In order to estimate its order of magnitude simply, we make an intuitive remark: the conditional probability to jump from some mesostate i to another one can be evaluated without knowing the past history of the system if one knows the initial microscopic distribution over i. The only unbiased initial distribution is the uniform one. Thus, one can consider that the system has a memory limited to one time step if uniformity is approximately realized in each mesoscopic cell: this is the basis of the elementary Markov models of mesoscopic evolution. Let T be the average time needed to reach uniformity at a mesoscopic scale, starting from strong inhomogeneity. In a first approximation it is reasonable to use this uniformization time T to characterize the time scale over which a Markov evolution can describe the system.

3.1 Uniformization time in a mesoscopic cell: An elementary estimation for Hamiltonian systems

Using oversimplified, but reasonable arguments [17], we now coarsely estimate the uniformization time T in a mesoscopic cell. As an example, we consider n identical particles initially located in this cell, among N identical particles in an isolated vessel. The complete system obeys Hamiltonian mechanics.

Assume that the particles constitute a gas under normal conditions, with density ρ ≈ 3 × 10²⁵ molecules·m⁻³. A mesoscopic state can be reasonably represented by a cube of size l ≈ 10⁻⁶ m (as an order of magnitude), which contains n ≈ 3 × 10⁷ molecules. We now divide the mesoscopic cell into m “microscopic” cells whose size λ is comparable to the size of a molecule: each of these microscopic cells, however, should contain a sufficient number of particles for allowing them to interact from time to time. We can take λ ≈ 10⁻⁸ m, so each microscopic cell approximately contains 30 molecules, and there are m ≈ 10⁶ microscopic cells in a mesoscopic cell. The particles have an average speed v ≈ 500 m·s⁻¹ in typical conditions. They can jump between the various microcells of the same mesoscopic cell. They can also jump out of their initial mesoscopic cell, but they are replaced by molecules proceeding from other cells, and we assume that these contrary effects roughly compensate each other, except in the first stage of the evolution if the initial mesoscopic distribution is strongly inhomogeneous.

Because all particles are identical, an almost microscopic configuration of a mesoscopic cell can be defined by specifying the number of particles in each of its microscopic cells. Focusing on a given mesoscopic cell, we compute the number of its possible configurations, and we estimate the average time θ necessary for the system to visit all these configurations. Note that the uniformization time T is obviously much larger than θ: T ≫ θ. So, θ is a lower bound of T.

The number of ways of partitioning the n identical particles into the m microscopic cells is

$$C = \frac{(m+n-1)!}{n!\,(m-1)!} \approx \exp[(m+n)\,\varphi(x)], \quad \text{with } \varphi(x) = -x \ln x - (1-x) \ln(1-x) \text{ and } x = n/(m+n). \tag{56}$$

The system jumps from one of these configurations to another one each time one of the present particles jumps to another microscopic cell. The order of magnitude of the time needed for a particle to cross a micro-cell is λ/v, and the time between two configuration changes is τ ≈ (1/n) λ/v. In order that all configurations are visited during time θ we should have at least θ ≈ Cτ (in fact, θ should be much larger because of the multiple visits during θ). So we conclude from (56) and relevant approximations that a lower bound of θ satisfies

$$(m+n)\,\varphi(x) \approx \ln \frac{v\,\theta}{\lambda}, \quad \text{with } x = n/(m+n). \tag{57}$$

With the previous numerical values

$$\theta \approx \frac{\lambda}{v} \left( \frac{n}{m} \right)^m \approx 2 \times 10^{-11} \times 30^{10^6}\ \mathrm{s}, \tag{58}$$

which is far larger than the age of the universe (now estimated to be about 14 × 10⁹ years, or 4.4 × 10¹⁷ s)!
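
A quick numerical check of the orders of magnitude in (56)-(58), with the values quoted in the text:

```python
import math

# Rough check of Eqs. (56)-(58): n = 3e7 molecules, m = 1e6 micro-cells,
# lambda = 1e-8 m, v = 500 m/s, as quoted in the text.
n, m = 3.0e7, 1.0e6
lam, v = 1.0e-8, 500.0

x = n / (m + n)
phi_x = -x * math.log(x) - (1.0 - x) * math.log(1.0 - x)
ln_C = (m + n) * phi_x            # ln(number of configurations), Eq. (56)
tau = (1.0 / n) * lam / v         # time between configuration changes
log10_theta = (ln_C + math.log(tau)) / math.log(10.0)
print(f"theta ~ 10^{log10_theta:.3g} s")   # millions of digits: unphysical
```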

Although these calculations are very rudimentary, it is clear that, in the framework of purely Hamiltonian systems, the microscopic distribution within a mesoscopic cell remains far from uniformity during any realistic time if it is initially fairly inhomogeneous.

More generally, it is clear that the uniformization time T should be of the order of the Poincaré recurrence time [32, 33, 34, 35, 36] in a mesoscopic cell, which is known to be extraordinarily long [9, 37].

3.2 An elementary, empirical approach to mesoscopic systems

The practical relevance of Markov processes to model a large class of physical systems is supported by a vast literature. We have seen that the progressive erasure of its memory over time allows one to justify the use of a Markov process to represent the evolution of the coarse-grained system. However, such a representation can also stem from random disturbances due to the measurements or other sources of stochasticity: then, one has to renounce a purely deterministic microscopic dynamics, as formerly proposed by many authors, even without adopting the formalism of Quantum Mechanics. It is interesting to compare the time scales of the relaxation to equilibrium in both approaches with an elementary example.

3.2.1 Uniformization induced by randomization

Suppose now that the measurement process does not induce any significant change in the average energy of the molecules - so, their average speed remains unchanged - but that it causes a random reorientation of their velocity. A rudimentary, one-dimensional model of such a randomization could be to assume that each time a molecule is about to pass to a neighboring cell, it will go indifferently to one of the neighboring microscopic cells. In this one-dimensional version of the model, a molecule performs a random walk on the η = l/λ = 10² points representing the microscopic cells contained in the mesoscopic cell, and we adopt periodic conditions at the boundaries of the mesoscopic cell. The η × η transition matrix of the process is a circulant matrix which, in its simplest version, has transition probability ½ to jump from any state to each of its neighbors, and it is known that its eigenvalues are $\lambda_k = \cos(2\pi k/\eta)$, k = 0, 1, …, [η/2]. The number of jumps necessary for relaxing to the uniform, asymptotic distribution is of the order of

$$-1/\ln \lambda_1 \approx \eta^2 / (2\pi^2) \approx 500,$$

which corresponds to a relaxation time of 500 λ/v ≈ 10⁻⁸ s, which is very short for current measurements, but comparable with (or even larger than) the time scale of fast modern experiments. Considering a 3-dimensional model would not change this time scale significantly. It is conceivable that the molecules are not necessarily reoriented each time they leave a microscopic cell. Even if the proportion of reoriented molecules is as low as 10⁻⁶, the relaxation time is of order 10⁻² s, which is insignificant in many simple measurements. In this case the Markov representation can be justified.
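
A short numerical sketch of this one-dimensional model, using the values assumed in the text (η = 100, λ = 10⁻⁸ m, v = 500 m·s⁻¹):

```python
import numpy as np

# Randomized 1-d model: a walker hops with probability 1/2 to each neighbor
# on eta = 100 sites with periodic boundaries. The transition matrix is
# circulant, with eigenvalues cos(2*pi*k/eta) as stated in the text.
eta = 100
W = np.zeros((eta, eta))
for j in range(eta):
    W[j, (j - 1) % eta] = 0.5
    W[j, (j + 1) % eta] = 0.5

lambda_1 = np.cos(2.0 * np.pi / eta)         # slowest nontrivial mode
n_jumps = -1.0 / np.log(lambda_1)            # ~ eta^2 / (2 pi^2) ~ 500
relaxation_time = n_jumps * 1.0e-8 / 500.0   # jumps x (lambda / v), in s
print(n_jumps, relaxation_time)              # ~ 500 jumps, ~ 1e-8 s
```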

3.2.2 Semi-classical Hamiltonian systems

In analogy with the previous randomized system, we can introduce a new source of stochasticity in the coarse-grained deterministic systems considered in Sections 2 and 3. This could be done by assuming that a particle cannot be described by a point, but by a probability density centered on the point that would represent it classically: such a description borrows one, but not all, of the axioms of wave mechanics, and it can be qualified as a “semi-quantical” description. A similar assumption can be introduced without referring to quantum mechanics, by noticing that a particle cannot be localized in a given mesoscopic cell with complete certainty, because of its finite size: if it is mainly attributed to a given cell, there exists a small probability that it also belongs to a neighboring cell. Even without formalizing these possibilities, one can presume that such random effects drastically shorten the memory of the mesoscopic process, and make it short with respect to ordinary measurement times: then the Markov approximation described in Section 2 can correctly represent the evolution of the observed coarse-grained process.

4. Conclusion

We have studied the mesoscopic, stochastic process derived from a deterministic dynamics applied to the cells determined by measure inaccuracies. The stationary process, which arises when the microscopic initial state is distributed according to a time-invariant measure, was studied by Kolmogorov and later authors: we extended their methods and some of their results, and considered the nonstationary process which stems from a noninvariant initial measure. We have shown that, according to Jaynes' principle, the “exact” mesoscopic process can be approximately replaced by the Markov process which, at any time n, reproduces the one-time probability of each mesostate and the transition probabilities from it. This Markov process maximizes the trajectory entropy up to time n, as well as the entropy at time n, conditioned by prior events. Jaynes' principle, however, does not control the accuracy of this estimate: this was our next concern.

So, a sequence of successive approximations has been defined for the stationary mesoscopic process, based on one of our main results: the probability of any mesostate conditioned by all past events can be approximated by its probability conditioned by the n last past events only, the integer n being determined by the maximum distance allowed between these probabilities, as small as it may be. This property entails that the nonstationary mesoscopic process can be approximated by an n-times Markov process or even, after a time coarse-graining, by an ordinary one-time Markov process. These approximations require certain conditions which should be fulfilled by “normal” physical systems, with possible exceptions for slowly relaxing systems. If they are satisfied, the existence of a thermodynamic equilibrium is derived for a coarse-grained system obeying a measure-preserving deterministic dynamics, in particular a Hamiltonian dynamics, without introducing ad hoc external noises. However, very rough estimations of the relaxation time show that for reasonable values of the parameters this time is extraordinarily long and completely unrealistic.

We conclude that, although the basic hypotheses of thermodynamics can be justified from a Hamiltonian or deterministic microscopic dynamics applied to the mesoscopic cells, the observed time scales of the relaxation to equilibrium cannot be explained without going beyond pure Hamiltonian mechanics, by introducing additional random effects, in particular due to the intrinsic imprecision of the particles' localization.

Appendix A

With the notations of Section 2.7, we consider approximation (45), which is the basis of the n-times Markov approximation both in the stationary and nonstationary situations. Repeating approximation (45) we can write

$$\begin{aligned} p(i_{N+n-1}, N+n-1; \ldots; i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0) &= p(i_{N+n-1}, N+n-1 \mid i_{N+n-2}, N+n-2; \ldots; i_0, 0) \cdots p(i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0) \\ &\approx p^{(n)}(i_{N+n-1}, N+n-1 \mid i_{N+n-2}, N+n-2; \ldots; i_{N-1}, N-1) \cdots p^{(n)}(i_N, N \mid i_{N-1}, N-1; \ldots; i_{N-n}, N-n). \end{aligned} \tag{59}$$

The last line of (59) is $p^{(n)}(i_{N+n-1}, N+n-1; \ldots; i_N, N \mid i_{N-1}, N-1; \ldots; i_{N-n}, N-n)$. We write

$$\frac{p(i_{N+n-1}, N+n-1; \ldots; i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0)}{p^{(n)}(i_{N+n-1}, N+n-1; \ldots; i_N, N \mid i_{N-1}, N-1; \ldots; i_{N-n}, N-n)} \equiv Q_N^{(n)}, \tag{60}$$

and for k > n we define $l_k$, using the abbreviations (22) and (23):

$$l_k(i_0, \ldots, i_k) \equiv \ln \frac{\Pi_k(i_k)}{\Pi_k^{(n)}(i_k)}. \tag{61}$$

We have by (24)

$$s_n(p) - s_k(p) = \sum_{i_0, \ldots, i_k} p(i_0, 0; \ldots; i_k, k) \ln \frac{\Pi_k(i_k)}{\Pi_k^{(n)}(i_k)} = \langle l_k(i_0, \ldots, i_k) \rangle \equiv \sigma_k^{(n)}. \tag{62}$$

(Note that $\sigma_k^{(n)}$ is positive, although this is not necessarily true for $l_k$.) By (24), for any positive ε,

$$2 \left\langle d(\Pi_k, \Pi_k^{(n)})^2 \right\rangle \leq \sigma_k^{(n)} < \varepsilon \quad \text{if } n \text{ is large enough}. \tag{63}$$

Averaging the logarithm of Eq. (60) we have

$$L_N^{(n)} \equiv \langle \ln Q_N^{(n)} \rangle = \sum_{k=0}^{n-1} \left[ s_n(p) - s_{N+k}(p) \right] \leq n \left[ s_n(p) - s(p) \right] = n\, \delta s_n. \tag{64}$$

$\delta s_n \equiv s_n(p) - s(p)$ can be interpreted as an entropy fluctuation with respect to its equilibrium thermodynamic value. If such a fluctuation relaxes exponentially to 0 with time, as usual, the last term of (64) tends to 0 when n → ∞. Then, the n-times Markov approximations of Sections 2.6 and 2.7 are justified. Although exponential relaxation can be considered as a characteristic of “normal” physical systems, slower relaxations can occur: in this case the Markov approximation may be invalid.

Appendix B

The convergence stated in Eqs. (49) and (50) can be reasonably expected from the approximation of the exact mesoscopic process by Markov processes, but it can only be affirmed by adding supplementary assumptions to the basic ones. We first prove a simple, useful lemma.

B.1. Lemma. Consider a d-dimensional sequence $u_{n,k}$ with two positive integer indices n, k, satisfying the following properties:

  1. it is absolutely bounded: there is a positive real number M such that |un,k | < M for all integers n, k.

  2. for all n, k, there are positive numbers $\varepsilon_n$ (independent of k) and ν < 1 (independent of n and k) such that

$$|u_{n,k}| < \varepsilon_n + \nu\, |u_{n,k-1}|, \quad \text{with } \varepsilon_n \to 0 \ \text{if } n \to \infty. \tag{65}$$

Then u n,k  → 0 if n → ∞ and k → ∞.

In fact, for any positive ε, there is an integer n 0 such that εn  < ε if n > n 0, and

$$|u_{n,k}| < \varepsilon (1 + \nu + \cdots + \nu^{k-1}) + \nu^k |u_{n,0}| < \frac{\varepsilon}{1 - \nu} + \nu^k |u_{n,0}| \quad \text{if } n > n_0. \tag{66}$$

So, $|u_{n,k}|$ can be made as small as desired by choosing n and k large enough.
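
A small numerical illustration of the lemma (our own sketch, with an assumed exponential decay of $\varepsilon_n$), iterating the bound (65) as an equality:

```python
# Numerical illustration of Lemma B.1: a sequence obeying the recursive
# bound |u_{n,k}| < eps_n + nu * |u_{n,k-1}| with nu < 1 and eps_n -> 0.
def iterate_bound(eps_n, nu, u0, k):
    u = u0
    for _ in range(k):
        u = eps_n + nu * u        # Eq. (65), taken as an equality here
    return u                      # approaches eps_n / (1 - nu) as k grows

for n in (1, 5, 10, 20):
    eps_n = 2.0 ** (-n)           # assumed decay of eps_n with n
    print(n, iterate_bound(eps_n, nu=0.8, u0=1.0, k=200))
```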

B.2. For given integers n and K larger than 1, and states $i_k \in M$, k = 0, 1, …, (K+1)n−1, we will write $p(i_0, 0; i_1, 1; \ldots; i_{(K+1)n-1}, (K+1)n-1) \equiv p(0, 1, \ldots, (K+1)n-1)$ for the sake of simplicity. In these abbreviated notations, we have

$$p(Kn, Kn+1, \ldots, (K+1)n-1) = \sum_{i_0, i_1, \ldots, i_{Kn-1}} p((K+1)n-1, \ldots, Kn \mid Kn-1, \ldots, 0)\, p(0, 1, \ldots, Kn-1). \tag{67}$$

We know that

$$p((K+1)n-1, \ldots, Kn \mid Kn-1, \ldots, 0) = p^0((K+1)n-1, \ldots, Kn \mid Kn-1, \ldots, 0), \tag{68}$$

$p^0$ being the stationary probability, and that, for large n,

$$p^0((K+1)n-1, \ldots, Kn \mid Kn-1, \ldots, 0) \approx p^0((K+1)n-1, \ldots, Kn \mid Kn-1, \ldots, (K-1)n). \tag{69}$$

More precisely, in the conditions discussed previously, for any given positive ε, there is an integer n(ε) such that

$$\left| p^0((K+1)n-1, \ldots, Kn \mid Kn-1, \ldots, 0) - p^0((K+1)n-1, \ldots, Kn \mid Kn-1, \ldots, (K-1)n) \right| < \varepsilon \quad \text{if } n \geq n(\varepsilon). \tag{70}$$

So, Eq. (67) becomes

$$\begin{aligned} p(Kn, \ldots, (K+1)n-1) &= \sum_{i_0, \ldots, i_{Kn-1}} \left[ p((K+1)n-1, \ldots, Kn \mid Kn-1, \ldots, 0) - p^0((K+1)n-1, \ldots, Kn \mid Kn-1, \ldots, (K-1)n) \right] p(0, 1, \ldots, Kn-1) \\ &\quad + \sum_{i_{(K-1)n}, \ldots, i_{Kn-1}} p^0((K+1)n-1, \ldots, Kn \mid Kn-1, \ldots, (K-1)n)\, p((K-1)n, \ldots, Kn-1) \\ &\equiv a_{n,K} + \sum_{i_{(K-1)n}, \ldots, i_{Kn-1}} p^0((K+1)n-1, \ldots, Kn \mid Kn-1, \ldots, (K-1)n)\, p((K-1)n, \ldots, Kn-1), \end{aligned} \tag{71}$$

where the first term of the last line satisfies $|a_{n,K}| < \varepsilon$ if n ≥ n(ε).

The second term in the last line of (71) is, in other notations, the right-hand side of the approximate Master Eq. (47) of Section 2.7:

$$P(I_K, T_K) \approx \sum_{I_{K-1}} W(I_K \mid I_{K-1})\, P(I_{K-1}, T_{K-1}). \tag{72}$$

This approximate Master Equation can now be written more precisely, in the notations of Sections 2.6 and 2.7:

$$P(I_K, T_K) = A_{n,K}(I_K, T_K) + \sum_{I_{K-1}} W(I_K \mid I_{K-1})\, P(I_{K-1}, T_{K-1}), \tag{73}$$

where $A_{n,K}(I_K, T_K)$ is just the $a_{n,K}$ term of (71) expressed in these notations, $T_K$ being the group of n successive times $(Kn, Kn+1, \ldots, (K+1)n-1)$ and $I_K = (i_{Kn}, i_{Kn+1}, \ldots, i_{(K+1)n-1}) \in M^n$ describing the corresponding partial history of the mesoscopic system.

On the other hand, we know that the stationary distribution P 0 satisfies the Master Eq. (72) exactly. So, writing

$$U_{n,K}(I_K, T_K) \equiv P(I_K, T_K) - P^0(I_K, T_K), \tag{74}$$

we have

$$U_{n,K}(I_K, T_K) = A_{n,K}(I_K, T_K) + \sum_{I_{K-1}} W(I_K \mid I_{K-1})\, U_{n,K-1}(I_{K-1}, T_{K-1}). \tag{75}$$

Note that W depends on n, but is independent of K.

From (72)–(75) it results that $U_{n,K}(I_K, T_K)$ tends to 0 when n and K tend to infinity if, furthermore, the following condition (c) holds. In fact, the ($M^n$-dimensional) vector $U_{n,K}$ is orthogonal to the left eigenvector of the matrix W with eigenvalue 1. All the eigenvalues of the projection of W on the corresponding subspace have an absolute value smaller than 1. Thus Lemma B.1 applies if condition (c) is satisfied:

(c) when n increases, the absolute values of the eigenvalues of W other than 1 have an upper bound ν < 1.

This property is likely to hold if the actual stationary mesoscopic process is not too different from an exact Markov process. So, it is reasonable to conjecture that property (c) holds for typical actual systems.

References

  1. Boltzmann L. Vorlesungen über Gastheorie, Vol. 2. Leipzig: J. A. Barth; 1898
  2. Boltzmann L. Reply to Zermelo's remarks on the theory of heat (1896). In: Brush S, editor. History of Modern Physical Sciences: The Kinetic Theory of Gases. London: Imperial College Press. p. 567
  3. Ehrenfest P, Ehrenfest T. The Conceptual Foundations of the Statistical Approach in Mechanics. New York: Dover; 1990
  4. Uhlenbeck GE. An outline of statistical mechanics. In: Cohen EGD, editor. Fundamental Problems in Statistical Mechanics II. Amsterdam: North Holland; 1968
  5. Landau LD, Lifshitz EM. Statistical Physics. 3rd ed. Oxford: Pergamon Press; 1980
  6. Landau LD, Pitaevskii LP. Physical Kinetics. Oxford: Pergamon Press; 1981
  7. Callen HB. Thermodynamics and an Introduction to Thermostatistics. New York: John Wiley and Sons; 1985
  8. Reif F. Fundamentals of Statistical and Thermal Physics. New York: McGraw-Hill; 1965
  9. Eckmann JP, Ruelle D. Ergodic theory of chaos and strange attractors. Rev. Mod. Phys. 1985;57:617
  10. Dorfman JR. An Introduction to Chaos in Nonequilibrium Statistical Mechanics. New York: Cambridge University Press; 1999
  11. Gallavotti G. Statistical Mechanics: A Short Treatise. Berlin: Springer; 1999
  12. Gallavotti G. J. Stat. Phys. 1995;78:1571
  13. Evans DJ, Cohen EGD, Morriss GP. Phys. Rev. Lett. 1993;71:2401
  14. Gallavotti G, Cohen EGD. Phys. Rev. Lett. 1995;74:2694
  15. Yvon J. La Théorie Statistique des Fluides et l'Équation d'État. Paris: Hermann; 1935
  16. Forster D. Hydrodynamic Fluctuations, Broken Symmetry, and Correlation Functions. Reading, Massachusetts: Benjamin; 1975
  17. Gaveau B, Schulman LS. Eur. Phys. J. Spec. Top. 2015;224(8):891
  18. Gaveau B, Moreau M. Chaos. 2020;30:083104
  19. Arnold VI, Avez A. Ergodic Problems of Classical Mechanics. New York: Benjamin; 1968
  20. Doob JL. Stochastic Processes. New York: Wiley; 1953
  21. Lévy P. Théorie de l'Addition des Variables Aléatoires. Paris: Gauthier-Villars; 1937
  22. Feller W. An Introduction to Probability Theory and Its Applications, Vol. II. New York: Wiley; 1971
  23. McKean HP. Stochastic Integrals. New York: Academic Press; 1968
  24. Gaveau B, Gaveau MA. Diffusion of a particle in a very rarefied gas. In: Muntz, Weaver, Campbell, editors. Rarefied Gas Dynamics. Progress in Astronautics and Aeronautics. 1988;118:61
  25. Jaynes ET. Phys. Rev. 1957;106:620; Phys. Rev. 1957;108:171
  26. Khinchin AI. Mathematical Foundations of Statistical Mechanics. New York: Dover; 1949
  27. Shannon CE. A mathematical theory of communication. Bell System Technical Journal. 1948;27:379-423 and 623-656
  28. Brillouin L. Science and Information Theory. New York: Academic Press; 1956
  29. Khinchin AI. Mathematical Foundations of Information Theory. New York: Dover; 1957
  30. McMillan B. The basic theorems of information theory. Ann. Math. Statistics. 1953;24:196
  31. Cover T, Thomas J. Elements of Information Theory. New York: Wiley; 1991
  32. Poincaré H. Acta Math. 1890;13:1
  33. Kac M. Bull. Amer. Math. Soc. 1947;53:1002
  34. Wolfowitz J. Bull. Amer. Math. Soc. 1949;55:394
  35. Blum JR, Rosenblatt JI. J. Math. Sci. (Delhi). 1967;2:1
  36. Schulman LS. Phys. Rev. A. 1978;18(5):2379
  37. Gallavotti G, Cohen EGD. J. Stat. Phys. 1995;80:931
