
Stochastic Theory of Coarse-Grained Deterministic Systems: Martingales and Markov Approximations

By Michel Moreau and Bernard Gaveau

Submitted: November 7th, 2020. Reviewed: January 8th, 2021. Published: February 18th, 2021.

DOI: 10.5772/intechopen.95903


Abstract

Many works have been devoted to showing that Thermodynamics and Statistical Physics can be rigorously deduced from an exact underlying classical Hamiltonian dynamics, and to resolving the related paradoxes. In particular, the concept of equilibrium state and the derivation of Master Equations should result from purely Hamiltonian considerations. In this chapter, we reexamine this problem, following the point of view developed by Kolmogorov more than 60 years ago, in great part known from the work published by Arnold and Avez in 1967. Our setting is a discrete time dynamical system, namely the successive iterations of a measure-preserving mapping on a measure space, generalizing Hamiltonian dynamics in phase space. Using the notion of Kolmogorov entropy and martingale theory, we prove that a coarse-grained description, both in space and in time, leads to an approximate Master Equation satisfied by the probability distribution of partial histories of the coarse-grained state.

Keywords

  • stochastic theory
  • coarse-grained deterministic systems
  • Markov processes
  • martingales

1. Introduction

It is generally admitted that Thermodynamics and Statistical Physics could be deduced from an exact classical or quantum Hamiltonian dynamics, so that the various paradoxes related to irreversibility could also be explained, and nonequilibrium situations could be rigorously studied as well. These questions have been and still are discussed by many authors (see, for instance, Refs. [1, 2, 3, 4] and many classical textbooks, for instance [5, 6, 7, 8, 9, 10]), who have introduced various plausible hypotheses [7, 8, 9, 10, 11, 12, 13, 14], related to the ergodic principle [8, 9, 10, 11], to solve them. It seems that there are two major kinds of problems. First, to justify that physical systems can reach an equilibrium state when they are isolated, or in contact with a thermal bath (which remains to be defined). Secondly, to justify various types of reduced stochastic dynamics, depending on the phenomena to be described: Boltzmann equations, Brownian motions, fluid dynamics, Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchies, etc.: see for instance Refs. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 15, 16]. Concerning the first type of problem (reaching an equilibrium, if any), very rough estimations show [17] that the time scales to reach equilibrium, using only Hamiltonian dynamics and measurement inaccuracies, are extremely large, contrary to everyday experience, and quantum estimations are even worse [17]. Essentially, these times scale as Poincaré recurrence times and they increase as an exponential of the number of degrees of freedom (see Section 3 of this chapter for a brief discussion and references).

Here we concentrate on the second type of problem: is it possible to derive a stochastic Markovian process from an "exact" deterministic dynamics, just by coarse graining the microscopic state space? We generalize and complete the formalism recently presented [18] for Hamiltonian systems. Our framework is now more general and applies to all deterministic systems with a measure-preserving dynamics, a class which, by Liouville's theorem, includes Hamiltonian dynamics.

Following Kolmogorov, we start with a measure space with a discrete time dynamics given by the successive iterations of a measure-preserving mapping. The Kolmogorov entropy, or trajectory entropy, has been defined by Kolmogorov as an invariant of stationary dynamical systems (see the book of Arnold and Avez [19] for a pedagogical presentation). We follow his work and generalize part of his results. We also use martingale theory [20, 21, 22, 23] to show that the stationary coarse-grained process almost surely tends to a Markov process on partial histories including n successive times, when n tends to infinity. From this result, we show that in the nonstationary situation, the probability distribution of such partial histories approximately satisfies a Master Equation. Its transition probabilities can be computed from the stationary distribution, expressed in terms of the invariant measure. It follows that, with relevant hypotheses, the mesoscopic distribution indeed tends to the stationary distribution, as expected.

Our next step is to coarse grain time also. The new, coarse-grained time step is now nτ, τ being the elementary time step of the microscopic description, and n being the number of elementary steps necessary to approximately "erase" the memory with a given accuracy. The microscopic dynamics induces a new dynamics on partial histories of length n. We show that it is approximately Markovian if n is large enough. This idea is a generalization of the Brownian concept: a particle in a fluid is submitted to a white noise force which is the result of the coarse-graining of many collisions, and the time step is thus the coarse-graining of many microscopic time steps [8, 24]. The Brownian motion emerges as a time coarse-grained dynamics.

In Section 2, we recall various mathematical concepts (Kolmogorov entropy, martingale theory) and use them to derive the approximate Markov property of the partial histories, and eventually to obtain an approximate Master Equation for the time coarse-grained mesoscopic distribution [18].

In Section 3, we briefly consider the problem of relaxation times and recall very rough estimations showing that an exact Hamiltonian dynamics predicts unrealistic, excessively large relaxation times [17], unless the description is completed by introducing other sources of randomness than the measure inaccuracies leading to space coarse-graining. Note that, following Kolmogorov [19], we do not address the Quantum Mechanics formalism.


2. Microscopic and mesoscopic processes in deterministic dynamics

2.1 Microscopic dynamics: Definitions and notations

It has been shown recently [18] that coarse-grained Hamiltonian systems can be approximated by Markov processes provided that they satisfy reasonable properties, covering many realistic cases. These conclusions can be extended to a large class of deterministic systems generalizing classical Hamiltonian systems, which we now describe. We first specify our hypotheses and notations.

2.1.1 Deterministic microdynamics

Consider a deterministic system S. Its states x, belonging to a state space X, will be called "microstates", in agreement with the usual vocabulary of Statistical Physics. The deterministic trajectory due to the microscopic dynamics transfers the microstate x0 at time 0 to the microstate $x_t = \varphi_t x_0$ at time t. The evolution function $\varphi_t$ satisfies the usual properties of dynamical systems: $\varphi_t \varphi_s = \varphi_{t+s}$, $\varphi_0 = I$, t and s being real numbers and I being the identity map.

The dynamics is often invariant under time reversal, as assumed in many works on Statistical Physics: we refer to classical textbooks on the subject for details [5, 6, 7, 8], but we will not use such properties in this chapter.

2.1.2 Microscopic distribution

Assume that the exact microstate x0 is unknown at time 0, but is distributed according to the probability measure μ on the phase space X. The microscopic probability distribution μt at time t is given by

$\mu_t(A) = \mu(\varphi_{-t} A)$ (1)

for any measurable subset A of X. If μ is stationary, it is preserved by the dynamics: μt(A) = μ(A). This condition, however, is not necessarily satisfied, in particular for physical systems during their evolution.

We will focus on two important cases:

  1. the finite case: X is finite and consists of N microstates.

  2. the absolutely continuous case: X ⊂ Rⁿ, where (i) R is the set of real numbers and n is an integer (usually very large, and even in the case of Hamiltonian dynamics), and (ii) the measure μ is absolutely continuous with respect to the Lebesgue measure ω on Rⁿ: there exists an integrable probability density p(x) such that for any measurable subset A of X

$\mu(A) = \int_A p(x)\,dx.$ (2)

Furthermore, we assume that (iii) the Lebesgue measure of X (or volume of X), $V = \mathrm{vol}(X) = \int_X dx$, is finite, and (iv) the Lebesgue measure ω is preserved by the dynamics for any t and any measurable subset A of X:

$\mathrm{vol}(A) = \mathrm{vol}(\varphi_t A).$ (3)

The last two assumptions obviously generalize basic properties of Hamiltonian dynamics in a finite volume of phase space. Thus, by (1)-(3), the probability density is conserved along any trajectory: at time t the probability density is

$p(x,t) = p_0(\varphi_{-t} x) \equiv p(\varphi_{-t} x).$ (4)
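As a concrete illustration of (1)-(4), the short Python sketch below (our toy example, not from the chapter) uses the circle rotation x → x + α (mod 1), a standard measure-preserving map on [0,1); the rotation number, the set A and the initial density are illustrative assumptions.

import numpy as np

# A toy measure-preserving map (our illustration, not from the chapter):
# the circle rotation phi_t(x) = x + t*alpha (mod 1) on X = [0,1),
# which preserves the Lebesgue measure, as in Eq. (3).
alpha = np.sqrt(2) - 1                 # illustrative rotation number

def phi(x, t):
    """Deterministic evolution after t elementary time steps (tau = 1)."""
    return (x + t * alpha) % 1.0

rng = np.random.default_rng(0)
x = rng.uniform(size=200_000)          # uniform Monte Carlo sample of X
in_A = (x >= 0.2) & (x < 0.5)          # indicator of A = [0.2, 0.5)
# A point y belongs to phi_t(A) iff phi_{-t}(y) belongs to A, so vol(phi_t A)
# is estimated by the same counting on a fresh uniform sample (Eq. (3)).
y = rng.uniform(size=200_000)
in_phi_t_A = (phi(y, -7) >= 0.2) & (phi(y, -7) < 0.5)
print(in_A.mean(), in_phi_t_A.mean())  # both close to vol(A) = 0.3

# Eq. (4): a density is transported along trajectories, p(x,t) = p0(phi_{-t} x).
p0 = lambda z: 2.0 * z                 # an illustrative nonuniform density
p = lambda z, t: p0(phi(z, -t))
print(p(0.3, 5), p0(phi(0.3, -5)))     # equal by construction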

2.1.3 Initial microscopic distribution: The stationary situation

Suppose that S is an isolated physical system and no observation was made on S at time 0 or before. Then, in the absence of any knowledge on S, we admit that at the initial time S is distributed according to the only unbiased probability law, which is the uniform law. This is clearly justified in the finite case, according to the physical meaning traditionally given to probability: in fact, attributing different probabilities to two distinct microstates of X would imply that some measurement would allow one to distinguish them objectively, which is not the case at time 0.

In the absolutely continuous case, initial uniformity is less obvious: it amounts to assuming that the system should be found with equal probability in two regions of the state space with equal volumes if no information allows one to give preference to any of these regions. This is of course a subjective assertion, but for Hamiltonian systems it agrees with the semi-quantum principle which asserts that, in canonical coordinates, equal volumes of the phase space correspond to equal numbers of quantum states.

Another way for choosing the initial probability distribution is to make use of Jaynes’ principle [25], which is to maximize the Shannon entropy of the distribution under the known constraints over this distribution: in the present case of an isolated system which has not been previously observed, this principle also leads to the uniform law. It is not really better founded than the previous, elementary reasoning, but it may be more satisfying and it can be safely used in more complex situations. We refer to most textbooks on statistical mechanics for discussing these well-known, basic questions.

The uniform distribution in a finite space, either discrete or absolutely continuous, is clearly stationary. In addition to the previous hypotheses, we will assume that the space X is indecomposable [26]: the only subsets of X which are preserved by the evolution function φt are the empty set ∅ and X itself. Then, the stationary probability distribution is unique [18].

For simplicity, we will henceforth assume that the phase space X is finite.

Initial, nonstationary situation. In certain situations, the system can be prepared by submitting it to specific constraints before the initial time 0. Then it may not be distributed uniformly in X at t = 0. We will consider this case in the next paragraph.

2.2 Mesoscopic distributions

2.2.1 Mesoscopic states

Because of the imprecision of physical observations, it is impossible to determine exactly the microstate of the system, but it is currently admitted that the available measuring instruments allow one to define a finite partition of X into M subsets (i_k), k = 1, 2, …, M, such that it is impossible to distinguish two microstates belonging to the same subset i. So, in practice, the best possible description of the system consists in specifying the subset i where its microstate x lies: i can be called the mesostate of the system. The probability for the system to be in the mesostate i at time t will be denoted p(i,t). It is not certain, however, that two microstates belonging to two different mesostates can always be distinguished: this point will be considered in Section 3.2.2.

Remark: for convenience, we use the same letter p to denote the probability in a countable state space, as well as the probability density in the continuous case. This creates no confusion when the variable type is explicitly mentioned. This is the case now since, as mentioned previously, we assume that the space X is discrete. The transposition to the continuous case is generally obvious, although the complete derivations may be more difficult.

2.2.2 The stationary situation

If time 0 is the beginning of all observations and actions, we assume that the initial microscopic distribution μ is uniform and stationary, as discussed previously, and the probability to find the system S in the mesostate i0 at time 0 is p(i0,0) = μ(i0). The probability to be in i at time t is $p^0(i,t) = \mu_t(i) = \mu(\varphi_{-t} i)$. The stationary joint probability to find S in i0 at time 0 and in i at time t is

$p^0(i_0,0;\, i,t) = \mu(\varphi_{-t} i \cap i_0) = \mu(i \cap \varphi_t i_0)$ (5)

and the conditional probability of finding S in i at time t, knowing that it was in i0 at time 0, is

$p^0(i,t \mid i_0,0) = \frac{p^0(i_0,0;\, i,t)}{p^0(i_0,0)} = \frac{\mu(\varphi_{-t} i \cap i_0)}{\mu(i_0)} = \frac{\mu(i \cap \varphi_t i_0)}{\mu(i_0)}.$ (6)

Similarly, the stationary n-times joint probability and related conditional probabilities are readily obtained from

$p^0(i_0,0;\, i_1,t_1;\, \ldots;\, i_{n-1},t_{n-1}) = \mu(\varphi_{-t_{n-1}} i_{n-1} \cap \cdots \cap \varphi_{-t_1} i_1 \cap i_0),$ (7)

with, for any t: $p^0(i_0,t;\, i_1,t_1+t;\, \ldots;\, i_{n-1},t_{n-1}+t) = p^0(i_0,0;\, i_1,t_1;\, \ldots;\, i_{n-1},t_{n-1})$.

For the sake of simplicity, we will discretize the times 0 < t1 < t2 < …, and write t_i = k_i τ, k_i being a nonnegative integer and τ a constant time step, which will be taken as the time unit.
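The stationary formulas (5)-(7) can be evaluated directly in the finite case. The following sketch is our own construction (the permutation playing the role of φ and the partition are arbitrary assumptions): it takes X = {0, …, N−1} with the uniform invariant measure and computes (5) and (6) by counting microstates.

import numpy as np

# Finite-case illustration of Eqs. (5)-(7) (our toy construction): X is a set
# of N microstates, phi a fixed permutation (hence measure preserving for the
# uniform measure mu), and the partition into M mesostates is arbitrary.
N, M = 12, 3
rng = np.random.default_rng(1)
perm = rng.permutation(N)              # phi: x -> perm[x]
cell = np.arange(N) % M                # mesostate label of each microstate

def phi_t(x, t):
    """Iterate the permutation t times (t >= 0); works on integer arrays."""
    for _ in range(t):
        x = perm[x]
    return x

def p0_joint(i0, i, t):
    """Eq. (5): mu(i ∩ phi_t i0), counted as the fraction of microstates
    lying in mesostate i0 at time 0 and in mesostate i at time t."""
    x = np.arange(N)
    return np.mean((cell[x] == i0) & (cell[phi_t(x, t)] == i))

def p0_cond(i, t, i0):
    """Eq. (6): stationary conditional probability p0(i,t | i0,0)."""
    return p0_joint(i0, i, t) / np.mean(cell == i0)

print(p0_cond(2, 4, 0))                # a transition probability over 4 steps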

2.2.3 Nonstationary situation

If S is a physical system, interactions may exist before or at time 0, so that S can be constrained to lie in a certain subset A of X at time 0. However, since it is not possible to distinguish two microstates corresponding to the same mesostate, A should be a union of mesostates, or at least one mesostate. If it is known that at time 0 the microstate x of the system belongs to the mesostate i, we should assume that the initial microscopic distribution is uniform over i, since no available observation can give further information on x: so, in the discrete case, if n(i) is the number of microstates included in i and χi(x) the characteristic function of i,

$p(x,0 \mid x \in i) = \frac{1}{n(i)}\, \chi_i(x).$ (8)

In the absolutely continuous case, the similar conditional density is obtained in the same way, replacing the number of microstates contained in the mesostate i by its volume v(i). For simplicity, we continue with the discrete case, with obvious adaptations to the continuous case.

If one only knows the mesoscopic initial distribution p(i,0), i.e. the probability that at time 0 the system belongs to i, for each mesostate i of M, the initial microscopic distribution becomes

$p(x,0) = \sum_i \frac{1}{n(i)}\, p(i,0)\, \chi_i(x) = \sum_i \frac{p(i,0)}{\mu(i)} \frac{\chi_i(x)}{N},$ (9)

N being the total number of microstates in X.

The n-times nonstationary mesoscopic probabilities are obtained from (9):

$p_n(i_0,0;\, i_1,1;\, \ldots;\, i_{n-1},n-1) = \frac{p(i_0,0)}{\mu(i_0)} \frac{n(i_0 \cap \varphi_{-1} i_1 \cap \cdots \cap \varphi_{-n+1} i_{n-1})}{N},$ (10)

where n(A) is the number of microstates belonging to a subset A of X. So

$p_n(i_0,0;\, i_1,1;\, \ldots;\, i_{n-1},n-1) = \mu(i_0 \cap \varphi_{-1} i_1 \cap \cdots \cap \varphi_{-n+1} i_{n-1}) \frac{p(i_0,0)}{\mu(i_0)},$ (11)

and all multiple probabilities follow, for instance

$p(i_1,1;\, \ldots;\, i_n,n) = \sum_{i_0} \mu(i_0 \cap \varphi_{-1} i_1 \cap \cdots \cap \varphi_{-n} i_n) \frac{p(i_0,0)}{\mu(i_0)}.$ (12)

The corresponding process is generally not Markovian. For instance, if $i_0 \cap \varphi_{-1} i_1 \neq \emptyset$, $i_1 \cap \varphi_{-1} i_2 \neq \emptyset$ and $i_0 \cap \varphi_{-2} i_2 = \emptyset$, it is easily seen that $p(i_2,2 \mid i_1,1;\, i_0,0) = 0$ but $p(i_2,2 \mid i_1,1) \neq 0$.
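A minimal numerical check of this non-Markov mechanism (our toy example; the map and the partition are chosen purely for illustration):

from fractions import Fraction

# Explicit instance of the non-Markov mechanism above (our choice of map and
# partition): X = {0,...,5}, phi(x) = x+1 mod 6, mesostates A={0}, B={1,4},
# C={5}, D={2,3}, uniform invariant measure mu.
N = 6
phi = lambda x: (x + 1) % N
cells = {'A': {0}, 'B': {1, 4}, 'C': {5}, 'D': {2, 3}}

def mu_inter(i0, i1=None, i2=None):
    """mu(i0 ∩ phi_{-1} i1 ∩ phi_{-2} i2), cells given by their names."""
    S = [x for x in cells[i0]
         if (i1 is None or phi(x) in cells[i1])
         and (i2 is None or phi(phi(x)) in cells[i2])]
    return Fraction(len(S), N)

# p(C,2 | B,1) = mu(B ∩ phi_{-1} C) / mu(B) is nonzero ...
print(mu_inter('B', 'C') / mu_inter('B'))              # 1/2
# ... but conditioning also on A at time 0 kills it:
# p(C,2 | B,1; A,0) = mu(A ∩ phi_{-1} B ∩ phi_{-2} C) / mu(A ∩ phi_{-1} B) = 0
print(mu_inter('A', 'B', 'C') / mu_inter('A', 'B'))    # 0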

From the definition of the relative probabilities, one can formally write

$p(i_2,t_2) = \sum_{i_1} p(i_2,t_2 \mid i_1,t_1)\, p(i_1,t_1),$ (13)

but in general this equation is useless, since the conditional probability p(i2, t2|i1, t1) cannot be computed independently of p(i1, t1).

It results from (11) that the nonstationary conditional probabilities, conditioned by the whole past up to time 0, are identical to the corresponding stationary probabilities: as an example,

$p(i_n,n \mid i_{n-1},n-1;\, \ldots;\, i_0,0) = p^0(i_n,n \mid i_{n-1},n-1;\, \ldots;\, i_0,0) = \frac{\mu(i_0 \cap \varphi_{-1} i_1 \cap \cdots \cap \varphi_{-n} i_n)}{\mu(i_0 \cap \varphi_{-1} i_1 \cap \cdots \cap \varphi_{-n+1} i_{n-1})}.$ (14)

We will make use of this simple but important property later.

2.3 Entropy of the mesoscopic process resulting from a deterministic, microscopic system

Kolmogorov and other authors [19] studied the entropy and ergodic properties of the stationary mesoscopic process defined previously, following methods introduced by Shannon in the framework of signal theory [27, 28, 29, 30]. These methods, and part of Kolmogorov’s results, can be extended to the nonstationary process (11).

2.3.1 The n-times entropy and the instantaneous entropy of the mesoscopic system

Following Kolmogorov, we consider the Shannon entropy [27, 28, 29, 30] of the trajectory (i)ₙ = (i0, …, i_{n−1}) in the phase space

$S(p_n) = -\sum_{i_0,\ldots,i_{n-1}} p_n(i_0,0;\, \ldots;\, i_{n-1},n-1)\, \ln p_n(i_0,0;\, \ldots;\, i_{n-1},n-1).$ (15)

On the other hand, the new information obtained by observing the system in the mesoscopic state iₙ at time tₙ, knowing that it was in the respective states i0, …, i_{n−1} at the prior times 0, …, n−1, will be called the instantaneous entropy:

$s_n(p) = S(p_{n+1}) - S(p_n) = -\sum_{i_0,\ldots,i_n} p(i_0,0;\, \ldots;\, i_n,n)\, \ln p(i_n,n \mid i_{n-1},n-1;\, \ldots;\, i_0,0) = \sum_{i_0,\ldots,i_{n-1}} p(i_0,0;\, \ldots;\, i_{n-1},n-1)\, S\big(p(\cdot,n \mid i_{n-1},n-1;\, \ldots;\, i_0,0)\big),$ (16)

where p denotes the infinite process. The properties of S(pₙ) and sₙ(p) have been extensively studied by Kolmogorov and other authors in the case of the stationary process (6) [19]: they are summarized in Sections 2.4 and 2.5. They are not necessarily valid for the nonstationary process.
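The entropies (15) and (16) are easy to evaluate numerically in the finite case; the sketch below (same toy permutation dynamics as above, an assumption of ours) builds the histogram of mesoscopic trajectories and prints Sₙ and sₙ = Sₙ₊₁ − Sₙ.

import numpy as np

# Numerical evaluation of Eqs. (15)-(16) for the toy permutation dynamics used
# above (our illustration). Each microstate has weight 1/N and generates one
# mesoscopic trajectory, so S(p_n) is the entropy of the trajectory histogram.
N, M = 12, 3
rng = np.random.default_rng(1)
perm = rng.permutation(N)
cell = np.arange(N) % M

def S_n(n):
    """Trajectory entropy S(p_n) of Eq. (15) for the stationary process."""
    counts = {}
    for x in range(N):
        traj, y = [], x
        for _ in range(n):
            traj.append(cell[y])
            y = perm[y]
        counts[tuple(traj)] = counts.get(tuple(traj), 0.0) + 1.0 / N
    p = np.array(list(counts.values()))
    return float(-np.sum(p * np.log(p)))

for n in range(1, 6):
    # S_n increases with n; s_n = S_{n+1} - S_n is nonnegative and decreasing
    print(n, S_n(n), S_n(n + 1) - S_n(n))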

2.3.2 Maximizing the n-times entropy of the mesoscopic system: The “Markov scheme”

If one knows the first two distributions p1 and p2, one can mimic the exact mesoscopic distributions pₙ by using Jaynes' principle, maximizing the entropy S(qₙ) of a distribution qₙ under the constraints q1 = p1 and q2 = p2. It is then found that the optimal distribution qₙ is the Markov distribution $\bar q_n$ satisfying these constraints [18].

It is shown in Ref. [18] that for n > 2, both the n-times entropy $S_n(\bar q)$ and the instantaneous entropy $s_n(\bar q)$ are larger than the corresponding entropies $S_n(p)$ and $s_n(p)$ of the exact process p, except if p is Markov: $p = \bar q$.

The Markov process $\bar q_n$ is not really an approximation of the mesoscopic process p, because $\bar q_n$ does not tend to $p_n$ when n → ∞. Approximating the exact mesoscopic process by a Markov process will be the main purpose of the next section.

2.4 Entropy and memory in the stationary situation

2.4.1 Kolmogorov entropy of the stationary process

Here we consider the stationary process arising from the initial uniform microscopic distribution μ(x), when the n-times stationary probability is $p_n^0$, given by (7). For the sake of simplicity we omit the index 0 in the present section, unless otherwise specified. It can be shown [19] that the entropy Sₙ(p) is an increasing, concave function of n:

$s_n \equiv S_{n+1}(p) - S_n(p) \geq 0,$ (17)
$s_n - s_{n-1} = S_{n+1}(p) - 2 S_n(p) + S_{n-1}(p) \leq 0.$ (18)

It results from (17) and (18), and also from Section 2.5.2, that the limits

$\lim_{n\to\infty} \frac{1}{n} S_n(p) = \lim_{n\to\infty} s_n(p) = s(p)$ (19)

exist: s(p) is the Kolmogorov entropy of the evolution function φ with respect to the partition (i) of the mesoscopic states [19]. More simply, we can call it the entropy of the mesoscopic process.

2.4.2 Memory decrease in the stationary mesoscopic process

It has been proved recently [18] that, although it is infinite, the memory of the mesoscopic process fades out with time: for n large enough, if N > n, the probability of i_N at time N, conditioned by the n last events, is practically equal to the probability at time N conditioned by the whole past down to time 0:

$p(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_{N-n},N-n) \approx p(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0) \quad \text{when } n \to \infty.$ (20)

More precisely, for any ε > 0, there exists a positive integer n such that for any N > n

$0 < s_n - s_N < \varepsilon,$ (21)

where sₙ is the instantaneous entropy given by (16). In fact, let us write

$\Pi_N(i_N) = p(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0) = \mu(\varphi_{-N} i_N \mid \varphi_{-N+1} i_{N-1} \cap \cdots \cap i_0);$ (22)
$\Pi_N^{(n)}(i_N) = p(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_{N-n},N-n) = \mu(\varphi_{-N} i_N \mid \varphi_{-N+1} i_{N-1} \cap \cdots \cap \varphi_{-N+n} i_{N-n}).$ (23)

For a given n, formula (23) allows one to define a new process p(n) from the original process p, which can be called "the approximate process of order n" of p (see Section 2.6). It results from (21) and from the stationarity of p that for any ε > 0, there is an integer n(ε) depending only on ε, such that for any integers N, n > n(ε)

$0 < s_n(p) - s_N(p) = \sum_{i_0,\ldots,i_{N-1}} \mu(i_0 \cap \varphi_{-1} i_1 \cap \cdots \cap \varphi_{-N+1} i_{N-1})\; S_{0,N-1}(\Pi_N \mid \Pi_N^{(n)}) < \varepsilon,$ (24)

where $S_{0,N-1}(\Pi_N \mid \Pi_N^{(n)})$ is the relative entropy of $\Pi_N$ with respect to $\Pi_N^{(n)}$: the middle member of Eq. (24) is the average of this relative entropy over the past of N. Because s_N(p) decreases to a limit $\bar s(p)$ when N → ∞, it results that

$0 < \delta s_n(p) \equiv s_n(p) - \bar s(p) \leq \varepsilon \quad \text{if } n > n(\varepsilon).$ (25)

The total variation distance d(P,Q) between two distributions P_j and Q_j over the states j of a finite set (j) is

$d(P,Q) = \frac{1}{2} \sum_j |P_j - Q_j|.$ (26)

Then, the total variation distance $d_{0,N-1}(\Pi_N, \Pi_N^{(n)})$ between $\Pi_N$ and $\Pi_N^{(n)}$ (for a given past trajectory between times 0 and N−1) is related to the relative entropy [18, 31], and it can be concluded that

$\big\langle d_{0,N-1}(\Pi_N, \Pi_N^{(n)}) \big\rangle^2 \leq \big\langle d_{0,N-1}(\Pi_N, \Pi_N^{(n)})^2 \big\rangle < \varepsilon/2 \quad \text{if } n(\varepsilon) < n < N,$ (27)

where the brackets denote the average over the past trajectory.
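Both quantities of this section are one-liners in code; the sketch below (our example values) implements the distance (26) and checks the Pinsker-type bound 2 d(P,Q)² ≤ S(P|Q) relating it to the relative entropy.

import numpy as np

# The distance (26) and the bound 2 d(P,Q)^2 <= S(P|Q) relating it to the
# relative entropy, checked on arbitrary example distributions (our values).
def tv(P, Q):
    return 0.5 * np.abs(P - Q).sum()

def rel_entropy(P, Q):
    mask = P > 0
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))

P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.4, 0.4, 0.2])
print(2 * tv(P, Q) ** 2, "<=", rel_entropy(P, Q))   # 0.02 <= 0.025...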

2.4.3 Convergence properties of the approximate process

Let us write m = N − n > 0. It follows [18] from (27) that for any fixed m, the total variation distance between the exact and the approximate probabilities, $d_{0,m+n-1}(\Pi_{m+n}, \Pi_{m+n}^{(n)})$, tends to 0 in probability when n → ∞:

$d_{0,m+n-1}(\Pi_{m+n}, \Pi_{m+n}^{(n)}) \xrightarrow{\;p\;} 0 \quad \text{if } n \to \infty.$ (28)

So, the probability that this distance exceeds a given accuracy a > 0 can be made as small as desired by choosing n large enough.

Further results can be obtained by using again the stationarity of the process p. In fact, it can be shown [18] that the sequence of conditional probabilities of p is a martingale [20, 21, 22]. Then, general results from martingale theory (see below) show that when n → ∞ the distance between the stationary conditional probability $\Pi_{m+n}$ and its approximation $\Pi_{m+n}^{(n)}$ tends to 0 almost surely [18], as well as in probability:

$d_{0,m+n-1}(\Pi_{m+n}, \Pi_{m+n}^{(n)}) \xrightarrow{\;a.s.\;} 0 \quad \text{if } n \to \infty.$ (29)

So, the approximation $\Pi_{m+n}^{(n)}$ converges to $\Pi_{m+n}$ for almost all trajectories [18].

We now sketch the derivation of this conclusion from martingale theory.

2.5 Martingale theory and almost sure convergence

For convenience, we first summarize some definitions and results of martingale theory [20, 21, 22], before applying them to the mesoscopic laws of deterministic systems. We refer to [20] for more general cases.

2.5.1 Definitions

  1. Simplified definition: a (discrete time) sequence of stochastic variables Xₙ is a martingale if for all n:

$\langle |X_n| \rangle < \infty \quad \text{and} \quad \langle X_{n+1} \mid X_n, \ldots, X_1 \rangle = X_n,$ (30)

where ⟨X⟩ denotes the average (mathematical expectation) of the stochastic variable X.

  2. More generally (see the general definition, for instance, in [20]):

    If • (Ω, F, P) is a probability space (where Ω is the state space, P is the probability law, and F is the σ-algebra of subsets of Ω on which P is defined),

    • Fₙ is an increasing sequence of σ-algebras extracted from F (Fₙ ⊂ Fₙ₊₁ ⊂ … ⊂ F), and

    • for all n ≥ 0, Xₙ is a stochastic variable defined on (Ω, Fₙ, P),

    the sequence Xₙ is a martingale if $\langle |X_n| \rangle < \infty$ and $\langle X_{n+1} \mid F_n \rangle = X_n$.

2.5.2 Convergence theorem for martingales

Among the remarkable properties of martingales, the following convergence theorem holds [20, 21]:

If (Xₙ) is a positive martingale, the sequence Xₙ converges almost surely to a stochastic variable X∞.

So, for almost all trajectories ω, Xₙ(ω) → X∞(ω) with probability 1 when n → ∞.

Stronger and more general results can be found in the references.

2.5.3 Application to the nth approximation of the stationary mesoscopic process

The stochastic variable $Y_N = p^0(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0)$ is a martingale. In fact, because of the stationarity of p⁰ we have, renumbering the states,

$p^0(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0) = p^0(i_0,0 \mid i_{-1},-1;\, \ldots;\, i_{-N},-N) \equiv p^0(i_0 \mid F_N),$ (31)

where F_N is the σ-algebra generated by i₋₁, …, i₋N. Let us write

$\pi_N = p^0(i_0,0 \mid i_{-1},-1;\, \ldots;\, i_{-N},-N) = p^0(i_0 \mid F_N).$ (32)

We have, because $F_{N-1} \subset F_N$,

$\langle \pi_N \mid F_{N-1} \rangle = \langle p^0(i_0 \mid F_N) \mid F_{N-1} \rangle = p^0(i_0 \mid F_{N-1}) = \pi_{N-1}.$ (33)

So, π_N is a martingale on the σ-algebras F_N, and by the convergence theorem it converges almost surely to a limit π∞ when N → ∞.

Now if N > n, let us write m = N − n > 0. Because of the stationarity of p⁰,

$p^0(i_{n+m},n+m \mid i_{n+m-1},n+m-1;\, \ldots;\, i_m,m) = p^0(i_0,0 \mid i_{-1},-1;\, \ldots;\, i_{-n},-n) = \pi_n.$ (34)

Thus, for any fixed, positive m

$\pi_{n+m} - \pi_n \xrightarrow{\;a.s.\;} 0.$ (35)

The total variation distance between π_{n+m} and π_n is obtained by summing $|\pi_{n+m}(i) - \pi_n(i)|$ over the M possible states i. So

$d_{0,n+m-1}(\Pi_{n+m}, \Pi_{n+m}^{(n)}) = d(\pi_{m+n}, \pi_n) \xrightarrow{\;a.s.\;} 0 \quad \text{if } n \to \infty,$ (36)

which is (29), one of our main formal results.
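The almost sure convergence of π_N can be watched numerically. In the sketch below (our toy finite dynamics again, with an arbitrary permutation and partition), π_N is computed exactly by enumerating the microstates compatible with the N last observed mesostates of one sampled trajectory; as N grows the vector stabilizes, as (35) predicts.

import numpy as np

# Watching the martingale pi_N of Eq. (32) converge (our toy finite dynamics):
# pi_N is the distribution of the present mesostate conditioned on the N last
# past mesostates, computed exactly by enumerating compatible microstates.
N_micro, M = 60, 3
rng = np.random.default_rng(2)
perm = rng.permutation(N_micro)
inv = np.argsort(perm)                    # phi_{-1}
cell = rng.integers(0, M, size=N_micro)   # an arbitrary partition

x = rng.integers(N_micro)                 # microstate at time 0 (mu-uniform)
past, y = [], x
for _ in range(12):
    y = inv[y]                            # step backward in time
    past.append(cell[y])                  # mesostates at times -1, -2, ...

for n in range(1, 13):
    compat = []
    for z in range(N_micro):              # microstates with the same n-past
        w, ok = z, True
        for k in range(n):
            w = inv[w]
            ok = ok and (cell[w] == past[k])
        if ok:
            compat.append(z)
    pi_n = np.bincount(cell[compat], minlength=M) / len(compat)
    print(n, pi_n)                        # stabilizes as n grows, cf. Eq. (35)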

2.6 n-times Markov approximation of the mesoscopic stationary process

Returning to the inequalities of Section 2.4, when the value ε is fixed so as to obtain a required precision, the value n = n(ε) is determined, and a satisfying approximation of the exact mesoscopic process is obtained by neglecting the memory effects at time differences larger than n [18]. Thus, one replaces $p(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0)$ by

$p^{(n)}(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0) = p(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_{N-n},N-n)$ if $N > n$;
$p^{(n)}(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0) = p(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0)$ if $N \leq n$. (37)

With the convention

$p^{(n)}(i_0,0;\, \ldots;\, i_N,N) = p(i_0,0;\, \ldots;\, i_N,N) \quad \text{if } N \leq n,$ (38)

all the probabilities related to the approximate process p(n) are defined from the probabilities of p: this defines p(n), the approximate process of order n of p. So, p(n) has a finite memory of size n, whereas p has in general an infinite memory.

The process p(n) is a Markov process on the partial trajectories I_K consisting of groups of n successive mesoscopic states

$I_K = (i_{Kn}, i_{Kn+1}, \ldots, i_{(K+1)n-1}) \in M^n.$ (39)

Its probability distributions can be written in abbreviated notations

$P_K^{(n)}(I_0,T_0;\, I_1,T_1;\, \ldots;\, I_{K-1},T_{K-1}) \equiv p^{(n)}\big(I_0: 0,1,\ldots,n-1;\; I_1: n,n+1,\ldots,2n-1;\; \ldots;\; I_{K-1}: (K-1)n,\ldots,Kn-1\big),$ (40)

T_K being the group of n successive times $T_K = (Kn, Kn+1, \ldots, (K+1)n-1)$. From the approximation (37) it follows (see Appendix A) that

$P^0(I_K,T_K \mid I_{K-1},T_{K-1};\, \ldots;\, I_0,T_0) \approx P^{0(n)}(I_K,T_K \mid I_{K-1},T_{K-1}),$ (41)

where we now use the upper index 0 in P⁰ and P⁰(n) to recall that, in the present section, p is the stationary distribution. Note that, because of this stationarity,

$P^{0(n)}(I_K,T_K \mid I_{K-1},T_{K-1}) = P^{0(n)}(I_K,T_1 \mid I_{K-1},T_0) = P^0(I_K,T_1 \mid I_{K-1},T_0) \equiv W(I_K \mid I_{K-1}).$ (42)

So, the transition matrix W is well defined from the known stationary distribution p⁰.

From the approximate relation (41) it follows that the exact stationary process P⁰ on the partial history I_K during the time interval T_K approximately obeys the n-times Markov equation (see Section 2.7)

$P^0(I_K,T_K) \approx \sum_{I_{K-1}} W(I_K \mid I_{K-1})\, P^0(I_{K-1},T_{K-1}),$ (43)

while the nth approximation P⁰(n) satisfies (43) exactly.
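For a finite toy system, the matrix W of (42) and the Master Equation (43) can be built explicitly from the stationary block statistics. In the sketch below (our assumptions: a random permutation dynamics and a random partition), stationarity makes one coarse step of (43) exact.

import numpy as np
from itertools import product

# Building W(I_K | I_{K-1}) of Eq. (42) for a toy finite system (our
# assumptions: random permutation dynamics and random partition), then
# checking one step of the Master Eq. (43) on the stationary distribution.
N_micro, M, n = 24, 2, 3
rng = np.random.default_rng(3)
perm = rng.permutation(N_micro)
cell = rng.integers(0, M, size=N_micro)

def history(x, length):
    h, y = [], x
    for _ in range(length):
        h.append(cell[y])
        y = perm[y]
    return tuple(h)

idx = {h: k for k, h in enumerate(product(range(M), repeat=n))}

# Joint stationary probability of two consecutive n-blocks (times 0..2n-1).
joint = np.zeros((M ** n, M ** n))
for x in range(N_micro):
    h = history(x, 2 * n)
    joint[idx[h[:n]], idx[h[n:]]] += 1.0 / N_micro

P0 = joint.sum(axis=1)                      # stationary block distribution
W = np.divide(joint, P0[:, None],
              out=np.zeros_like(joint), where=P0[:, None] > 0).T
print(np.allclose(W @ P0, P0))              # Eq. (43) holds exactly here: True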

2.7 Markov approximations of the nonstationary mesoscopic process

We return to the nonstationary process p generated by the deterministic microscopic process from an arbitrary initial distribution of the mesoscopic states, given by (11). As in Section 2.6, it is now necessary to distinguish the stationary process p⁰ by the upper index 0.

One can write the trivial equality

$p(i_N,N;\, \ldots;\, i_{N+n-1},N+n-1) = \sum_{i_{N-1},\ldots,i_0} p(i_{N+n-1},N+n-1;\, \ldots;\, i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0)\; p(i_0,0;\, \ldots;\, i_{N-1},N-1).$ (44)

We now use remark (14): the conditional probabilities, conditioned by the whole past up to time 0, are identical in the stationary and nonstationary situations. The stationary distribution p⁰ can be approximated by its nth approximation p⁰(n) introduced in Section 2.6. Thus we can write

$p^0(i_{N+n-1},N+n-1;\, \ldots;\, i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0)$
$= p^0(i_{N+n-1},N+n-1 \mid i_{N+n-2},N+n-2;\, \ldots;\, i_0,0)\; p^0(i_{N+n-2},N+n-2 \mid i_{N+n-3},N+n-3;\, \ldots;\, i_0,0) \cdots p^0(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0)$
$\approx p^{0(n)}(i_{N+n-1},N+n-1;\, \ldots;\, i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_{N-n},N-n).$ (45)

With (45), Eq. (44) yields the approximate n-times Markov equation

$p(i_N,N;\, \ldots;\, i_{N+n-1},N+n-1) \approx \sum_{i_{N-1},\ldots,i_{N-n}} p^{0(n)}(i_{N+n-1},N+n-1;\, \ldots;\, i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_{N-n},N-n)\; p(i_{N-n},N-n;\, \ldots;\, i_{N-1},N-1).$ (46)

Taking N = Kn for an integer K ≥ 1, using the condensed notations of Section 2.6 and definition (42), Eq. (46) yields an approximate Master Equation for the probability P(I_K,T_K) of the partial history I_K during the time interval T_K:

$P(I_K,T_K) \approx \sum_{I_{K-1}} W(I_K \mid I_{K-1})\, P(I_{K-1},T_{K-1}),$ (47)

which is the Eq. (43) obtained for the stationary probability P⁰(I_K,T_K). Let P(n)(I_K,T_K) be the exact solution of Eq. (47) that coincides with the exact P at the n first elementary times 0, 1, …, n−1 of the system history: P(n)(I₀,T₀) = P(I₀,T₀). Then P(n)(I_K,T_K) defines the nth approximation of P(I_K,T_K): in principle, it can be computed from Eq. (47), since the transition probabilities W are known by (42).

The stationary approximation P⁰(n) deduced from p⁰ provides the stationary solution of (47):

$P^{0(n)}(I_K,T_K) = p^0(i_{Kn},Kn;\, \ldots;\, i_{(K+1)n-1},(K+1)n-1) = p^0(i_{Kn},0;\, \ldots;\, i_{(K+1)n-1},n-1).$ (48)

So, when K → ∞,

$P^{(n)}(I_K,T_K) \to P^{0(n)}(I_K,T_K),$ (49)

and consequently, for any integer k ∈ [0, n−1], the nth approximation of the mesoscopic distribution p satisfies

$p^{(n)}(i, Kn+k) \to \mu(i) \quad \text{if } K \to \infty,$ (50)

for any initial mesoscopic distribution, which is the basic assumption of statistical thermodynamics. Supplementary assumptions allow one to conclude that, in realistic situations, the mesoscopic distribution p itself satisfies this property (see Appendix B).

2.8 Time averages and simple Markov approximation

Up to now, we took as time unit some time step τ which gives the time scale of microscopic phenomena. By considering some finite partition (i) of the phase space X and replacing the microscopic states x ∈ X by the mesoscopic states i ∈ (i_k), we have performed a space coarse graining, as necessary for taking practical observations into account. For the same purpose, one should also introduce [18] a time coarse graining, since the time scale θ = nτ of current observations is much larger than τ: n ≫ 1.

All mesoscopic functions remaining practically constant on the time scale θ, their averages can be computed from the time averages $\bar p_K$ of the probabilities p_k over θ:

$\bar p_K = \frac{1}{n} \sum_{k \in T_K} p_k,$ (51)

where K is an integer ≥ 1 and T_K is the time interval (τ being the time unit) T_K = ((K−1)n, (K−1)n+1, …, Kn−1).
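In code, the block average (51) is a single reshape; a minimal sketch with made-up one-time distributions p_k (our illustration):

import numpy as np

# The block average (51) in code (our illustration with made-up one-time
# mesoscopic distributions p_k): one coarse-grained distribution per window.
n, M = 4, 3
p_k = np.random.default_rng(4).dirichlet(np.ones(M), size=5 * n)  # p(.,k)
p_bar = p_k.reshape(-1, n, M).mean(axis=1)  # one row per interval T_K
print(p_bar.shape)                          # (5, 3): 5 coarse times, M mesostates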

Suppose (a) that the mesoscopic probabilities p are slowly varying functions of the mesoscopic states (i.e. for any positive α, |p(i) − p(j)| < α if the distance between the mesostates i and j is small enough, with an appropriate metric in the space of mesostates), and (b) that discontinuous trajectories have low probabilities and can be neglected. Of course, these assumptions are not verified for some important, well-known processes such as Brownian processes, but they seem to be reasonable for modeling physical processes where the inertial effects are strong enough. Then, a simple approximation is to consider that

$p^0(i_{Kn-1},Kn-1;\, \ldots;\, i_{(K-1)n},(K-1)n \mid i_{(K-1)n-1},(K-1)n-1;\, \ldots;\, i_{(K-2)n},(K-2)n)$
$\approx p^0(i_{\bar K},Kn-1;\, \ldots;\, i_{\bar K},(K-1)n \mid i_{\bar K-1},(K-1)n-1;\, \ldots;\, i_{\bar K-1},(K-2)n) \equiv \bar W(i_{\bar K} \mid i_{\bar K-1}),$ (52)

where

$\bar K = \frac{1}{n} \sum_{k \in T_K} k = \frac{1}{n} \sum_{k=(K-1)n}^{Kn-1} k.$ (53)

Consider the time-averaged probability

$\bar P(i,\bar K) \equiv \frac{1}{n} \sum_{k \in T_K} p(i,k).$ (54)

Using the Markov Eq. (47) and the complementary approximation (52), we obtain the new Master Equation

$\bar P(i,K) \approx \sum_j \bar W(i \mid j)\, \bar P(j,K-1).$ (55)

This equation is much simpler than Eq. (47), since it applies in the space M of the M mesostates (i), whereas (47) is valid in the space Mⁿ of n successive mesostates. However, Eq. (55) relies on several approximations that are difficult to control. In spite of these difficulties, which can only be precisely discussed for specific examples, Master Equations like (55), resulting from deterministic microscopic systems by coarse-graining both their states and time, are a practical way, used in innumerable works, to study their evolution at a mesoscopic scale.

3. Discussion of the Markov representation derived from Hamiltonian dynamics, and estimation of the uniformization time

The previous results show that the coarse-grained mesoscopic dynamics can eventually be represented by a Master Equation, because the memory of this dynamics is gradually lost over time. However, they do not provide the time scale of this fading. In order to estimate its order of magnitude simply, we make an intuitive remark: the conditional probability to jump from some mesostate i to another one can be evaluated without knowing the past history of the system if one knows the initial microscopic distribution over i. The only unbiased initial distribution is the uniform one. Thus, one can consider that the system has a memory limited to one time step if uniformity is approximately realized in each mesoscopic cell: this is the basis of the elementary Markov models of mesoscopic evolution. Let T be the average time needed to reach uniformity at a mesoscopic scale, starting from strong inhomogeneity. In a first approximation it is reasonable to use this uniformization time T to characterize the time scale over which a Markov evolution can describe the system.

3.1 Uniformization time in a mesoscopic cell: An elementary estimation for Hamiltonian systems

Using oversimplified but reasonable arguments [17], we now coarsely estimate the uniformization time T in a mesoscopic cell. As an example, we consider n identical particles initially located in this cell, among N identical particles in an isolated vessel. The complete system obeys Hamiltonian mechanics.

Assume that the particles constitute a gas under normal conditions, with density ρ ≈ 3×10²⁵ molecules·m⁻³. A mesoscopic cell can be reasonably represented by a cube of size l ≈ 10⁻⁶ m (as an order of magnitude), which contains n ≈ 3×10⁷ molecules. We now divide the mesoscopic cell into m "microscopic" cells whose size λ is comparable to the size of a molecule: each of these microscopic cells, however, should contain a sufficient number of particles to allow them to interact from time to time. We can take λ ≈ 10⁻⁸ m, so each microscopic cell approximately contains 30 molecules, and there are m ≈ 10⁶ microscopic cells in a mesoscopic cell. The particles have an average velocity modulus v ≈ 500 m·s⁻¹ in typical conditions. They can jump between the various microcells of the same mesoscopic cell. They can also jump out of their initial mesoscopic cell, but they are replaced by molecules proceeding from other cells, and we assume that these contrary effects coarsely compensate each other, except in the first stage of the evolution if the initial mesoscopic distribution is strongly inhomogeneous.

Because all particles are identical, an almost microscopic configuration of a mesoscopic cell can be defined by specifying the number of particles in each of its microscopic cells. Focusing on a given mesoscopic cell, we compute the number of its possible configurations, and we estimate the average time θ necessary for the system to visit all these configurations. Note that the uniformization time T is obviously much larger than θ: T ≫ θ. So, θ is a lower bound of T.

The number of ways of partitioning the nidentical particles into the mmicroscopic cells is

$C = \frac{(m+n-1)!}{n!\,(m-1)!} \approx e^{(m+n)\varphi(x)}, \quad \text{with } \varphi(x) = -x \ln x - (1-x)\ln(1-x) \text{ and } x = \frac{n}{m+n}.$ (56)

The system jumps from one of these configurations to another one each time one of the particles jumps to another microscopic cell. The order of magnitude of the time needed for a particle to cross a micro-cell is λ/v, and the time between two configuration changes is τ ≈ (1/n) λ/v. In order that all configurations are visited during time θ we should have at least θ ≈ Cτ (in fact, θ should be much larger than Cτ because of the multiple visits during θ). So we conclude from (56) and relevant approximations that a lower bound of θ satisfies

$\frac{n}{x}\,\varphi(x) \le \ln \frac{v\theta}{\lambda}, \quad \text{with } x = \frac{n}{m+n} < 1.$ (57)

With the previous numerical values

$\theta \approx \frac{\lambda}{v}\left(\frac{n}{m}\right)^m \approx 2\times 10^{1130106}\ \mathrm{s},$ (58)

which is far larger than the age of the universe (now estimated to be about 14×10⁹ years, or 4.4×10¹⁷ s)!
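The combinatorial estimate (56)-(58) can be reproduced with logarithms of factorials; the sketch below is our numerical transcription of the argument (using math.lgamma, with the parameter values of the text) and gives an exponent of order 10⁶ for θ in seconds, in line with the conclusion above.

import math

# Numerical transcription of the estimate (56)-(58) (our code): ln C through
# log-factorials, then theta ~ C * lambda / (n v); parameter values from the text.
n, m = 3e7, 1e6            # molecules and microcells per mesoscopic cell
lam, v = 1e-8, 500.0       # microcell size (m) and mean molecular speed (m/s)

lnC = math.lgamma(m + n) - math.lgamma(n + 1) - math.lgamma(m)  # ln[(m+n-1)!/(n!(m-1)!)]
log10_theta = (lnC + math.log(lam / (n * v))) / math.log(10.0)
print(f"theta ~ 10^{log10_theta:.3g} s")   # an exponent of order 10^6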

Although these calculations are very rudimentary, it is clear that, in the framework of purely Hamiltonian systems, the microscopic distribution within a mesoscopic cell remains far from uniformity during any realistic time if it is initially fairly inhomogeneous.

More generally, it is clear that the uniformization time T should be of the order of the Poincaré recurrence time [32, 33, 34, 35, 36] in a mesoscopic cell, which is known to be extraordinarily long [9, 37].

3.2 An elementary, empirical approach to mesoscopic systems

The practical relevance of Markov processes for modeling a large class of physical systems is supported by a vast literature. We have seen that the progressive erasure of its memory over time allows one to justify the use of a Markov process to represent the evolution of the coarse-grained system. However, such a representation can also stem from random disturbances due to the measurements or other sources of stochasticity: then, one has to renounce a purely deterministic microscopic dynamics, as formerly proposed by many authors, even without adopting the formalism of Quantum Mechanics. It is interesting to compare the time scales of the relaxation to equilibrium in both approaches with an elementary example.

3.2.1 Uniformization induced by randomization

Suppose now that the measurement process does not induce any significant change in the average energy of the molecules (so their average velocity remains unchanged), but that it causes a random reorientation of their velocity. A rudimentary, one-dimensional model of such a randomization could be to assume that each time a molecule is about to pass to a neighboring cell, it goes indifferently to one of the neighboring microscopic cells. In a one-dimensional version of the model, a molecule performs a random walk on the η = l/λ = 10² points representing the microscopic cells contained in the mesoscopic cell, and we adopt periodic conditions at the boundaries of the mesoscopic cell. The η × η transition matrix of the process is a circulant matrix which, in its simplest version, has transition probabilities ½ to jump from any state to one of its neighbors, and it is known that its eigenvalues λ_k are λ_k = cos(2πk/η), k = 0, 1, …, [η/2]. The number of jumps necessary for relaxing to the uniform, asymptotic distribution is of the order of

$-1/\ln \lambda_1 \approx \eta^2/(2\pi^2) \approx 500,$

which corresponds to a relaxation time of 500 λ/v ≈ 10⁻⁸ s. This is very short for current measurements, but comparable with (or even larger than) the time scale of fast modern experiments. Considering a 3-dimensional model would not change this time scale significantly. It is conceivable that the molecules are not reoriented each time they leave a microscopic cell. Even if the proportion of reoriented molecules is as low as 10⁻⁶, the relaxation time is of order 10⁻² s, which is insignificant in many simple measurements. In this case the Markov representation can be justified.
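A hedged numerical transcription of this estimate (our code, with the parameter values of the text):

import numpy as np

# The circulant random walk estimate in code (our transcription, parameter
# values from the text): slowest eigenvalue cos(2 pi / eta) sets the number
# of jumps, and each jump takes about lambda / v seconds.
eta = 100                              # microcells across one mesoscopic cell
lam1 = np.cos(2 * np.pi / eta)         # largest nontrivial eigenvalue
n_jumps = -1.0 / np.log(lam1)          # ~ eta^2 / (2 pi^2) ~ 500 jumps
lam, v = 1e-8, 500.0
print(n_jumps, n_jumps * lam / v)      # ~507 jumps, ~1e-8 s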

3.2.2 Semi-classical Hamiltonian systems

In analogy with the previous randomized system, we can introduce a new source of stochasticity in the coarse-grained deterministic systems considered in Sections 2 and 3. This could be done by assuming that a particle cannot be described by a point, but by a probability density centered on the point that would represent it classically: such a description borrows one, but not all, of the axioms of wave mechanics, and it can be qualified as a "semi-quantical" description. A similar assumption can be introduced without referring to quantum mechanics, by noticing that a particle cannot be localized in a given mesoscopic cell with complete certainty, because of its finite size: if it is mainly attributed to a given cell, there exists a small probability that it also belongs to a neighboring cell. Even without formalizing these possibilities, one can presume that such random effects drastically shorten the memory of the mesoscopic process, and make it short with respect to ordinary measurement times: then the Markov approximation described in Section 2 can correctly represent the evolution of the observed coarse-grained process.

4. Conclusion

We have studied the mesoscopic, stochastic process derived from a deterministic dynamics applied to the cells determined by measurement inaccuracies. The stationary process, which arises when the microscopic initial state is distributed according to a time-invariant measure, was studied by Kolmogorov and later authors: we extended their methods and some of their results, and considered the nonstationary process which stems from a noninvariant initial measure. We have shown that, according to Jaynes' principle, the "exact" mesoscopic process can be approximately replaced by the Markov process which, at any time n, reproduces the one-time probability of each mesostate and the transition probabilities from it. This Markov process maximizes the trajectory entropy up to time n, as well as the entropy at time n, conditioned by prior events. Jaynes' principle, however, does not control the accuracy of this estimate: this was our next concern.

So, a sequence of successive approximations has been defined for the stationary mesoscopic process, based on one of our main results: the probability of any mesostate conditioned by all past events can be approximated by its probability conditioned by the n last past events only, the integer n being determined by the maximum distance allowed between these probabilities, as small as it may be. This property entails that the nonstationary mesoscopic process can be approximated by an n-times Markov process or even, after a time coarse-graining, by an ordinary one-time Markov process. These approximations require certain conditions which should be fulfilled by "normal" physical systems, with possible exceptions for slowly relaxing systems. If they are satisfied, the existence of a thermodynamic equilibrium is derived for a coarse-grained system obeying a measure-preserving deterministic dynamics, in particular a Hamiltonian dynamics, without introducing ad hoc external noises. However, very rough estimations of the relaxation time show that for reasonable values of the parameters this time is extraordinarily long and completely unrealistic.

We conclude that, although the basic hypotheses of thermodynamics can be justified from a Hamiltonian or deterministic microscopic dynamics applied to the mesoscopic cells, the observed time scales of the relaxation to equilibrium cannot be explained without going beyond pure Hamiltonian mechanics, by introducing additional random effects, in particular due to the intrinsic imprecision of the localization of the particles.

Appendix A

With the notations of Section 2.7, we consider approximation (45), which is the basis of the n-times Markov approximation both in the stationary and nonstationary situations. Repeating this approximation we can write

$p(i_{N+n-1},N+n-1;\, \ldots;\, i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_{N-n},N-n;\, \ldots;\, i_0,0)$
$= p(i_{N+n-1},N+n-1 \mid i_{N+n-2},N+n-2;\, \ldots;\, i_0,0) \cdots p(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_0,0)$
$\approx p^{(n)}(i_{N+n-1},N+n-1 \mid i_{N+n-2},N+n-2;\, \ldots;\, i_{N-1},N-1) \cdots p^{(n)}(i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_{N-n},N-n).$ (59)

The last line of (59) is $p^{(n)}(i_{N+n-1},N+n-1;\, \ldots;\, i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_{N-n},N-n)$. We write

$p(i_{N+n-1},N+n-1;\, \ldots;\, i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_{N-n},N-n;\, \ldots;\, i_0,0) \equiv p^{(n)}(i_{N+n-1},N+n-1;\, \ldots;\, i_N,N \mid i_{N-1},N-1;\, \ldots;\, i_{N-n},N-n)\; Q_N^{(n)},$ (60)

and for k > n we define l_k, using the abbreviations (22) and (23):

$l_k(i_0,\ldots,i_{k-1}) \equiv \left\langle \ln \frac{\Pi_k(i_k)}{\Pi_k^{(n)}(i_k)} \right\rangle_{i_k}.$ (61)

We have by (24)

$s_n(p) - s_k(p) = \sum_{i_0,\ldots,i_k} p(i_0,0;\, \ldots;\, i_k,k)\, \ln \frac{\Pi_k(i_k)}{\Pi_k^{(n)}(i_k)} = \langle l_k \rangle_{i_0,\ldots,i_{k-1}} \equiv \sigma_k^{(n)}.$ (62)

(Note that σ_k is positive, although this is not necessarily true for l_k.) By (24), for any positive ε,

$2 \left\langle d(\Pi_k, \Pi_k^{(n)})^2 \right\rangle_{i_0,\ldots,i_{k-1}} \leq \sigma_k^{(n)} < \varepsilon \quad \text{if } n \text{ is large enough}.$ (63)

Averaging the logarithm of Eq. (60) we have

$L_N^{(n)} \equiv \langle \ln Q_N^{(n)} \rangle = \sum_{k=0}^{n-1} \big[ s_n(p) - s_{N+k}(p) \big] \leq n \big[ s_n(p) - s(p) \big] = n\, \delta s_n.$ (64)

δs_n ≡ s_n(p) − s(p) can be interpreted as an entropy fluctuation with respect to its equilibrium thermodynamic value. If such a fluctuation relaxes exponentially to 0 with time, as usual, the last term of (64) tends to 0 when n → ∞. Then, the n-times Markov approximations of Sections 2.6 and 2.7 are justified. Although exponential relaxation can be considered as a characteristic of "normal" physical systems, slower relaxations can occur: in this case the Markov approximation may be invalid.

Appendix B

This tendency can be reasonably expected from the approximation of the exact mesoscopic process by Markov processes, but it can only be established by adding further assumptions to the basic ones. We first prove a simple, useful lemma.

B.1. Lemma. Consider a d-dimensional sequence u_{n,k} with two positive integer indices n, k, satisfying the following properties:

  1. it is absolutely bounded: there is a positive real number M such that |u_{n,k}| < M for all integers n, k;

  2. for all n, k, there are positive numbers ε_n (independent of k) and ν < 1 (independent of n and k) such that

$|u_{n,k}| < \varepsilon_n + \nu\, |u_{n,k-1}| \quad \text{and} \quad \varepsilon_n \to 0 \ \text{if } n \to \infty.$ (65)

Then u_{n,k} → 0 if n → ∞ and k → ∞.

In fact, for any positive ε, there is an integer n₀ such that ε_n < ε if n > n₀, and

$|u_{n,k}| < \varepsilon (1 + \nu + \cdots + \nu^{k-1}) + \nu^k |u_{n,0}| < \frac{\varepsilon}{1-\nu} + \nu^k |u_{n,0}| \quad \text{if } n > n_0.$ (66)

So, |u_{n,k}| can be made as small as desired by choosing n and k large enough.
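A quick numerical illustration of the lemma (our example values for ν and ε_n), iterating the worst case of the bound (65):

# Numerical illustration of Lemma B.1 (our example values): iterating the
# worst case of the bound (65), u <- eps_n + nu * u, with eps_n -> 0 and nu < 1.
nu = 0.7
for n in (1, 5, 20, 100):
    eps_n, u = 1.0 / n, 1.0            # eps_n -> 0 as n grows; u_{n,0} = 1
    for k in range(60):
        u = eps_n + nu * u
    print(n, u)                        # approaches eps_n / (1 - nu) -> 0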

B.2. For given integers n and K larger than 1, and states i_k ∈ M, k = 0, 1, …, (K+1)n−1, we will write $p(i_0,0;\, i_1,1;\, \ldots;\, i_{(K+1)n-1},(K+1)n-1) \equiv p(0;\, 1;\, \ldots;\, (K+1)n-1)$ for the sake of simplicity. In these abbreviated notations, we have

$p(Kn;\, Kn+1;\, \ldots;\, (K+1)n-1) = \sum_{i_0,\ldots,i_{Kn-1}} p\big((K+1)n-1;\, \ldots;\, Kn \mid Kn-1;\, \ldots;\, 0\big)\; p(0;\, 1;\, \ldots;\, Kn-1).$ (67)

We know that

$p\big((K+1)n-1;\, \ldots;\, Kn \mid Kn-1;\, \ldots;\, 0\big) = p^0\big((K+1)n-1;\, \ldots;\, Kn \mid Kn-1;\, \ldots;\, 0\big),$ (68)

p⁰ being the stationary probabilities, and that, for large n,

$p^0\big((K+1)n-1;\, \ldots;\, Kn \mid Kn-1;\, \ldots;\, 0\big) \approx p^0\big((K+1)n-1;\, \ldots;\, Kn \mid Kn-1;\, \ldots;\, (K-1)n\big).$ (69)

More precisely, under the conditions discussed previously, for any given positive ε there is an integer n(ε) such that

$\left| p^0\big((K+1)n-1;\, \ldots;\, Kn \mid Kn-1;\, \ldots;\, 0\big) - p^0\big((K+1)n-1;\, \ldots;\, Kn \mid Kn-1;\, \ldots;\, (K-1)n\big) \right| < \varepsilon \quad \text{if } n \geq n(\varepsilon).$ (70)

So, Eq. (67) becomes

$p(Kn;\, Kn+1;\, \ldots;\, (K+1)n-1) = \sum_{i_0,\ldots,i_{Kn-1}} \big[ p\big((K+1)n-1;\, \ldots;\, Kn \mid Kn-1;\, \ldots;\, 0\big) - p^0\big((K+1)n-1;\, \ldots;\, Kn \mid Kn-1;\, \ldots;\, (K-1)n\big) \big]\; p(0;\, 1;\, \ldots;\, Kn-1)$
$\qquad + \sum_{i_{(K-1)n},\ldots,i_{Kn-1}} p^0\big((K+1)n-1;\, \ldots;\, Kn \mid Kn-1;\, \ldots;\, (K-1)n\big)\; p\big((K-1)n;\, \ldots;\, Kn-1\big)$
$\equiv a_{n,K} + \sum_{i_{(K-1)n},\ldots,i_{Kn-1}} p^0\big((K+1)n-1;\, \ldots;\, Kn \mid Kn-1;\, \ldots;\, (K-1)n\big)\; p\big((K-1)n;\, \ldots;\, Kn-1\big),$ (71)

where the first term of the last line satisfies |a_{n,K}| < ε if n ≥ n(ε).

The second term in the last line of (71) is, in other notations, the right-hand side of the approximate Master Eq. (47) of Section 2.7:

$P(I_K,T_K) \approx \sum_{I_{K-1}} W(I_K \mid I_{K-1})\, P(I_{K-1},T_{K-1}).$ (72)

This approximate Master Equation can now be written more precisely, in the notations of Section 2.7:

$P(I_K,T_K) = A_{n,K}(I_K,T_K) + \sum_{I_{K-1}} W(I_K \mid I_{K-1})\, P(I_{K-1},T_{K-1}),$ (73)

where $A_{n,K}(I_K,T_K)$ is just the a_{n,K} term of (71) expressed in the notations of Section 2.7, where T_K is the group of n successive times (Kn, Kn+1, …, (K+1)n−1) and $I_K = (i_{Kn}, i_{Kn+1}, \ldots, i_{(K+1)n-1}) \in M^n$ describes the corresponding partial history of the mesoscopic system.

On the other hand, we know that the stationary distribution P⁰ satisfies the Master Eq. (72) exactly. So, writing

$U_{n,K}(I_K,T_K) \equiv P(I_K,T_K) - P^0(I_K,T_K),$ (74)

we have

$U_{n,K}(I_K,T_K) = A_{n,K}(I_K,T_K) + \sum_{I_{K-1}} W(I_K \mid I_{K-1})\, U_{n,K-1}(I_{K-1},T_{K-1}).$ (75)

Note that W depends on n but is independent of K.

From (72)-(75) it results that $U_{n,K}(I_K,T_K)$ tends to 0 when n and K tend to infinity if, furthermore, the following condition (c) holds. In fact, the (Mⁿ-dimensional) vector U_{n,K} is orthogonal to the left eigenvector of the matrix W with eigenvalue 1. All the eigenvalues of the projection of W onto the corresponding complementary subspace have an absolute value smaller than 1. Thus Lemma B.1 applies if condition (c) is satisfied:

(c) When n increases, the absolute values of the nonstationary eigenvalues of W (those differing from 1) have an upper bound strictly smaller than 1.

This property is likely to hold if the actual stationary mesoscopic process is not too different from an exact Markov process. So, it is reasonable to conjecture that property (c) holds for typical actual systems.
