Open access peer-reviewed chapter

Fundamentals of Irreversible Thermodynamics for Coupled Transport

Written By

Albert S. Kim

Submitted: 07 February 2019 Reviewed: 30 April 2019 Published: 02 August 2019

DOI: 10.5772/intechopen.86607

From the Edited Volume

Non-Equilibrium Particle Dynamics

Edited by Albert S. Kim


Abstract

Engineering phenomena occur in open systems undergoing irreversible, non-equilibrium processes of coupled mass, energy, and momentum transport. The momentum transport often becomes a primary or background process, on which driving forces of physical gradients govern mass and heat transfer rates. Although in the steady state no physical variables vary explicitly with time, entropy increases with time as long as the systems are open. The degree of irreversibility can be measured by the entropy-increasing rate, first proposed by L. Onsager. This chapter conceptually reorganizes the entropy and its rate in broader aspects. Diffusion is fully described as an irreversible, i.e., entropy-increasing, phenomenon using four different physical pictures. Finally, an irreversible thermodynamic formalism using effective driving forces is established as an extension of Onsager’s reciprocal theorem and applied to core engineering phenomena of fundamental importance: solute diffusion and thermal flux. In addition, the osmotic and thermal fluxes are explained within a unified theoretical framework.

Keywords

  • irreversible thermodynamics
  • non-equilibrium thermodynamics
  • Onsager’s reciprocal theorem
  • entropy rate
  • diffusion pictures
  • irreversible transport equation

1. Introduction

This chapter contributes to a comprehensive explanation of the steady-state thermodynamics of irreversible processes with detailed theoretical derivations and examples. The origin and definitions of entropy are described, irreversible thermodynamics for a steady state is revisited based on Onsager’s reciprocal theorem, and thermal and solute diffusion phenomena are recapitulated as examples of single-component irreversible thermodynamic processes.

1.1 Thermodynamic states

In fundamental and applied sciences, thermodynamics (or statistical mechanics) plays an important role in understanding macroscopic behaviors of a thermodynamic system using microscopic properties of the system. Thermodynamic systems have three classifications based on their respective transport conditions at interfaces.

An open system allows energy and mass transfer across its interface, a closed system allows transfer of energy only while preventing mass transfer, and an isolated system allows no transport across its interface.

Transfer phenomena of mass and energy are represented using the concept of flux, which is defined as the rate at which a physical variable of interest passes across a unit cross-sectional area per unit time. If the flux is constant, input and output rates of a physical quantity within a finite volume are equal, and the density remains constant because the net accumulation within the system is zero. If the flux varies spatially, specifically $\mathbf{J} = \mathbf{J}(x,y,z)$, then its density within the specified volume changes with time, i.e., $\rho = \rho(t)$. This balance is expressed by the equation of continuity:

$$\frac{\partial \rho}{\partial t} + \nabla\cdot\mathbf{J} = 0 \tag{1}$$
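A minimal numerical sketch of Eq. (1): the snippet below (with a hypothetical 1D grid, Gaussian density profile, and uniform velocity, none of which come from the chapter) advects a density with flux $J = \rho v$ and checks that the total mass in a periodic domain stays constant, which is exactly the statement that accumulation vanishes when inflow and outflow balance.

```python
import numpy as np

# Hypothetical 1D setup: density advected by a uniform velocity v, flux J = rho * v.
nx, L, v, dt, nsteps = 200, 1.0, 0.5, 5e-4, 400
x = np.linspace(0.0, L, nx, endpoint=False)
dx = x[1] - x[0]
rho = np.exp(-((x - 0.3) / 0.05) ** 2)          # initial Gaussian density profile

mass0 = rho.sum() * dx
for _ in range(nsteps):
    J = rho * v                                  # advective flux
    dJdx = (J - np.roll(J, 1)) / dx              # first-order upwind (v > 0), periodic boundaries
    rho = rho - dt * dJdx                        # Eq. (1): d(rho)/dt = -dJ/dx

print("relative mass change:", abs(rho.sum() * dx - mass0) / mass0)   # ~ machine precision
```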

Many engineering processes occur in an open environment, having specific mass and energy transfer phenomena as practical goals. An exception is a batch reaction, where interfacial transport is blocked and a transient variation of the internal system is of concern. If the internal characteristic of the open system changes with time, the system moves toward a transient, non-equilibrium state. However, the transiency is subject to the human perception of the respective time scale. If engineering system performance is averaged over a macroscopic time scale, such as hours, days, and weeks, the time-averaged performance is a primary concern as those quantities can be compared with experimental data. Instead of transiency, the time to reach a steady state becomes more important in operating engineering processes because a steady-state operation is usually sought. Usually, the time to reach a steady state is much shorter than the standard operation time in a steady state.

1.2 Time scale and transiency

In theoretical physics, statistical mechanics and fluid dynamics are not fully unified, and non-equilibrium thermodynamics remains an unsolved problem. It is often assumed that the fluid flow is not highly turbulent and that a steady state is reached with a fully developed flow field. The thermodynamic characteristics are maintained within the steady flow, and static equilibrium is assumed to be valid within small moving fluid elements. In such a situation, each fluid element can be qualitatively analogous to a microstate of the thermodynamic ensemble.

Nevertheless, a conflict between thermodynamics and fluid dynamics stems from the absence of a clear boundary between the static equilibrium of isolated systems and the steady state of open systems. In principle, the steady state belongs to the non-equilibrium regime although the partial time derivatives of all physical quantities are assumed to be zero (i.e., $\partial/\partial t = 0$). A density does not change with time, but the flux exists as finite and constant in time and space. The time scale of particle motion can be expressed using the particle relaxation time defined as $\tau_p = m/\beta$, where $m$ and $\beta$ are the particle mass and Stokes' drag coefficient, respectively. The time scale for the fluid flow can be evaluated as the characteristic length divided by the mean flow speed, and the particle relaxation time scale is much shorter than the flow time scale. Therefore, the local equilibrium may be applied without significant deviation from the real thermodynamic state.

In engineering, various dimensionless numbers are often used to characterize a system of interest. The Reynolds (Re) and Péclet (Pe) numbers indicate ratios of the convective transport to viscous momentum and diffusive heat/mass transport in a fluid, respectively. The Nusselt and Sherwood numbers represent ratios of the diffusion length scale as compared to the boundary layer thickness of the thermal and mass diffusion phenomenon, respectively. The Prandtl and Schmidt numbers represent ratios of momentum as compared to thermal and mass diffusivities, respectively. Other dimensionless numbers include the Biot number (Bi) (for both heat and mass transfer), the Knudsen number (Kn) (molecular mean free path to system length scale), the Grashof (Gr) number (natural buoyancy to viscous forces), and the Rayleigh number (natural convective to diffusive heat/mass transfer).

Note that all the dimensionless numbers described here implicitly assume the presence of fluid flow in open systems because they quantify the relative significance of energy, momentum, and mass transport. The static equilibrium approximation (SEA) is appropriate if the viscous force is dominant within a fluid region, preventing transient system fluctuations. This approximation is needed because non-equilibrium thermodynamics is not fully established in theoretical physics, and steady-state thermodynamics requires experimental observations to determine the thermodynamic coefficients between driving forces and generated fluxes.

1.3 Statistical ensembles

Thermodynamics often deals with macroscopic, measurable phenomena of systems of interest, consisting of objects (e.g., molecules or particles) within a volume. Statistical mechanics is considered as a probabilistic approach to study the microscopic aspects of thermodynamic systems using microstates and ensembles and to explain the macroscopic behavior of the respective systems.

Seven variables exist within statistical mechanics (i.e., temperature T , pressure P , and particle number N , which are conjugated to entropy S , volume V , chemical potential μ , and finally energy E of various forms). The thermodynamic ensemble uses the first and second laws of thermodynamics and provides constraints of having three out of the six variables (excluding E ) remaining constant. The other three conjugate variables are theoretically calculated or experimentally measured. Statistical ensembles are either isothermal (for constant temperature) or adiabatic (of zero heat exchanged at interfaces). The adiabatic category includes NVE (microcanonical), NPH , μVL , and μPR ensembles, and isothermal ensembles possess NVT (canonical), NPT (isobaric-isothermal or Gibbs), and μVT (grand canonical). Here, ensembles of NVE and NPH are called microcanonical and isenthalpic, and those of NVT , μVT , and NPT are called canonical, grand canonical, and isothermal-isobaric, respectively. Within statistical mechanical theories and simulations, canonical ensembles are most widely used, followed by grand canonical and isothermal-isobaric ensembles. The adiabatic ensembles are equivalent to isentropic ensembles (of constant entropy) and are represented as NVS , NPS , μVS and μPS instead of NVE , NPH , μVL , and μPR , respectively. Non-isothermal ensembles often represent entropy S as a function of a specific energy form, of which details can be found elsewhere [1].


2. Entropy revisited

2.1 Thermodynamic laws

Thermodynamic laws can be summarized as follows:

  • The zeroth law: For thermodynamic systems $A$, $B$, and $C$, if $A$ is in thermal equilibrium with $C$ and $B$ is in thermal equilibrium with $C$, then $A$ is in thermal equilibrium with $B$.

  • The first law: The internal energy change $\Delta U$ is equal to the energy added to the system $Q$, subtracted by the work done by the system $W$ (i.e., $\Delta U = Q - W$).

  • The second law: An element of irreversible heat transferred, δQ , is a product of the temperature T and the increment of its conjugate variable S (i.e., δQ = T d S ).

  • The third law: As $T \to 0$, $S \to$ constant, and $S = k_B \ln \Omega$, where $\Omega$ is the number of microstates.

The entropy $S$ is defined in the second law, and its fundamental property is described in the third law, linking the macroscopic element of irreversible heat transferred (i.e., $\delta Q$) and the microstates of the system.

Suppose you have $N$ objects (e.g., people) and need to position them in a straight line consisting of the same number of seats. The first and second objects have $N$ and $N-1$ choices, respectively; similarly, the third one has $N-2$ choices, the fourth one has $N-3$ choices, and so on. The total number of ways of this arrangement is as follows:

$$N(N-1)(N-2)(N-3)\cdots 2\cdot 1 = N! \tag{2}$$

Example 1: In a car, there are four seats including the driver's. Three guests will occupy the same number of seats. How many different configurations are available? There are three people, A, B, and C, and three seats, $S_1$, $S_2$, and $S_3$. If A chooses a seat first, then A has three choices. Then, B and C have, in sequence, two choices and one choice. The total number of possible configurations is $3\cdot 2\cdot 1 = 3! = 6$.

Next, suppose the $N$ objects are divided into two groups. Group 1 and group 2 contain $N_1$ and $N_2$ objects, respectively. Then, the total number of possible ways to place $N$ objects into the two groups is

$$\frac{N!}{N_1!\,N_2!}$$

which is equal to the number of combinations of N objects taking N 1 objects at a time

$$C^{N}_{N_1} = \frac{N!}{N_1!\,(N-N_1)!} \tag{3}$$

For example, consider the following equation of a binomial expansion

$$(x+y)^3 = 1\cdot x^3 + 3x^2y + 3xy^2 + 1\cdot y^3 = \sum_{n=0}^{3} a_n x^n y^{3-n} \tag{4}$$

where $a_0 = a_3 = 1$ and $a_1 = a_2 = 3$. For the power of $N$, the expansion is

$$(x_1+x_2)^N = \sum_{N_1=0}^{N}\sum_{N_2=0}^{N} \frac{N!}{N_1!\,N_2!}\, x_1^{N_1} x_2^{N_2} = \sum_{k=0}^{N} C^N_k\, x_1^{k} x_2^{N-k} \tag{5}$$

where N 1 + N 2 = N and

$$C^N_k = \frac{N!}{k!\,(N-k)!} = C^N_{N-k} \tag{6}$$

If we add $x_3$ with the constraint condition $N = \sum_{k=1}^{3} N_k$, then

$$(x_1+x_2+x_3)^N = \sum_{N_1=0}^{N}\sum_{N_2=0}^{N}\sum_{N_3=0}^{N} \frac{N!}{N_1!\,N_2!\,N_3!}\, x_1^{N_1} x_2^{N_2} x_3^{N_3} \tag{7}$$

where the coefficient of the polynomial expansion can be written as follows:

$$\frac{N!}{N_1!\,N_2!\,N_3!} = \frac{N!}{\prod_{k=1}^{3} N_k!} \tag{8}$$

using the product notation of

$$y_1 y_2 y_3 \cdots y_m = \prod_{k=1}^{m} y_k \tag{9}$$

Example 2: Imagine that we have three containers and ten balls. Each container has enough room to hold all ten balls. Let $N_i$ (for $i = 1,\ldots,3$) be the number of balls in the $i$th container. How many different configurations are available to put ten balls into the three containers? If $N_1 = 2$, $N_2 = 3$, and $N_3 = 5$, then the answer is 2520:

$$\frac{N!}{N_1!\,N_2!\,N_3!} = \frac{10!}{2!\,3!\,5!} = \frac{3628800}{2\cdot 6\cdot 120} = 2520 \tag{10}$$

satisfying N = N 1 + N 2 + N 3 = 10 .
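The counting in Eqs. (8) and (10) can be reproduced directly; the following is a small sketch using Python's standard factorial, where the helper name `multinomial` is only an illustrative label.

```python
from math import factorial, prod

def multinomial(*groups):
    """Ways to split sum(groups) distinct objects into groups of the given sizes, Eq. (8)."""
    n = sum(groups)
    return factorial(n) // prod(factorial(k) for k in groups)

print(multinomial(2, 3, 5))     # 2520, matching Eq. (10)
print(multinomial(1, 1, 1))     # 3! = 6, Example 1: three guests each taking one distinct seat
```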

2.2 Definitions

2.2.1 Boltzmann’s entropy

A thermodynamic system is assumed to consist of a number of small micro-systems. Say that there are $N$ micro-systems and $m \ll N$ thermodynamic states. This situation is similar to $N = 10$ balls in $m = 3$ containers. The numbers of balls in containers 1, 2, and 3 are $N_1$, $N_2$, and $N_3$, respectively. Then the total number of different configurations of micro-systems in the $m$ micro-states is defined as

$$\Omega_N = \frac{N!}{\prod_{k=1}^{m} N_k!} \tag{11}$$

Boltzmann proposed a representation of entropy of the entire ensemble as

$$S_B = k_B \ln \Omega_N \tag{12}$$

2.2.2 Gibbs entropy

The Gibbs entropy can be written using Ω , as

$$\frac{S}{k_B} = \ln \Omega_N = \ln\!\left(\frac{N!}{\prod_{k=1}^{m} N_k!}\right) = \ln N! - \sum_{k=1}^{m} \ln N_k!$$

and using Stirling’s formula as

$$\ln N! \simeq N\ln(N/e)$$

for a large $N \gg 1$, to derive

$$\frac{S}{k_B} \simeq N\ln(N/e) - \sum_{k=1}^{m} N_k \ln(N_k/e) = -N \sum_{k=1}^{m} \frac{N_k}{N}\ln\frac{N_k}{N}$$

Finally, we have

$$S = -k_B N \sum_{k=1}^{m} p_k \ln p_k$$

where $p_k = N_k/N$ is the probability of finding the system in thermodynamic state $k$. Gibbs introduced a form of entropy as

$$s_G = -k_B \sum_{k=1}^{m} p_k \ln p_k$$

which is equal to the system entropy per object or particle, denoted as

$$s_G = \frac{S}{N} = -k_B \sum_{k=1}^{m} p_k \ln p_k$$

2.2.3 Shannon’s entropy

In information theory, Shannon’s entropy is defined as [2]

$$S_{Sh} = -\sum_i p_i \log_b p_i \tag{13}$$

As the digital representation of integers is binary, the base $b$ is often set to two. Note that Shannon's entropy is identical to the Gibbs entropy if Boltzmann's constant $k_B$ is discarded and the natural logarithm $\ln = \log_e$ is replaced by $\log_2$. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves. Example 3 deals with tossing a coin or a die and shows how the entropy $S$ increases with the number of available outcomes.

Example 3: Let us consider two conventional examples, i.e., a coin and a die. Their Gibbs entropy values (i.e., entropy per object) are

$$\frac{s_{\text{coin}}}{k_B} = -\sum_{k=1}^{2} p_k \ln p_k = -\sum_{k=1}^{2} \frac{1}{2}\ln\frac{1}{2} = \ln 2 = 0.6931$$
$$\frac{s_{\text{dice}}}{k_B} = -\sum_{k=1}^{6} \frac{1}{6}\ln\frac{1}{6} = \ln 6 = 1.791$$

The system entropies of the coin and the dice are

$$S_{\text{coin}}/k_B = 2\times 0.6931 = 1.386$$
$$S_{\text{dice}}/k_B = 6\times 1.791 = 10.75$$

and their ratio is

$$\frac{S_{\text{dice}}}{S_{\text{coin}}} = \frac{6\ln 6}{2\ln 2} = 3\,\frac{\ln 6}{\ln 2} = 3\times 2.5850 = 7.754 > 3$$

where three indicates the ratio of the number of available outcomes of a die (6) to that of a coin (2). The entropy ratio, 7.754, is higher than the ratio of available states, 3.
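The numbers in Example 3 follow directly from the Gibbs/Shannon form; a brief sketch below computes the per-outcome entropy in units of $k_B$ and the system entropies as defined above (multiplying by the number of outcomes, as in the text).

```python
import numpy as np

def gibbs_entropy(p):
    """Entropy per object in units of k_B: -sum p_k ln p_k."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

s_coin = gibbs_entropy([1/2] * 2)          # ln 2 = 0.6931
s_dice = gibbs_entropy([1/6] * 6)          # ln 6 = 1.791
S_coin = 2 * s_coin                        # 1.386
S_dice = 6 * s_dice                        # 10.75
print(s_coin, s_dice, S_dice / S_coin)     # ratio = 3 ln6/ln2 = 7.754 > 3
```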


3. Diffusion: an irreversible phenomenon

Diffusion refers to a net flow of matter from a region of high concentration to a region of low concentration, which is an entropy-increasing process, from a more ordered to a less ordered state of molecular locations. For example, when a lump of sugar is added to a cup of coffee for a sweeter taste, the solid cube of sugar dissolves, and the molecules spread out until evenly distributed. This change from a localized to a more even distribution is a spontaneous and, more importantly, irreversible process. In other words, diffusion occurs by itself without external driving forces. In addition, once diffusion occurs, it is not possible for the molecular distribution to return to its original undiffused state. If diffusion did not occur spontaneously, then there would be no natural mixing, and one may have a bitter coffee taste and a sweet sugar taste in an unmixed liquid phase. In general, diffusion is closely related to mixing and demixing (separation) in a plethora of engineering applications. Why does diffusion occur, and how do we understand the spontaneous phenomena? The key is the entropy-changing rate from one static equilibrium to another. Before discussing diffusion as an irreversible phenomenon, however, the following section presents several pictures to create a better understanding of the diffusion phenomenon as one of the irreversible thermodynamic processes.

3.1 Mutual diffusion

Diffusion is often driven by the concentration gradient, $\nabla c$, typically at finite volume, temperature, and pressure. As temperature increases, molecules gain kinetic energy and diffuse more actively, eventually positioning themselves evenly within the volume. A general driving force for isothermal diffusion is the gradient of the chemical potential, $\nabla\mu$, between regions of higher and lower concentrations.

As shown in Figure 1, diffusion of solute molecules after removing the mid-wall is spontaneous. Initially, two equal-sized rectangular chambers A and B are separated by an impermeable wall between them. The thickness of the mid-wall is negligible in comparison to the box length; in each chamber of A and B , the same amount of water is contained. Chamber A contains seawater of salt concentration 35,000 ppm, and chamber B contains fresh water of zero salt concentration. If the separating wall is removed slowly enough not to disturb the stationary solvent medium but fast enough to initialize a sharp concentration boundary between the two concentration regions, then the concentration in B increases as much as that in A decreases because mass is neither created nor annihilated inside the container. This spontaneous mixing continues until both concentrations become equal and, hence, reach a thermodynamic equilibrium consisting of a half-seawater/half-fresh water concentration throughout the entire box. Diffusion occurs wherever and whenever the concentration gradient exists, and diffusive solute flux is represented using Fick’s law as follows [3, 4]:

Figure 1.

Diffusion in a rectangular container consisting of two equal-sized chambers A and B (a) before and (b) after the mid-wall is removed.

$$J_s = -D\,\frac{dc}{dx}\quad\text{(in 1D)} \tag{14}$$

or

$$\mathbf{J}_s = -D\,\nabla c\quad\text{(in 3D)} \tag{15}$$

where $D$ is the diffusion coefficient (also often called diffusivity) with units of m$^2$/s. A length scale of diffusion can be estimated by $\sqrt{D\,\delta t}$, where $\delta t$ is a representative time interval. In molecular motion, $\delta t$ can be interpreted as the time required for a molecule to move as much as a mean free path (i.e., a statistically averaged distance between two consecutive collisions).

3.2 Stokes-Einstein diffusivity

When the solute concentration is low so that interactions between solutes are negligible, the diffusion coefficient, known as the Stokes-Einstein diffusivity, may be given by

$$D_0 = \frac{k_B T}{6\pi\eta a} \tag{16}$$

where $k_B$ is the Boltzmann constant, $\eta$ is the solvent viscosity1, and $a$ is the (hydrodynamic) radius of solute particles. Stokes derived the hydrodynamic force that a stationary sphere experiences when it is positioned in an ambient flow [5]:

$$F_H = 6\pi\eta a v \tag{17}$$

where $v$ represents a uniform fluid velocity, which can be interpreted as the velocity of a particle relative to that of an ambient fluid. $F_H$ is linearly proportional to $v$, and its proportionality constant $6\pi\eta a$ is the denominator of the right-hand side of Eq. (16). Einstein used the transition probability of molecules from one site to another, and Langevin considered the molecular collisions as random forces acting on a solute (see Section 3.3 for details). Einstein and Langevin independently derived the same equation as Eq. (16), of which the general form can be rewritten as

$$D_0 = \frac{k_B T}{2d\,\pi\eta a} \tag{18}$$

where d is the spatial dimension of the diffusive system (i.e., d = 1 , 2 , and 3 for 1D, 2D, and 3D spaces).
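As a numerical illustration of Eq. (16), the sketch below evaluates $D_0$ for a hypothetical solute of 1 nm hydrodynamic radius in water at 298 K ($\eta \approx 0.89$ mPa·s); the property values are illustrative assumptions, not taken from the chapter.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T   = 298.15            # temperature, K
eta = 8.9e-4            # water viscosity, Pa*s (approximate)
a   = 1.0e-9            # hydrodynamic radius, m (hypothetical solute)

D0 = k_B * T / (6 * math.pi * eta * a)        # Eq. (16)
print(f"D0 = {D0:.3e} m^2/s")                 # ~2.4e-10 m^2/s

# Diffusion length over delta_t = 1 s, per Section 3.1: sqrt(D * delta_t)
print(f"diffusion length over 1 s: {math.sqrt(D0 * 1.0):.2e} m")
```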

3.3 Diffusion pictures

Several pictures of diffusion phenomena are discussed in the following sections, giving probabilistic and deterministic viewpoints. Consider an ideal situation where there exists only one salt molecule in a box filled with solvent (e.g., water) at finite $T$, $P$, and $V$. Since only the sole molecule exists, there is no concentration gradient. Mathematically, the concentration is infinite at the location of the molecule and absolutely zero anywhere else: $c = V^{-1}\delta(\mathbf{r}-\mathbf{r}_0)$, where $\mathbf{r}_0$ is the initial position of the solute and $\mathbf{r}$ is an arbitrary location within the volume. However, the following question arises: why does a single molecule diffuse in the absence of other solute molecules with which to collide? The answer is that the solvent medium consists of a great number of (water) molecules having a size of the order of $O(10^{-10})$ m. The salt molecule suffers a tremendous number of collisions with solvent molecules of a certain kinetic energy at temperature $T$. Since each of these collisions can be thought of as producing a jump of the molecule, the molecule must be found at a distance from the initial position where the diffusion started. In this case, the molecule undergoes Brownian motion. Note that the single molecule collides only with solvent molecules while diffusing, which is a type of diffusion called self-diffusion.

3.3.1 Self-diffusion and random walk

A particle initially located at $\mathbf{r}_0$ has equal probabilities of 1/6 to move in the $\pm x$, $\pm y$, and $\pm z$ directions. For mathematical simplicity, we restrict ourselves to the 1D random walk of a dizzy individual, who moves to the right or to the left with a 50:50 chance. Initially (at time $t = 0$), the individual is located at $x_0 = 0$ and starts moving in a direction represented by $\Delta x = \pm l$, where $+l$ and $-l$ indicate the rightward and leftward distances that the individual travels with equal probability. At the next step, $t_1 = t_0 + \Delta t = \Delta t$, the individual's location is found at

$$x_1 = x_0 + \Delta x_1 = \Delta x_1 \tag{19}$$

where $\Delta x_1$ can be $+l$ or $-l$. At the time of the second step, $t_2 = t_1 + \Delta t = 2\Delta t$, the position is

$$x_2 = x_1 + \Delta x_2 = \Delta x_1 + \Delta x_2 \tag{20}$$

where $\Delta x_2 = \pm l$. At $t_n = n\Delta t$ ($n \geq 1$), the position may be expressed as

$$x_n = \Delta x_1 + \Delta x_2 + \cdots + \Delta x_{n-1} + \Delta x_n = \sum_{i=1}^{n} \Delta x_i \tag{21}$$

If there are a number of dizzy individuals and we can determine an average for their seemingly random movements, then

$$\langle x_n\rangle = \sum_{i=1}^{n} \langle\Delta x_i\rangle = n\langle\Delta x\rangle = 0 \tag{22}$$

because $\Delta x$ has a 50:50 chance of being $+l$ or $-l$:

$$\langle\Delta x\rangle = (+l)\cdot\tfrac{1}{2} + (-l)\cdot\tfrac{1}{2} = 0 \tag{23}$$

Now let us calculate the mean of $x_n^2$:

$$\langle x_n^2\rangle = \langle(\Delta x_1 + \cdots + \Delta x_n)(\Delta x_1 + \cdots + \Delta x_n)\rangle = \langle \Delta x_1^2 + \Delta x_1\Delta x_2 + \cdots + \Delta x_1\Delta x_n + \Delta x_2\Delta x_1 + \Delta x_2^2 + \cdots + \Delta x_2\Delta x_n + \cdots + \Delta x_n\Delta x_1 + \Delta x_n\Delta x_2 + \cdots + \Delta x_n^2\rangle \tag{24}$$

and in a concise form

$$\langle x_n^2\rangle = \sum_{i\neq j}\langle\Delta x_i\Delta x_j\rangle + \sum_{k=1}^{n}\langle\Delta x_k^2\rangle = 0 + n\langle\Delta x^2\rangle = n l^2 \tag{25}$$

because $\sum_{i\neq j}\langle\Delta x_i\Delta x_j\rangle = 0$ and $\langle\Delta x_k^2\rangle = \langle\Delta x^2\rangle = l^2$. In the calculation of the off-diagonal terms, $\Delta x_i\Delta x_j$ can take four sign combinations with equal chance: $(+,+)$, $(+,-)$, $(-,+)$, and $(-,-)$. The products of the two elements in the parentheses are $+$, $-$, $-$, and $+$ with equal probability of 25%, so their sum is zero. Because $n$ is the number of time steps, it can be replaced by $t/\Delta t$, where $t$ is the total elapsed time. The diffusion coefficient in one-dimensional space is $D = l^2/2\Delta t$, as shown in Eq. (28) below. Then, the mean squared distance at time $t$ is calculated as

$$\langle x^2(t)\rangle = 2Dt \tag{26}$$

and the root-mean-square distance is

$$x_{\mathrm{rms}} = \sqrt{\langle x^2(t)\rangle} = \sqrt{2Dt} \tag{27}$$

Note that $x_{\mathrm{rms}}$ is proportional to $t^{1/2}$ in the random walk, as compared to the constant-velocity case $x = vt \propto t^{1}$. The diffusivity for 1D is then explicitly

$$D = \frac{x_{\mathrm{rms}}^2}{2t} = \frac{n l^2}{2 n\Delta t} = \frac{l^2}{2\Delta t} \tag{28}$$
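The random-walk results of Eqs. (22), (25), and (26) can be verified by direct simulation; a minimal sketch with hypothetical step length and time step follows.

```python
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps, l, dt = 50_000, 1_000, 1.0, 1.0   # hypothetical walk parameters

x = np.zeros(n_walkers)
for _ in range(n_steps):
    x += rng.choice([-l, +l], size=n_walkers)   # each walker takes a +/- l step, 50:50

D = l**2 / (2 * dt)                             # Eq. (28)
print("<x>   :", x.mean())                      # ~ 0, Eq. (22)
print("<x^2> :", (x**2).mean())                 # ~ n l^2 = 1000, Eq. (25)
print("2 D t :", 2 * D * n_steps * dt)          # 1000, Eq. (26)
```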

3.3.2 Einstein’s picture

The concentration $C(x, t+\delta t)$ after an infinitesimal time duration $\delta t$ from $t$, within a range $dx$ between $x$ and $x + dx$, is calculated as [6]

$$C(x, t+\delta t)\,dx = dx \int_{-\infty}^{+\infty} C(x-y, t)\,\Phi(y)\,dy \tag{29}$$

where Φ is the transition probability for a linear displacement y and the right-hand side indicates the amount of adjacent solutes that move into the small region d x . The probability distribution satisfies

$$\int_{-\infty}^{+\infty} \Phi(y)\,dy = 1 \tag{30}$$

and we assume that $\Phi$ is a short-ranged, even function, meaning that it is non-zero only for small $y$ and symmetric, $\Phi(-y) = \Phi(y)$. In this case, we approximate the integrand of Eq. (29) as

$$C(x-y,t) = C(x,t) - \frac{\partial C}{\partial x}\,y + \frac{1}{2!}\frac{\partial^2 C}{\partial x^2}\,y^2 - \cdots \tag{31}$$

and substitute Eq. (31) into Eq. (29). We finally derive the so-called diffusion equation:

$$\frac{\partial C}{\partial t} = D_B\,\frac{\partial^2 C}{\partial x^2} \tag{32}$$

where the diffusivity is defined as

$$D_B = \frac{\langle y^2\rangle}{2\,\delta t} \tag{33}$$

where $\langle y^2\rangle$ is the mean value of $y^2$, calculated as

$$\langle y^2\rangle = \int_{-\infty}^{+\infty} y^2\,\Phi(y)\,dy \tag{34}$$

Within this calculation, we used

$$C(x, t+\delta t) = C(x,t) + \frac{\partial C}{\partial t}\,\delta t + \cdots \tag{35}$$

and

$$\langle y\rangle = \int_{-\infty}^{+\infty} y\,\Phi(y)\,dy = 0 \tag{36}$$

because $y\,\Phi(y)$ is an odd function. Mathematically, Einstein's picture uses a short-ranged transition probability function, which does not need to be specifically known, and Taylor expansions for a small time interval and a short displacement. Conditions required for Eq. (32) are as follows: (i) the transition distance is larger than the size of the molecule, $dx \gg O(a)$, and (ii) the time interval $\delta t$ is long enough to measure $dx$ after a tremendous number of collisions with solvent molecules, satisfying $\delta t \gg \tau_p$, where $\tau_p$ is the particle relaxation time (see Langevin's picture).
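Eq. (32) implies that the variance of an initially localized concentration grows as $2D_B t$; the sketch below checks this with an explicit finite-difference integration on a hypothetical grid and diffusivity (not values from the chapter).

```python
import numpy as np

D_B, L, nx = 1.0e-3, 1.0, 401                 # hypothetical diffusivity and domain
x = np.linspace(-L/2, L/2, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D_B                        # below explicit stability limit dx^2 / (2 D_B)

C = np.exp(-(x / 0.02)**2)                    # narrow initial pulse
C /= C.sum() * dx                             # normalize like a probability density
var0 = np.sum(x**2 * C) * dx

nsteps = 2000
for _ in range(nsteps):
    lap = (np.roll(C, -1) - 2*C + np.roll(C, 1)) / dx**2   # periodic second derivative
    C = C + dt * D_B * lap                    # Eq. (32): dC/dt = D_B d2C/dx2

var = np.sum(x**2 * C) * dx
print("variance growth :", var - var0)
print("2 D_B t         :", 2 * D_B * nsteps * dt)          # should nearly match
```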

3.3.3 Langevin’s picture

Let us consider a particle of mass $m$, located at $x(t)$ with velocity $v \equiv dx/dt$ at time $t$. For simplicity, we shall treat the problem of diffusion in one dimension. It would be hopeless to deterministically trace all the collisions of this particle with a large number of solvent molecules in series. However, these collisions can be regarded as a net force $A(t)$ effective in determining the time dependence of the molecule's position $x(t)$. Newton's second law of motion can then be written in the following form [7, 8]:

$$m\frac{dv}{dt} = -\beta v + A(t) \tag{37}$$

which is called Langevin's equation. In Eq. (37), $A(t)$ is assumed to be randomly and rapidly fluctuating. We multiply both sides of Eq. (37) by $x$ to give

$$m x \frac{dv}{dt} = -\beta x v + x A(t) \tag{38}$$

and take a time average of both sides during an interval τ , defined as

$$\langle\,\cdots\rangle = \frac{1}{\tau}\int_{t}^{t+\tau} (\cdots)\,dt' \tag{39}$$

Then, for an averaging interval $\tau$ much longer than the particle relaxation time, we have:

$$m\left\langle x\frac{dv}{dt}\right\rangle = -\beta\langle x v\rangle + \langle x A(t)\rangle \tag{40}$$

Because the random fluctuating force A t is independent of the particle position x t , we calculate

$$\langle x A\rangle = \langle x\rangle\langle A\rangle = \langle x\rangle\cdot 0 = 0 \tag{41}$$

For further derivation, we use the following identities:

$$\frac{d\langle x^2\rangle}{dt} = \langle 2x\dot{x}\rangle = 2\langle x v\rangle \tag{42}$$
$$\frac{d^2\langle x^2\rangle}{dt^2} = \frac{d}{dt}\langle 2x\dot{x}\rangle = 2\langle v^2\rangle + 2\left\langle x\frac{dv}{dt}\right\rangle \tag{43}$$

to provide

$$m\left[\frac{1}{2}\frac{d^2\langle x^2\rangle}{dt^2} - \langle v^2\rangle\right] = -\beta\,\frac{1}{2}\frac{d\langle x^2\rangle}{dt} \tag{44}$$

We let $z = d\langle x^2\rangle/dt$ and rewrite Eq. (44):

$$m\frac{dz}{dt} = -\beta\left(z - \frac{2k_B T}{\beta}\right) \tag{45}$$

because the kinetic energy of this particle is equal to the thermal energy:

$$\frac{1}{2}m\langle v^2\rangle = \frac{1}{2}k_B T \tag{46}$$

where $k_B$ is the Boltzmann constant. Note that the particle motion originates from its numerous collisions with solvent molecules at temperature $T$. If we take an initial condition of $z = 0$, indicating that either the position or the velocity is initially zero, then we obtain

$$z(t) = \frac{2k_B T}{\beta}\left(1 - e^{-t/\tau_p}\right) = \frac{d\langle x^2\rangle}{dt} \tag{47}$$

where τ p = m / β is the particle relaxation time. One more integration with respect to time yields

$$\langle x^2\rangle = \frac{2k_B T}{\beta}\int_0^t \left(1 - e^{-t'/\tau_p}\right)dt' = \frac{2k_B T\,\tau_p}{\beta}\left[\frac{t}{\tau_p} + e^{-t/\tau_p} - 1\right] \tag{48}$$

If $t \gg \tau_p$, then the term $t/\tau_p$ in the square brackets is dominant:

$$\langle x^2\rangle = \frac{2k_B T}{\beta}\,t \equiv 2 D_B t \tag{49}$$

Stokes’ law of Eq. (17) indicates β = 6 πηa , and, therefore, the diffusion coefficient of Brownian motion or Stokes-Einstein diffusivity is

$$D_B = \frac{k_B T}{6\pi\eta a} \tag{50}$$

identical to Eq. (16). The root-mean-square distance is

$$x_{\mathrm{rms}} = \sqrt{\langle x^2\rangle} = \sqrt{2 D_B t} \tag{51}$$

which is proportional to $\sqrt{t}$. Note that $\langle x\rangle = 0$. From an arbitrary time $t$, the particle drifts for an interval $\Delta t$, where $\Delta t \gg \tau_p$, and then

$$x_{\mathrm{rms}}(\Delta t) = \sqrt{\langle x^2(t+\Delta t)\rangle - \langle x^2(t)\rangle} = \sqrt{2 D_B\,\Delta t} \tag{52}$$

The time step $\Delta t$ is then of a macroscopic scale, in that one can appreciate a movement of the particle of the order of the particle radius. For a short time $t \ll \tau_p$, the mean-square distance of Eq. (48) is approximated as $x_{\mathrm{rms}} = v_{\mathrm{rms}}\,t$, indicating constant-velocity motion.

Einstein's and Langevin's pictures provide identical results for $x_{\mathrm{rms}}$ and $D_B$ as related to Stokes' law. On one hand, if a particle is translating with a constant velocity, its distance from the initial location is linearly proportional to the elapsed time; on the other hand, if the particle is diffusing, its root-mean-square distance is proportional to $\sqrt{t}$.
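Eq. (48) interpolates between ballistic motion ($\langle x^2\rangle \approx (k_B T/m)\,t^2$ for $t \ll \tau_p$) and diffusion ($\langle x^2\rangle \approx 2D_B t$ for $t \gg \tau_p$). A brief sketch evaluates the closed form for a hypothetical micron-scale particle in water to show both limits; all particle properties below are assumptions for illustration.

```python
import numpy as np

k_B, T = 1.380649e-23, 300.0       # J/K, K
eta, a = 8.9e-4, 0.5e-6            # water viscosity (Pa*s), particle radius (m) -- hypothetical
rho_p = 1050.0                     # particle density, kg/m^3 -- hypothetical

m     = rho_p * 4/3 * np.pi * a**3
beta  = 6 * np.pi * eta * a        # Stokes drag coefficient, Eq. (17)
tau_p = m / beta                   # particle relaxation time
D_B   = k_B * T / beta             # Stokes-Einstein diffusivity, Eq. (50)

def msd(t):
    """Mean-square displacement of Eq. (48)."""
    return 2 * k_B * T * tau_p / beta * (t / tau_p + np.exp(-t / tau_p) - 1)

for t in (0.01 * tau_p, 100 * tau_p):
    print(f"t = {t:.2e} s: msd = {msd(t):.3e}, "
          f"ballistic (kT/m) t^2 = {k_B*T/m*t**2:.3e}, diffusive 2 D_B t = {2*D_B*t:.3e}")
```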

3.3.4 Gardiner’s picture

In Langevin’s Eq. (37), the randomly fluctuating force can be written as

$$A(t) = \alpha f(t) \tag{53}$$

where f satisfies

$$\langle f(t)\rangle = \frac{1}{T_p}\int_0^{T_p} f(t)\,dt = 0 \quad\text{for } T_p \gg \tau_p \tag{54}$$

and

$$\langle f_i(t)\,f_j(t')\rangle = \delta_{ij}\,\delta(t - t') \tag{55}$$

Relationships between parameters are

$$\beta = 6\pi\eta a_p \tag{56}$$
$$\alpha = \sqrt{2\beta k_B T} = \sqrt{2 D_B^{-1}}\,k_B T \tag{57}$$
$$D_B = \frac{k_B T}{\beta} \tag{58}$$

(See the next section for the Brownian diffusivity D B .) As such, we assume that

$$f(t)\,dt = dW(t) \tag{59}$$

where d W is the Ito-Wiener process [9, 10], satisfying

$$\langle dW\rangle = 0 \tag{60}$$
$$\langle dW^2\rangle = dt \tag{61}$$

Then, we can obtain the stochastic differential equation (SDE) as

$$m\,dv = \left[F(x) - \beta v\right]dt + \alpha\,dW \tag{62}$$

The relationship between x , v , and t can be obtained as follows [11]:

$$dx = v\,dt \tag{63}$$
$$dv = \left[\frac{F(x)}{m} - \frac{v}{\tau_p}\right]dt + \frac{\alpha}{m}\,dW \tag{64}$$

Note that Eq. (63) is free from the fundamental restriction of Langevin's equation (i.e., $dt \gg \tau_p$) by introducing the Ito-Wiener process in Eq. (64). The time interval $dt$ can be arbitrarily chosen to improve calculation speed and/or numerical accuracy.

Eq. (63) uses the basic definition of velocity as the time derivative of the position in classical mechanics, and Eq. (64) represents the randomly fluctuating force using the Ito-Wiener process, $dW$. If we keep Langevin's picture, then these two equations take the forms

$$dx = v\,dt + \sqrt{2 D_B}\,dW \tag{65}$$
$$dv = \left[\frac{F(x)}{m} - \frac{v}{\tau_p}\right]dt \tag{66}$$

where the random fluctuation disappears from the force balance and appears as a drift displacement, $\sqrt{2D_B}\,dW$. Let $C(x)$ be the concentration of particles near the position $x$ of a specific particle. Note that $x$ is not a fixed point in Eulerian space but the moving coordinate of a particle being tracked. An infinitesimal change of $C$ is

$$dC(x) = C'\,dx + \frac{1}{2!}\,C''\,(dx)^2 + \cdots \tag{67}$$

where

$$C' = \frac{\partial C}{\partial x} \tag{68}$$
$$C'' = \frac{\partial^2 C}{\partial x^2} \tag{69}$$

The first term of Eq. (67) is

$$\langle C'\,dx\rangle = C'\left\langle v\,dt + \sqrt{2D_B}\,dW\right\rangle = C'\,v\,dt \tag{70}$$

using the time average of Eq. (60), which implies that the diffusion time scale already satisfies the restricted condition of $dt \gg \tau_p$. The second term of Eq. (67) is

$$\left\langle C''\,(dx)^2\right\rangle = C''\left\langle\left(v\,dt + \sqrt{2D_B}\,dW\right)^2\right\rangle \simeq C''\,2D_B\,dt \tag{71}$$

after dropping the second-order term of $dt$ and the first-order term of $dW$. Substitution of Eqs. (70) and (71) into Eq. (67) gives

$$\langle dC(x)\rangle = C'\,v\,dt + C''\,D_B\,dt \tag{72}$$

and therefore

$$\frac{\partial C}{\partial t} = D_B\frac{\partial^2 C}{\partial x^2} + v\frac{\partial C}{\partial x} \tag{73}$$

which looks similar to the conventional convective diffusion equation with the sign of $v$ reversed. Eq. (73) indicates that a group of identical particles of mass $m$ undergoes convective and diffusive transport in Eulerian space. A particle in the group is located at the position $x$ at time $t$, moving with velocity $v$. This specific particle observes the concentration $C$ of other particles near its position $x$. Therefore, Eq. (73) is the convective diffusion equation in the Lagrangian picture. If the particle moves with velocity $v$ in a stationary fluid, then the motion is equivalent to particles that perform only diffusive motion within a fluid moving with $v$. To emphasize the fluid velocity, we replace $v$ with $u$; then the Lagrangian convective diffusion Eq. (73) becomes the original (Eulerian) convection-diffusion equation:

$$\frac{\partial C}{\partial t} = D_B\frac{\partial^2 C}{\partial x^2} - u\frac{\partial C}{\partial x} \tag{74}$$

which can be directly obtained by replacing Eq. (65) by

$$dx = u\,dt + \sqrt{2 D_B}\,dW \tag{75}$$
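Eq. (75) can be integrated with the Euler-Maruyama scheme, drawing $dW$ as a Gaussian increment of variance $dt$; the sketch below uses a hypothetical uniform fluid velocity $u$ and diffusivity $D_B$ and checks that the particle cloud drifts as $ut$ while its variance grows as $2D_B t$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps, dt = 50_000, 1_000, 1.0e-3
D_B, u = 1.0e-9, 1.0e-4            # hypothetical diffusivity (m^2/s) and fluid velocity (m/s)

x = np.zeros(n_particles)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_particles)   # Ito-Wiener increment, <dW^2> = dt
    x += u * dt + np.sqrt(2 * D_B) * dW                   # Eq. (75)

t = n_steps * dt
print("mean drift :", x.mean(), " vs  u t      =", u * t)
print("variance   :", x.var(),  " vs  2 D_B t  =", 2 * D_B * t)
```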

4. Dissipation rates

4.1 Energy consumption per time

In classical mechanics, work done due to an infinitesimal displacement of a particle d r under the influence of force field F is

$$dW = \mathbf{F}\cdot d\mathbf{r} \tag{76}$$

The time differentiation of Eq. (76) provides an energy consumption rate (i.e., power represented by P ) as a dot product of the particle velocity v and the applied force F :

$$\dot{W} = \frac{dW}{dt} = \mathbf{v}\cdot\mathbf{F} \tag{77}$$

For an arbitrary physical quantity Q , variation rate of its density can be represented as

$$\frac{1}{V}\frac{dQ}{dt} = \frac{1}{V}\frac{d\mathbf{r}}{dt}\cdot\nabla Q = \mathbf{v}\cdot\nabla q \tag{78}$$

where $V$ is the constant system volume and $q = Q/V$ is the volumetric density of $Q$, also called the specific $Q$. Eq. (78) indicates that the rate of change of the density of $Q$ is equal to $\mathbf{v}\cdot\nabla q$. If we replace $Q$ by the internal energy of the system, then the specific energy consumption rate is expressed as

$$\dot{W} = \frac{1}{V}\frac{dW}{dt} = \mathbf{v}\cdot\nabla w = \frac{\mathbf{v}}{A_c}\cdot\nabla w' \tag{79}$$

where $w$ and $w'$ are the specific work (per volume) and the work per unit length, respectively, and $A_c$ is the cross-sectional area normal to $\nabla w'$. For a continuous medium, $\nabla w'$ causes transport phenomena in a non-equilibrium state, and $\mathbf{v}/A_c$ is generated in proportion to a flux. A rate of change can thus be quantified as a product of a driving force and a flux, as implied by Eq. (77).

Let us consider a closed system possessing $\xi_1$ and $\xi_2$ as thermodynamic quantities characterizing the system state. The values of $\xi_i$ at a state of equilibrium are denoted $\xi_1^0$ and $\xi_2^0$, and values outside equilibrium are $\xi_1$ and $\xi_2$. In a static equilibrium, the entropy $S$ of the system is at its maximum. For a system away from the static equilibrium, the generalized driving force is defined as

$$X_k = \frac{\partial S}{\partial \xi_k} \tag{80}$$

which is obviously zero for all k at the static equilibrium. A flux J j of ξ j is defined as

$$J_j = \frac{1}{A_c}\frac{d\xi_j}{dt} = \frac{\dot{\xi}_j}{A_c} = \sum_k L_{jk} X_k \tag{81}$$

which assumes that $J_j$ is a linear combination of all the existing driving forces $X_k$. We take Onsager's symmetry principle [12, 13], which indicates that the kinetic coefficients $L_{jk}$ are symmetric for all $j$ and $k$:

$$L_{jk} = L_{kj} \tag{82}$$

The entropy production rate per unit volume, or the specific entropy production rate, is defined as

$$\sigma = \frac{ds}{dt} \tag{83}$$

where s = S / V . We expand the specific entropy s with respect to infinitesimal changes of ξ k as an independent variable:

$$\sigma = \sum_k \frac{d\xi_k}{dt}\frac{\partial s}{\partial \xi_k} = \sum_k \left(A_c J_k\right)\frac{1}{A_c}\frac{\partial S}{\partial \xi_k} = \sum_k J_k X_k \tag{84}$$

which represents the rate of change of the specific entropy as a dot product of the flux $J$ and the driving force $X$. The subscript $k$ in Eq. (84) runs over the physical quantities on which the entropy depends. For mathematical simplicity, a new quantity is defined as $Y_k = T X_k$, where $T$ is the absolute temperature in Kelvin, to give

$$T\sigma = \sum_k J_k Y_k \tag{85}$$

Note that $T\sigma$ has a physical dimension equal to that of the specific power. An inverse relationship of Eq. (81) is

$$X_k = \sum_l R_{kl} J_l \tag{86}$$

where $R_{kl}$ represents the inverse matrix of $L_{jk}$, i.e., $\sum_k R_{lk} L_{kj} = \delta_{lj}$, which can be proven by substituting Eq. (86) into Eq. (81):

$$J_j = \sum_k L_{jk} X_k = \sum_l\sum_k L_{jk} R_{kl}\, J_l = \sum_l \delta_{jl} J_l = J_j \tag{87}$$

Substitution of Eq. (86) into Eq. (84) represents $\sigma$ in terms of the flux $J$:

$$\sigma = \sum_{j,k} J_j R_{jk} J_k = \mathbf{J}^{\mathsf{T}}\mathbf{R}\,\mathbf{J} \tag{88}$$

where $\mathbf{J}^{\mathsf{T}}$ and $\mathbf{J}$ are the row and column vector forms of the flux, respectively, and $\mathbf{R}$ represents the generalized resistance matrix. A partial derivative of $\sigma$ with respect to an arbitrary flux $J_i$ is equal to twice the generalized driving force:

$$\frac{\partial\sigma}{\partial J_i} = \sum_{j,k}\left(\delta_{ij} R_{jk} J_k + J_j R_{jk}\delta_{ik}\right) = \sum_k R_{ik} J_k + \sum_j J_j R_{ji} = 2 X_i \tag{89}$$

In this case, the specific entropy dissipation rate $\sigma$ can be presented using the flux and the derivative of $\sigma$ with respect to the flux by substituting Eq. (89) into Eq. (84):

$$\sigma = \frac{1}{2}\sum_k J_k\,\frac{\partial\sigma}{\partial J_k} \tag{90}$$

which indicates that the specific entropy production increases with the flux and shows that the system is away from a pure, static equilibrium state.
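The structure of Eqs. (81), (84), (88), and (89) can be checked with a small, hypothetical 2×2 coefficient matrix; the sketch below confirms that $\sigma = J\cdot X = \mathbf{J}^{\mathsf{T}}\mathbf{R}\mathbf{J} \geq 0$ for a symmetric, positive-definite $L$ and that $\partial\sigma/\partial J_i = 2X_i$.

```python
import numpy as np

L = np.array([[2.0, 0.5],      # hypothetical symmetric, positive-definite
              [0.5, 1.0]])     # phenomenological coefficients, L_jk = L_kj (Eq. (82))
R = np.linalg.inv(L)           # generalized resistance matrix, Eq. (86)

X = np.array([0.3, -0.7])      # generalized driving forces
J = L @ X                      # fluxes, Eq. (81)

sigma = J @ X                  # entropy production rate, Eq. (84)
print("sigma = J.X   :", sigma)
print("sigma = J.R.J :", J @ R @ J)     # Eq. (88), identical
print("d(sigma)/dJ   :", 2 * R @ J)     # equals 2 X, Eq. (89)
print("2 X           :", 2 * X)
print("sigma >= 0    :", sigma >= 0)
```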

4.2 Effective driving forces

The second thermodynamic law represents the infinitesimal entropy change in the microcanonical ensemble:

$$dS = \frac{1}{T}\,dE + \frac{P}{T}\,dV - \sum_i \frac{\mu_i}{T}\,dN_i \tag{91}$$

where $E$ represents the internal energy, $P$ represents the system pressure within a volume $V$, and $\mu_i$ and $N_i$ are the chemical potential and the mole number of species $i$, respectively. Eq. (91) implies that the entropy $S = S(E, V, \{N_i\})$ is a function of the internal energy $E$, the volume $V$, and the mole numbers of the species $N_i$. This gives $\xi_1 = E$, $\xi_2 = V$, and $\xi_i = N_i$ ($i = 3$ for water and $i = 4$ for solute). The driving forces $X_i$ are calculated as

$$X_1 = X_q = \nabla\!\left(\frac{\partial S}{\partial E}\right)_{V,N_s} = \nabla\frac{1}{T} \tag{92}$$
$$X_2 = X_v = \nabla\!\left(\frac{\partial S}{\partial V}\right)_{E,N_s} = \nabla\frac{P}{T} \tag{93}$$
$$X_3 = X_s = \nabla\!\left(\frac{\partial S}{\partial N_s}\right)_{E,V} = -\nabla\frac{\mu_s}{T} \tag{94}$$

where the subscripts $q$, $v$, and $s$ of $X$ indicate heat, volume of solvent, and solute, respectively. In Eq. (92), the entropy $S$ is differentiated with respect to the energy $E$, keeping $V$ and $N_s$ invariant; the analogous constraints apply to Eqs. (93) and (94). Eq. (94) indicates that the driving force is the negative gradient of the chemical potential divided by the ambient temperature. Within the isothermal-isobaric ensemble, the Gibbs free energy is defined as

$$G = H - TS \tag{95}$$

where $H = E + PV$ is the enthalpy. If the solute concentration is dilute (i.e., $N_w \gg N_s$), the mixture is referred to as a weak solution. As such, the overall chemical potential can be approximated as

$$\mu = \frac{\partial G}{\partial N} = \frac{\partial G}{\partial (N_w + N_s)} \simeq \frac{\partial G}{\partial N_w} = \mu_w = \bar{H} - \bar{S}\,T \tag{96}$$

where H ¯ and S ¯ represent molar enthalpy and entropy, respectively. An infinitesimal change of Gibbs free energy is, in particular, written as

$$dG = -S\,dT + V\,dP + \mu_s\,dN_s \tag{97}$$

which is equivalent to

$$\frac{dG}{N_w} \equiv d\mu_w = -\bar{S}\,dT + \bar{V}\,dP + \mu_s\,dc \tag{98}$$

where $\bar{V}$ is the molar volume of the system, $\mu_s$ is the solute chemical potential, and $c = N_s/N_w$ is the molar fraction of solute molecules. The gradient of the solvent chemical potential can then be rewritten as a linear combination of the gradients of temperature, pressure, and molar solute fraction:

$$\nabla\mu_w = -\bar{S}\,\nabla T + \bar{V}\,\nabla P + \mu_s\,\nabla c \tag{99}$$

where the following mathematical identity was used

$$\nabla\frac{\mu_k}{T} = \mu_k\,\nabla\frac{1}{T} + \frac{1}{T}\,\nabla\mu_k \tag{100}$$

In general, fluxes of heat, solvent volume, and solute molecules are intrinsically coupled to their driving forces, such as

$$\begin{pmatrix} J_q \\ J_v \\ J_s \end{pmatrix} = \begin{pmatrix} L_{qq} & L_{qv} & L_{qs} \\ L_{qv} & L_{vv} & L_{vs} \\ L_{qs} & L_{vs} & L_{ss} \end{pmatrix} \begin{pmatrix} X_q \\ X_v \\ X_s \end{pmatrix} \tag{101}$$

where Onsager’s reciprocal relationship, L ij = L ji , is employed.

4.3 Applications

4.3.1 Solute diffusion

The primary driving force for solute transport is $X_s = -\nabla(\mu_s/T)$ if temperature and pressure gradients are not significant. We consider the diffusive flux of the solute only, in an isothermal and isobaric process, and neglect the terms involving $L_{qs}$ and $L_{vs}$:

$$J_s = -L_{ss}\,\nabla\frac{\mu_s}{T} = -\frac{L_{ss}}{T}\,\nabla\mu_s \tag{102}$$

which is equivalent to Fick’s law of

$$J_s = -D\,\nabla c \tag{103}$$

where $D$ [m$^2$/s] is the solute diffusion coefficient. If Eq. (102) is expressed in terms of the concentration gradient, we have

$$J_s = -\frac{L_{ss}}{T}\left(\frac{\partial\mu_s}{\partial c}\right)_T \nabla c \tag{104}$$

By Eqs. (103) and (104), one can find

$$\frac{L_{ss}}{T}\left(\frac{\partial\mu_s}{\partial c}\right)_T = D \tag{105}$$

Then, the entropy-changing rate based on the solute transport is calculated as

$$\sigma_s = J_s\cdot X_s = \frac{L_{ss}}{T^2}\left(\nabla\mu_s\right)^2 = \frac{D/T}{\left(\partial\mu_s/\partial c\right)_T}\left(\nabla\mu_s\right)^2 \tag{106}$$

Next, we consider the Stokes-Einstein diffusivity:

$$D_{SE} = \frac{k_B T}{3\pi\eta d_p} \tag{107}$$

where $k_B$ is the Boltzmann constant, $\eta$ is the solvent viscosity, and $d_p$ is the diameter of a particle diffusing within the solvent medium. The phenomenological coefficient $L_{ss}$ is represented as

$$L_{ss} = \frac{D_{SE}\,T}{\left(\partial\mu_s/\partial c\right)_T} = \frac{k_B T}{3\pi\eta d_p}\,T\left(\frac{\partial\mu_s}{\partial c}\right)_T^{-1} \tag{108}$$

For weakly interacting solutes, the solute chemical potential is

$$\mu = \mu_0 + RT\ln a \tag{109}$$

where $\mu_0$ is generally a function of $T$ and $P$, which are constant in this equation, $R$ is the gas constant, and $a$ is the solute activity. For a dilute solution, the activity $a$ is often approximated as the concentration $c$ (i.e., $a \simeq c$). The proportionality between $L_{ss}$ and $D_{SE}$ is

$$T\left(\frac{\partial\mu_s}{\partial c}\right)_T^{-1} = \frac{T}{RT/c} = \frac{c}{R} \tag{110}$$

which leads to

$$L_{ss} = \frac{k_B T}{3\pi\eta d_p}\,\frac{c}{R} = \frac{T c}{3\pi\eta d_p N_A} \tag{111}$$

where N A is the Avogadro constant.

For a dilute isothermal solution, we represent the entropy-changing rate as

$$\sigma_s = \frac{D R\left(\nabla c\right)^2}{c} = \frac{R}{D}\,\frac{J_s^2}{c} \tag{112}$$

for an isothermal and isobaric process. Assuming that D is not a strong function of c , Eq. (112) indicates that the diffusive entropy rate σ s is unconditionally positive (as expected), increases with the diffusive flux, and decreases with the concentration c . Within this analysis, c is defined as molar or number fraction of solute molecules to the solvent. For a dilute solution, conversion of c to a solute mass or mole number per unit volume is straightforward.
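Eqs. (107), (111), and (112) can be evaluated numerically for a dilute, isothermal case; the sketch below uses hypothetical values for the solute diameter, molar fraction, and its gradient (none are taken from the chapter), with $c$ treated as a molar fraction as in the text.

```python
import numpy as np

k_B, N_A, R = 1.380649e-23, 6.02214076e23, 8.314462618   # SI constants
T, eta, d_p = 298.15, 8.9e-4, 1.0e-9                      # K, Pa*s, m (hypothetical solute diameter)

D_SE = k_B * T / (3 * np.pi * eta * d_p)                  # Eq. (107)

c      = 1.0e-3          # molar fraction of solute (hypothetical)
grad_c = 10.0            # gradient of the molar fraction per metre (hypothetical)

L_ss  = D_SE * c / R                                      # Eq. (111): (k_B T / 3 pi eta d_p) c / R
J_s   = -D_SE * grad_c                                    # Fick's law, Eq. (103)
sigma = D_SE * R * grad_c**2 / c                          # Eq. (112)

print(f"D_SE    = {D_SE:.3e} m^2/s")
print(f"L_ss    = {L_ss:.3e}")
print(f"sigma_s = {sigma:.3e}  (always positive)")
print("check   :", np.isclose(sigma, R * J_s**2 / (D_SE * c)))   # second form of Eq. (112)
```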

4.3.2 Thermal flux

The thermal flux consists of conductive and convective transport, proportional to $\nabla T$ and $\nabla P$, respectively. Neglecting the solute diffusion in Eq. (101), the coupled equations of heat and fluid flows are simplified as

$$\begin{pmatrix} J_q \\ J_v \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \beta & \gamma \end{pmatrix}\begin{pmatrix} X_q \\ X_v \end{pmatrix} \tag{113}$$

using Onsager’s reciprocal relationship in that the off-diagonal coefficients are symmetrical. The driving forces are

$$\begin{pmatrix} X_q \\ X_v \end{pmatrix} = \begin{pmatrix} \nabla(1/T) \\ \nabla(P/T) \end{pmatrix} = \frac{1}{T^2}\begin{pmatrix} -1 & 0 \\ -P & T \end{pmatrix}\begin{pmatrix} \nabla T \\ \nabla P \end{pmatrix} \tag{114}$$

The substitution of Eq. (114) into Eq. (113) gives

$$T^2\begin{pmatrix} J_q \\ J_v \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \beta & \gamma \end{pmatrix}\begin{pmatrix} -1 & 0 \\ -P & T \end{pmatrix}\begin{pmatrix} \nabla T \\ \nabla P \end{pmatrix} = \begin{pmatrix} -(\alpha + \beta P) & \beta T \\ -(\beta + \gamma P) & \gamma T \end{pmatrix}\begin{pmatrix} \nabla T \\ \nabla P \end{pmatrix} \tag{115}$$

or

$$T^2\begin{pmatrix} J_q/\beta \\ J_v/\gamma \end{pmatrix} = \begin{pmatrix} -(\alpha/\beta + P) & T \\ -(\beta/\gamma + P) & T \end{pmatrix}\begin{pmatrix} \nabla T \\ \nabla P \end{pmatrix} \tag{116}$$

Subtracting the second row of Eq. (116) from the first row provides

$$\frac{J_q}{\beta} - \frac{J_v}{\gamma} = -\frac{1}{T^2}\left(\frac{\alpha}{\beta} - \frac{\beta}{\gamma}\right)\nabla T \tag{117}$$
$$J_q = \frac{\beta}{\gamma}\,J_v - \frac{1}{T^2}\left(\alpha - \frac{\beta^2}{\gamma}\right)\nabla T \tag{118}$$

Through physical interpretation, one can conclude that

$$\frac{\beta}{\gamma} = \tilde{h} \tag{119}$$

where h ˜ represents the system enthalpy as a function of temperature. Finally, the coupled heat transfer equation is

$$J_q = -\kappa_q\,\nabla T + \tilde{h}\,J_v \tag{120}$$

where

$$\kappa_q = \frac{\alpha - \beta\tilde{h}}{T^2} \tag{121}$$

is the thermal conductivity.
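Given hypothetical Onsager coefficients $\alpha$, $\beta$, and $\gamma$ satisfying $\alpha\gamma > \beta^2$ (positive definiteness), Eqs. (119) and (121) yield $\tilde{h}$ and $\kappa_q$, and Eq. (120) splits the heat flux into conductive and convective parts; a brief sketch with illustrative, assumed values follows.

```python
T = 330.0                                    # local temperature, K (hypothetical)
alpha, beta, gamma = 8.0e4, 9.0e1, 2.0e-1    # hypothetical Onsager coefficients, alpha*gamma > beta^2

h_tilde = beta / gamma                       # Eq. (119): enthalpy carried per unit volumetric flux
kappa_q = (alpha - beta * h_tilde) / T**2    # Eq. (121): thermal conductivity

grad_T = -50.0                               # temperature gradient, K/m (hypothetical)
J_v    = 1.0e-5                              # volumetric (convective) flux (hypothetical)

J_q = -kappa_q * grad_T + h_tilde * J_v      # Eq. (120)
print(f"h_tilde = {h_tilde:.3e}, kappa_q = {kappa_q:.3e}")
print(f"conductive part = {-kappa_q*grad_T:.3e}, convective part = {h_tilde*J_v:.3e}")
print(f"J_q = {J_q:.3e}")
```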


5. Concluding remarks

In this chapter, we investigated the diffusion phenomenon as an irreversible process. By the thermodynamic laws, entropy always increases as a system of interest evolves in a non-equilibrium state. The entropy-increasing rate per unit volume is a measure of how fast the system changes from the current state to a more disordered one. The entropy concept is explained from basic mathematics using several examples. The diffusion phenomenon is explained using (phenomenological) Fick's law, and more fundamental theories were summarized, which theoretically derive the diffusion coefficient and the convection-diffusion equation. Finally, the dissipation rate, i.e., the entropy-changing rate per volume, is revisited and obtained in detail. The coupled, irreversible transport equation in the steady state is applied to solute diffusion in an isothermal-isobaric process and to heat transfer consisting of conductive and convective transport due to the temperature gradient and fluid flow, respectively. As engineering processes are mostly open and operated in the steady state, the theoretical approaches discussed in this chapter may be a starting point for future developments in irreversible thermodynamics and statistical mechanics.

References

  1. Kim AS, Kim H-J. Membrane thermodynamics for osmotic phenomena. In: Yonar T, editor. Desalination. Rijeka: InTechOpen; 2017. pp. 1-26. ISBN: 978-953-51-3364-3
  2. Shannon CE. A mathematical theory of communication. Bell System Technical Journal. 1948;27(3):379-423. DOI: 10.1002/j.1538-7305.1948.tb01338.x
  3. Fick A. Ueber Diffusion. Annalen der Physik und Chemie. 1855;170(1):59-86
  4. Fick A. On liquid diffusion. Journal of Membrane Science. 1995;100(1):33-38. ISSN: 0376-7388. DOI: 10.1016/0376-7388(94)00230-v
  5. Stokes GG. On the effect of internal friction of fluids on the motion of pendulums. Transactions of the Cambridge Philosophical Society. 1851;9:1-106
  6. Einstein A. Investigations on the Theory of the Brownian Movement. New York, NY: Dover Publications, Inc.; 1956
  7. Langevin P. Sur la théorie du mouvement brownien. Comptes Rendus de l'Académie des Sciences (Paris). 1908;146:530-533
  8. Lee MH. Solutions of the generalized Langevin equation by a method of recurrence relations. Physical Review B. 1982;26(5):2547-2551
  9. Wiener N. Differential space. Journal of Mathematics and Physics. 1923;2:131-174
  10. Ito M. An extension of nonlinear evolution equations of the K-dV (mK-dV) type to higher orders. Journal of the Physical Society of Japan. 1980;49:771-778. DOI: 10.1143/JPSJ.49.771
  11. Gardiner CW. Handbook of Stochastic Methods: For Physics, Chemistry and the Natural Sciences. Springer; 1996
  12. Onsager L. Reciprocal relations in irreversible processes. II. Physical Review. 1931;38(12):2265-2279. DOI: 10.1103/PhysRev.38.2265
  13. Onsager L. Reciprocal relations in irreversible processes. I. Physical Review. 1931;37(4):405-426. DOI: 10.1103/PhysRev.37.405

Notes

  • The Greek symbol μ is also often used for viscosity in the fluid mechanics literature. In this book, the chemical potential is denoted as μ, and the solvent viscosity as η.
