Open access peer-reviewed chapter

Fundamentals of Irreversible Thermodynamics for Coupled Transport

By Albert S. Kim

Submitted: February 7th, 2019. Reviewed: April 30th, 2019. Published: August 2nd, 2019.

DOI: 10.5772/intechopen.86607



Engineering phenomena occur in open systems undergoing irreversible, non-equilibrium processes of coupled mass, energy, and momentum transport. The momentum transport often becomes a primary or background process, on which driving forces of physical gradients govern mass and heat transfer rates. Although in the steady state no physical variables vary explicitly with time, entropy increases with time as long as the systems are open. The degree of irreversibility can be measured by the entropy-increasing rate, first proposed by L. Onsager. This chapter conceptually reorganizes the entropy and its rate in broader aspects. Diffusion is fully described as an irreversible, i.e., entropy-increasing, phenomenon using four different physical pictures. Finally, an irreversible thermodynamic formalism using effective driving forces is established as an extension of Onsager’s reciprocal theorem and applied to core engineering phenomena of fundamental importance: solute diffusion and thermal flux. In addition, the osmotic and thermal fluxes are explained in a unified theoretical framework.


  • irreversible thermodynamics
  • non-equilibrium thermodynamics
  • Onsager’s reciprocal theorem
  • entropy rate
  • diffusion pictures
  • irreversible transport equation

1. Introduction

This chapter contributes to a comprehensive explanation of the steady-state thermodynamics of irreversible processes with detailed theoretical derivations and examples. The origin and definitions of entropy are described, irreversible thermodynamics for a steady state is revisited based on Onsager’s reciprocal theorem, and thermal and solute diffusion phenomena are recapitulated as examples of single-component irreversible thermodynamic processes.

1.1 Thermodynamic states

In fundamental and applied sciences, thermodynamics (or statistical mechanics) plays an important role in understanding macroscopic behaviors of a thermodynamic system using microscopic properties of the system. Thermodynamic systems have three classifications based on their respective transport conditions at interfaces.

An open system allows both energy and mass transfer across its interface; a closed system allows transfer of energy only, preventing mass transfer; and, finally, an isolated system allows no transport across its interface.

Transfer phenomena of mass and energy are represented using the concept of flux, which is defined as the rate at which a physical variable of interest passes across a unit cross-sectional area per unit time. If the flux is constant, the input and output rates of a physical quantity within a finite volume are equal, and the density remains constant because the net accumulation within the system is zero. If the flux varies spatially, specifically J = J(x, y, z), then its density within the specified volume changes with time, i.e., ρ = ρ(t). This balance is expressed by the equation of continuity:

∂ρ/∂t + ∇·J = 0
Many engineering processes occur in an open environment, having specific mass and energy transfer phenomena as practical goals. An exception is a batch reaction, where interfacial transport is blocked and a transient variation of the internal system is of concern. If the internal characteristics of the open system change with time, the system moves toward a transient, non-equilibrium state. However, whether a process appears transient depends on the time scale of observation. If engineering system performance is averaged over a macroscopic time scale, such as hours, days, or weeks, the time-averaged performance is the primary concern, as those quantities can be compared with experimental data. Instead of the transiency itself, the time to reach a steady state becomes more important in operating engineering processes because steady-state operation is usually sought. Usually, the time to reach a steady state is much shorter than the standard operation time in the steady state.

1.2 Time scale and transiency

In theoretical physics, statistical mechanics and fluid dynamics are not fully unified, and non-equilibrium thermodynamics remains an unsolved problem. It is often assumed that the fluid flow is not highly turbulent and that a steady state is reached with a fully developed flow field. The thermodynamic characteristics are maintained within the steady flow, and the static equilibrium is assumed to be valid within small moving fluid elements. In such a situation, each fluid element can be qualitatively analogous to a microstate of the thermodynamic ensemble.

Nevertheless, a conflict between thermodynamics and fluid dynamics stems from the absence of a clear boundary between the static equilibrium of isolated systems and the steady state of open systems. In principle, the steady state belongs to the non-equilibrium states, although the partial time derivatives of all physical quantities are assumed to be zero (i.e., ∂/∂t = 0). A density does not change with time, but the flux remains finite and constant in time and space. The time scale of particle motion can be expressed using the particle relaxation time defined as τp = m/β, where m and β are the particle mass and Stokes’ drag coefficient, respectively. The time scale for the fluid flow can be evaluated as the characteristic length divided by the mean flow speed, but the particle relaxation time scale is much shorter than the flow time scale. Therefore, the local equilibrium may be applied without significant deviation from the real thermodynamic state.

In engineering, various dimensionless numbers are often used to characterize a system of interest. The Reynolds (Re) and Péclet (Pe) numbers indicate ratios of the convective transport to viscous momentum and diffusive heat/mass transport in a fluid, respectively. The Nusselt and Sherwood numbers represent ratios of the diffusion length scale as compared to the boundary layer thickness of the thermal and mass diffusion phenomenon, respectively. The Prandtl and Schmidt numbers represent ratios of momentum as compared to thermal and mass diffusivities, respectively. Other dimensionless numbers include the Biot number (Bi) (for both heat and mass transfer), the Knudsen number (Kn) (molecular mean free path to system length scale), the Grashof (Gr) number (natural buoyancy to viscous forces), and the Rayleigh number (natural convective to diffusive heat/mass transfer).

Note that all the dimensionless numbers described here implicitly assume the presence of fluid flow in open systems because they quantify the relative significance of energy, momentum, and mass transport. The static equilibrium approximation (SEA) should be appropriate when the viscous force is dominant within a fluid region, suppressing transient fluctuations of the system, because non-equilibrium thermodynamics is not fully established in theoretical physics and steady-state thermodynamics requires experimental observations to determine the thermodynamic coefficients between driving forces and generated fluxes.

1.3 Statistical ensembles

Thermodynamics often deals with macroscopic, measurable phenomena of systems of interest, consisting of objects (e.g., molecules or particles) within a volume. Statistical mechanics is considered as a probabilistic approach to study the microscopic aspects of thermodynamic systems using microstates and ensembles and to explain the macroscopic behavior of the respective systems.

Seven variables exist within statistical mechanics: temperature T, pressure P, and particle number N, which are conjugate to entropy S, volume V, and chemical potential μ, respectively, plus energy E of various forms. The thermodynamic ensemble uses the first and second laws of thermodynamics and provides constraints under which three out of the six variables (excluding E) remain constant. The other three conjugate variables are theoretically calculated or experimentally measured. Statistical ensembles are either isothermal (of constant temperature) or adiabatic (of zero heat exchanged at interfaces). The adiabatic category includes the NVE (microcanonical), NPH, μVL, and μPR ensembles, and the isothermal category includes the NVT (canonical), NPT (isobaric-isothermal or Gibbs), and μVT (grand canonical) ensembles. Here, the NVE and NPH ensembles are called microcanonical and isenthalpic, and the NVT, μVT, and NPT ensembles are called canonical, grand canonical, and isothermal-isobaric, respectively. Within statistical mechanical theories and simulations, canonical ensembles are most widely used, followed by grand canonical and isothermal-isobaric ensembles. The adiabatic ensembles are equivalent to isentropic ensembles (of constant entropy) and are represented as NVS, NPS, μVS, and μPS instead of NVE, NPH, μVL, and μPR, respectively. Non-isothermal ensembles often represent entropy S as a function of a specific energy form, of which details can be found elsewhere [1].


2. Entropy revisited

2.1 Thermodynamic laws

Thermodynamic laws can be summarized as follows:

  • The zeroth law: For thermodynamic systems A, B, and C, if A = C and B = C, then A = B.

  • The first law: The internal energy change ΔU is equal to the energy added to the system, Q, minus the work done by the system, W (i.e., ΔU = Q − W).

  • The second law: An element of irreversible heat transferred, δQ, is the product of the temperature T and the increment of its conjugate variable S (i.e., δQ = T dS).

  • The third law: As T → 0, S → constant, and S = kB ln Ω, where Ω is the number of microstates.

The entropy S is defined in the second thermodynamic law, and its fundamental property is described in the third law, linking the macroscopic element of irreversible heat transferred (i.e., δQ) to the microstates of the system.

Suppose you have N objects (e.g., people) and need to position them in a straight line consisting of the same number of seats. The first and second objects have N and N − 1 choices, respectively; similarly, the third one has N − 2 choices; the fourth one has N − 3 choices; and so on. The total number of ways of this experiment is as follows:

N × (N − 1) × (N − 2) × ⋯ × 2 × 1 = N!
Example 1: In a car, there are four seats including the driver’s. Three guests will occupy the same number of seats. How many different configurations are available? There are three people, A, B, and C, and three seats, S1, S2, and S3. If A can choose a seat first, then A has three choices. Then, B and C have, in sequence, two choices and one choice. Thus, the total number of possible configurations is 3 × 2 × 1 = 3! = 6.

Next, suppose the N objects are divided into two groups, where group 1 and group 2 contain N1 and N2 objects, respectively. Then, the total number of possible ways to place N objects into the two groups is

N!/(N1! N2!)
which is equal to the number of combinations of N objects taking N1 objects at a time:

C(N, N1) = N!/(N1! (N − N1)!)
For example, consider the following equation of a binomial expansion:

(1 + x)³ = a0 + a1 x + a2 x² + a3 x³
where a0 = a3 = 1 and a1 = a2 = 3. For the power of N, the equation is

(1 + x)^N = Σ_{N1=0}^{N} a_{N1} x^{N1}
where N1 + N2 = N and

a_{N1} = N!/(N1! N2!)
If we add a third variable x3 with a constraint condition of N = Σ_{k=1}^{3} Nk, then

(x1 + x2 + x3)^N = Σ_{N1,N2,N3} a_{N1N2N3} x1^{N1} x2^{N2} x3^{N3}
where the coefficient of the polynomial expansion can be written as follows:

a_{N1N2N3} = N!/∏_{k=1}^{3} Nk!
using the product notation of

∏_{k=1}^{3} Nk! = N1! × N2! × N3!
Example 2: Imagine that we have three containers and ten balls. Each container has enough room to hold all ten balls. Let Ni (for i = 1, 2, 3) be the number of balls in the i-th container. How many different configurations are available to put ten balls into the three containers? If N1 = 2, N2 = 3, and N3 = 5, then the answer is 2520:

10!/(2! × 3! × 5!) = 3,628,800/(2 × 6 × 120) = 2520
satisfying N = N1 + N2 + N3 = 10.
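The multinomial count in Example 2 can be verified directly (a minimal Python sketch; the helper name multinomial is ours, not from the chapter):

```python
from math import factorial

def multinomial(*groups):
    """Number of ways to split sum(groups) labeled objects into groups of the given sizes."""
    n = sum(groups)          # total number of objects, N
    count = factorial(n)     # N! orderings of all objects
    for k in groups:
        count //= factorial(k)   # divide out the orderings within each group
    return count

print(multinomial(2, 3, 5))   # 10!/(2! 3! 5!) = 2520
print(multinomial(1, 1, 1))   # Example 1: 3! = 6
```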

2.2 Definitions

2.2.1 Boltzmann’s entropy

A thermodynamic system is assumed to have a number of small micro-systems. Say that there are N micro-systems and m (< N) thermodynamic states. This situation is similar to N = 10 balls in m = 3 containers. The numbers of balls in containers 1, 2, and 3 are N1, N2, and N3, respectively. Then the total number of different configurations of micro-systems in m micro-states is defined as

Ω = N!/∏_{k=1}^{m} Nk!
Boltzmann proposed a representation of the entropy of the entire ensemble as

S = kB ln Ω
2.2.2 Gibbs entropy

The Gibbs entropy can be written using Ω, as

S = kB ln Ω = kB [ln N! − Σ_{k=1}^{m} ln Nk!]
and using Stirling’s formula,

ln N! ≈ N ln N − N
for a large N ≫ 1, to derive

S = kB [N ln N − Σ_{k=1}^{m} Nk ln Nk]
Finally, we have

S = −N kB Σ_{k=1}^{m} pk ln pk
where pk = Nk/N is the probability of finding the system in thermodynamic state k. Gibbs introduced a form of entropy as

SG = −kB Σ_{k=1}^{m} pk ln pk
which is equal to the system entropy per object or particle, denoted as

SG = S/N
2.2.3 Shannon’s entropy

In information theory, Shannon’s entropy is defined as [2]

H = −Σ_k pk log_b pk
As the digital representation of integers is binary, the base b is often set to two. Note that Shannon’s entropy is identical to the Gibbs entropy if Boltzmann’s constant kB is discarded and the natural logarithm ln = log_e is replaced by log₂. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves. Example 3 deals with tossing a coin or a die and shows how the entropy S increases with respect to the number of available outcomes.

Example 3: Let’s consider two conventional examples, i.e., a coin and a die. Their Gibbs entropy values (i.e., entropy per object) are

SG,coin = kB ln 2 and SG,die = kB ln 6
The system entropies of the coin and the die are

Scoin = 2 kB ln 2 and Sdie = 6 kB ln 6
and their ratio is

Sdie/Scoin = (6 ln 6)/(2 ln 2) = 3 × (ln 6/ln 2) ≈ 7.754

where 3 indicates the ratio of the number of available states of a die (6) to that of a coin (2). The entropy ratio, 7.754, is higher than the ratio of available states, 3.
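The entropy ratio of Example 3 can be checked numerically, following the chapter’s convention that the system entropy is the per-object entropy multiplied by the number of available states (a minimal sketch; the function name is ours):

```python
import math

def gibbs_entropy_per_object(probs):
    """Gibbs entropy per object, S_G = -sum(p ln p), in units of kB."""
    return -sum(p * math.log(p) for p in probs)

s_coin = gibbs_entropy_per_object([1/2] * 2)   # ln 2 ≈ 0.693
s_dice = gibbs_entropy_per_object([1/6] * 6)   # ln 6 ≈ 1.792

# System entropy S = N * S_G, with N the number of available states
S_coin, S_dice = 2 * s_coin, 6 * s_dice
print(S_dice / S_coin)   # ≈ 7.754, versus the ratio of states 6/2 = 3
```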


3. Diffusion: an irreversible phenomenon

Diffusion refers to a net flow of matter from a region of high concentration to a region of low concentration, which is an entropy-increasing process, from a more ordered to a less ordered state of molecular locations. For example, when a lump of sugar is added to a cup of coffee for a sweeter taste, the solid cube of sugar dissolves, and the molecules spread out until evenly distributed. This change from a localized to a more even distribution is a spontaneous and, more importantly, irreversible process. In other words, diffusion occurs by itself without external driving forces. In addition, once diffusion occurs, it is not possible for the molecular distribution to return to its original undiffused state. If diffusion did not occur spontaneously, then there would be no natural mixing, and one might have a bitter coffee taste and a sweet sugar taste in an unmixed liquid phase. In general, diffusion is closely related to mixing and demixing (separation) within a plethora of engineering applications. Why does diffusion occur, and how do we understand the spontaneous phenomena? A key is the entropy-changing rate from one static equilibrium to the other. Before discussing diffusion as an irreversible phenomenon, however, the following sections present several pictures so as to create a better understanding of the diffusion phenomenon as one of the irreversible thermodynamic processes.

3.1 Mutual diffusion

Diffusion is often driven by the concentration gradient, referred to as ∇c, typically at finite volume, temperature, and pressure. As temperature increases, molecules gain kinetic energy and diffuse more actively in order to position themselves evenly within the volume. A general driving force for isothermal diffusion is the gradient of the chemical potential μ between regions of higher and lower concentrations.

As shown in Figure 1, diffusion of solute molecules after removing the mid-wall is spontaneous. Initially, two equal-sized rectangular chambers A and B are separated by an impermeable wall between them. The thickness of the mid-wall is negligible in comparison to the box length, and each of the chambers A and B contains the same amount of water. Chamber A contains seawater of salt concentration 35,000 ppm, and chamber B contains fresh water of zero salt concentration. If the separating wall is removed slowly enough not to disturb the stationary solvent medium but fast enough to initialize a sharp concentration boundary between the two concentration regions, then the concentration in B increases as much as that in A decreases because mass is neither created nor annihilated inside the container. This spontaneous mixing continues until both concentrations become equal and, hence, reach a thermodynamic equilibrium consisting of a half-seawater/half-fresh water concentration throughout the entire box. Diffusion occurs wherever and whenever the concentration gradient exists, and the diffusive solute flux is represented using Fick’s law as follows [3, 4]:

Figure 1.

Diffusion in a rectangular container consisting of two equal-sized chambers A and B (a) before and (b) after the mid-wall is removed.

J = −D ∇c
where D is the diffusion coefficient (also often called diffusivity) with units of m²/s. A length scale of diffusion can be estimated by √(D δt), where δt is a representative time interval. In molecular motion, δt can be interpreted as the time duration required for a molecule to move as far as a mean free path (i.e., a statistically averaged distance between two consecutive collisions).

3.2 Stokes-Einstein diffusivity

When the solute concentration is low enough that interactions between solutes are negligible, the diffusion coefficient, known as the Stokes-Einstein diffusivity, may be given by

DSE = kB T/(6πηa)
where kB is the Boltzmann constant, η is the solvent viscosity, and a is the (hydrodynamic) radius of the solute particles. Stokes derived the hydrodynamic force that a stationary sphere experiences when it is positioned in an ambient flow [5]:

FH = 6πηa·v
where v represents a uniform fluid velocity, which can be interpreted as the velocity of a particle relative to that of the ambient fluid. FH is linearly proportional to v, and its proportionality constant 6πηa is the denominator of the right-hand side of Eq. (16). Einstein used the transition probability of molecules from one site to another, and Langevin considered the molecular collisions as random forces acting on a solute (see Section 3.3 for details). Einstein and Langevin independently derived the same equation as (16), of which the general form can be rewritten as

D = l²/(2d Δt)
where d is the spatial dimension of the diffusive system (i.e., d = 1, 2, and 3 for 1D, 2D, and 3D spaces).
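A quick numerical sketch of Eq. (16); the temperature, viscosity, and particle radius values below are illustrative, not values from the chapter:

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
T = 298.0            # temperature, K (illustrative)
eta = 8.9e-4         # viscosity of water near 25 °C, Pa·s (illustrative)
a = 1.0e-9           # hydrodynamic radius of a ~1 nm solute, m (illustrative)

D_SE = kB * T / (6 * math.pi * eta * a)   # Stokes-Einstein diffusivity
print(D_SE)                               # ~2.5e-10 m^2/s

# Diffusion length over one second, sqrt(D * dt)
print(math.sqrt(D_SE * 1.0))              # ~1.6e-5 m, i.e., tens of micrometers
```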

3.3 Diffusion pictures

Several pictures of diffusion phenomena are discussed in the following sections, which give probabilistic and deterministic viewpoints. Consider an ideal situation where there exists only one salt molecule in a box filled with solvent (e.g., water) at finite T, P, and V. Since only the sole molecule exists, there is no concentration gradient. Mathematically, the concentration is infinite at the location of the molecule and absolutely zero anywhere else: c = V⁻¹ δ(r − r0), where r0 is the initial position of the solute and r is an arbitrary location within the volume. However, the following question arises: why does a single molecule diffuse in the absence of other solute molecules with which to collide? The answer is that the solvent medium consists of a number of (water) molecules having a size of an order of O(10⁻¹⁰ m). The salt molecule suffers a tremendous number of collisions with solvent molecules of a certain kinetic energy at temperature T. Since each of these collisions can be thought of as producing a jump of the molecule, the molecule must be found at a distance from the initial position where the diffusion started. In this case, the molecule undergoes Brownian motion. Note that the single molecule collides only with solvent molecules while diffusing, which is a type of diffusion called self-diffusion.

3.3.1 Self-diffusion and random walk

A particle initially located at r0 has equal probabilities of 1/6 to move in the ±x, ±y, and ±z directions. For mathematical simplicity, we restrict ourselves to the 1D random walk of a dizzy individual, who moves to the right or to the left with a 50:50 chance. Initially (at time t = 0), the individual is located at x0 = 0 and starts moving in a direction represented by Δx = ±l, where +l and −l indicate the right and left distances that the individual travels with equal probability, respectively. At the next step, t1 = t0 + Δt = Δt, the individual’s location is found at

x1 = x0 + Δx1 = Δx1
where Δx1 can be +l or −l. At the time of the second step, t2 = t1 + Δt = 2Δt, the position is

x2 = x1 + Δx2 = Δx1 + Δx2
where Δx2 = ±l. At tn = nΔt (n ≥ 1), the position may be expressed as

xn = Σ_{i=1}^{n} Δxi
If there are a number of dizzy individuals and we can determine an average for their seemingly random movements, then

⟨xn⟩ = Σ_{i=1}^{n} ⟨Δxi⟩ = 0
because Δx has a 50:50 chance of +l and −l:

⟨Δxi⟩ = (1/2)(+l) + (1/2)(−l) = 0
Now let us calculate the mean of x²:

⟨xn²⟩ = ⟨(Σ_i Δxi)(Σ_j Δxj)⟩ = Σ_i ⟨Δxi²⟩ + Σ_{i≠j} ⟨Δxi Δxj⟩
and in a concise form

⟨xn²⟩ = n l²
because Σ_{i≠j} ⟨Δxi Δxj⟩ = 0 and ⟨Δxk²⟩ = ⟨Δx²⟩ = l². In the calculation of the off-diagonal terms, Δxi Δxj can have four sign combinations with equal chance: (+,+), (+,−), (−,+), and (−,−). The products of the two elements in the parentheses are +l², −l², −l², and +l² with equal probability of 25%. Therefore, their sum is zero. Because n is the number of time steps, it can be replaced by t/Δt, where t is the total elapsed time. The diffusion coefficient in one-dimensional space was derived in the previous section as D = l²/2Δt. Then, the mean squared distance at time t is calculated as

⟨x²⟩ = n l² = (t/Δt) l² = 2Dt
and the root-mean-square distance is

xrms = √⟨x²⟩ = √(2Dt)
Note that xrms is proportional to t^{1/2} in the random walk, as compared to the constant-velocity case, x = vt ∝ t¹. Then, the diffusivity for 1D is explicitly

D = l²/(2Δt)
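The ⟨xn²⟩ = n l² result can be checked with a direct simulation of many independent walkers (a minimal sketch; the function name is ours):

```python
import random

def mean_square_displacement(n_steps, n_walkers, l=1.0, seed=0):
    """<x_n^2> for a 1-D ±l random walk, averaged over many walkers."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += l if rng.random() < 0.5 else -l   # 50:50 step to the right or left
        total += x * x
    return total / n_walkers

msd = mean_square_displacement(n_steps=100, n_walkers=20000)
print(msd)   # close to n * l^2 = 100, i.e., x_rms grows as sqrt(n)
```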
3.3.2 Einstein’s picture

The concentration C(x, t) after an infinitesimal time duration δt from t, within a range dx between x and x + dx, is calculated as [6]

C(x, t + δt) dx = dx ∫ C(x + y, t) Φ(y) dy
where Φ is the transition probability for a linear displacement y, and the right-hand side indicates the amount of adjacent solutes that move into the small region dx. The probability distribution satisfies

∫ Φ(y) dy = 1
and we assume that Φ is a short-ranged, even function, meaning that it is non-zero only for small y and symmetric, Φ(−y) = Φ(y). In this case, we approximate the integrand of Eq. (29) as

C(x + y, t) ≈ C(x, t) + y (∂C/∂x) + (y²/2)(∂²C/∂x²)
and substitute Eq. (31) into Eq. (29). We finally derive the so-called diffusion equation:

∂C/∂t = D (∂²C/∂x²)
where the diffusivity is defined as

D = ⟨y²⟩/(2δt)
where ⟨y²⟩ is the mean value of y², calculated as

⟨y²⟩ = ∫ y² Φ(y) dy
Within this calculation, we used

C(x, t + δt) ≈ C(x, t) + δt (∂C/∂t)

and

∫ y Φ(y) dy = 0
because yΦ(y) is an odd function. Mathematically, Einstein’s picture uses a short-ranged transition probability function, which does not need to be specifically known, and Taylor’s expansion for a small time interval and short displacement. Conditions required for Eq. (32) are as follows: (i) the transition distance is longer than the size of a molecule, dx ≫ O(a), and (ii) the time interval δt is long enough to measure dx after a tremendous number of collisions with solvent molecules, satisfying δt ≫ τp, where τp is the particle relaxation time (see Langevin’s picture).
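The diffusion equation can be integrated numerically to confirm both mass conservation and the spreading law ⟨x²⟩ = 2Dt; the explicit finite-difference scheme and grid values below are an illustrative sketch, not from the chapter:

```python
# Explicit (FTCS) integration of dC/dt = D d2C/dx2 from a near-delta initial condition
D, dx, dt = 1.0, 0.1, 0.004      # r = D*dt/dx^2 = 0.4 < 0.5 keeps the scheme stable
nx, nt = 201, 500                # domain [-10, 10], total time t = nt*dt = 2.0
C = [0.0] * nx
C[nx // 2] = 1.0 / dx            # unit mass concentrated at x = 0

r = D * dt / dx**2
for _ in range(nt):
    # each step moves a fraction r of every cell's content to each neighbor
    C = [C[i] + r * (C[i + 1] - 2 * C[i] + C[i - 1]) if 0 < i < nx - 1 else 0.0
         for i in range(nx)]

mass = sum(C) * dx
var = sum(C[i] * ((i - nx // 2) * dx) ** 2 for i in range(nx)) * dx
print(mass, var)                 # mass stays ~1; variance grows to ~2*D*t = 4
```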

3.3.3 Langevin’s picture

Let us consider a particle of mass m, located at x(t) with velocity v ≡ dx/dt at time t. For simplicity, we shall treat the problem of diffusion in one dimension. It would be hopeless to deterministically trace all the collisions of this particle with a number of solvent molecules in series. However, these collisions can be regarded as a net force A(t) effective in determining the time dependence of the molecule’s position x(t). Newton’s second law of motion can be written in the following form [7, 8]:

m (dv/dt) = −βv + A(t)
which is called Langevin’s equation. In Eq. (37), A(t) is assumed to be randomly and rapidly fluctuating. We multiply x on both sides of Eq. (37) to give

m x (d²x/dt²) = −β x (dx/dt) + x A(t)
and take a time average of both sides during an interval τ, defined as

⟨Q⟩ = (1/τ) ∫₀^τ Q(t) dt
Then, we have, after a time much longer than the particle relaxation time:

m ⟨x (d²x/dt²)⟩ = −β ⟨x (dx/dt)⟩ + ⟨x A⟩
Because the randomly fluctuating force A(t) is independent of the particle position x(t), we calculate

⟨x A⟩ = ⟨x⟩⟨A⟩ = 0
For further derivation, we use the following identities:

x (dx/dt) = (1/2) d(x²)/dt and x (d²x/dt²) = (1/2) d²(x²)/dt² − (dx/dt)²
to provide

(m/2) d²⟨x²⟩/dt² + (β/2) d⟨x²⟩/dt = m⟨v²⟩ = kBT
We let z = d⟨x²⟩/dt and rewrite Eq. (44):

(m/2)(dz/dt) + (β/2) z = kBT
because the kinetic energy of this particle is equal to the thermal energy:

(1/2) m⟨v²⟩ = (1/2) kBT
where kB is the Boltzmann constant. Note that the origin of the particle motion is the large number of collisions with solvent molecules at temperature T. If we take an initial condition of z = 0, indicating that either position or velocity is initially zero, then we obtain

z = (2kBT/β)(1 − e^{−t/τp})
where τp = m/β is the particle relaxation time. One more integration with respect to time yields

⟨x²⟩ = (2kBT/β)[t − τp(1 − e^{−t/τp})]
If t ≫ τp, then t/τp in the rectangular parenthesis is dominant:

⟨x²⟩ ≈ (2kBT/β) t
Stokes’ law of Eq. (17) indicates β = 6πηa, and, therefore, the diffusion coefficient of Brownian motion, or Stokes-Einstein diffusivity, is

DB = kBT/(6πηa)
identical to Eq. (16). The root-mean-square distance is

xrms = √⟨x²⟩ = √(2DB t)
which is proportional to √t. Note that ⟨x⟩ = 0. From an arbitrary time t, the particle drifts for an interval Δt, where Δt ≫ τp, and then

⟨Δx²⟩ = 2DB Δt
Then, the time step Δt is of a macroscopic scale in that one can appreciate the movement of the particle on the order of the particle radius. For a short time t ≪ τp, the mean-square distance of Eq. (48) is approximated as xrms = vrms·t, indicating constant-velocity motion.

Einstein’s and Langevin’s pictures provide identical results for xrms and DB as related to Stokes’ law. On one hand, if a particle is translating with a constant velocity, its distance from the initial location is linearly proportional to the elapsed time; on the other hand, if the particle is diffusing, its root-mean-square distance is proportional to √t.

3.3.4 Gardiner’s picture

In Langevin’s Eq. (37), the randomly fluctuating force can be written as

A(t) = β√(2DB) f(t)
where f satisfies

⟨f(t)⟩ = 0

and

⟨f(t) f(t′)⟩ = δ(t − t′)
Relationships between the parameters are

⟨A(t)⟩ = 0 and ⟨A(t) A(t′)⟩ = 2β²DB δ(t − t′) = 2β kBT δ(t − t′)
(See the next section for the Brownian diffusivity DB.) As such, we assume that

f(t) dt = dW
where dW is the Ito-Wiener process [9, 10], satisfying

⟨dW⟩ = 0 and ⟨dW²⟩ = dt
Then, we can obtain the stochastic differential equation (SDE) as

m dv = −βv dt + β√(2DB) dW
The relationship between x, v, and t can be obtained as follows [11]:

dx = v dt

m dv = −βv dt + β√(2DB) dW
Note that Eq. (63) is free from the fundamental restriction of Langevin’s equation (i.e., τp ≪ dt) by introducing the Ito-Wiener process in Eq. (64). The time interval dt can be arbitrarily chosen to improve calculation speed and/or numerical accuracy.
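Under the assumption that the noise amplitude is β√(2DB) (consistent with DB = kBT/β), an Euler-Maruyama integration of the SDE can verify ⟨x²⟩ → 2DB·t; this is an illustrative sketch with arbitrary parameter values, not the chapter’s implementation:

```python
import math, random

def msd_langevin(m=1.0, beta=10.0, kBT=1.0, dt=4e-3, n_steps=5000, n_particles=500, seed=1):
    """Euler-Maruyama integration of dx = v dt, m dv = -beta*v dt + beta*sqrt(2*D_B) dW."""
    rng = random.Random(seed)
    amp = math.sqrt(2.0 * beta * kBT * dt)   # beta*sqrt(2*D_B)*sqrt(dt) with D_B = kBT/beta
    total = 0.0
    for _ in range(n_particles):
        x, v = 0.0, 0.0
        for _ in range(n_steps):
            v += (-beta * v * dt + amp * rng.gauss(0.0, 1.0)) / m
            x += v * dt
        total += x * x
    return total / n_particles

msd = msd_langevin()
D_B, t = 1.0 / 10.0, 5000 * 4e-3     # D_B = kBT/beta = 0.1; t = 20 >> tau_p = m/beta = 0.1
print(msd, 2 * D_B * t)              # msd approaches 2*D_B*t = 4 for t >> tau_p
```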

Eq. (63) uses the basic definition of velocity as the time derivative of the position in classical mechanics, and Eq. (64) represents the randomly fluctuating force using the Ito-Wiener process, dW. If we keep Langevin’s picture, then these two equations should have the forms of

m dv = −βv dt and dx = v dt + √(2DB) dW
where the random fluctuation disappears in the force balance and appears as a drift displacement, √(2DB) dW. Let C(x) be the concentration of particles near the position x of a specific particle. Note that x is not a fixed point in Eulerian space but a moving coordinate of the particle being tracked. An infinitesimal change of C is




The first term of Eq. (67) is


using Eq. (60) of the time average, which implies that the diffusion time scale already satisfies the restricted condition of dt ≫ τp. The second term of Eq. (67) is


after dropping the second-order term of dt and the first-order term of dW. Substitution of Eqs. (70) and (71) into Eq. (67) gives


and therefore

∂C/∂t = v (∂C/∂x) + DB (∂²C/∂x²)
which looks similar to the conventional convective diffusion equation with the sign of v reversed. Eq. (73) indicates that a group of identical particles of mass m undergoes convective and diffusive transport in Eulerian space. A particle in the group is located at position x at time t, moving with velocity v. This specific particle observes the concentration C of the other particles near its position x. Therefore, Eq. (73) is the convective diffusion equation in the Lagrangian picture. If the particle moves with velocity v in a stationary fluid, then the motion is equivalent to particles that perform only diffusive motion within a fluid moving with v. To emphasize the fluid velocity, we replace v with u; then the Lagrangian convective diffusion Eq. (73) becomes the original (Eulerian) convection-diffusion equation:

∂C/∂t + u (∂C/∂x) = DB (∂²C/∂x²)
which can be directly obtained by replacing Eq. (65) by

dx = u dt + √(2DB) dW
4. Dissipation rates

4.1 Energy consumption per time

In classical mechanics, the work done due to an infinitesimal displacement dr of a particle under the influence of a force field F is

dW = F · dr
The time differentiation of Eq. (76) provides an energy consumption rate (i.e., power, represented by P) as a dot product of the particle velocity v and the applied force F:

P = dW/dt = F · v
For an arbitrary physical quantity Q, the variation rate of its density can be represented as

dq/dt = v · ∇q
where V is the constant system volume and q = Q/V is the volumetric density of Q, named the specific Q. Eq. (78) indicates that the density changing rate of Q is equal to ∇q operated by v. If we replace Q by the internal energy of the system, then the specific energy consumption rate is expressed as

dw/dt = v · ∇w = (Ac ∇w) · (v/Ac)
where w and ∇w are the specific work done and the work done per unit length, respectively, and Ac is the cross-sectional area normal to ∇w. For a continuous medium, ∇w causes transport phenomena in a non-equilibrium state, and v/Ac is generated in proportion to a flux. A changing rate can therefore be quantified as a product of a driving force and a flux, as implied by Eq. (77).

Let us consider a closed system possessing ξ1 and ξ2 as thermodynamic quantities characterizing the system state. The values of ξi at a state of equilibrium are denoted ξ1⁰ and ξ2⁰, and the values outside equilibrium are ξ1 and ξ2. Within a static equilibrium, the entropy of the system, represented by S, is maintained at its maximum. For a system away from the static equilibrium, the generalized driving force is defined as

Xk = ∂s/∂ξk
which is obviously zero for all k at the static equilibrium. A flux Jj of ξj is defined as

Jj = dξj/dt = Σ_k Ljk Xk
which assumes that Jj represents a linear combination of all the existing driving forces Xk. We take Onsager’s symmetry principle [12, 13], which indicates that the kinetic coefficients Ljk for all j and k are symmetric, such that

Ljk = Lkj
The entropy production rate per unit volume, or the specific entropy production rate, is defined as

σ = ds/dt
where s = S/V. We expand the specific entropy s with respect to infinitesimal changes of ξk as an independent variable:

σ = ds/dt = Σ_k (∂s/∂ξk)(dξk/dt) = Σ_k Xk Jk = J · X
which represents the changing rate of the specific entropy as a dot product of flux J and driving force X. The subscript k in Eq. (84) is for the physical quantities on which the respective entropy depends. For mathematical simplicity, a new quantity is defined as Yk = TXk, where T is the absolute temperature in Kelvin, to have

Tσ = Σ_k Jk Yk
Note that Tσ has a physical dimension equal to that of the specific power. An inverse relationship of Eq. (81) is

Xk = Σ_l Rkl Jl
where Rkl represents the inverse matrix of Ljk, i.e., Σ_k Rlk Lkj = δlj, which can be proven by substituting Eq. (86) into Eq. (81):

Jj = Σ_k Ljk Xk = Σ_k Σ_l Ljk Rkl Jl = Σ_l δjl Jl = Jj
Substitution of Eq. (86) into Eq. (84) represents σ in terms of the flux J:

σ = Σ_{k,l} Jk Rkl Jl = Jᵀ R J
where Jᵀ and J are the row and column vectors of the fluxes, respectively, and R represents the generalized resistance matrix. A partial derivative of σ with respect to an arbitrary flux Ji is equal to twice the generalized driving force:

∂σ/∂Ji = 2 Σ_l Ril Jl = 2Xi
In this case, the specific entropy dissipation rate σ is presented using the flux and the differential of σ with respect to the flux by substituting Eq. (89) into Eq. (84):

σ = (1/2) Σ_i Ji (∂σ/∂Ji) = Σ_i Ji Xi
which indicates that the specific entropy increases with respect to the flux and shows that the system is away from a pure, static equilibrium state.

4.2 Effective driving forces

The second thermodynamic law represents the infinitesimal entropy change in the microcanonical ensemble:

dS = (1/T) dE + (P/T) dV − Σ_i (μi/T) dNi
where E represents the internal energy, P represents the system pressure within a volume V, and μi and Ni are the chemical potential and the mole number of species i, respectively. Eq. (91) implies the entropy S = S(E, V, Ni) as a function of the internal energy E, the volume V, and the numbers of species i, Ni. This gives ξ1 = E, ξ2 = V, and ξi = Ni (i = 3 for water and i = 4 for solute). The driving forces Xi are particularly calculated as

Xq = ∇(1/T), Xv = ∇(P/T), and Xs = −∇(μs/T)
where the subscripts q, v, and s of X indicate heat, volume of solvent, and solute, respectively. In Eq. (92), the entropy S is differentiated by the energy E, keeping V and Ns invariant; similar constraints apply to Eqs. (93) and (94). Eq. (94) indicates that the driving force is a negative gradient of the chemical potential divided by the ambient temperature. Within the isothermal-isobaric ensemble, the Gibbs free energy is defined as

G = H − TS
where H = E + PV is the enthalpy. If the solute concentration is dilute (i.e., Nw ≫ Ns), the mixture is referred to as a weak solution. As such, the overall chemical potential can be approximated as

μ ≈ H̄ − T S̄
where H¯and S¯represent molar enthalpy and entropy, respectively. An infinitesimal change of Gibbs free energy is, in particular, written as

$$dG = -S\,dT + V\,dP + \sum_i \mu_i\,dN_i \tag{97}$$

which is equivalent to

$$d\mu_w = -\bar{S}\,dT + \bar{V}\,dP - c\,d\mu_s \tag{98}$$

where $\bar{V}$ is the molar volume of the system, $\mu_s$ is the solute chemical potential, and $c = N_s/N_w$ is the molar fraction of solute molecules. The gradient of the solvent chemical potential can then be rewritten as a linear combination of the gradients of temperature, pressure, and molar solute fraction:

$$\nabla\mu_w = -\bar{S}\,\nabla T + \bar{V}\,\nabla P - RT\,\nabla c \tag{99}$$

where the following mathematical identity was used:

$$c\,\nabla \ln c = \nabla c \tag{100}$$

In general, the fluxes of heat, solvent volume, and solute molecules are intrinsically coupled to their driving forces, such as

$$\begin{pmatrix} J_q \\ J_v \\ J_s \end{pmatrix} = \begin{pmatrix} L_{qq} & L_{qv} & L_{qs} \\ L_{qv} & L_{vv} & L_{vs} \\ L_{qs} & L_{vs} & L_{ss} \end{pmatrix} \begin{pmatrix} X_q \\ X_v \\ X_s \end{pmatrix} \tag{101}$$

where Onsager's reciprocal relationship, $L_{ij} = L_{ji}$, is employed.
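The coupled flux-force relation and the reciprocity constraint can be sketched numerically; the 3×3 coefficient matrix and driving-force values below are illustrative assumptions.

```python
import numpy as np

# Sketch of the coupled flux-force relation J_i = sum_j L_ij X_j for heat (q),
# solvent volume (v), and solute (s).  Coefficient and force values are
# illustrative assumptions; Onsager reciprocity requires L to be symmetric.
L = np.array([[4.0, 0.8, 0.2],    # L_qq, L_qv, L_qs
              [0.8, 3.0, 0.5],    # L_vq, L_vv, L_vs
              [0.2, 0.5, 1.0]])   # L_sq, L_sv, L_ss
X = np.array([1.0e-3, -2.0e-3, 5.0e-4])  # driving forces X_q, X_v, X_s

J = L @ X                          # coupled fluxes J_q, J_v, J_s

print(np.allclose(L, L.T))         # True: reciprocal relation L_ij = L_ji
print(float(J @ X) > 0)            # True: entropy production J.X is positive
```

Because the off-diagonal coefficients are nonzero, each flux responds to all three driving forces, which is the essence of coupled transport.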

4.3 Applications

4.3.1 Solute diffusion

The primary driving force for the solute transport is $X_s = -\nabla(\mu_s/T)$, if temperature and pressure gradients are not significant in solute transport. We consider the diffusive flux of the solute only in an isothermal and isobaric process and neglect the coupling coefficients $L_{qs}$ and $L_{vs}$:

$$J_s = L_{ss}\,X_s = -\frac{L_{ss}}{T}\,\nabla\mu_s \tag{102}$$

which is equivalent to Fick's law of

$$J_s = -D\,\nabla c \tag{103}$$

where $D\;[\mathrm{m^2/s}]$ is the solute diffusion coefficient. If Eq. (102) is expressed in terms of the concentration gradient, we have

$$J_s = -\frac{L_{ss}\,R}{c}\,\nabla c \tag{104}$$

By Eqs. (103) and (104), one can find

$$L_{ss} = \frac{Dc}{R} \tag{105}$$
Then, the entropy-changing rate based on the solute transport is calculated as

$$\sigma_s = J_s\,X_s = \frac{DR}{c}\,(\nabla c)^2 \tag{106}$$
Next, we consider the Stokes-Einstein diffusivity:

$$D_{\mathrm{SE}} = \frac{k_B T}{3\pi\eta\,d_p} \tag{107}$$

where $k_B$ is the Boltzmann constant, $\eta$ is the solvent viscosity, and $d_p$ is the diameter of a particle diffusing within the solvent medium. The phenomenological coefficient $L_{ss}$ is then represented as

$$L_{ss} = \frac{D_{\mathrm{SE}}\,c}{R} \tag{108}$$
For weakly interacting solutes, the solute chemical potential is

$$\mu_s = \mu_0 + RT\ln a \tag{109}$$

where $\mu_0$ is generally a function of $T$ and $P$, which are constant in this equation, $R$ is the gas constant, and $a$ is the solute activity. For a dilute solution, the activity $a$ is often approximated by the concentration $c$ (i.e., $a \simeq c$). The proportionality between $L_{ss}$ and $D_{\mathrm{SE}}$ is

$$\frac{L_{ss}}{D_{\mathrm{SE}}} = \frac{c}{R} \tag{110}$$

which leads to

$$L_{ss} = \frac{c\,T}{3\pi\eta\,d_p\,N_A} \tag{111}$$

where $N_A$ is the Avogadro constant.
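A short numerical sketch, assuming a roughly 1 nm solute in water at room temperature, illustrates the Stokes-Einstein diffusivity and the equivalence of the two closed forms of $L_{ss}$ (as $D_{\mathrm{SE}}\,c/R$ and as $cT/3\pi\eta d_p N_A$); all parameter values are illustrative assumptions.

```python
import math

# Numerical sketch of the Stokes-Einstein diffusivity
# D_SE = k_B T / (3 pi eta d_p) and the coefficient L_ss = D_SE c / R.
# Particle size, temperature, viscosity, and concentration are assumed.
k_B = 1.380649e-23      # Boltzmann constant [J/K]
N_A = 6.02214076e23     # Avogadro constant [1/mol]
R_gas = k_B * N_A       # gas constant [J/(mol K)]

T = 298.15              # temperature [K]
eta = 8.9e-4            # viscosity of water [Pa s] (assumed)
d_p = 1.0e-9            # particle diameter [m] (assumed, ~1 nm solute)
c = 1.0e-3              # molar fraction of solute (assumed dilute)

D_SE = k_B * T / (3.0 * math.pi * eta * d_p)   # diffusivity [m^2/s], ~5e-10
L_ss = D_SE * c / R_gas                        # phenomenological coefficient

# The two closed forms of L_ss agree: D_SE c / R = c T / (3 pi eta d_p N_A)
print(math.isclose(L_ss, c * T / (3.0 * math.pi * eta * d_p * N_A)))  # True
```

The computed diffusivity is of order $10^{-10}\,\mathrm{m^2/s}$, a typical magnitude for small solutes in water, which is a useful sanity check on the units.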

For a dilute isothermal solution, we represent the entropy-changing rate as

$$\sigma_s = \frac{R\,J_s^2}{D\,c} \tag{112}$$

for an isothermal and isobaric process. Assuming that $D$ is not a strong function of $c$, Eq. (112) indicates that the diffusive entropy rate $\sigma_s$ is unconditionally positive (as expected), increases with the diffusive flux, and decreases with the concentration $c$. Within this analysis, $c$ is defined as the molar (number) fraction of solute molecules relative to the solvent. For a dilute solution, conversion of $c$ to a solute mass or mole number per unit volume is straightforward.
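The stated behavior of the diffusive entropy rate (positive everywhere the flux is nonzero, growing with the flux, and inversely weighted by the concentration) can be checked on a simple one-dimensional concentration profile; the profile and parameter values are illustrative assumptions.

```python
import numpy as np

# One-dimensional sketch: Fick flux J_s = -D dc/dx on a grid, and the local
# dissipation rate sigma_s = R J_s^2 / (D c).  The exponential concentration
# profile and all parameter values are illustrative assumptions.
R_gas = 8.314            # gas constant [J/(mol K)]
D = 1.0e-9               # diffusivity [m^2/s]
x = np.linspace(0.0, 1.0e-3, 101)            # position [m]
c = 1.0e-3 * (1.0 + np.exp(-x / 2.0e-4))     # decaying molar-fraction profile

J_s = -D * np.gradient(c, x)        # Fick's law flux (positive, down-gradient)
sigma_s = R_gas * J_s**2 / (D * c)  # entropy production per unit volume

print(np.all(sigma_s > 0))          # True: dissipation is positive everywhere
```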

4.3.2 Thermal flux

The thermal flux consists of conductive and convective transport, proportional to $\nabla T$ and $\nabla P$, respectively. Neglecting the solute diffusion in Eq. (101), the coupled equations of heat and fluid flow are simplified as

$$\begin{pmatrix} J_q \\ J_v \end{pmatrix} = \begin{pmatrix} L_{qq} & L_{qv} \\ L_{qv} & L_{vv} \end{pmatrix}\begin{pmatrix} X_q \\ X_v \end{pmatrix} \tag{113}$$

using Onsager's reciprocal relationship in that the off-diagonal coefficients are symmetric. The driving forces are

$$X_q = \nabla\frac{1}{T} = -\frac{\nabla T}{T^2}, \qquad X_v = \nabla\frac{P}{T} = \frac{\nabla P}{T} - \frac{P\,\nabla T}{T^2} \tag{114}$$
The substitution of Eq. (114) into Eq. (113) gives

$$J_q = -L_{qq}\,\frac{\nabla T}{T^2} + L_{qv}\left(\frac{\nabla P}{T} - \frac{P\,\nabla T}{T^2}\right) \tag{115}$$

$$\begin{pmatrix} J_q \\ J_v \end{pmatrix} = -\frac{1}{T^2}\begin{pmatrix} L_{qq} & L_{qv} \\ L_{qv} & L_{vv} \end{pmatrix}\begin{pmatrix} \nabla T \\ P\,\nabla T - T\,\nabla P \end{pmatrix} \tag{116}$$
Eliminating the pressure gradient by subtracting $L_{qv}/L_{vv}$ times the second row of Eq. (116) from the first row provides

$$J_q - \frac{L_{qv}}{L_{vv}}\,J_v = -\frac{L_{qq}L_{vv} - L_{qv}^2}{L_{vv}}\,\frac{\nabla T}{T^2} \tag{117}$$

Through physical interpretation, one can conclude that

$$\frac{L_{qv}}{L_{vv}} = \tilde{h} \tag{118}$$
where $\tilde{h}$ represents the system enthalpy as a function of temperature. Finally, the coupled heat transfer equation is

$$J_q = -k\,\nabla T + \tilde{h}\,J_v \tag{119}$$

where

$$k = \frac{L_{qq}L_{vv} - L_{qv}^2}{L_{vv}\,T^2} \tag{120}$$
is the thermal conductivity.
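The sign of the effective conductivity follows from the positive-definiteness of the heat/volume Onsager block, which requires $L_{qq}L_{vv} > L_{qv}^2$ so that the entropy production stays positive. A minimal sketch with assumed coefficient values, using the closed form $k = (L_{qq}L_{vv} - L_{qv}^2)/(L_{vv}T^2)$ as an assumption consistent with the coupled derivation:

```python
# Sketch: effective thermal conductivity from the 2x2 heat/volume Onsager
# block.  The coefficient values and the closed form
# k = (L_qq L_vv - L_qv^2) / (L_vv T^2) are illustrative assumptions.
L_qq, L_vv, L_qv = 5.0, 2.0, 1.5   # assumed Onsager coefficients (L_qv = L_vq)
T = 300.0                          # temperature [K]

det = L_qq * L_vv - L_qv**2        # positive for a positive-definite block
k = det / (L_vv * T**2)            # effective thermal conductivity

print(det > 0 and k > 0)           # True
```

If the coupling coefficient $L_{qv}$ were large enough to violate the determinant condition, the model would predict negative dissipation, so the inequality doubles as a consistency check on fitted coefficients.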


5. Concluding remarks

In this chapter, we investigated the diffusion phenomenon as an irreversible process. By the thermodynamic laws, entropy always increases as a system of interest evolves in a non-equilibrium state. The entropy-increasing rate per unit volume is a measure of how fast the system changes from the current state to a more disordered one. The entropy concept was explained from basic mathematics using several examples. The diffusion phenomenon was explained using (phenomenological) Fick's law, and more fundamental theories were summarized, which theoretically derive the diffusion coefficient and the convection-diffusion equation. Finally, the dissipation rate, i.e., the entropy-changing rate per volume, was revisited and obtained in detail. The coupled, irreversible transport equation in the steady state was applied to solute diffusion in an isothermal-isobaric process and to heat transfer consisting of conductive and convective transport due to the temperature gradient and fluid flow, respectively. As engineering processes are mostly open systems in the steady state, the theoretical approaches discussed in this chapter may serve as a starting point for future developments in irreversible thermodynamics and statistical mechanics.


  • The Greek symbol μ is also often used for viscosity in the fluid mechanics literature. In this book, the chemical potential is denoted as μ.

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite and reference


Albert S. Kim (August 2nd 2019). Fundamentals of Irreversible Thermodynamics for Coupled Transport, Non-Equilibrium Particle Dynamics, Albert S. Kim, IntechOpen, DOI: 10.5772/intechopen.86607. Available from:
