Open access peer-reviewed chapter

Entropy in Quantum Mechanics and Applications to Nonequilibrium Thermodynamics

By Paul Bracken

Submitted: December 12th 2019Reviewed: February 20th 2020Published: April 17th 2020

DOI: 10.5772/intechopen.91831



Classical formulations of the entropy concept and its interpretation are introduced in order to motivate the definition of the quantum von Neumann entropy. Some general properties of quantum entropy are developed, such as the conditions under which quantum entropy increases. The current state of the area joining thermodynamics and quantum mechanics is reviewed; this interaction will be critical for the development of nonequilibrium thermodynamics. The Jarzynski inequality is derived in two separate but related ways. The nature of irreversibility and its role in physics are considered as well. Finally, a specific quantum spin model is defined and studied in a way that illustrates many of the subjects that have appeared.


  • classical
  • quantum
  • partition function
  • temperatures
  • entropy
  • irreversible

1. Introduction

The laws of thermodynamics are fundamental to the present understanding of nature [1, 2]. It is not surprising then to find they have a very wide range of applications beyond their original scope, such as to gravitation. The analogy between properties of black holes and thermodynamics could be extended to a complete correspondence, since a black hole in free space had been shown to radiate thermally with a temperature T = κ/2π, where κ is the surface gravity. One should be able to assign an entropy to a black hole given by S_H = A_H/4, where A_H is the surface area of the black hole [3]. In the nineteenth century, the problem of reconciling time-asymmetric behavior with time-symmetric microscopic dynamics became a central issue in this area of physics [4]. Lord Kelvin wrote about the subjection of physical phenomena to microscopic dynamical law: if the motion of every particle of matter in the universe were precisely reversed at any instant, the course of nature would be simply reversed for ever after [5]. Physical processes, on the other hand, are irreversible, such as conduction of heat and diffusion processes [6, 7]. It subsequently became apparent that not only is there no conflict between reversible microscopic laws and irreversible macroscopic behavior, but there are extremely strong reasons to expect the latter from the former. Among these reasons are the great disparity between microscopic and macroscopic scales and the fact that the events we observe in the macroworld are determined not only by the microscopic dynamics but also by the initial conditions or state of the system.

In the twentieth century, it became clear that the microworld is described by a different kind of physics, along with mathematical ideas that need not be taken into account in describing the macroworld. This is the subject of quantum mechanics. Even though the new quantum equations have symmetry properties similar to those of their classical counterparts, quantum theory also reveals numerous phenomena that can contribute at this level to the problems mentioned above. These phenomena include quantum entanglement, the effect of decoherence in general, and the theory of measurement as well.

The purpose of this chapter is to study the subject of entropy as it applies to quantum mechanics [8, 9]. Its definition must be relevant to very small systems at the atomic and molecular level, and its relationship to the entropies known at other scales can then be examined. It is also important to relate information from this new area of physics to the older and more established theories of thermodynamics and statistical physics [10, 11, 12, 13, 14, 15]. To summarize, many good reasons dictate that the arrow of time is specified by the direction of increase of the Boltzmann entropy, or its quantum counterpart, the von Neumann macroscopic entropy. To relate the quantum Boltzmann approach to irreversibility to measurement theory, the measuring apparatus must be included as a part of the closed quantum mechanical system.


2. Entropy and quantum mechanics

Boltzmann’s great insight was to connect the second law of thermodynamics with phase space volume. He did this by observing that for a dilute gas, log|Γ_M| is proportional, up to terms negligible compared to the system size, to the thermodynamic entropy of Clausius. He then extended his insight about the relation between thermodynamic entropy and log|Γ_M| to all macroscopic systems, no matter what their composition. This gave a macroscopic definition of the observationally measurable entropy of equilibrium macroscopic systems. With this connection established, he generalized it to define an entropy for systems not in equilibrium.

Clearly, the macrostate M(x) is determined by x, a point in phase space, and there are many such points, in fact a continuum, which correspond to the same M. Let Γ_M then be the region in Γ consisting of all microstates x corresponding to a given macrostate M. Boltzmann associated with each microstate x of a macroscopic system in macrostate M a number S_B, which depends only on M(x), and which up to multiplicative and additive constants is given by

$$ S_B(M(x)) = k_B \log |\Gamma_{M(x)}| \tag{1} $$
This S is called the Boltzmann entropy of a classical system. The constant k_B = 1.38 × 10^−16 erg/K is called Boltzmann’s constant, and if temperature is measured in ergs instead of Kelvin, it may be set to one. Boltzmann argued that, due to the large differences in the sizes of the Γ_M, S_B(x(t)) will typically increase in a way which explains and describes the evolution of physical systems towards equilibrium.
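As a brief numerical aside (not part of the original development), the combinatorial origin of S_B can be illustrated with a toy model: N labeled particles distributed between the two halves of a box, where a macrostate is the occupation number n of the left half and |Γ_M| is the binomial coefficient C(N, n). The helper name `boltzmann_entropy` is hypothetical.

```python
import math

# Toy macrostates: n of N labeled particles in the left half of a box.
# The phase-space volume of macrostate n is the binomial coefficient C(N, n),
# so S_B(n) = log C(N, n) in units where k_B = 1.
def boltzmann_entropy(N, n):
    return math.log(math.comb(N, n))

N = 100
S = {n: boltzmann_entropy(N, n) for n in (0, 10, 25, 50)}
# The balanced macrostate has by far the largest phase-space volume, which is
# why typical microstates evolve toward it (relaxation to equilibrium).
assert max(S, key=S.get) == 50
```

Even at N = 100 the balanced macrostate outweighs an unbalanced one by many orders of magnitude, a small hint of the enormous disparity of scales invoked in the introduction.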

The approach of Gibbs, which concentrates primarily on probability distributions or ensembles, is conceptually different from Boltzmann’s. The Gibbs entropy of a macroscopic system is defined for an ensemble density ρ(x) to be

$$ S_G(\rho) = -k_B \int_{\Gamma} \rho(x) \log \rho(x)\, dx \tag{2} $$
In (2), ρ(x) dx is the probability for the microscopic state of the system to be found in the phase space volume element dx. Suppose ρ(x) is taken to be the generalized microcanonical ensemble associated with a macrostate M:

$$ \rho_M(x) = \begin{cases} |\Gamma_M|^{-1}, & x \in \Gamma_M, \\ 0, & \text{otherwise}. \end{cases} \tag{3} $$
Then clearly

$$ S_G(\rho_M) = k_B \log |\Gamma_M| = S_B(M) \tag{4} $$
The probability density for the system in the equilibrium macrostate ρ_{M_eq} is the same as that for the microcanonical ensemble and is equivalent to the canonical or grand canonical ensemble when the system is of macroscopic size. The time development of S_B and S_G subsequent to some initial time when ρ = ρ_M is very different unless M = M_eq, when there is no further systematic change in M or ρ. In fact, S_G(ρ) never changes in time as long as x evolves according to Hamiltonian evolution, so that ρ evolves according to the Liouville equation. Then S_G gives no indication that the system is evolving towards equilibrium. Thus the relevant entropy for understanding the time evolution of macroscopic systems is S_B and not S_G.

From the standpoint of mathematics, these expressions for classical entropies can be unified under the heading of the Boltzmann-Shannon-Gibbs entropy [16]. A very general form of entropy which includes those mentioned can be defined in a mathematically rigorous way. To do so, let (Ω, A, μ) be a finite measure space and ν a probability measure that is absolutely continuous with respect to μ, so that its Radon-Nikodym derivative dν/dμ exists. The generalized BSG entropy is defined to be

$$ S_{BSG}(\nu) = -\int_{\Omega} \frac{d\nu}{d\mu} \log \frac{d\nu}{d\mu}\, d\mu \tag{5} $$
when the integrand is integrable.

This includes the classical Boltzmann-Gibbs entropy when Ω, μ, and ν are given by

$$ \Omega = \Gamma, \qquad d\mu = dx, \qquad \frac{d\nu}{d\mu} = \rho(x) \tag{6} $$
It also includes the Shannon entropy appearing in information theory, in which

$$ \Omega = \{1, 2, \ldots\}, \qquad \mu = \text{counting measure}, \qquad \nu(\{i\}) = p_i \tag{7} $$
In this case, (5) gives the entropy to be

$$ S = -\sum_i p_i \log p_i \tag{8} $$
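As a quick illustration of the Shannon form, the following minimal sketch (the function name `shannon_entropy` is not from the chapter) evaluates S for a distribution {p_i} and checks the two standard limiting cases:

```python
import math

def shannon_entropy(p):
    """S = -sum_i p_i log p_i, with the convention 0 log 0 = 0."""
    assert abs(sum(p) - 1.0) < 1e-12, "p must be a probability distribution"
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# The uniform distribution maximizes S at log n; a point mass gives S = 0.
assert abs(shannon_entropy([0.25] * 4) - math.log(4)) < 1e-12
assert shannon_entropy([1.0, 0.0, 0.0]) == 0.0
```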
In attempting to translate these considerations to the quantum domain, it is immediately clear that a perfect analogy does not exist.

Although the situation is in many ways similar in quantum mechanics, it is not identical. The reversible incompressible flow in phase space is replaced by the unitary evolution of wave functions in Hilbert space, and velocity reversal of x by complex conjugation of the wave function. The analogue of the Gibbs entropy (2) of an ensemble is the von Neumann entropy of a density matrix ρ:

$$ S_{vN}(\rho) = -k_B\, \mathrm{Tr}\, \rho \log \rho \tag{9} $$
This formula was given by von Neumann. It generalizes the classical expression of Boltzmann and Gibbs to the realm of quantum mechanics. The density matrix with maximal entropy is the Gibbs state. The range of S_vN is the whole extended half-line [0, ∞], so to every number ζ with 0 ≤ ζ ≤ ∞, there is a density matrix ρ such that S_vN(ρ) = ζ. Like the classical S_G(ρ), this does not alter in time for an isolated system evolving under Schrödinger evolution. It has value zero whenever ρ represents a pure state. Similar to S_G(ρ), it is not the most appropriate quantity for describing the time-asymmetric behavior of isolated macroscopic systems. The Szilard engine composed of a single atom is an example in which the entropy of a quantum object is made use of. von Neumann discusses the macroscopic entropy of a system as follows. A macrostate is described by specifying the values of a set of commuting macroscopic observable operators Â, such as particle number, energy, and so forth, for each of the cells that make up the system. Corresponding to the eigenvalues a_α, there is an orthogonal decomposition of the system’s Hilbert space H into linear subspaces Γ̂_α on which the observables Â take the values a_α. Let Π_α be the projection onto Γ̂_α. von Neumann then defines the macroscopic entropy of a system with density matrix ρ̃ as

$$ S_{mac}(\tilde\rho) = k_B \sum_{\alpha} p_{\alpha}(\tilde\rho) \log \dim \hat\Gamma_{\alpha} - k_B \sum_{\alpha} p_{\alpha}(\tilde\rho) \log p_{\alpha}(\tilde\rho) \tag{10} $$
Here, p_α(ρ̃) is the probability of finding the system with density matrix ρ̃ in the macrostate M_α:

$$ p_{\alpha}(\tilde\rho) = \mathrm{Tr}\, \Pi_{\alpha} \tilde\rho \tag{11} $$
and dim Γ̂_α is the dimension of Γ̂_α. An analogous definition is made for a system which is represented by a wave function Ψ; simply replace p_α(ρ) by p_α(Ψ) = ⟨Ψ|Π_α|Ψ⟩. In fact, |Ψ⟩⟨Ψ| just corresponds to a particular pure density matrix.
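The von Neumann entropy (9) is computed from the spectrum of ρ. A small numerical sketch (assuming NumPy, with k_B = 1; the function name is hypothetical):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S_vN(rho) = -Tr(rho log rho), evaluated on the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                    # convention: 0 log 0 = 0
    return float(-np.sum(w * np.log(w)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure state: S_vN = 0
mixed = np.eye(2) / 2                       # maximally mixed qubit: S_vN = log 2
assert abs(von_neumann_entropy(pure)) < 1e-9
assert abs(von_neumann_entropy(mixed) - np.log(2)) < 1e-9
```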

von Neumann justifies (10) by noting that

$$ \tilde\rho \;\longrightarrow\; \hat\rho = \sum_{\alpha} \frac{p_{\alpha}(\tilde\rho)}{\dim \hat\Gamma_{\alpha}}\, \Pi_{\alpha} \tag{12} $$

$$ S_{vN}(\hat\rho) = S_{mac}(\tilde\rho) \tag{13} $$
and this coarse-grained density matrix is macroscopically indistinguishable from ρ̃.

A correspondence can be made between the partitioning of classical phase space Γ and the decomposition of Hilbert space H, and the natural quantum analogue of Boltzmann’s definition of S_B(M) in (1) is

$$ S_B(M_{\alpha}) = k_B \log \dim \hat\Gamma_{M_{\alpha}} \tag{14} $$
where dim Γ̂_{M_α} is the dimension of Γ̂_{M_α}. With definition (14), the first term on the right of (10) is just what would be stated for the expected value of the entropy of a classical system whose macrostate we were unsure of. The second part of (10) will be negligible compared to the first term for a macroscopic system, classical or quantum, going to zero when divided by the number of particles.

Note the difference: in the classical case, the state of the system is described by x ∈ Γ_α for some α, so the system is always in one of the macrostates M_α. For a quantum system described by ρ or Ψ, this is not the case. There is no analogue of (1) for general ρ or Ψ. Even when the system is initially in a macrostate corresponding to a definite microstate at t_0, only the classical system will be in a unique macrostate at a later time t. The quantum system will in general evolve into a superposition of different macrostates, as in the Schrödinger cat paradox, in which a wave function Ψ corresponding to a particular macrostate evolves into a linear combination of wave functions associated with very different macrostates. The classical limit is obtained by a prescription in which the density matrix is identified with a probability distribution in phase space and the trace is replaced by integration over phase space. The superposition principle excludes partitions of the Hilbert space: an orthogonal decomposition is all that is available.

2.1 Properties of entropy functions

Entropy functions have a number of characteristic properties which should be briefly described in the quantum case. The set of observables will be the bounded, self-adjoint operators with discrete spectra in a Hilbert space. The set of normal states can be taken to be the density operators or positive operators of trace one.

The entropy functional satisfies the following inequalities. Let λ_i > 0 and Σ_i λ_i = 1. Then S has the concavity property:

$$ S\Big(\sum_i \lambda_i \rho_i\Big) \ge \sum_i \lambda_i S(\rho_i) \tag{15} $$
with equality if and only if all the ρ_i with λ_i > 0 are equal.

Subadditivity holds, with equality if and only if ρ_i ρ_j = 0 for i ≠ j:

$$ S\Big(\sum_i \lambda_i \rho_i\Big) \le \sum_i \lambda_i S(\rho_i) - \sum_i \lambda_i \log \lambda_i \tag{16} $$


where the first equality holds iff T_B ρ = ρ and the second iff S(ρ_k) = S(ρ) for all k. The conditional (relative) entropy is defined to be

$$ S(\rho_1 \mid \rho_2) = \mathrm{Tr}\, \rho_1 (\log \rho_1 - \log \rho_2) \tag{18} $$
The formal expression is interpreted as follows. If A, B are positive trace-class operators with complete orthonormal sets of eigenstates {|a_i⟩} and {|b_j⟩} and corresponding eigenvalues a_i and b_j, then inserting a resolution of the identity, Σ_i ⟨a_i|A log A|a_i⟩ = Σ_i a_i log a_i and Σ_i ⟨a_i|A log B|a_i⟩ = Σ_{i,j} |⟨a_i|b_j⟩|² a_i log b_j, so that, for unit-trace A and B,

$$ S(\rho_1 \mid \rho_2) = \sum_{i,j} |\langle a_i | b_j \rangle|^2 \big( a_i \log a_i - a_i \log b_j - a_i + b_j \big) $$
Convexity of the function x log x ensures that the terms of the final sum are nonnegative. In order that S(ρ_1|ρ_2) < ∞, it is necessary that π(ρ_1) ≤ π(ρ_2), where π(W) = supp W is the support projection of W. From the definition, S(ρ_1|ρ_2) ≥ 0, with equality if and only if ρ_1 = ρ_2. If λρ_1 ≤ ρ_2 for some λ ∈ (0, 1], then S(ρ_1|ρ_2) ≤ −log λ, by operator monotonicity of log z. If ρ = Σ_i λ_i ρ_i, then

$$ S(\rho) = \sum_i \lambda_i S(\rho_i) + \sum_i \lambda_i S(\rho_i \mid \rho) $$
which gives (15) and (16). If T is a trace-preserving, unital (doubly stochastic) map, then

$$ S(T\rho) \ge S(\rho) $$
This is to say that Tis entropy-increasing.
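A pinching map ρ → Σ_α Π_α ρ Π_α is trace-preserving and unital, so by the inequality above it cannot decrease entropy. A numerical spot-check (a sketch assuming NumPy; the projectors here are simply onto the computational basis):

```python
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T                    # random positive matrix
rho /= np.trace(rho).real               # normalize to a density matrix

# Pinching: keep only the diagonal entries (projectors onto the basis states).
pinched = np.diag(np.diag(rho))
assert abs(np.trace(pinched).real - 1.0) < 1e-12
assert entropy(pinched) >= entropy(rho) - 1e-12
```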

The concept of irreversibility is clearly going to be relevant to the subject at hand, so some thoughts related to it will be given periodically in what follows. A possible way to account for irreversibility in a closed system in nature is by the various types of coarse-graining. There are also strong reasons to suggest the arrow of time is provided by the direction of increase of the quantum form of the Boltzmann entropy. The measuring apparatus should be included as part of the closed quantum mechanical system in order to relate the quantum Boltzmann approach to irreversibility to the concept of a measurement. Let S_c be a composite system consisting of a macroscopic system S coupled to a measuring instrument I, so S_c = S + I, where I is a large but finite N-particle system. A set of coarse-grained, mutually commuting extrinsic variables is provided whose eigenspaces correspond to the pointer positions of I. von Neumann’s picture of the measurement process is basic to the approach, according to which the coupling of S to I leads to the following effects. A pure state of S described by a linear combination Σ_α c_α ψ_α of its orthonormal energy eigenstates is converted into a statistical mixture of these states, for which |c_α|² is the probability of finding the system in state ψ_α. It also sends a certain set of classical, or intercommuting, macroscopic variables M of I to values indicated by pointer readings that indicate which of the states is realized.

There is an amplification process in the S–I coupling, whereby different microstates of S give rise to macroscopically different states of I. If I is designed to have readings which are in one-to-one correspondence with the eigenstates of S, it may be assumed the index α of its microstates goes from 1 to n. Denote the projection operator for the subspace K_α by Π_α; then

$$ \Pi_{\alpha} \Pi_{\beta} = \delta_{\alpha\beta} \Pi_{\alpha}, \qquad \sum_{\alpha} \Pi_{\alpha} = I \tag{22} $$
and each element of the abelian subalgebra generated by them takes the form, with the M_α scalars,

$$ \hat M = \sum_{\alpha} M_{\alpha} \Pi_{\alpha} \tag{23} $$
Define the projection operators:


Suppose A is measured on the system S, initially in a state of the composite system described by a density matrix ρ. The value a_α is obtained with probability τ_α = Tr(ρ π_α). After the measurement, the state of the composite system is accounted for by the density matrix:

$$ \tilde\rho = \sum_{\alpha} \pi_{\alpha}\, \rho\, \pi_{\alpha} \tag{25} $$
This is a mixture of states in each of which Ahas a definite value.

The transformation ρ → ρ̃ = Σ_α π_α ρ π_α may be viewed as a loss of the information contained in the non-diagonal terms π_α ρ π_β with α ≠ β. When a sequence of measurements is carried out and time evolution is permitted to occur between the measurements, one is led to assign to a sequence of events π_{α_1}(t_1), π_{α_2}(t_2), …, π_{α_n}(t_n) the probability distribution:

$$ P(\alpha_1, \ldots, \alpha_n) = \mathrm{Tr}\big[ \pi_{\alpha_n}(t_n) \cdots \pi_{\alpha_1}(t_1)\, \rho\, \pi_{\alpha_1}(t_1) \cdots \pi_{\alpha_n}(t_n) \big] \tag{26} $$
where ρ = ρ(0), over the set of histories, and where the π_k satisfy (22) with Π replaced by π. Let us define

$$ D(\alpha_1, \ldots, \alpha_n; \alpha'_1, \ldots, \alpha'_n) = \mathrm{Tr}\big[ \pi_{\alpha_n}(t_n) \cdots \pi_{\alpha_1}(t_1)\, \rho\, \pi_{\alpha'_1}(t_1) \cdots \pi_{\alpha'_n}(t_n) \big] \tag{27} $$
The following definition can now be stated. A history is said to decohere if and only if

$$ \mathrm{Tr}\big[ \pi_{\alpha_n}(t_n) \cdots \pi_{\alpha_1}(t_1)\, \rho\, \pi_{\alpha'_1}(t_1) \cdots \pi_{\alpha'_n}(t_n) \big] = 0 \quad \text{unless } \alpha_k = \alpha'_k \text{ for all } k \tag{28} $$
A state is called decoherent with respect to the set of π_α if and only if

$$ \rho = \sum_{\alpha} \pi_{\alpha}\, \rho\, \pi_{\alpha} \tag{29} $$
This implies that Tr(π_α ρ π_{α′} A) = 0 for all α ≠ α′, which is equivalent to π_α ρ π_{α′} = 0 for all α ≠ α′. In contrast to infinite systems, where there is no need to refer to a choice of projections, decoherent mixed states over the macroscopic observables can be described by relations between the density matrix and the projectors. They would be of the form ρ_m = |Ψ⟩⟨Ψ| with Ψ = Σ_α λ_α π_α Φ_α, such that Σ_α |λ_α|² = 1 and Φ_α ∈ H, and satisfy


The relative or conditional entropy between two states, S(ρ_1|ρ_2), was defined in (18), and it plays a crucial role. It is worth stating a few of its properties, as some are necessary for the theorem below:

$$ S(\rho_1 \mid \rho_2) \ge 0, \qquad S\Big(\sum_i \lambda_i \rho_1^{(i)} \,\Big|\, \sum_i \lambda_i \rho_2^{(i)}\Big) \le \sum_i \lambda_i\, S\big(\rho_1^{(i)} \mid \rho_2^{(i)}\big) $$
When γ is a completely positive map, or an embedding,

$$ S(\gamma\rho_1 \mid \gamma\rho_2) \le S(\rho_1 \mid \rho_2) \tag{34} $$
The last two inequalities are known as joint concavity and monotonicity of the relative entropy. The following result may be thought of as a quantum version of the second law.

Theorem: Suppose the initial density matrix ρ(0) is decoherent at zero time, in the sense of (29), with respect to the π_α, and has finite entropy


and it is not an equilibrium state of the system. Let ρ(t_f), for t_f > 0, be any subsequent state of the system, possibly an equilibrium state. Then for an automorphic, unitary time evolution of the system between 0 ≤ t ≤ t_f,

$$ S\Big( \sum_{\alpha} \pi_{\alpha}\, \rho(t_f)\, \pi_{\alpha} \Big) \ge S\big(\rho(0)\big) $$
where S(0) = S(t_f) if and only if (e): π_α ρ(t_f) π_β + π_β ρ(t_f) π_α = 0 for all α < β.

Proof: Set ρ̃(t_f) = Σ_α π_α ρ(t_f) π_α = γ(ρ(t_f)), so ρ̃(t_f) is obtained from ρ(t_f) by means of a completely positive map. It follows that


The first equality uses the cyclic property of the trace and the definition of ρ̃. The second equality uses decoherence of ρ(0), the next inequality is a consequence of (34), and the last equality holds because the evolution is unitary and hence preserves entropy. This implies that S(t_f) ≥ S(0), and the equality condition (e) follows from (32).

Of course, entropy growth as in the theorem is not necessarily monotonic in the time variable. For this reason, it is usual to refer to fixed initial and final states. For thermal systems, a natural choice of the final state is the equilibrium state of the system. It is the case in thermodynamics that irreversibility is manifested as a monotonic increase in the entropy. Thermodynamic entropy, it is thought, is related to the entropy of the states defined in both classical and quantum theory. Under an automorphic time evolution, the entropy is conserved. One application of an environment is to account for an increase. A type of coarse-graining becomes necessary, together with the right conditions on the initial state, to account for the arrow of time. In quantum mechanics, the coarse-graining seems to be necessary; it may be thought of as a restriction of the algebra and can also be interpreted as leaving out unobservable quantum correlations. This may, for example, correspond to decoherence effects important in quantum measurements. Competing effects arise, such as the fact that correlations becoming unobservable may lead to entropy increase, while a decrease in entropy might be due to nonautomorphic processes. Although both effects lead to irreversibility, they are not cooperative but rather contrary to one another. The observation that the second law does hold implies these nonautomorphic events must be rare on time scales relevant to thermodynamics.


3. Quantum mechanics and nonequilibrium thermodynamics

Some aspects of equilibrium thermodynamics are examined by considering an isothermal process. Since it is a quasistatic process, it may be decomposed into a sequence of infinitesimal processes. Assume initially the system has a Hamiltonian H(γ) and is in thermal equilibrium at a temperature T. Boltzmann’s constant is set to one. The state is given by the Gibbs density operator ρ = e^{−βH}/Z. This expression can also be written in terms of the energy eigenvalues ε_n and eigenvectors |n⟩ of H. The probability of finding the system in state |n⟩ is

$$ p_n = \frac{e^{-\beta \varepsilon_n}}{Z}, \qquad Z = \sum_n e^{-\beta \varepsilon_n} \tag{38} $$
The average internal energy U of the system is given as

$$ U = \sum_n p_n\, \varepsilon_n $$
When the parameter γ is changed to γ + dγ, both ε_n and p_n, and hence U, change according to

$$ dU = \sum_n \big( \varepsilon_n\, dp_n + p_n\, d\varepsilon_n \big) $$
Each instantaneous infinitesimal process can be broken down into two parts: the first is the work performed; the second is the heat transferred as the system relaxes to equilibrium. This breakup motivates us to define

$$ \delta W = \sum_n p_n\, d\varepsilon_n, \qquad \delta Q = \sum_n \varepsilon_n\, dp_n $$
so dU = δQ + δW, where δ is used to indicate that heat and work are not exact differentials. The free energy of the system is defined to be F = −T log Z, so dF = Σ_n p_n dε_n, which means

$$ \delta W = dF $$
By integrating over the infinitesimal segments, we find W is

$$ W = \int dF = F(\gamma_f) - F(\gamma_i) = \Delta F $$
Inverting Eq. (38) for p_n, we can solve for ε_n:

$$ \varepsilon_n = -T \big( \log p_n + \log Z \big) $$
Substituting into the relation for δQ, we get two terms, one proportional to log Z and the other to log p_n. The term with log Z vanishes because the p_k satisfy Σ_k p_k = 1:

$$ -T \log Z \sum_n dp_n = -T \log Z\; d\Big( \sum_n p_n \Big) = 0 $$
It remains to study

$$ \delta Q = -T \sum_n \log p_n\, dp_n $$
By the chain rule, and using Σ_n dp_n = 0,

$$ \sum_n \log p_n\, dp_n = d\Big( \sum_n p_n \log p_n \Big) - \sum_n dp_n = d\Big( \sum_n p_n \log p_n \Big) $$
So δQ is not itself the differential of a function of state, but it is related to the variation of something that is. Define the entropy S as usual from (9), S = −Σ_n p_n log p_n, and arrive at

$$ \delta Q = T\, dS $$
This relation only holds for infinitesimal quasistatic processes. For finite and irreversible processes, there may be additional contributions to the entropy change. This framework has been quite successful at describing many different types of physical systems [17, 18, 19].
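The identity δQ = T dS can be checked numerically on a two-level system with levels 0 and ε (a sketch with assumed parameter values; k_B = 1):

```python
import math

def probs(eps, T):
    """Gibbs probabilities for a two-level system with energies 0 and eps."""
    Z = 1.0 + math.exp(-eps / T)
    return [1.0 / Z, math.exp(-eps / T) / Z]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p)

T, eps, deps = 1.0, 1.0, 1e-6
p0 = probs(eps, T)
p1 = probs(eps + deps, T)

# delta Q = sum_n eps_n dp_n, evaluated for an infinitesimal change of eps
dQ = sum(e * (b - a) for e, a, b in zip([0.0, eps], p0, p1))
dS = entropy(p1) - entropy(p0)
assert abs(dQ - T * dS) < 1e-9          # delta Q = T dS to first order
```

The agreement degrades as deps grows, in line with the remark that the relation holds only for infinitesimal steps.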

A deep insight into the properties of nonequilibrium thermodynamics has come recently from regarding work as a random variable. For example, consider a process in which a piston is used to compress a gas in a cylinder. Due to the chaotic motion of the gas molecules, each time the piston is pressed, the gas exerts a back reaction with a different force. This means the work needed to achieve a given compression changes each time the process is carried out.

Usually a knowledge of nonequilibrium processes is restricted to inequalities. Jarzynski was able to show, by interpreting the work W as a random variable, that an equality can be obtained, even for a process performed arbitrarily far from equilibrium.

Suppose the system is always prepared in the same state initially. A process is carried out, and the total work W performed is measured. Repeating this many times, a probability distribution P(W) for the work can be constructed. An average of W can be computed using P(W) as

$$ \langle W \rangle = \int W\, P(W)\, dW $$
Jarzynski showed that the statistical average of e^{−βW} satisfies

$$ \big\langle e^{-\beta W} \big\rangle = e^{-\beta \Delta F} \tag{50} $$
where ΔF = F(T, γ_f) − F(T, γ_i). It holds for a process performed arbitrarily far from equilibrium. The inequality ⟨W⟩ ≥ ΔF is contained in (50) and can be obtained by applying Jensen’s inequality, which states that ⟨e^{−βW}⟩ ≥ e^{−β⟨W⟩}.
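The equality (50) can be verified exactly for a two-level system subjected to two projective energy measurements separated by an arbitrary unitary, following the two-measurement protocol described below. All numerical values in this sketch are arbitrary choices:

```python
import numpy as np

beta = 0.7
Hi = np.array([[0.0, 0.0], [0.0, 1.0]])       # initial Hamiltonian (assumed)
Hf = np.array([[0.3, 0.4], [0.4, 1.5]])       # final Hamiltonian (assumed)
theta = 0.9                                   # any unitary between measurements
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

ei, vi = np.linalg.eigh(Hi)
ef, vf = np.linalg.eigh(Hf)
Zi, Zf = np.exp(-beta * ei).sum(), np.exp(-beta * ef).sum()

# <e^{-beta W}> = sum_{n,m} p_n |<m|U|n>|^2 e^{-beta(e_m^f - e_n^i)}
avg = sum(
    (np.exp(-beta * ei[n]) / Zi)
    * abs(vf[:, m] @ U @ vi[:, n]) ** 2
    * np.exp(-beta * (ef[m] - ei[n]))
    for n in range(2) for m in range(2)
)
assert abs(avg - Zf / Zi) < 1e-12             # Jarzynski: equals e^{-beta dF}
```

The identity holds for any unitary U, which is the content of the derivation that follows.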

In macroscopic systems, individual measurements are usually very close to the average, by the law of large numbers. For microscopic systems, this is usually not true. In fact, individual realizations of W may be smaller than ΔF. These cases would be local violations of the second law, but for large systems they become extremely rare. If the function P(W) is known, the probability of a local violation of the second law is

$$ P(W \le \Delta F) = \int_{-\infty}^{\Delta F} P(W)\, dW $$
To derive (50) requires specifying the system’s dynamics in detail, be it classical or quantum, unitary or otherwise.

Consider nonunitary quantum dynamics. Initially, the system has Hamiltonian H_i = H(γ_i) and is in thermal equilibrium with a bath at temperature T, so the initial state of the system is the Gibbs thermal density matrix (38). Let ε_n^i and |n⟩ denote the initial eigenvalues and eigenvectors of H_i. The energy is measured, and ε_n^i is obtained with probability p_n = e^{−βε_n^i}/Z.

Immediately after this measurement, γ changes from γ(0) = γ_i to γ(τ) = γ_f according to a protocol γ(t). If it is assumed the contact with the bath is very weak during this process, the state of the system evolves according to

$$ |\psi_{\tau}\rangle = U(\tau)\, |n\rangle $$
where U is the unitary evolution operator which satisfies Schrödinger’s equation, i∂_t U = H(t)U, U(0) = 1.

The Hamiltonian is H(γ_f) at the end, with energy levels ε_m^f and eigenvectors |m⟩, so the probability that ε_m^f is measured is |⟨m|ψ_τ⟩|² = |⟨m|U(τ)|n⟩|². This may be interpreted as the conditional probability that a system starting in |n⟩ will be found in |m⟩ after time τ.

No heat has been exchanged with the environment, so any change in the energy of the system has to be attributed to the work performed by the external agent, which is

$$ W = \varepsilon_m^f - \varepsilon_n^i \tag{53} $$
where both ε_n^i and ε_m^f fluctuate and change during each realization of the experiment: ε_n^i is random due to thermal fluctuations, while ε_m^f is random due to quantum fluctuations. Hence W is a random variable by (53).

To get an expression for P(W), obtained by repeating the process several times, note that this is a two-step measurement process. From probability theory, if A, B are two events, the total probability p(A ∩ B) that both events occur is

$$ p(A \cap B) = p(A \mid B)\, p(B) $$
where p(B) is the probability that B occurs and p(A|B) is the conditional probability of A given that B has occurred. Here the probability that both events occur is |⟨m|U(τ)|n⟩|² p_n. Since we are interested in the work performed, we write

$$ P(W) = \sum_{n,m} p_n\, |\langle m | U(\tau) | n \rangle|^2\, \delta\big( W - (\varepsilon_m^f - \varepsilon_n^i) \big) \tag{55} $$
summing over all allowed events, weighted by their probabilities, with the terms arranged according to the values ε_m^f − ε_n^i. In most systems, there is a rather large number of allowed levels, and even more allowed differences ε_m^f − ε_n^i. It is more efficient to use the Fourier transform

$$ G(y) = \int e^{iyW}\, P(W)\, dW = \big\langle e^{iyW} \big\rangle $$
This has the inverse Fourier transform

$$ P(W) = \frac{1}{2\pi} \int e^{-iyW}\, G(y)\, dy $$
Using (55), we obtain that

$$ G(y) = \sum_{n,m} p_n\, |\langle m | U(\tau) | n \rangle|^2\, e^{iy(\varepsilon_m^f - \varepsilon_n^i)} $$
Hence, it may be concluded that

$$ G(y) = \mathrm{Tr}\big[ U^{\dagger}(\tau)\, e^{iyH_f}\, U(\tau)\, e^{-iyH_i}\, \rho \big] \tag{59} $$
This turns out to be somewhat easier to work with than P(W), and (59) plays a role similar to that of Z in equilibrium statistical mechanics. From G(y), the statistical moments of W can be found by expanding

$$ G(y) = \sum_{k=0}^{\infty} \frac{(iy)^k}{k!}\, \langle W^k \rangle $$
A quantum mechanical formula for the moments can be found as well. The average work is ⟨W⟩ = ⟨H_f⟩_τ − ⟨H_i⟩_0, where for any operator A we define ⟨A⟩_t = Tr[U(t) ρ U†(t) A] as the expectation value of A at time t. This follows from the fact that the state of the system at time t is ρ(t) = U(t) ρ U†(t). From the definition of G, setting y = iβ gives G(iβ) = ⟨e^{−βW}⟩. Substituting ρ from (38) into (59) yields

$$ G(i\beta) = \mathrm{Tr}\Big[ U^{\dagger}(\tau)\, e^{-\beta H_f}\, U(\tau)\, e^{\beta H_i}\, \frac{e^{-\beta H_i}}{Z_i} \Big] = \frac{\mathrm{Tr}\, e^{-\beta H_f}}{Z_i} = \frac{Z_f}{Z_i} \tag{61} $$
Using Z = e^{−βF}, (61) yields (50):

$$ \big\langle e^{-\beta W} \big\rangle = \frac{Z_f}{Z_i} = e^{-\beta(F_f - F_i)} = e^{-\beta \Delta F} $$
Nothing has been assumed about the speed of this process. Thus inequality (50) must hold for a process arbitrarily far from equilibrium.


4. Heat flow from environment approach

There is another somewhat different way in which the Jarzynski inequality can be generalized to quantum dynamics. In a classical system, the energy of the system can be continuously measured as well as the flow of heat and work. Continuous measurement is not possible in quantum mechanics without disrupting the dynamics of the system [20].

A more satisfactory approach is to realize that although work cannot be continuously measured, the heat flow from the environment can be. To this end, the total system is divided into a system of interest and a thermal bath. The ambient environment is large; it rapidly decoheres and remains at thermal equilibrium, uncorrelated and unentangled with the system. Consequently, we can measure the change in energy of the bath, and hence the heat Q, without disturbing the dynamics of the system. The open-system Jarzynski identity is expressed as

$$ \big\langle e^{-\beta(\Delta E - Q)} \big\rangle = e^{-\beta \Delta F} $$
For a system that has equilibrated with Hamiltonian H while interacting with a thermal bath at temperature T, the equilibrium density matrix is ρ = e^{−βH}/Z = e^{β(F−H)}, where β = 1/k_B T. The dynamics of an open quantum system are described by a quantum operation ρ′ = Sρ, a linear, trace-preserving, completely positive map of operators. Any such completely positive superoperator has an operator-sum representation

$$ S\rho = \sum_{\alpha} A_{\alpha}\, \rho\, A_{\alpha}^{\dagger} $$
Conversely, any such operator sum represents a completely positive superoperator. The operators A_α are often called Kraus operators. The superoperator is trace-preserving and conserves probability if Σ_α A_α† A_α = I. In the simplest case, the dynamics of an isolated quantum system are described by a single unitary Kraus operator A_1 = U.
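As a concrete instance of an operator-sum map (using the standard amplitude-damping channel as an example, which is not discussed in the text), trace preservation follows from the completeness relation Σ_α A_α† A_α = I:

```python
import numpy as np

g = 0.3                                  # decay probability (assumed value)
A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - g)]])
A1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])

# Completeness of the Kraus operators guarantees Tr(S rho) = Tr(rho).
assert np.allclose(A0.conj().T @ A0 + A1.conj().T @ A1, np.eye(2))

rho = np.array([[0.25, 0.1], [0.1, 0.75]])
rho_out = A0 @ rho @ A0.conj().T + A1 @ rho @ A1.conj().T
assert abs(np.trace(rho_out) - 1.0) < 1e-12
```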

The interest here is in the dynamics of a quantum system governed by a time-dependent Hamiltonian weakly coupled to an extended, thermal environment. Let the total Hamiltonian be

$$ H = H_S(t) \otimes I_B + I_S \otimes H_B + \varepsilon H_{int} $$
where I_S and I_B are the system and bath identity operators, H_S(t) the system Hamiltonian, H_B the bath Hamiltonian, and H_int the bath-system interaction, with ε a small parameter. Assume initially the system and environment are uncorrelated, so that the initial combined state is ρ_S ⊗ ρ_eq^B, where ρ_eq^B is the thermal equilibrium density matrix of the bath.

By following the unitary dynamics of the combined total system for a finite time and measuring the final state of the environment, a quantum operator description of the system dynamics can also be obtained:

$$ S\rho = \mathrm{Tr}_B\big[ U \big( \rho \otimes \rho_{eq}^B \big) U^{\dagger} \big] \tag{66} $$
Here Uis the unitary evolution operator of the total system


and Tr_B is the partial trace over the bath degrees of freedom, ε_i^B are the energy eigenvalues and |b_i⟩ the orthonormal energy eigenvectors of the bath, and Z_B is the bath partition function. Assume the bath energy states are nondegenerate. Then (66) implies the Kraus operators for this dynamics are

$$ A_{ij} = \sqrt{ \frac{e^{-\beta \varepsilon_i^B}}{Z_B} }\; \langle b_j | U | b_i \rangle $$
Suppose the environment is large, with a characteristic relaxation time short compared with the time between bath-system interactions, and the system-bath coupling ε is small. Then the environment remains near thermal equilibrium, unentangled and uncorrelated with the system. The system dynamics of each consecutive time interval can be described by a superoperator derived as in (66), and these can be chained together to form a quantum Markov chain:

$$ \rho(t) = S_t \cdots S_2\, S_1\, \rho(0) $$
The Hermitian operator of a von Neumann-type measurement can be broken up into a set of eigenvalues λ_σ and orthonormal projection operators π_σ such that H = Σ_σ λ_σ π_σ. More generally, the measurement operators of a positive operator-valued measurement need not be projectors or orthonormal. The probability of observing the a-th outcome is

$$ p_a = \mathrm{Tr}\big[ A_a\, \rho\, A_a^{\dagger} \big] $$
The state of the system after this interaction is

$$ \rho_a = \frac{ A_a\, \rho\, A_a^{\dagger} }{ p_a } $$
The result of the measurement can be represented by using a Hermitian map superoperator A:

$$ \mathcal{A}\rho = \sum_a a\, A_a\, \rho\, A_a^{\dagger} $$
An operator-valued sum maps Hermitian operators into Hermitian operators:


In the other direction, any Hermitian map has an operator-valued sum representation. Hermitian maps provide a particularly concise and convenient representation of sequential measurements and correlation functions. For example, suppose the Hermitian map A represents a measurement at time 0, C a different measurement at time t, and the quantum operation S_t the system evolution between the measurements. The expectation value of a single measurement is

$$ \langle a \rangle = \mathrm{Tr}\big[ \mathcal{A}\rho \big] $$
The correlation function ⟨b_t a_0⟩ can be expressed as

$$ \langle b_t\, a_0 \rangle = \mathrm{Tr}\big[ \mathcal{C}\, S_t\, \mathcal{A}\, \rho \big] $$
It may be shown that just as every Hermitian operator represents some measurement on the Hilbert space of pure states, every Hermitian map can be associated with some measurement on the Liouville space of mixed states.

A Hermitian map representation of heat flow can now be constructed under assumptions that the bath and system Hamiltonian are constant during the measurement and the bath-system coupling is very small. A measurement on the total system is constructed, and thus the bath degrees of freedom are projected out. This leaves a Hermitian map superoperator that acts on the system density matrix alone. Let us describe the measurement process and mathematical formulation together.

Begin with a composite system which consists of the bath, initially in thermal equilibrium, weakly coupled to the system:

$$ \rho \otimes \rho_{eq}^B = \rho \otimes \frac{e^{-\beta H_B}}{Z_B} $$
Measure the initial energy eigenstate of the bath so based on (76):


Now allow the system to evolve together with the bath for some time:


Finally, measure the final energy eigenstate of the bath:


Taking the trace over the bath degrees of freedom produces the final normalized system density matrix, while the trace over S gives the probability of observing the given initial and final bath eigenstates. Multiply by the Boltzmann-weighted heat, and sum over the initial and final bath states to obtain the desired average Boltzmann-weighted heat flow:


Replace the heat bath Hamiltonian using I_S ⊗ H_B = H − H_S(t) ⊗ I_B − εH_int. The part containing the total Hamiltonian commutes with the unitary dynamics and cancels, and the interaction Hamiltonian can be omitted in the small-coupling limit, giving


Collecting the terms acting on the bath and system separately and inserting the Kraus operators describing the reduced dynamics of the system, the result is


To summarize, it has been found that the average Boltzmann weighted heat flow is represented by


where Srepresents the reduced dynamics of the system. The Hermitian map superoperator Rtis given by


The paired Hermitian map superoperators act at the start and end of a time interval. They give a measure of the change in the energy of the system over that interval. This procedure does not disturb the system beyond that already incurred by coupling the system to the environment. The Jarzynski inequality now follows by applying this Hermitian map and quantum formalism. Discretize the experimental time into a series of discrete intervals labeled by an integer t.

The system Hamiltonian is fixed within each interval; it changes only in discrete jumps at the boundaries. The heat flow can be measured by wrapping the superoperator time evolution of each time interval S_t with the corresponding Hermitian map measurements, R_t^{-1} S_t R_t. In a similar fashion, the Boltzmann-weighted energy change of the system can be measured with ⟨e^{−βΔE}⟩ = Tr[R_τ S R_τ^{-1}]. The average Boltzmann-weighted work of a driven, dissipative quantum system can be expressed as


In (85), ρ_eq(t) is the system equilibrium density matrix when the system Hamiltonian is H_S(t).

This product actually telescopes due to the structure of the energy change Hermitian map (84) and the equilibrium density matrix (65). This leaves only the free energy difference between the initial and final equilibrium ensembles, as can be seen by writing out the first few terms


In the limit in which the time intervals are reduced to zero, the inequality can be expressed in the continuous Lindblad form:


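Before turning to the spin model, the way this construction collapses to e^(−βΔF) can be made concrete in the simplest setting. The sketch below uses the two-point energy measurement scheme for purely unitary driving (a special case, not the dissipative superoperator construction above); the Hamiltonians and the unitary are arbitrary, and the identity ⟨e^(−βW)⟩ = Z_f/Z_0 = e^(−βΔF) follows from unitarity alone:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.7

def rand_herm(n):
    # arbitrary random Hermitian "Hamiltonian"
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

H0, Hf = rand_herm(2), rand_herm(2)
# arbitrary unitary evolution operator (Q factor of a random complex matrix)
U = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))[0]

E0, V0 = np.linalg.eigh(H0)   # initial energy eigenbasis
Ef, Vf = np.linalg.eigh(Hf)   # final energy eigenbasis
Z0, Zf = np.exp(-beta * E0).sum(), np.exp(-beta * Ef).sum()

# two-point measurement: thermal weight p_n, transition probability |<m|U|n>|^2
avg = 0.0
for n in range(2):
    for m in range(2):
        p_n = np.exp(-beta * E0[n]) / Z0
        t_mn = abs(Vf[:, m].conj() @ U @ V0[:, n]) ** 2
        avg += p_n * t_mn * np.exp(-beta * (Ef[m] - E0[n]))

print(np.isclose(avg, Zf / Z0))  # <e^{-beta W}> = Z_f/Z_0 = e^{-beta dF}
```

Because Σ_n |⟨m|U|n⟩|² = 1 for any unitary U, the double sum collapses to Z_f/Z_0 regardless of how strongly the system is driven.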
5. A model quantum spin system

A magnetic resonance experiment can be used to illustrate how these ideas are applied in practice. A sample of noninteracting spin-1/2 particles is placed in a strong magnetic field B0 directed along the z direction. Denote by σj, j = x, y, z, the usual Pauli matrices and by 1 the 2 × 2 identity matrix. It is assumed the motion of the system is unitary. The spin is then governed by the Hamiltonian:


In units where ħ is one, B0 represents the characteristic precession frequency of the spin. Since H0 is diagonal in the |±⟩ basis that diagonalizes σz, the matrix exponential and partition function are given by


If we set σ̃ to be the equilibrium magnetization of the system, σ̃ = ⟨σz⟩_th, the thermal density matrix is


and σ̃ corresponds to the paramagnetic response of a spin-1/2 particle.
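As a minimal numerical sketch (assuming the conventional sign H0 = −(B0/2)σz in units with ħ = k_B = 1, so that |+⟩ is the ground state; the sign convention is an assumption not fixed by the text), the partition function Z = 2 cosh(B0/2T) quoted below and the magnetization σ̃ = tanh(B0/2T) can be checked directly:

```python
import numpy as np

B0, T = 1.0, 0.5           # field strength and temperature (hbar = k_B = 1)
sz = np.diag([1.0, -1.0])  # Pauli sigma_z in the |+>, |-> basis

H0 = -0.5 * B0 * sz                      # assumed sign convention: |+> is the ground state
rho = np.diag(np.exp(-np.diag(H0) / T))  # unnormalized Gibbs state exp(-H0/T)
Z = rho.trace()
rho /= Z

sigma_tilde = np.trace(rho @ sz)         # equilibrium magnetization <sigma_z>_th

print(np.isclose(Z, 2 * np.cosh(B0 / (2 * T))))        # Z = 2 cosh(B0/2T): True
print(np.isclose(sigma_tilde, np.tanh(B0 / (2 * T))))  # sigma~ = tanh(B0/2T): True
```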

The work segment is implemented by introducing a very small field of amplitude B rotating in the xy plane with frequency ω. The work parameter is governed by the field


Typically, B0 is of the order of several tesla while B ∼ 0.01 T, so we may approximate B ≪ B0. The total Hamiltonian is the combination


The oscillating field plays the role of a perturbation which, although weak, may induce transitions between the up and down spin states. These transitions are most frequent at the resonance condition ω = B0, where the driving frequency matches the natural oscillation frequency.

The time evolution operator U(t) is now calculated. To do this, define a new operator V(t) by means of the equation


Substituting (43) into the evolution equation for U(t), i∂tU = H(t)U with U(0) = 1, it is found that V(t) obeys the Schrödinger equation:


It is found that V(t) satisfies


Using the commutation relations of the Pauli matrices and the fact that


it is found that the terms in the evolution equation can be simplified


By means of these results, it remains to simplify


Inserting these results into (95), we arrive at


This means V(t) evolves according to a time-independent Hamiltonian, so the solution can be written as


and the full time evolution operator is given by


Since the operators σy and σz do not commute, the exponentials in (101) cannot be combined using the usual addition rule.

To express (100) otherwise, suppose M is an arbitrary matrix such that M² = 1. When α is an arbitrary parameter, power series expansion of e^(iαM) yields


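For example, taking the exponent to be iαM (the factor of i is an assumption consistent with the Pauli rotations used below), the resulting identity e^(iαM) = cos α 1 + i sin α M for M² = 1 can be checked by truncating the power series:

```python
import math
import numpy as np

alpha = 0.73                          # arbitrary parameter
M = np.array([[0, -1j], [1j, 0]])     # sigma_y, which satisfies M^2 = 1

# truncated power series of exp(i*alpha*M)
lhs = sum((1j * alpha) ** n / math.factorial(n) * np.linalg.matrix_power(M, n)
          for n in range(30))
# closed form implied by M^2 = 1
rhs = np.cos(alpha) * np.eye(2) + 1j * np.sin(alpha) * M

print(np.allclose(lhs, rhs))  # True
```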
Now H1 can be put in the equivalent form


Since σi² = 1, it follows that


Consequently, (100) can be used to prove that V(t) is given by




the evolution operator is then given by


The functions u(t) and v(t) in (107) are given as


Apart from a phase factor, the final result depends only on Ω and ϑ, and these in turn depend on B0, B, and ω through (108). To understand the physics of U(t) a bit better, suppose the system is initially in the pure state |+⟩. The probability that it will be found in the state |−⟩ after time t is


This expression represents the probability that a transition occurs. Since the unitarity condition U†U = 1 implies that |u|² + |v|² = 1, we conclude that |u|² is the probability that no transition occurs. Note |v| is proportional to sin ϑ, which gives a physical meaning to ϑ: it sets the amplitude of the transition probability, which reaches a maximum at resonance, ω = B0, where Ω = B, so u and v simplify to


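The resonance behavior can be verified by direct numerical integration. The rotating-field Hamiltonian H(t) = (B0/2)σz + (B/2)(σx cos ωt + σy sin ωt) and the transition probability |v(t)|² = (B²/Ω²) sin²(Ωt/2) with Ω² = (ω − B0)² + B² used below are assumed forms, chosen to be consistent with the surviving statements (Ω = B at ω = B0, |v| ∝ sin ϑ), not transcriptions of the missing equations:

```python
import numpy as np

B0, B, w = 1.0, 0.2, 0.8   # static field, drive amplitude, drive frequency (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def H(t):
    # assumed rotating-field Hamiltonian
    return 0.5 * B0 * sz + 0.5 * B * (np.cos(w * t) * sx + np.sin(w * t) * sy)

# piecewise-constant (midpoint) propagation of i dpsi/dt = H(t) psi
t_final, steps = 40.0, 10000
dt = t_final / steps
psi = np.array([1.0, 0.0], dtype=complex)   # start in |+>
for k in range(steps):
    vals, vecs = np.linalg.eigh(H((k + 0.5) * dt))
    psi = vecs @ (np.exp(-1j * vals * dt) * (vecs.conj().T @ psi))

Omega = np.hypot(w - B0, B)                 # Rabi frequency
P_rabi = (B / Omega) ** 2 * np.sin(0.5 * Omega * t_final) ** 2
P_num = abs(psi[1]) ** 2                    # numerical probability of |->

print(np.isclose(P_num, P_rabi, atol=1e-4))  # True
```

The same run also confirms unitarity, |u|² + |v|² = 1, since the norm of psi is preserved to machine precision.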
Now that U(t) is known, the evolution of any observable A can be calculated:


If A is replaced by σz in (111), we obtain


Substituting |v|², this takes the form


Consider the average work. Suppose B ≪ B0, so the unperturbed Hamiltonian H0 can be used instead of the full Hamiltonian H(t) when expectation values of quantities related to the energy are calculated. Let us determine the energy of the system at any t by taking the operator A to be H0:


The average work at time t is simply the difference between the energy at time t and at t = 0. Since v(0) = 0, this difference is


since sin²ϑ = 1 − cos²ϑ = B²/Ω². The average work oscillates indefinitely with frequency Ω/2, a consequence of the fact that the time evolution is unitary. The amplitude of the average work is proportional to the initial magnetization σ̃ and to the ratio B²/Ω², which is a Lorentzian function of the driving frequency.

The equilibrium free energy is F = −T log Z, where Z = 2 cosh(B0/2T). The free energy of the initial state at t = 0 and of the final state at any arbitrary time is the same, yielding


This is a consequence of the fact that B ≪ B0. According to the relation between the work and the free energy difference, it should be expected that


Given the matrices for U(t) and ρ that have been determined so far, the function G can be computed:


Recalling definition (42) for σ̃ in the second term of (118) gives


Substituting (119) into (118), we can conclude


This is the Jarzynski inequality, since it is the case that ΔF = 0 here. The statistical moments of the work can be obtained by expanding the expression for G in a power series


From (121), the first and second moments can be obtained; for example


As a consequence, the variance of the work can be determined


A final calculation that may be considered is the full distribution of work, P(W). Now P(W) is the inverse Fourier transform of G(y):


Using the Fourier integral form of the delta function, (124) can be written as


Work, taken as a random variable, can take the three values W = 0, +B0, −B0, where B0 is the energy spacing between the up and down states. The event W = +B0 corresponds to the case where the spin was originally up and then reversed, an up-down transition; the energy change is B0/2 − (−B0/2) = B0. Similarly, W = −B0 is the opposite flip, and W = 0 is the case with no spin flip.

The second law would have us think that ⟨W⟩ ≥ 0, but a down-up flip has W = −B0, so P(W = −B0) is the probability of observing a local violation of the second law. However, since P(W = ±B0) is proportional to 1 ± σ̃, up-down flips are always more likely than down-up flips. This ensures that ⟨W⟩ ≥ 0, so violations of the second law are always exceptions to the rule and never dominate.
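This argument can be checked numerically. In the sketch below, p_flip stands for the flip probability |v(t)|² at the chosen time (its particular value is irrelevant to the identity), σ̃ = tanh(B0/2T) is an assumed form of the equilibrium magnetization, and the three-outcome distribution with P(W = ±B0) ∝ 1 ± σ̃ follows the text; the check confirms ⟨e^(−W/T)⟩ = 1 (the Jarzynski relation with ΔF = 0) together with ⟨W⟩ = σ̃ B0 p_flip ≥ 0:

```python
import numpy as np

B0, T = 1.0, 0.5
sigma = np.tanh(B0 / (2 * T))  # equilibrium magnetization sigma~ (assumed form)
p_flip = 0.3                   # |v(t)|^2 at the chosen time; any value in [0,1] works

# three-outcome work distribution: W in {+B0, -B0, 0}
P_up_down = 0.5 * (1 + sigma) * p_flip  # spin up, flips down: W = +B0
P_down_up = 0.5 * (1 - sigma) * p_flip  # spin down, flips up: W = -B0
P_none = 1 - p_flip                     # no flip: W = 0

W = np.array([B0, -B0, 0.0])
P = np.array([P_up_down, P_down_up, P_none])

jarzynski = np.sum(P * np.exp(-W / T))  # <exp(-W/T)>, should equal exp(-dF/T) = 1
mean_W = np.sum(P * W)                  # <W> = sigma~ B0 |v|^2 >= 0

print(np.isclose(jarzynski, 1.0))               # True
print(np.isclose(mean_W, sigma * B0 * p_flip))  # True
print(mean_W >= 0)                              # True
```

Changing p_flip, B0, or T leaves the first identity untouched, which is the content of the Jarzynski relation: rare second-law-violating trajectories (W = −B0) are exactly compensated in the exponential average.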

The work performed by an external magnetic field on a single spin-1/2 particle has been studied so far; the energy differences mentioned correspond to the work. For noninteracting particles, energy is additive, so the total work W performed during a given process is the sum of the works performed on the individual particles, W = W1 + ⋯ + WN. Since the spins are all independent and energy is an extensive variable, the average total work is N times the single-particle average work from (115).


6. Conclusions

We have tried to give a comprehensible introduction to this frontier area lying between thermodynamics and quantum mechanics. There are many other areas of investigation at present which have had interesting repercussions for this area as well. There is a growing awareness that entanglement facilitates reaching equilibrium [21, 22, 23]. It is worth mentioning that the ideas of einselection and entanglement with the environment can lead to a time-independent equilibrium in an individual quantum system, so that statistical mechanics can be done without ensembles. However, a great deal of work remains to be done in these blossoming areas, and it is left for possible future expositions.

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
