Classical formulations of the entropy concept and its interpretation are introduced in order to motivate the definition of the quantum von Neumann entropy. Some general properties of quantum entropy are developed, such as the conditions under which it increases. The current state of the area that combines thermodynamics and quantum mechanics is reviewed; this interaction is critical for the development of nonequilibrium thermodynamics. The Jarzynski inequality is developed in two separate but related ways. The nature of irreversibility and its role in physics are considered as well. Finally, a specific quantum spin model is defined and studied in a way that illustrates many of the subjects that have appeared.
The laws of thermodynamics are fundamental to the present understanding of nature [1, 2]. It is not surprising then to find they have a very wide range of applications beyond their original scope, such as to gravitation. The analogy between properties of black holes and thermodynamics could be extended to a complete correspondence, since a black hole in free space had been shown to radiate thermally with a temperature $T = \kappa/2\pi$ in natural units, where $\kappa$ is the surface gravity. One should then be able to assign an entropy to a black hole given by $S_{BH} = A/4$, where $A$ is the surface area of the black hole. In the nineteenth century, the problem of reconciling time-asymmetric behavior with time-symmetric microscopic dynamics became a central issue in this area of physics. Lord Kelvin wrote about the subjection of physical phenomena to microscopic dynamical law: if the motion of every particle of matter in the universe were precisely reversed at any instant, the course of nature would be simply reversed for ever after. Physical processes, on the other hand, are irreversible, such as the conduction of heat and diffusion processes [6, 7]. It subsequently became apparent that not only is there no conflict between reversible microscopic laws and irreversible macroscopic behavior, but there are extremely strong reasons to expect the latter from the former. For example, there exists a great disparity between microscopic and macroscopic scales, and the events we observe in the macroworld are determined not only by the microscopic dynamics but also by the initial conditions or state of the system.
In the twentieth century, it became clear that the microworld is described by a different kind of physics, along with mathematical ideas that need not be taken into account in describing the macroworld. This is the subject of quantum mechanics. Even though the new quantum equations have symmetry properties similar to those of their classical counterparts, quantum mechanics also reveals numerous phenomena that can contribute at this level to the problems mentioned above. These physical phenomena, which play various roles, include quantum entanglement, decoherence, and the theory of measurement.
The purpose of this work is to study the subject of entropy as it applies to quantum mechanics [8, 9]. The definition adopted must be relevant to very small systems at the atomic and molecular level, and its relationship to entropies known at other scales can then be examined. It is also important to relate this information from this new area of physics to the older and more established theories of thermodynamics and statistical physics [10, 11, 12, 13, 14, 15]. To summarize, many good reasons dictate that the arrow of time is specified by the direction of increase of the Boltzmann entropy, that is, the von Neumann macroscopic entropy. To relate the quantum Boltzmann approach to irreversibility to measurement theory, the measuring apparatus must be included as a part of the closed quantum mechanical system.
2. Entropy and quantum mechanics
Boltzmann’s great insight was to connect the second law of thermodynamics with phase space volume. This he did by making the observation that for a dilute gas, $\log |\Gamma_M|$ is proportional, up to terms negligible compared to the system size, to the thermodynamic entropy of Clausius, where $|\Gamma_M|$ is the phase space volume associated with the macrostate $M$. He then extended his insight about the relation between thermodynamic entropy and $\log |\Gamma_M|$ to all macroscopic systems, no matter what their composition. This gave a macroscopic definition of the observationally measurable entropy of equilibrium macroscopic systems. With this connection established, he generalized it to define an entropy for systems not in equilibrium.
Clearly, the macrostate $M$ is determined by $X$, a point in phase space, and there are many such points, in fact a continuum, which correspond to the same $M$. Let $\Gamma_M$ then be the region in phase space consisting of all microstates $X$ corresponding to a given macrostate $M$. Boltzmann associated with each microstate $X$ of a macroscopic system a number $S_B(X)$, which depends only on $M(X)$, such that up to multiplicative and additive constants it is given by

$$S_B(X) = k_B \log |\Gamma_{M(X)}|. \tag{1}$$
This is called the Boltzmann entropy of a classical system. The constant $k_B \approx 1.38 \times 10^{-16}$ erg/K is called Boltzmann’s constant, and if temperature is measured in ergs instead of Kelvin, it may be set to one. Boltzmann argued that due to large differences in the sizes of the regions $\Gamma_M$, $S_B$ will typically increase in a way which explains and describes the evolution of physical systems towards equilibrium.
The approach of Gibbs, which concentrates primarily on probability distributions or ensembles, is conceptually different from Boltzmann’s. The Gibbs entropy of a macroscopic system is defined for an ensemble density $\rho(X)$ to be

$$S_G(\rho) = -k_B \int \rho(X) \log \rho(X)\, dX. \tag{2}$$
In (2), $\rho(X)\,dX$ is the probability for the microscopic state of the system to be found in the phase space volume element $dX$. Suppose $\rho$ is taken to be the generalized microcanonical ensemble associated with a macrostate $M$,

$$\rho_M(X) = \frac{1}{|\Gamma_M|}, \qquad X \in \Gamma_M,$$

and zero otherwise; then $S_G(\rho_M) = k_B \log |\Gamma_M| = S_B(M)$.
The probability density for the system in the equilibrium macrostate is the same as that for the microcanonical ensemble, and equivalent to the canonical or grand canonical ensemble when the system is of macroscopic size. The time development of $S_B$ and $S_G$ subsequent to some initial time when $\rho = \rho_M$ is very different, unless $M$ is the equilibrium macrostate, in which case there is no further systematic change in $M$ or $\rho$. In fact, $S_G$ never changes in time as long as $X$ evolves according to the Hamiltonian evolution, so $\rho$ evolves according to the Liouville equation. Then $S_G$ does not give any indication that the system is evolving towards equilibrium. Thus the relevant entropy for understanding the time evolution of macroscopic systems is $S_B$ and not $S_G$.
From the standpoint of mathematics, these expressions for classical entropies can be unified under the heading of the Boltzmann-Shannon-Gibbs entropy. A very general form of entropy which includes those mentioned can be defined in a mathematically rigorous way. To do so, let $(\Omega, \mu)$ be a finite measure space and $p$ a probability measure that is absolutely continuous with respect to $\mu$, so its Radon-Nikodym derivative $dp/d\mu$ exists. The generalized entropy is defined to be

$$S(p) = -\int_{\Omega} \frac{dp}{d\mu} \log \frac{dp}{d\mu}\, d\mu \tag{5}$$

when the integrand is integrable.
This includes the classical Boltzmann-Gibbs entropy when $\Omega$ is phase space, $\mu$ is the Liouville measure, and $dp/d\mu = \rho$.
It also includes the Shannon entropy appearing in information theory, in which $\Omega$ is a countable set and $\mu$ is the counting measure. In this case, (5) gives the entropy to be

$$S = -\sum_i p_i \log p_i.$$
In attempting to translate these considerations to the quantum domain, it is immediately clear that a perfect analogy does not exist.
Although the situation is in many ways similar in quantum mechanics, it is not identical. The reversible incompressible flow in phase space is replaced by the unitary evolution of wave functions in Hilbert space, and velocity reversal of $X$ by complex conjugation of the wave function. The analogue of the Gibbs entropy (2) of an ensemble is the von Neumann entropy of a density matrix $\rho$:

$$S_{vN}(\rho) = -k_B\, \mathrm{Tr}\, \rho \log \rho.$$
This formula was given by von Neumann. It generalizes the classical expression of Boltzmann and Gibbs to the realm of quantum mechanics. The density matrix with maximal entropy, at fixed mean energy, is the Gibbs state. The range of $S_{vN}$ is the whole extended half-line $[0, +\infty]$, so to every number $s$ with $0 \le s \le \infty$, there is a density matrix $\rho$ such that $S_{vN}(\rho) = s$. Like the classical $S_G$, this entropy does not alter in time for an isolated system evolving under Schrödinger evolution. It has value zero whenever $\rho$ represents a pure state. Similar to $S_G$, it is not the most appropriate quantity for describing the time-asymmetric behavior of isolated macroscopic systems. The Szilard engine composed of a single atom is an example in which the entropy of a quantum object is made use of. von Neumann also discusses the macroscopic entropy of a system. A macrostate is described by assigning values of a set of commuting macroscopic observable operators, such as particle number, energy, and so forth, to each of the cells that make up the system. Corresponding to each set of eigenvalues $\nu$ there is an orthogonal decomposition of the system’s Hilbert space into linear subspaces $\Gamma_\nu$ in which the observables take the values $\nu$. Let $P_\nu$ be the projection onto $\Gamma_\nu$. von Neumann then defines the macroscopic entropy of a system with density matrix $\rho$ as

$$S_{mac}(\rho) = k_B \sum_\nu p_\nu \log(\dim \Gamma_\nu) - k_B \sum_\nu p_\nu \log p_\nu. \tag{10}$$
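As a concrete check of these statements, the von Neumann entropy can be computed directly from the eigenvalues of a density matrix. The following sketch (in Python with NumPy; the function names are ours, introduced for illustration) verifies that a pure state has zero entropy and that the maximally mixed state on a $d$-level system attains the value $\log d$:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 * ln 0 = 0
    return -np.sum(evals * np.log(evals))

# A pure state has zero entropy.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
pure = np.outer(psi, psi.conj())
print(von_neumann_entropy(pure))          # ~ 0

# The maximally mixed state on a d-level system has entropy ln d.
d = 4
mixed = np.eye(d) / d
print(von_neumann_entropy(mixed))         # ~ ln 4
```

Working with eigenvalues avoids computing a matrix logarithm explicitly, since $-\mathrm{Tr}\,\rho\log\rho = -\sum_i \lambda_i \log \lambda_i$ for the spectrum $\{\lambda_i\}$ of $\rho$.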
Here, $p_\nu = \mathrm{Tr}(P_\nu \rho)$ is the probability of finding the system with density matrix $\rho$ in the macrostate $\nu$, and $\dim \Gamma_\nu$ is the dimension of $\Gamma_\nu$. An analogous definition is made for a system which is represented by a wave function $\psi$; simply replace $\rho$ by $|\psi\rangle\langle\psi|$. In fact, $\psi$ just corresponds to a particular pure density matrix.
von Neumann justifies (10) by noting that

$$S_{mac}(\rho) = S_{vN}(\hat{\rho}), \qquad \hat{\rho} = \sum_\nu p_\nu \frac{P_\nu}{\dim \Gamma_\nu},$$

and $\hat{\rho}$ is macroscopically indistinguishable from $\rho$.
A correspondence can be made between the partitioning of classical phase space and the decomposition of Hilbert space, and the natural quantum analogue of Boltzmann’s definition of $S_B$ in (1) is

$$S_B(\nu) = k_B \log(\dim \Gamma_\nu), \tag{14}$$

where $\dim \Gamma_\nu$ is the dimension of $\Gamma_\nu$. With definition (14), the first term on the right of (10) is just what would be stated for the expected value of the entropy of a classical system whose macrostate we were unsure of. The second term of (10) will be negligible compared to the first for a macroscopic system, classical or quantum, going to zero when divided by the number of particles.
Note the difference that in the classical case, the state of the system is described by a point $X$ lying in some $\Gamma_\nu$, so the system is always in one of the macrostates. For a quantum system described by $\rho$ or $\psi$, this is not the case. There is no analogue of (1) for a general $\rho$ or $\psi$. Even when the system is in a definite macrostate at an initial time, only the classical system will be in a unique macrostate at a later time. The quantum system will in general evolve into a superposition of different macrostates, as is the case in the Schrödinger cat paradox, in which a wave function corresponding to a particular macrostate evolves into a linear combination of wave functions associated with very different macrostates. The classical limit is obtained by a prescription in which the density matrix is identified with a probability distribution in phase space and the trace is replaced by integration over phase space. The superposition principle excludes partitions of the Hilbert space: an orthogonal decomposition is all that is relevant.
2.1 Properties of entropy functions
Entropy functions have a number of characteristic properties which should be briefly described in the quantum case. The set of observables will be the bounded, self-adjoint operators with discrete spectra in a Hilbert space. The set of normal states can be taken to be the density operators or positive operators of trace one.
The entropy functional satisfies the following inequalities. Let $\rho = \sum_i \lambda_i \rho_i$ with $\lambda_i > 0$ and $\sum_i \lambda_i = 1$. Then $S$ has the concavity property:

$$S\Big(\sum_i \lambda_i \rho_i\Big) \ge \sum_i \lambda_i S(\rho_i),$$

with equality if all the $\rho_i$ are equal.
Subadditivity holds for a density matrix $\rho_{12}$ on a tensor product Hilbert space with marginals $\rho_1$ and $\rho_2$, with equality if and only if $\rho_{12} = \rho_1 \otimes \rho_2$:

$$S(\rho_{12}) \le S(\rho_1) + S(\rho_2).$$

The relative or conditional entropy of a state $\sigma$ with respect to a state $\rho$ is defined to be

$$S(\sigma|\rho) = \mathrm{Tr}\, \sigma (\log \sigma - \log \rho). \tag{18}$$
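Subadditivity can be illustrated numerically. The sketch below (Python/NumPy; all helper names are ours, for illustration) draws a random two-qubit density matrix, computes its marginals by partial trace, and confirms $S(\rho_{12}) \le S(\rho_1) + S(\rho_2)$:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy from the spectrum of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))

def random_density_matrix(d, seed=0):
    """A random full-rank density matrix: normalize A A^dagger."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

# Random two-qubit state and its marginals via partial trace.
rho12 = random_density_matrix(4)
r = rho12.reshape(2, 2, 2, 2)          # indices [i1, i2, j1, j2]
rho1 = np.einsum('ikjk->ij', r)        # trace out subsystem 2
rho2 = np.einsum('kikj->ij', r)        # trace out subsystem 1

# Subadditivity: S(rho12) <= S(rho1) + S(rho2).
print(entropy(rho12) <= entropy(rho1) + entropy(rho2))   # True
```

Equality would require the product form $\rho_1 \otimes \rho_2$, which a generic random state does not have.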
The formal expression $\mathrm{Tr}\,\sigma(\log \sigma - \log \rho)$ will be interpreted as follows. If $\sigma, \rho$ are positive trace-class operators with complete orthonormal sets of eigenstates, a resolution of the identity can be inserted so that the trace becomes a double sum over the squared overlaps of the two eigenbases. Convexity of the function $x \log x$ ensures the terms of the final sum are nonnegative. In order that $S(\sigma|\rho) < \infty$, it is necessary that $P_\rho \ge P_\sigma$, where $P_\rho$ is the support projection of $\rho$, so the kernel of $\rho$ is contained in the kernel of $\sigma$. From the definition, $S(\sigma|\rho) \ge 0$, with equality if $\sigma = \rho$. If $\rho \ge \lambda \sigma$ for some $\lambda > 0$, then $S(\sigma|\rho) \le -\log \lambda$, by operator monotonicity of the logarithm. If $\Phi$ is a unital, trace-preserving completely positive map, then

$$S(\Phi(\rho)) \ge S(\rho).$$

This is to say that $\Phi$ is entropy-increasing.
The concept of irreversibility is clearly going to be relevant to the subject at hand, so some thoughts related to it will be given periodically in what follows. A possible way to account for irreversibility in a closed system in nature is by the various types of coarse-graining. There are also strong reasons to suggest the arrow of time is provided by the direction of increase of the quantum form of the Boltzmann entropy. The measuring apparatus should be included as part of the closed quantum mechanical system in order to relate the quantum Boltzmann approach to irreversibility to the concept of a measurement. Consider a composite system consisting of a macroscopic system $S$ coupled to a measuring instrument $I$, where $I$ is a large but finite $N$-particle system. A set of coarse-grained, mutually commuting extrinsic variables is provided, whose eigenspaces correspond to the pointer positions of $I$. von Neumann’s picture of the measurement process is basic to the approach, according to which the coupling of $S$ to $I$ leads to the following effects. A pure state of $S$ described by a linear combination $\sum_n c_n \psi_n$ of its orthonormal energy eigenstates is converted into a statistical mixture of these states, for which $|c_n|^2$ is the probability of finding the system in state $\psi_n$. It also sends a certain set of classical or intercommuting, macroscopic variables of $I$ to values indicated by pointer readings that indicate which of the states is realized.
There is an amplification process in the coupling, whereby different microstates of $S$ give rise to macroscopically different states of $I$. If $I$ is designed to have readings which are in one-to-one correspondence with the eigenstates of $S$, it may be assumed the index $\nu$ of its readings runs from $0$ to $N$. Denote the projection operator for the subspace corresponding to reading $\nu$ by $P_\nu$; then
and each element of the associated abelian subalgebra takes the form of a linear combination of the $P_\nu$ with scalar coefficients
Define the projection operators:
Suppose $M$ is measured on the system, initially in a state of the composite system described by a density matrix $\rho$. The value $M_\nu$ is obtained with probability $p_\nu = \mathrm{Tr}(P_\nu \rho)$. After the measurement, the state of the composite system is accounted for by the density matrix:

$$\hat{\rho} = \sum_\nu P_\nu \rho P_\nu.$$

This is a mixture of states, in each of which $M$ has a definite value.
The transformation $\rho \to \hat{\rho}$ may be viewed as a loss of the information contained in the non-diagonal terms $P_\nu \rho P_\mu$ with $\nu \ne \mu$. When a sequence of measurements is carried out and a time evolution is permitted to occur between measurements, one is led to assign to a sequence of events the probability distribution:
where the sum is over the set of histories, and the projectors satisfy (22) with $P_\nu$ replaced by the time-evolved projectors. Let us define
The following definition can now be stated. A history is said to decohere if and only if
A state is called decoherent with respect to the set of if and only if
This implies decoherence in the sense defined above. In contrast to infinite systems, where there is no need to refer to a choice of projections, decoherent mixed states over the macroscopic observables can be described by relations between the density matrix and the projectors. They would be of a form in which the density matrix commutes with the projections, satisfying
The relative or conditional entropy between two states was defined in (18), and it plays a crucial role. It is worth stating a few of its properties, as some are necessary for the theorem that follows:
When is a completely positive map, or embedding
The last two inequalities are known as joint concavity and monotonicity of the relative entropy. The following result may be thought of as a quantum version of the second law.
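Positivity and monotonicity of the relative entropy can be checked numerically on random full-rank states. The sketch below (Python/NumPy; function names are ours, for illustration) verifies the Klein inequality $S(\sigma|\rho) \ge 0$ and monotonicity under the partial trace, which is a special case of monotonicity under completely positive maps:

```python
import numpy as np

def logm_psd(rho):
    """Matrix logarithm of a positive-definite Hermitian matrix."""
    w, V = np.linalg.eigh(rho)
    return V @ np.diag(np.log(w)) @ V.conj().T

def rel_entropy(sigma, rho):
    """S(sigma | rho) = Tr sigma (ln sigma - ln rho)."""
    return np.trace(sigma @ (logm_psd(sigma) - logm_psd(rho))).real

def random_state(d, seed):
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    r = A @ A.conj().T
    return r / np.trace(r).real

def ptrace2(rho):
    """Trace out the second qubit of a two-qubit density matrix."""
    return np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

sigma, rho = random_state(4, 0), random_state(4, 1)
print(rel_entropy(sigma, rho) >= 0)                     # True (Klein inequality)
print(rel_entropy(ptrace2(sigma), ptrace2(rho))
      <= rel_entropy(sigma, rho) + 1e-12)               # True (monotonicity)
```

The partial trace here plays the role of the completely positive map $\Phi$ in the monotonicity property.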
Theorem: Suppose the initial density matrix is decoherent at zero time (29) with respect to the chosen projections and has finite entropy
and it is not an equilibrium state of the system. Let $\rho(t)$, for $t > 0$, be any subsequent state of the system, possibly an equilibrium state. Then for an automorphic, unitary time evolution of the system between these times,
with equality if and only if condition (e) holds.
Proof: Set $\hat{\rho}(0) = \sum_\nu P_\nu \rho(0) P_\nu$, so $\hat{\rho}(0)$ is obtained from $\rho(0)$ by means of a completely positive map. It follows that
The first equality uses the cyclic property of the trace and the definition of $\hat{\rho}(0)$. The second equality uses decoherence of $\rho(0)$, and the next inequality is a consequence of (34). The evolution is unitary and hence preserves entropy, which gives the last equality. This implies the entropy cannot decrease, and the equality condition (e) follows from (32).
Of course, entropy growth as in the theorem is not necessarily monotonic in the time variable. For this reason, it is usual to refer to fixed initial and final states. For thermal systems, a natural choice of the final state is the equilibrium state of the system. It is the case in thermodynamics that irreversibility is manifested as a monotonic increase in the entropy. Thermodynamic entropy, it is thought, is related to the entropy of the states defined in both classical and quantum theory. Under an automorphic time evolution, the entropy is conserved. One application of an environment is to account for an increase. A type of coarse-graining becomes necessary, together with the right conditions on the initial state, to account for the arrow of time. In quantum mechanics, the coarse-graining seems to be necessary; it may be thought of as a restriction of the algebra of observables and can also be interpreted as leaving out unobservable quantum correlations. This may, for example, correspond to decoherence effects important in quantum measurements. Competing effects arise: correlations becoming unobservable may lead to an entropy increase, while a decrease in entropy might be due to nonautomorphic processes. Although both effects lead to irreversibility, they are not cooperative but rather contrary to one another. The observation that the second law does hold implies these nonautomorphic events must be rare on time scales relevant to thermodynamics.
3. Quantum mechanics and nonequilibrium thermodynamics
Some aspects of equilibrium thermodynamics are examined by considering an isothermal process. Since it is a quasistatic process, it may be decomposed into a sequence of infinitesimal processes. Assume initially the system has a Hamiltonian $H(\lambda)$, depending on an external work parameter $\lambda$, in thermal equilibrium at a temperature $T$. Boltzmann’s constant is set to one. The state is given by the Gibbs density operator $\rho = e^{-H/T}/Z$, where $Z = \mathrm{Tr}\, e^{-H/T}$. This expression can also be written in terms of the energy eigenvalues $E_n$ and eigenvectors $|n\rangle$ of $H$. The probability of finding the system in state $|n\rangle$ is

$$p_n = \frac{e^{-E_n/T}}{Z}. \tag{38}$$
The average internal energy of the system is given as

$$U = \sum_n p_n E_n.$$
When the parameter $\lambda$ is changed to $\lambda + d\lambda$, both $E_n$ and $p_n$, as well as $U$, change, so that

$$dU = \sum_n p_n\, dE_n + \sum_n E_n\, dp_n.$$
Each instantaneous infinitesimal process can be broken down into two parts: the first is the work performed; the second is the heat transferred as the system relaxes to equilibrium. This breakup motivates us to define

$$\delta W = \sum_n p_n\, dE_n, \qquad \delta Q = \sum_n E_n\, dp_n,$$

so $dU = \delta W + \delta Q$, and $\delta$ is used to indicate that heat and work are not exact differentials. The free energy of the system is defined to be $F = -T \log Z$, so $dF = \sum_n p_n\, dE_n$ at fixed temperature, which means

$$\delta W = dF.$$
By integrating over the infinitesimal segments, we find the total work in the isothermal quasistatic process is

$$W = \Delta F.$$
Inverting Eq. (38) for $p_n$, we can solve for $E_n$:

$$E_n = -T(\log p_n + \log Z).$$

Substituting into the relation for $\delta Q$, we get two terms, one proportional to $\log p_n$ and the other to $\log Z$. The term with $\log Z$, since the $p_n$ satisfy $\sum_n dp_n = 0$, is zero.
It remains to study

$$\delta Q = -T \sum_n \log p_n\, dp_n.$$
By the chain rule, using $\sum_n dp_n = 0$,

$$\sum_n \log p_n\, dp_n = d\Big(\sum_n p_n \log p_n\Big).$$
So $\delta Q$ is not a function of the state, but is related to the variation of something that is. Define the entropy as usual from (9), $S = -\sum_n p_n \log p_n$, and arrive at

$$\delta Q = T\, dS.$$
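The chain of identities in this derivation is easy to verify numerically for a two-level system: $F = -T\log Z$, $S = -\sum_n p_n \log p_n$, and the standard relations $S = -\partial F/\partial T$ and $U = F + TS$. A minimal sketch (Python; the specific energy gap and temperature are arbitrary choices for illustration):

```python
import numpy as np

# Two-level system with energies 0 and eps, temperature T (k_B = 1).
eps, T = 1.0, 0.5

def Z(T):
    return 1.0 + np.exp(-eps / T)

def F(T):
    return -T * np.log(Z(T))

def U(T):
    # U = sum_n p_n E_n
    p1 = np.exp(-eps / T) / Z(T)
    return p1 * eps

def S_gibbs(T):
    # S = -sum_n p_n ln p_n
    p1 = np.exp(-eps / T) / Z(T)
    p0 = 1.0 - p1
    return -(p0 * np.log(p0) + p1 * np.log(p1))

# Check S = -dF/dT by central finite differences, and U = F + T S.
h = 1e-6
S_from_F = -(F(T + h) - F(T - h)) / (2 * h)
print(abs(S_from_F - S_gibbs(T)) < 1e-6)            # True
print(abs(U(T) - (F(T) + T * S_gibbs(T))) < 1e-10)  # True
```

The second check, $U = F + TS$, is exactly the Legendre-transform structure underlying $\delta Q = T\,dS$ and $\delta W = dF$.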
This relation only holds for infinitesimal quasistatic processes. For finite and irreversible processes, there may be additional contributions to the entropy change. This framework has been quite successful at describing many different types of physical systems [17, 18, 19].
A deep insight into the properties of nonequilibrium thermodynamics has come recently, achieved by regarding work as a random variable. For example, consider a process in which a piston is used to compress a gas in a cylinder. Due to the nature of the gas and its chaotic motion, each time the piston is pressed, the gas molecules exert a back reaction with a different force. This means the work needed to achieve a given compression changes each time the process is carried out.
Usually, knowledge of nonequilibrium processes is restricted to inequalities. Jarzynski was able to show, by interpreting work as a random variable, that an equality can be obtained, even for a process performed arbitrarily far from equilibrium.
Suppose the system is always prepared in the same state initially. A process is carried out and the total work $W$ performed is measured. Repeating this many times, a probability distribution $P(W)$ for the work can be constructed. An average for $W$ can be computed using $P(W)$ as

$$\langle W \rangle = \int W\, P(W)\, dW.$$
Jarzynski showed that the statistical average of $e^{-\beta W}$ satisfies

$$\langle e^{-\beta W} \rangle = e^{-\beta \Delta F}, \tag{50}$$

where $\beta = 1/T$. It holds for a process performed arbitrarily far from equilibrium. The inequality $\langle W \rangle \ge \Delta F$ is contained in (50) and can be realized by applying Jensen’s inequality, which states that $\langle e^x \rangle \ge e^{\langle x \rangle}$.
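The content of (50) and the Jensen step can be illustrated with a toy work distribution. For a Gaussian $P(W)$ with mean $\mu$ and variance $\sigma^2$ (a common near-equilibrium approximation), the equality (50) fixes $\Delta F = \mu - \beta\sigma^2/2 \le \langle W \rangle$. The sketch below (Python/NumPy; the parameter values are arbitrary choices for illustration) estimates $\Delta F$ from sampled work values:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0

# Gaussian work distribution with mean mu and variance sigma^2.
# The Jarzynski equality then gives Delta F = mu - beta * sigma^2 / 2.
mu, sigma = 2.0, 1.0
dF_exact = mu - beta * sigma**2 / 2

# Estimate Delta F from the exponential average over sampled work values.
W = rng.normal(mu, sigma, size=2_000_000)
dF_estimate = -np.log(np.mean(np.exp(-beta * W))) / beta

print(dF_exact)                             # 1.5
print(abs(dF_estimate - dF_exact) < 0.02)   # True
print(np.mean(W) >= dF_estimate)            # True (Jensen's inequality)
```

Note that the exponential average is dominated by rare small-$W$ realizations, which is why many samples are needed for convergence.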
In macroscopic systems, individual measurements are usually very close to the average by the law of large numbers. For microscopic systems, this is usually not true. In fact, individual realizations of $W$ may be smaller than $\Delta F$. These cases would be local violations of the second law, but for large systems they become extremely rare. If the distribution $P(W)$ is known, the probability of a local violation of the second law is

$$P(W < \Delta F) = \int_{-\infty}^{\Delta F} P(W)\, dW.$$
Deriving (50) requires detailed knowledge of the system’s dynamics, be it classical, quantum, unitary, or otherwise.
Consider now unitary quantum dynamics. Initially, the system has Hamiltonian $H_0$ and is in thermal equilibrium with a bath at temperature $T$. The initial state of the system is the Gibbs thermal density matrix (38). Let $E_n^0$ and $|n\rangle$ denote the initial eigenvalues and eigenvectors of $H_0$; a measurement of the initial energy yields $E_n^0$ with probability $p_n^0 = e^{-\beta E_n^0}/Z_0$.
Immediately after this measurement, the Hamiltonian changes from $H_0$ to $H(t)$ according to a prescribed protocol. If it is assumed the contact with the bath is very weak during this process, the state of the system evolves according to

$$|\psi(t)\rangle = U(t)|n\rangle,$$

where $U(t)$ is the unitary evolution operator which satisfies Schrödinger’s equation, $i\,\partial_t U = H(t)\,U$, $U(0) = 1$.
The Hamiltonian at the end of the process is $H_\tau$, with energy levels $E_m^\tau$ and eigenvectors $|m\rangle$, so the probability that $E_m^\tau$ is measured is $p_{m|n} = |\langle m|U(\tau)|n\rangle|^2$. This may be interpreted as the conditional probability that a system initially in $|n\rangle$ will be found in $|m\rangle$ after time $\tau$.
No heat has been exchanged with the environment, so any change in the energy has to be attributed to the work performed by the external agent and is

$$W = E_m^\tau - E_n^0, \tag{53}$$

where both $E_n^0$ and $E_m^\tau$ fluctuate and change during each realization of the experiment. The first is random due to thermal fluctuations, and the second is random due to quantum fluctuations in the final measurement, making $W$ a random variable by (53).
To get an expression for $P(W)$, obtained by repeating the process many times, note that this is a two-step measurement process. From probability theory, if $A, B$ are two events, the total probability that both events occur is

$$P(A \text{ and } B) = P(A)\,P(B|A),$$

where $P(A)$ is the probability that $A$ occurs and $P(B|A)$ is the conditional probability that $B$ occurs given that $A$ has occurred. The probability of both events here is $p_n^0\, p_{m|n}$. Since we are interested in the work performed, we write

$$P(W) = \sum_{n,m} p_n^0\, p_{m|n}\, \delta\big(W - (E_m^\tau - E_n^0)\big). \tag{55}$$
This sums over all allowed events, weighted by their probabilities, and arranges the terms according to the values $W = E_m^\tau - E_n^0$. In most systems, there is a rather large number of allowed levels, and even more allowed differences $E_m^\tau - E_n^0$. It is more efficient to use the Fourier transform of $P(W)$:

$$G(r) = \int e^{irW} P(W)\, dW.$$

This has the inverse Fourier transform

$$P(W) = \frac{1}{2\pi} \int e^{-irW} G(r)\, dr.$$
Using (55), we obtain that

$$G(r) = \sum_{n,m} p_n^0\, |\langle m|U(\tau)|n\rangle|^2\, e^{ir(E_m^\tau - E_n^0)} = \mathrm{Tr}\big[U^\dagger(\tau)\, e^{irH_\tau}\, U(\tau)\, e^{-irH_0}\, \rho_0\big].$$

Hence, it may be concluded that

$$G(r) = \big\langle e^{irH_\tau^H} e^{-irH_0} \big\rangle, \tag{59}$$

where $H_\tau^H = U^\dagger(\tau)\, H_\tau\, U(\tau)$ is the final Hamiltonian in the Heisenberg picture.
This turns out to be somewhat easier to work with than $P(W)$, and (59) plays a similar role as the partition function does in equilibrium statistical mechanics. From $G(r)$, the statistical moments of $W$ can be found by expanding in powers of $r$:

$$\langle W^k \rangle = \frac{1}{i^k} \frac{\partial^k G}{\partial r^k}\bigg|_{r=0}.$$
A quantum mechanical formula for the moments can be found as well. The average work is $\langle W \rangle = \mathrm{Tr}(H_\tau^H \rho_0) - \mathrm{Tr}(H_0 \rho_0)$, where for any operator $A$, the quantity $\mathrm{Tr}\big(U^\dagger(\tau)\, A\, U(\tau)\, \rho_0\big)$ is the expectation value of $A$ at time $\tau$. This follows from the fact that the state of the system at time $\tau$ is $U(\tau)\rho_0 U^\dagger(\tau)$. From the definition of $F$, it ought to be the case that $\langle W \rangle \ge \Delta F$. Moreover, setting $r = i\beta$ in (38) and (59) yields

$$\langle e^{-\beta W} \rangle = \frac{Z_\tau}{Z_0} = e^{-\beta \Delta F}.$$
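The identity $\langle e^{-\beta W}\rangle = Z_\tau/Z_0$ in this two-point-measurement scheme holds for any unitary $U$, and this can be confirmed numerically. The sketch below (Python/NumPy; the random Hamiltonians and unitary are arbitrary stand-ins for an actual protocol) builds the transition probabilities $p_{m|n} = |\langle m|U|n\rangle|^2$ and evaluates the average directly:

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 0.7

def random_hermitian(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def random_unitary(d):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q

d = 3
H0, Ht = random_hermitian(d), random_hermitian(d)   # initial/final Hamiltonians
U = random_unitary(d)                               # protocol's unitary evolution

E0, V0 = np.linalg.eigh(H0)   # initial eigenvalues / eigenvectors
Et, Vt = np.linalg.eigh(Ht)   # final eigenvalues / eigenvectors

Z0, Zt = np.sum(np.exp(-beta * E0)), np.sum(np.exp(-beta * Et))
p0 = np.exp(-beta * E0) / Z0                 # initial thermal occupations
P = np.abs(Vt.conj().T @ U @ V0) ** 2        # P[m, n] = |<m|U|n>|^2

# Two-point-measurement average of exp(-beta W), with W = E_m^t - E_n^0:
avg = sum(p0[n] * P[m, n] * np.exp(-beta * (Et[m] - E0[n]))
          for n in range(d) for m in range(d))
print(np.isclose(avg, Zt / Z0))   # True: <exp(-beta W)> = Z_t / Z_0
```

The check works because $\sum_n |\langle m|U|n\rangle|^2 = 1$ for each $m$ (unitarity), which collapses the double sum to $Z_\tau/Z_0$ independently of the protocol.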
Nothing has been assumed about the speed of this process. Thus inequality (50) must hold for a process arbitrarily far from equilibrium.
4. Heat flow from environment approach
There is another, somewhat different way in which the Jarzynski inequality can be generalized to quantum dynamics. In a classical system, the energy of the system can be continuously measured, as well as the flow of heat and work. Continuous measurement is not possible in quantum mechanics without disrupting the dynamics of the system.
A more satisfactory approach is to realize that although work cannot be continuously measured, the heat flow from the environment can be. To this end, the total system is divided into a system of interest and a thermal bath. The environment is large, rapidly decoheres, and remains at thermal equilibrium, uncorrelated and unentangled with the system. Consequently, we can measure the change in energy of the bath without disturbing the dynamics of the system. The open-system Jarzynski identity is expressed as

$$\big\langle e^{-\beta(\Delta E - Q)} \big\rangle = e^{-\beta \Delta F},$$

where $\Delta E$ is the change in the energy of the system and $Q$ is the heat flowing into the system from the environment, so that the work is $W = \Delta E - Q$.
For a system that has equilibrated with Hamiltonian $H$, interacting with a thermal bath at temperature $T$, the equilibrium density matrix is $\rho^{eq} = e^{-\beta H}/Z$, where $Z = \mathrm{Tr}\, e^{-\beta H}$. The dynamics of an open quantum system is described by a quantum operation, a linear, trace-preserving, completely positive map of operators. Any such completely positive superoperator has an operator-sum representation

$$\rho' = \sum_\alpha A_\alpha\, \rho\, A_\alpha^\dagger.$$

Conversely, any operator sum represents a completely positive superoperator. The operators $A_\alpha$ are often called Kraus operators. The superoperator is trace-preserving and conserves probability if $\sum_\alpha A_\alpha^\dagger A_\alpha = I$. In the simplest case, the dynamics of an isolated quantum system is described by a single unitary operator.
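A standard concrete example of an operator-sum representation is the amplitude-damping channel. The sketch below (Python/NumPy; the channel choice is ours, for illustration) checks the trace-preservation condition and applies the channel to a density matrix:

```python
import numpy as np

# Amplitude-damping channel as a standard example of an operator-sum
# (Kraus) representation; gamma is the decay probability.
gamma = 0.3
A0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
A1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
kraus = [A0, A1]

def apply_channel(kraus, rho):
    """rho' = sum_a A_a rho A_a^dagger."""
    return sum(A @ rho @ A.conj().T for A in kraus)

# Trace preservation: sum_a A_a^dagger A_a = I.
completeness = sum(A.conj().T @ A for A in kraus)
print(np.allclose(completeness, np.eye(2)))   # True

# The channel maps density matrices to density matrices.
rho = np.array([[0.25, 0.1], [0.1, 0.75]])
rho_out = apply_channel(kraus, rho)
print(np.isclose(np.trace(rho_out), 1.0))     # True
```

A single unitary corresponds to the special case of one Kraus operator, for which the completeness condition reduces to $U^\dagger U = I$.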
The interest here is in the dynamics of a quantum system governed by a time-dependent Hamiltonian, weakly coupled to an extended thermal environment. Let the total Hamiltonian be

$$H = H_S(t) \otimes I_B + I_S \otimes H_B + \epsilon H_{int},$$

where $I_S$ and $I_B$ are system and bath identity operators, $H_S(t)$ is the system Hamiltonian, $H_B$ is the bath Hamiltonian, and $H_{int}$ is the bath-system interaction, with $\epsilon$ a small parameter. Assume initially the system and environment are uncorrelated, such that the initial combined state is $\rho \otimes \rho_B^{eq}$, where $\rho_B^{eq}$ is the thermal equilibrium density matrix of the bath.
By following the unitary dynamics of the combined total system for a finite time and measuring the final state of the environment, a quantum operator description of the system dynamics can also be obtained:
Here $U$ is the unitary evolution operator of the total system
and $\mathrm{Tr}_B$ is the partial trace over the bath degrees of freedom, $E_i^B$ are the energy eigenvalues and $|b_i\rangle$ the orthonormal energy eigenvectors of the bath, and $Z_B$ is the bath partition function. Assume the bath energy states are nondegenerate. Then (66) implies the Kraus operators for this dynamics are

$$A_{ij} = \sqrt{\frac{e^{-\beta E_i^B}}{Z_B}}\, \langle b_j | U | b_i \rangle.$$
Suppose the environment is large, with a characteristic relaxation time short compared with the timescale of the bath-system interactions, and the system-bath coupling is small. Then the environment remains near thermal equilibrium, unentangled and uncorrelated with the system. The system dynamics in each consecutive time interval can be described by a superoperator derived as in (66), and these can be chained together to form a quantum Markov chain:
The Hermitian operator of a von Neumann-type measurement can be broken up into a set of eigenvalues $a$ and orthonormal projection operators $\Pi_a$ such that $A = \sum_a a\, \Pi_a$. More generally, the measured operators of a positive operator-valued measurement need not be projectors or orthonormal. The probability of observing the $a$-th outcome is
The state of the system after this interaction is
The result of the measurement can be represented by using a Hermitian map superoperator :
An operator sum maps Hermitian operators into Hermitian operators:
In the other direction, any Hermitian map has an operator-sum representation. Hermitian maps provide a particularly concise and convenient representation of sequential measurements and correlation functions. For example, suppose one Hermitian map represents a measurement at an initial time, another represents a different measurement at a later time, and a quantum operation represents the system evolution between the measurements. The expectation value of a single measurement is
The correlation function can be expressed as
It may be shown that just as every Hermitian operator represents some measurement on the Hilbert space of pure states, every Hermitian map can be associated with some measurement on the Liouville space of mixed states.
A Hermitian map representation of heat flow can now be constructed, under the assumptions that the bath and system Hamiltonians are constant during the measurement and the bath-system coupling is very small. A measurement on the total system is constructed, and then the bath degrees of freedom are projected out. This leaves a Hermitian map superoperator that acts on the system density matrix alone. The measurement process and its mathematical formulation are described together below.
Begin with a composite system which consists of the bath, initially in thermal equilibrium weakly coupled to the system:
Measure the initial energy eigenstate of the bath so based on (76):
Now allow the system to evolve together with the bath for some time:
Finally, measure the final energy eigenstate of the bath:
Taking the trace over the bath degrees of freedom produces the final normalized system density matrix, where the trace itself gives the probability of observing the given initial and final bath eigenstates. Multiply by the Boltzmann-weighted heat, and sum over the initial and final bath states to obtain the desired average Boltzmann-weighted heat flow:
Replace the heat bath Hamiltonian using the total Hamiltonian, $H_B = H - H_S - \epsilon H_{int}$. The part containing the total Hamiltonian commutes with the unitary dynamics and cancels. The interaction Hamiltonian can be omitted in the small-coupling limit, giving
Collecting the terms acting on the bath and system separately, and replacing them with the Kraus operators describing the reduced dynamics of the system, the result is
To summarize, it has been found that the average Boltzmann weighted heat flow is represented by
where represents the reduced dynamics of the system. The Hermitian map superoperator is given by
The paired Hermitian map superoperators act at the start and end of a time interval. They give a measure of the change in the energy of the system over that interval. This procedure does not disturb the system beyond the disturbance already incurred by coupling the system to the environment. The Jarzynski inequality now follows by applying this Hermitian map formalism. Discretize the experimental time into a series of discrete intervals labeled by an integer.
The system Hamiltonian is fixed within each interval and changes only in discrete jumps at the boundaries. The heat flow can be measured by wrapping the superoperator time evolution of each time interval with the corresponding Hermitian map measurements. In a similar fashion, the Boltzmann-weighted energy change of the system can be measured. The average Boltzmann-weighted work of a driven, dissipative quantum system can then be expressed as
In (85), the initial state is the system equilibrium density matrix corresponding to the initial system Hamiltonian.
This product telescopes, due to the structure of the energy-change Hermitian map (84) and the equilibrium density matrix (65). This leaves only the free energy difference between the initial and final equilibrium ensembles, as can be seen by writing out the first few terms:
In the limit in which the time intervals are reduced to zero, the inequality can be expressed in the continuous Lindblad form:
5. A model quantum spin system
A magnetic resonance experiment can be used to illustrate how these ideas can be applied in practice. A sample of noninteracting spin-$\tfrac{1}{2}$ particles is placed in a strong magnetic field $B_0$ directed along the $z$ direction. Denote by $\sigma_x, \sigma_y, \sigma_z$ the usual Pauli matrices and by $1$ the identity matrix. It is assumed the motion of the system is unitary. Then the spin is governed by the Hamiltonian:
In units where $\hbar$ is one, $\omega_0$ represents the characteristic precession frequency of the spin. Since the Hamiltonian is diagonal in the basis that diagonalizes $\sigma_z$, the matrix exponential and partition function are given by
If we set $M$ to be the equilibrium magnetization of the system, the thermal density matrix is
and corresponds to the paramagnetic response of a spin-$\tfrac{1}{2}$ particle.
The work segment is implemented by introducing a very small field of amplitude $\omega_1$ rotating in the $xy$ plane with frequency $\omega$. The work parameter is governed by the field
Typically the rotating field is much weaker than the static one, ω₁ ≪ ω₀, so it may be treated as a small perturbation. The total Hamiltonian is the combination
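One standard reconstruction of this combination, assuming the convention H₀ = −(ω₀/2)σ_z and a circularly polarized drive (the symbols are assumptions, not the original equation), is:

```latex
H(t) = -\frac{\omega_0}{2}\,\sigma_z
     + \frac{\omega_1}{2}\bigl(\sigma_x\cos\omega t - \sigma_y\sin\omega t\bigr).
```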
The oscillating field plays the role of a perturbation which, although weak, may induce transitions between the up and down spin states. These transitions are most frequent at the resonance condition ω = ω₀, where the driving frequency matches the natural oscillation frequency.
The time evolution operator is now calculated. To do this, define a new operator by means of the equation
Substituting (43) into the evolution equation for the time evolution operator, it is found that the new operator satisfies a Schrödinger equation of its own:
Using the commutation relations of the Pauli matrices and the fact that
it is found that the terms in the evolution equation can be simplified
By means of these results, it remains to simplify
Substituting these results into (95), we arrive at
This means the new operator evolves according to a time-independent Hamiltonian, so the solution can be written as
and the full time evolution operator is given by
Since the operators in the two exponents do not commute, the exponentials in (101) cannot be combined using the usual addition rule.
To express (100) in another form, suppose A is a matrix whose square is the identity. When θ is an arbitrary parameter, a power series expansion of the exponential of iθA yields
Now the exponential can be put in the equivalent form
Since the operator in question squares to the identity, it follows that
Consequently, (100) can be used to show that the solution is given by
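The exponential identity used above can be checked numerically. The sketch below, written independently of the original equations, verifies that exp(iθA) = cos θ I + i sin θ A whenever A² = I, and that exponentials of non-commuting matrices do not combine by simple addition of exponents:

```python
import numpy as np

# Pauli matrices, the standard example of operators whose square is the identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expi(theta, A):
    # power-series identity: exp(i*theta*A) = cos(theta) I + i sin(theta) A, valid when A @ A = I
    return np.cos(theta) * np.eye(len(A)) + 1j * np.sin(theta) * A

def expm_herm(M):
    # exp(i*M) for Hermitian M, computed by eigendecomposition
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(np.exp(1j * vals)) @ vecs.conj().T

theta = 0.7
# the closed form agrees with the direct matrix exponential
ok_identity = np.allclose(expi(theta, sx), expm_herm(theta * sx))

# but exponents of non-commuting matrices do not simply add
prod = expi(0.3, sz) @ expi(0.4, sx)
summed = expm_herm(0.3 * sz + 0.4 * sx)
ok_addition_fails = not np.allclose(prod, summed)

print(ok_identity, ok_addition_fails)
```

Any operator built from the Pauli matrices with unit square behaves the same way, which is why the identity applies to the rotating-frame evolution operator.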
the evolution operator is then given by
The two functions appearing in (107) are given as
Apart from a phase factor, the final result depends only on these two functions, which in turn depend on the field parameters through (108). To understand the physics a bit better, suppose the system is initially in the spin-up state. The probability that it will be found in the spin-down state after time t is
This expression represents the probability that a transition will occur. Since the unitarity condition requires the two probabilities to sum to one, the complementary quantity is the probability that no transition occurs. Note that the transition probability is proportional to the square of the driving amplitude, which gives that amplitude a direct physical meaning. The transition probability reaches a maximum at resonance, ω = ω₀, where the two functions simplify to
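The transition probability can be checked by direct numerical integration of the Schrödinger equation. The sketch below assumes the conventions H(t) = −(ω₀/2)σ_z + (ω₁/2)(σ_x cos ωt − σ_y sin ωt) and the standard Rabi form P(t) = (ω₁²/Ω²) sin²(Ωt/2) with Ω² = (ω−ω₀)² + ω₁²; the parameter values are hypothetical:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

w0, w1, w = 1.0, 0.05, 0.9      # hypothetical precession, drive, and field frequencies

def step(H, dt):
    # exact propagator over dt for a traceless 2x2 Hermitian H = a . sigma,
    # using exp(-i r dt n.sigma) = cos(r dt) I - i sin(r dt) n.sigma
    a = np.array([H[0, 1].real, H[1, 0].imag, H[0, 0].real])
    r = np.linalg.norm(a)
    n_dot_s = (a[0] * sx + a[1] * sy + a[2] * sz) / r
    return np.cos(r * dt) * I2 - 1j * np.sin(r * dt) * n_dot_s

# integrate the Schrodinger equation starting from spin-up
t_final, n_steps = 50.0, 40000
dt = t_final / n_steps
psi = np.array([1.0, 0.0], dtype=complex)
for k in range(n_steps):
    t = (k + 0.5) * dt              # midpoint of the step
    H = -0.5 * w0 * sz + 0.5 * w1 * (np.cos(w * t) * sx - np.sin(w * t) * sy)
    psi = step(H, dt) @ psi

# compare with the Rabi formula
Omega = np.hypot(w - w0, w1)
P_formula = (w1 / Omega) ** 2 * np.sin(0.5 * Omega * t_final) ** 2
P_numeric = abs(psi[1]) ** 2
print(P_numeric, P_formula)
```

Because the drive here is circularly polarized, the rotating-frame Hamiltonian is exactly time-independent, so under these assumptions the formula is exact rather than a rotating-wave approximation.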
Now that the evolution operator is known, the evolution of any observable can be calculated
Making the corresponding replacement in (111), we obtain
After substitution, this takes the form
Consider the average work. Suppose the driving field is weak, so the unperturbed Hamiltonian can be used instead of the full Hamiltonian when expectation values of quantities related to the energy are calculated. The energy of the system at any time t is determined by taking the observable to be the unperturbed Hamiltonian:
The average work at time t is simply the difference between the energy at time t and the initial energy. This difference is
The average work oscillates indefinitely with this frequency. This is a consequence of the fact that the time evolution is unitary. The amplitude of the oscillation is proportional to the initial magnetization, and its dependence on the detuning is a Lorentzian function.
The equilibrium free energy is F = −β⁻¹ ln Z. The free energy of the initial state at t = 0 and of the final state at any arbitrary time is the same, yielding
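Under the same assumed convention H₀ = −(ω₀/2)σ_z with ℏ = 1, the free energies can be sketched as:

```latex
F = -\frac{1}{\beta}\ln Z
  = -\frac{1}{\beta}\ln\!\left[2\cosh\!\left(\frac{\beta\omega_0}{2}\right)\right],
\qquad
\Delta F = F_{\mathrm{final}} - F_{\mathrm{initial}} = 0,
\qquad
e^{-\beta\Delta F} = 1 .
```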
This is a consequence of the fact that the unperturbed Hamiltonian is unchanged. It should then be expected that
Given the matrices that have been determined so far, this average can be computed:
This is the Jarzynski inequality, which holds here with equality since the free energy difference vanishes. The statistical moments of the work can be obtained by expanding the expression into a power series
From (121), the first and second moments can be obtained; for example
As a consequence, the variance of the work can be determined
A final calculation that may be considered is the full distribution of the work. The distribution is the inverse Fourier transform of the characteristic function:
Using the Fourier integral form of the delta function, (124) can be written as
The work, taken as a random variable, can take three values, 0 and ±ω₀, where ω₀ is the energy spacing between the up and down states. The value W = ω₀ corresponds to the case where the spin was originally up and then reversed, an up-down transition; the energy change is ω₀. Similarly, W = −ω₀ is the opposite flip, and W = 0 is the case with no spin flip.
The second law would have us believe that W ≥ 0, but a down-up flip has W < 0, so its probability is the probability of observing a local violation of the second law. However, since the probability of each flip is weighted by the thermal occupation of the initial level, up-down flips are always more likely than down-up ones. This ensures that the average work is nonnegative, so violations of the second law are always exceptions to the rule and never dominate.
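The three-valued work distribution makes these averages easy to verify numerically. The sketch below assumes that up is the lower level (the convention H₀ = −(ω₀/2)σ_z) and uses hypothetical parameter values; it checks that ⟨e^{−βW}⟩ = 1, consistent with ΔF = 0, while ⟨W⟩ > 0:

```python
import numpy as np

beta, w0, P_flip = 0.8, 1.3, 0.2   # hypothetical inverse temperature,
                                   # level spacing, and flip probability

# thermal occupations for H0 = -(w0/2) sigma_z: "up" is the lower level
Z = 2.0 * np.cosh(0.5 * beta * w0)
p_up = np.exp(0.5 * beta * w0) / Z
p_down = np.exp(-0.5 * beta * w0) / Z

# three-valued work distribution: +w0 (up-down flip), -w0 (down-up flip), 0 (no flip)
works = np.array([w0, -w0, 0.0])
probs = np.array([p_up * P_flip, p_down * P_flip, 1.0 - P_flip])

avg_exp = float(np.sum(probs * np.exp(-beta * works)))  # <exp(-beta W)>
avg_W = float(np.sum(probs * works))                    # <W>
var_W = float(np.sum(probs * works**2)) - avg_W**2      # Var(W)
print(avg_exp, avg_W, var_W)
```

The identity ⟨e^{−βW}⟩ = 1 holds for any value of the flip probability, while the sign of ⟨W⟩ is fixed by p_up > p_down.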
The work performed by an external magnetic field on a single spin-1/2 particle has been studied so far; the energy differences mentioned correspond to the work. For noninteracting particles, energy is additive, so the total work performed during a given process is the sum of the works performed on the individual particles. Since the spins are all independent and energy is an extensive variable, the total average work is N times the average work from (115).
We have tried to give a comprehensible introduction to this frontier area lying between thermodynamics and quantum mechanics. There are many other lines of investigation at present which have had interesting repercussions for this area as well. There is a growing awareness that entanglement facilitates reaching equilibrium [21, 22, 23]. It is worth mentioning that the ideas of einselection and entanglement with the environment can lead to a time-independent equilibrium in an individual quantum system, so that statistical mechanics can be done without ensembles. However, much work remains to be done in these blossoming areas, and it is left for possible future expositions.