Path-Integral Description of Cooper Pairing

Introduction
Before we start applying path integration to treat Cooper pairing and superfluidity, it is a good idea to quickly review the concepts behind path integration. There are many textbooks providing plentiful details, such as Feynman's seminal text [1] and Kleinert's comprehensive compendium [2], and other works listed in the bibliography [3–5]. We will assume that the reader is already familiar with the basics of path-integral theory, so if the following paragraphs are not merely reminders to you, it is probably better to first consult these textbooks.
Quantum mechanics, according to the path-integral formalism, rests on two axioms. The first axiom, the superposition axiom, states that the amplitude of any process is a weighted sum of the amplitudes of all possible alternatives for the process to occur. These alternatives should be interpreted as the possibilities that cannot be distinguished by the experimental setup under consideration. For example, the amplitude for a particle to go from a starting point "A" to a final point "B" is a weighted sum of the amplitudes of all the paths that this particle can take to get to "B" from "A". The second axiom assigns to the weight the complex value exp{iS/ℏ}, where S is the action functional. In our example, each path x(t) that the particle can take to go from A to B gets a weight exp{iS[x(t)]/ℏ}, where the action S[x(t)] is the time integral of the Lagrangian along the path. There is a natural link with quantum-statistical mechanics: in the path-integral formalism, quantum-statistical averages are expressed as the same weighted averages, but now the weight is a real value exp{−S[x(τ)]/ℏ} and the path is taken in imaginary time τ = it.
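In compact notation (a schematic summary in standard textbook conventions, not one of the numbered equations of this chapter), the two axioms combine into the propagator

K(B, A) = ∫_{x(t_A)=x_A}^{x(t_B)=x_B} 𝒟x(t) exp{iS[x(t)]/ℏ},  with  S[x(t)] = ∫_{t_A}^{t_B} L(x, ẋ) dt,

while quantum-statistical averages follow from the same sum with the real weight exp{−S[x(τ)]/ℏ}, taken over paths in imaginary time τ = it.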
In the example of the above paragraph, we considered a particle which could take many different paths from A to B. However, the same axioms can be applied to fields. As an example we take a complex scalar field φ_{x,t}, where x and t denote position and time respectively. Let us mentally discretize space-time, and to make things easy, we assume there are only five moments in time and five places to sit. In this simple universe, the field φ_{x,t} is represented by a set of 25 complex numbers, i.e. an element of ℂ^25, where ℂ is the set of complex numbers. Summing over all possible realizations of the field corresponds to integrating over ℂ^25: a 25-fold integral over complex variables, or a 50-fold integral over real variables. Writing φ_{x,t} = u_{x,t} + iv_{x,t} with u and v real, the summation over all possible realizations of φ_{x,t} is written as

𝒟φ_{x,t} := ∏_{x=1}^{5} ∏_{t=1}^{5} du_{x,t} dv_{x,t}. (1)

The notation with calligraphic 𝒟 indicates the path-integral sum, and we keep this notation for actual continuous spacetime, which we may see as the limit of a finer and finer grid of spacetime points (our 5×5 grid is obviously very crude). Although this is, strictly speaking, no longer a sum over paths, it is still called a path integral because the description is based on the same axiomatic view as outlined in the previous paragraph.
Each particular realization of φ_{x,t} again gets assigned a weight, where now we need a functional of φ_{x,t} (or, in our example, a function of 25 complex variables). Again, we use the action functional

S[φ_{x,t}] = ∫ L(φ̄_{x,t}, φ_{x,t}) dt (2)

to construct the weight, where L is the Lagrangian of the field theory suitable for φ_{x,t}. A central quantity to calculate is the statistical partition sum

Z = ∫ 𝒟φ̄_{x,τ} 𝒟φ_{x,τ} exp{−S[φ_{x,τ}]/ℏ},

where τ = it indicates the imaginary times required for the quantum-statistical expression, running from τ = 0 to τ = ℏβ with β = 1/(k_B T) the inverse temperature. Bose gases in condensed matter are described by complex scalar fields like φ_{x,t}, and the path integral can basically only be solved analytically when the action functional is quadratic in form, i.e. when

S[φ_{x,τ}]/ℏ = ∫dx ∫dτ ∫dx′ ∫dτ′ φ̄_{x,τ} A(x, τ; x′, τ′) φ_{x′,τ′}.
In our simple universe, A would be a 25×25 matrix, and the path integral would reduce to a product of 25 complex Gaussian integrals, leading to

∫ 𝒟φ̄_{x,τ} 𝒟φ_{x,τ} exp{−S[φ_{x,τ}]/ℏ} ∝ 1/det(A).

The proportionality is written here because every integration also gives a (physically unimportant) factor π that can be absorbed in the integration measure, if needed.
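As a quick numerical sanity check (a standalone sketch, not part of the formal development), one can verify the single-mode complex Gaussian integral ∫du dv exp{−a(u² + v²)} = π/a, and hence the π^N/det(A) rule for a diagonal A:

```python
import numpy as np
from scipy.integrate import quad

# One complex mode phi = u + i v with weight exp(-a |phi|^2):
# the integral over du dv factorizes into two real Gaussians.
for a in (0.5, 2.0):
    one_d, _ = quad(lambda u: np.exp(-a * u**2), -np.inf, np.inf)
    assert abs(one_d**2 - np.pi / a) < 1e-8  # = pi / a per complex mode

# For a diagonal matrix A the modes decouple, so the full integral
# is the product of the single-mode results: pi^N / det(A).
A = np.diag([0.5, 2.0])
value = np.prod([np.pi / a for a in np.diag(A)])
assert abs(value - np.pi**2 / np.linalg.det(A)) < 1e-12
print("complex Gaussian checks passed")
```
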
Fermionic systems, such as the electrons in a metal or ultracold fermionic atoms in a magnetic trap, cannot be described by complex scalar fields: fermionic fields should anticommute [6]. If we axiomatically impose anticommutation onto scalar variables, we obtain Grassmann variables [7]. Since there is also a spin degree of freedom, the fermionic fields require a spin index σ as well as spacetime indices x, τ: ψ_{x,τ,σ}. As Grassmann variables anticommute, we have ψ²_{x,τ,σ} = 0. Integrals over Grassmann variables ("Berezin-Grassmann integrals" [8]) are defined by ∫dψ_{x,τ,σ} = 0 and ∫ψ_{x,τ,σ} dψ_{x,τ,σ} = 1. As was the case for bosonic fields, also for fermionic fields there is only one generic path integration that can be done analytically, namely that with a quadratic action. A quadratic action functional in Grassmann fields is written in general form as

S[ψ̄, ψ] = ∑_σ ∫dx ∫dτ ∫dx′ ∫dτ′ ψ̄_{x,τ,σ} A(x, τ; x′, τ′) ψ_{x′,τ′,σ},

where ψ̄_{x,τ,σ} and ψ_{x,τ,σ} are independent Grassmann variables (so in our example, we would need 50 pairs of Grassmann elements). The result is

∫ 𝒟ψ̄ 𝒟ψ exp{−S[ψ̄, ψ]} ∝ det(A),

with the determinant now appearing in the numerator rather than in the denominator. Despite having only analytic results for quadratic action functionals, the path-integral technique is nevertheless a very versatile tool and has in fact become the main tool to study field theory [9]. The trick usually consists in finding suitable transformations and approximations to bring the path integral into the same form as that with quadratic action functionals.
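The det(A) rule can be checked explicitly with a small computer-algebra sketch (hypothetical helper code, not from the chapter): represent Grassmann monomials as sorted tuples of generator labels with a sign from anticommutation, expand the exponential (the series terminates by nilpotency), and apply Berezin integration.

```python
import numpy as np

def gmul(f, g):
    """Product of two Grassmann-algebra elements.
    Elements are dicts {sorted tuple of generator ids: coefficient}."""
    out = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            mono = list(m1) + list(m2)
            if len(set(mono)) < len(mono):
                continue  # a generator squared vanishes
            sign, arr = 1, mono[:]
            for i in range(len(arr)):          # bubble sort, tracking
                for j in range(len(arr) - 1):  # the anticommutation sign
                    if arr[j] > arr[j + 1]:
                        arr[j], arr[j + 1] = arr[j + 1], arr[j]
                        sign = -sign
            key = tuple(arr)
            out[key] = out.get(key, 0.0) + sign * c1 * c2
    return out

def berezin(f, gen):
    """Berezin integral over one generator, acting from the left."""
    out = {}
    for m, c in f.items():
        if gen in m:
            p = m.index(gen)  # move gen to the front: sign (-1)^p
            key = m[:p] + m[p + 1:]
            out[key] = out.get(key, 0.0) + ((-1) ** p) * c
    return out

def gaussian_berezin(A):
    """Int prod_i dpsibar_i dpsi_i exp(-sum_ij psibar_i A_ij psi_j)."""
    N = A.shape[0]
    # generators: psi_i -> id i, psibar_i -> id N + i
    S = {}
    for i in range(N):
        for j in range(N):
            for m, c in gmul({(N + i,): -A[i, j]}, {(j,): 1.0}).items():
                S[m] = S.get(m, 0.0) + c
    # exp(S): the series terminates, since S^(N+1) = 0
    e, term = {(): 1.0}, {(): 1.0}
    for k in range(1, N + 1):
        term = {m: c / k for m, c in gmul(term, S).items()}
        for m, c in term.items():
            e[m] = e.get(m, 0.0) + c
    # measure: integrate psi_1, psibar_1, psi_2, psibar_2, ...
    for i in range(N):
        e = berezin(e, i)
        e = berezin(e, N + i)
    return e.get((), 0.0)

A = np.array([[1.3, 0.4], [-0.2, 2.1]])
assert abs(gaussian_berezin(A) - np.linalg.det(A)) < 1e-12
print("Grassmann Gaussian integral equals det(A)")
```

The expansion of the exponential is finite because every monomial is nilpotent, which is exactly why the fermionic Gaussian integral produces det(A) instead of 1/det(A).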

The action functional for the atomic Fermi gas
In the study of superconductivity, ultracold quantum gases offer a singular advantage over condensed matter systems in that their system parameters can be tuned experimentally with a high degree of precision and over a wide range. For example, the interaction strength between fermionic atoms is tunable by an external magnetic field. This field can be used to vary the scattering length across a Feshbach resonance, from a large negative to a large positive value. In the limit of large negative scattering lengths, a cloud of ultracold fermionic atoms will undergo Cooper pairing, and exhibit superfluidity when cooled below the critical temperature. On the other side of the resonance, at large positive scattering lengths, a molecular bound state gets admixed to the scattering state, and a Bose-Einstein condensate (BEC) of fermionic dimers can form. With the magnetically tunable s-wave scattering length, the entire crossover region between the Bardeen-Cooper-Schrieffer (BCS) superfluid and the molecular BEC can be investigated in a way that is thus far not possible in solids.
The number of atoms in each hyperfine state can also be tuned experimentally with high precision. Typically, fermionic atoms (such as ⁴⁰K or ⁶Li) are trapped in two different hyperfine states. These two hyperfine spin states provide the "spin-up" and "spin-down" partners that form the Cooper pairs. Unlike in metals, in quantum gases the individual amounts of the "spin-up" and "spin-down" components of the Fermi gas can be set independently (using evaporative cooling and Rabi oscillations). This makes it possible to investigate how Cooper pairing (and the ensuing superfluidity) is frustrated when the numbers of spin-up and spin-down atoms are unequal, i.e. in the so-called "(spin-)imbalanced Fermi gas". In a superconducting metal, the magnetic field that would be required to produce a substantial imbalance between spin-up and spin-down electrons is simply expelled by the Meissner effect, so that the imbalanced situation cannot be studied. The particular question of the effect of spin imbalance is of great current interest [10], and we will keep our treatment general enough to include this effect.
Finally, the geometry of the gas is adaptable. Counterpropagating laser beams can be used to make periodic potentials for the atoms, called "optical lattices". Imposing such a lattice in just one direction transforms the atomic cloud into a stack of pancake-shaped clouds, with a tunable tunneling amplitude between pancakes. The confinement in the out-of-plane direction can be made tight enough that the physics of each cloud becomes effectively two-dimensional. By imposing an optical lattice in more than one direction, all manner of crystals can be formed, and it becomes possible to engineer an experimental realization of the Bose and Fermi Hubbard models, for example. The joint tunability of the number of atoms, temperature, dimensionality, and interaction strength has in the past couple of decades turned quantum gases into powerful quantum simulators of condensed matter models [11]. In this respect, ultracold quantum gases are also being used to enlarge our knowledge of superconductivity, through the study of pairing and superfluidity.
A key aspect of ultracold quantum gases is that the interatomic interaction can be characterized by a single number, the s-wave scattering length mentioned above. In fact, the requirement for the label "ultracold" is that the typical wavelength associated with the atomic motion is much larger than the range of the interatomic potential, so that higher partial waves in the scattering process are frozen out. This aspect allows us to simplify the treatment of the interatomic interactions tremendously. Rather than using a complicated interatomic potential, we can use a contact pseudopotential V(r − r′) = g δ(r − r′), and adapt its strength g such that the model potential (or pseudopotential) has the same s-wave scattering length a_s as the true potential. Doing so requires some care [12,13], and using the Lippmann-Schwinger equation up to second order results in the following expression for the renormalized strength g of the contact interaction (in the three-dimensional case):

1/g = m/(4πℏ²a_s) − ∫ d³k/(2π)³ 1/(2ε_k),  with ε_k = ℏ²k²/(2m).

Moreover, for a Fermi gas there is an additional simplification: because the wave function must be antisymmetrized, the s-wave scattering amplitude between fermions with the same (hyperfine) spin is zero. This means that in an ultracold Fermi gas, only atoms in different spin states interact. This allows us to write the action functional for the Fermi gas of atoms with mass m as

S[ψ̄, ψ] = ∫₀^{ℏβ}dτ ∫dx { ∑_σ ψ̄_{x,τ,σ} [ ℏ∂_τ − ℏ²∇²/(2m) − μ_σ ] ψ_{x,τ,σ} + g ψ̄_{x,τ,↑} ψ̄_{x,τ,↓} ψ_{x,τ,↓} ψ_{x,τ,↑} }.

Here, β = 1/(k_B T) is again the inverse temperature. The quadratic term corresponds to the free-particle Lagrangian. The amounts of spin-up (σ = ↑) and spin-down (σ = ↓) atoms are set by the chemical potentials μ_↑ and μ_↓, respectively. It is the total particle density n_↑ + n_↓ that is used to define a Fermi wave vector: k_F = (3π²(n_↑ + n_↓))^{1/3} in three dimensions, and k_F = (2π(n_↑ + n_↓))^{1/2} in two dimensions. The quartic term corresponds to the interaction part, and we have used the contact pseudopotential.
To keep our notation simple and make integrals and variables dimensionless, we will introduce natural units of k_F, the Fermi wave number, and E_F = ℏ²k_F²/(2m), the Fermi energy. Also we use

x′ = k_F x,  τ′ = E_F τ/ℏ.

Since ∫ψ̄_{x,τ,σ}ψ_{x,τ,σ} dx was dimensionless to start with, it must be equal to the corresponding expression with the primed variables, so the fields are rescaled as ψ′_{x′,τ′,σ} = k_F^{−3/2} ψ_{x,τ,σ} in three dimensions. Introducing μ′_σ = μ_σ/E_F, β′ = βE_F and g′ = g k_F³/E_F, and using that E_F = (ℏk_F)²/(2m), we get

S[ψ̄′, ψ′] = ∫₀^{β′}dτ′ ∫dx′ { ∑_σ ψ̄′_{x′,τ′,σ} [ ∂_{τ′} − ∇′² − μ′_σ ] ψ′_{x′,τ′,σ} + g′ ψ̄′_{x′,τ′,↑} ψ̄′_{x′,τ′,↓} ψ′_{x′,τ′,↓} ψ′_{x′,τ′,↑} }.

Finally, note that, in our units (with k′ = k/k_F), the renormalized strength of the contact potential is

1/g′ = 1/(8π k_F a_s) − ∫ d³k′/(2π)³ 1/(2k′²).

Dropping the primes, we get the starting point of our treatment:

S[ψ̄, ψ] = ∫₀^{β}dτ ∫dx { ∑_σ ψ̄_{x,τ,σ} [ ∂_τ − ∇² − μ_σ ] ψ_{x,τ,σ} + g ψ̄_{x,τ,↑} ψ̄_{x,τ,↓} ψ_{x,τ,↓} ψ_{x,τ,↑} }.

This is, if you will, the statement of the problem that we want to investigate. Working with the operator version of quantum mechanics, you would state your starting Hamiltonian; in the path-integral formalism, you have to state your starting action functional. Ours describes a gas of fermionic particles, of two spin species (with the possibility of spin imbalance), with contact interactions between particles of different spins. The goal of our calculation is to obtain the free energy of the system (so that we have access to its thermodynamics), and to identify the superfluid phase (and its order parameter). This is done in five main steps, and in the following subsections we go through them in detail.
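To get a feel for these units, here is a minimal numerical sketch (the density value is an assumption, chosen as a typical trap-center density in ⁶Li experiments) computing k_F, E_F and the Fermi temperature T_F = E_F/k_B:

```python
import numpy as np

hbar = 1.054571817e-34               # J s
kB = 1.380649e-23                    # J / K
m_Li6 = 6.0151228 * 1.66053907e-27   # mass of a 6Li atom, kg
n = 1e19                             # assumed total density, m^-3

kF = (3 * np.pi**2 * n) ** (1 / 3)   # Fermi wave number, 1/m
EF = (hbar * kF) ** 2 / (2 * m_Li6)  # Fermi energy, J
TF = EF / kB                         # Fermi temperature, K

print(f"kF = {kF:.3e} 1/m, EF = {EF:.3e} J, TF = {TF * 1e6:.2f} microK")
```

The microkelvin-scale Fermi temperature is what makes "ultracold" a literal requirement for these experiments.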

Step 1: Hubbard-Stratonovich fields and the Nambu spinors
The Hubbard-Stratonovich transformation is based on the Gaussian integral formula for completing the square. Applied to our quartic interaction term, it reads (up to a normalization absorbed in the measure)

exp{ −g ∫dτ∫dx ψ̄_{x,τ,↑}ψ̄_{x,τ,↓}ψ_{x,τ,↓}ψ_{x,τ,↑} } = ∫𝒟Δ̄_{x,τ}𝒟Δ_{x,τ} exp{ ∫dτ∫dx [ |Δ_{x,τ}|²/g + Δ̄_{x,τ}ψ_{x,τ,↓}ψ_{x,τ,↑} + Δ_{x,τ}ψ̄_{x,τ,↑}ψ̄_{x,τ,↓} ] },

which can be checked by completing the square in Δ; the Gaussian integral converges for an attractive interaction, g < 0. In this formula, the auxiliary fields Δ̄_{x,τ}, Δ_{x,τ} do not have a spin index and are complex bosonic fields, not Grassmann variables. We interpret this bosonic field as the field of the fermion pairs, as illustrated in figure (1).
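The completing-the-square mechanism is easy to check numerically in its scalar prototype, exp{b²/(4a)} = √(a/π) ∫dy exp{−ay² + by} (a toy illustration with arbitrarily chosen numbers, not the field-theoretic identity itself):

```python
import numpy as np
from scipy.integrate import quad

a, b = 1.7, 0.9  # arbitrary test values, a > 0 for convergence
lhs = np.exp(b**2 / (4 * a))
rhs, _ = quad(lambda y: np.sqrt(a / np.pi) * np.exp(-a * y**2 + b * y),
              -np.inf, np.inf)
assert abs(lhs - rhs) < 1e-8
print("scalar Hubbard-Stratonovich identity verified")
```

In the field theory, the role of b is played by the pair operator ψ_↓ψ_↑, and the auxiliary integration variable y becomes the pair field Δ_{x,τ}.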
Note that we have to take care about the measure of integration. The Grassmann path integral means by definition

∫𝒟ψ̄_{x,τ,σ}𝒟ψ_{x,τ,σ} (…) := ∫ ∏_{x,τ,σ} ( dψ̄_{x,τ,σ} dψ_{x,τ,σ} ) (…).

The factors (between brackets) in this product can be swapped as long as we keep the two fields together: so if we keep the dψ̄_{x,τ,σ} dψ_{x,τ,σ} pairs together, the order of the {x, τ, σ} does not matter. The Grassmann path integral over the η spinors means by definition

∫𝒟η̄_{x,τ}𝒟η_{x,τ} (…) := ∫ ∏_{x,τ} ( dη̄_{x,τ,1} dη_{x,τ,1} dη̄_{x,τ,2} dη_{x,τ,2} ) (…).

Now replace component by component, with η_{x,τ} = (ψ_{x,τ,↑}, ψ̄_{x,τ,↓})ᵀ and η̄_{x,τ} = (ψ̄_{x,τ,↑}, ψ_{x,τ,↓}), to get

dη̄_{x,τ,1} dη_{x,τ,1} dη̄_{x,τ,2} dη_{x,τ,2} = dψ̄_{x,τ,↑} dψ_{x,τ,↑} dψ_{x,τ,↓} dψ̄_{x,τ,↓} = − dψ̄_{x,τ,↑} dψ_{x,τ,↑} dψ̄_{x,τ,↓} dψ_{x,τ,↓}.

There is, for every x, τ, a minus sign when compared to the measure of 𝒟ψ̄_{x,τ,σ}𝒟ψ_{x,τ,σ}. This is important when taking the integrals. Indeed, for the Gaussian integral with coefficients A_{x,τ} (these are 2 × 2 matrices since our Nambu spinors have 2 components):

∫𝒟ψ̄_{x,τ,σ}𝒟ψ_{x,τ,σ} exp{ −∑_{x,τ} η̄_{x,τ}·A_{x,τ}·η_{x,τ} } = ∏_{x,τ} [ −det_σ(A_{x,τ}) ].

Note that the determinant here is only the determinant over the 2 × 2 matrix between the Nambu spinors; it is the "spinor determinant", indicated by a σ subscript. By exponentiating the logarithm we can write this as

∏_{x,τ} [ −det_σ(A_{x,τ}) ] = exp{ ∑_{x,τ} ln[−det_σ(A_{x,τ})] },

where with the sum over x, τ we really mean nothing else but the trace over the spacetime indices. So, the swap in order gives the minus sign in front of the determinant. Indeed, sometimes the integration measure does matter.

Step 2: Performing the Grassmann integrations
Now we still have to figure out what the matrix between the Nambu spinors is before we can perform the integrations over the Grassmann fields in (15). For reasons that become clear in the light of Green's functions, we will not call this matrix A, but instead we will call it −G⁻¹:

−G⁻¹_{x,τ} = ( ∂_τ − ∇² − μ_↑   −Δ_{x,τ} ; −Δ̄_{x,τ}   ∂_τ + ∇² + μ_↓ ),

and prove that ∑_{x,τ} η̄_{x,τ}·(−G⁻¹_{x,τ})·η_{x,τ} reproduces the fermionic part of the action in (15). We do this by expanding component by component. This gives

η̄_{x,τ}·(−G⁻¹_{x,τ})·η_{x,τ} = ψ̄_{x,τ,↑}(∂_τ − ∇² − μ_↑)ψ_{x,τ,↑} − Δ_{x,τ}ψ̄_{x,τ,↑}ψ̄_{x,τ,↓} − Δ̄_{x,τ}ψ_{x,τ,↓}ψ_{x,τ,↑} + ψ_{x,τ,↓}(∂_τ + ∇² + μ_↓)ψ̄_{x,τ,↓}. (21)

The first three terms in (21) are exactly as in the action in (15). The last term is more difficult since it involves a derivative that is positioned between two Grassmann variables. We know that when two Grassmann variables are swapped, they get a minus sign. To see what we should do with derivatives, we have to jump a bit ahead of ourselves and do the Fourier transforms. Then we get rid of the operator character of the derivatives, and we can clearly see what the rules are. Starting from (see next section)

ψ_{x,τ,σ} = (βV)^{−1/2} ∑_{k,n} ψ_{k,n,σ} e^{i(k·x − ω_n τ)},

we know what the derivative will mean:

∂_τ ψ_{x,τ,σ} = (βV)^{−1/2} ∑_{k,n} (−iω_n) ψ_{k,n,σ} e^{i(k·x − ω_n τ)}.

Now there is no trouble in swapping the Grassmann variables; this just results in a minus sign: ψ_{k,n,σ}(iω_n)ψ̄_{k,n,σ} = ψ̄_{k,n,σ}(−iω_n)ψ_{k,n,σ}.
So the rule is: if there is an odd-degree derivative sandwiched between two conjugate Grassmann variables, and these are swapped, there is no sign change (the minus sign from the swap is compensated by the sign picked up by the derivative). For second derivatives, there is again a sign change; for third derivatives, again no change; and so on. If we now apply this newly gained knowledge to the fourth term of (21), we arrive at

ψ_{x,τ,↓}(∂_τ + ∇² + μ_↓)ψ̄_{x,τ,↓} = ψ̄_{x,τ,↓}(∂_τ − ∇² − μ_↓)ψ_{x,τ,↓}.

Now we see that the fourth term in (21) is indeed also equal to the corresponding term in (15).
Note that in general, the inverse Green's matrix −G⁻¹_{x,τ} is not diagonal in x, τ (for interactions other than the delta function). We should treat −G⁻¹ as an operator in e.g. position representation, and write

⟨x′, τ′| −G⁻¹ |x, τ⟩ = −G⁻¹_{x,τ} δ(x − x′) δ(τ − τ′).

The matrix gives the operator in position (and time) representation. If it were not diagonal, there would be terms like η̄_{x′,τ′}·⟨x′, τ′|−G⁻¹|x, τ⟩·η_{x,τ} with (x′, τ′) ≠ (x, τ) in the Gaussian integral, and we would need to first diagonalize the whole spacetime matrix. Luckily, we are using a contact potential. Note that when the system is not interacting, there will be no pairs (Δ_{x,τ} = 0), and we get in position representation

−G⁻¹_{x,τ} = ( ∂_τ − ∇² − μ_↑   0 ; 0   ∂_τ + ∇² + μ_↓ ).

This result does not contain any approximation (apart from the choice of starting Lagrangian). But −G⁻¹ depends on Δ_{x,τ} and contains a bunch of derivatives too, so we have no way to calculate the logarithm of that (remember that the determinant here is the spinor determinant over the 2 × 2 matrix but, as noted earlier, there is no problem with that). We will need to go to reciprocal space. Rather than using the spacetime coordinates x, τ, we work in the space of wave numbers k and Matsubara frequencies: ω_n = (2n + 1)π/β for fermions and Ω_n = 2nπ/β for bosons (both with n ∈ ℤ).

Intermezzo 1: The long road to reciprocal space
The goal of this section is to rewrite (27) in reciprocal space, so that we can trace over the wave numbers and (Matsubara) frequencies rather than positions and times. The starting point for fermions is the plane-wave basis

⟨x, τ|k, n⟩ = (βV)^{−1/2} e^{i(k·x − ω_n τ)},  with ω_n = (2n + 1)π/β.

For bosons we replace ω_n = (2n + 1)π/β by Ω_n = 2nπ/β. The available wave numbers are the same for bosons as for fermions; they are given by {k_x, k_y, k_z} = (2π/L){n_x, n_y, n_z} with n_x, n_y, n_z ∈ ℤ. There are various valid choices for normalizing the plane waves, and that tends to lead to confusion in the results found in the literature. That is why we will go through quite some detail to follow the effects of the choice of normalization that we have made here. We basically want the reciprocal-space kets to obey the completeness relation

I = ∑_{k,n} |k, n⟩⟨k, n|. (29)

The spacetime kets obey

I = ∫dx ∫₀^β dτ |x, τ⟩⟨x, τ|. (30)

This leads to the orthonormality relations

⟨k, n|k′, n′⟩ = δ_{k,k′} δ_{n,n′}. (31)

The consistency of these relations can be proven by inserting expression (30) for I between the ket and the bra in (31), and using ∫₀^β dτ = β and ∫dx = V. This gives us an integral representation of the delta function,

δ_{k,k′} δ_{n,n′} = (1/(βV)) ∫dx ∫₀^β dτ e^{i[(k−k′)·x − (ω_n−ω_{n′})τ]}. (32)

Similarly, we have the orthogonality relation

⟨x, τ|x′, τ′⟩ = δ(x − x′) δ(τ − τ′). (33)

Enforcing the consistency of inserting expression (29) in between the bra and the ket in (33) leads to the following sum representation for the delta function:

δ(x − x′) δ(τ − τ′) = (1/(βV)) ∑_{k,n} e^{i[k·(x−x′) − ω_n(τ−τ′)]}. (34)

Note the factor 1/(βV) in front of the summation. It is easy to check, by the way, that the definitions for the Fourier transform of the Grassmann variables in the previous paragraph are consistent with the definitions of the Fourier transforms given here. How do the fields transform under this? If we introduce Δ_{q,m} = ⟨q, m|Δ⟩ and assume that Δ_{x,τ} = ⟨x, τ|Δ⟩, then

Δ_{q,m} = (βV)^{−1/2} ∫dx ∫₀^β dτ Δ_{x,τ} e^{−i(q·x − Ω_m τ)}, (35)

Δ_{x,τ} = (βV)^{−1/2} ∑_{q,m} Δ_{q,m} e^{i(q·x − Ω_m τ)}. (36)

For the mind in need of consistency checks, note that substituting (35) in (36) gives back (34), and substituting (36) into (35) gives back (32). The same relations hold between ψ_{x,τ,σ} and ψ_{k,n,σ}; we used those already in (22). We also introduce Δ̄_{q,m} = ⟨Δ|q, m⟩ and assume that Δ̄_{x,τ} = ⟨Δ|x, τ⟩; then we have the complex-conjugate versions of these transformation rules. The same relations link ψ̄_{x,τ,σ} and ψ̄_{k,n,σ}, cf. expression (23). Note that here we have distributed the βV factors evenly over the Fourier and the inverse Fourier transform. As a result, we will have to tag a factor βV along later. If we had kept the βV completely in the Fourier transform (36), this factor would not appear.
Everything is now set for the Fourier transformation of the partition function given in (26). The first term in the action of (26) is re-expressed with (37) and (35) as

∫dx ∫₀^β dτ Δ̄_{x,τ} Δ_{x,τ} = ∑_{q,m} Δ̄_{q,m} Δ_{q,m},

where we have used (32) in the last step. This is merely Parseval's rule.
Every term in (40) can be transformed to reciprocal space using the rules and conventions that we stated above. The first term in (40) becomes

∑_{k,n} ψ̄_{k,n,↑} (−iω_n + k² − μ_↑) ψ_{k,n,↑}.

The second term transforms completely analogously. In the interaction terms, all three fields have to be transformed, yielding sums of the form (βV)^{−1/2} ∑_{k,n} ∑_{k′,n′} Δ_{k+k′,n+n′} ψ̄_{k,n,↑} ψ̄_{k′,n′,↓}, and analogously for the fourth term. In (43) and (44) the reciprocal-space representation of the inverse Green's function ⟨k′, n′|−G⁻¹|k, n⟩ appears: its diagonal (in k, n) entries contain the free-particle factors −iω_n ± (k² − μ_σ), and its off-diagonal entries contain the pair fields Δ_{k+k′,n+n′} and Δ̄_{k+k′,n+n′}, where the following Nambu spinors were used: η_{k,n} = (ψ_{k,n,↑}, ψ̄_{k,n,↓})ᵀ and η̄_{k,n} = (ψ̄_{k,n,↑}, ψ_{k,n,↓}).
Only the first part, the non-interacting part, is diagonal. The part related to the fermion pair field is not diagonal in reciprocal space, in the sense that components of the inverse Green's function with (k′, n′) different from (k, n) are nonzero, so we cannot use the result (18). In principle this is still a quadratic integral and we could do it, but it would involve taking a determinant (or logarithm) of ⟨k′, n′|−G⁻¹|k, n⟩, an ∞ × ∞ matrix with 2 × 2 matrices as its elements. So let us postpone this horror, and see if we can get somewhere with approximations.

Step 3: the saddle-point approximation
We have performed step 2, the Grassmann integrations, and even obtained two equivalent expressions for the result, in position space and in reciprocal space. The result still contains a path integral over the bosonic pair fields Δ̄_{q,m}, Δ_{q,m}, and this integral cannot be done analytically. Then why go through all the trouble of introducing these fields, and doing the fermionic integrals, if in the end we are left with another path integral that cannot be done exactly? The advantage of having rewritten the action into the form with the bosonic fields is that we can use additional information about this bosonic pair field. Indeed, if we want to investigate the superfluid state, where the pairs are Bose condensed, we know that the field will be dominated by one contribution, that of the q = 0 term. This is similar to the assumption of Bogoliubov in his famous treatment of the helium superfluid. In that case, Bogoliubov proposed to shift the bosonic operators by a q = 0 contribution, so that the shifted operators could be seen as small fluctuations. This is the scheme we will now apply to treat the Bose fluid of pairs described by Δ̄_{q,m}, Δ_{q,m}.
As mentioned, the simplest approximation is to assume all pairs are condensed in the q = 0, m = 0 state, and set for the two pair fields

Δ_{q,m} = βV δ_{q,0} δ_{m,0} Δ, (48)
Δ̄_{q,m} = βV δ_{q,0} δ_{m,0} Δ*. (49)

We introduce the factor βV for ease of calculation and to give Δ units of energy. Applying the saddle point (48) and (49), the pairing terms of the action reduce to

−∑_{k,n} [ Δ ψ̄_{k,n,↑} ψ̄_{−k,−n,↓} + Δ* ψ_{−k,−n,↓} ψ_{k,n,↑} ]. (50)

We explicitly write out the action without Nambu spinors here to make an important and subtle point, which is otherwise quickly overlooked. The alert reader will have noticed that we have re-indexed the indices in the spin-down two-particle term: k → −k and n → −n.
This re-indexation is necessary in order to write the action in Nambu spinor notation. Here the following Nambu spinors will be used:

η_{k,n} = ( ψ_{k,n,↑} ; ψ̄_{−k,−n,↓} ) and η̄_{k,n} = ( ψ̄_{k,n,↑} , ψ_{−k,−n,↓} ).
This then leads to the inverse Green's function in the saddle-point approximation. Our choice has made the inverse Green's function diagonal in (k, n): none of the terms Δ_{k′+k,n′+n} with k′ + k ≠ 0 or n′ + n ≠ 0 survive, due to the delta functions in (48), and similarly for Δ̄.
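Written out (with sign conventions chosen to reproduce the spinor determinant quoted in the next step; conventions differ between references), the saddle-point inverse Green's function reads

−G⁻¹(k, n) = ( −iω_n + ξ_k − ζ   −Δ ; −Δ̄   −iω_n − ξ_k − ζ ),

with ξ_k = k² − μ, μ = (μ_↑ + μ_↓)/2 and ζ = (μ_↑ − μ_↓)/2.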
In the saddle-point approximation to the partition sum we no longer have to perform the integrations over the bosonic degrees of freedom (it is this integration which was approximated):

Z_sp = ∫𝒟ψ̄_{k,n,σ}𝒟ψ_{k,n,σ} exp{ βV|Δ|²/g − ∑_{k,n} η̄_{k,n}·[−G⁻¹(k,n)]·η_{k,n} }.

We can now perform the Grassmann integration using expression (7):

Z_sp = exp{ βV|Δ|²/g } ∏_{k,n} [ −det_σ(−G⁻¹(k,n)) ]. (54)

This spinor determinant is given by

−det_σ(−G⁻¹(k,n)) = ω_n² + (k² − μ)² + |Δ|² − ζ² − 2iω_n ζ.

Introducing

μ = (μ_↑ + μ_↓)/2,  ζ = (μ_↑ − μ_↓)/2,  ξ_k = k² − μ,  E_k = √(ξ_k² + |Δ|²),

we can rewrite this as

−det_σ(−G⁻¹(k,n)) = (E_k + ζ + iω_n)(E_k − ζ − iω_n).

This can easily be checked by multiplying out and seeing that both expressions are indeed equal (and equal to ω_n² + (k² − μ)² + |Δ|² − ζ² − 2iω_n ζ). So, the saddle-point partition sum becomes

Z_sp = exp{ βV|Δ|²/g + ∑_{k,n} ln[(E_k + ζ + iω_n)(E_k − ζ − iω_n)] }.

The partition function is also linked to the thermodynamic potential through the well-known formula Z_sp = e^{−βF_sp(T,V,μ_↑,μ_↓)}.
Note that this is the free energy as a function of the chemical potentials. It is related to the usual free energy through

F_usual(T, V, N_↑, N_↓) = F_sp(T, V, μ_↑, μ_↓) + μ_↑N_↑ + μ_↓N_↓,

or, in μ and ζ:

F_usual = F_sp + μN + ζ δN,

where N = N_↑ + N_↓ and δN = N_↑ − N_↓. F is commonly referred to as the "thermodynamic potential". In the following, we use Ω(T, μ, ζ) = F(T, V, μ, ζ)/V for the thermodynamic potential per unit volume, and n = N/V and δn = δN/V for the total density and the density difference. We give these expressions explicitly here to avoid the subtle difficulties related to identifying the dependent variables. For the saddle-point action we get

S_sp = −βV|Δ|²/g − ∑_{k,n} ln[(E_k + ζ + iω_n)(E_k − ζ − iω_n)]. (60)

We dropped the explicit dependence on V because it also drops from the expression for Ω_sp. From (60) we get the saddle-point thermodynamic potential per unit volume

Ω_sp = −|Δ|²/g − (1/(βV)) ∑_k ∑_n ln[(E_k + ζ + iω_n)(E_k − ζ − iω_n)].

Note that since the density of k-states in reciprocal space is V/(2π)³, we can replace (1/V)∑_k → ∫ d³k/(2π)³. We do not have to take the Matsubara sums now. For example, we might want to take derivatives first.
In the sum over Matsubara frequencies we sum terms with all n ∈ ℤ. We can re-order the terms in this sum. In particular, we can set n = −1 − n′ and sum over all n′. For this substitution, ω_n = ω_{−1−n′} = −ω_{n′}. This means that in a general Matsubara summation we may replace iω_n by −iω_n without changing the result, which allows us to symmetrize the summand, using the shifted Matsubara frequencies and ξ_k = k² − μ.
We use this notation to make an important point. The thermodynamic potential per unit volume Ω_sp will depend on our choice of saddle-point value Δ, but Δ is not one of the thermodynamic variables like T, μ and ζ. If we want to treat Δ as a separate input for Ω_sp, we will write this as Ω_sp(T, μ, ζ; Δ) to emphasize the distinction between Δ and the true thermodynamic variables. We extract the dependence of Δ on the thermodynamic variables from the gap equation

∂Ω_sp(T, μ, ζ; Δ)/∂Δ = 0,

and we have to insert this result back into Ω_sp(T, μ, ζ; Δ(T, μ, ζ)) = Ω_sp(T, μ, ζ) when applying thermodynamic relations. For example, the number equations are given by the thermodynamic relations

n = −∂Ω_sp/∂μ,  δn = −∂Ω_sp/∂ζ.

If we want to treat Δ as a separate variable, then we need to use the chain rule, e.g.

dΩ_sp/dμ = (∂Ω_sp/∂μ)|_Δ + (∂Ω_sp/∂Δ)(∂Δ/∂μ).

Now you might wonder what all the fuss is about, since we just imposed the gap equation ∂Ω_sp/∂Δ = 0, so that here the second term vanishes and the total and partial derivatives coincide. We have been cautious with the partial derivatives and thermodynamic relations, and this might seem superfluous for the saddle-point approximation, but it will become very important when we are adding fluctuation corrections to the thermodynamic potential.
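Taking derivatives before summing produces Matsubara sums of the form ∑_n 1/(ω_n² + E²); their closed form, ∑_{n∈ℤ} 1/(ω_n² + E²) = (β/2E) tanh(βE/2), is easy to verify numerically (a standalone check with arbitrarily chosen β and E):

```python
import numpy as np

beta, E = 1.3, 0.7  # arbitrary test values
n = np.arange(-200000, 200000)
wn = (2 * n + 1) * np.pi / beta          # fermionic Matsubara frequencies
partial = np.sum(1.0 / (wn**2 + E**2))   # truncated Matsubara sum
exact = beta / (2 * E) * np.tanh(beta * E / 2)
assert abs(partial - exact) < 1e-5
print("Matsubara sum matches closed form")
```

The terms decay as 1/n², so a sharp truncation already converges well; the tanh factor is what produces the familiar Fermi-Dirac occupation factors in the gap and number equations.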

Results at the saddle-point level
To obtain the results at the saddle-point level, we need to perform the Matsubara summations in the thermodynamic potential. Due to the presence of the logarithm, this Matsubara sum is divergent. Luckily, however, the divergent part, which does not depend on the system parameters, can be neatly isolated, as shown in (79). This means that the free energy can be regularized by subtracting this unphysical term. This then results in (up to a Δ-independent constant):

Ω_sp(T, μ, ζ; Δ) = −|Δ|²/g − ∫ d³k/(2π)³ { (1/β) ln[2cosh(β(E_k + ζ)/2)] + (1/β) ln[2cosh(β(E_k − ζ)/2)] − ξ_k }. (80)

Using the renormalized contact-potential strength, this becomes

Ω_sp(T, μ, ζ; Δ) = −(k_F a_s)^{−1} |Δ|²/(8π) − ∫ d³k/(2π)³ { (1/β) ln[2cosh(β(E_k + ζ)/2)] + (1/β) ln[2cosh(β(E_k − ζ)/2)] − ξ_k − |Δ|²/(2k²) }. (81)

We kept (k_F a_s) as the measure of interaction strength, explicitly writing k_F although we have used it as a length unit, as if (k_F a_s) were a single symbol for the interaction strength. In three dimensions, since we have isotropy, the integral reduces to a one-dimensional radial integral, with ∫ d³k/(2π)³ → ∫₀^∞ k² dk/(2π²). This result is shown in figure (2).
In the limit of low temperature we have a simplification: (1/β) ln[2cosh(βx/2)] → |x|/2. Also note that in the limit of large k, the logarithm behaves as

(1/β) ln[2cosh(βE_k/2)] ≈ E_k/2 ≈ ξ_k/2 + |Δ|²/(4k²),

so that it is precisely the extra terms ξ_k + |Δ|²/(2k²) that keep the integrand from diverging and keep the saddle-point free energy that we calculate here finite. Remember that ξ_k = k² − μ and E_k = √(ξ_k² + |Δ|²).
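The isolation of the divergent part rests on the identity ∑_{n∈ℤ} [ln(ω_n² + E²) − ln(ω_n² + ξ²)] = 2 ln[cosh(βE/2)/cosh(βξ/2)], in which the divergence cancels between the two terms. A quick numerical check (arbitrary test values, a sketch independent of the chapter's equations):

```python
import numpy as np

beta, E, xi = 1.1, 1.4, 0.6   # arbitrary test values
n = np.arange(-200000, 200000)
wn = (2 * n + 1) * np.pi / beta
# difference of two divergent Matsubara sums: each summand decays as 1/n^2
lhs = np.sum(np.log(wn**2 + E**2) - np.log(wn**2 + xi**2))
rhs = 2 * np.log(np.cosh(beta * E / 2) / np.cosh(beta * xi / 2))
assert abs(lhs - rhs) < 1e-4
print("regularized Matsubara log-sum verified")
```
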

From this result, and from the requirement that Ω_sp be stationary with respect to the gap,

∂Ω_sp(T, μ, ζ; Δ)/∂|Δ| = 0,

we can obtain the gap equation. For every temperature and every μ, ζ, this equation can be solved to obtain Δ_sp(T, μ, ζ), the value of Δ that indeed minimizes Ω_sp (as can be checked from the sign of the second derivative, or by visual inspection of the plot of Ω_sp as a function of Δ at fixed T, μ, ζ, see figure (2)). The resulting saddle-point gap is illustrated in figure (3) and figure (4).

The presence of imbalance (ζ ≠ 0) only affects the modes with E_k < ζ in the zero-temperature limit, since in that limit (1/β) ln[2cosh(β(E_k − ζ)/2)] → |E_k − ζ|/2, which differs from (E_k − ζ)/2 only when E_k < ζ. The presence of a gap results in E_k ≥ Δ (for μ > 0). Thus, as long as ζ < Δ, the imbalance (at T = 0) will not affect the gap equation, and we retrieve the Clogston limit for superconductivity. This argument is only valid in the BCS limit, because in the BEC/BCS crossover we still need to solve (independently) the number equation that is coupled to the gap equation. Results for this are illustrated in figure (4). Only in the BCS limit can we set μ = E_F (see figure 5) and not worry about the overall chemical potential further.

Figure 3. The value of the gap parameter that minimizes the saddle-point free energy, shown as a function of the interaction strength 1/(k_F a_s) for ζ = 0. In the limit of (k_F a_s)^{−1} → −∞, the Bardeen-Cooper-Schrieffer (BCS) result is retrieved, whereas in the limit of (k_F a_s)^{−1} → +∞, a Bose-Einstein condensate (BEC) of tightly bound Cooper pairs is formed. The different curves show the effect of increasing the temperature: the BCS state is more strongly affected than the BEC state. However, in the BEC state, phase fluctuations (discussed in the next sections) will become the dominant mechanism to destroy superfluidity, rather than the breakup of the Cooper pairs as in BCS.

When working with a fixed number of particles, the chemical potentials need to be related to these numbers of particles through the number equations. These are again found from the thermodynamic potential: from

n_sp = −∂Ω_sp/∂μ  and  δn_sp = −∂Ω_sp/∂ζ,

and noting that we did indeed work at a fixed number of particles from the start, in introducing our units through k_F = (3π²n)^{1/3}, these two number equations become

n_sp = 1/(3π²)  and  δn_sp = δn,

and we have to solve the gap equation in conjunction with these two number equations: all three have to be satisfied.
The third one can be solved and merely fixes ζ as a function of δn_sp, but the first number equation together with the gap equation are coupled in the two remaining unknowns μ, Δ. Solutions for Δ are shown in figures (3) and (4). Figure (5) shows results for the chemical potential.
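As an illustration of how the coupled equations can be attacked numerically, here is a minimal sketch for the balanced gas (ζ = 0) at T = 0, in the units of this chapter (2m = ℏ = k_F = 1, so E_F = 1 and n = 1/(3π²)); the integrands are the standard T = 0 crossover forms of the gap and number equations:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def E(k, mu, D):
    """Bogoliubov excitation energy, units with 2m = hbar = k_F = 1."""
    return np.sqrt((k**2 - mu)**2 + D**2)

def equations(p, inv_kfa):
    D, mu = p
    # gap equation: 1/(8 pi a_s) = int d^3k/(2 pi)^3 [1/(2k^2) - 1/(2E_k)];
    # the k^2 prefactor is multiplied in analytically to avoid 0/0 at k = 0
    gap, _ = quad(lambda k: (0.5 - k**2 / (2 * E(k, mu, D))) / (2 * np.pi**2),
                  0, np.inf, limit=200)
    # number equation: n = int d^3k/(2 pi)^3 [1 - xi_k / E_k] = 1/(3 pi^2)
    num, _ = quad(lambda k: k**2 / (2 * np.pi**2)
                  * (1 - (k**2 - mu) / E(k, mu, D)),
                  0, np.inf, limit=200)
    return [gap - inv_kfa / (8 * np.pi), num - 1 / (3 * np.pi**2)]

# solve at unitarity, 1/(k_F a_s) = 0
Delta, mu = fsolve(equations, x0=[0.7, 0.6], args=(0.0,))
print(f"unitarity: Delta = {Delta:.3f} E_F, mu = {mu:.3f} E_F")
```

At unitarity this mean-field calculation reproduces the known saddle-point values Δ ≈ 0.69 E_F and μ ≈ 0.59 E_F; scanning `inv_kfa` from negative to positive values traces out the BCS-BEC crossover curves of figures (3) and (5).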

Intermezzo 2: Partition sum phase factor
Before we look at fluctuations beyond mean field, there is an important remark to be made: we can no longer delay looking at non-diagonal Gaussian integrations. Remember, when we do not restrict ourselves to the saddle-point value, we need to evaluate (see Eq. (45))

∫𝒟ψ̄_{k,n,σ}𝒟ψ_{k,n,σ} exp{ −∑_{k,n} ∑_{k′,n′} η̄_{k′,n′}·⟨k′, n′|−G⁻¹|k, n⟩·η_{k,n} } = ∏_{k,n} (−1) ∫𝒟η̄_{k,n}𝒟η_{k,n} exp{ −∑_{k,n} ∑_{k′,n′} η̄_{k′,n′}·⟨k′, n′|−G⁻¹|k, n⟩·η_{k,n} }.

Remember that the factors (−1) are coming from the change in integration measure. To keep track of them, we will give the product its own symbol, X := ∏_{k,n} (−1).

Figure 5. The chemical potential μ is shown as a function of the interaction parameter 1/(k_F a_s), for a low temperature and a balanced gas (ζ = 0). In the regime of negative a_s the chemical potential tends to the Fermi energy, as in the BCS theory. For positive a_s, the chemical potential tends to the binding energy of the strongly bound 'Cooper molecules' that Bose-Einstein condense to form a superfluid.

The matrix ⟨k′, n′|−G⁻¹|k, n⟩ has rows and columns indexed by (k, n) and (k′, n′). Each element in the matrix is itself a 2 × 2 matrix, and we could think of ⟨k′, n′|−G⁻¹|k, n⟩ = A_{k′,n′;k,n} as an N × N 'metamatrix' of matrices.
On the other hand, we could consider −G⁻¹ as a 2N × 2N matrix of scalars, indexed by (k, n, j) where j = 1, 2 indexes the spinor components. That way we need to consider the Nambu integral

∫𝒟η̄𝒟η exp{ −η̄·(−G⁻¹)·η } = det(−G⁻¹),

where the determinant is now of the 2N × 2N matrix. More matrix trickery is coming. We start innocently, by claiming

det(−G⁻¹) = exp{ Tr[ ln(−G⁻¹) ] }.

Now, remember that
• The determinant of any matrix is equal to the product of its eigenvalues.
• The logarithm of a 2N × 2N matrix B is another 2N × 2N matrix C such that B = exp(C) = I + ∑_{n=1}^{∞} Cⁿ/n!.
• The logarithm of a diagonal matrix is a diagonal matrix with the logarithm of the diagonal elements.
From these statements it can be shown that the logarithm of the determinant of a matrix equals the trace of the logarithm of the matrix. In other words,

ln det(B) = Tr[ ln(B) ],

where the trace is taken over all (k, n, j) values. Combining (97) with (98), the Nambu integral can thus be written as the exponential of a trace of a matrix logarithm.
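The claim ln det B = Tr ln B is easy to check numerically for a well-conditioned matrix (a generic illustration using scipy's matrix logarithm, unrelated to the specific Green's function):

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
B = M @ M.T + 4 * np.eye(4)   # symmetric positive definite: log well defined

lhs = np.log(np.linalg.det(B))
rhs = np.trace(logm(B)).real
assert abs(lhs - rhs) < 1e-8
print("ln det B equals Tr ln B")
```
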

Step 4: Adding fluctuations beyond mean field
If we want to improve on the saddle-point solution, we set

Δ_{q,m} = βV δ_{q,0} δ_{m,0} Δ + φ_{q,m}, (103)
Δ̄_{q,m} = βV δ_{q,0} δ_{m,0} Δ* + φ̄_{q,m}. (104)

This is like the Bogoliubov shift in the second-quantized theory of helium or Bose gases: we have a condensate contribution ∝ Δ and add to it small fluctuations φ_{q,m}. Then we will expand up to second order in the fluctuations φ_{q,m}, φ̄_{q,m} and get a quadratic (bosonic) path integral that we can perform exactly. This sentence summarizes, in one line, the long and tough program ahead.
We can choose our fluctuations around the saddle point differently. Rather than using two conjugate fields, we can choose to vary amplitude and phase, and employ fields |φ|_{q,m} e^{iθ_{q,m}}, so that Δ̄_{q,m} = βV δ(q) δ_{m,0} Δ* + |φ|_{q,m} e^{−iθ_{q,m}}.
This leads to a hydrodynamic description in terms of the density of fluctuations and the phase field. Basically this is a change in the integration variables representing the Gaussian fluctuation contribution to the free energy, and we expect the same results. We will work here with (103) and (104). Plugging this into the expression (46) for the inverse Green's function, we can write −G⁻¹ = −G_sp⁻¹ + F, with, like before, a diagonal piece −G_sp⁻¹ for the saddle-point contribution, and now the additional fluctuation piece F. Here again the change of sign in the index of the φ̄ terms, φ̄_{k+k′,n+n′} → φ̄_{k−k′,n−n′}, is due to a re-indexation of the k and n indices, similar to expression (50). Now we are ready to expand ln(−G⁻¹) in the exact expression (101) for the partition function, using matrix algebra. The result for the (quadratic approximation to the) partition sum using the "Bogoliubov shifted fields" becomes

Z_q = X ∫ Dφ̄_{q,m} Dφ_{q,m} exp{ Σ_{q,m} (1/g) [βVΔ* δ(q)δ_{m,0} + φ̄_{q,m}][βVΔ δ(q)δ_{m,0} + φ_{q,m}] + Tr ln(−G⁻¹) }.

We can bring all the factors that do not depend on the fluctuations in front of the integrals. The factor on the first line is nothing else but Z_sp, expression (102). This absorbs the nasty factor X and results in exp(−βVΩ_sp).
Only at this point do we make an approximation: we claim that F is small enough to expand

Tr ln(−G⁻¹) = Tr ln(−G_sp⁻¹ + F) ≈ Tr ln(−G_sp⁻¹) − Tr[G_sp F] − (1/2) Tr[G_sp F G_sp F],

neglecting the terms of order (G_sp F)³ and higher. What about the terms linear in the fluctuation fields? They have to vanish. If we correctly determined the saddle point, then the derivative of the action with respect to small deviations vanishes there, and the linear terms cancel. So the quantity that we need to calculate is Tr[G_sp F G_sp F]. The remaining path integral will be denoted by Z_fl, the partition sum of fluctuations, so that Z_q = Z_sp Z_fl.
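This quadratic truncation is just the matrix version of ln(1 − x) ≈ −x − x²/2, and it can be checked numerically: for a matrix A and a small perturbation F, the change ln det(A + F) − ln det(A) should match Tr[A⁻¹F] − ½ Tr[(A⁻¹F)²] up to third order in F (a minimal sketch with an arbitrary test matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
A = np.eye(N) * 3 + 0.1 * rng.standard_normal((N, N))  # well-conditioned matrix
F = 1e-3 * rng.standard_normal((N, N))                 # small perturbation

exact = np.linalg.slogdet(A + F)[1] - np.linalg.slogdet(A)[1]
X = np.linalg.solve(A, F)                              # A^{-1} F
second_order = np.trace(X) - 0.5 * np.trace(X @ X)
print(exact, second_order)                             # agree up to O(F^3)
```

In the text, A plays the role of −G_sp⁻¹ and F that of the fluctuation matrix; the trace of X corresponds to the linear terms that vanish at the saddle point.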

Intermezzo 3: More matrix misery
The trace is taken over all (k, n) values and over the Nambu components, with tr_σ the trace of the 2×2 spinor matrix. The matrix ⟨k′,n′| G_sp⁻¹ |k,n⟩ is not so hard to invert, since it is diagonal in the (k, n) indices: only the diagonal 2×2 blocks need to be inverted. It is not hard to invert a two-by-two matrix: for a block with elements (a, b; c, d), the inverse is (d, −b; −c, a) divided by the determinant ad − bc. Now we can use this to calculate Tr[G_sp F G_sp F]. Since we need to take the trace over the resulting 2×2 product, we only need to calculate the upper-left and the lower-right elements of the matrix multiplication. We use iω_n + k² − μ_↓ = iω_n + ξ_k + ζ, with ξ_k = k² − μ, and the similar relations for the other matrix elements. The resulting double sum over (k, n) looks confusing, so we rename some summation indices. You will have noticed that we let the matrix elements depend on the bosonic Matsubara frequencies iΩ_m = i(2πm/β); if we remember our shorthand notations, then it is clear where these appear. The matrix M is at the heart of our treatment of the fluctuations. It acts like a bosonic Green's function for the pair fields. It has some symmetry properties which can be derived from (134)–(137) by shifting the summation variables; these are collected in (140) and (141). Moreover, since our saddle-point result shows that |Δ| is fixed by the gap equation while the phase can be chosen independently, we put Δ real. We still need to take sums over fermionic Matsubara frequencies, and integrals over k in the matrix elements; this will be a bundle of joy for those who love complex analysis and residue calculus.
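As an example of the kind of fermionic Matsubara sum that appears here, the standard result Σ_n 1/(ω_n² + E²) = (β/2E) tanh(βE/2), with ω_n = (2n+1)π/β, can be checked by brute-force truncation (a minimal sketch; the values of β and E are arbitrary):

```python
import numpy as np

beta, E = 2.0, 1.3                       # inverse temperature, quasiparticle energy (arbitrary)
n = np.arange(-200000, 200000)
omega = (2 * n + 1) * np.pi / beta       # fermionic Matsubara frequencies

truncated = np.sum(1.0 / (omega**2 + E**2))
closed_form = beta / (2 * E) * np.tanh(beta * E / 2)
print(truncated, closed_form)            # agree to the truncation error
```

In residue calculus this identity follows from the fact that tanh(βz/2) has simple poles exactly at the fermionic Matsubara frequencies, which is the tool needed for the sums in the matrix elements of M.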
More generally, we will have to study the poles of the components M ij (q, z) in the complex plane to find excitations: poles in the Green's functions give us the quasiparticle spectrum.

Step 5: Integrating out the fluctuations
The bosonic path integral is easier than the fermionic one, since we do not have to worry about the signs. We do have to worry about not double-counting the fields: since we sum over all q, and given the symmetry properties of M, each term appears twice. We can restrict ourselves to half of the (q, m) space and still get all the terms. Therefore, when we integrate, the Gaussian integral over the fluctuation fields produces the determinant of M in the denominator. The factor π⁴, unlike the minus sign we had before, gives an irrelevant factor for the bosons, shifting the overall free energy by a constant amount. The total thermodynamic potential per unit volume Ω can now be divided into a saddle-point contribution Ω_sp and a fluctuation contribution Ω_fl: the latter satisfies Z_fl = exp(−βVΩ_fl), and since Z = Z_sp Z_fl leads to Ω = Ω_sp + Ω_fl, we find the fluctuation contribution in terms of the determinant of M, where we used the symmetry properties of the fluctuation matrix (140) and (141).
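The key bosonic fact used here, that a Gaussian integral produces an inverse determinant, is easy to verify numerically in two dimensions, where ∫ exp(−xᵀAx) d²x = π/√det(A) (a minimal sketch with an arbitrary positive-definite matrix A):

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # arbitrary positive-definite matrix

# Integrand exp(-x^T A x); dblquad passes arguments as (y, x)
f = lambda y, x: np.exp(-(A[0, 0]*x*x + 2*A[0, 1]*x*y + A[1, 1]*y*y))

numerical, _ = dblquad(f, -np.inf, np.inf,
                       lambda x: -np.inf, lambda x: np.inf)
closed_form = np.pi / np.sqrt(np.linalg.det(A))
print(numerical, closed_form)
```

For the fluctuation fields the same identity holds mode by mode, with A replaced by the 2×2 matrix M(q, iΩ_m), which is why Ω_fl involves ln det M.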
The equations determining our system become the following. For the gap equation we still have the saddle-point condition ∂Ω_sp/∂Δ = 0. The fluctuations, however, do modify the thermodynamics, and nearly nowhere is this more evident than in the case of a two-dimensional Fermi gas (see [15]). Without taking into account fluctuations, the saddle-point result predicts pairing in the 2D Fermi gas at T*. Including fluctuations, this temperature is reduced to T_p. But the mere presence of pairing is still insufficient to guarantee superfluidity. Indeed, superfluidity is further suppressed by phase fluctuations and occurs only below T_BKT, the Berezinskii-Kosterlitz-Thouless temperature.
For the density n = n_sp + n_fl we take the derivative of the free energy with respect to μ, keeping only ζ and T constant, n = −∂Ω/∂μ; and for the density difference δn = δn_sp + δn_fl we take the derivative with respect to ζ, keeping only μ and T constant, δn = −∂Ω/∂ζ. Now remember our previous discussion: when we keep Δ as a variable, we must use the total derivative, e.g. dΩ/dμ = ∂Ω/∂μ + (∂Ω/∂Δ)(dΔ/dμ). In these relations the derivatives are taken with respect to a thermodynamic variable while keeping the others constant. Note that Δ is an order parameter, but not a thermodynamic variable, for these relations.
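The structure of the number equation can be illustrated in the simplest setting, an ideal Fermi gas, where n = −∂Ω/∂μ can be checked against a finite-difference derivative of the grand potential (a minimal sketch; units with 2m = ħ = 1, a single spin component, and arbitrary values of β and μ):

```python
import numpy as np
from scipy.special import expit   # numerically stable Fermi function

beta, mu = 5.0, 1.0
k = np.linspace(1e-4, 8.0, 40001)       # momentum grid
dk = k[1] - k[0]
eps = k**2                              # free-particle dispersion
dos = k**2 / (2 * np.pi**2)             # 3D density-of-states factor

def omega_pot(m):
    # grand potential per unit volume: -(1/beta) ∫ dk dos ln(1 + e^{-beta(eps-m)})
    return -(1.0 / beta) * np.sum(dos * np.log1p(np.exp(-beta * (eps - m)))) * dk

n_direct = np.sum(dos * expit(-beta * (eps - mu))) * dk   # ∫ dk dos f(eps - mu)
h = 1e-5
n_fd = -(omega_pot(mu + h) - omega_pot(mu - h)) / (2 * h) # -dOmega/dmu
print(n_direct, n_fd)                   # the two agree
```

In the interacting problem the same derivative acts on Ω_sp + Ω_fl, which is where the fluctuation contributions n_fl and δn_fl come from.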

Conclusions
The above equations provide the necessary ingredients to calculate the free energy, which in turn gives access to the thermodynamic variables of the system. Analyzing competing minima or evolving minima in the free energy also allows one to derive phase diagrams for the system. In particular, the saddle-point free energy (158) yields the gap equation. The gap equation and the two number equations have to be fulfilled simultaneously, and need to be solved together to determine Δ, μ, ζ from T, n, δn.
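As a concrete illustration of solving such coupled equations, consider the balanced two-dimensional Fermi gas at zero temperature (units 2m = ħ = 1; the values of the Fermi energy E_F and the two-body binding energy E_b below are illustrative inputs). There the saddle-point gap and number equations reduce to the standard form √(μ² + Δ²) − μ = E_b and μ + √(μ² + Δ²) = 2E_F, with the known analytic solution μ = E_F − E_b/2 and Δ = √(2 E_b E_F). A minimal sketch of the simultaneous numerical solution:

```python
import numpy as np
from scipy.optimize import fsolve

E_F, E_b = 1.0, 0.2   # Fermi energy and two-body binding energy (illustrative)

def equations(v):
    mu, delta = v
    e = np.sqrt(mu**2 + delta**2)
    gap_eq = e - mu - E_b          # 2D saddle-point gap equation at T = 0
    number_eq = mu + e - 2 * E_F   # number equation fixing the density
    return [gap_eq, number_eq]

mu, delta = fsolve(equations, x0=[0.8, 0.5])
print(mu, delta)   # mu = E_F - E_b/2, delta = sqrt(2*E_b*E_F)
```

In the general case (finite T, imbalance, fluctuation contributions) the same root-finding structure applies, but each residual involves Matsubara sums and momentum integrals rather than closed forms.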
The different approaches to fermionic superfluidity can be catalogued through the gap and number equations that they consider. The full set, shown here, corresponds to the so-called "Gaussian pair fluctuation" (GPF) approach advocated by Hu, Liu and Drummond [14]. If the last terms in the number equations are dropped, we obtain the famous Nozières and Schmitt-Rink results [16], ported to the path-integral formalism by Sá de Melo, Randeria and Engelbrecht [17]. Finally, if the fluctuation part Ω_fl is also disregarded, we simply recover the mean-field or saddle-point results discussed earlier [18]. The results that we summarized here form the starting point of many ongoing investigations: the effects of imbalance, the effects of reduced dimensionality (to study the Berezinskii-Kosterlitz-Thouless transition [15]) and of optical potentials, the search for the Fulde-Ferrell-Larkin-Ovchinnikov state [19] and other exotic pairing states, and more. We hope that the above derivation of the basic functional-integral formalism will be of use to the reader, not only to gain a deeper insight into these applications of the formalism, but also to extend and modify it.