Cosmic neutrinos have played a key role in cosmology since the discovery of their mass. They can affect cosmological observables and have several implications, being the only hot dark matter candidates that we currently know to exist. The combination of massive neutrinos and an adequate theory of gravity provides a perfect scenario to address questions on the dark sector that have remained unanswered for years. In particular, in the era of precision cosmology, galaxy clustering and redshift-space distortions afford one of the most powerful tools to characterise the spatial distribution of cosmic tracers and to extract robust constraints on neutrino masses. In this chapter, we study how massive neutrinos affect galaxy clustering and investigate whether the cosmological effects of massive neutrinos might be degenerate with those of f(R) gravity cosmologies, which would severely affect the constraints.
Jorge Enrique García-Farieta*
Universidad Nacional de Colombia, Bogotá, Colombia
Rigoberto Ángel Casas Miranda
Universidad Nacional de Colombia, Bogotá, Colombia
*Address all correspondence to: email@example.com
From first principles, it is well known that a theory of gravity is needed to describe the spatial properties and dynamics of the large-scale structure (LSS) of the universe. The observational data collected over several decades provide strong support to the Lambda cold dark matter (ΛCDM) concordance model, which yields a consistent description of the main properties of the LSS [1, 2, 3, 4]. However, since cosmological observations have entered an unprecedented precision era, one of the current aims is to test some of the most fundamental assumptions of the concordance model of the universe. In this sense, the ΛCDM model assumes (i) the general theory of relativity (GR) as the theory describing gravitational interactions at large scales, (ii) the standard model of particles and (iii) the cosmological principle. Moreover, in this framework, the universe is currently dominated by dark energy (DE) in the form of a cosmological constant, responsible for the late-time cosmic acceleration [5, 6, 7], and by a cold dark matter (CDM) component that drives the formation and evolution of cosmic structures.
Recently, several shortcomings have been found in the ΛCDM scenario, like a possible tension in the constraints on H0 when different probes are used [4, 8, 9, 10]. This has motivated interest in theoretical models beyond GR. Among them, models based on f(R) gravity are among the most studied because of their generality and rich phenomenology [11, 12]. Moreover, modified gravity (MG) models represent one of the most viable alternatives to explain cosmic acceleration, provided they simultaneously satisfy solar system constraints and remain consistent with the measured accelerated cosmic expansion and large-scale constraints [14, 15, 16, 17]. An extra motivation to study MG models is given by the fact that massive neutrinos, the only (hot) dark matter candidates we actually know to exist, can affect these observables and have several cosmological implications. However, the degeneracy between some MG models and the total neutrino mass [19, 20, 21] limits the constraining power of many standard cosmological statistics [22, 23]. In this context, clustering analyses and redshift-space galaxy clustering have been proven to be powerful cosmological probes to discriminate among MG scenarios with massive neutrinos, as will be discussed in the following sections.
Regarding the dynamics of the background universe, in the standard framework it is well described by the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, whose line element in natural units is given by

ds² = −dt² + a²(t) [ dr²/(1 − kr²) + r² (dθ² + sin²θ dφ²) ],
where a(t) is the so-called scale factor, (r, θ, φ) are dimensionless spherical-polar coordinates and k defines the geometry of the universe under consideration to be flat (k = 0), open (k < 0) or closed (k > 0). The equations of motion that describe the time evolution of a(t) and the dynamic growth of the universe are called Friedmann equations and are given by

(ȧ/a)² = (8πG/3) ρ − k/a²,
ä/a = −(4πG/3) (ρ + 3p),
that can be re-expressed as

ρ̇ + 3 (ȧ/a) (ρ + p) = 0
after eliminating the curvature term. These equations lead to the definition of the Hubble parameter, H ≡ ȧ/a, which drives the expansion rate of the universe and is usually represented in terms of the dimensionless factor h, defined by the expression H0 = 100 h km s⁻¹ Mpc⁻¹ at the present epoch. In order to have a full description of the background universe, an equation of state (EoS) for the cosmic fluid is necessary, considering that it has three principal components: baryonic matter, dark matter and radiation (see Figure 1a). A first approximation consists in assuming a linear relationship between the pressure p and the density ρ; thus, the equation of state can be written as p = wρ, where w is a parameter that in principle can be time-dependent, although the simplest approach is to consider it constant. Under this assumption, Eq. (4) is easy to solve, resulting in ρ ∝ a^(−3(1+w)). The case with w = −1 corresponds to the so-called cosmological constant; it is obtained by assuming a constant energy density, so that the corresponding EoS is p = −ρ. The case of a flat universe (k = 0) is interesting because of its agreement with many observational results; it also implies a special value of the matter density in the universe that allows one to introduce naturally a critical density, ρ_c = 3H²/(8πG), in terms of the Hubble parameter. The critical density is also useful to define the dimensionless density parameter Ω_i = ρ_i/ρ_c for the various species i: radiation (Ω_r), matter (Ω_m), cosmological constant (Ω_Λ) and curvature (Ω_k). It is easy to verify that the sum of these parameters is equal to unity, as expected from the Friedmann equations; in fact, Ω_r + Ω_m + Ω_Λ + Ω_k = 1 is known as the cosmic sum rule. Therefore, a Friedmann universe can be described by these cosmological parameters, such that the expansion rate as a function of the scale factor is given by

H²(a) = H0² [ Ω_r a⁻⁴ + Ω_m a⁻³ + Ω_k a⁻² + Ω_Λ ].
This equation is usually written via the dimensionless function E(a) ≡ H(a)/H0. The latest constraints on cosmological parameters obtained by the Planck satellite show that the DE equation of state is consistent with a cosmological constant and tightly constrain Ω_m, which contains the density of baryons and cold dark matter. Additionally, in the last few years, it became usual to include the energy density of neutrinos: they contribute to the radiation density at early times but behave as matter after the non-relativistic transition at late times, so that for a flat universe the total energy density is the sum of the radiation, matter (including massive neutrinos) and cosmological constant contributions. Figure 1a shows the percentages, derived from Planck data, in which each species contributes to the total content of the universe. Figure 1b shows the evolution of the density parameters as a function of the scale factor a. For further details concerning the background universe, see Refs. [25, 26, 27].
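The cosmic sum rule translates directly into code. The following sketch evaluates E(z) = H(z)/H0 for a flat universe; the density parameter values are illustrative placeholders of the right order of magnitude, not the precise constraints quoted above.

```python
import math

# Illustrative density parameters (flat universe enforced by the sum rule).
OMEGA_M = 0.31      # cold dark matter + baryons (+ non-relativistic neutrinos)
OMEGA_R = 9.0e-5    # photons + relativistic species
OMEGA_L = 1.0 - OMEGA_M - OMEGA_R  # cosmological constant closes the budget

def E(z):
    """Dimensionless expansion rate E(z) = H(z)/H0 from the cosmic sum rule."""
    a = 1.0 / (1.0 + z)
    return math.sqrt(OMEGA_M * a ** -3 + OMEGA_R * a ** -4 + OMEGA_L)

def hubble(z, h=0.67):
    """H(z) in km/s/Mpc, with H0 = 100 h km/s/Mpc."""
    return 100.0 * h * E(z)
```

By construction E(0) = 1, and E(z) grows with redshift as matter and then radiation come to dominate.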
One of the most interesting modifications of GR is that which modifies the Einstein-Hilbert action by introducing a scalar function, f(R), as follows:

S = (1/16πG) ∫ d⁴x √(−g) [R + f(R)] + ∫ d⁴x √(−g) ℒ_m,
where R is the Ricci scalar, G is Newton's gravitational constant, g is the determinant of the metric tensor and ℒ_m is the Lagrangian density of all matter fields. For a classification of f(R) theories of gravity and the assumptions needed to arrive at the various versions of f(R) gravity and GR, see the reviews in the literature. Thus, for a general f(R) model, one can consider a spatially flat FLRW universe, so that varying the modified Einstein-Hilbert action with respect to the metric one gets a general form of the modified Einstein field equations. Consequently, the corresponding modified Friedmann equations are given by
where f_R ≡ df/dR and the over-dot denotes a derivative with respect to the cosmic time t. In general, the background evolution of a viable f(R) model is not simple, as shown by [29, 30, 31]. However, it is possible to get an approximation in a way that is analogous to the DE models, by neglecting the higher-derivative and non-linear terms. By defining the growth rate as f ≡ d ln δ/d ln a, the equation that describes the growth of matter perturbations in terms of the density contrast δ in an f(R) model is approximated by [29, 32]

df/d ln a + f² + (2 + Ḣ/H²) f = (3/2) (G_eff/G) Ω_m(a),
where G_eff is the effective (generally scale- and time-dependent) gravitational constant. A plausible f(R) function, able to satisfy the solar system constraints, to mimic the ΛCDM model in the high-redshift regime where it is well tested by the CMB and, at the same time, to accelerate the expansion of the universe at low redshift but without a cosmological constant, suggests that

lim(R→∞) f(R) = const,   lim(R→0) f(R) = 0,
which can be satisfied by a broken power law function such that

f(R) = −m² c₁ (R/m²)ⁿ / [c₂ (R/m²)ⁿ + 1],
where the mass scale m is defined as m² ≡ 8πG ρ̄_m,0 / 3 = Ω_m H0², and c₁, c₂ and n are non-negative free parameters of the model. For this f(R) model, the background expansion history is consistent with the ΛCDM case by choosing c₁/c₂ = 6 Ω_Λ/Ω_m, where Ω_Λ and Ω_m are the dimensionless density parameters for vacuum and matter, respectively.
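The broken power law above is simple to evaluate numerically. A minimal sketch, with illustrative (not calibrated) parameter values:

```python
# Hu-Sawicki f(R) function:
#   f(R) = -m^2 * c1 * (R/m^2)^n / (c2 * (R/m^2)^n + 1).
# In the high-curvature limit R >> m^2 it tends to the constant -m^2 c1/c2,
# which is how the model mimics a cosmological constant.
def hu_sawicki_f(R, m2, c1, c2, n=1):
    x = (R / m2) ** n
    return -m2 * c1 * x / (c2 * x + 1.0)
```

With the choice c1/c2 = 6 Ω_Λ/Ω_m, the high-curvature constant plays the role of −2Λ in the background expansion.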
Nowadays it is generally accepted that some MG theories, such as f(R), are strongly degenerate with the effects of massive neutrinos over a wide range of observables; see, e.g., [19, 20, 21, 35]. This represents a serious challenge for constraining cosmological models from current and future galaxy surveys, requiring robust and reliable methods to disentangle both phenomena. Furthermore, for some specific combinations of the f(R) function and of the total neutrino mass, Σm_ν, standard statistics would not distinguish them from the standard ΛCDM expectations (see [21, 22, 36]). In addition, since the degeneracy is mostly driven by the non-linear behaviour of both the MG and the massive neutrino effects on the LSS, linear tools are not suitable to properly disentangle the combined parameter space.
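The approximate growth equation of the previous section can be integrated numerically. The sketch below solves the equivalent second-order equation for the growth factor D(a) in the GR limit (G_eff/G = 1), with a constant ratio as a crude stand-in for the scale-dependent f(R) case; Ω_m, the initial conditions and the integrator settings are all illustrative assumptions.

```python
import math

OM = 0.31  # illustrative matter density parameter

def E(a):
    """Flat LCDM expansion rate (radiation neglected at these epochs)."""
    return math.sqrt(OM / a ** 3 + (1.0 - OM))

def growth_factor(a_end, geff_ratio=1.0, n=20000):
    """Integrate D'' + (3/a + dlnE/da) D' = (3/2) Om (Geff/G) D / (a^5 E^2)
    with Euler steps, starting from the matter-dominated solution D ~ a."""
    a, D, dD = 1e-3, 1e-3, 1.0
    h = (a_end - a) / n
    for _ in range(n):
        dlnE = (math.log(E(a + 1e-6)) - math.log(E(a - 1e-6))) / 2e-6
        d2D = -(3.0 / a + dlnE) * dD \
              + 1.5 * OM * geff_ratio * D / (a ** 5 * E(a) ** 2)
        D, dD, a = D + h * dD, dD + h * d2D, a + h
    return D
```

An enhanced effective gravitational constant (geff_ratio > 1) yields a larger growth factor today, which is the qualitative signature that massive neutrinos can compensate.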
2.2 Massive neutrinos and the large-scale structure
Motivated by the apparent violation of energy, momentum and spin conservation in β-decay processes, Pauli proposed the existence of neutrinos in 1930 to keep the conservation laws safe. Eventually, 26 years after being theoretically postulated, neutrinos were detected for the first time by Cowan and collaborators. Neutrinos are classified in three 'flavours' in the standard model of particles; they were considered massless for some time, until the discovery of the neutrino oscillation phenomenon, i.e. the change of flavour. Since then, it has been known that at least two of the three neutrino families are massive, in contrast to the standard model assumption; however, measuring the absolute masses of the neutrinos is not easy, which makes this a very active field of research today, both in cosmology and in particle physics. In a cosmological context, neutrinos leave detectable imprints on observations that can then be used to constrain their properties; in particular, the presence of massive neutrinos impacts the background evolution of the universe and the growth of structures. In the early universe, massive neutrinos are relativistic and indistinguishable from massless ones, behaving like photons, meaning that their energy density drops like a⁻⁴. In this stage, neutrinos are in thermal equilibrium, and their momenta follow the standard Fermi-Dirac distribution

dn_ν = (g_ν / 2π²) p² dp / [exp(p / (k_B T_ν)) + 1],
where dn_ν is the number of cosmic neutrinos with momentum between p and p + dp, g_ν is the number of neutrino spin states, T_ν(z) is the neutrino temperature at redshift z and k_B is the Boltzmann constant. In principle, the chemical potential should also be included in the momentum distribution function; however, it has been shown to be negligible for cosmological neutrinos. The temperatures of the cosmic neutrino background and of the CMB are related by T_ν = (4/11)^(1/3) T_CMB, such that the temperature of the neutrino background at a given redshift z is T_ν(z) = T_ν,0 (1 + z). Then, when the average momentum of the neutrinos drops below their mass, they become non-relativistic, and their energy density drops like a⁻³, behaving like baryons and cold dark matter. Figure 2 shows the evolution of the massive neutrino density, normalised to today's critical density, as a function of the scale factor from its early stage to the late universe.
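The temperature relation and the Fermi-Dirac occupation can be sketched in a few lines; units (eV for momentum, eV/K for k_B) are a convention chosen here for convenience.

```python
import math

T_CMB = 2.7255  # CMB temperature today, in K

def neutrino_temperature(z=0.0):
    """T_nu(z) = (4/11)^(1/3) * T_CMB * (1 + z), in K."""
    return (4.0 / 11.0) ** (1.0 / 3.0) * T_CMB * (1.0 + z)

def fermi_dirac(p, T, k_B=8.617333e-5):
    """Relativistic Fermi-Dirac occupation 1 / (exp(p/(k_B T)) + 1),
    with p (as p*c) in eV and k_B in eV/K."""
    return 1.0 / (math.exp(p / (k_B * T)) + 1.0)
```

At z = 0 this gives the relic background temperature of about 1.95 K quoted below.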
After massive neutrinos decouple from the rest of the plasma at high redshift,
the number density per flavour is fixed by the temperature, so that the universe is currently filled by a relic neutrino background, uniformly distributed, with a density of 113 particles/cm³ per species and an average temperature of 1.95 K. As neutrinos are non-relativistic particles at late times, they contribute to the total matter density of the universe, so that Ω_m = Ω_cdm + Ω_b + Ω_ν, where Ω_cdm, Ω_b and Ω_ν are the dimensionless density parameters for CDM, baryons and neutrinos, respectively. The density background is affected by massive neutrinos such that a perturbation in the density field is well described by [39, 42] as follows:

δ_m = (1 − f_ν) δ_cb + f_ν δ_ν,
where δ_ν denotes the neutrino perturbations and f_ν ≡ Ω_ν/Ω_m is the density contribution related to massive neutrinos, which can be expressed in terms of the total neutrino mass, Σm_ν, as follows:

Ω_ν h² = Σm_ν / (93.14 eV).
Currently, several observations provide limits on the total neutrino mass under the assumption of standard GR. Depending on their mass, neutrinos can affect different quantities, such as the epoch of matter-radiation equality, and at the same time imprint features in cosmological observables like the clustering, the matter power spectrum, the halo mass function and the redshift-space distortions [18, 44, 45, 46, 47].
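The mapping between the total neutrino mass and its density contribution is a one-liner; the default h and Ω_m values below are illustrative assumptions.

```python
def omega_nu(sum_mnu_eV, h=0.67):
    """Neutrino density parameter, from Omega_nu h^2 = sum(m_nu) / 93.14 eV."""
    return sum_mnu_eV / (93.14 * h * h)

def f_nu(sum_mnu_eV, omega_m=0.31, h=0.67):
    """Fractional neutrino contribution f_nu = Omega_nu / Omega_m."""
    return omega_nu(sum_mnu_eV, h) / omega_m
```

For instance, even the near-minimal mass Σm_ν = 0.06 eV already contributes a few tenths of a percent of the matter budget.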
3. Halo mass function and clustering analysis
Considering the impact of MG and massive neutrinos on the clustering, a powerful cosmological test to discriminate among these scenarios is provided by the redshift-space distortions (RSD), that is, the shift in the position of the tracers due to their peculiar motions. For this purpose, cosmological simulations have become a powerful tool for testing theoretical predictions and for guiding observational projects. In this context, the formation and evolution of cosmic structures can be understood as a dynamical system of many particles, which trace the underlying mass distribution in a given cosmological model. N-body simulation methods and algorithms have progressed continuously, achieving resolutions high enough to resolve fine structures with millions of particles, narrowing the gap between theory and observations. For a detailed description of the fundamentals of cosmological simulations, see, e.g., [48, 49, 50, 51].
Since the formation and evolution of cosmic structures are based on the growth of small fluctuations in the density field, the amplitude of these initial perturbations must have the correct value at late times to match the observed clustering today. An analytical development based on perturbation theory makes it possible to follow the growth of structures to a certain extent using the linear approximation, valid as long as the density contrast δ ≪ 1. Nevertheless, these calculations are limited and cannot be extrapolated to fully explain the observational data; they break down on scales where δ ≈ 1. Moreover, beyond the linear regime, the observed structures span a wide range of density contrasts, from cosmic voids with δ ≈ −1 to collapsed objects with δ of order hundreds and larger. This makes necessary a more elaborate description of the perturbations in the non-linear regime, which can be achieved using higher-order perturbation theory or numerical simulations.
In this section we show a complementary analysis to the one performed previously, in order to investigate the clustering in the context of modified gravity with massive neutrinos. We used a subset of the DUSTGRAIN-pathfinder runs, which implement the Hu-Sawicki f(R) model including massive neutrinos and whose cosmological parameters are consistent with Planck 2015 constraints.
3.1 Halo mass function
As CDM haloes form from collapsing regions that detach from the background density field, their abundance can be related to the volume fraction of a Gaussian density field, smoothed on a radius R, above a critical collapse threshold δ_c. The comoving number density of the haloes is strictly related to the underlying cosmological model, such that within a mass interval [M, M + dM], the halo mass function is given by

dn/dM = f(σ) (ρ̄_m / M) d ln σ⁻¹ / dM,
where f(σ) is the multiplicity function, σ(M) is the RMS fluctuation of the linear density field smoothed on the scale R(M) and ρ̄_m is the mean matter density. The product f(σ) d ln σ⁻¹ quantifies the amount of mass contained in fluctuations of typical mass M. The simplest argument to compute the multiplicity function analytically comes from the spherical collapse theory, following the Press-Schechter formalism, such that a perturbation is supposed to collapse when it reaches the threshold δ_c ≈ 1.686, by assuming that the probability distribution for a perturbation on a scale M is a Gaussian function with variance σ²(M), resulting in

f(σ) = √(2/π) (δ_c/σ) exp(−δ_c² / 2σ²).
Another approach to determine f(σ) is given by accurate fitting functions calibrated on simulations, which extend the spherical collapse results phenomenologically. In this approach the function is expected to be approximately universal with respect to changes in redshift and cosmology and is parameterized as follows:
where A is the amplitude of the mass function and a, b, c and d are free parameters that depend on the halo definition. The variance is usually given by

σ²(R) = (1/2π²) ∫ P(k) W²(kR) k² dk,
where P(k) is the linear matter power spectrum as a function of the wave number k and W is the Fourier transform of the real-space top-hat window function of radius R. A fundamental feature of the mass function is that it decreases monotonically with increasing mass; furthermore, its dependence on cosmology is encoded in the variance σ²(R), as shown by the integrand of Eq. (20). From the point of view of N-body simulations, an approach to compute the mass function follows straightforwardly from Eq. (17), by counting the number of haloes above a certain mass threshold in a comoving volume V, such as
where A is the survey area, z₁ and z₂ are the redshift boundaries and dV/dz is the comoving volume element.
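The two theoretical ingredients above, the variance of the smoothed field and a fitting-form multiplicity function, can be sketched together. The Tinker-style parameter values and the power-law input spectrum below are illustrative stand-ins, not the calibrated functions used in the analysis.

```python
import math

def tophat_window(x):
    """Fourier transform of the real-space top-hat filter (x = kR)."""
    if x < 1e-4:                      # series expansion avoids 0/0 at x -> 0
        return 1.0 - x * x / 10.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3

def sigma2(R, pk, kmin=1e-4, kmax=1e2, nk=2000):
    """sigma^2(R) = (1/2 pi^2) int P(k) W^2(kR) k^2 dk, via a log-trapezoid
    rule; `pk` is any callable returning the linear power spectrum P(k)."""
    lnk = [math.log(kmin) + i * (math.log(kmax) - math.log(kmin)) / (nk - 1)
           for i in range(nk)]
    # integrand rewritten in d(ln k): P(k) W^2(kR) k^3
    f = [pk(math.exp(l)) * tophat_window(math.exp(l) * R) ** 2 * math.exp(3 * l)
         for l in lnk]
    dlnk = lnk[1] - lnk[0]
    integral = sum(0.5 * (f[i] + f[i + 1]) * dlnk for i in range(nk - 1))
    return integral / (2.0 * math.pi ** 2)

def multiplicity(sigma, A=0.186, a=1.47, b=2.57, c=1.19):
    """Illustrative fitting-form multiplicity function in the style of
    Tinker et al.: f(sigma) = A [(sigma/b)^(-a) + 1] exp(-c/sigma^2)."""
    return A * ((sigma / b) ** (-a) + 1.0) * math.exp(-c / sigma ** 2)
```

The exponential cut-off in f(σ) at small σ (large masses) is what makes massive haloes exponentially rare, the feature visible in Figure 3.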
Figure 3 shows the mass function of CDM haloes measured for all models of the DUSTGRAIN-pathfinder runs at six different redshifts. Each panel contains the mass function for each model, as labelled, to track its evolution in redshift. As a reference, the black dashed line represents the theoretical expectation for a flat ΛCDM model. As expected, massive haloes are less abundant than smaller ones in a fixed comoving volume. The mass function decreases with redshift, since at earlier times the density field is smoother than at late times. The plot is logarithmic, meaning that the number density of large-mass haloes falls off by several orders of magnitude over the range of redshifts shown. The f(R) models, both with and without neutrinos, reproduce this pattern very well; only at very high masses do significant differences appear.
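Measured mass functions like those in Figure 3 follow from direct halo counts per unit comoving volume, as in Eq. (21). A minimal binning sketch (bin choices are illustrative):

```python
import math

def mass_function(halo_masses, volume, nbins=10):
    """Binned halo mass function dn/dlnM from a simulated catalogue:
    counts per logarithmic mass bin, divided by the comoving volume."""
    lnM = sorted(math.log(m) for m in halo_masses)
    lo, hi = lnM[0], lnM[-1] + 1e-9    # tiny pad keeps the max mass in range
    dln = (hi - lo) / nbins
    counts = [0] * nbins
    for x in lnM:
        counts[min(int((x - lo) / dln), nbins - 1)] += 1
    centres = [math.exp(lo + (i + 0.5) * dln) for i in range(nbins)]
    dndlnM = [c / (volume * dln) for c in counts]
    return centres, dndlnM
```

Summing dn/dlnM times the bin width and the volume recovers the total halo count, a useful sanity check on any binning code.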
Figure 4 compares the halo mass functions of the different DUSTGRAIN-pathfinder simulations, computed at three redshifts (left, central and right columns). The lower panels show the percentage difference with respect to the ΛCDM model. It is possible to see that the effects of f(R) gravity and massive neutrinos on the dynamical evolution of the matter density field result in different halo formation epochs and different number densities of collapsed systems. In particular, the model with the strongest modification of gravity (blue) deviates most from the standard scenario, whereas several of the combined models mimic the ΛCDM behaviour over a wide range of masses.
3.2 Clustering analysis
To quantify the halo clustering, we used the two-point correlation function (2PCF), which can be defined as the joint probability of finding a pair of objects at a given spatial separation in a tracer distribution. To measure the full 2PCF, denoted ξ(s, μ), we used the Landy-Szalay estimator, given by:

ξ(s, μ) = [DD(s, μ) − 2DR(s, μ) + RR(s, μ)] / RR(s, μ),
where μ is the cosine of the angle between the line of sight and the comoving halo pair separation, s, and DD, RR and DR represent the normalised numbers of data-data, random-random and data-random pairs, respectively. This estimator is almost unbiased with minimum variance, which is why it is preferred over other estimators for clustering measurements. Since possible deviations from GR are more evident on small scales, we consider an intermediate non-linear range from 1 to 50 h⁻¹ Mpc and random samples ten times larger than the halo ones. Then, in order to examine how significant the RSD correction is, it is convenient to expand the 2D 2PCF in the orthonormal basis of the Legendre polynomials L_l(μ), such that

ξ(s, μ) = Σ_l ξ_l(s) L_l(μ),
where each coefficient ξ_l(s) corresponds to the lth multipole moment:

ξ_l(s) = (2l + 1)/2 ∫₋₁¹ ξ(s, μ) L_l(μ) dμ.
Figure 5 shows the RSD effects on the iso-correlation curves of the 2D two-point correlation function in the (s⊥, s∥) plane, where s⊥ and s∥ are, respectively, the components perpendicular and parallel to the line of sight of the observer. The 2PCF has been computed for the ΛCDM catalogue of the DUSTGRAIN-pathfinder simulations, in real space (left panel) and for the corresponding sample in redshift space (right panel). The contours are drawn at fixed iso-correlation levels. In real space the correlation function is undistorted, describing circular curves in this plane. In redshift space, the effect caused by the RSD is clearly visible on small scales, where the 2PCF is stretched along the s∥ direction (Fingers-of-God effect), while on large scales the coherent infall squashes the contours along the perpendicular direction (Kaiser effect).
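The Landy-Szalay estimator itself reduces to simple pair-count algebra once the counts are normalised; a sketch (catalogue sizes are illustrative):

```python
def landy_szalay(dd, dr, rr, nd, nr):
    """Landy-Szalay estimator xi = (DD - 2 DR + RR) / RR, with raw pair
    counts normalised by the total number of pairs of each kind:
    DD / (nd(nd-1)/2), DR / (nd*nr), RR / (nr(nr-1)/2)."""
    DD = dd / (nd * (nd - 1) / 2.0)
    DR = dr / (nd * nr)
    RR = rr / (nr * (nr - 1) / 2.0)
    return (DD - 2.0 * DR + RR) / RR
```

For an unclustered sample all normalised counts coincide and the estimator returns zero; an excess of data-data pairs yields a positive correlation.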
The symmetry of the 2PCF in real space means that the full clustering signal is encoded in the monopole moment ξ₀(s), while the remaining multipole moments are statistically consistent with zero. Figure 6 shows the monopole moment of the CDM haloes for all models considered in the DUSTGRAIN-pathfinder project, at three different redshifts. Subpanels show the percentage difference between the MG models [f(R) with and without massive neutrinos] and the ΛCDM model. The monopole moments of the models with the strongest modifications of gravity are the ones that deviate most from ΛCDM at low redshift. This behaviour is also present in the remaining models, but it is less significant. In general, it is observed that massive neutrinos increase the clustering signal for all models, especially at high redshift, with some of the combined models being degenerate with respect to ΛCDM. The quadrupole and hexadecapole moments are consistent with zero within 1σ.
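The multipole projection of Eq. (24) is a one-dimensional integral over μ; a minimal numerical sketch (midpoint rule, resolution chosen arbitrarily):

```python
def legendre(l, mu):
    """Legendre polynomials for the even multipoles used here."""
    if l == 0:
        return 1.0
    if l == 2:
        return 0.5 * (3.0 * mu ** 2 - 1.0)
    if l == 4:
        return (35.0 * mu ** 4 - 30.0 * mu ** 2 + 3.0) / 8.0
    raise ValueError("unsupported multipole order")

def multipole(xi_s_mu, s, l, nmu=200):
    """xi_l(s) = (2l+1)/2 * int_{-1}^{1} xi(s, mu) L_l(mu) dmu,
    with `xi_s_mu` any callable giving the 2D correlation function."""
    dmu = 2.0 / nmu
    total = 0.0
    for i in range(nmu):
        mu = -1.0 + (i + 0.5) * dmu    # midpoint of each mu bin
        total += xi_s_mu(s, mu) * legendre(l, mu) * dmu
    return (2 * l + 1) / 2.0 * total
```

As expected from the symmetry argument above, an isotropic ξ(s, μ) returns its full signal in the monopole and a vanishing quadrupole.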
Another important feature to take into account in the clustering analysis is the bias that is introduced when the ΛCDM model is wrongly assumed to predict the DM clustering of an f(R) universe with massive neutrinos. This effective halo bias allows one to characterise the relation between the halo clustering and the underlying mass distribution. By using a theoretical effective bias model and its corresponding mass function, it is possible to disentangle the degeneracy with respect to ΛCDM. The level of agreement with the measurements obtained from the f(R) scenarios with massive neutrinos gives us a better understanding of the discrepancy with the ΛCDM ones. To compute the theoretical mass function, we consider the linear CDM + baryon power spectrum, as stated in the CDM prescription, replacing the total matter field with the cold dark matter plus baryon one. These quantities can be obtained with CAMB or another Boltzmann code, with the linear power spectrum for CDM + baryons expressed in terms of the transfer functions of the individual species. This implies that the effect of neutrinos on the cluster abundance is well captured by rescaling the smoothed density field, computing σ(M) from the CDM + baryon power spectrum,
with the CDM + baryon power spectrum obtained by rescaling the total matter power spectrum with the corresponding transfer functions, weighted by the density of each species, so that

P_cb(k) = [ (Ω_cdm T_cdm(k) + Ω_b T_b(k)) / ((Ω_cdm + Ω_b) T_m(k)) ]² P_m(k).
The impact on the bias when the CDM prescription is not considered can be appreciated in Figure 7. In most cases this correction is small, with the exception of one of the models. A detailed discussion of these results can be found in the corresponding reference.
3.3 Modelling the redshift-space distortions
In a realistic case, spectroscopic surveys observe a combination of the density and velocity fields in redshift space. The observed redshift is a combination of cosmological effects plus an additional term caused by the peculiar motions along the line of sight of the observer. This combination makes the redshift-space catalogues appear distorted with respect to the real-space ones, and it can be reproduced from N-body simulations, since positions and velocities are known. Currently, the modelling of redshift-space distortions provides a powerful tool to test the theory of gravity, by exploring the spatial statistics encoded in the 2PCF, which is anisotropic due to the dynamical distortions.
We consider the first two even multipoles of the 2PCF, the monopole ξ₀(s) and the quadrupole ξ₂(s), taking into account that odd multipole moments vanish by symmetry. The analysis consisted of modelling (i) the individual multipole moments and (ii) both multipole moments simultaneously. To derive cosmological constraints from the clustering signal and quantify the effects of f(R) gravity and massive neutrinos on RSD, we performed a Bayesian analysis to set constraints on the linear growth rate and the linear bias. All numerical tasks were performed with the CosmoBolognaLib.
The Kaiser formula is a good description of the RSD only on very large scales, where non-linear effects can be neglected, but it does not accurately describe the non-linear regime. Thus, with the aim of extracting information from the RSD signal in the non-linear regime, and considering the increasing precision of recent and upcoming surveys, many more approaches have been proposed. There is a vast literature showing the efforts to model the RSD beyond the linear Kaiser model [63, 64, 65, 66], some making use of a phenomenological description of the velocity field and others taking into account higher orders in perturbation theory since, in principle, there is no reason to stop at linear order; other approaches combine both frameworks. A simple alternative to model the redshift-space 2PCF at small scales consists of extending the Kaiser formula by adding a phenomenological damping factor that plays the role of a pairwise velocity distribution, accounting for both linear and non-linear dynamics. Therefore, to construct the likelihood, we consider this model, sometimes called the dispersion model, which introduces a damping function to describe the distortions in the clustering at small scales (Fingers-of-God). This model is accurate enough to quantify the relative differences between the f(R) models with massive neutrinos and ΛCDM. For the Bayesian analysis, the dispersion model is fully described by three parameters (the linear distortion parameter, the bias and the pairwise velocity dispersion), which we constrain by numerically minimising the negative log-likelihood.
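In its Fourier-space form, the dispersion model is the Kaiser term damped by a pairwise-velocity factor. The Lorentzian damping below is one common choice, sketched with hypothetical parameter values rather than the exact implementation used in the analysis.

```python
# Dispersion model: Kaiser boost (b + f mu^2)^2 P(k) times a Lorentzian
# Fingers-of-God damping, with b the linear bias, f the growth rate and
# sigma12 the pairwise velocity dispersion (the three fitted parameters).
def dispersion_model_pk(k, mu, pk_lin, b, f, sigma12):
    kaiser = (b + f * mu ** 2) ** 2 * pk_lin(k)
    damping = 1.0 / (1.0 + 0.5 * (k * mu * sigma12) ** 2)  # Lorentzian FoG
    return kaiser * damping
```

Across the line of sight (μ = 0) the damping is inactive and the model reduces to b² P(k); along the line of sight the FoG term suppresses small-scale power.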
Figure 8 shows the normalised covariance matrices of the redshift-space monopole and quadrupole moments of the CDM haloes, estimated with bootstrap resampling, at three different redshifts up to z = 1.6. The covariance matrices quantify how the scatter propagates into the likelihood and into the final posterior probabilities of the parameters. Then, to assess the posterior distributions of the three model parameters, we perform an MCMC analysis, limiting the fit to an intermediate range of scales and assuming flat priors on all three parameters. Figure 9 shows the posterior constraints obtained from the MCMC analysis for each mock catalogue of the DUSTGRAIN-pathfinder. The figure shows the constraints for all models considered in this work using the monopole (orange), quadrupole (green) and monopole plus quadrupole (blue), where the intersection regions correspond to the joint analysis of the multipoles. Thus, the joint analysis of the redshift-space monopole and quadrupole is able to break the degeneracy between the growth rate and the bias, even in the presence of massive neutrinos. In Figure 10, we show the theoretical behaviour of the linear distortion parameter and the growth factor as a function of redshift for each family of models, assuming a flat ΛCDM background.
From the posterior contours, at the confidence levels considered, the results suggest that the clustering information encoded in the first two non-null multipole moments of the 2PCF can discriminate among the alternative MG models considered in this work at high redshift. At low redshift the f(R) models studied are statistically indistinguishable from ΛCDM, and further studies are required to break this degeneracy.
In this chapter we have introduced the theoretical framework of modern cosmology in the presence of massive neutrinos. We emphasised structure formation and the statistical description of the density field, as well as measurements of galaxy clustering, and discussed redshift-space distortions and the differences between clustering in real and redshift space. In recent years, the spatial distribution of matter on cosmological scales has become one of the most efficient probes to investigate the properties of the universe: to test gravity theories on large scales, to explore the dark sector and the origin of the accelerated expansion of the universe, and to constrain alternative cosmological models.
In the context of modified gravity and massive neutrino cosmologies, we investigated the spatial properties of the large-scale structure by exploiting the DUSTGRAIN-pathfinder simulations, which follow simultaneously the effects of f(R) gravity and massive neutrinos. These are two of the most interesting scenarios that have been recently explored to account for possible observational deviations from the standard ΛCDM model. In particular, we studied whether the redshift-space distortions in the 2PCF multipole moments can be effective in breaking the cosmic degeneracy between these two effects. We analysed the redshift-space distortions in the clustering of dark matter haloes at different redshifts, focusing on the monopole and quadrupole moments of the two-point correlation function, both in real and redshift space. The deviations with respect to the ΛCDM model have been quantified in terms of the linear growth rate parameter. We found that the multipole moments of the 2PCF provide a useful probe to discriminate between ΛCDM and modified gravity models, especially at high redshifts, even in the presence of massive neutrinos. The linear growth rate constraints that we obtain from all the analysed mock catalogues are statistically distinguishable from the ΛCDM predictions at high redshifts.
We thank Lauro Moscardini, Federico Marulli and Alfonso Veropalumbo for the continuous development of the CosmoBolognaLib and their suggestions during this project. We also thank Carlo Giocoli and Marco Baldi for the crucial work performing the DUSTGRAIN-pathfinder runs.
Jorge Enrique García-Farieta and Rigoberto Ángel Casas Miranda (May 22nd 2020). Massive Neutrinos and Galaxy Clustering in f(R) Gravity Cosmologies. IntechOpen. DOI: 10.5772/intechopen.92205.