This chapter reviews how modeling cold dark matter as weakly interacting massive particles (WIMPs) becomes increasingly constrained as models have to face stringent cosmological and phenomenological experimental results, as well as internal theoretical requirements like those coming from a renormalization-group analysis. The review is based on work done on a two-singlet extension of the Standard Model of elementary particles. We conclude that the model stays viable in physically meaningful regions that will soon be probed by direct-detection experiments.
- cold dark matter
- light WIMP
- extension of Standard Model
- rare decays
Dark matter accounts for about 26.5% of the total mass-energy density of the Universe, but we still do not know what it is. It is called dark because it is not accounted for by the visible matter, the conventional baryons and leptons, which make up about 4.9% of the total mass-energy density. As it clearly interacts through gravity, some argue that it could still be baryonic, in the form of massive astrophysical compact halo objects (MACHOs) that emit dim or no light, or some sort of huge gravitational objects like galaxy-sized black holes. Indeed, such high concentrations of matter would bend passing light, the so-called gravitational lensing phenomenon, including microlensing, in ways we can detect. But if dark matter consisted of such compact objects, lensing events would occur significantly more often than observation accounts for.
Neutrinos have long been thought of as composing the dark matter around us. However, Standard Model neutrinos are light, and so too fast-moving (hot) to compose the (cold) dark matter structures we see. But sterile neutrinos, non-Standard-Model particles, can be heavier, and so could be dark matter candidates. This possibility has been reignited by the recent detection of an X-ray emission line at an energy of about 3.5 keV coming from galaxy clusters, the Andromeda galaxy, the Galactic Center and the Draco dwarf spheroidal galaxy. This line is consistent with the decay of a sterile neutrino.
In fact, there is by now quasi-consensus that dark matter ought to be understood outside the realm of conventional matter. One other scenario is that of (pseudo)scalar particles of tiny mass, the so-called ultralight axions, which could account for the dark matter content of the Universe. This scenario is supported by high-resolution cosmological simulations. Axions originated in quantum chromodynamics, the theory of quarks and gluons, in connection with the axial anomaly and the strong CP (charge conjugation-parity) violation problem. But like anything else related to dark matter, they elude detection. The Axion Dark Matter Experiment (ADMX) may bring answers in the near future.
But maybe the most popular candidate for dark matter is an electrically neutral and colorless weakly interacting massive particle (WIMP). Such a particle originated in supersymmetric (SUSY) extensions of the Standard Model. The most obvious such candidate is the neutralino, a neutral R-odd supersymmetric particle. Indeed, R-parity conservation implies that neutralinos are only produced or destroyed in pairs, so the lightest of them is stable and thus a natural dark matter candidate. However, as rich, attractive and beautiful as SUSY can be, supersymmetric particles continue to elude detection at the Large Hadron Collider (LHC), at least in Run 1 experiments at center-of-mass energies of up to 8 TeV. Run 2 experiments at 13 TeV are currently under way, targeting a substantially larger integrated luminosity, and so are tests of more involved, less constrained formulations of supersymmetry.
It must be stressed that until now, we have not detected dark matter, at least not in a conclusive manner. Indeed, we know dark matter is there only because of its gravitational interactions, and this is why and how we believe it contributes about a quarter of the mass energy of the known Universe. But we still do not know whether dark matter really interacts with ordinary matter. We believe it does, even if very weakly. We believe these interactions can yield signals with enough strength so that we can detect dark matter or produce it in collisions of Standard Model particles .
We must also understand that a detection process relies primarily on a theory or a model. A theory like supersymmetry, which originated in the realm of elementary particle physics, is devised as an extension of the Standard Model based on a yet-to-be-detected symmetry between fermionic and bosonic states. Its dark matter connection came only later. In fact, in the rather long period between the proposal of the Higgs mechanism and the detection of the Higgs particle, various extensions of the Standard Model were proposed in order to alleviate some of its shortcomings, the so-called "Beyond the Standard Model" (BSM) physics. A number of these BSM models carry extra fields, meaning extra particles with specific properties. To this day, such particles have never been detected. With time and a change of focus, the most stable of these hypothetical particles were proposed as candidates for dark matter, many in the form of WIMPs. The advantage of such a paradigm is clear: the calculational techniques that had built strength in the realm of particle physics were readily put at the service of the dark matter search with little extra development effort. The experimental framework was also ready. Such a state of affairs could partly explain the popularity of WIMP physics compared to other possible scenarios for dark matter.
Accordingly, many experiments have been devised specifically to detect dark matter, each based on a specific scheme that in turn rests on a specific scenario. There are experiments that try to detect dark matter directly, through the recoil energy deposited when a WIMP collides with an ordinary nucleus. The low-background DAMA (NaI) and then DAMA/LIBRA (NaI[Tl]) experiments at Gran Sasso in Italy add a twist to this by trying to detect dark matter in the galactic halo via its suggested model-independent annual flux modulation. The CoGeNT experiment in Soudan (Minnesota, USA) also tries to detect this annual modulation, but in the region of low WIMP masses. The CDMS I (Stanford, USA), then CDMS II (Soudan, USA), and now SuperCDMS (Soudan, USA, then SNOLAB, Sudbury, Canada) experiments perform direct detection, measuring the ionization and phonon signals resulting from a WIMP-nucleus collision, and are sensitive in the low-mass region. The XENON10, then XENON100, then the coming XENON1T experiments, all in Gran Sasso, Italy, use liquid xenon as a detecting medium for WIMP-nucleon and WIMP-electron collisions. There is also the Large Underground Xenon (LUX) experiment (South Dakota, USA), a direct-detection experiment, and its more sensitive successor, the LZ experiment. The CRESST experiment, followed by CRESST II, both at Gran Sasso, Italy, also try to detect low-mass dark matter directly. We also have the series of EDELWEISS experiments (Modane, France), which target low-mass WIMPs. The list is long and cannot be fully accounted for here due to space constraints.
The above experiments are terrestrial, with instruments buried underground to reduce noise. But there are also space-borne experiments that carry out indirect detection in cosmic rays. There is the Fermi Gamma-Ray Space Telescope (Fermi-LAT), which has found an excess of gamma rays in the Galactic Center that cannot be explained by conventional sources and which is compatible with the presence of dark matter. Fermi-LAT uses what we call indirect methods, namely, collecting gamma-ray signals and subtracting from these those emitted by all possible known sources. Another space-borne experiment is the Alpha Magnetic Spectrometer (AMS) aboard the International Space Station, collecting and analyzing signals from cosmic rays. In addition, the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) experiment is a particle identifier that uses a permanent-magnet spectrometer for direct measurements of cosmic rays in space.
A third prong in the dark matter search enterprise is to produce it in particle colliders like the LHC. There is an added difficulty here, which is that we do not know in which mass range to look. It could well be that the presently available center-of-mass energy, 13 TeV, is not sufficient. Nevertheless, the search for dark matter at the LHC is intense. One reason is that, experimentally, this is feasible now: small amounts of missing energy and transverse momentum can now be detected. Note that the present detectors are not built to detect dark matter directly; rather, the latter would appear as missing energy or missing momentum. For example, one now looks at events in which a Z boson and missing transverse momentum are produced in a proton-proton collision at 13 TeV. The Z boson decays into two charged leptons, a recognizable signature, and an accompanying missing transverse momentum would indicate the production of dark matter in the process. A similar search, conducted previously by the CMS Collaboration on Run 1 data, found no evidence of new physics and hence set limits on dark matter production. A recent search performed by the ATLAS Collaboration at 13 TeV also reported no evidence.
What should be clear by now is that interpreting signals as dark matter necessitates modeling. On the other hand, any model needs experimental results to restrict the range of its free parameters, to fine-tune these parameters and, ultimately, in many cases, to be eliminated. The aim of this chapter is to shed light on the main steps a phenomenologist takes when building a model for dark matter and then testing it against experimental results. It is an attempt to look into the modeling process itself, from the "cradle to the grave," so to speak. The discussion is based on a model proposed in Ref.  for cold dark matter, exposed to particle-physics phenomenology in Ref. , and further restricted by internal consistency in Ref. . We will see how the parameters of the model are gradually constrained, and how the region of viability is reached. To carry out the discussion smoothly, we have chosen a model that is simple enough to avoid the confusion created by the often involved details of the calculations and the potential complexity of the model itself, yet rich enough to accommodate a vast range of experimental results. The material presented in this chapter is drawn from the works just cited.
This chapter is organized as follows. After this Introduction, Section 2 motivates and then presents the model based on WIMP physics, namely, a two-singlet extension of the Standard Model of elementary particles. We will try to avoid lengthy arguments and focus on the essentials. Section 3 shows how the measured dark matter relic density constrains the value of the dark matter annihilation cross-section, a constraint any model has to satisfy; we then discuss how the two-singlet extension fits into this, and add to it a perturbativity ingredient. Section 4 confronts the model with the exclusion bounds from direct-detection experiments. Section 5 takes the two-singlet model into the arena of particle phenomenology and sees how it copes with rare meson decays. Section 6 goes back to the fundamentals and runs a renormalization-group analysis to inquire into the sustainability of the model. Section 7 puts all these constraints together, determines the regions of viability of the model, and closes with concluding remarks.
2. A model for dark matter: motivation and parametrization
As mentioned in the Introduction, the most popular candidate for dark matter is an electrically neutral colorless weakly interacting massive particle (WIMP), and the neutralino, the lightest supersymmetric particle, is a robust fit for this role. However, as explained in Ref.  and references therein, it is hard to argue in favor of a neutralino when it comes to light cold dark matter, say, a WIMP mass of up to 10 GeV. In addition, up to now, we have not detected supersymmetric signatures at the LHC .
Therefore, with no prior hints as to what the internal structure of the WIMP might be, one adopts a bottom-up approach, in which one extends the Standard Model by adding to it the simplest of fields, one real spinless scalar, which will be the WIMP. This field must be a Standard Model gauge singlet so that we avoid any "direct contact" with any of the Standard Model particles. It is allowed to interact with visible particles only via the Higgs field. It is made stable against decay by enforcing upon it the simplest of symmetries, a discrete Z2 symmetry that does not break spontaneously. This construction is called the minimal extension of the Standard Model. In view of its cosmological implications, the minimal extension was first proposed in Ref.  and has been extensively studied and explored in Ref. . However, this model is shown in Ref.  to be inadequate if we want the WIMP to be light.
In the logic of this bottom-up approach, adding another real scalar seems the natural step forward. This field will also be endowed with a Z2 symmetry, but this one will be spontaneously broken, the reason being to open new channels for dark matter annihilation; this increases the corresponding annihilation cross-section, which in turn allows smaller WIMP masses, something we want to achieve. Needless to say, this auxiliary field must also be a Standard Model gauge singlet.
Therefore, we extend the Standard Model by adding two real, spinless, Z2-symmetric fields: the dark matter field, for which the Z2 symmetry is unbroken, and an auxiliary field, for which it is spontaneously broken. Both fields are Standard Model gauge singlets and hence can interact with "visible" particles only via the Higgs doublet, taken in the unitary gauge. We also assume all processes to be calculable in perturbation theory. The details of the spontaneous breaking of the electroweak gauge symmetry and the additional auxiliary Z2 symmetry are left aside.
The potential function that involves the physical scalar Higgs field , the dark matter field , and the physical auxiliary scalar field is as follows:
The quantities appearing are the masses of the corresponding fields, and all the other parameters are real coupling constants. Also, the part of the Standard Model Lagrangian that is relevant to dark matter annihilation is given in terms of the physical fields by the following potential function:
The coupling constants in the above expression are given by the following relations, in which appear the masses of the fermions and of the W and Z gauge bosons:
The angle involved is the mixing angle between the Higgs and auxiliary fields. The two vacuum expectation values, both positive, are those of the Higgs and auxiliary fields, respectively.
This model has nine free parameters to start with: three mass parameters and six coupling constants. As already mentioned, perturbativity is assumed, which means all the original coupling constants are small. The dark matter self-coupling constant in Eq. (1) does not enter the lowest-order calculations we will consider, so this parameter stays free for the time being and we are left with eight parameters. The spontaneous breaking of the electroweak and Z2 symmetries for the Higgs and auxiliary fields, respectively, introduces two vacuum expectation values. That of the Higgs field is fixed experimentally at about 246 GeV and, for the present discussion, we fix that of the auxiliary field at the order of the electroweak scale. In addition, the Higgs mass is now known to be about 125 GeV. Hence, five free parameters remain. Three of these are chosen to be the two physical masses of the dark matter and auxiliary fields, plus the mixing angle between the Higgs and auxiliary fields. The last two parameters we choose are the two physical mutual coupling constants, dark matter-Higgs and dark matter-auxiliary; see Eq. (1).
3. Constraints from cosmology and perturbativity
Any model of dark matter has to comply with astrophysical observations. Indeed, dark matter is believed to have been produced in the early Universe. The most popular paradigm for this production is the so-called "freeze-out scenario," by which dark matter, thought of as a set of elementary particles, interacts with ordinary matter weakly but with enough strength to maintain common thermal equilibrium at high temperature. However, as the cosmos cools down, at some temperature the expansion rate of the Universe becomes higher than the rate of dark matter particle annihilation, which forces dark matter to decouple from ordinary matter, hence a "freeze-out"; that temperature is called the freeze-out temperature. The DM relic density is then essentially the one we measure today:
where h is the Hubble constant in units of 100 km/s/Mpc.
In a model where dark matter is seen as WIMPs that can annihilate into ordinary elementary particles, the relic density can be related to the DM annihilation cross-section. Indeed, in the framework of the standard cosmological model, one can derive the following relation:
In this relation appear the Planck mass, the dark matter mass, the ratio of the dark matter mass to the freeze-out temperature, and the number of relativistic degrees of freedom with mass less than the freeze-out temperature. The thermally averaged quantity is the annihilation cross-section of a pair of dark matter particles multiplied by their relative speed in their center-of-mass reference frame. Solving (4) with the current value (5) of the relic density, for a mass-to-freeze-out-temperature ratio between 19.2 and 21.6, we obtain the following constraint on the annihilation cross-section:
This is one major constraint that any WIMP model like the one we discuss here has to satisfy. Indeed, the thermally averaged cross-section is calculable in perturbation theory, and so the implementation of (6) induces an admittedly complicated but important relation between the free parameters of the model, reducing their number by one. The constraint (6) can also be used to examine aspects of the theory like perturbativity. To implement perturbativity in the present two-singlet model, we use (6) to obtain the mutual coupling constant between the DM field and the auxiliary field in terms of the dark matter mass, for given values of the DM-Higgs coupling, and study its behavior to tell which dark matter mass regions are consistent with perturbativity. It should be mentioned that once the two mutual coupling constants are small, all the other physical coupling constants are small too.
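For orientation, the size of the cross-section that (6) encodes can be recovered by inverting the standard freeze-out relation quoted above. The sketch below uses the textbook formula and constants, not the model-specific calculation; the default values of the ratio x_f and of the number of degrees of freedom are illustrative choices:

```python
import math

M_PL = 1.22e19               # Planck mass in GeV
OMEGA_H2 = 0.1198            # measured dark matter relic density
GEV_M2_TO_CM3_S = 1.17e-17   # converts <sigma v> from GeV^-2 to cm^3/s

def required_sigma_v(x_f=20.0, g_star=90.0):
    """<sigma v> (cm^3/s) reproducing OMEGA_H2 in the standard freeze-out
    estimate  Omega h^2 ~ 1.07e9 GeV^-1 * x_f / (sqrt(g_star) * M_PL * <sigma v>),
    with x_f the mass-to-freeze-out-temperature ratio (19.2-21.6 in the text)."""
    sv_gev = 1.07e9 * x_f / (math.sqrt(g_star) * M_PL * OMEGA_H2)  # GeV^-2
    return sv_gev * GEV_M2_TO_CM3_S
```

For x_f around 20, this gives a few times 10⁻²⁶ cm³/s, the canonical weak-scale value expressed by (6).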
The thermally averaged cross-section is calculated in perturbation theory using all possible annihilation channels the model allows for. As the model has many parameters, the behavior of the mutual coupling constant is bound to be rich, and sampling is therefore necessary. In this review, we briefly comment on the behavior of this coupling for two sets of parameters; a more substantial discussion can be found in Ref. .
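In practice, extracting the mutual coupling at fixed dark matter mass from the relic-density constraint is a one-dimensional root-finding problem. A minimal sketch, in which `sigma_v_toy` is a purely hypothetical stand-in for the model's full channel-summed cross-section (only the inversion procedure is the point here):

```python
def solve_coupling(m0, target_sv=1.8e-26, lo=1e-4, hi=4.0):
    """Bisection for the mutual coupling reproducing the relic-density target.

    sigma_v_toy below is NOT the model's cross-section; it is an invented
    monotonic stand-in (quadratic in the coupling, falling with the mass).
    In the real analysis one would sum all open annihilation channels.
    """
    def sigma_v_toy(lam, m0):
        # toy scaling: grows with the coupling, suppressed at large m0
        return 1.0e-25 * lam**2 / (1.0 + (m0 / 50.0) ** 2)

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sigma_v_toy(mid, m0) < target_sv:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The multiple solution branches and "deserts" described below arise because the model's true cross-section, unlike this toy one, is not monotonic in the coupling near thresholds and resonances.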
The first set of parameters is a small mixing angle, a weak mutual DM-Higgs coupling constant, and a light auxiliary-field mass. The corresponding behavior of the mutual DM-auxiliary coupling versus the dark matter mass is shown in Figure 1. In this regime, the first feature we see is that the relic-density constraint on dark matter annihilation forbids the smallest WIMP masses. Furthermore, just above the b-quark threshold, the mutual coupling constant starts at a value that, while perturbative, is roughly 80-fold larger than the mutual Higgs coupling constant. Then, as the DM mass increases, the coupling decreases, steeply at first, more slowly afterward. Just before a first crossover mass, the coupling hops onto another solution branch that is just emerging from negative territory, returns to the first branch precisely where the latter now carries smaller values, and then jumps up again onto the second branch as the first crosses the axis downward. It climbs this branch with a moderate slope until the annihilation channel into auxiliary particles opens. Just beyond that threshold, there is a sudden fall to a value about half the previous one, and the coupling stays flat until it starts increasing, sharply after 60 GeV. In the mass interval m0 ≃ 66–79 GeV, there is a "desert" with no positive real solutions to the relic-density constraint, and hence no viable dark matter candidate. Beyond the desert, the mutual coupling constant keeps increasing monotonically, with small notches at the gauge-boson mass thresholds. As it increases, its values remain perturbative.
The second set of parameters we feature keeps a small Higgs mixing angle, with an increased DM-Higgs mutual coupling constant and a moderate auxiliary-field mass. The behavior of the mutual DM-auxiliary coupling versus the DM mass is displayed in Figure 2. Here too, no viable DM masses exist below a minimum value, at which the coupling starts high. It decreases with a sharp change of slope at the b-quark threshold, then makes a sudden dive at about 5 GeV, changes branch, and descends until the point where it jumps back onto the previous branch just before the latter crosses into negative territory. It drops sharply at the opening of the auxiliary-particle channel and then increases slowly until the desert starts: beyond it, no viable WIMP masses exist. As we see, for this set of parameters, the model constrains the dark matter mass inside a finite interval, with perturbative coupling constants.
With the same mixing angle and mutual coupling constant, larger auxiliary-field masses yield roughly the same behavior, but with values of the mutual coupling that could be nonperturbative. For one such mass, for example, the mutual coupling starts very high and then decreases rapidly. There is the usual change of branches, and a desert starting at about 49 GeV, a behavior peculiar in that the desert starts below the opening of the auxiliary-channel annihilation threshold. In other words, the dark matter annihilates into the light fermions only, and the model is perturbatively viable in the range of 20–49 GeV.
4. Constraints from direct detection
Perhaps the best-known constraints on a WIMP model are those coming from direct-detection experiments like the many cited in the introductory section. In such experiments, the signal sought would typically come from the elastic scattering of a WIMP off a nonrelativistic nucleon target. However, as mentioned in the Introduction, until now, none of these direct-detection experiments has yielded an unambiguous dark matter signal. Rather, with increasing precision from one generation to the next, these experiments put increasingly stringent exclusion bounds on the dark matter-nucleon elastic-scattering total cross-section as a function of the dark matter mass, and because of these bounds, many models get excluded.
Therefore, a theoretical dark matter model like the two-singlet extension we discuss here has to satisfy these bounds to remain viable. For this purpose, we calculate the elastic-scattering cross-section as a function of the dark matter mass for different values of the parameters and compare its behavior against the experimental bounds. The calculation is carried out in sufficient detail in Ref. , and the total cross-section for nonrelativistic DM-nucleon elastic scattering is given by
In this relation appear the nucleon mass and the baryon mass in the chiral limit. The mutual coupling constants are defined in Eq. (1). The relic-density constraint on the dark matter annihilation cross-section (6) has to be imposed throughout. In addition, we now require that the coupling constants be perturbative, which we do by imposing an additional upper bound on their size.
Generically, as the dark matter mass increases, the detection cross-section starts from high values, slopes down to minima that depend on the parameters, and then picks up moderately. There is structure at the usual mass thresholds, with varying sizes and shapes. Regions coming from the relic-density constraint are excluded, as are new ones originating from the additional perturbativity requirement.
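Checking a predicted cross-section curve against an experimental exclusion bound is mechanical once both are tabulated. A minimal sketch, with made-up numbers standing in for the bound (real bounds are published as tables of the same shape):

```python
import bisect
import math

# Hypothetical tabulated exclusion bound: (mass in GeV, excluded sigma in cm^2).
BOUND_M   = [10.0, 30.0, 60.0, 100.0]
BOUND_SIG = [1e-43, 2e-45, 1e-45, 3e-45]

def bound_at(m0):
    """Bound interpolated linearly in log10(sigma), since exclusion
    curves are customarily drawn on a logarithmic scale."""
    i = bisect.bisect_left(BOUND_M, m0)
    i = min(max(i, 1), len(BOUND_M) - 1)
    x0, x1 = BOUND_M[i - 1], BOUND_M[i]
    y0, y1 = math.log10(BOUND_SIG[i - 1]), math.log10(BOUND_SIG[i])
    t = (m0 - x0) / (x1 - x0)
    return 10.0 ** (y0 + t * (y1 - y0))

def allowed(m0, sigma_pred):
    """A parameter point survives if its predicted cross-section
    sits below the experimental bound at that mass."""
    return sigma_pred < bound_at(m0)
```

Scanning `allowed` over the model's predicted curve is what produces the surviving mass intervals quoted below.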
For the purpose of illustration, we choose three indicative sets of values for the parameters. We start with a small Higgs-mixing angle, a weak mutual DM-Higgs coupling, and a light auxiliary-field mass. The behavior of the elastic-scattering cross-section versus the dark matter mass is shown in Figure 3. There, we see that for the two mass intervals 20–65 GeV and 75–100 GeV, plus an almost singled-out dip, the elastic-scattering cross-section is below the sensitivity of SuperCDMS. However, XENON1T should probe all these masses, except for a few isolated values.
Increasing the auxiliary-field mass has the effect of closing possibilities for very light dark matter and thinning the intervals, as it drives the predicted masses to larger values. Indeed, in Figure 4, in addition to the dip that crosses the SuperCDMS bound but not that of XENON1T, we see acceptable masses in the ranges of 40–65 GeV and from 78 GeV up. The intervals narrow as we descend, surviving XENON1T only as spiked dips at 62 GeV and around 95 GeV.
On the other hand, a larger mutual DM-Higgs coupling constant has the general effect of squeezing the acceptable intervals of the dark matter mass by pushing the cross-section values up, and it may even happen that at some point the model loses predictability altogether. Such a case is shown in Figure 5, which combines the effects of increasing both the coupling and the auxiliary-field mass. As we see, there the model cannot even escape the Cryogenic Dark Matter Search II (CDMS II) bound.
5. Constraints from particle phenomenology
If a dark matter model based on WIMP physics has not already been ruled out by the constraints coming from cosmology, perturbativity and direct detection, it has to undergo the tests of particle phenomenology. To see how this works, we discuss here the constraints on our two-singlet model that come from a small selection of low-energy processes, namely, the rare decays of mesons. The forthcoming discussion is based on work done in Ref. ; there, the interested reader will find a fuller account of this study, together with its relation to Higgs phenomenology. Note that the dark matter relic-density constraint in Eq. (6) and the perturbativity requirement are implemented systematically. Also, as in Ref. , we restrict the discussion to light cold dark matter.
We therefore look at the constraints that come from the decay of a heavy vector meson state into one photon and one auxiliary particle. When kinematically allowed, the branching ratio for this process is given by the relation:
In the above expression appear the mass of the decaying state, its measured leptonic branching ratio, the QCD coupling constant at the relevant scale, the Fermi coupling constant, and the b-quark mass. The function multiplying these incorporates the effect of the QCD radiative corrections given in . However, a rough estimate of the lifetime of the auxiliary particle indicates that it is likely to decay inside a typical particle detector, which means we should take into account its most dominant decay products. We first have the process by which it decays into a pair of pions, with the following decay rate:
Here appear the pion mass and the kaon mass. Chiral perturbation theory is used below the kaon-pair production threshold [43, 44], and the spectator-quark model above it, up to roughly 3 GeV, with dressed light-quark masses. Note that this rate includes all pions, charged and neutral. Above the kaon threshold, kaon pairs and other light hadrons are also produced; the corresponding decay rate is
The auxiliary particle also decays into quark pairs (mainly the heaviest ones kinematically accessible). Including the radiative QCD corrections, the corresponding decay rates are given by
The dressed quark mass and the running strong coupling constant are defined at the energy scale . There is also a decay into a pair of gluons, with the rate
Here appears the QCD coupling constant at the spectator-quark-model scale, between roughly 1 and 3 GeV.
We then have the decay into leptons, with the corresponding rate given by
where the lepton mass enters. Finally, the auxiliary particle can decay into a pair of dark matter particles, with the decay rate:
The relevant coupling constant is given in Eq. (1). The branching ratio for the meson decaying, via the auxiliary particle, into a photon plus any kinematically allowed final state is then
In particular, the dark matter channel corresponds to a decay into invisible particles.
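Numerically, each of these curves is a product of branching fractions: the radiative production branching ratio times the auxiliary particle's own branching fraction into the chosen channel, the latter built from the partial widths above. A sketch with placeholder widths (the real ones follow from the rates just quoted and depend on the mass and mixing angle):

```python
def branching_ratios(partial_widths):
    """Convert a table of partial widths (any common unit) into
    branching fractions that sum to one."""
    total = sum(partial_widths.values())
    return {channel: width / total for channel, width in partial_widths.items()}

def br_gamma_plus(br_production, partial_widths, channel):
    """BR(meson -> gamma + channel) as the product of the radiative
    production branching ratio and the scalar's own branching fraction."""
    return br_production * branching_ratios(partial_widths)[channel]

# Placeholder partial widths of the auxiliary particle -- illustrative only.
WIDTHS = {"pions": 1.0, "muons": 0.2, "taus": 0.5, "dark matter": 8.3}
```

With these invented numbers, the invisible (dark matter) channel dominates, which mirrors the behavior described below once that channel opens.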
The best available experimental upper bounds on the relevant meson-state branching ratios are known for the processes considered. Figure 6 displays the corresponding branching ratios of decays via the auxiliary particle as functions of its mass, together with these upper bounds. Typical branching ratios for the second set of processes, together with their experimental upper bounds, are shown in Figure 7.
If we perform a systematic scan of the parameter space, we find that the main effect of the Higgs-dark matter coupling constant and the dark matter mass is to exclude, via the relic-density and perturbativity constraints, regions of applicability of the model. This is shown in Figures 6 and 7, where the excluded region is indicated. Otherwise, these two parameters have little effect on the shapes of the branching ratios themselves. The onset of the dark matter channel sharply suppresses the other channels, and this one becomes dominant by far. The effect of the mixing angle is to enhance all branching ratios as it increases, through the mixing factor in the couplings. The dark matter decay channel reaches the invisible upper bound already for small mixing and a fairly light auxiliary particle, say, 0.5 GeV. The other channels find it hard to reach their respective experimental upper bounds, even for large values of the mixing angle. There are further constraints that come from particle phenomenology tests; the interested reader may refer to  for further details.
6. Internal constraints
Further constraints on a field-theory dark matter model come from internal consistency. Indeed, one must ask how high in the energy scale the model remains computationally reliable. To answer this question, one investigates the running of the coupling constants as functions of the scale via the renormalization-group equations (RGE); one-loop calculations are amply sufficient. A detailed study of the RGE for our two-singlet model was carried out in Ref. . The brief discussion that follows is drawn from there, and the reader is referred to that article for more details.
In an RGE study, there are two standard issues to monitor, namely, the perturbativity of the scalar coupling constants and the vacuum stability of the theory. Imposing these two as conditions on the model indicates up to what scale it is valid. As mentioned in the Introduction, it had been anticipated that new physics such as supersymmetry would appear at the LHC at the TeV scale. Present results from ATLAS and CMS indicate no such signs yet. One consequence of this is that the cutoff scale may be higher; in this model, the RGE study suggests how much higher it can be. As ever, the DM relic-density constraint is systematically imposed, together with the somewhat less stringent perturbativity restriction.
Remember that the model is obtained by extending the Standard Model with two real, spinless, Z2-symmetric SM-gauge-singlet fields. The potential function of the scalar sector after spontaneous breaking of the gauge symmetry and one of the Z2 symmetries is given in Eq. (1). The potential function before symmetry breaking is the one we need in this section; it is given by:
The first field is still the WIMP, with unbroken Z2 symmetry, and the second is the auxiliary field before the spontaneous breaking of its Z2 symmetry. Both fields interact with the SM particles via the Higgs doublet. The masses as well as all the coupling constants are real positive numbers.
A one‐loop renormalization‐group calculation yields the following β‐functions for the above scalar coupling constants :
As usual, a β-function is by definition the derivative of the corresponding coupling with respect to the logarithm of the running mass scale. Note that the DM self-coupling constant has so far been decoupled from the other coupling constants, but not anymore in view of Eq. (17), now that the running is the focus. However, its initial value is arbitrary and its β-function is always positive. This means it will only increase with the scale, quickly if started from a rather large initial value, slowly if not. Therefore, without losing generality in the subsequent discussion, we fix its initial value to a small number. Hence, here too we effectively still have four free parameters.
Furthermore, the constants , and are the SM electroweak and strong gauge couplings, known  and given to one‐loop order by the expression:
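At one loop the gauge couplings run analytically. A quick sketch, using the standard SM one-loop coefficients (b₁, b₂, b₃) = (41/10, −19/6, −7) in GUT normalization and approximate textbook values of the couplings at the Z pole (these inputs are standard numbers, not taken from the chapter):

```python
import math

# Standard SM one-loop coefficients (GUT-normalized hypercharge) and
# approximate coupling values at the Z pole, M_Z ~ 91.19 GeV.
B = {"g1": 41.0 / 10.0, "g2": -19.0 / 6.0, "g3": -7.0}
G_MZ = {"g1": 0.462, "g2": 0.652, "g3": 1.218}
M_Z = 91.19  # GeV

def g_one_loop(g0, b, t):
    """Analytic one-loop solution: g^2(t) = g0^2 / (1 - 2 b g0^2 t / (16 pi^2))."""
    return math.sqrt(g0**2 / (1.0 - 2.0 * b * g0**2 * t / (16.0 * math.pi**2)))

t = math.log(1.0e4 / M_Z)  # run from M_Z up to 10 TeV
g_10tev = {name: g_one_loop(G_MZ[name], B[name], t) for name in B}
# Asymptotic freedom: g3 decreases toward higher scales,
# while the abelian g1 slowly increases.
```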
where and , for respectively. The coupling constant is that between the Higgs field and the top quark; to one‐loop order, it runs according to Ref.  as:
with , where is the Higgs vacuum expectation value and is the top mass. Note that we take into account the fact that the top‐quark contribution dominates those of the other fermions of the Standard Model.
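The dominance of the QCD term in the top-Yukawa running can be illustrated with the standard one-loop β-function, (16π²)β_yt = yt[(9/2)yt² − 8g₃² − (9/4)g₂² − (17/12)g₁²] (hypercharge-normalized g₁). In the sketch below the electroweak couplings are frozen at their Z-pole values and g₃ runs analytically; this is a simplification for illustration, not the chapter's full coupled system:

```python
import math

PI2_16 = 16.0 * math.pi**2

def g3(t, g30=1.218, b3=-7.0):
    # One-loop analytic running of the strong coupling from M_Z.
    return math.sqrt(g30**2 / (1.0 - 2.0 * b3 * g30**2 * t / PI2_16))

def run_yt(yt0=0.994, t_max=math.log(1.0e4 / 91.19), n=4000,
           g1=0.357, g2=0.652):
    """Euler-integrate the one-loop top-Yukawa RGE, freezing g1 and g2 at M_Z."""
    dt = t_max / n
    yt = yt0
    for i in range(n):
        t = i * dt
        beta = yt * (4.5 * yt**2 - 8.0 * g3(t)**2
                     - 2.25 * g2**2 - (17.0 / 12.0) * g1**2) / PI2_16
        yt += dt * beta
    return yt

yt_10tev = run_yt()  # the QCD term wins: yt decreases toward higher scales
```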
After the two spontaneous symmetry breakings, we end up with the two vacuum expectation values: for the Higgs field , and for the auxiliary field . In this section, we take . Above , the fields and parameters of the theory are those of (16); below , they are those of Eq. (1). We take the values of the physical parameters at the mass scale . The initial conditions for the coupling constants in (16), in terms of these physical free parameters, are as follows:
Note that, normally, as we go down in mass scale, we should match quantities at the thresholds , , and . However, the corrections to (20) are of one‐loop order times or , small enough to neglect for our present purposes. The perturbativity constraint we impose on all dimensionless scalar coupling constants is . Vacuum stability, in turn, requires that for the self‐coupling constants , and , together with the conditions:
for the mutual couplings , and .
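As a sketch, the two requirements can be bundled into a single point-wise check. The chapter's actual perturbativity bound and the specific stability inequalities of (21) are not reproduced in this chunk, so the code below uses 4π as an illustrative bound and the generic two-field bounded-from-below condition λ_ij > −2√(λ_i λ_j) as a stand-in, which, consistently with the footnote, allows negative mutual couplings:

```python
import math

def perturbative(couplings, bound=4.0 * math.pi):
    # Illustrative bound; the chapter's actual perturbativity limit may differ.
    return all(abs(lam) < bound for lam in couplings)

def stable(self_couplings, mutual_triples):
    """self_couplings: iterable of self-couplings (required positive).
    mutual_triples: tuples (lam_ij, lam_i, lam_j) checked against the generic
    stand-in condition lam_ij + 2*sqrt(lam_i*lam_j) > 0 (not the chapter's
    exact Eq. (21)), which tolerates mildly negative lam_ij."""
    if any(lam <= 0.0 for lam in self_couplings):
        return False
    return all(lij + 2.0 * math.sqrt(li * lj) > 0.0
               for lij, li, lj in mutual_triples)

# A mildly negative mutual coupling can still be stable:
ok = stable([0.13, 0.2, 0.1], [(-0.1, 0.13, 0.2)])   # passes
bad = stable([0.13, 0.2, 0.1], [(-0.4, 0.13, 0.2)])  # fails
```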
Figure 8 displays the behavior of the self‐couplings under the RGEs for , and . The dramatic effect is on the Higgs self‐coupling constant , which quickly runs into negative territory, at about , rendering the theory unstable beyond this mass scale. This is better displayed in Figure 9, where the renormalization‐group (RG) behavior of is shown by itself. Such a negative slope for is expected, given the negative contributions to in (17). The coupling constant dominates the other couplings and controls perturbativity, leaving its region much later, at about . A somewhat general trend emerges: the non‐Higgs SM particles tend to flatten the runnings of the scalar couplings.
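The mechanism driving the Higgs quartic negative (the −6yt⁴ term in its one-loop β-function overwhelming the small +24λ² term) can be shown in a few lines. The sketch below uses the pure-SM one-loop β-function with yt and the gauge couplings frozen at the Z pole; freezing yt overstates the effect, so the crossing scale comes out much lower than in a full coupled run. This is the SM expression only, not the model's Eq. (17), which contains additional singlet contributions:

```python
import math

# Inputs frozen at M_Z (approximate values, hypercharge-normalized g1).
YT, G1, G2 = 0.994, 0.357, 0.652
LAM0 = 125.0**2 / (2.0 * 246.0**2)  # lambda(M_Z) ~ mh^2 / (2 v^2) ~ 0.129

def beta_lambda(lam):
    """Standard one-loop SM beta-function for the Higgs quartic coupling."""
    gauge = (3.0 / 8.0) * (2.0 * G2**4 + (G2**2 + G1**2) ** 2)
    return (24.0 * lam**2 - 6.0 * YT**4 + gauge
            + lam * (12.0 * YT**2 - 9.0 * G2**2 - 3.0 * G1**2)) / (16.0 * math.pi**2)

def crossing_scale(lam0=LAM0, t_max=10.0, n=10000):
    """Return t = ln(mu/M_Z) at which lambda first turns negative (None if never)."""
    dt = t_max / n
    lam = lam0
    for i in range(n):
        lam += dt * beta_lambda(lam)
        if lam < 0.0:
            return (i + 1) * dt
    return None

t_cross = crossing_scale()  # finite: the quartic does run negative
```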
The runnings of the mutual coupling constants for the same set of parameter values are displayed in Figure 10. They too are flattened by the other SM particles, but they stay positive, and they dwell well below the self‐couplings. Increasing and raises the mutual coupling , but not the other two, above in some regions.
Raising also makes the self‐couplings and run faster while affecting very little. It also makes the mutual coupling start higher, and thus demarcated from and . By contrast, the effect of is not very dramatic: the self‐couplings are little affected, and the mutual couplings merely evolve differently, without any particular boost to . Details and further comments are found in .
7. All constraints together: viability regions
The above RGE analysis taught us two lessons: (i) the two couplings and control perturbativity; (ii) the change of sign of controls vacuum stability. Equipped with these indicators, we can systematically locate the regions in parameter space in which the model is viable. We now have a number of tools at our disposal. First, the DM relic‐density constraint (6), which has been and will continue to be applied throughout. Next, the RGE analysis of the previous section: we require both and to be smaller than , and to be positive. From the phenomenological implications deduced in Section 5, we retain only two: the mixing angle and the physical self‐coupling are to be chosen small. Last, the model must comply with the experimental direct‐detection upper bounds; the condition we impose is that of Eq. (7) lie within the XENON100 upper bounds . We vary and and track the viability regions in the plane. The relevant mass range for and is 1–160 GeV: there are no reliable data to discuss below the GeV scale, and going beyond takes us outside the perturbativity region.2
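Operationally, the viability scan amounts to a grid over the mass plane where each point must pass every constraint simultaneously. The sketch below is schematic: the constraint predicates are hypothetical placeholders, since the real relic-density, RGE, and direct-detection checks each require a full computation:

```python
def viability_region(m0_grid, m1_grid, constraints):
    """Return the grid points (m0, m1) passing every constraint predicate."""
    return [(m0, m1)
            for m0 in m0_grid
            for m1 in m1_grid
            if all(passes(m0, m1) for passes in constraints)]

# Hypothetical placeholder predicates, for illustration only:
relic_density_ok = lambda m0, m1: True          # stand-in for the full relic computation
perturbativity_ok = lambda m0, m1: m0 <= 160.0  # mirrors the upper end of the scan range
direct_detection_ok = lambda m0, m1: m0 >= 1.0  # no reliable data below the GeV scale

m0_grid = [float(m) for m in range(1, 161, 5)]
m1_grid = [float(m) for m in range(1, 161, 5)]
region = viability_region(
    m0_grid, m1_grid,
    [relic_density_ok, perturbativity_ok, direct_detection_ok])
```

With real predicates in place of the stand-ins, plotting `region` reproduces the kind of viability maps shown in Figures 11 through 13.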
One important issue must be addressed before we proceed: up to what scale do we want the model to be perturbatively predictive and stable? The maximum mass scale should not be very high. One reason, more conceptual, is that we want the model to be an intermediary between the current Standard Model and some possible structure at higher energies. Another, more practical, is that too high a is too restrictive for the parameters themselves. From the results of the RGE analysis , a reasonable compromise is to set .
With all this in mind, Figure 11 displays the regions (blue) in which the model is viable for and . The mass is confined to the interval 116–138 GeV, while the DM mass is confined mainly to the region above , the left boundary of which has a positive slope as increases. In addition, makes a small showing in the narrow interval 57–68 GeV. The effect of increasing the mixing angle is to enrich the existing regions without relocating them. This is displayed in Figure 12, for which is increased to . As increases, the region between the narrow band and the larger one to its right gets populated. This means more viable DM masses above , while stays in the same interval.
By contrast, increasing the Higgs‐DM mutual coupling has the opposite effect of shrinking the existing viability regions. To see this, compare Figure 13, for which and , with Figure 12. We indeed see shrunk regions, pushed downward by a few GeV, which is not a substantial relocation. This effect is expected: increasing raises well above 1, so that we leave the perturbativity region sooner. A larger is also caught up by the relic‐density constraint, which tends to exclude such values when is large. The direct‐detection constraint has a similar effect. Further comments can be found in .
8. Concluding remarks
The purpose of this chapter was to help the reader understand how modeling cold dark matter evolves from motivating the model itself to constraining the space of its parameters. We took as prototype a two‐singlet extension to the Standard Model of elementary particles within the paradigm of weakly interacting massive particles.
The first set of constraints the model had to undergo came from cosmology and perturbativity: the model had to reproduce the known relic density of cold dark matter while being consistent with perturbation theory. The second set of tests came from direct detection, in the form of the total elastic cross‐section of a WIMP scattering off a non‐relativistic nucleon, which had to satisfy the bounds set by several direct‐detection experiments. We have seen that the model satisfies all the existing bounds and will soon be probed by the coming XENON1T experiment. The third set of constraints came from particle phenomenology: we have seen how rare decays constrain the predictions of the model for light cold dark matter. The fourth set came from the internal consistency of the model, in the form of viability and stability under running coupling constants via a renormalization‐group analysis. We have concluded that the model can still make sound predictions in important and useful physical regions. We then investigated the regions in parameter space in which the model is viable when all four sets of constraints are applied together, with a maximum cutoff , a scale at which heavy degrees of freedom may start to be relevant. We have deduced that for small and , the auxiliary‐field mass is confined to the interval 116–138 GeV, while the DM mass is confined mainly to the region above , with a small showing in the narrow interval 57–68 GeV. Increasing enriches the existing viability regions without relocating them, while increasing has the opposite effect, shrinking them without substantial relocation.
There is one aspect of the study we have not touched upon in this review: the connection with, and consequences from, Higgs physics. This has been analyzed in Refs. [32, 33]. This aspect is important, too important perhaps to be merely touched upon in this limited space. Such an analysis also needs to be updated in view of the many recent advances in Higgs physics .
Despite all our efforts, dark matter stays elusive. Many models that tried to understand it have failed. The fate of the two‐singlet model may not be different. But this will not be a source of disappointment. On the contrary, failure will only fuel motivation to try and explore new ideas.
- The mutual couplings can be negative as discussed below, see (21).
- In practice, m0 is taken up to 200 GeV, but there are no additional features to report.