Open access peer-reviewed chapter

The Most Probable Cosmic Scale Factor Consistent with the Cosmological Principle, General Relativity and the SMPP

Written By

Arthur N. James

Submitted: 09 May 2021 Reviewed: 08 July 2021 Published: 29 July 2021

DOI: 10.5772/intechopen.99325

From the Edited Volume

Dark Matter - Recent Observations and Theoretical Advances

Edited by Michael L. Smith


Abstract

Current literature on the evolution of the cosmic scale factor is dominated by models using a dark sector. These all involve many conjectures beyond the basic assumption that the Cosmological Principle selects a space–time metric of the Friedmann–Lemaître–Robertson–Walker type through which ordinary Standard Model of Particle Physics matter moves according to General Relativity. In this chapter a different model is constructed from the same basic assumptions but without extra conjectures. It follows the idea, introduced by Boltzmann, that when physically meaningful quantities fluctuate the value which will be observed is the one with the highest probability. This change removes the mathematically incorrect procedure of averaging the matter density before solving Einstein’s Equation, the procedure which forces the introduction of many of the conjectures. In the non-uniform era the evolution of the scale factor is influenced by the formation of structure, and the conjecture of two inconsistent probability distributions for matter through space, one to calculate the scale factor and one to represent structure, is removed. The new model is consistent from the earliest times through to the present epoch. It is open and matches the SNe Ia redshift data, an observation which makes it a viable candidate and implies that it should be fully investigated.

Keywords

  • cosmology
  • gravitation
  • dark matter
  • dark energy

1. Introduction

This first section introduces the motivational background to the study described in this chapter. The study is a response to the difficulties found by an academic physicist trying to upgrade from being an amateur cosmologist, who simply followed conclusions published in the scientific literature, to a more professional stance by finally studying General Relativity late in life. In 1956, before the start of nuclear physics research for a doctoral degree, a lifelong interest in the cosmos was triggered by Martin Ryle’s course in Radio Astronomy, in which he described how, by allowing for Malmquist bias in the 2C Catalogue source counts, he could demonstrate that the cosmos was evolving [1].

The teaching of physics in the early 1950s was largely in the style of natural philosophy, meaning that nature was observed and the phenomena were then modelled by searching for some appropriate mathematics. The current position in cosmology is different. For many years through the middle of the twentieth century the simple assumption of a flat space Friedmann–Lemaître–Robertson–Walker (FLRW) metric was used, despite the absence of any direct observational evidence supporting this choice. Observations of individual objects in the sky are satisfactorily described using such flat space, but there are unobservable consequences, such as horizons, present in the associated cosmology. Later, the mathematical invention of Cosmic Inflation to overcome the horizon problem associated with the flat cosmologies appears to have converted the flatness assumption into accepted folklore not to be questioned.

Since that time cosmology has accumulated many conjectures required to match the real observed Universe, each of which should carry an unknown improbability weighting. Because these weightings accumulate, the old-fashioned way of choosing between different models describing the same observations would be to invoke Occam’s Razor and select the model with the fewest conjectures, so as to improve the odds of being correct.

The study described here is an attempt to use only well-authenticated physics and observations, in a return to basics and a natural philosopher’s method of constructing a model for the evolution of the cosmos. This leads to a new model for the cosmological scale factor which is essentially free from additional conjectures.


2. Background

A hundred years ago Friedmann combined the cosmological principle with Einstein’s Equation to predict ways in which a cosmic metric could change with time; his initial model filled the cosmos with a uniform non-relativistic distribution of matter which slowed down any existing expansion of the cosmos. The symmetry described by the cosmological principle enables the modelling of an expanding, evolving cosmos with curved space–time, because these conditions imply that the background space–time metric must be of the FLRW type. At low matter densities Friedmann’s solution has open space sections; as the density increases the solution appears to change smoothly, through one special solution with a flat space section, into the high density region where the space sections are closed and the Universe collapses back to a point. That description is misleading: in more general situations, when other fluids are also present in the cosmos, there are three disconnected families of solutions, open, flat and closed. Within each family there are many variations in the way the cosmic scale factor can change with time, depending on the particular mixture of substances filling the cosmos. Against any assumed background metric the Universe’s content of ordinary matter can be modelled through the formation of structures using conventional physics; see e.g. Peebles’ textbook [2].

The present epoch of the Universe is characterised by a non-uniform distribution of matter, and a simple Friedmann solution is not possible because Einstein’s Equation is non-linear: the procedures of solving and smoothing must be carried out in that order, they do not commute (see e.g. page 452 of Padmanabhan’s book [3]). When the wrong order is used the source distribution does not properly represent the natural distribution, so the solution to Einstein’s Equation is that for a non-existent situation; it must be nonsense in the context of representing nature. The Dark Sector cosmologies currently in wide use incorporate this mathematical misdemeanour: the cosmology becomes the solution to a problem in which the dust component has to be in two places at once, both in galaxy clusters and simultaneously everywhere else, a property which defies relativity. Awareness of these difficulties suggests questioning all the conjectures which form the essential starting point of Dark Sector models.

The study described here has two themes: one is the blunt rejection of the Dark Energy models; the second is a proposal, using only well-established physical concepts, that an open cosmology is highly probable and should be examined further by groups with the appropriate knowledge, skills and computing resources.

Section 3 describes the essential physical knowledge and observations which provide a common basis for both the new and the Dark Sector models; this knowledge is used to make clear those situations where an additional conjecture is necessary for further progress. All of the science used is well described in many textbooks; the ones quoted here are Padmanabhan’s [3] and Peebles’ [2]. The notation commonly used in applying such basic knowledge to the formation of cosmological models is introduced by describing the uniform radiation era. Section 4 introduces the proposed new method using probability density distributions, via a simple description of the present epoch; the resulting model is an open cosmology. Section 5 describes how the new method may also be used to describe the whole evolution of the cosmic background metric from the radiation era to the present day. Section 6 indicates many of the conjectures which have to be made to construct the Dark Sector models. Section 7 concludes by advocating development of the open model and lists some questions which should be addressed and answered before continuing to use the Dark Sector models.


3. Basic knowledge common to all models

The symmetry described by the cosmological principle implies that the cosmos has a background space–time metric of the FLRW type. The three families of these metrics are distinguished by their space sections being open, closed or flat, with the cosmic scale factor of the metric increasing with time to describe the expansion. The 3D curvature of the space of each time slice is determined by the geometry of the FLRW metric: the flat family has an infinite radius of curvature, whereas the open and closed families have a radius of curvature equal to the scale factor, see e.g. Padmanabhan [3]. This mathematical fact is currently being widely ignored in fitting procedures such as that of the Planck collaboration [4], where small deviations from “flatness” are interpreted as indicating non-flat space sections. Such small curvatures would indicate departures from the cosmological principle, which is the most essential assumption of the entire conception of modelling background space–times.
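For concreteness, the three families can be written in the standard textbook form

$$ds^2 = c^2\,dt^2 - a^2(t)\left[d\chi^2 + S_k^2(\chi)\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right], \qquad S_k(\chi) = \begin{cases} \sinh\chi, & k = -1 \ \text{(open)} \\ \chi, & k = 0 \ \text{(flat)} \\ \sin\chi, & k = +1 \ \text{(closed)} \end{cases}$$

where χ is dimensionless; in the open and closed cases the spatial radius of curvature is therefore the scale factor a(t) itself, while in the flat case it is infinite for every value of a(t).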

Observations and laboratory experiments confirm that the only directly observable substances in the Universe are made from components of the Standard Model of Particle Physics (SMPP). If, in addition to the cosmological principle, the contents of the universe are also uniform across each time slice, then Einstein’s Equation of General Relativity can be used to determine how the cosmic scale factor changes with time. Two types of fluid can be made from the particles of the SMPP: particles moving at relativistic speed make a radiation fluid with significant pressure, whereas particles moving slowly make a dust fluid with zero pressure. The equation of state relating energy density and pressure influences the rate of change of the cosmic scale factor through Friedmann’s Equations, which combine General Relativity with the FLRW metrics.

The energy density and pressure of particulate fluids are thermodynamic intensive quantities and will fluctuate; they should properly be described using the statistical methods of physics and probability distributions. The convention introduced by Boltzmann for pressure and temperature, and now used throughout physics for fluctuating quantities, is to take the most probable value to represent the overall behaviour, with fluctuations measured relative to that value. If electromagnetic signalling is used as an example, two situations must be considered: when many particles are involved the fluctuations are thermal noise, symmetrical around the mode of the distribution, which equals the average; but when only very small numbers are involved the fluctuations are shot noise, with the mode at zero and fluctuations in one direction only. Probability distributions whose most probable value is zero will be used in this study when considering non-uniformity, such as the matter density in the Universe in the present epoch.
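A minimal numerical sketch of these two regimes, added here for illustration (using Poisson photon counts as a stand-in; not part of the original argument):

```python
import numpy as np

rng = np.random.default_rng(0)

# Many-particle regime: thermal noise, symmetric about a mode equal to the mean.
many = rng.poisson(lam=10_000, size=100_000)
print(many.mean(), np.bincount(many).argmax())    # mean and mode both ~ 10000

# Few-particle regime: shot noise, mode pinned at zero, one-sided fluctuations.
few = rng.poisson(lam=0.1, size=100_000)
print(few.mean(), np.bincount(few).argmax())      # mean ~ 0.1, mode = 0
```

In the second case the mean and the mode give genuinely different answers, which is the distinction exploited later for the present-epoch matter density.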

Through the second half of the twentieth century it became apparent that the early universe could be modelled as a radiation fluid uniformly filling an expanding cosmos: the radiation era of the Big-Bang. The cosmic density of light elements predicted by Big-Bang nucleosynthesis verifies both that the expansion rate is that of a radiation era and that the density of SMPP matter is close to the value estimated from observations of astronomical objects. The importance of the Cosmic Microwave Background Radiation (CMBR) to cosmology cannot be exaggerated: it is the most reliable demonstration of the cosmological principle, whilst simultaneously justifying the assumption that before its emission, at the recombination temperature of atomic Hydrogen, the density distribution was uniform. The fluctuations in the CMBR imply that the cosmological principle must be used in a form where the density probability distribution is the same at every spatial point in a particular time slice.

The CMBR marks an important boundary in the development of cosmic expansion: before that time uniformity was ensured by radiation mixing throughout the ionised plasma. The emission of the CMBR signals the end of ionisation and the cessation of mixing, and the universe becomes non-uniform. That the universe is non-uniform is obvious in the present epoch, where the cosmos is populated by many galaxies in complex structures; the way these can form from the minute fluctuations in the CMBR can be explained using ordinary physics [2].

Towards the end of the uniform era the dominating fluid of SMPP particles filling the universe changes in character from the earliest relativistic radiation fluid to a non-relativistic dust fluid with zero pressure. In the normal Friedmann description the equation relating Hubble’s constant H(t) to the cosmic scale factor a(t), normalised so that they take the values H_0 and a_0 = 1 at the present time, is

$$H^2 = H_0^2\left[\Omega_R\,a^{-4} + \Omega_{BM}\,a^{-3} - \frac{k c^2}{(H_0 a)^2}\right] \tag{1}$$

Here Ω_R and Ω_BM are the cosmic densities of radiation and baryonic matter, normalised in the usual way to the density of the flat matter-only Friedmann cosmological model. The curvature term, where k equals −1, 0 or +1, has a different character: it is part of the FLRW geometry and represents the relation between the absolute value of the cosmic scale factor and the spatial curvature of the particular chosen geometry; it has nothing to do with Einstein’s Equation but stems only from the symmetry of the Cosmological Principle [3].

At early times in the expanding universe, when the cosmic scale factor is smallest, the radiation density term Ω_R dominates; but when the baryonic matter density Ω_BM deduced from Big-Bang nucleosynthesis is used, the non-relativistic matter term will have only just become relevant at the emission of the CMBR, and a simple matter-only Friedmann model suggests an open cosmos. The curvature term is always insignificant during the uniform era, which is therefore insensitive to its value; whether k is −1, 0 or +1 cannot be deduced from observations of the uniform era.
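For orientation, the crossover between the two source terms of Eq. 1 can be estimated by equating them; with illustrative density values (assumed here, not quoted in this chapter) of Ω_R ≈ 9 × 10^-5 and Ω_BM ≈ 0.05,

$$\Omega_R\,a_{\mathrm{eq}}^{-4} = \Omega_{BM}\,a_{\mathrm{eq}}^{-3} \quad\Rightarrow\quad a_{\mathrm{eq}} = \frac{\Omega_R}{\Omega_{BM}} \approx \frac{9\times 10^{-5}}{0.05} \approx 2\times 10^{-3},$$

within a factor of a few of the recombination scale factor a ≈ 1/1100 ≈ 9 × 10^-4, consistent with the matter term having only just become relevant at the emission of the CMBR.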


4. The present epoch and the open model

Observations of the local environment show that it is characterised by an array of galaxies, arranged in structures, which are separating according to the Hubble flow. At each point in space Einstein’s Equation ensures that only the local density of matter controls the rate of change of volume of a small element of space around that point: the local matter density changes the local scale factor. Remote massive objects will distort the space element as they approach but return it to its original state after they have passed, not affecting the local scale factor. The matter density probability distribution at each point can be estimated by considering the density distribution over space; it is peaked strongly at zero. The vast majority of points have zero density, which means that the local scale factor there will expand progressively faster than anywhere the matter density is positive. It is this mechanism, in which dense regions expand more slowly than empty regions, that is essential to trigger the formation of structure. The main consequence for the local cosmology, however, is that emptiness is becoming ever more frequent. Applying the maximum probability algorithm, that the best estimate of the cosmic scale factor is the most frequent local scale factor, sets the cosmic scale factor to that of emptiness.

The FLRW metric for empty space is Friedmann’s well-known empty universe solution; a useful demonstration is given by Vishwakarma [5], who also shows that this metric and a Dark Sector model metric fit the SNe Ia redshift data equally well. The empty universe solution has a metric like that of the cosmology proposed by Milne [6] in 1935: open and expanding, with its cosmic scale factor being the product of the velocity of light and the age of the universe. The galaxies are in free fall and simply drift apart, making the Hubble flow. At any time Hubble’s constant is the inverse of the age of the universe, and its present value is the only parameter of this cosmology. The obvious conclusion is that Milne’s metric, a good solution of Einstein’s Equation of General Relativity and the Cosmological Principle, is the best metric with which to approximately model the present epoch.
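For reference, the relations implied by this description are the standard ones for the empty k = −1 solution:

$$a(t) = c\,t, \qquad H(t) = \frac{\dot a}{a} = \frac{1}{t}, \qquad H_0 = \frac{1}{t_0},$$

so the single parameter H_0 fixes both the present age t_0 and the entire expansion history.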

In this argument for the present epoch it is assumed that Nature has exactly solved Einstein’s Equation before any smoothing of the solution is attempted; the model is then completely free from the commutation misdemeanour inherent in every dark energy model. The resulting model, Milne’s metric, is just a normal FLRW model of a smooth universe, neglecting blemishes such as the contamination by many massive galaxies, a simplification which also occurs in all other models. Einstein’s Equation is fully respected in this open model, both in the scale factor solution and in the weak curvature approximation which must then be made to describe the interrelated motions of all the massive objects through Newton’s Laws of Motion.


5. The open model from the radiation era to the present day

If the mode of the density probability distribution, rather than the average, is used for the uniform era, then modelling the scale factor using Eq. 1 is unaffected: uniformity ensures a narrow distribution of densities, and the mode will be almost identical to the mean. After the emission of the CMBR the density distribution becomes non-uniform and changes with time. Using the mode of the density distribution for the non-uniform era produces a model in which the cosmic scale factor responds to the formation of structure, modifying the Friedmann equation as time evolves.

The model is generated by imagining a stepwise numerical time integration in which the change in the cosmic scale factor through a step is calculated using Friedmann’s Equation with the density mode from the start of the step, while the changes to the density distribution function during the step are calculated non-relativistically in the usual way. At the end of the step the density mode will be different, so a different Friedmann Equation must be used in the next step to compute the scale factor. In this way the initial conditions for each step match the solution, and the approximation approaches a continuous integration, with smoothing following solution, as the step size decreases. An idiotically simple example of using the wrong order of procedures in a non-linear problem, and then applying such an integration procedure to evade its effects, is given in the appendix.

In this stepwise process the Friedmann equation to be used is shown below, with the curvature term for an open cosmology (where Ha = c) written explicitly:

$$H^2 = H_0^2\left[\Omega_R\,a^{-4} + \Omega_{BMC}(t)\,a^{-3} + \frac{c^2}{(H_0 a)^2}\right] \tag{2}$$

The term Ω_BMC(t) represents the Baryonic Matter Component which affects the cosmic scale factor at each time; this term falls from its full value Ω_BM at the CMBR time to zero at the present day, the details depending on how the mode of the matter distribution function evolves with time. Because Ω_R and Ω_BMC(t) are small in the present epoch, the curvature term c^2/(H_0 a)^2 of the Milne metric dominates. In any fitting procedure for the CMBR and structure formation the only parameters will be the properties of the CMBR fluctuation distribution, the Hubble constant H_0 and the baryonic matter content Ω_BM, all of which have clear physical meaning in relation to the SMPP and General Relativity and are related to observations.
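A minimal sketch of this stepwise integration is given below. The density values and the schedule omega_bmc(a) for the decline of the matter term are purely illustrative assumptions; in the proposed model that schedule would instead be re-derived at each step from the evolving mode of the matter density distribution.

```python
import numpy as np

OMEGA_R = 9e-5          # illustrative radiation density (assumed, not from the text)
OMEGA_BM = 0.05         # illustrative baryonic matter density (assumed)
A_CMBR = 1.0 / 1100.0   # scale factor at CMBR emission

def omega_bmc(a):
    """Hypothetical placeholder schedule for the Baryonic Matter Component:
    full value at the CMBR, declining to zero by the present day."""
    f = np.clip(np.log(a / A_CMBR) / -np.log(A_CMBR), 0.0, 1.0)
    return OMEGA_BM * (1.0 - f)

def hubble(a, obmc):
    """Eq. 2 in units with c = H0 = 1, so the curvature term c^2/(H0*a)^2
    becomes 1/a^2 and the pure Milne limit is H = 1/a (i.e. a = t)."""
    return np.sqrt(OMEGA_R / a**4 + obmc / a**3 + 1.0 / a**2)

# Stepwise integration of dt = da / (a*H), refreshing the matter term each step.
a_grid = np.geomspace(A_CMBR, 1.0, 20_000)
t = 0.0
for a0, a1 in zip(a_grid[:-1], a_grid[1:]):
    H = hubble(a0, omega_bmc(a0))   # density mode frozen at the start of the step
    t += (a1 - a0) / (a0 * H)

print(f"time from CMBR to the present ~ {t:.3f}/H0 (Milne limit: t0 = 1/H0)")
```

The per-step refresh of omega_bmc is the point of the sketch: each step starts from initial conditions which match the solution so far, in the sense described above.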

If the early value of Hubble’s constant predicted from the CMBR does not match the late value determined from the SNe Ia data, then the value of the matter term required to fit the data may be larger than Ω_BM; perhaps dark matter such as that suspected from studies of the Bullet Cluster [7] will have to be included. Any such dark matter must behave in the same way as SMPP matter according to the rules of General Relativity, and should not be considered in any way similar to the cosmic dust component of Dark Energy models.

A qualitative description of what happens as the Cosmic Baryonic Matter term reduces to zero is simply that isolated discrete lumps of matter are unable to influence the scale factor of a whole time slice at once; such an influence would obviously violate relativity principles. As stars and galaxies form they remove matter from its cosmic role, leaving the smaller curvature term finally to take over control of the expansion.


6. The problems of the concordance model

The simplest Dark Energy models introduce four conjectures into their model for the cosmic scale factor: flat space sections, cosmological Dark Matter (DM), Dark Energy (Λ) and weird properties for the dust component. The Friedmann equation on which all these models are based is shown here with a zero a^-2 curvature term, because of the flatness assumption:

$$H^2 = H_0^2\left[\Omega_R\,a^{-4} + \left(\Omega_{BM} + \Omega_{DM}\right)_{WD}\,a^{-3} + \Omega_\Lambda\right] \tag{3}$$

In this equation both matter terms, baryonic Ω_BM and dark Ω_DM, have been bracketed together because the physical properties of this Weird Dust (WD) are very strange. The Weird Dust appears to have two different density distributions across a time slice, and the modeller chooses one or the other depending on context: a smooth distribution to compute the evolution of the cosmic scale factor, but a non-uniform one to compute structure formation. This inconsistency makes the model bad science: the premise of uniformity for the matter content does not match the outcome of the model calculations, which predict the destruction of uniformity.

Another way of describing the situation is that, in addition to the normal physical properties of an isolated concentrated lump of dust, these models conjecture an extra weird property for it: that of being able instantly to affect the universe’s cosmic scale factor uniformly throughout the universe. Such behaviour is not allowed by General Relativity, which respects the restrictions of a finite velocity of light.

Each of the conjectured Dark components comes with a quantity of substance and an equation of state, all of which are artefacts of the model. These two quantities and two functions provide enough flexibility in the fitting procedures for the solution to respond to the attraction towards flatness imposed on the models by the zero cosmic curvature term in the Friedmann equation. The quantity ‘Ω_k’ = (1 − Ω_M − Ω_Λ), which appears for example in the fitting procedures for Planck data [4], cannot represent curvature in an FLRW metric; its value is generally found to be near zero, which must simply be a measure of the accuracy with which the fit has approached flatness.

One more conjecture is made in Dark Energy models: because flatness introduces a horizon, Inflation must also be conjectured to ensure that the cosmological principle holds through both the uniform and non-uniform eras.


7. Summary and conclusion

A new technique to find an approximate solution to Einstein’s Equation for the cosmic scale factor in a non-uniform universe has been found by going back to basic physics, the SMPP plus General Relativity, and following Boltzmann’s use of maximum probability concepts. This technique causes the cosmic scale factor to be affected by the formation of structures in the universe. The resulting open cosmological model is very different from conventional cosmologies; its simplest prediction is that evolution from the early radiation era to the present epoch produces an empty cosmos with a small contamination of massive galaxies drifting apart, in accordance with the observed SNe Ia redshift data. There appears to be no obvious observational data contradicting this new cosmological model. This open cosmology has a lower density than flat models, so the expansion through the CMBR epoch and the structure-forming era will be slower, giving extra time for the formation of early astrophysical objects. Detailed structure calculations to see how this open model fits the CMBR and Baryonic Acoustic Oscillation data are required to validate it further, computations justified by the simplicity of all the physical concepts required to establish it.

The empty universe’s expanding cosmic scale factor can only be modelled by an open cosmology, implying that the cosmos has always been open and causally connected. A causally connected model of the universe does not require inflation to establish the uniformity of the cosmological principle. Such a model implies little about the most primordial universe: it must be open, contain the SMPP, and have a radiation spectrum matching the details of the CMBR.

Examination of Dark Energy models of the cosmos suggests several questions about the conjectures used in the models, questions which should be answered successfully before proceeding to further investigations. The main objection to the Dark Energy models must be the mathematical misdemeanour of commuting the procedures; that this is a problem has not gone unacknowledged, but it has not been confronted directly, only circumvented by additional conjectures which do not eliminate the misdemeanour (see [8] and references therein). Apart from this misdemeanour, but perhaps in consequence of it, a list of questions requiring answers is:

  • Why select a flat FLRW metric; what model-independent observation supports such a choice?

  • There may be dark matter, such as that apparently concentrated in objects like the Bullet Cluster, but what is the conjectured uniformly distributed cosmic dark matter?

  • What is the meaning of the weird conjectured properties of the dust component in the concordance model Friedmann equation?

  • Where does the eternal supply of energy for the conjectured dark energy come from?

Appendix

The Dark Energy models for the cosmic scale factor average the source term in Einstein’s Field Equations before solving, but the problem is non-linear, which means that a mathematical misdemeanour has been built in right at the beginning of the modelling: the wrong initial conditions have been used, and the answer must be wrong. A peculiar feature of this problem is that the equations are correct and nature provides the correct solution; in consequence the Dark Energy models introduce artefacts, Dark Matter and Dark Energy, to correct the wrong solution towards the correct one. The following idiotically simple problem illustrates how this has been done.

Consider a problem where the answer is known from other considerations to be 1/3. The question is:

“What is the average value of y, where y = x^2, over the range −1 < x < +1?”

Averaging first and then solving gives the answer zero, which is wrong because solving and averaging have been performed in the wrong order: the initial conditions have been altered from the correct situation to an incorrect one. It is possible to adjust this answer by conjecturing an arbitrary parameter which can be tuned to match the known answer. This parameter will be adjusted to the value 1/3; it is an artefact of the method used to solve the problem and has no real meaning.

Now split the range into small segments, average within each segment first, then solve for each segment; a final averaging over all the segments gives an answer quite close to the previously known correct value. By responding to the changing situation as the value of x increases, the error has been vastly reduced and no arbitrary constant is required. The error shrinks with the step size, enabling its magnitude and effect to be detected.
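In code, the three orderings of the toy problem look like this (an illustrative sketch only):

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 2_001)

# Correct order: solve y = x**2 pointwise, then average -> 1/3.
print(np.mean(xs**2))                  # ~ 0.3333

# Wrong order: average x first, then solve -> 0 (the misdemeanour).
print(np.mean(xs)**2)                  # 0.0

# Stepwise repair: average within each small segment, solve per segment,
# then average the segment results; the error shrinks as the segments shrink.
segments = np.array_split(xs, 100)
print(np.mean([np.mean(s)**2 for s in segments]))   # ~ 0.3333
```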

In modelling the development of structure in cosmology the solution is too complicated for normal integration, but essentially the Dark Energy models do the whole problem without acknowledging the changing situation. The models have to introduce artefacts, extra dust and Dark Energy with arbitrary constants and functions, in order to fit observations. The new method, using the mode of the matter probability distribution and leading to the open cosmology, reduces the effect of the mathematical misdemeanour by responding to the changing situation: the smaller the step size, the smaller the error. No extra artefacts have to be introduced.

References

  1. Ryle M. The Observatory. 1955;75:137.
  2. Peebles P. J. E. Principles of Physical Cosmology. Chichester: Princeton University Press; 1993.
  3. Padmanabhan T. Gravitation: Foundations and Frontiers. Cambridge: Cambridge University Press; 2010.
  4. Ade P. A. R. et al. Planck 2015 results. XIII. Cosmological parameters. Astron. Astrophys. 2016;594:A13.
  5. Vishwakarma R. G. Mysteries of the Geometrization of Gravitation. Res. Astron. Astrophys. 2013;13:1409-1422.
  6. Milne E. A. Relativity, Gravitation and World-Structure. Oxford: Clarendon Press; 1935.
  7. Clowe D., Gonzalez A. and Markevitch M. Weak-Lensing Mass Reconstruction of the Interacting Cluster 1E 0657-558: Direct Evidence for the Existence of Dark Matter. Astrophys. J. 2004;604(2):596-603.
  8. Buchert T., Mourier P. and Roy X. On average properties of inhomogeneous fluids in general relativity III: general fluid cosmologies. Gen. Rel. Grav. 2020;52:27.
