It is suspected that the quantum evolution equations describing the micro-world as we know it are of a special kind that allows transformations to a special set of basis states in Hilbert space, such that, in this basis, the evolution is given by elements of the permutation group. This would restore an ontological interpretation. It is shown how, at low energies per particle degree of freedom, almost any quantum system allows for such a transformation. This contradicts Bell’s theorem, and we emphasise why some of the assumptions made by Bell to prove his theorem cannot hold for the models studied here. We speculate how an approach of this kind may become helpful in isolating the most likely version of the Standard Model, combined with General Relativity. A link is suggested with black hole physics.
- foundations quantum mechanics
- fast variables
- cellular automaton
- classical/quantum evolution laws
- Stern-Gerlach experiment
- Bell’s theorem
- free will
- Standard Model
- anti-vacuum state
Since its inception, during the first three decades of the 20th century, quantum mechanics was the subject of intense discussion concerning its interpretation. Since experiments were plentiful, and accurate calculations could be performed to compare the experimental results with the theoretical calculations, scientists quickly agreed on how detailed quantum mechanical models could be arrived at, and how the calculations had to be done.
The question of what the intermediate results of a calculation actually tell us about the physical processes that are going on remained much more mysterious. Opinions have diverged considerably, up to today, one hundred years later.
The historical events that led to this situation are well known, and have been recounted in excellent reports; there is no need to repeat them extensively here. It was realised that all oscillatory motion apparently comes in energy packets, which seem to behave as particles, and that the converse should also be true: all particles with definite energies must be associated with waves. The original descriptions were somewhat vague, but the year 1926 provided a new landmark: Erwin Schrödinger's equation. Originally, the equation was intended to describe just one particle at a time, but soon enough it was found how to extend it to encompass many particles that may be interacting.
Indeed, in his original paper, Schrödinger went quite far in discussing Hamilton's principle, boundary conditions, the hydrogen atom and the electromagnetic transitions from one energy level to another. One extremely useful observation was made by Max Born: the absolute square of a wave function, at some spot in position space, must simply stand for the probability of finding the particle at that spot.
But then, many more questions were asked, many of them very well posed, but the answers sound too ridiculous to be true, and, as I shall try to elucidate, they probably are not.
In contrast, an apparently quite reasonable conclusion was already reached in discussions among scientists in the 1920s, centred around Niels Bohr in Copenhagen, called the ‘Copenhagen Interpretation’. They spelled out the rules for formulating what the equations were, and how to elaborate them to make firm predictions. Indeed, we know very well how to use the equation. The properties of atoms, molecules, elementary particles and the forces between all of these can be derived with perplexing accuracy using it. The way the equation is used is nothing to complain about, but what exactly does it say?
Paul Dirac, for instance, advised not to ask questions that cannot be answered by any experiment; such questions cannot be important. We know precisely how to use Schrödinger's equation; all that scientists have to do is insert the masses and coupling parameters of all known particles into the equation, and calculate. What else can you ask for? Many of my colleagues decided to be strictly 'agnostic' about the interpretation, which is as comfortable a position to take as it was for 19th-century scientists to stay 'agnostic' about the existence of atoms.
The Copenhagen verdict was:
The present author accepts all conclusions the Copenhagen group had reached, except this last one. It will be important to ask for models that can elucidate the events that take place in an experiment. We do wish to know which sensible models can in principle explain the Schrödinger equation and which cannot.
What happens to its wave function when you actually observe a particle? What does it mean if the Schrödinger equation suggests that interference takes place between different possible paths a particle can take? Those questions I can now answer, but others are still way out of reach: the masses and coupling parameters of the elementary particles have been determined by experiment, but we do not have acceptable theories at all to explain or predict their values. If the history of science is something to be taken to heart, it may be that asking unconventional questions will lead to better insights.
The Schrödinger equation is simple and it works, but some of the explanations why it works seem to get the proportions of a Hieronymus Bosch painting. This does not sound right. Almost a full century has passed since the equation was written down, and we still do not know what or whom to believe, while other scientists get irritated by all this display of impotence. Why is it that we still do not agree?
I think I know some of the answers, but almost
Today's theories give us little guidance on how to proceed further from where we are now. The Standard Model explains a lot, but not everything. This is why it is so important to extend our abilities to do experiments as far as we can. Recently, audacious plans have been unfolded by the European particle physics laboratory CERN for building a successor of its highly successful Large Hadron Collider (LHC). While the experimental groups working with the LHC have provided strong evidence supporting the validity of the Standard Model up to the TeV domain, theoreticians find it more and more difficult to understand why this model can be all there is to describe what happens beyond that scale. There must be more; our present theoretical reasoning leads to questions about the extent to which this model can be regarded as 'natural' if more of the same particles at higher energies are allowed to exist, while the existence of totally new particles would be denied.
Inspired by what historians of science tell us about similar situations in our field's past, investigators are hoping for a 'paradigm shift'. However, while it is easy to postulate that we 'are doing something wrong', most suggestions for improvement are futile; suggesting that the Standard Model is simply 'wrong' is clearly not going to help us. The 'Future Circular Collider' is a much better idea; it will be an accelerator with a circumference of about 100 km, able to reach a c.m. collision energy of 100 TeV. The importance of such a device is that it will provide a crucial background forcing theoreticians to keep their feet on the ground: if you have a theory, it had better agree with the newest experimental observations.
2. The generic realistic model
The central core of our theory consists of a set of models whose logic is entirely classical and deterministic. Deterministic does not mean
The equations will be
Our deterministic theory will be formulated in terms of integer numbers only, which can be defined exactly without the need for infinitely many decimal places. This kind of precision in defining theories may well be what is needed to understand quantum mechanics.
For simplicity, we imagine a universe with finite size and finite time. As for their mathematical structure, all deterministic models are then very much alike. All finite-size discrete models must have finite Poincaré recursion times. There will be different closed cycles with different periods, see Figure 1. Counting these cycles, one finds that the cycle a state belongs to is a physically conserved quantity, almost by definition. For simplicity, we constrain ourselves to a single cycle.
We now emphasise that the evolution law of such a deterministic system can be exactly described in terms of a legitimate, conventional Schrödinger equation. We say that quantum mechanics is a vector representation of the classical system.
By diagonalising this matrix, one finds all its eigenvectors and eigenvalues. Within one cycle, the eigenvalues of the evolution operator $U(t)$ are $e^{-2\pi i\,nt/T}$, where $t$ is time, $T$ is the period, and $n$ is an integer. The formal expression for the eigenvectors is easily obtained:

$$|n\rangle_E \;=\; \frac{1}{\sqrt{T}}\sum_{k=0}^{T-1} e^{2\pi i\,nk/T}\,|k\rangle_{\mathrm{ont}}\,, \tag{1}$$

where $|k\rangle_{\mathrm{ont}}$ are the ontological states, labelled by the integer $k$, and $|n\rangle_E$ are the energy eigenvectors. We read off that, in the basis formed by the states $|n\rangle_E$, the Hamiltonian takes the values

$$E_n \;=\; \frac{2\pi n}{T}\,. \tag{2}$$
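These spectral statements can be verified with a toy numerical check (the cycle length T = 5 and all code details below are our own illustrative choices, not part of the text):

```python
import numpy as np

# A deterministic cycle of T = 5 states, k -> k+1 (mod T).
# Its one-step evolution operator is a cyclic permutation matrix.
T = 5
U = np.zeros((T, T))
for k in range(T):
    U[(k + 1) % T, k] = 1.0   # U maps basis state |k> onto |k+1 mod T>

# Diagonalising U gives eigenvalues exp(-2*pi*i*n/T), n = 0..T-1, i.e.
# an effective Hamiltonian with the evenly spaced eigenvalues 2*pi*n/T.
eigvals = np.linalg.eigvals(U)
for n in range(T):
    assert np.min(np.abs(eigvals - np.exp(-2j * np.pi * n / T))) < 1e-8

# The energy eigenvectors are discrete Fourier modes of the ontological
# (basis) states: |n> = (1/sqrt(T)) * sum_k exp(2*pi*i*n*k/T) |k>.
for n in range(T):
    v = np.exp(2j * np.pi * n * np.arange(T) / T) / np.sqrt(T)
    assert np.allclose(U @ v, np.exp(-2j * np.pi * n / T) * v)
```

The permutation matrix is unitary, so its eigenvalues lie on the unit circle; the uniform spacing of their phases is exactly the 'too regular' spectrum discussed next.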
At first sight, this does not look like quantum mechanics; the series of eigenvalues (2) seems to be too regular. It was proposed earlier to add arbitrary additive energy renormalisation terms, depending on the cycle we are in, but the problem is then still that it is difficult to see how this can reproduce Hamiltonians that we are more familiar with. The energy eigenstates seem to consist of large sequences of spectral lines with uniform separations. A more powerful idea has been proposed recently [10, 11]. More use must be made of locality. We wish the Hamiltonian to be the sum of locally defined energy density operators,
Now this is really possible. The price to be paid is to add fast variables.
The fast variables, $\varphi_i$, are basically fields that rapidly repeat their values with periodicities $T_i$, which we choose all to be large and mostly different. To reproduce
To a good approximation, the fast variables will be non-interacting. This means that the energy levels will take the form $E = \sum_i 2\pi n_i/T_i$, where the $n_i$ are all integers, and it implies that there is one ground level, $E = 0$, while all excited states have energies $E \geq 2\pi/\max_i T_i$. Clearly, our conditions on the fast variables were chosen such that their excited energy levels exceed all energy values that can be reached in our experiments.
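A minimal numerical sketch of this spectrum (the three periods below are invented toy values; units with one Planckian time step per orbit step, so that energies are $2\pi n_i/T_i$):

```python
import numpy as np
from itertools import product

# Three non-interacting fast variables with periods T_i have energy
# levels E = sum_i 2*pi*n_i/T_i with integers n_i >= 0.
T = [97, 101, 103]          # toy periods, large and mostly different
levels = sorted({sum(2 * np.pi * n / t for n, t in zip(ns, T))
                 for ns in product(range(3), repeat=len(T))})

assert levels[0] == 0.0                           # a unique ground level E = 0
assert np.isclose(levels[1], 2 * np.pi / max(T))  # lowest excitation: 2*pi/max(T_i)
# In physical units the discrete time step is extremely small (Planckian),
# so these excitation energies lie far beyond experimental reach.
```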
Note that energy is exactly conserved. Therefore we may assume that, if an initial state is dominated by the ground state of the fast variables, it will stay in that state.
Now consider the quantum model that we wish to mimic. Let it have a basis of $N$ states $|x\rangle$, with $x = 1, \dots, N$, to be called the slow variables.
Two states $|x_1\rangle$ and $|x_2\rangle$ are interchanged whenever the fast variables in the immediate vicinity of the states $|x_1\rangle$ and $|x_2\rangle$ simultaneously cross a certain pre-defined point on their (fast) orbits.
Here, the ‘vicinity’ must be a well-defined notion for these states. In the case of non-relativistic particles, it means that we defined the states as the particle(s) in the coordinate representation. In the relativistic case we take the basis of states specified by the fields. This does imply that, in both cases, we regard the particles and/or fields to undergo exchange transitions that eventually will generate the desired Schrödinger equation or field equations.
One can describe these classical interchange transitions in terms of a ‘quantum’ perturbation Hamiltonian,

$$\delta H \;=\; \frac{\pi}{2}\;\sigma_y\;\delta_{\varphi_1,\,P_1}\,\delta_{\varphi_2,\,P_2}\,, \tag{4}$$

where $\sigma_y$ is one of the three Pauli matrices $(\sigma_x,\sigma_y,\sigma_z)$, acting on the two-dimensional subspace spanned by the two states $|x_1\rangle$ and $|x_2\rangle$.
Some special points on the orbits of the fast variables $\varphi_1$ and $\varphi_2$ will be indicated as $P_1$ and $P_2$. If the fast variables $\varphi_1$ and $\varphi_2$ reach their special positions simultaneously, the two states $|x_1\rangle$ and $|x_2\rangle$ are interchanged.
In Eq. (4), we used a discretised notation, where the time unit is chosen such that it is the time needed to advance the fast variables by only one step in their (discretised) orbits. One may check that the factor $\pi/2$ is crucial to guarantee that, if the special points are reached, the evolution over one time step,

$$e^{-i\,\delta H} \;=\; e^{-\frac{i\pi}{2}\sigma_y} \;=\; -\,i\,\sigma_y \;=\; \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\,,$$

describes a classical interchange, without generating superpositions. The minus sign is unavoidable but causes no harm. We chose the Pauli matrix $\sigma_y$ because, when combined with the factor $-i$ in the Schrödinger equation, the wave function will be propagated as a real, classical permutation.
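This interchange can be checked numerically; the sketch below (all variable names and toy values are our own) verifies that evolving with the Hamiltonian $(\pi/2)\sigma_y$ for one discrete time step gives a real matrix that maps basis states onto basis states:

```python
import numpy as np

# Pauli matrix sigma_y on the two-dimensional subspace of the two
# slow states |x1> and |x2>.
sigma_y = np.array([[0, -1j], [1j, 0]])

# exp(-i*theta*sigma_y) = cos(theta)*I - i*sin(theta)*sigma_y;
# at theta = pi/2 this is -i*sigma_y, a REAL matrix.
theta = np.pi / 2
U_swap = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * sigma_y
assert np.allclose(U_swap, [[0, -1], [1, 0]])   # real: no superpositions created

# A basis ('ontological') state is mapped onto a basis state:
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.allclose(U_swap @ e1, e2)      # |x1>  ->   |x2>
assert np.allclose(U_swap @ e2, -e1)     # |x2>  ->  -|x1>  (harmless sign)
```

Any other choice of the prefactor, or of the Pauli matrix, would leave complex off-diagonal amplitudes behind and hence generate superpositions.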
The Hamiltonian describing the evolution of the slow variables is now derived by assuming that the fast variables never get enough energy to go to any of their excited energy states. Their lowest energy states obey

$$\langle \varphi\,|\,E{=}0\rangle \;=\; \frac{1}{\sqrt{T}}\,,$$

so that the expectation value of a Kronecker delta is

$$\langle\,\delta_{\varphi,\,P}\,\rangle \;=\; \frac{1}{T}\,, \tag{5}$$

where $T$ is the number of points on the fast orbit of this variable.
Eq. (5) could also be used if we had only one Kronecker delta in Eq. (4), but this would cause exactly one transition during each period of the fast variable, which makes the effective Hamiltonian too large to be useful. Choosing two Kronecker deltas causes a transition to take place only after a much longer time, making the insertion (4) of the desired order of magnitude to serve as a contribution to the effective Hamiltonian of the slow variables.
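Both statements, the expectation value 1/T of a single Kronecker delta and the strong suppression obtained from requiring two deltas, can be illustrated with a toy computation (all periods and special points below are invented for illustration):

```python
import numpy as np
from math import gcd

# Ground state of a free fast variable on a T-point orbit: the uniform
# superposition. The expectation of a Kronecker delta at one special
# point P is then 1/T.
T = 7
ground = np.ones(T) / np.sqrt(T)
P = 3
assert abs(ground[P] ** 2 - 1.0 / T) < 1e-12

# Classical picture of why TWO deltas are needed: two fast counters with
# coprime periods T1, T2 pass their special points simultaneously only
# once per T1*T2 time steps, so the induced transition rate is suppressed
# by 1/(T1*T2) rather than 1/T1.
T1, T2 = 7, 11      # toy periods, chosen coprime
P1, P2 = 3, 5       # the special points on each orbit
assert gcd(T1, T2) == 1
coincidences = sum(1 for t in range(T1 * T2)
                   if t % T1 == P1 and t % T2 == P2)
assert coincidences == 1   # exactly one coincidence per full period T1*T2
```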
By adding a large number of similar transition events in the orbits of all fast variables, causing transitions for all pairs of (neighbouring) slow variables, we can now generate any desired contributions to the effective Hamiltonian elements causing transitions among the slow variables. The result will be

$$\langle x_1|\,H_{\mathrm{eff}}\,|x_2\rangle \;=\; \frac{\pi}{2}\,\frac{N_{x_1 x_2}}{T_1\,T_2}\,, \tag{6}$$

where the numbers $T_1$ and $T_2$ are the total numbers of points on the orbits of the fast variables $\varphi_1$ and $\varphi_2$, and the numbers $N_{x_1 x_2}$ indicate the numbers of the special transition points on the donut formed by the orbits of the pair $(\varphi_1, \varphi_2)$.
We encounter the restriction that the matrix elements will come with rational coefficients in front. The fundamental reason for the coefficients to be rational is that, eventually, all discretised classical models have finite Poincaré recursion times. In practice one may expect this problem to go away, since, for realistic classical systems, the Poincaré recursion times rapidly go to infinity.
We have now achieved the following. Let there be given any Hamiltonian with matrix elements in a finite-dimensional vector space, and a suitably chosen basis of this vector space, preferably one where every state can be endowed with coordinates. Then we have defined slow variables describing the physical states, and we added fast variables whose excited states are beyond the reach of our experiments. We found classical interactions, prescribed as exchanges between the classical states, such that the effective Hamiltonian approaches the given one.
The system obeys the Schrödinger equation dictated by this Hamiltonian, and, by construction, all probabilities evolve as is mandated by the Copenhagen doctrine. The reader may ask how to obtain the diagonal elements of the Hamiltonian, and how to make it contain complex numbers. The answer is that these can also be generated by using an additional binary degree of freedom as mentioned above.
In principle, any orthonormal basis of states could have been used in our construction, but in practice we would like to recover locality in some way. The demand of locality in the classical system implies that we should demand locality for the fast variables as well as the slow ones. This appears to be straightforward. For non-relativistic particles, one may use the basis of states defined by the position operators. In the relativistic case, one needs the field operators and their quantum eigenstates to start off with.
The theory we arrive at appears to be closely related to Nelson’s ‘stochastic quantum mechanics’. We think our construction has a more solid mathematical foundation, explaining how quantum entanglement arises naturally from the energy conservation law, associated with time translation invariance.
3. Symmetries and superpositions
The interpretation of the Schrödinger equation that we obtained is that it merely describes the evolution of the probability distribution for the slow variables, after averaging over the positions of the fast variables. The fast variables dictate the evolution, but they act too fast for us to observe this directly. The new element in our procedure is that we have the choice to also describe the fast variables using quantum mechanics as a tool: fast and slow variables together go into a vector representation of what happens.
In statistical treatments of moving variables, with well-determined evolution laws, it should be completely clear that the probability distributions of the final state are the result of our choices for the probability distributions of the initial states.
At first sight, the group of rotations can also be regarded as pure permutations, and, although the lattice structure of our local coordinate space will be severely affected, one might suspect that our present understanding of physics comes from smearing the lattice back into a continuum; this may be a reasonable approach towards understanding rotational symmetry.
However, more severe problems arise if we consider the notion of spin in a particle. We need to take spin into account when analysing Bell’s theorem. In the treatment displayed in the previous section, the spin variable of a particle would be a discretised variable $m$, taking values in integer steps from $-s$ to $+s$, where $s$ is the total spin quantum number, being integer or half-odd integer. These would be promoted to the status of classical variables, and then we can set up exactly the right Schrödinger equation for particles like the Dirac particle. What happens to its ontological interpretation if we rotate it?
It is clear that, in this case, rotation transformations transform the ‘real states’ into superpositions. In doing so, the rotation group can serve as the prototype of many symmetry considerations in quantum mechanics. How do we analyse the Stern-Gerlach experiment?
The superimposed states obey exactly the same Schrödinger equations as the basis elements do, and we had chosen the latter at will. So one possible answer could be: it does not matter which of the states we call real; there is no experiment to help us make the distinction. But this is debatable. The Stern-Gerlach experiment in its vertical orientation distinguishes particles with spin up from particles with spin down; these have different orbits. Remember that, in this chapter, we focus on going beyond the usual statistical interpretation of quantum mechanics, aiming at a description of pure, real states. The only accepted probabilities are 1 and 0.
The real physical states we work with form a basis of Hilbert space, and the equations we work with ensure that any state that starts off being real, occurring either with probability 1 or with probability 0, continues to be a real state forever. This must also hold for any Stern-Gerlach set-up or any of the other paradoxical contraptions that have been proposed over the years. Real state in, real state out. This was called the ‘law of ontology conservation’.
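The law of ontology conservation can be illustrated with a toy check (the six-state permutation is an invented example): a permutation evolution never turns a probability-0/1 state into a superposition.

```python
import numpy as np

# If the evolution operator is a permutation matrix, a state whose
# Born-rule probabilities are all 0 or 1 (a 'real' state) evolves into
# another such state, at every time step, forever.
rng = np.random.default_rng(0)
N = 6
U_perm = np.eye(N)[rng.permutation(N)]   # random N-state permutation matrix

state = np.zeros(N)
state[2] = 1.0                           # real state: probability 1 on one basis element
for _ in range(20):                      # any number of steps
    state = U_perm @ state
    probs = np.abs(state) ** 2
    assert np.count_nonzero(probs) == 1 and np.isclose(probs.max(), 1.0)
```

Superpositions obey the very same evolution law; only their Born-rule probabilities are permuted along with the basis states.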
At first sight it seems that a Stern-Gerlach experiment, after a rotation over an arbitrary angle, turns into a superposition of several real states. This is true in the mathematical sense. It is the easiest way to visualise what the rotation group stands for. However, if we physically rotate a Stern-Gerlach experiment, by undoing and re-arranging nuts and bolts, we do something else. The new experiment again goes into one of the realistic states; the nuts and bolts also go into new physical states, so this is not quite the same kind of rotation.
Notice, however, that if a particle with spin leaves one Stern-Gerlach instrument and continues its way in another, rotated, device, then, as we know from standard quantum mechanics, it emerges in a superposition, or more accurately, in a probabilistic distribution. Where does this stochastic behaviour come from? What happens if we do interference experiments with the various emerging beams of particles?
Apparently, there are other variables that play a role. We can blame the fast variables for this. The fast variables for the rotated device do not coincide with the previous fast variables. In specifying the state of the particle in the first device, we forgot to observe where exactly the fast variable was. We couldn’t observe this, as it was moving too fast. The transformation (in this case that is the rotation), formally involved the excited energy modes of the fast variables. In practice, we know that the energies of the quantum particles in both devices are too low to detect the excited modes, but in formulating the interactions, using the special points in Figure 2b and c, the excited modes do play a role because the interaction points are localised.
From these considerations, we claim that whatever is left of the various paradoxes should be nothing to worry about.
4. On Bell’s theorem
Yet, this conclusion is often criticised. To set the stage, let us recapitulate J.S. Bell’s theorem: a source is constructed that emits two entangled photons simultaneously. Such sources exist, so no further justification of its properties is asked for. The helicities of the two photons are along their directions of motion. Entanglement here means that the 2-photon state is

$$|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\,\big(\,|{+}\,{+}\rangle \,+\, |{-}\,{-}\rangle\,\big)\,, \tag{9}$$

where the $+$ stands for one polarisation and the $-$ stands for the orthogonal polarisation. Alice and Bob use polarisers to analyse the photons they see. Alice rotates her polariser to an angle $a$ and Bob chooses an angle $b$, and these choices are assumed to be totally independent. The two photons “do not know” what the angles $a$ and $b$ are; they are assumed to emerge with a polarisation angle $\lambda$. According to the usual view of what hidden variables are, the probability that both Alice and Bob detect their photon is written as $W(a,b)$; the probability that a photon with orientation $\lambda$ is detected by Alice is assumed to be $A(a,\lambda)$, and the probability that Bob makes an observation is written as $B(b,\lambda)$. One then writes

$$W(a,b) \;=\; \int \mathrm{d}\lambda\;\rho(\lambda)\,A(a,\lambda)\,B(b,\lambda)\,. \tag{10}$$
All probabilities $A$ and $B$ are assumed to lie between $0$ and $1$; $\rho(\lambda)$ is also positive and integrates to one. Bell would argue that this expression should apply to theories such as ours, simply by merging the fast variables with the parameter $\lambda$.
Writing , photons obey:
The correlation between Alice’s and Bob’s measurements is then written as:
Standard quantum mechanics allows one to choose the entangled photon state (9) as if it is oriented towards either Alice or Bob, since it is rotation independent. The outcome is then

$$E(a,b) \;=\; \cos 2(a-b)\,. \tag{13}$$
In the fashionable hidden variable language, Eq. (10) is assumed to be valid, which implies that the photon must take care of giving Alice and Bob their measurement outcomes whatever their choices $a$ and $b$ are, and these outcomes are found to obey the CHSH inequality, derived directly from Eq. (10). One then finds that Eq. (13) conflicts with Eq. (10). There is a mismatch of at least a factor $\sqrt{2}$, realised when the polariser settings differ by odd multiples of $\pi/8$.
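The mismatch can be exhibited numerically. In the sketch below (the settings are chosen by us for illustration), the quantum correlation $\cos 2(a-b)$ reaches $S = 2\sqrt{2}$ in the CHSH combination, while a deterministic hidden-variable model of Bell’s factorised form (10) stays at the bound 2:

```python
import numpy as np

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
def E_qm(a, b):
    # Quantum correlation for the entangled photon pair.
    return np.cos(2 * (a - b))

a, ap = 0.0, np.pi / 4            # Alice's two settings
b, bp = np.pi / 8, 3 * np.pi / 8  # Bob's two settings

S = E_qm(a, b) - E_qm(a, bp) + E_qm(ap, b) + E_qm(ap, bp)
assert abs(S - 2 * np.sqrt(2)) < 1e-12   # quantum value 2*sqrt(2) > 2

# A local model of Bell's factorised form: deterministic +/-1 responses
# depending only on a detector's own setting and the hidden angle lam.
lam = (np.arange(20000) + 0.5) * np.pi / 20000   # average over hidden variable
def E_loc(x, y):
    return np.mean(np.sign(np.cos(2 * (x - lam))) *
                   np.sign(np.cos(2 * (y - lam))))

S_loc = E_loc(a, b) - E_loc(a, bp) + E_loc(ap, b) + E_loc(ap, bp)
assert S_loc <= 2.0 + 1e-9   # CHSH: any such model obeys |S| <= 2
```

The factorised model saturates the classical bound but cannot exceed it, which is exactly the factor-$\sqrt{2}$ mismatch discussed in the text.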
Several ‘loopholes’ were proposed, having to do with the limited accuracy of the experiments, but these will not help us, since we claim that our theory exactly reproduces quantum mechanics, and therefore Eq. (13) should be reproduced by our theory. It violates CHSH. How can this happen?
Our short answer is that we have a classically evolving system that exactly reproduces the probability expressions predicted by the Schrödinger equation, including Eq. (13), in a given basis of Hilbert space. The model is local and allows for any initial state; it does not require any kind of ‘conspiracy’ or ‘retrocausality’, or even non-locality.
This should settle the matter, but it is true that the violation of the CHSH inequality is quite surprising.
The difficulty resides in assumption (10). Bell derives it directly from causality. If no signal can travel from the space–time point where Alice does her measurement to the point where Bob does his experiment, and vice versa, then, the argument goes, the settings and outcomes at the two locations cannot influence one another.
It amounts to the statement that the variables $a$, $b$, and $\lambda$ are mutually independent. However, in earlier work, we computed the minimal non-vanishing correlations between the angles $a$, $b$, and $\lambda$ that could reproduce the quantum expression (13) exactly. We found:

$$W(a,b,\lambda) \;=\; C\,\big|\sin\big(2(a+b-2\lambda)\big)\big|\,, \tag{14}$$

where $C$ normalises the total probability to one (its value depends on the integration domain chosen). This expression shows a non-vanishing 3-variable correlation, without any 2-variable correlations as soon as one averages over any one of the three variables. An equation such as (14) should replace (10).
One can read this to mean that the settings $a$ and $b$ have an effect on $\lambda$, but one can also say that the choice of $\lambda$ made by the photon affected the settings chosen by Alice and Bob. Perhaps the best way to interpret this strange feature is as an aspect of information: the fact that the fast variables occupy all states in their orbits with equal probabilities is expressed by saying that they live in their energy ground states. The choice of the phases here is a man-made ambiguity that may propagate backwards in time. It is not an observable ‘spooky signal’, since nothing propagates backwards in time in the classical formulation.
When we say that the photons (together with the fast variables) ‘affect’ the settings chosen by Alice and Bob, it implies that Alice and Bob have no ‘free will’. Of course they haven’t; their actions are completely controlled by the equations. We can’t change a setting without changing what happens in region 3 of Figure 3.
It is important then to realise that our theory is
We can add to this an important observation when the classical degrees of freedom are considered:
We note that this aspect of our scenario implies the absence of ‘free will’ for Alice and Bob in choosing their settings. Alice and Bob are forced to obey classical laws, such that the rule
5. Where are the fast variables and the slow variables in the Standard Model?
At first sight we may seem to be a long way from describing quantum field theories such as the Standard Model. In principle, one may expect something resembling a cellular automaton, where we may be able to project the various field variables as data on a cellular lattice. However, as described in Section 3, we have to deal with the question of how continuous and discrete symmetry patterns, essential for the Standard Model to work, can be introduced. As is well known, once we have all local and global symmetries in place, the entire Standard Model is almost fixed, with only a few dozen interaction parameters to be determined. We make a gentle attempt at finding some signposts that could indicate to us where to start.
In a very important paper, F. Jegerlehner describes the Standard Model as a minimalistic outcome of an algebraic structure whose basic interaction properties are essentially natural near its ultimate cut-off scale, the Planck length, except that the Higgs field self-coupling happens to vanish, or almost vanish, at that scale. It seems as if the universe is metastable, or perhaps just at the edge of stability. When one scales down towards the TeV scale, using the renormalisation group equations, one discovers that the Higgs self-coupling slowly grows towards its present value, and this appears to explain the recently observed Higgs mass remarkably well.
There are important new questions that may be raised in connection with the present work. One is where in the cellular automaton this copious algebra is generated; and of course we want to know how any kind of fast oscillating variables can arise. Previously, this author was just thinking of very heavy virtual particles such as the vector bosons that represent the remaining grand unification symmetry, but there is a problem with that: as described in Section 2, Figure 2, the dynamical fast variable must have the geometry of a multi-dimensional torus, whereas fields have a more trivial topological structure if they indeed form vector representations of the unifying algebra, see Section 2.
This perhaps can be done better. The general philosophy that might be useful here starts from a fundamental observation. Fields that describe data at the Planck scale can only propagate as fields at much more conventional scales (from milli-electronvolts to nearly a TeV) if there is a mechanism that prevents them from obtaining effective mass terms. To be precise, the dynamical field equations must allow them to be shifted by a constant with only minor effects on the energy of the state. At our scale of physics (to be referred to as the SM scale), fields can be shifted in any way, depending on space and time, such that energies also change within the energy domain of our SM scale. This means that the effective mass term must be at the SM scale. When we move towards the Planck scale, this mass term must rapidly approach zero. Physically, the only mechanism that can do this is the Goldstone mechanism:
Only if a field effectively describes a symmetry transformation, and if, at the Planck scale, our world is invariant under this symmetry transformation, can we understand how this field propagates all the way down to the SM scale.
Since the Standard Model has a rich spectrum of possible fields (fermionic and bosonic), this would force us to suspect that each of these fields must represent a symmetry transformation under which the Planck-scale theory is either exactly invariant (when the mass term vanishes) or invariant in a very good approximation (when the mass is of SM scale or smaller). Indeed, this should also hold for the fermionic fields, and this points towards supersymmetry at the Planck scale.
In short, every field component in the SM represents a generator for an almost exact symmetry of the Planck scale model. If we would be dealing with only scalar fields in the Standard Model, this would give us all the symmetry transformations, including estimates on how well the system is invariant.
Unfortunately, the real situation will be a lot more complicated. We have fermionic fields that transform as spinors under rotation, and we have vector fields that themselves again obey local gauge symmetries. How do we deal with that? It would be a great assignment for a team of PhD students to design and elaborate a logically coherent mathematical scheme.
This scheme might eventually produce logical guidelines for setting up cellular automaton models in such a way that their behaviour at SM scales indeed reproduces the SM. But this is not all. The resulting automaton will still be a
6. General relativity and black holes
Finally, there is General Relativity. This theory must be regarded as just another theme of the general concept of local gauge theories. It represents a non-compact gauge group of curved coordinate transformations, and it may well be that it can be handled similarly. It is important to remember that this theory is not renormalisable when presented in its usual form. We do observe that the addition of one further interaction term, the square of the conformal Weyl curvature term, restores renormalisability at the cost of negative energy modes. Perhaps this mode can serve as a fast variable, but much more work will be needed to remedy various difficulties.
Theories for quantum mechanics that also aspire to include General Relativity, must address the fundamental black hole question. Black holes that are sufficiently large and heavy compared to the Planck scale of units, can be perfectly well described by classical,
The present author found that there exists a unique procedure to neutralise the firewalls, but it does not happen automatically. To see what may well happen, one should compute the effects that particles entering a black hole have on the Hawking particles leaving. It is not an act of destruction but a precisely calculable effect of repositioning those rays of outgoing material. The bottom line is that the
These findings allow one to construct a unique expression for the
However, we also hit upon a more sobering difficulty: the region behind the horizon has to be used to describe the
The only answer that we could find is that we should act in a way similar to what P.A.M. Dirac did in order to overcome the negative energy problem in the Dirac equation. Now, in a black hole, we have bosons and electrons alike, but we can achieve the same result by assuming that the entire band of energy eigenstates in a field theory should be bounded from below as well as from above.
The region beyond the vacuum then represents a CPT inversion of the region on our side of the horizon. This picture appears to make perfect sense, and we believe it likely that it resolves the energy inversion problem in black hole physics.
This solution of the energy inversion problem replaces the infinite energy spectrum of all harmonic oscillators generated by the fields outside the horizon, with a spectrum of evenly separated energy levels that have both a beginning and an end, the end being the highest possible energy level. We note that this is not only the energy spectrum of an atom with finite spin inside a homogeneous magnetic field (the Zeeman atom), but it also represents the energy levels of a periodic system with finite time steps in its evolution law, see the beginning of Section 2.
Indeed, we find that black holes may be telling us something about the origin of quantum mechanics.
Our aim was to rescue the concept of ontology, as opposed to epistemology, in quantum mechanics. It means that atoms, molecules, electrons and other tiny entities are features of things that really exist, and that they evolve into other states or objects that also exist, according to universal physical laws. We find that this makes perfect sense if what we now perceive as quantum mechanics is understood as a vector representation of the states as they exist and evolve. Vector representations naturally allow superposition, and one finds that superpositions of ‘ontological’ states evolve through the same Schrödinger equation as the original states themselves. This in turn implies that one may ignore everything that is said about ontological existence, as long as one uses Born’s dictum that the absolute squares of the superposition coefficients represent probabilities. The reason why we nevertheless attach much importance to our ontological interpretation is that it severely restricts the allowed evolution laws; demanding the existence of an ontological representation forces us to redesign the set of elementary basis elements of Hilbert space, which may imply new constraints on which versions of the Standard Model can describe our world.
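A minimal numerical sketch (our own illustration, with an arbitrary permutation and state vector) of the statements above: ontological basis states evolve by a permutation, an arbitrary superposition evolves under the very same unitary step, and the Born probabilities are simply carried along by the permutation rather than mixed.

```python
import numpy as np

# Ontological evolution law: a permutation of N basis states per time step.
N = 5
perm = np.array([1, 2, 3, 4, 0])       # state k evolves into state perm[k]
U = np.zeros((N, N))
U[perm, np.arange(N)] = 1.0            # unitary matrix with U|k> = |perm[k]>

# A superposition of ontological states evolves under the same unitary step...
psi = np.array([0.6, 0.0, 0.8, 0.0, 0.0])
psi_next = U @ psi

# ...and the Born probabilities are merely permuted, never mixed:
p, p_next = np.abs(psi) ** 2, np.abs(psi_next) ** 2
print(p, p_next)   # [0.36, 0, 0.64, 0, 0] -> [0, 0.36, 0, 0.64, 0]
```

Since the unitary is a permutation, the probability assigned to each ontological state is transported deterministically, which is why Born's rule is compatible with an underlying classical evolution.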
An ontological interpretation is also of great help in resolving the numerous ‘paradoxes’ that have long confused scientists as well as young students as to what ‘reality’ really means. Questions such as the physical process that seems to be associated with the ‘collapse of the wave function’, the ‘measurement problem’, the difficulties raised by the EPR paper and by Bell’s theorem, questions surrounding the features of entanglement, and the Greenberger–Horne–Zeilinger (GHZ) paradox, all become much less counterintuitive and mysterious than they look in their original quantum settings.
The explanation of these features is that what is really happening is a classically evolving collection of microscopic objects, whose fastest, periodically moving components automatically settle into a completely featureless, uniform distribution over all of their possible states.
Remarkably, the reason why the states of the fastest moving objects stay in a uniform distribution is better understood in the quantum formalism than in the original classical picture: the highest energy excitations are difficult or almost impossible to excite, simply because the energy needed for that is usually unavailable to us; our accelerators reach only a dozen or so TeV, and the highest energies detected in cosmic rays are still well below the Planck scale. Therefore, the excited modes are only virtually present, and may well be ignored in practice. And since all superposition coefficients of the ground state are equal, the distribution is featureless in practice, according to Born.
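The featureless distribution can be exhibited explicitly in the toy model of a fast periodic variable (again our own sketch): the lowest-energy eigenstate of a cyclic permutation is the uniform superposition of all its states, so its Born probabilities are all equal.

```python
import numpy as np

# Fast periodic variable: N states visited cyclically, permutation matrix U.
N = 6
U = np.roll(np.eye(N), 1, axis=0)

# The lowest-energy eigenstate (eigenvalue 1 of U) is the uniform superposition:
ground = np.ones(N) / np.sqrt(N)
assert np.allclose(U @ ground, ground)

# Its Born probabilities are completely featureless -- all equal to 1/N:
print(np.abs(ground) ** 2)   # [1/6, 1/6, ..., 1/6]
```

As long as the higher modes stay unexcited, the fast variable contributes nothing but this uniform background, which is why it can be integrated out of low-energy physics.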
Thus, what we really find is that the lowest energy states of the slow variables become entangled due to their interactions with the fastest variables. Quantum mechanics ensues; it is mathematically inevitable.
Our work is far from finished. Fresh young minds should probe the remaining mysteries. In particular, the Standard Model is built from fundamental symmetry principles, and there are more symmetries than one might have expected from ‘just any’ classical system: many continuous symmetries, non-compact symmetries such as Lorentz invariance and invariance under general coordinate transformations, and the exact local gauge invariances dictated by the gauge fields of the Standard Model.
Finally, a natural place must be found where we can put and understand the black hole solutions of Einstein’s equations. They too must obey the laws of quantum mechanics, before we can embrace these remarkable systems in our overall picture of nature. Data obtained from the observations of cosmologists must also be incorporated. What we are searching for is nothing less than a grand picture of the evolution laws shaping our physical world.
- More precisely, we talk of energies that can be associated to single quantum particles at isolated points in space–time.
- Do keep in mind that the distinction between fast variables and slow variables is a feature of our simplified models, but possibly unnecessary in the real world. All variables are real, evolving classically according to the same or similar classical laws.
- Alternatively, a source emitting two spin-1/2 particles could be used. The angles of the polarisation will then be twice the angles of the photon orientation discussed here, and there will be other modifications due to the fact that these particles are fermions.
- This outcome is model dependent, and if we choose the model to be physically more plausible, the correlations become even stronger.
- This then would be an example of the ‘butterfly effect’. It is not as crazy as it sounds. As soon as we include the fast variables in the discussion, the dynamics becomes invariant under time reversal, and the statement that a later photon is correlated with settings chosen earlier is then not strange at all.