The history of Physics contains some inexplicable mysteries, and one of them is this: back in the early 20th century, Einstein was working on the idea of the ‘Photon’ and on the idea of the ‘Signal’ at essentially the same time [1,2], but he did not relate them to each other. They seem to have arisen totally separately in his mind, and they led to totally separate subsequent developments. The Photon played a central role in the development of Quantum Mechanics (QM), and the Signal played the central role in the development of Special Relativity Theory (SRT).
QM and SRT are now the two great pillars of early 20th century Physics, but they seem to be in conflict over the issue of communication. In QM, Schrödinger’s Equation is basically the Fourier transform (just a restatement in terms of different variables) of a statement from Classical Mechanics: Total energy = kinetic energy + potential energy. This statement has no signal propagation speed involved in it. So QM appears to allow instantaneous communication over arbitrary distances. But in SRT, Einstein’s Second Postulate limits all communication to light speed c.
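The Fourier-transform relationship invoked here can be made concrete with the standard operator correspondence applied to a single plane-wave component. The following SymPy sketch (free-particle case; my own illustration, not code from any source) shows the energy and momentum terms of "total energy = kinetic energy" emerging term by term:

```python
import sympy as sp

x, t, k, w, m, hbar = sp.symbols('x t k omega m hbar', positive=True)
psi = sp.exp(sp.I*(k*x - w*t))   # one plane-wave Fourier component

# Under the Fourier correspondence, E -> i*hbar d/dt and p -> -i*hbar d/dx.
E_psi = sp.I*hbar*sp.diff(psi, t)       # acts as  hbar*omega * psi
p2_psi = -hbar**2*sp.diff(psi, x, 2)    # acts as  (hbar*k)^2 * psi

# Free-particle statement E = p^2/2m, checked component by component:
assert sp.simplify(E_psi/psi - hbar*w) == 0
assert sp.simplify(p2_psi/(2*m)/psi - (hbar*k)**2/(2*m)) == 0
print("operator correspondence verified on a plane wave")
```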
Since QM and SRT conflict so dramatically on the issue of communication, at least one of them must, in some sense, be wrong. So we are left unsure about what to believe, or take as a foundation for future development.
Can we find out anything decisive from experiments or observations?
The foundation for QM is usually called the Quantum Hypothesis, and not the Quantum Postulate, because the granularity aspect of QM is experimentally testable in various ways. So far, the granularity aspect of QM never seems to fail. This fact stands in favor of QM. But many people still do not believe in the instantaneous communication aspect of QM that is manifest in Schrödinger’s Equation.
The foundation for SRT can be called the Light-Speed Postulate, but not the Light-Speed Hypothesis, because light speed really is not experimentally testable, since a test would involve at least two different spacetime points, and the correlation of data from two different spacetime points would involve the Light-Speed Postulate itself. Despite numerous claims to the contrary, SRT has not been tested in a way that actually could have falsified it, and only very indirect testing appears even feasible for SRT.
Einstein’s General Relativity Theory (GRT) flows from SRT, and GRT is testable, at least observationally, although not experimentally. But the new hypotheses that it offers for observational test are few in number, and great in technical difficulty. So even indirect testing of SRT through GRT does not look very promising.
An altogether different approach therefore seems needed: instead of demanding experiments or observations, we should be reviewing the founding ideas themselves, in light of new insights gathered in the intervening century. Evidently, at least one of the founding ideas, and possibly both of them, need to be updated, or else retired and replaced. This paper aims to identify possible update(s)/replacement(s) that may help.
Some of the needed insights come from engineering practice, rather than from theoretical physics. In the mid 20th century there was a flowering of Information Theory (IT), first in connection with wartime code breaking and code making, and then in connection with the post-war communication industry. All of that development led in turn to our modern computation industry, and our current ‘Age of Information’.
IT uses the concept of Entropy, taken from classical Thermodynamics; with the application of a minus sign, it provides a quantitative mathematical measure for Information. This measure can be used in support of all sorts of engineering concept analyses and design decisions, etc. A convenient reference about the IT concepts and their general applications is Leon Brillouin’s wonderful little book Science and Information Theory. Flores Gallegos discusses some particular applications in QM.
Viewed from our vantage point here in the early 21st century, IT actually provides a clear disqualifier for Einstein’s Second Postulate. The problem is this: the Second Postulate is based on the behavior of a classical infinite plane wave, and an infinite plane wave cannot convey any information whatsoever!
The reason for this perhaps startling assertion is that an infinite plane wave is to electromagnetic communication what a steady hum is to auditory communication: background at best. There is no music in the monotonous hum, and there is no message in the infinite plane wave.
Information requires structure: amplitude modulation, or frequency modulation, or on-off switching. An infinite plane wave does not have any such structure. Because of this deficit, we certainly need a new Signal model as the foundation for an updated SRT.
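The quantitative IT measure mentioned above makes this deficit concrete: a structureless signal carries zero Shannon entropy, while a modulated one carries positive entropy. A small illustrative sketch (the sampled sequences below are hypothetical):

```python
import math
from collections import Counter

# Shannon entropy (bits per symbol) of a sampled signal: a quantitative
# measure of how much information the waveform can carry.
def entropy_bits(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c/n) * math.log2(c/n) for c in counts.values())

steady_hum = [1, 1, 1, 1, 1, 1, 1, 1]      # unmodulated carrier: no structure
on_off_keyed = [1, 0, 1, 1, 0, 0, 1, 0]    # on-off switching imposes structure

print(entropy_bits(steady_hum))    # 0.0 bits per symbol: no message
print(entropy_bits(on_off_keyed))  # 1.0 bits per symbol
```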
Possibly we also need a better Photon model at the foundation of QM. We do not actually have a detailed and universally accepted model for photons in QM. Attention has focused more on material systems, which are said to change state, and in so doing, emit or absorb photons that carry packets of energy and angular momentum. As for electric and magnetic fields, what we have is the notion from Quantum Electrodynamics of a ‘virtual photon’. This terminology reveals the desirability of having some unified, photon-like, approach for both Coulomb-Ampère and radiation fields, but it does not actually provide details.
Without detail to argue against, we are apparently free to develop a Photon model de novo, in a way that serves not only as the Photon model for QM, but also provides the more realistic Signal model needed for an updated SRT.
The new Photon/Signal model need not involve yet another new Postulate. Remember what Euclid taught the world through his Geometry: use no more Postulates than absolutely necessary. The reason is that unnecessary Postulates can conflict with other Postulates already in place, and so lead to Paradoxes.
In SRT, we do indeed have many Paradoxes, involving rods, clocks, trains, lightning strikes, snakes, barns, twins, and so on, and on. A prime suspect for their root cause is the unnecessary Second Postulate. In QM, Schrödinger’s Equation seemingly came full-formed from heaven, and to that extent was also a Postulate, and indeed one that conflicted with Einstein’s Second Postulate. The mysterious feel of quantum duality, for example, may suggest a possible QM Paradox yet to be fully articulated. If so, the new Photon/Signal model can offer a candidate approach to solve the problem.
The Photon/Signal model is just very familiar, old-fashioned mathematics: 1) partial differential equations (Maxwell’s first order coupled field equations), 2) their family of solutions (starting with Gaussian pulses, and generating by differentiations the higher-and-higher order Hermite polynomials multiplying the Gaussian pulses), and 3) boundary conditions (no backflow of energy behind the source, no overflow of energy beyond the receiver). This formulation is enough to determine the particular solution that fits any particular problem.
One general rule in Science is this: when a venerable and well-tested mathematical approach exists, try it first, before abandoning it in favor of a new approach. The irony is that Einstein chose not to apply the usual mathematical approach, and instead to introduce his additional Postulate. But he did it so long ago that, despite the long list of Paradoxes generated by SRT, his Second Postulate has itself become ‘venerable’, and therefore nearly impossible to unseat.
But with SRT updated with the new Photon/Signal model, based entirely on the old-fashioned mathematical approach, QM no longer needs to conflict with SRT. Atoms can have solution states in which energy loss by radiation is countered by an energy gain mechanism newly identified with the updated SRT. It is no longer necessary to postulate the Schrödinger equation just to prevent atomic death by loss of orbit energy to far-field radiation.
With a new starting point for QM, there can be a new development for QM. Some type of Quantum Gravity has long been sought, but with GRT being founded in SRT, with c-speed-only communication, and with QM being founded in Classical Physics, with instantaneous communication, that goal has been hard to reach. But the combination of an updated SRT and traditional Statistical Mechanics (SM) offers a way forward.
The new protocol for treating gravity is this: 1) First notice that gravitational attraction formally resembles a statistical residue from magnetic interactions between elements of charge-neutral matter that are carrying electrical currents, ‘current elements’ for short. Current elements were well described by Ampère before Maxwell ever came along. 2) Then apply SM to pairs of current elements.
QM also connects to modern technology problems. Chemistry is a wonderfully rich application area for QM. But current-day Quantum Chemistry (QC) is not very easy to use, mainly because of heavy computation loads associated with numerical integrations. The new photon/signal model leads to a different approach for QC, one that is algebraic, rather than integral, in character. This paper includes some recent results from the application of Algebraic Chemistry (AC) to research on the chemistry of water; namely, the form of water known as ‘EZ water’, because it excludes positive ions.
Finally, QM may connect to Elementary Particle Physics in a manner yet to be fully developed. Just as the myriad compounds in Chemistry arise from not-very-many chemical elements, some significant part of the myriad of currently understood ‘elementary’ particles may arise from just two of them: the electron and the positron.
The suggestion for this idea lies in Chemistry’s Periodic Table (PT). The way that the electron spin states fill up with increasing nuclear charge suggests the existence of not only electron pairs that have opposing spins, but also electron rings that have aligned spins. Electron rings can involve three, five, even seven electrons. Basically, electrons within atoms form, not only opposite-spin couples, but also same-spin teams. Since electron rings apparently do occur in atoms, they may well also occur separated from atomic nuclei, and therefore looking like exotic elementary particles. And the same may be said of positrons. Thus we have a rich array of possibilities yet to explore.
2. The photon / signal model
The first part of the Photon/Signal model consists of the governing partial differential equations. These are Maxwell’s four first-order coupled field equations. Jackson  gives Maxwell’s equations in modern notation and Gaussian units as:

∇⋅D = 4πρ ,  ∇×H = (4π/c)J + (1/c)∂D/∂t ,
∇×E + (1/c)∂B/∂t = 0 ,  ∇⋅B = 0
Here B is magnetic field and E is electric field. The constant c = 1/√(ε0μ0), where ε0 is electric permittivity and μ0 is magnetic permeability. In free space, D = ε0E, H = B/μ0, and charge density ρ and current density J are zero. Free space will be the case of interest henceforth in this paper.
The second part of the Photon/Signal model is the family of suitable finite-energy solutions. This is developed as follows. Let the word ‘pulse’ be the short description for a field profile that is rounded on top and sloping down on the sides, and fading gradually to zero; for example, a Gaussian function.
The mechanism for the waveform development is that the spatial derivatives applied in Maxwell’s equations change the original Gaussian pulse into successively longer wavelets consisting of successively higher order Hermite polynomials multiplying the original Gaussian. As a result, the energy in the wavelet gets more and more spread out along the propagation path.
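This Gaussian-to-Hermite mechanism can be verified symbolically. A minimal SymPy sketch (my own check of the Rodrigues-type identity, not code from the paper): each successive derivative of the Gaussian pulse is a higher-order Hermite polynomial multiplying the original Gaussian.

```python
import sympy as sp

x = sp.symbols('x')
gaussian = sp.exp(-x**2)

# Successive spatial derivatives of a Gaussian pulse yield higher and
# higher order Hermite polynomials multiplying the original Gaussian:
#   d^n/dx^n e^{-x^2} = (-1)^n H_n(x) e^{-x^2}
for n in range(1, 6):
    deriv = sp.diff(gaussian, x, n)
    reconstructed = (-1)**n * sp.hermite(n, x) * gaussian
    assert sp.simplify(deriv - reconstructed) == 0
    print(n, sp.simplify(deriv / gaussian))  # the Hermite-polynomial factor
```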
Observe that waveform development is inexorable, just like ever-growing entropy in Thermodynamics. This is, I believe, where Entropy really enters into Physics. That is, Maxwell’s first order coupled field equations are what give to Physics its obvious Arrow of Time. In citing electromagnetism as the cause for irreversibility, this idea follows Bentwich . (It does not, however, give any hope for reversing anything.)
Let the spatial variable argument for the pulse be . Let the direction of the initial pulse be . Figure 1 illustrates this Gaussian pulse, along with a snapshot showing how it evolves over one complete cycle through Maxwell’s first-order coupled field equations. Series 1 is the input pulse, and Series 2 is the waveform developed from it. A more complicated Figure was given in Whitney . This simplified Figure 1 focuses on just the one issue: waveform development. The wavelet is shown bold because it has not been given sufficient attention before.
Now, to solve the stated propagation problem, one can pose a primary pair of pulses in E and B to guarantee travel, and, for more realism, add a second such pair, offset a quarter cycle in time and perpendicular in space, to model circular polarization, like a real physical photon exhibits.
The third part of the Photon/Signal model is the pair of propagation boundary conditions: no backflow of energy behind the source, and no overflow of energy beyond the receiver. To guarantee these boundary conditions, one can demand E=0 at these boundaries.
One can fulfill the required zero E fields by matching the signal leaving the source toward the receiver with 1) another fictitious signal going from the source in the opposite direction, so as to make the E field zero at the source, and 2) another fictitious signal approaching the receiver from the opposite direction, so as to make the E field zero at the receiver. One can even continue this boundary-fixing process, to clean up each tiny new departure from zero E at a boundary that each additional fictitious signal creates at the end of the path opposite to the end that it is meant to correct.
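The boundary-fixing process just described can be sketched numerically. In the toy model below (the Gaussian profile and the geometry are my own illustrative choices), one fictitious image per boundary zeroes E at the source exactly, while leaving only an exponentially small residual at the receiver for further images to clean up:

```python
import numpy as np

# A pulse travels from a source at x = 0 toward a receiver at x = L.
# Fictitious image signals enforce E = 0 at the boundaries.
c, L = 1.0, 10.0
g = lambda u: np.exp(-u**2)   # illustrative Gaussian pulse profile

def E(x, t):
    return (g(x - c*t)            # primary signal, source toward receiver
            - g(-x - c*t)         # image behind the source: zeroes E(0, t)
            - g((2*L - x) - c*t)  # image beyond the receiver: cancels at x = L
            + g((2*L + x) - c*t)) # re-zeroes the source boundary

t = 5.0
print(E(0.0, t))      # exactly 0 at the source boundary
print(abs(E(L, t)))   # tiny residual at the receiver, awaiting more images
print(E(c*t, t))      # ~1: the bulk of the pulse, mid-path
```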
Carried to an infinite sum of corrections upon corrections, this is certainly a very complicated picture about electromagnetic fields. How then can we extract from it some simple statement about propagation speed? We have all been trained to just say c. “But relative to what?” we might ask. If we knew the source and receiver were stationary relative to each other, we could be sure that the speed is c, consistent with Einstein’s Second Postulate. Otherwise, we would have to say more.
If we would focus attention on moments when the bulk of the energy is very near the receiver, we could be fairly confident that the speed is c relative to the receiver, again consistent with Einstein’s Second Postulate. But if we would focus attention on moments when the bulk of the energy is still very near the source, moments when the receiver is nothing more than a distant phantom, we would be hard pressed to argue against the proposition that the speed is c relative to the source. (This would be consistent with the 1908 Ritz Proposal , which was much investigated in the early to mid 20th century as a candidate alternative to Einstein’s Second Postulate, but was ultimately rejected.)
If we would need to characterize an entire propagation scenario, we would have to go even further, and consider all moments along the way. We would have neither the Einstein Second Postulate, nor the Ritz Proposal, but rather something else more complicated. It certainly must conform to Einstein’s Second Postulate late in the scenario, when the bulk of the energy is near the receiver. But early in the scenario, when the bulk of the energy is still near the source, it must conform to Ritz’s Proposal. And in between, it must represent in some mathematically appropriate way an idea not previously considered: a transition from one reference for c to the other.
Here is one way to formulate the problem. Let variable x represent distance along the propagation path, and let variable t represent time into the propagation process. At any point (x, t) there are fields E and B with magnitudes E and B, and from them a local energy density:

u(x,t) = (E² + B²)/8π

The ratio X(t) = ∫ x u(x,t) dx / ∫ u(x,t) dx provides a function that begins at the source, and ends at the receiver, and at the temporal midpoint of the scenario, gives equal weight to both the source and the receiver. This behavior captures the proposed transition of the reference to which the light speed c is assumed relative.
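One way to realize the proposed weighting is an energy-weighted mean position along the path. The following toy computation (the Gaussian energy-density profile is an assumed stand-in for the actual field solution) shows such a ratio running from source to receiver:

```python
import numpy as np

# Illustrative only: energy-weighted mean position for a model pulse
# traveling from a source at x = 0 to a receiver at x = L.
c, L = 1.0, 10.0
x = np.linspace(-5.0, 15.0, 4001)

def mean_position(t):
    u = np.exp(-(x - c*t)**2)        # model energy density along the path
    return (x*u).sum() / u.sum()     # the centroid ratio

print(mean_position(0.0))   # ~0: the reference starts at the source
print(mean_position(5.0))   # ~L/2: the temporal midpoint of the scenario
print(mean_position(10.0))  # ~L: the reference ends at the receiver
```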
3. Using the changing reference for c changes the results
The concept of the changing reference for c to be relative to appears relevant for an important scenario that was considered even before Einstein arrived on the scene. The scenario involves the potentials and fields created by rapidly moving sources, and it was addressed starting in the late nineteenth and very early twentieth centuries. Researchers then made the same Assumption that Einstein later elevated to his Second Postulate in founding SRT, but they were not attentive enough to see that there indeed was an Assumption, and to call it out explicitly. So Einstein is to be commended for calling attention to this Assumption.
The sources generally cited for this early (1898 to 1901) problem are A. Liénard  and E. Wiechert . Although they worked at about the same time, they worked separately. They got the same results, as did all contemporary and subsequent investigators, because all persons working from then up until now have used the same input Assumption; namely, that the speed of light is always with respect to the receiver of the light.
The Liénard-Wiechert results are given in , and in every other standard EM book. They are displayed in , and that short passage is quoted again here, for review and subsequent further discussion:
“The standard scalar and vector potentials are:

Φ = [e/(κR)]retarded ,  A = [eβ/(κR)]retarded
“where κ = 1 − n⋅β, β is source velocity normalized by light speed c, and n = R/R (a unit vector), and R = rsource(t−R/c) − rreceiver(t) (an implicit definition for the terminology ‘retarded’).
“The LW fields obtained from those potentials are then:

E = e[(n−β)(1−β²)/(κ³R²)]retarded + (e/c)[n×((n−β)×dβ/dt)/(κ³R)]retarded ,  B = [n×E]retarded
“The 1/R fields are radiation fields, and they make a Poynting vector (energy flow per unit area per unit time) that lies along nretarded:

S = (c/4π) E×B
“The 1/R² fields are Coulomb-Ampère fields, and the Coulomb field

ECoulomb = e[(n−β)(1−β²)/(κ³R²)]retarded

“does not lie along nretarded as one might initially expect; instead, it lies along (n−β)retarded. Assume that β does not change much over the total field propagation time, in which case (n−β)retarded is virtually indistinguishable from npresent.”
Thus the Coulomb attraction/repulsion and the radiation Poynting vector have distinctly different directions. This result does not look right physically. It looks as though advance information is being provided on one, but not the other, of two information channels.
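The directional statement quoted above can be checked numerically for the case of uniform source motion, where (n − β)retarded points exactly along the line from the source's present position to the receiver. The trajectory and receiver location below are illustrative choices of mine:

```python
import numpy as np

c = 1.0
v = np.array([0.6*c, 0.0])        # constant source velocity (beta = 0.6)
src = lambda tau: v*tau           # source trajectory; at the origin at tau = 0
receiver = np.array([0.0, 5.0])
t = 0.0                           # the "present" time

# Solve |receiver - src(t_ret)| = c*(t - t_ret) for the retarded time
# by bisection (the left side minus the right side increases with t_ret).
lo, hi = -100.0, t - 1e-12
for _ in range(200):
    mid = 0.5*(lo + hi)
    if np.linalg.norm(receiver - src(mid)) - c*(t - mid) > 0.0:
        hi = mid
    else:
        lo = mid
t_ret = 0.5*(lo + hi)

R_vec = receiver - src(t_ret)
n = R_vec / np.linalg.norm(R_vec)
u = n - v/c                       # (n - beta)_retarded: Coulomb-field direction
n_present = receiver - src(t)     # line from the *present* source position

cross = u[0]*n_present[1] - u[1]*n_present[0]
print(cross)  # ~0: the two directions are parallel
```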
Now consider the same problem using the new and more nuanced definition for the light speed reference. Observe that a line that connects the source position at the temporal midpoint of the scenario to the receiver position at the temporal midpoint of the scenario defines both the distance and the direction that the energy must travel in order to achieve the proposed transmission of the signal from the source to the receiver. This specification implies that we need potentials and fields to be, not retarded, but half-retarded. Now the potentials become:

Φ = [e/(κR)]half-retarded ,  A = [eβ/(κR)]half-retarded

The fields become:

E = e[(n−β)(1−β²)/(κ³R²)]half-retarded + (e/c)[n×((n−β)×dβ/dt)/(κ³R)]half-retarded ,  B = [n×E]half-retarded

The Poynting vector becomes:

S = (c/4π) E×B , lying along nhalf-retarded

The direction of the Coulomb field becomes:

(n−β)half-retarded
This direction is the same as the direction of the radiation Poynting vector. That is, the Coulomb field and the Poynting vector are now reconciled to the same direction, instead of conflicting with each other.
4. The corrected force direction means the hydrogen atom can survive classically
Many authors have expressed the opinion that really explaining the Hydrogen atom requires some presently-unknown short-range repulsive force between the electron and the proton; see, for example, Lokajicek, et al . But given the results just presented, no mysterious new repulsive force is needed. With the direction of the Coulomb field being nhalf-retarded, there is a tiny tangential component of Coulomb force aligned with the orbit velocity. So there is a torque on the atom, and the torque pumps energy into the atom, and that process can work to balance the energy loss due to radiation.
That is to say: having a more nearly correct model for potentials and fields created by rapidly moving charges makes it possible to explain the immortality of the Hydrogen atom without first postulating the immortality of the Hydrogen atom; i.e., postulating Schrödinger’s equation.
That is to say: we need not postulate Schrödinger’s equation; we can instead just carry out the old-fashioned math.
In my 2012 and 2013 Intech papers, I listed just the pertinent results. Here is more detail and derivation:
Let the masses of the electron and the proton be m_e and m_p. Note that m_p ≫ m_e, but m_p is not infinite.
Let the orbit radii of the electron and the proton be r_e and r_p. Note that r_p ≪ r_e, but r_p is not zero.
Let the charges on the electron and the proton be −e and +e.
The magnitude of the nominally attractive force within the atom is F = e²/(r_e + r_p)².
Let the orbit frequency be Ω. The orbit speed of the electron is v_e = Ωr_e and that of the proton is v_p = Ωr_p.
The magnitude of the tiny tangential force on the electron is (e²/(r_e + r_p)²) × (Ωr_p/2c).
The magnitude of the tiny tangential force on the proton is (e²/(r_e + r_p)²) × (Ωr_e/2c).
The magnitudes of the torques on the electron and on the proton are r_e × (e²/(r_e + r_p)²)(Ωr_p/2c) and r_p × (e²/(r_e + r_p)²)(Ωr_e/2c), both equal to e²Ωr_e r_p/2c(r_e + r_p)².
The total torque on the electron-proton system is T = e²Ωr_e r_p/c(r_e + r_p)².
The torque power delivered to the system is P = TΩ = e²Ω²r_e r_p/c(r_e + r_p)².
The squared orbit frequency is determined from either m_eΩ²r_e = e²/(r_e + r_p)² or m_pΩ²r_p = e²/(r_e + r_p)².
The more convenient of the two options is the first, which gives Ω² = e²/m_e r_e(r_e + r_p)². With that expression, together with m_e r_e = m_p r_p, the approximation r_p ≪ r_e yields:

P ≈ e⁴/m_p c r_e³
This is a reasonably simple expression. But more important than its simplicity is its very existence. The existence of any such expression means that there exists an energy gain mechanism to balance against the known energy loss mechanism, i.e. radiation. This situation provides a chance for balance, allowing the Hydrogen atom to avoid death by energy loss to radiation.
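The orbit-frequency step in the derivation above can be checked symbolically, assuming the Coulomb force magnitude e²/(r_e + r_p)² and the center-of-mass relation m_e r_e = m_p r_p (both assumptions are spelled out in the code):

```python
import sympy as sp

me, mp, re_, e = sp.symbols('m_e m_p r_e e', positive=True)
rp = (me/mp)*re_                          # from m_e r_e = m_p r_p
Omega2 = e**2 / (me*re_*(re_ + rp)**2)    # from m_e Omega^2 r_e = e^2/(r_e+r_p)^2

# In the heavy-proton limit m_p >> m_e this collapses to a simple form:
leading = sp.limit(Omega2, mp, sp.oo)
assert sp.simplify(leading - e**2/(me*re_**3)) == 0
print(leading)  # e^2 / (m_e * r_e^3)
```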
This torque power is actually quite large, and so it changes the whole emphasis of worry concerning the Hydrogen atom. The question becomes, not why does the Hydrogen atom not radiate and collapse to death, but rather why does the Hydrogen atom not torque itself up and expand way beyond its known size?
The fact is: there exists much, much more radiation than was previously worried about, and it is enough to produce the proper balance between radiation and torque.
In , I said that the extra radiation arises from finite signal propagation speed, which results in circular motion of the center of mass of the Hydrogen atom, which in turn produces Thomas rotation, and thereby scales up by a factor of the overall rotation rate generating the radiation, which increases the radiation power by a factor of .
But what should one say about that center-of-mass circular motion? In Newtonian physics, where signal propagation speed is infinite, there is no such thing. In Maxwell physics, the emphasis is on fields, and the responses of individual charges, but not as much on the responses of whole charge systems, such as atoms. So the issue doesn’t come up there. In Einstein’s relativity physics, the emphasis is often on the observers of events more than on the events themselves. System center-of-mass circulation seems not to come up, although Thomas rotation does.
In , I noted that Thomas rotation is generally believed to be a result of the properties of Lorentz transformations, and hence of SRT. That is the belief because one can think of Lorentz transformations, not only in the usual, passive sense, as conversion from an observer in one inertial coordinate frame to another observer in another inertial coordinate frame, but also in the active sense, as the application of a ‘boost’ in velocity, the result of an acceleration, the result of a physical force. A series of non-co-linear boosts does indeed produce Thomas rotation.
But in  I also remarked that Thomas rotation does arise, not just from Lorentz transformations, but also from Galilean Transformations. That fact can be demonstrated in detail as follows:
For simplicity, let all motion be in the x-y plane. The scenario begins at time coordinate t = 0 with one of the particles, say the electron, at rest at spatial coordinates (0, 0). Let an attraction from another particle act in the x direction. Let an increment of velocity ΔVx be imposed, and let an increment of time Δt elapse. The coordinates of the electron then become:

(x, y) = (ΔVxΔt, 0)

Now let an attraction from another particle act in the y direction. Let an increment of velocity ΔVy be imposed, and let another increment of time Δt elapse. The coordinates of the electron then become:

(x, y) = (2ΔVxΔt, ΔVyΔt)

Observe that, if the Galilean velocity boosts had been applied in the opposite order, then the ending coordinates of the electron would have been:

(x, y) = (ΔVxΔt, 2ΔVyΔt)
Observe that, for velocity increments of equal magnitude (ΔVx = ΔVy = ΔV), the squared incremental length changes have the same magnitude either way: (2ΔVΔt)² + (ΔVΔt)² = (ΔVΔt)² + (2ΔVΔt)² = 5(ΔVΔt)². That fact means the two possible sequences of Galilean boost applications differ only by a rotation. That means each one individually contains a rotation equal to half that total angle difference. This is the Thomas rotation.
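The two boost sequences described above can be reproduced numerically with equal-magnitude increments; a short sketch:

```python
import numpy as np

# Equal-magnitude velocity increments dV along x and y, each followed by a
# drift over a time step dt, applied in the two possible orders.
dV, dt = 0.1, 1.0

v = np.array([dV, 0.0])          # order 1: boost along x ...
p1 = v*dt                        # ... then drift
v = v + np.array([0.0, dV])      # boost along y ...
p1 = p1 + v*dt                   # ... then drift again

v = np.array([0.0, dV])          # order 2: boost along y first
p2 = v*dt
v = v + np.array([dV, 0.0])      # then boost along x
p2 = p2 + v*dt

# Same path length either way, so the endpoints differ by a pure rotation;
# each sequence individually contains half of the angle between them.
len1, len2 = np.linalg.norm(p1), np.linalg.norm(p2)
angle = np.arccos(p1 @ p2 / (len1*len2))
print(len1, len2, np.degrees(angle)/2)
```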
Let me now go further, and assert that Thomas rotation will arise from any kind of velocity transformation – Lorentz, or Galilean, or any other new kind that may not have a name yet. Thomas rotation is a property of actual reality, not of any particular mathematical model for reality.
With the Thomas rotation included, the total radiation from the atomic system is:
The value of the separation for which the energy gain and the energy loss balance is:
In the traditional approach to QM, r_e + r_p = ℏ²/μe², where μ is the reduced mass, defined by μ = m_e m_p/(m_e + m_p), and very nearly equal to m_e, and h is Planck’s constant, h ≈ 6.626×10⁻³⁴ Joule-sec, a fundamental constant given by Nature.
The present analysis does not require Planck’s constant as an input. Instead, it provides an estimate of Planck’s constant as an output:
and, for convenience, also an estimate of the often-seen reduced Planck’s constant:
These estimates can be improved by taking due account of the fact that the sines of small angles are not exactly equal to the angles themselves, and the cosines of small angles are not exactly equal to unity. The sine corrections are third order in angle, while the cosine corrections are second order in angle, which is more significant. Including those corrections reduces the radiation power slightly, and so reduces the solution slightly, and so reduces the estimates of and slightly – a step in the right direction.
5. EM interactions between neutral atoms: A candidate model for gravity?
Our current best understanding of gravity comes from GRT, which is founded in, and developed from, SRT. The parameter c from SRT appears in GRT, and reveals the lineage. Like electromagnetic signals, gravitational signals must have the finite propagation speed c. So might gravitational signals actually be electromagnetic signals? If we choose to update SRT with a more realistic signal model in place of the number c, does that require a similar update for GRT? This Section explores such questions.
First let it be noted that the Einstein gravitational field equations, like Maxwell’s coupled field equations, are a description at the microscopic scale, but the observable phenomena exist at an extremely macroscopic scale.
Figure 2 illustrates a fairly typical barred spiral galaxy. What this image seems to suggest is: there are mass concentrations at the two armpits of this galaxy. Maybe they are mega stars, or black holes. Maybe they orbit another mass concentration at the center of the galaxy, or maybe they just orbit each other. In any event, the two mass concentrations together create a rather structured field of gravitational potential, into which millions, or billions, of smaller stars are entrained, or temporarily detained, as they orbit the galaxy.
Figure 3 shows the skeleton of a potential field created by two super-massive bodies orbiting at half the signal propagation speed. The lines mark minima in gravitational potential as a function of angle around the galaxy. Observe that this skeleton approximately matches this galaxy image.
Note 1: To persist over time, the orbit speed of the driving two-body system in Figs. 2 and 3 must be such that the rates of energy loss by gravitational radiation and energy gain by torquing balance each other. Like the Hydrogen atom, the galactic-size two-body problem can be solved this way.
Note 2: Since most of the captive stars in Fig. 2 orbit at lesser speed than the driving two bodies and their rotating potential pattern (with its skeleton, Fig. 3), the individual stars in the outer reaches do not keep up with the rotating spiral potential pattern. An individual star sees a recurring ‘density wave’ of neighbor stars first approaching, then receding. Such density waves have long been known, but not well explained.
Note 3: Fig. 3 is constructed using brute-force calculation of potential for lots and lots of location points, and then using numerical search over angles for local minimum values, and then fitting a function to the minima. Note the slightly sinuous bar between the two driving bodies. Even this detail is suggested in Fig. 2.
Now, in order to model gravity in terms of electromagnetic interactions, we need an expression of electromagnetic interaction that is appropriate for neutral atoms. The best expression appears to be one that was known even before Maxwell. André Marie Ampère already had a well-developed theory about forces between what he called ‘current elements’. This term referred to charge-neutral material increments in electrical circuits. In modern times, P. Graneau wrote extensively about Ampère’s theory and experiments; see, for example, Graneau .
Ampère’s theory works perfectly well for ordinary closed circuits, as well as for incomplete broken circuits, such as may exist momentarily in transient situations, like explosive rupture of circuits. Ampère’s theory ought not be forgotten solely on the basis that more modern theory also works perfectly well for closed and stable electrical circuits. Indeed, in some technological applications involving transient situations like ruptures, Ampère’s theory explains more than the modern theory does.
One reason why Ampère’s formulation can sometimes be more powerful than a formulation based on Maxwell’s equations is that Ampère’s formulation can describe a multi-participant scenario straight away, whereas Maxwell’s equations require iteration through successive steps involving both Maxwell’s equations and the Lorentz force law for the force F acting on a charge q moving with velocity v:

F = q[E + (v/c)×B]
The iteration goes as follows: first, the input charge positions and motions generate E and B fields; second, the Lorentz force law tells how each charge responds to the fields from the other charges; another step through Maxwell’s equations tells how all the fields change, and so on.
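One Lorentz-response step of that iteration can be sketched as follows; the fields here are held fixed rather than re-solved from Maxwell's equations, and the simple Euler stepping is illustrative only:

```python
import numpy as np

# A charge responding to fixed uniform fields via the Lorentz force law
# (Gaussian units). In the full iteration, the fields would next be
# re-solved from Maxwell's equations using the updated charge motion.
q, m, c = 1.0, 1.0, 1.0
E = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 2.0])
v = np.array([1.0, 0.0, 0.0])
r = np.zeros(3)
dt = 1.0e-4

for _ in range(10_000):
    F = q*(E + np.cross(v, B)/c)   # Lorentz force on the charge
    v = v + (F/m)*dt               # response of the charge to the fields
    r = r + v*dt

print(np.linalg.norm(v))  # stays ~1: the magnetic force does no work
```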
One particular scenario illustrates the difference between the approaches especially well. Consider a current-carrying wire. It is tedious to use the Maxwell-Lorentz-Maxwell-Lorentz iterative approach to arrive at the understanding that the moving electrons favor the surface of the wire, or even the exterior neighborhood near the wire, leaving the interior of the wire depleted of electrons, and therefore in a state of internal repulsion between the remaining positive nuclei. Ampère’s formulation skips over all this detail, and describes the resulting consequence: the wire experiences internal longitudinal force, and in fact might even rupture. If it does rupture, one can easily tell that the event was not due to ordinary resistive heating and melting, since the fragments are found to be neither hot to touch nor melted in appearance.
The Ampère approach looks promising for gravity problems because any gravity problem is definitely a multi-participant scenario. And for the same reason, the following analysis also invokes ideas from modern Statistical Mechanics.
Ampère’s force formula can be written:

ΔF = −(Im In Δm Δn / r²m,n)(2cos ε − 3cos α cos β)

with positive values of ΔF meaning repulsion along the connecting line.
The indices m and n identify two interacting currents. The Im and In are current magnitudes. The Δm and Δn are magnitudes of tiny directed length increments Δm and Δn through which the currents flow. The products of current magnitudes and directed length increments, ImΔm and InΔn, are the current elements. The rm,n is the length of the vector separation rm,n between the current elements. The α, β, and ε are angles with respect to the connecting line between the two current elements, and with respect to each other. Current element Δm is at angle α from the connecting line, and current element Δn is at angle β from the connecting line. The ε is the angle between the two current elements, as if the distance did not separate them. The value ranges are all full circle: 0 to 2π.
One can get a feel for the general behavior of Ampère’s force formula by considering the angle factor for a few special cases:
Case 1: Current elements side-by-side and parallel, as in parallel wires. Both current elements are perpendicular to the connecting line, so sin α and sin β are unity and cos α and cos β are zero. But γ is zero, and cos γ = 1, so the angle factor evaluates to 2. The force is then negative. The current elements attract each other. If they reside in parallel wires, the wires attract each other. This you know from experience is true. In a plasma, instead of in a solid wire, it is called the ‘pinch effect’.
Case 2: Current elements side-by-side, but anti-parallel. This case is just opposite to Case 1 above: now cos γ = −1 and the angle factor evaluates to −2. The current elements repel each other. If they reside in a circuit, that circuit likes to straighten out any kinks and enclose more area. This you may know from experience is true.
Case 3: Current elements end-to-end, as in an electrical circuit. All three angles are zero, all three cosines are unity, and the angle factor evaluates to −1, so with the overall minus sign the force is positive. The current elements repel each other.
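The three special cases can be checked numerically. The sketch below assumes the classic Ampère angle factor (2 sin α sin β cos γ − cos α cos β), with an overall minus sign so that negative values mean attraction; the function names are mine, not from the text.

```python
import math

def ampere_angle_factor(alpha, beta, gamma):
    """Angle factor of the Ampere force between two current elements.

    alpha, beta: angles (radians) of each element from the connecting line.
    gamma: dihedral angle (radians) between the planes containing the
    connecting line and each element.
    """
    return 2 * math.sin(alpha) * math.sin(beta) * math.cos(gamma) \
        - math.cos(alpha) * math.cos(beta)

def ampere_force_sign(alpha, beta, gamma):
    # Overall minus sign: negative result = attraction, positive = repulsion.
    return -ampere_angle_factor(alpha, beta, gamma)

half_pi = math.pi / 2
print(round(ampere_force_sign(half_pi, half_pi, 0.0), 9))      # parallel, side by side: attract
print(round(ampere_force_sign(half_pi, half_pi, math.pi), 9))  # anti-parallel: repel
print(round(ampere_force_sign(0.0, 0.0, 0.0), 9))              # end to end: repel
```

The three printed signs reproduce Cases 1, 2, and 3 above.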
The inverse-square aspect of the Ampère force law is just like Newton’s law for gravity. Ampère designed his law that way, because, in his time, the greatest prior achievement in Science was Newton’s conquest of gravity. Now we wish to return the favor, and exploit the Ampère Force Law to understand something novel about gravity.
What makes the Ampère current element so potentially appropriate for application to gravity? First of all, it is charge-neutral, like the masses in a gravity scenario. Secondly, its electrons are moving, and although its nucleus is moving too, that motion is not anywhere near as fast. So at all times and all places where matter exists, at the microscopic level a net electron current flows.
The concept that current elements generate forces that can attract or repel each other suggests that pairs of current elements – or pairs of atoms – can be regarded as a system that can have positive or negative total energy. The kinetic part of the energy may be disregarded, since the current elements may be essentially static, but the potential part of the energy is worth paying attention to.
The main novel feature that gravity presents is that we usually have, not two current elements, but huge numbers of atoms, and each atom must have some relationship with all other atoms. The complexity of the situation naturally conjures up ideas from Statistical Mechanics. Here, ideas from Statistical Mechanics are applied to gravity described in terms of Ampère forces between atoms that are viewed as current elements. Some atom-to-atom relationships are momentarily attractive, and some relationships are momentarily repulsive, and all relationships must vary over time. We can look at the population of atom pairs as a whole, and think of it as a statistical ensemble, in which every condition of attraction/repulsion is represented somewhere.
Every area of physics that has statistical ensembles has Gaussian probability functions. In Classical Thermodynamics, a Gaussian probability density function for a random variable, such as a component of a particle momentum vector, implies maximum entropy, subject to a prescribed value for the standard deviation of that random variable. In Quantum Mechanics, a Gaussian probability density function (the squared wave function amplitude) is associated with minimum uncertainty, meaning minimum product of standard deviations in Fourier conjugate variables, like position and momentum.
Sometimes it is not immediately obvious that a problem involves the equivalent of a Gaussian function, because the Gaussian itself involves a squared variable, such as v² or x². The squared variable is proportional to some energy E. So one sees probability density functions expressed in the form exp(−E/⟨E⟩), where ⟨E⟩ is the average value of energy E, usually something like kT, where k is Boltzmann’s constant and T is absolute temperature.
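For concreteness, here is a small calculation with the standard Boltzmann form exp(−E/kT), using standard physical constants; nothing here is specific to this paper’s model.

```python
import math

k_eV = 8.617333e-5      # Boltzmann constant, in eV per kelvin
T = 300.0               # room temperature, kelvin
kT = k_eV * T           # about 0.026 eV

# Relative Boltzmann populations exp(-E/kT) for a few energies E (eV):
for E in (0.0, 0.05, 0.5, 1.0):
    print(E, math.exp(-E / kT))
```

Even modest energies compared to kT suppress populations by many orders of magnitude.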
In the case of gravity, that energy of interest is gravitational potential energy. It can have both positive and negative values. The central concept in Statistical Mechanics is that lower-energy states are populated more richly than higher-energy states are. This concept means that any two atoms, viewed as current elements, will be with respect to each other in a state of negative potential energy more often than in a state of positive potential energy. So they will, on average, attract each other more than repel each other. Therefore, the average energy ⟨E⟩ is negative. Since temperature cannot be negative, ⟨E⟩ cannot be of the familiar form kT; this is something novel.
To deal with negative ⟨E⟩ we really need a probability density function with a Gaussian factor of the form exp(−(E/θ)²), where θ is another parameter, positive, but also not related to temperature.
A few examples can illustrate how to find the parameters.
Let the two energies be E₁ and E₂. With the attractive-force, negative-energy state dominating the scenario, the average ⟨E⟩ is negative. The two Boltzmann factors are:
The average energy must satisfy the definition:
Simple trial calculations and numerical search of the results work well enough to solve the problem at hand. The solution is approximately:
(There is also, of course, a positive, and presently irrelevant, solution of the same magnitude.)
For this case we have integration in place of point evaluation. The average ⟨E⟩ becomes:
Because the denominator is simpler, begin with that. It is:
This is a positive number.
The numerator is more complicated, but it can be evaluated using integration by parts:
where and , so that . The first term in the numerator evaluation is:
The second term in the numerator evaluation is:
The sought numerator divided by denominator for is then:
Since the sought is negative, equal to , we have:
Again, numerical search is a practical approach for finding a solution. We find:
Observe that, as should be expected, this solution is significantly smaller in magnitude than was the solution with only two energy values, , to balance between, which came in at .
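The continuous calculation can be checked numerically. The sketch below assumes, as a simplified stand-in for the integrals above, a Gaussian weight exp(−(E/θ)²) restricted to negative energies, for which the average has the closed form ⟨E⟩ = −θ/√π; the function and parameter names are mine, not from the text.

```python
import math

def avg_energy(theta, n=20000, span=12.0):
    """<E> under weight exp(-(E/theta)^2) over negative energies,
    by trapezoid integration on [-span*theta, 0]."""
    lo, hi = -span * theta, 0.0
    h = (hi - lo) / n
    num = den = 0.0
    for i in range(n + 1):
        E = lo + i * h
        w = math.exp(-(E / theta) ** 2)
        c = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        num += c * E * w
        den += c * w
    return num / den

theta = 2.0
# Closed form for this half-Gaussian: <E> = -theta / sqrt(pi)
print(avg_energy(theta), -theta / math.sqrt(math.pi))
```

Given a target ⟨E⟩, the relation θ = −⟨E⟩√π then hands back the width parameter directly; for less tractable weights, a bisection search over θ does the same job.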
The continuous Gaussian profile,
extends to infinity in both directions. This attribute is inappropriate for the problem at hand, which definitely possesses limits beyond which the modeling problem does not extend. Therefore, let us turn to discrete approximations for a Gaussian. These are based on the binomial expansion for an arbitrary power N. The binomial coefficients are familiar to many people from Pascal’s famous triangle:
The numbers in Pascal’s triangle are constructed by addition of neighboring numbers above. This is easy for small N, but small N means crude, and we need refined. So we want large N, and therefore a formula involving multiplication instead of addition. That would be:

C(N, k) = N! / [k! (N − k)!] = [N (N−1) ⋯ (N−k+1)] / [1 · 2 ⋯ k]
Observe that the binomial coefficients are symmetric around the middle of the list, like a Gaussian function is symmetric around zero argument. If N is an even number, the number of binomial coefficients is N + 1, odd, and the middle one, the maximum one, is C(N, N/2). If N is an odd number, the number of binomial coefficients is N + 1, an even number, and the middle two numbers, the maximum two, are both C(N, (N−1)/2) = C(N, (N+1)/2).
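Here is a sketch of the multiplicative construction, and of how a binomial row tracks a Gaussian of standard deviation √N/2 (a standard de Moivre–Laplace fact); the function names are mine.

```python
import math

def binomials(N):
    """Row N of Pascal's triangle via the multiplicative recurrence
    C(N, k) = C(N, k-1) * (N - k + 1) / k, avoiding repeated addition."""
    row = [1]
    for k in range(1, N + 1):
        row.append(row[-1] * (N - k + 1) // k)
    return row

N = 40
row = binomials(N)
# Compare the row (normalized to its peak) with the Gaussian it
# approximates: mean N/2, standard deviation sqrt(N)/2.
sigma = math.sqrt(N) / 2
for k in (10, 15, 20):
    gauss = math.exp(-((k - N / 2) / sigma) ** 2 / 2)
    print(k, row[k] / row[N // 2], gauss)
```

The intermediate product in the recurrence is always exactly divisible by k, so the arithmetic stays in integers even for large N.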
For our modeling problem, let N be an odd number. Let the binomial coefficients be represented as C_k, with k running from zero up to the maximum. Let them be associated with equal energy increments starting from the most negative energy and covering the range up to zero. Associate the minimum binomial coefficient with the increment starting at the most negative energy, and the maximal binomial coefficient with the increment ending at zero, and associate the other coefficients with the increments between those limits. The problem to solve is:
Numerical investigations done to date suggest that the solution comes at approximately . Here the width parameter for this discrete model is analogous to the standard deviation of the corresponding continuous Gaussian.
The problem of modeling gravity therefore reduces to the problem of determining what value of N should be used. Here is the most pertinent fact: compared to anything electromagnetic, gravity is extremely weak. Consider two Hydrogen atoms at a given separation distance. Let us compare the gravitational force with the maximum Ampère force between them.
The gravitational attraction is proportional to G m_p², where G is the universal gravitation constant, about 6.67 × 10⁻⁸ cm³ per g s², and m_p is the mass of the proton, about 1.67 × 10⁻²⁴ g, so G m_p² is about 1.9 × 10⁻⁵⁵ dyn cm². Overall,
Clearly, the maximum Ampère force between atoms viewed as current elements is generously larger than the gravitational force between atoms viewed as charge-neutral masses – by about 32 orders of magnitude!
But the typical Ampère force is nowhere near as big as the maximum Ampère force, due to the fact that it depends on three angles, any one of which can spoil it. Occurrence of the maximum Ampère force is very rare indeed. And occurrence of the minimum (negative) Ampère force is equally rare. Only the piddling near-zero Ampère forces are common, and even then, each tiny attractive force is mostly cancelled with the tiny repulsive force of equal magnitude but less frequent occurrence. All we have is a tiny residue of attractive force, due to the nature of Boltzmann factors and Statistical Mechanics.
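The near-cancellation can be illustrated with a quick Monte Carlo over random orientations. This assumes the classic Ampère angle factor (2 sin α sin β cos γ − cos α cos β) and uniform, unweighted angles, so it shows only the cancellation itself, not the Boltzmann-weighted residue.

```python
import math, random

random.seed(1)

def angle_factor(a, b, g):
    # Assumed classic Ampere angle factor (see text).
    return 2 * math.sin(a) * math.sin(b) * math.cos(g) \
        - math.cos(a) * math.cos(b)

n = 200000
total = 0.0
near_max = 0
for _ in range(n):
    a, b, g = (random.uniform(0, 2 * math.pi) for _ in range(3))
    f = angle_factor(a, b, g)
    total += f
    if f > 1.9:          # close to the extreme value of 2
        near_max += 1

print(total / n)         # near zero: attraction and repulsion almost cancel
print(near_max / n)      # extreme orientations are rare
```

With all three angles uniform and independent, the exact unweighted average is zero; only a weighting that favors negative energies leaves a net attraction.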
Of course, being tiny does not mean being insignificant. Like the tiny residue that is microwave background radiation, the tiny residue that is gravity is a possible key to understanding something about the Universe in which we live. Here is an example problem: at present, we know the actual particle radius of the electron is something extremely tiny, but we do not know what its numerical value is. There exist a number of length-dimensioned quantities associated with the electron, all called ‘radius’, but distinguished by specific names and numerical values. MacGregor  lists seven of them. Most are on the order of cm, although one is much smaller, and is presently only upper-bounded at cm.
The radius attributed to the electron can have a role in the gravity problem. The ratio of an atomic radius to the electron radius can imply a candidate level of discretization for the binomial approximation to the Gaussian factor involved in the gravity problem. For an atomic radius, let us consider the first orbit radius of Hydrogen, about 0.529 × 10⁻⁸ cm. For the electron radius, let us consider two of the possibilities from MacGregor.
One of the electron radii is called classical. This one captures the Coulomb energy equivalence m_e c² = e²/r_e, which gives about 2.82 × 10⁻¹³ cm.
The ratio is then:
(0.529 × 10⁻⁸ cm) / (2.82 × 10⁻¹³ cm) ≈ 1.88 × 10⁴
The square root of this number would then be the dimensionless N for the discretization:
√(1.88 × 10⁴) ≈ 137
This is a number already famous in Physics, but in a context other than gravity. It is the inverse of the so-called ‘fine structure constant’ α, defined as α = e²/ℏc ≈ 1/137.036.
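The numerical coincidence can be verified directly from standard values of the Bohr radius and the classical electron radius:

```python
import math

# Standard CODATA-style values, expressed in cm:
a0  = 0.52917721e-8    # Bohr radius of Hydrogen
r_e = 2.8179403e-13    # classical electron radius

ratio = a0 / r_e
print(ratio)                 # about 1.88e4
print(math.sqrt(ratio))      # about 137, the inverse fine structure constant
```

This is no accident: the classical electron radius equals α² times the Bohr radius, so the square root of the ratio is exactly 1/α.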
This number plays a role in spectroscopy, where spectral lines occur in families, closely spaced but clearly distinguishable. There, the explanation comes from QM; clearly, another manifestation of natural discretization.
But if the classical radius of the electron were used in the gravity problem, the ratio of the average Ampère force magnitude to the maximum Ampère force magnitude would be approximately
This ratio is not appropriately small, so this is not the right discretization level for the gravity problem.
Another one of the electron radii given by MacGregor is called actual. It characterizes results of scattering experiments, and is the one presently only upper-bounded. No one knows how much smaller it could eventually turn out to be. So how much smaller would it have to be, in order to account for the extreme weakness of gravity? Gravity requires:
That in turn requires:
At present, such a value certainly looks impossible to test with any kind of measurement. It is smaller than anything we yet know about any elementary particle. But that circumstance may be a good thing, because an extremely small electron makes it easier to understand what data from Chemistry reflect, discussed next. And an extremely small electron, along with a correspondingly small positron, helps explain aspects of Elementary Particle Physics, discussed after that.
6. Algebraic chemistry and EZ water
Prof. Gerald Pollack of U. Washington wrote the authoritative book  about the physical phenomenon called ‘EZ water’. The EZ is short for Exclusion Zone, with the word ‘exclusion’ referring to a surface phenomenon that expels positive hydronium ions.
Prof. Pollack gave a talk about EZ Water at the 2013 meeting of the Natural Philosophy Alliance at College Park, MD, USA. All the phenomena he described were surprising; some were truly puzzling. EZ water apparently makes extended orderly arrays of hexagonal units. How can that behavior comport with our understanding that Nature maximizes entropy? Explanations then available were not at all quantitative. That fact suggested a real need for a more quantitative approach.
I had recently written my book about Algebraic Chemistry (AC). The name reflects the fact that the technique involves no integrals or other complicated math operations that would demand capabilities beyond those of a hand calculator. The worst operation is square root. So the AC approach looked promising for quick application to EZ water.
The fundamental idea behind AC is that all atoms share some similarities with Hydrogen atoms: 1) They have a nucleus that is similar to a proton, but scaled up to nuclear charge Z and nuclear mass M; 2) They have a population of electrons that is not entirely unlike a single electron; i.e., an interacting community that is somewhat coherent, and somewhat like one big electron orbiting the nucleus; 3) It is possible for the electron count to be different from the nuclear charge Z. This last possibility is what characterizes ions, and thereby creates all of Chemistry.
We begin with a clue: Eq. (17) indicates that the radius of the Hydrogen atom scales with the mass of the proton. This fact suggests that the base orbit energy of the Hydrogen atom scales with the inverse of proton mass. It further suggests that for an element with nuclear charge Z and mass M, the base orbit energy may scale with Z/M. If so, then when first-order ionization potentials for all elements are scaled by the inverse factor, M/Z, the scaled first-order ionization potentials might fall into some pattern.
We proceed with an observation: a pattern indeed emerges: the rise over every period in the Periodic Table is exactly the same factor.
We make a Hypothesis: all scaled first-order ionization potentials contain information valuable for all other elements: population-generic information. Each one contains a universal baseline contribution about interaction between the nucleus and the population of electrons as a whole. For all elements beyond Hydrogen, there is also a contribution about interactions among the electrons. That contribution can be very significant. For Helium, it is huge, meaning that two electrons bond together very strongly. And for Lithium, it is negative, meaning that two electrons actively work together to try to exclude a third electron.
Over the periods, there is obvious detail about the electron-electron interactions. Within each period, there are obvious sub-periods keyed to the nominal angular-momentum quantum number that is being filled. Plotted on a log scale, all sub-period rises are straight lines. The slopes all appear to be rational fractions. We can display these rational fractions in a Table, as was done in  and :
A non-traditional parameter is included in the display because, for , it is possible to write a simple formula for the fraction:
Also, all periods in the Periodic Table have length .
All this numerical regularity suggests that there really is a reliable pattern here, and we can reasonably seek to exploit it. Here is the first exploitation that suggests itself: Given first-order ionization potentials of many elements, we can estimate the additional energy required to remove a second electron from each, and then a third, and so on. This was first done in . Formulae were given for each individual electron removal or addition, and evaluated for a large number of elements.
One point that Ref.  emphasized was that the energy to remove a second electron, or a third, and so on, is not the same thing as the so-called ‘second-order ionization potential’, ‘third-order ionization potential’, and so on. Those energies are very large, which implies that those events are very violent: ripping two, or three, or more electrons off an atom all at once. Those energies do exhibit a lot of numerical regularity, but that isn’t important for understanding typical lab-bench chemistry, which is all about gentle events that occur one at a time.
Ref.  presented the equivalent summed formulae for removal of, or addition of, one, two, and three electrons, removed or added one at a time. Basically, use of these formulae saves the user some repetitive arithmetic that would be incurred using the formulae from Ref. .
Development of More Formulae:
For the present paper, the formulae from  are extended from the illustrative cases of removing, or adding, one, two, and three electrons, to the general case of removing, or adding, an arbitrary number of electrons.
Ref.  used two distinct symbols to distinguish between energy increments associated with the electron-nucleus interaction, and energy increments associated with electron-electron interactions. That distinction is analogous to the distinction between work and heat in thermodynamics: the work part is something a human can control, and the heat part is something that Nature simply does, regardless of what the human does.
The ion produced has a little less attraction between the nucleus and the now reduced electron cloud. So removing another electron should take a little less work:
This pattern generalizes to:
We also had:
We inferred in  that:
This pattern generalizes to:
Finally, in  we had:
This pattern generalizes to:
Now let us turn to adding electrons. First, use the formula for the energy for removing an electron from a neutral atom of element Z to describe instead removing an electron from the singly charged negative ion of element Z, which has Z + 1 electrons to start with:
Reversing the direction of the operation:
This means the work for adding one electron into the nuclear field is:
And the heat for re-adjusting the electron population is:
Now let us add a second electron. This will require additional work:
And it will cause another heat adjustment:
This means total energy involved in adding two electrons is:
Likewise, the total energy involved in adding three electrons is:
This pattern generalizes to:
Numerical Data to Insert in Formulae:
Numerical data for elements up to number 118 are given in . The numerical analysis of EZ water requires at most the data for the first ten elements. Expressed in electron volts, eV, these numerical data are:
Here are some example calculations concerning possible ionic configurations of ordinary, normal water.
Most people would guess that water is . But let us evaluate that ionic configuration. The transition takes:
So takes .
The transition takes:
So the ionic configuration requires . This is a positive energy requirement, which implies that some external assistance is needed to create this ionic configuration. So normal water may not be after all!
Another possibility is readily at hand though. The ionic configuration for normal water could be . The transition takes:
So takes . Notice that this is a huge negative energy. It reflects the fact that electrons really like to make pairs. Indeed, their propensity to do so motivated the invention of the so-called ‘spin’ quantum number. Without spin, many electrons in atoms would be violating the ‘Pauli Exclusion Principle’, which says that only one electron can be in any particular quantum state. Electron pairs are famous in Condensed Matter Physics too, under the name ‘Cooper pairs’.
Proceeding now to the transition , that takes:
Thus the creation of the water molecule in the ionic configuration demands altogether . This energy is solidly negative, which means that ordinary water is overwhelmingly in this ionic configuration, . This is normal water.
However, in situations where more than one version of anything can exist, both generally do exist, in proportions determined by their so-called Boltzmann factors, exp(−E/kT). Here E is energy, k is Boltzmann’s constant, and T is absolute temperature. Boltzmann factors are the result of entropy maximization at work. Because of non-zero Boltzmann factors, there will exist a tiny, tiny fraction of the first ionic configuration.
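Here is a sketch of how such Boltzmann factors translate energy differences into relative abundances. The energy gaps below are illustrative placeholders, not the paper’s computed values.

```python
import math

k_eV = 8.617333e-5          # Boltzmann constant, eV per kelvin
T = 298.0                   # kelvin

def population_ratio(dE_eV, T=T):
    """Relative abundance of a configuration lying dE_eV above another."""
    return math.exp(-dE_eV / (k_eV * T))

# Hypothetical energy gaps (eV) between two ionic configurations:
for dE in (0.1, 0.5, 1.0, 2.0):
    print(dE, population_ratio(dE))
```

Even a one-electron-volt gap leaves only a part in roughly 10¹⁷ in the higher configuration at room temperature, which is why the dominant configuration so thoroughly characterizes the substance.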
This analysis of normal water shows how quantitative approaches can sometimes unseat long-standing, but never-justified, assumptions in Chemistry.
About EZ Water:
The following transition is generally thought to represent the creation of EZ water:
Here parentheses are used to avoid implying anything about what charge the individual atoms within any ion or radical may carry. A full numerical analysis should consider all possible, or at least all plausible, ionic configurations of every molecule or radical involved.
One possible ionic configuration for the EZ water ion is . The  takes total energy . The  takes total energy . So the ionic configuration altogether takes . This is a negative energy, so this ionic configuration certainly can occur.
But there is also another possibility for the EZ water ion . It could have the ionic configuration . An takes , so takes . An takes energy:
So takes . Then the ionic configuration takes . This energy is much more negative than that of the first candidate ionic configuration for EZ water, . This fact means EZ water is nearly always in this second candidate ionic configuration, .
One possible ionic configuration for the hydronium ion is . This, I believe, is what most people would guess. But from the study of regular water, we know the candidate would take , and that the candidate would take , so the candidate ionic configuration for hydronium would take . This energy is positive, so this ionic configuration for the hydronium ion is not promising.
However, as was the case with the EZ water ion, there is another possibility for the hydronium ion. It could have the ionic configuration . The would take , and the would take:
So for the hydronium ion , the second candidate ionic configuration would take . This very negative energy explains why the reaction product that accompanies EZ water is a hydronium ion, rather than a naked proton plus a normal water molecule, which would take , which is not as negative.
The EZ water ion and the hydronium ion together take . Compare this energy to the energy taken by three normal water molecules: . The EZ water ion with the hydronium ion has lower energy than the three normal water molecules. That means that Nature will take any opportunity to make EZ water ions and hydronium ions.
It appears that what creates the opportunity is a surface, plus a little energy to separate the ions. Any material body provides some gravity to create a surface, and if there is also some small energy source, such as sunlight, to help separate ions, and if there is also some normal water, the situation will automatically create EZ water too. Even an icy comet might be able to create some EZ water.
Just as the myriad compounds in Chemistry arise from not-very-many chemical elements, some significant part of the m
Observe that radiation dominates for radii below the crossing point, whereas torquing dominates for radii above the crossing point. This means the balance between the two effects is unstable: a small excursion from balance in either direction causes more excursion in the same direction. This is interesting. It means that Hydrogen does not like to exist as an isolated atom. It wants to engage in chemical reactions. In the Universe at large, you will find Hydrogen in H₂ molecules, or in other molecules, or you will find Hydrogen plasma, consisting of naked protons and free electrons, but you will not find many isolated Hydrogen atoms.
We can discover even more about the Hydrogen atom if we include the appropriate angle sine and cosine factors in the calculations. These factors are oscillatory. So negative numbers sometimes occur with the torquing curve, and they cannot be plotted on a logarithmic scale. However, we are mainly interested in the points of balance between torquing and radiation, and the radiation curve is always non-negative, so the torquing curve is non-negative too at the balance points.
Figure 5 shows that we have not just one balance point, but many balance points. The balance points occur in close pairs, one stable and one unstable. The two pairs furthest left on the plot do not show as crossings because of the finite resolution of the plot, but they are certainly present.
More solution pairs are to be found, off the Figure to the left. Indeed, the solution pairs continue indefinitely, into smaller and smaller system radii. So Hydrogen has an infinite family of ‘sub-states’. Mills discusses these in .
The smaller and smaller radii of the balance points in Fig. 5 correspond to higher and higher orbit speeds. This idea conflicts with a prohibition imposed by SRT: no physical particle possessing mass is allowed to move at a speed matching or exceeding light speed . What does this conflict mean? I believe it means the prohibition should be understood more precisely to say: no physical particle possessing mass can be perceived to move at a speed matching or exceeding light speed , if we agree to process all received data in accord with Einstein’s Second Postulate. If we do not agree to that, then there is no particle speed limit.
We can also study positronium (a system consisting of one electron and one positron). See Fig. 6. As with Hydrogen, the oscillatory angle factors create a family of solutions for this system too. Half of them are stable, and half are unstable, and uncountably many of them occur at small radii and high speeds well in excess of c. Due to the finite resolution of the plot, only one pair of solutions is clearly visible in Fig. 6. But two more, at smaller radius and higher speed, are also certainly present. We can characterize these, and all high-speed solutions, without even knowing exactly what the radiation curve is like – its exact amplitude, or its radial dependence. The one low-speed solution is the familiar low-speed orbit. The many high-speed solutions have to occur in pairs just above and below special orbit speeds indexed by an arbitrary positive integer.
A parenthetical note applies for Fig. 6: for same-mass systems, the amplitude of the radiation curve should be less by a factor of 4, because there is no center-of-mass motion. This sort of numerical detail does not significantly affect where the solutions fall. That is determined almost entirely by the cosine factors that produce the deep dips that intersect the peaks in the curve for rate of energy gain due to torquing.
The oscillatory nature of the angle factors can turn a situation of seeming repulsion into a situation of actual attraction. This phenomenon of sign reversal due to signal delay is well known to engineers, who often deal with oscillating signals in feedback control systems. For the present application, consider two electrons in a circular orbit, and suppose they move at speed (π/2)c. One electron launches its signal radially outward. By the time this electron has executed half an orbit, this signal has expanded a distance equal to the orbit diameter. By then, the two electrons have exchanged places. So the expanding signal first contacts the second electron at exactly the signal launch point. Then the two electrons complete their orbit. At the end, the second electron finally understands its signal: it is to move radially. But by now, the two electrons have changed places again, and for the second electron, the direction commanded is inward. That situation is equivalent to attraction.
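The orbit speed implied by this geometry can be checked in a few lines: equate the time for the signal to cross one orbit diameter at speed c with the time for the electron to traverse half the circumference.

```python
import math

# Geometry from the text: while the launched signal expands across one
# orbit diameter (distance 2r at speed c), the electron completes half
# an orbit (arc length pi * r).  Equating the two times fixes the speed.
c = 1.0          # work in units of light speed
r = 1.0          # orbit radius (arbitrary; it cancels)

t_signal   = 2 * r / c           # time for the signal to cross the diameter
v_electron = (math.pi * r) / t_signal

print(v_electron / c)            # pi/2, about 1.571: a superluminal orbit speed
```

The radius drops out, so the coincidence depends only on the speed ratio, not on the size of the orbit.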
Given this mechanism for attraction, we can also study homogeneous systems: two electrons, or two positrons, for example. Again, there exist both stable and unstable solutions, and there are infinitely many of each, corresponding to orbit speeds of the form for arbitrary positive integer . See Fig. 7.
An additional parenthetical note applies for Fig. 7: for same-charge systems, the angular pattern of the radiation is quadrupole, rather than dipole, so the amplitude of the radiation energy loss curve should decline with a different power of distance. Again, this sort of numerical detail does not significantly affect where the solutions fall, which is determined almost entirely by the cosine factors that produce the deep dips that intersect the peaks in the curve for rate of energy gain due to torquing.
The stable solutions for two electrons bring to mind the situation that is so well known in Chemistry: electron pairs. They are everywhere in Chemistry. The most famous case occurs for Helium. Helium is a noble gas, and it reacts with other elements only under extreme duress. Helium has two electrons, and pulling one electron away is very costly: Helium has the highest ionization potential of any element. The message is: two electrons definitely do form a stable subsystem within an atom. The standard QM explanation for this invokes the concept of electron spin, with two possible values, ±1/2, allowing two electrons in the same overall energy state. Electron pairing also occurs famously in Solid State Physics, under the name of Cooper Pairs.
In the present Chapter, the new concept applied is the more realistic signal model for use in an improved version of SRT. The realistic signal model is based on Information Theory.
The new concept is implemented with very standard mathematics: differential equations, their family of solutions, and the particular problem boundary conditions. These mathematical ingredients for a proper signal model were all available in 1905, but they were not used in SRT. Why? I believe the fundamental reason is the history: Information Theory was not yet available, so no researcher at that time would have been likely to detect the inadequacy of the infinite plane wave as a signal model.
The present paper has shown that there are rewards for instead using the realistic photon/signal model. They include more insight into Quantum Mechanics, and into Gravity Theory, and potentially into Elementary Particle Physics. These are all subjects to be studied much more fully in the future.
Many textbook treatments of SRT devote a lot of space to Lorentz Transformations (LT’s). The present work has not mentioned LT’s at all. To this author, LT’s just seem to describe the wrongly informed opinions of different observers. So I don’t really want to focus on LT’s. But I have to mention them, because repairing SRT to take proper account of the concepts of IT casts doubt on Einstein’s SRT, and hence on LT’s. Therefore, I hereby relegate the unavoidable discussion of coordinate transformations, LT’s and others, to the following Appendix.
The situation in the late nineteenth century included the following fact: Maxwell’s first order coupled field equations appeared not to be invariant under Galilean transformation of coordinates (GT’s). Phipps  has written extensively about this apparent conflict between Maxwell’s Electromagnetic Theory and Newton’s Mechanics. In the early twentieth century, SRT brought in LT’s, and the conflict seemed to be resolved: Maxwell’s electromagnetic theory was clearly invariant under LT’s. This fact was taken as evidence in favor of Einstein’s SRT over Newton’s Mechanics.
But there is a puzzle left to resolve: Maxwell’s first order coupled field equations appear to qualify as tensor equations. Mathematicians had developed tensors in the first place to enable the articulation of mathematical statements that would be coordinate-free. So tensor equations are by definition invariant to all invertible coordinate transformations.
So what had happened here? I believe two circumstances had collided to create a very bad situation. One circumstance was that Mathematics had such a long history of developing eternal truths: the focus had been on arithmetic, geometry, and trigonometry – all of them eternal in character. Even archeo-astronomy was largely about the eternal repetition of events, and not about temporal evolution of events. Eternal truths really need not have a time dimension. They can, however, have as many spatial dimensions as may be desired, and that became the focus for much of tensor analysis. The other circumstance was that time became a really significant variable with the advent of modern Physics: Kepler, Galileo, Newton, and Maxwell. And time is a really different kind of variable than space is. Maxwell was very well aware of the difference, as he developed his electromagnetic theory in terms of Hamilton’s quaternions. The modern equivalent of the quaternion tool is the set of four complex Pauli spin matrices:

σ₀ = (1 0; 0 1), σ₁ = (0 1; 1 0), σ₂ = (0 −i; i 0), σ₃ = (1 0; 0 −1)
The first one, the time-like one, is the identity matrix. The other three, the space-like ones, produce the identity matrix when squared. When two of them are cross-multiplied, they generate a factor of $i$ times the third one, corresponding to the vector cross product in three-dimensional space.
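The algebra just described is easy to check numerically. Below is a minimal sketch in Python with NumPy; the particular sign conventions for $\sigma_1$, $\sigma_2$, $\sigma_3$ are the standard ones, which the text does not fix, so they are an assumption here.

```python
import numpy as np

# The four Pauli matrices: sigma_0 (time-like identity) and the three
# space-like matrices sigma_1, sigma_2, sigma_3.
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Each space-like matrix squares to the identity.
for s in (s1, s2, s3):
    assert np.allclose(s @ s, s0)

# Cross-multiplying two space-like matrices gives i times the third,
# mirroring the vector cross product in three-dimensional space.
assert np.allclose(s1 @ s2, 1j * s3)
assert np.allclose(s2 @ s3, 1j * s1)
assert np.allclose(s3 @ s1, 1j * s2)
print("Pauli algebra verified")
```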
The collision between circumstances came in the formulation of differential operators. People were familiar with the scalar chain rule, $d/dt = \partial/\partial t + v\,\partial/\partial x$, and did not realize that more information was needed in the context of vector and tensor applications with time as well as space dimensions. The Pauli matrices are easy to appreciate visually, so I will discuss transformations of coordinates in terms of matrices as well - but only real ones, not complex ones. Let $x$ stand for any spatial coordinate. A general coordinate transformation involving $ct$ and $x$ has the form:
$$\begin{pmatrix} ct' \\ x' \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix}.$$
For the familiar Lorentz transformation, $\beta = v/c$, where $v$ is the speed of the new coordinate frame relative to the old one. The letter is lower case to remind us that $v$ is limited; i.e. $v < c$, so $\beta < 1$. We have:
$$\begin{pmatrix} ct' \\ x' \end{pmatrix} = \frac{1}{\sqrt{1-\beta^2}} \begin{pmatrix} 1 & -\beta \\ -\beta & 1 \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix}.$$
For the long discarded Galilean transformation, $ct' = ct$ and $x' = x - B\,ct$, where $B = V/c$. The letter is upper case to remind us that $V$ is not limited, and might exceed $c$. So we have:
$$\begin{pmatrix} ct' \\ x' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -B & 1 \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix}.$$
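These two matrix forms are easy to compare numerically. The following is a minimal sketch in Python with NumPy, assuming the $(ct, x)$ ordering used above; the function names `lorentz` and `galilean` are mine, chosen for illustration.

```python
import numpy as np

def lorentz(beta):
    """Lorentz transformation matrix for (ct, x); requires |beta| < 1."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return gamma * np.array([[1.0, -beta], [-beta, 1.0]])

def galilean(B):
    """Galilean transformation matrix for (ct, x); B = V/c is unrestricted."""
    return np.array([[1.0, 0.0], [-B, 1.0]])

X = np.array([3.0, 1.0])  # a sample event (ct, x)

# The Lorentz transformation preserves the interval (ct)^2 - x^2 ...
ctp, xp = lorentz(0.6) @ X
assert np.isclose(ctp**2 - xp**2, X[0]**2 - X[1]**2)

# ... while the Galilean transformation preserves the time coordinate,
# even when B > 1 (frame speed exceeding c).
ctp, xp = galilean(2.5) @ X
assert np.isclose(ctp, X[0])
print("LT preserves the interval; GT preserves the time coordinate")
```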
For all such general coordinate transformations, there also exists a complement transformation. If the coordinates transform by the matrix $\Lambda$, the complement transformation is:
$$\widetilde{\Lambda} = \left(\Lambda^{-1}\right)^{\mathsf T}.$$
Its purpose is to preserve inner products; for example:
$$\left(\widetilde{\Lambda}\,\mathbf{u}\right) \cdot \left(\Lambda\,\mathbf{w}\right) = \mathbf{u} \cdot \mathbf{w},$$
for any pair of vectors $\mathbf{u}$ and $\mathbf{w}$.
For Lorentz transformation, the complement transformation is the inverse, or equivalently, the reverse transformation:
$$\widetilde{\Lambda}_{\mathrm{LT}} = \frac{1}{\sqrt{1-\beta^2}} \begin{pmatrix} 1 & \beta \\ \beta & 1 \end{pmatrix}.$$
But for Galilean transformation, the complement transformation is:
$$\widetilde{\Lambda}_{\mathrm{GT}} = \begin{pmatrix} 1 & B \\ 0 & 1 \end{pmatrix}.$$
This is the transpose of the inverse transformation - equivalently, the transpose of the reverse transformation - and not the reverse transformation itself. It looks so very strange because, for more than a century now, only Lorentz transformations of velocity have been used in mainstream theoretical Physics, and transposition does not change them.
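The complement’s defining property can be verified numerically. Below is a minimal sketch in Python with NumPy, assuming the $(ct, x)$ ordering and matrix forms given above; the function name `complement` is mine, used only for illustration.

```python
import numpy as np

def complement(L):
    """Complement of a coordinate transformation: the inverse transpose.
    If coordinates transform as X' = L @ X and gradient-like objects as
    D' = complement(L) @ D, then the inner product D . X is preserved."""
    return np.linalg.inv(L).T

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
LT = gamma * np.array([[1.0, -beta], [-beta, 1.0]])
GT = np.array([[1.0, 0.0], [-2.5, 1.0]])   # B = 2.5

# Lorentz: the matrix is symmetric, so the complement is just the inverse.
assert np.allclose(complement(LT), np.linalg.inv(LT))

# Galilean: the complement is the transpose of the inverse -- a genuinely
# different matrix, with the B entry moved above the diagonal.
assert np.allclose(complement(GT), np.array([[1.0, 2.5], [0.0, 1.0]]))

# In both cases, inner products are preserved.
X = np.array([3.0, 1.0])    # sample (ct, x)
D = np.array([0.7, -0.2])   # sample (d/d(ct), d/dx)
for M in (LT, GT):
    assert np.isclose((complement(M) @ D) @ (M @ X), D @ X)
print("complement transformations preserve inner products")
```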
The vital role for this strange new thing lies with the differential operators. The story is much like it was for the coordinates: there are two complementing transformations, and they involve not only inversion/reversal, but also transposition. Let $\partial_{ct}$ represent differentiation with respect to the time-like coordinate, and $\partial_x$ represent differentiation with respect to the spatial variable. Let us demand invariance of inner products involving differential operators; for the Galilean transformation, for example, this demand yields:
$$\partial_{ct'} = \partial_{ct} + B\,\partial_x, \qquad \partial_{x'} = \partial_x;$$
i.e., always two statements – not just one statement. This level of detail was missing from the scalar chain rule, and that omission caused people to believe that Maxwell’s equations could not be shown to be invariant under GT. And so they welcomed LT instead. This is not to say we should now revert to using GT again. Indeed, because of the half-retardation issue discussed in Sect. 3, the best transformation to use may involve, not $V$, but rather $V/2$.
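The claim that the full rule comprises two statements can be checked symbolically. Below is a minimal sketch with SymPy, using the Galilean relation $ct = ct'$, $x = x' + B\,ct'$; the particular sample field is an assumption made only for illustration.

```python
import sympy as sp

ct, x, B = sp.symbols("ct x B")
ctp, xp = sp.symbols("ctp xp")   # the primed (new-frame) coordinates

# A sample field, expressed in the old (unprimed) coordinates.
f = sp.sin(ct) * sp.exp(x)

# Galilean relation between the frames: ct = ct', x = x' + B*ct'.
to_new = {ct: ctp, x: xp + B * ctp}
g = f.subs(to_new)               # the same field in the new coordinates

# Statement 1 (the familiar chain rule): d/d(ct') = d/d(ct) + B * d/dx.
assert sp.simplify(sp.diff(g, ctp)
                   - (sp.diff(f, ct) + B * sp.diff(f, x)).subs(to_new)) == 0

# Statement 2 (the one the scalar rule omits): d/dx' = d/dx.
assert sp.simplify(sp.diff(g, xp) - sp.diff(f, x).subs(to_new)) == 0
print("both chain-rule statements verified")
```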
The use of matrices can make the detail needed in such future study very clear. However, many mathematicians tend to prefer tensor notation. But current-day tensor notation uses only two index positions, both on the right: down, called ‘covariant’, and up, called ‘contravariant’. To represent the transformations needed for Physics, it would be helpful, and maybe necessary, to add two more index positions, up and down on the left, to acknowledge transposition, using words like ‘trans-covariant’ and ‘trans-contravariant’ to emphasize what putting indices in those positions means.
This Chapter is dedicated to the memory of a most courageous researcher in theoretical and applied electrodynamics: Dr. Peter Graneau, 1921-2014. He encouraged me, and many other researchers, to give serious attention to the History behind Physics.