Many of us who made a living in the 20th century did so by functioning as some kind of engineer. Though schooled mostly in Physics, this author often functioned in those days as an engineer. It was a good continuing education. One aspect of it was the big tool kit in use. For example, some subject systems were best viewed in the frequency domain: a system functioning as a filter would suppress some frequencies and enhance others. But other systems were better viewed in the time domain: a system functioning as a controller would take a time series of input signals and produce a time series of output commands. Neither approach was considered more right, or more fundamental, than the other. They were complementary.
But in the 20th century, things felt less eclectic in Physics. Especially in the literature of Quantum Mechanics (QM), there often seemed to be a lot of passion about what viewpoint was allowed, and what viewpoint was not allowed. We were taught that it just was not correct to think of an atom as a nucleus with electrons in orbits around it. There could not be orbits; there had to be only ‘orbitals’, a new word coined to refer to complex wave functions that extended over all space, and provided only spatial densities of probability, in the form of squared amplitude. Except for its phase factor, there was no sense of time-line to an orbital. It was a stable state.
So in QM, the emphasis was all on the stable states. Between the stable states, there could occur transitions, resulting in emission or absorption of a photon, but the state transitions themselves were essentially instantaneous, and not open to study. This emphasis on the stable states, and the avoidance of the transitions between them, implied that questions about the details of state transitions should be regarded as illegitimate.
Back at the turn of the 20th century, there was a good reason for the avoidance of details about process in QM: we did not understand how any atom could resist one totally destructive process that was expected within the context of classical electrodynamics. Even the simplest atom, the Hydrogen atom, was expected to continuously radiate away its orbit energy, and so quickly collapse. So paying any attention to the details of process looked fraught with peril.
A way of escaping the issue of process in QM came along with the discovery of the photon. Quantization of light was implied in the spectrum of blackbody radiation, and demonstrated in the photoelectric effect. Those developments gave us Planck’s constant. Planck’s constant provided a constraint for defining the ground state of the Hydrogen atom. So Schrödinger wrote it into his equation for the wave function of the electron in the Hydrogen atom. The Schrödinger equation produced a set of solutions representing a whole family of stationary states for the Hydrogen atom. We could forget process, and focus on those stationary states.
But today, one of the big application areas for QM is the development of computational approaches fitting the name Quantum Chemistry (QC). Chemistry is largely about reactions, and certainly every reaction is a complex process. So chemical reactions are like quantum state transitions: they have not only the stationary state before and the stationary state after, but also something of interest in between. The historical injunction against inquiry into the specifics of quantum state transitions thus tends to inhibit the full application of QM to the process-related problems that QC presents. So consideration of process is no longer avoidable for QM.
Fortunately, history is never the final story; it exists mainly to be updated from time to time. A Chapter in an earlier Book in this series (Whitney, 2012) argued that our understanding of Maxwell’s Electromagnetic Theory (EMT) at the turn of the 20th century was incomplete. When we develop a description for the photon based on EMT, we learn some facts that have bearing on the communication between the electron and the proton in the Hydrogen atom, and how that communication in turn supports the continuing existence of the atom as a system. So consideration of a quantum process becomes less perilous for QM.
But the practical difficulties are numerous. In many cases, molecules involve numbers of atoms that are too small for any kind of statistical ideas to be applicable, but too large for traditional QM calculations to be practical; hence, they are altogether awkward to address. Furthermore, Chemistry is all about reactions, which can involve many molecules. And sometimes there are multiple reaction steps, or even multiple paths, each one with multiple steps, each one with a time line worthy of detailed numerical study.
The practical difficulties of QC arise largely because the most common way of thinking about QM is still in terms of wave functions. Their amplitudes are squared to make probability density functions, multiplied by functions or differential operators representing variables of interest, and integrated over argument variables. It can add up to way too much computation.
So how can QM better meet the needs of QC? A potentially helpful concept comes out of QM: the concept of duality. Abad and Huichalaf (2012) describe it in terms of seeming contradiction, and seeming is certainly the right characterization, for duality is not really a contradiction at all. Consider, for example, the traditional wave vs. particle duality of light. The earlier Chapter (Whitney, 2012) presented a model for the photon based on Maxwell’s four coupled field equations, together with boundary conditions representing the source and the receiver of the photon. The Maxwell photon model is pulse-like at emission, and evolves into an extended wave-like condition, and then collects back into a more confined pulse-like condition for absorption. In QM, all observable objects are like the Maxwell photon model, in that they present both particle-like and wave-like aspects. Which aspect is seen just depends on when and how one observes the object.
To meet the needs of QC, we clearly need to develop and exploit another duality analogous to the traditional wave vs. particle duality of observed objects. We need a duality of observer descriptions: state descriptions vs. process descriptions. Where the traditional QM approach starts with the idea of quantum state, which is something stationary, the dual approach must start with the idea of quantum process, which naturally has a time line to it. The temporal evolution of the Maxwell photon model is fundamentally a process, and it can be followed in detail, with no untoward disasters, so it is a promising point of departure for this work.
One key thing about photons is that they have finite energy. In this respect, photons are completely different from infinite plane waves, which have infinite energy. The Maxwell photon model has the finite total energy as needed. That energy is always trapped in the space between the source and the receiver. So another key role in the photon model is played by mathematical boundary conditions. Section 2 picks up where the earlier work left off, discussing in more detail what the boundary conditions are, what they do, and how they do it.
Photons were not known in Maxwell’s day, so the implication of their finite energy was not then appreciated. Coupled with their finite propagation speed, their finite energy causes the definite Arrow of Time that has long been considered such a mystery in Physics. Many people suppose that the Arrow of Time has to do with Thermodynamics, because that subject deals with entropy and irreversibility. Searching for a mechanism, many people would think of friction in Newtonian mechanics. When told that Electrodynamics displays the Arrow of Time, many people think first of the friction-like effect of radiation reaction acting on accelerating charges. But actually, the Arrow of Time is present quite apart from anything that happens to material particles. It appears in the photon itself.
There is a reason why this fact was not emphasized a long time ago. Section 3 recalls how Maxwell’s coupled field equations were immediately inserted, one into another, in order to reduce the set of four coupled field equations into a set of two un-coupled wave equations. The two un-coupled wave equations clearly display the finite propagation speed, but they totally hide the effect of finite energy.
The problem is this: the two un-coupled wave equations are less restrictive than the four coupled field equations are, so they have a larger set of solution functions, some of which do not also solve the four coupled field equations. One example of such a solution is a finite-energy pair of orthogonal E and B pulses that travels without distortion and faithfully delivers information.
In the early 20th century, Einstein used this kind of solution for the role of ‘signal’ in his Special Relativity Theory (SRT). His goal was to capture the spirit of Maxwell’s EMT into SRT. But his signal model is inadequate for that job. The Maxwell photon model better captures the spirit of Maxwell, and only slightly modifies the SRT results, and better ties those results into QM.
What about the QM of material particles? Section 4 revisits the Schrödinger equation. Viewed as the analog to a statement from Newtonian mechanics, it has no essential irreversibility to it. That is one reason why its solutions are stable states. That is why something else is needed for the study of transitions between the quantum states of atoms, and for the numerous chemical processes that QC should address.
Section 5 points out that the main thing for Chemistry is not revealed in the Hydrogen atom, due to the fact that the Hydrogen atom has only one electron. The main thing for Chemistry is the variety of relationships among multiple electrons. Chemistry data suggest that electrons can sometimes actually attract, rather than repel, each other. That propensity can be important in driving chemical reactions. The calculation approach suitable for QC was introduced in the earlier Chapter (Whitney 2012). It is called Algebraic Chemistry (AC). It is a big subject, detailed further in a full Book (Whitney 2013).
Section 6 concludes this Chapter. It draws a lesson from all the problems treated here. The lesson is that we sometimes actually make problems very much worse than they need to be by oversimplifying them. Many seeming mysteries in physical science are nothing but our own creations.
2. More about photons
The earlier Chapter (Whitney, 2012) revisited the quantum of light, the photon, and its relationship to Classical Electrodynamics. That Chapter argued that a simple mental picture of a photon as a pair of electric and magnetic field pulses that travel together, but do not change their pulse shapes, does not comport with Maxwell’s four coupled field equations. Instead, there has to be a temporal evolution, first from pulses emitted by a source, into a waveform shape extended in the propagation direction, then back to a more compact shape, concluding with absorption by a receiver; in short, a whole time-line process.
This Section gives further mathematical detail about the temporal evolution of the photon waveform. In summary, the evolution begins with emission of Gaussian field pulses at the photon source. After emission, the fields develop according to Maxwell’s coupled field equations. The development is constrained and guided by boundary conditions that represent the initial source of the photon and the ultimate receiver of the photon. In the end, all the energy accumulates near the receiver, and can finally be swallowed by it.
In more detail, Maxwell’s coupled field equations cause the Gaussians in E and B to beget first-order Hermite polynomials in B and E, and then those beget second-order Hermite polynomials in E and B, and so on, indefinitely as time goes on. The roots of each newly generated Hermite polynomial interleave with the roots of the previously generated Hermite polynomial, with one more root being added at each step of the process. This process is illustrated with Figure 1 in (Whitney, 2012). It shows spreading of the waveform in its propagation direction.
Next, there have to be boundary conditions to represent the source and the receiver. The boundary conditions can be like those representing the mirrors in a laser cavity: they can enforce a zero in the E field at the boundary locations. As a result of the zero-E condition at the source, the spreading never causes any backflow of field energy into the space behind the source. And as a result of the zero-E condition at the receiver, the waveform spreading never causes any field energy to propagate into the space beyond the receiver. So eventually all the energy just ‘piles up’ before its final absorption into the receiver.
A mental picture of the photon in terms of electric and magnetic fields is quite complicated, even without the boundary conditions. There have to be two field vectors, E and B, in orthogonal directions to create a cross product, the Poynting vector of energy propagation. And there have to be two such pairs of fields, one a quarter cycle out of phase with the other in both space and time, to create circular polarization. That makes a quartet of field vectors to think about. Then to create the boundary conditions, there have to be two more such quartets of field vectors, arranged to propagate in the opposite direction, and placed to provide the E-field cancellations at the boundaries. But each of these field vector quartets slightly spoils the boundary condition fixed by the other one. So then an infinite regression of more and more field quartets is demanded.
A mental picture of the photon can be formulated much more simply in terms of its overall profile of energy density, u = (E² + B²)/8π in Gaussian units. Before the boundary conditions are imposed, the energy density profile is always a simple Gaussian, the height of which decreases over time, and the width of which increases over time. Then to impose the boundary conditions, the infinite Gaussian tails get folded, and refolded infinitely many times over, back into the space between the source and the receiver. That means the energy density profile is slightly deformed from Gaussian in the vicinity of the source and in the vicinity of the receiver: cut off sharply at those points, and slightly more than doubled in height near them, because the folded-back tails add onto the direct profile there. Of course, the total energy, the integral of the energy density profile between the source and the receiver, never changes.
Figure 1 illustrates this mental picture of the photon as a changing energy density profile with constant total energy. The three data series plotted correspond to the energy density profiles near the beginning, in the middle, and near the end, of the photon propagation scenario. Take note of the phenomenon of waveform spreading. It is not reversible. It shows the Arrow of Time.
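For readers who want to experiment, the folding picture can be sketched numerically. The following is my own construction (unit separation, unit total energy; not the author's original computation): a unit-energy Gaussian at three successive widths, with its infinite tails folded back into the source-receiver interval by summing mirror images about x = 0 and x = L. The folding leaves the total energy unchanged, as described above.

```python
import numpy as np

L = 1.0                                   # source-receiver separation
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def folded_profile(x, center, width, n_images=60):
    """Unit-energy Gaussian on the full line, folded into [0, L] by images."""
    norm = 1.0 / (width * np.sqrt(np.pi))
    u = np.zeros_like(x)
    for k in range(-n_images, n_images + 1):
        u += norm * np.exp(-((2 * k * L + x - center) / width) ** 2)
        u += norm * np.exp(-((2 * k * L - x - center) / width) ** 2)
    return u

# three snapshots: near emission, mid-flight, near reception
for width in (0.05, 0.3, 1.5):
    u = folded_profile(x, center=0.5 * L, width=width)
    total = np.sum(0.5 * (u[1:] + u[:-1])) * dx   # trapezoid rule
    print(f"width = {width}: total energy = {total:.4f}")   # each ≈ 1.0000
```

Note how the late, wide profile is nearly flat but slightly raised near the two boundaries, where the folded tails pile up.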
3. On the arrow of time
There is an interesting history to the Arrow of Time, and it begins with the beginning of Mathematical Physics itself. Newton developed his laws of mechanics at a time of great interest in celestial events. In that sphere, the absence of contact invites the assumption of no friction, and in the absence of friction, Newtonian mechanics is invariant under time reversal. In this case, there is in principle no Arrow of Time. However, in practice there is a related problem about chaos, which develops in the mathematics when three or more bodies interact. It was certainly impossible to solve for three trajectories in closed form, i.e. as simple functions of time. Even today, it is not entirely clear exactly what is possible to do.
Maxwell developed his electromagnetic theory at a time of developing industrial technology. In that domain, the first thing encountered is limitation. With real-world machines, there is always an Arrow of Time, and it is best described with the science of Thermodynamics. In Maxwell’s electrodynamics, there are two important features. First, there is not instantaneous-action-at-a-distance; there is a finite propagation speed. It shows up clearly with propagating waves. Second, there can be infinite plane waves in the mathematics, but not in physical reality. Reality is always about finite energy.
The finite propagation speed was discovered first. People quickly transformed Maxwell’s four coupled field equations into something more familiar: two un-coupled wave equations. They then often fixated on the infinite plane wave solutions of those equations. The implications of finite energy were then left for investigation later. The job was partially addressed in the study of Optics: finite apertures create diffraction patterns transverse to the propagation direction. But the implications of pulsing in the propagation direction itself would become interesting only much later; for example, with the invention of pulsed lasers.
But much work was left undone at that time. So let us pursue the investigation a little further now. In modern notation and Gaussian units, Maxwell's four coupled field equations go

$$\nabla\cdot\mathbf{E}=4\pi\rho\ ,\qquad \nabla\cdot\mathbf{B}=0\ ,$$
$$\nabla\times\mathbf{E}=-\frac{1}{c}\,\frac{\partial\mathbf{B}}{\partial t}\ ,\qquad \nabla\times\mathbf{B}=\frac{4\pi}{c}\,\mathbf{J}+\frac{1}{c}\,\frac{\partial\mathbf{E}}{\partial t}\ .$$

Here B is the magnetic field vector and E is the electric field vector. In free space the permittivity ε and the permeability μ are unity, and the charge density ρ and the current density vector J are zero.
The two uncoupled wave equations go

$$\nabla^2\mathbf{E}-\frac{1}{c^2}\,\frac{\partial^2\mathbf{E}}{\partial t^2}=0\ ,\qquad \nabla^2\mathbf{B}-\frac{1}{c^2}\,\frac{\partial^2\mathbf{B}}{\partial t^2}=0\ .$$
Because the time derivatives in the two un-coupled wave equations are second order, the sign attributed to time cancels out. So the two un-coupled wave equations are invariant under time reversal. This fact means the two un-coupled wave equations are not equivalent to the four coupled field equations. The two un-coupled wave equations have a larger set of solutions than do the four coupled field equations. So while all of the solutions to the four coupled field equations will satisfy the two un-coupled wave equations, only some of the solutions to the two un-coupled wave equations will satisfy the four coupled field equations.
So some solutions to the two un-coupled wave equations do not satisfy the four coupled field equations. The infinite plane wave is one of those. It has the parameter c, which was interpreted as the speed of propagation for light, not just in the context of the infinite plane wave, but also in every other context; in particular, in the context of the concept of signal that Einstein used in the development of Special Relativity Theory (SRT).
But the signal for SRT cannot be just an infinite plane wave. A signal has to have some discernable feature in order to carry any information. Bracken (2012) discusses the mathematical concept of information, and its relation to entropy. The infinite plane wave has a delta-function spectrum, meaning just one wavelength, zero entropy, and hence zero information content.
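A toy calculation can make the zero-information claim concrete. The sketch below is my own illustration, not Bracken's formalism: it uses the Shannon entropy of a normalized power spectrum as a crude stand-in for information capacity, and shows that a one-line ("delta-function") spectrum has zero entropy while a finite band does not.

```python
import numpy as np

def spectral_entropy(p):
    """Shannon entropy of a spectrum treated as a probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]                      # 0 * log 0 is taken as 0
    return float(-(nz * np.log(nz)).sum())

k = np.arange(100)
delta_spectrum = np.zeros(100)
delta_spectrum[50] = 1.0                           # one wavelength only
pulse_spectrum = np.exp(-((k - 50) / 10.0) ** 2)   # finite pulse: a band

print(spectral_entropy(delta_spectrum))   # 0.0 -- no information capacity
print(spectral_entropy(pulse_spectrum))   # > 0 -- a band can carry information
```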
Instead of an infinite plane wave, a signal needs finite E and B pulses. These can be constructed from a spectrum of infinite plane waves with infinitesimal amplitudes. But as we have seen, such pulses have to evolve over time into spread-out waveforms. Pulses without such evolution can satisfy the two uncoupled wave equations, but they cannot satisfy Maxwell’s four coupled field equations.
The purpose of the SRT signal concept was to capture the essence of Maxwell’s EMT into SRT. So in retrospect, it does not seem very appropriate to have used a model for the signal that does not satisfy Maxwell’s four coupled field equations. It seems more appropriate to use the Maxwell photon model instead. Einstein did develop SRT at nearly the same time as he did his Nobel-prize winning work about the photoelectric effect – and the photon. But mysteriously, the idea that the SRT signal is similar to the photon did not come up at that time.
Better late than never, we can tackle that problem now. Clearly, at the beginning of the propagation scenario, the boundary condition at the source dominates the energy profile, and at the end of the propagation scenario, the boundary condition at the receiver dominates the energy profile. From this assessment, it is easy to imagine a generalization of the propagation scenario. Like the mirrors in a tunable laser, the source and the receiver in the propagation scenario can be imagined to move relative to each other. This possibility then suggests the deep question:
What is the frame of reference for light speed c?
This question lies at the heart of SRT. There, the reference for light speed is any and all observers. That is Einstein’s Second Postulate. To be fair, it was also the hidden Assumption of all prior works in Electrodynamics. So Einstein is to be credited for making it so explicit.
But a Postulate is a rather formalized way of stating an Assumption, and, as such, it neither invites as much scrutiny, nor conveys as much reward when scrutiny is given, as an Assumption would do. Linguistically, the Second Postulate may seem quite incomprehensible, but its status of Postulate, rather than Assumption, invites a level of deference. This appears to be a partial answer to the mystery noted above: why the similarity between the SRT signal and the photon never came up for examination.
However, use of the Maxwell photon model instead of the Einstein signal does allow a more linguistically normal kind of statement. Observe that during times soon after the emission, the boundary condition that is numerically more significant is the one demanding zero E at the source. But during times just before the reception, the boundary condition that is numerically more significant is the one demanding zero E at the receiver. The consequence is that the photon energy starts out traveling at speed c relative to the source, but then finishes up traveling at speed c relative to the receiver. In between, the reference for c has to transition gradually from one frame to the other, along with the numerical dominance of the corresponding boundary conditions.
This analysis leaves us with a conundrum: If the Maxwell photon model can be adopted as the signal for developing SRT, then Einstein's Second Postulate is true at the moment of reception, but not generally true at any moment before that event. So then SRT cannot be universally valid. But if the Maxwell photon model cannot be adopted as the signal for developing SRT, then SRT should not be considered as founded in Maxwell's EMT. So then SRT is not founded in any prior science. This conundrum does not disqualify SRT as a useful theory, but it does mean that prudence demands investigation of other theories too.
Use of the Maxwell photon model instead of Einstein's Second Postulate does produce results somewhat different from those of Einstein's SRT. Fortunately, it is not very difficult to work out these slightly different results. The different results that are important for connecting better with QM are given in Whitney (2012). In summary, they are the following. Within the Hydrogen atom, there exists not just the one electromagnetic process of radiation from the accelerating electron; there exist three electromagnetic processes. The other two are torquing internal to the atomic system, and circular motion of its center of mass. The torquing produces an energy gain mechanism that more than compensates the energy loss due to radiation. But the center-of-mass motion amplifies the radiation loss, bringing the atomic system into its final balance. The combination of these three electromagnetic processes, instead of just one, makes it possible to model a stable Hydrogen atom electromagnetically.
Here is how finite signal propagation speed causes the second and third effects. Each particle is attracted to a former position of the other. The forces are not central, and are not even balanced. The non-centrality causes the torquing. The force imbalance causes the center-of-mass circulation. That in turn causes Thomas rotation, which amplifies the radiation. It is interesting that Thomas rotation was first discovered in the context of sequential Lorentz transformations, and is generally believed to be a consequence of SRT. See, for example, De Zela (2012). But Thomas rotation actually does not depend on Lorentz transformations; it emerges just as well from sequential Galilean transformations. It is not a relativistic effect.
4. More about the hydrogen atom
The treatment of the Hydrogen atom given in Whitney 2012 was the first-order approximation, in which the cosine and the sine of an angle traversed around the circular orbit were represented to first order, by unity and by the angle itself. Let us now go further. The total radiation power changes from
The total torquing power changes from
Observe that both of these expressions are oscillatory. But where the radiation power is confined to non-negative values, the torquing power is not. So while the radiation power always represents a mechanism for energy loss, the torquing power does not always represent a mechanism for energy gain. For electron orbit radii below the Hydrogen ground state orbit, the torquing power oscillates between giving energy gain and giving energy loss. Therefore, there exist many orbit radii where the two powers balance, and the Hydrogen atom can exist, and even persist. These represent ‘sub-states’ of the Hydrogen atom. Such states are sometimes discussed in the literature of experimental physics, but never in the literature of theoretical physics. For one thing, they seem to involve orbit speeds beyond the speed of light, and so violate SRT. For another thing, they are not within the universe of discourse of the Schrödinger approach to the Hydrogen atom.
The Schrödinger approach to the hydrogen atom
Schrödinger’s famous equation representing a particle, such as an electron, reads:

$$i\hbar\,\frac{\partial\Psi}{\partial t} \;=\; -\frac{\hbar^2}{2m}\,\nabla^2\Psi \;+\; V\,\Psi\ ,$$

where Ψ(r, t) is the wave function, r is position in three-dimensional space, and t is time. The ħ is the reduced Planck’s constant h/2π, m is the particle mass, ∇² is the usual three-space second-derivative operator, and V is the potential energy, created for example by a nucleus. For an atom that is not moving, and is not perturbed by some measuring device, V is time-invariant, reducing to V(r).
Schrödinger’s equation is about a wave function, and not about a particle. So in the beginning, it seemed to have no clear foundation in the science prior to its own time. It was taken as a gift from heaven. But actually, Schrödinger’s equation does have a foundation – just not entirely within Physics, but also partly within Engineering Science. The following analysis shows that Schrödinger’s equation reduces to a classical equation based on Newton’s laws. The reduction uses Fourier transforms, a tool very commonly used in Engineering Science.
Consider first the wave function Ψ(r, t). By definition, it satisfies the normalization condition

$$\int \Psi^*(\mathbf{r},t)\,\Psi(\mathbf{r},t)\,d^3r \;=\; 1\ .$$
The function Ψ(r, t) has four-dimensional Fourier transform Φ(p, E), where p is momentum and E is total energy. This Fourier transform is defined by:

$$\Phi(\mathbf{p},E)\;=\;\frac{1}{(2\pi\hbar)^{2}}\int\!\!\int \Psi(\mathbf{r},t)\,\exp\!\big[-i(\mathbf{p}\cdot\mathbf{r}-Et)/\hbar\big]\,d^3r\,dt\ .$$
This function too satisfies a normalization condition

$$\int\!\!\int \Phi^*(\mathbf{p},E)\,\Phi(\mathbf{p},E)\,d^3p\,dE \;=\; 1\ .$$
The corresponding inverse Fourier transform is defined by

$$\Psi(\mathbf{r},t)\;=\;\frac{1}{(2\pi\hbar)^{2}}\int\!\!\int \Phi(\mathbf{p},E)\,\exp\!\big[+i(\mathbf{p}\cdot\mathbf{r}-Et)/\hbar\big]\,d^3p\,dE\ .$$
(An aside: definitions of Fourier transforms for other applications sometimes deploy the factors of 2π differently, although always such that the round trip from one space to the other and back again has a net factor of 1/(2π) for each dimension.)
Observe first of all that if Φ is very sharply peaked over a small range Δp, say centered at some p₀, then Ψ will be very spread out over a large range Δr, centered at some r₀. And vice versa: large Δp makes for small Δr. That means the Fourier pair of functions Ψ and Φ automatically generates a relationship that looks like the Heisenberg uncertainty relationship.
The product Δr·Δp has its minimum possible value when Ψ is a Gaussian function, in which case Φ is also a Gaussian function. For the Physics application, the minimum product is of order Planck’s reduced constant ħ.
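This reciprocal-width behavior is easy to demonstrate numerically. The sketch below is my own example, in dimensionless units where the minimum product Δx·Δk is 1/2 (multiplying by ħ converts Δx·Δk into the physical Δx·Δp); it compares a Gaussian pulse with a two-sided exponential pulse.

```python
import numpy as np

N, span = 4096, 80.0
x = (np.arange(N) - N // 2) * (span / N)
k = np.fft.fftshift(2 * np.pi * np.fft.fftfreq(N, d=span / N))

def rms_width(grid, density):
    """RMS width of a (not necessarily normalized) density on a grid."""
    density = density / density.sum()
    mean = (grid * density).sum()
    return np.sqrt((((grid - mean) ** 2) * density).sum())

def uncertainty_product(psi):
    spectrum = np.fft.fftshift(np.fft.fft(psi))
    return rms_width(x, np.abs(psi) ** 2) * rms_width(k, np.abs(spectrum) ** 2)

p_gauss = uncertainty_product(np.exp(-x ** 2 / 4))    # Gaussian pulse
p_laplace = uncertainty_product(np.exp(-np.abs(x)))   # two-sided exponential

print(p_gauss)     # ≈ 0.5, the minimum, attained by the Gaussian
print(p_laplace)   # noticeably larger than 0.5
```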
There is no way around the law Δr·Δp ≳ ħ. It is a property of Mathematics in general, and not of QM in particular. But physicists do worry about its meaning for QM. For example, Cini (2012) seems ready to remove the deBroglie and Schrödinger classical probability wave approaches from the main narrative of QM, and begin it instead with quantum field theory. One problem with this strategy is the risk of putting too much trust in SRT, which appears possibly flawed in its founding Postulate.
In contrast to physicists, engineers just accept the law Δr·Δp ≳ ħ, because in their world there never exists a measurement without a spread, and they regard any proposed perfectly precise physical quantity as just a metaphysical idea, and not a real physical thing.
The present analysis proceeds in that spirit. The next step is to rewrite Schrödinger’s equation in the form:

$$\int \Psi^*\Big(\!-\frac{\hbar^2}{2m}\nabla^2\Big)\Psi\,d^3r \;+\; \int \Psi^*\,V\,\Psi\,d^3r \;=\; \int \Psi^*\Big(i\hbar\frac{\partial}{\partial t}\Big)\Psi\,d^3r\ .$$
With its seemingly superfluous factors, this form of Schrödinger’s equation looks more complicated than the original form. But this form ultimately leads to tremendous simplification and explanatory power. And, as in Berkdemir (2012), the objective here is mainly pedagogical.
The first term on the left side of the rewritten Schrödinger’s equation is:

$$\int \Psi^*\Big(\!-\frac{\hbar^2}{2m}\nabla^2\Big)\Psi\,d^3r \;=\; \Big\langle \frac{p^2}{2m} \Big\rangle\ ,$$

where ⟨ ⟩ indicates statistically average value.
The second term on the left side is just:

$$\int \Psi^*\,V\,\Psi\,d^3r \;=\; \langle V \rangle\ .$$
The one term on the right side is:

$$\int \Psi^*\Big(i\hbar\frac{\partial}{\partial t}\Big)\Psi\,d^3r \;=\; \langle E \rangle\ .$$
So viewed in this way, Schrödinger’s equation reads:

$$\Big\langle \frac{p^2}{2m} \Big\rangle \;+\; \langle V \rangle \;=\; \langle E \rangle\ .$$
This presentation of Schrödinger’s equation just says that the classical kinetic energy plus the potential energy makes the total energy. This is basically a statement from classical mechanics, ultimately derivable from Newton’s laws.
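This averaged statement can be verified numerically. The sketch below is my own check, using the 1D harmonic oscillator ground state (in units ħ = m = ω = 1) as a convenient stand-in for Hydrogen: it confirms that ⟨kinetic⟩ + ⟨potential⟩ reproduces the known total energy E = 1/2.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.pi ** -0.25 * np.exp(-x ** 2 / 2)     # ground state, E = 1/2

d2psi = np.gradient(np.gradient(psi, dx), dx)  # numerical second derivative
kinetic = np.sum(psi * (-0.5) * d2psi) * dx            # <p^2/2m>, analytically 1/4
potential = np.sum(psi * (0.5 * x ** 2) * psi) * dx    # <V>, analytically 1/4

print(kinetic, potential, kinetic + potential)         # ≈ 0.25, 0.25, 0.5
```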
Observe that if V is time invariant, and so reduces to just V(r), then the probability density Ψ*Ψ is also time invariant, and Ψ reduces to just ψ(r)·exp(−iEt/ħ). That is, time then enters into the wave function only through its un-observable phase factor. That is why Ψ represents a stationary state for the Hydrogen atom.
Observe too that, however one writes Schrödinger’s equation, there is no parameter c, or any other trace of Maxwell theory in it. That is why Schrödinger’s equation cannot give any clue about sub-states of the Hydrogen atom.
Observe next that, in giving only stationary solutions, Schrödinger’s equation does not reveal the irreversibility that we know exists in our macroscopic world. Lunin (2012) has identified this absence of irreversibility in Schrödinger’s equation as an unsolved problem. Skála & Kapsa (2012) have noted that the measuring apparatus is not described in QM, and they have worked out an approach to deal with that deficit. Streklas (2012) has done the same with regard to the surrounding environment in which the system sits.
(An aside: the background could include a gravitational field, and Streklas notes how classical General Relativity Theory (GRT) breaks down at the Planck scale. As a cause for this failure, I suspect the SRT background from which GRT developed.)
In the present exposition, the time variation of V can represent the intrusion of a measurement process, or an environmental factor, or some other disturbance, and then Schrödinger’s equation can capture the phenomenon of irreversibility.
Can the foundation for Schrödinger’s equation be further explicated by using the proposed Maxwell photon model? Recall that in its emission / propagation / reception scenario, the Maxwell photon model naturally displays particle-like localization at the two ends, and wave-like periodicity in the middle. The middle part of the photon scenario establishes a precedent for the use of the wave function as the subject of the Schrödinger equation.
The Maxwell photon model also helps explain why Schrödinger’s equation seemingly demands complex numbers, while Physics before that time used them for convenience, but not out of necessity. Recall that the Maxwell photon model has a second vector pair a quarter cycle out of phase with the first vector pair to make the circular polarization. That sort of phase issue naturally brings complex numbers.
Also, recall that the important output from the Maxwell photon model was its energy density, defined in terms of squared electric and magnetic fields. If we represented the fields a quarter cycle out of phase as imaginary numbers, then we would need fields, not just squared, but multiplied by complex conjugate fields. That operation would resemble the familiar Ψ*Ψ operation for probability density.
Finally, the Maxwell photon model can help clarify the issue of ‘duality’. The word has been taken to suggest a mysteriously simultaneous wave-particle character. But the general Schrödinger equation, with a time-dependent V to represent some sort of measurement process, could certainly display the same less mysterious, more pedestrian, kind of duality that the Maxwell photon displays: sequential particle-like and wave-like behaviors.
Schrödinger’s equation gives a lot more than just the ground state of Hydrogen. Like Maxwell’s equations, it admits an infinite set of solutions. They are currently understood as representing an infinite set of excited states of the Hydrogen atom.
What exactly are excited states of the Hydrogen atom? The usual understanding is that they refer to something like spherical neighborhoods around the nucleus at larger radii, and that the electron can live in any one of these neighborhoods, and if it tumbles into a lower neighborhood, then a photon will be released.
I want to encourage readers to consider also any and all alternative interpretations that may be offered for the meaning of the term ‘excited state’. My own working idea (Whitney 2012) is that ‘excited state’ does not refer to an attribute that a single Hydrogen atom can possess. The Hydrogen atom is too simple; it has too few degrees of freedom. My mental image of ‘excited state’ is a system involving, not one, but several, Hydrogen atoms.
The basis for such a candidate interpretation is that a balance between radiation and torquing works out, not only for two charges of opposite sign, but also for two, or more, charges of the same sign – if superluminal orbit speeds are allowed. And what is there to disallow them? The only factors are Einstein’s Second Postulate and his resulting SRT, which together embed a rash denial of the well-known Arrow of Time. So be prudent; don’t a priori disallow superluminal orbit speeds.
The idea of the excited state of an atom as being actually a system of several atoms answers a need that was identified in Gevorkyan (2012). He pointed to spontaneous transition between quantum levels of a system as a hard-to-explain phenomenon. Indeed it is hard to explain in terms of excited states of a single atom. But it is easy to explain in terms of a system involving several atoms: the system can simply disintegrate back into several isolated single atoms.
5. Quantum chemistry
The main thing for Chemistry is not revealed in the Hydrogen atom, because the Hydrogen atom has only one electron. The main thing for Chemistry is the variety of relationships among multiple electrons. Chemistry data suggest that electrons can sometimes actually attract, rather than repel, each other. That propensity can be important in driving chemical reactions. A basis for understanding that process is needed, and it cannot be found in the Schrödinger equation, or in any extension of Quantum Mechanics that injects Special Relativity Theory.
Buzea, Agop, & Nejneru (2012) investigate the Bohm/Vigier approach and the Madelung approach for the kinds of problems that Chemistry presents. The former approach relies at its outset on SRT, which seems risky to those of us who doubt SRT. The latter approach invokes a ‘quantum potential’ for interaction with a ‘subquantic medium’. That sounds like ‘aether’, and seems risky to those of us who doubt the existence of ‘aether’.
So perhaps additional approaches are still to be welcomed. One such approach was introduced in Whitney (2012), is expanded in Whitney (2013, in press), and is discussed further below.
The approach is called Algebraic Chemistry (AC). The name reflects the fact that AC is carried out entirely with algebra, and not numerical integration. In fact, the math is hardly even algebra, since only the occasional square root goes beyond simple Arithmetic. Such simple math suffices because the AC approach is based on scaling laws. The model for the Hydrogen atom is the prototype for similar models of the atoms of all the other elements. The input information for all atoms consists of ionization potentials. The raw data set looks quite daunting, but as reported in Whitney (2012), the data fall into neat patterns when scaled using the nuclear mass number and the nuclear charge.
This scaling produces a variable that we call ‘population generic’ because information about any element can be inferred from information about other elements. The AC Hydrogen-based model invites the division of each ionization potential into two parts, one part being for the Hydrogen-like interaction of the electron population as a whole with the nucleus, and the other part being the increment for the electron-electron interactions.
Modeling the energy requirements for making ions
The two scaled parts of the ionization potentials for 118 elements are given in Whitney (2012). The utility of that data lies in the larger universe of inferences it supports. It can be used to estimate the actual energy involved in creating any ionization state of any element. To refer to the data used, the term ‘population generic’ applies. To refer to the inferences made, the term ‘element-specific’ applies.
The formulae used are essentially the same for every element, so they can be stated once, symbolically, for an arbitrary element. First consider the transition from the neutral atom to the singly charged positive ion. It definitely takes an energy investment given by the first ionization potential, in which factors of nuclear charge and nuclear mass number restore the population-generic information to element-specific information. This energy investment corresponds to a potential ‘wall’ to be gotten over. The wall has two parts: the Hydrogen-like part and the electron-electron increment. The transition may also consume some heat, or generate some heat, as the remaining electrons form new relationships, not necessarily instantaneously. This process constitutes adjustment to the rock pile, or the ditch, on the other side of the potential wall. It is represented by an increment term whose factors of nuclear charge and mass number restore the population-generic information to element-specific information tailored for the singly charged ion. Thus altogether, the first ionization takes the wall term plus this adjustment term.
Now consider removal of a second electron, taking the singly charged positive ion to the doubly charged one. Being already stripped of one of its electrons, the system has less internal Coulomb attraction than the neutral atom has. So the factor multiplying the electron-electron increment has to change to something smaller. Since Coulomb attraction generally reflects the product of the number of positive charges (here the nuclear charge) and the number of negative charges (here one fewer than the nuclear charge), the reduced factor reflects that product. Given this reduced factor, the second ionization again takes a wall term plus an adjustment term.
Observe that putting the two steps together, the adjustment terms involving the intermediate singly charged ion cancel, so the cumulative cost of removing both electrons contains only the two wall terms, plus the adjustment terms for the final ion and the initial neutral atom. This cancellation of intermediate adjustment terms is typical of all sequential ionizations, of however many steps. Observe too that in the cumulative, there are two ionization-potential terms. This makes sense since two electrons are removed.
From the above, it should be clear how to proceed with stripping however many more electrons you may be interested in removing. Now consider adding an electron to the neutral atom. The problem is similar to removing an electron from the singly charged negative ion, but in reverse. So adding the first electron takes the negative of the energy for that removal.
Going one step further, the problem of adding another electron to the singly charged negative ion is similar to removing an electron from the doubly charged negative ion, but in reverse. So adding the second electron likewise takes the negative of the energy for that removal.
Observe that putting the two steps together, the adjustment terms involving the intermediate singly charged negative ion cancel, again leaving a cumulative expression with fewer terms.
From the above, it should be clear how to proceed with adding however many more electrons you may be interested in adding.
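The bookkeeping described above can be sketched in a few lines of code. The numerical values below are hypothetical placeholders, NOT data from Whitney (2012); the point is only the pattern of wall terms and heat-adjustment terms, and the telescoping cancellation of the intermediate terms:

```python
# walls[k] is the potential wall for removing the (k+1)-th electron;
# heats[k] is the heat-adjustment term for the ion with k electrons
# removed (heats[0] belongs to the neutral atom). All values here are
# hypothetical placeholders in eV.

def strip_cost(walls, heats, k):
    """Energy to remove the k-th electron (1-indexed): the k-th
    potential wall, plus the change in the heat-adjustment term
    between the two ionization states."""
    return walls[k - 1] + heats[k] - heats[k - 1]

def total_strip_cost(walls, heats, n):
    """Cumulative energy to remove the first n electrons."""
    return sum(strip_cost(walls, heats, k) for k in range(1, n + 1))

walls = [13.6, 24.0, 48.0]    # hypothetical wall heights, eV
heats = [0.0, 1.5, 2.2, 3.0]  # hypothetical adjustment terms, eV

# The intermediate adjustment terms telescope away: only the walls
# and the first and last adjustment terms survive.
two_step = total_strip_cost(walls, heats, 2)
assert abs(two_step - (walls[0] + walls[1] + heats[2] - heats[0])) < 1e-9
```

The final assertion is the telescoping property noted above: for a two-step ionization, only the two wall terms and the end-point adjustment terms remain.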
To illustrate the development and use of information, consider a few example elements: Hydrogen, Carbon and Oxygen. The steps to develop essential information for Hydrogen are:
The steps to develop the information needed for Carbon are far more numerous because it routinely gives or takes so many electrons. The steps are:
The steps to develop information for Oxygen are:
What can we tell from all this information? Consider a few of the molecules that these elements can make. The simplest one is the Hydrogen molecule. Forming it takes an energy that is very negative, which means the Hydrogen molecule forms quickly, even explosively. Isolated neutral Hydrogen atoms are rare in Nature. Even at very low density, in deep space, Hydrogen atoms would rather form molecules, or form plasma, than remain as neutral atoms. How ironic it is that the prototypical atom for the development of QM was something that is hardly to be found in Nature!
Another simple molecule is the Oxygen molecule. It illustrates the interesting possibility of more than one ionic configuration, a situation that turns out to be the case for many molecules. The Oxygen molecule can exist in either of two ionic configurations, and the energies to form them are close. Although the second ionic configuration is better in terms of energy, the two are close. In situations like this, both ionic configurations exist, in proportions determined by thermodynamic entropy maximization, which depends on temperature.
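One simple way to model the temperature-dependent proportions just mentioned is a Boltzmann factor over the energy gap between the two configurations. The gap value below is a hypothetical placeholder, not a number from this text:

```python
import math

# Boltzmann constant in eV per kelvin.
K_B_EV = 8.617e-5

def population_ratio(dE_eV, T_kelvin):
    """Ratio of the higher-energy configuration's population to the
    lower-energy configuration's population, in thermal equilibrium."""
    return math.exp(-dE_eV / (K_B_EV * T_kelvin))

# A hypothetical small gap at room temperature still leaves a
# noticeable minority population of the higher-energy configuration.
ratio = population_ratio(0.05, 300.0)
```

The design point is simply that when the gap is small compared with the thermal energy scale, both configurations are present in comparable amounts, and the proportion shifts with temperature.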
The fact that the two ionic configurations of the Oxygen molecule are so close in energy means that transitions between them are easy. This can make the Oxygen molecule an absorber and emitter of low-energy photons; i.e., infrared photons, i.e., heat. This can in turn make Oxygen act as a so-called ‘greenhouse gas’; i.e., a contributor to atmospheric warming. But as Oxygen-consuming animals, we just never speak of Oxygen in such a derogatory way.
Consider water. Water again illustrates the possibility of more than one ionic configuration for a given molecule. Water is known to dissociate into the naked proton H⁺ and the hydroxyl radical OH⁻. In turn, the hydroxyl radical OH⁻ has to be the combination of ions O²⁻ and H⁺; there is not an alternative form using O⁻, because then the Hydrogen would have to be neutral. So we might well imagine that water had the ionic configuration 2H⁺ plus O²⁻. But the formation of that ionic configuration takes a slightly positive energy. That can’t be right for the common water molecule covering our planet. So in fact, common water must not live in the ionic configuration to which it dissociates, and thereby dies. Therefore, consider the alternative ionic configuration 2H⁻ plus O²⁺. This one requires a decidedly negative energy, and so is believable for a decidedly stable molecule.
Water in the normal ionic configuration has to form a tetrahedron, with two naked protons on two vertices and two electron pairs on the other two vertices. That is why the water molecule we know has a bend to it. Viewing the Hydrogen nuclei as lying on arms originating from the Oxygen nucleus, the angle of the bend is the angle characteristic of arms from the center to two corners of a regular tetrahedron – on the order of 109.5°.
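The bend angle described above follows from the geometry of the regular tetrahedron alone; it is arccos(−1/3):

```python
import math

# Angle between two arms drawn from the center of a regular
# tetrahedron to two of its corners: arccos(-1/3), about 109.47 deg.
angle_deg = math.degrees(math.acos(-1.0 / 3.0))
```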
The other ionic configuration for water, 2H⁻ plus O²⁺, is the charge mirror image of the commonly known one. But it has to be a completely different shape: not tetrahedral, but instead linear. It just looks like H··O··H, where the dots mean ‘chemical bond’. This form of water apparently does exist, but only in a very un-natural circumstance. There exists an electrochemically created substance known as ‘Brown’s gas’ that has occasioned some impossibly wild claims about energy generation, but has also been investigated quite legitimately for applications in welding. A linear isomer of water is thought to be the active ingredient in Brown’s gas.
The story of water tells us that even the most familiar of compounds can have some very interesting isomers. The conclusion to be drawn is that any molecule with three or more atoms can have isomers that differ, certainly in ionic configuration, but probably also in molecular shape, and in resultant chemical properties.
The very negative energy for forming the normal ionic configuration of water is what makes the burning of all hydrocarbons so worthwhile as energy sources. It is the main thing, but it generally attracts no mention. Other reaction products get all the attention.
Consider Carbon dioxide. It is a normal atmospheric constituent, a product of hydrocarbon combustion, a possible contributor to global warming, and sometimes a target for government regulation.
Carbon dioxide again illustrates the possibility of several ionic configurations for a molecule; it has at least four plausible ionic configurations. The energy requirements to make them from the neutral atoms can be worked out with the formulae given above.
The ionic configuration favored electrically is the most abundant, but the others must also occur, all in thermodynamically determined proportions. The fact that there are so many possibilities for just this one little tri-atomic molecule means that Quantum Chemistry can benefit from using AC to identify all the possibilities in a situation, select rationally among them, and spend computation power wisely.
Single-electron state filling over the periodic table
Nobody is yet satisfied that we completely understand the Periodic Table. QM informs us of single-electron states and their quantum numbers; we can tell from spectroscopic data which single-electron states are filled for each element; and we can see what the governing rule is. But we do not understand why that is the rule, and we do not understand why there are exceptions to the rule.
The normal order of state filling can be described in terms of the quantum numbers n for radial level, l for orbital angular momentum, and s for spin. The normal order steps through the subshells period by period, filling all states of one spin direction first, and then all states of the other, within each subshell. So that makes the normal order:
In Period 1: 1s↑ and 1s↓;
Then in Period 2:
2s↑, 2s↓, and 2p↑ three times, 2p↓ three times;
Then in Period 3:
3s↑, 3s↓, and 3p↑ three times, 3p↓ three times;
Then in Period 4:
4s↑, 4s↓, 3d↑ five times, 3d↓ five times,
4p↑ three times, 4p↓ three times;
Then in Period 5:
5s↑, 5s↓, 4d↑ five times, 4d↓ five times,
5p↑ three times, 5p↓ three times;
Then in Period 6:
6s↑, 6s↓, 4f↑ seven times, 4f↓ seven times,
5d↑ five times, 5d↓ five times,
6p↑ three times, 6p↓ three times;
Then in Period 7:
7s↑, 7s↓, 5f↑ seven times, 5f↓ seven times,
6d↑ five times, 6d↓ five times,
7p↑ three times, 7p↓ three times.
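The period-by-period filling pattern above can be checked mechanically. This sketch just encodes the standard subshell sequence for each period and confirms that the resulting period lengths match the Periodic Table; it is a bookkeeping aid, not a derivation:

```python
# Orbitals per subshell type: s holds 1, p holds 3, d holds 5, f holds 7.
SUBSHELL_CAPACITY = {'s': 1, 'p': 3, 'd': 5, 'f': 7}

# Subshells filled in each period, in the normal order.
PERIOD_SUBSHELLS = [
    ['1s'],
    ['2s', '2p'],
    ['3s', '3p'],
    ['4s', '3d', '4p'],
    ['5s', '4d', '5p'],
    ['6s', '4f', '5d', '6p'],
    ['7s', '5f', '6d', '7p'],
]

def filling_order():
    """Single-electron states in normal filling order: within each
    subshell, all spin-up states first, then all spin-down states."""
    states = []
    for subshells in PERIOD_SUBSHELLS:
        for sub in subshells:
            m = SUBSHELL_CAPACITY[sub[-1]]
            states += [(sub, 'up')] * m + [(sub, 'down')] * m
    return states

# Electrons added per period: two spins per orbital.
period_lengths = [
    2 * sum(SUBSHELL_CAPACITY[s[-1]] for s in subs)
    for subs in PERIOD_SUBSHELLS
]
```

Running the check gives period lengths 2, 8, 8, 18, 18, 32, 32, totaling 118 single-electron states, one per element.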
It is perhaps possible to do enough QM calculations to develop a numerical explanation for this pattern. But we do not have from QM any higher-level, conceptual explanation for this pattern.
And on top of that, there are 19 exceptions to the pattern. They are:
In Period 4: Chromium and Copper;
In Period 5: Niobium, Molybdenum, Ruthenium, Rhodium, Palladium, and Silver;
In Period 6: Lanthanum, Cerium, Gadolinium, Platinum, and Gold;
In Period 7: Actinium, Thorium, Protactinium, Uranium, Neptunium, and Curium.
So it is a good project for AC to try to improve this situation, both in regard to explaining the pattern, and in regard to explaining the exceptions.
The Hydrogen-based model used for AC makes the electron population a rather localized subsystem, orbiting the nucleus, rather than enclosing it. The electron subsystem is composed of electron rings spinning at superluminal speeds, stacked together like little magnets. This model creates a hierarchy of magnetic confinement levels. Two rings with two electrons each create a ‘magnetic bottle’, and it can contain up to two geometrically smaller rings with three electrons each. Two such three-electron rings create a stronger ‘magnetic thermos jug’. That can contain up to two geometrically smaller rings with five electrons each. Two such five-electron rings create an even stronger ‘magnetic Dewar flask’. It is capable of containing up to two geometrically smaller rings with seven electrons each.
The electron state filling sequence is determined by a rather ‘fractal’ looking algorithm: Always build and store an electron ring for the largest number of electrons possible, where ‘possible’ means having a suitable magnetic confinement volume available to fit into, and where ‘suitable’ means created by two electron rings with smaller electron count, and not yet filled with two electron rings of larger electron count. Sometimes only a new two-electron ring is possible, and that is what starts a new period in the Periodic Table.
So there follows the expected order for the filling of single electron states across the Periodic Table.
AC can also identify factors that account for individual exceptions. The worst exception is Palladium, because it has not just one, but two, violations of the nominal pattern. Here is the explanation for Palladium. According to the nominal electron filling pattern, Palladium would have a not-yet-used space for two three-electron rings, and hence it would have an un-used two-electron ring. Total consumption of that un-used two-electron ring into an unfilled five-electron ring allows an extremely symmetric stack of filled electron rings. It goes: 2, 3, 5, 5, 3, 2, 3, 3, 2, 3, 5, 5, 3, 2. The opportunity for such symmetry is what trumps the nominal pattern.
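The symmetry claim for Palladium is easy to check arithmetically: the listed ring stack accounts for all 46 of Palladium's electrons and reads the same in both directions:

```python
# Ring stack proposed above for Palladium (Z = 46), listed from one
# end of the stack to the other.
pd_stack = [2, 3, 5, 5, 3, 2, 3, 3, 2, 3, 5, 5, 3, 2]

electron_count = sum(pd_stack)       # total electrons in the stack
is_symmetric = pd_stack == pd_stack[::-1]  # palindromic stack
```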
Here is a list of brief comments about all of the exceptions:
Chromium robs one electron from a two-electron ring to complete a five-electron ring.
Copper robs one electron from a two-electron ring to complete a five-electron ring.
Niobium robs one electron from a two-electron ring to complete a five-electron ring.
Molybdenum robs one electron from a two-electron ring to complete a five-electron ring.
Ruthenium robs one electron from a two-electron ring to complete a five-electron ring.
Rhodium robs one electron from a two-electron ring to complete a five-electron ring.
Palladium completely consumes a two-electron ring to complete a five-electron ring.
Silver robs one electron from a two-electron ring to complete a five-electron ring.
Lanthanum puts an electron in a five-electron place instead of a seven-electron place.
Cerium puts an electron in a five-electron place instead of a seven-electron place.
Gadolinium puts an electron in a five-electron place instead of a seven-electron place.
Platinum robs one electron from a two-electron ring to complete a five-electron ring.
Gold robs one electron from a two-electron ring to complete a five-electron ring.
Actinium puts an electron in a five-electron place instead of a seven-electron place.
Thorium puts an electron in a five-electron place instead of a seven-electron place.
Protactinium puts an electron in a five-electron place instead of a seven-electron place.
Uranium puts an electron in a five-electron place instead of a seven-electron place.
Neptunium puts an electron in a five-electron place instead of a seven-electron place.
Curium puts an electron in a five-electron place instead of a seven-electron place.
It is this author’s opinion that a more fully developed quantum mechanics, giving equal attention to both stable states and the transitions between them, would fulfill a property that the subject matter of quantum mechanics has always demanded: some kind of duality. All of the objects of study in quantum mechanics exhibit a wave-particle duality, and the theory itself needs a corresponding kind of duality: attention both to the definition of stable states, and to the study of details of state transitions.
This paper attempts to make some progress in that direction. It gives several examples of old problems treated with a new approach. The first of these concerns the nature of the photon. There is a perception that the discovery of the photon marks a departure from Maxwell. This author disagrees with that perception. The second problem concerns Schrödinger’s equation. There is a perception that Schrödinger’s equation marks a departure from Newton, and the classical physics of particles. This author disagrees with that perception too. The third problem has to do with the application of QM in Chemistry. There is a perception that such an application of QM demands extensive computer calculation. This author believes in an alternative approach based on scaling laws: Algebraic Chemistry.
All of these problems illustrate a reasoned concern about the current practice of Physics. I think Physics sometimes goes a step too far in the direction of reductionism. Einstein’s signal for SRT, with only a speed parameter, is a step too far. The photon concept without Maxwell’s equations is a step too far. An atom with a stationary nucleus is a step too far. Schrödinger’s equation without a time-dependent Hamiltonian is a step too far. A single atom with multiple excited states is a step too far. And so on.
Consider some history. At the turn of the 20th century, there was a lot of work concerning an isolated electron. Why does the electron not explode due to internal Coulomb forces? Why does the equation of motion for the electron allow run-away solutions that do not occur in Nature? There are many such puzzles. Chapter 17 in the standard textbook by Jackson (1975) discusses radiation damping, self-fields of a particle, and scattering and absorption of radiation by a bound system, all from the classical viewpoint and from the viewpoint of SRT. It is a status report, not a final resolution. These matters are still not fully resolved. I suspect too much reductionism as their cause.
Always remember: Sometimes, backing off from reductionism, and analyzing a slightly more complicated problem, actually leads to simpler results.