Relativity theory plays a key role in the operation of the Global Positioning System (GPS). It relies on Einstein’s second postulate of the special theory of relativity (STR), which states that the speed of light in free space is a constant (c), independent of the state of motion of both the source and the observer. As a result, it becomes possible to measure distances using atomic clocks. Indeed, the modern-day definition of the meter is the distance travelled by a light pulse in (1/c) s. The value of the speed of light in free space is simply defined to have the fixed value c = 2.99792458 × 10^8 m/s. As a consequence, the distance Δr between two fixed points is obtained by measuring the elapsed time Δt for light to traverse it and multiplying this value by c, i.e., Δr = c Δt.
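The distance-from-time principle just described amounts to a single multiplication. The following sketch illustrates it; the 67 ms travel time is a hypothetical GPS-like figure, not a number quoted above:

```python
# Sketch of the distance-from-time principle: since c is defined to be
# exactly 299792458 m/s, a measured light travel time fixes the distance
# directly via dr = c * dt.
C = 299792458.0  # speed of light in free space, m/s (exact by definition)

def distance_from_travel_time(dt_seconds):
    """Distance between two fixed points from the one-way light travel time."""
    return C * dt_seconds

# The modern meter: the distance light travels in 1/c seconds.
one_meter = distance_from_travel_time(1.0 / C)

# A hypothetical GPS-like example: a signal travel time of ~67 ms
# corresponds to a range of roughly 20000 km.
range_m = distance_from_travel_time(0.067)
```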
A key objective in the GPS methodology is to accurately measure the distance between an orbiting satellite and a given object. This type of basic information for a series of satellites can then be used to accurately determine the location of the object . The measurement of the elapsed time for light to pass between two points requires communication between atomic clocks located at each position. The problem that needs to be overcome in the GPS methodology is that the rates of the clocks carried onboard satellites are not the same as those that are stationary on the Earth’s surface. For this reason it is imperative that we have a quantitative understanding of how the rates of clocks vary with their state of motion and position in a gravitational field. This topic will be covered in the next section of this chapter.
It also must be recognized that the speed of light only remains constant as it moves through free space and also remains at the same gravitational potential. This is an important distinction that has significant practical consequences when the light pulses must pass through regions in which space-weather events are occurring.
Experimental and theoretical studies of the state of the ionosphere, and of the physicochemical processes that occur in it, are motivated by the need for reliable operation of communication channels in different frequency bands. In recent years, most attention has been paid to the improvement of satellite communication and navigation systems that use transionospheric communication channels. The reliability of communication and navigation systems using ionospheric channels depends mainly on knowledge of the ionosphere's behavior in both quiet and perturbed conditions. Consequently, not only ionospheric plasma perturbations related to the dynamics of the atmosphere, but also processes related to the interaction of electromagnetic waves with the neutral atoms and molecules of the medium in which they propagate, should be treated as inhomogeneities. On the other hand, an analysis of disturbances and failures in space communication systems operating in the decimeter wave range, together with the development of theoretical concepts of the physical processes responsible for these phenomena, gives new information about the state of the medium and provides opportunities for further improvement of communication systems. This has led to the development of special experimental techniques for studying the ionosphere in order to determine the physical reasons for GPS signal delays. In the second part of this chapter, the role of Rydberg atoms and molecules of the neutral ionospheric plasma components, excited in collisions with electrons, in the formation of UHF and infrared radiation of the E- and D-layers of the Earth's ionosphere is discussed. A new physical mechanism of satellite signal delay, due to a cascade of re-emissions on Rydberg states in the decimeter range, is suggested.
2. Time dilation and length contraction
One of the key results of Maxwell’s theory of electromagnetism, introduced in 1864, was the finding that the speed of light waves has a constant value given by the relation c = (ε0 μ0)^-1/2, where ε0 is the electric permittivity and μ0 is the magnetic permeability of free space. This result appeared quite strange to physicists of the time because it immediately raised questions about how the speed of anything could be the same in all inertial systems. It clashed with the idea that there is an aether, permanently at rest, in which light waves propagate.
Maxwell’s equations are not invariant to the Galilean transformation and therefore seemed to be inconsistent with the relativity principle (RP). Voigt was the first to give a different space-time transformation that did accomplish this objective. It differed by a constant factor in all of its equations from what is now known as the Lorentz transformation (LT). In 1899 Lorentz wrote down a general version of this transformation:

t′ = εγ (t − vx/c^2)    (1a)
x′ = εγ (x − vt)    (1b)
y′ = εy    (1c)
z′ = εz    (1d)

where γ = (1 − v^2/c^2)^-1/2 and ε is an as yet undetermined function of v. The space-time coordinates (x, y, z, t) and (x′, y′, z′, t′) for the same object measured by observers who are respectively at rest in two inertial systems S and S′ are compared in these equations. It is assumed that S′ is moving along the positive x axis relative to S with constant speed v, and that their coordinate systems are coincident at t = t′ = 0. Voigt’s value for ε is γ^-1. However, Lorentz pointed out that any value for ε in the general Lorentz transformation leaves Maxwell’s equations invariant, and thus there is a degree of freedom that needs to be eliminated on some other basis before a specific version can be unambiguously determined. This degree of freedom will be important in the ensuing discussion of relativity theory given below.
The form of the above equations suggested that the Newtonian concept of absolute time needed to be altered in order to be consistent with Maxwell’s electromagnetic theory and the corresponding experimental observations. Unlike the simple relation t′ = t assumed for the Galilean transformation, it would appear from eq. (1a) that the space and time coordinates are mixed in a fundamental way. This meant that observers in relative motion to one another might not generally agree on the time a given event occurred. Poincaré argued in 1898, for example, that the lengths of time intervals might be different for the two observers, i.e. Δt′ ≠ Δt, and therefore that two events that occur simultaneously in S might not occur at the same time based on clocks that are at rest in S′. This eventuality has since come to be known as remote non-simultaneity. The space-time mixing concept also predicts that the rates of moving clocks are slower than those of identical counterparts employed by the observer in his own rest frame (time dilation). Larmor published a different version of eqs. (1a-d) (with ε = 1) in 1900, as well as a proof that one obtains on this basis the same prediction for relativistic length variations (hereafter referred to as FitzGerald-Lorentz length contraction, or FLC) that both FitzGerald and Lorentz had derived independently in 1889 and 1892, respectively, to second order in v/c. Time dilation and remote non-simultaneity also follow directly from Larmor’s version of the Lorentz transformation.
In 1905 Einstein’s paper on what he later referred to as the special theory of relativity (STR) became the focus of attention, although at least a decade passed before his ideas gained wide acceptance among his fellow physicists. He came out strongly against the necessity of there being an aether which serves as the medium in which light is carried. He argued instead that such a concept is superfluous, and likewise that there is no space in absolute rest and no “velocity vector associated with a point of empty space in which electromagnetic processes take place”. He formulated his version of electromagnetic theory on the basis of two well-defined postulates: the RP and the constancy of the speed of light in free space, independent of the state of motion of the detector/observer or of the source of the light. Starting from this basis, Einstein went on to derive a version of the relativistic space-time transformation that leaves Maxwell’s equations invariant. It is exactly the same transformation that Larmor had reported five years earlier, and it has ever since been referred to as the Lorentz transformation (LT), i.e. eqs. (1a-d) with a value of ε = 1 (Einstein refers to this function as φ in Ref. 1). Consequently, Larmor’s previous conclusion about the FLC also became an integral part of STR, as well as Poincaré’s conjecture of remote non-simultaneity.
One of the most interesting features of Einstein’s derivation of the LT is that it contains a different justification for choosing a value of unity for ε than Larmor gave. On p. 900 of Ref. 1, he states without further discussion that φ “is a temporarily unknown function of v,” v being the relative speed of the two rest frames S and S′ occurring in the LT. He then goes on to show that considerations of symmetry exclude any other possible value for this function than unity.
Einstein proposed a number of intriguing experiments to test STR, many of which involved the phenomenon of time dilation. The relationship between the time intervals Δt and Δt′ between the same two events, based on stationary clocks in S and S′, respectively, can be obtained directly from eq. (1a) as:

Δt′ = γ (Δt − vΔx/c^2)    (2)
whereby Δx is the distance between the locations of the clock in S at the times the measurements are made there. If one has the normal situation in which the two measurements are made at exactly the same location in S, i.e. with Δx = 0, it follows from STR (with ε = 1) that

Δt′ = γ Δt    (3)
This equation corresponds to time dilation in the rest frame of the observer in S. It states that the observer in S′ must obtain a larger value for the elapsed time, since γ > 1. One can alter the procedure, however, so that the roles of the two observers are reversed. The inverse relation to eq. (1a) in the LT, i.e. with ε = 1, is:

t = γ (t′ + vx′/c^2)    (4)
The corresponding relation between time intervals thus becomes

Δt = γ Δt′    (5)
when one makes the standard assumption that the measurements are carried out at the same location in S′ (Δx′ = 0). When one compares eq. (3) with eq. (5), it is evident that there is something paradoxical about Einstein’s result. This comparison seems to be telling us that each clock can be running slower than the other at the same time. It is always the “moving” clock that is running slower than its identical stationary counterpart, although it is not self-evident how this distinction can be made unequivocally on the basis of the above derivations. Many authors [12-14] have taken this result to be a clear indication that something is fundamentally wrong with Einstein’s STR. The majority view among relativistic physicists nonetheless holds that such a situation is a direct consequence of the RP, which holds that all inertial systems are equivalent.
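The reciprocal character of the two time-dilation relations can be made concrete with a short numerical sketch; the relative speed used here is illustrative, not taken from the text:

```python
import math

def gamma(v, c=299792458.0):
    """Lorentz factor (1 - v^2/c^2)^(-1/2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Each observer predicts the OTHER clock to run slow by the same factor,
# which is the symmetry the text calls paradoxical. v = 0.6c is illustrative.
v = 0.6 * 299792458.0
dt_S = 1.0                      # interval on the clock at rest in S
dt_Sprime = gamma(v) * dt_S     # the other observer's predicted interval
dt_back = gamma(v) * dt_Sprime  # reversing the roles dilates again, not back
```

Applying the dilation factor twice does not return the original interval, which is one way of seeing that the two predictions cannot both describe an objective clock-rate ratio.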
The same type of symmetry arises for length measurements, in which case analogous manipulations of eq. (1b) and its inverse lead to both:

Δx = γ^-1 Δx′    (6)

Δx′ = γ^-1 Δx    (7)
which are the FLC predictions in the direction parallel to the relative velocity of the two observers. The orthodox interpretation is that length contraction and time dilation occur together in both rest frames, i.e. eq. (5) goes with eq. (6), whereas eq. (3) goes with eq. (7). In other words, lengths on the moving object contract in the parallel direction at the same time (and by the same factor) that the rates of its clocks slow down. There is no difference in the length measurements in directions that are perpendicular to the relative velocity of the two rest frames according to STR, as is easily seen from eqs. (1c-d) using the value of ε = 1 which defines Einstein’s (and Larmor’s) LT. The resulting anisotropic character of the FLC is another of the STR predictions that has evoked much discussion and controversy over the years.
It is important to see that the above symmetry in the predicted results of the two observers represents a stark departure from the long-held belief in the objectivity of measurement. Since time immemorial it had been assumed that different observers should always agree in principle as to which piece of cloth is longer or which bag of flour weighs more. The rational system of units is clearly based on this principle. Accordingly, one assumes that while individual observers can disagree on the numerical value of a given physical measurement because they use different units to express their results, they should nonetheless agree completely on the ratio of any two such values of the same type. In what follows, this state of affairs will be referred to as the Principle of Rational Measurement (PRM). Einstein’s theory claims instead that observers in relative motion may not only disagree on the ratio of two such measured values but even on their numerical order. It is not feasible to simply state that, on the basis of eq. (6), the reason the observer in S obtains a smaller value for the length of a given object than his counterpart in S′ is because he employs a unit of distance which is γ times larger. One could state with equal justification, based on eq. (7), that the unit of distance in S is γ times smaller than in S′. In short, the whole concept of rational units is destroyed by the predicted symmetric character of measurement in Einstein’s STR.
Einstein’s belief in the above symmetry principle was by no means absolute, however. In the same paper, he includes a discussion of a clock that moves from its original rest position and returns via a closed path at some later time. The conclusion is that the rate of the moving clock is decreased during its journey, so that upon return to the point of origin it shows a smaller value for the elapsed time than an identical clock that remained stationary there. The symmetry is broken by the fact that it is possible to distinguish which clock remained at rest in the original inertial system and which did not [1,16]. Einstein ends the discussion by concluding that a clock located at the Equator should run slower by a factor of (1 − R^2ω^2/c^2)^1/2 than its identical counterpart at one of the Poles (R is the Earth’s radius and ω is the circular frequency of rotation about the polar axis). This explanation raises a number of questions of its own, however. For example, how can we be sure that a moving clock did not undergo acceleration before attaining its current state of uniform translation? If it did, then its clock rate should be determined independently by its current speed relative to the inertial system from which it was accelerated, and should not depend, at least directly, on its speed relative to a given observer in another inertial system.
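Einstein’s Equator-versus-Pole estimate is easy to evaluate numerically. The sketch below uses approximate Earth parameters (assumed values, not quoted in the text) and the standard low-speed expansion of the time-dilation factor:

```python
import math

# Rough arithmetic for Einstein's Equator-vs-Pole clock example. The
# kinematic slowing fraction is ~ v^2/(2 c^2) with v = R*omega. The Earth
# parameters below are approximate and supplied here only for illustration.
c = 299792458.0
R = 6.378e6                      # equatorial radius, m (approximate)
omega = 2.0 * math.pi / 86164.0  # sidereal rotation rate, rad/s (approximate)

v_equator = R * omega                                  # ~465 m/s
fractional_slowing = v_equator ** 2 / (2.0 * c ** 2)   # ~1.2e-12
nanoseconds_per_day = fractional_slowing * 86400.0 * 1e9
```

The effect amounts to roughly a hundred nanoseconds per day, small but well within the resolution of modern atomic clocks.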
The answers to the above questions could only be determined by experiments which were not yet available in 1905. For example, Einstein pointed out that time dilation should produce a transverse (second-order) Doppler effect not expected from the classical theory of light propagation. Eqs. (3) and (5) indicate that a moving light source should emit radiation of lower frequency (greater period) for a given observer than that of an identical stationary source in his laboratory. The symmetry principle derived from the LT indicates that two observers exchanging light signals should each measure a red shift, i.e. a lowering in frequency and a corresponding increase in wavelength, from the other’s source of radiation when the line joining them is perpendicular to the direction of their relative velocity.
Two years later , Einstein used the results of his 1905 paper to derive the Equivalence Principle (EP) between kinematic acceleration and gravity. His analysis of electromagnetic interactions at different gravitational potentials led him to make several more revolutionary predictions on the basis of the EP. First of all, he concluded that the light-speed postulate did not have universal validity. He claimed instead that it should increase with the altitude of the light source. At the same time, he determined that the frequency of light waves should increase as the source moves to a higher gravitational potential. Consequently, light emanating from the Sun should be detected on the Earth’s surface with a lower frequency than for an identical source located there. This effect has since been verified many times and in general is referred to as the gravitational red shift.
3. Six experiments that led to GPS
3.1. Transverse Doppler effect
The first significant test of Einstein’s time-dilation theory was carried out by Ives and Stilwell in 1938. The object of this experiment is to measure the wavelength of light emitted from a moving source at nearly right angles to the observer in the laboratory. In addition to the ordinary first-order Doppler effect, it was expected from STR that a quadratic shift from the normal wavelength (λ0) should be observed by virtue of time dilation at the source. Because of the constancy of the speed of light and the predicted decrease in the frequency emanating from the source, an increase in wavelength should be found in accordance with the transverse Doppler effect according to the general relation:

λ = γ λ0 (1 + (v/c) cos θ) ≈ λ0 (1 + (v/c) cos θ + v^2/2c^2)    (8)
(θ is the angle from which the light is observed). Of the many difficulties inherent in the experiment, the main one was the requirement of measuring the wavelength of light emitted in a perpendicular direction (θ = 90°) relative to the observer. The ingenious solution proposed by the authors was to average the wavelengths obtained for light emitted in both the forward and backward directions, thereby removing the first-order dependence on v in eq. (8). The test was performed with hydrogen canal rays. The two shifted lines in the corresponding spectra were recorded on the same photographic plate as the un-shifted line at 4861 Å. The predicted quadratic shift in the experiment was 0.0472 Å, i.e. a fractional shift of v^2/2c^2 for the ion velocities employed. Six different plates were considered, and the average shift observed was +0.0468 Å. The sign is also in agreement with expectations based on eq. (8), that is, a shift to the red was measured in each case.
The accuracy of the Ives-Stilwell experiment was gradually improved. It was later estimated that the actual experimental uncertainty in the original investigation lay in the 10-15% range, although the experimental points could be fitted to a curve with as little as 2-3% deviation. Mandelberg and Witten made substantial improvements in the overall procedure, using hydrogen-ion velocities of up to 0.0093 c. Their results indicate that the exponent of the quadratic-shift factor in eq. (8) has a value of 0.498 ± 0.025, within about 5% of Einstein’s predicted value of 0.5.
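The Ives-Stilwell numbers quoted above can be checked against the second-order Doppler formula; the ion speed below is derived by inverting the quoted 0.0472 Å prediction, and is not a value quoted from the original paper:

```python
import math

# The quadratic (time-dilation) wavelength shift once the first-order
# Doppler term is removed by averaging forward- and backward-emitted light:
# dl = l0 * v^2 / (2 c^2).
l0 = 4861.0  # H-beta line of hydrogen, Angstrom (quoted in the text)

def quadratic_shift(l0, beta):
    """Second-order Doppler wavelength shift for source speed beta = v/c."""
    return l0 * beta ** 2 / 2.0

# Inverting the quoted predicted shift of 0.0472 Angstrom gives the ion
# speed implied by the experiment (a derived, not quoted, number).
beta_implied = math.sqrt(2.0 * 0.0472 / l0)
```

The implied speed is a few tenths of a percent of c, which is why plate-level precision was needed to resolve the quadratic shift at all.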
In spite of the generally good agreement between theory and experiment with regard to the time-dilation effect, there is still a loose end that has been almost universally ignored in the ensuing discussion. It is clear that the predicted slowing down of clocks in the transverse Doppler investigations is accompanied by an increase in wavelength, not the decrease that would be expected on the basis of the FLC (see Sect. II). According to the RP, the wavelength of the radiation that would be measured by an observer co-moving with the light source is just the normal value obtained when the identical source is at rest in the laboratory. The only way to explain this deduction is to assume that the diffraction grating used in the rest frame of the moving source has increased in length by the same factor as the clock rates have decreased. Moreover, the amount of the increase must be the same in all directions, because the clock-rate decrease is clearly independent of the orientation of the radiation relative to the laboratory observer. That is the only way to explain how the speed of light can be the same in all directions, at least if one believes in an objective theory of measurement. This conclusion is quite different from that of the anisotropic length-contraction phenomenon predicted by the FLC. It indicates instead that time dilation in a given rest frame is accompanied by isotropic length expansion. We will return to this point in the discussion of the next series of time-dilation tests involving the decay of meta-stable particles.
3.2. Lifetimes of meta-stable particles
Shortly after the first Ives-Stilwell investigation, Rossi and coworkers reported on their measurements of the transition rate of cosmic-ray muons and other mesons traveling at different speeds relative to the Earth’s atmosphere. The effect of time dilation is much greater in this experiment since the particles are observed to move with speeds close to c. The lifetime of muons at rest is τ = 2.20 × 10^-6 s. The decay is exponential, and so the transition probability in the laboratory is proportional to exp(−t/τ). According to eq. (5) (with Δx′ = 0), the lifetime of the accelerated muons in the Earth’s atmosphere should be γ times larger. It was found that the survival probability is proportional to exp(−t/γτ) for all values of v, which is therefore consistent with the predicted increase in the muon lifetime. Subsequent improvements in the experiment verified this result to within a few per cent.
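A short calculation shows how strongly time dilation affects the observable muon range; the speed chosen below is illustrative, not a value quoted in the text:

```python
import math

def gamma(beta):
    """Lorentz factor for beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# Muon range with and without time dilation. The rest lifetime is the
# 2.20e-6 s quoted above; beta = 0.99 is an illustrative speed.
c = 299792458.0
tau = 2.20e-6   # muon rest lifetime, s
beta = 0.99     # illustrative speed, v/c

dilated_lifetime = gamma(beta) * tau
range_naive = beta * c * tau                  # range with no time dilation
range_dilated = beta * c * dilated_lifetime   # observed average range L
```

Without dilation the average range would be well under a kilometer, far too short for muons created in the upper atmosphere to reach the ground in the observed numbers.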
In the original studies of meson decays in the atmosphere, the quantity determined experimentally is the average range before decay, L. It was assumed that the value of L is proportional to both the speed v of the particles and their lifetime in a given rest frame. Consequently, the range is shorter from the vantage point of the particles’ rest frame than for an observer on the Earth’s surface, since the latter’s measured lifetime is longer. Implicit in this conclusion is the assumption that the speed v of the particles relative to the Earth’s surface is the same for observers in all rest frames, and thus that it has a constant value for each of them. The results of the latter experiments have subsequently been used in textbooks [22, 23] to illustrate the relationship between time dilation and length contraction in STR. Closer examination of the measured data shows that this conclusion is actually incorrect. The reason that the range of the particles is shorter for the observer O′ moving with the meta-stable particles is clearly because his unit of distance is longer than for his counterpart O on the Earth’s surface. The distance traveled is actually the same for both observers, just as is the amount of elapsed time for them. The reason that O measures a longer time is because his clock runs γ times faster. Yet both agree on the speed v of the muons based on their respective units of time and distance. This means that the unit of time (his “second”) for O′ is γ s using the standard definition of 1.0 s employed by O in his rest frame. Similarly, the unit of distance (his “meter”) for O′ must also be γ times larger than the standard unit of 1.0 m employed by O. As a result, O′ finds the distance between his position and the Earth’s surface to be systematically smaller by a factor of γ than does O.
As before with the Ives-Stilwell experiment, the conclusion is thus not that lengths in the rest frame of the accelerated system have contracted, but rather that they have expanded relative to those of identical objects in the Earth’s rest frame. Moreover, the amount of the length expansion must be the same in all directions, since otherwise it is impossible to explain how O′ and O can still agree on the value of the speed of light independent of its orientation relative to each of them.
3.3. High-speed rotor experiments
The above two experiments were in quantitative agreement with Einstein’s predictions about the amount of time dilation, but neither of them provided a test of the truly revolutionary conclusion that measurement is subjective. As discussed in Sect. II, Einstein’s LT leads to both eqs. (3) and (5) and therefore is completely ambiguous about which of two clocks in motion is running slower. Yet his argument about clocks located respectively on the Equator and at one of the Earth’s Poles suggests a completely different, far more traditional (objective), view of such relationships. Both theories indicate that accelerated clocks should run slower than those left behind at their original location. One needs a “two-way experiment” to actually differentiate between the two theoretical positions, one in which the “observer” is moving faster in the laboratory than the object of the measurements.
Hay et al. carried out x-ray frequency measurements employing the Mössbauer effect that had the potential of resolving this question. Both the light source and the absorber/detector were mounted on an ultracentrifuge that attained rotational speeds of up to 500 revolutions per second. The authors thereby eliminated the angular dependence in eq. (8) by ensuring that the relative motion of the source and the detector was almost perfectly transverse (θ = 90°). Even more important in the present context, the absorber was fastened near the rim of the rotor while the light source was located close to the rotor’s axis, which means that the situation is opposite to that in the Ives-Stilwell investigation: it is now the observer who is moving faster in the laboratory.
According to Einstein’s LT, the above distinction should have no bearing on the outcome of the experiment. This point is borne out by Will’s analysis of the transverse Doppler effect, in which he expresses the expected result as follows [see his eq. (6)]:

ν′ = ν (1 − v^2/c^2)^1/2    (9)

where ν′ and ν are the observed frequencies at the absorber/receiver and the emitter, respectively, and v is the speed of the emitter relative to the receiver. The symmetry in this expression is obvious. The receiver should always measure a smaller frequency than that observed at the light source. Will’s analysis is also consistent with the Mansouri-Sexl approach used to detect violations of STR.
In describing their experimental results, Hay et al. state that the “expected shift can be calculated in two ways.” By this they mean either by treating the acceleration of the rotor as an effective gravitational field (Einstein’s EP) or by “using the time dilatation of special relativity.” However, the empirical formula they report for the expected fractional shift in both the energy and frequency of the gamma rays is not consistent with eq. (9), since it is proportional to R1^2 − R2^2, where R1 and R2 are the respective distances of the absorber and the x-ray source from the rotor axis. In other words, the sign of the shift changes when the positions of the absorber and source are exchanged, in clear violation of Einstein’s symmetry principle for time dilation. What the empirical formula actually shows is that the magnitude of the effect is correctly predicted by the LT, but not the sign.
The rotor experiments were also carried out later by Kündig and by Champeney et al. The latter authors report their results in terms of the speeds va and vs of the absorber and the x-ray source, respectively:

Δν/ν = (va^2 − vs^2)/2c^2    (10)
This result makes several additional points clear: a) there is a shift to higher frequency (to the blue) if the absorber is rotating faster than the source, and b) the magnitude of the shift is consistent with the higher-order formula:

νa/νs = [(1 − vs^2/c^2)/(1 − va^2/c^2)]^1/2    (11)
If the x-ray source is at rest in the laboratory (vs = 0), eq. (11) reduces to νa = γ(va) νs. Kündig summarizes this result by stating “that the clock which experiences acceleration is retarded compared to the clock at rest.” The slower a clock, the more waves it counts per second, hence the observed blue shift when the absorber is located farther from the rotor axis than the source. The empirical results are thus seen to be consistent with Einstein’s speculation about the relative rates of clocks located at different latitudes on the Earth’s surface. The ambiguity implied by eqs. (3) and (5), as derived from the LT, is replaced by a completely objective theory of time dilation satisfying the relation:

Δt1/Δt2 = [(1 − v1^2/c^2)/(1 − v2^2/c^2)]^1/2    (12)
where v1 and v2 are the respective velocities of the two clocks relative to a specific rest frame, which in the present case is that of the rotor axis. The above equation is also seen to be consistent with both the Ives-Stilwell [18,19] and meta-stable decay experiments [20,21], for which the observer is at rest in the laboratory (v1 = 0) and the object of the measurement has undergone acceleration to speed v2 relative to him.
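The objective rotor-shift behavior described above is second order in the two speeds. The sketch below uses the 500 rev/s rotation rate quoted earlier, but the source and absorber radii are assumed (hypothetical) values:

```python
import math

c = 299792458.0

def rotor_shift(v_absorber, v_source):
    """Fractional frequency shift seen at the absorber, to second order in
    v/c, consistent with the behavior described in the text: positive
    (blue) when the absorber moves faster than the source, and changing
    sign when the two are exchanged."""
    return (v_absorber ** 2 - v_source ** 2) / (2.0 * c ** 2)

# Illustrative geometry only: 500 rev/s with the absorber at 6 cm and the
# source at 1 cm from the axis (radii assumed, not taken from the text).
omega = 2.0 * math.pi * 500.0
shift = rotor_shift(omega * 0.06, omega * 0.01)
```

Note that the shift depends only on the two speeds relative to the rotor axis, not on which participant is labeled the “observer,” which is the objective feature the text emphasizes.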
It is important to see that the alternative derivation of the empirical formula for the frequency shifts given in eq. (10), which makes use of Einstein’s EP, also corresponds to an objective theory of measurement. Accordingly, one assumes the relationship for the gravitational red shift:

Δν/ν = ΔΦ/c^2    (13)
where the centrifugal field of force corresponds to a radial gravitational field strength of ω^2R in each case (ΔΦ is the corresponding difference in gravitational potential). There is clearly no ambiguity as to which clock lies higher in the gravitational field, and thus there is no question as to which clock runs slower in this formulation. This point is missed in many theoretical descriptions of the frequency shifts in the rotor experiments. For example, Sard claims that the gravitational red shift and the transverse Doppler effect are consistent, and that this “is not surprising, since both follow in a straightforward way from the interweaving of space and time in the Lorentz transformation.” This view completely overlooks the simple fact that the transverse Doppler formula in eq. (9) derived from the LT does not allow for the occurrence of blue shifts, whereas the corresponding EP result in eq. (13) demands that they occur whenever the absorber in the rotor experiments moves faster than the x-ray source relative to the laboratory rest frame. Sherwin attempts to clarify the situation by asserting that the ambiguity inherent in eq. (9) only holds for uniform translation, but he fails to give actual experimental results that verify this conclusion. Both Rindler and Sard try to justify the empirical result in terms of orthodox STR by considering the relative retardations experienced by the two clocks in the rotor experiments. This argument leads directly to eqs. (10-11), but it does so by eliminating the basic subjective character of the LT. Once one assumes that two clock rates are strictly proportional, in accord with eq. (12), there is no longer any basis for claiming that the measurement process is subjective. More discussion of this general subject may be found in Ref. .
3.4. Terrestrial measurements of the gravitational red shift
According to the EP, the relationship between the frequencies measured at two fixed positions in a gravitational field is:

νD = νX (1 + ΔΦ/c^2)    (14)

In this formula, νX is the frequency of light emitted from a source that is at rest at a given gravitational potential, whereas νD is the corresponding value of the frequency detected by an observer who is not in relative motion to the source but is located at some other position in the gravitational field. The potential energy difference between the source and detector determines the fractional amount of the frequency shift. For an experiment near the Earth’s surface, ΔΦ = gh, where h is the difference in altitude between the source and the detector and g is the acceleration due to gravity. The sign convention is such that νD > νX when the source is located at a higher gravitational potential than the detector.
The same proportionality factor occurs [17,34] in the relationship between the light speeds measured at the same two positions in the gravitational field, where cD is the value measured at the detector and cX is the corresponding value measured at the light source:

cX = cD (1 + ΔΦ/c^2)    (15)
As a result, it is clear that according to the EP, both the frequency and the speed of light increase as the light source moves to a higher gravitational potential. Moreover, the fractional amount of the change is the same in both cases. As will be discussed in the following, there is ample experimental evidence to indicate that both of these relations are correct.
Terrestrial confirmation of the gravitational red shift first became possible with the advent of the Mössbauer technique for detecting changes in x-ray frequencies. Pound and Snider placed a 57Fe source at a distance h = 22.5 m above an absorber. According to eq. (14), the fractional change in the x-ray frequency (3.47 × 10^18 Hz) should have a value of gh/c^2 = 2.45 × 10^-15. The authors employed the EP directly to obtain their results. By imparting a downward velocity v to the detector, the resulting first-order Doppler effect exactly balanced the gravitational red shift:

v/c = gh/c^2
The ratio of the observed to the predicted frequency shift was found to be 0.9990 ± 0.0076 .
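As a simple numerical check, the fractional shift and the compensating Doppler speed quoted above can be recomputed directly from the stated inputs (a minimal sketch; g and the x-ray frequency are the rounded values used in the text):

```python
# Numerical check of the predicted shift in the Pound-Snider setup (h = 22.5 m).
g = 9.80            # acceleration due to gravity (m/s^2), rounded
h = 22.5            # height of the 57Fe source above the absorber (m)
c = 2.99792458e8    # speed of light in free space (m/s)

shift = g * h / c**2        # fractional frequency shift gh/c^2
nu = 3.47e18                # 57Fe x-ray frequency (Hz)
delta_nu = shift * nu       # absolute frequency shift (Hz)
v_comp = g * h / c          # first-order Doppler speed that cancels the shift (m/s)

print(f"gh/c^2 = {shift:.3e}")        # ≈ 2.45e-15, as quoted
print(f"delta_nu = {delta_nu:.3e} Hz")
print(f"v_comp = {v_comp:.3e} m/s")
```

Note how small the compensating speed is: well under a micrometer per second, which is why the Mössbauer technique was essential.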
The interpretation of the above experiment is nonetheless a matter of some confusion. Pound and Snider basically avoided a theoretical discussion of the ramifications of their work vis-à-vis the general theory of relativity (GTR), but they noted that “the description of the effect as an ‘apparent weight’ of photons is suggestive. The velocity difference predicted is identical to that which a material object would acquire in free fall for a time equal to the time of flight.” The latter value is assumed to be hc⁻¹, so that the expected velocity increase of the falling particle would indeed be ghc⁻¹. However, it is important to see that the frequency of the emitted light in absolute terms is actually the same at the light source as it is on the ground below when it arrives there. As Einstein stated in his 1911 article, this is a clear physical requirement because the number of wave crests arriving at a given point over a specific time interval is a constant. The reason that the detector at ground level records a higher frequency is that the unit of time there is greater than at the location of the light source. Therefore, more wave crests are counted in 1 s on the ground clock than on its counterpart at a higher altitude. Comparison of the rates of atomic clocks located at different altitudes over long periods of time has confirmed Einstein’s conclusion quantitatively on this point.
The above discussion raises another question, however, namely what happens to the speed of light as it descends from the source? According to eq. (15) and Einstein’s EP [17,34], the answer is clearly that it decreases, and by the same fractional amount as for the associated frequency red shift given in eq. (14). To be specific for the Pound-Snider example, the speed of light must have a value of c at ground level for an observer located there, whereas the same observer measures the value at the light source to be larger by the fraction ghc⁻². In other words, the speed of light actually decreases as it passes downward from the source to the detector at ground level. This is clearly the opposite of the change expected from the “apparent weight” argument alluded to above. Again, there is ample experimental evidence that Einstein’s eq. (15) is quantitatively accurate. One knows, for example, from the work of Shapiro et al. that radio signals slow in the neighborhood of massive bodies such as Venus and Mercury. There are time delays between the emission of radio pulses towards either of these planets and the detection of their echoes when they return to the Earth’s surface. This shows that the speed of light decreases under the influence of a gravitational field, in quantitative accord with eq. (15).
There is another important question that needs to be considered in the present discussion, however, namely what happens to the wavelength of light as the source changes its position in a gravitational field? Einstein concluded that “light coming from the solar surface…has a longer wavelength than the light generated terrestrially from the same material on Earth.” Since this statement comes directly after his proof of eq. (14) for frequencies, it seems quite plausible that he was basing it on the assumption that the product of wavelength λ and frequency ν is equal to the speed of light. In order to obtain the quoted result on this basis, however, it was also necessary for him to assume that the speed of light is actually the same at the Sun’s surface as on the Earth. This explanation is nonetheless in conflict with eq. (15), which it must be emphasized appears in the same publication. Making the same assumption about the relationship between light speed, wavelength and frequency and comparing eq. (15) with eq. (14) leads unequivocally to the conclusion that the wavelength of light for a given source is completely independent of the latter’s position in a gravitational field:
It is important in general to recognize that in each of eqs. (14,15,17), the same quantity is the object of measurement for two different observers. The only reason why these observers do not obtain the same result in each case is because they employ different units for their various measurements. In Sect. II, this state of affairs has been referred to as the Principle of Rational Measurement (PRM). The quantity in parentheses in eqs. (14,15), (1 + ghc⁻²), can be looked upon as a conversion factor between their respective units; it will be designated in the ensuing discussion simply as S. It will be seen that the conversion factors for other physical quantities can always be expressed as powers of S. For example, the conversion factor for wavelength, and therefore for distances in general because the scaling must be completely uniform in each rest frame, is S⁰ = 1. These conversion factors satisfy the ordinary rules of algebra. For example, since frequency and time are inversely related, it follows that the conversion factor for times is S⁻¹. The unit of velocity or speed is equal to the ratio of the distance travelled to the elapsed time, and hence the conversion factor in this case is equal to the ratio of the corresponding conversion factors, namely S⁰/S⁻¹ = S, consistent with the previous definition from eq. (15).
It is useful to employ the above concepts to follow the course of the light pulses in the Pound-Snider experiment. The analysis of two events is sufficient to illustrate the main points in this discussion, namely the initial emission of the x-rays at the higher gravitational potential (I) and their subsequent arrival at the absorber (II). It is important to use the same set of units in comparing the corresponding measured results. In terms of the units at the x-ray source, the observer X there measures the following values for event I: ν = ν0, λ = λ0 and speed c, where ν0 and λ0 are the standard frequency and wavelength of the light source. Upon arrival at the absorber, the same observer measures the corresponding values: ν = ν0, λ = S⁻¹λ0 and speed S⁻¹c. In other words, the value of the frequency has not changed during the passage of the x-rays down to the absorber, but the speed of light has decreased in accordance with the EP (c → S⁻¹c). Consequently, the wavelength of the light must have decreased by the same factor in order to satisfy the general relation c = νλ (phase velocity of light) in free space between frequency, wavelength and light speed (λ0 → S⁻¹λ0).
The observer D located at the absorber measures generally different values for the same two events, not because he is considering fundamentally different processes, but rather because he bases his numerical results on a different system of physical units (PRM). He therefore finds for event II: ν = Sν0, λ = S⁻¹λ0 and speed c. His unit of distance is the same, and so there is complete agreement on the value of the wavelength (S⁻¹λ0). However, his units of frequency and speed are S times smaller than those of his counterpart at the higher gravitational potential, and therefore he measures values for these two quantities which are S times greater in each case. This set of results illustrates the very important general principle of measurement, namely that the numerical value obtained is inversely proportional to the unit in which it is expressed. The corresponding values for the initial emission process are accordingly: ν = Sν0, λ = λ0 and speed Sc.
In absolute terms what has happened as a result of the downward passage of the x-rays between source and absorber/detector is that the light frequency has remained constant throughout the entire process, exactly as Einstein demanded in his Jahrbuch review. On the other hand, both the light speed and the corresponding wavelength of the radiation have decreased by a factor of S. This is in agreement with Einstein’s second postulate of STR, namely that the observer at the absorber must find that the speed of light when it arrives there has a value of c in his units because it is at the same gravitational potential as the observer at that point in time.
Before closing this section, it is worthwhile to mention how the units of other physical properties vary/scale with gravitational potential. To begin with, energy E satisfies the same proportionality relationship as for frequency and light speed [see eqs. (14,15)]:
Indeed, one can derive this equation to a suitable approximation using Newton’s gravitational theory and the definition of gravitational potential energy as mgh. According to STR, the value of the energy EX at the light source is equal to mc², and thus the fractional increase is mgh/mc² = ghc⁻², in agreement with eq. (18) at least for infinitesimal values of h for which the difference between inertial and gravitational mass is negligible. The fact that energy and frequency scale in the same way means that Planck’s constant ħ does not vary with gravitational potential. This in turn means that angular momentum in general scales as S⁰, the same as for distance.
Einstein also stated in his Jahrbuch article that the above dependence of energy on gravitational potential implies that there is a position-dependent component corresponding to an inertial mass m equal to E/c². Because E and c [i.e. the generic speed of light of eq. (15)] both vary as S, it follows that the unit of inertial mass scales as S⁻¹, the same as time. Values of linear momentum are the same for observers located at different gravitational potentials, since the corresponding scaling/conversion factors for inertial mass and velocity exactly cancel one another. With reference to the Pound-Snider experiment, this means that x-ray photons increase in momentum as they fall through the gravitational field. This follows from quantum mechanics (p = h/λ) and the fact that the corresponding wavelength of the radiation decreases in the process. Hence, the relation E = pc holds for observers located at all gravitational potentials, since the light speed is decreasing at exactly the same rate as the momentum is increasing, while the energy of the photons is conserved throughout the entire process. This situation might seem counter-intuitive, but it emphasizes that photons are different from material particles with non-zero rest mass, for which momentum and velocity are strictly proportional (see also the related discussion of light refraction in Ref. 41). Note also that because of the governing quantum mechanical equations, this result is consistent with the relation c = νλ for light waves, also already alluded to above. More details about gravitational scaling may be found in a companion publication.
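The scaling rules discussed above can be collected in one place. The sketch below simply encodes each conversion factor as an integer power of S and checks that the composite quantities come out as the text states; the exponents are those given in the discussion, not an independent derivation:

```python
# Conversion factor for each quantity, expressed as the power of S = 1 + gh/c^2
# by which its unit changes between gravitational potentials.
S_power = {
    "frequency": +1,      # eq. (14)
    "light speed": +1,    # eq. (15)
    "distance": 0,        # wavelengths and distances are invariant, eq. (17)
    "time": -1,           # time is the inverse of frequency
    "energy": +1,         # eq. (18)
    "inertial mass": -1,  # m = E/c^2 scales as S / S^2 = S^-1
}

# Composite quantities follow the ordinary rules of algebra for exponents:
S_power["velocity"] = S_power["distance"] - S_power["time"]            # 0-(-1) = +1
S_power["momentum"] = S_power["inertial mass"] + S_power["velocity"]   # -1+1 = 0
S_power["angular momentum"] = S_power["momentum"] + S_power["distance"]

# Planck's constant relates energy and frequency, so it must be invariant:
assert S_power["energy"] - S_power["frequency"] == 0
# E = pc holds at every potential: the exponents balance exactly.
assert S_power["energy"] == S_power["momentum"] + S_power["light speed"]
```

The two assertions make the consistency argument of the preceding paragraph explicit: Planck's constant and linear momentum both scale as S⁰.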
3.5. Clock rates on circumnavigating airplanes
Despite the progress that had been made in carrying out experiments that verify various aspects of Einstein’s STR , there was still considerable uncertainty as to how to predict the results of future investigations of the time-dilation effect by the time the next significant advance was made in 1971. At that time, Hafele and Keating [43-44] carried out tests of cesium atomic clocks located onboard circumnavigating airplanes that traveled in opposite directions around the globe. In their introductory remarks, the authors noted that the “relativistic clock paradox” was still a subject of “the most enduring scientific debates of this century.” The first paper presents some predictions of the expected time differences for clocks located on each airplane as well as at the origin of their respective flights. Their calculations are clearly based on an objective theory of time dilation that is consistent with eq. (12). The symmetric relationship that is inherent in eqs. (3) and (5) derived from the LT is completely ignored. Gone is the idea that the observer at the airport should find that the clocks on the airplanes run slower while his counterpart on the airplane should find the opposite ordering in clock rates. To sustain this argument, Hafele and Keating argue  that standard clocks located on the Earth’s surface are generally not suitable as reference in their calculations because of their acceleration around the Earth’s polar axis.
All speeds to be inserted into eq. (12) must therefore be computed relative to the Earth’s non-rotating polar axis, or more simply, relative to the gravitational midpoint of the Earth itself. The latter therefore serves the same purpose as the rotor axis in the Hay et al. experiments  discussed in Sect. III.C. In short, the rotational speed RΩ of the Earth at a given latitude must be taken into account for both objects on the ground and on the airplanes. If the airplane travels with ground speed v in an easterly direction, this means that the ratio of the elapsed times of an onboard clock and that of a clock on the ground is:
The interesting conclusion is that a clock traveling in the westerly direction with the same ground speed (v) should run faster than both its easterly counterpart and also the ground clock at the airport of departure.
The latter expression does not include the effect of the gravitational red shift discussed in Sect. III.D. If the altitude of the airplane relative to the ground is h, the ratio becomes:
The gravitational increase in clock rate on the airplane was typically somewhat smaller than the time-dilation effect for the clock traveling eastward, so that the overall effect was still a decrease in its rate relative to the clock at the airport. The two effects reinforce each other for the airplane traveling in the westerly direction.
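The eastward/westward asymmetry can be illustrated numerically. The following sketch combines the time-dilation and red-shift terms in the small-velocity approximation; the flight speed, altitude, duration and ground rotational speed are assumed round numbers for illustration, not the actual 1971 flight data:

```python
# Rough sketch of the Hafele-Keating clock-rate asymmetry with assumed parameters.
c = 2.99792458e8     # speed of light (m/s)
g = 9.8              # acceleration due to gravity (m/s^2)
R_omega = 465.0      # rotational speed R*Omega of the ground at the equator (m/s), assumed
v = 250.0            # airplane ground speed (m/s), assumed
h = 9000.0           # cruise altitude (m), assumed
T = 45 * 3600.0      # total flight time (s), assumed

def rate_shift(v_ground):
    """Fractional rate of the airplane clock minus the ground clock:
    time dilation -(2*R_omega*v + v^2)/(2c^2) plus red shift g*h/c^2."""
    kinematic = -(2 * R_omega * v_ground + v_ground**2) / (2 * c**2)
    gravitational = g * h / c**2
    return kinematic + gravitational

east_ns = rate_shift(+v) * T * 1e9   # eastward: adds to the Earth's rotation
west_ns = rate_shift(-v) * T * 1e9   # westward: v enters with the opposite sign
print(f"eastward: {east_ns:+.0f} ns, westward: {west_ns:+.0f} ns")
```

With these assumed inputs the eastward clock loses on the order of a hundred nanoseconds while the westward clock gains roughly three times that amount, reproducing the sign pattern reported by Hafele and Keating: a loss toward the east and a larger gain toward the west.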
The predicted effects were mirrored in the experimental data . The observed time difference between the ground and easterly traveling clocks was -40 ± 23 ns as compared to the predicted value of -59 ± 10 ns. The corresponding values for the westward traveling clock were +275 ± 21 ns vs. 273 ± 7 ns. The main sources of error were due to instabilities in the cesium clock rates and uncertainties in the actual flight path of the airplanes.
3.6. Transverse Doppler effect on a rocket
The transverse Doppler effect was again used to study the effects of motion and gravity on clock rates in an experiment carried out in 1976 by Vessot and Levine . The frequency of a hydrogen-maser oscillator system carried onboard a rocket was measured as it moved up and down in the Earth’s gravitational field. The first-order Doppler effect was eliminated by using a transponder system on the rocket, leaving only the second-order transverse Doppler effect and the gravitational red shifts as the cause of the observed frequency shifts. The flight lasted for nearly two hours and carried the rocket to a maximum altitude of 10000 km.
The speed of the rocket was monitored as a function of altitude, and the amount of the frequency shift was predicted on the basis of eq. (20), except that the variation of g with altitude was taken into account to obtain the desired accuracy, as was the effect of the centrifugal acceleration of the ground station [see eq. (1) of Ref. 45]. The position and velocity of both the rocket and the ground station were measured in an Earth-centered inertial frame, consistent with the assumptions of the Hafele-Keating analysis of their data for circumnavigating airplanes. It was also necessary to take account of the refractive index of the ionosphere as the rocket passed through this region of space. The Doppler-canceling feature of the experiment was also successful in removing the gross effects of such variations.
The authors concluded that the rocket experiment was consistent with Einstein’s EP to a suitably high level of accuracy. By assuming the validity of the EP, they were also able to place an upper bound on the ratio of the one-way and two-way velocity of light. The corresponding Δc/c values ranged from 1.9×10⁻⁸ to −5.6×10⁻⁸ during various parts of the rocket’s trajectory.
4. Application of the experimental clock-rate results
The experiments described in the preceding section provide the necessary empirical data to enable the accurate prediction of how the rates of clocks vary as they are carried onboard rockets and satellites. This information is critical for the success of the Global Positioning System because of the need to measure the elapsed times of light signals to pass between its satellites and the Earth’s surface.
Before considering how to make the desired predictions based on the above experience, it is interesting to examine how Einstein’s original theory of clock rate variations  has held up in the light of actual experiments. There has been unanimity in the theoretical discussions accompanying these experiments in claiming that their results mirror perfectly Einstein’s predictions. This is particularly true of the authors of the transverse Doppler experiments using high-speed rotors [25,28-29] as well as for Hafele and Keating in their investigations of the rates of atomic clocks onboard circumnavigating airplanes [43,44].
Such an evaluation overlooks the basic point discussed in Sect. II, however, namely that Einstein had two clearly distinguishable conclusions regarding the question of which of two identical clocks in relative motion runs slower in a given experiment. One set of predictions was based on the LT and the conclusion that all inertial systems are equivalent by virtue of the RP. This is a thoroughly subjective theory of measurement which is characterized by the ambiguous, seemingly contradictory, results obtained in eqs. (3) and (5), both of which are derived in a straightforward manner from the LT. In this view, which clock runs slower is simply a matter of one’s perspective. This conclusion is consistent with Will’s equation for the transverse Doppler effect (see eq. ), which states that the emitted frequency of a moving light source is necessarily lower than for the same source when it is at rest in the laboratory of the observer. By contrast, Einstein’s alternative conclusion was that an accelerated clock is definitely slowed relative to one that remains at rest in its original location. That corresponds to a thoroughly objective theory of measurement consistent with eq. (12), which requires the assignment of a definite origin (objective rest system, ORS) from which to measure the speeds of the respective clocks being compared.
There is no doubt which of Einstein’s two conclusions is meant by the various authors when they assert that their results are perfectly consistent with relativity theory. This point was made quite clear in Sherwin’s analysis  of the rotor experiments, which he referred to as a demonstration of “the transverse Doppler effect for accelerating systems.” It is the unambiguous nature of the result of the clock paradox that sets the rotor experiments apart from the classical theory of time dilation derived from the LT. Hafele and Keating  take the same position in presenting the underlying theory of their measurements of the rates of atomic clocks. In that case, the authors claim the determining factor for the choice of the Earth’s non-rotating axis as the reference system for application of eq. (12) is that it is an inertial frame. However, this conclusion overlooks the fact that the Earth is constantly accelerating as it makes its way around the Sun, and thus it is at least quantitatively inaccurate to claim that a position at one of the Earth’s Poles is in constant uniform translation .
A simpler alternative view is that the Earth’s center of mass (ECM) plays the same role for clock rates as for falling bodies. The only way to change the direction in which free fall occurs is to somehow escape the gravitational pull of the Earth. This has been most notably accomplished with the NASA expeditions to the Moon, in which case there is a definite point in the journey where the rocket tends to fall away from planet Earth and towards its satellite instead. It also occurs in a more conventional fashion when a body undergoes sufficient centrifugal acceleration in a parallel direction to the Earth’s surface. In this view, the ORS  for computing time dilation is always the same as the reference point for free fall, and thus is subject to change in exactly the same manner. The current speed of the Earth relative to the Sun is therefore irrelevant for the quantitative prediction of clock rates for the applications at hand.
There is a far less subtle point to be considered in the present discussion, however. The conclusion that an accelerated clock always runs slower than one that remains at its original rest location has very definite consequences as far as relativity theory is concerned. It shows unequivocally that Einstein’s symmetry principle  is violated, and therefore that the LT is contradicted by the experimental findings. Discussions in the literature [30-32] always emphasize that the acceleration effect was predicted by Einstein in his original paper  and that it is also entirely consistent with his EP , but there is never any mention of the conflict with the LT, which is after all the essential cornerstone of STR from which many long-accepted conclusions follow. The latter include most especially the FLC and remote non-simultaneity. Showing that the LT fails in one of its most basic predictions, the ambiguity in the ordering of local clock rates for two moving observers , necessarily raises equally critical questions about the validity of all its other conclusions. For example, does Lorentz invariance hold for the space and time variables under these experimental conditions? Are two events that occur at exactly the same time for one observer not simultaneous for someone else who happens to be moving relative to him? The latter question is particularly important for the GPS methodology since it relies on the assumption that the time of emission of a light signal is the same on the ground as on an orbiting satellite as long as one corrects for differences in clock rates at the respective locations. We will return to these questions in Sect. V, but first attention will be focused on the way in which the available experimental data on clock rate variations can be applied in practice.
4.1. Synchronization of world-wide clock networks
There are two causes of clock-rate variations in terrestrial experiments. Both of them involve the ECM, one depending exclusively on the speed of a given clock relative to that position and the other on the corresponding difference in gravitational potential. As a result, in comparing different clocks on the Earth’s surface, it is necessary to know both the latitude θ of each clock as well as its altitude h relative to sea level. The time-dilation effect is governed exclusively by eq. (12), whereby the speed to be inserted in the respective factors is equal to ΩRE cos θ, where Ω is the Earth’s rotational frequency (2π radians per 24 h = 86 400 s) and RE is the Earth’s radius (or more accurately, the distance between the location of the clock and the ECM). A standard clock S needs to be designated. Theoretically, there is no restriction on its location. Its latitude (as well as its altitude relative to the ECM) is then an important parameter in computing the ratio of the rates of each clock in the network with that of the standard clock. It is helpful to define the ratio R as follows:
According to eq. (12), this ratio tells us how much slower (if R < 1) or faster (if R > 1) the given clock runs than the standard if both are located at the same gravitational potential. The gravitational red shift needs to be taken into account to obtain the actual clock-rate ratio. For this purpose, it is helpful to define a second ratio S for each clock:
where r is the distance of the clock from the ECM. This ratio tells us how much faster (if S > 1) the clock runs relative to the standard by virtue of their difference in gravitational potential. The elapsed time on the clock for a given event can then be converted to the corresponding elapsed time on the standard clock by combining the two ratios as follows:
It is possible to obtain the above ratios without having any communication between the laboratories that house the respective clocks. The necessary synchronization can begin by sending a light signal directly from the position of the clock A lying closest to the standard clock S. The corresponding distance can be determined to as high an accuracy as possible using GPS. Division by c then gives the elapsed time for the one-way travel of the signal based on the standard clock S. The time of arrival on the standard clock is then adjusted backward by this amount to give the time of emission TS0(A) for the signal, again based on the standard clock. The corresponding time of the initial emission read from the local clock is also stored with the value T0(A). In principle, all subsequent timings can be determined by subtracting T0(A) from the current reading on clock A to obtain the elapsed time ΔT to be inserted in eq. (23). The time TS of the event on the standard clock is then computed to be:
where R and S are the specific values of the ratios computed above for clock A.
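These bookkeeping steps are easily mechanized. In the sketch below the forms of the two ratios are reconstructed from the surrounding description, the time-dilation factors from eq. (12) and the gravitational ratio built from A(r) = GME/(rc²), so the formulas should be read as an interpretation of the procedure rather than a quotation of eqs. (21-22):

```python
import math

c = 2.99792458e8                 # speed of light (m/s)
OMEGA = 2 * math.pi / 86400.0    # Earth's rotational frequency (rad/s)
G_ME = 6.670e-11 * 5.975e24      # G * M_E, using the values quoted in the text

def rotational_speed(r, lat_deg):
    """Speed of a clock at distance r from the ECM and latitude lat_deg (degrees)."""
    return OMEGA * r * math.cos(math.radians(lat_deg))

def ratio_R(r, lat, r_std, lat_std):
    """Time-dilation ratio of the clock rate to the standard clock rate, per eq. (12)."""
    gamma = lambda v: math.sqrt(1.0 - (v / c) ** 2)
    return gamma(rotational_speed(r, lat)) / gamma(rotational_speed(r_std, lat_std))

def ratio_S(r, r_std):
    """Gravitational rate ratio built from A(r) = G*M_E / (r c^2); S > 1 means the
    clock sits at the higher potential and runs faster than the standard."""
    A = lambda rr: G_ME / (rr * c**2)
    return (1.0 - A(r)) / (1.0 - A(r_std))
```

For a clock 1 km above a standard clock at the same latitude, ratio_S reproduces the near-surface value 1 + gh/c² of eq. (22) to within rounding, and ratio_R exceeds unity for any clock rotating more slowly than the standard, as the text requires.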
Once the above procedure has been applied to clock A, it attains equivalent status as a standard. The next step therefore can be applied to the clock which is nearest either to clock A or clock S. In this way the network of standard clocks can be extended indefinitely across the globe. Making use of the “secondary” standard (A) naturally implies that all timings there are based on its adjusted readings. It is important to understand that no physical adjustments need to be made to the secondary clock, rather its direct readings are simply combined with the R and S factors in eq. (24) to obtain the timing results for a hypothetical standard. A discussion of this general point has been given earlier by Van Flandern . The situation is entirely analogous to having a clock in one’s household that runs systematically slower than the standard rate. One can nonetheless obtain accurate timings by multiplying the readings from the faulty clock by an appropriate factor and keeping track of the time that has elapsed since it was last set to the correct time. The key word in this discussion is “systematic.” If the error is always of quantitatively reliable magnitude, the faulty clock can replace the standard without making any repairs.
4.2. Adjustment of satellite clocks
The same principles used to standardize clock rates on the Earth’s surface can also be applied for adjusting satellite clocks. Assume that the clock is running at the standard rate prior to launch and is perfectly synchronized with the standard clock (i.e. as adjusted at the local position). In order to illustrate the principles involved, the gravitational effects of other objects in the neighborhood of the satellite are neglected in the following discussion, as well as inhomogeneous characteristics of the Earth’s gravitational field.
The main difference relative to the previous example is that the R and S factors needed to make the adjustment from local to standard clock rate using eqs. (23-24) are no longer constant. Their computation requires a precise knowledge of the trajectory of the satellite, specifically the current value of its speed v and altitude r relative to the ECM. The acceleration due to gravity changes in flight and so the ratio S also has to be computed in a more fundamental way. For this purpose, it is helpful to define the following quantity connected with the gravitational potential, which is a generalization of the quantity in parentheses in eq. (18):
where G is the universal gravitational constant (6.670×10⁻¹¹ N m²/kg²) and ME is the gravitational mass of the Earth (5.975×10²⁴ kg). The value of S is therefore given as the ratio of the A values for the satellite and the standard clock:
which simplifies to eq. (22) near the Earth’s surface (with g = GME/RE²).
The corresponding value of the R ratio is at least simple in form:
Note that the latitude is not that of the launch position relative to the ECM, but rather that of the original standard clock. The accuracy of the adjustment procedure depends primarily on the determination of the satellite speed v relative to the ECM at each instant.
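As an illustration of the magnitudes involved, the gravitational and time-dilation contributions can be evaluated for a GPS-like circular orbit. The orbital radius and Earth parameters below are assumed round numbers, and the rotational speed of the ground station is neglected for simplicity:

```python
import math

# Net daily clock-rate offset for a satellite in a GPS-like circular orbit.
c = 2.99792458e8
G_ME = 6.670e-11 * 5.975e24   # G * M_E, values from the text
R_E = 6.371e6                 # Earth's radius (m), assumed
r_sat = 2.6571e7              # orbital radius (m), assumed GPS-like value

v_sat = math.sqrt(G_ME / r_sat)   # circular orbital speed, roughly 3.9 km/s
A = lambda r: G_ME / (r * c**2)   # gravitational quantity of eq. (25)

grav = A(R_E) - A(r_sat)          # satellite clock runs faster (higher potential)
kin = -0.5 * (v_sat / c) ** 2     # and slower from time dilation, eq. (12) expanded

net_us_per_day = (grav + kin) * 86400 * 1e6
print(f"net offset ≈ {net_us_per_day:+.1f} µs/day")
```

With these inputs the result lands near the commonly quoted figure of roughly +38 µs per day: the gravitational speed-up outweighs the orbital time dilation, which is why an uncorrected satellite clock steadily outruns its ground counterpart.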
In this application the underlying principle is to adjust the satellite clock rate to the corresponding standard value over the entire flight, including the period after orbit has been achieved. The correction is made continuously in small intervals by using eq. (23) and the current values of R and S in each step. The result is tantamount to having the standard clock running at its normal rate on the satellite. The above procedure supersedes the “pre-correction” technique commonly discussed in the literature, according to which the satellite clock is physically adjusted prior to launch. The latter’s goal is to approximately correct for the estimated change in clock rate expected if the satellite ultimately travels in a constant circular trajectory once it achieves orbital speed. The present theoretical procedure has the advantage of being able to account for departures from a perfectly circular orbit and also for rate changes occurring during the launch phase.
5. The new Lorentz transformation
Although the relativistic principles for adjusting the rates of moving clocks are well understood, there is still an open question about how this all fits in with Einstein’s STR. The standard argument, starting with the high-speed rotor experiments of Hay et al., has been that one only needs to know the speed of the clock relative to some specific inertial system in order to compute the decrease in its rate using eq. (12). While this is true, the procedure itself cannot be said to be consistent with the basic subjectivity of Einstein’s original theory. As discussed in the beginning of Sect. IV, one has to disregard the predictions of the LT in order to have a theory which unambiguously states which of two clocks is running slower, namely the one that is accelerated by the greater amount relative to the aforementioned inertial system [28,31]. The consequences of ignoring the predictions of the LT in this situation are generally avoided in theoretical discussions of clock rates, with authors [16, 30-32] preferring instead to explain how correct predictions can be obtained by assuming that the amount of time dilation is proportional to the factor γ(v) = 1/√(1 − v²/c²) computed from the speed v of the accelerated clock. The argument is made that the LT can only be used for clocks in uniform translation, but the implication is that its many other predictions still retain their validity, such as remote non-simultaneity and Lorentz space-time invariance. However, the fact is that there is no basis for making the latter conclusion. The best that can be said is that there is no essential connection between the two types of predictions, and so failing in one of them does not necessarily rule out the validity of the others.
The standard procedure for addressing such problems is to find a way to amend the theory that removes the contradiction at hand, namely in the present case, the prediction of STR that two clocks can both be running slower than the other at the same time , while still retaining the capability of making all the other hitherto successful predictions of the theory without further modification. In previous work [48-50] it has been shown that the following space-time transformation accomplishes this objective by making a different choice for the function in eqs. (1a-d) than the value of unity assumed by Einstein in his original derivation of the LT :
In this set of equations, Q is a proportionality factor that defines the ratio of the two clock rates in question. It is assumed that the rates of moving clocks are strictly proportional to one another, thereby eliminating the space-time mixing inherent in the LT. Reviewing the arguments that Einstein used in his derivation of the LT shows that he in fact made an undeclared assumption therein without any attempt to justify it on the basis of experimental observations. The assumption was that this function can only depend on v, the relative speed of the primed and unprimed coordinate systems related in the transformation. It leads to the y′ = y and z′ = z relations of the LT instead of eqs. (28c-d), but it also forces the mixing of space and time variables in the LT instead of the simpler result in eq. (28a).
The alternative Lorentz transformation (ALT) in eqs. (28a-d) satisfies both of Einstein’s postulates because of its direct relation to the general equations in eqs. (1a-d), and it is also consistent with the same velocity transformation (VT) that is an integral part of STR:
It is more than an alternative to Einstein’s LT, however, because it is possible to show that the LT is invalid even for uniformly translating systems. This can be seen by considering the example of two observers measuring the length of a line that is oriented perpendicularly to their relative velocity. The FLC, which is derived from the LT, predicts that they must agree on this value (L′ = L). If their clocks run at different rates, however, they will measure different elapsed times (Δt′ ≠ Δt) for a light pulse to travel the same distance. Since they also must agree on the speed of light, one concludes that the corresponding two distance measurements must also differ, since Δr = cΔt and Δr′ = cΔt′.
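The contradiction can be made explicit in a few lines. The sketch below uses the clock-ratio factor Q of eqs. (28a-d); primed quantities belong to the second observer.

```latex
% Clock riddle: perpendicular length measurement with unequal clock rates.
% Assume the two observers' clock rates differ by the factor Q (eq. 28a):
%   \Delta t' = \Delta t / Q, \qquad Q \neq 1.
\begin{align*}
  L  &= c\,\Delta t        \quad\text{(first observer)}\\
  L' &= c\,\Delta t' = \frac{c\,\Delta t}{Q} = \frac{L}{Q} \neq L,
\end{align*}
% whereas the FLC derived from the LT demands L' = L for a length oriented
% perpendicular to the relative velocity -- hence the contradiction.
```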
This contradiction has been referred to as the “clock riddle” in earlier work [50-51], as opposed to the “clock paradox” frequently discussed in the literature. In fact, the FLC prediction is merely the result of Einstein’s original (undeclared) assumption mentioned above. The contradiction is easily removed by making a different choice for the normalization function in eqs. (1a-d). The ALT makes this choice by assuming that the clock rates of the two observers must be strictly proportional, as expressed by eq. (28a). The latter assumption is clearly consistent with the high-speed rotor [25,28-29] and Hafele-Keating [43-44] experiments, and it leads to the result expected on the basis of time dilation and the light-speed constancy postulate. The ALT also leads directly to the key experimental results connected with the aberration of starlight at the zenith and the Fresnel light-drag experiment, since both can be derived quite simply from the VT of eqs. (29a-c).
The ALT is the centerpiece of a thoroughly objective version of relativity theory. It subscribes to the Principle of Rational Measurement (PRM), whereby two observers must always agree in principle on the ratio of physical quantities of the same type. There is never any question about which clock is running slower or which distance is shorter, unlike the case in Einstein’s STR. The new theory [48,49] accomplishes this by restating the RP: the laws of physics are the same in all inertial systems, but the units in which they are expressed may vary systematically between rest frames. The passengers locked below deck in Galileo’s ship cannot notice any difference between traveling on a calm sea and being tied up at the dock, but this does not preclude the possibility that all their clocks run at a different rate in one state of constant motion than in the other. Such distinctions first become clear when they are able to look out of a window and compare their measurements of the same quantity with those of a counterpart in another rest frame.
At the same time, the ALT insists on remote simultaneity by virtue of eq. (28a), since it is no longer possible for Δt′ to vanish without Δt doing so as well. It does away with space-time mixing, the FLC and Lorentz invariance, and it also removes any possibility of time reversal (Δt and Δt′ of opposite sign), since the proportionality factor Q is necessarily greater than zero in eqs. (28a-d). Not one of the latter effects has ever been verified experimentally, although there has been endless speculation about each of them. The ALT might be more aptly referred to as the “GPS-LT,” because it conforms to the basic assumption of this technology that the rates of atomic clocks in relative motion are affected only by their current state of motion and position in a gravitational field. More details concerning these aspects of relativity theory may be found in a companion article.
6. GPS positioning errors during strong solar activity
The relativistic and gravitational effects discussed above accumulate over the large distances between the GPS satellites and the E-layer of the atmosphere. According to existing experimental data, these effects lead to positioning errors that do not exceed 3 m on the Earth’s surface. The next largest sources of error are satellite geometry with respect to the GPS receiver, as well as orbit deformation due to the influence of gravity and the uneven distribution of this field.
We consider below another key aspect of the process of point-location determination by GPS. It is based on the fact that distance can be determined using information from atomic clocks, together with the assumption that light pulses always move with the same speed through space. To determine the appropriate distance, it is then sufficient to multiply this speed by the transit time of the light pulse between the two points in space. Use of this simple approach leads to problems in GPS positioning, because light pulses can in fact propagate at different speeds. This becomes particularly evident during periods of solar activity and magnetic storms, when the positioning errors are quite large. If one takes the generally accepted view that associates the distortion of GPS signals with wave optics, the positioning errors that occur during these periods correspond to an increase in the optical length of the signal-propagation path. Allowance for refraction in the scattering of waves on plasma irregularities such as blobs and bubbles with refractive index n = 1.0000162 in the F- and upper E-layers of the ionosphere (at altitudes of 100-400 km above the Earth’s surface) leads to positioning errors that are close to the relativistic ones. The latter is in good agreement with experiment.
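The size of this refraction effect can be estimated directly. The sketch below multiplies the excess refractive index by an assumed effective path length through the disturbed layer; the 300 km figure is simply the thickness of the 100-400 km altitude range and is an assumption of this illustration.

```python
# Sketch: excess optical path (positioning error) accumulated by a GPS signal
# crossing a refracting ionospheric layer of index n = 1.0000162.
# The effective path length through the layer is an assumed illustrative value.

N_LAYER = 1.0000162      # refractive index of plasma irregularities (from text)
PATH_M = 300e3           # assumed vertical path through the 100-400 km layer, m

excess_path_m = (N_LAYER - 1.0) * PATH_M
print(f"excess optical path ~ {excess_path_m:.1f} m")
```

This gives an error of a few metres, the same order as the ~3 m relativistic errors quoted above; oblique signal paths or stronger irregularities enlarge it accordingly.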
With increasing solar activity the transit time of the GPS signal from satellite to Earth rises, which leads to larger positioning errors. This can occur both over a short period (with a duration of 5-20 min) and over a long period (lasting several hours). In the first case, it occurs under the influence of the radiation coming from a solar flare. In the second case, it takes place 30-35 hours after the flare under the influence of the solar wind. A concrete example is the time dependence of the disruptions of the GPS satellite system during periods of solar activity that was published on the website of Cornell University. According to measurements carried out in real time at the monitoring stations of the Arecibo Observatory (Puerto Rico), daily from August 30 to September 2, 2011, between 03:00 and 04:00 Coordinated Universal Time (UTC), there was a 20-minute failure of GPS. The horizontal positioning error reached 50 m or more.
More powerful geomagnetic disturbances lead to the complete disappearance of the signal at the GPS receiver for a sufficiently long period of time. Thus, the data obtained at the Sao Luis Observatory (Brazil) on September 15-16, 2011 showed that the loss of the GPS signal occurred several times during the day. The signal at the receiver sporadically disappeared five times, for 5-30 minutes each, between 16:00 UTC on September 15 and 01:00 UTC on September 16, 2011. Moreover, the horizontal positioning error during these days greatly exceeded 50 m.
It was shown in  that during solar flares of different power there was a definite sequence in the degradation of the carrier-to-noise ratio for the frequencies L1 = 1.57542 GHz and L2 = 1.22760 GHz. During the solar flare of X-1 level (22:15 UTC, December 14, 2006) the carrier-to-noise ratio for the L1 frequency deteriorated, while that for the L2 frequency remained unchanged. The flare of X-3 level (02:40 UTC, December 13, 2006) led to a simultaneous deterioration of the carrier-to-noise ratio for both frequencies. The duration of the phenomena observed in both cases was about 30 minutes.
The next phenomenon that deserves attention is the increase in the power of the signal received by the GPS receiver during periods of strong solar activity. The time dependence of the power of the GPS signal, and the integral number of failures at the receiver during a geomagnetic disturbance on July 15, 2000, was published in . The intensity of the signal at the receiver increased by about a factor of three relative to the satellite signal power, and the integral number of failures grew with increasing intensity of the received signal. The authors did not offer an explanation for the growth in intensity.
The observations show that a significant increase in ultrahigh-frequency (UHF) radiation from the upper atmosphere is a result of solar flares. The intensity of UHF radiation in such events is 40 or more times higher than typical levels of solar microwave bursts. In particular, such events were observed during periods of geomagnetic disturbances, for example at wavelengths in the 3-50 cm range. The observations were made simultaneously at several points within the SETI project. The intensities were not given in this paper; the corresponding graphical dependences were normalized to the maximum value. Analysis of the different possibilities for generating the observed UHF radiation has shown that the largest contribution to the resulting spectrum is made by transitions between Rydberg levels of the neutral components of a non-equilibrium two-temperature plasma. These levels are excited by the solar radiation flux or by collisions with the stream of electrons precipitating from the upper ionosphere.
Highly excited states of atoms and molecules are called Rydberg states if they are located near the ionization limit and are characterized by the presence of an infinite sequence of energy levels converging to the ionization threshold. They represent an intermediate case between the low-lying excited states and the ionized states located in the continuous spectrum. Rydberg atoms and molecules have one excited, weakly bound electron whose state is characterized by its energy level and its angular momentum l with respect to the ion core. Energy levels with large angular momenta do not depend on l; these are the so-called orbitally degenerate states. These Rydberg states are statistically the most stable, since the electron spends most of its time at large distances from the ion core. The process leading to the formation of these states is called l-mixing. In the upper atmosphere l-mixing proceeds quickly and is irreversible, i.e. almost all Rydberg particles end up in degenerate states. As a result, the differences between atoms and molecules are lost, and the range of UHF radiation is homogeneous (i.e. not dependent on the chemical composition of the excited particles). The l-mixing process takes place in a dense neutral gas medium with a concentration greater than 10^12 cm^-3, which corresponds to altitudes of about 110 km and below. The criterion for the efficiency of the process is the condition that the volume of the electron cloud of the Rydberg particle A** (with radius 2n^2·a0, where a0 is the Bohr radius) contains at least one neutral molecule M. The interaction between them leads to the formation of quasimolecules A**M. The potential energy curves of these molecules split off from the degenerate Coulomb levels and are classified according to the value of the angular momentum L of the weakly bound electron with respect to the molecule M. The shape of the potential curve is determined by the characteristics of the elastic scattering of slow electrons on the molecule M.
The optical transitions between the split and degenerate states of the quasimolecule A**M that occur without a change in the principal quantum number (Δn = 0) correspond to radiation in the decimeter range.
Rydberg states of the particles are not populated at altitudes below roughly 50 km because of the quenching process. This takes place when Rydberg particles interact with unexcited oxygen molecules through the intermediate stage of ion-complex creation (the harpoon mechanism), i.e. through the temporary formation of the negative molecular ion O2^-(s), where s is the vibrational quantum number. This is possible because the negative molecular ion has a series of resonance vibrationally excited autoionizing levels located on the background of the ionization continuum (see Figure 1). Taking these two factors into account, it can be assumed that the layer of the atmosphere emitting in the decimeter range is formed between 50 and 110 km.
The increased solar activity leads to the formation of two types of non-equilibrium plasma in the D- and E-layers of the ionosphere: recombination plasma and photoionization plasma. The first type corresponds to a non-equilibrium two-temperature plasma in which Rydberg states are populated by collisional transitions of free electrons into bound states of the discrete spectrum due to inelastic interaction with the neutral components of the environment. The electron temperature Te here varies from 1000 to 3000 K, and the ambient temperature Ta of this layer varies from 200 to 300 K depending on altitude. Note that in the lower D-layer the collisional mechanism of Rydberg-state population is dominant. Thermalization of the electrons takes place mainly through the vibrational excitation of molecular nitrogen, e^- + N2 → (N2^-) → N2(v) + e^-, which proceeds through the formation of an intermediate negative ion.
The second type corresponds to a photoionization plasma in which the population of Rydberg states occurs under the action of the light flux coming from the solar flare. In this case, the population of the Rydberg states is determined by the intensity of the incident radiation. Here, in contrast to the recombination plasma, the low-lying Rydberg states are also populated, which leads to the production of infrared radiation.
7. The radiation of recombination plasma in the decimeter band
Let us consider the effect of the N2 and O2 molecules on the UHF spectrum of spontaneous emission (absorption) in the decimeter band which appears in the D- and E-layers of the Earth’s ionosphere during strong geomagnetic disturbances. Since the concentration of free electrons ne is small compared with the concentration of atmospheric particles ρa, both under normal conditions and during a magnetic storm, there is no noticeable change in the ambient temperature Ta [64,65]. For both night and day it is of the order of the thermal temperature in the D- and E-layers. This is because the high translational temperature of the particles coming from the ionosphere (F-layer and above) is spent, on entering a denser medium, on the vibrational and rotational excitation of atmospheric molecules. Further relaxation of the excitation occurs through resonant transfer of internal energy, whose transport is carried out in subsequent collisions. The vibrational temperature Tv should relax faster than the rotational temperature TN. The temperatures Ta and TN usually equalize fairly quickly because of the rapid exchange of energy between rotational and translational motions. The rate of transfer increases with increasing excitation energy of the medium. Thus, in the D-layer the electron temperature Te separates from the ambient temperature Ta, and a non-equilibrium quasistationary two-temperature recombination plasma is established, with Ta << Te, which agrees well with direct measurements.
Assuming that the electron flow is stationary and the concentration ne varies only slightly over time, the populations of the excited states of Rydberg atoms and molecules can be determined from the balance equations, taking into account the processes of recombination, ionization and radiation. Since the frequency of collisions of electrons with the particles of the medium is of the order of 10^12-10^14 s^-1, two population distributions of atoms and molecules over the excitation energies of the discrete states are formed in the plasma, owing to the rapid energy exchange of bound and free electrons with the particles of the medium and with each other. The first distribution, with temperature Te, corresponds to highly excited Rydberg states with binding energies En less than a characteristic energy E*. Rydberg states are not populated close to this energy, in the interval that is called the bottleneck of the recombination flux, or «neck of the flow» (see Fig. 2). The second population distribution, with the ambient temperature Ta, is caused by collisions between the particles of the medium and refers to the low-lying electronic states where radiative transitions dominate. Thus, two effective regions of population can be identified in the spectrum: the part with En < E*, where the collision processes dominate, and the part with En > E*, where excitation of the states is due to radiative transitions. The population of the levels located between these regions is negligible, since they are effectively quenched by radiative transitions in the IR, visible, and UV ranges.
Passage of the electrons through the neck of the flow on the energy scale is the slowest stage of the process and determines the kinetics of the non-equilibrium plasma. The energy E* is found from the condition of minimum of the quenching rate constant due to transitions to low-lying states, and the corresponding effective principal quantum number n* is defined as:
(here the concentration of electrons ne is in cm^-3 and the temperature Te in K). The dependences of n* on the concentration ne and the temperature of free electrons calculated according to (30) are listed in Table 1.
The perturbation of Rydberg states by the neutral particles reduces to the splitting of the degenerate levels of the highly excited A**N2 and A**O2 quasimolecules into groups of sublevels whose positions are determined by the characteristics of weakly bound electron scattering on the nitrogen and oxygen molecules. Moreover, the transitions between the terms split off from the Coulomb levels provide the greatest contribution to the UHF radiation, where L is the angular momentum of the weakly bound electron with respect to the perturbing particle of the medium. A non-equilibrium two-temperature plasma is formed at such densities when the detachment of the electron temperature from the medium temperature occurs under the influence of the flux of ionospheric electrons. The populations of the Rydberg particles generated here are defined by the temperature Te, the concentration ne and flux of free electrons, and the medium concentration ρa. An increase in concentration in the lower part of the D-layer leads to an increase in the collisional quenching rate of the Rydberg states, and at sufficiently high densities their population decreases sharply. The spatial region for the formation of excited particles is therefore concentrated in the lower part of the E-layer and in the D-layer closer to its upper boundary (this region is 25-30 km wide), where the states within the corresponding range of principal quantum numbers will be populated most effectively.
The states with small angular momenta l are perturbed strongly by the ionic core (as, for example, in the nitrogen and oxygen molecules). The field of the perturbing particle M, in turn, influences only those superpositions of Coulomb-center states that have low electron angular momenta L with respect to M. These two «different center» groups of terms will be called Rydberg l-terms and degenerate L-terms. The latter are defined as
where the quantum defects induced by the field of the molecule M are equal to
Table 1. Effective neck-of-the-flow principal quantum number n* at various electron temperatures Te (K) and concentrations of free electrons ne.
Here the parameters entering these expressions are the scattering length of the electron on the perturbing particle M, its quadrupole moment, the isotropic and anisotropic components of its polarizability, the semiclassical momentum of the weakly bound electron in the Coulomb field, and the interatomic distance R of the A**M quasimolecule; E is the total energy of the Rydberg particle measured from the ground state of the ion (here and throughout, we use the atomic system of units with ħ = e = me = 1).
Figure 3 shows the energy diagram of the potential energy curves of the quantum L-states as functions of the interatomic coordinate R. For angular momenta L = 0 and L = 1 they are split from the degenerate Coulomb levels with principal quantum numbers n+1 and n at the classical turning points. Blue arrows indicate the optical transitions between the split and degenerate states of the quasimolecule A**M occurring without a change in the principal quantum number (Δn = 0); this corresponds to UHF radiation in the decimeter range. The red arrows show similar transitions with a change in the principal quantum number, which correspond to IR radiation.
The radiation intensities of the photon energy emitted per unit time, for transitions with and without a change in the principal quantum number, are defined as
where the appropriate transition frequencies appear, together with the effective principal quantum number of the term. The procedure for calculating the coefficients is described in detail in . The averaging of the intensities (33) and (34) over all possible interatomic coordinates is performed as follows
(here the electron wave function of the A**M quasimolecule enters). This corresponds to a static treatment of the problem, which is valid if the average electron velocity is much higher than the characteristic velocity of the atomic particles in the medium, i.e. when the corresponding inequality involving the reduced mass holds. Under our conditions this situation is certainly satisfied. Since the main distortion of the Coulomb wave function occurs in the vicinity of the perturbing particle M, to a first approximation the wave function can be evaluated at the position of the perturber. Considering also that radiation from the states with a given value of n is not coherent, we should add consistently the transition intensities (33) and (34) for the corresponding emission lines of the A**N2 and A**O2 quasimolecules separately, in order to determine the total contribution of all possible transitions, and multiply them by the effective population of the Rydberg term. By analogy with , they are calculated as
(here the statistical sum over the states of the positive molecular ions enters). The non-equilibrium plasma factor, which takes into account the flux of precipitating electrons from the ionosphere and the quenching of the Rydberg states of the particles, can be represented as
The first coefficient is given by the flow of electrons, and the second characterizes the decrease in the populations due to quenching of the Rydberg particles in the neutral medium. It is defined as the ratio of the effective number of electrons, moving in a flow with a given velocity and passing through a unit area during the lifetime of the split-off L excited state, to the equilibrium concentration of electrons, i.e.
The lifetime of the degenerate Rydberg states of the quasimolecule is characterized by the time of spontaneous emission in the infrared, which is defined through the intensity of the energy emission in the IR region. Since the radiative lifetime decreases with increasing frequency, the gain factor can be expressed in the form
where the maximum is achieved at a certain frequency and electron temperature. Under these conditions, the quantity n_max gives the position of the maximum population of the Rydberg states, which is defined as
Here the weight factors of the N2 and O2 molecules enter, together with their quadrupole moments and rotational constants expressed in atomic units. It will be recalled that in the atomic system of units the concentration of the medium, the rotational constant, and the medium temperature are determined as
(x is the exponent characterizing the medium concentration ρa = 10^x cm^-3, which under these conditions varies from 12 to 16). Note that, according to this expression, the position of the maximum of the Rydberg-state population decreases as the temperature of the electrons and the concentration of the medium increase. In the calculations performed below, the spectroscopic parameters of the molecules and molecular ions of nitrogen and oxygen are taken from . The scattering lengths, quadrupole moments, and static polarizabilities of the nitrogen and oxygen molecules, for which the data of [70-74] were used, are given in Table 2. The corresponding dependences are shown in Table 3.
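For reference, the conversions into atomic units used here can be sketched as follows; the conversion constants are standard values and are not taken from the text.

```python
# Sketch: converting medium concentration, rotational constant, and temperature
# into the atomic system of units (hbar = e = m_e = 1).

A0_CM = 0.529177e-8        # Bohr radius in cm
HARTREE_CM1 = 219474.63    # 1 hartree expressed in cm^-1
HARTREE_K = 315775.0       # 1 hartree expressed in kelvin

def concentration_au(rho_cm3: float) -> float:
    """Concentration: multiply by a0^3 (one particle per cubic Bohr = 1 a.u.)."""
    return rho_cm3 * A0_CM**3

def rot_const_au(B_cm1: float) -> float:
    """Rotational constant given in cm^-1."""
    return B_cm1 / HARTREE_CM1

def temperature_au(T_K: float) -> float:
    """Temperature given in kelvin."""
    return T_K / HARTREE_K

# Illustrative D-layer-like values: rho = 10^14 cm^-3, B(N2) ~ 2 cm^-1, Ta = 250 K
print(concentration_au(1e14))
print(rot_const_au(2.0))
print(temperature_au(250.0))
```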
The intensity of the UHF radiation per unit volume is written as follows:
The index in (40) takes on various values according to the sequence in which the radiation lines cross the given frequency. The distribution of the n-dependent emission lines (corresponding to the values L = 0, 1, 2, 3) for the N2 and O2 molecules contains a series of four transitions which converge, with increasing n, to the limiting transition.
The shift in frequency of these limits for the A**N2 and A**O2 series leads to the heterogeneity of the UHF spectrum, in which three spectral ranges of frequencies can be distinguished.
The optical thickness of the radiating layer is defined by the limiting value of the medium concentration at which the radiation intensity vanishes. This situation occurs below a limiting electron concentration at the corresponding medium concentration. On the other hand, this dependence is determined by the quenching of the Rydberg particles and drops sharply at sufficiently high medium concentrations. Under these circumstances, quenching is accompanied by the transfer of the electronic excitation energy into the translational motion of the heavy components of the medium. Since the concentration of excited particles is small compared with the medium concentration, the cooling of the flow of free electrons through the rotational transitions of the molecules should proceed efficiently at the bottom of the D-layer. The Rydberg states are eliminated when the electron and ambient temperatures are equalized. This condition is the criterion for the lower boundary of the plasma layer, which does not depend on the radiation frequency and corresponds to a definite medium concentration.
Figure 4 shows the dependence of the frequencies of the emission lines of the A**N2 and A**O2 quasimolecules on the principal quantum number n for the transitions between the split-off and degenerate Coulomb levels and for the transitions between the split-off levels, calculated according to formulas (31) and (32). In the 0.8-10 GHz frequency range the distributions with respect to L of the radiation lines as functions of n for each quasimolecule contain series of four lines, corresponding to the transitions which, for increasing n, converge in the limit to the limiting transitions with L = 0, 1, 2, 3. These lines are marked with separate symbols for the A**N2 and A**O2 quasimolecules, respectively.
Table 3. Position n_max of the Rydberg-state population maximum at various medium concentrations ρa (cm^-3).
It is seen that the frequency shift of the A**N2 and A**O2 limits relative to each other leads to the formation of three spectral ranges in which transitions are suppressed for small values of n. This difference is caused by the characteristics of slow-electron scattering by the nitrogen and oxygen molecules.
The intensities of the incoherent UHF radiation of the excited medium calculated by formula (40) in the 0.8-1.8 GHz range are presented in Fig. 5 and Fig. 6 as functions of frequency for quiet and disturbed ionosphere states. The profile of the UHF radiation is a non-monotonic function of the radiation frequency and increases sharply near the right edge of the range.
With an increase in the electron concentration by two orders of magnitude, the relative intensity W increases by about four orders of magnitude, i.e. approximately as W ~ ne^2. This is likely directly related to the observed effect of the sequential disappearance of the L1 and L2 frequencies of the GPS signal as the power of the solar storm increases, because the position of the first region of radiation-intensity attenuation (1.17-1.71 GHz) and the «transparency window» for the propagation of satellite signals nearly coincide.
8. Radiation of photoionization plasma
The photoionization plasma is formed within 20-30 minutes under the influence of the wideband radiation coming into the atmosphere after a solar flare has occurred. This process is caused by multi-quantum excitation of the electronic states of nitrogen and oxygen atoms and molecules. In this case the spin-forbidden character of the corresponding radiative transitions is removed by the interaction of the excited particles with the ambient molecules M and the creation of the A**N2 and A**O2 quasimolecules. The distribution of the Rydberg-state populations for large values of the principal quantum number, n > 30, is similar to that discussed above for the non-equilibrium recombination plasma. The difference lies in the fact that the top of the energy scale is further depleted by photoionization of the Rydberg states. For small values, n < 10, the quenching of the Rydberg and low-lying excited states takes place because of predissociation, including non-adiabatic transitions through intermediate valence configurations, as well as the resonant (non-resonant) transfer of internal energy in collisional processes with subsequent thermalization of the medium. The effect mentioned above is confirmed by an observed increase in the temperature of the neutral environment Ta with increasing height in the 40-60 km range. Thus, in the photoionization plasma the orbitally degenerate L-states are populated primarily for principal quantum numbers 10 < n < 30. In contrast to the recombination plasma, the population-distribution function here depends on the intensity of the light and is not a function of either the temperature of the free electrons Te or the resulting shape of the absorption line.
Under these conditions, the spontaneous decimeter radiation (corresponding to Δn = 0) will be accompanied by strong IR radiation (with Δn ≠ 0), whose intensity exceeds the intense UHF radiation of the recombination plasma by at least three orders of magnitude. This can be explained by the fact that, according to , the maximum of the IR radiation for the recombination plasma falls in a higher interval of principal quantum numbers, while in the case of the photoionization plasma the maximum should be located in the vicinity of n = 10. It should also be noted that the l-mixing process for the low-lying Rydberg states is suppressed, so that the influence of the environment should be significantly reduced. This is particularly important for the formation of the frequency-spectrum profile of the IR radiation under conditions of strong solar activity. The maximum of the radiation should occur near the bottom of the D-layer at an altitude of 50-60 km. This is due to the fast decrease of the l-mixing cross section with increasing principal quantum number n.
This phenomenon should lead to the depletion of the populations of the low-lying molecular Rydberg states as a result of predissociation. For this it is necessary that l-mixing occurs, i.e. inside the Rydberg-electron cloud of the A** particle at least one medium molecule M should be present to react and form an A**M quasimolecule. This condition corresponds to principal quantum numbers n satisfying (4/3)π(2n^2·a0)^3·ρa ≥ 1,
which depends on the concentration of the medium ρa. Table 4 shows that it is realized at a concentration of about 10^17 cm^-3. This means that the process of l-mixing plays an important role in shaping the frequency profile. No less important here are the processes of quenching of the Rydberg states, which should reproduce a simple power-law dependence of the intensity I on n on the long-wavelength side of the profile. The slope of this curve depends on the ratio of the luminous flux and the rate of quenching.
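The l-mixing criterion stated earlier (at least one molecule M inside the Rydberg-electron cloud of radius 2n^2·a0) can be turned into a quick estimate of the minimal principal quantum number. This is an illustrative sketch of that condition only, not a reproduction of the paper's Table 4.

```python
import math

# Sketch: minimal principal quantum number n for which the Rydberg-electron
# cloud (radius 2 n^2 a0) contains at least one neutral molecule, i.e.
#   (4/3) * pi * (2 n^2 a0)^3 * rho >= 1.

A0_CM = 0.529177e-8  # Bohr radius in cm

def n_min(rho_cm3: float) -> float:
    """Smallest n satisfying the one-molecule-in-the-cloud criterion."""
    r_needed = (3.0 / (4.0 * math.pi * rho_cm3)) ** (1.0 / 3.0)  # cm
    return math.sqrt(r_needed / (2.0 * A0_CM))

for rho in (1e14, 1e17):
    print(f"rho = {rho:.0e} cm^-3  ->  n_min ~ {n_min(rho):.0f}")
```

At a medium concentration near 10^17 cm^-3 the criterion is already met for n of about 11, consistent with the 10 < n < 30 population window quoted above.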
The success of the GPS methodology rests on an objective theory of measurement which denies Einstein’s symmetry principle derived on the basis of the LT. There is never any ambiguity in principle as to which of two clocks in relative motion runs slower. The goal of relativity theory is to predict quantitatively the relative rates of clocks on the basis of information regarding their respective states of motion and positions in a gravitational field. For this purpose it is necessary to designate a specific objective rest system (ORS) to act as the reference for determining the speeds of the clocks used to compute the amount of time dilation. The Earth’s center of mass (ECM) serves as the ORS for computing the rates of clocks located on orbiting satellites as well as on the Earth’s surface. The effects of gravity on the relative rates can then also be computed from knowledge of their respective positions in space relative to the ECM. To the surprise of many, it was found that the transverse Doppler effect was asymmetric when the light source and detector were mounted on a high-speed rotor, thus agreeing with Einstein’s alternative interpretation of time dilation, which rejects the above symmetry principle and concludes instead that an accelerated clock always runs slower than its stationary counterpart. The EP was used to obtain a quantitative prediction of these results, but it was pointed out by Sherwin that the lack of symmetry expected from the LT proves that it is only valid for uniform translation. This conclusion has been largely ignored, however, and emphasis has been placed instead on the possibility of using the experimental data to predict quantitatively the dependence of the rates of clocks on their state of motion and position in a gravitational field.
While this information has proven invaluable in the development of GPS, it leaves open the question of whether other predictions of the LT, such as space-time mixing, remote non-simultaneity, FitzGerald-Lorentz length contraction (FLC), and Lorentz invariance, are valid under similar circumstances in which acceleration is present, and even whether the LT has validity for uniformly translating systems.
In considering this point it has been noted that Einstein made an undeclared assumption, a hidden postulate, in his derivation of the LT, one which he made no attempt to justify. An example has been given to show that this assumption in fact leads to contradictory predictions regarding whether two observers in uniform relative motion agree or disagree on the length of an object that is oriented perpendicularly to their velocity (the clock riddle). It has thereupon been shown that all hitherto successful predictions of relativity theory, including those that are relevant to GPS, can be obtained by assuming that the clock rates of different observers have a constant ratio as long as neither their relative state of motion nor their respective positions in a gravitational field change. The latter assumption leads to an alternative Lorentz transformation (ALT) which is compatible with Einstein's two postulates of relativity and with the same velocity transformation (VT) that is obtained using conventional STR. This version of the theory eliminates the contradiction of the clock riddle while continuing to predict successfully the aberration of starlight at the zenith and the results of the Fresnel light-drag experiment. The resulting theory recognizes that the rates of proper clocks do differ between inertial systems, but that it is nonetheless impossible for a given observer to determine his state of motion on the basis of strictly in situ measurements. This is because there is a uniform scaling of the rates of all natural clocks when they are accelerated by the same amount; the same holds true for all other physical quantities. This conclusion is perfectly consistent with the relativity principle (RP), which can be restated as follows: the laws of physics are the same in all inertial systems, but the units in which they are expressed can and do differ in a systematic manner.
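The constant clock-rate-ratio assumption can be stated minimally as follows (a sketch of the idea only, not the full ALT; Q denotes the assumed fixed conversion factor between the units of the two observers):

```latex
% Elapsed times and distances measured by observers S and S' are assumed to
% differ by a single constant factor Q, fixed as long as neither the relative
% state of motion nor the gravitational potential changes:
\Delta t' = \frac{\Delta t}{Q}, \qquad \Delta r' = \frac{\Delta r}{Q}
\quad\Longrightarrow\quad
\frac{\Delta r'}{\Delta t'} = \frac{\Delta r}{\Delta t} = c .
```

Because distances and elapsed times scale by the same factor, both observers measure the same speed of light, so the scheme remains compatible with Einstein's second postulate while avoiding the symmetric time dilation of the standard LT.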
The difference between the impacts of photoionization and recombination plasma on the distortion of GPS satellite signals is clearly manifested in the observed dependence of the positioning errors on time. In the first case there is a sharp, narrow (up to 20 min) peak with a positioning error of more than 50 m. The second case corresponds to the formation of a bell-shaped error curve with a typical width of several hours and positioning errors of 15-20 m.
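The two signatures can in principle be told apart from an error time series by peak amplitude and duration alone. A toy sketch with synthetic Gaussian profiles (all amplitudes, widths, and thresholds are assumed from the figures quoted above, not fitted to data):

```python
import math

# Toy classifier for the two positioning-error signatures described above.
# Thresholds (50 m / 20 min and 15-20 m / hours) are assumed from the text.
def classify(times_min, errors_m):
    """Classify an error-vs-time profile by peak amplitude and FWHM."""
    peak = max(errors_m)
    half = peak / 2.0
    above = [t for t, e in zip(times_min, errors_m) if e >= half]
    fwhm = above[-1] - above[0]          # full width at half maximum, minutes
    if peak > 50 and fwhm <= 20:
        return "photoionization-type: sharp, narrow peak"
    if 15 <= peak <= 20 and fwhm >= 60:
        return "recombination-type: bell-shaped, hours-long"
    return "unclassified"

def gaussian(t, amp, center_min, width_min):
    return amp * math.exp(-((t - center_min) / width_min) ** 2)

t = [0.3 * i for i in range(2001)]                     # 0..600 minutes
sharp = [gaussian(x, 60.0, 300.0, 6.0) for x in t]     # >50 m, ~10 min FWHM
bell  = [gaussian(x, 18.0, 300.0, 90.0) for x in t]    # 15-20 m, ~2.5 h FWHM

print(classify(t, sharp))   # photoionization-type
print(classify(t, bell))    # recombination-type
```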
During periods of solar activity, characteristic IR radiation should be emitted, indicating that Rydberg states have formed in the D-layer of the Earth's ionosphere. The integrated intensity close to the IR radiation maximum gives information about the population of Rydberg states at altitudes of 50-60 km. These results can serve as a starting point for the corresponding kinetic calculations. The slope of the left side of the frequency profile contains information about the magnitude of the light flux.
An independent line of research would be a systematic analysis of the long-wave IR spectrum of radiation (for transitions) emitted by the D-layer over an extended period during a strong geomagnetic storm. This band of the spectrum falls in the range of principal quantum numbers 20 < n < 40, where the l-mixing process is very efficient. The features of the frequency profile described above should also appear in this case. By contrast, the spectrum will be richer for the radiation of the A**N2 and A**O2 quasimolecules in the decimeter range (for the transitions). This effect may serve as a basis for plasma diagnostics and IR tomography of the ionospheric D-layer.
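As a rough sanity check that 20 < n < 40 indeed corresponds to the long-wave IR/sub-mm region, one can evaluate the hydrogen-like Rydberg formula for the n+1 → n transition (an estimate only; the actual quasimolecular A**N2 and A**O2 terms will be shifted from the hydrogenic values):

```python
# Hydrogen-like estimate of the n+1 -> n transition wavelength for a Rydberg
# state; quasimolecular terms will deviate somewhat from this idealization.
R_C = 3.289841960e15   # Rydberg frequency constant, Hz
C   = 2.99792458e8     # speed of light in free space, m/s

def transition_wavelength(n):
    """Wavelength of the n+1 -> n transition, in metres."""
    nu = R_C * (1.0 / n**2 - 1.0 / (n + 1)**2)
    return C / nu

for n in (20, 30, 40):
    print(f"n = {n}: lambda ~ {transition_wavelength(n) * 1e3:.2f} mm")
```

The resulting wavelengths run from roughly 0.4 mm at n = 20 to about 3 mm at n = 40, i.e. the far-IR/sub-millimetre band, consistent with the long-wave IR emission discussed above.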
In conclusion, it should be noted that two physical factors play key roles during the propagation of GPS satellite signals. The first is the absorption and subsequent stimulated emission of electromagnetic waves on Rydberg states of the A**N2 and A**O2 quasimolecules, with a time delay of 10^-5-10^-6 s in a single scattering process. The second is due to incoherent plasma radiation. These processes act independently of each other. A two- to threefold increase in the envelope of the resonance intensity profile and the formation of a phase shift are the most characteristic features of resonant absorption of electromagnetic waves with subsequent re-emission. These two processes alone can explain the power growth and the disappearance of the GPS signal observed in . This means that the frequencies of the signals emitted by the satellite and subsequently received on the ground may differ substantially from one another.
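Using the Δr = c Δt relation from the introduction, the per-scattering delay quoted above can be converted into an apparent excess path (a naive single-scattering conversion; how much of this survives averaging over the full cascade is a separate kinetic question):

```python
# Naive conversion of a resonant re-emission time delay into apparent excess
# range, via delta_r = c * delta_t. Delay values are those quoted in the text.
C = 2.99792458e8   # speed of light in free space, m/s

def apparent_excess_path(delay_s):
    """Excess range (m) corresponding to a signal time delay (s)."""
    return C * delay_s

for tau in (1e-6, 1e-5):   # single-scattering delay range quoted above
    print(f"tau = {tau:.0e} s -> {apparent_excess_path(tau) / 1e3:.1f} km extra path")
```

Even a single scattering event at the quoted delays corresponds to hundreds of metres to kilometres of apparent extra path, which is why the quantum delay mechanism cannot be ignored in precise positioning.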
Thus, the physical cause of the time delay and phase shift of GPS satellite signals during periods of strong solar activity is a cascade of resonant re-emissions by Rydberg states of the A**N2 and A**O2 quasimolecules in the E- and D-layers of the ionosphere; i.e., these effects are determined by the quantum properties of the propagating medium.