Bistatic synthetic aperture radar (BiSAR), which operates with a transmitter and receiver mounted on separate platforms (Cherniakov & Nezlin, 2007), is expected to play an important role in future radar applications (Krieger & Moreira, 2006). The BiSAR configuration can bring many benefits in comparison with monostatic systems, such as the exploitation of additional information contained in the bistatic reflectivity of targets (Eigel et al., 2000; Burkholder et al., 2003), improved flexibility (Loffeld et al., 2004), reduced vulnerability (Wang & Cai, 2007), and forward-looking SAR imaging (Ceraldi et al., 2005). These advantages could be worthwhile, e.g., for mapping topographic features, surficial deposits, and drainage, to show the relationships between forest, vegetation, and soils. Even for objects that show a low radar cross section (RCS) in monostatic SAR images, one can often find a bistatic angle that increases their RCS and makes these objects visible in BiSAR images. Furthermore, a BiSAR configuration allows a passive receiver, operating at close range, to receive data reflected from potentially hostile areas. This passive receiver may be teamed with a transmitter at a safe place, or make use of opportunistic illuminators such as television and radio transmitters or even unmanned vehicles (Wang, 2007a).
However, BiSAR is subject to problems and special requirements that are either not encountered at all, or encountered in less serious form, in monostatic SAR (Willis, 1991). The biggest technological challenge lies in the synchronization of the two independent radars: time synchronization, the receiver must precisely know when the transmitter fires (on the order of nanoseconds); spatial synchronization, the receiving and transmitting antennas must simultaneously illuminate the same spot on the ground; and phase synchronization, the receiver and transmitter must remain coherent over extremely long periods of time. The most difficult of these is phase synchronization. To obtain a focused BiSAR image, the phase information of the transmitted pulse has to be preserved. In a monostatic SAR, the co-located transmitter and receiver use the same stable local oscillator (STALO), so the phase can only decorrelate over very short periods of time. In contrast, in a BiSAR system the transmitter and receiver fly on different platforms and use independent master oscillators, so there is no phase noise cancellation. The superimposed phase noise corrupts the received signal over the whole synthetic aperture time. Moreover, any phase noise (instability) in the master oscillator is magnified by frequency multiplication. As a consequence, the low phase noise requirements imposed on the oscillators of BiSAR are much more stringent than in the monostatic case. In the case of indirect phase synchronization using identical STALOs in the transmitter and receiver, phase stability is required over the whole coherent integration time. Even if the tolerance for low-frequency or quadratic phase synchronization errors is relaxed to 90°, the required phase stability is only achievable with ultra-high-quality oscillators (Wei, 2004).
Moreover, airborne platforms face aggravating circumstances: because of the different platform motions, phase stability is further degraded.
Although multiple BiSAR image formation algorithms have been developed (Wang et al., 2006), BiSAR synchronization aspects have seen much less development, at least in the open literature. The requirement of phase stability in BiSAR was first discussed in (Auterman, 1984), and further investigated in (Krieger et al., 2006; Krieger & Younis, 2006), which conclude that uncompensated phase noise may cause a time-variant shift, spurious sidelobes and a deterioration of the impulse response, as well as a low-frequency phase modulation of the focused SAR signal. The impact of frequency synchronization error in spaceborne parasitic interferometric SAR is analyzed in (Zhang et al., 2006), and an estimation of the oscillator's phase offset in bistatic interferometric SAR is investigated in (Ubolkosold et al., 2006). In a similar manner, linear and random time synchronization errors are discussed in (Zhang et al., 2005).
As a consequence of these difficulties, practical synchronization techniques for BiSAR are still lacking. Because the application is of great scientific and technological interest, however, several authors have proposed potential synchronization techniques or algorithms, such as ultra-high-quality oscillators (Gierull, 2006), a direct exchange of radar pulses (Moreira et al., 2004), a ping-pong interferometric mode for fully active systems (Evans, 2002), and an appropriate bidirectional link (Younis et al., 2006a; Younis et al., 2006b; Eineder, 2003). The practical goal is to develop a workable synchronization technique without too much alteration to existing radars.
This chapter concentrates on general BiSAR synchronization and aims at the development of a practical solution for the time and phase synchronization aspects without too much alteration to existing radars. The remaining sections are organized as follows. In Section 2, the impact of synchronization errors on BiSAR systems is analysed using an analytical model. It is concluded that synchronization compensation techniques must be applied to focus BiSAR raw data. Possible time synchronization and phase synchronization approaches are then investigated in Section 3 and Section 4, respectively. Finally, Section 5 concludes the chapter with some possible future work.
2. Impact of synchronization errors on BiSAR systems
2.1. Fundamental of phase noise
The instantaneous output voltage V(t) of a signal generator or oscillator is (Lance et al., 1984)

V(t) = [V₀ + ε(t)] sin[2π ν₀ t + φ₀ + φ(t)]     (1)

where V₀ and ν₀ are the nominal amplitude and frequency, respectively, φ₀ is a start phase, and ε(t) and φ(t) are the fluctuations of signal amplitude and phase, respectively. Notice that, here, we have assumed that |ε(t)/V₀| ≪ 1 and |φ̇(t)/ν₀| ≪ 1 (Wang et al., 2006).
It is well known that S_φ(f), defined as the spectral density of phase fluctuations on a per-hertz basis, is the term most widely used to describe the random characteristics of frequency stability. It is a measure of the instantaneous time shifts, or time jitter, that are inherent in signals produced by signal generators or added to signals as they pass through a system (Wall & Vig, 1995). Although an oscillator's phase noise is a complex interaction of variables, ranging from its atomic composition to its physical environment, a piecewise polynomial representation of an oscillator's phase noise exists and is expressed as (Rutman, 1978)

S_φ(f) = a₄ f⁻⁴ + a₃ f⁻³ + a₂ f⁻² + a₁ f⁻¹ + a₀     (3)
where the coefficients aᵢ describe the different contributions of phase noise, and f represents the phase fluctuation frequency. As modeled in Eq. (3), the contributions can be attributed to several physical mechanisms: random walk frequency noise, flicker frequency noise, white frequency noise, flicker phase noise, and white phase noise. Random walk frequency noise (Vannicola & Varshney, 1983) arises from the oscillator's physical environment (temperature, vibration, shocks, etc.). This phase noise contribution can be significant for a moving platform, and presents design difficulties since laboratory measurements are necessary while the synthesizer is under vibration. White frequency noise originates from additive white thermal noise sources inside the oscillator's feedback loop. Flicker phase noise is generally produced by amplifiers, and white phase noise is caused by additive white noise sources outside the oscillator's feedback loop (Donald, 2002).
In engineering, under the condition that the phase fluctuations occurring at a rate f are small compared with 1 rad, a good approximation is

L(f) ≈ S_φ(f) / 2

where L(f) is defined as the ratio of the power in one sideband, referred to the input carrier frequency on a per-hertz-of-bandwidth spectral density basis, to the total signal power, at Fourier frequency f from the carrier, per device.
2.2. Model of phase noise
One cannot simulate phase noise without a model for it. In (Hanzo et al., 2000), a white phase noise model is discussed, but it cannot describe the statistical process of phase noise. In (Foschini & Vannucci, 1988), a Wiener phase noise model is discussed, but it cannot describe the low-frequency phase noise, since this part of the phase noise is a non-stationary process. As different phase noise brings different effects on BiSAR (see Fig. 1), the practical problem is how to develop a useful and comprehensive model of frequency instability that can be understood and applied in BiSAR processing. Unfortunately, Eq. (3) is a frequency-domain expression and is not convenient for analyzing its impact on BiSAR. As such, we have proposed an analytical model of phase noise, as shown in Fig. 2. This model uses Gaussian noise as the input of a hypothetical low-pass filter, and its output is then considered as phase noise; that is, the model represents the output of a hypothetical filter with impulse response h(t) receiving an input signal n(t).
It is well known that the power spectral density (PSD) of the output signal is given by the product S_n(f)|H(f)|², where the filter transfer function H(f) is the Fourier transform of h(t). Notice that, here, H(f) must satisfy

where a sharp upper cutoff frequency f_h and a sharp lower cutoff frequency f_l are introduced. Notice that time-domain stability measures sometimes depend on f_h and f_l, which must then be given with any numerical result, although no general recommendation has been made for these values; fixed values of f_l and f_h are adopted here. Thereby, the PSD of the phase noise in Fig. 2 can be analytically expressed as
where S_n(f) is the PSD of the Gaussian noise at the filter input, and K is a constant. An inverse Fourier transform yields

where φ(t) denotes the phase noise in the time domain and ⊗ denotes convolution.
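To make this filtered-noise model concrete, the sketch below shapes white Gaussian noise with a band-limited filter whose squared magnitude follows the power-law PSD, as in Fig. 2. All numerical values (sampling rate, cutoffs, coefficients) are illustrative assumptions, not parameters taken from this chapter:

```python
import numpy as np

def simulate_phase_noise(n, fs, coeffs, f_l=1e-3, f_h=None, seed=0):
    """Filtered-Gaussian-noise phase model: white noise n(t) is passed
    through a hypothetical band-limited filter whose |H(f)|^2 is
    proportional to the power-law PSD
    S_phi(f) = a4*f^-4 + a3*f^-3 + a2*f^-2 + a1*f^-1 + a0."""
    rng = np.random.default_rng(seed)
    if f_h is None:
        f_h = fs / 2.0
    noise = rng.standard_normal(n)              # Gaussian input n(t)
    spec = np.fft.rfft(noise)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    a4, a3, a2, a1, a0 = coeffs
    shape = np.zeros_like(f)
    band = (f >= f_l) & (f <= f_h)              # sharp lower/upper cutoffs
    fb = f[band]
    shape[band] = a4 * fb**-4 + a3 * fb**-3 + a2 * fb**-2 + a1 * fb**-1 + a0
    # output PSD is proportional to S_n(f) * |H(f)|^2
    return np.fft.irfft(spec * np.sqrt(shape), n)

# illustrative coefficients only
phi = simulate_phase_noise(4096, 100.0, (1e-6, 1e-5, 1e-4, 1e-4, 1e-9))
```

The returned sequence realizes a process whose PSD follows the target shape up to a scale factor, which is all that is needed for the qualitative studies of Figs. 3 and 4.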
2.3. Impact of phase synchronization errors
Since only the STALO phase noise is of interest, the modulation waveform used for range resolution can be ignored and the radar can be simplified into an azimuth-only system (Auterman, 1984). Suppose the transmitted signal is a sinusoid whose phase argument is
The first term is the carrier frequency term and the second is the phase noise term, where m is the ratio of the carrier frequency to the STALO frequency. After reflection from a target, the received signal phase is that of the transmitted signal delayed by the round-trip time τ. The receiver output signal phase results from demodulating the received signal with the receiver STALO, which has the same form as the transmitter STALO.
Hence we have
The first term is a frequency offset arising from non-identical STALO frequencies, which causes a drift of the focused image. Because this drift can easily be corrected using a ground calibrator, it is ignored here. The second term forms the usual Doppler term as the round-trip time to the target varies; it should be preserved. The last term represents the effect of STALO frequency instability, which is the term of interest. As a typical example, consider an X-band airborne BiSAR with a swath of 6 km; a typical STALO used in current SAR systems has a high frequency accuracy (δf/f) (Wei, 2004). With the carrier frequency, the speed of light, and the round-trip time from radar to target of this example, the phase error in fast time is found to be
which has negligible effects on the synchronization phase. Hence, we have the approximate expression
That is to say, the phase noise of the oscillator in fast time is negligible, so we need consider only the phase noise in slow time.
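The fast-time argument can be checked with a back-of-the-envelope calculation. The numbers below (10 GHz carrier, 10⁻⁹ fractional accuracy, 12 km round trip for a 6 km swath) are assumed for illustration only:

```python
import math

# Assumed illustration values (not the chapter's exact numbers):
f_carrier = 1.0e10      # X-band carrier, 10 GHz
frac_accuracy = 1e-9    # STALO fractional frequency accuracy (df/f)
c = 3.0e8               # speed of light, m/s
round_trip = 12.0e3     # transmitter-target-receiver path, m

tau = round_trip / c                                      # fast-time delay, s
phase_error = 2 * math.pi * f_carrier * frac_accuracy * tau
# on the order of 1e-3 rad: far below 2*pi, hence negligible in fast time
```

Even with these pessimistic round numbers the fast-time phase error stays in the milliradian range, which supports dropping it from the analysis.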
Accordingly, the phase error in BiSAR can be modelled as
It is assumed that the transmitter and receiver phase noises are independent random processes with identical PSD S_φ(f). Then, the PSD of the phase noise in BiSAR is
where the factor of two arises from the addition of two uncorrelated but identical PSDs. This holds in those cases where the effective division ratio in the frequency synthesizer is exactly equal to a small integer fraction. In other instances, an empirical formula is (Kroupa, 1996)
Take one STALO as an example, whose phase noise parameters are listed in Table 1. This STALO can be regarded as a representative example of the ultra-stable oscillators used in current airborne SAR systems. Predicted phase errors are shown in Fig. 3 for a given time interval. Moreover, the impact of phase noise on BiSAR, compared with the ideal compression results in azimuth, can be found in Fig. 4(a). We can draw the conclusion that oscillator phase instabilities in BiSAR manifest themselves as a deterioration of the impulse response function. It is also evident that oscillator phase noise may not only defocus the SAR image, but also introduce significant positioning errors along the scene extension.
Furthermore, it is known that high-frequency phase noise causes spurious sidelobes in the impulse response function. This deterioration can be characterized by the integrated sidelobe ratio (ISLR), which measures the transfer of signal energy from the mainlobe to the sidelobes. For an azimuth integration time T, the ISLR contribution due to phase errors can be computed in dB as
A typical requirement for the maximum tolerable ISLR then limits the maximum coherent integration time in this example, as shown in Fig. 4(b). This result is consistent with that of (Krieger & Younis, 2006).
Generally, for the Fourier frequencies in the region of interest for SAR operation, S_φ(f) can be modelled as (Willis, 1991)

Note that the leading coefficient is the value of S_φ(f) at f = 1 Hz for the specific oscillator. As the slope of Eq. (17) is steep, there is
Hence, the deterioration of ISLR may be approximated as
It was concluded in (Willis, 1991) that the error of this approximation is small over the region of interest.
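Under the small-error assumption, this ISLR contribution can be evaluated numerically by integrating the phase-noise PSD above the aperture cutoff 1/T. The PSD coefficients below are assumptions, not the Table 1 values:

```python
import numpy as np

def islr_db(psd, T, f_h=1.0e4, n=200_000):
    """Small-phase-error approximation of the ISLR in dB: phase-noise
    components faster than 1/T leak energy from the mainlobe into the
    sidelobes, so ISLR ~ 10*log10( integral of S_phi(f) from 1/T to f_h ).
    Valid while the integrated phase variance is well below 1 rad^2."""
    f = np.linspace(1.0 / T, f_h, n)
    y = psd(f)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(f))  # trapezoid rule
    return 10.0 * np.log10(integral)

# illustrative power-law PSD; coefficients are assumptions only
psd = lambda f: 1e-4 * f**-3 + 1e-6 * f**-2 + 1e-9
islr_short = islr_db(psd, T=0.5)
islr_long = islr_db(psd, T=2.0)   # longer integration -> higher (worse) ISLR
```

Lengthening T lowers the cutoff 1/T, so more phase-noise power counts as sidelobe energy, which is why the ISLR requirement bounds the coherent integration time.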
2.4. Impact of time synchronization errors
Timing jitter is the term most widely used to describe an undesired perturbation or uncertainty in the timing of events. It is a measurement of variations in the time domain, and essentially describes how far the signal period has wandered from its ideal value. For BiSAR applications, timing jitter becomes even more important and can significantly degrade image quality. Thus, special attention should be given to the effects of timing jitter in order to predict possible degradation in the behavior of BiSAR systems. Generally speaking, we can model jitter in a signal by starting with a noise-free signal and displacing time with a stochastic process. Figure 5 shows a square wave with jitter compared to an ideal signal. The instabilities can eventually cause slips or missed signals that result in the loss of radar echoes.
Because BiSAR is a coherent system, to complete the coherent accumulation in azimuth, the signals of the same range but different azimuths must keep a stable phase relationship; that is, the interval between the echo window and the PRF (pulse repetition frequency) of the receiver system should be a fixed value. But once there is clock timing jitter, the start time of the echo sampling window changes, producing a time difference between the echo sampling window (or PRI, pulse repetition interval) and the real echo signal, as shown in Fig. 5. Consequently, the phase relation of the sampled data is destroyed.
To find an analytical expression for the impact of time synchronization errors on BiSAR images, we suppose the transmitted radar signal is
where the first factor is the window function of pulse duration T_p, ω₀ is the carrier angular frequency, and k_r is the chirp rate. Let Δt denote the time synchronization error of BiSAR; the radar echo from a scatterer is then given by

where the first term is the range sampling window centered on the echo delay, c is the speed of light, and τ is the delay corresponding to the time it takes the signal to travel the transmitter-target-receiver distance.
Considering only the time synchronization error (phase synchronization errors are ignored here, as they are compensated separately), we can obtain the demodulated signal as
Suppose the range reference signal is
The signal, after range compression, can be expressed as
where B_r is the radar signal bandwidth and Δr is the range drift due to time synchronization errors.
From Eq. (24) we notice that if the two clocks deviate too much, the radar echoes will be lost due to the drift of the echo sampling window. Fortunately, such a case hardly occurs for current radars. Hence we consider only the case in which each echo is successfully received but is drifted because of clock timing jitter. In other words, the collected data with the same range but different azimuths are no longer at the same range. As an example, Fig. 6(a) illustrates one typical prediction of time synchronization error. From Fig. 6(b) we can conclude that time synchronization errors result in unfocused images, drift of the radar echoes, and displacement of targets. To focus BiSAR raw data, time synchronization compensation techniques must therefore be applied.
Notice that the requirement of frequency stability may vary with the application. Image generation with BiSAR requires frequency coherence for at least the coherent integration time. For interferometric SAR (InSAR) (Muellerschoen et al., 2006), however, this coherence has to be extended over the whole processing time (Eineder, 2003).
3. Direct-path signal-based synchronization approach
A time and phase synchronization approach via direct-path signal was proposed in (Wang et al., 2008). In this approach, the direct-path signal of the transmitter is received with a dedicated antenna and divided into two channels: one is passed through an envelope detector and used to synchronize the sampling clock, and the other is down-converted and used to compensate the phase synchronization error. Finally, the residual time synchronization error is compensated with range alignment, and the residual phase synchronization error is compensated with GPS (global positioning system)/INS (inertial navigation system)/IMU (inertial measurement unit) information; the focusing of the BiSAR image may then be achieved.
3.1. Time synchronization
As concluded previously, if time is strictly synchronized, the interval between the echo window and the PRF (pulse repetition frequency) of the receiver would be a fixed value, preserving a stable phase relationship. But once there is a time synchronization error, the start time of the echo sampling window changes, producing a time difference between the echo sampling window (or PRI, pulse repetition interval) and the real echo signal. As a consequence, the phase relation of the sampled data would be destroyed.
It is well known that, for monostatic SAR, the azimuth processing operates on echoes that come from target points at equal range. Because time synchronization errors (phase synchronization errors are not considered here, as they are compensated separately in subsequent phase synchronization processing) have no effect on the initial phase of each echo, time synchronization errors can be compensated separately with range alignment. Here the spatial-domain realignment (Chen & Andrews, 1980) is used. That is, consider the magnitudes of the recorded complex echoes from two adjacent pulses, one PRI apart, as functions of range; the second is a misaligned copy of the first, and the amount of misalignment is what we would like to estimate. Define a correlation function between the two magnitude waveforms as
From the Schwartz inequality, this correlation is maximal at the true misalignment, which can thus be determined. Note that other range alignment methods may also be adopted, such as frequency-domain realignment, recursive alignment (Delisle & Wu, 1994), and minimum entropy alignment. Another note is that sensor motion errors will also cause drift of the echo envelope, which can be corrected with motion compensation algorithms. When the transmitter and receiver move along non-parallel trajectories, the range changes of the normal channel and the synchronization channel must be compensated separately. This compensation can be achieved with motion sensors combined with effective image formation algorithms.
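A minimal sketch of the spatial-domain realignment, assuming integer-sample shifts and using an FFT-based circular cross-correlation (the profile data are synthetic):

```python
import numpy as np

def estimate_range_shift(prev_mag, curr_mag):
    """Estimate the integer-sample misalignment of curr_mag relative to
    prev_mag by locating the peak of their circular cross-correlation,
    computed via FFT (spatial-domain realignment, magnitudes only)."""
    n = len(prev_mag)
    corr = np.fft.ifft(np.fft.fft(prev_mag) * np.conj(np.fft.fft(curr_mag))).real
    shift = int(np.argmax(corr))
    if shift > n // 2:            # map to the signed shift range
        shift -= n
    return -shift                 # positive: curr is delayed w.r.t. prev

# toy check: a magnitude profile delayed by 7 range cells
rng = np.random.default_rng(1)
cells = np.arange(256)
profile = np.abs(rng.standard_normal(256)) + 5 * np.exp(-0.5 * ((cells - 100) / 3) ** 2)
delayed = np.roll(profile, 7)
shift = estimate_range_shift(profile, delayed)
```

Sub-sample shifts would require interpolating the correlation peak, and the frequency-domain or minimum-entropy variants mentioned above can be substituted for the correlation step.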
3.2. Phase synchronization
After time synchronization compensation, the primary causes of phase errors are uncompensated target or sensor motion and residual phase synchronization errors. In practice, the direct-path receiver can be regarded as a strong scatterer in the process of phase compensation. To the degree that the motion sensors are able to measure the relative motion between the targets and the SAR sensor, the image formation processor can eliminate undesired motion effects from the collected signal history with GPS/INS/IMU and autofocus algorithms. This procedure is motion compensation, which is ignored here since it is beyond the scope of this chapter. Thereafter, the focusing of the BiSAR image can be achieved with autofocus image formation algorithms, e.g., (Wahl et al., 1994).
Suppose the i-th transmitted pulse at the carrier frequency is
where φ₀ is the original phase, and the radar signal in baseband is
where f_d is the Doppler frequency of the i-th transmitted pulse. Suppose the demodulating signal in the receiver is
Hence, the received signal in baseband is
Suppose the range reference function is
Range compression yields
We notice that the maximum will be at the matched delay, where we have
Hence, the residual phase term in Eq. (33) is
In a like manner, we have
where the two quantities denote the original Doppler frequency and the error-free demodulating frequency in the receiver, respectively.
Accordingly, the corresponding differences are the frequency errors for the i-th pulse. Hence, we have
Generally, these frequency errors are very small, so the resulting phase term is found to be a small fraction of a radian, which has negligible effect. Furthermore, since the geometry terms can be obtained from GPS/INS/IMU, Eq. (39) can be simplified into
From Eq. (41) we can obtain the phase synchronization error, so the phase synchronization compensation for the reflected channel can be achieved with this method. Notice that the remaining motion compensation errors are usually low-frequency phase errors, which can be compensated with autofocus image formation algorithms.
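The per-pulse phase extraction can be sketched as follows: range-compress the direct-path pulse against a reference replica, read the phase at the correlation peak, and subtract the phase predicted from GPS/INS/IMU. The chirp parameters in the toy check are hypothetical:

```python
import numpy as np

def phase_sync_error(direct_pulse, ref_pulse, predicted_phase):
    """Per-pulse phase synchronization error from the direct-path
    channel: range-compress against a reference replica, take the phase
    at the correlation peak, and remove the Doppler/geometry phase
    predicted from GPS/INS/IMU. Returns a value wrapped to (-pi, pi]."""
    comp = np.fft.ifft(np.fft.fft(direct_pulse) * np.conj(np.fft.fft(ref_pulse)))
    peak = int(np.argmax(np.abs(comp)))       # range-compressed peak
    measured_phase = np.angle(comp[peak])
    return float(np.angle(np.exp(1j * (measured_phase - predicted_phase))))

# toy check: a hypothetical chirp received with an extra 0.3 rad offset
t = np.linspace(0.0, 1e-5, 1024, endpoint=False)
ref = np.exp(1j * np.pi * 1e10 * t**2)        # assumed chirp rate
rx = ref * np.exp(1j * 0.3)
err = phase_sync_error(rx, ref, predicted_phase=0.0)
```

The estimate is wrapped modulo 2π, so slowly varying residuals are assumed; rapid pulse-to-pulse jumps would need unwrapping across pulses.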
In summary, the time and phase synchronization compensation process may include the following steps:
Step 1, extract one pulse from the direct-path channel as the range reference function;
Step 2, direct-path channel range compression;
Step 3, estimate time synchronization errors with range alignment;
Step 4, direct-path channel motion compensation;
Step 5, estimate phase synchronization errors from direct-path channel;
Step 6, reflected channel time synchronization compensation;
Step 7, reflected channel phase synchronization compensation;
Step 8, reflected channel motion compensation;
Step 9, BiSAR image formation.
4. GPS signal disciplined synchronization approach
For the direct-path signal-based synchronization approach, the receiver must fly at a sufficient altitude and position to maintain line-of-sight contact with the transmitter. To get around this disadvantage, a GPS signal disciplined synchronization approach is investigated in (Wang, 2009).
4.1. System architecture
Because of their excellent long-term frequency accuracy, GPS-disciplined rubidium oscillators are widely used as standards of time and frequency. Here, a crystal oscillator is selected instead of rubidium because of the superior short-term accuracy of the crystal. As such, high-quality space-qualified 10 MHz quartz crystal oscillators are chosen as the ultra-stable oscillator (USO); they offer excellent short-term stability and frequency accuracy. In addition to good timekeeping ability, these oscillators show low phase noise.
As shown in Fig. 7, the transmitter/receiver contains the high-performance quartz crystal oscillator, a direct digital synthesizer (DDS), and a GPS receiver. The antenna collects the GPS L1 (1575.42 MHz) signals and, if dual-frequency capable, the L2 (1227.60 MHz) signals. The radio frequency (RF) signals are filtered through a preamplifier, then down-converted to intermediate frequency (IF). The IF section provides additional filtering and amplification of the signal to levels more amenable to signal processing. The GPS signal processing component implements most of the core functions of the receiver, including signal acquisition, code and carrier tracking, demodulation, and extraction of the pseudo-range and carrier phase measurements. The details can be found in many textbooks on GPS (Parkinson & Spilker, 1996).
The USO is disciplined by the output pulse-per-second (PPS), and frequency trimmed by varactor-diode tuning, which allows a small amount of frequency control on either side of the nominal value. Next, a narrow-band high-resolution DDS is applied, which allows the generation of various frequencies with extremely small step size and high spectral purity. This technique combines the advantages of the good short-term stability of high quality USO with the advantages of GPS signals over the long term. When GPS signals are lost, because of deliberate interference or malfunctioning GPS equipment, the oscillator is held at the best control value and free-runs until the return of GPS allows new corrections to be calculated.
4.2. Frequency synthesis
Since a DDS is far from being an ideal source, its noise floor and spurs would be transferred to the output and amplified by N² in power (where N denotes the frequency multiplication factor). To overcome this limit, we mix it with the USO output instead of using the DDS as a reference directly. Figure 8 shows the architecture of a DDS-driven PLL synthesizer. The frequency of the sinewave output of the USO is 10 MHz plus a drift Δf, which is fed into a double-balanced mixer. The other input port of the mixer receives the filtered sinewave output of the DDS, adjusted to the appropriate offset frequency. The mixer outputs an upper and a lower sideband carrier. The desired lower sideband is selected by a 10 MHz crystal filter; the upper sideband and any remaining carriers are rejected. This is the simplest method of single-sideband frequency generation.
The DDS output frequency is determined by its clock frequency and the tuning word written to its M-bit register, where M is the register length. The tuning word is added to an accumulator at each clock update, and the resulting phase ramp feeds a sinusoidal look-up table followed by a DAC (digital-to-analog converter) that generates discrete steps at each update, following the sinewave form. The DDS output frequency is then (Vankka, 2005)
Clearly, for the smallest frequency step we need to use a low clock frequency; but the lower the clock frequency, the harder it becomes to filter the clock components from the DDS output. As a good compromise, we use a clock of about 1 MHz, obtained by dividing the nominal 10 MHz USO output by 10. Then, with a 48-bit register (M = 48), the approximate frequency resolution of the DDS output is 1 MHz/2⁴⁸ ≈ 3.6 nHz. This frequency is subtracted from the output frequency of the USO. The minimum frequency step of the frequency corrector is therefore about 3.6 nHz, a fractional step of roughly 3.6 × 10⁻¹⁶ at 10 MHz. Thereafter, the DDS may be controlled over a much larger frequency range with the same resolution while removing the USO calibration errors. Thus, we can find an exact value of the 48-bit DDS tuning word that corrects the drift to zero by measuring our PPS, divided down from the 10 MHz output, against the PPS from the GPS receiver.
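The resolution arithmetic can be reproduced directly; the divide-by-10 clock and the 48-bit register follow the description above:

```python
# Frequency resolution of the DDS-based corrector described above.
f_clk = 10e6 / 10          # 1 MHz DDS clock: the 10 MHz USO output divided by 10
M = 48                     # register length, bits
resolution = f_clk / 2**M  # smallest DDS output step, Hz
fractional = resolution / 10e6   # step relative to the 10 MHz USO output
# resolution ~ 3.6e-9 Hz, i.e. a fractional step of ~3.6e-16
```

Such a fractional step is orders of magnitude below the short-term stability of any quartz USO, so the corrector resolution does not limit the discipline loop.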
However, we face the technical challenge of measuring the time error between the GPS and USO pulse-per-second signals. To overcome this difficulty, we apply a high-precision time interval measurement method. This technique is illustrated in Fig. 9, where the two PPS signals are used to trigger an ADC (analog-to-digital converter) to sample the sinusoid that is directly generated by the USO. Denoting the frequency of this sinusoid by f₀, we have
Similarly, for the second PPS signal, there is
Hence, we can get
where the two quantities denote the calculated clock periods. Since the two periods are nominally identical, we have
To find a general mathematical model, suppose the collected sinewave signal with original phase φ₀ is

Partitioning the samples into two nonoverlapping subsets, we have
Thereby we have
Thus, the time error can be calculated by
Since the parameters involved are all measurable, the time error between the two PPS signals can be obtained from Eq. (50). As an example, for a moderate signal-to-noise ratio (SNR) and sample size, simulations suggest that the RMS (root mean square) measurement accuracy is about 0.1 ps. We have assumed that some parts of the measurement system are ideal; hence, there may be some variation in actual systems. The performance of single-frequency estimators is detailed in (Kay, 1989).
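One way to realize the sketched measurement, assuming ideal sampling: estimate the initial phase of the sinusoid segment captured after each PPS trigger with a single-bin DFT, then convert the phase difference into a time offset (unambiguous only within one period of the sinusoid). All numbers in the check are assumptions:

```python
import numpy as np

def pps_time_error(seg_a, seg_b, f_sig, fs):
    """Time offset between two trigger events, from the initial phases
    of the USO sinusoid segments sampled after each trigger. A
    single-bin DFT at f_sig estimates each phase; the result is
    unambiguous only within one period of the sinusoid."""
    n = np.arange(len(seg_a))
    ref = np.exp(-2j * np.pi * f_sig / fs * n)
    phase_a = np.angle(np.sum(seg_a * ref))
    phase_b = np.angle(np.sum(seg_b * ref))
    dphi = np.angle(np.exp(1j * (phase_b - phase_a)))   # wrapped difference
    return float(dphi / (2 * np.pi * f_sig))

# toy check with assumed numbers: 10 MHz sinusoid, 200 MHz sampling,
# second trigger late by 2 ns
fs, f_sig, n_samp = 200e6, 10e6, 2000
t = np.arange(n_samp) / fs
dt_true = 2e-9
seg_a = np.cos(2 * np.pi * f_sig * t)
seg_b = np.cos(2 * np.pi * f_sig * (t + dt_true))
dt_est = pps_time_error(seg_a, seg_b, f_sig, fs)
```

With noise added, the accuracy degrades gracefully with SNR and segment length, in line with the single-frequency estimator bounds in (Kay, 1989).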
Finally, time and phase synchronization can be achieved by generating all needed frequencies by dividing, multiplying or phase-locking to the GPS-disciplined USO at the transmitter and receiver.
4.3. Residual synchronization errors compensation
Because GPS-disciplined USOs are adjusted to agree with GPS signals, they are self-calibrating standards. Even so, differences in the PPS fluctuations will be observed because of uncertainties in the satellite signals and the measurement process in the receiver (Cheng et al., 2005). With modern commercial GPS units, which use the L1-signal at 1575.42MHz, a standard deviation of 15ns may be observed. Using differential GPS (DGPS) or GPS common-view, one can expect a standard deviation of less than 10ns. When GPS signals are lost, the control parameters will stay fixed, and the USO enters a so-called free-running mode, which further degrades synchronization performance. Thus, the residual synchronization errors must be further compensated for BiSAR image formation.
Differences in the PPS fluctuations will result in linear phase synchronization errors within one synchronization period, i.e., one second. Even though the USO used here has good short-term timekeeping ability, frequency drift may be observed within one second; these errors can be modeled as quadratic phases. We model the residual phase errors in the i-th second as
Motion compensation is ignored here because it can be addressed with motion sensors. Thus, after time synchronization compensation, the next step is residual phase error compensation, i.e., autofocus processing.
We use the Mapdrift autofocus algorithm described in (Mancill & Swiger, 1981). Here, the Mapdrift technique divides the i-th second of data into two nonoverlapping subapertures, each with a duration of 0.5 seconds. This concept uses the fact that a quadratic phase error across one second (one synchronization period) has a different functional form across the two half-length subapertures, as shown in Fig. 10 (Carrara et al., 1995). The phase error across each subaperture consists of a quadratic component, a linear component, and an inconsequential constant component. The quadratic phase components of the two subapertures are identical, and the linear phase components have identical magnitudes but opposite slopes. Partitioning the i-th second of azimuthal data into two nonoverlapping subapertures, the phase is approximately linear throughout each subaperture.
Similarly, for the second subaperture we have
After applying a Fourier transform, we get
where the first factor denotes the error-free cross-correlation spectrum. The relative shift between the two subapertures is directly proportional to the quadratic coefficient in Eq. (51).
Next, various methods are available to estimate this shift. The most common method is to measure the peak location of the cross-correlation of the two subaperture images.
After compensating for the quadratic phase errors in each second, Eq. (51) can be changed into
Applying the Mapdrift procedure described above to the i-th and (i+1)-th seconds of data, the coefficients in Eq. (58) can be derived. Define a mean value operator as
Hence, we can get
Then the coefficients in Eq. (51) can be derived; i.e., the residual phase errors can be successfully compensated. This process is shown in Fig. 11.
Notice that a typical implementation applies the algorithm to only a small subset of available range bins, based on peak energy. An average of the individual estimates of the error coefficient from each of these range bins provides a final estimate. This procedure naturally reduces the computational burden of this algorithm. The range bins with the most energy are likely to contain strong, dominant scatterers with high signal energy relative to clutter energy. The signatures from such scatterers typically show high correlation between the two subaperture images, while the clutter is poorly correlated between the two images.
It is common practice to apply this algorithm iteratively. On each iteration, the algorithm forms an estimate and applies it to the input signal data. Typically, two to six iterations are sufficient to yield an accurate error estimate that does not change significantly on subsequent iterations. Iteration greatly improves the accuracy of the final error estimate for two reasons. First, iteration enhances the algorithm's ability to identify and discard range bins that, for one reason or another, provide anomalous estimates for the current iteration. Second, the improved focus of the image data after each iteration results in a narrower cross-correlation peak, which leads to a more accurate determination of its location. Notice that the Mapdrift algorithm can be extended to estimate higher-order phase errors by dividing the azimuthal signal history in one second into more than two subapertures. Generally speaking, N subapertures are adequate to estimate the coefficients of an Nth-order polynomial error. However, decreased subaperture length degrades both the resolution and the signal-to-noise ratio of the targets in the images, which degrades estimation performance.
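A single Mapdrift iteration for a quadratic error can be sketched as follows; the signal is a synthetic point-target history, and the bin-to-coefficient conversion assumes the one-second aperture/half-second subaperture geometry described above:

```python
import numpy as np

def mapdrift_quadratic(signal, T):
    """One Mapdrift iteration (sketch): split a T-second azimuth history
    with phase error a*t**2 into two half apertures, cross-correlate
    their magnitude spectra ("maps"), and convert the spectral drift
    into the quadratic coefficient a. The subaperture centers are T/2
    apart, so their frequency offset is a*T/(2*pi) Hz; one FFT bin of a
    T/2-long subaperture is 2/T Hz, giving a = 4*pi*drift_bins/T**2."""
    n = len(signal) // 2
    map1 = np.abs(np.fft.fft(signal[:n]))
    map2 = np.abs(np.fft.fft(signal[n:2 * n]))
    corr = np.fft.ifft(np.fft.fft(map1) * np.conj(np.fft.fft(map2))).real
    shift = int(np.argmax(corr))
    if shift > n // 2:
        shift -= n
    drift_bins = -shift
    return 4 * np.pi * drift_bins / T**2

# toy check: synthetic point-target history with a known quadratic error
T, N = 1.0, 256
t = np.arange(N) * T / N
a_true = 32 * np.pi          # chosen so the drift is a whole number of bins
sig = np.exp(1j * a_true * t**2)
a_est = mapdrift_quadratic(sig, T)
```

In practice the two "maps" would be subaperture images of strong range bins rather than raw spectra, and the estimate would be averaged over bins and refined over iterations as described above.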
5. Conclusion

Although the feasibility of airborne BiSAR has been demonstrated by experimental investigations using rather steep incidence angles, resulting in relatively short synthetic aperture times of only a few seconds, the time and phase synchronization of the transmitter and receiver remain technical challenges. In this chapter, using an analytical model of phase noise, the impacts of time and phase synchronization errors on BiSAR imaging were derived. Two synchronization approaches, direct-path signal-based and GPS signal disciplined, were investigated, along with the compensation of the corresponding residual synchronization errors.
One remaining factor needed for the realization and implementation of BiSAR is spatial synchronization. Digital beamforming by the receiver is a promising solution. Combining the recorded subaperture signals in many different ways introduces high flexibility in the BiSAR configuration, and makes effective use of the total signal energy in the large illuminated footprint.