Open access peer-reviewed chapter

Bistatic Synthetic Aperture Radar Synchronization Processing

By Wen-Qin Wang

Published: January 1st 2010

DOI: 10.5772/7184


1. Introduction

Bistatic synthetic aperture radar (BiSAR), which operates with a separate transmitter and receiver mounted on different platforms (Cherniakov & Nezlin, 2007), will play a great role in future radar applications (Krieger & Moreira, 2006). A BiSAR configuration brings many benefits in comparison with monostatic systems, such as the exploitation of additional information contained in the bistatic reflectivity of targets (Eigel et al., 2000; Burkholder et al., 2003), improved flexibility (Loffeld et al., 2004), reduced vulnerability (Wang & Cai, 2007), and forward-looking SAR imaging (Ceraldi et al., 2005). These advantages could be worthwhile, e.g., for mapping topographic features, surficial deposits, and drainage, and for showing the relationships between forest, vegetation, and soils. Even for objects that show a low radar cross section (RCS) in monostatic SAR images, one can find a distinct bistatic angle that increases their RCS and makes these objects visible in BiSAR images. Furthermore, a BiSAR configuration allows a passive receiver, operating at close range, to receive data reflected from potentially hostile areas. This passive receiver may be teamed with a transmitter at a safe place, or may make use of opportunistic illuminators such as television and radio transmitters or even unmanned vehicles (Wang, 2007a).

However, BiSAR is subject to problems and special requirements that are either not encountered or encountered in less serious form in monostatic SAR (Willis, 1991). The biggest technological challenge lies in the synchronization of the two independent radars: time synchronization, i.e., the receiver must precisely know when the transmitter fires (on the order of nanoseconds); spatial synchronization, i.e., the receiving and transmitting antennas must simultaneously illuminate the same spot on the ground; and phase synchronization, i.e., the receiver and transmitter must remain coherent over extremely long periods of time. The most difficult of these is phase synchronization: to obtain a focused BiSAR image, the phase information of the transmitted pulse has to be preserved. In a monostatic SAR, the co-located transmitter and receiver use the same stable local oscillator (STALO), so the phase can decorrelate only over very short periods of time (about 1×10⁻³ s). In contrast, in a BiSAR system the transmitter and receiver fly on different platforms and use independent master oscillators, so there is no phase noise cancellation. The superimposed phase noise corrupts the received signal over the whole synthetic aperture time. Moreover, any phase noise (instability) in the master oscillator is magnified by frequency multiplication. As a consequence, the phase noise requirements imposed on the oscillators of BiSAR are much more stringent than in the monostatic case. For indirect phase synchronization using identical STALOs in the transmitter and receiver, phase stability is required over the whole coherent integration time. Even if the tolerance on low-frequency or quadratic phase synchronization errors can be relaxed to 90°, the required phase stability is achievable only with ultra-high-quality oscillators (Weiß, 2004). Moreover, circumstances are aggravated for airborne platforms: because of the different platform motions, the phase stability is further degraded.

Although multiple BiSAR image formation algorithms have been developed (Wang et al., 2006), BiSAR synchronization aspects have seen much less development, at least in the open literature. The requirement of phase stability in BiSAR was first discussed in (Auterman, 1984) and further investigated in (Krieger et al., 2006; Krieger & Younis, 2006), which conclude that uncompensated phase noise may cause a time-variant shift, spurious sidelobes, and a deterioration of the impulse response, as well as a low-frequency phase modulation of the focused SAR signal. The impact of frequency synchronization error in spaceborne parasitic interferometric SAR is analyzed in (Zhang et al., 2006), and an estimation of the oscillator's phase offset in bistatic interferometric SAR is investigated in (Ubolkosold et al., 2006). In a like manner, linear and random time synchronization errors are discussed in (Zhang et al., 2005).

As a consequence of these difficulties, there is a lack of practical synchronization techniques for BiSAR. Because its application is of great scientific and technological interest, several authors have proposed potential synchronization techniques or algorithms, such as ultra-high-quality oscillators (Gierull, 2006), a direct exchange of radar pulses (Moreira et al., 2004), a ping-pong interferometric mode for fully active systems (Evans, 2002), and an appropriate bidirectional link (Younis et al., 2006a; Younis et al., 2006b; Eineder, 2003). The remaining practical task is to develop a workable synchronization technique that requires little alteration to existing radars.

This chapter concentrates on general BiSAR synchronization and aims at the development of a practical solution for the time and phase synchronization aspects without much alteration to existing radars. The remaining sections are organized as follows. In Section 2, the impact of synchronization errors on BiSAR systems is analyzed using analytical models; the conclusion is that synchronization compensation techniques must be applied to focus BiSAR raw data. Possible time synchronization and phase synchronization approaches are then investigated in Sections 3 and 4, respectively. Finally, Section 5 concludes the chapter with some possible future work.

2. Impact of synchronization errors on BiSAR systems

2.1. Fundamental of phase noise

The instantaneous output voltage of a signal generator or oscillator, V(t), is (Lance et al., 1984)

V(t) = [V_o + δε(t)] · sin[2πν_o t + φ_o + δφ(t)]    (1)

where V_o and ν_o are the nominal amplitude and frequency, respectively, φ_o is a start phase, and δε(t) and δφ(t) are the fluctuations of the signal amplitude and phase, respectively. Notice that, here, we have assumed that (Wang et al., 2006)

|δε(t)/V_o| ≪ 1,    |δφ′(t)/(2πν_o)| ≪ 1    (2)
The spectral density of phase fluctuations on a per-hertz basis, S_φ(f), is the measure most widely used to describe the random character of frequency stability; it quantifies the instantaneous time shifts, or time jitter, that are inherent in signals produced by signal generators or added to a signal as it passes through a system (Walls & Vig, 1995). Although an oscillator's phase noise is a complex interaction of variables, ranging from its atomic composition to its physical environment, a piecewise polynomial representation of an oscillator's phase noise exists and is expressed as (Rutman, 1978)


S_φ(f) = Σ_{α=−2}^{2} h_α · f^(α−2)    (3)

where the coefficients h_α describe the different contributions of phase noise, and f represents the phase fluctuation frequency. As modeled in Eq. (3), the contributions can be attributed to several physical mechanisms: random walk frequency noise, flicker frequency noise, white frequency noise, flicker phase noise, and white phase noise. Random walk frequency noise (Vannicola & Varshney, 1983) arises from the oscillator's physical environment (temperature, vibration, shocks, etc.). This contribution can be significant for a moving platform, and it presents design difficulties since laboratory measurements are necessary while the synthesizer is under vibration. White frequency noise originates from additive white thermal noise sources inside the oscillator's feedback loop. Flicker phase noise is generally produced by amplifiers, and white phase noise is caused by additive white noise sources outside the oscillator's feedback loop (Donald, 2002).

In engineering, under the condition that the phase fluctuations occurring at rate f are small compared with 1 rad, a good approximation is

L(f) ≈ S_φ(f)/2    (4)

where L(f) is defined as the ratio of the power in one sideband, referred to the input carrier frequency on a per-hertz-of-bandwidth spectral density basis, to the total signal power, at Fourier frequency f from the carrier, per device.

2.2. Model of phase noise

One cannot simulate phase noise without a model for it. In (Hanzo et al., 2000), a white phase noise model is discussed, but it cannot describe the statistical process of phase noise. In (Foschini & Vannucci, 1988), a Wiener phase noise model is discussed, but it cannot describe low-frequency phase noise, since this part of the phase noise is a nonstationary process. As different phase noise brings different effects on BiSAR (see Fig. 1), the practical problem is how to develop a useful and comprehensive model of frequency instability that can be understood and applied in BiSAR processing. Unfortunately, Eq. (3) is a frequency-domain expression and is not convenient for analyzing its impact on BiSAR. We have therefore proposed an analytical model of phase noise, shown in Fig. 2. This model uses Gaussian noise as the input of a hypothetical low-pass filter and takes the filter output as the phase noise; that is, the model represents the output of a hypothetical filter with impulse response h(t) receiving an input signal x(t).

Figure 1.

Impacts of various oscillator frequency offsets: (a) constant offset, (b) linear offset, (c) Sinewave offset, (d) random offset.

Figure 2.

Analytical model of phase noise.

It is well known that the power spectral density (PSD) of the output signal is given by the product S_x(f)|H(f)|², where the filter transfer function H(f) is the Fourier transform of h(t). Notice that, here, |H(f)|² must satisfy

|H(f)|² = { S_φ(f),    f_l ≤ |f| ≤ f_h
          { S_φ(f_l),   |f| < f_l
          { 0,           elsewhere        (5)

where a sharp upper cutoff frequency f_h and a sharp lower cutoff frequency f_l are introduced. Notice that time-domain stability measures sometimes depend on f_h and f_l, which must then be given with any numerical result, although no recommendation has been made for these values; here, f_h = 3 kHz and f_l = 0.01 Hz are adopted. Thereby, the PSD of the phase noise in Fig. 2 can be expressed analytically as

S_φ(f) = K · S_x(f)|H(f)|²    (6)

where S_x(f) is the PSD of the Gaussian noise at the filter input and K is a constant. An inverse Fourier transform then yields

φ(t) = √K · [x(t) ⊗ h(t)]    (7)

where φ(t) and ⊗ denote the phase noise in the time domain and a convolution, respectively.
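As an illustration, the filtering model of Fig. 2 can be sketched numerically: white Gaussian noise is shaped in the frequency domain by |H(f)|² from Eq. (5). The power-law PSD coefficients below are illustrative placeholders rather than the parameters of any particular oscillator, and the constant K is absorbed into the PSD.

```python
import numpy as np

def simulate_phase_noise(psd, fs, n, f_l=0.01, f_h=3e3, seed=0):
    """Generate one phase-noise realization by spectrally shaping
    white Gaussian noise with |H(f)|^2 = S_phi(f) on [f_l, f_h],
    clamped to S_phi(f_l) below f_l and zero above f_h (Eq. (5))."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)                 # white Gaussian input
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    H2 = np.zeros_like(f)
    band = (f >= f_l) & (f <= f_h)
    H2[band] = psd(f[band])
    H2[(f > 0) & (f < f_l)] = psd(f_l)         # flat below the lower cutoff
    return np.fft.irfft(X * np.sqrt(H2), n=n)  # time-domain phase noise

# illustrative power-law PSD (rad^2/Hz); coefficients are placeholders
S_phi = lambda f: 1e-12 / f**3 + 1e-11 / f**2 + 1e-10 / f + 1e-13
phi = simulate_phase_noise(S_phi, fs=8e3, n=2**16)
```

Averaging the periodogram of many such realizations should reproduce the prescribed S_φ(f) inside [f_l, f_h], which is a useful sanity check on the filter normalization.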

2.3. Impact of phase synchronization errors

Since only the STALO phase noise is of interest, the modulation waveform used for range resolution can be ignored and the radar can be simplified into an azimuth-only system (Auterman, 1984). Suppose the transmitted signal is a sinusoid whose phase argument is

ψ_T(t) = 2πν_T t + M·φ_T(t)    (8)

The first term is due to the carrier frequency, the second term is the STALO phase noise, and M is the ratio of the carrier frequency to the STALO frequency. After reflection from a target, the received signal phase is that of the transmitted signal delayed by the round-trip time τ. The receiver output signal phase ϕ(t) results from demodulating the received signal with the receiver STALO, which has the same form as the transmitter STALO

ψ_R(t) = 2πν_R t + M·φ_R(t)    (9)
Hence we have

ϕ(t) = 2π(ν_T − ν_R)t − 2πν_T τ + M[φ_T(t − τ) − φ_R(t)]    (10)
The first term is a frequency offset arising from non-identical STALO frequencies, which shifts the focused image; because this shift can easily be corrected using a ground calibrator, it is ignored here. The second term forms the usual Doppler term as the round-trip time to the target varies, and it should be preserved. The last term represents the effect of STALO frequency instability, which is of interest here. Generally, a typical STALO used in current SAR has a frequency accuracy δf/f of 10⁻⁹ per second or better (Weiß, 2004). As a typical example, consider an X-band airborne BiSAR with a swath of 6 km and the following parameters: the radar carrier frequency is 1×10¹⁰ Hz, the speed of light is 3×10⁸ m/s, and the round-trip distance from radar to target is 12000 m, so that τ = 4×10⁻⁵ s. The phase error in fast time is then found to be

2π · (10¹⁰ × 10⁻⁹) · 4×10⁻⁵ ≈ 2.5×10⁻³ rad    (11)

which has negligible effects on the synchronization phase. Hence, we have the approximation

φ_T(t − τ) ≈ φ_T(t)    (12)

That is to say, the phase noise of the oscillator in fast time is negligible; we need consider only the phase noise in slow time.

Accordingly, the phase error in BiSAR can be modelled as

φ_e(t) = M[φ_T(t) − φ_R(t)]    (13)
It is assumed that φ_T(t) and φ_R(t) are independent random variables having identical PSD S_φ(f). The PSD of the phase noise in BiSAR is then

S_φe(f) = 2M² · S_φ(f)    (14)

where the factor 2 arises from the addition of two uncorrelated but identical PSDs. This holds when the effective division ratio in the frequency synthesizer is exactly a small integer fraction. In other instances, an experimental formula is (Kroupa, 1996)


Take a 10 MHz STALO as an example, with the phase noise parameters listed in Table 1. This STALO can be regarded as representative of the ultra-stable oscillators used in current airborne SAR systems. Predicted phase errors are shown in Fig. 3 for a time interval of 10 s. Moreover, the impact of phase noise on BiSAR, compared with the ideal azimuth compression results, can be found in Fig. 4(a). We can conclude that oscillator phase instabilities in BiSAR manifest themselves as a deterioration of the impulse response function. It is also evident that oscillator phase noise may not only defocus the SAR image but also introduce significant positioning errors along the scene extension.

Furthermore, it is known that high-frequency phase noise causes spurious sidelobes in the impulse response function. This deterioration can be characterized by the integrated sidelobe ratio (ISLR), which measures the transfer of signal energy from the mainlobe to the sidelobes. For an azimuth integration time T_s, the ISLR contribution due to phase errors can be computed in dB as

Frequency, Hz | 1 | 10 | 100 | 1 k | 10 k

Table 1.

Phase noise parameters of one typical STALO.

Figure 3.

Simulation results of oscillator phase instabilities with ten realisations: (a) predicted phase noise over 10 s in X-band (the linear phase ramp corresponding to a frequency offset has been removed). (b) predicted high-frequency (cubic and higher) phase errors.

Figure 4.

Impacts of phase noise on BiSAR systems: (a) impact of predicted phase noise in azimuth. (b) impact of integrated sidelobe ratio in X-band.

A typical requirement for the maximum tolerable ISLR is −20 dB, which enables a maximum coherent integration time T_s of 2 s in this example, as shown in Fig. 4(b). This result coincides with that of (Krieger & Younis, 2006).

Generally, for f ≤ 10 Hz, the region of interest for SAR operation, L(f) can be modelled as (Willis, 1991)


Note that L_1 is the value of L(f) at f = 1 Hz for a specific oscillator. As the slope of Eq. (17) is so steep, there is


Hence, the deterioration of ISLR may be approximated as


It was concluded in (Willis, 1991) that the error of this approximation is less than 1 dB for T_s ≥ 0.6 s.

2.4. Impact of time synchronization errors

Timing jitter is the term most widely used to describe an undesired perturbation or uncertainty in the timing of events. It is a measure of timing variations and essentially describes how far a signal period has wandered from its ideal value. For BiSAR applications, timing jitter is especially important because it can significantly degrade image quality. Special attention should therefore be paid to the effects of timing jitter in order to predict possible degradations of BiSAR system behavior. Generally speaking, we can model jitter in a signal by starting with a noise-free signal and displacing time with a stochastic process. Figure 5 shows a square wave with jitter compared to an ideal signal. The instabilities can eventually cause slips or missed signals that result in the loss of radar echoes.

Because bistatic SAR is a coherent system, to complete the coherent accumulation in azimuth, the signals of the same range but different azimuths should have the same phase. If time synchronization were strict, the interval between the echo window and the PRF (pulse repetition frequency) of the receiver system would be a fixed value, preserving a stable phase relationship. But once there is clock timing jitter, the start time of the echo sampling window changes, with a varying time difference between the echo sampling window (or PRI, pulse repetition interval) and the real echo signal, as shown in Fig. 5. Consequently, the phase relation of the sampled data is destroyed.

To find an analytical expression for the impact of time synchronization errors on BiSAR images, suppose the transmitted radar signal is

s_t(t) = rect[t/T_r] · exp[j(ω_o t + πγ t²)]    (20)

where rect[·] is the window function, T_r is the pulse duration, ω_o is the carrier angular frequency, and γ is the chirp rate. Let e(t) denote the time synchronization error of BiSAR; the radar echo from a scatterer is then given by


where the first term is the range sampling window centered at R_ref and having a length T_w, c is the speed of light, and τ is the delay corresponding to the time it takes the signal to travel the transmitter-target-receiver distance R_B.

Figure 5.

Impacts of time synchronization error on BiSAR data.

Figure 6.

Impact of time synchronization errors: (a) predicted time synchronization errors in 10 s. (b) impact on BiSAR image for one point target.

Considering only the time synchronization error (phase synchronization is ignored here), we can obtain the demodulated signal as


Suppose the range reference signal is


The signal, after range compression, can be expressed as


where B is the radar signal bandwidth and ΔR = c·e(t) is the range drift caused by time synchronization errors.

From Eq. (24) we notice that if the two clocks deviate greatly, radar echoes will be lost because of the drift of the echo sampling window. Fortunately, such a case hardly occurs with current radars. Hence we consider only the case in which each echo is successfully received but is drifted because of clock timing jitter. In other words, the collected data with the same range but different azimuths are no longer at the same range. As an example, Fig. 6(a) illustrates one typical prediction of time synchronization error. From Fig. 6(b) we can conclude that time synchronization errors result in unfocused images, drift of the radar echoes, and displacement of targets. To focus BiSAR raw data, time synchronization compensation techniques must be applied.
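As a toy numerical check (all parameters below are assumed for illustration), a per-pulse error e in the start of the sampling window shifts the range-compressed peak by about e·f_s samples, i.e., a range drift ΔR = c·e:

```python
import numpy as np

c = 3e8                                  # speed of light, m/s
B, Tr, fs = 50e6, 5e-6, 120e6            # bandwidth, pulse width, sampling rate
gamma = B / Tr                           # chirp rate, Hz/s
t = np.arange(int(Tr * fs)) / fs
ref = np.exp(1j * np.pi * gamma * t**2)  # range reference chirp

def compressed_peak(e):
    """Peak sample index of a range-compressed echo whose sampling
    window opens e seconds late (the timing jitter of one pulse)."""
    n, tau = 2048, 4e-6                  # window length, true echo delay
    ts = np.arange(n) / fs + e           # jittered sampling instants
    u = ts - tau
    echo = np.where((u >= 0) & (u < Tr),
                    np.exp(1j * np.pi * gamma * u**2), 0)
    return int(np.argmax(np.abs(np.convolve(echo, ref.conj()[::-1]))))

drift = compressed_peak(0.0) - compressed_peak(50e-9)
print(drift)   # about 50e-9 * fs = 6 samples of range drift
```

Over many pulses, a random e(t) scatters the peaks across range bins, which is exactly the loss of range alignment described above.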

Notice that the requirement on frequency stability may vary with the application. Image generation with BiSAR requires frequency coherence for at least the coherent integration time. For interferometric SAR (InSAR) (Muellerschoen et al., 2006), however, this coherence has to be extended over the whole processing time (Eineder, 2003).

3. Direct-path signal-based synchronization approach

A time and phase synchronization approach via the direct-path signal was proposed in (Wang et al., 2008). In this approach, the direct-path signal of the transmitter is received with a dedicated antenna and divided into two channels: one is passed through an envelope detector and used to synchronize the sampling clock, while the other is down-converted and used to compensate the phase synchronization error. Finally, the residual time synchronization error is compensated with range alignment, and the residual phase synchronization error is compensated with GPS (Global Positioning System)/INS (inertial navigation system)/IMU (inertial measurement unit) information; the focusing of the BiSAR image may then be achieved.

3.1. Time synchronization

As concluded previously, if time synchronization is strict, the interval between the echo window and the PRF (pulse repetition frequency) of the receiver will be a fixed value, preserving a stable phase relationship. But once there is a time synchronization error, the start time of the echo sampling window changes, with a varying time difference between the echo sampling window (or PRI, pulse repetition interval) and the real echo signal. As a consequence, the phase relation of the sampled data is destroyed.

It is well known that, for monostatic SAR, azimuth processing operates upon echoes that come from target points at equal range. Because time synchronization errors (phase synchronization errors are compensated separately in the subsequent phase synchronization processing) have no effect on the initial phase of each echo, time synchronization errors can be compensated separately with range alignment. Here the spatial-domain realignment (Chen & Andrews, 1980) is used. Let f_t1(r) and f_t2(r) denote the recorded complex echoes from adjacent pulses, where t2 − t1 = Δt is the PRI and r is the range, assumed within one PRI. If we consider only the magnitudes of the echoes, then m_t1(r + Δr) ≈ m_t2(r), where m_t1(r) ≜ |f_t1(r)|. Here Δr is the amount of misalignment, which we would like to estimate. Define a correlation function between the two waveforms m_t1(r) and m_t2(r) as


From the Schwartz inequality, R(s) is maximal at s = Δr, and the amount of misalignment can thus be determined. Note that other range alignment methods may also be adopted, such as frequency-domain realignment, recursive alignment (Delisle & Wu, 1994), and minimum-entropy alignment. Another note is that sensor motion errors will also cause drift of the echo envelope, which can be corrected with motion compensation algorithms. When the transmitter and receiver move along non-parallel trajectories, the range changes of the normal channel and the synchronization channel must be compensated separately. This compensation can be achieved with motion sensors combined with effective image formation algorithms.
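A minimal sketch of this magnitude-correlation alignment, on synthetic range profiles (all parameters are illustrative):

```python
import numpy as np

def estimate_misalignment(f1, f2, max_shift=32):
    """Estimate the misalignment (in range bins) between two
    adjacent-pulse profiles by maximizing the correlation of
    their magnitudes, R(s) = sum_r m1(r + s) * m2(r)."""
    m1, m2 = np.abs(f1), np.abs(f2)
    shifts = np.arange(-max_shift, max_shift + 1)
    corr = [np.sum(np.roll(m1, s) * m2) for s in shifts]
    return int(shifts[np.argmax(corr)])

# toy data: the second pulse's echo is drifted by 7 range bins
rng = np.random.default_rng(1)
base = np.zeros(512)
base[200:220] = np.hamming(20)           # one bright scatterer
f1 = base + 0.05 * rng.standard_normal(512)
f2 = np.roll(base, 7) + 0.05 * rng.standard_normal(512)
print(estimate_misalignment(f1, f2))     # recovers the 7-bin drift
```

Working on magnitudes only is what makes the method insensitive to the (separately compensated) phase synchronization errors.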

3.2. Phase synchronization

After time synchronization compensation, the primary causes of phase errors include uncompensated target or sensor motion and residual phase synchronization errors. In practice, the direct-path signal can be regarded as a strong scatterer in the process of phase compensation. To the degree that the motion sensors can measure the relative motion between the targets and the SAR sensors, the image formation processor can eliminate undesired motion effects from the collected signal history with GPS/INS/IMU data and autofocus algorithms. This procedure is motion compensation, which is ignored here since it is beyond the scope of this chapter. Thereafter, the focusing of the BiSAR image can be achieved with autofocus image formation algorithms, e.g., (Wahl et al., 1994).

Suppose the nth transmitted pulse with carrier frequency f_Tn is


where φ_d(n) is the original phase and s(t) is the radar signal in baseband

Let t_dn denote the delay time of the direct-path signal; the received direct-path signal is

where f_dn is the Doppler frequency of the nth transmitted pulse. Suppose the demodulating signal in the receiver is


Hence, the received signal in baseband is

with Δf_n = f_Tn − f_Rn, where φ_d(n) is the term to be extracted to compensate the phase synchronization errors in the reflected signal. A Fourier transform applied to Eq. (30) yields

Suppose the range reference function is


Range compression yields


We notice that the maximum will be at t = t_dn − Δf_n/γ, where we have


Hence, the residual phase term in Eq. (33) is

Since Δf_n and γ are typically on the orders of 1 kHz and 1×10¹³ Hz/s, respectively, πΔf_n²/γ has negligible effects, and Eq. (35) can be simplified into

In a like manner, we have

f_d(n+1) = f_d0 + δf_d(n+1),    f_R(n+1) = f_R0 + δf_R(n+1)    (38)

where f_d0 and f_R0 are the original Doppler frequency and the error-free demodulating frequency in the receiver, respectively.

Accordingly, δf_d(n+1) and δf_R(n+1) are the frequency errors of the (n+1)th pulse. Hence, we have


Generally, δf_d(n+1) + δf_R(n+1) and t_d(n+1) − t_dn are typically on the orders of 10 Hz and 10⁻⁹ s, respectively; then 2π(δf_d(n+1) + δf_R(n+1))(t_d(n+1) − t_dn) is found to be smaller than 2π×10⁻⁸ rad, which has negligible effects. Furthermore, since t_d(n+1) and t_dn can be obtained from GPS/INS/IMU, Eq. (39) can be simplified into

with ψ_e(n) = [ψ(n+1) − ψ(n)] − 2π(f_R0 + f_d0)(t_d(n+1) − t_dn). We then have

From Eq. (41) we can get φ_d(n); the phase synchronization compensation for the reflected channel can then be achieved with this method. Notice that the remaining motion compensation errors are usually low-frequency phase errors, which can be compensated with autofocus image formation algorithms.

In summary, the time and phase synchronization compensation process may include the following steps:

Step 1, extract one pulse from the direct-path channel as the range reference function;

Step 2, direct-path channel range compression;

Step 3, estimate time synchronization errors with range alignment;

Step 4, direct-path channel motion compensation;

Step 5, estimate phase synchronization errors from direct-path channel;

Step 6, reflected channel time synchronization compensation;

Step 7, reflected channel phase synchronization compensation;

Step 8, reflected channel motion compensation;

Step 9, BiSAR image formation.
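The nine steps can be arranged into a processing skeleton such as the one below. Every helper here is a toy placeholder — a hypothetical name standing in for the operation of the corresponding step — so the sketch fixes only the data flow, not the actual estimators:

```python
import numpy as np

# Toy placeholders: each stands in for the step named in its comment.
extract_reference_pulse = lambda dp: dp[0]
range_compress = lambda ch, ref: [np.convolve(p, ref.conj()[::-1]) for p in ch]
estimate_time_errors = lambda rc: [int(np.argmax(np.abs(p)) - np.argmax(np.abs(rc[0]))) for p in rc]
motion_compensate = lambda ch, nav: ch                      # would use GPS/INS/IMU data
estimate_phase_errors = lambda rc: [float(np.angle(p[np.argmax(np.abs(p))])) for p in rc]
compensate_time = lambda ch, err: [np.roll(p, -e) for p, e in zip(ch, err)]
compensate_phase = lambda ch, err: [p * np.exp(-1j * e) for p, e in zip(ch, err)]
form_image = lambda ch: np.vstack(ch)                       # stands in for azimuth focusing

def bisar_sync_pipeline(direct_path, reflected, nav):
    ref = extract_reference_pulse(direct_path)   # Step 1: range reference function
    dp = range_compress(direct_path, ref)        # Step 2: direct-path range compression
    te = estimate_time_errors(dp)                # Step 3: range alignment
    dp = motion_compensate(dp, nav)              # Step 4: direct-path motion compensation
    pe = estimate_phase_errors(dp)               # Step 5: phase-error extraction
    rf = compensate_time(reflected, te)          # Step 6: reflected-channel time sync
    rf = compensate_phase(rf, pe)                # Step 7: reflected-channel phase sync
    rf = motion_compensate(rf, nav)              # Step 8: reflected-channel motion comp.
    return form_image(rf)                        # Step 9: BiSAR image formation

pulse = np.exp(1j * np.pi * 0.01 * np.arange(64) ** 2)      # toy chirp
img = bisar_sync_pipeline([pulse] * 3, [pulse] * 3, nav=None)
```

The point of the skeleton is the ordering: time errors are estimated on the range-compressed direct path before phase extraction, and both corrections are applied to the reflected channel before motion compensation and imaging.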

4. GPS signal disciplined synchronization approach

For the direct-path signal-based synchronization approach, the receiver must fly at a sufficient altitude and position to maintain line-of-sight contact with the transmitter. To get around this disadvantage, a GPS signal disciplined synchronization approach is investigated in (Wang, 2009).

4.1. System architecture

Because of their excellent long-term frequency accuracy, GPS-disciplined rubidium oscillators are widely used as standards of time and frequency. Here, a crystal oscillator is selected instead of rubidium because of the superior short-term accuracy of the crystal. High-quality, space-qualified 10 MHz quartz crystal oscillators are therefore chosen, with a typical short-term stability of σ_Allan(Δt = 1 s) = 10⁻¹² and an accuracy of σ_rms(Δt = 1 s) = 10⁻¹¹. In addition to good timekeeping ability, these oscillators show low phase noise.

As shown in Fig. 7, the transmitter/receiver contains a high-performance quartz crystal oscillator, a direct digital synthesizer (DDS), and a GPS receiver. The antenna collects the GPS L1 (1575.42 MHz) signals and, if dual-frequency capable, the L2 (1227.60 MHz) signals. The radio frequency (RF) signals are filtered through a preamplifier, then down-converted to

Figure 7.

Functional block diagram of time and phase synchronization for BiSAR using GPS disciplined USOs.

intermediate frequency (IF). The IF section provides additional filtering and amplification of the signal to levels more amenable to signal processing. The GPS signal processing component features most of the core functions of the receiver, including signal acquisition, code and carrier tracking, demodulation, and extraction of the pseudo-range and carrier phase measurements. The details can be found in many textbooks on GPS (Parkinson & Spilker, 1996).

The USO is disciplined by the output pulse-per-second (PPS), and its frequency is trimmed by varactor-diode tuning, which allows a small amount of frequency control on either side of the nominal value. Next, a narrow-band, high-resolution DDS is applied, which allows the generation of various frequencies with extremely small step size and high spectral purity. This technique combines the good short-term stability of a high-quality USO with the long-term accuracy of GPS signals. When GPS signals are lost because of deliberate interference or malfunctioning GPS equipment, the oscillator is held at the best control value and free-runs until the return of GPS allows new corrections to be calculated.

4.2. Frequency synthesis

Since a DDS is far from an ideal source, its noise floor and spurs will be transferred to the output and amplified by N² in power (N denotes the frequency multiplication factor). To overcome this limit, we mix it with the USO output instead of using the DDS as a reference directly. Figure 8 shows the architecture of a DDS-driven PLL synthesizer. The frequency of the sinewave output of the USO is 10 MHz plus a drift Δf, which is fed into a double-balanced mixer. The other input port of the mixer receives the filtered sinewave output of the DDS, adjusted to the frequency Δf. The mixer outputs upper- and lower-sideband carriers. The desired lower sideband is selected by a 10 MHz crystal filter; the upper sideband and any remaining carriers are rejected. This is the simplest method of single-sideband frequency generation.

Figure 8.

Functional block diagram of GPS disciplined oscillator.

The DDS output frequency is determined by its clock frequency f_clk and an M-bit tuning word 2^j (j ∈ [1, M]) written to its registers, where M is the length of the register. The tuning word is added to an accumulator at each clock update, and the resulting ramp feeds a sinusoidal look-up table followed by a DAC (digital-to-analog converter) that generates discrete steps at each update, following the sinewave. The DDS output frequency is then (Vankka, 2005)

f_DDS = 2^j · f_clk / 2^M    (42)
Clearly, for the smallest frequency step we need a low clock frequency; but the lower the clock frequency, the harder it becomes to filter the clock components from the DDS output. As a good compromise, we use a clock of about 1 MHz, obtained by dividing the nominal 10 MHz USO output by 10. With M = 48, the approximate frequency resolution of the DDS output is df = 1 MHz/2⁴⁸ = 3.55×10⁻⁹ Hz. This frequency is subtracted from the output frequency of the USO. The minimum fractional frequency step of the frequency corrector is therefore 3.55×10⁻⁹ Hz/10 MHz, which is 3.55×10⁻¹⁶. Thereafter, the DDS may be controlled over a much larger frequency range with the same resolution while removing the USO calibration errors. Thus, we can find the exact 48-bit DDS word that corrects the drift to zero by measuring our PPS, divided down from the 10 MHz output, against the PPS from the GPS receiver.
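The resolution arithmetic is easy to verify (M = 48 and the divide-by-10 clock are the values given above):

```python
f_uso = 10e6            # nominal USO output, Hz
f_clk = f_uso / 10      # DDS clock obtained by dividing the USO output by 10
M = 48                  # DDS accumulator length, bits

df = f_clk / 2**M       # smallest DDS output frequency step, Hz
frac = df / f_uso       # minimum fractional-frequency correction step
print(df, frac)         # ~3.55e-9 Hz and ~3.55e-16
```

The 3.55×10⁻¹⁶ fractional step is well below the USO's 10⁻¹² short-term stability, so the corrector resolution is not the limiting factor.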

However, we face the technical challenge of measuring the time error between the GPS and USO pulse-per-second signals. To overcome this difficulty, we apply a high-precision time interval measurement method, illustrated in Fig. 9, where the two PPS signals are used to trigger an ADC (analog-to-digital converter) to sample the sinusoid that is directly generated by the USO. Denoting the frequency of PPS_GPS as f_o, we have


Figure 9.

Measuring time errors between two 1PPS with interpolated sampling technique.

Similarly, for PPS_USO, there is


Hence, we can get


where n and m denote the counted clock periods. Since φ_B = φ_D, we have


To find a general mathematical model, suppose the collected sinewave signal with original phase φ_i (i ∈ {A, C}) is


Partitioning x(n) into two nonoverlapping subsets, x1(n) and x2(n), we have

S1(k) = FFT[x1(n)],    S2(k) = FFT[x2(n)]    (48)

Thereby we have

|S1(k1)| = max|S1(k)|,    |S2(k2)| = max|S2(k)|    (49)

Thus, φ_i (i ∈ {A, C}) can be calculated by


Since the parameters m, n, φ_C, φ_A, and f_o are all measurable, the time error between PPS_GPS and PPS_USO can be obtained from Eq. (50). As an example, assuming a signal-to-noise ratio (SNR) of 50 dB and f_o = 10 MHz, simulations suggest that the RMS (root mean square) measurement accuracy is about 0.1 ps. We have assumed ideal behavior for some parts of the measurement system, so there may be some variation in actual systems. The performance of single-frequency estimators is detailed in (Kay, 1989).
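The FFT-based phase measurement can be sketched as follows. The tone frequency, sampling rate, and record length here are assumptions chosen so that an integer number of cycles fits the window; a practical system would interpolate between bins or use a maximum-likelihood estimator (Kay, 1989):

```python
import numpy as np

def estimate_phase(x):
    """Coarse initial-phase estimate of a sampled sinusoid:
    take the phase of the largest-magnitude FFT bin (DC excluded)."""
    X = np.fft.rfft(x)
    k = int(np.argmax(np.abs(X[1:]))) + 1
    return float(np.angle(X[k]))

fo, fs, N = 10e6, 64e6, 4096        # tone, sampling rate, record length
n = np.arange(N)
x = np.cos(2 * np.pi * fo / fs * n + 0.7)   # sinusoid with phase 0.7 rad
print(estimate_phase(x))                    # recovers ~0.7 rad
```

Applying this estimator at both ADC trigger instants yields φ_A and φ_C, from which the PPS time error follows as in Eq. (50).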

Finally, time and phase synchronization can be achieved by generating all needed frequencies by dividing, multiplying or phase-locking to the GPS-disciplined USO at the transmitter and receiver.

4.3. Residual synchronization errors compensation

Because GPS-disciplined USOs are adjusted to agree with GPS signals, they are self-calibrating standards. Even so, differences in the PPS fluctuations will be observed because of uncertainties in the satellite signals and the measurement process in the receiver (Cheng et al., 2005). With modern commercial GPS units, which use the L1 signal at 1575.42 MHz, a standard deviation of 15 ns may be observed. Using differential GPS (DGPS) or GPS common view, one can expect a standard deviation of less than 10 ns. When GPS signals are lost, the control parameters stay fixed and the USO enters a so-called free-running mode, which further degrades synchronization performance. Thus, the residual synchronization errors must be further compensated for BiSAR image formation.

Differences in the PPS fluctuations will result in linear phase synchronization errors, φ_0 + 2πΔf t = a_0 + a_1 t, within one synchronization period, i.e., one second. Even though the USO used here has good short-term timekeeping ability, frequency drift may be observed within one second. These errors can be modeled as quadratic phases. We model the residual phase errors in the i-th second as

φ_i(t) = a_i0 + a_i1 t + a_i2 t²,    0 ≤ t ≤ 1      (51)

Motion compensation is ignored here because it can be addressed with motion sensors. Thus, after time synchronization compensation, the next step is residual phase error compensation, i.e., autofocus processing.

We use the Mapdrift autofocus algorithm described in (Mancill & Swiger, 1981). The Mapdrift technique divides the i-th second of data into two nonoverlapping subapertures, each 0.5 seconds long. It exploits the fact that a quadratic phase error across one second (one synchronization period) takes a different functional form across the two half-length subapertures, as shown in Fig. 10 (Carrara et al., 1995). The phase error across each subaperture consists of a quadratic component, a linear component, and an inconsequential constant component of Ω/4 radians. The quadratic phase components of the two subapertures are identical, with a center-to-edge magnitude of Ω/4 radians. The linear phase components have identical magnitudes but opposite slopes. After partitioning the i-th second of azimuthal data into the two subapertures, the phase error across each subaperture is therefore approximately linear:

φ_ei(t + t_j) = b_0j + b_1j t,    |t| ≤ Ta/4      (52)

with t_j = ((2j − 1)/2 − 1)/2, j ∈ {1, 2}. Then the model for the first subaperture g1(t) is the product of the error-free signal history s1(t) and a complex exponential with linear phase

Similarly, for the second subaperture we have


After applying a Fourier transform, we get


where S12(ω) denotes the error-free cross-correlation spectrum. The relative shift between the two apertures is Δω = b_11 − b_12, which is directly proportional to the coefficient a_i2 in Eq. (51).

Figure 10.

Visualization of quadratic phase error.
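The subaperture decomposition visualized in Fig. 10 is easy to verify numerically. In this sketch (the center-to-edge magnitude Q is an arbitrary test value, not from the text), a pure quadratic phase across one synchronization period is fit over each half-length subaperture; the fits return identical quadratic terms and linear terms of opposite slope, as stated above.

```python
import numpy as np

# Quadratic phase error across one synchronization period, written so that
# phi(0) = 0 and phi(+/-0.5) = Q (center-to-edge magnitude Q, assumed value).
Q = 8.0
t = np.linspace(-0.5, 0.5, 1000)
phi = 4.0 * Q * t ** 2

# Fit a quadratic over each half-length subaperture in its own local time.
half = len(t) // 2
t1, t2 = t[:half], t[half:]
c1 = np.polyfit(t1 - t1.mean(), phi[:half], 2)   # [quadratic, linear, constant]
c2 = np.polyfit(t2 - t2.mean(), phi[half:], 2)
```

The quadratic coefficients are both 4Q (a center-to-edge magnitude of Q/4 over each quarter-length half-interval), the linear coefficients come out as approximately ±2Q, and the constants are about Q/4, matching the decomposition described in the text.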


Next, various methods are available to estimate this shift. The most common is to measure the peak location of the cross-correlation of the two subaperture images.
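A toy version of this shift measurement can be written as follows. The slopes ±b1 and the single-tone error-free history are assumed values for illustration; the peak of the cross-correlation of the two subaperture spectra then falls at the bin offset corresponding to Δω = b_11 − b_12 = 2 b1.

```python
import numpy as np

# Two subaperture histories that differ only by opposite-slope linear phases,
# as the Mapdrift model predicts. Their spectral peaks are offset by 2*b1.
N = 256
t = np.arange(N) / N - 0.5            # one-second axis, centred at zero
b1 = 40 * np.pi                       # assumed linear-phase slope, rad/s
s = np.exp(2j * np.pi * 10 * t)       # shared error-free signal history
g1 = s * np.exp(+1j * b1 * t)         # peak at +30 cycles
g2 = s * np.exp(-1j * b1 * t)         # peak at -10 cycles

G1 = np.abs(np.fft.fftshift(np.fft.fft(g1)))
G2 = np.abs(np.fft.fftshift(np.fft.fft(g2)))

# Cross-correlate the two magnitude spectra; the peak offset from the
# zero-lag index gives the relative shift in frequency bins.
xc = np.correlate(G1, G2, mode="full")
shift_bins = np.argmax(xc) - (N - 1)
```

Here 2 b1 = 80π rad/s corresponds to 40 cycles over the one-second aperture, so the measured shift is 40 bins; this shift is what gets scaled into the estimate of a_i2.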

After compensating for the quadratic phase errors a_i2 in each second, Eq. (51) reduces to

φ_ic(t) = a_i0 + a_i1 t,    0 ≤ t ≤ 1      (58)

Applying the Mapdrift technique described above to the i-th and (i+1)-th second data, the coefficients in (58) can be derived. Define a mean-value operator ⟨φ⟩ as


Hence, we can get

a_1i = ⟨(t − t̄)(φ_ei − φ̄_ei)⟩ / ⟨(t − t̄)²⟩,    a_0i = φ̄_ei − a_1i t̄      (60)

where φ̄_ei ≜ ⟨φ_ei⟩ and t̄ ≜ ⟨t⟩. The coefficients in (51) can then be derived, i.e., the residual phase errors can be successfully compensated. This process is shown in Fig. 11.

Figure 11.

Estimator of residual phase synchronization errors.
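The mean-value fit of Eq. (60) is ordinary least squares for a straight line. A small self-contained check, with assumed coefficients and noise level:

```python
import numpy as np

# Linear residual phase of Eq. (58) with assumed coefficients, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
a0_true, a1_true = 0.7, -2.1          # assumed test values
phi = a0_true + a1_true * t + 0.01 * rng.standard_normal(t.size)

# Eq. (60): slope from centred cross-moments, intercept from the means.
t_bar, phi_bar = t.mean(), phi.mean()
a1_hat = np.mean((t - t_bar) * (phi - phi_bar)) / np.mean((t - t_bar) ** 2)
a0_hat = phi_bar - a1_hat * t_bar
```

With 500 samples and 0.01 rad phase noise, both estimates land within a few thousandths of the true coefficients, which is the sense in which the residual linear errors in (51) can be recovered.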

Notice that a typical implementation applies the algorithm to only a small subset of available range bins, based on peak energy. An average of the individual estimates of the error coefficient from each of these range bins provides a final estimate. This procedure naturally reduces the computational burden of this algorithm. The range bins with the most energy are likely to contain strong, dominant scatterers with high signal energy relative to clutter energy. The signatures from such scatterers typically show high correlation between the two subaperture images, while the clutter is poorly correlated between the two images.

It is common practice to apply this algorithm iteratively. On each iteration, the algorithm forms an estimate and applies this estimate to the input signal data. Typically, two to six iterations are sufficient to yield an accurate error estimate that does not change significantly on subsequent iterations. Iteration greatly improves the accuracy of the final error estimate for two reasons. First, iteration enhances the algorithm's ability to identify and discard range bins that, for one reason or another, provide anomalous estimates on the current iteration. Second, the improved focus of the image data after each iteration yields a narrower cross-correlation peak, which leads to a more accurate determination of its location. Note that the Mapdrift algorithm can be extended to estimate higher-order phase errors by dividing the azimuthal signal history in one second into more than two subapertures. Generally speaking, N subapertures are adequate to estimate the coefficients of an N-th-order polynomial error. However, decreased subaperture length degrades both the resolution and the signal-to-noise ratio of the targets in the images, which in turn degrades estimation performance.
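The estimate-and-apply iteration can be sketched with a toy estimator standing in for the subaperture correlation (here simply a polynomial fit to the unwrapped phase; the error magnitude and iteration count are assumed values):

```python
import numpy as np

def estimate_quadratic(signal, t):
    """Toy stand-in for Mapdrift: quadratic coefficient of the unwrapped phase."""
    phase = np.unwrap(np.angle(signal))
    return np.polyfit(t, phase, 2)[0]

t = np.linspace(-0.5, 0.5, 512)
a2_true = 30.0                           # assumed residual quadratic error, rad
data = np.exp(1j * a2_true * t ** 2)     # defocused signal history

# Estimate-and-apply loop; two to six iterations are typical in practice.
a2_total = 0.0
for _ in range(4):
    a2_hat = estimate_quadratic(data, t)
    data = data * np.exp(-1j * a2_hat * t ** 2)   # apply the current estimate
    a2_total += a2_hat
```

The accumulated estimate converges to the true quadratic coefficient; with the real correlation-based estimator, the later iterations also benefit from the narrower correlation peak noted above.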

5. Conclusion

Although the feasibility of airborne BiSAR has been demonstrated by experimental investigations using rather steep incidence angles, resulting in relatively short synthetic aperture times of only a few seconds, the time and phase synchronization of the transmitter and receiver remain technical challenges. In this chapter, the impacts of time and phase synchronization errors on BiSAR imaging were derived from an analytical model of phase noise. Two synchronization approaches, direct-path-signal-based and GPS-signal-disciplined, were investigated, along with the corresponding residual synchronization errors.

One remaining factor needed for the realization and implementation of BiSAR is spatial synchronization. Digital beamforming by the receiver is a promising solution. Combining the recorded subaperture signals in many different ways introduces high flexibility in the BiSAR configuration, and makes effective use of the total signal energy in the large illuminated footprint.


Acknowledgment

This work was supported in part by the Specialized Fund for the Doctoral Program of Higher Education for New Teachers under contract number 200806141101, the Open Fund of the Key Laboratory of Ocean Circulation and Waves, Chinese Academy of Sciences under contract number KLOCAW0809, and the Open Fund of the Institute of Plateau Meteorology, China Meteorological Administration under contract number LPM2008015.

How to cite and reference

Wen-Qin Wang (January 1st 2010). Bistatic Synthetic Aperture Radar Synchronization Processing. In Guy Kouemou (Ed.), Radar Technology. IntechOpen. DOI: 10.5772/7184.
