## 1. Introduction

Bistatic synthetic aperture radar (BiSAR), which operates with a separate transmitter and receiver mounted on different platforms (Cherniakov & Nezlin, 2007), will play a great role in future radar applications (Krieger & Moreira, 2006). A BiSAR configuration can bring many benefits in comparison with monostatic systems, such as the exploitation of additional information contained in the bistatic reflectivity of targets (Eigel et al., 2000, Burkholder et al., 2003), improved flexibility (Loffeld et al., 2004), reduced vulnerability (Wang & Cai, 2007), and forward-looking SAR imaging (Ceraldi et al., 2005). These advantages could be worthwhile, e.g., for mapping topographic features, surficial deposits, and drainage, and for showing the relationships between forest, vegetation, and soils. Even for objects that show a low radar cross section (RCS) in monostatic SAR images, one can find a distinct bistatic angle that increases their RCS and makes these objects visible in BiSAR images. Furthermore, a BiSAR configuration allows a passive receiver, operating at close range, to receive the data reflected from potentially hostile areas. This passive receiver may be teamed with a transmitter at a safe place, or make use of opportunistic illuminators such as television and radio transmitters or even unmanned vehicles (Wang, 2007a).

However, BiSAR is subject to problems and special requirements that are either not encountered or encountered in a less serious form for monostatic SAR (Willis, 1991). The biggest technological challenge lies in the synchronization of the two independent radars: time synchronization, the receiver must precisely know when the transmitter fires (on the order of nanoseconds); spatial synchronization, the receiving and transmitting antennas must simultaneously illuminate the same spot on the ground; and phase synchronization, the receiver and transmitter must be coherent over extremely long periods of time. The most difficult of these is phase synchronization. To obtain a focused BiSAR image, the phase information of the transmitted pulse has to be preserved. In a monostatic SAR, the co-located transmitter and receiver use the same stable local oscillator (STALO), so the phase can only decorrelate over very short periods of time (about

Although multiple BiSAR image formation algorithms have been developed (Wang et al., 2006), BiSAR synchronization aspects have seen much less development, at least in the open literature. The requirement of phase stability in BiSAR was first discussed in (Auterman, 1984), and further investigated in (Krieger et al., 2006, Krieger & Younis, 2006), which conclude that uncompensated phase noise may cause a time-variant shift, spurious sidelobes and a deterioration of the impulse response, as well as a low-frequency phase modulation of the focused SAR signal. The impact of frequency synchronization errors in spaceborne parasitic interferometric SAR is analyzed in (Zhang et al., 2006), and the estimation of an oscillator's phase offset in bistatic interferometric SAR is investigated in (Ubolkosold et al., 2006). In a like manner, linear and random time synchronization errors are discussed in (Zhang et al., 2005).

As a consequence of these difficulties, there is a lack of practical synchronization techniques for BiSAR. Because its application is nevertheless of great scientific and technological interest, several authors have proposed potential synchronization techniques or algorithms, such as ultra-high-quality oscillators (Gierull, 2006), a direct exchange of radar pulses (Moreira et al., 2004), a ping-pong interferometric mode in the case of full-active systems (Evans, 2002) and an appropriate bidirectional link (Younis et al., 2006a, Younis et al., 2006b, Eineder, 2003). The practical challenge is to develop a workable synchronization technique without too much alteration to existing radars.

This chapter concentrates on general BiSAR synchronization and aims at the development of a practical solution for the time and phase synchronization aspects without too much alteration to existing radars. The remaining sections are organized as follows. In Section 2, the impact of synchronization errors on BiSAR systems is analysed using an analytical model. It is concluded that synchronization compensation techniques must be applied to focus BiSAR raw data. Possible time synchronization and phase synchronization approaches are then investigated in Section 3 and Section 4, respectively. Finally, Section 5 concludes the chapter with some possible future work.

## 2. Impact of synchronization errors on BiSAR systems

### 2.1. Fundamental of phase noise

The instantaneous output voltage of a signal generator or oscillator *V*(*t*) is (Lance et al., 1984)

where

It is well known that

where the coefficients

In engineering, for the condition that the phase fluctuations occurring at a rate of

where

### 2.2. Model of phase noise

One cannot simulate phase noise without a model for it. In (Hanzo et al., 2000), a white phase noise model is discussed, but it cannot describe the statistical process of phase noise. In (Foschini & Vannucci, 1988), a Wiener phase noise model is discussed, but it cannot describe the low-frequency phase noise, since this part of the phase noise is a nonstationary process. As different phase noise will have different effects on BiSAR (see Fig. 1), the practical problem is how to develop a useful and comprehensive model of frequency instability that can be understood and applied in BiSAR processing. Unfortunately, Eq. (3) is a frequency-domain expression and is not convenient for analyzing its impact on BiSAR. As such, we have proposed an analytical model of phase noise, as shown in Fig. 2. This model uses Gaussian noise as the input of a hypothetical low-pass filter and considers its output as the phase noise; that is, this model may represent the output of a hypothetical filter with impulse response

It is well known that the power spectral density (PSD) of the output signal is given by the product

where a sharp upper cutoff frequency

where

where
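The filtered-Gaussian-noise model of Fig. 2 can be sketched numerically. In the minimal illustration below, the one-pole low-pass filter, cutoff frequency, noise level, and all parameter values are assumptions for demonstration rather than values from the chapter:

```python
import math
import random

def simulate_phase_noise(n, f_cutoff, fs, sigma=1.0, seed=0):
    """Shape white Gaussian noise with a one-pole low-pass filter.

    A simple stand-in for the chapter's hypothetical filter: the white
    Gaussian input models the flat high-frequency noise floor, and the
    filter concentrates power below f_cutoff, mimicking the dominant
    low-frequency part of oscillator phase noise.
    """
    rng = random.Random(seed)
    alpha = math.exp(-2.0 * math.pi * f_cutoff / fs)  # filter pole location
    phi = [0.0] * n
    state = 0.0
    for i in range(n):
        w = rng.gauss(0.0, sigma)            # white Gaussian input sample
        state = alpha * state + (1.0 - alpha) * w  # one-pole low-pass update
        phi[i] = state
    return phi

# Illustrative parameters: 10 Hz cutoff, 1 kHz "slow-time" sampling rate.
phi = simulate_phase_noise(4096, f_cutoff=10.0, fs=1000.0)
```

A more faithful simulation would shape the noise to a measured PSD, but the low-pass structure is what makes the slow-time correlation of the phase error appear.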

### 2.3. Impact of phase synchronization errors

Since only STALO phase noise is of interest, the modulation waveform used for range resolution can be ignored and the radar can be simplified into an azimuth-only system (Auterman, 1984). Suppose the transmitted signal is a sinusoid whose phase argument is

The first term is the carrier frequency and the second term is the phase, and

Hence we have

The first term is a frequency offset arising from non-identical STALO frequencies, which will result in a drift of the focused image. Because this drift can easily be corrected using a ground calibrator, it can be ignored here. The second term forms the usual Doppler term as the round-trip time to the target varies; it should be preserved. The last term represents the effect of STALO frequency instability, which is of interest. As a typical example, consider an X-band airborne SAR with a swath of 6 km. Generally, a typical STALO used in current SAR has a frequency accuracy (

which has negligible effects on the synchronization phase. Hence, we have the approximate expression

That is to say, the phase noise of the oscillator in fast-time is negligible, so we can consider only the phase noise in slow-time.

Accordingly, the phase error in BiSAR can be modelled as

It is assumed that

Where the factor

Take one

Furthermore, it is known that high-frequency phase noise will cause spurious sidelobes in the impulse response. This deterioration can be characterized by the integrated sidelobe ratio (ISLR), which measures the transfer of signal energy from the mainlobe to the sidelobes. For an azimuth integration time,

A typical requirement for the maximum tolerable ISLR is

Generally, for

Note that

Hence, the deterioration of ISLR may be approximated as

It was concluded in (Willis, 1991) that the error in this approximation is less than
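The ISLR defined above can be evaluated numerically for an ideal (error-free) azimuth impulse response. In this sketch the truncated sinc response, the oversampling factor, and the mainlobe limits (between the first nulls) are illustrative assumptions:

```python
import math

def islr_db(response, mainlobe_halfwidth):
    """Integrated sidelobe ratio: sidelobe energy over mainlobe energy, in dB."""
    peak = max(range(len(response)), key=lambda i: abs(response[i]))
    lo = max(0, peak - mainlobe_halfwidth)
    hi = min(len(response), peak + mainlobe_halfwidth + 1)
    main = sum(abs(x) ** 2 for x in response[lo:hi])   # mainlobe energy
    side = sum(abs(x) ** 2 for x in response) - main   # everything else
    return 10.0 * math.log10(side / main)

# Ideal azimuth impulse response: a truncated sinc, 16x oversampled.
n, os = 512, 16
center = n // 2
sinc = [1.0 if i == center else
        math.sin(math.pi * (i - center) / os) / (math.pi * (i - center) / os)
        for i in range(n + 1)]
val = islr_db(sinc, mainlobe_halfwidth=os)  # around -10 dB for an unweighted sinc
```

Uncompensated high-frequency phase noise raises this figure; comparing `islr_db` of a noisy response against the clean baseline quantifies the deterioration the text describes.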

### 2.4. Impact of time synchronization errors

Timing jitter is the term most widely used to describe an undesired perturbation or uncertainty in the timing of events; it is a measurement of the variations in the time domain and essentially describes how far the signal period has wandered from its ideal value. For BiSAR applications, timing jitter becomes more important and can significantly degrade image quality. Thus special attention should be paid to the effects of timing jitter in order to predict possible degradation of the behavior of BiSAR systems. Generally speaking, we can model jitter in a signal by starting with a noise-free signal and displacing its timing with a stochastic process. Figure 5 shows a square wave with jitter compared to an ideal signal. The instabilities can eventually cause slips or missed signals that result in loss of radar echoes.

Because bistatic SAR is a coherent system, to complete the coherent accumulation in azimuth, the signals of the same range but different azimuths should have the same phase after demodulation. If time synchronization were strict, the interval between the echo window and the PRF (pulse repetition frequency) of the receiver system would be a fixed value, preserving a stable phase relationship. But once there is clock timing jitter, the start time of the echo sampling window changes, with a certain time difference between the echo sampling window (or PRI, pulse repetition interval) and the real echo signal, as shown in Fig. 5. Consequently, the phase relation of the sampled data would be destroyed.

To find an analytical expression for the impact of time synchronization errors on BiSAR images, we suppose the transmitted radar signal is

where

where the first term is the range sampling window centered at

Considering only time synchronization errors (that is to say, phase synchronization errors are ignored here), we can obtain the demodulated signal as

Suppose the range reference signal is

The signal, after range compression, can be expressed as

where

From Eq. (24) we can notice that if the two clocks deviate a lot, the radar echoes will be lost due to the drift of the echo sampling window. Fortunately, such a case hardly occurs for current radars. Hence we consider only the case in which each echo is successfully received but drifted because of clock timing jitter. In other words, the collected data with the same range but different azimuths are no longer at the same range. As an example, Fig. 6(a) illustrates one typical prediction of time synchronization error. From Fig. 6(b) we can conclude that time synchronization errors will result in unfocused images, drift of radar echoes and displacement of targets. To focus BiSAR raw data, time synchronization compensation techniques must be applied.
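The drift of the echo sampling window can be illustrated with a toy numerical model. Here the sampling rate, the jitter level, and the random-walk jitter model are all hypothetical choices made for the sketch, not system parameters from the chapter:

```python
import random

def sample_bin_drift(n_pulses, fs, jitter_std, seed=1):
    """Range-bin drift of a fixed-range echo caused by sampling-window jitter.

    Each pulse's sampling window opens with a timing error t_err (modelled
    here as an accumulating random walk of the receiver clock), so an echo
    that should land in bin 0 lands in bin round(-t_err * fs) instead.
    """
    rng = random.Random(seed)
    t_err, bins = 0.0, []
    for _ in range(n_pulses):
        t_err += rng.gauss(0.0, jitter_std)  # jitter accumulates pulse to pulse
        bins.append(round(-t_err * fs))      # range bin the echo falls into
    return bins

# 100 MHz range sampling, 1 ns r.m.s. jitter per pulse, 2000 azimuth pulses.
drift = sample_bin_drift(2000, fs=100e6, jitter_std=1e-9)
```

Plotting `drift` against pulse index reproduces the behaviour of Fig. 6(a): the same scatterer wanders across range bins over the synthetic aperture, which is exactly what range alignment must undo.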

Notice that the requirement of frequency stability may vary with the application. Image generation with BiSAR requires frequency coherence for at least the coherent integration time. For interferometric SAR (InSAR) (Muellerschoen et al., 2006), however, this coherence has to be extended over the whole processing time (Eineder, 2003).

## 3. Direct-path Signal-based synchronization approach

A time and phase synchronization approach via direct-path signal was proposed in (Wang et al., 2008). In this approach, the direct-path signal of the transmitter is received with a dedicated antenna and divided into two channels: one is passed through an envelope detector and used to synchronize the sampling clock, and the other is downconverted and used to compensate the phase synchronization error. Finally, the residual time synchronization error is compensated with range alignment, and the residual phase synchronization error is compensated with GPS (global positioning system)/INS (inertial navigation system)/IMU (inertial measurement unit) information; the focusing of the BiSAR image may then be achieved.

### 3.1. Time synchronization

As concluded previously, if time synchronization were strict, the interval between the echo window and the PRF (pulse repetition frequency) of the receiver would be a fixed value, preserving a stable phase relationship. But once there is a time synchronization error, the start time of the echo sampling window changes, with a certain time difference between the echo sampling window (or PRI, pulse repetition interval) and the real echo signal. As a consequence, the phase relation of the sampled data would be destroyed.

It is well known that, for monostatic SAR, azimuth processing operates upon echoes which come from target points at equal range. Because time synchronization errors (without considering phase synchronization errors, which are compensated separately in subsequent phase synchronization processing) have no effect on the initial phase of each echo, time synchronization errors can be compensated separately with range alignment. Here the spatial-domain realignment (Chen & Andrews, 1980) is used. That is, let

From Schwartz inequality we have that
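The spatial-domain realignment can be sketched as an integer-shift search that maximizes the cross-correlation of each range profile with a reference profile. The function names and search window below are illustrative choices, not details of the cited method:

```python
def best_shift(reference, profile, max_shift):
    """Integer shift maximizing cross-correlation with the reference profile."""
    def corr(s):
        return sum(reference[i] * profile[i - s]
                   for i in range(len(reference))
                   if 0 <= i - s < len(profile))
    return max(range(-max_shift, max_shift + 1), key=corr)

def align_profiles(profiles, max_shift=8):
    """Align each range envelope to the first, as in range alignment of echoes."""
    ref = profiles[0]
    aligned = [ref]
    for p in profiles[1:]:
        s = best_shift(ref, p, max_shift)
        # Undo the measured drift; samples shifted out of view become zero.
        aligned.append([p[i - s] if 0 <= i - s < len(p) else 0.0
                        for i in range(len(p))])
    return aligned
```

In practice sub-sample (interpolated) shifts and a running or averaged reference are used, but the peak-correlation principle is the same.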

### 3.2. Phase synchronization

After time synchronization compensation, the primary causes of phase errors include uncompensated target or sensor motion and residual phase synchronization errors. Practically, the receiver of the direct-path signal can be regarded as a strong scatterer in the process of phase compensation. To the degree that the motion sensor is able to measure the relative motion between the targets and the SAR sensor, the image formation processor can eliminate undesired motion effects from the collected signal history with GPS/INS/IMU and autofocus algorithms. This motion compensation procedure is ignored here since it is beyond the scope of this chapter. Thereafter, the focusing of the BiSAR image can be achieved with autofocus image formation algorithms, e.g., (Wahl et al., 1994).

Suppose the

where

where

Hence, the received signal in baseband is

with

Suppose the range reference function is

Range compression yields

(33)

We can notice that the maxima will be at

Hence, the residual phase term in Eq. (33) is

As

In a like manner, we have

Let

where

Accordingly,

Generally,

From Eq. (41) we can get

In summary, the time and phase synchronization compensation process may include the following steps:

*Step 1*, extract one pulse from the direct-path channel as the range reference function;

*Step 2*, direct-path channel range compression;

*Step 3*, estimate time synchronization errors with range alignment;

*Step 4*, direct-path channel motion compensation;

*Step 5*, estimate phase synchronization errors from direct-path channel;

*Step 6*, reflected channel time synchronization compensation;

*Step 7*, reflected channel phase synchronization compensation;

*Step 8*, reflected channel motion compensation;

*Step 9*, BiSAR image formation.

## 4. GPS signal disciplined synchronization approach

For the direct-path signal-based synchronization approach, the receiver must fly with a sufficient altitude and position to maintain a line-of-sight contact with the transmitter. To get around this disadvantage, a GPS signal disciplined synchronization approach is investigated in (Wang, 2009).

### 4.1. System architecture

Because of their excellent long-term frequency accuracy, GPS-disciplined rubidium oscillators are widely used as standards of time and frequency. Here, selection of a crystal oscillator instead of rubidium is based on the superior short-term accuracy of the crystal. As such, high quality space-qualified 10MHz quartz crystal oscillators are chosen here, which have a typical short-term stability of

As shown in Fig. 7, the transmitter/receiver contains the high-performance quartz crystal oscillator, a direct digital synthesizer (DDS), and a GPS receiver. The antenna collects the GPS L1 (1575.42MHz) signals and, if dual-frequency capable, L2 (1227.60MHz) signals. The radio frequency (RF) signals are filtered through a preamplifier, then down-converted to intermediate frequency (IF). The IF section provides additional filtering and amplification of the signal to levels more amenable to signal processing. The GPS signal processing component performs most of the core functions of the receiver, including signal acquisition, code and carrier tracking, demodulation, and extraction of the pseudo-range and carrier phase measurements. The details can be found in many textbooks on GPS (Parkinson & Spilker, 1996).

The USO is disciplined by the output pulse-per-second (PPS), and frequency trimmed by varactor-diode tuning, which allows a small amount of frequency control on either side of the nominal value. Next, a narrow-band high-resolution DDS is applied, which allows the generation of various frequencies with extremely small step size and high spectral purity. This technique combines the advantages of the good short-term stability of high quality USO with the advantages of GPS signals over the long term. When GPS signals are lost, because of deliberate interference or malfunctioning GPS equipment, the oscillator is held at the best control value and free-runs until the return of GPS allows new corrections to be calculated.

### 4.2. Frequency synthesis

Since the DDS is far from being an ideal source, its noise floor and spurs will be transferred to the output and amplified by *N*² in power (*N* denotes the frequency multiplication factor). To overcome this limit, we mix it with the USO output instead of using the DDS as a reference directly. Figure 8 shows the architecture of a DDS-driven PLL synthesizer. The frequency of the sinewave output of the USO is 10MHz plus a drift

The DDS output frequency is determined by its clock frequency and an *M*-bit number

Clearly, for the smallest frequency step we need to use a low clock frequency, but the lower the clock frequency, the harder it becomes to filter the clock components in the DDS output. As a good compromise, we use a clock at about 1MHz, obtained by dividing the nominal 10MHz USO output by 10. Then, the approximate resolution of the frequency output of the DDS is
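The clock/resolution trade-off can be made concrete with the standard DDS tuning relation. A sketch, assuming for illustration a 32-bit tuning word (the actual word length of the DDS in question is not specified here):

```python
def dds_output_hz(f_clk_hz, tuning_word, m_bits):
    """DDS output frequency: f_out = K / 2**M * f_clk, K the M-bit tuning word."""
    return tuning_word * f_clk_hz / (1 << m_bits)

def dds_resolution_hz(f_clk_hz, m_bits):
    """Smallest frequency step: one LSB of the tuning word."""
    return f_clk_hz / (1 << m_bits)

# Assumed 1 MHz clock (the nominal 10 MHz USO output divided by 10),
# assumed 32-bit tuning word: sub-millihertz step size.
step = dds_resolution_hz(1e6, 32)
half_scale = dds_output_hz(1e6, 1 << 31, 32)  # mid-scale word gives f_clk / 2
```

The relation also shows why a lower clock helps: the step size scales linearly with `f_clk_hz`, while the filtering difficulty grows as the clock approaches the output band.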

However, we face the technical challenge of measuring the time error between the GPS and USO pulse-per-second signals. To overcome this difficulty, we apply a high-precision time interval measurement method. This technique is illustrated in Fig. 9, where the two PPS signals are used to trigger an ADC (analog-to-digital converter) to sample the sinusoid that is directly generated by the USO. Denoting the frequency of

Similarly, for

Hence, we can get

where

To find a general mathematical model, suppose the collected sinewave signal with original phase

Parting

Thereby we have

Thus,

Since the parameters
*ps*. We have assumed that some parts of the measurement system are ideal; hence, there may be some variation in actual systems. The performance of single frequency estimators has been detailed in (Kay, 1989).
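The phase-measurement step can be sketched with a simple known-frequency estimator (quadrature correlation, one of the single-frequency estimators analyzed by Kay). The sampling rate, record length, and test phase below are illustrative assumptions:

```python
import math

def sinusoid_phase(samples, f0, fs):
    """Initial phase of a known-frequency sinusoid via quadrature correlation."""
    i_sum = sum(s * math.cos(2 * math.pi * f0 * n / fs)
                for n, s in enumerate(samples))
    q_sum = sum(s * math.sin(2 * math.pi * f0 * n / fs)
                for n, s in enumerate(samples))
    return math.atan2(-q_sum, i_sum)

def pps_time_offset(phase_a, phase_b, f0):
    """Convert the USO phase difference at two PPS triggers into a time offset.

    The difference is wrapped into [-pi, pi), so the offset is unambiguous
    only within half a carrier period (50 ns at 10 MHz)."""
    dphi = (phase_b - phase_a + math.pi) % (2.0 * math.pi) - math.pi
    return dphi / (2.0 * math.pi * f0)

# 10 MHz USO sinusoid sampled at an assumed 100 MHz over 1000 samples.
fs, f0 = 100e6, 10e6
samples = [math.cos(2 * math.pi * f0 * n / fs + 0.5) for n in range(1000)]
est = sinusoid_phase(samples, f0, fs)
```

Because the phase of a 10 MHz carrier resolves time far more finely than one ADC sample period, this is how sub-nanosecond PPS time errors become measurable.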

Finally, time and phase synchronization can be achieved by generating all needed frequencies by dividing, multiplying or phase-locking to the GPS-disciplined USO at the transmitter and receiver.

### 4.3. Residual synchronization errors compensation

Because GPS-disciplined USOs are adjusted to agree with GPS signals, they are self-calibrating standards. Even so, differences in the PPS fluctuations will be observed because of uncertainties in the satellite signals and the measurement process in the receiver (Cheng et al., 2005). With modern commercial GPS units, which use the L1-signal at 1575.42MHz, a standard deviation of 15ns may be observed. Using differential GPS (DGPS) or GPS common-view, one can expect a standard deviation of less than 10ns. When GPS signals are lost, the control parameters will stay fixed, and the USO enters a so-called free-running mode, which further degrades synchronization performance. Thus, the residual synchronization errors must be further compensated for BiSAR image formation.

Differences in the PPS fluctuations will result in linear phase synchronization errors,

Motion compensation is ignored here because it can be addressed with motion sensors. Thus, after time synchronization compensation, the next step is residual phase error compensation, i.e., autofocus processing.

We use the Mapdrift autofocus algorithm described in (Mancill & Swiger, 1981). Here, the Mapdrift technique divides the *i*-th second of data into two nonoverlapping subapertures, each with a duration of 0.5 seconds. This concept uses the fact that a quadratic phase error across one second (one synchronization period) has a different functional form across the two half-length subapertures, as shown in Fig. 10 (Carrara et al., 1995). The phase error across each subaperture consists of a quadratic component, a linear component, and an inconsequential constant component of

Similarly, for the second subaperture we have

Let

After applying a Fourier transform, we get

where

Next, various methods are available to estimate this shift. The most common method is to measure the peak location of the cross-correlation of the two subaperture images.
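That cross-correlation measurement can be sketched minimally on two 1-D subaperture intensity profiles (reducing the maps to one dimension and fixing the search window are simplifying assumptions):

```python
def xcorr_peak_shift(img_a, img_b, max_lag):
    """Lag by which img_b is displaced relative to img_a: the peak location
    of their cross-correlation, as used by Mapdrift to sense the drift
    between the two subaperture maps."""
    def c(lag):
        return sum(img_a[i - lag] * img_b[i]
                   for i in range(len(img_b)) if 0 <= i - lag < len(img_a))
    return max(range(-max_lag, max_lag + 1), key=c)
```

The measured map drift is proportional to the quadratic phase coefficient across the full aperture, which is then removed before iterating.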

After compensating for the quadratic phase errors

Applying again the Mapdrift procedure described above to the *i*-th and (*i*+1)-th seconds of data, the coefficients in (58) can be derived. Define a mean value operator

Hence, we can get

where

Notice that a typical implementation applies the algorithm to only a small subset of available range bins, based on peak energy. An average of the individual estimates of the error coefficient from each of these range bins provides a final estimate. This procedure naturally reduces the computational burden of this algorithm. The range bins with the most energy are likely to contain strong, dominant scatterers with high signal energy relative to clutter energy. The signatures from such scatterers typically show high correlation between the two subaperture images, while the clutter is poorly correlated between the two images.

It is common practice to apply this algorithm iteratively. On each iteration, the algorithm forms an estimate and applies this estimate to the input signal data. Typically, two to six iterations are sufficient to yield an accurate error estimate that does not change significantly on subsequent iterations. Iteration of the procedure greatly improves the accuracy of the final error estimate for two reasons. First, iteration enhances the algorithm's ability to identify and discard those range bins that, for one reason or another, provide anomalous estimates for the current iteration. Second, the improved focus of the image data after each iteration results in a narrower cross-correlation peak, which leads to a more accurate determination of its location. Notice that the Mapdrift algorithm can be extended to estimate higher-order phase errors by dividing the azimuthal signal history in one second into more than two subapertures. Generally speaking, N subapertures are adequate to estimate the coefficients of an *N*th-order polynomial error. However, decreased subaperture length degrades both the resolution and the signal-to-noise ratio of the targets in the images, which results in degraded estimation performance.

## 5. Conclusion

Although the feasibility of airborne BiSAR has been demonstrated by experimental investigations using rather steep incidence angles, resulting in relatively short synthetic aperture times of only a few seconds, the time and phase synchronization of the transmitter and receiver remain technical challenges. In this chapter, with an analytical model of phase noise, impacts of time and phase synchronization errors on BiSAR imaging are derived. Two synchronization approaches, direct-path signal-based and GPS signal disciplined, are investigated, along with the corresponding residual synchronization errors.

One remaining factor needed for the realization and implementation of BiSAR is spatial synchronization. Digital beamforming by the receiver is a promising solution. Combining the recorded subaperture signals in many different ways introduces high flexibility in the BiSAR configuration, and makes effective use of the total signal energy in the large illuminated footprint.