Open access

Bistatic Synthetic Aperture Radar Synchronization Processing

Written By

Wen-Qin Wang

Published: January 1st, 2010

DOI: 10.5772/7184


1. Introduction

Bistatic synthetic aperture radar (BiSAR), which operates with a transmitter and receiver mounted on separate platforms (Cherniakov & Nezlin, 2007), will play an important role in future radar applications (Krieger & Moreira, 2006). A BiSAR configuration brings many benefits in comparison with monostatic systems, such as the exploitation of additional information contained in the bistatic reflectivity of targets (Eigel et al., 2000; Burkholder et al., 2003), improved flexibility (Loffeld et al., 2004), reduced vulnerability (Wang & Cai, 2007), and forward-looking SAR imaging (Ceraldi et al., 2005). These advantages could be worthwhile, e.g., for mapping topographic features, surficial deposits, and drainage, and for showing the relationships between forest, vegetation, and soils. Even for objects that show a low radar cross section (RCS) in monostatic SAR images, one can find a distinct bistatic angle that increases their RCS and makes them visible in BiSAR images. Furthermore, a BiSAR configuration allows a passive receiver, operating at close range, to receive the data reflected from potentially hostile areas. This passive receiver may be teamed with a transmitter at a safe place, or may exploit opportunistic illuminators such as television and radio transmitters or even unmanned vehicles (Wang, 2007a).

However, BiSAR is subject to problems and special requirements that are either not encountered or encountered in a less serious form in monostatic SAR (Willis, 1991). The biggest technological challenge lies in the synchronization of the two independent radars: time synchronization, i.e., the receiver must know precisely when the transmitter fires (on the order of nanoseconds); spatial synchronization, i.e., the receiving and transmitting antennas must simultaneously illuminate the same spot on the ground; and phase synchronization, i.e., the receiver and transmitter must remain coherent over extremely long periods of time. The most difficult of these is phase synchronization: to obtain a focused BiSAR image, the phase information of the transmitted pulse has to be preserved. In a monostatic SAR, the co-located transmitter and receiver share the same stable local oscillator (STALO), so the phase can only decorrelate over very short periods of time (about $1\times10^{-3}$ s). In contrast, in a BiSAR system the transmitter and receiver fly on different platforms and use independent master oscillators, so there is no phase noise cancellation. This superimposed phase noise corrupts the received signal over the whole synthetic aperture time. Moreover, any phase noise (instability) in the master oscillator is magnified by frequency multiplication. As a consequence, the low phase noise requirements imposed on the oscillators of a BiSAR are much more stringent than in the monostatic case. In the case of indirect phase synchronization using identical STALOs in the transmitter and receiver, phase stability is required over the whole coherent integration time. Even if the tolerance on low-frequency or quadratic phase synchronization errors is relaxed to 90°, the required phase stability is only achievable with ultra-high-quality oscillators (Weiß, 2004). Moreover, the situation is aggravated for airborne platforms: because of the different platform motions, the phase stability is further degraded.

Although multiple BiSAR image formation algorithms have been developed (Wang et al., 2006), BiSAR synchronization aspects have seen much less development, at least in the open literature. The requirement of phase stability in BiSAR was first discussed in (Auterman, 1984) and further investigated in (Krieger et al., 2006; Krieger & Younis, 2006), which conclude that uncompensated phase noise may cause a time-variant shift, spurious sidelobes, and a deterioration of the impulse response, as well as a low-frequency phase modulation of the focused SAR signal. The impact of frequency synchronization errors in spaceborne parasitic interferometric SAR is analyzed in (Zhang et al., 2006), and an estimation of the oscillator phase offset in bistatic interferometric SAR is investigated in (Ubolkosold et al., 2006). In a like manner, linear and random time synchronization errors are discussed in (Zhang et al., 2005).

As a consequence of these difficulties, there is a lack of practical synchronization techniques for BiSAR. Since its application is of great scientific and technological interest, several authors have proposed potential synchronization techniques or algorithms, such as ultra-high-quality oscillators (Gierull, 2006), a direct exchange of radar pulses (Moreira et al., 2004), a ping-pong interferometric mode in the case of full-active systems (Evans, 2002), and an appropriate bidirectional link (Younis et al., 2006a; Younis et al., 2006b; Eineder, 2003). The practical task is to develop a workable synchronization technique without too much alteration to existing radars.

This chapter concentrates on general BiSAR synchronization and aims at the development of a practical solution for the time and phase synchronization aspects without too much alteration to existing radars. The remaining sections are organized as follows. In Section 2, the impact of synchronization errors on BiSAR systems is analysed using an analytical model, and it is concluded that synchronization compensation techniques must be applied to focus BiSAR raw data. Possible time synchronization and phase synchronization approaches are then investigated in Section 3 and Section 4, respectively. Finally, Section 5 concludes the chapter with some possible future work.


2. Impact of synchronization errors on BiSAR systems

2.1. Fundamental of phase noise

The instantaneous output voltage of a signal generator or oscillator V(t) is (Lance et al., 1984)

$V(t) = [V_0 + \delta\varepsilon(t)]\sin[2\pi\nu_0 t + \phi_0 + \delta\phi(t)]$ (E1)

where $V_0$ and $\nu_0$ are the nominal amplitude and frequency, respectively, $\phi_0$ is a start phase, and $\delta\varepsilon(t)$ and $\delta\phi(t)$ are the fluctuations of the signal amplitude and phase, respectively. Notice that, here, we have assumed that (Wang et al., 2006)

$\left|\dfrac{\delta\varepsilon(t)}{V_0}\right| \ll 1 \quad \text{and} \quad \left|\dfrac{\partial[\delta\phi(t)]}{\partial t}\right| \ll 1$ (E2)

It is well known that $S_\phi(f)$, defined as the spectral density of phase fluctuations on a per-hertz basis, is the term most widely used to describe the random characteristics of frequency stability. It is a measure of the instantaneous time shifts, or time jitter, inherent in signals produced by signal generators or added to a signal as it passes through a system (Walls & Vig, 1995). Although an oscillator's phase noise is a complex interaction of variables, ranging from its atomic composition to its physical environment, a piecewise polynomial representation of an oscillator's phase noise exists and is expressed as (Rutman, 1978)

$S_\phi(f) = \sum_{\alpha=-2}^{2} h_{\alpha-2}\, f^{\alpha-2}$ (E3)

where the coefficients $h_{\alpha-2}$ describe the different contributions to the phase noise, and $f$ represents the phase fluctuation frequency. As modeled in Eq. (3), the contributions can be attributed to several physical mechanisms: random walk frequency noise, flicker frequency noise, white frequency noise, flicker phase noise, and white phase noise. Random walk frequency noise (Vannicola & Varshney, 1983) is caused by the oscillator's physical environment (temperature, vibration, shocks, etc.). This phase noise contribution can be significant for a moving platform and presents design difficulties, since laboratory measurements are necessary while the synthesizer is under vibration. White frequency noise originates from additive white thermal noise sources inside the oscillator's feedback loop. Flicker phase noise is generally produced by amplifiers, and white phase noise is caused by additive white noise sources outside the oscillator's feedback loop (Donald, 2002).

In engineering, under the condition that the phase fluctuations occurring at a rate $f$ are small compared with 1 rad, a good approximation is

$L(f) = \dfrac{S_\phi(f)}{2}$ (E4)

where $L(f)$ is defined as the ratio of the power in one sideband, referred to the carrier frequency on a per-hertz-of-bandwidth spectral density basis, to the total signal power, at Fourier frequency $f$ from the carrier.

2.2. Model of phase noise

One cannot simulate phase noise without a model for it. In (Hanzo et al., 2000), a white phase noise model is discussed, but it cannot describe the statistical process of phase noise. In (Foschini & Vannucci, 1988), a Wiener phase noise model is discussed, but it cannot describe the low-frequency phase noise, since this part of the phase noise is a nonstationary process. As different phase noise brings different effects on BiSAR (see Fig. 1), the practical problem is how to develop a useful and comprehensive model of frequency instability that can be understood and applied in BiSAR processing. Unfortunately, Eq. (3) is a frequency-domain expression and is not convenient for analyzing its impact on BiSAR. Hence, we have proposed an analytical model of phase noise, as shown in Fig. 2. This model uses Gaussian noise as the input of a hypothetical low-pass filter and takes the filter output as the phase noise; that is, the model represents the output of a hypothetical filter with impulse response $h(t)$ receiving an input signal $x(t)$.

Figure 1.

Impacts of various oscillator frequency offsets: (a) constant offset, (b) linear offset, (c) sinewave offset, (d) random offset.

Figure 2.

Analytical model of phase noise.

It is well known that the power spectral density (PSD) of the output signal is given by the product $S_x(f)|H(f)|^2$, where the filter transfer function $H(f)$ is the Fourier transform of $h(t)$. Notice that, here, $|H(f)|^2$ must satisfy

$|H(f)|^2 = \begin{cases} S_\varphi(f), & f_l \le |f| \le f_h \\ S_\varphi(f_l), & |f| \le f_l \\ 0, & \text{else} \end{cases}$ (E5)

where a sharp upper cutoff frequency $f_h$ and a sharp lower cutoff frequency $f_l$ are introduced. Notice that time-domain stability measures sometimes depend on $f_h$ and $f_l$, which must then be given with any numerical result, although no standard values have been recommended. Here, $f_h = 3\,\mathrm{kHz}$ and $f_l = 0.01\,\mathrm{Hz}$ are adopted. Thereby, the PSD of the phase noise in Fig. 2 can be analytically expressed as

$S_\varphi(f) = K\, S_x(f)\, |H(f)|^2$ (E6)

where $S_x(f)$ is the PSD of the Gaussian noise at the filter input, and $K$ is a constant. An inverse Fourier transform yields

$\varphi(t) = K\, x(t) \otimes h(t)$ (E7)

where $\varphi(t)$ denotes the phase noise in the time domain and $\otimes$ denotes convolution.
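To make the filtered-noise model of Fig. 2 concrete, the following sketch shapes white Gaussian noise in the frequency domain so that its output PSD follows the band-limited form of Eq. (5). The power-law coefficients here are illustrative placeholders, not measured values for any real oscillator:

```python
import numpy as np

def shaping_psd(f, fl=0.01, fh=3e3):
    """|H(f)|^2 of Eq. (5): a power-law phase-noise PSD, held flat
    below fl and cut off above fh. The h coefficients are illustrative
    placeholders, not data for a specific STALO."""
    f = np.asarray(f, dtype=float)
    fc = np.clip(np.abs(f), fl, None)        # flat extension below fl
    h = {-4: 1e-14, -3: 1e-13, -2: 1e-12, -1: 1e-13, 0: 1e-15}
    s = sum(c * fc ** k for k, c in h.items())
    return np.where(np.abs(f) > fh, 0.0, s)  # hard cutoff above fh

def simulate_phase_noise(n, fs, seed=None):
    """phi(t) = K x(t) (x) h(t), Eqs. (6)-(7): multiply the spectrum of
    white Gaussian noise by sqrt(|H(f)|^2) and transform back."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)               # white Gaussian input x(t)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    H = np.sqrt(shaping_psd(f) * fs / 2.0)   # filter amplitude response
    return np.fft.irfft(X * H, n)            # phase noise realisation
```

Each call with a different seed produces one realisation of $\varphi(t)$, analogous to the ten realisations shown in Fig. 3.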

2.3. Impact of phase synchronization errors

Since only STALO phase noise is of interest, the modulation waveform used for range resolution can be ignored and the radar can be simplified into an azimuth-only system (Auterman, 1984). Suppose the transmitted signal is a sinusoid whose phase argument is

$\phi_T(t) = 2\pi f_T t + M \varphi_T(t)$ (E8)

The first term is the carrier phase and the second term is the STALO phase noise scaled by $M$, the ratio of the carrier frequency to the STALO frequency. After reflection from a target, the received signal phase is that of the transmitted signal delayed by the round-trip time $\tau$. The receiver output signal phase $\phi(t)$ results from demodulating the received signal with the receiver STALO, which has the same form as the transmitter STALO:

$\phi_R(t) = 2\pi f_R t + M \varphi_R(t)$ (E9)

Hence we have

$\phi(t) = 2\pi (f_R - f_T)\, t + 2\pi f_T \tau + M\left[\varphi_R(t) - \varphi_T(t - \tau)\right]$ (E10)

The first term is a frequency offset arising from non-identical STALO frequencies, which will cause a drift of the focused image. Because this drift can easily be corrected using a ground calibrator, it is ignored here. The second term forms the usual Doppler term as the round-trip time to the target varies; it should be preserved. The last term represents the effect of STALO frequency instability, which is of interest here. Generally, a typical STALO used in current SAR systems has a relative frequency accuracy $\delta f$ of $10^{-9}$ over 1 s or better (Weiß, 2004). As a typical example, assume an X-band BiSAR system with the following parameters: the radar carrier frequency is $1\times10^{10}\,\mathrm{Hz}$, the speed of light is $3\times10^{8}\,\mathrm{m/s}$, and the round-trip distance from transmitter to target to receiver is $12000\,\mathrm{m}$. The phase error in fast-time is then found to be

$M\left[\varphi_T(t) - \varphi_T(t - \tau)\right] = 2\pi\,\delta f\,\tau \approx 2\pi \times 10^{10} \times 10^{-9} \times \dfrac{12000}{3\times10^{8}} = 8\pi \times 10^{-4}\ (\mathrm{rad})$ (E11)

which has negligible effects on the synchronization phase. Hence, we have an approximative expression

$\varphi_T(t) \approx \varphi_T(t - \tau)$ (E12)

That is to say, the phase noise of the oscillator in fast-time is negligible, and we need consider only the phase noise in slow-time.
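As a quick sanity check of Eq. (11), the fast-time phase error can be recomputed from the example parameters given above:

```python
import math

f_carrier = 1e10        # Hz, X-band carrier frequency
rel_accuracy = 1e-9     # relative STALO frequency accuracy over 1 s
path = 12000.0          # m, transmitter-target-receiver distance
c = 3e8                 # m/s, speed of light

tau = path / c                             # round-trip delay: 4e-5 s
delta_f = f_carrier * rel_accuracy         # absolute frequency error: 10 Hz
phase_error = 2 * math.pi * delta_f * tau  # Eq. (11): 8*pi*1e-4 rad
```

`phase_error` evaluates to about $2.5\times10^{-3}$ rad, far below one radian, confirming that the fast-time phase noise is negligible.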

Accordingly, the phase error in BiSAR can be modelled as

$\phi_B(t) = M\left[\varphi_T(t) - \varphi_R(t)\right]$ (E13)

It is assumed that φ T ( t ) and φ R ( t ) are independent random variables having identical PSD S φ ( f ) . Then, the PSD of phase noise in BiSAR is

$S_{\phi_B}(f) = 2 M^2 S_\varphi(f)$ (E14)

where the factor 2 arises from the addition of two uncorrelated but identical PSDs. This holds in those cases where the effective division ratio in the frequency synthesizer equals a small integer fraction exactly. In other instances, an empirical formula is (Kroupa, 1996)

$S_{\phi_B}(f) = 2\left[M^2 S_\varphi(f) + \dfrac{10^{-8}}{f} + 10^{-14}\right]$ (E15)

Take a 10 MHz STALO as an example, whose phase noise parameters are listed in Table 1. This STALO can be regarded as representative of the ultra-stable oscillators used in current airborne SAR systems. Predicted phase errors are shown in Fig. 3 for a time interval of 10 s. Moreover, the impact of phase noise on BiSAR, compared with the ideal azimuth compression results, can be found in Fig. 4(a). We can conclude that oscillator phase instabilities in BiSAR manifest themselves as a deterioration of the impulse response function. It is also evident that oscillator phase noise may not only defocus the SAR image, but also introduce significant positioning errors along the scene extension.

Furthermore, it is known that high-frequency phase noise causes spurious sidelobes in the impulse response function. This deterioration can be characterized by the integrated sidelobe ratio (ISLR), which measures the transfer of signal energy from the mainlobe to the sidelobes. For an azimuth integration time $T_s$, the ISLR contribution due to phase errors can be computed in dB as

$\mathrm{ISLR} = 10\log\left[\int_{1/T_s}^{\infty} 2 M^2 S_\varphi(f)\,\mathrm{d}f\right]$ (E16)

Frequency (Hz):            1      10     100    1k     10k
$S_\varphi(f)$ (dBc/Hz):  -80    -100   -145   -145   -160

Table 1.

Phase noise parameters of one typical STALO.

Figure 3.

Simulation results of oscillator phase instabilities with ten realisations: (a) predicted phase noise over 10 s in X-band (the linear phase ramp corresponding to a frequency offset has been removed); (b) predicted high-frequency phase errors, including cubic and higher-order terms.

Figure 4.

Impacts of phase noise on BiSAR systems: (a) impact of predicted phase noise in azimuth. (b) impact of integrated sidelobe ratio in X-band.

A typical requirement for the maximum tolerable ISLR is $-20\,\mathrm{dB}$, which enables a maximum coherent integration time $T_s$ of 2 s in this example, as shown in Fig. 4(b). This result is consistent with that of (Krieger & Younis, 2006).

Generally, for $f \ge 10\,\mathrm{Hz}$, the region of interest for SAR operation, $L(f)$ can be modelled as (Willis, 1991)

$L(f) = L_1\, 10^{-3\log f}$ (E17)

Note that $L_1$ is the value of $L(f)$ at $f = 1\,\mathrm{Hz}$ for a specific oscillator. As the slope of Eq. (17) is so steep, we have

$\int_{1/T_s}^{\infty} L(f)\,\mathrm{d}f \approx L\!\left(\dfrac{1}{T_s}\right)$ (E18)

Hence, the deterioration of ISLR may be approximated as

$\mathrm{ISLR} \approx 10\log\left(4 M^2 L_1\, 10^{3\log T_s}\right) = 10\log\left(4 M^2 L_1\right) + 30\log T_s$ (E19)

It was concluded in (Willis, 1991) that the error in this approximation is less than 1 dB for $T_s \ge 0.6\,\mathrm{s}$.
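The closed form of Eq. (19) is easy to evaluate numerically. The sketch below (with an assumed multiplication factor $M$ and the 1 Hz phase-noise level $L_1$ given in dBc/Hz; the example numbers are illustrative, not taken from a specific system) exhibits the $30\log T_s$ growth of the ISLR with integration time:

```python
import math

def islr_db(m, l1_dbc, ts):
    """ISLR approximation of Eq. (19):
    ISLR ~ 10 log10(4 M^2 L1) + 30 log10(Ts),
    with L1 = L(f) at f = 1 Hz converted from dBc/Hz to linear units."""
    l1 = 10.0 ** (l1_dbc / 10.0)
    return 10.0 * math.log10(4.0 * m * m * l1) + 30.0 * math.log10(ts)

# Example: M = 1000 (10 GHz carrier from a 10 MHz STALO), L1 = -80 dBc/Hz.
islr_1s = islr_db(1000, -80.0, 1.0)
islr_2s = islr_db(1000, -80.0, 2.0)
# doubling Ts raises the ISLR by 30*log10(2), about 9 dB
```

This makes the trade-off explicit: every doubling of the coherent integration time costs about 9 dB of sidelobe ratio.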

2.4. Impact of time synchronization errors

Timing jitter is the term most widely used to describe an undesired perturbation or uncertainty in the timing of events. It is a measurement of variations in the time domain and essentially describes how far the signal period has wandered from its ideal value. For BiSAR applications, timing jitter becomes more important and can significantly degrade image quality. Thus special attention should be given to the effects of timing jitter in order to predict possible degradation of BiSAR system performance. Generally speaking, we can model jitter in a signal by starting with a noise-free signal and displacing time with a stochastic process. Figure 5 shows a square wave with jitter compared to an ideal signal. The instabilities can eventually cause slips or missed signals that result in the loss of radar echoes.

Because bistatic SAR is a coherent system, to complete the coherent accumulation in azimuth, the signals of the same range but different azimuths must keep a stable phase relationship; that is, the interval between the echo sampling window and the PRF (pulse repetition frequency) of the receiver system should be a fixed value. But once there is clock timing jitter, the start time of the echo sampling window changes, introducing a time difference between the echo sampling window (or PRI, pulse repetition interval) and the real echo signal, as shown in Fig. 5. Consequently, the phase relation of the sampled data is destroyed.

To find an analytical expression for the impact of time synchronization errors on BiSAR images, we suppose the transmitted radar signal is

$S_T(\hat{t}) = \mathrm{rect}\!\left[\dfrac{\hat{t}}{T_r}\right]\exp\!\left(j\omega_0\hat{t} + j\pi\gamma\hat{t}^{\,2}\right)$ (E20)

where $\mathrm{rect}[\cdot]$ is the window function, $T_r$ is the pulse duration, $\omega_0$ is the carrier angular frequency, and $\gamma$ is the chirp rate. Let $e(t)$ denote the time synchronization error of BiSAR; the radar echo from a scatterer is then given by

$S_r(t) = \mathrm{rect}\!\left[\dfrac{t - R_{ref}/c - e(t)}{T_w}\right]\mathrm{rect}\!\left[\dfrac{t-\tau}{T_r}\right]\exp\!\left[j\omega_0(t-\tau) + j\pi\gamma(t-\tau)^2\right]$ (E21)

where the first term is the range sampling window centered at $R_{ref}$ with length $T_w$, $c$ is the speed of light, and $\tau$ is the delay corresponding to the time it takes the signal to travel the transmitter-target-receiver distance $R_B$.

Figure 5.

Impacts of time synchronization error on BiSAR data.

Figure 6.

Impact of time synchronization errors: (a) predicted time synchronization errors in 10 s . (b) impact on BiSAR image for one point target.

Considering only the time synchronization error (that is, phase synchronization is ignored here), we can obtain the demodulated signal as

$S_r'(t) = \mathrm{rect}\!\left[\dfrac{t - R_{ref}/c - e(t)}{T_w}\right]\mathrm{rect}\!\left[\dfrac{t-\tau}{T_r}\right]\exp\!\left[-j\omega_0\tau + j\pi\gamma(t-\tau)^2\right]$ (E22)

Suppose the range reference signal is

$S_{ref}(t) = \exp\!\left(j\pi\gamma t^2\right)$ (E23)

The signal, after range compression, can be expressed as

$S_{out}(r) = \mathrm{rect}\!\left[\dfrac{r - R_{ref}}{c\,T_w}\right]\mathrm{sinc}\!\left[\dfrac{B\left(r - R_B + c\,e(t)\right)}{c}\right]\exp\!\left[-j\dfrac{R_B\,\omega_0}{c}\right]$ (E24)

where $B$ is the radar signal bandwidth and $\Delta R = c\,e(t)$ is the range drift caused by time synchronization errors.

From Eq. (24) we can notice that if the two clocks deviate a lot, the radar echoes will be lost because of the drift of the echo sampling window. Fortunately, such a case hardly occurs for current radars. Hence we consider only the case where each echo is successfully received but drifted because of clock timing jitter. In other words, the collected data with the same range but different azimuths are no longer on the same range. As an example, Fig. 6(a) illustrates one typical prediction of time synchronization error. From Fig. 6(b) we can conclude that time synchronization errors will result in unfocused images, drift of the radar echoes, and displacement of targets. To focus BiSAR raw data, some time synchronization compensation technique must be applied.

Notice that the requirement of frequency stability may vary with the application. Image generation with BiSAR requires frequency coherence for at least the coherent integration time. For interferometric SAR (InSAR) (Muellerschoen et al., 2006), however, this coherence has to be extended over the whole processing time (Eineder, 2003).


3. Direct-path signal-based synchronization approach

A time and phase synchronization approach via direct-path signal was proposed in (Wang et al., 2008). In this approach, the direct-path signal of the transmitter is received with a dedicated antenna and divided into two channels: one is passed through an envelope detector and used to synchronize the sampling clock, and the other is down-converted and used to compensate the phase synchronization error. Finally, the residual time synchronization error is compensated with range alignment, and the residual phase synchronization error is compensated with GPS (global positioning system)/INS (inertial navigation system)/IMU (inertial measurement unit) information; the focusing of the BiSAR image may then be achieved.

3.1. Time synchronization

As concluded previously, if time synchronization is strict, the interval between the echo window and the PRF (pulse repetition frequency) of the receiver is a fixed value, which preserves a stable phase relationship. But once there is a time synchronization error, the start time of the echo sampling window changes, introducing a time difference between the echo sampling window (or PRI, pulse repetition interval) and the real echo signal. As a consequence, the phase relation of the sampled data is destroyed.

It is well known that, for monostatic SAR, the azimuth processing operates upon the echoes that come from target points at equal range. Because time synchronization errors (phase synchronization errors are considered separately in the subsequent phase synchronization processing) have no effect on the initial phase of each echo, they can be compensated separately with range alignment. Here, spatial-domain realignment (Chen & Andrews, 1980) is used. That is, let $f_{t_1}(r)$ and $f_{t_2}(r)$ denote the recorded complex echoes from adjacent pulses, where $t_2 - t_1 = \Delta t$ is the PRI and $r$ is the range, assumed within one PRI. If we consider only the magnitude of the echoes, then $m_{t_1}(r + \Delta r) \approx m_{t_2}(r)$, where $m_{t_1}(r) \triangleq |f_{t_1}(r)|$ and $\Delta r$ is the amount of misalignment, which we would like to estimate. Define a correlation function between the two waveforms $m_{t_1}(r)$ and $m_{t_2}(r)$ as

$R(s) \triangleq \dfrac{\int m_{t_1}(r)\, m_{t_2}(r - s)\,\mathrm{d}r}{\left[\int m_{t_1}^2(r)\,\mathrm{d}r \int m_{t_2}^2(r - s)\,\mathrm{d}r\right]^{1/2}}$ (E25)

From the Schwarz inequality, $R(s)$ is maximal at $s = \Delta r$, and the amount of misalignment can thus be determined. Note that other range alignment methods may also be adopted, such as frequency-domain realignment, recursive alignment (Delisle & Wu, 1994), and minimum entropy alignment. Another note is that sensor motion errors will also cause drift of the echo envelope, which can be corrected with motion compensation algorithms. When the transmitter and receiver move in non-parallel trajectories, the range changes of the normal channel and the synchronization channel must be compensated separately. This compensation can be achieved with motion sensors combined with effective image formation algorithms.
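A minimal sketch of the spatial-domain realignment step: the lag that maximizes the cross-correlation of two adjacent magnitude profiles estimates the misalignment $\Delta r$ in samples. The normalization of Eq. (25) is omitted here, since for profiles of similar energy it does not move the location of the peak:

```python
import numpy as np

def estimate_misalignment(m1, m2):
    """Return the integer lag (in range samples) at which the
    cross-correlation of the magnitude profiles m1 and m2 peaks,
    i.e. an estimate of Delta-r in Eq. (25)."""
    corr = np.correlate(m1, m2, mode="full")
    lags = np.arange(-(len(m2) - 1), len(m1))
    return int(lags[np.argmax(corr)])
```

For finer-than-sample accuracy, the correlation peak can be interpolated, or a frequency-domain realignment method used instead.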

3.2. Phase synchronization

After time synchronization compensation, the primary causes of phase errors include uncompensated target or sensor motion and residual phase synchronization errors. In practice, the direct-path receiver can be regarded as a strong scatterer in the process of phase compensation. To the degree that the motion sensors are able to measure the relative motion between the targets and the SAR sensors, the image formation processor can eliminate undesired motion effects from the collected signal history with GPS/INS/IMU and autofocus algorithms. This procedure is motion compensation, which is ignored here since it is beyond the scope of this chapter. Thereafter, the focusing of the BiSAR image can be achieved with autofocus image formation algorithms, e.g., (Wahl et al., 1994).

Suppose the n th transmitted pulse with carrier frequency f T n is

$x_n(t) = s(t)\exp\!\left[j\left(2\pi f_{T_n} t + \varphi_d(n)\right)\right]$ (E26)

where φ d ( n ) is the original phase, and s ( t ) is the radar signal in baseband

$s(t) = \mathrm{rect}\!\left[\dfrac{t}{T_r}\right]\exp\!\left(j\pi\gamma t^2\right)$ (E27)
Let $t_{dn}$ denote the delay time of the direct-path signal; the received direct-path signal is

$s_{dn}'(t) = s(t - t_{dn})\exp\!\left[j 2\pi \left(f_{T_n} + f_{dn}\right)(t - t_{dn})\right]\exp\!\left(j\varphi_d(n)\right)$ (E28)

where $f_{dn}$ is the Doppler frequency of the $n$th transmitted pulse. Suppose the demodulating signal in the receiver is

$s_f(t) = \exp\!\left(-j 2\pi f_{R_n} t\right)$ (E29)

Hence, the received signal in baseband is

$S_{dn}(t) = s(t - t_{dn})\exp\!\left(-j 2\pi \left(f_{T_n} + f_{dn}\right) t_{dn}\right)\exp\!\left(j 2\pi \Delta f_n t\right)\exp\!\left(j\varphi_d(n)\right)$ (E30)
with $\Delta f_n = f_{T_n} - f_{R_n}$, where $\varphi_d(n)$ is the term to be extracted to compensate the phase synchronization errors in the reflected-channel signal. A Fourier transform applied to Eq. (30) yields
$S_{dn}(f) = \mathrm{rect}\!\left[\dfrac{f - \Delta f_n}{\gamma T_r}\right]\exp\!\left[-j 2\pi (f - \Delta f_n)\, t_{dn}\right]\exp\!\left[-j\pi\dfrac{(f - \Delta f_n)^2}{\gamma}\right] \times \exp\!\left[-j 2\pi \left(f_{T_n} + f_{dn}\right) t_{dn} + j\varphi_d(n)\right]$ (E31)

Suppose the range reference function is

$S_{ref}(t) = \mathrm{rect}\!\left[\dfrac{t}{T_r}\right]\exp\!\left(j\pi\gamma t^2\right)$ (E32)

Range compression yields

$y_{dn}(t) = \left(\gamma T_o - \Delta f_n\right)\mathrm{sinc}\!\left[\left(\gamma T_o - \Delta f_n\right)\!\left(t - t_{dn} + \dfrac{\Delta f_n}{\gamma}\right)\right] \times \exp\!\left[j\pi\Delta f_n\!\left(t - t_{dn} + \dfrac{\Delta f_n}{\gamma}\right)\right]\exp\!\left\{j\!\left[2\pi \left(f_{dn} + f_{R_n}\right) t_{dn} - \dfrac{\pi\Delta f_n^2}{\gamma} + \varphi_d(n)\right]\right\}$ (E33)

We can notice that the maximum will be at $t = t_{dn} - \Delta f_n / \gamma$, where we have

$\exp\!\left[j\pi\Delta f_n\!\left(t - t_{dn} + \dfrac{\Delta f_n}{\gamma}\right)\right]\Bigg|_{t = t_{dn} - \Delta f_n/\gamma} = 1$ (E34)

Hence, the residual phase term in Eq. (33) is

$\psi(n) = 2\pi \left(f_{dn} + f_{R_n}\right) t_{dn} - \dfrac{\pi \Delta f_n^2}{\gamma} + \varphi_d(n)$ (E35)

As $\Delta f_n$ and $\gamma$ are typically on the orders of $1\,\mathrm{kHz}$ and $1\times 10^{13}\,\mathrm{Hz/s}$, respectively, the term $\pi \Delta f_n^2/\gamma$ (about $\pi\times 10^{-7}$ rad) has negligible effects. Eq. (35) can then be simplified into

$\psi(n) = 2\pi \left(f_{dn} + f_{R_n}\right) t_{dn} + \varphi_d(n)$ (E36)

In a like manner, we have

$\psi(n+1) = 2\pi \left(f_{d(n+1)} + f_{R(n+1)}\right) t_{d(n+1)} + \varphi_d(n+1)$ (E37)
Let
$f_{d(n+1)} = f_{d0} + \delta f_{d(n+1)}, \qquad f_{R(n+1)} = f_{R0} + \delta f_{R(n+1)}$ (E38)

where f d 0 and f R 0 are the original Doppler frequency and error-free demodulating frequency in receiver, respectively.

Accordingly, δ f d ( n + 1 ) and δ f R ( n + 1 ) are the frequency errors for the ( n + 1 ) th pulse. Hence, we have

$\varphi_d(n+1) - \varphi_d(n) = \left[\psi(n+1) - \psi(n)\right] - 2\pi \left(f_{R0} + f_{d0}\right)\left(t_{d(n+1)} - t_{dn}\right) - 2\pi \left(\delta f_{d(n+1)} + \delta f_{R(n+1)}\right)\left(t_{d(n+1)} - t_{dn}\right)$ (E39)

Generally, $\delta f_{d(n+1)} + \delta f_{R(n+1)}$ and $t_{d(n+1)} - t_{dn}$ are typically on the orders of $10\,\mathrm{Hz}$ and $10^{-9}\,\mathrm{s}$, respectively; hence $2\pi\left(\delta f_{d(n+1)} + \delta f_{R(n+1)}\right)\left(t_{d(n+1)} - t_{dn}\right)$ is found to be smaller than $2\pi\times10^{-8}$ rad, which has negligible effects. Furthermore, since $t_{d(n+1)}$ and $t_{dn}$ can be obtained from GPS/INS/IMU, Eq. (39) can be simplified into

$\varphi_d(n+1) - \varphi_d(n) = \psi_e(n)$ (E40)

with $\psi_e(n) = \left[\psi(n+1) - \psi(n)\right] - 2\pi \left(f_{R0} + f_{d0}\right)\left(t_{d(n+1)} - t_{dn}\right)$. We then have

$\varphi_d(2) - \varphi_d(1) = \psi_e(1)$
$\varphi_d(3) - \varphi_d(2) = \psi_e(2)$
$\quad\cdots$
$\varphi_d(n+1) - \varphi_d(n) = \psi_e(n)$ (E41)

From Eq. (41) we can obtain $\varphi_d(n)$ (up to the constant start phase $\varphi_d(1)$), and the phase synchronization compensation for the reflected channel can then be achieved with this method. Notice that the remaining motion compensation errors are usually low-frequency phase errors, which can be compensated with autofocus image formation algorithms.
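The recursion of Eq. (41) amounts to a cumulative sum of the measured differences $\psi_e(n)$; a minimal sketch (the absolute start phase $\varphi_d(1)$ is unobservable from the differences and only contributes a constant phase):

```python
import numpy as np

def reconstruct_phase(psi_e, phi_d1=0.0):
    """Recover phi_d(1..N) from the N-1 successive differences
    psi_e(n) of Eq. (41) by cumulative summation; phi_d1 is the
    (arbitrary) start phase phi_d(1)."""
    return phi_d1 + np.concatenate(([0.0], np.cumsum(psi_e)))
```

The recovered sequence $\varphi_d(n)$ is then removed from the reflected-channel data pulse by pulse.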

In summary, the time and phase synchronization compensation process may include the following steps:

Step 1, extract one pulse from the direct-path channel as the range reference function;

Step 2, direct-path channel range compression;

Step 3, estimate time synchronization errors with range alignment;

Step 4, direct-path channel motion compensation;

Step 5, estimate phase synchronization errors from direct-path channel;

Step 6, reflected channel time synchronization compensation;

Step 7, reflected channel phase synchronization compensation;

Step 8, reflected channel motion compensation;

Step 9, BiSAR image formation.


4. GPS signal disciplined synchronization approach

For the direct-path signal-based synchronization approach, the receiver must fly at a sufficient altitude and position to maintain line-of-sight contact with the transmitter. To get around this disadvantage, a GPS signal disciplined synchronization approach was investigated in (Wang, 2009).

4.1. System architecture

Because of their excellent long-term frequency accuracy, GPS-disciplined rubidium oscillators are widely used as standards of time and frequency. Here, a crystal oscillator is selected instead of rubidium because of the crystal's superior short-term accuracy. As such, high-quality space-qualified 10 MHz quartz crystal oscillators are chosen, which have a typical short-term stability of $\sigma_{Allan}(\Delta t = 1\,\mathrm{s}) = 10^{-12}$ and an accuracy of $\sigma_{rms}(\Delta t = 1\,\mathrm{s}) = 10^{-11}$. In addition to good timekeeping ability, these oscillators show low phase noise.

As shown in Fig. 7, the transmitter/receiver contains the high-performance quartz crystal oscillator, a direct digital synthesizer (DDS), and a GPS receiver. The antenna collects the GPS L1 (1575.42 MHz) signals and, if dual-frequency capable, the L2 (1227.60 MHz) signals. The radio frequency (RF) signals are filtered through a preamplifier, then down-converted to intermediate frequency (IF). The IF section provides additional filtering and amplification of the signal to levels more amenable to signal processing. The GPS signal processing component performs most of the core functions of the receiver, including signal acquisition, code and carrier tracking, demodulation, and extraction of the pseudo-range and carrier phase measurements. The details can be found in many textbooks on GPS (Parkinson & Spilker, 1996).

Figure 7.

Functional block diagram of time and phase synchronization for BiSAR using GPS disciplined USOs.

The USO is disciplined by the output pulse-per-second (PPS) and frequency-trimmed by varactor-diode tuning, which allows a small amount of frequency control on either side of the nominal value. Next, a narrow-band high-resolution DDS is applied, which allows the generation of various frequencies with extremely small step size and high spectral purity. This technique combines the good short-term stability of a high-quality USO with the long-term advantages of GPS signals. When GPS signals are lost because of deliberate interference or malfunctioning GPS equipment, the oscillator is held at the best control value and free-runs until the return of GPS allows new corrections to be calculated.

4.2. Frequency synthesis

Since the DDS is far from being an ideal source, its noise floor and spurs will be transferred to the output and amplified by $N^2$ in power ($N$ denotes the frequency multiplication factor). To overcome this limit, we mix it with the USO output instead of using the DDS as a reference directly. Figure 8 shows the architecture of a DDS-driven PLL synthesizer. The frequency of the sinewave output of the USO is 10 MHz plus a drift $\Delta f$, which is fed into a double-balanced mixer. The other input port of the mixer receives the filtered sinewave output of the DDS, adjusted to the frequency $\Delta f$. The mixer outputs upper and lower sideband carriers. The desired lower sideband is selected by a 10 MHz crystal filter; the upper sideband and any remaining carriers are rejected. This is the simplest method of single-sideband frequency generation.

Figure 8.

Functional block diagram of GPS disciplined oscillator.

The DDS output frequency is determined by its clock frequency $f_{clk}$ and an $M$-bit tuning word $2^j$ ($j \in [1, M-1]$) written to its registers, where $M$ is the length of the phase accumulator. The value $2^j$ is added to the accumulator at each clock update, and the resulting phase ramp feeds a sinusoidal look-up table followed by a DAC (digital-to-analog converter) that generates discrete steps at each update, following the sinewave form. Then, the DDS output frequency is (Vankka, 2005)

f = \frac{f_{clk} \cdot 2^{j}}{2^{M}}, \qquad j \in \{1, 2, 3, \ldots, M-1\} \qquad (E42)

Clearly, for the smallest frequency step we need to use a low clock frequency, but the lower the clock frequency, the harder it becomes to filter the clock components in the DDS output. As a good compromise, we use a clock at about 1 MHz, obtained by dividing the nominal 10 MHz USO output by 10. With M = 48, the approximate frequency resolution of the DDS output is df = 1 MHz / 2^48 = 3.55 × 10⁻⁹ Hz. This frequency is subtracted from the output frequency of the USO. The minimum step of the frequency corrector is therefore 3.55 × 10⁻⁹ Hz out of 10 MHz, i.e., a fractional frequency step of 3.55 × 10⁻¹⁶. Thereafter, the DDS may be controlled over a much larger frequency range with the same resolution while removing the USO calibration errors. Thus, we can find an exact 48-bit DDS tuning word that corrects the drift to zero by measuring our PPS, divided from the 10 MHz output, against the PPS from the GPS receiver.
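The resolution figures above can be checked with a short numerical sketch of Eq. (E42); the function name is illustrative, while the 1 MHz clock and 48-bit word length come from the text:

```python
# Sketch of the DDS frequency-step arithmetic described above.
F_CLK = 1e6          # DDS clock: 10 MHz USO output divided by 10
M = 48               # accumulator / tuning-word length in bits
F_NOMINAL = 10e6     # nominal USO output frequency (Hz)

def dds_frequency(tuning_word: int) -> float:
    """DDS output frequency for a given M-bit tuning word (Eq. E42)."""
    return F_CLK * tuning_word / 2**M

# Smallest frequency step (tuning word changes by 1):
df = dds_frequency(1)                  # = F_CLK / 2**M
fractional_step = df / F_NOMINAL       # resolution relative to 10 MHz

print(f"df = {df:.3e} Hz")             # ~3.55e-09 Hz
print(f"fractional step = {fractional_step:.3e}")  # ~3.55e-16
```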

However, we face the technical challenge of measuring the time error between the GPS and USO pulse-per-second signals. To overcome this difficulty, we apply a high-precision time interval measurement method. This technique is illustrated in Fig. 9, where the two PPS signals are used to trigger an ADC (analog-to-digital converter) to sample the sinusoid that is directly generated by the USO. Denoting the frequency of this sinusoid as f_o, for P P S _ G P S we have

T_1 = \frac{\phi_B - \phi_A}{2\pi f_o} \qquad (E43)

Figure 9.

Measuring the time error between two 1PPS signals with the interpolated sampling technique.

Similarly, for P P S _ U S O we have

T_2 = \frac{\phi_D - \phi_C}{2\pi f_o} \qquad (E44)

Hence, we can get

\Delta T = (n - m)T_0 + T_1 - T_2 \qquad (E45)

where n and m denote the counted whole clock periods of duration T_0. Since \phi_B = \phi_D, we have

\Delta T = \left( nT_0 + \frac{\phi_C}{2\pi f_o} \right) - \left( mT_0 + \frac{\phi_A}{2\pi f_o} \right) \qquad (E46)

To find a general mathematical model, suppose the collected sinewave signal with initial phase \phi_i \; (i \in \{A, C\}) is

x(n) = \cos(2\pi f_o n + \phi_i) \qquad (E47)

Partitioning x ( n ) into two nonoverlapping subsets, x 1 ( n ) and x 2 ( n ) , we have

S 1 ( k ) = F F T [ x 1 ( n ) ] ,       S 2 ( k ) = F F T [ x 2 ( n ) ] E48

The peak locations k_1 and k_2 of the two spectra satisfy

|S_1(k_1)| = \max_k |S_1(k)|, \qquad |S_2(k_2)| = \max_k |S_2(k)| \qquad (E49)

Thus, \phi_i \; (i \in \{A, C\}) can be calculated by

\phi_i = 1.5\arg[S_1(k_1)] - 0.5\arg[S_2(k_2)] \qquad (E50)

Since the parameters m, n, \phi_C, \phi_A, and f_o are all measurable, the time error between P P S _ G P S and P P S _ U S O can be obtained from (46), with the phases estimated by (50). As an example, assuming a signal-to-noise ratio (SNR) of 50 dB and f_o = 10 MHz, simulations suggest that the RMS (root mean square) measurement accuracy is about 0.1 ps. We have assumed that some parts of the measurement system are ideal; hence, there may be some variation in actual systems. The performance of single-frequency estimators is detailed in (Kay, 1989).
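The two-subset phase estimator of Eqs. (47)–(50) can be sketched as follows. The sample counts and the noise-free, on-bin test frequency are illustrative simplifications chosen so that the FFT peak phase is exact; real measurements include noise and off-bin frequencies, whose effect is analyzed in (Kay, 1989):

```python
import numpy as np

def estimate_initial_phase(x: np.ndarray) -> float:
    """Estimate the initial phase of a sampled sinusoid via Eq. (E50):
    split x into two nonoverlapping halves, take the FFT of each,
    locate the peak bins, and combine the peak-bin phases."""
    n = len(x) // 2
    s1, s2 = np.fft.fft(x[:n]), np.fft.fft(x[n:2 * n])
    k1 = np.argmax(np.abs(s1[:n // 2]))   # positive-frequency peak bins
    k2 = np.argmax(np.abs(s2[:n // 2]))
    return 1.5 * np.angle(s1[k1]) - 0.5 * np.angle(s2[k2])

# Demonstration: noise-free sinusoid with 64 cycles per 1024-sample
# block (exactly on an FFT bin), true initial phase 0.7 rad.
f_norm, phi = 64 / 1024, 0.7           # cycles/sample, phase (rad)
nsamp = np.arange(2048)
x = np.cos(2 * np.pi * f_norm * nsamp + phi)
est = estimate_initial_phase(x)
print(est)                             # close to 0.7
```

Eq. (E50) is a linear extrapolation of the two block phases back to the trigger instant, which is why the weights 1.5 and −0.5 appear.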

Finally, time and phase synchronization can be achieved by generating all needed frequencies by dividing, multiplying or phase-locking to the GPS-disciplined USO at the transmitter and receiver.

4.3. Residual synchronization errors compensation

Because GPS-disciplined USOs are adjusted to agree with GPS signals, they are self-calibrating standards. Even so, differences in the PPS fluctuations will be observed because of uncertainties in the satellite signals and the measurement process in the receiver (Cheng et al., 2005). With modern commercial GPS units, which use the L1-signal at 1575.42MHz, a standard deviation of 15ns may be observed. Using differential GPS (DGPS) or GPS common-view, one can expect a standard deviation of less than 10ns. When GPS signals are lost, the control parameters will stay fixed, and the USO enters a so-called free-running mode, which further degrades synchronization performance. Thus, the residual synchronization errors must be further compensated for BiSAR image formation.

Differences in the PPS fluctuations will result in linear phase synchronization errors, \varphi_0 + 2\pi\Delta f\, t = a_0 + a_1 t, in one synchronization period, i.e., one second. Even though the USO used in this chapter has good short-term timekeeping ability, frequency drift may be observed within one second. These errors can be modeled as quadratic phases. We model the residual phase errors in the i-th second as

\varphi_i(t) = a_{i0} + a_{i1}t + a_{i2}t^2, \qquad 0 \le t \le 1 \qquad (E51)

Motion compensation is ignored here because it can be addressed with motion sensors. Thus, after time synchronization compensation, the next step is residual phase error compensation, i.e., autofocus processing.

We use the Mapdrift autofocus algorithm described in (Mancill & Swiger, 1981). Here, the Mapdrift technique divides the i-th second of data into two nonoverlapping subapertures, each with a duration of 0.5 seconds. This concept uses the fact that a quadratic phase error across one second (one synchronization period) has a different functional form across the two half-length subapertures, as shown in Fig. 10 (Carrara et al., 1995). The phase error across each subaperture consists of a quadratic component, a linear component, and an inconsequential constant component. The quadratic phase components of the two subapertures are identical, with a center-to-edge magnitude of Ω/4 radians. The linear phase components of the two subapertures have identical magnitudes but opposite slopes. After partitioning the i-th second of azimuthal data into the two subapertures, the phase error across each subaperture is therefore approximately linear:

\varphi_{ei}(t + t_j) = b_{0j} + b_{1j}t, \qquad |t| \le \frac{T_a}{4} \qquad (E52)
with t_j = \frac{(2j-1)/2 - 1}{2}, \; j \in \{1, 2\}, i.e., the subapertures are centered at t_1 = -1/4 and t_2 = +1/4 relative to the middle of the second. Then the model for the first subaperture g_1(t) is the product of the error-free signal history s_1(t) and a complex exponential with linear phase
g_1(t) = s_1(t)\exp\left[ j(b_{01} + b_{11}t) \right] \qquad (E53)

Similarly, for the second subaperture we have

g_2(t) = s_2(t)\exp\left[ j(b_{02} + b_{12}t) \right] \qquad (E54)
Let
g_{12}(t) = g_1(t)\,g_2^{*}(t) = s_1(t)\,s_2^{*}(t)\exp\left[ j(b_{01} - b_{02}) + j(b_{11} - b_{12})t \right] \qquad (E55)

After applying a Fourier transform, we get

G_{12}(\omega) = \int_{-1/4}^{1/4} g_{12}(t)\,e^{-j\omega t}\,dt = \exp\left[ j(b_{01} - b_{02}) \right] S_{12}(\omega - b_{11} + b_{12}) \qquad (E56)

where S_{12}(\omega) denotes the error-free cross-correlation spectrum. The relative spectral shift between the two subapertures is \Delta\omega = b_{11} - b_{12}, which is directly proportional to the coefficient a_{i2} in Eq. (51).

Figure 10.

Visualization of quadratic phase error.

a_{i2} \propto \Delta\omega = b_{11} - b_{12} \qquad (E57)

Next, various methods are available to estimate this shift. The most common method is to measure the peak location of the cross-correlation of the two subaperture images.
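A minimal numerical sketch of the subaperture shift estimation of Eqs. (52)–(57), assuming a flat error-free signal history and an on-bin quadratic coefficient so that the FFT peak location is exact (both illustrative simplifications):

```python
import numpy as np

# Simulate one second of azimuth data carrying only a quadratic phase
# error a2*t^2; the error-free signal history is set to 1 so the
# estimate is easy to verify. Sample rate and a2 are illustrative.
fs = 1024                              # azimuth samples per second
a2_true = -2 * np.pi * 64              # quadratic coefficient (rad/s^2)
t = np.arange(fs) / fs
data = np.exp(1j * a2_true * t**2)

# Two nonoverlapping 0.5 s subapertures.
g1, g2 = data[:fs // 2], data[fs // 2:]

# Eq. (E55): g1*conj(g2) carries a linear phase whose slope is the
# subaperture frequency shift delta_omega = b11 - b12.
g12 = g1 * np.conj(g2)
spec = np.fft.fft(g12)
freqs = np.fft.fftfreq(len(g12), d=1 / fs)      # Hz
f_peak = freqs[np.argmax(np.abs(spec))]         # Eq. (E56): peak location

# With this subaperture spacing the recovered slope equals -a2, so
# the quadratic coefficient follows from Eq. (E57):
a2_est = -2 * np.pi * f_peak
print(a2_est, a2_true)
```

In practice the peak would be interpolated between FFT bins, and the product in Eq. (E55) would be formed from subaperture images over many range bins rather than a bare phase history.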

After compensating for the quadratic phase error a_{i2} in each second, Eq. (51) reduces to

\varphi_{ic}(t) = a_{i0} + a_{i1}t, \qquad 0 \le t \le 1 \qquad (E58)

Applying the Mapdrift procedure described above again, now to the i-th and (i+1)-th second of data, the coefficients in (58) can be derived. Define the mean-value operator \langle \varphi \rangle as

\langle \varphi \rangle \triangleq \int_{-1/2}^{1/2} \varphi \, dt \qquad (E59)

Hence, we can get

a_{i1} = \frac{\left\langle (t - \bar{t})(\varphi_{ei} - \bar{\varphi}_{ei}) \right\rangle}{\left\langle (t - \bar{t})^2 \right\rangle}, \qquad a_{i0} = \bar{\varphi}_{ei} - a_{i1}\bar{t} \qquad (E60)

where \bar{t} \triangleq \langle t \rangle and \bar{\varphi}_{ei} \triangleq \langle \varphi_{ei} \rangle. The coefficients in (51) can then be derived, i.e., the residual phase errors can be successfully compensated. This process is shown in Fig. 11.
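The mean-value fit of Eqs. (59)–(60) is an ordinary least-squares line fit over the interval; a discrete sketch, with an illustrative grid size and test coefficients:

```python
import numpy as np

def fit_linear_phase(t: np.ndarray, phi: np.ndarray):
    """Estimate the linear-phase coefficients of Eq. (E60):
    a1 = <(t - t_bar)(phi - phi_bar)> / <(t - t_bar)^2>,
    a0 = phi_bar - a1 * t_bar,
    with <.> the mean-value operator of Eq. (E59),
    approximated here by the sample mean."""
    t_bar, phi_bar = t.mean(), phi.mean()
    a1 = np.mean((t - t_bar) * (phi - phi_bar)) / np.mean((t - t_bar) ** 2)
    a0 = phi_bar - a1 * t_bar
    return a0, a1

# Residual phase of Eq. (E58) with illustrative coefficients.
t = np.linspace(0.0, 1.0, 1001)
phi = 0.3 + 2.5 * t                  # a_i0 = 0.3 rad, a_i1 = 2.5 rad/s
a0, a1 = fit_linear_phase(t, phi)
print(a0, a1)                        # recovers 0.3 and 2.5
```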

Figure 11.

Estimator of residual phase synchronization errors

Notice that a typical implementation applies the algorithm to only a small subset of available range bins, based on peak energy. An average of the individual estimates of the error coefficient from each of these range bins provides a final estimate. This procedure naturally reduces the computational burden of this algorithm. The range bins with the most energy are likely to contain strong, dominant scatterers with high signal energy relative to clutter energy. The signatures from such scatterers typically show high correlation between the two subaperture images, while the clutter is poorly correlated between the two images.

It is common practice to apply this algorithm iteratively. On each iteration, the algorithm forms an estimate and applies it to the input signal data. Typically, two to six iterations are sufficient to yield an accurate error estimate that does not change significantly on subsequent iterations. Iteration greatly improves the accuracy of the final error estimate for two reasons. First, iteration enhances the algorithm's ability to identify and discard those range bins that, for one reason or another, provide anomalous estimates on the current iteration. Second, the improved focus of the image data after each iteration results in a narrower cross-correlation peak, which leads to a more accurate determination of its location. Notice that the Mapdrift algorithm can be extended to estimate higher-order phase errors by dividing the one-second azimuthal signal history into more than two subapertures. Generally speaking, N subapertures are adequate to estimate the coefficients of an Nth-order polynomial error. However, decreased subaperture length degrades both the resolution and the signal-to-noise ratio of the targets in the images, which in turn degrades estimation performance.


5. Conclusion

Although the feasibility of airborne BiSAR has been demonstrated by experimental investigations using rather steep incidence angles, resulting in relatively short synthetic aperture times of only a few seconds, the time and phase synchronization of the transmitter and receiver remain technical challenges. In this chapter, using an analytical model of phase noise, the impacts of time and phase synchronization errors on BiSAR imaging are derived. Two synchronization approaches, direct-path signal-based and GPS signal disciplined, are investigated, along with the corresponding residual synchronization errors.

One remaining factor needed for the realization and implementation of BiSAR is spatial synchronization. Digital beamforming by the receiver is a promising solution. Combining the recorded subaperture signals in many different ways introduces high flexibility in the BiSAR configuration, and makes effective use of the total signal energy in the large illuminated footprint.


Acknowledgments

This work was supported in part by the Specialized Fund for the Doctoral Program of Higher Education for New Teachers under contract number 200806141101, the Open Fund of the Key Laboratory of Ocean Circulation and Waves, Chinese Academy of Sciences under contract number KLOCAW0809, and the Open Fund of the Institute of Plateau Meteorology, China Meteorological Administration under contract number LPM2008015.

References

  1. Auterman J. L. 1984 Phase stability requirements for a bistatic SAR. Proceedings of IEEE Nat. Radar Conf., 48-52, Atlanta, Georgia.
  2. Burkholder R. J., Gupta I. J., Johnson J. T. 2003 Comparison of monostatic and bistatic radar images. IEEE Antennas Propag. Mag., 45(3), 41-50.
  3. Carrara W. G., Goodman R. S., Majewski R. M. 1995 Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Artech House, London, ISBN 978-0-89006-728-4.
  4. Ceraldi E., Franceschetti G., Iodice A., Riccio D. 2005 Estimating the soil dielectric constant via scattering measurements along the specular direction. IEEE Trans. Geosci. Remote Sens., 43(2), 295-305.
  5. Chen C. C., Andrews H. C. 1980 Target-motion-induced radar imaging. IEEE Trans. Aerosp. Electron. Syst., 16(1), 2-14.
  6. Cheng C. L., Chang F. R., Tu K. Y. 2005 Highly accurate real-time GPS carrier phase-disciplined oscillator. IEEE Trans. Instrum. Meas., 54, 819-824.
  7. Cherniakov M., Nezlin D. V. 2007 Bistatic Radar: Principles and Practice. John Wiley, New York, ISBN 978-0-47002-630-4.
  8. Delisle G. Y., Wu H. Q. 1994 Moving target imaging and trajectory computation using ISAR. IEEE Trans. Aerosp. Electron. Syst., 30(3), 887-899.
  9. Donald R. S. 2002 Phase-Locked Loops for Wireless Communications: Digital, Analog and Optical Implementations. Kluwer Academic Publishers, New York, ISBN 978-0-79237-602-6.
  10. Eigel R., Collins P., Terzuoli A., Nesti G., Fortuny J. 2000 Bistatic scattering characterization of complex objects. IEEE Trans. Geosci. Remote Sens., 38(5), 2078-2092.
  11. Eineder M. 2003 Oscillator clock drift compensation in bistatic interferometric SAR. Proceedings of IEEE Geosci. Remote Sens. Symp., 1449-1451, Toulouse, France.
  12. Evans N. B., Lee P., Girard R. 2002 The Radarsat-2/3 topographic mission. Proceedings of Europe Synthetic Aperture Radar Symp., Cologne, Germany.
  13. Foschini G. J., Vannucci G. 1988 Characterizing filtered light waves corrupted by phase noise. IEEE Trans. Info. Theory, 34(6), 1437-1488.
  14. Gierull C. 2006 Mitigation of phase noise in bistatic SAR systems with extremely large synthetic apertures. Proceedings of Europe Synthetic Aperture Radar Symp., Dresden, Germany.
  15. Hanzo L., Webb W., Keller T. 2000 Single- and Multi-carrier Quadrature Amplitude Modulation: Principles and Applications for Personal Communications, WLANs and Broadcasting. John Wiley & Sons Ltd, ISBN 978-0-47149-239-9.
  16. Kay S. 1989 A fast and accurate single frequency estimator. IEEE Trans. Acoust., Speech, Signal Process., 37(12), 1987-1990.
  17. Krieger G., Moreira A. 2006 Spaceborne bi- and multistatic SAR: potential and challenges. IEE Proc. Radar Sonar Navig., 153(3), 184-198.
  18. Krieger G., Younis M. 2006 Impact of oscillator noise in bistatic and multistatic SAR. IEEE Geosci. Remote Sens. Lett., 3(3), 424-429.
  19. Krieger G., Cassola M. R., Younis M., Metzig R. 2006 Impact of oscillator noise in bistatic and multistatic SAR. Proceedings of IEEE Geosci. Remote Sens. Symp., 1043-1046, Seoul, Korea.
  20. Kroupa V. F. 1996 Close-to-the-carrier noise in DDFS. Proceedings of Int. Freq. Control Symp., 934-941, Honolulu.
  21. Lance A. L., Seal W. D., Labaar F. 1984 Phase-noise and AM noise measurements in the frequency domain. Infrared Millim. Waves, 11(3), 239-289.
  22. Loffeld O., Nies H., Peters V., Knedlik S. 2004 Models and useful relations for bistatic SAR processing. IEEE Trans. Geosci. Remote Sens., 42(10), 2031-2038.
  23. Moreira A., Krieger G., Hajnsek I., Werner M., Hounam D., Riegger E., Settelmeyer E. 2004 TanDEM-X: a TerraSAR-X add-on satellite for single-pass SAR interferometry. Proceedings of IEEE Geosci. Remote Sens. Symp., 1000-1003, Anchorage, USA.
  24. Muellerschoen R. J., Chen C. W., Hensley S., Rodriguez E. 2006 Error analysis for high resolution topography with bistatic single-pass SAR interferometry. Proceedings of IEEE Int. Radar Conf., 626-633, Verona, USA.
  25. Parkinson B. W., Spilker J. J. 1996 Global Positioning System: Theory and Applications. American Institute of Aeronautics and Astronautics, Washington, D.C.
  26. Rutman J. 1978 Characterization of phase and frequency instabilities in precision frequency sources: fifteen years of progress. Proceedings of the IEEE, 66(9), 1048-1073.
  27. Unolkosold P., Knedlik S., Loffeld O. 2006 Estimation of oscillator's phase offset, frequency offset and rate of change for bistatic interferometric SAR. Proceedings of Europe Synthetic Aperture Radar Symp., Dresden, Germany.
  28. Vankka J. 2005 Digital Synthesizers and Transmitters for Software Radio. Springer, Netherlands, ISBN 978-1-40203-194-6.
  29. Vannicola V. C., Varshney P. K. 1983 Spectral dispersion of modulated signals due to oscillator phase instability: white and random walk phase model. IEEE Trans. Communications, 31(7), 886-895.
  30. Wahl D. E., Eichel P. H., Ghiglia D. C., Jakowatz C. V. Jr. 1994 Phase gradient autofocus: a robust tool for high resolution SAR phase correction. IEEE Trans. Aerosp. Electron. Syst., 30(3), 827-835.
  31. Walls F. L., Vig J. R. 1995 Fundamental limits on the frequency stabilities of crystal oscillators. IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 42(4), 576-589.
  32. Wang W. Q., Cai J. Y. 2007 A technique for jamming bi- and multistatic SAR systems. IEEE Geosci. Remote Sens. Lett., 4(1), 80-82.
  33. Wang W. Q. 2007a Application of near-space passive radar for homeland security. Sens. Imag.: An Int. J., 8(1), 39-52.
  34. Wang W. Q. 2009 GPS based time and phase synchronization processing for distributed SAR. IEEE Trans. Aerosp. Electron. Syst., in press.
  35. Wang W. Q., Cai J. Y., Yang Y. W. 2006 Extracting phase noise of microwave and millimeter-wave signals by deconvolution. IEE Proc. Sci. Meas. Technol., 153(1), 7-12.
  36. Wang W. Q., Ding C. B., Liang X. D. 2008 Time and phase synchronization via direct-path signal for bistatic SAR systems. IET Radar, Sonar Navig., 2(1), 1-11.
  37. Wang W. Q., Liang X. D., Ding C. B. 2006 An Omega-K algorithm with integrated synchronization compensation. Proceedings of Int. Radar Symp., 395-398, Shanghai, China.
  38. Weiß M. 2004 Synchronization of bistatic radar systems. Proceedings of IEEE Geosci. Remote Sens. Symp., 1750-1753, Anchorage.
  39. Weiß M. 2004 Time and phase synchronization aspects for bistatic SAR systems. Proceedings of Europe Synthetic Aperture Radar Symp., 395-398, Ulm, Germany.
  40. Willis N. J. 1991 Bistatic Radar. Artech House, Boston, MA, ISBN 978-0-89006-427-6.
  41. Younis M., Metzig R., Krieger G. 2006a Performance prediction and verification for bistatic SAR synchronization link. Proceedings of Europe Synthetic Aperture Radar Symp., Dresden, Germany.
  42. Younis M., Metzig R., Krieger G. 2006b Performance prediction of a phase synchronization link for bistatic SAR. IEEE Geosci. Remote Sens. Lett., 3(3), 429-433.
  43. Zhang X. L., Li H. B., Wang J. G. 2005 The analysis of time synchronization error in bistatic SAR system. Proceedings of IEEE Geosci. Remote Sens. Symp., 4615-4618, Seoul, Korea.
  44. Zhang Y. S., Liang D. N., Wang J. G. 2006 Analysis of frequency synchronization error in spaceborne parasitic interferometric SAR system. Proceedings of Europe Synthetic Aperture Radar Symp., Dresden, Germany.
