
Theoretical Modeling for Polarimetric Scattering and Information Retrieval of SAR Remote Sensing

Written By

Ya-Qiu Jin

Published: 01 October 2009

DOI: 10.5772/8323

From the Edited Volume

Advances in Geoscience and Remote Sensing

Edited by Gary Jedlovec


1. Introduction

Synthetic aperture radar (SAR) imagery is one of the most important advances in space-borne microwave remote sensing of recent decades. Fully polarimetric scattering from complex terrain surfaces can now be measured. Understanding and retrieving information from the polarimetric scattering signatures of natural media and from SAR images has become a key issue for SAR remote sensing and its broad applications.

This paper presents research progress at Fudan University on theoretical modeling of terrain surfaces for polarimetric scattering simulation and the Mueller matrix solution, mono-static and bistatic SAR image simulation, new parameters for unsupervised surface classification, DEM inversion, change detection from multi-temporal SAR images, and reconstruction of buildings from multi-aspect SAR images, etc. Some applications are discussed.


2. Model of Vegetation Canopy and Mueller Matrix Solution

When a polarized wave $\bar{E}_{inc}(\chi,\psi)$ is incident upon natural media, the scattered field is written as

$$\begin{bmatrix} E_v^s\\ E_h^s \end{bmatrix} = \frac{e^{ikr}}{r}\begin{bmatrix} S_{vv} & S_{vh}\\ S_{hv} & S_{hh} \end{bmatrix}\begin{bmatrix} E_v^i\\ E_h^i \end{bmatrix} \equiv \frac{e^{ikr}}{r}\,\bar{S}\,\bar{E}_{inc}(\chi,\psi) \tag{1}$$

where the 2×2-D (dimensional) complex scattering amplitude matrix $\bar{S}$ can be measured by the polarimetry. The incident polarization is indicated by the ellipticity and orientation angles $(\chi,\psi)$. Using the Mueller matrix solution of the vector radiative transfer (VRT) equation (Jin, 1994) and Eq. (1), the scattered Stokes vector (four Stokes parameters) can be obtained or measured as (Jin, 2005)

$$\bar{I}_s(\theta,\phi) = \bar{M}(\theta,\phi;\,\pi-\theta_0,\phi_0)\,\bar{I}_i(\chi,\psi) \tag{2}$$

The Mueller matrix solution for the model of a layer of scatterers, as shown in Figure 1, is written as

$$\bar{M}(\theta,\phi;\,\pi-\theta_0,\phi_0) = \bar{M}_0 + \bar{M}_1 + \bar{M}_2 + \bar{M}_3 + \bar{M}_4 \tag{3}$$
$$\begin{aligned}
\bar{M}_0 &= \exp\!\left[-d\sec\theta\,\bar{\kappa}_e(\theta,\phi)\right]\bar{R}(\theta,\phi;\,\pi-\theta_0,\phi_0)\exp\!\left[-d\sec\theta_0\,\bar{\kappa}_e(\pi-\theta_0,\phi_0)\right]\\
\bar{M}_1 &= \sec\theta\int_{-d}^{0}\!dz\,\exp\!\left[z\sec\theta\,\bar{\kappa}_e(\theta,\phi)\right]\bar{P}(\theta,\phi;\,\pi-\theta_0,\phi_0)\exp\!\left[z\sec\theta_0\,\bar{\kappa}_e(\pi-\theta_0,\phi_0)\right]\\
\bar{M}_2 &= \sec\theta\int_{-d}^{0}\!dz\,\exp\!\left[z\sec\theta\,\bar{\kappa}_e(\theta,\phi)\right]\int_0^{2\pi}\!d\phi'\int_0^{\pi/2}\!d\theta'\sin\theta'\,\bar{P}(\theta,\phi;\,\theta',\phi')\exp\!\left[-(d+z)\sec\theta'\,\bar{\kappa}_e(\theta',\phi')\right]\bar{R}(\theta',\phi';\,\pi-\theta_0,\phi_0)\exp\!\left[-d\sec\theta_0\,\bar{\kappa}_e(\pi-\theta_0,\phi_0)\right]\\
\bar{M}_3 &= \sec\theta\,\exp\!\left[-d\sec\theta\,\bar{\kappa}_e(\theta,\phi)\right]\int_0^{2\pi}\!d\phi'\int_0^{\pi/2}\!\sin\theta'\,d\theta'\,\bar{R}(\theta,\phi;\,\pi-\theta',\phi')\int_{-d}^{0}\!dz\,\exp\!\left[-(d+z)\sec\theta'\,\bar{\kappa}_e(\pi-\theta',\phi')\right]\bar{P}(\pi-\theta',\phi';\,\pi-\theta_0,\phi_0)\exp\!\left[z\sec\theta_0\,\bar{\kappa}_e(\pi-\theta_0,\phi_0)\right]\\
\bar{M}_4 &= \sec\theta\,\exp\!\left[-d\sec\theta\,\bar{\kappa}_e(\theta,\phi)\right]\int_0^{2\pi}\!d\phi''\int_0^{\pi/2}\!d\theta''\sin\theta''\,\bar{R}(\theta,\phi;\,\pi-\theta'',\phi'')\int_{-d}^{0}\!dz\,\exp\!\left[-(d+z)\sec\theta''\,\bar{\kappa}_e(\pi-\theta'',\phi'')\right]\int_0^{2\pi}\!d\phi'\int_0^{\pi/2}\!d\theta'\sin\theta'\,\bar{P}(\pi-\theta'',\phi'';\,\theta',\phi')\exp\!\left[-(d+z)\sec\theta'\,\bar{\kappa}_e(\theta',\phi')\right]\bar{R}(\theta',\phi';\,\pi-\theta_0,\phi_0)\exp\!\left[-d\sec\theta_0\,\bar{\kappa}_e(\pi-\theta_0,\phi_0)\right]
\end{aligned} \tag{4}$$

where $\bar{\kappa}_e$, $\bar{P}$, $\bar{R}$ are the extinction matrix, the phase matrix of the non-spherical scatterers, and the reflectivity matrix of the underlying rough surface, respectively. The five contributions from the scatterers and the underlying surface are illustrated in Figure 1. $\bar{\kappa}_e$ and $\bar{P}$ can be expressed in terms of the correlations $\langle S_{pq}S_{st}^*\rangle$ of $\bar{S}$ in Eq. (1). The co-polarized and cross-polarized backscattering coefficients, $\sigma_c$ and $\sigma_x$, the polarization degree $m_s$ of the partially polarized scattered Stokes echo, and other functions can be numerically calculated (Jin and Chen, 2002).

Eqs. (3) and (4) are applied to the numerical simulation of polarimetric scattering from the terrain surface with the physical parameters of the scatterers (Jin, 2005), as described in the next sections.

Figure 1.

A model of non-spherical scatterers for vegetation canopy.
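As a minimal numerical sketch (not the author's code), the first two contributions of Eq. (4) can be evaluated by discretizing the layer and using matrix exponentials. All matrices below (extinction, phase, reflectivity) and the layer depth are placeholder values; in practice they come from the scatterer and rough-surface models.

```python
# Sketch: evaluate M0 and M1 of Eq. (4) for given 4x4 extinction, phase and
# reflectivity matrices (placeholders here).
import numpy as np
from scipy.linalg import expm

d = 1.0                                              # layer depth (m), assumed
theta, theta0 = np.deg2rad(40.0), np.deg2rad(40.0)   # scattered / incident angles

kappa_up = 0.3 * np.eye(4)   # extinction matrix along the upward direction (1/m), placeholder
kappa_dn = 0.3 * np.eye(4)   # extinction matrix along the downward direction (1/m), placeholder
P_back = 0.05 * np.eye(4)    # phase matrix for direct backscattering, placeholder
R_surf = 0.2 * np.eye(4)     # reflectivity matrix of the underlying surface, placeholder

# M0: attenuated reflection from the underlying surface (first line of Eq. (4))
M0 = expm(-d / np.cos(theta) * kappa_up) @ R_surf @ expm(-d / np.cos(theta0) * kappa_dn)

# M1: direct volume backscattering, integral over z in [-d, 0] (second line of Eq. (4))
z = np.linspace(-d, 0.0, 200)
dz = z[1] - z[0]
M1 = np.zeros((4, 4))
for zi in z:
    M1 += expm(zi / np.cos(theta) * kappa_up) @ P_back @ expm(zi / np.cos(theta0) * kappa_dn) * dz
M1 /= np.cos(theta)          # the sec(theta) factor in Eq. (4)

print(M0, M1, sep="\n")
```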

The Mueller matrix is a 4×4-D real matrix with, in general, complex eigenvalues and eigenvectors. To be physically realizable, this matrix must satisfy the Stokes criterion together with several other restrictive conditions. These restrictions, however, do not have any direct physical interpretation in terms of the eigenstructure of the Mueller matrix.

The coherency matrix $\bar{C}$ is applied to the study of polarimetric scattering in SAR imagery (Jin and Cloude, 1994). Define the scattering vector as

$$\bar{k}_P = \frac{1}{2}\left[S_{vv}+S_{hh},\; S_{vv}-S_{hh},\; S_{vh}+S_{hv},\; i\left(S_{vh}-S_{hv}\right)\right]^T = \frac{1}{2}\left[A,\,B,\,C,\,iD\right]^T \tag{5}$$

where A, B, C, D are sequentially defined by Eq. (5), and the subscript P denotes Pauli vectorization. The coherency matrix is defined as

$$\bar{C} = \langle\bar{k}_P\bar{k}_P^{+}\rangle = \frac{1}{4}\begin{bmatrix} \langle AA^*\rangle & \langle AB^*\rangle & \langle AC^*\rangle & \langle AD^*\rangle\\ \langle BA^*\rangle & \langle BB^*\rangle & \langle BC^*\rangle & \langle BD^*\rangle\\ \langle CA^*\rangle & \langle CB^*\rangle & \langle CC^*\rangle & \langle CD^*\rangle\\ \langle DA^*\rangle & \langle DB^*\rangle & \langle DC^*\rangle & \langle DD^*\rangle \end{bmatrix} \tag{6}$$

The eigenvalues and eigenvectors of the coherency matrix are defined by

$$\bar{C} = \langle\bar{k}_P\bar{k}_P^{+}\rangle = \sum_{i=1}^{4}\lambda_i\,\bar{k}_{Pi}\bar{k}_{Pi}^{+} \tag{7}$$

All eigenvalues are real and non-negative, with $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \lambda_4$.

The coherency matrix is also related to the Mueller matrix as

$$M_{i+1,\,s+1} = \sum_{t}\sum_{u}\frac{1}{2}\,\mathrm{Trace}\left(\bar{\sigma}_i\,\bar{\sigma}_t\,\bar{\sigma}_s\,\bar{\sigma}_u\right)\left[\bar{C}\right]_{tu} \tag{8}$$

where $\bar{\sigma}_i$, $i = 0,1,2,3$, are the Pauli matrices.

Usually, the case of $S_{vh} = S_{hv}$ is simply assumed. The entropy H is defined by the eigenvalues of the coherency matrix as

$$H = -\sum_{i=1}^{3} P_i\log_3 P_i, \qquad P_i = \lambda_i/\left(\lambda_1+\lambda_2+\lambda_3\right) \tag{9}$$

The entropy is an important feature since it relates the randomness of the scattering media to other physical parameters, such as canopy configuration, biomass, etc.

The coherency matrix is 4×4-D positive semi-definite Hermitian and as such has real non-negative eigenvalues and complex orthogonal eigenvectors. The amplitudes and differences of the four eigenvalues are functionally related to the polarimetric scattering process of the terrain canopy. Eigen-analysis of the coherency matrix yields better physical insight into the polarimetric scattering mechanisms than does the Mueller matrix; furthermore, it can be employed to physically validate a Mueller matrix by virtue of the semi-definite nature of the corresponding coherency matrix.
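A minimal sketch (not the author's code) of Eqs. (5)–(9) follows: build the coherency matrix from Pauli scattering vectors, eigen-decompose it and compute the entropy H. The sample scattering matrices are synthetic placeholders.

```python
import numpy as np

def pauli_vector(S):
    """Eq. (5): k_P = 1/2 [A, B, C, iD]^T from a 2x2 matrix S = [[Svv, Svh], [Shv, Shh]]."""
    Svv, Svh, Shv, Shh = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return 0.5 * np.array([Svv + Shh, Svv - Shh, Svh + Shv, 1j * (Svh - Shv)])

# ensemble of looks (placeholder random scattering matrices)
rng = np.random.default_rng(0)
looks = rng.normal(size=(100, 2, 2)) + 1j * rng.normal(size=(100, 2, 2))

# Eq. (6): coherency matrix as the ensemble average <k_P k_P^+>
kp = np.array([pauli_vector(S) for S in looks])
C = np.einsum('ni,nj->ij', kp, kp.conj()) / len(kp)

# Eq. (7): eigenvalues of the Hermitian positive semi-definite C are real and non-negative
lam = np.sort(np.linalg.eigvalsh(C))[::-1]

# Eq. (9): entropy from the three largest eigenvalues
P = lam[:3] / lam[:3].sum()
H = -np.sum(P * np.log(P) / np.log(3))
print("eigenvalues:", lam, "entropy H =", H)
```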

For the first-order solution with small albedo, cross-polarization is small, and correlations such as the terms $\langle CA^*\rangle$, $\langle CB^*\rangle$, $\langle DA^*\rangle$, $\langle DB^*\rangle$, etc. in Eq. (6) are always very small and may be neglected in the following derivations. In fact, under the weaker assumption of azimuthally symmetric orientation, these correlations have been proved to be zero. Applying this approximation to Eq. (6), the eigenvalues of $\bar{C}$ have been derived as (Jin and Cloude, 1994)

$$\lambda_{1,2} = \frac{1}{8}\left[\langle|A|^2\rangle + \langle|B|^2\rangle \pm \sqrt{\left(\langle|A|^2\rangle - \langle|B|^2\rangle\right)^2 + 4\langle A^*B\rangle\langle AB^*\rangle}\right], \qquad \lambda_3 = \frac{1}{4}\langle|C|^2\rangle \tag{10}$$

It is interesting to see that if $\langle AB^*\rangle\langle A^*B\rangle = \langle AA^*\rangle\langle BB^*\rangle$, i.e., in the ordered case, then $\lambda_2 = 0$.

Substituting A, B, C and D of Eq. (5) into Eq. (10), the eigenvalues are expressed in terms of the powers $\langle|S_{vv}|^2\rangle$, $\langle|S_{hh}|^2\rangle$, $\langle|S_{hv}|^2\rangle$ as follows,

$$\lambda_1 = \frac{1}{2}\left(\langle|S_{vv}|^2\rangle + \langle|S_{hh}|^2\rangle\right) - \lambda_2, \qquad \lambda_2 = \frac{1}{2}\,\frac{\langle|S_{vv}|^2\rangle\,\langle|S_{hh}|^2\rangle}{\langle|S_{vv}|^2\rangle + \langle|S_{hh}|^2\rangle}\left(1-\Delta\right), \qquad \lambda_3 = \langle|S_{hv}|^2\rangle \tag{11}$$

where the configuration parameter

$$\Delta = \frac{\langle S_{vv}S_{hh}^{*}\rangle\,\langle S_{vv}^{*}S_{hh}\rangle}{\langle|S_{vv}|^2\rangle\,\langle|S_{hh}|^2\rangle} \tag{12}$$

varies from 0 for totally random media to 1 for ordered media, i.e., the non-random case.

It can be seen that the first eigenvalue $\lambda_1$ now indicates the total vv and hh co-polarized power minus their coherent part defined by the parameter Δ of Eq. (12), which varies from 1 in the ordered case to 0 in a totally random medium. The second eigenvalue $\lambda_2$ accounts for the vv and hh coherent power: as Δ approaches one, $\lambda_2 \to 0$ for ordered media, and as Δ approaches zero, $\lambda_2$ increases, indicating the loss of vv–hh power coherency for totally random media. The third eigenvalue $\lambda_3$ is due to the depolarization caused by the randomness of the media.

The (p,q=v,h)-polarized backscattering coefficient is

$$\sigma_{pq} = 4\pi\cos\theta\,\langle|S_{pq}|^2\rangle \tag{13}$$

Substituting Eq. (13) into Eqs. (11) and (12), the functions $P_i$, $i = 1,2,3$, of Eq. (9) are then derived as

$$P_1 = 1 - P_2 - P_3, \qquad P_2 = X\left(1-\Delta\right)\left(1+\delta\right)^{-1}, \qquad P_3 = \delta\left(1+\delta\right)^{-1} \tag{14}$$

where

$$X \equiv \frac{\sigma_{hh}/\sigma_{vv}}{\left(1+\sigma_{hh}/\sigma_{vv}\right)^2}, \qquad \delta = \frac{2\sigma_{hv}}{\sigma_{vv}+\sigma_{hh}} \tag{15}$$

Note that all functions $P_i$, $i = 1,2,3$, can now be calculated from the polarized backscattering coefficients $\sigma_{pq}$, $p,q = v,h$, rather than from the full Mueller matrix solution via Eq. (7). The entropy H of Eq. (9) is then calculated using $P_i$ of Eq. (14). It can be seen that $P_1$ is related to the total vv and hh co-polarized power $\sigma_{hh}+\sigma_{vv}$; $P_2$ is due to the difference between $\sigma_{hh}$ and $\sigma_{vv}$; and $P_3$ is due to the depolarization $\sigma_{hv}$. They are modulated by the media configuration and randomness via the parameters Δ and δ. As the medium becomes more random or disordered, δ increases and Δ approaches zero; conversely, as the medium becomes more ordered, δ decreases and Δ increases. Thus, the relationship between the entropy H and the backscattering measurements $\sigma_{pq}$ ($pq = vv, hh, hv$) is established.

Define the co-polarized and cross-polarized indexes, respectively, as

$$\mathrm{CPI} \equiv 10\log_{10}\left(\sigma_{hh}/\sigma_{vv}\right)\ (\mathrm{dB}), \qquad \mathrm{XPI} \equiv \delta \tag{16}$$

The entropy H is directly related to the backscattering indexes CPI and XPI via Eqs. (14)–(16).
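A minimal sketch (not the author's code) of this shortcut follows: the entropy H is estimated directly from backscattering coefficients and an assumed configuration parameter Δ via Eqs. (13)–(16), without the full Mueller matrix solution. The numerical values are illustrative only.

```python
import numpy as np

def entropy_from_sigma(sigma_vv, sigma_hh, sigma_hv, Delta):
    """sigma_* are backscattering coefficients (linear power); Delta in [0, 1]."""
    delta = 2.0 * sigma_hv / (sigma_vv + sigma_hh)            # Eq. (15)
    ratio = sigma_hh / sigma_vv
    X = ratio / (1.0 + ratio) ** 2                            # Eq. (15)
    P2 = X * (1.0 - Delta) / (1.0 + delta)                    # Eq. (14)
    P3 = delta / (1.0 + delta)                                # Eq. (14)
    P1 = 1.0 - P2 - P3                                        # Eq. (14)
    P = np.array([P1, P2, P3])
    return -np.sum(P * np.log(P) / np.log(3))                 # Eq. (9)

# example: a moderately random canopy (illustrative values)
sigma_vv, sigma_hh, sigma_hv = 0.10, 0.07, 0.01
print("CPI =", 10 * np.log10(sigma_hh / sigma_vv), "dB")      # Eq. (16)
print("H =", entropy_from_sigma(sigma_vv, sigma_hh, sigma_hv, Delta=0.3))
```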

Figure 2(a) presents an AirSAR image of the total power of $\sigma_{vv}$, $\sigma_{hh}$ and $\sigma_{vh}$ at L band. Image data from some typical areas, such as the lake, an island surface, sparse trees, and thick forest (see the frames in the figure), are chosen, as shown by the discrete points in Figure 2(b), and compared with theoretical results for H, CPI and XPI. The entropy H is first rigorously calculated from the polarimetric data of the Mueller matrix, i.e., the eigenvalues of the coherency matrix $\bar{C}$. Using the AirSAR data $\sigma_{vv}$, $\sigma_{hh}$, $\sigma_{vh}$ and the averaged δ in the respective regions, the lines are calculated with appropriate Δ to match the center of the discrete data.

Figure 2.

An AirSAR image of the total power $\sigma_{vv}$, $\sigma_{hh}$, $\sigma_{vh}$ at L band, and entropy H vs. CPI. Line 1: Δ = 0.257, δ = 0.176; Line 2: Δ = 0.318, δ = 0.156; Line 3: Δ = 0.365, δ = 0.118; Line 4: Δ = 0., δ = 0.; Line 5: Δ = 0.731, δ = 0.0265.


3. SAR Imaging Simulation (MPA)

Given the sophisticated mechanisms of electromagnetic (EM) wave scattering and SAR signal collection, it is hard to interpret SAR imagery without the aid of forward scattering simulation based on physical modeling. Simulation of SAR imaging presents a different way to study an inhomogeneous terrain scene under SAR observation: it may evaluate or predict the SAR observation, interpret the physical scattering behavior, and take into account the parameterized characteristics of terrain surface objects. Approaches to SAR image simulation in the previous literature may be classified into two categories: one focuses on the statistical characteristics of SAR images, the other on the physical scattering process. A natural scene is more complicated, including randomly and inhomogeneously distributed, penetrable or impenetrable objects, such as vegetation canopies, man-made structures and perturbed surface topography. In particular, volumetric scattering from penetrable scatterers, e.g. forests, crops, green plants, etc., plays an essential or dominant role in SAR imagery. It is therefore meaningful to develop a computerized simulation of SAR imaging over a comprehensive terrain scene with heterogeneous terrain objects.

We present a novel approach, the mapping and projection algorithm (MPA), for polarimetric SAR imaging simulation of comprehensive scenarios, which takes into account scattering, attenuation, shadowing and multiple scattering of spatially distributed volumetric and surface scatterers (Xu and Jin, 2006). In this approach, scattering contributions from the scatterers are cumulatively summed on the slant range plane (the mapping plane) following geometric principles, while their extinction and shadowing effects are cumulatively multiplied on the ground range plane (the projection plane). This finally constructs a general imaging picture of the whole terrain scene via mapping and projection operations. A fast algorithm is devised to speed up the simulation of the whole process of scattering, extinction, mapping and projection in association with a grid partition of the comprehensive terrain scene. Our SAR simulation scheme incorporates polarimetric scattering, attenuation and shadowing of several typical terrain surfaces, as well as coherent speckle generation.

3.1. The mapping and projection algorithm (MPA)

Since the slant range is much larger than the synthetic aperture, the incidence angle to the same target is nearly invariant while the radar flies over the length of the synthetic aperture. Thus, in the imaging simulation, the whole scene is divided into cross-track lines along the azimuth dimension, and the scattering contributions from the terrain objects lying in each line (contained in the incidence plane) are calculated sequentially. This finally constructs a scattering picture, or scattering map, of the whole scene.

Following the VRT, we derive a general expression for the scattering power at range r and azimuth position x as

$$S(x,r) = \int_{\theta_0}^{\theta_1}\!\!\iint \exp\!\Big[-\!\int_{r_0}^{r} dr\,\kappa_e^{+}(x,r,\theta)\Big]\,P(x,r,\theta)\,\exp\!\Big[-\!\int_{r_0}^{r} dr\,\kappa_e^{-}(x,r,\theta)\Big]\,r\,dx\,dr\,d\theta \tag{17}$$

where the phase function P stands for scattering, and $\kappa_e^{-}(x,r,\theta)$, $\kappa_e^{+}(x,r,\theta)$ for the backward and forward extinction of a differential element $dv$ at the position $(x,r,\theta)$ of the imaging space; θ is the incidence angle. Integrating $S(x,r)$ over the $(n,i)$-th pixel, i.e., $x\in[x_n,x_{n+1})$, $s\in[s_i,s_{i+1})$, the corresponding discrete scattering map $\bar{S}_{n,i}$ is obtained. Considering polarimetric scattering, the computational discrete form of Eq. (17) for both volume and surface scatterers is given as

$$\bar{S}_{n,i} = \sum_{m=m_n}^{m_{n+1}-1}\;\sum_{p=p_i}^{p_{i+1}-1}\;\sum_{q=0}^{Q-1}\;\Big[\prod_{p'=0}^{p}\bar{E}_o^{+}(m,p',q)\Big]\,\bar{S}_o(m,p,q)\,\Big[\prod_{p'=p}^{0}\bar{E}_o^{-}(m,p',q)\Big] \tag{18}$$
$$\bar{S}_o(m,p,q) = \begin{cases} v_{ef}\,\bar{S}_{\mathrm{vol}}(m,p,q) & \text{for a volume scatterer}\\ a_{ef}\,\bar{S}_{\mathrm{surf}}(m,p,q) & \text{for a surface scatterer}\end{cases} \tag{19}$$
$$\bar{E}_o^{\pm}(m,p,q) = \begin{cases} \exp\!\left[-d_{ef}\,\bar{\kappa}_e^{\pm}(m,p,q)\right] & \text{for a volume scatterer}\\ 0 & \text{for a surface scatterer}\end{cases} \tag{20}$$

where $v_{ef} = \gamma v_0$ is the effective volume of a volume scatterer, calculated from the particle number γ and the particle volume $v_0$; $a_{ef} = \Delta x\,\Delta s$ is the effective area of a surface scatterer; $d_{ef}$ is the effective penetration depth. All of them take into account the proportion of each scatterer within the same discrete unit. The subscript o denotes the scattering or extinction of a single scatterer. Eq. (18) gives a general description of SAR imaging which deals with single scattering, attenuation and shadowing of all possible objects in the imaging space.

We further devise the mapping and projection algorithm (MPA) to compute Eq. (17) efficiently. The first step is to partition the scene and the terrain objects lying in the horizontal plane $xoy$ into multiple grids. Each grid unit contains its corresponding parts of the terrain objects overhead, e.g. ground, vegetation, building, etc.

As shown in Figure 3, two arrays $\underline{S}$ and $\underline{E}^{\pm}$ are deployed for temporary storage of scattering and extinction (the under-bar denotes an array). The mapping plane and projection plane are equally divided into discrete cells corresponding to the elements of $\underline{S}$ and $\underline{E}^{\pm}$. The attenuation or shadowing effect of each grid unit is counted in increasing order of the y axis and cumulatively multiplied into the array $\underline{E}^{\pm}$.

Figure 3.

The MPA algorithm for natural media scene.

The MPA calculation sequence for the terrain objects in one grid unit is as follows: first, calculate the scattering power of the current terrain object and accumulate it onto the array $\underline{S}$, where the attenuation or shadowing from other terrain objects is obtained from the array $\underline{E}^{\pm}$; then, calculate the attenuation or shadowing caused by the current terrain object itself and cumulatively multiply it onto the array $\underline{E}^{\pm}$; finally, when all grid units of the current line have been counted, the scattering map of that line is produced in the array $\underline{S}$.

As shown in Figure 3, a vertical segment (length L) of volume scatterers above the current grid unit is first divided into K sub-segments $L_k$, so that each sub-segment maps into one cell $\mathrm{SI}_k$ of the mapping plane while its mid-point projects to cell $\mathrm{EI}_k$ of the projection plane. After the mapping and projection, the elements of the array $\underline{S}$ are refreshed as

$$\underline{S}\left[\mathrm{SI}_k\right] := \underline{S}\left[\mathrm{SI}_k\right] + \underline{E}^{+}\left[\mathrm{EI}_k\right]\,\bar{S}_{ok}\,\underline{E}^{-}\left[\mathrm{EI}_k\right], \qquad k = 1,\dots,K \tag{21}$$

where $\bar{S}_{ok}$ is the scattering matrix of the k-th sub-segment. Assuming that the volume scatterers uniformly fill the whole sub-segment, the effective volume in Eq. (19) is $v_{ef} = \Delta x\,\Delta y\,L_k f_s$. Therefore, we have

$$\bar{S}_{ok} = \Delta x\,\Delta y\,L_k f_s\,\bar{S}_{\mathrm{vol}} = v_{ef}\,\bar{S}_{\mathrm{vol}} \tag{22}$$

where $f_s$ is the fractional volume of the scatterers. Otherwise, it is necessary to estimate the total number of scatterers γ within the sub-segment and calculate the scattering from Eq. (19). The next step is to refresh the elements of the array $\underline{E}^{\pm}$ as

$$\underline{E}^{-}\left[\mathrm{EI}_k\right] := \bar{E}_{o,k}^{-}\,\underline{E}^{-}\left[\mathrm{EI}_k\right], \qquad \underline{E}^{+}\left[\mathrm{EI}_k\right] := \underline{E}^{+}\left[\mathrm{EI}_k\right]\,\bar{E}_{o,k}^{+}, \qquad k = 1,\dots,K \tag{23}$$

where $\bar{E}_{o,k}^{\pm} = \exp(-d_{ef}\,\bar{\kappa}_e^{\pm})$ is the attenuation of the k-th sub-segment. Owing to the discrete calculation of the attenuation, the array $\underline{E}^{\pm}$ should store the averaged attenuation; thus the ratio of the shadowed area of the sub-segment to the whole projection cell should be taken into account when counting the contribution of the sub-segment to the mean attenuation. The effective depth in $\bar{E}_{o,k}^{\pm}$ is calculated as

$$d_{ef} = \frac{v_{ef}}{R_x R_r/\tan\theta} = \frac{\Delta x\,\Delta y\,L_k}{R_x R_r/\tan\theta} = \frac{\tan\theta\,\Delta x\,\Delta y\,L_k}{R_x R_r} \tag{24}$$

It can be seen from the effective depth $d_{ef}$ that if all volume scatterers of the sub-segment are redistributed to cover the whole projection cell while keeping their density invariant, then the layer depth of the redistributed scatterers along the projection line is the effective depth.
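A minimal sketch (not the author's code) of this refresh step for one vertical segment of volume scatterers, Eqs. (21)–(23), follows. Grid sizes, cell indices and the per-sub-segment matrices are placeholders to be supplied by the scene model.

```python
import numpy as np

N_MAP, N_PROJ = 512, 512
S = np.zeros((N_MAP, 4, 4))                      # mapping array (one 4x4 Mueller matrix per cell)
E_plus = np.tile(np.eye(4), (N_PROJ, 1, 1))      # upward attenuation array, initialized to unit matrices
E_minus = np.tile(np.eye(4), (N_PROJ, 1, 1))     # downward attenuation array

def refresh_segment(SI, EI, S_ok, E_ok_plus, E_ok_minus):
    """Apply Eqs. (21) and (23) for the K sub-segments of one grid unit.

    SI, EI            : mapping / projection cell indices of each sub-segment
    S_ok              : 4x4 scattering matrices of the sub-segments (Eq. (22))
    E_ok_plus/minus   : 4x4 attenuation matrices of the sub-segments
    """
    for k in range(len(SI)):
        # Eq. (21): add the attenuated scattering of sub-segment k onto its mapping cell
        S[SI[k]] += E_plus[EI[k]] @ S_ok[k] @ E_minus[EI[k]]
    for k in range(len(SI)):
        # Eq. (23): cumulatively multiply the attenuation of sub-segment k
        E_minus[EI[k]] = E_ok_minus[k] @ E_minus[EI[k]]
        E_plus[EI[k]] = E_plus[EI[k]] @ E_ok_plus[k]

# illustrative call for a segment split into K = 2 sub-segments
K = 2
S_vol = 0.01 * np.eye(4)                         # placeholder volume scattering matrix
E_att = 0.9 * np.eye(4)                          # placeholder attenuation exp(-d_ef * kappa_e)
refresh_segment(SI=[100, 101], EI=[200, 200],
                S_ok=[S_vol] * K, E_ok_plus=[E_att] * K, E_ok_minus=[E_att] * K)
```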

In a similar way, larger surface scatterers, such as vertical walls, cliff slopes, etc., are segmented according to the mapping cells. For nearly horizontal surfaces, e.g. ground or roof surfaces, the facet cut by one grid unit is small enough and projects into one projection cell, i.e., K = 1. Hence, given the facet normal vector $\hat{n}$, the effective area and the scattering in Eq. (21) are calculated in two ways:

$$\bar{S}_{ok} = a_{ef}\,\bar{S}_{\mathrm{surf}} = \begin{cases} \Delta x\,\Delta y\,\bar{S}_{\mathrm{surf}}\cos\phi_2/\cos\phi_1 & K = 1\\ \Delta x\,L_k\,\bar{S}_{\mathrm{surf}}\cos\phi_2/\sin\phi_1 & K > 1 \end{cases} \tag{25}$$

where $\phi_1$ is the angle between $\hat{n}$ and the z axis, $\phi_2$ is the angle between $\hat{n}$ and the incidence direction $\hat{i}$, and $L_k$ is the length of the sub-segment. Moreover, the array $\underline{E}^{\pm}$ should be set to zero over the whole projection interval from the first projection cell to the last one, i.e.

$$\underline{E}^{\pm}\left[j\right] := 0, \qquad \mathrm{EI}_1 \le j \le \mathrm{EI}_K \tag{26}$$

The elements of the array $\underline{E}^{\pm}$ are initialized to unit matrices at the start of the calculation for each line, while the array $\underline{S}$ should be saved before being cleared for the next line. Note that each line is strictly computed in increasing order of y, so that the attenuation caused by the terrain objects in front is counted precisely.

Double and triple scattering between terrain objects and the underlying ground are treated as shown in Figure 3. First, the ground patch involved in the multiple scattering is located by ray tracing. Then the length of the propagation path, i.e., the effective range $r_{ef}$, is given as

$$r_{ef} = \begin{cases} \left(r_{\mathrm{Object}} + r_{\mathrm{Ground}} + r_{\mathrm{Path}}\right)/2 & \text{for double scattering}\\ r_{\mathrm{Object}} + r_{\mathrm{Path}} & \text{for triple scattering} \end{cases} \tag{27}$$

In the same way, the multiple scattering of a segment of scatterers is mapped into the array $\underline{S}$ via $r_{ef}$. The same sub-segment division is employed if the mapping interval exceeds one cell.

For double scattering, the attenuation or shadowing along the propagation paths $r_{\mathrm{Object}}$ and $r_{\mathrm{Ground}}$ is the same as for single scattering of the terrain object and the ground patch, respectively, which means the array $\underline{E}^{\pm}$ can be used as well. The attenuation along the path from the terrain object to the ground patch, $r_{\mathrm{Path}}$, is omitted for simplicity. Therefore, the double scattering contribution can be written as

$$\underline{S}\left[\mathrm{SI}_{ef}\right] := \underline{S}\left[\mathrm{SI}_{ef}\right] + \underline{E}^{+}\left[\mathrm{EI}_{\mathrm{Ground}}\right]\bar{S}_{\mathrm{Ground}}^{2+}\,\rho\,\bar{S}_{\mathrm{Object}}^{2-}\,\underline{E}^{-}\left[\mathrm{EI}_{\mathrm{Object}}\right] + \underline{E}^{+}\left[\mathrm{EI}_{\mathrm{Object}}\right]\rho\,\bar{S}_{\mathrm{Object}}^{2+}\,\bar{S}_{\mathrm{Ground}}^{2-}\,\underline{E}^{-}\left[\mathrm{EI}_{\mathrm{Ground}}\right] \tag{28}$$

where $\mathrm{EI}_{\mathrm{Object}}$ and $\mathrm{EI}_{\mathrm{Ground}}$ are the projection indices of the terrain object and the ground patch, respectively; $\bar{S}_{\mathrm{Ground}}^{2\pm}$ and $\bar{S}_{\mathrm{Object}}^{2\pm}$ are the Mueller matrices of the ground patch and the terrain object along the forward and backward directions of the double scattering, respectively. The coefficient $\rho = v_{ef}$ or $a_{ef}$ is calculated similarly to Eqs. (19), (22) and (25).

For triple scattering (object-ground-object), the expression can be directly written as:

$$\underline{S}\left[\mathrm{SI}_{ef}\right] := \underline{S}\left[\mathrm{SI}_{ef}\right] + \underline{E}^{+}\left[\mathrm{EI}_{\mathrm{Object}}\right]\bar{S}_{\mathrm{Object}}^{2+}\,\bar{S}_{\mathrm{Ground}}^{3}\,\rho\,\bar{S}_{\mathrm{Object}}^{2-}\,\underline{E}^{-}\left[\mathrm{EI}_{\mathrm{Object}}\right] \tag{29}$$

where $\bar{S}_{\mathrm{Ground}}^{3}$ is the Mueller matrix of the ground patch along the triple scattering path; $\rho = v_{ef}$ or $a_{ef}$.

The correct calculation sequence for each grid unit is to count single scattering first, then multiple scattering, and finally the unit's own attenuation or shadowing. Notice that the ground patch of the multiple scattering could be located anywhere around the terrain object; for simplicity, the projection cell in the current incidence plane is taken as an approximate substitute. Some other coding techniques are adopted to guarantee correct computation and to make it more efficient.

3.2. Scattering models for terrain objects

For vegetation canopies, the VRT model of a layer of random non-spherical particles (Jin, 1994) is employed. Leaves, small twigs and thin stems are modeled as non-spherical dielectric particles under the Rayleigh or Rayleigh-Gans approximation, while branches, trunks and thick stems are modeled as dielectric cylinders. A tree model is composed of a crown and a trunk. The crown is a cloud of simple geometrical shape containing randomly oriented non-spherical particles; the trunk is an upright cylinder with its top covered by the crown. Oblate or disk-like particles and an elliptic crown are adopted for broad-leaf forest, while prolate or needle-like particles and a cone-like crown are used for needle-leaf forest. In addition, the crown shapes take a small random perturbation. Similarly, a farm field is modeled as a layer of randomly oriented non-spherical particles with perturbed layer depth. If the grid units are small enough, the segment of crown or crop cut by one grid unit can be regarded as fully filled with particles. The differences between the trunk and the crown are: (a) the trunk is solid within a grid unit, i.e., its fractional volume is set to 1; (b) double scattering between the trunk and the ground is taken into account.

Scattering from a building is treated as surface scattering from its wall and roof surfaces plus multiple interactions with the ground surface. The integral equation method (IEM) is employed to calculate the surface scattering. The building is modeled as a parallelepiped with an isosceles triangular cylinder layered upon it. Due to the orientation of the wall and roof surfaces, the incidence angle must be transferred into the local coordinates of each surface, as well as the polar basis of the wave propagation. Geometrical relationships among the wall surfaces, the roof surfaces, and the double and triple scattering between the wall and the ground can be found in Franceschetti et al. (2002). Figure 4 illustrates the building model, its image and shadowing in SAR imagery.

Figure 4.

Image and shadow of a building in SAR imagery.

For a ground facet, the tangent vectors along the x, y axes on the current grid unit are used to construct the facet plane.

The MPA approach is based on VRT theory of incoherent scattering power, not coherent summation of scattered fields. However, speckle due to coherent interference is one of the most critical characteristics of SAR imagery, especially for studies of SAR image filtering and optimization. Assuming a Gaussian probability distribution of the scattering vector $\bar{k}_L = [S_{hh}, S_{hv}, S_{vh}, S_{vv}]^T$, a random scattering vector is generated as follows:

$$\bar{k}_L = \bar{k}_0\,\bar{C}^{1/2}, \qquad k_{0i} = I_i + jQ_i, \qquad i = 1,\dots,d \tag{30}$$

where $I_i$, $Q_i$ are independent Gaussian random numbers with zero mean and unit variance. Given the positive semi-definite property of $\bar{C}$, its square root can be obtained by the Cholesky decomposition.
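A minimal sketch (not the author's code) of Eq. (30) follows, using the column-vector form $\bar{k}_L = \bar{L}\,\bar{k}_0$ with $\bar{C} = \bar{L}\bar{L}^{+}$: speckled scattering vectors are generated with a prescribed covariance via a Cholesky factor. The covariance matrix below is a placeholder in the lexicographic basis [Shh, Shv, Svh, Svv].

```python
import numpy as np

rng = np.random.default_rng(1)
C = np.array([[1.0, 0.0, 0.0, 0.6],
              [0.0, 0.2, 0.1, 0.0],
              [0.0, 0.1, 0.2, 0.0],
              [0.6, 0.0, 0.0, 1.0]], dtype=complex)   # placeholder, Hermitian positive definite

L = np.linalg.cholesky(C)                  # C = L L^+
n_looks = 10000
# k0: circular complex Gaussian with unit variance per component (I + jQ)
k0 = (rng.standard_normal((n_looks, 4)) + 1j * rng.standard_normal((n_looks, 4))) / np.sqrt(2)
kL = (L @ k0.T).T                          # each sample has covariance <kL kL^+> = C

# check: the sample covariance should be close to C
C_hat = kL.T @ kL.conj() / n_looks
print(np.round(C_hat.real, 2))
```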

3.3. Simulation results

The configuration and parameter settings of the radar and platform in the simulation are selected to follow the AIRSAR sensor of NASA/JPL.

A virtual comprehensive terrain scene is designed, as shown in Figure 5(a), according to a real DEM of Guangdong province, south China. It contains different types of forests covering the hills, crop farmland, ordered or randomly placed buildings in urban and suburban regions, roads and rivers.

A simulated scattering map (normalized scattering power in decibels, from -50 dB to 0 dB; pseudo color: R-HH, G-VV, B-HV) at L band with 12 m resolution (pixel spacing 5 m) is displayed in Figure 5(b). The simulated SAR image at L band is shown in Figure 5(c).

Figures 6(a,b) give the simulated scattering map and SAR image at C band, respectively. At this higher frequency, vegetation scattering increases, attenuation increases and the penetration depth decreases. As a result, the trees appear thinner while the shadowing becomes darker.

Figure 5.

Simulated SAR image at L band.

Figure 6.

Scattering map and simulated SAR image at C band.

At the upper-right corner, the building blocks appear as parallel lines, which reveals the dominance of double scattering in the urban area. On the other side, below the river, the buildings are oriented nearly 45 degrees to the radar flight direction; as a result, the cross-polarized scattering (blue) is stronger while the double scattering is reduced. Timberlands can be classified into two classes: broad-leaf trees in yellow, due to the balance between HH and VV scattering, and needle-leaf trees in green, due to weaker HH scattering. The needle-leaf trees cause weaker attenuation of HH and therefore stronger HH scattering from the shadowed ground, which is why most areas of needle-leaf forest appear reddish. Layover and shadowing effects caused by the mountainous topography are particularly perceptible. Croplands are generally uniform and dense, and appear as patches in the image; their VV scattering (green) is always stronger than HH, and they show distinguishable brightness due to different density, size, shape, etc. A more detailed discussion can be found in (Xu and Jin, 2006).

3.4. Bistatic SAR imaging

Bistatic SAR (BISAR), with the transmitter and receiver separated on different platforms, has attracted great interest. Most BISAR research has focused on engineering realization and signal processing algorithms, with few studies on land-based bistatic experiments or theoretical modeling of bistatic scattering. The MPA provides a fast and efficient tool for monostatic imaging simulation involving the physical scattering processes of multiple terrain objects, such as vegetation canopies, buildings and rough ground surfaces, and it is extended here to the bistatic case (Xu and Jin, 2008a).

Figure 7.

Bistatic MPA

Similar to the monostatic MPA, the bistatic MPA steps are described as follows (a code sketch is given after the steps):

(1) The arrays $\underline{E}^{+}$, $\underline{E}^{-}$ are initialized as unit matrices and $\underline{S}$ as zero matrices. Then the grid units are visited sequentially along the $+y$ direction.

(2) Perform 3D projections of the current grid unit along the incidence and scattering directions to the cells $p_d$ and $p_u$ of $\underline{E}^{-}$ and $\underline{E}^{+}$, respectively.

(3) Determine the mapping position of the current grid unit based on the information of its synthetic aperture and Doppler history, which corresponds to the cell m of the array S _ .

(4) Obtain or calculate the scattering matrix S ¯ 0 and the upward/downward attenuation matrix E ¯ 0 ± of the current grid unit.

(5) Refresh the elements of $\underline{S}$ as

$$\underline{S}\left[m\right] := \underline{S}\left[m\right] + \underline{E}^{+}\left[p_u\right]\,\bar{S}_0\,\underline{E}^{-}\left[p_d\right] \tag{31}$$

Refresh the elements of $\underline{E}^{+}$, $\underline{E}^{-}$ as

$$\underline{E}^{-}\left[p_d\right] := \bar{E}_0^{-}\,\underline{E}^{-}\left[p_d\right], \qquad \underline{E}^{+}\left[p_u\right] := \underline{E}^{+}\left[p_u\right]\,\bar{E}_0^{+} \tag{32}$$

(6) Return to Step (2) and continue with the next grid unit at the same y coordinate or, if all grid units at this coordinate have been visited, step forward to a larger y coordinate until the whole scene is exhausted.
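The following is a minimal sketch (not the author's code) of the bistatic MPA loop, steps (1)–(6) and Eqs. (31)–(32). The scene description, the projection/mapping index functions and the per-unit matrices are placeholders to be supplied by the scene model of Section 3.2.

```python
import numpy as np

N_MAP, N_PROJ = 1024, 1024
S = np.zeros((N_MAP, 4, 4))                          # step (1): scattering map
E_plus = np.tile(np.eye(4), (N_PROJ, 1, 1))          # step (1): upward (scattering-path) attenuation
E_minus = np.tile(np.eye(4), (N_PROJ, 1, 1))         # step (1): downward (incidence-path) attenuation

def simulate_line(grid_units):
    """grid_units: iterable of units sorted by increasing y, each providing
    projection cells p_d, p_u, mapping cell m, and matrices S0, E0_plus, E0_minus."""
    for unit in grid_units:
        p_d, p_u, m = unit["p_d"], unit["p_u"], unit["m"]                # steps (2)-(3)
        S0, E0p, E0m = unit["S0"], unit["E0_plus"], unit["E0_minus"]     # step (4)
        # step (5), Eq. (31): attenuated scattering accumulated on the mapping cell
        S[m] += E_plus[p_u] @ S0 @ E_minus[p_d]
        # step (5), Eq. (32): cumulative attenuation along both paths
        E_minus[p_d] = E0m @ E_minus[p_d]
        E_plus[p_u] = E_plus[p_u] @ E0p
    # step (6), the outer loop over y, is handled by the caller

# illustrative call with two placeholder grid units
unit = dict(p_d=10, p_u=20, m=5, S0=0.01 * np.eye(4),
            E0_plus=0.95 * np.eye(4), E0_minus=0.95 * np.eye(4))
simulate_line([unit, dict(unit, m=6, p_d=11)])
```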

Figures 7 and 8 illustrate the bistatic imaging process of a tree and a building. One of the differences between the bistatic and monostatic cases is the shadowing area, which is projected in both the incidence and scattering directions. Additionally, due to the split incidence and scattering directions, the double scattering terms of object–ground and ground–object are different: one reflects at the ground and then diffusely scatters from the object, the other scatters at the object and then reflects from the ground. These two terms travel different paths, which gives rise to separated double-scattering images.

Figure 8.

Composition of a tree and a house in BISAR image.

A comprehensive virtual scene, as shown in Figure 9(a), is designed for simulation and analysis. Simulated BISAR images for the AT (across-track) and TI (translational invariant) configurations are shown in Figures 9(b,c), respectively.

Figure 9.

Simulated BISAR images of AT and TI.

3.5. Unified bistatic polar bases and polarimetric analysis

It is found in the BISAR simulation that in the AT BISAR image, the bistatic scattering of terrain objects preserves polarimetric characteristics analogous to the monostatic case, while the major disparity is in the total scattering power.

However, the behavior of the polarimetric parameters is different in the image of the general TI mode. First, H is generally higher, probably because a large bistatic angle is more likely to reveal the randomness and complexity of the object and makes the scattering energy more uniformly distributed over different polarizations. Second, almost all terrain surfaces show $\alpha \approx 45^{\circ}$. It is found that the scattering energy is concentrated in the 4th component of the Pauli scattering vector $\bar{k}_P$ all over the image.

It seems that α may lose its capability to represent the polarimetric characteristics under certain bistatic configurations. Generally speaking, none of the parameters α, β, γ of the monostatic SAR image can reflect the polarimetric characteristics in the general TI case.

Conventional polarimetric parameters such as α, β, γ may largely depend on the angular settings of the sensors rather than on intrinsic properties of the target. It becomes inconvenient to interpret the physical meaning of polarimetric parameters when they are heavily entangled with the bistatic configuration.

As proposed in many studies, inverse problems of bistatic scattering are usually discussed in a coordinate system determined by the bisectrix. We believe it is important to first transform the conventional bistatic polar bases to a new set defined by the bisectrix. The unified bistatic polar bases are defined as

$$\hat{h}_i' = \frac{\hat{b}\times\hat{k}_i}{|\hat{b}\times\hat{k}_i|}, \qquad \hat{v}_i' = \hat{h}_i'\times\hat{k}_i, \qquad \hat{b} = \frac{\hat{k}_s - \hat{k}_i}{|\hat{k}_s - \hat{k}_i|} \tag{33}$$

where $\hat{b}$ denotes the bisectrix, defined as the bisector of the incident and scattered wave vectors in the scattering plane, as shown in Figure 10(a).

Figure 10.

Unified bistatic polar bases and bistatic (TI) pseudo color images coded by NPC after unified bistatic polar bases transform.

The relationship between the unified bistatic polar bases and the conventional polar bases can be described by a polar basis rotation, i.e., $\bar{E}$, $\bar{S}$ defined in $(\hat{v}_i, \hat{h}_i, \hat{k}_i)$ are re-defined as $\bar{E}'$, $\bar{S}'$ in $(\hat{v}_i', \hat{h}_i', \hat{k}_i)$. It is written as

$$\bar{E}_i' = \bar{U}_i\bar{E}_i, \qquad \bar{S}' = \bar{U}_s\,\bar{S}\,\bar{U}_i^{T}, \qquad \bar{U}_i = \begin{bmatrix} \hat{v}_i'\cdot\hat{v}_i & \hat{h}_i'\cdot\hat{v}_i\\ \hat{v}_i'\cdot\hat{h}_i & \hat{h}_i'\cdot\hat{h}_i \end{bmatrix} \tag{34}$$

where $\bar{U}_i$ is the rotation matrix. Figure 10(b) gives the bistatic TI image after the transform of Eq. (34). The normalized Pauli components (NPC) are defined as $P_i = |k_{P,i}|/|\bar{k}_P|$, $i = 1,\dots,4$. The appearance of the NPC color-coded image after the transform agrees with the convention of human vision.
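A minimal sketch (not the author's code) of Eqs. (33)–(34) follows: build the unified bistatic polar bases from the bisectrix and rotate a 2×2 scattering matrix into the new bases. The incidence/scattering directions and the scattering matrix below are arbitrary example values.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def polar_basis(ref, k):
    """Eq. (33)-style basis: h = ref x k / |ref x k|, v = h x k for propagation direction k."""
    h = unit(np.cross(ref, k))
    return h, np.cross(h, k)

k_i = unit(np.array([0.6, 0.0, -0.8]))        # incident propagation direction (example)
k_s = unit(np.array([0.0, 0.5, 0.866]))       # scattered propagation direction (example)
b = unit(k_s - k_i)                           # bisectrix, Eq. (33)

z = np.array([0.0, 0.0, 1.0])
h_i, v_i = polar_basis(z, k_i)                # conventional bases (vertical reference)
h_ip, v_ip = polar_basis(b, k_i)              # unified bases (bisectrix reference)
h_s, v_s = polar_basis(z, k_s)
h_sp, v_sp = polar_basis(b, k_s)

# Eq. (34): 2x2 rotation matrices of dot products between the new and old bases
U_i = np.array([[v_ip @ v_i, h_ip @ v_i],
                [v_ip @ h_i, h_ip @ h_i]])
U_s = np.array([[v_sp @ v_s, h_sp @ v_s],
                [v_sp @ h_s, h_sp @ h_s]])

S = np.array([[1.0 + 0.2j, 0.1], [0.1, 0.8 - 0.1j]])   # placeholder scattering matrix in (v, h)
S_unified = U_s @ S @ U_i.T                             # Eq. (34)
print(S_unified)
```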

We modify the definition of Cloude’s parameters as

$$\bar{k}_P = |\bar{k}_P|\begin{bmatrix} \cos\alpha\cos\gamma\,\exp(j\phi_1)\\ \sin\alpha\cos\beta\,\exp(j\phi_2)\\ \sin\alpha\sin\beta\,\exp(j\phi_3)\\ \cos\alpha\sin\gamma\,\exp(j\phi_4) \end{bmatrix} \tag{35}$$

The redefined α, β, γ are calculated and plotted in Figure 11 for the bistatic image of Figure 10(b). It can be seen that the redefined α, β, γ are more capable of representing and classifying the polarimetric characteristics of the different scattering types.

The redefinition of Cloude's parameters is only a preliminary attempt at bistatic polarimetric interpretation. It remains open for further systematic study and for verification using both simulation and experiments.

Figure 11.

Redefined α , β , γ after bistatic polar bases transform.

The MPA has also been applied to simulation of SAR images of the undulating lunar surface (Fa and Jin, 2009). Based on the statistics of the lunar cratered terrain, e.g. the population, dimensions and shapes of craters, the terrain features of the cratered lunar surface are numerically generated. According to the inhomogeneous distribution of the lunar surface slope, a triangulated irregular network is employed to build the digital elevation model of the lunar surface. The Kirchhoff approximation of surface scattering is then applied to simulate the lunar surface scattering, and the SAR image of the cratered lunar surface is numerically generated. Making use of the digital elevation and Clementine UVVIS data at the Apollo 15 landing site as the ground truth, an SAR image of the Apollo 15 landing site is simulated.


4. Terrain Surface Classification using De-Orientation Theory

Classification of complex terrain surfaces using polarimetric SAR imagery is one of the most important SAR applications. An unsupervised method based on the entropy H and target decomposition parameters has been well developed by Cloude et al., which extracts the target decomposition parameter α and the entropy H from eigen-analysis of the coherency matrix and constructs an unsupervised classification spectrum on the α–H plane.

Scattering from terrain targets functionally depends on the scatterer orientation, shape, dielectric property, scattering mechanism, etc. The scatterers of complex terrain surfaces are often randomly oriented and cause randomly fluctuating echoes, making classification of randomly oriented and randomly distributed targets difficult. Different scatterers with different orientations can produce similar scattering and, vice versa, the same scatterers with random orientations can produce different scattering, which confuses the classification.

In this section, a transformation of the target scattering vector that rotates the target about the line of sight is derived. Deorientation is introduced to transform the target orientation into a fixed state that minimizes the cross-polarization (min-x-pol); meanwhile, the angle ψ is extracted to indicate the deviation of the target orientation from this min-x-pol state.

A set of new parameters u, v, w is defined from the target scattering vector to indicate the ratio and phase difference of the two co-polarized (co-pol) backscattering terms and the significance of the cross-polarized (x-pol) term. All these parameters, as well as the entropy H, are applied to the classification of random surface targets.

Numerical simulations of polarimetric scattering from a single small non-spherical particle and from a layer of random non-spherical particles above a rough surface are studied to show the effectiveness of the parameters u, v, ψ, H and the capability of u, v, H for classification of complex terrain surfaces (Xu and Jin, 2005).

As examples, terrain surface classifications of a SIR-C image and an AirSAR image are presented.

4.1. De-orientation and parameterization

From Eq. (1), the scattering vector is usually defined as

$$\bar{k}_P = \left[S_{hh}+S_{vv},\; S_{hh}-S_{vv},\; 2S_x\right]^T/\sqrt{2} \quad \text{or} \quad \bar{k}_L = \left[S_{hh},\; \sqrt{2}S_x,\; S_{vv}\right]^T \tag{36}$$

where $S_x \equiv S_{hv} = S_{vh}$. Cloude et al. defined the parameterization of $\bar{k}_P$ in terms of the parameters α, β, etc.

It yields

$$a = \tan^{-1}\left(|S_{vv}|/|S_{hh}|\right), \qquad b = \tfrac{1}{2}\arg\left(S_{vv}/S_{hh}\right), \qquad c = \cos^{-1}\left(\sqrt{2}\,|S_x|/|\bar{k}_L|\right) \tag{38}$$

Rotating the polarization basis by an angle ψ about the line of sight, the electric field vector $\bar{E}$ becomes $\bar{E}'$ as

$$\bar{E}' = [R]\,\bar{E}, \qquad [R] = \begin{bmatrix} \cos\psi & \sin\psi\\ -\sin\psi & \cos\psi \end{bmatrix} \tag{39}$$

In the backscattering direction, this rotation transforms the scattering vector $\bar{k}_P$ into $\bar{k}_P'$, which can be expressed as

$$\bar{k}_P' = [U]\,\bar{k}_P, \qquad [U] = \begin{bmatrix} 1 & 0 & 0\\ 0 & \cos 2\psi & \sin 2\psi\\ 0 & -\sin 2\psi & \cos 2\psi \end{bmatrix} \tag{40}$$

Applying the minimization of cross-polarization (min-x-pol) to $\bar{k}_P'$, i.e., letting

$$\frac{\partial\,|k_{P,3}'|^2}{\partial\psi} = 0, \qquad \frac{\partial^2\,|k_{P,3}'|^2}{\partial\psi^2} > 0,$$

the deorientation angle $\psi_m$ is obtained as

$$\psi_m = \left[\operatorname{sgn}\{\cos(\phi_2-\phi_3)\}\,\frac{\beta}{2}\right] \bmod \frac{\pi}{2} \tag{41}$$

This means that a rotation by the angle $\psi_m$ about the line of sight brings $\bar{k}_P$ to the min-x-pol state, denoted as the deoriented $\bar{k}_{Pd}$.

The new parameters u, v, w are then defined from the deoriented scattering vector of Eqs. (38) and (40) as:

$$u = \sin c\,\cos 2a, \qquad v = \sin c\,\sin 2a\,\cos 2b, \qquad w = \cos c \tag{42}$$

For non-deterministic targets, the uncorrelated scattering vectors, i.e., the eigenvectors, are obtained through eigen-analysis of the coherency matrix. The most significant eigenvector, i.e., the principal eigenvector, is taken as the representative scattering vector of the non-deterministic target; thus the deorientation of non-deterministic targets is applied only to the principal eigenvector.
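A minimal sketch (not the author's code) of the deorientation idea follows: instead of the closed-form angle of Eq. (41), the rotation angle that minimizes the cross-polarized component of the Pauli vector (Eq. (40)) is found by a simple direct search, and u, v, w are then computed from the deoriented vector via Eqs. (38) and (42). All target values are placeholders.

```python
import numpy as np

def pauli_vector(Shh, Svv, Sx):
    """Eq. (36): k_P = [Shh+Svv, Shh-Svv, 2Sx]^T / sqrt(2)."""
    return np.array([Shh + Svv, Shh - Svv, 2.0 * Sx]) / np.sqrt(2.0)

def rotate(kP, psi):
    """Eq. (40): rotation of the Pauli vector about the line of sight."""
    c, s = np.cos(2 * psi), np.sin(2 * psi)
    U = np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
    return U @ kP

def deorient(kP, n=1800):
    """Numerical min-x-pol search over psi (a stand-in for the closed form of Eq. (41))."""
    psis = np.linspace(-np.pi / 4, np.pi / 4, n, endpoint=False)
    xpol = [abs(rotate(kP, p)[2]) ** 2 for p in psis]
    psi_m = psis[int(np.argmin(xpol))]
    return rotate(kP, psi_m), psi_m

def uvw(kPd):
    """Eqs. (38) and (42) evaluated on the deoriented components."""
    Shh = (kPd[0] + kPd[1]) / np.sqrt(2.0)
    Svv = (kPd[0] - kPd[1]) / np.sqrt(2.0)
    Sx = kPd[2] / np.sqrt(2.0)
    kL = np.array([Shh, np.sqrt(2.0) * Sx, Svv])
    a = np.arctan(abs(Svv) / abs(Shh))
    b = 0.5 * np.angle(Svv / Shh)
    c = np.arccos(np.sqrt(2.0) * abs(Sx) / np.linalg.norm(kL))
    return np.sin(c) * np.cos(2 * a), np.sin(c) * np.sin(2 * a) * np.cos(2 * b), np.cos(c)

# example: an oriented target with some cross-polarization (placeholder values)
kP = pauli_vector(Shh=1.0, Svv=0.3 * np.exp(1j * 0.5), Sx=0.4)
kPd, psi_m = deorient(kP)
print("psi_m =", psi_m, "u, v, w =", uvw(kPd))
```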

4.2. Numerical simulation

Based on the model of Figure 1, the polarimetric scattering is calculated, and the scattering vector and Mueller matrix are obtained. This model of a layer of random non-spherical particles above a rough surface is applied to numerical simulations of u, v, H at L band for various cases.

Figures 12(a,b) show the numerical relationship between the parameters u, v and the particle parameters: the Euler angles of orientation and the particle shape (oblate or prolate spheroids). It can be seen that |u| indicates the non-symmetry of the particle's projection along the line of sight; the dielectric property of the particle, $|\varepsilon_s|$, affects |u|, and the ratio $\varepsilon_s''/\varepsilon_s'$ affects v.

Figure 12(a) presents the distributions of some typical terrain surfaces on the |u|–H plane, obtained from the scattering model of Figure 1. It can be seen that H indicates the complexity of the layer structure of the terrain surfaces. The different dielectric properties of soil land and the oceanic surface also make them identifiable along the |u| axis.

Figure 12(b) presents the distribution of the scattering terms on the |u|–v plane, where $M_i$ denotes the i-th order scattering. It can be concluded from these figures that H indicates the canopy randomness, u is useful for distinguishing different terrain targets, and v helps to account for scattering mechanisms of different orders.

Figure 12.

Distributions of typical surfaces and different scattering mechanism.

4.3. Surface classification

The flow chart of the classification method for PolSAR data based on deorientation and the new parameters u, v, ψ is shown in Figure 13. The classification decision tree based on the parameters u, v, H is displayed in Figure 14.

Figure 13.

Flow-chart of the surface classification.

Figure 14.

Classification decision tree of parameters u , v , H .

AirSAR data at L band over the Boreal area in Canada, with rich vegetation resources, are chosen for classification and orientation analysis, as shown in Figure 15(a). Figure 15(b) is the deorientation classification over this area. The terrain surfaces are divided into 8 classes following the decision tree: Timber, Urban 2, Urban 1, Canopy 2, Canopy 1, Surface 3, Surface 2 and Surface 1.

Figure 15.

A SIR-C SAR image and surface classification.

Figure 16 shows the data distributions on the |u|–H plane for the parameter $v \ge 0.2$ and $v < 0.2$, and the corresponding classification.

Figure 16.

Classification by u , v , H for 19 classes.

The orientation distributions of several classes are selected and displayed in Figures 17(a,b). It can be seen that

  1. In the Urban 1 class (sparse forest of vertical trunks, rather than artificial constructions), the orientations are uniformly vertical, indicating orderly trunks.

  2. The random orientation in the Canopy 1 class means that the vegetation canopy in this area might be disordered bush. Note that regions with roads inside the forest might show randomness confused by the bush vegetation on the roadsides.

Following the above orientation analysis, the terrain surfaces are further classified into subclasses and identified by their types.

Figure 17.

Classification of AirSAR data, Boreal area.

4.4. Faraday Rotation and Surface Classification

To obtain moisture profiles of the subcanopy and subsurface, UHF/VHF bands (435 MHz, 135 MHz) are needed to penetrate through the dense subcanopy (~20 kg/m2 and more) with little scattering and to reach the subsurface. However, when the wave passes through the anisotropic ionosphere under the action of the geomagnetic field, the polarization vectors of the electromagnetic wave are rotated; this is called Faraday rotation (FR). The FR depends on the wavelength, the electron density, the geomagnetic field, the angle between the direction of wave propagation and the geomagnetic field, and the wave incidence angle.

Assuming that the propagation direction is not changed when passing through the homogeneous ionosphere, $ds = \sec\theta_i\,dz$ (where $\hat{z}$ is the normal to the surface), and that the geomagnetic field remains constant at its value at 400 km altitude, the FR is simply written as

$$\Omega_F \approx 2620\,\rho_e\,B(400)\,\lambda^2\,\cos\Theta_B\,\sec\theta_i \quad (\mathrm{radians}) \tag{43}$$

where $\Theta_B$ is the angle between the direction of electromagnetic wave propagation and the geomagnetic field, $\rho_e$ is the total electron content per unit area (in units of $10^{16}$ electrons/m2), and $B(400)$ is the intensity of the geomagnetic field (T) at 400 km altitude.

The FR may decrease the difference between the co-polarized backscattering terms, enhance the cross-polarized echoes, and mix the different polarized terms. Thus, satellite-borne SAR data at low frequency become distorted due to the FR effect. Since the FR is proportional to the square of the wavelength, it has an especially serious impact on SAR observations at frequencies lower than L band; the FR angle at P band can reach tens of degrees.

As a polarized electromagnetic wave passes through the ionosphere, the scattering matrix $\bar{S}^F$ with FR (indicated by the superscript F) is written in terms of the scattering matrix $\bar{S}$ without FR as follows

$$\begin{bmatrix} S_{hh}^F & S_{hv}^F\\ S_{vh}^F & S_{vv}^F \end{bmatrix} = \begin{bmatrix} \cos\Omega & \sin\Omega\\ -\sin\Omega & \cos\Omega \end{bmatrix}\begin{bmatrix} S_{hh} & S_{hv}\\ S_{hv} & S_{vv} \end{bmatrix}\begin{bmatrix} \cos\Omega & \sin\Omega\\ -\sin\Omega & \cos\Omega \end{bmatrix}$$
$$S_{hh}^F = S_{hh}\cos^2\Omega - S_{vv}\sin^2\Omega, \qquad S_{vv}^F = -S_{hh}\sin^2\Omega + S_{vv}\cos^2\Omega$$
$$S_{hv}^F = S_{hv} + \left(S_{hh}+S_{vv}\right)\sin\Omega\cos\Omega, \qquad S_{vh}^F = S_{hv} - \left(S_{hh}+S_{vv}\right)\sin\Omega\cos\Omega \tag{44}$$

The measured polarimetric data with FR, $\bar{S}^F$ or $\bar{M}^F$, are distorted and cannot be directly applied to terrain surface classification. However, the fully polarimetric 4×4-D $\bar{M}$ without FR can be inverted from $\bar{M}^F$, and it can be shown that the remaining ±π/2 ambiguity error does not affect the classification parameters u, v, H, α, A. Based on the intuitive assumption that the FR varies gradually with geographic position, a 2-D phase unwrapping method with a random benchmark is designed to eliminate the ±π/2 ambiguity error. Examples show that fully polarimetric SAR data without FR at low frequency, such as P band, can be fully recovered (Qi and Jin, 2007).
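A minimal sketch (not the author's code) of Eqs. (43)–(44) follows: estimate the Faraday rotation angle and apply it to a scattering matrix. The ionospheric and geomagnetic values are illustrative placeholders.

```python
import numpy as np

def faraday_angle(TEC, B400, wavelength, Theta_B, theta_i):
    """Eq. (43): TEC in units of 10^16 el/m^2, B400 in Tesla, wavelength in m, angles in rad."""
    return 2620.0 * TEC * B400 * wavelength**2 * np.cos(Theta_B) / np.cos(theta_i)

def apply_faraday(S, Omega):
    """Eq. (44): S^F = R(Omega) S R(Omega) with S = [[Shh, Shv], [Shv, Svv]]."""
    R = np.array([[np.cos(Omega), np.sin(Omega)],
                  [-np.sin(Omega), np.cos(Omega)]])
    return R @ S @ R

# P-band example with placeholder ionosphere parameters
Omega = faraday_angle(TEC=5.0, B400=4.5e-5, wavelength=0.69,
                      Theta_B=np.deg2rad(30.0), theta_i=np.deg2rad(35.0))
S = np.array([[1.0 + 0.1j, 0.05], [0.05, 0.8 - 0.2j]])     # [Shh, Shv; Shv, Svv], placeholder
SF = apply_faraday(S, Omega)
print("Omega (deg) =", np.degrees(Omega))
print(SF)
```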


5. Change Detection of Terrain Surface from Multi-Temporal SAR Images

Multi-temporal SAR remote sensing imagery provides a fast and practical means of surveying and assessing vast terrain changes. One direct application is to detect and classify information on changes in the terrain surfaces. It would be laborious to make an intuitive assessment of a huge amount of multi-temporal image data over a vast area, and such an assessment based on qualitative gray-level analysis is not accurate and might lose important information. How to detect and automatically analyze information on changes in the terrain surfaces is a key issue in remote sensing.

In this section, a two-threshold EM and MRF algorithm (2EM-MRF) is developed to detect the direction of change, i.e., the backscattering-enhanced, backscattering-reduced and unchanged regimes, from the SAR difference image (Jin and Luo, 2004; Jin and Wang, 2009).

On May 12, 2008, a major earthquake, measuring 8.0 on the Richter scale, jolted the Wenchuan area of Sichuan Province in southwestern China. To evaluate the damage and terrain surface changes caused by the earthquake, multi-temporal ALOS PALSAR image data acquired before and after the earthquake are applied to detection and classification of the terrain surface changes. Using Google Earth for surface mapping, the surface change situation after the earthquake, overlaid on the DEM topography, can be presented in multi-azimuth views as an animated cartoon. The detection and classification are also compared with optical photographs. It is proposed that multi-threshold EM and MRF analysis may become feasible when multi-polarization, multi-channel, multi-sensor and multi-temporal image data become available.

5.1. Two-threshold expectation maximization algorithm

Consider two co-registered images $X_1$ and $X_2$ of size $I \times J$ at two different times $(t_1, t_2)$. Their difference image is $X_D = X_2 - X_1$, denoted as $X_D = \{X(i,j),\ 1 \le i \le I,\ 1 \le j \le J\}$ (in some cases, the ratio $X = X_2/X_1$ can also be used).

Usually, a one-threshold expectation-maximization (EM) algorithm is employed to classify two opposite classes: the unchanged class $\omega_n$ and the changed class $\omega_c$. The probability density function $p(X)$, $X \in X_D$, is modeled as a mixture distribution of two density components associated with $\omega_n$ and $\omega_c$, i.e.

$$p(X) = p(X\,|\,\omega_n)\,P(\omega_n) + p(X\,|\,\omega_c)\,P(\omega_c) \tag{45}$$

Assume that both conditional probabilities $p(X|\omega_n)$ and $p(X|\omega_c)$ are modeled by Gaussian distributions. The iterative equations for estimating the statistical parameters and the a priori probability of the class $\omega_n$ are

$$P^{t+1}(\omega_n) = \frac{1}{IJ}\sum_{X(i,j)\in X_D}\frac{P^{t}(\omega_n)\,p^{t}\!\left(X(i,j)\,|\,\omega_n\right)}{p^{t}\!\left(X(i,j)\right)} \tag{46a}$$
$$\mu_n^{t+1} = \frac{\displaystyle\sum_{X(i,j)\in X_D}\frac{P^{t}(\omega_n)\,p^{t}\!\left(X(i,j)\,|\,\omega_n\right)}{p^{t}\!\left(X(i,j)\right)}\,X(i,j)}{\displaystyle\sum_{X(i,j)\in X_D}\frac{P^{t}(\omega_n)\,p^{t}\!\left(X(i,j)\,|\,\omega_n\right)}{p^{t}\!\left(X(i,j)\right)}} \tag{46b}$$
$$\left(\sigma_n^{2}\right)^{t+1} = \frac{\displaystyle\sum_{X(i,j)\in X_D}\frac{P^{t}(\omega_n)\,p^{t}\!\left(X(i,j)\,|\,\omega_n\right)}{p^{t}\!\left(X(i,j)\right)}\,\left[X(i,j)-\mu_n^{t}\right]^{2}}{\displaystyle\sum_{X(i,j)\in X_D}\frac{P^{t}(\omega_n)\,p^{t}\!\left(X(i,j)\,|\,\omega_n\right)}{p^{t}\!\left(X(i,j)\right)}} \tag{46c}$$

where the superscripts t and t +1 denote the current and next iterations.

Analogously, these equations can also be used to estimate $P(\omega_c)$, $\mu_c$ and $\sigma_c^2$.

The estimates are computed starting from initial values by iterating the above equations until convergence. The initial values of the estimated parameters can be obtained from the analysis of the histogram of the difference image: a pixel subset $S_n$ likely to belong to $\omega_n$ and a pixel subset $S_c$ likely to belong to $\omega_c$ are obtained by preliminarily selecting two thresholds $T_n$ and $T_c$ on the histogram. This is equivalent to solving for the ML boundary $T_o$ between the two classes $\omega_n$ and $\omega_c$ on the difference image. The optimal threshold $T_o$ requires

$$\frac{P(\omega_c)}{P(\omega_n)} = \frac{p(X\,|\,\omega_n)}{p(X\,|\,\omega_c)} \tag{47}$$

Under the assumption of Gaussian distributions, Eq. (47) yields

$$\left(\sigma_n^{2}-\sigma_c^{2}\right)T_o^{2} + 2\left(\mu_n\sigma_c^{2}-\mu_c\sigma_n^{2}\right)T_o + \mu_c^{2}\sigma_n^{2} - \mu_n^{2}\sigma_c^{2} + 2\sigma_n^{2}\sigma_c^{2}\ln\!\left[\frac{\sigma_c\,P(\omega_n)}{\sigma_n\,P(\omega_c)}\right] = 0 \tag{48}$$

from which the optimal $T_o$ is obtained.
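A minimal sketch (not the author's code) of Eqs. (46)–(48) follows: EM iterations for a two-class Gaussian mixture on the difference image, followed by the optimal threshold from the quadratic equation (48). The data are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.concatenate([rng.normal(0.0, 1.0, 20000),      # "unchanged" pixels
                    rng.normal(4.0, 1.5, 5000)])       # "changed (enhanced)" pixels

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# initial values, e.g. from the histogram tails
Pn, mun, varn = 0.5, np.percentile(X, 30), np.var(X)
Pc, muc, varc = 0.5, np.percentile(X, 95), np.var(X)

for _ in range(50):                                    # Eq. (46a-c), iterated to convergence
    pn = Pn * gauss(X, mun, varn)
    pc = Pc * gauss(X, muc, varc)
    gn, gc = pn / (pn + pc), pc / (pn + pc)            # posterior weights per pixel
    Pn, Pc = gn.mean(), gc.mean()
    mun, muc = (gn * X).sum() / gn.sum(), (gc * X).sum() / gc.sum()
    varn = (gn * (X - mun) ** 2).sum() / gn.sum()
    varc = (gc * (X - muc) ** 2).sum() / gc.sum()

# Eq. (48): quadratic in To; take the root lying between the two class means
a = varn - varc
b = 2 * (mun * varc - muc * varn)
c = muc**2 * varn - mun**2 * varc + 2 * varn * varc * np.log(np.sqrt(varc) * Pn / (np.sqrt(varn) * Pc))
roots = np.roots([a, b, c]).real
To = [r for r in roots if min(mun, muc) < r < max(mun, muc)][0]
print("To =", To)
```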

Figures 18(a,b) are the ALOS PALSAR images (L band, HH polarization) of Beichuan County, Wenchuan area, acquired on February 17 and May 19, 2008, before and after the earthquake, respectively. The spatial resolution is 4.7 m × 4.5 m. Figure 18(c) is the difference image of Figures 18(a,b), from which it is very difficult to evaluate the terrain surface changes by visual inspection alone, especially to accurately classify the change classes over a large area.

Figure 18.

ALOS PALSAR images of Beichuan.

The pixels of the difference image (i.e., $X_D = \sigma_D^0 = \sigma_2^0 - \sigma_1^0$) are classified into three classes: $\omega_{c1}$ with $\sigma_D^0$ enhanced, $\omega_n$ with $\sigma_D^0$ unchanged and $\omega_{c2}$ with $\sigma_D^0$ reduced. Thus, the probability density function $p(X)$ is modeled as a mixture density distribution consisting of three components:

$$p(X) = p(X\,|\,\omega_{c1})\,P(\omega_{c1}) + p(X\,|\,\omega_n)\,P(\omega_n) + p(X\,|\,\omega_{c2})\,P(\omega_{c2}) \tag{49}$$

The threshold $T_{o1}$ is first obtained by applying the EM algorithm to the enhanced class $\omega_{c1}$ and the non-enhanced class $\omega_{n1}$ ($\omega_n$ and $\omega_{c2}$). Then $T_{o2}$ is obtained by applying the EM algorithm to the reduced class $\omega_{c2}$ and the non-reduced class $\omega_{n2}$ ($\omega_n$ and $\omega_{c1}$), where $\omega_{n1} = \omega_n \cup \omega_{c2}$ and $\omega_{n2} = \omega_n \cup \omega_{c1}$. Finally, the two results are superposed to obtain the final classification.

Figure 19.

The histogram of the difference image and change detection using 2EM method.

As an example, Figure 19(a) shows the gray-level histogram of a small part of the difference image, Figure 18(c). The probability distributions of the three classes are outlined in this figure; the histogram is normalized in accordance with the probability density distributions.

The initial statistical parameters of the subsets
$$S_{n1} = \{X(i,j)\,|\,0 \le X(i,j) \le T_{n1}\}, \qquad S_{c1} = \{X(i,j)\,|\,X(i,j) \ge T_{c1}\},$$
$$S_{n2} = \{X(i,j)\,|\,T_{n2} \le X(i,j) \le 0\}, \qquad S_{c2} = \{X(i,j)\,|\,X(i,j) \le T_{c2}\} \tag{50}$$
are derived, first for $S_{n1}$, $S_{c1}$ and then for $S_{n2}$, $S_{c2}$.

These initial statistical parameters are related to the classes $\omega_{n1}$, $\omega_{c1}$ and $\omega_{n2}$, $\omega_{c2}$, respectively. Then, Eqs. (46a-c) are used sequentially to perform the iterations on the four classes, i.e., for the a priori probabilities and other statistical parameters [$P(\omega_{n1})$, $\mu_{n1}$, $\sigma_{n1}^2$], [$P(\omega_{c1})$, $\mu_{c1}$, $\sigma_{c1}^2$], [$P(\omega_{n2})$, $\mu_{n2}$, $\sigma_{n2}^2$] and [$P(\omega_{c2})$, $\mu_{c2}$, $\sigma_{c2}^2$]. Solving Eq. (48), the thresholds $T_{o1} = 3.1643$ dB and $T_{o2} = -2.8381$ dB are obtained.

Figure 19(b) shows the change detection using the EM algorithm, where red indicates areas with enhanced scattering, blue indicates areas with reduced scattering, and green denotes unchanged areas. This gives a three-class change map of the difference image.

5.2. The change detection using 2EM-MRF

Actually, the pixels of the image are spatially correlated, i.e., a pixel belonging to the class $\omega_k$ is likely to be surrounded by pixels belonging to the same class. Taking spatial context into account may therefore yield more reliable and accurate change detection.

Let the set $C = \{C_\ell,\ 1 \le \ell \le L\}$ represent the possible sets of labels of the difference image:

$$C_\ell = \{C_\ell(i,j),\ 1 \le i \le I,\ 1 \le j \le J\}, \qquad C_\ell(i,j) \in \{\omega_{c1}, \omega_{c2}, \omega_n\} \tag{51}$$

where $L = 3^{IJ}$. According to the Bayes rule for minimum error, the classification result should maximize the posterior conditional probability,

$$C_k = \arg\max_{C_\ell\in C}\left\{P(C_\ell\,|\,X_D)\right\} = \arg\max_{C_\ell\in C}\left\{P(C_\ell)\,p(X_D\,|\,C_\ell)\right\} \tag{52}$$

where $P(C_\ell)$ is the prior model for the class labels, and $p(X_D|C_\ell)$ is the joint density function of the pixel values in the difference image $X_D$ given the set of labels $C_\ell$.

The MRF algorithm is employed with a 5×5-pixel spatial neighborhood system to take spatial texture into account. The generation of the final change-detection map involves labeling all the pixels in the difference image so that the posterior probability of Eq. (52) is maximized.

The maximization is equivalent to the minimization of a Gibbs energy function, and the MRF algorithm is carried out iteratively.
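As a minimal sketch (not the author's code), one common way to carry out such an iteration is iterated conditional modes (ICM) with Gaussian class likelihoods and a simple Potts smoothness term as the Gibbs energy; the class parameters and the weight beta below are placeholders, and the chapter's actual MRF formulation may differ in its energy terms and neighborhood.

```python
import numpy as np

def icm_change_map(X, means, varis, priors, beta=1.5, n_iter=5):
    """X: 2-D difference image. Returns a label map with values 0..K-1."""
    K = len(means)
    # data energy: negative log of prior times Gaussian likelihood, per class
    data = np.stack([0.5 * np.log(2 * np.pi * varis[k]) + (X - means[k]) ** 2 / (2 * varis[k])
                     - np.log(priors[k]) for k in range(K)], axis=-1)
    labels = np.argmin(data, axis=-1)          # initial map = pixel-wise ML classes (e.g. from EM)
    for _ in range(n_iter):
        pad = np.pad(labels, 1, mode='edge')
        # labels of the 4-neighborhood of every pixel
        nb = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]])
        # Potts smoothness energy: number of disagreeing neighbors per candidate class
        smooth = np.stack([(nb != k).sum(axis=0) for k in range(K)], axis=-1)
        labels = np.argmin(data + beta * smooth, axis=-1)
    return labels

X = np.random.default_rng(3).normal(0.0, 1.0, (64, 64))
X[20:40, 20:40] += 4.0                         # a synthetic "enhanced" patch
lab = icm_change_map(X, means=[-4.0, 0.0, 4.0], varis=[2.0, 1.0, 2.0], priors=[0.1, 0.8, 0.1])
print(np.bincount(lab.ravel(), minlength=3))
```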

Figure 20(a) presents the final result of 2EM-MRF detection and classification of the terrain surface changes from the difference image, Figure 18(c). The numbers 1~8 indicate areas with typical changes.

It can be seen, for example, that in area 1 the river was largely blocked by landslides; in area 2, landslides caused large-scale blockages; area 3 is Beichuan town, where the terrain surface was significantly disturbed and roughened due to landslides and building collapse; in areas 4 and 5, the river was blocked by landslides along the river lines; in area 6, the highway was significantly blocked by landslides; in area 7, there were collective landslides oriented almost toward one azimuth direction, and the flat area of reduced scattering along the river might be due to the seasonally higher water level in May compared with February; in area 8, the collapse of mountain blocks caused the terrain surface to undulate, enhancing the scattering.

Figure 20(b) gives the topography and contour lines of the same area from the Google Earth website, which is useful for locating where and what kind of changes have happened. It can be seen that the landslides, e.g. in areas 1 and 7, are correlated with the direction and magnitude of the slopes. A quantitative analysis to assess the landslides and surface damage can be further developed.

Figure 20.

Automatic detection and classification of terrain surface changes using 2EM-MRF.

As an example, Figure 20(c) compares the change in region 3 with a photograph released to the public on the website of the Ministry of the State Resources of China. It can be seen in both figures that A shows landslides that smooth the surface, and B shows the blocked highway. It can also be seen that optical shadowing does not actually confuse these classifications.

Using the Google Earth mapping tool with the 2EM-MRF results, the terrain surface changes classified into the three types and overlaid on the DEM topography can be shown in multi-azimuth views as animated cartoons. Since the whole 2EM-MRF process is automatic and carried out in real time, it should be helpful, especially for directing rescue work at a disaster scene. Figure 21 shows four such views.

It can be seen that the information retrievable from a single amplitude image (scattered power) with one frequency and mono-polarization at one time is very limited. With fully polarimetric technology, for example, using target decomposition theory, the deorientation transform, etc., the physical status of terrain surfaces such as vegetation canopy, roads, buildings, rivers, etc. can be well classified, which would be of great help for change detection. The four Stokes parameters of a polarimetric measurement can reveal the surface slope variation and anisotropy, and InSAR has been successfully applied to the retrieval of the surface digital elevation. All of this progress shows superiority over single-amplitude image analysis and manual visual estimation. It is also helpful to fuse CFAR (constant false alarm rate) detection of objects, e.g. edges and blocks, from a SAR image.

As multi-polarization, multi-channel and multi-temporal image data become available, 2EM-MRF can be further extended to utilize the principal characteristic parameters of the difference image, such as VV+HH, VV-HH, entropy, and the angles α, β, γ, ψ, for change detection and classification, and more information about terrain surface changes can be retrieved.

Figure 21.

Multi-azimuth views in animated cartoon showing terrain surface changes on Google Earth mapping.


6. DEM Inversion from a Single POL-SAR Image

The co-polarized or cross-polarized backscattering signature is a function of the incident wave polarization, described by the ellipticity angle χ and orientation angle ψ. Recently, polarimetric InSAR image data have been utilized to generate digital surface elevation and to invert terrain topography. When the terrain surface is flat, the polarimetric scattering signature generally has its maximum at the orientation angle ψ = 0. However, it has been shown that when the surface is tilted, the orientation angle ψ at the maximum of the co-polarized (co-pol) or cross-polarized (cross-pol) signature is shifted from ψ = 0. This shift can be applied to invert the surface slopes. Making use of the assumption of real co-pol and zero cross-pol scattering amplitude functions, the ψ shift has been expressed in terms of the real scattering amplitude functions. Since the range and azimuth angles are coupled, two- or multi-pass SAR image data are required to solve for the two unknown surface slopes. This approach has been well demonstrated for inversion of digital elevation mapping (DEM) and terrain topography using airborne SAR data.

However, the scattering signature is an ensemble average of the echo power from random scattering media. The measurable Stokes parameters, as polarized scattering intensities, should be directly related to the ψ shift. In this section, using the Mueller matrix solution, the ψ shift is newly derived as a function of three Stokes parameters, $I_{vs}$, $I_{hs}$, $U_s$, which are measurable by polarimetric SAR imagery.

Using the Euler angle transformation between the principal and local coordinates, the orientation angle ψ is related to both the range and azimuth angles, β and γ, of the tilted surface pixel and to the radar viewing geometry. These results are consistent with Lee et al. (1998) and Schuler (2000), but are more general.

It is proposed that the linear texture of the tilted surface alignment be used to specify the azimuth angle γ. An adaptive threshold method and an image morphological thinning algorithm are applied to determine the azimuth angle γ from the image linear textures. The range angle β is then solved, and both β and γ are utilized to obtain the azimuth and range slopes. Finally, the full multi-grid algorithm is employed to solve the Poisson equation for the DEM and produce the terrain topography from a single-pass polarimetric SAR image (Jin, 2005; Jin and Luo, 2003).

6.1. The Ψ shift as a function of the Stokes parameters

Consider a polarized electromagnetic wave incident upon the terrain surface at $(\pi-\theta_i, \phi_i)$. The incident polarization is defined by the ellipticity angle χ and the orientation angle ψ (Jin, 1994), and the 2×2-D complex scattering amplitude functions are obtained from the polarimetric measurement. The incident Stokes vector and the co-polarized backscattering coefficient are written as

$$\bar{I}_i(\chi,\psi) = \left[I_{vi},\, I_{hi},\, U_i,\, V_i\right]^T = \left[\tfrac{1}{2}(1-\cos 2\chi\cos 2\psi),\ \tfrac{1}{2}(1+\cos 2\chi\cos 2\psi),\ \cos 2\chi\sin 2\psi,\ \sin 2\chi\right]^T \tag{53}$$

$$\sigma_c = 4\pi\cos\theta_i\, P_n \tag{54}$$

where

$$P_n = 0.5\left[I_{vs}(1-\cos 2\chi\cos 2\psi) + I_{hs}(1+\cos 2\chi\cos 2\psi) + U_s\cos 2\chi\sin 2\psi + V_s\sin 2\chi\right] \tag{55}$$

When the terrain surface is flat, the co-pol backscattering $\sigma_c$ versus the incident polarization (χ, ψ) has its maximum at ψ = 0. However, it has been shown that when the surface is tilted, the orientation angle ψ at the maximum of $\sigma_c$ is shifted away from ψ = 0.

Setting $\partial P_n/\partial\psi = 0$ at the maximum of $\sigma_c$, with χ = 0 for the symmetric case, yields

$$0 = (I_{hs} - I_{vs})\sin 2\psi + U_s\cos 2\psi + 0.5\,(I_{hs} + I_{vs})' + 0.5\,U_s'\sin 2\psi + 0.5\,(I_{hs} - I_{vs})'\cos 2\psi \tag{56}$$

where the prime denotes $\partial/\partial\psi$. It can be shown that

$$0.5\,(I_{hs} + I_{vs})' \sim (M_{13} + M_{23})\cos 2\psi \sim (\mathrm{Re}\,S_{vv}S_{vh}^{*} + \mathrm{Re}\,S_{hv}S_{hh}^{*})\cos 2\psi \tag{57a}$$
$$0.5\,U_s' \sim M_{33}\cos 2\psi \sim \mathrm{Re}\,(S_{vv}S_{hh}^{*} + S_{vh}S_{hv}^{*})\cos 2\psi \tag{57b}$$
$$(I_{hs} - I_{vs}) \sim 0.5\,(M_{11} + M_{22})\cos 2\psi \sim 0.5\,(|S_{vv}|^2 + |S_{hh}|^2)\cos 2\psi \tag{57c}$$
$$0.5\,(I_{hs} - I_{vs})' \sim 0.5\,(M_{13} - M_{23})\cos 2\psi \sim 0.5\,(\mathrm{Re}\,S_{vv}S_{vh}^{*} - \mathrm{Re}\,S_{hv}S_{hh}^{*})\cos 2\psi \tag{58a}$$
$$U_s \sim 0.5\,(M_{31} + M_{32}) \sim \mathrm{Re}\,S_{vv}S_{hv}^{*} + \mathrm{Re}\,S_{vh}S_{hh}^{*} \tag{58b}$$

It can be seen that (57a) and (57b) are much smaller than (57c), and (58a) is much smaller than (58b), so the last three terms on the RHS of Eq. (56) can be neglected. This yields the ψ shift at the $\sigma_c$ maximum expressed by the Stokes parameters as

$$\tan 2\psi = \frac{U_s}{I_{hs} - I_{vs}} \tag{59}$$

It can be seen that it is a non-zero third Stokes parameter, $U_s \neq 0$, that causes the ψ shift.

Note that if both $U_s$ and $I_{hs} - I_{vs}$ approach zero, e.g. for scattering from uniformly oriented scatterers or isotropic scatter media such as a thick vegetation canopy, ψ cannot be well defined by Eq. (59).
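For illustration, a minimal numerical sketch of Eq. (59) is given below (Python/numpy is assumed; the array names stand for per-pixel images of the scattered Stokes parameters and are only illustrative). The degenerate case noted above is masked out.

```python
import numpy as np

def psi_shift(I_vs, I_hs, U_s, eps=1e-6):
    """Orientation-angle shift of Eq. (59): tan(2*psi) = U_s / (I_hs - I_vs).

    I_vs, I_hs, U_s : 2-D arrays of scattered Stokes parameters (per pixel).
    Returns psi in radians; pixels where both U_s and I_hs - I_vs vanish
    (e.g. isotropic scatter media) are set to NaN, since Eq. (59) is
    undefined there.
    """
    diff = I_hs - I_vs
    psi = 0.5 * np.arctan2(U_s, diff)          # principal value in (-pi/2, pi/2]
    undefined = (np.abs(U_s) < eps) & (np.abs(diff) < eps)
    return np.where(undefined, np.nan, psi)
```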

6.2. The range and azimuth slopes and DEM inversion

The polarization vectors $\hat{h}_i$, $\hat{v}_i$ of the incident wave at $(\pi-\theta_i, \phi_i)$ in the principal coordinates $(\hat{x}, \hat{y}, \hat{z})$ are defined as (Jin, 1994)

$$\hat{h}_i = \frac{\hat{z}\times\hat{k}_i}{|\hat{z}\times\hat{k}_i|} = -\sin\phi_i\,\hat{x} + \cos\phi_i\,\hat{y}, \qquad \hat{v}_i = \hat{h}_i\times\hat{k}_i \tag{60}$$

where the incident wave vector is $\hat{k}_i = \sin\theta_i\cos\phi_i\,\hat{x} + \sin\theta_i\sin\phi_i\,\hat{y} - \cos\theta_i\,\hat{z}$.

When a pixel surface is tilted with a local slope, the polarization vectors should be re-defined with respect to the local normal vector $\hat{z}_b$ as follows

$$\hat{h}_b = \frac{\hat{z}_b\times\hat{k}_{ib}}{|\hat{z}_b\times\hat{k}_{ib}|}, \qquad \hat{v}_b = \hat{h}_b\times\hat{k}_{ib} \tag{61}$$

Using the transformation by the Euler angles (β, γ) between the two coordinate systems $(\hat{x}, \hat{y}, \hat{z})$ and $(\hat{x}_b, \hat{y}_b, \hat{z}_b)$ (Jin, 1994), one has

$$\begin{aligned}
\hat{x} &= \cos\gamma\cos\beta\,\hat{x}_b + \sin\gamma\,\hat{y}_b - \cos\gamma\sin\beta\,\hat{z}_b \\
\hat{y} &= -\sin\gamma\cos\beta\,\hat{x}_b + \cos\gamma\,\hat{y}_b + \sin\gamma\sin\beta\,\hat{z}_b \\
\hat{z} &= \sin\beta\,\hat{x}_b + \cos\beta\,\hat{z}_b
\end{aligned} \tag{62}$$

Substituting the above relations into Eqs. (60) and (61) yields

$$\hat{h}_i = -\cos\beta\sin(\gamma+\phi_i)\,\hat{x}_b + \cos(\gamma+\phi_i)\,\hat{y}_b + \sin\beta\sin(\gamma+\phi_i)\,\hat{z}_b \tag{63}$$
$$\hat{k}_{ib} = \left[\cos\beta\sin\theta_i\cos(\gamma+\phi_i) - \sin\beta\cos\theta_i\right]\hat{x}_b + \sin\theta_i\sin(\gamma+\phi_i)\,\hat{y}_b - \left[\sin\beta\sin\theta_i\cos(\gamma+\phi_i) + \cos\beta\cos\theta_i\right]\hat{z}_b \tag{64}$$

Substituting Eq. (64) into Eq. (61), the polarization vector $\hat{h}_b$ of the local surface pixel is written as

$$\hat{h}_b = \frac{-a\,\hat{x}_b + b\,\hat{y}_b}{\sqrt{a^2 + b^2}} \tag{65}$$

where $a = \sin\theta_i\sin(\gamma+\phi_i)$ and $b = \sin\theta_i\cos\beta\cos(\gamma+\phi_i) - \sin\beta\cos\theta_i$. Thus, Eqs. (63) and (65) yield the orientation angle as

$$\cos\psi = \hat{h}_b\cdot\hat{h}_i = \frac{\cos\beta\sin\theta_i - \cos(\gamma+\phi_i)\sin\beta\cos\theta_i}{\sqrt{a^2 + b^2}} \tag{66a}$$
$$\tan\psi = \frac{\tan\beta\sin(\gamma+\phi_i)}{\sin\theta_i - \cos\theta_i\tan\beta\cos(\gamma+\phi_i)} \tag{66b}$$

Thus, the range and azimuth slopes of the pixel surface can be obtained as

$$S_R = \tan\beta\cos\gamma, \qquad S_A = \tan\beta\sin\gamma \tag{67}$$
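As a quick check of Eqs. (66) and (67), the short sketch below evaluates the orientation angle ψ and the two slopes for a given tilted pixel; the function and argument names are illustrative, and all angles are assumed to be in radians.

```python
import numpy as np

def orientation_and_slopes(beta, gamma, theta_i, phi_i=0.0):
    """Forward relations: psi from Eq. (66b), range/azimuth slopes from Eq. (67)."""
    # Eq. (66b): tan(psi) = tan(beta) sin(gamma + phi_i) /
    #            (sin(theta_i) - cos(theta_i) tan(beta) cos(gamma + phi_i))
    num = np.tan(beta) * np.sin(gamma + phi_i)
    den = np.sin(theta_i) - np.cos(theta_i) * np.tan(beta) * np.cos(gamma + phi_i)
    psi = np.arctan2(num, den)                 # principal value of the shift
    # Eq. (67): range and azimuth slopes of the tilted pixel
    S_R = np.tan(beta) * np.cos(gamma)
    S_A = np.tan(beta) * np.sin(gamma)
    return psi, S_R, S_A
```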

Since a single ψ shift cannot simultaneously determine the two unknowns β and γ in Eq. (66), two- or multi-pass SAR image data are usually needed.

If only single-pass SAR image data are available, one of the two unknown angles, β or γ, must be determined first. The azimuth alignment of tilted surface pixels can be visualized as a good indicator of the azimuth direction. We apply the adaptive threshold method and the image morphological thinning algorithm to specify the azimuth angle for all SAR image pixels. The algorithm contains the following steps:

(1) Apply speckle filtering over the entire image.

(2) Apply the adaptive threshold method to produce a binary image. A global threshold value is not adopted because of the heterogeneity of the image pixels.

(3) Apply image morphological processing to the binary image: remove isolated pixels and fill small holes. Taking the binary "1" region as the foreground and the "0" region as the background, the edges of the foreground are extracted.

(4) Each edge pixel is set as the center of a square 21×21 window, and a curve segment through the centered pixel is obtained. Applying a polynomial fit to the curve segment in the least-squares sense, the tangential slope at the centered pixel is obtained, which yields the azimuth angle of that pixel. The pixel is then marked so that it is not processed again in the next pass.

(5) Removing the edge of Step (4) from the foreground, a new foreground is formed. Repeat Step (4) until the azimuth angle of every pixel in the initial foreground is determined.

(6) Form the complementary binary image, i.e. the initial background now becomes the foreground. Steps (4) and (5) are then repeated on this complementary image until the azimuth angle of every pixel in the initial background is determined.

This approach provides supplementary information to first determine the angle γ over the whole image area when no other information is available; a simplified sketch of Steps (2)-(4) is given below.
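The following is only an assumption-laden illustration: a local-mean threshold stands in for the adaptive threshold method, scipy morphology operations stand in for the morphological processing, and a first-order least-squares fit is used for the tangential slope; the speckle filtering of Step (1) and the foreground/background iteration of Steps (5)-(6) are omitted.

```python
import numpy as np
from scipy import ndimage

def azimuth_angles(power, block=31, win=10):
    """Sketch of Steps (2)-(4): adaptive (local-mean) threshold -> binary
    morphology -> foreground edge extraction -> local least-squares tangent,
    giving the azimuth angle gamma of each edge pixel (NaN elsewhere)."""
    # Step (2): local-mean threshold instead of a single global threshold
    local_mean = ndimage.uniform_filter(power.astype(float), size=block)
    binary = power > local_mean
    # Step (3): remove isolated pixels, fill small holes, take foreground edges
    binary = ndimage.binary_opening(binary)
    binary = ndimage.binary_fill_holes(binary)
    edges = binary & ~ndimage.binary_erosion(binary)
    # Step (4): fit the edge points inside a (2*win+1)x(2*win+1) window around
    # each edge pixel and take the tangential slope as the azimuth angle
    gamma = np.full(power.shape, np.nan)
    for r, c in zip(*np.nonzero(edges)):
        r0, r1 = max(r - win, 0), min(r + win + 1, power.shape[0])
        c0, c1 = max(c - win, 0), min(c + win + 1, power.shape[1])
        yy, xx = np.nonzero(edges[r0:r1, c0:c1])
        if xx.size < 3:
            continue
        if xx.max() == xx.min():                 # vertical edge segment
            gamma[r, c] = np.pi / 2
        else:
            slope = np.polyfit(xx, yy, 1)[0]     # first-order least-squares fit
            gamma[r, c] = np.arctan(slope)
    return gamma
```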

Figure 22.

Morphological thinning algorithm to determine the azimuth angle.

As an example of DEM inversion, L-band polarimetric SAR data are shown in Figure 23(a). Once the azimuth angle γ of each pixel is obtained by the adaptive threshold method and the thinning algorithm described above, the angle β of each pixel can be determined from Eq. (66b). Taking $\phi_i = 0$, this yields

$$\tan\beta = \frac{\tan\psi\sin\theta_i}{\sin\gamma + \tan\psi\cos\theta_i\cos\gamma} \tag{68}$$

where the orientation angle ψ is calculated using Eq. (59), and the incidence angle $\theta_i$ is determined by the SAR viewing geometry.

Substituting β and γ into Eq. (67), the azimuth slope $S_A$ and range slope $S_R$ are obtained, and the two slopes are then used to invert the terrain topography and DEM. The slopes $S_A$ and $S_R$ of all pixels in the SAR image are shown in Figures 23(b,c).
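In code, this per-pixel inversion might be sketched as follows, assuming ψ (from Eq. (59)), γ (from the thinning algorithm) and the incidence angle $\theta_i$ are available as arrays of the same shape; the names are illustrative.

```python
import numpy as np

def slopes_from_psi_gamma(psi, gamma, theta_i):
    """Eq. (68): beta from psi and gamma (phi_i = 0), then Eq. (67): slopes."""
    tan_beta = (np.tan(psi) * np.sin(theta_i) /
                (np.sin(gamma) + np.tan(psi) * np.cos(theta_i) * np.cos(gamma)))
    S_R = tan_beta * np.cos(gamma)   # range slope
    S_A = tan_beta * np.sin(gamma)   # azimuth slope
    return S_R, S_A
```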

Figure 23.

SAR image and azimuth/range slopes.

The DEM can be generated by solving the Poisson equation over an M × N rectangular grid. The Poisson equation can be written as

$$\nabla^2\phi(x, y) = \rho(x, y) \tag{69}$$

where $\nabla^2$ is the Laplace operator. The source function ρ(x, y) consists of the surface curvature calculated from the slopes, with the $\hat{x}$- and $\hat{y}$-directions taken as the range and azimuth directions, respectively. In this section, the full multi-grid (FMG) algorithm is employed to solve the Poisson equation; its benefits are rapid convergence, robustness and low computational load. The inverted DEM of the SAR image in Figure 23(a) is given in Figures 24(a,b), and the contour map in Figure 24(b) has been validated against the map of the China Integrative Atlas (1990).
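This last step can be illustrated with the short solver below. As assumptions of the sketch, the source term is built as the divergence of the slope field (one common construction consistent with "surface curvature calculated from the slopes"), unit pixel spacing and zero boundary values are used, and a plain Jacobi relaxation replaces the full multi-grid scheme of the text.

```python
import numpy as np

def dem_from_slopes(S_R, S_A, n_iter=5000):
    """Integrate the range/azimuth slope maps into elevation by solving the
    Poisson equation (69) with plain Jacobi relaxation (a stand-in for the
    full multi-grid solver); unit pixel spacing and zero boundaries assumed."""
    # Source term: divergence of the slope field, with columns (axis=1) taken
    # as the range (x) direction and rows (axis=0) as the azimuth (y) direction
    rho = np.gradient(S_R, axis=1) + np.gradient(S_A, axis=0)
    z = np.zeros_like(rho, dtype=float)
    for _ in range(n_iter):
        # 5-point Laplacian update with fixed (zero) boundary values
        z[1:-1, 1:-1] = 0.25 * (z[1:-1, :-2] + z[1:-1, 2:] +
                                z[:-2, 1:-1] + z[2:, 1:-1] - rho[1:-1, 1:-1])
    return z
```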

Figure 24.

A DEM inversion.


7. Reconstruction of Building Objects from Multi-Aspect SAR Images

Reconstruction of 3D objects from SAR images has become a key issue for information retrieval in SAR monitoring. 3D reconstruction of man-made objects is usually based on interferometric techniques or on fusion with other data sources, e.g. optical and GIS data. It has been noted that scattering from man-made objects produces bright spots at sub-metric resolution, but strip/block images at metric resolution. This difference is largely attributed to the different imaging modes employed for different resolutions, for example, spotlight mode for sub-metric resolution and stripmap mode for metric resolution.

An automatic method for detection and reconstruction of 3D objects from multi-aspect metric-resolution SAR images is developed. The steps are as follows:

The linear profile of a building object is regarded as its most prominent characteristic. The POL-CFAR detector, the Hough transform, and corresponding image processing techniques are applied to the detection of parallelogram-like façade images of building objects. Results for simulated SAR images as well as real 4-aspect Pi-SAR images are given after each step.

A probabilistic description of the detected façade images is presented, and maximum-likelihood (ML) estimation of building objects from multi-aspect observed façade images is given. Eventually, in association with a hybrid priority rule of inversion reliability, an automatic algorithm is designed to match multi-aspect façade images and reconstruct the building objects. Moreover, taking advantage of the multi-aspect coherence of building images, a new iterative co-registration method is presented.

Finally, reconstruction results are presented, and good performance is demonstrated in comparison with ground truth data. It is also concluded that the reconstruction accuracy depends closely on the number of available aspects. In the end, a practicable scheme for 3D-object reconstruction from satellite-borne SAR images is proposed (Xu and Jin, 2007).

7.1. Object image detection

As shown in Figure 25, the scattering image of a simple building is produced by direct scattering from the wall/façade, roof and edges, double scattering between wall and ground, the projected shadow, etc. At metric resolution, this scattering produces bright strips/blocks from the respective parts of the object. A key step of reconstruction is to identify and extract these strips/blocks.

For a smooth wall/façade, the only double-scattering term to be considered must follow a specific propagation path, i.e. wall (reflection) to ground (diffuse scattering). A simple building object is considered and modeled as a cuboid, and the spatial distribution of the building objects is assumed not to be crowded, i.e. without serious shadowing and superposition. The cuboid object is described by seven geometric parameters: 3D position, 3D size and orientation. In addition, the flat roof is assumed to scatter much less than the edges and façades.

Figure 26 shows (a) a model of the cuboid object, (b) its SAR image simulated using the mapping and projection algorithm, (c) a photo of a real rectangular building, and (d) its image from a Pi-SAR observation.

It can be seen from Figure 26 that the longer wall of the cuboid-like building, referred to as the major wall henceforth, plays the dominant role in the SAR image. In this section, attention is focused mainly on the major-wall image. In the first step, constant false alarm rate (CFAR) edge detectors, such as the ratio gradient, are used for edge detection.

Figure 25.

SAR imaging of a simple building model and its image composition.

In our experience, the POL-CFAR detector derived from the complex Wishart distribution can fulfill the segmentation requirements for medium- and small-scale building images. In addition, the ridge filter applied after this step can accommodate segmentation errors to some extent.

Figure 26.

A cuboid object in m-resolution SAR images.

To improve the segmentation precision, the edges detected by the window operator need to be further thinned. Using the edge intensity produced in the POL-CFAR detection, an edge thinning method of "ridge filtering" is presented. Taking an 8-neighbor 3×3 window as an example, the center pixel is regarded as lying on the ridge if its intensity is higher than that of the pixels on its two sides.
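A minimal sketch of such a ridge test on the edge-intensity map is given below; interpreting "the pixels along two sides" as the neighbour pairs of the four principal directions through the 3×3 window is an assumption of this sketch.

```python
import numpy as np

def ridge_filter(edge_intensity):
    """Thin a CFAR edge-intensity map by keeping only ridge pixels: a pixel
    survives if it is stronger than both of its neighbours along at least one
    of the four directions through the 3x3 window."""
    e = np.pad(edge_intensity, 1, mode="edge")
    c = e[1:-1, 1:-1]
    neighbour_pairs = [
        (e[1:-1, :-2], e[1:-1, 2:]),   # left / right
        (e[:-2, 1:-1], e[2:, 1:-1]),   # up / down
        (e[:-2, :-2],  e[2:, 2:]),     # diagonal
        (e[:-2, 2:],   e[2:, :-2]),    # anti-diagonal
    ]
    ridge = np.zeros_like(c, dtype=bool)
    for a, b in neighbour_pairs:
        ridge |= (c > a) & (c > b)
    return ridge
```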

4-aspect Pi-SAR images acquired over Sendai, Japan along a square-loop flight path (Flights No. 8708, 8709, 8710, 8711, X-band, pixel resolution 1.25 m) are taken as an example of a real SAR image study. The region of interest (ROI) is the Aobayama campus of Tohoku University.

Figure 27(a) shows one aspect of the Pi-SAR images as an example.

Figures 27(b-d) show the edge detection results for the Pi-SAR images.

The most distinctive feature of a building object in a SAR image is its parallel lines. The Hough transform is employed to detect straight segments from the thinned edges, and parallelogram outlines of the façade images can then be extracted. This is carried out in a tiling manner here, i.e. the original image is partitioned into blocks, each of which is processed independently via the Hough transform.

Figure 27.

A PI-SAR image and edge detection.

The detection steps for parallel line segments are: i) find bright spots in the transform domain with a minimum distance between any two of them so as to avoid re-detection; ii) in the spatial domain, search for segments consisting of successive points along the corresponding parallel lines, longer than a minimum length and with the distance between two successive points shorter than a maximum gap; iii) only pairs of points lying on two lines and facing each other directly are taken into account in the segment search.
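A rough sketch of this parallel-segment search is given below; as assumptions, OpenCV's probabilistic Hough transform (cv2.HoughLinesP) stands in for steps i)-ii), and step iii) is approximated by pairing segments of similar angle whose centers lie within a bounded separation. The thresholds are placeholders.

```python
import numpy as np
import cv2

def parallel_segments(thinned_edges, min_len=20, max_gap=3,
                      angle_tol=np.deg2rad(5), max_sep=30):
    """Detect candidate parallel segment pairs from a thinned binary edge map."""
    lines = cv2.HoughLinesP(thinned_edges.astype(np.uint8) * 255, 1, np.pi / 180,
                            threshold=30, minLineLength=min_len, maxLineGap=max_gap)
    if lines is None:
        return []
    segs = lines[:, 0, :]                                   # (N, 4): x1, y1, x2, y2
    ang = np.arctan2(segs[:, 3] - segs[:, 1], segs[:, 2] - segs[:, 0])
    mid = 0.5 * (segs[:, :2] + segs[:, 2:])                 # segment midpoints
    pairs = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            d = np.abs(np.arctan2(np.sin(ang[i] - ang[j]), np.cos(ang[i] - ang[j])))
            d = min(d, np.pi - d)                           # undirected angle difference
            if d < angle_tol and np.linalg.norm(mid[i] - mid[j]) < max_sep:
                pairs.append((segs[i], segs[j]))
    return pairs
```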

Figure 28.

Extraction of strip-like building-images.

Figures 28(a-d) show the detection results from the 4-aspect Pi-SAR images. A few building images are not detected and some false images are wrongly produced. In fact, there is a compromise between over-detection and incomplete detection. We prefer to preserve more detected building images, so as to control the non-detection rate, while the false images are expected to be eliminated through the subsequent automatic selection of effective images for reconstruction. However, some images always remain undetectable, attributable to shadowing by tree canopies, overlapping of nearby buildings, or overly complicated wall structures.

A detected parallelogram of a homogeneous scattering area could be direct scattering from a façade, wall-ground double scattering, a combination of direct and double scattering, the projected shadow of a building, or even strong scattering from a strip-like object (e.g. a metal fence or metal awning), which is not considered in the classification.

Shadowing is identified if the scattering power of the area is much lower than that of its vicinity. Specifically, two parallel equal-length strip windows are first set up on its two sides, and the median scattering powers of the three regions are calculated. If the middle one is weaker than the two sides, the area is classified as shadow instead of a building image.
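A compact form of this test, assuming the pixel indices of the detected parallelogram and of its two flanking strip windows have already been collected by the caller, might be:

```python
import numpy as np

def is_shadow(power, region_idx, left_idx, right_idx):
    """Classify a detected parallelogram as shadow if its median scattering
    power is lower than that of both flanking strip windows (index arrays
    for the three regions are assumed to be prepared by the caller)."""
    mid = np.median(power[region_idx])
    return mid < np.median(power[left_idx]) and mid < np.median(power[right_idx])
```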

The wall-ground double scattering can be differentiated from direct scattering based on polarimetric information. The deorientation parameter v, indicating the scattering mechanism, is used to identify double scattering.

To implement target reconstruction from multi-aspect SAR data, calibration requires that at least one aspect be calibrated beforehand; the other aspects are then calibrated with respect to this calibrated aspect. A natural object, such as flat bare ground, is usually chosen as the reference target, since it is expected to preserve identical scattering for all aspects. The channel imbalance factors are then estimated from the distributions of the phase difference and amplitude ratio of the co-polarized (hh and vv) echoes of the reference target, and are used to compensate the whole SAR images (Xu and Jin, 2008b).
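This calibration step can be roughly illustrated as below, assuming complex hh and vv images of one aspect and a mask over the flat reference target are available; estimating a single complex imbalance factor from the mean amplitude ratio and mean co-polarized phase difference is a simplification of the procedure of Xu and Jin (2008b).

```python
import numpy as np

def channel_imbalance(hh, vv, ref_mask):
    """Estimate a complex co-polarized channel-imbalance factor over a flat
    reference target (expected to scatter identically for all aspects), from
    the mean amplitude ratio and mean phase difference of hh and vv."""
    h, v = hh[ref_mask], vv[ref_mask]
    amp_ratio = np.mean(np.abs(h)) / np.mean(np.abs(v))
    phase_diff = np.angle(np.mean(h * np.conj(v)))     # mean co-pol phase difference
    return amp_ratio * np.exp(1j * phase_diff)

# One aspect could then be compensated, e.g. vv_cal = vv * channel_imbalance(hh, vv, ref_mask),
# so that the co-polarized channels of the different aspects refer to the same reference.
```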

7.2. Building reconstruction

The complexity of object-terrain surfaces and image speckle introduce a certain uncertainty into the detection of object images. To describe multi-aspect object images well, a parameterized probability is introduced for the subsequent automatic reconstruction. For convenience, the detected building images are parameterized. Generally, the edge pixels detected by CFAR are randomly situated around the real, or theoretical, boundary of the object. It is reasonable to presume that the deviation distances of the edge points follow a normal distribution.

The original edge can be equivalently treated as a set of independent edge points. The line detection approach is then considered as an equivalent linear regression, i.e. line fitting from random sample points. According to linear regression from independent normal samples, the slope of the fitted line follows a normal distribution.

All parameters have normal distributions, and their variances are determined by the variance of the edge-point deviations. After counting the deviations of the edge points in the vicinity of each lateral of all detected building images and forming their histogram, the variance can be determined through a minimum square error (MSE) fit of the histogram with a normal probability density function (PDF). The 4-aspect Pi-SAR images are used for this statistic.
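This variance estimate can be reproduced, for example, with a least-squares fit of a zero-mean normal PDF to the deviation histogram; scipy's curve_fit is used here only as one possible tool, and the bin count is arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def edge_deviation_sigma(deviations, bins=50):
    """Fit a zero-mean normal PDF to the histogram of edge-point deviations
    (in the least-squares / MSE sense) and return the estimated sigma."""
    hist, edges = np.histogram(deviations, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    sigma0 = np.std(deviations)                      # moment estimate as initial guess
    (sigma,), _ = curve_fit(lambda x, s: norm.pdf(x, loc=0.0, scale=s),
                            centers, hist, p0=[sigma0])
    return abs(sigma)
```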

Figure 29.

Geometry of a wall/façade imaging.

Given a group of building images detected from multi-aspect SAR images, the corresponding maximum-likelihood probability can be further used as an assessment of the coherence among this group of multi-aspect images. A large maximum-likelihood probability indicates a strong coherence among the group of multi-aspect building images, and vice versa.

Multi-aspect co-registration, as a critical pre-processing step, is necessary when dealing with real SAR data.

Given the specification of a region of interest (ROI), e.g. its longitudes and latitudes, the corresponding area in the SAR image of each aspect can be coarsely delimited according to the orbit parameters. This can be regarded as a coarse co-registration step. Manual intervention is necessary if the orbit parameters are not accurate enough.

Only the parameters of the building images need to be co-registered to the global coordinates, rather than the original SAR images; this is regarded as a fine co-registration step. In this study, only linear co-registrations are considered. A simple and effective method is to manually choose ground control points: a featured terrain object or building is first selected as the zero-elevation reference, and its locations in the different aspects are then pinpointed. Coherence among multi-aspect building images is the basis of the automatic reconstruction.

Finally, an automatic multi-aspect reconstruction algorithm is designed. The building objects are reconstructed from the 4-aspect simulated SAR images with very good inversion accuracy. The 3D reconstructed objects are shown on the true DEM, and the inverted elevations also match the true DEM well.

Figure 30.

Reconstructed objects and comparison with an optical picture.

Due to the difficulty of collecting ground truth data, a high-resolution satellite optical image (0.6 m QuickBird data) is used as a reference. The geometric parameters of each building, manually measured from this image, are taken as ground truth to evaluate the reconstruction accuracy.

There is a trade-off between the false and correct reconstruction rates. Increasing the false alarm rate of the edge detector, relaxing the requirements of building-image detection and/or increasing the false alarm rate for judging correctly matched groups will raise the reconstruction rate, but also boost the false reconstruction rate; efficiency deteriorates if the false reconstruction rate becomes too high. On the contrary, the false reconstruction rate can be reduced and the accuracy of the reconstructed buildings improved, but the reconstruction rate will then also decline.

A critical factor limiting the reconstruction precision is the number of effective aspects, hereafter referred to as the effective aspect number (EAN). The reconstruction result improves as more SAR images of different aspects become available.

The main error of reconstruction is caused by the boundary deviation of the detected building images, which originates from the complicated scattering and interactions of spatially distributed objects and backgrounds. In the probabilistic description of a detected building image, the boundary presented in the SAR image is taken to be the same as the reality. However, the actually detected building image might be biased or even partly lost due to obstacles and overlapping.

In addition, it is noticed that large-scale buildings might not be well reconstructed. The reason is partly attributed to the premise of the Wishart distribution in the POL-CFAR detector. Since the images of large-scale buildings may reveal more detailed texture features and heterogeneity, the performance of the edge detector deteriorates. Developing a new edge detector based on a specific speckle model for high-resolution images could improve the results. Another, more feasible, way is to employ a multi-scale analysis, through which building images of different scales present Gaussian properties at their own scale levels. Of course, the expense of multi-scale analysis might be a loss of precision.

Another issue to be addressed is the exploitation of multi-aspect polarimetric scattering information. In heterogeneous urban areas, terrain objects appear distinctly different in different aspects, which gives rise to very low coherence between their multi-aspect scatterings. Hence, polarimetric scattering information may not be a good option for the fusion of multi-aspect SAR images over urban areas.

After the ROI is coarsely chosen in each aspect image, edge detection and object-image extraction are carried out subsequently. The object images are then parameterized and finely co-registered. Once the multi-aspect object images are automatically matched, the 3D objects are reconstructed at the same time.

The merits of this approach are the automation of the process with few manual interventions, the full utilization of all-aspect information, and high computational efficiency. A reconstruction on a 4-aspect Pi-SAR dataset (500×500) takes less than 10 min of CPU time (CPU frequency 3 GHz). The complexity of this approach is about O(KN²), where K is the number of aspects and N is the size of the SAR images (K = 4, N = 500 in this case).

It is tractable to extend this approach to the reconstruction of other kinds of objects. For other types of primary objects, given a priori knowledge, new image detection methods have to be developed. More complicated buildings can be treated as combinations of different primary objects, which can be reconstructed separately and then combined together.

Considering a space-borne SAR with left/right side-looking capability (e.g. Radarsat-2 SAR) and forward/backward oblique side-looking capability (e.g. PRISM on ALOS), six different aspect settings are available for the ascending orbit. There is about a 20° angle between ascending and descending flights for a sun-synchronous orbit; therefore, observations from 12 different aspects are possible. Supposing the sensor uses a different aspect on each visit and the repeat period is 15 days, a set of SAR images acquired from 12 aspects can be obtained over a 3-month observation period. SBT reconstruction with acceptable precision could then be applied.


Acknowledgments

This work was supported by the National Science Foundation of China under Grant No. NSFC 406370338.

References

1. Jin Y. Q. (2005). Theory and Approach of Information Retrievals from Electromagnetic Scattering and Remote Sensing, Springer, ISBN 1402040296, The Netherlands.
2. Jin Y. Q. (2000). Information of Electromagnetic Scattering and Radiative Transfer in Natural Media. Science Press, ISBN 7-03008-259-1, Beijing.
3. Jin Y. Q. (1994). Theory and Approach of Information Retrievals from Electromagnetic Scattering and Remote Sensing, World Scientific, ISBN 9-81021-648-3, Singapore.
4. Jin Y. Q. (1992). A Mueller matrix approach to complete polarimetric scattering from a layer of non-uniformly oriented, non-spherical scatterers. Journal of Quantitative Spectroscopy and Radiative Transfer, 48(3), 295-306, ISSN 0022-4073.
5. Jin Y. Q. (1994). Polarimetric scattering from a layer of random clusters of small spheroids. IEEE Transactions on Antennas and Propagation, 42(8), 1138-1144, ISSN 0018-926X.
6. Jin Y. Q. & Cloude S. (1994). Numerical eigenanalysis of the coherency matrix for a layer of random nonspherical scatterers. IEEE Transactions on Geoscience and Remote Sensing, 32(6), 1179-1185, ISSN 0196-2892.
7. Jin Y. Q. (1995). Polarimetric scattering and emission from a layer of random clusters of small spheroids and dense spheres. In: Microwave Radiometry and Remote Sensing of the Environment, ed. by Solimini, D., VSP, 419-427, The Netherlands, ISBN 906764189.
8. Jin Y. Q. (1995). Polarimetric scattering from heterogeneous particulate media. Journal of Applied Physics, 78(8), 4835-4840, ISSN 1087-3848.
9. Jin Y. Q., Zhang N. & Chang M. (1999). The Mueller and coherency matrices solution for polarimetric scattering simulation for tree canopy in SAR imaging at C band. Journal of Quantitative Spectroscopy and Radiative Transfer, 63(2), 699-612, ISSN 0022-4073.
10. Jin Y. Q. & Zhang N. (2002). Statistics of four Stokes parameters in multi-look polarimetric SAR imagery. Canadian Journal of Remote Sensing, 28(4), 610-619, ISSN 0703-8992.
11. Jin Y. Q. & Chen Y. (2002). An improved method of the minimum entropy for refocusing moving target image in the SAR observation. The Imaging Science Journal, 50(6), 147-152, ISSN 1368-2199.
12. Jin Y. Q. & Chen F. (2002). Polarimetric scattering indexes and information entropy of the SAR imagery for surface monitoring. IEEE Transactions on Geoscience and Remote Sensing, 40(11), 2502-2506, ISSN 0196-2892.
13. Jin Y. Q. & Liang Z. (2003). Iterative solution of multiple scattering and emission from inhomogeneous scatter media. Journal of Applied Physics, 93(4), 2251-2256.
14. Jin Y. Q. & Chen F. (2003). Scattering simulation for inhomogeneous layered canopy and random targets beneath canopies by using the Mueller matrix solution of the pulse radiative transfer. Radio Science, 38(6), 1107-1116, ISSN 0048-6604.
15. Jin Y. Q. & Liang Z. (2004). Iterative inversion from the multi-order Mueller matrix solution of vector radiative transfer equation for a layer of random spheroids. Journal of Quantitative Spectroscopy and Radiative Transfer, 83(4), 303-311, ISSN 0022-4073.
16. Jin Y. Q. & Liang Z. (2004). An approach of the three-dimensional vector radiative transfer equation for inhomogeneous scatter media. IEEE Transactions on Geoscience and Remote Sensing, 42(2), 355-360, ISSN 0196-2892.
17. Jin Y. Q., Chen F. & Chang M. (2004). Retrievals of underlying surface roughness and moisture for stratified vegetation canopy using polarized pulse echoes in the specular direction. IEEE Transactions on Geoscience and Remote Sensing, 42(2), 426-433, ISSN 0196-2892.
18. Jin Y. Q. & Liang Z. (2004). Iterative retrieval of canopy biomass and surface moisture by using multi-order Mueller matrix solutions of polarimetric radiative transfer. Canadian Journal of Remote Sensing, 30(2), 169-175, ISSN 0703-8992.
19. Jin Y. Q. & Luo L. (2004). Terrain topography inversion using single-pass polarimetric SAR image data. Science in China (F), 47(4), 490-500, ISSN 1009-2757.
20. Jin Y. Q., Chen F. & Luo L. (2004). Automatic analysis of change detection of multi-temporal ERS-2 SAR images by using two-thresholds EM and MRF algorithms. The Imaging Science Journal, 52, 234-241, ISSN 1368-2199.
21. Jin Y. Q. & Liang Z. (2004). Scattering and emission from inhomogeneous vegetation canopy and alien target by using three-dimensional vector radiative transfer (3D-VRT) equation. Journal of Quantitative Spectroscopy and Radiative Transfer, 92(3), 261-274, ISSN 0022-4073.
22. Jin Y. Q. (2007). Theory and application for retrieval and fusion of spatial and temporal quantitative information from complex natural environment. Frontiers of Earth Science in China, 1(3), 284-298, ISSN 1673-7385.
23. Jin Y. Q., Xu F. & Fa W. (2007). Numerical simulation of polarimetric radar pulse echoes from lunar regolith layer with scatter inhomogeneity and rough interfaces. Radio Science, 42, RS3007, doi:10.1029/RS2006003523, 1-10, ISSN 0048-6604.
24. Jin Y. Q. & Wang D. (2009). Automatic detection of the terrain surface changes after Wenchuan earthquake, May 2008, from ALOS SAR images using 2EM-MRF method. IEEE Geoscience and Remote Sensing Letters, 6(2), 344-348, ISSN 0196-2892.
25. Cao G. Z. & Jin Y. Q. (2007). A hybrid algorithm of BP-ANN/GA for data fusion of Landsat ETM+ and ERS-2 SAR in urban terrain classification. International Journal of Remote Sensing, 28(2), 293-305, ISSN 0143-1161.
26. Cao G. & Jin Y. Q. (2007). Automatic detection of main road network in dense urban area using microwave SAR image. The Imaging Science Journal, 55, 215-222, ISSN 1368-2199.
27. Chang M. & Jin Y. Q. (2002). The temporal Mueller matrix solution for polarimetric scattering from a layer of random non-spherical particles with a pulse incidence. Journal of Quantitative Spectroscopy and Radiative Transfer, 74, 339-353, ISSN 0022-4073.
28. Chang M. & Jin Y. Q. (2003). Temporal Mueller matrix solution for polarimetric scattering from inhomogeneous random media of non-spherical scatterers under a pulse incidence. IEEE Transactions on Antennas and Propagation, 51(4), 820-832, ISSN 0018-926X.
29. Dai E., Jin Y. Q., Hamasaki T. & Sato M. (2008). 3-D stereo reconstruction of buildings using polarimetric images acquired in opposite directions. IEEE Geoscience and Remote Sensing Letters, 5(2), 236-240, ISSN 0196-2892.
30. Fa W. & Jin Y. Q. (2009). Image simulation of SAR remote sensing over inhomogeneously undulated lunar surface. Science in China (F), 52(4), 559-574, ISSN 1009-2757.
31. Liang Z. & Jin Y. Q. (2003). Iterative approach of high-order scattering solution for vector radiative transfer of inhomogeneous random media. Journal of Quantitative Spectroscopy and Radiative Transfer, 77, 1-12, ISSN 0022-4073.
32. Qi R. & Jin Y. Q. (2007). Analysis of Faraday rotation effect in satellite-borne SAR observation at P band. IEEE Transactions on Geoscience and Remote Sensing, 45(5), 1115-1122, ISSN 0196-2892.
33. Wei Z., Xu F. & Jin Y. Q. (2008). Phase unwrapping for SAR interferometry based on ant colony optimization algorithm. International Journal of Remote Sensing, 29(3), 711-725, ISSN 0143-1161.
34. Xu F. & Jin Y. Q. (2005). Deorientation theory of polarimetric scattering targets and application to terrain surface classification. IEEE Transactions on Geoscience and Remote Sensing, 43(10), 2351-2364, ISSN 0196-2892.
35. Xu F. & Jin Y. Q. (2006). Multi-parameters inversion of a layer of vegetation canopy over rough surface from the system response function based on the Mueller matrix solution of pulse echoes. IEEE Transactions on Geoscience and Remote Sensing, 44(7), 2003-2015, ISSN 0196-2892.
36. Xu F. & Jin Y. Q. (2006). Imaging simulation of polarimetric synthetic aperture radar remote sensing for comprehensive terrain scene using the mapping and projection algorithm. IEEE Transactions on Geoscience and Remote Sensing, 44(11), 3219-3234, ISSN 0196-2892.
37. Xu F. & Jin Y. Q. (2007). Automatic reconstruction of 3-D stereo buildings from multi-aspect high-resolution SAR images. IEEE Transactions on Geoscience and Remote Sensing, 45(7), 2336-2353, ISSN 0196-2892.
38. Xu F. & Jin Y. Q. (2008a). Imaging simulation of bistatic SAR and polarimetric analysis. IEEE Transactions on Geoscience and Remote Sensing, 46(8), 2233-2248, ISSN 0196-2892.
39. Xu F. & Jin Y. Q. (2008b). Calibration and validation of a set of multi-aspect airborne polarimetric Pi-SAR data. International Journal of Remote Sensing, 29(23), 6801-6810, ISSN 0143-1161.
40. Xu F. & Jin Y. Q. (2009). Raw signal processing of stripmap bistatic SAR. IET Radar, Sonar and Navigation, 3(3), 233-244, ISSN 1751-8784.
