Open access peer-reviewed chapter

# Theoretical Modeling for Polarimetric Scattering and Information Retrieval of SAR Remote Sensing

By Ya-Qiu Jin

Published: October 1st 2009

DOI: 10.5772/8323

## 1. Introduction

Synthetic aperture radar (SAR) imagery is one of the most important advances in space-borne microwave remote sensing of recent decades. Fully polarimetric scattering from complex terrain surfaces can now be measured. Understanding and retrieving information from the polarimetric scattering signatures of natural media and SAR images have become key issues for SAR remote sensing and its broad applications.

This paper presents some research progress at Fudan University on theoretical modeling of terrain surfaces for polarimetric scattering simulation and the Mueller matrix solution, mono-static and bistatic SAR image simulation, new parameters for unsupervised surface classification, DEM inversion, change detection from multi-temporal SAR images, and reconstruction of buildings from multi-aspect SAR images. Some applications are discussed.

## 2. Model of Vegetation Canopy and Mueller Matrix Solution

As a polarized wave $\bar{E}_{inc}(\chi,\psi)$ is incident upon the natural media, the scattered field is written as

$$\begin{bmatrix} E_{vs}\\ E_{hs}\end{bmatrix}=\frac{e^{ikr}}{r}\begin{bmatrix} S_{vv} & S_{vh}\\ S_{hv} & S_{hh}\end{bmatrix}\begin{bmatrix} E_{vi}\\ E_{hi}\end{bmatrix}\equiv\frac{e^{ikr}}{r}\,\bar{S}\,\bar{E}_{inc}(\chi,\psi) \tag{1}$$

where the 2×2-D (dimensional) complex scattering amplitude function $\bar{S}$ can be measured by polarimetry. The incident polarization is indicated by the ellipticity and orientation angles $(\chi,\psi)$. Using the Mueller matrix solution of the vector radiative transfer (VRT) equation (Jin, 1994) and Eq. (1), the scattered Stokes vector (four Stokes parameters) can be obtained or measured as (Jin, 2005)

$$\bar{I}_s(\theta,\phi)=\bar{M}(\theta,\phi;\pi-\theta_0,\phi_0)\,\bar{I}_i(\chi,\psi) \tag{2}$$

The Mueller matrix solution of a layer of scatterers model, as shown in Figure 1, is written as

$$\bar{M}(\theta,\phi;\pi-\theta_0,\phi_0)=\bar{M}_0+\bar{M}_1+\bar{M}_2+\bar{M}_3+\bar{M}_4 \tag{3}$$

$$\begin{aligned}
\bar{M}_0={}&\exp[-d\sec\theta\,\bar{\kappa}_e(\theta,\phi)]\,\bar{R}(\theta,\phi;\pi-\theta_0,\phi_0)\,\exp[-d\sec\theta_0\,\bar{\kappa}_e(\pi-\theta_0,\phi_0)]\\
\bar{M}_1={}&\sec\theta\int_{-d}^{0}dz\,\exp[z\sec\theta\,\bar{\kappa}_e(\theta,\phi)]\,\bar{P}(\theta,\phi;\pi-\theta_0,\phi_0)\,\exp[z\sec\theta_0\,\bar{\kappa}_e(\pi-\theta_0,\phi_0)]\\
\bar{M}_2={}&\sec\theta\int_{-d}^{0}dz\,\exp[z\sec\theta\,\bar{\kappa}_e(\theta,\phi)]\int_{0}^{2\pi}d\phi'\int_{0}^{\pi/2}d\theta'\sin\theta'\,\bar{P}(\theta,\phi;\theta',\phi')\\
&\times\exp[-(d+z)\sec\theta'\,\bar{\kappa}_e(\theta',\phi')]\,\bar{R}(\theta',\phi';\pi-\theta_0,\phi_0)\,\exp[-d\sec\theta_0\,\bar{\kappa}_e(\pi-\theta_0,\phi_0)]\\
\bar{M}_3={}&\sec\theta\,\exp[-d\sec\theta\,\bar{\kappa}_e(\theta,\phi)]\int_{0}^{2\pi}d\phi'\int_{0}^{\pi/2}\sin\theta'\,d\theta'\,\bar{R}(\theta,\phi;\pi-\theta',\phi')\\
&\times\int_{-d}^{0}dz\,\exp[-(d+z)\sec\theta'\,\bar{\kappa}_e(\pi-\theta',\phi')]\,\bar{P}(\pi-\theta',\phi';\pi-\theta_0,\phi_0)\,\exp[z\sec\theta_0\,\bar{\kappa}_e(\pi-\theta_0,\phi_0)]\\
\bar{M}_4={}&\sec\theta\,\exp[-d\sec\theta\,\bar{\kappa}_e(\theta,\phi)]\int_{0}^{2\pi}d\phi'\int_{0}^{\pi/2}d\theta'\sin\theta'\,\bar{R}(\theta,\phi;\pi-\theta',\phi')\int_{-d}^{0}dz\,\exp[-(d+z)\sec\theta'\,\bar{\kappa}_e(\pi-\theta',\phi')]\\
&\times\int_{0}^{2\pi}d\phi''\int_{0}^{\pi/2}d\theta''\sin\theta''\,\bar{P}(\pi-\theta',\phi';\theta'',\phi'')\,\exp[-(d+z)\sec\theta''\,\bar{\kappa}_e(\theta'',\phi'')]\\
&\times\bar{R}(\theta'',\phi'';\pi-\theta_0,\phi_0)\,\exp[-d\sec\theta_0\,\bar{\kappa}_e(\pi-\theta_0,\phi_0)]
\end{aligned} \tag{4}$$

where $\bar{\kappa}_e$, $\bar{P}$, $\bar{R}$ are the extinction matrix, the phase matrix of the non-spherical scatterers, and the reflectivity matrix of the underlying rough surface, respectively. The five contributions from the scatterers and the underlying surface are illustrated in Figure 1. $\bar{\kappa}_e$ and $\bar{P}$ can be expressed by the correlations $S_{pq}S_{st}^*$ of $\bar{S}$ in Eq. (1). The co-polarized and cross-polarized backscattering coefficients $\sigma_c$ and $\sigma_x$, the polarization degree $m_s$ of the partially polarized scattered Stokes echo, and other functions can be numerically calculated (Jin and Chen, 2002).

Eqs. (3) and (4) will be applied to the numerical simulation of polarimetric scattering from terrain surfaces with the physical parameters of the scatterers (Jin, 2005), as described in the next sections.
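To make the structure of the layer solution concrete, here is a minimal numerical sketch for the first-order volume term of Eq. (4) under the simplifying assumption of a scalar, angle-independent extinction coefficient (all numerical values below are illustrative, not taken from the chapter). In that scalar case the $z$-integral has a closed form, which can be checked by quadrature.

```python
import numpy as np

# Scalar sketch of the first-order volume term M1 of Eq. (4): for an
# azimuth-independent scalar extinction kappa_e, the z-integral has a
# closed form. All parameter values below are illustrative assumptions.
kappa_e = 0.3        # scalar extinction coefficient (Np/m), assumed
d = 2.0              # layer depth (m), assumed
theta = np.deg2rad(40.0)   # scattered angle
theta0 = np.deg2rad(40.0)  # incident angle
P = 1.0              # scalar stand-in for the phase matrix

sec, sec0 = 1 / np.cos(theta), 1 / np.cos(theta0)

# Closed form of sec(theta) * int_{-d}^{0} e^{z sec k} P e^{z sec0 k} dz
M1_closed = sec * P * (1 - np.exp(-d * (sec + sec0) * kappa_e)) / ((sec + sec0) * kappa_e)

# The same integral by trapezoidal quadrature
z = np.linspace(-d, 0.0, 200001)
f = np.exp(z * (sec + sec0) * kappa_e) * P
M1_num = sec * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))

print(M1_closed, M1_num)
```

The agreement of the two values illustrates how each $\bar{M}_i$ term combines upward and downward attenuation factors around a scattering event inside the layer.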

The Mueller matrix is a 4×4-D real matrix with complex eigenvalues and eigenvectors. To be physically realizable, it must satisfy the Stokes criterion together with several other restrictive conditions. Unfortunately, these restrictions do not have any direct physical interpretation in terms of the eigenstructure of the Mueller matrix.

The coherency matrix $\bar{C}$ is applied to the study of polarimetric scattering in SAR images (Jin and Cloude, 1994). Define the scattering vector as

$$\mathbf{k}_P=\frac{1}{2}\left[S_{vv}+S_{hh},\ S_{vv}-S_{hh},\ S_{vh}+S_{hv},\ i(S_{vh}-S_{hv})\right]^T=\frac{1}{2}\left[A,\ B,\ C,\ iD\right]^T \tag{5}$$

where A, B, C, D are sequentially defined in Eq. (5), and the subscript P denotes Pauli vectorization. The coherency matrix is defined as

$$\bar{C}=\langle\mathbf{k}_P\mathbf{k}_P^{+}\rangle \tag{6}$$

The eigenvalues and eigenvectors of the coherency matrix are defined by

$$\bar{C}=\sum_{i=1}^{4}\lambda_i\,\mathbf{k}_{Pi}\mathbf{k}_{Pi}^{+} \tag{7}$$

All eigenvalues are real and non-negative, and $\lambda_1\ge\lambda_2\ge\lambda_3\ge\lambda_4$.

The coherency matrix is also related to the Mueller matrix as

$$M_{i+1,s+1}=\sum_{t,u}\frac{1}{2}\,\mathrm{Trace}\left(\bar{\sigma}_i\,\bar{\sigma}_t\,\bar{\sigma}_s\,\bar{\sigma}_u\right)\left[\bar{C}\right]_{tu} \tag{8}$$

where $\bar{\sigma}_i,\ i=0,1,2,3$ are the Pauli matrices.

Usually, the case of $S_{vh}=S_{hv}$ is simply assumed. The entropy H is defined by the eigenvalues of the coherency matrix as

$$H=-\sum_{i=1}^{3}P_i\log_3 P_i,\qquad P_i=\lambda_i/(\lambda_1+\lambda_2+\lambda_3) \tag{9}$$

The entropy is an important feature since it relates the randomness of the scattering medium to other physical parameters, such as canopy configuration, biomass, etc.

The coherency matrix is a 4×4-D positive semidefinite Hermitian matrix and as such has real non-negative eigenvalues and complex orthogonal eigenvectors. The amplitudes and differences of the four eigenvalues are functionally related to the polarimetric scattering process of the terrain canopy. Eigen-analysis of the coherency matrix yields better physical insight into the polarimetric scattering mechanisms than the Mueller matrix does; further, it can be employed to physically identify a Mueller matrix by virtue of the semi-definite nature of the corresponding coherency matrix.
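The eigen-analysis of Eqs. (5)-(9) can be sketched numerically. The following minimal example builds a coherency matrix from a synthetic ensemble of scattering matrices (the Svv/Shh statistics are illustrative assumptions, not data from the chapter) and computes the entropy H.

```python
import numpy as np

# Sketch of Eqs. (5)-(9): coherency matrix, eigenvalues and entropy for a
# synthetic ensemble of scattering matrices (illustrative statistics).
rng = np.random.default_rng(0)
N = 20000
Svv = rng.normal(size=N) + 1j * rng.normal(size=N)
Shh = 0.8 * Svv + 0.3 * (rng.normal(size=N) + 1j * rng.normal(size=N))  # partially correlated
Sx = 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))               # weak cross-pol, Svh = Shv

# Pauli scattering vector k_P = (1/2)[A, B, C, iD]^T, with D = 0 here since Svh = Shv
A, B, C = Svv + Shh, Svv - Shh, 2 * Sx
kP = 0.5 * np.stack([A, B, C, np.zeros_like(A)])

# Coherency matrix C = <k_P k_P^+> (Eq. 6), averaged over the ensemble
Cmat = (kP @ kP.conj().T) / N

lam = np.sort(np.linalg.eigvalsh(Cmat))[::-1]   # real, non-negative, descending
P = lam[:3] / lam[:3].sum()                     # Eq. (9) probabilities
H = -np.sum(P * np.log(P) / np.log(3))          # entropy with log base 3

print(lam, H)
```

Because $S_{vh}=S_{hv}$ here, the fourth Pauli component vanishes and $\lambda_4\approx 0$, leaving three eigenvalues to define the entropy.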

For the first-order solution at small albedo, cross polarization is small, and correlations such as the terms $CA^*$, $CB^*$, $DA^*$, $DB^*$, etc. in Eq. (6) are always very small and might be neglected in the following derivations. In fact, under the weak assumption of azimuthally symmetric orientation, these correlations have been proved to be zero. Applying this approximation to Eq. (6), the eigenvalues of $\bar{C}$ have been derived as (Jin and Cloude, 1994)

$$\lambda_{1,2}=\frac{1}{8}\left[|A|^2+|B|^2\pm\sqrt{\left(|A|^2-|B|^2\right)^2+4A^*B\,AB^*}\right],\qquad \lambda_3=\frac{1}{4}|C|^2 \tag{10}$$

It is interesting to see that if $AB^*\cdot A^*B=AA^*\cdot BB^*$, i.e. in the ordered case, it yields $\lambda_2=0$.

Substituting A, B, C and D of Eq. (5) into Eq. (10), the eigenvalues are derived using the power terms $|S_{vv}|^2$, $|S_{hh}|^2$, $|S_{hv}|^2$ as follows,

$$\lambda_1=\frac{1}{2}\left(|S_{vv}|^2+|S_{hh}|^2\right)-\lambda_2,\qquad \lambda_2=\frac{1}{2}\,\frac{|S_{vv}|^2|S_{hh}|^2}{|S_{vv}|^2+|S_{hh}|^2}\,(1-\Delta),\qquad \lambda_3=|S_{hv}|^2 \tag{11}$$

where the configuration parameter

$$\Delta=\frac{\langle S_{vv}S_{hh}^*\rangle\langle S_{vv}^*S_{hh}\rangle}{\langle|S_{vv}|^2\rangle\langle|S_{hh}|^2\rangle} \tag{12}$$

varies from 0 for totally random media to 1 for ordered, i.e. non-random, media.

It can be seen that the first eigenvalue $\lambda_1$ indicates the total vv and hh co-polarized power minus their coherence, defined by the parameter $\Delta$ of Eq. (12), which varies from 1 in the ordered case to 0 in a totally random medium. The second eigenvalue $\lambda_2$ accounts for the vv and hh coherent power. As $\Delta$ approaches one, $\lambda_2\to 0$ for ordered media; as $\Delta$ approaches zero, $\lambda_2$ increases to indicate the loss of vv-hh coherency for totally random media. The third eigenvalue $\lambda_3$ is due to depolarization caused by the medium randomness.

The (p,q=v,h)-polarized backscattering coefficient is

$$\sigma_{pq}=4\pi\cos\theta\,|S_{pq}|^2 \tag{13}$$

Substituting Eq. (13) into Eqs. (11) and (12), the functions $P_i,\ i=1,2,3$ of Eq. (9) are then derived as

$$P_1=1-P_2-P_3,\qquad P_2=X(1-\Delta)(1+\delta)^{-1},\qquad P_3=\delta\,(1+\delta)^{-1} \tag{14}$$

where

$$X\equiv\frac{\sigma_{hh}/\sigma_{vv}}{\left(1+\sigma_{hh}/\sigma_{vv}\right)^2},\qquad \delta=\frac{2\sigma_{hv}}{\sigma_{vv}+\sigma_{hh}} \tag{15}$$

Note that all functions $P_i,\ i=1,2,3$ can now be calculated from the polarized backscattering coefficients $\sigma_{pq},\ p,q=v,h$, rather than from the full Mueller matrix solution via Eq. (7). Then the entropy H of Eq. (9) is calculated using $P_i$ of Eq. (14). It can be seen that $P_1$ is related to the total vv and hh co-polarized power $\sigma_{hh}+\sigma_{vv}$; $P_2$ is due to the difference between $\sigma_{hh}$ and $\sigma_{vv}$; and $P_3$ is due to the depolarization $\sigma_{hv}$. They are modulated by the medium configuration and randomness via the parameters $\Delta$ and $\delta$. As the medium becomes more random or disordered, $\delta$ increases and $\Delta$ approaches zero; vice versa, as the medium becomes more ordered, $\delta$ decreases and $\Delta$ increases. Thus, the relationship between the entropy H and the backscattering measurements $\sigma_{pq}\ (pq=vv,hh,hv)$ is established.
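This shortcut from backscattering coefficients to entropy can be sketched directly. The sigma values and Δ settings below are illustrative assumptions chosen only to contrast an ordered surface with a random canopy.

```python
import numpy as np

# Sketch of Eqs. (11)-(15) and (9): entropy H computed directly from the
# backscattering coefficients sigma_pq and the configuration parameter Delta.
def entropy_from_sigma(s_vv, s_hh, s_hv, Delta):
    """H from backscattering coefficients, following Eqs. (14)-(15) and (9)."""
    ratio = s_hh / s_vv
    X = ratio / (1 + ratio) ** 2                 # Eq. (15)
    delta = 2 * s_hv / (s_vv + s_hh)             # Eq. (15)
    P2 = X * (1 - Delta) / (1 + delta)           # Eq. (14)
    P3 = delta / (1 + delta)
    P1 = 1 - P2 - P3
    P = np.array([P1, P2, P3])
    return -np.sum(P * np.log(P) / np.log(3))    # Eq. (9), log base 3

# Ordered surface (Delta near 1, weak cross-pol) vs a more random canopy
H_ordered = entropy_from_sigma(1.0, 0.9, 0.01, Delta=0.9)
H_random = entropy_from_sigma(1.0, 0.9, 0.20, Delta=0.1)
print(H_ordered, H_random)
```

As expected from the discussion above, the larger $\delta$ and smaller $\Delta$ of the random case yield the higher entropy.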

Define the co-polarized and cross-polarized indexes, respectively, as

$$\mathrm{CPI}\equiv 10\log_{10}\left(\sigma_{hh}/\sigma_{vv}\right)\ \ \mathrm{(dB)},\qquad \mathrm{XPI}\equiv\delta \tag{16}$$

The entropy H is directly related to the backscattering indexes CPI and XPI via Eqs. (14)-(16).

Figure 2(a) presents an AirSAR image of the total power of $\sigma_{vv}$, $\sigma_{hh}$ and $\sigma_{vh}$ at L band. Image data from some typical areas, such as the lake, an island surface, sparse trees, and thick forest (see the frames in the figure), are chosen, as shown by the discrete points in Figure 2(b), and are compared with theoretical results of H, CPI and XPI. H is first rigorously calculated from the polarimetric data of the Mueller matrix, i.e. the eigenvalues of the coherency matrix $\bar{C}$. Using the AirSAR data $\sigma_{vv}$, $\sigma_{hh}$, $\sigma_{vh}$ and the averaged $\delta$ in each region, the lines are calculated with appropriate $\Delta$ to match the center of the discrete data.

Figure 2. An AirSAR image of total power $\sigma_{vv},\ \sigma_{hh},\ \sigma_{vh}$ at L band and entropy H vs CPI. Line 1: $\Delta=0.257$, $\delta=0.176$; Line 2: $\Delta=0.318$, $\delta=0.156$; Line 3: $\Delta=0.365$, $\delta=0.118$; Line 4: $\Delta=0.$, $\delta=0.$; Line 5: $\Delta=0.731$, $\delta=0.0265$.

## 3. SAR Imaging Simulation (MPA)

Given the sophisticated mechanisms of electromagnetic (EM) wave scattering and SAR signal collection, it is hard to tackle SAR imagery without the aid of forward simulation of scattering based on physical modeling. Simulation of SAR imaging presents a different mode of studying an inhomogeneous terrain scene under SAR observation. It can evaluate or predict the SAR observation, interpret the physical scattering behavior, and take account of the parameterized characteristics of terrain surface objects. Approaches to SAR image simulation in the previous literature can be classified into two categories: one focuses on the statistical characteristics of SAR images, the other on the physical scattering process. A natural scene is more complicated, including randomly and inhomogeneously distributed, penetrable or impenetrable objects, such as vegetation canopies, man-made structures and perturbed surface topography. In particular, volumetric scattering from penetrable scatterers, e.g. forests, crops, green plants etc., plays an essential or dominant role in SAR imagery. It is meaningful to develop a computerized simulation of SAR imaging over a comprehensive terrain scene with heterogeneous terrain objects.

We present a novel approach, the mapping and projection algorithm (MPA), for polarimetric SAR imaging simulation of comprehensive scenarios, which takes account of scattering, attenuation, shadowing and multiple scattering of spatially distributed volumetric and surface scatterers (Xu and Jin, 2006). In this approach, scattering contributions from scatterers are cumulatively summed on the slant range plane (the mapping plane) following geometric principles, while their extinction and shadowing effects are cumulatively multiplied on the ground range plane (the projection plane). This constructs a general imaging picture of the whole terrain scene via mapping and projection operations. The MPA speeds up the simulation of the whole process of scattering, extinction, mapping and projection in association with a grid partition of the comprehensive terrain scene. Our SAR simulation scheme incorporates polarimetric scattering, attenuation or shadowing of several typical terrain surfaces, as well as coherent speckle generation.

### 3.1. The mapping and projection algorithm (MPA)

Since the slant range is much larger than the synthetic aperture, the incidence angle to a given target is nearly invariant while the radar flies over the length of the synthetic aperture. Thus, in the imaging simulation, the whole scene is divided into cross-track lines along the azimuth dimension, and the scattering contributions from the terrain objects lying in each line (contained in the incidence plane) are calculated sequentially. This finally constructs a scattering picture, or scattering map, of the whole scene.

Following the VRT, we derive a general expression for the scattering power at range $r$ and azimuth position $x$ as

$$S(x,r)=\int_{\theta_0}^{\theta_1}\!\!\int\!\!\int\exp\left[-\int_{r_0}^{r}dr'\,\kappa_e^{+}(x,r',\theta)\right]P(x,r,\theta)\exp\left[-\int_{r_0}^{r}dr'\,\kappa_e^{-}(x,r',\theta)\right]r\,dx\,dr\,d\theta \tag{17}$$

where the phase function $P$ stands for scattering, and $\kappa_e^{-}(x,r,\theta)$, $\kappa_e^{+}(x,r,\theta)$ for the backward and forward extinction of a differential element $dv$ at position $(x,r,\theta)$ of the imaging space; $\theta$ is the incident angle. Integrating $S(x,r)$ over the $(n,i)$-th pixel, i.e. $x\in[x_n,x_{n+1})$, $s\in[s_i,s_{i+1})$, the corresponding discrete scattering map $S_{n,i}$ can be obtained. Considering polarimetric scattering, the computational discrete form of Eq. (17) for both volume and surface scatterers is given as

$$\bar{S}_{n,i}=\sum_{m=m_n}^{m_{n+1}-1}\sum_{p=p_i}^{p_{i+1}-1}\sum_{q=0}^{Q-1}\ \prod_{p'=0}^{p}\bar{E}_o^{+}(m,p',q)\,\bar{S}_o(m,p,q)\prod_{p'=p}^{0}\bar{E}_o^{-}(m,p',q) \tag{18}$$

$$\bar{S}_o(m,p,q)=\begin{cases} v_{ef}\,\bar{S}_{vol}(m,p,q) & \text{for a volume scatterer}\\ a_{ef}\,\bar{S}_{surf}(m,p,q) & \text{for a surface scatterer}\end{cases} \tag{19}$$

$$\bar{E}_o^{\pm}(m,p,q)=\begin{cases} \exp\left[-d_{ef}\,\bar{\kappa}_e^{\pm}(m,p,q)\right] & \text{for a volume scatterer}\\ 0 & \text{for a surface scatterer}\end{cases} \tag{20}$$

where $v_{ef}=\gamma v_0$ is the effective volume of a volume scatterer, calculated from the particle number $\gamma$ and single-particle volume $v_0$; $a_{ef}=\Delta x\Delta s$ is the effective area of a surface scatterer; $d_{ef}$ is the effective penetration depth. All of them take into account the proportion coefficients of each scatterer in the same discrete unit. The subscript $o$ denotes scattering or extinction of a single scatterer. Eq. (18) presents a general description of SAR imaging which deals with single scattering, attenuation or shadowing of all possible objects in the imaging space.

We further devise the mapping and projection algorithm (MPA) to rapidly compute Eq. (17). The first step is to partition the scene and the terrain objects lying in the horizontal plane $xoy$ into multiple grids. Each grid unit contains its corresponding parts of the terrain objects overhead, e.g. ground, vegetation, building, etc.

As shown in Figure 3, two arrays $\underline{E}^{\pm}$ and $\underline{S}$ are deployed for temporary storage of scattering and extinction (the under-bar denotes an array). The mapping plane and projection plane are equally divided into discrete cells corresponding to the elements of $\underline{S}$, $\underline{E}^{\pm}$. The attenuation or shadowing effect of each grid unit is counted in increasing sequence along the $y$ axis and cumulatively multiplied into the array $\underline{E}^{\pm}$.

The MPA calculating sequence for the terrain objects in one grid unit is: first, calculate the scattering power of the current terrain object and accumulate it onto the array $\underline{S}$, where the attenuation or shadowing from other terrain objects is obtained from the array $\underline{E}^{\pm}$; then, calculate the attenuation or shadowing caused by the current terrain object itself and cumulatively multiply it onto the array $\underline{E}^{\pm}$. When all grid units of the current line have been counted, the scattering map is produced in the array $\underline{S}$.

As shown in Figure 3, a vertical segment (length $L$) of volume scatterers above the current grid unit is first divided into $K$ sub-segments $L_k$ so that each sub-segment maps into one cell $SI_k$ of the mapping plane, while its mid-point is projected to cell $EI_k$ of the projection plane. After the mapping and projection, the elements of the array $\underline{S}$ are refreshed as

$$\underline{S}[SI_k]:=\underline{S}[SI_k]+\underline{E}^{+}[EI_k]\,\bar{S}_{ok}\,\underline{E}^{-}[EI_k],\qquad k=1,\dots,K \tag{21}$$

$\bar{S}_{ok}$ is the scattering of the $k$-th sub-segment. Assuming that volume scatterers always uniformly fill the whole sub-segment, the effective volume in Eq. (19) is $v_{ef}=\Delta x\Delta y L_k f_s$. Therefore, we have

$$\bar{S}_{ok}=\Delta x\Delta y L_k f_s\,\bar{S}_{vol}=v_{ef}\,\bar{S}_{vol} \tag{22}$$

where $f_s$ is the fractional volume of the scatterers. Otherwise, it is necessary to estimate the total number of scatterers $\gamma$ within the sub-segment and calculate the scattering from Eq. (19). The next step is to refresh the elements of the array $\underline{E}^{\pm}$ as

$$\underline{E}^{-}[EI_k]:=\bar{E}_{o,k}^{-}\,\underline{E}^{-}[EI_k],\qquad \underline{E}^{+}[EI_k]:=\underline{E}^{+}[EI_k]\,\bar{E}_{o,k}^{+},\qquad k=1,\dots,K \tag{23}$$

where $\bar{E}_{o,k}^{\pm}=\exp(-d_{ef}\,\bar{\kappa}_e^{\pm})$ is the attenuation of the $k$-th sub-segment. Due to the discrete calculation of attenuation, the array $\underline{E}^{\pm}$ should store the averaged attenuation. Thus, the ratio of the shadowed area of the sub-segment to the whole projection cell should be taken into account when counting the contribution of the sub-segment to the mean attenuation. The effective depth in $\bar{E}_{o,k}^{\pm}$ is calculated as

$$d_{ef}=\frac{v_{ef}}{R_xR_r/\tan\theta}=\frac{\Delta x\Delta y L_k}{R_xR_r/\tan\theta}=\tan\theta\,\frac{\Delta x\Delta y L_k}{R_xR_r} \tag{24}$$

It can be seen from the effective depth $d_{ef}$ that if all volume scatterers of the sub-segment were redistributed to cover the whole projection cell at unchanged density, the layer depth of the redistributed scatterers along the projection line would be the effective depth.
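The refresh rules of Eqs. (21) and (23) can be sketched with a scalar toy version of the two arrays (grid geometry and scatterer values below are illustrative assumptions; the real algorithm uses 4×4 Mueller matrices per cell):

```python
import numpy as np

# Scalar sketch of the MPA refresh rules (Eqs. 21, 23): scattering of each
# sub-segment is added into the mapping array S, and its attenuation is
# cumulatively multiplied into the projection arrays E+/E-.
n_cells = 8
S = np.zeros(n_cells)    # mapping plane (slant range)
Em = np.ones(n_cells)    # E-: cumulative attenuation along the incident path
Ep = np.ones(n_cells)    # E+: cumulative attenuation along the scattered path

def refresh(SI, EI, S_ok, E_ok):
    """Process one sub-segment: Eq. (21) then Eq. (23)."""
    S[SI] += Ep[EI] * S_ok * Em[EI]   # scattering seen through current attenuation
    Em[EI] = E_ok * Em[EI]            # Eq. (23), incident-path extinction
    Ep[EI] = Ep[EI] * E_ok            # Eq. (23), scattered-path extinction

# Two sub-segments of a crown projected onto the same projection cell EI = 3;
# the second one is shadowed by the attenuation accumulated from the first.
refresh(SI=2, EI=3, S_ok=0.5, E_ok=0.8)
refresh(SI=3, EI=3, S_ok=0.5, E_ok=0.8)

print(S[2], S[3], Em[3])
```

The second sub-segment contributes $0.8\times 0.5\times 0.8=0.32$ instead of 0.5, illustrating how shadowing emerges from the cumulative products in $\underline{E}^{\pm}$.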

In a similar way, larger surface scatterers, such as vertical walls and cliff slopes, are segmented according to the mapping cells. For nearly horizontal surfaces, e.g. ground or roof surfaces, the facet cut by one grid unit is small enough to be projected into a single projection cell, i.e. $K=1$. Hence, given the normal vector of the facet $\hat{n}$, the effective area and its scattering in Eq. (21) are calculated in two ways:

$$\bar{S}_{ok}=a_{ef}\,\bar{S}_{surf}=\begin{cases}\Delta x\Delta y\,\bar{S}_{surf}\cos\phi_2/\cos\phi_1 & K=1\\ \Delta x L_k\,\bar{S}_{surf}\cos\phi_2/\sin\phi_1 & K\neq 1\end{cases} \tag{25}$$

where $\phi_1$ is the angle between $\hat{n}$ and the $z$ axis; $\phi_2$ is the angle between $\hat{n}$ and the incidence direction $\hat{i}$; $L_k$ is the length of the sub-segment. Moreover, the array $\underline{E}^{\pm}$ should be set to zero over the whole projection interval from the first projection cell to the last one, i.e.

$$\underline{E}^{\pm}[j]:=0,\qquad EI_1\le j\le EI_K \tag{26}$$

The elements of the array $\underline{E}^{\pm}$ are initialized to unit matrices at the start of the calculation for each line, while the array $\underline{S}$ should be saved before being cleared to start the next line. Note that each line is strictly computed in the sequence of increasing $y$ in order to precisely count the attenuation caused by the terrain objects ahead.

Double and triple scatterings between terrain objects and the underlying ground are treated as shown in Figure 3. First, the corresponding ground patch of multiple scattering is located by the ray tracing technique. Then the length of the propagation path, i.e. the effective range $r_{ef}$, is given as

$$r_{ef}=\begin{cases}(r_{Object}+r_{Ground}+r_{Path})/2 & \text{for double scattering}\\ r_{Object}+r_{Path} & \text{for triple scattering}\end{cases} \tag{27}$$

In the same way, the multiple scattering of a segment of scatterers is mapped into the array $\underline{S}$ by $r_{ef}$. The same step of dividing into sub-segments is employed if the mapping interval exceeds one cell.

For double scattering, the attenuation or shadowing suffered along the propagation paths $r_{Object}$, $r_{Ground}$ is the same as that for single scattering of the terrain object and the ground patch, respectively. This means the array $\underline{E}^{\pm}$ can be used as well. The attenuation along the path from the terrain object to the ground patch, $r_{Path}$, is omitted for simplicity. Therefore, the double scattering contribution can be written as

$$\underline{S}[SI_{ef}]:=\underline{S}[SI_{ef}]+\underline{E}^{+}[EI_{Ground}]\,\bar{S}_{Ground2}^{+}\,\rho\,\bar{S}_{Object2}^{-}\,\underline{E}^{-}[EI_{Object}]+\underline{E}^{+}[EI_{Object}]\,\rho\,\bar{S}_{Object2}^{+}\,\bar{S}_{Ground2}^{-}\,\underline{E}^{-}[EI_{Ground}] \tag{28}$$

where $EI_{Ground}$, $EI_{Object}$ are the projection indices of the ground patch and the terrain object, respectively; $\bar{S}_{Ground2}^{\pm}$, $\bar{S}_{Object2}^{\pm}$ are the Mueller matrices of the ground patch and the terrain object along the forward and backward directions of double scattering, respectively. The coefficient $\rho=v_{ef}$ or $a_{ef}$ is calculated similarly to Eqs. (19), (22), (25).

For triple scattering (object-ground-object), the expression can be directly written as

$$\underline{S}[SI_{ef}]:=\underline{S}[SI_{ef}]+\underline{E}^{+}[EI_{Object}]\,\bar{S}_{Object2}^{+}\,\bar{S}_{Ground3}\,\rho\,\bar{S}_{Object2}^{-}\,\underline{E}^{-}[EI_{Object}] \tag{29}$$

where $\bar{S}_{Ground3}$ is the Mueller matrix of the ground patch along the triple scattering path, and $\rho=v_{ef}$ or $a_{ef}$.

The correct calculation sequence for each grid unit is: first count single scattering, then multiple scattering, and finally the unit's own attenuation or shadowing. Notice that the ground patch of multiple scattering could be located anywhere around the terrain object. For simplicity, we take the projection cell in the current incidence plane as an approximate substitute. Some other coding techniques are adopted to guarantee correct computation and make it more efficient.
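The effective-range bookkeeping of Eq. (27) reduces to a small helper; the path lengths below are illustrative values, not from the chapter:

```python
# Sketch of Eq. (27): the slant-range position (effective range) at which
# a double- or triple-scattering contribution is mapped into the array S.
def effective_range(r_object, r_ground, r_path, kind):
    """Round-trip phase-center range of a multiple-scattering path."""
    if kind == "double":
        return (r_object + r_ground + r_path) / 2.0
    elif kind == "triple":
        return r_object + r_path
    raise ValueError(kind)

# Illustrative path lengths (meters)
r_d = effective_range(100.0, 101.0, 3.0, "double")
r_t = effective_range(100.0, 101.0, 3.0, "triple")
print(r_d, r_t)
```

Because the double and triple terms land at different effective ranges, they produce the separated double- and triple-bounce returns seen behind buildings in the simulated images.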

### 3.2. Scattering models for terrain objects

For vegetation canopies, the VRT model of a layer of random non-spherical particles (Jin, 1994) is employed. Leaves, small twigs and thin stems are modeled as non-spherical dielectric particles under the Rayleigh or Rayleigh-Gans approximation, while branches, trunks and thick stems are modeled as dielectric cylinders. A tree model is composed of crown and trunk. The crown is a cloud of a simple geometrical shape containing randomly oriented non-spherical particles. The trunk is an upright cylinder with its top covered by the crown. Oblate or disk-like particles and an elliptic crown are adopted for broad-leaf forest, while prolate or needle-like particles and a cone-like crown are used for needle-leaf forest. In addition, the crown shapes take a small random perturbation. Similarly, a farm field is modeled as a layer of randomly oriented non-spherical particles with perturbed layer depth. If the grid units are small enough, the segment of crown or crops cut by one grid unit can be regarded as a segment fully filled with particles. The differences between the trunk and the crown are: (a) the trunk is solid within a grid unit, i.e. its fractional volume is set to 1; (b) double scattering between the trunk and the ground is taken into account.

Scattering from a building is treated as surface scattering from its wall and roof surfaces plus multiple interactions with the ground surface. The integral equation method (IEM) is employed to calculate surface scattering. The building is modeled as a parallelepiped with an isosceles triangular cylinder layered upon it. Due to the orientation of the wall and roof surfaces, the incident angle must be transformed into the local coordinates of each surface, as must the polar basis of wave propagation. Geometrical relationships among the wall surfaces, the roof surfaces, as well as the double and triple scatterings between the wall and the ground can be found in Franceschetti et al. (2002). Figure 2 illustrates the building model, its image and shadowing in SAR imagery.

For a ground facet, the tangent vectors along the $x$, $y$ axes on the current grid unit are used to construct the facet plane.

The MPA approach is based on the VRT theory of incoherent scattering power, not on coherent summation of scattering fields. However, speckle due to coherent interference is one of the most critical characteristics of SAR imagery, especially for studies of SAR image filtering and optimization. Assuming a Gaussian probability distribution of the scattering vector $k_L=[S_{hh},S_{hv},S_{vh},S_{vv}]^T$, a random scattering vector is generated as follows:

$$k_L=k_0\,\bar{C}^{1/2},\qquad k_{0i}=I_i+jQ_i,\quad i=1,\dots,4 \tag{30}$$

where $I_i$, $Q_i$ are independent Gaussian random numbers with zero mean and unit variance. Given the positive semidefinite property of $\bar{C}$, its square root can be obtained by Cholesky decomposition.
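Eq. (30) can be sketched numerically. The 2×2 toy coherency matrix below is an illustrative assumption (the actual algorithm uses the 4-D $k_L$); the sample covariance of the generated vectors recovers the prescribed matrix.

```python
import numpy as np

# Sketch of Eq. (30): generate speckled scattering vectors with a
# prescribed coherency (covariance) matrix via its Cholesky square root.
rng = np.random.default_rng(1)
Cbar = np.array([[1.0, 0.5], [0.5, 1.0]], dtype=complex)  # toy coherency matrix

L = np.linalg.cholesky(Cbar)   # Cbar = L L^+, valid since Cbar is PSD
N = 100000
# k0: i.i.d. circular complex Gaussians with unit variance per component
k0 = (rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2))) / np.sqrt(2)
kL = k0 @ L.conj().T           # each row now has covariance Cbar

C_est = kL.conj().T @ kL / N   # sample coherency, should approach Cbar
print(C_est.real.round(2))
```

The amplitude statistics of each component of `kL` then follow the Rayleigh-type fluctuations characteristic of fully developed speckle.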

### 3.3. Simulation results

The configurations and parameter settings of the radar and platform in the simulation are selected following the AIRSAR sensor of NASA/JPL.

A virtual comprehensive terrain scene is designed, as shown in Figure 5(a), according to a true DEM of Guangdong province, south China. It contains different types of forests covering the hills, crop farmland, ordered and random buildings in urban and suburban regions, roads and rivers.

A simulated scattering map (in decibels of normalized scattering power, from -50 dB to 0 dB; pseudo color: R-HH, G-VV, B-HV) at L band with 12 m resolution (pixel spacing 5 m) is displayed in Figure 5(b). The simulated SAR image at L band is shown in Figure 5(c).

Figures 6(a,b) give the simulated scattering map and SAR image at C band, respectively. At this higher frequency, vegetation scattering is significantly increased, while attenuation increases and the penetration depth decreases. As a result, trees appear thinner while the shadowing becomes darker.

At the upper-right corner, the building blocks appear as parallel lines, which reveals the dominance of double scattering in the urban area. On the other side, below the river, the buildings are oriented nearly 45 degrees to the radar flight direction. As a result, the cross-pol scattering (blue) is stronger, while double scattering is reduced. Timberlands can be classified into two classes: broad-leaf in yellow due to the balance between HH and VV scattering, and needle-leaf in green due to weaker HH scattering. The needle-leaf trees cause weaker attenuation of HH and therefore stronger HH scattering from the shadowed ground, which is why most areas of needle-leaf forest appear red. Overlay and shadowing effects caused by mountainous topography are particularly perceptible. Croplands are generally uniform and dense, and appear as patches in the image. Their VV scattering (green) is always stronger than HH. They have distinguishable brightness due to different density, sizes, shapes etc. More detailed discussion can be found in (Xu and Jin, 2006).

### 3.4. Bistatic SAR imaging

Bistatic SAR (BISAR), with transmitter and receiver separated on different platforms, has attracted great interest. Most BISAR research has focused on engineering realization and signal processing algorithms, with little on land-based bistatic experiments or theoretical modeling of bistatic scattering. The MPA provides a fast and efficient tool for monostatic imaging simulation involving the physical scattering processes of multiple terrain objects, such as vegetation canopies, buildings and rough ground surfaces, and it is extended to the bistatic case (Xu and Jin, 2008a).

Similar to the monostatic MPA, the bistatic MPA steps are as follows:

(1) The arrays $\underline{E}^{+}$, $\underline{E}^{-}$ are initialized as unit matrices, and $\underline{S}$ as zero matrices. Then visit the grid units sequentially along the $+y$ direction.

(2) Perform 3D projections of the current grid unit along the incidence and scattering directions to the cells $p_d$ and $p_u$ of $\underline{E}^{-}$ and $\underline{E}^{+}$, respectively.

(3) Determine the mapping position of the current grid unit based on its synthetic aperture and Doppler history, which corresponds to the cell $m$ of the array $\underline{S}$.

(4) Obtain or calculate the scattering matrix $\bar{S}_0$ and the upward/downward attenuation matrices $\bar{E}_0^{\pm}$ of the current grid unit.

(5) Refresh the elements of $\underline{S}$ as

$$\underline{S}[m]:=\underline{S}[m]+\underline{E}^{+}[p_u]\,\bar{S}_0\,\underline{E}^{-}[p_d] \tag{31}$$

and refresh the elements of $\underline{E}^{+}$, $\underline{E}^{-}$ as

$$\underline{E}^{-}[p_d]:=\bar{E}_0^{-}\,\underline{E}^{-}[p_d],\qquad \underline{E}^{+}[p_u]:=\underline{E}^{+}[p_u]\,\bar{E}_0^{+} \tag{32}$$

(6) Return to step (2) and continue with the next grid unit at the same $y$ coordinate or, if all grid units at this coordinate have been visited, step forward to a larger $y$ coordinate until the whole scene is exhausted.
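The six steps above can be sketched as a scalar toy loop; the grid contents and projection indices below are illustrative assumptions, with Mueller matrices replaced by scalars. The key bistatic feature is that each grid unit carries two distinct projection cells, $p_d$ for the incidence direction and $p_u$ for the scattering direction.

```python
import numpy as np

# Scalar sketch of the bistatic MPA, steps (1)-(6): separate projection
# arrays for the incidence (E-) and scattering (E+) directions, plus a
# mapping array S refreshed per Eqs. (31)-(32).
n = 6
S = np.zeros(n)
Em = np.ones(n)   # E-, incidence-direction attenuation (step 1)
Ep = np.ones(n)   # E+, scattering-direction attenuation (step 1)

# Each grid unit: (mapping cell m, proj. cells p_d / p_u, S0, E0-, E0+)
grid_units = [
    (1, 0, 2, 0.4, 0.9, 0.7),
    (2, 0, 2, 0.3, 0.9, 0.7),   # shadowed by the first unit in both directions
    (4, 3, 5, 0.5, 1.0, 1.0),   # unobstructed unit
]

for m, pd, pu, S0, E0m, E0p in grid_units:   # visited along +y (step 6)
    S[m] += Ep[pu] * S0 * Em[pd]             # step (5), Eq. (31)
    Em[pd] = E0m * Em[pd]                    # step (5), Eq. (32)
    Ep[pu] = Ep[pu] * E0p

print(S)
```

The second unit is attenuated along both the incidence and scattering paths ($0.7\times 0.3\times 0.9$), which is exactly the doubled shadowing mechanism discussed below.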

Figures 7 and 8 explain the bistatic imaging process of a tree and a building. One difference from the monostatic case is that the shadowing area is projected in both the incidence and scattering directions. Additionally, due to the split incidence and scattering directions, the object-ground and ground-object double scattering terms are different: one reflects at the ground and then diffusely scatters from the object, the other scatters at the object and then reflects from the ground. These two terms have different paths, which give rise to separated double-scattering images.

Figure 8. Composition of a tree and a house in a BISAR image.

A comprehensive virtual scene, as shown in Figure 9(a), is designed for simulation and analysis. Simulated BISAR images for the across-track (AT) and translational-invariant (TI) configurations are shown in Figures 9(b,c), respectively.

### 3.5. Unified bistatic polar bases and polarimetric analysis

It is found in the BISAR simulation that in the AT BISAR image, the bistatic scattering of terrain objects preserves polarimetric characteristics analogous to the monostatic case, while the major disparity is in the total scattering power.

However, the behavior of the polarimetric parameters is different in the image of the general TI mode. First, $H$ is generally higher, probably because a large bistatic angle is more likely to reveal the randomness and complexity of the object and makes the scattering energy more uniformly distributed over different polarizations. Second, almost all terrain surfaces show $\alpha\approx 45^\circ$. It is found that the scattering energy is concentrated in the 4-th component of the Pauli scattering vector $k_P$ all over the image.

It seems that $\alpha$ might lose its capability to represent the polarimetric characteristics under certain bistatic configurations. Generally speaking, the parameters $\alpha,\beta,\gamma$ of monostatic SAR imagery cannot reflect the polarimetric characteristics in the general TI case.

Conventional polarimetric parameters such as $\alpha,\beta,\gamma$ may largely depend on the angular settings of the sensors rather than on intrinsic properties of the target. It becomes inconvenient to interpret the physical meaning of polarimetric parameters when they are strongly entangled with the bistatic configuration.

As proposed in many studies, inverse problems of bistatic scattering are usually discussed in a coordinate system determined by the bisectrix. We believe it is important to first transform the conventional bistatic polar bases to new ones defined by the bisectrix. The unified bistatic polar bases are defined as

$$\hat{h}_i=\frac{\hat{b}\times\hat{k}_i}{|\hat{b}\times\hat{k}_i|},\qquad \hat{v}_i=\hat{h}_i\times\hat{k}_i,\qquad \hat{b}=\frac{\hat{k}_s-\hat{k}_i}{|\hat{k}_s-\hat{k}_i|} \tag{33}$$

where $\hat{b}$ denotes the bisectrix, defined along the bisector of the incident and scattered wave vectors in their common plane, as shown in Figure 10(a).

Figure 10. Unified bistatic polar bases and bistatic (TI) pseudo color images coded by NPC after the unified bistatic polar bases transform.
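Eq. (33) amounts to a few cross products. The incident and scattered directions below are illustrative assumptions chosen only to verify that the construction yields a right-handed orthonormal triad:

```python
import numpy as np

# Sketch of Eq. (33): unified bistatic polar bases built on the bisectrix
# b of the incident and scattered wave vectors.
def unit(x):
    return x / np.linalg.norm(x)

# illustrative geometry: incident wave travels downward, scattered wave upward
k_i = unit(np.array([0.3, 0.0, -1.0]))
k_s = unit(np.array([-0.5, 0.4, 1.0]))

b = unit(k_s - k_i)              # bisectrix, Eq. (33)
h_i = unit(np.cross(b, k_i))     # unified horizontal basis for incidence
v_i = np.cross(h_i, k_i)         # unified vertical basis

# (v_i, h_i, k_i) should form an orthonormal triad
print(np.dot(h_i, k_i), np.dot(v_i, k_i), np.dot(v_i, h_i))
```

The same construction with $\hat{k}_s$ in place of $\hat{k}_i$ gives the unified bases for the scattered wave.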

The relationship between the unified bistatic polar bases and the conventional polar bases can be described by a polar basis rotation, i.e. $E$, $\bar{S}$ defined in $(\hat{v}_i,\hat{h}_i,\hat{k}_i)$ are re-defined as $E'$, $\bar{S}'$ in $(\hat{v}'_i,\hat{h}'_i,\hat{k}_i)$. It is written as

$$E'_i=\bar{U}_iE_i,\qquad \bar{S}'=\bar{U}_s\,\bar{S}\,\bar{U}_i^{T},\qquad \bar{U}_i=\begin{bmatrix}\hat{v}'_i\cdot\hat{v}_i & \hat{v}'_i\cdot\hat{h}_i\\ \hat{h}'_i\cdot\hat{v}_i & \hat{h}'_i\cdot\hat{h}_i\end{bmatrix} \tag{34}$$

where $\bar{U}_i$ is the rotation matrix. Figure 10(b) gives the bistatic TI image after the transform of Eq. (34). The normalized Pauli components (NPC) are defined as $P_i=|k_{P,i}|/|k_P|,\ i=1,\dots,4$. The appearance of the NPC color-coded image after the transform agrees with the conventions of human vision.

We modify the definition of Cloude's parameters as

$$\mathbf{k}_P=|\mathbf{k}_P|\left[\cos\alpha\cos\gamma\,e^{j\phi_1},\ \sin\alpha\cos\beta\,e^{j\phi_2},\ \sin\alpha\sin\beta\,e^{j\phi_3},\ \cos\alpha\sin\gamma\,e^{j\phi_4}\right]^T \tag{35}$$

The redefined $\alpha,\beta,\gamma$ are calculated and plotted in Figure 11 for the bistatic image of Figure 10(b). It can be seen that the redefined $\alpha,\beta,\gamma$ are more capable of representing or classifying the polarimetric characteristics of different scattering types.

The redefinition of Cloude's parameters is only a preliminary attempt at bistatic polarimetric interpretation. It remains open for further systematic study and for verification using both simulation and experiments.

Figure 11. Redefined $\alpha,\beta,\gamma$ after the bistatic polar bases transform.

The MPA is also applied to the simulation of SAR images of the undulated lunar surface (Fa and Jin, 2009). Based on the statistics of the lunar cratered terrain, e.g. the population, dimensions and shapes of craters, the terrain features of the cratered lunar surface are numerically generated. According to the inhomogeneous distribution of the lunar surface slope, a triangulated irregular network is employed to build the digital elevation model of the lunar surface. The Kirchhoff approximation of surface scattering is then applied to the simulation of lunar surface scattering, and the SAR image of the cratered lunar surface is numerically generated. Making use of the digital elevation and Clementine UVVIS data at the Apollo 15 landing site as ground truth, an SAR image of the Apollo 15 landing site is simulated.

## 4. Terrain Surface Classification using De-Orientation Theory

Classification of complex terrain surfaces using polarimetric SAR imagery is one of the most important SAR applications. An unsupervised method based on the entropy $H$ and target decomposition parameters has been well developed by Cloude et al., which extracts the target decomposition parameter $\alpha$ and the entropy $H$ from eigen-analysis of the coherency matrix and constructs an unsupervised classification spectrum on the $\alpha$-$H$ plane.

Scattering from terrain targets functionally depends on the target orientation, shape, dielectric property, scattering mechanism, etc. Targets on complex terrain surfaces are often randomly oriented and cause randomly fluctuating echoes. It is difficult to classify randomly oriented and randomly distributed targets: different targets with different orientations can produce similar scattering and, vice versa, identical targets with random orientations can produce different scattering, which confuses the classification.

In this section, a transformation of the target scattering vector that rotates the target about the line of sight is derived. Deorientation is introduced to transform the target orientation into a fixed state with minimized cross-polarization (min-x-pol). Meanwhile, the angle $\psi$ is extracted to indicate the deviation of the target orientation from this min-x-pol state.

A set of new parameters $u,v,w$ is defined from the target scattering vector to indicate the ratio and phase difference of the two co-polarized (co-pol) backscattering terms, and the significance of the cross-polarized (x-pol) term. All these parameters, as well as the entropy $H$, are applied to the classification of random surface targets.

Numerical simulations of polarimetric scattering of a single small non-spherical particle, and a layer of random non-spherical particles above a rough surface are studied to show the effectiveness of the parametersu,v,ψ,Hand the capability ofu,v,Hfor classification of complex terrain surfaces (Xu and Jin, 2005).

As examples, the terrain surface classifications for a SIR-C and an AirSAR images are presented.

### 4.1. De-orientation and parameterization

From Eq. (1), the scattering vector is usually defined as

$$\mathbf{k}_P=\frac{1}{\sqrt{2}}\left[S_{hh}+S_{vv},\ S_{hh}-S_{vv},\ 2S_x\right]^T \quad\text{or}\quad \mathbf{k}_L=\left[S_{hh},\ \sqrt{2}S_x,\ S_{vv}\right]^T \tag{E36}$$

where $S_x\equiv S_{hv}=S_{vh}$. Cloude et al. defined the parameterization of $\mathbf{k}_P$ in terms of the parameters $\alpha,\beta$, etc. An analogous parameterization of $\mathbf{k}_L$ yields

$$a=\tan^{-1}\left(\frac{|S_{vv}|}{|S_{hh}|}\right),\qquad b=\frac{1}{2}\arg\left(\frac{S_{vv}}{S_{hh}}\right),\qquad c=\cos^{-1}\left(\frac{\sqrt{2}\,|S_x|}{|\mathbf{k}_L|}\right) \tag{E38}$$

Rotating the polarization basis by the angle $\psi$ about the line of sight, the electric field vector $\mathbf{E}$ becomes $\mathbf{E}'$:

$$\mathbf{E}'=[R]\,\mathbf{E},\qquad [R]=\begin{bmatrix}\cos\psi&\sin\psi\\-\sin\psi&\cos\psi\end{bmatrix} \tag{E39}$$

In the backscattering direction, this rotation transforms the scattering vector $\mathbf{k}_P$ into $\mathbf{k}'_P$, which can be expressed as

$$\mathbf{k}'_P=[U]\,\mathbf{k}_P,\qquad [U]=\begin{bmatrix}1&0&0\\0&\cos 2\psi&\sin 2\psi\\0&-\sin 2\psi&\cos 2\psi\end{bmatrix} \tag{E40}$$

Applying the minimization of cross polarization (min-x-pol) to $\mathbf{k}'_P$, i.e. letting

$$\frac{\partial|k'_{P,3}|^2}{\partial\psi}=0,\qquad \frac{\partial^2|k'_{P,3}|^2}{\partial\psi^2}\ge 0$$

the deorientation angle $\psi_m$ is obtained as

$$\psi_m=\left[\operatorname{sgn}\{\cos(\phi_2-\phi_3)\}\,\frac{\beta}{2}\right]_{\operatorname{mod}\,\pi/2} \tag{E41}$$

Such a rotation by the angle $\psi_m$ about the line of sight brings $\mathbf{k}_P$ to the min-x-pol state, i.e. the deoriented vector $\mathbf{k}_{Pd}$.

New parameters $u,v,w$ are then defined from the deoriented scattering vector of Eqs. (38,40) as

$$u=\sin c\,\cos 2a,\qquad v=\sin c\,\sin 2a\,\cos 2b,\qquad w=\cos c \tag{E42}$$

In the case of non-deterministic targets, the uncorrelated scattering vectors, i.e. the eigenvectors, are obtained through an eigen-analysis of the coherency matrix. The most significant eigenvector, i.e. the principal eigenvector, is taken as the representative scattering vector of a non-deterministic target; thus, the deorientation of non-deterministic targets is conducted only on the principal eigenvector.
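As a minimal numerical sketch (hypothetical function names; the example scattering matrix below is invented), the deorientation above can be reproduced by rotating the Pauli vector of Eq. (36) with Eq. (40) and locating the min-x-pol state by brute force; since the cross-pol power is periodic in $\psi$ with period $\pi/2$, searching $\psi\in[-\pi/4,\pi/4]$ suffices. The parameters of Eqs. (38) and (42) then follow from the deoriented vector.

```python
import numpy as np

def scattering_vector_kP(Shh, Svv, Sx):
    """Pauli scattering vector k_P of Eq. (36)."""
    return np.array([Shh + Svv, Shh - Svv, 2 * Sx]) / np.sqrt(2)

def rotate_kP(kP, psi):
    """Rotation of k_P about the line of sight by psi, Eq. (40)."""
    c, s = np.cos(2 * psi), np.sin(2 * psi)
    U = np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
    return U @ kP

def deorient(Shh, Svv, Sx):
    """Find the min-x-pol (deoriented) state; return (psi_m, u, v, w)."""
    kP = scattering_vector_kP(Shh, Svv, Sx)
    # the cross-pol power |k'_{P,3}|^2 has period pi/2 in psi
    psis = np.linspace(-np.pi / 4, np.pi / 4, 1801)
    xpol = [abs(rotate_kP(kP, p)[2]) ** 2 for p in psis]
    psi_m = psis[int(np.argmin(xpol))]
    kPd = rotate_kP(kP, psi_m)
    # back to lexicographic components of the deoriented target
    Shh_d = (kPd[0] + kPd[1]) / np.sqrt(2)
    Svv_d = (kPd[0] - kPd[1]) / np.sqrt(2)
    Sx_d = kPd[2] / np.sqrt(2)
    kL = np.sqrt(abs(Shh_d)**2 + 2 * abs(Sx_d)**2 + abs(Svv_d)**2)
    # parameters a, b, c of Eq. (38)
    a = np.arctan2(abs(Svv_d), abs(Shh_d))
    b = 0.5 * np.angle(Svv_d / Shh_d)
    c = np.arccos(np.sqrt(2) * abs(Sx_d) / kL)
    # new parameters u, v, w of Eq. (42)
    return (psi_m,
            np.sin(c) * np.cos(2 * a),
            np.sin(c) * np.sin(2 * a) * np.cos(2 * b),
            np.cos(c))
```

For a target rotated about the line of sight, $u,v,w$ stay unchanged while $\psi_m$ tracks the rotation, which is precisely the point of the deorientation.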

### 4.2. Numerical simulation

Based on the model of Figure 1, polarimetric scattering is calculated, and the scattering vector and Mueller matrix are obtained. The model of a layer of random non-spherical particles above a rough surface (Figure 1) is applied to numerical simulation of $u,v,H$ at L band in various cases.

Figures 12(a,b) show the numerical relationship between the parameters $u,v$ and the particle parameters: the Euler orientation angles and the particle shape (oblate or prolate spheroids). It can be seen that $|u|$ indicates the non-symmetry of the particle's projection along the line of sight; the particle's dielectric property affects $|u|$ via $|\varepsilon_s|$, while $\varepsilon_s''/\varepsilon_s'$ affects $v$.

Figure 12(a) presents the distributions of some typical terrain surfaces on the $|u|$-$H$ plane, obtained from the scattering model of Figure 1. It can be seen that $H$ indicates the complexity of the layer structures of terrain surfaces. The different dielectric properties of soil land and the oceanic surface also make $u$ identifiable on the $|u|$ axis.

Figure 12(b) presents the distribution of scattering terms on the $|u|$-$v$ plane, where $M_i$ denotes the $i$-th order scattering. From these figures it is concluded that $H$ indicates the canopy randomness, $u$ is useful for distinguishing different terrain targets, and $v$ helps to account for the scattering mechanisms of different orders.

### 4.3. Surface classification

The flow chart of the classification method for PolSAR data based on deorientation and the new parameters $u,v,\psi$ is shown in Figure 13. The classification decision tree of the parameters $u,v,H$ is displayed in Figure 14.

AirSAR data at L band over the Boreal area in Canada, with rich vegetation resources, are chosen for classification and orientation analysis, as shown in Figure 15(a). Figure 15(b) is the deorientation classification over this area. Terrain surfaces are divided into 8 classes following the decision tree: Timber, Urban 2, Urban 1, Canopy 2, Canopy 1, Surface 3, Surface 2 and Surface 1.

Figure 16 shows the data distributions on the $|u|$-$H$ planes for the parameter $v\ge 0.2$ and $v<0.2$, and the corresponding classification.

Figure 16. Classification by $u,v,H$ for 19 classes.

The orientation distributions of several classes are selected for display in Figures 17(a,b). It can be seen that

1. In Urban 1 (a sparse forest of vertical trunks, rather than artificial constructions), the orientations are uniformly vertical, indicating orderly trunks.

2. Random orientation in Canopy 1 means that the vegetation canopy in this area might be disordered bush. Note that regions with roads inside the forest might show randomness, confused by the bush vegetation on the roadsides.

Following the above orientation analysis, the terrain surfaces are further classified into subclasses and identified by their types.

### 4.4. Faraday rotation and Surface classification

To obtain moisture profiles in the subcanopy and subsurface, UHF/VHF bands (435 MHz, 135 MHz) are needed to penetrate the dense subcanopy (~20 kg/m² and more) with little scattering and reach the subsurface. However, as the wave passes through the anisotropic ionosphere under the action of the geomagnetic field, the polarization vectors of the electromagnetic wave are rotated; this is called Faraday rotation (FR). The FR depends on the wavelength, the electron density, the geomagnetic field, the angle between the direction of wave propagation and the geomagnetic field, and the wave incidence angle.

Assuming that the propagation direction is unchanged while passing through the homogeneous ionosphere, $ds=\sec\theta_i\,dz$ (where $\hat z$ is the normal to the surface), and that the geomagnetic field keeps its constant value at 400 km altitude, the FR angle $\Omega$ is simply written in terms of $\Theta_B$, the angle between the directions of electromagnetic wave propagation and the geomagnetic field, $\rho_e$, the total electron content per unit area ($10^{16}$ electrons/m²), and $B(400)$, the intensity of the geomagnetic field (T) at 400 km altitude.

The FR may decrease the difference between the co-pol backscattering terms, enhance cross-polarized echoes, and mix different polarized terms. Thus, satellite-borne SAR data at low frequency become distorted by the FR effect. Since the FR is proportional to the square of the wavelength, its impact is especially serious on SAR observation at frequencies lower than L band; the FR angle at P band can reach dozens of degrees.

As a polarized electromagnetic wave passes through the ionosphere, the scattering matrix $\bar S^F$ with FR (indicated by the superscript $F$) is written in terms of the scattering matrix $\bar S$ without FR as follows

$$\begin{bmatrix}S^F_{hh}&S^F_{hv}\\S^F_{vh}&S^F_{vv}\end{bmatrix}=\begin{bmatrix}\cos\Omega&\sin\Omega\\-\sin\Omega&\cos\Omega\end{bmatrix}\begin{bmatrix}S_{hh}&S_{hv}\\S_{hv}&S_{vv}\end{bmatrix}\begin{bmatrix}\cos\Omega&\sin\Omega\\-\sin\Omega&\cos\Omega\end{bmatrix}$$

$$S^F_{hh}=S_{hh}\cos^2\Omega-S_{vv}\sin^2\Omega,\qquad S^F_{vv}=-S_{hh}\sin^2\Omega+S_{vv}\cos^2\Omega$$

$$S^F_{hv}=S_{hv}+(S_{hh}+S_{vv})\sin\Omega\cos\Omega,\qquad S^F_{vh}=S_{hv}-(S_{hh}+S_{vv})\sin\Omega\cos\Omega \tag{E44}$$

The measured polarimetric data with FR, $\bar S^F$ or $\bar M^F$, are distorted and cannot be directly applied to terrain surface classification. However, the fully polarimetric 4×4-D $\bar M$ without FR can be inverted from $\bar M^F$, and it can be shown that the remaining $\pm\pi/2$ ambiguity error does not affect the classification parameters $u,v,H,\alpha,A$. Based on the intuitive assumption that the FR degree changes gradually with geographical position, a 2-D phase unwrapping method with a random benchmark is designed to eliminate the $\pm\pi/2$ ambiguity error. Examples show that fully polarimetric SAR data without FR at low frequency, such as P band, can be fully recovered (Qi and Jin, 2007).
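The forward FR distortion of Eq. (44) reduces to two rotation-matrix products; a minimal sketch (hypothetical function name, invented example values) is:

```python
import numpy as np

def faraday_distort(S, omega):
    """Distort a reciprocal 2x2 scattering matrix S = [[Shh, Shv], [Shv, Svv]]
    by a one-way Faraday rotation angle omega (rad), following Eq. (44)."""
    R = np.array([[np.cos(omega), np.sin(omega)],
                  [-np.sin(omega), np.cos(omega)]])
    return R @ S @ R
```

Note that $S^F_{hv}\ne S^F_{vh}$ after distortion: FR breaks the reciprocity of the backscattering matrix, and this antisymmetric part, $S^F_{hv}-S^F_{vh}=(S_{hh}+S_{vv})\sin 2\Omega$, is one observable signature of the rotation.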

## 5. Change Detection of Terrain Surface from Multi-Temporal SAR Images

Multi-temporal observations by SAR remote sensing imagery provide fast and practical technical means for surveying and assessing vast terrain changes. One direct application is to detect and classify information on changes in the terrain surfaces. It would be laborious to make an intuitive assessment of a huge amount of multi-temporal image data over a vast area, and such an assessment based on qualitative gray-level analysis is not accurate and might lose important information. How to detect and automatically analyze information on terrain surface change is a key issue in remote sensing.

In this section, a two-threshold EM and MRF algorithm (2EM-MRF) is developed to detect the change directions, i.e. the backscattering-enhanced, backscattering-reduced and unchanged regimes, from the SAR difference image (Jin and Luo, 2004; Jin and Wang, 2009).

On May 12, 2008, a major earthquake, measuring 8.0 on the Richter scale, jolted the Wenchuan area of southwestern China's Sichuan Province. To evaluate the damage and terrain surface changes caused by the earthquake, multi-temporal ALOS PALSAR image data acquired before and after the earthquake are applied to the detection and classification of the terrain surface changes. Using the Google Earth tools for surface mapping, the post-earthquake surface changes overlapped on the DEM topography can be demonstrated in multi-azimuth views as an animated cartoon. The detection and classification are also compared with optical photographs. It is proposed that multi-threshold EM and MRF analysis may become tractable when multi-polarization, multi-channel, multi-sensor and multi-temporal image data become available.

### 5.1. Two thresholds expectation maximum algorithm

Consider two co-registered images $X_1$ and $X_2$ of size $I\times J$ acquired at two different times $(t_1,t_2)$. Their difference image is $X=X_2-X_1$, and denote $X_D=\{X(i,j),\ 1\le i\le I,\ 1\le j\le J\}$ (in some cases, $X=X_2/X_1$ can also be used).

Usually, a one-threshold expectation maximization (EM) algorithm is employed to classify two opposite classes: the unchanged class $\omega_n$ and the changed class $\omega_c$. The probability density function $P(X),\ X\in X_D$ is modeled as a mixture distribution of two density components associated with $\omega_n$ and $\omega_c$, i.e.

$$p(X)=p(X|\omega_n)P(\omega_n)+p(X|\omega_c)P(\omega_c) \tag{E45}$$

Assume that both conditional probabilities $p(X|\omega_n)$ and $p(X|\omega_c)$ are modeled by Gaussian distributions. The iterative equations for estimating the statistical parameters and the a priori probability of the class $\omega_n$ are the following

$$P^{t+1}(\omega_n)=\frac{1}{IJ}\sum_{X(i,j)\in X_D}\frac{P^t(\omega_n)\,p^t(X(i,j)|\omega_n)}{p^t(X(i,j))} \tag{E46a}$$

$$\mu_n^{t+1}=\frac{\displaystyle\sum_{X(i,j)\in X_D}\frac{P^t(\omega_n)\,p^t(X(i,j)|\omega_n)}{p^t(X(i,j))}\,X(i,j)}{\displaystyle\sum_{X(i,j)\in X_D}\frac{P^t(\omega_n)\,p^t(X(i,j)|\omega_n)}{p^t(X(i,j))}} \tag{E46b}$$

$$(\sigma_n^2)^{t+1}=\frac{\displaystyle\sum_{X(i,j)\in X_D}\frac{P^t(\omega_n)\,p^t(X(i,j)|\omega_n)}{p^t(X(i,j))}\,\left[X(i,j)-\mu_n^t\right]^2}{\displaystyle\sum_{X(i,j)\in X_D}\frac{P^t(\omega_n)\,p^t(X(i,j)|\omega_n)}{p^t(X(i,j))}} \tag{E46c}$$

where the superscripts $t$ and $t+1$ denote the current and the next iteration.

Analogously, these equations can also be used to estimate $P(\omega_c)$, $\mu_c$ and $\sigma_c^2$.

The estimates are computed by iterating the above equations from initial values until convergence. The initial values of the estimated parameters can be obtained from an analysis of the histogram of the difference image: a pixel subset $S_n$ likely to belong to $\omega_n$ and a pixel subset $S_c$ likely to belong to $\omega_c$ can be obtained by preliminarily selecting two thresholds $T_n$ and $T_c$ on the histogram. This is equivalent to solving for the ML boundary $T_o$ between the two classes $\omega_n$ and $\omega_c$ on the difference image. The optimal threshold $T_o$ requires

$$\frac{P(\omega_c)}{P(\omega_n)}=\frac{p(T_o|\omega_n)}{p(T_o|\omega_c)} \tag{E47}$$

Under the assumption of Gaussian distributions, Eq. (47) yields

$$(\sigma_n^2-\sigma_c^2)T_o^2+2(\mu_n\sigma_c^2-\mu_c\sigma_n^2)T_o+\mu_c^2\sigma_n^2-\mu_n^2\sigma_c^2+2\sigma_n^2\sigma_c^2\ln\!\left[\frac{\sigma_c P(\omega_n)}{\sigma_n P(\omega_c)}\right]=0 \tag{E48}$$

from which the optimal $T_o$ is obtained.
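The EM iterations of Eqs. (46a-c) and the ML threshold of Eq. (48) can be sketched in one dimension as follows (hypothetical function names, synthetic samples; the variance update uses the freshly updated mean, which converges to the same fixed point):

```python
import numpy as np

def gauss(x, mu, var):
    """Gaussian density used for p(X|w)."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def em_two_class(X, mu_n, mu_c, var_n, var_c, P_n=0.5, n_iter=100):
    """EM iterations of Eqs. (46a-c) for the two-class Gaussian mixture
    (unchanged w_n, changed w_c) on difference-image samples X."""
    X = np.asarray(X, dtype=float)
    P_c = 1.0 - P_n
    for _ in range(n_iter):
        g_n = P_n * gauss(X, mu_n, var_n)
        g_c = P_c * gauss(X, mu_c, var_c)
        p = g_n + g_c
        r_n, r_c = g_n / p, g_c / p                # posterior weights
        P_n, P_c = r_n.mean(), r_c.mean()          # Eq. (46a)
        mu_n = (r_n * X).sum() / r_n.sum()         # Eq. (46b)
        mu_c = (r_c * X).sum() / r_c.sum()
        var_n = (r_n * (X - mu_n) ** 2).sum() / r_n.sum()  # Eq. (46c)
        var_c = (r_c * (X - mu_c) ** 2).sum() / r_c.sum()
    return P_n, mu_n, var_n, P_c, mu_c, var_c

def ml_threshold(P_n, mu_n, var_n, P_c, mu_c, var_c):
    """Solve the quadratic Eq. (48) for the ML boundary To between w_n, w_c."""
    A = var_n - var_c
    B = 2.0 * (mu_n * var_c - mu_c * var_n)
    C = (mu_c**2 * var_n - mu_n**2 * var_c
         + 2.0 * var_n * var_c
         * np.log(np.sqrt(var_c) * P_n / (np.sqrt(var_n) * P_c)))
    lo, hi = sorted((mu_n, mu_c))
    # keep the root lying between the two class means
    return next(r.real for r in np.roots([A, B, C]) if lo <= r.real <= hi)
```

Of the two roots of Eq. (48), only the one lying between the class means is the usable decision boundary.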

Figures 18(a,b) are the ALOS PALSAR images (L band, HH polarization) acquired on February 17 and May 19, 2008, before and after the earthquake in Beichuan County of the Wenchuan area, respectively. The spatial resolution is 4.7 m × 4.5 m. Figure 18(c) is the difference image of Figures 18(a,b); it is very difficult to evaluate the terrain surface changes from it by human vision alone, especially to accurately classify the change classes on a large scale.

The pixels of the difference image (i.e. $X_D=\sigma_D^0=\sigma_2^0-\sigma_1^0$) are classified into three classes: $\omega_{c1}$ of $\sigma_D^0$-enhanced, $\omega_n$ of $\sigma_D^0$-unchanged and $\omega_{c2}$ of $\sigma_D^0$-reduced. Thus, the probability density function $P(X)$ is modeled as a mixture density distribution consisting of three components:

$$p(X)=p(X|\omega_{c1})P(\omega_{c1})+p(X|\omega_n)P(\omega_n)+p(X|\omega_{c2})P(\omega_{c2}) \tag{E49}$$

The threshold $T_{o1}$ is first obtained by applying the EM algorithm to the enhanced class $\omega_{c1}$ and the non-enhanced class $\omega_{n1}=\omega_n\cup\omega_{c2}$. Then, $T_{o2}$ is obtained by applying the EM algorithm to the reduced class $\omega_{c2}$ and the non-reduced class $\omega_{n2}=\omega_n\cup\omega_{c1}$. Finally, the two results are superposed to obtain the final classification.

Figure 19. The histogram of the difference image and change detection using the 2EM method.

As an example, Figure 19(a) shows the grayscale histogram of a small part of the difference image, Figure 18(c). Three classes of probability distributions are outlined in this figure. The histogram is normalized in accordance with the probability density distributions.

The initial statistical parameters of the subsets

$$S_{n1}=\{X(i,j)\,|\,0\le X(i,j)\le T_{n1}\},\qquad S_{c1}=\{X(i,j)\,|\,X(i,j)\ge T_{c1}\}$$

are first derived, followed by the parameters of

$$S_{n2}=\{X(i,j)\,|\,T_{n2}\le X(i,j)\le 0\},\qquad S_{c2}=\{X(i,j)\,|\,X(i,j)\le T_{c2}\} \tag{E50}$$

The initial statistical parameters are related to the classes $\omega_{n1},\omega_{c1}$ and the classes $\omega_{n2},\omega_{c2}$, respectively. Then, Eqs. (46a-c) are sequentially used to perform the iterations on the four classes, i.e. the a priori probabilities and other statistical parameters $[P(\omega_{n1}),\mu_{n1},\sigma_{n1}^2]$, $[P(\omega_{c1}),\mu_{c1},\sigma_{c1}^2]$, $[P(\omega_{n2}),\mu_{n2},\sigma_{n2}^2]$ and $[P(\omega_{c2}),\mu_{c2},\sigma_{c2}^2]$. Solving Eq. (48), the thresholds $T_{o1}=3.1643$ dB and $T_{o2}=-2.8381$ dB are obtained.

Figure 19(b) is the change detection using the EM algorithm, where red indicates areas with enhanced scattering, blue indicates areas with reduced scattering, and green denotes no change. This yields a three-class change map of the difference image.

### 5.2. The change detection using 2EM-MRF

Actually, the pixels of the image are spatially correlated: a pixel belonging to the class $\omega_k$ is likely to be surrounded by pixels of the same class. Taking account of the spatial texture may therefore yield more reliable and accurate change detection.

Let the set $C=\{C_\ell,\ 1\le\ell\le L\}$ represent the possible sets of labels in the difference image:

$$C=\{C(i,j),\ 1\le i\le I,\ 1\le j\le J\},\qquad C(i,j)\in\{\omega_{c1},\omega_{c2},\omega_n\} \tag{E51}$$

where $L=3^{IJ}$. According to the Bayes rule for minimum error, the classification result should maximize the posterior conditional probability,

$$C_k=\arg\max_{C}\{P(C|X_D)\}=\arg\max_{C}\{P(C)\,p(X_D|C)\} \tag{E52}$$

where $P(C)$ is the prior model for the class labels, and $p(X_D|C)$ is the joint density function of the pixel values in the difference image $X_D$ given the set of labels $C$.

The MRF algorithm is employed with a spatial neighborhood system of 5×5 pixels to take account of the spatial texture. The generation of the final change-detection map involves labeling all the pixels in the difference image so that the posterior probability of Eq. (52) is maximized.

The MRF algorithm is equivalent to the minimization of a Gibbs energy function and is carried out iteratively.
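The chapter does not spell out the Gibbs minimization, so the following is only a hedged stand-in: an iterated-conditional-modes (ICM) sweep with Gaussian class likelihoods and a Potts-type smoothness prior over the 5×5 neighborhood, one common way of approximately maximizing Eq. (52). All names and parameter values are hypothetical.

```python
import numpy as np

def icm_relabel(X, labels, means, varis, beta=1.0, n_iter=5, radius=2):
    """Iterated-conditional-modes sweep: each pixel takes the label that
    minimizes (negative Gaussian log-likelihood) + beta * (number of
    disagreeing labels in the (2*radius+1)^2 window; 5x5 for radius=2).
    The window includes the centre pixel, which mildly favours keeping
    the current label."""
    H, W = X.shape
    lbl = labels.copy()
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                i0, i1 = max(0, i - radius), min(H, i + radius + 1)
                j0, j1 = max(0, j - radius), min(W, j + radius + 1)
                nbh = lbl[i0:i1, j0:j1]
                best, best_e = lbl[i, j], np.inf
                for k in range(len(means)):
                    e_data = (0.5 * np.log(2 * np.pi * varis[k])
                              + (X[i, j] - means[k]) ** 2 / (2 * varis[k]))
                    e_prior = beta * np.sum(nbh != k)
                    if e_data + e_prior < best_e:
                        best, best_e = k, e_data + e_prior
                lbl[i, j] = best
    return lbl
```

Isolated mislabeled pixels from a per-pixel threshold are flipped back by the smoothness term, which is the practical effect of the MRF step on the change map.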

Figure 20(a) presents the final result of 2EM-MRF for detection and classification of the terrain surface changes from the difference image, Figure 18(c). The numbers 1~8 indicate some areas with typical changes.

It can be seen that, for example, in area 1 the river was largely blocked by a landslide; in area 2, there were landslides causing large-scale blockages; area 3 is Beichuan town, where the terrain surface was significantly undulated or roughened due to landslides and building collapse; in areas 4 and 5 the river was blocked by landslides along the river lines; in area 6 the highway was significantly blocked by a landslide; in area 7 there were collective landslides almost toward one azimuth direction, and the flat area with reduced scattering along the river might be due to seasonally risen water in May compared with February; in area 8 the collapse of mountain blocks made the terrain surface undulating, enhancing the scattering.

Figure 20(b) gives the topography and contour lines of the same area from the Google website. It is useful for locating where and what kind of changes happened. It can be seen that the landslides, e.g. in areas 1 and 7, are correlated with the direction and magnitude of the slopes. A quantitative analysis to assess the landslides and surface damage can be further developed.

Figure 20. Automatic detection and classification of terrain surface changes using 2EM-MRF.

As an example, Figure 20(c) compares the change in region 3 with a photograph released to the public on the website of the Ministry of the State Resources of China. Both figures clarify that A shows landslides that smoothed the surface, and B shows the blocked highway. It can be seen that optical shadowing does not actually confuse these classifications.

Using the Google Earth mapping tool with the 2EM-MRF results, the terrain surface changes classified into three types and overlapped on the DEM topography can be shown in multi-azimuth views as animated cartoons. Since the whole 2EM-MRF process is automatic and carried out in real time, it should be helpful, especially for commanding rescue work at a disaster scene. Figure 21 shows four views.

It can be seen that information retrieval from a single amplitude image (scattered power) with one frequency and mono-polarization at a single time is very limited. With fully polarimetric technology, for example, using target decomposition theory, the deorientation transform, etc., the physical status of terrain surfaces such as vegetation canopy, roads, buildings and rivers can be well classified, which is greatly helpful to change detection. The four Stokes parameters of polarimetric measurement make it feasible to show surface slope variation and anisotropy, and InSAR has been well applied to the retrieval of surface digital elevation. All of this progress shows superiority over single-amplitude-image analysis and manual visual estimation. It is also helpful to fuse CFAR (constant false alarm rate) detection of objects, e.g. edges and blocks, from a SAR image.

As multi-polarization, multi-channel and multi-temporal image data become available, 2EM-MRF can be further extended to use the principal characteristic parameters of the difference image, such as VV+HH, VV-HH, entropy and the angles $\alpha,\beta,\gamma,\psi$, for change detection and classification. More information about terrain surface changes can then be retrieved.

## 6. DEM Inversion from a Single POL-SAR Image

The co-polarized or cross-polarized backscattering signature is a function of the incident wave with ellipticity angle $\chi$ and orientation angle $\psi$. Recently, polarimetric InSAR image data have been utilized to generate digital surface elevation and to invert terrain topography. When the terrain surface is flat, the polarimetric scattering signature largely has its maximum at the orientation angle $\psi=0$. However, it has been shown that as the surface is tilted, the orientation angle $\psi$ at the maximum of the co-polarized (co-pol) or cross-polarized (cross-pol) signature can shift from $\psi=0$. This shift can be applied to derive the surface slopes. Making use of the assumption of real co-pol and zero cross-pol scattering amplitude functions, the $\psi$ shift is expressed by the real scattering amplitude functions. Since the range and azimuth angles are coupled, two- or multi-pass SAR image data are required to solve for the two unknown surface slopes. This approach has been well demonstrated for inversion of digital elevation mapping (DEM) and terrain topography using airborne SAR data.

However, the scattering signature is an ensemble average of the echo power from random scatter media. The measurable Stokes parameters, as polarized scattering intensities, should be directly related to the $\psi$ shift. In this section, using the Mueller matrix solution, the $\psi$ shift is newly derived as a function of three Stokes parameters, $I_{vs}, I_{hs}, U_s$, which are measurable by polarimetric SAR imagery.

Using the Euler angle transformation between the principal and local coordinates, the orientation angle $\psi$ is related to both the range and azimuth angles, $\beta$ and $\gamma$, of the tilted surface pixel and the radar viewing geometry. These results are consistent with Lee et al. (1998) and Schuler (2000), but are more general.

It is proposed that the linear texture of the tilted surface alignment be used to specify the azimuth angle $\gamma$. The adaptive threshold method and an image morphological thinning algorithm are applied to determine the azimuth angle $\gamma$ from the image's linear textures. The range angle $\beta$ is then solved, and both $\beta$ and $\gamma$ are utilized to obtain the azimuth and range slopes. Then, the full multi-grid algorithm is employed to solve the Poisson equation of the DEM and produce the terrain topography from a single-pass polarimetric SAR image (Jin, 2005; Jin and Luo, 2003).

### 6.1. The Ψ shift as a function of the Stokes parameters

Consider a polarized electromagnetic wave incident upon the terrain surface at $(\pi-\theta_i,\varphi_i)$. The incident polarization is defined by the ellipticity angle $\chi$ and the orientation angle $\psi$ (Jin, 1994). The 2×2-D complex scattering amplitude functions are obtained from the polarimetric measurement; the incident Stokes vector and the co-pol backscattering coefficient are written as

$$\bar I_i(\chi,\psi)=[I_{vi},I_{hi},U_i,V_i]^T=\left[\tfrac{1}{2}(1-\cos 2\chi\cos 2\psi),\ \tfrac{1}{2}(1+\cos 2\chi\cos 2\psi),\ \cos 2\chi\sin 2\psi,\ \sin 2\chi\right]^T \tag{E53}$$

$$\sigma_c=4\pi\cos\theta_i\,P_n \tag{E54}$$

where

$$P_n=0.5\left[I_{vs}(1-\cos 2\chi\cos 2\psi)+I_{hs}(1+\cos 2\chi\cos 2\psi)+U_s\cos 2\chi\sin 2\psi+V_s\sin 2\chi\right] \tag{E55}$$

When the terrain surface is flat, the co-pol backscattering $\sigma_c$ versus the incident polarization $(\chi,\psi)$ has its maximum at $\psi=0$. However, it has been shown that as the surface is tilted, the orientation angle $\psi$ of the $\sigma_c$ maximum shifts away from $\psi=0$.

Letting $\partial P_n/\partial\psi=0$ at the maximum of $\sigma_c$, with $\chi=0$ for the symmetric case, yields

$$0=-(I_{hs}-I_{vs})\sin 2\psi+U_s\cos 2\psi+0.5(I_{hs}+I_{vs})'+0.5\,U_s'\sin 2\psi+0.5(I_{hs}-I_{vs})'\cos 2\psi \tag{E56}$$

where the prime superscript denotes $\partial/\partial\psi$. It can be shown that

$$0.5(I_{hs}+I_{vs})'\sim(M_{13}+M_{23})\cos 2\psi\sim(\operatorname{Re}S_{vv}S_{vh}^*+\operatorname{Re}S_{hv}S_{hh}^*)\cos 2\psi \tag{E57a}$$
$$0.5\,U_s'\sim M_{33}\cos 2\psi\sim(\operatorname{Re}S_{vv}S_{hh}^*+S_{vh}S_{hv}^*)\cos 2\psi \tag{E57b}$$
$$(I_{hs}-I_{vs})\sim 0.5(M_{11}+M_{22})\cos 2\psi\sim 0.5(|S_{vv}|^2+|S_{hh}|^2)\cos 2\psi \tag{E57c}$$
$$0.5(I_{hs}-I_{vs})'\sim 0.5(M_{13}-M_{23})\cos 2\psi\sim 0.5(\operatorname{Re}S_{vv}S_{vh}^*-\operatorname{Re}S_{hv}S_{hh}^*)\cos 2\psi \tag{E58a}$$
$$U_s\sim 0.5(M_{31}+M_{32})\sim\operatorname{Re}S_{vv}S_{hv}^*+\operatorname{Re}S_{vh}S_{hh}^* \tag{E58b}$$

It can be seen that (57a) and (57b) are much smaller than (57c), and (58a) is much smaller than (58b), so the last three terms on the RHS of Eq. (56) can be neglected. Thus, the $\psi$ shift at the $\sigma_c$ maximum is expressed by the Stokes parameters as

$$\tan 2\psi=\frac{U_s}{I_{hs}-I_{vs}} \tag{E59}$$

It can be seen that a non-zero third Stokes parameter, $U_s\ne 0$, does cause the $\psi$ shift.

By the way, if both $U_s$ and $I_{hs}-I_{vs}$ approach zero, e.g. for scattering from uniformly oriented scatterers or isotropic scatter media such as a thick vegetation canopy, $\psi$ cannot be well defined by Eq. (59).
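Eq. (59) reduces to a one-line computation; the helper below (hypothetical name) also returns `None` in the degenerate case just mentioned:

```python
import numpy as np

def psi_shift(Ihs, Ivs, Us, eps=1e-12):
    """Orientation-angle shift of Eq. (59): tan(2 psi) = Us / (Ihs - Ivs).
    Returns None when both Us and Ihs - Ivs vanish (e.g. isotropic media),
    where Eq. (59) leaves psi undefined."""
    d = Ihs - Ivs
    if abs(Us) < eps and abs(d) < eps:
        return None
    return 0.5 * np.arctan2(Us, d)
```

Using `arctan2` keeps the correct branch when $I_{hs}-I_{vs}$ passes through zero while $U_s$ does not.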

### 6.2. The range and azimuth slopes and DEM inversion

The polarization vectors $\hat h_i,\hat v_i$ of the incident wave at $(\pi-\theta_i,\phi_i)$ in the principal coordinates $(\hat x,\hat y,\hat z)$ are defined as (Jin, 1994)

$$\hat h_i=\frac{\hat z\times\hat k_i}{|\hat z\times\hat k_i|}=-\sin\phi_i\,\hat x+\cos\phi_i\,\hat y \qquad\text{and}\qquad \hat v_i=\hat h_i\times\hat k_i \tag{E60}$$

where the incident wave vector is $\hat k_i=\sin\theta_i\cos\phi_i\,\hat x+\sin\theta_i\sin\phi_i\,\hat y-\cos\theta_i\,\hat z$.

As the pixel surface is tilted with a local slope, the polarization vectors should be re-defined following the local normal vector $\hat z_b$ as follows

$$\hat h_b=\frac{\hat z_b\times\hat k_{ib}}{|\hat z_b\times\hat k_{ib}|} \qquad\text{and}\qquad \hat v_b=\hat h_b\times\hat k_{ib} \tag{E61}$$

By using the transformation of the Euler angles $(\beta,\gamma)$ between the two coordinate systems $(\hat x,\hat y,\hat z)$ and $(\hat x_b,\hat y_b,\hat z_b)$ (Jin, 1994), one has

$$\hat x=\cos\gamma\cos\beta\,\hat x_b+\sin\gamma\,\hat y_b-\cos\gamma\sin\beta\,\hat z_b$$
$$\hat y=-\sin\gamma\cos\beta\,\hat x_b+\cos\gamma\,\hat y_b+\sin\gamma\sin\beta\,\hat z_b$$
$$\hat z=\sin\beta\,\hat x_b+\cos\beta\,\hat z_b \tag{E62}$$

Substituting the above relations into Eqs. (60,61) yields

$$\hat h_i=-\cos\beta\sin(\gamma+\phi_i)\,\hat x_b+\cos(\gamma+\phi_i)\,\hat y_b+\sin\beta\sin(\gamma+\phi_i)\,\hat z_b \tag{E63}$$

$$\hat k_{ib}=\left[\cos\beta\sin\theta_i\cos(\gamma+\phi_i)-\sin\beta\cos\theta_i\right]\hat x_b+\sin\theta_i\sin(\gamma+\phi_i)\,\hat y_b-\left[\sin\beta\sin\theta_i\cos(\gamma+\phi_i)+\cos\beta\cos\theta_i\right]\hat z_b \tag{E64}$$

Using Eq. (64) in Eq. (61), the polarization vector $\hat h_b$ for the local surface pixel is written as

$$\hat h_b=\frac{-a\,\hat x_b+b\,\hat y_b}{\sqrt{a^2+b^2}} \tag{E65}$$

where $a=\sin\theta_i\sin(\gamma+\phi_i)$ and $b=\sin\theta_i\cos\beta\cos(\gamma+\phi_i)-\sin\beta\cos\theta_i$. Thus, Eqs. (63,65) yield the orientation angle as

$$\cos\psi=\hat h_b\cdot\hat h_i=\frac{\cos\beta\sin\theta_i-\sin\beta\cos\theta_i\cos(\gamma+\phi_i)}{\sqrt{a^2+b^2}},\qquad \tan\psi=\frac{\tan\beta\sin(\gamma+\phi_i)}{\sin\theta_i-\cos\theta_i\tan\beta\cos(\gamma+\phi_i)} \tag{E66}$$

Thus, the range and azimuth slopes of the pixel surface can be obtained as

$$S_R=\tan\beta\cos\gamma,\qquad S_A=\tan\beta\sin\gamma \tag{E67}$$

Since a single $\psi$ shift cannot simultaneously determine the two unknowns $\beta$ and $\gamma$ in Eq. (66), two- or multi-pass SAR image data are usually needed.

If only single-pass SAR image data are available, one of the two unknown angles, $\beta$ or $\gamma$, must be determined first. The azimuth alignment of tilted surface pixels can be visualized as a good indicator of the azimuth direction. We apply the adaptive threshold method and an image morphological thinning algorithm to specify the azimuth angle at all SAR image pixels. The algorithm contains the following steps:

(1) Apply speckle filtering over the entire image.

(2) Apply the adaptive threshold method to produce a binary image. A global threshold value is not adopted because of the heterogeneity of the image pixels.

(3) Apply image morphological processing to the binary image: remove isolated pixels and fill small holes. Referring to the binary '1' part as the foreground and the binary '0' part as the background, the edges of the foreground are extracted.

(4) Each pixel on the edge is set as the center of a square 21×21 window, and the curve segment through the centered pixel is obtained. Then, applying the polynomial algorithm for fitting the curve segment in the least-squares sense, the tangential slope of the centered pixel is obtained; it yields the azimuth angle of the centered pixel. A mark is made on that pixel so that it is not calculated in the next turn.

(5) Removing the edge in Step (4) from the foreground, a new foreground is formed. Repeat Step (4) until the azimuth angle of every pixel in the initial foreground is determined.

(6) Make the complementary binary image, i.e. the initial background now becomes the foreground. Then, the Steps (4) and (5) are repeated to this complementary image until the azimuth angle of every pixel in the initial background is determined.

This approach provides supplementary information to first determine the angle $\gamma$ over the whole image area when no other information is available.
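The slope-fitting part of Step (4) can be sketched as follows (window extraction and masking are omitted; the helper name is hypothetical):

```python
import numpy as np

def azimuth_from_segment(xs, ys, x0, deg=2):
    """Step (4) sketch: least-squares polynomial fit of a curve segment
    (edge-pixel coordinates inside the 21x21 window), then the azimuth
    angle of the centred pixel x0 from the tangential slope."""
    coeff = np.polyfit(xs, ys, deg)          # least-squares fit
    slope = np.polyval(np.polyder(coeff), x0)  # tangential slope at x0
    return np.arctan(slope)
```

A low polynomial degree (here 2) keeps the tangent estimate stable on short, noisy edge segments.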

As an example of DEM inversion, L-band polarimetric SAR data are shown in Figure 23(a). Once the azimuth angle $\gamma$ of each pixel is obtained by the adaptive threshold method and the thinning algorithm described in the above steps, the $\beta$ angle of each pixel can be determined by Eq. (66b). Taking $\phi_i=0$, it yields

$$\tan\beta=\frac{\tan\psi\sin\theta_i}{\sin\gamma+\tan\psi\cos\theta_i\cos\gamma} \tag{E68}$$

where the orientation angle $\psi$ is calculated using Eq. (59), while the incidence angle $\theta_i$ is determined by the SAR viewing geometry.

Substituting $\beta$ and $\gamma$ into Eq. (67), the azimuth slope $S_A$ and the range slope $S_R$ are obtained. We utilize the two slopes to invert the terrain topography and DEM. The slopes $S_A$ and $S_R$ of all pixels in the SAR image are calculated as shown in Figures 23(b,c).
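Eqs. (66)-(68) combine into a small per-pixel routine (hypothetical name, $\phi_i=0$ as in the text): given $\psi$, $\gamma$ and $\theta_i$, it solves Eq. (68) for $\beta$ and returns the slopes of Eq. (67).

```python
import numpy as np

def range_azimuth_slopes(psi, gamma, theta_i):
    """Solve Eq. (68) for beta (with phi_i = 0) and return the range and
    azimuth slopes of Eq. (67) from the measured orientation shift psi,
    the azimuth angle gamma and the incidence angle theta_i (radians)."""
    tan_beta = (np.tan(psi) * np.sin(theta_i)
                / (np.sin(gamma) + np.tan(psi) * np.cos(theta_i) * np.cos(gamma)))
    beta = np.arctan(tan_beta)
    return np.tan(beta) * np.cos(gamma), np.tan(beta) * np.sin(gamma)
```

A quick consistency check is a round trip: compute $\psi$ from a known $(\beta,\gamma,\theta_i)$ via Eq. (66) and verify that the routine returns the original slopes.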

The DEM can be generated by solving the Poisson equation over an $M\times N$ rectangular grid area. The Poisson equation can be written as

$$\nabla^2\phi(x,y)=\rho(x,y) \tag{E69}$$

where $\nabla^2$ is the Laplace operator. The source function $\rho(x,y)$ consists of the surface curvature calculated from the slopes $S(x,y)$, where the $\hat x$- and $\hat y$-directions are used as the range and azimuth directions, respectively. In this section, the full multi-grid (FMG) algorithm is employed to solve the Poisson equation; its benefits are rapid convergence, robustness and low computational load. The inverted DEM of the SAR image in Figure 23(a) is given in Figures 24(a,b), and its contour map, Figure 24(b), has been validated by comparison with the map of the China Integrative Atlas (1990).
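The chapter uses the full multi-grid (FMG) solver; as a hedged stand-in, the sketch below solves Eq. (69) by plain Gauss-Seidel iteration with zero boundary values, together with a crude central-difference source built from the slope maps (the wrap-around of `np.roll` at the borders is a simplification):

```python
import numpy as np

def curvature_source(S_R, S_A):
    """Central-difference source rho = dS_R/dx + dS_A/dy from the slope
    maps (x = range = axis 0, y = azimuth = axis 1)."""
    rho = 0.5 * (np.roll(S_R, -1, axis=0) - np.roll(S_R, 1, axis=0))
    rho += 0.5 * (np.roll(S_A, -1, axis=1) - np.roll(S_A, 1, axis=1))
    return rho

def solve_poisson(rho, n_iter=5000):
    """Gauss-Seidel iteration for Eq. (69), laplacian(phi) = rho, on a
    rectangular grid with phi = 0 on the boundary and unit grid spacing."""
    M, N = rho.shape
    phi = np.zeros((M, N))
    for _ in range(n_iter):
        for i in range(1, M - 1):
            for j in range(1, N - 1):
                phi[i, j] = 0.25 * (phi[i - 1, j] + phi[i + 1, j]
                                    + phi[i, j - 1] + phi[i, j + 1]
                                    - rho[i, j])
    return phi
```

Gauss-Seidel converges slowly on large grids, which is exactly why the chapter prefers FMG; the iteration above is only a minimal, easy-to-verify illustration.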

## 7. Reconstruction of Buildings Objects from Multi-Aspect SAR Images

Reconstruction of 3-D objects from SAR images has become a key issue in information retrieval for SAR monitoring. 3-D reconstruction of man-made objects is usually based on interferometric techniques or on fusion with other data sources, e.g. optical and GIS data. It has been noted that scattering from man-made objects produces bright spots at sub-metric resolution, but presents strip/block images at metric resolution. This difference is largely attributed to the different imaging modes employed for different resolutions, for example, spotlight for sub-metric resolution and stripmap for metric resolution.

An automatic method for the detection and reconstruction of 3-D objects from multi-aspect metric-resolution SAR images is developed. The steps are as follows:

The linear profile of building objects is regarded as their most prominent characteristic. The POL-CFAR detector, the Hough transform and corresponding image processing techniques are applied to the detection of parallelogram-like façade images of building objects. Results for simulated SAR images as well as real 4-aspect Pi-SAR images are given after each step.

A probabilistic description of the detected façade images is presented, and maximum-likelihood (ML) estimation of building objects from multi-aspect observed façade images is given. Eventually, in association with a hybrid priority rule of inversion reliability, an automatic algorithm is designed to match multi-aspect façade images and reconstruct the building objects. Moreover, taking advantage of the multi-aspect coherence of building images, a new iterative co-registration method is presented.

Finally, reconstruction results are presented, and good performance is demonstrated by comparison with ground truth data. It is also concluded that the reconstruction accuracy depends closely on the number of available aspects. In the end, a practicable scheme for 3-D object reconstruction from satellite-borne SAR images is proposed (Xu and Jin, 2007).

### 7.1. Object image detection

As shown in Figure 25, the scattering image of a simple building is produced by direct scattering from the wall/façade, roof and edges, double scattering between wall and ground, projected shadows, etc. At metric resolution, the scattering produces bright strips/blocks from the respective parts of the object. A key step of reconstruction is to identify and extract these strips/blocks.

For the case of a smooth wall/façade, the only double-scattering term to be considered must follow a specific propagation path, i.e. wall (reflection) to ground (diffuse scattering). A simple building object is taken into account and modeled as a cuboid, and the spatial distribution of the building objects is assumed not to be crowded, i.e. without serious shadowing and superposition. The cuboid object is described by seven geometric parameters: 3-D position, 3-D size and orientation. Besides, the flat roof is assumed to scatter much less than the edges and façades.

Figure 26 shows (a) a model of the cuboid object, (b) its SAR image simulated using the mapping and projection algorithm, (c) a photo of a real rectangular building, and (d) its image from Pi-SAR observation, respectively.

It can be seen from Figure 26 that the longer wall of the cuboid-like building, henceforth called the major wall, plays the dominant role in a SAR image. In this section, attention is focused mainly on the major-wall image. In the first step, constant false alarm rate (CFAR) edge detectors, such as the ratio gradient, are used for edge detection.

Figure 25. SAR imaging of a simple building model and its image composition.

In our experience, the POL-CFAR detector derived from the complex Wishart distribution can fulfill the segmentation requirements of medium- and small-scale building images. Besides, the ridge filter applied after this step can accommodate segmentation errors to some extent.

In order to improve the segmentation precision, the edges detected by the window operator need to be further thinned. Using the edge intensity produced by the POL-CFAR detection, an edge thinning method of 'ridge filtering' is presented. Taking an 8-neighbor 3×3 window as an example, the center pixel is regarded as on the ridge if its intensity is higher than the pixels along its two sides.
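A minimal sketch of the 3×3 ridge test follows; the chapter does not spell out the exact directional rule, so here the centre is kept when it exceeds both of its neighbours along any of the four directions across the window (hypothetical helper):

```python
import numpy as np

def ridge_filter(intensity):
    """3x3 edge-thinning sketch: keep the centre pixel as a ridge point
    when its edge intensity exceeds both neighbours along one of the four
    directions (horizontal, vertical, two diagonals) across the window."""
    H, W = intensity.shape
    ridge = np.zeros((H, W), dtype=bool)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            c = intensity[i, j]
            for di, dj in ((0, 1), (1, 0), (1, 1), (1, -1)):
                if c > intensity[i - di, j - dj] and c > intensity[i + di, j + dj]:
                    ridge[i, j] = True
                    break
    return ridge
```

Applied to the POL-CFAR edge-intensity map, this collapses thick edge responses to one-pixel-wide ridges before the Hough transform.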

4-aspect Pi-SAR images acquired over Sendai, Japan by a square-loop flying path (Flight No. 8708, 8709, 8710, 8711, X-band, pixel resolution 1.25m) are taken as an example of real SAR image study. The region of interest (ROI) is the Aobayama campus of Tohoku University.

Figure 27(a) shows an aspect Pi-SAR images as an example.

Figures 27(b-d) show the results processed by the edge detection of Pi-SAR images.

The most distinct feature of a building object in SAR image is parallel lines. The Hough transform is employed to detect straight segments from the thinned edges, and parallelogram outlines of the façade-images can then be extracted. It is carried out in tiling manner in this paper, i.e. the original picture is partitioned into blocks, each of which is detected independently via Hough transform.

The detection of parallel line segments proceeds in three steps: i) find bright spots in the transform domain, enforcing a minimum distance between every two of them to avoid re-detection; ii) in the spatial domain, search along the corresponding parallel lines for segments of successive points that are longer than a minimum length, with the distance between two successive points shorter than a maximum gap; iii) take into account only pairs of points lying on the two lines and directly facing each other in the segment search.
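Step i) can be sketched as below: a standard Hough accumulator over (θ, ρ), followed by pairing peaks that share the same orientation bin, i.e. parallel lines. This is a simplified illustration under stated assumptions (1° angular bins, 1-pixel ρ bins, vote and separation thresholds chosen freely); the segment search of steps ii)-iii) is omitted.

```python
import numpy as np

def hough_accumulator(edges, n_theta=180):
    """Vote each edge pixel into (theta, rho) bins: rho = x cos(t) + y sin(t)."""
    h, w = edges.shape
    thetas = np.deg2rad(np.arange(n_theta))          # 0..179 degrees
    diag = int(np.ceil(np.hypot(h, w)))              # max |rho|
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((n_theta, len(rhos)), dtype=int)
    ys, xs = np.nonzero(edges)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        r = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[np.arange(n_theta), r] += 1
    return acc, thetas, rhos

def parallel_peaks(acc, min_votes, min_sep):
    """Pair accumulator peaks that share the same theta bin (parallel
    lines) and are at least min_sep rho bins apart."""
    peaks = list(zip(*np.nonzero(acc >= min_votes)))
    pairs = []
    for a in range(len(peaks)):
        for b in range(a + 1, len(peaks)):
            (t1, r1), (t2, r2) = peaks[a], peaks[b]
            if t1 == t2 and abs(r1 - r2) >= min_sep:
                pairs.append((t1, r1, r2))
    return pairs
```

In the tiling scheme described above, this pair detection would be run independently on each image block.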

Figures 28(a-d) show the detection results from the 4-aspect Pi-SAR images. A few building images are not detected, and some false images are wrongly produced. In fact, there is a compromise between over-detection and incomplete detection. We prefer to preserve more detected building images so as to control the non-detection rate, while the false images are expected to be eliminated through the subsequent automatic selection of effective images for reconstruction. However, some images always remain undetectable, attributable to shadowing by tree canopies, overlapping of nearby buildings, or overly complicated wall structures.

The detected parallelogram of a homogeneous scattering area could be direct scattering from a façade, wall-ground double scattering, a combination of direct and double scattering, the projected shadow of a building, or even strong scattering from strip-like objects (e.g. a metal fence or metal awning), which are not considered in the classification.

An area is identified as shadow if its scattering power is much lower than that of its vicinity. Specifically, two parallel, equal-length strip windows are first set up on its two sides, and the median scattering powers of the three regions are then calculated. If the middle one is weaker than both sides, the area is classified as shadow instead of a building image.
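The median-power shadow test can be sketched as follows. The `margin_db` threshold is an assumption of this sketch (the text only says "much lower"); the three inputs are the pixel powers sampled from the middle parallelogram and the two flanking strip windows.

```python
import numpy as np

def classify_shadow(middle, side1, side2, margin_db=3.0):
    """Return True if the median power of the middle region is lower
    than both side strips by at least margin_db (in dB), i.e. the
    detected parallelogram is shadow rather than a building image."""
    m = 10.0 * np.log10(np.median(middle))
    s1 = 10.0 * np.log10(np.median(side1))
    s2 = 10.0 * np.log10(np.median(side2))
    return (m < s1 - margin_db) and (m < s2 - margin_db)
```

The median (rather than the mean) keeps the test robust against isolated strong scatterers inside the strips.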

The wall-ground double scattering can be differentiated from direct scattering based on polarimetric information. The de-orientation parameter v, which indicates the scattering mechanism, is used to identify double scattering.

To implement target reconstruction from multi-aspect SAR data, calibrating the multi-aspect data requires that at least one aspect be calibrated beforehand; the other aspects are then calibrated with respect to this calibrated aspect. A natural object, such as flat bare ground, is usually chosen as the reference target, since it is expected to preserve identical scattering for all aspects. Thereafter, the channel imbalance factors are estimated from the distributions of the phase difference and amplitude ratio of the co-polarized (hh and vv) echoes of the reference target, and are then used to compensate the whole SAR images (Xu and Jin, 2008b).
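A minimal sketch of this imbalance estimation, under assumptions of this illustration (median amplitude ratio, coherently averaged phase difference, and a single complex factor applied to the vv channel); it is not the calibration procedure of Xu and Jin (2008b) in detail.

```python
import numpy as np

def channel_imbalance(hh, vv):
    """Estimate a complex co-polarized channel imbalance factor over a
    reference target (e.g. flat bare ground) from the amplitude-ratio
    and phase-difference distributions of the vv and hh echoes."""
    amp_ratio = np.median(np.abs(vv) / np.abs(hh))
    # Coherent average of vv * conj(hh) avoids phase-wrapping problems.
    phase_diff = np.angle(np.mean(vv * np.conj(hh)))
    return amp_ratio * np.exp(1j * phase_diff)

def compensate(vv_image, factor):
    """Remove the estimated imbalance from the vv channel of the image."""
    return vv_image / factor
```

The factor estimated over the reference region is then applied to the whole image of that aspect, so all aspects become consistent with the pre-calibrated one.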

### 7.2. Building reconstruction

The complexity of object-terrain surfaces and image speckle introduces uncertainty into the detection of object images. To describe multi-aspect object images well, a parameterized probability is introduced for the subsequent automatic reconstruction. For convenience, the detected building images are parameterized. Generally, the edge pixels detected by CFAR are randomly situated around the real, or theoretical, boundary of the object. It is reasonable to presume that the deviation distances of the edge points follow a normal distribution.

The original edge can be equivalently treated as a set of independent edge points. The line detection approach is then equivalent to a linear regression, i.e. line fitting from random sample points. According to linear regression theory for independent normal samples, the slope of the fitted line follows a normal distribution.

All parameters have normal distributions, and their variances are determined by the variance of the edge-point deviation. After counting the deviations of the edge points in the vicinity of each lateral of all detected building images and making a histogram, the variance can be determined through a minimum-square-error (MSE) fit of the histogram with a normal probability density function (PDF). The 4-aspect Pi-SAR images are counted in this way.
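The variance estimation described above can be sketched as a 1-D grid search: fit a zero-mean normal PDF to the normalized deviation histogram and keep the sigma with the smallest squared error. The zero-mean assumption, bin count, and sigma grid are choices of this sketch, not values from the text.

```python
import numpy as np

def fit_deviation_sigma(deviations, n_bins=51, sigma_grid=None):
    """MSE fit of a zero-mean normal PDF to the histogram of edge-point
    deviations; returns the best-fitting standard deviation."""
    if sigma_grid is None:
        sigma_grid = np.linspace(0.1, 5.0, 500)
    hist, edges = np.histogram(deviations, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_sigma, best_err = None, np.inf
    for s in sigma_grid:
        pdf = np.exp(-centers**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
        err = np.mean((pdf - hist) ** 2)     # minimum-square-error criterion
        if err < best_err:
            best_sigma, best_err = s, err
    return best_sigma
```

The resulting sigma then fixes the variances of all line parameters of the detected building images via the regression relations above.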

Given a group of building images detected from multi-aspect SAR images, the corresponding maximum-likelihood probability can further be used as an assessment of the coherence among this group of multi-aspect images. A large maximum-likelihood probability indicates strong coherence among the group of multi-aspect building images, and vice versa.

Multi-aspect co-registration, as a critical pre-processing step, is necessary when dealing with real SAR data.

Given the specification of a region of interest (ROI), e.g. its longitudes and latitudes, the corresponding area in the SAR image of each aspect can be coarsely delimited according to its orbit parameters. This can be regarded as a coarse co-registration step. Manual intervention is necessary if the orbit parameters are not accurate enough.

Only the parameters of the building images, rather than the original SAR images, need to be co-registered to the global coordinates. This is regarded as a fine co-registration step. In this study, only linear co-registrations are considered. A simple and effective method is to manually choose ground control points. In general, a featured terrain object or building is first selected as the zero-elevation reference, and then its locations in the different aspects are pinpointed. Coherence among multi-aspect building images is the basis of automatic reconstruction.

Finally, an automatic multi-aspect reconstruction algorithm is designed. The building objects are reconstructed from the 4-aspect simulated SAR images, and the inversion accuracy is very good. The reconstructed 3D objects, placed on the true DEM, show that the inverted elevations also match the true DEM well.

Due to the difficulty for the authors to collect ground truth data, a high-resolution satellite optical image (0.6 m QuickBird data) is used as a reference. The geometric parameters of each building, manually measured from the image, are taken as ground truth to evaluate the accuracy of the reconstruction.

There is a trade-off between the false and correct reconstruction rates. Increasing the false alarm rate of the edge detector, relaxing the requirements of building-image detection, and/or increasing the false alarm rate of judging a correctly matched group will raise the reconstruction rate, but also boost the false reconstruction rate. Efficiency deteriorates if the false reconstruction rate grows too high. Conversely, the false reconstruction rate can be reduced and the accuracy of the reconstructed buildings improved, but the reconstruction rate will then also decline.

A critical factor confining the reconstruction precision is the number of effective aspects, hereafter referred to as the effective aspect number (EAN). The reconstruction result improves as more SAR images of different aspects become available.

The main error of reconstruction is caused by the boundary deviation of the detected building images, which originates from the complicated scattering and interactions of spatially distributed objects and backgrounds. In the probabilistic description of a detected building image, the boundary presented in the SAR image is assumed to be the same as the real one. However, the actual detected building image might be biased or even partly lost due to obstruction and overlapping.

In addition, it is noticed that large-scale buildings might not be well reconstructed. The reason is partly attributed to the Wishart-distribution premise of the POL-CFAR detector. Since the images of large-scale buildings may reveal more detailed information about texture and heterogeneity, they deteriorate the performance of the edge detector. Developing a new edge detector based on a specific speckle model for high-resolution images could improve the results. Another, more feasible, way is to employ a multi-scale analysis, through which building images of different scales present Gaussian properties at their own scale levels. Of course, the expense of multi-scale analysis might be a loss in precision.

Another issue to be addressed is the exploitation of multi-aspect polarimetric scattering information. In heterogeneous urban areas, terrain objects appear distinctively in different aspects, which gives rise to very low coherence between their multi-aspect scatterings. Hence, polarimetric scattering information may not be a good option for the fusion of multi-aspect SAR images over urban areas.

After the ROI is coarsely chosen in each aspect image, edge detection and object-image extraction are carried out subsequently. Then the object images are parameterized and finely co-registered. Once the multi-aspect object images are automatically matched, the 3D objects are reconstructed at the same time.

The merits of this approach are the automation of the process with few manual interventions, the full utilization of all-aspect information, and the high efficiency for computer processing. Making a reconstruction on a 4-aspect Pi-SAR dataset (500×500) takes less than 10 min of CPU time (CPU frequency 3 GHz). The complexity of this approach is about O(KN²), where K is the aspect number and N is the size of the SAR images (K=4, N=500 in this case).

It is tractable to extend this approach to the reconstruction of other kinds of objects. For other types of primary objects, given a priori knowledge, new image detection methods have to be developed. It is possible to treat more complicated buildings as combinations of different primary objects, which can be reconstructed separately and then combined together.

Considering a space-borne SAR with left/right side-looking (e.g. Radarsat-2 SAR) and forward/backward oblique side-looking (e.g. PRISM on ALOS) functions, six different aspect settings are available for the ascending orbit. There is a 20° angle between ascending and descending flights for a sun-synchronous orbit. Therefore, the sensor can observe from 12 different aspects. Suppose the sensor uses a different aspect on each visit and the repeat period is 15 days; then a set of SAR images acquired from 12 aspects can be obtained through a 3-month observation, and the SBT reconstruction can be applied with acceptable precision.

## Acknowledgments

This work was supported by the National Natural Science Foundation of China under grant No. NSFC 406370338.


## How to cite and reference

### Cite this chapter

Ya-Qiu Jin (October 1st 2009). Theoretical Modeling for Polarimetric Scattering and Information Retrieval of SAR Remote Sensing, Advances in Geoscience and Remote Sensing, Gary Jedlovec, IntechOpen, DOI: 10.5772/8323. Available from:
