## 1. Introduction

Synthetic aperture radar (SAR) imagery is one of the most important advances in space-borne microwave remote sensing of recent decades. It can measure fully polarimetric scattering from complex terrain surfaces. Understanding and retrieving information from the polarimetric scattering signatures of natural media and SAR images have become key issues for SAR remote sensing and its broad applications.

This paper presents research progress at Fudan University on theoretical modeling of terrain surfaces for polarimetric scattering simulation and the Mueller matrix solution, monostatic and bistatic SAR image simulation, new parameters for unsupervised surface classification, DEM inversion, change detection from multi-temporal SAR images, and reconstruction of buildings from multi-aspect SAR images. Some applications are discussed.

## 2. Model of Vegetation Canopy and Mueller Matrix Solution

As a polarized wave

where the 2×2-D (dimensional) complex scattering amplitude function

The Mueller matrix solution of a layer of scatterers model, as shown in Figure 1, is written as

where

Eqs. (4a-c) will be applied to the numerical simulation of polarimetric scattering from terrain surfaces with the physical parameters of the scatterers (Jin, 2005), as described in the next sections.

The Mueller matrix is a 4×4-D real matrix with complex eigenvalues and eigenvectors. To be physically realizable, it must satisfy the Stokes criterion together with several other restrictive conditions. However, these restrictions do not have any direct physical interpretation in terms of the eigenstructure of the Mueller matrix.

The coherency matrix

where *A, B, C, D* are sequentially defined in Eq. (5), and the subscript *P* denotes Pauli vectorization. The coherency matrix is defined as

The eigenvalue and eigenvector of the coherency matrix are defined as

All eigenvalues are real and non-negative, and

The coherency matrix is also related to the Mueller matrix as

where

Usually, the entropy *H* is defined by the eigenvalues of the coherency matrix as

The entropy is an important feature since it relates the randomness of the scattering media to other physical parameters, such as canopy configuration and biomass.

The coherency matrix is a 4×4-D positive semidefinite Hermitian matrix and, as such, has real non-negative eigenvalues and complex orthogonal eigenvectors. The amplitudes and differences of the four eigenvalues are functionally related to the polarimetric scattering process of the terrain canopy. Eigen-analysis of the coherency matrix yields better physical insight into the polarimetric scattering mechanisms than the Mueller matrix does; furthermore, it can be employed to physically identify a Mueller matrix by virtue of the semidefinite nature of the corresponding coherency matrix.
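
As an illustrative numerical sketch (not part of the original derivation), the entropy can be computed directly from the eigenvalues of a coherency matrix; the function name and the choice of log base equal to the matrix dimension (so that *H* spans [0, 1] for the 4×4-D case) are assumptions of this sketch:

```python
import numpy as np

def cloude_entropy(T):
    """Entropy H from the eigenvalues of a coherency matrix T.

    T is an n x n positive semidefinite Hermitian matrix; the
    pseudo-probabilities p_i = lambda_i / sum(lambda) enter a
    log-base-n entropy, so H runs from 0 (one dominant eigenvalue,
    deterministic target) to 1 (equal eigenvalues, totally random)."""
    lam = np.linalg.eigvalsh(T)       # real eigenvalues, ascending
    lam = np.clip(lam, 0.0, None)     # clip tiny negative round-off
    p = lam / lam.sum()
    p = p[p > 0]                      # treat 0*log(0) as 0
    return float(abs((p * np.log(p)).sum()) / np.log(T.shape[0]))

T_det = np.diag([1.0, 0.0, 0.0, 0.0]).astype(complex)   # deterministic
T_rand = np.eye(4, dtype=complex) / 4.0                 # totally random
print(cloude_entropy(T_det), cloude_entropy(T_rand))    # ~0 and ~1
```

A single dominant eigenvalue (deterministic scattering) gives H = 0, while equal eigenvalues (totally random scattering) give H = 1, matching the interpretation of the entropy used in the text.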

For the first-order solution at small albedo, cross-polarization is small, and correlations such as the terms

It is interesting to see that if

Substituting *A*, *B*, *C* and *D* of Eq. (6) into Eq. (10), the eigenvalues are derived using the power terms of

where the configuration parameter varies from 0 for totally random media to 1 for ordered, i.e. non-random, media.

It can be seen that the first eigenvalue is the *vv* and *hh* co-polarized power minus their coherence, defined by the parameter of Eq. (12), which varies from 1 in the ordered case to 0 in a totally random medium. The second eigenvalue

The (p,q=v,h)-polarized backscattering coefficient is

Substituting Eq. (13) into Eqs. (11,12), the functions

where

Note that all functions

Define the co-polarized and cross-polarized indexes, respectively, as

The entropy H is directly related to the backscattering indexes CPI and XPI via Eqs. (14)-(16).

Figure 2(a) presents an AirSAR image of total power of

## 3. SAR Imaging Simulation (MPA)

Given the sophisticated mechanisms of electromagnetic (EM) wave scattering and SAR signal collection, it is hard to interpret SAR imagery without the aid of forward simulation of scattering based on physical modeling. Simulation of SAR imaging offers a different way to study an inhomogeneous terrain scene under SAR observation. It can evaluate or predict the SAR observation, interpret the physical scattering behavior, and take account of the parameterized characteristics of terrain surface objects. Approaches to SAR image simulation in the previous literature may be classified into two categories: one focuses on the statistical characteristics of SAR images, the other on the physical scattering process. A natural scene is more complicated, containing randomly and inhomogeneously distributed penetrable or impenetrable objects, such as vegetation canopies, man-made structures and perturbed surface topography. In particular, volumetric scattering from penetrable scatterers, e.g. forests, crops and green plants, plays an essential or dominant role in SAR imagery. It is therefore meaningful to develop computerized simulation of SAR imaging over a comprehensive terrain scene with heterogeneous terrain objects.

We present a novel approach, the mapping and projection algorithm (MPA), for polarimetric SAR imaging simulation of comprehensive scenarios, which takes account of scattering, attenuation, shadowing and multiple scattering of spatially distributed volumetric and surface scatterers (Xu and Jin, 2006). In this approach, scattering contributions from the scatterers are cumulatively summed on the slant range plane (the mapping plane) following geometric principles, while their extinction and shadowing effects are cumulatively multiplied on the ground range plane (the projection plane). This constructs a general imaging picture of the whole terrain scene via mapping and projection operations. The MPA is devised to speed up the simulation of the whole process of scattering, extinction, mapping and projection, in association with a grid partition of the comprehensive terrain scene. Our SAR simulation scheme incorporates polarimetric scattering, attenuation and shadowing of several typical terrain surfaces, as well as coherent speckle generation.

### 3.1. The mapping and projection algorithm (MPA)

Since the slant range is much larger than the synthetic aperture, the incidence angle to a given target is nearly invariant while the radar flies over the length of a synthetic aperture. Thus, in the imaging simulation, the whole scene is divided into cross-track lines along the azimuth dimension, and the scattering contributions from the terrain objects lying in each line (contained in the incidence plane) are calculated sequentially. This finally constructs a scattering picture, or scattering map, of the whole scene.

Following vector radiative transfer (VRT) theory, we derived a general expression of the scattering power at range

where the phase function

where

We further devise a mapping and projection algorithm (MPA) to rapidly compute Eq. (17). The first step is to partition the scene and the terrain objects lying in the horizontal plane

As shown in Figure 3, two arrays

The MPA calculation sequence for the terrain objects in one grid unit is: first, calculate the scattering power of the current terrain object and accumulate it onto the array

As shown in Figure 3, a vertical segment (length

where

where

It can be seen from the effective depth

In a similar way, larger surface scatterers, such as vertical wall surfaces and cliff slopes, are segmented according to mapping cells. For nearly horizontal surfaces, e.g. ground or roof surfaces, the facet cut by one grid unit is small enough to be projected into one projection cell, i.e.

where

The elements of the array

Double and triple scattering between terrain objects and the underlying ground are treated as shown in Figure 3. First, the corresponding ground patch of multiple scattering is located by the ray-tracing technique. Then the length of the propagation path, i.e. the effective range

In the same way, multiple scattering of a segment of scatterers is mapped into the array

For double scattering, the attenuation or shadowing suffered through the propagation paths of

where

For triple scattering (object-ground-object), the expression can be directly written as:

where

The correct sequence for calculating each grid unit is first to compute single scattering, then multiple scattering, and finally its attenuation or shadowing. Notice that the ground patch of multiple scattering could be located anywhere around the terrain object. For simplicity, we take the projection cell in the current incidence plane as an approximate substitute. Some other coding techniques are adopted to guarantee correct computation and make it more efficient.
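
The mapping and projection bookkeeping of one incidence-plane line can be sketched in a strongly simplified scalar form; the grid-unit fields below ('cell', 'proj', 'sigma', 'trans') are hypothetical stand-ins for the paper's arrays, not its actual data structures:

```python
import numpy as np

def mpa_line(units, n_range_cells):
    """One incidence-plane line of a simplified scalar MPA sweep.

    `units` lists the grid units from near to far range; each is a
    dict with hypothetical fields:
      'cell'  - slant-range mapping cell index,
      'proj'  - ground-range projection cell index,
      'sigma' - scattering power of the unit,
      'trans' - one-way transmissivity (1 = transparent).
    Scattering is summed on the mapping plane after two-way
    attenuation, while extinction is multiplied on the projection
    plane so that a unit shadows everything behind it."""
    mapping = np.zeros(n_range_cells)     # power accumulated vs slant range
    projection = np.ones(n_range_cells)   # transmissivity vs ground range
    for u in units:                       # visit near range -> far range
        atten = projection[u['proj']]     # shadowing suffered so far
        mapping[u['cell']] += u['sigma'] * atten ** 2   # two-way path
        projection[u['proj']] *= u['trans']
    return mapping

# A lossy crown cell (trans 0.5) halves the one-way transmissivity,
# so a ground cell behind it contributes sigma * 0.5**2 = 0.25.
units = [{'cell': 0, 'proj': 0, 'sigma': 1.0, 'trans': 0.5},
         {'cell': 1, 'proj': 0, 'sigma': 1.0, 'trans': 1.0}]
print(mpa_line(units, 2))   # -> [1.   0.25]
```

The cumulative sum on the mapping plane and the cumulative product on the projection plane are the two operations that give the MPA its name.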

### 3.2. Scattering models for terrain objects

For vegetation canopies, the VRT model of a layer of random non-spherical particles (Jin, 1994) is employed. Leaves, small twigs and thin stems are modeled as non-spherical dielectric particles under the Rayleigh or Rayleigh-Gans approximation, while branches, trunks and thick stems are modeled as dielectric cylinders. A tree model is composed of crown and trunk. The crown is a cloud with a simple geometric shape containing randomly oriented non-spherical particles. The trunk is an upright cylinder whose top is covered by the crown. Oblate or disk-like particles and an elliptic crown are adopted for broad-leaf forest, while prolate or needle-like particles and a cone-like crown are used for needle-leaf forest. In addition, the crown shapes take a small random perturbation. Similarly, a farm field is modeled as a layer of randomly oriented non-spherical particles with perturbed layer depth. If the grid units are small enough, the segment of crown or crops cut by one grid unit can be regarded as a segment fully filled with particles. The differences between the trunk and crown are: (a) the trunk is solid within a grid unit, i.e. its fractional volume is set to 1; (b) double scattering between the trunk and the ground is taken into account.

Scattering from a building is treated as surface scattering from its wall and roof surfaces plus multiple interactions with the ground surface. The integral equation method (IEM) is employed to calculate surface scattering. The building is modeled as a parallelepiped with an isosceles triangular cylinder on top. Due to the orientation of the wall and roof surfaces, the incidence angle must be transformed into the local coordinates of each surface, as must the polar basis of wave propagation. Geometric relationships among the wall surfaces and roof surfaces, as well as the double and triple scattering between wall and ground, can be found in Franceschetti et al. (2002). Figure 2 illustrates the building model, its image and shadowing in SAR imagery.

For a ground facet, the tangent vectors along

The MPA approach is based on VRT theory for incoherent scattering power, not on coherent summation of scattering fields. However, speckle due to coherent interference is one of the most critical characteristics of SAR imagery, especially for studies of SAR image filtering and optimization. Assuming a Gaussian probability distribution of the scattering vector

where
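
A minimal sketch of this speckle model (our illustration, not the paper's code): each pixel's complex field is drawn as a circular Gaussian whose variance equals the simulated incoherent power, so the one-look intensity is exponentially distributed, and multi-look intensities follow by averaging independent draws:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_speckle(power, looks=1):
    """Multiply an incoherent power image by coherent speckle.

    Each pixel's field is drawn as a circular complex Gaussian whose
    variance equals the simulated incoherent power, so the one-look
    intensity |field|^2 is exponentially distributed; averaging
    `looks` independent draws gives multi-look intensity."""
    power = np.asarray(power, dtype=float)
    draws = np.empty((looks,) + power.shape)
    for k in range(looks):
        field = (rng.normal(scale=np.sqrt(power / 2))
                 + 1j * rng.normal(scale=np.sqrt(power / 2)))
        draws[k] = np.abs(field) ** 2
    return draws.mean(axis=0)

# The mean intensity is preserved; only the fluctuation changes.
img = np.full((256, 256), 3.0)
print(add_speckle(img, looks=4).mean())   # close to 3.0
```

Increasing the number of looks reduces the speckle variance while leaving the mean scattering power unchanged, which is the behavior a simulated SAR image should reproduce.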

### 3.3. Simulation results

The configurations and parameter settings of the radar and platform in the simulation are selected following the AIRSAR sensor of NASA/JPL.

A virtual comprehensive terrain scene is designed, as shown in Figure 5(a), according to a real DEM of Guangdong province, south China. It contains different types of forests covering the hills, crop farmland, ordered and random buildings in urban and suburban regions, roads and rivers.

A simulated scattering map (decibels of normalized scattering power, from -50 dB to 0 dB; pseudo-color: R-HH, G-VV, B-HV) at L band with 12 m resolution (pixel spacing 5 m) is displayed in Figure 5(b). The simulated SAR image at L band is shown in Figure 5(c).

Figures 6(a,b) give the simulated scattering map and SAR image at C band, respectively. At this higher band, vegetation scattering is significantly increased, while attenuation is increased and the penetration depth is decreased. As a result, the trees appear thinner while their shadows become darker.

At the upper-right corner, the building blocks appear as parallel lines, which reveals the dominance of double scattering in the urban area. However, on the other side below the river, the buildings are oriented nearly 45 degrees to the radar flight direction. As a result, the cross-polarized scattering (blue) is stronger, while double scattering is reduced. Timberlands can be classified into two classes: broad-leaf in yellow, due to the balance between HH and VV scattering, and narrow-leaf in green, due to weaker HH scattering. The narrow-leaf trees cause weaker attenuation of HH and therefore stronger HH scattering from the shadowed ground, which is why most areas of narrow-leaf forest appear red. Overlay and shadowing effects caused by mountainous topography are particularly perceptible. Croplands are generally uniform and dense, and appear as patches in the image. Their VV scattering (green) is always stronger than HH. They have distinguishable brightness due to different density, sizes, shapes, etc. More detailed discussion can be found in (Xu and Jin, 2006).

### 3.4. Bistatic SAR imaging

Bistatic SAR (BISAR), with transmitter and receiver separated on different platforms, has attracted great interest. Most BISAR research focuses on engineering realization and signal processing algorithms, with few studies on land-based bistatic experiments or theoretical modeling of bistatic scattering. The MPA provides a fast and efficient tool for monostatic imaging simulation; it involves the physical scattering processes of multiple terrain objects, such as vegetation canopies, buildings and rough ground surfaces (Xu and Jin, 2008a).

Similar to the monostatic MPA, the steps of the bistatic MPA are described as follows:

(1) The arrays

(2) Perform 3D projections of the current grid unit along incidence and scattering directions to the cells

(3) Determine the mapping position of the current grid unit based on the information of its synthetic aperture and Doppler history, which corresponds to the cell

(4) Obtain or calculate the scattering matrix

(5) Refresh the elements of

Refresh the elements of

(6) Return to Step (2), and continue to visit the next grid at the same coordinate of
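
Steps (2)-(5) for one grid unit can be sketched in a simplified scalar form; the two projection arrays for the incidence and scattering paths are the essential difference from the monostatic case, and all field names here are hypothetical:

```python
import numpy as np

def bistatic_unit(unit, proj_inc, proj_sca, mapping):
    """One grid-unit visit of a simplified scalar bistatic MPA step.

    Two projection arrays track extinction separately along the
    incidence (transmitter) and scattering (receiver) paths; the
    unit's power is attenuated by both before being accumulated at
    its mapping cell, and the unit then shadows both directions."""
    a_inc = proj_inc[unit['cell_inc']]      # attenuation, transmitter side
    a_sca = proj_sca[unit['cell_sca']]      # attenuation, receiver side
    mapping[unit['cell_map']] += unit['sigma'] * a_inc * a_sca
    proj_inc[unit['cell_inc']] *= unit['trans']
    proj_sca[unit['cell_sca']] *= unit['trans']

# A unit seen through attenuation 0.5 (incidence path) and 0.8
# (scattering path) contributes sigma * 0.5 * 0.8 to its mapping cell.
proj_inc, proj_sca = np.array([0.5]), np.array([0.8])
mapping = np.zeros(1)
bistatic_unit({'cell_inc': 0, 'cell_sca': 0, 'cell_map': 0,
               'sigma': 2.0, 'trans': 0.9}, proj_inc, proj_sca, mapping)
print(mapping[0])   # -> 0.8
```

Because the two paths are attenuated independently, a target can be shadowed for the receiver while remaining illuminated by the transmitter, which is exactly the split-shadow behavior discussed below for trees and buildings.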

Figures 7 and 8 illustrate the bistatic imaging process for a tree and a building. One difference between the bistatic and monostatic cases is that the shadowed area is projected along both the incidence and scattering directions. Additionally, because the incidence and scattering directions are split, the double-scattering terms of object-ground and ground-object differ: one reflects at the ground and then diffusely scatters from the object, while the other scatters at the object and then reflects from the ground. These two terms have different paths, which gives rise to separated double-scattering images.

A comprehensive virtual scene as shown in Figure 9(a) is designed for simulation and analysis. Simulated BISAR images of the configurations, AT (across track) and TI (translational invariant) are shown in Figures 9(b,c), respectively.

### 3.5. Unified bistatic polar bases and polarimetric analysis

It is found in the BISAR simulation that in the AT BISAR image, bistatic scattering of terrain objects preserves polarimetric characteristics analogous to the monostatic case, while the major disparity is in the total scattering power.

However, the behavior of the polarimetric parameters is different in the image of the general TI mode. First,

It seems that

Conventional polarimetric parameters such as

As proposed in many studies, inverse problems of bistatic scattering are usually discussed in a coordinate system determined by the bisectrix. We believe it is important to first transform the conventional bistatic polar bases to new ones defined by the bisectrix. The unified bistatic polar bases are defined as

where

The relationship between the unified bistatic polar bases and the conventional polar bases can be described by a polar basis rotation, i.e.

where
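
The polar-basis rotation itself can be sketched as follows; the rotation angles on the incident and scattered sides are left as free parameters here, since their bisectrix-defined values come from the equations above:

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix for a polar-basis rotation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def to_unified_bases(S, theta_i, theta_s):
    """Rotate a 2x2 scattering matrix from the conventional bistatic
    polar bases into new bases, with independent rotation angles on
    the incident (theta_i) and scattered (theta_s) sides."""
    return rot(theta_s) @ S @ rot(theta_i).T

# Zero rotation is the identity; rotating and rotating back recovers S.
S = np.array([[1.0, 0.2], [0.2, 0.8]])
print(np.allclose(to_unified_bases(S, 0.0, 0.0), S))   # True
```

Because the two sides rotate independently, a reciprocal (symmetric) scattering matrix generally loses its symmetry in the unified bases unless the two angles coincide, which is why the monostatic intuition needs modification in the bistatic case.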

We modify the definition of Cloude’s parameters as

The redefined

Redefining Cloude's parameters is only a preliminary attempt at bistatic polarimetric interpretation. It remains open for further systematic study and for verification using both simulation and experiments.

The MPA has also been applied to simulate SAR images of the undulating lunar surface (Fa and Jin, 2009). Based on the statistics of lunar cratered terrain, e.g. the population, dimensions and shapes of craters, the terrain features of the cratered lunar surface are numerically generated. To accommodate the inhomogeneous distribution of lunar surface slope, a triangulated irregular network is employed to build the digital elevation model of the lunar surface. The Kirchhoff approximation of surface scattering is then applied to simulate lunar surface scattering, and the SAR image of the cratered lunar surface is numerically generated. Using the digital elevation and Clementine UVVIS data at the Apollo 15 landing site as ground truth, an SAR image of the Apollo 15 landing site is simulated.

## 4. Terrain Surface Classification using De-Orientation Theory

Classification of complex terrain surfaces using polarimetric SAR imagery is one of the most important SAR applications. An unsupervised method based on the entropy

Scattering from terrain targets depends on the scatterer orientation, shape, dielectric properties, scattering mechanism, etc. The scatterers of complex terrain surfaces are often randomly oriented and cause randomly fluctuating echoes, which makes classification of randomly oriented and randomly distributed targets difficult. Different scatterers with different orientations can produce similar scattering and, vice versa, identical scatterers with random orientations can produce different scattering, confusing the classification.

In this section, a transformation of the target scattering vector that rotates the target about the line of sight is derived. Deorientation is introduced to transform the target orientation into a fixed state that minimizes cross-polarization (min-x-pol). Meanwhile, the angle is extracted to indicate the deviation of the target orientation from this min-x-pol state.

A set of new parameters

Numerical simulations of polarimetric scattering from a single small non-spherical particle and from a layer of random non-spherical particles above a rough surface are studied to demonstrate the effectiveness of the parameters

As examples, terrain surface classifications for a SIR-C image and an AirSAR image are presented.

### 4.1. De-orientation and parameterization

From Eq. (1), the scattering vector is usually defined as

where

It yields

Rotating the angle

In backscattering direction, this rotation makes the scattering vector

Applying minimization of cross polarization (min-x-pol) to

it yields the deorientation angle

It means that such rotation of the angle

New parameters

For non-deterministic targets, we obtain uncorrelated scattering vectors, i.e. eigenvectors, through eigen-analysis of the coherency matrix. The most significant eigenvector, i.e. the principal eigenvector, is taken as the representative scattering vector of the non-deterministic target. Thus, the deorientation of non-deterministic targets is conducted only on the principal eigenvector.
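
As a transparent numerical illustration of the min-x-pol criterion (the paper derives a closed-form deorientation angle; the brute-force sweep below is only for clarity, and the rotation convention is an assumption of this sketch):

```python
import numpy as np

def rot(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, s], [-s, c]])

def deorientation_angle(S, n=1800):
    """Brute-force the rotation angle about the line of sight that
    minimizes the cross-polarized power |S_hv|^2 (min-x-pol)."""
    psis = np.linspace(-np.pi / 2, np.pi / 2, n, endpoint=False)
    xpol = [abs((rot(p) @ S @ rot(p).T)[0, 1]) ** 2 for p in psis]
    return psis[int(np.argmin(xpol))]

# A diagonal (dipole-like) target tilted by 0.4 rad is rotated back
# to a state with essentially zero cross-polarization.
S0 = np.diag([1.0, 0.1])                 # min-x-pol state: S_hv = 0
S = rot(-0.4) @ S0 @ rot(-0.4).T         # tilt the target
psi = deorientation_angle(S)
S_deo = rot(psi) @ S @ rot(psi).T
print(abs(S_deo[0, 1]))                  # ~0: cross-pol removed
```

The recovered angle also serves as the orientation indicator discussed in the text: it measures how far the target's actual orientation deviates from the min-x-pol state (up to the criterion's 90-degree ambiguity).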

### 4.2. Numerical simulation

Based on the model of Figure 1, polarimetric scattering is calculated, and the scattering vector and Mueller matrix are obtained. A model of a layer of random non-spherical particles above a rough surface, representing scattering from terrain surfaces as shown in Figure 1, is applied to the numerical simulation of

Figures 12(a,b) show numerical relationship between the parameters

Figure 12(a) presents the distributions of some typical terrain surfaces on the

Figure 12(b) presents the distribution of scattering terms on the plane

### 4.3. Surface classification

The flow chart of the classification method for PolSAR data based on deorientation and the new parameters

AirSAR data at L band over the Boreal area of Canada, which has rich vegetation resources, are chosen for classification and orientation analysis, as shown in Figure 15(a). Figure 15(b) shows the deorientation classification over this area. Terrain surfaces are divided into 8 classes following the decision tree: Timber, Urban 2, Urban 1, Canopy 2, Canopy 1, Surface 3, Surface 2 and Surface 1.

Figure 16 shows the data distributions on the planes

The orientation distributions of several classes are selected for display in Figures 17(a,b). It can be seen that

In Urban 1 (actually sparse forest of vertical trunks rather than artificial constructions), by contrast, the orientations are uniformly vertical, indicating orderly trunks.

Random orientation in Canopy 1 means that the vegetation canopy in this area might be disordered bush. Note that the region with roads inside the forest might show randomness confused by the bush vegetation on the roadsides.

Following the above orientation analysis, the terrain surfaces are further classified into subclasses and identified by type.

### 4.4. Faraday rotation and Surface classification

To obtain moisture profiles in the subcanopy and subsurface, UHF/VHF bands (435 MHz, 135 MHz) are needed to penetrate through the dense subcanopy (~20 kg/m^{2} and more) with little scattering and reach the subsurface. However, when the wave passes through the anisotropic ionosphere under the action of the geomagnetic field, the polarization vectors of the electromagnetic wave are rotated; this is called Faraday rotation (FR). The FR depends on the wavelength, the electron density, the geomagnetic field, the angle between the direction of wave propagation and the geomagnetic field, and the wave incidence angle.

Assuming that the propagation direction is not changed in passing through the homogeneous ionosphere,

where

The FR may decrease the difference between the co-polarized backscattering returns, enhance cross-polarized echoes, and mix the different polarized terms. Thus, satellite-borne SAR data at low frequencies become distorted by the FR effect. Since the FR is proportional to the square of the wavelength, it has an especially serious impact on SAR observations operating at frequencies lower than L band. The FR angle at P band can reach dozens of degrees.

As a polarized electromagnetic wave passes through the ionosphere, the scattering matrix

The measured polarimetric data with FR,
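
The distortion can be sketched with the common one-way rotation model M = R(Ω) S R(Ω), in which the rotation does not cancel on the return path; this convention is an assumption of the sketch, not a statement of the paper's exact formulation:

```python
import numpy as np

def faraday_distort(S, omega):
    """Apply a one-way Faraday rotation `omega` to a 2x2 scattering
    matrix on both the downward and upward passes: M = R(omega) S R(omega).
    The rotation is non-reciprocal, so it does not cancel on the
    return path and mixes co- and cross-polarized terms."""
    c, s = np.cos(omega), np.sin(omega)
    R = np.array([[c, s], [-s, c]])
    return R @ S @ R

# A purely co-polarized, reciprocal target acquires cross-polarized
# terms, and the distorted matrix is no longer symmetric.
M = faraday_distort(np.diag([1.0, 0.6]), np.deg2rad(20.0))
print(M[0, 1], M[1, 0])   # nonzero, opposite signs
```

The broken symmetry of the measured matrix is what makes FR both damaging to low-frequency polarimetry and, conversely, estimable from fully polarimetric data.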

## 5. Change Detection of Terrain Surface from Multi-Temporal SAR Images

Multi-temporal SAR remote sensing imagery provides a fast and practical technical means for surveying and assessing vast surface changes. One direct application is to detect and classify information on changes in the terrain surface. It would be laborious to make an intuitive assessment of a huge amount of multi-temporal image data over a vast area, and such assessment based on qualitative gray-level analysis is not accurate and might lose important information. How to detect and automatically analyze information on terrain surface change is a key issue in remote sensing.

In this section, the two-thresholds EM and MRF algorithm (2EM-MRF) is developed to detect the direction of change, i.e. the backscattering-enhanced, backscattering-reduced and unchanged regimes, from the SAR difference image (Jin and Luo, 2004; Jin and Wang, 2009).

On May 12, 2008, a major earthquake measuring 8.0 on the Richter scale jolted the Wenchuan area of southwestern China's Sichuan Province. To evaluate the damage and terrain surface changes caused by the earthquake, multi-temporal ALOS PALSAR image data acquired before and after the earthquake are applied to the detection and classification of terrain surface changes. Using the Google Earth tools for surface mapping, the post-earthquake surface changes overlaid on the DEM topography can be demonstrated in multi-azimuth views as an animation. The detection and classification are also compared with optical photographs. It is proposed that multi-threshold EM and MRF analysis may become tractable when multi-polarization, multi-channel, multi-sensor and multi-temporal image data become available.

### 5.1. Two-thresholds expectation maximization algorithm

Consider two co-registered images

As usual, a one-threshold expectation maximization (EM) algorithm has been employed to classify two opposite changes: the unchanged class

Assume that both the conditional probabilities

where the superscripts t and t+1 denote the current and next iterations.

Analogously, these equations can also be used to estimate

The estimates are computed starting from the initial values and iterating the above equations until convergence. The initial values of the estimated parameters can be obtained from analysis of the histogram of the difference image. A pixel subset

Under the assumption of Gaussian distribution, Eq. (3) yields

to obtain optimal
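
The iterative estimation described above can be sketched as a generic three-class (scattering-reduced, unchanged, scattering-enhanced) one-dimensional Gaussian-mixture EM; the histogram-based initialization is replaced here by plain arrays, and the function name is ours:

```python
import numpy as np

def em_gaussians(x, mu, sigma, prior, iters=50):
    """EM updates for a 1-D Gaussian mixture over difference-image
    pixel values x; three components model the scattering-reduced,
    unchanged and scattering-enhanced classes."""
    x = np.asarray(x, float)[:, None]                  # shape (N, 1)
    mu, sigma, prior = (np.asarray(v, float) for v in (mu, sigma, prior))
    for _ in range(iters):
        # E-step: posterior probability of each class for each pixel
        pdf = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        post = prior * pdf
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate priors, means and standard deviations
        nk = post.sum(axis=0)
        prior = nk / len(x)
        mu = (post * x).sum(axis=0) / nk
        sigma = np.sqrt((post * (x - mu) ** 2).sum(axis=0) / nk)
    return mu, sigma, prior

# Recover three well-separated classes from synthetic pixel values.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 0.5, 1000),
                    rng.normal(0, 0.5, 2000),
                    rng.normal(3, 0.5, 1000)])
mu, sigma, prior = em_gaussians(x, [-2.0, 0.5, 2.0], [1.0, 1.0, 1.0],
                                [1 / 3, 1 / 3, 1 / 3])
print(np.sort(mu))   # close to [-3, 0, 3]
```

The two decision thresholds of the 2EM step then fall where adjacent weighted class densities intersect, separating the reduced/unchanged and unchanged/enhanced regimes.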

Figures 18(a,b) are the ALOS PALSAR images (L band, HH polarization) acquired on February 17 and May 19, 2008, before and after the earthquake in Beichuan County, Wenchuan area, respectively. The spatial resolution is 4.7 m × 4.5 m. Figure 18(c) is the difference image of Figures 18(a,b); it is very difficult to evaluate the terrain surface changes from it by visual inspection alone, especially to accurately classify the change classes at large scale.

The pixels of the difference image (i.e.

The parameter

As an example, Figure 19(a) shows the grayscale histogram of a small part chosen from the difference image, Figure 18(c). Three classes of the probability distributions are outlined in this figure. The histogram is normalized in accordance with the probability density distributions.

The initial statistical parameters of the subsets of

The initial statistical parameters are related to the classes

Figure 19(b) is the change detection using the EM algorithm, where red indicates areas with scattering enhanced, blue indicates areas with scattering reduced, and green denotes no change. It yields a three-class change map of the difference image.

### 5.2. The change detection using 2EM-MRF

In fact, the pixels of the image are spatially correlated, i.e. a pixel belonging to the class

Let the set

where

where

The MRF algorithm is employed with a spatial neighborhood system of 5×5 pixels to take account of spatial texture. The generation of the final change-detection map involves labeling all the pixels in the difference image so that the posterior probability of Eq. (52) is maximized.

The MRF algorithm is equivalent to minimization of the Gibbs energy function and is carried out iteratively.
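
A generic sketch of such an iterative energy minimization is iterated conditional modes (ICM) with a 4-neighbour Potts-style smoothness term; this is a simplified stand-in for illustration, not the paper's exact neighborhood energy:

```python
import numpy as np

def icm_smooth(labels, log_lik, beta=1.5, iters=5):
    """Iterated conditional modes: greedily minimize a Gibbs energy
    E = -log-likelihood + beta * (number of disagreeing 4-neighbours).

    `labels` is an (H, W) integer class map (e.g. from the EM step);
    `log_lik` is an (H, W, K) array of per-pixel class log-likelihoods."""
    H, W, K = log_lik.shape
    lab = labels.copy()
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                best, best_e = lab[i, j], np.inf
                for k in range(K):
                    e = -log_lik[i, j, k]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and lab[ni, nj] != k:
                            e += beta
                    if e < best_e:
                        best, best_e = k, e
                lab[i, j] = best
    return lab

# A single mislabeled pixel in a uniform region is smoothed away.
labels = np.zeros((10, 10), dtype=int)
labels[5, 5] = 1
log_lik = np.stack([np.where(labels == 0, 0.0, -1.0),
                    np.where(labels == 1, 0.0, -1.0)], axis=-1)
print(icm_smooth(labels, log_lik).sum())   # -> 0
```

The smoothness weight plays the same role as the spatial term of the MRF energy: it suppresses isolated, speckle-induced label flips while preserving coherent change regions.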

Figure 20(a) presents the final result of 2EM-MRF for detection and classification of the terrain surface changes from the difference image, Figure 18(c). The numbers 1~8 indicate some areas with typical changes.

It can be seen, for example, that in area 1 the river was largely blocked by landslide; in area 2 landslides caused large-scale blockages; area 3 is Beichuan town, where the terrain surface was significantly disturbed and roughened by landslides and building collapse; in areas 4 and 5 the river was blocked by landslides along the river lines; in area 6 the highway was significantly blocked by landslide; in area 7 there were collective landslides almost all towards one azimuth direction, and the flat area of reduced scattering along the river might be due to seasonally higher water in May than in February; in area 8 the collapse of mountain blocks left the terrain surface undulating, enhancing the scattering.

Figure 20(b) gives the topography and contour lines of the same area from the Google website. It is useful for locating where and what kinds of changes are happening. It can be seen that the landslides, e.g. in areas 1 and 7, are correlated with the direction and magnitude of the slopes. A quantitative analysis to assess the landslides and surface damage can be further developed.

As an example, Figure 20(c) compares the change in region 3 with a photograph released to the public on the website of the Ministry of State Resources of China. Both figures show that A is a landslide that smoothed the surface, and B is the blocked highway. It can be seen that optical shadowing does not actually confuse these classifications.

Using the Google Earth mapping tool with the 2EM-MRF results, the terrain surface changes classified into three types and overlaid on the DEM topography can be shown in multi-azimuth views as an animation. Since the whole 2EM-MRF process is automatic and carried out in real time, it should be helpful, especially for commanding rescue work at a disaster scene. Figure 21 shows four views.

It can be seen that information retrieval from a single amplitude image (scattered power) with one frequency and single polarization at one time is very limited. With fully polarimetric technology, for example, using target decomposition theory, the deorientation transform, etc., the physical status of terrain surfaces such as vegetation canopy, roads, buildings and rivers can be well classified, which would be greatly helpful for change detection. The four Stokes parameters of polarimetric measurement can reveal surface slope variation and anisotropy, and InSAR has been well applied to the retrieval of surface digital elevation. All of this progress shows superiority over single-amplitude-image analysis and manual visual estimation. It is also helpful to fuse CFAR (constant false alarm rate) detection of objects, e.g. edges and blocks, from a SAR image.

As multi-polarization, multi-channel and multi-temporal image data become available, 2EM-MRF can be further extended to utilize the principal characteristic parameters of the difference image, such as VV+HH, VV-HH, entropy, angular

## 6. DEM Inversion from a Single POL-SAR Image

The co-polarized or cross-polarized backscattering signature is a function of the incident wave's ellipticity angle

However, the scattering signature is an ensemble average of the echo power from random scattering media. The measurable Stokes parameters, as polarized scattering intensities, should be directly related to the

Using the Euler angle transformation between the principal and local coordinates, the orientation angle

It is proposed that the linear texture of the tilted surface alignment be used to specify the azimuth angle

### 6.1. The Ψ shift as a function of the Stokes parameters

Consider a polarized electromagnetic wave incident upon the terrain surface at

where

When the terrain surface is flat, co-pol backscattering

Let

where the prime superscript denotes

It can be seen that (57a) and (57b) are much smaller than (57c), and (58a) is much smaller than (58b), so the last three terms on the RHS of Eq. (56) can be neglected. Thus, it yields the

It can be seen that the third Stokes parameter

Incidentally, if both

### 6.2. The range and azimuth slopes and DEM inversion

The polarization vectors

When the pixel surface is tilted with a local slope, the polarization vectors should be redefined following the local normal vector

By using the transformation of the Euler angles

Substituting the above relations into Eqs. (60, 61) yields

Applying Eq. (64) to Eq. (61), the polarization vector

where

Thus, the range and azimuth slopes of the pixel surface can be obtained as

Since a single

If only single-pass SAR image data are available, one of the two unknown angles,

(1) Apply speckle filtering over the entire image.

(2) Apply the adaptive threshold method to produce a binary image. A global threshold value is not adopted because of the heterogeneity of the image pixels.

(3) Apply morphological processing to the binary image to remove isolated pixels and fill small holes. Referring to the binary "1" part as the foreground and the binary "0" part as the background, the edges of the foreground are extracted.

(4) Each pixel on the edge is set as the center of a square 21×21 window, and the curve segment through the centered pixel is then obtained. Applying a polynomial fit to the curve segment in the least-squares sense, the tangential slope of the centered pixel is obtained, which yields the azimuth angle of that pixel. The pixel is then marked so that it will not be recalculated in the next pass.

(5) Removing the edges found in Step (4) from the foreground forms a new foreground. Repeat Step (4) until the azimuth angle of every pixel in the initial foreground is determined.

(6) Form the complementary binary image, i.e. the initial background now becomes the foreground. Steps (4) and (5) are then repeated on this complementary image until the azimuth angle of every pixel in the initial background is determined.
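Step (4) of this procedure can be sketched as follows. This is a minimal illustration, assuming the edge is locally representable as row = p(col); the quadratic degree and the helper name `azimuth_angle_at` are assumptions (the text specifies the 21×21 window but not the polynomial degree).

```python
import numpy as np

def azimuth_angle_at(edge_points, center, half_window=10, degree=2):
    """Estimate the azimuth angle at an edge pixel from the tangential
    slope of a polynomial fitted, in the least-squares sense, to the
    edge curve segment inside a (2*half_window+1)-sized square window
    (21x21 here, as in the text).

    edge_points : (N, 2) array of (row, col) edge coordinates
    center      : (row, col) of the centered pixel
    """
    r0, c0 = center
    # keep only edge points falling inside the window around the center
    mask = (np.abs(edge_points[:, 0] - r0) <= half_window) & \
           (np.abs(edge_points[:, 1] - c0) <= half_window)
    seg = edge_points[mask]
    # least-squares polynomial fit: row = p(col) through the segment
    coeffs = np.polyfit(seg[:, 1], seg[:, 0], degree)
    # tangential slope of the fitted curve at the centered pixel
    slope = np.polyval(np.polyder(coeffs), c0)
    return np.arctan(slope)  # azimuth angle in radians

# synthetic straight edge with unit slope -> azimuth angle pi/4
cols = np.arange(0, 41, dtype=float)
pts = np.stack([cols, cols], axis=1)
ang = azimuth_angle_at(pts, center=(20, 20))
```

On this synthetic diagonal edge the fitted tangent has slope 1, so the recovered azimuth angle is π/4.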

This approach provides supplementary information to first determine the angle

As an example of DEM inversion, the L-band polarimetric SAR data are shown in Figure 23(a). As the azimuth angle

where the orientation angle

Substituting

The DEM can be generated by solving the Poisson equation for a

where
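As a sketch of this slope-to-elevation step, the following minimal example integrates the range and azimuth slope maps by a least-squares (Frankot-Chellappa style) FFT solution of the Poisson equation. The periodic boundary condition and the function name are assumptions of this illustration, not the authors' implementation.

```python
import numpy as np

def dem_from_slopes(p, q):
    """Recover elevation z from the range slope p = dz/dx and azimuth
    slope q = dz/dy by solving the Poisson equation
    laplacian(z) = dp/dx + dq/dy in the Fourier domain (least-squares
    integration; periodic boundaries are an assumption of this sketch)."""
    rows, cols = p.shape
    wy = np.fft.fftfreq(rows) * 2 * np.pi   # angular frequencies per pixel
    wx = np.fft.fftfreq(cols) * 2 * np.pi
    WX, WY = np.meshgrid(wx, wy)            # shape (rows, cols)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                        # avoid division by zero at DC
    Z = (-1j * WX * P - 1j * WY * Q) / denom
    Z[0, 0] = 0.0                            # mean elevation is unrecoverable
    return np.real(np.fft.ifft2(Z))

# smooth periodic test surface and its analytic per-pixel slopes
h = 2 * np.pi / 64
y, x = np.mgrid[0:64, 0:64] * h
z = np.sin(x) + np.cos(y)
p = h * np.cos(x)       # dz/dcol (range slope)
q = -h * np.sin(y)      # dz/drow (azimuth slope)
z_hat = dem_from_slopes(p, q)
```

For pure sinusoidal surfaces the spectral integration is exact up to the unrecoverable mean, so `z_hat` reproduces `z` to machine precision.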

## 7. Reconstruction of Building Objects from Multi-Aspect SAR Images

Reconstruction of 3D objects from SAR images has become a key issue in information retrieval for SAR monitoring. 3D reconstruction of man-made objects is usually based on interferometric techniques or on fusion with other data sources, e.g. optical and GIS data. It has been noted that scattering from man-made objects produces bright spots at sub-metric resolution, but strip/block-like images at metric resolution. This difference is largely attributed to the different imaging modes employed at different resolutions, for example, spotlight mode for sub-metric resolution and stripmap mode for metric resolution.

An automatic method is developed for detection and reconstruction of 3D objects from multi-aspect metric-resolution SAR images. The steps are as follows:

The linear profile of the building objects is regarded as their most prominent characteristic. The POL-CFAR detector, the Hough transform, and corresponding image processing techniques are applied to the detection of parallelogram-like façade-images of building objects. Results on simulated SAR images as well as real 4-aspect Pi-SAR images are given after each step.

A probabilistic description of the detected façade-images is presented. Maximum-likelihood (ML) estimation of building objects from multi-aspect observed façade-images is given. Eventually, in association with a hybrid priority rule of inversion reliability, an automatic algorithm is designed to match multi-aspect façade-images and reconstruct the building objects. Moreover, taking advantage of the multi-aspect coherence of building-images, a new iterative co-registration method is presented.

Finally, reconstruction results are presented, and good performance is demonstrated by comparison with ground truth data. It is also concluded that the reconstruction accuracy depends closely on the number of available aspects. In the end, a practicable scheme for 3D-object reconstruction from satellite-borne SAR images is proposed (Xu and Jin, 2007).

### 7.1. Object image detection

As shown in Figure 25, the scattering image of a simple building is produced by direct scattering from the wall/façade, roof and edges, double scattering between wall and ground, the projected shadow, etc. At metric resolution, the scattering produces bright strips/blocks from the respective parts of the object. A key step of reconstruction is to identify and extract these strips/blocks.

For the case of a smooth wall/façade, the only double-scattering term to be considered must follow a specific propagation path, i.e. wall (reflection) to ground (diffuse scattering). A simple building object is considered and modeled as a cuboid, and the spatial distribution of the building objects is assumed not to be crowded, i.e. without serious shadowing and superposition. The cuboid object is described by seven geometric parameters: 3D position, 3D size and orientation. In addition, the flat roof is assumed to scatter much less than the edges and façades.

Figure 26 shows (a) a model of the cuboid object, (b) its SAR image simulated using the mapping and projection algorithm, (c) a photo of a real rectangular building, and (d) its image from Pi-SAR observation, respectively.

It can be seen from Figure 26 that the longer wall of the cuboid-like building, henceforth called the major wall, plays the dominant role in the SAR image. In this section, attention is focused on the major-wall image. As the first step, constant false alarm rate (CFAR) edge detectors, such as the ratio gradient, are used for edge detection.

In our experience, the POL-CFAR detector derived from the complex Wishart distribution can fulfill the segmentation requirements of medium- and small-scale building-images. Moreover, the ridge filter applied after this step can accommodate segmentation errors to some extent.

In order to improve the segmentation precision, the edges detected by the window operator need to be further thinned. Using the edge intensity produced in the POL-CFAR detection, an edge-thinning method of 'ridge filtering' is presented. Taking an 8-neighbor 3×3 window as an example, the center pixel is regarded as lying on the ridge if its intensity is higher than those of the pixels along its two sides.
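The 3×3 ridge test can be sketched as follows. Checking the center pixel against the neighbor pair along each of the four axes through the window is one plausible reading of "higher than the pixels along two sides", so this is an illustrative variant rather than the exact filter used.

```python
import numpy as np

def ridge_filter(intensity):
    """3x3 ridge test on an edge-intensity map: a pixel is kept as a
    thinned (ridge) pixel if its intensity is strictly higher than both
    neighbors along at least one of the four axes through the 3x3
    window (horizontal, vertical, and the two diagonals)."""
    I = np.pad(intensity, 1, mode="edge")
    c = I[1:-1, 1:-1]
    pairs = [
        (I[1:-1, :-2], I[1:-1, 2:]),   # left, right
        (I[:-2, 1:-1], I[2:, 1:-1]),   # up, down
        (I[:-2, :-2],  I[2:, 2:]),     # diagonal: up-left, down-right
        (I[:-2, 2:],   I[2:, :-2]),    # diagonal: up-right, down-left
    ]
    ridge = np.zeros_like(c, dtype=bool)
    for a, b in pairs:
        ridge |= (c > a) & (c > b)
    return ridge

# a blurred vertical edge whose intensity peaks at column 3
img = np.tile(np.array([0., 1., 2., 5., 2., 1., 0.]), (5, 1))
mask = ridge_filter(img)
```

On this example the filter keeps only the peak column of the blurred edge, thinning it to one pixel width.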

4-aspect Pi-SAR images acquired over Sendai, Japan along a square-loop flight path (Flight Nos. 8708, 8709, 8710, 8711; X-band; pixel resolution 1.25 m) are taken as an example for the real SAR image study. The region of interest (ROI) is the Aobayama campus of Tohoku University.

Figure 27(a) shows one aspect of the Pi-SAR images as an example.

Figures 27(b-d) show the results of the edge detection applied to the Pi-SAR images.

The most distinct feature of a building object in a SAR image is its parallel lines. The Hough transform is employed to detect straight segments from the thinned edges, and parallelogram outlines of the façade-images can then be extracted. It is carried out here in a tiling manner, i.e. the original picture is partitioned into blocks, each of which is processed independently via the Hough transform.

The detection steps for parallel line segments are: i) find bright spots in the transform domain with a minimum distance between every two of them, so as to avoid re-detection; ii) in the spatial domain, search for segments consisting of successive points along the corresponding parallel lines, each segment longer than a minimum length and with the gap between two successive points shorter than a maximum gap; iii) only pairs of points lying on the two lines and directly facing each other are taken into account in segment searching.
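A minimal sketch of the Hough-transform stage follows, covering peak picking and the pairing of parallel lines only (the segment-searching refinements of steps ii) and iii) are omitted); the vote threshold and bin counts are assumptions.

```python
import numpy as np

def hough_parallel_pairs(edge, n_theta=180, min_votes=10, min_sep=2):
    """Minimal Hough-transform sketch for parallel-line detection in a
    binary edge image: accumulate (theta, rho) votes, pick peaks above
    `min_votes`, and pair peaks that share the same theta bin but whose
    rho values differ by at least `min_sep` pixels."""
    ys, xs = np.nonzero(edge)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*edge.shape)))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), one vote per theta bin
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rhos + diag] += 1
    ti, ri = np.nonzero(acc >= min_votes)     # peaks above the threshold
    rho = ri - diag
    # pair peaks lying in the same theta bin with distinct rho
    return [
        (thetas[ti[i]], rho[i], rho[j])
        for i in range(len(ti)) for j in range(i + 1, len(ti))
        if ti[i] == ti[j] and abs(rho[i] - rho[j]) >= min_sep
    ]

# two horizontal parallel lines at rows 5 and 12
edge = np.zeros((20, 30), dtype=bool)
edge[5, 5:25] = True
edge[12, 5:25] = True
pairs = hough_parallel_pairs(edge)
```

The two horizontal lines produce a peak pair at theta = π/2 with rho values 5 and 12, i.e. the parallel sides of a façade-image outline.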

Figures 28(a-d) show the detection results from the 4-aspect Pi-SAR images. A few building-images are not detected and some false images are wrongly produced. In fact, there is a compromise between over-detection and incomplete detection. We prefer to preserve more detected building-images, so as to control the non-detection rate, while the false images are expected to be eliminated through the subsequent auto-selection of effective images for reconstruction. However, there always remain some undetectable images, attributable to shadowing by tree canopies, overlapping of nearby buildings, or overly complicated wall structures.

The detected parallelogram of a homogeneous scattering area could be direct scattering from a façade, wall-ground double scattering, a combination of direct and double scattering, the projected shadow of a building, or even strong scattering from strip-like objects (e.g. a metal fence or metal awning), which is not considered in the classification.

Shadow is identified if the scattering power of the area is much lower than that of its vicinity. Specifically, two parallel equal-length strip windows are first set up on its two sides, and the median scattering powers of the three regions are then calculated. If the middle one is weaker than the two sides, the area is classified as shadow instead of building image.

The wall-ground double scattering can be differentiated from direct scattering based on polarimetric information. The de-orientation parameter

To implement target reconstruction from multi-aspect SAR data, calibration requires that at least one aspect be calibrated beforehand; the other aspects are then calibrated with respect to this calibrated aspect. A natural object, such as flat bare ground, is usually chosen as the reference target, which is expected to preserve identical scattering for all aspects. Thereafter, the channel imbalance factors are estimated from the distributions of the phase difference and amplitude ratio of the co-polarized (hh and vv) echoes of the reference target, and are then used to compensate the whole SAR images (Xu and Jin, 2008b).
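A minimal sketch of the imbalance estimation: here the peaks of the amplitude-ratio and phase-difference distributions are approximated by their means, which is a simplification of the estimator described in (Xu and Jin, 2008b).

```python
import numpy as np

def channel_imbalance(hh, vv):
    """Estimate the co-polarized channel imbalance factor over a
    reference target (e.g. flat bare ground) from the amplitude ratio
    and phase difference of the hh and vv echoes, which would be
    identical for an ideal reference target."""
    ratio = np.abs(hh) / np.abs(vv)          # amplitude-ratio samples
    phase = np.angle(hh * np.conj(vv))       # phase-difference samples
    # combine the two distribution peaks (approximated by means here)
    return np.mean(ratio) * np.exp(1j * np.mean(phase))

# simulated reference-target echoes with a known imbalance factor
rng = np.random.default_rng(0)
vv = rng.normal(size=1000) + 1j * rng.normal(size=1000)
f_true = 1.2 * np.exp(1j * 0.3)
hh = f_true * vv
f_est = channel_imbalance(hh, vv)
# compensation: dividing the hh channel by f_est restores hh == vv
```

In practice the estimated factor is applied to the whole image of each uncalibrated aspect, not just the reference area.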

### 7.2. Building reconstruction

The complexity of object-terrain surfaces and image speckle introduces certain uncertainty into the detection of object images. To well describe multi-aspect object images, a parameterized probability is introduced for the subsequent automatic reconstruction. For convenience, the detected building-images are parameterized. Generally, the edge pixels detected by CFAR are randomly situated around the real, or theoretical, boundary of the object. It is reasonable to presume that the deviation distances of the edge points follow a normal distribution.

The original edge can be equivalently treated as a set of independent edge points. The line detection approach is considered as an equivalent linear regression, i.e. line fitting from random sample points. According to linear regression from independent normal samples, the slope of the fitted line follows the normal distribution.

All parameters have normal distributions, and their variances are determined by the variance of the edge-point deviations. After counting the deviations of the edge points in the vicinity of each lateral of all detected building-images and making a histogram, the variance can be determined through a minimum square error (MSE) fit of the histogram using the normal probability density function (PDF). The 4-aspect Pi-SAR images are counted in this way.
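The MSE fit of the histogram can be sketched as follows; the grid search over candidate sigmas stands in for a full nonlinear least-squares fit, and the zero-mean assumption follows the deviation model above.

```python
import numpy as np

def fit_normal_variance(deviations, bins=21):
    """Estimate the variance of edge-point deviations by a minimum
    square error fit of a zero-mean normal PDF to their histogram,
    searched over a grid of candidate sigmas (a simple stand-in for a
    nonlinear least-squares fit)."""
    hist, edges = np.histogram(deviations, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    sigmas = np.linspace(0.1, 5.0, 500)
    # squared error between the histogram and the normal PDF per sigma
    errs = [
        np.sum((hist - np.exp(-centers**2 / (2 * s**2))
                / (s * np.sqrt(2 * np.pi)))**2)
        for s in sigmas
    ]
    return sigmas[np.argmin(errs)] ** 2

# synthetic deviations drawn from N(0, 1.5^2): fitted variance ~ 2.25
rng = np.random.default_rng(1)
dev = rng.normal(0.0, 1.5, size=20000)
var = fit_normal_variance(dev)
```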

Given a group of building-images detected from multi-aspect SAR images, the corresponding maximum-likelihood probability can further be used as an assessment of the coherence within this group of multi-aspect images. A large maximum-likelihood probability indicates a strong coherence among the group of multi-aspect building-images, and vice versa.

Multi-aspect co-registration, as a critical pre-processing step, is necessary when dealing with real SAR data.

Given the specification of a region of interest (ROI), e.g. the longitudes and latitudes, the corresponding area in SAR image of each aspect can be coarsely delimited according to its orbit parameters. It can be regarded as a coarse co-registration step. Manual intervention is necessary if the orbit parameters are not accurate enough.

Only the parameters of the building-images need to be co-registered to the global coordinates, rather than the original SAR images. This is regarded as a fine co-registration step. In this study, only linear co-registrations are considered. A simple and effective method is to manually choose ground control points. In general, a distinctive terrain object or building is first selected as the zero-elevation reference, and its locations in the different aspects are then pinpointed. Coherence among multi-aspect building-images is the basis of automatic reconstruction.

Finally, an automatic multi-aspect reconstruction algorithm is designed. The building objects are reconstructed from the 4-aspect simulated SAR images, and the inversion accuracy is very good. The reconstructed 3D objects placed on the true DEM show that the inverted elevations also match well with the true DEM.

Due to the difficulty of collecting ground truth data, a high-resolution satellite optical image (0.6 m QuickBird data) is used as a reference. The geometric parameters of each building, manually measured from the image, are taken as ground truth to evaluate the accuracy of reconstruction.

There is a trade-off between the false and correct reconstruction rates. If we increase the false alarm rate of edge detector, relax the requirements of building-image detection and/or increase the false alarm rate of judging correctly matched group, it will raise the reconstruction rate, but also boost the false reconstruction rate. Efficiency will be deteriorated if the false reconstruction rate goes too high. On the contrary, the false reconstruction rate can be reduced and the accuracy of reconstructed buildings can be improved, but the reconstruction rate will also decline.

A critical factor confining the reconstruction precision is the number of effective aspects, hereafter referred to as the effective aspect number (EAN). The reconstruction result becomes better as more SAR images of different aspects become available.

The main error of reconstruction is caused by the boundary deviation of the detected building-images, which originates from complicated scattering and interactions of spatially distributed objects and backgrounds. In the probabilistic description of a detected building-image, the boundary presented in the SAR image is taken to be the same as reality. However, the actually detected building-image might be biased or even partly lost due to obstruction and overlapping.

In addition, it is noticed that large-scale buildings might not be well reconstructed. The reason is partly attributed to the premise of the Wishart distribution in the POL-CFAR detector. Since the images of large-scale buildings may reveal more detailed information about texture features and heterogeneity, the performance of the edge detector deteriorates. Developing a new edge detector based on a specific speckle model for high-resolution images could improve the results. Another more feasible way is to employ a multi-scale analysis, through which building-images of different scales present Gaussian properties at their own scale levels. Of course, the expense of multi-scale analysis might be a loss in precision.

Another issue to be addressed is the exploitation of multi-aspect polarimetric scattering information. In heterogeneous urban areas, the terrain objects appear distinct in different aspects, which gives rise to a very low coherence between their multi-aspect scatterings. Hence, polarimetric scattering information may not be a good option for the fusion of multi-aspect SAR images over urban areas.

After the ROI is coarsely chosen in each aspect image, edge detection and object-image extraction are carried out in sequence. Then the object-images are parameterized and finely co-registered. Once the multi-aspect object-images are automatically matched, the 3D objects are reconstructed at the same time.

The merits of this approach are the automation of the process with few manual interventions, the full utilization of all-aspect information, and the high efficiency of computer processing. Reconstruction on a 4-aspect Pi-SAR dataset (500×500) takes less than 10 min of CPU time (CPU frequency 3 GHz). The complexity of this approach is about

It is tractable to extend this approach to the reconstruction of other kinds of objects. For other types of primary objects, given the a priori knowledge, new image detection methods have to be developed. More complicated buildings can possibly be treated as combinations of different primary objects, which can be reconstructed separately and then combined together.

Considering a space-borne SAR with the functions of left/right side looking (e.g. Radarsat-2 SAR) and forward/backward oblique side looking (e.g. PRISM on ALOS), six different aspect settings are available for the ascending orbit. There is a 20° angle between the ascending and descending flights of a sun-synchronous orbit; therefore, the sensor can observe from 12 different aspects. Suppose the sensor uses a different aspect on each visit and the repeat period is 15 days; then a set of SAR images acquired from 12 aspects can be obtained through a 3-month observation, and the reconstruction of building objects can then be applied with acceptable precision.

## Acknowledgments

This work was supported by the National Natural Science Foundation of China under the grant No. NSFC 406370338.