Remote Sensing for Non‐Technical Survey

Written By

Yann Yvinec, Nada Milisavljevic, Charles Beumier, Idrissa Mahamadou, Dirk Borghys, Michal Shimoni and Vinciane Lacroix

Reviewed: 02 November 2016 Published: 30 August 2017

DOI: 10.5772/66691

From the Monograph

Mine Action - The Research Experience of the Royal Military Academy of Belgium

Authored by Charles Beumier, Damien Closson, Vinciane Lacroix, Nada Milisavljevic and Yann Yvinec

Abstract

This chapter describes the research activities of the Royal Military Academy on remote sensing applied to mine action. Remote sensing can be used to detect specific features that may support a suspicion of the presence, or absence, of mines. Work on the automatic detection of trenches and craters is presented here. Land cover can be extracted and is quite useful for mine action; we present a classification method based on Gabor filters. The relief of a region helps analysts understand where mines could have been laid, and methods to derive a digital terrain model from a digital surface model are explained. The special case of multi‐spectral classification is also addressed in this chapter, together with a discussion of data fusion. Hyper‐spectral data are addressed with a change detection method. Synthetic aperture radar data and their fusion with optical data have been studied; radar interferometry and polarimetry are also addressed.

Keywords

  • multi‐spectral
  • hyper‐spectral
  • radar
  • interferometry
  • polarimetry

1. Problem statement

Any sufficiently advanced technology is indistinguishable from magic.

—Arthur C. Clarke

Experience shows that considerable effort is sometimes spent clearing areas that are not actually mined [1]. It is therefore paramount to assess accurately which areas are contaminated, so that clearance is carried out on actually mined areas. Remote sensing can be used for that: it provides information on large areas that surveyors cannot enter because of the suspicion of mine contamination.

In the early days of research, the main objective was to find ways to detect mines by airborne survey by developing the right sensors. Nowadays, the goal is to use remote sensing to collect the right data about an area, to merge them with information from other sources, for instance expert knowledge, and then to improve awareness of the suspicious areas and update the assessment of their contamination.

This chapter describes some techniques developed to use remote sensing to help this activity.

A key objective is the detection of elements in the remote sensing data that may indicate the possible presence of mines. Trenches are such indicators. Section 2 describes the detection of long trenches on visible images.

Another example of the detection of indicators is the detection of craters as in Section 3.

The texture of an area as seen in airborne data may be an indication of the land use and therefore may help understand the contamination. A good example is the use of Gabor filters as in Section 4.

Remote sensing can also be used to provide a three‐dimensional (3D) representation of an area. This information proves invaluable to photo‐interpreters when they want to understand a certain area. A method to provide this surface, or terrain, model is presented in Section 5.

Classification of the land use may benefit from the multi‐spectral sensors as described in Section 6.

Section 7 explains how to fuse the data from several sensors to obtain a common and improved understanding of an area.

Hyper‐spectral sensors can bring a lot of added information, for instance for change detection, as explained in Section 8.

Radars are less often used in mine action but prove to be quite useful. A good example can be found in Section 9.

In order to help the mine action community to use remote‐sensing data, a geoinformation platform has been created and is available on the TIRAMISU website: http://www.fp7‐tiramisu.eu/.

2. Long trench detection (Vinciane Lacroix)

Using aerial photographs for individual mine recognition has been proposed in humanitarian demining since 1998 (e.g., see Ref. [2]), although with little success. Maathuis then introduced the concept of indirect minefield indicators in Ref. [3]. This concept has been further developed in the SMART project (e.g., see Ref. [4]) and used in AIDSS, an operational system designed to find indicators of mine presence (IMP) and indicators of mine absence (IMA) (e.g., see Ref. [5]). In the scope of the EU FP7 TIRAMISU project, new methods to detect and map IMAs and IMPs on aerial and satellite images were developed and tested [6–9]. The choice of indicators results from an analysis involving photo‐interpreters and possibly former members of the military who took part in the conflict that caused the contamination.

In the case of military conflicts, trenches are examples of IMPs. The first step in detecting an IMP/IMA consists in translating the indicator into basic image features. As trenches can be detected thanks to their long straight linear shadows, an automatic dark line detection tool is proposed for their extraction.

The low occurrence of trenches in the huge amount of aerial photos collected during a flight over a suspected hazardous area (SHA) makes the detection task overwhelming for a photo‐interpreter. Figure 1 gives an idea of the amount of data collected during a typical campaign. The line in Figure 1 shows the path of a plane, which performed an aerial campaign over the SHAs of Bihać, Bosnia and Herzegovina, in 2010.

Figure 1.

Example of a partial aerial campaign over the Bihać region in Bosnia and Herzegovina. Left: the partial flight path. Right: a stripe made of 17 colour photographs taken during the flight.

Five stripes of 17 colour photographs of size 4288 × 2848 pixels were taken using a Nikon D90 camera. On the right of the figure, the corresponding photographs of one of the airplane tracks are overlaid.

Various scenarios using the dark line detector were proposed to extract suspicious photographs from the whole set. The result of such a scenario is shown in Figure 2. Note that the area is covered by forest, whose tree shadows generate many other dark lines.

Figure 2.

Extraction of the most suspicious image thanks to the long trench detection tool: Image #9 has been identified as having the longest dark linear structures.

The trench detector based on dark line detection is described in Ref. [8] and summarized hereafter.

The success of line detection relies on (i) a line filter producing the contrast of the line and its direction at each pixel, (ii) an efficient non‐maximum suppression to extract the line axis and (iii) an appropriate linking of line elements. In order to use the line detection process for trench detection, a set of relevant local properties enabling discrimination of a trench from other elements must then be computed.

The line filter is based on the gradient line detector, which exploits the change in gradient direction on each side of a line [10]. Each pixel p is considered as a potential line element: in an eight‐neighbourhood, dp, the dot product of the gradients of the intensity at pixels arranged symmetrically around p, is computed (see Scheme 1); in the presence of a line, dp is negative for some of the four pairs, and |dp|, its absolute value, is highest for the pair lying in the direction perpendicular to the line (quantized direction). A more precise direction is provided by the difference of the gradient vectors G located at the pixels belonging to this pair. The direction of each of these gradient vectors makes it possible to discriminate between dark and bright lines (see Scheme 1 (centre) and (right)), while the square root of |dp| provides quantitative information related to the contrast in the neighbourhood of p.

Scheme 1.

Local neighbourhood at p (left); dark line at p (centre); bright line at p (right).

A line filter providing a vector field over the image is thus available, based on the above computation. Non‐maximum suppression is then used to obtain the line median axis. The local line width is then computed by associating the line axis with its borders; the gradient norm at these borders is used to compute the local line contrast, while linking axis elements yields various average attributes over line segments: contrast, direction and width.
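
A minimal numpy sketch of the core of this filter may help fix the idea. It assumes a grey‐scale image and a Gaussian gradient, and omits the dark/bright polarity check, the non‐maximum suppression and the linking steps; function and parameter names are illustrative, not those of the original implementation:

```python
import numpy as np
from scipy import ndimage

def dark_line_filter(img, sigma=1.0):
    """Gradient line filter sketch: for each pixel p, the gradients at the
    two pixels of each opposite-neighbour pair of the eight-neighbourhood
    are multiplied (dot product dp); a line makes dp negative, with |dp|
    largest for the pair perpendicular to the line."""
    img = img.astype(float)
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))
    contrast = np.zeros(img.shape)
    direction = np.zeros(img.shape)
    # the four pairs of opposite neighbours: E/W, SE/NW, S/N, SW/NE
    for dy, dx in [(0, 1), (1, 1), (1, 0), (1, -1)]:
        g1x = np.roll(gx, (-dy, -dx), (0, 1)); g1y = np.roll(gy, (-dy, -dx), (0, 1))
        g2x = np.roll(gx, (dy, dx), (0, 1));   g2y = np.roll(gy, (dy, dx), (0, 1))
        dp = g1x * g2x + g1y * g2y            # dot product of the two gradients
        better = (dp < 0) & (np.abs(dp) > contrast)
        contrast = np.where(better, np.abs(dp), contrast)
        # line direction from the difference of the two gradient vectors
        direction = np.where(better, np.arctan2(g1y - g2y, g1x - g2x), direction)
    # the square root of |dp| gives the local line contrast
    return np.sqrt(contrast), direction
```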

Red, green, blue and possibly panchromatic channels may be available for trench detection. As linear shadows should be detected, the channel offering the highest dark line contrast is kept for line extraction; this will most probably be the red channel. The IMP (i.e., trenches) should then be translated into image features according to image resolution and scene characteristics. In this case, the minimum length (in pixels) and the minimum average width (in pixels) of the line segments should be provided. Several scenarios were considered, all of them analysing the series of images taken during the aerial campaign displayed in Figure 1. In the first one, information about potential trench orientation is provided thanks to old annotated scanned maps, focusing the detection on a specific orientation. In another one, detection in a specific orientation is avoided; this could be an option in areas where tree shadows may generate a lot of dark lines. In the most general scenario, no constraint on the orientation is provided. In all cases, line features satisfying the constraints are provided as a list of vectors ranked by average line contrast. The vectors may be superimposed on the image or imported into a geographic information system. The most suspicious image is the one with the largest total length of valid segments.

When looking at the results, the photo‐interpreter may consider the image as suspicious and note it for further visual analysis, object‐based image analysis [9] or further processing such as ortho‐photo production.

Launching the dark line detector with a minimum length of 180 pixels and a minimum width of five pixels (the image resolution is about 10 cm), with no orientation constraint, yields only three suspected photographs, all of which contain trenches. Part of the most suspected one is displayed together with the superimposed detected linear objects.

3. Crater detection (Vinciane Lacroix)

3.1. Introduction

In the scope of the TIRAMISU project, image‐processing tools to help non‐technical survey were developed and tested (see Section 2). In this framework, the Cambodian Mine Action Centre (CMAC) expressed the need for a mapping of craters in the eastern part of Cambodia, as they might provide an indication of the presence of unexploded ordnance (UXO). The presence of UXO resulting from the US bombing during the late 1960s and 1970s still prevents the use of the land in Cambodia. When dropped, the bombs produced craters that may still exist today; many of them are filled with water, so that they appear as circular objects on satellite images.

Similar work was done by Hatfield Consultants [11] for Laos. The authors used historic Corona satellite images; they computed differences between the original image and its smoothed version and used these differences in an unsupervised K‐means fuzzy classifier.

We instead used data acquired by the WorldView‐2 (WV2) instrument, exploiting the circularity of the craters under the assumption that they contain water or bare soil.

This initial circle detection method, applied to the panchromatic image, is described in Ref. [12] and has been validated as a circle detection technique on various image sets used for circle detection [13] and on artificial images made of controlled shapes (circles, ellipses, squares and triangles). The method, called CGC (constrained gradient for circle), has been enhanced and further applied to the pan‐sharpened Band 8 (covering the near‐infrared (NIR) range from 860 to 900 nm) and to various normalized difference indices. In order to validate the method as a crater detection tool, since ground truth was not available, a visual crater detection was performed by an independent photo‐interpreter at IGEAT (Institut de Gestion de l'Environnement et d'Aménagement du Territoire). A summary of the whole process is provided in this section.

3.2. Circle detection overview

The state of the art in circle detection is provided in Ref. [12]. Most existing approaches use the fact that some basic elements (pixels, edge elements or 'edgels', or connected segments) are part of a circumference, so that they can be combined to generate a centre‐radius pair hypothesis. To our knowledge, none of these methods directly tests a centre‐radius hypothesis. We designed a 'Roundness Index' filter providing at each pixel a radius and a contrast corresponding to the most contrasted circle having the current pixel as centre. The current design of the filter is such that a unique circle is assigned to a pixel (i.e., if the pixel is the centre of several circles, only the most contrasted one is kept). The filtering involves a computation on pre‐defined digital circles whose radii span the radius range to detect (see Figure 3, left). The Roundness Index computation described in the next section is based on the average flux and circulation of the gradient of the intensity along the tested circle and on a gradient angle compatibility test. Indeed, in the case of a perfect circle, all gradient directions of pixels located on (and near) the circumference should point along the normal―that is, the line joining the current point to the centre of the circle; in the case of an imperfect circle, a small difference α should be tolerated. If the test is positive for a minimum proportion of pixels on the considered circumference, the pixel is considered as a potential circle centre candidate, and the radius providing the highest contrast is kept. The local maxima of the contrast are then selected. A final test on the variation of the gradient on the identified digital circle is performed in order to keep circular shapes only. A reasonable range for the tolerated gradient difference α is [0.06, 0.14] radian, and a reasonable range for the fraction of pixels satisfying the gradient angle compatibility is [0.7, 1]. The method should not be used for too small radii (i.e., below 6 pixels) because digitization generates errors in the gradient direction.

Figure 3.

Left: Various digital circles of odd radius (from 7 to 11) generated by Bresenham's circle algorithm. Right: local computation at p located on the digital circle of radius 7, involving projection of G, the gradient of the intensity, along two perpendicular directions (normal and tangent to the circle).

3.3. Round index computation

The computation of the Gaussian gradient $G$ and angle $A$ is a prerequisite to the filtering process. Let $r_0$ and $r_n$ be, respectively, the minimum and maximum radius of the circles to detect. Let $C = \{C_0, \ldots, C_k, \ldots, C_n\}$ be the list of digital circles located at the origin, corresponding to the increasing radii $r_0, \ldots, r_k, \ldots, r_n$. In order to initialize the process, the following information about each $C_k$ is stored: $Z$, the number of pixels in the circle, and, for each of these pixels, the integer Cartesian coordinates $i$ and $j$ and the angle in polar coordinates $\theta$. Bresenham's circle algorithm may be used to compute $i$ and $j$. Finally, let $\alpha$ be the tolerance angle, that is, the maximum angle difference between the gradient and the normal to the assumed circle, and let $b$ represent the minimum proportion of pixels satisfying the gradient angle compatibility test.

Two rasters, $R$, the local radius, and $F$, the filtered image providing the 'Roundness Index', are initialized. The image is scanned; each pixel $c$ is considered as a potential circle centre candidate; each digitized circle in $C$ is translated to $c$; the average flux $\overline{|G_n|}$ (absolute value of the projection of $G$ along the normal) and the average circulation $\overline{|G_t|}$ (absolute value of the projection of $G$ along the tangent) over all pixels of the translated circle are computed (see an example in Figure 3, right). If $\overline{|G_n|} > \overline{|G_t|}$ and if the gradient angle compatibility test is satisfied for at least $bZ$ pixels, $c$ is accepted as a circle centre candidate. The compatibility test at pixel $P$ located on circle $C_j$ is $|\theta - A_P| < \alpha$, where $A_P$ represents the gradient angle at $P$ translated by $c$ and $\theta$ is the polar angle of $P$ in $C_j$. The maximum average flux $\overline{|G_n|}$ computed over all digital circles and the radius $r$ of the circle providing that maximum are stored in $F$ and $R$, respectively. Bright circles are discriminated from dark circles based on the sign of the gradient projection (for bright circles the gradient points towards the centre), so the average flux is computed for dark and bright circles separately; for practical purposes, dark circles are stored as negative values. The result of the filtering process is held in these two rasters $R$ and $F$.
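
The following brute‐force sketch illustrates this filtering under the stated definitions. It is an illustration rather than the original implementation: it handles only the dark‐circle polarity (gradient pointing outwards, $A \approx \theta$), approximates Bresenham's circles by dense sampling and rounding, and leaves out the local‐maxima selection and the final gradient‐variation test:

```python
import numpy as np
from scipy import ndimage

def roundness_index(img, r_min=6, r_max=12, alpha=0.1, b=0.8, sigma=2.0):
    """'Roundness Index' filter sketch: every pixel is tested as a circle
    centre; for each radius, the average gradient flux through the
    translated digital circle must exceed its average circulation, and
    the angle compatibility test must hold for a fraction b of pixels."""
    img = img.astype(float)
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))
    H, W = img.shape
    F = np.zeros((H, W))           # best average flux ('Roundness Index')
    R = np.zeros((H, W), int)      # radius giving that maximum
    for r in range(r_min, r_max + 1):
        t = np.arange(0, 2 * np.pi, 0.5 / r)
        ij = np.unique(np.round(np.c_[r * np.sin(t), r * np.cos(t)]).astype(int), axis=0)
        i, j = ij[:, 0], ij[:, 1]
        theta = np.arctan2(i, j)               # polar angle of each circle pixel
        ys, xs = np.mgrid[r:H - r, r:W - r]    # candidate centres c
        py, px = ys[..., None] + i, xs[..., None] + j
        gn = gx[py, px] * np.cos(theta) + gy[py, px] * np.sin(theta)   # flux
        gt = -gx[py, px] * np.sin(theta) + gy[py, px] * np.cos(theta)  # circulation
        A = np.arctan2(gy[py, px], gx[py, px])
        ok = np.abs(np.angle(np.exp(1j * (theta - A)))) < alpha  # angle test
        flux = np.abs(gn).mean(-1)
        sub = (slice(r, H - r), slice(r, W - r))
        cand = (flux > np.abs(gt).mean(-1)) & (ok.mean(-1) >= b) & (flux > F[sub])
        F[sub][cand] = flux[cand]
        R[sub][cand] = r
    return F, R   # local maxima of F give circle centres, R their radii
```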

3.4. Crater detection

The main assumption underlying the crater detection is that craters are circular. Moreover, we assume that they are either filled with water or made of bare soil. This is probably too restrictive but limits the false‐positive rate, especially in the presence of trees, whose crowns are circular. We therefore identified Band 8, covering the near‐infrared (NIR) range from 860 to 900 nm, as the ideal band to detect dark circles: water indeed has a low reflectivity in this part of the spectrum. A threshold at 0.6 on the normalized difference vegetation index (NDVI) computed from WV2 data was added: only circles whose average NDVI is equal to or lower than the threshold are considered as potential craters. NDVI is computed on WV2 as the normalized difference between Band 6 ($B_6$) and Band 5 ($B_5$), that is, $\frac{B_6 - B_5}{B_6 + B_5}$. All potential normalized differences have also been considered as a basis for dark circle detection.
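
As an illustration, the band arithmetic and the NDVI test on a candidate circle can be written as follows; this is a sketch, with band variables and function names assumed (following the WV2 band convention of the text):

```python
import numpy as np

def nd_index(bi, bj):
    """Normalized difference ND_ij = (Bi - Bj)/(Bi + Bj) between two bands."""
    bi, bj = bi.astype(float), bj.astype(float)
    return (bi - bj) / (bi + bj + 1e-9)       # small epsilon guards against /0

def crater_candidate(ndvi, centre, radius, thresh=0.6):
    """Keep a detected dark circle only if the average NDVI inside it is at
    or below the threshold (water or bare-soil assumption)."""
    cy, cx = centre
    yy, xx = np.ogrid[:ndvi.shape[0], :ndvi.shape[1]]
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return ndvi[inside].mean() <= thresh

# usage, assuming b5 and b6 are co-registered WV2 bands already loaded:
# ndvi = nd_index(b6, b5)
# keep = crater_candidate(ndvi, centre=(120, 310), radius=8)
```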

We purchased WV2 data taken on 26 November 2011, covering 100 km² of the eastern part of Cambodia near the border with Vietnam, in a rural zone (Choam Kravien), with a spatial resolution of about 0.5 m for the panchromatic band and 2 m for the multi‐spectral bands, and a pixel depth at acquisition of 11 bits. The dark circle detection was performed on the data after ortho‐rectification and atmospheric correction.

3.5. Validation

In order to validate the crater detection algorithm, as ground truth was not available, a photo‐interpreter analysed five zones of 4000 × 4000 pixels, each covering 4 km², based on a colour composite (red = Band 7, green = Band 5, blue = Band 3). The photo‐interpreter provided for each detected crater the centre, the radius, the land cover, the circularity expressed as a percentage (100% representing a perfect circle) and a confidence measurement also expressed as a percentage. She found 334 craters over the five zones (20 km²). In the following, 'visual crater' and 'CGC circle' denote a crater identified by the photo‐interpreter and a circle detected using the automatic method, respectively. Table 1 summarizes some statistics derived from the visual crater detection over the five zones.

| Confidence | # | Circularity | # | Radius | # | Land cover | # |
|---|---|---|---|---|---|---|---|
| [75, 100] | 102 | [80, 100] | 32 | Larger than 12 | 12 | Water | 187 |
| [50, 75] | 109 | [60, 80] | 122 | [9, 12] | 80 | Bare soil | 143 |
| [25, 50] | 86 | [40, 60] | 109 | [6, 9] | 171 | Vegetation | 2 |
| [0, 25] | 37 | [20, 40] | 58 | [3, 6] | 69 | Unknown | 2 |
| | | [0, 20] | 13 | Lower than 3 | 2 | | |
| Total | 334 | | 334 | | 334 | | 334 |

Table 1.

Statistics on visual crater detection: confidence, circularity, radius and land cover.

The following remarks can be drawn from Table 1:

  • As far as the land cover is concerned, the assumption of bare soil or water inside the crater makes sense, as only four visual craters out of 334 (less than 1%) have another land cover according to the photo‐interpreter.

  • The circularity assumption (i.e., assuming that the crater is almost a circle) is far from true: only 152 visual craters (45%) have a circularity coefficient larger than 60.

  • The radius of the visual crater can be very small: 71 (21%) have a radius lower than 6 pixels, which is the minimum radius recommended for our circle detection technique.

  • Only 211 visual craters (63%) are detected with a confidence larger than 50.

Given these remarks, it seems a priori difficult to detect the craters with the CGC method: less than 1/3 (33%) satisfy the required assumptions of a radius larger than or equal to 6 and a circularity larger than or equal to 70%. The percentage drops to 28% if one considers only the craters with a confidence larger than or equal to 75%. Nevertheless, the comparison between the visual craters and the CGC circles was performed. In order to obtain quantitative results, precision $p$ and recall $r$ have been computed over the five zones; they are defined as $p = \frac{TP}{TP+FP}$ and $r = \frac{TP}{TP+FN}$, where $TP$, $FP$ and $FN$ are the numbers of true positives, false positives and false negatives, respectively. A false positive is a CGC circle that is not identified as a visual crater. A true positive is a CGC circle that corresponds to a visual crater. A false negative is a visual crater that has been missed by the automatic method. Precision is also referred to as the positive‐predictive value, while recall is also called true‐positive rate, sensitivity or hit rate. These definitions are better understood in view of Table 2, where it is assumed that the visual detection corresponds to 'Ground Truth'.

| Detection \ Ground truth | Crater | No crater | Sum | Ratio |
|---|---|---|---|---|
| Circle | TP | FP | Detected | p = TP/(TP + FP) |
| No circle | FN | TN | | |
| Sum | Existing | | | |
| Ratio | r = TP/(TP + FN) | | | |

Table 2.

Precision p and recall r definitions.

The circle detection was performed on Band 8 (with the NDVI threshold at 0.6) and on various normalized difference indices: ND83, ND84, ND85, ND73, ND74, ND75 and ND65, where $ND_{ij}$ refers to the normalized difference between $B_i$ and $B_j$, that is, $ND_{ij} = \frac{B_i - B_j}{B_i + B_j}$. A true positive is counted if the centre of the CGC circle falls inside a visual crater. Fusion using the logical operator 'OR' has been considered. Several precisions and recalls (summarized in Figure 4) have been computed in different ways:

  1. All visual craters have been considered with an equal weight.

  2. Visual craters (TP and FN) are counted according to the confidence assigned by the photo‐interpreter: the weight assigned is one twenty‐fifth of the confidence, so that a crater of confidence 100 has a weight of 4 compared to a weight of 1 for a crater with the lowest confidence. False positives have an equal weight of 1.

  3. Only visual craters having a radius larger than or equal to 6 and a circularity c ≥ 70 are equally considered. Nevertheless, CGC circles corresponding to visual craters having a radius or circularity outside these bounds are not counted as FP.

  4. Only visual craters having a radius larger than or equal to 6 and a circularity c ≥ 70 are considered and weighted according to their confidence. Similarly, CGC circles corresponding to visual craters having a radius or circularity outside these bounds are not counted as FP.

Figure 4.

Precision and recall of the CGC automatic crater detection method.

Selected results among these four sets of experiments are presented in Figure 4. The selection is based on the 'best precision', the 'best recall' and the 'best average of precision and recall', considering each image independently as well as the fusion (OR) of all possible results.

Figure 5.

Examples of FP superimposed on part of Band 8 (left) and part of ND74 (right). Bright and dark circles represent CGC circles detected on band 8 and ND74 respectively.

As expected, results are worse when all the visual craters are considered in the computation, independently of their circularity and radius. Indeed, the CGC method assumes a minimum circularity of the shape and should not be used for detecting too small circles. As long as the craters satisfy the working hypothesis, the detection of craters showing much evidence is very good (more than 80% for the best fusion result).

Weighting the craters according to the confidence given by the photo‐interpreter always improves the results, which means that the CGC circles mainly correspond to visual craters with high confidence.

In general, recall is better than precision, which means that there are more FP than FN; this could be interpreted as the CGC method generating too many FP. This statement should be qualified: there are 116 FP for the experiment providing the best precision, which corresponds to a density of 5.8 FP per km² or 1.45 FP per 10⁸ pixels, which is quite reasonable compared to the 334 visual craters corresponding to a density of 16.7 craters per km² or 4.17 visual craters per 10⁸ pixels. For the experiment providing the best recall, there were 738 FP (36.9 per km² or 9.22 per 10⁸ pixels), which is still acceptable.

The distribution of FP is not uniform: in particular, one zone presents a lot of false positives, generated either by the shadows of small square houses (on Band 8) or by the houses themselves (on ND74), as can be seen in Figure 5.

The best precision is always obtained for the normalized difference of Band 7 and Band 4―meaning that ND74 is the best image to consider if a low false‐positive rate is sought. The CGC method applied to that image provided 161 FP and 116 TP.

The best recall is a fusion of results obtained on different images. If all visual craters are considered, the fusion involves Band 8, ND83, ND84, ND75 and ND65. On the other hand, if craters outside the domain of CGC are filtered out, Band 8 fused with ND84 and ND75 provides the best results. That experiment provided 738 FP and 223 TP.

The CGC circles were presented to the photo‐interpreter. While some circles (FP) were easily identified as house shadows or the houses themselves (see Figure 5), others were recognized by the photo‐interpreter as potential craters that could have been missed; indeed, this visual inspection is quite tedious and prone to errors. Other examples of TP, FP and FN are shown in Figure 6.

Figure 6.

Examples of TP, FP and FN over part of Band 8 (left) and ND74 (right).

3.6. Conclusions

This study showed that, according to a photo‐interpreter, the naive idea of a crater being seen as a perfect circle on a satellite image is not true; indeed, only 45% of the visually detected craters can be considered as such. Nevertheless, the CGC method applied to Band 8 and fused with various normalized difference indices makes it possible to detect at least half of them, mainly those having a radius larger than 6 pixels and those to which the photo‐interpreter assigns a larger confidence, all this with a reasonable number of false positives. The fusion method providing the best recall should probably be used in the current context, as it is more important not to miss a crater than to avoid additional false positives.

The performance would be enhanced if the method could be adapted to small circle detection, for example, by making the detection parameters dependent on the radius.

4. Segmentation with Gabor filters (Mahamadou Idrissa)

4.1. Introduction

In satellite image interpretation, classification is the operation by which an operator detects different kinds of regions such as forest, urban zones, waterways, and so on. As the scene is a set of points (pixels) with intensity in grey‐scale values in several bands, most methods use these grey‐scale values to determine the kind of terrain. However, a single ground cover usually occupies a number of pixels with some variability in their grey‐scale values. A more satisfactory interpretation of the scene should thus include textural aspects of regions.

In general, texture is characterized by invariance of certain local attributes that are periodically or quasi‐periodically distributed over a region. There are many approaches for analysing image texture. Haralick [14] proposed a set of features (energy, entropy, maximum probability, correlation, etc.) based on grey level co‐occurrence matrices (GLCMs). Some statistical techniques use Markov Random Field models to characterize textures [15]. The spectral approach [16, 17] to texture analysis is referred to as the multi‐channel filtering approach. Textures are characterized by their responses to filters, each one being selective in spatial frequency and orientation.

We present a spectral approach to extract texture features. The textured input image is decomposed into feature images using a bank of Gabor filters. These feature images are used to form feature vectors, each of which corresponds to one dimension of the feature space. Then, we present the fuzzy c‐means‐clustering algorithm used for unsupervised classification of the input pixels based on their associated feature vectors. This method considers clusters as fuzzy sets, while a membership function measures the possibility that each feature vector belongs to a cluster. Finally, we present methods for evaluating how well different textures are separated in feature space, as well as for measuring classification performance. In most applications, the number of classes is unknown; here, we propose a method for choosing the best number of classes and apply it to a real texture representation problem.

4.2. Gabor filters

Gabor filters perform a local Fourier analysis thanks to sine and cosine functions modulated by a Gaussian window. In the complex space, these filters are defined as follows:

$$G(x,y\,|\,X,Y,k_x,k_y) = e^{-\frac{(x-X)^2+(y-Y)^2}{2\sigma^2}}\; e^{j(k_x x + k_y y)} \tag{1}$$

where $x, y$ represent the spatial coordinates while $k_x, k_y$ represent the frequency coordinates. $X$ and $Y$ give the spatial localization of the Gaussian window.

As the signal is discrete, two simplifications are proposed [18]. The first one makes use of the short‐time Fourier transform (STFT), enabling a fixed window size independent of the filter frequency. The second introduces a binomial window as an approximation of the Gaussian. The basis functions of this decomposition are as follows:

$$C_{k,l}(n,m) = W(n,m)\,\cos 2\pi\!\left(\frac{kn}{N+1}+\frac{lm}{M+1}\right), \qquad S_{k,l}(n,m) = W(n,m)\,\sin 2\pi\!\left(\frac{kn}{N+1}+\frac{lm}{M+1}\right) \tag{2}$$

where $W^2(n,m) = \frac{1}{2^{N+M}}\, C_N^{\frac{N}{2}+n}\, C_M^{\frac{M}{2}+m}$ is the (N + 1) × (M + 1) binomial window. The binomial coefficients are given by

$$C_j^i = \begin{cases} \dfrac{j!}{i!\,(j-i)!} & 0 \le i \le j \\ 0 & \text{elsewhere} \end{cases} \tag{3}$$

The indexes $n = -\frac{N}{2}, \ldots, \frac{N}{2}$ and $m = -\frac{M}{2}, \ldots, \frac{M}{2}$ are the window spatial coordinates, with $N$ and $M$ even integers. The filters' selectivity in frequency (expressed in number of cycles per pixel) and orientation (in radians) is derived from the following equation:

$$f = \sqrt{\left(\frac{k}{N+1}\right)^2 + \left(\frac{l}{M+1}\right)^2}, \qquad \theta = \arctan\!\left(\frac{k}{N+1}\,\frac{M+1}{l}\right) \tag{4}$$

with $k = -\frac{K}{2}, \ldots, \frac{K}{2}$, $l = -\frac{L}{2}, \ldots, \frac{L}{2}$ and $K \le N$, $L \le M$. Figure 7 shows a subset of filter functions for a 31 × 31 binomial window with K = 4 and L = 4, while Table 3 identifies the corresponding sine and cosine terms.

Figure 7.

Filter bank for N = M = 30 (grey = 0, white = positive values, black = negative values).

| C0,0 | C0,1 | S0,1 | C0,2 | S0,2 |
|---|---|---|---|---|
| C1,0 | C1,1 | S1,1 | C1,2 | S1,2 |
| S1,0 | C−1,1 | S−1,1 | C−1,2 | S−1,2 |
| C2,0 | C2,1 | S2,1 | C2,2 | S2,2 |
| S2,0 | C−2,1 | S−2,1 | C−2,2 | S−2,2 |

Table 3.

Filter legend: the cosine (C) and sine (S) terms corresponding to the filters of Figure 7.

A set of feature images is obtained by convolving the image I with each filter providing a local ‘Energy’ given by

$$E_{k,l}(n,m) = \left[(C_{k,l}\otimes I)(n,m)\right]^2 + \left[(S_{k,l}\otimes I)(n,m)\right]^2 \tag{5}$$

Here, $\otimes$ denotes the convolution and $k, l$ the frequency‐orientation coordinates.
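
A compact sketch of this filter bank and of the energy images of Eq. (5), assuming a grey‐scale numpy image; scipy's fftconvolve stands in for the convolution, and only one filter of each symmetric pair is kept (13 energy images for K = L = 4, as used in Section 4.5):

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import comb

def binomial_window(N, M):
    """(N+1) x (M+1) binomial approximation of the Gaussian window."""
    n = np.arange(-(N // 2), N // 2 + 1)
    m = np.arange(-(M // 2), M // 2 + 1)
    w2 = np.outer(comb(N, N // 2 + n), comb(M, M // 2 + m)) / 2.0 ** (N + M)
    return np.sqrt(w2)

def gabor_energy_images(img, N=10, M=10, K=4, L=4):
    """Energy feature images E_kl (Eq. 5) of the discrete Gabor
    decomposition (Eq. 2); since E_kl = E_-k,-l, only a half-plane of
    (k, l) pairs is computed."""
    img = img.astype(float)
    W = binomial_window(N, M)
    n = np.arange(-(N // 2), N // 2 + 1)[:, None]
    m = np.arange(-(M // 2), M // 2 + 1)[None, :]
    feats = {}
    for l in range(0, L // 2 + 1):
        ks = range(0, K // 2 + 1) if l == 0 else range(-(K // 2), K // 2 + 1)
        for k in ks:
            phase = 2 * np.pi * (k * n / (N + 1) + l * m / (M + 1))
            C, S = W * np.cos(phase), W * np.sin(phase)
            feats[(k, l)] = (fftconvolve(img, C, mode='same') ** 2 +
                             fftconvolve(img, S, mode='same') ** 2)
    return feats   # N = M = 10 gives the 11 x 11 window of the experiments
```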

4.3. Classification

Many algorithms have been developed for supervised and unsupervised classification. In supervised classification, training sets are needed, whereas unsupervised classification classifies images automatically by finding clusters in the feature space. One of the unsupervised data‐clustering methods is the hard k‐means‐clustering algorithm. It assigns each sample (feature vector) to one and only one of the clusters, assuming that the boundaries between clusters are well defined; this model does not reflect real data, where samples are in general mixed. In the fuzzy c‐means algorithm [19], unlike the k‐means algorithm, samples belong to all clusters with various membership degrees. The membership degree is defined as a function of the distance between the sample and the ith cluster centre. The fuzzy c‐means may be seen as a generalization of the k‐means method with membership values in the interval [0, 1] rather than in the set {0, 1}.
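
For illustration, a minimal fuzzy c‐means in numpy, using the standard update rules rather than the authors' implementation (m is the usual fuzziness exponent):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: X is (n_samples, n_features); returns the
    (n_samples, c) membership matrix U and the (c, n_features) centres."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # random fuzzy partition
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(0)[:, None]     # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))                  # inverse-distance memberships
        U /= U.sum(1, keepdims=True)                  # rows sum to one
    return U, centres
```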

4.4. Classification validity

Classification of data should be of high quality, that is, all samples should have a large membership degree for at least one cluster. This problem is related to how many classes there are in the data. The fuzzy c‐means algorithm requires the number of clusters, though in many applications this information is unknown. A method for measuring performance is therefore needed to compare the goodness of different classification results. Gath has defined an 'optimal partition' of the data into subgroups as follows [20]:

  1. Clear separation between the resulting clusters.

  2. Minimum volume of the clusters.

  3. Maximum number of data points concentrated around the cluster centre.

In the literature, this is known as the cluster validity problem, and a wide variety of cluster validity parameters exists. Gath has proposed the partition density PD, which is more closely related to the geometry of the dataset. The method accounts for variability in cluster shapes and for the number of data points in each of the subsets:

$$PD = \frac{S}{FHV} \tag{6}$$

where $S = \sum_{i=1}^{C}\sum_{j=1}^{n} u_{ij}$ is the fuzzy cardinality of the dataset and $FHV = \sum_{i=1}^{C} \left[\det(\Sigma_i)\right]^{1/2}$ is the fuzzy hyper‐volume of the dataset, $\Sigma_i$ being the fuzzy covariance matrix of cluster $i$.

This parameter is only sensitive to compact substructures in the dataset and does not take the overlap between them into account.

For this work, we have defined a cluster validity parameter that combines Gath's partition density PD (Eq. (6)) and a resemblance function r defined by Bloch in Ref. [21]. This function r is used as a separation measurement between clusters:

$$r = \frac{f(A \cap B)}{f(A \cup B)} \tag{7}$$

where $A$ and $B$ are two fuzzy sets and $f(X)$ is the fuzzy cardinality of a set $X$. The intersection $A \cap B$ is the largest fuzzy set contained in both $A$ and $B$; the union $A \cup B$ is the smallest fuzzy set containing both $A$ and $B$. The new definition of the Compactness and Separation coefficient is

$$nCS = \frac{PD}{\max(r)} \tag{8}$$

The factor max(r) corresponds to the maximum resemblance value measured between two clusters. This normalized factor takes a value close to zero as soon as the clusters are disjoint, and a value close to one for highly resembling clusters. It can also be interpreted as a punishing factor applied to the partition density PD to avoid the over‐segmentation problem.

With this definition, the procedure to select the best number of classes is to cluster the dataset for many different numbers of classes; the best one gives the maximum nCS index.
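
A sketch of this selection procedure, reusing the fuzzy_c_means function above; note that the division form of Eq. (8) and the min/max realization of the fuzzy intersection and union are our reading of the definitions given in the text:

```python
import numpy as np

def ncs_index(X, U, centres):
    """Compactness-and-separation index nCS = PD / max(r) (Eq. 8):
    PD = S / FHV (Eq. 6) with fuzzy covariance determinants, and the
    resemblance r of Eq. (7) computed between cluster membership maps."""
    n, c = U.shape
    S = U.sum()                       # fuzzy cardinality of the dataset
    fhv = 0.0
    for i in range(c):
        d = X - centres[i]
        cov = (U[:, i][:, None] * d).T @ d / U[:, i].sum()  # fuzzy covariance
        fhv += np.sqrt(np.linalg.det(cov))
    r_max = max(np.minimum(U[:, i], U[:, j]).sum() /
                np.maximum(U[:, i], U[:, j]).sum()
                for i in range(c) for j in range(i + 1, c))
    return (S / fhv) / r_max

# model selection: cluster for several class numbers, keep the maximum nCS
# best_c = max(range(2, 9), key=lambda c: ncs_index(X, *fuzzy_c_means(X, c)))
```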

4.5. Experiments

We now apply this texture classification method to a grey‐scale image (Figure 8) taken in November 1998 during an airborne minefield survey in Mozambique [22]. In the surveyed area, minefields are homogeneous regions with no human activity inside, surrounded by agricultural fields. Since the vegetation in these minefields differs from the vegetation outside, texture‐based classification is a good approach for the interpretation of this image. Unfortunately, exact class maps or texture models are not available; the proposed cluster validity parameter thus helps us find the optimal class number, which maximizes the final classification accuracy.

Figure 8.

Aerial image of a minefield area in Mozambique.

Gabor filters are computed with an 11 × 11 binomial window and five discrete values of the frequency coordinates k, l ∈ {−2, −1, 0, 1, 2} (empirically determined). We used only 9 features selected from the 13 feature images. The feature selection scheme is based on a simple method, which consists in sorting the feature images by their amount of energy and picking as many features as needed to reach 95% of the total energy.

The optimal number of clusters found for this image is 5 (Figure 9), and the classification result is presented in Figure 10. As can be seen in Figure 11, the dark area corresponding to the suspected minefield is delineated, as are the other types of vegetation.

Figure 9.

Compactness‐separation function (nCS) for the aerial image.

Figure 10.

Classification in five classes.

Figure 11.

Minefield region classification.

4.6. Conclusions

A fuzzy‐clustering approach to textured image classification has been presented. The texture features are extracted using a set of Gabor filters with different frequencies and orientations.

The fuzzy c‐means algorithm has been successfully used for discriminating different types of textured images, but its drawback is that the number of clusters has to be specified. We thus discussed the use of cluster validity parameters. A modification of the Compactness and Separation validity function is proposed as an index to estimate the optimal number of texture categories, and it has proven adequate for texture discrimination.

5. From DSM to DTM (Charles Beumier)

The bare ground is of interest for applications such as flood‐risk evaluation, environment planning or terrestrial navigation. In the specific case of minefield delineation, a terrain model is a precious piece of information to spot where camps could have been settled and hence where mines could have possibly been laid or buried.

In a mine context, the earth surface model is ideally captured by a remote‐sensing method, resulting in a fast and cost‐effective solution if the area of interest can be flown over. LIDAR and photogrammetry are the two classic options to deliver a dense digital surface model (DSM) with high resolution. Since it is acquired from the air, this model contains elevated objects such as trees and buildings, which usually complicate the automatic extraction of features of the bare soil useful for detection.

By contrast, a digital terrain model (DTM) is a representation of the bare ground surface. It is advantageously derived from the DSM by automatic or semi‐automatic procedures, most of which filter out elevated points and fill the filtered areas by interpolation between the remaining points.

In the specific case of the TIRAMISU project, gigantic DSMs (one or several giga‐pixels) were produced with a spatial resolution of 0.3 m. Fortunately, a rougher level of detail is sufficient for the desired DTM, and we opted for a 3‐m resolution, leading to tractable images in terms of speed and memory.

We implemented a DTM‐from‐DSM approach based on the state‐of‐the‐art solution 'lasground' distributed by rapidlasso GmbH in the LASTOOLS suite and inspired by the work of Ref. [23] on LIDAR point filtering. The method recursively filters out locally elevated points from a point cloud and returns the reduced cloud, normally made of ground points.

We first converted the DSM raster to a point cloud after low‐pass filtering and regularly sub‐sampling the raster to get a reasonable file size. The lasground programme was then run to deliver a reduced point cloud, normally made of ground points. The main parameter ('step') was set to 10 to accommodate rather mountainous/forested scenes. Finally, a raster was created from the resulting point cloud and filled in by interpolation thanks to the las2dem tool of LASTOOLS.
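
The following sketch illustrates the filter‐and‐interpolate idea directly on the DSM raster. It is a deliberately simplistic stand‐in (a grey‐scale morphological opening) for the lasground point‐cloud filter, with the 3‐m cell and the step of 10 taken from the text; parameter names and the slope tolerance are assumptions:

```python
import numpy as np
from scipy import ndimage

def dsm_to_dtm(dsm, cell=3.0, step=10.0, slope_tol=1.0):
    """Crude DTM from a DSM raster: a morphological opening with a window
    of 'step' metres removes locally elevated objects (trees, buildings);
    pixels close to the opened surface are kept as ground, the rest are
    re-filled from the opened surface."""
    win = max(3, int(round(step / cell)))          # window size in pixels
    opened = ndimage.grey_opening(dsm, size=(win, win))
    ground = dsm - opened <= slope_tol             # near-ground mask
    dtm = np.where(ground, dsm, opened)            # fill removed areas
    return ndimage.uniform_filter(dtm, size=3)     # light smoothing

# the normalized DSM used below to localize vegetation and buildings:
# ndsm = dsm - dsm_to_dtm(dsm)
```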

The TIRAMISU project successfully employed the produced DTM. By subtracting it from the DSM, thus obtaining the so‐called normalized DSM (nDSM), the vegetation and buildings were easily localized. Figure 12 shows perspective views of one example of an extracted DTM and the corresponding nDSM.

Figure 12.

Digital surface model and digital terrain model.

6. Multi‐spectral classification (Nada Milisavljevic)

Daedalus, the multi‐spectral scanner used in the project SMART [24], records the electromagnetic spectrum in 12 channels ranging from visible blue to thermal infrared, with a spatial resolution of 1 m. The appearance of objects in the different Daedalus channels may vary widely, depending on the molecular reflection and absorption characteristics of the matter of the objects.

Generally speaking, multi‐spectral remote sensing benefits from the fact that the way different types of material or land cover absorb and reflect solar radiation depends on the wavelength. This means that any type of land cover or material has its own multi‐spectral signature. This signature can be represented as a plot of the fraction of reflected radiation as a function of the incident wavelength. For instance, the reflectance of bare soil increases with increasing wavelength. On the contrary, clear water generally has a low reflectance, reaching its maximum in the blue region of the spectrum and decreasing with increasing wavelength. Finally, the reflectance of vegetation is minimal in the blue and red parts of the spectrum; it increases as the wavelength increases towards the near‐infrared part of the spectrum, becoming significantly higher than in the visible part. Apart from these general tendencies of soil, water and vegetation, differences exist in the spectral signature (i.e., the shape of the reflectance spectrum) between different types of the same class. In other words, the exact shape of the signature of bare soil is a function of the type of soil, its moisture, composition, and so on. The main idea behind our work with the Daedalus data is based on the fact illustrated by these examples: namely, in order to perform land‐cover classification, we analyse the shapes of the multi‐spectral signatures and then assign pixels to classes based on their similar signatures.

There are two main types of multi‐spectral classification. In supervised classification (applied when the knowledge about the data and the types of land‐cover classes is good), classes are related to known features on the ground. In unsupervised classification, the pixels are analysed in an automatic way and divided into spectrally distinct classes (i.e., accumulations of points in a multidimensional feature space). Unsupervised classification is used when we have little prior knowledge about the data, and the classification result is given to the image analyst, who should then give meaning to the obtained classes.

In our case, there is no reliable information on the minefield indicators (as a matter of fact, one goal of our work is to determine useful minefield indicators). In addition, we have to be careful, taking into account the context, and avoid any misclassification that could have dangerous consequences. Consequently, our plan is to proceed as follows:

  • as a first step, to perform an unsupervised classification which should provide us with classes of different land cover;

  • then, based on the available ground‐truth information, we attach meaning to typical and wide‐spread classes of land cover (such as forest, water, grass and cornfield);

  • finally, the remaining regions that are not labelled could be related to land‐cover anomalies, which means that they are possibly dangerous and should be analysed further.

Note that in order not to discard, misclassify or miss any mined area, the first two steps have to be performed slowly and carefully. Another reason for being cautious lies in the fact that the mine density is often low in mine‐affected regions of Croatia. Furthermore, combining these results with data coming from synthetic aperture radar (SAR) (Section 7), we can improve the reliability of the obtained results.

During the application of this unsupervised classification, we take advantage of another piece of information related to multi‐spectral signature analysis—the information provided by comparing the reflectance in the red part (r) and in the near‐infrared part (n) of the spectrum, the so‐called vegetation index. NDVI, the normalized difference vegetation index, is calculated as $\frac{n-r}{n+r}$ and

  • is below 0 for snow and water;

  • is close to 0 for bare soil and rocks;

  • is between 0 and 1/3 in case of lower vegetation;

  • increases above 1/3 with the vitality and density of the plant canopy (e.g., for fresh vegetation and forests).

Water and forests are not potential minefield indicators. Thus, before we apply a land‐cover classification, we can mask areas with NDVI below 0 or above 1/3, taking into account that there is no fresh biomass in the several‐year‐old pioneer vegetation of the mine‐suspected fields.

Thus, the way our unsupervised classification proceeds can be summarized as follows. As a first step, the 12 Daedalus channels are analysed pixel per pixel. The first analysed pixel is assigned to a first class, and its multi‐spectral signature (i.e., its values $p_{1,1}, p_{1,2}, \ldots, p_{1,12}$ in the 12 channels) is stored as the representative of that class, noted $c_{1,1}, c_{1,2}, \ldots, c_{1,12}$. The following pixel, which has values $p_{2,1}, p_{2,2}, \ldots, p_{2,12}$ in the 12 Daedalus channels, is compared with the representative of the first class based on their multi‐spectral distance $t = \sqrt{\sum_{i=1}^{12}\left(p_{2,i}-c_{1,i}\right)^2}$.

The analysed pixel is grouped into the first class if t is lower than or equal to a tolerance set in advance (k), and the representative multi‐spectral signature of that class is then updated (replaced by the average of the signatures of the pixels already in the class and of the newly found pixel). If t is greater than k, the analysed pixel is the founder of a new class, which means that its values initialize the multi‐spectral representative of that class. For all pixels having NDVI > 1/3 or NDVI < 0, a separate rejection class is formed. This procedure is repeated so that each pixel signature is compared with all existing classes: each pixel is grouped into the class having the minimum distance t, or it creates a new class if t is greater than k. Once all pixels are analysed, we obtain a first output of our unsupervised classification, as well as the final multi‐spectral signature representative of each of the classes. At this step, there are two possibilities: our method can either end or go on for a new iteration, where the results of the last iteration are used as a priori classes. As a matter of fact, in the first iteration, the process of finding the multi‐spectral signatures of the classes consists in updating the class representatives each time a new class member is found, which means that misclassifications are possible for pixels belonging to similar vegetation types. After the first iteration, the class representatives are fixed, so they can be used to reclassify the pixels and correct most misclassifications (if not all). Furthermore, in such a way, the strong influence of the choice of the tolerance k on the number of classes is significantly diminished.
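
A compact sketch of this sequential clustering, under the assumptions that the class representative is maintained as the running mean of its members and that the NDVI‐rejected pixels have been removed beforehand:

```python
import numpy as np

def sequential_classify(pixels, k):
    """Sketch of the described unsupervised classifier. pixels is an
    (n, 12) array of Daedalus signatures. A pixel joins the class with
    the minimum distance t if t <= k, otherwise it founds a new class;
    a second pass re-assigns all pixels with the representatives frozen."""
    reps, counts = [], []
    for p in pixels:
        t = [float(np.linalg.norm(p - r)) for r in reps]
        if t and min(t) <= k:
            i = int(np.argmin(t))     # join the closest existing class...
            reps[i] = (reps[i] * counts[i] + p) / (counts[i] + 1)  # ...update rep
            counts[i] += 1
        else:
            reps.append(p.astype(float))   # found a new class
            counts.append(1)
    reps = np.array(reps)
    # second iteration: representatives fixed, pixels reclassified
    labels = np.array([int(np.argmin(np.linalg.norm(reps - p, axis=1)))
                       for p in pixels])
    return labels, reps
```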

The described method is illustrated on the mine‐suspected area shown in Figure 13. Figure 14 contains the results of the method for k = 19 (20 classes) and k = 22 (10 resulting classes), while results obtained for a larger area are given in Figure 15. If the number of classes increases, both the level of detail and the noise (i.e., isolated pixels that are grouped with classes other than their surroundings) increase as well; the former can help identify possible anomalies in land cover. Although the resulting classified data are relatively noisy, we do not smooth the image, in order to preserve possible minefield indicators.

Figure 13.

Daedalus geocoded images of a mine-suspected area in Croatia. Left: visible, right: infrared.

Figure 14.

Results of an unsupervised classification of Daedalus data for the test area in Croatia: 10 classes (left), 20 classes (right).

Figure 15.

Left: RGB composite of geocoded Daedalus channels of a larger area in Croatia; r,g,b = ch5, ch3, ch2. Right: result of an unsupervised classification of Daedalus data, 10 classes.

7. Multisensor data fusion (Nada Milisavljevic)

As shown in detail in Ref. [25], in order to find indicators of mine absence and mine presence as well as to provide image analysts with adequate tools to analyse and interpret potentially mined zones during the process of area reduction, various pieces of information obtained using Daedalus and airborne full‐polarimetric SAR, together with the existing context information (all integrated in a geographical information system), are classified and then combined.

As far as the data fusion module is concerned, we have worked on several numerical fusion methods. The proposed methods are to a large extent original and thus represent a result of the SMART project. Results were obtained on three test sites using the three most promising methods: fusion using belief functions [26] with a global discounting factor, fusion using belief functions with confusion matrices for a more specific discounting, and fusion using fuzzy logic. In the next step, we showed how the results obtained at this basic level can be improved by introducing additional knowledge into the fusion process (e.g., knowledge of the width of the roads, of the existence of rivers, etc.). Finally, as the third step, a spatial regularization at a regional level further improves the results, based on the idea that isolated pixels of one class are quite unlikely to appear within another class. Note that the results are at least as good as those obtained for each class using the best classifier for that particular class. In other words, they are globally better than any separate input detector or classifier, which demonstrates the improvement brought by fusion. The user can influence the choice of the classifiers as well as the choice of some parameters (some supervision is still required in the choice of parameters, for the fuzzy fusion approach in particular).

As an illustration, Figure 16 contains results with fuzzy fusion. For more detailed information, please refer to Ref. [25].

Figure 16.

Results with fuzzy logic fusion: basic version (left), after knowledge inclusion and spatial regularization (right).

8. Hyper‐spectral change detection (Michal Shimoni)

8.1. Introduction

Hyper‐spectral imaging collects information from the visible to the infrared regions of the electromagnetic spectrum by taking a series of images over a range of narrow and contiguous wavelengths. This results in a three‐dimensional hyper‐spectral imaging cube, where the x‐ and y‐axes represent the spatial dimensions of the image and the z‐axis the spectral dimension, whose depth depends on the number of bands. For the visible to short‐wave infrared (SWIR) wavelengths, a continuous reflectance spectrum is formed for each pixel, whereas for the mid‐wave infrared (MWIR) to long‐wave infrared (LWIR) wavelengths, an emittance spectrum is created. These wavelength ranges are, respectively, named in the literature the reflective and the emittance domains.

Hyper‐spectral imagery has been applied with success to many security and defence applications, including the detection of buried mines and occluded improvised explosive devices (IEDs) [27]. In the reflective domain, spectral‐based methods were applied for the detection of surface landmines using anomaly detection [28], matched filtering [29], constrained energy minimization (CEM) [30] and the discrete wavelet transform (DWT) [31]. The presence of buried landmines was investigated using the changes in the spectrum of the covering vegetation or soil upon burying the devices [32–37].

In the emittance domain, the detection of disturbed soil exploits the 'reststrahlen effect', in which the emissivity absorption of a particular soil changes within the 8–10‐μm spectral range [38]. However, the reststrahlen effect has been shown to be quite variable depending on the geographic location and the soil composition, and it requires the use of higher‐order decision fusion (i.e., fusion of thermal, textural and spectral information) to achieve target/clutter discrimination from a single thermal hyper‐spectral image [34]. Our study tries to overcome this limitation by applying spectral‐based change detection of buried IEDs using multi‐temporal thermal hyper‐spectral scenes and two change detection methods based on multivariate statistical algorithms: the Chronochrome (CC) and the class‐conditional change detector (QCC).

8.2. Dataset

A measurement campaign took place on 21 October 2009 at a quarry in the periphery of Quebec City (Canada).

The TELOPS imager (Hyper‐Cam) [39] was located at a distance of 2.5 m from the area of interest. The sensor scanned the site looking downwards at an angle of 26° and with an instantaneous field of view of 1.4 μrad (Figure 17). The spectral images were collected using 660 bands in the 800–1350 cm⁻¹ range. A magnifying telescope was installed to improve the spatial resolution. The detector array was set to 220 × 200 pixels, resulting in an FOV of 17.6° by 16°.

Figure 17.

View of the experimental layout showing the Hyper‐Cam field of view, buried targets, and undisturbed area.

Four aluminium plates measuring 6.5 × 8 × 0.25 inches (Figures 17 and 18) were buried in sandy soil and framed with several rocks (diameter of 10–25 cm). The plates were buried at different depths: 2, 5, 10 and 15 cm (Figure 18), the day prior to the experiment, on 20 October 2009, between 12h00 and 13h00. Part of the site was left undisturbed, as indicated in the upper part of Figure 18.

Figure 18.

Target site showing the location of the buried targets.

Images were collected from 6h40 am on 21 October 2009 in nine sequences, each consisting of the following:

  • 50 acquisitions of a blackbody at a given temperature;

  • 100 acquisitions of the target site;

  • 50 acquisitions of a blackbody at a second given temperature.

Table 4 summarizes the environmental conditions in the nine measured sequences.

| Sequence | Start time | Ambient temp (°C) | Humidity (%) | Ground temp (°C, E = 0.98) |
|---|---|---|---|---|
| IED#1 | 06:40 | 1.3 | 80 | −7.3 |
| IED#2 | 07:55 | 1.8 | 79 | −2.5 |
| IED#3 | 09:10 | 3.1 | 78 | 7 |
| IED#4 | 10:50 | 6.5 | 57 | 8.5 |
| IED#5 | 00:00 | 9.9 | 45 | 16 |
| IED#6 | 13:16 | 9.1 | 45 | 12–14 |
| IED#7 | 14:24 | 9.3 | 44 | 9.5–11 |
| IED#8 | 15:34 | 7.9 | 44 | 3.5–4.5 |
| IED#9 | 17:24 | 4.4 | 54 | −1.3 |

Table 4.

Experiment log.

8.3. Change detection methods

For automatic target detection of the buried objects, two change detection methods were applied: the Chronochrome (CC) and the class‐conditional CC (QCC).

8.3.1. Chronochrome (CC)

For the two hyper‐spectral matrices $x$ and $y$, the observation model with diagonal matrices $T_x$ and $T_y$ is written as follows [40]:

$$x_i = T_x \rho_i + d_x, \qquad y_i = T_y \rho_i + d_y \tag{9}$$

where $\rho_i$ is the spectral reflectance and $d_x$ and $d_y$ are the changes between the observations. The spatial position index on the vectors is dropped for notational convenience:

$$x = T_{xy}\, y + d_{xy}, \qquad T_{xy} = T_x T_y^{-1}, \qquad d_{xy} = d_x - T_{xy} d_y, \qquad \hat{x} = \hat{T}_{xy}\, y + \hat{d}_{xy} \tag{10}$$

And the change residual image (δ) is defined as follows:

$$\delta = x - \hat{x} = x - (\hat{T}_{xy}\, y + \hat{d}_{xy}) \tag{11}$$

Using second‐order statistics, the transformation parameters $\hat{T}_{xy}$ and $\hat{d}_{xy}$ can be estimated using the mean vectors $m_x$ and $m_y$ and the covariance matrices $C_x$ and $C_y$. Next, the covariance matrices are diagonalized:

$$C_x = V_x D_x V_x^T, \qquad C_y = V_y D_y V_y^T \tag{12}$$

where $V_x$ and $V_y$ are the eigenvector matrices, and $D_x$ and $D_y$ are the diagonalized covariance matrices. The Chronochrome (CC) change detection method is written as

$$\hat{T}_{xy}^{(CC)} = C_{xy} C_{yy}^{-1}, \qquad \hat{d}_{xy}^{(CC)} = m_x - \hat{T}_{xy}^{(CC)} m_y \tag{13}$$
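
In numpy, the CC estimation and residual computation can be sketched as follows; this is a minimal version operating on flattened cubes, with illustrative variable names:

```python
import numpy as np

def chronochrome(x, y):
    """Chronochrome change detection (Eqs. 9-13): x and y are two
    co-registered hyper-spectral acquisitions reshaped to
    (n_pixels, n_bands). The affine map y -> x is estimated from
    second-order statistics; the residual delta (Eq. 11) highlights
    pixels whose spectra changed between the two acquisitions."""
    mx, my = x.mean(0), y.mean(0)
    xc, yc = x - mx, y - my
    Cxy = xc.T @ yc / len(x)                 # cross-covariance C_xy
    Cyy = yc.T @ yc / len(y)                 # covariance C_yy
    T = Cxy @ np.linalg.inv(Cyy)             # T_xy = C_xy C_yy^-1 (Eq. 13)
    d = mx - T @ my                          # d_xy = m_x - T_xy m_y
    resid = x - (y @ T.T + d)                # delta = x - x_hat
    return np.linalg.norm(resid, axis=1)     # per-pixel change magnitude
```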

8.3.2. Class‐conditional CC (QCC)

The QCC [41, 42] models the image with a normal mixture that allows the transformation parameters $\hat{T}_{xy}$ and $\hat{d}_{xy}$ to vary between spectral classes. In QCC, each spectrum $x$ is assigned a class index q (where q = 1, 2, …, Q) with a prior probability P(q) of belonging to each respective class. In this method, a class‐conditional probability function p(x|q) is associated with the transformation parameters of Eq. (10):

$$\hat{x}|q = \hat{T}_{xy|q}\, y + \hat{d}_{xy|q} \tag{14}$$

And the change residual image (δ1) is defined as

$$\delta_1 = x - \hat{x}|q = x - (\hat{T}_{xy|q}\, y + \hat{d}_{xy|q}) \tag{15}$$

As the classification method, we applied the stochastic expectation maximization (SEM) [43] after principal component analysis on the reference image (x). The SEM is a quadratic‐clustering method that addresses the bias against overlapping class‐conditional probability density functions and uses a Monte‐Carlo class assignment. Because in thermal imagery the temperature variations cause a large non‐linear radiance offset [43], we selected a stochastic model rather than a linear‐mixing model. We assume that local thermal variation occurs due to changes in the upwelling radiance and the down‐welling illumination from nearby object emissions.
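
A sketch of the per‐class estimation, with scikit‐learn's EM‐fitted Gaussian mixture standing in for the SEM classifier; the small regularization term is an added numerical safeguard, not part of the method:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def qcc(x, y, n_classes=5, n_pca=10):
    """Class-conditional Chronochrome (Eqs. 14-15): the reference cube x
    (n_pixels, n_bands) is clustered after PCA, and a separate CC
    transform is estimated per spectral class q."""
    z = PCA(n_components=n_pca).fit_transform(x)
    q = GaussianMixture(n_components=n_classes, random_state=0).fit_predict(z)
    delta = np.zeros(len(x))
    for c in range(n_classes):
        sel = q == c
        xm, ym = x[sel].mean(0), y[sel].mean(0)
        xc, yc = x[sel] - xm, y[sel] - ym
        T = (xc.T @ yc) @ np.linalg.inv(yc.T @ yc + 1e-6 * np.eye(x.shape[1]))
        resid = x[sel] - ((y[sel] - ym) @ T.T + xm)   # x_hat|q = T_q y + d_q
        delta[sel] = np.linalg.norm(resid, axis=1)
    return delta, q
```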

8.4. Results

The nine sequenced acquisitions resulted in 72 pairs of images, to which the CC was applied. From Figure 19, one can see that the obtained changes are spectral‐based and not thermal‐based: apart from the illumination and shadow effects, the obtained changes did not vary along the day. Nevertheless, because the thermal spectra are influenced by the down‐welling illumination, clear changes were recorded only from sequence #3 (i.e., 09h10 am) onwards. When the upwelling spectra increase in the afternoon, the spectral changes between the disturbed soil and its surroundings are reduced.

Figure 19.

Several CC change detection results.

In Figure 19, the disturbed soil is clearly detected using the sequence images #3, #4 and #5 (i.e., image x = #3). It is detected equally clearly using short temporal baselines (i.e., x = #3, y = #4) and long ones (i.e., x = #1, y = #5).

Next, detection of the buried objects was performed using the QCC method. Following the results acquired using the CC method, the class-conditional transformation parameters were estimated only for the hyper-spectral images x = #3, #4 and #5. We implemented the QCC with Q = Q_SEM (i.e., the number of classes Q determined by the stochastic expectation maximization). In our case study, Q = 5 classes were found: three different types of soil (including the disturbed soil), stones and shadows (Figure 20). The spectral doublet features of quartz located at 1176 and 1123 cm−1 (8.5 and 8.9 μm) are clearly identified in the spectra of the undisturbed soils shown in Figure 21. In the recently disturbed soil, on the other hand, these quartz features are suppressed.

Figure 20.

SEM results using Q = 5.

Figure 21.

Spectra of the three detected soils using SEM.

Figure 22 presents the results of the QCC. In comparison to the CC method, the disturbed soil is clearly identified in the different sequence images when using the QCC. With both methods, the changes were detected equally well using short and long temporal baselines.

Figure 22.

The QCC results.

By merging the QCC results from different sequences (Figure 22) and applying a threshold filter, the detection of the disturbed soil becomes significant, as sketched below.
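
As an illustration of this merging step, the residual-magnitude maps of the different sequence pairs can be normalized, averaged so that persistent changes reinforce each other, and thresholded; the z-score normalization and the threshold value below are arbitrary choices, not those of the experiment.

import numpy as np

def merge_and_threshold(residual_maps, k=3.0):
    """residual_maps : list of (H, W) change-magnitude images from
    different sequence pairs; returns a binary detection mask."""
    z_maps = [(r - r.mean()) / (r.std() + 1e-12) for r in residual_maps]
    fused = np.mean(z_maps, axis=0)   # persistent changes accumulate
    return fused > k                  # threshold filter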

8.5. Conclusions

This research found that spectral-based change detection is a valuable method for the detection of buried IEDs under disturbed-soil conditions. Both change detection methods, the Chronochrome (CC) and the class-conditional CC (QCC), were able to detect changes even using short temporal sequences. Their efficiency increases when using pairs of images with a large diurnal time difference (e.g., morning and evening). A thermal hyper-spectral sensor with very high spectral resolution (such as the TELOPS Hyper-Cam) was found to be a sensitive tool for the detection of small changes in the upper layer of the soil caused by small buried objects.


9. Synthetic aperture radar (Dirk Borghys and Michal Shimoni)

9.1. Introduction

Remote sensing by synthetic aperture radar (SAR) systems provides information that is complementary to information available from optical images. Moreover, in areas under risk where rapid land‐cover mapping and suspected mined area reduction are required, the advantages of SAR offering cloud penetration and day/night acquisition are evident in comparison to optical data.

The current chapter summarizes results obtained using different SAR modalities to help mined-area reduction.

All the results presented in this chapter were obtained by automatic image processing. They could be further improved by introducing human intervention in a semi‐automatic approach.

9.2. Properties of SAR images

The aim of this paragraph is to provide a very brief overview of the principal properties of SAR images. For a more detailed and rigorous explanation, the interested reader is referred to the many books, websites and online tutorials on SAR, for example, Refs. [44–48].

In SAR imaging, an electromagnetic wave is emitted towards the earth surface where it is scattered by the elements on that surface. The energy scattered into the direction of the sensor is then measured and converted into an image. Usually, the emitter and the receiver are located on the same platform (airplane, satellite or UAV). This is called mono‐static SAR, and in this case, the radar backscattering is measured.

The backscattered energy depends on the properties of the scattering surface (geometrical properties, dielectric constant, etc.) as well as on the characteristics of the SAR system (wavelength, incidence angle, polarization, imaging mode, etc.). SAR operates in the microwave domain at different frequencies (Table 5). In general, each frequency has its advantages; the longer the wavelength, the deeper the penetration of the SAR energy.

Band   Wavelength (cm)   Frequency (GHz)   Main applications
X      2.4–3.8           12.5–8            Urban
C      3.8–7.5           8–4               Land cover, soil moisture and ice
S      7.5–15            4–2               Security, man-made targets
L      15–30             2–1               Forest and soil moisture
P      30–100            1–0.3             Tomography

Table 5.

Wavelength/frequency bands used in SAR remote sensing.

The geometrical properties determine the type of scattering that occurs on the surface. For a given SAR system, the type of scattering and the dielectric constant of the scattering surface (determined by the material type and humidity) determine the amount of backscattered energy and thus the intensity of the image. For a given type of land cover, the type of backscattering depends on the frequency of the SAR system.

In a SAR system, the emitted and received radar waves are polarized. Usually, the SAR system emits either horizontally (H) or vertically (V) polarized waves, and the receiver can capture either H or V waves, or both. This leads to four possible polarization configurations, denoted HH, HV, VH and VV. A fully polarimetric system (PolSAR system) acquires all four configurations simultaneously. In PolSAR, the tight relation between the physical properties of natural media and their polarimetric behaviour leads to highly descriptive results that can be interpreted by analysing the underlying scattering mechanisms [49, 50].

Different SAR images acquired from a slightly different location can, under certain conditions, be combined into a set of interferometric couples. SAR interferometry provides information about the 3D structure of the terrain and can be used for generating digital elevation models (DEMs) [51]. Interferometry can also be used for detecting subtle changes by a method called coherent change detection (CCD). A complete chapter of this book is devoted to the subject.

The combination of interferometric and polarimetric information (PolInSAR) provides information on scattering type and 3D structure of the image objects [52, 53]. PolInSAR data provide information about the structure and the complexity of the observed objects. When utilized concurrently, these different capabilities allow substantial improvements in land‐cover determination [54, 55].

Table 6 gives an overview of past, current and planned civilian SAR remote-sensing satellites used for applications over land. The table shows that most satellites operate in C- or X-band, that metric-resolution satellite SAR images have been available since 2007 and that several SAR systems offer polarimetric capabilities. The column labelled 'Min. SR' gives the best spatial resolution available from each satellite. Usually, this best spatial resolution is obtained only for single-band spotlight-mode images; the best obtainable spatial resolution in full-polarimetric mode is usually about three times coarser. The images from some of the satellites mentioned in the table are freely available (PALSAR, Sentinel-1). Furthermore, in an attempt to let a larger community benefit from satellite SAR images, space agencies freely offer SAR toolboxes that include most of the relevant processing tools and are able to process most of the available SAR satellite images [56–59].

Satellite                    Lifetime         Agency           Band   Polarization   Min. SR
ERS-1                        1991–2000        ESA (EU)         C      VV             25 m
JERS                         1992–1998        NASDA (JP)       L      HH             25 m
ERS-2                        1995–2012        ESA (EU)         C      VV             25 m
RADARSAT-1                   1995–            CSA (CA)         C      HH             10 m
ENVISAT-ASAR                 2002–2012        ESA (EU)         C      HH/HV/VV       15 m
ALOS-PALSAR                  2006–            JAXA (JP)        L      PolSAR         10 m
TerraSAR-X                   2007–            DLR (GE)         X      PolSAR         1 m
TanDEM-X                     2008–            DLR (GE)         X      PolSAR         1 m
COSMO-SkyMed constellation   2007–            e-GEOS (IT)      X      PolSAR         1 m
RADARSAT-2                   2008–            MDA (CA)         C      PolSAR         1 m
Sentinel-1                   2014–            ESA (EU)         C      Dual-Pol       5 m
NovaSAR-S                    2015–2022        SSTL-EADS (UK)   S      PolSAR         6 m
RADARSAT constellation       2018 (planned)   CSA (CA)         C      PolSAR         3 m

Table 6.

Overview of civilian SAR remote‐sensing satellites.

9.3. Problem statement, dataset and region of interest

The SAR dataset used in this chapter was acquired in the frame of the SMART project [4]. SMART was dedicated to helping mined-area reduction by providing methods and integrated tools for exploiting airborne and spaceborne data.

In the project, a large dataset of airborne images was acquired over four test areas in Croatia. The flight campaign took place in August 2001 and was performed by the German Aerospace Agency (DLR). Metric resolution multi‐spectral images (Daedalus line scanner with 12 bands in VIS‐LWIR), very high resolution (5‐cm) panchromatic visual images and metric resolution multi‐frequency SAR data from the E‐SAR system of the DLR were acquired. The SAR data consist of PolSAR data in L‐ and P‐band as well as HH‐ and VV‐polarized data in X‐ and C‐band (Table 7).

E-SAR parameter            X-band    L-band           P-band
Central frequency          9.6 GHz   1.3 GHz          450 MHz
Polarization               VV        HH, VV, HV, VH   HH, VV, HV, VH
Number of looks            1         1                1
Spatial resolution         1.5 m     2 m              4 m
Altitude above sea level   3160 m    3160 m           3160 m
Radiometric resolution     <2 dB     <2 dB            <2 dB
Incidence angle            50°       50°              55°
Azimuth beam width         17°       18°              30°
Elevation beam width       30°       35°              60°

Table 7.

Characteristics of the E‐SAR data (©DLR) acquired in the SMART project.

The land‐cover mapping serves for ‘suspected area’ reduction and not for mine detection. The end users defined nine land‐cover classes of interest: residential areas, roads, forests, wheat and corn fields, pastures, abandoned areas, bare soil, and rivers. These remote‐sensing data were mainly used for identifying and detecting indicators of mine presence or mine absence. A list of such indicators was compiled in the project and the different available data were investigated for their adequacy for detecting the defined indicators. Table 8 shows the indicators for which SAR data were found to improve the detection capability. The table shows that a combination of image classification and detection of linear features contributes to the detection of most indicators. For detecting power supply poles, a dedicated detector was developed by DLR, which exploits the specific effect of dihedral scattering on PolSAR images. Note that this is an example where objects much smaller than the resolution cell can be detected using SAR image processing. The relative 3D structure of the terrain can be easily extracted from interferometric image couples [51].

Class   Indicator of mine presence (P) or absence (A)                                     Methods/tools
C1      Cultivated land (A)                                                               Classification
C2      Asphalted roads (A)                                                               Classification and detection of linear features
C3      Agricultural areas that are no longer in use (P)                                  Classification and change detection
C4      Edges of forests (P)                                                              Classification
C5      Trenches and man-made embankments (P)                                             Detection of linear features; interactive enhancement and extraction
C6      Soft edges of hardtop roads (P)                                                   Classification and detection of linear features
C7      Crossroads, especially crossings of main roads with tracks no longer in use (P)   Detection of linear features, classification
C8      River banks (P)                                                                   Detection of linear features, classification
C9      Power supply poles (P)                                                            Dedicated detector based on polarimetric features
C10     Hilltops and elevated plateaus (P)                                                DEM generation, SAR interferometry

Table 8.

Indicators of mine presence (P) or absence (A) for which SAR data can contribute to the detection.

The remainder of the current chapter discusses how the different SAR modalities contribute to increased classification accuracy for the other indicators. The research area is located in Glinska Poljana (Croatia), a post-war land mine-affected zone (Figure 23). The Croatian Mine Action Centre found SAR to be the most valuable remote-sensing source due to its high acquisition frequency and its independence from cloud cover. Training and validation datasets were constructed using the collected ground-truth data. Due to the risk incurred in mine-suspected areas, the ground-truth survey could be carried out in only a small part of the research zone and its periphery. Figure 23 presents the locations of the training and the validation sets superimposed on the L-band SAR image.

Figure 23.

Training and validation dataset locations superimposed on the L‐band E‐SAR.

Based on the confusion matrix determined from the validation set, four statistics are calculated for the validation: the global Kappa coefficient (κ) [60], the overall accuracy (OA), the producer's accuracy (PA) and the user's accuracy (UA) [61]. A sketch of their computation is given below.
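
As a hedged sketch, these four statistics can be computed from the confusion matrix as follows, taking rows as the reference (validation) classes and columns as the predicted classes.

import numpy as np

def accuracy_statistics(conf):
    """conf[i, j] : number of validation pixels of true class i
    assigned to class j."""
    c = np.asarray(conf, dtype=float)
    n = c.sum()
    oa = np.trace(c) / n                                   # overall accuracy (OA)
    pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n ** 2    # chance agreement
    kappa = (oa - pe) / (1.0 - pe)                         # global Kappa coefficient
    pa = np.diag(c) / c.sum(axis=1)                        # producer's accuracy per class
    ua = np.diag(c) / c.sum(axis=0)                        # user's accuracy per class
    return oa, kappa, pa, ua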

9.4. Fusion of single‐channel SAR and optical data

For investigating the potential of 'future' satellite remote-sensing data for land-cover classification related to mined-area reduction, the available airborne data were used to simulate such satellite data. The Daedalus images were sub-sampled to the spectral bands of a typical optical satellite (Pleiades). A four-channel optical image was thus simulated (B1: 0.45–0.52 µm, B2: 0.52–0.60 µm, B3: 0.63–0.69 µm, B4: 0.76–0.90 µm), as sketched below. For the SAR data, VV-polarized X-band data were used.
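
A minimal sketch of this band simulation is given below: the narrow airborne bands are averaged into the four broad bands according to their centre wavelengths. The Daedalus band centres are not listed in this chapter, so centres_um is an input the reader must supply, and the simple averaging (rather than weighting by the true spectral response) is an assumption.

import numpy as np

# Simulated Pleiades-like bands (µm), as defined in the text.
BROAD_BANDS = {"B1": (0.45, 0.52), "B2": (0.52, 0.60),
               "B3": (0.63, 0.69), "B4": (0.76, 0.90)}

def simulate_broad_bands(cube, centres_um):
    """cube : (H, W, n_bands) airborne image;
    centres_um : centre wavelength of each narrow band, in µm.
    Assumes at least one narrow band falls inside each broad band."""
    centres = np.asarray(centres_um)
    out = []
    for lo, hi in BROAD_BANDS.values():
        sel = (centres >= lo) & (centres < hi)    # half-open to avoid double counting
        out.append(cube[..., sel].mean(axis=-1))  # average the selected narrow bands
    return np.stack(out, axis=-1)                 # (H, W, 4) simulated image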

For the land‐cover classification from the single‐channel SAR data, a feature‐based supervised classification method was developed and the information available from the single‐channel X‐band SAR image was augmented by combining intensity information with spatial information. Figure 24 presents a schematic overview of the developed approach.

Figure 24.

Processing scheme for land‐cover classification from single‐channel SAR data.

The introduced spatial information consisted of a combination of textural features derived from grey level co‐occurrence matrices (GLCMs) [62] and the results from a detector of bright and dark lines [63]. The line detector was applied after speckle reduction [64]. Feature‐level fusion based on stepwise logistic regression (LR) [65] was applied to obtain the classification map. The stepwise LR implicitly performs a feature selection for each class to be detected [66].
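
By way of illustration, the sketch below computes a few GLCM texture measures over a sliding window with scikit-image; the window size, offsets and selected properties are arbitrary choices and not necessarily those used in the project. These feature planes, together with the line-detector outputs, would then serve as predictors in the per-class stepwise logistic regressions.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(img8, win=9,
                          props=("contrast", "homogeneity", "energy")):
    """img8 : 2-D uint8 image (e.g. a rescaled SAR amplitude image).
    Returns an (H, W, len(props)) stack of texture features."""
    half = win // 2
    h, w = img8.shape
    feats = np.zeros((h, w, len(props)))
    for i in range(half, h - half):
        for j in range(half, w - half):
            patch = img8[i - half:i + half + 1, j - half:j + half + 1]
            glcm = graycomatrix(patch, distances=[1],
                                angles=[0.0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            for k, prop in enumerate(props):
                # Average the property over the two offset directions.
                feats[i, j, k] = graycoprops(glcm, prop).mean()
    return feats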

Figure 25 shows the speckle-reduced X-band VV-polarized SAR amplitude image of part of the region of interest. The results of the two line detectors and of the extraction of textural parameters are shown in Figures 26 and 27, respectively. Figure 28 shows the classification results obtained from the SAR image. From these results, it appears that the method based on a single channel can detect the location of forests and edges as well as large parts of the roads and some of the buildings. It also provides an idea of the location of abandoned land and of fields in use with vegetation; the distinction between these two classes is difficult, however, because they are mixed. Furthermore, using only the single-band SAR image, the distinction between water and radar shadow is very difficult because both surface types produce very low backscattering. Table 9 presents the global classification accuracy as well as the user's and producer's accuracy per class.

Figure 25.

Speckle‐reduced VV‐polarized, X‐band E‐SAR image (©DLR).

Figure 26.

Results of the dark and bright line detectors.

Figure 27.

Images of the various textural features.

Figure 28.

Classification map obtained from the single‐channel X‐band SAR image.

                         SAR            Optical        Fusion
Class name               PA     UA      PA     UA      PA     UA
Roads                    0.47   0.41    0.86   0.28    0.86   0.30
Water                    0.28   0.05    0.93   0.90    0.94   0.91
Residential areas        0.08   0.98    0.32   0.17    0.41   0.30
Abandoned land           0.66   0.78    0.74   0.82    0.81   0.89
Fields with vegetation   0.60   0.51    0.74   0.81    0.81   0.80
Trees and shrubs         0.58   0.55    0.88   0.83    0.88   0.86
Total accuracy           0.59           0.76           0.82

Table 9.

Classification results obtained from SAR processing, optical image processing and fusion.

For building a classification map from the optical data, a minimum distance classifier was applied. Decision-level fusion was used to combine the classifications from the SAR and optical data. Classification results obtained from the optical data and by fusion of optical and SAR are also presented in Table 9. The table shows that while the optical image leads to better classification results than the single-channel SAR image, the fusion of both further improves the classification accuracy. This is particularly the case for the class 'abandoned land', which is a very important indicator of possible mine presence.
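
As one plausible realization of such a decision-level fusion (the exact operator used in the project may differ), per-class probability maps from the two classifiers can be combined with a weighted-average rule before taking the arg-max.

import numpy as np

def fuse_decisions(p_sar, p_opt, w_sar=0.5):
    """p_sar, p_opt : (H, W, n_classes) per-class probability maps
    from the SAR and optical classifiers."""
    fused = w_sar * p_sar + (1.0 - w_sar) * p_opt  # weighted-average rule
    return fused.argmax(axis=-1)                   # fused classification map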

9.5. PolSAR and PolInSAR

In recent years, several polarimetric decomposition methods have been developed for various applications. Each combines the PolSAR or PolInSAR information to characterize a certain type of scattering process, determined from complementary information such as experience or physical grounds [67], or through a systematic selection and reduction process [68]. In very complex scenes, it is useful to exploit the discriminative power offered by the combination of a large number of these features.

At RMA, we investigated the complementarity of different frequencies and the fusion of PolSAR and PolInSAR data for land‐cover classification in mine‐covered areas. Several PolSAR and PolInSAR features are extracted in the imaging process, each combining amplitude, phase and correlation information in order to extract specific characteristics of the scene. For land‐cover classification, two levels of fusion are applied: feature and decision levels. In the feature‐level fusion, a logistic regression (LR) is used for feature selection and for fusing the selected features by optimizing the log‐likelihood function. The obtained probability images are then combined using a soft‐decision fusion method, the neural network (NN), in order to obtain the final classification results.

9.5.1. Feature extraction

A polarimetric SAR system measures the electro-magnetic signal and its polarization state backscattered from the scene. The interaction of the transmitted wave with a scattering object transforms its polarization. Discrimination between different types of mechanisms is possible using the polarimetric signatures, which depend strongly on the scattering process. It has been demonstrated that the inclusion of SAR polarimetry can lead to a significant improvement in the quality of the data analysis in comparison with conventional single-polarization SAR data [69]. In our study, a total of 76 different PolSAR and PolInSAR features were extracted to describe the different properties of the land-cover objects and to feed the fusion processes. Nine PolSAR features and nine PolInSAR features are presented in Figures 29 and 30, respectively; one classic decomposition-based feature is sketched after Figure 30. The figures demonstrate the variation and the inherent complementarity between the different features. For a detailed description of the feature set used, see Ref. [70] and the references therein.

Figure 29.

PolSAR features dataset extracted using different decomposition methods and by calculating the PolSAR coherences.

Figure 30.

PolInSAR features dataset extracted using different decomposition methods and by calculating the PolInSAR optimal coherences.
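
As one classic example of how decomposition-based features are derived from the complex scattering channels, the following sketch computes the Pauli decomposition intensities; the feature set of Ref. [70] is much broader than this single example.

import numpy as np

def pauli_intensities(s_hh, s_vv, s_hv):
    """s_hh, s_vv, s_hv : complex 2-D single-look scattering channels."""
    alpha = (s_hh + s_vv) / np.sqrt(2.0)  # odd-bounce (surface) scattering
    beta = (s_hh - s_vv) / np.sqrt(2.0)   # even-bounce (dihedral) scattering
    gamma = np.sqrt(2.0) * s_hv           # volume (cross-polarized) scattering
    # The three intensities are often displayed as an RGB composite.
    return np.abs(alpha) ** 2, np.abs(beta) ** 2, np.abs(gamma) ** 2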

9.5.2. Fusion methodology

In this study, four independent sources of information are considered: L-band PolSAR, L-band PolInSAR, P-band PolSAR and P-band PolInSAR. The image processing resulted in the extraction of a large feature set: 25 L-band PolSAR features, 25 P-band PolSAR features, 13 L-band PolInSAR features and 13 P-band PolInSAR features [70].

The end users identified nine different land-cover classes using a ground-truth survey: roads, residential areas, wheat crop fields, corn crop fields, forests, rivers, bare soils, pastures and abandoned areas (no trees or shrubs). The classification process raises two fundamental technical problems. The first consists in identifying the parameters that capture the information contained in the radar signals and allow the different land-cover classes to be distinguished. The second consists in selecting a technique that fuses these features efficiently enough to distinguish the land-cover classes.

Figure 31 presents the selected methodological scheme; a sketch of this two-level scheme follows the figure. The neural network (NN), which is a non-linear method, is selected as the classifier because linear classifiers do not provide satisfactory results when fusing data from different sources [71, 72]. However, the NN cannot be applied directly to the original 76 features because the required training set would be larger than the one available. To overcome this shortcoming, we separated the fusion process into two levels. In the feature-level fusion, a logistic regression (LR) [65] is used to combine the different features extracted from the PolSAR and the PolInSAR data; the LR serves as an intelligent feature-reduction method. The LR is a linear classifier that is applied separately to each information source and to each of the nine classes, yielding nine probability images per information source. The obtained probability images are then fused using a multi-layer perceptron neural network with one hidden layer [73, 74] to yield the final classification.

Figure 31.

The fusion process scheme.
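
The following is a hedged sketch of this two-level scheme, with scikit-learn estimators standing in for the implementations used in the project: one logistic regression per information source produces class-probability images, and a one-hidden-layer perceptron then fuses them into the final label map.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def two_level_fusion(sources, labels_train, train_idx):
    """sources : list of (n_pixels, n_features) feature sets, one per
    information source (e.g. L-PolSAR, L-PolInSAR, P-PolSAR, P-PolInSAR);
    labels_train : class labels of the training pixels;
    train_idx : indices of the training pixels."""
    prob_images = []
    for feats in sources:
        lr = LogisticRegression(max_iter=1000)        # feature-level fusion
        lr.fit(feats[train_idx], labels_train)
        prob_images.append(lr.predict_proba(feats))   # one probability image per class
    stacked = np.hstack(prob_images)                  # input to the decision level
    nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    nn.fit(stacked[train_idx], labels_train)          # soft-decision fusion
    return nn.predict(stacked)                        # final classification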

9.5.3. Results

As mentioned before, only the probability images derived by the LR using different combinations of SAR features are used as input to the NN method.

The NN results per class using only the L-band or only the P-band E-SAR dataset are presented in Tables 10 and 11, respectively. Table 10 shows that high UA and PA were obtained for the classes 'forest', 'wheat' and 'abandoned area' using only the L-PolSAR or only the L-PolInSAR dataset. For the other classes, due to the low PA and UA and the lack of a clear pattern in the results, it is difficult to identify which feature set is more significant, the PolSAR or the PolInSAR. The highest PA (98.87 and 91.10%) and UA (98.71 and 88.38%) were obtained for the 'abandoned area' class using the NN classifier on the L-band PolSAR and PolInSAR probability datasets, respectively.

                     L-PolSAR           L-PolInSAR
Land-cover class     PA (%)   UA (%)    PA (%)   UA (%)
Residence            31.88    33.55      1.25     4.44
Road                  6.28     7.96     24.37    24.25
River                23.92    31.82     13.49    32.89
Forest               88.00    72.33     94.58    71.67
Bare soil            41.00    50.00     67.55    85.60
Abandoned area       98.87    98.71     91.10    88.38
Wheat                87.45    55.18     94.40    77.62
Corn                 30.32    16.05     43.15    36.01
Pasture               0.00     0.00     19.43    17.28

Table 10.

The NN results per class using E‐SAR L‐Band datasets.

                     P-PolSAR           P-PolInSAR
Land-cover class     PA (%)   UA (%)    PA (%)   UA (%)
Residence            95.62    52.85     15.94    17.11
Road                 36.93    37.69     14.32    12.98
River                21.94    40.00      7.01    16.18
Forest               83.13    62.66     38.00    65.73
Bare soil            82.76    49.57     39.15    17.97
Abandoned area       99.84    82.27     89.16    62.05
Wheat                51.35    76.88     91.51    40.00
Corn                  0.00     0.00      0.00     0.00
Pasture               0.26     5.88      0.00     0.00

Table 11.

The NN results per class using E‐SAR P‐band.

High UA and PA are obtained for the 'abandoned area' class using only the P-PolSAR or only the P-PolInSAR dataset (Table 11). For the other classes, lower UA and PA are obtained using the P-PolInSAR probability sets than using the P-band PolSAR probability sets. With the NN classifier, a similar trend was obtained for the 'abandoned area' class, with the highest PA (99.84 and 89.16%) and UA (82.27 and 62.05%) using the P-band PolSAR and PolInSAR probability datasets, respectively.

The 'pasture' class obtained very low PA and UA in both the L-band and P-band datasets. Analysis of the confusion matrices shows that the 'pasture' class was mainly confused with bare soil. As expected, the signal at L- and P-band frequencies penetrates the pasture areas, which are covered with very short scattered grass, as it does bare soil.

A comparison between the results obtained using a single dataset (Tables 10 and 11) and the results after fusion (Table 12) shows that the NN fusion process significantly improves the land-cover classification.

Land-cover class    Best fusion set           PA (%)   UA (%)
Residences          All-SAR                   90.31    68.81
Roads               All-SAR                   41.84    62.67
River               All-SAR                   61.64    81.73
Forest              All-SAR                   91.60    88.08
Bare soil           All-SAR                   89.32    79.92
Abandoned areas     All-SAR                   99.51    99.35
Wheat               All-SAR                   99.03    97.53
Corn                L-PolSAR + L-PolInSAR     65.31    66.27
Pastures            L-PolInSAR + P-PolSAR     31.35    29.58

Table 12.

The highest PA and UA results per class obtained using the NN classifier.

The best results are obtained using the 'All-SAR' probability datasets for seven of the nine land-cover classes (Table 12). The highest PA and UA were obtained for all the classes using fused PolSAR and PolInSAR datasets. These results emphasize again the complementarity of the different SAR frequency datasets and of the PolSAR and PolInSAR information. The results for the 'abandoned area' class are the highest, with a PA of 99.51% and a UA of 99.35%. The results for this class are highly important for the Croatian mine action authorities (i.e., the end users) because it is considered suspected area. Moreover, it was demonstrated that this class is difficult to distinguish using a pixel-wise classification based on optical data or a single SAR dataset. Applying the LR and multinomial logistic regression (MNLR) methods to the PolSAR data alone resulted in a lower user accuracy (80%) and producer accuracy (66%) for the 'abandoned areas' in Glinska Poljana [66] than the results presented in this study. Using PolSAR and multi-spectral data, the 'abandoned areas' were classified with a PA of 84% using a Dempster-Shafer (DS) method and with a PA of 89% using a fuzzy method [25].

After the fusion process, the road and river classes are still confused. However, those classes can easily be distinguished by a photo-interpreter. Table 13 presents the overall accuracies and the Kappa values obtained using the NN for the land-cover classifications. The results show that the accuracy of the complete land-cover classification improves when fusing datasets from different SAR frequencies. A Kappa of 0.809 and the highest accuracy (84.00%) are obtained using the All-SAR dataset. The results emphasize again the complementarity of the PolSAR and PolInSAR information and of data acquired at different frequencies.

LR probability dataset      Overall accuracy (%)   Kappa
L-PolSAR                    52.13                  0.437
P-PolSAR                    59.72                  0.522
L+P-PolSAR                  67.48                  0.618
L-PolInSAR                  61.04                  0.539
P-PolInSAR                  37.69                  0.522
L+P-PolInSAR                66.16                  0.601
L-PolSAR + L-PolInSAR       63.87                  0.576
P-PolSAR + P-PolInSAR       65.55                  0.596
L-PolSAR + P-PolInSAR       63.22                  0.569
L-PolInSAR + P-PolSAR       69.96                  0.646
All-SAR                     84.00                  0.809

Table 13.

Overall accuracy for land‐cover classification obtained using the NN classifier.

Figure 32 presents the land-cover classification obtained with the All-SAR dataset and the NN classifier. Many false alarms are present in the class 'road'. Note that we did not include 'shadow' as a class because this phenomenon was not visible in the original image scene. However, common artefacts such as shadow and layover are also part of SAR images, and future improvements should include this class.

Figure 32.

NN land‐cover classification of Glinska Poljana obtained using All‐SAR dataset.

9.6. Conclusion

The results presented here show that a feature-based fusion method for classifying SAR image data is a valuable approach. When only a single-channel SAR image is available, a rough land-cover map can be extracted by combining the available intensity information with spatial information such as line-detection results and textural features. Fusion with a classification obtained from optical images provides a significant improvement in classification accuracy with respect to using optical data alone.

Features from different SAR frequencies were found to be complementary and adequate for land-cover classification. It was also found that PolInSAR features are complementary to PolSAR features and essential for producing an accurate classification of complex scenes composed of very diverse land-cover classes. Fully polarimetric and interferometric SAR data proved to produce valuable remote-sensing information and can be used to obtain accurate information for areas under threat of mine presence.

The current availability of high-resolution SAR image data from various satellites on the one hand, and freely available SAR-processing toolboxes on the other, makes it easier to apply an approach such as the one described in this chapter. Nevertheless, an efficient exploitation of the full information richness offered by SAR images, and of its complementarity with information from optical data, still requires a good understanding of SAR image phenomenology.


10. Conclusion: the future of remote sensing in mine action

The information collected by remote sensing can help mine action in many ways other than surveying a suspicious area to decide where to send the clearance teams. In some mined countries, the availability of up-to-date maps can be poor. Remote sensing can be used to create recent maps to help the planning of mine action operations. An analysis of the relief and the land cover can help understand the road network and determine the best route to access a given location. Deciding which areas to clear first is often a difficult decision. Remote sensing can provide input regarding the risk of the presence of mines or the vulnerability of the local population. In case of major disasters such as flooding, remote sensing can be used to assess changes in the mine contamination.

For decades, airplanes and satellites were the only vectors for obtaining remote-sensing data. Drones now offer new opportunities. With the reduction in size and cost of cameras, they drastically change the way remote sensing can be used. While the autonomy of smaller drones is still limited, they allow fast deployment, which can improve the reactivity of mine action organizations.

With drones offering a good solution for small-range surveys and satellites for large areas, the question now concerns the continued relevance of conventional aircraft. Do they still have a place between drones and satellites? The U.S. Air Force is currently training more drone pilots than fighter pilots. Will conventional aircraft slowly be phased out of remote sensing?

Acknowledgments

The authors of this chapter would like to thank the Belgian Ministry of Defence and the European Union.

Section 2: The aerial campaign was supported by financial aid of the US State Department, through the International Trust Fund for Demining and Mine Victims Assistance (ITF), and kindly shared by HCR Centre for testing, development and training Ltd.

Section 3: The research leading to this information and these results has received funding from the European Community's Seventh Framework Programme (FP7/2007–2013) under grant agreement n° 284747 (TIRAMISU). Noveltis and IGEAT, two partners of TIRAMISU, contributed to this crater detection study. Noveltis pre-processed the data (georeferencing and atmospheric correction). IGEAT made the visual interpretation and provided valuable information for the data preparation and the discussion of the results.

Sections 6, 7 and 9: The authors thank all the researchers of the project SMART, co‐funded by the European Commission (IST‐2000‐25044) and involving the following partners: CROMAC, DLR, ENST, IXL, Renaissance ASBL (RMA), RST, TRASYS, ULB‐IGEAT and Zeppelin.

References

  1. GICHD. A Guide to Land Release: Technical Methods. Geneva: GICHD; April 2011. ISBN 2-940369-42-9
  2. Van Genderen JL, et al. Airborne Detection of Landmines: A Review of Techniques and Some Practical Results. Disarmament & International Security. Germany; 1998
  3. Maathuis B. RS based detection of minefields. Geocarto International. 2003;18:51-60
  4. Yvinec Y. EU project of remote detection: SMART in a nutshell. In: Proceedings of Robotics and Mechanical Assistance in HuDem and Similar Risky Interventions. Brussels, Belgium; 2004
  5. Bajić M, et al. Research of the Mine Suspected Area. CROMAC; 2011. ISBN 978-953-99879-7-6
  6. Bajić M. Airborne detection of UXO, reconstruction of former war zone. In: EOD FUTURE in the Light of NATO EOD Demonstrations and Trials 2012, A Book of Papers. 2013. pp. 33-41
  7. Krtalic A, Bajić M. Upgrade of the advanced intelligence decision support system for mine action in the TIRAMISU project. In: 11th IARP Workshop HUDEM, Book of Papers; 25 April 2013; Šibenik, Croatia; 2013. pp. 83-86
  8. Lacroix V, Vanhuysse S. Indicators of mine presence: Focus on trenches. South-Eastern European Journal of Earth Observation and Geomatics. 2014;3(2):635-639
  9. Vanhuysse S, et al. Object-based image analysis for detecting indicators of mine presence to support suspected hazardous area re-delineation. South-Eastern European Journal of Earth Observation and Geomatics. 2014;3(2):525-529
  10. Lacroix V, et al. Feature extraction using the constrained gradient. ISPRS Journal of Photogrammetry and Remote Sensing. 1998;53:85-94
  11. Hatfield Consultants. UXO predictive modeling in MMG LXML, Laos (Sepon) mine development area. Technical Report 1791.D6.1. East Lansing, MI: Hatfield Consultants; 2014
  12. Lacroix V, Vanhuysse S. Crater detection using CGC—A new circle detection method. ICPRAM. 2015;(1):320-327
  13. Akinlar C, Topal C. EDCircles: A real-time circle detector with a false detection control. Pattern Recognition. 2013;46
  14. Haralick RM. Statistical and structural approaches to texture. Proceedings of the IEEE. 1979;67:786-804
  15. Cross GR, Jain AK. Markov random field texture models. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1983;5:25-39
  16. Bovik AC, Clarke M, Geisler WS. Multichannel texture analysis using localized spatial filters. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1990;12:55-73
  17. Randen T, Husoy JH. Filtering for texture classification: A comparative study. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1999;21(4):291-310
  18. Neyt X, Acheroy M, Lemahieu I. Directional adaptive image restoration. In: Proceedings of the International Workshop on Image and Signal Processing IWISP'96; November 1996
  19. Bezdek JC, Pal SK. Fuzzy Models for Pattern Recognition. IEEE Press; 1992
  20. Gath I, Geva AB. Unsupervised optimal fuzzy clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1989;11(7):773-781
  21. Bloch I. On fuzzy distances and their use in image processing under imprecision. Pattern Recognition. 1999;32(11):1873-1895
  22. Yvinec Y, Druyts P, Lacroix V, Ouaghli Y, Idrissa M. Airborne minefield detection: Pilot project RMA/SIC final report. Technical Report. Brussels: Royal Military Academy/Signal and Image Centre, Avenue de la Renaissance 30, B-1000; 1999
  23. Axelsson P. DEM generation from laser scanner data using adaptive TIN models. International Archives of Photogrammetry and Remote Sensing. 2000;XXXIII:110-117
  24. Keller M, Milisavljevic N, Suess H, Acheroy M. Reduction of mine suspected areas by multisensor airborne measurements: First results. In: Proceedings of the SPIE Conference on Detection Technologies for Mines and Minelike Targets. Vol. 4742. Orlando, USA; 2002. pp. 857-871
  25. Bloch I, Milisavljević N, Acheroy M. Multisensor data fusion for spaceborne and airborne reduction of mine suspected areas. International Journal of Advanced Robotic Systems. 2007;4(2):173-186
  26. Milisavljevic N, Bloch I. Sensor fusion in anti-personnel mine detection using a two-level belief function model. IEEE Transactions on Systems, Man and Cybernetics—Part C. 2003;33(2):269-283
  27. Shimoni M, Van der Meer F, Acheroy M. Thermal imaging spectroscopy: Present technology and future dual use applications. In: Proceedings of the 5th EARSeL Workshop on Imaging Spectroscopy; Bruges, Belgium; 23-25 April 2007
  28. Liao W-J, Chen D-H, Baertlein BA. Algorithms for detection of surface mines in multispectral IR and visible imagery. Proceedings of the SPIE. 2001;4394:310-321
  29. Burt AB. Detection of Spatially Unresolved (Nominally Sub-Pixel) Submerged and Surface Targets Using Hyper-Spectral Data [MSc thesis]. Monterey, CA: Remote Sensing Centre, Naval Postgraduate School; 2012
  30. Elbakary MI, Alam MS. Mine detection in multispectral imagery data using constrained energy minimization. Proceedings of the SPIE 6977, Optical Pattern Recognition. 2008;XIX:P697707. DOI: 10.1117/12.781735
  31. Baghbidi MZ, Jamshidi K, Nilchi AR, Homayouni S. Improvement of anomaly detection algorithms in hyperspectral images using discrete wavelet transform. Signal & Image Processing: An International Journal. 2011;2(4):13-25
  32. McFee JE, Ripley HT. Detection of buried land mines using a CASI hyperspectral imager. In: Proceedings of the SPIE 3079—Detection and Remediation Technologies for Mines and Minelike Targets II. Vol. 738. 1997
  33. Winter EM, Fields DJ, Carter MR, Bennett CL, Lucey PG, Johnson JR, Horton KA, Bowman AP. Assessment of techniques for airborne infrared land mine detection. In: Third International Airborne Remote Sensing Conference and Exhibition; July 1997
  34. Winter EM, Miller M, Simi C, Hill A, Williams T, Hampton D, Wood M, Zadnick J, Sviland M. Mine detection experiments using hyperspectral sensors. In: SPIE Proceedings. Vol. 5415; September 2004. pp. 791-798
  35. Zare A, Bolton J, Gader P, Schatten M. Vegetation mapping for landmine detection using long-wave hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing. 2008;46(1):172-178
  36. Zare A, Bolton J, Gader P, Schatten M. Vegetation mapping for landmine detection using long-wave hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing. 2008;46(1):172-178
  37. Tabulo MA. Detecting disturbed earth using hyperspectral imaging. AFDA. 2001;4(1)
  38. Gurton KP, Felton M. Remote detection of buried land-mines and IEDs using LWIR polarimetric imaging. Optics Express. 2012;20(20)
  39. TELOPS. Infrared Cameras. 2015. Available from: http://www.telops.com/en/
  40. Schaum A, Stocker A. Hyperspectral change detection and supervised matched filtering based on covariance equalization. In: Proceedings of the SPIE. Vol. 5425. 2004. pp. 77-90
  41. Eismann MT, Meola J, Hardie RC. Hyperspectral change detection in the presence of diurnal and seasonal variations. IEEE Transactions on Geoscience and Remote Sensing. 2008;46(1):237-249
  42. Eismann MT, Meola J, Stocker AD, Beaven SG, Schaum AP. Airborne hyperspectral detection of small changes. Applied Optics. 2008;47:27-45
  43. Eismann MT, Stein DW. Stochastic mixture modelling. In: Chang C-I, editor. Hyperspectral Data Exploitation: Theory and Applications. John Wiley & Sons; 2007. pp. 107-148
  44. Oliver C, Quegan S. Understanding Synthetic Aperture Radar Images. SciTech Publishing; 2004. p. 479
  45. Maître H. Processing of Synthetic Aperture Radar Images. London, UK: ISTE; 2008. DOI: 10.1002/9780470611111.biblio
  46. Richards JA. Remote Sensing with Imaging Radar. Springer; 2009. p. 361. DOI: 10.1007/978-3-642-02020-9
  47. ESA Earth Online. SAR Land Applications Tutorial [Internet]. Available from: https://earth.esa.int/web/guest/training-packages/-/article/sar-land-applications-tutorial [Accessed: 2015-10-30]
  48. Sandia National Laboratories. What Is Synthetic Aperture Radar [Internet]. Available from: http://www.sandia.gov/radar/what_is_sar/index.html [Accessed: 2015-10-30]
  49. Cloude SR. Polarisation: Applications in Remote Sensing. Oxford: Oxford University Press; 2009. p. 472. DOI: 10.1093/acprof:oso/9780199569731.001.0001
  50. Lee JS, Pottier E. Polarimetric Radar Imaging: From Basics to Applications. Taylor & Francis; 2009. p. 438. ISBN: 9-781-4200-5497-2
  51. Hanssen RF. Radar Interferometry: Data Interpretation and Error Analysis [thesis]. TU Delft, The Netherlands; 2001
  52. Papathanassiou K, Cloude SR. Three-stage inversion process for polarimetric SAR interferometry. IEE Proceedings—Radar, Sonar and Navigation. 2003;150(3):125-134. DOI: 10.1049/ip-rsn:20030449
  53. Reigber A, Neumann M, Guillaso S, Sauer S, Ferro-Famil L. Evaluating PolInSAR parameter estimation using tomographic imaging results. In: Proceedings of the European Radar Conference (EURAD'05); Paris; 6-7 October 2005. pp. 189-192. DOI: 10.1109/EURAD.2005.1605597
  54. Gamba P, Houshmand B. Three-dimensional road network by fusion of polarimetric and interferometric SAR data. In: Proceedings of IGARSS'99; 28 June-2 July 1999; Hamburg, Germany: IEEE; 1999. pp. 302-304
  55. Hong S, Moon WM, Paik HY, Choi GH. Data fusion of multiple polarimetric SAR images using discrete wavelet transform (DWT). In: Proceedings of IGARSS'02; 24-28 June 2002; Toronto, Canada: IEEE; 2002. pp. 3323-3325. DOI: 10.1109/IGARSS.2002.1027170
  56. PolSARPro: The Polarimetric SAR Processing and Educational Toolbox [Internet]. Available from: https://earth.esa.int/web/polsarpro/home [Accessed: 2015-10-29]
  57. ESA Sentinel Online: Sentinel Toolboxes [Internet]. Available from: https://sentinel.esa.int/web/sentinel/toolboxes [Accessed: 2015-10-29]
  58. Canadian Space Agency: Earth Observation Application Development Program (EOADP), Open Source RADARSAT-2 Polarimetric Toolbox [Internet]. Available from: http://www4.asc-csa.gc.ca/AUOT-EOAU/eng/EOADP/Projects/72596.aspx [Accessed: 2015-10-29]
  59. Next ESA SAR Toolbox (NEST) [Internet]. Available from: https://earth.esa.int/web/nest/home [Accessed: 2015-10-29]
  60. Rosenfield GH, Fitzpatrick-Lins K. A coefficient of agreement as a measure of thematic classification accuracy. Photogrammetric Engineering and Remote Sensing. 1986;52(2):223-227
  61. Congalton RG, Green K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices. Lewis Publishers; 1999. p. 630. DOI: 10.1201/978142004856
  62. Haralick RM, Shanmugam K, Dinstein IH. Textural features for image classification. IEEE Transactions on Systems, Man and Cybernetics. 1973;3(6):610-621. DOI: 10.1109/TSMC.1973.4309314
  63. Tupin F, Maitre H, Mangin J-F, Nicolas J-M, Pechersky E. Detection of linear features in SAR images: Application to road network extraction. IEEE Transactions on Geoscience and Remote Sensing. 1998;36(2):434-453. DOI: 10.1109/36.662728
  64. Pizurica A, Philips W, Lemahieu I, Acheroy M. Despeckling SAR images using wavelets and a new class of adaptive shrinkage estimators. In: Proceedings of the 8th IEEE International Conference on Image Processing (ICIP'01). DOI: 10.1109/ICIP.2001.958467
  65. Hosmer DW, Lemeshow S, Sturdivant RX. Applied Logistic Regression. 3rd ed. Wiley; 2013. p. 528. DOI: 10.1002/9781118548387
  66. Borghys D, Perneel C. A supervised classification of multi-channel high-resolution SAR data. EARSeL eProceedings. 2007;6(1):26-27
  67. Hoekman DH, Quinones MJ. Land cover type and biomass classification using AirSAR data for evaluation and monitoring scenarios in the Colombian Amazon. IEEE Transactions on Geoscience and Remote Sensing. 2000;38(2):685-696. DOI: 10.1109/36.841998
  68. Cumming IG, Van Zyl JJ. Feature utility in polarimetric radar image classification. In: Proceedings of IGARSS'89; 10-14 July 1989; Vancouver, Canada: IEEE; 1989
  69. Hellmann M. Polarimetric interferometry and differential interferometry 1 & 2. In: Radar Polarimetry and Interferometry. RTO Lecture Series SET-081; 2004. Paper 6, pp. 1-33. Available from: http://ftp.rta.nato.int/public//PubFullText/RTO/EN/RTO-EN-SET-081/EN-SET-081-$$TOC.pdf
  70. Shimoni M, Borghys D, Heremans R, Perneel C, Acheroy M. Fusion of PolSAR and PolInSAR data for land cover classification. International Journal of Applied Earth Observation and Geoinformation. 2009;11(3):169-180. DOI: 10.1016/j.jag.2009.01.004
  71. Borghys D, Yvinec Y, Perneel C, Pizurica A, Philips W. Supervised feature-based classification of multi-channel SAR images. Pattern Recognition Letters. 2006;27(4):252-258. DOI: 10.1016/j.patrec.2005.08.006
  72. Bloch I, Milisavljevic N, Acheroy M. Multisensor data fusion for spaceborne and airborne reduction of mine suspected areas. International Journal of Advanced Robotic Systems. 2007;4(2):173-186. DOI: 10.5772/5698
  73. Lippmann R. An introduction to computing with neural nets. IEEE ASSP Magazine. 1987;4(2):4-22. DOI: 10.1109/MASSP.1987.1165576
  74. Duda RO, Hart PE, Stork DG. Pattern Classification. 2nd ed. Wiley-Interscience; 2001. p. 654
