Open access peer-reviewed chapter

Practical Digital Terrain Model Extraction Using Image Inpainting Techniques

Written By

Chiman Kwan, David Gribben, Bulent Ayhan and Jude Larkin

Submitted: 29 December 2019 Reviewed: 15 June 2020 Published: 04 November 2020

DOI: 10.5772/intechopen.93184

From the Edited Volume

Recent Advances in Image Restoration with Applications to Real World Problems

Edited by Chiman Kwan


Abstract

In some applications, such as construction planning and land surveying, an accurate digital terrain model (DTM) is essential. However, in urban and suburban areas, the terrain may be covered by trees and man-made structures. Although a digital surface model (DSM) obtained by radar or LiDAR can provide a general idea of the terrain, the presence of trees, buildings, etc. conceals the actual terrain elevation. Normally, the process of extracting a DTM involves a land cover classification followed by a trimming step that removes the elevation due to trees and buildings. In this chapter, we assume the land cover types have been classified and we focus on the use of image inpainting algorithms for DTM generation. That is, for buildings and trees, we remove those pixels from the DSM and then apply inpainting techniques to reconstruct the terrain pixels in those areas. A dataset with DSM and hyperspectral data near the University of Houston area was used in our study. The DTM from the United States Geological Survey (USGS) was used as the ground truth. Objective evaluation results indicate that some inpainting methods perform better than others.

Keywords

  • digital terrain model (DTM)
  • digital surface model (DSM)
  • image inpainting
  • vegetation extraction
  • land classification

1. Introduction

There are several ways to obtain a DTM. The oldest method is to do this manually by measuring the terrain elevations of selected points in a given area. The process is time-consuming, tedious, and prone to human error. In recent years, people have started to use LiDAR to generate DTMs. The obtained DTM is in general satisfactory, even though the point density may not be as high as that of optical stereo imaging approaches [1]. Radar has been used as well. It is well known that LiDAR and radar equipment are expensive. Due to the availability of low-cost drones, stereo imaging has been gaining popularity. Near-infrared (NIR) imagers, together with color imagers, have been used in recent years to generate DSMs. However, due to the presence of vegetation and buildings, some additional processing steps are needed in order to obtain a DTM from a DSM.

In recent years, hyperspectral images [2, 3, 4] have been gaining popularity in various applications, including anomaly detection [5, 6, 7], target classification [8, 9], search and rescue operations [10], and many others. Due to the availability of hundreds of contiguous spectral bands, the accuracies of anomaly detection and target classification have improved quite significantly. Hyperspectral images can also be used for accurate land cover classification [11, 12, 13, 14, 15, 16]. Many methods have been developed in the past [17, 18, 19] for target detection in hyperspectral images. It would be ideal if hyperspectral images were available for land cover classification so that a more accurate DTM could be obtained. However, equipment cost, data storage requirements, and computational burden limit the widespread use of hyperspectral imagers.

In contrast, color and NIR imagers are relatively inexpensive, and their images require little computation and data storage. If one is given only color (RGB) and near-infrared (NIR) images, however, it is difficult to obtain accurate land cover classification for the following reasons. First, the accuracy of using only the RGB and NIR bands for land cover classification is low compared to that of using hyperspectral images. This point will become clear in Section 3. Improving land cover classification using only color and NIR images would therefore be a useful contribution to the community. In recent years, there have been some new developments in this direction. In particular, methods have been developed to synthesize spectral bands from color and NIR images. One such technique is known as the Extended Morphological Attribute Profile (EMAP) [20]. Several notable applications have appeared in the literature [16, 17]. Second, even after the pixels related to trees and man-made structures are identified and removed from the DSM, we still face an important practical issue: how can one recover the missing terrain pixels in the DSM to build a DTM? Conventional approaches use simple interpolation such as bilinear or bicubic interpolation [1]. However, the accuracy of the resulting DTM may be compromised. In recent years, there have been new developments in interpolation methods, termed image inpainting methods. These recent methods can be categorized into several groups. The first group consists of methods similar to bicubic interpolation; representative methods include bicubic, Laplacian [21], and inpaint-nans [22]. The second group uses nonlocal sparse representation for inpainting; well-known methods include Local Matrix Completion Sparse (LMCS) [23], field of experts (FOE) [24], and Transformic [25]. The last group consists of deep learning-based methods; one representative method is known as generative inpainting (GenIn) [26].
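To make the conventional interpolation baseline concrete, the sketch below fills removed DSM pixels from the surrounding known elevations using SciPy's scattered-data interpolation. It is a minimal illustration; the function name, mask convention, and fallback behavior are assumptions and not part of the methods compared in this chapter.

```python
# Minimal sketch: conventional interpolation of missing DSM pixels.
# Array names and the boolean mask convention are illustrative.
import numpy as np
from scipy.interpolate import griddata

def fill_missing_dsm(dsm, missing_mask, method="cubic"):
    """Interpolate missing DSM pixels (True in missing_mask) from known ones."""
    rows, cols = np.indices(dsm.shape)
    known = ~missing_mask
    filled = griddata(
        points=np.column_stack([rows[known], cols[known]]),  # coordinates of known pixels
        values=dsm[known],                                    # known elevations (metres)
        xi=(rows, cols),                                      # evaluate on the full grid
        method=method,                                        # 'nearest', 'linear', or 'cubic'
    )
    # 'linear'/'cubic' leave NaNs outside the convex hull of known points;
    # fall back to nearest-neighbour interpolation there.
    nan_left = np.isnan(filled)
    if nan_left.any():
        filled[nan_left] = griddata(
            np.column_stack([rows[known], cols[known]]), dsm[known],
            (rows[nan_left], cols[nan_left]), method="nearest")
    return filled
```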

In this chapter, we propose a low-cost and accurate approach to DTM generation. Suppose we are given a DSM and only the color and NIR images. Our approach consists of four steps. First, we perform land cover classification using only the color and NIR images. Various methods can be applied in this step. The key innovation is to apply synthetic spectral bands to enhance the land cover classification performance. It was demonstrated that using synthetic bands can yield classification performance very close to that obtained with the full hyperspectral image. Second, since there may be more than 10 land cover types, we observed that it is more accurate to consolidate some of the land cover types into only five groups. Third, the trees and man-made structures are removed from the DSM. Fourth, various conventional and deep learning inpainting methods are applied to generate the DTM. Comparisons show that GenIn has consistent performance in DTM construction.

This chapter is organized as follows. In Section 2, we will briefly review the methods and data. Section 3 will discuss the land cover classification results and how we consolidate 15 land cover types into only five groups. Section 4 focuses on the various DTM reconstruction results. Finally, some concluding remarks will be given in Section 5.


2. Methods and data

2.1 Land cover classification methods

In this research, we have used the following nine methods for land cover classification. We will not go into the details of each method. Instead, we briefly list the names and provide some references for their sources.

We categorize the methods into three groups. The first group contains simple and efficient methods: Matched Subspace Detection (MSD) [18], Adaptive Subspace Detection (ASD) [18], and Reed-Xiaoli Detection (RXD) [19]. These methods have been used in hyperspectral image processing in the past. The second group contains the kernel versions of the first group: Kernel MSD (KMSD) [18], Kernel ASD (KASD) [18], and Kernel RXD (KRXD) [19]. The kernel-based algorithms are computationally expensive and may not be suitable for real-time applications. The third group contains the Sparse Representation (SR) [27], Joint Sparse Representation (JSR) [27], and Support Vector Machine (SVM) [28, 29] algorithms. In the past, we have used the three methods in group 3 for soil detection using multispectral images [27].
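As an illustration of how a group-3 classifier can be applied pixel by pixel, the sketch below trains an SVM with scikit-learn. The data layout (a rows-by-cols-by-bands cube and an integer label map with 0 marking unlabeled pixels) and the SVM hyperparameters are assumptions for illustration, not the exact settings used in this chapter.

```python
# Minimal sketch of pixel-wise SVM land cover classification.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def classify_pixels(cube, train_labels):
    """cube: (rows, cols, bands) array; train_labels: (rows, cols) ints, 0 = unlabeled."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    y = train_labels.ravel()

    train = y > 0                                   # use only labeled pixels for training
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X[train], y[train])

    return clf.predict(X).reshape(rows, cols)       # class map for the whole scene
```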

2.2 Inpainting methods

We have applied seven methods in this project. They are briefly summarized below:

Bicubic: bicubic interpolation was used in a recent paper by researchers in Cyprus [1].

Inpaint_nans: we denote this as “inpaint” in our later experiments. This method was developed by D’Errico [22]. It is a very simple method that only uses the neighboring pixels to estimate the missing pixels, which are referred to as NaNs (not a number).

FOE: the Field of Experts (FOE) method was developed by Roth [24]. This method uses pre-trained models to filter out noise and obstructions in images.

Laplacian: this method [21] fills in each missing pixel using the Laplacian interpolation formula by finding the mean of the surrounding known values.
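A minimal sketch of this idea is given below: the missing pixels are repeatedly replaced by the mean of their four neighbors until the values stop changing, which amounts to solving the Laplace equation with the known pixels as boundary conditions. The function name, iteration count, and tolerance are illustrative choices, not the implementation of [21].

```python
# Sketch of Laplacian-style inpainting by iterated neighbour averaging.
import numpy as np

def laplacian_inpaint(dsm, missing_mask, n_iter=5000, tol=1e-4):
    if not missing_mask.any():
        return dsm.copy()
    z = dsm.astype(np.float64).copy()
    z[missing_mask] = dsm[~missing_mask].mean()       # neutral initial guess for the holes
    for _ in range(n_iter):
        padded = np.pad(z, 1, mode="edge")            # replicate edges so borders have neighbours
        neigh_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        change = np.abs(neigh_mean[missing_mask] - z[missing_mask]).max()
        z[missing_mask] = neigh_mean[missing_mask]    # only the missing pixels are updated
        if change < tol:
            break
    return z
```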

Local Matrix Completion Sparse (LMCS) [23]: in LMCS, which was developed by us, a search is performed for each missing pixel to find a pixel with the most similar neighbors. After the search, the missing pixel is replaced with the found pixel. This method performs very well with images containing repeating patterns.

Transformic: the Transformic method was developed by Mansfield [25]. It is similar to the LMCS in that it searches the whole image for a patch that is similar to the neighbors of the missing pixel.
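The simplified sketch below illustrates the nonlocal search principle shared by LMCS and Transformic (it is not the authors' implementation): for one missing pixel, the whole image is scanned for the known patch whose neighborhood best matches the neighborhood of the hole, and the center of that patch is copied in. The exhaustive scan also hints at why such methods become very slow on full-size images, as noted later in Section 4.

```python
# Sketch of filling one missing pixel by nonlocal patch search.
import numpy as np

def nonlocal_fill_pixel(img, missing_mask, r, c, half=3):
    """Fill img[r, c]; (r, c) is assumed missing and at least `half` pixels from the border."""
    h, w = img.shape
    win = img[r-half:r+half+1, c-half:c+half+1]
    known_in_win = ~missing_mask[r-half:r+half+1, c-half:c+half+1]

    best_val = img[~missing_mask].mean()              # fallback if no usable candidate is found
    best_cost = np.inf
    for i in range(half, h - half):
        for j in range(half, w - half):
            if missing_mask[i, j]:
                continue                              # candidate centre must be a known pixel
            cand = img[i-half:i+half+1, j-half:j+half+1]
            valid = known_in_win & ~missing_mask[i-half:i+half+1, j-half:j+half+1]
            if not valid.any():
                continue
            cost = np.mean((cand[valid] - win[valid]) ** 2)   # compare known neighbours only
            if cost < best_cost:
                best_cost, best_val = cost, cand[half, half]
    return best_val
```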

Generative Inpainting (GenIn) [26]: GenIn is a deep learning-based inpainting method developed at the University of Illinois. It aims to outperform typical deep learning methods that use convolutional neural network (CNN) models. GenIn builds on CNNs and generative adversarial networks (GANs) in an effort to encourage cohesion between created and existing pixels.

2.3 EMAP

In this section, we briefly introduce EMAP, which has been shown to yield good classification performance when one only has a few spectral bands available. Given an input grayscale image f and a sequence of threshold levels Th_1, Th_2, ..., Th_n, the attribute profile (AP) of f is obtained by applying a sequence of thinning and thickening attribute transformations to every pixel in f as follows:

\[ AP(f) = \{\phi_1(f), \phi_2(f), \ldots, \phi_n(f), f, \gamma_1(f), \gamma_2(f), \ldots, \gamma_n(f)\} \tag{1} \]

where ϕ_i and γ_i (i = 1, 2, ..., n) are the thickening and thinning operators at threshold Th_i, respectively. The EMAP of f is then acquired by stacking two or more APs computed with different attributes, such as purely geometric attributes (e.g., area, length of the perimeter, image moments, shape factors) or textural attributes (e.g., range, standard deviation, entropy).

\[ EMAP(f) = \{AP_1(f), AP_2(f), \ldots, AP_m(f)\} \tag{2} \]

More technical details about EMAP can be found in [20, 30, 31, 32]. In this work, the “area (a)” and “length of the diagonal of the bounding box (d)” attributes of EMAP [17] were used. The lambda parameters for the area attribute of EMAP, which form a sequence of thresholds used by the morphological attribute filters, were set to 10 and 15. The lambda parameters for the length attribute of EMAP were set to 50, 100, and 500. With this parameter setting, EMAP creates 11 bands for a given single-band image, one of which is the original image itself.
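For illustration, the sketch below builds attribute profiles with scikit-image's area_opening/area_closing as the thinning/thickening operators. It covers only the area attribute (the bounding-box length attribute is omitted), so it yields 5 features per band rather than the 11 described above, and it is not the chapter's exact EMAP implementation; the thresholds follow the area settings quoted in the text.

```python
# Sketch of EMAP feature construction with morphological attribute filters.
import numpy as np
from skimage.morphology import area_closing, area_opening

def attribute_profile(band, thresholds=(10, 15)):
    """AP(f) = {phi_1(f), ..., phi_n(f), f, gamma_1(f), ..., gamma_n(f)} for one band."""
    band = band.astype(np.float64)
    thick = [area_closing(band, area_threshold=t) for t in thresholds]  # thickenings phi_i
    thin = [area_opening(band, area_threshold=t) for t in thresholds]   # thinnings gamma_i
    return np.stack(thick + [band] + thin, axis=-1)

def emap(bands, thresholds=(10, 15)):
    """Stack the APs of several grayscale bands (e.g. R, G, B, NIR) into one feature cube."""
    return np.concatenate([attribute_profile(b, thresholds) for b in bands], axis=-1)
```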

2.4 IEEE dataset

From the IEEE GRSS Data Fusion package [11], we obtained the ground truth classification maps, the hyperspectral image of the University of Houston area, and the LiDAR data of the same area. The dataset was collected with a hyperspectral imager and a LiDAR sensor. The hyperspectral image contains 144 bands ranging in wavelength from 380 to 1050 nm with a spatial resolution of 0.25 m. The LiDAR data have the same spatial resolution of 0.25 m.

As shown in Table 1, three datasets are used for analysis. The first is the RGB bands (bands #60, #30, and #22 in the hyperspectral data) together with the NIR band (band #103). It should be noted that this selection of bands is not the same as band selection in the literature [33]; in band selection, the objective is to select the most informative bands out of all available hyperspectral bands, whereas in our case we are restricted to only a few bands. We call this group Dataset-4 (DS-4). The second is the four-band group put through EMAP augmentation to produce 44 bands, since each band produces 10 additional bands on top of the original band [denoted as Dataset-44 (DS-44)]. The third is the full hyperspectral image of 144 bands [denoted as Dataset-144 (DS-144)].

Dataset label | Bands present in the corresponding dataset
Dataset-4 (DS-4) | RGB and the NIR bands (respectively bands #60, #30, #22, and #103 in the hyperspectral data)
Dataset-44 (DS-44) | RGB and the NIR bands, plus forty bands obtained by EMAP augmentation applied to the RGB and NIR bands
Dataset-144 (DS-144) | Full hyperspectral dataset

Table 1.

Dataset labels and the corresponding bands.
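As a concrete illustration of how DS-4 is assembled from the hyperspectral cube, the short sketch below extracts the band numbers quoted in Table 1, assuming those numbers are 1-based; DS-44 would then be obtained by applying an EMAP construction such as the one sketched in Section 2.3 to these four bands.

```python
# Sketch: extracting the DS-4 band subset from the 144-band hyperspectral cube.
import numpy as np

def make_ds4(hsi_cube):
    """hsi_cube: (rows, cols, 144) array; returns a (rows, cols, 4) R, G, B, NIR stack."""
    r, g, b, nir = 60, 30, 22, 103              # band numbers quoted in the text (1-based)
    idx = np.array([r, g, b, nir]) - 1          # convert to 0-based array indices
    return hsi_cube[:, :, idx]
```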


3. Consolidation of the number of land cover classes

Before studying the performance of inpainting techniques on the IEEE GRSS Data Fusion dataset, and in order to reach a consensus about the best classification method, the number of classes was reduced from 15 to 5. As shown in Table 2, the first three grass classes (Healthy, Stressed, and Synthetic) were consolidated into a single grass class; tree, soil, and water kept their individual classes; and all remaining classes were grouped into one man-made structures class. This was done because some of the man-made classes (road, highway, railway, and both parking lots) were consistently misclassified, often as other classes in this group. The same is true for the grass classes. Consolidating the classes made the classification method selection process easier. The accuracies listed in Table 2 are averaged over all non-kernel methods; they are shown to illustrate how poorly some class types perform even with the better-performing methods.

New class # | Class type | Class # | Avg. accuracy (%)
1 | Healthy grass | 1 | 72.93
  | Stressed grass | 2 | 56.67
  | Synthetic grass | 3 | 91.90
2 | Tree | 4 | 70.98
3 | Soil | 5 | 82.99
4 | Water | 6 | 61.29
5 | Residential | 7 | 54.82
  | Commercial | 8 | 48.16
  | Road | 9 | 39.23
  | Highway | 10 | 43.00
  | Railway | 11 | 45.20
  | Parking lot 1 | 12 | 36.94
  | Parking lot 2 | 13 | 36.69
  | Tennis court | 14 | 64.03
  | Running track | 15 | 94.22

Table 2.

Combining classes down from 15 classes to 5 classes and the average accuracy of each class.
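The consolidation in Table 2 amounts to a simple lookup from the 15 original class indices to the 5 new groups. A minimal sketch is given below; the class numbering follows Table 2, and the treatment of unlabeled pixels (value 0) is an assumption.

```python
# Sketch of consolidating the 15 IEEE GRSS classes into the 5 groups of Table 2.
import numpy as np

# original class # -> new class # (1 grass, 2 tree, 3 soil, 4 water, 5 man-made structures)
CONSOLIDATION = {1: 1, 2: 1, 3: 1,          # healthy / stressed / synthetic grass
                 4: 2,                      # tree
                 5: 3,                      # soil
                 6: 4,                      # water
                 7: 5, 8: 5, 9: 5, 10: 5, 11: 5, 12: 5, 13: 5, 14: 5, 15: 5}

def consolidate(class_map):
    """Remap a (rows, cols) map of class numbers 0..15; 0 (unlabeled) is left as 0."""
    lut = np.zeros(16, dtype=np.int64)
    for old, new in CONSOLIDATION.items():
        lut[old] = new
    return lut[class_map]                    # vectorized lookup-table remapping
```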

Table 3, extracted from recent work [34], gives the accuracies for the full 15-class models, while Table 4 is for the consolidated 5 classes. Comparing the two tables, it can be seen that the new class combination yields much improved results in all cases. Each method has an overall improvement of at least 13%, and most methods improve by over 20%. It is clear from Table 4 that JSR stands out as the best performing method. With the new class arrangement, JSR goes from being the best performing method in only one band case to being the best in every band case as well as in the overall average. Moreover, every band case of JSR now returns over 90% accuracy, whereas previously the cases with fewer bands returned noticeably lower accuracies.

OA | DS-4 (%) | DS-44 (%) | DS-144 (%) | Avg. (%)
ASD | 22.59 | 22.75 | 21.11 | 22.15
MSD | 0.11 | 48.65 | 55.56 | 34.77
RXD | 28.93 | 46.09 | 42.69 | 39.24
KASD | 6.16 | 79.70 | 53.57 | 46.48
KMSD | 26.32 | 69.26 | 53.61 | 49.73
KRXD | 5.72 | 64.14 | 71.79 | 47.22
SR | 39.99 | 64.45 | 57.46 | 53.97
JSR | 59.83 | 80.77 | 72.57 | 71.06
SVM | 70.43 | 82.64 | 78.68 | 77.25

Table 3.

Overall accuracies using 15 classes for the nine classification methods and each band combination.

OA | DS-4 (%) | DS-44 (%) | DS-144 (%) | Avg. (%)
ASD | 47.17 | 69.89 | 65.75 | 60.94
MSD | 63.31 | 71.32 | 81.94 | 72.19
RXD | 57.50 | 68.98 | 62.29 | 62.92
KASD | 42.67 | 94.50 | 82.64 | 73.27
KMSD | 63.53 | 91.87 | 75.18 | 76.86
KRXD | 45.62 | 86.35 | 88.40 | 73.46
SR | 55.11 | 90.61 | 85.50 | 77.07
JSR | 93.15 | 94.55 | 93.84 | 93.85
SVM | 91.59 | 92.25 | 87.72 | 90.52

Table 4.

Overall accuracies using five classes for the nine classification methods and each band combination.

Bold numbers indicate the best performing method of each column.

It should be noted that the 44-band (DS-44) case is observed to perform better than the 4-band (DS-4) and 144-band (DS-144) cases. It is easy to understand why DS-44 is better than DS-4: the DS-44 data contain synthetic spectral information, which enriches the spectral content. The DS-144 case is worse than the DS-44 case because there is a lot of redundancy among the bands in the DS-144 data. This redundancy appears to cause conflicts in the classifiers. Other researchers have observed similar behavior [11] before, and it is sometimes called the curse of dimensionality.


4. DTM extraction by removing man-made structures and trees

4.1 Ground truth DTM

The ground truth used is the 1/9 arc second-resolution Digital Elevation map produced by USGS. Additional maps used for comparison in this investigation are one produced by the Cloth Simulation Filter (CSF) method [35] and the 1 arc second-resolution USGS Digital Elevation map. However, CSF and USGS can only be used for general comparison, as their inputs do not depend on the different numbers of bands: CSF simply uses the LiDAR image, while the USGS map is an already completed product. The three DTMs are shown in Figure 1. It can be seen that the USGS 1/9 arc second map is the most accurate.

Figure 1.

USGS 1/9 arc second resolution (top), CSF (middle), and USGS 1 arc second-resolution (bottom) DTMs.

4.2 Individual inpainting results

The different methods used to compare digital terrain models (DTMs) generated through inpainting were “inpaint_nans,” “LMCS,” “Laplacian,” “Transformic,” and “CSF.” However, CSF must be considered separately from the others, as it does not depend on the image bands that the other inpainting techniques use. In this study, the best resolution (1/9 arc second) USGS Digital Elevation map was used as the ground truth.

With a consistently well-performing classification method available, namely JSR, we now look at the performance of inpainting methods judged against the general ground truth of the USGS Digital Elevation maps. Our goal is to remove Class 2 (trees) and Class 5 (man-made structures) from the DSM. The missing pixels are then interpolated using inpainting techniques. The methods tested against this ground truth were: inpaint_nans, LMCS, Laplace, Transformic, and FOE. There is also the added variation of downsizing the image by a factor of four versus maintaining the full-size image, to show how accuracy is affected, since the downsized results save considerable computation time.

After the JSR classifier is applied to the EMAP images (DS-44) and the man-made structure and tree areas are identified, these areas are removed from the LiDAR image (DSM). Inpainting techniques are then applied to those missing pixel areas in the LiDAR image. The LiDAR image filled in by an inpainting method corresponds to the estimated DTM. Figure 2 contains the DTMs generated from the four times downsized DS-44 EMAP images for each method (excluding CSF). The downsizing by a factor of four was done because of computational issues; it took many hours to finish the inpainting for some of the methods. The images in Figure 2 can be compared to the ground truth and to the fully produced products of CSF and the lower resolution USGS map. The LMCS results have issues near the boundary of the image because LMCS cannot handle missing pixels near the image boundary.
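A minimal sketch of this step is given below, assuming a consolidated class map as in Section 3 and any inpainting routine with the signature used in the earlier sketches (for example the Laplacian solver); the class numbers follow Table 2, and the function names are illustrative.

```python
# Sketch of the DTM extraction step: mask tree and man-made pixels, then inpaint.
import numpy as np

TREE, MAN_MADE = 2, 5                                  # consolidated class numbers (Table 2)

def extract_dtm(dsm, class_map, inpaint_fn):
    missing = np.isin(class_map, [TREE, MAN_MADE])     # DSM pixels to remove and re-estimate
    return inpaint_fn(dsm, missing)                    # filled-in DSM = estimated DTM

# Example usage (names hypothetical):
# dtm = extract_dtm(lidar_dsm, consolidated_classes, laplacian_inpaint)
```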

Figure 2.

DTMs generated from the four times downsized DS-44 EMAP images: first row: Laplace; second row: inpaint_nans; third row: Transformic; fourth row: FOE; and fifth row: LMCS.

Figure 1 displays the full-size ground truth maps. Figure 3 contains the estimated DTMs using full-size DS-44 EMAP images. The inpainting maps in Figure 2 and Figure 3 can also be compared against Figure 1.

Figure 3.

DTMs generated from the full-size DS-44 EMAP images. We could not generate the LMCS result because it took many days, so we stopped the program. First row: Laplace; second row: inpaint_nans; third row: Transformic; fourth row: FOE.

Clearly the lower resolution USGS image is not a great product to use for the digital terrain map. However, it is useful to show a low-resolution picture of what the Houston area could look like without classification and inpainting.

To objectively assess the accuracy of the different inpainting methods, five metrics can be used. By taking the difference between the DTM of a given inpainting method and the ground truth map (USGS), and then calculating the mean, standard deviation, root mean square, minimum, and maximum of the difference, we can establish a general standard of accuracy for each method. Visual inspection of the LMCS result shows that its performance is poor on the edges of each map, as expected given that it does not calculate any inpainting on the edges. The same can also be said, to a lesser extent, of inpaint_nans. To help alleviate that inaccuracy, a cropped comparison of the downsized and full-size versions is conducted for all methods, which removes these problematic areas on the edges.
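These five measures can be computed directly from the difference image, as in the sketch below; the optional border crop mirrors the cropped comparison described above, and the crop width is an illustrative parameter.

```python
# Sketch of the five accuracy measures between an estimated DTM and the ground truth.
import numpy as np

def dtm_metrics(dtm_est, dtm_truth, crop=0):
    """Return mean, sigma, RMS, min, and max of (estimate - truth), optionally cropping borders."""
    if crop:
        dtm_est = dtm_est[crop:-crop, crop:-crop]
        dtm_truth = dtm_truth[crop:-crop, crop:-crop]
    diff = dtm_est - dtm_truth
    return {"mean": diff.mean(),
            "sigma": diff.std(),
            "rms": float(np.sqrt(np.mean(diff ** 2))),
            "min": diff.min(),
            "max": diff.max()}
```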

The performance metrics for the DS-44 case can be seen in Table 5. In the DS-44 case, we observe that two techniques, Laplacian and Transformic, performed better than the rest. While Transformic’s mean value is the smallest, Laplacian has better values for the other four metrics. For comparison purposes, the performance metrics for the CSF method and the lower resolution USGS elevation map are also included in Table 6. Overall, CSF performs fairly well for the mean; however, because of the non-removed bridge, all other metrics are relatively poor. The 1 arc second-resolution USGS image performs poorly in all accuracy categories. It can also be noticed from Table 6 that these values for CSF and USGS are worse than those of the best performing individual cases shown in Table 5.

Metric | Inpaint_nans | LMCS | Laplacian | Transformic | FOE
Mean | 0.39 | 0.24 | 0.34 | 0.08 | 0.38
Sigma | 0.74 | 0.82 | 0.58 | 0.67 | 0.64
RMS | 0.84 | 0.85 | 0.67 | 0.67 | 0.74
Min | −3.43 | −11.87 | −3.43 | −3.55 | −3.43
Max | 6.38 | 18.45 | 6.30 | 6.56 | 6.60

Table 5.

Mean, standard deviation (sigma), root mean square (RMS), min, and max accuracy results using five inpainting methods for the DS-44 case.

Bold numbers indicate the best performing method of each row.

Metric | CSF | USGS 1 arc second
Mean | 0.37 | 4.70
Sigma | 1.02 | 3.10
RMS | 1.08 | 5.63
Min | −5.81 | −7.83
Max | 15.98 | 19.04

Table 6.

Accuracy values for CSF and USGS lower resolution map.

4.3 Fusion of different inpainting results

In an effort to improve the inpainting performance metrics, three different fusion methods are utilized. These pixel-level fusion methods were used in [36]. In the first fusion method, the alpha-trimmed mean filter (ATMF), the worst and best performing methods for a given accuracy measurement are removed, and the three remaining results are averaged before the accuracy measurements are re-taken to see how the results improve. The second fusion method, the weighted method, weights each method based on a specific accuracy measurement and averages the weighted results. The final fusion method, F3, simply averages the three best performing methods for each accuracy measurement.
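Illustrative implementations of the three fusion rules are sketched below, operating on a list of candidate DTMs that has already been ranked by one accuracy metric (element 0 is the best method). The inverse-error weighting in the weighted rule is an assumption, since the chapter does not spell out its exact weighting scheme.

```python
# Sketches of the three pixel-level fusion rules: ATMF, weighted average, and F3.
import numpy as np

def fuse_atmf(dtms_ranked):
    """Alpha-trimmed mean: drop the best and worst methods, average the remaining three."""
    return np.mean(np.stack(dtms_ranked[1:-1]), axis=0)

def fuse_weighted(dtms_ranked, errors):
    """Weighted average, with weights derived from each method's error on the ranking metric
    (smaller error -> larger weight); the 1/error choice here is an illustrative assumption."""
    w = 1.0 / np.asarray(errors, dtype=np.float64)
    w /= w.sum()
    return np.tensordot(w, np.stack(dtms_ranked), axes=1)

def fuse_f3(dtms_ranked):
    """F3: plain average of the three best-performing methods."""
    return np.mean(np.stack(dtms_ranked[:3]), axis=0)
```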

In order to perform these operations, it was necessary to rank each of the methods based on the three main accuracy measurements: mean, standard deviation (STD, also denoted sigma in other tables), and root mean square (RMS). This was done exclusively for the DS-44 results and includes both the full-size results and the four times down-sampled results. Table 7 shows the performance rankings of the full-size and downsized results with respect to the three performance metrics. From the results in Table 7, it is clear that the downsized results return more accurate values than the full-size results in most cases.

Rank | Mean: method | Mean: value | STD: method | STD: value | RMS: method | RMS: value
1 | 4× T | 0.08 | 4× LP | 0.58 | 4× LP | 0.67
2 | 4× LMCS | 0.24 | 4× FOE | 0.64 | 4× T | 0.67
3 | 4× LP | 0.34 | Full T | 0.65 | 4× FOE | 0.74
4 | 4× FOE | 0.38 | 4× nans | 0.74 | 4× nans | 0.84
5 | 4× nans | 0.39 | 4× LMCS | 0.82 | 4× LMCS | 0.85

Table 7.

Ranking results for various combinations of band number and accuracy measurement for DS-44 case.

Transformic is abbreviated T and Laplacian LP; “4×” denotes the four times downsized results and “Full” the full-size results.

Table 8 shows the performance metrics for the DS-44 case when the three fusion methods were applied to the five individual inpainting methods’ results, with the ranking conducted separately with respect to each of the three performance metrics (mean, STD, and RMS). F3 produces the lowest mean value and relatively lower sigma and RMS values compared to the others.

ATMF (DS-44) | Ranked by mean | Ranked by STD | Ranked by RMS
Mean | 0.58 | 0.64 | 0.56
Sigma | 0.68 | 0.67 | 0.67
RMS | 0.89 | 0.93 | 0.87
Min | −5.85 | −3.47 | −3.50
Max | 6.77 | 6.75 | 6.68

Weighted (DS-44) | Ranked by mean | Ranked by STD | Ranked by RMS
Mean | 0.29 | 0.51 | 0.44
Sigma | 0.60 | 0.60 | 0.60
RMS | 0.67 | 0.79 | 0.75
Min | −4.46 | −4.36 | −4.46
Max | 6.37 | 6.42 | 6.36

F3 (DS-44) | Ranked by mean | Ranked by STD | Ranked by RMS
Mean | 0.22 | 0.62 | 0.53
Sigma | 0.61 | 0.67 | 0.67
RMS | 0.64 | 0.91 | 0.86
Min | −5.93 | −3.50 | −3.54
Max | 6.40 | 6.61 | 6.61

Table 8.

Performance metrics based on fusion algorithms.

From left to right, ATMF, weighted, and fusion 3; methods for combining generated DTM maps for DS-44 case.

Table 9 shows the performance metrics for a special case (which we name combo) in which the three fusion methods are applied to the best five individual inpainting results drawn from both the downsized and full-size DS-44 results, with the ranking again conducted separately with respect to the three performance metrics (mean, STD, and RMS). In this case, the F3 method produces lower mean, sigma, and RMS values when the ranking is done according to the mean performance metric.

ATMF (combo) | Ranked by mean | Ranked by STD | Ranked by RMS
Mean | 0.61 | 0.35 | 0.25
Sigma | 0.70 | 0.61 | 0.63
RMS | 0.93 | 0.70 | 0.68
Min | −5.83 | −3.39 | −3.45
Max | 7.55 | 6.62 | 6.48

Weighted (combo) | Ranked by mean | Ranked by STD | Ranked by RMS
Mean | 0.29 | 0.56 | 0.73
Sigma | 0.60 | 0.62 | 0.70
RMS | 0.67 | 0.84 | 1.01
Min | −4.46 | −3.42 | −4.12
Max | 6.37 | 6.85 | 7.03

F3 (combo) | Ranked by mean | Ranked by STD | Ranked by RMS
Mean | 0.22 | 0.66 | 0.56
Sigma | 0.61 | 0.68 | 0.68
RMS | 0.64 | 0.95 | 0.88
Min | −5.93 | −3.38 | −3.43
Max | 6.40 | 7.55 | 7.44

Table 9.

Accuracy values.

From left to right: ATMF, weighted, and F3 methods for combining the generated DTM maps in the combo case (both image-size instances).

Table 10 shows a summary of the best performing individual cases (no fusion) and the best performing fusion case for the DS-44 data. From Table 10, it can be noticed that F3 (with ranking according to the lowest mean) slightly improves the RMS value when compared with the RMS values of the best performing individual inpainting methods (Laplacian and Transformic in DS-44). However, when all performance metrics are considered as a whole, we cannot clearly state that F3 performs the best in all accuracy metrics; it improves only a few of them.

Metric | No fusion: Best DS-44 (Laplacian) | No fusion: Best DS-44 (Transformic) | With fusion: Best DS-44 (F3-mean)
Mean | 0.34 | 0.08 | 0.22
Sigma | 0.58 | 0.67 | 0.61
RMS | 0.67 | 0.67 | 0.64
Min | −3.43 | −3.55 | −5.93
Max | 6.30 | 6.56 | 6.40

Table 10.

Summary of the best performing individual and fusion cases.

Bold numbers indicate the best performing method of each row.

4.4 Comparison with deep learning inpainting

Using the pre-trained model provided by the GenIn package [26] on an image of about 350 by 1900 pixels (covering the University of Houston campus and surrounding area), the observed computation time is roughly 2 minutes.

The accuracy of the GenIn model is competitive with the other inpainting techniques and, in several cases if not overall, more accurate. In Table 11, statistics from GenIn together with the statistics from the other inpainting techniques for the IEEE dataset are provided. With regard to the mean, root mean square (RMS), and maximum difference (max), GenIn outperforms all other techniques. For the sigma metric, it is the second-best performing method. The minimum difference (min) is the only metric where GenIn underperforms, coming in fourth among the methods; however, its value is still the second-best distinct value, closely trailing the three techniques tied at the best min.

4× crop | Inpaint-nans | LMCS | Laplacian | Transformic | FOE | GenIn
Mean | 0.39 | 0.24 | 0.34 | 0.08 | 0.38 | 0.23
Sigma | 0.74 | 0.82 | 0.58 | 0.67 | 0.64 | 0.59
RMS | 0.84 | 0.85 | 0.67 | 0.67 | 0.74 | 0.63
Min | −3.43 | −11.87 | −3.43 | −3.55 | −3.43 | −3.50
Max | 6.38 | 18.45 | 6.30 | 6.56 | 6.60 | 6.29

Table 11.

Comparison of GenIn statistics with respect to other inpainting methods’ performances for IEEE dataset.

Bold numbers indicate the best performing method of each row.

It is also helpful to visualize GenIn’s digital terrain map estimate in comparison with the ground truth. Figure 4 shows the University of Houston area after GenIn is applied to that area’s LiDAR data. Figure 5 shows the USGS 1/9 arc second Digital Elevation map that is used as the ground truth for the area.

Figure 4.

GenIn digital terrain map for the U. Houston (UH) area. Scale is from 8 to 25 m.

Figure 5.

USGS 1/9 arc second digital elevation map for the UH area. Scale is from 8 to 25 m.

The GenIn-generated results are found to be a very close reproduction of the ground truth. In some instances, GenIn even provides more realistic results than the ground truth. As an example, toward the right side and vertical center of the plot in Figure 5, a deep dark spot can be observed, which denotes a low spot. This is actually caused by a highway bridge that runs over a railway and could be considered a miscalculated section of the Digital Elevation map. The GenIn-generated map produces no such deep dark spot; instead, it smoothly removes the bridge, and because of this it suffers slightly in the resulting accuracy statistics.


5. Conclusions

In this research, we investigated the feasibility of using only color and NIR images for accurate DTM extraction. We assume the DSM is also available. Our approach involves several steps. The first step is to use the color and NIR images for land cover classification. After extensive experiments, it was observed that using only four bands cannot achieve accurate land cover classification. A morphological filtering approach (EMAP) was therefore applied to generate synthetic spectral bands. Using nine land cover classification algorithms, it was observed that the use of synthetic bands significantly improved the land cover classification accuracy for the well-known IEEE dataset. The second step is to consolidate the many land cover types into only five groups, which was observed to further improve the accuracy. The third step is to apply the various inpainting algorithms described in Section 2.2 to recover the DTM from the DSM. It was observed that the deep learning algorithm yielded the most consistent performance.

Here, we also briefly mention a few future research directions. One direction is to focus on DSM generation using color images. A second direction is to obtain ortho-rectified versions of the color and NIR images. A third direction is to build a software prototype that integrates the DSM generation, ortho-rectification, land cover classification, and DTM reconstruction tools.


Acknowledgments

This research was supported by DOE under contract # DE-SC0019936.

References

  1. Skarlatos D, Marinos V. Vegetation removal from UAV derived DSMS using combination of RGB and NIR imagery. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences. 2018;IV-2:255-262
  2. Lee CM, Cable ML, Hook SJ, Green RO, Ustin SL, Mandl DJ, et al. An introduction to the NASA hyperspectral infrared imager (HyspIRI) mission and preparatory activities. Remote Sensing of Environment. 2015;167:6-19
  3. Zhou J, Kwan C, Budavari B. Hyperspectral image super-resolution: A hybrid color mapping approach. Journal of Applied Remote Sensing. 2016;10(3):035024
  4. Kwan C, Choi JH, Chan S, Zhou J, Budavari B. Resolution enhancement for hyperspectral images: A super-resolution and fusion approach. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). New Orleans, LA; 2017. pp. 6180-6184
  5. Wang W, Li S, Qi H, Ayhan B, Kwan C, Vance S. Identify anomaly component by sparsity and low rank. In: IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensor (WHISPERS); 2-5 June 2015; Tokyo, Japan. 2015
  6. Zhou J, Kwan C, Ayhan B, Eismann MT. A novel cluster kernel RX algorithm for anomaly and change detection using hyperspectral images. IEEE Transactions on Geoscience and Remote Sensing. 2016;54(11):6497-6504
  7. Qu Y, Qi Y, Ayhan B, Kwan C, Kidd R. Does multispectral/hyperspectral pansharpening improve the performance of anomaly detection? In: IEEE International Geoscience and Remote Sensing Symposium (IGARSS). 2017. pp. 6130-6133
  8. Zhou J, Kwan C, Ayhan B. Improved target detection for hyperspectral images using hybrid in-scene calibration. Journal of Applied Remote Sensing. 2017;11(3):035010
  9. Kwan C, Ayhan B, Chen G, Wang J, Ji B, Chang C-I. A novel approach for spectral unmixing, classification, and concentration estimation of chemical and biological agents. IEEE Transactions on Geoscience and Remote Sensing. 2006;44(2):409-419
  10. Eismann MT, Stocker AD, Nasrabadi NM. Automated hyperspectral cueing for civilian search and rescue. Proceedings of the IEEE. 2009;97(6):1031-1055
  11. Khodadadzadeh M, Li J, Prasad S, Plaza A. Fusion of hyperspectral and LiDAR remote sensing data using multiple feature learning. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2015;8(6):2971-2983
  12. Kwan C, Ayhan B, Larkin J, Kwan LM, Bernabé S, Plaza A. Performance of change detection algorithms using heterogeneous images and extended multi-attribute profiles (EMAPs). Remote Sensing. 2019;11(20):2377
  13. Kwan C, Larkin J, Ayhan B, Kwan LM, Skarlatos D, Vlachos M. Performance comparison of different inpainting algorithms for accurate DTM generation. In: Geospatial Informatics X (Conference SI113). 2020. DOI: 10.1117/12.2557824
  14. Ayhan B, Kwan C, Kwan LM, Skarlatos D, Vlachos M. Deep learning models for accurate vegetation classification using RGB image only. In: Geospatial Informatics X (Conference SI113). 2020. DOI: 10.1117/12.2557833
  15. Ayhan B, Kwan C. Tree, shrub, and grass classification using only RGB images. Remote Sensing. 2020;12. DOI: 10.3390/rs12081333
  16. Ayhan B, Kwan C. Application of deep belief network to land cover classification using hyperspectral images. In: International Symposium on Neural Networks. 2017. pp. 269-276
  17. Dao M, Kwan C, Bernabé S, Plaza A, Koperski K. A joint sparsity approach to soil detection using expanded bands of WV-2 images. IEEE Geoscience and Remote Sensing Letters. 2019;16(12):1869-1873
  18. Nasrabadi NM. Kernel-based spectral matched signal detectors for hyperspectral target detection. In: International Conference on Pattern Recognition and Machine Intelligence. Berlin, Heidelberg: Springer; 2007
  19. Kwon H, Nasrabadi NM. Kernel RX-algorithm: A nonlinear anomaly detector for hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing. 2005;43(2):388-397
  20. Bernabé S, Marpu PR, Plaza A, Mura MD, Benediktsson JA. Spectral-spatial classification of multispectral images using kernel feature space representation. IEEE Geoscience and Remote Sensing Letters. 2014;11:288-292
  21. Doshkov D, Ndjiki-Nya P, Lakshman H, Köppel M, Wiegand T. Towards efficient intra prediction based on image inpainting methods. In: 28th Picture Coding Symposium. IEEE; 2010
  22. Inpaint_nans. Available from: https://www.mathworks.com/matlabcentral/fileexchange/4551-inpaint_nans
  23. Zhou J, Kwan C. High performance image completion using sparsity based algorithms. In: SPIE Commercial + Scientific Sensing and Imaging Conference. Orlando, FL; 2018
  24. Roth S, Black MJ. Fields of experts. International Journal of Computer Vision. 2009;82:205
  25. Mansfield A, Prasad M, Rother C, Sharp T, Pushmeet K, Van Gool L. Transforming image completion. In: The 22nd British Machine Vision Conference; 29 August-2 September 2011. 2011
  26. Yu J, Lin Z, Yang J, Shen X, Lu X, Huang T. Generative image inpainting with contextual attention. arXiv:1801.07892 [cs.CV]. 2018
  27. Dao M, Kwan C, Koperski K, Marchisio G. A joint sparsity approach to tunnel activity monitoring using high resolution satellite images. In: IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference. 2017. pp. 322-328
  28. Burges C. A tutorial on support vector machines for pattern recognition. In: Data Mining and Knowledge Discovery. Boston: Kluwer Academic Publishers; 1998. pp. 121-167
  29. Qian T, Li X, Ayhan B, Xu R, Kwan C, Griffin T. Application of support vector machines to vapor detection and classification for environmental monitoring of spacecraft. In: Lecture Notes in Computer Science, LNCS 3973. New York: Springer; 2006. pp. 1216-1222
  30. Bernabé S, Marpu PR, Plaza A, Benediktsson JA. Spectral unmixing of multispectral satellite images with dimensionality expansion using morphological profiles. In: Proceedings of the SPIE Satellite Data Compression, Communications, and Processing VIII; 19 October 2012; San Diego, CA, USA. Vol. 8514. 2012. p. 85140Z
  31. Mura MD, Benediktsson JA, Waske B, Bruzzone L. Morphological attribute profiles for the analysis of very high resolution images. IEEE Transactions on Geoscience and Remote Sensing. 2010;48:3747-3762
  32. Mura MD, Benediktsson JA, Waske B, Bruzzone L. Extended profiles with morphological attribute filters for the analysis of hyperspectral data. International Journal of Remote Sensing. 2010;31:5975-5991
  33. Sun W, Du Q. Hyperspectral band selection: A review. IEEE Geoscience and Remote Sensing Magazine. 2019;7(2):118-139
  34. Kwan C, Gribben D, Ayhan B, Bernabe S, Plaza A, Selva M. Improving land cover classification using extended multi-attribute profiles (EMAP) enhanced color, near infrared, and LiDAR data. Remote Sensing. 2020;12(9). DOI: 10.3390/rs12091392
  35. Zhang W, Qi J, Wan P, Wang H, Xie D, Wang X, et al. An easy-to-use airborne LiDAR data filtering method based on cloth simulation. Remote Sensing. 2016;8(6):501
  36. Kwan C, Chou B, Kwan LM, Larkin J, Ayhan B, Bell JF, et al. Demosaicking enhancement using pixel-level fusion. Journal of Signal, Image, and Video Processing. 2018;12:749-756. DOI: 10.1007/s11760-017-1216-2
