Open access peer-reviewed chapter

Land Use Information Quick Mapping Based on UAV Low-Altitude Remote Sensing Technology and Transfer Learning

Written By

Lu Heng, Fu Xiao, Liu Chao, Li Longguo, Li Naiwen and Ma Lei

Submitted: 02 October 2017 Reviewed: 26 January 2018 Published: 27 June 2018

DOI: 10.5772/intechopen.74475

From the Edited Volume

Drones - Applications

Edited by George Dekoulis


Abstract

Obtaining surface spatio-temporal data rapidly, automatically, and accurately is an important issue in agricultural informatization and intellectualization. Samples obtained by conventional manual visual interpretation can hardly meet the demands of land resources information extraction. Low-altitude remote sensing has emerged in recent years as a new technology for earth observation. On this basis, spatio-temporal data mining was introduced and a knowledge transfer learning mechanism was employed to propose a novel land use information classification method based on knowledge transfer learning (KTLC). First, the new image is segmented by an improved mean shift algorithm to obtain image objects. Second, the vector boundaries of the objects are matched and nested with the previous land use thematic map, invariant objects are obtained through overlay analysis, and the invariant objects are purified by threshold filtering on spectral and spatial information. The surface feature category knowledge of the historical thematic map is thereby transferred to the new image objects. Finally, classified mapping of the current image is completed with a decision tree, and land use classification mapping results are produced by both KTLC and eCognition-based classification (EC). The experimental results show that KTLC achieves accuracies equivalent to EC and outperforms EC in terms of efficiency.

Keywords

  • low-altitude remote sensing technology
  • land use information
  • classification mapping
  • invariant objects acquisition
  • knowledge transfer learning
  • prior knowledge

1. Introduction

Collecting surface spatio-temporal data rapidly and accurately is a key issue in agricultural informatization and intellectualization. In general, the data sources selected for agricultural background investigations (such as basic farmland area monitoring and crop planting structure investigation) are satellite images [1, 2, 3, 4, 5]. However, it is hard to collect the required image data continuously in cloudy and foggy regions (such as the Sichuan Basin, China) because satellite sensors are affected by weather conditions. Satellite images also generally have low spatial resolution, so it is hard to identify the scattered, discontinuous small pieces of cultivated land required for precise agricultural land monitoring [6, 7]. Meanwhile, land use information is still obtained and updated by manual interpretation, resulting in a large workload and low efficiency. Although some scholars have proposed automatic interpretation methods, a large workload is still required for manual sampling, so these methods fall far short of real automation. In consequence, higher requirements are placed on data source resolution and information extraction technologies. Under such circumstances, low-altitude remote sensing technology, represented by UAVs, has emerged. Compared with conventional aerial photography platforms, UAVs have the following advantages: rapid take-off and landing, repetitive operations, low cost of image collection, and high spatial resolution of the collected images [8, 9, 10]. As a UAV equipped with low-altitude remote sensing technology can provide images with centimeter-level resolution at low cost, it has great application potential for basic farmland protection areas with high requirements on the accuracy of land use information.

With the rapid development of remote sensing technologies, the spatial resolution of remote sensing images is increasingly high. On the high-resolution images collected by low-altitude remote sensing, more spectral information on surface features can be obtained; the spectral differences within the same type of surface feature become larger while those between different land types are reduced. Hence, the phenomenon of the same feature showing different spectra and different features showing the same spectrum becomes more common. Owing to the number of details identified on such images and the complexity of the spectral characteristics of surface features, the accuracy of classification methods based on conventional spectral statistics (such as the maximum likelihood method, the minimum distance method, and the k-means clustering algorithm) is lowered [11]. Baatz and Schape [12] put forward an object-oriented remote sensing image classification method based on the characteristics of high-resolution images. With the growing availability of images with high spatial resolution, the object-oriented analysis method is gradually replacing the conventional pixel-based analysis method [13]. With objectification, spectral, shape, and texture information can be exploited effectively, and hierarchical relationships or semantic information can be further integrated; the approach is therefore better aligned with the principles and process of visual image interpretation [14, 15]. A number of studies [16, 17, 18] show that the object-oriented classification method has great potential for improving automatic information extraction from high-resolution remote sensing images and that it is an ideal choice for their automatic classification.

Current knowledge transfer methods can be classified into four categories: instance transfer, feature transfer, parameter transfer, and associated-knowledge transfer [19, 20]. In this chapter, building on associated-knowledge transfer, a transfer method for surface feature category labels (associated knowledge) based on the detection of invariant objects is designed for classified mapping of high-resolution images collected with low-altitude remote sensing technology. Invariant surface features on the new images are obtained by matching and nesting the new land use images with the previous time-phase thematic vector maps, and the surface feature category labels embedded in the previous time phase are transferred to the new images; in this way, the category interpretation knowledge of invariant surface features migrates from the source domain to the target domain and establishes a new mapping between image features and surface feature categories. On this basis, this chapter proposes a method for classified mapping of land use information on high-resolution remote sensing images.


2. General information on study area and data

The study area is located in the basic farmland protection area of Lianshan Town, Guanghan City, Sichuan Province, China. The parent materials of the soils in Guanghan City are either weathered bedrock or loose deposits. The areas with soil thickness greater than 100 cm and less than 30 cm account for 7.43 and 1.5% of the total cultivated area, respectively. Most soils feature good arability, a long arable period, and good fertilizer preservation and supply performance, providing a large arable area. However, Guanghan City has a large population on relatively little land: it covers 548 km2 in total with a total population of 600,000, and its agricultural area is only 34,000 hm2, with a cultivated land area of 31,000 hm2 and a basic farmland protection area of 28,000 hm2. Based on the state standards for land use classification and in combination with local conditions, the study area mainly includes six categories of land: cultivated land, forest land, residential land, road, water, and other land. Figure 1 shows the location of the study area.

Figure 1.

Location of Guanghan City, selected for the experimental purpose.

Considering that the terrain of the study area is gentle and therefore convenient for take-off and landing, an ejection-type fixed-wing UAV is selected for the experiment. A Canon EOS 5D Mark II is carried on the flying platform, and the preset forward and side overlaps are 75 and 45%, respectively. The flight altitude is 600 m and the camera focal length is 24.49 mm, so the collected UAV images have a spatial resolution up to 0.2 m. The thematic land use maps of the previous time phase were drawn in June 2014, as shown in Figure 2a and b, while the UAV images were acquired in July 2015. To better verify the efficiency and applicability of the method, two typical hybrid UAV images of different land types (i.e., the "complex building-cultivated land" hybrid image shown in Figure 2c and the "forest land-cultivated land" hybrid image shown in Figure 2d) are selected in this chapter.

Figure 2.

Preliminary thematic land use map and experimental UAV images. (a) Preliminary thematic land use map of experimental image 1. (b) Preliminary thematic land use map of experimental image 2. (c) Experimental image 1. (d) Experimental image 2.


3. Working process and study method

3.1. Working process

The collected original UAV images are first preprocessed, including color uniformizing, light uniformizing, and generation of orthoimages. After preprocessing, the to-be-classified images are segmented with the improved mean shift algorithm to obtain image objects. Next, the vector boundaries of the segmented objects are matched and nested with the thematic land use maps of the previous time phase, and invariant objects on the current images are identified through overlay analysis; wrong invariant objects are weeded out based on spectral and spatial information thresholds. Finally, the categories of the invariant surface features are transferred to the current target images through transfer learning, and classification rules are established with a decision tree so as to carry out rapid classified mapping of the current images. In addition, a comparison is made with classified mapping performed directly with object-oriented classification software (eCognition).
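As an illustration of the matching/nesting and overlay analysis step, the sketch below intersects the segmented object polygons with the previous-phase thematic map using GeoPandas. The file names, the 'landuse' attribute, and the 0.95 coverage threshold are assumptions for illustration; the actual workflow may rely on different GIS tooling.

```python
import geopandas as gpd

# Segmented object polygons from the new UAV orthoimage (hypothetical path).
objects = gpd.read_file("segmented_objects.shp")
objects["obj_id"] = objects.index
objects["obj_area"] = objects.area

# Previous time-phase thematic land use map with a 'landuse' attribute,
# reprojected so both layers share the same spatial reference.
thematic = gpd.read_file("landuse_2014.shp").to_crs(objects.crs)

# Overlay analysis: each intersection piece carries both its object id and
# the historical land use label of the thematic polygon it falls in.
pieces = gpd.overlay(objects, thematic, how="intersection")
pieces["cover"] = pieces.area / pieces["obj_area"]

# Objects dominated by a single historical class are invariant-object
# candidates; they inherit that class label for transfer learning.
candidates = pieces[pieces["cover"] > 0.95][["obj_id", "landuse"]]
```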

3.2. Preprocessing of image data

The digital camera on the UAV is non-metric, so the images suffer from serious lens distortion, and distortion correction shall be carried out based on the distortion parameters of the camera [21, 22]. Meanwhile, exposure time intervals and varying weather conditions during the flight cause chromatic aberration, so color and light uniformizing shall be carried out with the mask method. Based on the aircraft attitude parameters recorded by the flight control system, preliminary image sorting and positioning can be carried out for matching homologous points of adjacent image pairs. After the homologous points are matched, block adjustment can be made based on the collinearity equations. After that, the coordinates of ground control points may be incorporated to realize absolute orientation and obtain corrected orthoimages. This provides high-accuracy orthoimage data for subsequent rapid updating and mapping of land use information.
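As a minimal sketch of the distortion correction step, the snippet below applies OpenCV's pinhole-model undistortion. The intrinsic matrix and distortion coefficients are placeholders rather than the actual calibration of the camera used here; the remaining steps (color/light uniformizing, block adjustment, orthorectification) are normally handled by photogrammetric software.

```python
import cv2
import numpy as np

# Placeholder intrinsics: focal length and principal point in pixels.
K = np.array([[4000.0, 0.0, 2816.0],
              [0.0, 4000.0, 1872.0],
              [0.0, 0.0, 1.0]])
# Placeholder radial (k1, k2, k3) and tangential (p1, p2) coefficients.
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])

img = cv2.imread("uav_frame.jpg")          # hypothetical input frame
undistorted = cv2.undistort(img, K, dist)  # remove lens distortion
cv2.imwrite("uav_frame_corrected.jpg", undistorted)
```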

3.3. Mapping method (KTLC) of land use information based on transfer learning

3.3.1. Calculation of improved mean shift segmentation and image/spectral features of objects

First of all, the preprocessed UAV images are divided into a texture domain and a homochromatic domain. The latter is segmented by applying the mean shift algorithm directly, while the former is segmented by the mean shift algorithm after an appropriate bandwidth is obtained from the normalized distribution density. Next, based on an established cost function, adjacent domains are merged to eliminate over-segmented domains. Refer to reference [23] for the improved mean shift segmentation algorithm adopted in this chapter. Afterwards, the vector boundaries of the segmented objects are matched and nested with the thematic land use maps of the previous time phase so that they share consistent spatial reference conditions, and invariant objects are identified on the current images through overlay analysis. After image segmentation, object features have to be calculated to support the subsequent classification work. In this chapter, the 18 spectral, shape, and textural features listed in Table 1 are calculated; a feature computation sketch follows the table.

| Feature | Category | Spectrum or space | Interpretation |
|---|---|---|---|
| R_Mean | Spectral | Spectrum | Mean value of the red band |
| G_Mean | Spectral | Spectrum | Mean value of the green band |
| B_Mean | Spectral | Spectrum | Mean value of the blue band |
| R_Dev | Spectral | Spectrum | Standard deviation of the red band |
| G_Dev | Spectral | Spectrum | Standard deviation of the green band |
| B_Dev | Spectral | Spectrum | Standard deviation of the blue band |
| L/W | Shape | Space | Length-width ratio |
| Geo_L | Shape | Space | Object length |
| Geo_W | Shape | Space | Object width |
| Border_L | Shape | Space | Side length of the object |
| Compact | Shape | Space | Compactness |
| Num_P | Shape | Space | Number of pixels |
| GLCM_H | Textural | Space | Homogeneity |
| GLCM_E | Textural | Space | Entropy |
| GLCM_C | Textural | Space | Contrast ratio |
| GLCM_V | Textural | Space | Variance |
| GLCM_D | Textural | Space | Heterogeneity |
| GLCM_A | Textural | Space | Angular second moment (ASM) |

Table 1.

Spatial and spectral features of objects.
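For illustration, the sketch below computes a subset of the Table 1 features for one segmented object with NumPy and scikit-image. Evaluating the GLCM on the object's bounding box is a simplification of how object-based software computes per-object texture; the function name and inputs are hypothetical.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def object_features(rgb, mask):
    """Subset of the Table 1 features for one object.

    rgb  -- H x W x 3 uint8 image
    mask -- H x W boolean array marking the object's pixels
    """
    feats = {}
    for i, band in enumerate("RGB"):
        values = rgb[..., i][mask]
        feats[f"{band}_Mean"] = float(values.mean())  # mean band brightness
        feats[f"{band}_Dev"] = float(values.std())    # band standard deviation
    feats["Num_P"] = int(mask.sum())                  # number of pixels

    # GLCM texture on the grey-scale bounding box of the object.
    ys, xs = np.nonzero(mask)
    patch = rgb[ys.min():ys.max() + 1,
                xs.min():xs.max() + 1].mean(axis=2).astype(np.uint8)
    glcm = graycomatrix(patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    feats["GLCM_H"] = float(graycoprops(glcm, "homogeneity")[0, 0])
    feats["GLCM_C"] = float(graycoprops(glcm, "contrast")[0, 0])
    feats["GLCM_D"] = float(graycoprops(glcm, "dissimilarity")[0, 0])
    feats["GLCM_A"] = float(graycoprops(glcm, "ASM")[0, 0])
    # Entropy is not built into graycoprops; compute it from the matrix.
    p = glcm[..., 0, 0]
    feats["GLCM_E"] = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return feats
```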

3.3.2. Purification of invariant object samples

It shall be noted that mistakes may be made when invariant objects on the current images are identified through overlay analysis, so rules shall be designed to weed out wrong invariant objects. In this chapter, invariant objects are purified based on spectral and spatial information. Specifically, an object is judged by the distance (difference value) between the mean brightness of its image elements and the center (mean) of the brightness values of the object samples, that is:

$$\left|R_{x_i}-M_{\mu_i}\right|\le 4\delta_i,\qquad \left|G_{x_i}-M_{\mu_i}\right|\le 4\delta_i,\qquad \left|B_{x_i}-M_{\mu_i}\right|\le 4\delta_i \tag{1}$$

where $R_{x_i}$, $G_{x_i}$, and $B_{x_i}$ are the object brightness in the red, green, and blue spectral bands, respectively; $M_{\mu_i}$ is the mean brightness of the object samples; and $\delta_i$ is the spectral standard deviation of the image elements in each object.

Considering the spatial information, a wrong object can also be judged by checking whether the spectral standard deviation of the image elements in each object exceeds the following limits:

$$\delta_i \le 0.2\,R_{b\max},\qquad \delta_i \le 0.2\,G_{b\max},\qquad \delta_i \le 0.2\,B_{b\max} \tag{2}$$

where $R_{b\max}$, $G_{b\max}$, and $B_{b\max}$ are the maximum image brightness in the red, green, and blue spectral bands, respectively. If a selected invariant object meets both Eqs. (1) and (2), it is a reliable invariant object; otherwise, it is unreliable and shall be weeded out.
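A minimal sketch of this purification test follows, assuming the per-band reading of Eqs. (1) and (2) given above; the array shapes and the helper name are illustrative.

```python
import numpy as np

def is_reliable(obj_rgb, sample_mean, band_max):
    """Check one invariant-object candidate against Eqs. (1) and (2).

    obj_rgb     -- N x 3 array of the object's pixel brightness (R, G, B)
    sample_mean -- length-3 mean brightness of the object samples (M_mu)
    band_max    -- (R_bmax, G_bmax, B_bmax) maximum brightness per band
    """
    mean = obj_rgb.mean(axis=0)   # object brightness per band
    dev = obj_rgb.std(axis=0)     # spectral standard deviation, delta_i
    eq1 = np.all(np.abs(mean - np.asarray(sample_mean)) <= 4 * dev)
    eq2 = np.all(dev <= 0.2 * np.asarray(band_max))
    return bool(eq1 and eq2)      # keep only objects satisfying both
```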

3.3.3. Associated-knowledge transfer learning and rapid classified mapping

After the object samples of the current target images are collected and purified, the best feature combination and classification model are selected for supervised classification based on the calculated image and spectral features. Many feature selection methods and classification models are available; to remain simple while considering efficiency, this chapter performs both feature selection and classification with the decision tree algorithm. Judgment rules are then established so as to complete classified mapping of the current images.
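The decision tree step can be sketched with scikit-learn as below. The tree depth and the placeholder arrays are assumptions, since the chapter does not report the exact tree configuration; in the real workflow the training data are the purified invariant objects with labels transferred from the 2014 thematic map.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 18))          # 18 features of Table 1 per object
y_train = rng.integers(0, 6, size=200)   # six land use categories

clf = DecisionTreeClassifier(max_depth=8, random_state=0)  # depth is a guess
clf.fit(X_train, y_train)

X_all = rng.random((1000, 18))   # features of all segmented objects
labels = clf.predict(X_all)      # classified mapping of the current image
```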

3.4. EC method

To verify the reliability of the proposed method, classified mapping of land use information is also carried out with the widely applied eCognition 8, and the results are compared with those of the KTLC method. Image segmentation is conducted first, and the segmented image objects are then classified. Given that the standard nearest neighbor classification method is simple, efficient, and extensively applied, it is adopted for classified mapping with the EC method in this chapter. The specific mapping procedure is as follows: first, select sample objects and carry out statistical analysis to obtain textural, spectral, and shape features, information on adjacent domains, etc., for the establishment of a multi-dimensional feature space; second, calculate the distance between each to-be-classified object and the samples; finally, determine which sample each to-be-classified object is closest to based on the feature distance relationship and membership function, and assign the object to the corresponding category.
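As a rough analogue (not eCognition's actual implementation), the standard nearest neighbor step can be sketched with a one-nearest-neighbor classifier in the object feature space; the membership function of the standard nearest neighbor method is omitted and all data are placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X_samples = rng.random((60, 18))          # features of the sample objects
y_samples = rng.integers(0, 6, size=60)   # classes of the sample objects

# Each to-be-classified object receives the class of the closest sample.
nn = KNeighborsClassifier(n_neighbors=1)
nn.fit(X_samples, y_samples)

X_objects = rng.random((1000, 18))
labels = nn.predict(X_objects)
```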


4. Results and analysis

4.1. Rapid classified mapping

Based on the principles in Section 3.3.1, the spectral and spatial scale parameters are taken as 7 and 10, respectively, for the improved mean shift segmentation. The segmentation results obtained with the improved mean shift algorithm are shown in Figure 3; a library-based approximation of this step is sketched after the figure.

Figure 3.

Segmentation results based on the improved mean shift method. (a) Segmentation result of experimental image 1. (b) Segmentation result of experimental image 2.
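The improved mean shift algorithm of reference [23] is not available in off-the-shelf libraries. As a rough stand-in, OpenCV's pyramid mean shift filtering can be run with a spatial radius of 10 and a spectral (color) radius of 7, mirroring the scale parameters above; connected regions of the filtered image would still have to be labeled to obtain discrete objects.

```python
import cv2

img = cv2.imread("uav_orthoimage.png")  # hypothetical input orthoimage
# sp: spatial window radius, sr: colour (spectral) window radius.
filtered = cv2.pyrMeanShiftFiltering(img, sp=10, sr=7)
cv2.imwrite("mean_shift_filtered.png", filtered)
```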

The vector boundaries of the segmented objects are matched and nested with the thematic land use maps drawn in June 2014. Invariant objects are identified through overlay analysis and purified based on spectral and spatial information to weed out wrong ones, and the categories of the invariant surface feature objects are transferred to the current target images through knowledge transfer learning. Finally, the image and spectral features of the invariant objects undergo feature selection with a decision tree, and classification rules are established for classified mapping of the land use information. Figure 4a and b shows the results of classified mapping with the KTLC method.

Figure 4.

Results of classification mapping based on the two methods. (a) Result of experimental image 1 based on the KTLC method. (b) Result of experimental image 2 based on the KTLC method. (c) Result of experimental image 1 based on the EC method. (d) Result of experimental image 2 based on the EC method.

In the experiment, the segmentation parameters are set as follows based on the principles in Section 3.4: segmentation scale, 90; color/shape, 0.5/0.5; and smoothness/compactness, 0.5/0.5. Figure 4c and d shows the results of classified mapping of land use information with the EC method.

4.2. Evaluation on precision and efficiency

In general, precision evaluation of classification results is divided into qualitative and quantitative evaluation. Qualitative evaluation mainly compares the consistency between the classified pattern spots and the actual surface features and is strongly subjective; quantitative evaluation mainly calculates the overall precision, the Kappa coefficient, and the like [24, 25]. Owing to the high spatial resolution of UAV images, verification data can be obtained directly through visual interpretation. To obtain more objective evaluation results, verification points are generated as follows: first, draw a 20 × 20 regular grid with an area equal to the image area in the experiment; then, generate 10 random points in each grid; finally, determine the land use type at each random verification point through visual interpretation. In total, 395 valid verification points are obtained for experimental image 1 and 382 for experimental image 2. These verification points are overlaid on the classification results to judge whether the classified mapping is correct. For experimental image 1, the overall classification precision of the KTLC method is 88.61% with a Kappa coefficient of 0.86, while that of the EC method is 89.87% with a Kappa coefficient of 0.87; Tables 2 and 3 show the detailed results. For experimental image 2, the overall classification precision of the KTLC method is 88.30% with a Kappa coefficient of 0.82, while that of the EC method is 84.84% with a Kappa coefficient of 0.79; Tables 4 and 5 show the detailed results. A sketch of how these measures can be computed follows Table 5.

| Class | Forest land | Cultivated land with crops | Cultivated land without crops | Road | Residential land | Other land | Water |
|---|---|---|---|---|---|---|---|
| Forest land | 43 | 12 | 1 | 0 | 2 | 0 | 0 |
| Cultivated land with crops | 2 | 106 | 0 | 0 | 0 | 0 | 0 |
| Cultivated land without crops | 0 | 4 | 36 | 0 | 1 | 2 | 0 |
| Road | 0 | 0 | 0 | 18 | 5 | 3 | 0 |
| Residential land | 0 | 0 | 0 | 2 | 82 | 3 | 0 |
| Other land | 2 | 0 | 5 | 1 | 0 | 53 | 0 |
| Water | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
| Producer's precision/% | 91.49 | 86.89 | 85.71 | 85.71 | 91.11 | 86.89 | 100 |
| User's precision/% | 74.14 | 98.15 | 83.72 | 69.23 | 94.25 | 86.89 | 100 |

Overall precision = 88.61%; Kappa coefficient = 0.86

Table 2.

Confusion matrix of accuracy for experimental image 1 (KTLC method).

| Class | Forest land | Cultivated land with crops | Cultivated land without crops | Road | Residential land | Other land | Water |
|---|---|---|---|---|---|---|---|
| Forest land | 46 | 6 | 0 | 0 | 0 | 0 | 0 |
| Cultivated land with crops | 3 | 101 | 0 | 0 | 0 | 0 | 0 |
| Cultivated land without crops | 0 | 4 | 30 | 0 | 0 | 4 | 0 |
| Road | 0 | 0 | 0 | 14 | 4 | 2 | 0 |
| Residential land | 1 | 0 | 1 | 3 | 90 | 4 | 0 |
| Other land | 1 | 0 | 6 | 0 | 0 | 63 | 1 |
| Water | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
| Producer's precision/% | 90.20 | 90.99 | 81.08 | 82.35 | 95.74 | 86.30 | 91.67 |
| User's precision/% | 88.46 | 97.12 | 78.95 | 70.00 | 90.91 | 88.73 | 100 |

Overall precision = 89.87%; Kappa coefficient = 0.87

Table 3.

Confusion matrix of accuracy for experimental image 1 (EC method).

| Class | Forest land | Cultivated land with crops | Cultivated land without crops | Road | Residential land | Other land | Water |
|---|---|---|---|---|---|---|---|
| Forest land | 137 | 11 | 0 | 0 | 1 | 0 | 0 |
| Cultivated land with crops | 15 | 82 | 2 | 0 | 0 | 0 | 0 |
| Cultivated land without crops | 0 | 4 | 59 | 0 | 0 | 2 | 0 |
| Road | 0 | 0 | 0 | 11 | 2 | 0 | 0 |
| Residential land | 2 | 0 | 0 | 2 | 15 | 1 | 0 |
| Other land | 1 | 0 | 6 | 0 | 1 | 12 | 0 |
| Water | 0 | 0 | 0 | 0 | 0 | 0 | 16 |
| Producer's precision/% | 88.39 | 84.54 | 88.06 | 84.62 | 78.95 | 80.00 | 100 |
| User's precision/% | 91.95 | 82.83 | 90.77 | 84.62 | 75.00 | 60.00 | 100 |

Overall precision = 88.30%; Kappa coefficient = 0.82

Table 4.

Confusion matrix of accuracy for experimental image 2 (KTLC method).

| Class | Forest land | Cultivated land with crops | Cultivated land without crops | Road | Residential land | Other land | Water |
|---|---|---|---|---|---|---|---|
| Forest land | 141 | 10 | 2 | 0 | 0 | 1 | 0 |
| Cultivated land with crops | 20 | 74 | 0 | 0 | 0 | 0 | 0 |
| Cultivated land without crops | 0 | 1 | 51 | 0 | 1 | 2 | 0 |
| Road | 0 | 0 | 0 | 8 | 4 | 1 | 0 |
| Residential land | 1 | 0 | 0 | 1 | 16 | 2 | 0 |
| Other land | 4 | 0 | 7 | 1 | 0 | 14 | 0 |
| Water | 0 | 0 | 0 | 0 | 0 | 0 | 15 |
| Producer's precision/% | 84.94 | 87.06 | 85.00 | 80.00 | 80.00 | 70.00 | 100 |
| User's precision/% | 91.56 | 78.72 | 92.73 | 61.54 | 80.00 | 53.85 | 100 |

Overall precision = 84.84%; Kappa coefficient = 0.79

Table 5.

Confusion matrix of accuracy for experimental image 2 (EC method).
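The quantitative measures reported above can be reproduced with scikit-learn as sketched below; the verification labels are placeholders for the visually interpreted points and the classes read from the classification results.

```python
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix)

y_true = ["forest", "crops", "crops", "water", "road"]  # visual interpretation
y_pred = ["forest", "crops", "road", "water", "road"]   # classified mapping

print(confusion_matrix(y_true, y_pred))   # basis of Tables 2-5
print(accuracy_score(y_true, y_pred))     # overall precision
print(cohen_kappa_score(y_true, y_pred))  # Kappa coefficient
```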

Experimental image 1 is a "complex building-cultivated land" hybrid image. Tables 2 and 3 show that both the KTLC method and the EC method separate building land, cultivated land with crops, and cultivated land without crops with high precision, indicating that these three types of land are highly separable on images of such high spatial resolution. With the KTLC method, the classification precision of forest land and roads is 74.14 and 69.23%, respectively, lower than with the EC method. However, no land type shows extremely low precision, and cultivated land with crops, building land, and water are classified with high precision (98.15, 94.25, and 100%, respectively). As a result, the overall precision of the KTLC method reaches 88.61%, comparable with the 89.87% of the EC method. Experimental image 2 is a "forest land-cultivated land" hybrid image. Tables 4 and 5 show that both methods make many errors in separating forest land from cultivated land with crops because of the high spectral and textural similarity of these two types of land; the classification precision could be improved if more reliable samples were used during transfer learning. The overall precision of the KTLC method is 88.30%, slightly higher than the 84.84% of the EC method, especially for forest land, cultivated land without crops, and water (91.95, 90.77, and 100%, respectively).

Besides, a review of the erroneous sample points in the classified mapping results reveals that some of them are located at the boundary between two different land types (because random sample points on a 20 × 20 regular grid are adopted in this chapter) and are therefore difficult to classify. If the errors caused by visual interpretation were eliminated, the precision of classified mapping would be higher than that listed in Tables 2-5.

Regarding the efficiency of classified mapping, taking the two groups of experimental data as examples, the time consumed by the KTLC method and the EC method on an Intel Core i7 2.4 GHz machine with 4 GB memory under Windows 7 is shown in Table 6. The KTLC method saves much time in obtaining object samples while maintaining high precision, so its efficiency is greatly improved compared with the EC method.

| Method | Time for experimental image 1 (h) | Time for experimental image 2 (h) |
|---|---|---|
| KTLC | 0.5 | 0.7 |
| EC | 1.2 | 1.3 |

Table 6.

Comparison of the efficiency of the two methods.


5. Conclusion

In this chapter, a method for rapid classified mapping of land use information on high-resolution remote sensing images is studied under the knowledge transfer mechanism. Compared with the extensively used classified mapping with the software eCognition, the KTLC method proposed in this chapter effectively combines machine learning, knowledge accumulation, and the agricultural remote sensing field. In addition to providing classification results comparable with those of the EC method, the KTLC method greatly improves efficiency and thus raises the automation level of classified mapping of land use information. Fully exploring the relationship between historical data and current data has great application prospects, and the studies in this chapter provide new ideas for the quick collection of land use information in key areas of agricultural remote sensing.


Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant no. 41701499), the Key Laboratory of Digital Mapping and Land Information Application of National Administration of Surveying, Mapping and Geoinformation of China (Grant no. DM2014SC02) and the Key Laboratory of Geo-special Information Technology, Ministry of Land and Resources of China (Grant no. KLGSIT2015-04). Finally, Lu H wants to thank, in particular, the invaluable support received from Fu X over the years.


Conflicts of interest

The authors declare no conflict of interest.

References

  1. Chunjiang Z. Advances of research and application in remote sensing for agriculture. Transactions of the Chinese Society for Agricultural Machinery. 2014;45(12):277-293 (in Chinese)
  2. Lei M, Liang C, Wenqi H, et al. Cultivated land information extraction from high-resolution unmanned aerial vehicle imagery data. Journal of Applied Remote Sensing. 2014;8(1):083673
  3. Zhou S, Zongzheng L, Yuanyuan Y, et al. Status and prospect of agricultural remote sensing. Transactions of the Chinese Society for Agricultural Machinery. 2015;46(2):247-260 (in Chinese)
  4. Yongqi H, Weihong C, Yangjian Z, et al. Research on development of agricultural geographic information ontology. Journal of Integrative Agriculture. 2012;11(5):865-877
  5. Rujing W. Bottleneck of agricultural informatization development in China and the thinking of coping strategies. Bulletin of the Chinese Academy of Sciences. 2013;28(3):337-343 (in Chinese)
  6. Qiong H, Wenbin W, Qian S, et al. Recent progresses in research of crop patterns mapping by using remote sensing. Scientia Agricultura Sinica. 2015;48(10):1900-1914 (in Chinese)
  7. Qiting H, Zelin Q, Zhikang Z. Study on the crop classification and planting area estimation at land parcel scale using multi-sources satellite data. International Journal of Geographical Information Science. 2016;18(5):708-717 (in Chinese)
  8. Limin W, Jia L, Lingbo Y, et al. Applications of unmanned aerial vehicle images on agricultural remote sensing monitoring. Transactions of the Chinese Society of Agricultural Engineering. 2013;29(18):136-145 (in Chinese)
  9. Heng L, Longguo L, Yi'nan H, et al. Method of UAV image mosaic based on weighted adjustment considering terrain feature. Transactions of the Chinese Society for Agricultural Machinery. 2015;46(9):296-301 (in Chinese)
  10. Heng L, Xiao F, Yi'nan H, et al. Cultivated land information extraction from high resolution UAV imagery based on transfer learning. Transactions of the Chinese Society for Agricultural Machinery. 2015;46(12):274-279, 284 (in Chinese)
  11. Bin L, Liangpei Z. Robust autodual morphological profiles for the classification of high-resolution satellite images. IEEE Transactions on Geoscience and Remote Sensing. 2014;52(2):1451-1462
  12. Baatz M, Schape A. Object-oriented and multi-scale image analysis in semantic networks. In: Proceedings of the 2nd International Symposium on Operationalization of Remote Sensing; Enschede, Netherlands; 1999. pp. 16-20
  13. Hay GJ, Castilla G. An automated object-based approach for the multiscale image segmentation of forest scenes. International Journal of Applied Earth Observation and Geoinformation. 2005;7(4):339-359
  14. Gitas IZ, Mitri GH, Ventura G. Object-based image classification for burned area mapping of Creus Cape, Spain, using NOAA-AVHRR imagery. Remote Sensing of Environment. 2004;92(3):409-413
  15. Yunhao C, Tong F, Peijun S, et al. Classification of remote sensing image based on object oriented and class rules. Geomatics and Information Science of Wuhan University. 2006;31(4):316-320
  16. Benz UC, Hofmann P, Willhauck G, et al. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS Journal of Photogrammetry and Remote Sensing. 2004;58(3-4):239-258
  17. Renaud M, Jagannath A. Object-based classification of Ikonos imagery for mapping large-scale vegetation communities in urban areas. Sensors. 2007;7(11):2860-2880
  18. Ruvimbo G, Philippe DM, Morgan DD. Object-oriented change detection for the city of Harare, Zimbabwe. Expert Systems with Applications. 2009;36(1):571-588
  19. Jialin P, Qiang Y. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering. 2010;22(10):1345-1359
  20. Sarinnapakorn K, Kubat M. Combining subclassifiers in text categorization: A DST-based solution and a case study. IEEE Transactions on Knowledge and Data Engineering. 2007;19(12):1638-1651
  21. Zhizhuo W. Photogrammetric Principles. Wuhan: Wuhan University Press; 2007
  22. Yongjun Z. Geometric processing of low altitude remote sensing images captured by unmanned airship. Geomatics and Information Science of Wuhan University. 2009;34(3):284-288 (in Chinese)
  23. Heng L, Chao L, Naiwen L, et al. Segmentation of high spatial resolution remote sensing images based on the improved mean shift algorithm. Journal of Mountain Science. 2015;12(3):671-681
  24. Shukui B, Lin D. The effect of the size of training sample on classification accuracy in object-oriented image analysis. Journal of Image and Graphics. 2010;15(7):1106-1111 (in Chinese)
  25. Foody GM. Status of land cover classification accuracy assessment. Remote Sensing of Environment. 2002;80(1):185-201
