
High-Resolution Satellite Imagery Classification for Urban Form Detection

Written By

Juan Manuel Núñez, Sandra Medina, Gerardo Ávila and Jorge Montejano

Submitted: 22 August 2018 Reviewed: 26 November 2018 Published: 15 February 2019

DOI: 10.5772/intechopen.82729

From the Edited Volume

Satellite Information Classification and Interpretation

Edited by Rustam B. Rustamov


Abstract

Mapping urban form at regional and local scales is a crucial task for discerning the influence of urban expansion upon the ecosystem and the surrounding environment. Remotely sensed imagery is ideally suited to monitor and detect urban areas that emerge frequently as a consequence of incessant urbanization. Converting satellite imagery into an urban form map with the existing methods of manual interpretation and parametric digital image classification is a lengthy process. In this work, classification techniques for high-resolution satellite imagery were used to map 50 selected cities of the National Urban System in Mexico during 2015–2016. To process the information, 140 RapidEye Ortho Tile multispectral satellite images with a pixel size of 5 m were downloaded and divided into 5 × 5 km tiles, generating 639 tiles. On each tile, classification methods were tested: artificial neural networks (ANN), support vector machines (SVM), decision trees (DT), and maximum likelihood (ML); after the tests, urban and nonurban categories were obtained. The result was validated with an accuracy assessment based on a stratified random sampling of 16 points per tile. It is expected that these results can be used in the construction of spatial metrics that explain the differences among Mexican urban areas.

Keywords

  • urban form
  • remote sensing
  • high-resolution satellite imagery
  • advanced classification methods
  • GIS integration

1. Introduction

Urbanization, as a process that manifests itself through the concentration of population in cities, is considered one of the most powerful and visible anthropogenic forces on the planet. Its influence is manifested in topics ranging from environmental changes at global, regional, and local scales [1, 2] and socioeconomic problems [3] to urban planning [4]. Thereby, several investigations use maps of urban areas to assess the influence of urbanization on natural and human environments and to estimate some important aspects of urbanization, such as its composition [5], size, scale, and form [6].

The urban form is the most visible result of the economic, social, cultural, and environmental driving forces of urban development [1]. Therefore, it is a spatial reflection of different processes across the evolution of a city and its characterization is a valuable source of information for urban planning. Ultimately, urban form is the result of the symbiotic interactions of infrastructures, people, and economic activities in a city that is constantly evolving in response to social, environmental, economic, and technological development [7].

In cities, urban form is materialized by the heterogeneous physical alignment and characteristics of buildings, streets, and open spaces at different levels of spatial resolution. This high heterogeneity of materials and urban objects in terms of size, form, and urban fabric morphology can be detected through the use of remote sensing imagery. This type of research provides very important information on urban issues related to planning, housing, health, transportation, and economic policies, especially for less documented regions in developing countries.

Most research efforts have focused on mapping urban landscapes at various scales and on the spatial resolution requirements of such mapping [8]. Different remote sensing techniques have already shown their value in mapping urban areas with different spatial, geometric, spectral, and temporal resolutions for different purposes. Therefore, the selection of an appropriate estimation method based on remotely sensed data characteristics is important.

A review of the traditional remote sensing literature suggests that major approaches include pixel-based image classification [9, 10], spectral indices [11, 12], object-oriented algorithms [13, 14], and machine learning such as artificial neural networks [15] and decision tree classification algorithms [16]. Techniques such as data/image fusion have also been explored [17]. Recent research has used high and very high spatial resolution remote sensing imagery to quantitatively describe the spatial structure of urban environments and characterize patterns of urban morphology [18].

Compared with traditional methods for mapping urban form, the remote sensing approach provides certain advantages due to its convenience, efficiency, and coverage [19]. For this reason, the detection of urban form and its derived attributes through different types of satellite images is attracting increasing interest [16, 20, 21, 22, 23].

The satellite imagery classification methods employed for urban form detection can be divided into two categories: supervised and unsupervised. The former usually produce more reliable results; nevertheless, they require additional processing steps for the construction of training data.

For the supervised methods, the classifiers based on support vector machines (SVM) are very popular due to their good performance and robustness [24, 25]. Additionally, methods based on artificial neural networks (ANN) are also widely used for the classification of urban areas [26]. For example, Dridi et al. [27] combine multiple SVMs for the mapping of urban extensions in the city of Batna, Algeria, and compare them with ANN to support the experimental analysis for monitoring the spatiotemporal phenomenon of urban sprawl. Other supervised classification methods, such as decision tree (DT), regression model (RM), and maximum likelihood (ML), can also provide plausible results in the mapping of urban areas [28].

In this work, we evaluated four supervised classification methods (SVM, ANN, DT, and ML) on Earth observation satellite images and integrated the results with a GIS approach to map urban form in 50 Mexican cities. The rest of this document is organized as follows: Section 2 briefly presents the context of the cities selected for the study and the dataset used; Section 3 describes the methodology and the proposed classification strategy for urban mapping, which includes the preprocessing of RapidEye images, the collection of training samples, the classification methods, the validation strategy, and the postprocessing GIS approach. The experimental results and their discussion are presented in Section 4. Finally, the conclusions of the work are given in Section 5.


2. Context

2.1 Study area

In Mexico, urbanization has been associated with increased prosperity and improvements in quality of life. Urban areas lead in expanding the coverage of basic and social services and also offer better access to other services and amenities, including health care and education. Moreover, Mexico’s growing middle class and declining inequality in recent decades seem to be definitely urban phenomena [29].

There have been important changes in the spatial form of Mexican cities over the past 30 years: most notably, urban growth is characterized as distant, dispersed, and disconnected. Between 1980 and 2010, the built-up area of Mexican cities expanded on average by a factor of seven, and the urbanized area of the 11 biggest metropolitan areas with more than 1 million inhabitants in 2010 grew by a factor of nine (SEDESOL 2012). This rapid spatial transformation of most Mexican cities presents important challenges for their potential to promote green and inclusive growth. To address these problems, different initiatives have made significant efforts to put in place measurement systems and to broaden information about urban dynamics.

An ambitious national initiative, the National Urban System (NUS) is a unified platform to support decision-making for urban and housing policies. The NUS, launched by Mexican federal agencies in 2012, exemplifies a significant effort to broaden information and understanding about urban dynamics and has been recognized as innovative among Latin American urban initiatives. This system is a reference to analyze spatial patterns of Mexican cities, their causes, and their impact and to provide an analytical basis to understand urban phenomenon.

The National Population Council (Consejo Nacional de Población, CONAPO) and the Secretariat of Social Development (Secretaria de Desarrollo Social, SEDESOL) put together the NUS on the basis of data from the Population and Housing Census (2010) with the objective of creating a system to support strategic planning and decision-making in urban areas and to provide all sectors (state governments, municipalities, academia, private sector, and general users) with integrated metropolitan and urban information on demographic and socioeconomic variables. The NUS comprises 384 cities with over 15,000 inhabitants each, out of which 59 are metropolitan areas, 78 conurbations (suburban centers), and 247 urban centers. About 81.2 million people or 72.3% of the country’s population live in these 384 cities.

The study area corresponds to a sample of 50 cities from the NUS that includes three types of cities, classified on the basis of the geographical delimitations defined by the NUS (Figure 1).

Figure 1.

Selected cities of study, National Urban System and classification of city types. Source: Own elaboration based on data from the Secretariat of Social Development (Secretaría de Desarrollo Social, SEDESOL).

These 50 urban areas include:

  1. 12 metropolitan areas defined as a group of municipalities that share a central city and are highly integrated with more than 250,000 residents: (1) Aguascalientes, (2) Monclova-Frontera, (3) Juárez, (4) San Francisco del Rincón, (5) Moroleón-Uriangato, (6) Tula, (7) Tehuacán, (8) Rioverde Ciudad Fernández, (9) Nuevo Laredo, (10) Coatzacoalcos, (11) Tianguistenco, and (12) Teziutlán.

  2. 16 urban conurbations that extend across more than one locality and have more than 15,000 residents: (13) Ensenada, (14) Campeche, (15) Manzanillo, (16) Tapachula de Córdova y Ordóñez, (17) Guanajuato, (18) Irapuato, (19) Chilpancingo de los Bravo, (20) Ciudad Lázaro Cárdenas, (21) Uruapan, (22) Zitácuaro, (23) San Juan Bautista Tuxtepec, (24) Chetumal, (25) Ciudad Obregón, (26) Cárdenas, (27) Túxpam de Rodríguez Cano, and (28) Fresnillo.

  3. 22 urban centers that have more than 15,000 residents and that do not extend beyond the boundaries of their locality: (29) La Paz, (30) Ciudad del Carmen, (31) Ciudad Acuña, (32) Comitán de Domínguez, (33) San Cristóbal de las Casas, (34) Cuauhtémoc, (35) Delicias, (36) Hidalgo del Parral, (37) Victoria de Durango, (38) Salamanca, (39) Iguala de la Independencia, (40) Ciudad Guzmán, (41) Lagos de Moreno, (42) Apatzingán, (43) San Juan del Río, (44) Ciudad Valles, (45) Los Mochis, (46) Culiacán Rosales, (47) Mazatlán, (48) Navojoa, (49) Heroica Nogales, and (50) Ciudad Victoria.

2.2 Materials

Urban areas were identified using the urban polygon layer of the geostatistical framework, version 5.0, of the National Institute of Statistics and Geography (Instituto Nacional de Estadística y Geografía, INEGI). Later, satellite images covering the 50 study cities were obtained for the binary classification between urban and nonurban areas; 140 RapidEye images from the period 2015–2016 were acquired through the Planet platform (www.planet.com).

The main characteristics of these images are: (a) spatial resolution of 5 m and covered area per image of 25 km2; (b) 5-band spectral resolution (blue 440–510 nm, green 520–590 nm, red 630–685 nm, red edge 690–730 nm, and near-infrared 760–850 nm); (c) 12-bit radiometric resolution; and (d) Universal Transverse Mercator (UTM) projection with the WGS84 horizontal datum.

Additionally, a digital elevation model (DEM) of the Mexican territory was downloaded to perform the radiometric and atmospheric corrections. Finally, for the collection of training samples, a Web Map Service (WMS) of a SPOT satellite images mosaic provided by the Mexico Reception Station (Estación de Recepción México, ERMEX) was used, at a resolution of 1.5 m in true color.


3. Methodology

The methodology is split into five main steps as follows: strategy for satellite imagery download and preprocessing, training and validation sample selection, classification methods, GIS integration, and results evaluation.

3.1 Strategy for satellite imagery download and preprocessing

In the first step, the entire Mexican territory was divided into nonoverlapping 5 × 5 km blocks in order to select the blocks covering the selected urban areas. A total of 639 blocks were selected to cover the 50 urban areas. Then, 140 RapidEye Ortho Tile multispectral scenes were downloaded through the Planet platform (www.planet.com) to cover all cities within the project. The satellite images were selected for the 2015–2016 period, seeking homogeneous acquisition dates and conditions of little or no cloudiness.
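This block selection step can be reproduced with standard open-source GIS tooling. The following is a minimal sketch, assuming a projected (metric) coordinate system and a hypothetical INEGI urban polygon file; the file names and layer contents are illustrative placeholders, not the project's actual workflow.

# Minimal sketch: build a nonoverlapping 5 x 5 km fishnet over a projected
# extent and keep only the blocks that intersect the selected urban polygons.
# File names are hypothetical placeholders.
import geopandas as gpd
from shapely.geometry import box

BLOCK = 5_000  # block size in metres (5 km); assumes a metric CRS such as UTM

urban = gpd.read_file("urban_polygons_inegi.shp")  # hypothetical INEGI urban layer
xmin, ymin, xmax, ymax = urban.total_bounds

# Generate the regular grid of 5 x 5 km blocks covering the urban extent
blocks = []
y = ymin
while y < ymax:
    x = xmin
    while x < xmax:
        blocks.append(box(x, y, x + BLOCK, y + BLOCK))
        x += BLOCK
    y += BLOCK
grid = gpd.GeoDataFrame(geometry=blocks, crs=urban.crs)

# Keep only the blocks that intersect at least one urban polygon
selected = grid[grid.intersects(urban.unary_union)]
selected.to_file("selected_blocks_5km.shp")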

Radiometric and atmospheric corrections were conducted to retrieve surface reflectance values by means of the atmospheric and topographic corrections software (ATCOR3) implemented in the ENVI virtual IDL machine [30]. Finally, mosaics by blocks were prepared for each of the 50 cities.

3.2 Training and validation sample selection

To obtain training and validation samples, the blocks generated in the previous stage were used to cover the satellite image mosaics corresponding to the selected urban areas. Training and validation data should be representative of the study area and of the classification scheme. Because urban is often a relatively rare class that covers only a small proportion of the landscape, spatial stratification with proportional class allocation (SpatialProp) was selected in order to obtain high user’s accuracy for the urban class [31].

In the SpatialProp strategy, the sample size is allocated to each class proportionally to its areal coverage in the reference set, with the constraint that each spatial stratum receives an equal total sample size. For example, if the urban and nonurban classes comprised 25 and 75% of the area of the entire region, respectively, the sample allocation in each spatial stratum would be 25% urban and 75% nonurban. Following Jin et al. [31], in each 5 × 5 km block, 16 random samples are assigned to the urban and nonurban strata with proportional allocation. For example, in our hypothetical situation, nonurban occupies 75% of the area and urban occupies 25%; given the total sample size of 16, 12 nonurban pixels and 4 urban pixels would be selected following the SpatialProp design, as illustrated in the sketch below.
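The following minimal sketch illustrates the SpatialProp allocation just described, assuming a per-block reference raster in which urban pixels are coded 1 and nonurban pixels 0; the function names and data are illustrative assumptions, not the study's actual implementation.

# Minimal sketch of the SpatialProp allocation: within each 5 x 5 km block
# (spatial stratum), 16 sample points are split between the urban and
# nonurban classes in proportion to their areal coverage.
import numpy as np

def allocate_samples(urban_fraction, total=16):
    """Return (n_urban, n_nonurban) for one block."""
    n_urban = int(round(total * urban_fraction))
    return n_urban, total - n_urban

def draw_samples(class_raster, n_urban, n_nonurban, rng=np.random.default_rng(0)):
    """Randomly pick pixel coordinates per class from a 2-D reference array
    where 1 = urban and 0 = nonurban (hypothetical labels)."""
    urban_idx = np.argwhere(class_raster == 1)
    nonurban_idx = np.argwhere(class_raster == 0)
    pick = lambda idx, n: idx[rng.choice(len(idx), size=min(n, len(idx)), replace=False)]
    return pick(urban_idx, n_urban), pick(nonurban_idx, n_nonurban)

# Example from the text: 25% urban, 75% nonurban -> 4 urban and 12 nonurban points
print(allocate_samples(0.25))   # (4, 12)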

For the 639 blocks employed for the 50 selected urban areas, 20,448 sampling and validation points were assigned. Later, each data point was verified against the corresponding category based on the RapidEye mosaic and the Web Map Service (WMS) of a SPOT image.

3.3 Classification methods

Machine-learning classification has become a major focus of the remote-sensing literature since it is generally able to model complex class signatures without making assumptions about the data distribution, i.e., it is nonparametric [25]. A wide range of studies have generally found that these methods tend to produce higher accuracy compared to traditional parametric classifiers, especially for complex data with a high-dimensional feature space [32, 33].

However, the parametric maximum likelihood (ML) classifier is the most commonly used remote-sensing classification method [34]. In this work, we evaluate artificial neural network (ANN), support vector machine (SVM), decision tree (DT), and maximum likelihood (ML) classifiers for each city. For each of these classifiers, accuracy can be measured by means of an error matrix. A brief description of each method follows.

3.3.1 Artificial neural networks (ANN)

An artificial neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and can make it available for use [35]. The model is formed by artificial neurons that emulate biological neurons and the synaptic connections among them, whose weights are adjusted during the problem-solving process [36].

The network needs to be “trained” with a sufficiently large number of examples in order to be able to make the appropriate inferences. The procedure of training involves groups of input data together with the expected output data. Once the system of neurons has been trained, the network allows the processing of imprecise information, the generalization of known responses to new situations, and the prediction of outcomes. They are appropriate models for dealing with a large set of variables and their nonlinearity is convenient for the assessment of complex systems [37].

The links with the neurons located in the so-called hidden layer then take on different weights and are trained according to the required output; thus, they can model complex relationships among variables. The system requires feedforward and backpropagation processes to train the network [38]. This stage is monitored through error analysis: if the error becomes smaller and asymptotic, the network is ready to receive new input data and to predict an output [37].

The ANN models used in this study are of the multilayer perceptron type, a model in which neurons are fully connected to those in adjacent layers, while neurons within the same layer are not connected to each other [39, 40]. There are three types of layers in a typical multilayer perceptron network: input layer, hidden layer, and output layer. This architecture is shown in Figure 2. In each case, the training of the proposed network was performed with a backpropagation algorithm, which is a supervised learning procedure [41].

Figure 2.

Artificial neural networks classifier. Source: adapted from [39].

The main remote sensing data analysis tasks in which the application of standard backpropagation ANNs for supervised learning has been reported are classification, most commonly land cover classification [42, 43], unmixing [44, 45], and retrieval of biophysical parameters of land cover [46]. Other applications of ANNs are also reported in change detection, data fusion, forecasting, preprocessing, georeferencing, and object recognition.
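As an illustration, a multilayer perceptron of this kind can be trained for the binary urban/nonurban problem with scikit-learn. The sketch below uses placeholder reflectance values and a hypothetical hidden-layer size; it is not the exact configuration used in this study.

# Minimal sketch of a backpropagation-trained multilayer perceptron for the
# binary urban/nonurban problem. X holds per-pixel 5-band reflectances and
# the labels are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.random((1000, 5))          # placeholder training spectra
y_train = rng.integers(0, 2, 1000)       # 1 = urban, 0 = nonurban (placeholder)

ann = make_pipeline(
    StandardScaler(),                    # scaling helps backpropagation converge
    MLPClassifier(hidden_layer_sizes=(16,), activation="logistic",
                  solver="adam", max_iter=1000, random_state=0),
)
ann.fit(X_train, y_train)

# Classify a whole image: reshape (rows, cols, bands) -> (n_pixels, bands) and back
image = rng.random((100, 100, 5))
urban_map = ann.predict(image.reshape(-1, 5)).reshape(100, 100)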

3.3.2 Support vector machines (SVMs)

Support vector machines are a supervised nonparametric statistical learning technique that makes no assumption about the underlying data distribution [47]. Initially, the method is presented with a set of labeled data instances, and the SVM training algorithm aims to find a hyperplane that separates the dataset into a discrete predefined number of classes in a fashion consistent with the training examples [48]. The term optimal separating hyperplane refers to the decision boundary, obtained in the training step, that minimizes misclassifications, and learning refers to the iterative process of finding a classifier with an optimal decision boundary that separates the training patterns (in a potentially high-dimensional space) and can then separate new data under the same configuration (dimensions) [49].

In its simplest form, SVMs are linear binary classifiers that assign a given test sample a class from one of two possible labels [47]. Figure 3 illustrates a simple scenario: a two-class separable classification problem in a two-dimensional input space, where the subset of points that lie on the margin (called support vectors) is the only one that defines the hyperplane of maximum margin.

Figure 3.

Linear support vector machine classifier. Source: adapted from [47].

An important generalization aspect of SVMs is that frequently not all the available training examples are used in the description and specification of the separating hyperplane; only the support vectors on the margin define it. If the two classes are not linearly separable, the SVM tries to find the hyperplane that maximizes the margin while, at the same time, minimizing a quantity proportional to the number of misclassification errors [50]. The tradeoff between margin and misclassification error is controlled by a user-defined constant [51]. SVMs can also be extended to handle nonlinear decision surfaces. Boser et al. [52] propose a method of projecting the input data onto a high-dimensional feature space using kernel functions and formulating a linear classification problem in that feature space [53].

In the case of nonlinear classification, SVMs can perform the classification by using various types of kernels, which turn nonlinear boundaries into linear ones in a high-dimensional space in order to define the optimal hyperplane [54]. In this study, four types of kernels (linear, polynomial, radial basis function, and sigmoid) were used for the SVM classification.
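A minimal sketch of such a kernel comparison with scikit-learn is shown below; the training spectra, labels, and the regularization constant C are illustrative assumptions rather than the settings used in this study.

# Minimal sketch comparing the four SVM kernels used in the study
# (linear, polynomial, RBF, sigmoid) on hypothetical training spectra.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((1000, 5))               # placeholder 5-band reflectances
y = rng.integers(0, 2, 1000)            # 1 = urban, 0 = nonurban (placeholder)

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel:8s} mean CV accuracy: {acc:.3f}")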

3.3.3 Decision tree (DT)

A decision tree is a flowchart-like tree structure, defined as a classification procedure that recursively partitions a dataset into smaller subdivisions on the basis of a set of tests defined at each branch (or node) in the tree [55]. Figure 4 illustrates a tree composed of a root node (formed from all of the data), a set of internal nodes (splits), and a set of terminal nodes (leaves). Each circle is a node at which tests (T) are applied recursively in order to split the data into smaller groups. The labels (A, B, C) at each leaf node refer to the class label assigned to each observation.

Figure 4.

Decision tree classifier. Source: adapted from [55].

In this framework, a DT classifier performs multistage classifications by using a series of binary decisions to place pixels into classes. Each decision divides the pixels in a set of images into two classes based on an expression. It is possible to divide each new class into two more classes based on another expression and to define as many decision nodes as needed. Decision trees have significant intuitive appeal because the classification structure is explicit and therefore easily interpretable, since the results of the decisions are always classes. Furthermore, it is possible to use data from many different sources and files together to build a single DT classifier.

The construction of a decision tree classifier does not require any domain knowledge for parameter setting and is therefore appropriate for satellite imagery classification [56]. The learning and classification steps of decision tree induction are simple and fast. In general, decision tree classifiers have good accuracy. Decision tree induction algorithms have been used for classification in many application areas, including remote sensing [57]. Decision trees have several advantages over traditional supervised classification procedures used in remote sensing, such as ISODATA clustering and maximum likelihood classifier algorithms [58]. In particular, decision trees are strictly nonparametric and do not require assumptions regarding the distributions of the input data. In addition, they handle nonlinear relations between features and classes, they can handle missing values, and they are capable of handling both numeric and categorical inputs in a natural manner [55].
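The sketch below illustrates such a tree for the urban/nonurban problem and prints its explicit decision rules, which is the interpretability advantage noted above; the data, tree depth, and band names are illustrative assumptions.

# Minimal sketch of a binary decision tree classifier for urban/nonurban
# pixels; the learned split rules can be printed and inspected. Data are
# placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((1000, 5))                     # placeholder 5-band reflectances
y = rng.integers(0, 2, 1000)                  # 1 = urban, 0 = nonurban

dt = DecisionTreeClassifier(max_depth=4, random_state=0)   # shallow, readable tree
dt.fit(X, y)

# Explicit, human-readable decision rules (one test per internal node)
print(export_text(dt, feature_names=["blue", "green", "red", "red_edge", "nir"]))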

3.3.4 Maximum likelihood (ML)

Among the classic remote sensing image classification techniques, the maximum likelihood (ML) classifier, widely implemented in commercial image-processing software packages, is the method most frequently used for pixel-wise classification [34]. The ML classifier assumes that the statistics for each class in each band are normally distributed and calculates the probability that a given pixel belongs to a specific class. Unless a probability threshold is specified, all pixels are classified. Each pixel is assigned to the class that has the highest probability, that is, the maximum likelihood [41].

Statistical techniques such as ML estimation usually assume that the data distribution is known a priori [59]. The ML algorithm in remote sensing classification is parametric: each class is represented by a Gaussian probability density function, completely described by its mean vector and variance–covariance matrix computed from all available spectral bands and, if possible, ancillary information (Figure 5). The maximum likelihood classifier is based on an estimated probability density function for each of the reference classes under consideration, where the class statistics are obtained from the training data. Given these parameters, it is possible to compute the statistical likelihood of a pixel vector as a member of each spectral class [60].

Figure 5.

Maximum likelihood classifier. Source: adapted from [59].

The maximum likelihood classifier is simple and robust enough to accommodate modifications. With the advent of commercial high and very high spatial resolution sensor data, the ML classifier is appropriate for many urban applications [61]. Data from this new generation of sensors are high in volume and capture large spectral variations in urban land cover; in the absence of classifiers designed specifically for such data, the simplicity of the maximum likelihood classifier, together with the modifications outlined in [62], allows it to accommodate large datasets.
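A minimal sketch of this Gaussian maximum likelihood rule, written with NumPy and SciPy rather than a commercial package, is given below; the training spectra and labels are placeholders.

# Minimal sketch of a Gaussian maximum likelihood classifier: each class is
# described by its mean vector and variance-covariance matrix estimated from
# training pixels, and each pixel is assigned to the class with the highest
# likelihood. Data are placeholders.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
X_train = rng.random((1000, 5))               # placeholder 5-band spectra
y_train = rng.integers(0, 2, 1000)            # 1 = urban, 0 = nonurban

# Per-class Gaussian parameters estimated from the training data
models = {}
for c in np.unique(y_train):
    Xc = X_train[y_train == c]
    models[c] = multivariate_normal(mean=Xc.mean(axis=0), cov=np.cov(Xc, rowvar=False))

def ml_classify(pixels):
    """Assign each pixel (n, 5) to the class with the maximum likelihood."""
    log_lik = np.column_stack([models[c].logpdf(pixels) for c in sorted(models)])
    return np.array(sorted(models))[log_lik.argmax(axis=1)]

labels = ml_classify(rng.random((10, 5)))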

3.4 Validation strategy

In this step, the overall classification accuracies were determined from the error matrix by calculating the total percentage of pixels correctly classified for each classification method: (i) artificial neural networks (ANN); (ii) support vector machines (SVM) with linear, polynomial, radial basis function (RBF), and sigmoid kernels; (iii) decision tree (DT); and (iv) maximum likelihood (ML). Since this assessment takes only the diagonal of the matrix into account, the Kappa coefficient, which is based on all the elements of the confusion matrix, was also calculated [63]. The overall accuracy and kappa values were determined using test datasets obtained with the SpatialProp strategy for training and validation samples described in Section 3.2.
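Both measures can be computed directly from the error matrix, as the minimal sketch below illustrates with placeholder reference and predicted labels.

# Minimal sketch of the accuracy measures used: overall accuracy is the sum of
# the error-matrix diagonal divided by the total number of samples, while the
# Kappa coefficient uses all elements of the matrix. Labels are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

y_ref  = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0])   # validation (reference) labels
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 0])   # classifier output

cm = confusion_matrix(y_ref, y_pred)                 # 2 x 2 error matrix
overall_accuracy = np.trace(cm) / cm.sum()           # diagonal / total
kappa = cohen_kappa_score(y_ref, y_pred)

print(cm)
print(f"overall accuracy = {overall_accuracy:.3f}, kappa = {kappa:.3f}")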

With the advent of more advanced digital satellite remote sensing techniques, the necessity of performing an accuracy assessment has received renewed interest [64]. Accuracy assessment, or validation, is an important step in the processing of remote sensing data, and the geographic information systems and remote sensing communities are becoming increasingly interested in accuracy topics as technological developments in data processing offer more and more possibilities. In this work, the samples collected from a Web Map Service (WMS) of a SPOT satellite image mosaic at a resolution of 1.5 m in true color were used. The data collected by this method are comparable to the field data usually employed to assess the accuracy of such remote sensing products.

3.5 GIS integration

The different classifiers implemented in this work, the nonparametric artificial neural network, decision tree, and support vector machines, and the traditional parametric maximum likelihood classifier, have their own strengths and limitations. For example, when sufficient training samples are available and the land cover features in a dataset are normally distributed, a maximum likelihood classifier may yield an accurate classification result. In contrast, when the image data deviate from a normal distribution, neural network and decision tree classifiers may produce a better classification result [65, 66]. In other cases, machine-learning approaches provide better classification results than ML, although tradeoffs exist among classification accuracy, time consumption, and computing resources [67].

Previous research has indicated that the integration of two or more classifiers provides improved classification accuracy compared to the use of a single classifier [67, 68, 69]. A critical step is to develop suitable rules to combine the classification results from different classifiers. Some previous research has explored different techniques, such as a production rule, a sum rule, stacked regression methods, majority voting, and thresholds, to combine multiple classification results [69, 70].

In this step, we have employed a GIS approach to integrate the results of the ANN, SVM, DT, and ML classifiers to produce a better final map of urban form. Different urban mapping hybrid approaches have already been combined to achieve better results [71, 72]. In our approach, the matching results of two or more of the evaluated methods are combined, through an overlay (superposition) function, with the results of the best-evaluated method. Subsequently, through an attribute selection, the pixels of the urban and nonurban classes identified as the best results of the combination are extracted within a GIS environment. The resulting map was validated again, revealing that the most likely characteristics of urban and nonurban uses were present in the combined pixels. This GIS integration approach has improved the results of the urban area classification for the selected cities of study. We suggest that this integration approach can be economically and immediately implemented in a standard GIS software package to produce urban form maps with higher accuracy from high spatial resolution satellite images for the Mexican National Urban System.
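A minimal sketch of one way to express this integration as a raster overlay is given below. It interprets the combination as agreement (majority) voting among the four label maps, falling back to the best single classifier (ANN in this study) where the classifiers tie; this voting interpretation is an assumption for illustration, not the exact GIS procedure used.

# Minimal sketch of an agreement-based overlay of four classifier outputs
# (1 = urban, 0 = nonurban). Where a clear majority of classifiers agree, that
# label is kept; where they tie, the best single classifier's label is used.
# Arrays are placeholder label maps.
import numpy as np

rng = np.random.default_rng(0)
ann = rng.integers(0, 2, (100, 100))      # best single classifier in this study
svm = rng.integers(0, 2, (100, 100))
dt  = rng.integers(0, 2, (100, 100))
ml  = rng.integers(0, 2, (100, 100))

stack = np.stack([ann, svm, dt, ml])      # (n_classifiers, rows, cols)
votes = stack.sum(axis=0)                 # classifiers labelling each pixel urban

combined = np.where(votes >= 3, 1,        # clear majority urban
            np.where(votes <= 1, 0,       # clear majority nonurban
                     ann))                # tie (2 vs 2): fall back to best classifier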


4. Results and discussion

In this study, four different supervised classification methods were integrated to map the urban form of 50 selected cities of study in the National Urban System in Mexico: the maximum likelihood classifier, which is a conventional classification method, and the advanced classification methods of artificial neural networks, decision tree, and support vector machines with linear, polynomial, RBF, and sigmoid kernels. We found that the artificial neural network classifier (overall accuracy of 92.2%) turned out to be the best single classification method. Support vector machines (overall accuracy of 89.8%) and maximum likelihood (overall accuracy of 89.2%) had similar results, while the decision tree classifier (overall accuracy of 87.8%) showed the lowest accuracy. The results were evaluated by the overall accuracy, which is computed by dividing the total number of correct pixels (i.e., the sum of the major diagonal) by the total number of pixels in the error matrix. Overall accuracy for the ANN, DT, selected SVM models, and ML classifiers is summarized in Figure 6.

Figure 6.

Overall accuracy for ANN, DT, selected SVM models, and ML classifiers. Source: own elaboration.

After integrating the results obtained per city with the GIS approach, an important finding is that each evaluated method produces a result that affects the spatial extent of the urban form. The GIS approach showed an overall accuracy above the average of the global reliabilities for the 50 selected cities of study: the average reliability for the methods evaluated across all the cities was 89.8%, whereas with the GIS approach this average reached 91.2%, and the accuracy was higher in 38 of the 50 cities evaluated. The approach used in this work has shown good results, although all the classifiers showed very small differences in the spatial extent (within ±4%) of the urban class. The results for the 50 selected cities of study are shown in Figure 7: Figure 7a shows the metropolitan areas, Figure 7b the urban conurbations, and Figure 7c the urban centers.

Figure 7.

(a) Metropolitan areas. Source: own elaboration. (b) Urban conurbations. Source: own elaboration. (c-1) Urban centers 29–38. Source: own elaboration. (c-2) Urban centers 40–50. Source: own elaboration.


5. Conclusions

Information about urban form mapping is essential for proper planning and for examining how recent urban growth has affected the economic performance and livability of cities. This methodological approach offers spatially explicit inputs for adjusting urban policy frameworks and instruments in ways that support sustainable spatial development and make cities more productive and inclusive.

In this work, different advanced classification methods have been tested for high-resolution satellite imagery classification for urban form detection. The SVM method proved well suited to two-class classification problems; its major advantage is that it requires fewer parameters to become operational while reaching high accuracy rates. The employed methodology shows great potential for urban form mapping, which could help urban planners to understand and interpret complex urban characteristics with greater precision, an area where problems are often cited for satellite-based remotely sensed imagery [73].

Furthermore, the proposed approach for integrating results within a GIS environment provides a robust framework for addressing integrated classification problems in the field of remote sensing. It yields better results because each of the integrated classification methods contributes the best of its results to the benefit of a more accurate urban form classification.

Therefore, we believe this proposed approach has great practical value for several remote sensing problems and could be improved and applied to various urban applications in the near future. In this respect, the integration approach can be strengthened through the implementation of learning methods to manage the integration of the data and thus obtain more reliable results. Finally, we are also interested in further analyzing the morphological characteristics of urban form through the application of metrics that take the results obtained in this work as their primary input.


Acknowledgments

The authors thank the anonymous reviewers for their comments and suggestions. We also thank the financial support granted by the Fondo Sectorial INEGI-CONACYT (278953-S0025-2016-1) project. Throughout the project we had the technical assistance of the Centro de Investigación en Ciencias de Información Geoespacial. For the technical support, we thank Sandra Medina and Gerardo Ávila, and we especially thank Gabriela Quiroz for the map making and visual design.

References

  1. 1. Seto KC, Satterthwaite D. Interactions between urbanization and global environmental change. Current Opinion in Environment Sustainability. 2010;2(3):127-128
  2. 2. Núñez JM, Corona N, Ocampo P, Mohar A. Conectando el frente de agua marítimo de la zona costera norte de Yucatán con la zona metropolitana de Mérida. In: Iracheta A, Pedrotti C, Patricia R, editors. Suelo Urbano y Frentes de Agua: Debates y Propuestas en Iberoamérica. México: El Colegio Mexiquense, A.C.; 2017
  3. 3. Caudillo C, Flores S. Tendencias espacio-temporales en la segregación. In: Tendencias territoriales determinantes del futuro de la Ciudad de México. México: Consejo Económico y Social de la Ciudad de México/Consejo Nacional de Ciencia y Tecnología/CentroGeo; 2016. pp. 153-175
  4. 4. Mohar A. Tendencias territoriales determinantes del futuro de la Ciudad de México. Consejo Económico y Social de la Ciudad de México; 2016
  5. 5. Núñez JM. Mapeo de la composición urbana, contraste entre dispersión y formas compactas en el sur de la Ciudad de México. In: Rothe HQ, editor. Ciudad Compacta: Del concepto a la práctica. Ciudad de México: Universidad Nacional Autónoma de México; 2015
  6. 6. Batty M. The size, scale, and shape of cities. Science. 2008;319(5864):769-771
  7. 7. Besussi E, Chin N, Batty M, Longley P. The structure and form of urban settlements. In: Remote Sensing of Urban and Suburban Areas. Berlin, Heidelberg, New York: Springer-Verlag; 2010. pp. 13-31
  8. 8. Weng QH. Remote sensing of impervious surfaces in the urban areas: Requirements, methods, and trends. Remote Sensing of Environment. 2012;117:34-49
  9. 9. Guindon B, Zhang Y, Dillabaugh C. Landsat urban mapping based on a combined spectral-spatial methodology. Remote Sensing of Environment. 2004;92(2):218-232
  10. 10. Schneider A, Friedl MA, Potere D. A new map of global urban extent from MODIS satellite data. Environmental Research Letters. 2009;4(4):11
  11. 11. Ridd MK. Exploring a V-I-S (vegetation-impervious surface-soil) model for urban ecosystem analysis through remote sensing: Comparative anatomy for cities. International Journal of Remote Sensing. 1995;16(12):2165-2185
  12. 12. Deng CB, Wu CS. BCI: A biophysical composition index for remote sensing of urban environments. Remote Sensing of Environment. 2012;127:247-259
  13. 13. Bhaskaran S, Paramananda S, Ramnarayan M. Per-pixel and object-oriented classification methods for mapping urban features using Ikonos satellite data. Applied Geography. 2010;30(4):650-665
  14. 14. Zhou W, Troy A. An object-oriented approach for analysing and characterizing urban landscape at the parcel level. International Journal of Remote Sensing. 2008;29(11):3119-3135
  15. 15. Zhang J, Foody GM. Fully-fuzzy supervised classification of sub-urban land cover from remotely sensed imagery: Statistical and artificial neural network approaches. International Journal of Remote Sensing. 2001;22(4):615-628
  16. 16. Schneider A, Friedl MA, Potere D. Mapping global urban areas using MODIS 500-m data: New methods and datasets based on ‘urban ecoregions’. Remote Sensing of Environment. 2010;114(8):1733-1746
  17. 17. Byun Y, Choi J, Han Y. An area-based image fusion scheme for the integration of SAR and optical satellite imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2013;6(5):2212-2220
  18. 18. Puissant A, Zhang W, Skupinski G, editors. Urban morphology analysis by high and very high spatial resolution remote sensing. In: International Conference on Geographic Object-Based Image Analysis. 2012
  19. 19. Duan YL, Shao XW, Shi Y, Miyazaki H, Iwao K, Shibasaki R. Unsupervised global urban area mapping via automatic labeling from ASTER and PALSAR satellite images. Remote Sensing. 2015;7(2):2171-2192
  20. 20. Bartholome E, Belward AS. GLC2000: A new approach to global land cover mapping from earth observation data. International Journal of Remote Sensing. 2005;26(9):1959-1977
  21. 21. Gao F, De Colstoun EB, Ma RH, Weng QH, Masek JG, Chen J, et al. Mapping impervious surface expansion using medium-resolution satellite image time series: A case study in the Yangtze River Delta, China. International Journal of Remote Sensing. 2012;33(24):7609-7628
  22. 22. Esch T, Marconcini M, Marmanis D, Zeidler J, Elsayed S, Metz A, et al. Dimensioning urbanization—An advanced procedure for characterizing human settlement properties and patterns using spatial network analysis. Applied Geography. 2014;55:212-228
  23. 23. Sandoval H, Núñez JM. Cuantificación de la composición biofísica de los ambientes urbanos de la ciudad de Mérida, Yucatán basada en el análisis de imágenes Landsat TM/ETM+/OLI (1986-2014). In: LCA C, LCB P, LCW Q, MET O, MIU C, MOG L, editors. Estudios Territoriales en México: Percepción Remota y Sistemas de Información Espacial. México: Universidad Autónoma de Ciudad Juárez; 2016
  24. 24. Xian GZ. Remote Sensing Applications for the Urban Environment. Boca Raton, FL: CRC Press; 2015
  25. 25. Maxwell AE, Warner TA, Fang F. Implementation of machine-learning classification in remote sensing: An applied review. International Journal of Remote Sensing. 2018;39(9):2784-2817
  26. 26. Mas JF, Flores JJ. The application of artificial neural networks to the analysis of remotely sensed data. International Journal of Remote Sensing. 2008;29(3):617-663
  27. 27. Dridi H, Bendib A, Kalla M. Analysis of urban sprawl phenomenon in Batna city (Algeria) by remote sensing technique. Analele Universităţii din Oradea, Seria Geografie. 2015;2:211-220
  28. 28. Bhatta B. Analysis of Urban Growth and Sprawl from Remote Sensing Data. Berlin, Heidelberg, New York: Springer-Verlag; 2010. 170 p
  29. 29. Ferreira FH, Messina J, Rigolini J, López-Calva L-F, Lugo MA, Vakis R. Economic Mobility and the Rise of the Latin American Middle Class. Washington, DC: The World Bank; 2012
  30. 30. Richter R, Schlapfer D. Geo-atmospheric processing of airborne imaging spectrometry data. Part 2: Atmospheric/topographic correction. International Journal of Remote Sensing. 2002;23(13):2631-2649
  31. 31. Jin HR, Stehman SV, Mountrakis G. Assessing the impact of training sample selection on accuracy of an urban classification: A case study in Denver, Colorado. International Journal of Remote Sensing. 2014;35(6):2067-2081
  32. 32. Ghimire B, Rogan J, Galiano VR, Panday P, Neeti N. An evaluation of bagging, boosting, and random forests for land-cover classification in Cape Cod, Massachusetts, USA. GIScience & Remote Sensing. 2012;49(5):623-643
  33. 33. Homer C, Huang CQ, Yang LM, Wylie B, Coan M. Development of a 2001 National Land-Cover Database for the United States. Photogrammetric Engineering and Remote Sensing. 2004;70(7):829-840
  34. 34. Yu L, Liang L, Wang J, Zhao Y, Cheng Q, Hu L, et al. Meta-discoveries from a synthesis of satellite-based land-cover mapping research. International Journal of Remote Sensing. 2014;35(13):4573-4588
  35. 35. Haykin S. Neural Networks: A Comprehensive Foundation. India: Prentice Hall PTR; 1994
  36. 36. Atkinson PM, Tatnall A. Introduction neural networks in remote sensing. International Journal of Remote Sensing. 1997;18(4):699-709
  37. 37. Canziani G, Ferrati R, Marinelli C, Dukatz F. Artificial neural networks and remote sensing in the analysis of the highly variable Pampean shallow lakes. Mathematical Biosciences and Engineering. 2008;5(4):691-711
  38. 38. Basheer IA, Hajmeer M. Artificial neural networks: Fundamentals, computing, design, and application. Journal of Microbiological Methods. 2000;43(1):3-31
  39. 39. Minsky M, Papert S. Perceptron Expanded Edition. Cambridge, MA: MIT Press; 1969
  40. 40. Rosenblatt F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, DC: Spartan; 1962
  41. 41. Richards JA, Richards J. Remote Sensing Digital Image Analysis. Berlin, Heidelberg, New York: Springer-Verlag; 1999
  42. 42. Fuller DO. Remote detection of invasive Melaleuca trees (Melaleuca quinquenervia) in South Florida with multispectral IKONOS imagery. International Journal of Remote Sensing. 2005;26(5):1057-1063
  43. 43. Augusteijn MF, Folkert BA. Neural network classification and novelty detection. International Journal of Remote Sensing. 2002;23(14):2891-2902
  44. 44. Liu WG, Wu EY. Comparison of non-linear mixture models: Sub-pixel classification. Remote Sensing of Environment. 2005;94(2):145-154
  45. 45. Mertens KC, Verbeke LPC, Westra T, De Wulf RR. Sub-pixel mapping and sub-pixel sharpening using neural network predicted wavelet coefficients. Remote Sensing of Environment. 2004;91(2):225-236
  46. 46. Lafont D, Guillemet B. Beam-filling effect correction with subpixel cloud fraction using a neural network. IEEE Transactions on Geoscience and Remote Sensing. 2005;43(5):1070-1077
  47. 47. Mountrakis G, Im J, Ogole C. Support vector machines in remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing. 2011;66(3):247-259
  48. 48. Vapnik V. Estimation of Dependences Based on Empirical Data. USA: Springer Science & Business Media; 2006
  49. 49. Zhu GB, Blumberg DG. Classification using ASTER data and SVM algorithms; the case study of Beer Sheva, Israel. Remote Sensing of Environment. 2002;80(2):233-240
  50. 50. Pal M, Mather PM. Support vector machines for classification in remote sensing. International Journal of Remote Sensing. 2005;26(5):1007-1011
  51. 51. Cortes C, Vapnik V. Support-vector networks. Machine Learning. 1995;20(3):273-297
  52. 52. Boser BE, Guyon IM, Vapnik VN, editors. A training algorithm for optimal margin classifiers. In: Proceedings of the Fifth Annual Workshop on Computational Learning Theory. ACM; 1992
  53. 53. Vapnik V. The Nature of Statistical Learning Theory. New York: Springer Science & Business Media; 2013
  54. 54. Ustuner M, Sanli FB, Dixon B. Application of support vector machines for land use classification using high-resolution RapidEye images: A sensitivity analysis. European Journal of Remote Sensing. 2015;48:403-422
  55. 55. Friedl MA, Brodley CE. Decision tree classification of land cover from remotely sensed data. Remote Sensing of Environment. 1997;61(3):399-409
  56. 56. De Fries R, Hansen M, Townshend J, Sohlberg R. Global land cover classifications at 8 km spatial resolution: The use of training data derived from Landsat imagery in decision tree classifiers. International Journal of Remote Sensing. 1998;19(16):3141-3168
  57. 57. Pal M. Random forest classifier for remote sensing classification. International Journal of Remote Sensing. 2005;26(1):217-222
  58. 58. Sharma R, Ghosh A, Joshi PK. Decision tree approach for classification of remotely sensed satellite data using open source support. Journal of Earth System Science. 2013;122(5):1237-1247
  59. 59. Burges CJC. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery. 1998;2(2):121-167
  60. 60. Besag J. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society: Series B: Methodological. 1986;48(5-6):259-302
  61. 61. Mesev V. Modified maximum likelihood classifications of urban land use: Spatial segmentation of prior probabilities. Geocarto International. 2001;16(4):41-48
  62. 62. Majd MS, Simonetto E, Polidori L. Maximum likelihood classification of single high-resolution polarimetric SAR images in urban areas. Photogrammetrie, Fernerkundung, Geoinformation. 2012;(4):395-407
  63. 63. Lucas I, Janssen F, van der Wel FJ. Accuracy assessment of satellite derived land cover data: A review. Photogrammetric Engineering and Remote Sensing. 1994;60(4):419-426
  64. 64. Bharatkar PS, Patel R. Approach to accuracy assessment for RS image classification techniques. International Journal of Scientific and Engineering Research. 2013;4(12):79-86
  65. 65. Lu DS, Mausel P, Batistella M, Moran E. Comparison of land-cover classification methods in the Brazilian Amazon Basin. Photogrammetric Engineering and Remote Sensing. 2004;70(6):723-731
  66. 66. Pal M, Mather PM. An assessment of the effectiveness of decision tree methods for land cover classification. Remote Sensing of Environment. 2003;86(4):554-565
  67. 67. Lu D, Weng Q. A survey of image classification methods and techniques for improving classification performance. International Journal of Remote Sensing. 2007;28(5):823-870
  68. 68. Huang Z, Lees BG. Combining non-parametric models for multisource predictive forest mapping. Photogrammetric Engineering and Remote Sensing. 2004;70(4):415-425
  69. 69. Steele BM. Combining multiple classifiers: An application using spatial and remotely sensed information for land cover type mapping. Remote Sensing of Environment. 2000;74(3):545-556
  70. 70. Liu W, Gopal S, Woodcock CE. Uncertainty and confidence in land cover classification using a hybrid classifier approach. Photogrammetric Engineering and Remote Sensing. 2004;70(8):963-971
  71. 71. Lo CP, Choi J. A hybrid approach to urban land use/cover mapping using landsat 7 enhanced thematic mapper plus (ETM+) images. International Journal of Remote Sensing. 2004;25(14):2687-2700
  72. 72. Kuemmerle T, Radeloff VC, Perzanowski K, Hostert P. Cross-border comparison of land cover and landscape pattern in Eastern Europe using a hybrid classification technique. Remote Sensing of Environment. 2006;103(4):449-464
  73. 73. Carlson T. Preface—Applications of remote sensing to urban problems. Remote Sensing of Environment. 2003;86(3):273-274
