Open access peer-reviewed chapter

Introductory Chapter: Spatial Analysis, Modelling, and Planning

By José António Tenedório and Jorge Rocha

Submitted: June 6th 2018. Reviewed: August 21st 2018. Published: November 5th 2018.

DOI: 10.5772/intechopen.81049

1. Introduction

It can be difficult to separate spatial analysis from other fields of interest such as geography, location analysis, and geographic information science. Yet its beginnings are, to some extent, easy to identify: it started with both the "spatial thinking" paradigm and the quantitative revolution in geography in the 1950s-1960s [1, 2].

The first promoters of this paradigm shift (Brian Berry, Waldo Tobler, Art Getis, etc.) had a geography background and, as such, worked at a multidisciplinary crossroads, allowing ideas and spatial analysis approaches to cross over from substantially different disciplines (e.g., statistics and computer science). One of the most significant contributions to spatial analysis was Peter Haggett's [3] work, which remains a reference for spatial analysis researchers and scholars.

Spatial analysis rests on the principle that there is some spatial component (absolute, relative, or both) in data. Indeed, by the beginning of the twenty-first century, an estimated 80% of all data already had some kind of spatial reference [4]. Spatial analysis encompasses numerous representational models of reality based on the spatial properties of data features [5].

The importance spatial analysis has gained within geography, its adoption into the analytical framework of several sciences (e.g., natural, social, and physical), and its prominence as a pillar of geographic information science [6] reflect that geographic space does matter and greatly influences the way natural, social, and physical processes evolve.

Spatial patterns and processes have idiosyncratic properties [7] that establish the core of the spatial analysis paradigm. One is spatial dependence, which postulates that spatially located semantic information gives some insight into the information existing at nearby locations. This is known as spatial autocorrelation (i.e., a kind of statistical dependence relationship) and, when applied to univariate analysis, is often understood as some kind of spatial expansion process. Here it is impossible not to mention Tobler's (1970) First Law of Geography, stipulating that everything is related to everything else, but near things are more related than distant things [8].
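
To make the idea concrete, below is a minimal sketch of the global Moran's I statistic, the classic measure of spatial autocorrelation, written in plain NumPy; the values and the contiguity matrix are invented for the example, not drawn from any study cited here.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I: degree of spatial autocorrelation in `values`
    given a (row-standardized) spatial weights matrix `weights`."""
    z = values - values.mean()               # deviations from the mean
    num = (weights * np.outer(z, z)).sum()   # weighted cross-products of neighbours
    den = (z ** 2).sum()                     # total variance term
    n, s0 = len(values), weights.sum()
    return (n / s0) * (num / den)

# Illustrative example: four locations on a line, rook contiguity.
values = np.array([1.0, 2.0, 2.5, 4.0])
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
w /= w.sum(axis=1, keepdims=True)            # row-standardize
print(morans_i(values, w))                   # > 0 suggests positive autocorrelation
```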

Another basic principle of spatial analysis is spatial heterogeneity: univariate or multivariate relationships are possibly not static throughout the geographic space, that is, they are anisotropic. Thus, one may find local hot and cold spots [9], because the calibrated parameters of these models may vary across the study area, mirroring local variations of the global model adjusted for the study area as a whole [10].
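
The local counterpart of the global statistic above can flag such hot and cold spots. A minimal sketch follows, again with invented toy data; it computes local Moran's I values in the spirit of the local indicators of spatial association [9], simplified to point estimates without significance testing.

```python
import numpy as np

def local_morans_i(values, weights):
    """Local Moran's I_i = z_i * sum_j w_ij z_j on standardized values:
    large positive I_i flags a location resembling its neighbours
    (hot or cold spot); negative I_i flags a spatial outlier."""
    z = (values - values.mean()) / values.std()
    lag = weights @ z                      # spatially lagged standardized values
    return z * lag

# Toy data: a high-value cluster at the end of a line of locations.
values = np.array([1.0, 1.1, 0.9, 5.0, 5.2])
w = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
w /= w.sum(axis=1, keepdims=True)          # row-standardize
print(local_morans_i(values, w))           # largest positive value marks the cluster
```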

In a narrow view, one can consider that spatial data is special [11]. Yet a rigid interpretation has often resulted in the postulate that geographic space exists objectively, independently of the social and natural processes that operate across it, and in the conceptual and operational separation of spatial and semantic information in spatial models. More recently, this idea has been considered dogmatic and detached from the reality of geographic space [12].

Geography has a history of a sometimes fraught relationship between law-seeking (nomothetic) and description-seeking (idiographic) knowledge [13]. Wisely, physical geographers have stayed out of these debates, but the nomothetic-idiographic tension persists in human geography [13, 14, 15]. Perhaps unsurprisingly, geography has been criticized for invalidated theories, results that cannot be reproduced, and a division between practice and science [16].

Goodchild and Li [17] argue that the old synthesis process, usually hidden from general view and not easily related to the final result, will become more explicit and critically important in the new big data era. Much geographic knowledge consists of formal theories, models, and equations that need to be processed in an informal manner. By contrast, data mining techniques require explicit representations, for example, rules and hierarchies, that are directly accessible without further processing [18].

One can state that multiple regression is undoubtedly the most widely used statistical approach in geography. This approach assumes that the model being used in the analysis is the correct one [19]. Regrettably, the background theories are hardly ever sufficiently developed to include even the most pertinent variables. Moreover, researchers often do not even know what the missing variables might be. Hence, one often ends up testing a limited set of variables, and the inability to include crucial missing variables can have severe implications for the model's accuracy and thus for the conclusions drawn from the data [20].
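
A small synthetic demonstration of this omitted-variable problem is sketched below; all numbers are illustrative, and the helper `ols` is a bare-bones least squares fit introduced for the example, not any package's API.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)   # x2 is correlated with x1
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)    # true process uses both variables

def ols(X, y):
    """Ordinary least squares with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols(np.column_stack([x1, x2]), y))  # ~[0, 1, 2]: close to the truth
print(ols(x1.reshape(-1, 1), y))          # x1 absorbs x2's effect (~2.6): biased
```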

In opposition to this traditional methodology, the data analytic approach relies on multiple models or a group of models. Instead of selecting the single best model and accepting that it properly describes the data generation process, an ensemble approach analyzes all the models that can result from the existing variable set and combines the results through a multiplicity of techniques, for example, bootstrap aggregation (bagging), boosting, support vector machines, neural networks, genetic algorithms, and Bayesian model averaging [21, 22]. The resulting ensemble consistently achieves better results than the designated best model, making more accurate predictions across a wide range of domains and techniques [23, 24]. However, one should note that the newer advanced data analytic techniques do not always outperform the more traditional ones [25].
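
As a hedged illustration of why an ensemble can beat a single "best" model, the sketch below compares one decision tree against a bagged ensemble of trees on synthetic data, assuming scikit-learn is available; the data and scores are illustrative only.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

# Synthetic regression task standing in for a spatial prediction problem.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

single = DecisionTreeRegressor(random_state=0)
bagged = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0)

# Bootstrap-aggregated trees typically score higher than the single tree.
print(cross_val_score(single, X, y, cv=5).mean())
print(cross_val_score(bagged, X, y, cv=5).mean())
```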

This book is a gathering of original research contributions focusing on recent developments in spatial analysis and modelling with implications for (spatial) planning. The book is organized in three parts that employ spatial analytic approaches in a progressively integrated and systemic way. It aims to show how computational methods of spatial analysis and modelling in a geographic information system (GIS) environment can be applied to the comprehension of systems and thereby support spatial planning that is better informed and, in theory, more effective. The 12 topics cover new types of data, analyses that highlight the importance of data in structures, functions, and processes, and approaches to support decision-making.

2. Spatial analysis

The emergence of critical geography (mainly physical geography), critical GIS, and radical approaches to quantitative geography fostered the idea that geographers are well prepared to combine quantitative methods with technical practice and critical analysis [26]. This has proved not quite true, but big data now opens new possibilities for spatial analysis research, especially through data mining [27], and can extend the reach of quantitative approaches to a wide array of problems usually addressed qualitatively [27, 28].

Although big data challenges the conventional concepts and practices of the "hard" sciences, geographic information science among them [29, 30], its predominance will undoubtedly lead to a new quantitative turn in geography [31]. This is clearly a new paradigm shift in geography research methodologies: a fourth, data-intensive, paradigm [32].

The so-called spatially integrated social sciences intend to leverage GIS in order to analyze the enormous amounts of available geocoded data [33]. Making sense of these data requires both computationally based analysis methods and the ability to situate the results [34], and it brings with it the risk of sidelining traditional interpretative approaches [35]. The big data era calls for new capacities of synthesis and for synergies between qualitative and quantitative approaches [36].

This paradoxical alliance between "poets and geeks" [37] can be a unique opportunity for geography, stimulating wider efforts to bridge the qualitative-quantitative divide [15] and enabling smart combinations of quantitative and qualitative methodologies [38, 39, 40].

The case is similar to the rebirth of social network theory and analysis, where, owing to the growing availability of relational datasets covering human interactions and relationships, network researchers managed to develop a new set of theoretical techniques and concepts [41].

Surveys are an example of this new paradigm. The methodology is in crisis because of declining response rates, deteriorating sampling frames, and a limited ability to record certain variables that are at the core of geographical analysis, for example, accurate geographical location [42]. Gradually, self-reported surveys quantifying human motivations and behaviors are being studied and compared with more "biological" data sources [43].

These limitations are even more pronounced if one considers two additional features: (i) the majority of social survey data is cross-sectional, deprived of a longitudinal temporal facet [44], and (ii) most social datasets are rough clusters of variables, owing to the limitations of what can be asked in self-reported approaches.

Big data is driving advances on both fronts, shifting from static snapshots to dynamic accounts and from rough aggregations to high-resolution spatiotemporal data. What matters most here is the likelihood of an increased emphasis in geography on processes rather than structures. Again, network analysis is a good example, as the availability of longitudinal relational data generated the latest procedural and theoretical advances on network dynamics [41].

Big data and its influence on geographic research have to be interpreted in the context of the computational and algorithmic shift that is progressively influencing geography research methods. To fully understand this shift, one can distinguish between two modelling approaches [45]: (i) the data modelling approach, which assumes a stochastic data model, and (ii) the algorithmic modelling approach, which treats the data-generating mechanism as complex and unknown. The first estimates parameter values from the data and then uses the model for information and/or prediction; in the second, attention moves from data models to the properties of algorithms.
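
The contrast between the two cultures can be sketched on a toy nonlinear process, assuming scikit-learn is available; the linear model stands in for the data modelling culture and the random forest for the algorithmic one (data and scores are illustrative).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(400, 2))
y = np.sin(X[:, 0]) * X[:, 1] + rng.normal(scale=0.1, size=400)  # nonlinear process
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

# Data modelling culture: assume a stochastic (here linear) data model,
# estimate its parameters, and read the coefficients for "information".
linear = LinearRegression().fit(Xtr, ytr)
print(linear.coef_, linear.score(Xte, yte))    # interpretable but misspecified

# Algorithmic modelling culture: treat the generating mechanism as unknown
# and judge the algorithm by predictive performance alone.
forest = RandomForestRegressor(n_estimators=200, random_state=1).fit(Xtr, ytr)
print(forest.score(Xte, yte))                  # better prediction, opaque mechanism
```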

This is precisely the type of data created by simulations of immensely complex systems [46], but a large share of it is provided by sensors and/or software that record a wide range of social and environmental patterns and processes [47, 48]. The geographic sources of these spatial and temporal data include location-aware tools such as mobile phones, airborne (e.g., unmanned aerial vehicles) and satellite remote sensors, other sensors attached to infrastructures or vehicles, and georeferenced social media, among others [18, 49, 50].

Big data holds enormous potential for innovative statistics [51]. Perhaps of utmost importance is the need for a distinct mind-set, because big data points toward a paradigm shift comprising an increased and improved use of modelling practices [52, 53]. Given the growing importance of location, it is fundamental for geographers to stop asking only "where?" and also start asking "why?" and "how?" [47].

Spatial analysis is defined as a way of looking at the geographical patterns of data and analyzing the relationships between entities. In spatial analysis, the tendency toward local statistics, for example, geographically weighted regression [54] and (local) indicators of spatial association [9], represents a concession whereby the main rules of nomothetic geography can evolve in their own way across geographic space. Goodchild [55] sees GIS as a mix of both nomothetic and idiographic characteristics, retained, respectively, in the software and algorithms and within the databases.
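
A minimal from-scratch sketch of the geographically weighted regression idea follows: one weighted least squares fit per location, with Gaussian distance-decay weights. Coordinates, bandwidth, and the drifting slope are all invented for illustration; production work would rather use a dedicated GWR package.

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Minimal geographically weighted regression: a weighted least squares
    fit at each location, with Gaussian distance-decay weights."""
    X1 = np.column_stack([np.ones(len(y)), X])     # intercept + predictors
    betas = []
    for c in coords:
        d = np.linalg.norm(coords - c, axis=1)     # distances to the focal point
        w = np.exp(-0.5 * (d / bandwidth) ** 2)    # Gaussian kernel weights
        W = np.diag(w)
        betas.append(np.linalg.solve(X1.T @ W @ X1, X1.T @ W @ y))
    return np.array(betas)                         # local coefficient surfaces

# Synthetic data whose slope drifts from west to east (spatial heterogeneity).
rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(200, 2))
x = rng.normal(size=200)
slope = 0.5 + 0.2 * coords[:, 0]                   # slope varies with longitude
y = slope * x + rng.normal(scale=0.3, size=200)
betas = gwr_coefficients(coords, x.reshape(-1, 1), y, bandwidth=2.0)
print(betas[:, 1].min(), betas[:, 1].max())        # local slopes echo the 0.5..2.5 drift
```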

Hence, spatial analysis is some sort of modelling procedure that relates data features over a geographic space (2D), across several spaces (3D), and along the time dimension (4D).

3. Spatial modelling

What is a model? In a broad sense, a model is a simplification of reality: thus, all models are wrong [56]. As one can understand, it is impractical, or even functionally impossible, to collect cartographic information using an exact match between the representation and the real objects; the elements generated would be a replica of the studied area, not a model. The acquisition of information therefore involves a numerical relationship between reality and the cartographic representation and thus requires a semantic transfer, inseparable from the processes of graphic and thematic generalization.

Lewis Carroll, world-renowned for his book Alice in Wonderland, presents in his poetic tale The Hunting of the Snark (An Agony in 8 Fits) [57] a very particular vision of the relation between greater abstraction, less information, and more extensive understanding, by proposing an empty map (the Bellman's Map, Figure 1). This blank sheet of paper, with suggestions for navigation (North, South, etc.) and very mysterious, can represent humans' total ignorance of their own location, yet at the same time it is a map that everyone understood. The point is precisely simplification and selection: in the middle of the ocean, this map can be quite accurate, if there is nothing to consider other than the water itself.

Chorley and Haggett [58] mention that one approach to model building can start with the simplification of a system to its essentials and then build an increasingly complex structure, by induction, a priori reasoning, and so on. There can hardly be a standard procedure for constructing a model of a system never modeled before, but the ways of addressing the problem suggested by the authors can help in a first approach. The original thought processes are difficult to understand and explain, and solutions to problems suggest themselves in strange shapes and at strange times. It is not expected that two researchers working on the same subject will build their models in the same way. What is expected is that they start with a topic of interest and then try to model it in their own way.

All information is gathered at a certain scale. This can be defined, in a somewhat crude manner, as the number of real-world metric units that correspond to one unit in the spatial model; at 1:10,000, for instance, one map centimetre represents 100 m on the ground. As one reduces the operating scale, the level of detail decreases according to the implicit generalization. However, this option should be weighed beforehand because, in practice, it is not always possible to reduce and then enlarge a map without such procedures leading to a loss of information.
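
The irreversibility of generalization can be illustrated with a Douglas-Peucker line simplification, assuming the shapely package is available; the "coastline" coordinates and tolerance values below are invented.

```python
from shapely.geometry import LineString

# A wiggly coastline-like line; one map unit stands for one metre of ground.
line = LineString([(0, 0), (1, 0.8), (2, -0.3), (3, 1.1), (4, 0.2),
                   (5, 0.9), (6, -0.5), (7, 0.4), (8, 0.0)])

# Generalizing for progressively smaller scales: larger tolerance, fewer vertices.
for tolerance in (0.1, 0.5, 1.0):
    simple = line.simplify(tolerance, preserve_topology=False)
    print(tolerance, len(simple.coords))   # detail lost here cannot be re-enlarged
```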

According to a story collected by Jorge Luís Borges from "Travels of Praiseworthy Men" by Suárez Miranda (1658), published in the chapter "Of Exactitude in Science" of the book A Universal History of Infamy [59], this was not the understanding of a group of cartographers who, perhaps compelled by an empire's thirst for power, intended to make a map of their country. Driven more by grandeur than by the desire to better understand that territory, these cartographers endeavored to design, or rather copy, the shape of their territory at ever-increasing scales (1:10,000, 1:1000, 1:100, and 1:10) until one day they reached what they considered the perfect representation: a map at a 1:1 scale. Inevitably, "less attentive to the Study of Cartography, succeeding generations came to judge a map of such magnitude cumbersome, and, not without irreverence, they abandoned it to the rigors of sun and rain" [59].

The Empire's cartographers had copied the territory obsessively, as if it were a text. Had the Empire not declined, the next step might have been to represent every transformation of every detail of that territory, to the point where it would be impossible to distinguish the representation from the object represented. These cartographers believed they were achieving an ever-better representation; yet the perfect copy of the geometry of place distorted, in inverse proportion, the ability of these maps to explain the territory of the Empire. Today we can associate the modernity to which Marc Augé refers [60] with an excess of information that submerges us in spatial data, while at the same time information and communication technologies (ICT) reduce distances.

Nowadays, spatial modelling, and in a broad sense geography, has shifted from a data-scarce to a data-rich environment. Contrary to the generalized idea, the critical change is not the volume of data but the variety and the velocity at which georeferenced data can be obtained. Data-driven geography is (re)emerging thanks to a massive flow of georeferenced data coming from sensors and people.

The notion of data-driven science holds that hypothesis generation and theory creation are informed by an iterative process in which data is used inductively. Hence, it is possible to name a new category of big data research that leads to the creation of new knowledge [61]. One should note that the inductive process should not start in a theory-less void. Preexisting knowledge is used to frame the analytic engine in order to inform the knowledge discovery process, so that it yields valuable conclusions instead of detecting any-and-all possible relations [62].

Data-driven geography raises some issues that have in fact been long-standing problems debated within the geographic community. To name a few: dealing with large data volumes, the problem of samples versus populations, data fuzziness, and the frictions between idiographic and nomothetic approaches. Yet the conviction that location matters (i.e., spatial context) is intrinsic to geography and acts as a strong motivation for approaches such as spatial statistics, time geography, and geographic information science as a whole.

Models can have very distinct applications, from the conception of suitability, vulnerability, or risk indicators to the simulation and assessment of planning scenarios. In a GIS framework, modelling can provide insights into the way real systems work, with enough precision and accuracy to permit prediction and assertive decision-making.
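
One of the simplest models of this kind is a weighted-overlay suitability analysis. The sketch below combines three normalized criterion rasters with illustrative weights; in practice the rasters would be derived from real layers (slope, land cover, accessibility) and the weights from expert judgment or pairwise comparison.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (100, 100)

# Three normalized criterion rasters (0 = unsuitable, 1 = ideal); synthetic here.
slope_score = rng.random(shape)
landuse_score = rng.random(shape)
access_score = rng.random(shape)

# Weighted linear combination: the weights are a policy choice, not a constant.
weights = {"slope": 0.5, "landuse": 0.3, "access": 0.2}
suitability = (weights["slope"] * slope_score
               + weights["landuse"] * landuse_score
               + weights["access"] * access_score)

print(suitability.mean(), (suitability > 0.8).sum())  # cells flagged most suitable
```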

Nowadays, two distinct cultures of modelling coexist [45, 63]. On one side, one can assume a stochastic data model, in what can be called the data modelling culture. The other, the algorithmic modelling culture, assumes that the core of the model is complex and unknown. The former uses the model for both information and prediction after estimating the parameter values from the data. In the latter, the focus shifts from data models to the properties of algorithms.

Putka and Oswald [64] indicate how geography could benefit from adopting the algorithmic modelling philosophy. They claim that the current data modelling philosophy limits the ability to predict results accurately, generates models that do not integrate a phenomenon's key drivers, and cannot incorporate model uncertainty and complexity in a satisfactory manner.

4. Modelling and planning

The history of territories reveals cycles of progress and decline, if we consider only the opposites. Each cycle mirrors, at varying scales, dimensions, and rhythms, the importance of political decisions. Planning the territory constitutes an instituted praxis from which models of the desired evolution are derived. As a general rule, these models are drawn up on the basis of essentially qualitative assumptions. They establish themselves as models that transpose the dominant ideas resulting from the interpretation of the spirit of laws and regulations, from the debate of technical solutions, and from public participation.

However, in light of recent theories on territorial dynamics, it is possible to resort to quantitative models that reveal the self-organizing systems of territories (e.g., cellular automata, multi-agent systems, fractal analysis, etc.). These models make intensive use of spatial modelling in a GIS environment and of future scenario simulation based on historical information about geographical change. When, for instance, one quantifies land use/cover changes and relates them to what is predicted in the plans, the conditions for quantitative modelling are created, favoring the dialectic between models. It is therefore not a question of using the confrontation between qualitative and quantitative models to nullify the relevance of one of them or to overvalue the other. On the contrary, it is to evaluate the potential of each and make use of it to improve technical efficiency in the preparation, monitoring, and evaluation of territorial management instruments.
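
A toy urban-growth cellular automaton conveys the flavor of such geosimulation models; the transition rule, threshold, and conversion probability below are invented for illustration, not calibrated on any dataset.

```python
import numpy as np

def urban_growth_step(grid, threshold=2, p_convert=0.5, rng=None):
    """One CA step: a non-urban cell (0) becomes urban (1) with probability
    p_convert when at least `threshold` of its 8 neighbours are urban."""
    if rng is None:
        rng = np.random.default_rng()
    padded = np.pad(grid, 1)
    neighbours = sum(padded[1 + di:1 + di + grid.shape[0],
                            1 + dj:1 + dj + grid.shape[1]]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0))
    candidates = (grid == 0) & (neighbours >= threshold)
    converts = candidates & (rng.random(grid.shape) < p_convert)
    return np.where(converts, 1, grid)

rng = np.random.default_rng(4)
grid = np.zeros((50, 50), dtype=int)
grid[24:26, 24:26] = 1                       # a small urban seed in the center
for _ in range(20):                          # simulate 20 time steps
    grid = urban_growth_step(grid, rng=rng)
print(grid.sum(), "urban cells after 20 steps")
```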

The legal systems and regulations of each country can provide an opportunity to use geosimulation models, of quantitative root, to enrich the political and technical debate about planning territories for the future. It was in this context that we framed the questions relating to spatial analysis and modelling as fundamentals of urban and regional planning, which, as we know, are complex processes of geographic space organization.

5. Conclusions

The difficulties of interpreting and understanding the territory, particularly with regard to the mixing of subsystems, inevitably require the notion of complexity. Thus, it is essential to provide tools that can address complexity, linking spatial organizations with the system of actors who make them evolve. The systems approach therefore presents itself as a paradigm capable of guiding the use and understanding of complex systems and as a prerequisite for more advanced modelling approaches.

Understanding social complexity requires a large variety of computational approaches. For instance, the multiscale nature of social clusters comprises a countless diversity of organizational, temporal, and spatial dimensions, occasionally all at once. Moreover, computation denotes several computer-based tools, as well as essential concepts and theories, ranging from information extraction algorithms to simulation models [65, 66].
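
As a minimal agent-based illustration in the Schelling segregation tradition, the sketch below lets agents of two types relocate when too few neighbours share their type; grid size, tolerance, and densities are arbitrary, and real models would add calibration and validation.

```python
import numpy as np

rng = np.random.default_rng(5)
size, tolerance = 20, 0.5
grid = rng.choice([0, 1, 2], size=(size, size), p=[0.2, 0.4, 0.4])  # 0 = vacant

def neighbour_stats(grid, i, j):
    """Count same-type and occupied cells among the (up to 8) neighbours."""
    block = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    same = int((block == grid[i, j]).sum()) - 1       # exclude the agent itself
    occupied = int((block != 0).sum()) - 1
    return same, occupied

def mean_similarity(grid):
    """Average fraction of same-type neighbours over all agents."""
    vals = [s / o for i in range(size) for j in range(size)
            if grid[i, j] != 0
            for s, o in [neighbour_stats(grid, i, j)] if o > 0]
    return sum(vals) / len(vals)

print("similarity before:", round(mean_similarity(grid), 2))
for _ in range(5000):                                 # asynchronous updates
    i, j = rng.integers(size), rng.integers(size)
    if grid[i, j] == 0:
        continue
    same, occupied = neighbour_stats(grid, i, j)
    if occupied and same / occupied < tolerance:      # unhappy agent relocates
        vi, vj = rng.integers(size), rng.integers(size)
        if grid[vi, vj] == 0:
            grid[vi, vj], grid[i, j] = grid[i, j], 0
print("similarity after: ", round(mean_similarity(grid), 2))  # clustering emerges
```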

Location analysis and modelling, an integral part of spatial analysis [67], grew out of Weber's industrial location theory. Location models may embrace a descriptive methodology, but they can also be very effective as normative frameworks. Hence, spatial analysis overlaps with typical data analytic methods such as statistics and network analysis and with several data science perspectives, such as data mining and machine learning.
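
The classical Weber problem can still be solved with a few lines of Weiszfeld's iterative algorithm, sketched below with invented source, market, and weight values.

```python
import numpy as np

def weber_point(points, weights, n_iter=100):
    """Weiszfeld's algorithm: find the location minimizing the weighted
    sum of Euclidean distances to the given points (Weber's problem)."""
    x = np.average(points, axis=0, weights=weights)   # start at weighted centroid
    for _ in range(n_iter):
        d = np.linalg.norm(points - x, axis=1)
        d = np.where(d == 0, 1e-12, d)                # guard against division by zero
        w = weights / d
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

# Illustrative: two material sources and one market, with transport weights.
points = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
weights = np.array([1.5, 1.0, 1.0])                   # strongest pull to source 1
print(weber_point(points, weights))                   # near-optimal plant location
```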

While there is an interesting debate between statistics and machine learning researchers about the advantages and disadvantages of each method, it is unmistakable that the vast majority of quantitative analytical methods fall within the data modelling culture. This enables a deeper knowledge of the importance of spatial values in shaping geographic space.

The overlap of spatial analysis with numerous fields of application has led to the coining of the designation spatial science [68], which seems to better represent its singularities. In addition to geography, spatial analysis has a clear linkage to regional science.

Ever since its beginnings, regional science has dealt with knowledge discovery from a neopositivist standpoint. It embraces the emerging archetype of geospatial data integration rooted in geographic information science [69, 70, 71] to analyze complex systems and the spatiotemporal processes that constitute them. It has also extended the procedural boundary of spatial analysis through both exploratory spatial data analysis [72] and confirmatory spatial data analysis [73].

Thus, spatial analysis and modelling is an interesting area of application within geographic information science, directed at analyzing, modelling, and improving the comprehension of spatiotemporal processes. It comprises a group of closely connected subareas, for example, geographic knowledge discovery, data analytics, spatiotemporal statistics, social network analysis, spatiotemporal modelling, and agent-based simulation.

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
