
## Abstract

Watch your step! Or perhaps, watch your wheels. Whatever the robot, if it puts its feet, tracks, or wheels in the wrong place, it might get hurt; and as robots quickly move from structured and completely known environments towards uncertain and unknown terrain, surface assessment becomes an essential requirement. As a result, future mobile robots cannot neglect the evaluation of the terrain's structure according to their driving capabilities. With the objective of filling this gap, this study focuses on terrain analysis methods that can be used for robot control, with particular reference to autonomous vehicles and mobile robots. Giving an overview of the theory related to this topic, the investigation covers not only hardware, such as visual sensors and laser scanners, but also space descriptions, such as digital elevation models and point descriptors, introducing new aspects and characterizations of terrain assessment. Throughout the discussion, a wide number of examples and methodologies are presented according to different tools and sensors, including the description of a recent method of terrain assessment based on normal vector analysis. Indeed, normal vectors have demonstrated great potential in the field of terrain irregularity assessment in both on‐road and off‐road environments.

### Keywords

- traversability
- terrain assessment
- terrain analysis
- UGV
- mobile robots

## 1. Introduction

According to a market analysis in the United States, the automated guided vehicle (AGV) market will be worth 2240 million dollars by 2020, owing to growing automation investments across all major industries [1]. In addition, BI Intelligence estimates 10 million cars and trucks featuring self‐driving capabilities by the same year [2]. Meanwhile, during the DARPA Robotics Challenge 2015, universities from around the world raced their humanoids through challenging scenarios: a number of robots lost their balance traveling across rubble [3], and some even relied on semi‐autonomous systems to overcome this challenge, with operators manually sending commands about specific locations where to place their feet. Additionally, the Curiosity rover, recently sent to Mars by NASA, demonstrates the growing use of robotic technologies in planetary exploration; such missions require a high level of reliability during surveys, as rocks and terrain irregularities may cause irreparable damage to on‐board instrumentation [4].

The common element among all these types of robots is the necessity of a high level of driving capability; though motion control has made great strides, it may fail under unexpected circumstances, including road hazards, pavement distresses, and rubble. As a result, from the widely known AGVs, widespread in industry for years, to modern unmanned ground vehicles (UGVs) [5], a high level of driving capability is perceived as an essential requirement. In order to enhance robustness and reliability, future mobile robots should be designed to include custom hardware and software components helping UGVs adapt their driving behavior to surface irregularities. In robotics, the assessment of terrain conditions is generally referred to as *terrain traversability analysis*; even though traversability has been explored from various perspectives, a thorough survey on this topic suggests that a specific definition is still missing in the robotics community [6]. On the other hand, as the diffusion of robots increases, breaking new boundaries in their applications, the use of visual technologies for traversability assessment will improve their reliability; consequently, the acquisition of information about the terrain is a prerequisite capability, and recent advances in sensors and perception encourage future research in this field.

Among the number of methods and models for terrain analysis, there are at least two large categories: (i) *classification‐based* methods and (ii) *cost‐assessment* methods. In the former, it is possible to count all the approaches that consider a binary distinction of the terrain into two classes, traversable or non‐traversable; to cite an example, in [7], the authors use an on‐line trained classifier to distinguish traversable and non‐traversable regions. Widely spread in research, occupancy maps also fall into this category, as they use the elevation of surrounding objects to construct a map of occupied regions on the basis of sensor measurements [8]. In cost‐assessment methods, by contrast, it is common to assign a continuous cost index to better describe the traversability characteristics of the terrain according to a specific cost function [9]. As an advance along the same line, Tanaka et al. implemented a fuzzy‐based traversability analysis, considering terrain roughness and slope as inputs for a fuzzy inference system and then generating a vector field histogram for navigation purposes [10].

A further classification of methods commonly used in this field distinguishes between *geometric‐* and *appearance‐based* methods. Used in a large number of works in research [11–13], geometric‐based analyses aim to detect traversability using geometric properties of surfaces, such as distances in space and shapes. Appearance‐based methods, to a greater extent related to camera image processing and cognitive analyses, have instead the objective of recognizing colors and patterns not related to the common appearance of terrain, such as grass, rocks, or vegetation [14, 15]. In spite of the clear potential of appearance‐based methods, geometric ones are still the most common in robotics, because they can be easily used for path‐planning purposes, where probabilistic methods are also gaining interest. Indeed, in 2006, Thrun et al. [16] presented a probabilistic algorithm for terrain classification on a fast‐moving robot platform, constituting a part of their autonomous vehicle during the DARPA Grand Challenge in 2005. As a recent example, in [17], the authors describe a terrain classification approach for an autonomous robot based on Markov random fields (MRFs) applied to fused 3D laser and camera image data.

In the light of the glaring requirements of terrain analysis for future UGVs, this discussion aims at exploring some of the basic concepts of traversability, with the focus laid on geometric methods. This study introduces a definition of traversability and its application to robot control and autonomous ground vehicles. This leads directly to the contributions of this chapter, which attempts to compare different methodologies and fill the gap between theory and practical applications, giving a definition of general value for terrain traversability analysis in terms of a fuzzy set and including practical examples of the foregoing functions available in the literature. Furthermore, the potential of novel methods based on normal vector analysis will be explored, providing some practical examples of application.

The chapter is structured as follows: Section 2 provides an overview and basic knowledge of the field, with focus on related works and recent techniques for visual terrain analysis, sensors, and space representations. Later, in Section 3, a theoretical background will help readers unfamiliar with the topic understand the basic concepts related to robot models and state spaces, introducing a definition of traversability in terms of a fuzzy set. Examples, results, and comparisons are presented in a thorough discussion in Section 4, which covers basic functions and recent research in the field applied to both synthetic data and real scenarios. Conclusions are drawn in Section 5.

## 2. Overview

Just as humans rely on their five senses to know where to walk or drive, creating an implicit space representation in the brain, robots perceive and interpret the space using exteroceptive and proprioceptive transducers as a sensing aid. In order to build an effective exteroceptive traversability analysis tool, two elements are required: (i) visual sensors and (ii) a mathematical space representation. The former comprises any exteroceptive sensor, such as cameras, depth cameras, or time‐of‐flight sensors, which endow robots with sensing capabilities, whereas the latter provides a spatial organization of sensory data and builds an abstract representation of the 3D environment. As a result, the approach to terrain traversability analysis may change according to the space representation, as much as the available data may vary according to the type of sensor. Even though the most common methods for terrain traversability analysis are based on exteroceptive perception [9], for the sake of completeness, it is important to mention that proprioceptive sensors are also successfully used for terrain analysis [18–20], measuring and interpreting quantities such as vibrations or slippage; however, their study is out of the scope of this work.

To facilitate the comprehension of this discussion, a short review of the space representations and sensor technologies available for terrain analysis in mobile robotics is reported below.

### 2.1. Sensors for terrain analysis

Sensing denotes a group of techniques used in robotics to measure any physical quantity interacting with the robot; hence, any device used to acquire information can be counted in this category. Although the general problem of sensing, understood as how a robot sees the world by means of a set of visual sensors, has been addressed following various approaches, in the specific topic of traversability a number of open issues remain to be solved. In [21], the author accurately describes the problem of semantic perception for a robot operating in human‐living environments, approaching it from the sensors and data point of view. Notwithstanding the valuable work done in the field of perception, indoor structured environments introduce a number of simplifications that are never applicable in outdoor unstructured environments. First of all, indoor scenarios are generally characterized by smooth ground surfaces and large objects representable as vertical planes. For this reason, AGVs, commonly used in indoor industrial environments, do not consider any terrain representation at all. Moreover, indoor robots generally move at low speed and consequently do not require any sophisticated system for terrain analysis. The situation changes completely in the case of planetary rovers [4], driving on sandy terrains featuring rocks varying in size and shape. Furthermore, recent driverless cars are quickly moving towards public roads; in such situations, rocks, road hazards, and pavement distresses may put the vehicle, and its passengers, in serious danger [22].

Since this discussion examines terrain analysis, a distinction between the acquisition and the representation of information should be made. On the one hand, space acquisition strongly depends on the typology of sensors and applications; on the other hand, its representation depends on the meaning of the perception and its content. From a purely geometric point of view, the most primitive representation of a point in space is given by its 3D Euclidean coordinates. The information about the real 3D coordinates of a specific point can be obtained by triangulation techniques [23, 24] on stereocamera images, or by directly measuring its distance using time‐of‐flight (TOF) systems [25]. Figure 1 shows typical image sensors assembled on several UGVs in order to acquire some of the images used for the experimental discussion in this work. Specifically, Figure 1a depicts a depth sensor, the Kinect camera, used in [26] for a novel approach to terrain analysis, whereas Figure 1b shows a more sophisticated vision system designed for an agricultural tractor [27], with the red circle marking a trinocular stereocamera. Figure 1c and Figure 1d show two examples of time‐of‐flight sensors, a Sick laser range finder and a sonar sensing system. In the following, the technology at the base of such sensors is briefly recalled.

#### 2.1.1. Stereovision

Stereocameras constitute a family of cameras composed of two or more lenses with separate image sensors. They provide a visual image for each lens, and post‐processing attempts to estimate the distance of each point from the sensor by means of correspondences seen by two different lenses at the same time, simulating human binocular vision. In order to provide accurate measures, the sensors require precise calibration with respect to each other, obtained by estimating their intrinsic and extrinsic parameters.

In the literature, a large number of methods for camera calibration are available. As an example, Kearney et al. propose a calibration method using geometric constraints in [28], and Puget and Skordas present a method for optimizing the calibration [29]. Later, many researchers studied methods for fast and accurate calibration of multiple cameras [30], anticipating the most recent research on automatic calibration for cars, for example [31]. Recent sensors use more than two cameras for the triangulation in order to increase the accuracy at both short and long range. The 3D representation of the environment is inferred by detecting the same point in both camera images, and the bigger the set of points, the richer the 3D space reconstruction will be.

Simplifying the concept, let *d* be the distance of a point *p* measured by a binocular stereocamera; then:

$$d = \frac{f \, b}{x_1 - x_2}$$

where *f* is the focal distance of the sensors, *b* is the baseline, that is, the spacing between the sensors, and $x_1$, $x_2$ are the coordinates of *p* in the two images, expressed in terms of pixels.

An example of a trinocular camera featuring multiple baseline can be seen in Figure 1b, where the sensor has been mounted as visual aid on an experimental tractor [27].
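As a minimal illustration of the depth‐from‐disparity relation above, the following sketch computes the distance of a point; the focal length, baseline, and pixel coordinates are illustrative values, not parameters of any sensor described in this chapter.

```python
# Depth from disparity for an idealized, rectified binocular stereocamera.
# Assumes the relation d = f * b / (x1 - x2) from the text; all numeric
# values below are made up for illustration.

def depth_from_disparity(f_px: float, baseline_m: float, x1: float, x2: float) -> float:
    """Return the distance of a point seen by two rectified cameras.

    f_px       -- focal length expressed in pixels
    baseline_m -- spacing between the two image sensors, in meters
    x1, x2     -- horizontal pixel coordinates of the same point in both images
    """
    disparity = x1 - x2
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    return f_px * baseline_m / disparity

# Example: f = 700 px, b = 0.12 m, disparity of 14 px -> 700 * 0.12 / 14 = 6 m
print(round(depth_from_disparity(700.0, 0.12, 320.0, 306.0), 2))  # 6.0
```

Note how the distance grows as the disparity shrinks: far points produce nearly identical pixel coordinates, which is why stereo accuracy degrades with range and why trinocular sensors add a second baseline.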

#### 2.1.2. Time‐of‐flight 3D sensors

In contrast to stereocameras, TOF‐based systems, such as lasers and sonars, evaluate distances directly by measuring the delay between the emission of a signal and its return to the receiver after hitting a surface, thus estimating the true distance from the sensor to the surface. Also in this case, a simplified relation gives the distance between the sensor and a point in space as follows:

$$d = \frac{c \, t}{2}$$

where *c* is the speed of the ray, light in the case of lasers, and *t* is the time elapsed from emission to reception. In the case of ultrasonic sensors, however, the speed of the ray depends on its wavelength, and both distance estimation and localization become harder due to the wider beam, which may cause multiple reflections. As an example, in [32], the authors propose three different mathematical approaches to detect the position and orientation of an observer, such as a robot, with respect to a smooth surface. Such an ultrasonic‐based system is depicted in Figure 1d. In contrast to ultrasonic technology, laser scanners are much more precise and reliable for environment description. To underline the global diffusion of laser scanners, Figure 1c shows a Sick 3D laser range finder applied on an electric autonomous vehicle at the University of Almería (Spain) [33]. As proof of the higher performance of lasers, Borrmann et al. obtained an accurate space description from a laser scanner and used laser information to build a global map of an outdoor urban environment [34]. Beyond this research, a large number of scientists continuously propose new methods for 3D space reconstruction using 3D laser scanner technologies.

Thanks to their accuracy and reliability, research involving vision for mobile robots has shifted towards the use of laser technologies as an aid for space reconstruction.
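The time‐of‐flight principle can be sketched in a few lines; the round‐trip times below are illustrative, not measurements from any of the cited sensors.

```python
# Time-of-flight ranging: the emitted pulse travels to the surface and back,
# so the one-way distance is d = c * t / 2. All timings are illustrative.

SPEED_OF_LIGHT = 299_792_458.0  # m/s, for laser range finders
SPEED_OF_SOUND = 343.0          # m/s in air at ~20 C, for ultrasonic sensors

def tof_distance(speed: float, round_trip_s: float) -> float:
    """Distance to the reflecting surface, given the round-trip time."""
    return speed * round_trip_s / 2.0

# A laser echo after ~66.7 ns and a sonar echo after ~58.3 ms both
# correspond to roughly 10 m:
print(round(tof_distance(SPEED_OF_LIGHT, 66.7e-9), 2))  # 10.0
print(round(tof_distance(SPEED_OF_SOUND, 58.3e-3), 2))  # 10.0
```

The six orders of magnitude between the two round‐trip times illustrate why laser electronics must resolve nanosecond delays, while sonar hardware can be far simpler but suffers from the wide beam mentioned above.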

### 2.2. Space representations

With the term *space representation*, roboticists refer to an abstract depiction of a robot's surrounding environment. As robots live in three‐dimensional space, the most natural space representation would be the Euclidean 3D space, but handling 3D space data may be hard and time‐consuming. Thus, for computational performance purposes, the most used space representation has so far been the 2.5D one, such as digital elevation maps.

#### 2.2.1. Digital elevation maps

Organizing sensor data is a mandatory step to reconstruct information for geometric interpretation purposes, and digital elevation models (DEMs) [35] are widely used as a space representation in mobile robotics. Although topography and large‐area terrain mapping constitute the original use of DEMs, they have also proven successful for traversability analysis in mobile robotics [4]. As a further example, Larson et al. discuss a real‐time approach to analyze the traversability of off‐road terrain for UGVs, considering positive and negative obstacles through elevation information [36].

DEMs have been introduced as a compact 2.5D representation, in which the terrain is described by an elevation value for each pair (*x*, *y*), where *x* and *y* are the coordinates on a regularly sampled plane. As a result, a grid‐based space representation is obtained, in which a surface is described by a finite number of points collected in a fixed‐size grid structure. Figure 2 shows an example of a DEM representation obtained from stereocamera images; the entire procedure goes from a camera image, see Figure 2a, to a point cloud in Figure 2b, and a DEM in Figure 2c. Though compact, the DEM representation requires a further step from acquisition to 3D reconstruction and DEM generation, whereas working on purely 3D data means that this step can be skipped.

The classical DEM approach constitutes an efficient representation, but it lacks accuracy in space description, since objects are described as surfaces using their elevation without taking into account their real shape. For instance, a tunnel cannot be represented using a digital elevation model. As an improvement of the classical DEM approach, Pfaff et al. [37] proposed extended DEMs, the so‐called *extended elevation maps* (EEM). This technique involves the use of additional information in order to provide a better description of objects and space; furthermore, the authors also used a Kalman filter to enhance the terrain description in a DEM, taking into account measurement errors and uncertainties. Recently, in [38], the researchers used EEMs as multilayer digital maps for the description of volcanic areas.

In conclusion, though attractive for its compactness and simplicity, every DEM formalization carries the assumption of surface regularity, which makes it an incomplete space representation. As a matter of fact, it fails in a large number of practical situations; nevertheless, it is extensively used in robotics, since it is easily applicable on low‐performance embedded controllers.
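To make the grid‐based representation concrete, the following sketch bins a made‐up point cloud into a DEM; the grid extent, cell size, and the choice of keeping the highest return per cell are illustrative assumptions. Keeping a single elevation per cell is precisely what makes structures such as tunnels unrepresentable.

```python
# Rasterizing a point cloud into a digital elevation model (DEM):
# points are binned on a regular (x, y) grid and each cell stores one
# elevation value (here, the maximum observed). Minimal sketch with
# made-up data; not the chapter's dataset.
import math

def build_dem(points, cell_size, x0, y0, nx, ny):
    """points: iterable of (x, y, z); returns an nx-by-ny grid of elevations."""
    dem = [[None] * ny for _ in range(nx)]
    for x, y, z in points:
        i = int(math.floor((x - x0) / cell_size))
        j = int(math.floor((y - y0) / cell_size))
        if 0 <= i < nx and 0 <= j < ny:
            if dem[i][j] is None or z > dem[i][j]:
                dem[i][j] = z  # keep the highest return per cell
    return dem

cloud = [(0.1, 0.1, 0.0), (0.2, 0.1, 0.3), (0.6, 0.6, 1.2)]
dem = build_dem(cloud, cell_size=0.25, x0=0.0, y0=0.0, nx=4, ny=4)
print(dem[0][0], dem[2][2])  # 0.3 1.2
```

The first two points fall in the same cell, so only the higher elevation survives: the transformation from cloud to DEM is lossy by construction, as discussed above.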

#### 2.2.2. Point descriptors

A recent space description, used in robotics for traversability purposes, consists in representing each point simply by its 3D Cartesian coordinates [24]. Hence, let us define a *point cloud* as a set of scattered 3D points, that is:

$$P = \{\, p_i = (x_i, y_i, z_i) \in \mathbb{R}^3, \quad i = 1, \dots, n \,\}$$

where *n* is the number of elements in the set. In order to provide a coherent space representation, the coordinates (*x*, *y*, *z*) of each point must be expressed in a common reference frame. However, problems such as perception and recognition in point clouds are ill‐posed if only the geometric coordinates of points are considered. Although the addition of new characteristics of points, such as color or intensity, may help, the problem remains ill‐posed due to the ambiguity of matching between points. In particular, a point in a cloud can be seen as a single point, yet it could represent the intersection of perpendicular planes forming the sides of an object, and therefore it could be described using semantic meanings such as “vertex” or “edge.” The set of characteristics used to describe a point defines a *local descriptor*. As a result, in the context of perception, the concept of a 3D point described only by its coordinates is substituted by the concept of local descriptor.

Given a point cloud *P* and a point $p_q \in P$, the so‐called *query point*, the neighborhood of $p_q$ in *P* can be defined as:

$$P^{q} = \{\, p_i \in P \; : \; |p_i - p_q| \le d_m, \quad i = 1, \dots, k \,\}$$

where $d_m$, the so‐defined *search radius*, is the maximum distance between $p_q$ and each neighbor, *k* is the number of neighbors of $p_q$ in $P^{q}$, and |·| is a generic norm (without loss of generality, it is possible to refer to the Euclidean distance).

A local descriptor of $p_q$ can be defined as a vector function *F* that describes the information content of $P^{q}$ according to a specific set of characteristics:

$$F(p_q, P^{q}) = (f_1, f_2, \dots, f_d) \in \mathbb{R}^{d}$$

where $f_i$ is the *i*th dimension of the descriptor. By comparing the local descriptors of two points, namely $p_1$ and $p_2$, it is possible to estimate their differences. Let Γ be the measure of similarity between $p_1$ and $p_2$, with their associated descriptors $F_1$ and $F_2$, and let *d* be their distance:

$$\Gamma = d(F_1, F_2)$$

Then, *d* is a scalar function, and Γ can be considered as the degree of similarity between points: if Γ → 0, the two points can be considered similar according to the specific set of characteristics; conversely, as Γ increases, the points have increasingly different properties. It is important to note that the effectiveness of the explicit expression of a descriptor is given by its ability to differentiate points in the presence of rigid transformations, noise, sampling variations, changes in scale, or illumination. Moreover, the generality of the representation of points using descriptors allows points and their characteristics, such as color but also traversability, to be collected as vectors in the form of a point cloud.
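A toy example may help fix the notation: the sketch below extracts a radius neighborhood, computes a deliberately simple descriptor (mean and standard deviation of neighbor elevations, a stand‐in for the richer descriptors cited in this chapter), and compares two descriptors through their Euclidean distance, playing the role of Γ. All data and parameters are made up for illustration.

```python
# Radius-based neighborhood, a toy local descriptor, and the similarity
# measure Gamma = d(F1, F2). The descriptor (mean, std of neighbor z) is
# an illustrative assumption, not a descriptor from the literature.
import math

def neighborhood(cloud, p_q, d_m):
    """All points of the cloud within search radius d_m of the query point p_q."""
    return [p for p in cloud if math.dist(p, p_q) <= d_m]

def descriptor(cloud, p_q, d_m):
    """F(p_q, P^q): (mean, std) of the z coordinate over the neighborhood."""
    zs = [p[2] for p in neighborhood(cloud, p_q, d_m)]
    mean = sum(zs) / len(zs)
    var = sum((z - mean) ** 2 for z in zs) / len(zs)
    return (mean, math.sqrt(var))

def gamma(f1, f2):
    """Degree of similarity: Gamma -> 0 means similar local geometry."""
    return math.dist(f1, f2)

flat = [(x * 0.1, 0.0, 0.0) for x in range(10)]            # flat strip
rough = [(x * 0.1, 1.0, (x % 2) * 0.5) for x in range(10)]  # bumpy strip
cloud = flat + rough
f_flat = descriptor(cloud, (0.5, 0.0, 0.0), d_m=0.3)
f_rough = descriptor(cloud, (0.5, 1.0, 0.5), d_m=0.3)
print(gamma(f_flat, f_flat) == 0.0, gamma(f_flat, f_rough) > 0.0)  # True True
```

Identical local geometry yields Γ = 0, while the bumpy strip produces a different descriptor and thus Γ > 0, matching the interpretation given above.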

A possible application of point clouds to traversability analysis can be found in [14], where the authors describe a method for terrain classification using point cloud data obtained by stereovision; they propose superpixels as the visual primitives for traversability estimation using a learning algorithm. A different approach can be found in [39]: here, the authors acquire information about the terrain with a LIDAR and, using local 3D point statistics, segment it into three classes: *clutter*, to capture grass and tree canopy; *linear*, to capture thin objects such as wires or tree branches; and finally *surface*, to capture solid objects such as the ground surface, rocks, or tree trunks. As a further example, in [40], the authors use a Sick lidar to acquire a point cloud and build a traversability cost‐to‐go function for navigation purposes.

#### 2.2.3. A comparison among methods for terrain analysis

To finalize this overview, it is worth comparing the different methods according to their use in the scientific community and providing a classification of the adopted approaches. Table 1 presents a summary of references in the field of terrain analysis and traversability, classifying them by space representation and sensor; a full bullet indicates the corresponding class. More specifically, the sensor classification distinguishes between ToF sensors and stereocameras as means to acquire information, whereas the space description classification differentiates between DEMs and point clouds, the latter category also including point descriptors.

| Reference | Application | Sensors: ToF | Sensors: Stereo | Space repr.: DEM | Space repr.: Pt.C |
|---|---|---|---|---|---|
| [27] | Natural | ● | ● | ○ | ● |
| [41] | Automotive | ● | ○ | ○ | ● |
| [42] | Natural | ○ | ● | ● | ○ |
| [13] | Automotive | ○ | ● | ● | ○ |
| [43] | Search and rescue | ● | ○ | ○ | ● |
| [44] | Natural | ○ | ● | ○ | ● |
| [14] | Natural | ○ | ● | ○ | ● |
| [17] | Natural | ● | ○ | ● | ○ |
| [45] | Planetary | ● | ○ | ● | ○ |
| [46] | Planetary | ● | ○ | ● | ○ |
| [36] | Natural | ● | ○ | ● | ○ |
| [25] | Automotive | ● | ○ | ○ | ● |
| [38] | Field | ● | ○ | ● | ○ |
| [12] | Automotive | ○ | ● | ● | ○ |
| [9] | Search and rescue | ● | ○ | ● | ○ |
| [37] | Natural | ● | ○ | ● | ○ |
| [47] | Planetary | ● | ○ | ● | ○ |
| [7] | Natural | ○ | ● | ● | ○ |
| [11] | Natural | ● | ○ | ○ | ● |
| [16] | Automotive | ● | ○ | ○ | ● |
| [39] | Natural | ○ | ● | ○ | ● |
| [40] | Field | ● | ○ | ○ | ● |

This analysis suggests that both DEMs and point clouds are used for traversability analysis; however, the use of point clouds for terrain traversability can be considered a possible trend, since recent research is going in this direction. On the contrary, DEMs constitute a stable and robust tool, widely used in all fields of robotics, for which recent extensions can still be found. To cite one of them, in [38], the authors use extended elevation models as an improvement over DEMs. The historically predominant application of traversability is in natural outdoor environments, where the assumption of surface regularity cannot be applied. Only recently has the study of surfaces gained interest in the automotive sector, in which all research is quite recent, since this technology was not previously required in the field. Possible uses include pavement distress detection [41], sidewalk detection [12], and the segmentation of terrain inliers and outliers for obstacle detection [13].

From the sensors' point of view, laser scanners are commonly used for specific applications such as planetary exploration or search and rescue, whereas stereocameras are preferred in applications where the cost‐effectiveness of cameras is attractive. However, it is important to note that ToF sensors are commonly used for geometry‐based traversability techniques, whereas cameras are used in the case of appearance‐based classification.

## 3. Terrain traversability analysis

From a dictionary definition, the word “traversability” denotes *“the condition of being traversable,”* and traversable concerns the capability *“to travel across or through”* (definition from Oxford Dictionaries).

This linguistic definition does not explicitly refer to the means; for instance, if one is driving a car, the word traversable better characterizes the action of “driving across or through,” whereas going on foot may refer to the natural process of “walking across or through.” However, an allusion to two elements exists in the definition: (i) the space, to be traversed, and (ii) the means, to traverse the space. In classical control theory, such elements are expressed using concepts such as controllability or reachability, which relate to the property of a system of reaching a generic state from the origin, or the other way round, according to a specific physical model of the process. By contrast, a thorough survey on traversability assessment suggests that its formal definition is still missing in the robotics community [6]. The same survey offers a qualitative definition of traversability in the context of UGVs. Though descriptive and valuable, that definition only provides the ingredients for a more general and formal definition of traversability. First of all, it is important to consider a few aspects: (i) a robot model, including its motion constraints, (ii) a space representation, for example, the terrain model, and (iii) a set of criteria to express the traversability properties. All these concepts will be recalled later.

Since this topic is attracting further research, a more general definition of traversability was later given by Cafaro et al. [43]. The authors have done valuable work on the theory of space description using point clouds, introducing the definitions of *traversable region* and *traversability map* in the context of graph theory, thus defining traversability as the existence of a connection (i.e., a branch) between two vertexes of a graph. A different characterization in terms of fuzzy sets was already provided by Seraji [48]; even though it was not general, the author distinguishes among different types of terrain, introducing this topic to the robotics community. In the light of all the relevant works in research, a clear discrepancy between theory and application appears. This section will attempt to fill this gap, using the elements in the literature to reach a definition in terms of the control space which considers the robot model, its operating environment, and an evaluation criterion.

### 3.1. Robot models and configuration space

From the basics of control theory, it is well known that robot control includes three different but fundamental items: process, controller, and sensors. This concept perfectly describes the ancient meaning of the word *control*, which refers to the capacity of inducing a specific behavior in a process based on observations of its evolution. Starting from simple regulators, control theory evolved towards robot control, with robots regarded as complex processes. Obviously, as the complexity of processes increases, the complexity of controllers increases as well. The growing complexity of robotic systems is moreover related to the requirement of a higher level of interaction between robots and the real world.

The physical description of robots in control theory is typically expressed through a process and a state space. Thus, given the state $x_k \in X$, where *X* is the *state space*, and the command $u_k \in U$, where *U* is the *command space*, a discrete system can be defined as:

$$x_{k+1} = f(x_k, u_k)$$

The function *f*, referred to as the *transition function*, denotes the behavior of a system, from simple systems to complex mobile robots. The generality of this definition expresses the evolution of any physical process, and though usable in any possible situation, its elements, including the space structures and the transition function, must be explicitly expressed in practical applications. The command space can be easily defined given the kinematic/dynamic properties of the robot and its actuators, and it can be considered as a finite set of possible actions. The state space, instead, may be an uncountable, open set, possibly featuring time‐variant elements (e.g., moving obstacles); as a consequence, it deserves a specific description.

For the sake of clarity, let us mention an example: the state space of a planar vehicle may be defined as $\mathbb{R}^2 \times SO(2)$, where $\mathbb{R}^2$ represents the translations along the *x* and *y* axes, respectively, and *SO*(2) the rotation around the axis orthogonal to the motion plane; this space is also known as *SE*(2), the special Euclidean group. Such a state space constitutes an open and uncountable set. Considering the 3D space, it is also common to find the state space expressed as *SE*(3). In robotics, this space is usually referred to as the *configuration space* (C‐Space), corresponding to the *state space* notion in physics, which is common in general control theory.
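As a concrete instance of a discrete transition function on a planar state space, the following sketch integrates a unicycle‐like robot with state (x, y, θ); the kinematic model and the finite command set are assumptions made for illustration only.

```python
# A discrete transition function q_{k+1} = f(q_k, u_k) for a planar
# unicycle-like robot. The kinematics and the finite command space U
# are illustrative assumptions, not a model from the chapter.
import math

def f(q, u, dt=0.1):
    """Transition function: integrate one step of unicycle kinematics."""
    x, y, theta = q
    v, omega = u  # forward speed [m/s] and turn rate [rad/s]
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# A finite command space, as assumed in the text for the command set:
COMMANDS = [(1.0, 0.0), (1.0, 0.5), (1.0, -0.5)]

q = (0.0, 0.0, 0.0)
for _ in range(10):          # apply "drive straight" for one second
    q = f(q, COMMANDS[0])
print(round(q[0], 2), round(q[1], 2))  # 1.0 0.0
```

Each application of *f* maps a state and a command to the next state, which is exactly the evolution expressed by the transition function above; richer robots only change the content of *f* and of the spaces, not the structure.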

Now, let us suppose that the C‐Space contains a forbidden region $\mathcal{O}$. For a planar robot moving in *SE*(2), the configuration of the robot at the time *k* will be $q_k = (x_k, y_k, \theta_k)$. Under the aforementioned assumptions, an *obstacle region* can be expressed as follows:

$$\mathcal{C}_{obs} = \{\, q \in \mathcal{C} \; : \; \mathcal{A}(q) \cap \mathcal{O} \neq \emptyset \,\}$$

where $\mathcal{A}(q)$ denotes the region of space occupied by the robot in the configuration *q*.

The obstacle region constitutes the set of all robot's configurations intersecting the forbidden region; its complement, the *free space* $\mathcal{C}_{free} = \mathcal{C} \setminus \mathcal{C}_{obs}$, collects the configurations through which the robot can move, for example, from a configuration $q_1$ at a time $t_1$ to another configuration $q_2$ at the time $t_2$. A rough analogy between *states* and *configurations* suggests that the transition function can be expressed as $q_{k+1} = f(q_k, u_k)$, clearly defining the robot configuration *q* as moving according to the equation of motion *f*.
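A minimal membership test for the obstacle region and the free space can be sketched as follows; approximating the robot footprint with a disc and the forbidden region with circular obstacles are simplifying assumptions made here, not part of the general theory.

```python
# Checking whether a configuration q belongs to C_obs or C_free.
# Assumptions for illustration: the footprint A(q) is a disc of fixed
# radius, and the forbidden region O is a union of circular obstacles.
import math

ROBOT_RADIUS = 0.3
OBSTACLES = [((2.0, 2.0), 0.5), ((4.0, 1.0), 0.4)]  # (center, radius) pairs

def in_obstacle_region(q):
    """q = (x, y, theta): True if A(q) intersects the forbidden region."""
    x, y, _theta = q  # a disc footprint makes theta irrelevant
    return any(math.hypot(x - cx, y - cy) <= ROBOT_RADIUS + r
               for (cx, cy), r in OBSTACLES)

def in_free_space(q):
    """C_free is the complement of C_obs in the C-Space."""
    return not in_obstacle_region(q)

print(in_free_space((0.0, 0.0, 0.0)))  # True
print(in_free_space((2.1, 2.1, 1.0)))  # False: footprint overlaps an obstacle
```

Two discs intersect exactly when their centers are closer than the sum of their radii, which is the test used to decide membership in the obstacle region.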

This discussion does not pretend to be a complete description of spaces and sets; it only gives the preliminary knowledge needed for reading this text. For additional details about assumptions, demonstrations, and definitions, please refer to [50], a relevant reference in the field.

The reason for the diffusion of C‐Spaces in robotics research resides in the possibility of describing them as manifolds, that is, topological spaces that behave at every point like our intuitive notion of a surface; and the best way of describing the terrain is to consider its topological properties. Hence, considering a ground vehicle, the configuration space cannot be other than the terrain region it is driving on, described as a manifold.

### 3.2. Traversability characterization

The previous theory considers the robot moving in a configuration space in which the free space implicitly defines what is traversable: a configuration either belongs to the free space or it does not. Nevertheless, we are looking for a more general definition; since traversability can be seen as the capability to travel across or through, the aforementioned binary definition could be extended. Indeed, a region could be forbidden (i.e., not traversable at all) or partially forbidden (i.e., traversable with some grade of membership). This clearly recalls fuzzy logic.

Following the definition of a fuzzy set, the traversable region can be expressed as a closed subset of the C‐Space. Let *q* be a possible configuration of the mobile robot and let *C* denote its C‐Space. Let us suppose the existence of a not empty free space *C*_{free}, with *C*_{free} ⊆ *C*. Moreover, let us suppose a traversability function *τ*: *C* → [0, 1] to be defined; the traversable region will then be defined by the following fuzzy set:

*T* = {(*q*, *τ*(*q*)) | *q* ∈ *C*_{free}}

First of all, let us note that the traversable set is included in the C‐Space by definition, since it is built on configurations belonging to the free space.

The aforementioned definition considers all the elements previously indicated, i.e., a robot model, its configuration space, and a function grading how well each configuration can be traversed.

In order to better clarify the concept, Figure 3 shows the difference between a simple occupancy map in Figure 3a, where free space and obstacles are clearly distinguished through a binary black/white classification, and the fuzzy set in Figure 3b, which better characterizes the terrain according to the membership function *τ*.
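To make the contrast concrete, the following minimal sketch (with made‐up elevations and a hypothetical threshold `z_max`, not the chapter’s data) compares the binary occupancy view with a fuzzy membership over a toy elevation profile:

```python
import numpy as np

# Toy 1-D elevation profile in metres; values are illustrative only.
z = np.array([0.0, 0.05, 0.4, 0.9, 0.1])
z_max = 0.5  # hypothetical maximum elevation the robot can negotiate

# Binary occupancy: a cell is either free (1) or an obstacle (0).
binary = (z < z_max).astype(float)

# Fuzzy membership: traversability degrades smoothly with elevation,
# mirroring the fuzzy set T = {(q, tau(q)) | q in C_free}.
tau = np.clip(1.0 - z / z_max, 0.0, 1.0)

print(binary)  # cells: 1, 1, 1, 0, 1
print(tau)     # grades: 1.0, 0.9, 0.2, 0.0, 0.8
```

The fuzzy map retains the information that the third cell, while formally free, is close to the robot’s limit.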

## 4. Discussion

As the definition of traversability previously introduced can be of general value for geometry‐based terrain analysis purposes, the following shows how to use it in order to build practical traversability functions, including the re‐definition of classical methods, such as elevation models and roughness models. The exposed examples cover both binary classification methods and cost‐based assessment methods. Along the discussion, an irregular terrain model in the form of a DEM of about 20 m × 20 m, featuring a 0.25 m grid size, has been used in order to compare different methods. Let us note that this sample terrain model, expressed as a DEM, is stored into an 80 × 80 matrix, that is, 6400 elements. The same data in the form of a point cloud, storing only the points’ Cartesian coordinates, would take 6400 × 3 values. This clearly demonstrates the advantage of handling DEMs instead of point clouds; however, using DEMs, part of the information is lost due to the assumption of terrain regularity, which is not always applicable. Moreover, ToF sensors as well as stereocamera triangulation always provide a set of distances between the cameras and sampled points in the space, that is, a point cloud; thus, a transformation, with its computational cost, is required to build the digital map.
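The storage comparison can be verified with a short sketch; the random elevations merely stand in for the sample terrain model:

```python
import numpy as np

# 20 m x 20 m terrain sampled with a 0.25 m grid, as in the text.
grid_size = 0.25
n = int(20 / grid_size)              # 80 cells per side
dem = np.random.rand(n, n)           # one elevation per cell: 80 x 80 = 6400 values

# The same terrain as a raw point cloud keeps x, y, z for every sample.
xs, ys = np.meshgrid(np.arange(n) * grid_size, np.arange(n) * grid_size)
cloud = np.column_stack([xs.ravel(), ys.ravel(), dem.ravel()])

print(dem.size)    # 6400
print(cloud.size)  # 19200, three times the DEM storage
```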

### 4.1. Binary classification for traversability

Let us consider the example of a binary classification and apply the aforementioned definition to find a membership function. The simplest choice is binary: *τ*(*q*) = 1 if *q* belongs to the free space, and *τ*(*q*) = 0 otherwise.

Note that, even though this function is the simplest possible, it works regardless of the particular structure of the C‐Space, and it converges into the general theory of configuration space. However, in practical cases, the free space is expected to be explicitly expressed. To prove that free space and obstacle region are disjoint, suppose by contradiction that there exists a configuration *q*_{k} such that *q*_{k} would belong to both sets; this contradicts the definition of free space as the complement of the obstacle region.

As an example of functionality, Figure 4 presents a binary classification applied to a sample terrain model. For the sake of the example, a threshold on the terrain elevation has been used as a discriminant between free space and obstacle region.
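A minimal sketch of such a binary membership function, assuming a synthetic DEM and a hypothetical elevation threshold `z_max` in place of the actual sample terrain model:

```python
import numpy as np

def binary_traversability(dem: np.ndarray, z_max: float) -> np.ndarray:
    """Binary membership: 1 where the configuration lies in free space,
    0 in the obstacle region. z_max is a hypothetical robot-dependent
    elevation threshold."""
    return (dem <= z_max).astype(float)

# Synthetic 80 x 80 DEM standing in for the sample terrain model.
rng = np.random.default_rng(0)
dem = rng.uniform(0.0, 1.0, size=(80, 80))
t_map = binary_traversability(dem, z_max=0.6)
# Every cell is now either fully traversable (1.0) or forbidden (0.0).
assert set(np.unique(t_map)) <= {0.0, 1.0}
```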

### 4.2. Elevation terrain model for traversability

Typically used in mobile robotics, elevation models may be described using the formulation in Eq. (9). Let us suppose to have a ground vehicle that can move in three‐dimensional space. As indicated earlier, its configuration space can be expressed as ℝ^{3}; neglecting the orientation terms to simplify the notation, the ground vehicle may be considered as a subset of ℝ^{3}, where *x* and *y* are considered as limited, thus defining a bounded region of the *x, y* plane. As a result, given a generic shaped robot, a membership function can be defined on the elevation values, as in Eq. (12).

One should note that, in Eq. (12), if the elevation grows beyond the value admissible for the robot, the membership tends to zero and the corresponding configuration falls outside the traversable set.

The example of this type of analysis is reported in Figure 5, where the values of the membership function are shown in color scale.
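As a sketch, assuming a linear decay of the membership with elevation up to a hypothetical limit `z_max` (the exact form of Eq. (12) may differ), a map of the kind shown in Figure 5 could be computed as:

```python
import numpy as np

def elevation_membership(dem: np.ndarray, z_max: float) -> np.ndarray:
    """Elevation-based membership sketch: tau = 1 on flat ground and
    decays linearly to 0 at z = z_max (hypothetical robot limit)."""
    return np.clip(1.0 - dem / z_max, 0.0, 1.0)

rng = np.random.default_rng(1)
dem = rng.uniform(0.0, 0.8, size=(80, 80))   # synthetic stand-in DEM
tau = elevation_membership(dem, z_max=0.5)
# Membership values are graded, not binary, over the [0, 1] range.
assert tau.min() >= 0.0 and tau.max() <= 1.0
```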

### 4.3. Traversability model based on roughness index

A widely used approach for geometry‐ and cost‐based terrain traversability analysis consists in the definition of the roughness index [47]. It is defined as the standard deviation of the elevation values in a specific region of the terrain, given by the projection of the robot shape on the ground.

Given a terrain region considered as free space, the roughness index is the standard deviation *σ* of the elevation values within the robot’s footprint, leading to the membership function in Eq. (14), where *σ* is evaluated on the region covered by the robot. As in the previous case, Eq. (14) tends to 0 as the roughness grows; hence, a configuration *q* lying on a rough patch will fall into low values of the membership function, which implies that it does not belong to the traversable set.

Figure 6 shows an example of a traversability map obtained using the roughness index. For the sake of this calculation, the robot has been considered to cover an area of about 8 × 8 cells of the map with a grid size of 0.25 m, corresponding to 2 m in size. The consideration of the standard deviation on a terrain region calculated according to the robot’s geometry may be considered a robust method and, for this reason, it is widely used in practical applications. One should note that, between the pure elevation traversability analysis and the roughness analysis, a specific region of the terrain appears as irregular and dangerous, corresponding to a local surface minimum. This evaluation agrees with the reality that a robot may get stuck in a hole. On the contrary, the same analysis does not mark as irregular the peak of the hill, which may be perfectly traversable as upland. However, it is clear that also this method may fail in the simple case of a surface featuring a slope, which, though regular and traversable, may present high values of variance in its elevation [51].
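A sketch of the roughness‐based membership, assuming the simple linear mapping 1 − σ/σ_max (σ_max being a hypothetical tolerance; Eq. (14) may use a different form) and an 8 × 8‐cell footprint as in the text:

```python
import numpy as np

def roughness_membership(dem: np.ndarray, window: int, sigma_max: float) -> np.ndarray:
    """Roughness-index sketch: standard deviation of the elevation
    inside the robot footprint (window x window cells), mapped to
    [0, 1]; sigma_max is a hypothetical roughness tolerance."""
    h, w = dem.shape
    tau = np.zeros((h, w))
    r = window // 2
    for i in range(h):
        for j in range(w):
            patch = dem[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            tau[i, j] = max(0.0, 1.0 - patch.std() / sigma_max)
    return tau

rng = np.random.default_rng(2)
dem = rng.uniform(0.0, 0.3, size=(40, 40))   # synthetic stand-in DEM
# 8 x 8-cell footprint at 0.25 m grid size = a 2 m robot, as in the text.
tau = roughness_membership(dem, window=8, sigma_max=0.1)
```

As the chapter notes, a uniformly sloped plane would score poorly here despite being traversable, which motivates the normal‐based analysis of the next section.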

### 4.4. Unevenness point descriptor‐based model

As an alternative analysis to solve the problems related to the variance of the elevation in sloped regular surfaces, the use of normal vectors to estimate surface irregularities was presented in [27], where the authors defined the unevenness point descriptor (UPD), as a simple choice to extract traversability information from 3D point cloud data. Specifically, the UPD describes surfaces using a normal analysis in a neighborhood, resulting in an efficient description of both irregularities and inclination.

Summarizing the concept, let *p*^{q} be a given point defined as the *query point*. The neighborhood of *p*^{q} is the set *P*^{q} of points lying within a sphere of radius *d*_{m} > 0 centered in *p*^{q}. Then, we define the unevenness point descriptor F_{U} in *p*^{q} as:

F_{U}(*p*^{q}) = (1/*k*) Σ_{i=1..k} n_{i}

where n_{i} are the unit normal vectors estimated at the points of *P*^{q}, and *k* is the number of elements in *P*^{q}.

The components of F_{U} are normalized with respect to *k*, i.e., the number of points in *P*^{q}; hence, it is possible to compare the unevenness index of different points among each other. The main advantages of this descriptor reside in its simplicity and robustness for traversability evaluation. Contrary to other methods, UPD detects the variations in the surface orientation instead of the variation of the pure elevation, which leads to a general description of regularity in the surface. Moreover, the UPD can be easily adapted to the robot’s specific task by appropriately setting the neighborhood size *d*_{m}. In practice, its value is fixed at the beginning of the operations based on the robot geometric size [26]. As a further observation, given a neighborhood *P*^{q}, the direction of F_{U}(*P*^{q}) represents the global orientation of the surface portion *P*^{q} with respect to the *xy*‐plane; as a consequence, the descriptor carries the inclination information as well.

To bring the unevenness index into the definition of a traversable region, we can consider as given the C‐Space and build a membership function upon the magnitude of the descriptor, where values close to 1 denote a regular surface and lower values denote unevenness. Now, let us observe that, in its original form, the UPD embeds the robot model into the parameter *d*_{m}, since the neighborhood radius is chosen according to the robot size.

The example of the UPD analysis, for the same sample terrain model, is reported in Figure 7. For the sake of visibility, the values of *ζ* have been normalized to their minimum value in the region, since the results of variation were close to regularity. During the calculation, the search radius has been set to 1 m, consistently with the previous example of the roughness index. Contrary to the previous approaches, in the UPD analysis the strong variations, such as depressions, are now considered as not regular, showing a different perception of the traversability of this terrain model.


As a last example, in Figure 8 the UPD has been applied to a point cloud obtained by triangulation with a stereocamera in a real environment; the value of the traversability function is reported in color scale, whereas the left‐camera image of the scenario is reported in Figure 8a. This scene has been extracted from a dataset thoroughly analyzed in [51]. It is interesting to note that the presented scenario features a ramp to access an indoor structure. The ramp is considered as regular by the UPD analysis, whereas it may be misinterpreted considering the elevation model as well as the roughness index. All the borders are correctly detected as not traversable regions. As a matter of fact, Figure 8d and Figure 8e present the same scenario described using a DEM and the traversability function in Eq. (14). The misinterpretation of the scenario leads to the erroneous classification of the ramp to access the building as fully not traversable. On the contrary, in Figure 8b and Figure 8c the scene is properly interpreted using the UPD approach.
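The UPD idea can be sketched on a gridded DEM, assuming gradient‐based normal estimation and a square neighborhood standing in for the radius *d*_{m} (the original method works on unorganized point clouds):

```python
import numpy as np

def upd_regularity(dem: np.ndarray, grid: float, r_cells: int) -> np.ndarray:
    """UPD sketch on a gridded DEM: estimate a unit normal per cell
    from elevation gradients, then average the normals in a square
    neighborhood (stand-in for the radius d_m). The magnitude of the
    mean normal is ~1 on regular surfaces and drops where the surface
    orientation varies."""
    gy, gx = np.gradient(dem, grid)                       # d z / d row, d z / d col
    normals = np.dstack([-gx, -gy, np.ones_like(dem)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    h, w = dem.shape
    reg = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = normals[max(0, i - r_cells):i + r_cells + 1,
                            max(0, j - r_cells):j + r_cells + 1]
            mean_n = patch.reshape(-1, 3).mean(axis=0)
            reg[i, j] = np.linalg.norm(mean_n)            # 1 = perfectly regular
    return reg

# A tilted plane is regular everywhere, despite its elevation variance:
# exactly the case where elevation- and roughness-based methods fail.
x = np.arange(40) * 0.25
plane = np.tile(0.3 * x, (40, 1))
reg = upd_regularity(plane, grid=0.25, r_cells=4)
assert reg.min() > 0.99
```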

## 5. Conclusion and further extensions

Along this chapter, different methods of geometry‐based traversability analysis for mobile robotics have been explored. A thorough review of the topic suggests that the future trend of sensors and space description for traversability purposes will refer to point clouds and time‐of‐flight sensors, or stereo 3D reconstruction. The necessity to improve the description of terrain, removing the assumption of regularity, will bring robots towards the full 3D reconstruction of the environment, at least within short‐range visibility. Among the different methods analyzed in the discussion, the UPD has demonstrated the highest recognition capability, even though it can be computationally costly. The contributions of this work are as follows: (i) a review of the field with a comparison among technologies, (ii) a new definition of traversability that can be of general value for robot navigation purposes, and (iii) a comparison among literature methods including practical examples.

To conclude this chapter, it is worth giving some possible extensions of this work and future developments. One of them could be the definition of traversable regions in terms of probability. Indeed, it should be possible to include a probability function in terms of risk of collision or probability of traverse, in which high values refer to a minimum probability of collision (i.e., maximum traversing probability) and low values imply a maximum probability of collision (i.e., minimum traversing probability). Moreover, the traversability regions as defined in this chapter may fit navigation purposes using the common potential fields, where the potential function considers traversable regions as “attractive”; on the contrary, “repulsive” regions will coincide with low values of the traversability function. Literature in this field typically considers potential functions that use the distance from obstacles instead of a complete traversability description.
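As a sketch of that extension, assuming the simplest possible mapping (potential proportional to 1 − τ, a choice made here for illustration), a traversability map could feed a potential‐field planner as:

```python
import numpy as np

def repulsive_potential(tau: np.ndarray, gain: float = 1.0) -> np.ndarray:
    """Illustrative mapping from a traversability map to a potential
    field: low traversability -> high ("repulsive") potential, high
    traversability -> low ("attractive") potential."""
    return gain * (1.0 - tau)

tau = np.array([[1.0, 0.5],
                [0.2, 0.0]])      # toy traversability values
U = repulsive_potential(tau)
# A gradient-descent planner would steer towards low-potential cells.
assert U[0, 0] == 0.0 and U[1, 1] == 1.0
```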

## References

- 1.
Automated Guided Vehicle Market by Type – Global Forecast to 2020. Marketsandmarkets, 2015. - 2.
J. Greenough, The self‐driving car report: forecasts, tech timelines, and the benefits and barriers that will impact adoption. BI Intelligence, 2015. - 3.
H. A. Yanco, A. Norton, W. Ober, D. Shane, A. Skinner, and J. Vice, “Analysis of human‐robot interaction at the darpa robotics challenge trials,” Journal of Field Robotics, vol. 32, no. 3, pp. 420–444, 2015. - 4.
Ellery, A. (2015). “Planetary Rovers: Robotic Exploration of the Solar System”. Springer, ISBN: 978–3–642–03258–5. - 5.
M. H. Hebert, C. E. Thorpe, and A. Stentz, “Intelligent unmanned ground vehicles: autonomous navigation research at Carnegie Mellon” (Vol. 388). Springer Science & Business Media, Eds. 2012, ISBN:1461563259. - 6.
P. Papadakis, “Terrain traversability analysis methods for unmanned ground vehicles: A survey,” Engineering Applications of Artificial Intelligence, vol. 26, no. 4, pp. 1373–1385, 2013. - 7.
H. Roncancio, M. Becker, A. Broggi, and S. Cattani, “Traversability analysis using terrain mapping and online‐trained terrain type classifier,” in Intelligent Vehicles Symposium Proceedings, 2014 IEEE , pp. 1239–1244, IEEE, 2014. - 8.
S. Thrun, “Learning occupancy grid maps with forward sensor models,” Autonomous Robots, vol. 15, no. 2, pp. 111–127, 2003. - 9.
P. Papadakis, F. Pirri “3D Mobility Learning and Regression of Articulated, Tracked Robotic Vehicles by Physics–based Optimization” International conference on Virtual Reality Interaction and Physical Simulation, Eurographics, Dec 2012, Darmstadt, Germany. - 10.
Y. Tanaka, Y. Ji, A. Yamashita, and H. Asama, “Fuzzy based traversability analysis for a mobile robot on rough terrain,” in Proceedings of the 2015 IEEE International Conference on Robotics and Automation, 2015. - 11.
B. Suger, B. Steder, and W. Burgard, “Traversability analysis for mobile robots in outdoor environments: A semi‐supervised learning approach based on 3d‐lidar data,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on, pp. 3941‐3946, 2015. - 12.
F. Oniga and S. Nedevschi, “Processing dense stereo data using elevation maps: Road surface, traffic isle, and obstacle detection,” IEEE Transactions on Vehicular Technology, vol. 59, pp. 1172–1182, 2010. - 13.
A. Broggi, E. Cardarelli, S. Cattani, and M. Sabbatelli, “Terrain mapping for off‐road autonomous ground vehicles using rational b‐spline surfaces and stereo vision,” in Intelligent Vehicles Symposium (IV), 2013 IEEE, pp. 648–653, 2013. - 14.
K. Dongshin, M. O. Sang, and M. R. James, “Traversability classification for ugv navigation: A comparison of patch and superpixel representations,” (San Diego, CA), pp. 3166‐3173, International Conference on Intelligent Robots and Systems, 2007. - 15.
A. Howard and H. Saraji, “Vision‐based terrain characterization and traversability assessment,” Journal of Robotic System, vol. 18, no. 10, pp. 77–587, 2001. - 16.
S. Thrun, M. Montemerlo, and A. Aron, “Probabilistic Terrain Analysis For High–Speed Desert Driving” In Robotics: Science and Systems, pp. 16–19, Philadelphia, USA, August 2006. - 17.
M. Häselich, M. Arends, N. Wojke, F. Neuhaus, and D. Paulus, “Probabilistic terrain classification in unstructured environments,” Robotics and Autonomous Systems, vol. 61, no. 10, pp. 1051–1059, 2013. - 18.
K. Iagnemma, H. Shibly, and S. Dubowsky, “On‐line terrain parameter estimation for planetary rovers,” in Robotics and Automation, 2002. Proceedings. ICRA ‘02. IEEE International Conference on, vol. 3, pp. 3142–3147, IEEE, 2002. - 19.
E. Coyle and E. G. E. Jr., “A comparison of classifier performance for vibration‐based terrain classification,” tech. rep., DTIC Document, 2008. - 20.
F. L. G. Bermudez, C. J. Ryan, D. W. Haldane, P. Abbeel, and R. S. Fearing, “Performance analysis and terrain classification for a legged robot over rough terrain,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 513–519, IEEE, 2012. - 21.
R. B. Rusu, Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. Künstl Intell (2010) 24: 345. doi:10.1007/s13218–010–0059–6. - 22.
M. Bellone and G. Reina, “Pavement distress detection and avoidance for intelligent vehicles”, International Journal of Vehicle Autonomous Systems, 2016, Vol. 14, ISSN: 1471–0226. - 23.
J. D. S. Prince, “Computer Vision: Models, Learning, and Inference” Cambridge University Press, 1st ed., 2012. - 24.
R. Szeliski, “Computer Vision: Algorithm and applications” Springer, 2010, ISBN1848829353. - 25.
F. Neuhaus, D. Dillenberger, J. Pellenz and D. Paulus, “Terrain drivability analysis in 3D laser range data for autonomous robot navigation in unstructured environments,” 2009 IEEE Conference on Emerging Technologies & Factory Automation, Mallorca, Spain, 2009, pp. 1–4, doi: 10.1109/ETFA.2009.5347217. - 26.
M. Bellone, A. Messina, and G. Reina, “A new approach for terrain analysis in mobile robot applications,” in Mechatronics (ICM), 2013 IEEE International Conference on, pp. 225–230, IEEE, 2013. - 27.
M. Bellone, G. Reina, N. Giannoccaro, and L. Spedicato, “Unevenness point descriptor for terrain analysis in mobile robot applications,” International Journal of Advanced Robotic Systems, vol. 10, p. 284, 2013. - 28.
J. K. Kearney, X. Yang, and S. Zhang, “Camera calibration using geometric constraints,” (San Diego, California, USA), IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1989. - 29.
P. Puget and T. Skordas, “An optimal solution for mobile camera calibration,” (Cincinnati, Ohio, USA), IEEE International Conference on Robotics and Automation, 1990. - 30.
G. Unal, A. Yezzi, S. Soatto and G. Slabaugh, “A Variational Approach to Problems in Calibration of Multiple Cameras,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 8, pp. 1322–1338, Aug. 2007, doi: 10.1109/TPAMI.2007.1035. - 31.
L. Heng, B. Li, and M. Pollefeys, “Camodocal: Automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry,” (Tokyo, Japan), International Conference on Intelligent Robots and Systems, 2013. - 32.
L. Spedicato, N. I. Giannoccaro, G. Reina, and M. Bellone, “Three different approaches for localization in a corridor environment by means of an ultrasonic wide beam”, International Journal of Advanced Robotic Systems, vol. 10, pp. 163–172, March 2013. - 33.
J.L. Torres, J.L. Blanco, M. Bellone, F. Rodrìguez, A. Gimènez, and G. Reina – “A proposed software framework aimed at energy‐efficient autonomous driving of electric vehicles” – International Conference on Simulation, Modeling, and Programming for Autonomous Robots, Bergamo, Italy October 2014, pp. 219–230, ISBN 978–3–319–11899–4; - 34.
D. Borrmann, J. Elseberg, K. Lingemann, A. Nüchter, and J. Hertzberg, “Globally consistent 3d mapping with scan matching,” Robotics and Autonomous Systems, vol. 56, no. 2, pp. 130–142, 2008. - 35.
I. S. Kweon and T. Kanade, “High‐resolution terrain map from multiple sensor data,” IEEE Transaction on Pattern and Machine Intelligence, vol. 14, pp. 278–292, 1992. - 36.
J. Larson, M. Trivedi, and M. Bruch, “Off‐road terrain traversability analysis and hazard avoidance for ugvs,” (Baden‐Baden, Germany), IEEE Intelligent Vehicles Symposium, 2011. - 37.
P. Pfaff, R. Triebel, and W. Burgard, “An efficient extension of elevation maps for outdoor terrain mapping,” In Proceedings of the International Conference on Field and Service Robotics (FSR), pp. 165–176, 2005. - 38.
T. Ohki, K. Nagatani, and K. Yoshida, “Path planning for mobile robot on rough terrain based on sparse transition cost propagation in extended elevation maps,” pp. 494–499, 2013 IEEE International Conference on Mechatronics and Automation (ICMA), Aug 2013. - 39.
N. Vandapel, D. F. Huber, A. Kapuria, and M. Hebert, “Natural terrain classification using 3‐d ladar data,” (New Orleans, LA, USA), pp. 5117–5122, IEEE International Conference on Robotics and Automation, 2004. - 40.
M. Whitty, S. Cossell, K. S. Dang, J. Guivant, and J. Katupitiya, “Autonomous navigation using a real‐time 3d point cloud,” in 2010 Australasian Conference on Robotics and Automation, pp. 1–3, 2010. - 41.
M. Bellone and G. Reina, “Road surface analysis for driving assistance,” in Workshop Proceedings of IAS‐13 13th International Conference on Intelligent Autonomous Systems Padova (Italy), pp. 226–234, 2014. - 42.
T. Braun, H. Bitsch, and K. Berns, “Visual terrain traversability estimation using a combined slope/elevation model”, Advances in Artificial Intelligence Volume 5243 of the series Lecture Notes in Computer Science pp 177–184, Springer, 2008, ISBN 978–3–540–85845–4. - 43.
B. Cafaro, M. Gianni, F. Pirri, M. Ruiz, and A. Sinha, “Terrain traversability in rescue environments,” in 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 1–8, Oct 2013. - 44.
A. Dargazany and K. Berns, “Stereo‐based terrain traversability estimation using surface normals,” in 41st International Symposium on Robotics; Proceedings of ISR/Robotik 2014, pp. 1–7, June 2014. - 45.
G. Ishigami, K. Nagatani, and K. Yoshida, “Path planning for planetary exploration rovers and its evaluation based on wheel slip dynamics,” in IEEE International Conference on Robotics and Automation, pp. 2361–2366, 2007. - 46.
T. Kubota, Y. Kuroda, Y. Kunii, and T. Yoshimitsu, “Path planning for newly developed microrover,” in 2001. Proceedings 2001 ICRA IEEE International Conference on Robotics and Automation, vol. 4, pp. 3710–3715, 2001. - 47.
E. Rohmer, G. Reina, and K. Yoshida, “Dynamic simulation‐based action planner for a reconfigurable hybrid leg‐wheel planetary exploration rover,” Advanced Robotics, vol. 24, no. 8–9, pp. 1219–1238, 2010. - 48.
H. Seraji, “Traversability index: A new concept for planetary rovers,” (Detroit, MI, USA), pp. 2006–2013, IEEE International Conference on Robotics and Automation, 1999. - 49.
J. J. Craig, Introduction to Robotics: Mechanics and Control, vol. 3. Pearson Prentice Hall, Upper Saddle River, 2005. - 50.
S. M. LaValle “Planning Algorithms” Cambridge University Press, 2006, ISBN1139455176. - 51.
M. Bellone, G. Reina, N. Giannoccaro, and L. Spedicato, “3d traversability awareness for rough terrain mobile robots,” Sensor Review, vol. 34, no. 2, pp. 220–232, 2014.

## Notes

- Definition from Oxford Dictionaries.
- Definition of fuzzy set: Given a generic set X and a membership function f:X→[0;1], the fuzzy set A is defined as A={(x,f(x))|x∈X}.