Open access peer-reviewed chapter

Watch Your Step! Terrain Traversability for Robot Control

Written By

Mauro Bellone

Submitted: 22 November 2015 Reviewed: 02 June 2016 Published: 19 October 2016

DOI: 10.5772/64489

From the Edited Volume

Robot Control

Edited by Efren Gorrostieta Hurtado


Abstract

Watch your step! Or perhaps, watch your wheels. Whatever the robot is, if it puts its feet, tracks, or wheels in the wrong place, it might get hurt; and as robots quickly move from structured and completely known environments towards uncertain and unknown terrain, surface assessment becomes an essential requirement. As a result, future mobile robots cannot neglect the evaluation of the terrain's structure according to their driving capabilities. With the objective of filling this gap, this study focuses on terrain analysis methods that can be used for robot control, with particular reference to autonomous vehicles and mobile robots. Giving an overview of the theory related to this topic, the investigation covers not only hardware, such as visual sensors or laser scanners, but also space descriptions, such as digital elevation models and point descriptors, introducing new aspects and characterizations of terrain assessment. During the discussion, a wide range of examples and methodologies is presented according to different tools and sensors, including the description of a recent method of terrain assessment based on normal vector analysis. Indeed, normal vectors have demonstrated great potential in the field of terrain irregularity assessment in both on-road and off-road environments.

Keywords

  • traversability
  • terrain assessment
  • terrain analysis
  • UGV
  • mobile robots

1. Introduction

According to a market analysis carried out in the United States, the automated guided vehicle (AGV) market will be worth 2240 million dollars by 2020, due to growing automation investments across all major industries [1]. Besides, BI Intelligence estimates that 10 million cars and trucks will feature self-driving capabilities by the same year [2]. On another front, during the DARPA Robotics Challenge 2015, universities from all over the world raced their humanoids through challenging scenarios; a number of robots lost their balance traveling across rubble [3], and some of them even relied on semi-autonomous systems to overcome this challenge, manually receiving commands about the specific locations where to place their feet. Additionally, the Curiosity rover, recently sent to Mars by NASA, demonstrates the growing utilization of robotic technologies in planetary exploration; such missions require a high level of reliability during their surveys, and rocks or terrain irregularities may cause irreparable damage to on-board instrumentation [4].

The common element among all these types of robots is the necessity of a high level of driving capability; though motion control has made great strides, it may fail under unexpected circumstances, including road hazards, pavement distresses, and rubble. As a result, from the widely known AGVs, widespread in industry for years, to modern unmanned ground vehicles (UGVs) [5], a high level of driving capability is perceived as an essential requirement. In order to enhance robustness and reliability, future mobile robots should be designed to include custom hardware and software components that help UGVs adapt their driving behavior according to surface irregularities. In robotics, the assessment of terrain conditions is generally referred to as "terrain traversability analysis;" even though traversability has been explored from various perspectives, a thorough survey on this topic suggests that a specific definition is still missing in the robotic community [6]. On the other hand, as the diffusion of robots increases, breaking new boundaries in their application, the use of visual technologies for traversability assessment will improve their reliability; consequently, the acquisition of information about the terrain is a prerequisite capability, and recent advances in sensors and perception encourage future research in this field.

Among the many methods and models for terrain analysis, there are at least two large categories: (i) classification-based methods and (ii) cost-assessment methods. In the former, it is possible to count all the approaches that consider a binary distinction of the terrain into two classes, traversable or non-traversable; to cite an example, in [7], the authors use an on-line trained classifier to distinguish traversable and non-traversable regions. Widely spread in research, occupancy maps also fall into this category, as they use the elevation of surrounding objects to construct a map of occupied regions on the basis of sensor measurements [8]. In cost-assessment methods, instead, it is common to assign a continuous cost index to better describe the traversability characteristics of the terrain according to a specific cost function [9]. As an advance along the same line, Tanaka et al. implemented a fuzzy-based traversability analysis, considering terrain roughness and slope as inputs for a fuzzy inference system and then generating a vector field histogram for navigation purposes [10].

A further classification of methods commonly used in this field distinguishes between geometric- and appearance-based methods. Used in a large number of works in research [11–13], geometric-based analyses aim to detect traversability using geometric properties of surfaces, such as distances in space and shapes. Appearance-based methods, to a greater extent related to camera image processing and cognitive analyses, instead aim to recognize colors and patterns not related to the common appearance of terrain, such as grass, rocks, or vegetation [14, 15]. In spite of the clear potential of appearance-based methods, geometric ones are still the most common in robotics, because they can be easily used for path-planning purposes, where probabilistic methods are also gaining interest. Indeed, in 2006, Thrun et al. [16] presented a probabilistic algorithm for terrain classification on a fast-moving robot platform, constituting a part of their autonomous vehicle during the DARPA Grand Challenge in 2005. As a recent example, in [17], the authors describe a terrain classification approach for an autonomous robot based on Markov random fields (MRFs) applied to fused 3D laser and camera image data.

In the light of the clear requirements of terrain analysis for future UGVs, this discussion aims at exploring some of the basic concepts of traversability, with the focus laid on geometric methods. This study introduces a definition of traversability and its application to robot control and autonomous ground vehicles. This directly leads to the contributions of this chapter, which attempts to compare different methodologies and fill the gap between theory and practical applications by giving a definition, in terms of a fuzzy set, that can be of general value for terrain traversability analysis, including practical examples of the foregoing functions available in the literature. Furthermore, the potential of novel methods based on normal vector analysis will be explored, providing some practical examples of application.

The chapter is structured as follows: Section 2 provides an overview and basic knowledge about the field, with a focus on related works and recent techniques for visual terrain analysis, the sensors used, and space representation. Later, in Section 3, a theoretical background will help those unfamiliar with the topic understand the basic concepts related to robot models and state spaces, introducing a definition of traversability in terms of a fuzzy set. Examples, results, and comparisons are presented in a thorough discussion in Section 4, which covers basic functions and recent research in the field applied to both synthetic data and real scenarios. Conclusions are drawn in Section 5.


2. Overview

Just as humans rely on their five senses to know where to walk or drive a vehicle, creating an implicit space representation in the brain, robots perceive and interpret the space using exteroceptive and proprioceptive transducers as a sensing aid. In order to build an effective exteroceptive traversability analysis tool, two elements are required: (i) visual sensors and (ii) a mathematical space representation. The former comprises any exteroceptive sensor, such as cameras, depth cameras, or time-of-flight sensors, which endow robots with sensing capabilities; the latter provides a spatial organization of sensory data and builds an abstract representation of the 3D environment. As a result, the approach to terrain traversability analysis may change according to the space representation, as much as the available data may vary according to the type of sensor. Even though the most common methods for terrain traversability analysis are based on exteroceptive perception [9], for the sake of completeness, it is important to mention that proprioceptive sensors are also successfully used for terrain analysis [18–20], measuring and interpreting quantities such as vibrations or slippage; however, they are beyond the scope of this chapter.

To facilitate the comprehension of this discussion, a short review of the space representations and sensor technologies available for terrain analysis in mobile robotics is reported below.

2.1. Sensors for terrain analysis

Sensing denotes the group of techniques used in robotics to measure any physical quantity interacting with the robot; hence, any device used to acquire information can be counted in this category. Although the general problem of sensing, that is, understanding how a robot sees the world by means of a set of visual sensors, has been addressed following various approaches, in the specific topic of traversability there are a number of open issues still to be solved. In [21], the author accurately describes the problem of semantic perception for a robot operating in human-living environments, approaching the problem from the sensor and data point of view. Notwithstanding the valuable work done in the field of perception, indoor structured environments introduce a number of simplifications that are rarely applicable in outdoor unstructured environments. First of all, indoor scenarios are generally characterized by smooth ground surfaces and large objects represented as vertical planes. For this reason, AGVs, commonly used in indoor industrial environments, do not consider any terrain representation at all. Moreover, indoor robots generally move at low speed and, consequently, do not require any sophisticated system for terrain analysis. The situation changes completely in the case of planetary rovers [4], driving on sandy terrains featuring rocks varying in size and shape. Furthermore, recent driverless cars are quickly moving towards public roads; in such situations, rocks, road hazards, and pavement distresses may put the vehicle, and its passengers, in serious danger [22].

Figure 1.

Examples of sensing devices: (a) a depth camera, the Kinect sensor, mounted on an experimental planetary rover; (b) a stereovision system including the XB3 Bumblebee camera used on an agricultural tractor; (c) an autonomous electric car featuring a Sick laser; and (d) an ultrasonic sensor-based mechatronic device.

Since this discussion examines terrain analysis, a distinction between the acquisition and the representation of information should be made. On one hand, space acquisition strongly depends on the typology of sensors and applications; on the other hand, its representation depends on the meaning of the perception and its content. From a purely geometrical point of view, the most primitive representation of a point in space is the 3D Euclidean metric. The information about the real 3D coordinates of a specific point can be obtained by triangulation techniques [23, 24] on stereocamera images, or by directly measuring its distance using time-of-flight (TOF) systems [25]. Figure 1 shows typical image sensors assembled on several UGVs in order to acquire some of the images used for the experimental discussion in this work. Specifically, Figure 1a depicts a depth sensor, the Kinect camera, used in [26] for a novel approach to terrain analysis, whereas Figure 1b shows a more sophisticated vision system designed for an agricultural tractor [27]; the red circle marks a trinocular stereocamera. Figure 1c and Figure 1d show two examples of time-of-flight sensors, a Sick laser range finder and a sonar sensing system. In the following, the technology at the base of such sensors will be briefly recalled.

2.1.1. Stereovision

Stereocameras constitute a family of cameras composed of two or more lenses with separate image sensors. They provide a visual image for each lens, and post-processing attempts to estimate the distance of each point from the sensor by means of correspondences seen by two different lenses at the same time, simulating human binocular vision. In order to provide accurate measures, the sensors require precise calibration with respect to each other, achieved by estimating their intrinsic and extrinsic parameters.

In the literature, a large number of methods for camera calibration are available. As an example, Kearney et al. propose a method for calibration using geometric constraints in [28], and Puget and Skordas present a method for optimizing the calibration [29]. Later, many researchers studied methods for fast and accurate calibration of multiple cameras [30], anticipating the most recent research on automatic calibration for cars, for example [31]. Recent sensors use more than two cameras for the triangulation in order to increase the accuracy in both short and long range. The 3D representation of the environment is inferred by detecting the same point in both camera images, and the larger the set of points, the richer the 3D space reconstruction will be.

Simplifying the concept, let d be the distance of a point p measured by a binocular stereocamera; then:

$$d = \frac{f\,b}{\|x_1 - x_2\|}, \quad (1)$$

where $f$ is the focal length of the sensors, $b$ is the baseline, that is, the spacing between the sensors, and $x_1$, $x_2$ are the coordinates of p in the two images, expressed in pixels.

An example of a trinocular camera featuring multiple baselines can be seen in Figure 1b, where the sensor has been mounted as a visual aid on an experimental tractor [27].
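To make Eq. (1) concrete, the short Python sketch below converts a pixel disparity into a depth estimate; the focal length and baseline values are illustrative placeholders and do not correspond to any of the sensors in Figure 1.

```python
import numpy as np

def depth_from_disparity(x1, x2, f=700.0, b=0.12):
    """Depth from Eq. (1): d = f * b / ||x1 - x2||.

    x1, x2 : pixel x-coordinates of the same point p in the two images.
    f      : focal length in pixels (placeholder value).
    b      : baseline in meters (placeholder value).
    """
    disparity = np.abs(np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float))
    return np.where(disparity > 0, f * b / disparity, np.inf)

# A 10-pixel disparity with f = 700 px and b = 0.12 m gives a depth of about 8.4 m.
print(depth_from_disparity(320.0, 310.0))
```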

2.1.2. Time‐of‐flight 3D sensors

In contrast to stereocameras, TOF-based systems, such as lasers and sonars, directly evaluate distances by measuring the delay between the emission of a signal and its return to the receiver after hitting a surface, thus estimating the true distance from the sensor to the surface. Also in this case, a simplified relation can express the distance between the sensor and a point in space as follows:

$$d = \frac{c\,t}{2}, \quad (2)$$

where c is the speed of the ray, light in the case of lasers, and t is the time elapsed between emission and reception. In the case of ultrasonic sensors, however, the signal travels at the speed of sound rather than the speed of light, and both distance estimation and localization become harder due to the wider beam, which may cause multiple reflections. As an example, in [32], the authors propose three different mathematical approaches to detect the position and orientation of an observer, such as a robot, with respect to a smooth surface. Such an ultrasonic-based system is depicted in Figure 1d. In contrast to ultrasonic technology, laser scanners are much more precise and reliable for environment description. To underline the global diffusion of laser scanners, Figure 1c shows a Sick 3D laser range finder mounted on an electric autonomous vehicle at the University of Almería (Spain) [33]. As proof of the higher performance of lasers, Borrmann et al. obtained an accurate space description from a laser scanner and used laser information to build a global map in an outdoor urban environment [34]. Beyond this research, a large number of scientists continuously propose new methods for 3D space reconstruction using 3D laser scanner technologies.
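A correspondingly minimal sketch of Eq. (2) follows; the default propagation speed is the speed of light (laser case), while the second call uses an approximate speed of sound in air for the ultrasonic case.

```python
def tof_distance(t_round_trip, c=3.0e8):
    """Distance from Eq. (2): d = c * t / 2, with t the round-trip time in seconds."""
    return c * t_round_trip / 2.0

print(tof_distance(66.7e-9))           # laser: roughly 10 m
print(tof_distance(0.058, c=343.0))    # ultrasonic: roughly 10 m as well
```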

Thanks to their accuracy and reliability, research involving vision for mobile robots has shifted towards the use of laser technologies as an aid for space reconstruction.

2.2. Space representations

With the term space representation, roboticists refer to an abstract depiction of the robot's surrounding environment. As robots live in three-dimensional space, the most natural space representation would be the Euclidean 3D space, but handling 3D space data may be hard and time-consuming. Thus, for computational performance purposes, the most used space representation has so far been the 2.5-dimensional one, such as digital elevation models (DEM), described later in this section. Only recently, thanks to high-performing CPUs and GPUs, have 3D point descriptors been gaining interest in this field.

2.2.1. Digital elevation maps

Organizing sensor data is a mandatory step to reconstruct information for geometric interpretation purposes, and digital elevation models (DEM) [35] are widely used as a space representation in mobile robotics. Although topography and large-area terrain mapping constitute the original use of DEMs, their use for traversability analysis has proven successful in mobile robotics [4]. As a further example, Larson et al. discuss a real-time approach to analyze the traversability of off-road terrain for UGVs, considering positive and negative obstacles through elevation information [36].

DEMs have been introduced as a compact 2.5-dimensional representation, which assumes that a surface can be represented by an elevation function $g(x, y)$, $g: \mathbb{R}^2 \to \mathbb{R}$, where x and y are the coordinates on a regularly sampled plane. As a result, a grid-based space representation is obtained, in which a surface is described by a finite number of points collected in a fixed-size grid structure. Figure 2 shows an example of a DEM representation obtained from stereocamera images; the entire procedure goes from a camera image, see Figure 2a, to the point cloud in Figure 2b, and to the DEM in Figure 2c. Though compact, the DEM representation requires a further step from acquisition to 3D reconstruction and DEM generation, whereas working on purely 3D data means that one step can be skipped.
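As an illustration of how such a grid is built in practice, the following sketch rasterizes a point cloud into a DEM with the same geometry as the sample terrain used later in the chapter (20 m × 20 m, 0.25 m grid). It is a simplified stand-in for the pipeline of Figure 2: it keeps the maximum elevation per cell and leaves empty cells as NaN.

```python
import numpy as np

def point_cloud_to_dem(points, cell=0.25, x_range=(0.0, 20.0), y_range=(0.0, 20.0)):
    """Rasterize an (n x 3) point cloud into a digital elevation model.

    Each cell stores the maximum z of the points falling inside it; cells that
    receive no point remain NaN. With the default ranges the grid is 80 x 80.
    """
    nx = int(round((x_range[1] - x_range[0]) / cell))
    ny = int(round((y_range[1] - y_range[0]) / cell))
    dem = np.full((ny, nx), np.nan)
    ix = np.floor((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for i, j, z in zip(iy[ok], ix[ok], points[ok, 2]):
        dem[i, j] = z if np.isnan(dem[i, j]) else max(dem[i, j], z)
    return dem
```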

Figure 2.

Example of two different space representations in which (b) is a point cloud representation, whereas (c) is the relative DEM, both geometrically describing the scene in the camera image (a).

The classical DEM approach constitutes an efficient representation, but it lacks accuracy in space description, since objects are described as surfaces using their elevation without taking into account their real shape. For instance, a tunnel cannot be represented using a digital elevation model. As an improvement of the classical DEM approach, Pfaff et al. [37] proposed extended DEMs, the so-called extended elevation maps (EEM). Such a technique involves the use of additional information in order to obtain a better description of objects and space; furthermore, the authors also used a Kalman filter to enhance the terrain description in a DEM, taking into account measurement errors and uncertainties. Recently, in [38], the researchers used EEMs as multilayer digital maps for the description of volcanic areas.

In conclusion, though appealing due to its compactness and simplicity, every DEM formalization embeds an assumption of surface regularity, which turns it into an incomplete space representation. As a matter of fact, it fails in a large number of practical situations; nevertheless, it is extensively used in robotics since it is easily applicable in low-performance embedded controllers.

2.2.2. Point descriptors

A recent space description, used in robotics for traversability purposes, consists of representing each point simply by its 3D Cartesian coordinates [24]. Hence, let us define a point cloud as a set of scattered 3D points, that is:

$$\mathcal{P} = \{ p_i(x_i, y_i, z_i) \in \mathbb{R}^3,\ i = 1, 2, ..., n,\ n \in \mathbb{N} \}, \quad (3)$$

where n is the number of elements in the set. In order to provide a coherent space representation, the coordinates of each point have to be given with respect to a common coordinate system. The origin of such a reference frame is usually located at the robot's geometric center or at the sensing device, defined as the camera reference frame $RF_c(O_c, x_c, y_c, z_c)$. For this reason, distance data generally need an additional coordinate transformation using appropriate rotation matrices. As a result, a 3D space description in the form of a point cloud constitutes a simple and robust solution to represent environments for robotic purposes. In the most recent data representations, RGB color information is added to the points, obtaining the so-called RGB-D point clouds. As an example, Figure 2b shows an RGB-D point cloud obtained by triangulation of stereo pairs in an outdoor road environment. Nowadays, it is common to think of a 3D point as defined in the three-dimensional Euclidean metric and represented by its Cartesian coordinates (x, y, z). However, problems such as perception and recognition in point clouds are ill-posed if only the geometric coordinates of points are considered. Although the addition of new characteristics of points, such as color or intensity, may help, the problem remains ill-posed due to the ambiguity of matching between points. In particular, a point in a cloud can be seen as a single point, yet it could represent the intersection of perpendicular planes representing the sides of an object, and therefore it could be described using semantic meanings such as "vertex" or "edge." The set of characteristics used to describe a point defines a local descriptor. As a result, in the context of perception, the concept of a 3D point as described only by its coordinates is substituted by the concept of a local descriptor.

Given a point cloud $\mathcal{P}$, defined as in Eq. (3), and a point $p_q$, the so-called query point, the neighborhood of $p_q$ in $\mathcal{P}$ can be defined as the set of points such that:

$$P_q = \{ p_i \in \mathcal{P} \subset \mathbb{R}^3 : |p_i - p_q| \le d_m,\ i = 1, 2, ..., k \}, \quad (4)$$

where $d_m$, the so-called search radius, is the maximum distance between $p_q$ and each neighbor, k is the number of neighbors of $p_q$ in $P_q$, and |·| is a generic norm (without loss of generality, it is possible to refer to the Euclidean distance).

A local descriptor of $p_q$ can be defined as the vector function F that describes the information content of $P_q$ according to a specific characteristic:

$$F(p_q, P_q) = \{ x_1^q, x_2^q, ..., x_n^q \}, \quad (5)$$

where $x_i^q$ is the i-th dimension of the descriptor. By comparing the local descriptors of two points, namely $p_1$ and $p_2$, it is possible to estimate their differences. Let Γ be the measure of similarity between $p_1$ and $p_2$, with their associated descriptors $F_1$ and $F_2$, and let d be their distance:

$$\Gamma = d(F_1, F_2). \quad (6)$$

Then, d is a scalar function, and Γ can be considered as the degree of similarity between points. If Γ → 0, the two points can be considered similar according to the specific set of characteristics; conversely, if Γ increases, the points have different properties. It is important to note that the effectiveness of a descriptor is given by its ability to differentiate points in the presence of rigid transformations, noise, sampling variations, changes in scale, or illumination. Moreover, the generality of the representation of points using descriptors allows points and their characteristics, such as color but also traversability, to be collected as vectors in the form of a point cloud.
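The following sketch puts Eqs. (4)–(6) together: a radius-based neighborhood query, a deliberately simple toy descriptor, and a Euclidean similarity measure. The descriptor itself (mean offset plus elevation spread) is only an illustrative choice, not one proposed in the cited works.

```python
import numpy as np
from scipy.spatial import cKDTree

def neighborhood(cloud, p_q, d_m):
    """P_q of Eq. (4): all points of the cloud within radius d_m of the query point."""
    tree = cKDTree(cloud)
    return cloud[tree.query_ball_point(p_q, r=d_m)]

def toy_descriptor(p_q, P_q):
    """A toy local descriptor F(p_q, P_q) in the sense of Eq. (5):
    mean offset of the neighbors plus the spread of their elevations."""
    offsets = P_q - p_q
    return np.hstack([offsets.mean(axis=0), P_q[:, 2].std()])

def similarity(F1, F2):
    """Gamma = d(F1, F2) of Eq. (6), here simply the Euclidean distance."""
    return np.linalg.norm(F1 - F2)

cloud = np.random.rand(1000, 3)                  # synthetic cloud for the example
p1, p2 = cloud[0], cloud[1]
F1 = toy_descriptor(p1, neighborhood(cloud, p1, d_m=0.2))
F2 = toy_descriptor(p2, neighborhood(cloud, p2, d_m=0.2))
print(similarity(F1, F2))                        # close to 0 for similar local surfaces
```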

A possible application of point clouds to traversability analysis can be found in [14], where the authors describe a method for terrain classification using point cloud data obtained by stereovision. They propose the use of superpixels as the visual primitives for traversability estimation using a learning algorithm. A different approach can be found in [39]; here, the authors acquire information about the terrain with a LIDAR and, using local 3D point statistics, segment it into three classes: clutter to capture grass and tree canopy, linear to capture thin objects such as wires or tree branches, and finally surface to capture solid objects such as the ground surface, rocks, or tree trunks. As a further example, in [40], the authors use a Sick lidar to acquire point clouds and build a traversability cost-to-go function for navigation purposes.

2.2.3. A comparison among methods for terrain analysis

To finalize this overview, it is worth comparing different methods according to their use in the scientific community and providing a classification of the approaches. Table 1 presents a summary of references in the field of terrain analysis and traversability, classifying them by space representation and sensor used; the full bullet indicates the classification. More specifically, the sensor classification distinguishes between ToF sensors and stereocameras as means to acquire information, whereas the space description classification differentiates between DEMs and point clouds, the latter category also including point descriptors.

Reference | Application | Sensors (ToF | Stereo) | Space representation (DEM | Pt.C)
Bellone et al. [27] | Natural | ●|● | ○|●
Bellone and Reina [41] | Automotive | ●|○ | ○|●
Braun et al. [42] | Natural | ○|● | ●|○
Broggi et al. [13] | Automotive | ○|● | ●|○
Cafaro et al. [43] | Search and rescue | ●|○ | ○|●
Dargazany and Berns [44] | Natural | ○|● | ○|●
Dongshin et al. [14] | Natural | ○|● | ○|●
Häselich et al. [17] | Natural | ●|○ | ●|○
Ishigami et al. [45] | Planetary | ●|○ | ●|○
Kubota et al. [46] | Planetary | ●|○ | ●|○
Larson et al. [36] | Natural | ●|○ | ●|○
Neuhaus et al. [25] | Automotive | ●|○ | ○|●
Ohki et al. [38] | Field | ●|○ | ●|○
Oniga et al. [12] | Automotive | ○|● | ●|○
Papadakis et al. [9] | Search and rescue | ●|○ | ●|○
Pfaff et al. [37] | Natural | ●|○ | ●|○
Rohmer et al. [47] | Planetary | ●|○ | ●|○
Roncancio et al. [7] | Natural | ○|● | ●|○
Suger et al. [11] | Natural | ●|○ | ○|●
Thrun et al. [16] | Automotive | ●|○ | ○|●
Vandapel et al. [39] | Natural | ○|● | ○|●
Whitty et al. [40] | Field | ●|○ | ○|●

Table 1.

Comparison of the literature, the table classifies space representations and used sensors for traversability purposes

The full bullet indicates the classification.


This analysis suggests that both DEMs and point clouds are used for traversability analysis; however, the use of point clouds for terrain traversability can be considered a possible trend, since recent research is moving in this direction. In contrast, DEMs constitute a stable and robust tool, widely used in all fields of robotics, and recent extensions can still be found; to cite one of them, in [38], the authors use extended elevation models as an improvement over DEMs. The historically predominant application of traversability analysis is in natural outdoor environments, where the assumption of surface regularity cannot be applied. Only recently has the study of surfaces been gaining interest in the automotive sector, in which all research is quite recent, since this technology was never required in the field before. Possible uses include pavement distress detection [41], sidewalk detection [12], or the segmentation of terrain inliers and outliers for obstacle detection [13].

From the sensors' point of view, laser scanners are commonly used for specific applications such as planetary exploration or search and rescue, whereas stereocameras are preferred in applications where the cost-effectiveness of cameras is attractive. However, it is important to note that ToF sensors are commonly used for geometry-based traversability techniques, whereas cameras are used in the case of appearance-based classification.


3. Terrain traversability analysis

According to the dictionary, the word "traversability" denotes "the condition of being traversable," and traversable concerns the capability "to travel across or through" (definition from Oxford Dictionaries).

This linguistic definition does not explicitly refer to the means; for instance, if one is driving a car, the word traversable better characterizes the action of "driving across or through," whereas going on foot may refer to the natural process of "walking across or through." However, an allusion to two elements exists in the definition: (i) the space, to be traversed, and (ii) the means, to traverse the space. In classical control theory, such elements are expressed using concepts such as controllability or reachability, which relate to the properties of a system to reach a generic state from the origin, or the other way round, according to a specific physical model of the process. By contrast, a thorough survey on traversability assessment suggests that its formal definition is still missing in the robotic community [6]. In the same survey, a qualitative definition of traversability in the context of UGVs appears, stating:

“The capability of a ground vehicle to reside over a terrain region under an admissible state wherein it is capable of entering given its current state, this capability being quantified by taking into account a terrain model, the robotic vehicle model, the kinematic constraints of the vehicle and a set of criteria based on which the optimality of an admissible state can be assessed [6].”

Though descriptive and valuable, this definition only provides the ingredients to reach a more general and formal definition of traversability. First of all, it is important to consider a few aspects: (i) a robot model including its motion constraints, (ii) a space representation, for example, the terrain model, and (iii) a set of criteria to express the traversability properties. All these concepts will be recalled later.

Since this topic is attracting further research, a more general definition of traversability was later given by Cafaro et al. [43]. The authors have done valuable work on the theory of space description using point clouds, introducing the definitions of traversable region and traversability map in the context of graph theory, thus defining traversability as the existence of a connection (i.e., a branch) between two vertices of a graph. A different characterization in terms of fuzzy sets was already provided by Seraji [48]; even though it was not general, the author distinguished among different types of terrain, introducing this topic into the robotic community. In the light of all the relevant work done in research, a clear discrepancy between theory and application appears. This section will attempt to fill this gap, using the elements in the literature to reach a definition in terms of a control space which considers the robot model, its operating environment, and an evaluation criterion.

3.1. Robot models and configuration space

From the basics of control theory, it is well known that robot control includes three different but fundamental items: the process, the controller, and the sensors. This concept perfectly reflects the original meaning of the word control, which refers to the capacity of inducing a specific behavior in a process based on observations of its evolution. Starting from simple regulators, control theory evolved towards robot control, with robots regarded as complex processes. Obviously, as the complexity of processes increases, the complexity of controllers increases as well. The growing complexity of robotic systems is furthermore due to the requirement of a higher level of interaction between robots and the real world.

The physical description of robots in control theory is typically expressed through a process and a state space. Thus, given the state $x \in X$, where $X \subseteq \mathbb{R}^n$ is referred to as the state space, and the command $u \in U$, with $U \subseteq \mathbb{R}^m$, called the command space, a discrete system can be defined as:

$$x(k+1) = f(x(k), u(k)), \quad f: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n. \quad (7)$$

The function f, referred to as the transition function, describes the behavior of a system, from simple systems to complex mobile robots. The generality of this definition expresses the evolution of any physical process and, though usable in any possible situation, its elements, including the space structures and the transition function, must be explicitly expressed in practical applications. The command space can be easily defined given the kinematic/dynamic properties of the robot and its actuators, and it can be considered as a finite set of possible actions. The state space, on the other hand, may be an uncountable, open set, possibly featuring time-variant elements (e.g., moving obstacles); as a consequence, it deserves a specific description.
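As a concrete instance of Eq. (7), the sketch below implements the discrete transition function of a simple unicycle-like planar vehicle, with state x = (p_x, p_y, θ) and command u = (v, ω); the model and the time step are illustrative assumptions, not tied to any specific platform discussed in the text.

```python
import numpy as np

def unicycle_step(x, u, dt=0.1):
    """Discrete transition x(k+1) = f(x(k), u(k)) for a planar vehicle.

    x : state (px, py, theta), i.e., a point of R^2 x SO(2).
    u : command (v, omega), forward speed and yaw rate.
    """
    px, py, theta = x
    v, omega = u
    return np.array([px + v * np.cos(theta) * dt,
                     py + v * np.sin(theta) * dt,
                     theta + omega * dt])

x = np.array([0.0, 0.0, 0.0])
for _ in range(50):                  # drive forward while slowly turning
    x = unicycle_step(x, u=(1.0, 0.2))
print(x)
```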

For the sake of clarity, let us mention an example: the state space for a planar vehicle may be defined as $X = \mathbb{R}^2 \times SO(2)$, where $\mathbb{R}^2$ denotes the translations along the x and y axes and $SO(2)$ the rotation around the axis orthogonal to the motion plane; this space is also known as $SE(2)$, the special Euclidean group. This state space constitutes an open and uncountable set. Considering the 3D space, it is also common to find the state space $SE(3) = \mathbb{R}^3 \times SO(3)$, referring to 3D translations and rotations, for example, in the case of the position and orientation of UAVs (unmanned aerial vehicles) or even simply the end effector's pose in manipulators. Talking about traversability and driving on non-flat terrain, the use of a 3D representation is also becoming common for a more accurate design of autonomous navigation systems for UGVs. As a general definition, in robotics, it is possible to find the name configuration space, or simply C-Space [49, 50], describing the set of all possible configurations of the robot. The C-Space refers to a broad family of constructions closely related to the notion of state space in physics, which is common in general control theory.

Now, let us suppose that the C-Space $C$ contains a forbidden region $O \subset C$; moreover, since the mobile robot also lives in the C-Space, we can denote the robot geometry as a subset $\mathcal{A} \subset C$; all sets may be expressed using polygonal or polyhedral models. At this point, let us denote as $q \in C$ a possible configuration of our mobile robot $\mathcal{A}$; as a result, $\mathcal{A}(q)$ is the configuration of the entire robot geometry in the C-Space. Note that, in the case of $SE(2)$, the configuration of the robot at the time k will be $q = (x_k, y_k, \theta_k)$. Under the aforementioned assumptions, an obstacle region can be expressed as follows:

$$C_{obs} = \{ q \in C \mid \mathcal{A}(q) \cap O \neq \emptyset \} \subseteq C. \quad (8)$$

The obstacle region constitutes the set of all robot configurations $\mathcal{A}(q)$ intersecting the forbidden subspace. All the other configurations can be denoted as free space, $C_{free}$, and obviously $C_{free} = C \setminus C_{obs}$. Let us note that the sets $O$ and $\mathcal{A}(q)$ must be closed sets in $C$; as a consequence, $C_{free}$ must be open. This ensures the possibility to formalize an optimization problem in $C_{free}$; moreover, it ensures that the robot can drive arbitrarily close to an obstacle without colliding with it. As a last consideration, though different in formulation, the configuration q and the state x in Eq. (7) may be considered similar; as a consequence, there exists a transition function to go from a configuration $q_1$ at a time $t_1$ to another configuration $q_2$ at the time $t_2$. A rough analogy between states and configurations suggests that the transition function can be expressed as $q_{k+1} = f(q_k, u_k)$, clearly defining the robot $\mathcal{A}$ in the configuration q moving according to the equation of motion f.

This discussion does not pretend to be a complete description of spaces and sets; it only provides the preliminary knowledge needed to read this text. For additional details about assumptions, proofs, and definitions, please refer to [50], a relevant reference in the field.

The reason for the diffusion of C-Spaces in robotics research resides in the possibility of describing them as manifolds, i.e., topological spaces that behave at every point like our intuitive notion of a surface; and the best way of describing the terrain is to consider its topological properties. Hence, considering a ground vehicle, the configuration space cannot be other than the terrain region it is driving on, described as a manifold.

3.2. Traversability characterization

The previous theory considers the robot moving in a configuration space $C$ composed of a free space part $C_{free}$ and a forbidden region $C_{obs}$. Considering the concept of traversability as the condition of being traversable, it is then simple to understand that the free space can be considered as traversable, while the forbidden space is not traversable. This definition would be perfectly sufficient for a binary classification of traversability.

Nevertheless, we are looking for a more general definition; thus, traversability can be seen as the capability to travel across or through, which implies that the aforementioned binary definition can be extended. Indeed, a set could be forbidden (i.e., not traversable at all) or partially forbidden (i.e., traversable with some grade of membership). This clearly recalls fuzzy logic (definition of fuzzy set: given a generic set X and a membership function $f: X \to [0, 1]$, the fuzzy set A is defined as $A = \{(x, f(x)) \mid x \in X\}$), which can be considered as an extension of binary logic, such that statements need not be true or false, but may have a grade of truth between 0 and 1. As a result, one can suppose the existence of a fuzzy set defined as follows.

Definition 1. Let be given a robot $\mathcal{A}$ expressed as a closed subset $\mathcal{A}(q) \subset C$, where $q$ is a possible configuration of the mobile robot $\mathcal{A}$ and $C$ denotes its C-Space. Let us suppose the existence of a non-empty free space $C_{free}$, with $C_{free} \subseteq C$. Moreover, let a traversability function $T: C_{free} \to [0, 1]$ be defined; the traversable region will then be defined by the following fuzzy set:

$$C_{tr} = \{ (q \in C_{free}, T(q)) \mid \mathcal{A}(q) \subset C_{free} \}. \quad (9)$$

First of all, let us note that the traversable set is included in the C-Space by definition, $C_{tr} \subseteq C$, because the membership function $T$ is defined in $C_{free}$; moreover, $C_{free} \subseteq C$ and also $C_{tr} \subseteq C_{free}$. The traversability function used in this definition can be considered as a clear analogue of the more general membership functions that are common in the theory of fuzzy sets. As a result, when $T(q)$ goes to 1, the statement "is traversable" will be true, whereas if $T(q) \to 0$, the statement "is traversable" will be false.

The aforementioned definition considers all the elements previously indicated, i.e., a robot model $\mathcal{A}$, a space structure $C$, and a set of traversability criteria expressed by $T$. This definition, given an explicit expression of $T$, can also be used to solve optimal control problems.

In order to better clarify the concept, Figure 3 expresses the difference between a simple occupancy map, Figure 3a, where free space and obstacles are clearly distinguished through a binary black/white classification, and the concept of a fuzzy set, Figure 3b, which better characterizes the terrain according to the membership function $T(q)$. Its values are expressed as degrees of membership in gray scale, white denoting high values of $T(q)$ and black corresponding to low values of $T(q)$. The region A in Figure 3b can be interpreted as a region "less traversable" than usual, but still not classifiable as an obstacle; a driving control policy may then generate safe plans accordingly.

Figure 3.

Depiction of the free space and of the fuzzy traversability characterization: the entire area inside the rectangle can be considered as $C_{free}$ in (a), whereas the gray-scale gradient indicates the value of the membership function for each point of $C_{free}$ in (b). The presence of the region A denotes a portion of the free space featuring different values of traversability.


4. Discussion

As the definition of traversability previously introduced can be of general value for geometry-based terrain analysis purposes, it will now be shown how to use it to build practical traversability functions, including the re-definition of classical methods such as elevation models and roughness models. The examples cover both binary classification methods and cost-based assessment methods. Throughout the discussion, an irregular terrain model in the form of a DEM of about 20 m × 20 m, featuring a 0.25 m grid size, has been used in order to compare the different methods. Let us note that the terrain model, taken as the sample model and expressed as a DEM, is stored in an 80 × 80 matrix, that is, 6400 elements. The same data in the form of a point cloud, storing only the points' Cartesian coordinates, would take 6400 × 3 values. This clearly demonstrates the advantage of handling DEMs instead of point clouds; however, using DEMs, part of the information is lost due to the assumption of terrain regularity, which is not always applicable. Moreover, ToF sensors as well as stereocamera triangulation always provide a set of distances between the sensor and sampled points in space, that is, a point cloud; thus, a transformation is required, with its computational cost, to build the digital map.
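The sample terrain used in the figures of this section is not reproduced here; the following sketch merely generates a synthetic DEM with the same dimensions (80 × 80 cells, 0.25 m grid), featuring a hill, a depression, and mild noise, so that the traversability functions shown below can be tried on comparable data.

```python
import numpy as np

def synthetic_terrain(size_m=20.0, cell=0.25, seed=0):
    """Synthetic 80 x 80 DEM (20 m x 20 m at a 0.25 m grid): a gentle undulation
    plus a hill, a hole, and low-amplitude noise mimicking sensor disturbances."""
    rng = np.random.default_rng(seed)
    n = int(size_m / cell)                                    # 80 cells per side
    x, y = np.meshgrid(np.linspace(0, size_m, n), np.linspace(0, size_m, n))
    z = 0.3 * np.sin(x / 3.0) * np.cos(y / 4.0)               # smooth undulation
    z += 1.0 * np.exp(-((x - 14) ** 2 + (y - 6) ** 2) / 8.0)  # hill
    z -= 0.8 * np.exp(-((x - 6) ** 2 + (y - 14) ** 2) / 5.0)  # depression
    z += 0.02 * rng.standard_normal(z.shape)                  # noise
    return z

dem = synthetic_terrain()    # an 80 x 80 matrix, i.e., 6400 elevation values
```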

4.1. Binary classification for traversability

Let us consider the example of a binary classification and apply the aforementioned definition to find a membership function $T$ such that the traversable region corresponds to the free part of the configuration space. Given a generic robot $\mathcal{A}$ in the configuration space $C$, in this simple case $T$ can be expressed by the function:

$$T(q) = \begin{cases} 1 & \forall q \in C \mid \mathcal{A}(q) \subset C_{free} \\ 0 & \forall q \in C \mid \mathcal{A}(q) \subset C_{obs} \end{cases} \quad (10)$$

Note that, even though this function is the simplest possible, it works regardless of the particular structure of the C-Space, and it fits into the general theory of configuration spaces. However, in practical cases, the free space is expected to be explicitly expressed. To prove that $C_{tr} = C_{free}$ holds in the case of binary classification, let us consider that, by definition, $C_{tr} \subseteq C_{free}$. As a result, the only part that should be proven is $C_{free} \subseteq C_{tr}$, if $T$ is defined as in Eq. (10). Hence, let us suppose that there exists a configuration $q_k$ such that $\mathcal{A}(q_k) \subset C_{free}$ but $q_k \notin C_{tr}$. This implies $T(q_k) = 0$; hence, $\mathcal{A}(q_k) \subset C_{obs}$, but this is absurd because $q_k$ would belong to both $C_{free}$ and $C_{obs}$. As a result, $C_{tr} = C_{free}$ if $T$ is defined as in Eq. (10).

Figure 4.

Binary traversability rule applied on an elevation model. In (a) and (b), the 3D view and xy view are shown. The red color labels not traversable regions (i.e., belonging to $C_{obs}$), whereas the cyan color denotes the traversable parts of the terrain, $C_{tr}$.

As an example of functionality, Figure 4 presents a binary classification applied to the sample terrain model. For the sake of the example, given $q = (x, y, z)$, $C_{free}$ has been defined as the set of configurations such that $|z| \le z_{max}$. The result is a cyan region that can be considered as traversable, that is, belonging to $C_{tr}$, and a red region that can be considered as not traversable, that is, belonging to $C_{obs}$. The example explicitly refers to the 3D space; however, the definition in Eq. (10) has general value, since the structure of the configuration space has not been explicitly given. Though simple and widely used, this method neglects information about intermediate levels of elevation or local irregularities; hence, it is mostly used in indoor structured environments, where there are strong discontinuities (e.g., floor, walls) and under the assumption of a regular flat floor surface. A different way to see this concept is the occupancy map, which considers a cell as not traversable, that is, an obstacle, if its elevation is higher than a threshold.

4.2. Elevation terrain model for traversability

Typically used in mobile robotics, elevation models may be described using the formulation in Eq. (9). Let us suppose we have a ground vehicle that can move in three-dimensional space. As indicated earlier, its configuration space can be expressed as $C = \mathbb{R}^3$; neglecting the orientation terms to simplify the notation, the ground vehicle may be considered as a subset $\mathcal{A} \subset C$, and we can also consider the existence of a forbidden region $O \subset C$. Now, let us construct a traversability function given a terrain model expressed as follows:

$$C_{free} = \{ q(x, y, z) \in C \mid \mathcal{A}(q) \cap O = \emptyset \}, \quad (11)$$

where $z = g(x, y)$, with $g: \mathbb{R}^2 \to \mathbb{R}$ supposed to be regular; moreover, x and y are considered limited, thus $|x| \le x_{max}$, $|y| \le y_{max}$. In this way, a bounded portion of the x, y plane has been defined. As a result, given a generic-shaped robot $\mathcal{A}$ in the configuration space $C$, a traversability function $T$ that considers an elevation terrain model can be expressed as follows:

$$T(q) = 1 - \left| \frac{z_q}{z_{max}} \right|, \quad \forall q \in C_{tr},\ z_{max} \neq 0 \mid \mathcal{A}(q) \subset C_{free}. \quad (12)$$

Figure 5.

The elevation model describes the sample terrain in Figure 4 better; the higher informative content allows performing a better cost-based traversability analysis. In (a) the 3D mesh is presented, whereas the xy-axis view is depicted in (b).

One should note that, in Eq. (12), if $z_q \to z_{max}$ then $T(q) \to 0$, and the configuration will fall into low values of the membership function; this implies that the point will not belong to $C_{tr}$. However, even though the configuration of the robot includes orientation angles in its formalization, this traversability function does not consider orientation in its values, which results in a limitation of the practical application of pure elevation-based methods. For the sake of completeness, we should consider the case $z_{max} \to \infty$, where $T(q) \to 1$ for any $z_q$, but this case can be considered as trivial.

An example of this type of analysis is reported in Figure 5, where the values of $T(q)$ are indicated with a color bar, from blue, corresponding to traversable regions, to red, denoting the not traversable part of the terrain. It is evident that a control rule based on such an analysis will drive the robot towards the lowest regions of the terrain, which, though reasonable, may not be the best behavior according to the objective of the robot's movements. Let us observe that, in this method, the robot shape is considered as a single point in the calculation of the traversability function, hence considering only the terrain elevation.

4.3. Traversability model based on roughness index

A widely used approach, for geometry and cost‐based terrain traversability analysis, consists in the definition of the roughness index [47]. It is defined as the standard deviation of the elevation values in a specific region of the terrain, given by the projection of the robot shape on the ground.

Given a terrain region considered as free space $C_{free}$, defined as in Eq. (11), and a robot model $\mathcal{A}$ described using any polygonal model, it is possible to define the roughness index $B_q$ of the terrain, when the robot is in the configuration $q$, as the standard deviation of the elevation values $Z_q$ of the surface given by the intersection between $\mathcal{A}(q)$ and $C_{free}$:

$$B_q = \sqrt{E\left[ (Z_q - \mu)^2 \right]}, \quad (13)$$

where $Z_q$ is the set of elevation values of all points that fall into the intersection between the robot $\mathcal{A}(q)$ and the free space, and $\mu$ is the average of the elevation values in the same region. Since the values of $B_q$ are not limited to [0, 1], the traversability function related to the roughness index, according to the definition in Eq. (9), may be considered using a normalization as follows:

$$T(q) = 1 - \frac{B_q}{B_{max}}, \quad \forall q \in C,\ B_{max} \neq 0 \mid \mathcal{A}(q) \subset C_{free}. \quad (14)$$

As in the previous case, Eq. (14) → 0 if $B_q \to B_{max}$, and the configuration q will fall into low values of the membership function; this implies that it does not belong to $C_{tr}$. Moreover, $B_q \ge 0 \Rightarrow B_q / B_{max} \in [0, 1]$; as a result, $T(q)$ is well defined. Also in this case, if $B_{max} \to \infty$, $T(q) \to 1$ for any $B_q$, and the case can be considered as trivial.

Figure 6 shows an example of a traversability map obtained using the roughness index; for the sake of this calculation, the robot has been considered to cover an area of about 8 × 8 cells of the map, which, with a grid size of 0.25 m, corresponds to 2 m in size. The computation of the standard deviation over a terrain region determined by the robot's geometry may be considered a robust method and, for this reason, it is widely used in practical applications. One should note that, moving from the pure elevation traversability analysis to the roughness analysis, a specific region of the terrain now appears as irregular and dangerous, corresponding to a local surface minimum. This evaluation agrees with the reality that a robot may get stuck in a hole. On the contrary, the same analysis does not mark as irregular the peak of the hill, which may be perfectly traversable as an upland. However, it is clear that this method too may fail in the simple case of a surface featuring a slope, which, though regular and traversable, may present high variance in its elevation values [51].
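A straightforward sketch of Eqs. (13) and (14) on a DEM is given below; the footprint is an 8 × 8 cell window, as in the example above, and B_max is an illustrative normalization constant.

```python
import numpy as np

def roughness_traversability(dem, footprint_cells=8, b_max=0.3):
    """Roughness-based membership of Eqs. (13)-(14): B_q is the standard deviation
    of the elevations inside the robot footprint (8 x 8 cells, i.e., 2 m at a
    0.25 m grid), and T(q) = 1 - B_q / B_max, clipped to [0, 1]."""
    ny, nx = dem.shape
    half = footprint_cells // 2
    T = np.zeros_like(dem)
    for i in range(ny):
        for j in range(nx):
            patch = dem[max(0, i - half):i + half + 1, max(0, j - half):j + half + 1]
            T[i, j] = np.clip(1.0 - patch.std() / b_max, 0.0, 1.0)
    return T

T_rough = roughness_traversability(dem)   # dem from the earlier synthetic sketch
```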

Figure 6.

Roughness traversability analysis result based on the roughness index in Eq. (13), integrated into the membership function in Eq. (14); (a) 3D surface model; (b) xy‐view. The color bar indicates increasing values of traversability, where red corresponds to not traversable regions, while blue corresponds to traversable portion of terrain.

4.4. Unevenness point descriptor‐based model

As an alternative analysis to solve the problems related to the variance of the elevation in sloped regular surfaces, the use of normal vectors to estimate surface irregularities was presented in [27], where the authors defined the unevenness point descriptor (UPD), as a simple choice to extract traversability information from 3D point cloud data. Specifically, the UPD describes surfaces using a normal analysis in a neighborhood, resulting in an efficient description of both irregularities and inclination.

Summarizing the concept, let $\mathcal{P}$ be a point cloud, that is, a set of points defined by their Cartesian coordinates as in Eq. (3), and let $p_q$ be a given point, the query point. The neighborhood of $p_q$ in $\mathcal{P}$ can be defined as in Eq. (4), given a search radius $d_m > 0$. Then, we define the unevenness point descriptor $F_U$ in $p_q$ as $F_U(p_q, P_q) = \{ r_q, \zeta_q \}$, where $r_q = (r_x^q, r_y^q, r_z^q)$ is given by the vector sum of all the normal vectors $n_i$ in the neighborhood $P_q$, that is, $r_q = \sum_{i=1}^{k} n_i$, $\zeta_q$ is defined by $\zeta_q = |r_q| / k$, and k is the number of elements in $P_q$.

The components of $r_q$ provide information about the global direction of the local surface in the sensor reference frame, whereas $\zeta_q$ can be interpreted as a local inverse "unevenness index," since it assesses the degree of local roughness and depends on the distribution of the directions of the normal vectors in the neighborhood. $\zeta_q$ is normalized by k, i.e., the number of points in $P_q$; hence, it is possible to compare the unevenness index of different points with each other. The main advantages of this descriptor reside in its simplicity and robustness for traversability evaluation. Contrary to other methods, the UPD detects variations in the surface orientation instead of variations in the pure elevation, which leads to a general description of the regularity of the surface. Moreover, the UPD can be easily adapted to the robot's specific task by appropriately setting the neighborhood size $d_m$. In practice, its value is fixed at the beginning of the operations based on the robot's geometric size [26]. As a further observation, given a neighborhood $P_q$ denoting a certain region of the terrain, its orientation can be written as follows:

$$\theta(P_q) = \cos^{-1}\left( \frac{r_z^q}{|r_q|} \right), \quad (15)$$

where $r_z^q$ represents the third component of $r_q$, orthogonal to the xy-plane; as a consequence, $\theta(P_q)$ represents the global orientation of the surface portion $P_q$ with respect to the xy-plane.
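A simplified sketch of the descriptor is reported below: normals are estimated by a local PCA (the eigenvector associated with the smallest eigenvalue of the neighborhood covariance) and flipped upwards before summation. This is only a didactic stand-in for the normal estimation routines of a point-cloud library, and the parameter values are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def upd(cloud, p_q, d_m=1.0):
    """Unevenness point descriptor F_U(p_q, P_q) = (r_q, zeta_q).

    For every neighbor of the query point, a normal is estimated by local PCA and
    oriented upwards; r_q is the sum of the normals and zeta_q = |r_q| / k, so that
    zeta_q stays close to 1 on flat patches and decreases on irregular ones."""
    tree = cKDTree(cloud)
    idx = tree.query_ball_point(p_q, r=d_m)
    normals = []
    for i in idx:
        nbr = cloud[tree.query_ball_point(cloud[i], r=d_m)]
        if len(nbr) < 3:
            continue
        cov = np.cov((nbr - nbr.mean(axis=0)).T)
        n = np.linalg.eigh(cov)[1][:, 0]           # eigenvector of the smallest eigenvalue
        normals.append(n if n[2] >= 0 else -n)     # consistent upward orientation
    r_q = np.sum(normals, axis=0)
    zeta_q = np.linalg.norm(r_q) / len(normals)
    return r_q, zeta_q
```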

To bring the unevenness index into the definition of a traversable region, we can consider as given the C-Space $C$ and a forbidden space $O$; then, a free space can be defined as in the following Eq. (16):

$$C_{free} = \{ q \in C \cap \mathcal{P} \mid \mathcal{A}(q) \cap O = \emptyset \}, \quad (16)$$

where $\mathcal{P}$ is a generic portion of space expressed as a point cloud. Let us note that the meaning of the intersection with $\mathcal{P}$ is a practical limitation of the C-Space to the part the robot can see or has information about. Then, supposing the unevenness index $\zeta_q$ to be given, the traversable region may be identified by the set in Eq. (9), where the membership function is given by the following:
$$T(q) = 1 - \zeta_q, \quad \forall q \in C \mid \mathcal{A}(q) \subset C_{free}. \quad (17)$$

Figure 7.

Unevenness point descriptor‐based model for geometric traversability analysis applied on a point cloud depicting the sample terrain model. In (a) and (b), the 3D view and xy‐view, respectively, are depicted. The search radius for the UPD calculation is dm = 1 m.

Figure 8.

UPD point descriptor analysis. In (b), the 3D point cloud is shown using the color bar to denote the traversability value, whereas its relative xy-view is shown in (c). As the point cloud has been obtained by stereo-triangulation, the left-camera image is shown in (a). The roughness index-based analysis in Eq. (14) produces poor results on the same scenario using a DEM approach; see the 3D view (d) and xy-view (e).

Now, let us observe that, in its original form, the UPD considers the robot model through the parameter $d_m$, at least in terms of its size, which, as mentioned, is fixed at the beginning of robot operations according to the robot shape. However, it is possible to generalize the concept of the neighborhood $P_q$ by considering the set of points that fall not within a spherical neighborhood but within the intersection between a polyhedral robot model $\mathcal{A}(q)$ and the free part of the C-Space, hence $P_q = \mathcal{A}(q) \cap C_{free}$. This generalization allows the user to better embed the robot shape into the descriptor.

An example of the UPD analysis, for the same sample terrain model, is reported in Figure 7; for the sake of visibility, the values of $\zeta_q$ have been normalized to their minimum value in the region, since the variations were close to regularity. During the calculation, the search radius has been set to 1 m, in accordance with the previous example of the roughness index. Contrary to the previous approaches, in the UPD analysis the strong variations, such as the depressions, are now considered as not regular, showing a different perception of the traversability of this terrain model.

As a last example, in Figure 8, the UPD has been applied to a point cloud obtained by triangulation from a stereocamera in a real environment; the value of the traversability function is reported using a color scale, whereas the left-camera image of the scenario is reported in Figure 8a. This scene has been extracted from a dataset thoroughly analyzed in [51]. It is interesting to note that the presented scenario features a ramp to access an indoor structure. The ramp is considered as regular by the UPD analysis, whereas it may be misinterpreted using the elevation model as well as the roughness index. All the borders are correctly detected as not traversable regions. As a matter of fact, Figure 8d and Figure 8e present the same scenario described using a DEM and the traversability function in Eq. (14). The misinterpretation of the scenario leads to the erroneous classification of the ramp accessing the building behind it as fully not traversable. On the contrary, in Figure 8b and Figure 8c, the scene is properly interpreted using the UPD approach.


5. Conclusion and further extensions

Throughout the chapter, different methods of geometry-based traversability analysis for mobile robotics have been explored. A thorough review of the topic suggests that the future trend of sensors and space descriptions for traversability purposes will move towards point clouds and time-of-flight sensors, or stereo 3D reconstruction. The necessity to improve the description of the terrain, removing the assumption of regularity, will bring robots towards a full 3D reconstruction of the environment, at least in short-range visibility. Among the different methods analyzed in the discussion, the UPD has demonstrated the highest recognition capability, even though it can be costly in terms of computational performance. The contributions of this work are as follows: (i) a review of the field with a comparison among technologies, (ii) a new definition of traversability that can be of general value for robot navigation purposes, and (iii) a comparison among literature methods including practical examples.

To conclude this chapter, it is worth giving some possible extensions of this work and future developments. One of them could be the definition of traversable regions in terms of probability. Indeed, it should be possible to include a probability function in terms of risk of collision or probability of traverse, in which high values refer to a minimum probability of collision (i.e., maximum traversing probability) and low values imply a maximum probability of collision (i.e., minimum traversing probability). Moreover, the traversable regions as defined in this chapter may fit navigation purposes using the common potential fields, where the potential function would consider traversable regions as "attractive;" on the contrary, "repulsive" regions would coincide with low values of the traversability function. The literature in this field typically considers potential functions that use the distance from obstacles instead of a complete traversability description.

References

  1. Automated Guided Vehicle Market by Type – Global Forecast to 2020. Marketsandmarkets, 2015.
  2. J. Greenough, “The self‐driving car report: forecasts, tech timelines, and the benefits and barriers that will impact adoption.” BI Intelligence, 2015.
  3. H. A. Yanco, A. Norton, W. Ober, D. Shane, A. Skinner, and J. Vice, “Analysis of human‐robot interaction at the DARPA robotics challenge trials,” Journal of Field Robotics, vol. 32, no. 3, pp. 420–444, 2015.
  4. A. Ellery, “Planetary Rovers: Robotic Exploration of the Solar System.” Springer, 2015, ISBN: 978-3-642-03258-5.
  5. M. H. Hebert, C. E. Thorpe, and A. Stentz, “Intelligent Unmanned Ground Vehicles: Autonomous Navigation Research at Carnegie Mellon” (Vol. 388). Springer Science & Business Media, 2012, ISBN: 1461563259.
  6. P. Papadakis, “Terrain traversability analysis methods for unmanned ground vehicles: A survey,” Engineering Applications of Artificial Intelligence, vol. 26, no. 4, pp. 1373–1385, 2013.
  7. H. Roncancio, M. Becker, A. Broggi, and S. Cattani, “Traversability analysis using terrain mapping and online‐trained terrain type classifier,” in Intelligent Vehicles Symposium Proceedings, 2014 IEEE, pp. 1239–1244, IEEE, 2014.
  8. S. Thrun, “Learning occupancy grid maps with forward sensor models,” Autonomous Robots, vol. 15, no. 2, pp. 111–127, 2003.
  9. P. Papadakis and F. Pirri, “3D mobility learning and regression of articulated, tracked robotic vehicles by physics‐based optimization,” in International Conference on Virtual Reality Interaction and Physical Simulation, Eurographics, Darmstadt, Germany, Dec 2012.
  10. Y. Tanaka, Y. Ji, A. Yamashita, and H. Asama, “Fuzzy based traversability analysis for a mobile robot on rough terrain,” in Proceedings of the 2015 IEEE International Conference on Robotics and Automation, 2015.
  11. B. Suger, B. Steder, and W. Burgard, “Traversability analysis for mobile robots in outdoor environments: A semi‐supervised learning approach based on 3D‐lidar data,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on, pp. 3941–3946, 2015.
  12. F. Oniga and S. Nedevschi, “Processing dense stereo data using elevation maps: Road surface, traffic isle, and obstacle detection,” IEEE Transactions on Vehicular Technology, vol. 59, pp. 1172–1182, 2010.
  13. A. Broggi, E. Cardarelli, S. Cattani, and M. Sabbatelli, “Terrain mapping for off‐road autonomous ground vehicles using rational B‐spline surfaces and stereo vision,” in Intelligent Vehicles Symposium (IV), 2013 IEEE, pp. 648–653, 2013.
  14. K. Dongshin, M. O. Sang, and M. R. James, “Traversability classification for UGV navigation: A comparison of patch and superpixel representations,” in International Conference on Intelligent Robots and Systems, San Diego, CA, pp. 3166–3173, 2007.
  15. A. Howard and H. Saraji, “Vision‐based terrain characterization and traversability assessment,” Journal of Robotic Systems, vol. 18, no. 10, pp. 577–587, 2001.
  16. S. Thrun, M. Montemerlo, and A. Aron, “Probabilistic terrain analysis for high‐speed desert driving,” in Robotics: Science and Systems, pp. 16–19, Philadelphia, USA, August 2006.
  17. M. Häselich, M. Arends, N. Wojke, F. Neuhaus, and D. Paulus, “Probabilistic terrain classification in unstructured environments,” Robotics and Autonomous Systems, vol. 61, no. 10, pp. 1051–1059, 2013.
  18. K. Iagnemma, H. Shibly, and S. Dubowsky, “On‐line terrain parameter estimation for planetary rovers,” in Robotics and Automation, 2002. Proceedings. ICRA ‘02. IEEE International Conference on, vol. 3, pp. 3142–3147, IEEE, 2002.
  19. E. Coyle and E. G. E. Jr., “A comparison of classifier performance for vibration‐based terrain classification,” tech. rep., DTIC Document, 2008.
  20. F. L. G. Bermudez, C. J. Ryan, D. W. Haldane, P. Abbeel, and R. S. Fearing, “Performance analysis and terrain classification for a legged robot over rough terrain,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 513–519, IEEE, 2012.
  21. R. B. Rusu, “Semantic 3D object maps for everyday manipulation in human living environments,” Künstliche Intelligenz, vol. 24, p. 345, 2010, doi: 10.1007/s13218-010-0059-6.
  22. M. Bellone and G. Reina, “Pavement distress detection and avoidance for intelligent vehicles,” International Journal of Vehicle Autonomous Systems, vol. 14, 2016, ISSN: 1471-0226.
  23. S. J. D. Prince, “Computer Vision: Models, Learning, and Inference.” Cambridge University Press, 1st ed., 2012.
  24. R. Szeliski, “Computer Vision: Algorithms and Applications.” Springer, 2010, ISBN: 1848829353.
  25. F. Neuhaus, D. Dillenberger, J. Pellenz, and D. Paulus, “Terrain drivability analysis in 3D laser range data for autonomous robot navigation in unstructured environments,” in 2009 IEEE Conference on Emerging Technologies & Factory Automation, Mallorca, Spain, pp. 1–4, 2009, doi: 10.1109/ETFA.2009.5347217.
  26. M. Bellone, A. Messina, and G. Reina, “A new approach for terrain analysis in mobile robot applications,” in Mechatronics (ICM), 2013 IEEE International Conference on, pp. 225–230, IEEE, 2013.
  27. M. Bellone, G. Reina, N. Giannoccaro, and L. Spedicato, “Unevenness point descriptor for terrain analysis in mobile robot applications,” International Journal of Advanced Robotic Systems, vol. 10, p. 284, 2013.
  28. J. K. Kearney, X. Yang, and S. Zhang, “Camera calibration using geometric constraints,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, California, USA, 1989.
  29. P. Puget and T. Skordas, “An optimal solution for mobile camera calibration,” in IEEE International Conference on Robotics and Automation, Cincinnati, Ohio, USA, 1990.
  30. G. Unal, A. Yezzi, S. Soatto, and G. Slabaugh, “A variational approach to problems in calibration of multiple cameras,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 8, pp. 1322–1338, Aug. 2007, doi: 10.1109/TPAMI.2007.1035.
  31. L. Heng, B. Li, and M. Pollefeys, “CamOdoCal: Automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry,” in International Conference on Intelligent Robots and Systems, Tokyo, Japan, 2013.
  32. L. Spedicato, N. I. Giannoccaro, G. Reina, and M. Bellone, “Three different approaches for localization in a corridor environment by means of an ultrasonic wide beam,” International Journal of Advanced Robotic Systems, vol. 10, pp. 163–172, March 2013.
  33. J. L. Torres, J. L. Blanco, M. Bellone, F. Rodríguez, A. Giménez, and G. Reina, “A proposed software framework aimed at energy‐efficient autonomous driving of electric vehicles,” in International Conference on Simulation, Modeling, and Programming for Autonomous Robots, Bergamo, Italy, pp. 219–230, October 2014, ISBN: 978-3-319-11899-4.
  34. D. Borrmann, J. Elseberg, K. Lingemann, A. Nüchter, and J. Hertzberg, “Globally consistent 3D mapping with scan matching,” Robotics and Autonomous Systems, vol. 56, no. 2, pp. 130–142, 2008.
  35. I. S. Kweon and T. Kanade, “High‐resolution terrain map from multiple sensor data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, pp. 278–292, 1992.
  36. J. Larson, M. Trivedi, and M. Bruch, “Off‐road terrain traversability analysis and hazard avoidance for UGVs,” in IEEE Intelligent Vehicles Symposium, Baden‐Baden, Germany, 2011.
  37. P. Pfaff, R. Triebel, and W. Burgard, “An efficient extension of elevation maps for outdoor terrain mapping,” in Proceedings of the International Conference on Field and Service Robotics (FSR), pp. 165–176, 2005.
  38. T. Ohki, K. Nagatani, and K. Yoshida, “Path planning for mobile robot on rough terrain based on sparse transition cost propagation in extended elevation maps,” in 2013 IEEE International Conference on Mechatronics and Automation (ICMA), pp. 494–499, Aug 2013.
  39. N. Vandapel, D. F. Huber, A. Kapuria, and M. Hebert, “Natural terrain classification using 3‐D ladar data,” in IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, pp. 5117–5122, 2004.
  40. M. Whitty, S. Cossell, K. S. Dang, J. Guivant, and J. Katupitiya, “Autonomous navigation using a real‐time 3D point cloud,” in 2010 Australasian Conference on Robotics and Automation, pp. 1–3, 2010.
  41. M. Bellone and G. Reina, “Road surface analysis for driving assistance,” in Workshop Proceedings of IAS‐13, 13th International Conference on Intelligent Autonomous Systems, Padova, Italy, pp. 226–234, 2014.
  42. T. Braun, H. Bitsch, and K. Berns, “Visual terrain traversability estimation using a combined slope/elevation model,” in Advances in Artificial Intelligence, Lecture Notes in Computer Science, vol. 5243, pp. 177–184, Springer, 2008, ISBN: 978-3-540-85845-4.
  43. B. Cafaro, M. Gianni, F. Pirri, M. Ruiz, and A. Sinha, “Terrain traversability in rescue environments,” in 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 1–8, Oct 2013.
  44. A. Dargazany and K. Berns, “Stereo‐based terrain traversability estimation using surface normals,” in 41st International Symposium on Robotics; Proceedings of ISR/Robotik 2014, pp. 1–7, June 2014.
  45. G. Ishigami, K. Nagatani, and K. Yoshida, “Path planning for planetary exploration rovers and its evaluation based on wheel slip dynamics,” in IEEE International Conference on Robotics and Automation, pp. 2361–2366, 2007.
  46. T. Kubota, Y. Kuroda, Y. Kunii, and T. Yoshimitsu, “Path planning for newly developed microrover,” in Proceedings 2001 ICRA, IEEE International Conference on Robotics and Automation, vol. 4, pp. 3710–3715, 2001.
  47. E. Rohmer, G. Reina, and K. Yoshida, “Dynamic simulation‐based action planner for a reconfigurable hybrid leg‐wheel planetary exploration rover,” Advanced Robotics, vol. 24, no. 8–9, pp. 1219–1238, 2010.
  48. H. Seraji, “Traversability index: A new concept for planetary rovers,” in IEEE International Conference on Robotics and Automation, Detroit, MI, USA, pp. 2006–2013, 1999.
  49. J. J. Craig, Introduction to Robotics: Mechanics and Control, vol. 3. Pearson Prentice Hall, Upper Saddle River, 2005.
  50. S. M. LaValle, “Planning Algorithms.” Cambridge University Press, 2006, ISBN: 1139455176.
  51. M. Bellone, G. Reina, N. Giannoccaro, and L. Spedicato, “3D traversability awareness for rough terrain mobile robots,” Sensor Review, vol. 34, no. 2, pp. 220–232, 2014.

Notes

  • Definition from Oxford Dictionaries.
  • Definition of fuzzy set: Given a generic set X and a membership function f:X→[0;1], the fuzzy set A is defined as A={(x,f(x))|x∈X}.
