Abstract
Watch your step! Or perhaps, watch your wheels. Whatever the robot is, if it puts its feet, tracks, or wheels in the wrong place, it might get hurt; and as robots are quickly moving from structured and completely known environments towards uncertain and unknown terrain, surface assessment becomes an essential requirement. As a result, future mobile robots cannot neglect the evaluation of the terrain structure with respect to their driving capabilities. With the objective of filling this gap, the focus of this study is laid on terrain analysis methods that can be used for robot control, with particular reference to autonomous vehicles and mobile robots. Giving an overview of the theory related to this topic, the investigation covers not only hardware, such as visual sensors or laser scanners, but also space descriptions, such as digital elevation models and point descriptors, introducing new aspects and characterizations of terrain assessment. During the discussion, a wide number of examples and methodologies are exposed according to different tools and sensors, including the description of a recent method of terrain assessment based on normal vector analysis. Indeed, normal vector analysis has demonstrated great potential in the field of terrain irregularity assessment in both on-road and off-road environments.
Keywords
- traversability
- terrain assessment
- terrain analysis
- UGV
- mobile robots
1. Introduction
According to a market analysis in the United States, the automated guided vehicle (AGV) market will be worth 2240 million dollars by 2020, due to growing automation investments across all major industries [1]. In addition, BI Intelligence estimates 10 million cars and trucks featuring self-driving capabilities by the same year [2]. Meanwhile, during the DARPA Robotics Challenge 2015, teams from universities worldwide raced their humanoid robots through challenging scenarios; a number of robots lost their balance traveling across rubble [3], and some teams even resorted to semi-autonomous operation, manually sending commands about the specific locations where the robot should place its feet. Additionally, the Curiosity rover, recently sent to Mars by NASA, demonstrates the growing use of robotic technologies in planetary exploration; such missions require a high level of reliability during surveys, as rocks and terrain irregularities may cause irreparable damage to on-board instrumentation [4].
The common element among all these types of robots is the necessity of a high level of driving capability; though motion control has made great strides, it may fail in unexpected circumstances, including road hazards, pavement distresses, and rubble. As a result, from the widely known AGVs, widespread in industry for years, to modern unmanned ground vehicles (UGVs) [5], a high level of driving capability is perceived as an essential requirement. In order to enhance robustness and reliability, future mobile robots should be designed to include custom hardware and software components that help UGVs adapt their driving behavior to surface irregularities. In robotics, the assessment of terrain conditions is generally referred to as terrain traversability analysis.
Among the number of methods and models for terrain analysis, there are at least two large categories: (i) geometry-based methods, which assess the terrain from its three-dimensional structure, and (ii) appearance-based methods, which classify the terrain from visual features. A further classification of methods commonly used in this field distinguishes between binary classification methods, which label the terrain as either traversable or not, and cost-based methods, which assign a degree of traversability to each region.
In the light of the glaring requirements of terrain analysis for future UGVs, this discussion aims at exploring some of the basic concepts of traversability, with the focus laid on geometric methods. This study introduces a definition of traversability and its application to robot control and autonomous ground vehicles. This directly leads to the contributions of this chapter, which attempts to compare different methodologies and to fill the gap between theory and practical applications, giving a definition of terrain traversability, in terms of a fuzzy set, that can be of general value, together with practical examples of foregoing functions available in the literature. Furthermore, the potentialities of novel methods based on normal vector analysis will be explored, with some practical examples of application.
The chapter is structured as follows: Section 2 provides an overview and basic knowledge of the field, with focus on related works and recent techniques for visual terrain analysis, used sensors, and space representations. Later, in Section 3, a theoretical background will help readers unfamiliar with the topic to understand the basic concepts related to robot models and state spaces, introducing a definition of traversability in terms of a fuzzy set. Examples, results, and comparisons are exposed in a thorough discussion in Section 4, which covers basic functions and recent research in the field, applied to both synthetic data and real scenarios. Conclusions are drawn in Section 5.
2. Overview
As humans rely on their five senses to know where to walk or drive, creating an implicit space representation in the brain, robots perceive and interpret the space using exteroceptive and proprioceptive transducers as a sensing aid. In order to build an effective exteroceptive traversability analysis tool, two elements are required: (i) visual sensors and (ii) a mathematical space representation. The former comprises any exteroceptive sensor, such as cameras, depth cameras, or time-of-flight sensors, which endow robots with sensing capabilities, whereas the latter provides a spatial organization of the sensory data and builds an abstract representation of the 3D environment. As a result, the approach to terrain traversability analysis may change according to the space representation, just as the available data may vary according to the type of sensor. Even though the most common methods for terrain traversability analysis are based on exteroceptive perception [9], for the sake of completeness, it is important to note that proprioceptive sensors are also successfully used for terrain analysis [18–20], measuring and interpreting quantities such as vibrations or slippage; their study, however, is outside the scope of this work.
To facilitate the comprehension of this discussion, a short review of the space representations and sensor technologies available for terrain analysis in mobile robotics is reported below.
2.1. Sensors for terrain analysis
Sensing denotes a group of techniques used in robotics to measure any physical quantity interacting with the robot; hence, any device used to acquire information can be counted in this category. Although the general concept of sensing, intended as the problem of understanding how a robot sees the world by means of a set of visual sensors, has been addressed following various approaches, in the specific topic of traversability there are a number of open issues still to be solved. In [21], the author accurately describes the problem of semantic perception for a robot operating in human-living environments, approaching the problem from the sensor and data point of view. Notwithstanding the valuable work done in the field of perception, indoor structured environments introduce a number of simplifications that are not applicable in outdoor unstructured environments. First of all, indoor scenarios are generally characterized by smooth ground surfaces and large objects representable as vertical planes. For this reason, AGVs, commonly used in indoor industrial environments, do not consider any terrain representation at all. Moreover, indoor robots generally move at low speed and, consequently, do not require any sophisticated system for terrain analysis. The situation changes completely in the case of planetary rovers [4], driving on sandy terrains featuring rocks varying in size and shape. Furthermore, recent driverless cars are quickly moving towards public roads; in such situations, rocks, road hazards, and pavement distresses may put the vehicle, and its passengers, in serious danger [22].
Since this discussion examines terrain analysis, a distinction between the acquisition and the representation of information should be made. On the one hand, space acquisition strongly depends on the typology of sensors and applications; on the other hand, its representation depends on the meaning of the perception and its content. From a purely geometrical point of view, the most primitive representation of a point in space is its 3D Euclidean coordinates. The information about the real 3D coordinates of a specific point can be obtained by triangulation techniques [23, 24] on stereocamera images, or by directly measuring its distance using time-of-flight (TOF) systems [25]. Figure 1 shows typical image sensors assembled on several UGVs in order to acquire some of the images used for the experimental discussion in this work. Specifically, Figure 1a depicts a depth sensor, the Kinect camera, used in [26] for a novel approach to terrain analysis, whereas Figure 1b shows a more sophisticated vision system designed for an agricultural tractor [27]; the red circle marks a trinocular stereocamera. Figure 1c and Figure 1d show two examples of time-of-flight sensors, a Sick laser range finder and a sonar sensing system. In the following, the technology at the base of such sensors will be briefly recalled.
2.1.1. Stereovision
Stereocameras constitute a family of cameras composed of two or more lenses with separate image sensors. They provide a visual image for each lens, and a post-elaboration step attempts to estimate the distance of each point from the sensor by matching the correspondences seen by the different lenses at the same time, simulating human binocular vision. In order to provide accurate measures, the sensors require careful calibration with respect to each other, achieved by estimating their intrinsic and extrinsic parameters.
In the literature, a large number of methods for camera calibration are available. As an example, Kearney et al. propose a calibration method using geometric constraints in [28], and Puget and Skordas present a method for optimizing the calibration [29]. Later, many researchers studied methods for fast and accurate calibration of multiple cameras [30], anticipating the most recent research on automatic calibration for cars, for example [31]. Recent sensors use more than two cameras for the triangulation in order to increase the accuracy at both short and long range. The 3D representation of the environment is inferred by detecting the same point in both camera images; the bigger the set of matched points, the richer the 3D space reconstruction.
Simplifying the concept, let $b$ be the baseline, i.e., the distance between the two camera centers, $f$ the focal length, and $d$ the disparity, i.e., the offset between the image coordinates of the same point as seen by the two lenses. The depth $z$ of the point can then be estimated by triangulation as

$$z = \frac{b\,f}{d},$$

where the accuracy decreases with the distance, since far points produce small disparities.
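To make the relation concrete, here is a minimal Python sketch that converts disparities into depth estimates; the baseline and focal-length values are hypothetical, not those of the sensors in Figure 1.

```python
import numpy as np

def stereo_depth(disparity_px, baseline_m, focal_px):
    """Estimate depth (m) from disparity (px) via z = b*f/d.

    Points with zero or negative disparity are marked invalid (NaN).
    """
    disparity = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity, np.nan)
    valid = disparity > 0
    depth[valid] = baseline_m * focal_px / disparity[valid]
    return depth

# Example: a 12 cm baseline and a 700 px focal length;
# a 35 px disparity maps to 0.12 * 700 / 35 = 2.4 m.
print(stereo_depth([35.0, 7.0, 0.0], baseline_m=0.12, focal_px=700.0))
```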
An example of a trinocular camera featuring multiple baselines can be seen in Figure 1b, where the sensor has been mounted as a visual aid on an experimental tractor [27].
2.1.2. Time‐of‐flight 3D sensors
In contrast to stereocameras, TOF-based systems, such as lasers and sonars, directly evaluate distances by measuring the delay between the emission of a signal and its return to the receiver after hitting a surface, thus estimating the true distance from the sensor to the surface. Also in this case, a simplified relation can calculate the distance between the sensor and a point in the space as follows:

$$d = \frac{v \, \Delta t}{2},$$

where $v$ is the propagation speed of the signal, i.e., the speed of light for laser scanners and the speed of sound for sonars, and $\Delta t$ is the measured round-trip time; the factor 2 accounts for the signal traveling to the surface and back.
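A similar sketch for time-of-flight ranging follows, assuming idealized propagation speeds for laser and sonar signals.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, for laser range finders
SPEED_OF_SOUND = 343.0          # m/s in air at ~20 degrees C, for sonars

def tof_distance(round_trip_s, propagation_speed):
    """Distance from a time-of-flight measurement: d = v * t / 2."""
    return propagation_speed * round_trip_s / 2.0

# A laser echo after ~66.7 ns corresponds to roughly 10 m;
# a sonar echo after ~58.3 ms corresponds to the same distance.
print(tof_distance(66.7e-9, SPEED_OF_LIGHT))   # ~10.0 m
print(tof_distance(58.3e-3, SPEED_OF_SOUND))   # ~10.0 m
```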
Thanks to their accuracy and reliability, research involving vision for mobile robots has shifted towards the use of laser technologies as an aid for space reconstruction.
2.2. Space representations
The term space representation refers to the mathematical organization of the acquired sensory data into an abstract description of the 3D environment surrounding the robot. The choice of representation affects both the memory footprint and the terrain analysis methods that can be applied; the two representations most commonly used for traversability purposes, digital elevation maps and point descriptors, are recalled below.
2.2.1. Digital elevation maps
Organizing sensor data is a mandatory step to reconstruct information for geometric interpretation purposes, and digital elevation models (DEM) [35] are widely used as a space representation in mobile robotics. Although topography and large-area terrain mapping constitute the original use of DEMs, their use for traversability analysis has proven successful in mobile robotics [4]. As a further example, Larson et al. discuss a real-time approach to analyze the traversability of off-road terrain for UGVs, considering positive and negative obstacles through elevation information [36].
DEMs have been introduced as a compact 2.5D representation of the space: the environment is discretized into a regular grid on the horizontal plane, and each cell of the grid stores the elevation of the corresponding terrain patch.
The classical DEM approach constitutes an efficient representation, but it lacks accuracy in the space description, since objects are described as surfaces using their elevation, without taking into account their real shape. For instance, a tunnel cannot be represented using a digital elevation model. As an improvement of the classical DEM approach, Pfaff et al. [37] proposed the so-called extended elevation maps, which store additional information per cell in order to also handle vertical structures and overhanging objects.
In conclusion, though suitable due to its compactness and simplicity, every DEM formalization assumes regularity of the surface and thus turns into an incomplete space representation. As a matter of fact, it fails in a large number of practical situations; nevertheless, it is extensively used in robotics since it is easily applicable on low-performance embedded controllers.
2.2.2. Point descriptors
A recent space description, used in robotics for traversability purposes, consists in representing each point simply by its 3D Cartesian coordinates [24]. Hence, let us define a point cloud as a set of points

$$\mathcal{P} = \{\mathbf{p}_1, \mathbf{p}_2, \ldots, \mathbf{p}_n\},$$

where each $\mathbf{p}_i = (x_i, y_i, z_i)^T \in \mathbb{R}^3$ stores the Cartesian coordinates of a sampled point of the environment. Given a point cloud $\mathcal{P}$ and a query point $\mathbf{p}_i$, the neighborhood of $\mathbf{p}_i$ can be defined as

$$\mathcal{N}_r(\mathbf{p}_i) = \{\mathbf{p}_j \in \mathcal{P} \,:\, \|\mathbf{p}_j - \mathbf{p}_i\| \leq r\},$$

where $r$ is the search radius. A local descriptor of $\mathbf{p}_i$ is then a function of its neighborhood,

$$D(\mathbf{p}_i) = g(\mathcal{N}_r(\mathbf{p}_i)),$$

where $g$ extracts a compact set of features describing the local geometry of the surface, such as the normal direction or the curvature. Then, the collection of descriptors computed over the whole cloud provides a characterization of the terrain that is richer than the raw coordinates alone.
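The following sketch illustrates how a radius neighborhood and a simple local descriptor could be computed with NumPy, assuming brute-force search and PCA plane fitting as the descriptor function $g$; the function names are illustrative and not taken from the cited works.

```python
import numpy as np

def radius_neighborhood(cloud, query, r):
    """Return the points of `cloud` within distance r of `query`.

    cloud: (n, 3) array of Cartesian coordinates; brute force for
    clarity (a k-d tree would be used in practice).
    """
    d = np.linalg.norm(cloud - query, axis=1)
    return cloud[d <= r]

def local_normal_descriptor(cloud, query, r):
    """A simple local descriptor: the best-fit plane normal of the
    neighborhood, i.e., the direction of smallest variance obtained
    from the SVD of the centered points (PCA plane fitting)."""
    nbrs = radius_neighborhood(cloud, query, r)
    centered = nbrs - nbrs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]  # unit normal of the fitted plane

# Toy cloud: points on the z = 0 plane -> normal ~ (0, 0, +-1).
rng = np.random.default_rng(0)
cloud = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         np.zeros(200)])
print(local_normal_descriptor(cloud, np.zeros(3), r=0.5))
```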
A possible application of point clouds for traversability analysis can be found in [14], where the authors describe a method for terrain classification using point cloud data obtained by stereovision; they propose the use of superpixels as the visual primitives for traversability estimation using a learning algorithm. A different approach can be found in [39]; here, the authors acquire information about the terrain with a LIDAR and, using local 3D point statistics, segment it into three classes: clutter, to capture grass and tree canopy; linear, to capture thin objects such as wires or tree branches; and surface, to capture solid objects such as the ground surface, rocks, or tree trunks. As a further example, in [40], the authors use a Sick lidar to acquire a point cloud and build a traversability cost-to-go function for navigation purposes.
2.2.3. A comparison among methods for terrain analysis
To finalize this overview, it is worth comparing different methods according to their use in the scientific community and providing a classification of the adopted approaches. Table 1 presents a summary of references in the field of terrain analysis and traversability, classifying them by space representation and used sensor; a full bullet indicates the assigned category. More specifically, the sensor classification distinguishes between ToF sensors and stereocameras as means to acquire information, whereas the space description classification differentiates between DEMs and point clouds, the latter category also including point descriptors.
Table 1. Comparison of the literature: the table classifies space representations and sensors used for traversability purposes. A full bullet (●) marks the assigned category; sensor columns: ToF and Stereo; space representation columns: DEM and point cloud (Pt.C).

| Reference | Application | ToF | Stereo | DEM | Pt.C |
|---|---|---|---|---|---|
| | Natural | ● | ● | ○ | ● |
| | Automotive | ● | ○ | ○ | ● |
| | Natural | ○ | ● | ● | ○ |
| | Automotive | ○ | ● | ● | ○ |
| | Search and rescue | ● | ○ | ○ | ● |
| | Natural | ○ | ● | ○ | ● |
| | Natural | ○ | ● | ○ | ● |
| | Natural | ● | ○ | ● | ○ |
| | Planetary | ● | ○ | ● | ○ |
| | Planetary | ● | ○ | ● | ○ |
| | Natural | ● | ○ | ● | ○ |
| | Automotive | ● | ○ | ○ | ● |
| | Field | ● | ○ | ● | ○ |
| | Automotive | ○ | ● | ● | ○ |
| | Search and rescue | ● | ○ | ● | ○ |
| | Natural | ● | ○ | ● | ○ |
| | Planetary | ● | ○ | ● | ○ |
| | Natural | ○ | ● | ● | ○ |
| | Natural | ● | ○ | ○ | ● |
| | Automotive | ● | ○ | ○ | ● |
| | Natural | ○ | ● | ○ | ● |
| | Field | ● | ○ | ○ | ● |
This analysis suggests that both DEMs and point clouds are used for traversability analysis; however, the use of point clouds for terrain traversability can be considered a possible trend, since recent research is moving in this direction. On the contrary, DEMs constitute a stable and robust tool, widely used in all fields of robotics, and recent extensions can still be found: in [38], for instance, the authors use extended elevation maps as an improvement over classical DEMs. Historically, the predominant application of traversability analysis is in natural outdoor environments, where the assumption of surface regularity does not hold. Only recently has the study of surfaces gained interest in the automotive sector, in which all the research is quite recent, since this technology was not previously required in the field. Possible uses include pavement distress detection [41], sidewalk detection [12], and the segmentation of terrain inliers and outliers for obstacle detection [13].
From the sensor point of view, laser scanners are commonly used for specific applications such as planetary exploration or search and rescue, whereas stereocameras are preferred in applications where the cost-effectiveness of cameras is attractive. However, it is important to note that ToF sensors are commonly used for geometry-based traversability techniques, whereas cameras are used in the case of appearance-based classification.
3. Terrain traversability analysis
From a dictionary definition, the word "traversability" denotes the quality of being traversable, that is, able to be travelled across or through (definition from Oxford Dictionaries; see Notes).
Though descriptive and valuable, this definition only provides the ingredients to reach a more general and formal definition of traversability. First of all, it is important to consider a few aspects: (i) a robot model, including its motion constraints, (ii) a space representation, for example, the terrain model, and (iii) a set of criteria to express the traversability properties. All these concepts will be recalled later.
Since this topic is attracting further research, a more general definition of traversability was later given by Cafaro et al. [43]. The authors have made valuable work on the theory of space description using point clouds, introducing formal definitions of the concepts underlying traversability.
3.1. Robot models and configuration space
From the basics of control theory, it is well known that robot control includes three different, but fundamental, items: process, controller, and sensors. This concept perfectly describes the ancient meaning of the word automaton, i.e., a machine able to sense and act by itself.
The physical description of robots in control theory is typically expressed through a process and a state space. Thus, given the state vector $\mathbf{x} \in X$ and the input vector $\mathbf{u} \in U$, the evolution of the process can be written in the general form

$$\dot{\mathbf{x}} = f(\mathbf{x}, \mathbf{u}).$$

The function $f$ models the dynamics of the process, whereas the state space $X$ collects all the states the system can assume. In robotics, the space of all possible configurations of the robot is usually referred to as the configuration space, or C-Space.
For the sake of clarity, let us mention an example: the state space for a planar vehicle may be defined as $X = \mathbb{R}^2 \times S^1$, where each state $(x, y, \theta)$ describes the position of the vehicle on the plane together with its heading angle $\theta$.
Now, let us suppose that the C-Space contains a forbidden region, usually called the obstacle region, $\mathcal{C}_{obs} \subset \mathcal{C}$. The obstacle region constitutes the set of all robot's configurations that collide with obstacles in the environment; accordingly, the remaining set of allowed configurations, called the free space, is defined as $\mathcal{C}_{free} = \mathcal{C} \setminus \mathcal{C}_{obs}$.
This discussion does not pretend to be a complete description of spaces and sets; it only gives the preliminary knowledge needed for the reading of this text. For additional details about assumptions, demonstrations, and definitions, please refer to [50], a relevant reference in the field.
The reason for the diffusion of C-Spaces in robotics research resides in the possibility of describing them as manifolds, i.e., topological spaces that behave at every point like our intuitive notion of a surface; indeed, the best way of describing the terrain is to consider its topological properties. Hence, considering a ground vehicle, the configuration space cannot be other than the terrain region it is driving on, described as a manifold.
3.2. Traversability characterization
The previous theory considers the robot moving in a configuration space $\mathcal{C}$, where each configuration is simply classified as belonging either to the free space or to the obstacle region.
Nevertheless, we are looking for a more general definition; thus, traversability can be seen as the capability to travel across or through, which implies that the aforementioned binary definition could be extended. Indeed, a set could be forbidden (i.e., not traversable at all) or partially forbidden (i.e., traversable with some grade of membership). This clearly recalls fuzzy logic: given a generic set $X$ and a membership function $f: X \rightarrow [0, 1]$, the fuzzy set $A$ is defined as $A = \{(x, f(x)) \,|\, x \in X\}$ (see Notes). Accordingly, the traversable set can be defined as the fuzzy set $\mathcal{T} = \{(q, \tau(q)) \,|\, q \in \mathcal{C}\}$, where the membership function $\tau: \mathcal{C} \rightarrow [0, 1]$ expresses the grade of traversability of each configuration $q$.
First of all, let us note that the traversable set is included in the C-Space by definition, i.e., $\mathcal{T} \subseteq \mathcal{C}$.
The aforementioned definition considers all the elements previously indicated, i.e., a robot model, a space representation given by the C-Space $\mathcal{C}$, and a set of criteria expressed through the membership function $\tau$.
In order to better clarify the concept, Figure 3 shows the difference between a simple occupancy map, in Figure 3a, where free space and obstacles are clearly distinguished through a binary black/white classification, and the fuzzy set in Figure 3b, which better characterizes the terrain according to the membership function $\tau$.
4. Discussion
As the definition of traversability previously introduced can be of general value for geometry-based terrain analysis purposes, how to use it to build practical traversability functions will be shown below, including the redefinition of classical methods such as elevation models and roughness models. The exposed examples cover both binary classification methods and cost-based assessment methods. Along the discussion, an irregular terrain model in the form of a DEM of about 20 m × 20 m, featuring a 0.25 m grid size, has been used to compare the different methods. Let us note that the sample terrain model, expressed as a DEM, is stored in an 80 × 80 matrix, that is, 6400 elements. The same data in the form of a point cloud, storing only the points' Cartesian coordinates, would take 6400 × 3 values. This clearly demonstrates the advantage of handling DEMs instead of point clouds; however, using DEMs, part of the information is lost due to the assumption of terrain regularity, which is not always applicable. Moreover, ToF sensors, as well as stereocamera triangulation, always provide a set of distances between the sensor and sampled points in the space, that is, a point cloud; thus, a transformation, with its computational cost, is required to build the digital map.
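As an illustrative sketch of this transformation, the following code rasterizes a point cloud into a DEM matrix under the assumptions stated above (a 20 m × 20 m area with a 0.25 m grid); keeping the maximum elevation per cell is a hypothetical aggregation choice.

```python
import numpy as np

def cloud_to_dem(cloud, grid_size=0.25, size_m=20.0):
    """Rasterize a point cloud into a DEM, keeping the maximum
    elevation per cell (cells without points stay NaN)."""
    n = int(size_m / grid_size)           # 80 cells per side here
    dem = np.full((n, n), np.nan)
    ij = np.floor(cloud[:, :2] / grid_size).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
    for (i, j), z in zip(ij[ok], cloud[ok, 2]):
        dem[i, j] = z if np.isnan(dem[i, j]) else max(dem[i, j], z)
    return dem

# 6400 x 3 coordinates collapse into an 80 x 80 elevation matrix.
rng = np.random.default_rng(1)
cloud = np.column_stack([rng.uniform(0, 20, 6400),
                         rng.uniform(0, 20, 6400),
                         rng.normal(0, 0.1, 6400)])
print(cloud_to_dem(cloud).shape)  # (80, 80)
```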
4.1. Binary classification for traversability
Let us consider the example of a binary classification and apply the aforementioned definition to find a membership function of the form

$$\tau(q) = \begin{cases} 1 & \text{if } q \in \mathcal{C}_{free} \\ 0 & \text{otherwise.} \end{cases}$$
Note that, even though this function is the simplest possible, it works regardless of the particular structure of the C-Space, and it converges into the general theory of configuration spaces. However, in practical cases, the free space is expected to be explicitly expressed, as the following example shows.
As an example of functionality, Figure 4 presents a binary classification applied to the sample terrain model. For the sake of the example, the free space has been defined as the set of configurations whose elevation does not exceed a given threshold, so that every cell of the map is marked either as traversable or as an obstacle.
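A minimal sketch of such a binary membership function over a DEM, assuming the threshold-based free space of this example (the 0.5 m value is purely illustrative), may read as follows.

```python
import numpy as np

def binary_traversability(dem, h_max=0.5):
    """Binary membership over a DEM: tau = 1 where the cell belongs to
    the free space, 0 where it belongs to the obstacle region. The
    free space is the hypothetical threshold-based one described
    above, with h_max = 0.5 m as an illustrative robot capability."""
    return (np.abs(dem) <= h_max).astype(float)

# Example: three cells at 0.1 m, 0.4 m, and 0.9 m elevation.
print(binary_traversability(np.array([0.1, 0.4, 0.9])))  # [1. 1. 0.]
```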
4.2. Elevation terrain model for traversability
Typically used in mobile robotics, elevation models may be described using the formulation in Eq. (9). Let us suppose to have a ground vehicle that can move in three-dimensional space. As indicated earlier, its configuration space can be expressed as the terrain manifold itself, so that each configuration $q$ corresponds to a position on the ground with associated elevation $h(q)$. An elevation-based membership function can then be written as

$$\tau(q) = 1 - \min\left(\frac{|h(q)|}{h_{max}},\, 1\right), \qquad (12)$$

where $h(q)$ is the terrain elevation at configuration $q$ and $h_{max}$ is the maximum elevation the robot is able to negotiate.
One should note that in Eq. (12), if $h(q) \rightarrow 0$, then $\tau(q) \rightarrow 1$, meaning that flat regions are considered fully traversable, whereas elevations equal to or higher than $h_{max}$ yield $\tau(q) = 0$, marking the configuration as not traversable.
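Under the same assumptions, the graded elevation-based membership of Eq. (12) can be sketched as follows.

```python
import numpy as np

def elevation_traversability(dem, h_max=0.5):
    """Graded membership in the saturated form of Eq. (12):
    tau = 1 - min(|h|/h_max, 1). h_max = 0.5 m is illustrative."""
    return 1.0 - np.minimum(np.abs(dem) / h_max, 1.0)

# The same three cells now receive graded values instead of 0/1.
print(elevation_traversability(np.array([0.1, 0.4, 0.9])))  # [0.8 0.2 0.]
```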
An example of this type of analysis is reported in Figure 5, where the values of $\tau$ are shown in color scale for the sample terrain model.
4.3. Traversability model based on roughness index
A widely used approach for geometry- and cost-based terrain traversability analysis consists in the definition of the roughness index [47]. It is defined as the standard deviation of the elevation values in a specific region of the terrain, given by the projection of the robot shape on the ground.
Given a terrain region considered as free space, let $h_1, \ldots, h_N$ be the elevation values of the $N$ cells falling within the projection of the robot shape on the ground, centered at configuration $q$. The roughness index is then the standard deviation

$$\sigma(q) = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (h_i - \bar{h})^2},$$

where $\bar{h}$ is the mean elevation of the region. A roughness-based membership function can accordingly be written as

$$\tau(q) = 1 - \min\left(\frac{\sigma(q)}{\sigma_{max}},\, 1\right), \qquad (14)$$

where $\sigma_{max}$ is the maximum roughness value the robot is able to negotiate. As in the previous case, Eq. (14) tends to 0 as $\sigma(q)$ approaches $\sigma_{max}$, i.e., highly irregular regions are marked as not traversable, whereas flat regions yield values close to 1.
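A sliding-window sketch of this computation, assuming the reconstructed form of Eq. (14) and illustrative parameter values, may read as follows.

```python
import numpy as np

def roughness_traversability(dem, window=8, sigma_max=0.2):
    """Roughness-based membership in the form of Eq. (14): for each
    cell, compute the standard deviation of the elevations inside a
    window x window patch (the projected robot footprint) and map it
    to tau = 1 - min(sigma/sigma_max, 1). sigma_max = 0.2 m is an
    illustrative capability value; window = 8 matches the 8 x 8-cell
    footprint used for Figure 6."""
    n, m = dem.shape
    tau = np.zeros((n, m))
    half = window // 2
    for i in range(n):
        for j in range(m):
            patch = dem[max(0, i - half):i + half + 1,
                        max(0, j - half):j + half + 1]
            tau[i, j] = 1.0 - min(np.std(patch) / sigma_max, 1.0)
    return tau

# A flat 80 x 80 DEM is fully traversable everywhere.
print(roughness_traversability(np.zeros((80, 80))).min())  # 1.0
```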
Figure 6 shows an example of a traversability map obtained using the roughness index; for the sake of this calculation, the robot has been considered to cover an area of about 8 × 8 cells of the map, which, with a grid size of 0.25 m, corresponds to 2 meters in size. Computing the standard deviation over a terrain region sized according to the robot's geometry may be considered a robust method and, for this reason, it is widely used in practical applications. One should note a difference with respect to the pure elevation analysis: the roughness analysis marks as irregular and dangerous a specific region of the terrain corresponding to a local surface minimum. This evaluation agrees with the reality that a robot may get stuck in a hole. On the contrary, the same analysis does not mark as irregular the peak of the hill, which may be perfectly traversable as upland. However, also this method may fail in the simple case of a sloped surface which, though regular and traversable, may present high variance in its elevation [51].
4.4. Unevenness point descriptor‐based model
As an alternative analysis to solve the problems related to the variance of the elevation in sloped regular surfaces, the use of normal vectors to estimate surface irregularities was presented in [27], where the authors defined the unevenness point descriptor (UPD) as a simple choice to extract traversability information from 3D point cloud data. Specifically, the UPD describes surfaces using a normal analysis in a neighborhood, resulting in an efficient description of both irregularities and inclination.
Summarizing the concept, let $\mathcal{P}$ be a point cloud and $\mathcal{N}_r(\mathbf{p}_i)$ the neighborhood of radius $r$ of a query point $\mathbf{p}_i$, as defined in Section 2.2.2, and let $\hat{\mathbf{n}}_j$ denote the unit normal vector estimated at each point $\mathbf{p}_j \in \mathcal{N}_r(\mathbf{p}_i)$. The UPD is built on the vector sum of the local normals,

$$\mathbf{r}_i = \sum_{j=1}^{k} \hat{\mathbf{n}}_j,$$

where $k$ is the number of points in the neighborhood. The components of $\mathbf{r}_i$ express the local inclination of the surface, whereas its magnitude measures the local regularity: on a perfectly even surface all the normals are parallel and $\|\mathbf{r}_i\| = k$, so that the unevenness index can be defined as

$$\iota_i = \frac{k}{\|\mathbf{r}_i\|},$$

where $\iota_i \geq 1$, and it grows as the surface becomes more irregular.
To bring the unevenness index into the definition of a traversable region, we can consider as given the C-Space $\mathcal{C}$ and define the membership function as

$$\tau(q) = \begin{cases} \dfrac{1}{\iota(q)} & \text{if } \iota(q) \leq \iota_{max} \\ 0 & \text{otherwise,} \end{cases}$$

where $\iota(q)$ is the unevenness index evaluated in the neighborhood of configuration $q$ and $\iota_{max}$ is the maximum unevenness the robot is able to negotiate.
Now, let us observe that, in its original form, the UPD accounts for the robot model through the parameter $r$, the radius of the neighborhood, which should be chosen according to the size of the robot.
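The following sketch illustrates the idea behind the unevenness index, assuming PCA-based normal estimation; it is a conceptual illustration, not the authors' implementation from [27].

```python
import numpy as np

def upd_unevenness(cloud, query, r=1.0):
    """Sketch of the UPD concept: estimate a unit normal around each
    neighbor via PCA plane fitting, sum the normals, and measure how
    far the sum falls short of perfect alignment.

    Returns k / ||sum of normals||: ~1 on even ground, larger on
    irregular ground. Brute-force search is used for clarity."""
    nbrs = cloud[np.linalg.norm(cloud - query, axis=1) <= r]
    normals = []
    for p in nbrs:
        local = nbrs[np.linalg.norm(nbrs - p, axis=1) <= r / 2]
        if len(local) < 3:
            continue  # not enough points to fit a plane
        centered = local - local.mean(axis=0)
        _, _, vt = np.linalg.svd(centered)
        n = vt[-1]
        normals.append(n if n[2] >= 0 else -n)  # orient upwards
    normals = np.asarray(normals)
    return len(normals) / np.linalg.norm(normals.sum(axis=0))

# Flat patch -> index close to 1 (even surface).
rng = np.random.default_rng(0)
flat = np.column_stack([rng.uniform(-1, 1, (300, 2)),
                        np.zeros((300, 1))])
print(upd_unevenness(flat, np.zeros(3)))  # ~1.0
```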
An example of the UPD analysis, for the same sample terrain model, is reported in Figure 7; for the sake of visibility, the values of the traversability function are reported in color scale.
As a last example, in Figure 8, the UPD has been applied to a point cloud obtained by triangulation on a stereocamera in a real environment; the value of the traversability function is reported in color scale, whereas the left-camera image of the scenario is reported in Figure 8a. This scene has been extracted from a dataset thoroughly analyzed in [51]. It is interesting to note that the presented scenario features a ramp to access an indoor structure. The ramp is considered as regular by the UPD analysis, whereas it may be misinterpreted considering the elevation model as well as the roughness index. All the borders are correctly detected as not-traversable regions. As a matter of fact, Figure 8d and Figure 8e present the same scenario described using a DEM and the traversability function in Eq. (14). The misinterpretation of the scenario leads to the erroneous classification of the ramp to access the building behind it as fully not traversable. On the contrary, in Figure 8b and Figure 8c, the scene is properly interpreted using the UPD approach.
5. Conclusion and further extensions
Throughout the chapter, different methods of geometry-based traversability analysis for mobile robotics have been explored. A thorough review of the topic suggests that the future trend of sensors and space descriptions for traversability purposes will rely on point clouds and time-of-flight sensors, or stereo 3D reconstruction. The necessity to improve the description of the terrain, removing the assumption of regularity, will bring robots towards a full 3D reconstruction of the environment, at least in short-range visibility. Among the different methods analyzed in the discussion, the UPD has demonstrated the highest recognition capability, even though it can be costly in terms of computational performance. The contributions of this work are as follows: (i) a review of the field with a comparison among technologies, (ii) a new definition of traversability that can be of general value for robot navigation purposes, and (iii) a comparison among literature methods, including practical examples.
To conclude this chapter, it is worth giving some possible extensions of this work and future developments. One of them could be the definition of traversable regions in terms of probability. Indeed, it should be possible to include a probability function in terms of risk of collision or probability of traverse, in which high values refer to a minimum probability of collision (i.e., maximum traversing probability) and low values imply a maximum probability of collision (i.e., minimum traversing probability). Moreover, the traversable regions as defined in this chapter may fit navigation purposes using the common potential fields, where the potential function would consider traversable regions as "attractive," whereas "repulsive" regions would coincide with low values of the traversability function. Literature in this field typically considers potential functions that use the distance from obstacles instead of a complete traversability description.
References
- 1.
Automated Guided Vehicle Market by Type – Global Forecast to 2020. Marketsandmarkets, 2015. - 2.
J. Greenough, The self‐driving car report: forecasts, tech timelines, and the benefits and barriers that will impact adoption. BI Intelligence, 2015. - 3.
H. A. Yanco, A. Norton, W. Ober, D. Shane, A. Skinner, and J. Vice, “Analysis of human‐robot interaction at the darpa robotics challenge trials,” Journal of Field Robotics, vol. 32, no. 3, pp. 420–444, 2015. - 4.
Ellery, A. (2015). “Planetary Rovers: Robotic Exploration of the Solar System”. Springer, ISBN: 978–3–642–03258–5. - 5.
M. H. Hebert, C. E. Thorpe, and A. Stentz, “Intelligent unmanned ground vehicles: autonomous navigation research at Carnegie Mellon” (Vol. 388). Springer Science & Business Media, Eds. 2012, ISBN:1461563259. - 6.
P. Papadakis, “Terrain traversability analysis methods for unmanned ground vehicles: A survey,” Engineering Applications of Artificial Intelligence, vol. 26, no. 4, pp. 1373–1385, 2013. - 7.
H. Roncancio, M. Becker, A. Broggi, and S. Cattani, “Traversability analysis using terrain mapping and online‐trained terrain type classifier,” in Intelligent Vehicles Symposium Proceedings, 2014 IEEE , pp. 1239–1244, IEEE, 2014. - 8.
S. Thrun, “Learning occupancy grid maps with forward sensor models,” Autonomous Robots, vol. 15, no. 2, pp. 111–127, 2003. - 9.
P. Papadakis, F. Pirri “3D Mobility Learning and Regression of Articulated, Tracked Robotic Vehicles by Physics–based Optimization” International conference on Virtual Reality Interaction and Physical Simulation, Eurographics, Dec 2012, Darmstadt, Germany. - 10.
Y. Tanaka, Y. Ji, A. Yamashita, and H. Asama, “Fuzzy based traversability analysis for a mobile robot on rough terrain,” in Proceedings of the 2015 IEEE International Conference on Robotics and Automation, 2015. - 11.
B. Suger, B. Steder, and W. Burgard, “Traversability analysis for mobile robots in outdoor environments: A semi‐supervised learning approach based on 3d‐lidar data,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on, pp. 3941‐3946, 2015. - 12.
F. Oniga and S. Nedevschi, “Processing dense stereo data using elevation maps: Road surface, traffic isle, and obstacle detection,” IEEE Transactions on Vehicular Technology, vol. 59, pp. 1172–1182, 2010. - 13.
A. Broggi, E. Cardarelli, S. Cattani, and M. Sabbatelli, “Terrain mapping for off‐road autonomous ground vehicles using rational b‐spline surfaces and stereo vision,” in Intelligent Vehicles Symposium (IV), 2013 IEEE, pp. 648–653, 2013. - 14.
K. Dongshin, M. O. Sang, and M. R. James, “Traversability classification for ugv navigation: A comparison of patch and superpixel representations,” (San Diego, CA), pp. 3166‐3173, International Conference on Intelligent Robots and Systems, 2007. - 15.
A. Howard and H. Seraji, “Vision‐based terrain characterization and traversability assessment,” Journal of Robotic Systems, vol. 18, no. 10, pp. 577–587, 2001. - 16.
S. Thrun, M. Montemerlo, and A. Aron, “Probabilistic Terrain Analysis For High–Speed Desert Driving” In Robotics: Science and Systems, pp. 16–19, Philadelphia, USA, August 2006. - 17.
M. Häselich, M. Arends, N. Wojke, F. Neuhaus, and D. Paulus, “Probabilistic terrain classification in unstructured environments,” Robotics and Autonomous Systems, vol. 61, no. 10, pp. 1051–1059, 2013. - 18.
K. Iagnemma, H. Shibly, and S. Dubowsky, “On‐line terrain parameter estimation for planetary rovers,” in Robotics and Automation, 2002. Proceedings. ICRA ‘02. IEEE International Conference on, vol. 3, pp. 3142–3147, IEEE, 2002. - 19.
E. Coyle and E. G. E. Jr., “A comparison of classifier performance for vibration‐based terrain classification,” tech. rep., DTIC Document, 2008. - 20.
F. L. G. Bermudez, C. J. Ryan, D. W. Haldane, P. Abbeel, and R. S. Fearing, “Performance analysis and terrain classification for a legged robot over rough terrain,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 513–519, IEEE, 2012. - 21.
R. B. Rusu, Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. Künstl Intell (2010) 24: 345. doi:10.1007/s13218–010–0059–6. - 22.
M. Bellone and G. Reina, “Pavement distress detection and avoidance for intelligent vehicles”, International Journal of Vehicle Autonomous Systems, 2016, Vol. 14, ISSN: 1471–0226. - 23.
J. D. S. Prince, “Computer Vision: Models, Learning, and Inference” Cambridge University Press, 1st ed., 2012. - 24.
R. Szeliski, “Computer Vision: Algorithm and applications” Springer, 2010, ISBN1848829353. - 25.
F. Neuhaus, D. Dillenberger, J. Pellenz and D. Paulus, “Terrain drivability analysis in 3D laser range data for autonomous robot navigation in unstructured environments,” 2009 IEEE Conference on Emerging Technologies & Factory Automation, Mallorca, Spain, 2009, pp. 1–4, doi: 10.1109/ETFA.2009.5347217. - 26.
M. Bellone, A. Messina, and G. Reina, “A new approach for terrain analysis in mobile robot applications,” in Mechatronics (ICM), 2013 IEEE International Conference on, pp. 225–230, IEEE, 2013. - 27.
M. Bellone, G. Reina, N. Giannoccaro, and L. Spedicato, “Unevenness point descriptor for terrain analysis in mobile robot applications,” International Journal of Advanced Robotic Systems, vol. 10, p. 284, 2013. - 28.
J. K. Kearney, X. Yang, and S. Zhang, “Camera calibration using geometric constraints,” (San Diego, California, USA), IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1989. - 29.
P. Puget and T. Skordas, “An optimal solution for mobile camera calibration,” (Cincinnati, Ohio, USA), IEEE International Conference on Robotics and Automation, 1990. - 30.
G. Unal, A. Yezzi, S. Soatto and G. Slabaugh, “A Variational Approach to Problems in Calibration of Multiple Cameras,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 8, pp. 1322–1338, Aug. 2007, doi: 10.1109/TPAMI.2007.1035. - 31.
L. Heng, B. Li, and M. Pollefeys, “Camodocal: Automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry,” (Tokyo, Japan), International Conference on Intelligent Robots and Systems, 2013. - 32.
L. Spedicato, N. I. Giannoccaro, G. Reina, and M. Bellone, “Three different approaches for localization in a corridor environment by means of an ultrasonic wide beam”, International Journal of Advanced Robotic Systems, vol. 10, pp. 163–172, March 2013. - 33.
J.L. Torres, J.L. Blanco, M. Bellone, F. Rodrìguez, A. Gimènez, and G. Reina, “A proposed software framework aimed at energy-efficient autonomous driving of electric vehicles,” International Conference on Simulation, Modeling, and Programming for Autonomous Robots, Bergamo, Italy, October 2014, pp. 219–230, ISBN 978–3–319–11899–4. - 34.
D. Borrmann, J. Elseberg, K. Lingemann, A. Nüchter, and J. Hertzberg, “Globally consistent 3d mapping with scan matching,” Robotics and Autonomous Systems, vol. 56, no. 2, pp. 130–142, 2008. - 35.
I. S. Kweon and T. Kanade, “High‐resolution terrain map from multiple sensor data,” IEEE Transaction on Pattern and Machine Intelligence, vol. 14, pp. 278–292, 1992. - 36.
J. Larson, M. Trivedi, and M. Bruch, “Off‐road terrain traversability analysis and hazard avoidance for ugvs,” (Baden‐Baden, Germany), IEEE Intelligent Vehicles Symposium, 2011. - 37.
P. Pfaff, R. Triebel, and W. Burgard, “An efficient extension of elevation maps for outdoor terrain mapping,” In Proceedings of the International Conference on Field and Service Robotics (FSR), pp. 165–176, 2005. - 38.
T. Ohki, K. Nagatani, and K. Yoshida, “Path planning for mobile robot on rough terrain based on sparse transition cost propagation in extended elevation maps,” pp. 494–499, 2013 IEEE International Conference on Mechatronics and Automation (ICMA), Aug 2013. - 39.
N. Vandapel, D. F. Huber, A. Kapuria, and M. Hebert, “Natural terrain classification using 3‐d ladar data,” (New Orleans, LA, USA), pp. 5117–5122, IEEE International Conference on Robotics and Automation, 2004. - 40.
M. Whitty, S. Cossell, K. S. Dang, J. Guivant, and J. Katupitiya, “Autonomous navigation using a real‐time 3d point cloud,” in 2010 Australasian Conference on Robotics and Automation, pp. 1–3, 2010. - 41.
M. Bellone and G. Reina, “Road surface analysis for driving assistance,” in Workshop Proceedings of IAS‐13 13th International Conference on Intelligent Autonomous Systems Padova (Italy), pp. 226–234, 2014. - 42.
T. Braun, H. Bitsch, and K. Berns, “Visual terrain traversability estimation using a combined slope/elevation model”, Advances in Artificial Intelligence Volume 5243 of the series Lecture Notes in Computer Science pp 177–184, Springer, 2008, ISBN 978–3–540–85845–4. - 43.
B. Cafaro, M. Gianni, F. Pirri, M. Ruiz, and A. Sinha, “Terrain traversability in rescue environments,” in 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 1–8, Oct 2013. - 44.
A. Dargazany and K. Berns, “Stereo‐based terrain traversability estimation using surface normals,” in 41st International Symposium on Robotics; Proceedings of ISR/Robotik 2014, pp. 1–7, June 2014. - 45.
G. Ishigami, K. Nagatani, and K. Yoshida, “Path planning for planetary exploration rovers and its evaluation based on wheel slip dynamics,” in IEEE International Conference on Robotics and Automation, pp. 2361–2366, 2007. - 46.
T. Kubota, Y. Kuroda, Y. Kunii, and T. Yoshimitsu, “Path planning for newly developed microrover,” in Proceedings 2001 ICRA IEEE International Conference on Robotics and Automation, vol. 4, pp. 3710–3715, 2001. - 47.
E. Rohmer, G. Reina, and K. Yoshida, “Dynamic simulation‐based action planner for a reconfigurable hybrid leg‐wheel planetary exploration rover,” Advanced Robotics, vol. 24, no. 8–9, pp. 1219–1238, 2010. - 48.
H. Seraji, “Traversability index: A new concept for planetary rovers,” (Detroit, MI, USA), pp. 2006–2013, IEEE International Conference on Robotics and Automation, 1999. - 49.
J. J. Craig, Introduction to Robotics: Mechanics and Control, vol. 3. Pearson Prentice Hall, Upper Saddle River, 2005. - 50.
S. M. LaValle “Planning Algorithms” Cambridge University Press, 2006, ISBN1139455176. - 51.
M. Bellone, G. Reina, N. Giannoccaro, and L. Spedicato, “3d traversability awareness for rough terrain mobile robots,” Sensor Review, vol. 34, no. 2, pp. 220–232, 2014.
Notes
- Definition from Oxford Dictionaries.
- Definition of fuzzy set: Given a generic set X and a membership function f:X→[0;1], the fuzzy set A is defined as A={(x,f(x))|x∈X}.