Open access

Introductory Chapter: Autonomous Mobile Mapping Robots – Current State and Future Real-World Challenges

Written By

Janusz Będkowski

Published: 10 May 2023

DOI: 10.5772/intechopen.110085

From the Edited Volume

Autonomous Mobile Mapping Robots

Edited by Janusz Będkowski


1. Introduction

The concept of autonomous mobile mapping robots builds on many technological advances, such as AI (Artificial Intelligence), sensors, locomotion, path planning, and data fusion. A great example of the research journey toward autonomous robots is the autonomous car concept with its defined levels of autonomy. The first level assumes driver-support functionalities that improve overall safety, and the last level assumes no steering wheel inside the vehicle. Obviously, mapping the world with a single car is practically impossible due to the coverage required to build such a digital representation. For this reason, autonomous cars will use existing maps and, if necessary, provide updates to them. A great example of an autonomous mobile mapping robot is a drone capable of executing flying missions and delivering data for further offline 3D map processing. Such a robot flies autonomously through a predefined path and records all the necessary data. Most of these robots are not capable of avoiding obstacles, but this is rarely an issue, since in most applications the sky is mostly empty space. Another example of an autonomous mobile machine is a cleaning robot performing a cleaning mission along a predefined path. In this application, the robot can be equipped with AI (Artificial Intelligence) capabilities to re-plan the path in response to sudden events, such as an obstacle or unexpected dirt on the path. Most potential applications of autonomous mobile mapping robots relate to scenarios where human activity has to be reduced or is even impossible, such as space exploration or NPP (Nuclear Power Plant) inspection under the high radiation caused by an accident.

Recent advances in robotic perception, such as non-repetitive scan pattern lidars [1], show a great decrease in lidar cost, making it an affordable solution for massive autonomous robotic mobile mapping applications, such as aerial 3D mapping [2]. Non-repetitive scan pattern lidars are comparable with multi-beam lidars in terms of accuracy and temporal stability [3]. The literature shows great interest in this affordable perception, which plays an important role in expanding the building of 3D maps [4] in many applications. Until recently, it was not obvious that these narrow field-of-view lidars would be competitive with multi-beam lidars. Thanks to an integrated IMU (Inertial Measurement Unit), it is possible to perform online lidar odometry [5] without any additional engineering effort. Thus, recent advances in mobile mapping algorithms show great improvement in lidar 3D mapping, even from the perspective of other applications such as ADAS (Advanced Driver Assistance Systems) [6] (related to future autonomous driving).

Autonomous mobile mapping robots play important roles in many commercial applications, such as aerial mapping, underground mapping, search and rescue, space exploration, delivery robotics, and many others. These applications produce 3D maps with the trajectory as a core component. A trajectory is a set of consecutive poses (translation and rotation) with timestamps. These timestamps are crucial for retrieving the pose for every measurement. Once we have the trajectory and the calibration parameters of the mounted sensors (lidars, IMU, and cameras), it is possible to reconstruct the map. It is important to mention that we do not assume that a single source of truth (e.g., derived from a GNSS signal) exists; thus, solving a SLAM (Simultaneous Localization and Mapping) problem is required. Obviously, on the one hand, we can consider GNSS as ground truth; on the other hand, a comprehensive study in the literature shows that such a single source of truth can be improved by a data fusion approach [7]. Autonomous mobile robots equipped with multi-modal sensors (lidar, camera, and IMU) should account for the fact that a single source of truth does not exist; thus, these machines should be equipped with a data fusion capability. This solution provides an optimal result by jointly combining the trajectory, calibration, observations, and maps.
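The trajectory-plus-calibration reconstruction described above can be sketched in a few lines. The function names and the simple interpolation scheme below are illustrative assumptions, not an implementation from this chapter: each lidar point carries a timestamp, the body pose at that timestamp is interpolated from the trajectory, and the point is moved into the map frame via T_world_body(t) · T_body_sensor · p_sensor.

```python
import numpy as np

def interpolate_pose(t, timestamps, poses):
    """Return a 4x4 pose at time t: translation linearly interpolated
    between the bracketing trajectory poses, rotation taken from the
    nearer pose (a simplification; real systems use quaternion SLERP)."""
    i = int(np.clip(np.searchsorted(timestamps, t), 1, len(timestamps) - 1))
    t0, t1 = timestamps[i - 1], timestamps[i]
    a = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
    pose = poses[i - 1].copy() if a < 0.5 else poses[i].copy()
    # Blend only the translation column.
    pose[:3, 3] = (1 - a) * poses[i - 1][:3, 3] + a * poses[i][:3, 3]
    return pose

def reconstruct_map(points, point_times, timestamps, poses, calib):
    """Transform sensor-frame points into the map frame:
    p_map = T_world_body(t) @ T_body_sensor @ p_sensor."""
    out = []
    for p, t in zip(points, point_times):
        T = interpolate_pose(t, timestamps, poses) @ calib
        out.append((T @ np.append(p, 1.0))[:3])
    return np.array(out)
```

A point measured halfway between two poses is thus placed halfway along the trajectory segment, which is why accurate per-measurement timestamps are crucial for map quality.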


2. Historical perspective

The core component of an autonomous mobile mapping robot is its SLAM capability [8, 9]. The term SLAM [4, 5] corresponds to the so-called “chicken and egg dilemma”: which came first, the chicken or the egg? The robot should be equipped with an efficient map representation to localize within the map, and at the same time it should provide accurate localization for building that map. SLAM is considered an already solved mathematical problem, but looking at many robotic challenges, it is evident that an efficient implementation does not exist. Recent advances in lidar technology show a great positive impact.
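The chicken-and-egg coupling can be made concrete with a toy pose graph: odometry edges link consecutive poses, a loop closure links the last pose back to the first, and all poses are estimated jointly by least squares, so the map (here, the pose estimates) and the localization constraints are resolved together. The 1-D formulation, function name, and numbers below are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def solve_pose_graph(n, edges, anchor=0):
    """Estimate n scalar poses from relative measurements.
    edges: list of (i, j, z) meaning 'x_j - x_i was measured as z'.
    Minimizes the sum of squared residuals (x_j - x_i - z)^2,
    with x_anchor softly fixed at 0 to remove the gauge freedom."""
    A, b = [], []
    for i, j, z in edges:
        row = np.zeros(n)
        row[j], row[i] = 1.0, -1.0
        A.append(row)
        b.append(z)
    # Anchor row: pins the first pose so the solution is unique.
    row = np.zeros(n)
    row[anchor] = 1.0
    A.append(row)
    b.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x
```

With three odometry edges summing to 3.1 and a loop closure claiming the loop is 3.0 long, the 0.1 inconsistency is spread evenly over the four edges rather than accumulated at the end, which is exactly the correction a loop closure provides in full 3D SLAM.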


3. Real-world challenge

Mapping of large construction sites and large urban buildings is a potential application for autonomous mobile mapping robots. An example is the boiling water reactor of the Zwentendorf Nuclear Power Plant (NPP Zwentendorf, Figure 1), which is the world’s only nuclear power plant that has been completed but never put into operation. Thanks to the EnRiCH robotic trial [10], it is evident that areas of Zwentendorf that in other NPPs could only be visited with severe difficulty are easily accessible. In active nuclear power plants, extensive safety precautions are needed for human personnel due to the high level of radioactivity. At NPP Zwentendorf, in contrast, engineers have transformed the plant and turbine halls into a training facility. Repair and dismantling measures, as well as critical incidents and disaster scenarios, can be trained under realistic conditions; autonomous mobile mapping robots are also tested there once every 2 years in the so-called EnRiCH, the European Robotics Hackathon [10].

Figure 1.

Large urban building—the boiling water reactor of the Zwentendorf Nuclear Power Plant.

To demonstrate the complexity of the mapping challenge, Figure 2 shows photos from a mapping survey performed with an affordable mobile mapping backpack system [11]. Figure 3 shows that a trajectory more than three kilometers long is required to cover the entire scene. The result of the mapping, a registered 3D point cloud representing the Zwentendorf Nuclear Power Plant, is shown in Figures 4 and 5.

Figure 2.

Mapping survey performed with an affordable mobile mapping backpack system [11].

Figure 3.

The more than three-kilometer-long trajectory required to cover the entire scene of the Zwentendorf Nuclear Power Plant.

Figure 4.

Perspective and top view of the mapping result: registered 3D point cloud representing the Zwentendorf Nuclear Power Plant.

Figure 5.

Result of the mapping—registered 3D point cloud representing the Zwentendorf Nuclear Power Plant.

This challenge reveals fundamental requirements for an autonomous mobile mapping robot: it should be capable of traversing a 3.5 km trajectory, including 15 levels of stairs. At the current stage of robotic technology, only a swarm of ground and aerial robots can reach such a level of autonomy and coverage.


4. Conclusion

On the one hand, we can observe that the SLAM problem is already solved; on the other hand, the many real-world challenges [12, 13] in autonomous mobile mapping robotics prove the great interest in this domain. The main problem is the cost of a robot capable of performing missions in hazardous environments. For this reason, this research activity is not very popular, and therefore technological improvements are incremental rather than revolutionary. An opportunity lies in the autonomous driving and delivery robotics domains, since they require collecting high-volume, large-scope data and executing SLAM by many agents. To conclude, autonomous mobile mapping robots are fascinating and require much effort to deliver satisfactory results.

References

  1. Jiahe C, Jianwei N, Zhenchao O, Yunxiang H, Dian L. ACSC: Automatic calibration for non-repetitive scanning solid-state LiDAR and camera systems. arXiv. 2020
  2. Aldao E, González-de Santos LM, González-Jorge H. LiDAR based detect and avoid system for UAV navigation in UAM corridors. Drones. 2022;6:185. DOI: 10.3390/drones6080185
  3. Kelly C, Wilkinson B, Abd-Elrahman A, Cordero O, Lassiter HA. Accuracy assessment of low-cost lidar scanners: An analysis of the Velodyne HDL–32E and Livox Mid–40’s temporal stability. Remote Sensing. 2022;14:4220. DOI: 10.3390/rs14174220
  4. Wang Y, Lou Y, Zhang Y, Song W, Huang F, Tu Z. A robust framework for simultaneous localization and mapping with multiple non-repetitive scanning lidars. Remote Sensing. 2021;13:2015. DOI: 10.3390/rs13102015
  5. Tixiao S, Brendan E, Drew M, Wei W, Carlo R, Daniela R. LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, NV, USA (Virtual). 2020. pp. 5135-5142
  6. Long N, Yan H, Wang L, Li H, Yang Q. Unifying obstacle detection, recognition, and fusion based on the polarization color stereo camera and LiDAR for the ADAS. Sensors. 2022;22:2453. DOI: 10.3390/s22072453
  7. Bedkowski J, Nowak H, Kubiak B, Studzinski W, Janeczek M, Karas S, et al. A novel approach to global positioning system accuracy assessment, verified on LiDAR alignment of one million kilometers at a continent scale, as a foundation for autonomous driving safety analysis. Sensors. 2021;21:5691. DOI: 10.3390/s21175691
  8. Leonard J, Durrant-Whyte H. Simultaneous map building and localization for an autonomous mobile robot. In: Proceedings IROS’91: IEEE/RSJ International Workshop on Intelligent Robots and Systems ’91. Vol. 3. Osaka, Japan: IEEE; 1991. pp. 1442-1447. DOI: 10.1109/IROS.1991.174417
  9. Skrzypczynski P. Simultaneous localization and mapping: A feature-based probabilistic approach. International Journal of Applied Mathematics and Computer Science. 2009;19(4):575-588
  10. Available from: https://enrich.european-robotics.eu
  11. Pelka M, Majek K, Bedkowski J. Testing the affordable system for digitizing USAR scenes. In: 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). Würzburg, Germany: IEEE; 2019. pp. 1-2. DOI: 10.1109/SSRR.2019.8848929
  12. Available from: https://research.csiro.au/robotics/our-work/darpa-subt-challenge-2018/
  13. Available from: https://hilti-challenge.com
