Open access peer-reviewed chapter

Aerial 3D Mapping with Continuous Time ICP for Urban Search and Rescue

Written By

Helge Andreas Lauterbach and Andreas Nüchter

Reviewed: 23 September 2022 Published: 03 November 2022

DOI: 10.5772/intechopen.108260

From the Edited Volume

Autonomous Mobile Mapping Robots

Edited by Janusz Bȩdkowski


Abstract

Fast reconnaissance is essential for strategic decisions during the immediate response phase of urban search and rescue missions. Nowadays, UAVs with their advantageous overview perspective are increasingly used for reconnaissance besides manual inspection of the scenario. However, data evaluation is often limited to visual inspection of images or video footage. We present our LiDAR-based aerial 3D mapping system, providing real-time maps of the environment. UAV-borne laser scans typically offer a reduced field of view. Moreover, UAV trajectories are more flexible and dynamic compared to those of ground vehicles, for which SLAM systems are often designed. We address these challenges by a two-step registration approach based on continuous time ICP. The experiments show that the resulting maps accurately represent the environment.

Keywords

  • UAV
  • mobile mapping
  • USAR
  • continuous time ICP
  • rescue robotics

1. Introduction

Reconnaissance is the basis for SAR missions and essential for strategic decisions. With the growing size of the site, it becomes difficult for officers in charge to get an overview of a disaster scenario, even in more frequent urban scenarios like houses on fire. Nowadays, the main tools for scenario exploration used by firefighters are still 2D maps in command vehicles. Assessment is still done mainly by human visual inspection. Besides safety risks, this is a difficult and time-consuming process, both for data collection and for evaluation. Accordingly, this process bears the risk of missing important information.

In recent years, video footage in the visible and thermal spectrum from small UAVs has increasingly been used for scenario assessment and search operations [1]. UAV images are advantageous due to their overhead perspective and their capability to reach otherwise inaccessible areas. However, in practice, data evaluation is likewise time-consuming and faces the challenge of spatial correlation. Creating 3D maps in real time is one useful approach to improve the evaluation process [2].

UAV-based approaches to 3D mapping often rely on structure from motion (SfM) techniques. Such approaches often do not provide 3D maps immediately, as dense mapping has high computational requirements [3]. Photogrammetric approaches are therefore mostly applied to large-scale scenarios like earthquakes [4, 5]. In [6], 3D point clouds generated from UAV images with SfM are used to localize and navigate unmanned ground vehicles. In large-scale scenarios, the immediate response phase has a time scale of hours to weeks [2], rather than minutes to hours as in more frequent urban scenarios, like houses on fire.

LiDAR-based mapping systems have the advantage of providing spatial information directly. Here, the key issue is the implementation of an adequate SLAM approach. Unlike unmanned ground vehicles, UAVs are constantly in motion. Thus, the rigidity assumption does not hold true for sensors like laser scanners. To compensate for motion distortion in laser scans, a continuous-time formulation of the SLAM problem is convenient. An application of continuous-time SLAM to UAV mapping is presented in [7]: groups of scans are registered in a sliding-window fashion, and the computed corrections are interpolated along the trajectory. In a second, offline step, the trajectory is optimized globally. The authors of [8] use a map-centric approach: instead of pose graph optimization, the map itself is deformed in case of loop closures.

LiDAR odometry and mapping (LOAM) [9] allows for real-time mapping by utilizing edge and planar features for registration. Distortion is removed by motion estimation from IMU mechanization and odometry estimation before registering to the map. An extension with visual odometry [10] was also demonstrated in aerial mapping [11]. Several approaches use the feature extraction from LOAM and introduce environmental constraints such as ground planes [12], or tightly couple LiDAR and INS [13]. They also incorporate loop closing, which LOAM does not. Improved edge and planar features are proposed in [14]; due to its offline batch optimization, however, the approach is not real-time capable. The study [15] instead relies on planes as landmarks and on bundle adjustment for optimization. However, both methods are designed for indoor environments.

Instead of sampling at scan frequency, the trajectory in [16] is modeled as a B-spline to allow for interpolation between scan poses. The map is refined by realigning scans within local submaps. Based on the B-spline trajectory representation, [17] registers multiple scans at once, in a sliding-window fashion, to a sparse multiresolution surfel map, enabling real-time LiDAR odometry.

Thermal imaging is another important tool for firefighters [1]. It is used for navigation under low-sight conditions [18] (e.g., smoke) and to detect sources of fire and thermal hot spots [1]. Robotic systems equipped with thermal cameras also facilitate object detection [19] and search for casualties [20].

In a previous work [21], we gave an overview of our UAV system for SAR applications. Most similar to our setup is the UAV of [22], which is integrated into a multirobot system with a focus on autonomous exploration [23]. In this paper, we present our mapping pipeline, generating accurate, temperature-enhanced 3D point clouds. The two-step registration approach is based on continuous-time ICP.


2. Methodology

2.1 System overview

The proposed mapping system for rescue missions is divided into two principal components: an aerial segment for data acquisition and a ground segment for data processing and user interaction, as depicted in Figure 1. The aerial segment is designed as a mapping sensor unit that is able to operate without a UAV. The main sensor is a Velodyne Puck VLP16 Lite1 laser scanner. With its low weight of 830 g and typical power consumption of 8 W, the sensor is appropriate for aerial mapping. It provides full 360° scans at a frequency of 10 Hz with a maximum range of 100 m. The vertical FoV (field of view) of 30° is sparsely covered by 16 scan lines with a vertical angular resolution of 2°. The laser scanner is mounted at a pitch angle of 45°, resulting in a reduced usable horizontal FoV. This is a compromise between maximizing the ground coverage and ensuring a convenient overlap between consecutive scans for scan matching. As a second sensor, the system features an Optris PI 640LW2 thermal camera to enrich the map with registered temperature information. In order to maximize the overlap with the laser scanner, a wide-angle lens with an FoV of 90° × 64° is used. Thermal images are captured at a resolution of 640 px × 480 px with a frequency of 10 Hz. As an aerial platform, several DJI3 UAVs are used that provide enough payload capacity to lift the sensor unit for a long period. The sensor unit mounted on the UAV is depicted in Figure 2 on the left. For initial pose estimation, we rely on the trajectory reported by the flight control unit (FCU) of the UAV. This control system fuses three sets of redundant sensors, each of them including an IMU, a barometer, and a GPS sensor.

Figure 1.

System overview. The aerial segment is used for data acquisition while the ground segment runs the mapping pipeline and user interaction.

Figure 2.

UAV with mounted mapping payload in front of simulated facade fire (left) and the firefighter command vehicle (right).

The second component of our mapping system is the ground segment. Apart from communication for telemetry and data transmission, its main task is to run the mapping pipeline described in Section 2.2. As we focus on mapping rather than autonomous operation, a rough initial localization with UAV sensors is sufficient and onboard data processing is omitted. Furthermore, the ground segment offers online map visualization and a user interface for managing automated flights.

For this purpose, a workstation based on an Intel i7 8700 hexa-core running at 3.2 GHz with 16 GB RAM is integrated into a vehicle based on a German standard command vehicle (ELW1), depicted in Figure 2 on the right.

2.2 Workflow

This subsection gives an overview of the processing pipeline, also visualized in Figure 3. Data acquired by the UAV is transmitted to the ground station. In the preprocessing stage, a pose for each laser scan is estimated by GNSS/INS localization. To reduce the computational effort, a keyframe approach is used: a new keyframe is selected only if the change in pose w.r.t. the previous keyframe exceeds a defined threshold, e.g., 1 m. We drop all scans that do not contribute significantly to the map, since for this application we are mainly interested in a map rather than in navigation tasks. Selected frames are then undistorted based on the initial trajectory to compensate for motion. In an optional step, the laser scans are enriched with temperature information.
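For illustration, the keyframe selection can be sketched in a few lines of Python/NumPy. The 1 m translation threshold comes from the text; the additional rotation threshold and the pose representation as rotation matrix plus translation vector are assumptions of this sketch, not the exact implementation:

```python
import numpy as np

def rotation_angle(R_a, R_b):
    """Angle (rad) of the relative rotation between two 3x3 rotation matrices."""
    cos_a = (np.trace(R_a.T @ R_b) - 1.0) / 2.0
    return np.arccos(np.clip(cos_a, -1.0, 1.0))  # clamp for numerical safety

def is_new_keyframe(pose, last_keyframe, trans_thresh=1.0,
                    rot_thresh=np.deg2rad(10.0)):
    """Select a scan as a new keyframe if its pose change w.r.t. the
    previous keyframe exceeds a threshold. Poses are dicts with a 3x3
    rotation 'R' and a translation vector 't'."""
    moved = np.linalg.norm(pose["t"] - last_keyframe["t"]) > trans_thresh
    turned = rotation_angle(last_keyframe["R"], pose["R"]) > rot_thresh
    return moved or turned
```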

Figure 3.

Workflow of our UAV mapping system.

The mapping process is divided into two stages for registration in a coarse-to-fine manner, as shown in Figure 4. The core of the first registration stage is a continuous-time ICP approach, described in Section 2.4. This stage aims to reduce gross errors in the initial trajectory and to improve the map globally. To this end, a loop-closing technique adapted to the continuous-time ICP is applied. This results in a coarse map of the environment, suitable for online inspection.

Figure 4.

Example of point clouds improved by the proposed registration steps.

The second stage is run as a postprocessing step to further refine the trajectory and fine-register the laser scans. Here we use an upsampling approach to deal with the sparse nature of point clouds from a Velodyne VLP16 laser scanner. Details are given in Section 2.6.

The map is enriched with temperature information. To this end, for each laser scan we estimate the transformation between the poses of the laser scanner and the thermal camera at their current time stamps based on the GNSS/INS trajectory and project the thermal image onto the laser scan. In our setup, this is done before matching the laser scans to enable fast thermal inspection of the coarse map. Note that with this approach only 3D points in the overlapping FoV of laser scanner and camera are assigned temperature values. Thus, the temperature-enhanced map is less dense than the original map. In postprocessing, the mean temperature value of each voxel of the map is assigned to all points in the voxel. An example of the resulting representation is given in Figure 5. A facade fire scanned by the UAV is shown on the left; the corresponding temperature-enhanced 3D point cloud representation from our approach is shown on the right.
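The projection step can be illustrated by a minimal sketch, assuming an ideal pinhole model with known intrinsics K and a given scanner-to-camera transform; lens distortion and the interpolation of poses between time stamps are omitted here, and the function name is hypothetical:

```python
import numpy as np

def colorize_with_temperature(points_scanner, T_cam_from_scanner, K, thermal_img):
    """Project 3D points (N x 3, scanner frame) into the thermal image and
    assign per-point temperatures; points outside the camera FoV get NaN.
    Assumes an ideal pinhole model with 3x3 intrinsics K and a 4x4
    extrinsic transform; lens distortion is ignored in this sketch."""
    R, t = T_cam_from_scanner[:3, :3], T_cam_from_scanner[:3, 3]
    p_cam = points_scanner @ R.T + t              # points in the camera frame
    temps = np.full(len(p_cam), np.nan)
    in_front = p_cam[:, 2] > 0.0                  # keep points in front of the camera
    uvw = p_cam[in_front] @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)   # pixel coordinates
    h, w = thermal_img.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    temps[idx] = thermal_img[uv[valid, 1], uv[valid, 0]]
    return temps
```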

Figure 5.

UAV scanning a simulated facade fire on the campus of the Bavarian firefighter school Würzburg (left) and the corresponding 3D model colored by temperature (right). Red indicates high temperatures.

2.3 Continuous time SLAM

The movement of a mobile robot creates a trajectory $\mathbf{T} = \{X_0, \ldots, X_n\}$, where $X_i$ is the 6-degree-of-freedom pose of the vehicle at time $t_i$. Using $\mathbf{T}$, a map $\mathbf{P} = \{p_0, \ldots, p_n\}$ of the environment is derived by transforming the set of timestamped measurements $\mathbf{M} = \{m_0, \ldots, m_n\}$ into the global coordinate system with $p_i = X_i \oplus m_i = R_i m_i + t_i$.
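In code, deriving the map from the trajectory is a direct application of this formula; a minimal NumPy sketch, assuming per-measurement rotations and translations are given:

```python
import numpy as np

def build_map(rotations, translations, measurements):
    """Transform timestamped measurements m_i into the global frame,
    p_i = R_i m_i + t_i, yielding the map P."""
    return np.array([R @ m + t
                     for R, t, m in zip(rotations, translations, measurements)])
```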

Given a sufficiently estimated trajectory, the map quality is improved in [24] by optimizing the trajectory based on the ICP concept known from rigid registration. Similar to our graph-based SLAM algorithm [25], the n-scan registration problem is considered, but with a finer discretization of time, e.g., segments of a 3D scan or even single points. Under the assumption that the error of the estimated trajectory is negligible for a short time interval, the error function to be minimized is given by:

$$E_{i,j} = \sum_{k=i-N}^{i+N} \left\lVert X_i \oplus m_k - X_j \oplus m'_k \right\rVert^2 \tag{1}$$

where for each measurement $m_k$ in the small neighborhood of $2N$ points close in time to $m_i$, the closest point $m'_k \in \mathbf{M} \setminus \{m_{i-N}, \ldots, m_{i+N}\}$ is selected, with $X_j$ the pose associated with $m'_k$.

In other words, the measurements are grouped into overlapping submaps $\mathbf{M}_i = \{m_{i-N}, \ldots, m_{i+N}\}$, where $m_i$ denotes the reference measurement of the submap that defines the corresponding pose $X_i$. These submaps are then matched using the automatic high-precision registration of terrestrial 3D scans, i.e., the graph-based SLAM approach presented in [25, 26]. The graph is estimated using a heuristic that measures the overlap of the submaps based on the number of closest point pairs.

After applying globally consistent scan matching to the submaps, the actual continuous-time matching, as described in [24], is applied. Using the results of the rigid optimization as starting values for $\mathbf{T}$, the numerical minimum of the underlying least-squares problem is computed.

2.4 Continuous time ICP

In contrast to the continuous-time SLAM approach, in ICP registration only the current pose is optimized with respect to its predecessor or a fixed map. Again, consider the error of the initially given trajectory to be negligible in a small time interval before and after a pose $X_i$. The positional error of $X_i$ is then given by

$$E_i = \sum_{k=i-N}^{i+N} \left\lVert X_i \oplus m_k - p_k \right\rVert^2 \tag{2}$$

where $p_k$ is the closest point to $X_i \oplus m_k$ in $\mathbf{P} \setminus \{p_{i-N}, \ldots, p_n\}$. Note that all poses $X_j$ with $j < i-N$ are fixed.

For efficiency, we subsample the trajectory in steps of $2N$, as is done in the continuous-time SLAM approach. The measurements $m_k$ with $k = i-N, \ldots, i+N$ then form a submap with pose $X_i$. These submaps are registered rigidly using plain ICP. Afterward, the transformation update is distributed locally to all $2N$ poses between $X_{i-1}$ and $X_i$. Although not constrained to this choice, a linear distribution of the translation and SLERP interpolation of the rotation prove to be sufficient for small trajectory errors. For the practical implementation, we form groups of 10 to 15 Velodyne VLP16 scans.
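The distribution of a rigid submap update over the intermediate poses can be sketched as follows, using SciPy's Slerp for the rotational part. Representing poses as (R, t) pairs and pre-multiplying a fractional update onto each pose are illustrative choices of this sketch; the actual implementation may parameterize the trajectory differently:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def distribute_update(poses, dR, dt):
    """Distribute a rigid ICP update (dR, dt), computed for the submap
    reference pose, over the poses since the previous reference: the
    translation linearly, the rotation via SLERP. `poses` is a list of
    (R, t) pairs; the first pose is left unchanged, the last receives
    the full update."""
    key_rots = Rotation.from_matrix(np.stack([np.eye(3), dR]))
    slerp = Slerp([0.0, 1.0], key_rots)
    updated = []
    for f, (R, t) in zip(np.linspace(0.0, 1.0, len(poses)), poses):
        R_f = slerp(f).as_matrix()          # fractional rotation update
        updated.append((R_f @ R, R_f @ t + f * dt))
    return updated
```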

2.5 Loop closing

A common strategy in loop detection is to rely on a heuristic based on pose-to-pose distance. This works well for trajectories with low drift if either the flat-surface assumption holds true, e.g., on ground vehicles, or if omnidirectional perception is available. In UAV laser scanning, both requirements are often not fulfilled. In our mapping system, for instance, the laser scanner is tilted by 45°; thus, the effective FoV is reduced to less than 180° horizontally, depending on the flight altitude and surrounding environment. As a consequence, there is often no overlap between loop closure candidates, or the overlap is small if the difference in orientation between the submaps is large. Instead, we search for poses with a maximum distance to the tilted plane spanned by the x and y axes of the sensor. Due to the reduced FoV, loop closure candidates additionally require an orientation that is similar to the query pose. After candidates are found, we apply the loop-closing technique ELCH [27] to the submaps. To maintain the continuity of the trajectory, we then distribute the transformation update of the submaps to each laser scan in a similar way as in the continuous-time ICP in Section 2.4. An example is given in Figure 6.
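A minimal sketch of the candidate search, with poses as (R, t) pairs; the distance and angle thresholds as well as the minimum temporal gap are illustrative values, not the ones used in our system:

```python
import numpy as np

def loop_candidates(poses, query_idx, plane_dist_max=2.0,
                    angle_max=np.deg2rad(30.0), min_gap=50):
    """Loop closure candidates for the query pose: a candidate must lie
    close to the plane spanned by the x and y axes of the (tilted) query
    sensor frame and have a similar orientation."""
    R_q, t_q = poses[query_idx]
    normal = R_q[:, 2]                      # z axis of the tilted sensor frame
    candidates = []
    for j, (R_j, t_j) in enumerate(poses):
        if abs(query_idx - j) < min_gap:    # skip temporally adjacent poses
            continue
        if abs(np.dot(normal, t_j - t_q)) > plane_dist_max:
            continue                        # too far from the sensor xy plane
        cos_a = np.clip((np.trace(R_q.T @ R_j) - 1.0) / 2.0, -1.0, 1.0)
        if np.arccos(cos_a) < angle_max:    # similar orientation required
            candidates.append(j)
    return candidates
```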

Figure 6.

Example of detected loop closures. We search for poses close to the sensor's xy plane. Poses with a similar orientation are accepted as loop-closing candidates.

2.6 Upsampling registration

Rigid registration of sparse point clouds, such as those of a Velodyne VLP16, faces the problem of inhomogeneous point density in the vertical and horizontal directions. The angular resolution between two scan lines and within one scan line typically differs by an order of magnitude. Similarly, rotating 2D profilers provide 3D point clouds with a higher angular resolution within a sweep. This inhomogeneity in resolution has an impact on registration approaches using point-to-point distances, such as ICP. As a result, scan lines are pulled onto each other and the estimated transformation is distorted. In theory, point-to-plane approaches perform better in such a case, as the underlying surface structure is estimated by local planar patches. However, the success of the registration depends on the quality of the estimated point normals, and the traditional approach of calculating normals from the k nearest neighbors is affected by the point distribution.

To mitigate the problems caused by inhomogeneous point density, assumptions about the underlying structure are often made. One group of approaches relies on robust features, such as edges and planes [9]. A second group of approaches coarsely reconstructs the surface. In [28], a scan of a Velodyne 64 laser scanner is interpreted as a range image, and the linkage of the direct neighbors of a point in the image is computed. The normal vectors are then estimated as the average cross product of the vectors to the neighbors, weighted by the linkage, and a point-to-plane variant of ICP is applied. A similar strategy, more general in terms of the sensor model, is proposed in [29]. Using the ordered structure of the point clouds provided by the sensors, a simple quad mesh is created to estimate the point normals, and a variant of generalized ICP [30] is applied. Similarly, we approximate the surface by upsampling the point cloud. Inspired by the idea of range images, the sensor data is first organized into a ring and bin structure, preserving the real measurements. Then, for each point, we create a line to its neighbor in the next ring and linearly sample points along it. Compared to previous work [31], this simple approach proves to be sufficient due to the high point density within a ring. As in [29], an edge is rejected if it is nearly parallel to the line of sight with respect to the scan pose or if a depth discontinuity is detected, considering angular and range-dependent thresholds. The set of fitted line points forms a virtual scan that approximates the underlying surface, as visualized in Figure 7. Octree reduction generates a homogeneous distribution of points. The virtual scan is then rigidly registered against previous scans in their original form with ICP, either sequentially or incrementally, to further refine the robot trajectory.
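The upsampling itself reduces to linear interpolation between ring neighbors with the two rejection tests described above. The following sketch assumes the scan is already organized into rings aligned by bin index, with the sensor at the origin; the thresholds are illustrative and, unlike in the actual implementation, not range dependent:

```python
import numpy as np

def upsample_rings(rings, samples=4, min_angle_deg=10.0, max_jump=0.5):
    """Build a 'virtual scan' by linearly sampling points along the edge
    between each measurement and its neighbor in the next ring. `rings`
    is a list of (n_bins x 3) arrays, one per scan ring, aligned by bin
    index. Only the interpolated points are returned here; the original
    measurements can be appended to preserve them."""
    cos_max = np.cos(np.deg2rad(min_angle_deg))
    virtual = []
    for lower, upper in zip(rings[:-1], rings[1:]):
        for p, q in zip(lower, upper):
            edge = q - p
            length = np.linalg.norm(edge)
            if length < 1e-9 or length > max_jump:  # degenerate edge or depth jump
                continue
            view = p / np.linalg.norm(p)            # line-of-sight direction
            if abs(np.dot(edge, view)) / length > cos_max:
                continue                            # edge nearly parallel to line of sight
            for s in np.linspace(0.0, 1.0, samples + 2)[1:-1]:
                virtual.append(p + s * edge)
    return np.asarray(virtual)
```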

Figure 7.

Upsampled 3D laser scan from Velodyne VLP16. The original point cloud is depicted in red, the upsampled point cloud in black.

2.7 Map organization

In both registration stages, we follow a scan-to-map matching strategy. A key issue in scan registration is the search for closest point pairs. The implementation of the search trees in 3DTK [32] (e.g., the octree) is optimized for fast point query operations and a small memory footprint. However, due to their compact representation in memory, they do not allow for insert operations. Thus, the search tree needs to be rebuilt once the map is updated. To deal with this, an efficient strategy for maintaining the map is important. For both registration steps, we consider the map to be a set of submaps. In the upsampling registration step, we follow a keyframe approach. Keyframes are added to the currently active submap. Each keyframe in the active submap provides a search tree, and all search trees of the submap are queried in parallel. Once a submap is finished, the keyframes are joined and a single search tree for the finished submap is built. A submap size of 10 to 15 scans has proven to be a good trade-off between increasing search time and reducing the construction time for a new k-d tree containing the complete submap. To further optimize the runtime, we only consider an ROI (region of interest) during registration: each time a new submap is started, we select the k nearest submaps, and each scan is then registered against this ROI map. As with the active submap, the search trees of all submaps are queried in parallel.
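Conceptually, the ROI map is a collection of per-submap search trees whose query results are merged. A minimal sketch using SciPy k-d trees (queried sequentially here, whereas the actual system searches them in parallel; the class name is hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

class SubmapROI:
    """Region-of-interest map: one k-d tree per submap point cloud; a
    query returns the nearest neighbor over all trees within max_dist."""

    def __init__(self, submap_clouds):
        self.clouds = submap_clouds
        self.trees = [cKDTree(cloud) for cloud in submap_clouds]

    def nearest(self, q, max_dist):
        best_d, best_p = max_dist, None
        for tree, cloud in zip(self.trees, self.clouds):
            d, i = tree.query(q, distance_upper_bound=best_d)
            if np.isfinite(d):      # query returns inf when nothing is in range
                best_d, best_p = d, cloud[i]
        return best_p
```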

In the continuous-time ICP step, the map is organized in a similar fashion. The major difference is that the active submap is not composed of several search trees. Note that submaps do not change once they are completed unless a loop closure is detected. As a consequence, submaps may significantly overlap and thus cause redundancy. However, the continuity of the trajectory is maintained.


3. Results

To evaluate our approach, we acquired a data set at the campus of the Bavarian Firefighter School Würzburg with the system described in Section 2.1. The UAV (DJI S1000) was manually flown in several loops at different altitudes above the site of the school. During the flight of 325 s, a trajectory of 862 m was covered. We applied the pipeline as described in Section 2.2, with the initial trajectory provided by the UAV. We applied the continuous-time ICP with a maximum point-to-point distance of 0.75 m and a submap size of 10 VLP16 scans. The resulting point cloud and the flown trajectory are visualized in Figure 8.

Figure 8.

Optimized point cloud from bird’s eye view. The red line represents the UAV trajectory.

For comparison, we also applied BLAM! [33] and Pointmatcher [34] to the data set. Both approaches are based on ICP registration, as is our method. To obtain meaningful results, we slightly modified BLAM such that the UAV trajectory is used as the initial guess for the odometry estimation.

For ground truth comparison, we collected 42 high-precision laser scans with a Riegl VZ-400 terrestrial laser scanner to cover the complete area of the firefighter school. The scans were registered using our ICP and SLAM methods [25] implemented in 3DTK [32] to obtain the reference point cloud.

As an error measure, we use the Absolute Trajectory Error (ATE) following [35]. It is computed as the root mean square error

$$\mathrm{ATE}_{\mathrm{pos}} = \sqrt{\frac{1}{N}\sum_{i=0}^{N-1} \left\lVert \Delta p_i \right\rVert^2}\,, \qquad \mathrm{ATE}_{\mathrm{rot}} = \sqrt{\frac{1}{N}\sum_{i=0}^{N-1} \left\lvert \angle(\Delta R_i) \right\rvert^2}\,,$$

where $\Delta p_i$ denotes the Euclidean distance between the estimated and the ground truth position at time step $i$ of $N$, and $\angle(\Delta R_i)$ denotes the rotation angle of the corresponding relative rotation $\Delta R_i$ in angle-axis representation [35].
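For reference, a direct NumPy implementation of these two error measures, with rotations given as 3 × 3 matrices:

```python
import numpy as np

def ate(est_pos, gt_pos, est_rot, gt_rot):
    """Absolute trajectory error as RMSE over aligned pose pairs.
    Positions are (N x 3) arrays; rotations are lists of 3x3 matrices.
    Returns ATE_pos in meters and ATE_rot in degrees."""
    ate_pos = np.sqrt(np.mean(np.sum((est_pos - gt_pos) ** 2, axis=1)))
    angles = [np.arccos(np.clip((np.trace(Rg.T @ Re) - 1.0) / 2.0, -1.0, 1.0))
              for Re, Rg in zip(est_rot, gt_rot)]
    ate_rot = np.sqrt(np.mean(np.square(angles)))
    return ate_pos, np.rad2deg(ate_rot)
```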

To obtain the ground truth trajectory, we first localize each laser scan from the UAV system in the reference cloud. Then, the initial trajectory as well as the optimized trajectories are aligned to the ground truth trajectory using all pose pairs, and the ATE is calculated. The resulting mean errors are given in Table 1. Figure 9 visualizes the positional error for all evaluated approaches and Figure 10 the orientation error, respectively.

Algorithm       ATE_pos [m]   ATE_rot [°]
GNSS            0.718         2.16
Ours            0.172         0.32
BLAM            0.383         1.74
Pointmatcher    0.225         0.32

Table 1.

Absolute trajectory error.

Figure 9.

Absolute trajectory error in position.

Figure 10.

Absolute trajectory error in orientation.

Compared to the initial GNSS/INS trajectory, the mean error in position is reduced to 0.172 m by applying our pipeline. In particular, the drift along the z-axis is removed, as shown in Figure 11. Regarding orientation, the pitch angle θ_y shows the highest errors, as depicted in Figure 12. One explanation is that the clocks of the laser scanner and the UAV are not synchronized; thus, the error increases during acceleration and deceleration phases. θ_x and θ_z are less affected due to the low speed and smooth curves of the trajectory. The mean rotational error is reduced to 0.32°.

Figure 11.

Position error of the individual axes. The error of the GNSS trajectory (left) is reduced by the proposed pipeline (right).

Figure 12.

Rotational error of the individual axes. The error of the GNSS trajectory (left) is reduced by the proposed pipeline (right).

A comparison of the optimized point cloud to the reference point cloud confirms the results of the ATE evaluation. To this end, we computed the cloud-to-cloud distance with a maximum point-to-plane distance of 1 m. The results are visualized in Figure 13 from two views. To support the visual representation, the corresponding error histogram is given in Figure 14. It shows that 90% of the points feature an error of less than 0.13 m. The accuracy of the Velodyne VLP16 is specified as ±3 cm, which corresponds to the peak of the histogram and is close to the median.
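The cloud-to-cloud comparison can be reproduced in essence by fitting a local plane to the nearest reference points of each query point; a sketch with SciPy, where the neighborhood size k is an assumption and distances are capped at the 1 m maximum used above:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(cloud, reference, k=10, max_dist=1.0):
    """Per-point error of `cloud` w.r.t. `reference`: distance to a local
    plane fitted (via SVD) to the k nearest reference points, capped at
    max_dist as in the evaluation above."""
    tree = cKDTree(reference)
    errors = np.empty(len(cloud))
    for n, p in enumerate(cloud):
        _, idx = tree.query(p, k=k)
        nbrs = reference[idx]
        centroid = nbrs.mean(axis=0)
        # Plane normal = direction of least variance of the neighborhood.
        _, _, vt = np.linalg.svd(nbrs - centroid)
        errors[n] = min(abs(np.dot(p - centroid, vt[-1])), max_dist)
    return errors
```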

Figure 13.

Optimized point cloud from top (above) and oblique view (below). The color encodes the deviation from ground truth. Higher errors mainly correspond to dynamic objects (1) and occlusions in the reference data (2), as well as areas not directly overflown (3).

Figure 14.

Histogram of cloud-to-cloud error.

Note that Figure 13 shows regions with high errors of up to 1 m, which do not reflect the low ATE errors discussed before. First, they correspond to dynamic objects like cars that moved between the acquisition of the UAV data and the ground truth data (see Figure 13, at no. 1). Second, there are gaps in the ground truth data: due to the restricted possibilities to position the terrestrial laser scanner, it suffers from occlusions. This mainly affects the roofs of the buildings, e.g., no. 2 in Figure 13. Those gaps are filled by the airborne data set. As we used the distance to the closest plane as the error measure, some points at the borders are wrongly assigned a high error. A similar effect is present at windows and glass facades, e.g., at no. 4. The majority of static areas, such as buildings and the ground, feature errors of less than 0.1 m. Higher deviations of up to 0.4 m, e.g., at no. 3, are explained by the fact that this area was not directly overflown by the UAV, as depicted in Figure 8. Hence, the point density is reduced, since the area was covered less often by the sensor and only from the side at larger distances. Points in those areas have less influence on the scan matching than points in the core of the FoV, since the probability of finding corresponding points within the maximum search distance decreases. On the other hand, small registration errors have a greater impact on the distant areas of a scan. Moreover, the point density of the reference cloud is reduced in this area, biasing the error measure as described for no. 2.

To summarize the results, our approach is capable of generating detailed point clouds, accurately representing the environment.


4. Conclusions

In this paper, we presented our pipeline for aerial 3D mapping in USAR scenarios. Using a two-step continuous-time registration approach, the system is able to produce an accurate representation of the environment. Enhanced with temperature values, the generated 3D maps aim at facilitating the assessment of disaster scenarios. The current drawback of our approach is that only the first registration stage runs online, producing a coarse map, while the second stage is designed as a postprocessing step producing an accurate map. Future work includes runtime optimization and further integration to enable online processing of the entire pipeline. Additionally, point cloud analysis methods, for instance to detect heat sources, are in progress.


Acknowledgments

This work was funded by the projects Eins3D (FKZ 13N14183) and Deals3D (FKZ 13N15313) from the Federal Ministry of Education and Research, Germany.

This publication was supported by the Open Access Publication Fund of the University of Würzburg.


Conflict of interest

The authors declare no conflict of interest.


Abbreviations

ATE    Absolute Trajectory Error
FoV    Field of View
GNSS   Global Navigation Satellite System
ICP    Iterative Closest Point
IMU    Inertial Measurement Unit
INS    Inertial Navigation System
LiDAR  Light Detection And Ranging
SAR    Search And Rescue
SLAM   Simultaneous Localization And Mapping
UAV    Unmanned Aerial Vehicle
USAR   Urban Search And Rescue

References

  1. Carrillo-Zapata D, Milner E, Hird J, Tzoumas G, Vardanega PJ, Sooriyabandara M, et al. Mutual shaping in swarm robotics: User studies in fire and rescue, storage organization, and bridge inspection. Frontiers in Robotics and AI. 2020;7:53
  2. Delmerico J, Mintchev S, Giusti A, Gromov B, Melo K, Horvat T, et al. The current state and future outlook of rescue robotics. Journal of Field Robotics. 2019;36(7):1171-1191
  3. Martell A, Lauterbach HA, Schilling K, Nüchter A. Benchmarking structure from motion algorithms of urban environments with applications to reconnaissance in search and rescue scenarios. In: Proceedings of the 16th IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR '18). Philadelphia, PA, USA: IEEE; 2018. pp. 1-7
  4. Verykokou S, Doulamis A, Athanasiou G, Ioannidis C, Amditis A. UAV-based 3D modelling of disaster scenes for urban search and rescue. In: 2016 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE; 2016. pp. 106-111
  5. Kruijff-Korbayová I, Freda L, Gianni M, Ntouskos V, Hlaváč V, Kubelka V, et al. Deployment of ground and aerial robots in earthquake-struck Amatrice in Italy (brief report). In: 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE; 2016. pp. 278-279
  6. Gawel A, Dubé R, Surmann H, Nieto J, Siegwart R, Cadena C. 3D registration of aerial and ground robots for disaster response: An evaluation of features, descriptors, and transformation estimation. In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR). Shanghai, China: IEEE; 2017. pp. 27-34
  7. Kaul L, Zlot R, Bosse M. Continuous-time three-dimensional mapping for micro aerial vehicles with a passively actuated rotating laser scanner. Journal of Field Robotics. 2016;33(1):103-132
  8. Park C, Moghadam P, Kim S, Elfes A, Fookes C, Sridharan S. Elastic LiDAR fusion: Dense map-centric continuous-time SLAM. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, QLD, Australia: IEEE; 2018. pp. 1206-1213
  9. Zhang J, Singh S. LOAM: Lidar odometry and mapping in real-time. In: Robotics: Science and Systems. Vol. 2. Berkeley, USA: Roboticsproceedings; 2014. p. 9. DOI: 10.15607/RSS.2014.X.007
  10. Zhang J, Singh S. Laser-visual-inertial odometry and mapping with high robustness and low drift. Journal of Field Robotics. 2018;35(8):1242-1264
  11. Zhang J, Singh S. Aerial and ground-based collaborative mapping: An experimental study. In: Field and Service Robotics. Springer International Publishing; 2018. pp. 397-412
  12. Shan T, Englot B. LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid, Spain: IEEE; 2018. pp. 4758-4765
  13. Shan T, Englot B, Meyers D, Wang W, Ratti C, Rus D. LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, NV, USA: IEEE; 2020. pp. 5135-5142. DOI: 10.1109/IROS45743.2020.9341176
  14. Gentil CL, Vidal-Calleja T, Huang S. IN2LAMA: Inertial lidar localisation and mapping. In: 2019 International Conference on Robotics and Automation (ICRA). Montreal, QC, Canada: IEEE; 2019. pp. 6388-6394
  15. Zhou L, Wang S, Kaess M. π-LSAM: LiDAR smoothing and mapping with planes. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '21). Xi'an, China: IEEE; 2021. pp. 5751-5757. DOI: 10.1109/ICRA48506.2021.9561933
  16. Droeschel D, Behnke S. Efficient continuous-time SLAM for 3D lidar-based online mapping. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, QLD, Australia: IEEE; 2018. pp. 1-9
  17. Quenzel J, Behnke S. Real-time multi-adaptive-resolution-surfel 6D lidar odometry using continuous-time trajectory optimization. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Prague, Czech Republic: IEEE; 2021. pp. 5499-5506
  18. Kleinschmidt SP, Wagner B. Visual multimodal odometry: Robust visual odometry in harsh environments. In: 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). Philadelphia, PA, USA: IEEE; 2018. pp. 1-8
  19. Bañuls A, Mandow A, Vázquez-Martín R, Morales J, García-Cerezo A. Object detection from thermal infrared and visible light cameras in search and rescue scenes. In: 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). Abu Dhabi, United Arab Emirates: IEEE; 2020. pp. 380-386
  20. Feraru VA, Andersen RE, Boukas E. Towards an autonomous UAV-based system to assist search and rescue operations in man overboard incidents. In: 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). Abu Dhabi, United Arab Emirates: IEEE; 2020. pp. 57-64
  21. Lauterbach HA, Koch CB, Hess R, Eck D, Schilling K, Nüchter A. The Eins3D project - Instantaneous UAV-based 3D mapping for search and rescue applications. In: 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). Würzburg, Germany: IEEE; 2019. pp. 1-6
  22. Kruijff-Korbayová I, Grafe R, Heidemann N, Berrang A, Hussung C, Willms C, et al. German Rescue Robotics Center (DRZ): A holistic approach for robotic systems assisting in emergency response. In: 2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). New York City, NY, USA: IEEE; 2021. pp. 138-145
  23. Schleich D, Beul M, Quenzel J, Behnke S. Autonomous flight in unknown GNSS-denied environments for disaster examination. In: 2021 International Conference on Unmanned Aircraft Systems (ICUAS). Athens, Greece: IEEE; 2021. pp. 950-957. DOI: 10.1109/ICUAS51884.2021.9476790
  24. Elseberg J, Borrmann D, Nüchter A. Algorithmic solutions for computing accurate maximum likelihood 3D point clouds from mobile laser scanning platforms. Remote Sensing. MDPI; 2013;5:5871-5906
  25. Borrmann D, Elseberg J, Lingemann K, Nüchter A, Hertzberg J. Globally consistent 3D mapping with scan matching. Journal Robotics and Autonomous Systems (JRAS). Elsevier; 2008;56:130-142
  26. Borrmann D, Elseberg J, Lingemann K, Nüchter A, Hertzberg J. The efficient extension of globally consistent scan matching to 6 DoF. In: Proceedings of the 4th International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT '08). Atlanta, USA: Georgia Institute of Technology; 2008. pp. 29-36
  27. Sprickerhof J, Nüchter A, Lingemann K, Hertzberg J. An explicit loop closing technique for 6D SLAM. In: Proceedings of the 4th European Conference on Mobile Robots (ECMR '09). Mlini/Dubrovnik, Croatia: KoREMA; September 2009. pp. 229-234
  28. Moosmann F, Stiller C. Velodyne SLAM. In: 2011 IEEE Intelligent Vehicles Symposium (IV). Baden-Baden, Germany: IEEE; 2011. pp. 393-398
  29. Holz D, Behnke S. Registration of non-uniform density 3D point clouds using approximate surface reconstruction. In: Proceedings of the 41st International Symposium on Robotics (ISR) and the 8th German Conference on Robotics (ROBOTIK). Munich, Germany: VDE; 2014. pp. 1-7
  30. Segal A, Hähnel D, Thrun S. Generalized-ICP. In: Proceedings of Robotics: Science and Systems. Seattle, WA, USA: The MIT Press; 2009. pp. 161-168. DOI: 10.15607/RSS.2009.V.021
  31. Lauterbach HA, Borrmann D, Nüchter A, Rossi AP, Unnithan V, Torrese P, et al. Mobile mapping of the La Corona lavatube on Lanzarote. In: Proceedings of the ISPRS Geospatial Week 2019, Laserscanning 2019, ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-2/W5. Enschede, Netherlands: Copernicus Publications; 2019. pp. 381-387
  32. Nüchter A. 3DTK - The 3D Toolkit. 2020. https://www.threedtk.de [Accessed: March 8, 2021]
  33. Nelson E. BLAM! - Berkeley Localization and Mapping. 2016. https://github.com/erik-nelson/blam [Accessed: March 23, 2021]
  34. Pomerleau F, Colas F, Siegwart R, Magnenat S. Comparing ICP variants on real-world data sets. Autonomous Robots. Springer; April 2013;34:133-148
  35. Zhang Z, Scaramuzza D. A tutorial on quantitative trajectory evaluation for visual(-inertial) odometry. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid, Spain: IEEE; 2018. pp. 7244-7251

Notes

  1. https://velodynelidar.com/products/puck-lite/
  2. https://www.optris.com/thermal-imager-optris-pi-640
  3. https://www.dji.com
