
A System for Continuous Underground Site Mapping and Exploration

Written By

Alexander Ferrein, Ingrid Scholl, Tobias Neumann, Kai Krückel and Stefan Schiffer

Reviewed: 14 March 2019 Published: 30 May 2019

DOI: 10.5772/intechopen.85859

From the Edited Volume

Unmanned Robotic Systems and Applications

Edited by Mahmut Reyhanoglu and Geert De Cubber

Abstract

3D mapping becomes ever more important, not only in industrial mobile robotics applications for AGVs and production vehicles but also in search and rescue scenarios. In this chapter we report on our work on mapping and exploring underground mines. Our contribution is twofold: First, we present our custom-built 3D laser range platform SWAP and compare it against an architectural laser scanner. The advantage is that the mapping vehicle can scan in a continuous mode and does not have to do stop-and-go scanning. The second contribution is the mapping tool mapit, which supports and automates the registration of large sets of point clouds. The idea behind mapit is to keep the raw point cloud data as the basis for any map generation and to store only the operations executed on the point clouds. This way the initial data do not get lost, and improvements on the low-level data (e.g. improved transforms through loop closure) automatically improve the final maps. Finally, we also present methods for visualization and interactive exploration of such maps.

Keywords

  • 3D mapping
  • continuous mapping
  • large underground site mapping
  • mapping tools
  • point cloud registration
  • map exploration
  • map visualization

1. Introduction

Important environment information in urban search and rescue applications is 3D map data of the site. First responders usually have 2D map material of buildings, tunnels and street sites, while a concise overview in 3D would possibly give them further important information about the situation at hand. Kruijff et al. [1, 2], for instance, report on mapping an earthquake site in Mirandola, Italy, in July 2012. There, an unmanned ground vehicle (UGV) together with an unmanned aerial vehicle (UAV) acquired 3D environment information to help judge whether partly destroyed structures are safe for first responders to move in. Robotic technology may also help reveal information that is inaccessible otherwise. For example, Mascarich et al. [3] present an approach for UAV-based mapping of underground tunnels in darkness. While many works focus on disaster management with UAVs (e.g. [4]), many others build upon ground-based USAR robots with the capability to map the disaster site in 3D (see, for instance, [5, 6]). Many other related research works, including our own previous research [7, 8, 9], focus on investigating which SLAM algorithms are best suited for mapping disaster sites. How large outdoor environments can be mapped in a fast manner is investigated in [10]. A general overview of robotics in disaster scenarios can be found, for example, in [11]. Yet other work looks at sensor systems [12] that are appropriate for generating feasible maps. As shown in [13], it is important for first responders to get a reliable overview of the disaster site and of the damages and hazards in order to react correctly. This motivates the work presented in this chapter. It is very important to get a map quickly in order to judge the operation on-site. The operation may be inspecting a disaster site, mapping a building site or, as in our case, mapping an underground mining operation: in each of these examples, the operator needs to quickly get an overview of the site. The operator then either learns where further sensor information needs to be acquired or what measures need to be taken for the operation. In this chapter, we report on our 3D mapping system which offers exactly this. In a slightly different but comparable setting in underground mines, we developed a mapping system in hardware and software which allows us to quickly integrate new laser scans into a 3D map.

1.1 The project UPNS4D+

The results presented in this paper are part of the project “Underground 4D+ Positioning, Navigation and Mapping System for Highly Selective, Efficient and Highly-secure Exploitation of Important Resources” (UPNS4D+) which was funded by the German Federal Ministry of Education and Research within the programme of “R4–Innovative Technologies for Resource Efficiency – Research for the Provision of Raw Materials of Strategic Economic Importance”.

The overall project aimed at exploiting mineral resources of rare earths in a highly selective, efficient and highly secure way from local deposits as well as detecting new ones. This required innovative mining technologies which take the dynamic changes of the mine into account. The interdisciplinary research project UPNS4D+ aimed at developing an underground deposit positioning, navigation and mapping system for a mobile robot platform. For more details on the overall project, we refer to [14, 15].

The consortium consisted of the following partners: (1) indurad GmbH, Aachen; (2) Fachhochschule Aachen, MASCOR Institute; (3) MILAN Geoservice GmbH, Spremberg; (4) RWTH Aachen University, Institute for Advanced Mining Technology; (5) XGraphic Ingenieurgesellschaft mbH, Aachen; (6) Technische Universität Bergakademie Freiberg; (7) Fritz Rensmann, Maschinenfabrik, Diesellokomotiven, Getriebe GmbH & Co. KG, Dortmund; and (8) GHH Fahrzeuge GmbH, Gelsenkirchen. The project started in April 2015 and ended in December 2018.

The goal of our subproject “6D mapping” was to develop a prototype robot system that is able to map underground mining sites. To this end, a suitable robot platform had to be equipped with the right sensor equipment (radar, cameras, LiDARs, IMU). The data coming from the sensors needed to be integrated into consistent high-dimensional maps deploying known SLAM approaches. High-dimensional means that besides the 3D point clouds, also key frames from the vision or radar data were stored in the map. New approaches had to be developed to grant easy access to the data in order to process and visualize them.

1.2 Contribution

In the following, we present results from that research project that are highly relevant also for urban search and rescue robotics and will find useful applications there. In particular, we present:

  1. A novel sensor platform [16] which allows for continuous high-resolution scans

  2. A novel registration tool for checking point clouds on-site

  3. The sensor data registration tool mapit, which facilitates the processing of large-scale point cloud data

1.3 Outline

The chapter is organized as follows. In the next section (Section 2), we present the exploration vehicle that was developed during the project and used in our experiments. We introduce the overall platform design and the sensor setup which was used for exploration runs at an underground mining site. Our exploration robot is equipped with a revolving 3D LiDAR for acquiring map data, several further 3D and 2D laser range finders used for navigation and terrain classification, a thermal imaging camera for detecting mine workers even in unilluminated areas and a high-resolution wide-angle camera for teleoperation. Additionally, we mounted a FARO 3D LiDAR as a reference system.

Section 3 is devoted to our novel 3D LiDAR platform SWAP. SWAP makes it possible to continuously acquire 3D point cloud data of the environment while the robot is slowly moving forwards. This greatly reduces the time needed to map a section of the underground mine compared with the FARO scanner also mounted on the exploration vehicle. Such architectural scanners are usually meant to acquire the scene from a fixed position, taking up to 10 minutes to scan a single (but very dense) point cloud. An important development of this project is the mapping and registration tool mapit, which facilitates the registration and manipulation of point cloud and map data. We outline the idea of mapit in Section 4. In Section 5 we present a number of visualization tools that were developed with mapit. In Section 6 we conclude.

2. Exploring and mapping underground mines

The underlying idea in the UPNS4D+ project is to deploy two kinds of vehicles in an underground mining facility. There is an exploration vehicle that periodically drives around in the underground mine when no regular work is taking place to initially record and then update a map. The second vehicle is a regular processing vehicle that performs the daily work in the mine. It uses the map that is periodically updated by the exploration vehicle. While the exploration vehicle needs to have more sophisticated sensory equipment for recording the map, for the processing vehicle, stripped-down equipment suffices, since it only needs to localize within a given map.

With our project partners, we developed the exploration robot shown in Figure 1a. It is a skid-steered tracked robot based on a mini excavator platform. It carries the modular sensor platform shown in Figure 1b. The robot can drive at up to 3 m/s and is controlled via the ROS [17] move_base. For navigation, collision avoidance and terrain classification, two Velodyne VLP-16 Puck LiDARs are mounted at the front. They acquire environment information with 16 scan lines covering a vertical opening angle of 30° at an update rate of 20 Hz. They are mounted at a 20° incline so that they can also capture the close vicinity of the robot. With a horizontal opening angle of 360°, they acquire 3D data from the front and the sides of the robot. For safety reasons, additional 2D laser range finders, which are also used for collision avoidance, have been mounted at two corners of the sensor platform.

Figure 1.

Exploration robot developed for mapping underground mining sites. (a) Exploration vehicle and (b) Sensor setup of the exploration robot.

For now, the mapping operation is not run autonomously in the mine environment. At the front of the robot, an Allied Vision GT6600C high-resolution camera with a wide-angle lens is mounted, which can be used for teleoperation.

As an additional safety feature, we mounted a FLIR A315 thermal camera at the front of the robot in order to be able to detect persons even when there is insufficient light.

For mapping the mine, the platform is equipped with a rotating 3D LiDAR system, the SWAP platform, which we will describe in detail in the next section. For reference, we mounted a FARO Focus3D X 130 LiDAR, which can be used in a stop-and-go fashion. Scanning times of the Focus LiDAR lie between 1 and 30 min. To remotely operate the LiDAR, we developed a ROS driver based on the FARO SDK.

As part of our project contribution, we developed a rotating sensor platform for the swift acquisition of dense point clouds, as reported in [16]. The main goal was to find a compromise between acquiring accurate and dense point clouds, which usually takes much time, and having data available for online use in a robotic system, e.g. for localization, where updates are needed much more frequently. For instance, with the FARO LiDAR at an angular resolution of 0.0035°, very dense and accurate point clouds can be recorded. However, the robot needs to stand still, and a single scan can take up to 30 min. Table 1 shows a comparison of the two scanners.

                        SWAP platform     FARO Focus3D X 130
Measuring range         0.1–100 m         0.6–130 m
Horizontal resolution   0.035°
Vertical resolution     0.4°              0.07°
Sphere coverage         80.27%            83.33%
Scan time               0.3–30 s          1–30 min

Table 1.

Comparison between SWAP and FARO Focus3D X 130.

3. The 3D LiDAR system SWAP

In this section, following previous work in [16], we present the 3D LiDAR platform SWAP which was developed during the mine mapping project. The SWAP platform consists of a Velodyne VLP-16 Puck LiDAR and a Hokuyo UTM-30LX-EW range scanner which are mounted opposite to each other on a disk that rotates both scanning devices around its centre. The disk and the upper part of the scanner are driven by a motor which is equipped with absolute encoders. Both scanners transfer their data via Ethernet through a slip ring which connects the revolving part to the rest of the scanning device. The combination of motor and gear head provides us with 3 Nm of torque and allows for a maximum rotation speed of 2.6 Hz. However, a reasonable azimuth resolution can only be achieved with a rotation speed of up to 1.67 Hz; since a full-sphere point cloud is captured with every half revolution, point clouds are then produced at a rate of up to 3.34 Hz. We deploy a 14 bit industrial-grade absolute SSI encoder which is mounted on the drive shaft. Its resolution yields a maximum error of 1.32′ or 0.022°; at a distance of 10 m, this corresponds to 3.8 mm.

The second part of the platform is the rotating sensor mount. It houses a gigabit Ethernet switch, the interface box of the Velodyne VLP-16 Puck and the Hokuyo UTM-30LX-EW, the power distribution for the sensors and several mounting rails for different sensors. The raw data of the Velodyne VLP-16 Puck and the attached Hokuyo UTM-30LX-EW are registered making use of the SSI absolute encoder. Besides the absolute encoder, there is another, incremental encoder attached to the motor shaft. Based on the readings of the absolute encoder, the raw data are collected and integrated into a point cloud for the device. This is done with best-effort time-stamping of the data, where each UDP packet of the Velodyne VLP-16 Puck is transformed as a whole; a sketch of this packet-wise transformation is given after Figure 2. The time difference between the laser readings within one UDP packet is about 1.33 ms. For the rectification of the Hokuyo UTM-30LX-EW measurements, the recording time for one sweep is taken into account. As a final ingredient, our SWAP platform is equipped with an IMU (a μIMU from Northrop Grumman LITEF; see note 1) to provide the orientation of the platform w.r.t. the ground. Figure 2 shows a CAD drawing as well as a photo of the device.

Figure 2.

The components and a photo of our rotating sensor platform. (a) Components of the platform and (b) Photo of the platform.
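To make the packet-wise transformation concrete, the following sketch shows how one packet of range readings could be rotated into the platform frame using the disk angle interpolated from two encoder samples. This is an illustration under stated assumptions only: all type and function names are hypothetical, the disk is assumed to rotate about the platform's z-axis, and angle wraparound as well as the fixed mounting extrinsics (e.g. the tilt of the VLP-16) are omitted for brevity.

```cpp
#include <Eigen/Geometry>
#include <vector>

// One reading of the 14-bit SSI absolute encoder (hypothetical structure).
struct EncoderSample {
  double stamp;      // best-effort timestamp [s]
  double angle_rad;  // absolute disk angle [rad]
};

// Linearly interpolate the disk angle at the packet's timestamp.
double diskAngleAt(double stamp, const EncoderSample& a, const EncoderSample& b) {
  const double t = (stamp - a.stamp) / (b.stamp - a.stamp);
  return a.angle_rad + t * (b.angle_rad - a.angle_rad);
}

// Transform all points of one UDP packet as a whole: one rotation from the
// revolving sensor frame into the non-rotating platform frame.
std::vector<Eigen::Vector3d> transformPacket(
    const std::vector<Eigen::Vector3d>& points, double packet_stamp,
    const EncoderSample& before, const EncoderSample& after) {
  const Eigen::AngleAxisd disk(diskAngleAt(packet_stamp, before, after),
                               Eigen::Vector3d::UnitZ());
  std::vector<Eigen::Vector3d> out;
  out.reserve(points.size());
  for (const auto& p : points) out.push_back(disk * p);
  return out;
}
```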

The current design was chosen keeping in mind the lessons learnt from its predecessor device, which was based on a Velodyne HDL-64 that was tilted back and forth around a horizontal axis. It was presented in [18]. While that device is very suitable for acquiring dense 3D point clouds (in [7] we used it for mapping large-scale motorway tunnels), it has some drawbacks when it comes to the distribution of the range measurements in the point cloud. With a tilting scanner, the point clouds are particularly dense at the turning points of the device and less dense in between.

For the SWAP design, we therefore changed the setup from a tilting 3D LiDAR to a rotating device. With the additional tilt angle in the mounting position of the VLP-16, we achieve an even distribution of points in the scanning range of the device. The additional Hokuyo rangefinder was mounted in order to acquire sensor data in a close range around the robot, as the measurement range of the VLP-16 starts at about 0.9 m.

Analyzing and optimizing the homogeneity of a scan is investigated in [19]. For our target scenarios, the sensor platform yields an optimized compromise between the scan acquisition speed and the point cloud density. By adjusting the rotation frequency, the map resolution and the time needed to record the map can also be balanced.

4. The map registration system mapit

There exist a number of toolkits that help with registering point clouds and processing 3D data. Many of them are professional software products, often provided by the manufacturer of a 3D LiDAR system. A number of open source projects also exist, for instance, the 3D Toolkit (3DTK) [20, 21]. The 3DTK provides a number of state-of-the-art 6D SLAM algorithms for registering point cloud data as well as a large number of additional shape detection and visualization tools.

In contrast to this, mapit focuses more strongly on the registration workflow and the post-processing of map data, and it is not restricted to point cloud data: additional sensor cues can be associated with the 3D data. The key idea of mapit is to store the raw data and keep track of all algorithmic steps over time.

4.1 Overview

With mapit, we developed a 3D mapping framework (see note 2) for managing and post-processing a wide range of sensor data, especially the point clouds from the exploration vehicle. The software is divided into components and is designed for extensibility. We describe the different components below:

Management/administration

The sensor data are loaded and stored persistently in a database. Access is provided via defined interfaces. All changes to the data are stored individually. As a result, work steps and results are stored together consistently.

Algorithm processing

The algorithms for filtering the sensor data, the registration tree of the 3D point clouds, the creation of the 3D map and the further processing of the map are defined and developed with mapit.

Connection

A network interface has been developed that allows transparent access between local and remote mapit instances. A connection to external software (e.g. CloudCompare; see note 3) has been implemented via plugins to enable efficient workflows.

While ROS is used to programme mobile robots and can save all sensor data from a robot test drive, the framework mapit was developed to manage and save the post-processing of all sensor data. There are three basic principles similar to a version control system that have been implemented in mapit:

  1. Data and the executed algorithms are stored together: At any time, the origin of a map can be traced back to the basic sensor data and the algorithms and parameters used for the post-processing.

  2. Every access to the data must be done by mapit: This concept is like a version control system. The development and history of the post-processing are logged.

  3. Algorithms are deterministic (if possible): Data can be deleted and restored at a later point.

All map data and algorithms can be stored as in a directory system. Figure 3a shows the structure of a management process, i.e. the development of a map. The repository corresponds to a project. Each project arranges its data in trees and entities, i.e. the point clouds. Each workspace contains the post-processing algorithm workflow, e.g. for developing a map from registered point clouds; the underlying version-control idea is sketched after Figure 3.

Figure 3.

mapit concept and workflow. (a) Visualization of the mapit concept and (b) Workflow in mapit with the exploration vehicle and the processing vehicle involved.
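The following sketch illustrates this version-control idea with purely hypothetical C++ structures (it is not mapit's actual API): entities reference immutable raw point clouds, and a workspace only accumulates a log of deterministic operators that can be replayed to rebuild, or later improve, the derived map.

```cpp
#include <map>
#include <string>
#include <vector>

// One deterministic processing step, e.g. "voxel_filter" or "icp_pairwise",
// together with its parameters (names are illustrative).
struct Operator {
  std::string name;
  std::map<std::string, std::string> params;
};

struct Workspace {
  std::vector<std::string> entities;  // paths to immutable raw point clouds
  std::vector<Operator> history;      // logged post-processing steps

  // Applying an operator never touches the raw data; only the log grows.
  void apply(const Operator& op) { history.push_back(op); }

  // Rebuilding a map means replaying 'history' over 'entities'. If an
  // operator implementation improves (e.g. better loop closure), replaying
  // regenerates an improved map from the unchanged raw data.
};
```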

4.2 Workflow with mapit

This subsection describes the workflow with mapit to develop a 3D map from a test drive of the exploration vehicle. Mapit has to meet various requirements to support the work of 3D map creation. Firstly, it must have open and flexible interfaces to import and export sensor data and maps, and it must be easily adaptable to new requirements. Secondly, the data processing must be simple, reproducible and, in particular, traceable.

Figure 3b represents a mapit workflow involving the exploration vehicle and the processing vehicle. The workflow comprises the following steps (a sketch for reading back the recorded data follows the list):

Exploratory trip

During the exploratory trip, a full-sphere point cloud is recorded every 10 m. These point clouds are stored together with the odometry data using the rosbag tool of the Robot Operating System (ROS). After the exploratory trip, these data are imported into the mapit system.

Registration and mapping

In mapit, these point clouds are aligned to a consistent map using various registration algorithms (operators) and multiple passes. These aligned point clouds are then converted to a 3D occupancy map.

3D to 2D map conversion/transformation

The exploration vehicle and the production vehicle localize themselves via 2D scanners, which measure the surface of the mine in one cross section. For these two vehicles, corresponding cuts are made through the mine's 3D map, and a 2D occupancy map is created.

Subsequent exploration

During the subsequent exploration, the exploration vehicle localizes itself in this previously created 2D map or drives beyond it. New full-sphere point clouds of known and unknown areas are recorded and, after the trip, integrated into the mapit system alongside the data of the previous trips.

Map extension

Then, analogous to the initial registration and mapping, the new point clouds are registered to the already aligned point clouds, and the extended map is created from all data of the previous reconnaissance trips together with the new data.
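As a minimal sketch of this hand-over, the following snippet reads the recorded clouds and odometry back from a rosbag with the standard ROS1 C++ API before they are imported into mapit; the bag file and topic names are assumptions.

```cpp
#include <rosbag/bag.h>
#include <rosbag/view.h>
#include <sensor_msgs/PointCloud2.h>
#include <nav_msgs/Odometry.h>
#include <string>
#include <vector>

int main() {
  rosbag::Bag bag;
  bag.open("exploration_run.bag", rosbag::bagmode::Read);  // file name assumed

  // Topic names are assumptions, not the project's actual configuration.
  std::vector<std::string> topics = {"/swap/full_sphere_cloud", "/odom"};
  rosbag::View view(bag, rosbag::TopicQuery(topics));

  for (const rosbag::MessageInstance& m : view) {
    if (auto cloud = m.instantiate<sensor_msgs::PointCloud2>()) {
      // hand the full-sphere point cloud to the mapit import step
    } else if (auto odom = m.instantiate<nav_msgs::Odometry>()) {
      // keep the odometry as the initial guess for pairwise registration
    }
  }
  bag.close();
  return 0;
}
```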

4.3 3D mine mapping with mapit

To compute a consistent map based on the data collected by the mobile platform, we use the full-sphere point clouds and minimize the error in merging them by integrating the robot's movements.

In order to integrate all sensor data into one global map, the algorithms in mapit do not operate on the data themselves but only on the transformations of these data. This is why registration algorithms, for example, can be run at different resolutions or on different data without the data being modified: the results change only through the modified transformations.

4.3.1 Pairwise registration

The point clouds recorded during the exploratory trip are first registered in pairs. For this, the iterative closest point (ICP) method [22], an iterative minimization method, is used: two consecutive point clouds are compared, and for each point of one point cloud, the nearest point in the other point cloud is searched; the distance between all such point pairs is then minimized. The initial estimate for the algorithm is the odometry provided by the Institute for Advanced Mining Technologies (AMT) of RWTH Aachen University. The implementation is kept general, so that different algorithms, for example from the Point Cloud Library (PCL) [23], can easily be integrated. Pairwise registration yields a good alignment of the point clouds, but typically a small error remains, which becomes visible over long scan series (e.g. in the map of the maxit mine in Krölpa, with a range of approximately 800 m).
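A minimal sketch of such a pairwise registration with PCL's ICP implementation is given below; the file names, the odometry-based initial guess and the parameter values are assumptions, not the project's actual settings.

```cpp
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>
#include <Eigen/Core>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("scan_k.pcd", *source);   // consecutive full-sphere scans
  pcl::io::loadPCDFile("scan_k1.pcd", *target);

  // Initial guess from odometry, here 10 m forward motion between scans.
  Eigen::Matrix4f guess = Eigen::Matrix4f::Identity();
  guess(0, 3) = 10.0f;

  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(source);
  icp.setInputTarget(target);
  icp.setMaxCorrespondenceDistance(1.0);  // ignore point pairs further than 1 m
  icp.setMaximumIterations(50);

  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned, guess);              // refine the odometry estimate

  if (icp.hasConverged()) {
    std::cout << "fitness: " << icp.getFitnessScore() << "\n"
              << icp.getFinalTransformation() << std::endl;
  }
  return 0;
}
```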

4.3.2 Global registration

To minimize the errors of pairwise registration over many point clouds, all point clouds are registered globally. Again, the algorithms are easily interchangeable; currently, mostly the GraphSLAM algorithm [24], following Lu and Milios [25], is used. It creates a graph of the connections between all overlapping point clouds and minimizes the alignment errors of all connections simultaneously.
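The following schematic sketch shows the structure behind such a global registration: nodes are scan poses, edges are relative transforms obtained from pairwise registration of overlapping clouds, and the optimization minimizes the sum of all edge errors simultaneously. For brevity, the error term below uses translations only; GraphSLAM minimizes over the full 6-DoF poses, and all names here are illustrative.

```cpp
#include <Eigen/Geometry>
#include <vector>

// One constraint of the pose graph: the relative transform between scans
// i and j as measured by pairwise registration.
struct Edge {
  int i, j;
  Eigen::Isometry3d z_ij;
};

// Residual of one edge: predicted relative pose vs. measured one
// (translational part only in this sketch).
double edgeError(const std::vector<Eigen::Isometry3d>& poses, const Edge& e) {
  const Eigen::Isometry3d predicted = poses[e.i].inverse() * poses[e.j];
  return (predicted.translation() - e.z_ij.translation()).squaredNorm();
}

// The objective that global registration minimizes over all poses at once;
// a solver such as GraphSLAM [24] iteratively adjusts the poses to reduce it.
double totalError(const std::vector<Eigen::Isometry3d>& poses,
                  const std::vector<Edge>& edges) {
  double sum = 0.0;
  for (const Edge& e : edges) sum += edgeError(poses, e);
  return sum;
}
```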

4.3.3 Conversion to 3D map

As the representation of the 3D map, an OctoMap [26] is used. It offers the advantage that the information can be queried at various resolutions and that the distinction between free, occupied and unknown cells, which is important for navigation, can be represented.
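A minimal sketch of this conversion with the octomap library is given below; the 5 cm resolution, the sensor origin and the file name are assumptions.

```cpp
#include <octomap/octomap.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Insert one registered scan into the octree. Ray-casting from the sensor
// origin marks the cells along each beam as free and the end points as
// occupied; untouched cells remain unknown.
void insertScan(octomap::OcTree& tree,
                const pcl::PointCloud<pcl::PointXYZ>& cloud,
                const octomap::point3d& sensor_origin) {
  octomap::Pointcloud scan;  // octomap's own point cloud type
  for (const auto& p : cloud.points) scan.push_back(p.x, p.y, p.z);
  tree.insertPointCloud(scan, sensor_origin);
}

int main() {
  octomap::OcTree tree(0.05);            // 5 cm leaf resolution (assumed)
  pcl::PointCloud<pcl::PointXYZ> cloud;  // a registered scan, loaded elsewhere
  insertScan(tree, cloud, octomap::point3d(0.0f, 0.0f, 0.0f));
  tree.writeBinary("mine.bt");           // can later be queried at any resolution
  return 0;
}
```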

Figure 4 shows the result of the 3D mapping process for an exploratory trip in the maxit underground mine in Krölpa, Germany. The mobile robot records a full-sphere point cloud every 10 m. The point cloud data and the odometry data are saved into a rosbag file and are processed with the mapit workflow. Figure 4a shows a 2D occupancy grid of the mapped part of the mine, Figure 4c visualizes the point clouds themselves and Figure 4b shows a 3D OctoMap computed from the point clouds.

Figure 4.

Parts of the anhydrite mine in Krölpa, Germany. (a) Occupancy grid, (b) octree and (c) point cloud.

5. Visualization

ROS includes the 3D visualization package RViz that fuses sensor data, the robot model and other 3D data like point clouds into a combined view. RViz uses several external libraries, such as the Ogre3D graphics library, for the 3D visualization. The 3D points of the point clouds can be visualized as small surface elements (surfels) or boxes. A surface mesh can be computed with surface reconstruction algorithms; however, this very expensive step is mostly avoided, and the surface is instead approximated by choosing a suitable size for the surfels or boxes.

For the post-processing of the robot data, a graphical user interface (GUI) for mapit has been developed. The user can define a new project and select all relevant data over the client-server connection. Mapit implements algorithms to register point clouds, space decomposition algorithms for efficient computation and rendering algorithms for visualization. All user-selected algorithms can be arranged and saved in a workflow, which can be visualized and edited in the GUI as a node graph. Figure 5 shows the mapit GUI and the node graph used to register and render 13 point clouds from the ground floor of the main building of the Aachen University of Applied Sciences.

Figure 5.

Visualization from the mapit GUI to render 13 registered point clouds.

5.1 Visualization from sensor and mapping data

For almost all sensors attached to the exploration vehicle, ROS interfaces are provided for visualization. The calibration software of the SWAP platform allows a visualization of the point clouds recorded over time in near real-time. The raw data of the FARO laser scanner are also automatically processed on-board on-site and made available within ROS. This process takes a few seconds to a few minutes, depending on the volume of data. Figure 6a shows a combined visualization of the stop-and-go FARO scan point cloud data in gray values and the live sensor data over 2 s from the SWAP platform in color. Figure 6b shows, in a top-down view, the real-time data from one Velodyne Puck scanner at 20 Hz in blue and red.

Figure 6.

Live sensor data from the SWAP platform of the exploration vehicle at the maxit mine, Krölpa, Germany. (a) Gray values: stop-and-go data from the FARO Focus3D X 130; color values: live data from the SWAP platform over 2 s. (b) Top-down perspective of Velodyne VLP-16 Puck data at 20 Hz in blue and red.

The exploration vehicle can scan automatically using the automation adapter of the FARO Focus laser scanner. A main disadvantage of this automation adapter is that all drivers for the FARO scanner support only the Windows operating system. To overcome this problem, several Windows applications were developed which can be triggered via ROS. Firstly, a scanning application sends user-selected scan parameters over ROS to the scanner and starts the scanning process; the scanned data are stored in the vendor's proprietary format on the hard disk. A second application converts this format into the free PCD format of the Point Cloud Library (PCL) and stores the result. Finally, the PCD files are loaded and visualized with RViz in ROS. As an option, the PCD data can be filtered, as sketched below.
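As an example of this optional filtering step, the following sketch downsamples a converted PCD file with PCL's VoxelGrid filter; the leaf size and the file names are assumptions.

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("faro_scan.pcd", *cloud);

  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(cloud);
  voxel.setLeafSize(0.05f, 0.05f, 0.05f);  // keep one point per 5 cm voxel
  voxel.filter(*filtered);

  pcl::io::savePCDFileBinary("faro_scan_filtered.pcd", *filtered);
  return 0;
}
```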

The huge amount of point cloud data can be decomposed with an octree space partitioning algorithm (see Figure 7). This data structure is suitable for local collision detection, for downsampling the huge amount of data and for representing the data at different resolutions. In particular, normal estimation requires a neighborhood search around every point of the point cloud; this step, as well as the viewer-dependent resolution, can be computed much more efficiently using the octree data structure (a query sketch follows Figure 7).

Figure 7.

Spatial decomposition of the point cloud with an octree data structure. The points are organized in a hierarchical fashion where the resolution of space is increased only when the space actually contains data points.
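The following sketch shows the kind of octree-based neighborhood query that accelerates these steps, using PCL's octree search; the octree resolution and the search radius are assumptions. In practice the octree would be built once and reused for many queries.

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/octree/octree_search.h>
#include <vector>

// Collect the indices of all points within 30 cm of a query point.
std::vector<int> neighborsWithinRadius(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
    const pcl::PointXYZ& query) {
  pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> octree(0.1f);  // 10 cm leaves
  octree.setInputCloud(cloud);
  octree.addPointsFromInputCloud();  // build the spatial decomposition

  std::vector<int> indices;
  std::vector<float> sq_distances;
  octree.radiusSearch(query, 0.3, indices, sq_distances);
  return indices;
}
```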

An interface from mapit to ROS allows a generated map to be returned to the context of the exploration vehicle. This allows current sensor data to be visualized in a corresponding map section with appropriate resolution during a re-exploration.

5.2 Virtual reality integration with CuteVR

The CuteVR library was developed as a bridge between virtual reality hardware drivers and end-user software, with the aim of providing a consistent interface. CuteVR is highly modular, builds on the cross-platform application framework Qt and, thanks to its class structure, can be extended with relatively little effort. An event system and differentiated error handling also make it easier to handle highly dynamic VR scenes.

It forms the basis for a VR plugin for RViz, which allows viewing all sensor data in VR. The user can thus navigate the virtual world next to the exploration vehicle (see Figure 8). CuteVR unifies the interfaces to VR devices and can be extended for future VR hardware.

Figure 8.

Virtual reality visualization with CuteVR using the HTC Vive VR headset.

Based on CuteVR, the ROS package vr_tools was developed, which integrates VR hardware and VR concepts into ROS. The core component is the head-mounted display (HMD) plugin for RViz, with which one or more users can look around and move around in an RViz scene. This allows intuitive and true-to-scale viewing of 3D sensor data in virtual space.

Since RViz itself does not provide structures for the spatial distribution of large amounts of data such as octrees, it was decided instead to filter the data stream to RViz and adapt it on the fly.

Furthermore, the states of VR input devices are made available in ROS and can thus interact with other programme components. With additional VR setups, multiple users can view the same scenes simultaneously from different perspectives via Multi-View.

6. Discussion

In this chapter, we presented a system for continuous mapping and exploration of underground sites. Most of this work was developed as part of the project UPNS4D+ (see Section 1.1), which was funded by the German Federal Ministry of Education and Research within the programme “R4: Innovative Technologies for Resource Efficiency – Research for the Provision of Raw Materials of Strategic Economic Importance”.

We first reported on the hardware platform that was built to acquire comparably dense 3D point clouds of the (underground) environment using a rotating LiDAR device. Afterwards, we reported on the framework mapit which is used to track and execute post-processing operations on the data acquired by the robot. In particular, it allows for registering a (large) set of individual maps into one global map. The important aspect is that mapit does not merely store the resulting map and discard the original data; instead, it keeps the raw data and logs the operations that were performed on them. This allows for reapplying all post-processing in case an algorithm has been improved or a misalignment in the calibration of the sensor setup has been detected. Finally, we presented options for visualizing the resulting maps in different contexts.

The system described in this chapter provides diverse support for (first) responders in search and rescue applications. For one, the resulting maps can be used to conduct further missions with rescue robots. Also, analysis tools can be run on the maps. For example, the mapit framework supports running algorithms to compare maps from different points in time to see which changes have occurred. The versatile visualization capabilities allow for planning rescue missions and training first responders before sending them into the field.

Acknowledgments

This research was funded by the German Federal Ministry of Education and Research within the programme of “R4–Innovative Technologies for Resource Efficiency—Research for the Provision of Raw Materials of Strategic Economic Importance” and is part of the project UPNS4D+ with FKZ 033R126C. Special thanks go to Tobias Hartmann from the Advanced Mining Technology Institute of RWTH Aachen University and Prof. Paul Burgwinkel and Jörn Lachmann from the company Fritz Rensmann GmbH & Co. with whom we designed and built up the exploration vehicle.

Further, we want to thank the other consortium partners for their valuable contributions: indurad GmbH; MILAN Geoservice GmbH; Institute for Advanced Mining Technology at RWTH Aachen University; XGraphic Ingenieurgesellschaft mbH, Aachen; Technische Universität Bergakademie Freiberg; Fritz Rensmann, Maschinenfabrik, Diesellokomotiven, Getriebe GmbH & Co. KG; and GHH Fahrzeuge GmbH.

References

  1. Kruijff G-JM, Pirri F, Gianni M, Papadakis P, Pizzoli M, Sinha A, et al. Rescue robots at earthquake-hit Mirandola, Italy: A field report. In: 2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR); IEEE; 2012. pp. 1-8
  2. Kruijff G-JM, Kruijff-Korbayová I, Keshavdas S, Larochelle B, Janíček M, Colas F, et al. Designing, developing, and deploying systems to support human-robot teams in disaster response. Advanced Robotics. 2014;28(23):1547-1570
  3. Mascarich F, Khattak S, Papachristos C, Alexis K. A multi-modal mapping unit for autonomous exploration and mapping of underground tunnels. In: 2018 IEEE Aerospace Conference; March 2018. pp. 1-7
  4. Erdelj M, Natalizio E, Chowdhury KR, Akyildiz IF. Help from the sky: Leveraging UAVs for disaster management. IEEE Pervasive Computing. 2017;16(1):24-32
  5. Pellenz J, Lang D, Neuhaus F, Paulus D. Real-time 3D mapping of rough terrain: A field report from disaster city. In: IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR); IEEE; 2010. pp. 1-6
  6. Ohno K, Tadokoro S, Nagatani K, Koyanagi E, Yoshida T. Trials of 3-D map construction using the tele-operated tracked vehicle Kenaf at disaster city. In: 2010 IEEE International Conference on Robotics and Automation (ICRA); IEEE; 2010. pp. 2864-2870
  7. Leingartner M, Maurer J, Ferrein A, Steinbauer G. Evaluation of sensors and mapping approaches for disasters in tunnels. Journal of Field Robotics. 2016;33(8):1037-1057
  8. Kohlbrecher S, Meyer J, Graber T, Petersen K, Klingauf U, von Stryk O. Hector open source modules for autonomous mapping and navigation with rescue robots. In: Behnke S, Veloso M, Visser A, Xiong R, editors. RoboCup 2013: Robot World Cup XVII. Berlin, Heidelberg: Springer; 2014. pp. 624-631
  9. Dubé R, Gawel A, Cadena C, Siegwart R, Freda L, Gianni M. 3D localization, mapping and path planning for search and rescue operations. In: 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR); October 2016. pp. 272-273
  10. Balta H, Velagic J, Bosschaerts W, De Cubber G, Siciliano B. Fast iterative 3D mapping for large-scale outdoor environments with local minima escape mechanism. In: 12th IFAC Symposium on Robot Control (SYROCO 2018). IFAC-PapersOnLine. 2018;51(22):298-305
  11. Murphy RR, Tadokoro S, Kleiner A. Disaster robotics. In: Springer Handbook of Robotics. Springer; 2016. pp. 1577-1604
  12. Zhang Z, Nejat G, Guo H, Huang P. A novel 3D sensory system for robot-assisted mapping of cluttered urban search and rescue environments. Intelligent Service Robotics. 2011;4(2):119-134
  13. Larochelle B, Kruijff G-JM. Multi-view operator control unit to improve situation awareness in USAR missions. In: 2012 IEEE RO-MAN; IEEE; 2012. pp. 1103-1108
  14. Buttgereit DA, Hartmann T, Schade S, Nienhaus K. Auf dem Weg zu nachhaltigen Abbauprozessen mit UPNS 4D+. In: 20. Kolloquium Bohr- und Sprengtechnik; Institut für Bergbau, Technische Universität Clausthal; Clausthal-Zellerfeld, Germany; 18-19 January 2017. 1st ed. Clausthal-Zellerfeld: Papierflieger; 2017. pp. 131-140 (in German)
  15. Donner R, Rabel M, Scholl I, Ferrein A, Donner M, Geier A, et al. The extraction of relevant features from 3D point clouds of a mobile multi-sensor system in an underground mine setting. In: Tagungsband GeoMonitoring 2019; Institutionelles Repositorium der Leibniz Universität Hannover; 2019. pp. 91-110 (in German)
  16. Neumann T, Dülberg E, Schiffer S, Ferrein A. A rotating platform for swift acquisition of dense 3D point clouds. In: ICIRA 2016, Part I. Volume 9834 of Lecture Notes in Computer Science. Springer; 2016. pp. 257-268
  17. Quigley M, Conley K, Gerkey BP, Faust J, Foote T, Leibs J, et al. ROS: An open-source robot operating system. In: ICRA Workshop on Open Source Software; 2009
  18. Neumann T, Ferrein A, Kallweit S, Scholl I. Towards a mobile mapping robot for underground mines. In: Proceedings of the 7th IEEE Robotics and Mechatronics Conference (RobMech 2014); 2014
  19. Mandow A, Morales J, Gomez-Ruiz JA, García-Cerezo AJ. Optimizing scan homogeneity for building full-3D lidars based on rotating a multi-beam Velodyne range-finder. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); October 2018. pp. 4788-4793
  20. Nüchter A, Lingemann K. 3DTK—The 3D Toolkit. 2011. Available from: http://slam6d.sourceforge.net/doc/slam6ddoc.html
  21. Nüchter A. 3D Robotic Mapping. Volume 52 of Springer Tracts in Advanced Robotics. Springer; 2008
  22. Besl PJ, McKay ND. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1992;14(2):239-256
  23. Rusu RB, Cousins S. 3D is here: Point Cloud Library (PCL). In: 2011 IEEE International Conference on Robotics and Automation; 2011. pp. 1-4
  24. Thrun S, Montemerlo M. The GraphSLAM algorithm with applications to large-scale mapping of urban structures. The International Journal of Robotics Research. 2006;25(5-6):403-429
  25. Lu F, Milios E. Robot pose estimation in unknown environments by matching 2D range scans. Journal of Intelligent and Robotic Systems. 1997;18(3):249-275
  26. Hornung A, Wurm KM, Bennewitz M, Stachniss C, Burgard W. OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. 2013;34(3):189-206

Notes

  1. http://www.northropgrumman.litef.com/en/products-services/industrial-applications/product-overview/mems-imu/
  2. https://github.com/MASKOR/mapit
  3. https://github.com/MASKOR/cc_qMapit_IO_plugin
