Unmanned Ground Robots for Rescue Tasks

Written By

Karsten Berns, Atabak Nezhadfard, Massimo Tosa, Haris Balta and Geert De Cubber

Reviewed: 27 April 2017 Published: 23 August 2017

DOI: 10.5772/intechopen.69491

From the Monograph

Search and Rescue Robotics - From Theory to Practice


Abstract

This chapter describes two unmanned ground vehicles that can help search and rescue teams in their difficult but life-saving tasks. These robotic assets have been developed within the framework of the European project ICARUS. The large unmanned ground vehicle is intended to be a mobile base station. It is equipped with a powerful manipulator arm and can be used for debris removal, shoring operations, and remote structural operations (cutting, welding, hammering, etc.) on very rough terrain. The smaller unmanned ground vehicle is also equipped with an array of sensors, enabling it to search for victims inside semi-destroyed buildings. Working together with each other and with human search and rescue workers, these robotic assets form a powerful team, increasing the effectiveness of search and rescue operations, as proven by operational validation tests in collaboration with end users.

Keywords

  • unmanned ground vehicles
  • search and rescue

1. Introduction

1.1. Motivation

Ground robots have proven to be a useful tool for post-disaster intervention, and in particular for urban search and rescue (USAR).

The search and rescue (SAR) operators consulted during the project indicated that an ideal ground robot for SAR operations would need such a large variety of capabilities and tools that merging them into a single machine is not feasible. Hence the strategy of using two different robots with well-defined and complementary capabilities:

  • A large unmanned ground vehicle (UGV) to perform heavy duty jobs and collect information about dangerous places

  • A small UGV to enter tight places, look for victims, and provide indoor view for danger assessment

These two robots can be used together within one SAR team for maximum effectiveness, or independently of each other.

The advantages of using UGVs in disaster scenarios are multiple:

  • Faster location of victims

  • Shorter rescue times

  • Less risk for the SAR operators

  • Faster assessment of damage to buildings

The disadvantage of having to develop two machines instead of one can be partially compensated for by using similar structures and control programs on both robots [1].

1.2. State of the art

Kleiner [2] addresses the problems of robot localization, environment mapping, team coordination, and victim detection. In particular, an RFID-SLAM approach is used for loop closing when mapping. Robot and human positions are tracked by slippage-sensitive odometry and pedestrian dead reckoning (PDR), respectively. Maps are then used for both centralized and decentralized coordination of rescue teams. Data collected by the robots are available to SAR operators through wearable devices such as head-mounted displays (HMDs).

Michael et al. [3] combine maps generated by an unmanned aerial vehicle (UAV) and a ground robot used as a moving base; the cooperation between UGV and UAV in particular is addressed. The purpose is to explore compromised nuclear power plants that are too risky for humans due to the high level of radioactivity. Maps are generated from 3D sensor data using the Iterative Closest Point (ICP) approach and then corrected based on the odometry readings.

Murphy et al. [4] focus on robots for underground mine rescue. The physical challenges emerging from different scenarios are addressed, and different platforms are proposed to better fit the various scenarios. Most of the proposed platforms are mobile robots equipped with tracks and a dexterous manipulator, teleoperated through a fibre-optic cable.

In the View-Finder project [5], robotic tools were developed for disaster management and for supporting fire-fighting services. However, the project concentrated mostly on developing teleoperation and autonomous navigation capabilities [6, 7] and did not consider the mobility of the unmanned vehicles on rough terrain. This dichotomy is often seen in research projects: they concentrate either on robot mobility in rough terrain or on increasing the cognitive/intelligent behaviour of the robotic assets, but seldom on the combination of both research domains.

In the EU-funded project NIFTi, a system for supporting human-robot teams during disaster response was designed and developed [21]. A UGV is used in different scenarios to explore the disaster area and look for victims. Similarly to the ICARUS SUGV, the NIFTi UGV has the capabilities necessary to accomplish USAR missions:

  • Automatic victim detection

  • Creation of a metric map based on LIDAR

  • Path planning

  • Multimodal human-robot interaction

Furthermore, the NIFTi UGV has some higher-level features that make it able to reason and infer more complex concepts about the environment.

Localization is performed by fusing visual and inertial odometry and correcting the result with an estimate obtained through the ICP algorithm.

The high-level representation of the sensory inputs is addressed using a topological representation of the environment, consisting of a graph whose nodes can be either relevant objects or regions obtained by segmentation of the metric map. The robot uses this representation to autonomously plan how to execute a task.

The EU-funded project TRADR is a sequel to the aforementioned project NIFTi [22]. After the earthquake in Amatrice, Italy, a deployment with a teleoperated UGV and several UAVs was performed. The UGV carries similar sensors to the ICARUS SUGV, with the exception that the former has a LadyBug3 omnidirectional camera, whereas on the SUGV it was preferred to mount single cameras at the most critical points. The laser scanner is almost the same model, but on the TRADR UGV it is mounted at the front on a rotating unit to provide a 3D point cloud, while on the SUGV the two laser scanners are fixed on the sides and the 3D point cloud is provided by a time-of-flight camera.

1.3. Subtasks

Breaking with the tradition of focusing on only one research domain, the ICARUS project developed intelligent robotic agents with high mobility on rough terrain [1]. Within the ICARUS project, the UGVs represent the core of the assistive robots during land disasters like earthquakes, floods, and landslides. The UGVs are not meant to be a substitute for human search and rescue operators; they are instead a complementary tool that assists them and extends their operational possibilities.

The large unmanned ground vehicle (LUGV) is intended primarily as a means to open the way for rescuers whenever it is obstructed by debris. With the jackhammer mounted, it can break through walls and concrete slabs with reasonable speed. The gripper then makes it possible to remove debris and stones that obstruct the entrance to a damaged building. Whenever the structure of the building is not completely stable, the gripper can be used to place struts to stabilize it; in this way, the risk for the rescuers is minimal, as they can perform this operation remotely.

When the entrance to the collapsed building is large enough to allow the passage of the small unmanned ground vehicle (SUGV), its transport box is hooked to the end-effector of the LUGV arm and the SUGV is deployed. If the ground floor is not accessible, the SUGV can be deployed on a balcony or a roof up to 3 m high.

The SUGV can explore the inside of a building while teleoperated from a remote area. The operator has a view of the environment around the robot thanks to the numerous sensors and the mapping system. Knowledge of the obstacles near the robot is necessary when manoeuvring in tight spaces, in order to avoid collisions that could render the robot unusable.

With an infrared camera, the SUGV is able to find heat traces of victims who cannot move. When a victim is found, the operator can annotate the position and deliver a small rescue kit, for example water or oxygen. When the communication bandwidth is sufficient, bidirectional voice communication with the victim is possible in order to give instructions and reassurance.


2. Mechatronics of the large unmanned ground vehicle

2.1. Design concept

The LUGV (Figure 1) was originally a commercial vehicle built by the French company Metalliance [8]. It was adapted to the purposes of the ICARUS project and provided to the University of Kaiserslautern; its specifications are shown in Table 1. The main power is provided by a diesel engine, which in turn drives a hydraulic pump. The pump feeds oil to the pistons that actuate the two tracks; the oil flow also drives a turbine attached to a generator, which provides 220 V AC.

Figure 1.

Large unmanned ground vehicle (Source: ICARUS).

Dimensions: 3 m × 2 m × 1.8 m
Weight: 4 t
Payload weight capacity: 300 kg
Power capacity: 1 kW at 12 V DC
Maximum speed: 18 km/h
Autonomy: 3 h
Operational range: 2500 m
Computing power: Intel Core i7-3610, 4 GB RAM
Control software: FINROC (C++)
Safety mechanisms: 4 emergency stop buttons, 1 wireless passive stop system
Main power: Diesel engine
Locomotion system: 6 wheels inflated with polyurethane foam, covered by two caterpillars
Minimum turning radius: 0 m

Table 1.

LUGV specifications.

The LUGV has high mobility on uneven terrain thanks to its large caterpillar tracks, which also allow in-place rotation. A 5-degree-of-freedom, hydraulically actuated manipulator is mounted on top of the vehicle. Different tools can be attached to the end-effector: a gripper, a jackhammer, or a box to transport the SUGV.

The control program runs on a PC that communicates with a PLC, which deals directly with the low-level hardware. The manipulator has its own controllers, which are connected to the main PC.

2.2. Sensor system

The position and orientation of the LUGV are provided by a global navigation satellite system (GNSS) and an inertial measurement unit (IMU), respectively. As there is ample space on the LUGV, it was possible to mount two GNSS antennas, making the position measurement more precise.

A stereo-camera system was mounted on the front section-bar tower, with the aim of providing a dense 3D point cloud used for mapping and obstacle detection [9, 10]. The choice of a stereo-camera system is justified by its larger range compared to a time-of-flight camera. One camera of the stereo system is also used as direct visual feedback by the remote operator. Since this camera is attached to the main structure of the robot, an IP camera is mounted on the arm next to the elbow joint, providing visibility of the arm so that no object is hit during manipulation activities. Finally, a USB webcam was installed on the end-effector to give direct visibility of the tool.

Two laser range finders are mounted on the front and rear sides for obstacle detection, in a similar fashion to the SUGV, but the sensor type on the LUGV has a larger range and a lower resolution; further details are given in Section 6.1.

2.3. Mechatronics of the arm

The LUGV manipulator is a 5-degree-of-freedom, hydraulically actuated arm. The workspace of the manipulator is shown in Figure 2.

Figure 2.

Workspace for the manipulator arm on the LUGV (Source: ICARUS).

A simple PID controller is used to control the joint angles. During operation, the temperature and pressure of the hydraulic oil change, so the internal dynamics of the hydraulic actuators are time-variant. This can result in undesirable oscillations if the PID controller is tuned with high gains. The stability issue can be alleviated by reducing the PID gains, at the cost of precision. Due to the safety risks posed by such a large vehicle, low gains had to be chosen to guarantee stability. We adopted a different methodology to recover speed and precision: a virtual trajectory was calculated to force the arm towards the desired trajectory. Nevertheless, this method brought no benefit in the remote-control scenario, where the user moves the manipulator directly. Our experiments showed that, for convenient operation of such a highly dynamic arm, a haptic controller with force feedback is necessary.
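
To make the scheme concrete, the following minimal sketch (our own illustration, not the ICARUS code; gains, limits, and names are assumptions) combines a deliberately low-gain PID joint loop with a virtual set point that leads the measured angle towards the operator's target in bounded steps:

```cpp
// Low-gain PID per joint, plus a "virtual trajectory" set point that
// pulls the joint towards the target without demanding large errors.
#include <algorithm>

struct PidJoint {
  double kp, ki, kd;                 // deliberately low gains for stability
  double integral = 0.0;
  double prev_error = 0.0;

  double update(double setpoint, double measured, double dt) {
    double error = setpoint - measured;
    integral += error * dt;
    double derivative = (error - prev_error) / dt;
    prev_error = error;
    return kp * error + ki * integral + kd * derivative;
  }
};

// Instead of feeding the distant target directly, move an intermediate
// set point at most max_step ahead of the measured angle each cycle.
double virtualSetpoint(double target, double measured, double max_step) {
  return measured + std::clamp(target - measured, -max_step, max_step);
}
```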

The LUGV manipulator was equipped with three different tools. The gripper is used for grabbing and moving obstacles such as metal sheets, bricks, etc.; the controller provides satisfactory accuracy for grasping tasks. The second tool is the jackhammer, used to breach concrete walls; in several demonstrations it proved to be a useful way to let the SUGV enter hazardous buildings. The third tool is the SUGV box, used to carry and deploy the SUGV near the rescue area. Deploying the SUGV this way is vital to the rescue operation, since the SUGV can only cover a radius of about 50 m in a reasonable time; wherever the LUGV can approach the building, it saves the SUGV's battery power for indoor operation. The LUGV can also deploy the SUGV up to 3 m above the ground, which adds flexibility: the SUGV can be placed directly on the first floor of a building.


3. Mechatronics of the small unmanned ground vehicle

3.1. Design concept

The SUGV (Figure 3) is a commercial robot originally produced by the British company Allen Vanguard as a teleoperated robot for bomb defusing [11]. The specifications of the system are shown in Table 2.

Figure 3.

Small unmanned ground vehicle (Source: ICARUS).

Dimensions: 104 cm × 53 cm × 56 cm (with closed arm)
Weight: 60 kg
Main power: 2 × 24 V DC LiPo batteries
Propulsion: 2 caterpillars actuated by DC motor + gearbox
Arm payload: 20 kg
Autonomy: 2 h
Maximum speed: 2 km/h
Computing power: Intel Core i7-3610, 4 GB RAM
Control software: FINROC (C++)
Safety mechanisms: Wireless controller, emergency button
Minimum turning radius: 0 m

Table 2.

SUGV specifications.

The platform motion is provided by two tracks actuated by an electric motor with a gearbox. The tracks are raised at the front, which permits climbing steps of up to 20 cm.

The 5-degree-of-freedom manipulator allows the robot to open doors, grasp small objects, or extend the visual feedback to places the platform cannot reach, such as through small holes or over low walls. Each joint of the arm is actuated by a DC motor and a worm gearbox.

The low-level control of the caterpillar motors and of the manipulator is performed by a dedicated motor controller and a digital signal processor (DSP), respectively. The DSP is also in charge of controlling the lights and gathering information from the joint encoders. The main processing power is provided by a PC mounted on the rear part of the chassis. An Odroid board is present as a second processing unit and is used only to collect data from some sensors.

3.2. Sensor system

The SUGV was provided with a rich sensor configuration for the purpose of ICARUS:

  • An IMU and GPS provide basic telemetry like position and orientation.

  • A second-generation Kinect is installed at the front of the chassis. It provides a dense 3D point cloud used for navigation and obstacle avoidance. Two laser range finders (LRFs) are mounted on both sides for the same purpose. Two different maps are built from the Kinect and the LRFs and then merged afterwards (Section 6.1).

  • An RGB camera is integrated into the Kinect, while a second camera is installed on the gripper; it is used when grasping objects with the manipulator or when the visual feedback must be extended to places inaccessible to the whole robot, such as below furniture or inside small holes. A third RGB camera is mounted on the rear side; it is set to a low resolution to reduce the network load and is used mainly when manoeuvring in tight spaces where rear vision is required.

  • An infrared camera with a quantum cascade detector [12] was mounted after the demonstrations to find the heat traces of living victims as well as to detect the presence of CO2 in the air.

  • As requested by the end users, a CO2 sensor was installed to measure the CO2 concentration in the air by direct contact.

  • A microphone and a speaker were mounted as well, allowing direct voice communication with victims when possible, or text-based messages when the communication bandwidth is low.

3.3. Mechatronics of the arm

The SUGV manipulator (Figure 4) has three rotational joints on the arm and two rotational joints on the wrist, and is able to perform the required tasks inside its workspace. The arm is designed for low-speed, high-precision object manipulation: its actuators are equipped with worm-drive gearboxes to maximize the load capability of the arm and to reduce the negative effect of load disturbances on position control. As a result, the arm can lift up to 20 kg in compact mode and 8 kg in fully stretched mode.

Figure 4.

The workspace of the SUGV manipulator (Source: ICARUS).

The joints use magnetic absolute encoders that are resistant to dust and moisture and have a resolution of 0.1°. This sensory feedback enables high-precision position control, limited only by the backlash of the gearboxes. The same feedback can also be used to provide force feedback for a haptic device. Our experiments demonstrated that force feedback calculated from the position error helps the user get a better feel for the motion and adapt to the slow speed of the manipulator. Force feedback is especially valuable in teleoperation with a large video delay: in disaster areas, the communication bandwidth is low and the video quality is poor, so force feedback is of great value, giving the user a fast response from the environment ahead of the delayed camera image.
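
A minimal sketch of this idea follows, assuming a per-joint stiffness gain (the joint count, gains, and function names are ours, not the ICARUS code): the force rendered on the haptic device grows with the lag between commanded and measured joint positions, so a blocked or slowly following joint is felt as resistance before the delayed video shows it.

```cpp
// Force feedback from position error: F_i = k_i * (q_desired_i - q_actual_i).
#include <array>

constexpr int kJoints = 5;
using JointVec = std::array<double, kJoints>;

JointVec forceFeedback(const JointVec& q_desired, const JointVec& q_actual,
                       const JointVec& stiffness) {
  JointVec force{};
  for (int i = 0; i < kJoints; ++i) {
    // A large tracking error means the arm has not yet followed the
    // command (or is blocked by contact); render it as resistance.
    force[i] = stiffness[i] * (q_desired[i] - q_actual[i]);
  }
  return force;
}
```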


4. Control concept

On both robots, FINROC (Framework for INtelligent RObot Control) is installed: a C++/Java framework [13] developed by the RRLab specifically to control robots in an easy and flexible way.

The modularity of FINROC allows the user to add modules that perform different tasks and communicate with each other through ports. The user does not have to care about details like scheduling, data exchange, or multithreading, as these are managed in the background by the framework. Various data types can be passed through ports, such as scalar values, images, and point clouds; the basic requirement is that the data must be serializable.
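
The sketch below is a schematic miniature of this module/port concept in plain C++; it is emphatically not the actual FINROC API, and all names are hypothetical.

```cpp
// Hypothetical miniature of port-based modules: a module declares typed
// input/output ports, and a framework calls update() cyclically while
// handling the data exchange between connected ports.
#include <vector>

template <typename T>
struct Port {
  T value{};
  std::vector<Port<T>*> sinks;          // ports this one feeds
  void publish(const T& v) {
    value = v;
    for (auto* s : sinks) s->value = v; // the framework would do this for us
  }
};

struct ObstacleFlagModule {             // made-up example module
  Port<float> in_range;                 // e.g. a distance from a scanner
  Port<bool> out_obstacle;
  void update() {                       // called once per control cycle
    out_obstacle.publish(in_range.value < 0.5f);
  }
};
```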

Two Java-based tools can be used to monitor the program: Finstruct and Fingui. The first shows a tree of the modules running within the program and allows the user to connect ports manually at runtime, without recompiling; this feature is particularly useful during field tests, when fast changes to the program are required. Fingui is a graphical interface that lets the user visualize the output of some ports and provide inputs to others using predefined graphic widgets.

The software running on our UGVs has, at the top level, the graphical interface or the joypad; both can be used by the operator to provide control inputs (Figure 5).

Figure 5.

Control structure (Source: ICARUS).

These controls are then processed by the control program (IcarusControl) and converted into hardware commands. The control program is in charge of the high-level processing of the sensor data; mapping, localization, navigation, and obstacle avoidance are performed by modules in this layer.

The motion commands for both the platform and the manipulator are sent by the Hardware Abstraction Layer (HAL) to the related hardware. The HAL is also responsible for collecting data from hardware peripherals, such as the manipulator encoders, and sending them to the control program.

Since FINROC is not a ROS node, it could not be directly connected to the ICARUS Command and Control interface (C2I) [14], which is ROS-based. For this purpose, an interface was created using the JAUS (Joint Architecture for Unmanned Systems) library [15].

On the robot side, the interface is a FINROC module that acts as a bidirectional gate: on one side, it receives commands from the C2I and passes them to the robot; on the other side, it receives sensor data from the robot and sends them to the C2I. The data sent through the interface in both directions are listed in Table 3.

Data exchanged                              SUGV   LUGV

C2I to UGV
End-effector desired Cartesian position      V      V
Joints desired position                      V      V
Gripper control                              V      V
Light switch                                 V      V
Engine switch                                       V
Manipulator enable                           V      V
Platform motion                              V      V
Waypoints list                               V      V
Tool change selector                                V
Tool lock switch                                    V
Enable audio communication                   V
Enable speech over text                      V
Reset localization                           V      V
Text                                         V
Digital video sensor                         V

UGV to C2I
Roll, pitch, yaw                             V      V
GNSS                                         V      V
RGB images                                   V      V
Point cloud                                  V      V
Battery voltage                              V
End-effector actual Cartesian position       V      V
Joints actual position                       V      V
Low fuel alarm                                      V
Text                                         V
Victim waypoint                              V
CO2 value                                    V

Table 3.

Data sent through the interface.

An asynchronous callback function is invoked whenever a message is received from the C2I. Depending on the type of the message, the callback converts it to the proper FINROC-compatible data format and forwards the data to the main control program. At every cycle, the interface reads the input data from the robot and, after conversion to the JAUS formats, calls the corresponding function to forward the data to the JAUS layer.
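
A hedged sketch of this gateway pattern is given below; none of the types or function names come from the real JAUS or FINROC code, they only illustrate the two conversion directions described above.

```cpp
// Bidirectional gateway: an asynchronous callback converts incoming C2I
// messages to internal commands; the cyclic update converts robot data
// back into messages for the JAUS layer. All names are illustrative.
#include <string>

struct JausMessage { int type; std::string payload; };
struct RobotCommand { /* framework-internal representation */ };

class C2IGateway {
 public:
  // Registered with the JAUS layer; invoked whenever a message arrives.
  void onJausMessage(const JausMessage& msg) {
    RobotCommand cmd = convertToInternal(msg);  // per-type conversion
    forwardToControl(cmd);                      // into the control program
  }

  // Called once per control cycle with fresh sensor data.
  void publishGnss(double lat, double lon) {
    JausMessage out{/*type=*/1,
                    std::to_string(lat) + "," + std::to_string(lon)};
    sendToJausLayer(out);
  }

 private:
  RobotCommand convertToInternal(const JausMessage&) { return {}; }
  void forwardToControl(const RobotCommand&) {}
  void sendToJausLayer(const JausMessage&) {}
};
```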


5. Simulation

Simulation has been a valuable tool in developing both robots, as shown in Figures 6 and 7. Because of the difficulty and the costs, both in time and money, of testing the platforms (the LUGV in particular), simulations were created using different simulation engines.

Figure 6.

SUGV simulation (Source: ICARUS).

Figure 7.

LUGV simulation (Source: ICARUS).

The last and most accurate simulation performed makes use of the Virtual-Robot Experimentation Platform (V-REP) [16].

A CAD model of each robot was drawn part-by-part, assembled in the simulation, and provided with dynamic properties measured on the real robots, such as mass, moments of inertia, centre of mass, and friction coefficients.

In the simulation, the robot's hardware is replaced by a physical model of the real platform. The higher-level control program used for the simulated robots is the same one that controls the real robots.

Simulation has been particularly useful for testing the collision avoidance system, for which a large space with many different obstacles was necessary.


6. Navigation of the UGVs in an unstructured environment

6.1. Mapping

A mapping system is available on both robots. It processes the information about the environment gathered by the sensors and converts it into a format usable by the collision avoidance system and by the human operator. The system is composed of the following parts:

  • Laser range finders (LRF) + stereo-camera (LUGV only)

  • Grid map

  • Sector map

On the LUGV, two LRFs are installed, one at the front and one at the back. Both are attached to a section-bar structure fixed directly to the bumpers. The sensor is a SICK LMS511, a planar laser scanner protected by a case against rain and dust. The scan has a horizontal field of view (FOV) of 180° and a range of about 80 m, with a minimum angular resolution of 1/6°. The sensor frequency is set to 10 Hz, i.e. each device performs 10 scans per second, and each scan provides 1080 planar points. The sensors are mounted at about half the track height, so that an obstacle intersecting the scan plane is considered not traversable, while everything lying below the scan plane is considered traversable. The laser scanners do not protrude from the vehicle but are protected by its bumpers, so that if the obstacle detection fails and an obstacle is hit, the first part to make contact is the bumper rather than the more delicate and expensive laser scanner. The mounting position was also chosen so that as few parts of the vehicle as possible obstruct the FOV of the sensor.

On the SUGV, two LRFs are installed as well. Smaller than the ones mounted on the LUGV, they are located on both sides, with the scan plane just above the tracks. Although the FOV of this scanner type is 270°, it is limited to 180° to prevent parts of the robot itself from falling within the FOV. The scan range is 10 m and the minimum angular resolution is 1/3°.

At every cycle, the scans from the two devices are merged and converted from the 2D sensor reference frames to the 3D vehicle reference frame; in this way, a 3D point cloud is obtained.
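
The per-cycle conversion can be sketched as follows (mounting poses and names are our assumptions): each planar scan point, given as range and bearing in the sensor frame, is rotated and translated by the fixed mounting pose of its scanner into the 3D vehicle frame.

```cpp
// Planar scan (range, bearing) -> points in the 3D vehicle frame.
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };
struct SensorPose { double x, y, z, yaw; };   // fixed mounting on the vehicle

std::vector<Point3> scanToVehicleFrame(const std::vector<double>& ranges,
                                       double angle_min, double angle_step,
                                       const SensorPose& pose) {
  std::vector<Point3> cloud;
  for (std::size_t i = 0; i < ranges.size(); ++i) {
    double a = angle_min + i * angle_step;    // bearing in the sensor frame
    double sx = ranges[i] * std::cos(a);      // 2D point in the sensor frame
    double sy = ranges[i] * std::sin(a);
    // Rotate by the mounting yaw, translate by the mounting offset; z is
    // constant because the scan plane is horizontal.
    cloud.push_back({pose.x + sx * std::cos(pose.yaw) - sy * std::sin(pose.yaw),
                     pose.y + sx * std::sin(pose.yaw) + sy * std::cos(pose.yaw),
                     pose.z});
  }
  return cloud;
}
```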

A traversability map [17] is built from the point cloud. This map is a 2D grid of cells centred on the vehicle; the default size is 20 m × 20 m with a resolution of 0.25 m for the LUGV and 6 m × 6 m with a resolution of 0.05 m for the SUGV.

In the pre-processing phase, the point cloud is filtered: points too close to the sensor are removed, because they are assumed to belong to the robot itself. The point cloud is then filled into the grid map: for each point in the cloud, the corresponding cell is found, and each cell keeps a count of how many points lie inside it.

For a cell to be considered an obstacle, the number of points it contains must exceed a specific threshold. This threshold is not constant for all cells but inversely proportional to the radial distance from the vehicle, which accounts for the fact that the point density decreases in distant cells.

A cell can have one of the following labels:

  • UNKNOWN: if the cell is out of the FOV of the sensors and it was never explored

  • FREE: if it is within the FOV of the sensors and the number of points inside is lower than the threshold

  • EXPECTED FREE: if it was FREE in the previous cycle and it is now out of the FOV

  • OBSTACLE: if the number of points inside is higher than the threshold

  • EXPECTED OBSTACLE: if it was OBSTACLE in the previous cycle and it is now out of the FOV

  • ROBOT: the cells covered by the robot itself

The meaning of EXPECTED FREE and EXPECTED OBSTACLE is that we have information about these cells from previous cycles, but they are currently not covered by the sensors; we therefore expect the previous information to still be valid, without being sure. A sketch of this update rule follows.
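
The sketch below uses assumed names and tuning constants (the real thresholds are not given in the text): the distance-dependent point-count threshold decides between OBSTACLE and FREE, and cells leaving the sensor FOV decay into their EXPECTED states.

```cpp
// Cell classification with a distance-dependent threshold.
#include <algorithm>
#include <cmath>

enum class Cell { UNKNOWN, FREE, EXPECTED_FREE, OBSTACLE, EXPECTED_OBSTACLE, ROBOT };

// Threshold inversely proportional to the radial distance: far cells
// receive fewer points per obstacle, so fewer points must suffice.
int obstacleThreshold(double radial_dist_m) {
  const double kBase = 20.0;                   // assumed tuning constant
  return static_cast<int>(std::ceil(kBase / std::max(radial_dist_m, 1.0)));
}

Cell updateCell(Cell previous, int point_count, double radial_dist_m,
                bool in_fov) {
  if (previous == Cell::ROBOT) return previous;
  if (!in_fov) {                               // keep stale info, but hedged
    if (previous == Cell::FREE) return Cell::EXPECTED_FREE;
    if (previous == Cell::OBSTACLE) return Cell::EXPECTED_OBSTACLE;
    return previous;
  }
  return point_count > obstacleThreshold(radial_dist_m) ? Cell::OBSTACLE
                                                        : Cell::FREE;
}
```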

The grid map is refreshed every cycle and is connected to the vehicle odometry from the inertial measurement unit (IMU). When the robot moves, the map is shifted by an amount depending on the distance travelled, so that the robot always stays at the centre of the map while the obstacles shift. If an obstacle moves beyond the map boundary, its information is lost. If the robot rotates, the virtual robot in the map rotates while the surrounding obstacles remain fixed.
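
The robot-centred scrolling can be sketched like this (reusing the Cell labels from the sketch above; the whole-cell shift is our simplification): on translation, the cell contents shift opposite to the motion, vacated cells become UNKNOWN, and anything pushed past the boundary is lost.

```cpp
// Scroll the robot-centred grid by (dx, dy) whole cells.
#include <vector>

void scrollGrid(std::vector<Cell>& grid, int width, int height,
                int dx, int dy) {
  std::vector<Cell> shifted(grid.size(), Cell::UNKNOWN);
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      int sx = x + dx, sy = y + dy;            // source cell for (x, y)
      if (sx >= 0 && sx < width && sy >= 0 && sy < height)
        shifted[y * width + x] = grid[sy * width + sx];
      // cells without a valid source stay UNKNOWN (information lost)
    }
  }
  grid.swap(shifted);
}
```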

The sector map is the next level of the obstacle detection system. It is a local map built from the grid map and contains information about the immediate surroundings of the robot.

Two types of maps are used, Cartesian and polar; in total, 18 maps of the two types are maintained. Each map contains 10 sectors, and each sector stores the minimum free range. The purpose of the sectors is to know how much manoeuvring room the robot has, so that it does not collide with any obstacle.

Sector maps are filled from the cells of the grid map. In particular, for each OBSTACLE and EXPECTED OBSTACLE cell, the distance from each sector origin is computed, and if that distance is smaller than the range currently stored in the sector, the sector range is set to the cell distance. To speed up the processing, only the cells lying within the FOV of the sensors are considered.
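
For the polar case, the fill step can be sketched as follows (the sector count comes from the text, everything else is assumed): each obstacle cell is converted to distance and bearing from the sector-map origin, and the matching sector keeps the minimum free range seen so far.

```cpp
// Polar sector map: each sector stores the minimum free range.
#include <algorithm>
#include <array>
#include <cmath>

constexpr int kSectors = 10;
constexpr double kMaxRange = 10.0;             // assumed maximum range, m
constexpr double kPi = 3.141592653589793;

struct PolarSectorMap {
  std::array<double, kSectors> min_range;
  PolarSectorMap() { min_range.fill(kMaxRange); }

  // Called for every OBSTACLE / EXPECTED OBSTACLE cell at (x, y)
  // relative to the sector-map origin on the vehicle.
  void addObstacleCell(double x, double y) {
    double dist = std::hypot(x, y);
    double angle = std::atan2(y, x) + kPi;     // map to [0, 2*pi]
    int sector = std::min(kSectors - 1,
                          static_cast<int>(angle / (2.0 * kPi / kSectors)));
    min_range[sector] = std::min(min_range[sector], dist);
  }
};
```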

The sector maps then feed the behaviour-based collision avoidance network [18], which, based on this information and on the motion commands from the operator, decides which movement to perform.

Figures 8–10 depict the different stages of the mapping process, evolving from a point cloud to a grid map and finally to a sector map.

Figure 8.

2D view of 3D point cloud (Source: ICARUS).

Figure 9.

Grid map (Source: ICARUS).

Figure 10.

Sector map (Source: ICARUS).

On the SUGV, a further mapping is performed based on the dense 3D point cloud gathered by the Kinect. The same process is applied as for the LRFs, and the behaviour-based collision avoidance system is in charge of fusing the information from both sector maps (Kinect and LRFs).

6.2. Path planning

The navigation system, further discussed in [19], is the same on both robots, except for some parameters that are strictly platform-dependent. Since both robots can turn in place, a simple point-approach navigator was preferred over more complex trajectory tracking.

With the point-approach technique, the user provides a sequence of geographical points. As soon as the robot receives the sequence, it turns in place towards the first point in the list and begins moving in a straight line to reach it. When the point is reached, the robot proceeds to the second point in the list, and so on.

The trajectory between two consecutive points is subordinate to the collision avoidance system: the robot tries to move in a straight line, but if there are obstacles on its path, it performs manoeuvres to avoid them and re-plans a new trajectory to the next point.
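
A sketch of such a navigator, with tolerances and speeds as assumed placeholders: it turns in place until roughly facing the active waypoint, then drives straight, while the collision avoidance layer may override the motion at any time.

```cpp
// Point-approach navigation: turn in place, then drive a straight line.
#include <cmath>
#include <deque>

struct Pose { double x, y, heading; };
struct MotionCmd { double linear, angular; };

class PointApproachNavigator {
 public:
  std::deque<Pose> waypoints;                  // sequence given by the user

  MotionCmd step(const Pose& robot) {
    if (waypoints.empty()) return {0.0, 0.0};
    const Pose& wp = waypoints.front();
    double dx = wp.x - robot.x, dy = wp.y - robot.y;
    if (std::hypot(dx, dy) < kReachedDist) {   // waypoint reached: take next
      waypoints.pop_front();
      return {0.0, 0.0};
    }
    double bearing = std::atan2(dy, dx) - robot.heading;
    bearing = std::atan2(std::sin(bearing), std::cos(bearing));  // wrap
    if (std::fabs(bearing) > kHeadingTol)      // first turn in place
      return {0.0, bearing > 0 ? kTurnSpeed : -kTurnSpeed};
    return {kDriveSpeed, 0.0};                 // then drive straight
  }

 private:
  static constexpr double kReachedDist = 0.5;  // assumed tolerances/speeds
  static constexpr double kHeadingTol = 0.1;
  static constexpr double kTurnSpeed = 0.4;
  static constexpr double kDriveSpeed = 0.6;
};
```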

Correct navigation obviously requires a precise localization system, in both indoor and outdoor environments.


7. Control of the manipulator arm

The control loop of the manipulator is illustrated in Figure 11. The lowest level of the controller is a digital signal processor (DSP), responsible for acquiring the sensor data and controlling the position of the joints. It also uses the CAN bus interface to send the sensory data to the onboard computer, where higher-level calculations such as inverse and forward kinematics are performed. The electronics are designed to be very flexible, so that many different features and I/O interfaces can be added to the system later on. For instance, touch sensors were later added to the gripper to provide feedback on the contact force, and a laser pointer with a circular pattern was added to give the user a better sense of distance. Additional lights and analogue sensors can also be connected thanks to the versatile design of the electronics.

Figure 11.

The control loop of the manipulator arm (Source: ICARUS).

The onboard computer calculates the kinematics and assists the user in control. Based on the experiments, four different control modes were implemented in the user interface. In workspace mode, the user controls the velocity of the manipulator with respect to the robot frame; this mode is useful when the user has direct eye contact with the robot and does not rely on an onboard camera. Configuration mode lets the user control each joint individually; it is intended for testing and calibration of the manipulator and is not recommended for manipulation tasks. In camera mode, the user controls the speed of the tool centre point (TCP) with respect to the frame of the camera placed on the end-effector. The last mode is pose mode, where instead of a velocity the user directly commands the desired workspace position and orientation (Pd) of the TCP. This mode is particularly useful when a haptic joystick or an exoskeleton is used to control the arm, as the position and orientation of the user's hand are mapped directly to the pose of the manipulator. This makes manipulation much easier and faster; indeed, complicated tasks are almost impossible to perform with an articulated arm without haptic devices.
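
As an illustration of the camera mode, the sketch below (names and types are ours) rotates the user's velocity command from the end-effector camera frame into the robot frame before it is handed to the inverse kinematics; the rotation matrix would come from the forward kinematics of the arm.

```cpp
// Camera mode: rotate a commanded TCP velocity from the camera frame
// into the robot frame. R_cam is the camera orientation in the robot
// frame, obtained from forward kinematics.
#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

Vec3 cameraToRobotFrame(const Mat3& R_cam, const Vec3& v_cam) {
  Vec3 v_robot{};
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j)
      v_robot[i] += R_cam[i][j] * v_cam[j];   // v_robot = R_cam * v_cam
  return v_robot;
}
```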


8. Experimental validation

Operational validation trials were performed in Marche-en-Famenne, Belgium, in order to assess the capabilities of the SUGV and the LUGV against the user requirements. On the LUGV, the 3D vision sensor [20] was integrated into the robotic control framework FINROC. After mounting the sensor on the vehicle, a calibration of the sensor (see Figure 12) was carried out to align the sensor coordinate frame with the robot coordinate frame. Several tests were run to ensure that obstacle detection, traversability information, and visual feedback were correctly set up.

Figure 12.

Calibration of the 3D vision sensor. The sensor can be seen mounted on the sensor tower of the LUGV (Source: ICARUS).

Subsequently, the autonomous navigation was evaluated: the vehicle was given a list of GPS waypoints and successfully reached them without any direct control. The manipulation capabilities of the LUGV were also tested: first, the box with the SUGV inside was lifted to a height of more than 2 m and the SUGV was deployed on the roof of a test building (see Figure 13).

Figure 13.

The LUGV uses its arm to lift the transport box containing the SUGV (Source: ICARUS).

Before entering the building, a door had to be opened; this was accomplished using the feedback-controlled manipulator arm of the SUGV. The vehicle was then driven remotely inside the damaged building, using its own lights and a camera, to look for victims in all rooms, passing through a hole in the wall and descending stairs (see Figure 14).

Figure 14.

Due to its small size, the SUGV is able to drive through narrow openings (Source: ICARUS).

All the available tools were successfully mounted on the LUGV: the box for the SUGV, the gripper, and the jackhammer (see Figure 15). Some rocks were then grasped with the gripper, and a concrete piece was broken to assess the capabilities of the jackhammer (see Figure 16).

Figure 15.

The jackhammer is being attached to the arm of the LUGV (Source: ICARUS).

Figure 16.

The LUGV uses its jackhammer to breach through a concrete plate (Source: ICARUS).

Afterward, the small ground vehicle was deployed together with an indoor multi-copter to explore another building.

A posteriori, the human victim detection sensor was integrated on the SUGV (Figure 17). It was mounted next to the arm, as this position best satisfies the requirements regarding the manipulator's freedom of motion, the field of view of the human detection sensor, and the avoidance of collision-prone configurations.

Figure 17.

The SUGV with the human detection sensor mounted in front of the manipulator arm (Source: ICARUS).


9. Main issues and considerations

Various experiments conducted for validation purposes have shown that both UGVs can be used successfully for search and rescue tasks. The formal validation showed that both the SUGV and the LUGV fulfil most of the requirements set out by the end users. However, some requirements are currently not fulfilled, and it is important to learn from this. Aspects where more research is required are:

  • Balancing system complexity and robustness in large systems, as both systems sometimes had robustness issues.

  • Precise manipulation with powerful hydraulic arms, as it was not possible to achieve the desired precision with the hydraulic actuators.

  • Vibration isolation of sensors, or rendering sensors less prone to noise induced by vibrations.

  • Stair climbing of heavily packed robot systems: the SUGV is capable of climbing stairs, but with the full sensor load this becomes very difficult on steep stair designs. A more suitable platform for this task would be equipped with orientable flippers, as done in the EU-funded projects NIFTi and TRADR [21, 22].

  • Long-distance, non-line-of-sight communication for operations inside buildings, as we still lost communication with the SUGV from time to time.


10. Conclusions

In this chapter, the ground robots employed in the ICARUS project were described. The LUGV is a heavy-duty supporting machine used to clear debris and open a safe passage for the SAR operators. The SUGV is a versatile robot used to explore damaged buildings and search for victims within them. Both platforms have been described in detail, and their advantages and drawbacks for SAR missions have been addressed. Furthermore, the SUGV has been compared with similar robots used in other projects.

Acknowledgments

The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement number 285417.

References

  1. Armbrust C, De Cubber G, Berns K. ICARUS control systems for search and rescue robots. In: Field and Assistive Robotics—Advances in Systems and Algorithms. Aachen: Shaker Verlag; 2014. pp. 1-16. ISBN: 978-3-8440-2753-2
  2. Kleiner A. Mapping and exploration for search and rescue with humans and mobile robots [dissertation]. Freiburg im Breisgau, Germany: Albert-Ludwigs-Universität Freiburg; 2007
  3. Michael N, Shen S, Mohta K, Mulgaonkar Y, Kumar V, Nagatani K, Okada Y, Kiribayashi S, Otake K, Yoshida K, Ohno K, Takeuchi E, Tadokoro S. Collaborative mapping of an earthquake-damaged building via ground and aerial robots. Journal of Field Robotics. 2012;29(5):832-841
  4. Murphy RR, Kravitz J, Stover SL, Shoureshi R. Mobile robots in mine rescue and recovery. IEEE Robotics and Automation Magazine. 2009;16(2):91-103
  5. Baudoin Y, Doroftei D, De Cubber G, Berrabah SA, Pinzon C, Warlet F, Gancet J, Motard E, Ilzkovitz M, Nalpantidis L, Gasteratos A. View-Finder: Robotics assistance to fire-fighting services and crisis management. In: IEEE International Workshop on Safety, Security & Rescue Robotics (SSRR 2009); Denver, CO; 2009. pp. 1-6. DOI: 10.1109/SSRR.2009.5424172
  6. Doroftei D, De Cubber G, Colon E, Baudoin Y. Behavior based control for an outdoor crisis management robot. In: Proceedings of the IARP International Workshop on Robotics for Risky Interventions and Environmental Surveillance; 2009. pp. 12-14
  7. De Cubber G, Doroftei D, Sahli H, Baudoin Y. Outdoor terrain traversability analysis for robot navigation using a time-of-flight camera. In: RGB-D Workshop on 3D Perception in Robotics; 2011
  8. Metalliance [Internet]. Available from: http://www.metalliance-tsi.com/en/ [Accessed: November 2016]
  9. De Cubber G, Doroftei D. Multimodal terrain analysis for an all-terrain crisis management robot. In: IARP HUDEM 2011. IARP; 2011
  10. Balta H, De Cubber G, Doroftei D, Baudoin Y, Sahli H. Terrain traversability analysis for off-road robots using time-of-flight 3D sensing. In: 7th IARP International Workshop on Robotics for Risky Environment—Extreme Robotics; Saint Petersburg, Russia. IARP; 2013
  11. Allen Vanguard. Remotely Operated Vehicle—Digital Vanguard. Product brochure; 2011
  12. Harrer A, Schwarz B, Gansch R, Reininger P, Detz H, Zederbauer T, Andrews AM, Schrenk W, Strasser G. Plasmonic lens enhanced mid-infrared quantum cascade detector. Applied Physics Letters. 2014;105:171112. DOI: 10.1063/1.4901043
  13. Reichardt M, Föhst T, Berns K. On software quality-motivated design of a real-time framework for complex robot control systems. In: 7th International Workshop on Software Quality and Maintainability (SQM), in conjunction with the 17th European Conference on Software Maintenance and Reengineering (CSMR); March 2013; Genova, Italy; 2013
  14. Govindaraj S, Chintamani K, Gancet J, Letier P, Van Lierde B, Nevatia Y, De Cubber G, Serrano D, Bedkowski J, Armbrust C, Sanchez J, Coelho A, Palomares ME, Orbe I. The ICARUS project—command, control and intelligence (C2I). In: IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR); October 2013; Sweden. IEEE; 2013
  15. Serrano D. Introduction to JAUS for Unmanned Systems Interoperability. NATO Science & Technology Organization. STO-EN-SCI-271 Report; 2015
  16. Freese M, Singh S, Ozaki F, Matsuhira N. Virtual robot experimentation platform V-REP: A versatile 3D robot simulator. Simulation, Modeling, and Programming for Autonomous Robots. 2010;6472:51-62. DOI: 10.1007/978-3-642-17319-6_8
  17. Dargazany A, Berns K. Terrain traversability analysis using organized point cloud, superpixel surface normals-based segmentation and PCA-based classification. In: Workshop on Field and Assistive Robotics (WFAR); Lahore, Pakistan; 2014
  18. Kiekbusch L, Armbrust C, Berns K. Formal verification of behaviour networks including sensor failures. Robotics and Autonomous Systems. 2015;74:331-339
  19. Armbrust C. Design and verification of behaviour-based systems realising task sequences [dissertation]. München, Germany: Verlag Dr. Hut; Robotics Research Lab, Department of Computer Science, University of Kaiserslautern; 2015. p. 236. Available from: https://agrosy.cs.uni-kl.de/fileadmin/Literatur/Armbrust15a.pdf
  20. Vislab. 3DV-A. Product brochure; 2012
  21. Kruijff GJM, Kruijff-Korbayová I, Keshavdas S, Larochelle B, Janíček M, Colas F, Liu M, Pomerleau F, Siegwart R, Neerincx MA, Looije R, Smets NJJM, Mioch T, van Diggelen J, Pirri F, Gianni M, Ferri F, Menna M, Worst R, Linder T, Tretyakov V, Surmann H, Svoboda T, Reinštein M, Zimmermann K, Petříček T, Hlaváč V. Designing, developing, and deploying systems to support human-robot teams in disaster response. Advanced Robotics. 2014;28(23):1547-1570. DOI: 10.1080/01691864.2014.985335
  22. Kruijff-Korbayová I, Freda L, Gianni M, Ntouskos V, Hlaváč V, Kubelka V, Zimmermann E, Surmann H, Dulic K, Rottner W, Gissi E. Deployment of ground and aerial robots in earthquake-struck Amatrice in Italy (brief report). In: IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR); October 2016; Lausanne, Switzerland. IEEE; 2016
