
3D Terrain Sensing System Using Laser Range Finder with Arm-Type Movable Unit

Written By

Toyomi Fujita and Yuya Kondo

Submitted: 07 October 2010 Published: 09 June 2011

DOI: 10.5772/16257

From the Edited Volume

Robot Arms

Edited by Satoru Goto


1. Introduction

Sensing the 3D configuration of terrain is a very important function for a tracked vehicle robot, both to give operators information that is as precise as possible and to let the robot move through its working field efficiently. A Laser Range Finder (LRF) is widely used for 3D sensing because it can scan a wide area quickly and 3D information can be obtained from it easily. Several LRF-based 3D sensing systems have been presented in earlier studies (Hashimoto et al., 2008) (Ueda et al., 2006) (Ohno & Tadokoro, 2005). In those measurement systems, multiple LRF sensors are installed facing in different directions (Poppinga et al., 2008), or a single LRF is mounted on a rotatable unit (Nuchter et al., 2005) (Nemoto et al., 2007). Such systems still have the following problems:

  • In the former case, data acquisition becomes complex because multiple LRFs are used,

  • In both cases, it is difficult to sense more complex terrain such as a valley, a deep hole, or the inside of a gap, because occlusions occur for such terrain during sensing.

In order to solve these problems, we propose a novel sensing system that uses an arm-type sensor movable unit, an application of a robot arm. In this sensing system, an LRF is installed at the end of the arm-type movable unit. The LRF can change its position and orientation within the movable range of the arm unit and can face the terrain at a right angle for a wide variety of configurations. The system is therefore capable of avoiding occlusions for such complex terrain and of sensing it more accurately. A previous study (Sheh et al., 2007) showed a similar sensing system in which a range imager was used to construct a terrain model of stepfields; the range imager was, however, fixed at the end of a pole. Our proposed system is more flexible because the sensor can be actuated by the arm-type movable unit.

We have designed and developed a prototype system of the arm-type sensor movable unit in addition to a tracked vehicle robot. In this chapter, Section 2 gives an overview of the developed tracked vehicle and sensing system and shows how the 3D sensing position is calculated. Section 3 explains the two major sensing methods of this system. Section 4 presents fundamental experiments that were carried out to confirm the sensing ability of the system. Section 5 shows an example of 3D mapping of a wide area by this system. Section 6 discusses these results.


2. System overview

The authors have designed and developed a prototype system of the arm-type movable unit. The unit is mounted on a tracked vehicle robot with two crawlers that we have also developed. Fig. 1 shows an overview of the system. The following sections describe each part.

2.1. Tracked vehicle

We have developed a tracked vehicle robot for rescue activities. Fig. 1 shows an overview of the robot system. The robot has two crawlers, one on each side. Each crawler consists of rubber blocks, a chain, and three sprocket wheels. The rubber blocks are fixed on the attachment holes of the chain. On each side, one of the sprocket wheels is driven by a DC motor to actuate the crawler. The size of the robot is 400[mm](length) × 330[mm](width) × 230[mm](height) when the sensor is lowered onto the upper surface of the robot.

Figure 1.

System overview

2.2. Arm-type sensor movable unit

We have designed the arm-type sensor movable unit and developed a prototype. The unit consists of two links, each 160[mm] long. The links are connected by two servo motors acting as a single joint so that the sensor can easily be kept in a horizontal orientation when the arm is folded. Two more joints are attached at the ends of the linkage: one connects to the sensor at the tip and the other is mounted on the upper surface of the robot. By rotating these joints, the robot can lift the sensor up to a height of 340[mm] and change its position and orientation.

2.3. Sensors

A HOKUYO URG-04LX (Hokuyo Automatic Co., Ltd.) is used as the Laser Range Finder (LRF) in this system. This sensor scans a 240-degree area and obtains distance data every 0.36 degrees on a 2D plane. Because the sensor is mounted at the end of the arm-type movable unit, the robot is able to change its position and orientation.

In addition, we have installed an acceleration sensor that measures along three orthogonal axes to detect the tilt angle of the robot body and to control the orientation of the LRF so that it stays level regardless of that tilt. This sensor enables the arm-type movable unit to change the height of the LRF while keeping its orientation constant.

2.4. Control system

The control system of this robot consists of two embedded microcomputers, a Renesas SH-2/7045F and an H8/3052F, which control the main robot and the arm-type sensor movable unit respectively. A Windows XP host PC manages the control of both units as well as the scanned data of the sensor. The host PC sends movement commands to the individual embedded microcomputers for the robot and the arm-type movable unit, and sends data acquisition requests to the sensor, which communicates directly with the host PC. All of these communications are made over wireless serial links using Bluetooth-serial adapters (SENA Parani-SD100).
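To make the dataflow concrete, the host side of this architecture can be sketched as follows. This is a minimal illustration assuming Python with pySerial; the port names and the ASCII command strings are hypothetical, since the chapter does not specify the protocol of the embedded boards, and the SCIP command shown for the URG is only indicative.

```python
# Host-PC side sketch of the control architecture (hypothetical protocol).
import serial

robot = serial.Serial("/dev/rfcomm0", 115200, timeout=1.0)  # SH-2/7045F: crawlers
arm   = serial.Serial("/dev/rfcomm1", 115200, timeout=1.0)  # H8/3052F: arm unit
lrf   = serial.Serial("/dev/rfcomm2", 115200, timeout=1.0)  # URG-04LX

def send(port, command):
    """Send one command line over the Bluetooth-serial link, return the reply."""
    port.write((command + "\r\n").encode("ascii"))
    return port.readline().decode("ascii").strip()

# Hypothetical sequence for one sensing step:
send(robot, "MOVE 400")             # advance 400 mm, then stop
send(arm, "JOINTS 90 -90 -90 90")   # fold the arm to the base posture
scan = send(lrf, "G00076801")       # SCIP distance request (indicative only)
```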

2.5. Calculation of 3D sensing position

In this system, the robot obtains 3D sensing positions from the distance data of the LRF. We assign coordinate systems to each joint of the arm-type unit and to the LRF as shown in Fig. 2. When the distance sensed by the LRF is $d_s$ at scan angle $\theta_s$, the 3D measurement position vector $X$ in the base coordinate system is calculated by

$$\begin{pmatrix} X \\ 1 \end{pmatrix} = {}^{0}P_{1}\,{}^{1}P_{2}\,{}^{2}P_{3}\,{}^{3}P_{4}\,{}^{4}P_{5} \begin{pmatrix} X_s \\ 1 \end{pmatrix} \tag{1}$$

where $X_s$ is the position vector of the sensed point in the LRF coordinate system:

$$X_s = d_s\,(\cos\theta_s,\ \sin\theta_s,\ 0)^{\mathsf T} \tag{2}$$

and ${}^{i}P_{i+1}$ ($i = 0, \ldots, 4$) is the homogeneous matrix that represents the transformation between the coordinate systems of joint-($i$) and joint-($i{+}1$):

$${}^{i}P_{i+1} = \begin{pmatrix} {}^{i}R_{i+1} & {}^{i}T_{i+1} \\ 0_3^{\mathsf T} & 1 \end{pmatrix} \tag{3}$$

where ${}^{i}R_{i+1}$ is the rotation matrix for the rotation angle $\theta_{i+1}$ around the $y_i$ axis,

$${}^{i}R_{i+1} = \begin{pmatrix} \cos\theta_{i+1} & 0 & \sin\theta_{i+1} \\ 0 & 1 & 0 \\ -\sin\theta_{i+1} & 0 & \cos\theta_{i+1} \end{pmatrix} \tag{4}$$

for $i = 0, \ldots, 4$, and ${}^{i}T_{i+1}$ is the translation vector along the link from joint-($i$) to joint-($i{+}1$), whose length is $\ell_i$:

$${}^{i}T_{i+1} = (0,\ 0,\ \ell_i)^{\mathsf T} \tag{5}$$

for $i = 0, \ldots, 4$ ($\ell_0 = 0$). $0_3$ denotes a $3 \times 1$ zero vector.

The base position of the sensor is the position when the arm is folded: the joint angles θ1, θ2, θ3, and θ4 in Fig. 2 are 90, −90, −90, and 90 degrees respectively.
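For concreteness, Eqs. (1)-(5) can be evaluated in a few lines of Python. The following is a minimal sketch assuming NumPy; the function names are ours, and every transform in the chain is treated as a pitch rotation about the y axis as in Eq. (4).

```python
import numpy as np

def joint_transform(theta_next, l_i):
    """iP(i+1) of Eq. (3): rotation by theta_{i+1} about the y axis
    (Eq. 4) combined with a translation of l_i along the link (Eq. 5)."""
    c, s = np.cos(theta_next), np.sin(theta_next)
    return np.array([[  c, 0.0,   s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [ -s, 0.0,   c, l_i],
                     [0.0, 0.0, 0.0, 1.0]])

def sensing_position(d_s, theta_s, thetas, lengths):
    """3D position X of a scanned point in the base frame, Eq. (1).

    d_s, theta_s : LRF distance [mm] and scan angle [rad]
    thetas       : joint angles (theta_1, ..., theta_5) [rad]
    lengths      : link lengths (l_0, ..., l_4) [mm], with l_0 = 0
    """
    # Eq. (2): sensed point in the LRF frame, in homogeneous form
    X_s = np.array([d_s * np.cos(theta_s), d_s * np.sin(theta_s), 0.0, 1.0])
    # Chain 0P1 ... 4P5 in order
    P = np.eye(4)
    for theta, l in zip(thetas, lengths):
        P = P @ joint_transform(theta, l)
    return (P @ X_s)[:3]
```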

Figure 2.

Coordinate systems


3. Sensing method

The mechanism of this system enables the LRF to change its position and orientation so that it faces the terrain at a right angle for a wide variety of configurations. For example, the sensing system is able to sense a deep bottom area without occlusions, as shown in Fig. 3. Because the mechanism avoids occlusions even for complex terrain, the robot can measure 3D configurations such as a valley, a gap, or upward or downward stairs more accurately than conventional LRF-based 3D sensing systems. In addition, the robot can sense more safely with this method because it does not have to stand close to the border. This is important when the robot needs to work in an unknown site such as a disaster area.

On the other hand, the arm-type movable unit can also change the height of the LRF while keeping its orientation flat. In this mode, 2D shape information in a horizontal plane is detected at each height, at even intervals. Consequently, the 3D shape of the surrounding terrain can be obtained efficiently by moving the LRF up vertically while keeping its orientation flat. We have installed acceleration sensors to detect the tilt angle of the robot so that it can perform this kind of sensing even on an uneven surface, as shown in Fig. 4; a sketch of this tilt compensation follows below.
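The following is a minimal sketch of the tilt compensation, assuming NumPy and a 3-axis accelerometer; the function names and conventions are ours, not the chapter's.

```python
import numpy as np

def body_tilt(ax, ay, az):
    """Estimate body pitch and roll [rad] from the measured gravity
    direction (valid while the robot is stationary during a scan)."""
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    roll = np.arctan2(ay, az)
    return pitch, roll

def leveled_theta4(theta4_cmd, ax, ay, az):
    """Sensor-joint command that keeps the LRF scan plane horizontal:
    the arm joints in Fig. 2 rotate about the pitch axis, so the body
    pitch is subtracted from the commanded sensor angle."""
    pitch, _ = body_tilt(ax, ay, az)
    return theta4_cmd - pitch
```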

Figure 3.

Sensing of deep bottom area

Figure 4.

Sensing by moving the LRF up vertically

We can switch between these two sensing styles, shown in Fig. 3 and Fig. 4, depending on the kind of information the robot needs to obtain.


4. Experiments

Several fundamental experiments were carried out to confirm the basic sensing ability of the system for complex terrains: upward stairs, downward stairs, a valley configuration, and a side hole configuration under the robot. The results show that the proposed system is useful for sensing and mapping such complex terrain.

4.1. Upward stairs

We measured upward stairs as an experiment in a basic environment. Fig. 5 shows an overview of the experimental environment and Fig. 6 shows its schematic diagram. The stairs are located 1100[mm] ahead of the robot, and each stair is 80[mm] in height and depth. The robot stayed at one position while the LRF was lifted vertically by the arm-type unit from the upper surface of the robot to the height of 340[mm] at equal intervals of 50[mm], and one scan was performed at each height. The robot was tilted by 10 degrees to confirm the usefulness of the acceleration sensor: the robot detected its orientation with this sensor and controlled the height of the LRF accordingly. The sketch below shows how such a sweep is assembled into a point cloud.
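A minimal sketch, assuming NumPy and that `scans[h]` holds the (d_s, theta_s) pairs of the level scan taken at LRF height h [mm]; the names are ours.

```python
import numpy as np

def cloud_from_vertical_sweep(scans):
    """Accumulate level scans taken at several heights into one cloud."""
    points = []
    for h, scan in scans.items():
        for d_s, theta_s in scan:
            # level scan plane: x forward, y lateral, z equals the LRF height
            points.append((d_s * np.cos(theta_s), d_s * np.sin(theta_s), h))
    return np.array(points)
```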

Fig. 7 shows the measurement result; the configuration obtained by this sensing system is almost the same as the actual environment.

Figure 5.

Overview of an experiment for measurement of upward stairs

Figure 6.

Schematic diagram of experimental environment for measurement of upward stairs

4.2. Downward stairs

Fig. 8 shows an overview of the experimental setup and Fig. 9 shows its schematic diagram with reference points for the 3D measurement of downward stairs. We picked several corner points as reference points for measurement error analysis. Each stair is 80[mm] in height and depth.

Figure 7.

Measured 3D shape of upward stairs

In this experiment, the robot stayed at one position, 330[mm] away from the top stair, and moved the arm unit so that the sensor was located over the downward stairs. The sensor angle was changed by rotating the angle θ4, shown in Fig. 2, while the position of the end of the arm unit was fixed by keeping the angles θ1, θ2, and θ3 constant. The rotation angle θ4 was controlled remotely from 0 to 60 degrees in steps of 1.8 degrees, and one LRF scan was performed at each sensor angle. The sensing data were then accumulated to make the 3D map, as in the sketch below.
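The accumulation reduces to evaluating Eq. (1) once per scan point while θ4 sweeps. A minimal sketch, reusing the sensing_position function from the sketch in Section 2.5 (the joint indexing is our assumption):

```python
import numpy as np

def cloud_from_tilt_sweep(scans, thetas_fixed, lengths):
    """Accumulate scans taken while only theta_4 rotates (Fig. 2).
    scans: list of (theta4 [rad], [(d_s, theta_s), ...]) per sensor angle."""
    points = []
    for theta4, scan in scans:
        thetas = list(thetas_fixed)
        thetas[3] = theta4  # replace theta_4; theta_1..theta_3, theta_5 stay fixed
        for d_s, theta_s in scan:
            points.append(sensing_position(d_s, theta_s, thetas, lengths))
    return np.array(points)
```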

Figure 8.

Overview of an experiment for measurement of downward stairs

Fig. 10 shows the measurement result. The configuration is almost the same as the actual environment. The measured positions of the reference points are also denoted in the figure and show that accurate positions can be sensed by this system. Table 1 lists the actual and measured distances with error ratio values at the reference points. This result also confirms valid sensing by this system, because the error ratios of the distances were within 5.8% for all reference points.

Figure 9.

Schematic diagram of experimental environment for downward stairs with reference points

Figure 10.

Measured 3D shape of downward stairs with measurement position values for reference points (unit:[mm])

point   actual distance [mm]   measured distance [mm]   error [mm]   error ratio [%]
a 851.7 889.7 38.0 4.5
b 851.7 863.1 11.4 1.3
c 800.4 846.6 46.3 5.8
d 800.4 836.6 36.2 4.5
e 770.8 801.2 30.3 3.9
f 770.8 800.3 29.5 3.8
g 722.6 761.4 38.7 5.4
h 722.6 753.3 30.7 4.2
i 699.0 722.5 23.5 3.4
j 699.0 711.6 12.6 1.8
k 655.3 682.4 27.1 4.1
l 655.3 682.2 27.0 4.1
m 637.9 658.1 20.3 3.2
n 637.9 658.9 21.1 3.3

Table 1.

Measured distances and error ratios on reference points for downward stairs

4.3. Valley

A valley configuration was set up as the experimental environment, as shown in Fig. 11. Fig. 12 shows its schematic diagram. The valley was 610[mm] deep and 320[mm] long. We placed reference points at each corner of the configuration to estimate the actual error of the measured positions.

Figure 11.

Overview of an experiment for measurement of a valley configuration (left: front view, right: side view)

In the same way as in the previous experiment, the robot stayed at one position, 250[mm] away from the border, and the sensor was located over the valley by the arm unit. Only the sensor angle was changed; the other joint angles of the arm were kept fixed to hold the sensor position. The rotation angle of the sensor, θ4, was varied from 0 to 90 degrees in steps of 1.8 degrees, and one scan was performed at each sensor angle.

Fig. 13 shows the measurement result. We can see a configuration very similar to the actual valley. The measured positions of the reference points are also denoted in the figure.

Figure 12.

Schematic diagram of experimental environment for a valley configuration with reference points

Figure 13.

Measured valley configuration with measurement position values for reference points (unit:[mm])

The position values show that accurate positions can be sensed by this sensing system. Table 2 lists the actual and measured distances with error ratio values at the reference points. Although the error ratio for point e was higher, the values are less than about 5% for the other points.

point   actual distance [mm]   measured distance [mm]   error [mm]   error ratio [%]
a 645.8 625.8 20.0 3.1
b 645.8 618.1 27.7 4.3
c 948.2 944.3 3.8 0.4
d 948.2 904.4 43.8 4.6
e 797.9 737.6 60.3 7.6
f 393.3 373.1 20.2 5.1

Table 2.

Measured distances and error ratios on reference points for a valley configuration

4.4. Side hole under robot

We carried out a measurement experiment for a side hole configuration under the robot. Fig. 14 shows an overview of the experiment and Fig. 15 shows a schematic diagram of its environment. The dimensions of the hole were 880[mm](width) × 400[mm](height) × 600[mm](depth). Eight reference points were placed at the corners of the hole to estimate the actual errors.

Figure 14.

Overview of an experiment for measurement of a side hole configuration under the robot

The robot stayed at one position, as in the previous experiments, and the sensor was located in front of the hole by the arm unit. The rotation angles of all joints except the last one were fixed to keep the sensor position, and only the sensor angle, θ4, was varied, from 0 to 70 degrees in steps of 1.8 degrees, with one scan performed at each sensor angle. Fig. 16 shows the measurement result, which again gives almost the same configuration as the actual environment. The measured position values of the reference points are also denoted in the figure. Table 3 lists the actual and measured distances with error ratio values at the eight reference points. The error ratios demonstrate accurate sensing by this system: the maximum was 4.4% and the average was 1.9% over all points.

Figure 15.

Schematic diagram of experimental environment for a side hole configuration under the robot with reference points

Figure 16.

Measured configuration of a side hole under the robot with measurement position values for reference points (unit:[mm])

point   actual distance [mm]   measured distance [mm]   error [mm]   error ratio [%]
a 677.7 648.2 29.5 4.4
b 677.7 686.1 8.4 1.2
c 792.0 795.8 3.8 0.5
d 792.0 775.8 16.2 2.0
e 476.8 461.6 15.2 3.2
f 476.8 463.8 13.0 2.7
g 628.7 631.9 3.2 0.5
h 628.7 634.4 5.7 0.9

Table 3.

Measured distances and error ratios on reference points for a side hole configuration under the robot


5. 3D mapping

A basic experiment of 3D mapping of a wide area was carried out with this sensing system. In the experiment, the robot moved along a flat corridor, shown in Fig. 17, advancing 40[cm] at a time and performing 3D sensing at each location.

Figure 17.

Experimental environment for 3D mapping

At each sensing location, the robot obtained 3D data by moving the LRF vertically from the upper surface of the robot to the height of 340[mm], scanning every 68[mm]. To build the 3D map, the sensing data from all locations were combined using the odometry information of the robot. We placed several additional obstacles in the environment to estimate how well this system detects their shapes and positions; the obstacle areas are labeled in Fig. 17.
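Combining the per-location data with odometry amounts to one planar rigid transform per sensing location. A minimal sketch, assuming NumPy and odometry poses of the form (x, y, yaw); the names are ours.

```python
import numpy as np

def to_world(points, pose):
    """Transform an Nx3 cloud from the robot frame at pose = (x, y, yaw)
    into the world frame (planar motion assumed)."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T + np.array([x, y, 0.0])

def build_map(clouds, poses):
    """Stack the clouds from every sensing location into one 3D map."""
    return np.vstack([to_world(c, p) for c, p in zip(clouds, poses)])
```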

Fig. 18 shows the result of the 3D mapping. The result shows valid 3D shapes of the environment, including the added obstacles at the appropriate heights. The obstacle areas are denoted by labeled ellipses, and the data from each sensing location are drawn in a different color. Note that this result clearly shows the detection of the top surface of each obstacle; this kind of sensing is made possible by the mechanism of this system.

Fig. 19 shows the upper view of the built map in the left panel and the actual map in the right panel. The obstacles were detected at almost the correct locations.

Figure 18.

Experimental result of 3D mapping


6. Discussions

We carried out fundamental experiments of sensing complex terrains: upward stairs, downward stairs, a valley configuration, and a side hole configuration under the robot. From Fig. 7, Fig. 10, Fig. 13, and Fig. 16, we can see that in each case a configuration almost the same as the actual one was measured. We therefore confirm that this sensing system has a basic 3D sensing ability and is useful for more complex environments.

The result of sensing upward stairs, shown in Fig. 7, indicates that lifting the LRF vertically at equal intervals is effective for obtaining the whole 3D shape of the sensing area, and we confirmed that the acceleration sensor was useful for this kind of sensing. This sensing method also avoids the problem of accumulation points that arises in the conventional method using a rotating mechanism.

The result of sensing downward stairs, shown in Fig. 10 and Table 1, suggests that this system can perform 3D mapping effectively even if the terrain has many

Figure 19.

Upper view of built map (left) and actual environment (right)

occlusions. The error ratio of the distance was about 5% at a maximum. This error may derive from mechanical errors of the unit in addition to the intrinsic detection errors of the sensor device itself, so it will be necessary to improve the mechanical stability of the unit. We nevertheless consider this error level acceptable for mapping intended for the movement or exploration of a tracked vehicle or rescue robot.

The 3D shape measured for the valley terrain, shown in Fig. 13, indicates another advantage of the proposed sensing method: the system can sense a deep bottom area without occlusions. In addition, the robot can do so safely because it does not have to stand close to the border. We consider that the error ratio of 7.6% for reference point e, shown in Table 2, occurred because the point was at an acute angle to the sensor; this error could be reduced by positioning the sensor so that it faces the point squarely. The sensing system can adapt to a wide variety of terrain because the arm-type sensor movable unit provides many possible positions and orientations of the sensor.

The result of the 3D measurement of a side hole under the robot demonstrates a further, strong advantage of the sensing system. Fig. 16 shows that the system can obtain 3D information for a shape that conventional sensing systems have not been able to measure. Moreover, the result shows accurate sensing with low error ratios, as given in Table 3. This sensing system should therefore be useful for 3D shape sensing, especially in rough or rubble-strewn environments such as disaster areas.

The experimental results for 3D mapping described in Section 5 indicate that this robot system is capable of building a 3D map of a wide area using odometry information. Fig. 18 shows almost the actual shapes and positions of the obstacles in the labeled areas. The sensing of the top surfaces of the obstacles also demonstrates one of the advantages of the proposed system, because such sensing would be difficult for conventional methods. Some errors, however, occurred in the areas far from the initial sensing location. We consider that these errors may come from odometry errors due to slip of the tracks during movement. More accurate mapping should be possible by correcting for this with external sensors and a more sophisticated calculation method such as ICP (Nuchter et al., 2005) (Besl & Mckay, 2002).
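As an illustration of the suggested refinement, one point-to-point ICP iteration can be sketched as follows. This is a textbook version of the algorithm (Besl & McKay), assuming NumPy and SciPy, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Align `source` (Nx3) to `target` (Mx3); returns (R, t, aligned)."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # 1. match every source point with its nearest target point
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. best rigid transform between the matched sets (Kabsch/SVD)
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T          # reflection-safe rotation
        t = mu_t - R @ mu_s
        # 3. apply the increment and accumulate the total transform
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```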


7. Conclusions

This chapter proposed a novel 3D sensing system that uses an arm-type sensor movable unit as an application of a robot arm. The system is able to obtain the 3D configuration of complex environments, such as a valley, for which it is difficult to obtain correct information by conventional methods. The experimental results showed that our method is also useful for safe 3D sensing in such complex environments. The approach is therefore well suited to gathering richer 3D information about the environment, not only with a Laser Range Finder but also with other sensors.

References

  1. Besl, P. J. & McKay, N. D. (1992). A method for registration of 3-D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, pp. 239-256
  2. Hashimoto, M.; Matsui, Y. & Takahashi, K. (2008). Moving-object tracking with in-vehicle multi-laser range sensors, Journal of Robotics and Mechatronics, Vol. 20, No. 3, pp. 367-377
  3. Hokuyo Automatic Co., Ltd. Available from http://www.hokuyo-aut.co.jp
  4. Iocchi, L.; Pellegrini, S. & Tipaldi, G. (2007). Building multi-level planar maps integrating LRF, stereo vision and IMU sensors, Proceedings of the IEEE International Workshop on Safety, Security and Rescue Robotics 2007
  5. Nemoto, Z.; Takemura, H. & Mizoguchi, H. (2007). Development of small-sized omni-directional laser range scanner and its application to 3D background difference, Proceedings of the 33rd Annual Conference of the IEEE Industrial Electronics Society (IECON 2007), pp. 2284-2289
  6. Nuchter, A.; Lingemann, K. & Hertzberg, J. (2005). Mapping of rescue environments with Kurt3D, Proceedings of the IEEE International Workshop on Safety, Security and Rescue Robotics 2005, pp. 158-163
  7. Ohno, K. & Tadokoro, S. (2005). Dense 3D map building based on LRF data and color image fusion, Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), pp. 2792-2797
  8. Poppinga, J.; Birk, A. & Pathak, K. (2008). Hough based terrain classification for realtime detection of drivable ground, Journal of Field Robotics, Vol. 25, No. 1-2, pp. 67-88
  9. Sheh, R.; Kadous, M.; Sammut, C. & Hengst, B. (2007). Extracting terrain features from range images for autonomous random stepfield traversal, Proceedings of the IEEE International Workshop on Safety, Security and Rescue Robotics 2007
  10. Ueda, T.; Kawata, H.; Tomizawa, T.; Ohya, A. & Yuta, S. (2006). Mobile SOKUIKI sensor system: accurate range data mapping system with sensor motion, Proceedings of the 2006 International Conference on Autonomous Robots and Agents, December 2006
