Distance Measurement for Indoor Robotic Collectives

Location monitoring is a common problem in many mobile robotic applications, covering domains such as industrial automation, manipulation in difficult areas, rescue operations, environment exploration and monitoring, smart environments and buildings, robotic home appliances, and space exploration and probing. A key aspect of localization is inter-robot distance measurement. In this chapter we consider the problem of autonomous, collaborative distance measurement in mobile robotic systems, under the following set of design and functional constraints: a. indoor operation, b. independence of fixed landmarks, c. robustness and accuracy, d. energy efficiency, e. low cost and complexity. This work significantly extends and updates the results previously published in (Micea et al., 2010). We first present and discuss some of the most relevant state-of-the-art techniques for robot distance estimation. Next, we introduce a framework for collaborative inter-robot distance measurement, along with a procedure for accurate robot alignment. The proposed alignment algorithm is based on evaluating and comparing the strength of ultrasonic signals at different angles, filtering the measured data, and ensuring good synchronization during the process. Further on, we present the CTOF (Combined Time-of-Flight) method for distance measurement, which brings significant improvements over the classical TOF technique, and we show how this new technique meets the design constraints specified above. Some of the most interesting test and evaluation results are then presented and discussed. The experimental data show how the distance estimation accuracy can be increased by applying the Kalman filter to repetitive measurements. The final remarks and the reference list conclude this chapter.

where P_t and P_r are the signal power at the emitter and at the receiver, respectively, d is the propagation distance, λ and f are the carrier wavelength and frequency, respectively, and c is the speed of light. The accuracy of such systems, though, is around 2-3 m. Another similar technique, based on modeling the signal power for the ZigBee (IEEE 802.15.4) protocol (ZigBee Standards Organization, 2007), is presented in (Grossmann, 2007). An indoor GPS system, presented in (Kim et al., 2006), consists of two receivers as fixed reference points and a transmitter which uses both ultrasonic and RF signals. The receivers estimate the distance to the transmitter based on the delay between the received RF signal and the ultrasound waves. The two resulting distances are then used to calculate the location of the transmitter through geometrical formulas. A Linear Kalman Filter is also used to minimize the errors and noise occurring in the measurements of the ultrasound signal. GPS systems usually calculate the distance between a receiver and multiple transmitters based on the difference in the time-of-flight of the received signals (the TOF method).
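The received-power model above can be inverted to estimate distance. The sketch below assumes the standard free-space (Friis) relation P_r = P_t * (c / (4πdf))^2, which matches the symbols defined in the text; the function name and the 2.4 GHz test values are illustrative choices, not taken from the chapter.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def friis_distance(p_t: float, p_r: float, f: float) -> float:
    """Invert the free-space model P_r = P_t * (c / (4*pi*d*f))**2
    to estimate the propagation distance d from the transmit and
    receive power (same units for both)."""
    return (C / (4 * math.pi * f)) * math.sqrt(p_t / p_r)

# Round-trip check: synthesize the received power at 10 m for a
# 2.4 GHz carrier, then recover the distance.
f = 2.4e9
d_true = 10.0
p_r = 1.0 * (C / (4 * math.pi * d_true * f)) ** 2
print(round(friis_distance(1.0, p_r, f), 6))  # → 10.0
```

In practice, as the text notes, multipath and attenuation indoors limit such RSS-based estimates to meter-level accuracy.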
Through the TOF method, more precise results can be obtained. However, TOF is influenced by the synchronization accuracy of the system, the environment temperature, and other factors which can introduce calculation errors. As a result, the measured values are frequently filtered. A common solution is the Linear Kalman Filter, as presented in (Kim et al., 2006; Ko et al., 2008; Welch & Bishop, 2006). Other approaches use Bayesian filters (Fox et al., 2003), which estimate the state of a probabilistic dynamic system from observations drowned in noise. Using statistical techniques, they operate in a deterministic manner and are suitable for systems with multiple sensors with different characteristics.
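For a static inter-robot distance, the Linear Kalman Filter mentioned above reduces to a scalar filter. The sketch below is a minimal illustration, assuming a constant-state model; the noise variances `r` and `q` are placeholder values, not taken from the chapter.

```python
import random

def kalman_filter(measurements, r=25.0, q=1e-4):
    """Scalar linear Kalman filter for a static distance:
    state model x_k = x_{k-1} (process noise variance q),
    observation z_k = x_k + v with Var(v) = r."""
    x, p = measurements[0], r   # initialize from the first sample
    estimates = [x]
    for z in measurements[1:]:
        p += q                  # predict: variance grows by process noise
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # correct with the innovation
        p *= (1 - k)            # update the error covariance
        estimates.append(x)
    return estimates

# Usage: smooth 10 noisy readings of a 1000 mm distance.
random.seed(1)
noisy = [1000.0 + random.gauss(0, 5) for _ in range(10)]
filtered = kalman_filter(noisy)
print(round(filtered[-1], 1))
```

Repeated measurements drive the estimate toward the true distance, which is exactly the effect exploited later in the experimental section.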
The Building Positioning System (Reynolds, 1999) determines the position of a mobile device by receiving radio signals from fixed devices. These are designed to transmit radio signals in a manner similar to the operation of the Cricket or GPS modules. Compared with GPS, this system uses a much lower frequency, so the radio waves propagate with relatively low attenuation. The system requires only four fixed transmission antennas, attached to four different corners of the building. The accuracy of such a system is about 5 cm. Image processing methods are also widely used, many of them relying on passive cameras. A microcontroller drives a motor to focus on a target image located at a certain distance from the sensor. Based on several parameters, such as the motor position or the lens properties, the distance to the target object can be determined. Some systems rely, for instance, on visual information from a 360-degree camera (Tamimi et al., 2006). Such systems must be trained before being used, by capturing representative images from the environment and associating them with the corresponding locations.

Example framework for collaborative inter-robot distance measurement
The proposed distance measurement method and inter-robot alignment algorithm have a common set of requirements for the target robotic system. Such a framework, which has been used to implement and test the proposed techniques, is the CORE-TX platform (Cioarga et al., 2006). CORE-TX (COllaborative Robotic Environment - the Timisoara eXperiment) is designed to provide theoretical and applicative support for the study of intelligent sensor networks and robotic collectives. Its architecture is structured on three main layers (see Fig. 1).

General architecture of the robotic elements
The WIT elements may have perception functions (intelligent sensors), operating functions (autonomous mini-robots), or both. They have been designed using a modular approach (Fig. 2), which specifies a motherboard (the Base Processing Module) interconnected through a system bus to a set of specialized daughter boards: the Power Management Module, the Perception Module, and the Communication Module. The additional Support and Operation Module transforms the WIT from a static intelligent sensor into an autonomous mini-robot. Currently, the WIT communication board uses the XBee wireless module (Digi International, 2009), which is based on the ZigBee protocol. This module provides a unique 64-bit ID. Other features are: a size of 2 cm x 3 cm, an operating range of up to 30 m indoors and up to 90 m outdoors, and a maximum consumption of 50 mA at a voltage of 3.3 V. Communication uses the 2.4 GHz radio frequency band with 16 channels.

Design of the perception module
The schematic design of the Perception Module is shown in Fig. 3. The main processor is the ARM7-based LPC2294 (NXP Semiconductors, 2008), which runs the Hard Real-Time Operating Kernel, HARETICK (Micea et al., 2006), for predictable operation. Another important part of the module is the ATxmega128A1 coprocessor (Atmel Corporation, 2010), used for fast, periodic data acquisition and processing operations. Two similar transducers are used both for transmitting and for receiving ultrasonic signals.
The BPU-1640IOAH12 device (Bestar Electronics, 2006) has been selected due to its convenient features, which include low cost, bidirectional operation, a nominal frequency of 40 kHz, and a maximum input voltage of 120 Vpp. Signal duplexing at the transducer level (bidirectional operation) has been implemented using SI4808DY MOSFET circuits. To perform a fast alignment process, the two ultrasonic transducers are mounted back to back, at 180 degrees, on a rotating platform driven by a servo motor. The motor is a TowerPro SG-50 (Tower Pro, 2008) with the following specifications: weight 5 g, dimensions 21.5 x 11.7 x 25.1 mm, speed 0.1 s/60 degrees (at 4.8 V), supply voltage 4-6 V. The servo motor is driven by a PWM signal with a period of 20 ms and a variable duty cycle. Rotation is between 0 degrees (minimum pulse duration) and 180 degrees (maximum pulse duration). Choosing a design based on a turret support for the ultrasonic transducers has several advantages when compared to other designs. On one hand, avoiding the rotation of the entire robot during the alignment process eliminates the inherent positioning errors, while also lowering the power consumption of the system. On the other hand, this solution increases the alignment accuracy while allowing a higher process speed.
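The angle-to-PWM mapping described above can be sketched as follows. The 20 ms period comes from the text; the actual minimum and maximum pulse durations are not given in the chapter, so the 0.5 ms and 2.5 ms endpoints below are assumed values typical for small hobby servos.

```python
PERIOD_MS = 20.0  # PWM period from the text
# Pulse endpoints are NOT specified in the chapter; 0.5 ms and 2.5 ms
# are assumed, typical values for small hobby servos like the SG-50.
MIN_PULSE_MS, MAX_PULSE_MS = 0.5, 2.5

def pulse_for_angle(angle_deg: float) -> float:
    """Map a turret angle (0..180 degrees) to a PWM pulse width in ms."""
    if not 0.0 <= angle_deg <= 180.0:
        raise ValueError("angle out of range")
    return MIN_PULSE_MS + (MAX_PULSE_MS - MIN_PULSE_MS) * angle_deg / 180.0

def duty_cycle(angle_deg: float) -> float:
    """Duty cycle (fraction of the 20 ms period) for a given angle."""
    return pulse_for_angle(angle_deg) / PERIOD_MS

print(pulse_for_angle(90.0))  # mid-travel → 1.5
```

The variable duty cycle is thus simply a linear interpolation between the two pulse endpoints over the 180-degree travel.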

Inter-robot alignment algorithm
To obtain correct results, the proposed techniques require that the pair of robots performing the distance measurement procedure successfully complete the alignment algorithm. Correct alignment means the sensing devices (i.e., the ultrasonic transducers) of the robots are facing each other, as close as possible to the straight line between them (see Fig. 4). This procedure also provides key angle values of each robot's position and orientation, related to the local reference system (Fig. 5):
- α_12 is the angle between the orientation axis (Ox_1) of the first robot (W_1) and the direct line between the two robots;
- φ_1 is the angle defined by the Ox_1 axis and the ultrasonic sensor axis of W_1;
- α_21 is the angle between the orientation axis (Ox_2) of the second robot (W_2) and the direct line between W_1 and W_2;
- φ_2 is the angle defined by the Ox_2 axis and the ultrasonic sensor axis of W_2.
Thus, the ideal alignment situation is when α_12 = φ_1 and α_21 = φ_2. In this case, the alignment error is 0.
Fig. 5. Key angles in the alignment process
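The ideal-alignment condition can be turned into a simple numeric check. The sum-of-absolute-deviations metric below is an illustrative choice, not a formula from the chapter.

```python
def alignment_error(alpha_12, phi_1, alpha_21, phi_2):
    """Total angular deviation (degrees) from the ideal alignment
    alpha_12 == phi_1 and alpha_21 == phi_2. The sum-of-absolute-
    deviations metric is illustrative, not taken from the chapter."""
    return abs(alpha_12 - phi_1) + abs(alpha_21 - phi_2)

print(alignment_error(30.0, 30.0, 150.0, 150.0))  # ideal case → 0.0
print(alignment_error(30.0, 28.0, 150.0, 153.0))  # → 5.0
```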

Description of the algorithm
The alignment procedure uses the wireless communication interfaces of the robots to enable the two corresponding peers to exchange the required commands and messages, and is based on the continuous measurement of the Sonar acoustic intensity. It is initiated and conducted by one of the robots, which acts as the master (W_M), while the other robot, the slave (W_S), executes the commands received from the master through the wireless link. The master operates in the Sonar receive mode and the slave in the Sonar transmit mode.
The procedure is based on the high directivity of the ultrasonic waves used by the Sonar. As the two robot turrets rotate, the master calculates the average strength of the ultrasonic signal received from the slave at each rotation step of 1 degree. If W_M senses that this average signal strength has increased since the previous rotation step, it continues the procedure until a decrease is encountered. Then, the two robots change the rotation directions of their turrets to return to the previously detected maximum. Fig. 6 shows all the steps of the alignment algorithm. The process starts with the reading of the ADC results. The two decision blocks labeled "done" refer to extracting a predetermined number of samples and to re-reading the values stored in memory, respectively. Next, the measured values are processed by finding the peak amplitude of each period, applying a Kalman filter, and comparing the results to each other to determine a global maximum. This global maximum is further used as the amplitude of the signal in the next steps. The block labeled "pre_align==0?" determines whether this is the last alignment phase of the process. If not, the algorithm determines the trend of the signal: if two consecutive increases are detected, "flag_inc" is set, and if there has been a previous decrease, the current stage of the algorithm is finished. On the other hand, if there are two consecutive decreases, the direction of rotation changes and, if "flag_inc" is set, "flag_dec" is also set. This phase also determines and updates the highest value of the signal reached so far.
In the last phase of the alignment, the algorithm tries to trace back the position with the highest measured values. Thus, if it detects an increase of the measured signal and the current value is larger than or equal to the stored peak, the process ends and the transducers are considered aligned. If, instead, a decrease is detected, the rotation direction is changed. In all cases, if the process does not end, the flow of the algorithm goes back to the ADC readings.
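The trend-tracking loop described above can be sketched as a peak-seeking search over turret angles. This is a simplified, single-turret illustration of the idea in Fig. 6, not the actual firmware: the flag pair is folded into two local variables, and `strength_at` stands in for the averaged, filtered ADC readings at a given angle.

```python
def align(strength_at, start=0, max_steps=1080):
    """Simplified sketch of the peak-seeking alignment loop: rotate in
    1-degree steps, track the running maximum of the signal strength,
    reverse direction once a decrease follows an increase, and stop
    when the measurement climbs back to the stored peak."""
    angle, direction = start, 1
    prev = best = strength_at(angle)
    increased = False                      # stands in for "flag_inc"
    for _ in range(max_steps):             # safety bound on total steps
        angle = (angle + direction) % 360
        cur = strength_at(angle)
        if cur > prev:
            increased = True
        elif cur < prev and increased:     # passed the peak: turn back
            direction = -direction
        best = max(best, cur)
        if direction < 0 and cur >= best:  # back at (or above) the peak
            return angle
        prev = cur
    return angle                           # fallback: bound exhausted

# Synthetic strength profile with a single peak at 90 degrees (circular).
def strength(a):
    return -min(abs(a - 90), 360 - abs(a - 90))

print(align(strength, start=0))  # → 90
```

As in the real procedure, the loop never needs the absolute peak value in advance; it only compares consecutive readings and the running maximum.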

Performance evaluation of the alignment algorithm
To evaluate the robotic alignment procedure, a custom simulation application has been developed using WPF (Windows Presentation Foundation). Two cases have been considered: 1. the ultrasonic transducers are fixed on the robot case and, thus, the robots rotate themselves (using the wheels) during the alignment process; and 2. the current design, in which the robots are equipped with turrets rotated through 180 degrees by a servo motor. The collected data was used to improve the alignment algorithm. To simulate the ultrasonic signal transmission, we considered an idealized representation: an isosceles triangle which represents the longitudinal cross-section of the propagation cone. This triangle represents the "active space" of the robot, and it has an opening of 60 degrees. In this case, the power of the signal (its amplitude) varies with the distance from the bisector of the considered angle. In other words, a perfect alignment takes place when the bisectors of the receiver and transmitter coincide (Fig. 7).
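The triangular beam model above, together with the product of the two directional strengths used by the simulator, can be sketched as follows. The linear fall-off toward the cone edge is one plausible reading of "varies with the distance from the bisector"; the exact profile used in the WPF tool is not specified.

```python
def beam_strength(offset_deg: float, half_angle: float = 30.0) -> float:
    """Idealized triangular beam profile: full strength (1.0) on the
    bisector, falling linearly to 0 at the edge of the 60-degree cone.
    The linear profile is an assumption for illustration."""
    return max(0.0, 1.0 - abs(offset_deg) / half_angle)

def link_strength(tx_offset: float, rx_offset: float) -> float:
    """Combined reading, as in the simulator: the product of the
    strengths seen from the transmitter and the receiver sides."""
    return beam_strength(tx_offset) * beam_strength(rx_offset)

print(link_strength(0.0, 0.0))    # bisectors coincide → 1.0
print(link_strength(15.0, 0.0))   # → 0.5
print(link_strength(40.0, 0.0))   # outside the cone → 0.0
```

The product form makes the combined reading peak exactly when both bisectors coincide, which is the perfect-alignment condition in Fig. 7.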

Fig. 7. Robotic alignment simulation tool
Because the distance between the robots is constant throughout the alignment process, its importance in the real case lies in the fact that the receiver may not "read" anything from the transmitter if it is too far away. Also, when the robots are further apart, the visibility angle broadens and more precise alignments may occur. In the application, we consider the distance a constant parameter and simulate the most usual scenarios, i.e., those where the robots are not far enough apart for the angle to broaden. To determine the position of the robots (of the sensors) relative to each other, we calculate the signal strength from both directions and then multiply the results. For the correction of the final alignment, the maximum value of the signal is determined. In the simulator, this maximum value is calculated from the initial conditions. In the real case, the best value must be searched for in the currently measured signal, because the distance between the robots is unknown. Fig. 8 and Fig. 9 show the elapsed time for the alignment versus the initial angles of the robots, at various ratios of the rotation speeds (expressed in [rad/(s*10^3)]). The thick lines mark the best two series, which minimize the alignment duration at the corresponding rotation speed ratio. For the design without turrets, most of the rotation speed ratios have cases in which the alignment time exceeds the imposed limit (60 s). These are shown on the graph as being equal to the limit. For the turret case, the sensors have to scan only half of the circle and, consequently, they find each other more quickly. In many cases of small rotation ratios, the sensors cannot find each other, because they are "following" each other closely behind. On the other hand, for increased values of the rotation ratios, the entire process becomes slow. Considering these facts, the simulations found an optimal rotation ratio of 1047/209 (i.e., 5/1) for the case in Fig. 8 and 1047/349 (i.e., 3/1) for the rotating turret (Fig. 9). Taking the alignment time also into consideration, its maximum value is about 27 s in the first case and 14 s in the second. These results prove that, by using the turret design, the alignment time can be reduced almost to half.

Distance measurement with the CTOF method
CTOF, Combined Time-of-Flight, is based on the TOF technique and involves two robots. Although a little more complicated, CTOF has several advantages over the MTDOA method proposed in (Micea et al., 2010). First, the CTOF procedure does not require an additional (third) robot to coordinate the distance calculation. Second, it does not depend on the delays introduced by the wireless communication interfaces of the robots.

Fig. 10. Distance measurement with the CTOF method

Fig. 10 depicts the CTOF technique. Robot W_1 initiates the procedure by sending a "START" wireless message (abbreviated "WMes" in Fig. 10) to its peer, W_2. The latter acknowledges the start of its part of the procedure with the "SONAR REQ" message, while simultaneously launching its own Sonar Receive Task. As a response to this second message, W_1 starts the Sonar Transmit Task and activates the timer which will count the elapsed time of the entire procedure, Δt. Upon receiving the ultrasound signal, W_2 activates a delay timer with a predefined value, τ_U, which is empirically determined to cover the total duration of the ultrasonic transmission from W_1. After the τ_U delay, W_2 sends a "SONAR START" message to W_1 and starts a second timer, with a value τ_W empirically established to cover the maximum communication delay over the wireless link and the corresponding interfaces. When W_1 receives the "SONAR START" message, it launches its Sonar Receive Task. After the τ_W timer expires, W_2 starts its Sonar Transmit Task and sends the corresponding ultrasonic signal towards W_1. Finally, when W_1 receives the signal, it stops the timer to obtain the Δt period. As a result, the Δt period contains the two predefined delays, τ_U and τ_W, and twice the propagation delay of the ultrasound signal, from W_1 to W_2 and back. Based on this ultrasound propagation delay,
the distance between the two robots can be derived:

d = c_air * (Δt - τ_U - τ_W) / 2    (2)

where c_air = 343.4 m/s is the velocity of acoustic waves in air at room temperature and normal pressure. When considering the threshold-based detection method of the received ultrasonic bursts and the fact that the ultrasonic measurements are not perfectly linear, an additional calibration offset is needed for the distance formula in (2):

d = c_air * (Δt - τ_U - τ_W - τ_UC) / 2    (3)

where τ_UC is the ultrasonic signal calibration offset and has an experimentally determined value (in our case studies, τ_UC = 290 µs).
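The CTOF distance computation can be sketched directly from the timing relation above. The calibration offset τ_UC = 290 µs comes from the text; the τ_U and τ_W values below are assumed placeholders, since the chapter only describes how they are established empirically.

```python
C_AIR = 343.4    # m/s, speed of sound in air at room conditions
TAU_U = 0.005    # s, ASSUMED value for the ultrasonic-burst delay
TAU_W = 0.010    # s, ASSUMED value for the wireless-margin delay
TAU_UC = 290e-6  # s, calibration offset from the case studies

def ctof_distance(delta_t: float) -> float:
    """Eq. (3): d = c_air * (Δt − τ_U − τ_W − τ_UC) / 2, where Δt is
    the full round-trip interval timed by W_1."""
    return C_AIR * (delta_t - TAU_U - TAU_W - TAU_UC) / 2.0

# Round-trip check: synthesize Δt for a 1 m separation and recover it.
d_true = 1.0
delta_t = 2 * d_true / C_AIR + TAU_U + TAU_W + TAU_UC
print(round(ctof_distance(delta_t), 9))  # → 1.0
```

Note that the two fixed delays cancel out exactly, which is why CTOF is insensitive to the (bounded) wireless communication latencies.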

Experimental results
An extensive set of experiments has been conducted in the DSPLabs using the robotic system of the CORE-TX platform. The experimental setup consisted of several mobile robots, out of which two were randomly chosen to perform the alignment and the distance calculation procedures for each experiment. The robots have been placed at distances ranging from 100 mm to 3000 mm and, for each 100 mm in this range, a set of over 50 pairs of measurements has been performed. Before each measurement, the robots have been positioned in random directions with respect to each other. Since the proposed techniques are based on Sonar and are specifically designed for indoor measurements, the experiments, evaluations, and results consider normal room values for the air parameters (such as temperature, humidity, and pressure). These parameters could otherwise influence the speed of the ultrasonic waves used in equations (2) and (3). Fig. 11 shows several periods of the raw received ultrasonic signal, at the output of the LM6134 amplification circuit. Further on, the values of the signal peaks (which occur every 25 µs) are extracted and interpreted as the received Sonar signal. Table 1 presents the distance measurement results for the proposed CTOF method. The maximum absolute error is 4.8 cm, when the robots are 3000 mm apart. The result and error analysis of the CTOF procedure show that, after the necessary calibrations, the measurement characteristic is linear and follows the real distance very closely. The procedure also proves to be independent of the random delays introduced by the wireless modules. Table 2 presents distance measurements with the CTOF method after applying the Kalman filter to the results. As we can see, the results are very good in terms of accuracy. The disadvantage is the measurement time, which increases for repetitive measurements. Some comparative distance evaluation results, with and without filtering the data, are depicted in detail in Fig. 15. From the experiments we conclude that the system provides optimal results for approximately 10 repetitive measurements.
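The per-period peak extraction described above (one peak every 25 µs, i.e., one per 40 kHz carrier period) can be sketched as follows. The ADC rate of 400 kHz is an assumed value for illustration; the chapter does not state the actual sampling rate.

```python
import math

F_CARRIER = 40_000   # Hz: transducer frequency → 25 µs period
F_SAMPLE = 400_000   # Hz: ASSUMED ADC rate (10 samples per period)
SAMPLES_PER_PERIOD = F_SAMPLE // F_CARRIER

def extract_peaks(samples):
    """Reduce the raw ADC stream to one peak amplitude per 25 µs
    carrier period, as done before filtering the Sonar envelope."""
    return [max(samples[i:i + SAMPLES_PER_PERIOD])
            for i in range(0, len(samples) - SAMPLES_PER_PERIOD + 1,
                           SAMPLES_PER_PERIOD)]

# Synthetic burst: a 40 kHz tone with a slowly rising envelope.
n = 100
raw = [(i / n) * math.sin(2 * math.pi * F_CARRIER * i / F_SAMPLE)
       for i in range(n)]
peaks = extract_peaks(raw)
print(len(peaks))  # 100 samples, 10 per period → 10
```

The resulting peak sequence approximates the signal envelope, which is what the Kalman filter then smooths across repetitive measurements.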

Conclusion
This work extends the discussion on the CTOF method of inter-robot distance measurement, introduced in (Micea et al., 2010). An extended discussion has also been devoted to the prerequisite sensor (robot) alignment procedure. The custom-designed software simulation application provided the optimal ratio between the rotation speeds of the robots or of their turrets carrying the ultrasonic transducers. After the alignment process, the distance between two robots can be measured with the CTOF method. It has been shown that CTOF is independent of the communication propagation errors. We have also shown how the CTOF method meets the requirements of indoor, low-cost, energy-efficient robotic applications, reaching an accuracy of 4.8 cm for distances of 3 m. Furthermore, by applying the Kalman filter to repetitive distance measurements, an accuracy of 1 cm has been achieved for distances of 3 m, without the need for fixed landmarks. The proposed distance measurement method and inter-robot alignment algorithm rely on inter-robot collaborative procedures and, therefore, these techniques are independent of fixed landmarks. Nevertheless, if the system further requires accurate localization of the mobile robots, at least the initial position of one of the robots must be known prior to the start of the system operation.
The three main layers of the CORE-TX architecture are: 1. the Perception and Operation Layer, consisting mainly of autonomous microsystems with embedded intelligence, called WITs (Wireless Intelligent Terminals); 2. the Collaborative Communication Layer, based on ad-hoc wireless data communication techniques (currently, the basic support is provided by the ZigBee protocol); and 3. the Background Control and Supervision Layer, with a central node called BRAIN (Background Robotic Activity Induction Node).

Fig. 1. General architecture of the CORE-TX system

Fig. 8. Inter-robot alignment in the case of fixed transducers (rotating robots)

Fig. 14. Maximum variation of the correction angle for experimental alignment procedures

Table 1. Distance measurement results for the CTOF method

Table 2. Distance measurement results for the CTOF method with Kalman filtering

Fig. 15. Repetitive CTOF distance measurements with and without Kalman filtering, for various distances between the robots (100 mm - top, 1000 mm - middle, and 3000 mm - bottom)