Object Location in Closed Environments for Robots Using an Iconographic Base

Written By

M. Pena-Cabrera, I. Lopez-Juarez, R. Rios-Cabrera, M. Castelan and K. Ordaz-Hernandez

Submitted: 03 November 2010 Published: 09 June 2011

DOI: 10.5772/19360

From the Edited Volume

Robot Arms

Edited by Satoru Goto

1. Introduction

To make efficient use of digital image processing and pattern recognition techniques, it is necessary to understand the behaviour of the human visual system. The human visual system comprises the human eye and the areas of the brain that perform the neurological processing of visual information. Together, eye and brain convert optical information into the perception of a visual scene. The eye functions as the camera of the human visual system; its role is to transform light (electromagnetic waves) into small electrical signals that carry the information to the brain, where the data are analysed and a structured, high-resolution image is built.

For a machine, this process is a very complex task to implement; adequate elements must exist, such as perception of the environment, information processing and the generation of a primitive knowledge base, so that specific actions and decisions can be taken based on the interpretation of that information. Artificial vision may then be defined as the implementation in a machine of capabilities for visual perception of the surrounding environment: extraction of regions of interest, analysis, scene interpretation and decision making. In this regard, authors such as Haralick & Shapiro have defined machine vision as the science that studies and develops the theoretical bases and algorithms for obtaining information about the real world from one or several images (Haralick & Shapiro, 1992). Another notable definition is proposed by Pajares et al., who define it as the capability of a machine to see its surrounding world, that is, to understand the structure and properties of the 3D world from the analysis of one or more 2D images.

The system described in this chapter provides this capability for automated industrial applications using manufacturing machinery or robots which, at some point in the industrial process, need to determine their position and location within the working space in order to accomplish their tasks efficiently. A real example is locating and positioning workpieces within a manufacturing area during assembly, painting, sorting or storage operations. The goal is to identify the workpiece and reference it to a visual scene; then, with a geometrical model, the workpiece position is calculated, and from it the path needed to reach the piece with a robot arm, as described by de Lope et al. (1997), typically in an eye-in-hand configuration.

A frequent requirement that we have found during manipulative tasks using mobile robots or multiple collaborative robot arms is to accurately locate objects within the workspace or, in the case of mobile robots, the robot itself. In this chapter we present the design and implementation of an artificial vision system capable of obtaining the position of an object (workpiece), a tool or end-effector device of a robot manipulator, or the camera itself in an eye-in-hand configuration, within an enclosed environment by means of the recognition of “wall marks”.

An iconographic base, formed by icon symbols containing enough information about the environment, is employed to obtain the object’s spatial position (the end-effector location, for instance). All the information is obtained in a single camera shot. The system comprises a digital camera with pan/tilt movements attached to the object whose location is sought; the object may be a workpiece, the end-effector of a robot arm or any other mobile object. The system obtains the location and position of the camera by scanning the surrounding walls: it finds the “wall marks” that indicate where the object is and what its position is. The camera acquires enough parameters to allow the exact position of the object to be calculated; this information also permits a graphical representation of the closed environment and the camera location.

Pattern recognition techniques are widely used in computer vision, with excellent results in industrial inspection and automation applications; however, some of the algorithms are not practical for real-time use. Basing the visual system on the recognition of symbols makes it possible to design and implement a system that obtains workpiece locations through an iconographic, descriptive symbol set with real-time interpretation, and to build a basic geometrical symbol recognition method whose interpretation yields location information. By analysing the depicted symbols, the object camera within an enclosed environment can determine where it is, what its direction is and what its inclination is.

The symbols used represent movements and paths in 2D, and potentially in 3D, and allow references to be established to fixed zones within the scene; the symbols are extracted from the general scene, and the location information is obtained from their analysis.

2. Scope

The scope of this research is to design a methodology and implement an artificial vision system capable of obtaining, in real time, the exact position of an object within an enclosed working environment by using symbol and icon recognition. The system comprises a camera located on the object, workpiece or end-effector segment of a robot arm. Customized software communicates with the camera to obtain visual information of the environment at the very place where the object, workpiece or end-effector segment is located. Parameter acquisition is carried out by image analysis, and the exact location of the camera is obtained, together with enough information for a graphical representation of the environment. Applications to autonomous agents and robot manipulators follow naturally from this methodology.

The methodology applies to closed environments whose walls are decorated with symbols (icons) in specific areas; the symbols themselves contain graphical information about where they are located, so that through image processing and interpretation it is possible to know the location of the object by looking at the icons.

3. Design and implementation

The design and implementation of the system, as well as the experimental platform, comprise the following operation modules.

Software and hardware (camera and dedicated algorithms). - The camera is a solid-state wireless digital camera with pan/tilt movement support, used to look for the appropriate icon in a specific working area. Wireless communication with the control computer is achieved via a wireless access point.

Iconographic set. - For environment representation, the icons are basic geometric forms placed within the environment (on the walls).

Dummy enclosed environment for experiments. - A specific environment for carrying out test experimentation with the system; this dummy model has practical dimensions chosen to acquire enough data to match the digital camera specifications and satisfy the application requirements.

Image acquisition. - The module of the system that captures the image and produces a standard format to be used in an image model.

Region of Interest (ROI) algorithm. - It detects the presence of an icon within the scene and extracts it from the scene.

Detection, interpretation and camera location algorithm. - This algorithm processes the image structure of the extracted icon and, together with a geometrical model, obtains the camera coordinates within the scene.

3.1. Software

The system was implemented on the Visual Basic 6.0 and Matlab 7.0 platforms. The application was developed mainly in Visual Basic 6.0; interaction with Matlab 7.0 was carried out through the “Matlab Automation Server” component. The main purpose of the system is to obtain the coordinates of a video camera within a pre-established coordinate reference system in a known working space. The system must have recalibration capabilities in order to work with different working space dimensions for different applications. Figure 1 shows the software modules.

Figure 1.

General configuration of software modules.

3.2. Hardware

The hardware components of the developed system are a digital camera, a network interface card and a PC. The camera used is a “Veo Wireless Observer”; it communicates with the computer through a wireless local network using a wireless access point and a “3Com 11 Mbps Wireless LAN PCI Adapter” network card with the IEEE 802.11b protocol in a point-to-point configuration (Figure 2).

Figure 2.

General configuration of the system.

3.3. Iconographic base

Simple figures were used to implement the iconographic base; combinations of these figures must provide enough features to obtain the location information. A symbol set was created to form what we have called the “iconographic base”; these symbols were physically painted on the walls of the closed environment, one per zone of the working place. For testing purposes this was implemented in a dummy model of practical dimensions. Four “working icons” were proposed, and each icon has to be located in the specific area of its working zone in order to correlate it with the physical real-world location. Each “working icon” is formed by four small squares whose centroids form the corner points of a larger imaginary square. The positions of the icon elements allow subsequent analysis, through geometrical models, of the displacements due to perspective; from this it is possible to obtain the distance between the camera lens and the centroid of the imaginary square, as well as the angle between the optical axis of the camera and the normal line crossing the imaginary square centroid. The dimensions of the different squares were obtained by trial and error in order to reach optimum resolution and viewing area for the particular camera used.

Figure 3.

Camera and network card.

3.4. Specifications for the design of the dummy model environment

For the design of the model environment, a camera resolution of 320 x 240 pixels was used in order to achieve an image size suitable for fast acquisition and processing. Several test shots were carried out to find the range of distances between the camera lens and the centre of the icon that yields acceptable working conditions, resulting in a maximum distance of 60 cm and a minimum distance of 20 cm; outside this range it was impossible to obtain the parameters needed for the distance and angle of vision required in a real measurement of object locations within the dummy enclosed environment. The maximum angle of vision that guarantees correct operation was 55°. Once these parameters were established, an operational region can be defined for each icon, within which the system is able to obtain the position of the camera; this region applies to each of the four icons and is called the Operating Region of the working space (Figures 4a and 4b).

Figure 4.

(a) Icon’s operating region; (b) operating region of the working space.

Each icon frame was designed as depicted in Figure 5a and they were located in each wall as indicated in Figure 5b.

In addition to the above parameters, it was also necessary to establish the minimum wall height that prevents images of scenes external to the working environment from being acquired, since these would act as noise and cause failures in the image processing; this height was established as 50 cm. The resulting dimensions proposed for the construction of the test model were 50 cm x 120 cm x 120 cm, as shown in Figure 6.

Figure 5.

(a) Icon’s frame design; (b) icon’s wall location.

Figure 6.

Dummy model environment for experimental test.

3.5. Working icon detection

In order to detect only one of the four working icons at a time in a particular scene, we used minimum and maximum distances defined for each operation region, so that an icon outside a specific range is not considered for analysis. For icon detection, we first select a “merit number” that tells us which region of interest has been associated with each icon: the “P2A” number, a well-known factor relating perimeter and area, used in digital image processing on binary images. In our case we define a black object (icon zone) on a white background.
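As an illustration, the sketch below implements this filtering step in Python with OpenCV. The chapter does not give its exact P2A formula or range limits, so the common definition P2A = perimeter² / area and the threshold values used here are assumptions.

```python
import cv2

def find_icon_candidates(binary_img, p2a_min=12.0, p2a_max=30.0):
    """Filter binary blobs by the P2A shape factor (assumed perimeter^2/area).

    The chapter defines black icon zones on a white background, so the
    image is inverted before contour extraction; the threshold values
    here are hypothetical placeholders, not the authors' calibrated ones.
    """
    inverted = cv2.bitwise_not(binary_img)
    contours, _ = cv2.findContours(inverted, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 1:                       # ignore degenerate blobs
            continue
        perimeter = cv2.arcLength(c, closed=True)
        p2a = perimeter ** 2 / area        # dimensionless shape factor
        if p2a_min <= p2a <= p2a_max:
            candidates.append(c)
    return candidates
```

For reference, a perfect square has P2A = 16 and a circle about 12.6, so a range around these values favours the compact square elements of the icons.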

3.6. Icon identification

The main objective of the icon set is to provide enough parameters to find the camera-icon distance, the vision angle and the region being used within the enclosed environment.

Three algorithms were designed to perform these tasks:

  • the algorithm to get the camera-icon distance (CIDA),

  • the algorithm to get the angle of vision (AVA), and

  • the algorithm to find which of the four working icons appears in the current image, called the icon identification algorithm (IIA).

One important requirement is that one of the working icons appears as a complete image within the image frame; otherwise, the camera has to be moved until one is found, so that there is enough confidence in the data values before calculations are made.

Icon identification algorithm (IIA). It uses a binary image and a labelling process to distinguish the different elements comprising the working icons. The centroid of each element is obtained in order to calculate its distance, as given by equation (1), to a reference element (a triangle) whose orientation is different for each working region; this identifies the working icon (see Figures 5 and 7).

$d_{AB} = \sqrt{(X_A - X_B)^2 + (Y_A - Y_B)^2}$ (1)

Figure 7.

Centroid distances representation.

Once all distances are obtained, the algorithm finds the smallest one; this is the criterion for determining which working icon has been found and which working region (geometrical square) of the environment is being used. This information tells the system where the camera is, because the centre of each working icon is drawn at the very border of its geometrical working region on the environment walls, as shown in Figure 8.
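A minimal sketch of this decision step is shown below in Python. It assumes the element centroids have already been extracted by the labelling stage; the mapping from the returned index to a working region is a placeholder for the orientation coding of Figures 5 and 7.

```python
import numpy as np

def identify_working_icon(square_centroids, triangle_centroid):
    """Apply equation (1) between each square element and the triangle.

    square_centroids: list of (x, y) centroids of the four square elements.
    triangle_centroid: (x, y) centroid of the reference triangle.
    Returns the index of the square nearest the triangle; since the
    triangle is oriented differently in each working icon, this index
    identifies the icon and hence the working region.
    """
    tx, ty = triangle_centroid
    # Equation (1): d_AB = sqrt((X_A - X_B)^2 + (Y_A - Y_B)^2)
    distances = [np.hypot(x - tx, y - ty) for (x, y) in square_centroids]
    return int(np.argmin(distances))   # caller maps this to a region
```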

3.7. Calculation of distance to the icon

The algorithm for the camera-icon distance (CIDA) obtains the distance between the camera lens and the centre of the icon, using the following formula:

$d = \frac{k}{\text{icon height}}$ (2)

where:

d = distance

k = proportionality constant

icon height = icon height in pixels

The distance is obtained from an analysis of the image perspective, which relates the magnitude of objects to the distance considered: figures appear smaller at greater distances and vice versa. This inverse relationship is given by equation (2).

To use the equation, the first step is a labelling process of the icon elements (see Figure 9), to identify them within the icon working area.

The centroid of each element is calculated in order to obtain two points: the first is the midpoint between the centroid of element 2 and the centroid of element 4; the second is the midpoint between the centroid of element 1 and the centroid of element 5. It is important to notice that the centroid of element 3 (the triangle) is not used in the distance calculation. The midpoints are computed with equations (4) to (7) below; once they are obtained, the magnitude between them (called the “apparent height”) is calculated with equation (3), as illustrated in Figure 10.

Figure 8.

Working icons for the different working regions.

Figure 9.

Icon element identification on the image.

Figure 10.

Original Icon image

The apparent height a is given by equation (3) as follows:

$a = \sqrt{(PM2_x - PM1_x)^2 + (PM2_y - PM1_y)^2}$ (3)

where:

a = icon height (apparent height), in pixels

PM1x, PM1y are the x-y coordinates of the midpoint between the centroids of elements 2 and 4.

PM2x, PM2y are the x-y coordinates of the midpoint between the centroids of elements 1 and 5.

They are calculated as:

$PM1_x = (xC2 + xC4)/2$ (4)

$PM1_y = (yC2 + yC4)/2$ (5)

and

$PM2_x = (xC1 + xC5)/2$ (6)

$PM2_y = (yC1 + yC5)/2$ (7)

where

xC2 and yC2 are coordinates of centroid 2

xC4 and yC4 are coordinates of centroid 4

xC1 and yC1 are coordinates of centroid 1

xC5 and yC5 are coordinates of centroid 5
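Equations (2) to (7) translate directly into code. The sketch below, in Python, follows the chapter's definitions, assuming the element centroids are already available from the labelling step.

```python
import numpy as np

def apparent_height(c1, c2, c4, c5):
    """Equations (3)-(7): apparent icon height in pixels.

    c1, c2, c4, c5 are the (x, y) centroids of icon elements 1, 2, 4
    and 5; element 3 (the triangle) is not used, as stated in the text.
    """
    pm1 = ((c2[0] + c4[0]) / 2.0, (c2[1] + c4[1]) / 2.0)   # eqs (4), (5)
    pm2 = ((c1[0] + c5[0]) / 2.0, (c1[1] + c5[1]) / 2.0)   # eqs (6), (7)
    # Equation (3): magnitude between the two midpoints.
    return np.hypot(pm2[0] - pm1[0], pm2[1] - pm1[1])

def camera_icon_distance(k, height_px):
    """Equation (2): d = k / icon height, with k obtained by calibration."""
    return k / height_px
```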

A similar procedure is followed when an image is obtained from another point in the scene, with a different perspective, as shown in Figure 11.

Figure 11.

Icon image at 55° with reference to the perpendicular line.

At this point we have obtained only one parameter of equation (2); the next step is to obtain the proportionality constant k, which depends on the lens characteristics and on each visual scene, so the best practical way to obtain it is by means of laboratory tests.

Tests for obtaining k were carried out in the following way (a numerical sketch follows the list):

  1. Place the camera at a known distance d, so as to obtain a focused, centred image of the icon.

  2. From the acquired image, obtain the “apparent height” parameter.

  3. Substitute the values of d and the apparent height into equation (2) to obtain a proportionality constant ki.

  4. Repeat steps 1 to 3 for five different distances within the operational region of a specific icon (see Table 1).

  5. Calculate the average of the ki values in order to find a constant k that allows the value of d to be obtained at any point within the operating region of each icon. The ki values obtained in the laboratory tests are given in Table 1.
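Numerically, the calibration reduces to a few lines. The following sketch reproduces it with the distance/height pairs of Table 1; because the pixel heights in the table are rounded, the resulting ki differ slightly from the tabulated ones.

```python
import numpy as np

# Distance / apparent-height pairs from Table 1.
distances_cm = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
heights_px = np.array([149.0, 100.0, 75.0, 60.0, 50.0])

# From d = k / height (equation 2), each test gives k_i = d * height.
k_i = distances_cm * heights_px
k = k_i.mean()

print(k_i)   # [2980. 3000. 3000. 3000. 3000.]
print(k)     # 2996.0, close to the chapter's average K = 2997.2
```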

3.8. Camera position

The following actions are performed to obtain the final position of the camera. First, a reference position is found by a system initialization (HOME); then the camera begins a PAN movement to find an available working icon, checking the validated distances. Once an icon is found, the system moves the camera so as to centre the icon in the current image and calculates the criterion distances explained before, in order to know which icon is within the scene.

Once the icon is centred, the system calculates the camera-icon distance and the vision angle θ (the angle between the optical camera axis and the normal line through the icon centroid). The distance and the angle are shown in Figure 12, and the corresponding values obtained in laboratory tests are given in Table 1.

Figure 12.

Position vector

With this information a geometrical model is used to obtain the (x, y) coordinates of the camera in the working region being used, and a graphical representation of the camera-environment interaction is produced. Three parameters are obtained and used to compute the final position of the camera (a sketch of this final step follows the list below):

  • Icon identification (minimum and maximum distances),

  • camera-icon distance, and

  • vision angle θ.
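The chapter does not publish the geometrical model explicitly, so the following Python sketch is only a plausible reconstruction, under the assumption that the camera lies at distance d from the icon centroid, at angle θ from the wall normal, in a frame centred at the icon with y along the normal and x along the wall.

```python
import math

def camera_position(d_cm, theta_deg, icon_centre=(0.0, 0.0)):
    """Hypothetical reconstruction of the model behind Figure 12.

    d_cm: camera-icon distance from CIDA; theta_deg: vision angle from AVA.
    Returns (x, y) in a frame whose origin is the icon centre, y along
    the wall normal and x along the wall; the chapter's actual frame
    convention is not stated, so this layout is an assumption.
    """
    theta = math.radians(theta_deg)
    x = icon_centre[0] + d_cm * math.sin(theta)   # lateral offset along wall
    y = icon_centre[1] + d_cm * math.cos(theta)   # depth along wall normal
    return x, y
```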

Distance [cm]    K_i       Icon height [pixels]
20               2985.8    149
30               2990.3    100
40               3007.9    75
50               3000.3    60
60               3001.7    50
Average K = 2997.2

Table 1.

Ki values obtained in laboratory tests.

4. Experimental results

Experimental tests were carried out to measure the performance of the system under real conditions. For this purpose an enclosed square environment was built, and the iconographic symbol set described before was painted on its four walls, representing four working icon areas.

4.1. Experimental method

Experimental tests showed very good results, with real-time performance of the system. Once the system was implemented and its practical operation checked, the precision of the system was verified. To achieve this, we used the different working regions of each working icon. For the purposes of the experiment, the area of the enclosed environment for each icon was divided into three zones, at distances of 20, 40 and 55 cm from each icon, as shown in Figure 13.

Figure 13.

Divided zones for each working icon.

The difference between desired and real locations was measured, and the results are shown in Table 2. Eight points were selected at random in each zone and their real (desired) physical coordinates were recorded; the camera was then positioned at each selected point, and the positions calculated by the system were compared against them.

The experiment was carried out for all points in all the different regions and for all the different icons; a graphical representation was made with the obtained values to give better feedback on the system performance. Figures 14 and 15 show the graphs for two different icon working regions.

The complete system, with software and hardware integrated, was tested by selecting ten random points inside the workspace; the camera, together with the driver support that performs the pan/tilt movements, was placed at these real points, and the response given by the system was compared against the actual position values. The results of the tests were as follows:

Real X [cm]   Real Y [cm]   System X [cm]   System Y [cm]   Time [s]
 20.00         35.00          20.31           34.36           84.782
 40.00          3.00          38.48            3.04          143.368
 -4.00          5.00          -4.95            6.93           50.442
 15.00        -31.50          18.70          -36.23          141.694
 34.00        -16.50          33.47          -17.21           43.432
-19.00        -25.20         -20.97          -25.07           92.844
-40.00         -1.00         -39.64           -0.67          119.512
 15.00          3.50          14.66            2.48           57.232
-25.00         29.00         -24.98           29.16           82.899
-25.30         29.60         -25.87           30.21           82.889

Table 2.

Real and System calculated positions.

Figure 14.

Graphical representation of measured and real points for icon zone 1.

Figure 15.

Graphical representation of measured and real points for icon zone 2.

Experimental testing was repeated ten times and the average measurements were registered. Figure 16 shows a graph of the error in the x axis for the three zones of a working icon.

Figure 16.

Error for x coordinate

The average error in x and y can be established for each of the measurement zones, in order to see in which of the three zones the system behaves most precisely, with the following results:

            Zone 1     Zone 2     Zone 3
x error     0.58 cm    0.82 cm    1.19 cm
y error     0.79 cm    0.89 cm    1.47 cm

Table 3.

Average error for x and y for three different zones.

Analysis and comparison of the test results indicate that the error of the positions provided by the system grows with the distance at which the icon is captured; in addition, as can be seen from Figure 12, the greater the viewing angle, the greater the error.
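As a cross-check on this observation, the mean absolute error over the ten test points of Table 2 can be computed directly; this aggregate is illustrative only and is not the per-zone measurement of Table 3.

```python
import numpy as np

# Real vs. system-calculated (x, y) positions [cm] from Table 2.
real = np.array([[20.00, 35.00], [40.00, 3.00], [-4.00, 5.00],
                 [15.00, -31.50], [34.00, -16.50], [-19.00, -25.20],
                 [-40.00, -1.00], [15.00, 3.50], [-25.00, 29.00],
                 [-25.30, 29.60]])
system = np.array([[20.31, 34.36], [38.48, 3.04], [-4.95, 6.93],
                   [18.70, -36.23], [33.47, -17.21], [-20.97, -25.07],
                   [-39.64, -0.67], [14.66, 2.48], [-24.98, 29.16],
                   [-25.87, 30.21]])

mean_abs_err = np.abs(system - real).mean(axis=0)
print(mean_abs_err)   # roughly [1.03, 1.03] cm in x and y
```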

5. Conclusions

A system capable of obtaining the real-time position of an object using a pan/tilt camera-in-hand as the sensor was developed. An iconographic symbol set was used to identify different working areas within an enclosed simulated working environment; iconographic symbols projected or drawn on the environment walls can be used to calculate the camera position. The camera has automated icon search capabilities, and the experimental measurements show that the system is feasible for practical use in manufacturing and assembly applications, to find the real-time positions of working tools for robot manipulators. The experimental tests were carried out under favourable laboratory conditions for image acquisition, such as good illumination, good contrast and a specific size of the experimental environment, in order to assess the system. However, future work envisages automated recalibration for real applications on a robot arm manipulator with a camera mounted on the arm in an eye-in-hand configuration. We intend to keep using basic geometric figures, as they proved very useful in this investigation and can speed up the distance calculation in more complex scenarios.

References

  1. Pajares, G. & de la Cruz, J. M. (2002). Visión por computador. Ed. Ra-Ma, Colombia.
  2. Haralick, R. M. & Shapiro, L. G. (1993). Computer and Robot Vision. Ed. Addison-Wesley Publishing Co., New York.
  3. de Lope, J., Serradilla, F. & Zato, J. (1997). Sistema de localización y posicionamiento de piezas usando visión artificial. Inteligencia Artificial, 1(1), 57-64.
  4. Pressman, R. S. (2002). Ingeniería del software. Un enfoque práctico. Ed. McGraw-Hill, Madrid.
  5. Matlab Automation Server. http://www.mathworks.com/access/helpdesk/help/techdoc/matlab_external/f27470.html, visited Feb. 03, 2011.
  6. Malvino, A. P. & Leach, D. P. (1993). Principios y aplicaciones digitales. Ed. Marcombo Boixareu Editores.
