
Switching Control of Image Based Visual Servoing in an Eye-in-Hand System Using Laser Pointer

By Wen-Fang Xie, Zheng Li, Claude Perron and Xiao-Wei Tu

Published: January 1st 2010

DOI: 10.5772/6984


1. Introduction

To reduce labor cost and increase throughput in the manufacturing industry, there is an increasing demand for automated robotic manufacturing systems, such as robotic assembly, bin picking, drilling and palletizing systems, which require accurate and fast robot positioning. A 3D machine vision system is normally used in a robotic manufacturing system to compensate for robotic positioning errors caused by unforeseen work environments and randomly placed objects. The task of robot positioning using a vision system is referred to as visual servoing, which aims at controlling the pose of the robot's end effector relative to a target object or a set of target features (Hutchinson et al, 1996), (Corke, 1996), (Li et al, 2006), (Aouf et al, 2004). According to the features used as feedback in minimizing the positioning error, visual servoing is classified into three categories: Position Based Visual Servoing (PBVS) (Hutchinson et al, 1996), (DeMenthon & Davis, 1995), (Wilson et al, 1996), Image Based Visual Servoing (IBVS) (Weiss et al, 1987), (Espiau, 1993), (Chaumette, 1998), (Wang & Cho, 2008) and Hybrid Visual Servoing (Malis et al, 1999).

Since IBVS was introduced in 1980, it has attracted the attention of many researchers and has developed tremendously in recent years. The method is based on the principle that when the image feature error in the 2-D image space approaches zero, the kinematic error in Cartesian space approaches zero too (Hutchinson et al, 1996). In IBVS, the error for the controller is defined directly in terms of the image feature parameters (Weiss et al, 1987). Compared with PBVS, the advantages of IBVS are obvious. First, it is free of the object model and robust to camera modeling and hand-eye calibration errors (Espiau, 1993). Second, the image feature point trajectories are controlled to move approximately along straight lines in the image plane, which helps prevent the image features from leaving the FOV. However, IBVS has the following drawbacks. Since the control law is designed purely in the image plane, the trajectory of the end effector in Cartesian space is not a straight line and can even be erratic in some cases. In other words, in order to reduce the image feature error to zero as quickly as possible, unnecessary motions of the end effector are performed. Moreover, the system is stable only in a region around the desired position, and there may exist image singularities and image local minima (Chaumette, 1998) leading to IBVS failure. The choice of the visual features is a key point in solving the problem of image singularities. Many studies have been carried out to find visual features that are decoupled with respect to the 6 DOF of the robot; such studies also aim to keep the trajectory in Cartesian space close to a straight line (Tahri & Chaumette, 2005), (Pages et al, 2006), (Janabi-Sharifi & Ficocelli, 2004), (Krupa et al, 2003). In (Tahri & Chaumette, 2005), six image moment features were selected to design a decoupled control scheme. In (Pages et al, 2006), Pages et al. derived the image Jacobian matrix related to a laser spot as an image feature. Global convergence of the control law was shown with a constant interaction matrix. However, the method needed information about the planar object and was only suitable for situations where the camera was located near the desired position. Another approach using a laser pointer in visual servoing was presented in (Krupa et al, 2003). Krupa et al. developed a vision system with a stationary camera, which retrieved and positioned surgical instruments during operations. A laser pointer was used to project a laser spot on the organ surface to control the depth. However, the servoing was carried out in only 3 DOF and the camera was motionless, so the system could not provide much flexibility for visual servoing in a large-scale environment.

Koichi Hashimoto et al. (Hashimoto & Noritsugu, 2000) introduced a method to handle image local minima. The main idea was to define a potential function and minimize it while controlling the robot. If the potential had local minima, the algorithm generated an artificial potential and then controlled the camera based on the artificial one. In (Kase et al, 1993), stereo-based visual servoing was proposed to solve the depth estimation problem and calculate an exact image Jacobian matrix; however, this kind of algorithm increases the computational cost. R. Mahony et al. (Mahony et al, 2002) introduced a method of choosing other types of image features instead of points for IBVS, focusing on the depth-axis control. P. Y. Oh et al. (Oh & Allen, 2001) presented a partitioned-DOF method for IBVS which used a 3-DOF robot with a 2-DOF pan-tilt unit, and gave experimental results of tracking people. In (Corke & Hutchinson, 2001), another partitioned approach to visual servoing control was introduced, which decoupled the z-axis rotational and translational components of the control from the remaining DOF.

To overcome the aforementioned shortcomings of IBVS, some new approaches that integrate PBVS and IBVS have been developed (Gans et al, 2003), (Malis et al, 1999). The main idea is to use a hybrid of Cartesian and image space sensory feedback signals to control both the Cartesian and the image trajectories simultaneously. One example of such a hybrid approach is 2.5D visual servoing (Malis et al, 1999), which is based on the estimation of the partial camera displacement. Recently, a hybrid motion control and planning strategy for image constraint avoidance was presented in (Deng et al, 2005), (Deng et al, 2003). The motion control part included local switching between IBVS and PBVS to avoid image singularities and image local minima, and the planning strategy was built around an artificial hybrid trajectory planner.

Inspired by the hybrid motion control and planning strategy, we propose a new switching control approach to IBVS that overcomes these shortcomings. First, a laser pointer is adopted to realize on-line depth estimation for obtaining the image Jacobian matrix. Second, we add the laser point as an image feature to the chosen image features of the object. Based on this new image feature set, we can detect the object in the workspace even when the features of the object are only partially in the FOV; hence the available workspace is virtually enlarged to some extent. Furthermore, a set of imaginary target features is introduced so that a decoupled control scheme for IBVS can be designed. Third, we separate the 3-DOF rotational motion from the translational motion to solve image singularity problems, such as the 180-degree rotation around the optical axis (Chaumette, 1998), and image local minima in IBVS. This decoupled control strategy allows the visual servoing system to work over a large region around the desired position.

This switching control approach to IBVS with a laser pointer is applied to a robotic assembly system composed of a 6-DOF robot, a camera mounted on the robot end effector, a simple off-the-shelf laser pointer rigidly linked to the camera, and a vacuum pump for grasping the object. The whole algorithm consists of three steps. First, the laser spot is driven onto a planar object: since the laser pointer is mounted on the robot end effector, the 3-DOF rotational motion of the end effector can drive the object image features close to a set of imaginary image features so that the laser spot is projected on the object. Next, the image features of the object and the laser spot are used to obtain the image Jacobian matrix, which is primarily used for controlling the translational motion of the end effector with respect to the object. Finally, a constant image Jacobian at the desired camera configuration is used in IBVS to adjust the fine alignment so that the feature errors can reach the global minimum in the image. The successful application of the proposed algorithm to an experimental robotic assembly system demonstrates the effectiveness of the proposed method.

The paper is organized as follows. In Section 2, the problem statement of visual servoing is introduced. A novel approach to switching control of IBVS with laser pointer is presented in Section 3. In Section 4, several experimental results are given to show the effectiveness of the proposed method. The concluding remarks are given in Section 5.


2. Problem statement

In this paper, we focus on an automated robotic assembly system that uses an Eye-in-Hand architecture to perform visual servoing. In this system, the assembly task is to move the robot end effector, together with a tool such as a gripper or a vacuum pump, to approach a part with unknown pose, and then to grasp it and assemble it onto a main body quickly and smoothly. Such an assembly task is a typical visual servoing control problem, and IBVS is an appropriate method for it since all control inputs are computed in image space without using pose information.

Let $\dot{r} = [v_C^C\ \omega_C^C]^T = [v_x\ v_y\ v_z\ \omega_x\ \omega_y\ \omega_z]^T$ be the velocity screw of the camera. Define $f_i = [x_i\ y_i]^T,\ i = 1, 2, \dots, n$ as the image features and $\dot{f}_i = [\dot{x}_i\ \dot{y}_i]^T$ as the corresponding image feature velocities. Denote the desired image features as $f_{di} = [x_{di}\ y_{di}]^T$ and $f_d = [f_{d1}\ f_{d2}\ \dots\ f_{dn}]^T$, which are obtained by a teaching-by-showing approach. In this paper, the teaching procedure is: (i) move the robot end effector to a position where the pump can perfectly suck up the object; (ii) move the robot in the end effector frame to a new position where the whole object is in the FOV, and record this displacement as the constant transformation; (iii) take an image as the target image for visual servoing. The four coplanar corners of the object are chosen as the target image features, and the laser spot $f_{dl} = [x_{dl}\ y_{dl}]^T$ is also selected as a target image feature in some cases.

Assume that the effective pixel sizes $(s_x, s_y)$ are constant, which simplifies the visual servoing computation without loss of generality. The transformation between $[x_i\ y_i]^T$ and the pixel indices $[u\ v]^T$ then depends only on the camera intrinsic parameters.
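For concreteness, this relation can be written in the standard pinhole form shown below; the principal point $(u_0, v_0)$ is our notation for the intrinsic parameters, not the chapter's:

$$x_i = (u - u_0)\, s_x, \qquad y_i = (v - v_0)\, s_y$$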

In order to design the feedback control for the robot based on the velocity of the feature points, we use the following relationship between the motion of the image features and the physical motion of the camera:

$$\dot{f} = J_{img}(f, Z)\,\dot{r} \qquad (1)$$

where

$$J_{img}(f, Z) = \begin{bmatrix} J_{img}(f_1, Z_1) \\ \vdots \\ J_{img}(f_n, Z_n) \end{bmatrix} \qquad (2)$$

is the image Jacobian matrix, $Z = [Z_1\ \dots\ Z_n]^T$ is the vector of feature point depths, and $f = [f_1\ \dots\ f_n]^T$ is the image feature vector containing the n features.

For each feature point $(x_i, y_i)$, the image Jacobian matrix is represented as follows:

$$J_{img}(f_i, Z_i) = \begin{bmatrix} -\dfrac{\lambda}{Z_i} & 0 & \dfrac{x_i}{Z_i} & \dfrac{x_i y_i}{\lambda} & -\dfrac{\lambda^2 + x_i^2}{\lambda} & y_i \\ 0 & -\dfrac{\lambda}{Z_i} & \dfrac{y_i}{Z_i} & \dfrac{\lambda^2 + y_i^2}{\lambda} & -\dfrac{x_i y_i}{\lambda} & -x_i \end{bmatrix} \qquad (3)$$

where $\lambda$ is the known focal length of the camera.
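As a concrete illustration, here is a minimal NumPy sketch of Equation (3) for a single feature point; the function name and argument conventions are ours, not the chapter's:

```python
import numpy as np

def interaction_matrix(x, y, Z, lam):
    """Image Jacobian of one feature point (x, y) at depth Z (Eq. 3).
    lam is the focal length; the rows map the camera velocity screw
    [vx vy vz wx wy wz] to the feature velocity [x_dot y_dot]."""
    return np.array([
        [-lam / Z, 0.0, x / Z, x * y / lam, -(lam**2 + x**2) / lam, y],
        [0.0, -lam / Z, y / Z, (lam**2 + y**2) / lam, -x * y / lam, -x],
    ])
```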

Equation (1) can be written as:

$$\dot{r} = J_{img}^{+}(f, Z)\,\dot{f} \qquad (4)$$

where $J_{img}^{+}(f, Z)$ is the pseudo-inverse of the image Jacobian. If the error function is defined as $e(f) = f - f_d$ and we impose $\dot{e}(f) = -K e(f)$, a simple proportional control law is given by

$$\dot{r} = -K J_{img}^{+}(f, Z)\, e(f) \qquad (5)$$

where $\dot{r}$ is the camera velocity sent to the robot controller and $K$ is the proportional gain, which tunes the exponential convergence rate of $f$ toward $f_d$.
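A hedged sketch of the proportional law (5), reusing interaction_matrix from the sketch above (again, the names are illustrative only):

```python
import numpy as np

def ibvs_velocity(features, depths, f_desired, lam, K=1.0):
    """Proportional IBVS law (Eq. 5): r_dot = -K * J^+ * (f - f_d).
    features, f_desired: (n, 2) arrays of image points; depths: length n."""
    J = np.vstack([interaction_matrix(x, y, Z, lam)
                   for (x, y), Z in zip(features, depths)])  # (2n, 6) Jacobian, Eq. 2
    e = (features - f_desired).reshape(-1)                   # stacked error e(f)
    return -K * np.linalg.pinv(J) @ e                        # [vx vy vz wx wy wz]
```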

It is assumed that the optical axis of the camera is coincident with the Z axis of the end effector. The motion of the camera can be related to the robot joint rates through the usual robot Jacobian and a fixed transformation between the motion of the camera and that of the end effector. When the image error function e(f) tends to zero, the kinematic error must also approach zero. However, an inappropriate choice of $J_{img}^{+}(f, Z)$ may bring the system close to, or even onto, a singularity of the Jacobian matrix, which may result in servoing failure (Malis et al, 1999).

The objective of IBVS in this paper is to control the end effector to approach an unknown object so that the image error function e(f) approaches zero while the trajectory in Cartesian space is kept as short as possible. Meanwhile, the system should be low-cost and the visual servoing algorithm should have a low computational load suitable for real-time robot control.

3. IBVS with laser system

In this section, a new approach to switching control of IBVS with a laser pointer is presented to accomplish the aforementioned visual servoing tasks. This approach is designed to overcome the drawbacks of IBVS by installing an off-the-shelf laser pointer on the end effector for estimating the depth of the image features, and by separating the visual servoing procedure into several control stages.

(a) Robotic Eye-in-Hand System

The designed robotic Eye-in-Hand system configuration is shown in Figure 1. It is composed of a 6-DOF robot, a camera mounted on the robot end effector, a laser pointer rigidly linked to the camera and a vacuum pump for grasping the object. In Figure 1, H denotes the transformation between two reference frames.

Figure 1.

Robotic Eye-in-Hand System Configuration

In traditional IBVS, since the control law is designed only in the image plane, unnecessary motions of the end effector are performed. In order to obtain the image Jacobian $J_{img}(f, Z)$, which is a function of the features and the depths of the object, depth estimation is needed for each feature point. A low-cost laser pointer is therefore installed on the end effector to measure the distance from the camera to the target for depth estimation. Through the laser triangulation method, the depth estimate can be obtained in a very short time. Moreover, the laser spot can be chosen as an image feature, which eases the image processing and keeps the computational load of visual servoing low.

(b) On-line Depth Estimation

In Equation (3), $J_{img}(f, Z)$ is a function of the features and their depths. Although one solution for IBVS is to use the target image features and their depths to form a constant image Jacobian, this is proved stable only in a neighborhood of the desired position. Another solution is to estimate the depth of every feature point on-line. In this paper, a simple laser-pointer-based triangulation method is applied to estimate the depth on-line.

Assume that the laser beam lies in the same plane as the camera optical axis. We use a camera-centered frame $\{c\}$ with $z_C$ parallel to the optical axis. In this configuration, no matter how the camera moves, the trajectory of the laser spot in the image is a straight line passing through the principal point. If we regard the laser pointer as a camera whose optical axis is the laser beam, this straight line is an epipolar line of the imaginary stereo configuration.

As shown in Figure 2, d denotes the horizontal distance between the laser beam and the optical axis of the camera lens, and $\alpha$ is the angle between the laser beam and the horizontal line. Both are fixed and known once the laser pointer is installed. Point P is the intersection of the laser beam with the object surface. A function giving the depth $Z_p$ in terms of the pixel indices $(u, v)$ (Trucco & Verri, 1998) is derived by trigonometry. The depth $Z_p$ calculated by Equation (6) can be used to approximate the depth of each feature point on the planar object surface, under the assumption that the object is small enough.

$$Z_p = \frac{d \sin \alpha}{\cos(\alpha - \beta)} \qquad (6)$$

where $\beta$ is the angle of the triangle at the camera, i.e. the angle between the optical axis and the line of sight to P, computed from the pixel indices $(u, v)$.

Figure 2.

Calculation of the Depth of a Point by Using Triangulation
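A minimal sketch of the triangulation (6), assuming the coplanar setup above so that the spot moves along a single image line through the principal point; the signed pixel offset parametrization and the function name are our assumptions:

```python
import math

def laser_depth(offset_px, s, lam, d, alpha):
    """Distance from the camera to the laser spot P (Eq. 6).
    offset_px: signed pixel offset of the spot along the epipolar line;
    s: effective pixel size; lam: focal length (same units as s);
    d, alpha: laser baseline and beam angle (8 mm and 72 deg in Section 4)."""
    beta = math.atan2(offset_px * s, lam)  # angle between optical axis and sight line
    return d * math.sin(alpha) / math.cos(alpha - beta)
```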

Remark: Note that (6) is valid only when the laser beam and the camera optical axis are coplanar. In the robotic assembly system, the laser pointer is rigidly linked to the camera in a frame attached to the end effector of the robot. Inside this frame, the laser pointer is installed in such a way that the laser beam and the camera optical axis are coplanar. A possible generalization of this calibration phase to the case of a non-coplanar laser and camera is under investigation.

(c) Switching Control of IBVS with Laser Pointer

The proposed algorithm is divided into three control stages: a) driving the laser spot onto the object, b) combining IBVS with the laser spot for translational motion, and c) adjusting the fine alignment. The block diagram of the switching control system of IBVS with laser pointer is presented in Figure 3, and a pseudocode sketch of the stage sequencing is given after the figure. The object is assumed to be stationary with respect to the robot reference frame.

Figure 3.

Block diagram of IBVS with Laser control system
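The stage sequencing can be summarized by the following illustrative sketch; every helper name (vision, robot, the u_* controllers, the eps thresholds) is a placeholder for the stage-specific laws derived below, not an API from the chapter:

```python
def switching_ibvs(robot, vision, eps):
    """Three-stage switching loop of Figure 3 (illustrative sketch)."""
    stage = "1A" if vision.all_features_visible() else "1B"
    while True:
        if stage == "1B":                      # project laser spot on object centroid
            if vision.all_features_visible():
                stage = "1A"; continue
            robot.send(u_s1B(vision))
        elif stage == "1A":                    # 3-DOF rotation toward imaginary features
            if vision.error_iof() <= eps["s1A"]:
                stage = "2"; continue
            robot.send(u_s1A(vision))
        elif stage == "2":                     # 3-DOF translation toward target features
            if vision.error_s2() <= eps["s2"]:
                stage = "3"; continue
            robot.send(u_s2(vision))
        else:                                  # stage 3: fine alignment
            if vision.error_s3() <= eps["s3"]:
                return                         # stop servoing, start grasping
            robot.send(u_s3(vision))
```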

Stage 1: Driving the Laser Spot on the Object

To project the laser spot onto the object, two situations need to be considered: either all features are in the FOV (Stage 1.A), or some features are missing from the FOV (Stage 1.B). Both are discussed in detail below.

Stage 1.A: All Features in the FOV

When all image features are in the FOV, the control task is to drive the center of gravity of the object, together with the image features of the object, toward the current laser spot and the created imaginary image features by using the 3-DOF rotation of the camera. Equation (1) can be decomposed into translational and rotational components as shown below:

$$\dot{f} = \begin{bmatrix} J_{imgt}(f, Z) & J_{imgr}(f) \end{bmatrix} \begin{bmatrix} v_C^C \\ \omega_C^C \end{bmatrix} \qquad (7)$$

where $J_{imgt}(f, Z)$ and $J_{imgr}(f)$ are stacked from $J_{imgt}(f_i, Z_i)$ and $J_{imgr}(f_i)$, given by

$$J_{imgt}(f_i, Z_i) = \begin{bmatrix} -\dfrac{\lambda}{Z_i} & 0 & \dfrac{x_i}{Z_i} \\ 0 & -\dfrac{\lambda}{Z_i} & \dfrac{y_i}{Z_i} \end{bmatrix} \qquad (8)$$

$$J_{imgr}(f_i) = \begin{bmatrix} \dfrac{x_i y_i}{\lambda} & -\dfrac{\lambda^2 + x_i^2}{\lambda} & y_i \\ \dfrac{\lambda^2 + y_i^2}{\lambda} & -\dfrac{x_i y_i}{\lambda} & -x_i \end{bmatrix} \qquad (9)$$

It is noted that $J_{imgt}$ depends on both the features and their depths, while $J_{imgr}$ is a function of the image features only. Since the laser pointer is mounted on the robot end effector, it can be steered by performing 3-DOF rotational motion. It is known that 2 DOF of rotational motion are enough to drive a laser pointer so that its dot is projected on the target. The reason for using 3 DOF instead of 2 here is that the rotation about the camera Z axis can be used to solve image singularity problems such as the 180-degree rotation around the optical axis, a well-known case causing visual servoing failure, as presented by F. Chaumette (Chaumette, 1998). To avoid this particular case, a set of imaginary target features is designed. Based on the target image features, which include the desired object features $f_d$ and the desired laser spot $f_{dl}$, a new imaginary image can be designed, as shown in Figure 4. Let the offset between the target position of the laser spot and the current laser spot be $d_l = f_{dl} - f_l$. The imaginary object features $f_{iof}$ are formed by shifting all target object features $f_{di}$ by the offset $d_l$ and appending the current laser spot $f_l$:

$$f_{iof} = \begin{bmatrix} f_{d1} - d_l & f_{d2} - d_l & \dots & f_{dn} - d_l & f_l \end{bmatrix}^T \qquad (10)$$

It is assumed that the height of the object is relatively small, so there is no large jump of the laser spot in the image plane when the spot moves from the workspace platform onto the surface of the object.

Figure 4.

Example of creating imaginary features, (a) in 2-D image space, (b) in 3-D Cartesian space
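A small sketch of the construction (10) under that assumption (array shapes and names are ours):

```python
import numpy as np

def imaginary_features(f_d, f_dl, f_l):
    """Imaginary target set of Eq. (10): shift each desired object feature
    f_di by d_l = f_dl - f_l and append the current laser spot f_l.
    f_d: (n, 2) desired object features; f_dl, f_l: length-2 laser spots."""
    d_l = f_dl - f_l
    return np.vstack([f_d - d_l, f_l])   # (n + 1, 2) array
```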

The center of gravity of the object image features serves as an extra image feature, defined as $f_{cg} = [(x_1 + \dots + x_n)/n,\ (y_1 + \dots + y_n)/n]^T$ in the image plane. The control goal is to minimize the error norm between the image features $f_{s1A} = [f_1\ f_2\ \dots\ f_n\ f_{cg}]^T$ and the imaginary features, as stated below:

$$\min \left\| f_{s1A} - f_{iof} \right\| \qquad (11)$$

From Equation (7), the relationship between the motion of the image features and the rotational DOF of the camera $\omega_C^C$ is represented as

$$\dot{f}_{s1A} = \begin{bmatrix} J_{imgt}(f_{s1A}, Z_{s1A}) & J_{imgr}(f_{s1A}) \end{bmatrix} \begin{bmatrix} v_C^C \\ \omega_C^C \end{bmatrix} \qquad (12)$$

where $Z_{s1A} = [Z_1\ Z_2\ \dots\ Z_n\ Z_l]^T$. The above equation can be written as:

$$\omega_C^C = J_{imgr}^{+}(f_{s1A}) \left[ \dot{f}_{s1A} - J_{imgt}(f_{s1A}, Z_{s1A})\, v_C^C \right] \qquad (13)$$

We set the translational DOF of the camera motion to zero ($v_C^C = 0$) and obtain:

$$\omega_C^C \approx J_{imgr}^{+}(f_{s1A})\, \dot{f}_{s1A} \qquad (14)$$

Let the feature error be defined as $e_{iof} = f_{s1A} - f_{iof}$. By imposing $\dot{e}_{iof} = -K_l e_{iof}$, one can design the proportional control law given by

$$u_{s1A} = \omega_C^C = -K_l J_{imgr}^{+}(f_{s1A})\, e_{iof} \qquad (15)$$

where $\omega_C^C$ is the camera angular velocity sent to the robot controller and $K_l$ is the proportional gain.

Since we deliberately turn off the translational motion of the camera, Equation (14), which relates the image velocity to the 3-DOF camera rotation, holds only approximately, and the proportional controller (15) cannot make the feature error $e_{iof}$ approach zero exponentially. However, the current control task is only to drive the laser spot onto the object and rotate the camera so that the image features of the object approach the imaginary features; the finer tuning of visual servoing is carried out in the next stages. Hence we adopt a switching rule to switch the controller to the second stage.

The switching rule is described as follows: when the error norm falls below a predetermined threshold value, the controller switches from the current stage to the second stage. The switching condition is given by

$$\left\| f_{iof} - f_{s1A} \right\| \le f_{s1A\_0} \qquad (16)$$

where $f_{s1A\_0}$ is the predetermined feature error threshold value.

Therefore, the controller (15) is expressed as follows:

$$u_{s1A} = \omega_C^C = \begin{cases} -K_l J_{imgr}^{+}(f_{s1A})\, e_{iof}, & \left\| f_{iof} - f_{s1A} \right\| > f_{s1A\_0} \\ u_{s2}, & \text{otherwise} \end{cases} \qquad (17)$$
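A hedged NumPy sketch of the Stage 1.A law (15)-(17); returning None to signal the switch to Stage 2 is our convention, and the 10-pixel default matches the threshold used in Section 4:

```python
import numpy as np

def rotational_jacobian(x, y, lam):
    """Rotational block of the image Jacobian (Eq. 9)."""
    return np.array([
        [x * y / lam, -(lam**2 + x**2) / lam, y],
        [(lam**2 + y**2) / lam, -x * y / lam, -x],
    ])

def stage1A_control(f_s1A, f_iof, lam, Kl=1.0, eps=10.0):
    """3-DOF rotational law of Eqs. (15)-(17): drive the augmented
    feature set f_s1A (n features plus center of gravity) toward the
    imaginary features f_iof; both are (n + 1, 2) arrays."""
    e = (f_s1A - f_iof).reshape(-1)
    if np.linalg.norm(e) <= eps:
        return None                                  # switch to Stage 2
    J_r = np.vstack([rotational_jacobian(x, y, lam) for x, y in f_s1A])
    return -Kl * np.linalg.pinv(J_r) @ e             # [wx, wy, wz]
```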

Notice that this control law not only drives the laser spot onto the object but also solves the image singularity problem. As mentioned in (Chaumette, 1998), a pure rotation of 180 deg around the optical axis leads to an image singularity and causes a pure backward translational camera motion along the optical axis. In the proposed algorithm, the 3-DOF rotation of the camera is executed mainly in the first control stage and the translational movement of the camera mainly in the second; hence, the backward translational camera motion is avoided.

Stage 1.B: Features Partially Seen in the FOV

When only part of the object is in the FOV, some features are not available. In order to obtain all the features, we propose a strategy that controls the 2-DOF rotational motion of the camera to project the laser spot onto the centroid of the object in the image plane until the whole object appears in the FOV. The motion of the laser image feature $f_l = [x_l\ y_l]^T$ is related to the camera motion as:

$$\dot{f}_l = J_{imgXYr}(f_l)\, \dot{r}_{xy} + J_{img\_cz}(f_l, Z_l)\, \dot{r}_{cz} \qquad (18)$$

where $\dot{r}_{xy} = [\omega_x\ \omega_y]^T$ represents the 2-DOF rotational motion of the camera, $\dot{r}_{cz} = [v_x\ v_y\ v_z\ \omega_z]^T$ contains the remaining components of the camera velocity screw, and the image Jacobian matrices $J_{imgXYr}(f_l)$ and $J_{img\_cz}(f_l, Z_l)$ are defined respectively as:

$$J_{imgXYr}(f_l) = \begin{bmatrix} \dfrac{x_l y_l}{\lambda} & -\dfrac{\lambda^2 + x_l^2}{\lambda} \\ \dfrac{\lambda^2 + y_l^2}{\lambda} & -\dfrac{x_l y_l}{\lambda} \end{bmatrix} \qquad J_{img\_cz}(f_l, Z_l) = \begin{bmatrix} -\dfrac{\lambda}{Z_l} & 0 & \dfrac{x_l}{Z_l} & y_l \\ 0 & -\dfrac{\lambda}{Z_l} & \dfrac{y_l}{Z_l} & -x_l \end{bmatrix} \qquad (19)$$

Equation (18) can be written as:

$$\dot{r}_{xy} = J_{imgXYr}^{+}(f_l) \left[ \dot{f}_l - J_{img\_cz}(f_l, Z_l)\, \dot{r}_{cz} \right] \qquad (20)$$

where $J_{imgXYr}^{+}(f_l)$ is the pseudo-inverse of the image Jacobian matrix $J_{imgXYr}(f_l)$.

As mentioned before, the 2-DOF rotational motion $\dot{r}_{xy} = [\omega_x\ \omega_y]^T$ allows the laser pointer to project its dot image close to the desired target. Thus the other elements of the camera velocity screw, $\dot{r}_{cz} = [v_x\ v_y\ v_z\ \omega_z]^T$, are set to zero, and Equation (20) becomes:

$$\dot{r}_{xy} \approx J_{imgXYr}^{+}(f_l)\, \dot{f}_l \qquad (21)$$

The centroid of the partial object in the image is used as the desired position of the laser image feature. To obtain this centroid, one calculates the first-order moments of the partial object image. Let R represent the region of the partial object in a binary image $I(k, j)$, which can be obtained by fixed-level thresholding. For a digital image, the moments of the region R are defined as:

$$m_{kj} = \sum_{(x, y) \in R} x^k y^j, \qquad k \ge 0,\ j \ge 0 \qquad (22)$$

where $(x, y)$ are the row and column of a pixel in the region R, respectively. According to the definition of the moments, the area of the region R and the centroid of R are:

$$A_R = m_{00} \qquad (23)$$

$$f_c = \begin{bmatrix} f_{xc} \\ f_{yc} \end{bmatrix} = \begin{bmatrix} m_{10}/m_{00} \\ m_{01}/m_{00} \end{bmatrix} \qquad (24)$$

where $f_c$ is the centroid of the partial object in the image.
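A minimal sketch of Equations (22)-(24) on a binary image, with rows as x and columns as y to match the text (names are ours):

```python
import numpy as np

def region_centroid(binary_image):
    """Area and centroid of the thresholded region R (Eqs. 22-24).
    binary_image: 2-D array, nonzero inside the partial object."""
    x, y = np.nonzero(binary_image)   # pixel rows and columns in R
    m00 = x.size                      # zeroth moment = area A_R (Eq. 23)
    m10, m01 = x.sum(), y.sum()       # first-order moments (Eq. 22)
    return m00, np.array([m10 / m00, m01 / m00])   # A_R and f_c (Eq. 24)
```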

Define the image feature error between the laser image feature and the centroid of the partial object image as $e_{cl} = f_l - f_c$. The proportional control law for the 2-DOF rotational motion is designed by imposing $\dot{e}_{cl} = -K_{l2} e_{cl}$:

$$u_{s1B} = \dot{r}_{xy} = \begin{cases} -K_{l2} J_{imgXYr}^{+}(f_l)\, e_{cl}, & \left\| f_l - f_c \right\| > f_{s1B\_0} \\ u_{s1A}, & \text{otherwise} \end{cases} \qquad (25)$$

where $K_{l2}$ is the proportional gain and $f_{s1B\_0}$ is the predetermined feature error threshold value.

When the image feature error $e_{cl}$ enters the range $\left\| f_l - f_c \right\| \le f_{s1B\_0}$, the laser spot is projected close to the planar object. In this paper, the end effector is assumed to be posed above the planar object and the FOV is relatively large compared with the image of the object. Hence, all the feature points enter the FOV when the laser spot is close to the desired laser image position ($\left\| f_l - f_c \right\| \le f_{s1B\_0}$). Once all the feature points are obtained, the control law is switched to $u_{s1A}$ in Stage 1.A.

Stage 2: Translational Moving of Camera

The key problem in the switching control of Stage 2 is how to obtain the image Jacobian matrix relating the motion of the image features, plus the laser spot, to the translational motion of the camera. In the derivation of the traditional image Jacobian matrix, the target is supposed to be stationary. Since the object is stationary, the laser spot projected on it can also be treated as a stationary point, which makes it compatible with the image Jacobian matrix of traditional IBVS. Based on this scheme, the algorithm is presented in detail as follows.

Let $f_{s2} = [f_1\ \dots\ f_n\ f_l]^T$ represent the n image features plus the laser image feature $f_l = [x_l\ y_l]^T$. It is assumed that the object is small compared with the workspace; hence the depth of the laser spot, which lies close to the centroid of the object, can be treated as a good approximation of the depth of all features. The modified relationship between the motion of the image features and the motion of the camera is given by

$$\dot{f}_{s2} = J_{img}(f_{s2}, Z_l)\, \dot{r} \qquad (26)$$

where $J_{img}(f_{s2}, Z_l)$ can also be decomposed into translational and rotational components as shown below:

$$J_{img}(f_{s2}, Z_l) = \begin{bmatrix} J_{imgt}(f_{s2}, Z_l) & J_{imgr}(f_{s2}) \end{bmatrix} \qquad (27)$$

where the two components are formed as:

$$J_{imgt}(f_{s2}, Z_l) = \begin{bmatrix} J_{imgt}(f_1, Z_l) \\ \vdots \\ J_{imgt}(f_n, Z_l) \\ J_{imgt}(f_l, Z_l) \end{bmatrix} \qquad J_{imgr}(f_{s2}) = \begin{bmatrix} J_{imgr}(f_1) \\ \vdots \\ J_{imgr}(f_n) \\ J_{imgr}(f_l) \end{bmatrix} \qquad (28)$$

Equation (26) can be written as:

$$\dot{f}_{s2} = \begin{bmatrix} J_{imgt}(f_{s2}, Z_l) & J_{imgr}(f_{s2}) \end{bmatrix} \begin{bmatrix} v_C^C \\ \omega_C^C \end{bmatrix} = J_{imgt}(f_{s2}, Z_l)\, v_C^C + J_{imgr}(f_{s2})\, \omega_C^C \qquad (29)$$

And the translational motion of the camera is derived as:

$$v_C^C = J_{imgt}^{+}(f_{s2}, Z_l) \left[ \dot{f}_{s2} - J_{imgr}(f_{s2})\, \omega_C^C \right] \qquad (30)$$

Set the rotational motion of the camera to zero. The above equation is rewritten as:

$$v_C^C \approx J_{imgt}^{+}(f_{s2}, Z_l)\, \dot{f}_{s2} \qquad (31)$$

The control objective is to move the image features of the object, plus the laser spot image, close to the target image features by using the translational motion of the camera. The target image features include the four coplanar corner points of the object image plus the desired laser spot $f_{dl}$, defined as $f_{s2\_D} = [f_{d1}\ f_{d2}\ \dots\ f_{dn}\ f_{dl}]^T$. The error between them is defined as:

$$e_{s2} = f_{s2} - f_{s2\_D}$$

The translational motion of the camera is designed by imposing $\dot{e}_{s2} = -K_{s2} e_{s2}$:

$$u_{s2} = v_C^C = \begin{cases} -K_{s2} J_{imgt}^{+}(f_{s2}, Z_l)\, e_{s2}, & \left\| f_{s2} - f_{s2\_D} \right\| > f_{s2\_0} \\ u_{s3}, & \text{otherwise} \end{cases} \qquad (32)$$

where $K_{s2}$ is the proportional gain and $f_{s2\_0}$ is the predetermined feature error threshold value.
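A hedged sketch of the Stage 2 law (31)-(33), analogous to the Stage 1.A sketch; the single laser depth Z_l is used for every row, as in Equation (28):

```python
import numpy as np

def translational_jacobian(x, y, Z, lam):
    """Translational block of the image Jacobian (Eq. 8)."""
    return np.array([
        [-lam / Z, 0.0, x / Z],
        [0.0, -lam / Z, y / Z],
    ])

def stage2_control(f_s2, f_s2_D, Z_l, lam, Ks2=1.0, eps=10.0):
    """3-DOF translational law of Eq. (32): f_s2 and f_s2_D are
    (n + 1, 2) arrays (object features plus laser spot)."""
    e = (f_s2 - f_s2_D).reshape(-1)
    if np.linalg.norm(e) <= eps:
        return None                                  # switch to Stage 3
    J_t = np.vstack([translational_jacobian(x, y, Z_l, lam) for x, y in f_s2])
    return -Ks2 * np.linalg.pinv(J_t) @ e            # [vx, vy, vz]
```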

The switching rule is described as follows: if the image feature error norm between the current image features and the desired image features falls below the threshold $f_{s2\_0}$, the IBVS with laser pointer switches from the current stage to the third stage $u_{s3}$:

$$\left\| f_{s2} - f_{s2\_D} \right\| \le f_{s2\_0} \qquad (33)$$

Since the translational motion of the camera is derived only approximately in (31), the proportional controller (32) cannot drive the image feature error $e_{s2}$ to zero exponentially. However, the translational motion brings the image of the object into the vicinity of the target image features, and the switching rule then moves the system from the current stage to the next one for the fine tuning of visual servoing. The threshold $f_{s2\_0}$ is directly related to the depth and affects the stability of the controller. The selection of this threshold is thus crucial, and it is normally set to a relatively small number of pixels to maintain the stability of the system.

Stage 3: Adjusting the Fine Alignments

After applying the switching controllers of Stages 1 and 2, the end effector has been brought into a neighborhood of the target image features. It has been shown in (Chaumette, 1998) that the constant image Jacobian at the desired camera configuration can be used to reach the global minimum in the image. Hence, a constant image Jacobian matrix is used in IBVS to adjust the fine alignment of the end effector so that the pump can perfectly suck up the object. In this stage, the laser spot is no longer considered an image feature, and traditional IBVS with the constant image Jacobian of the target image features is applied.

Since the image features are close to their desired positions in the image plane, as implied by (33), the depths of the feature points are approximately equal to the desired ones (the object being essentially planar). The target features and their corresponding depths are applied to the traditional IBVS with the constant image Jacobian matrix:

$$J_{img}(f_{id}, Z_{id}) = \begin{bmatrix} -\dfrac{\lambda}{Z_{id}} & 0 & \dfrac{x_{id}}{Z_{id}} & \dfrac{x_{id} y_{id}}{\lambda} & -\dfrac{\lambda^2 + x_{id}^2}{\lambda} & y_{id} \\ 0 & -\dfrac{\lambda}{Z_{id}} & \dfrac{y_{id}}{Z_{id}} & \dfrac{\lambda^2 + y_{id}^2}{\lambda} & -\dfrac{x_{id} y_{id}}{\lambda} & -x_{id} \end{bmatrix} \qquad (34)$$

where $f_{id}$ and $Z_{id}$ are the target features and their corresponding depths, respectively.

The control goal of this stage is to control the pose of the end effector so that the image feature error between the current image features $f = [f_1\ \dots\ f_n]^T$ and the target image features $f_d = [f_{d1}\ f_{d2}\ \dots\ f_{dn}]^T$ reaches the global minimum. The stopping condition is described as follows: if the feature error norm falls below a predetermined threshold, the whole IBVS with laser pointer stops. The condition is:

$$\left\| f_d - f \right\| \le f_{s3\_0} \qquad (35)$$

where $f_{s3\_0}$ is a predetermined threshold value.

The proportional controller is designed as:

$$u_{s3} = \begin{bmatrix} v_C^C \\ \omega_C^C \end{bmatrix} = \begin{cases} -K_{s3} J_{img}^{+}(f_d, Z_d)\,(f - f_d), & \left\| f_d - f \right\| > f_{s3\_0} \\ \text{stop servoing, start grasping}, & \text{otherwise} \end{cases} \qquad (36)$$

where $K_{s3}$ is the proportional gain and $Z_d = [Z_{1d}\ \dots\ Z_{nd}]^T$ is the vector of depths of the target feature points.

The threshold $f_{s3\_0}$ directly affects the accuracy of the end effector pose with respect to the object: the larger the threshold, the less accurate the achieved pose, but the shorter the positioning time.
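A sketch of the Stage 3 law (34)-(36), reusing interaction_matrix from the sketch in Section 2; the Jacobian is constant because it is evaluated only at the desired features and depths:

```python
import numpy as np

def stage3_control(f, f_d, Z_d, lam, Ks3=1.0, eps=3.0):
    """Full 6-DOF fine-alignment law of Eq. (36) with the constant
    image Jacobian (Eq. 34); eps of 3 pixels matches Section 4."""
    e = (f - f_d).reshape(-1)
    if np.linalg.norm(e) <= eps:
        return None                                  # stop servoing, start grasping
    J = np.vstack([interaction_matrix(x, y, Z, lam)
                   for (x, y), Z in zip(f_d, Z_d)])  # evaluated at the target only
    return -Ks3 * np.linalg.pinv(J) @ e              # [vx vy vz wx wy wz]
```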

Note that the proposed IBVS algorithm is derived from traditional IBVS. It therefore inherits the advantages of IBVS: it needs no object model and is robust to camera calibration and hand-eye calibration errors. In addition, by using a laser pointer and the DOF-separation method, the proposed switching algorithm decouples the rotational and translational motion control of the robot end effector to overcome the inherent drawbacks of traditional IBVS, such as image singularities and image local minima.

4. Experimental results

The proposed IBVS with laser pointer has been tested on a robotic assembly system consisting of an industrial Motoman UPJ robot with a JRC controller, a laser pointer, and a PC-based vision system comprising a Matrox frame grabber and a Sony XC55 camera mounted on the robot. The robotic assembly system setup for testing IBVS with laser pointer is shown in Figure 5.

To verify the effectiveness of the proposed method, the plastic object shown in Figure 5 is chosen to be assembled into a metallic part. The four coplanar corners of its surface are selected as the target features. One advantage of the JRC controller of the UPJ robot is that it accepts position and orientation values and calculates the joint angles by itself, which eliminates robot kinematic modeling errors. However, the drawback of the JRC controller is that it cannot be used for real-time control: when the designed controller generates a new position or orientation value and sends it to the JRC controller, the JRC does not respond until the previous position has been reached in each iteration. With this hardware limitation, we have to divide the calculated value into a series of small increments by a constant factor in each step, and increase the sampling time as well. In the experiment, we set the constant factor to 50 and the sampling time to 2 seconds, as sketched below.

Figure 5.

Robotic assembly system setup and components to be assembled
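The increment splitting can be sketched as follows; jrc_send is a hypothetical stand-in for the actual JRC command interface, which the chapter does not detail:

```python
def send_incremental(jrc_send, delta_pose, factor=50):
    """Work around the JRC controller's lack of real-time control by
    splitting one computed pose increment into `factor` small steps
    (factor = 50 and a 2 s sampling time in the experiments)."""
    step = [v / factor for v in delta_pose]
    for _ in range(factor):
        jrc_send(step)   # blocks until the previous position is reached
```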

The desired pose $H_{EO}$ for the teaching procedure is set to [366 61 23 0 0 13.7] (mm, deg). The predetermined threshold $f_{s1A\_0}$ in (16) is set to 10 pixels, and $f_{s1B\_0}$ in (25) is also set to 10 pixels. $f_{s2\_0}$ in (32) is tuned to 10 pixels, which corresponds to approximately 28 mm in the experimental setup, and $f_{s3\_0}$ in (35) is set to 3 pixels. All the proportional gains of the control laws are set to 1. When the feature error norm falls below 3 pixels, the end effector has been brought to the desired pose with respect to the plastic object. It should be noted that no matter where the object is located in the workspace, the desired pose of the end effector with respect to the object is exactly the same: for any object position there exists a fixed transformation between the camera and end effector frames at this desired pose. Before visual servoing starts, we teach the robot this fixed pose once. Commanding the robot end effector to move by the predefined transformation vector in the end effector frame then lets the vacuum pump suck up the object perfectly. The assembled components shown in Figure 5 demonstrate the accuracy of the proposed method.

(a) With Good Calibration Value

The proposed algorithm is first tested with good calibration values (Table 1). The parameters d and $\alpha$ of the fixed laser-camera configuration are adjusted to 8 mm and 72 deg, respectively. Sequential pictures of a successful assembly process are illustrated in Figure 6, and the image trajectory obtained from the experiment is illustrated in Figure 7 (a). The rectangle with white edges represents the initial position and the black object shows the desired position. The object is placed manually at a randomized initial position.

Parameters | Good calibration values | Bad calibration values
Principal point (pixel) | [326 228] | [384 288]
Focal length (mm) | 6.02 | 7.2
Effective pixel size (mm) | 0.0074 × 0.0074 | 0.0074 × 0.0074
H_CL [X Y Z φ θ ψ] (mm, deg), measured by hand | [8 0 0 0 -18 0] | [6 0 0 0 -14 0]
H_EC [X Y Z φ θ ψ] (mm, deg), calculated by calibration | [-7 5 189 -2 0 -179] | [-20 20 210 5 5 180]

Table 1.

Good and bad calibration values of system parameters

(b) With Bad Calibration Value

To test the robustness of the IBVS with laser pointer, a calibration error of 20% deviation is added to the camera intrinsic parameters, as shown in Table 1. The good and bad calibration values of the transformations among the camera frame, the laser pointer frame and the robot end effector frame are also given in Table 1. The object position is the same as in the experiment with good calibration values, and the resulting image trajectory is shown in Figure 7 (b).

Although the image trajectory shown in Figure 7 (b) is distorted compared with the trajectory presented in Figure 7 (a), the image features still converge to the desired positions and the assembly task is accomplished successfully as well. Hence, the proposed algorithm is robust to camera calibration and hand-eye calibration errors on the order of 20% deviation from the nominal values.

Figure 6.

Assembly sequence: (a) initial position; (b) end of stage 1; (c) end of stage 2; (d) end of stage 3 (ready to pick up the object); (e) sucking up the object; (f) assembly

Figure 7.

Image trajectory (a) with good calibration value (b) with bad calibration value

In order to test the convergence of the depth estimation, experiments on the depth estimation are carried out under both good and bad calibration values. The experimental results are presented in Figure 8. Both results show that the estimated depths converge to the real distance, so the convergence of the depth estimation is demonstrated experimentally.

Figure 8.

Experimental results of depth estimation

5. Conclusion

In this paper, a new approach to image based visual servoing with a laser pointer has been developed. The laser pointer has been adopted, and the triangulation method has been used, to estimate the depth between the camera and the object. The switching control of IBVS with laser pointer is decomposed into three stages to accomplish visual servoing tasks under various circumstances. The algorithm has been successfully applied to an experimental robotic assembly system. The experimental results verify the effectiveness of the proposed method and also validate the feasibility of applying it to industrial manufacturing systems. Future work includes testing the proposed method on a robot manipulator supporting real-time control and conducting an analytical convergence analysis of the switching control algorithm.

© 2010 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike-3.0 License, which permits use, distribution and reproduction for non-commercial purposes, provided the original is properly cited and derivative works building on this content are distributed under the same license.
