
Switching Control of Image Based Visual Servoing in an Eye-in-Hand System Using Laser Pointer

Written By

Wen-Fang Xie, Zheng Li, Claude Perron and Xiao-Wei Tu

Published: 01 January 2010

DOI: 10.5772/6984

From the Edited Volume

Motion Control

Edited by Federico Casolo


1. Introduction

To reduce the labor cost and increase the throughput in the manufacturing industry, there is an increasing demand for automated robotic manufacturing systems such as robotic assembly, bin picking, drilling and palletizing systems, which require accurate and fast robot positioning. A 3D machine vision system is normally used in a robotic manufacturing system to compensate for robotic positioning errors due to an unforeseen work environment and randomly placed objects. The task of robot positioning using a vision system is referred to as visual servoing, which aims at controlling the pose of the robot’s end effector relative to a target object or a set of target features (Hutchinson et al, 1996), (Corke, 1996), (Li et al, 2006), (Aouf et al, 2004). According to the features used as feedback in minimizing the positioning error, visual servoing is classified into three categories: Position Based Visual Servoing (PBVS) (Hutchinson et al, 1996), (DeMenthon & Davis, 1995), (Wilson et al, 1996), Image Based Visual Servoing (IBVS) (Weiss et al, 1987), (Espiau, 1993), (Chaumette, 1998), (Wang & Cho, 2008) and Hybrid Visual Servoing (Malis et al, 1999).

Since IBVS was introduced in 1980, it has attracted the attention of many researchers and has developed tremendously in recent years. The method is based on the principle that when the image feature error in 2-D image space approaches zero, the kinematic error in Cartesian space approaches zero too (Hutchinson et al, 1996). In IBVS, the error for the controller is defined directly with respect to the image feature parameters (Weiss et al, 1987). Compared with PBVS, the advantages of IBVS are obvious. First, it is free of the object model, and robust to camera modeling and hand-eye calibration errors (Espiau, 1993). Second, the image feature point trajectories are controlled to move approximately along straight lines in the image plane; hence it is able to prevent the image features from leaving the FOV. However, the drawbacks of IBVS lie in the following aspects. Since the control law is designed purely in the image plane, the trajectory of the end effector in Cartesian space is not a straight line, and can even be erratic in some cases. In other words, in order to reduce the image feature error to zero as quickly as possible, unnecessary motions of the end effector are performed. Moreover, the system is stable only in a region around the desired position, and there may exist image singularities and image local minima (Chaumette, 1998) leading to IBVS failure. The choice of the visual features is a key point in solving the problem of image singularities. Many studies have sought visual features that are decoupled with respect to the 6 DOF of the robot; such studies also ensure that the trajectory in Cartesian space is close to a straight line (Tahri & Chaumette, 2005), (Pages et al, 2006), (Janabi-Sharifi & Fiocelli, 2004), (Krupa et al, 2003). In (Tahri & Chaumette, 2005), six image moment features were selected to design a decoupled control scheme. In (Pages et al, 2006), Pages et al. derived the image Jacobian matrix related to a laser spot used as an image feature. Global convergence of the control law was shown with a constant interaction matrix. However, the method needed information about the planar object and was only suitable for situations where the camera was located near the desired position. Another approach using a laser pointer in visual servoing was presented in (Krupa et al, 2003). Krupa et al. developed a vision system with a stationary camera, which retrieved and positioned surgical instruments for operation. A laser pointer was used to project a laser spot on the organ surface to control the depth. However, the servoing was only carried out in 3-DOF and the camera was motionless. Therefore, the system could not provide much flexibility for visual servoing in a large-scale environment.

Koichi Hashimoto et al. (Hashimoto & Noritsugu, 2000) introduced a method to solve the image local minima problem. The main idea was to define a potential function and to minimize it while controlling the robot. If the potential had local minima, the algorithm generated an artificial potential and then controlled the camera based on the artificial one. In (Kase et al, 1993), stereo-based visual servoing was proposed to solve the depth estimation problem and calculate an exact image Jacobian matrix; however, this kind of algorithm increased the computational cost. R. Mahony et al. (Mahony et al, 2002) introduced a method of choosing other types of image features instead of points for IBVS, focusing on the depth-axis control. P.Y. Oh et al. (Oh & Allen, 2001) presented a partitioned-DOF method for IBVS which used a 3-DOF robot with a 2-DOF pan-tilt unit, and gave experimental results of tracking people. In (Corke & Hutchinson, 2001), another partitioned approach to visual servoing control was introduced, which decoupled the z-axis rotational and translational components of the control from the remaining DOF.

To overcome the aforementioned shortcomings of IBVS, some new approaches that integrate PBVS and IBVS methods have been developed (Gans et al, 2003), (Malis et al, 1999). The main idea is to use a hybrid of Cartesian and image space sensory feedback signals to control both the Cartesian and image trajectories simultaneously. One example of such a hybrid approach is 2.5D visual servoing (Malis et al, 1999), which is based on the estimation of the partial camera displacement. Recently, a hybrid motion control and planning strategy for image constraint avoidance was presented in (Deng et al, 2005), (Deng et al, 2003). The motion control part included local switching between IBVS and PBVS to avoid image singularities and image local minima. In addition, the planning strategy was composed of an artificial hybrid trajectory planner.

Inspired by the hybrid motion control and planning strategy, we propose a new switching control approach to IBVS to overcome its shortcomings. First, a laser pointer is adopted to realize on-line depth estimation for obtaining the image Jacobian matrix. Second, we add the laser spot image feature to the chosen image features of the object. Based on this new image feature set, we can detect the object in the workspace even when the features of the object are only partially in the FOV; hence the available workspace is virtually enlarged to some extent. Furthermore, a set of imaginary target features is introduced so that a decoupled control scheme for IBVS can be designed. Third, we separate the 3-DOF rotational motion from the translational motion to solve image singularity problems, such as the 180 degree rotation around the optical axis (Chaumette, 1998), and image local minima in IBVS. This decoupled control strategy allows the visual servoing system to work over a large region around the desired position.

This switching control approach to IBVS with laser pointer is applied to a robotic assembly system. The system is composed of a 6-DOF robot, a camera mounted on the robot end effector, a simple off-the-shelf laser pointer rigidly linked to the camera, and a vacuum pump for grasping the object. The whole algorithm consists of three steps. First, the laser spot is driven onto a planar object. Since the laser pointer is mounted on the robot end effector, the 3-DOF rotational motion of the end effector can drive the object image features close to a set of imaginary image features so that the laser spot is projected on the object. Next, the image features of the object and the laser spot are used to obtain the image Jacobian matrix, which is primarily used for controlling the translational motion of the end effector with respect to the object. Finally, a constant image Jacobian at the desired camera configuration is used in IBVS to adjust the fine alignment so that the feature errors can reach the global minimum in the image space. The successful application of the proposed algorithm to an experimental robotic assembly system demonstrates the effectiveness of the proposed method.

The paper is organized as follows. In Section 2, the problem statement of visual servoing is introduced. A novel approach to switching control of IBVS with laser pointer is presented in Section 3. In Section 4, several experimental results are given to show the effectiveness of the proposed method. The concluding remarks are given in Section 5.


2. Problem statement

In this paper, we focus on an automated robotic assembly system which uses an Eye-in-Hand architecture to perform visual servoing. In this system, the assembly task is to move the robot end effector, together with a tool such as a gripper or a vacuum pump, to approach a part with unknown pose, and then to grasp it and assemble it onto a main body quickly and smoothly. Such an assembly task is a typical visual servoing control problem. IBVS is an appropriate method to achieve this task since the control input is computed entirely in the image space without using pose information.

Let $\dot{r} = [{}^{C}v_C \;\; {}^{C}\omega_C]^T = [v_x \; v_y \; v_z \; \omega_x \; \omega_y \; \omega_z]^T$ be the velocity screw of the camera. Define $f_i = [x_i \; y_i]^T$, $i = 1, 2, \ldots, n$, as the image features and $\dot{f}_i = [\dot{x}_i \; \dot{y}_i]^T$ as the corresponding image feature velocities. Denote the desired image features as $f_{di} = [x_{di} \; y_{di}]^T$ and $f_d = [f_{d1} \; f_{d2} \; \cdots \; f_{dn}]^T$, which are obtained by a teaching-by-showing approach. In this paper, the teaching procedure is: (i) move the robot end effector to a position where the pump can perfectly suck up the object; (ii) move the robot in the end effector frame to a new position where the whole object is in the FOV, and record this displacement as the constant transformation; (iii) take an image as the target image for visual servoing. The four coplanar corners of the object are chosen as the target image features, and the laser spot $f_{dl} = [x_{dl} \; y_{dl}]^T$ is also selected as a target image feature in some cases.

Assume that the effective pixel sizes $(s_x, s_y)$ are constant, to simplify the visual servoing computation without loss of generality. The transformation between $[x_i \; y_i]^T$ and the pixel indices $[u \; v]^T$ depends only on the camera intrinsic parameters.

In order to design the feedback control for robot based on the velocity of the feature points, we have the following relationship between the motion of image features and the physical motion of the camera:

$$\dot{f} = J_{img}(f, Z)\,\dot{r} \tag{1}$$

where

$$J_{img}(f, Z) = \begin{bmatrix} J_{img}(f_1, Z_1) \\ \vdots \\ J_{img}(f_n, Z_n) \end{bmatrix} \tag{2}$$

is the image Jacobian matrix, $Z = [Z_1 \; \cdots \; Z_n]^T$ is the vector of feature depths, and $f = [f_1 \; \cdots \; f_n]^T$ is the image feature vector containing the $n$ features.

For each feature point ( x i , y i ) , the image Jacobian matrix is represented as follows:

$$J_{img}(f_i, Z_i) = \begin{bmatrix} \dfrac{\lambda}{Z_i} & 0 & -\dfrac{x_i}{Z_i} & -\dfrac{x_i y_i}{\lambda} & \dfrac{\lambda^2 + x_i^2}{\lambda} & -y_i \\[6pt] 0 & \dfrac{\lambda}{Z_i} & -\dfrac{y_i}{Z_i} & -\dfrac{\lambda^2 + y_i^2}{\lambda} & \dfrac{x_i y_i}{\lambda} & x_i \end{bmatrix} \tag{3}$$

where λ is the known focal length of the camera.
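As a concrete illustration of Equations (2) and (3), the per-point interaction matrix and its stacked form can be assembled numerically. The following is a minimal sketch in Python/NumPy (function names are illustrative, and the sign convention follows the classical IBVS tutorial form; some references flip signs depending on frame definitions):

```python
import numpy as np

def image_jacobian(x, y, Z, lam):
    """2x6 image Jacobian (interaction matrix) of a point feature, Eq. (3).

    x, y : image-plane coordinates of the feature (metric units)
    Z    : depth of the 3-D point along the optical axis
    lam  : focal length of the camera
    """
    return np.array([
        [lam / Z, 0.0,     -x / Z, -x * y / lam,           (lam**2 + x**2) / lam, -y],
        [0.0,     lam / Z, -y / Z, -(lam**2 + y**2) / lam,  x * y / lam,            x],
    ])

def stacked_jacobian(features, depths, lam):
    """Stack the per-feature Jacobians as in Eq. (2): shape (2n, 6)."""
    return np.vstack([image_jacobian(x, y, Z, lam)
                      for (x, y), Z in zip(features, depths)])
```

For the four corner features used in this paper the stacked matrix is 8x6, which is what the pseudo-inverse in the control law operates on.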

Equation (1) can be written as:

$$\dot{r} = J_{img}^{+}(f, Z)\,\dot{f} \tag{4}$$

where $J_{img}^{+}(f, Z)$ is the pseudo-inverse of the image Jacobian. If the error function is defined as $e(f) = f - f_d$ and we impose $\dot{e}(f) = -K e(f)$, a simple proportional control law is given by

$$\dot{r} = -K\,J_{img}^{+}(f, Z)\,e(f) \tag{5}$$

where $\dot{r}$ is the camera velocity sent to the robot controller and $K$ is the proportional gain, which tunes the exponential convergence rate of $f$ toward $f_d$.

It is assumed that the optical axis of the camera coincides with the Z axis of the end effector. The motion of the camera can be related to the robot joint rates through the usual robot Jacobian and a fixed transformation between the motion of the camera and that of the end effector. When the image error function e(f) tends to zero, the kinematic error also approaches zero. However, an inappropriate choice of $J_{img}^{+}(f, Z)$ may lead the system close to, or even into, a singularity of the Jacobian matrix, which may result in servoing failure (Malis et al, 1999).

The objective of IBVS in this paper is to control the end effector to approach an unknown object so that the image error function e(f) approaches zero while the trajectory in Cartesian space is kept as short as possible. Meanwhile, the system should be low cost and the visual servoing algorithm should have a low computational load for real-time robot control.
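The proportional law of Equation (5) reduces to a gain times a pseudo-inverse applied to the feature error. A minimal sketch (the stacked Jacobian is taken as given; under the linearized model $\dot{f} = J\dot{r}$ the closed loop drives the reachable part of the error down exponentially):

```python
import numpy as np

def ibvs_velocity(J_img, f, f_d, K=1.0):
    """Proportional IBVS law of Eq. (5): r_dot = -K * pinv(J) * e(f).

    J_img : (2n, 6) stacked image Jacobian
    f, f_d: (2n,) current and desired image feature vectors
    K     : proportional gain (sets the exponential convergence rate)
    Returns the 6-vector camera velocity screw [vx vy vz wx wy wz].
    """
    e = f - f_d                         # image feature error e(f)
    return -K * np.linalg.pinv(J_img) @ e
```

Note that with more than three point features the system is over-determined (2n > 6), so the pseudo-inverse gives a least-squares velocity; the component of the error outside the range of the Jacobian cannot be reduced by this law.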


3. IBVS with laser system

In this section, a new approach to switching control of IBVS with laser pointer is presented to accomplish the aforementioned visual servoing tasks. This approach is designed to overcome the drawbacks of IBVS by installing an off-the-shelf laser pointer on the end effector for estimating the depth of the image features and separating the visual servoing procedures into several control stages.

(a) Robotic Eye-in-Hand System

The designed robotic Eye-in-Hand system configuration is shown in Figure 1. It is composed of a 6-DOF robot, a camera mounted on the robot end effector, a laser pointer rigidly linked to the camera, and a vacuum pump for grasping the object. In Figure 1, H denotes the transformation between two reference frames.

Figure 1.

Robotic Eye-in-Hand System Configuration

In traditional IBVS, since the control law is designed only in the image plane, unnecessary motions of the end effector are performed. In order to obtain the image Jacobian $J_{img}(f, Z)$, which is a function of the features and depths of the object, depth estimation is needed for each feature point. A low-cost laser pointer is therefore adopted and installed on the end effector to measure the distance from the camera to the target for depth estimation. Through the laser triangulation method, the depth estimate can be obtained in a very short time. Moreover, the laser spot can be chosen as an image feature, which eases the image processing and keeps the computational load of visual servoing low.

(b) On-line Depth Estimation

In Equation (3), $J_{img}(f, Z)$ is a function of the features and their depths. Although one solution for IBVS is to use the target image features and their depths to form a constant image Jacobian, the resulting scheme is proved stable only in a neighborhood of the desired position. Another solution is to estimate the depth of every feature point on-line. In this paper, a simple laser-pointer-based triangulation method is applied to estimate the depth on-line.

Assume that the laser beam lies in the same plane as the camera optical axis. We use a camera-centered frame $\{C\}$ with $Z_C$ parallel to the optical axis. In this configuration, no matter how the camera moves, the trajectory of the laser spot in the image is a straight line passing through the principal point. If we consider the laser pointer as a camera whose optical axis is the laser beam, this straight line in the image is an epipolar line of the imaginary stereo configuration.

As shown in Figure 2, d denotes the horizontal distance between the laser beam and the optical axis of the camera lens, and α is the angle between the laser beam and the horizontal line. Both are fixed and known once the laser pointer is installed. Point P is the intersection of the laser beam and the object surface. A function giving the depth $Z_p$ in terms of the pixel indices $(u, v)$ (Trucco & Verri, 1998) is derived by applying trigonometry. The depth $Z_p$ calculated by Equation (6) can be used to approximate the depth of each feature point on the planar object surface, under the assumption that the object is small enough.

$$Z_p = \frac{d\,\sin\alpha\,\cos\beta}{\cos(\alpha - \beta)} \tag{6}$$

where β is the angle, inside the camera, between the optical axis and the ray through the image of P; it is computed from the pixel indices of the laser spot.

Figure 2.

Calculation of the Depth of a Point by Using Triangulation

Remark: It is noted that (6) is only valid when the laser beam and the camera optical axis are coplanar. In the robotic assembly system, the laser pointer is rigidly linked to the camera by a fixture attached to the end effector of the robot. Within this fixture, the laser pointer is installed such that the laser beam and the camera optical axis are coplanar. The possible generalization of this calibration phase to the case of a non-coplanar laser and camera is under investigation.
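The triangulation of Equation (6) can be sketched in a few lines. The fragment below assumes the laser source sits at baseline distance d from the optical axis in the epipolar plane, with the beam inclined at α to the baseline, and computes β from the signed metric image coordinate of the spot along the epipolar line (all names are illustrative):

```python
import math

def laser_depth(d, alpha, x_img, lam):
    """Depth of the laser spot P by triangulation, Eq. (6).

    d     : baseline between laser source and camera center (same units as depth)
    alpha : angle of the laser beam w.r.t. the baseline (rad)
    x_img : signed image coordinate of the spot along the epipolar line
            through the principal point (metric units, same scale as lam)
    lam   : focal length of the camera
    beta is the angle between the optical axis and the ray through the
    spot image: tan(beta) = x_img / lam.
    """
    beta = math.atan2(x_img, lam)
    return d * math.sin(alpha) * math.cos(beta) / math.cos(alpha - beta)
```

A quick consistency check: placing the laser at (d, 0) with beam direction (-cos α, sin α), any point P on the beam projects to an image coordinate from which the formula recovers the depth of P exactly.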

(c) Switching Control of IBVS with Laser Pointer

The proposed algorithm is divided into three control stages: a) driving the laser spot onto the object, b) combining IBVS with the laser spot for translational motion, and c) adjusting the fine alignment. The block diagram of the switching control system of IBVS with laser pointer is presented in Figure 3. The object is assumed to be stationary with respect to the robot reference frame.

Figure 3.

Block diagram of IBVS with Laser control system

Stage 1: Driving the Laser Spot on the Object

To project the laser spot onto the object, two situations need to be considered: either all features are in the FOV (Stage 1.A), or certain features are missing from the FOV (Stage 1.B). Both are discussed in detail below.

Stage 1.A: All Features in the FOV

When all image features are in the FOV, the control task is to use the 3-DOF rotational motion of the camera to drive the image features of the object, together with their center of gravity, toward the current laser spot and the created imaginary image features. Equation (1) can be decomposed into translational and rotational component parts as shown below

$$\dot{f} = \begin{bmatrix} J_{img}^{t}(f, Z) & J_{img}^{r}(f) \end{bmatrix} \begin{bmatrix} {}^{C}v_C \\ {}^{C}\omega_C \end{bmatrix} \tag{7}$$

where $J_{img}^{t}(f, Z)$ and $J_{img}^{r}(f)$ are stacked from $J_{img}^{t}(f_i, Z_i)$ and $J_{img}^{r}(f_i)$, given by

$$J_{img}^{t}(f_i, Z_i) = \begin{bmatrix} \dfrac{\lambda}{Z_i} & 0 & -\dfrac{x_i}{Z_i} \\[6pt] 0 & \dfrac{\lambda}{Z_i} & -\dfrac{y_i}{Z_i} \end{bmatrix} \tag{8}$$

$$J_{img}^{r}(f_i) = \begin{bmatrix} -\dfrac{x_i y_i}{\lambda} & \dfrac{\lambda^2 + x_i^2}{\lambda} & -y_i \\[6pt] -\dfrac{\lambda^2 + y_i^2}{\lambda} & \dfrac{x_i y_i}{\lambda} & x_i \end{bmatrix} \tag{9}$$

It is noted that $J_{img}^{t}$ depends on both the features and their depths, while $J_{img}^{r}$ is a function of the image features only. Since the laser pointer is mounted on the robot end effector, it can be steered by performing 3-DOF rotational motion. It is known that 2-DOF of rotational motion is sufficient to drive a laser pointer to project its dot onto the target. The reason for using 3-DOF instead of 2-DOF here is that the rotation about the camera Z axis can be used to solve image singularity problems such as the 180 deg rotation around the optical axis, a well-known case causing visual servoing failure, as presented by F. Chaumette (Chaumette, 1998). To avoid this particular case, a set of imaginary target features is designed. Based on the target image features, including the desired object features $f_d$ and the desired laser spot $f_{dl}$, a new imaginary image can be constructed, as shown in Figure 4. Let the offset between the target position of the laser spot and the current laser spot be $d_l = f_{dl} - f_l$. The imaginary object features $f_{iof}$ are formed by shifting all target object features $f_{di}$ by $d_l$ and appending the current laser spot $f_l$:

$$f_{iof} = \begin{bmatrix} f_{d1} - d_l & f_{d2} - d_l & \cdots & f_{dn} - d_l & f_l \end{bmatrix}^{T} \tag{10}$$

It is assumed that the height of the object is relatively small. Hence there is no large jump of the laser spot in the image plane when the spot moves from the workspace platform onto the surface of the object.

Figure 4.

Example of creating imaginary features, (a) in 2-D image space, (b) in 3-D Cartesian space
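The construction of Equation (10) is a pure image-space shift. A small sketch, assuming features are NumPy arrays of pixel coordinates (the function name is illustrative):

```python
import numpy as np

def imaginary_features(f_d, f_dl, f_l):
    """Imaginary object features of Eq. (10).

    f_d  : (n, 2) desired object features
    f_dl : (2,)   desired laser-spot feature
    f_l  : (2,)   current laser-spot feature
    Shifts every desired object feature by d_l = f_dl - f_l and appends
    the current laser spot, so the imaginary pattern is anchored at the
    laser spot as it currently appears in the image.
    """
    d_l = np.asarray(f_dl) - np.asarray(f_l)
    shifted = np.asarray(f_d) - d_l        # f_di - d_l for every i
    return np.vstack([shifted, f_l])
```

By design, when the Stage 1.A rotation brings the object features onto this imaginary pattern, the laser spot lands on the object at the same relative position it will occupy in the target image.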

The center of gravity of the object image features serves as an extra image feature, defined as $f_{cg} = [(x_1 + \cdots + x_n)/n, \; (y_1 + \cdots + y_n)/n]^T$ in the image plane. The control goal is to minimize the error norm between the image features $f_{s1A} = [f_1 \; f_2 \; \cdots \; f_n \; f_{cg}]^T$ and the imaginary features:

$$\min \left\| f_{s1A} - f_{iof} \right\| \tag{11}$$

From Equation (7), the relationship between the motion of the image features and the rotational DOF of the camera ${}^{C}\omega_C$ is represented as

$$\dot{f}_{s1A} = \begin{bmatrix} J_{img}^{t}(f_{s1A}, Z_{s1A}) & J_{img}^{r}(f_{s1A}) \end{bmatrix} \begin{bmatrix} {}^{C}v_C \\ {}^{C}\omega_C \end{bmatrix} \tag{12}$$

where $Z_{s1A} = [Z_1 \; Z_2 \; \cdots \; Z_n \; Z_l]^T$. The above equation can be written as:

$${}^{C}\omega_C = J_{img}^{r+}(f_{s1A}) \left[ \dot{f}_{s1A} - J_{img}^{t}(f_{s1A}, Z_{s1A})\,{}^{C}v_C \right] \tag{13}$$

We set the translational DOF of the camera motion to zero (${}^{C}v_C = 0$) and obtain:

$${}^{C}\omega_C \approx J_{img}^{r+}(f_{s1A})\,\dot{f}_{s1A} \tag{14}$$

Let the feature error be defined as $e_{iof} = f_{s1A} - f_{iof}$. By imposing $\dot{e}_{iof} = -K_l\,e_{iof}$, one can design the proportional control law

$$u_{s1A} = {}^{C}\omega_C = -K_l\,J_{img}^{r+}(f_{s1A})\,e_{iof} \tag{15}$$

where ${}^{C}\omega_C$ is the camera angular velocity sent to the robot controller and $K_l$ is the proportional gain.

Since the translational motion of the camera is deliberately turned off, Equation (14), relating the image velocity to the 3-DOF camera rotational motion, holds only approximately. The proportional controller (15) therefore cannot make the feature error $e_{iof}$ approach zero exponentially. However, the current control task is only to drive the laser spot onto the object and to rotate the camera so that the image features of the object approach the imaginary features; further tuning of the visual servoing is carried out in the subsequent stages. Hence a switching rule is adopted to switch the controller to the second stage.

The switching rule is described as follows: when the error norm falls below a predetermined threshold value, the controller switches from the current stage to the second stage. The switching condition is given by

$$\left\| f_{iof} - f_{s1A} \right\| \le f_{s1A\_0} \tag{16}$$

where $f_{s1A\_0}$ is the predetermined feature error threshold value.

Therefore, the controller (15) is expressed as follows:

$$u_{s1A} = {}^{C}\omega_C = \begin{cases} -K_l\,J_{img}^{r+}(f_{s1A})\,e_{iof} & \left\| f_{iof} - f_{s1A} \right\| \ge f_{s1A\_0} \\ u_{s2} & \text{otherwise} \end{cases} \tag{17}$$

Notice that this control law not only drives the laser spot onto the object but also solves the image singularity problem. As mentioned in (Chaumette, 1998), a pure rotation of 180 deg around the optical axis leads to an image singularity and causes a pure backward translational camera motion along the optical axis. In the proposed algorithm, the 3-DOF rotation of the camera is mainly executed in the first control stage and the translational movement of the camera is primarily executed in the second; hence the backward translational camera motion is avoided.

Stage 1.B: Features Partially Seen in the FOV

When only part of the object is in the FOV, some features are unavailable. In order to recover all the features, we propose a strategy that controls the 2-DOF rotational motion of the camera to project the laser spot onto the centroid of the partial object in the image plane until the whole object appears in the FOV. The motion of the laser image feature $f_l = [x_l \; y_l]^T$ is related to the camera motion by:

$$\dot{f}_l = J_{imgXY}^{r}(f_l)\,\dot{r}_{xy} + J_{img\_cz}(f_l, Z_l)\,\dot{r}_{cz} \tag{18}$$

where $\dot{r}_{xy} = [\omega_x \; \omega_y]^T$ represents the 2-DOF rotational motion of the camera, $\dot{r}_{cz} = [v_x \; v_y \; v_z \; \omega_z]^T$ is the remainder of the camera velocity screw, and the image Jacobian matrices $J_{imgXY}^{r}(f_l)$ and $J_{img\_cz}(f_l, Z_l)$ are defined respectively as:

$$J_{imgXY}^{r}(f_l) = \begin{bmatrix} -\dfrac{x_l y_l}{\lambda} & \dfrac{\lambda^2 + x_l^2}{\lambda} \\[6pt] -\dfrac{\lambda^2 + y_l^2}{\lambda} & \dfrac{x_l y_l}{\lambda} \end{bmatrix}, \qquad J_{img\_cz}(f_l, Z_l) = \begin{bmatrix} \dfrac{\lambda}{Z_l} & 0 & -\dfrac{x_l}{Z_l} & -y_l \\[6pt] 0 & \dfrac{\lambda}{Z_l} & -\dfrac{y_l}{Z_l} & x_l \end{bmatrix} \tag{19}$$

Equation (18) can be written as:

$$\dot{r}_{xy} = J_{imgXY}^{r+}(f_l) \left[ \dot{f}_l - J_{img\_cz}(f_l, Z_l)\,\dot{r}_{cz} \right] \tag{20}$$

where $J_{imgXY}^{r+}(f_l)$ is the pseudo-inverse of the image Jacobian matrix $J_{imgXY}^{r}(f_l)$.

As mentioned before, the 2-DOF rotational motion $\dot{r}_{xy} = [\omega_x \; \omega_y]^T$ allows the laser pointer to project its dot close to the desired target. Thus the other elements of the camera velocity screw, $\dot{r}_{cz} = [v_x \; v_y \; v_z \; \omega_z]^T$, are set to zero, and Equation (20) becomes:

$$\dot{r}_{xy} \approx J_{imgXY}^{r+}(f_l)\,\dot{f}_l \tag{21}$$

The centroid of the partial object in the image is used as the desired position for the laser image feature. To obtain this centroid, one calculates the first-order moments of the partial object image. Let $R$ represent the region of the partial object in a binary image $I(k, j)$, obtained by fixed-level thresholding. For a digital image, the moments of the region $R$ are defined as:

$$m_{kj} = \sum_{(x, y) \in R} x^{k} y^{j}, \qquad k \ge 0, \; j \ge 0 \tag{22}$$

where $(x, y)$ are the row and column of a pixel in the region $R$, respectively. According to the definition of the moments, the area of the region $R$ and the centroid of $R$ are:

$$A_R = m_{00} \tag{23}$$

$$f_c = \begin{bmatrix} f_{xc} \\ f_{yc} \end{bmatrix} = \begin{bmatrix} m_{10}/m_{00} \\ m_{01}/m_{00} \end{bmatrix} \tag{24}$$

where f c is the centroid of the partial object in the image.

Define the image feature error between the laser image feature and the centroid of the partial object as $e_{cl} = f_l - f_c$. The proportional control law for the 2-DOF rotational motion is designed by imposing $\dot{e}_{cl} = -K_{l2}\,e_{cl}$:

$$u_{s1B} = \dot{r}_{xy} = \begin{cases} -K_{l2}\,J_{imgXY}^{r+}(f_l)\,e_{cl} & \left\| f_l - f_c \right\| \ge f_{s1B\_0} \\ u_{s1A} & \text{otherwise} \end{cases} \tag{25}$$

where $K_{l2}$ is the proportional gain and $f_{s1B\_0}$ is the predetermined feature error threshold value.

When the image feature error $e_{cl}$ satisfies $\left\| f_l - f_c \right\| \le f_{s1B\_0}$, the laser spot is projected close to the planar object. In this paper, the end effector is assumed to be positioned above the planar object and the FOV is relatively large compared with the image of the object. Hence, all the feature points enter the FOV when the laser spot is close to the desired laser image position. Once all the feature points are obtained, the control law switches to $u_{s1A}$ in Stage 1.A.
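Equations (22)-(24) amount to summing pixel coordinates over the thresholded region. A minimal sketch, assuming the partial object is given as a binary NumPy mask:

```python
import numpy as np

def region_centroid(mask):
    """Area and centroid of a binary region via moments, Eqs. (22)-(24).

    mask : 2-D boolean (or 0/1) array; True marks the partial-object region R.
    Returns (area, (row_centroid, col_centroid)).
    """
    rows, cols = np.nonzero(mask)    # pixel coordinates (x, y) in R
    m00 = rows.size                  # zeroth-order moment = area A_R
    m10 = rows.sum()                 # first-order moment, row direction
    m01 = cols.sum()                 # first-order moment, column direction
    return m00, (m10 / m00, m01 / m00)
```

In Stage 1.B this centroid serves as the desired laser-spot position until all four corners of the object enter the FOV.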

Stage 2: Translational Moving of Camera

The key problem in the switching control of Stage 2 is how to obtain the image Jacobian matrix relating the motion of the image features plus the laser spot to the translational motion of the camera. In the derivation of the traditional image Jacobian matrix, the target is assumed to be stationary. Since the object is stationary, the laser spot projected on it can also be treated as a stationary point, so the image Jacobian matrix of traditional IBVS applies. Based on this scheme, the algorithm is presented in detail as follows.

Let $f_{s2} = [f_1 \; \cdots \; f_n \; f_l]^T$ represent the $n$ image features plus the laser image feature $f_l = [x_l \; y_l]^T$. It is assumed that the object is small compared with the workspace; hence the depth of the laser spot, which is close to the centroid of the object, is a good approximation of the depth of all the features. The modified relationship between the motion of the image features and the motion of the camera is given by

$$\dot{f}_{s2} = J_{img}(f_{s2}, Z_l)\,\dot{r} \tag{26}$$

where $J_{img}(f_{s2}, Z_l)$ can also be decomposed into translational and rotational component parts as shown below

$$J_{img}(f_{s2}, Z_l) = \begin{bmatrix} J_{img}^{t}(f_{s2}, Z_l) & J_{img}^{r}(f_{s2}) \end{bmatrix} \tag{27}$$

where the two components are formed as:

$$J_{img}^{t}(f_{s2}, Z_l) = \begin{bmatrix} J_{img}^{t}(f_1, Z_l) \\ \vdots \\ J_{img}^{t}(f_n, Z_l) \\ J_{img}^{t}(f_l, Z_l) \end{bmatrix}, \qquad J_{img}^{r}(f_{s2}) = \begin{bmatrix} J_{img}^{r}(f_1) \\ \vdots \\ J_{img}^{r}(f_n) \\ J_{img}^{r}(f_l) \end{bmatrix} \tag{28}$$

Equation (26) can be written as:

$$\dot{f}_{s2} = \begin{bmatrix} J_{img}^{t}(f_{s2}, Z_l) & J_{img}^{r}(f_{s2}) \end{bmatrix} \begin{bmatrix} {}^{C}v_C \\ {}^{C}\omega_C \end{bmatrix} = J_{img}^{t}(f_{s2}, Z_l)\,{}^{C}v_C + J_{img}^{r}(f_{s2})\,{}^{C}\omega_C \tag{29}$$

The translational motion of the camera is then derived as:

$${}^{C}v_C = J_{img}^{t+}(f_{s2}, Z_l) \left[ \dot{f}_{s2} - J_{img}^{r}(f_{s2})\,{}^{C}\omega_C \right] \tag{30}$$

Setting the rotational motion of the camera to zero, the above equation is rewritten as:

$${}^{C}v_C \approx J_{img}^{t+}(f_{s2}, Z_l)\,\dot{f}_{s2} \tag{31}$$

The control objective is to move the image features of the object plus the laser spot close to the target image features by using the translational motion of the camera. The target image features comprise the four coplanar corner points of the object image plus the desired laser spot $f_{dl}$, defined as $f_{s2\_D} = [f_{d1} \; f_{d2} \; \cdots \; f_{dn} \; f_{dl}]^T$. The error between them is defined as:

$$e_{s2} = f_{s2} - f_{s2\_D} \tag{32}$$

The translational motion of the camera is designed by imposing $\dot{e}_{s2} = -K_{s2}\,e_{s2}$:

$$u_{s2} = {}^{C}v_C = \begin{cases} -K_{s2}\,J_{img}^{t+}(f_{s2}, Z_l)\,e_{s2} & \left\| f_{s2} - f_{s2\_D} \right\| \ge f_{s2\_0} \\ u_{s3} & \text{otherwise} \end{cases} \tag{33}$$

where $K_{s2}$ is the proportional gain and $f_{s2\_0}$ is the predetermined feature error threshold value.

The switching rule is described as follows: if the image feature error norm between the current image features and the desired image features falls below the threshold $f_{s2\_0}$, the IBVS with laser pointer switches from the current stage to the third stage $u_{s3}$:

$$\left\| f_{s2} - f_{s2\_D} \right\| \le f_{s2\_0} \tag{34}$$

Since the translational motion of the camera is only approximately given by (31), the proportional controller (33) cannot drive the image feature error $e_{s2}$ to zero exponentially. However, the translational motion brings the image of the object into the vicinity of the target image features. A switching rule is thus set to switch from the current stage to the next one for fine tuning of the visual servoing. The threshold $f_{s2\_0}$ is directly related to the depth and affects the stability of the controller; its selection is therefore crucial, and it is normally set to a relatively small number of pixels to maintain the stability of the system.

Stage 3: Adjusting the Fine Alignments

After applying the switching controllers of Stages 1 and 2, the end effector is brought into the neighbourhood of the target image features. It has been shown in (Chaumette, 1998) that the constant image Jacobian at the desired camera configuration can be used to reach the global minimum in the image space. Hence, a constant image Jacobian matrix is used in IBVS to adjust the fine alignment of the end effector so that the pump can perfectly suck up the object. In this stage, the laser spot is no longer considered as an image feature, and the traditional IBVS with the constant image Jacobian of the target image features is applied.

Since the image features are close to their desired positions in the image plane, as expressed by (34), the depths of the feature points are approximately equal to the desired ones (the object being essentially planar). The target features and their corresponding depths are applied to the traditional IBVS with the constant image Jacobian matrix:

$$J_{img}(f_{id}, Z_{id}) = \begin{bmatrix} \dfrac{\lambda}{Z_{id}} & 0 & -\dfrac{x_{id}}{Z_{id}} & -\dfrac{x_{id} y_{id}}{\lambda} & \dfrac{\lambda^2 + x_{id}^2}{\lambda} & -y_{id} \\[6pt] 0 & \dfrac{\lambda}{Z_{id}} & -\dfrac{y_{id}}{Z_{id}} & -\dfrac{\lambda^2 + y_{id}^2}{\lambda} & \dfrac{x_{id} y_{id}}{\lambda} & x_{id} \end{bmatrix} \tag{35}$$

where $f_{id}$ and $Z_{id}$ are the target features and their corresponding depths, respectively.

The control goal of this stage is to control the pose of the end effector so that the image feature error between the current image features $f = [f_1 \; \cdots \; f_n]^T$ and the target image features $f_d = [f_{d1} \; f_{d2} \; \cdots \; f_{dn}]^T$ reaches the global minimum. The stopping condition is described as follows: if the feature error norm falls below a predetermined threshold, the whole IBVS with laser pointer stops. The condition is

$$\left\| f_d - f \right\| \le f_{s3\_0} \tag{36}$$

where $f_{s3\_0}$ is a predetermined threshold value, $f = [f_1 \; \cdots \; f_n]^T$ is the current image feature vector and $f_d = [f_{d1} \; f_{d2} \; \cdots \; f_{dn}]^T$ is the desired one.

The proportional controller is designed as:

$$u_{s3} = \begin{bmatrix} {}^{C}v_C \\ {}^{C}\omega_C \end{bmatrix} = \begin{cases} -K_{s3}\,J_{img}^{+}(f_d, Z_d)\,(f - f_d) & \left\| f_d - f \right\| \ge f_{s3\_0} \\ \text{stop servoing, start grasping} & \text{otherwise} \end{cases} \tag{37}$$

where $K_{s3}$ is the proportional gain and $Z_d = [Z_{1d} \; \cdots \; Z_{nd}]^T$ is the vector of depths of the target feature points.

The threshold $f_{s3\_0}$ directly affects the accuracy of the pose of the robot end effector with respect to the object: the larger the threshold, the less accurate the achieved pose, but the shorter the positioning time.

It is noted that the proposed IBVS algorithm is derived from traditional IBVS. Therefore it inherits the advantages of IBVS: it needs no object model and is robust to camera calibration and hand-eye calibration errors. In addition, by using a laser pointer and the separated-DOF method, the proposed switching algorithm decouples the rotational and translational motion control of the robot end effector to overcome the inherent drawbacks of traditional IBVS, such as image singularities and image local minima.
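Taken together, the three stages reduce to threshold tests on feature-error norms. The following schematic sketch of the stage supervisor elides the controller internals of (17), (25), (33) and (37); the stage labels and threshold values are illustrative, not taken from the original implementation:

```python
# Illustrative thresholds (pixels), mirroring f_s1B_0, f_s1A_0, f_s2_0, f_s3_0
THRESH = {"s1B": 10.0, "s1A": 10.0, "s2": 10.0, "s3": 3.0}

def next_stage(stage, err_norm, all_features_visible):
    """One step of the stage supervisor for the switching IBVS scheme.

    stage   : current stage label ("s1B", "s1A", "s2" or "s3")
    err_norm: norm of the active stage's feature error (pixels)
    all_features_visible: True once every object corner is in the FOV
    Returns the stage to run at the next iteration ("done" = start grasping).
    """
    if stage == "s1B":   # center the laser spot on the partial object
        return "s1A" if all_features_visible and err_norm <= THRESH["s1B"] else "s1B"
    if stage == "s1A":   # rotate toward the imaginary features
        return "s2" if err_norm <= THRESH["s1A"] else "s1A"
    if stage == "s2":    # translate toward the target features
        return "s3" if err_norm <= THRESH["s2"] else "s2"
    if stage == "s3":    # fine alignment with the constant Jacobian
        return "done" if err_norm <= THRESH["s3"] else "s3"
    return stage
```

Each transition is one-directional, which is what makes the overall scheme a switching controller rather than a blended one: once a stage's error threshold is met, its controller is never revisited.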


4. Experimental results

The proposed IBVS with laser pointer has been tested on a robotic assembly system consisting of an industrial Motoman UPJ robot with a JRC controller, a laser pointer, and a PC-based vision system comprising a Matrox frame grabber and a Sony XC55 camera mounted on the robot. The robotic assembly system setup for testing IBVS with laser pointer is shown in Figure 5.

To verify the effectiveness of the proposed method, a plastic object shown in Figure 5 is chosen to be assembled into a metallic part. The four coplanar corners of its surface are selected as the target features. One advantage of the JRC controller in the UPJ robot is that it accepts position and orientation values and computes the joint angles itself, which eliminates robot kinematic modeling error. However, the drawback of the JRC controller is that it cannot be used for real-time control: when the designed controller generates a new position or orientation value and sends it to the JRC controller, the JRC controller will not respond until the previous position has been reached in each iteration. Given this hardware limitation, we divide the calculated value into a series of small increments by a constant factor at each step and increase the sampling time accordingly. In the experiment, the constant factor was chosen as 50 and the sampling time as 2 seconds.
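The increment-splitting workaround described above can be sketched as follows; `send_to_jrc` is a hypothetical callback standing in for the actual JRC interface, while the factor of 50 and the 2-second sampling time follow the experiment:

```python
import time
import numpy as np

def send_in_increments(pose_cmd, send_to_jrc, factor=50, sample_time=2.0):
    """Divide a commanded pose change into `factor` equal small steps so a
    controller that blocks until each target is reached can still track it.
    pose_cmd is a 6-vector [X, Y, Z, phi, theta, psi]."""
    step = np.asarray(pose_cmd, dtype=float) / factor
    for _ in range(factor):
        send_to_jrc(step)        # one small relative move per iteration
        time.sleep(sample_time)  # wait for the controller to settle
```

The sum of the small relative moves reproduces the original command, at the cost of a longer effective sampling period.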

Figure 5.

Robotic assembly system setup and components to be assembled

The desired pose H_E^O is set as [36 66 123 0 0 13.7] (mm, deg) in the teaching procedure. The predetermined threshold f_{s1A\_0} in (16) is set to 10 pixels and f_{s1B\_0} in (25) to 10 pixels. The threshold f_{s2\_0} in (32) is tuned to 10 pixels, which corresponds to approximately 28 mm in the experimental setup. f_{s3\_0} in (35) is set to 3 pixels. All proportional gains of the control laws are set to 1. When the feature error norm falls below 3 pixels, the end effector has been brought to the desired pose with respect to the plastic object. It should be noted that no matter where the object is located in the workspace, the desired pose of the end effector with respect to the object is exactly the same: for any object position there exists a fixed transformation between the camera and end effector frames at this desired pose. Before visual servoing starts, the robot is taught this fixed pose once. Commanding the robot end effector to move by the predefined transformation vector in the end effector frame then allows the vacuum pump to reliably pick up the object. The assembled components shown in Figure 5 demonstrate the accuracy of the proposed method.

(a) With Good Calibration Values

The proposed algorithm is first tested with good calibration values (Table 1). The parameters d and α of the fixed laser-camera configuration are adjusted to 8 mm and 72 deg, respectively. Sequential pictures of a successful assembly process are shown in Figure 6, and the image trajectory obtained from the experiment in Figure 7(a). The rectangle with white edges represents the initial position, and the black object the desired position. The object is placed manually at a random initial position.

Parameters | Good calibration values | Bad calibration values
Principal point (pixel) | [326 228] | [384 288]
Focal length (mm) | 6.02 | 7.2
Effective pixel size (mm) | 0.0074 × 0.0074 | 0.0074 × 0.0074
H_C^L [X Y Z φ θ ψ] (mm, deg), measured by hand | [8 0 0 0 -18 0] | [6 0 0 0 -14 0]
H_E^C [X Y Z φ θ ψ] (mm, deg), obtained by calibration | [-7 5 189 -2 0 -179] | [-20 20 210 5 5 180]

Table 1.

Good and bad calibration values of system parameters

(b) With Bad Calibration Values

To test the robustness of IBVS with laser pointer, a 20% deviation is also added to the intrinsic camera parameters, as shown in Table 1. The good and bad calibration values of the transformations among the camera frame, laser pointer frame and robot end effector frame are also listed in Table 1. The object position is the same as in the experiment with good calibration values, and the resulting image trajectory is shown in Figure 7(b).

Although the image trajectory in Figure 7(b) is distorted compared with that in Figure 7(a), the image features still converge to the desired positions and the assembly task is again successfully accomplished. Hence, the proposed algorithm is robust to camera calibration and hand-eye calibration errors on the order of 20% deviation from the nominal values.

Figure 6.

Assembly sequence (a) Initial position (b) The end of stage 1, (c) The end of stage 2 (d) The end of stage 3 (ready to pick up the object) (e) Suck up the object (f) Assembly

Figure 7.

Image trajectory (a) with good calibration value (b) with bad calibration value

To test the convergence of the depth estimation, experiments are carried out under both good and bad calibration values. The results are presented in Figure 8. In both cases the estimated depth converges to the real distance, which experimentally confirms the convergence of the depth estimation.

Figure 8.

Experimental results of depth estimation
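The chapter's triangulation equations are derived in an earlier section; as an illustration only, one common laser-camera geometry (laser source offset from the optical center by baseline d and tilted by angle α toward the optical axis) gives the spot image coordinate x = λd/Z − λ/tan α, hence Z = λd/(x + λ/tan α). A sketch under that assumed geometry, using the experimental values d = 8 mm and α = 72 deg:

```python
import math

def depth_from_laser_spot(x, lam, d, alpha):
    """Depth Z of the laser spot recovered from its image x-coordinate.
    Assumed geometry: laser offset by baseline d from the optical center,
    tilted by alpha toward the optical axis, so x = lam*d/Z - lam/tan(alpha)."""
    return lam * d / (x + lam / math.tan(alpha))

# Experimental configuration: d = 8 mm, alpha = 72 deg, focal length 6.02 mm
lam, d, alpha = 6.02, 8.0, math.radians(72.0)
Z = depth_from_laser_spot(x=0.05, lam=lam, d=d, alpha=alpha)
```

The sensitivity of Z to the measured x explains why the estimate improves as the camera approaches the object.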


5. Conclusion

In this paper, a new approach to image based visual servoing with a laser pointer has been developed. The laser pointer is used with a triangulation method to estimate the depth between the camera and the object. The switching control of IBVS with laser pointer is decomposed into three stages to accomplish visual servoing tasks under various circumstances. The algorithm has been successfully applied to an experimental robotic assembly system. The experimental results verify the effectiveness of the proposed method and validate its feasibility for industrial manufacturing systems. Future work includes testing the proposed method on a robot manipulator that supports real-time control and conducting an analytical convergence analysis of the switching control algorithm.

References

  1. Aouf, N.; Rajabi, H.; Rajabi, N.; Alanbari, H. & Perron, C. (2004). "Visual object tracking by a camera mounted on a 6DOF industrial robot," IEEE Conference on Robotics, Automation and Mechatronics, Vol. 1, pp. 213-218, Dec. 2004.
  2. Chaumette, F. (1998). "Potential problems of stability and convergence in image-based and position-based visual servoing," in The Confluence of Vision and Control, D. Kriegman, G. Hager & A. Morse, Eds., Lecture Notes in Control and Information Sciences, Vol. 237, Springer-Verlag, Berlin, pp. 66-78, 1998.
  3. Chesi, G.; Hashimoto, K.; Prattichizzo, D. & Vicino, A. (2004). "Keeping features in the field of view in eye-in-hand visual servoing: a switching approach," IEEE Transactions on Robotics, Vol. 20, No. 5, pp. 908-913, Oct. 2004.
  4. Corke, P. I. & Hutchinson, S. A. (2001). "A new partitioned approach to image-based visual servo control," IEEE Transactions on Robotics and Automation, Vol. 17, No. 4, pp. 507-515, Aug. 2001.
  5. Corke, P. I. (1996). "Robotics toolbox for MATLAB," IEEE Robotics and Automation Magazine, Vol. 3, No. 1, pp. 24-32, 1996.
  6. Corke, P. I. (1996). Visual Control of Robots: High Performance Visual Servoing, Research Studies Press, 1996.
  7. DeMenthon, D. F. & Davis, L. S. (1995). "Model-based object pose in 25 lines of code," International Journal of Computer Vision, Vol. 15, No. 1/2, pp. 123-142, 1995.
  8. Deng, L.; Janabi-Sharifi, F. & Wilson, W. J. (2005). "Hybrid motion control and planning strategies for visual servoing," IEEE Transactions on Industrial Electronics, Vol. 52, No. 4, pp. 1024-1040, Aug. 2005.
  9. Deng, L.; Wilson, W. J. & Janabi-Sharifi, F. (2003). "Dynamic performance of the position-based visual servoing method in the Cartesian and image spaces," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 1, pp. 510-515, 2003.
  10. Espiau, B. (1993). "Effect of camera calibration errors on visual servoing in robotics," in Experimental Robotics III: The 3rd International Symposium on Experimental Robotics, Lecture Notes in Control and Information Sciences, Vol. 200, Springer-Verlag, pp. 182-192, 1993.
  11. Gans, N. R. & Hutchinson, S. A. (2003). "An experimental study of hybrid switched system approaches to visual servoing," Proceedings of ICRA'03, IEEE International Conference on Robotics and Automation, Vol. 3, pp. 3061-3068, Sept. 2003.
  12. Hashimoto, K. & Noritsugu, T. (2000). "Potential problems and switching control for visual servoing," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 1, pp. 423-428, Oct. 2000.
  13. Hutchinson, S.; Hager, G. D. & Corke, P. I. (1996). "A tutorial on visual servo control," IEEE Transactions on Robotics and Automation, Vol. 12, No. 5, pp. 651-670, Oct. 1996.
  14. Janabi-Sharifi, F. & Ficocelli, M. (2004). "Formulation of radiometric feasibility measures for feature selection and planning in visual servoing," IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, Vol. 34, No. 2, pp. 978-987, April 2004.
  15. Kase, H.; Maru, N.; Nishikawa, A.; Yamada, S. & Miyazaki, F. (1993). "Visual servoing of the manipulator using the stereo vision," Proceedings of IECON'93, IEEE International Conference on Industrial Electronics, Control, and Instrumentation, Vol. 3, pp. 1791-1796, 1993.
  16. Krupa, A.; Gangloff, J.; Doignon, C.; de Mathelin, M.; Morel, G.; Leroy, J.; Soler, L. & Marescaux, J. (2003). "Autonomous 3-D positioning of surgical instruments in robotized laparoscopic surgery using visual servoing," IEEE Transactions on Robotics and Automation, Vol. 19, No. 5, pp. 842-853, 2003.
  17. Li, Z.; Xie, W. F. & Aouf, N. (2006). "A neural network based hand-eye calibration approach in robotic manufacturing systems," CSME 2006, Calgary, May 2006.
  18. Mahony, R.; Corke, P. & Chaumette, F. (2002). "Choice of image features for depth-axis control in image based visual servo control," IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 1, pp. 390-395, 2002.
  19. Malis, E.; Chaumette, F. & Boudet, S. (1999). "2-1/2-D visual servoing," IEEE Transactions on Robotics and Automation, Vol. 15, No. 2, pp. 238-250, April 1999.
  20. Oh, P. Y. & Allen, P. K. (2001). "Visual servoing by partitioning degrees of freedom," IEEE Transactions on Robotics and Automation, Vol. 17, No. 1, pp. 1-17, 2001.
  21. Pages, J.; Collewet, C.; Chaumette, F. & Salvi, J. (2006). "Optimizing plane-to-plane positioning tasks by image-based visual servoing and structured light," IEEE Transactions on Robotics, Vol. 22, No. 5, pp. 1000-1010, 2006.
  22. Tahri, O. & Chaumette, F. (2005). "Point-based and region-based image moments for visual servoing of planar objects," IEEE Transactions on Robotics, Vol. 21, No. 6, pp. 1116-1127, Dec. 2005.
  23. Trucco, E. & Verri, A. (1998). Introductory Techniques for 3-D Computer Vision, Prentice Hall, 1998.
  24. Wang, J. P. & Cho, H. (2008). "Micropeg and hole alignment using image moments based visual servoing method," IEEE Transactions on Industrial Electronics, Vol. 55, No. 3, pp. 1286-1294, March 2008.
  25. Weiss, L. E.; Sanderson, A. C. & Neuman, C. P. (1987). "Dynamic sensor-based control of robots with visual feedback," IEEE Journal of Robotics and Automation, Vol. 3, No. 5, pp. 404-417, Oct. 1987.
  26. Wilson, W. J.; Hulls, C. C. W. & Bell, G. S. (1996). "Relative end-effector control using Cartesian position-based visual servoing," IEEE Transactions on Robotics and Automation, Vol. 12, No. 5, pp. 684-696, Oct. 1996.
