1. Introduction
To reduce labor cost and increase throughput in the manufacturing industry, there is an increasing demand for automated robotic manufacturing systems such as robotic assembly, bin picking, drilling and palletizing systems, which require accurate and fast robot positioning. A 3-D machine vision system is normally used in a robotic manufacturing system to compensate for robot positioning errors caused by unforeseen work environments and randomly placed objects. The task of robot positioning using a vision system is referred to as visual servoing, which aims at controlling the pose of the robot's end effector relative to a target object or a set of target features (Hutchinson et al, 1996), (Corke, 1996), (Li et al, 2006), (Aouf et al, 2004). According to the features used as feedback in minimizing the positioning error, visual servoing is classified into three categories: Position Based Visual Servoing (PBVS) (Hutchinson et al, 1996), (DeMenthon & Davis, 1995), (Wilson et al, 1996), Image Based Visual Servoing (IBVS) (Weiss et al, 1987), (Espiau, 1993), (Chaumette, 1998), (Wang & Cho, 2008) and Hybrid Visual Servoing (Malis et al, 1999).
Since IBVS was introduced in the 1980s, it has attracted the attention of many researchers and has been developed extensively in recent years. The method is based on the principle that when the image feature error in the 2-D image space approaches zero, the kinematic error in Cartesian space approaches zero as well (Hutchinson et al, 1996). In IBVS, the error for the controller is defined directly with respect to the image feature parameters (Weiss et al, 1987). Compared with PBVS, the advantages of IBVS are obvious. First, it is free of object models and robust to camera modeling and hand-eye calibration errors (Espiau, 1993). Second, the image feature point trajectories are controlled to move approximately along straight lines in the image plane; hence the image features can be prevented from leaving the FOV. However, IBVS has the following drawbacks. Since the control law is designed merely in the image plane, the trajectory of the end effector in Cartesian space is not a straight line and can even be erratic in some cases; in other words, in order to reduce the image feature error to zero as quickly as possible, unnecessary motions of the end effector are performed. Moreover, the system is stable only in a region around the desired position, and there may exist image singularities and image local minima (Chaumette, 1998) leading to IBVS failure. The choice of the visual features is a key point in solving the problem of image singularities. Many studies have been carried out to find visual features that are decoupled with respect to the 6 DOF of the robot; such studies also ensure that the trajectory in Cartesian space is close to a straight line (Tahri & Chaumette, 2005), (Pages et al, 2006), (Janabi-Sharifi & Ficocelli, 2004), (Krupa et al, 2003). In (Tahri & Chaumette, 2005), six image moment features were selected to design a decoupled control scheme. In (Pages et al, 2006), Pages et al. derived the image Jacobian matrix related to a laser spot used as an image feature. The global convergence of the control law was shown with a constant interaction matrix; however, the method needed information about the planar object and was only suitable for situations where the camera was located near the desired position. Another approach using a laser pointer in visual servoing was presented in (Krupa et al, 2003). Krupa et al. developed a vision system with a stationary camera, which retrieved and positioned surgical instruments during operations. A laser pointer was used to project a laser spot onto the organ surface to control the depth. However, the servoing was only carried out in 3 DOF and the camera was motionless; therefore, the system could not provide much flexibility for visual servoing in large-scale environments.
Hashimoto et al. (Hashimoto & Noritsugu, 2000) introduced a method to handle image local minima. The main idea was to define a potential function and to minimize it while controlling the robot; if the potential had local minima, the algorithm generated an artificial potential and then controlled the camera based on the artificial one. In (Kase et al, 1993), stereo-based visual servoing was proposed to solve the depth estimation problem and to calculate an exact image Jacobian matrix; however, this kind of algorithm increases the computational cost. Mahony et al. (Mahony et al, 2002) introduced a method of choosing other types of image features instead of points for IBVS, focusing on depth-axis control. Oh et al. (Oh & Allen, 2001) presented a partitioned-DOF method for IBVS, which used a 3-DOF robot with a 2-DOF pan-tilt unit, and gave experimental results on tracking people. In (Corke & Hutchinson, 2001), another partitioned approach to visual servoing control was introduced, which decoupled the z-axis rotational and translational components of the control from the remaining DOF.
To overcome the aforementioned shortcomings of IBVS, some approaches that integrate PBVS and IBVS have been developed (Gans et al, 2003), (Malis et al, 1999). The main idea is to use a hybrid of Cartesian and image space sensory feedback signals to control both the Cartesian and image trajectories simultaneously. One example of such a hybrid approach is 2.5-D visual servoing (Malis et al, 1999), which is based on the estimation of the partial camera displacement. Recently, a hybrid motion control and planning strategy for image constraint avoidance was presented in (Deng et al, 2005), (Deng et al, 2003). The motion control part included local switching control between IBVS and PBVS for avoiding image singularities and image local minima, and the planning strategy was built around an artificial hybrid trajectory planner.
Inspired by the hybrid motion control and planning strategy, we propose a new switching control approach to IBVS that overcomes these shortcomings. First, a laser pointer is adopted to realize on-line depth estimation for obtaining the image Jacobian matrix. Second, we add the laser spot image feature to the chosen image features of the object. Based on the new image feature set, we can detect the object in the workspace even when the object features are only partially in the FOV; hence the available workspace is effectively enlarged. Furthermore, a set of imaginary target features is introduced so that a decoupled control scheme for IBVS can be designed. Third, we separate the 3-DOF rotational motion from the translational motion to avoid image singularity problems, such as the 180-degree rotation around the optical axis (Chaumette, 1998), and image local minima in IBVS. This decoupled control strategy enables the visual servoing system to work over a large region around the desired position.
This switching control approach to IBVS with a laser pointer is applied to a robotic assembly system. The system is composed of a 6-DOF robot, a camera mounted on the robot end effector, a simple off-the-shelf laser pointer rigidly linked to the camera, and a vacuum pump for grasping the object. The whole algorithm consists of three steps. First, the laser spot is driven onto a planar object: since the laser pointer is mounted on the robot end effector, the 3-DOF rotational motion of the end effector can drive the object image features close to a set of imaginary image features so that the laser spot is projected onto the object. Next, the image features of the object and of the laser spot are used to obtain the image Jacobian matrix, which is primarily used for controlling the translational motion of the end effector with respect to the object. Finally, a constant image Jacobian at the desired camera configuration is used in IBVS to adjust the fine alignment so that the feature errors can reach the image global minimum. The successful application of the proposed algorithm to an experimental robotic assembly system demonstrates the effectiveness of the proposed method.
The paper is organized as follows. In Section 2, the problem statement of visual servoing is introduced. A novel approach to switching control of IBVS with laser pointer is presented in Section 3. In Section 4, several experimental results are given to show the effectiveness of the proposed method. The concluding remarks are given in Section 5.
2. Problem statement
In this paper, we focus on an automated robotic assembly system that uses the Eye-in-Hand architecture to perform visual servoing. In this system, the assembly task is to move the robot end effector, together with a tool such as a gripper or a vacuum pump, to approach a part with unknown pose, and then to grasp the part and assemble it into a main body quickly and smoothly. Such an assembly task is a typical visual servoing control problem, and IBVS is an appropriate method for it since all control inputs are computed in image space without using pose information.
Let $f = [u, v]^T$ denote the projection of a point of the object on the image plane, with the image coordinates $u$ and $v$ measured relative to the principal point. Assume that the effective sizes of a pixel in the horizontal and vertical directions are known, so that $u$ and $v$ can be expressed in the same metric units as the focal length $\lambda$ of the lens.

In order to design the feedback control for the robot based on the velocity of the feature points, we have the following relationship between the motion of image features and the physical motion of the camera:

$$\dot{f} = J(r)\,\dot{r} \qquad (1)$$

where $\dot{f}$ is the velocity of the image features, $J(r)$ is the image Jacobian matrix, and $\dot{r} = [T_x, T_y, T_z, \omega_x, \omega_y, \omega_z]^T$ is the velocity screw of the camera, composed of the translational velocity $[T_x, T_y, T_z]^T$ and the angular velocity $[\omega_x, \omega_y, \omega_z]^T$.

For each feature point $f_i = [u_i, v_i]^T$ with depth $Z_i$ in the camera frame, the image Jacobian is

$$J_i(r) = \begin{bmatrix} -\frac{\lambda}{Z_i} & 0 & \frac{u_i}{Z_i} & \frac{u_i v_i}{\lambda} & -\frac{\lambda^2 + u_i^2}{\lambda} & v_i \\ 0 & -\frac{\lambda}{Z_i} & \frac{v_i}{Z_i} & \frac{\lambda^2 + v_i^2}{\lambda} & -\frac{u_i v_i}{\lambda} & -u_i \end{bmatrix} \qquad (2)$$

where $\lambda$ is the focal length of the lens. Stacking the Jacobians of the $n$ feature points, Equation (1) can be written as:

$$\dot{f} = \begin{bmatrix} J_1(r) \\ \vdots \\ J_n(r) \end{bmatrix} \dot{r} = J(r)\,\dot{r} \qquad (3)$$

where $f = [f_1^T, \ldots, f_n^T]^T$ is the stacked image feature vector. Given the desired image features $f_d$, a proportional control law can be obtained through the pseudo-inverse of the image Jacobian:

$$\dot{r} = -K\,J^{+}(r)\,(f - f_d) \qquad (4)$$

where $K$ is a positive control gain and $J^{+}(r) = (J^T J)^{-1} J^T$ is the pseudo-inverse of $J(r)$.
It is assumed that the optical axis of the camera is coincident with the Z axis of the end effector. The motion of the camera can be related to the robot joint rates through the normal robot Jacobian and a fixed transformation between the motion of the camera and that of the end effector. When the image error function $e = f - f_d$ is driven to zero, the camera, and hence the end effector, reaches the desired pose with respect to the object.
The objective of IBVS in this paper is to control the end effector to approach an unknown object so that the image error function $e = f - f_d$ converges to zero, i.e., so that the tool reaches the pose from which the object can be grasped.
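To make the formulation above concrete, the following minimal Python sketch (using numpy; all names and numerical values are illustrative assumptions, not taken from the original implementation) builds the per-point image Jacobian of Equation (2), stacks it as in Equation (3), and computes the camera velocity screw with the pseudo-inverse control law of Equation (4).

```python
import numpy as np

def point_jacobian(u, v, Z, lam):
    """2x6 image Jacobian of one feature point (Equation (2)).

    u, v : image coordinates relative to the principal point
    Z    : depth of the point in the camera frame
    lam  : focal length, in the same units as u and v
    """
    return np.array([
        [-lam / Z, 0.0, u / Z, u * v / lam, -(lam**2 + u**2) / lam,  v],
        [0.0, -lam / Z, v / Z, (lam**2 + v**2) / lam, -u * v / lam, -u],
    ])

def ibvs_velocity(features, depths, targets, lam, gain=0.5):
    """Camera velocity screw [Tx, Ty, Tz, wx, wy, wz] from Equation (4)."""
    J = np.vstack([point_jacobian(u, v, Z, lam)
                   for (u, v), Z in zip(features, depths)])   # Equation (3)
    error = (np.asarray(features) - np.asarray(targets)).ravel()
    return -gain * np.linalg.pinv(J) @ error

# Example: four coplanar corner features, all at 200 mm depth (illustrative)
f_cur = [(40.0, 30.0), (-35.0, 28.0), (-38.0, -32.0), (42.0, -29.0)]
f_des = [(30.0, 30.0), (-30.0, 30.0), (-30.0, -30.0), (30.0, -30.0)]
print(ibvs_velocity(f_cur, [200.0] * 4, f_des, lam=6.02))
```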
3. IBVS with laser system
In this section, a new approach to switching control of IBVS with laser pointer is presented to accomplish the aforementioned visual servoing tasks. This approach is designed to overcome the drawbacks of IBVS by installing an off-the-shelf laser pointer on the end effector for estimating the depth of the image features and separating the visual servoing procedures into several control stages.
(a) Robotic Eye-in-Hand System
The designed robotic Eye-in-Hand system configuration is shown in Figure 1; it is composed of a 6-DOF robot, a camera mounted on the robot end effector, a laser pointer rigidly linked to the camera and a vacuum pump for grasping the object. In Figure 1, H denotes the transformation between two reference frames.

Figure 1.
Robotic Eye-in-Hand System Configuration
In traditional IBVS, since the control law is only designed in the image plane, unnecessary motions of the end effector are performed. Moreover, in order to obtain the image Jacobian matrix in Equation (3), the depth $Z_i$ of each feature point with respect to the camera frame is required. In the proposed system, the laser pointer rigidly linked to the camera provides this depth on line, as described below.
(b) On-line Depth Estimation
In Equation (3), the depths $Z_i$ of the feature points are unknown and vary as the camera moves; therefore they must be estimated on line.
Assume that the laser beam lies in the same plane as the camera optical axis. We use a camera-centered frame whose origin is at the optical center and whose Z axis coincides with the optical axis, so that the depth of the laser spot is its Z coordinate in this frame.
As shown in Figure 2, $d$ denotes the horizontal distance between the laser beam and the optical axis of the lens of the camera, and $\theta$ denotes the angle between the laser beam and the optical axis. A laser spot at depth $Z$ lies at a lateral offset of $d - Z\tan\theta$ from the optical axis, so its projection on the image plane is

$$u_l = \frac{\lambda\,(d - Z\tan\theta)}{Z} \qquad (5)$$

Solving Equation (5) for $Z$ gives the depth of the laser spot by triangulation:

$$Z = \frac{\lambda\,d}{u_l + \lambda\tan\theta} \qquad (6)$$

where $u_l$ is the image coordinate of the laser spot measured along the direction of the offset $d$.

Figure 2.
Calculation of the Depth of a Point by Using Triangulation
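As a concrete check on the geometry above, here is a small Python sketch of the triangulation in Equations (5)-(6); the values of d and θ are taken from Table 1 for illustration, while the sign convention for u_l is an assumption consistent with Figure 2.

```python
import math

def laser_depth(u_l, d, theta, lam):
    """Depth of the laser spot by triangulation (Equation (6)).

    u_l   : image coordinate of the spot along the offset direction
    d     : distance between laser beam and optical axis at the lens (mm)
    theta : angle between laser beam and optical axis (rad)
    lam   : focal length, in the same units as u_l
    """
    # Equation (5): u_l = lam * (d - Z*tan(theta)) / Z, solved for Z:
    return lam * d / (u_l + lam * math.tan(theta))

# e.g. d = 8 mm, theta = 18 deg (cf. Table 1); a spot imaged at
# u_l = -1.72 mm corresponds to a depth of about 204 mm
print(laser_depth(-1.72, 8.0, math.radians(18.0), 6.02))
```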
(c) Switching Control of IBVS with Laser Pointer
The proposed algorithm is divided into three control stages: a) driving the laser spot onto the object, b) combining IBVS with the laser spot for translational motion, and c) adjusting the fine alignment. The block diagram of the switching control system of IBVS with laser pointer is presented in Figure 3. The object is assumed to be stationary with respect to the robot reference frame.

Figure 3.
Block diagram of IBVS with Laser control system
To project the laser spot on the object, two kinds of situations need to be considered. One is that all features are in the FOV (Stage 1.A), and the other is that certain features are missing in the FOV (Stage 1.B). Both of them are discussed below in detail.
Stage 1.A: All Features in the FOV

When all image features are in the FOV, the control task is to use the 3-DOF rotational motion of the camera to drive the center of gravity of the object image features near the current laser spot, and the object image features near the created imaginary image features, so that the laser spot is projected onto the object. Equation (1) can be decomposed into translational and rotational component parts as shown below:

$$\dot{f} = J_t(r)\,\dot{r}_t + J_\omega(r)\,\dot{r}_\omega \qquad (7)$$

where $J_t(r)$ consists of the first three columns of the image Jacobian and relates the image feature velocity to the translational camera velocity $\dot{r}_t = [T_x, T_y, T_z]^T$, while $J_\omega(r)$ consists of the last three columns and relates it to the rotational camera velocity $\dot{r}_\omega = [\omega_x, \omega_y, \omega_z]^T$.

It is noted that $J_\omega(r)$ depends only on the image coordinates and not on the unknown depths $Z_i$, so the rotational control of this stage can be computed before any depth estimate is available.
It is assumed that the height of the object is relatively small; hence the position of the laser spot in the image plane changes little when the spot moves from the workspace platform onto the surface of the object.

Figure 4.
Example of creating imaginary features, (a) in 2-D image space, (b) in 3-D Cartesian space
The center of gravity of the object image features serves as an extra image feature, which is defined as

$$f_g = \frac{1}{n}\sum_{i=1}^{n} f_i \qquad (8)$$

where $f_i$, $i = 1, \ldots, n$, are the object image feature points. Together with the object features, $f_g$ forms the extended image feature vector $f_e = [f_1^T, \ldots, f_n^T, f_g^T]^T$ used in this stage.
From Equation (7), the relationship between the motion of the image features and the rotational DOF of the camera is obtained as

$$\dot{f}_e = J_{e\omega}(r)\,\dot{r}_\omega \qquad (9)$$

where $J_{e\omega}(r)$ is composed of the last three columns of the image Jacobian of the extended feature vector $f_e$.
We set the translational DOF of the camera motion as zero ($\dot{r}_t = 0$), so that Equation (9) governs the image feature motion during this stage.
Let the feature error be defined as

$$e_1 = f_e - f_e^* \qquad (10)$$

where $f_e^*$ denotes the imaginary target features together with the corresponding center of gravity. The proportional controller is then designed as:

$$\dot{r}_\omega = -K_1\,J_{e\omega}^{+}(r)\,e_1 \qquad (11)$$

where $K_1$ is a positive control gain and $J_{e\omega}^{+}(r)$ is the pseudo-inverse of $J_{e\omega}(r)$.
Since we deliberately turn off the translational motion of the camera, Equation (9), which relates the image velocity to the 3-DOF camera rotational motion, holds only approximately. The proportional controller (11) therefore cannot make the feature error $e_1$ converge exactly to zero, but it can drive the error into a small neighborhood of zero, which is sufficient for projecting the laser spot onto the object.
The switching rule is described as follows: when the error norm falls below a predetermined threshold value, the controller switches from the current stage to the second stage. The switching condition is given by

$$\|e_1\| < \varepsilon_1 \qquad (12)$$

where $\varepsilon_1$ is the predetermined threshold.
Therefore, the controller (11) is expressed as follows:

$$\dot{r}_\omega = \begin{cases} -K_1\,J_{e\omega}^{+}(r)\,e_1, & \|e_1\| \geq \varepsilon_1 \\ 0, & \|e_1\| < \varepsilon_1 \end{cases} \qquad (13)$$
Notice that this control law not only drives the laser spot onto the object but also alleviates the image singularity problem. As mentioned in (Chaumette, 1998), a pure rotation of 180 degrees around the optical axis leads to an image singularity and causes a pure backward translational camera motion along the optical axis. In the proposed algorithm, the 3-DOF rotation of the camera is mainly executed in the first control stage and the translational movement of the camera is primarily executed in the second control stage; hence, the backward translational camera motion is avoided.
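Because the rotational columns of the image Jacobian do not involve the depths, the Stage 1.A controller can be computed before any depth estimate is available. The sketch below implements the switched rotational controller of Equations (11)-(13); the gain and threshold values are placeholders, not values from the paper.

```python
import numpy as np

EPS_1 = 2.0   # switching threshold of condition (12); placeholder value

def rot_jacobian(u, v, lam):
    """Rotational (last three) columns of Equation (2); depth-independent."""
    return np.array([
        [u * v / lam, -(lam**2 + u**2) / lam,  v],
        [(lam**2 + v**2) / lam, -u * v / lam, -u],
    ])

def stage1_rotation(features, targets, lam, gain=0.4):
    """Rotational velocity [wx, wy, wz] from controller (11)/(13), or
    None once condition (12) is met and Stage 2 should take over."""
    e1 = (np.asarray(features) - np.asarray(targets)).ravel()
    if np.linalg.norm(e1) < EPS_1:
        return None
    J_w = np.vstack([rot_jacobian(u, v, lam) for u, v in features])
    return -gain * np.linalg.pinv(J_w) @ e1
```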
Stage 1.B: Features Partially Seen in the FOV

When only part of the object is in the FOV, some features are not available. In order to obtain all the features, we propose a strategy that controls the 2-DOF rotational motion of the camera so as to project the laser spot on the centroid of the partial object in the image plane until the whole object appears in the FOV. Hence the motion of the laser image feature $f_l = [u_l, v_l]^T$ is governed by

$$\dot{f}_l = J_l(r)\,\dot{r} \qquad (14)$$

where $J_l(r)$ is the $2 \times 6$ image Jacobian of the laser spot, obtained from Equation (2) with the depth estimated by Equation (6). Since all DOF other than the rotations about the X and Y axes are set to zero, Equation (14) can be written as:

$$\dot{f}_l = J_{l\omega'}(r)\,\dot{r}_{\omega'} \qquad (15)$$

where $\dot{r}_{\omega'} = [\omega_x, \omega_y]^T$ and $J_{l\omega'}(r)$ consists of the two columns of $J_l(r)$ associated with $\omega_x$ and $\omega_y$.
As mentioned before, only the 2-DOF rotational motion $\dot{r}_{\omega'} = [\omega_x, \omega_y]^T$ is used to steer the laser spot over the image plane; the remaining DOF of the camera motion are kept at zero during this stage.
The centroid of the partial object in the image is used as the desired laser image feature for the laser spot. To attain such a centroid, one generally calculates the first order moments of the partial object image. Let $R$ represent the region of the partial object in a binary image $I(u, v)$. The moments of $R$ are defined as

$$m_{pq} = \sum_{(u,v)\in R} u^p\,v^q\,I(u,v) \qquad (16)$$

where $p + q$ is the order of the moment. The centroid of $R$ is then given by the first order moments normalized by the zeroth order moment:

$$[\bar{u}, \bar{v}]^T = \left[\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right]^T \qquad (17)$$

where $m_{00}$ is the area of the region $R$.
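The centroid of Equations (16)-(17) can be computed directly on a binary image array; the following minimal sketch assumes a numpy image whose nonzero pixels belong to the region R.

```python
import numpy as np

def region_centroid(binary_img):
    """Centroid of region R from image moments (Equations (16)-(17))."""
    v_idx, u_idx = np.nonzero(binary_img)   # rows index v, columns index u
    m00 = u_idx.size                        # zeroth order moment: area of R
    if m00 == 0:
        raise ValueError("region R is empty")
    return u_idx.mean(), v_idx.mean()       # m10/m00 and m01/m00

# Example on a small synthetic blob
img = np.zeros((8, 8), dtype=np.uint8)
img[2:5, 3:6] = 1
print(region_centroid(img))                 # (4.0, 3.0)
```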
Define the image feature error between the laser image feature and the centroid of the partial object image as

$$e_b = f_l - [\bar{u}, \bar{v}]^T \qquad (18)$$

where $f_l$ is the current image feature of the laser spot. The 2-DOF rotational motion is designed with the proportional controller:

$$\dot{r}_{\omega'} = -K_b\,J_{l\omega'}^{+}(r)\,e_b \qquad (19)$$

where $K_b$ is a positive control gain.
When the image feature error $e_b$ is kept small, the laser spot stays close to the centroid of the visible part of the object while the camera rotates, and more of the object enters the FOV. As soon as all the features of the object appear in the FOV, the controller switches to Stage 1.A.
Stage 2: Combining IBVS with Laser Spot for Translational Motion

The key problem in the switching control of Stage 2 is how to obtain the image Jacobian matrix relating the motion of the image features plus the laser spot to the translational motion of the camera. According to the derivation of the traditional image Jacobian matrix, the target is supposed to be stationary. Since the object does not move, the laser spot projected on it can be considered as a stationary point, which fits the image Jacobian of traditional IBVS. Based on the above scheme, the algorithm is presented in detail as follows.
Let $f_s = [f_1^T, \ldots, f_n^T, f_l^T]^T$ denote the extended image feature vector composed of the object image features and the laser spot image feature. Its motion is related to the camera motion by

$$\dot{f}_s = J_s(r)\,\dot{r} \qquad (20)$$

where $J_s(r)$ is the stacked image Jacobian of the object features and the laser spot, with the depths estimated on line by Equation (6). The Jacobian can be partitioned as

$$J_s(r) = [\,J_{st}(r)\ \ J_{s\omega}(r)\,] \qquad (21)$$

where the two components are formed from the translational (first three) and rotational (last three) columns, respectively. Equation (20) can be written as:

$$\dot{f}_s = J_{st}(r)\,\dot{r}_t + J_{s\omega}(r)\,\dot{r}_\omega \qquad (22)$$

And the translational motion of the camera is derived as:

$$\dot{r}_t = J_{st}^{+}(r)\,\bigl(\dot{f}_s - J_{s\omega}(r)\,\dot{r}_\omega\bigr) \qquad (23)$$

Set the rotational motion of the camera to zero ($\dot{r}_\omega = 0$). The above equation is rewritten as:

$$\dot{r}_t = J_{st}^{+}(r)\,\dot{f}_s \qquad (24)$$
The control objective is to move the image features of the object plus the laser spot image close to the target image features by using the translational motion of the camera. The target image features include the four coplanar corner points of the object image plus the desired laser spot image feature, denoted together by $f_s^*$.
The translational motion of the camera is designed by imposing an exponential decrease of the feature error on Equation (24), which yields the proportional controller:

$$\dot{r}_t = -K_2\,J_{st}^{+}(r)\,(f_s - f_s^*) \qquad (25)$$

where $K_2$ is a positive control gain.
The switching rule is described as follows: if the norm of the image feature error between the current image features and the desired image features falls below a threshold $\varepsilon_2$, the controller switches from Stage 2 to the final stage.
Since the translational motion of the camera is only approximately given by Equation (24), the proportional controller (25) cannot drive the image feature error exactly to zero; it can, however, bring the image features into a small neighborhood of the target features, which is sufficient to trigger the switch to the final stage.
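Under the same assumptions as the earlier sketches, the Stage 2 controller of Equations (24)-(25) uses only the translational (first three) columns of the stacked Jacobian of the object features plus the laser spot:

```python
import numpy as np

EPS_2 = 2.0   # threshold for switching to Stage 3; placeholder value

def stage2_translation(J_s, f_s, f_s_star, gain=0.4):
    """Translational velocity [Tx, Ty, Tz] from controller (25), or None
    once the error norm falls below EPS_2 and Stage 3 should take over.

    J_s      : stacked image Jacobian of object features plus laser spot,
               with depths estimated on line via Equation (6)
    f_s      : current extended feature vector, flattened
    f_s_star : target features (four corners plus desired laser spot)
    """
    e2 = np.asarray(f_s) - np.asarray(f_s_star)
    if np.linalg.norm(e2) < EPS_2:
        return None
    J_t = J_s[:, :3]                         # translational columns only
    return -gain * np.linalg.pinv(J_t) @ e2
```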
Stage 3: Adjusting the Fine Alignment

After applying the switching controllers in Stages 1 and 2, the end effector has been brought into a neighborhood of the target image features. It has been shown in (Chaumette, 1998) that the constant image Jacobian evaluated at the desired camera configuration can be used to reach the image global minimum. Hence, a constant image Jacobian matrix is used in this stage to adjust the fine alignment of the end effector so that the vacuum pump can pick up the object precisely. The laser image feature is no longer used, and traditional IBVS with the constant image Jacobian of the target image features is applied.
Since the image features are now close to their desired positions in the image plane, the depths of the feature points are approximately equal to the desired ones (the object is essentially planar). The target features and their corresponding depths are therefore applied to traditional IBVS with the constant image Jacobian matrix:

$$J^* = J(f^*, Z^*) \qquad (26)$$

where $f^*$ denotes the target image features and $Z^*$ the corresponding depths at the desired camera configuration.
The control goal of this stage is to control the pose of the end effector so that the image feature error between the current image features $f$ and the target image features $f^*$,

$$e_3 = f - f^* \qquad (27)$$

converges to the image global minimum, where $f$ is composed of the four coplanar corner points of the object image.
The proportional controller is designed as:

$$\dot{r} = -K_3\,(J^*)^{+}\,e_3 \qquad (28)$$

where $K_3$ is a positive control gain and $(J^*)^{+}$ is the pseudo-inverse of the constant image Jacobian $J^*$.
The threshold $\varepsilon_3$ for terminating this stage is chosen small enough that, when $\|e_3\| < \varepsilon_3$, the vacuum pump is aligned with the object accurately enough to pick it up.
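For this final stage the image Jacobian is evaluated once at the target features and depths (Equation (26)) and then held constant. A minimal sketch, reusing point_jacobian() from the Section 2 sketch:

```python
import numpy as np

def make_stage3_controller(f_star, Z_star, lam, gain=0.3):
    """Fine-alignment control law with the constant image Jacobian of
    Equation (26); reuses point_jacobian() from the Section 2 sketch."""
    J_star = np.vstack([point_jacobian(u, v, Z, lam)
                        for (u, v), Z in zip(f_star, Z_star)])
    J_pinv = np.linalg.pinv(J_star)          # computed once, held constant

    def control(f):
        """6-DOF velocity screw of controller (28) for current features f."""
        e3 = np.asarray(f).ravel() - np.asarray(f_star).ravel()
        return -gain * J_pinv @ e3           # Equations (27)-(28)

    return control
```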
It should be noted that the proposed algorithm is derived from traditional IBVS; therefore it inherits the advantages of IBVS, namely that it needs no object model and is robust to camera calibration and hand-eye calibration errors. In addition, by using a laser pointer and the separated-DOF method, the proposed switching algorithm decouples the rotational and translational motion control of the robotic end effector to overcome the inherent drawbacks of traditional IBVS, such as image singularities and image local minima.
4. Experimental results
The proposed IBVS with laser pointer has been tested on a robotic assembly system consisting of an industrial Motoman UPJ robot with a JRC controller, a laser pointer, and a PC-based vision system with a Matrox frame grabber and a Sony XC55 camera mounted on the robot. The robotic assembly system setup for testing IBVS with laser pointer is shown in Figure 5.
To verify the effectiveness of the proposed method, the plastic object shown in Figure 5 is chosen to be assembled into a metallic part. The four coplanar corners of its top surface are selected as the target features. One advantage of the JRC controller of the UPJ robot is that it accepts position and orientation values and computes the joint angles itself, which eliminates the robot kinematic modeling error. However, the drawback of the JRC controller is that it cannot be used for real-time control: when the designed controller generates a new position or orientation value and sends it to the JRC controller, the JRC controller does not respond until the previous position is reached in each iteration. With this hardware limitation, we have to divide the calculated value into a series of small increments by a constant factor in each step, and increase the sampling time as well. In the experiment, we chose the constant factor as 50 and the sampling time as 2 seconds.
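Because the JRC controller blocks until each commanded pose is reached, each computed pose change has to be split into small increments. The following schematic sketch uses the division factor and sampling time reported above; jrc.move_relative() is a hypothetical stand-in for the actual position/orientation interface of the controller.

```python
import time

FACTOR = 50          # constant division factor chosen in the experiment
SAMPLE_TIME = 2.0    # sampling time (s) chosen in the experiment

def command_pose_change(jrc, pose_delta):
    """Send one scaled-down pose increment per control cycle.

    pose_delta : [x, y, z, roll, pitch, yaw] change computed by the
                 visual servoing controller for the current cycle
    jrc        : hypothetical handle to the JRC controller interface
    """
    increment = [x / FACTOR for x in pose_delta]
    jrc.move_relative(increment)   # blocks until the increment is reached
    time.sleep(SAMPLE_TIME)        # wait out the sampling period
```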

Figure 5.
Robotic assembly system setup and components to be assembled
The desired pose of the end effector with respect to the object is taught beforehand, and the corresponding target image features are recorded for use in the servoing experiments.
(a) With Good Calibration Value
The proposed algorithm is first tested with good calibration values (Table 1). The parameters d and θ of the laser pointer are given by the transformation between the laser pointer frame and the camera frame in Table 1. The assembly sequence is shown in Figure 6, and the resulting image trajectory is shown in Figure 7 (a).
Parameters | Good calibration values | Bad calibration values
--- | --- | ---
Principal point (pixel) | [326, 228] | [384, 288]
Focal length (mm) | 6.02 | 7.2
Effective size (mm) | |
Laser pointer frame w.r.t. camera frame [x, y, z (mm); roll, pitch, yaw (deg)] | [8, 0, 0, 0, -18, 0] | [6, 0, 0, 0, -14, 0]
Camera frame w.r.t. end effector frame [x, y, z (mm); roll, pitch, yaw (deg)] | [-7, 5, 189, -2, 0, -179] | [-20, 20, 210, 5, 5, 180]

Table 1. Good and bad calibration values of system parameters
(b) With Bad Calibration Value
To test the robustness of the IBVS with laser pointer, a camera calibration error of 20% deviation is added to the intrinsic parameters, as shown in Table 1. The good and bad calibration values of the transformations among the camera frame, the laser pointer frame and the robot end effector frame are also shown in Table 1. The object position is the same as in the experiment with good calibration values, and the resulting image trajectory is shown in Figure 7 (b).
Although the image trajectory shown in Figure 7 (b) is distorted compared with the trajectory presented in Figure 7 (a), the image features still converge to the desired positions and the assembly task is accomplished successfully as well. Hence, the proposed algorithm is robust to camera calibration and hand-eye calibration errors on the order of 20% deviation from the nominal values.

Figure 6.
Assembly sequence: (a) initial position, (b) end of Stage 1, (c) end of Stage 2, (d) end of Stage 3 (ready to pick up the object), (e) picking up the object, (f) assembly

Figure 7.
Image trajectory (a) with good calibration value (b) with bad calibration value
In order to test the convergence of the depth estimation, experiments on the depth estimation are carried out with both good and bad calibration values. The experimental results are presented in Figure 8. Both results show that the estimated depth converges to the real distance, which experimentally verifies the convergence of the depth estimation.

Figure 8.
Experimental results of depth estimation
5. Conclusion
In this paper, a new approach to image based visual servoing with a laser pointer is developed. A laser pointer is adopted and the triangulation method is used to estimate the depth between the camera and the object. The switching control of IBVS with laser pointer is decomposed into three stages to accomplish visual servoing tasks under various circumstances. The algorithm has been successfully applied to an experimental robotic assembly system. The experimental results verify the effectiveness of the proposed method and also validate the feasibility of applying it to industrial manufacturing systems. Future work includes testing the proposed method on a robot manipulator that supports real-time control and conducting an analytical convergence analysis of the switching control algorithm.
References
- 1. Aouf, N.; Rajabi, H.; Rajabi, N.; Alanbari, H. & Perron, C. (2004). "Visual object tracking by a camera mounted on a 6DOF industrial robot," IEEE Conference on Robotics, Automation and Mechatronics, Vol. 1, pp. 213-218, Dec. 2004.
- 2. Chaumette, F. (1998). "Potential problems of stability and convergence in image-based and position-based visual servoing," in The Confluence of Vision and Control, D. Kriegman, G. Hager & A. Morse, Eds., Lecture Notes in Control and Information Sciences, Vol. 237, pp. 66-78, Springer-Verlag, Berlin, Germany, 1998.
- 3. Chesi, G.; Hashimoto, K.; Prattichizzo, D. & Vicino, A. (2004). "Keeping features in the field of view in eye-in-hand visual servoing: a switching approach," IEEE Transactions on Robotics, Vol. 20, No. 5, pp. 908-913, Oct. 2004.
- 4. Corke, P. I. & Hutchinson, S. A. (2001). "A new partitioned approach to image-based visual servo control," IEEE Transactions on Robotics and Automation, Vol. 17, No. 4, pp. 507-515, Aug. 2001.
- 5. Corke, P. I. (1996). "Robotics toolbox for MATLAB," IEEE Robotics & Automation Magazine, Vol. 3, No. 1, pp. 24-32, 1996.
- 6. Corke, P. I. (1996). Visual Control of Robots: High Performance Visual Servoing, Research Studies Press, 1996.
- 7. DeMenthon, D. F. & Davis, L. S. (1995). "Model-based object pose in 25 lines of code," International Journal of Computer Vision, Vol. 15, No. 1-2, pp. 123-142, 1995.
- 8. Deng, L.; Janabi-Sharifi, F. & Wilson, W. J. (2005). "Hybrid motion control and planning strategies for visual servoing," IEEE Transactions on Industrial Electronics, Vol. 52, No. 4, pp. 1024-1040, Aug. 2005.
- 9. Deng, L.; Wilson, W. J. & Janabi-Sharifi, F. (2003). "Dynamic performance of the position-based visual servoing method in the Cartesian and image spaces," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 1, pp. 510-515, 2003.
- 10. Espiau, B. (1993). "Effect of camera calibration errors on visual servoing in robotics," in Experimental Robotics III: The 3rd International Symposium on Experimental Robotics, Lecture Notes in Control and Information Sciences, Vol. 200, pp. 182-192, Springer-Verlag, 1993.
- 11. Gans, N. R. & Hutchinson, S. A. (2003). "An experimental study of hybrid switched system approaches to visual servoing," Proceedings of ICRA'03, IEEE International Conference on Robotics and Automation, Vol. 3, pp. 3061-3068, Sept. 2003.
- 12. Hashimoto, K. & Noritsugu, T. (2000). "Potential problems and switching control for visual servoing," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 1, pp. 423-428, Oct. 2000.
- 13. Hutchinson, S.; Hager, G. D. & Corke, P. I. (1996). "A tutorial on visual servo control," IEEE Transactions on Robotics and Automation, Vol. 12, No. 5, pp. 651-670, Oct. 1996.
- 14. Janabi-Sharifi, F. & Ficocelli, M. (2004). "Formulation of radiometric feasibility measures for feature selection and planning in visual servoing," IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, Vol. 34, No. 2, pp. 978-987, April 2004.
- 15. Kase, H.; Maru, N.; Nishikawa, A.; Yamada, S. & Miyazaki, F. (1993). "Visual servoing of the manipulator using the stereo vision," Proceedings of IECON'93, IEEE International Conference on Industrial Electronics, Control, and Instrumentation, Vol. 3, pp. 1791-1796, 1993.
- 16. Krupa, A.; Gangloff, J.; Doignon, C.; de Mathelin, M.; Morel, G.; Leroy, J.; Soler, L. & Marescaux, J. (2003). "Autonomous 3-D positioning of surgical instruments in robotized laparoscopic surgery using visual servoing," IEEE Transactions on Robotics and Automation, Vol. 19, No. 5, pp. 842-853, 2003.
- 17. Li, Z.; Xie, W. F. & Aouf, N. (2006). "A neural network based hand-eye calibration approach in robotic manufacturing systems," CSME 2006, Calgary, May 2006.
- 18. Mahony, R.; Corke, P. & Chaumette, F. (2002). "Choice of image features for depth-axis control in image based visual servo control," IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 1, pp. 390-395, 2002.
- 19. Malis, E.; Chaumette, F. & Boudet, S. (1999). "2-1/2-D visual servoing," IEEE Transactions on Robotics and Automation, Vol. 15, No. 2, pp. 238-250, April 1999.
- 20. Oh, P. Y. & Allen, P. K. (2001). "Visual servoing by partitioning degrees of freedom," IEEE Transactions on Robotics and Automation, Vol. 17, No. 1, pp. 1-17, 2001.
- 21. Pages, J.; Collewet, C.; Chaumette, F. & Salvi, J. (2006). "Optimizing plane-to-plane positioning tasks by image-based visual servoing and structured light," IEEE Transactions on Robotics, Vol. 22, No. 5, pp. 1000-1010, 2006.
- 22. Tahri, O. & Chaumette, F. (2005). "Point-based and region-based image moments for visual servoing of planar objects," IEEE Transactions on Robotics, Vol. 21, No. 6, pp. 1116-1127, Dec. 2005.
- 23. Trucco, E. & Verri, A. (1998). Introductory Techniques for 3-D Computer Vision, Prentice Hall, 1998.
- 24. Wang, J. P. & Cho, H. (2008). "Micropeg and hole alignment using image moments based visual servoing method," IEEE Transactions on Industrial Electronics, Vol. 55, No. 3, pp. 1286-1294, March 2008.
- 25. Weiss, L. E.; Sanderson, A. C. & Neuman, C. P. (1987). "Dynamic sensor-based control of robots with visual feedback," IEEE Journal of Robotics and Automation, Vol. 3, No. 5, pp. 404-417, Oct. 1987.
- 26. Wilson, W. J.; Hulls, C. C. W. & Bell, G. S. (1996). "Relative end-effector control using Cartesian position-based visual servoing," IEEE Transactions on Robotics and Automation, Vol. 12, No. 5, pp. 684-696, Oct. 1996.