Abstract
Accuracy and stability are two fundamental concerns of a visual servoing control system. This chapter presents a sliding mode controller for image‐based visual servoing (IBVS) that can increase the accuracy of a 6DOF robotic system with guaranteed stability. The proposed controller combines proportional derivative (PD) control with sliding mode control (SMC) for a 6DOF manipulator. Compared with a conventional proportional or SMC controller, this approach offers faster convergence and better disturbance rejection. Both simulation and experimental results show that the proposed controller can increase the accuracy and robustness of a 6DOF robotic system.
Keywords
- sliding mode control
- image-based visual servoing
- 6DOF robotic manipulator
1. Introduction
With the development of industrial manufacturing technology, production processes keep advancing, and more dexterous and more efficient machines are needed to meet these large changes.
Visual servoing systems have emerged to meet this need; they can handle the dynamic interaction between the manipulator and the environment and have been applied in various settings where high accuracy and strong robustness are required, such as cell injection in Ref. [1], and car steering, aircraft landing and missile tracking in Ref. [2]. Generally speaking, a system where a camera is used as the visual sensor in the feedback loop is referred to as a visual servoing system. Depending on the configuration of the camera with respect to the robot, visual servoing can be classified as eye‐in‐hand, where the camera is installed at the end effector, and eye‐to‐hand, where the camera is fixed in the workspace [3]. In this chapter, attention is focused on the eye‐in‐hand configuration.
Furthermore, visual servoing can be classified into three different classes: image‐based visual servoing (IBVS), position‐based visual servoing (PBVS) and hybrid visual servoing (HVS). Their performances have been described in detail in Refs. [2–6]. In comparison, although PBVS is convenient in actual applications, it requires a calibrated camera and a known geometric model of the target. Its control performance depends on the accuracy of the camera calibration and the object geometric model, which is difficult to ensure. IBVS directly uses image feature errors to calculate the control signal, which reduces computational delay and makes the scheme less sensitive to calibration and model errors in Refs. [3, 5]. In this chapter, attention is focused on IBVS.
Various control methods have been applied to IBVS systems, including proportional‐integral‐derivative (PID) control in Refs. [4, 7, 8], predictive control in Ref. [9], sliding mode control (SMC) in Ref. [10], adaptive control in Ref. [11], and so on. The core of these control methods is to generate a velocity vector or an acceleration vector as the control input to guide the end effector to the desired position and complete the overall control task.
Particularly, PID control has a wide range of applications because of its simple form and popularity among engineers. Researchers have conducted various analyses on using a proportional or proportional derivative (PD) controller to produce a velocity command in Ref. [7]; the convergence of the proportional or PD controller is satisfactory, but sudden variations or small oscillations can occur due to image noise or motion vibration. To address these issues, a control scheme using a PD controller that produces an acceleration command as the control input was proposed in Ref. [8]. That method can solve the above‐mentioned problems; however, most visual servoing systems accept only velocity signals as the control input. SMC has been successfully applied in several automatic control fields due to its insensitivity to model uncertainties and disturbances in Ref. [12]. Using SMC in IBVS, PBVS or robotic manipulators to guarantee system robustness and good tracking performance has been reported in the literature in recent years in Refs. [13–15]. Meanwhile, the chattering phenomenon of SMC also needs to be considered in some situations.
In this chapter, a new enhanced IBVS scheme that combines PD control with SMC is proposed to generate the velocity profile to control the robotic manipulator. This PD‐SMC method takes advantage of both the PD and SMC methods. The stability of the enhanced IBVS method is proved by using the Lyapunov method. It achieves better convergence performance, ensures the stability of the system and provides strong robustness when the system is subjected to uncertainty and noise.
This chapter is structured as follows. The visual servoing system model is described in Section 2. The enhanced controller is designed in Section 3. The system stability is analysed in Section 4. The simulations are performed in Section 5. The experiments are performed in Section 6. The concluding remarks and future work are mentioned in Section 7.
2. System description
In the IBVS system, the control problem can be expressed as obtaining the relation between the derivative of the image features and the camera spatial velocity in Refs. [3, 4]. The system model, which consists of a 6DOF manipulator with a camera mounted on its end effector, is shown in Figure 1.

Figure 1.
IBVS with eye‐in‐hand configuration.
Before going into the detailed discussion of the system model, the following notations are introduced. The camera spatial velocity is denoted by

$$v_c = [v_x, v_y, v_z, \omega_x, \omega_y, \omega_z]^T \tag{1}$$

where $(v_x, v_y, v_z)$ is the linear velocity and $(\omega_x, \omega_y, \omega_z)$ is the angular velocity of the camera frame.
Using the velocity of the point relative to the camera frame, the relationship between the feature velocity and the camera velocity in normalized image coordinates can be described in Ref. [3] as follows:

$$\dot{s} = L_s v_c \tag{2}$$

where $s = (x, y)$ denotes a feature point in normalized image coordinates and $L_s$ is the interaction matrix of the point, which, for a point at depth $Z$, is

$$L_s = \begin{bmatrix} -\dfrac{1}{Z} & 0 & \dfrac{x}{Z} & xy & -(1+x^2) & y \\ 0 & -\dfrac{1}{Z} & \dfrac{y}{Z} & 1+y^2 & -xy & -x \end{bmatrix} \tag{3}$$
Since a 6DOF manipulator needs to be controlled, at least three feature points are necessary to control all six degrees of freedom. However, with only three points the interaction matrix can become singular and multiple global minima can exist in Refs. [4, 8]. For this reason, four feature points are used to control the 6DOF motion in space, and the expression is written as follows:

$$\dot{s} = L_s v_c, \qquad L_s = \begin{bmatrix} L_1 \\ L_2 \\ L_3 \\ L_4 \end{bmatrix} \in \mathbb{R}^{8 \times 6} \tag{4}$$

where $L_i$ $(i = 1, \ldots, 4)$ is the $2 \times 6$ interaction matrix of the $i$th feature point and $s$ now stacks the coordinates of the four points.
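To make the structure of Eqs. (2)–(4) concrete, the following Python/NumPy sketch builds the 2 × 6 interaction matrix of a single point and stacks four such blocks; the point coordinates and depths are hypothetical placeholders, not values from the chapter's tests.

```python
import numpy as np

def interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    """2x6 interaction matrix of a point (x, y) in normalized image
    coordinates at depth Z, as in Eq. (3)."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,        -(1.0 + x ** 2),  y],
        [0.0,     -1.0 / Z,  y / Z, 1.0 + y ** 2, -x * y,          -x],
    ])

def stacked_interaction_matrix(points, depths) -> np.ndarray:
    """8x6 matrix stacking the 2x6 blocks of four points, as in Eq. (4)."""
    return np.vstack([interaction_matrix(x, y, Z)
                      for (x, y), Z in zip(points, depths)])

# Hypothetical example: four corners of a square observed at 1 m depth.
pts = [(-0.1, -0.1), (-0.1, 0.1), (0.1, 0.1), (0.1, -0.1)]
L = stacked_interaction_matrix(pts, [1.0] * 4)
print(L.shape)  # (8, 6); full column rank away from singular configurations
```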
3. Controller design
The general design approach of a visual servoing controller is to use proportional control to generate the control signal. This approach is also applied to the conventional IBVS, and the control law can be described as follows:

$$v_c = -\lambda \hat{L}_s^{+} e \tag{5}$$

where $\lambda$ is a positive gain, $e = s - s^{*}$ is the image feature error between the current features $s$ and the desired features $s^{*}$, and $\hat{L}_s^{+}$ is the Moore–Penrose pseudoinverse of the estimated interaction matrix.
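As a minimal sketch of Eq. (5), assuming the stacked matrix from the previous snippet, the conventional law reduces to a single pseudoinverse computation (the gain value is illustrative):

```python
import numpy as np

def proportional_ibvs(L_hat: np.ndarray, e: np.ndarray,
                      lam: float = 0.5) -> np.ndarray:
    """Conventional IBVS law of Eq. (5): returns the camera velocity
    screw (6,) from the stacked feature error e (8,)."""
    return -lam * np.linalg.pinv(L_hat) @ e
```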
Proportional control is a prompt and timely control method. However, it cannot eliminate the system residual error. In this chapter, PD control is used to replace the proportional control, which improves the control quality with a faster convergence speed and a smaller error. Meanwhile, in order to improve the system stability, sliding mode control is also adopted to compensate for uncertainties of the system. This enhanced approach, which combines PD control with SMC based on IBVS, is referred to as the hybrid PD‐SMC method.
The sliding surface is defined as follows:

$$\sigma = e = s - s^{*} \tag{6}$$

where $\sigma$ is chosen directly as the image feature error, since the feature dynamics in Eq. (2) are of first order in $e$.
Adding the sliding mode control and applying PD control to the visual servoing system in Ref. [15], the modified control law can be written as follows:

$$v_c = -\hat{L}_s^{+}\left(K_p e + K_d \dot{e} + K_s\,\mathrm{sgn}(\sigma)\right) \tag{7}$$

where $K_p$, $K_d$ and $K_s$ are positive definite gain matrices of the proportional, derivative and switching terms, respectively, and $\mathrm{sgn}(\cdot)$ is the sign function applied element‐wise.
The above control scheme is prone to the chattering phenomenon. In order to smooth the chattering, a saturation function is used to replace the sign function, and the control law is described as follows:

$$v_c = -\hat{L}_s^{+}\left(K_p e + K_d \dot{e} + K_s\,\mathrm{sat}(\sigma/\Phi)\right) \tag{8}$$

where $\Phi > 0$ is the boundary layer width and $\mathrm{sat}(\cdot)$ is the saturation function, which is defined element‐wise as follows:

$$\mathrm{sat}(x) = \begin{cases} x, & |x| \le 1 \\ \mathrm{sgn}(x), & |x| > 1 \end{cases} \tag{9}$$
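Under the reconstruction above, the hybrid law of Eq. (8) can be sketched as follows; the gain matrices and the boundary layer width are illustrative choices, not the values used in the chapter's tests:

```python
import numpy as np

def sat(x: np.ndarray) -> np.ndarray:
    """Saturation function of Eq. (9): linear inside the boundary
    layer, equal to sgn(x) outside."""
    return np.clip(x, -1.0, 1.0)

def pd_smc_ibvs(L_hat: np.ndarray, e: np.ndarray, e_dot: np.ndarray,
                Kp: np.ndarray, Kd: np.ndarray, Ks: np.ndarray,
                Phi: float = 0.05) -> np.ndarray:
    """Hybrid PD-SMC law of Eq. (8)."""
    sigma = e                              # sliding surface of Eq. (6)
    u = Kp @ e + Kd @ e_dot + Ks @ sat(sigma / Phi)
    return -np.linalg.pinv(L_hat) @ u      # camera velocity screw (6,)
```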
This control law is an enhanced IBVS scheme, which combines PD control with SMC. SMC is well known for its robustness in Refs. [14–16]. By applying this control method, the controller is expected to achieve better robustness, faster convergence speed and higher accuracy, as demonstrated in the simulation and experiment sections.
4. Stability analysis
The stability analysis of the proposed controller is based on the Lyapunov direct method in Ref. [12]. Considering the uncertainties in depth, the estimated interaction matrix can be described as follows:

$$\hat{L}_s = L_s \big|_{Z = \hat{Z}} \tag{10}$$

where $\hat{Z}$ is the estimated depth used in place of the true depth $Z$ in Eq. (3).
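A common practical choice, assumed here for illustration, is to evaluate the estimate at a single constant depth guess, for example the depth at the desired pose; the sketch reuses interaction_matrix() from the Section 2 snippet:

```python
import numpy as np

# Assumes interaction_matrix() from the Section 2 sketch is in scope.

def estimated_interaction_matrix(points, Z_hat: float) -> np.ndarray:
    """Stacked 8x6 estimate of Eq. (10): the true depths are unknown,
    so every 2x6 block is evaluated at the single depth guess Z_hat."""
    return np.vstack([interaction_matrix(x, y, Z_hat) for x, y in points])
```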
A Lyapunov function candidate is chosen as $V = \frac{1}{2} e^{T} e$, and its time derivative is obtained as follows:

$$\dot{V} = e^{T} \dot{e} \tag{11}$$
By substituting Eq. (8) into Eq. (2), the system error dynamic equation is obtained as follows:

$$\dot{e} = \dot{s} = -L_s \hat{L}_s^{+}\left(K_p e + K_d \dot{e} + K_s\,\mathrm{sat}(\sigma/\Phi)\right) \tag{12}$$
Moving the term associated with $\dot{e}$ to the left‐hand side yields

$$\left(I + L_s \hat{L}_s^{+} K_d\right)\dot{e} = -L_s \hat{L}_s^{+}\left(K_p e + K_s\,\mathrm{sat}(\sigma/\Phi)\right) \tag{13}$$
The time derivative of the Lyapunov function is then obtained as follows:

$$\dot{V} = e^{T}\dot{e} = -e^{T}\left(I + L_s \hat{L}_s^{+} K_d\right)^{-1} L_s \hat{L}_s^{+}\left(K_p e + K_s\,\mathrm{sat}(\sigma/\Phi)\right) \tag{14}$$
It is noted that the rank of $L_s \hat{L}_s^{+}$ is at most 6, so the analysis holds locally, as in the classical IBVS stability analysis in Ref. [4].
Provided that the depth estimation error is bounded, the following condition can be ensured:

$$e^{T} L_s \hat{L}_s^{+} e > 0 \tag{15}$$

where $e \ne 0$ lies in a neighbourhood of the desired configuration.
According to the above conditions, the time derivative of the Lyapunov function satisfies

$$\dot{V} \le 0 \tag{16}$$
By applying Barbalat's lemma, it follows that $e \to 0$ as $t \to \infty$; that is, the image feature error asymptotically converges to zero and the closed‐loop system is stable.
5. Simulations
Simulations have been conducted on a 6DOF Puma 560 robot model by using the MATLAB Robotics Toolbox and Machine Vision Toolbox in Ref. [3]. The 6DOF arm is chosen as the manipulator and the camera is mounted on the end effector, assuming no transformation between the end effector and the camera. The camera characteristics are shown in Table 1. The maximum linear velocity of the Puma 560 is 0.5 m/s according to the robot user manual in Refs. [17, 18].
| Parameters | Values |
|---|---|
| Focal length | 0.008 (m) |
| Principal point | (512, 512) |
| Camera resolution | 1024 × 1024 |

Table 1.
Camera parameters in simulations.
To analyse and compare the performance of the hybrid PD‐SMC IBVS with the conventional IBVS, three simulation tests have been conducted: pure translation of features, pure rotation of features, and a disturbance rejection test. Four feature points are used in the visual servoing control. The initial and desired positions of the image features are given in Table 2; a minimal closed‐loop sketch of this setup follows the table.
| Positions | ($u_1$, $v_1$) | ($u_2$, $v_2$) | ($u_3$, $v_3$) | ($u_4$, $v_4$) |
|---|---|---|---|---|
| Initial (Test 1) | (360, 401) | (361, 611) | (570, 610) | (573, 402) |
| Desired (Test 1) | (412, 412) | (412, 612) | (612, 612) | (612, 412) |
| Initial (Test 2) | (360, 401) | (361, 611) | (570, 610) | (573, 402) |
| Desired (Test 2) | (362, 506) | (466, 612) | (572, 506) | (466, 403) |

Table 2.
Initial and desired positions.
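The closed‐loop sketch below mirrors the structure of the pure‐translation test using the Table 1 camera model and the Table 2 feature positions. It reuses the functions from the earlier snippets; the pixel size, sample time, depth and gains are assumptions, and the features are propagated directly through Eq. (2) rather than through the full Puma 560 kinematics, so it only illustrates the control loop, not the toolbox simulation itself.

```python
import numpy as np

# Assumes stacked_interaction_matrix() and pd_smc_ibvs() from the
# earlier sketches are in scope.

f, (u0, v0) = 0.008, (512.0, 512.0)  # Table 1: focal length (m), principal point
rho = 1.0e-5                         # assumed pixel size (m/pixel); not given in Table 1

def normalize(uv: np.ndarray) -> np.ndarray:
    """Pixel coordinates -> normalized image coordinates."""
    return (uv - np.array([u0, v0])) * rho / f

init = np.array([[360, 401], [361, 611], [570, 610], [573, 402]], float)
goal = np.array([[412, 412], [412, 612], [612, 612], [612, 412]], float)

s, s_star = normalize(init).ravel(), normalize(goal).ravel()
dt, Z = 0.02, 1.0                    # assumed sample time (s) and constant depth (m)
e_prev = s - s_star

for _ in range(500):
    e = s - s_star
    e_dot = (e - e_prev) / dt        # backward-difference derivative estimate
    L = stacked_interaction_matrix(s.reshape(4, 2), [Z] * 4)
    v_c = pd_smc_ibvs(L, e, e_dot, Kp=2.0 * np.eye(8),
                      Kd=0.1 * np.eye(8), Ks=0.05 * np.eye(8))
    s = s + (L @ v_c) * dt           # propagate features through Eq. (2)
    e_prev = e

print(np.linalg.norm(s - s_star))    # residual feature error, ideally near zero
```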
Test 1: in this test, a pure translational motion is completed. Figures 2 and 3 show the feature position error and the joint velocity convergence of IBVS and enhanced IBVS (PD‐SMC), respectively, under the pure translation condition. Figure 4 shows the feature trajectory in image space under the pure translation condition.

Figure 2.
Feature error variation in pure translation test: (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 3.
Joint velocity variation in pure translation test. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 4.
Feature trajectory in image space in Test 1.
Test 2: in this test, a pure rotational motion is completed. Figures 5 and 6 show the feature position error and the joint velocity convergence of IBVS and enhanced IBVS (PD‐SMC), respectively, under the pure rotation condition. Figure 7 shows the feature trajectory in image space under the pure rotation condition.

Figure 5.
Feature error variation in pure rotation test. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 6.
Joint velocity variation in pure rotation test. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 7.
Feature trajectory in image space in Test 2.
Test 3: in this test, a chirp signal is added as a disturbance to demonstrate the robustness of the enhanced IBVS. Figures 8 and 9 show the feature position error and the joint velocity convergence of IBVS and enhanced IBVS (PD‐SMC), respectively, under the disturbance.

Figure 8.
Feature error variation with disturbance. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 9.
Joint velocity variation with disturbance. (a) IBVS and (b) enhanced IBVS (PD‐SMC).
According to the three test results, it is obvious that the performance of enhanced IBVS is better than that of IBVS. More specifically, the simulation results demonstrate that the PD‐SMC control system achieves a higher convergence rate, a more accurate converged state and stronger robustness.
To further compare the performance of IBVS and enhanced IBVS, the performance index ISE (integral of squared error) is adopted, which is defined as

$$\mathrm{ISE} = \int_{0}^{\infty} e^{T}(t)\, e(t)\, dt \tag{17}$$
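Over a sampled error trajectory, this integral can be approximated with a rectangular rule; a minimal sketch, with an assumed sample time, is:

```python
import numpy as np

def ise(errors: np.ndarray, dt: float = 0.02) -> float:
    """Rectangular-rule approximation of Eq. (17); errors is an (N, 8)
    array holding the stacked feature error at each sample instant."""
    return float(np.sum(errors ** 2) * dt)
```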
The results are summarized in Table 3, where ‘ISE total’ represents the total integral of the squared feature error.
| | IBVS | Enhanced IBVS |
|---|---|---|
| Test 1: ISE total | 1.7875 × 10⁴ | 5.3609 × 10³ |
| Test 2: ISE total | 4.5601 × 10⁵ | 1.6251 × 10⁵ |
| Test 3: ISE total | 1.7639 × 10⁴ | 5.3348 × 10³ |

Table 3.
ISE values of IBVS and enhanced IBVS.
6. Experiments
To further validate the performance of the proposed method, experimental tests are conducted on a 6DOF Denso robot. The experimental setup consists of a controller and two computers in a dual‐PC bilateral teleoperation architecture. PC No. 1 (master PC) communicates with the master robot and transmits the commands to the slave PC (PC No. 2) over the communication network. The slave PC communicates with the slave robot (the Denso robot), obtains the camera data and sends it back to the master PC over the communication network in Refs. [17, 18]. The experimental setup is shown in Figure 10. The experimental system is shown in Figure 11. A Denso VP6242G is used as the manipulator as in Ref. [8]; the characteristics of the camera are given in Table 4.

Figure 10.
Experimental setup.

Figure 11.
Experimental system.
| Parameters | Values |
|---|---|
| Focal length | 0.004 (m) |
| Scale factor along the $u$‐axis | 110,000 (pixel/m) |
| Scale factor along the $v$‐axis | 110,000 (pixel/m) |
| Image plane offset of the $u$‐axis | 120 (pixel) |
| Image plane offset of the $v$‐axis | 187 (pixel) |

Table 4.
Camera parameters in experiments.
Three experimental tests have been conducted: long‐distance translation of features, pure rotation of features, and a hybrid translation‐rotation test. Four feature points are used in the visual servoing control. The initial and desired positions of the image features are given in Table 5. Figure 12 shows the Denso robot in the initial position and during the work process.

Figure 12.
Denso robot. (a) Initial position and (b) work process.
| Positions | ($u_1$, $v_1$) | ($u_2$, $v_2$) | ($u_3$, $v_3$) | ($u_4$, $v_4$) |
|---|---|---|---|---|
| Initial (Test 1) | (57, 150) | (57, 57) | (146, 63) | (146, 148) |
| Desired (Test 1) | (595, 270) | (595, 175) | (684, 177) | (686, 275) |
| Initial (Test 2) | (454, 213) | (385, 146) | (447, 81) | (516, 148) |
| Desired (Test 2) | (602, 270) | (600, 174) | (688, 179) | (619, 273) |
| Initial (Test 3) | (103, 136) | (196, 105) | (225, 187) | (134, 220) |
| Desired (Test 3) | (447, 203) | (540, 189) | (557, 278) | (461, 292) |

Table 5.
Initial and desired positions.
Test 1 is performed to examine the convergence of image feature points when the desired position is far away from the initial one, which needs a long‐distance translational motion. Figure 13 shows that the feature position errors converge to zero. Figure 14 shows the initial and desired positions captured by the camera. Figure 15 shows the feature trajectory. Figure 16 shows the camera trajectory in Cartesian space.

Figure 13.
Feature error variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.

Figure 14.
Feature position variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.

Figure 15.
Feature trajectory. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.

Figure 16.
Camera trajectory in Cartesian space. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.
It is shown that the performance of the hybrid PD‐SMC is better than that of IBVS. The settling time of the hybrid PD‐SMC method is shorter than that of the conventional method. Furthermore, with the hybrid PD‐SMC method, the feature trajectory is straighter in the image plane and the camera trajectory in Cartesian space is smoother.
Test 2 is performed to examine the rotation performance of the proposed method; a pure rotation of the image feature points is completed. Figure 17 shows that the feature position errors converge to zero. Figure 18 shows the initial and desired positions, which are captured by the camera. Figure 19 shows the feature trajectory in the image plane. Figure 20 shows the camera trajectory in Cartesian space.

Figure 17.
Feature error variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.

Figure 18.
Feature position variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.

Figure 19.
Feature trajectory. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.

Figure 20.
Camera trajectory in Cartesian space. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.
It is obvious that the test successfully demonstrates the better performance of the enhanced IBVS. Figures 17–20 show the comparison of the experimental results, which are similar to those of Test 1.
Test 3 is a hybrid translation‐rotation motion process. In this experimental test, the translation and rotation motions of features are incorporated in one process. In the initial stage of the movement, the translation motion is implemented. In the final stage of the movement, the rotation motion is completed.
Figure 21 shows the feature position error variation of IBVS and enhanced IBVS. It is observed that enhanced IBVS achieves a higher convergence rate. Figure 22 shows the image feature points from the initial position to the final position and the trajectories obtained by using IBVS and enhanced IBVS. It is observed that enhanced IBVS performs better in the final stage than IBVS in terms of the smoothness and length of its trajectories in the image plane. Figure 23 shows the camera trajectory in three‐dimensional space for IBVS and enhanced IBVS. It can be seen that the camera trajectory of enhanced IBVS is smoother and more accurate.

Figure 21.
Feature error variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 3.

Figure 22.
Feature trajectory. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 3.

Figure 23.
Camera trajectory in Cartesian space. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 3.
More specifically, robustness against random disturbances during the experiment is demonstrated in the rotation movement. By comparing the trajectories, one can notice that the proposed enhanced IBVS method provides better robustness.
The performance index ISE is also used to compare the performance of IBVS and enhanced IBVS. The results are given in Table 6, which shows that the ISE of enhanced IBVS is smaller than that of IBVS in all three tests.
| | IBVS | Enhanced IBVS |
|---|---|---|
| Test 1: ISE total | 1160.0 | 791.5 |
| Test 2: ISE total | 112.6 | 92.7 |
| Test 3: ISE total | 438.6 | 296.8 |

Table 6.
ISE values of IBVS and enhanced IBVS.
7. Conclusions
An enhanced IBVS scheme, which combines PD control with SMC, is proposed for a 6DOF manipulator in this chapter. This approach improves the visual servoing performance by taking advantage of PD control and SMC while compensating for their shortcomings. The stability of the enhanced IBVS system is proven. Extensive simulations and experiments have been carried out, with three tests implemented for comparison. The results validate that the tracking performance and robustness of the proposed method are superior to those of the conventional IBVS controller.
References
- 1. Ghanbari A, Wang W, Hann CE, Chase JG, Chen XQ. Cell image recognition and visual servo control for automated cell injection. In: 2009 4th International Conference on Autonomous Robots and Agents; 10–12 February 2009; IEEE; 2009. pp. 92–96.
- 2. Hashimoto K. A review on vision-based control of robot manipulators. Advanced Robotics. 2003;17:969–991. DOI: 10.1163/156855303322554382
- 3. Corke P. Robotics, Vision and Control. 1st ed. Springer Berlin Heidelberg; 2011. 495 p. DOI: 10.1007/978-3-642-20144-8
- 4. Chaumette F, Hutchinson S. Visual servo control. I. Basic approaches. IEEE Robotics & Automation Magazine. 2006;13:82–90. DOI: 10.1109/mra.2006.250573
- 5. Hutchinson S, Hager GD, Corke P. A tutorial on visual servo control. IEEE Transactions on Robotics and Automation. 1996;12:651–670. DOI: 10.1109/70.538972
- 6. Chaumette F, Hutchinson S. Visual servo control. II. Advanced approaches. IEEE Robotics & Automation Magazine. 2007;14:109–118. DOI: 10.1109/mra.2007.339609
- 7. Wang J, Cho H. Micropeg and hole alignment using image moments based visual servoing method. IEEE Transactions on Industrial Electronics. 2008;55:1286–1294. DOI: 10.1109/tie.2007.911206
- 8. Keshmiri M, Xie WF, Mohebbi A. Augmented image-based visual servoing of a manipulator using acceleration command. IEEE Transactions on Industrial Electronics. 2014;61:5444–5452. DOI: 10.1109/tie.2014.2300048
- 9. Allibert G, Courtial E, Chaumette F. Predictive control for constrained image-based visual servoing. IEEE Transactions on Robotics. 2010;26:933–939. DOI: 10.1109/tro.2010.2056590
- 10. Kim JK, Kim DW, Choi SJ, Won SC. Image-based visual servoing using sliding mode control. In: 2006 SICE-ICASE International Joint Conference; 18–21 October 2006; IEEE; 2007. pp. 4995–5001.
- 11. Fang Y, Liu X, Zhang X. Adaptive active visual servoing of nonholonomic mobile robots. IEEE Transactions on Industrial Electronics. 2012;59:486–493. DOI: 10.1109/tie.2011.2143380
- 12. Slotine JJE, Li WP. Applied Nonlinear Control. Englewood Cliffs, NJ: Prentice Hall; 1991.
- 13. Yüksel T. IBVS with fuzzy sliding mode for robot manipulators. In: 2015 International Workshop on Recent Advances in Sliding Modes (RASM); 9–11 April 2015; IEEE; 2015. pp. 1–6.
- 14. Parsapour M, RayatDoost S, Taghirad HD. Position based sliding mode control for visual servoing system. In: 2013 First RSI/ISM International Conference on Robotics and Mechatronics (ICRoM); 13–15 February 2013; IEEE; 2013. pp. 337–342.
- 15. Acob JM, Pano V, Ouyang PR. Hybrid PD sliding mode control of a two degree-of-freedom parallel robotic manipulator. In: 2013 10th IEEE International Conference on Control and Automation (ICCA); 12–14 June 2013; IEEE; 2013. pp. 1760–1765.
- 16. Ngo QH, Nguyen NP, Chi NN, Tran TH, Hong KS. Fuzzy sliding mode control of container cranes. International Journal of Control, Automation, and Systems. 2015;13:419–425.
- 17. Corke P, Armstrong-Hélouvry B. A meta-study of PUMA 560 dynamics: A critical appraisal of literature data. Robotica. 1995;13:253–258. DOI: 10.1017/s0263574700017781
- 18. Spong MW, Hutchinson S, Vidyasagar M. Robot Modeling and Control. Hoboken: Wiley; 2006.