Open access peer-reviewed chapter

Sliding Mode Control (SMC) of Image‐Based Visual Servoing for a 6DOF Manipulator

By Shutong Li, Ahmad Ghasemi, Wen‐Fang Xie and Yanbin Gao

Submitted: September 29th, 2016 · Reviewed: January 18th, 2017 · Published: June 28th, 2017

DOI: 10.5772/67521

Abstract

Accuracy and stability are two fundamental concerns of a visual servoing control system. This chapter presents a sliding mode controller for image-based visual servoing (IBVS) which can increase the accuracy of a 6DOF robotic system with guaranteed stability. The proposed controller combines proportional derivative (PD) control with sliding mode control (SMC) for a 6DOF manipulator. Compared with a conventional proportional or SMC controller, this approach offers faster convergence and better disturbance rejection. Both simulation and experimental results show that the proposed controller can increase the accuracy and robustness of a 6DOF robotic system.

Keywords

  • sliding mode control
  • image-based visual servoing
  • 6DOF robotic manipulator

1. Introduction

With the development of industrial manufacturing technology, production processes are advancing, and more dexterous and more efficient machines are needed to keep pace with these changes.

Visual servoing systems have emerged to meet this need; they can handle the dynamic interaction between the manipulator and its environment and have been applied in various settings where high accuracy and strong robustness are needed, such as cell injection in Ref. [1], and car steering, aircraft landing and missile tracking in Ref. [2]. Generally speaking, a system in which a camera is used as a visual sensor in the feedback loop is referred to as a visual servoing system. Depending on the configuration of the camera with respect to the robot, visual servoing can be classified as eye-in-hand, where the camera is installed on the end effector, or eye-to-hand, where the camera is fixed in the workspace [3]. This chapter focuses on the eye-in-hand configuration.

Furthermore, visual servoing can be classified into three different classes: image-based visual servoing (IBVS), position-based visual servoing (PBVS) and hybrid visual servoing (HVS). Their performances have been described in detail in Refs. [2–6]. In comparison, although PBVS is convenient in practical applications, it requires a calibrated camera and a known geometric model of the target. Its control performance depends on the accuracy of the camera calibration and the object's geometric model, which is difficult to ensure. IBVS directly uses image feature errors to calculate the control signal, which reduces computational delay and makes it less sensitive to calibration and model errors [3, 5]. This chapter therefore focuses on IBVS.

Various control methods have been applied to IBVS systems, including proportional-integral-derivative (PID) control in Refs. [4, 7, 8], predictive control in Ref. [9], sliding mode control (SMC) in Ref. [10], adaptive control in Ref. [11], and so on. The core of these control methods is to generate a velocity or acceleration vector as the control input that guides the end effector to the desired position, thereby completing the overall control task.

In particular, PID control has a wide range of applications because of its simple form and popularity among engineers. Some researchers have analysed the use of a proportional or proportional derivative (PD) controller to produce a velocity command in Ref. [7]; the convergence of a proportional or PD controller is satisfactory, but image noise or motion vibration can sometimes cause sudden variations or small oscillations. In order to address these issues, a control scheme using a PD controller that produces an acceleration command as the control input was proposed in Ref. [8]. The proposed method can solve the above-mentioned problems; however, most visual servoing systems accept only a velocity signal as the control input. SMC has been successfully applied in several automatic control fields due to its insensitivity to model uncertainties and disturbances in Ref. [12]. Using SMC in IBVS, PBVS or robotic manipulator control to guarantee robustness and good tracking performance has been reported in the literature in recent years in Refs. [13–15]. Meanwhile, the chattering phenomenon of SMC also needs to be considered in some situations.

In this chapter, a new enhanced IBVS scheme that combines PD control with SMC is proposed to generate the velocity profile that controls the robotic manipulator. This PD-SMC method takes advantage of both the PD and SMC methods. The stability of the enhanced IBVS method is proved using the Lyapunov method. It achieves better convergence performance, ensures the stability of the system and provides strong robustness when the system is subjected to uncertainties and noise.

This chapter is structured as follows. The visual servoing system model is described in Section 2. The enhanced controller is designed in Section 3. The system stability is analysed in Section 4. The simulations are performed in Section 5. The experiments are performed in Section 6. The concluding remarks and future work are mentioned in Section 7.

2. System description

In the IBVS system, the control problem can be expressed by obtaining the relation between the derivative of the image features and the camera spatial velocity in Refs. [3, 4]. The system model, which consists of a 6DOF manipulator with a camera mounted on its end effector, is shown in Figure 1.

Figure 1.

IBVS with eye‐in‐hand configuration.

Before going into the detailed discussion of the system model, the following notations are introduced. The camera spatial velocity is denoted by $u = (v_c, \omega_c)$, where $v_c = (v_{cx}, v_{cy}, v_{cz})$ and $\omega_c = (\omega_{cx}, \omega_{cy}, \omega_{cz})$ are the camera's linear and angular velocities in Ref. [5]. The focal length of the camera is denoted by $f$. A world point in the camera frame is denoted by $P = (X, Y, Z)$, and its projected coordinate in image space by $p = (x, y)$.

Using the velocity of the point relative to the camera frame, we can describe the relationship between the feature velocity and the camera velocity in normalized image coordinates in Ref. [3] as follows:

$$
\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} =
\begin{bmatrix}
-\frac{f}{Z} & 0 & \frac{x}{Z} & \frac{xy}{f} & -\left(f + \frac{x^2}{f}\right) & y \\
0 & -\frac{f}{Z} & \frac{y}{Z} & f + \frac{y^2}{f} & -\frac{xy}{f} & -x
\end{bmatrix}
\begin{bmatrix} v_{cx} \\ v_{cy} \\ v_{cz} \\ \omega_{cx} \\ \omega_{cy} \\ \omega_{cz} \end{bmatrix}
\tag{1}
$$
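As an illustration of Eq. (1), the 2×6 interaction matrix for a single feature point can be assembled as follows. This is a minimal Python/NumPy sketch (the chapter's own simulations use the MATLAB toolboxes); the function name and interface are our own assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z, f):
    """2x6 interaction matrix of Eq. (1) for one image point (x, y) at
    depth Z with focal length f. It maps the camera velocity
    u = (vcx, vcy, vcz, wcx, wcy, wcz) to the feature velocity."""
    return np.array([
        [-f / Z,  0.0,    x / Z,  x * y / f,     -(f + x**2 / f),  y],
        [ 0.0,   -f / Z,  y / Z,  f + y**2 / f,  -x * y / f,      -x],
    ])
```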

Since a 6DOF manipulator needs to be controlled, at least three feature points are necessary. Nevertheless, with only three points the interaction matrix may become singular and multiple global minima may exist [4, 8]. For this reason, we use four feature points to control the 6DOF motion in space; the expression is written as follows:

$$\dot{e} = L_{v4}\, u \tag{2}$$

where $L_{v4}$ is the interaction matrix.

$$
L_{v4} = \begin{bmatrix} L_v\big|_{p=p_1} \\ L_v\big|_{p=p_2} \\ L_v\big|_{p=p_3} \\ L_v\big|_{p=p_4} \end{bmatrix}
\tag{3}
$$

where $p = p_1, \ldots, p_4$ are the image feature points and $e$ is the feature error. Since the image interaction matrix largely depends on the depth $Z$ and camera intrinsic parameters such as the focal length $f$, there exist some uncertainties in these parameters. In this chapter, we focus on dealing with the uncertainties in the depth. The depth of the visual servoing system is assumed to lie in the range $Z_{\min} \le Z \le Z_{\max}$. The estimated interaction matrix $\hat{L}_{v4}$ is used in the visual servoing control design.
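Following Eqs. (2) and (3), the 8×6 matrix $\hat{L}_{v4}$ is obtained by stacking the per-point blocks evaluated at the estimated depth. A sketch, reusing `interaction_matrix` from above with an assumed constant depth estimate `Z_hat`:

```python
def stacked_interaction_matrix(points, Z_hat, f):
    """Stack the 2x6 blocks of Eq. (1) for the feature points into the
    8x6 interaction matrix of Eq. (3), evaluated at the depth estimate
    Z_hat in place of the uncertain true depth Z."""
    return np.vstack([interaction_matrix(x, y, Z_hat, f) for (x, y) in points])

# Illustrative usage: four points in normalized coordinates
points = [(0.1, 0.1), (0.1, -0.1), (-0.1, -0.1), (-0.1, 0.1)]
L_hat = stacked_interaction_matrix(points, Z_hat=1.0, f=0.008)
L_hat_pinv = np.linalg.pinv(L_hat)   # 6x8 Moore-Penrose pseudoinverse
```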

3. Controller design

The general design approach for a visual servoing controller is to use proportional control to generate the control signal. This approach is also applied in the conventional IBVS, which can be described as follows:

$$u = -K \hat{L}_{v4}^{+}\, e(t) \tag{4}$$

where $\hat{L}_{v4}^{+}$ is the pseudo-inverse of the estimated interaction matrix and $K$ is a positive definite matrix.
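In code, the conventional proportional law of Eq. (4) reduces to a single line. A sketch, assuming `e` is the stacked 8-element feature error and a scalar gain `K` standing in for the positive definite matrix:

```python
def ibvs_proportional(L_hat, e, K=0.5):
    """Conventional proportional IBVS law of Eq. (4): the 6-element
    camera velocity command u = -K * pinv(L_hat) @ e."""
    return -K * np.linalg.pinv(L_hat) @ e
```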

Proportional control is a prompt and timely control method. However, it cannot eliminate the system residual error. In this chapter, PD control is used in place of proportional control, which improves the control quality with a faster convergence speed and a smaller error. Meanwhile, in order to improve the system stability, sliding mode control is also adopted to compensate for the uncertainties of the system. This enhanced approach, which combines PD control with SMC based on IBVS, is called the hybrid PD-SMC method.

We define the sliding surface $s$, to which the image feature errors converge and on which they subsequently remain, as in Ref. [12]:

$$s = e = \epsilon - \epsilon_d \tag{5}$$

where $\epsilon$ is the image plane feature and $\epsilon_d$ is the desired value of the feature. The basic visual servoing controller of IBVS is designed based on the above proportional control equation in Ref. [3], and it is described by the following first-order system:

$$\dot{e} + K_p e = 0 \tag{6}$$

Adding sliding mode control and applying PD control to the visual servoing system in Ref. [15], the modified control law is as follows:

$$u = -\hat{L}_{v4}^{+}\left(K_d\, \dot{e}(t) + K_p\, e(t) + K_s\, \mathrm{sgn}(s)\right) \tag{7}$$

where $K_s$ is a positive definite matrix and $\mathrm{sgn}(\cdot)$ is the signum function.

The above control scheme is prone to chattering. In order to smooth the chattering, a saturation function is used to replace the signum function, and the control law becomes:

$$u = -\hat{L}_{v4}^{+}\left(K_d\, \dot{e}(t) + K_p\, e(t) + K_s\, \mathrm{sat}(s)\right) \tag{8}$$

where sat(·) is the saturation function, which is defined as follows:

$$
\mathrm{sat}(s) = \begin{cases} s & \text{if } |s| \le 1 \\ \mathrm{sgn}(s) & \text{otherwise} \end{cases}
\tag{9}
$$

This control law is an enhanced IBVS scheme, which combines PD control with SMC. SMC is well known for its robustness in Refs. [14–16]. By applying this control method, the controller is expected to achieve better robustness, a faster convergence speed and higher accuracy. This is demonstrated in both the simulation and experiment sections.
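A sketch of the hybrid PD-SMC law of Eqs. (8) and (9) follows. The gains `Kp`, `Kd` and `Ks` are assumed to be diagonal 8×8 matrices, and in practice `e_dot` would be estimated by (filtered) finite differencing of the feature error.

```python
def sat(s):
    """Saturation function of Eq. (9): identity inside the boundary
    layer |s| <= 1, signum outside (applied elementwise)."""
    return np.clip(s, -1.0, 1.0)

def pd_smc_control(L_hat, e, e_dot, Kp, Kd, Ks):
    """Hybrid PD-SMC law of Eq. (8), with the sliding surface s = e of
    Eq. (5): u = -pinv(L_hat) @ (Kd e_dot + Kp e + Ks sat(s))."""
    s = e                                # Eq. (5)
    return -np.linalg.pinv(L_hat) @ (Kd @ e_dot + Kp @ e + Ks @ sat(s))
```

The saturation replaces the hard switching of the signum function inside a boundary layer of unit width, which is what suppresses the chattering at the cost of a small residual error inside the layer.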

4. Stability analysis

The stability analysis of the proposed controller is based on the Lyapunov direct method in Ref. [12]. Considering the uncertainties in the depth, the estimated interaction matrix can be bounded as follows:

$$(I + \Delta_{\min}) \le L_{v4}\hat{L}_{v4}^{+} \le (I + \Delta_{\max}) \tag{10}$$

where $\Delta_{\min}$ is a matrix of the uncertainties associated with the lower bound of the estimated depth $Z_{\min}$ and $\Delta_{\max}$ is a matrix of the uncertainties associated with the upper bound of the estimated depth $Z_{\max}$. A Lyapunov function is constructed as follows:

$$V = \tfrac{1}{2}\, s^T s \tag{11}$$

The time derivative of the Lyapunov function is obtained as follows:

$$\dot{V} = s^T \dot{s} \tag{12}$$

By substituting Eq. (8) into Eq. (2), the system error dynamic equation is obtained as follows:

$$\dot{e} = -L_{v4}\hat{L}_{v4}^{+}\left(K_d\, \dot{e}(t) + K_p\, e(t) + K_s\, \mathrm{sat}(s)\right) - \dot{\epsilon}_d \tag{13}$$

Moving the term associated with $\dot{e}$ to the left-hand side of the equation yields

$$\dot{e} = -L_{v4}\hat{L}_{v4}^{+}\left(I + K_d L_{v4}\hat{L}_{v4}^{+}\right)^{-1}\left(K_p e + K_s\, \mathrm{sat}(s)\right) - \left(I + K_d L_{v4}\hat{L}_{v4}^{+}\right)^{-1}\dot{\epsilon}_d \tag{14}$$

Substituting Eq. (14) into Eq. (12), the time derivative of the Lyapunov function is obtained as follows:

$$\dot{V} = -L_{v4}\hat{L}_{v4}^{+}\left(I + K_d L_{v4}\hat{L}_{v4}^{+}\right)^{-1}\left(e^T K_p e + K_s |s|\right) - \left(I + K_d L_{v4}\hat{L}_{v4}^{+}\right)^{-1} s^T \dot{\epsilon}_d \tag{15}$$

It is noted that the rank of $L_{v4}\hat{L}_{v4}^{+}$ is 6, so $L_{v4}\hat{L}_{v4}^{+}$ has two null vectors, i.e., there exist $x \in \mathbb{R}^8$, $x \ne 0$, such that $L_{v4}\hat{L}_{v4}^{+}x = 0$. Assuming that $x$ does not belong to the null space of $L_{v4}\hat{L}_{v4}^{+}$ in Refs. [4, 8], $x^T L_{v4}\hat{L}_{v4}^{+} x > 0$ can be obtained. If $K_d$ is chosen as a positive definite matrix,

$$K_d > 0 \tag{16}$$

the following inequality can be ensured:

$$x^T\left(I + K_d L_{v4}\hat{L}_{v4}^{+}\right)^{-1} L_{v4}\hat{L}_{v4}^{+}\, x > 0 \tag{17}$$

$K_s$ is chosen as follows:

$$K_s > \frac{\lambda_{\max}(I + \Delta_{\max})}{\lambda_{\min}(I + \Delta_{\min})}\, \|\dot{\epsilon}_d\| + \eta \tag{18}$$

where $\eta$ is a diagonal positive definite matrix whose elements determine the decay rate of $V$ to zero, and $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ denote the minimum and maximum eigenvalues, respectively.

According to the above conditions, the time derivative of the Lyapunov function can be bounded as follows:

$$\dot{V} < -L_{v4}\hat{L}_{v4}^{+}\left(I + K_d L_{v4}\hat{L}_{v4}^{+}\right)^{-1}\left(e^T K_p e + \eta |s|\right) < 0 \tag{19}$$

By applying Barbalat's lemma, $\dot{V} \to 0$ as $t \to \infty$, and hence the image feature error $e(t) \to 0$. The stability of the IBVS system is thus ensured.

5. Simulations

Simulations have been conducted on a 6DOF Puma 560 robot model using the MATLAB Robotics Toolbox and Machine Vision Toolbox in Ref. [3]. The 6DOF arm is chosen as the manipulator and the camera is mounted on the end effector; no transformation between the end effector and the camera is assumed. The camera characteristics are shown in Table 1. The maximum linear velocity of the Puma 560 is 0.5 m/s according to the robot user manual in Refs. [17, 18].

Parameters           Values
Focal length         0.008 (m)
Principal point      (512, 512)
Camera resolution    1024 × 1024

Table 1.

Camera parameters in simulations.
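The chapter's results come from the full Puma 560 model in the MATLAB toolboxes. Purely to illustrate the closed loop at the feature level, a minimal kinematic sketch (assuming $\dot{\epsilon} = L u$, a static target and illustrative gains rather than the chapter's tuning, and reusing the functions from Section 3) could look like this:

```python
import numpy as np

dt, steps = 0.02, 500                 # 50 Hz loop for 10 s (assumed values)
f, Z_hat = 0.008, 1.0                 # focal length (Table 1), assumed depth
Kp = 0.8 * np.eye(8)                  # illustrative diagonal gains
Kd = 0.1 * np.eye(8)
Ks = 0.05 * np.eye(8)

pts = np.array([[0.10, 0.10], [0.10, -0.10], [-0.10, -0.10], [-0.10, 0.10]])
pts_des = 1.2 * pts                   # target: the same square, expanded
e_prev = (pts - pts_des).ravel()

for _ in range(steps):
    e = (pts - pts_des).ravel()       # stacked 8-element feature error
    e_dot = (e - e_prev) / dt         # crude finite-difference estimate
    L_hat = stacked_interaction_matrix(pts, Z_hat, f)
    u = pd_smc_control(L_hat, e, e_dot, Kp, Kd, Ks)   # Eq. (8)
    pts = pts + (L_hat @ u).reshape(-1, 2) * dt       # feature kinematics
    e_prev = e

print("final feature error norm:", np.linalg.norm(pts - pts_des))
```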

To analyse and compare the performance of hybrid PD-SMC IBVS with the conventional IBVS, three simulation tests have been conducted: pure translation of features, pure rotation of features and disturbance rejection. Four feature points are used in the visual servoing control. The initial and desired positions of the image features are given in Table 2.

Positions        (x1, y1)      (x2, y2)      (x3, y3)      (x4, y4)
Tests 1 and 3
  Initial        (360, 401)    (361, 611)    (570, 610)    (573, 402)
  Desired        (412, 412)    (412, 612)    (612, 612)    (612, 412)
Test 2
  Initial        (360, 401)    (361, 611)    (570, 610)    (573, 402)
  Desired        (362, 506)    (466, 612)    (572, 506)    (466, 403)

Table 2.

Initial and desired positions.

In Test 1, a pure translational motion is performed. Figures 2 and 3 show the feature position errors and the joint velocity convergence, respectively, of IBVS and enhanced IBVS (PD-SMC) under the pure translation condition. Figure 4 shows the feature trajectory in image space under the pure translation condition.

Figure 2.

Feature error variation in pure translation test: (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 3.

Joint velocity variation in pure translation test. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 4.

Feature trajectory in image space in Test 1.

In Test 2, a pure rotational motion is performed. Figures 5 and 6 show the feature position errors and the joint velocity convergence, respectively, of IBVS and enhanced IBVS (PD-SMC) under the pure rotation condition. Figure 7 shows the feature trajectory in image space under the pure rotation condition.

Figure 5.

Feature error variation in pure rotation test. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 6.

Joint velocity variation in pure rotation test. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 7.

Feature trajectory in image space in Test 2.

In Test 3, a chirp signal is added as a disturbance to demonstrate the robustness of the enhanced IBVS. Figures 8 and 9 show the feature position errors and the joint velocity convergence, respectively, of IBVS and enhanced IBVS (PD-SMC) under the disturbance.

Figure 8.

Feature error variation with disturbance. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 9.

Joint velocity variation with disturbance. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

According to the three test results, it is clear that the performance of enhanced IBVS is better than that of IBVS. More specifically, the simulation results demonstrate that the PD-SMC control system achieves a higher convergence rate, a more accurate converged state and stronger robustness.

To further compare the performance of IBVS and enhanced IBVS, the performance index ISE (integral square error) is adopted, which is defined as

$$\mathrm{ISE} = \int_0^T e^2(t)\, dt \tag{20}$$

The results are summarized in Table 3, where 'ISE total' represents the total integral square error over the feature errors $x_1, \ldots, x_4$ and $y_1, \ldots, y_4$. It shows that the ISE of enhanced IBVS is smaller than that of IBVS in all three tests.

                     IBVS           Enhanced IBVS
Test 1: ISE total    1.7875 × 10⁴   5.3609 × 10³
Test 2: ISE total    4.5601 × 10⁵   1.6251 × 10⁵
Test 3: ISE total    1.7639 × 10⁴   5.3348 × 10³

Table 3.

ISE values of IBVS and enhanced IBVS.
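Numerically, Eq. (20) can be approximated from the logged error samples by a Riemann sum. A sketch, assuming `err` is an array of shape (N, 8) holding the eight feature-error channels sampled every `dt` seconds:

```python
import numpy as np

def ise_total(err, dt):
    """Approximate the ISE of Eq. (20): integrate e^2(t) over the run
    and sum across the eight channels x1..x4 and y1..y4."""
    return float(np.sum(err ** 2) * dt)

# Illustrative usage with a logged error history:
# ise = ise_total(err, dt=0.02)
```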

6. Experiments

To further validate the performance of the proposed method, experimental tests are conducted on a 6DOF Denso robot. The experimental setup consists of a controller and two computers in a double-PC bilateral teleoperation configuration. PC No. 1 (the master PC) communicates with the master robot and transmits the commands to PC No. 2 (the slave PC) over the communication network. The slave PC communicates with the slave robot (the Denso robot), obtains the camera data and sends it back to the master PC over the communication network in Refs. [17, 18]. The experimental setup is shown in Figure 10 and the experimental system in Figure 11. A Denso VP6242G is used as the manipulator in Ref. [8]; the characteristics of the camera are given in Table 4.

Figure 10.

Experimental setup.

Figure 11.

Experimental system.

Parameters                      Values
Focal length                    0.004 (m)
X-axis scaling factor           110,000 (pixel/m)
Y-axis scaling factor           110,000 (pixel/m)
Image plane offset of X-axis    120 (pixel)
Image plane offset of Y-axis    187 (pixel)

Table 4.

Camera parameters in experiments.

Three experimental tests have been conducted: a long-distance translation test, a pure rotation test and a hybrid translation-rotation test. Four feature points are used in the visual servoing control. The initial and desired positions of the image features are given in Table 5. Figure 12 shows the Denso robot in its initial position and during the work process.

Figure 12.

Denso robot. (a) Initial position and (b) work process.

Positions        (x1, y1)      (x2, y2)      (x3, y3)      (x4, y4)
Test 1
  Initial        (57, 150)     (57, 57)      (146, 63)     (146, 148)
  Desired        (595, 270)    (595, 175)    (684, 177)    (686, 275)
Test 2
  Initial        (454, 213)    (385, 146)    (447, 81)     (516, 148)
  Desired        (602, 270)    (600, 174)    (688, 179)    (619, 273)
Test 3
  Initial        (103, 136)    (196, 105)    (225, 187)    (134, 220)
  Desired        (447, 203)    (540, 189)    (557, 278)    (461, 292)

Table 5.

Initial and desired positions.

Test 1 is performed to examine the convergence of the image feature points when the desired position is far from the initial one, which requires a long-distance translational motion. Figure 13 shows that the feature position errors converge to zero. Figure 14 shows the initial and desired positions captured by the camera. Figure 15 shows the feature trajectory. Figure 16 shows the camera trajectory in Cartesian space.

Figure 13.

Feature error variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.

Figure 14.

Feature position variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.

Figure 15.

Feature trajectory. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.

Figure 16.

Camera trajectory in Cartesian. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.

It is shown that the performance of hybrid PD-SMC is better than that of IBVS. The settling time of the hybrid PD-SMC method is shorter than that of the conventional method. Furthermore, with the hybrid PD-SMC method, the feature trajectory is straighter in the image plane and the camera trajectory in Cartesian space is smoother.

Test 2 is performed to examine the rotation performance of the proposed method; a pure rotation of the image feature points is completed. Figure 17 shows that the feature position errors converge to zero. Figure 18 shows the initial and desired positions captured by the camera. Figure 19 shows the feature trajectory in the image plane. Figure 20 shows the camera trajectory in Cartesian space.

Figure 17.

Feature error variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.

Figure 18.

Feature position variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.

Figure 19.

Feature trajectory. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.

Figure 20.

Camera trajectory in Cartesian. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.

The test successfully demonstrates the better performance of enhanced IBVS. Figures 17–20 show the comparison of the experimental results, which are similar to those of Test 1.

Test 3 is a hybrid translation-rotation motion process. In this experimental test, the translation and rotation motions of the features are incorporated into one process: the translation motion is implemented in the initial stage of the movement, and the rotation motion is completed in the final stage.

Figure 21 shows the feature position error variation of IBVS and enhanced IBVS; enhanced IBVS achieves a higher convergence rate. Figure 22 shows the image feature points from the initial position to the final position and the trajectories obtained with IBVS and enhanced IBVS; enhanced IBVS performs better in the final stage in terms of the smoothness and length of its trajectories in the image plane. Figure 23 shows the camera trajectories of IBVS and enhanced IBVS in three-dimensional space; the camera trajectory of enhanced IBVS is smoother and more accurate.

Figure 21.

Feature error variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 3.

Figure 22.

Feature trajectory. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 3.

Figure 23.

Camera trajectory in Cartesian. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 3.

More specifically, robustness against random disturbances during the experiment is demonstrated in the rotation movement. By comparing the trajectories, one can see that the proposed enhanced IBVS method exhibits better robustness.

The performance index ISE is also used to compare the performance of IBVS and enhanced IBVS. The results are given in Table 6, which shows that the ISE of enhanced IBVS is smaller than that of IBVS in all three tests.

                     IBVS     Enhanced IBVS
Test 1: ISE total    1160.0   791.5
Test 2: ISE total    112.6    92.7
Test 3: ISE total    438.6    296.8

Table 6.

ISE values of IBVS and enhanced IBVS.

7. Conclusions

An enhanced IBVS scheme, which combines PD control with SMC, is proposed for a 6DOF manipulator in this chapter. This approach improves the visual servoing performance by taking advantage of PD control and SMC while compensating for their respective shortcomings. The stability of the enhanced IBVS system is proven. Extensive simulations and experiments have been carried out, with three tests implemented for comparison in each. The results validate that the tracking performance and robustness of the proposed method are superior to those of the conventional IBVS controller.

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
