
Sliding Mode Control (SMC) of Image‐Based Visual Servoing for a 6DOF Manipulator

Written By

Shutong Li, Ahmad Ghasemi, Wen‐Fang Xie and Yanbin Gao

Submitted: 29 September 2016 Reviewed: 18 January 2017 Published: 28 June 2017

DOI: 10.5772/67521

From the Edited Volume

Recent Developments in Sliding Mode Control Theory and Applications

Edited by Andrzej Bartoszewicz


Abstract

Accuracy and stability are two fundamental concerns of a visual servoing control system. This chapter presents a sliding mode controller for image‐based visual servoing (IBVS) which can increase the accuracy of a 6DOF robotic system with guaranteed stability. The proposed controller combines proportional derivative (PD) control with sliding mode control (SMC) for a 6DOF manipulator. Compared with a conventional proportional or SMC controller, this approach achieves faster convergence and better disturbance rejection. Both simulation and experimental results show that the proposed controller can increase the accuracy and robustness of a 6DOF robotic system.

Keywords

  • sliding mode control
  • image-based visual servoing
  • 6DOF robotic manipulator

1. Introduction

With the development of industrial manufacturing technology, production processes are advancing rapidly, and more dexterous and more efficient machines are needed to keep pace with these changes.

Visual servoing systems have emerged to meet this need: they can handle the dynamic interaction between the manipulator and its environment and have been applied in various settings where high accuracy and strong robustness are required, such as cell injection in Ref. [1] and car steering, aircraft landing and missile tracking in Ref. [2]. Generally speaking, a system in which a camera is used as the visual sensor in the feedback loop is referred to as a visual servoing system. Depending on the configuration of the camera with respect to the robot, visual servoing can be classified as eye‐in‐hand, where the camera is installed at the end effector, or eye‐to‐hand, where the camera is fixed in the workspace [3]. This chapter focuses on the eye‐in‐hand configuration.

Furthermore, visual servoing can be classified into three classes: image‐based visual servoing (IBVS), position‐based visual servoing (PBVS) and hybrid visual servoing (HVS). Their performances are described in detail in Refs. [2–6]. Although PBVS is convenient in practice, it needs a calibrated camera and a known geometric model of the target, and its control performance depends on the accuracy of the camera calibration and the object model, which is difficult to ensure. IBVS directly uses image feature errors to calculate the control signal, which reduces computational delay and makes the scheme less sensitive to calibration and model errors [3, 5]. This chapter therefore focuses on IBVS.

Various control methods have been applied to IBVS systems, including proportional‐integral‐derivative (PID) control in Refs. [4, 7, 8], predictive control in Ref. [9], sliding mode control (SMC) in Ref. [10], adaptive control in Ref. [11], and so on. The core of these control methods is to generate a velocity or acceleration vector as the control input that guides the end effector to the desired position and thereby completes the control task.

In particular, PID control is widely used because of its simple form and popularity among engineers. Researchers have analysed proportional and proportional derivative (PD) controllers that produce a velocity command in Ref. [7]; their convergence properties are satisfactory, but image noise or motion vibration can cause sudden variations or small oscillations. To address these issues, a control scheme using a PD controller that produces an acceleration command as the control input was proposed in Ref. [8]. That method solves the above‐mentioned problems; however, most visual servoing systems accept only a velocity signal as the control input. SMC has been successfully applied in several automatic control fields because of its insensitivity to model uncertainties and disturbances [12]. The use of SMC in IBVS, PBVS or robotic manipulators to guarantee robustness and good tracking performance has been reported in recent years in Refs. [13–15]. Meanwhile, the chattering phenomenon of SMC also needs to be considered in some situations.

In this chapter, a new enhanced IBVS scheme that combines PD control with SMC is proposed to generate the velocity profile that controls the robotic manipulator. This PD‐SMC method takes advantage of both the PD and SMC methods. The stability of the enhanced IBVS scheme is proved by the Lyapunov method. The scheme achieves better convergence performance, ensures the stability of the system and exhibits strong robustness when the system is subject to uncertainty and noise.

This chapter is structured as follows. The visual servoing system model is described in Section 2. The enhanced controller is designed in Section 3. The system stability is analysed in Section 4. The simulations are performed in Section 5. The experiments are performed in Section 6. The concluding remarks and future work are mentioned in Section 7.


2. System description

In the IBVS system, the control problem can be expressed as obtaining the relation between the derivative of the image features and the camera spatial velocity [3, 4]. The system model, which consists of a 6DOF manipulator with a camera mounted on its end effector, is shown in Figure 1.

Figure 1.

IBVS with eye‐in‐hand configuration.

Before going into the detailed discussion of the system model, the following notation is introduced. The camera spatial velocity is denoted by $u = (v_c, \omega_c)$, where $v_c = (v_{cx}, v_{cy}, v_{cz})$ and $\omega_c = (\omega_{cx}, \omega_{cy}, \omega_{cz})$ are the camera's linear and angular velocities [5]. Let the focal length of the camera be $f$. A world point in the camera frame is denoted by $P = (X, Y, Z)$, and its projection in image space is denoted by $p = (x, y)$.

Using the velocity of the point relative to the camera frame, the relationship between the feature velocity and the camera velocity in normalized image coordinates can be written [3] as follows:

$$
\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} =
\begin{bmatrix}
-\frac{f}{Z} & 0 & \frac{x}{Z} & \frac{xy}{f} & -\frac{f^2 + x^2}{f} & y \\
0 & -\frac{f}{Z} & \frac{y}{Z} & \frac{f^2 + y^2}{f} & -\frac{xy}{f} & -x
\end{bmatrix}
\begin{bmatrix} v_{cx} \\ v_{cy} \\ v_{cz} \\ \omega_{cx} \\ \omega_{cy} \\ \omega_{cz} \end{bmatrix}
\tag{E1}
$$
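For illustration, a minimal Python sketch of the single‐point interaction matrix in Eq. (1) is given below; the function name and sample values are ours, not part of the chapter's toolbox code.

```python
import numpy as np

def interaction_matrix(x, y, Z, f):
    """2x6 image interaction matrix L_v of Eq. (1) for one feature point.

    x, y : normalized image-plane coordinates of the feature
    Z    : depth of the corresponding world point in the camera frame
    f    : camera focal length
    Maps the camera velocity u = (v_c, w_c) to the feature velocity (x_dot, y_dot).
    """
    return np.array([
        [-f / Z, 0.0,    x / Z, x * y / f,          -(f**2 + x**2) / f,  y],
        [0.0,   -f / Z,  y / Z, (f**2 + y**2) / f,  -x * y / f,         -x],
    ])

# Example: a feature at (0.01, 0.02) on the image plane, depth 0.5 m, f = 8 mm
L = interaction_matrix(0.01, 0.02, 0.5, 0.008)
feat_vel = L @ np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.0])  # motion along the optical axis
```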

Since a 6DOF manipulator needs to be controlled, at least three feature points are necessary [4, 8]. Nevertheless, with only three points the interaction matrix may still become singular and multiple global minima may exist. For this reason, we use four feature points to control the 6DOF motion in space, and the expression is written as follows:

$$\dot{e} = L_{v4}\, u \tag{E2}$$

where $L_{v4}$ is the stacked interaction matrix:

$$L_{v4} = \begin{bmatrix} L_v|_{p=p_1} \\ L_v|_{p=p_2} \\ L_v|_{p=p_3} \\ L_v|_{p=p_4} \end{bmatrix} \tag{E3}$$

$p = p_1, \ldots, p_4$ are the image feature points and $e$ is the feature error. Since the image interaction matrix depends largely on the depth $Z$ and on camera intrinsic parameters such as the focal length $f$, there exist uncertainties in these parameters. In this chapter, we focus on dealing with the uncertainties in the depth. The depth of the visual servoing system is assumed to lie in the range $Z_{\min} \le Z \le Z_{\max}$. The estimated interaction matrix $\hat{L}_{v4}$ is used in the visual servoing control design.
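A sketch of the stacked matrix of Eq. (3) and its pseudo‐inverse follows, reusing interaction_matrix from the sketch above; an estimated depth Z_hat stands in for the unknown true depth, and all names and values are illustrative.

```python
import numpy as np  # interaction_matrix is the helper from the previous sketch

def stacked_interaction_matrix(points, Z_hat, f):
    """8x6 stacked interaction matrix L_v4 of Eq. (3) for four feature points.

    points : iterable of four (x, y) feature coordinates
    Z_hat  : estimated depth, used in place of the unknown true depth Z
    """
    return np.vstack([interaction_matrix(x, y, Z_hat, f) for (x, y) in points])

pts = [(0.01, 0.02), (-0.01, 0.02), (-0.01, -0.02), (0.01, -0.02)]
L_v4_hat = stacked_interaction_matrix(pts, Z_hat=0.5, f=0.008)
L_v4_pinv = np.linalg.pinv(L_v4_hat)  # 6x8 Moore-Penrose pseudo-inverse
```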


3. Controller design

The general approach to designing a visual servoing controller is to use proportional control to generate the control signal. This approach is also applied in conventional IBVS, and the law can be described as follows:

$$u = -K\, \hat{L}_{v4}^{+}\, e(t) \tag{E4}$$

where $\hat{L}_{v4}^{+}$ is the pseudo-inverse of the estimated interaction matrix and $K$ is a positive definite gain matrix.
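In code, one step of this conventional proportional law might look as follows (a sketch under the notation above; the gain value is illustrative):

```python
def ibvs_proportional(features, features_des, L_v4_pinv, K=0.5):
    """One step of the conventional IBVS law of Eq. (4): u = -K * L_hat^+ * e.

    features, features_des : stacked 8-vectors of current and desired features
    Returns the 6-vector camera velocity command u = (v_c, w_c).
    """
    e = features - features_des
    return -K * (L_v4_pinv @ e)
```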

Proportional control is a prompt and timely control method; however, it cannot eliminate the residual error of the system. In this chapter, PD control is used in its place, which improves the control quality with a faster convergence speed and smaller error. Meanwhile, in order to improve the system stability, sliding mode control is also adopted to compensate for the uncertainties of the system. The resulting enhanced approach, which combines PD control with SMC based on IBVS, is called the hybrid PD‐SMC method.

We define the sliding surface $s$ such that, once the image feature errors reach it, they remain on it and converge to zero [12]:

$$s = e = \epsilon - \epsilon_d \tag{E5}$$

where $\epsilon$ is the image plane feature and $\epsilon_d$ is its desired value. The basic IBVS controller is designed from the proportional control equation above [3] and is described by the following first‐order system:

$$\dot{e} + K_p\, e = 0 \tag{E6}$$

Adding sliding mode control and applying PD control to the visual servoing system [15], the modified control law is as follows:

$$u = \hat{L}_{v4}^{+}\left(-K_d\, \dot{e}(t) - K_p\, e(t) - K_s\, \mathrm{sgn}(s)\right) \tag{E7}$$

where $K_s$ is a positive definite matrix and $\mathrm{sgn}(\cdot)$ is the signum function.

The above control scheme is prone to chattering. In order to smooth the chattering, a saturation function is used in place of the signum function, and the control law becomes:

$$u = \hat{L}_{v4}^{+}\left(-K_d\, \dot{e}(t) - K_p\, e(t) - K_s\, \mathrm{sat}(s)\right) \tag{E8}$$

where sat(·) is the saturation function, which is defined as follows:

$$\mathrm{sat}(s) = \begin{cases} s & \text{if } |s| \le 1 \\ \mathrm{sgn}(s) & \text{otherwise} \end{cases} \tag{E9}$$

This control law is an enhanced IBVS scheme that combines PD control with SMC. SMC is well known for its robustness [14–16]. With this control method, the controller is expected to achieve better robustness, faster convergence and higher accuracy. This will be demonstrated in both the simulation and experiment sections.
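A minimal sketch of the PD‐SMC law of Eqs. (8) and (9) follows; the gain values are illustrative, and the error derivative is approximated by a finite difference since only feature measurements are available:

```python
import numpy as np

def sat(s):
    """Saturation function of Eq. (9), applied element-wise."""
    return np.clip(s, -1.0, 1.0)

def pd_smc_control(e, e_prev, dt, L_v4_pinv, Kp=0.5, Kd=0.05, Ks=0.1):
    """PD-SMC velocity command of Eq. (8).

    e, e_prev : current and previous 8-vector feature errors
    dt        : sampling period for the finite-difference derivative
    """
    e_dot = (e - e_prev) / dt    # numerical derivative of the feature error
    s = e                        # sliding surface of Eq. (5)
    return L_v4_pinv @ (-Kd * e_dot - Kp * e - Ks * sat(s))
```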


4. Stability analysis

The stability analysis of the proposed controller is based on the Lyapunov direct method [12]. Considering the uncertainties in depth, the estimated interaction matrix satisfies:

$$(I + \Delta_{\min}) \le L_{v4}\, \hat{L}_{v4}^{+} \le (I + \Delta_{\max}) \tag{E10}$$

where $\Delta_{\min}$ is a matrix of uncertainties associated with the lower bound of the estimated depth $Z_{\min}$ and $\Delta_{\max}$ is a matrix of uncertainties associated with the upper bound $Z_{\max}$. A Lyapunov function is constructed as follows:

$$V = \frac{1}{2}\, s^T s \tag{E11}$$

The time derivative of the Lyapunov function is obtained as follows:

$$\dot{V} = s^T \dot{s} \tag{E12}$$

By substituting Eq. (8) into Eq. (2), the system error dynamics are obtained as follows:

$$\dot{e} = L_{v4}\, \hat{L}_{v4}^{+}\left(-K_d\, \dot{e}(t) - K_p\, e(t) - K_s\, \mathrm{sat}(s)\right) - \dot{\epsilon}_d \tag{E13}$$

Moving the term associated with $\dot{e}$ to the left of the equation yields

$$\dot{e} = -L_{v4}\, \hat{L}_{v4}^{+}\left(I + K_d\, L_{v4}\, \hat{L}_{v4}^{+}\right)^{-1}\left(K_p\, e + K_s\, \mathrm{sat}(s)\right) - \left(I + K_d\, L_{v4}\, \hat{L}_{v4}^{+}\right)^{-1} \dot{\epsilon}_d \tag{E14}$$

Substituting Eq. (14), the time derivative of the Lyapunov function is obtained as follows:

$$\dot{V} = -L_{v4}\, \hat{L}_{v4}^{+}\left(I + K_d\, L_{v4}\, \hat{L}_{v4}^{+}\right)^{-1}\left(e^T K_p\, e + K_s\, |s|\right) - \left(I + K_d\, L_{v4}\, \hat{L}_{v4}^{+}\right)^{-1} s^T \dot{\epsilon}_d \tag{E15}$$

It is noted that the rank of $L_{v4}\, \hat{L}_{v4}^{+}$ is 6, so $L_{v4}\, \hat{L}_{v4}^{+}$ has two null vectors, i.e., vectors in $\{x \in \mathbb{R}^8,\ x \neq 0 : L_{v4}\, \hat{L}_{v4}^{+}\, x = 0\}$. Assuming that $x$ does not belong to the null space of $L_{v4}\, \hat{L}_{v4}^{+}$ [4, 8], $x^T L_{v4}\, \hat{L}_{v4}^{+}\, x > 0$ can be obtained. If $K_d$ is chosen as a positive definite matrix,

$$K_d > 0 \tag{E16}$$

then the following inequality is ensured:

$$x^T\left(I + K_d\, L_{v4}\, \hat{L}_{v4}^{+}\right)^{-1} L_{v4}\, \hat{L}_{v4}^{+}\, x > 0 \tag{E17}$$

$K_s$ is chosen as follows:

$$K_s > \frac{\lambda_{\max}(I + \Delta_{\max})}{\lambda_{\min}(I + \Delta_{\min})}\, \|\dot{\epsilon}_d\| + \eta \tag{E18}$$

where $\eta$ is a diagonal positive definite matrix whose elements determine the decay rate of $V$ to zero, and $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ denote the minimum and maximum eigenvalues, respectively.
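Under a scalar reading of condition (18), and assuming symmetric uncertainty matrices, the switching‐gain bound can be checked numerically. The following sketch uses our own names and illustrative values, not quantities from the chapter:

```python
import numpy as np

def ks_lower_bound(Delta_min, Delta_max, eps_d_dot_norm, eta=0.01):
    """Scalar lower bound on the switching gain K_s from Eq. (18).

    Delta_min, Delta_max : symmetric uncertainty matrices from the depth
                           bounds of Eq. (10) (assumed values)
    eps_d_dot_norm       : norm of the desired-feature velocity
    eta                  : decay-rate margin (scalar stand-in for the matrix eta)
    """
    n = Delta_max.shape[0]
    lam_max = np.max(np.linalg.eigvalsh(np.eye(n) + Delta_max))
    lam_min = np.min(np.linalg.eigvalsh(np.eye(n) + Delta_min))
    return (lam_max / lam_min) * eps_d_dot_norm + eta

# Example with +/-20% depth uncertainty modelled as diagonal perturbations
Ks_min = ks_lower_bound(-0.2 * np.eye(8), 0.2 * np.eye(8), eps_d_dot_norm=0.0)
```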

According to the above conditions, the time derivative of the Lyapunov function satisfies:

$$\dot{V} < -L_{v4}\, \hat{L}_{v4}^{+}\left(I + K_d\, L_{v4}\, \hat{L}_{v4}^{+}\right)^{-1}\left(e^T K_p\, e + \eta\, |s|\right) < 0 \tag{E19}$$

By applying Barbalat's lemma, $\dot{V} \to 0$ as $t \to \infty$, and hence the image feature error $e(t) \to 0$. The stability of the IBVS system is thus ensured.


5. Simulations

Simulations have been conducted on a 6DOF Puma 560 robot model using the MATLAB Robotics Toolbox and Machine Vision Toolbox [3]. The camera is mounted on the end effector, and no transformation between the end effector and the camera is assumed. The camera characteristics are shown in Table 1. The maximum linear velocity of the Puma 560 is 0.5 m/s according to the robot user manual [17, 18].

Parameters           Values
Focal length         0.008 (m)
Principal point      (512, 512)
Camera resolution    1024 × 1024

Table 1.

Camera parameters in simulations.
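The chapter does not reproduce the toolbox scripts; the following Python sketch outlines the structure of such a closed‐loop simulation, reusing the helpers sketched in Sections 2 and 3. The plant uses a "true" depth while the controller only knows the estimate, and all parameter values are illustrative assumptions rather than the authors' settings:

```python
import numpy as np  # reuses stacked_interaction_matrix and pd_smc_control from above

def simulate_ibvs(features0, features_des, Z_true, Z_hat, f, dt=0.02, steps=500):
    """Closed-loop IBVS simulation sketch: Euler integration of Eq. (2)
    under the PD-SMC law of Eq. (8). The plant uses the true depth while
    the controller only knows the estimate Z_hat. A full simulation would
    instead drive the Puma 560 joint model of the Robotics Toolbox.
    """
    feats = features0.astype(float).copy()
    e_prev = feats - features_des
    history = []
    for _ in range(steps):
        e = feats - features_des
        pts = feats.reshape(4, 2)
        L_true = stacked_interaction_matrix(pts, Z_true, f)  # plant kinematics
        L_hat = stacked_interaction_matrix(pts, Z_hat, f)    # controller model
        u = pd_smc_control(e, e_prev, dt, np.linalg.pinv(L_hat))
        feats = feats + dt * (L_true @ u)                    # Euler step of Eq. (2)
        e_prev = e
        history.append(e)
    return np.array(history)
```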

To analyse and compare the performance of the hybrid PD‐SMC IBVS with the conventional IBVS, three simulation tests have been conducted: pure translation of features, pure rotation of features and disturbance rejection. Four feature points are used in the visual servoing control. The initial and desired positions of the image features are given in Table 2.

Positions       (x1, y1)     (x2, y2)     (x3, y3)     (x4, y4)
Tests 1 and 3
  Initial       (360, 401)   (361, 611)   (570, 610)   (573, 402)
  Desired       (412, 412)   (412, 612)   (612, 612)   (612, 412)
Test 2
  Initial       (360, 401)   (361, 611)   (570, 610)   (573, 402)
  Desired       (362, 506)   (466, 612)   (572, 506)   (466, 403)

Table 2.

Initial and desired positions.

In Test 1, a pure translational motion is performed. Figures 2 and 3 show the feature position errors and joint velocity convergence of IBVS and enhanced IBVS (PD‐SMC), respectively, under the pure translation condition. Figure 4 shows the feature trajectories in image space under the pure translation condition.

Figure 2.

Feature error variation in pure translation test: (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 3.

Joint velocity variation in pure translation test. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 4.

Feature trajectory in image space in Test 1.

In Test 2, a pure rotational motion is performed. Figures 5 and 6 show the feature position errors and joint velocity convergence of IBVS and enhanced IBVS (PD‐SMC), respectively, under the pure rotation condition. Figure 7 shows the feature trajectories in image space under the pure rotation condition.

Figure 5.

Feature error variation in pure rotation test. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 6.

Joint velocity variation in pure rotation test. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 7.

Feature trajectory in image space in Test 2.

In Test 3, a chirp signal is added as a disturbance to demonstrate the robustness of the enhanced IBVS. Figures 8 and 9 show the feature position errors and joint velocity convergence of IBVS and enhanced IBVS (PD‐SMC), respectively, under the disturbance. Figure 7 shows the feature trajectory in image space under the disturbance.

Figure 8.

Feature error variation with disturbance. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

Figure 9.

Joint velocity variation with disturbance. (a) IBVS and (b) enhanced IBVS (PD‐SMC).

According to the three test results, the performance of the enhanced IBVS is clearly better than that of IBVS. More specifically, the simulation results demonstrate that the PD‐SMC control system achieves a higher convergence rate, a more accurate converged state and stronger robustness.

To further compare the performance of IBVS and enhanced IBVS, the integral of squared error (ISE) performance index is adopted, which is defined as

$$\mathrm{ISE} = \int_0^T e^2(t)\, dt \tag{E20}$$
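Given a logged error history, the ISE of Eq. (20) can be approximated numerically; the sketch below uses the trapezoidal rule, and the names and layout are our own choices:

```python
import numpy as np

def ise_total(error_history, dt):
    """Total ISE of Eq. (20): the time integral of the squared error,
    approximated by the trapezoidal rule and summed over all eight
    feature-error channels.

    error_history : array of shape (steps, 8) of logged feature errors
    dt            : sampling period of the log
    """
    return float(np.trapz(error_history**2, dx=dt, axis=0).sum())
```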

The results are summarized in Table 3, where 'ISE total' denotes the sum of the integrals of squared error over the feature errors x1, ..., x4 and y1, ..., y4. It shows that the ISE of the enhanced IBVS is smaller than that of IBVS in all three tests.

                     IBVS            Enhanced IBVS
Test 1: ISE total    1.7875 × 10^4   5.3609 × 10^3
Test 2: ISE total    4.5601 × 10^5   1.6251 × 10^5
Test 3: ISE total    1.7639 × 10^4   5.3348 × 10^3

Table 3.

ISE values of IBVS and enhanced IBVS.


6. Experiments

To further validate the performance of the proposed method, experimental tests are conducted on a 6DOF Denso robot. The experimental setup consists of a controller and two computers in a double‐PC bilateral teleoperation arrangement. PC No. 1 (the master PC) communicates with the master robot and transmits the commands to PC No. 2 (the slave PC) over the communication network. The slave PC communicates with the slave robot (the Denso robot), obtains the camera data and sends it back to the master PC over the communication network [17, 18]. The experimental setup is shown in Figure 10 and the experimental system in Figure 11. A Denso VP6242G is used as the manipulator, as in Ref. [8]; the characteristics of the camera are given in Table 4.

Figure 10.

Experimental setup.

Figure 11.

Experimental system.

Parameters                      Values
Focal length                    0.004 (m)
X‐axis scaling factor           110,000 (pixel/m)
Y‐axis scaling factor           110,000 (pixel/m)
Image plane offset of X‐axis    120 (pixel)
Image plane offset of Y‐axis    187 (pixel)

Table 4.

Camera parameters in experiments.

Three experimental tests have been conducted: long‐distance translation of features, pure rotation of features and a hybrid translation‐rotation test. Four feature points are used in the visual servoing control. The initial and desired positions of the image features are given in Table 5. Figure 12 shows the Denso robot in its initial position and during the work process.

Figure 12.

Denso robot. (a) Initial position and (b) work process.

Positions       (x1, y1)     (x2, y2)     (x3, y3)     (x4, y4)
Test 1
  Initial       (57, 150)    (57, 57)     (146, 63)    (146, 148)
  Desired       (595, 270)   (595, 175)   (684, 177)   (686, 275)
Test 2
  Initial       (454, 213)   (385, 146)   (447, 81)    (516, 148)
  Desired       (602, 270)   (600, 174)   (688, 179)   (619, 273)
Test 3
  Initial       (103, 136)   (196, 105)   (225, 187)   (134, 220)
  Desired       (447, 203)   (540, 189)   (557, 278)   (461, 292)

Table 5.

Initial and desired positions.

Test 1 is performed to examine the convergence of the image feature points when the desired position is far away from the initial one, which requires a long‐distance translational motion. Figure 13 shows that the feature position errors converge to zero. Figure 14 shows the initial and desired positions captured by the camera. Figure 15 shows the feature trajectory. Figure 16 shows the camera trajectory in Cartesian space.

Figure 13.

Feature error variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.

Figure 14.

Feature position variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.

Figure 15.

Feature trajectory. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.

Figure 16.

Camera trajectory in Cartesian. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 1.

It is shown that the performance of hybrid PD‐SMC is better than that of IBVS. The settling time of the hybrid PD‐SMC method is shorter than that of conventional method. Furthermore, in hybrid PD‐SMC method, the feature trajectory is straighter in image plane and the camera trajectory in Cartesian space is smoother.

Test 2 is performed to examine the rotation performance of the proposed method; a pure rotation of the image feature points is carried out. Figure 17 shows that the feature position errors converge to zero. Figure 18 shows the initial and desired positions captured by the camera. Figure 19 shows the feature trajectory in the image plane. Figure 20 shows the camera trajectory in Cartesian space.

Figure 17.

Feature error variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.

Figure 18.

Feature position variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.

Figure 19.

Feature trajectory. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.

Figure 20.

Camera trajectory in Cartesian. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 2.

The test confirms the better performance of the enhanced IBVS. Figures 17–20 show the comparison of experimental results, which are similar to those of Test 1.

Test 3 is a hybrid translation‐rotation motion process. In this experimental test, the translation and rotation motions of features are incorporated in one process. In the initial stage of the movement, the translation motion is implemented. In the final stage of the movement, the rotation motion is completed.

Figure 21 shows the feature position error variation of IBVS and enhanced IBVS. It is observed that the enhanced IBVS achieves a higher convergence rate. Figure 22 shows the image feature points from the initial position to the final position and the trajectories obtained with IBVS and enhanced IBVS. The enhanced IBVS performs better in the final stage in terms of the smoothness and length of its trajectories in the image plane. Figure 23 shows the camera trajectories of IBVS and enhanced IBVS in three‐dimensional space; the camera trajectory of the enhanced IBVS is smoother and more accurate.

Figure 21.

Feature error variation. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 3.

Figure 22.

Feature trajectory. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 3.

Figure 23.

Camera trajectory in Cartesian. (a) IBVS and (b) enhanced IBVS (PD‐SMC) in Test 3.

More specifically, robustness against random disturbances during the experiment is demonstrated in the rotational movement. By comparing the trajectories, one can see that the proposed enhanced IBVS method exhibits better robustness.

The ISE performance index is also used to compare the performance of IBVS and enhanced IBVS. The results are given in Table 6 and show that the ISE of the enhanced IBVS is smaller than that of IBVS in all three tests.

                     IBVS      Enhanced IBVS
Test 1: ISE total    1160.0    791.5
Test 2: ISE total    112.6     92.7
Test 3: ISE total    438.6     296.8

Table 6.

ISE values of IBVS and enhanced IBVS.


7. Conclusions

An enhanced IBVS scheme that combines PD control with SMC is proposed for a 6DOF manipulator in this chapter. This approach improves the visual servoing performance by taking advantage of PD control and SMC while compensating for their respective shortcomings. The stability of the enhanced IBVS system is proven by the Lyapunov method. Extensive simulations and experiments have been carried out, with three tests implemented for comparison. The results validate that the tracking performance and robustness of the proposed method are superior to those of the conventional IBVS controller.

References

  1. Ghanbari A, Wang W, Hann CE, Chase JG, Chen XQ. Cell image recognition and visual servo control for automated cell injection. In: 2009 4th International Conference on Autonomous Robots and Agents; 10–12 February 2009; IEEE; 2009. pp. 92–96.
  2. Hashimoto K. A review on vision‐based control of robot manipulators. Advanced Robotics. 2003;17:969–991. DOI: 10.1163/156855303322554382
  3. Corke P. Robotics, Vision and Control. 1st ed. Springer Berlin Heidelberg; 2011. 495 p. DOI: 10.1007/978‐3‐642‐20144‐8
  4. Chaumette F, Hutchinson S. Visual servo control. I. Basic approaches. IEEE Robotics & Automation Magazine. 2006;13:82–90. DOI: 10.1109/mra.2006.250573
  5. Hutchinson S, Hager GD, Corke P. A tutorial on visual servo control. IEEE Transactions on Robotics and Automation. 1996;12:651–670. DOI: 10.1109/70.538972
  6. Chaumette F, Hutchinson S. Visual servo control. II. Advanced approaches. IEEE Robotics & Automation Magazine. 2007;14:109–118. DOI: 10.1109/mra.2007.339609
  7. Wang J, Cho H. Micropeg and hole alignment using image moments based visual servoing method. IEEE Transactions on Industrial Electronics. 2008;55:1286–1294. DOI: 10.1109/tie.2007.911206
  8. Keshmiri M, Xie WF, Mohebbi A. Augmented image‐based visual servoing of a manipulator using acceleration command. IEEE Transactions on Industrial Electronics. 2014;61:5444–5452. DOI: 10.1109/tie.2014.2300048
  9. Allibert G, Courtial E, Chaumette F. Predictive control for constrained image‐based visual servoing. IEEE Transactions on Robotics. 2010;26:933–939. DOI: 10.1109/tro.2010.2056590
  10. Kim JK, Kim DW, Choi SJ, Won SC. Image‐based visual servoing using sliding mode control. In: 2006 SICE‐ICASE International Joint Conference; 18–21 October 2006; IEEE; 2007. pp. 4995–5001.
  11. Fang Y, Liu X, Zhang X. Adaptive active visual servoing of nonholonomic mobile robots. IEEE Transactions on Industrial Electronics. 2012;59:486–493. DOI: 10.1109/tie.2011.2143380
  12. Slotine JJE, Li WP. Applied Nonlinear Control. Englewood Cliffs, NJ: Prentice Hall; 1991.
  13. Yüksel T. IBVS with fuzzy sliding mode for robot manipulators. In: 2015 International Workshop on Recent Advances in Sliding Modes (RASM); 9–11 April 2015; IEEE; 2015. pp. 1–6.
  14. Parsapour M, RayatDoost S, Taghirad HD. Position based sliding mode control for visual servoing system. In: 2013 First RSI/ISM International Conference on Robotics and Mechatronics (ICRoM); 13–15 February 2013; IEEE; 2013. pp. 337–342.
  15. Acob JM, Pano V, Ouyang PR. Hybrid PD sliding mode control of a two degree‐of‐freedom parallel robotic manipulator. In: 2013 10th IEEE International Conference on Control and Automation (ICCA); 12–14 June 2013; IEEE; 2013. pp. 1760–1765.
  16. Ngo QH, Nguyen NP, Chi NN, Tran TH, Hong KS. Fuzzy sliding mode control of container cranes. International Journal of Control, Automation, and Systems. 2015;13:419–425.
  17. Corke P, Armstrong‐Helouvry B. A meta‐study of Puma 560 dynamics: A critical appraisal of literature data. Robotica. 1995;13:253–258. DOI: 10.1017/s0263574700017781
  18. Spong MW, Hutchinson S, Vidyasagar M. Robot Modeling and Control. Hoboken: Wiley; 2006.
