Open access peer-reviewed chapter

An Intelligent Position-Tracking Controller for Constrained Robotic Manipulators Using Advanced Neural Networks

Written By

Dang Xuan Ba

Submitted: 04 July 2022 Reviewed: 19 July 2022 Published: 28 August 2022

DOI: 10.5772/intechopen.106651

From the Edited Volume

Recent Advances in Robot Manipulators

Edited by Serdar Küçük


Abstract

Nowadays, robots have become a key labor force in industrial manufacturing, exploration missions, and high-tech service activities, and equipping such work with intelligent robots is an understandable goal. Adopting neural networks to achieve excellent control accuracy for robotic control systems subject to physical constraints remains a practical challenge. This chapter presents an intelligent control method for position-tracking control problems of robotic manipulators with output constraints. The constrained control objectives are first transformed into free variables. A simple yet effective driving control rule is then designed to force the new control objective into a vicinity of zero. To suppress unexpected systematic dynamics and obtain outstanding control performance, a new neural network with a fast-learning law is employed. A nonlinear disturbance observer is then used to estimate the neural estimation error, yielding an asymptotic control outcome. Robustness of the closed-loop system is guaranteed by Lyapunov theory. The effectiveness and feasibility of the advanced control method are validated by comparative simulations.

Keywords

  • robotic manipulators
  • neural network
  • constrained control
  • motion control
  • simulations

1. Introduction

The world is now passing through the Fourth Industrial Revolution, in which robots play a crucial role in industrial, manufacturing, exploration, rescue, and daily-life activities. Excellent position controllers are required in most industrial robots [1, 2]. In reality, however, it is not easy to achieve outstanding control precision with simple control structures, owing to the unexpected influence of internal uncertain nonlinearities and unpredictable external disturbances in the system dynamics [3, 4, 5, 6]. Moreover, most real-life robot joints are restricted to certain physical ranges, and dangerous situations could arise if the joints went beyond such boundaries [4, 5]. To deal with these strict control problems, many research outcomes have recently been reported for both fully actuated and underactuated robotic systems [7, 8]. To realize control objectives within predefined constraints, backstepping-based controllers are a favorite approach for developers [9, 10], with barrier Lyapunov functions employed as cornerstones of the nonlinear control procedures [11, 12]. Such advanced state-interfered techniques can cope with both static and dynamical practical constraints of robotic systems [11, 13]. Compared with the backstepping-based methods, sliding-mode-control (SMC) approaches are also potential solutions for output-constraint control problems thanks to their simpler design and implementation [14, 15]. Furthermore, the SMC methods can be upgraded with soft boundaries to yield Prescribed-Performance Control (PPC) remedies, which maintain the control objectives within predefined control accuracies [16, 17].

To reach excellent control performance, the nonlinear behaviors of the robotic systems need to be compensated during the control process [14, 15, 16, 17, 18]. The uncertain functionalities could be modeled with classical approaches such as basic force/torque transformations, optimal-energy solutions, or decomposition analyses [18, 19]. Such classical methods seem effective only for simple robotic systems, since they depend heavily on the system structure [7, 20]. To enhance the modeling performance, fast-estimation approaches based on time-delay estimation (TDE) technologies were studied in the past few years [21, 22]: the lumped dynamics of the system are simply approximated from acceleration signals and selected input-gain matrices [23, 24]. Owing to this simplicity of deployment, a vast number of real-time applications have been developed using such TDE algorithms [24, 25]. However, since the acceleration signals are normally computed from the position signals using high-order time derivatives, measurement noise can be amplified, reducing the estimation effect [26, 27]. To learn the systematic behaviors in a model-free manner, intelligent methods are also great solutions [28, 29]. Thanks to their universal approximation ability, the system dynamics can be learnt as black-box models using radial-basis-function (RBF) networks [30, 31, 32] or fuzzy-hybrid networks [33, 34, 35]. Once the neural networks are integrated into the control process, the control error can be adopted as the main excitation signal of the learning process.

Since the networks require abundant excitation signals to activate the learning processes, intelligent controllers sacrifice some transient time before reaching their excellent steady-state control outcomes [30, 36]. As a result, high learning rates were normally adopted in the classical learning rules to speed up the estimation processes. Note that the conventional learning laws only ensure boundedness of the control errors, not of the learning errors [5, 37]. To create certain bounds on the neural weighting coefficients, the networks were modified by integrating linear-leakage functions into their adaptation mechanisms [4–37]. Since the adaptation rules of free channels were not properly deactivated, the convergence processes of the overall systems are slower than those of the conventional ones. Note that although the nonlinear dynamics of the robotic systems could be efficiently compensated by the advanced neural networks, the neural estimation errors still need to be tackled to yield outstanding transient control precision [11–38]. Integration of both neural networks and disturbance observers in nonlinear controllers has been proven to be an excellent solution for modern robotic systems [39, 40, 41]. Indeed, such nonlinear integration has shown promising control results in stiffness-control robots, in an exoskeleton, and in cooperative robots [42, 43, 44]. However, intelligent controllers using linear leakage functions in the neural adaptation rules require large robust signals to attain asymptotic control errors. With the combination of the neural network and disturbance observer in the intelligent approaches, the transient performances were remarkably improved, but the control errors were still not driven to zero in a smooth manner.

This chapter presents a new intelligent high-performance motion controller for robotic manipulators with output constraints. To deal with the constraint problem, the control objective is first converted into a free variable using a new nonlinear transformation function. The indirect control objective is next driven to a certain vicinity of zero using a sliding-mode-like control signal. A nonlinear neural network and a disturbance observer are combined in a special fashion to construct a new closed-loop system in which both the estimation and control errors are asymptotically driven to zero. The proposed controller offers the following contributions:

  • A novel nonlinear controller is proposed to stabilize the control objective inside an arbitrary vicinity of zero without violating the physical constraints.

  • A nonlinear learning law of the neural network is developed to effectively estimate uncertain nonlinearities in the system model.

  • To achieve asymptotic control performance of the overall system, the neural estimation error is finally tackled by an integrated nonlinear disturbance observer.

  • The working performance of the proposed control method is rigorously analyzed by an integral Lyapunov approach and extensive simulation results.

The outline of the chapter is structured as follows. Section 2 presents the modeling of the studied systems and the problem statements. Sections 3 and 4 show the design procedure of the proposed control algorithm with the new neural-disturbance estimation techniques and the stability analysis. Sections 5–7 discuss the validation results obtained from comparative simulations. The conclusions are finally drawn in Section 8.


2. System model and problem statements

Behaviors of a general nDOF robot can be expressed using the following dynamics [19, 20]:

$$ M(q)\ddot q + C(q,\dot q)\dot q + g(q) + f(\dot q) + \tau_d = \tau \tag{1} $$

where $\tau \in \mathbb{R}^n$ is the vector of control torques generated by the joint actuators, $q \in \mathbb{R}^n$ is the vector of joint positions (the system output), $M(q) \in \mathbb{R}^{n\times n}$ is the positive-definite mass matrix, and $C(q,\dot q)\dot q$, $g(q)$, $f(\dot q)$, $\tau_d \in \mathbb{R}^n$ are the centripetal/Coriolis, gravitational, frictional, and external-disturbance torques, respectively.
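For simulation purposes, the model Eq. (1) can be integrated numerically by solving for the joint acceleration and stepping forward in time. A minimal sketch follows; the function name and the toy 1-DOF damped-pendulum instance are illustrative, not taken from the chapter:

```python
import numpy as np

def forward_dynamics_step(q, dq, tau, M, C, g, f, tau_d, dt=1e-3):
    """One explicit-Euler step of Eq. (1):
    M(q)q'' + C(q,q')q' + g(q) + f(q') + tau_d = tau."""
    ddq = np.linalg.solve(M(q), tau - C(q, dq) @ dq - g(q) - f(dq) - tau_d)
    return q + dt * dq, dq + dt * ddq

# Toy 1-DOF instance (a damped pendulum) just to exercise the routine.
M1 = lambda q: np.array([[1.0]])
C1 = lambda q, dq: np.array([[0.0]])
g1 = lambda q: np.array([9.81 * np.sin(q[0])])
f1 = lambda dq: 0.5 * dq
q, dq = np.array([0.1]), np.array([0.0])
for _ in range(1000):  # simulate 1 s
    q, dq = forward_dynamics_step(q, dq, np.zeros(1), M1, C1, g1, f1, np.zeros(1))
```

The controllers below only ever see `q` and `dq`, consistent with Assumption 3.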

Remark 1: The main control objective here is to derive a proper control signal τ that drives the system output q along a desired trajectory qd.

Before designing the expected control approach, the following assumptions are adopted.

Assumption 1 [45, 46]: The disturbance τd is bounded and Lipschitz continuous.

Assumption 2: The reference profile qd is known, bounded, and twice continuously differentiable.

Assumption 3: The system states qq̇ are measurable.

Remark 2: The robotic system Eq. (1) is a passive model with bounded time-derivative states [19, 22, 45]. For practical systems, the robot joints q are limited to physical ranges:

$$ \underline{q} \le q \le \bar q \tag{2} $$

where $\underline{q}$ and $\bar q$ are the lower and upper bounds of the system output $q$, respectively.

In reality, unexpected impacts from physical collisions could endanger the system.

Remark 3: To obtain an excellent controller for the stated problem, one needs a proper control strategy that deals with the dynamical nonlinear behaviors of the robotic system Eq. (1) while complying with the physical constraints, and that drives the control objective to zero as fast as possible. Furthermore, the controller is also expected to be robust and model-free.


3. Intelligent nonlinear constrained controller

A robust adaptive controller is designed in this section based on a new constrained sliding-mode framework and a new learning mechanism combining a basic neural network and a nonlinear disturbance observer. Stability of the closed control system is then investigated by Lyapunov theory.

3.1 Constrained sliding mode control with neural network

We first define the following control error as the main control objective:

$$ e = q - q_d \tag{3} $$

The error is in fact allowed to vary in the following range, induced by the constraint (2):

$$ \underline{e} \le e \le \bar e, \qquad \bar e = \bar q - q_d > 0, \qquad \underline{e} = \underline{q} - q_d < 0 \tag{4} $$

where $\underline{e}$ and $\bar e$ are the lower and upper physical bounds of the control error $e$, respectively.

The following transformation function is next proposed to map the constrained error e to a new free space:

$$ y_i = \frac{e_i}{\left(\bar e_i - e_i\right)\left(e_i - \underline{e}_i\right)}, \qquad i = 1..n \tag{5} $$

where $y = [y_1\; y_2\;\dots\; y_n]^T$ is the transformed error and $e_i$ is the $i$-th entry of the control-error vector $e = [e_1\; e_2\;\dots\; e_n]^T$.
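The barrier behavior of the mapping in Eq. (5) can be checked numerically: it is zero at zero error, finite inside the bounds, and grows without limit as the error approaches either bound. A minimal sketch (the function name is illustrative):

```python
def transform_error(e, e_lo, e_hi):
    """Barrier-style mapping of Eq. (5): y = e / ((e_hi - e)(e - e_lo)).
    Finite inside (e_lo, e_hi); |y| grows without bound near either limit."""
    return e / ((e_hi - e) * (e - e_lo))

y_mid = transform_error(0.0, -0.5, 0.5)    # -> 0.0 (zero error maps to zero)
y_near = transform_error(0.49, -0.5, 0.5)  # much larger than at e = 0.4
```

Because any bounded signal in the transformed space corresponds to an error strictly inside its bounds, driving y to zero automatically respects the constraint (4).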

A sliding manifold is defined as an indirect control objective of the studied system:

$$ s = \dot e + K_0 y \tag{6} $$

where $K_0 = \mathrm{diag}(k_{01},\dots,k_{0n})$ is a positive-definite diagonal gain matrix.

The time derivative of the manifold Eq. (6) under the dynamics Eq. (1) is expressed as

$$ \dot s = v + \bar M^{-1}\tau - \ddot q_d + K_0\dot y \tag{7} $$

where $v = -M^{-1}\left(C\dot q + g + f + \tau_d\right) + \left(M^{-1} - \bar M^{-1}\right)\tau \in \mathbb{R}^n$ is a systematic-deviation term composed of both the internal dynamics and the external disturbances, and $\bar M = \mathrm{diag}(\bar m_1, \bar m_2, \dots, \bar m_n)$ is a selected nominal positive-definite mass matrix.

Based on the manifold system Eq. (7), the final control signal is structured from a dynamical control term τDYN, an error-driving term τDRI, and a robust control term τROB, as follows:

$$ \tau = \bar M\left(\tau_{DYN} + \tau_{DRI} + \tau_{ROB}\right) \tag{8} $$

The dynamical signal τDYN is used to compensate for the internal dynamics v in Eq. (7). For robotic manipulators, the lumped dynamics v are bounded [19, 42] but highly complicated and not easy to derive [20]. To capture such complex behaviors, a neural network is a reasonable tool. The dynamics $v = [v_1\; v_2\;\dots\; v_n]^T$ can be modeled using the following universal linear combination:

$$ v_i = w_i^T r_i(q,\dot q,\tau) + \delta_i, \qquad i = 1..n \tag{9} $$

where $w_i$, $r_i(q,\dot q,\tau)$, and $\delta_i$ are the optimal weight vectors, the neural regression vectors, and the modeling errors, respectively.

Hence, the dynamical control signal is structured as follows:

$$ \tau_{DYN} = -\hat v + \ddot q_d - K_0\dot y \tag{10} $$

where the approximation $\hat v_i$ is the estimate of the dynamics $v_i$ and is designed as [17, 42]

$$ \hat v_i = \hat w_i^T r_i(q,\dot q,\tau), \qquad i = 1..n \tag{11} $$

in which $\hat w_i$ is the estimate of the weight vector $w_i$.

By employing the dynamical control signal Eq. (10), the dynamics Eq. (7) become

$$ \dot s = \tilde v + \tau_{DRI} + \tau_{ROB} \tag{12} $$

where $\tilde v = [\tilde v_1\;\tilde v_2\;\dots\;\tilde v_n]^T = v - \hat v \in \mathbb{R}^n$ is the estimation-error vector.

Since the role of the control signal τDRI is to drive the sliding manifold to around zero from an arbitrary initial position, it is selected as

$$ \tau_{DRI} = -K_1 s \tag{13} $$

where $K_1 = \mathrm{diag}(k_{11},\dots,k_{1n})$ is a diagonal positive-definite gain matrix.

Since the robust control signal τROB is adopted to suppress the estimation error δ, it is designed as

$$ \tau_{ROB} = -K_2\,\mathrm{sgn}(s) \tag{14} $$

where $K_2 = \mathrm{diag}(k_{21},\dots,k_{2n})$ is a diagonal positive-definite gain matrix.

The manifold dynamics Eq. (12) is now expressed as

$$ \dot s = \tilde v - K_1 s - K_2\,\mathrm{sgn}(s) \tag{15} $$

Remark 3: The dynamics Eq. (15) indicate that the closed-loop system is bounded-stable provided that the estimate $\hat v$ is bounded. In theory, asymptotic control performance would result if the robust gain $k_2$ were selected to satisfy $k_2 > \|\delta\|$. However, such a large robust gain could trigger chattering phenomena, whereas a small robust gain reduces the control precision.

To approximate the nonlinear dynamics v, the network Eq. (11) is activated using the control information of the sliding manifold under the following rule:

$$ \dot{\hat w}_i = -\mathrm{diag}(a_{1i})\,\mathrm{diag}(r_i)^2\,\frac{s_i^2}{1+s_i^2}\,\hat w_i + b_i s_i r_i \tag{16} $$

where $B = \mathrm{diag}(b_1, b_2, \dots, b_n)$ and $\mathrm{diag}(a_{1i})$, $i = 1..n$, are diagonal positive-definite constant matrices.

The control performance of the neural-constrained sliding mode system is investigated by the following statements.

Theorem 1: By employing the robust control rules Eqs. (8)–(14) and the neural learning laws Eqs. (11) and (16) to control the robotic system Eq. (1) under the output constraint Eq. (2), the closed-loop system is asymptotically stable if the control gains comply with

$$ k_{1i} > \frac{1}{4\varepsilon_i}\left(r_i^T\,\mathrm{diag}(a_{1i})\, r_i\right)\left\|w_i\right\|^2, \qquad k_{2i} > \delta_{i\max}, \qquad i = 1..n \tag{17} $$

The proof of Theorem 1 is given in Appendix A.

Remark 4: As seen in Eq. (15), once the real dynamics v are well estimated to an arbitrarily small accuracy, small robust gains yield good control performance. Note that approximation by the neural network is a multi-channel learning task; the learning rule Eq. (16) is hence designed to strengthen the neural updating effect.
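For a single joint channel, one control-and-learning cycle combining Eqs. (8), (10), (11), (13), (14) and an explicit-Euler step of Eq. (16) can be sketched as follows (all names, the scalar nominal mass, and the Euler discretization are illustrative choices, not the chapter's code):

```python
import numpy as np

def control_step(s, r, w_hat, ddq_d, dy, k0, k1, k2, a1, b, m_bar, dt=1e-3):
    """One channel of the constrained sliding-mode controller with neural
    compensation, plus one Euler step of the learning law Eq. (16)."""
    v_hat = w_hat @ r                       # neural estimate of the lumped dynamics, Eq. (11)
    tau_dyn = -v_hat + ddq_d - k0 * dy      # dynamics compensation, Eq. (10)
    tau_dri = -k1 * s                       # error-driving term, Eq. (13)
    tau_rob = -k2 * np.sign(s)              # robust term, Eq. (14)
    tau = m_bar * (tau_dyn + tau_dri + tau_rob)   # Eq. (8), scalar nominal mass
    # Eq. (16): leakage gated by s^2/(1+s^2) plus a sliding-driven excitation.
    w_hat = w_hat + dt * (-a1 * r**2 * (s**2 / (1.0 + s**2)) * w_hat + b * s * r)
    return tau, w_hat

tau, w_new = control_step(0.5, np.ones(3), np.zeros(3), 0.0, 0.0,
                          k0=1.0, k1=10.0, k2=0.2, a1=1.0, b=1.0, m_bar=1.0)
```

Note how the leakage term vanishes as the sliding variable approaches zero, so the learned weights are not bled away near convergence.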


4. Disturbance-observer integration

The neural-constrained nonlinear control structure provides excellent control performance for stationary trajectory signals. At high working frequencies, however, the estimation error δ becomes large and could degrade the control accuracy. Adopting an additional control term based on disturbance-observer technology is an understandable remedy. The following assumption is taken into account:

Assumption 4: The error δ and its time derivative are bounded. It can thus be modeled as a first-order system:

$$ \dot\delta = -\mathrm{diag}(\alpha)\,\delta + \zeta \tag{18} $$

where $\mathrm{diag}(\alpha) = \mathrm{diag}(\alpha_1, \alpha_2, \dots, \alpha_n)$ is a diagonal positive-definite constant matrix and $\zeta = [\varsigma_1\;\varsigma_2\;\dots\;\varsigma_n]^T$ is a virtual bounded disturbance vector.

To effectively compensate for the estimation error δ, the robust control signal Eq. (14) is updated with a disturbance-estimation term, as follows:

$$ \tau_{ROB} = -K_2\,\mathrm{sgn}(s) - \hat\delta \tag{19} $$

where $\hat\delta = [\hat\delta_1\;\hat\delta_2\;\dots\;\hat\delta_n]^T$ is the estimate vector of the neural modeling error $\delta$. It is computed from the following learning rule:

$$ \dot{\hat\delta} = -\mathrm{diag}(\alpha)\,\hat\delta + B P^{-1} s + K_3\,\mathrm{sgn}(s) \tag{20} $$

Here, $P = \mathrm{diag}(p_1,\dots,p_n)$ and $K_3 = \mathrm{diag}(k_{31},\dots,k_{3n})$ are diagonal positive-definite constant matrices.
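Element-wise, the observer in Eq. (20) is a stable first-order filter driven by the sliding variable. A minimal Euler-discretized sketch (the constant sliding variable below is used only to exercise the routine, not a realistic closed-loop signal):

```python
import numpy as np

def observer_step(delta_hat, s, alpha, b, p, k3, dt=1e-3):
    """One explicit-Euler step of the disturbance observer Eq. (20), element-wise:
    d(delta_hat)/dt = -alpha*delta_hat + (b/p)*s + k3*sgn(s)."""
    return delta_hat + dt * (-alpha * delta_hat + (b / p) * s + k3 * np.sign(s))

# With a constant sliding variable, the estimate settles at ((b/p)*s + k3)/alpha.
delta_hat = 0.0
for _ in range(10_000):  # 10 s of simulated time
    delta_hat = observer_step(delta_hat, s=1.0, alpha=2.0, b=1.0, p=1.0, k3=1.0)
```

In the closed loop, s shrinks as δ̂ converges, so the observer output settles onto the residual neural estimation error rather than this artificial steady state.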

Validation results in previous work [46, 47] confirmed the learning efficiency of the disturbance observer Eq. (20) for simple systems. To connect the disturbance observer with the neural sliding mode control scheme, the adaptation rule of the network is improved as

$$ \dot{\hat w}_i = -\mathrm{diag}(a_{1i})\,\mathrm{diag}(r_i)^2\,\frac{s_i^2}{1+s_i^2}\,\hat w_i - \mathrm{diag}(a_{2i})\,\mathrm{diag}(r_i)^2\,\hat w_i + \left(b_i s_i + p_i k_{3i}\,\mathrm{sgn}(s_i)\right) r_i, \qquad i = 1..n \tag{21} $$

where $\mathrm{diag}(a_{2i})$, $i = 1..n$, are diagonal positive-definite constant matrices.

The stability of the closed-loop system is validated by the following statement.

Theorem 2: By employing the robust control rules Eqs. (8)–(14) combined with the neural learning laws Eqs. (11) and (21) and the disturbance observer Eqs. (19) and (20) to control the robotic system Eq. (1) under the output constraint Eq. (2), the closed-loop system is asymptotically stable if the control gains comply with

$$ k_{1i} > \frac{1}{4 b_i}\left(r_i^T\,\mathrm{diag}(a_{1i})\, r_i\right)\left\|w_i\right\|^2, \qquad k_{2i} > 0, \qquad k_{3i} > \varsigma_{i\max} + \frac{1}{4 p_i k_{2i}}\left(r_i^T\,\mathrm{diag}(a_{2i})\, r_i\right)\left\|w_i\right\|^2, \qquad i = 1..n \tag{22} $$

The proof of Theorem 2 is discussed in Appendix B.

Remark 5: From Theorem 2, it can be seen that the robust control gain $k_2$ can be selected small for a high control accuracy. The robustness of the closed-loop system is instead carried by a large value of the disturbance-observer gain $k_3$.

Remark 6: After the sliding manifold s converges to zero, the control error e approaches the origin during the sliding phase [25, 27]. Adoption of the nonlinear synthesization (9) could shorten the convergence time of the sliding process [17, 18]. The detailed block diagram of the proposed controller is presented in Figure 1.

Figure 1.

Structure of the proposed controller.


5. Verification results

Validation results of the developed controller under various testing conditions are discussed in this section. To provide a competitive evaluation, a classical Proportional-Integral-Derivative (PID) controller and a linear neural-disturbance-observer (LND) controller were also implemented on the same system under the same working conditions. The LND algorithm was taken from previous work [45] and is re-expressed in Appendix C.

The controllers were employed for motion control of a 3DOF robot, as depicted in Figure 2. Detailed dynamics of the 3DOF robot were derived based on the Lagrange method [4, 19, 47], as formulated in Appendix D. The neural network had 9 inputs ($q_i$, $\dot q_i$, $\tau_i$, $i = 1,2,3$) and 730 neurons with the logsig activation function in the hidden layer [42, 45]. All initial values of the weight vectors $\hat w_i$ ($i = 1,2,3$) were set to zero. Other simulation parameters of the dynamics and the controllers are shown in Tables 1 and 2, respectively. The control results obtained by the controllers are intensively discussed in the following subsections.
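The regressor of such a network is simply the hidden-layer activation vector. A reduced sketch follows; the hidden weights and biases are random stand-ins because the chapter does not specify them, and only 30 neurons are used here for brevity (the chapter uses 730):

```python
import numpy as np

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

def regressor(q, dq, tau, Wh, bh):
    """Hidden-layer regressor r(q, q', tau): the 9 joint signals pass through a
    logsig hidden layer. Wh and bh are fixed hidden-layer parameters (random
    placeholders here, since the chapter does not specify them)."""
    x = np.concatenate([q, dq, tau])      # 9 network inputs
    return logsig(Wh @ x + bh)            # one activation per hidden neuron

rng = np.random.default_rng(0)
n_hidden = 30                             # kept small here; the chapter uses 730
Wh, bh = rng.normal(size=(n_hidden, 9)), rng.normal(size=n_hidden)
r = regressor(np.zeros(3), np.zeros(3), np.zeros(3), Wh, bh)
```

Because logsig outputs lie in (0, 1), the regressor is always bounded, which is what keeps the learning laws Eqs. (16) and (21) well behaved.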

Figure 2.

Configuration of the simulation 3DOF robot.

| Description | Parameters | Values | Unit |
| --- | --- | --- | --- |
| Link lengths | l1, l2, l3 | 0.1, 0.2, 0.2 | m |
| Gravitational acceleration | g | 9.81 | m/s² |
| Friction coefficients | a1, a2, a3 | 20, 20, 20 | N·s |
| Masses of links | m1, m2, m3 | 5, 3, 2 | kg |

Table 1.

Detailed parameters of the simulation model.

| Description | Parameters | Values |
| --- | --- | --- |
| LND controller [45] | | |
| Nominal mass matrix | M̄ | I₃ |
| Control gains | Kc0, Kc1 | diag(10, 100, 10), diag(200, 100, 10) |
| Disturbance gain | Kc3 | 30·I₃ |
| Learning rates | Γi (i = 1..3) | 500·I₃ |
| Leakage rates | μi (i = 1..3) | 0.002·I₃ |
| PID | | |
| Control gains | KP, KI, KD | diag(700, 900, 500), diag(50, 10, 10), diag(10, 10, 10) |
| Proposed controller | | |
| Nominal mass matrix | M̄ | I₃ |
| Control gains | K0, K1, K2, K3 | diag(100, 100, 10), diag(40, 100, 2), 0.1·I₃, 200·I₃ |
| Leakage rates | a1i (i = 1..3), a2i | 10·I₃, 10·I₃ |
| Excitation rates | B, P | 200·I₃, 10·I₃ |
| Disturbance gains | αi (i = 1..3) | 2 |

Table 2.

Selected parameters of the controllers.


6. Simulation results

In the first simulation, the desired profiles were sinusoidal signals with different frequencies (0.1 (Hz), 0.3 (Hz), and 0.5 (Hz)), as plotted in Figure 3. The physical ranges of the robot joints were set to $[-0.5\pi,\; 0.5\pi]$ (rad). The control results obtained are compared in Figures 4–6.
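Reference trajectories of this kind are straightforward to generate; a minimal sketch follows, where the amplitude is an assumption chosen to stay inside the ±0.5π joint limits (the chapter only plots the profiles in Figure 3):

```python
import numpy as np

def desired_profiles(t, freqs=(0.1, 0.3, 0.5), amp=0.25 * np.pi):
    """Sinusoidal joint references at 0.1, 0.3 and 0.5 Hz. The amplitude is an
    assumed value kept well inside the +/-0.5*pi joint limits."""
    return np.array([amp * np.sin(2.0 * np.pi * f * t) for f in freqs])

t = np.linspace(0.0, 10.0, 1001)
qd = np.vstack([desired_profiles(ti) for ti in t])   # (1001, 3) trajectory samples
```

Such profiles are bounded and twice continuously differentiable, satisfying Assumption 2.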

Figure 3.

The desired profiles of the robot joints in the first simulation.

Figure 4.

System responses of the controllers obtained in the first simulation.

Figure 5.

Comparative control errors of the controller in the first simulation.

Figure 6.

Learning performances of the proposed controller in the first simulation.

As shown in Figure 5, stability of the closed-loop system could be maintained by the PID controller with good control errors: 1 (deg), 1.39 (deg), and 5.6 (deg) for joints 1, 2, and 3, respectively. However, as can be observed in the response of joint 2 in Figure 4, the physical constraints were violated by the PID control during the transient time. To avoid such unexpected collisions and to provide high control performance in both the transient and steady-state phases, the LND controller combined a neural network, disturbance-observer learning techniques, and a constrained backstepping control signal. Indeed, outstanding control precision was achieved by the LND controller: the control precision at joints 1, 2, and 3 was 0.094 (deg), 0.105 (deg), and 1.85 (deg), respectively. The designed constrained control algorithm also dealt with the output-constraint control problem, and the nonlinear dynamics of the robotic system were well eliminated by the proposed neural-disturbance learning method. As a result, higher control performance was delivered by the studied controller: the control accuracies at joints 1 and 2 were 0.11 (deg) and 0.108 (deg), respectively. The control results in Figure 5 imply that although the control performances of the LND and proposed controllers were almost the same in low-speed working conditions, they clearly differed in high-frequency trajectory-tracking control. To this end, the proposed control algorithm employed the new nonlinear learning rule Eq. (21) to improve the estimation effect, as revealed by the estimation data presented in Figure 6.

The controllers were further challenged with new frequencies of the sinusoidal trajectories in the second test. The new frequencies at joints 1, 2, and 3 were selected to be 1 (Hz), 0.3 (Hz), and 0.7 (Hz), respectively. Figure 7 presents pieces of the new reference signals with respect to time. Applying the same controllers to the robotic system, the results obtained are shown in Figure 8.

Figure 7.

The desired profiles of the robot joints in the second simulation.

Figure 8.

Comparative control errors in the second simulation.

Control results in Figure 8 indicate that the PID control accuracies were seriously degraded under these arduous testing conditions: the control errors increased to 9.32 (deg) and 8.18 (deg) at joints 1 and 2, respectively. The LND control method could, however, maintain acceptable control performance thanks to its merged linear learning algorithm: the control precision at joints 1 and 3 slightly increased to 0.7 (deg) and 4.1 (deg), respectively. Note that, as discussed in previous work [45], it is difficult to obtain asymptotic control outcomes with the LND approach. This drawback was overcome by the proposed new learning rule, in which the nonlinear network and disturbance observer were properly combined with an arbitrarily small robust gain to ensure the asymptotic convergence of the closed-loop system. The convergence of the control errors obtained by the proposed control algorithm, as demonstrated in Figure 8, shows that the uncertain nonlinearities and external disturbances in the system dynamics were well estimated by the collaborative nonlinear adaptation laws. The control and learning effectiveness of the new control approach was thus confirmed by the validation results.


7. Additional discussion

By comparing the control results obtained by the two intelligent controllers, as presented in Figures 5 and 8, it can be seen that their control performances were similar in the steady-state phases but clearly different in the transient phases. The nonlinear learning integration led to a faster learning effect and higher control precision.

Estimation data illustrated in Figures 6 and 9 imply that the neural network played a crucial role in approximating the system dynamics, while the residual estimation error was learnt by the nonlinear disturbance observer. Furthermore, with the proposed merging control technique, only an arbitrarily small robust signal is needed to obtain asymptotic control outcomes, which ensures the smooth control behaviors presented in Figure 10.

Figure 9.

Learning performances of the proposed controller in the second simulation.

Figure 10.

Control signals generated by the controllers in the second simulation.

Table 3 summarizes the maximum-absolute (MA) and root-mean-square (RMS) values of the control errors for a specific working window (75 s to 85 s). As seen in the table, the best RMS errors were always provided by the designed controller, even though its MA values were not the smallest in some cases. Here, we use the ratio of RMS to MA values to evaluate the control performances more deeply: the ratios of the PID, LND, and proposed controllers were about 0.64, 0.41, and 0.31, respectively. A smaller ratio implies that the internal deviations and external disturbances were eliminated more effectively by the corresponding controller. The superior control performance of the proposed controller over the previous control methods is thus confirmed again by these intensive analyses of the obtained results.
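The RMS/MA ratio used above is easy to reproduce from logged error signals; a minimal sketch:

```python
import numpy as np

def rms_over_ma(err):
    """RMS/MA ratio used in the discussion of Table 3: values near 1 indicate a
    persistent error, smaller values an error dominated by brief spikes."""
    err = np.asarray(err, dtype=float)
    return np.sqrt(np.mean(err**2)) / np.max(np.abs(err))

spiky = np.zeros(100)
spiky[0] = 1.0
ratio_spiky = rms_over_ma(spiky)          # sqrt(1/100) = 0.1
ratio_const = rms_over_ma(np.ones(100))   # exactly 1.0
```

This is why a controller can have a larger MA value yet a better RMS/MA ratio: a single transient spike barely affects the RMS while setting the MA.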

| Control error (deg) | | Joint 1 MA | Joint 1 RMS | Joint 2 MA | Joint 2 RMS | Joint 3 MA | Joint 3 RMS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1st case | PID | 1.02 | 0.69 | 3.9 | 2.5 | 5.8 | 4.1 |
| | LND | 0.096 | 0.051 | 0.12 | 0.044 | 1.94 | 1.21 |
| | PRO | 0.1 | 0.008 | 0.11 | 0.025 | 0.41 | 0.041 |
| 2nd case | PID | 9.3 | 6.3 | 3.9 | 2.3 | 8.2 | 5.5 |
| | LND | 0.7 | 0.44 | 0.13 | 0.02 | 4.1 | 2.6 |
| | PRO | 0.16 | 0.04 | 0.12 | 0.02 | 0.83 | 0.22 |

Table 3.

Statistical control errors of the comparative controllers.


8. Conclusions

This chapter presents a new intelligent control method for high-performance motion control of robotic manipulators with output constraints. The controller is built on a new neural-disturbance constrained sliding-mode structure. A nonlinear sliding-mode control signal is first derived to strictly stabilize the control objective within predefined output constraints. The control accuracy is next improved by eliminating the nonlinear uncertainties and external disturbances in the system dynamics using a new nonlinear neural network. The estimation error is then compensated by proper integration of a nonlinear disturbance observer. Through this neural-disturbance mechanism and a minor robust signal, an asymptotic control outcome is achieved. The effectiveness of the overall control system is investigated by rigorous theoretical proofs and comparative simulation results in various working conditions.


Appendix A: Proof of Theorem 1

The following Lyapunov function is first considered:

$$ L_1 = \frac{1}{2}\sum_{i=1}^n\left(b_i s_i^2 + \tilde w_i^T\tilde w_i\right) \tag{A1} $$

where $\tilde w_i = w_i - \hat w_i$ denotes the weight estimation error.

By substituting the dynamics Eqs. (15) and (16) into the time derivative of the function (A1), we next have

$$ \begin{aligned} \dot L_1 &= \sum_{i=1}^n\left[-b_i k_{1i} s_i^2 - b_i\left|s_i\right| k_{2i} + b_i s_i\delta_i\right] + \sum_{i=1}^n\frac{s_i^2}{1+s_i^2}\,\tilde w_i^T\,\mathrm{diag}(a_{1i})\,\mathrm{diag}(r_i)^2\,\hat w_i \\ &\le \sum_{i=1}^n\left[-b_i k_{1i} s_i^2 - b_i\left|s_i\right|\left(k_{2i} - \left|\delta_i\right|\right)\right] - \sum_{i=1}^n\frac{s_i^2}{1+s_i^2}\,\tilde w_i^T\,\mathrm{diag}(a_{1i})\,\mathrm{diag}(r_i)^2\,\tilde w_i + \sum_{i=1}^n\frac{s_i^2}{1+s_i^2}\,\tilde w_i^T\,\mathrm{diag}(a_{1i})\,\mathrm{diag}(r_i)^2\, w_i \end{aligned} \tag{A2} $$

From the condition Eq. (17), there always exist positive constants $\lambda_{i1}$, $\lambda_{i2}$ ($i = 1..n$) such that

$$ \dot L_1 \le -\sum_{i=1}^n\lambda_{i1} b_i k_{1i} s_i^2 - \sum_{i=1}^n\lambda_{i2}\,\frac{s_i^2}{1+s_i^2}\,\tilde w_i^T\,\mathrm{diag}(a_{1i})\,\mathrm{diag}(r_i)^2\,\tilde w_i \tag{A3} $$

This completes the proof of Theorem 1.

Appendix B: Proof of Theorem 2

Dynamics of the subsystems Eqs. (15), (19), and (20) are presented in element-wise form as follows:

$$ \begin{aligned} \dot s_i &= \tilde w_i^T r_i - k_{1i} s_i - k_{2i}\,\mathrm{sgn}(s_i) + \tilde\delta_i \\ \dot{\tilde\delta}_i &= -\alpha_i\tilde\delta_i - \frac{b_i}{p_i} s_i - k_{3i}\,\mathrm{sgn}(s_i) + \varsigma_i \end{aligned} \qquad i = 1..n \tag{B1} $$

where $\tilde\delta_i = \delta_i - \hat\delta_i$ is the disturbance-estimation error.

A new integral-type Lyapunov function is investigated:

$$ L_2 = L_1 + \sum_{i=1}^n\left[\frac{1}{2} p_i\tilde\delta_i^2 + \int_{s_i(0)}^{s_i}\left(k_{3i}\,\mathrm{sgn}(\sigma_i) + \varsigma_i\right)d\sigma_i + L_{20i}\right] \tag{B2} $$

where $L_{20i}$ ($i = 1..n$) is a positive constant selected as [46]:

$$ L_{20i} = \frac{\left(k_{3i} + \varsigma_{i\max}\right)^2}{2 b_i} + \left(k_{3i} + \varsigma_{i\max}\right)\left|s_i(0)\right| \tag{B3} $$

The time derivative of the function (B2) under the dynamical behaviors Eqs. (B1) and (21) satisfies the following inequality:

$$ \begin{aligned} \dot L_2 \le {}& -\sum_{i=1}^n\left[b_i s_i\left(k_{1i} s_i + k_{2i}\,\mathrm{sgn}(s_i)\right) + \alpha_i p_i\tilde\delta_i^2\right] - \sum_{i=1}^n\tilde w_i^T\left(\mathrm{diag}(a_{1i})\,\frac{s_i^2}{1+s_i^2} + \mathrm{diag}(a_{2i})\right)\mathrm{diag}(r_i)^2\,\tilde w_i \\ & - \sum_{i=1}^n p_i\left(k_{1i} s_i + k_{2i}\,\mathrm{sgn}(s_i)\right)\left(k_{3i}\,\mathrm{sgn}(s_i) + \varsigma_i\right) + \sum_{i=1}^n\tilde w_i^T\left(\mathrm{diag}(a_{1i})\,\frac{s_i^2}{1+s_i^2} + \mathrm{diag}(a_{2i})\right)\mathrm{diag}(r_i)^2\, w_i \end{aligned} \tag{B4} $$

If the gains are selected to satisfy Eq. (22), there always exist further positive constants $\lambda_{i3}$ ($i = 1..n$) such that

$$ \dot L_2 \le -\sum_{i=1}^n\left(\lambda_{i1} b_i k_{1i} s_i^2 + \alpha_i p_i\tilde\delta_i^2\right) - \sum_{i=1}^n\tilde w_i^T\left(\lambda_{i2}\,\frac{s_i^2}{1+s_i^2}\,\mathrm{diag}(a_{1i}) + \lambda_{i3}\,\mathrm{diag}(a_{2i})\right)\mathrm{diag}(r_i)^2\,\tilde w_i \tag{B5} $$

It leads to the proof of Theorem 2.

Appendix C: The LND controller

From previous work [45], a linear neural-disturbance-observer backstepping (LND) controller is re-designed here for validation. Note that the previous controller was developed in the single-joint system space. From the control error Eq. (3), virtual control signals $u_i$ and virtual control errors $z_i$ ($i = 1..n$) are chosen as

$$ u_i = -k_{c0i} e_i + \dot q_{di}, \qquad z_i = \dot q_i - u_i, \qquad i = 1..n \tag{C1} $$

where $k_{c0i}$ ($i = 1..n$) are positive control gains.

The final control signal of the system is then selected as

$$ \tau_i = -e_i - k_{c1i} z_i - \hat\varphi_i + \dot u_i + \hat w_i^T\psi_i\left(q,\dot q, z, \dot u\right), \qquad i = 1..n \tag{C2} $$

where $k_{c1i}$ are positive control gains and $\psi_i$ ($i = 1..n$) are the regression vectors of the neural network. $\hat w_i$ ($i = 1..n$) are estimates of the optimal weight vectors and are updated by:

$$ \dot{\hat w}_i = \Gamma_i\left(\psi_i z_i - \mu_i\hat w_i\right) \tag{C3} $$

where μi are positive leakage rates, and Γi are diagonal positive-definite matrices.

$\hat\varphi_i$ ($i = 1..n$) are estimates of the systematic disturbances and are computed through auxiliary variables $\hat\phi_i$ ($i = 1..n$) obtained by the following learning mechanism:

$$ \hat\varphi_i = \hat\phi_i + k_{c2i} z_i, \qquad \dot{\hat\phi}_i = -k_{c2i}\left(\bar m_i^{-1}\tau_i - \dot q_i + \hat\varphi_i\right) \tag{C4} $$

where $k_{c2i}$ is a selected positive disturbance gain.

Appendix D: Dynamics of the simulation robot

The dynamics (1) of the robot whose configuration is presented in Figure 2 can be derived in detail using the Euler–Lagrange method as follows:

$$ M(q) = \begin{bmatrix} m_{11} & 0 & 0 \\ 0 & m_{22} & m_{23} \\ 0 & m_{32} & m_{33} \end{bmatrix}, \qquad \begin{aligned} m_{11} &= m_1 l_1^2 + m_2\left(l_1 + l_2 c_2\right)^2 + m_3\left(l_1 + l_2 c_2 + l_3 c_{23}\right)^2 \\ m_{22} &= m_2 l_2^2 + m_3\left(l_2^2 + l_3^2 + 2 l_2 l_3 c_3\right) \\ m_{23} &= m_{32} = m_3\left(l_2 l_3 c_3 + l_3^2\right) \\ m_{33} &= m_3 l_3^2 \end{aligned} \tag{D1} $$

$$ C(q,\dot q)\dot q = \left[c_1\; c_2\; c_3\right]^T, \qquad \begin{aligned} c_1 &= -2 m_2\left(l_1 + l_2 c_2\right) l_2 s_2\,\dot q_2\dot q_1 - 2 m_3\left(l_1 + l_2 c_2 + l_3 c_{23}\right)\left(l_2\dot q_2 s_2 + \left(\dot q_2 + \dot q_3\right) l_3 s_{23}\right)\dot q_1 \\ c_2 &= -2 m_3 l_2 l_3 s_3\,\dot q_2\dot q_3 - m_3 l_2 l_3 s_3\,\dot q_3^2 + m_2 l_2 s_2\left(l_1 + l_2 c_2\right)\dot q_1^2 + m_3\left(l_2 s_2 + l_3 s_{23}\right)\left(l_1 + l_2 c_2 + l_3 c_{23}\right)\dot q_1^2 \\ c_3 &= -m_3 l_2 l_3 s_3\,\dot q_3\dot q_2 + m_3\left(l_1 + l_2 c_2 + l_3 c_{23}\right) l_3 s_{23}\,\dot q_1^2 + m_3 l_2 l_3 s_3\,\dot q_2^2 + m_3 l_2 l_3 s_3\,\dot q_2\dot q_3 \end{aligned} \tag{D2} $$

$$ g(q) = g_0\left[0,\;\; m_2 l_2 c_2 + m_3\left(l_2 c_2 + l_3 c_{23}\right),\;\; m_3 l_3 c_{23}\right]^T \tag{D3} $$

$$ f(\dot q) = \left[a_1\dot q_1,\;\; a_2\dot q_2,\;\; a_3\dot q_3\right]^T \tag{D4} $$

where $q_i$, $l_i$, $m_i$, and $a_i$ ($i = 1,2,3$) are the joint positions, link lengths, link masses, and frictional coefficients, respectively; $g_0$ is the absolute gravitational-acceleration value; and $s_i$, $c_i$, $s_{23}$, $c_{23}$ abbreviate $\sin q_i$, $\cos q_i$, $\sin(q_2 + q_3)$, $\cos(q_2 + q_3)$.
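As a quick numerical sanity check of the derived model, the inertia matrix built from the Appendix D entries with the Table 1 masses and link lengths should be symmetric and positive definite at any configuration. A minimal sketch (the test configuration is arbitrary):

```python
import numpy as np

def mass_matrix(q, m=(5.0, 3.0, 2.0), l=(0.1, 0.2, 0.2)):
    """Inertia matrix of the simulation robot (entries as given in Appendix D),
    evaluated with the Table 1 masses and link lengths."""
    m1, m2, m3 = m
    l1, l2, l3 = l
    c2, c3 = np.cos(q[1]), np.cos(q[2])
    c23 = np.cos(q[1] + q[2])
    m11 = m1 * l1**2 + m2 * (l1 + l2 * c2)**2 + m3 * (l1 + l2 * c2 + l3 * c23)**2
    m22 = m2 * l2**2 + m3 * (l2**2 + l3**2 + 2.0 * l2 * l3 * c3)
    m23 = m3 * (l2 * l3 * c3 + l3**2)
    m33 = m3 * l3**2
    return np.array([[m11, 0.0, 0.0], [0.0, m22, m23], [0.0, m23, m33]])

M = mass_matrix(np.array([0.3, -0.4, 0.6]))
```

Positive definiteness of M(q) is what allows the forward dynamics to be solved for the joint accelerations at every time step.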

References

  1. 1. Goel R, Gupta P. Robotics and industry 4.0. In: A Roadmap to Industry 4.0: Smart Production, Sharp Business and Sustainable Development. Cham: Springer; 2019. pp. 157-169
  2. 2. Shehu N, Abba N. The role of automation and robotics in buildings for sustainable development. Journal of Multidisciplinary Engineering Science and Technology. 2019;6(2):9557-9560
  3. 3. Park Y, Jo I, Lee J, Bae J. A dual cable hand exoskeleton system for virtual reality. Mechatronics. 2018;49:177-186
  4. 4. He W, Chen Y, Yin Z. Adaptive neural network control of an uncertain robot with full-state constraints. IEEE Transactions on Cybernetics. 2016;46(3):620-629
  5. 5. Wang Y, Yan F, Chen J, Ju F, Chen B. A new adaptive time-delay control scheme for cable-driven manipulators. IEEE/ASME Transactions on Mechatronics. 2019;15(6):3469-3481
  6. 6. Tri NM, Ba DX, Ahn KK. A gain-adaptive intelligent nonlinear control for an electrohydraulic rotary actuator. International Journal of Precision Engineering and Manufacturing, vo. 2018;19(5):665-673
7. Guo Q, Liu Y, Wang Q, Jiang D. Adaptive neural network control of two-DOF robotic arm driven by electro-hydraulic actuator with output constraint. In: Proceedings of the IET Conference. Guiyang, China, 19–22 June; 2018
8. Cui R, Guo J, Mao Z. Adaptive backstepping control of wheeled inverted pendulums models. Nonlinear Dynamics. 2015;79(1):501-511
9. Tee KP, Ren B, Ge SS. Control of nonlinear systems with time-varying output constraints. Automatica. 2011;47(11):2511-2516
10. Cui R, Gao B, Guo J. Pareto-optimal coordination of multiple robots with safety guarantees. Autonomous Robots. 2012;32(3):189-205
11. Wu L, Yan Q, Cai J. Neural network-based adaptive learning control for robot manipulators with arbitrary initial errors. IEEE Access. 2019;7:180194-180204
12. Zhou Q, Wang L, Wu C, Li H, Du H. Adaptive fuzzy control for nonstrict-feedback systems with input saturation and output constraint. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2017;47:1-12
13. Yu J, Zhao L, Yu H, Lin C. Barrier Lyapunov functions-based command filtered output feedback control for full-state constrained nonlinear systems. Automatica. 2019;105:71-79
14. Zhu YK, Qiao JZ, Guo L. Adaptive sliding-mode disturbance observer-based composite control with prescribed performance of space manipulators for target capturing. IEEE Transactions on Industrial Electronics. 2019;66(3):1973-1983
15. Jing C, Xu H, Niu X. Adaptive sliding-mode disturbance rejection control with prescribed performance for robot manipulators. ISA Transactions. 2019;91:41-51
16. Bechlioulis CP, Rovithakis GA. Prescribed performance adaptive control for multi-input multi-output affine in the control nonlinear systems. IEEE Transactions on Automatic Control. 2010;55(5):1220-1226
17. Kostarigka AK, Rovithakis GA. Prescribed performance output feedback observer-free robust adaptive control of uncertain systems using neural networks. IEEE Transactions on Systems, Man, and Cybernetics. B: Cybernetics. 2011;41(6):1483-1494
18. Craig JJ. Manipulator dynamics. In: Introduction to Robotics: Mechanics and Control. 3rd ed. USA: Pearson Prentice Hall; 2005. pp. 165-200
19. Zhu WH. Virtual decomposition control – general formulation. In: Virtual Decomposition Control: Toward Hyper Degrees of Freedom Robots. Berlin Heidelberg: Springer-Verlag; 2010. pp. 63-109
20. Karayiannidis Y, Papageorgiou D, Doulgeri Z. A model-free controller for guaranteed prescribed performance tracking of both robot joint positions and velocities. IEEE Robotics and Automation Letters. 2016;1(1):267-274
21. Hsia TC. A new technique for robust control of servo systems. IEEE Transactions on Industrial Electronics. 1989;36(1):1-7
22. Youcef-Toumi K, Ito O. A time-delay controller for systems with unknown dynamics. Journal of Dynamic Systems, Measurement, and Control. 1990;112(1):133-142
23. Wang YX, Yu DH, Kim YB. Robust time delay control for the DC-DC boost converter. IEEE Transactions on Industrial Electronics. 2014;61(9):4829-4837
24. Ba DX, Yeom H, Bae JB. A direct robust nonsingular terminal sliding mode controller based on an adaptive time-delay estimator for servomotor rigid robots. Mechatronics. 2019;59:82-94
25. Lee J, Chang PH, Jin M. An adaptive gain dynamics for time delay control improves accuracy and robustness to significant payload changes for robots. IEEE Transactions on Industrial Electronics. 2020;67(4):3076-3085
26. Jin M, Lee J, Tsagarakis NG. Model free robust adaptive control of humanoid robots with flexible joints. IEEE Transactions on Industrial Electronics. 2017;64(2):1706-1715
27. Lee JY, Jin M, Chang PH. Variable PID gain-tuning method using backstepping control with time delay estimation and nonlinear damping. IEEE Transactions on Industrial Electronics. 2014;61(12):6975-6985
28. Bechlioulis CP, Rovithakis GA. Robust adaptive control of feedback linearizable MIMO nonlinear systems with prescribed performance. IEEE Transactions on Automatic Control. 2008;53:2090-2099
29. Ba DX, Truong DQ, Ahn KK. An integrated intelligent nonlinear control method for pneumatic artificial muscle. IEEE/ASME Transactions on Mechatronics. 2016;21(4):1835-1845
30. San P, Ren B, Ge SS, Lee TH, Liu J. Adaptive neural network control of hard disk drives with hysteresis friction nonlinearity. IEEE Transactions on Control Systems Technology. 2011;19(2):351-358
31. Park J, Sandberg IW. Universal approximation using radial-basis-function networks. Neural Computation. 1991;3(2):246-257
32. Khooban H, Vafamand N, Niknam T, Dragicevic T, Blaabjerg F. Model-predictive control based on Takagi-Sugeno fuzzy model for electrical vehicles delayed model. IET Electric Power Applications. 2017;11(5):918-934
33. Wilamowski BM, Cotton NJ, Kaynak O, Dundar G. Computing gradient vector and Jacobian matrix in arbitrary connected neural networks. IEEE Transactions on Industrial Electronics. 2008;55(10):3784-3790
34. Jung S, Kim SS. Hardware implementation of a real-time neural network controller with a DSP and an FPGA for nonlinear systems. IEEE Transactions on Industrial Electronics. 2007;54(1):265-271
35. Chao F, Zhou D, Lin CM, Yang L, Zhou C, Shang C. Type-2 fuzzy hybrid controller network for robotic systems. IEEE Transactions on Cybernetics. 2020;50(8):3778-3792
36. Wang M, Yang A. Dynamic learning from adaptive neural control of robot manipulators with prescribed performance. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2017;47(8):2244-2255
37. Li Z, Kang Y, Xiao Z, Song W. Human–robot coordination control of robotic exoskeletons by skill transfers. IEEE Transactions on Industrial Electronics. 2017;64(6):5171-5181
38. Jing C, Xu H, Niu X. Adaptive sliding mode disturbance rejection control with prescribed performance for robotic manipulators. ISA Transactions. 2019;91:41-51
39. Chen M, Ge SS. Adaptive neural output feedback control of uncertain nonlinear systems with unknown hysteresis using disturbance observer. IEEE Transactions on Industrial Electronics. 2015;62(12):7706-7716
40. Chen M, Shao SY, Yang B. Adaptive neural control of uncertain nonlinear systems using disturbance observer. IEEE Transactions on Cybernetics. 2015;62(12):7706-7716
41. Zhang JJ. State observer-based adaptive neural dynamics surface control for a class of uncertain nonlinear systems with input saturation using disturbance observer. Neural Computing and Applications. 2019;31:4993-5004
42. Zhang L, Li Z, Yang C. Adaptive neural network based variable stiffness control of uncertain robotic systems using disturbance observer. IEEE Transactions on Industrial Electronics. 2017;64(3):2236-2245
43. Li Z, Su C, Wang L, Chen Z, Chai T. Nonlinear disturbance observer-based control design for a robotic exoskeleton incorporating fuzzy approximation. IEEE Transactions on Industrial Electronics. 2015;62(9):5763-5775
44. He W, Sun Y, Yan Z, Yang C, Li Z, Kaynak O. Disturbance observer-based neural network control of cooperative multiple manipulators with input saturation. IEEE Transactions on Neural Networks and Learning Systems. 2020;31(5):1735-1746
45. Ba DX, Truong DQ, Bae JB, Ahn KK. An effective disturbance-observer-based nonlinear controller for a pump-controlled hydraulic system. IEEE/ASME Transactions on Mechatronics. 2020;25(1):23-32
46. Ba DX, Bae JB. A precise neural-disturbance learning controller of constrained robotic manipulators. IEEE Access. 2021;9:50381-50390
47. Ba DX. An intelligent sliding mode controller of robotic manipulators with output constraints and high-level adaptation. International Journal of Robust and Nonlinear Control. 2022;32(12):6888-6912