Open access peer-reviewed chapter

Self-Learning Low-Level Controllers

Written By

Dang Xuan Ba and Joonbum Bae

Submitted: 20 October 2020 Reviewed: 19 February 2021 Published: 20 March 2021

DOI: 10.5772/intechopen.96732

From the Edited Volume

Collaborative and Humanoid Robots

Edited by Jesús Hamilton Ortiz and Ramana Kumar Vinjamuri


Abstract

Humanoid robots are complicated systems in both hardware and software design. Furthermore, these robots normally work in unstructured environments in which unpredictable disturbances can degrade the control performance of the whole system. As a result, simple yet effective controllers are favored in the low-level layers. Gain-learning algorithms applied to conventional control frameworks, such as Proportional-Integral-Derivative (PID), sliding-mode, and backstepping controllers, are reasonable solutions. The integrated adaptation ability automatically tunes the control gains toward the optimal control criterion in both the transient and steady-state phases. The learning rules can be realized using analytical nonlinear functions. Their effectiveness and feasibility are carefully examined through theoretical proofs and experimental discussion.

Keywords

  • backstepping control
  • PID control
  • sliding mode control
  • gain-learning control
  • position control
  • low-level control

1. Introduction

Precise motion control of low-level systems is one of the most important tasks in industrial and humanoid robotic systems [1, 2, 3]. Different from industrial robots, which commonly operate in compact regions with simple and almost repetitive missions, humanoid robots perform complicated work and face unknown disturbances in daily activities. Hence, designing a high-performance controller that is easy to use in real-time implementation for such maneuverable systems is a big challenge [4, 5].

To accomplish motion control in real-time applications, conventional proportional-integral-derivative (PID) controllers are the first choice of engineers and researchers thanks to their simplicity of design and acceptable control outcomes for uncertain systems [6, 7, 8, 9, 10, 11]. Stability of the servo-controlled systems is proven by theoretical analyses, and their flexibility can be enhanced using machine-learning methods such as ordinary or neuro-fuzzy-logic-based self-tuning [7, 10, 11], pole-placement adaptation [8], or convolutional learning [9]. However, using linear control signals to suppress the nonlinear behaviors of robotic dynamics may lead to unexpected transient performance. To overcome this drawback, nonlinear controllers such as sliding-mode control (SMC), backstepping control (BSC), and inverse dynamics control have received attention from developers [12, 13, 14, 15, 16, 17]. Indeed, a robust-integral-sign-error (RISE) controller was studied to consolidate the lumped disturbances inside the system dynamics and achieve asymptotic control results [13]. In another direction, a model-based nonlinear disturbance-observer controller was proposed based on the backstepping technique to yield excellent control accuracy [15]. Nevertheless, extended studies noted that such outstanding control performance is difficult to preserve with hard (fixed) control gains in diverse real-time operations [18, 19].

As a result, gain-learning SMC algorithms have been developed for robotic systems [18, 19, 20, 21]. The control objective can be minimized through learning processes of the robust gains, driving gains, or massive gains [22, 23]. In fact, some control gains must still be tuned manually over possibly wide ranges owing to the nature of each control plant, which can be inconvenient during operation.

Intelligent methods for automatically tuning all the control gains have also been proposed, based on modified backtracking search algorithms (MBSA) combined with a Type-2 fuzzy-logic design [24] or on model-predictive approaches [25]. The desired gains can be estimated for the best performance by dealing with closed-loop optimal constraints. Although promising control results were presented, smooth variation of the gain dynamics needs further consideration.

Gain-learning control approaches under the backstepping design provide another interesting direction as well. PID control with a gain-varying technique encoded by the backstepping scheme was formerly studied [26]. Success of this creative control method was confirmed by a thorough theoretical proof and experimental validation results. However, since the learning process of all the control gains is generated by only one damping function, the versatility of the control design may be limited for diverse working conditions. Improvement of the flexibility of gain selection is thus still an open issue.

In this chapter, an extensive gain-adaptive nonlinear control approach is presented for high-performance motion control of a low-level servo system. The controller comprises an inner robust nonlinear loop and an outer gain-learning loop. The inner loop is developed based on a RISE-modified backstepping framework to ensure asymptotic tracking control in the presence of nonlinear uncertainties and disturbances. The outer loop contains a new gain-adaptation engine that activates the variation gains of the inner loop in real-time applications. The theoretical effectiveness of the proposed controller is concretely proven by Lyapunov-based analyses, and the feasibility of the control approach was confirmed by intensive real-time experiments on a legged robot. These features are presented in detail in the sections below.


2. Problem statements

General dynamics of a robotic system could be expressed in the following form:

$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) + \tau_{fr}(\dot{q}) + J^{T}f_{ext} = \tau \qquad (1)$$

where $q, \dot{q}, \ddot{q} \in \mathbb{R}^{n}$ are respectively the joint position, velocity, and acceleration vectors; $M(q) \in \mathbb{R}^{n\times n}$ is the inertia matrix; $C(q,\dot{q}) \in \mathbb{R}^{n\times n}$ is the centrifugal/Coriolis matrix; $g(q) \in \mathbb{R}^{n}$ denotes the gravitational torque; $\tau_{fr}(\dot{q}) \in \mathbb{R}^{n}$ is the frictional torque; $J^{T}$ is the respective Jacobian matrix; $f_{ext}$ is the external disturbance; and $\tau$ is the control torque at the robot joints.

The main control objective here is to find a proper control signal $\tau$ that keeps the control error between the system output and a desired profile stabilized at the origin under various complicated environments.

To realize this control objective, conventional linear or nonlinear controllers such as proportional-integral-derivative (PID) and sliding-mode control (SMC) methods are the priority selections in industry thanks to their simplicity and robustness. However, the same mission in humanoid robots is a different story: the systems frequently operate in unknown environments with harsh, unpredictable disturbances [27, 28]. The required controller must therefore offer strong robustness, fast adaptation, and easy implementation.


3. Low-level intelligent nonlinear controller

In this subsection, a position controller is developed based on the general model using the backstepping technique and new adaptation laws. The dynamics Eq. (1) can be split into low-level subsystems in the following state-space form:

$$\dot{x}_1 = x_2 + \upsilon,\qquad \dot{x}_2 = a_1 x_2 + a_2 u + d \qquad (2)$$

where $x_1 = q_i\ (i = 1..n)$ denotes a specific joint angle, $x_2$ is the measured joint velocity, $u = \tau_i$ is the control torque at that joint, $\upsilon$ is the measurement noise, $a_1$ is a positive constant representing the nominal dynamics, $a_2$ is another positive constant standing for the inverse nominal mass of the low-level dynamics, and $d$ is the lumped disturbance denoting the deviation of the internal dynamics. Note that $x_1$ and $x_2$ satisfy the following assumptions:

Assumption 1:

  1. The system output x1 is measurable.

  2. The angular velocity x2 is bounded and is indirectly measured from the angular data with a bounded tolerance υ.
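To make the model concrete, the two-state dynamics of Eq. (2) can be exercised numerically. The sketch below integrates the model with forward Euler using the nominal hip parameters reported later in Section 4.1 ($a_1 = 2.5$, $a_2 = 12.25$); the noise, disturbance, and state-feedback signals are illustrative assumptions only.

```python
import numpy as np

def simulate(u_fn, a1=2.5, a2=12.25, dt=0.002, t_end=1.0):
    """Forward-Euler simulation of Eq. (2):
    x1_dot = x2 + v,  x2_dot = a1*x2 + a2*u + d."""
    x1, x2 = 0.0, 0.0
    log = []
    for k in range(int(t_end / dt)):
        t = k * dt
        v = 1e-3 * np.sin(200.0 * t)         # assumed measurement noise
        d = 0.5 * np.sin(4.0 * np.pi * t)    # assumed lumped disturbance
        u = u_fn(t, x1, x2)
        x1 += (x2 + v) * dt
        x2 += (a1 * x2 + a2 * u + d) * dt
        log.append((t, x1, x2))
    return np.array(log)

# a simple stabilizing state feedback, for illustration only
traj = simulate(lambda t, x1, x2: -(1.0 * x1 + 0.5 * x2))
```

Under this feedback the open-loop growth introduced by the positive $a_1$ is dominated, and the states stay bounded despite the injected disturbance.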

3.1 Robust backstepping control scheme

Let us formulate the main control error as:

$$e_1 = x_1 - x_{1d} \qquad (3)$$

where x1d is the desired trajectory of the controlled joint.

Before designing the final control signal, additional assumptions are given.

Assumption 2:

  1. The measurement noise υ is bounded and differentiable up to the second order.

  2. The disturbance d and its time derivative are bounded.

  3. The desired signal x1d is bounded and differentiable up to the third order.

The time derivative of the control objective $e_1$, considering the first equation of the dynamics Eq. (2), is:

$$\dot{e}_1 = x_2 + \upsilon - \dot{x}_{1d} \qquad (4)$$

To control the error e1 to zero or to be as small as possible, a virtual control signal is employed to remove the time derivative of the desired signal and to compensate for the disturbance υ:

$$x_{2d} = \dot{x}_{1d} - k_1 e_1 \qquad (5)$$

where k1 is a positive constant.

A new state control error is defined as:

$$e_2 = x_2 - x_{2d} \qquad (6)$$

Differentiating the new error with respect to time and using the second equation of the dynamics Eq. (2) leads to:

$$\dot{e}_2 = a_1 x_2 + a_2 u + d - \ddot{x}_{1d} + k_1\big(x_2 + \upsilon - \dot{x}_{1d}\big) \qquad (7)$$

To drive the new control error $e_2$ to an expected range, the final control signal, comprising a model-based term and a robust term, is proposed as follows:

$$u = a_2^{-1}\Big[-a_1 x_2 - k_1\big(x_2 - \dot{x}_{1d}\big) - (k_2 - k_1)e_2 - (k_3 + k_4)e_1 - \int_0^t \big(k_4 k_1 e_1 + k_5\,\mathrm{sgn}(e_1)\big)d\tau\Big] \qquad (8)$$

where $k_i\ (i = 2, 3, 4, 5)$ are positive control gains.
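A discrete-time sketch of the control law Eq. (8) follows, under the sign convention reconstructed above; the class name and the Euler accumulation of the integral term are implementation assumptions, not part of the original design.

```python
import numpy as np

class RobustBackstepping:
    """Discrete-time sketch of the robust backstepping law of Eq. (8)."""

    def __init__(self, a1, a2, k1, k2, k3, k4, k5, dt):
        self.a1, self.a2, self.dt = a1, a2, dt
        self.k1, self.k2, self.k3, self.k4, self.k5 = k1, k2, k3, k4, k5
        self.integral = 0.0   # running integral of k4*k1*e1 + k5*sgn(e1)

    def control(self, x1, x2, x1d, x1d_dot):
        e1 = x1 - x1d                    # main control error, Eq. (3)
        x2d = x1d_dot - self.k1 * e1     # virtual control, Eq. (5)
        e2 = x2 - x2d                    # state control error, Eq. (6)
        self.integral += (self.k4 * self.k1 * e1
                          + self.k5 * np.sign(e1)) * self.dt
        return (-self.a1 * x2 - self.k1 * (x2 - x1d_dot)
                - (self.k2 - self.k1) * e2
                - (self.k3 + self.k4) * e1 - self.integral) / self.a2
```

A positive position error $e_1$ then drives the commanded torque negative, as expected of the stabilizing feedback.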

Stability of the closed-loop system under the controller Eq. (8) can be confirmed by the following statement.

Lemma 1:

Given a low-level system Eq. (2) under Assumptions 1 and 2, if the control rule Eqs. (3)–(8) is employed, stability of the closed-loop system is ensured for positive bounded control gains $k_i\ (i = 1..5)$ satisfying:

$$k_2 - k_1 > 0 \qquad (9)$$

Proof of Lemma 1 is given in Appendix A.

Remark 1: Lemma 1 reveals that the closed-loop system is stabilized in a vicinity around zero under the constraint Eq. (9). Obviously, acceptable control performance can be obtained with properly selected control gains.

Effectiveness of the nonlinear control structure is achieved by the following statement:

Theorem 1:

Given a closed-loop system satisfying Lemma 1, it converges asymptotically if the control gains are further chosen such that:

$$k_5 \ge \Delta(\dot{h}) \qquad (10)$$

Proof of Theorem 1 is discussed in Appendix B.

Remark 2: In real-time situations [15, 29, 30], the position data $x_1$ are used to approximate the velocity $x_2$ through a low-pass filter. Thus, the perturbation term ($\upsilon$) obviously exists in the studied model Eq. (2), and its variation depends on the filter used.

Remark 3: With the robust backstepping control scheme designed, excellent control performance can be achieved with properly selected control gains, regardless of the presence of the disturbances. Perfectly selecting the gains for a good transient performance, and maintaining high-precision control results over divergent working conditions in real-time control, is not trivial, however.

3.2 Auto gain-tuning rules

To effectively support gain selection for users, a simple gain-tuning strategy is employed: the control gains $k_i\ (i = 1..5)$ are separated into two terms: nominal elements $\bar{k}_i$ and variation elements $\tilde{k}_i$. The nominal ones play a key role in ensuring stability of the closed-loop system. The variation gains are self-adjusted to suppress unpredictable disturbances for the expected transient performance.

Furthermore, to ensure high control quality by avoiding sudden change of the gain variation, which could activate a chattering problem [25], the following constraints are noted.

Assumption 3: The variation terms $\tilde{k}_i\ (i = 1..5)$ and their first-order time derivatives are bounded.

Under operation of the flexible gains, the nonlinear control signal Eq. (8) is modified:

$$u_{new} = u - a_2^{-1}\,\mathrm{sat}(\tilde{k}_1)\,e_1 \qquad (11)$$

where $\mathrm{sat}(\tilde{k}_i)\ (i = 1..5)$ are saturation functions limited by upper-bound values $k_{i\_up}$ and lower-bound values $k_{i\_lo}$ as follows:

$$\mathrm{sat}(\tilde{k}_i) = \begin{cases} k_{i\_up} & \text{if } \tilde{k}_i \ge k_{i\_up} \ge 0\\ \tilde{k}_i & \text{if } k_{i\_lo} < \tilde{k}_i < k_{i\_up}\\ k_{i\_lo} & \text{if } \tilde{k}_i \le k_{i\_lo} \le 0 \end{cases}$$
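In code, the saturation above is a plain clamp; the helper below is a minimal illustrative sketch.

```python
def sat(k_tilde, k_lo, k_up):
    """Clamp a variation gain to its predefined bounds (the saturation
    function used in Eq. (11))."""
    return max(k_lo, min(k_tilde, k_up))
```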

Lemma 2:

If a closed-loop system satisfies Lemma 1, it is stable for the time-varying gains complying with Assumption 3, and

$$0 < k_{i\min} \le k_i \le k_{i\max},\quad \Delta(\dot{\tilde{k}}_i) < \infty\quad (i = 1..5),\qquad \Delta(\tilde{k}_1) + \Delta(\dot{\tilde{k}}_1) < k_{3\min} \qquad (12)$$

Proof of Lemma 2 is given in Appendix D.

To comply with Assumption 3, the learning laws for the dynamic gains are structured from activation functions of the state control errors and leakage functions, which ensure boundedness of the learning gains.

The learning rules for the variation gains are proposed as follows:

$$\begin{aligned}
\dot{\tilde{k}}_1 &= \sigma_1\big(e_1\varepsilon - \mathrm{sat}(\tilde{k}_1)\big)\\
\dot{\tilde{k}}_2 &= \sigma_2^{-1}\big(\varepsilon^2 - \eta_2\,\mathrm{sat}(\tilde{k}_2)\big)\\
\dot{\tilde{k}}_3 &= \sigma_3^{-1}\big(e_1\varepsilon - \eta_3\,\mathrm{sat}(\tilde{k}_3)\big)\\
\dot{\tilde{k}}_4 &= \sigma_4^{-1}\big(\varepsilon\varphi - \eta_4\,\mathrm{sat}(\tilde{k}_4)\big)\\
\dot{\tilde{k}}_5 &= \sigma_5^{-1}\big(\mathrm{sgn}(e_1)\varphi - \eta_5\,\mathrm{sat}(\tilde{k}_5)\big)
\end{aligned} \qquad (13)$$

where $\eta_i\ (i = 2..5)$ and $\sigma_i\ (i = 1..5)$ are positive learning rates.
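One discrete step of the learning laws Eq. (13) can be sketched as below. The helper is a hypothetical vectorized form: for compactness every law is written as $(\text{activation} - \eta_i\,\mathrm{sat}(\tilde{k}_i))/\sigma_i$, with $\eta_1 = 1$, the $\tilde{k}_1$ rate folded into $\sigma_1$, and the signs as reconstructed here.

```python
import numpy as np

def update_gains(k_tilde, e1, eps, phi, dt, sigma, eta, lo, up):
    """One Euler step of the gain-learning laws of Eq. (13). Each law pairs
    an error-driven activation with a leakage term -eta_i*sat(k_i) that
    keeps the variation gains bounded."""
    sat = np.clip(k_tilde, lo, up)
    activation = np.array([
        e1 * eps,             # k1
        eps ** 2,             # k2
        e1 * eps,             # k3
        eps * phi,            # k4
        np.sign(e1) * phi,    # k5
    ])
    return k_tilde + dt * (activation - eta * sat) / sigma
```

With the gains at zero the leakage vanishes and the step is driven purely by the error-dependent activation terms.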

To investigate the control performance of the learning control system, a new theorem is given.

Theorem 2:

If the control gains updated by Eq. (13) are applied to a closed-loop system satisfying Lemma 2, asymptotic convergence of the state control errors and variation gains is obtained.

Proof of Theorem 2 is given in Appendix E.

Remark 4: An overview of the proposed controller is sketched in Figure 1. As stated in Theorem 1, the stability of the closed-loop system is ensured in a robust control framework, and as proven in Theorem 2, the adaptability of the control structure is highlighted by all the control gains learning to minimize the tracking control error. The form of Eq. (E.4) reveals that the learning rates ($\sigma_i\ (i = 1..5)$ and $\eta_i\ (i = 2..5)$) can be employed with predefined values for specific control hardware.

Figure 1.

Overview of the gain-learning backstepping controller for low-level subsystems.

Remark 5: In real-time applications, the proposed algorithm is deployed in a discrete-time environment, so the control errors converge to small vicinities around zero. The convergence range can, however, be minimized by the proposed learning mechanism.


4. Real-time experiments

4.1 Setup

In this section, the control performance of the intelligent controller is discussed based on verification results carried out on a real-time two-degree-of-freedom (2-DOF) legged robot. The experimental leg included one hip joint and one knee joint, actuated by two BLDC motors. The mechanical design and a photograph of the actual leg are presented in Figure 2.

Figure 2.

Design and setup of the experimental testing system.

Incremental encoders were used to measure the joint angles, while a force sensor was placed in the shank of the robot to evaluate the ground contact force. The velocity signal was calculated from filtered backward differentiation of the position data. The robot was set up to move freely in both the x and y directions. The total weight of the robot was about 15.74 kg. The proposed control algorithm was deployed on an NI electrical controller through LabVIEW software with a sampling time of 2 ms. The time-derivative and integral terms in the real-time implementation were approximated by Euler backward methods.
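The discrete-time operations just mentioned (backward differentiation of the encoder position passed through a low-pass filter, and Euler-backward accumulation of the integral terms) can be sketched as follows; the first-order filter and its cutoff `wc` are assumed details, since the text does not specify the filter.

```python
class DiscreteSignals:
    """Sketch of filtered backward differentiation and Euler-backward
    integration at a fixed sampling time dt."""

    def __init__(self, dt=0.002, wc=100.0):
        self.dt = dt
        self.alpha = wc * dt / (1.0 + wc * dt)  # first-order filter coefficient
        self.x_prev = None
        self.vel = 0.0
        self.integral = 0.0

    def velocity(self, x):
        """Backward difference of the position, low-pass filtered."""
        raw = 0.0 if self.x_prev is None else (x - self.x_prev) / self.dt
        self.x_prev = x
        self.vel += self.alpha * (raw - self.vel)
        return self.vel

    def integrate(self, f):
        """Euler-backward accumulation of an integrand sample."""
        self.integral += f * self.dt
        return self.integral
```

Feeding a unit ramp position makes the filtered velocity settle at 1, and integrating a constant 1 for 1 s of samples returns approximately 1.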

Two systematic parameters $a_{1j}, a_{2j}\ (j = h, k)$ of the low-level systems could be estimated offline or online using a model-based identification method derived in previous works [27, 31, 32]. Nominal values of the parameters were approximately determined as $a_{1h} = 2.5$; $a_{2h} = 12.25$; $a_{1k} = 0.5$; $a_{2k} = 15$.

4.2 Comparative control results

Both the hip and knee joints were controlled at the same time using the same proposed control algorithm. The controller was also compared with an adaptive robust extended-state-observer-based (ARCESO) controller, a robust integral-sign-error (RISE) controller, and a fixed-gain (nominal-gain) version of itself, Eq. (8), denoted the robust backstepping (RB) controller.

The ARCESO controller was designed based on a previous work [30], with control gains chosen as:

$k_{1h} = 100$; $k_{2h} = 100$; $\omega_{0h} = 60$; $\theta_{\min h} = [6\ \ 1]^T$; $\theta_{\max h} = [25\ \ 5]^T$; $\hat{\theta}_h(0) = [12.25\ \ 2.5]^T$;
$k_{1k} = 80$; $k_{2k} = 75$; $\omega_{0k} = 40$; $\theta_{\min k} = [5\ \ 0.2]^T$; $\theta_{\max k} = [30\ \ 1.5]^T$; $\hat{\theta}_k(0) = [15\ \ 0.5]^T$.

The RISE controller was implemented based on a robust integral theory [13] to control the studied system Eq. (2) without considering the measurement noise υ. Its control signal was:

$$\begin{aligned}
e_1 &= x_1 - x_{1d};\qquad e_2 = \dot{e}_1 + k_{RISE1}e_1\\
u_{RISE} &= a_2^{-1}\Big[-a_1x_2 + \ddot{x}_{1d} - k_{RISE1}\big(x_2 - \dot{x}_{1d}\big) - k_{RISE2}e_2 - \int_0^t\big(k_{RISE3}e_2 + k_{RISE4}\,\mathrm{sgn}(e_2)\big)d\tau\Big]
\end{aligned} \qquad (14)$$

The RISE control gains were set to be:

$k_{RISE1h} = 53$; $k_{RISE2h} = 85.4$; $k_{RISE3h} = 20$; $k_{RISE4h} = 250$;
$k_{RISE1k} = 45$; $k_{RISE2k} = 65$; $k_{RISE3k} = 17$; $k_{RISE4k} = 235$.
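For reference, the RISE law of Eq. (14) can be sketched in the same discrete-time style (signs as reconstructed; the class and the Euler accumulation of the integral are illustrative assumptions):

```python
import numpy as np

class RISEController:
    """Discrete-time sketch of the RISE law of Eq. (14)."""

    def __init__(self, a1, a2, k1, k2, k3, k4, dt):
        self.a1, self.a2, self.dt = a1, a2, dt
        self.k1, self.k2, self.k3, self.k4 = k1, k2, k3, k4
        self.integral = 0.0   # running integral of k3*e2 + k4*sgn(e2)

    def control(self, x1, x2, x1d, x1d_dot, x1d_ddot):
        e1 = x1 - x1d
        e2 = (x2 - x1d_dot) + self.k1 * e1   # e2 = e1_dot + k1*e1, noise-free
        self.integral += (self.k3 * e2 + self.k4 * np.sign(e2)) * self.dt
        return (-self.a1 * x2 + x1d_ddot - self.k1 * (x2 - x1d_dot)
                - self.k2 * e2 - self.integral) / self.a2
```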

The nominal gains of the proposed controllers were chosen to be:

$\bar{k}_{1h} = 1.5$; $\bar{k}_{2h} = 72.5$; $\bar{k}_{3h} = 2000$; $\bar{k}_{4h} = 50$; $\bar{k}_{5h} = 200$;
$\bar{k}_{1k} = 5$; $\bar{k}_{2k} = 72.5$; $\bar{k}_{3k} = 2000$; $\bar{k}_{4k} = 50$; $\bar{k}_{5k} = 200$.

The excitation signals $\varepsilon$ and $\varphi$ of the learning laws Eq. (13) were directly synthesized from the control error ($e_1$) and its higher-order time derivatives based on Eq. (D.1):

$$\varepsilon = \dot{e}_1 + k_1 e_1,\qquad \varphi = \ddot{e}_1 + k_2\dot{e}_1 + \big(k_3 + k_2k_1 - k_1^2 + \mathrm{sat}(\tilde{k}_1)\big)e_1 \qquad (15)$$

From the nominal control gains selected, the feasible ranges of the variation gains were then chosen to satisfy the constraint Eq. (9):

Hip: $k_{1h\min} = 1.4$; $k_{2h\min} = 66.5$; $k_{3h\min} = 1000$; $k_{4h\min} = 49$; $k_{5h\min} = 199$; $k_{1h\max} = 3.5$; $k_{2h\max} = 500$; $k_{3h\max} = 4000$; $k_{4h\max} = 1500$; $k_{5h\max} = 1500$.
Knee: $k_{1k\min} = 4.9$; $k_{2k\min} = 66.5$; $k_{3k\min} = 1000$; $k_{4k\min} = 49$; $k_{5k\min} = 199$; $k_{1k\max} = 5$; $k_{2k\max} = 500$; $k_{3k\max} = 4000$; $k_{4k\max} = 1500$; $k_{5k\max} = 1500$.

The learning rates ($\sigma_i\ (i = 1..5)$ and $\eta_i\ (i = 2..5)$) were then set to comply with the condition Eq. (12) and to let the variation gains vary freely inside their predetermined ranges. For simplicity, the relaxation rates ($\eta_i$) could be chosen as 1 or 2. Finally, the tuned rates were:

$\sigma_{1h} = 1000$; $\sigma_{2h} = 10^{2}$; $\eta_{2h} = 1$; $\sigma_{3h} = 2\times10^{4}$; $\eta_{3h} = 1$; $\sigma_{4h} = 0.05$; $\eta_{4h} = 2$; $\sigma_{5h} = 0.03$; $\eta_{5h} = 1$;
$\sigma_{1k} = 1000$; $\sigma_{2k} = 10^{1}$; $\eta_{2k} = 1$; $\sigma_{3k} = 2\times10^{3}$; $\eta_{3k} = 1$; $\sigma_{4k} = 0.05$; $\eta_{4k} = 2$; $\sigma_{5k} = 0.03$; $\eta_{5k} = 1$.

4.2.1 Simple verification

In this validation series, the proposed controller was applied only to position-tracking control of the hip joint. A sinusoidal signal $x_{1dh} = 14\sin(4\pi t)$ deg was chosen as the desired trajectory of the test. The leg was set to move freely in the air to eliminate external disturbances. Figure 3(a) presents the experimental data obtained by the comparative controllers. The ARCESO controller produced a very small control error of ±0.14 deg (∼1.0%) in the high-speed tracking control thanks to its effective adaptive disturbance-learning mechanism; its performance is, however, still limited under fast-varying disturbances [30]. By adopting the integral-robust control signal Eq. (14) to compensate for the lumped disturbance ($d$) in the low-level system Eq. (2), the RISE controller also exhibited high control accuracy (control error: [−0.16; 0.14] deg (∼1.14%)). In practice, improperly selected control gains or large measurement noise ($\upsilon$) could degrade the RISE control performance. Operating with the highly robust design Eq. (8) against all the disturbances, the RB technique provided better control precision (control error: ±0.138 deg (∼0.98%)). Theoretically, the control performance could be further improved if the best control gains were found, but finding them may be time-consuming. As a solution, the gain-tuning process is supported by the proposed learning mechanism Eqs. (11) and (13). Indeed, the control quality was clearly enhanced by the proposed gain-adaptive robust backstepping (GARB) method, which yielded the smallest control error of ±0.085 deg (∼0.6%).

Figure 3.

Experimental results of the single-joint test. (a). Comparative control errors of the testing controllers. (b). Gain learning of the GARB controller.

The gain-learning behaviors are illustrated in Figure 3(b). As seen in the figure, the variation gains were automatically adjusted in various ways under the adaptation laws to minimize the control error. The maximum-absolute (MA) and root-mean-square (RMS) values of the control errors after the system was stable (from 2 s to 5 s) are summarized in Table 1. The proposed controller clearly outperforms the previous methods.

Control error    ARCESO    RISE     RB       GARB
MA               0.140     0.160    0.138    0.085
RMS              0.080     0.074    0.072    0.030

Table 1.

Performance comparison of the controllers for the single-joint validation.
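The MA and RMS figures of Table 1 correspond to the following computation over the 2–5 s steady-state window (a small illustrative helper):

```python
import numpy as np

def ma_rms(t, e, t_start=2.0, t_end=5.0):
    """Maximum-absolute and root-mean-square error over the
    steady-state window used for Table 1."""
    window = e[(t >= t_start) & (t <= t_end)]
    return np.max(np.abs(window)), np.sqrt(np.mean(window ** 2))
```

For a pure sinusoidal error of amplitude A, MA returns about A and RMS about A/√2.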

4.2.2 Complex verification

To further challenge the special properties of the proposed controller, the robot was controlled to perform a squatting exercise in three different working cases: in the air, on the ground, and with ground contact. The frequency and amplitude of the squatting motion were selected to be 2 Hz and 80 mm, respectively. These tests are normal working cases of the leg in real-time missions. The desired trajectories $x_{1dh}$ and $x_{1dk}$ of the two robot joints (hip and knee) are plotted in Figure 4. The trajectories were derived from the desired foot motion $P_y = 460 + 40\sin(4\pi t)$ mm, $P_x = 0$ mm using simple inverse-kinematics computation, as noted in Appendix F.
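The inverse-kinematics step can be illustrated for a planar two-link leg. The chapter defers the actual computation to Appendix F, so the standard solution below, including the thigh/shank lengths $L_1 = L_2 = 260$ mm, is an assumed sketch rather than the authors' exact geometry.

```python
import numpy as np

def leg_ik(px, py, L1=260.0, L2=260.0):
    """Hip and knee angles (rad) placing the foot at (px, py), with py
    measured downward from the hip (standard planar two-link solution)."""
    r2 = px ** 2 + py ** 2
    c_knee = (r2 - L1 ** 2 - L2 ** 2) / (2.0 * L1 * L2)
    q_knee = np.arccos(np.clip(c_knee, -1.0, 1.0))
    q_hip = np.arctan2(px, py) - np.arctan2(L2 * np.sin(q_knee),
                                            L1 + L2 * np.cos(q_knee))
    return q_hip, q_knee

# squatting foot trajectory of the experiments: Py = 460 + 40*sin(4*pi*t) mm
t = np.linspace(0.0, 0.5, 251)
q_hip, q_knee = leg_ik(0.0, 460.0 + 40.0 * np.sin(4.0 * np.pi * t))
```

Substituting the returned angles into the forward kinematics reproduces the commanded foot position.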

Figure 4.

Desired profiles of the robot joints in the multiple-joint tests.

4.2.2.1 Verification with minor external disturbances

Although the robot worked in the air, the disturbances affecting the controlled joints were large due to the high-speed control and the interaction forces between the joints during movement. The dynamic and static control results obtained by the validated controllers are shown in Figure 5 and Table 2, respectively. In spite of operating with faster motions (192.2 deg/s and 324.8 deg/s for the hip and knee joints) and under harder internal disturbance conditions, the ARCESO controllers maintained good control outcomes thanks to their strong adaptation ability: ±0.8 deg (∼4.8%) and ±1.5 deg (∼4.1%) for the hip and knee joints.

Figure 5.

Experimental results of the testing controllers for the multiple-joint test in case of small external disturbance. (a). Control errors of the comparative controllers. (b). Control inputs generated by the comparative controllers. (c) Forces measured at the shank with respect to the GARB controllers. (d) Gain learning of the GARB controllers.

Joint    Control error    ARCESO    RISE    RB      GARB
HIP      MA               0.8       1.1     1       0.35
         RMS              0.5       0.78    0.75    0.17
KNEE     MA               1.5       2.3     2       0.8
         RMS              0.74      1.17    1.15    0.37

Table 2.

Performance comparison of the validated controllers in the small disturbance tests.

As seen in Figure 5(a), the robust backstepping designs coped with the reaction forces as well. The RB and RISE controllers stabilized the control errors inside acceptable ranges: the errors for the hip and knee joints were respectively [−0.5; 1] deg (∼6%) and [−2; 1] deg (∼5.5%) with the RB, and [−0.8; 1.1] deg (∼6.6%) and [−2.3; 1.2] deg (∼6.3%) with the RISE. In the new working conditions, excellent control errors were also obtained by the GARB controller based on a new set of control gains found online. Figure 5(d) depicts the variation gains that were incorporated with the proposed robust design Eq. (11) to create better control performance than the others ([−0.2; 0.35] deg (∼2.1%) and [−0.8; 0.5] deg (∼2.2%) for the hip and knee control errors).

Comparison of the control power required to conduct the high-speed control motions is shown in Figure 5(b). Although the control efforts of the controllers were almost the same for this mission, minor differences in the nonlinearities of the control signals led to the divergence in control performance. The figure also reveals that the GARB controllers generated applicable control inputs even though the learning gains were moderated against the risk of high-order measurement noise. The benefit comes from the low-pass-filter-like nature of the proposed gain-learning algorithm. The external force affecting the leg, measured at the shank while using the GARB controllers, is presented in Figure 5(c); the coordinate frame of the measured force is sketched in Figure 2. This experiment shows the higher control accuracy and demonstrates the advantages of the proposed controller over the other controllers.

4.2.2.2 Verification with large external disturbances

In this experiment, the robust adaptive ability of the proposed controller was rigorously investigated under heavy external load. The robot was put on the ground and supported by sliders in both the x and y directions. To avoid damage to the robot, only the proposed controller was used in this verification. The control results obtained are plotted in Figure 6. In this test, the external forces reacting from the environment increased significantly, from 10 N to 390 N. The data presented in Figure 6(a) nevertheless imply that the controller still provided acceptable control accuracy: [−0.35; 0.68] deg (∼4.08%) and [−1.5; 1.1] deg (∼4.1%) for the hip and knee joints.

Figure 6.

Experimental results of the GARB controllers for the multiple-joint test in case of large external disturbance. (a) Control errors of the GARB controllers. (b) Control inputs generated by the GARB controllers. (c) Measurement of ground reaction forces. (d) Gain learning of the proposed mechanisms.

As demonstrated in Figure 6(b), in this case the system used more energy than in the previous one to execute the fast-tracking control under critical conditions. As presented in Figure 6(d), the control gains were also automatically raised to higher values to deal with the large disturbances for the smallest possible control error. Hence, the strong robustness and fast adaptability of the proposed method are confirmed by this investigation.

4.2.2.3 Verification with fast-variation external disturbances

In this case study, the transient behaviors of the designed controller were carefully validated using fast-varying external disturbances. The robot was still controlled to conduct the same squatting work. Harder testing conditions were constituted by two consecutive distinct phases within one working cycle: a ground-contact phase and a ground-release phase. Figure 7(c) shows the ground-reaction forces measured during the test. The nature of the external disturbance in this case differed from those in the previous cases: fast variation of the reaction forces could make the system unstable. The designed control system, however, again showed concrete robustness and impressive adaptation in real-time control.

Figure 7.

Experimental results of the GARB controllers for the multiple-joint test in case of fast-variation external disturbance. (a) Control errors of the GARB controllers. (b) Control inputs generated by the GARB controllers. (c) Measurement of ground reaction forces. (d) Gain learning of the proposed mechanisms.

As presented in Figure 7(a), the closed-loop system provided good performance: [−0.22; 0.76] deg (∼4.56%) for the hip joint and [−0.8; 0.7] deg (∼2.2%) for the knee joint. The control energy and control parameters varied to adapt properly to the new working conditions. Figure 7(d) shows that new ranges of the control gains were found by the proposed algorithm, and Figure 7(b) presents the required energy for the new test.

4.2.3 Additional statistical notes

The RMS values of the control errors, control signals ($u$), and ground-reaction forces for the hip and knee joints across the complex validation process are noted in Table 3. The data imply that the GARB controller achieved good control performance with the preset learning rates in the high-speed task under different working conditions. The learning mechanism and robust control technique generated proper power for each test case to realize the control objective effectively. Some snapshots of the robot movement in the last experiment are shown in Figure 8.

                                                  HIP                     KNEE
Testing condition                      Force (N)  Error (deg)  u (%)      Error (deg)  u (%)
Small external disturbance             4.1        0.168        3.149      0.327        2.932
Large external disturbance             229.8      0.405        6.4653     0.782        9.7454
Fast-variation external disturbance    89.4       0.267        3.888      0.496        4.491

Table 3.

Performance comparison of the GARB controllers in the multiple-joint tests.

Figure 8.

Snapshots of the leg motion in the large external disturbance test.


5. Discussion

So far, development in many humanoid robots has mainly focused on building complicated high-level control structures, while in the low-level framework simple controllers, such as PID or SMC, are normally employed to realize the given commands [28, 33]. Obviously, to ensure that the whole system operates as expected, auto-adjusting terms must be implemented at the upper-level framework to compensate for the imperfection of the simple low-level actions [34, 35]. With such cross-over interference between the control layers, it is hard to provide high accuracy and fast responses for the overall system [28, 36]. Indeed, in our real-time experiments with the legged robot, well-tuned PID controllers could be adopted for the squatting tests in a certain case; when the working condition changed, however, the control system could be damaged by the PID controller due to the degradation of control performance. Of course, precision controllers could be employed in the low-level layer, but their simplicity of implementation and low computational burden should be preserved. The gain-adaptive robust backstepping control algorithm has been developed to comply with these strict requirements.

As noted in the control signal Eq. (8), if one chooses $k_5 = 0$ and $a_1 = 0$, the nonlinear control method becomes an ordinary PID controller. In another sense, if the control gains $k_4$ and $k_5$ are removed, the control signal Eq. (8) takes the conventional form of the SMC scheme, in which $e_2$ is the sliding surface. Hence, users have various options in adopting the designed controller, which can easily be switched to basic control options [6, 8, 28, 35].
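The PID reduction can be checked numerically: with $k_5 = 0$ and $a_1 = 0$, Eq. (8) (under the signs reconstructed in Section 3.1) collapses to a PID law on the noise-free error with mapped gains $K_d = k_2/a_2$, $K_p = (k_3 + k_4 + (k_2 - k_1)k_1)/a_2$, and $K_i = k_4 k_1/a_2$. A small sketch, assuming those reconstructed signs:

```python
import numpy as np

def backstepping_u(x1, x2, x1d, x1d_dot, int_e1, k1, k2, k3, k4, a2):
    """Eq. (8) with k5 = 0 and a1 = 0; int_e1 is the running integral of e1."""
    e1 = x1 - x1d
    e2 = x2 - (x1d_dot - k1 * e1)
    return (-k1 * (x2 - x1d_dot) - (k2 - k1) * e2
            - (k3 + k4) * e1 - k4 * k1 * int_e1) / a2

def pid_u(x1, x2, x1d, x1d_dot, int_e1, Kp, Ki, Kd):
    """Ordinary PID law acting on e1 and its (noise-free) derivative."""
    e1 = x1 - x1d
    return -(Kp * e1 + Ki * int_e1 + Kd * (x2 - x1d_dot))

# mapped PID gains, using the nominal hip values from Section 4.2
k1, k2, k3, k4, a2 = 1.5, 72.5, 2000.0, 50.0, 12.25
Kd = k2 / a2
Kp = (k3 + k4 + (k2 - k1) * k1) / a2
Ki = k4 * k1 / a2

state = (0.3, -0.1, 0.2, 0.05, 0.01)   # arbitrary x1, x2, x1d, x1d_dot, int_e1
assert np.isclose(backstepping_u(*state, k1, k2, k3, k4, a2),
                  pid_u(*state, Kp, Ki, Kd))
```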

Note also that the input gain constant ($a_2$) can be selected as an arbitrary positive constant, while the nominal dynamical constant ($a_1$) can be zero or any bounded value. Their deviations can be absorbed into the lumped disturbance ($d$) or the extended disturbance ($h$). One possible way to determine these terms is to use the model-based identification method presented in previous works [27, 30, 31].

Compared with other intelligent gain-learning algorithms, such as neural-network or fuzzy-logic engines, the low computational burden and fast response are noteworthy advantages [9, 10, 11, 37, 38]. Moreover, in some cases one does not need the nominal dynamics ($a_1 = 0$), and the overall design of the proposed control method then becomes a model-free controller.

The experimental results confirmed the outperformance of the gain-learning controller over other robust adaptive nonlinear controllers, such as ARCESO and RISE [13, 30], thanks to its high-degree-of-freedom learning mechanism. Furthermore, the designed controller improves on the former controller [27], increasing real-time applicability by removing third-order time-derivative terms from the control signal.

From the above analyses, the flexibility of the designed controller in terms of working efficiency and user implementation is intuitively observed. Its feasibility in mobile robots has also been confirmed by intensive experiments.


6. Summary

This chapter presents a gain-adaptive robust position-tracking controller for low-level subsystems of large robotic systems. The mathematical model of the system dynamics was reviewed to provide necessary information for the controller design. To realize the tracking control objective, a robust control signal based on the backstepping scheme was adopted. In fact, this design is a nonlinear extension of ordinary PID controller or conventional sliding mode controller. New adaptation laws were developed to automatically tune the control gains for different working conditions. The learning mechanism was activated by various forms of the control error and deactivated by the relaxation functions.

Stability of the overall system was concretely maintained by proper Lyapunov-based constraints. Extensive real-time experiments were conducted to verify the performance of the proposed controller. The results achieved confirmed the advantages of the proposed controller in robustness, adaptation, high accuracy, and fast response. Depending on the usage purpose of the user, the controller can be simplified into a gain-learning PID controller or an adaptive robust sliding-mode controller.

Appendix A: Proof of Lemma 1

Let us define the following new disturbance:

$$h = -\ddot{x}_{1d} + \dot{\upsilon} + k_2\upsilon + d \qquad (A.1)$$

Also synthesize a new state variable and lumped term as follows:

$$\varphi = -\int_0^t\big(k_4\varepsilon + k_5\,\mathrm{sgn}(e_1)\big)d\tau + h,\qquad \varepsilon = \upsilon + e_2 \qquad (A.2)$$

By noting Eqs. (3), (4), (8), and (A.2), the following dynamics are obtained:

$$\dot{e}_1 = \varepsilon - k_1e_1,\qquad \dot{\varepsilon} = -k_3e_1 - (k_2 - k_1)\varepsilon + \varphi \qquad (A.3)$$

The following positive function is studied:

$$V_{10} = 0.5k_4k_3e_1^2 + 0.5k_4\varepsilon^2 + 0.5\varphi^2 + \int_0^t\big(k_5\,\mathrm{sgn}(e_1) - \dot{h}\big)\dot{\varepsilon}\,d\tau + V_{100} \qquad (A.4)$$

where $V_{100}$ is a positive constant selected as:

$$V_{100} = \frac{\big(k_5 + \Delta(\dot{h})\big)^2}{2k_4} + \big(k_5 + \Delta(\dot{h})\big)\big|\varepsilon(0)\big| \qquad (A.5)$$

Here, $\Delta(\bullet) = \max|\bullet|$ denotes the maximum absolute value of the function $(\bullet)$.

The positivity of the function V10 can be proven by applying integral inequalities and the condition (A.5).

The time derivative V̇10 is simplified using combinations of Eqs. (A.2) and (A.3), as follows:

V̇10 = k4k3e1(ε − k1e1) + k4εε̇ + (k5sgn(e1) − ḣ)ε̇ + [ε̇ + (k2 − k1)ε + k3e1](−k4ε − k5sgn(e1) + ḣ)
  ≤ −k3k4k1(|e1| − (k5 + Δḣ)/(2k4k1))² + k3(k5 + Δḣ)²/(4k4k1) − (k2 − k1)k4(|ε| − (k5 + Δḣ)/(2k4))² + (k2 − k1)(k5 + Δḣ)²/(4k4). (A.6)

Let us define the following positive constants:

∂e = (k5 + Δḣ)/(2k4k1) + √{[k3(k5 + Δḣ)²/(4k4k1) + (k2 − k1)(k5 + Δḣ)²/(4k4)]/(k3k4k1)},
∂ε = (k5 + Δḣ)/(2k4) + √{[k3(k5 + Δḣ)²/(4k4k1) + (k2 − k1)(k5 + Δḣ)²/(4k4)]/((k2 − k1)k4)}.

By noting Assumption 2, the terms Δḣ, ∂e, and ∂ε are bounded. If |e1| > ∂e and/or |ε| > ∂ε, then V̇10 is negative, which implies that e1 and ε are bounded [19, 29]. Therefore, Lemma 1 is proven. ■

A new Lyapunov function is investigated:

V11 = V10 + P1(t) (B.1)

where P1(t) is a positive function defined as:

P1(t) = P10 + ∫0t (k5sgn(e1) − ḣ)[(k2 − k1)ε + k3e1] dτ,  P10 = 2(k2 − k1)Δe(k5 + Δḣ). (B.2)

The positivity of the function P1(t) is proven in Appendix C.

The time derivative of the Lyapunov function, using Eqs. (A.6) and (B.2), is:

V̇11 = −k3k4k1e1² − k4(k2 − k1)ε² + Ṗ1(t) − (k5sgn(e1) − ḣ)[(k2 − k1)ε + k3e1] = −k3k4k1e1² − k4(k2 − k1)ε². (B.3)

From Eqs. (2), (9)–(10), (A.6), (B.1)–(B.3), and Assumptions 1 and 2, we have [e1 ε]T ∈ L2. By recalling (A.4), ė1 is bounded, and

ε̈ = −(k2 − k1)ε̇ + k1k3e1 + ḣ − (k3 + k4)ε − k5sgn(e1). (B.4)

It implies that ε̈ is bounded, and hence ε̇ is uniformly continuous. Therefore, by using Barbalat's lemma [39], Theorem 1 is proven. ■

The function P1(t) expressed in Eq. (B.2) can be expanded using the error dynamics Eq. (A.3) and integral inequalities as follows:

P1(t) ≥ P10 + ∫0t (k5sgn(e1) − ḣ)(k3 + k1k2 − k1²)e1 dτ − (k2 − k1)∫0t ḣė1 dτ + k5(k2 − k1)(|e1(t)| − |e1(0)|). (C.1)

By applying the integration procedures of previous works [8] and the comparison inequality, we have:

P1(t) ≥ P10 + ∫0t (k5 − Δḣ)(k3 + k1k2 − k1²)|e1| dτ − (k2 − k1)(Δḣ + k5)(Δe + |e1(0)|). (C.2)

The proof is completed by noting Lemma 1, the conditions Eqs. (10) and (B.2), the definition Eq. (A.1), and Assumptions 1 and 2.

By applying the control input Eq. (11) to the dynamics Eq. (2), the closed-loop system is:

ė1 = ε − k1e1,  ε̇ = −(k2 − k1)ε − (k3 − sat(k1)k̇1)e1 + φ. (D.1)

A new positive function is studied:

V20 = 0.5φ² + ∫0t (k5sgn(e1) − ḣ)ε̇ dτ + ∫0t k4(k3 − sat(k1)k̇1)e1ė1 dτ + ∫0t k4εε̇ dτ + V200 (D.2)

where V200 is a positive constant selected as:

V200 = 0.5(k5max + Δḣ)²/k4min + (k5max + Δḣ)|ε(0)| + 0.5k4maxε(0)² + 0.5k4max(k3max + Δk1 + Δk̇1)e1(0)². (D.3)

The argument used in the proof of Lemma 1 can be reused to show the positivity of V20 based on Eq. (D.3). The time derivative of the new function is:

V̇20 = (k5sgn(e1) − ḣ)ε̇ + k4(k3 − sat(k1)k̇1)e1(ε − k1e1) + k4εε̇ + [ε̇ + (k2 − k1)ε + (k3 − sat(k1)k̇1)e1](−k4ε − k5sgn(e1) + ḣ)
  ≤ −(k3 − sat(k1)k̇1)|e1|(k4k1|e1| − k5 − Δḣ) − (k2 − k1)|ε|(k4|ε| − k5max − Δḣ). (D.4)

By employing the same discussion as in the proof of Lemma 1 under Assumption 2, Lemma 2 is proven. ■

Let us consider the following Lyapunov function:

V2 = 0.5[k̄3k̄4e1² + k̄4ε² + 2P2(t) + φ² + σ2⁻¹k̄4k̃2² + σ3⁻¹k̄4k̃3² + σ4⁻¹k̃4² + σ5⁻¹k̃5²] (E.1)

where P2(t) is a positive function chosen as follows:

P2(t) = P20 + ∫0t (k̄5sgn(e1) − ḣ)φ dτ (E.2)

The positivity of the function P2(t) can be proven using arguments similar to those presented in Appendix C under the following conditions:

P20 = 2(k̄5 + Δḣ)(Δė + k2maxΔe),  (k3 + (k2 − k1)k1 − sat(k1)k̇1)min > 0,  k̄5 > Δḣ. (E.3)

Substituting Eqs. (D.1) and (13) into the time derivative of the new Lyapunov function leads to:

V̇2 = −k̄3k̄4k1e1² − (k̄2 − k1)k̄4ε² − k̄4σ1(e1ε)² − η2σ2⁻¹k̄4k̃2sat(k̃2) − η3σ3⁻¹k̄4k̃3sat(k̃3) − η4σ4⁻¹k̃4sat(k̃4) − η5σ5⁻¹k̃5sat(k̃5). (E.4)

Theorem 2 is proven by noting Eqs. (2), (D.1), (E.1), (E.4), Assumptions 1 and 2, and the discussions in the proof of Theorem 1.■
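The sat(·) terms appearing in the proof act as relaxation/projection elements that keep each learned gain inside an admissible interval. The following sketch shows one generic bounded gain-update step of this kind; the function form, rates, and bounds are hypothetical, not the chapter's exact adaptation law:

```python
def sat(v, limit=1.0):
    # Linear saturation bounding the relaxation contribution
    return max(-limit, min(limit, v))

def update_gain(k, error_term, k_min=0.1, k_max=10.0,
                sigma=5.0, eta=0.1, dt=1e-3):
    """One Euler step of a bounded gain-adaptation law (illustrative form):
    growth driven by the control error, relaxation via -eta*sat(k)."""
    k += (sigma * abs(error_term) - eta * sat(k, k_max)) * dt
    return max(k_min, min(k_max, k))    # hard projection onto [k_min, k_max]

k = 1.0
for e in [0.5, 0.4, 0.2, 0.05, 0.0]:   # a decaying error sequence
    k = update_gain(k, e)
```

Whatever the error sequence, the projection guarantees the gain stays in [k_min, k_max], which is the property the sat(·) terms secure in (E.4).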

The desired angles of the leg joints (hip x1dh and knee x1dk) can be calculated from the position of the foot (the end-effector) using the following inverse kinematics:

x1dh = atan2(Px, Py) + arccos[(Px² + Py² + l1² − l2²)/(2l1√(Px² + Py²))],
x1dk = atan2(Px − l1sin(x1dh), Py − l1cos(x1dh)) − x1dh. (E.5)

where l1 = 0.21 m and l2 = 0.295 m are the link lengths of the robot (thigh and shank), respectively. Px and Py are the end-effector position of the robot foot with respect to the robot coordinate frame attached at the hip joint, as sketched in Figure 2(b). The feasible working range of the hip joint was selected to be 0 to +80 deg.
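The inverse kinematics above can be implemented and verified by a forward-kinematics round trip. The sketch below assumes the hip-frame convention implied by the equation, in which joint angles are measured from the downward axis of the hip frame (hence atan2 takes Px before Py), and takes the knee-flexion branch of the arccos solution:

```python
import math

L1, L2 = 0.21, 0.295   # thigh and shank lengths from the chapter [m]

def leg_ik(px, py):
    """Hip and knee angles from the foot position, in the form of Eq. (E.5);
    angles measured from the y axis of the hip frame (assumed convention)."""
    r2 = px * px + py * py
    hip = math.atan2(px, py) + math.acos(
        (r2 + L1**2 - L2**2) / (2.0 * L1 * math.sqrt(r2)))
    knee = math.atan2(px - L1 * math.sin(hip),
                      py - L1 * math.cos(hip)) - hip
    return hip, knee

def leg_fk(hip, knee):
    # Forward kinematics in the same convention, used only to verify the IK
    px = L1 * math.sin(hip) + L2 * math.sin(hip + knee)
    py = L1 * math.cos(hip) + L2 * math.cos(hip + knee)
    return px, py

hip, knee = leg_ik(*leg_fk(0.6, -0.9))   # round-trip a reachable pose
```

The chosen arccos branch recovers poses with knee flexion (knee angle in (−π, 0)), which matches the anatomical bending direction of the leg.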

References

1. S. Seok, A. Wang, M. Y. Chuah, D. J. Hyun, J. Lee, D. M. Otten, F. H. Lang, and S. Kim, "Design principles for energy-efficient legged locomotion and implementation on the MIT Cheetah robot," IEEE/ASME Trans. Mechatronics, vol. 20, no. 3, pp. 1117–1129, 2015.
2. D. Das, N. Kumaresan, V. Nayanar, K. Navin Sam, and N. Ammasai Gounden, "Development of BLDC motor-based elevator system suitable for DC microgrid," IEEE/ASME Trans. Mechatronics, vol. 21, no. 3, pp. 1552–1560, 2016.
3. W. S. Huang, C. W. Liu, P. L. Hsu, and S. S. Yeh, "Precision control and compensation of servomotors and machine tools via the disturbance observer," IEEE Trans. Ind. Electron., vol. 57, no. 1, pp. 420–429, 2010.
4. L. Lu, B. Yao, Q. F. Wang, and Z. Chen, "Adaptive robust control of linear motors with dynamic friction compensation using modified LuGre model," Automatica, vol. 45, no. 12, pp. 2890–2896, 2009.
5. Y. Yang, Y. Wang, and P. Jia, "Adaptive robust control with extended disturbance observer for motion control of DC motors," Electronics Letters, vol. 51, no. 22, pp. 1761–1763, 2015.
6. P. Rocco, "Stability of PID control for industrial robot arms," IEEE Trans. Robot. Autom., vol. 12, no. 4, pp. 606–614, 1996.
7. S. Skoczowski, S. Domek, K. Pietrusewicz, and B. Broel-Plater, "A method for improving the robustness of PID control," IEEE Trans. Ind. Electron., vol. 52, no. 6, pp. 1669–1676, 2005.
8. V. Mummadi, "Design of robust digital PID controller for H-bridge soft-switching boost converter," IEEE Trans. Ind. Electron., vol. 58, no. 7, pp. 2883–2897, 2011.
9. R. J. Wai, J. D. Lee, and K. L. Chuang, "Real-time PID control strategy for maglev transportation system via particle swarm optimization," IEEE Trans. Ind. Electron., vol. 58, no. 2, pp. 629–646, 2011.
10. J. L. Meza, V. Santibanez, R. Soto, and M. A. Llama, "Fuzzy self-tuning PID semiglobal regulator for robot manipulators," IEEE Trans. Ind. Electron., vol. 59, no. 6, pp. 2709–2717, 2012.
11. M. J. Prabu, P. Poongodi, and K. Premkumar, "Fuzzy supervised online coactive neuro-fuzzy inference system-based rotor position control of brushless DC motor," IET Power Electronics, vol. 9, no. 11, pp. 2229–2239, 2016.
12. M. Nikkhah, H. Ashrafiuon, and F. Fahimi, "Robust control of underactuated bipeds using sliding modes," Robotica, vol. 25, no. 3, pp. 367–374, 2007.
13. B. Xian, D. M. Dawson, M. S. de Queiroz, and J. Chen, "A continuous asymptotic tracking control strategy for uncertain nonlinear systems," IEEE Trans. Autom. Control, vol. 49, no. 7, pp. 1206–1211, 2004.
14. Z. Liu, L. Wang, C. Philip Chen, X. Zeng, Y. Zhang, and Y. Wang, "Energy-efficiency-based gait control system architecture and algorithm for biped robots," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 42, no. 6, pp. 926–933, 2012.
15. J. Yao, Z. Jiao, and D. Ma, "Adaptive robust control of DC motors with extended state observer," IEEE Trans. Ind. Electron., vol. 61, no. 7, pp. 3630–3637, 2014.
16. S. Kim and J. B. Bae, "Force-mode control of rotary series elastic actuators in a lower extremity exoskeleton using model-inverse time delay control (MiTDC)," IEEE/ASME Trans. Mechatronics, vol. 22, pp. 1392–1400, 2017.
17. W. Zhang, J. B. Bae, and M. Tomizuka, "Modified preview control for a wireless tracking control system with packet loss," IEEE/ASME Trans. Mechatronics, vol. 20, pp. 299–307, 2015.
18. J. Baek, M. Jin, and S. Han, "A new adaptive sliding mode control scheme for application to robot manipulators," IEEE Trans. Ind. Electron., vol. 63, no. 6, pp. 3628–3637, 2016.
19. Y. Shtessel, M. Taleb, and F. Plestan, "A novel adaptive-gain supertwisting sliding mode controller: Methodology and application," Automatica, vol. 48, pp. 759–769, 2012.
20. C. Xia, G. Jiang, W. Chen, and T. Shi, "Switch-gain adaptation current control for brushless DC motors," IEEE Trans. Ind. Electron., vol. 63, no. 4, pp. 2044–2052, 2016.
21. M. Jin, J. Lee, and N. G. Tsagarakis, "Model-free robust adaptive control of humanoid robots with flexible joints," IEEE Trans. Ind. Electron., vol. 64, no. 2, pp. 1706–1715, 2017.
22. D. X. Ba and J. B. Bae, "A nonlinear sliding mode controller of serial robot manipulators with two-level gain-learning ability," IEEE Access, 2020.
23. D. X. Ba, "A fast adaptive time-delay-estimation sliding mode control for robot manipulators," Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 6, pp. 904–911, 2020.
24. M. H. Khooban, T. Niknam, F. Blaabjerg, and M. Dehghani, "Free chattering hybrid sliding mode control for a class of nonlinear systems: electric vehicles as a case study," IET Sci. Meas. Technol., vol. 10, no. 7, pp. 776–785, 2016.
25. H. Khooban, N. Vafamand, T. Niknam, T. Dragicevic, and F. Blaabjerg, "Model-predictive control based on Takagi–Sugeno fuzzy model for electrical vehicles delayed model," IET Electr. Power Appl., vol. 11, no. 5, pp. 918–934, 2017.
26. J. Y. Lee, M. Jin, and P. H. Chang, "Variable PID gain tuning method using backstepping control with time-delay estimation and nonlinear damping," IEEE Trans. Ind. Electron., vol. 62, no. 12, pp. 6975–6985, 2015.
27. D. X. Ba, H. Yeom, J. Kim, and J. B. Bae, "Gain-adaptive robust backstepping position control of a BLDC motor system," IEEE/ASME Trans. Mechatronics, vol. 23, no. 5, pp. 2470–2481, 2018.
28. B. Yao, F. Bu, J. Reedy, and G. T. C. Chiu, "Adaptive robust motion control of single-rod hydraulic actuators: theory and experiments," IEEE/ASME Trans. Mechatronics, vol. 5, no. 1, pp. 79–91, 2000.
29. K. J. Astrom and B. Wittenmark, Adaptive Control. New York: Addison-Wesley, 1995.
30. J. Yao, Z. Jiao, D. Ma, and L. Yan, "High-accuracy tracking control of hydraulic rotary actuators with modeling uncertainties," IEEE/ASME Trans. Mechatronics, vol. 19, no. 2, pp. 633–641, 2014.
31. K. Guo, J. Wei, J. Fang, R. Feng, and X. Wang, "Position tracking control of electro-hydraulic single-rod actuator based on an extended disturbance observer," Mechatronics, vol. 27, pp. 47–56, 2015.
32. D. X. Ba, K. K. Ahn, D. Q. Truong, and H. G. Park, "Integrated model-based backstepping control for an electro-hydraulic system," Int. J. Prec. Eng. Manufacturing, vol. 17, no. 5, pp. 1–13, 2016.
33. M. Raibert, Legged Robots That Balance. Cambridge, USA: MIT Press, 1986.
34. D. N. Nenchev, A. Konno, and T. Tsujita, Humanoid Robots: Modeling and Control. Cambridge, USA: Butterworth-Heinemann, 2019.
35. J. Lima, J. Goncalves, P. Costa, and A. P. Moreira, "Humanoid low-level control development based on a realistic simulation," Int. J. Humanoid Robotics, vol. 7, no. 4, pp. 587–607, 2010.
36. V. M. F. Santos and F. M. T. Silva, "Design and low-level control of a humanoid robot using a distributed architecture approach," J. Vibration and Control, vol. 12, no. 12, pp. 1431–1456, 2006.
37. F. L. Moro and L. Sentis, "Whole-body control of humanoid robots," in Humanoid Robotics: A Reference, Springer, Dordrecht, 2019.
38. J. Ahn, J. Lee, and L. Sentis, "Data-efficient and safe learning for humanoid locomotion aided by a dynamic balancing model," IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 4376–4383, 2020.
39. J. Wang, W. Qin, and L. Sun, "Terrain adaptive walking of biped neuromuscular virtual human using deep reinforcement learning," IEEE Access, vol. 7, pp. 92465–92475, 2019.
