Open access peer-reviewed chapter

# Optimal Control of Fuzzy Systems with Application to Rigid Body Attitude Control

Written By

Yonmook Park

Submitted: September 21st, 2018 Reviewed: October 22nd, 2018 Published: November 26th, 2018

DOI: 10.5772/intechopen.82181

From the Edited Volume

## Aerospace Engineering

Edited by George Dekoulis

## Abstract

In this chapter, the author presents a theoretical result on the optimal control of nonlinear dynamic systems. The author formulates the optimal control problem for nonlinear dynamic systems and shows that it can be solved by combining the dynamic programming approach with the inverse optimal approach. The author employs the dynamic programming approach to derive the Hamilton-Jacobi-Bellman (H-J-B) equation associated with the optimal control problem and then presents an analytic way to solve the H-J-B equation with the help of the inverse optimal approach. Based on this theoretical result, the author establishes an optimal control design for TS-type fuzzy systems that guarantees the global asymptotic stability of an equilibrium point and optimality with respect to a cost function and that provides good convergence rates of state trajectories to an equilibrium point. The author considers the three-axis attitude stabilization problem of a rigid body to illustrate the optimal control design method for TS-type fuzzy systems, designs the optimal three-axis attitude stabilizing control law for a rigid body based on this method, and analyzes its control performance by numerical simulations.

### Keywords

• intelligent system design
• nonlinear optimal control
• fuzzy systems
• rigid body motion
• rigid body attitude control

## 1. Introduction

Since Lotfi Aliasker Zadeh introduced the fuzzy set [1] and fuzzy logic [2], these two concepts have been successfully applied to a wide variety of fields (e.g., see the references in [3]), and their usefulness has also been verified in many industries.

Fuzzy logic is basically a multi-valued logic that allows intermediate values to be defined between conventional evaluations such as yes or no, true or false, and big or small. The main characteristic of the fuzzy concept is that it can handle complicated phenomena of systems with the help of fuzzy and linguistic modeling. Thus, it is possible to design knowledge-oriented intelligent systems by using fuzzy logic in the design of systems, which is a very important characteristic of fuzzy logic.

Most physical systems are inherently nonlinear dynamic systems. Conventional control design approaches use different approximation methods, such as linear, piecewise linear, and lookup table approximations, to handle these nonlinearities. The linear approximation method linearizes a nonlinear dynamic system about a single equilibrium point and provides a linearized design-model for it. The controller is then designed for the linearized design-model to satisfy a given control objective. The linear approximation method is relatively simple, but it tends to limit control performance. In addition, the controller designed by the linear approximation method is valid only under the assumption that the states of the nonlinear dynamic system operate closely around the considered equilibrium point, which is the basic limitation of control design by the linear approximation method [4]. The piecewise linear approximation method requires the design of several linear controllers. Thus, it works better than the linear approximation method, although it is tedious to implement. The lookup table approximation method can improve control performance, but it is difficult to debug and tune. Moreover, in complex systems with multiple inputs, the lookup table approximation method may be very costly to implement because large memories are required to store the lookup table.
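The linear approximation method described above can be made concrete with a small sketch. The pendulum-like system below is an illustrative assumption, not a model from this chapter; the design-model is the Jacobian of the dynamics evaluated at the equilibrium $x = 0$, and it is trustworthy only near that point.

```python
import numpy as np

# A minimal sketch of the linear approximation method: linearize a
# nonlinear system xdot1 = x2, xdot2 = -sin(x1) - 0.1*x2 (an illustrative
# example) about the equilibrium x = 0 by numerical differentiation.
def f(x):
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])

def jacobian(f, x0, eps=1e-6):
    """Central-difference Jacobian of f at x0."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (f(x0 + e) - f(x0 - e)) / (2 * eps)
    return J

A = jacobian(f, np.zeros(2))   # linearized design-model about x = 0
print(A)                       # ~[[0, 1], [-1, -0.1]]
```

A controller designed from `A` alone inherits the limitation noted above: it is only guaranteed to behave well while the state stays near the chosen equilibrium.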

In many applications, fuzzy control, which is based on fuzzy logic, provides better control performance than the linear, piecewise linear, or lookup table approximation methods because it provides an efficient framework to incorporate linguistic fuzzy information from human experts. The so-called intelligent control has emerged in the control community to use such expert information. Representative tools for implementing intelligent control include the artificial neural network, the genetic algorithm, and fuzzy logic. Note that the artificial neural network is a learning-based device whose design is motivated by the function of human brains and components thereof, and that the genetic algorithm, inspired by Charles Robert Darwin's theory of evolution, works by stochastically selecting and recombining many candidate solutions to a problem. Usually, expert information is represented by fuzzy terms like small, large, fast, and so on. Therefore, fuzzy control is more adequate for implementing intelligent control with expert information than the artificial neural network and the genetic algorithm.

Fuzzy controllers can perform nonlinear control actions because fuzzy logic systems are capable of uniformly approximating any nonlinear function over a compact set to any degree of accuracy [5]. Thus, if the parameters of a fuzzy controller are carefully chosen, it is possible to design a fuzzy controller for nonlinear dynamic systems. Moreover, fuzzy controllers are known to be robust with respect to disturbances of systems because their operations are determined by fuzzy rules, and they are customizable because it is easy to understand and modify their rules.

As the behaviors of dynamic systems become complex, the need for fuzzy schemes increases, and the linguistic analysis suggested by Zadeh [6] allows us to analyze the qualitative behaviors of systems with fuzzy algorithms. Motivated by the study of [6], Mamdani and Assilian [7] proposed the configuration of a fuzzy system with a fuzzifier and a defuzzifier and applied fuzzy logic to the control of a dynamic plant; since then, fuzzy control has attracted a great deal of interest among researchers. Note that a fuzzy system is a system that has a direct relationship with fuzzy concepts (e.g., fuzzy sets and linguistic variables) and fuzzy logic [5]. Also, note that the function of the fuzzifier is to map crisp points to fuzzy sets, and the function of the defuzzifier is to map fuzzy sets to crisp points [5].

Subsequently, many successful applications of fuzzy control have increased the need for theoretical analysis concerning the stability and performance of fuzzy control systems. Above all, the stability of fuzzy control systems has often been required to be verified with theoretical arguments, and there have been several significant studies on designing stabilizing controllers for fuzzy systems with rigorous stability proofs [8, 9, 10, 11, 12, 13, 14], in which the so-called TS-type fuzzy model proposed by Takagi and Sugeno [15] has mainly been used to represent fuzzy systems. Specifically, in [8, 9, 10, 11, 12, 13, 14], the authors made TS-type fuzzy models of dynamic systems with the IF-THEN fuzzy implication and fuzzy inference, designed stabilizing control laws for the TS-type fuzzy models, and then applied these control laws to the dynamic systems. In fuzzy control design, the knowledge of an expert can be applied to the control design for dynamic systems through a linguistic expression such as the IF-THEN fuzzy implication.

Concerning the performance of fuzzy control systems, optimality has often been considered an important issue in the design of fuzzy control systems, and the conventional linear optimal control method [16] has been used to design the optimal control law for TS-type fuzzy systems. On the optimality issue of fuzzy control systems, Wang [17] developed the optimal fuzzy controller for linear time-invariant systems by utilizing the Pontryagin minimum principle. However, the design method of [17] has limited practical implications because it may not be a good choice to use a fuzzy controller designed for linear systems directly as the controller for nonlinear fuzzy systems. Based on the linear quadratic optimal control theory [16], Wu and Lin [18] presented a design method of the optimal controllers for both continuous- and discrete-time fuzzy systems. The main strategy of [18] is to seek the optimal controller that minimizes a given performance index by solving the matrix Riccati differential equations or the steady-state algebraic Riccati equations. Later, Wu and Lin [19] addressed a quadratic optimal control problem for continuous-time fuzzy systems, which were represented by the so-called linear-like synthetic matrix form, and developed a design scheme of the optimal fuzzy controller under finite or infinite horizon by utilizing the calculus-of-variations method. The study of [19] is also based on solving a steady-state algebraic Riccati-like equation, but it utilizes an efficient algorithm to design the global optimal fuzzy controller. Park et al. [20] addressed the optimal control problem for continuous-time TS-type fuzzy systems. However, the design method of [20] has less redundancy in the choice of feedback gains and requires undesirably high feedback gains, which are its main drawbacks in the design of the optimal controller for fuzzy systems.
Kim and Rhee [21] presented a response surface methodology and applied it to the design of an optimal fuzzy controller for a plant. Wu and Lin [22] proposed a way to design a global optimal discrete-time fuzzy controller to control and stabilize a nonlinear discrete-time TS-type fuzzy system with finite or infinite horizon time. Chen and Liu [23] studied the problem of guaranteed cost control for TS-type fuzzy systems with a time-varying delayed state. Mirzaei et al. [24] proposed an optimized fuzzy controller for antilock braking systems to improve vehicle control during sudden braking. Lin, Wang, and Lee [25] investigated a geometric property of the time-optimal control problem in the TS-type fuzzy model via Lie algebra and found the time-optimal controller to be of the bang-bang type with a finite number of switchings by applying the maximum principle. Mostefai et al. [26] presented a fuzzy observer-based optimal control design for the compensation of nonlinear friction in a robot joint structure based on a fuzzy local modeling technique. Zhu [27] studied a fuzzy optimal control problem for a multistage fuzzy system to optimize the expected value of a fuzzy objective function subject to a multistage fuzzy system. Esfahani and Sichani [28] studied the problem of optimal fuzzy H∞-tracking control design for nonlinear systems that are represented by using the TS fuzzy modeling scheme. Through these studies, the optimal control of fuzzy systems has progressed considerably.

Since TS-type fuzzy systems essentially have a nonlinear nature due to the IF-THEN fuzzy implication and fuzzy inference, we see that a nonlinear optimal control method is suitable for designing an optimal control law for TS-type fuzzy systems. In addition, when we design an optimal control law for TS-type fuzzy systems, it is often required that the control design should allow us to control the convergence rates of state trajectories to an equilibrium point. The decay rate of the closed-loop dynamics may be used to achieve this requirement in the control design. These observations motivate the author to study an optimal control of TS-type fuzzy systems that can provide good convergence rates of state trajectories to an equilibrium point in this chapter.

More specifically, in this chapter, the author presents a theoretical result on the optimal control of nonlinear dynamic systems. In this result, the author formulates the optimal control problem for nonlinear dynamic systems and solves it by utilizing the dynamic programming approach [29] and the inverse optimal approach [30]. Note that Kalman [31] first proposed the inverse optimal approach to establish the gain and phase margins of a linear quadratic regulator. Also, note that the conventional direct optimal approach is based on seeking a stabilizing controller that minimizes a given performance index. In contrast, the inverse optimal approach avoids the task of numerically solving the Hamilton-Jacobi-Bellman (H-J-B) equation; it finds a stabilizing controller first and then shows its optimality with respect to an a posteriori determined performance index. In this chapter, the author employs the dynamic programming approach to derive the H-J-B equation associated with the optimal control problem for nonlinear dynamic systems. Then, the author presents an analytic way to solve the H-J-B equation with the help of the inverse optimal approach, by which the author establishes a systematic approach for designing the optimal controller for nonlinear dynamic systems. The resulting optimal controller takes the form of a state feedback $L_gV$ controller and has a relaxed control gain structure.

Then, based on the theoretical result presented in this chapter, the author establishes an optimal control design for TS-type fuzzy systems that guarantees the global asymptotic stability of an equilibrium point and the optimality with respect to a cost function, which incorporates a penalty on the state and control input vectors, and provides good convergence rates of state trajectories to an equilibrium point. The problem appearing in this optimal control design for TS-type fuzzy systems is given as a linear matrix inequality (LMI)-based problem. From the results, the optimal controller can be found by a simple controller design procedure, which is essentially given as LMIs. The control design involving LMIs is particularly useful in practice because LMIs can be efficiently solved by recently developed interior-point methods (e.g., [32, 33]). One of the algorithms belonging to the interior-point methods can be found in [34], and an implementation of the algorithm in [34] is included in the LMI Control Toolbox of MATLAB [35], which will be used as the solver for the LMI-based problem appearing in the optimal control design.
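The LMI machinery referenced above can be illustrated with the simplest stability LMI. The example system, and the idea of certifying feasibility by eigenvalue checks rather than by the MATLAB LMI Control Toolbox mentioned in the text, are assumptions made here for illustration.

```python
import numpy as np

# A minimal sketch (not the chapter's LMIs): the classic Lyapunov LMI
# A^T P + P A < 0, P > 0 certifies stability of xdot = A x.  Here we solve
# the Lyapunov *equation* A^T P + P A = -Q (Q > 0) in closed form via
# Kronecker products, then verify the strict inequalities by eigenvalue
# checks -- the same feasibility an interior-point LMI solver certifies.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # a stable example system (assumption)
Q = np.eye(2)

n = A.shape[0]
# vec(A^T P + P A) = (kron(A^T, I) + kron(I, A^T)) vec(P)
M = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)
P = 0.5 * (P + P.T)            # symmetrize against round-off

assert np.all(np.linalg.eigvalsh(P) > 0)                 # P > 0
assert np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)   # LMI satisfied
print("P =", P)
```

An interior-point solver does essentially this search automatically over all symmetric `P`, which is why LMI-based conditions such as those in Section 3 are computationally attractive.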

Note that the optimal controller for a nonlinear dynamic system is in general designed for a linearized design-model, which is obtained by conventional linear approximation techniques, because it is sometimes difficult to solve the nonlinear optimal control problem associated with a nonlinear dynamic system. Clearly, the optimal controller designed for a linearized design-model guarantees its optimality only at an equilibrium point used to design a linearized design-model. Compared with a design-model obtained by conventional linear approximation techniques, the TS-type fuzzy model can be seen as a good design-model for approximating a nonlinear dynamic system because it retains the essential features of a nonlinear dynamic system with a linguistic description in terms of fuzzy IF-THEN rules, by which the TS-type fuzzy system is valid over a range of operating points within fuzzy sets. Thus, when we design the optimal controller for a nonlinear dynamic system, it is adequate to approximate a nonlinear dynamic system with a TS-type fuzzy model and to use a TS-type fuzzy model rather than a linearized design-model. In addition, one can expect that the optimal controller designed for a TS-type fuzzy system has a wider range of optimality than the optimal controller designed for a linearized design-model because the former guarantees the optimality over a range of operating points within fuzzy sets.

As a control design example in this chapter, the three-axis attitude stabilization problem of a rigid body is considered to illustrate the optimal control design method for TS-type fuzzy systems presented in this chapter. The attitude motion of a rigid body is basically represented by a set of two equations [36]: (i) Euler's dynamic equation, which describes the time derivative of the angular velocity vector, and (ii) the kinematic equation, which relates the time derivatives of the orientation angles to the angular velocity vector. For representing the orientation angles of a rigid body, there exist several kinematic parameterizations, such as the Euler angles, the Gibbs vector, the Cayley-Rodrigues parameters, and the modified Rodrigues parameters [37, 38], which are singular three-dimensional parameter representations, and the quaternion (also called the Euler parameters), which is a nonsingular four-dimensional parameter representation. Note that three-dimensional parameter representations exhibit singular orientations because the Jacobian matrix is singular for some orientations. On the other hand, the quaternion consists of four parameters subject to the unit length constraint and is a globally nonsingular parameter for describing the body orientation [36].

In this chapter, the equations of motion of a rigid body including dynamics and kinematics are considered. The kinematic equation of a rigid body considered in this chapter is described by the quaternion. The equations of motion of a rigid body considered in this chapter describe a system in cascade interconnection, and the backstepping method of [39] can be efficiently utilized to apply the optimal control design method presented in this chapter to the three-axis attitude stabilization problem of a rigid body.
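The quaternion kinematics mentioned above can be sketched numerically. The skew-symmetric form $\dot{q} = \tfrac12\Omega(\omega)q$ used below follows one common sign convention (conventions vary across references), and the body rates are an illustrative assumption; the point is that skew-symmetry of $\Omega(\omega)$ preserves the unit-length constraint exactly.

```python
import numpy as np

# A hedged sketch of the quaternion kinematic equation qdot = 0.5*Omega(w)*q
# with scalar part q4 last.  Because Omega(w) is skew-symmetric, |q| = 1 is
# invariant under the continuous dynamics; we verify it numerically.
def omega_matrix(w):
    wx, wy, wz = w
    return np.array([[0.0,  wz, -wy,  wx],
                     [-wz, 0.0,  wx,  wy],
                     [ wy, -wx, 0.0,  wz],
                     [-wx, -wy, -wz, 0.0]])

def propagate(q, w, dt, steps):
    """Integrate qdot = 0.5 * Omega(w) q with RK4 steps (constant w)."""
    def f(q):
        return 0.5 * omega_matrix(w) @ q
    for _ in range(steps):
        k1 = f(q); k2 = f(q + 0.5*dt*k1)
        k3 = f(q + 0.5*dt*k2); k4 = f(q + dt*k3)
        q = q + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return q

q0 = np.array([0.0, 0.0, 0.0, 1.0])   # identity orientation
w = np.array([0.1, -0.2, 0.3])        # constant body rates (assumption)
q1 = propagate(q0, w, 1e-3, 1000)
print(np.linalg.norm(q1))             # stays ~1 (unit-length constraint)
```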

The optimal attitude stabilization problem of a rigid body has been addressed by several researchers [40, 41, 42]. Also, there have been many studies which consider performance indices such as time and/or fuel in the formulation of the optimal attitude stabilization problem of a rigid body [43, 44, 45, 46, 47, 48, 49], in which the optimal regulation problems for angular velocity subsystem of a rigid body and for some quadratic performance indexes have mainly been addressed.

The optimal attitude control problem of the complete attitude motion of a rigid body, which includes dynamics as well as kinematics, has been investigated by many researchers: Carrington and Junkins [50] used a polynomial expansion approach to approximate the solution of H-J-B equation. Rotea et al. [51] showed that Lyapunov functions including a logarithmic term in the kinematic parameters result in linear controllers with a finite quadratic performance index. For the general quadratic performance index, they also presented sufficient conditions which guarantee the existence of a linear and suboptimal stabilizing controller. Tsiotras [52] derived a new class of globally asymptotically stabilizing feedback control laws as well as a family of exponentially stabilizing optimal control laws for the complete attitude motion of a nonsymmetric rigid body. Later, Tsiotras [53] presented a partial solution to the optimal regulation problem of a spinning rigid body by using the natural decomposition of the complete attitude motion into its kinematics and dynamics systems and the inherent passivity properties of these two systems. Bharadwaj et al. [54] presented a couple of new globally stabilizing attitude control laws based on minimal and exponential coordinates. Park and Tahk [55] have considered the problem of three-axis robust attitude stabilization of a rigid body with inertia uncertainties, and they have presented a class of new robust attitude control laws having relaxed feedback gain structures. Later, Park and Tahk [56] have extended their robust attitude control scheme of [55] to the optimal attitude control scheme by using the Hamilton-Jacobi theory of [57]. Also, Park et al. [58] have first addressed a game-theoretic approach to robust and optimal attitude stabilization of a rigid body with external disturbances.

Note that, in the case of robot arm control, since the arms or hand fingers can be viewed as actuators which maneuver the attitude of the held object, the results on the attitude control of a rigid body can be applied to the attitude control of a rigid payload held by the robot arm [59]. With this relation, there have been many studies concerning the attitude control problem of a rigid body, and some remarkable studies can be referred to [60, 61].

The rest of this chapter is organized as follows. In Section 2, the author presents a theoretical result on the optimal control of nonlinear dynamic systems. In Section 3, the author introduces TS-type fuzzy systems and presents an optimal control design for TS-type fuzzy systems. In Section 4, the author considers the three-axis attitude stabilization problem of a rigid body as a control design example and illustrates the effectiveness of the optimal control design for TS-type fuzzy systems. In Section 5, the author gives concluding remarks.

## 2. Nonlinear optimal control

Consider the nonlinear dynamic system given by

$$\dot{x}(t) = f(x(t)) + g(x(t))u(t), \tag{1}$$

where $f: \mathbb{R}^n \to \mathbb{R}^n$ and $g: \mathbb{R}^n \to \mathbb{R}^{n \times p}$ are smooth vector- and matrix-valued functions, respectively, and $f(0) = 0$. Moreover, $x(t) \in \mathbb{R}^n$ and $u(t) \in \mathbb{R}^p$ are the state and control input vectors, respectively. Throughout this chapter, we use the definitions

$$L_fV(x(t)) \triangleq \frac{\partial V(x(t))}{\partial x(t)}\, f(x(t)) \quad \text{and} \quad L_gV(x(t)) \triangleq \frac{\partial V(x(t))}{\partial x(t)}\, g(x(t)),$$

where $V: \mathbb{R}^n \to \mathbb{R}$ is a scalar function [4].

In general, we can find the optimal control law for the nonlinear dynamic system in (1) by numerically solving the corresponding Hamilton-Jacobi-Bellman (H-J-B) equation. However, this is a difficult task, and thus we need a simple and efficient method for finding the optimal control law for the nonlinear dynamic system in (1). In the following, the author presents a theory that provides the optimal control law for the system in (1) while circumventing the task of numerically solving the H-J-B equation.

Proposition 1 [62]: For the nonlinear dynamic system in (1), suppose that there exists a radially unbounded and positive definite function $V(x(t))$ with continuous first partial derivatives with respect to $x(t)$ such that the feedback control $u(t) = -\alpha R^{-1}(L_gV(x(t)))^T$, where $\alpha \ge 1$ is a constant and $R = R^T > 0$ is a positive definite matrix, achieves global asymptotic stability of the equilibrium point $x(t) = 0$ for the system in (1), that is,

$$\dot{V}(x(t))\Big|_{u(t) = -\alpha R^{-1}(L_gV(x(t)))^T} = L_fV(x(t)) - \alpha L_gV(x(t))R^{-1}(L_gV(x(t)))^T < 0 \tag{2}$$

for all $x(t) \neq 0$. Then, the control law

$$u^*(t) = -2\alpha R^{-1}(L_gV(x(t)))^T \tag{3}$$

is the optimal, globally asymptotically stabilizing control law for the system in (1) that minimizes the cost function

$$P = \int_0^\infty \left[\, l(x(t)) + u(t)^T R\, u(t) \,\right] dt, \tag{4}$$

where $l(x(t))$ is given by

$$l(x(t)) = -4\alpha^2 \left[\, L_fV(x(t)) - \alpha L_gV(x(t))R^{-1}(L_gV(x(t)))^T \,\right] + 4\alpha^2(\alpha - 1)\, L_gV(x(t))R^{-1}(L_gV(x(t)))^T > 0 \tag{5}$$

for all $x(t) \neq 0$ and $\alpha \ge 1$.

Proof: First, the following condition holds by (2):

$$\dot{V}(x(t))\Big|_{u(t) = u^*(t)} = L_fV(x(t)) + \tfrac{1}{2}L_gV(x(t))u^*(t) + \tfrac{1}{2}L_gV(x(t))u^*(t) = \dot{V}(x(t))\Big|_{u(t) = -\alpha R^{-1}(L_gV(x(t)))^T} - \alpha L_gV(x(t))R^{-1}(L_gV(x(t)))^T < -\alpha L_gV(x(t))R^{-1}(L_gV(x(t)))^T \le 0 \tag{6}$$

for all $x(t) \neq 0$ and $\alpha \ge 1$. Since $V(x(t))$ is a radially unbounded and positive definite function, the condition in (6) guarantees that the control law $u^*(t)$ in (3) is a globally asymptotically stabilizing control law for the system in (1) by Lyapunov's stability theorem [4].

Next, define $W(x(t)) \triangleq 4\alpha^2 V(x(t))$ and consider the following H-J-B equation associated with the optimal control problem for the system in (1):

$$\min_{u(t)} \left[\, l(x(t)) + u(t)^T R\, u(t) + L_fW(x(t)) + L_gW(x(t))u(t) \,\right] = 0, \quad W(0) = 0. \tag{7}$$

Substituting $u^*(t)$ in (3) and $l(x(t))$ in (5) into the H-J-B equation in (7) yields

$$\begin{aligned} &\min_{u(t) = u^*(t)} \left[\, l(x(t)) + u(t)^T R\, u(t) + L_fW(x(t)) + L_gW(x(t))u(t) \,\right] \\ &= \min_{u(t) = u^*(t)} \Big[ -L_fW(x(t)) + \tfrac{1}{4\alpha}\, L_gW(x(t))R^{-1}(L_gW(x(t)))^T + \tfrac{\alpha - 1}{4\alpha^2}\, L_gW(x(t))R^{-1}(L_gW(x(t)))^T \\ &\qquad + \tfrac{1}{4\alpha^2}\, L_gW(x(t))R^{-1}(L_gW(x(t)))^T + L_fW(x(t)) - \tfrac{1}{2\alpha}\, L_gW(x(t))R^{-1}(L_gW(x(t)))^T \Big] = 0, \quad W(0) = 0, \end{aligned} \tag{8}$$

which implies that $u^*(t)$ in (3) and $l(x(t))$ in (5) are solutions of the H-J-B equation in (7). In addition, by (2) and the property $R = R^T > 0$, $l(x(t))$ in (5) satisfies $l(x(t)) > 0$ for all $x(t) \neq 0$ and $\alpha \ge 1$. This completes the proof.

In proposition 1, we see that the globally asymptotically stabilizing control law in (3) for the nonlinear dynamic system in (1) can be found without numerically solving the H-J-B equation in (7), and that the control law in (3) is optimal with respect to the cost function in (4). The key point of this work is that the penalty on the state vector, $l(x(t))$, is determined a posteriori rather than chosen a priori. Sepulchre, Janković, and Kokotović [30] proposed this approach, which is referred to as the inverse optimal approach.

It is remarkable that, as shown in (4) and (5), we can adjust the penalty on the control input vector and the penalty on the state vector, $l(x(t))$, through the weight matrix $R$. Indeed, a weight matrix $R$ with small values decreases the penalty on the control input vector and increases the penalty on the state vector. In this case, we obtain a cheap optimal control law that requires a large control effort and stabilizes the nonlinear dynamic system in (1) within a short period of time; the term "cheap" refers to the fact that the control effort is viewed as being cheap. On the other hand, a weight matrix $R$ with large values increases the penalty on the control input vector and decreases the penalty on the state vector. In this case, we obtain an expensive optimal control law that requires a small control effort and stabilizes the system in (1) within a long period of time; the term "expensive" refers to the fact that the control effort is viewed as being expensive.

As shown in (3), the constant $\alpha$ of the optimal control law $u^*(t)$ plays the role of a feedback gain for $u^*(t)$ even though the weight matrix $R$ is predetermined to impose the penalties on the control input and state vectors. It is therefore also remarkable that the optimal control law $u^*(t)$ in (3) has a relaxed feedback gain structure.
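The role of $\alpha$ as a tunable gain can be seen on a deliberately simple scalar example, chosen here for concreteness (it is not a system from this chapter): take $f(x) = 0$, $g(x) = 1$, $V(x) = x^2$, and $R = 1$, so $L_gV = 2x$, the premise in (2) holds for any $\alpha \ge 1$, and the law in (3) reduces to $u = -4\alpha x$.

```python
import numpy as np

# Toy illustration of the control law of Eq. (3): for xdot = u with
# V(x) = x^2 and R = 1, Eq. (3) gives u = -2*alpha*R^{-1}*(2x) = -4*alpha*x.
def simulate(alpha, x0=1.0, dt=1e-3, steps=2000):
    x = x0
    for _ in range(steps):
        u = -4.0 * alpha * x      # Eq. (3) for this toy system
        x = x + dt * u            # forward-Euler step of xdot = u
    return x

x_slow = simulate(alpha=1.0)
x_fast = simulate(alpha=2.0)
# Larger alpha (a "cheaper" control) drives the state to 0 faster.
print(abs(x_fast) < abs(x_slow) < 1.0)
```

Increasing $\alpha$ here sharpens convergence without touching $R$, which is exactly the relaxed-gain property noted above.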

Now, in practical applications, any control law for dynamic systems must provide good convergence rates of state trajectories to an equilibrium point. To achieve this requirement in the design of a control law, the decay rate can be used as a design factor that dominates the convergence rates of state trajectories to an equilibrium point. Note that the decay rate of a system is defined as the largest constant $\gamma > 0$ such that $\lim_{t \to \infty} e^{\gamma t}\|x(t)\|_2 = 0$ holds for all trajectories of the system, where $\|x(t)\|_2$ denotes the Euclidean norm of $x(t)$. By the definition of the decay rate, the convergence rate of the system trajectory to the equilibrium point can be controlled; also, note that stability of a dynamic system corresponds to a positive decay rate. In the following, the author presents a theory about the decay rate.

Proposition 2 [32]: If there exist a positive definite function $V(x(t))$ and a constant $\beta > 0$ such that

$$\dot{V}(x(t)) < -2\beta V(x(t)) \tag{9}$$

for all trajectories of a system, then the decay rate of the system is at least $\beta$.

Proof: If there exist a positive definite function $V(x(t))$ and a constant $\beta > 0$ such that $\dot{V}(x(t)) < -2\beta V(x(t))$ for all trajectories of the system, then we obtain $V(x(t)) < V(x(0))e^{-2\beta t}$. With the positive definite function $V(x(t)) \triangleq x(t)^TX^{-1}x(t)$, where $X = X^T > 0$ is a positive definite matrix, $V(x(t)) < V(x(0))e^{-2\beta t}$ can be represented by $\|X^{-1/2}x(t)\|_2^2 < \|X^{-1/2}e^{-\beta t}x(0)\|_2^2$. Then, by the Rayleigh-Ritz theorem [63], we can derive the following:

$$\|x(t)\|_2^2 \le \frac{\|X^{-1/2}x(t)\|_2^2}{\lambda_{\min}(X^{-1})} < \frac{\|X^{-1/2}e^{-\beta t}x(0)\|_2^2}{\lambda_{\min}(X^{-1})} \le \frac{\|X^{-1/2}\|_2^2}{\lambda_{\min}(X^{-1})}\, e^{-2\beta t}\,\|x(0)\|_2^2 = \frac{\lambda_{\max}(X^{-1})}{\lambda_{\min}(X^{-1})}\, \|x(0)\|_2^2\, e^{-2\beta t},$$

where $\lambda_{\min}(X^{-1})$ and $\lambda_{\max}(X^{-1})$ denote the minimum and maximum eigenvalues of $X^{-1}$, respectively. Thus, we obtain $\|x(t)\|_2 < \sqrt{\lambda_{\max}(X^{-1})/\lambda_{\min}(X^{-1})}\, \|x(0)\|_2\, e^{-\beta t}$, and therefore the decay rate of the system is at least $\beta$. This completes the proof.
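Proposition 2 can be checked numerically on an illustrative linear system (an assumption made here, not an example from the chapter): for $\dot{x} = Ax$ with $A = \mathrm{diag}(-1, -3)$ and $V(x) = x^Tx$ (i.e., $X = I$), we have $\dot{V} = 2x^TAx \le -2V$, so the decay rate is at least $\beta = 1$ and the proof's bound applies with $\lambda_{\max} = \lambda_{\min} = 1$.

```python
import numpy as np

# Numerical check of Proposition 2 on xdot = A x, A = diag(-1, -3):
# with V(x) = x^T x, Vdot <= -2 V, so ||x(t)|| <= ||x(0)|| e^{-t}.
A = np.diag([-1.0, -3.0])
beta = 1.0
x0 = np.array([1.0, 1.0])

for t in (0.5, 1.0, 2.0):
    x_t = np.exp(np.diag(A) * t) * x0        # exact solution (diagonal A)
    bound = np.linalg.norm(x0) * np.exp(-beta * t)
    print(t, np.linalg.norm(x_t) <= bound)   # bound holds at each sample
```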

## 3. Optimal control of fuzzy systems

### 3.1. Fuzzy systems

The TS-type fuzzy model and the TS-type fuzzy control law for a system are given by the following IF-THEN fuzzy implications, respectively [15]:

• Plant rule i for a system:

IF $x_1(t)$ is $M_{i1}$ and $\cdots$ and $x_n(t)$ is $M_{in}$, THEN

$$\dot{x}(t) = A_ix(t) + B_iu(t), \quad i = 1,\dots,r. \tag{10}$$

• Control law rule i for a system:

IF $x_1(t)$ is $M_{i1}$ and $\cdots$ and $x_n(t)$ is $M_{in}$, THEN

$$u(t) = K_ix(t), \quad i = 1,\dots,r. \tag{11}$$

In (10) and (11), $x_i(t)$, $i = 1,\dots,n$, and $M_{ij}$, $i = 1,\dots,r$, $j = 1,\dots,n$, are state variables and fuzzy sets, respectively, and $r$ is the number of IF-THEN rules. Moreover, $A_i \in \mathbb{R}^{n \times n}$, $B_i \in \mathbb{R}^{n \times p}$, and $K_i \in \mathbb{R}^{p \times n}$.

If we follow the usual fuzzy inference method, we can represent the state equations of the TS-type fuzzy model and the TS-type fuzzy control law for a system as follows, respectively [9]:

$$\dot{x}(t) = \sum_{i=1}^{r} h_i(x(t))A_ix(t) + \sum_{i=1}^{r} h_i(x(t))B_iu(t) \tag{12}$$

and

$$u(t) = \sum_{i=1}^{r} h_i(x(t))K_ix(t), \tag{13}$$

where $h_i$, $i = 1,\dots,r$, are the normalized weight functions given by

$$h_i(x(t)) \triangleq \frac{\prod_{j=1}^{n} M_{ij}(x_j(t))}{\sum_{k=1}^{r}\prod_{j=1}^{n} M_{kj}(x_j(t))}, \quad i = 1,\dots,r, \tag{14}$$

and $M_{ij}(x_j(t))$ denotes the grade of membership of $x_j(t)$ in the fuzzy set $M_{ij}$. Here, $h_i$, $i = 1,\dots,r$, in (14) satisfy $h_i(x(t)) \ge 0$, $i = 1,\dots,r$, and $\sum_{i=1}^{r} h_i(x(t)) = 1$ for all $x(t) \in \mathbb{R}^n$.
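The fuzzy inference of (12)-(14) can be sketched for a scalar state with $r = 2$ rules; the triangular-style membership functions and local models below are illustrative assumptions, not the chapter's example.

```python
import numpy as np

# Sketch of TS-type fuzzy blending, Eqs. (12)-(14), with r = 2 rules.
def mu_neg(x):   # grade of membership in "x is negative-ish" (assumption)
    return np.clip(1.0 - (x + 1.0) / 2.0, 0.0, 1.0)

def mu_pos(x):   # grade of membership in "x is positive-ish" (assumption)
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

A = [np.array([[-1.0]]), np.array([[2.0]])]   # rule consequents A_i
B = [np.array([[1.0]]),  np.array([[1.0]])]   # rule consequents B_i

def blended_dynamics(x, u):
    """xdot = sum_i h_i(x) (A_i x + B_i u) with normalized weights h_i."""
    w = np.array([mu_neg(x[0]), mu_pos(x[0])])
    h = w / w.sum()               # Eq. (14): h_i >= 0 and sum h_i = 1
    return sum(h[i] * (A[i] @ x + B[i] @ u) for i in range(2))

x = np.array([0.5]); u = np.array([0.0])
print(blended_dynamics(x, u))     # convex blend of the two local models
```

Because the weights are convex, the fuzzy system interpolates smoothly between the local linear models, which is what makes the rule-wise LMI conditions of Section 3.3 sufficient for the blended dynamics.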

### 3.2. Problem definition

The author defines the two problems considered in this section. The first problem is to design a control law for the TS-type fuzzy system in (12) that achieves the global asymptotic stability of the equilibrium point $x(t) = 0$ and minimizes the cost function

$$P = \int_0^\infty \left[\, l(x(t)) + u(t)^T R\, u(t) \,\right] dt,$$

where $l(x(t)) > 0$ for all $x(t) \neq 0$ and $R = R^T > 0$ is a positive definite matrix. The second problem is to ensure that the decay rate of the closed-loop dynamics for the TS-type fuzzy system in (12) is at least $\beta$, where $\beta > 0$ is a constant.

### 3.3. Optimal control design

We can use the results of propositions 1 and 2 to solve the problems defined in Section 3.2. Specifically, we define $f(x(t)) \triangleq \sum_{i=1}^{r} h_i(x(t))A_ix(t)$ and $g(x(t)) \triangleq \sum_{i=1}^{r} h_i(x(t))B_i$ for the TS-type fuzzy system in (12). Then, if there exists a radially unbounded and positive definite function $V(x(t))$ such that the conditions in (2) and (9) hold, an optimal control law in the form of (3) for the TS-type fuzzy system in (12) can be found. Thus, we have to construct $V(x(t))$ whose time derivative satisfies the conditions in (2) and (9). The author uses a quadratic Lyapunov function $V(x(t)) = x(t)^TX^{-1}x(t)$, where $X = X^T > 0$ is a positive definite matrix, as the candidate for such a $V(x(t))$ and presents the following theory:

Theorem 1 [62]: For the TS-type fuzzy system in (12), suppose that there exists a function $V(x(t)) = x(t)^TX^{-1}x(t)$ with $X \in \mathbb{R}^{n \times n}$ such that

$$X = X^T > 0, \tag{15}$$

$$\begin{aligned} &A_iX + XA_i^T - 4\alpha B_iR^{-1}B_i^T < 0, \quad i = 1,\dots,r, \\ &\tfrac{1}{2}\left( A_iX + XA_i^T + A_jX + XA_j^T \right) - 2\alpha\left( B_iR^{-1}B_j^T + B_jR^{-1}B_i^T \right) < 0, \quad 1 \le i < j \le r, \end{aligned} \tag{16}$$

and

$$\begin{aligned} &A_iX + XA_i^T - 8\alpha B_iR^{-1}B_i^T + 2\beta X < 0, \quad i = 1,\dots,r, \\ &\tfrac{1}{2}\left( A_iX + XA_i^T + A_jX + XA_j^T \right) - 4\alpha\left( B_iR^{-1}B_j^T + B_jR^{-1}B_i^T \right) + 2\beta X < 0, \quad 1 \le i < j \le r, \end{aligned} \tag{17}$$

where $R = R^T > 0$ is a positive definite matrix and $\alpha \ge 1$ and $\beta > 0$ are constants. Then the control law

$$u^*(t) = \sum_{i=1}^{r} h_i(x(t))K_ix(t), \tag{18}$$

where $K_i \triangleq -4\alpha R^{-1}B_i^TX^{-1}$, $i = 1,\dots,r$, is the optimal, globally asymptotically stabilizing control law for the TS-type fuzzy system in (12) that minimizes the cost function

$$P = \int_0^\infty \left[\, l(x(t)) + u(t)^T R\, u(t) \,\right] dt, \tag{19}$$

where $l(x(t))$ is given by

$$\begin{aligned} l(x(t)) = {}& -4\alpha^2\, x(t)^T \Bigg[\, \sum_{i=1}^{r} h_i^2(x(t))\left( G_{ii}^TX^{-1} + X^{-1}G_{ii} \right) \\ &\; + 2\sum_{i<j}^{r} h_i(x(t))h_j(x(t))\left( \left( \frac{G_{ij}+G_{ji}}{2} \right)^{\!T}X^{-1} + X^{-1}\frac{G_{ij}+G_{ji}}{2} \right) \Bigg] x(t) \\ &+ 4\alpha^2(\alpha - 1)\, x(t)^T \Bigg[\, 4\sum_{i=1}^{r}\sum_{j=1}^{r} h_i(x(t))h_j(x(t))\, X^{-1}B_iR^{-1}B_j^TX^{-1} \Bigg] x(t) > 0 \end{aligned} \tag{20}$$

for all $x(t) \neq 0$ and $\alpha \ge 1$, where $G_{ij} \triangleq A_i - 2\alpha B_iR^{-1}B_j^TX^{-1}$, and the decay rate of the closed-loop dynamics for the TS-type fuzzy system in (12) with the control law $u^*(t)$ in (18) is at least $\beta$, where $\beta > 0$ is a constant.

Proof: Suppose that V x t = x t T X 1 x t , where X = X T > 0 is a positive definite matrix. Then, from proposition 1, the control law u t in (3) with g x t i = 1 r h i x t B i and V x t = x t T X 1 x t becomes

$$u(t) = -2\alpha R^{-1} \left( L_g V(x(t)) \right)^T = -2\alpha R^{-1} \left( 2 x(t)^T X^{-1} \sum_{i=1}^{r} h_i(x(t)) B_i \right)^T = -\sum_{i=1}^{r} h_i(x(t)) \, 4\alpha R^{-1} B_i^T X^{-1} x(t) \triangleq \sum_{i=1}^{r} h_i(x(t)) K_i x(t), \tag{21}$$

where $K_i \triangleq -4\alpha R^{-1} B_i^T X^{-1}$, $i = 1, \ldots, r$, and $\alpha \ge 1$ is a constant.

Now, from propositions 1 and 2, assume that there exists a positive definite matrix $X = X^T > 0$ such that

$$\begin{aligned}
\dot V(x(t)) \Big|_{u(t) = \frac{1}{2} u(t)} &= L_f V(x(t)) + \tfrac{1}{2} L_g V(x(t)) u(t) \\
&= x(t)^T \Bigg[ \sum_{i=1}^{r} h_i^2(x(t)) \left( G_{ii}^T X^{-1} + X^{-1} G_{ii} \right) + 2\sum_{i<j}^{r} h_i(x(t)) h_j(x(t)) \left( \left( \frac{G_{ij}+G_{ji}}{2} \right)^T X^{-1} + X^{-1} \frac{G_{ij}+G_{ji}}{2} \right) \Bigg] x(t) < 0
\end{aligned} \tag{22}$$

for all $x(t) \ne 0$, where $G_{ij} \triangleq A_i - \tfrac{1}{2} B_i \left( 4\alpha R^{-1} B_j^T X^{-1} \right) = A_i + \tfrac{1}{2} B_i K_j$, and

$$\begin{aligned}
\dot V(x(t)) \Big|_{u(t) = u(t)} &= L_f V(x(t)) + L_g V(x(t)) u(t) \\
&= x(t)^T \Bigg[ \sum_{i=1}^{r} h_i^2(x(t)) \left( Q_{ii}^T X^{-1} + X^{-1} Q_{ii} \right) + 2\sum_{i<j}^{r} h_i(x(t)) h_j(x(t)) \left( \left( \frac{Q_{ij}+Q_{ji}}{2} \right)^T X^{-1} + X^{-1} \frac{Q_{ij}+Q_{ji}}{2} \right) \Bigg] x(t) < -2\beta x(t)^T X^{-1} x(t)
\end{aligned} \tag{23}$$

for all $x(t)$, where $Q_{ij} \triangleq A_i - B_i \left( 4\alpha R^{-1} B_j^T X^{-1} \right) = A_i + B_i K_j$ and $\beta > 0$ is a constant. Since the normalized weight functions $h_i$, $i = 1, \ldots, r$, in (22) and (23) satisfy $h_i(x(t)) h_j(x(t)) \ge 0$, $i = 1, \ldots, r$, $j = 1, \ldots, r$, and $\sum_{i=1}^{r} \sum_{j=1}^{r} h_i(x(t)) h_j(x(t)) = 1$ (i.e., $\sum_{i=1}^{r} \sum_{j=1}^{r} h_i(x(t)) h_j(x(t)) = \sum_{i=1}^{r} h_i^2(x(t)) + 2\sum_{i<j}^{r} h_i(x(t)) h_j(x(t)) = 1$) for all $x(t) \in \mathbb{R}^n$, sufficient conditions for satisfying (22) and (23) are

$$\begin{aligned}
&G_{ii}^T X^{-1} + X^{-1} G_{ii} < 0, \quad i = 1, \ldots, r, \\
&\left( \frac{G_{ij}+G_{ji}}{2} \right)^T X^{-1} + X^{-1} \frac{G_{ij}+G_{ji}}{2} < 0, \quad 1 \le i < j \le r
\end{aligned} \tag{24}$$

and

$$\begin{aligned}
&Q_{ii}^T X^{-1} + X^{-1} Q_{ii} + 2\beta X^{-1} < 0, \quad i = 1, \ldots, r, \\
&\left( \frac{Q_{ij}+Q_{ji}}{2} \right)^T X^{-1} + X^{-1} \frac{Q_{ij}+Q_{ji}}{2} + 2\beta X^{-1} < 0, \quad 1 \le i < j \le r,
\end{aligned} \tag{25}$$

respectively. If all the inequalities in (24) and (25) are pre- and post-multiplied by the positive definite matrix $X = X^T > 0$, the nonlinear conditions in (24) and (25) are transformed into the linear conditions in (16) and (17), respectively. Thus, by the results of propositions 1 and 2, the control law $u(t)$ in (21) with a positive definite matrix $X = X^T > 0$ satisfying the conditions in (16) and (17) is the optimal, globally asymptotically stabilizing control law for the TS-type fuzzy system in (12) that minimizes the cost function in (19). Here, $l(x(t))$ in (20) comes from (5) and satisfies $l(x(t)) > 0$ for all $x(t) \ne 0$ and $\alpha \ge 1$ by (22) and the property of $R = R^T > 0$, and the decay rate of the closed-loop dynamics for the TS-type fuzzy system in (12) with the control law $u(t)$ in (21) is at least $\beta > 0$. This completes the proof.

Now, consider the TS-type fuzzy system with a common input matrix, which is described by

$$\dot x(t) = \sum_{i=1}^{r} h_i(x(t)) A_i x(t) + B u(t). \tag{26}$$

Then, the author presents the following theorem:

Theorem 2 [62]: For the TS-type fuzzy system in (26), suppose that there exists a function $V(x(t)) = x(t)^T X^{-1} x(t)$ with $X \in \mathbb{R}^{n \times n}$ such that

$$X = X^T > 0, \tag{27}$$

$$A_i X + X A_i^T - 4\alpha B R^{-1} B^T < 0, \quad i = 1, \ldots, r, \tag{28}$$

and

$$A_i X + X A_i^T - 8\alpha B R^{-1} B^T + 2\beta X < 0, \quad i = 1, \ldots, r, \tag{29}$$

where $R = R^T > 0$ is a positive definite matrix and $\alpha \ge 1$ and $\beta > 0$ are constants. Then the control law

$$u(t) = K x(t), \tag{30}$$

where $K \triangleq -4\alpha R^{-1} B^T X^{-1}$, is the optimal, globally asymptotically stabilizing control law for the TS-type fuzzy system in (26) that minimizes the cost function

$$P = \int_0^{\infty} \left( l(x(t)) + u(t)^T R u(t) \right) dt, \tag{31}$$

where $l(x(t))$ is given by

$$l(x(t)) = -4\alpha^2 x(t)^T \left[ \sum_{i=1}^{r} h_i(x(t)) \left( A_i^T X^{-1} + X^{-1} A_i - 4\alpha X^{-1} B R^{-1} B^T X^{-1} \right) \right] x(t) + 4\alpha^2(\alpha - 1) x(t)^T \left[ 4 X^{-1} B R^{-1} B^T X^{-1} \right] x(t) > 0 \tag{32}$$

for all $x(t) \ne 0$ and $\alpha \ge 1$, and the decay rate of the closed-loop dynamics for the TS-type fuzzy system in (26) with the control law $u(t)$ in (30) is at least $\beta$, where $\beta > 0$ is a constant.

Proof: Suppose that $V(x(t)) = x(t)^T X^{-1} x(t)$, where $X = X^T > 0$ is a positive definite matrix. Then, from proposition 1, the control law $u(t)$ in (3) with $g(x(t)) = B$ and $V(x(t)) = x(t)^T X^{-1} x(t)$ becomes

$$u(t) = -2\alpha R^{-1} \left( L_g V(x(t)) \right)^T = -2\alpha R^{-1} \left( 2 x(t)^T X^{-1} B \right)^T = -4\alpha R^{-1} B^T X^{-1} x(t) \triangleq K x(t), \tag{33}$$

where $K \triangleq -4\alpha R^{-1} B^T X^{-1}$ and $\alpha \ge 1$ is a constant.

Now, from propositions 1 and 2, assume that there exists a positive definite matrix $X = X^T > 0$ such that

$$\dot V(x(t)) \Big|_{u(t) = \frac{1}{2} u(t)} = L_f V(x(t)) + \tfrac{1}{2} L_g V(x(t)) u(t) = x(t)^T \left[ \sum_{i=1}^{r} h_i(x(t)) \left( A_i^T X^{-1} + X^{-1} A_i - 4\alpha X^{-1} B R^{-1} B^T X^{-1} \right) \right] x(t) < 0 \tag{34}$$

for all $x(t) \ne 0$ and

$$\dot V(x(t)) \Big|_{u(t) = u(t)} = L_f V(x(t)) + L_g V(x(t)) u(t) = x(t)^T \left[ \sum_{i=1}^{r} h_i(x(t)) \left( A_i^T X^{-1} + X^{-1} A_i - 8\alpha X^{-1} B R^{-1} B^T X^{-1} \right) \right] x(t) < -2\beta x(t)^T X^{-1} x(t) \tag{35}$$

for all $x(t)$, where $\beta > 0$ is a constant. Since the normalized weight functions $h_i$, $i = 1, \ldots, r$, in (34) and (35) satisfy $h_i(x(t)) \ge 0$, $i = 1, \ldots, r$, and $\sum_{i=1}^{r} h_i(x(t)) = 1$ for all $x(t) \in \mathbb{R}^n$, sufficient conditions for satisfying (34) and (35) are

$$A_i^T X^{-1} + X^{-1} A_i - 4\alpha X^{-1} B R^{-1} B^T X^{-1} < 0, \quad i = 1, \ldots, r \tag{36}$$

and

$$A_i^T X^{-1} + X^{-1} A_i - 8\alpha X^{-1} B R^{-1} B^T X^{-1} + 2\beta X^{-1} < 0, \quad i = 1, \ldots, r, \tag{37}$$

respectively. If all the inequalities in (36) and (37) are pre- and post-multiplied by the positive definite matrix $X = X^T > 0$, the nonlinear conditions in (36) and (37) are transformed into the linear conditions in (28) and (29), respectively. Therefore, by the results of propositions 1 and 2, the control law $u(t)$ in (33) with a positive definite matrix $X = X^T > 0$ satisfying the conditions in (28) and (29) is the optimal, globally asymptotically stabilizing control law for the TS-type fuzzy system in (26) that minimizes the cost function in (31). Here, $l(x(t))$ in (32) comes from (5) and satisfies $l(x(t)) > 0$ for all $x(t) \ne 0$ and $\alpha \ge 1$ by (34) and the property of $R = R^T > 0$, and the decay rate of the closed-loop dynamics for the TS-type fuzzy system in (26) with the control law $u(t)$ in (33) is at least $\beta > 0$. This completes the proof.

Note that the problem appearing in Theorems 1 and 2 is to find a matrix $X \in \mathbb{R}^{n \times n}$ subject to linear constraints in the form of linear matrix inequalities (LMIs). Therefore, this is an LMI-based problem [32], which can be solved efficiently by the LMI Control Toolbox of MATLAB [35]. In this chapter, the author uses the LMI Control Toolbox of MATLAB [35] as the solver for the LMI-based problem in the optimal control design.
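The chapter solves these LMIs with MATLAB; as a minimal alternative sketch, note that in the single-model case $r = 1$ the LMI (28) is implied by an algebraic Riccati equation, which SciPy can solve. All numeric values below are hypothetical, and this is only an illustration of the structure of the problem, not the chapter's method.

```python
# Single-model (r = 1) sketch: with Y = X^{-1}, solving the ARE
#   A^T Y + Y A - Y B (R/(4 alpha))^{-1} B^T Y + Q = 0
# gives A^T Y + Y A - 4*alpha*Y B R^{-1} B^T Y = -Q < 0, and pre-/post-
# multiplying by X = Y^{-1} yields the strict version of the LMI (28).
import numpy as np
from scipy.linalg import solve_continuous_are

alpha = 2.0                                   # alpha >= 1, as in the theorems
A = np.array([[0.0, 1.0], [2.0, -1.0]])       # hypothetical unstable model
B = np.array([[0.0], [1.0]])
R = np.eye(1)
Q = np.eye(2)

Y = solve_continuous_are(A, B, Q, R / (4.0 * alpha))
X = np.linalg.inv(Y)                          # candidate LMI variable X = X^T > 0
K = -4.0 * alpha * np.linalg.inv(R) @ B.T @ Y # gain of the form in (30)
Acl = A + B @ K                               # closed-loop dynamics matrix
print(np.max(np.linalg.eigvals(Acl).real))    # negative, i.e., stable
```

For the full multi-model LMIs (27)-(29), a semidefinite-programming modeler would be used instead, since a single Riccati equation cannot encode several simultaneous inequalities.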

## 4. A control design example

As a control design example, the author considers the three-axis attitude stabilization problem of a rigid body and illustrates the effectiveness of the optimal control design for TS-type fuzzy systems presented in Section 3.

### 4.1. Rigid body model

First, the dynamic equation of the rotational motion of a rigid body is described as follows [36]:

$$\dot\omega(t) = J^{-1} \Omega(\omega(t)) J \omega(t) + J^{-1} u(t), \tag{38}$$

where $\omega(t) = \left[ \omega_1(t) \; \omega_2(t) \; \omega_3(t) \right]^T \in \mathbb{R}^3$ is the angular velocity vector of the body in the body-fixed frame, $u(t) = \left[ u_1(t) \; u_2(t) \; u_3(t) \right]^T \in \mathbb{R}^3$ is the control torque vector of the body, and $J \in \mathbb{R}^{3 \times 3}$ is the inertia matrix of the body, which satisfies $J = J^T > 0$. In addition, $\Omega(\omega(t)) \in \mathbb{R}^{3 \times 3}$ denotes a skew-symmetric matrix defined by

$$\Omega(\omega(t)) \triangleq \begin{bmatrix} 0 & \omega_3(t) & -\omega_2(t) \\ -\omega_3(t) & 0 & \omega_1(t) \\ \omega_2(t) & -\omega_1(t) & 0 \end{bmatrix}$$

and has the property of

$$\Omega(\omega(t))^T \omega(t) \equiv 0, \quad \forall \omega(t) \in \mathbb{R}^3. \tag{39}$$
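The skew-symmetric matrix and the property (39) are easy to check numerically; the sketch below uses a hypothetical angular velocity value.

```python
# Omega(w) as defined above: Omega(w) = -[w x], so Omega(w).T @ v = w x v,
# and the property (39) follows because the cross product of w with itself is zero.
import numpy as np

def Omega(w):
    return np.array([[ 0.0,   w[2], -w[1]],
                     [-w[2],  0.0,   w[0]],
                     [ w[1], -w[0],  0.0]])

w = np.array([0.3, -1.2, 0.7])   # hypothetical angular velocity
print(Omega(w).T @ w)            # ~ [0, 0, 0], i.e., property (39)
```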

Second, the kinematic equation of rotational motion of a rigid body described in terms of the quaternion is given as follows [36]:

$$\dot q(t) = \tfrac{1}{2} F(q(t)) \omega(t), \tag{40}$$

where $q(t) = \left[ q_1(t) \; q_2(t) \; q_3(t) \; q_4(t) \right]^T \triangleq \left[ q_1(t) \; q_v(t)^T \right]^T \in \mathbb{R}^4$ is the quaternion and $F(q(t)) : \mathbb{R}^4 \to \mathbb{R}^{4 \times 3}$ denotes the kinematics Jacobian matrix defined as

$$F(q(t)) \triangleq \begin{bmatrix} -q_v(t)^T \\ q_1(t) I_{3 \times 3} + \Omega(q_v(t))^T \end{bmatrix}, \tag{41}$$

where $I_{3 \times 3}$ denotes the $3 \times 3$ identity matrix.
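As a quick sanity check on (40) and (41), the following sketch (hypothetical quaternion and angular velocity values) builds $F(q)$ and verifies that the kinematics preserve the quaternion norm, since $q^T \dot q = 0$.

```python
# F(q): first row -q_v^T, then q1*I + Omega(q_v)^T, as in (41).
import numpy as np

def Omega(w):
    return np.array([[ 0.0,   w[2], -w[1]],
                     [-w[2],  0.0,   w[0]],
                     [ w[1], -w[0],  0.0]])

def F(q):
    q1, qv = q[0], q[1:]
    return np.vstack([-qv, q1 * np.eye(3) + Omega(qv).T])

q = np.array([0.8, 0.1, -0.3, 0.5])
q = q / np.linalg.norm(q)         # enforce the unit-length constraint
w = np.array([0.2, 0.4, -0.1])    # hypothetical angular velocity
qdot = 0.5 * F(q) @ w             # kinematic equation (40)
print(q @ qdot)                   # ~ 0, so ||q(t)||_2 stays constant
```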

With the notations of the Euler axis $\hat e \in \mathbb{R}^3$ and the Euler angle $\phi \in \mathbb{R}$, we define the quaternion by $q_1(t) \triangleq \cos(\phi/2)$ and $q_v(t) \triangleq \hat e \sin(\phi/2)$. The quaternion $q(t)$ is subject to the unit length constraint $\| q(t) \|_2 = 1$ for all $t \ge 0$ and is a kinematic parameter set that can represent the orientation of a body [36]. From the definition of the quaternion, we see that $q_1(t)$ satisfies $q_1(t) \ge 0$ for all $-\pi \,\text{rad} \le \phi \le \pi \,\text{rad}$, which describes all eigenaxis rotations [36]. Thus, we can write $q_1(t)$ as $q_1(t) = \sqrt{1 - \| q_v(t) \|_2^2}$ for all $-\pi \,\text{rad} \le \phi \le \pi \,\text{rad}$.

### 4.2. Optimal control design

First, it is observed that the two state equations given by (38) and (40) represent a system in cascade interconnection. That is, the angular velocity vector indirectly controls the kinematics system in (40). Thus, the angular velocity vector can be regarded as a virtual control input that stabilizes the kinematics system in (40). This observation gives the following theorem:

Theorem 3 [62]: Consider the kinematics system in (40) with $\omega(t)$ as the control input, and let the control law for the kinematics system in (40) be

$$\omega_v(t) = -k_1 q_v(t), \tag{42}$$

where $k_1 > 0$ is a constant. Then, $\omega_v(t)$ in (42) is a globally asymptotically stabilizing control law for the kinematics system in (40).

Proof: With the control law $\omega_v(t)$ in (42), the closed-loop system of the kinematics system in (40) becomes

$$\dot q(t) = -\tfrac{1}{2} k_1 F(q(t)) q_v(t) = -\tfrac{1}{2} k_1 \begin{bmatrix} -q_v(t)^T \\ q_1(t) I_{3 \times 3} + \Omega(q_v(t))^T \end{bmatrix} q_v(t). \tag{43}$$

Now, consider the Lyapunov function candidate $V(q(t)) = \left( q_1(t) - 1 \right)^2 + \| q_v(t) \|_2^2$. Taking the time derivative of $V(q(t))$ along a nonzero trajectory of the closed-loop system in (43) and using the property $\Omega(q_v(t)) q_v(t) \equiv 0$ for all $q_v(t) \in \mathbb{R}^3$ (equivalently, $\Omega(q_v(t))^T q_v(t) \equiv 0$, since $\Omega$ is skew-symmetric), the following condition holds:

$$\begin{aligned}
\dot V(q(t)) &= 2\left( q_1(t) - 1 \right) \dot q_1(t) + 2 q_v(t)^T \dot q_v(t) \\
&= 2 q_1(t) \cdot \tfrac{1}{2} k_1 q_v(t)^T q_v(t) - 2 \cdot \tfrac{1}{2} k_1 q_v(t)^T q_v(t) + 2 q_v(t)^T \left( -\tfrac{1}{2} k_1 q_1(t) q_v(t) - \tfrac{1}{2} k_1 \Omega(q_v(t))^T q_v(t) \right) \\
&= -k_1 q_v(t)^T q_v(t) < 0
\end{aligned}$$

for all $q_v(t) \ne 0$ and all $k_1 > 0$. Then, the global asymptotic stability of the closed-loop dynamics in (43) follows from the Barbashin-Krasovskii theorem [4]. This completes the proof.
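Theorem 3 can also be checked numerically. The sketch below (assumed initial quaternion and step size) integrates the closed-loop kinematics with forward Euler plus re-normalization and watches the Lyapunov function of the proof decrease toward zero.

```python
# Euler-integration sketch of q_dot = 0.5*F(q)*(-k1*q_v), with
# V(q) = (q1 - 1)^2 + ||q_v||^2 as the Lyapunov function of the proof.
import numpy as np

def Omega(w):
    return np.array([[ 0.0,   w[2], -w[1]],
                     [-w[2],  0.0,   w[0]],
                     [ w[1], -w[0],  0.0]])

def F(q):
    q1, qv = q[0], q[1:]
    return np.vstack([-qv, q1 * np.eye(3) + Omega(qv).T])

def V(q):
    return (q[0] - 1.0) ** 2 + q[1:] @ q[1:]

k1, dt = 0.2, 0.01
q = np.array([0.5, 0.5, 0.5, 0.5])       # hypothetical unit-norm initial attitude
history = [V(q)]
for _ in range(5000):                    # 50 s of simulated time
    qdot = 0.5 * F(q) @ (-k1 * q[1:])    # closed-loop kinematics
    q = q + dt * qdot
    q = q / np.linalg.norm(q)            # re-normalize against integration drift
    history.append(V(q))
print(history[0], history[-1])           # V decreases toward 0 as q -> (1, 0, 0, 0)
```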

Next, we have to stabilize the dynamics system in (38) while making $\omega(t)$ in (38) follow $\omega_v(t)$ in (42); this is a backstepping problem [39]. To solve this problem, the author defines the new variable $\delta(t) \triangleq \omega(t) - \omega_v(t) = \omega(t) + k_1 q_v(t)$. For convenience of notation, the author defines $x_1(t) \triangleq \delta_1(t)$, $x_2(t) \triangleq \delta_2(t)$, $x_3(t) \triangleq \delta_3(t)$, $x_4(t) \triangleq q_2(t)$, $x_5(t) \triangleq q_3(t)$, $x_6(t) \triangleq q_4(t)$, $x_\delta(t) \triangleq \left[ x_1(t) \; x_2(t) \; x_3(t) \right]^T$, and $x_{q_v}(t) \triangleq \left[ x_4(t) \; x_5(t) \; x_6(t) \right]^T$. Then, with $x(t) \triangleq \left[ x_\delta(t)^T \; x_{q_v}(t)^T \right]^T$ and $q_1(t) = \sqrt{1 - \| x_{q_v}(t) \|_2^2}$ for all $-\pi \,\text{rad} \le \phi \le \pi \,\text{rad}$, the author represents the state equation for $x(t)$ by

$$\dot x(t) = A(x(t)) x(t) + B u(t), \tag{44}$$

where $A(x(t)) \in \mathbb{R}^{6 \times 6}$ and $B \in \mathbb{R}^{6 \times 3}$ are defined in (45) and (46), respectively. In (46), $0_{3 \times 3}$ denotes the $3 \times 3$ zero matrix:

$$A(x(t)) \triangleq \begin{bmatrix} A_{11}(x(t)) & A_{12}(x(t)) \\ A_{21}(x(t)) & A_{22}(x(t)) \end{bmatrix}, \tag{45}$$

where

$$\begin{aligned}
A_{11}(x(t)) &= J^{-1}\Omega\big(x_\delta(t) - k_1 x_{q_v}(t)\big)J + \tfrac{1}{2}k_1\Omega\big(x_{q_v}(t)\big)^T + \tfrac{1}{2}k_1\sqrt{1-\|x_{q_v}(t)\|_2^2}\,I_{3\times3}, \\
A_{12}(x(t)) &= -k_1 J^{-1}\Omega\big(x_\delta(t) - k_1 x_{q_v}(t)\big)J - \tfrac{1}{2}k_1^2\Omega\big(x_{q_v}(t)\big)^T - \tfrac{1}{2}k_1^2\sqrt{1-\|x_{q_v}(t)\|_2^2}\,I_{3\times3}, \\
A_{21}(x(t)) &= \tfrac{1}{2}\Omega\big(x_{q_v}(t)\big)^T + \tfrac{1}{2}\sqrt{1-\|x_{q_v}(t)\|_2^2}\,I_{3\times3}, \\
A_{22}(x(t)) &= -\tfrac{1}{2}k_1\Omega\big(x_{q_v}(t)\big)^T - \tfrac{1}{2}k_1\sqrt{1-\|x_{q_v}(t)\|_2^2}\,I_{3\times3}.
\end{aligned}$$
$$B \triangleq \begin{bmatrix} J^{-1} \\ 0_{3 \times 3} \end{bmatrix}. \tag{46}$$
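The block structure of $A(x(t))$ in (45) can be cross-checked against the dynamics (38) and kinematics (40) written in the $\delta$ coordinates. The sketch below does exactly that at a hypothetical state value, using the gain $k_1 = 0.2$ and the inertia values from the chapter's numerical example.

```python
# Verify that A(x) x reproduces [delta_dot; qv_dot] computed directly (u = 0),
# where delta = omega + k1*q_v, i.e., omega = x_delta - k1*x_qv.
import numpy as np

def Omega(w):
    return np.array([[ 0.0,   w[2], -w[1]],
                     [-w[2],  0.0,   w[0]],
                     [ w[1], -w[0],  0.0]])

J = np.diag([10.0, 15.0, 20.0])       # inertia matrix of the numerical example
Jinv = np.linalg.inv(J)
k1 = 0.2
I3 = np.eye(3)

def A_of_x(x):
    xd, xq = x[:3], x[3:]             # x_delta and x_{q_v}
    q1 = np.sqrt(1.0 - xq @ xq)       # q1 recovered from the unit-norm constraint
    W = Jinv @ Omega(xd - k1 * xq) @ J
    S = Omega(xq).T
    A11 =  W + 0.5 * k1 * S + 0.5 * k1 * q1 * I3
    A12 = -k1 * W - 0.5 * k1**2 * S - 0.5 * k1**2 * q1 * I3
    A21 =  0.5 * S + 0.5 * q1 * I3
    A22 = -0.5 * k1 * S - 0.5 * k1 * q1 * I3
    return np.block([[A11, A12], [A21, A22]])

B = np.vstack([Jinv, np.zeros((3, 3))])

x = np.array([0.1, -0.2, 0.15, 0.1, 0.2, -0.1])   # hypothetical state
w = x[:3] - k1 * x[3:]                             # omega = delta - k1*q_v
q1 = np.sqrt(1.0 - x[3:] @ x[3:])
wdot = Jinv @ Omega(w) @ J @ w                     # dynamics (38) with u = 0
qvdot = 0.5 * (q1 * I3 + Omega(x[3:]).T) @ w       # vector part of (40)
print(np.allclose(A_of_x(x) @ x, np.concatenate([wdot + k1 * qvdot, qvdot])))
```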

As a numerical example, the author assumes that

$$J = \mathrm{diag}\left( 10, 15, 20 \right) \,\mathrm{kg \cdot m^2}, \tag{47}$$

where diag denotes a diagonal matrix, $k_1 = 0.2$, and $x_{\delta i}, x_{q_v i} \in \left[ -0.5, 0.5 \right]$, $i = 1, \ldots, 3$. If we sample $A(x(t))$ in (45) at the nine operating points $\left( x_{\delta i}, x_{q_v i} \right) = (0, 0)$, $(0, \pm 0.5)$, $(\pm 0.25, 0)$, $\pm(0.25, 0.5)$, $\pm(0.5, 0.5)$, $i = 1, \ldots, 3$, with the given $k_1 = 0.2$ and $J$ in (47), we can obtain the following TS-type fuzzy model for the system in (44):

• Plant rule $i$ for the system in (44):

IF $x_1(t)$ is $M_{i1}$ and $\cdots$ and $x_6(t)$ is $M_{i6}$, THEN

$$\dot x(t) = A_i x(t) + B u(t), \quad i = 1, \ldots, 9. \tag{48}$$

In (48), $x(t) \in \mathbb{R}^6$ is the state vector, $u(t) \in \mathbb{R}^3$ is the control input vector, $M_{ij}$, $i = 1, \ldots, 9$, $j = 1, \ldots, 6$, are the fuzzy sets defined as in Figures 1 and 2, $A_i$, $i = 1, \ldots, 9$, are obtained by sampling $A(x(t))$ in (45) at the given nine operating points, and $B$ is given in (46). With the normalized weight functions $h_i$, $i = 1, \ldots, 9$, defined by

$$h_i(x(t)) \triangleq \frac{ \prod_{j=1}^{6} M_{ij}(x_j(t)) }{ \sum_{i=1}^{9} \prod_{j=1}^{6} M_{ij}(x_j(t)) }, \quad i = 1, \ldots, 9,$$

the TS-type fuzzy model in (48) can be transformed into the following TS-type fuzzy system:

$$\dot x(t) = \sum_{i=1}^{9} h_i(x(t)) A_i x(t) + B u(t). \tag{49}$$
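The weight computation above can be sketched in a few lines. The actual membership functions $M_{ij}$ are defined in Figures 1 and 2 (not reproduced here), so the triangular shapes, widths, and rule centers below are purely illustrative assumptions.

```python
# Sketch of the normalized weights h_i: per-rule product of memberships,
# normalized so that the weights sum to one, as in the definition above.
import numpy as np

def tri(x, center, width=0.5):
    """Hypothetical triangular membership centered at `center`."""
    return max(0.0, 1.0 - abs(x - center) / width)

centers = np.linspace(-0.5, 0.5, 9)   # one assumed center per rule

def h(x):
    w = np.array([np.prod([tri(xj, centers[i]) for xj in x])
                  for i in range(9)])  # product over the 6 state components
    return w / w.sum()                 # normalization as in the text

x = np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.2])   # hypothetical state
print(h(x).sum())                                # 1.0: the weights are normalized
```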

We can use the result of theorem 2 to design the optimal control law for the TS-type fuzzy system in (49) because the TS-type fuzzy system in (49) has the common input matrix $B$ in (46). Then, we assume that $\alpha = 2$, $\beta = 0.1$, and $R = I_{3 \times 3}$ in (28) and (29). With these values, the author solves the LMIs in (27), (28), and (29) by using the command "feasp" provided by the LMI Control Toolbox of MATLAB [35]. From the result, we obtain the following optimal control law for the TS-type fuzzy system in (49) that minimizes the cost function in (31):

$$u_a(t) = K_a x(t), \tag{50}$$

where

$$K_a = \begin{bmatrix} 110.0629 & 24.5318 & 47.2469 & 73.0798 & 28.6127 & 37.7649 \\ 16.3545 & 139.9846 & 60.3201 & 17.0025 & 75.0011 & 39.4506 \\ 23.6235 & 45.2400 & 185.6703 & 5.6923 & 18.6642 & 92.8441 \end{bmatrix}.$$

To clearly analyze the influence of the weight matrix $R$ in (31) on the control performance without any other constraint, the author designs two other optimal control laws for the TS-type fuzzy system in (49) without the decay rate constraint given by (29). The author designs one by solving (27) and (28) for the TS-type fuzzy system in (49) with $\alpha = 2$ and $R = I_{3 \times 3}$, which is given by

$$u_b(t) = K_b x(t), \tag{51}$$

where

$$K_b = \begin{bmatrix} 28.0708 & 2.5099 & 1.9744 & 3.5899 & 0.3747 & 0.1420 \\ 1.6733 & 33.7936 & 4.9727 & 1.3456 & 1.7520 & 0.3518 \\ 0.9872 & 3.7295 & 42.8319 & 1.4713 & 1.3202 & 0.2517 \end{bmatrix}.$$

The author designs the other by solving (27) and (28) for the TS-type fuzzy system in (49) with $\alpha = 2$ and $R = 3 I_{3 \times 3}$, which is given by

$$u_c(t) = K_c x(t), \tag{52}$$

where

$$K_c = \begin{bmatrix} 29.1798 & 18.4270 & 18.9874 & 0.6854 & 0.2322 & 1.5053 \\ 12.2846 & 40.0169 & 18.1900 & 2.3547 & 1.7662 & 1.3049 \\ 9.4937 & 13.6425 & 49.3411 & 2.4969 & 0.9770 & 4.9648 \end{bmatrix}.$$

### 4.3. Numerical simulation results

As the numerical simulation model, the author uses the equations of rotational motion of a rigid body given by (38) and (40), where the inertia matrix is given by (47). The author assumes that the initial conditions at the initial time $t_0 = 0$ sec for the Euler axis $\hat e$ and the Euler angle $\phi$ are $\hat e(t_0) = \left[ 0.4896 \; 0.2030 \; 0.8480 \right]^T$ and $\phi(t_0) = 2.4648$ rad, respectively, which give $q(t_0) = \left[ 0.3320 \; 0.4618 \; 0.1915 \; 0.7999 \right]^T$. Note that the given $\phi(t_0)$ represents an almost upside-down initial orientation of a rigid body. Also, the author assumes a rest-to-rest maneuver of a rigid body and thus takes the initial condition for the angular velocity vector as $\omega(t_0) = \left[ 0 \; 0 \; 0 \right]^T$ rad/sec.

With the optimal control laws $u_a(t)$ of (50), $u_b(t)$ of (51), and $u_c(t)$ of (52), the author illustrates the influences of the decay rate $\beta$ in (29) and the weight matrix $R$ in (31) on the control performance. In Figures 3-5, the author shows the numerical simulation results for a rigid body with each control law $u_a(t)$, $u_b(t)$, and $u_c(t)$; the red solid, green dashed, and blue dotted lines represent the state trajectories of a rigid body with $u_a(t)$, $u_b(t)$, and $u_c(t)$, respectively.

First, as shown in Figures 3-5, we see that $u_a(t)$, $u_b(t)$, and $u_c(t)$ all guarantee the asymptotic stability of the equilibrium point. Second, in Figures 3 and 4, we observe that $u_a(t)$ provides more desirable control performance than $u_b(t)$ and $u_c(t)$ because the design of $u_a(t)$ incorporates the decay rate constraint.

Finally, in Figures 3 and 4, we see that the state responses of a rigid body with $u_b(t)$ converge to the equilibrium point faster than those with $u_c(t)$, while Figure 5 shows that the control efforts of $u_b(t)$ and $u_c(t)$ are comparable. This result can be explained as follows: a weight matrix $R$ with small diagonal elements increases the relative penalty on the state vector and decreases the penalty on the control input vector, which stabilizes the system within a short period of time, whereas a weight matrix $R$ with large diagonal elements decreases the relative penalty on the state vector and increases the penalty on the control input vector, so that the system is stabilized over a longer period of time.
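This trade-off is the standard quadratic-cost weighting trade-off, and it can be illustrated on a toy model unrelated to the rigid-body example; the double-integrator LQR sketch below (hypothetical model and weights) shows that enlarging $R$ yields smaller gains and a slowest closed-loop pole closer to the imaginary axis.

```python
# LQR weighting trade-off on a hypothetical double integrator:
# larger R penalizes control effort more, so the closed loop becomes slower.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
Q = np.eye(2)

def slowest_pole(R):
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.inv(R) @ B.T @ P       # u = -K x
    return np.max(np.linalg.eigvals(A - B @ K).real)

print(slowest_pole(np.eye(1)), slowest_pole(3.0 * np.eye(1)))
# the larger-R design has its slowest pole closer to the imaginary axis
```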

## 5. Conclusion

In this chapter, the author presented a theory on the optimal control of nonlinear dynamic systems by utilizing the dynamic programming approach and the inverse optimal approach. Specifically, the author employed the dynamic programming approach to derive the Hamilton-Jacobi-Bellman (H-J-B) equation associated with the optimal control problem for nonlinear dynamic systems and utilized the inverse optimal approach to avoid the task of solving the H-J-B equation numerically.

Then, the author established an optimal control design for TS-type fuzzy systems that achieves the global asymptotic stability of an equilibrium point, optimality with respect to a cost function, and good convergence rates of state trajectories to an equilibrium point. Based on this optimal control design, the author presented a systematic way of designing the optimal control law for TS-type fuzzy systems.

The author showed the usefulness of the optimal control design by considering the three-axis attitude stabilization problem of a rigid body. The optimal three-axis attitude stabilizing control law for a rigid body was designed, and its control performance was analyzed by numerical simulations. The numerical simulation results demonstrated that the optimal three-axis attitude stabilizing control law designed in this chapter provides desirable optimal control performance together with good convergence rates of state trajectories to an equilibrium point.

The author would like to suggest two further research topics. The first is an extension of the study presented in this chapter toward the robust control design for TS-type fuzzy systems with parametric uncertainties and external disturbances. The problem is to incorporate these robustness issues into the optimal control design approach, and the key to solving this problem may be found in the literature on robust control approaches such as the loop-transfer-recovery approach, the guaranteed-cost approach, the stochastic approach, and the state-estimation approach. However, the difficulty of combining these approaches with the optimal control design method comes from the fact that they are mainly developed for linear systems; therefore, their extension to nonlinear dynamic systems may be required. The second topic is to develop the optimal control design of TS-type fuzzy systems for tracking problems, in which the equilibrium point is not the zero state. In linear dynamic systems, tracking problems can be reduced to regulation problems when the desired state is known, or to disturbance-rejection problems when the disturbance signal is known. In many cases, however, neither the desired state nor the disturbance signal is known, and alternatives such as the minimax approach and proportional-integral control may be needed. The minimax approach is a worst-case design in which the disturbance signal maximizes the same performance index that the control input minimizes, and proportional-integral control can be used to reject constant disturbances. Consequently, the minimax approach and proportional-integral control may provide solutions to the tracking problem of TS-type fuzzy systems.

## References

1. Zadeh LA. Fuzzy sets. Information and Control. 1965;8(3):338-353
2. Zadeh LA. Fuzzy algorithms. Information and Control. 1968;12(2):94-102
3. Bezdek JC. Editorial: Fuzzy models: What are they, and why? IEEE Transactions on Fuzzy Systems. 1993;1(1):1-6
4. Khalil HK. Nonlinear Systems. Upper Saddle River, NJ: Prentice-Hall; 1996
5. Wang LX. Adaptive Fuzzy Systems and Control. Englewood Cliffs, NJ: Prentice-Hall; 1994
6. Zadeh LA. Outline of a new approach to the analysis of complex systems and decision processes. IEEE Transactions on Systems, Man, and Cybernetics. 1973;SMC-3(1):28-44
7. Mamdani EH, Assilian S. Applications of fuzzy algorithms for control of simple dynamic plant. Proceedings of the Institution of Electrical Engineers. 1974;121(12):1585-1588
8. Tanaka K, Sugeno M. Stability analysis and design of fuzzy control system. Fuzzy Sets and Systems. 1992;45(2):135-156
9. Wang H, Tanaka K, Griffin M. An approach to fuzzy control of nonlinear systems: Stability and design issues. IEEE Transactions on Fuzzy Systems. 1996;4(1):14-23
10. Feng G, Cao SG, Rees NW, Chak CK. Design of fuzzy control systems with guaranteed stability. Fuzzy Sets and Systems. 1997;85(1):1-10
11. Tanaka K, Ikeda T, Wang H. Fuzzy regulators and fuzzy observers: Relaxed stability conditions and LMI-based designs. IEEE Transactions on Fuzzy Systems. 1998;6(2):250-265
12. Joh J, Chen YH, Langari R. On the stability issues of linear Takagi-Sugeno fuzzy models. IEEE Transactions on Fuzzy Systems. 1998;6(3):402-410
13. Teixeira MCM, Żak SH. Stabilizing controller design for uncertain nonlinear systems using fuzzy models. IEEE Transactions on Fuzzy Systems. 1999;7(2):133-142
14. Park J, Kim J, Park D. LMI-based design of stabilizing fuzzy controllers for nonlinear systems described by Takagi-Sugeno fuzzy model. Fuzzy Sets and Systems. 2001;122(1):73-82
15. Takagi T, Sugeno M. Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man, and Cybernetics. 1985;SMC-15(1):116-132
16. Dorato P, Abdallah C, Cerone V. Linear-Quadratic Control: An Introduction. Upper Saddle River, NJ: Prentice-Hall; 1995
17. Wang LX. Stable and optimal fuzzy control of linear systems. IEEE Transactions on Fuzzy Systems. 1998;6(1):137-143
18. Wu SJ, Lin CT. Optimal fuzzy controller design: Local concept approach. IEEE Transactions on Fuzzy Systems. 2000;8(2):171-185
19. Wu SJ, Lin CT. Optimal fuzzy controller design: Global concept approach. IEEE Transactions on Fuzzy Systems. 2000;8(6):713-729
20. Park Y, Tahk MJ, Park J. Optimal stabilization of Takagi-Sugeno fuzzy systems with application to spacecraft control. Journal of Guidance, Control, and Dynamics. 2001;24(4):767-777
21. Kim D, Rhee S. Design of an optimal fuzzy logic controller using response surface methodology. IEEE Transactions on Fuzzy Systems. 2001;9(3):404-412
22. Wu SJ, Lin CT. Discrete-time optimal fuzzy controller design: Global concept approach. IEEE Transactions on Fuzzy Systems. 2002;10(1):21-38
23. Chen B, Liu X. Fuzzy guaranteed cost control for nonlinear systems with time-varying delay. IEEE Transactions on Fuzzy Systems. 2005;13(2):238-249
24. Mirzaei A, Moallem M, Mirzaeian B, Fahimi B. Design of an optimal fuzzy controller for antilock braking systems. IEEE Transactions on Vehicular Technology. 2006;55(6):1725-1730
25. Lin P-T, Wang C-H, Lee T-T. Time-optimal control of T-S fuzzy models via Lie algebra. IEEE Transactions on Fuzzy Systems. 2009;17(4):737-749
26. Mostefai L, Denai M, Oh S, Hori Y. Optimal control design for robust fuzzy friction compensation in a robot joint. IEEE Transactions on Industrial Electronics. 2009;56(10):3832-3839
27. Zhu Y. Fuzzy optimal control for multistage fuzzy systems. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics. 2011;41(4):964-975
28. Esfahani SH, Sichani AK. Improvement on the problem of optimal fuzzy Hi-tracking control design for non-linear systems. IET Control Theory and Applications. 2011;5(18):2179-2190
29. Bellman RE. The theory of dynamic programming. Proceedings of the National Academy of Sciences of the United States of America. 1952;38:716-719
30. Sepulchre R, Janković M, Kokotović PV. Constructive Nonlinear Control. New York: Springer-Verlag; 1997
31. Kalman RE. When is a linear control system optimal? Journal of Basic Engineering. 1964;86(1):51-60
32. Boyd S, El Ghaoui L, Feron E, Balakrishnan V. Linear Matrix Inequalities in System and Control Theory, SIAM Studies in Applied Mathematics Series. Vol. 15. Philadelphia, PA: SIAM; 1994
33. Vandenberghe L, Balakrishnan V. Algorithms and software for LMI problems in control. IEEE Control Systems Magazine. 1997;17(5):89-95
34. Nesterov Y, Nemirovskii A. Interior-Point Polynomial Algorithms in Convex Programming, SIAM Studies in Applied Mathematics Series. Vol. 13. Philadelphia, PA: SIAM; 1994
35. Gahinet P, Nemirovski A, Laub AJ, Chilali M. LMI Control Toolbox. Natick, MA: The MathWorks, Inc.; 1995
36. Junkins JL, Turner JD. Optimal Spacecraft Rotational Maneuvers, Studies in Astronautics. Vol. 3. Amsterdam, Netherlands: Elsevier Science; 1986
37. Shuster MD. A survey of attitude representations. Journal of Astronautical Sciences. 1993;41(4):439-517
38. Marandi SR, Modi V. A preferred coordinate system and the associated orientation representation in attitude dynamics. Acta Astronautica. 1987;15:833-843
39. Krstić M, Kanellakopoulos I, Kokotović PV. Nonlinear and Adaptive Control Design. New York: Wiley; 1995
40. Debs AS, Athans M. On the optimal angular velocity control of asymmetrical space vehicles. IEEE Transactions on Automatic Control. 1969;14(1):80-83
41. Dabbous TE, Ahmed NU. Nonlinear optimal feedback regulation of satellite angular momenta. IEEE Transactions on Aerospace and Electronic Systems. 1982;18(1):2-10
42. Vadali SR, Kraige LG, Junkins JL. New results on the optimal spacecraft attitude maneuver problem. Journal of Guidance, Control, and Dynamics. 1984;7(3):378-380
43. Athans M, Falb PL, Lacoss RT. Time-, fuel-, and energy-optimal control of nonlinear norm-invariant systems. IEEE Transactions on Automatic Control. 1963;8(3):196-202
44. Dixon MV, Edelbaum T, Potter JE, Vandervelde WE. Fuel optimal reorientation of axisymmetric spacecraft. Journal of Spacecraft and Rockets. 1970;7(11):1345-1351
45. Junkins JL, Carrington CK, Williams CE. Time-optimal magnetic attitude maneuvers. Journal of Guidance, Control, and Dynamics. 1981;4(4):363-368
46. Scrivener SL, Thomson RC. Survey of time-optimal attitude maneuvers. Journal of Guidance, Control, and Dynamics. 1994;17(2):225-233
47. Windeknecht TG. Optimal stabilization of rigid body attitude. Journal of Mathematical Analysis and Applications. 1963;6(2):325-335
48. Kumar KSP. On the optimum stabilization of a satellite. IEEE Transactions on Aerospace and Electronic Systems. 1965;1(2):82-83
49. Tsiotras P, Corless M, Rotea M. Optimal control of rigid body angular velocity with quadratic cost. In: Proceedings of the 35th IEEE Conference on Decision and Control; 11-13 December 1996; Kobe, Japan. pp. 1630-1635
50. Carrington CK, Junkins JL. Optimal nonlinear feedback control for spacecraft attitude maneuvers. Journal of Guidance, Control, and Dynamics. 1986;9(1):99-107
51. Rotea M, Tsiotras P, Corless M. Suboptimal control of rigid body motion with a quadratic cost. Dynamics and Control. 1998;8(1):55-81
52. Tsiotras P. Stabilization and optimality results for the attitude control problem. Journal of Guidance, Control, and Dynamics. 1996;19(4):772-779
53. Tsiotras P. Optimal regulation and passivity results for axisymmetric rigid bodies using two controls. Journal of Guidance, Control, and Dynamics. 1997;20(3):457-463
54. Bharadwaj S, Osipchuk M, Mease KD, Park FC. Geometry and inverse optimality in global attitude stabilization. Journal of Guidance, Control, and Dynamics. 1998;21(6):930-939
55. Park Y, Tahk MJ. Robust attitude stabilization of spacecraft using minimal kinematic parameters. In: Proceedings of the 2001 IEEE International Conference on Robotics and Automation; 21-26 May 2001; Seoul, Korea. pp. 1621-1626
56. Park Y, Tahk MJ. Optimal attitude stabilization of spacecraft using minimal kinematic parameters. In: Proceedings of the 4th Asian Control Conference; 25-27 September 2002; Singapore. pp. 881-885
57. Anderson BDO, Moore JB. Optimal Control: Linear Quadratic Methods. Englewood Cliffs, NJ: Prentice-Hall; 1990
58. Park Y, Tahk MJ, Bang HC. A game-theoretic approach to robust optimal attitude stabilization of a spacecraft with external disturbances. In: Proceedings of the JSASS 15th International Sessions in 39th Aircraft Symposium; 29-31 October 2001; Gifu, Japan. pp. 17-20
59. Wen JT, Kreutz-Delgado K. The attitude control problem. IEEE Transactions on Automatic Control. 1991;36(10):1148-1162
60. Lizarralde F, Wen JT. Attitude control without angular velocity measurement: A passivity approach. IEEE Transactions on Automatic Control. 1996;41(3):468-472
61. Caccavale F, Natale C, Siciliano B, Villani L. Six-DOF impedance control based on angle/axis representations. IEEE Transactions on Robotics and Automation. 1999;15(2):289-300
62. Park Y. Optimal control of TS-type fuzzy systems. IEEE Transactions on Aerospace and Electronic Systems. 2014;50(1):761-772
63. Horn R. Matrix Analysis. New York: Cambridge University Press; 1985
