Open access peer-reviewed chapter

Feedback and Partial Feedback Linearization of Nonlinear Systems: A Tribute to the Elders

Written By

Issa Amadou Tall

Submitted: 29 October 2015 Reviewed: 23 June 2016 Published: 19 October 2016

DOI: 10.5772/64689

From the Edited Volume

Nonlinear Systems - Design, Analysis, Estimation and Control

Edited by Dongbin Lee, Tim Burg and Christos Volos


Abstract

Arthur Krener and Roger Brockett pioneered the feedback linearization problem for control systems, that is, the transformation of a nonlinear control system into linear dynamics via a change of coordinates and feedback. While the former gave necessary and sufficient conditions to linearize a system under a change of coordinates only, the latter introduced the concept of feedback and solved the problem in a particular case. Their work was soon extended in the early eighties by Jakubczyk and Respondek, and by Hunt and Su, who gave the conditions for a control system to be linearizable by change of coordinates and feedback (full rank and involutivity of the associated distributions). It turned out that those conditions are very restrictive; however, it was shown later that systems that fail to be linearizable can still be transformed into two interconnected subsystems: one linear and the other nonlinear. This fact is known as partial feedback linearization. For input-output systems with well-defined relative degree, linearizing coordinates can be found by differentiating the outputs. For systems without outputs, necessary and sufficient geometric conditions for partial linearization have been obtained in terms of the Lie algebra of the system; however, both the linearization and the partial feedback linearization results lack practicability: until recently, none provided a way to actually compute the linearizing coordinates and feedback. In this paper, we propose an algorithm that finds the linearizing coordinates and feedback if the system is linearizable and, if it is not, decomposes the system (without outputs) while achieving the largest linear subsystem. The algorithm is built upon successive applications of the Frobenius theorem. Examples are provided as illustration.

Keywords

  • feedback
  • Frobenius theorem
  • partial linearization

1. Introduction

Roger Brockett is considered the father of feedback linearization, one of the most important techniques for studying nonlinear systems. The feedback linearization problem seeks new coordinates in which the system exhibits linear dynamics driven by new control inputs. The usefulness of linear systems in engineering and mechanical applications has been demonstrated many times over. First, let us consider a linear system

$$\Lambda:\quad\begin{cases}\dot x=Fx+Gu=Fx+G_1u_1+\cdots+G_mu_m\\[2pt] y=Hx=H_1x_1+\cdots+H_nx_n\end{cases}\qquad(E1)$$

where $Fx, G_1,\dots,G_m$ are, respectively, a linear vector field and constant vector fields on $\mathbb R^n$, $Hx$ is a linear output map with values in $\mathbb R^p$, $x\in\mathbb R^n$ denotes the state of the system, and $u\in\mathbb R^m$ the control input. To the linear system $\Lambda$, we attach two geometric objects: the controllability matrix $\mathcal C=[G\ \ FG\ \cdots\ F^{n-1}G]$, an $n\times nm$ matrix whose columns are those of the matrices $F^{i-1}G$ for $i=1,2,\dots,n$; and the observability matrix $\mathcal O$, an $np\times n$ matrix whose rows are those of the matrices $HF^{i-1}$ for $i=1,2,\dots,n$. The system $\Lambda$ is controllable if and only if $\operatorname{rank}\mathcal C=n$, and observable if and only if $\operatorname{rank}\mathcal O=n$. By a linear change of coordinates $z=Tx$ and a linear feedback $u=Kx+Lv$, where $T$, $K$, and $L$ are matrices of appropriate sizes with $T$ and $L$ invertible, the system $\Lambda$ is transformed into an equivalent linear system

$$\bar\Lambda:\quad\begin{cases}\dot z=Az+Bv=Az+B_1v_1+\cdots+B_mv_m\\[2pt] y=Cz=C_1z_1+\cdots+C_nz_n\end{cases}\qquad(E2)$$

with $A=T(F+GK)T^{-1}$, $B=TGL$, and $C=HT^{-1}$.

For the linear system $\dot x=Ax+Bu$, where $A$ and $B$ are $n\times n$ and $n\times m$ matrices, respectively, we denote by $M_j=[B\ \ AB\ \cdots\ A^{j-1}B]$ and $m_j=\operatorname{rank}M_j$. We define $k_j=\operatorname{card}\{\,i\ |\ n_i\ge j\,\}$, where $n_0=0$ and $n_i=m_i-m_{i-1}$ for $1\le i\le n$. It is straightforward to notice that $k_1\ge\cdots\ge k_m$ with $k_1+\cdots+k_m=n$. It is a classical result of linear control theory that a certain choice of the matrices $T$, $K$, and $L$ leads to the Brunovsky canonical form $\bar\Lambda=\Lambda_{BR}$, for which $A=\operatorname{diag}\{A_1,A_2,\dots,A_m\}$ and $B=\operatorname{diag}\{b_1,b_2,\dots,b_m\}$, where (see [1])

$$A_i=\begin{pmatrix}0&1&0&\cdots&0\\0&0&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&1\\0&0&0&\cdots&0\end{pmatrix},\qquad b_i=\begin{pmatrix}0\\ \vdots\\0\\1\end{pmatrix}\qquad(E3)$$

form a canonical pair of dimension $k_i$. Moreover, $C=(1\ 0\ \cdots\ 0)$.
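To make these definitions concrete, the short script below is a hedged sketch (not taken from the chapter; the pair $(A,B)$ is made-up data) that computes the ranks $m_j$, the differences $n_i$, and the controllability indices $k_j$ numerically.

```python
# Illustrative sketch only: A and B below are made-up data, not from the text.
import numpy as np

A = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 0.]])
B = np.array([[0., 0.],
              [1., 0.],
              [0., 1.]])
n, m = A.shape[0], B.shape[1]

# m_j = rank [B  AB  ...  A^{j-1}B]
m_of = [0]
blocks = []
for j in range(1, n + 1):
    blocks.append(np.linalg.matrix_power(A, j - 1) @ B)
    m_of.append(np.linalg.matrix_rank(np.hstack(blocks)))

n_i = [m_of[i] - m_of[i - 1] for i in range(1, n + 1)]          # n_i = m_i - m_{i-1}
k = [sum(1 for ni in n_i if ni >= j) for j in range(1, m + 1)]  # k_j = card{i : n_i >= j}

print("ranks m_j:", m_of[1:])        # the pair is controllable iff m_n == n
print("indices k_j:", k, "with sum", sum(k))
```

For the pair shown, the indices come out as $(k_1,k_2)=(2,1)$, so the corresponding Brunovsky form consists of one chain of length 2 and one chain of length 1.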

Now let us consider a nonlinear control system (control-affine for simplicity)

$$\Sigma:\quad\begin{cases}\dot x=f(x)+g(x)u=f(x)+g_1(x)u_1+\cdots+g_m(x)u_m, & x\in\mathbb R^n,\ u\in\mathbb R^m\\[2pt] y=h(x)=(h_1(x),\dots,h_p(x))\end{cases}\qquad(E4)$$

where $x\in\mathbb R^n$ denotes the state of the system and $u\in\mathbb R^m$ the control input; $f,g_1,\dots,g_m$ are smooth or analytic vector fields with $f(0)=0$, and $h_1,\dots,h_p$ are analytic functions on $\mathbb R^n$.

The problem of finding new coordinates $z=\Phi(x)$ in which the system $\Sigma$, driven by new inputs $v$, takes the form $\bar\Lambda$ is referred to as input–output static state feedback linearization. For input–output systems, the linearization problem is equivalent to achieving a relative degree (see details later). Once the relative degree is achieved, finding the coordinate system in which the system becomes linear is a simple differentiation process. For systems without outputs, we speak only of static state linearization (Problem 1) or static state feedback linearization (Problem 2), as follows:

Problem 1: Find new coordinates $z=\Phi(x)$ that transform the system $\Sigma:\ \dot x=f(x)+g(x)u$ into a linear controllable system $\dot z=Az+Bu$.

Problem 2: Find new coordinates $z=\Phi(x)$ and an invertible feedback $u=\alpha(x)+\beta(x)v$ that transform the system $\Sigma:\ \dot x=f(x)+g(x)u$ into a linear controllable system $\dot z=Az+Bv$.

Arthur Krener [2] formulated and completely solved the first problem by showing that the Lie brackets of certain vector fields have to vanish, that is, a certain set of vector fields associated with the system has to commute. Roger Brockett [3] solved the second problem under the assumptions that m = 1 (single input), p = 1 (single output), and β is constant. The general case of feedback linearization (Problem 2) was solved by Jakubczyk and Respondek [4] and, independently, by Hunt and Su [5]. Necessary and sufficient geometric conditions were obtained, showing that only a small class of nonlinear systems can be linearized by feedback. Indeed, the system should satisfy the following two strong conditions:

  1. (F1) involutivity of a distribution associated with the system,

  2. (F2) full rank of an associated distribution, equal to the dimension of the system.

Those conditions are very restrictive, thus making the class of nonlinear systems that can be linearized by static state feedback very small. To enlarge the class of nonlinear systems that can be analyzed via feedback linearization, several techniques have been introduced, including dynamic feedback linearization, nonregular state feedback linearization, partial feedback linearization, orbital feedback linearization, and transverse feedback linearization. Dynamic feedback linearization differs from static state feedback linearization in that a compensator $\dot w=a(x,w)+b(x,w)v$, $w\in\mathbb R^q$, $u=\alpha(x,w)+\beta(x,w)v$ is sought, which enlarges the dimension of the system. This means that one tries to linearize the system

$$\Sigma:\quad\begin{cases}\dot x=f(x)+g(x)\alpha(x,w)+g(x)\beta(x,w)v, & x\in\mathbb R^n,\ v\in\mathbb R^m\\[2pt] \dot w=a(x,w)+b(x,w)v, & w\in\mathbb R^q\end{cases}\qquad(E5)$$

using an extended state space transformation $z=\varphi(x,w)\in\mathbb R^{n+q}$. This problem is referred to as regular dynamic feedback linearization when $\beta(\cdot)$ is an invertible matrix. More general feedbacks have been exploited to enlarge the class of linearizable systems by allowing the matrix $\beta(\cdot)$ to be noninvertible, that is, admitting fewer inputs than the original system [6, 7]; in this case, we speak of nonregular feedback linearization [8]. Orbital feedback linearization, also known as time-scale feedback linearization, introduces a new time scale $\tau$ such that $dt/d\tau=\gamma(x)$ is a positive function (preserving orientation). Hence, in the new time scale $\tau$, the problem becomes to linearize the time-scaled system (see [9] and references therein)

$$\Sigma:\quad\begin{cases}\dfrac{dx}{d\tau}=\gamma(x)\bigl(f(x)+g(x)u\bigr)=\gamma(x)f(x)+\gamma(x)g_1(x)u_1+\cdots+\gamma(x)g_m(x)u_m, & x\in\mathbb R^n,\ u\in\mathbb R^m\\[4pt] y=h(x)=(h_1(x),\dots,h_p(x))\end{cases}\qquad(E6)$$

Transverse feedback linearization [10] deals with transforming a control-affine system coupled with a controlled invariant manifold into a system whose dynamics, transversal to the invariant manifold, are linear and controllable.

The feedback linearization problem has been thoroughly investigated over the past four decades but has regained interest recently, with new algorithms developed to circumvent solving the partial differential equations associated with the linearization (see [4, 5, 21–28] and the references therein). Whenever a system fails to satisfy either condition (F1) or (F2), its dynamics contain nonlinearities in any given coordinate system. The fundamental question is then: in which coordinates does the system exhibit the largest linear subsystem? This question was first addressed, naturally, for systems with outputs [6, 7, 11–20]. We propose in this paper an algorithmic way of transforming a control system into a cascade of two subsystems: one nonlinear and one linear of the largest possible dimension. First, we recall basics about vector fields and the Frobenius theorem; Section 3 deals with linearization of control systems with outputs; Section 4 contains the partial linearization algorithm. We end the paper with Section 5, which gives a few examples as illustration.


2. Vector fields and Frobenius theorem

The theory of differential equations is one of the most productive and useful contributions of modern times. Its applications are widespread in all branches of the natural sciences, particularly in physics, biology, chemistry, engineering, ecology, and weather prediction, just to name a few. It plays the role of a connector between abstract mathematical theories and real-world applications. Paraphrasing Newton, who is quoted as saying that "it is useful to solve differential equations," a great deal of effort has been devoted to solving differential equations, with various methods and techniques provided in the literature. Existence and uniqueness of solutions have been addressed in many scientific papers and textbooks. Consider the simplest expression of a linear partial differential equation

$$f_1(x)\frac{\partial u}{\partial x_1}+\cdots+f_n(x)\frac{\partial u}{\partial x_n}=b(x)\qquad(E7)$$

where $f_1(x),\dots,f_n(x)$ and $b(x)$ are smooth or analytic functions of the variable $x$. This partial differential equation is referred to as a homogeneous (resp. nonhomogeneous) linear first-order partial differential equation when $b(x)\equiv0$ (resp. $b(x)\not\equiv0$). The vector field $f(x)$ whose components are $f_1(x),\dots,f_n(x)$ is called the characteristic vector field of the homogeneous equation, and the corresponding dynamical system $\dot x=f(x)$ its characteristic equation. The solutions of the system are the integral curves of the characteristic equation and are often obtained by solving the so-called Lagrange subsidiary equation (also called the characteristic equation)

$$\frac{dx_1}{f_1(x)}=\cdots=\frac{dx_n}{f_n(x)}=\frac{du}{b(x)}\qquad(E8)$$

Several methods have been devoted to solving such systems, among them Euler's method and Natani's method. Most of the work on ordinary differential equations has been done around equilibrium points (nonregular or singular points), that is, points $x_0$ where $f(x_0)=0$. The reason is that regular points, that is, points where $f(x_0)\neq0$, are not topologically rich: in their neighborhoods all trajectories are straight parallel lines (straightening theorem). Though this fact is true, and regular points are hence often neglected, the straightening theorem has many important applications. Indeed, a solution of the nonhomogeneous partial differential equation above can easily be found around a regular point $x_0$ of $f$ by simple quadrature in new coordinates: if $z=\varphi(x)$ is a change of coordinates around $x_0$ that rectifies the vector field $f$, that is, such that $\varphi_*(f)=\frac{\partial}{\partial z_n}$, then the nonhomogeneous equation simplifies to $\frac{\partial\tilde u}{\partial z_n}=\tilde b(z)$, where $u(x)=\tilde u(\varphi(x))$ and $b(x)=\tilde b(\varphi(x))$. A solution $\tilde u$ (yielding $u=\tilde u\circ\varphi$) is given by

$$\tilde u(z)=a(z_1,\dots,z_{n-1})+\int_0^{z_n}\tilde b(z_1,\dots,z_{n-1},\varepsilon)\,d\varepsilon.\qquad(E9)$$
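The quadrature (E9) is elementary to carry out symbolically once the equation has been rectified. The snippet below is a small illustration (the right-hand side $\tilde b$ is invented for the example) using sympy.

```python
# Illustration only: b_tilde is example data; a(z1) plays the role of the
# arbitrary function of the remaining variables in (E9).
import sympy as sp

z1, z2, eps = sp.symbols('z1 z2 epsilon')
b_tilde = z1 * sp.exp(z2)            # right-hand side in rectified coordinates (n = 2)
a = sp.Function('a')                 # arbitrary "constant" of integration

u_tilde = a(z1) + sp.integrate(b_tilde.subs(z2, eps), (eps, 0, z2))
print(sp.simplify(u_tilde))                         # a(z1) + z1*(exp(z2) - 1)
print(sp.simplify(sp.diff(u_tilde, z2) - b_tilde))  # 0: the PDE is satisfied
```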

The dynamical system $\dot x=f(x)$ takes, in this case, the canonical form

$$\begin{cases}\dot z_1=0\\ \dot z_2=0\\ \ \vdots\\ \dot z_{n-1}=0\\ \dot z_n=1\end{cases}\qquad(E10)$$

Theorem 1: (Flow-box) Let $f$ be a vector field defined in a neighbourhood of a nonsingular point $x_0\in\mathbb R^n$, that is, $f(x_0)\neq0$. There exists a local change of coordinates $z=\Phi(x)$ in a neighbourhood $U$ of $x_0$ such that $\Phi_*(f)(z)=\frac{\partial}{\partial z_n}$ for all $z\in\Phi(U)$.

The statement and proof of this theorem, as well as its general form, can be found in the literature. The only difficulty in applying the straightening theorem is finding the straightening diffeomorphism, as one needs to solve a system of highly nonlinear partial differential equations:

$$\begin{cases}\dfrac{\partial\Phi_1}{\partial x_1}f_1(x)+\cdots+\dfrac{\partial\Phi_1}{\partial x_n}f_n(x)=0\\[4pt] \dfrac{\partial\Phi_2}{\partial x_1}f_1(x)+\cdots+\dfrac{\partial\Phi_2}{\partial x_n}f_n(x)=0\\ \ \vdots\\ \dfrac{\partial\Phi_{n-1}}{\partial x_1}f_1(x)+\cdots+\dfrac{\partial\Phi_{n-1}}{\partial x_n}f_n(x)=0\\[4pt] \dfrac{\partial\Phi_n}{\partial x_1}f_1(x)+\cdots+\dfrac{\partial\Phi_n}{\partial x_n}f_n(x)=1\end{cases}\qquad(E11)$$

In earlier work [25], we provided a solution to this problem by giving explicit changes of coordinates, which will be recalled below. If $x_0$ is a singular point, that is, $f(x_0)=0$, the notions of linearization, and later of normal form, were introduced by Poincaré. Before we recall those facts, let us remind the reader that dynamical systems form a subclass of a larger class, namely control systems. Indeed, a control system can be interpreted as a parameterized family of dynamical systems $\dot x=F(x,u)$, where for each fixed value of $u$, $F_u:x\mapsto F_u(x)=F(x,u)$ is a vector field. When $u=0$, we recover dynamical systems. Poincaré was the first to address the problem of linearization of dynamical systems around an equilibrium point. He showed that when $\frac{\partial f}{\partial x}(x_0)=F$ is a matrix whose spectrum $\lambda=(\lambda_1,\dots,\lambda_n)$ is not resonant, then new coordinates $z=\varphi(x)$ exist in which the dynamical system takes the linear form $\dot z=Fz$. We recall that a spectrum $\lambda=(\lambda_1,\dots,\lambda_n)$ is called resonant if there are nonnegative integers $m_1,\dots,m_n$ with $m_1+\cdots+m_n\ge2$ such that $m_1\lambda_1+\cdots+m_n\lambda_n=\lambda_j$ for some $1\le j\le n$. He further showed that, when resonances are present, the dynamical system can be put in a normal form

$$\dot z=Fz+\sum_{|m|=2}^{\infty}\Theta_m\,z_1^{m_1}z_2^{m_2}\cdots z_n^{m_n}\qquad(E12)$$

where $\Theta_m\in\mathbb R^n$ is a constant vector whose $j$th component is zero when there is no resonance of order $m$ associated with the eigenvalue $\lambda_j$.
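Checking a spectrum for resonances reduces to a finite search over multi-indices once a maximal order is fixed; the sketch below (the eigenvalues are example data) performs that search.

```python
# Illustration only: lam is a made-up spectrum; lam_2 = 2*lam_1 creates a resonance.
from itertools import product

lam = [1.0, 2.0, -1.0]
max_order = 4                       # search multi-indices with 2 <= |m| <= max_order

resonances = []
for m in product(range(max_order + 1), repeat=len(lam)):
    if 2 <= sum(m) <= max_order:
        s = sum(mi * li for mi, li in zip(m, lam))
        for j, lj in enumerate(lam):
            if abs(s - lj) < 1e-12:          # m_1*lam_1 + ... + m_n*lam_n = lam_j
                resonances.append((m, j + 1))

print(resonances)    # contains, e.g., ((2, 0, 0), 2) since 2*lam_1 = lam_2
```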

Notations: For a vector field $f(x)=(f_1(x),\dots,f_n(x))$ on $\mathbb R^n$ and a function $h$ in the coordinates $x=(x_1,\dots,x_n)$, we denote by

$$L_fh(x)=\frac{\partial h}{\partial x_1}f_1(x)+\frac{\partial h}{\partial x_2}f_2(x)+\cdots+\frac{\partial h}{\partial x_n}f_n(x)\qquad(E13)$$

the Lie derivative of $h$ along the vector field $f$; recursively, we define the Lie derivatives

$$L_f^0h(x)=h(x),\qquad L_f^jh(x)=L_f\bigl(L_f^{j-1}h\bigr)(x),\quad j=1,2,\dots\qquad(E14)$$

For another vector field $g(x)=(g_1(x),\dots,g_n(x))$ on $\mathbb R^n$, we define the Lie bracket $[f,g]$ of the two vector fields as the new vector field

$$[f,g](x)=\bigl(L_fg_1(x)-L_gf_1(x),\ \dots,\ L_fg_n(x)-L_gf_n(x)\bigr)\qquad(E15)$$

and, for simplicity, we denote this vector field by $\mathrm{ad}_fg(x)=[f,g](x)$; recursively, we define

$$\mathrm{ad}_f^0g(x)=g(x),\qquad \mathrm{ad}_f^jg(x)=[f,\mathrm{ad}_f^{j-1}g](x),\quad j=1,2,\dots\qquad(E16)$$
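The Lie derivatives (E13)–(E14) and the Lie brackets (E15)–(E16) are mechanical to compute symbolically. The snippet below is a minimal sympy sketch on made-up planar vector fields (all names are ours).

```python
# Minimal sketch: f, g, h are example data, not taken from the chapter.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xs = sp.Matrix([x1, x2])
f = sp.Matrix([x2, sp.sin(x1)])
g = sp.Matrix([0, 1])
h = x1**2 + x2

def lie_derivative(vec, func):
    """L_vec func = sum_i (d func / d x_i) * vec_i, as in (E13)."""
    return sum(sp.diff(func, xs[i]) * vec[i] for i in range(len(xs)))

def lie_bracket(a, b):
    """[a, b] = (db/dx) a - (da/dx) b, i.e. component-wise L_a b_i - L_b a_i (E15)."""
    return sp.simplify(b.jacobian(xs) * a - a.jacobian(xs) * b)

print(lie_derivative(f, h))                 # 2*x1*x2 + sin(x1)
print(lie_bracket(f, g))                    # ad_f g
print(lie_bracket(f, lie_bracket(f, g)))    # ad_f^2 g
```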

Let $\Phi:\mathbb R^n\to\mathbb R^n$ be a local diffeomorphism with $\Phi(0)=0$, giving rise to new coordinates $z=\Phi(x)$. The vector field $f$ is transported by $\Phi$ into a new vector field, denoted $\bar f(z)\triangleq\Phi_*f(z)$, whose components $\bar f(z)=(\bar f_1(z),\dots,\bar f_n(z))$ are given, for all $1\le j\le n$, by

$$\bar f_j(z)=L_f\Phi_j\bigl(\Phi^{-1}(z)\bigr)=\Bigl(\frac{\partial\Phi_j}{\partial x_1}f_1+\frac{\partial\Phi_j}{\partial x_2}f_2+\cdots+\frac{\partial\Phi_j}{\partial x_n}f_n\Bigr)\bigl(\Phi^{-1}(z)\bigr)\qquad(E17)$$

Below we recall the method we provided in [25] to solve the problem of straightening a vector field around a nonsingular point. Without loss of generality, we will assume the nonsingular point to be the origin $x_0=0$.

Theorem 2: Let $\nu=(\nu_1,\dots,\nu_n)$ be an analytic vector field on $\mathbb R^n$ with $\nu_k(0)\neq0$ for some $1\le k\le n$, and let $\sigma(x)=\frac{1}{\nu_k(x)}$.

  1. Define $z=\Phi(x)$ by its components as follows:

    $$\Phi_j(x)=x_j+\sum_{s=1}^{\infty}(-1)^s\frac{x_k^s}{s!}\,L_{\sigma\nu}^{s-1}(\sigma\nu_j)(x),\quad j\neq k,\qquad \Phi_j(x)=\sum_{s=1}^{\infty}(-1)^{s+1}\frac{x_k^s}{s!}\,L_{\sigma\nu}^{s-1}(\sigma)(x),\quad j=k\qquad(E18)$$

    The local diffeomorphism $\Phi$ satisfies $\Phi_*(\nu)(z)=(0,\dots,0,1,0,\dots,0)^\top\triangleq\frac{\partial}{\partial z_k}$.

  2. The local diffeomorphism $x=\Psi(z)$ whose components are given by

    $$\Psi_j(z)=z_j+\sum_{s=1}^{\infty}\frac{z_k^s}{s!}\Bigl(\sum_{i=0}^{s-1}(-1)^i\binom{s}{i}\,\partial_{z_k}^{\,i}L_{\nu}^{\,s-i-1}(\nu_j)(z)\Bigr),\quad j\neq k,\qquad \Psi_j(z)=\sum_{s=1}^{\infty}\frac{z_k^s}{s!}\Bigl(\sum_{i=0}^{s-1}(-1)^i\binom{s}{i}\,\partial_{z_k}^{\,i}L_{\nu}^{\,s-i-1}(\nu_k)(z)\Bigr),\quad j=k\qquad(E19)$$

is the inverse of $z=\Phi(x)$, that is, $\Phi(\Psi(z))=z$ and $\Psi(\Phi(x))=x$; moreover, $\frac{\partial\Psi}{\partial z_k}(z)=\nu(\Psi(z))$.

The series proposed above are not Taylor series, nor series in the variable $x_k$ (resp. $z_k$) alone. Indeed, the coefficients $L_{\sigma\nu}^{s-1}(\sigma\nu_j)(x)$ and $L_{\sigma\nu}^{s-1}(\sigma)(x)$ are functions that themselves depend on the variable $x_k$ (resp. $z_k$). Above, the notation $\partial_{z_k}^{\,i}h$ means the $i$th derivative of $h$ with respect to the variable $z_k$. We refer to [25] for more details and for the generalization of the Frobenius theorem to the straightening of a set of vector fields, as stated below.
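The series (E18) lends itself to direct symbolic evaluation after truncation. The sketch below (our own illustration; the planar vector field $\nu=(x_2,1)$ and the truncation order are example choices) builds the truncated diffeomorphism and checks that it rectifies $\nu$.

```python
# Sketch of the truncated series (E18); nu, k and the order N are example choices.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xs = [x1, x2]
nu = [x2, sp.Integer(1)]      # analytic vector field with nu_k(0) != 0 for k = 2
k = 1                         # 0-based index of the nonvanishing component
sigma = 1 / nu[k]

def lie(vec, h):
    """Lie derivative of the scalar h along the vector field vec."""
    return sum(sp.diff(h, xs[i]) * vec[i] for i in range(len(xs)))

def lie_power(vec, h, s):
    for _ in range(s):
        h = lie(vec, h)
    return sp.simplify(h)

N = 6                         # truncation order of the series
sigma_nu = [sp.simplify(sigma * c) for c in nu]
Phi = []
for j in range(len(xs)):
    if j != k:
        expr = xs[j] + sum((-1)**s * xs[k]**s / sp.factorial(s)
                           * lie_power(sigma_nu, sigma * nu[j], s - 1)
                           for s in range(1, N + 1))
    else:
        expr = sum((-1)**(s + 1) * xs[k]**s / sp.factorial(s)
                   * lie_power(sigma_nu, sigma, s - 1)
                   for s in range(1, N + 1))
    Phi.append(sp.expand(expr))

print("Phi =", Phi)           # for this example: [x1 - x2**2/2, x2]
# Push-forward check: L_nu Phi_j should give (0, 1), i.e. Phi_*(nu) = d/dz_2.
for j, comp in enumerate(Phi):
    print(f"L_nu Phi_{j + 1} =", sp.simplify(lie(nu, comp)))
```

Here the series terminates after finitely many terms; in general one only obtains a polynomial approximation of the straightening diffeomorphism up to the chosen order.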

Theorem 3: Let $\nu_1(x),\dots,\nu_m(x)$ be a set of analytic vector fields on $\mathbb R^n$ such that the distribution $\mathcal D(x)=\operatorname{span}\{\nu_1(x),\dots,\nu_m(x)\}$ is involutive and of maximal rank $m\le n$ in a neighborhood of the origin. Then there exist an open neighborhood of the origin and a change of coordinates $z=\Phi(x)$ defined on it such that $\Phi_*(\nu_i)(z)=\frac{\partial}{\partial z_i}$ for all $z$ in its image and all $i=1,\dots,m$.

In [25] we proposed a constructive way to find the diffeomorphism $\Phi$ through successive applications of the straightening procedure of Theorem 2.


3. Control systems and feedback linearization

Let us reconsider the control-affine nonlinear system with outputs

$$\Sigma:\quad\begin{cases}\dot x=f(x)+g(x)u=f(x)+g_1(x)u_1+\cdots+g_m(x)u_m, & x\in\mathbb R^n,\ u\in\mathbb R^m\\[2pt] y=h(x)=(h_1(x),\dots,h_p(x))\end{cases}\qquad(E20)$$

The input–output feedback linearization problem, as stated earlier, is to find a new coordinate system $z=\Phi(x)$ and new inputs $v$ under which the system $\Sigma$ has linear dynamics and linear outputs. This problem is directly connected with the notion of relative degree. Indeed, one needs to differentiate the outputs repeatedly until the inputs appear. Formally, if there exists $\gamma_i>0$ such that $L_{g_j}L_f^k h_i(x)=0$ for all $1\le j\le m$ and $0\le k\le\gamma_i-2$, with $L_{g_j}L_f^{\gamma_i-1}h_i(x)\neq0$ for some $j$, we say that $\gamma_i$ is the relative degree of the $i$th output. In other words, $\gamma_i$ is the smallest integer $k$ for which the $k$th derivative $y_i^{(k)}$ of $y_i$ depends explicitly on the input $u$. The set $\{\gamma_1,\dots,\gamma_m\}$ (taking here $p=m$) is called the vector relative degree associated with the outputs of $\Sigma$. It is well known that, taking $z_k^i=L_f^{k-1}h_i(x)$ for $1\le k\le\gamma_i$ and completing the coordinates with $z_{\gamma_i+1}^i,\dots,z_{n_i}^i$, the system can be expressed as $m$ subsystems of the form

$$\begin{cases}\dot z_1^i=z_2^i\\ \dot z_2^i=z_3^i\\ \ \vdots\\ \dot z_{\gamma_i-1}^i=z_{\gamma_i}^i\\ \dot z_{\gamma_i}^i=f_{\gamma_i}^i(z)+g_{\gamma_i}^{i1}(z)v_1+\cdots+g_{\gamma_i}^{im}(z)v_m\\ \ \vdots\\ \dot z_{n_i}^i=f_{n_i}^i(z)+g_{n_i}^{i1}(z)v_1+\cdots+g_{n_i}^{im}(z)v_m\\ y_i=z_1^i\end{cases}\qquad(E21)$$

for 1 ≤ im with n1++nm=m. Thus, the system becomes a connection between a linear and nonlinear systems and this has been known as partial feedback linearization. A necessary and sufficient condition for exact linearization, that is, for a multi-input multi-output system to be transformed into a chain of integrators

$$(BR)\quad\begin{cases}\dot z_1^1=z_2^1\\ \dot z_2^1=z_3^1\\ \ \vdots\\ \dot z_{\gamma_1-1}^1=z_{\gamma_1}^1\\ \dot z_{\gamma_1}^1=v_1\\ y_1=z_1^1\end{cases}\qquad\cdots\qquad\begin{cases}\dot z_1^m=z_2^m\\ \dot z_2^m=z_3^m\\ \ \vdots\\ \dot z_{\gamma_m-1}^m=z_{\gamma_m}^m\\ \dot z_{\gamma_m}^m=v_m\\ y_m=z_1^m\end{cases}\qquad(E22)$$

is that it has a vector relative degree $\{\gamma_1,\dots,\gamma_m\}$ such that $\gamma_1+\cdots+\gamma_m=n$.
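Computing the relative degrees by repeated Lie differentiation is mechanical; the following sketch (the system data and helper names are ours, used only for illustration) returns, for an output $h_i$, the smallest $k$ such that $L_{g_j}L_f^{k-1}h_i\not\equiv0$ for some $j$.

```python
# Illustration only: f, g, hs are made-up single-input data.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = [x1, x2, x3]
f  = [x2, x3**2, sp.Integer(0)]                            # drift
g  = [[sp.Integer(0)], [sp.Integer(0)], [sp.Integer(1)]]   # input columns g_j
hs = [x1]                                                  # outputs h_i

def lie(vec, h):
    return sum(sp.diff(h, xs[i]) * vec[i] for i in range(len(xs)))

def relative_degree(h, max_order=10):
    """Smallest k such that some L_{g_j} L_f^{k-1} h is not identically zero."""
    Lfk = h
    for k in range(1, max_order + 1):
        if any(sp.simplify(lie([row[j] for row in g], Lfk)) != 0
               for j in range(len(g[0]))):
            return k
        Lfk = lie(f, Lfk)
    return None     # no well-defined relative degree up to max_order

print([relative_degree(h) for h in hs])    # [3] for this example
```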

Obviously, different outputs will lead to different cascade systems: a system can be linearizable with respect to some outputs and fail to be linearizable with respect to a different set of outputs. If we consider a control-affine system without outputs, then the linearization problem (Problem 2) is equivalent to solving a system of partial differential equations. Indeed, two affine control systems

$$\Sigma:\quad\dot x=f(x)+g(x)u=f(x)+g_1(x)u_1+\cdots+g_m(x)u_m,\qquad x\in\mathbb R^n,\ u\in\mathbb R^m\qquad(E23)$$

and

$$\bar\Sigma:\quad\dot z=\bar f(z)+\bar g(z)v=\bar f(z)+\bar g_1(z)v_1+\cdots+\bar g_m(z)v_m,\qquad z\in\mathbb R^n,\ v\in\mathbb R^m\qquad(E24)$$

are feedback equivalent via static state transformations z=Φ(x) and feedback u=α(x)+β(x)v if and only if

$$\text{(PDEs)}:\quad\begin{cases}\bar f(\Phi(x))=\dfrac{\partial\Phi}{\partial x}\bigl(f(x)+g(x)\alpha(x)\bigr)\\[6pt] \bar g(\Phi(x))=\dfrac{\partial\Phi}{\partial x}\,g(x)\beta(x)\end{cases}\qquad(E25)$$

In particular, the control-affine system $\Sigma$ is static state feedback equivalent to a controllable linear system if and only if the system of partial differential equations

$$\text{(PDEs)}:\quad\begin{cases}A\Phi(x)=\dfrac{\partial\Phi}{\partial x}\bigl(f(x)+g(x)\alpha(x)\bigr)\\[6pt] B=\dfrac{\partial\Phi}{\partial x}\,g(x)\beta(x)\end{cases}\qquad(E26)$$

is solvable in $\Phi$, $\alpha$, and $\beta$, with $\Phi$ a diffeomorphism around the origin and $\beta$ invertible. A geometric characterization of feedback linearization was obtained by Jakubczyk and Respondek [4] and, independently, by Hunt and Su [5].

Theorem 4: The system $\Sigma$ is feedback equivalent to a controllable linear system $\Lambda$ around an equilibrium point $x_0=0$ if and only if the following two conditions are satisfied:

  1. (F1) the distributions $\mathcal D^j$, for $1\le j\le n-1$, are involutive and of constant rank in a neighborhood of $x_0$,

  2. (F2) $\dim\mathcal D^n(x_0)=n$.

Above, $\mathcal D^j$ stand for the distributions defined by

$$\mathcal D^j(x)=\operatorname{span}\{g_i(x),\ \mathrm{ad}_fg_i(x),\ \dots,\ \mathrm{ad}_f^{j-1}g_i(x),\ 1\le i\le m\}\qquad(E27)$$

and $[\mathcal D^i,\mathcal D^j]$ denotes the distribution spanned by all Lie brackets of vector fields of the two distributions, so that $\mathcal D$ is involutive when $[\mathcal D,\mathcal D]\subseteq\mathcal D$. The condition (F2) is the rank condition, while (F1) is referred to as the involutivity condition.
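Conditions (F1)–(F2) can be tested symbolically by building the distributions $\mathcal D^j$ and checking their ranks and brackets. The sketch below (an illustration on made-up single-input data, not the chapter's code) does exactly that.

```python
# Illustration only: f and g are example data; ranks are computed generically.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = sp.Matrix([x1, x2, x3])
f = sp.Matrix([x2 + x3**2, x3, 0])     # drift
g = sp.Matrix([0, 0, 1])               # input vector field

def bracket(a, b):
    """Lie bracket [a, b] = (db/dx) a - (da/dx) b."""
    return b.jacobian(xs) * a - a.jacobian(xs) * b

# Generators g, ad_f g, ad_f^2 g, ... of the distributions D^j.
ad = [g]
for _ in range(len(xs) - 1):
    ad.append(sp.simplify(bracket(f, ad[-1])))

def rank_of(fields):
    return sp.Matrix.hstack(*fields).rank()

def involutive(fields):
    """D is involutive if every bracket of generators stays in span(D)."""
    r = rank_of(fields)
    for i in range(len(fields)):
        for j in range(i + 1, len(fields)):
            if rank_of(fields + [bracket(fields[i], fields[j])]) > r:
                return False
    return True

for j in range(1, len(xs) + 1):
    D = ad[:j]
    print(f"D^{j}: rank = {rank_of(D)}, involutive = {involutive(D)}")
```

For the example chosen, $\mathcal D^3$ has full rank (so (F2) holds) while $\mathcal D^2$ fails to be involutive (so (F1) fails); such a system is not feedback linearizable and is precisely the kind of candidate for partial linearization treated next.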

Thus, to find the largest linear subsystem, the outputs need not be predefined.

In this paper, we consider only systems without outputs and look for such a largest linear subsystem. First, an affine system $\Sigma:\ \dot x=f(x)+g(x)u$ is said to be partially static state feedback linearizable if there exist a coordinate system $z=(z_1,\dots,z_n)$ and a feedback in which the system takes the form

$$\Lambda_p:\quad\begin{cases}\dfrac{dz^1}{dt}=\tilde f(z^1,z^2)+\tilde g(z^1,z^2)v\\[6pt] \dfrac{dz^2}{dt}=Az^2+Bv\end{cases}\qquad(E28)$$

where $z^1=(z_1,\dots,z_q)$ and $z^2=(z_{q+1},\dots,z_n)$.

Remark 1: Notice that the form above is also equivalent to

$$\begin{cases}\dfrac{dz^1}{dt}=Az^1+Bv\\[6pt] \dfrac{dz^2}{dt}=\tilde f(z^1,z^2)+\tilde g(z^1,z^2)v\end{cases}\qquad(E29)$$

by reordering the variables accordingly. In the sequel, we will mostly refer to the former form. The following result can be found in [17].

Theorem 5: Consider a control-affine system $\Sigma$.

  1. If $\Sigma$ is locally state space equivalent at $x_0$ to a partially linear system $\Lambda_p$, then dim … in a neighbourhood of $x_0$.

  2. Assume that $\Sigma$ satisfies dim … and that dim … in a neighbourhood of $x_0$. Then $\Sigma$ is locally state space equivalent at $x_0$ to a partially linear system $\Lambda_p$ such that the dimension of the linear subsystem is $\dim z^2=n-\rho$; moreover, the linear subsystem is controllable.

We will provide a step-by-step procedure to write the system as a cascade of a nonlinear subsystem and a linear subsystem of highest dimension. Notice that a geometric approach has been used in [14, 16], where the characterization depends on controllability indices associated with certain Lie algebras.


4. Algorithm for partial feedback linearization

We first consider a single-input control system

$$\Sigma:\quad\dot x=f(x)+g(x)u,\qquad x\in\mathbb R^n,\ u\in\mathbb R\qquad(E30)$$

and we assume that its linear approximation $\dot x=Fx+Gu$ is controllable, with $F=\frac{\partial f}{\partial x}(0)$ and $G=g(0)$. Without loss of generality, we can also assume that the pair $(F,G)$ is in Brunovsky canonical form.

Step 0: We apply the Frobenius theorem to find coordinates $y=\varphi(x)$ that rectify the vector field $g$, that is, such that $\varphi_*(g)=(0,\dots,0,1)^\top\triangleq b$, and transform the system into

$$\Sigma:\quad\dot y=\tilde f(y)+bu,\qquad y\in\mathbb R^n,\ u\in\mathbb R.\qquad(E31)$$

Completing this step with the push-forward transformation

$$\begin{cases}z_1=y_1\\ \ \vdots\\ z_{n-1}=y_{n-1}\\ z_n=\tilde f_{n-1}(y)\\[4pt] v=\dfrac{\partial\tilde f_{n-1}}{\partial y_1}\tilde f_1(y)+\cdots+\dfrac{\partial\tilde f_{n-1}}{\partial y_n}\tilde f_n(y)+\dfrac{\partial\tilde f_{n-1}}{\partial y_n}u\end{cases}\qquad(E32)$$

the system is transformed as

$$\Sigma:\quad\dot z=\bar f(z)+bv,\qquad z\in\mathbb R^n,\ v\in\mathbb R\qquad(E33)$$

where

$$\bar f(z)=\begin{pmatrix}\bar f_1(z_1,\dots,z_n)\\ \bar f_2(z_1,\dots,z_n)\\ \vdots\\ \bar f_{n-2}(z_1,\dots,z_n)\\ z_n\\ 0\end{pmatrix}\qquad(E34)$$
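The push-forward (E32) is a routine symbolic computation once $g$ has been rectified. The sketch below carries out Step 0 on a three-state example (the drift is made-up and $g$ is taken already rectified), displaying the structure (E34).

```python
# Illustration only: f_tilde is example data and g is assumed already rectified.
import sympy as sp

y1, y2, y3, u = sp.symbols('y1 y2 y3 u')
f_tilde = sp.Matrix([y2**2, y3 + y1*y2, y1])   # drift after rectification of g
b = sp.Matrix([0, 0, 1])
ydot = f_tilde + b*u

# Push-forward: z1 = y1, z2 = y2, z3 = f_tilde_2(y); v is the new input dz3/dt.
z_of_y = sp.Matrix([y1, y2, f_tilde[1]])
zdot = sp.expand(z_of_y.jacobian(sp.Matrix([y1, y2, y3])) * ydot)

print(zdot)
# First two rows give f_bar_1 = y2**2 and f_bar_2 = y3 + y1*y2 = z3, and the
# last row is the new input v; the drift's last component is 0, as in (E34).
```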

Step 1: We reset the original notation, that is, we replace the variable $z$ by $x$ and $\bar f(z)$ by $f(x)$. Then we decompose $f(x)$ as follows:

$$f(x)=\begin{pmatrix}f_1(x)\\ f_2(x)\\ \vdots\\ f_{n-2}(x)\\ x_n\\ 0\end{pmatrix}=\begin{pmatrix}f_1(x_1,\dots,x_{n-1})\\ f_2(x_1,\dots,x_{n-1})\\ \vdots\\ f_{n-2}(x_1,\dots,x_{n-1})\\ 0\\ 0\end{pmatrix}+x_n\begin{pmatrix}g_1(x_1,\dots,x_{n-1})\\ g_2(x_1,\dots,x_{n-1})\\ \vdots\\ g_{n-2}(x_1,\dots,x_{n-1})\\ 1\\ 0\end{pmatrix}+x_n^2\begin{pmatrix}G_1^n(x)\\ G_2^n(x)\\ \vdots\\ G_{n-2}^n(x)\\ 0\\ 0\end{pmatrix}\qquad(E35)$$

If $G_{n-2}^n(x_1,\dots,x_n)\not\equiv0$, then the algorithm stops; this means that the dimension of the largest linear subsystem is 2. If $G_{n-2}^n(x_1,\dots,x_n)\equiv0$, we define $\rho_n$ as the largest $j$ such that $G_j^n(x_1,\dots,x_n)\not\equiv0$; if $G_j^n(x_1,\dots,x_n)\equiv0$ for all $1\le j\le n-2$, we put $\rho_n=0$. We then apply the Frobenius theorem to straighten the vector field

$$g(x)=\begin{pmatrix}g_1(x_1,\dots,x_{n-1})\\ g_2(x_1,\dots,x_{n-1})\\ \vdots\\ g_{n-2}(x_1,\dots,x_{n-1})\\ 1\\ 0\end{pmatrix}\qquad(E36)$$

by defining coordinates $y=\varphi(x)$ such that $\varphi_*(g)=(0,\dots,0,1,0)^\top\triangleq Ab$. Notice that, because $g$ depends only on the variables $x_1,\dots,x_{n-1}$, so do the first $(n-1)$ components of the diffeomorphism $\varphi$. Thus, the system is transformed into

$$\Sigma:\quad\dot y=\underbrace{\begin{pmatrix}f_1(y_1,\dots,y_{n-1})\\ \vdots\\ f_{n-2}(y_1,\dots,y_{n-1})\\ 0\\ 0\end{pmatrix}+y_n\begin{pmatrix}0\\ \vdots\\ 0\\ 1\\ 0\end{pmatrix}+y_n^2\begin{pmatrix}G_1^n(y_1,\dots,y_n)\\ \vdots\\ G_{n-2}^n(y_1,\dots,y_n)\\ 0\\ 0\end{pmatrix}}_{\textstyle\tilde f(y)}+\begin{pmatrix}0\\ \vdots\\ 0\\ 0\\ 1\end{pmatrix}u\qquad(E37)$$

We thus apply the push-forward transformation

$$\begin{cases}z_1=y_1\\ \ \vdots\\ z_{n-2}=y_{n-2}\\ z_{n-1}=f_{n-2}(y_1,\dots,y_{n-1})\\[4pt] z_n=\dfrac{\partial z_{n-1}}{\partial y_1}\tilde f_1(y)+\cdots+\dfrac{\partial z_{n-1}}{\partial y_{n-2}}\tilde f_{n-2}(y)+\dfrac{\partial z_{n-1}}{\partial y_{n-1}}y_n\\[6pt] v=\dfrac{\partial z_n}{\partial y_1}\tilde f_1(y)+\cdots+\dfrac{\partial z_n}{\partial y_{n-1}}\tilde f_{n-1}(y)+\dfrac{\partial z_n}{\partial y_n}u\end{cases}\qquad(E38)$$

to bring the system into the form

$$\Sigma:\quad\dot z=\begin{pmatrix}f_1(z_1,\dots,z_{n-1})\\ \vdots\\ f_{n-3}(z_1,\dots,z_{n-1})\\ z_{n-1}\\ 0\\ 0\end{pmatrix}+z_n\begin{pmatrix}F_1^n(z_1,\dots,z_n)\\ \vdots\\ F_{\rho_n}^n(z_1,\dots,z_n)\\ 0\\ \vdots\\ 0\end{pmatrix}+z_n\begin{pmatrix}0\\ \vdots\\ 0\\ 1\\ 0\end{pmatrix}+\begin{pmatrix}0\\ \vdots\\ 0\\ 0\\ 1\end{pmatrix}v,\qquad z\in\mathbb R^n,\ v\in\mathbb R\qquad(E39)$$

or, in more compact form,

$$\Sigma:\quad\dot z=f(z_1,\dots,z_{n-1})+z_nF^n(z)+Abz_n+bv,\qquad z\in\mathbb R^n,\ v\in\mathbb R\qquad(E40)$$

with $\rho_n=\max\{\,j,\ 1\le j\le n-2\ |\ F_j^n(z_1,\dots,z_n)\not\equiv0\,\}$. Moreover, and more importantly, we also have

$$F^n(0)=0\quad\text{and}\quad\frac{\partial F_{\rho_n}^n}{\partial z_n}\not\equiv0.\qquad(E41)$$

Remark 2

  1. Notice that the vector field $z_nF^n(z)$ contains all nonlinearities, including terms that are linear in $z_n$ but whose coefficients depend on the variables $z_1,\dots,z_{n-1}$.

  2. The Frobenius theorem applied to the vector field $g$ could have been restricted by taking the first $\rho_n$ components of the vector field $g$ equal to zero. This is due to the fact that, by applying the push-forward transformation above, we regenerate those terms, as $y_n$ depends on all the variables $z_1,\dots,z_n$.

Step 2: We reset the original notation, that is, we replace the variable $z$ by $x$. Then we decompose $f(x)$ as follows:

$$f(x)=\begin{pmatrix}f_1(x_1,\dots,x_{n-1})\\ \vdots\\ f_{n-3}(x_1,\dots,x_{n-1})\\ x_{n-1}\\ 0\\ 0\end{pmatrix}=\begin{pmatrix}f_1(x_1,\dots,x_{n-2})\\ \vdots\\ f_{n-3}(x_1,\dots,x_{n-2})\\ 0\\ 0\\ 0\end{pmatrix}+x_{n-1}\begin{pmatrix}g_1(x_1,\dots,x_{n-2})\\ \vdots\\ g_{n-3}(x_1,\dots,x_{n-2})\\ 1\\ 0\\ 0\end{pmatrix}+x_{n-1}^2\begin{pmatrix}G_1^{n-1}(x_1,\dots,x_{n-1})\\ \vdots\\ G_{n-3}^{n-1}(x_1,\dots,x_{n-1})\\ 0\\ 0\\ 0\end{pmatrix}\qquad(E42)$$

If $G_{n-3}^{n-1}(x_1,\dots,x_{n-1})\not\equiv0$, then the dimension of the largest linear subsystem is less than or equal to 3. We denote by $\rho_{n-1}$ the largest $j$ such that $G_j^{n-1}(x_1,\dots,x_{n-1})\not\equiv0$; if $G_j^{n-1}\equiv0$ for all $1\le j\le n-3$, we put $\rho_{n-1}=0$. We define $\bar\rho_{n-1}=\max\{\rho_{n-1},\rho_n\}$ as the updated largest component index that cannot be cancelled or, equivalently, such that the dimension of the largest linear subsystem is less than or equal to $n-\bar\rho_{n-1}$.

We then apply the Frobenius theorem to straighten the vector field

$$g(x)=\begin{pmatrix}g_1(x_1,\dots,x_{n-2})\\ \vdots\\ g_{n-3}(x_1,\dots,x_{n-2})\\ 1\\ 0\\ 0\end{pmatrix}\qquad(E43)$$

by defining coordinates $y=\varphi(x)$ such that $\varphi_*(g)=(0,\dots,0,1,0,0)^\top\triangleq A^2b$. Notice that, because $g$ depends only on the variables $x_1,\dots,x_{n-2}$, so do the first $(n-2)$ components of the diffeomorphism $\varphi$. Thus, the system is transformed into

$$\Sigma:\quad\dot y=\underbrace{\begin{pmatrix}f_1(y_1,\dots,y_{n-2})\\ \vdots\\ f_{n-3}(y_1,\dots,y_{n-2})\\ 0\\ 0\\ 0\end{pmatrix}+y_{n-1}\begin{pmatrix}0\\ \vdots\\ 0\\ 1\\ 0\\ 0\end{pmatrix}+y_{n-1}^2\begin{pmatrix}G_1^{n-1}(y_1,\dots,y_{n-1})\\ \vdots\\ G_{n-3}^{n-1}(y_1,\dots,y_{n-1})\\ 0\\ 0\\ 0\end{pmatrix}+y_n\begin{pmatrix}F_1^n(y_1,\dots,y_n)\\ \vdots\\ F_{\rho_n}^n(y_1,\dots,y_n)\\ 0\\ \vdots\\ 0\end{pmatrix}}_{\textstyle\tilde f(y)}+y_n\begin{pmatrix}0\\ \vdots\\ 0\\ 0\\ 1\\ 0\end{pmatrix}+\begin{pmatrix}0\\ \vdots\\ 0\\ 0\\ 0\\ 1\end{pmatrix}u,\qquad y\in\mathbb R^n,\ u\in\mathbb R\qquad(E44)$$

We thus apply the push-forward transformation

$$\begin{cases}z_1=y_1\\ \ \vdots\\ z_{n-3}=y_{n-3}\\ z_{n-2}=f_{n-3}(y_1,\dots,y_{n-2})\\[4pt] z_{n-1}=\dfrac{\partial z_{n-2}}{\partial y_1}\tilde f_1(y)+\cdots+\dfrac{\partial z_{n-2}}{\partial y_{n-3}}\tilde f_{n-3}(y)+\dfrac{\partial z_{n-2}}{\partial y_{n-2}}y_{n-1}\\[6pt] z_n=\dfrac{\partial z_{n-1}}{\partial y_1}\tilde f_1(y)+\cdots+\dfrac{\partial z_{n-1}}{\partial y_{n-2}}\tilde f_{n-2}(y)+\dfrac{\partial z_{n-1}}{\partial y_{n-1}}y_n\\[6pt] v=\dfrac{\partial z_n}{\partial y_1}\tilde f_1(y)+\cdots+\dfrac{\partial z_n}{\partial y_{n-1}}\tilde f_{n-1}(y)+\dfrac{\partial z_n}{\partial y_n}u\end{cases}\qquad(E45)$$

to bring the system into the form

$$\Sigma:\quad\dot z=\begin{pmatrix}f_1(z_1,\dots,z_{n-2})\\ \vdots\\ f_{n-4}(z_1,\dots,z_{n-2})\\ z_{n-2}\\ 0\\ 0\\ 0\end{pmatrix}+z_{n-1}\begin{pmatrix}F_1^{n-1}(z_1,\dots,z_{n-1})\\ \vdots\\ F_{\bar\rho_{n-1}}^{n-1}(z_1,\dots,z_{n-1})\\ 0\\ \vdots\\ 0\end{pmatrix}+z_n\begin{pmatrix}F_1^{n}(z_1,\dots,z_{n})\\ \vdots\\ F_{\bar\rho_{n-1}}^{n}(z_1,\dots,z_{n})\\ 0\\ \vdots\\ 0\end{pmatrix}+z_{n-1}\begin{pmatrix}0\\ \vdots\\ 0\\ 1\\ 0\\ 0\end{pmatrix}+z_n\begin{pmatrix}0\\ \vdots\\ 0\\ 0\\ 1\\ 0\end{pmatrix}+\begin{pmatrix}0\\ \vdots\\ 0\\ 0\\ 0\\ 1\end{pmatrix}v,\qquad z\in\mathbb R^n,\ v\in\mathbb R\qquad(E46)$$

or, in more compact form,

$$\Sigma:\quad\dot z=f(z_1,\dots,z_{n-2})+z_{n-1}F^{n-1}(z_1,\dots,z_{n-1})+z_nF^n(z)+A^2bz_{n-1}+Abz_n+bv,\qquad(E47)$$

with $F^{n-1}(0)=F^n(0)=0$ and either $\dfrac{\partial F_{\bar\rho_{n-1}}^n}{\partial z_n}\not\equiv0$ or $\dfrac{\partial F_{\bar\rho_{n-1}}^{n-1}}{\partial z_{n-1}}\not\equiv0$.

General step: Let us assume that the system has been transformed such that it takes the form

$$\Sigma:\quad\dot x=f(x_1,\dots,x_k)+\sum_{i=k}^{n-1}\Bigl(x_{i+1}F^{i+1}(x_1,\dots,x_{i+1})+A^{\,n-i}b\,x_{i+1}\Bigr)+bu,\qquad x\in\mathbb R^n,\ u\in\mathbb R\qquad(E48)$$

where $F^{i+1}(0)=0$ for all $k\le i\le n-1$ and $\dfrac{\partial F_{\rho}^{i+1}}{\partial x_{i+1}}\not\equiv0$ for some $i$, $k\le i\le n-1$, with $\rho$ the index of the largest nonzero component among those of the vector fields $F^{k+1},\dots,F^n$. We will write

$$f(x_1,\dots,x_k)=\begin{pmatrix}f_1(x_1,\dots,x_k)\\ f_2(x_1,\dots,x_k)\\ \vdots\\ f_{k-2}(x_1,\dots,x_k)\\ x_k\\ 0\\ \vdots\\ 0\end{pmatrix}\quad\text{and}\quad F^{i+1}(x)=\begin{pmatrix}F_1^{i+1}(x_1,\dots,x_{i+1})\\ F_2^{i+1}(x_1,\dots,x_{i+1})\\ \vdots\\ F_{\rho}^{i+1}(x_1,\dots,x_{i+1})\\ 0\\ \vdots\\ 0\end{pmatrix}\qquad(E49)$$

Then, we decompose the vector field f as follows

$$f(x)=\begin{pmatrix}f_1(x_1,\dots,x_{k-1})\\ f_2(x_1,\dots,x_{k-1})\\ \vdots\\ f_{k-2}(x_1,\dots,x_{k-1})\\ 0\\ \vdots\\ 0\end{pmatrix}+x_k\begin{pmatrix}g_1(x_1,\dots,x_{k-1})\\ g_2(x_1,\dots,x_{k-1})\\ \vdots\\ g_{k-2}(x_1,\dots,x_{k-1})\\ 1\\ 0\\ \vdots\\ 0\end{pmatrix}+x_k^2\begin{pmatrix}G_1(x_1,\dots,x_k)\\ G_2(x_1,\dots,x_k)\\ \vdots\\ G_{k-2}(x_1,\dots,x_k)\\ 0\\ \vdots\\ 0\end{pmatrix}\qquad(E50)$$

If the index of the largest nonzero component of the vector field $G(x)$ is less than or equal to $\rho$, then move to the next step. If that index is greater than $\rho$, then update $\rho$ to this index, apply the Frobenius theorem to straighten the vector field $g(x)$, and follow with a push-forward transformation. Whenever the value $\rho=n-2$ is reached, the algorithm stops; otherwise, it continues until the last step is reached.


5. Examples

Figure 1. Forces acting on a VTOL aircraft.

In this section, we consider a few examples to illustrate the partial feedback linearization algorithm.

Example 1: Consider a simplified model of a VTOL with dynamics [29] (see Figure 1).

$$\begin{cases}\ddot x=\sin(\theta)\dfrac{T}{M}+\cos(\theta)\dfrac{2\sin(\alpha)}{M}F\\[6pt] \ddot y=\cos(\theta)\dfrac{T}{M}+\sin(\theta)\dfrac{2\sin(\alpha)}{M}F-g\\[6pt] \ddot\theta=\dfrac{2l}{J}\cos(\alpha)F\end{cases}\qquad(E51)$$

where $M$, $J$, $l$, and $g$ denote the mass, the moment of inertia, the distance between the wingtips, and the gravitational acceleration, respectively. The control inputs are the thrust $T$ and the rolling moment due to the force $F$, whose direction forms a fixed angle $\alpha$ with the horizontal body axis. The position of the center of mass and the roll angle with respect to the horizon are $(x,y)$ and $\theta$, while $(\dot x,\dot y)$ and $\dot\theta$ stand for their respective velocities.

Let $x_1=x$, $x_2=\dot x$, $x_3=\theta$, $x_4=\dot\theta$, $x_5=y$, $x_6=\dot y$, with control inputs

$$u_1=\frac{2l\cos\alpha}{J}\,F\qquad(E52)$$

and

$$u_2=\cos(\theta)\frac{T}{M}+\sin(\theta)\frac{2\sin(\alpha)}{M}F-g\qquad(E53)$$

The system can then be rewritten in the form

$$\Sigma:\quad\dot x=f(x)+g_1(x)u_1+g_2(x)u_2,\qquad x=(x_1,\dots,x_6)\in\mathbb R^6\qquad(E54)$$

with

$$f(x)=\begin{pmatrix}x_2\\ g\tan x_3\\ x_4\\ 0\\ x_6\\ 0\end{pmatrix},\qquad g_1(x)=\begin{pmatrix}0\\ \eta(x_3)\\ 0\\ 1\\ 0\\ 0\end{pmatrix}\quad\text{and}\quad g_2(x)=\begin{pmatrix}0\\ \tan x_3\\ 0\\ 0\\ 0\\ 1\end{pmatrix}\qquad(E55)$$

where

$$\eta(x_3)=\frac{J\tan\alpha}{Ml}\cdot\frac{\cos^2x_3-\sin^2x_3}{\cos x_3}\qquad(E56)$$

We showed in [25] that the change of coordinates

$$z=\varphi(x):\quad\begin{cases}z_1=x_1\\ z_2=x_2-x_4\,\eta(x_3)-x_6\tan x_3\\ z_3=x_3\\ z_4=x_4\\ z_5=x_5\\ z_6=x_6\end{cases}\qquad(E57)$$

transformed the system into $\bar\Sigma:\ \dot z=\bar f(z)+\bar g_1(z)u_1+\bar g_2(z)u_2$, where

$$\bar f(z)=\begin{pmatrix}z_2+z_4\,\eta(z_3)+z_6\tan z_3\\ g\tan z_3-\eta'(z_3)z_4^2-z_6z_4\sec^2 z_3\\ z_4\\ 0\\ z_6\\ 0\end{pmatrix},\qquad \bar g_1(z)=\begin{pmatrix}0\\ 0\\ 0\\ 1\\ 0\\ 0\end{pmatrix}\quad\text{and}\quad \bar g_2(z)=\begin{pmatrix}0\\ 0\\ 0\\ 0\\ 0\\ 1\end{pmatrix}\qquad(E58)$$
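The push-forward above can be checked symbolically; in the sketch below only the script is ours, while the vector fields and the coordinate change are those of (E55) and (E57). It verifies that $\varphi$ maps $g_1$ and $g_2$ to the constant vector fields $\partial/\partial z_4$ and $\partial/\partial z_6$ and reproduces the drift (E58).

```python
# Verification sketch for (E55)-(E58); eta is kept as an abstract function.
import sympy as sp

x1, x2, x3, x4, x5, x6, grav = sp.symbols('x1 x2 x3 x4 x5 x6 g')
xs = sp.Matrix([x1, x2, x3, x4, x5, x6])
eta = sp.Function('eta')

f  = sp.Matrix([x2, grav*sp.tan(x3), x4, 0, x6, 0])
g1 = sp.Matrix([0, eta(x3), 0, 1, 0, 0])
g2 = sp.Matrix([0, sp.tan(x3), 0, 0, 0, 1])

phi = sp.Matrix([x1, x2 - x4*eta(x3) - x6*sp.tan(x3), x3, x4, x5, x6])
J = phi.jacobian(xs)

print(sp.simplify(J*g1))   # should be the unit vector e4
print(sp.simplify(J*g2))   # should be the unit vector e6
print(sp.simplify(J*f))
# The drift is expressed here in the x variables; renaming x_i to z_i for i != 2
# (and using z_2 = x_2 - x_4*eta(x_3) - x_6*tan(x_3)) gives (E58), the second row
# being g*tan(x3) - eta'(x3)*x4**2 - x4*x6*(1 + tan(x3)**2).
```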

The distribution generated by $\bar g_1$ and $\bar g_2$ is involutive and, in the new coordinates, constant. A simple feedback

$$v_1=x_1-2x_3+2x_5-2x_6-2x_6^2+u_1+u_2\quad\text{and}\quad v_2=x_4-x_5+x_4x_5+u_2\qquad(E59)$$

transforms the system into one with

$$f(x)=\begin{pmatrix}x_1+x_2-x_4^2+2x_4x_5\\ x_3-x_6^2\\ 0\\ x_5\\ 0\\ x_4-x_6+x_5^2\end{pmatrix},\qquad g_1(x)=\begin{pmatrix}0\\ 0\\ 1\\ 0\\ 0\\ 0\end{pmatrix}\quad\text{and}\quad g_2(x)=\begin{pmatrix}0\\ 0\\ 0\\ 0\\ 1\\ 0\end{pmatrix}\qquad(E60)$$

We then decompose the vector field f as

$$f(x)=\begin{pmatrix}x_1+x_2-x_4^2+2x_4x_5\\ x_3-x_6^2\\ 0\\ x_5\\ 0\\ x_4-x_6+x_5^2\end{pmatrix}=x_3\begin{pmatrix}0\\ 1\\ 0\\ 0\\ 0\\ 0\end{pmatrix}+x_5\begin{pmatrix}2x_4\\ 0\\ 0\\ 1\\ 0\\ 0\end{pmatrix}+\begin{pmatrix}x_1+x_2-x_4^2\\ -x_6^2\\ 0\\ 0\\ 0\\ x_4-x_6+x_5^2\end{pmatrix}\qquad(E61)$$

Here, we rectify the two vector fields appearing in the decomposition (the factors of $x_3$ and $x_5$) and find the change of coordinates

$$\begin{cases}y_1=x_1-x_4^2\\ y_2=x_2\\ y_3=x_3\\ y_4=x_4\\ y_5=x_5\\ y_6=x_6\end{cases}\qquad(E62)$$

to transform the system into

$$\begin{cases}\dot y_1=y_1+y_2\\ \dot y_2=y_3-y_6^2\\ \dot y_3=u_1\\ \dot y_4=y_5\\ \dot y_5=u_2\\ \dot y_6=y_4-y_6+y_5^2\end{cases}\qquad(E63)$$

If we apply the push-forward transformation given by $z_3=y_3-y_6^2$, $z_j=y_j$ for $j\neq3$, together with the feedback $v_1=u_1-2y_6(y_4-y_6+y_5^2)$, $v_2=u_2$, we take the system into

$$\Sigma:\quad\begin{cases}\dot z_1=z_1+z_2\\ \dot z_2=z_3\\ \dot z_3=v_1\\ \dot z_4=z_5\\ \dot z_5=v_2\\ \dot z_6=z_4-z_6+z_5^2\end{cases}\qquad(E64)$$

with ρ = 4 being the dimension of the largest linear subsystem.

References

  1. P. Brunovsky. A classification of linear controllable systems. Kybernetika. 1970;6(3):173–188.
  2. A. J. Krener. On the equivalence of control systems and the linearization of nonlinear systems. SIAM Journal on Control. 1973;11:670–676.
  3. R. W. Brockett. Feedback invariants for nonlinear systems. In: Proceedings of the 7th IFAC World Congress; Helsinki. 1978. p. 1115–1120.
  4. B. Jakubczyk and W. Respondek. On linearization of control systems. Bulletin de l'Académie Polonaise des Sciences, Série des Sciences Mathématiques. 1980;28:517–522.
  5. L. R. Hunt and R. Su. Linear equivalents of nonlinear time varying systems. In: Proceedings of the Mathematical Theory of Networks & Systems; August 5–7; Santa Monica, CA, USA. 1981. p. 119–123.
  6. B. Charlet, J. Levine and R. Marino. Sufficient conditions for dynamic state feedback linearization. SIAM Journal on Control and Optimization. 1991;29:38–57.
  7. B. Charlet, J. Levine and R. Marino. On dynamic feedback linearization. Systems & Control Letters. 1989;13:143–151.
  8. Z. Sun and S. S. Ge. Nonregular feedback linearization: a nonsmooth approach. IEEE Transactions on Automatic Control. 2003;48(10):1772–1776.
  9. S.-J. Li and W. Respondek. Orbital feedback linearization of multi-input control systems. International Journal of Robust and Nonlinear Control. 2015;25:1352–1378.
  10. C. Nielsen and M. Maggiore. On local transverse feedback linearization. SIAM Journal on Control and Optimization. 2008;47(5):2227–2250.
  11. A. Isidori and A. J. Krener. On feedback equivalence of nonlinear systems. Systems & Control Letters. 1982;2(2):118–121.
  12. A. Isidori, C. Gori-Giorgi, A. J. Krener, and S. Monaco. Nonlinear decoupling via feedback: A differential geometric approach. IEEE Transactions on Automatic Control. 1981;26(2):331–345.
  13. L. R. Hunt, R. Su, and G. Meyer. Design for multi-input nonlinear systems. In: R. W. Brockett, R. S. Millman, and H. Sussmann, editors. Differential Geometric Control Theory. Boston, USA: Birkhäuser; 1983. p. 268–298.
  14. R. Marino. On the largest feedback linearizable subsystem. Systems & Control Letters. 1986;6(5):345–351.
  15. A. J. Krener, A. Isidori, and W. Respondek. Partial and robust linearization by feedback. In: Proceedings of the 22nd IEEE Conference on Decision and Control; December 14–16; San Antonio, Texas. 1983. p. 126–130.
  16. Z. Xu and L. R. Hunt. On the largest input-output linearizable subsystem. IEEE Transactions on Automatic Control. 1996;41(1):128–132.
  17. W. Respondek. Partial linearization, decompositions and fibre linear systems. In: C. I. Byrnes and A. Lindquist, editors. Theory and Applications of Nonlinear Control Systems. Amsterdam and New York: North-Holland; 1986. p. 137–154.
  18. M. W. Spong. Partial feedback linearization of underactuated mechanical systems. In: Proceedings of IROS'94 (International Conference on Intelligent Robots and Systems); September 12–14; Munich, Germany. 1994. p. 314–321.
  19. K. Pathak, J. Franch, and S. K. Agrawal. Velocity and position control of a wheeled inverted pendulum by partial feedback linearization. IEEE Transactions on Robotics. 2005;21(3):505–513.
  20. Z. Xu and L. R. Hunt. On the largest input-output linearizable subsystem. IEEE Transactions on Automatic Control. 1996;41(1):128–132.
  21. Ph. Mullhaupt. Quotient submanifolds for static feedback linearization. Systems & Control Letters. 2006;55:549–557.
  22. I. A. Tall. State and feedback linearizations of single-input control systems. Systems & Control Letters. 2010;59(7):429–441.
  23. I. A. Tall. State linearization of control systems: an explicit algorithm. In: Proceedings of the Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference; December 15–18; Shanghai, China. 2009. p. 7448–7453.
  24. I. A. Tall. Explicit feedback linearization of control systems. In: Proceedings of the Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference; December 15–18; Shanghai, China. 2009. p. 7454–7459.
  25. I. A. Tall. Flow-box theorem and beyond. African Diaspora Journal of Mathematics. 2011;11(1):75–102. Available from: www.math-res-pub.org/adjm
  26. I. A. Tall. Explicit state linearization of multi-input control systems. In: Proceedings of the 49th IEEE Conference on Decision and Control; December 15–17; Atlanta, GA, USA. 2010. p. 7075–7080.
  27. I. A. Tall. Multi-input control systems: explicit feedback linearization. In: Proceedings of the 49th IEEE Conference on Decision and Control; December 15–17; Atlanta, GA, USA. 2010. p. 5378–5383.
  28. I. A. Tall. A new approach to exact and partial feedback linearizations. Journal of Robust and Nonlinear Control. 2012;1(1):1–32.
  29. A. Serrani, A. Isidori, C. I. Byrnes, and L. Marconi. Recent advances in output regulation of nonlinear systems. In: A. Isidori, F. Lamnabhi-Lagarrigue, and W. Respondek, editors. Nonlinear Control in the Year 2000, Vol. 2. Paris, France: Springer LNCIS; 2000. p. 409–419.
