Open access peer-reviewed chapter

On Parametrizations of State Feedbacks and Static Output Feedbacks and Their Applications

Written By

Yossi Peretz

Reviewed: October 11th, 2021 Published: December 2nd, 2021

DOI: 10.5772/intechopen.101176


Abstract

In this chapter, we provide an explicit free parametrization of all the stabilizing static state feedbacks for continuous-time Linear-Time-Invariant (LTI) systems, which are given in their state-space representation. The parametrization of the set of all the stabilizing static output feedbacks is next derived by imposing a linear constraint on the stabilizing static state feedbacks of a related system. The parametrizations are utilized for optimal control problems and for pole-placement and exact pole-assignment problems.

Keywords

  • control systems
  • continuous-time systems
  • state-space representation
  • feedback stabilization
  • static state feedback
  • static output feedback
  • Lyapunov equation
  • parametrization
  • optimization
  • optimal control
  • H∞-control
  • H2-control
  • linear-quadratic regulators
  • pole assignment
  • pole placement
  • robust control

1. Introduction

The problem of stabilization by static output feedback (SOF) has great practical importance, for several reasons: SOF controllers are simple, cheap, and reliable, and their implementation is simple and direct. Since in practical applications full-state measurements are not always available, the application of stabilizing state feedback (SF) is not always possible. Obviously, in practical applications the entries of the needed SOFs are bounded, with bounds known in advance, but unfortunately, the problem of SOFs with interval-constrained entries is NP-hard (see [1, 2]). Exact pole assignment and simultaneous stabilization via SOF, as well as stabilization via structured SOFs, are also NP-hard problems (see [2, 3], resp.). These problems become even harder when optimal SOFs are sought, where the optimality notion can be the sparsity of the controller (see [4]) (e.g., for reliability purposes of networked control systems (NCSs)), the cost or energy consumption of the controller (which is related to various norm bounds on the controller), or the H∞-norm, the H2-norm, or the linear-quadratic regulator (LQR) functional of the closed loop. The practical meaning of the NP-hardness of the aforementioned problems is that they cannot be formulated as convex problems (e.g., through LMIs or SDPs) and cannot have efficient algorithms (under the widespread belief that P ≠ NP). Thus, one has to compromise on the exactness (which might affect the feasibility of the solution) or on the optimality of the solution. Therefore, one has to utilize the specific structure of the given problem in order to describe effectively the set of all feasible solutions, by reducing the number of variables and constraints to the minimum, for the purpose of increasing the efficiency and accuracy of the available algorithms. This is the aim of the proposed method.

Several formulations and related algorithms were introduced in the literature for the constrained SOF and other hard control problems. The iterated linear matrix inequalities (ILMI), bilinear matrix inequalities (BMI), and semi-definite programming (SDP) approaches for the constrained SOF problem, for the simultaneous stabilizing SOF problem, and for robust control via SOF (with related algorithms) were studied in [5, 6, 7, 8, 9, 10, 11]. The problem of pole placement via SOF and the problem of robust pole placement via static feedback were studied in [12, 13]. In [14, 15], the method of alternating projections was utilized to solve the problems of rank minimization and pole placement via SOFs, respectively. Probabilistic and randomized methods for the constrained SOF problem and robust stabilization via SOFs (among other hard problems) were discussed in [16, 17, 18, 19]. In [20], the problem of minimal-gain SOF was solved efficiently by the randomized method. A non-smooth analysis approach for H∞ synthesis and for the SOF problem is given in [21, 22], respectively. A MATLAB® library for multiobjective robust control problems based on the non-smooth analysis approach was introduced in [23]. All these references (and many more references not brought here) show the significance of the constrained SOF problem to control applications.

Many problems can be reduced to the constrained SOF problem, including the minimal-degree dynamic-feedback problem, robust or decentralized stability via static feedback, the reduced-order H∞ filter problem, global minimization of the LQR functional via SOF, and the design problem of optimal PID controllers (see [2, 10, 24, 25, 26, 27], respectively). It is worth mentioning [28], where the alternating direction method of multipliers was utilized to alternate between optimizing the sparsity of the state feedback matrix and optimizing the closed-loop H2-norm, where the sparsity measure was introduced as a penalty term, without any pre-assumed knowledge about the sparsity structure of the controller. The method of augmented Lagrangians for optimal structured static feedbacks was considered in [29], where it is assumed that the structure is known in advance (otherwise, one would have to solve a combinatorial problem). The computation overhead of all the aforementioned methods could be reduced significantly if a good parametrization of all the SOFs of the given system could be found, where a parametrization can be called "good" if it takes into account the structure of the given specific system and if it separates well between free and dependent parameters, thus resulting in a minimal set of nonlinear nonconvex inequalities/equations that need to be solved.

In [30], a parametrization of all the SFs and SOFs of Linear-Time-Invariant (LTI) continuous-time systems is achieved by using a characterization of all the (marginally) stable matrices as dissipative Hamiltonian matrices, leading to a high-performance sequential semi-definite programming algorithm for the minimal-gain SOF problem. The method proposed there can also be applied to LTI discrete-time systems by adding semi-definite conditions for placing the closed-loop eigenvalues in the unit disk. A new parametrization for SOF control of linear parameter-varying (LPV) discrete-time systems, with guaranteed ℓ2-gain performance, is provided in [31]. The parametrization there is given in terms of an infinite set of LMIs that becomes finite if some structure on the parameter-dependent matrices is assumed (e.g., an affine dependency). The H2-norm guaranteed-performance SOF control for hidden Markov jump linear systems (HMJLS) is studied in [32], where the SOFs are parameterized via convex optimization with LMI constraints, under the assumptions of full-rank sensor matrices and an efficient and accurate Markov chain state estimator. In [33], an iterative LMI algorithm is proposed for the SOF problem for LTI continuous-time negative-imaginary (NI) systems with a given H∞ norm-bound on the closed loop, based on decoupling the dependencies between the SOF and the Lyapunov certificate matrix.

When solving an optimization problem, it is important to have a convenient parametrization for the set of feasible solutions. Otherwise, one needs to use the probabilistic method (i.e., the "generate and check" method), which is seriously doomed to the "curse of dimensionality" (see [16]). In [13], a closed form of all the stabilizing state feedbacks is proved (up to a set of measure 0), for the purpose of exact pole assignment, where the location errors are optimized by lowering the condition number of the similarity matrix, and the controller performance is optimized by minimizing its Frobenius norm. The parametrization in [13] is based on the assumptions that the input-to-state matrix B has full rank and that at least one real state feedback leading to a diagonalizable closed-loop matrix exists, where a necessary condition for the existence of such a feedback is that the multiplicity of any assigned eigenvalue is at most rank(B). In this context, it is worth mentioning [34], in which a parametrization of all the exact pole-assignment state feedbacks is given, under the assumption that the set of needed closed-loop poles contains a sufficient number of real eigenvalues (which poses no problem if pole placement is of concern, where it is generally assumed that the region is symmetric with respect to the real axis and contains a real-axis segment together with its neighborhood). The results of [34] and of the current chapter are based on a controllability recursive structure that was discovered in [35].

In this chapter, using the aforementioned controllability recursive structure, we introduce a parametrization of the set of all stabilizing SOFs for continuous-time LTI systems with no other assumptions on the given system (for discrete-time LTI systems, the parametrization is much more involved and will be treated in a future work). As opposed to the notable works [36, 37, 38], where for the parametrization one still needs to solve some LMIs in order to get the Lyapunov matrix, here we give an explicit recursive formula for the Lyapunov matrix and for the feedback in the case of SF, and a constrained form for the Lyapunov matrix and for the feedback in the case of SOF.

The rest of the chapter is organized as follows:

In Section 2, we set notions and give some basic useful lemmas, and in Section 3, we introduce the parametrization of the set of all stabilizing static-state feedbacks for LTI continuous-time systems. In Section 4, we introduce the constrained parametrization of the set of all stabilizing SOFs for LTI continuous-time systems. The effectiveness of the method is shown on a real-life system. Section 5 is based on [34] and is devoted to the problem of exact pole assignment by SF, for LTI continuous-time or discrete-time systems. The effectiveness of the method is shown on a real-life system. Finally, in Section 6, we conclude with some remarks and intentions for a future work.


2. Preliminaries

By ℂ we denote the complex field and by ℂ⁻ the open left half-plane. For z ∈ ℂ we denote by ℜ(z) its real part and by ℑ(z) its imaginary part. For a square matrix Z, we denote by σ(Z) the spectrum of Z. For a matrix Z ∈ ℝ^{p×q}, we denote by Z^T its transpose, and by z_{i,j} or Z_{i,j} its (i,j)'th element or block element. A square matrix Z in the continuous-time context (in the discrete-time context) is said to be (asymptotically) stable if any eigenvalue λ ∈ σ(Z) satisfies ℜ(λ) < 0, i.e., λ ∈ ℂ⁻ (satisfies |λ| < 1, i.e., λ ∈ 𝔻, where 𝔻 is the open unit disk).

Consider a continuous-time system in the form:

(d/dt) x(t) = A x(t) + B u(t),  y(t) = C x(t),   (1)

where A ∈ ℝ^{n×n}, B ∈ ℝ^{n×m}, C ∈ ℝ^{r×n}, and x, u, y are the state, the input, and the measurement, respectively. Assuming that the state x is fully accessible and fully available for feedback, we define u = −K₀x to be the state feedback (SF). When the state is not fully accessible or not fully available for feedback but the measurement y is available for feedback, we define u = −Ky to be the static output feedback (SOF). The problems that we consider here are the following:

  • (SF-PAR): How can one parameterize the set of all K₀ ∈ ℝ^{m×n} such that the closed loop A − BK₀ is stable, and what is the best parametrization (in terms of a minimal number of parameters and a minimal set of constraints)?

  • (SOF-PAR): How can one parameterize the set of all K ∈ ℝ^{m×r} such that the closed loop A − BKC is stable, and what is the best parametrization?

The parametrizations will be used to achieve goals and performance keys for the system other than stability, stability being the basic key that defines feasibility.

A square matrix Z is said to be non-negative (denoted Z ≥ 0) if Z^T = Z and v^T Z v ≥ 0 for any vector v. A non-negative matrix Z is said to be strictly non-negative (denoted Z > 0) if v^T Z v > 0 for any vector v ≠ 0. For two square matrices Z, W, we write Z ≥ W (Z > W) if Z − W ≥ 0 (respectively, if Z − W > 0). For a matrix Z ∈ ℝ^{p×q}, we denote by Z^+ the Moore-Penrose pseudo-inverse (see [39, 40] for definition and properties). By L_Z, R_Z we denote the orthogonal projections L_Z = I_q − Z^+Z and R_Z = I_p − ZZ^+, respectively, where I_s denotes the identity matrix of size s×s. Note that Z^+Z and ZZ^+ (as well as L_Z and R_Z) are symmetric and orthogonally diagonalizable with eigenvalues from {0, 1}. By diag and bdiag we denote diagonal and block-diagonal matrices, respectively.
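As a quick numerical illustration of these projections (a Python/NumPy sketch with a randomly generated rank-deficient Z; the chapter's own computations were done in MATLAB), one can check that L_Z and R_Z are indeed symmetric idempotent matrices with eigenvalues in {0, 1}:

```python
import numpy as np

rng = np.random.default_rng(0)
# a deliberately rank-deficient 5x3 matrix (rank 2), so that L_Z, R_Z != 0
Z = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))
Zp = np.linalg.pinv(Z)                # Moore-Penrose pseudo-inverse Z^+

L_Z = np.eye(3) - Zp @ Z              # L_Z = I_q - Z^+ Z
R_Z = np.eye(5) - Z @ Zp              # R_Z = I_p - Z Z^+

# both are symmetric orthogonal projections (idempotent)
sym_ok = np.allclose(L_Z, L_Z.T) and np.allclose(R_Z, R_Z.T)
idem_ok = np.allclose(L_Z @ L_Z, L_Z) and np.allclose(R_Z @ R_Z, R_Z)
eigs = np.linalg.eigvalsh(R_Z)        # eigenvalues are only 0's and 1's
```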

A system triplet (A, B, C) is SOF stabilizable (or just stabilizable) if and only if there exist K and P > 0 such that

EP + PE^T = −R,   (2)

for some given R > 0 (R = I can always be chosen), where E = A − BKC. For the "if" direction, note that (2) implies the negativity of the real part of any eigenvalue of E, implying that the closed loop E is stable. For the "only-if" direction, under the assumption that E = A − BKC is stable for some given K, one can show that P ≜ ∫₀^∞ exp(Et) R exp(E^T t) dt is well defined, satisfies P^T = P and P > 0, and is the unique solution of (2).
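This characterization translates directly into a numerical check (a hedged Python/SciPy illustration with a randomly generated stable E and R = I; the chapter itself uses MATLAB):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
# shift the spectrum into the open left half-plane to obtain a stable E
E = M - (np.max(np.linalg.eigvals(M).real) + 0.5) * np.eye(n)

R = np.eye(n)                              # R = I can always be chosen
P = solve_continuous_lyapunov(E, -R)       # solves E P + P E^T = -R
min_eig = np.min(np.linalg.eigvalsh((P + P.T) / 2))
```

The solver returns the unique solution, which is symmetric and strictly positive definite, as the "only-if" direction asserts.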

Note that the set of all SOFs is given by K = B^+XC^+ + L_B S + T R_C, where S, T are any m×r matrices and X is any n×n matrix such that E = A − BB^+XC^+C is stable. Thus, one can optimize K by utilizing the freedom in S, T without changing the closed-loop performance achieved by X. This characterization of the feasibility space shows its effectiveness in proving theorems, as will be seen along the chapter (see also [20, 35]). We also conclude that (A, B, C) is stabilizable if and only if (A, BB^+, C^+C) is stabilizable.
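The invariance of the closed loop with respect to S and T follows from B L_B = 0 and R_C C = 0; a small randomized check (Python/NumPy sketch, with deliberately rank-deficient B and C so that L_B, R_C ≠ 0) is:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 5, 2, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1)) @ rng.standard_normal((1, m))   # rank 1
C = rng.standard_normal((r, 1)) @ rng.standard_normal((1, n))   # rank 1
Bp, Cp = np.linalg.pinv(B), np.linalg.pinv(C)
L_B = np.eye(m) - Bp @ B
R_C = np.eye(r) - C @ Cp

X = rng.standard_normal((n, n))
E_ref = A - B @ Bp @ X @ Cp @ C            # closed loop determined by X alone
S = rng.standard_normal((m, r))
T = rng.standard_normal((m, r))
K = Bp @ X @ Cp + L_B @ S + T @ R_C        # any member of the SOF family
E = A - B @ K @ C                          # unchanged: B L_B = 0, R_C C = 0
```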

In the sequel, we make use of the following lemma (see [39]):

Lemma 2.1 The matrix equation AX = B has solutions if and only if AA^+B = B (equivalently, R_A B = 0). When the condition is satisfied, the set of all solutions is given by

X = A^+B + L_A Z,   (3)

where Z is an arbitrary matrix. Moreover, we have ‖X‖²_F = ‖A^+B‖²_F + ‖L_A Z‖²_F, implying that the minimal Frobenius-norm solution is X = A^+B.

Similarly, the equation YA = B has solutions if and only if BA^+A = B (equivalently, B L_A = 0). When the condition is satisfied, the set of all solutions is given by

Y = BA^+ + W R_A,   (4)

where W is an arbitrary matrix. Moreover, we have ‖Y‖²_F = ‖BA^+‖²_F + ‖W R_A‖²_F, implying that the minimal Frobenius-norm solution is Y = BA^+.
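Lemma 2.1 is easy to verify numerically (a Python/NumPy sketch; A is a random rank-deficient matrix and B is built inside the column space of A so that AX = B is solvable):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # rank 2
Ap = np.linalg.pinv(A)
B = A @ rng.standard_normal((3, 2))          # guarantees solvability of A X = B

solvable = np.allclose(A @ Ap @ B, B)        # the condition A A^+ B = B
X_min = Ap @ B                               # minimal Frobenius-norm solution
L_A = np.eye(3) - Ap @ A
Z = rng.standard_normal((3, 2))
X_other = X_min + L_A @ Z                    # another solution, per (3)
```

Since A L_A = 0, every X of the form (3) solves the equation, and the Frobenius norms satisfy the Pythagorean identity stated in the lemma.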


3. Parametrization of all the static state feedbacks

We start with the following lemma known as the projection lemma (see [41], Theorem 3.1):

Lemma 3.1 The pair (A, BB^+) is stabilizable if and only if there exists P > 0 such that

R_B (I + AP + PA^T) R_B = 0.   (5)

When (5) is satisfied, X is a stabilizing SF if and only if X is a solution of

BB^+XP + PX^TBB^+ = I + AP + PA^T.   (6)

Moreover, one specific solution of (6) is given by

X₀ = (I + AP + PA^T)(I − ½BB^+)P^{-1}.   (7)

Similarly, (A^T, C^+C) is stabilizable if and only if there exists Q > 0 such that

L_C (I + A^TQ + QA) L_C = 0.   (8)

When (8) is satisfied, Y^T is a stabilizing SF (i.e., A^T − C^+CY^T or, equivalently, A − YC^+C is stable) if and only if Y is a solution of

C^+CY^TQ + QYC^+C = I + A^TQ + QA.   (9)

One specific solution of (9) is given by

Y₀ = Q^{-1}(I − ½C^+C)(I + A^TQ + QA).   (10)

Remark 3.1 The explicit formulas (7) and (10) are our small contribution to the projection lemma, Lemma 3.1. Unfortunately, we do not have such explicit formulas for LTI discrete-time systems.
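For instance, when B is square and invertible we have BB^+ = I and R_B = 0, so (5) holds for every P > 0 and (7) collapses to X₀ = ½(I + AP + PA^T)P^{-1}. A Python/NumPy sketch of this special case (random data, with the free choice P = I):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))     # invertible (generically), so B B^+ = I
BBp = B @ np.linalg.pinv(B)
P = np.eye(n)                       # any P > 0 satisfies (5) since R_B = 0

M = np.eye(n) + A @ P + P @ A.T
X0 = M @ (np.eye(n) - 0.5 * BBp) @ np.linalg.inv(P)   # formula (7)
K0 = np.linalg.pinv(B) @ X0         # stabilizing SF K = B^+ X0
E = A - B @ K0                      # closed loop A - B B^+ X0 = A - X0
max_re = np.max(np.linalg.eigvals(E).real)
```

Here X₀ solves (6), and the closed loop satisfies EP + PE^T = −I, hence is stable.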

In order to describe the set of all solutions for (6) and (9), we need the following lemma that can be proved easily:

Lemma 3.2 Let P > 0, Q > 0. Then, the set of all solutions of

ZP + PZ^T = 0,   (11)

is given by Z = WP^{-1}, where W^T = −W.

Similarly, the set of all solutions of

Z^TQ + QZ = 0,   (12)

is given by Z = Q^{-1}V, where V^T = −V.
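A one-line numerical check of the first claim (Python/NumPy sketch, with a random P > 0 and a random skew-symmetric W):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
Q0 = rng.standard_normal((n, n))
P = Q0 @ Q0.T + n * np.eye(n)       # a strictly positive definite P
W = rng.standard_normal((n, n))
W = W - W.T                         # skew-symmetric: W^T = -W
Z = W @ np.linalg.inv(P)            # candidate solution of (11)
residual = Z @ P + P @ Z.T          # = W + W^T = 0
```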

The following theorem describes the set of all solutions for (6) and (9), using the controllers (7) and (10):

Theorem 3.1 Let P > 0 satisfy (5) and let X₀ be given by (7). Then, X is a solution of (6) if and only if

X = X₀ + WP^{-1} + R_B L,   (13)

where W satisfies W^T = −W, R_B W = 0, and L is arbitrary.

Similarly, let Q > 0 satisfy (8) and let Y₀ be given by (10). Then, Y is a solution of (9) if and only if

Y = Y₀ + Q^{-1}V + M L_C,   (14)

where V satisfies V^T = −V, V L_C = 0, and M is arbitrary.

Proof:

Assume that X is a solution of (6). Since X₀ is also a solution of (6), it follows that BB^+(X − X₀)P + P(X − X₀)^TBB^+ = 0. Let Z = BB^+(X − X₀). Then, R_B Z = 0 and ZP + PZ^T = 0. Lemma 3.2 implies that Z = WP^{-1}, where W^T = −W, and therefore R_B W = 0. We conclude that X − X₀ = WP^{-1} + R_B L for some L (namely, L = X − X₀).

Conversely, let X be given by (13) and let Z = WP^{-1}. Then, BB^+(X − X₀) = BB^+Z = Z, since BB^+R_B = 0 and since R_B Z = 0. Now, ZP + PZ^T = 0 implies that

BB^+(X − X₀)P + P(X − X₀)^TBB^+ = 0,

from which we conclude that X satisfies (6), since X₀ satisfies (6). The second claim is proved similarly. ∎

In the following, we describe the set 𝒫 of all matrices P > 0 satisfying (5). Note that in Theorem 3.1 the existence of P > 0 satisfying (5) is guaranteed by the assumption that (A, BB^+) is stabilizable, as a result of Lemma 3.1. Let P ∈ 𝒫 and let

X₀ = (I + AP + PA^T)(I − ½BB^+)P^{-1},
W arbitrary such that W^T = −W, R_B W = 0,
X = X₀ + WP^{-1} + R_B L, where L is arbitrary,
K = B^+X + L_B F, where F is arbitrary.   (15)

Let 𝒳(P) denote the set of all matrices X satisfying (15) for a fixed P ∈ 𝒫, and let 𝒦(P) denote the set of all matrices K satisfying (15) for a fixed P ∈ 𝒫. Note that for a fixed P ∈ 𝒫, the set 𝒳(P) is convex (actually affine), and ∪_{P∈𝒫} 𝒳(P) contains all the stabilizing X parameters of the stabilizable pair (A, BB^+). Finally, ∪_{P∈𝒫} 𝒦(P) contains all the stabilizing SFs K of the stabilizable pair (A, B).

For a stabilizable pair (A^T, C^T), let 𝒬 be the set of all matrices Q > 0 satisfying (8), and let

Y₀ = Q^{-1}(I − ½C^+C)(I + A^TQ + QA),
V arbitrary such that V^T = −V, V L_C = 0,
Y = Y₀ + Q^{-1}V + M L_C, where M is arbitrary,
K = YC^+ + G R_C, where G is arbitrary.   (16)

Let 𝒴(Q) denote the set of all matrices Y satisfying (16) for a fixed Q ∈ 𝒬, and let 𝒦(Q) denote the set of all matrices K satisfying (16) for a fixed Q ∈ 𝒬. Then, ∪_{Q∈𝒬} 𝒦(Q) contains all the stabilizing SFs K of the stabilizable pair (A^T, C^T).

In the following, we assume (without loss of generality, see Remark 4.2) that (A, BB^+) is controllable. Under this assumption, we recursively (going downwards) define a sequence of sub-systems of the given system (A, BB^+). Since BB^+ is a symmetric matrix (with eigenvalues from the set {0, 1}), it is diagonalizable by an orthogonal matrix. Let U denote an orthogonal matrix such that

B̂ = U^T BB^+ U = bdiag(I_k, 0),   (17)

where k = rank(B) = rank(BB^+) ≥ 1, since (A, B) is controllable. Let Â = U^T A U = [Â_{1,1}, Â_{1,2}; Â_{2,1}, Â_{2,2}] be partitioned accordingly. Let U₀ = U and let A₀ = A, B₀ = B, n₀ = n, k₀ = rank(B₀). Similarly, let U₁ be an orthogonal matrix such that U₁^T B₁B₁^+ U₁ = bdiag(I_{k₁}, 0), where B₁ = Â_{2,1}. Let A₁ = Â_{2,2}, n₁ = n₀ − k₀, k₁ = rank(B₁). Then, (A₁, B₁) is controllable since (A₀, B₀) is controllable (see [35] and Lemma 5.1 in the following).

Recursively, assume that the pair (Aᵢ, Bᵢ) was defined and is controllable. Let Uᵢ be an orthogonal matrix such that B̂ᵢ = Uᵢ^T BᵢBᵢ^+ Uᵢ = bdiag(I_{kᵢ}, 0), where kᵢ ≥ 1 (since (Aᵢ, Bᵢ) is controllable). Let Âᵢ = Uᵢ^T Aᵢ Uᵢ = [Âᵢ_{1,1}, Âᵢ_{1,2}; Âᵢ_{2,1}, Âᵢ_{2,2}] be partitioned accordingly, with sizes kᵢ×kᵢ and (nᵢ−kᵢ)×(nᵢ−kᵢ) of the main diagonal blocks. Let A_{i+1} = Âᵢ_{2,2}, B_{i+1} = Âᵢ_{2,1}, n_{i+1} = nᵢ − kᵢ, k_{i+1} = rank(B_{i+1}). Then, (A_{i+1}, B_{i+1}) is controllable. We stop the recursion when BᵢBᵢ^+ = I_{kᵢ} for some i = b (i.e., the base case, in which also k_b = n_b).
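The downward recursion can be sketched as follows (a Python/NumPy illustration on a randomly generated pair, not the chapter's MATLAB code; the orthogonal Uᵢ is obtained by orthogonally diagonalizing the projection BᵢBᵢ^+ and sorting its {0,1}-eigenvalues):

```python
import numpy as np

def reduce_once(A, B):
    """One downward step: returns U_i, A_{i+1}, B_{i+1} and k_i = rank(B_i)."""
    BBp = B @ np.linalg.pinv(B)
    w, V = np.linalg.eigh(BBp)            # projection: eigenvalues in {0, 1}
    U = V[:, np.argsort(-w)]              # U^T B B^+ U = bdiag(I_k, 0)
    k = int(round(w.sum()))               # k = rank(B)
    Ah = U.T @ A @ U
    return U, Ah[k:, k:], Ah[k:, :k], k   # A_{i+1} = Ah_{2,2}, B_{i+1} = Ah_{2,1}

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 2))
sizes = []
Ai, Bi = A, B
while not np.allclose(Bi @ np.linalg.pinv(Bi), np.eye(Ai.shape[0]), atol=1e-8):
    _, Ai, Bi, k = reduce_once(Ai, Bi)
    sizes.append((Ai.shape[0], k))        # (n_{i+1}, k_i) at each step
```

For a generic 5-state, 2-input pair, the recursion produces sub-systems of sizes 3 and 1 and then stops, since the last input-to-state matrix has full row rank.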

Now, we go upwards and define the Lyapunov matrices and the related SFs of the sub-systems. For the base case i = b, let P_b > 0 be arbitrary (note that it is a free parameter!). Let

X₀ᵇ = ½(I_{n_b} + A_b P_b + P_b A_b^T)P_b^{-1},
W_b arbitrary such that W_b^T = −W_b,
X_b = X₀ᵇ + W_b P_b^{-1},
K_b = B_b^+X_b + L_{B_b} F_b, where F_b is arbitrary,   (18)

and note that R_{B_b} = 0 in the base case. Now, it can be checked that E_b = A_b − B_b K_b = A_b − X_b is stable. We therefore have a parametrization of 𝒦_b(P_b) through an arbitrary P_b > 0.

Let 𝒫_{i+1} denote the set of all P_{i+1} > 0 satisfying

R_{B_{i+1}} (I_{n_{i+1}} + A_{i+1}P_{i+1} + P_{i+1}A_{i+1}^T) R_{B_{i+1}} = 0,   (19)

and assume that 𝒦_{i+1}(P_{i+1}) was parameterized through P_{i+1} > 0 ranging in the set 𝒫_{i+1}, as defined by (19). Similarly, let 𝒫ᵢ denote the set of all Pᵢ > 0 satisfying

R_{Bᵢ} (I_{nᵢ} + AᵢPᵢ + PᵢAᵢ^T) R_{Bᵢ} = 0,   (20)

and assume that 𝒦ᵢ(Pᵢ) was parameterized through Pᵢ > 0 ranging in the set 𝒫ᵢ, as defined by (20).

Now, we need to characterize the matrices Pᵢ > 0 belonging to the set 𝒫ᵢ. Multiplying (20) from the left by Uᵢ^T and from the right by Uᵢ, we get

R_{B̂ᵢ} (I_{nᵢ} + Âᵢ P̂ᵢ + P̂ᵢ Âᵢ^T) R_{B̂ᵢ} = 0,

where R_{B̂ᵢ} = bdiag(0, I_{nᵢ−kᵢ}) and P̂ᵢ = Uᵢ^T Pᵢ Uᵢ = [P̂ᵢ_{1,1}, P̂ᵢ_{1,2}; P̂ᵢ_{1,2}^T, P̂ᵢ_{2,2}] is partitioned accordingly. The condition (20) is therefore equivalent to

I_{nᵢ−kᵢ} + (Âᵢ_{2,2} + Âᵢ_{2,1} P̂ᵢ_{1,2} P̂ᵢ_{2,2}^{-1}) P̂ᵢ_{2,2} + P̂ᵢ_{2,2} (Âᵢ_{2,2} + Âᵢ_{2,1} P̂ᵢ_{1,2} P̂ᵢ_{2,2}^{-1})^T = 0,   (21)

which is equivalent to

I_{nᵢ−kᵢ} + (A_{i+1} + B_{i+1} P̂ᵢ_{1,2} P̂ᵢ_{2,2}^{-1}) P̂ᵢ_{2,2} + P̂ᵢ_{2,2} (A_{i+1} + B_{i+1} P̂ᵢ_{1,2} P̂ᵢ_{2,2}^{-1})^T = 0.   (22)

Let P_{i+1} ∈ 𝒫_{i+1} and let K_{i+1} ∈ 𝒦_{i+1}(P_{i+1}). Set P̂ᵢ_{2,2} ≜ P_{i+1} and set P̂ᵢ_{1,2} ≜ −K_{i+1} P_{i+1}. Now, since P̂ᵢ_{2,2} = P_{i+1} > 0, (22) implies that the system

A_{i+1} + B_{i+1} P̂ᵢ_{1,2} P̂ᵢ_{2,2}^{-1} = A_{i+1} − B_{i+1} K_{i+1},   (23)

is stable. Now,

P̂ᵢ = [P̂ᵢ_{1,1}, −K_{i+1} P_{i+1}; −P_{i+1} K_{i+1}^T, P_{i+1}],   (24)

and we need to define P̂ᵢ_{1,1} in order to complete P̂ᵢ to a strictly non-negative matrix. Since

[P̂ᵢ_{1,1}, −K_{i+1} P_{i+1}; −P_{i+1} K_{i+1}^T, P_{i+1}] = [I_{kᵢ}, −K_{i+1}; 0, I_{nᵢ−kᵢ}] · bdiag(P̂ᵢ_{1,1} − K_{i+1} P_{i+1} K_{i+1}^T, P_{i+1}) · [I_{kᵢ}, 0; −K_{i+1}^T, I_{nᵢ−kᵢ}],

it follows that P̂ᵢ > 0 if and only if P̂ᵢ_{1,1} − K_{i+1} P_{i+1} K_{i+1}^T > 0, or equivalently, if and only if P̂ᵢ_{1,1} = ΔP̂ᵢ_{1,1} + K_{i+1} P_{i+1} K_{i+1}^T, where ΔP̂ᵢ_{1,1} is an arbitrary strictly non-negative matrix (a free parameter!).

Conversely, if Pᵢ > 0 satisfies (20), then (23) is stable and thus

K_{i+1} ≜ −P̂ᵢ_{1,2} P̂ᵢ_{2,2}^{-1} ∈ 𝒦_{i+1}(R_{i+1}),

for some R_{i+1} ∈ 𝒫_{i+1}. But since K_{i+1} ∈ 𝒦_{i+1}(R_{i+1}) if and only if

I_{nᵢ−kᵢ} + (A_{i+1} − B_{i+1} K_{i+1}) R_{i+1} + R_{i+1} (A_{i+1} − B_{i+1} K_{i+1})^T = 0,   (25)

since the last equation has a unique strictly non-negative solution, and since P̂ᵢ_{2,2} satisfies this equation, it follows that R_{i+1} = P̂ᵢ_{2,2}. Let P_{i+1} = P̂ᵢ_{2,2}. Then, K_{i+1} ∈ 𝒦_{i+1}(P_{i+1}), and since P̂ᵢ_{1,2} = −K_{i+1} P_{i+1}, it follows that P̂ᵢ has the form (24). Thus, P̂ᵢ_{1,1} = ΔP̂ᵢ_{1,1} + K_{i+1} P_{i+1} K_{i+1}^T, where ΔP̂ᵢ_{1,1} > 0 is arbitrary (a free parameter!) and

P̂ᵢ = Uᵢ^T Pᵢ Uᵢ = [ΔP̂ᵢ_{1,1} + K_{i+1} P_{i+1} K_{i+1}^T, −K_{i+1} P_{i+1}; −P_{i+1} K_{i+1}^T, P_{i+1}].   (26)

Therefore, 𝒫ᵢ is the set of all Pᵢ > 0 such that P̂ᵢ = Uᵢ^T Pᵢ Uᵢ is given by (26). We thus have a parametrization of all Pᵢ > 0 satisfying (20). Specifically, 𝒫₀ is the set of all P₀ > 0 satisfying (5).

Now, let Pᵢ ∈ 𝒫ᵢ and let

X₀ⁱ = (I_{nᵢ} + AᵢPᵢ + PᵢAᵢ^T)(I_{nᵢ} − ½BᵢBᵢ^+)Pᵢ^{-1},
Wᵢ arbitrary such that Wᵢ^T = −Wᵢ, R_{Bᵢ} Wᵢ = 0,
Xᵢ = X₀ⁱ + WᵢPᵢ^{-1} + R_{Bᵢ} Lᵢ, where Lᵢ is arbitrary,
Kᵢ = Bᵢ^+Xᵢ + L_{Bᵢ} Fᵢ, where Fᵢ is arbitrary.   (27)

Then, it can be checked that Eᵢ = Aᵢ − BᵢKᵢ = Aᵢ − BᵢBᵢ^+Xᵢ is stable. We therefore have a parametrization of 𝒦ᵢ(Pᵢ) through Pᵢ ∈ 𝒫ᵢ. We conclude the discussion above with the following:

Theorem 3.2 Let (A, B) be a controllable pair. Then, in the above notations, for i = b−1, …, 0, Pᵢ > 0 satisfies (20) if and only if P̂ᵢ = Uᵢ^T Pᵢ Uᵢ has the structure (26), where ΔP̂ᵢ_{1,1} > 0 is arbitrary (a free parameter), K_{i+1} ∈ 𝒦_{i+1}(P_{i+1}), P_b > 0 is arbitrary (a free parameter), and 𝒦_b(P_b) is given by (18). Moreover, 𝒦ᵢ(Pᵢ) for i = b−1, …, 0 is given by (27).
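The whole upward construction of Theorem 3.2 can be sketched as a short recursive routine (a Python/NumPy illustration, not the chapter's MATLAB implementation; the free parameters are fixed to simple values, P_b = I, ΔP̂_{1,1} = I, Wᵢ = Lᵢ = Fᵢ = 0, and the system data are random):

```python
import numpy as np

def stabilizing_sf(A, B, tol=1e-8):
    """Return (K, P) with A - BK stable and (A - BK)P + P(A - BK)^T = -I,
    following the recursion of Theorem 3.2 with simple free parameters."""
    n = A.shape[0]
    I = np.eye(n)
    Bp = np.linalg.pinv(B)
    BBp = B @ Bp
    if np.allclose(BBp, I, atol=tol):            # base case: B_b B_b^+ = I
        P = I.copy()                             # free parameter P_b > 0
        X = 0.5 * (I + A @ P + P @ A.T) @ np.linalg.inv(P)   # from (18)
        return Bp @ X, P
    w, V = np.linalg.eigh(BBp)                   # orthogonally diagonalize BB^+
    U = V[:, np.argsort(-w)]                     # U^T BB^+ U = bdiag(I_k, 0)
    k = int(round(w.sum()))
    Ah = U.T @ A @ U
    K1, P1 = stabilizing_sf(Ah[k:, k:], Ah[k:, :k], tol)     # sub-system
    dP = np.eye(k)                               # free parameter dP_hat > 0
    Ph = np.block([[dP + K1 @ P1 @ K1.T, -K1 @ P1],
                   [-P1 @ K1.T,          P1]])   # the structure (26)
    P = U @ Ph @ U.T                             # P satisfies (20)
    X = (I + A @ P + P @ A.T) @ (I - 0.5 * BBp) @ np.linalg.inv(P)  # X0 of (27)
    return Bp @ X, P

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 2))
K, P = stabilizing_sf(A, B)
E = A - B @ K                                    # closed loop
max_re = np.max(np.linalg.eigvals(E).real)
```

The returned pair satisfies the Lyapunov identity EP + PE^T = −I with P > 0, certifying stability of the closed loop.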

Similarly to the discussion above, relating to (A^T, C^+C) and defining sub-systems for j = 0, …, c, we have a parametrization of all Q_j > 0 satisfying (8) for the related sub-system; specifically, 𝒬₀ is the set of all Q₀ > 0 satisfying (8). The parametrizations of all the stabilizing SFs of (A, BB^+) and (A^T, C^+C) are given in the following:

Corollary 3.1 Let (A, BB^+) be a given controllable pair. Then, the set of all stabilizing SFs of (A, BB^+) is given by X = X₀ + WP^{-1} + R_B L, where

X₀ = (I + AP + PA^T)(I − ½BB^+)P^{-1},

where L is arbitrary, W satisfies W^T = −W, R_B W = 0, and P > 0 satisfies

R_B (I + AP + PA^T) R_B = 0,

i.e., P ∈ 𝒫₀.

Similarly, let (A^T, C^+C) be a given controllable pair. Then, the set of all stabilizing SFs of (A^T, C^+C) is given by Y = Y₀ + Q^{-1}V + M L_C, where

Y₀ = Q^{-1}(I − ½C^+C)(I + A^TQ + QA),

where M is arbitrary, V satisfies V^T = −V, V L_C = 0, and Q > 0 satisfies

L_C (I + A^TQ + QA) L_C = 0,

i.e., Q ∈ 𝒬₀.


4. Parametrizations of all the static output feedbacks

In this section, we give two parametrizations for the set of all the stabilizing SOFs. We start with the following lemma, which was extensively used in [20]:

Lemma 4.1 A system (A, B, C) is stabilizable if and only if (A, B) and (A^T, C^T) are stabilizable and there exist matrices X, Y ∈ ℝ^{n×n} such that A − BB^+X and A − YC^+C are stable and BB^+X = YC^+C. When the conditions hold, the set of all stabilizing SOFs related to the chosen matrices X, Y is given by K(X) = B^+XC^+ + L_B S + T R_C or by K(Y) = B^+YC^+ + L_B F + H R_C, respectively, where S, T, F, H are any m×r matrices. The closed-loop matrix is given by E = A − BK(X)C = A − BB^+X = A − YC^+C = A − BK(Y)C.

Remark 4.1 Under the hypotheses of Corollary 3.1, note that X = X₀ + WP^{-1} + R_B L and Y = Y₀ + Q^{-1}V + M L_C satisfy BB^+X = YC^+C if and only if BB^+X₀ + WP^{-1} = Y₀C^+C + Q^{-1}V, since BB^+W = W, BB^+R_B = 0, VC^+C = V, L_C C^+C = 0. Moreover, this condition can be simplified to (meaning that it does not include matrix inverses)

Q BB^+ (I + AP + PA^T)(I − ½BB^+) + QW = (I − ½C^+C)(I + A^TQ + QA) C^+C P + VP.   (28)

We can state now the first parametrization for the set of all the stabilizing SOFs:

Corollary 4.1 Let (A, B, C) be a given system triplet. Assume that (A, B), (A^T, C^T) are controllable. Then, the system has a stabilizing static output feedback if and only if there exist P, Q > 0 and W, V such that

R_B (I + AP + PA^T) R_B = 0, i.e., P ∈ 𝒫₀,
L_C (I + A^TQ + QA) L_C = 0, i.e., Q ∈ 𝒬₀,
W^T = −W, R_B W = 0,
V^T = −V, V L_C = 0,
Q BB^+ (I + AP + PA^T)(I − ½BB^+) + QW = (I − ½C^+C)(I + A^TQ + QA) C^+C P + VP.

In this case, A − BKC is stable if and only if

K = K(X) = B^+XC^+ + L_B S + T R_C,
X = X₀ + WP^{-1} + R_B L,
X₀ = (I + AP + PA^T)(I − ½BB^+)P^{-1},

where S, T, L are arbitrary.

Similarly, A − BKC is stable if and only if

K = K(Y) = B^+YC^+ + L_B F + H R_C,
Y = Y₀ + Q^{-1}V + M L_C,
Y₀ = Q^{-1}(I − ½C^+C)(I + A^TQ + QA),

where F, H, M are arbitrary.

We conclude this section with a second SOF parametrization:

Corollary 4.2 Let (A, B) and (A^T, C^T) be controllable pairs. Then, A − BKC is stable if and only if there exists K₀ ∈ 𝒦₀(P₀) for some P₀ ∈ 𝒫₀, such that K₀ L_C = 0. In this case, the set of all K's such that A − BKC is stable is given by K = K₀C^+ + G R_C, where K₀ ∈ 𝒦₀(P₀) and G is arbitrary.

Proof: If there exists K₀ ∈ 𝒦₀(P₀) such that K₀ L_C = 0 for some P₀ ∈ 𝒫₀, then K₀ = K₀C^+C. Since K₀ ∈ 𝒦₀(P₀), it follows that A − BK₀ is stable. Thus, for K = K₀C^+ we get that A − BKC is stable.

Conversely, if A − BKC is stable for some K, then for K₀ = KC we have K₀ L_C = 0, and since A − BK₀ is stable, Theorem 3.2 implies that there exists P₀ ∈ 𝒫₀ such that K₀ ∈ 𝒦₀(P₀). ∎

Remark 4.2 Note that when (A, BB^+) is stabilizable, there exists an orthogonal matrix V such that

V^T A V = [Â_{1,1}, 0; Â_{2,1}, Â_{2,2}],  V^T BB^+ V = bdiag(0, B̂_{2,2}B̂_{2,2}^+),

where Â_{1,1} is stable and (Â_{2,2}, B̂_{2,2}B̂_{2,2}^+) is controllable (see [35], Lemma 3.1, p. 536). Thus, we may assume without loss of generality that the given pair is controllable. If (A, B, C) is a system triplet such that (A, BB^+) and (A^T, C^+C) are stabilizable, then there exists an orthogonal matrix V such that (Â_{2,2}, B̂_{2,2}B̂_{2,2}^+) and (Â_{2,2}^T, Ĉ_{2,2}^+Ĉ_{2,2}) are controllable, where Ĉ = V^T C^+C V is partitioned accordingly (see [35], Theorem 4.1 and Remark 4.1, p. 539). Thus, the assumption in Corollary 4.2 that the given pairs are controllable does not cause any loss of generality of the results.

The effectiveness of the method is shown in the following example, but first, for the convenience of the reader, we summarize the whole method in Algorithm 1 (with its continuation in Algorithm 2). Let f(K) denote a target function of the SOF K to be minimized (e.g., ‖K‖_F, the LQR functional, the H∞-norm or the H2-norm of the closed loop, the pole-placement errors of the closed loop, or any other key performance measure that depends on K).

Regarding the LQR problem, let the LQR functional be defined by:

J(x₀, u) = ∫₀^∞ (x(t)^T Q x(t) + u(t)^T R u(t)) dt,   (29)

where Q > 0 and R ≥ 0 are given. We need to find u(t) that minimizes the functional value for any initial disturbance x₀ from the equilibrium point 0. Assuming that u(t) is realized by a stabilizing SOF, let u(t) = −Ky(t) = −KCx(t). Then, by substituting the last into (29), we get

J(x₀, K) = ∫₀^∞ x(t)^T (Q + C^TK^TRKC) x(t) dt.   (30)

Now, since Q + C^TK^TRKC > 0 and since E ≜ A − BKC is stable, the Lyapunov equation

E^TP + PE = −(Q + C^TK^TRKC),   (31)

has a unique solution P_LQR(K) > 0 given by

P_LQR(K) = ∫₀^∞ exp(E^Tt)(Q + C^TK^TRKC)exp(Et) dt = mat(−(I⊗E^T + E^T⊗I)^{-1} vec(Q + C^TK^TRKC)).   (32)

By substituting (31) into (30), we get

J(x₀, K) = x₀^T P_LQR(K) x₀ = ‖P_LQR(K)^{1/2} x₀‖₂².   (33)

Algorithm 1. An Algorithm for Optimal SOFs.

Require: an algorithm for optimizing f(K) under LMI and linear constraints, an algorithm for computing the Moore-Penrose pseudo-inverse, and an algorithm for orthogonal diagonalization.

Input: a system triplet (A, B, C) such that (A, B), (A^T, C^T) are controllable.

Output: an SOF K such that A − BKC is stable, minimizing f(K) — if one exists.

 1. A₀ ← A
 2. B₀ ← B
 3. i ← 0
 4. k₀ ← rank(B₀)
 5. while BᵢBᵢ^+ ≠ I_{kᵢ} do
 6.   compute an orthogonal matrix Uᵢ such that Uᵢ^T BᵢBᵢ^+ Uᵢ = bdiag(I_{kᵢ}, 0)
 7.   Âᵢ ← Uᵢ^T Aᵢ Uᵢ
 8.   partition Âᵢ = [Âᵢ_{1,1}, Âᵢ_{1,2}; Âᵢ_{2,1}, Âᵢ_{2,2}]
 9.   A_{i+1} ← Âᵢ_{2,2}
10.   B_{i+1} ← Âᵢ_{2,1}
11.   i ← i + 1
12.   kᵢ ← rank(Bᵢ)
13. end while
14. b ← i
15. let P_b be a symbol for P_b > 0
16. X₀ᵇ ← ½(I_{n_b} + A_b P_b + P_b A_b^T)P_b^{-1}
17. let W_b be a symbol for a matrix satisfying W_b^T = −W_b
18. X_b ← X₀ᵇ + W_b P_b^{-1}
19. let F_b be a symbol for an arbitrary matrix
20. K_b ← B_b^+X_b + L_{B_b} F_b

Thus,

J(x₀, K) = ‖P_LQR(K)^{1/2} x₀‖₂² ≤ ‖P_LQR(K)^{1/2}‖₂² ‖x₀‖₂² = ‖P_LQR(K)‖₂ ‖x₀‖₂² = σ_max(P_LQR(K)) ‖x₀‖₂²,

Algorithm 2. An Algorithm for Optimal SOFs, Continued.

 1. for i = b−1 down to 0 do
 2.   let ΔP̂ᵢ_{1,1} be a symbol for ΔP̂ᵢ_{1,1} > 0
 3.   P̂ᵢ ← [ΔP̂ᵢ_{1,1} + K_{i+1}P_{i+1}K_{i+1}^T, −K_{i+1}P_{i+1}; −P_{i+1}K_{i+1}^T, P_{i+1}]
 4.   P̂ᵢ^{-1} ← [ΔP̂ᵢ_{1,1}^{-1}, ΔP̂ᵢ_{1,1}^{-1}K_{i+1}; K_{i+1}^TΔP̂ᵢ_{1,1}^{-1}, P_{i+1}^{-1} + K_{i+1}^TΔP̂ᵢ_{1,1}^{-1}K_{i+1}]
 5.   Pᵢ ← Uᵢ P̂ᵢ Uᵢ^T
 6.   Pᵢ^{-1} ← Uᵢ P̂ᵢ^{-1} Uᵢ^T
 7.   X₀ⁱ ← (I_{nᵢ} + AᵢPᵢ + PᵢAᵢ^T)(I_{nᵢ} − ½BᵢBᵢ^+)Pᵢ^{-1}
 8.   let Wᵢ be a symbol for a matrix satisfying Wᵢ^T = −Wᵢ and R_{Bᵢ}Wᵢ = 0
 9.   let Lᵢ be a symbol for an arbitrary matrix
10.   let Fᵢ be a symbol for an arbitrary matrix
11.   Xᵢ ← X₀ⁱ + WᵢPᵢ^{-1} + R_{Bᵢ}Lᵢ
12.   Kᵢ ← Bᵢ^+Xᵢ + L_{Bᵢ}Fᵢ
13. end for
14. optimize f(K) under the matrix equation K₀ L_C = 0 and the constraints ΔP̂₀_{1,1} > 0, …, ΔP̂_{b−1,1,1} > 0, P_b > 0, with respect to F₀, …, F_b, to W₀, …, W_b, and to ΔP̂₀_{1,1}, …, ΔP̂_{b−1,1,1}, P_b as variables
15. if a solution was found then
16.   return K
17. else
18.   return "no solution was found"
19. end if

where σ_max(P_LQR(K)) is the largest eigenvalue of P_LQR(K). Therefore,

J(x₀, K)/‖x₀‖₂² ≤ σ_max(P_LQR(K)).   (34)

Now, if x₀ is known, then we can minimize J(x₀, K) by minimizing x₀^T P_LQR(K) x₀. Otherwise, if we design for the worst case, we need to minimize σ_max(P_LQR(K)).

In the following examples, we executed the algorithm on the following setup — Processor: Intel(R) Core(TM) i5-2400 CPU @ 3.10 GHz, RAM: 8.00 GB, Operating System: Windows 10 (64-bit, x64-based processor), Platform: MATLAB®, Version: R2018b, Function: fmincon.

Example 4.1 A system of the Boeing B-747 aircraft (the "AC5" system in [42]; see also [43]) is given by the general model (given here with slight changes):

(d/dt) x(t) = A x(t) + B₁ w(t) + B u(t),
z(t) = C₁ x(t) + D_{1,1} w(t) + D_{1,2} u(t),
y(t) = C x(t) + D_{2,1} w(t),

where xis the state, wis the noise, uis the control input, zis the regulated output, and yis the measurement, where:

A = [0.980100000000000, 0.000300000000000, 0.098000000000000, 0.003800000000000;
0.386800000000000, 0.907100000000000, 0.047100000000000, 0.000800000000000;
0.159100000000000, 0.001500000000000, 0.969100000000000, 0.000300000000000;
0.019800000000000, 0.095800000000000, 0.002100000000000, 1.000000000000000],

B = [0.000100000000000, 0.005800000000000;
0.029600000000000, 0.015300000000000;
0.001200000000000, 0.090800000000000;
0.001500000000000, 0.000800000000000],

C = [1, 0, 0, 0; 0, 0, 0, 1], B₁ = B, C₁ = C, D_{1,1} = 0₄, D_{2,1} = 0_{2×4}, D_{1,2} = 0_{4×2},

with

σ(A) = {0.978871342065923 ± 0.128159143146289i, 0.899614120838404, 0.998943195029751}.

Note that (A, B) and (A^T, C^T) here are controllable. Let u = u_r − Ky, where u_r is a reference input. Then, u = u_r − KCx − KD_{2,1}w, and substituting the last into the system yields the closed-loop system:

(d/dt) x(t) = (A − BKC) x(t) + (B₁ − BKD_{2,1}) w(t) + B u_r(t),
z(t) = (C₁ − D_{1,2}KC) x(t) + (D_{1,1} − D_{1,2}KD_{2,1}) w(t) + D_{1,2} u_r(t),

where the behavior of z is of our interest. Note that we actually have:

(d/dt) x(t) = (A − BKC) x(t) + B w(t) + B u_r(t),
z(t) = C x(t) = y(t).

For the stabilization via SOF with minimal Frobenius norm, we need to minimize f(K) = ‖K‖_F. For the LQR problem, we need to minimize f(K) = x₀^T P_LQR(K) x₀ when x₀ is known, and to minimize f(K) = σ_max(P_LQR(K)) when x₀ is unknown, where P_LQR(K) is given by (32). For the H∞ and the H2 problems, we need to minimize f(K) = ‖T_{w,z}(s)‖_{H∞} and f(K) = ‖T_{w,z}(s)‖_{H2}, respectively, where

T_{w,z}(s) = (D_{1,1} − D_{1,2}KD_{2,1}) + (C₁ − D_{1,2}KC)(sI − A + BKC)^{-1}(B₁ − BKD_{2,1}) = C(sI − A + BKC)^{-1}B.

These problems need to be solved under the constraint that A − BKC is stable, i.e., that K = K₀C^+ + G R_C, where K₀ ∈ 𝒦₀(P₀) for some P₀ ∈ 𝒫₀, such that K₀ L_C = 0.

Applying the algorithm, we obtained:

U0=0.06388269943991800.99795741427791900.0121958756626980.9986431305453430.0007807001062570.0506216252086100.9978828285664220.0122228707162180.0638779249510250.0002694210148900.0003489718707200.0506211343813180.0000223388942370.998717867304654A1=0.9836508385357660.0037722779118550.0000914909052290.994959072276129,B1=0.0988839726594750.0015449789641360.0009392250450700.100248978115558.

The "while-loop" stops because B₁B₁^+ = I₂. We have

B₀^+ = [0.386571795892900, 33.468356299440529, 5.629731696802776, 1.694878880538571; 0.696955535886478, 0.442712098375399, 10.893875111014371, 0.025378383262705],
B₁^+ = [10.111382188357680, 0.155830743345230; 0.094732770050116, 9.973704058215121],
L_{B₀} = 0₂, R_{C₀} = 0₂, L_{B₁} = 0₂,
R_{B₀} = [0.995919000712269, 0.000779105459367, 0.063747448813564, 0.000022293265130; 0.000779105459367, 0.002563158431417, 0.000036230973158, 0.050556704127861; 0.063747448813564, 0.000036230973158, 0.004080461883732, 0.000270502543607; 0.000022293265130, 0.050556704127861, 0.000270502543607, 0.997437378972582],
R_{B₁} = 0₂,
I₄ − ½B₀B₀^+ = [0.997959500356135, 0.000389552729683, 0.031873724406782, 0.000011146632565; 0.000389552729683, 0.501281579215708, 0.000018115486579, 0.025278352063931; 0.031873724406782, 0.000018115486579, 0.502040230941866, 0.000135251271804; 0.000011146632565, 0.025278352063931, 0.000135251271804, 0.998718689486291],
I₂ − ½B₁B₁^+ = ½I₂.

Now, we parameterize all the matrices K0such that A0B0K0is stable. Let P1=p1p2p2p3, where p1,d1p1p3p22>0. Let w1be arbitrary and let W1=0w1w10. Let

S1 = B1^+ ((1/2)(I2 + A1P1 + P1A1^T) + W1).

Then

K1 = S1P1^(−1).

Let ΔP̂0_{1,1} = [p4 p5; p5 p6], where p4 > 0 and d2 = p4p6 − p5^2 > 0. Then,

P0 = U0 [ΔP̂0_{1,1} + K1P1K1^T, K1P1; P1K1^T, P1] U0^T = U0 [ΔP̂0_{1,1} + S1K1^T, S1; S1^T, P1] U0^T,

and

P0^(−1) = U0 [ΔP̂0_{1,1}^(−1), −ΔP̂0_{1,1}^(−1)K1; −K1^T ΔP̂0_{1,1}^(−1), P1^(−1) + K1^T ΔP̂0_{1,1}^(−1) K1] U0^T.

Let

S0 = B0^+ (I4 + A0P0 + P0A0^T)(I4 − (1/2)B0B0^+).

Then

K0 = S0P0^(−1).

Note that W0^T = −W0 together with R_B0 W0 = 0 implies that W0 = 0_4. We have completed the parametrization of all the SFs of the system, where the parameters W1, P1 > 0, and ΔP̂0_{1,1} > 0 are free.
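The base-case step of this construction can be checked numerically. For any A1, an invertible (hence B1B1^+ = I) input matrix B1, P1 > 0, and skew-symmetric W1, the feedback K1 = B1^+((1/2)(I + A1P1 + P1A1^T) + W1)P1^(−1) satisfies the Lyapunov identity (A1 − B1K1)P1 + P1(A1 − B1K1)^T = −I, so A1 − B1K1 is stable. A sketch with hypothetical numbers:

```python
import numpy as np

A1 = np.array([[1., 2.], [3., 4.]])      # unstable, hypothetical
B1 = np.array([[1., 0.], [1., 1.]])      # invertible => B1 @ pinv(B1) = I
P1 = np.array([[2., 0.5], [0.5, 1.]])    # symmetric positive definite
W1 = np.array([[0., 0.3], [-0.3, 0.]])   # skew-symmetric free parameter

B1p = np.linalg.pinv(B1)
S1 = B1p @ (0.5 * (np.eye(2) + A1 @ P1 + P1 @ A1.T) + W1)
K1 = S1 @ np.linalg.inv(P1)

E = A1 - B1 @ K1
lyap = E @ P1 + P1 @ E.T                 # equals -I by construction
alpha = np.linalg.eigvals(E).real.max()  # negative, since P1 > 0
```

Since P1 > 0 and the Lyapunov identity yields −I on the right-hand side, stability of the closed loop follows for any choice of the free parameters.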

Regarding the optimization stage, we had the following results: starting from the point (feasible for SF but not feasible for SOF):

(p1, p2, p3, p4, p5, p6, w1)^T = (750, 0, 750, 750, 0, 750, 0)^T,

in CPU time = 0.59375 sec, the fmincon function (with the interior-point option and the default optimization parameters) has converged to the optimal point

(p1, p2, p3, p4, p5, p6, w1)^T = 10^7 × (0.009103654227197, 0.000105735816664, 0.006486122495094, 1.912355216342858, 0.057053301125719, 1.792930229298237, 0.000647092932030)^T,

resulting with the following optimal Frobenius-norm SF and SOF

K0 = 10^3 × [0.157533999747776 0.000000000000000 0.000000000000000 1.273776653891793; 0.332650954180600 0.000000000000000 0.000000000000000 0.000655866128882],
K = 10^3 × [0.157533999747776 1.273776653891793; 0.332650954180600 0.000655866128882],

with ‖K‖_F = 1.325888763265586 × 10^3. The resulting closed-loop eigenvalues are:

σ(A0 − B0KC0) = {−0.000004083911866 ± 1.423412274467895i, −0.000005220069661 ± 1.681989978268793i}.

For a comparison, in this (small) example we had seven scalar indeterminates, four scalar equations and four scalar inequalities, while by the BMI method (A0 − B0KC0)P + P(A0 − B0KC0)^T < 0, P > 0, we would have 14 scalar indeterminates and eight scalar inequalities. This shows the potential of the method in reducing the number of variables and inequalities/equations, thus enabling one to deal efficiently with larger problems. Moreover, the method resolves the coupling of P and K, in the sense that now K depends on P while the dependence of P on K has been removed, thus making the problem more relaxed.

Figures 1 and 2 show the impulse response and the step response of the closed-loop system, in terms of the regulated output z = y, where w = 0 and u_r is the Dirac delta function or the unit-step function, respectively. While the amplitudes seem to be reasonable, the settling time of order 10^5 seems unreasonable. This happens because lowering the SOF-norm results in pushing the closed-loop eigenvalues toward the imaginary axis, as can be seen from the dense oscillations. We therefore must set a barrier on the abscissa of the closed-loop eigenvalues as a constraint. Note, however, that as a starting point for other optimization keys, where we need any stabilizing SOF that we can get, the above SOF might be sufficient.

Figure 1.

Impulse response of the closed loop with the minimal-norm SOF.

Figure 2.

Step response of the closed loop with the minimal-norm SOF.

Regarding the LQR functional with Q = I, R = I, starting from:

(p1, p2, p3, p4, p5, p6, w1)^T = (300, 0, 300, 300, 0, 300, 15)^T,

in CPU time = 0.90625 sec, the fmincon function has converged to the optimal point

(p1, p2, p3, p4, p5, p6, w1)^T = 10^2 × (0.010603084813420, 0.002595549083521, 0.009614240830002, 1.009076872432389, 0.832493933321482, 1.812148701422345, 0.001750170924431)^T,

resulting with the following optimal SF and SOF

K0 = 10^3 × [1.466094499085196 0.000000000000002 0.000000000000000 2.731682559443639; 0.703352598722306 0.000000000000000 0.000000000000001 0.149101634953946],
K = 10^3 × [1.466094499085196 2.731682559443639; 0.703352598722306 0.149101634953946],

with ‖K‖_F = 3.182523976577676 × 10^3 and LQR worst-case functional value σ_max(P_LQR(K)) = 1.981249586261248 × 10^6. The resulting closed-loop eigenvalues are:

σ(A0 − B0KC0) = {−1.723892066022943 ± 1.346849871126735i, −0.450106460827155 ± 1.711908728695912i}.

The entries of z = y under w = 0 and u_r = 0, when the closed-loop system is driven by the initial condition x0 = (1, 1, 1, 1)^T, are depicted in Figure 3. The results might not be satisfactory regarding the amplitudes or the settling time; however, as a starting point for other optimization keys, where we need any stabilizing SOF that we can get, the above SOF might be sufficient.

Figure 3.

Response of initial condition of the closed loop with the LQR SOF.

For the problem of pole placement via SOF, assume that the target is to place the closed-loop eigenvalues as close as possible to −10 ± i, −1 ± 0.1i. Then, starting from:

(p1, p2, p3, p4, p5, p6, w1)^T = (1, 0, 1, 1, 0, 1, 0)^T,

in CPU time = 1.828125 sec, the fmincon function has converged to the optimal point

(p1, p2, p3, p4, p5, p6, w1)^T = 10^3 × (0.017543717092354, 0.025265281638022, 0.040984285298812, 1.855250747170489, 3.397079720955738, 6.251476924192442, 0.013791203700439)^T,

resulting with the following optimal SF and SOF

K0 = 10^5 × [1.014246236498231 0.000000000000000 0.000000000000000 0.708417204523523; 0.177349433397692 0.000000000000000 0.000000000000000 0.124922842398310],
K = 10^5 × [1.014246236498231 0.708417204523523; 0.177349433397692 0.124922842398310],

with ‖K‖_F = 1.256029021159584 × 10^5. The resulting closed-loop eigenvalues are:

σ(A0 − B0KC0) = {−9.682859336407926 ± 0.940019732471932i, −0.157090195949127 ± 1.083963060760387i}.

Figures 4 and 5 depict the impulse response and the step response of the closed loop with the pole-placement SOF. The amplitudes look reasonable but the settling time might be unsatisfactory.

Figure 4.

Impulse response of the closed loop with the pole-placement SOF.

Figure 5.

Step response of the closed loop with the pole-placement SOF.

Regarding the H∞-norm of the closed loop, starting from:

(p1, p2, p3, p4, p5, p6, w1)^T = (1, 0, 1, 1, 0, 1, 0)^T,

in CPU time = 0.703125 sec, the fmincon function has converged to the optimal point

(p1, p2, p3, p4, p5, p6, w1)^T = (0.888221316790683, 0.005450463395221, 0.509688006534611, 0.351367770700493, 0.108479534948988, 2.135683618863295, 0.023286321711901)^T,

resulting with the following optimal SF and SOF

K0 = 10^4 × [0.434126348764817 0.000000000000000 0.000000000000000 6.230425140286776; 6.139781209081431 0.000000000000000 0.000000000000000 0.417994561595739],
K = 10^4 × [0.434126348764817 6.230425140286776; 6.139781209081431 0.417994561595739],

with ‖K‖_F = 8.768026908280017 × 10^4 and ‖T_{w,z}(s)‖_H∞ = 1.631954397074613 × 10^(−5). The resulting closed-loop eigenvalues are:

σ(A0 − B0KC0) = {−356.940184596476, −490.9530886951882, −1.0236129938218, −0.5685837870690}.

The simulation results of the closed-loop system are given in Figure 6, where w is a normally distributed random disturbance, where each entry is N(0, 10^6)-distributed. The maximum absolute values of the entries of z = y are

Figure 6.

Response of the closed loop with the optimal H∞-norm SOF to a 10^6-variance zero-mean normally distributed random disturbance.

(0.034719714201842, 0.014588756724050)^T,

and the maximum absolute values of the entries of x are

(0.034719714201842, 0.279876853192629, 0.549124101316666, 0.014588756724050)^T.

The results here are good.

Regarding the H2-norm of the closed loop, starting from:

(p1, p2, p3, p4, p5, p6, w1)^T = (1, 0, 1, 1, 0, 1, 0)^T,

in CPU time = 8.390625 sec, the fmincon function has converged to the optimal point

(p1, p2, p3, p4, p5, p6, w1)^T = (0.891178477642138, 0.006639774876451, 0.508684007482598, 0.000038288661546, 0.000053652014908, 0.000140441315775, 0.021795268637180)^T,

resulting with the following optimal SF and SOF

K0 = 10^9 × [1.835262537587536 0.000000000003900 0.000000000012416 1.804218059606446; 1.194244874676116 0.000000000002080 0.000000000007286 0.578593785387393],
K = 10^9 × [1.835262537587536 1.804218059606446; 1.194244874676116 0.578593785387393],

with ‖K‖_F = 2.895579903518702 × 10^9 and ‖T_{w,z}(s)‖_H2 = 2.289352128445973 × 10^(−6). The resulting closed-loop eigenvalues are:

σ(A0 − B0KC0) = 10^6 × {−8.825067937292802, −1.087222799352728, −0.000000561491544, −0.000000982645227}.

The simulation results of the closed-loop system are given in Figure 7, where w is a normally distributed random disturbance, where each entry is N(0, 10^6)-distributed. The maximum absolute values of the entries of z = y are

10^(−5) × (0.793706028985933, 0.829879751812045)^T,

and the maximum absolute values of the entries of x are

10^(−5) × (0.79370602898591, 6.35024130739011, 2.3720896708621, 0.8298797518120)^T.

The results here are excellent.

We conclude that the best performance of the closed-loop system is achieved with the optimal H2-norm SOF; however, since the Frobenius-norm of the SOF controller is high, the cost of construction and of operation of the SOF controller might be high; there are no “free meals.” Note also that by minimizing the SOF Frobenius-norm, the eigenvalues of the closed loop tend to get closer to the imaginary axis (to the region of lower degree of stability), while by minimizing the H2-norm, the eigenvalues of the closed loop tend to escape from the imaginary axis (to the region of higher degree of stability). These are conflicting demands, and therefore one should use some combination of the related key functions, or use some multiobjective optimization algorithm, in order to get the best SOF in some or all of the needed key performance measures (Figure 7).

Figure 7.

Response of the closed loop with the optimal H2-norm SOF to a 10^6-variance zero-mean normally distributed random disturbance.
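When combining such conflicting measures into a single key function, each term must be computable from K; for instance, the closed-loop H2-norm can be obtained from a controllability-Gramian Lyapunov equation, ‖T‖_H2^2 = trace(C X C^T) with A_cl X + X A_cl^T + BB^T = 0. A sketch (the weights and the first-order sanity-check system are assumptions):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(Acl, B, C):
    """Closed-loop H2-norm via the controllability Gramian:
    Acl X + X Acl^T + B B^T = 0,  ||T||_H2^2 = trace(C X C^T)."""
    X = solve_continuous_lyapunov(Acl, -B @ B.T)
    return float(np.sqrt(np.trace(C @ X @ C.T)))

# sanity check on T(s) = 1/(s+1), whose H2-norm is sqrt(1/2)
Acl = np.array([[-1.]])
B = np.array([[1.]])
C = np.array([[1.]])
val = h2_norm(Acl, B, C)

# a scalarized key function mixing the conflicting demands (assumed weight lam)
def key(K, Acl, B, C, lam=0.5):
    return (1.0 - lam) * np.linalg.norm(K, 'fro') + lam * h2_norm(Acl, B, C)
```

Sweeping the weight lam traces out trade-off points between controller cost and disturbance attenuation, which is one simple alternative to a full multiobjective solver.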

The following counterintuitive example shows that the SOF problem can be unsolvable (or hard to solve) even for small systems. In the example, we show how the nonexistence of a stabilizing SOF can be detected by the method:

Example 4.2 Let

A0 = [1 1; 0 1], B0 = (1, 1)^T, C0 = (1 1).

Applying the algorithm we have

U0 = (1/√2)[1 1; 1 −1], A1 = 1/2, B1 = 1/2.

The “while-loop” stops because B1B1^+ = 1. Let P1 = p1, where p1 > 0. Then,

K1 = B1^+ (1/2)(1 + A1P1 + P1A1^T) P1^(−1) = (1 + p1)/p1.

Let ΔP̂0_{1,1} = p2, where p2 > 0. Then

P0 = U0 [ΔP̂0_{1,1} + K1P1K1^T, K1P1; P1K1^T, P1] U0^T = (1/(2p1)) [p2p1 + 1, p2p1 + 1 + 2p1; p2p1 + 1 + 2p1, p2p1 + 1 + 4p1 + 4p1^2].

We therefore have:

K0 = B0^+ (I2 + A0P0 + P0A0^T)(I2 − (1/2)B0B0^+) P0^(−1), whose two entries are rational functions of p1 and p2 with common denominator 4p2p1^3,

as the free parametrization of all the state feedbacks for which A0 − B0K0 is stable, for any choice of p1, p2 > 0. Now, L_C0 = (1/2)[1 −1; −1 1], and the equations K0L_C0 = 0 are equivalent to the single equation:

(1/(4p2p1^3)) (p2^2 p1^2 + p2(2p1 + 3p1^2) + 2p1^3 + 4p1^2 + 3p1 + 1) = 0.

Assuming p1, p2 > 0, the last equation implies that

p2 = (−(2p1 + 3p1^2) ± √((2p1 + 3p1^2)^2 − 4p1^2(2p1^3 + 4p1^2 + 3p1 + 1)))/(2p1^2),

leading to a contradiction with p2 being a positive real number (indeed, since all the coefficients of the quadratic are positive for p1 > 0, both roots are either negative or non-real).
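The conclusion of this example can be cross-checked by a brute-force numeric sweep (a sketch): the closed loop A0 − B0kC0 = [1−k 1−k; −k 1−k] has trace 2 − 2k and determinant 1 − k, and a 2×2 matrix is Hurwitz iff its trace is negative and its determinant positive, which here would require k > 1 and k < 1 simultaneously:

```python
import numpy as np

# Example 4.2 data: A0 = [1 1; 0 1], B0 = (1,1)^T, C0 = (1 1)
A0 = np.array([[1., 1.], [0., 1.]])
B0 = np.array([[1.], [1.]])
C0 = np.array([[1., 1.]])

def abscissa(k):
    """Max real part of the eigenvalues of the closed loop A0 - B0*k*C0."""
    return np.linalg.eigvals(A0 - B0 * k @ C0).real.max()

# sweep scalar output gains; no k drives the abscissa below zero
ks = np.linspace(-50.0, 50.0, 10001)
worst = min(abscissa(k) for k in ks)
```

The sweep confirms numerically what the parametrization detects algebraically: the feasible set of stabilizing SOFs for this system is empty.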

Advertisement

5. A parametrization for exact pole assignment via SFs

This section is based on the results reported in [34], where the proofs of the following lemma and theorem can be found. The aim of this section is to introduce a parametrization of all the SFs for the exact pole-assignment problem, when the set of eigenvalues can be given as free parameters (under some reasonable assumptions). This is done as part of the research of the problem of parametrization of all the SOFs for pole assignment. Note that the problem of exact pole assignment by SOFs is NP-hard (see [3]), meaning that an efficient algorithm for the problem probably does not exist, and therefore an effective description of the set of all solutions might not exist either. Also note that with SOFs, the feasible set Ω might exclude some open set from being feasible for the closed-loop spectrum (see [12]). These facts make the full aim very hard (if not impossible) to achieve. We therefore focus here on the problem of exact pole assignment via SFs.

Let the control system be given by:

Σx(t) = Ax(t) + Bu(t),   y(t) = Cx(t),   (35)

where A ∈ R^(n×n), B ∈ R^(n×m), C ∈ R^(r×n), Σx(t) = (d/dt)x(t) in the continuous-time context, and Σx(t) = x(t+1) in the discrete-time context. We assume without loss of generality that (A, B) is controllable. The problem of exact pole assignment by SF is defined as follows:

  • (SF-EPA) Given a set Ω ⊂ C, |Ω| = n (in the discrete-time context Ω ⊂ D), symmetric with respect to the x-axis, find a state feedback F ∈ R^(m×n) such that the closed-loop state-to-state matrix E = A − BF has Ω as its complete set of eigenvalues, with their given multiplicities.

In [13], a closed form of all the exact pole-placement SFs is proved (up to a set of measure 0), based on Moore’s method. In order to minimize the inaccuracy of the final eigenvalue placement, and in order to minimize the Frobenius-norm of the feedback, a convex combination of the condition number of the similarity matrix and of the feedback norm was minimized. The parametrization proposed in [13] is based on the assumptions that there exists at least one real state feedback that leads to a diagonalizable state-to-state closed-loop matrix and that B is full rank. A necessary condition for such an SF to exist is that the final multiplicity of any eigenvalue is less than or equal to rank(B). Here, we do not assume that B is full rank, and we only assume that Ω contains a sufficient number of real eigenvalues. A survey of most of the methods for robust pole assignment via SFs or via SOFs, and the formulation of these methods as optimization problems with necessary optimality conditions, is given in [44]. In [45], a performance comparison of most of the algorithmic methods for robust pole placement is given. A formulation of the general problem of robust exact pole assignment via SFs as an SDP problem, with an LMI-based linearization, is introduced in [46], where the robustness is with respect to the condition number of the similarity matrix, which is minimized in order to hopefully minimize the inaccuracy of the final eigenvalue placement. Unfortunately, one probably cannot gain a parametric closed form of the SFs from such formulations. Moreover, the following proposed method is exact and therefore enables the use of the free parameters of the parametrization for other (and maybe more important) optimization purposes. Note that since the proposed method is exact, the closed-loop eigenvalues themselves can be inserted into the problem as parameters.

A completely different notion of robustness with respect to pole placement is considered in the following works:

Robust pole placement in LMI regions and Hdesign with pole placement in LMI regions are considered in [47, 48], respectively. An algorithm based on alternating projections is introduced in [15], which aims to solve efficiently the problem of pole placement via SOFs. A randomized algorithm for pole placement via SOFs with minimal norm, in nonconvex or unconnected regions, is considered in [20].

Let Ω = {α1, ᾱ1 (c1 times), …, αm, ᾱm (cm times), β1 (r1 times), …, βℓ (rℓ times)} be the intended closed-loop eigenvalues, where the α's denote the paired complex-conjugate eigenvalues (with nonzero imaginary part), the β's denote the real eigenvalues, and 2c1, …, 2cm, r1, …, rℓ denote their respective multiplicities, where 2(c1 + ⋯ + cm) + (r1 + ⋯ + rℓ) = n. In the following, we say that the size of the set (actually, the multiset) Ω is n (counting multiplicities), and we write |Ω| = n. Note that (A, B) is controllable if and only if (A, BB^+) is controllable, and also note that BB^+ is a real symmetric matrix with eigenvalues in the set {0, 1}, and thus is an orthogonally diagonalizable matrix. Let U denote an orthogonal matrix such that:

B̂ = U^T BB^+ U = [I_k 0; 0 0] = bdiag(I_k, 0),   (36)

where k = rank(B) = rank(BB^+) ≥ 1 since (A, B) is controllable, and let Â = U^T A U = [Â_{1,1} Â_{1,2}; Â_{2,1} Â_{2,2}] be partitioned accordingly. We cite here the following lemma, taken from [34], connecting the controllability of the given system with the controllability of its subsystem:

Lemma 5.1 In the notations above, (A, BB^+) is controllable if and only if (Â_{2,2}, Â_{2,1}Â_{2,1}^+) is controllable.

Again, we use the recursive controllable structure. Let U_0 = U and let A_0 = A, B_0 = B, n_0 = n, k_0 = rank(B_0). Similarly, let U_1 be an orthogonal matrix such that U_1^T B_1B_1^+ U_1 = bdiag(I_{k_1}, 0), where B_1 = Â_{2,1}. Let A_1 = Â_{2,2}, n_1 = n_0 − k_0, k_1 = rank(B_1). Now, Lemma 5.1 implies that (A_1, B_1) is controllable since (A_0, B_0) is controllable. Recursively, assume that the pair (A_i, B_i) is controllable. Let U_i be an orthogonal matrix such that B̂_i = U_i^T B_iB_i^+ U_i = bdiag(I_{k_i}, 0), where k_i ≥ 1 (since (A_i, B_i) is controllable). Let Â_i = U_i^T A_i U_i = [Â^i_{1,1} Â^i_{1,2}; Â^i_{2,1} Â^i_{2,2}] be partitioned accordingly, with sizes k_i × k_i and (n_i − k_i) × (n_i − k_i) of the main block-diagonal blocks. Let A_{i+1} = Â^i_{2,2}, B_{i+1} = Â^i_{2,1}, n_{i+1} = n_i − k_i, k_i = rank(B_i). Then, Lemma 5.1 implies that (A_{i+1}, B_{i+1}) is controllable. The recursion stops when B_iB_i^+ = I_{k_i} for some i = b (which we call the base case). Note that in the worst case, the recursion stops when the rank k_b = 1.
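One step of this recursion can be sketched directly from (A, B): the orthogonal U_i comes from the symmetric eigendecomposition of the projector B_iB_i^+, ordering the unit eigenvalues first (the matrices below are hypothetical):

```python
import numpy as np

def reduce_step(A, B, tol=1e-9):
    """One step of the recursion: returns U, k = rank(B), and (A_{i+1}, B_{i+1})."""
    P = B @ np.linalg.pinv(B)               # orthogonal projector B B^+
    w, V = np.linalg.eigh(P)                # eigenvalues in {0, 1}, ascending
    order = np.argsort(-w)                  # unit eigenvalues first
    U = V[:, order]
    k = int(round(w.sum()))                 # rank(B) = trace of the projector
    Ah = U.T @ A @ U                        # Ahat, partitioned accordingly
    A_next, B_next = Ah[k:, k:], Ah[k:, :k] # A_{i+1} = Ahat_22, B_{i+1} = Ahat_21
    return U, k, A_next, B_next

# hypothetical controllable pair (companion form with input at the last state)
A = np.array([[0., 1., 0.], [0., 0., 1.], [1., 2., 3.]])
B = np.array([[0.], [0.], [1.]])
U, k, A1, B1 = reduce_step(A, B)
Bhat = U.T @ (B @ np.linalg.pinv(B)) @ U    # should be bdiag(I_k, 0)
```

Iterating `reduce_step` yields the index sequences n_i and k_i, which is all that is needed to check the assumption of the theorem below before any design work starts.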

Theorem 5.1 In the above notations, assume that r_1 + ⋯ + r_ℓ ≥ a, where a is the number of parity alternations in the sequence n_0, n_1, …, n_b. Let Ω_0 = Ω. Then, there exists a sequence Ω_0 ⊃ Ω_1 ⊃ ⋯ ⊃ Ω_b of symmetric sets with sizes |Ω_i| = n_i (counting multiplicities), and there exist real state feedbacks F_i = F_i(G_{i+1}, F_{i+1}) such that σ(A_i − B_iF_i) = Ω_i. Moreover, an explicit (recursive) formula for F_i(G_{i+1}, F_{i+1}) is given by:

F_i = B_i^+ W_i,
W_i = U_i Ŵ_i U_i^T,
Ŵ_i = [Ŵ^i_{1,1} Ŵ^i_{1,2}; 0 0],
Ŵ^i_{1,1} = Â^i_{1,1} + F_{i+1}Â^i_{2,1} − G_{i+1},
Ŵ^i_{1,2} = Â^i_{1,2} + F_{i+1}Â^i_{2,2} − G_{i+1}F_{i+1},   (37)

where σ(Â^i_{2,2} − Â^i_{2,1}F_{i+1}) = σ(A_{i+1} − B_{i+1}F_{i+1}) = Ω_{i+1} and G_{i+1} is an arbitrary real matrix such that σ(G_{i+1}) = Ω_i \ Ω_{i+1}.

Example 5.1 Consider the problem of exact pole assignment via SF for the same system from Example 4.1. We therefore assume here that the full state is available for feedback control. Now, using the calculations from Example 4.1, we have (n_0, n_1) = (4, 2), implying that the number of parity alternations is a = 0. We therefore can assign by the method any symmetric set of eigenvalues to the closed loop. Let Ω_0 = {α, ᾱ, β, β̄} be the eigenvalues to be assigned, and let Ω_1 = {β, β̄}. Now,

F_1 = B_1^+ (A_1 − G_2),

where

G_2 = [Re(β) Im(β); −Im(β) Re(β)],

and

F_0 = B_0^+ W_0,

where

W_0 = U_0 Ŵ_0 U_0^T,
Ŵ_0 = [Ŵ^0_{1,1} Ŵ^0_{1,2}; 0_2 0_2],
Ŵ^0_{1,1} = Â^0_{1,1} + F_1Â^0_{2,1} − G_1,
Ŵ^0_{1,2} = Â^0_{1,2} + F_1Â^0_{2,2} − G_1F_1,
G_1 = [Re(α) Im(α); −Im(α) Re(α)].

We have completed the pole-assignment SF parametrization. As an application, assume that α = −10 + i, β = −1 + 0.1i. Then,

F_0 = 10^3 × [2.312944765727539 0.063274421635669 0.033598570868307 7.136847864481691; 2.382051359668675 0.002249112006026 0.011460219946760 0.432788287638212],

resulting with the closed-loop eigenvalues:

σ(A_0 − B_0F_0) = {−10.000000000000000 ± 1.000000000000004i, −1.000000000000003 ± 0.099999999999998i}.

In our calculations, we used MATLAB®, which has a general precision of about 15–16 significant digits in computing eigenvalues. Thus, we have almost no loss of digits by the method. For a comparison, see the last case in Example 4.1, and note that while exact pole assignment can be achieved by SF, in general, it cannot be achieved by SOF, because the latter is an NP-hard problem (see the introduction of this section). Even regional pole placement is hard to achieve by SOF because of the nonconvexity of the SOF feasibility domain.
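A cross-check of exact placement of the same target spectrum {−10 ± i, −1 ± 0.1i} can be done with classical Ackermann's formula (a single-input technique, not the chapter's recursive parametrization) on a hypothetical companion-form system:

```python
import numpy as np

# hypothetical companion-form single-input system; controllable by construction
A = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [-1., 2., 0., 3.]])
B = np.array([[0.], [0.], [0.], [1.]])
target = np.array([-10 + 1j, -10 - 1j, -1 + 0.1j, -1 - 0.1j])

n = A.shape[0]
# controllability matrix [B, AB, A^2 B, A^3 B]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
# desired characteristic polynomial phi(s); conjugate pairs give real coefficients
coeffs = np.poly(target).real
# phi(A) = A^n + a1 A^{n-1} + ... + an I
phiA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
# Ackermann: F = e_n^T * ctrb^{-1} * phi(A)
F = np.linalg.solve(ctrb.T, np.eye(n)[:, -1]).reshape(1, -1) @ phiA
eigs = np.sort_complex(np.linalg.eigvals(A - B @ F))
```

As in the example, the placement is exact up to the accuracy of the eigenvalue computation itself.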

Remark 5.1 Note that the indices k_0, k_1, …, k_b, as well as the indices n_0, n_1, …, n_b, can be calculated from (A, B) in advance. After calculating these indices and the number a of parity alternations in the sequence n_0, n_1, …, n_b, the designer can define Ω so as to satisfy the assumption of Theorem 5.1, i.e., being symmetric with at least a real eigenvalues, in a parametric way, and get a parametrization of all the real SFs leading to Ω as the set of closed-loop eigenvalues. Next, the designer can play with the specific values of these and of other free parameters, in order to attain the needed closed-loop performance requirements. This is in contrast with other methods, where the parametrization is calculated ad hoc for a specific set of eigenvalues, and any change in the set of eigenvalues necessitates a new execution of the method.

Remark 5.2 Note that F_{i+1} can be replaced by F_{i+1} + (I − B_{i+1}^+B_{i+1})H_{i+1}, where H_{i+1} is any real matrix, without changing the closed-loop eigenvalues (if one seeks feedbacks with minimal Frobenius-norm, then one should take H_{i+1} = 0; otherwise, one should leave H_{i+1} as another free parameter). Thus, the freedom in H_b, …, H_1, H_0 and in G_{b+1}, G_b, …, G_1 constitutes the freedom in F_0 (e.g., in order to globally optimize the H∞-norm of the closed loop, the H2-norm or the LQR functional of the closed loop, or any other performance key thereof). Note also that the sequences F_b, …, F_1, F_0 and G_{b+1}, G_b, …, G_1 can be calculated for Ω as in Theorem 5.1, where the eigenvalues in Ω are given as free parameters. In that case, it can easily be proved by induction that the state feedbacks F_b, …, F_1, F_0 depend polynomially on the eigenvalue parameters and on the other free parameters mentioned above (for a complex eigenvalue α, they depend polynomially on Re(α) and Im(α)).
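The invariance stated in this remark follows from the Penrose identity B B^+ B = B, which gives B(I − B^+B) = 0, so the perturbation never reaches the closed-loop matrix. A numeric sketch with hypothetical matrices (B rank-deficient, so the projector onto ker(B) is nonzero):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = np.array([[1., 2.], [0., 0.], [1., 2.]])   # rank 1, so I - B^+ B != 0
F = rng.standard_normal((2, 3))
H = rng.standard_normal((2, 3))

N = np.eye(2) - np.linalg.pinv(B) @ B          # projector onto ker(B)
F_mod = F + N @ H
# the closed-loop matrix, hence its spectrum, is unchanged
delta = (A - B @ F_mod) - (A - B @ F)
```

Because B @ N vanishes identically, `delta` is zero up to rounding for every choice of H, confirming that H is a genuinely free parameter.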

Finally, it is worth mentioning that the complementary theorem of Theorem 5.1 was also proved in [34], meaning that under the assumptions of Theorem 5.1, any SF that solves the problem has the form given in the theorem (up to a factor of the form given in Remark 5.2).

Advertisement

6. Concluding remarks

In this chapter, we have introduced an explicit free parametrization of all the stabilizing SFs of a controllable pair (A, B). This enables global optimization over the set of all the stabilizing SFs of such a pair, because the parametrization is free. For a system triplet (A, B, C), we have shown how to get the parametrization of all the SOFs of the system by parameterizing all the SFs of (A, B) and all the SFs of (A^T, C^T), and then imposing the compatibility constraint (28). We have also shown a parametrization of all the SOFs of the system triplet (A, B, C) by imposing the linear constraint K0L_C0 = 0 on the SF K0 of the pair (A, B), where K0 was defined recursively and parameterizes the set of all SFs of (A, B). This leads to a set of polynomial equations (after multiplying by the l.c.m. of the denominators of the rational entries of K0) and inequalities that can be brought to polynomial equations. The resulting polynomial set of equations can be solved (parametrically) by using the Gröbner basis method (see, e.g., [49, 50, 51, 52]). By applying the Gröbner basis method, one would get an indication of the existence of solutions, and in case solutions do exist, it would tell what the free parameters are and how the other parameters depend on the free parameters. It seems that the proposed method reduces the computational overhead of the Gröbner basis method (or of other methods thereof) significantly, thus enabling SOF global optimization for larger systems.

In view of Theorem 5.1 (with its complementary theorem proved in [34]), we have introduced a sound and complete parametrization of all the state feedbacks F which make the matrix Ê = U^T(A − BF)U k-complementary (n−k)-invariant with respect to Ω_1 (see [34] for the definition and properties), where U is orthogonal such that U^T BB^+ U = bdiag(I_k, 0), k = rank(B), where Ω is symmetric and has at least a (being the number of parity alternations in the sequence n_0, …, n_b) real eigenvalues, and where Ω_1 ⊂ Ω is symmetric with a maximum number of real eigenvalues and size |Ω_1| = n − k. Assuming Ω as above, we have generalized the results of [13] in the sense that we do not assume the existence of a real state feedback F that brings the closed loop E = A − BF to a diagonalizable matrix, which actually means that the geometric and algebraic multiplicities coincide for any eigenvalue of the closed loop, and we do not assume the restriction on the multiplicity of each eigenvalue to be less than or equal to rank(B). However, in cases where the number of real eigenvalues in Ω is less than a, one should use the parametrizations given in [13], in [45], or in the references there. Note that in communication systems, where complex SFs and SOFs are sought, the introduced method is complete (with no restrictions), since the number of parity alternations and the restriction on Ω to contain sufficiently many real eigenvalues were needed only to guarantee that F_i for i = b, …, 0 is real at each stage, which is needless in communication systems.

In view of Example 5.1, one can see that the accuracy of the final location of the closed-loop eigenvalues given by the proposed method depends only on the accuracy of computing B_i^+ and U_i for i = 0, …, b, and on the algorithm used to compute the closed-loop eigenvalues (see [53], for example) in order to validate their final location; it has nothing to do with the specific values of the eigenvalues given in Ω. Therefore, by the proposed method, once B_i^+ and U_i for i = 0, …, b have been computed as accurately as possible, the location of the closed-loop eigenvalues will be accurate accordingly. Thus, with the proposed method, the designer can save time, since the parametrization can be computed only once; afterward, the designer only needs to play with the specific values of the eigenvalues until a satisfactory closed-loop performance is reached, being sure that the accuracy of the final placement will be the same in all trials, independently of the specific values of the chosen eigenvalues. Also, the given parametrization of F_0 depends polynomially on the free parameters and thus is very convenient for applying automatic differentiation and optimization methods.

To conclude, we have introduced parametrizations of SFs and SOFs that are based on the recursive controllable structure that was discovered in [35]. The results have powerful implications for real-life systems, and we expect more results in this direction. Unfortunately, for uncertain systems, the method cannot work directly, because of the dependence of (A, B, C) on the uncertain parameters, for which we cannot compute U_i for i = 0, …, b. However, if a nominal system (Ã, B̃, C̃) is known accurately, then the method can be applied to that system, and the free parameters of the parametrization can be used to “catch” the uncertainty of the whole system, together with the closed-loop performance requirements. The research of this method is left for future work.

References

  1. 1. Nemirovskii A. Several NP-hard problems arising in robust stability analysis. Mathematics of Control, Signals, and Systems. Springer. 1993;6(2):99-105
  2. 2. Blondel V, Tsitsiklis JN. NP-hardness of some linear control design problems. SIAM Journal on Control and Optimization. 1997;35(6):2118-2127
  3. 3. Fu M. Pole placement via static output feedback is NP-hard. IEEE Transactions on Automatic Control. 2004;49(5):855-857
  4. 4. Peretz Y. On applications of the ray-shooting method for structured and structured-sparse static-output-feedbacks. International Journal of Systems Science. 2017;48(9):1902-1913
  5. 5. Kučera V, Trofino-Neto A. Stabilization via static output feedback. IEEE Transactions on Automatic Control. 1993;38(5):764-765
  6. 6. de Souza CC, Geromel JC, Skelton RE. Static output feedback controllers: Stability and convexity. IEEE Transactions On Automatic Control. 1998;43(1):120-125
  7. 7. Cao YY, Lam J, Sun YX. Static output feedback stabilization: An ILMI approach. Automatica. 1998;34(12):1641-1645
  8. 8. Cao YY, Sun YX, Lam J. Simultaneous stabilization via static output feedback and state feedback. IEEE Transactions on Automatic Control. 1998;44(6):1277-1282
  9. 9. Quoc TD, Gumussoy S, Michiels W, Diehl M. Combining convex-concave decompositions and linearization approaches for solving BMIs, with application to static output feedback. IEEE Transactions on Automatic Control. 2012;57(6):1377-1390
  10. 10. Mesbahi M. A semi-definite programming solution of the least order dynamic output feedback synthesis problem. Proceedings of the 38th IEEE Conference on Decision and Control. 1999;(2):1851-1856
  11. 11. Henrion D, Loefberg J, Kočvara M, Stingl M. Solving Polynomial static output feedback problems with PENBMI. In: Proc. Joint IEEE Conf. Decision Control and Europ. Control Conf.; 2005; Sevilla, Spain. 2005
  12. 12. Eremenko A, Gabrielov A. Pole placement by static output feedback for generic linear systems. SIAM Journal on Control and Optimization. 2002;41(1):303-312
  13. 13. Schmid R, Pandey A, Nguyen T. Robust pole placement with Moore’s algorithm. IEEE Transactions on Automatic Control. 2014;59(2):500-505
  14. 14. Fazel M, Hindi H, Boyd S. Rank minimization and applications in system theory. In: Proceedings of the American Control Conference. 2004. pp. 3273-3278
  15. 15. Yang K, Orsi R. Generalized pole placement via static output feedback: A methodology based on projections. Automatica. 2006;42:2143-2150
  16. 16. Vidyasagar M, Blondel VD. Probabilistic solutions to some NP-hard matrix problems. Automatica. 2001;37:1397-1405
  17. 17. Tempo R, Calafiore G, Dabbene F. Randomized Algorithms for Analysis and Control of Uncertain Systems. London: Springer-Verlag; 2005
  18. 18. Tempo R, Ishii H. Monte Carlo and Las Vegas randomized algorithms for systems and control. European Journal of Control. 2007;13:189-203
  19. 19. Arzelier D, Gryazina EN, Peaucelle D, Polyak BT. Mixed LMI/randomized methods for static output feedback control. In: Proceedings of the American Control Conference, IEEE Conference Publications. 2010. pp. 4683-4688
  20. 20. Peretz Y. A randomized approximation algorithm for the minimal-norm static-output-feedback problem. Automatica. 2016;63:221-234
  21. 21. Apkarian P, Noll D. Nonsmooth H∞ synthesis. IEEE Transactions on Automatic Control. 2006;51(1):71-86
  22. 22. Burke JV, Lewis AS, Overton ML. Stabilization via nonsmooth, nonconvex optimization. IEEE Transactions On Automatic Control. 2006;51(11):1760-1769
  23. 23. Gumussoy S, Henrion D, Millstone M, Overton ML. Multiobjective robust control with HIFOO 2.0. In: Proceedings of the IFAC Symposium on Robust Control Design; 2009; Haifa, Israel. 2009
  24. 24. Borges RA, Calliero TR, Oliveira CLF, Peres PLD. Improved conditions for reduced-order H∞ filter design as a static output feedback problem. In: American Control Conference, San Francisco, CA, USA. 2011
  25. 25. Peretz Y. On application of the ray-shooting method for LQR via static-output-feedback. MDPI Algorithms Journal. 2018;11(1):1-13
  26. 26. Zheng F, Wang QG, Lee TH. On the design of multivariable PID controllers via LMI approach. Automatica. 2002;38:517-526
  27. 27. Peretz Y. A randomized algorithm for optimal PID controllers. MDPI Algorithms Journal. 2018;11(81):1-15
  28. 28. Lin F, Farad M, Jovanović M. Sparse feedback synthesis via the alternating direction method of multipliers. In: IEEE, Proceedings of the American Control Conference (ACC), 2012. pp. 4765-4770
  29. 29. Lin F, Farad M, Jovanović M. Augmented Lagrangian approach to design of structured optimal state feedback gains. IEEE Transactions on Automatic Control. 2011;56(12):2923-2929
  30. 30. Gillis N, Sharma P. Minimal-norm static feedbacks using dissipative Hamiltonian matrices. Linear Algebra and its Applications. 2021;623:258-281
  31. 31. Silva RN, Frezzatto L. A new parametrization for static output feedback control of LPV discrete-time systems. Automatica. 2021;128:109566
  32. 32. de Oliveira AM, Costa OLV. On the H2 static output feedback control for hidden Markov jump linear systems. Annals of the Academy of Romanian Scientists. Series on Mathematics and its Applications. 2020;12(1-2)
  33. 33. Ren D, Xiong J, Ho DW. Static output feedback negative imaginary controller synthesis with an H∞ norm bound. Automatica. 2021;126:109157
  34. 34. Peretz Y. On parametrization of all the exact pole-assignment state feedbacks for LTI systems. IEEE Transactions on Automatic Control. 2017;62(7):3436-3441
  35. 35. Peretz Y. A characterization of all the static stabilizing controllers for LTI systems. Linear Algebra and its Applications. 2012;437(2):525-548
  36. 36. Iwasaki T, Skelton RE. All controllers for the general H∞ control problem: LMI existence conditions and state space formulas. Automatica. 1994;30(8):1307-1317
  37. 37. Iwasaki T, Skelton RE. Parametrization of all stabilizing controllers via quadratic Lyapunov functions. Journal of Optimization Theory and Applications. 1995;85(2):291-307
  38. 38. Skelton RE, Iwasaki T, Grigoriadis KM. A Unified Algebraic Approach To Linear Control Design. London: Tylor & Francis Ltd.; 1998
  39. 39. Piziak R, Odell PL. In: Nashed Z, Taft E, editors. Matrix Theory: From Generalized Inverses to Jordan Form. Boca Raton, FL: Chapman & Hall/CRC, Taylor & Francis Group; 2007. p. 288
  40. 40. Karlheinz S. Abstract Algebra With Applications. Marcel Dekker, Inc.; 1994
  41. 41. Ohara A, Kitamori T. Geometric structures of stable state feedback systems. IEEE Transactions on Automatic Control. 1993;38(10):1579-1583
  42. 42. Leibfritz F. COMPleib: Constrained matrix-optimization problem library—A collection of test examples for nonlinear semidefinite programs, control system design and related problems. Dept. Math., Univ. Trier, Germany, Tech. Report; 2003
  43. 43. Tadashi I, Hai-Jiao G, Hiroshi T. A design of discrete-time integral controllers with computation delays via loop transfer recovery. Automatica. 1992;28(3):599-603
  44. 44. Chu EK. Optimization and pole assignment in control system design. International Journal of Applied Mathematics and Computer Science. 2001;11(5):1035-1053
  45. 45. Pandey A, Schmid R, Nguyen T, Yang Y, Sima V, Tits AL. Performance survey of robust pole placement methods. In: IEEE 53rd Conference on Decision and Control; December 15-17; Los Angeles, California, USA. 2014. pp. 3186-3191
  46. 46. Ait Rami M, El Faiz S, Benzaouia A, Tadeo F. Robust exact pole placement via an LMI-based algorithm. IEEE Transactions on Automatic Control. 2009;54(2):394-398
  47. 47. Chilali M, Gahinet P. H∞ design with pole placement constraints: An LMI approach. IEEE Transactions on Automatic Control. 1996;41(3):358-367
  48. 48. Chilali M, Gahinet P, Apkarian P. Robust pole placement in LMI regions. IEEE Transactions on Automatic Control. 1999;44(12):2257-2270
  49. 49. Buchberger B. Gröbner bases and system theory. Multidimensional Systems and Signal Processing. 2001;12:223-251
  50. 50. Shin HS, Lall S. Optimal decentralized control of linear systems via Groebner bases and variable elimination. In: American Control Conference. Baltimore, MD, USA: Marriott Waterfront; 2010. pp. 5608-5613
  51. 51. Lin Z. Gröbner bases and applications in control and systems. In: IEEE, 7th International Conference On Control, Automation, Robotics And Vision (ICARCV’02). 2002. pp. 1077-1082
  52. 52. Lin Z, Xu L, Bose NK. A tutorial on Gröbner bases with applications in signals and systems. IEEE Transactions on Circuits and Systems. 2008;55(1):445-461
  53. 53. Pan VY. Univariate polynomials: nearly optimal algorithms for numerical factorization and root-finding. Elsevier Journal of Symbolic Computation. 2002;33(5):701-733
