
Algorithms for LQR via Static Output Feedback for Discrete-Time LTI Systems

Written By

Yossi Peretz

Submitted: 11 June 2019 Reviewed: 23 August 2019 Published: 27 September 2019

DOI: 10.5772/intechopen.89319

From the Edited Volume

Control Theory in Engineering

Edited by Constantin Volosencu, Ali Saghafinia, Xian Du and Sohom Chakrabarty


Abstract

Randomized and deterministic algorithms for the problem of LQR optimal control via static-output-feedback (SOF) for discrete-time systems are suggested in this chapter. The randomized algorithm is based on a recently introduced randomized optimization method named the Ray-Shooting Method, which efficiently solves the global minimization problem of continuous functions over compact, non-convex, and unconnected regions. The randomized algorithm presented here has a proof of convergence in probability to the global optimum. The suggested deterministic algorithm is based on the gradient method and thus can be proved to converge to a local optimum only. A comparison between the algorithms is provided, as well as the performance of the hybrid algorithm that combines both.

Keywords

  • control systems
  • optimal control
  • discrete-time systems
  • state-space models
  • NP-hard control problems
  • randomized algorithms
  • deterministic algorithms

1. Introduction

The application of static-output-feedbacks (SOFs) for linear-quadratic regulators (LQR) is very attractive, since SOFs are cheap and reliable, and their implementation is simple and direct, because their components have a direct physical interpretation in terms of sensor amplification rates and actuator activation power. Moreover, the long-term memory of dynamic feedbacks is useless for systems subject to random disturbances, fast dynamic loadings, or random bursts and impulses, and the application of state feedbacks is not always possible, due to the unavailability of full-state measurements (see, e.g., [1]). Also, the use of SOF avoids the need to reconstruct the state by a Kalman filter or by any other state reconstructor.

On the other hand, in practical applications, the entries of the needed SOFs are bounded, and since the problem of SOFs with interval constrained entries is NP-hard (see [2, 3]), one cannot expect the existence of a deterministic efficient (i.e., polynomial-time) algorithm to solve the problem. Randomized algorithms are thus natural solutions to the problem. The probabilistic and randomized methods for the constrained SOF problem and robust stabilization via SOFs (among other hard problems) are discussed in [4, 5, 6, 7]. For a survey of the SOF problem see [8], and for a recent survey of the robust SOF problem see [9].

The Ray-Shooting Method was recently introduced in [10], where it was used to derive the Ray-Shooting (RS) randomized algorithm for the minimal-gain SOF problem, with regional pole assignment, where the region can be non-convex and unconnected. The Ray-Shooting Method was successfully applied recently also to the following hard complexity control problems for continuous-time systems:

  • Structured and structured-sparse SOFs (see [11])

  • LQR via SOF for continuous-time LTI systems (see [12])

  • LQR-optimal, $H_\infty$-optimal, and $H_2$-optimal proportional-integral-derivative (PID) controllers (see [13])

  • Robust control via SOF (see [14])

The contribution of the research presented in the current chapter is as follows:

  1. The randomized algorithm presented here (which we call the RS algorithm) is based on the Ray-Shooting Method (see [10]), which, as opposed to smooth optimization methods, has the potential of finding a global optimum of continuous functions over compact, non-convex, and unconnected regions.

  2. The RS algorithm has a proof of convergence in probability and explicit complexity.

  3. Experience with the algorithm shows good quality of controllers (in terms of reduction of the LQR functional value with relatively small controller norms), a high percentage of success, and good run-times, for real-life systems. Thus, the suggested practical algorithm efficiently solves the problem of LQR via SOF for discrete-time systems.

  4. The RS algorithm does not need to solve any Riccati or quadratic matrix equations (QMEs) and thus can be applied to large systems.

  5. The RS algorithm is one of the few dealing with the problem of LQR via SOF for discrete-time systems.

  6. A deterministic algorithm for the problem, which generalizes the algorithm of Moerder and Calise [15] to discrete-time systems, is given (we call it the MC algorithm). The MC algorithm has a proof of convergence to a local optimum only, and it needs other algorithms for computing an initial stabilizing SOF.

  7. A comparison between the RS and the MC algorithms, as well as the performance of the hybrid algorithm, for real-life systems, is provided.

The remainder of the chapter is organized as follows:

In Section 2, we formulate the problem and give some useful lemmas (without proofs). In Section 3, we introduce the randomized algorithm for the problem of LQR via SOF for discrete-time LTI systems. Section 4 is devoted to the deterministic algorithm for the problem. In Section 5, we give the results of the algorithms for some real-life systems. Finally, in Section 6, we conclude with some remarks.


2. Preliminaries

Let a discrete-time system be given by

$$x_{k+1} = A x_k + B u_k, \qquad k = 0, 1, \ldots,$$
$$y_k = C x_k, \tag{1}$$

where $A \in \mathbb{R}^{p \times p}$, $B \in \mathbb{R}^{p \times q}$, $C \in \mathbb{R}^{r \times p}$, and $x_0 \in \mathbb{R}^{p}$. Let the LQR cost functional be defined by

$$J_{x_0}(u) = \sum_{k=0}^{\infty}\left(x_k^T Q x_k + u_k^T R u_k\right), \tag{2}$$

where $Q > 0$ and $R > 0$. Let $u_k = K y_k$ be the SOF, and let $A_c(K) \triangleq A - BKC$ denote the closed-loop matrix. Let $\mathcal{D}$ denote the open unit disk, let $0 < \alpha < 1$, and let $\mathcal{D}_\alpha$ denote the set of all $z \in \mathcal{D}$ with $|z| < 1 - \alpha$ (where $|z|$ is the absolute value of $z$). For a square matrix $Z$, we denote by $\sigma(Z)$ its spectrum. For any rectangular matrix $Z$, we denote by $Z^+$ its Moore-Penrose pseudo-inverse. By $\|Z\|_F = \operatorname{trace}\left(Z^T Z\right)^{1/2}$ we denote the Frobenius norm of $Z$, and by $\|Z\| = \max\left(\sigma\left(Z^T Z\right)\right)^{1/2}$ we denote the spectral norm of $Z$. By $L_Z$ and $R_Z$, we denote the (left and right) orthogonal projections $I - Z^+ Z$ and $I - Z Z^+$ onto the spaces $\operatorname{Ker}(Z)$ and $\operatorname{Ker}\left(Z^+\right)$, respectively. For a topological space $X$ and a subset $U \subseteq X$, we denote by $\operatorname{int}(U)$ the interior of $U$, i.e., the largest open set included in $U$. By $\bar{U}$ we denote the closure of $U$, i.e., the smallest closed set containing $U$, and by $\partial U = \bar{U} \setminus \operatorname{int}(U)$ we denote the boundary of $U$. Let $S^{q \times r}$ denote the set of all matrices $K \in \mathbb{R}^{q \times r}$ such that $\sigma(A_c) \subset \mathcal{D}$ (i.e., stable in the discrete-time sense), and let $S_\alpha^{q \times r}$ denote the set of all matrices $K \in \mathbb{R}^{q \times r}$ such that $\sigma(A_c) \subset \mathcal{D}_\alpha$. If the latter is nonempty, we say that $A_c$ is $\alpha$-stable, and we call $\alpha$ the degree of stability. Let $K \in S_\alpha^{q \times r}$ be given. Substitution of the SOF $u_k = K y_k = K C x_k$ into (2) yields:

$$J_{x_0}(K) = \sum_{k=0}^{\infty} x_k^T\left(Q + C^T K^T R K C\right) x_k. \tag{3}$$

Since $Q + C^T K^T R K C > 0$ and $A_c(K)$ is stable, it follows that the Stein equation

$$P - A_c(K)^T P A_c(K) = Q + C^T K^T R K C \tag{4}$$

has a unique solution $P > 0$, given by

$$P(K) = \operatorname{mat}\left(\left(I_p \otimes I_p - A_c(K)^T \otimes A_c(K)^T\right)^{-1} \operatorname{vec}\left(Q + C^T K^T R K C\right)\right). \tag{5}$$

Substitution of (4) into (3), together with $x_k = A_c(K)^k x_0$ and the fact that $A_c(K)$ is stable, leads to

$$J_{x_0}(K) = \sum_{k=0}^{\infty} x_k^T\left(P - A_c(K)^T P A_c(K)\right) x_k = \sum_{k=0}^{\infty} x_0^T \left(A_c(K)^T\right)^{k}\left(P - A_c(K)^T P A_c(K)\right) A_c(K)^{k} x_0 = x_0^T P(K)\, x_0 = \left\|P(K)^{1/2} x_0\right\|^2.$$

Thus, when $x_0$ is known, we search for $K \in S_\alpha^{q \times r}$ that minimizes the functional

$$J_{x_0}(K) = x_0^T P(K)\, x_0. \tag{6}$$

Let

$$\sigma_{\max}(K) \triangleq \max \sigma(P(K)). \tag{7}$$

Now, since $J_{x_0}(K) \le \|x_0\|^2\, \sigma_{\max}(K)$ for any $x_0 \ne 0$, with equality in the worst case, we have

$$\sup_{x_0 \ne 0} \frac{J_{x_0}(K)}{\|x_0\|^2} = \sup_{x_0 \ne 0} \frac{\left\|P(K)^{1/2} x_0\right\|^2}{\|x_0\|^2} = \left\|P(K)^{1/2}\right\|^2 = \|P(K)\| = \sigma_{\max}(K).$$

Thus, when $x_0$ is unknown, we search for $K \in S_\alpha^{q \times r}$ such that $\sigma_{\max}(K) = \|P(K)\|$ is minimal. Note that if $\lambda$ is an eigenvalue of $A_c(K)$ and $v$ is a corresponding eigenvector, then (4) yields $1 - |\lambda|^2 = \frac{v^*\left(Q + C^T K^T R K C\right) v}{v^* P(K) v} \ge \frac{v^* Q v}{v^* P(K) v} > 0$. Therefore, $|\lambda|^2 \le 1 - \frac{v^* Q v}{v^* P(K) v} < 1$, and thus minimizing $\sigma_{\max}(K)$ results in eigenvalues that get closer to the boundary of $\mathcal{D}$. Since $\alpha$, the degree of stability, is important for obtaining a satisfactory decay rate of the state to $0$ and for disturbance rejection, we let the user of the algorithms determine $\alpha$. Note that too high a value of $\alpha$ might result in the nonexistence of any SOF for the system, or might complicate the search for a starting SOF. Higher values of $\alpha$ result in higher optimal values of the LQR functional, i.e., higher energy consumption for decaying the disturbance $x_0$ to $0$.
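To fix ideas, the following is a minimal NumPy sketch (ours, not the chapter's code) of the quantities defined above: the closed loop $A_c(K) = A - BKC$, the $\alpha$-stability test, the solution $P(K)$ of the Stein equation (4) via the vec/Kronecker formula (5), and the functional $\sigma_{\max}(K)$ of (7).

```python
import numpy as np

def closed_loop(A, B, C, K):
    # A_c(K) = A - B K C
    return A - B @ K @ C

def is_alpha_stable(Ac, alpha):
    # sigma(Ac) must lie in D_alpha: every eigenvalue modulus below 1 - alpha
    return np.max(np.abs(np.linalg.eigvals(Ac))) < 1.0 - alpha

def P_of_K(A, B, C, K, Q, R):
    # Unique solution of P - Ac^T P Ac = Q + C^T K^T R K C, as in (5)
    Ac = closed_loop(A, B, C, K)
    p = A.shape[0]
    M = Q + C.T @ K.T @ R @ K @ C
    lhs = np.eye(p * p) - np.kron(Ac.T, Ac.T)
    vecP = np.linalg.solve(lhs, M.reshape(-1, order="F"))  # vec = column stacking
    return vecP.reshape((p, p), order="F")

def sigma_max(A, B, C, K, Q, R):
    # The functional (7): the largest eigenvalue of P(K)
    return np.max(np.linalg.eigvalsh(P_of_K(A, B, C, K, Q, R)))
```

Note that the dense $p^2 \times p^2$ solve above costs $O(p^6)$ operations, which matches the $p^6$ term in the complexity bound of Remark 3.1 below.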

The functionals $J_{x_0}(K)$ and $\sigma_{\max}(K)$ are generally not convex, since their domain of definition $S^{q \times r}$ (and therefore $S_\alpha^{q \times r}$) is generally non-convex. Necessary conditions for optimality for continuous-time systems were given as three QMEs in [15, 16, 17, 18]. Necessary and sufficient conditions for optimality for continuous-time systems, based on linear matrix inequalities (LMIs), were given in [19, 20, 21]. However, algorithms based on these formulations are generally not guaranteed to converge, seemingly because of the non-convexity of the coupled matrix equations or inequalities, and when they do converge, it is to a local optimum only.

In the sequel, we will use the following lemmas, given here without proofs.

Lemma 2.1. We have:

  1. The equation $AX = B$ has solutions if and only if $A A^+ B = B$ or, equivalently, if and only if $R_A B = 0$. In this case, the set of all solutions is given by

$$X = A^+ B + L_A Z,$$

where $Z$ is arbitrary.

  2. The equation $XA = B$ has solutions if and only if $B A^+ A = B$ or, equivalently, if and only if $B L_A = 0$. In this case, the set of all solutions is given by

$$X = B A^+ + Z R_A,$$

where $Z$ is arbitrary.
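The solution formula of Lemma 2.1 (item 1) is easy to check numerically; a short illustrative NumPy snippet, where consistency of $AX = B$ holds by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))      # a wide matrix
X_true = rng.standard_normal((5, 2))
B = A @ X_true                       # guarantees consistency: A A^+ B = B

A_pinv = np.linalg.pinv(A)           # Moore-Penrose pseudo-inverse A^+
L_A = np.eye(5) - A_pinv @ A         # orthogonal projection onto Ker(A)
Z = rng.standard_normal((5, 2))      # arbitrary
X = A_pinv @ B + L_A @ Z             # a solution per Lemma 2.1, item 1
print(np.allclose(A @ X, B))         # True, since A L_A = 0
```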

Lemma 2.2. We have:

  1. Let $A$, $B$, $X$ be matrices of sizes $p \times q$, $r \times p$, $q \times r$, respectively. Then

$$\nabla_X \operatorname{trace}(AXB) = A^T B^T.$$

  2. Let $A$, $B$, $C$, $X$ be matrices of sizes $p \times q$, $r \times r$, $q \times p$, $r \times q$, respectively. Then

$$\nabla_X \operatorname{trace}\left(A X^T B X C\right) = B^T X A^T C^T + B X C A.$$
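The gradient formulas of Lemma 2.2 can likewise be verified against central finite differences. An illustrative sketch for item 2, with the sizes stated in the lemma:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, r = 3, 4, 5
A = rng.standard_normal((p, q))
B = rng.standard_normal((r, r))
C = rng.standard_normal((q, p))
X = rng.standard_normal((r, q))

f = lambda X: np.trace(A @ X.T @ B @ X @ C)
G = B.T @ X @ A.T @ C.T + B @ X @ C @ A   # claimed gradient (Lemma 2.2, item 2)

eps = 1e-6
G_fd = np.zeros_like(X)
for i in range(r):
    for j in range(q):
        E = np.zeros_like(X)
        E[i, j] = eps
        G_fd[i, j] = (f(X + E) - f(X - E)) / (2 * eps)  # central difference
print(np.allclose(G, G_fd, atol=1e-4))    # True
```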

3. The randomized Ray-Shooting Method-based algorithm

The Ray-Shooting Method works as follows for general function minimization. Let $f(x) \ge 0$ be a continuous function defined over some compact set $X \subset \mathbb{R}^n$. Let $\epsilon > 0$ be given, and assume that we want to compute $x^* \in X$ such that $y^* \triangleq f(x^*) = \min_{x \in X} f(x)$, up to $\epsilon$, i.e., to find a point $(x, y)$ in the set $S_\epsilon = \left\{(x, y) \mid x \in X,\ y = f(x),\ f(x^*) \le y \le f(x^*) + \epsilon\right\}$. Let $x_0 \in X$ be given, let $y_0 \triangleq f(x_0)$, and let $S_0 = \left\{(x, y) \mid x \in X,\ f(x) \le y \le y_0\right\}$ denote the search space, which is a subset of the epigraph of $f$. Let $D_0 = \left\{(x, y) \mid x \in X,\ 0 \le y \le y_0\right\}$ denote the cylinder enclosed between $X$ and the level $y_0$. Let $L_0 = \left\{(x, y) \mid x \in X,\ y = 0\right\} \cup \left\{(x, y) \mid x \in \partial X,\ 0 \le y \le y_0\right\}$. Let $z_0 \triangleq (x_0, y_0)$, and note that $z_0 \in S_0$. Then we choose $w_0 \in L_0$ randomly, according to some distribution, and we define the ray $z(t) \triangleq (1 - t) z_0 + t w_0$, $0 \le t \le 1$. We scan the ray and choose the largest $0 \le t_0 \le 1$ such that $(1 - t_0) z_0 + t_0 w_0 \in S_0$ (actually, we scan the ray from $t = 1$ at equally spaced points and take the first $t$ for which this happens). We define $z_1 \triangleq (1 - t_0) z_0 + t_0 w_0$ and update the sets $S_0$, $D_0$, and $L_0$ by replacing $y_0$ with $y_1$, where $(x_1, y_1) = z_1$. Let $S_1$, $D_1$, and $L_1$ denote the updated sets. We continue the process similarly from $z_1 \in S_1$, obtaining a sequence $z_n \in S_n$, $n = 0, 1, \ldots$. Note that $S_\epsilon \subseteq S_{n+1} \subseteq S_n$ for any $n = 0, 1, \ldots$, unless $z_n \in S_\epsilon$ for some $n$ (in which case the process is stopped). One can show that the sequence $(z_n)_{n=0}^{\infty}$ converges (in probability) to a point of $S_\epsilon$. Note that rays shot from points of local minimum have positive probability of hitting $S_\epsilon$ (under the mild assumption below), because any global minimum is visible from any local minimum. Moreover, for a given level of certainty, we hit $S_\epsilon$ within a finite number of iterations (see Remark 3.1 below). Practically, we may stop the algorithm if no improvement is detected within a window of 20% of the allowed number of iterations. The function need not be smooth or even continuous; it only needs to be well defined and measurable over the compact domain $X$, and $S_\epsilon$ should have non-negligible measure (i.e., it should have some positive volume). Obviously, global minimum points belong to the boundary of the search space $S_0$; in fact, such points are exactly where the distance between the compact sets $X \times \{0\}$ and $S_0$ in $\mathbb{R}^{n+1}$ is attained. This is essential for the efficiency of the Ray-Shooting Method, although we raised the search-space dimension from $n$ to $n + 1$.

In order to apply the Ray-Shooting Method to the LQR via SOF problem, we need the following definitions. Assume that $K_0 \in \operatorname{int}(S_\alpha)$ was found by the RS algorithm (see [10]) or by any other method (see [22, 23, 24]). Let $h > 0$, and let $U_0$ be a unit vector (actually a matrix, but we consider here the space of matrices as a normed vector space) with respect to the Frobenius norm, i.e., $\|U_0\|_F = 1$. Let $L_0 = K_0 + h U_0$, and let $L$ be the hyperplane defined by the points $L_0 + V$, where $\langle V, U_0 \rangle_F = 0$. Here $L$ is the tangent space at $L_0$ to the closed ball $\overline{B(K_0, h)}$ centered at $K_0$ with radius $h$, with respect to the Frobenius norm on $\mathbb{R}^{q \times r}$. Let $r > 0$, and let $R$ denote the set of all $F \in L$ such that $\|F - L_0\|_F \le r$. Let $R_\epsilon = R + \overline{B(0, \epsilon)}$, where $\overline{B(0, \epsilon)}$ denotes the closed ball centered at $0$ with radius $\epsilon$ ($0 < \epsilon \le \frac{1}{2}$). Let $D_0 = \operatorname{chull}(K_0, R_\epsilon)$ denote the convex hull (a cone) with vertex $K_0$ and base $R_\epsilon$. Let $S_\alpha^0 = S_\alpha \cap D_0$, and note that $S_\alpha^0$ is compact (but generally not convex). We wish to minimize the continuous function $\sigma_{\max}(K)$ (or the continuous function $J_{x_0}(K)$, when $x_0$ is known) over the compact set $S_\alpha \cap \overline{B(K_0, h)}$. Let $K^*$ denote a point of $S_\alpha \cap \overline{B(K_0, h)}$ where the minimum of $\sigma_{\max}(K)$ is attained. Obviously, $K^* \in D_0$ for some direction $U_0$ from $K_0$.

Ray-Shooting Algorithm 1 for the LQR via SOF problem works as follows. We start with a point $K_0 \in \operatorname{int}(S_\alpha)$, found by the RS algorithm (see [10]). Assuming that $K^* \in D_0$, the inner loop ($j = 1, \ldots, n$) uses the Ray-Shooting Method in order to find an approximation of the global minimum of the function $\sigma_{\max}(K)$ over $S_\alpha^0$, the portion of $S_\alpha$ bounded in the cone $D_0$. The proof of convergence in probability of the inner loop and its complexity (under the above-mentioned assumption) can be found in [10] (see also [11]). In the inner loop, we choose a search direction by choosing a point $F$ in $R_\epsilon$, the base of the cone $D_0$. Next, in the innermost loop ($k = s, \ldots, 0$), we scan the ray $K(t) \triangleq (1 - t) K_0 + t F$ and record the best controller on it. Repeating this sufficiently many times, we reach $K^*$ (or an $\epsilon$-neighborhood of it) with high probability, under the assumption that $K^* \in D_0$ (see Remark 3.1).

The reasoning behind the Ray-Shooting Method is that sampling the whole search space would lead to the probabilistic method, which is doomed to the "curse of dimensionality"; this is exactly what the method tries to avoid. The avoidance is achieved by slicing the search space into covering cones ($m$ is the number of cones allowed), because any point in a cone is visible from its vertex. At each cone we shoot rays ($n$ is the number of rays per cone) from its vertex toward its base, where each ray is sampled from its head toward its tail, while updating the best point found so far. Note that the global minimum of $\sigma_{\max}(K)$ over any compact subset of $S_\alpha$ is attained on the boundary of the related portion of the epigraph of $\sigma_{\max}(K)$. Therefore, we can break the innermost loop at the first moment we find an improvement in $\sigma_{\max}(K)$. This bypasses the need to sample the whole search space (although we raise the search-space dimension by 1) and explains the efficiency of the Ray-Shooting Method in finding a global optimum. Another advantage of the Ray-Shooting Method, specific to the problem of LQR via SOF, is that the search is concentrated in the parameter space (the $qr$-dimensional space where the $K$'s reside) and not in the certificate space (the $p^2$-dimensional space where the Lyapunov matrices $P$ reside). Thus, the method avoids the need to solve any Riccati equations, LMIs, or BMIs, which might make a crucial difference for large-scale systems (i.e., where $p^2 \gg qr$).
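As a small concrete detail, step 6 of Algorithm 1 below requires drawing $U_0$ uniformly at random from the Frobenius unit sphere. The chapter does not specify the sampler; a standard choice (our assumption) is to normalize a matrix of i.i.d. standard Gaussians, which yields the uniform distribution on the sphere:

```python
import numpy as np

def random_unit_direction(q, r, rng):
    # U_0 with ||U_0||_F = 1, uniformly distributed on the Frobenius unit sphere
    G = rng.standard_normal((q, r))
    return G / np.linalg.norm(G, "fro")
```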

Algorithm 1. The Ray-Shooting Algorithm for LQR via SOF for discrete-time systems.

Require: An algorithm for deciding $\alpha$-stability, an algorithm for computing $\sigma_{\max}(K)$, and algorithms for general linear-algebra operations.

Input: $0 < \epsilon \le \frac{1}{2}$, $0 < \alpha < 1$, $h > 0$, $r > 0$, integers $m, n, s > 0$, controllable pairs $(A, B)$ and $(A^T, C^T)$, matrices $Q > 0$, $R > 0$, and $K_0 \in \operatorname{int}(S_\alpha)$.

Output: $K \in S_\alpha$ as close as possible to $K^*$.

1. compute $P(K_0)$ as in (5)
2. $P^{best} \leftarrow P(K_0)$
3. $\sigma_{\max}^{best} \leftarrow \max \sigma\left(P^{best}\right)$
4. $\upsilon \leftarrow 1$
5. for $i = 1$ to $m$ do
6.   choose $U_0$ such that $\|U_0\|_F = 1$, uniformly at random
7.   $L_0 \leftarrow K_0 + h U_0$
8.   for $j = 1$ to $n$ do
9.     choose $F \in R_\epsilon$, uniformly at random
10.     for $k = s$ downto $0$ do
11.       $t \leftarrow k/s$
12.       $K(t) \leftarrow (1 - t) K_0 + t F$
13.       if $K(t) \in S_\alpha$ then
14.         compute $P(K(t))$ as in (5)
15.         $\sigma_{\max}(K(t)) \leftarrow \max \sigma\left(P(K(t))\right)$
16.         if $\sigma_{\max}(K(t)) < \sigma_{\max}^{best}$ then
17.           $K^{best} \leftarrow K(t)$
18.           $P^{best} \leftarrow P(K(t))$
19.           $\sigma_{\max}^{best} \leftarrow \sigma_{\max}(K(t))$
20.         end if
21.       end if
22.     end for
23.   end for
24.   if $i > \upsilon \left\lceil e \sqrt{2 \pi q r} \right\rceil$ then
25.     $K_0 \leftarrow K^{best}$
26.     $\upsilon \leftarrow \upsilon + 1$
27.   end if
28. end for
29. return $K^{best}$, $P^{best}$, $\sigma_{\max}^{best}$
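For concreteness, the following compact Python sketch mirrors Algorithm 1, reusing the helpers sketched in Section 2. The way $F$ is drawn from $R_\epsilon$ (a random point of the cone base inside the tangent hyperplane) is one plausible reading of the construction above, not necessarily the author's exact implementation:

```python
import numpy as np

def rs_lqr_sof(A, B, C, Q, R, K0, alpha, h, r_base, eps, m, n, s, rng):
    q, r = K0.shape
    K_best = K0.copy()
    sig_best = sigma_max(A, B, C, K0, Q, R)
    K_center, v = K0.copy(), 1
    switch = int(np.ceil(np.e * np.sqrt(2 * np.pi * q * r)))
    for i in range(1, m + 1):
        U0 = rng.standard_normal((q, r))          # step 6: random unit direction
        U0 /= np.linalg.norm(U0, "fro")
        L0 = K_center + h * U0                    # step 7
        for _ in range(n):
            V = rng.standard_normal((q, r))       # step 9: pick F in R_eps
            V -= np.sum(V * U0) * U0              # project onto the tangent hyperplane L
            V *= (r_base + eps) * rng.random() / np.linalg.norm(V, "fro")
            F = L0 + V
            for k in range(s, -1, -1):            # step 10: scan the ray from t = 1
                t = k / s
                Kt = (1 - t) * K_center + t * F
                if is_alpha_stable(closed_loop(A, B, C, Kt), alpha):
                    sig = sigma_max(A, B, C, Kt, Q, R)
                    if sig < sig_best:
                        K_best, sig_best = Kt, sig
        if i > v * switch:                        # steps 24-26: recenter at the best
            K_center, v = K_best.copy(), v + 1
    return K_best, sig_best
```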

Remark 3.1. In [12] it is shown that by taking $m = \left\lceil e \sqrt{2 \pi q r} \right\rceil$ iterations in the outer loop, we have $K^* \in D_0$, for some direction $U_0$, almost surely. Let $S_{\alpha,\epsilon}^0$ denote the set $\left\{K \in S_\alpha^0 \mid \sigma_{\max}(K^*) \le \sigma_{\max}(K) \le \sigma_{\max}(K^*) + \epsilon\right\}$. Then the total number of arithmetic operations of the RS algorithm that guarantees a probability of at least $1 - \beta$ of hitting $S_{\alpha,\epsilon}^0$ is $O\!\left(\ln\!\left(\beta^{-1}\right) \frac{h}{\epsilon}\left(\frac{r}{r_\epsilon}\right)^{q_0 r_0}\left(\max(q, r)^3 + p^6\right)\right)$, for systems with $q \le q_0$, $r \le r_0$ for fixed $q_0, r_0$, where $r_\epsilon$ is the radius of the base of a cone with height $\epsilon$ that has the same volume as $S_{\alpha,\epsilon}^0$; see [10, 11, 12]. This is a polynomial-time algorithm, by restricting the input and by regarding $\frac{r}{r_\epsilon}$ as the size of the problem.


4. The deterministic algorithm

The deterministic algorithm we introduce here as Algorithm 2 (which we call the MC algorithm) generalizes the algorithm of Daniel D. Moerder and Anthony A. Calise (see [15]) to the case of discrete-time systems. To the best of our knowledge, this is the best algorithm for LQR via SOF published so far, in terms of rate of convergence (to a local minimum).

Here, we wish to minimize the LQR functional

$$J_{x_0}(P) = x_0^T P x_0, \tag{8}$$

under the constraints

$$Y(K, P) \triangleq Q + C^T K^T R K C - P + A_c(K)^T P A_c(K) = 0, \qquad P > 0. \tag{9}$$

Since $Y^T = Y$, there exists an orthogonal matrix $U$ such that $\hat{Y} = U^T Y U$ is diagonal. Now, minimizing (8) under the constraints (9) is equivalent to minimizing

$$L(K, P, S) = \operatorname{trace}\left(x_0^T P x_0\right) + \sum_{i=1}^{p} \hat{S}_{i,i} \hat{Y}_{i,i},$$

under the constraint $P > 0$, where the $\hat{S}_{i,i}$ are the Lagrange multipliers. We have

$$L(K, P, S) = \operatorname{trace}\left(x_0^T P x_0\right) + \sum_{i=1}^{p} \hat{S}_{i,i} \hat{Y}_{i,i} = \operatorname{trace}\left(x_0^T P x_0\right) + \operatorname{trace}\left(\hat{S} \hat{Y}\right) = \operatorname{trace}\left(x_0^T P x_0\right) + \operatorname{trace}\left(\hat{S} U^T Y U\right) = \operatorname{trace}\left(x_0^T P x_0\right) + \operatorname{trace}\left(U \hat{S} U^T Y\right) = \operatorname{trace}\left(x_0^T P x_0\right) + \operatorname{trace}(S Y),$$

where $S = U \hat{S} U^T$. Note that $S^T = S$. Let the Lagrangian be defined by

$$L(K, P, S) = \operatorname{trace}\left(x_0^T P x_0\right) + \operatorname{trace}\left(S\, Y(K, P)\right), \tag{10}$$

for any $K$, any $P > 0$, and any $S$ such that $S^T = S$. The necessary conditions for optimality are $\nabla_K L = 0$, $\nabla_P L = 0$, and $\nabla_S L = Y^T = Y = 0$.

Now, using Lemma 2.2, we have

$$\nabla_P L = 0 \iff x_0 x_0^T - S^T + A_c S^T A_c^T = 0 \iff x_0 x_0^T - S + A_c S A_c^T = 0 \iff S - A_c S A_c^T = x_0 x_0^T \iff \left(I_p \otimes I_p - A_c \otimes A_c\right) \operatorname{vec}(S) = \operatorname{vec}\left(x_0 x_0^T\right) \iff S = \operatorname{mat}\left(\left(I_p \otimes I_p - A_c \otimes A_c\right)^{-1} \operatorname{vec}\left(x_0 x_0^T\right)\right),$$

where the last step is valid because $\sigma(A_c) \subset \mathcal{D}$. Note that the last equation, together with the stability of $A_c$, implies that $S \ge 0$.
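In the same vec/Kronecker style as $P(K)$ in Section 2, a one-function sketch (illustrative) for this $S$:

```python
import numpy as np

def S_of_K(A, B, C, K, x0):
    # Unique solution of S - Ac S Ac^T = x0 x0^T, for a stable closed loop
    Ac = closed_loop(A, B, C, K)
    p = A.shape[0]
    lhs = np.eye(p * p) - np.kron(Ac, Ac)
    vecS = np.linalg.solve(lhs, np.outer(x0, x0).reshape(-1, order="F"))
    return vecS.reshape((p, p), order="F")
```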

We also have

$$\begin{aligned} \nabla_K L &= \nabla_K \operatorname{trace}(S Y)\\ &= \nabla_K \operatorname{trace}\left(S\left(Q + C^T K^T R K C - P + \left(A^T - C^T K^T B^T\right) P (A - B K C)\right)\right)\\ &= \nabla_K \operatorname{trace}\left(S C^T K^T R K C - S A^T P B K C - S C^T K^T B^T P A + S C^T K^T B^T P B K C\right)\\ &= \nabla_K \operatorname{trace}\left(S C^T K^T R K C - S A^T P B K C - A^T P^T B K C S^T + S C^T K^T B^T P B K C\right)\\ &= R^T K C S^T C^T + R K C S C^T - B^T P^T A S^T C^T - B^T P A S C^T + B^T P^T B K C S^T C^T + B^T P B K C S C^T\\ &= 2 R K C S C^T - 2 B^T P A S C^T + 2 B^T P B K C S C^T, \end{aligned}$$

where we used Lemma 2.2 and $S^T = S$, $P^T = P$, $R^T = R$.

Therefore,

$$\nabla_K L = 0 \iff R K C S C^T - B^T P A S C^T + B^T P B K C S C^T = 0 \iff \left(R + B^T P B\right) K C S C^T = B^T P A S C^T \iff K C S C^T = \left(R + B^T P B\right)^{-1} B^T P A S C^T.$$

Thus, if $C S C^T$ is invertible, then

$$K = \left(R + B^T P B\right)^{-1} B^T P A S C^T \left(C S C^T\right)^{-1}. \tag{11}$$

Otherwise, if

$$\left(R + B^T P B\right)^{-1} B^T P A S C^T L_{C S C^T} = 0,$$

which is equivalent to

$$B^T P A S C^T L_{C S C^T} = 0, \tag{12}$$

then

$$K = \left(R + B^T P B\right)^{-1} B^T P A S C^T \left(C S C^T\right)^+ + Z R_{C S C^T}, \tag{13}$$

where $Z$ is an arbitrary $q \times r$ matrix (and we may take $Z = 0$, unless some other constraints on $K$ are needed). Note that if condition (12) does not hold, then $\nabla_K L \ne 0$. We conclude with the following theorem:

Theorem 4.1. Assume that $L(K, P, S)$ given by (10) is minimized locally at some point $(K^*, P^*, S^*)$ with $P^* > 0$ and $S^{*T} = S^*$. Then

$$\begin{aligned} K^* &= \left(R + B^T P^* B\right)^{-1} B^T P^* A S^* C^T \left(C S^* C^T\right)^+ + Z R_{C S^* C^T}, \text{ for some } q \times r \text{ matrix } Z,\\ P^* &= \operatorname{mat}\left(\left(I_p \otimes I_p - A_c(K^*)^T \otimes A_c(K^*)^T\right)^{-1} \operatorname{vec}\left(Q + C^T K^{*T} R K^* C\right)\right),\\ S^* &= \operatorname{mat}\left(\left(I_p \otimes I_p - A_c(K^*) \otimes A_c(K^*)\right)^{-1} \operatorname{vec}\left(x_0 x_0^T\right)\right), \end{aligned} \tag{14}$$

where $A_c(K^*) = A - B K^* C$.

Proof:

Since $L(K, P, S)$ is minimal in some neighborhood of $(K^*, P^*, S^*)$, it follows that $\nabla_K L(K^*, P^*, S^*) = 0$, $\nabla_P L(K^*, P^*, S^*) = 0$, and $\nabla_S L(K^*, P^*, S^*) = Y^T(K^*, P^*) = Y(K^*, P^*) = 0$.

The condition $\nabla_S L(K^*, P^*, S^*) = Y^T(K^*, P^*) = Y(K^*, P^*) = 0$ is just

$$P^* - A_c(K^*)^T P^* A_c(K^*) = Q + C^T K^{*T} R K^* C,$$

which, together with $P^* > 0$ and $Q > 0$, $R > 0$, implies that $A_c(K^*) = A - B K^* C$ is stable. Now, since $\sigma(A_c(K^*)) \subset \mathcal{D}$, it follows that $I_p \otimes I_p - A_c(K^*)^T \otimes A_c(K^*)^T$ is invertible, and therefore

$$P^* = \operatorname{mat}\left(\left(I_p \otimes I_p - A_c(K^*)^T \otimes A_c(K^*)^T\right)^{-1} \operatorname{vec}\left(Q + C^T K^{*T} R K^* C\right)\right).$$

Since $I_p \otimes I_p - A_c(K^*) \otimes A_c(K^*)$ is invertible as well, $\nabla_P L(K^*, P^*, S^*) = 0$ implies that

$$S^* = \operatorname{mat}\left(\left(I_p \otimes I_p - A_c(K^*) \otimes A_c(K^*)\right)^{-1} \operatorname{vec}\left(x_0 x_0^T\right)\right).$$

Finally, $\nabla_K L(K^*, P^*, S^*) = 0$ implies that $K^* C S^* C^T = \left(R + B^T P^* B\right)^{-1} B^T P^* A S^* C^T$, which, in view of Lemma 2.1, implies $\left(R + B^T P^* B\right)^{-1} B^T P^* A S^* C^T L_{C S^* C^T} = 0$ and

$$K^* = \left(R + B^T P^* B\right)^{-1} B^T P^* A S^* C^T \left(C S^* C^T\right)^+ + Z R_{C S^* C^T},$$

where $Z$ is some $q \times r$ matrix.

Note that the equations are tightly coupled, in the sense that $P^*$ and $S^*$ need $K^*$, while $K^*$ needs $P^*$ and $S^*$. Note also the cubic dependencies (which can be made quadratic by introducing new variables). These make the related QMEs non-convex and, therefore, hard to compute.

Remark 4.1. When $x_0$ is unknown, it is customary to assume that $x_0$ is uniformly distributed on the unit sphere, which implies that $E\left(x_0 x_0^T\right) = I_p$, where $E$ is the expectation operator. Thus, changing the problem to that of minimizing $E\left(J_{x_0}(P)\right)$ amounts to replacing $S^*$ with

$$E(S^*) = \operatorname{mat}\left(\left(I_p \otimes I_p - A_c(K^*) \otimes A_c(K^*)\right)^{-1} \operatorname{vec}\left(I_p\right)\right) > 0.$$

Therefore, no further change in Algorithm 2 is needed.
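Combining (13) (with $Z = 0$) and Remark 4.1 gives the candidate gain that drives Algorithm 2 below. A sketch, reusing the Section 2 helpers (ours, not the published code):

```python
import numpy as np

def ES_of_K(A, B, C, K):
    # E(S) of Remark 4.1: right-hand side I_p instead of x0 x0^T
    Ac = closed_loop(A, B, C, K)
    p = A.shape[0]
    lhs = np.eye(p * p) - np.kron(Ac, Ac)
    vecS = np.linalg.solve(lhs, np.eye(p).reshape(-1, order="F"))
    return vecS.reshape((p, p), order="F")

def candidate_gain(A, B, C, K, Q, R):
    # The gain suggested by (13) with Z = 0, at the current iterate K
    P = P_of_K(A, B, C, K, Q, R)
    S = ES_of_K(A, B, C, K)
    G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A @ S @ C.T)
    return G @ np.linalg.pinv(C @ S @ C.T)
```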

Remark 4.2. The convergence of Algorithm 2 to a local minimum can be proved similarly to the proof appearing in [15], under the assumptions that $S_\alpha^{q \times r}$ is nonempty and that $\left(Q^{1/2}, A\right)$ is detectable (here we do not need the latter condition, because of the assumption that $Q > 0$). The convergence can actually be proved for the more general problem that adds $\|K\|_F^2 = \operatorname{trace}\left(K^T K\right)$ to the LQR functional, thus minimizing also the Frobenius norm of $K$. In this context, note that adding $\|K\|_2^2 = \max \sigma\left(K^T K\right)$ to the LQR functional would break the argument, and a more general proof would be needed, because the proof appearing in [15] requires a $C^1$-smooth function of $K$, while $\|K\|_2^2 = \sigma_{\max}\left(K^T K\right)$ is continuous but not Lipschitz continuous. The RS algorithm, in contrast, can use any continuous function of $K$ and can deal also with sparse SOFs for LQR and with regional pole-placement SOFs for LQR.

Example 4.1. In the following simple example, we illustrate the notions appearing in the definition of the RS algorithm, and we demonstrate its operation. Consider the unstable system

$$x_{k+1} = \begin{bmatrix} 2 & 1\\ 0 & -\frac{1}{2} \end{bmatrix} x_k + \begin{bmatrix} 1\\ 1 \end{bmatrix} u_k, \qquad k = 0, 1, \ldots,$$
$$y_k = x_k,$$

where we look for a SOF $K$ stabilizing the system while reducing the LQR functional (2) with $Q = I$, $R = 1$. Let $K = \begin{bmatrix} k_1 & k_2 \end{bmatrix}$; then

$$A_c(K) = A - B K = \begin{bmatrix} 2 - k_1 & 1 - k_2\\ -k_1 & -\frac{1}{2} - k_2 \end{bmatrix},$$

with characteristic polynomial $z^2 + \left(k_1 + k_2 - \frac{3}{2}\right) z + \left(\frac{3}{2} k_1 - 2 k_2 - 1\right)$. Applying the Y. Bistritz stability criterion (see [25]), we have

$$\upsilon = \operatorname{Var}\left(\frac{5}{2} k_1 - k_2 - \frac{3}{2},\ -\frac{3}{2} k_1 + 2 k_2 + 2,\ \frac{1}{2} k_1 - 3 k_2 + \frac{3}{2}\right),$$

where $\upsilon$ is the number of sign variations in the sequence. According to the Bistritz criterion, the system is stable if and only if $\upsilon = 0$. We conclude that $S$ is the set of all $K$ such that $\frac{5}{2} k_1 - k_2 - \frac{3}{2} > 0$, $-\frac{3}{2} k_1 + 2 k_2 + 2 > 0$, $\frac{1}{2} k_1 - 3 k_2 + \frac{3}{2} > 0$, or $\frac{5}{2} k_1 - k_2 - \frac{3}{2} < 0$, $-\frac{3}{2} k_1 + 2 k_2 + 2 < 0$, $\frac{1}{2} k_1 - 3 k_2 + \frac{3}{2} < 0$, where the last branch is empty (which could have made the set non-convex). The set $S$ appears in Figure 1 as the blue region, where the golden star is the analytic globally optimal solution $K^* = \begin{bmatrix} 1.09473459 & 0.36138828 \end{bmatrix}$ (computed by the related discrete algebraic Riccati equation).
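This example is small enough to check numerically. A sketch reusing `sigma_max` from Section 2 (note that the minus sign in $A$ and the three stability quantities are as reconstructed above):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, -0.5]])
B = np.array([[1.0], [1.0]])
C = np.eye(2)
Q, R = np.eye(2), np.eye(1)

k1, k2 = 1.09473459, 0.36138828            # the reported optimum K*
tests = (2.5 * k1 - k2 - 1.5,
         -1.5 * k1 + 2 * k2 + 2,
         0.5 * k1 - 3 * k2 + 1.5)
print(all(t > 0 for t in tests))           # True: K* lies in the region S
K_star = np.array([[k1, k2]])
print(sigma_max(A, B, C, K_star, Q, R))    # approx. 5.9551, per the text
```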

Figure 1.

The stability region $S$.

Algorithm 2. The MC algorithm for LQR via SOF for discrete-time systems.

Require: An algorithm for deciding $\alpha$-stability, an algorithm for computing $\sigma_{\max}(K)$, and algorithms for general linear-algebra operations.

Input: $0 < \epsilon \le \frac{1}{2}$, $0 < \alpha < 1$, integers $m, s > 0$, controllable pairs $(A, B)$ and $(A^T, C^T)$, matrices $Q > 0$, $R > 0$, and $K_0 \in \operatorname{int}(S_\alpha)$.

Output: $K \in S_\alpha$ that locally minimizes $\sigma_{\max}(K)$.

1. $j \leftarrow 0$; $A_0 \leftarrow A - B K_0 C$
2. $P_0 \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A_0^T \otimes A_0^T\right)^{-1} \operatorname{vec}\left(Q + C^T K_0^T R K_0 C\right)\right)$
3. $S_0 \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A_0 \otimes A_0\right)^{-1} \operatorname{vec}\left(I_p\right)\right)$; $\sigma_{\max}(K_0) \leftarrow \max \sigma(P_0)$
4. $\Delta K_0 \leftarrow \left(R + B^T P_0 B\right)^{-1} B^T P_0 A S_0 C^T \left(C S_0 C^T\right)^+ - K_0$
5. flag $\leftarrow 0$
6. for $k = 0$ to $s$ do
7.   $t \leftarrow k/s$
8.   $K(t) \leftarrow K_0 + t\, \Delta K_0$
9.   if $K(t) \in S_\alpha$ then
10.     $A_t \leftarrow A - B K(t) C$
11.     $P_t \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A_t^T \otimes A_t^T\right)^{-1} \operatorname{vec}\left(Q + C^T K(t)^T R K(t) C\right)\right)$
12.     $S_t \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A_t \otimes A_t\right)^{-1} \operatorname{vec}\left(I_p\right)\right)$; $\sigma_{\max}(K(t)) \leftarrow \max \sigma(P_t)$
13.     if $\sigma_{\max}(K(t)) < \sigma_{\max}(K_0)$ then
14.       $K_1 \leftarrow K(t)$; $A_1 \leftarrow A - B K_1 C$; $P_1 \leftarrow P_t$; $S_1 \leftarrow S_t$; $\sigma_{\max}(K_1) \leftarrow \sigma_{\max}(K(t))$
15.       flag $\leftarrow 1$
16.     end if
17.   end if
18. end for
19. if flag $== 1$ then
20.   while $\sigma_{\max}(K_j) - \sigma_{\max}(K_{j+1}) \ge \epsilon$ and $j < m$ do
21.     $\Delta K_j \leftarrow \left(R + B^T P_j B\right)^{-1} B^T P_j A S_j C^T \left(C S_j C^T\right)^+ - K_j$
22.     for $k = 0$ to $s$ do
23.       $t \leftarrow k/s$
24.       $K(t) \leftarrow K_j + t\, \Delta K_j$
25.       if $K(t) \in S_\alpha$ then
26.         $A_t \leftarrow A - B K(t) C$
27.         $P_t \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A_t^T \otimes A_t^T\right)^{-1} \operatorname{vec}\left(Q + C^T K(t)^T R K(t) C\right)\right)$
28.         $S_t \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A_t \otimes A_t\right)^{-1} \operatorname{vec}\left(I_p\right)\right)$; $\sigma_{\max}(K(t)) \leftarrow \max \sigma(P_t)$
29.         if $\sigma_{\max}(K(t)) < \sigma_{\max}(K_j)$ then
30.           $K_{j+1} \leftarrow K(t)$; $A_{j+1} \leftarrow A - B K_{j+1} C$; $P_{j+1} \leftarrow P_t$; $S_{j+1} \leftarrow S_t$; $\sigma_{\max}(K_{j+1}) \leftarrow \sigma_{\max}(K(t))$
31.         end if
32.       end if
33.     end for
34.     $j \leftarrow j + 1$
35.   end while
36. end if
37. return $K^{best} \leftarrow K_j$, $P^{best} \leftarrow P_j$, $\sigma_{\max}^{best} \leftarrow \sigma_{\max}(K_j)$
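A compact sketch of the MC iteration, reusing `candidate_gain` and the Section 2 helpers; the line-search grid and the stopping rule are simplified readings of the listing above, not the exact published code:

```python
import numpy as np

def mc_lqr_sof(A, B, C, Q, R, K0, alpha, eps, m, s):
    Kj = K0.copy()
    sig_j = sigma_max(A, B, C, Kj, Q, R)
    for _ in range(m):
        dK = candidate_gain(A, B, C, Kj, Q, R) - Kj   # step toward the gain of (13)
        best_t, best_sig = 0.0, sig_j
        for k in range(s + 1):                        # line search on t in [0, 1]
            t = k / s
            Kt = Kj + t * dK
            if is_alpha_stable(closed_loop(A, B, C, Kt), alpha):
                sig = sigma_max(A, B, C, Kt, Q, R)
                if sig < best_sig:
                    best_t, best_sig = t, sig
        if sig_j - best_sig < eps:                    # no useful progress: stop
            break
        Kj, sig_j = Kj + best_t * dK, best_sig
    return Kj, sig_j
```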

In Figure 2, we can see how the RS algorithm works. We fix $\alpha = 10^{-3}$, $\epsilon = 10^{-16}$, $r = 2$, $h = 2$, and we set $m = 1$, $n = 5$, $s = 10$ for a single iteration, where the single cone is sampled along 5 rays and each ray is sampled 10 times. The sampled points are circled, where red circles indicate infeasible or non-improving points and black circles indicate improving points. The green star is the initial point $K_0$ found by the Ray-Shooting algorithm for minimal-norm SOF. The bold black circle is the boundary of the closed ball $\overline{B(K_0, h)}$. We choose $U_0$ randomly to define the search direction, and we set $L_0 = K_0 + h U_0$ to be the point where the direction meets the boundary of the ball. $L$ is the tangent line at $L_0$ to the ball, and $R_\epsilon$ is the $2r$-wide segment on this line, inflated by $\epsilon$. The search cone $D_0 = \operatorname{chull}(K_0, R_\epsilon)$ is the related black triangle. Here $S_\alpha^0 = S_\alpha \cap D_0$ is just the portion of the blue region inside the triangle, and we can see that the assumption that $K^* \in D_0$ is in force. For the current problem $\left\lceil e \sqrt{2 \pi q r} \right\rceil = 10$, and therefore, by making 10 iterations, $K^*$ will be inside some triangle almost surely.

The algorithm chooses $F$ in the base of the triangle and defines $K(t)$ to be the ray from $K_0$ to $F$. The ray is sampled at 10 equally spaced points, and the best feasible point is recorded.

In Figure 3, we can see that 5 iterations suffice to include $K^*$ in some triangle and to find improving points very close to $K^*$. In Figure 4, we can see that when we allow 20 iterations, after 10 iterations the center $K_0$ is switched to the best point found so far (see lines 24-26 in Algorithm 1). This is done in order to raise the probability of hitting $K^*$ or its $\epsilon$-neighborhood, and as we can see, the final best point (green star) is very close to $K^*$.

Figure 2.

Single iteration of the RS algorithm.

Figure 3.

Five iterations of the RS algorithm.

Figure 4.

Twenty iterations of the RS algorithm.

The results of the algorithm for 1, 5, and 20 iterations are the following. Note that $\sigma_{\max}(K^*) = 5.9551$, and note the huge variations that the function $\sigma_{\max}(K)$ exhibits.

For m = 1 we had

$$K_0 = \begin{bmatrix} 0.58739333 & -0.15823016 \end{bmatrix}, \quad \sigma_{\max}^0 = 25.7307,$$
$$\text{RS:} \quad K^{best} = \begin{bmatrix} 1.17786349 & 0.35034398 \end{bmatrix}, \quad \sigma_{\max}^{best} = 6.1391,$$
$$\text{MC:} \quad K^{best} = \begin{bmatrix} 0.58739333 & -0.15823016 \end{bmatrix}, \quad \sigma_{\max}^{best} = 25.7307,$$
$$\text{RS+MC:} \quad K^{best} = \begin{bmatrix} 1.05244278 & 0.31681948 \end{bmatrix}, \quad \sigma_{\max}^{best} = 6.0001.$$

Note that in this case the MC algorithm makes no improvement, while RS and RS+MC attain values very close to the global optimal value, with a slightly better value for RS+MC than for RS.

For $m = 5$ we had

$$K_0 = \begin{bmatrix} 0.60478870 & -0.06023828 \end{bmatrix}, \quad \sigma_{\max}^0 = 36.4583,$$
$$\text{RS:} \quad K^{best} = \begin{bmatrix} 1.04166520 & 0.40826562 \end{bmatrix}, \quad \sigma_{\max}^{best} = 6.1655,$$
$$\text{MC:} \quad K^{best} = \begin{bmatrix} 0.60478870 & -0.06023828 \end{bmatrix}, \quad \sigma_{\max}^{best} = 36.4583,$$
$$\text{RS+MC:} \quad K^{best} = \begin{bmatrix} 1.04166520 & 0.40826562 \end{bmatrix}, \quad \sigma_{\max}^{best} = 6.16557843.$$

For $m = 20$ we had

$$K_0 = \begin{bmatrix} 0.51029365 & -0.22521376 \end{bmatrix}, \quad \sigma_{\max}^0 = 3198.8196,$$
$$\text{RS:} \quad K^{best} = \begin{bmatrix} 1.11453066 & 0.33955607 \end{bmatrix}, \quad \sigma_{\max}^{best} = 5.9893,$$
$$\text{MC:} \quad K^{best} = \begin{bmatrix} 0.51029365 & -0.22521376 \end{bmatrix}, \quad \sigma_{\max}^{best} = 3198.8196,$$
$$\text{RS+MC:} \quad K^{best} = \begin{bmatrix} 1.11453066 & 0.33955607 \end{bmatrix}, \quad \sigma_{\max}^{best} = 5.9893.$$

In Figure 5, the initial-condition response of the open-loop system is given. One can see the unstable mode related to the unstable eigenvalue $2$. In Figures 6-8, the initial-condition responses of the closed-loop systems with the SOFs for $m = 20$, with $x_0 = \begin{bmatrix} 3 & 1 \end{bmatrix}^T$ and sampling time $T_s = 0.01$, are given. One can see that the responses of the closed-loop systems with the SOFs computed by RS and RS+MC are very close to the globally optimal response, while the response of the closed-loop system with the SOF computed by the MC algorithm (actually with the initial SOF), although stable, is unacceptable.

Figure 5.

The initial condition response of the open-loop system.

Figure 6.

The initial condition response of the closed-loop system with the SOF computed by the RS algorithm (blue) compared with the global optimal response (red).

Figure 7.

The initial condition response of the closed-loop system with the SOF computed by the MC algorithm (blue) compared with the global optimal response (red).

Figure 8.

The initial condition response of the closed-loop system with the SOF computed by the RS + MC algorithm (blue) compared with the global optimal response (red).

Advertisement

5. Experiments

In the following experiments, we applied Algorithms 1 and 2 to systems taken from the libraries [26, 27, 28]. The systems given in these libraries are real-life continuous-time systems. In order to get related discrete-time systems, we sampled the systems using the Tustin method with sampling rate $T_s = 0.01$ sec. We took only the systems for which the RS algorithm succeeded in finding SOFs for the continuous-time systems (see [10], Table 8, p. 231). In order to initialize the MC algorithm, we also used the RS algorithm to find a starting $\alpha$-stabilizing SOF. In all the experiments, we used $m = 2\left\lceil e \sqrt{2 \pi q r}\right\rceil$, $n = 100$, $s = 100$, $h = 100$, $r = 100$, $\epsilon = 10^{-16}$ for the RS algorithm, and $m = 200\left\lceil e \sqrt{2 \pi q r}\right\rceil$, $s = 100$ for the MC algorithm, in order to get the same total number of iterations and the same number $s = 100$ of iterations for the local search. We took $Q = I_p$, $R = I_q$ in all cases.
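The discretization step can be reproduced, e.g., with SciPy's bilinear (Tustin) transform; the use of `cont2discrete` here is our assumption about tooling (the experiments themselves were run in MATLAB, as noted below):

```python
import numpy as np
from scipy.signal import cont2discrete

def discretize_tustin(A, B, C, Ts=0.01):
    # Bilinear (Tustin) sampling of a continuous-time system (A, B, C)
    D = np.zeros((C.shape[0], B.shape[1]))
    Ad, Bd, Cd, _, _ = cont2discrete((A, B, C, D), Ts, method="bilinear")
    return Ad, Bd, Cd
```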

The stability-margin column of Table 1 refers to the $0 < \alpha < 1$ for which the absolute value of any eigenvalue of the closed loop is less than $1 - \alpha$. The values of $\alpha$ in Table 1 are the largest $0 < \alpha < 1$ for which the RS algorithm succeeded in finding a starting SOF $K_0$. As we saw above, it is worth searching for a starting point $K_0$ that maximizes $0 < \alpha < 1$. This can be done efficiently by running a binary search on $0 < \alpha < 1$, using the RS algorithm as an oracle. Note that the RS CPU time appearing in the fourth column of Table 1 refers to running the RS algorithm for the known optimal value of $0 < \alpha < 1$. The RS algorithm is sufficiently fast also for this purpose, but other algorithms such as HIFOO (see [24]) and HINFSTRUCT (see [29]) can be applied in order to get a starting SOF. The advantage of using the RS algorithm is that it finds a starting SOF with a relatively small norm.

Let $\sigma_{\max}(F)$ denote the functional (7) for the system $(A, B, I_p)$, where $A - BF$ is stable, i.e., $F \in S^{q \times p}$. Let $P(F)$ denote the Lyapunov matrix (5) for the system $(A, B, I_p)$ with $F$ as above. Let $\sigma_{\max}(K)$ denote the functional (7) for the system $(A, B, C)$ with $K \in S^{q \times r}$ and related Lyapunov matrix $P = P(K)$ given by (5). Now, if $A - BKC$ is stable for some $K$, then $A - BF$ is stable for $F = KC$ (but there might exist $F$ such that $A - BF$ is stable that cannot be written as $KC$ for any $q \times r$ matrix $K$). Therefore,

$$\sigma_{\max}(F^*) = \min_{F \in S^{q \times p}} \sigma_{\max}(F) \le \min_{K \in S_\alpha^{q \times r} \cap \overline{B(K_0, h)}} \sigma_{\max}(K) = \sigma_{\max}(K^*), \tag{15}$$

where $F^*$ is an optimal LQR state-feedback (SF) controller for the system $(A, B, I_p)$. We conclude that $\sigma_{\max}(F^*) \le \sigma_{\max}(K^*) \le \sigma_{\max}(K^{best})$. Thus, $\sigma_{\max}(F^*)$ is a lower bound for $\sigma_{\max}(K^{best})$ and can serve as a good estimator of it, in order to quantify the convergence of the algorithm to the global minimum (as is evidently seen from Table 1 in many cases) and in order to stop the algorithm early, since $\sigma_{\max}(F^*)$ can be calculated in advance. The lower bound appears in the last column of Table 1.

For all the systems, we had $(A, B)$ and $(A^T, C^T)$ controllable, except for ROC1 and ROC2. All the systems are unstable, except for AC6, AC15, and NN16, which are stable but not $\alpha$-stable for the $\alpha$ given in the stability-margin column.

The experiments were executed on:

Computer: LAPTOP-GULIHG OV, ASUSTeK COMPUTER INC.

TUF GAMING FX504GM-FX80GM.

Processor: Intel(R) Core(TM) i7-8750H CPU@2.20GHz.

Platform: MATLAB, Version R2018b Win 64.

5.1 Conclusions from the experiments

Regarding the experimental results in Table 2 and the comparison between the RS algorithm and the MC algorithm, we conclude:

  1. The RS algorithm performs better, by an order of magnitude, than the MC algorithm for the systems AC1, AC11, HE1, HE4, ROC1, ROC4, TF1, and NN5.

  2. The MC algorithm performs better, by an order of magnitude, than the RS algorithm for the systems AC5 and DIS5.

  3. The MC algorithm performs slightly better than the RS algorithm for the systems HE3 and NN16.

| System | Size $(p, q, r)$ | Stab. mgn. ($\alpha$) | RS CPU time (sec) | $\sigma_{\max}^0$ for $(A, B, C)$ | $\sigma_{\max}(F^*)$ for $(A, B, I_p)$ |
|---|---|---|---|---|---|
| AC1 | 5, 3, 3 | 0.01 | 2.6226 | $1.0701 \times 10^4$ | $1.3073 \times 10^3$ |
| AC5 | 4, 2, 2 | 0.001 | 1.5468 | $1.5888 \times 10^9$ | $8.4264 \times 10^7$ |
| AC6 | 7, 2, 4 | 0.001 | 0.7094 | $3.1767 \times 10^3$ | $5.9783 \times 10^2$ |
| AC11 | 5, 2, 4 | 0.01 | 1.0575 | $1.2968 \times 10^4$ | $5.8777 \times 10^2$ |
| HE1 | 4, 2, 1 | 0.001 | 0.0872 | $1.5040 \times 10^3$ | $3.0013 \times 10^2$ |
| HE3 | 8, 4, 6 | 0.001 | 2.6845 | $5.4064 \times 10^6$ | $6.1185 \times 10^4$ |
| HE4 | 8, 4, 6 | 0.001 | 2.5633 | $4.1660 \times 10^6$ | $2.2992 \times 10^4$ |
| ROC1 | 9, 2, 2 | $10^{-5}$ | 0.5279 | $1.5906 \times 10^7$ | $1.1207 \times 10^5$ |
| ROC4 | 9, 2, 2 | $10^{-5}$ | 0.4677 | $1.2273 \times 10^6$ | $8.5460 \times 10^4$ |
| DIS4 | 8, 4, 6 | 0.01 | 2.5074 | $4.5133 \times 10^3$ | $1.7556 \times 10^2$ |
| DIS5 | 4, 2, 2 | 0.001 | 1.2187 | $2.8686 \times 10^8$ | $9.0756 \times 10^6$ |
| TF1 | 7, 2, 4 | $10^{-4}$ | 0.8011 | $7.9884 \times 10^5$ | $5.8134 \times 10^3$ |
| NN5 | 7, 1, 2 | $10^{-4}$ | 0.4138 | $5.4066 \times 10^6$ | $2.8789 \times 10^5$ |
| NN13 | 6, 2, 2 | 0.01 | 0.4876 | $7.8402 \times 10^2$ | 63.5366 |
| NN16 | 8, 4, 4 | $10^{-4}$ | 3.5530 | $1.9688 \times 10^3$ | $2.3327 \times 10^2$ |
| NN17 | 3, 2, 1 | 0.001 | 0.0925 | $3.2733 \times 10^4$ | $3.1358 \times 10^2$ |

Table 1.

General information about the systems and initial values.

| System | $\sigma_{\max}^{best}$, RS | $\sigma_{\max}^{best}$, MC | $\sigma_{\max}^{best}$, RS+MC | RS CPU time (sec) | MC CPU time (sec) | RS+MC CPU time (sec) |
|---|---|---|---|---|---|---|
| AC1 | $1.9207 \times 10^3$ | $1.0701 \times 10^4$ | $1.9207 \times 10^3$ | 2.9843 | 0.0468 | 3.0312 |
| AC5 | $1.5888 \times 10^9$ | $2.5905 \times 10^8$ | $2.5905 \times 10^8$ | 2.0156 | 0.4062 | 2.2500 |
| AC6 | $6.1449 \times 10^2$ | $6.5913 \times 10^2$ | $6.1389 \times 10^2$ | 5.1250 | 0.2500 | 5.1718 |
| AC11 | $2.4234 \times 10^3$ | $1.2968 \times 10^4$ | $2.4234 \times 10^3$ | 2.9531 | 0.0468 | 3.0000 |
| HE1 | $9.1253 \times 10^2$ | $1.0968 \times 10^3$ | $9.1253 \times 10^2$ | 2.2812 | 0.0468 | 2.3437 |
| HE3 | $8.6808 \times 10^4$ | $7.1816 \times 10^4$ | $8.1737 \times 10^4$ | 13.4843 | 0.2343 | 13.7656 |
| HE4 | $5.2247 \times 10^4$ | $1.1817 \times 10^6$ | $3.1783 \times 10^4$ | 10.9687 | 0.0468 | 11.2656 |
| ROC1 | $6.6239 \times 10^5$ | $1.5906 \times 10^7$ | $6.6239 \times 10^5$ | 7.6250 | 0.1250 | 7.7500 |
| ROC4 | $5.9923 \times 10^5$ | $1.2273 \times 10^6$ | $5.9923 \times 10^5$ | 5.3750 | 0.0625 | 5.4218 |
| DIS4 | $1.7590 \times 10^2$ | $2.0376 \times 10^2$ | $1.7590 \times 10^2$ | 7.2187 | 0.1250 | 7.2656 |
| DIS5 | $2.8686 \times 10^8$ | $3.2079 \times 10^7$ | $3.2079 \times 10^7$ | 1.8593 | 0.1562 | 1.9687 |
| TF1 | $1.9289 \times 10^4$ | $1.1230 \times 10^5$ | $1.9270 \times 10^4$ | 5.0000 | 0.0625 | 5.0468 |
| NN5 | $9.6780 \times 10^5$ | $1.3372 \times 10^6$ | $9.6780 \times 10^5$ | 1.2968 | 0.1718 | 1.3593 |
| NN13 | $2.0521 \times 10^2$ | $2.6553 \times 10^2$ | $1.7953 \times 10^2$ | 2.3281 | 0.2656 | 2.4843 |
| NN16 | $6.4416 \times 10^2$ | $6.0032 \times 10^2$ | $6.0030 \times 10^2$ | 6.4062 | 0.5468 | 6.7343 |
| NN17 | $3.6805 \times 10^3$ | $4.9674 \times 10^3$ | $3.6787 \times 10^3$ | 0.9375 | 0.2812 | 1.1718 |

Table 2.

Experimental results.

Regarding the experimental results in Table 2 and the performance of the RS + MC algorithm, we conclude:

  1. The RS + MC algorithm performs better than each algorithm separately, for systems AC6, HE4, TF1, NN13, NN16, and NN17.

  2. The RS + MC algorithm performs better than the RS algorithm for systems AC5, HE3, and DIS5.

  3. The RS + MC algorithm performs exactly like the RS algorithm for the systems AC1, AC11, DIS4, HE1, ROC1, ROC4, and NN5. This observation supports the claim of convergence of the RS algorithm to a global optimum.

  4. The RS + MC algorithm performs exactly as the MC algorithm for systems AC5 and DIS5.

Regarding improvements over the starting point, we had:

  1. The RS algorithm failed to find any improvement over $\sigma_{\max}(K_0)$ for the systems AC5 and DIS5.

  2. The MC algorithm failed to find any improvement over $\sigma_{\max}(K_0)$ for the systems AC1, AC11, ROC1, and ROC4. This observation supports the heuristic that it is better to start with a SOF that brings the poles of the closed loop as close as possible to the boundary of the disk $\mathcal{D}_\alpha$.

  3. The RS + MC algorithm improved $\sigma_{\max}(K_0)$ in all cases.

Regarding the assessment of convergence to a global minimum, we had the following results:

  1. The RS algorithm and the RS + MC algorithm attained very close values of $\sigma_{\max}(K^{best})$ (or exactly the same value), very close to the lower bound, for the systems AC1, AC6, HE1, HE3, HE4, ROC1, DIS4, NN5, and NN16.

  2. The MC algorithm attained a value of $\sigma_{\max}(K^{best})$ very close to the lower bound for the systems AC6, HE3, DIS5, NN5, and NN16.

As was expected, the MC algorithm seems to perform better locally, while the RS algorithm seems to perform better globally. Thus, practically, the best approach is to apply the RS algorithm in order to find a close neighborhood of a global minimum and then to apply the MC algorithm to the result for the local optimization, as is evident from the performance of the RS + MC algorithm.

5.2 Some specific results

Example 5.1. For the HE4 system with

$A = [A_1\ \ A_2]$, where

$A_1$ =
  0.99999985  0.00000014  0.00001538  0.00988556
  0.00000082  0.99999927  0.00944714  0.00015179
  0.00016276  0.00014358  0.89058607  0.02380208
  0.00002661  0.00002887  0.00410983  0.98016538
  0.00002930  0.00000576  0.01923117  0.00428369
  0.32100350  0.00003189  0.00471073  0.02122199
  0.00098904  0.32051910  0.02076100  0.00474084
  0.01910442  0.01711196  0.00004146  0.00066123

$A_2$ =
  0.00053188  0.00000088  0.00000089  0.00000005
  0.00059005  0.00000508  0.00000451  0.00000034
  0.00060285  0.00100715  0.00089937  0.00006730
  0.00000054  0.00016706  0.00018078  0.00001158
  0.99268256  0.00018120  0.00003706  0.00002045
  0.00008474  0.99978709  0.00020742  0.00015757
  0.00841925  0.00020153  0.99963047  0.00000293
  0.00000005  0.00013964  0.00000914  0.99709909

$B$ =
  0.00000097  0.00002352  0.00000082  0.00000055
  0.00000679  0.00000357  0.00013147  0.00000146
  0.00117865  0.00072316  0.02601681  0.00016953
  0.00035692  0.00470509  0.00008442  0.00000015
  0.00302236  0.00013040  0.00468257  0.00205817
  0.00286635  0.00539604  0.00009665  0.00000026
  0.00018970  0.00014386  0.00517989  0.00234114
  0.04813606  0.00000574  0.00000060  0.00000001

$C = [C_1\ \ C_2]$, where

$C_1$ =
  0.00000185  0.00001067  0.00071398  0.00083459
  0.99999992  0.00000007  0.00000769  0.00494278
  0.00000041  0.99999963  0.00472357  0.00007589
  0.00001393  0.00000365  0.00972548  0.05509146
  0.00008138  0.00007179  0.94529303  0.01190104
  0.00001330  0.00001443  0.00205491  0.99008269

$C_2$ =
  0.00022183  0.05942943  0.05327854  0.99534942
  0.00026594  0.00000044  0.00000044  0.00000002
  0.00029502  0.00000254  0.00000225  0.00000017
  0.99634129  0.00008613  0.00002336  0.00001053
  0.00030142  0.00050357  0.00044968  0.00003365
  0.00000027  0.00008353  0.00009039  0.00000579

with

$$\sigma(A) = \{0.638773652186517,\ 0.847768449750652,\ 1.002501752901569,\ 0.960047795833900,\ 0.990353602254223\},$$

we had the following results:

by the RS algorithm for minimal-gain SOF (see [10]):

$K_0 = [K_1^0\ \ K_2^0]$, where

$K_1^0$ =
  0.05281866  0.30558099  0.04123125
  0.74370605  0.07272045  0.16180699
  0.34989799  0.70937255  0.03438071
  0.36682921  0.55329174  0.42930790

$K_2^0$ =
  0.07894070  0.83106320  0.63513665
  0.57049060  0.02944824  0.03985277
  0.01822942  0.40097959  0.32739026
  0.32035712  0.23550532  0.55239497

$\sigma(A - B K_0 C) = \{0.88529956,\ 0.98303140,\ 0.99890558 \pm 0.00709027 i,\ 0.99895917 \pm 0.00548801 i,\ 0.99648905,\ 0.99228590\}$,

$\sigma_{\max}^0 = 4.1660 \times 10^6$, $\|K_0\| = 1.1052$;

by RS Algorithm 1:

$K^{best} = [K_1^{best}\ \ K_2^{best}]$, where

$K_1^{best}$ =
  2.60115550  0.71670943  1.38518242
  1.21472623  7.41955425  2.28748737
  0.08032866  1.01678491  3.50944968
  0.88639191  0.92895747  0.77107501

$K_2^{best}$ =
  0.43606038  0.52797919  1.91992615
  0.37181168  0.49434738  1.97765275
  0.43167652  2.40298411  0.14274690
  1.34628983  2.73288946  0.19682963

$\sigma(A - B K^{best} C) = \{0.86726666,\ 0.98939915,\ 0.99600356 \pm 0.01886288 i,\ 0.99670041 \pm 0.00199237 i,\ 0.97791267 \pm 0.01292326 i\}$,

$\sigma_{\max}^{best} = 5.2247 \times 10^4$, $\|K^{best}\| = 8.1964$;

by MC Algorithm 2:

$K^{best} = [K_1^{best}\ \ K_2^{best}]$, where

$K_1^{best}$ =
  0.16219293  0.22685502  1.41842788
  0.10558055  0.04299252  1.12924145
  0.01638169  0.08184317  0.14133958
  0.03859594  0.07947463  0.12534983

$K_2^{best}$ =
  0.23992446  0.17536269  0.44875450
  0.14481130  0.18150136  0.27764220
  0.04123859  0.04014410  0.05631322
  0.06158264  0.05048672  0.16208075

$\sigma(A - B K^{best} C) = \{0.53091472,\ 0.95364142,\ 0.98741833,\ 0.99897830,\ 0.99717581 \pm 0.00564301 i,\ 0.95033459 \pm 0.08956374 i\}$,

$\sigma_{\max}^{best} = 1.1817 \times 10^6$, $\|K^{best}\| = 195.3621$;

and by RS + MC Algorithms 1 and 2:

$K^{best} = [K_1^{best}\ \ K_2^{best}]$, where

$K_1^{best}$ =
  0.16101573   0.52768736  1.99112247
  1.8453102   11.28218548  3.45382160
  0.45368113   0.97316671  6.96467764
  0.57106266   0.53606197  0.45209883

$K_2^{best}$ =
  0.67588625  0.18705486  0.28941025
  0.69074912  0.26655181  0.20004806
  0.83280356  3.20877857  0.31860927
  3.09450298  0.73194640  0.07799223

$\sigma(A - B K^{best} C) = \{0.98195937 \pm 0.03551664 i,\ 0.99264852 \pm 0.01953917 i,\ 0.99354901 \pm 0.00357529 i,\ 0.99828371,\ 0.98490431\}$,

$\sigma_{\max}^{best} = 3.1783 \times 10^4$, $\|K^{best}\| = 12.1029$.

6. Concluding remarks

The Ray-Shooting Method is a powerful tool, since it practically solves the problem of LQR via SOF for real-life discrete-time LTI systems. The proposed hybrid RS + MC algorithm has good performance in terms of run-time, in terms of the quality of controllers (reducing the LQR functional value of the starting point while keeping the controller norm relatively small), and in terms of the success rate in finding a starting point feasible with respect to the needed $\alpha$-stability. The RS + MC algorithm has a proof of convergence in probability to a global minimum (as is evidently seen from the experiments). This enlarges the practicality and scope of the Ray-Shooting Method in solving hard-complexity control problems, and we expect more results in this direction.

References

  1. Camino JF, Zampieri DE, Peres PLD. Design of a vehicular suspension controller by static output feedback. In: Proceedings of the American Control Conference; San Diego, California; June 1999
  2. Nemirovskii A. Several NP-hard problems arising in robust stability analysis. Mathematics of Control, Signals, and Systems. 1993;6:99-105
  3. Blondel V, Tsitsiklis JN. NP-hardness of some linear control design problems. SIAM Journal on Control and Optimization. 1997;35(6):2118-2127
  4. Vidyasagar M, Blondel VD. Probabilistic solutions to some NP-hard matrix problems. Automatica. 2001;37:1397-1405
  5. Tempo R, Calafiore G, Dabbene F. Randomized Algorithms for Analysis and Control of Uncertain Systems. London: Springer-Verlag; 2005
  6. Tempo R, Ishii H. Monte Carlo and Las Vegas randomized algorithms for systems and control. European Journal of Control. 2007;13:189-203
  7. Arzelier D, Gryazina EN, Peaucelle D, Polyak BT. Mixed LMI/randomized methods for static output feedback control. In: Proceedings of the American Control Conference. Baltimore, Maryland, USA: IEEE; 2010. pp. 4683-4688
  8. Syrmos VL, Abdallah C, Dorato P, Grigoriadis K. Static output feedback: A survey. Automatica. 1997;33:125-137
  9. Sadabadi MS, Peaucelle D. From static output feedback to structured robust static output feedback: A survey. Annual Reviews in Control. 2016;42:11-26
  10. Peretz Y. A randomized approximation algorithm for the minimal-norm static-output-feedback problem. Automatica. 2016;63:221-234
  11. Peretz Y. On applications of the ray-shooting method for structured and structured-sparse static-output-feedbacks. International Journal of Systems Science. 2017;48(9):1902-1913
  12. Peretz Y. On application of the ray-shooting method for LQR via static-output-feedback. MDPI Algorithms Journal. 2018;11(1):1-13
  13. Peretz Y. A randomized algorithm for optimal PID controllers. MDPI Algorithms Journal. 2018;11(81):1-15
  14. Peretz Y, Moyal S, Merzbach O. A randomized algorithm for robust stabilization via static-output-feedback. In: IEEE International Conference on the Science of Electrical Engineering in Israel (ICSEE); December 12-14, 2018; Eilat, Israel
  15. Moerder D, Calise A. Convergence of numerical algorithm for calculating optimal output feedback gains. IEEE Transactions on Automatic Control. 1985;AC-30:900-903
  16. Johnson T, Athans M. On the design of optimal dynamic compensators for linear constant systems. IEEE Transactions on Automatic Control. 1970;AC-15:658-660
  17. Levine W, Athans M. On the determination of the optimal constant output feedback gains for linear multivariable systems. IEEE Transactions on Automatic Control. 1970;AC-15:44-48
  18. Levine W, Johnson TL, Athans M. Optimal limited state variable feedback controllers for linear systems. IEEE Transactions on Automatic Control. 1971;AC-16:785-793
  19. Peres PLD, Geromel J, de Souza S. Optimal H₂ control by output feedback. In: Proceedings of the 32nd IEEE Conference on Decision and Control; San Antonio, Texas, USA. 1993. pp. 102-107
  20. Iwasaki T, Skelton RE. All controllers for the general H∞ control problem: LMI existence conditions and state space formulas. Automatica. 1994;30:1307-1317
  21. Iwasaki T, Skelton RE. Linear quadratic suboptimal control with static output feedback. Systems & Control Letters. 1994;23(6):421-430
  22. Henrion D, Loefberg J, Kočvara M, Stingl M. Solving polynomial static output feedback problems with PENBMI. In: Proceedings of the Joint IEEE Conference on Decision and Control and European Control Conference; Sevilla, Spain. 2005
  23. Yang K, Orsi R. Generalized pole placement via static output feedback: A methodology based on projections. Automatica. 2006;42:2143-2150
  24. Gumussoy S, Henrion D, Millstone M, Overton ML. Multiobjective robust control with HIFOO 2.0. In: Proceedings of the IFAC Symposium on Robust Control Design; Haifa, Israel. 2009
  25. Bistritz Y. Zero location with respect to the unit circle of discrete-time linear system polynomials. Proceedings of the IEEE. 1984;72(9):1131-1142
  26. Leibfritz F. COMPleib: Constrained matrix-optimization problem library—A collection of test examples for nonlinear semidefinite programs, control system design and related problems. Tech. Report. 2003
  27. Leibfritz F, Lipinski W. COMPleib 1.0—User manual and quick reference. Tech. Report. 2004
  28. Leibfritz F. Description of the benchmark examples in COMPleib 1.0. Tech. Report. 2003
  29. Gahinet P, Apkarian P. Frequency-domain tuning of fixed-structure control systems. In: Proceedings of the 2012 UKACC International Conference on Control; 3-5 September 2012; Cardiff, UK. IEEE; 2012. pp. 178-183
