Open access peer-reviewed chapter

Algorithms for LQR via Static Output Feedback for Discrete-Time LTI Systems

Written By

Yossi Peretz

Submitted: June 11th, 2019 Reviewed: August 23rd, 2019 Published: September 27th, 2019

DOI: 10.5772/intechopen.89319

From the Edited Volume

Control Theory in Engineering

Edited by Constantin Volosencu, Ali Saghafinia, Xian Du and Sohom Chakrabarty



Randomized and deterministic algorithms for the problem of LQR optimal control via static-output-feedback (SOF) for discrete-time systems are suggested in this chapter. The randomized algorithm is based on a recently introduced randomized optimization method, the Ray-Shooting Method, which efficiently solves the global minimization problem for continuous functions over compact, non-convex, unconnected regions. The randomized algorithm presented here has a proof of convergence in probability to the global optimum. The suggested deterministic algorithm is based on the gradient method and thus can be proved to converge to a local optimum only. A comparison between the algorithms is provided, as well as the performance of the hybrid algorithm.


  • control systems
  • optimal control
  • discrete-time systems
  • state-space models
  • NP-hard control problems
  • randomized algorithms
  • deterministic algorithms

1. Introduction

The application of static-output-feedbacks (SOFs) for linear-quadratic regulators (LQR) is very attractive, since they are cheap and reliable and their implementation is simple and direct, because their components have a direct physical interpretation in terms of sensor amplification rates and actuator activation power. Moreover, the long-term memory of dynamic feedbacks is useless for systems subject to random disturbances, fast dynamic loadings, or random bursts and impulses, and the application of state feedbacks is not always possible, due to the unavailability of full-state measurements (see, e.g., [1]). Also, the use of SOFs avoids the need to reconstruct the state by a Kalman filter or by any other state reconstructor.

On the other hand, in practical applications, the entries of the needed SOFs are bounded, and since the problem of SOFs with interval-constrained entries is NP-hard (see [2, 3]), one cannot expect the existence of a deterministic efficient (i.e., polynomial-time) algorithm to solve the problem. Randomized algorithms are thus natural solutions to the problem. Probabilistic and randomized methods for the constrained SOF problem and for robust stabilization via SOFs (among other hard problems) are discussed in [4, 5, 6, 7]. For a survey of the SOF problem, see [8], and for a recent survey of the robust SOF problem, see [9].

The Ray-Shooting Method was recently introduced in [10], where it was used to derive the Ray-Shooting (RS) randomized algorithm for the minimal-gain SOF problem, with regional pole assignment, where the region can be non-convex and unconnected. The Ray-Shooting Method was successfully applied recently also to the following hard complexity control problems for continuous-time systems:

  • Structured and structured-sparse SOFs (see [11])

  • LQR via SOF for continuous-time LTI systems (see [12])

  • LQR-optimal, H∞-optimal, and H2-optimal proportional-integral-derivative (PID) controllers (see [13])

  • Robust control via SOF (see [14])

The contribution of the research presented in the current chapter is as follows:

  1. The randomized algorithm presented here (which we call the RS algorithm) is based on the Ray-Shooting Method (see [10]), which, as opposed to smooth optimization methods, has the potential of finding a global optimum of continuous functions over compact, non-convex, and unconnected regions.

  2. The RS algorithm has a proof of convergence in probability and explicit complexity.

  3. Experience with the algorithm shows good quality of controllers (in terms of reduction of the LQR functional value with relatively small controller norms), a high percentage of success, and good run-time, for real-life systems. Thus, the suggested practical algorithm efficiently solves the problem of LQR via SOF for discrete-time systems.

  4. The RS algorithm does not need to solve any Riccati or quadratic matrix equations (QMEs) and thus can be applied to large systems.

  5. The RS algorithm is one of the few dealing with the problem of LQR via SOF for discrete-time systems.

  6. A deterministic algorithm for the problem that generalizes the algorithm of Moerder and Calise [15] to discrete-time systems is given (we call it the MC algorithm). The MC algorithm has a proof of convergence to a local optimum only, and it needs other algorithms for computing an initial stabilizing SOF.

  7. A comparison between the RS and the MC algorithms, as well as the performance of the hybrid algorithm, for real-life systems, is provided.

The remainder of the chapter is organized as follows:

In Section 2, we formulate the problem and give some useful lemmas (without proofs). In Section 3, we introduce the randomized algorithm for the problem of LQR via SOF for discrete-time LTI systems. Section 4 is devoted to the deterministic algorithm for the problem. In Section 5, we give the results of the algorithms for some real-life systems. Finally, in Section 6, we conclude with some remarks.


2. Preliminaries

Let a discrete-time system be given by


x_{k+1} = A x_k + B u_k,  y_k = C x_k,  k = 0, 1, 2, …    (1)

where A ∈ ℝ^{p×p}, B ∈ ℝ^{p×q}, C ∈ ℝ^{r×p}, and x_0 ∈ ℝ^p. Let the LQR cost functional be defined by


J(x_0, u) = Σ_{k=0}^∞ (x_k^T Q x_k + u_k^T R u_k)    (2)

where Q > 0 and R > 0. Let u_k = K y_k be the SOF, and let A_c(K) = A − B K C denote the closed-loop matrix. Let D denote the open unit disk, let 0 < α < 1, and let D_α denote the set of all z ∈ D with |z| < 1 − α (where |z| is the absolute value of z). For a square matrix Z, we denote by σ(Z) its spectrum. For any rectangular matrix Z, we denote by Z⁺ its Moore–Penrose pseudo-inverse. By ‖Z‖_F = (trace(Z^T Z))^{1/2} we denote the Frobenius norm of Z, and by ‖Z‖₂ = (max σ(Z^T Z))^{1/2} we denote the spectral norm of Z. By L_Z and R_Z, we denote the (left and right) orthogonal projections I − Z⁺Z and I − ZZ⁺ onto the spaces Ker(Z) and Ker(Z⁺), respectively. For a topological space X and a subset U ⊆ X, we denote by int(U) the interior of U, i.e., the largest open set included in U. By Ū we denote the closure of U, i.e., the smallest closed set containing U, and by ∂U = Ū \ int(U) we denote the boundary of U. Let S^{q×r} denote the set of all matrices K ∈ ℝ^{q×r} such that σ(A_c) ⊂ D (i.e., stable in the discrete-time sense), and let S_α^{q×r} denote the set of all matrices K ∈ ℝ^{q×r} such that σ(A_c) ⊂ D_α. If the latter is nonempty, we say that A_c is α-stable, and we call α the degree of stability. Let K ∈ S_α^{q×r} be given. Substitution of the SOF u_k = K y_k = K C x_k into (2) yields:


J(x_0, K) = Σ_{k=0}^∞ x_k^T (Q + C^T K^T R K C) x_k    (3)

Since Q + C^T K^T R K C > 0 and A_c(K) is stable, it follows that the Stein equation


A_c(K)^T P A_c(K) − P + (Q + C^T K^T R K C) = 0    (4)

has a unique solution P > 0, given by


P = Σ_{k=0}^∞ (A_c(K)^T)^k (Q + C^T K^T R K C) A_c(K)^k    (5)

Substitution of (4) into (3), using x_k = A_c(K)^k x_0 and the fact that A_c(K) is stable, leads to


J(x_0, K) = x_0^T P x_0

Thus, when x_0 is known, we search for K ∈ S_α^{q×r} that minimizes the functional

J_{x_0}(K) = x_0^T P(K) x_0    (6)




Now, since J_{x_0}(K) ≤ ‖x_0‖² σ_max(K) for any x_0 ≠ 0, with equality in the worst case, therefore

σ_max(K) ≜ max σ(P(K)) = max_{‖x_0‖=1} J_{x_0}(K)    (7)


Thus, when x_0 is unknown, we search for K ∈ S_α^{q×r} such that σ_max(K) = ‖P(K)‖₂ is minimal. Note that if λ is an eigenvalue of A_c(K) and v is a corresponding eigenvector, then (4) yields 1 − |λ|² = v*(Q + C^T K^T R K C)v / (v* P(K) v) ≥ v* Q v / (v* P(K) v) > 0. Therefore, |λ|² ≤ 1 − v* Q v / (v* P(K) v) < 1, and thus, minimizing σ_max(K) results in eigenvalues that get closer to the boundary of D. Since α, the degree of stability, is important for a satisfactory decay rate of the state to 0 and for disturbance rejection, we allow the user of the algorithms to determine α. Note that too high a value of α might result in the nonexistence of any SOF for the system, or might complicate the search for a starting SOF. Higher values of α result in higher optimal values of the LQR functional, i.e., higher energy consumption for decaying the disturbance x_0 to 0.
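As a concrete illustration of these quantities (a sketch only; the matrices below are arbitrary toy data, not taken from the chapter), the following forms A_c(K), checks α-stability, solves the Stein equation (4) by vectorization, and evaluates σ_max(K):

```python
import numpy as np

def sigma_max_of_K(A, B, C, K, Q, R, alpha=0.0):
    """Return (is_alpha_stable, sigma_max) for the SOF gain K.

    Solves the Stein equation Ac' P Ac - P + (Q + C'K'RKC) = 0 via
    vec/Kronecker and returns max sigma(P), i.e., the functional (7).
    """
    p = A.shape[0]
    Ac = A - B @ K @ C                        # closed-loop matrix Ac(K)
    if np.max(np.abs(np.linalg.eigvals(Ac))) >= 1.0 - alpha:
        return False, np.inf                  # K is not alpha-stabilizing
    M = Q + C.T @ K.T @ R @ K @ C
    # vec(Ac' P Ac) = (Ac' ⊗ Ac') vec(P), with column-major vec
    vecP = np.linalg.solve(np.eye(p * p) - np.kron(Ac.T, Ac.T),
                           M.flatten(order="F"))
    P = vecP.reshape((p, p), order="F")
    return True, float(np.max(np.linalg.eigvalsh((P + P.T) / 2)))

# toy data (illustrative only)
A = np.array([[1.1, 0.3], [0.0, 0.5]])        # unstable: eigenvalue 1.1
B = np.array([[1.0], [0.2]])
C = np.eye(2)
Q, R = np.eye(2), np.eye(1)
ok, smax = sigma_max_of_K(A, B, C, np.array([[0.4, 0.2]]), Q, R)
```

The gain [0.4, 0.2] happens to stabilize this toy plant, so the call returns a finite σ_max value.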

The functionals J_{x_0}(K) and σ_max(K) are generally not convex, since their domain of definition S^{q×r} (and therefore S_α^{q×r}) is generally non-convex. Necessary conditions for optimality for continuous-time systems were given as three QMEs in [15, 16, 17, 18]. Necessary and sufficient conditions for optimality for continuous-time systems, based on linear matrix inequalities (LMIs), were given in [19, 20, 21]. However, algorithms based on these formulations are generally not guaranteed to converge, seemingly because of the non-convexity of the coupled matrix equations or inequalities, and when they do converge, it is to a local optimum only.

In the sequel, we will use the following lemmas, given here without proofs.

Lemma 2.1. We have:

  1. The equation AX = B has solutions if and only if AA⁺B = B or, equivalently, if and only if R_A B = 0. In this case, the set of all solutions is given by

X = A⁺B + L_A Z,

where Z is arbitrary.

  2. The equation XA = B has solutions if and only if BA⁺A = B or, equivalently, if and only if B L_A = 0. In this case, the set of all solutions is given by

X = BA⁺ + Z R_A,

where Z is arbitrary.
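Lemma 2.1 can be checked numerically in a few lines (an illustrative sketch with random matrices; the wide matrix A is chosen so that the solution family is nontrivial):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))              # wide: Ker(A) is nontrivial
X0 = rng.standard_normal((5, 4))
B = A @ X0                                   # guarantees that AX = B is solvable
Ap = np.linalg.pinv(A)                       # Moore-Penrose pseudo-inverse A+

# solvability test: A A+ B = B (equivalently, R_A B = 0)
solvable = bool(np.allclose(A @ Ap @ B, B))

# a member of the solution family X = A+ B + L_A Z, with L_A = I - A+ A
Z = rng.standard_normal((5, 4))
X = Ap @ B + (np.eye(5) - Ap @ A) @ Z
residual = float(np.linalg.norm(A @ X - B))
```

Any choice of Z gives the same residual (zero, up to rounding), which is exactly the content of the lemma.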

Lemma 2.2. We have:

  1. Let A, B, X be matrices of sizes p×q, r×p, q×r, respectively. Then

∂ trace(AXB)/∂X = A^T B^T.
  2. Let A, B, C, X be matrices of sizes p×q, r×r, q×p, r×q, respectively. Then

∂ trace(A X^T B X C)/∂X = B X C A + B^T X A^T C^T.
3. The randomized Ray-Shooting Method-based algorithm

The Ray-Shooting Method works as follows, for general function minimization: let f(x) ≥ 0 be a continuous function defined over some compact set X ⊂ ℝⁿ. Let ϵ > 0 be given, and assume that we want to compute x* ∈ X such that y* = f(x*) = min_{x∈X} f(x), up to ϵ, i.e., to find (x, y) in the set S_ϵ = {(x, y) : x ∈ X, f(x) = y ≤ f(x*) + ϵ}. Let x_0 ∈ X be given, let y_0 = f(x_0), and let S_0 = {(x, y) : x ∈ X, f(x) ≤ y ≤ y_0} denote the search space, which is a subset of the epigraph of f. Let D_0 = {(x, y) : x ∈ X, 0 ≤ y ≤ y_0} denote the cylinder enclosed between X and the level y_0. Let L_0 = {(x, y) : x ∈ X, y = 0, or x ∈ ∂X, 0 ≤ y ≤ y_0}. Let z_0 = (x_0, y_0), and note that z_0 ∈ S_0. Then, we choose w_0 in L_0 randomly, according to some distribution, and we define the ray as z(t) = (1 − t)z_0 + t w_0, 0 ≤ t ≤ 1. We scan the ray and choose the largest 0 ≤ t_0 ≤ 1 such that (1 − t_0)z_0 + t_0 w_0 ∈ S_0 (actually, we scan the ray from t = 1 at equally spaced points and take the first t for which this happens). We define z_1 = (1 − t_0)z_0 + t_0 w_0 and update the sets S_0, D_0, and L_0 by replacing y_0 with y_1, where (x_1, y_1) = z_1. Let S_1, D_1, and L_1 denote the updated sets. We continue the process similarly from z_1 ∈ S_1, and we define a sequence z_n ∈ S_n, n = 0, 1, …. Note that S_ϵ ⊆ S_{n+1} ⊆ S_n for any n = 0, 1, …, unless we have z_n ∈ S_ϵ for some n (in which case the process is stopped). One can show that the sequence (z_n)_{n=0}^∞ converges (in probability) to a point in S_ϵ. Note that rays shot from points of local minimum have positive probability of hitting S_ϵ (under the following mild assumption), because any global minimum is visible from any local minimum. Moreover, for a given level of certainty, we hit S_ϵ in a finite number of iterations (see Remark 3.1 below). Practically, we may stop the algorithm if no improvement is detected within a window of 20% of the allowed number of iterations. The function need not be smooth or even continuous. It only needs to be well defined and measurable over the compact domain X, and S_ϵ should have non-negligible measure (i.e., it should have some positive volume). Obviously, global minimum points belong to the boundary of the search space S_0, and actually such points are where the distance between the compact sets X × {0} and S_0 in ℝ^{n+1} is attained. This is essential for the efficiency of the Ray-Shooting Method, although we raised the search-space dimension from n to n + 1.
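A toy illustration of this process (a sketch of the general scheme just described, using an arbitrary nonnegative multimodal test function on a compact interval; it is not the chapter's control objective):

```python
import math, random

def ray_shooting_min(f, lo, hi, iters=300, s=50, seed=0):
    """Minimize a continuous f >= 0 over the compact interval [lo, hi]
    by the ray-shooting idea: keep a point z = (x, y) on the graph,
    shoot a ray toward a random point on the bottom of the cylinder,
    and move to the deepest point of the epigraph slice hit by the ray."""
    rng = random.Random(seed)
    x = lo + rng.random() * (hi - lo)
    y = f(x)
    for _ in range(iters):
        w = (lo + rng.random() * (hi - lo), 0.0)   # random point on X x {0}
        # scan the ray z(t) = (1 - t) z + t w from t = 1 down to 0 and
        # take the first point that lies in the search space S_n
        for k in range(s, 0, -1):
            t = k / s
            xt = (1 - t) * x + t * w[0]
            yt = (1 - t) * y + t * w[1]
            if f(xt) <= yt <= y:                   # inside the epigraph slice
                x, y = xt, f(xt)
                break
    return x, y

f = lambda x: 0.2 * x * x + math.sin(5 * x) + 1.0  # multimodal, nonnegative
xbest, ybest = ray_shooting_min(f, -2.0, 2.0)
```

On this test function the iterates descend monotonically toward the global basin near x ≈ −0.31, escaping the shallower local minima.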

In order to apply the Ray-Shooting Method to the LQR via SOF problem, we need the following definitions: assume that K_0 ∈ int(S_α) was found by the RS algorithm (see [10]) or by any other method (see [22, 23, 24]). Let h > 0, and let U_0 be a unit vector (actually a matrix, but we consider here the space of matrices as a normed vector space) with respect to the Frobenius norm, i.e., ‖U_0‖_F = 1. Let L_0 = K_0 + hU_0, and let L be the hyperplane defined by L_0 + V, where ⟨V, U_0⟩_F = 0. Here L is the tangent space at L_0 to the closed ball B̄(K_0, h) centered at K_0 with radius h, with respect to the Frobenius norm on ℝ^{q×r}. Let r > 0, and let R denote the set of all F ∈ L such that ‖F − L_0‖_F ≤ r. Let R_ϵ = R + B̄(0, ϵ), where B̄(0, ϵ) denotes the closed ball centered at 0 with radius ϵ (0 < ϵ ≤ 1/2). Let D_0 = c-hull(K_0, R_ϵ) denote the convex hull (cone) with vertex K_0 and basis R_ϵ. Let S_α^0 = S_α ∩ D_0, and note that S_α^0 is compact (but generally not convex). We wish to minimize the continuous function σ_max(K) (or the continuous function J_{x_0}(K), when x_0 is known) over the compact set S_α ∩ B̄(K_0, h). Let K* denote a point in S_α ∩ B̄(K_0, h) where the minimum of σ_max(K) is attained. Obviously, K* ∈ D_0 for some direction U_0 from K_0.

The Ray-Shooting Algorithm 1 for the LQR via SOF problem works as follows: we start with a point K_0 ∈ int(S_α), found by the RS algorithm (see [10]). Assuming that K* ∈ D_0, the inner loop (j = 1, …, n) uses the Ray-Shooting Method in order to find an approximation of the global minimum of the function σ_max(K) over S_α^0 — the portion of S_α bounded in the cone D_0. The proof of convergence in probability of the inner loop and its complexity (under the above-mentioned assumption) can be found in [10] (see also [11]). In the inner loop, we choose a search direction by choosing a point F in R_ϵ — the basis of the cone D_0. Next, in the innermost loop (k = 0, …, s), we scan the ray K(t) = (1 − t)K_0 + tF and record the best controller on it. Repeating this sufficiently many times, we reach K* (or an ϵ-neighborhood of it) with high probability, under the assumption that K* ∈ D_0 (see Remark 3.1).

The reasoning behind the Ray-Shooting Method is that sampling the whole search space would lead to the probabilistic method, which is doomed to the "curse of dimensionality" — exactly what the method tries to avoid. This is achieved by slicing the search space into covering cones (m is the number of cones allowed), because any point in a cone is visible from its vertex. At each cone we shoot rays (n is the number of rays per cone) from its vertex toward its basis, where each ray is sampled from its head toward its tail, while updating the best point found so far. Note that the global minimum of σ_max(K) over any compact subset of S_α is attained on the boundary of the related portion of the epigraph of σ_max(K). Therefore, we can break the innermost loop at the first moment we find an improvement in σ_max(K). This bypasses the need to sample the whole search space (although we raise the search-space dimension by 1) and explains the efficiency of the Ray-Shooting Method in finding a global optimum. Another advantage of the Ray-Shooting Method, specific to the problem of LQR via SOF, is that the search is concentrated in the parameter space (the qr-dimensional space where K rests) and not in the certificate space (the p²-dimensional space where the Lyapunov matrices P rest). Thus, the method avoids the need to solve any Riccati, LMI, or BMI equations, which can make a crucial difference for large-scale systems (i.e., where p² ≫ qr).

Algorithm 1. The Ray-Shooting Algorithm for LQR via SOF for discrete-time systems.

Require: An algorithm for deciding α-stability, an algorithm for computing σ_max(K), and algorithms for general linear algebra operations.

Input: 0 < ϵ ≤ 1/2, 0 < α < 1, h > 0, r > 0, integers m, n, s > 0, controllable pairs (A, B) and (A^T, C^T), matrices Q > 0, R > 0, and K_0 ∈ int(S_α).

Output: K ∈ S_α as close as possible to K*.

1. compute P(K_0) as in (5)

2. P_best ← P(K_0)

3. σ_max^best ← max σ(P_best)

4. υ ← 1

5. for i = 1 to m do

6. choose U_0 such that ‖U_0‖_F = 1, uniformly at random

7. L_0 ← K_0 + hU_0

8. for j = 1 to n do

9. choose F ∈ R_ϵ, uniformly at random

10. for k = s downto 0 do

11. t ← k/s

12. K(t) ← (1 − t)K_0 + tF

13. if K(t) ∈ S_α then

14. compute P(K(t)) as in (5)

15. σ_max(K(t)) ← max σ(P(K(t)))

16. if σ_max(K(t)) < σ_max^best then

17. K_best ← K(t)

18. P_best ← P(K(t))

19. σ_max^best ← σ_max(K(t))

20. end if

21. end if

22. end for

23. end for

24. if i > υ⌈e√(2πqr)⌉ then

25. K_0 ← K_best

26. υ ← υ + 1

27. end if

28. end for

29. return K_best, P_best, σ_max^best

Remark 3.1. In [12] it is shown that by taking m = ⌈e√(2πqr)⌉ iterations in the outer loop, we have K* ∈ D_0, for some direction U_0, almost surely. Let S_α^0(ϵ) denote the set {K ∈ S_α^0 : σ_max(K) ≤ σ_max(K*) + ϵ}. Then, the total number of arithmetic operations of the RS algorithm that guarantees a probability of at least 1 − β of hitting S_α^0(ϵ) is O(ln(β)·(h/ϵ)·(r/r_ϵ)^{q_0 r_0}·((qr)³ + p⁶)), for systems with q ≤ q_0, r ≤ r_0 for fixed q_0, r_0, where r_ϵ is the radius of the basis of a cone with height ϵ that has the same volume as S_α^0(ϵ); see [10, 11, 12]. This is a polynomial-time algorithm, by restricting the input and by regarding r/r_ϵ as the size of the problem.
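A compact numerical sketch of Algorithm 1 (illustrative only: the cone-base sampling is simplified, the ϵ-inflation of R_ϵ is omitted, and the center is moved to the best point after every cone; the plant matrices are arbitrary toy data, not from the chapter):

```python
import numpy as np

def stein_P(Ac, M):
    """Solve Ac' P Ac - P + M = 0 by vectorization, as in (5)."""
    p = Ac.shape[0]
    v = np.linalg.solve(np.eye(p * p) - np.kron(Ac.T, Ac.T),
                        M.flatten(order="F"))
    return v.reshape((p, p), order="F")

def rs_lqr_sof(A, B, C, Q, R, K0, alpha=1e-3, h=2.0, r=2.0,
               m=10, n=5, s=10, seed=0):
    """Ray-Shooting search for an LQR-SOF gain; K0 must be alpha-stabilizing."""
    rng = np.random.default_rng(seed)
    def smax(K):
        Ac = A - B @ K @ C
        if np.max(np.abs(np.linalg.eigvals(Ac))) >= 1.0 - alpha:
            return np.inf                     # K is not alpha-stabilizing
        P = stein_P(Ac, Q + C.T @ K.T @ R @ K @ C)
        return float(np.max(np.linalg.eigvalsh((P + P.T) / 2)))
    Kbest, sbest = K0.copy(), smax(K0)
    for _ in range(m):                        # m random cones
        U0 = rng.standard_normal(K0.shape)
        U0 /= np.linalg.norm(U0)              # ||U0||_F = 1
        L0 = K0 + h * U0                      # cone-base center
        for _ in range(n):                    # n rays per cone
            V = rng.standard_normal(K0.shape)
            V -= np.sum(V * U0) * U0          # tangent to the ball at L0
            V *= r * rng.random() / max(np.linalg.norm(V), 1e-12)
            F = L0 + V                        # point in the cone base
            for k in range(s, 0, -1):         # scan the ray from its head
                t = k / s
                st = smax((1 - t) * K0 + t * F)
                if st < sbest:
                    Kbest, sbest = (1 - t) * K0 + t * F, st
                    break                     # first improvement suffices
        K0 = Kbest                            # recenter on the best point
    return Kbest, sbest

A = np.array([[1.1, 0.3], [0.0, 0.5]])        # toy unstable plant
B = np.array([[1.0], [0.2]])
C = np.eye(2)
Q, R = np.eye(2), np.eye(1)
K0 = np.array([[0.4, 0.2]])                   # alpha-stabilizing start
Kb, sb = rs_lqr_sof(A, B, C, Q, R, K0)
```

By construction the returned value never exceeds σ_max(K_0), and each cone/ray pass can only improve it.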


4. The deterministic algorithm

The deterministic algorithm we introduce here as Algorithm 2 (which we call the MC algorithm) generalizes the algorithm of Daniel D. Moerder and Anthony A. Calise (see [15]) to the case of discrete-time systems. To the best of our knowledge, this is the best algorithm for LQR via SOF published so far, in terms of rate of convergence (to a local minimum).

Here, we wish to minimize the LQR functional


under the constraints


Since Y^T = Y, there exists an orthogonal matrix U such that Ŷ = U^T Y U is diagonal. Now, minimizing (8) under the constraints (9) is equivalent to minimizing


under the constraint P > 0, where Ŝ_{i,i} are the Lagrange multipliers. We have


where S = U Ŝ U^T. Note that S^T = S. Let the Lagrangian be defined by


for any K, any P > 0, and any S such that S^T = S. The necessary conditions for optimality are ∂L/∂K = 0, ∂L/∂P = 0, and ∂L/∂S = Y^T = Y = 0.

Now, using Lemma 2.2, we have


where the last passage is justified because σ(A_c) ⊂ D. Note that this, together with the stability of A_c, implies that S ≥ 0.

We also have




Thus, if C S C^T is invertible, then


Otherwise, if


which is equivalent to




where Z is an arbitrary q×r matrix (and we may take Z = 0, unless some other constraints on K are needed). Note that if condition (12) does not hold, then ∂L/∂K ≠ 0. We conclude with the following theorem:

Theorem 4.1. Assume that L(K, P, S) given by (10) is locally minimized at some point K, P > 0, and S such that S^T = S. Then

K = (R + B^T P B)^{−1} B^T P A S C^T (C S C^T)⁺ + Z R_{CSC^T}, for some q×r matrix Z,
P = mat((I_{p²} − A_c(K)^T ⊗ A_c(K)^T)^{−1} vec(Q + C^T K^T R K C)),
S = mat((I_{p²} − A_c(K) ⊗ A_c(K))^{−1} vec(x_0 x_0^T)),    (14)

where A_c(K) = A − BKC.


Proof: Since L(K, P, S) is minimal in some neighborhood of (K, P, S), it follows that ∂L/∂K(K, P, S) = 0, ∂L/∂P(K, P, S) = 0, and ∂L/∂S(K, P, S) = Y^T(K, P) = Y(K, P) = 0.

The condition ∂L/∂S(K, P, S) = Y^T(K, P) = Y(K, P) = 0 is just


which, with P > 0 and Q > 0, R > 0, implies that A_c(K) = A − BKC is stable. Now, since σ(A_c(K)) ⊂ D, it follows that I_{p²} − A_c(K)^T ⊗ A_c(K)^T is invertible, and therefore,


Since I_{p²} − A_c(K) ⊗ A_c(K) is invertible, ∂L/∂P(K, P, S) = 0 implies that


Finally, ∂L/∂K(K, P, S) = 0 implies that K C S C^T = (R + B^T P B)^{−1} B^T P A S C^T, which in view of Lemma 2.1 implies (R + B^T P B)^{−1} B^T P A S C^T L_{CSC^T} = 0 and


where Z is some q×r matrix.

Note that the equations are tightly coupled, in the sense that P and S need K, while K needs P and S. Note also the cubic dependencies (which can be made quadratic by introducing new variables). These make the related QMEs non-convex and, therefore, hard to compute.
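To make the coupling concrete, the following sketch (small arbitrary matrices, not from the chapter) evaluates the P and S expressions of Theorem 4.1 for a fixed gain K via the vec/Kronecker formulas and checks that they solve the corresponding Stein equations:

```python
import numpy as np

p = 3
A = np.array([[0.5, 0.1, 0.0],
              [0.0, -0.3, 0.2],
              [0.1, 0.0, 0.4]])
B = np.array([[1.0], [0.0], [0.5]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
K = np.array([[0.1, -0.05]])                 # some fixed gain; Ac(K) is stable here
Q, R = np.eye(p), np.eye(1)
x0 = np.array([[1.0], [0.5], [-0.5]])

Ac = A - B @ K @ C
M = Q + C.T @ K.T @ R @ K @ C

# P = mat((I - Ac' ⊗ Ac')^{-1} vec(Q + C'K'RKC)), with column-major vec
P = np.linalg.solve(np.eye(p * p) - np.kron(Ac.T, Ac.T),
                    M.flatten(order="F")).reshape((p, p), order="F")
# S = mat((I - Ac ⊗ Ac)^{-1} vec(x0 x0'))
S = np.linalg.solve(np.eye(p * p) - np.kron(Ac, Ac),
                    (x0 @ x0.T).flatten(order="F")).reshape((p, p), order="F")

# both are Stein solutions: Ac' P Ac - P + M = 0 and Ac S Ac' - S + x0 x0' = 0
stein_P_residual = float(np.linalg.norm(Ac.T @ P @ Ac - P + M))
stein_S_residual = float(np.linalg.norm(Ac @ S @ Ac.T - S + x0 @ x0.T))
```

Only the K-equation couples back into P and S; this is what forces the alternating scheme of Algorithm 2.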

Remark 4.1. When x_0 is unknown, it is customary to assume that x_0 is uniformly distributed on the unit sphere, which implies that E(x_0 x_0^T) = (1/p) I_p, where E is the expectation operator; since a positive scalar factor does not affect the minimizer, this amounts to changing the problem to that of minimizing E(J_{x_0}(K)) and replacing S with


Therefore, no change is needed in Algorithm 2, which already computes S with I_p in place of x_0 x_0^T.

Remark 4.2. The convergence of Algorithm 2 to a local minimum can be proved similarly to the proof appearing in [15], under the assumptions that S_α^{q×r} is nonempty and that (Q^{1/2}, A) is detectable (here, we do not need this condition because of the assumption that Q > 0). The convergence can actually be proved for the more general problem that adds ‖K‖_F² = trace(K^T K) to the LQR functional, thus minimizing also the Frobenius norm of K. In this context, note that adding ‖K‖₂² = max σ(K^T K) to the LQR functional would break the argument and would require a more general proof, because the proof appearing in [15] demands a C¹-smooth function of K, while ‖K‖₂² = σ_max(K^T K) is continuous but not continuously differentiable. The RS algorithm can use any continuous function of K and can also deal with sparse SOFs for LQR and with regional pole-placement SOFs for LQR.

Example 4.1. In the following simple example, we illustrate the notions appearing in the definition of the RS algorithm, and we demonstrate the operation of the RS algorithm. Consider the unstable system


where we look for a SOF K stabilizing the system while reducing the LQR functional (2) with Q = I, R = 1. Let K = [k₁  k₂]; then


with characteristic polynomial z² + z(k₁ + k₂ − 3/2) + (3/2)k₁ − 2k₂ − 1. Applying the Y. Bistritz stability criterion (see [25]), we have


where υ is the number of sign variations in the set. According to the Bistritz criterion, the system is stable if and only if υ = 0. We conclude that S is the set of all K such that (5/2)k₁ − k₂ − 3/2 > 0, −(3/2)k₁ + 2k₂ + 2 > 0, and (1/2)k₁ − 3k₂ + 3/2 > 0, or (5/2)k₁ − k₂ − 3/2 < 0, −(3/2)k₁ + 2k₂ + 2 < 0, and (1/2)k₁ − 3k₂ + 3/2 < 0, where the last branch is empty (which could have made the set non-convex). The set S appears in Figure 1 as the blue region, where the golden star is the analytic globally optimal solution K* = [1.09473459  0.36138828] (computed by the related discrete algebraic Riccati equation).
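With the closed-loop characteristic polynomial read as z² + (k₁ + k₂ − 3/2)z + (3/2)k₁ − 2k₂ − 1 (our reading of the text's coefficients), the quoted optimum can be checked numerically:

```python
import numpy as np

k1, k2 = 1.09473459, 0.36138828               # K* quoted in the text

# closed-loop characteristic polynomial z^2 + a1 z + a0
a1 = k1 + k2 - 1.5
a0 = 1.5 * k1 - 2.0 * k2 - 1.0
roots = np.roots([1.0, a1, a0])
stable = bool(np.max(np.abs(roots)) < 1.0)    # inside the open unit disk

# the three inequalities describing the (nonempty) stable branch
c1 = 2.5 * k1 - k2 - 1.5
c2 = -1.5 * k1 + 2.0 * k2 + 2.0
c3 = 0.5 * k1 - 3.0 * k2 + 1.5
in_region = bool(c1 > 0 and c2 > 0 and c3 > 0)

# at K = 0 the open-loop polynomial z^2 - 1.5 z - 1 has the unstable root 2
open_roots = np.roots([1.0, -1.5, -1.0])
```

The open-loop roots are 2 and −1/2, matching the unstable eigenvalue 2 mentioned later for the initial-condition response.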

Figure 1.

The stability region S.

Algorithm 2. The MC algorithm for LQR via SOF for discrete-time systems.

Require: An algorithm for deciding α-stability, an algorithm for computing σ_max(K), and algorithms for general linear algebra operations.

Input: 0 < ϵ ≤ 1/2, 0 < α < 1, integers m, s > 0, controllable pairs (A, B) and (A^T, C^T), matrices Q > 0, R > 0, and K_0 ∈ int(S_α).

Output: K ∈ S_α that locally minimizes σ_max(K).

1. j ← 0; A_0 ← A − B K_0 C

2. P_0 ← mat((I_{p²} − A_0^T ⊗ A_0^T)^{−1} vec(Q + C^T K_0^T R K_0 C))

3. S_0 ← mat((I_{p²} − A_0 ⊗ A_0)^{−1} vec(I_p)); σ_max(K_0) ← max σ(P_0)

4. ΔK_0 ← (R + B^T P_0 B)^{−1} B^T P_0 A S_0 C^T (C S_0 C^T)⁺
5. flag ← 0

6. for k = 0 to s do

7. t ← k/s

8. K(t) ← (1 − t)K_0 + tΔK_0

9. if K(t) ∈ S_α then

10. A_t ← A − B K(t) C

11. P_t ← mat((I_{p²} − A_t^T ⊗ A_t^T)^{−1} vec(Q + C^T K(t)^T R K(t) C))

12. S_t ← mat((I_{p²} − A_t ⊗ A_t)^{−1} vec(I_p)); σ_max(K(t)) ← max σ(P_t)

13. if σ_max(K(t)) < σ_max(K_0) then

14. K_1 ← K(t); A_1 ← A − B K_1 C; P_1 ← P_t; S_1 ← S_t; σ_max(K_1) ← σ_max(K(t))

15. flag ← 1

16. end if

17. end if

18. end for

19. if flag == 1 then

20. while |σ_max(K_{j+1}) − σ_max(K_j)| ≥ ϵ and j < m do

21. ΔK_j ← (R + B^T P_j B)^{−1} B^T P_j A S_j C^T (C S_j C^T)⁺
22. fork=0tosdo

23. tks

24. Kt1tKj+tΔKj

25. ifKtSαthen

21. AtABKtC

22. PtmatIpIpAtTAtT1vecQ+CTKtTRKtC

23. StmatIpIpAtAt1vecIp;σmaxKtmaxσPt

29. ifσmaxKt<σmaxKjthen

30. Kj+1Kt;Aj+1ABKj+1C;Pj+1Pt;Sj+1St;σmaxKj+1σmaxKt

31. end if

32. end if

33. end for

34. jj+1

35. end while

36. end if

37. returnKbestKj,PbestPj,σmaxbestσmaxKj
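A numerical sketch of the MC iteration (illustrative: the gain-update step follows the first-order condition of Theorem 4.1 with Z = 0, which is our reading of the update used by the algorithm; the plant matrices are arbitrary toy data):

```python
import numpy as np

def stein(Ahat, M):
    """Solve Ahat X Ahat^T - X + M = 0 via vec/Kronecker (column-major vec)."""
    p = Ahat.shape[0]
    v = np.linalg.solve(np.eye(p * p) - np.kron(Ahat, Ahat),
                        M.flatten(order="F"))
    return v.reshape((p, p), order="F")

def eval_sof(A, B, C, Q, R, K, alpha):
    """Return (P, S, sigma_max) for K, or None if K is not alpha-stable."""
    Ac = A - B @ K @ C
    if np.max(np.abs(np.linalg.eigvals(Ac))) >= 1.0 - alpha:
        return None
    P = stein(Ac.T, Q + C.T @ K.T @ R @ K @ C)   # Ac' P Ac - P + M = 0
    S = stein(Ac, np.eye(A.shape[0]))            # Ac S Ac' - S + I  = 0
    return P, S, float(np.max(np.linalg.eigvalsh((P + P.T) / 2)))

def mc_lqr_sof(A, B, C, Q, R, K0, alpha=1e-3, m=50, s=100, eps=1e-10):
    K = K0
    P, S, sig = eval_sof(A, B, C, Q, R, K, alpha)  # K0 must be alpha-stabilizing
    for _ in range(m):
        # candidate gain from the first-order condition (Z = 0)
        Kc = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A @ S @ C.T) \
             @ np.linalg.pinv(C @ S @ C.T)
        improved, Kbase = False, K
        for k in range(s + 1):                     # line search toward Kc
            Kt = (1.0 - k / s) * Kbase + (k / s) * Kc
            out = eval_sof(A, B, C, Q, R, Kt, alpha)
            if out is not None and out[2] < sig - eps:
                K, (P, S, sig) = Kt, out           # record the best so far
                improved = True
        if not improved:
            break
    return K, sig

A = np.array([[1.1, 0.3], [0.0, 0.5]])             # toy unstable plant
B = np.array([[1.0], [0.2]])
C = np.eye(2)
Q, R = np.eye(2), np.eye(1)
K0 = np.array([[0.4, 0.2]])
K_loc, sig_loc = mc_lqr_sof(A, B, C, Q, R, K0)
```

The loop decreases σ_max monotonically and stops when a full line search yields no improvement, i.e., at a (local) stationary point.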

In Figure 1, we can see how the RS algorithm works: we fix α = 10⁻³, ϵ = 10⁻¹⁶, r = 2, h = 2, and we set m = 1, n = 5, s = 10 for a single iteration, where the single cone is sampled along 5 rays and each ray is sampled 10 times. The sampled points are circled, where red circles indicate infeasible or non-improving points and black circles indicate improving points. The green star is the initial point K_0 found by the Ray-Shooting algorithm for minimal-norm SOF. The bold black circle is the boundary of the closed ball B̄(K_0, h). We choose U_0 randomly to define the search direction, and we set L_0 = K_0 + hU_0 to be the point where the direction meets the boundary of the ball. L is the tangent line at L_0 to the circle, and R_ϵ is the segment of width 2r on the line, inflated by ϵ. The search cone D_0 = c-hull(K_0, R_ϵ) is the related black triangle. Here S_α^0 = S_α ∩ D_0 is just the portion of the blue region inside the triangle, and we can see that the assumption that K* ∈ D_0 is in force. For the current problem ⌈e√(2πqr)⌉ = 10, and therefore, by making 10 iterations, K* will be inside some triangle almost surely.

The algorithm chooses F in the basis of the triangle and defines K(t) to be the ray from K_0 to F. The ray is sampled at 10 equally spaced points, and the best feasible point is recorded.

In Figure 3, we can see that 5 iterations suffice to include K* in some triangle and to find improving points very close to K*. In Figure 4, we can see that when we allow 20 iterations, after 10 iterations the center K_0 is switched to the best point found so far (see lines 24–26 in Algorithm 1). This is done in order to raise the probability of hitting K* or its ϵ-neighborhood, and as we can see, the final best point (green star) is very close to K*.

Figure 2.

Single iteration of the RS algorithm.

Figure 3.

Five iterations of the RS algorithm.

Figure 4.

Twenty iterations of the RS algorithm.

The results of the algorithm for 1, 5, and 20 iterations are the following. Note that σ_max(K*) = 5.9551, and note the "huge variations" the function σ_max(K) has.

For m = 1 we had


Note that in this case, the MC algorithm makes no improvement, while the RS and RS + MC algorithms attain values very close to the global optimal value, with a slightly better value for RS + MC over RS.

For m = 5 we had


For m = 20 we had


In Figure 5, the initial-condition response of the open-loop system is given. One can see the unstable mode related to the unstable eigenvalue 2. In Figures 6–8, the initial-condition responses of the closed-loop systems with the SOFs for m = 20, with x_0 = [3  1]^T and sampling time T_s = 0.01, are given. One can see that the responses of the closed-loop systems with the SOFs computed by RS and RS + MC are very close to the globally optimal response, while the response of the closed-loop system with the SOF computed by the MC algorithm (actually with the initial SOF), although stable, is unacceptable.

Figure 5.

The initial condition response of the open-loop system.

Figure 6.

The initial condition response of the closed-loop system with the SOF computed by the RS algorithm (blue) compared with the global optimal response (red).

Figure 7.

The initial condition response of the closed-loop system with the SOF computed by the MC algorithm (blue) compared with the global optimal response (red).

Figure 8.

The initial condition response of the closed-loop system with the SOF computed by the RS + MC algorithm (blue) compared with the global optimal response (red).


5. Experiments

In the following experiments, we applied Algorithms 1 and 2 to systems taken from the libraries [26, 27, 28]. The systems given in these libraries are real-life continuous-time systems. In order to get related discrete-time systems, we sampled the systems using the Tustin method with sampling time T_s = 0.01 s. We took only the systems for which the RS algorithm succeeded in finding SOFs for the continuous-time systems (see [10], Table 8, p. 231). In order to initialize the MC algorithm, we also used the RS algorithm to find a starting α-stabilizing SOF. In all the experiments, we used m = 2⌈e√(2πqr)⌉, n = 100, s = 100, h = 100, r = 100, ϵ = 10⁻¹⁶ for the RS algorithm, and m = 200⌈e√(2πqr)⌉, s = 100 for the MC algorithm, in order to get the same total number of iterations and the same number s = 100 of iterations for the local search. We took Q = I_p, R = I_q in all cases.

The stability-margin column of Table 1 relates to the 0 < α < 1 for which the absolute value of any eigenvalue of the closed loop is less than 1 − α. The values of α in Table 1 relate to the largest 0 < α < 1 for which the RS algorithm succeeded in finding a starting SOF K_0. As we saw above, it is worth searching for a starting point K_0 that maximizes 0 < α < 1. This can be achieved efficiently by running a binary search on 0 < α < 1, using the RS algorithm as an oracle. Note that the RS CPU time appearing in the fourth column of Table 1 relates to running the RS algorithm for the known optimal value of 0 < α < 1. The RS algorithm is sufficiently fast also for this purpose, but other algorithms such as HIFOO (see [24]) and HINFSTRUCT (see [29]) can be applied in order to get a starting SOF. The advantage of using RS is that it finds a starting SOF with a relatively small norm.
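The binary-search wrapper suggested above is straightforward; here is a sketch in which `find_sof(alpha)` stands for any oracle (e.g., the RS algorithm of [10]) returning a stabilizing SOF or None — the oracle below is a hypothetical stand-in for illustration:

```python
def max_alpha(find_sof, lo=0.0, hi=1.0, tol=1e-3):
    """Binary search for the largest alpha for which the oracle still
    finds an alpha-stabilizing SOF. Assumes feasibility is monotone in
    alpha (feasible at alpha implies feasible at any smaller alpha)."""
    best = None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        K = find_sof(mid)
        if K is not None:
            best, lo = (mid, K), mid      # feasible: push alpha up
        else:
            hi = mid                      # infeasible: push alpha down
    return best

# stand-in oracle: pretend SOFs exist exactly for alpha <= 0.37
demo = max_alpha(lambda a: "K" if a <= 0.37 else None)
```

Each oracle call costs one full RS run, so the total cost is about log2(1/tol) RS runs.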

Let σ_max(F) denote the functional (7) for the system (A, B, I_p), where A − BF is stable, i.e., F ∈ S^{q×p}. Let P(F) denote the Lyapunov matrix (5) for the system (A, B, I_p) with F as above. Let σ_max(K) denote the functional (7) for the system (A, B, C) with K ∈ S^{q×r} and related Lyapunov matrix P = P(K) given by (5). Now, if A − BKC is stable for some K, then A − BF is stable for F = KC (but there might exist F such that A − BF is stable that cannot be written as KC for any q×r matrix K). Therefore,


where F* is an optimal LQR state-feedback (SF) controller for the system (A, B, I_p). We conclude that σ_max(F*) ≤ σ_max(K*) ≤ σ_max(K_best). Thus, σ_max(F*) is a lower bound for σ_max(K_best) and can serve as a good estimator for it, in order to quantify the convergence of the algorithm to the global minimum (as is evidently seen from Table 1 in many cases) and in order to stop the algorithm earlier, since σ_max(F*) can be calculated in advance. The lower bound appears in the last column of Table 1.
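The lower bound σ_max(F*) is cheap to compute in advance; a sketch (toy matrices, not from the chapter; a fixed-point Riccati iteration stands in for a dedicated DARE solver):

```python
import numpy as np

def dare_fixed_point(A, B, Q, R, iters=500):
    """Iterate P <- A'PA - A'PB (R + B'PB)^{-1} B'PA + Q to convergence;
    for a stabilizable pair (A, B) and Q > 0 the limit is the stabilizing
    solution of the discrete algebraic Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = A.T @ P @ A - A.T @ P @ B @ G + Q
    return P

A = np.array([[1.1, 0.3], [0.0, 0.5]])       # toy unstable open loop
B = np.array([[1.0], [0.2]])
Q, R = np.eye(2), np.eye(1)

P_sf = dare_fixed_point(A, B, Q, R)
sigma_max_lower = float(np.max(np.linalg.eigvalsh((P_sf + P_sf.T) / 2)))
# any SOF gain K for any output matrix C can do no better:
# sigma_max(F*) <= sigma_max(K*) <= sigma_max(K_best)
```

The value `sigma_max_lower` plays the role of the last column of Table 1 for this toy plant.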

For all the systems, we had (A, B) and (A^T, C^T) controllable, except for ROC1 and ROC2. All the systems are unstable, except for AC6, AC15, and NN16, which are stable but not α-stable for the α given in the stability-margin column.

The experiments were executed on:



Processor: Intel(R) Core(TM) i7-8750H CPU@2.20GHz.

Platform: MATLAB, Version R2018b Win 64.

5.1 Conclusions from the experiments

Regarding the experimental results in Table 2 and the comparison between the RS algorithm and the MC algorithm, we conclude:

  1. The RS algorithm performs an order of magnitude better than the MC algorithm for the systems AC1, AC11, HE1, HE4, ROC1, ROC4, TF1, and NN5.

  2. The MC algorithm performs an order of magnitude better than the RS algorithm for the systems AC5 and DIS5.

  3. The MC algorithm performs slightly better than the RS algorithm for systems HE3 and NN16.


Table 1.

General information of the systems and initial values.


Table 2.

Experimental results.

Regarding the experimental results in Table 2 and the performance of the RS + MC algorithm, we conclude:

  1. The RS + MC algorithm performs better than each algorithm separately, for systems AC6, HE4, TF1, NN13, NN16, and NN17.

  2. The RS + MC algorithm performs better than the RS algorithm for systems AC5, HE3, and DIS5.

  3. The RS + MC algorithm performs exactly as the RS algorithm for systems AC1, AC11, DIS4, HE1, ROC1, ROC4, and NN5. This observation supports the claim of convergence of the RS algorithm to a global optimum.

  4. The RS + MC algorithm performs exactly as the MC algorithm for systems AC5 and DIS5.

Regarding improvements over the starting point, we had:

  1. The RS algorithm failed in finding any improvement over σmaxK0for systems AC5 and DIS5.

  2. The MC algorithm failed in finding any improvement over σ_max(K_0) for systems AC1, AC11, ROC1, and ROC4. This observation supports the heuristic that it is better to start with a SOF that brings the poles of the closed loop as close as possible to the boundary of the disk D_α.

  3. The RS + MC algorithm improved σ_max(K_0) in every case.

Regarding the assessment of convergence to a global minimum, we had the following results:

  1. The RS algorithm and the RS + MC algorithm had very close values of σ_max(K_best) (or exactly the same value), which is very close to the lower bound, for systems AC1, AC6, HE1, HE3, HE4, ROC1, DIS4, NN5, and NN16.

  2. The MC algorithm achieved a value of σ_max(K_best) very close to the lower bound for the systems AC6, HE3, DIS5, NN5, and NN16.

As was expected, the MC algorithm seems to perform better locally, while the RS algorithm seems to perform better globally. Thus, practically, the best approach is to apply the RS algorithm in order to find a close neighborhood of a global minimum and then to apply the MC algorithm on the result, for the local optimization, as is evidently seen from the performance of the RS + MC algorithm.

5.2 Some specific results

Example 5.1. For the HE4 system with




we had the following results:

by the RS algorithm for minimal-gain SOF (see [10])


by the RS Algorithm 1


by the MC Algorithm 2


and by RS + MC Algorithms 1 and 2:


6. Concluding remarks

The Ray-Shooting Method is a powerful tool, since it practically solves the problem of LQR via SOF for real-life discrete-time LTI systems. The proposed hybrid algorithm RS + MC has good performance in terms of run-time, in terms of the quality of the controllers (reducing the LQR functional value from the starting point and reducing the controller norm), and in terms of the success rate in finding a starting point feasible with respect to the needed α-stability. The RS + MC algorithm has a proof of convergence in probability to a global minimum (as is evidently seen from the experiments). This enlarges the practicality and scope of the Ray-Shooting Method in solving hard-complexity control problems, and we expect more results in this direction.


  1. Camino JF, Zampieri DE, Peres PLD. Design of a vehicular suspension controller by static output feedback. Proceedings of the American Control Conference; San Diego, California; June 1999
  2. Nemirovskii A. Several NP-hard problems arising in robust stability analysis. Mathematics of Control, Signals, and Systems. 1993;6:99-105
  3. Blondel V, Tsitsiklis JN. NP-hardness of some linear control design problems. SIAM Journal on Control and Optimization. 1997;35(6):2118-2127
  4. Vidyasagar M, Blondel VD. Probabilistic solutions to some NP-hard matrix problems. Automatica. 2001;37:1397-1405
  5. Tempo R, Calafiore G, Dabbene F. Randomized Algorithms for Analysis and Control of Uncertain Systems. London: Springer-Verlag; 2005
  6. Tempo R, Ishii H. Monte Carlo and Las Vegas randomized algorithms for systems and control. European Journal of Control. 2007;13:189-203
  7. Arzelier D, Gryazina EN, Peaucelle D, Polyak BT. Mixed LMI/randomized methods for static output feedback control. In: Proceedings of the American Control Conference. Baltimore, Maryland, USA: Institute of Electrical and Electronics Engineers (IEEE); 2010. pp. 4683-4688
  8. Syrmos VL, Abdallah C, Dorato P, Grigoriadis K. Static output feedback: A survey. Automatica. 1997;33:125-137
  9. Sadabadi MS, Peaucelle D. From static output feedback to structured robust static output feedback: A survey. Annual Reviews in Control. 2016;42:11-26
  10. Peretz Y. A randomized approximation algorithm for the minimal-norm static-output-feedback problem. Automatica. 2016;63:221-234
  11. Peretz Y. On applications of the ray-shooting method for structured and structured-sparse static-output-feedbacks. International Journal of Systems Science. 2017;48(9):1902-1913
  12. Peretz Y. On application of the ray-shooting method for LQR via static-output-feedback. MDPI Algorithms Journal. 2018;11(1):1-13
  13. Peretz Y. A randomized algorithm for optimal PID controllers. MDPI Algorithms Journal. 2018;11(81):1-15
  14. Peretz Y, Moyal S, Merzbach O. A randomized algorithm for robust stabilization via static-output-feedback. In: IEEE International Conference on the Science of Electrical Engineering in Israel (ICSEE); December 12-14, 2018; Eilat, Israel
  15. Moerder D, Calise A. Convergence of numerical algorithm for calculating optimal output feedback gains. IEEE Transactions on Automatic Control. 1985;AC-30:900-903
  16. Johnson T, Athans M. On the design of optimal dynamic compensators for linear constant systems. IEEE Transactions on Automatic Control. 1970;AC-15:658-660
  17. Levine W, Athans M. On the determination of the optimal constant output feedback gains for linear multivariable systems. IEEE Transactions on Automatic Control. 1970;AC-15:44-48
  18. Levine W, Johnson TL, Athans M. Optimal limited state variable feedback controllers for linear systems. IEEE Transactions on Automatic Control. 1971;AC-16:785-793
  19. Peres PLD, Geromel J, de Souza S. Optimal H2 control by output feedback. In: Proceedings of the 32nd IEEE Conference on Decision and Control, San Antonio, Texas, USA. 1993. pp. 102-107
  20. Iwasaki T, Skelton RE. All controllers for the general H∞ control problem: LMI existence conditions and state space formulas. Automatica. 1994;30:1307-1317
  21. Iwasaki T, Skelton RE. Linear quadratic suboptimal control with static output feedback. Systems & Control Letters. 1994;23(6):421-430
  22. Henrion D, Loefberg J, Kočvara M, Stingl M. Solving polynomial static output feedback problems with PENBMI. In: Proceedings of the joint IEEE Conference on Decision and Control and European Control Conference, Sevilla, Spain. 2005
  23. Yang K, Orsi R. Generalized pole placement via static output feedback: A methodology based on projections. Automatica. 2006;42:2143-2150
  24. Gumussoy S, Henrion D, Millstone M, Overton ML. Multiobjective robust control with HIFOO 2.0. Proceedings of the IFAC Symposium on Robust Control Design; Haifa, Israel. 2009
  25. Bistritz Y. Zero location with respect to the unit circle of discrete-time linear system polynomials. Proceedings of the IEEE. 1984;72(9):1131-1142
  26. Leibfritz F. COMPleib: Constrained matrix-optimization problem library—A collection of test examples for nonlinear semidefinite programs, control system design and related problems. Tech.-Report. 2003
  27. Leibfritz F, Lipinski W. COMPleib 1.0—User manual and quick reference. Tech.-Report. 2004
  28. Leibfritz F. Description of the benchmark examples in COMPleib 1.0. Tech.-Report. 2003
  29. Gahinet P, Apkarian P. Frequency-domain tuning of fixed-structure control systems. In: Proceedings of 2012 UKACC International Conference on Control; 3-5 September 2012; Cardiff, UK. IEEE. 2012. pp. 178-183
