Open access peer-reviewed chapter

Conditions for Optimality of Singular Controls in Dynamic Systems with Retarded Control

Written By

Misir J. Mardanov and Telman K. Melikov

Submitted: February 2nd, 2016 Reviewed: May 12th, 2016 Published: October 19th, 2016

DOI: 10.5772/64225



In this chapter, we consider an optimal control problem with retarded control and study a wider class of singular (in the classical sense) controls than previously treated. Various necessary conditions for the optimality of singular controls are obtained in recurrent form; they include analogs of the Kelley, Kopp-Moyer, Gabasov, and equality-type conditions. In the proofs of the main results, Legendre polynomials are used as variations of the control.


  • singular control
  • optimal control
  • variation transform method
  • Legendre polynomial
  • necessary optimality conditions

1. Introduction

As is known, optimal control problems described by dynamical systems with retarded control attract the attention of many specialists, and the results obtained in this field deal mainly with first-order necessary optimality conditions [1–8, etc.]. However, the theory of singular controls for systems with retarded control has not yet been studied sufficiently [9, 10]. One of the main reasons is that the methods proposed and developed for ordinary systems (systems without retardation) in [11–18] are not directly applicable to singular controls in dynamical systems with aftereffect (see [9, 14–19]). Therefore, the study of optimal control problems for systems with retarded control is of special theoretical interest. Besides, such problems have practical significance as well, because mathematical modeling of certain problems of economic planning and production organization leads to problems with retarded control (see, e.g., [20]).

As is known, the concept of singular control was first introduced into the theory of optimal processes by Rozonoer [22] in 1959. The first results on necessary optimality conditions for singular controls were obtained by Kelley [12] in the case of an open set U, and by Gabasov [11] in the case of an arbitrary (in particular, closed) set U, where U is the set of values of admissible controls. Afterward, the Kelley and Gabasov conditions, as well as the methods for treating singular controls proposed in [11, 13], were significantly generalized in [10, 14–19, 23–41, etc.] to the cases of (1) controls with higher-order degeneration, (2) multidimensional controls, and (3) various classes of control systems. Along these lines, the methods of [11, 13] were generalized in [17, 37], and necessary optimality conditions for singular controls in the form of recurrence sequences were obtained for dynamical systems with a delay in the state. Similar results for dynamic systems with retarded control were obtained in [10] only for singular controls with a complete degree of degeneracy. Below, by considering a wider class of singular controls and proposing a modified version of the variation transformation method [13] and of the matrix impulse method [11], we generalize all results of [10]. In treating the optimality of singular (in the classical sense) controls, we use the Legendre polynomials [42, p. 413] as variations of the control, because such an approach is more convenient.

1. Problem statement. Consider the following optimal control problem with retarded control:


Here, U is an open set in the r-dimensional Euclidean space R^r, R^1 =: R := (−∞, +∞); x ∈ R^n is an n-vector of phase coordinates; u ∈ U is an r-vector of control actions; h = const > 0; x0, t0, t1 are fixed points with t1 > t0 + h; φ(x): R^n → R, f(x, u, υ, t): R^n × R^r × R^r × R → R^n, and w(·) ∈ C˜+([t0 − h, t0], R^r) are given functions, where C˜+([t0 − h, t0], R^r) is the class of piecewise continuous (continuous from the right at discontinuity points and continuous from the left at the point t0) vector functions w(t): [t0 − h, t0] → R^r.

The function u(·) is said to be an admissible control if it belongs to C˜+(I1, R^r) and satisfies condition (1.3), where


Note that if the function f(·) and its partial derivative fx(·) are continuous on R^n × R^r × R^r × R, then, using the method of successive approximations as in [21], it is easy to show that every admissible control u(·) generates a unique absolutely continuous solution x(·) of the system (1.2), (1.3); this solution is assumed to be defined everywhere on I.
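As an illustration of this construction (a minimal sketch, not from the chapter; all names are ours): once an admissible control u(·) is fixed, both u(t) and u(t − h) are known data, so the state equation reduces to an integral equation that the successive approximations solve.

```python
import numpy as np

def solve_retarded_control(f, u, x0, t0, t1, h, n=2001, iters=60):
    """Successive approximations (Picard iterations) for
        x'(t) = f(x(t), u(t), u(t - h), t),  x(t0) = x0,
    where the admissible control u is known on [t0 - h, t1]."""
    t = np.linspace(t0, t1, n)
    dt = t[1] - t[0]
    ut, uth = u(t), u(t - h)          # control and its retarded values
    x = np.full(n, float(x0))
    for _ in range(iters):
        rhs = f(x, ut, uth, t)
        # x_{k+1}(t) = x0 + integral over [t0, t] of f(x_k, u, u(.-h), .)
        x_new = x0 + np.concatenate(([0.0], np.cumsum((rhs[1:] + rhs[:-1]) * dt / 2)))
        if np.max(np.abs(x_new - x)) < 1e-12:
            return t, x_new
        x = x_new
    return t, x
```

For example, with f(x, u, υ, t) = −x + u + υ and u ≡ 1, the iterations converge to the exact solution x(t) = 2(1 − e^(−t)).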

If the admissible control u0(t), t ∈ I1, is a solution of the problem (1.1)–(1.3), we call it an optimal control, while the corresponding trajectory x0(t), t ∈ I, of the system (1.2), (1.3) is called an optimal trajectory. The pair (u0(·), x0(·)) is called an optimal process.

While studying the problem (1.1)–(1.3), we will also use the following assumptions:

1. (A1) let the functional φ(x): R^n → R be twice continuously differentiable on R^n;

2. (A2) let the function f(·) and its partial derivatives fz(·), fzz(·) be continuous on R^n × R^r × R^r × R, where z = (x, u, υ);

3. (A3) let the function f(·) be three times continuously differentiable with respect to the totality of its arguments on R^n × R^r × R^r × R;

4. (A4) let the inclusions w˙(·) ∈ C˜([t0 − h, t0], R^r) and u˙0(·) ∈ C˜(I1, R^r) hold for the derivatives w˙(·) and u˙0(·), where C˜([a, b], R^r) is the class of piecewise continuous (continuous from the right and from the left at the points a and b, respectively) vector functions c(t): [a, b] → R^r;

5. (A5) let the function f(·) be sufficiently smooth with respect to the totality of its arguments on R^n × R^r × R^r × R;

6. (A6) let the initial function w(·) ∈ C˜+([t0 − h, t0], R^r) and the admissible control u0(·) be sufficiently piecewise smooth, that is,


We especially note that more precise assumptions on the analytic properties of φ(·), f(·), u(·), and w(·) follow directly from the representations of the optimality criteria obtained below.


2. The second variation of the objective functional and the definition of a singular (in the classical sense) control

Let assumptions (A1) and (A2) be fulfilled, and let (u0(·), x0(·)) be an admissible process. If the process (u0(·), x0(·)) is optimal, then, using the known technique (see, e.g., [27, p. 51]), it is easy to get




where δ1S(u0; δu(·)) and δ2S(u0; δu(·)) are, respectively, the first and second variations of the functional S(u) at the point u0(·);

, μ, ν ∈ {x, u, υ}; δu(·) is the variation of the control u0(·), while δx(·) is the corresponding variation of the trajectory x0(t), t ∈ I, where δx(·) is the solution of the system

where fμ(t) := fμ(x0(t), u0(t), u0(t − h), t), t ∈ I, μ ∈ {x, u, υ}, while the vector function

is the solution of the conjugate system

Below, we consider that the following conditions are fulfilled:


If (u0(·), x0(·)) is an optimal process, then, by the definition of an admissible control, taking (2.2)–(2.4) into account in (2.1) and proceeding in the same way as in [27, p. 53], we obtain the classical necessary optimality conditions (analogs of the Euler equation and of the Legendre–Clebsch condition) [10, 43]; that is, the following relations are valid:

(a) Hu(t) + χ(t)Hυ(t + h) = 0, t ∈ I; (2.7)

(b) u˜^T[Huu(t) + χ(t)Hυυ(t + h)]u˜ ≤ 0, t ∈ I, u˜ ∈ R^r; (2.8)

(c) Hυ(t1) = 0, u˜^T[Huu(t1 − h) + Hυυ(t1)]u˜ ≤ 0 for all u˜ ∈ R^r, if the optimal control u0(·) is continuous at the points t = t1 − ih, i = 1, 2. Here, χ(·) is the characteristic function of the set [t0, t1 − h).

It should be noted that optimality condition (c) is a corollary of conditions (a) and (b).

Definition 2.1. An admissible control u0(t), t ∈ I, satisfying conditions (2.7) and (2.8) is called singular (in the classical sense) if


In this case, the set I is called a singular arc of the admissible control u0(·). The main goal of this chapter is to study such singular controls.

Let u = (p, q)^T, υ = (p˜, q˜)^T, where p, p˜ ∈ R^r0, q, q˜ ∈ R^r1, r0 + r1 = r. Without loss of generality [27, p. 138], we assume that the singularity of the control u0(·) is delivered by the vector component p ∈ R^r0, that is,


Note that the general inequality (2.8) implies the following equality-type optimality condition for a singular (in the classical sense) control u0(·):


Proposition 2.1. Let assumptions (A1) and (A2) be fulfilled, let the admissible control u0(·) = (p(·), q(·))^T be singular (in the classical sense), and let condition (2.9) be fulfilled along it. Let also the variations δu(t) = (δ0p(t), δq(t))^T ∈ C˜+(I1, R^r) be nonzero only on [θ, θ + ε), where θ ∈ [t0, t1) and ε ∈ (0, ε0), with the number ε0 ∈ (0, h) such that (1) if θ ∈ [t0, t1 − h), then ε0 < t1 − θ − h, and (2) if θ ∈ [t1 − h, t1), then ε0 < t1 − θ. Then (a) the variational system (2.4) becomes


(b) the following representation is valid for the second variation (2.3):


Proof. To prove (a), it suffices to use the definition of the variation δu(·) = (δ0p(·), δq(·))^T in (2.4). The proof of (b) follows directly from (2.3), in view of (2.6), (2.9), (2.11), and the definition of the variation δu(·) = (δ0p(·), δq(·))^T.


3. Transformation of the second variation of the functional by means of a modified variant of the matrix impulse method (for controls that are singular in the sense of Definition 2.1)

Let conditions (A1) and (A2) be fulfilled, and let equality (2.9) hold along the singular control u0(·). Using Proposition 2.1, let the variation δu(t) = (δ0p(t), δq(t))^T ∈ C˜+(I1, R^r) have the form:


where ξ ∈ R^r0, θ ∈ [t0, t1), and the number ε0 was defined in Proposition 2.1.

Along the singular control u0(·) = (p(·), q(·))^T satisfying condition (2.9), and taking (3.1) into account, formula (2.12) takes the form:

δ²S(u0; δu(·)) = δx^T(t1)φxx(x0(t1))δx(t1) − Δ1* − 2Δ2*, (3.2)



where δx(t), t ∈ I, is the solution of the system (2.11).

By the Cauchy formula, we have


where λ(s, t), (s, t) ∈ I × I, is the solution of the system


λ(s, t) = 0 for s > t, λ(s, s) = E (E is the n × n identity matrix).
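For intuition (an illustrative sketch, not from the chapter; the standard variational-equation convention and the function name are our assumptions): λ plays the role of the state transition matrix Φ(t, s) of the linearized system z˙ = fx(t)z, with Φ(s, s) = E and the group property Φ(t, s) = Φ(t, r)Φ(r, s). Numerically it can be obtained by integrating the matrix ODE dΦ/dt = A(t)Φ:

```python
import numpy as np

def transition_matrix(A, s, t, n=400):
    """State transition matrix Phi(t, s) of z'(tau) = A(tau) z(tau),
    Phi(s, s) = I, computed with the classical RK4 scheme.
    A is a callable tau -> (m x m) matrix."""
    Phi = np.eye(A(s).shape[0])
    taus = np.linspace(s, t, n + 1)
    dt = taus[1] - taus[0]
    for tau in taus[:-1]:
        k1 = A(tau) @ Phi
        k2 = A(tau + dt / 2) @ (Phi + dt / 2 * k1)
        k3 = A(tau + dt / 2) @ (Phi + dt / 2 * k2)
        k4 = A(tau + dt) @ (Phi + dt * k3)
        Phi = Phi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return Phi
```

For a constant matrix A = (0 1; −1 0), this reproduces the rotation matrix e^(A(t−s)).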

Since (A2) is fulfilled and u0(·) ∈ C˜+(I1, R^r), by (3.1) and (3.4), for all θ ∈ [t0, t1), from (3.3) we get


where χ(·) is the characteristic function of the set [t0, t1 − h), and o(τ)/τ → 0 as τ → 0.

By (2.6) and (3.5), and taking into account λ(s, s) = E and λ(s, t) = 0 for s > t, we calculate the separate terms of (3.2). As a result, after simple reasoning, we get


Following [10, 14, 17], we consider the matrix functions


where λ(·, ·) is the solution of the system (3.4).

Thus, substituting (3.6)–(3.8) into (3.2) and allowing for (3.9), (3.10) and the equality λ(s, t) = 0 for s > t, (s, t) ∈ I × I, we obtain the validity of the following statement.

Proposition 3.1. Let conditions (A1) and (A2) be fulfilled, let the admissible control u0(·) = (p(·), q(·))^T be singular (in the classical sense), and let condition (2.9) be fulfilled along it. Then, for each θ ∈ [t0, t1) and all ξ ∈ R^r0, the following expansion is valid:


where the number ε0 was defined above (see Proposition 2.1), χ(·) is the characteristic function of the set [t0, t1 − h), and the matrix functions M0[p,p](θ, θ), M0[p,p˜](θ, θ + h), M0[p˜,p˜](θ + h, θ + h) are defined by (3.10).


4. Transformation of the second variation of the functional by means of a modified variant of the variation transformation method

4.1. Expansion of the second variation δ²S(u0; δu(·)) in a Kelley-type variation (first-order transformation)

Let u0(·) be a singular control satisfying condition (2.9), and let assumptions (A1), (A3), and (A4) be fulfilled. We now proceed to generalize and apply the variation transformation method [13].

Introduce the following set dependent on the admissible control u0():

I* := I*(u0(·)) = {θ ∈ [t1 − h, t1): the derivative u˙0(·) is continuous or continuous from the right at the points θ and θ − h} ∪ {θ ∈ [t0, t1 − h): the derivative u˙0(·) is continuous or continuous from the right at the points θ and θ ± h}. (4.1)

The following properties are obvious: (1) I \ I* is a finite set and t1 ∉ I*; (2) for every θ ∈ I*, there exists a sufficiently small number ε^ > 0 such that ([θ, θ + ε^) ∪ [θ + h, θ + h + ε^)) ∩ I ⊂ I*; and (3) by (1.2), (1.3), and (2.5), the derivatives x˙0(·), ψ˙0(·) are continuous or continuous from the right at every θ ∈ I*. These properties are important for our further reasoning, and we call them the properties of the set I*.

We require that the variation δu(·) = (δ0p(·), δq(·))^T additionally satisfy the following conditions:


where ε* = min{ε0, ε^}, and ε0, ε^ were defined above.

We pass from the variation δu(t) = (δ0p(t), δq(t))^T, t ∈ I1, satisfying (4.2), to a new variation δ1u(t) = (δ1p(t), δq(t))^T, t ∈ I1, where




We transform the variation of the trajectory as well: in place of δx(t), t ∈ I, consider the function δ1x(t), t ∈ I:




Since assumptions (A3) and (A4) are fulfilled, by virtue of the properties of the set I* we easily find that the function δ1x(t), t ∈ I, is continuous and δ1x˙(·) ∈ C˜(I, R^n).

By direct differentiation, allowing for (A3), (A4), (2.11), (4.3), and (4.4), we obtain from (4.5) that δ1x(t), t ∈ I, is the solution of the system




Now let us write down the second variation (2.12) in terms of the new variables. By (4.4) and (4.5), we have δx(t1) = δ1x(t1). Using this property and (4.2)–(4.6), for any ε ∈ (0, ε*), the second variation (2.12), after simple reasoning, takes the new form




In the obtained representation, taking into account (A3), (A4), (4.2), (4.3), (4.7), (4.8), (4.13), (4.14), and the properties of the set I*, we transform Δ3 and Δ4 by integration by parts. Then, we have


where g1[μ](·), μ ∈ {p, p˜}, is defined by (4.9),


Substituting these relations into (4.10) and performing elementary transformations using (4.11) and (4.12), we arrive at the validity of the following statement.

Proposition 4.1. Let assumptions (A1), (A3), (A4) and condition (2.6) be fulfilled. Also, let the functions g0[μ](·), g1[μ](·), Q0[μ](·) be defined by (4.6), (4.9), and (4.15), respectively, and let δ1x(t), t ∈ I, be the solution of the system (4.7), (4.8). Then, along the singular control u0(·) satisfying condition (2.9), and on the variations δu(t) = (δ0p(t), δq(t))^T, t ∈ I1, satisfying (4.2) and (4.3), the following representation (first-order transformation) is valid:




where ε* was defined above (see (4.2)),


4.2. Higher-order transformation

Let (u0(·), x0(·)) be a process, where u0(·) is a singular control satisfying condition (2.9), and let assumptions (A1), (A5), and (A6) be fulfilled. We introduce the matrix functions calculated along the process (u0(·), x0(·)) and determined by the following recurrent formulas:


Furthermore, similar to (4.15), (4.20), (4.21), and (3.10), consider the functions


where λ(·, ·) and Ψ(·) are determined by (3.4) and (3.9), respectively.

Similarly to I*, when assumption (A6) is fulfilled, we introduce the set I**:

I** := I**(u0(·)) = {θ ∈ [t1 − h, t1): the admissible control u0(·) is sufficiently smooth or sufficiently smooth from the right at the points θ and θ − h} ∪ {θ ∈ [t0, t1 − h): the admissible control u0(·) is sufficiently smooth or sufficiently smooth from the right at the points θ and θ ± h}. (4.28)

The following obvious properties hold: (1) I \ I** is a finite set, t1 ∉ I**, and I** ⊂ I*; (2) for every θ ∈ I**, there exists a sufficiently small number ε˜ > 0 such that ([θ, θ + ε˜) ∪ [θ + h, θ + h + ε˜)) ∩ I ⊂ I**; furthermore, (3) by (A5), (A6), (1.2), (1.3), and (2.5), the functions x0(·) and ψ0(·) are continuous and sufficiently smooth or sufficiently smooth from the right at every point θ ∈ I**. These properties are important for the subsequent reasoning, and we call them the properties of the set I**.

Let us consider a variation δu(t) = (δ0p(t), δq(t))^T, t ∈ I1, that additionally satisfies the following conditions:




θ ∈ I**, ε ∈ (0, ε**), ε** = min{ε0, ε^, ε˜} (ε0, ε^, ε˜ were defined above).

According to (4.30), we have


The following statement is valid.

Proposition 4.2. Let assumptions (A1), (A5), (A6) and condition (2.6) be fulfilled. Furthermore, let the functions gi[μ](·), Gi[μ](·), Pi[p,q](·), Pi[p˜,q˜](·), Qi[μ](·), and Li[μ](·), where μ ∈ {p, p˜}, i = 0, 1, ..., be defined by (4.22)–(4.26), and let the set I** be defined by (4.28). Then, along the singular control u0(·) satisfying condition (2.9), and on the variations δu(t) = (δ0p(t), δq(t))^T, t ∈ I1, satisfying (4.29) and (4.30), the following representation (k-th order transformation, k ∈ {1, 2, ...}) is valid:




where θ ∈ I**, ε ∈ (0, ε**) (the number ε** was defined above), and δkx(t), t ∈ I, is the solution of the system


Proof. We prove Proposition 4.2 by induction. For k = 1, Proposition 4.2 was completely proved in Section 4.1 (see Proposition 4.1). Assume that Proposition 4.2 is valid for all cases up to (k − 1) inclusive (k ≥ 2). We prove the validity of representation (4.32) for the case k. Let the variation δu(t) = (δ0p(t), δq(t))^T, t ∈ I1, satisfy conditions (4.29) and (4.30). Then, by assumption, the following representation is valid:




where Gi[μ](·), Pi[p,q](·), Pi[p˜,q˜](·), Qi[μ](·), Li[μ](·), μ ∈ {p, p˜}, i = 0, 1, ..., are defined by (4.23)–(4.26), and δk−1x(t), t ∈ I, is the solution of the system:


We apply the modified variant of the variation transformation method [13] to the system for δk−1x(t), t ∈ I, and to representation (4.36). Following the technique of the previous subsection (see Section 4.1), we introduce a new variation in the following way:


According to (4.22), (4.30), (4.31), and (4.39), from (4.40) by direct differentiation we get the system (4.35) for δkx(t), t ∈ I. Furthermore, since θ ∈ I**, by (4.40) we get δkx(t1) = δk−1x(t1). Taking this equality into account and using (4.29), (4.30), and (4.40) in (4.37), we transform representation (4.36) to the new variables δkp(·), δq(·), δkx(·). Then,


where Δ22S(·) is determined by formula (4.38), while Δi1, i = 1, 2, 3, are calculated by (4.22), (4.29), (4.30), (4.35), and (2.6) in the following way:




Taking into account (A5), (A6), (4.29), (4.30), (4.35), and the properties of the set I**, we calculate Δ12*, Δ12**. Applying integration by parts, we have


First, we substitute the last expressions for Δ12*, Δ12** into (4.43), and then (4.42)–(4.44) into (4.41). Then, by (4.23)–(4.26), (4.33), (4.34), and (4.38), it is easy to obtain representation (4.32). Consequently, the statement holds for k. This completes the proof of Proposition 4.2.


5. Optimality conditions

Based on Propositions 3.1, 4.1, and 4.2, we prove the following theorem.

Theorem 5.1. Let conditions (A1), (A5), and (A6) be fulfilled, and let the matrix functions Pi[p,q](·), Pi[p˜,q˜](·), Qi[μ](·), Li[μ](·), Mi[p,p˜](·), μ ∈ {p, p˜}, i = 0, 1, ..., be defined as in (4.24)–(4.27). Let also the set I** be defined as in (4.28), and let the following equalities be fulfilled along the singular (in the classical sense) control u0(·):


where χ(·) is the characteristic function of the set [t0, t1 − h).

Then, for the optimality of the admissible control u0(·), it is necessary that the relations


be fulfilled for all θ ∈ I**, ξ ∈ R^r0, and η ∈ R^r1.

Proof. Let u0(·) be an optimal control. We prove the theorem by induction. Let k = 0, that is, i = 0. Then, according to (4.24) and (2.10), we obtain the proof of optimality condition (5.2) for k = 0. The proof of optimality condition (5.3) for k = 0 follows directly from (3.11), allowing for (2.1) (see Proposition 3.1). Now, based on Proposition 4.1, we prove optimality conditions (5.4) and (5.5) for k = 0.

We first prove the validity of (5.4) for k=0.

Suppose that


where i, j (i ≠ j) are arbitrary fixed elements of the set {1, 2, ..., r0}, and δ0pk(·) is the k-th coordinate of the vector δ0p(·); α, β ∈ R and θ ∈ I** are arbitrary fixed points, and the functions l1(s) = s, l2(s) = (3s² − 1)/2, s ∈ [−1, 1], are the Legendre polynomials.
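The usefulness of Legendre polynomials as variations comes from their vanishing moments: lk is orthogonal on [−1, 1] to every polynomial of degree below k (in particular, l1 and l2 have zero mean), which is what makes the lower-order cross terms in the expansions drop out. A quick numerical check of the polynomials used here (illustrative only; the helper name is ours):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

l1 = Legendre.basis(1)        # l1(s) = s
l2 = Legendre.basis(2)        # l2(s) = (3 s^2 - 1) / 2

def inner(p, q):
    """Exact L2 inner product of two polynomials on [-1, 1] via the antiderivative."""
    r = (p * q).integ()
    return r(1.0) - r(-1.0)
```

Here inner(l1, l2) = 0 and inner(lk, 1) = 0 for k ≥ 1, which is exactly the orthogonality used below.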

It is clear that the variation

, defined by (5.6), satisfies condition (4.2) and, according to (5.6), the function δ1p(t), t ∈ I1, defined by (4.3) is of order ε, while the solution δ1x(t), t ∈ I, of the system (4.7), (4.8) is of order ε². Also, according to (4.15), it is easy to see that for every t ∈ I the matrix Q0[p](t) + χ(t)Q0[p˜](t + h) is skew-symmetric. Therefore, by Proposition 4.1 and condition (2.6), considering (2.1), (4.3), (4.17), (4.18), and the properties of the set I**, along the singular optimal control u0(·) we have

where qij(0)(θ), qji(0)(θ) are the elements of the matrix Q0[p](θ) + χ(θ)Q0[p˜](θ + h).

Then, from the arbitrariness of α, β ∈ R, θ ∈ I**, and i, j ∈ {1, 2, ..., r0}, i ≠ j, we conclude that the skew-symmetric matrix Q0[p](θ) + χ(θ)Q0[p˜](θ + h) is also symmetric. Consequently, for every t ∈ I** we have Q0[p](t) + χ(t)Q0[p˜](t + h) = 0. This completes the proof of optimality condition (5.4) for k = 0.
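This last step rests on the elementary fact that the only matrix that is simultaneously skew-symmetric and symmetric is the zero matrix:

```latex
Q^{T} = Q \ \text{and}\ Q^{T} = -Q \;\Longrightarrow\; Q = -Q \;\Longrightarrow\; 2Q = 0 \;\Longrightarrow\; Q = 0 .
```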

To prove statement (5.5) for k = 0, under conditions (4.2) and (4.3), we write the vector components of the variation δu(·) = (δ0p(·), δq(·))^T in the following form:


where l1(τ) = τ, τ ∈ [−1, 1], is a Legendre polynomial, and ξ ∈ R^r0, η ∈ R^r1, θ ∈ I** are arbitrary fixed points.

According to (4.2), (4.3), (4.7), (4.8), and (5.7), it is easy to prove that


In view of the last relations and the condition (5.4) proved above (for the case k = 0), taking into account the properties of the set I** and relations (2.1), (4.3), (4.17), (4.18), and (5.7), from (4.16) we obtain the following relation along the singular optimal control u0(t), t ∈ I1:


Hence, taking into account the arbitrariness of θ ∈ I**, ξ ∈ R^r0, and η ∈ R^r1, we easily get the validity of optimality condition (5.5) for k = 0.

Now suppose that all statements of Theorem 5.1 are valid for i = 1, 2, ..., k − 1 (k ≥ 2) as well. We prove statements (5.2)–(5.5) for i = k. By assumption, the inequality in (5.5) for the case k − 1 is valid for all θ ∈ I**, ξ ∈ R^r0, and η ∈ R^r1. Hence, taking into account (5.1), we have

From this inequality, we easily find that Pk[p,q](θ) + χ(θ)Pk[p˜,q˜](θ + h) = 0; that is, optimality condition (5.2) is valid for i = k.

Now we prove the validity of condition (5.3) for i = k. In formula (4.32), we put


where lk(τ), τ ∈ [−1, 1], is the k-th Legendre polynomial, ε ∈ (0, ε**), where the number ε** is defined above (see (4.30)), and θ ∈ I**, ξ ∈ R^r0.

Obviously, conditions (4.29) and (4.30) are fulfilled for variation (5.8).

Since the conditions Li[p](t) + χ(t)Li[p˜](t) = 0, t ∈ I**, i = 0, ..., k, and Qi[p](t) + χ(t)Qi[p˜](t + h) = 0, t ∈ I**, i = 0, ..., k − 1, are fulfilled, then, by (4.33), (4.34), and (5.8), formula (4.32) takes the form:




Here, by (4.31), (4.35), (5.8), and the Cauchy formula, δkp(·) and δkx(·) are determined as follows:


where λ(·, ·) is the solution of the system (3.4).

Considering (5.12) in (5.13), we calculate δkx(t), t ∈ I. Since θ ∈ I**, by the properties of the set I**, we have




Since lk(τ), τ ∈ [−1, 1], is the k-th Legendre polynomial, it is easy to get


Taking into account (5.12)–(5.16) and the fact that λ(s, t) = 0 for s > t, we calculate each term of (5.9) separately. As a result, after simple reasoning, we get


Substitute (5.15)–(5.17) in (5.9). Then by (3.9), (4.27), and (5.14), we have


Hence, taking into account the inequality in (2.1), it is easy to complete the proof of optimality condition (5.3) for i=k.

Continuing the proof of Theorem 5.1, we also prove the validity of optimality condition (5.4) for i = k. Based on Proposition 4.2, let us consider the (k + 1)-th order transformation. Since the equalities


hold, taking into account (2.6), we have


where Δ12S(u0(·); δk+1p, δq, δk+1x, ε) is determined similarly to (4.33) by replacing the index k with k + 1, and δk+1x(t) is the solution of the system (similar to (4.35))


We choose the variation δu(t) = (δ0p(t), δq(t))^T, t ∈ I1, in the following way:



where lz(τ), τ ∈ [−1, 1], z ∈ {k + 1, k + 2}, are Legendre polynomials, and α, β ∈ R, θ ∈ I**, ε ∈ (0, ε**).

Obviously, the variation δu(·) = (δ0p(·), δq(·))^T defined in (5.20) satisfies conditions (4.29), (4.30) for k + 1. Taking into account (5.20), by means of (4.30), (4.31), (4.33), and (5.19), it is easy to calculate


By (5.20) and (5.21), from (5.18) we get


where Qk[μ](·), μ ∈ {p, p˜}, is determined in (4.25).

Hence, taking into account the skew-symmetry of the matrix Qk[p](t) + χ(t)Qk[p˜](t + h), t ∈ I, the properties of the set I**, and also (2.1), (4.30), and (5.20), we have


where θ ∈ I**, a = 1/((k + 1)!·2^(k+1)), b = 1/((k + 2)!·2^(k+2)), and qij(k)(θ), qji(k)(θ) are the elements of the matrix Qk[p](θ) + χ(θ)Qk[p˜](θ + h).

From the last inequality, by the arbitrariness of θ ∈ I**, α, β ∈ R, and i, j ∈ {1, 2, ..., r0} (i ≠ j), it follows that for each θ ∈ I** the skew-symmetric matrix Qk[p](θ) + χ(θ)Qk[p˜](θ + h) is also symmetric. Consequently, Qk[p](θ) + χ(θ)Qk[p˜](θ + h) = 0; that is, condition (5.4) is proved for i = k.

Finally, let us prove optimality condition (5.5). We choose the variation δu(t) = (δ0p(t), δq(t))^T, t ∈ I1, in the following way:


where lk+1(τ), τ ∈ [−1, 1], is the (k + 1)-th Legendre polynomial, ξ ∈ R^r0, η ∈ R^r1, θ ∈ I**, ε ∈ (0, ε**).

Obviously, the variation δu(t) = (δ0p(t), δq(t))^T, t ∈ I1, defined in (5.22) satisfies conditions (4.29) and (4.30) for i = 1, 2, ..., k + 1.

By (4.30), (4.31), (5.12), (5.19), (5.22), and (5.23), the following relations hold:


Taking into account (5.23)–(5.25) and the validity of the equality Qk[p](t) + χ(t)Qk[p˜](t + h) = 0, t ∈ I** (see (5.4)), from (5.18) we get


From this expansion, taking (2.1) into account, inequality (5.5) follows.

Therefore, Theorem 5.1 is completely proved.

Corollary 5.1. Let all the conditions of Theorem 5.1 be fulfilled. Let, in addition, the following equalities hold:


Then, for optimality of the singular control u0(), it is necessary that the relations


be fulfilled for all θ ∈ I**, ξ ∈ R^r0.

The proof of the corollary follows immediately from Theorem 5.1.

Remark 5.1. As can be seen (see Proposition 3.1 and (4.6), (4.15), and (4.24)), for the validity of optimality conditions (5.2)–(5.4) for k = 0, it is sufficient that assumptions (A1) and (A2) be fulfilled.

Remark 5.2. It is clear (see Proposition 4.1) that for the validity of optimality condition (5.5) for k = 0, it is sufficient that assumptions (A1), (A3), and (A4) be fulfilled.

Remark 5.3. If in Definition 2.1 the singular arc is some interval (t¯, t^) ⊂ I, then, similarly to the proof of Theorem 5.1, we can easily prove that conditions (5.2)–(5.5) are valid as optimality conditions for all θ ∈ (t¯, t^) ∩ I** and ξ ∈ R^r0, η ∈ R^r1.


6. Conclusion

As can be seen, system (1.2), (1.3) is not the most general among systems with retarded control. We have chosen it only for definiteness, to demonstrate the essentials of our method. Nevertheless, optimality conditions (5.2)–(5.5) can be generalized to more general systems with retarded control.

It should be noted that (1) optimality conditions (5.4) and (5.5) for k = 0 are, in fact, analogs of the equality-type conditions and of the Kelley [12] condition, while optimality condition (5.3) is the analog of the Gabasov [11] condition for the considered problem (1.1)–(1.3); (2) optimality condition (5.5) for k = 1 is the analog of the Kopp-Moyer [33] condition. Conditions (5.3)–(5.5) were obtained in [10] only for singular controls with a complete degree of degeneracy, that is, for the case r1 = 0 (see Definition 2.1).

We also note that (1) the analog of the Kelley condition and the equality-type condition were obtained in [24] by another method for systems with a retarded state; (2) optimality conditions of type (5.2)–(5.5) for systems with a retarded state were obtained in [31; 32, p. 119]; (3) optimality conditions of type (5.4), (5.5) for systems without retardation were obtained in [23, 26; 27, p. 145; 29, 30, 33, 34, 39–41], etc.

The proof of Theorem 5.1 shows that optimality conditions (5.3)–(5.5) are independent. Also, it is clear that, unlike (5.2), (5.3), and (5.5), optimality condition (5.4) becomes ineffective for r0 = 1 (see Definition 2.1), since a 1 × 1 skew-symmetric matrix is always zero, though it is effective in the general case r0 > 1. To illustrate the rich content of condition (5.4), we consider a concrete example:

Example. x˙1(t) = u2(t) + u1²(t − 1)u3(t − 1), x˙2(t) = u1(t)u2(t),

x˙3(t) = (u1(t) + u2(t))x2(t) + u3²(t) + u3²(t − 1), t ∈ I := [0, 2], xi(0) = 0, ui(t) = 0, t ∈ [−1, 0), |ui| < 2, i = 1, 2, 3, h = 1, φ(x(2)) = x3(2) + (1/2)x1²(2) → min.

Let us check the control u0(t) = (0, 0, 0)^T, t ∈ [−1, 2], for optimality. For this control, according to (2.7), (2.8), (3.9), (3.10), (4.6), (4.9), (4.15), (4.21), and (4.24), we have

xi0(t) = 0, i = 1, 2, 3; ψ10(t) = ψ20(t) = 0, ψ30(t) = −1, t ∈ I; H(ψ0(t), x, u, υ, t) = −(u1 + u2)x2 − u3² − υ3²; Huu(t) := (hij(t)), t ∈ I, where hij(t) = 0 for i, j ∈ {1, 2, 3}, (i, j) ≠ (3, 3), and h33(t) = −2; Hυυ(t + 1) = (h˜ij(t)), t ∈ [0, 1], where h˜ij(t) = 0 for i, j ∈ {1, 2, 3}, (i, j) ≠ (3, 3), and h˜33(t) = −2; Hυυ(t + 1) = 0, t ∈ (1, 2]; g0^T[p](t) = (0 1 0; 1 1 0), t ∈ I, g0^T[p˜](t) = 0, t ∈ I, where p := (u1, u2), p˜ := (υ1, υ2); g1[p](t) = g1[p˜](t) = 0, t ∈ I; Q0[p](t) = (0 2; −2 0), t ∈ I, Q0[p˜](t + 1) = 0, t ∈ I; L1[p](t) = L1[p˜](t + 1) = 0, t ∈ I; P0[p,q](t) = P0[p˜,q˜](t) = 0, P1[p,q](t) = P1[p˜,q˜](t) = 0; M0[p,p](t, t) = (1 1; 1 0), t ∈ I, M0[p,p˜](·) = 0, M0[p˜,p˜](·) = 0; Hqq(t) + Hq˜q˜(t + 1) = −4 for t ∈ [0, 1) and −2 for t ∈ [1, 2], where q = u3, q˜ = υ3.

Hence, we have the following: (1) the admissible control u0(t) = (0, 0, 0)^T, t ∈ [−1, 2], is singular (in the sense of Definition 2.1), and its singularity is delivered by the vector component p = (u1, u2)^T; that is, equality (5.1) is fulfilled only for k = 0; (2) optimality conditions (5.2), (5.3), (5.5) and the results of the papers [1–3, 6, 9, 10] cannot determine whether the control u0(·) is optimal or not. However, optimality condition (5.4) for k = 0 is not fulfilled (Q0[p](t) + χ(t)Q0[p˜](t + 1) = (0 2; −2 0) ≠ 0, t ∈ I); that is, by condition (5.4) (for k = 0), we conclude that the control u0(t) = (0, 0, 0)^T, t ∈ [−1, 2], cannot be optimal.
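The verdict of the example can also be checked numerically. The sketch below (not from the chapter; our reading of the dynamics and all function names are assumptions) Euler-integrates the system and evaluates the cost, first for the singular control and then for a simple two-phase comparison control with u3 ≡ 0, for which every delayed term vanishes:

```python
import numpy as np

def cost(u, dt=1e-3):
    """Euler integration of the example on I = [0, 2] (h = 1, x(0) = 0,
    u(t) = 0 for t in [-1, 0)), assuming the dynamics read
      x1' = u2(t) + u1(t-1)^2 u3(t-1),
      x2' = u1(t) u2(t),
      x3' = (u1(t) + u2(t)) x2(t) + u3(t)^2 + u3(t-1)^2.
    Returns the cost phi = x3(2) + x1(2)^2 / 2."""
    x1 = x2 = x3 = 0.0
    for t in np.arange(0.0, 2.0, dt):
        u1, u2, u3 = u(t)
        v1, v2, v3 = u(t - 1.0) if t >= 1.0 else (0.0, 0.0, 0.0)
        dx1 = u2 + v1 ** 2 * v3
        dx2 = u1 * u2
        dx3 = (u1 + u2) * x2 + u3 ** 2 + v3 ** 2
        x1, x2, x3 = x1 + dt * dx1, x2 + dt * dx2, x3 + dt * dx3
    return x3 + 0.5 * x1 ** 2

u_singular = lambda t: (0.0, 0.0, 0.0)

def u_better(t, d=0.5):
    # build x2 > 0 with u1 = u2 = 1 on [0, d), then drive x3 down with u1 + u2 < 0
    return (1.0, 1.0, 0.0) if t < d else (-1.0, 0.0, 0.0)
```

Under these assumptions, cost(u_singular) = 0 while cost(u_better) is negative, consistent with condition (5.4) ruling out u0.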


  1. 1. Artyunov A, Mardanov M: To theory of maximum principle in delay problems. Differen. Uravn. 1989; 25: 2048–2058.
  2. 2. Halanay A: Optimal controls for systems with time lag. SIAM J. Control, 1968; 6(2). DOI: 10.1137/030606016
  3. 3. Kharatashvili G, Tadumadze T: Nonlinear optimal control systems with variable constructions. Matem. Sb. 1978; 107(149): No 4 (12), 613–633.
  4. 4. Mardanov M: Necessary conditions of optimality in delay and phase restraint systems. Matem. zhametki, 1987; 42(5), 691–702.
  5. 5. Mardanov M: Investigation of optimal delay process with constraints. Elm, Baku; 2009. 192 p.
  6. 6. Matveev A: Optimal control problems with general form delays and phase constraints. Izv. AN SSSR. ser. matem. 5(6):1200–1229. DOI: 10.1070/IM1989v033n03ABEH000855
  7. 7. Tadumadze T: Some problems of quality theory of optimal control. Tbilisi, Izd. TTU; 1983. 128 p.
  8. 8. Vasil’ev F: Conditions of optimality for some classes of systems unsolved with regard to derivative. DAN SSSR, 1969; 184(6), 1267–1270.
  9. 9. Guliyev V: Some issues of optimal control theory in delay argument systems involving constraints. Author’s thesis for cand. of phys. math. sci. Baku, 1985. 25 p.
  10. 10. Mardanov M, Melikov T: Recurrent optimality conditions of singular controls in delay control systems. The Third International Conference “Problems of Cybernetics and Informatics” 6–8 September 2010; Baku, pp. 3–5.
  11. 11. Gabasov R: To theory of necessary conditions of optimality for singular controls. DAN SSSR, 1968; 183(2), 300–302.
  12. 12. Kelley H: Necessary conditions for singular extremals based on the second variation. Raketnaya tekhnika i kosmonavtika, 1964; (8), 26–29.
13. 13. Kelley H, Kopp R, Moyer H: Singular extremals, in "Topics in Optimization" (ed. by Leitmann G), Acad. Press, New York-London, 1967; 63–101.
  14. 14. Akhmedov K, Melikov T, Gasanov K: On optimality of singular controls in delay systems. Dokl. AN Az SSR, 1975, 31(7), 7–10.
15. 15. Mansimov K: Multipoint necessary condition of optimality for singular in the classic sense controls in delay control systems. Diff. Uravn. 1985; 21(3), 527–530.
  16. 16. Mardanov M: On condition of optimality for singular controls. DAN SSSR, 1980; 253(4), 815–818.
  17. 17. Melikov T: Recurrent conditions of optimality for singular controls in delay systems. Dokl. RAN, 1992; 322(5), 843–846.
  18. 18. Melikov T: An Analogue of the Kelley condition for optimal systems with aftereffect of neutral type. Zhurnal Vychisl. Mat. i Mat. Fiz., 1998; 38(9), 1490–1499.
  19. 19. Melikov T: Optimality of singular controls in systems with aftereffect of neutral type. Zhurnal Vychisl. Mat. i Mat. Fiz., 2001; 41(9), 1332–1343.
  20. 20. Parayev Y.I. Solving a problem of optimal product, storage and sale of goods. Izv. RAN. Teor. i sistemy upravlenia. 2000; (2), 103–107.
  21. 21. Sansone J: Ordinary differential equations. Inostrannaya literatura. Moscow, Il, 1954; 2.
  22. 22. Rozonoer L: L.S. Pontryagin’s maximum principle in theory of optimal systems, III. Avtomatika i telemekhanika, 1959; 20(12), 1561–1578.
  23. 23. Agrachev A, Gamkrelidze R: Second order optimality principle for contagion problem. Matem. sb. 1976; 100(142), No 4(8), 610–643.
  24. 24. Ashepkov L, Eppel D: Analog of Kelley condition in optimal delay systems. Diff. Uravn., 1974; 10(4), 591–597.
  25. 25. Barbashina E: Kopp-Moyer type necessary conditions for Goursat-Darboux systems. 1989; 25(6), 1045–1047.
  26. 26. Bolonkin A: Special extremals in optimal control problems. Izv. AN SSSR. OTN. Tekhnicheskaya kibernetika, 1969; 2, 187–198.
  27. 27. Gabasov R, Kirillova F: Singular optimal controls. Nauka, Moscow; 1973. 256 p.
  28. 28. Gasanov K, Yusifov B: Inductive analysis of singular controls in delay systems. Avtomatika i telemekhanika, 1982; (6), 37–42.
  29. 29. Goh B: Necessary conditions for singular extremals involving multiple control variables. SIAM J. Control, 1966; 4(4), 716–731.
  30. 30. Gorokhovik V, Gorokhovik S: Different forms of Legendre-Clebsch generalized conditions. Avtomatika i telemekhanika, 1982; (7), 28–33.
  31. 31. Jacobson D: A new necessary condition of optimality for singular control problems. SIAM J. Control, 1969; 7(4), 578–595. DOI: 10.1137/0307042.
  32. 32. Kaganovich S: On inductive method for studying singular extremals. Avtomatika i telemekhanika, 1976; (11), 28–39.
  33. 33. Kopp R, Moyer H: Necessary condition of optimality for singular extremals. Raketnaya tekhnika i kosmonavtika. 1965; 3(8), 84–91.
  34. 34. Krener A: The high order maximal principle and its application to singular extremals. SIAM J. Control Optimization, 1977; 15(2), 256–293.
  35. 35. Mardanov M: Second order necessary condition of optimality in delay control systems. UMN, 1988; 43, 4(262), 213–214.
  36. 36. Melikov T: The necessary conditions for high-order optimality. Zhurnal Vychisl. Mat. i Mat. Fiz., 1995; 35(7), 1134–1138.
  37. 37. Melikov T: Singular controls in aftereffect systems. Baku, “Elm” 2002; 188 p.
  38. 38. Melikov T: Recurrent conditions of optimality for singular controls in Goursat-Darboux systems. Dokl. Akad. Nauk. Az. SSR, 1990; 14(8), 6–10.
  39. 39. Srochko V: Investigation of second variation on singular controls. Diff. Uravn., 1974; 10(6), 1050–1066.
  40. 40. Srochko V: Method of variation transformation in theory of singular controls. Differen. i integ. uravnenia., Irkutsk: State Univ., 1973; (2), 70–80.
  41. 41. Vopnyarskiy I: Theorems of the existence of optimal control in Bolts problem, some its applications and necessary conditions of optimality of sliding and singular modes. Zhurn. Vych. math. i math-phys., 1967; 7(2), 259–283.
  42. 42. Kudryavtsev L: Course of mathematical analysis. Vyshaya shkola, Moscow; 1981; (2), 2.
  43. 43. Mardanov M: Legendre’s necessary conditions in delay control optimization problems. DAN SSSR, 1987. 297(4), 795–797.
