Open access peer-reviewed chapter

Zhang Neural Networks for Online Solution of Time-Varying Linear Inequalities

Written By

Dongsheng Guo, Laicheng Yan and Yunong Zhang

Submitted: 16 October 2015 Reviewed: 29 February 2016 Published: 19 October 2016

DOI: 10.5772/62732

From the Edited Volume

Artificial Neural Networks - Models and Applications

Edited by Joao Luis G. Rosa


Abstract

In this chapter, a special type of recurrent neural networks termed “Zhang neural network” (ZNN) is presented and studied for online solution of time-varying linear (matrix-vector and matrix) inequalities. Specifically, focusing on solving the time-varying linear matrix-vector inequality (LMVI), we develop and investigate two different ZNN models based on two different Zhang functions (ZFs). Then, being an extension, by defining another two different ZFs, another two ZNN models are developed and investigated to solve the time-varying linear matrix inequality (LMI). For such ZNN models, theoretical results and analyses are presented as well to show their computational performances. Simulation results with two illustrative examples further substantiate the efficacy of the presented ZNN models for time-varying LMVI and LMI solving.

Keywords

  • Zhang neural network (ZNN)
  • Zhang function (ZF)
  • time-varying linear inequalities
  • design formulas
  • theoretical results

1. Introduction

In recent years, linear inequalities have played an increasingly important role in numerous fields of science and engineering [1–9], such as obstacle avoidance of redundant robots [1, 2], robustness analysis of neural networks [3] and stability analysis of fuzzy control systems [5]. Linear inequalities, including the linear matrix-vector inequality (LMVI) and the linear matrix inequality (LMI), are now regarded as a powerful formulation and design technique for solving a variety of problems [7–11]. Due to their important roles, many numerical algorithms and neural networks have been presented and studied for online solution of linear inequalities [7–17]. For example, an iterative method was presented by Yang et al. for linear inequalities solving [12]. In [13], three continuous-time neural networks were developed by Cichocki and Bargiela to solve systems of linear inequalities. Besides, a gradient-based neural network and a simplified neural network were investigated in [10] and [11], respectively, for solving a class of LMI problems (e.g., Lyapunov matrix inequalities and algebraic Riccati matrix inequalities).

It is worth pointing out that most of the reported approaches are designed intrinsically to solve time-invariant (or say, static) linear inequalities. In view of the fact that many systems in science and engineering applications are time-varying, the resultant linear inequalities may be time-varying ones (i.e., their coefficients are time-varying). Generally speaking, to solve a time-varying problem, based on the assumption of short-time invariance, such a problem can be treated as a time-invariant problem within a small time period [8]. The corresponding approaches (e.g., numerical algorithms and neural networks) are thus designed for solving the problem at each single time instant. Note that, in this common way of solving a time-varying problem, the time-derivative information (or say, the change trend) of the time-varying coefficients is not involved. Due to the lack of consideration of such important information, the aforementioned approaches may be less effective when they are exploited directly to solve time-varying problems [7–9, 17–19].

Aiming at solving time-varying problems (e.g., time-varying matrix inversion and time-varying quadratic programming), a special type of recurrent neural networks termed Zhang neural network (ZNN) has been formally proposed by Zhang et al. since March 2001 [7–9, 17–21]. According to Zhang et al.'s design method, the design of a ZNN is based on an indefinite Zhang function (ZF), with the word "indefinite" meaning that such a ZF can be positive, zero, negative or even lower-unbounded. By methodologically exploiting the time-derivative information of the time-varying coefficients involved in the problems, the resultant ZNN models can solve time-varying problems effectively and efficiently (in terms of avoiding the lagging errors generated by the conventional approaches) [18, 19]. For better understanding and to lay a basis for further investigation, the concepts of ZNN and ZF [18] are presented as follows.

Concept 1. Being a special type of recurrent neural networks, Zhang neural network (ZNN) has been developed and studied since 2001. It originates from the research of Hopfield-type neural networks and is a systematic approach for time-varying problems solving. Such a ZNN is different from the conventional gradient neural network(s) in terms of the problem to be solved, indefinite error function, exponent-type design formula, dynamic equation, and the utilization of time-derivative information.

Concept 2. Zhang function (ZF) is the design basis of ZNN. It differs from the common error/energy functions in the study of conventional approaches. Specifically, compared with the conventional norm-based scalar-valued positive or at least lower-bounded energy function, ZF can be bounded, unbounded or even lower-unbounded (in a word, indefinite). Besides, corresponding to a vector- or matrix-valued problem to be solved, ZF can be vector- or matrix-valued to monitor the solving process fully.

In this chapter, focusing on time-varying linear (matrix-vector and matrix) inequalities solving, we present four different ZNN models based on four different ZFs. Specifically, by defining the first two different ZFs, the corresponding two ZNN models are developed and investigated for solving the time-varying LMVI. Then, being an extension, by defining another two different ZFs, another two ZNN models are developed and investigated to solve the time-varying LMI. For such ZNN models, theoretical results and analyses are also presented to show their computational performances. Simulation results with two illustrative examples further substantiate the efficacy of the presented ZNN models for time-varying LMVI and LMI solving.

2. Preliminaries

As mentioned in Concept 2, the ZF is the design basis for deriving ZNN models to solve the time-varying LMVI and LMI. Thus, for presentation convenience, in this chapter, the ZF is denoted by E(t), with Ė(t) being the time derivative of E(t). Based on the ZF, the design procedure of a ZNN model for time-varying LMVI/LMI solving is presented as follows [18, 19].

  1. Firstly, an indefinite ZF is defined as the error-monitoring function to monitor the solving process of time-varying LMVI/LMI.

  2. Secondly, to force the ZF E(t) to converge to zero, its time derivative Ė(t) is chosen via the ZNN design formula (or its variant).

  3. Finally, by expanding the ZNN design formula, the dynamic equation of a ZNN model is thus established for time-varying LMVI/LMI solving.
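As a minimal illustration of this three-step procedure, the sketch below applies it to a scalar time-varying equation a(t)x(t) = b(t) of our own choosing (a hypothetical example with a linear activation function and simple forward-Euler integration; all coefficient choices here are assumptions, not taken from the chapter):

```python
import math

# Step 1: define the ZF (error-monitoring function) e(t) = a(t)x(t) - b(t).
# Step 2: impose the ZNN design formula de/dt = -gamma * e (linear activation).
# Step 3: expand it into the dynamics a(t) xdot = -adot(t) x + bdot(t) - gamma * e,
#         and integrate with the forward-Euler rule.

a    = lambda t: 2.0 + math.sin(t)      # time-varying coefficient (never zero)
adot = lambda t: math.cos(t)            # its time derivative
b    = lambda t: math.cos(t)            # time-varying right-hand side
bdot = lambda t: -math.sin(t)

gamma, dt, T = 10.0, 1e-4, 2.0
x, t = 0.0, 0.0                         # arbitrary initial state
for _ in range(int(T / dt)):
    e = a(t) * x - b(t)                 # ZF value at the current time instant
    xdot = (-adot(t) * x + bdot(t) - gamma * e) / a(t)
    x += dt * xdot
    t += dt

residual = abs(a(t) * x - b(t))
print(residual)                         # tiny: x(t) tracks b(t)/a(t)
```

Here the error decays like exp(−γt), so after two seconds the residual is far below the integration error; the same three steps, in matrix form, produce the ZNN models derived in this chapter.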

In order to derive different ZNN models to solve the time-varying LMVI and LMI, the following two design formulas (being an important part of the above ZNN design procedure) are exploited in this chapter [7–9, 17–21]:

$$\dot{E}(t)=-\gamma\,\Phi(E(t)),\tag{1}$$
$$\dot{E}(t)=-\gamma\,\mathrm{SGN}(E_0)\circ\Phi(E(t)),\tag{2}$$

where γ > 0 ∈ ℝ, being the reciprocal of a capacitance parameter, is used to scale the convergence rate of the solution, and Φ(·) denotes the activation-function array. Note that, in general, the design parameter γ should be set as large as the hardware would permit, or selected appropriately for simulation purposes [22]. In addition, the function f(·), being a processing element of Φ(·), can be any monotonically increasing odd activation function, e.g., the linear, power-sigmoid and hyperbolic-sine activation functions [19, 23]. Furthermore, E₀ = E(t = 0) denotes the initial error, and the unipolar signum function sgn(·), being an element of SGN(·), is defined as

$$\operatorname{sgn}(c)=\begin{cases}1, & \text{if } c>0,\\ 0, & \text{if } c\leq 0.\end{cases}$$

Besides, the multiplication operator ∘ is the Hadamard product [24] and is defined as follows:

$$U\circ V=\begin{bmatrix}u_{11}v_{11} & u_{12}v_{12} & \cdots & u_{1n}v_{1n}\\ u_{21}v_{21} & u_{22}v_{22} & \cdots & u_{2n}v_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ u_{m1}v_{m1} & u_{m2}v_{m2} & \cdots & u_{mn}v_{mn}\end{bmatrix}\in\mathbb{R}^{m\times n}.$$

Note that, as for the presented design formulas (1) and (2), the former is the original ZNN design formula proposed by Zhang et al. to solve the time-varying Sylvester equation [20], while the latter is the variant of such a ZNN design formula constructed elaborately for time-varying linear inequalities solving [9]. Thus, for presentation convenience and better understanding, (1) is called the original design formula, while (2) is called the variant design formula for time-varying LMVI and LMI solving in this chapter.
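In code, the two design formulas amount to two rules for choosing Ė(t). A minimal sketch follows (with the hyperbolic-sine function standing in as one admissible monotonically increasing odd activation element; the function names below are our own):

```python
import numpy as np

def phi(E):
    """Activation-function array Phi(.): element-wise, monotonically increasing, odd."""
    return np.sinh(E)  # hyperbolic-sine choice; linear or power-sigmoid also qualify

def unipolar_sgn(E0):
    """Unipolar signum array SGN(E0): 1 where an entry is positive, 0 otherwise."""
    return (E0 > 0).astype(float)

def formula_1(E, gamma):
    """Original ZNN design formula (1): dE/dt = -gamma * Phi(E)."""
    return -gamma * phi(E)

def formula_2(E, E0, gamma):
    """Variant design formula (2): dE/dt = -gamma * SGN(E0) o Phi(E),
    with o the Hadamard (element-wise) product."""
    return -gamma * unipolar_sgn(E0) * phi(E)
```

Components whose initial error is non-positive are masked out by SGN(E₀), so formula (2) leaves them untouched, which is exactly the behavior analyzed in Remark 1.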

Remark 1. For the variant design formula (2), when the initial error E₀ > 0, it reduces to Ė(t) = −γΦ(E(t)), which is exactly the original design formula (1) for various time-varying problems solving [17–21]. In this case, different convergence performances of E(t) can be achieved by choosing different activation function arrays [17–21, 23]. For example, with a linear activation function array used and with E₀ > 0, (2) reduces to Ė(t) = −γE(t). Evidently, its analytical solution is E(t) = exp(−γt)E₀, which means that E(t) is globally and exponentially convergent to zero with rate γ. Following the previous successful research [17–21, 23], superior convergence of E(t) can be achieved by exploiting nonlinear activation functions, e.g., the power-sigmoid and hyperbolic-sine activation functions [23]. In addition, the convergence can be further improved by increasing the value of γ. Therefore, in the case of E₀ > 0, the global and exponential convergence of E(t) is guaranteed. Note that, in the case of E₀ ≤ 0, (2) reduces to Ė(t) = 0, meaning that E(t) = E₀ as time t evolves. In this situation, there is no need to investigate the convergence performance of E(t) under different activation function arrays and different values of γ.
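The two cases of Remark 1 can be checked numerically. The sketch below (our own toy setting: a linear activation function and hand-picked initial errors, all assumptions) integrates formula (2) for one component with E₀ > 0 and one with E₀ ≤ 0:

```python
import math

gamma, dt, T = 5.0, 1e-4, 1.0
e_pos, e_neg = 2.0, -1.5     # initial errors: one positive, one non-positive
for _ in range(int(T / dt)):
    # Variant formula (2) with linear activation: de/dt = -gamma * sgn(e0) * e,
    # where sgn is the unipolar signum of the *initial* error.
    e_pos += dt * (-gamma * 1.0 * e_pos)   # sgn(2.0)  = 1: exponential decay
    e_neg += dt * (-gamma * 0.0 * e_neg)   # sgn(-1.5) = 0: frozen at -1.5

print(e_pos)   # close to the analytical value 2 * exp(-5)
print(e_neg)   # exactly -1.5
```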

According to the presented design formulas (1) and (2), by defining different ZFs (i.e., E(t) with different formulations), different ZNN models are thus developed and investigated for time-varying LMVI and LMI solving.

3. Time-varying linear matrix-vector inequality

In this section, we introduce two different ZFs and develop the resultant ZNN models for time-varying linear matrix-vector inequality (LMVI) solving. Then, theoretical results and analyses are provided to show the computational performances of such two ZNN models.

Specifically, the following problem of time-varying LMVI [7, 9] is considered in this chapter:

$$A(t)x(t)\leq b(t),\tag{3}$$

in which A(t) ∈ ℝⁿ×ⁿ and b(t) ∈ ℝⁿ are a smoothly time-varying matrix and vector, respectively. In addition, x(t) ∈ ℝⁿ is the unknown time-varying vector to be obtained. The objective is to find a feasible solution x(t) such that (3) holds true at any time instant t ≥ 0. Note that, for further discussion, A(t) is assumed to be nonsingular at any time instant t ∈ [0, +∞) in this chapter.

3.1. ZFs and ZNN models

In this subsection, by defining two different ZFs, two different ZNN models are developed and investigated for time-varying LMVI solving.

3.1.1. The first ZF and ZNN model

To monitor and control the process of solving the time-varying LMVI (3), the first ZF is defined as follows [7]:

$$E(t)=A(t)x(t)+\Lambda^{2}(t)-b(t)\in\mathbb{R}^{n},\tag{4}$$

where Λ²(t) = Λ(t) ∘ Λ(t) denotes the elementwise square of the time-varying vector Λ(t) = [λ₁(t), λ₂(t), …, λₙ(t)]ᵀ ∈ ℝⁿ. In view of the fact that Λ²(t) ≥ 0, when E(t) = 0, we have

$$A(t)x(t)-b(t)=-\Lambda^{2}(t)\leq 0.$$

That is to say, solving the time-varying LMVI (3) is equivalent to solving the time-varying equation A(t)x(t) + Λ²(t) − b(t) = 0. For further discussion, the following diagonal matrix D(t) is defined:

$$D(t)=\begin{bmatrix}\lambda_1(t) & 0 & \cdots & 0\\ 0 & \lambda_2(t) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n(t)\end{bmatrix}\in\mathbb{R}^{n\times n},$$

which yields

$$\Lambda^{2}(t)=D(t)\Lambda(t)\quad\text{and}\quad\dot{\Lambda}^{2}(t)=\frac{d\Lambda^{2}(t)}{dt}=2D(t)\dot{\Lambda}(t),$$

with Λ̇(t) being the time derivative of Λ(t).

On the basis of ZF (4), by exploiting the original design formula (1), the dynamic equation of a ZNN model is established as follows:

$$A(t)\dot{x}(t)+2D(t)\dot{\Lambda}(t)=-\dot{A}(t)x(t)+\dot{b}(t)-\gamma\,\Phi\bigl(A(t)x(t)+\Lambda^{2}(t)-b(t)\bigr),\tag{5}$$

where ẋ(t), Ȧ(t) and ḃ(t) are the time derivatives of x(t), A(t) and b(t), respectively. As for (5), it is reformulated as

$$\begin{bmatrix}A(t) & 2D(t)\end{bmatrix}\begin{bmatrix}\dot{x}(t)\\ \dot{\Lambda}(t)\end{bmatrix}=\begin{bmatrix}-\dot{A}(t) & 0\end{bmatrix}\begin{bmatrix}x(t)\\ \Lambda(t)\end{bmatrix}+\dot{b}(t)-\gamma\,\Phi\!\left(\begin{bmatrix}A(t) & D(t)\end{bmatrix}\begin{bmatrix}x(t)\\ \Lambda(t)\end{bmatrix}-b(t)\right).\tag{6}$$

By defining the augmented vector y(t) = [xᵀ(t), Λᵀ(t)]ᵀ ∈ ℝ²ⁿ, (6) is further rewritten as follows:

$$C(t)\dot{y}(t)=P(t)y(t)+\dot{b}(t)-\gamma\,\Phi\bigl(Q(t)y(t)-b(t)\bigr),\tag{7}$$

with ẏ(t) being the time derivative of y(t), and with the augmented matrices defined as below:

$$C(t)=\begin{bmatrix}A(t) & 2D(t)\end{bmatrix}\in\mathbb{R}^{n\times 2n},\quad P(t)=\begin{bmatrix}-\dot{A}(t) & 0\end{bmatrix}\in\mathbb{R}^{n\times 2n}\quad\text{and}\quad Q(t)=\begin{bmatrix}A(t) & D(t)\end{bmatrix}\in\mathbb{R}^{n\times 2n}.$$

In order to make (7) more computable, we reformulate it into the following explicit form:

$$\dot{y}(t)=C^{\dagger}(t)P(t)y(t)+C^{\dagger}(t)\dot{b}(t)-\gamma\,C^{\dagger}(t)\Phi\bigl(Q(t)y(t)-b(t)\bigr),\tag{8}$$

where C†(t) = Cᵀ(t)(C(t)Cᵀ(t))⁻¹ ∈ ℝ²ⁿ×ⁿ denotes the right pseudoinverse of C(t), and the MATLAB routine "pinv" is used to obtain C†(t) at each time instant in the simulations. Therefore, based on ZF (4), ZNN model (8) is obtained for time-varying LMVI solving. Besides, for better understanding and potential hardware implementation, ZNN model (8) is expressed in the ith (with i = 1, 2, …, 2n) neuron form as

$$y_i=\int\sum_{k=1}^{n}c^{\dagger}_{ik}\left(\sum_{j=1}^{m}p_{kj}y_j+\dot{b}_k-\gamma f\!\left(\sum_{j=1}^{m}q_{kj}y_j-b_k\right)\right)dt,$$

where y_i denotes the ith neuron of (8), m = 2n, and f(·) is a processing element of Φ(·). In addition, the time-varying weights c†_{ik}, p_{kj} and q_{kj} denote the ikth element of C†(t), the kjth element of P(t) and the kjth element of Q(t), respectively. Moreover, the time-varying thresholds ḃ_k and b_k denote, respectively, the kth elements of ḃ(t) and b(t). Thus, the neural-network structure of (8) is shown in Figure 1.

Figure 1.

Structure of the neurons in ZNN model (8) for time-varying LMVI (3) solving.
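A compact numerical sketch of ZNN model (8) follows. It uses a small 2-dimensional example of our own design (a linear activation array, forward-Euler integration, and NumPy's `pinv` in place of the MATLAB routine; every coefficient choice here is an assumption for illustration):

```python
import numpy as np

# Assumed time-varying coefficients; A(t) is diagonally dominant, hence nonsingular.
A  = lambda t: np.array([[3 + np.sin(t), np.cos(t) / 2],
                         [np.cos(t) / 2, 3 + np.sin(t)]])
dA = lambda t: np.array([[np.cos(t), -np.sin(t) / 2],
                         [-np.sin(t) / 2, np.cos(t)]])
b  = lambda t: np.array([np.sin(t) + 1, np.cos(t) + 2])
db = lambda t: np.array([np.cos(t), -np.sin(t)])

gamma, dt, T = 10.0, 1e-3, 2.0
y = np.array([0.0, 0.0, 1.0, 1.0])       # augmented state y = [x; Lambda]
t = 0.0
for _ in range(int(T / dt)):
    lam = y[2:]
    D = np.diag(lam)
    C = np.hstack([A(t), 2 * D])         # C(t) = [A(t), 2D(t)]
    P = np.hstack([-dA(t), np.zeros((2, 2))])
    Q = np.hstack([A(t), D])             # Q(t)y(t) = A(t)x(t) + Lambda^2(t)
    Cp = np.linalg.pinv(C)               # right pseudoinverse of C(t)
    # Model (8) with a linear activation array Phi:
    y = y + dt * (Cp @ (P @ y + db(t) - gamma * (Q @ y - b(t))))
    t += dt

residual = A(t) @ y[:2] - b(t)
print(residual)   # every entry should be (numerically) non-positive
```

Starting from an arbitrary y(0), the residual A(t)x(t) − b(t) is driven toward −Λ²(t) ≤ 0, so the recovered x(t) satisfies the inequality.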

3.1.2. The second ZF and ZNN model

Being different from the first ZF (4), the second ZF is defined as follows [9]:

$$E(t)=A(t)x(t)-b(t)\in\mathbb{R}^{n}.\tag{9}$$

On the basis of such a ZF, by exploiting the variant design formula (2), another ZNN model is developed as follows:

$$A(t)\dot{x}(t)=-\dot{A}(t)x(t)+\dot{b}(t)-\gamma\,\mathrm{SGN}(E_0)\circ\Phi\bigl(A(t)x(t)-b(t)\bigr),\tag{10}$$

where the initial error E₀ = E(t = 0) = A(0)x(0) − b(0). Therefore, based on ZF (9), ZNN model (10) is obtained for time-varying LMVI solving. Besides, for better understanding and potential hardware implementation, the block diagram of ZNN model (10) is shown in Figure 2, where I ∈ ℝⁿ×ⁿ denotes the identity matrix.

Figure 2.

Block diagram of ZNN model (10) for time-varying LMVI (3) solving.

3.2. Theoretical results and analyses

In this subsection, theoretical results and analyses of the presented ZNN models (8) and (10) for solving the time-varying LMVI (3) are provided via the following theorems.

Theorem 1. Given a smoothly time-varying nonsingular coefficient matrix A(t) ∈ ℝⁿ×ⁿ and a smoothly time-varying coefficient vector b(t) ∈ ℝⁿ in (3), if a monotonically increasing odd activation function array Φ(·) is used, then ZNN model (8) generates an exact time-varying solution of the time-varying LMVI (3).

Proof: To lay a basis for discussion, we define x*(t) as a theoretical time-varying solution of (3), i.e., A(t)x*(t) ≤ b(t). Then, a time-varying vector Λ*(t) exists such that the following time-varying matrix-vector equation holds:

$$A(t)x^{*}(t)+\Lambda^{*2}(t)=b(t).\tag{11}$$

By differentiating (11) with respect to time t, we have

$$A(t)\dot{x}^{*}(t)+\dot{A}(t)x^{*}(t)+\dot{\Lambda}^{*2}(t)=\dot{b}(t),\tag{12}$$

with ẋ*(t) and Λ̇*²(t) being, respectively, the time derivatives of x*(t) and Λ*²(t). Based on (7), (11) and (12), we further have

$$A(t)\bigl(\dot{x}(t)-\dot{x}^{*}(t)\bigr)+\dot{A}(t)\bigl(x(t)-x^{*}(t)\bigr)+\dot{\Lambda}^{2}(t)-\dot{\Lambda}^{*2}(t)=-\gamma\,\Phi\bigl(A(t)(x(t)-x^{*}(t))+\Lambda^{2}(t)-\Lambda^{*2}(t)\bigr),$$

which is rewritten as

$$\dot{\tilde{E}}(t)=-\gamma\,\Phi(\tilde{E}(t)),\tag{13}$$

where Ẽ(t) = A(t)(x(t) − x*(t)) + Λ²(t) − Λ*²(t) ∈ ℝⁿ, with its time derivative appearing on the left-hand side of (13).

As for (13), it can be written compactly as a set of n decoupled differential equations:

$$\dot{\tilde{e}}_i(t)=-\gamma f(\tilde{e}_i(t)),\tag{14}$$

where i = 1, 2, …, n. To analyze (14), we define a Lyapunov function candidate v_i(t) = ẽ_i²(t)/2 ≥ 0, with its time derivative being

$$\dot{v}_i(t)=\frac{dv_i(t)}{dt}=\tilde{e}_i(t)\dot{\tilde{e}}_i(t)=-\gamma\,\tilde{e}_i(t)f(\tilde{e}_i(t)).$$

Since f(·) is a monotonically increasing odd activation function, i.e., f(−ẽ_i(t)) = −f(ẽ_i(t)), we have

$$\tilde{e}_i(t)f(\tilde{e}_i(t))\begin{cases}>0, & \text{if } \tilde{e}_i(t)\neq 0,\\ =0, & \text{if } \tilde{e}_i(t)=0,\end{cases}$$

which guarantees the negative definiteness of v̇_i(t). That is to say, v̇_i(t) < 0 for ẽ_i(t) ≠ 0, while v̇_i(t) = 0 only for ẽ_i(t) = 0. By Lyapunov theory, ẽ_i(t) converges to zero for any i ∈ {1, 2, …, n}, thereby showing that Ẽ(t) is convergent to zero as well.

Besides, based on (11), we have b(t) = A(t)x*(t) + Λ*²(t). Then, Ẽ(t) is rewritten as Ẽ(t) = A(t)x(t) + Λ²(t) − b(t), which is equivalent to A(t)x(t) − b(t) = −Λ²(t) + Ẽ(t). Note that, as analyzed previously, Ẽ(t) → 0 as t → +∞. Thus, as time evolves,

$$A(t)x(t)-b(t)=-\Lambda^{2}(t)+\tilde{E}(t)\to-\Lambda^{2}(t).$$

Since −Λ²(t) ≤ 0 (i.e., each element is less than or equal to zero), we have A(t)x(t) − b(t) ≤ 0. This implies that x(t) (being the first n elements of y(t) of (8)) converges to a time-varying vector that satisfies the time-varying LMVI (3); i.e., x(t) → x*(t) such that (3) holds true. In summary, the presented ZNN model (8) generates an exact time-varying solution of the time-varying LMVI (3). The proof is thus completed.

Theorem 2. Given a smoothly time-varying nonsingular coefficient matrix A(t) ∈ ℝⁿ×ⁿ and a smoothly time-varying coefficient vector b(t) ∈ ℝⁿ in (3), if a monotonically increasing odd activation function array Φ(·) is used, then ZNN model (10) generates an exact time-varying solution of the time-varying LMVI (3).

Proof: Consider ZNN model (10), which is derived from the variant design formula (2). Thus, there are three cases as follows.

  1. If the randomly generated initial state x(0) ∈ ℝⁿ is outside the initial solution set S(0) of (3), i.e., E₀ > 0 in (10), then, based on Remark 1 and the previous work [9], the global and exponential convergence of the error function E(t) is achieved (or say, E(t) = A(t)x(t) − b(t) → 0 globally and exponentially). This also means that the neural state x(t) of ZNN model (10) is convergent to the theoretical time-varying solution of the matrix-vector equation A(t)x(t) − b(t) = 0. Note that A(t)x(t) − b(t) = 0 (i.e., A(t)x(t) = b(t)) is a special case of A(t)x(t) ≤ b(t). Therefore, ZNN model (10) is effective on solving the time-varying LMVI (3), in terms of x(t) being convergent to the time-varying solution set S(t) of (3).

  2. If x(0) is inside S(0) of (3), i.e., E₀ ≤ 0 in (10), then, based on Remark 1, the error function E(t) remains E₀ as t → +∞. That is, E(t) = E₀ ≤ 0, no matter how time t evolves. In this situation, ZNN model (10) is still effective on solving the time-varying LMVI (3), in terms of its neural state x(t) always being inside S(t) of (3).

  3. If some elements of x(0) are inside S(0) of (3) while the others are outside S(0), i.e., some elements of E₀ are greater than zero while the rest are less than or equal to zero, then (i) the elements of E(t) with positive initial values converge to zero globally and exponentially; and (ii) the rest of the elements of E(t) always remain equal to their initial values, which are less than or equal to zero. In view of the fact that, as time evolves, each element of E(t) = A(t)x(t) − b(t) becomes less than or equal to zero, ZNN model (10) is thus effective on solving the time-varying LMVI (3).

By summarizing the above analyses, the time-varying LMVI (3) is solved effectively via ZNN model (10), in the sense that such a model can generate an exact time-varying solution of (3). The proof is thus completed.

Remark 2. On the basis of two different ZFs (i.e., (4) and (9)), two different ZNN models (i.e., (8) and (10)) are obtained for online solution of the time-varying LMVI (3). Note that the former aims at solving (3) aided with equality conversion (i.e., from inequality to equation) and the original design formula (1), while the latter focuses on solving (3) directly with the aid of the variant design formula (2). The resultant ZNN model (8) is depicted in an explicit dynamics (i.e., ẏ(t) = ⋯), and ZNN model (10) is depicted in an implicit dynamics (i.e., A(t)ẋ(t) = ⋯). As analyzed above and as demonstrated by the simulation results shown in Section 5, these two ZNN models are both effective on solving the time-varying LMVI (3). In summary, two different approaches for time-varying LMVI solving have been discovered and presented in this chapter; i.e., one is based on the variant of the original ZNN design formula, and the other is based on the conversion from inequality to equation. This can be viewed as an important breakthrough on (time-varying or static) inequalities solving [7–9].

4. Time-varying linear matrix inequality

In this section, being an extension, by defining another two different ZFs, another two ZNN models are developed and investigated for time-varying linear matrix inequality (LMI) solving.

Specifically, the following problem of time-varying LMI is considered [9]:

$$A(t)X(t)\leq B(t),\tag{15}$$

where A(t) ∈ ℝᵐ×ᵐ and B(t) ∈ ℝᵐ×ⁿ are smoothly time-varying matrices, and X(t) ∈ ℝᵐ×ⁿ is the unknown matrix to be obtained. Note that (15) is a representative time-varying LMI problem studied here. The design approaches presented in this chapter (more specifically, summarized in Remark 2) can be directly extended to solve other types of time-varying LMIs [8, 10, 11].

4.1. The first ZF and ZNN model

In order to solve the time-varying LMI (15), the first ZF is defined as follows:

$$E(t)=A(t)X(t)+\Lambda^{2}(t)-B(t)\in\mathbb{R}^{m\times n},\tag{16}$$

where Λ²(t) = Λ(t) ∘ Λ(t), with the time-varying matrix Λ(t) being

$$\Lambda(t)=\begin{bmatrix}\lambda_{11}(t) & \lambda_{12}(t) & \cdots & \lambda_{1n}(t)\\ \lambda_{21}(t) & \lambda_{22}(t) & \cdots & \lambda_{2n}(t)\\ \vdots & \vdots & \ddots & \vdots\\ \lambda_{m1}(t) & \lambda_{m2}(t) & \cdots & \lambda_{mn}(t)\end{bmatrix}\in\mathbb{R}^{m\times n}.$$

In addition, for the matrices Λ(t) and Λ²(t), we have

$$\mathrm{vec}\bigl(\Lambda^{2}(t)\bigr)=D(t)\,\mathrm{vec}\bigl(\Lambda(t)\bigr),$$

where the operator vec(·) ∈ ℝᵐⁿ generates a column vector by stacking all the column vectors of a matrix together [8, 18, 19]. In addition, the diagonal matrix D(t) is defined as follows:

$$D(t)=\begin{bmatrix}\tilde{\lambda}_1(t) & 0 & \cdots & 0\\ 0 & \tilde{\lambda}_2(t) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \tilde{\lambda}_n(t)\end{bmatrix}\in\mathbb{R}^{mn\times mn},$$

with the ith (i = 1, …, n) diagonal block being

$$\tilde{\lambda}_i(t)=\begin{bmatrix}\lambda_{1i}(t) & 0 & \cdots & 0\\ 0 & \lambda_{2i}(t) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_{mi}(t)\end{bmatrix}\in\mathbb{R}^{m\times m}.$$

By defining u(t) = vec(X(t)) ∈ ℝᵐⁿ, v(t) = vec(Λ(t)) ∈ ℝᵐⁿ and w(t) = vec(B(t)) ∈ ℝᵐⁿ, ZF (16) is reformulated as E(t) = M(t)u(t) + D(t)v(t) − w(t) ∈ ℝᵐⁿ, where M(t) = I ⊗ A(t) ∈ ℝᵐⁿ×ᵐⁿ, with I ∈ ℝⁿ×ⁿ being the identity matrix and ⊗ denoting the Kronecker product [18, 19]. Thus, on the basis of (16), by exploiting the original design formula (1), we have

$$C(t)\dot{y}(t)=P(t)y(t)+\dot{w}(t)-\gamma\,\Phi\bigl(Q(t)y(t)-w(t)\bigr),\tag{17}$$

where the augmented vector y(t) = [uᵀ(t), vᵀ(t)]ᵀ ∈ ℝ²ᵐⁿ, and ẏ(t) ∈ ℝ²ᵐⁿ and ẇ(t) ∈ ℝᵐⁿ are the time derivatives of y(t) and w(t), respectively. In addition, the augmented matrices are defined as

$$C(t)=\begin{bmatrix}M(t) & 2D(t)\end{bmatrix}\in\mathbb{R}^{mn\times 2mn},\quad P(t)=\begin{bmatrix}-N(t) & 0\end{bmatrix}\in\mathbb{R}^{mn\times 2mn}\quad\text{and}\quad Q(t)=\begin{bmatrix}M(t) & D(t)\end{bmatrix}\in\mathbb{R}^{mn\times 2mn},$$

where N(t) = Ṁ(t) = I ⊗ Ȧ(t) ∈ ℝᵐⁿ×ᵐⁿ, with Ȧ(t) being the time derivative of A(t).

Similarly, to make (17) more computable, we can reformulate (17) as the following explicit form:

$$\dot{y}(t)=C^{\dagger}(t)P(t)y(t)+C^{\dagger}(t)\dot{w}(t)-\gamma\,C^{\dagger}(t)\Phi\bigl(Q(t)y(t)-w(t)\bigr),\tag{18}$$

where C†(t) = Cᵀ(t)(C(t)Cᵀ(t))⁻¹ ∈ ℝ²ᵐⁿ×ᵐⁿ. Therefore, based on ZF (16), ZNN model (18) is obtained for time-varying LMI solving. Note that the neural-network structure of (18) is similar to the one shown in Figure 1 and is thus omitted here. Besides, as for ZNN model (18), we have the following theoretical result, with the related proof being generalized from the proof of Theorem 1 and being left to interested readers to complete as a topic of exercise.
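The vectorization step above rests on the identity vec(A(t)X(t)) = (I ⊗ A(t))vec(X(t)) with column-wise stacking, which can be checked numerically (the sizes and random seed below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 2
A = rng.standard_normal((m, m))
X = rng.standard_normal((m, n))

vec = lambda M: M.flatten(order="F")   # vec(.): stack the columns into one vector
M_mat = np.kron(np.eye(n), A)          # M = I (n-by-n) Kronecker A, of size mn-by-mn

lhs = vec(A @ X)                       # vec of the matrix product
rhs = M_mat @ vec(X)                   # the Kronecker-product reformulation
print(np.allclose(lhs, rhs))           # True
```

The same identity underlies the rewriting of ZF (16) into E(t) = M(t)u(t) + D(t)v(t) − w(t).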

Corollary 1. Given a smoothly time-varying nonsingular coefficient matrix A(t) ∈ ℝᵐ×ᵐ and a smoothly time-varying coefficient matrix B(t) ∈ ℝᵐ×ⁿ in (15), if a monotonically increasing odd activation function array Φ(·) is used, then ZNN model (18) generates an exact time-varying solution of the time-varying LMI (15).

4.2. The second ZF and ZNN model

In this subsection, being different from the first ZF (16), the second ZF is defined as follows:

$$E(t)=A(t)X(t)-B(t)\in\mathbb{R}^{m\times n}.\tag{19}$$

On the basis of such a ZF, by exploiting the variant design formula (2), the following ZNN model for time-varying LMI solving is developed:

$$A(t)\dot{X}(t)=-\dot{A}(t)X(t)+\dot{B}(t)-\gamma\,\mathrm{SGN}(E_0)\circ\Phi\bigl(A(t)X(t)-B(t)\bigr),\tag{20}$$

where the initial error E₀ = E(t = 0) = A(0)X(0) − B(0). Note that, due to its similarity to the block diagram of (10), the block diagram of ZNN model (20) is omitted. Besides, as for ZNN model (20), we have the following theoretical result, of which the proof is generalized from the proof of Theorem 2 (and is also left to interested readers to complete as a topic of exercise).

Corollary 2. Given a smoothly time-varying nonsingular coefficient matrix A(t) ∈ ℝᵐ×ᵐ and a smoothly time-varying coefficient matrix B(t) ∈ ℝᵐ×ⁿ in (15), if a monotonically increasing odd activation function array Φ(·) is used, then ZNN model (20) generates an exact time-varying solution of the time-varying LMI (15).
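A matrix-form sketch of ZNN model (20) is given below (a 2×2 example of our own design with a linear activation array; SGN(E₀) is applied entrywise through the Hadamard product, and all coefficient choices are assumptions):

```python
import numpy as np

A  = lambda t: np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
B  = lambda t: np.array([[np.sin(t) + 1, np.cos(t)], [1.0, np.sin(t) + 2]])
dB = lambda t: np.array([[np.cos(t), -np.sin(t)], [0.0, np.cos(t)]])

gamma, dt, T = 10.0, 5e-5, 1.0
X = np.array([[1.0, 0.0], [0.0, 0.0]])           # gives a mixed-sign E0
mask = (A(0.0) @ X - B(0.0) > 0).astype(float)   # unipolar SGN(E0), entrywise
t = 0.0
for _ in range(int(T / dt)):
    E = A(t) @ X - B(t)
    # Model (20): A(t) Xdot = -dA(t) X + dB(t) - gamma * SGN(E0) o Phi(E).
    Xdot = np.linalg.solve(A(t), -dA(t) @ X + dB(t) - gamma * mask * E)
    X = X + dt * Xdot
    t += dt

E_final = A(t) @ X - B(t)
print(E_final)   # the initially positive entry decays; the rest stay put
```

With this start, E₀ = [[1, −1], [−1, −2]], so only the (1,1) entry is driven to zero while the remaining entries keep their initial non-positive values, matching the behavior stated in Corollary 2.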

5. Simulative verifications

In this section, one illustrative example is first simulated for demonstrating the efficacy of the presented ZNN models (8) and (10) for solving the time-varying LMVI (3). Then, another illustrative example is provided for substantiating the efficacy of the presented ZNN models (18) and (20) for solving the time-varying LMI (15).

Example 1. In the first example, the following smoothly time-varying coefficient matrix A(t) and coefficient vector b(t) of (3) are designed to test ZNN models (8) and (10):

$$A(t)=\begin{bmatrix}3+\sin(3t) & \cos(3t)/2 & \cos(3t)\\ \cos(3t)/2 & 3+\sin(3t) & \cos(3t)/2\\ \cos(3t) & \cos(3t)/2 & 3+\sin(3t)\end{bmatrix}\in\mathbb{R}^{3\times 3}\quad\text{and}\quad b(t)=\begin{bmatrix}\sin(3t)+1\\ \cos(3t)+2\\ \sin(3t)+\cos(3t)+3\end{bmatrix}\in\mathbb{R}^{3}.$$

The corresponding simulation results are shown in Figures 3 through 9.

Specifically, Figures 3 and 4 illustrate the state trajectories synthesized by ZNN model (8) using γ = 1 and the power-sigmoid activation function. As shown in Figures 3 and 4, starting from five randomly generated initial states, the x(t) trajectories (being the first 3 elements of y(t) in (8)) and the Λ(t) trajectories (being the rest of the elements of y(t)) are time-varying. In addition, Figure 5 presents the characteristics of the residual error ‖Q(t)y(t) − b(t)‖₂ = ‖A(t)x(t) + Λ²(t) − b(t)‖₂ (with ‖·‖₂ denoting the two-norm of a vector), from which we can observe that the residual errors of ZNN model (8) (corresponding to Figures 3 and 4) are all convergent to zero. This means that the x(t) and Λ(t) solutions shown in Figures 3 and 4 are the time-varying solutions of A(t)x(t) + Λ²(t) − b(t) = 0. In view of Λ²(t) ≥ 0, such a solution x(t) is an exact solution of the time-varying LMVI (3), i.e., A(t)x(t) ≤ b(t). For better understanding, the profiles of the testing error function ε(t) = A(t)x(t) − b(t) (i.e., ZF (9)) are illustrated in Figure 6. As shown in the figure, all the elements of ε(t) are less than or equal to zero, which means that the x(t) solution satisfies A(t)x(t) ≤ b(t) (being an exact time-varying solution of (3)). These simulation results substantiate the efficacy of ZNN model (8) for time-varying LMVI solving. Besides, Figure 7 shows the simulation results synthesized by ZNN model (8) using different γ values (i.e., γ = 1 and γ = 10) and different activation functions (i.e., the linear, power-sigmoid and hyperbolic-sine activation functions). As seen from Figure 7, the residual errors all converge to zero, which means that ZNN model (8) solves the time-varying LMVI (3) successfully. Note that, from Figure 7, we can conclude that superior computational performance of ZNN model (8) can be achieved by increasing the γ value and choosing a suitable activation function.

Figure 3.

State trajectories of x(t) ∈ ℝ³ synthesized by ZNN model (8) with γ = 1 and the power-sigmoid activation function used for time-varying LMVI (3) solving.

Figure 4.

State trajectories of Λ(t) ∈ ℝ³ synthesized by ZNN model (8) with γ = 1 and the power-sigmoid activation function used for time-varying LMVI (3) solving.

Figure 5.

Residual errors ‖Q(t)y(t) − b(t)‖₂ of ZNN model (8) for time-varying LMVI (3) solving.

Figure 6.

Profiles of ε(t) = A(t)x(t) − b(t) ∈ ℝ³ synthesized by ZNN model (8) with γ = 1 and the power-sigmoid activation function used for time-varying LMVI (3) solving.

Figure 7.

Residual errors ‖Q(t)y(t) − b(t)‖₂ of ZNN model (8) with γ fixed and different activation functions used for time-varying LMVI (3) solving.

Figure 8.

State trajectories of x(t) ∈ ℝ³ synthesized by ZNN model (10) with γ = 1 and the power-sigmoid activation function used for time-varying LMVI (3) solving.

It is worth pointing out here that, in general, it may be difficult to know whether the initial state x(0) used for simulation/application is outside the initial solution set S(0) of the time-varying LMVI (3) or not. Thus, as for ZNN model (10), we focus on investigating its computational performance when some elements of x(0) are outside S(0) while the others are inside S(0). In this case, some elements of the initial error E₀ = A(0)x(0) − b(0) are greater than zero, while the rest are less than or equal to zero. The corresponding simulation results synthesized by ZNN model (10) using γ = 1 and the power-sigmoid activation function are illustrated in Figures 8 and 9. As shown in Figure 8, starting from five randomly generated initial states, the x(t) trajectories of ZNN model (10) are time-varying. Besides, from Figure 9, which shows the profiles of the testing error function ε(t) = A(t)x(t) − b(t), we can observe that the elements of ε(t) with positive initial values converge to zero, while the rest remain at their initial values. This result implies that the x(t) solutions shown in Figure 8 are time-varying solutions of (3), i.e., A(t)x(t) ≤ b(t), thereby showing the efficacy of ZNN model (10) for time-varying LMVI solving. That is, ZNN model (10) generates an exact time-varying solution of the time-varying LMVI (3). Note that the computational performance of ZNN model (10) can be improved by increasing the value of γ and choosing a suitable activation function (similar to ZNN model (8)). Being a topic of exercise, the corresponding simulative verifications of ZNN model (10) are left for interested readers.

Figure 9.

Profiles of ε(t) = A(t)x(t) − b(t) ∈ ℝ³ synthesized by ZNN model (10) with γ = 1 and the power-sigmoid activation function used for time-varying LMVI (3) solving.

In summary, the above simulation results (i.e., Figures 3 through 9) have substantiated that the presented ZNN models (8) and (10) are both effective on time-varying LMVI solving.

Example 2. In the second example, the following smoothly time-varying coefficient matrices A(t) and B(t) of (15) are designed to test ZNN models (18) and (20):

$$A(t)=\begin{bmatrix}\sin(10t) & \cos(10t)\\ -\cos(10t) & \sin(10t)\end{bmatrix}\in\mathbb{R}^{2\times 2}\quad\text{and}\quad B(t)=\begin{bmatrix}\cos(10t)+1 & \sin(10t)+1.5\\ -\sin(10t)+1.5 & \cos(10t)+1\end{bmatrix}\in\mathbb{R}^{2\times 2}.$$

The corresponding simulation results are illustrated in Figures 10 through 13.

Figure 10.

Neural states synthesized by ZNN model (18) with γ=1 and the hyperbolic-sine activation function used for time-varying LMI (15) solving.

Figure 11.

Profiles of residual errors ‖Q(t)y(t) − w(t)‖₂ and ε(t) = A(t)X(t) − B(t) synthesized by ZNN model (18) with γ = 1 and the hyperbolic-sine activation function used for time-varying LMI (15) solving.

Figure 12.

Residual errors ‖Q(t)y(t) − w(t)‖₂ of ZNN model (18) with γ fixed and different activation functions used for time-varying LMI (15) solving.

Figure 13.

Simulation results synthesized by ZNN model (20) with γ=1 and the hyperbolic-sine activation function used for time-varying LMI (15) solving.

On one hand, as synthesized by ZNN model (18) using γ = 1 and the hyperbolic-sine activation function, Figure 10 shows the trajectories of X(t) (being the first 4 elements of y(t) in (18)) and Λ(t) (being the rest of the elements of y(t)), which are time-varying. In addition, Figure 11(a) shows the characteristics of the residual error ‖Q(t)y(t) − w(t)‖₂ = ‖A(t)X(t) + Λ²(t) − B(t)‖_F (with ‖·‖_F denoting the Frobenius norm of a matrix), from which we can observe that the residual errors of ZNN model (18) all converge to zero. This means that the solutions of X(t) and Λ(t) shown in Figure 10 are the time-varying solutions of A(t)X(t) + Λ²(t) − B(t) = 0. That is, X(t) satisfies A(t)X(t) = B(t) − Λ²(t) ≤ B(t), showing that such a solution is an exact time-varying solution of the time-varying LMI (15). For better understanding, Figure 11(b) shows the profiles of the testing error function ε(t) = A(t)X(t) − B(t), from which we can observe that all the elements of ε(t) are less than or equal to zero. These simulation results substantiate the efficacy of ZNN model (18) for time-varying LMI solving. Besides, Figure 12 shows the simulation results synthesized by ZNN model (18) using different γ values and different activation functions. As seen from Figure 12, the residual errors all converge to zero, which means that the time-varying LMI (15) is solved successfully via ZNN model (18). Note that the computational performance of ZNN model (18) can be improved by increasing the γ value and choosing a suitable activation function (as shown in Figure 12).

On the other hand, as synthesized by ZNN model (20) with γ = 1 and the hyperbolic-sine activation function, Figure 13 shows the related simulation results, where some elements of the initial state X(0) are outside the initial solution set S(0) of the time-varying LMI (15), while the others are inside S(0). From Figure 13(a), we can observe that the X(t) trajectory of ZNN model (20) is time-varying. In addition, as shown in Figure 13(b), the errors ε₁₁(t) and ε₂₁(t) (being elements of the testing error function ε(t) = A(t)X(t)−B(t)) converge to zero, and the errors ε₁₂(t) and ε₂₂(t) remain equal to ε₁₂(0) < 0 and ε₂₂(0) < 0. This means that the X(t) shown in Figure 13(a) is a time-varying solution of (15), i.e., A(t)X(t) ≤ B(t), thereby showing the efficacy of ZNN model (20). That is, ZNN model (20) generates an exact time-varying solution of the time-varying LMI (15). Besides, the investigation of the computational performance of ZNN model (20) with different γ values and different activation functions is left to interested readers as an exercise.
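The sign test on the testing error ε(t) described above is straightforward to automate. The following sketch (with toy matrices that are assumptions, not the chapter's A(t) and B(t)) checks elementwise whether a candidate X satisfies the LMI A X ≤ B:

```python
import numpy as np

# Hypothetical check mirroring the testing error eps(t) = A(t)X(t) - B(t):
# X solves the LMI A X <= B iff every element of eps is nonpositive.
def satisfies_lmi(A, X, B, tol=1e-8):
    eps = A @ X - B
    return bool(np.all(eps <= tol))

# Toy 2x2 data (assumed for illustration, not the chapter's example (15))
A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0, 1.0], [1.0, 1.0]])
X_ok  = np.array([[0.5, 0.5], [0.25, 0.25]])  # A @ X_ok = 0.5 everywhere, <= B
X_bad = np.array([[2.0, 0.0], [0.0, 0.0]])    # A @ X_bad has an entry 2.0 > 1.0

print(satisfies_lmi(A, X_ok, B), satisfies_lmi(A, X_bad, B))  # True False
```

Applied along a simulated trajectory, such a check plays the same role as plotting the elements of ε(t) in Figure 13(b).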

In summary, the above simulation results (i.e., Figures 10 through 13) have substantiated that the presented ZNN models (18) and (20) are both effective on time-varying LMI solving.

Advertisement

6. Summary

In this chapter, by exploiting two design formulas (1) and (2), based on different ZFs (i.e., (4), (9), (16) and (19)), four different ZNN models (i.e., (8), (10), (18) and (20)) have been developed and investigated to solve the time-varying LMVI (3) and time-varying LMI (15). For such ZNN models, theoretical results and analyses have also been presented to show their computational performances. Simulation results with two illustrative examples have further substantiated the efficacy of the presented ZNN models for time-varying LMVI and LMI solving.

Advertisement

Acknowledgments

This work is supported by the National Natural Science Foundation of China (under grant numbers 61473323 and 61403149) and by the Scientific Research Funds of Huaqiao University.

References

  1. Guo D, Zhang Y. Acceleration-level inequality-based MAN scheme for obstacle avoidance of redundant robot manipulators. IEEE Transactions on Industrial Electronics. 2014; 61:6903–6914. DOI: 10.1109/TIE.2014.2331036
  2. Guo D, Zhang Y. A new inequality-based obstacle-avoidance MVN scheme and its application to redundant robot manipulators. IEEE Transactions on Systems, Man, and Cybernetics, Part C. 2012; 42:1326–1340. DOI: 10.1109/TSMCC.2012.2183868
  3. Jing X. Robust adaptive learning of feedforward neural networks via LMI optimizations. Neural Networks. 2012; 31:33–45. DOI: 10.1016/j.neunet.2012.03.003
  4. Wang Z, Zhang H, Jiang B. LMI-based approach for global asymptotic stability analysis of recurrent neural networks with various delays and structures. IEEE Transactions on Neural Networks. 2011; 22:1032–1045. DOI: 10.1109/TNN.2011.2131679
  5. Kim E, Kang HJ, Park M. Numerical stability analysis of fuzzy control systems via quadratic programming and linear matrix inequalities. IEEE Transactions on Systems, Man, and Cybernetics, Part A. 1999; 29:333–346. DOI: 10.1109/3468.769752
  6. Xiao L, Zhang Y. Different Zhang functions resulting in different ZNN models demonstrated via time-varying linear matrix-vector inequalities solving. Neurocomputing. 2013; 121:140–149. DOI: 10.1016/j.neucom.2013.04.041
  7. Guo D, Zhang Y. ZNN for solving online time-varying linear matrix-vector inequality via equality conversion. Applied Mathematics and Computation. 2015; 259:327–338. DOI: 10.1016/j.amc.2015.02.060
  8. Guo D, Zhang Y. Zhang neural network for online solution of time-varying linear matrix inequality aided with an equality conversion. IEEE Transactions on Neural Networks and Learning Systems. 2014; 25:370–382. DOI: 10.1109/TNNLS.2013.2275011
  9. Guo D, Zhang Y. A new variant of the Zhang neural network for solving online time-varying linear inequalities. Proceedings of the Royal Society A. 2012; 468:2255–2271. DOI: 10.1098/rspa.2011.0668
  10. Lin C, Lai C, Huang T. A neural network for linear matrix inequality problems. IEEE Transactions on Neural Networks. 2000; 11:1078–1092. DOI: 10.1109/72.870041
  11. Cheng L, Hou ZG, Tan M. A simplified neural network for linear matrix inequality problems. Neural Processing Letters. 2009; 29:213–230. DOI: 10.1007/s11063-009-9105-5
  12. Yang K, Murty KG, Mangasarian OL. New iterative methods for linear inequalities. Journal of Optimization Theory and Applications. 1992; 72:163–185. DOI: 10.1007/BF00939954
  13. Cichocki A, Bargiela A. Neural networks for solving linear inequality systems. Parallel Computing. 1997; 22:1455–1475. DOI: 10.1016/S0167-8191(96)00065-8
  14. Xia Y, Wang J, Hung DL. Recurrent neural networks for solving linear inequalities and equations. IEEE Transactions on Circuits and Systems - I. 1999; 46:452–462. DOI: 10.1109/81.754846
  15. Zhang Y. A set of nonlinear equations and inequalities arising in robotics and its online solution via a primal neural network. Neurocomputing. 2006; 70:513–524. DOI: 10.1016/j.neucom.2005.11.006
  16. Hu X. Dynamic system methods for solving mixed linear matrix inequalities and linear vector inequalities and equalities. Applied Mathematics and Computation. 2010; 216:1181–1193. DOI: 10.1016/j.amc.2010.02.010
  17. Xiao L, Zhang Y. Zhang neural network versus gradient neural network for solving time-varying linear inequalities. IEEE Transactions on Neural Networks. 2011; 22:1676–1684. DOI: 10.1109/TNN.2011.2163318
  18. Zhang Y, Guo D. Zhang Functions and Various Models. Heidelberg: Springer-Verlag; 2015. 236 p.
  19. Zhang Y, Yi C. Zhang Neural Networks and Neural-Dynamic Method. New York: Nova Science Publishers; 2011. 261 p.
  20. Zhang Y, Jiang D, Wang J. A recurrent neural network for solving Sylvester equation with time-varying coefficients. IEEE Transactions on Neural Networks. 2002; 13:1053–1063. DOI: 10.1109/TNN.2002.1031938
  21. Zhang Y, Ge SS. Design and analysis of a general recurrent neural network model for time-varying matrix inversion. IEEE Transactions on Neural Networks. 2005; 16:1477–1490. DOI: 10.1109/TNN.2005.857946
  22. Mead C. Analog VLSI and Neural Systems. Boston, USA: Addison-Wesley, Reading; 1989. 371 p.
  23. Li Z, Zhang Y. Improved Zhang neural network model and its solution of time-varying generalized linear matrix equations. Expert Systems with Applications. 2010; 37:7213–7218. DOI: 10.1016/j.eswa.2010.04.007
  24. Liu S, Trenkler G. Hadamard, Khatri-Rao, Kronecker and other matrix products. International Journal of Cooperative Information Systems. 2008; 4:160–177.
