
Solving and Algorithm for Least-Norm General Solution to Constrained Sylvester Matrix Equation

Written By

Abdur Rehman and Ivan I. Kyrchei

Reviewed: 02 January 2023 Published: 08 February 2023

DOI: 10.5772/intechopen.109749

From the Edited Volume

Inverse Problems - Recent Advances and Applications

Edited by Ivan I. Kyrchei


Abstract

Keeping in view that many physical systems with inverse problems can be written as matrix equations, the least-norm of the solution to a general Sylvester matrix equation with the restrictions $A_1X_1=C_1$, $X_1B_1=C_2$, $A_2X_2=C_3$, $X_2B_2=C_4$, $A_3X_1B_3+A_4X_2B_4=C_c$ is researched in this chapter. A novel expression of the general solution to this system is established, and necessary and sufficient conditions for its existence are given. The novelty of the proposed results lies not only in obtaining a formal representation of the solution in terms of generalized inverses but also in the construction of an algorithm to find its explicit expression. To carry out the algorithm and the numerical example, we use the determinantal representations of the Moore–Penrose inverse previously obtained by one of the authors.

Keywords

  • linear matrix equation
  • generalized Sylvester matrix equation
  • Moore-Penrose inverse

1. Introduction

Throughout, $\mathbb{C}$ and $\mathbb{R}$ stand, respectively, for the fields of complex and real numbers. Let $\mathbb{C}^{m\times n}$ denote the set of all $m\times n$ matrices over $\mathbb{C}$, and let $\mathbb{C}_r^{m\times n}$ stand for the subset of $m\times n$ complex matrices of rank $r$. The rank of $A$ is denoted by either of the symbols $r(A)$ and $\operatorname{rank}(A)$. The (complex) conjugate transpose of $A\in\mathbb{C}^{m\times n}$ is written $A^{*}$, and a matrix $A\in\mathbb{C}^{n\times n}$ is said to be Hermitian if $A=A^{*}$. An identity matrix of feasible size is denoted by $I$.

Definition 1.1. The Moore–Penrose (MP-) inverse of $A\in\mathbb{C}^{m\times n}$, denoted by $A^{\dagger}$, is defined to be the unique solution $X$ of the following four Penrose equations:

$AXA=A$,  (1)
$XAX=X$,  (2)
$(AX)^{*}=AX$,  (3)
$(XA)^{*}=XA$.  (4)

Matrices satisfying Eqs. (1) and (2) are known as reflexive inverses and are denoted by $A^{+}$.

In addition, $L_A=I-A^{\dagger}A$ and $R_A=I-AA^{\dagger}$ represent a pair of orthogonal projectors onto the kernels of $A$ and $A^{*}$, respectively.
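To make Definition 1.1 concrete, here is a minimal NumPy sketch (the matrix A is an arbitrary example, not one from the chapter) that computes $A^{\dagger}$ with numpy.linalg.pinv, verifies the four Penrose equations (1)–(4), and forms the projectors $L_A$ and $R_A$:

```python
import numpy as np

# Arbitrary example matrix (not from the chapter)
A = np.array([[1 + 1j, 2, 0],
              [0, 1 - 1j, 3],
              [1 + 1j, 3 - 1j, 3],
              [2, 0, 1j]])

Ad = np.linalg.pinv(A)                               # A^dagger

# The four Penrose equations (1)-(4)
print(np.allclose(A @ Ad @ A, A))                    # (1)
print(np.allclose(Ad @ A @ Ad, Ad))                  # (2)
print(np.allclose((A @ Ad).conj().T, A @ Ad))        # (3)
print(np.allclose((Ad @ A).conj().T, Ad @ A))        # (4)

# Projectors L_A = I - A^dagger A and R_A = I - A A^dagger
L_A = np.eye(A.shape[1]) - Ad @ A
R_A = np.eye(A.shape[0]) - A @ Ad
print(np.allclose(L_A @ L_A, L_A), np.allclose(R_A @ R_A, R_A))   # both idempotent
print(np.allclose(A @ L_A, 0), np.allclose(R_A @ A, 0))           # annihilate A from the right/left
```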

Mathematical models of physical systems with inverse problems, especially those having a finite number of model parameters, can be written as matrix equations. In particular, Sylvester-type matrix equations have far-reaching applications in singular system control [1], system design [2], robust control [3], feedback [4], perturbation theory [5], linear descriptor systems [6], neural networks [7], the theory of orbits [8], etc.

Some recent work on generalized Sylvester matrix equations and their systems can be found in [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. In 2014, Bao [22] examined the least-norm and extremal ranks of the least squares solution to the quaternion matrix equations

$A_1X=C_1$, $XB_1=C_2$, $A_3XB_3=C_c$.  (5)

Wang et al. [23] examined the expression of the general solution to the system

$A_1X_1=C_1$, $A_2X_2=C_3$, $A_3X_1B_3+A_4X_2B_4=C_c$,  (6)

and, as an application, the P-symmetric and P-skew-symmetric solutions to

$A_aX=C_a$, $A_bXB_b=C_b$

were established. Li et al. [24] established a novel expression of the general solution of the system (6), and they computed the least-norm of the general solution to (6). In 2009, Wang et al. [25] derived an expression of the general solution to

$A_1X_1=C_1$, $X_1B_1=C_2$, $A_2X_2=C_3$, $X_2B_2=C_4$, $A_3X_1B_3+A_4X_2B_4=C_c$,  (7)

and as an application, they explored the PQ-symmetric solution to the system

$A_aX=C_a$, $XB_b=C_b$, $A_cXB_c=C_c$.

Some recent findings on the least-norm of matrix equations and PQ-symmetric matrices can be found in [26, 27, 28, 29, 30]. Furthermore, our main system (7) is a special case of the following system

$A_1X_1=C_1$, $X_2B_1=D_1$, $A_2X_3=C_2$, $X_3B_2=D_2$, $A_3X_4=C_3$, $X_4B_3=D_3$, $A_4X_1+X_2B_4+C_4X_3D_4+C_5X_4D_5=C_c$,  (8)

which has been investigated by Zhang in 2014.

Motivated by the recent interest in the least-norm of matrix equations, in this chapter we construct a novel expression of the general solution to the system (7) and apply it to investigate the least-norm of the general solution to (7). Since systems (5) and (6) are particular cases of our system (7), solving system (7) extends the least-norm results to a wide class of problems.

We begin with the following lemmas, which play a crucial role in establishing the main results of the following sections.

Lemma 1.2. [31]. Let $A$, $B$, and $C$ be given matrices over $\mathbb{C}$ with agreeable dimensions. Then

  1. $r(A)+r(R_AB)=r(B)+r(R_BA)=r\begin{bmatrix}A & B\end{bmatrix}$.

  2. $r(A)+r(CL_A)=r(C)+r(AL_C)=r\begin{bmatrix}A\\ C\end{bmatrix}$.

  3. $r(B)+r(C)+r(R_BAL_C)=r\begin{bmatrix}A & B\\ C & 0\end{bmatrix}$.

Lemma 1.3. [32]. Let $A$, $B$, and $C$ be known matrices over $\mathbb{C}$ of suitable sizes. Then

  1. $A^{\dagger}=\left(A^{*}A\right)^{\dagger}A^{*}=A^{*}\left(AA^{*}\right)^{\dagger}$.

  2. $L_A=L_A^{2}=L_A^{*}$, $R_A=R_A^{2}=R_A^{*}$.

  3. $L_A\left(BL_A\right)^{\dagger}=\left(BL_A\right)^{\dagger}$, $\left(R_AC\right)^{\dagger}R_A=\left(R_AC\right)^{\dagger}$.

Lemma 1.4. [33]. Let $\Phi$, $\Omega$ be matrices over $\mathbb{C}$ partitioned as

$\Phi=\begin{bmatrix}\Phi_1\\ \Phi_2\end{bmatrix}$, $\Omega=\begin{bmatrix}\Omega_1 & \Omega_2\end{bmatrix}$, $F=\Phi_2L_{\Phi_1}$, $T=R_{\Omega_1}\Omega_2$.

Then

$L_{\Phi}=L_{\Phi_1}L_{F}$, $L_{\Omega}=\begin{bmatrix}L_{\Omega_1} & -\Omega_1^{+}\Omega_2L_{T}\\ 0 & L_{T}\end{bmatrix}$, $R_{\Omega}=R_{T}R_{\Omega_1}$, $R_{\Phi}=\begin{bmatrix}R_{\Phi_1} & 0\\ -R_{F}\Phi_2\Phi_1^{+} & R_{F}\end{bmatrix}$,

where $\Phi_1^{+}$, $\Omega_1^{+}$ are any fixed reflexive inverses, and $L_{\Phi_1}$, $R_{\Omega_1}$ stand for the projectors $L_{\Phi_1}=I-\Phi_1^{+}\Phi_1$, $R_{\Omega_1}=I-\Omega_1\Omega_1^{+}$ induced by $\Phi_1$, $\Omega_1$, respectively.

Remark 1.5. Since the Moore-Penrose inverse is a reflexive inverse, this lemma can be used for the MP-inverse without any changes; in this form it appears in ([32], Lemma 2.4).

Lemma 1.6. [34]. Suppose that

$B_1XC_1+B_2YC_2=A$  (9)

is a consistent linear matrix equation. Then

  1. The general solution of the homogeneous equation

     $B_1XC_1+B_2YC_2=0$

     can be expressed as

     $X=X_1X_2+X_3$, $Y=Y_1Y_2+Y_3$,

     where $X_1,\dots,X_3$ and $Y_1,\dots,Y_3$ constitute a general solution to the system

     $B_1X_1=-B_2Y_1$, $X_2C_1=Y_2C_2$, $B_1X_3C_1=0$, $B_2Y_3C_2=0$.

     By computing the values of the unknowns above and using them in $X$ and $Y$, we have

     $X=S_1L_GUR_HT_1+L_{B_1}V_1+V_2R_{C_1}$, $Y=-S_2L_GUR_HT_2+L_{B_2}W_1+W_2R_{C_2}$,

     where $S_1=\begin{bmatrix}I_p & 0\end{bmatrix}$, $S_2=\begin{bmatrix}0 & I_s\end{bmatrix}$, $T_1=\begin{bmatrix}I_q\\ 0\end{bmatrix}$, $T_2=\begin{bmatrix}0\\ I_t\end{bmatrix}$, $G=\begin{bmatrix}B_1 & B_2\end{bmatrix}$, and $H=\begin{bmatrix}C_1\\ C_2\end{bmatrix}$; the matrices $U$, $V_1$, $V_2$, $W_1$, and $W_2$ are free to vary over $\mathbb{C}$.

  2. Assume that Eq. (9) is solvable; then its general solution can be expressed as

     $X=X_0+X_1X_2+X_3$, $Y=Y_0+Y_1Y_2+Y_3$,

     where $X_0$ and $Y_0$ are any pair of particular solutions to (9).

     It can also be written as

     $X=X_0+S_1L_GUR_HT_1+L_{B_1}V_1+V_2R_{C_1}$, $Y=Y_0-S_2L_GUR_HT_2+L_{B_2}W_1+W_2R_{C_2}$.

Lemma 1.7. [35]. Let $A_1$, $B_1$, $C_1$, $C_2$ be given matrices over $\mathbb{C}$ with agreeable sizes and let $X_1$ be unknown. Then the system

$A_1X_1=C_1$, $X_1B_1=C_2$  (10)

is consistent if and only if

$R_{A_1}C_1=0$, $C_2L_{B_1}=0$, $A_1C_2=C_1B_1$.  (11)

Under these conditions, the general solution to (10) can be written as

$X_1=A_1^{\dagger}C_1+L_{A_1}C_2B_1^{\dagger}+L_{A_1}U_1R_{B_1}$,

where $U_1$ is a free matrix over $\mathbb{C}$ of accordant dimension.

Lemma 1.8. [36]. Let $A$, $B$, and $C$ be known matrices over $\mathbb{C}$ with agreeable dimensions, and let $X$ be unknown. Then the matrix equation

$AXB=C$  (12)

is consistent if and only if $AA^{\dagger}CB^{\dagger}B=C$. In this case, its general solution can be expressed as

$X=A^{\dagger}CB^{\dagger}+L_AV+WR_B$,  (13)

where $V$, $W$ are arbitrary matrices over $\mathbb{C}$ with appropriate dimensions.

In [37], it is proved that (13) is the least squares solution to (12), and that its minimum norm least squares solution is $X_{LS}=A^{\dagger}CB^{\dagger}$.
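As a small illustration of Lemma 1.8, the following NumPy sketch (with made-up matrices, not from the chapter) builds a consistent equation $AXB=C$, checks the consistency test $AA^{\dagger}CB^{\dagger}B=C$, and compares the minimum norm solution $X_{LS}=A^{\dagger}CB^{\dagger}$ with another member of the family (13):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((2, 5))
C = A @ rng.standard_normal((3, 2)) @ B         # consistent right-hand side by construction

Ad, Bd = np.linalg.pinv(A), np.linalg.pinv(B)
print(np.allclose(A @ Ad @ C @ Bd @ B, C))      # consistency test of Lemma 1.8

X_min = Ad @ C @ Bd                             # minimum norm solution X_LS = A^+ C B^+
L_A = np.eye(3) - Ad @ A                        # L_A = I - A^+ A
R_B = np.eye(2) - B @ Bd                        # R_B = I - B B^+
V, W = rng.standard_normal((3, 2)), rng.standard_normal((3, 2))
X_other = X_min + L_A @ V + W @ R_B             # another solution taken from (13)

print(np.allclose(A @ X_min @ B, C))            # both members solve AXB = C
print(np.allclose(A @ X_other @ B, C))
print(np.linalg.norm(X_min) <= np.linalg.norm(X_other))   # X_min has the smallest norm
```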

Lemma 1.9. [25]. Let $A_i$, $B_i$, $C_i$, $i=1,\dots,4$, and $C_c$ be given matrices over $\mathbb{C}$ with agreeable dimensions, and let $X_1$, $X_2$ be unknown. Denote

$A=A_3L_{A_1}$, $B=R_{B_1}B_3$, $C=A_4L_{A_2}$, $D=R_{B_2}B_4$, $N=DL_B$, $M=R_AC$, $S=CL_M$, $E=C_c-A_3A_1^{\dagger}C_1B_3-AC_2B_1^{\dagger}B_3-A_4A_2^{\dagger}C_3B_4-CC_4B_2^{\dagger}B_4$.

Then the following conditions are equivalent:

  1. System (7) is resolvable.

  2. The conditions in (11) are met and

     $R_{A_2}C_3=0$, $C_4L_{B_2}=0$, $A_2C_4=C_3B_2$, $R_MR_AE=0$, $R_AEL_D=0$, $EL_BL_N=0$, $R_CEL_B=0$.  (14)

  3. The equalities in (11) and (14) are satisfied and

     $MM^{\dagger}R_AED^{\dagger}D=R_AE$, $CC^{\dagger}EL_BN^{\dagger}N=EL_B$.

Under these conditions, the general solution to the system (7) can be written as

$X_1=A_1^{\dagger}C_1+L_{A_1}C_2B_1^{\dagger}+L_{A_1}A^{\dagger}EB^{\dagger}R_{B_1}-L_{A_1}A^{\dagger}CM^{\dagger}EB^{\dagger}R_{B_1}-L_{A_1}A^{\dagger}SC^{\dagger}EN^{\dagger}DB^{\dagger}R_{B_1}-L_{A_1}A^{\dagger}SV_1R_NDB^{\dagger}R_{B_1}+L_{A_1}\left(L_AU_1+Z_1R_B\right)R_{B_1}$,  (15)
$X_2=A_2^{\dagger}C_3+L_{A_2}C_4B_2^{\dagger}+L_{A_2}M^{\dagger}R_AED^{\dagger}R_{B_2}+L_{A_2}L_MS^{\dagger}SC^{\dagger}EN^{\dagger}R_{B_2}+L_{A_2}L_M\left(V_1-S^{\dagger}SV_1NN^{\dagger}\right)R_{B_2}+L_{A_2}W_1R_DR_{B_2}$,  (16)

where $U_1$, $V_1$, $W_1$, and $Z_1$ are free matrices over $\mathbb{C}$ with agreeable dimensions.

Since the general solutions of the considered systems are expressed in terms of generalized inverses, another goal of this chapter is to give a determinantal representation of the least-norm of the general solution to the system (7), based on determinantal representations of generalized inverses.

Due to the important role of generalized inverses in many application fields, considerable effort has been devoted to numerical algorithms for fast and accurate calculation of matrix generalized inverses. Most existing methods are iterative algorithms for approximating generalized inverses of complex matrices (for some recent papers, see, e.g., [38, 39, 40]). There are only a few direct methods for finding the MP-inverse of an arbitrary complex matrix $A\in\mathbb{C}^{m\times n}$. The most famous is the method based on the singular value decomposition (SVD): if $A=U\Sigma V^{*}$, then $A^{\dagger}=V\Sigma^{\dagger}U^{*}$. The computational cost of this method is dominated by the cost of computing the SVD, which is several times higher than that of matrix–matrix multiplication. Another approach is to construct determinantal representations of the MP-inverse $A^{\dagger}$. A well-known determinantal representation of an ordinary inverse is the adjugate matrix with cofactors as entries. It has important theoretical significance and yields Cramer's rule for systems of linear equations. The same is desirable for generalized inverses. In the search for more applicable explicit expressions, various determinantal representations of generalized inverses have been obtained (for the MP-inverse, see, e.g., [41, 42]). Because of the complexity of the previously obtained expressions, they have had little applicability.

In this chapter, we will use the determinantal representations of the MP-inverse recently obtained in [43].

Lemma 1.10. [43, Theorem 2.2] If $A\in\mathbb{C}^{m\times n}$ with $\operatorname{rank}(A)=r$, then the Moore-Penrose inverse $A^{\dagger}=\left(a^{\dagger}_{ij}\right)\in\mathbb{C}^{n\times m}$ possesses the following determinantal representations:

$a^{\dagger}_{ij}=\dfrac{\sum_{\beta\in J_{r,n}\{i\}}\left|\left(A^{*}A\right)_{.i}\left(a^{*}_{.j}\right)\right|^{\beta}_{\beta}}{\sum_{\beta\in J_{r,n}}\left|A^{*}A\right|^{\beta}_{\beta}}=\dfrac{\sum_{\alpha\in I_{r,m}\{j\}}\left|\left(AA^{*}\right)_{j.}\left(a^{*}_{i.}\right)\right|^{\alpha}_{\alpha}}{\sum_{\alpha\in I_{r,m}}\left|AA^{*}\right|^{\alpha}_{\alpha}}.$  (17)

Here $\left|A\right|^{\alpha}_{\alpha}$ denotes a principal minor of $A$ whose rows and columns are indexed by $\alpha=\{\alpha_1,\dots,\alpha_k\}\subseteq\{1,\dots,m\}$,

$L_{k,m}:=\{\alpha:\ 1\le\alpha_1<\dots<\alpha_k\le m\}$, $\quad I_{r,m}\{i\}:=\{\alpha:\ \alpha\in L_{r,m},\ i\in\alpha\}$,

and $J_{r,n}\{i\}$ is defined analogously over the column indices $\{1,\dots,n\}$; summation over $I_{r,m}$ or $J_{r,n}$ without braces runs over all of $L_{r,m}$ or $L_{r,n}$, respectively. Also, $a^{*}_{.j}$ and $a^{*}_{i.}$ denote the $j$th column and the $i$th row of $A^{*}$, and $A_{i.}\left(b\right)$ and $A_{.j}\left(c\right)$ stand for the matrices obtained from $A$ by replacing its $i$th row with the row vector $b\in\mathbb{C}^{1\times n}$ and its $j$th column with the column vector $c\in\mathbb{C}^{m}$, respectively.

The formulas (17) give very simple and elegant determinantal representations of the MP-inverse. Namely, for any $A\in\mathbb{C}_r^{m\times n}$, the denominators in (17) are the sums of all principal minors of order $r$ of the matrices $A^{*}A$ or $AA^{*}$, while the numerators are the sums of those principal minors of order $r$ of $\left(A^{*}A\right)_{.i}\left(a^{*}_{.j}\right)$ or $\left(AA^{*}\right)_{j.}\left(a^{*}_{i.}\right)$ that contain the $i$th column or the $j$th row, respectively.

Note that for an arbitrary full-rank matrix A, Lemma 1.10 gives a new way of finding an inverse matrix.

Corollary 1.11. If $A\in\mathbb{C}^{m\times n}$ with $\operatorname{rank}(A)=\min\{m,n\}$, then the inverse $A^{-1}=\left(a^{-1}_{ij}\right)\in\mathbb{C}^{n\times m}$ possesses the following determinantal representations:

$a^{-1}_{ij}=\dfrac{\left|\left(A^{*}A\right)_{.i}\left(a^{*}_{.j}\right)\right|}{\left|A^{*}A\right|}$ if $\operatorname{rank}(A)=n$, and $a^{-1}_{ij}=\dfrac{\left|\left(AA^{*}\right)_{j.}\left(a^{*}_{i.}\right)\right|}{\left|AA^{*}\right|}$ if $\operatorname{rank}(A)=m$.

These new determinantal representations of the Moore-Penrose inverse have also been obtained, by the novel limit-rank method, in the case of quaternion matrices [44]. This method has been successfully applied to construct determinantal representations of other generalized inverses, in both the complex and quaternion cases (see, e.g., [45, 46, 47]). It also yields Cramer's rules for various matrix equations [48, 49, 50, 51, 52, 53, 54].
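For readers who wish to experiment with (17), the sketch below implements the first representation directly: the denominator is the sum of all principal minors of order $r$ of $A^{*}A$, and each numerator sums the order-$r$ principal minors, containing the index $i$, of $A^{*}A$ with its $i$th column replaced by the $j$th column of $A^{*}$. It is a didactic, unoptimized illustration (the function name and the test matrix are ours), checked against numpy.linalg.pinv:

```python
import itertools
import numpy as np

def mp_inverse_determinantal(A, tol=1e-10):
    """Moore-Penrose inverse via the first determinantal representation in (17);
    a didactic sketch (our own helper, not the chapter's code), not optimized."""
    A = np.asarray(A, dtype=complex)
    m, n = A.shape
    r = np.linalg.matrix_rank(A, tol)
    gram = A.conj().T @ A                      # A*A, an n x n matrix
    Astar = A.conj().T                         # A*, an n x m matrix
    # denominator: sum of all principal minors of order r of A*A
    denom = sum(np.linalg.det(gram[np.ix_(beta, beta)])
                for beta in itertools.combinations(range(n), r))
    Aplus = np.zeros((n, m), dtype=complex)
    for i in range(n):
        for j in range(m):
            M = gram.copy()
            M[:, i] = Astar[:, j]              # replace the i-th column by a*_{.j}
            Aplus[i, j] = sum(np.linalg.det(M[np.ix_(beta, beta)])
                              for beta in itertools.combinations(range(n), r)
                              if i in beta) / denom   # only minors containing index i
    return Aplus

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
A[:, 2] = A[:, 0] + A[:, 1]                    # force rank deficiency (rank 2)
print(np.allclose(mp_inverse_determinantal(A), np.linalg.pinv(A)))   # True
```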

The remainder of the chapter is organized as follows. In Section 2, we provide a new expression of the general solution to our system (7) and discuss its least-norm. The algorithm and a numerical example of finding the least-norm solution to (7) are presented in Section 3. Finally, in Section 4, the conclusions are drawn.


2. A new expression of the general solution to the system

Now we present the principal theorem of this section.

Theorem 2.1. Assume that $S_1=\begin{bmatrix}I_{p_1} & 0\end{bmatrix}$, $S_2=\begin{bmatrix}0 & I_{p_2}\end{bmatrix}$, $T_1=\begin{bmatrix}I_{q_1}\\ 0\end{bmatrix}$, $T_2=\begin{bmatrix}0\\ I_{q_2}\end{bmatrix}$, $G=\begin{bmatrix}A & C\end{bmatrix}$, $H=\begin{bmatrix}B\\ D\end{bmatrix}$, $H_1=L_{A_1}L_A$, $H_2=L_{A_1}S_1L_G$, $H_3=R_HT_1R_{B_1}$, $H_4=L_{A_2}L_C$, $H_5=L_{A_2}S_2L_G$, $H_6=R_HT_2R_{B_2}$, and that the system (7) is solvable. Then the general solution to our system can be written as

$X_1=A_1^{\dagger}C_1+L_{A_1}C_2B_1^{\dagger}+L_{A_1}A^{\dagger}EB^{\dagger}R_{B_1}-L_{A_1}A^{\dagger}CM^{\dagger}EB^{\dagger}R_{B_1}-L_{A_1}A^{\dagger}SC^{\dagger}EN^{\dagger}DB^{\dagger}R_{B_1}+H_1V_1R_{B_1}+H_2UH_3+L_{A_1}V_2R_BR_{B_1}$,  (18)
$X_2=A_2^{\dagger}C_3+L_{A_2}C_4B_2^{\dagger}+L_{A_2}M^{\dagger}R_AED^{\dagger}R_{B_2}+L_{A_2}L_MS^{\dagger}SC^{\dagger}EN^{\dagger}R_{B_2}+H_4W_1R_{B_2}-H_5UH_6+L_{A_2}W_2R_DR_{B_2}$,  (19)

where $U$, $V_1$, $V_2$, $W_1$, and $W_2$ are free matrices over $\mathbb{C}$ with allowable dimensions.

Proof. Our proof consists of three steps. In the first step, we show that matrices $X_1$ and $X_2$ of the forms

$X_1=\phi_0+H_1V_1R_{B_1}+L_{A_1}V_2R_BR_{B_1}+H_2UH_3$,  (20)
$X_2=\psi_0+H_4W_1R_{B_2}+L_{A_2}W_2R_DR_{B_2}-H_5UH_6$,  (21)

where $\phi_0$ and $\psi_0$ are any pair of particular solutions to the system (7) and $V_1$, $V_2$, $W_1$, $W_2$, and $U$ are free matrices of suitable shapes over $\mathbb{C}$, are solutions to the system (7). In the second step, we show that any pair of solutions $\mu_0$ and $\nu_0$ to the system (7) can be written in the forms (20) and (21), respectively. In the end, we confirm that

$\mu=A_1^{\dagger}C_1+L_{A_1}C_2B_1^{\dagger}+A^{\dagger}EB^{\dagger}-A^{\dagger}CM^{\dagger}EB^{\dagger}-A^{\dagger}SC^{\dagger}EN^{\dagger}DB^{\dagger}$, $\nu=A_2^{\dagger}C_3+L_{A_2}C_4B_2^{\dagger}+L_{A_2}M^{\dagger}R_AED^{\dagger}+L_{A_2}L_MS^{\dagger}SC^{\dagger}EN^{\dagger}R_{B_2}$

are a pair of particular solutions to the system (7).

Now we prove that a pair of matrices $X_1$ and $X_2$ having the forms (20) and (21), respectively, are solutions to the system (7). Observe that

$A_1^{\dagger}C_1B_1+L_{A_1}C_2B_1^{\dagger}B_1=A_1^{\dagger}A_1C_2+L_{A_1}C_2=C_2$, $\qquad A_2^{\dagger}C_3B_2+L_{A_2}C_4B_2^{\dagger}B_2=A_2^{\dagger}A_2C_4+L_{A_2}C_4=C_4$.

It is evident that $X_1$ of the form (20) is a solution of $A_1X_1=C_1$ and $X_1B_1=C_2$, and that $X_2$ of the form (21) is a solution of $A_2X_2=C_3$ and $X_2B_2=C_4$. It remains to show that $A_3X_1B_3+A_4X_2B_4=C_c$ is satisfied by $X_1$ and $X_2$ given in (20) and (21). By Lemma 1.4, we have

$AS_1L_G=A\begin{bmatrix}I_{p_1} & 0\end{bmatrix}\begin{bmatrix}L_A & -A^{\dagger}CL_M\\ 0 & L_M\end{bmatrix}=A\begin{bmatrix}L_A & -A^{\dagger}CL_M\end{bmatrix}=\begin{bmatrix}0 & -AA^{\dagger}CL_M\end{bmatrix}=\begin{bmatrix}0 & -(C-M)L_M\end{bmatrix}=\begin{bmatrix}0 & -CL_M\end{bmatrix}=\begin{bmatrix}0 & -S\end{bmatrix}=-CS_2L_G$,  (22)

and

$R_HT_1B=\begin{bmatrix}R_B & 0\\ -R_NDB^{\dagger} & R_N\end{bmatrix}\begin{bmatrix}I_{q_1}\\ 0\end{bmatrix}B=\begin{bmatrix}R_BB\\ -R_NDB^{\dagger}B\end{bmatrix}=\begin{bmatrix}0\\ -R_ND(I-L_B)\end{bmatrix}=\begin{bmatrix}0\\ -R_ND\end{bmatrix}=-R_HT_2D.$  (23)

Observing that $AL_A=0$, $R_BB=0$, $CL_C=0$, and $R_DD=0$, and using (22) and (23), we arrive at

$A_3X_1B_3+A_4X_2B_4=C_c.$

Conversely, assume that $\mu_0$ and $\nu_0$ are an arbitrary pair of solutions to our system (7). By Lemma 1.7, we have

$A_1A_1^{\dagger}C_1=C_1$, $C_2B_1^{\dagger}B_1=C_2$, $A_2A_2^{\dagger}C_3=C_3$, $C_4B_2^{\dagger}B_2=C_4$, $A_1C_2=C_1B_1$, $A_2C_4=C_3B_2$.

Observe that

$L_{A_1}\mu_0R_{B_1}=\left(I-A_1^{\dagger}A_1\right)\mu_0\left(I-B_1B_1^{\dagger}\right)=\mu_0-\mu_0B_1B_1^{\dagger}-A_1^{\dagger}A_1\mu_0+A_1^{\dagger}A_1\mu_0B_1B_1^{\dagger}=\mu_0-C_2B_1^{\dagger}-A_1^{\dagger}C_1+A_1^{\dagger}A_1C_2B_1^{\dagger}=\mu_0-L_{A_1}C_2B_1^{\dagger}-A_1^{\dagger}C_1$

produces

$\mu_0=L_{A_1}C_2B_1^{\dagger}+A_1^{\dagger}C_1+L_{A_1}\mu_0R_{B_1}.$  (24)

Along the same lines, we obtain

$\nu_0=L_{A_2}C_4B_2^{\dagger}+A_2^{\dagger}C_3+L_{A_2}\nu_0R_{B_2}.$  (25)

It follows from (24)–(25) that $L_{A_1}\mu_0R_{B_1}$ and $L_{A_2}\nu_0R_{B_2}$ form a solution pair of

$AX_1B+CX_2D=E,$  (26)

since

$A\left(L_{A_1}\mu_0R_{B_1}\right)B+C\left(L_{A_2}\nu_0R_{B_2}\right)D=A_3L_{A_1}\mu_0R_{B_1}B_3+A_4L_{A_2}\nu_0R_{B_2}B_4=A_3\left(\mu_0-L_{A_1}C_2B_1^{\dagger}-A_1^{\dagger}C_1\right)B_3+A_4\left(\nu_0-L_{A_2}C_4B_2^{\dagger}-A_2^{\dagger}C_3\right)B_4=A_3\mu_0B_3-A_3L_{A_1}C_2B_1^{\dagger}B_3-A_3A_1^{\dagger}C_1B_3+A_4\nu_0B_4-A_4L_{A_2}C_4B_2^{\dagger}B_4-A_4A_2^{\dagger}C_3B_4=A_3\mu_0B_3+A_4\nu_0B_4-AC_2B_1^{\dagger}B_3-A_3A_1^{\dagger}C_1B_3-CC_4B_2^{\dagger}B_4-A_4A_2^{\dagger}C_3B_4=C_c-AC_2B_1^{\dagger}B_3-A_3A_1^{\dagger}C_1B_3-CC_4B_2^{\dagger}B_4-A_4A_2^{\dagger}C_3B_4=E.$

Hence, by Lemma 1.6, $L_{A_1}\mu_0R_{B_1}$ and $L_{A_2}\nu_0R_{B_2}$ can be written as

$L_{A_1}\mu_0R_{B_1}=X_{01}+S_1L_GUR_HT_1+L_AV_1+V_2R_B,$  (27)
$L_{A_2}\nu_0R_{B_2}=X_{02}-S_2L_GUR_HT_2+L_CW_1+W_2R_D,$  (28)

where $X_{01}$ and $X_{02}$ are a pair of particular solutions to (26) and $U$, $V_1$, $V_2$, $W_1$, and $W_2$ are free matrices with agreeable dimensions. Using (27) and (28) in (24) and (25), respectively (and pre- and post-multiplying their right-hand sides by $L_{A_1}$, $R_{B_1}$ and $L_{A_2}$, $R_{B_2}$, which leaves them unchanged), we get

$\mu_0=X_{10}+H_2UH_3+H_1V_1R_{B_1}+L_{A_1}V_2R_BR_{B_1}$, $\nu_0=X_{20}-H_5UH_6+H_4W_1R_{B_2}+L_{A_2}W_2R_DR_{B_2}$,

where $X_{10}=A_1^{\dagger}C_1+L_{A_1}C_2B_1^{\dagger}+L_{A_1}X_{01}R_{B_1}$ and $X_{20}=A_2^{\dagger}C_3+L_{A_2}C_4B_2^{\dagger}+L_{A_2}X_{02}R_{B_2}$. It is evident that $X_{10}$ and $X_{20}$ are a pair of solutions to the system (7), so $\mu_0$ and $\nu_0$ can be represented in the forms (20) and (21), respectively. Lastly, by setting $U_1$, $V_1$, $W_1$, and $Z_1$ equal to zero in (15) and (16), we conclude that $\mu$ and $\nu$ are particular solutions to the system (7). Hence the expressions (18) and (19) represent the general solution to the system (7), and the proof is complete.

Remark 2.2. Due to Lemma 1.3, and taking into account that $L_{A_2}L_M=L_ML_{A_2}$, we have the following simplification of the solution pair to the system (7), identical for (15)–(16) and (18)–(19), when $U$, $U_1$, $V_1$, $V_2$, $Z_1$, $W_1$, and $W_2$ vanish:

$X_1=A_1^{\dagger}C_1+L_{A_1}C_2B_1^{\dagger}+A^{\dagger}EB^{\dagger}-A^{\dagger}A_4M^{\dagger}EB^{\dagger}-A^{\dagger}SC^{\dagger}EN^{\dagger}B_4B^{\dagger}$, $X_2=A_2^{\dagger}C_3+L_{A_2}C_4B_2^{\dagger}+M^{\dagger}ED^{\dagger}+S^{\dagger}SC^{\dagger}EN^{\dagger}$.

Comment 2.3. In Theorem 2.1 we have established a novel expression of the general solution to the system (7), different from the one derived in [25]. With the help of this new expression we can investigate the least-norm of the general solution, which cannot be studied with the expression given in [25]; this is one of the advantages of our new expression.

Now we discuss some special cases of our system.

If $B_1$, $B_2$, $C_2$, and $C_4$ vanish in Theorem 2.1, then we obtain the following conclusion.

Corollary 2.4. Denote $S_1=\begin{bmatrix}I_{p_1} & 0\end{bmatrix}$, $S_2=\begin{bmatrix}0 & I_{p_2}\end{bmatrix}$, $T_1=\begin{bmatrix}I_{q_1}\\ 0\end{bmatrix}$, $T_2=\begin{bmatrix}0\\ I_{q_2}\end{bmatrix}$, $G=\begin{bmatrix}A & C\end{bmatrix}$, $H=\begin{bmatrix}B_3\\ B_4\end{bmatrix}$, $H_1=L_{A_1}L_A$, $H_2=L_{A_1}S_1L_G$, $H_3=R_HT_1$, $H_4=L_{A_2}L_C$, $H_5=L_{A_2}S_2L_G$, $H_6=R_HT_2$, and suppose that the system (6) is solvable. Then the general solution to (6) can be written as

$X_1=A_1^{\dagger}C_1+A^{\dagger}EB_3^{\dagger}-A^{\dagger}A_4M^{\dagger}EB_3^{\dagger}-A^{\dagger}SC^{\dagger}EN^{\dagger}B_4B_3^{\dagger}+H_1Y_1+H_2VH_3+L_{A_1}Y_2R_{B_3}$, $X_2=A_2^{\dagger}C_3+M^{\dagger}EB_4^{\dagger}+S^{\dagger}SC^{\dagger}EN^{\dagger}+H_4Z_1-H_5VH_6+L_{A_2}Z_2R_{B_4}$,

where $A$, $C$, $N$, $M$, $S$ are the same as in Lemma 1.9, $E=C_c-A_3A_1^{\dagger}C_1B_3-A_4A_2^{\dagger}C_3B_4$, and $V$, $Y_1$, $Y_2$, $Z_1$, and $Z_2$ are free matrices over $\mathbb{C}$ of agreeable dimensions.

Comment 2.5. The above consequence is a chief result of [32].

If A2,B2,C3,A4,B4 and C4 vanish in our system (7), then we get the following outcome.

Corollary 2.6. Suppose that $A_1$, $B_1$, $C_1$, $C_2$, $A_3$, $B_3$, and $C_c$ are given. Then the general solution to system (5) is given by

$X_1=A_1^{\dagger}C_1+L_{A_1}C_2B_1^{\dagger}+\left(A_3L_{A_1}\right)^{\dagger}\left(C_c-A_3A_1^{\dagger}C_1B_3-A_3L_{A_1}C_2B_1^{\dagger}B_3\right)\left(R_{B_1}B_3\right)^{\dagger}+L_{A_1}L_{A_3L_{A_1}}W_1R_{B_1}+L_{A_1}W_2R_{R_{B_1}B_3}R_{B_1}$,

where $W_1$ and $W_2$ are arbitrary matrices over $\mathbb{C}$ with appropriate sizes.

We now investigate the least-norm of the solution to the system (7). From the definition of the Frobenius norm and [55], we can easily get the following result.

Lemma 2.7. Let $A$ and $B$ be matrices over $\mathbb{C}$ of suitable sizes. Then we have:

(1) for $A,B\in\mathbb{C}^{m\times n}$, $\ \|A+B\|^{2}=\|A\|^{2}+\|B\|^{2}+2\operatorname{Re}\operatorname{tr}\left(B^{*}A\right)$;

(2) for $A\in\mathbb{C}^{m\times n}$ and $B\in\mathbb{C}^{n\times m}$, $\ \operatorname{Re}\operatorname{tr}\left(AB\right)=\operatorname{Re}\operatorname{tr}\left(BA\right)$.
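A quick numerical sanity check of Lemma 2.7 with random complex matrices (Frobenius norm throughout; the data are arbitrary, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))

# (1): ||A + B||^2 = ||A||^2 + ||B||^2 + 2 Re tr(B* A)
lhs = np.linalg.norm(A + B) ** 2
rhs = np.linalg.norm(A) ** 2 + np.linalg.norm(B) ** 2 \
      + 2 * np.real(np.trace(B.conj().T @ A))
print(np.isclose(lhs, rhs))

# (2): Re tr(AB) = Re tr(BA) for A of size m x n and B of size n x m
C = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
print(np.isclose(np.real(np.trace(A @ C)), np.real(np.trace(C @ A))))
```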

Theorem 2.8. Assume that system (7) is solvable. Then the least-norm solution pair $X_1$ and $X_2$ of system (7) can be expressed as follows:

$X_{1\min}=A_1^{\dagger}C_1+L_{A_1}C_2B_1^{\dagger}+A^{\dagger}EB^{\dagger}-A^{\dagger}A_4M^{\dagger}EB^{\dagger}-A^{\dagger}SC^{\dagger}EN^{\dagger}B_4B^{\dagger}$,  (29)
$X_{2\min}=A_2^{\dagger}C_3+L_{A_2}C_4B_2^{\dagger}+M^{\dagger}ED^{\dagger}+S^{\dagger}SC^{\dagger}EN^{\dagger}$.  (30)

Proof. By Theorem 2.1 and Remark 2.2, and writing for brevity

$\Theta:=A_1^{\dagger}C_1+L_{A_1}C_2B_1^{\dagger}+A^{\dagger}EB^{\dagger}-A^{\dagger}A_4M^{\dagger}EB^{\dagger}-A^{\dagger}SC^{\dagger}EN^{\dagger}B_4B^{\dagger}$, $\Psi:=A_2^{\dagger}C_3+L_{A_2}C_4B_2^{\dagger}+M^{\dagger}ED^{\dagger}+S^{\dagger}SC^{\dagger}EN^{\dagger}$

for the right-hand sides of (29) and (30), the general solution to (7) can be written as

$X_1=\Theta+H_1V_1R_{B_1}+H_2UH_3+L_{A_1}V_2R_BR_{B_1}$, $X_2=\Psi+H_4W_1R_{B_2}-H_5UH_6+L_{A_2}W_2R_DR_{B_2}$,

where $U$, $V_1$, $V_2$, $W_1$, and $W_2$ are free matrices over $\mathbb{C}$ of suitable dimensions. By Lemma 2.7, the norm of $X_1$ satisfies

$\|X_1\|^{2}=\left\|\Theta+H_1V_1R_{B_1}+H_2UH_3+L_{A_1}V_2R_BR_{B_1}\right\|^{2}=\|\Theta\|^{2}+\left\|H_1V_1R_{B_1}+H_2UH_3+L_{A_1}V_2R_BR_{B_1}\right\|^{2}+J,$  (31)

where

$J=2\operatorname{Re}\operatorname{tr}\left[\left(H_1V_1R_{B_1}+H_2UH_3+L_{A_1}V_2R_BR_{B_1}\right)^{*}\Theta\right].$  (32)

Now we show that $J=0$. Applying Lemmas 1.3, 1.4, and 2.7, we have

$\operatorname{Re}\operatorname{tr}\left[\left(H_1V_1R_{B_1}\right)^{*}\Theta\right]=\operatorname{Re}\operatorname{tr}\left[R_{B_1}V_1^{*}H_1^{*}\Theta\right]=\operatorname{Re}\operatorname{tr}\left[R_{B_1}V_1^{*}L_AL_{A_1}\Theta\right]=\operatorname{Re}\operatorname{tr}\left[R_{B_1}V_1^{*}L_AL_{A_1}C_2B_1^{\dagger}\right]=\operatorname{Re}\operatorname{tr}\left[V_1^{*}L_AL_{A_1}C_2B_1^{\dagger}R_{B_1}\right]=0,$  (33)
$\operatorname{Re}\operatorname{tr}\left[\left(L_{A_1}V_2R_BR_{B_1}\right)^{*}\Theta\right]=\operatorname{Re}\operatorname{tr}\left[R_{B_1}R_BV_2^{*}L_{A_1}\Theta\right]=\operatorname{Re}\operatorname{tr}\left[R_{B_1}R_BV_2^{*}\left(L_{A_1}C_2B_1^{\dagger}+A^{\dagger}EB^{\dagger}-A^{\dagger}A_4M^{\dagger}EB^{\dagger}-A^{\dagger}SC^{\dagger}EN^{\dagger}B_4B^{\dagger}\right)\right]=\operatorname{Re}\operatorname{tr}\left[V_2^{*}\left(A^{\dagger}EB^{\dagger}-A^{\dagger}A_4M^{\dagger}EB^{\dagger}-A^{\dagger}SC^{\dagger}EN^{\dagger}B_4B^{\dagger}\right)R_B\right]=0,$  (34)
$\operatorname{Re}\operatorname{tr}\left[\left(H_2UH_3\right)^{*}\Theta\right]=\operatorname{Re}\operatorname{tr}\left[H_3^{*}U^{*}H_2^{*}\Theta\right]=\operatorname{Re}\operatorname{tr}\left[H_3^{*}U^{*}L_GS_1^{*}L_{A_1}\Theta\right]=\operatorname{Re}\operatorname{tr}\left[H_3^{*}U^{*}\begin{bmatrix}L_A & -A^{\dagger}CL_M\\ 0 & L_M\end{bmatrix}\begin{bmatrix}I_{p_1}\\ 0\end{bmatrix}L_{A_1}\Theta\right]=\operatorname{Re}\operatorname{tr}\left[H_3^{*}U^{*}\begin{bmatrix}L_AL_{A_1}C_2B_1^{\dagger}\\ 0\end{bmatrix}\right]=\operatorname{Re}\operatorname{tr}\left[R_{B_1}T_1^{*}R_HU^{*}\begin{bmatrix}L_AL_{A_1}C_2B_1^{\dagger}\\ 0\end{bmatrix}\right]=\operatorname{Re}\operatorname{tr}\left[T_1^{*}R_HU^{*}\begin{bmatrix}L_AL_{A_1}C_2B_1^{\dagger}R_{B_1}\\ 0\end{bmatrix}\right]=0.$  (35)

Using (33)–(35) in (32) produces $J=0$. Hence, by (31), $\|X_1\|^{2}=\|\Theta\|^{2}+\left\|H_1V_1R_{B_1}+H_2UH_3+L_{A_1}V_2R_BR_{B_1}\right\|^{2}\geq\|\Theta\|^{2}$, and the minimum is attained at $X_1=\Theta$, which gives (29). In the same way, one can prove that (30) holds. □

A special case of our system (7) is given below.

If $B_1$, $B_2$, $C_2$, and $C_4$ become zero matrices in Theorem 2.8, then we again obtain the principal result of [20].

Corollary 2.9. Assume that system (6) is solvable. Then the least-norm solution pair $X_1$ and $X_2$ of system (6) can be furnished as

$X_{1\min}=A_1^{\dagger}C_1+A^{\dagger}EB_3^{\dagger}-A^{\dagger}A_4M^{\dagger}EB_3^{\dagger}-A^{\dagger}SC^{\dagger}EN^{\dagger}B_4B_3^{\dagger}$, $X_{2\min}=A_2^{\dagger}C_3+M^{\dagger}EB_4^{\dagger}+S^{\dagger}SC^{\dagger}EN^{\dagger}$.

If A2,B2,C3,A4,B4 and C4 vanish in our system, then we get the next consequence.

Corollary 2.10. Suppose that $A_1$, $B_1$, $C_1$, $C_2$, $A_3$, $B_3$, and $C_c$ are given. Then the least-norm least squares solution to system (5) is given by

$X_{1\min}=A_1^{\dagger}C_1+L_{A_1}C_2B_1^{\dagger}+\left(A_3L_{A_1}\right)^{\dagger}\left(C_c-A_3A_1^{\dagger}C_1B_3-A_3L_{A_1}C_2B_1^{\dagger}B_3\right)\left(R_{B_1}B_3\right)^{\dagger}.$

Comment 2.11. Corollary 2.10 is the key result of [22].


3. Algorithm with example

In this section, we present the algorithm, induced by Theorem 2.8, for finding the least-norm solution to (7).

Algorithm 1.

  1. By Lemma 1.10, find the Moore–Penrose inverses $A_i^{\dagger}$ and $B_i^{\dagger}$ for $i=1,\dots,4$, and the projectors $R_{A_i}=I-A_iA_i^{\dagger}$, $L_{A_i}=I-A_i^{\dagger}A_i$, $R_{B_i}=I-B_iB_i^{\dagger}$, and $L_{B_i}=I-B_i^{\dagger}B_i$ for $i=1,2$.

  2. By Lemma 1.9, compute the matrices $A$, $B$, $C$, $D$, $M$, $N$, $S$, and $E$, and by Lemma 1.10 find their MP-inverses and orthogonal projectors where needed.

  3. Verify the consistency equalities (11) and (14). If these equalities hold, find the solution by the next step.

  4. Finally, by (29) and (30), compute the least-norm solution pair $X_1$ and $X_2$ (a NumPy sketch of these steps is given below).
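The following Python sketch mirrors these steps under stated assumptions: it uses numpy.linalg.pinv in place of the determinantal representations of Lemma 1.10, the helper names (pinv, proj_L, proj_R, least_norm_solution) are ours and not from the chapter, and the inputs are assumed to be NumPy arrays of conformable sizes.

```python
import numpy as np

def pinv(X):
    return np.linalg.pinv(X)

def proj_L(X):            # L_X = I - X^dagger X
    return np.eye(X.shape[1]) - pinv(X) @ X

def proj_R(X):            # R_X = I - X X^dagger
    return np.eye(X.shape[0]) - X @ pinv(X)

def least_norm_solution(A1, B1, C1, C2, A2, B2, C3, C4, A3, B3, A4, B4, Cc, tol=1e-9):
    """Steps of Algorithm 1, with numpy.linalg.pinv standing in for the
    determinantal representations of Lemma 1.10 (a sketch, not the chapter's code)."""
    # Step 1: projectors induced by A1, B1, A2, B2
    LA1, RB1 = proj_L(A1), proj_R(B1)
    LA2, RB2 = proj_L(A2), proj_R(B2)
    # Step 2: auxiliary matrices of Lemma 1.9
    A, B = A3 @ LA1, RB1 @ B3
    C, D = A4 @ LA2, RB2 @ B4
    N, M = D @ proj_L(B), proj_R(A) @ C
    S = C @ proj_L(M)
    E = (Cc - A3 @ pinv(A1) @ C1 @ B3 - A @ C2 @ pinv(B1) @ B3
            - A4 @ pinv(A2) @ C3 @ B4 - C @ C4 @ pinv(B2) @ B4)
    # Step 3: consistency conditions (11) and (14)
    conds = [proj_R(A1) @ C1, C2 @ proj_L(B1), A1 @ C2 - C1 @ B1,
             proj_R(A2) @ C3, C4 @ proj_L(B2), A2 @ C4 - C3 @ B2,
             proj_R(M) @ proj_R(A) @ E, proj_R(A) @ E @ proj_L(D),
             E @ proj_L(B) @ proj_L(N), proj_R(C) @ E @ proj_L(B)]
    if not all(np.linalg.norm(c) < tol for c in conds):
        raise ValueError("system (7) is not consistent for the given data")
    # Step 4: least-norm pair (29)-(30)
    X1 = (pinv(A1) @ C1 + LA1 @ C2 @ pinv(B1) + pinv(A) @ E @ pinv(B)
          - pinv(A) @ A4 @ pinv(M) @ E @ pinv(B)
          - pinv(A) @ S @ pinv(C) @ E @ pinv(N) @ B4 @ pinv(B))
    X2 = (pinv(A2) @ C3 + LA2 @ C4 @ pinv(B2) + pinv(M) @ E @ pinv(D)
          + pinv(S) @ S @ pinv(C) @ E @ pinv(N))
    return X1, X2
```

For exact (symbolic) results, such as those of Example 1 below, a computer algebra system is preferable; the chapter's own computations were performed in Maple 2021.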

The following example is solved using Algorithm 1. Note that our goal is both to confirm the correctness of the main results of Theorems 2.1 and 2.8 and to demonstrate the technique of applying the determinantal representations of the MP-inverse from Lemma 1.10 on a not too complicated and understandable example.

Example 1. Given the matrices:

A1=1+i1i1+i1i1+i1+i1i1i2i222i,B1=2i1i+3i13i1i13i1i1+3i,A2=i111ii1iii11,
B2=2i2i1i+12i+1i2i12i+1i2i1i+22i1i+1,C1=8i88i844i44i2+4i4+2i24i42i,
C2=11i44i11442222i+8888i11i44i+11442222i8888i,A3=5i+252i2+5i2i+52i55i+22i52+5i4i44i4,
B3=ii+21224i2i2i42i211+2ii,A4=2i33i+22i+3i1i33i3,C3=3i333i33i3i333i3i33i333i,B4=7ii2732i7ii2732i,C4=42i2+4i2+2i2+4i42i2+2i24i4+2i22i,
Cc=1211130502i1344+612i27981250i18086881398+834i29421538i1154946i1488+624i26541394i.E36

Let us find the solution to the system (7) with the matrices given above by Algorithm 1.

  1. Thanks to Lemma 1.10, we calculate the Moore-Penrose inverses. So,

    A1=1321i1i2i1+i1i21i1+i21+i1+i2i,B1=14411i11i1111394120i20+i7i1+i5+3i33i,A2=112i11i1ii11ii1,B2=1121ii1i11i1i1i1+i1+i,A3=1802i225i22i5+2i2i22+5i22i5+2i,B3=170i22i12+i2+4i4+2i12i12i2i,
    A4=1693+2ii32+3i13i32ii1,B4=179235i2135i2147i5147i515248i5248i.

    Then,

    A=122+5i52i1+8i12+9i5+2i2+5i8+i9+12i4i448i8+4i,B=1225231i10135i31+52i8+9i10+25i98i9+8i2510i8+9i3152i135+10i5231i,C=13113i97i6+4i13i3+i2i9+339i6,D=02i2022i02i2022i,N=174+4i42i104i44i2+4i4+10i44i4+2i10+4i4+4i24i410i,M=1342i42i2+2i2+4i24i22i000,S=0E=18419931108289i23650968427i10828919931i110417+16211i77995+79015i16211110417i74624+106424i138224+255672i10642474624i.

  2. We confirm that (11) and (14) hold for the given matrices.

  3. Finally, by (29) and (30), we find that the least-norm solution pair $X_1$ and $X_2$ to the system (7) is the following:

    X1=136576011103239+18670545i9851419+14002307i5154373+3862099i4697553+10234559i26688873+4258681i29888893+5510501i12048461+4721147i17746081+5177967i6556168+9656066i5321848+2196342i4452786+10360112i6757414+7845632i170492642930378i2630446411113378i102446983367816i736260913720296i,X2=113442052963i2331985i2159+3481i1465367i792+2565i1901205i317+445i221+317i171+585i146+28i8681714i146+2884i.

Note that Maple 2021 was used to perform the numerical experiment.


4. Conclusion

We have constructed a novel expression of the general solution to system (7) over $\mathbb{C}$ and used this result to explore the least-norm of the general solution to this system when it is solvable. Some particular cases of our system are also discussed; our results cover the principal results of [22, 32]. To give an algorithm for finding the explicit numerical expression of the least-norm solution, we used the determinantal representations of the MP-inverse recently obtained by one of the authors. The novelty of the conducted research consists in obtaining necessary and sufficient conditions for a solution to exist, its formal representation by a closed formula in terms of generalized inverses, and the construction of an algorithm to find its explicit expression. A numerical example is also given to illustrate the results established in this chapter.


Conflict of interest

The authors declare that they have no conflicts of interest.


Data Availability

The data used to support the findings of this study are included within the article titled “Solving and Algorithm for Least-Norm General Solution to Constrained Sylvester Matrix Equation”. The prior studies (and datasets) are cited at relevant places within the text.


Classification

2000 AMS subject classifications: 15A09, 15A15, 15A24.

References

  1. 1. Shahzad A, Jones BL, Kerrigan EC, Constantinides GA. An efficient algorithm for the solution of a coupled Sylvester equation appearing in descriptor systems. Automatica. 2011;47:244-248. DOI: 10.1016/j.automatica.2010.10.038
  2. 2. Syrmos VL, Lewis FL. Coupled and constrained Sylvester equations in system design. Circuits, Systems, and Signal Processing. 1994;13(6):663-694. DOI: 10.1007/BF02523122
  3. 3. Varga A. Robust pole assignment via Sylvester equation based state feedback parametrization: Computer-aided control system design (CACSD). IEEE International Symposium. 2000;57:13-18. DOI: 10.1109/CACSD.2000.900179
  4. 4. Syrmos VL, Lewis FL. Output feedback eigenstructure assignment using two Sylvester equations. IEEE Transaction on Automatic Control. 1993;38:495-499. DOI: 10.1109/9.210155
  5. 5. Li RC. A bound on the solution to a structured Sylvester equation with an application to relative perturbation theory. SIAM Journal on Matrix Analysis and Application. 1999;21(2):440-445
  6. 6. Darouach M. Solution to Sylvester equation associated to linear descriptor systems. Systems and Control Letters. 2006;55:835-838. DOI: 10.1016/j.sysconle.2006.04.004
  7. 7. Zhang YN, Jiang DC, Wang J. A recurrent neural network for solving Sylvester equation with time-varying coefficients. IEEE Transcation on Neural Networks. 2002;13(5):1053-1063. DOI: 10.1109/TNN.2002.1031938
  8. 8. Terán FD, Dopico FM. The solution of the equation $XA+AX^{T}=0$ and its application to the theory of orbits. Linear Algebra and its Application. 2011;434:44-67. DOI: 10.1016/j.laa.2010.08.005
  9. 9. Dehghan M, Hajarian M. An efficient iterative method for solving the second-order Sylvester matrix equation EVF2AVFCV=BW. IET Control Theory and Applications. 2009;3:1401-1408. DOI: 10.1049/iet-cta.2008.0450
  10. 10. Ding F, Chen T. Gradient based iterative algorithms for solving a class of matrix equations. IEEE Transaction on Automatic Control. 2005;50(8):1216-1221. DOI: 10.1109/TAC.2005.852558
  11. 11. Dmytryshyn A, Futorny V, Klymchuk T, Sergeichuk VV. Generalization of Roth’s solvability criteria to systems of matrix equations. Linear Algebra and its Application. 2017;527:294-302. DOI: 10.1016/j.laa.2017.04.011
  12. 12. He ZH, Wang QW. A real quaternion matrix equation with applications. Linear and Multilinear Algebra. 2013;61(6):725-740. DOI: 10.1080/03081087.2012.703192
  13. 13. He ZH, Wang QW. The η-bihermitian solution to a system of real quaternion matrix equation. Linear and Multilinear Algebra. 2014;62(11):1509-1528. DOI: 10.1080/03081087.2013.839667
  14. 14. Kyrchei II. Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations. Linear Algebra and its Applications. 2013;438(1):136-152. DOI: 10.1016/j.laa.2012.07.049
  15. 15. Kyrchei II. Explicit determinantal representation formulas for the solution of the two-sided restricted quaternionic matrix equation. Journal of Applied Mathematics and Computing. 2018;58(1–2):335-365. DOI: 10.1007/s12190-017-1148-6
  16. 16. Rehman A, Wang QW. A system of matrix equations with five variables. Applied Mathematics and Computation. 2015;271:805-819. DOI: 10.1016/j.amc.2015.09.066
  17. 17. Rehman A, Wang QW, Ali I, Akram M, Ahmad MO. A constraint system of generalized Sylvester quaternion matrix equations. Adv. Appl. Clifford Algebr. 2017;27(4):3183-3196. DOI: 10.1007/s00006-017-0803-1
  18. 18. Rehman A, Wang QW, He ZH. Solution to a system of real quaternion matrix equations encompassing η-Hermicity. Applied Mathematics and Computation. 2015;265:945-957. DOI: 10.1016/j.amc.2015.05.104
  19. 19. Rehman A, Akram M. Optimization of a nonlinear hermitian matrix expression with application. Univerzitet u Nišu. 2017;31(9):2805-2819. DOI: 10.2298/FIL1709805R
  20. 20. Wang QW, Qin F, Lin CY. The common solution to matrix equations over a regular ring with applications. Indian Journal of Pure and Applied Mathematics. 2005;36(12):655-672
  21. 21. Wang QW, Rehman A, He ZH, Zhang Y. Constraint generalized Sylvester matrix equations. Automatica. 2016;69:60-64. DOI: 10.1016/j.automatica.2016.02.024
  22. 22. Bao Y. Least-norm and extremal ranks of the Least Square solution to the quaternion matrix equation AXB=C subject to two equations. Algebra Colloq. 2014;21(3):449-460. DOI: 10.1142/S100538671400039X
  23. 23. Wang QW, Chang HX, Lin CY. P-(skew)symmetric common solutions to a pair of quaternion matrix equations. Applied Mathematics and Computation. 2008;195:721-732. DOI: 10.1016/j.amc.2007.05.021
  24. 24. Li H, Gao Z, Zhao D. Least squares solutions of the matrix equation AXB+CYD=E with the least norm for symmetric arrowhead matrices. Applied Mathematics and Computation. 2014;226:719-724. DOI: 10.1016/j.amc.2013.10.065
  25. 25. Wang QW, van der Woude JW, Chang HX. A system of real quaternion matrix equations with applications. Linear Algebra and its Application. 2009;431(12):2291-2303. DOI: 10.1016/j.laa.2009.02.010
  26. 26. Peng YG, Wang X. A finite iterative algorithm for solving the least-norm generalized PQ reflexive solution of the matrix equations AiXBi=Ci. Journal of Computational Analysis and Applications. 2014;17(3):547-561
  27. 27. Yuan S, Liao A. Least squares Hermitian solution of the complex matrix equation AXB+CXD=E with the least norm. Journal of Franklin Institute. 2014;351(11):4978-4997. DOI: 10.1016/j.jfranklin.2014.08.003
  28. 28. Trench WF. Minimization problems for RS-symmetric and RS-skew symmetric matrices. Linear Algebra and its Applications. 2004;389:23-31. DOI: 10.1016/j.laa.2004.03.035
  29. 29. Trench WF. Characterization and properties of matrices with generalized symmetry or skew symmetry. Linear Algebra and its Applications. 2004;377:207-218. DOI: 10.1016/j.laa.2003.07.013
  30. 30. Trench WF. Characterization and properties of (R,S)-symmetric, (R,S)-skew symmetric and (R,S)-conjugate matrices. SIAM Journal on Matrix Analysis and Application. 2005;26:748-757. DOI: 10.1137/S089547980343134X
  31. 31. Marsaglia G, Styan GPH. Equalities and inequalities for ranks of matrices. Linear and Multilinear Algebra. 1974;2:269-292. DOI: 10.1080/03081087408817070
  32. 32. Wang QW, Li CK. Ranks and the least-norm of the general solution to a system of quaternion matrix equations. Linear Algebra and its Application. 2009;430:1626-1640. DOI: 10.1016/j.laa.2008.05.031
  33. 33. Wang QW, Chang HX, Ning Q. The common solution to six quaternion matrix equations with applications. Applied Mathematics and Computation. 2008;198:209-226. DOI: 10.1016/j.amc.2007.08.091
  34. 34. Tian Y. Solvability of two linear matrix equations. Linear and Multilinear Algebra. 2000;48:123-147. DOI: 10.1080/03081080008818664
  35. 35. Wang QW, Wu ZC, Lin CY. Extremal ranks of a quaternion matrix expression subject to consistent systems of quaternion matrix equations with applications. Applied Mathematics and Computation. 2006;182:1755-1764. DOI: 10.1016/j.amc.2006.06.012
  36. 36. Wang QW. A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity. Linear Algebra and its Application. 2004;384:43-54. DOI: 10.1016/j.laa.2003.12.039
  37. 37. Wensheng C. Solvability of a quaternion matrix equation. Applied Mathematics, Journal of Chinese Universities, Serie B. 2002;17(4):490-498. DOI: 10.1007/s11766-996-0015-2
  38. 38. Artidiello S, Cordero A, Torregrosa JR, Vassileva MP. Generalized inverses estimations by means of iterative methods with memory. Mathematics. 2019;8:2. DOI: 10.3390/math8010002
  39. 39. Guo W, Huang T. Method of elementary transformation to compute Moore–Penrose inverse. Applied Mathematics and Computation. 2010;216:1614-1617. DOI: 10.1016/j.amc.2010.03.016
  40. 40. Sayevand K, Pourdarvish A, Machado JAT, Erfanifar R. On the calculation of the Moore-Penrose and Drazin inverses: Application to fractional calculus. Mathematics. 2021;9:2501. DOI: 10.3390/math9192501
  41. 41. Bapat RB, Bhaskara KPS, Prasad KM. Generalized inverses over integral domains. Linear Algebra and its Applications. 1990;140:181-196. DOI: 10.1016/0024-3795(90)90229-6
  42. 42. Stanimirovic PS. General determinantal representation of pseudoinverses of matrices. Matematichki Vesnik. 1996;48:1-9
  43. 43. Kyrchei II. Analogs of the adjoint matrix for generalized inverses and corresponding Cramer rules. Linear and Multilinear Algebra. 2008;56(4):453-469. DOI: 10.1080/03081080701352856
  44. 44. Kyrchei II. Determinantal representation of the Moore-Penrose inverse matrix over the quaternion skew field. Journal of Mathematical Sciences. 2012;108(1):23-33. DOI: 10.1007/s10958-011-0626-x
  45. 45. Kyrchei II. Determinantal representations of the Drazin and W-weighted Drazin inverses over the quaternion skew field with applications. In: Griffin S, editor. Quaternions: Theory and Applications. New York: Nova Sci Publ; 2017. pp. 201-275
  46. 46. Kyrchei II. Determinantal representations of the quaternion weighted Moore-Penrose inverse and its applications. In: Baswell AR, editor. Advances in Mathematics Research 23. New York: Nova Sci Publ; 2017. pp. 35-96
  47. 47. Kyrchei II. Cramer’s rule for generalized inverse solutions. In: Kyrchei II, editor. Advances in Linear Algebra Research. New York: Nova Sci Publ; 2015. pp. 79-132
  48. 48. Kyrchei II. Analogs of Cramer’s rule for the minimum norm least squares solutions of some matrix equations. Applied Mathematics and Computation. 2012;218(11):6375-6384. DOI: 10.1016/j.amc.2011.12.004
  49. 49. Kyrchei II. Determinantal representations of solutions and hermitian solutions to some system of two-sided quaternion matrix equations. Journal of Mathematics. 2018;2018:6294672. DOI: 10.1155/2018/6294672
  50. 50. Kyrchei II. Cramer’s rules of η-(skew-)Hermitian solutions to the quaternion Sylvester-type matrix equations. Adv. Appl. Clifford Algebr. 2019;29(3):56. DOI: 10.1007/s00006-019-0972-1
  51. 51. Kyrchei II. Determinantal representations of solutions to systems of two-sided quaternion matrix equations. Linear and Multilinear Algebra. 2021;69(4):648-672. DOI: 10.1080/03081087.2019.1614517
  52. 52. Rehman A, Kyrchei II, Ali I, Akram M, Shakoor A. The general solution of quaternion matrix equation having η-skew-Hermicity and its Cramer’s rule. Mathematical Problems in Engineering. 2018;2018:7939238. DOI: 10.1155/2019/7939238
  53. 53. Rehman A, Kyrchei II, Ali I, Akram M, Shakoor A. Explicit formulas and determinantal representation for η-skew-Hermitian solution to a system of quaternion matrix equations. Univerzitet u Nišu. 2020;34(8):2601-2627. DOI: 10.2298/FIL2008601R
  54. 54. Rehman A, Kyrchei II, Ali I, Akram M, Shakoor A. Constraint solution of a classical system of quaternion matrix equations and its Cramer’s rule. Iranian Journal of Science and Technology, Transactions A: Science. 2021;45(3):1015-1024. DOI: 10.1007/s40995-021-01083-7
  55. 55. Tian Y. Equalities and inequalities for traces of quaternionic matrices. Algebras Groups Geometry. 2002;19(2):181-193
