
Cramer’s Rules for the System of Two-Sided Matrix Equations and of Its Special Cases

By Ivan I. Kyrchei

Submitted: October 12th, 2017 · Reviewed: January 16th, 2018 · Published: August 29th, 2018

DOI: 10.5772/intechopen.74105


Abstract

Within the framework of the theory of row-column determinants previously introduced by the author, we obtain determinantal representations (analogs of Cramer's rule) of a partial solution to the system of two-sided quaternion matrix equations $A_1XB_1=C_1$, $A_2XB_2=C_2$. We also give Cramer's rules for its special cases in which the first equation is one-sided. Namely, we consider the two systems with the first equation $A_1X=C_1$ and $XB_1=C_1$, respectively, and with an unchanged second equation. Cramer's rules for the special cases in which both equations are one-sided, namely the system $A_1X=C_1$, $XB_2=C_2$ and the system $A_1X=C_1$, $A_2X=C_2$, are studied as well. Since the Moore-Penrose inverse is a necessary tool for solving matrix equations, we use its determinantal representations previously obtained by the author in terms of row-column determinants as well.

Keywords

  • Moore-Penrose inverse
  • quaternion matrix
  • Cramer rule
  • system of matrix equations
  • 2000 AMS subject classifications: 15A15, 16W10

1. Introduction

The study of matrix equations and systems of matrix equations is an active research topic in matrix theory and its applications. The system of classical two-sided matrix equations

$$A_1XB_1=C_1,\qquad A_2XB_2=C_2\tag{1}$$

over the complex field, a principal ideal domain, and the quaternion skew field has been studied by many authors (see, e.g. [1, 2, 3, 4, 5, 6, 7]). Mitra [1] gave necessary and sufficient conditions for the solvability of the system (1) over the complex field and an expression for its general solution. Navarra et al. [6] derived a new necessary and sufficient condition for the existence of a solution to (1) over the complex field and used it to give a simple representation of the general solution. Wang [7] considered the system (1) over the quaternion skew field and obtained its solvability conditions and a representation of its general solution.

Throughout the chapter, we denote the real number field by $\mathbb{R}$, the set of all $m\times n$ matrices over the quaternion algebra

$$\mathbb{H}=\{a_0+a_1i+a_2j+a_3k\mid i^2=j^2=k^2=-1,\; a_0,a_1,a_2,a_3\in\mathbb{R}\}$$

by $\mathbb{H}^{m\times n}$, and by $\mathbb{H}_r^{m\times n}$ the set of all matrices over $\mathbb{H}$ with rank $r$. For $A\in\mathbb{H}^{m\times n}$, the symbol $A^{*}$ stands for the conjugate transpose (Hermitian adjoint) of $A$. The matrix $A=(a_{ij})\in\mathbb{H}^{n\times n}$ is Hermitian if $A^{*}=A$.

Generalized inverses are useful tools for solving matrix equations. The definition of the Moore-Penrose inverse has been extended to quaternion matrices as follows. The Moore-Penrose inverse of $A\in\mathbb{H}^{m\times n}$, denoted by $A^{\dagger}$, is the unique matrix $X\in\mathbb{H}^{n\times m}$ satisfying (1) $AXA=A$, (2) $XAX=X$, (3) $(AX)^{*}=AX$, and (4) $(XA)^{*}=XA$.
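
Although the chapter develops determinantal formulas, the Moore-Penrose inverse of a quaternion matrix can also be computed numerically. The sketch below (added here, not part of the chapter) writes $A=A_1+A_2j$ with complex $A_1,A_2$, passes to the complex adjoint matrix $\chi(A)$, and uses the fact that $\chi$ preserves products and conjugate transposes, so that $\chi(A^{\dagger})=\chi(A)^{\dagger}$; the function names are illustrative assumptions.

```python
import numpy as np

def chi(A1, A2):
    """Complex adjoint of the quaternion matrix A = A1 + A2*j (A1, A2 complex)."""
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

def quat_pinv(A1, A2):
    """Moore-Penrose inverse of A = A1 + A2*j, returned as (X1, X2) with
    A^+ = X1 + X2*j; chi(A^+) = chi(A)^+ because chi preserves products,
    conjugate transposes, and hence the four Penrose equations."""
    m, n = A1.shape
    X = np.linalg.pinv(chi(A1, A2))      # a (2n) x (2m) complex matrix
    return X[:n, :m], X[:n, m:]

# quick self-check of the Penrose equation A X A = A in the adjoint picture
rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((3, 2)), rng.standard_normal((3, 2))
X1, X2 = quat_pinv(A1, A2)
assert np.allclose(chi(A1, A2) @ chi(X1, X2) @ chi(A1, A2), chi(A1, A2))
```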

The determinantal representation of the usual inverse is the matrix of cofactors (the classical adjoint), which gives a direct method for finding the inverse and makes it applicable, through Cramer's rule, to systems of linear equations. The same is desirable for generalized inverses, but even for complex or real matrices such representations are not unambiguous. Therefore, various determinantal representations of generalized inverses have been proposed in the search for more applicable explicit expressions (see, e.g. [8]). Owing to the noncommutativity of the quaternion algebra, difficulties arise already in defining a quaternion determinant (see, e.g. [9, 10, 11, 12, 13, 14, 15, 16]).

The problem of the determinantal representation of the inverse, as well as of generalized inverses, of quaternion matrices has begun to be resolved only recently, owing to the theory of column-row determinants introduced in [17, 18]. Within the framework of this theory, determinantal representations of various kinds of generalized inverses and of (generalized-inverse) solutions of quaternion matrix equations have been derived by the author (see, e.g. [19, 20, 21, 22, 23, 24, 25]) and by other researchers (see, e.g. [26, 27, 28, 29]).

The main goal of the chapter is to derive determinantal representations (analogs of the classical Cramer rule) of solutions of the system (1) and of its special cases over the quaternion skew field.

The chapter is organized as follows. In Section 2, we start with preliminaries: the definitions of row-column determinants and the determinantal representations of the Moore-Penrose inverse and of the solution (Cramer's rule) of the quaternion matrix equation $AXB=C$. Determinantal representations of a partial solution (an analog of Cramer's rule) of the system (1) are derived in Section 3. In Section 4, we give Cramer's rules for special cases of (1) with one or two one-sided equations. Finally, the conclusion is drawn in Section 5.

2. Preliminaries

For $A=(a_{ij})\in M(n,\mathbb{H})$, we define $n$ row determinants and $n$ column determinants as follows. Suppose $S_n$ is the symmetric group on the set $I_n=\{1,\ldots,n\}$.

Definition 2.1. The $i$th row determinant of $A=(a_{ij})\in M(n,\mathbb{H})$ is defined for all $i=1,\ldots,n$ by putting

$$\operatorname{rdet}_i A=\sum_{\sigma\in S_n}(-1)^{n-r}\,a_{i\,i_{k_1}}a_{i_{k_1}\,i_{k_1+1}}\cdots a_{i_{k_1+l_1}\,i}\cdots a_{i_{k_r}\,i_{k_r+1}}\cdots a_{i_{k_r+l_r}\,i_{k_r}},$$

$$\sigma=\bigl(i\,i_{k_1}i_{k_1+1}\cdots i_{k_1+l_1}\bigr)\bigl(i_{k_2}i_{k_2+1}\cdots i_{k_2+l_2}\bigr)\cdots\bigl(i_{k_r}i_{k_r+1}\cdots i_{k_r+l_r}\bigr),$$

with conditions $i_{k_2}<i_{k_3}<\cdots<i_{k_r}$ and $i_{k_t}<i_{k_t+s}$ for all $t=2,\ldots,r$ and all $s=1,\ldots,l_t$.

Definition 2.2. The $j$th column determinant of $A=(a_{ij})\in M(n,\mathbb{H})$ is defined for all $j=1,\ldots,n$ by putting

$$\operatorname{cdet}_j A=\sum_{\tau\in S_n}(-1)^{n-r}\,a_{j_{k_r}\,j_{k_r+l_r}}\cdots a_{j_{k_r+1}\,j_{k_r}}\cdots a_{j\,j_{k_1+l_1}}\cdots a_{j_{k_1+1}\,j_{k_1}}a_{j_{k_1}\,j},$$

$$\tau=\bigl(j_{k_r+l_r}\cdots j_{k_r+1}j_{k_r}\bigr)\cdots\bigl(j_{k_2+l_2}\cdots j_{k_2+1}j_{k_2}\bigr)\bigl(j_{k_1+l_1}\cdots j_{k_1+1}j_{k_1}\,j\bigr),$$

with conditions $j_{k_2}<j_{k_3}<\cdots<j_{k_r}$ and $j_{k_t}<j_{k_t+s}$ for $t=2,\ldots,r$ and $s=1,\ldots,l_t$.
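
As an illustration (an example added here, not from the original text), for $n=3$ Definition 2.1 gives, for the first row,

$$\operatorname{rdet}_1A=a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{32}a_{21}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33}-a_{13}a_{31}a_{22}.$$

Because multiplication in $\mathbb{H}$ is noncommutative, the order of the factors inside each monomial is essential and is fixed by the cycle notation above.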

Since $\operatorname{rdet}_1A=\cdots=\operatorname{rdet}_nA=\operatorname{cdet}_1A=\cdots=\operatorname{cdet}_nA\in\mathbb{R}$ for a Hermitian $A\in\mathbb{H}^{n\times n}$, we can define the determinant of a Hermitian matrix $A$ by putting $\det A:=\operatorname{rdet}_iA=\operatorname{cdet}_iA$ for all $i=1,\ldots,n$. The determinant of a Hermitian matrix has properties similar to those of a usual determinant; they are completely explored in [17, 18] by means of its row and column determinants. In particular, within the framework of the theory of column-row determinants, determinantal representations of the inverse matrix over $\mathbb{H}$ by analogs of the classical adjoint matrix and Cramer's rule for quaternionic systems of linear equations have been derived. Further, we consider determinantal representations of the Moore-Penrose inverse.

We shall use the following notation. Let $\alpha:=\{\alpha_1,\ldots,\alpha_k\}\subseteq\{1,\ldots,m\}$ and $\beta:=\{\beta_1,\ldots,\beta_k\}\subseteq\{1,\ldots,n\}$ be subsets of order $1\le k\le\min\{m,n\}$. $A^{\alpha}_{\beta}$ denotes the submatrix of $A\in\mathbb{H}^{m\times n}$ determined by the rows indexed by $\alpha$ and the columns indexed by $\beta$. Then, $A^{\alpha}_{\alpha}$ denotes the principal submatrix determined by the rows and columns indexed by $\alpha$. If $A\in\mathbb{H}^{n\times n}$ is Hermitian, then $\bigl|A^{\alpha}_{\alpha}\bigr|$ is the corresponding principal minor of $\det A$. For $1\le k\le n$, the collection of strictly increasing sequences of $k$ integers chosen from $\{1,\ldots,n\}$ is denoted by $L_{k,n}:=\{\alpha:\alpha=(\alpha_1,\ldots,\alpha_k),\,1\le\alpha_1<\cdots<\alpha_k\le n\}$. For fixed $i\in\alpha$ and $j\in\beta$, let $I_{r,m}\{i\}:=\{\alpha:\alpha\in L_{r,m},\,i\in\alpha\}$ and $J_{r,n}\{j\}:=\{\beta:\beta\in L_{r,n},\,j\in\beta\}$.

Let $a_{.j}$ be the $j$th column and $a_{i.}$ be the $i$th row of $A$. Suppose $A_{.j}(b)$ denotes the matrix obtained from $A$ by replacing its $j$th column with the column $b$, and $A_{i.}(b)$ denotes the matrix obtained from $A$ by replacing its $i$th row with the row $b$. By $a^{*}_{.j}$ and $a^{*}_{i.}$ we denote the $j$th column and the $i$th row of $A^{*}$, respectively.

The following theorem gives determinantal representations of the Moore-Penrose inverse over the quaternion skew field H.

Theorem 2.1. [19] If $A\in\mathbb{H}_r^{m\times n}$, then the Moore-Penrose inverse $A^{\dagger}=\bigl(a^{\dagger}_{ij}\bigr)\in\mathbb{H}^{n\times m}$ possesses the following determinantal representations:

$$a^{\dagger}_{ij}=\frac{\sum_{\beta\in J_{r,n}\{i\}}\operatorname{cdet}_i\bigl((A^{*}A)_{.i}(a^{*}_{.j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r,n}}\bigl|A^{*}A\bigr|^{\beta}_{\beta}},\tag{2}$$

or

$$a^{\dagger}_{ij}=\frac{\sum_{\alpha\in I_{r,m}\{j\}}\operatorname{rdet}_j\bigl((AA^{*})_{j.}(a^{*}_{i.})\bigr)^{\alpha}_{\alpha}}{\sum_{\alpha\in I_{r,m}}\bigl|AA^{*}\bigr|^{\alpha}_{\alpha}}.\tag{3}$$

Remark 2.1. Note that for an arbitrary full-rank matrix $A\in\mathbb{H}_r^{m\times n}$, a column vector $d_{.j}$, and a row vector $d_{i.}$ of appropriate sizes, we put

$$\operatorname{cdet}_i\bigl((A^{*}A)_{.i}(d_{.j})\bigr)=\sum_{\beta\in J_{n,n}\{i\}}\operatorname{cdet}_i\bigl((A^{*}A)_{.i}(d_{.j})\bigr)^{\beta}_{\beta},\qquad \det\bigl(A^{*}A\bigr)=\sum_{\beta\in J_{n,n}}\bigl|A^{*}A\bigr|^{\beta}_{\beta}\quad\text{when }r=n,$$

$$\operatorname{rdet}_j\bigl((AA^{*})_{j.}(d_{i.})\bigr)=\sum_{\alpha\in I_{m,m}\{j\}}\operatorname{rdet}_j\bigl((AA^{*})_{j.}(d_{i.})\bigr)^{\alpha}_{\alpha},\qquad \det\bigl(AA^{*}\bigr)=\sum_{\alpha\in I_{m,m}}\bigl|AA^{*}\bigr|^{\alpha}_{\alpha}\quad\text{when }r=m.$$

Furthermore, $P_A=A^{\dagger}A$, $Q_A=AA^{\dagger}$, $L_A=I-A^{\dagger}A$, and $R_A=I-AA^{\dagger}$ stand for the orthogonal projectors induced by $A$.

Theorem 2.2. [30] Let $A\in\mathbb{H}^{m\times n}$, $B\in\mathbb{H}^{r\times s}$, and $C\in\mathbb{H}^{m\times s}$ be known and $X\in\mathbb{H}^{n\times r}$ be unknown. Then, the matrix equation

$$AXB=C\tag{4}$$

is consistent if and only if $AA^{\dagger}CB^{\dagger}B=C$. In this case, its general solution can be expressed as

$$X=A^{\dagger}CB^{\dagger}+L_AV+WR_B,\tag{5}$$

where $V$ and $W$ are arbitrary matrices over $\mathbb{H}$ with appropriate dimensions.
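
Theorem 2.2 is easy to check numerically. The following minimal sketch (added here; complex/real matrices for brevity, since they embed in $\mathbb{H}$, and with illustrative variable names) builds a consistent equation, tests the solvability criterion, and verifies that (5) solves it for arbitrary $V$ and $W$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, s = 3, 4, 5, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((r, s))
X_true = rng.standard_normal((n, r))
C = A @ X_true @ B                              # guarantees consistency

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

# solvability test of Theorem 2.2: A A^+ C B^+ B = C
assert np.allclose(A @ Ap @ C @ Bp @ B, C)

# partial solution and the general solution (5) with arbitrary V, W
X0 = Ap @ C @ Bp
L_A = np.eye(n) - Ap @ A
R_B = np.eye(r) - B @ Bp
V, W = rng.standard_normal((n, r)), rng.standard_normal((n, r))
assert np.allclose(A @ (X0 + L_A @ V + W @ R_B) @ B, C)
```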

The partial solution $X_0=A^{\dagger}CB^{\dagger}$ of (4) possesses the following determinantal representations.

Theorem 2.3. [20] Let $A\in\mathbb{H}_{r_1}^{m\times n}$ and $B\in\mathbb{H}_{r_2}^{r\times s}$. Then, $X_0=\bigl(x^{0}_{ij}\bigr)\in\mathbb{H}^{n\times r}$ has the determinantal representations

$$x^{0}_{ij}=\frac{\sum_{\beta\in J_{r_1,n}\{i\}}\operatorname{cdet}_i\bigl((A^{*}A)_{.i}(d^{B}_{.j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_1,n}}\bigl|A^{*}A\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_2,r}}\bigl|BB^{*}\bigr|^{\alpha}_{\alpha}},$$

or

$$x^{0}_{ij}=\frac{\sum_{\alpha\in I_{r_2,r}\{j\}}\operatorname{rdet}_j\bigl((BB^{*})_{j.}(d^{A}_{i.})\bigr)^{\alpha}_{\alpha}}{\sum_{\beta\in J_{r_1,n}}\bigl|A^{*}A\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_2,r}}\bigl|BB^{*}\bigr|^{\alpha}_{\alpha}},$$

where

$$d^{B}_{.j}=\Bigl[\sum_{\alpha\in I_{r_2,r}\{j\}}\operatorname{rdet}_j\bigl((BB^{*})_{j.}(\tilde{c}_{k.})\bigr)^{\alpha}_{\alpha}\Bigr]\in\mathbb{H}^{n\times1},\quad k=1,\ldots,n,$$

$$d^{A}_{i.}=\Bigl[\sum_{\beta\in J_{r_1,n}\{i\}}\operatorname{cdet}_i\bigl((A^{*}A)_{.i}(\tilde{c}_{.l})\bigr)^{\beta}_{\beta}\Bigr]\in\mathbb{H}^{1\times r},\quad l=1,\ldots,r,$$

are the column vector and the row vector, respectively. Here, $\tilde{c}_{k.}$ and $\tilde{c}_{.l}$ are the $k$th row and the $l$th column of $\tilde{C}=A^{*}CB^{*}$.
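
As an added illustration (not in the original), the next sketch implements the first representation of Theorem 2.3 in the complex case, where $\operatorname{cdet}_i$ and $\operatorname{rdet}_j$ reduce to the ordinary determinant, and compares one entry with $A^{\dagger}CB^{\dagger}$ computed directly.

```python
import numpy as np
from itertools import combinations

def sum_principal_minors(M, r):
    """Sum of all r x r principal minors of the square matrix M."""
    n = M.shape[0]
    return sum(np.linalg.det(M[np.ix_(b, b)]) for b in combinations(range(n), r))

def sum_minors_col_replaced(M, i, col, r):
    """Sum over index sets of size r containing i of the principal minors of M
    with its i-th column replaced by `col`."""
    n = M.shape[0]
    Mi = M.copy(); Mi[:, i] = col
    return sum(np.linalg.det(Mi[np.ix_(b, b)])
               for b in combinations(range(n), r) if i in b)

def sum_minors_row_replaced(M, j, row, r):
    """Same as above, but with the j-th row of M replaced by `row`."""
    m = M.shape[0]
    Mj = M.copy(); Mj[j, :] = row
    return sum(np.linalg.det(Mj[np.ix_(a, a)])
               for a in combinations(range(m), r) if j in a)

rng = np.random.default_rng(1)
m, n, r, s = 4, 3, 5, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((r, s))
C = rng.standard_normal((m, s))
r1, r2 = np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)

AsA, BBs = A.conj().T @ A, B @ B.conj().T
Ct = A.conj().T @ C @ B.conj().T               # C~ = A* C B*
den = sum_principal_minors(AsA, r1) * sum_principal_minors(BBs, r2)

i, j = 1, 2
d_j = np.array([sum_minors_row_replaced(BBs, j, Ct[k, :], r2) for k in range(n)])
x_ij = sum_minors_col_replaced(AsA, i, d_j, r1) / den

X0 = np.linalg.pinv(A) @ C @ np.linalg.pinv(B)
print(abs(x_ij - X0[i, j]))                    # should be ~ 0 up to rounding
```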

3. Determinantal representations of a partial solution to the system (1)

Lemma 3.1. [7] Let $A_1\in\mathbb{H}^{m\times n}$, $B_1\in\mathbb{H}^{r\times s}$, $C_1\in\mathbb{H}^{m\times s}$, $A_2\in\mathbb{H}^{k\times n}$, $B_2\in\mathbb{H}^{r\times p}$, and $C_2\in\mathbb{H}^{k\times p}$ be given, and let $X\in\mathbb{H}^{n\times r}$ be unknown. Put $H=A_2L_{A_1}$, $N=R_{B_1}B_2$, $T=R_HA_2$, and $F=B_2L_N$. Then, the system (1) is consistent if and only if

$$A_iA_i^{\dagger}C_iB_i^{\dagger}B_i=C_i,\quad i=1,2;\tag{6}$$

$$T\bigl(A_2^{\dagger}C_2B_2^{\dagger}-A_1^{\dagger}C_1B_1^{\dagger}\bigr)F=0.\tag{7}$$

In that case, the general solution of (1) can be expressed as follows:

$$\begin{aligned}X={}&A_1^{\dagger}C_1B_1^{\dagger}+L_{A_1}H^{\dagger}A_2L_T\bigl(A_2^{\dagger}C_2B_2^{\dagger}-A_1^{\dagger}C_1B_1^{\dagger}\bigr)B_2B_2^{\dagger}+T^{\dagger}T\bigl(A_2^{\dagger}C_2B_2^{\dagger}-A_1^{\dagger}C_1B_1^{\dagger}\bigr)B_2N^{\dagger}R_{B_1}\\&+\Bigl[L_{A_1}\bigl(Z-H^{\dagger}HZB_2B_2^{\dagger}\bigr)-L_{A_1}H^{\dagger}A_2L_TWNB_2^{\dagger}+\bigl(W-T^{\dagger}TWNN^{\dagger}\bigr)\Bigr]R_{B_1},\end{aligned}\tag{8}$$

where $Z$ and $W$ are arbitrary matrices over $\mathbb{H}$ with compatible dimensions.

Some simplification of (8) can be derived due to the quaternionic analog of the following proposition.

Lemma 3.2. [32] If $A\in\mathbb{H}^{n\times n}$ is Hermitian and idempotent, then the following equation holds for any matrix $B\in\mathbb{H}^{m\times n}$:

$$A\bigl(BA\bigr)^{\dagger}=\bigl(BA\bigr)^{\dagger}.\tag{9}$$

It is evident that if $A\in\mathbb{H}^{n\times n}$ is Hermitian and idempotent, then the following equation is true as well:

$$\bigl(AB\bigr)^{\dagger}A=\bigl(AB\bigr)^{\dagger}.\tag{10}$$

Since $L_{A_1}$, $R_{B_1}$, and $R_H$ are projectors, then using (9) and (10) we have, respectively,

$$L_{A_1}H^{\dagger}=L_{A_1}\bigl(A_2L_{A_1}\bigr)^{\dagger}=\bigl(A_2L_{A_1}\bigr)^{\dagger}=H^{\dagger},\qquad N^{\dagger}R_{B_1}=\bigl(R_{B_1}B_2\bigr)^{\dagger}R_{B_1}=\bigl(R_{B_1}B_2\bigr)^{\dagger}=N^{\dagger},$$
$$T^{\dagger}T=\bigl(R_HA_2\bigr)^{\dagger}R_HA_2=\bigl(R_HA_2\bigr)^{\dagger}A_2=T^{\dagger}A_2,\qquad L_T=I-T^{\dagger}T=I-T^{\dagger}A_2.\tag{11}$$
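
These simplifications are easy to confirm numerically. The sketch below (added here; real matrices for brevity, with illustrative variable names) checks the three identities of (11) on random data.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k, r, s, p = 2, 4, 3, 4, 2, 3
A1 = rng.standard_normal((m, n)); A2 = rng.standard_normal((k, n))
B1 = rng.standard_normal((r, s)); B2 = rng.standard_normal((r, p))

L_A1 = np.eye(n) - np.linalg.pinv(A1) @ A1      # L_{A1} = I - A1^+ A1
R_B1 = np.eye(r) - B1 @ np.linalg.pinv(B1)      # R_{B1} = I - B1 B1^+
H = A2 @ L_A1
N = R_B1 @ B2
R_H = np.eye(k) - H @ np.linalg.pinv(H)
T = R_H @ A2

Hp, Np, Tp = np.linalg.pinv(H), np.linalg.pinv(N), np.linalg.pinv(T)
assert np.allclose(L_A1 @ Hp, Hp)               # L_{A1} H^+ = H^+
assert np.allclose(Np @ R_B1, Np)               # N^+ R_{B1} = N^+
assert np.allclose(Tp @ T, Tp @ A2)             # T^+ T = T^+ A2
```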

Using (11) and (6), we obtain the following expression of (8):

$$\begin{aligned}X={}&A_1^{\dagger}C_1B_1^{\dagger}+H^{\dagger}A_2\bigl(I-T^{\dagger}A_2\bigr)\bigl(A_2^{\dagger}C_2B_2^{\dagger}-A_1^{\dagger}C_1B_1^{\dagger}\bigr)B_2B_2^{\dagger}+T^{\dagger}A_2\bigl(A_2^{\dagger}C_2B_2^{\dagger}-A_1^{\dagger}C_1B_1^{\dagger}\bigr)B_2N^{\dagger}\\&+\Bigl[L_{A_1}\bigl(Z-H^{\dagger}HZB_2B_2^{\dagger}\bigr)-H^{\dagger}A_2L_TWNB_2^{\dagger}+\bigl(W-T^{\dagger}TWNN^{\dagger}\bigr)\Bigr]R_{B_1}\\={}&A_1^{\dagger}C_1B_1^{\dagger}+H^{\dagger}C_2B_2^{\dagger}+H^{\dagger}A_2\bigl(T^{\dagger}A_2-I\bigr)A_1^{\dagger}C_1B_1^{\dagger}Q_{B_2}-H^{\dagger}A_2T^{\dagger}C_2B_2^{\dagger}+T^{\dagger}C_2N^{\dagger}-T^{\dagger}A_2A_1^{\dagger}C_1B_1^{\dagger}B_2N^{\dagger}\\&+\Bigl[L_{A_1}\bigl(Z-H^{\dagger}HZB_2B_2^{\dagger}\bigr)-H^{\dagger}A_2L_TWNB_2^{\dagger}+\bigl(W-T^{\dagger}TWNN^{\dagger}\bigr)\Bigr]R_{B_1}.\end{aligned}\tag{12}$$

By putting $Z=W=0$ in (12), the following partial solution of (1) can be derived:

$$X_0=A_1^{\dagger}C_1B_1^{\dagger}+H^{\dagger}C_2B_2^{\dagger}+T^{\dagger}C_2N^{\dagger}+H^{\dagger}A_2T^{\dagger}A_2A_1^{\dagger}C_1B_1^{\dagger}Q_{B_2}-H^{\dagger}A_2A_1^{\dagger}C_1B_1^{\dagger}Q_{B_2}-H^{\dagger}A_2T^{\dagger}C_2B_2^{\dagger}-T^{\dagger}A_2A_1^{\dagger}C_1B_1^{\dagger}B_2N^{\dagger}.\tag{13}$$
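
The partial solution (13) can be validated numerically. The following sketch (added here; real matrices for brevity, with illustrative variable names) builds a consistent system from a known $X$, assembles $X_0$ exactly as in (13), and checks that it satisfies both equations.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k, r, s, p = 2, 4, 3, 5, 2, 3
A1 = rng.standard_normal((m, n)); B1 = rng.standard_normal((r, s))
A2 = rng.standard_normal((k, n)); B2 = rng.standard_normal((r, p))
X_true = rng.standard_normal((n, r))
C1, C2 = A1 @ X_true @ B1, A2 @ X_true @ B2     # consistent by construction

pinv = np.linalg.pinv
A1p, B1p, B2p = pinv(A1), pinv(B1), pinv(B2)
L_A1 = np.eye(n) - A1p @ A1
R_B1 = np.eye(r) - B1 @ B1p
H = A2 @ L_A1;                       Hp = pinv(H)
N = R_B1 @ B2;                       Np = pinv(N)
T = (np.eye(k) - H @ Hp) @ A2;       Tp = pinv(T)
Q_B2 = B2 @ B2p

Y = A1p @ C1 @ B1p                              # the first term, A1^+ C1 B1^+
X0 = (Y + Hp @ C2 @ B2p + Tp @ C2 @ Np
      + Hp @ A2 @ Tp @ A2 @ Y @ Q_B2
      - Hp @ A2 @ Y @ Q_B2
      - Hp @ A2 @ Tp @ C2 @ B2p
      - Tp @ A2 @ Y @ B2 @ Np)

print(np.allclose(A1 @ X0 @ B1, C1), np.allclose(A2 @ X0 @ B2, C2))  # True True
```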

Further, we give determinantal representations of (13). Let $A_1=\bigl(a^{(1)}_{ij}\bigr)\in\mathbb{H}_{r_1}^{m\times n}$, $B_1=\bigl(b^{(1)}_{ij}\bigr)\in\mathbb{H}_{r_2}^{r\times s}$, $A_2=\bigl(a^{(2)}_{ij}\bigr)\in\mathbb{H}_{r_3}^{k\times n}$, $B_2=\bigl(b^{(2)}_{ij}\bigr)\in\mathbb{H}_{r_4}^{r\times p}$, $C_1=\bigl(c^{(1)}_{ij}\bigr)\in\mathbb{H}^{m\times s}$, and $C_2=\bigl(c^{(2)}_{ij}\bigr)\in\mathbb{H}^{k\times p}$, and let there exist $A_1^{\dagger}\in\mathbb{H}^{n\times m}$, $B_2^{\dagger}\in\mathbb{H}^{p\times r}$, $H^{\dagger}=\bigl(h_{ij}\bigr)\in\mathbb{H}^{n\times k}$, $N^{\dagger}=\bigl(n_{ij}\bigr)\in\mathbb{H}^{p\times r}$, and $T^{\dagger}=\bigl(t_{ij}\bigr)\in\mathbb{H}^{n\times k}$. Let $\operatorname{rank}H=r_5$, $\operatorname{rank}N=r_6$, and $\operatorname{rank}T=r_7$. Consider each term of (13) separately.

(i) By Theorem 2.3, for the first term $x^{0_1}_{ij}$ of (13) we have

$$x^{0_1}_{ij}=\frac{\sum_{\beta\in J_{r_1,n}\{i\}}\operatorname{cdet}_i\bigl((A_1^{*}A_1)_{.i}(d^{B_1}_{.j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_1,n}}\bigl|A_1^{*}A_1\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_2,r}}\bigl|B_1B_1^{*}\bigr|^{\alpha}_{\alpha}},\tag{14}$$

or

$$x^{0_1}_{ij}=\frac{\sum_{\alpha\in I_{r_2,r}\{j\}}\operatorname{rdet}_j\bigl((B_1B_1^{*})_{j.}(d^{A_1}_{i.})\bigr)^{\alpha}_{\alpha}}{\sum_{\beta\in J_{r_1,n}}\bigl|A_1^{*}A_1\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_2,r}}\bigl|B_1B_1^{*}\bigr|^{\alpha}_{\alpha}},\tag{15}$$

where

$$d^{B_1}_{.j}=\Bigl[\sum_{\alpha\in I_{r_2,r}\{j\}}\operatorname{rdet}_j\bigl((B_1B_1^{*})_{j.}(\tilde{c}^{(1)}_{q.})\bigr)^{\alpha}_{\alpha}\Bigr]\in\mathbb{H}^{n\times1},\quad q=1,\ldots,n,$$

$$d^{A_1}_{i.}=\Bigl[\sum_{\beta\in J_{r_1,n}\{i\}}\operatorname{cdet}_i\bigl((A_1^{*}A_1)_{.i}(\tilde{c}^{(1)}_{.l})\bigr)^{\beta}_{\beta}\Bigr]\in\mathbb{H}^{1\times r},\quad l=1,\ldots,r,$$

are the column vector and the row vector, respectively. Here, $\tilde{c}^{(1)}_{q.}$ and $\tilde{c}^{(1)}_{.l}$ are the $q$th row and the $l$th column of $\tilde{C}_1=A_1^{*}C_1B_1^{*}$.
(ii) Similarly, for the second term of (13) we have

$$x^{0_2}_{ij}=\frac{\sum_{\beta\in J_{r_5,n}\{i\}}\operatorname{cdet}_i\bigl((H^{*}H)_{.i}(d^{B_2}_{.j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_5,n}}\bigl|H^{*}H\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_4,r}}\bigl|B_2B_2^{*}\bigr|^{\alpha}_{\alpha}},\tag{16}$$

or

$$x^{0_2}_{ij}=\frac{\sum_{\alpha\in I_{r_4,r}\{j\}}\operatorname{rdet}_j\bigl((B_2B_2^{*})_{j.}(d^{H}_{i.})\bigr)^{\alpha}_{\alpha}}{\sum_{\beta\in J_{r_5,n}}\bigl|H^{*}H\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_4,r}}\bigl|B_2B_2^{*}\bigr|^{\alpha}_{\alpha}},\tag{17}$$

where

$$d^{B_2}_{.j}=\Bigl[\sum_{\alpha\in I_{r_4,r}\{j\}}\operatorname{rdet}_j\bigl((B_2B_2^{*})_{j.}(\tilde{c}^{(2)}_{q.})\bigr)^{\alpha}_{\alpha}\Bigr]\in\mathbb{H}^{n\times1},\quad q=1,\ldots,n,$$

$$d^{H}_{i.}=\Bigl[\sum_{\beta\in J_{r_5,n}\{i\}}\operatorname{cdet}_i\bigl((H^{*}H)_{.i}(\tilde{c}^{(2)}_{.l})\bigr)^{\beta}_{\beta}\Bigr]\in\mathbb{H}^{1\times r},\quad l=1,\ldots,r,$$

are the column vector and the row vector, respectively. Here, $\tilde{c}^{(2)}_{q.}$ and $\tilde{c}^{(2)}_{.l}$ are the $q$th row and the $l$th column of $\tilde{C}_2=H^{*}C_2B_2^{*}$. Note that $H^{*}H=\bigl(A_2L_{A_1}\bigr)^{*}A_2L_{A_1}=L_{A_1}A_2^{*}A_2L_{A_1}$.
(iii) The third term of (13) can be obtained by Theorem 2.3 as well. Then

$$x^{0_3}_{ij}=\frac{\sum_{\beta\in J_{r_7,n}\{i\}}\operatorname{cdet}_i\bigl((T^{*}T)_{.i}(d^{N}_{.j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_7,n}}\bigl|T^{*}T\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_6,r}}\bigl|NN^{*}\bigr|^{\alpha}_{\alpha}},\tag{18}$$

or

$$x^{0_3}_{ij}=\frac{\sum_{\alpha\in I_{r_6,r}\{j\}}\operatorname{rdet}_j\bigl((NN^{*})_{j.}(d^{T}_{i.})\bigr)^{\alpha}_{\alpha}}{\sum_{\beta\in J_{r_7,n}}\bigl|T^{*}T\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_6,r}}\bigl|NN^{*}\bigr|^{\alpha}_{\alpha}},\tag{19}$$

where

$$d^{N}_{.j}=\Bigl[\sum_{\alpha\in I_{r_6,r}\{j\}}\operatorname{rdet}_j\bigl((NN^{*})_{j.}(\hat{c}^{(2)}_{q.})\bigr)^{\alpha}_{\alpha}\Bigr]\in\mathbb{H}^{n\times1},\quad q=1,\ldots,n,$$

$$d^{T}_{i.}=\Bigl[\sum_{\beta\in J_{r_7,n}\{i\}}\operatorname{cdet}_i\bigl((T^{*}T)_{.i}(\hat{c}^{(2)}_{.l})\bigr)^{\beta}_{\beta}\Bigr]\in\mathbb{H}^{1\times r},\quad l=1,\ldots,r,$$

are the column vector and the row vector, respectively. Here, $\hat{c}^{(2)}_{q.}$ is the $q$th row and $\hat{c}^{(2)}_{.l}$ is the $l$th column of $\hat{C}_2=T^{*}C_2N^{*}$. The following expression gives some simplification in computing. Since $T^{*}T=\bigl(R_HA_2\bigr)^{*}R_HA_2=A_2^{*}R_HR_HA_2=A_2^{*}R_HA_2$ and $R_H=I-HH^{\dagger}=I-A_2L_{A_1}\bigl(A_2L_{A_1}\bigr)^{\dagger}=I-A_2\bigl(A_2L_{A_1}\bigr)^{\dagger}$, then $T^{*}T=A_2^{*}\bigl(I-A_2\bigl(A_2L_{A_1}\bigr)^{\dagger}\bigr)A_2$.

(iv) Using (2) for the determinantal representations of $H^{\dagger}$ and $T^{\dagger}$ in the fourth term of (13), we obtain

$$x^{0_4}_{ij}=\frac{\sum_{q=1}^{n}\sum_{z=1}^{n}\sum_{f=1}^{r}\Bigl[\sum_{\beta\in J_{r_5,n}\{i\}}\operatorname{cdet}_i\bigl((H^{*}H)_{.i}(a^{(2,H)}_{.q})\bigr)^{\beta}_{\beta}\Bigr]\Bigl[\sum_{\beta\in J_{r_7,n}\{q\}}\operatorname{cdet}_q\bigl((T^{*}T)_{.q}(a^{(2,T)}_{.z})\bigr)^{\beta}_{\beta}\Bigr]x^{0_1}_{zf}\,q_{fj}}{\sum_{\beta\in J_{r_5,n}}\bigl|H^{*}H\bigr|^{\beta}_{\beta}\sum_{\beta\in J_{r_7,n}}\bigl|T^{*}T\bigr|^{\beta}_{\beta}},\tag{20}$$

where $a^{(2,H)}_{.q}$ and $a^{(2,T)}_{.z}$ are the $q$th and $z$th columns of the matrices $H^{*}A_2$ and $T^{*}A_2$, respectively, and $q_{fj}$ is the $(fj)$th element of $Q_{B_2}$ with the determinantal representation

$$q_{fj}=\frac{\sum_{\alpha\in I_{r_4,r}\{j\}}\operatorname{rdet}_j\bigl((B_2B_2^{*})_{j.}(\ddot{b}^{(2)}_{f.})\bigr)^{\alpha}_{\alpha}}{\sum_{\alpha\in I_{r_4,r}}\bigl|B_2B_2^{*}\bigr|^{\alpha}_{\alpha}},$$

where $\ddot{b}^{(2)}_{f.}$ is the $f$th row of $B_2B_2^{*}$. Note that $H^{*}A_2=L_{A_1}A_2^{*}A_2$ and $T^{*}A_2=A_2^{*}R_HA_2=A_2^{*}\bigl(I-A_2\bigl(A_2L_{A_1}\bigr)^{\dagger}\bigr)A_2$.
(v) Similarly to the previous case,

$$x^{0_5}_{ij}=\frac{\sum_{q=1}^{n}\sum_{f=1}^{r}\Bigl[\sum_{\beta\in J_{r_5,n}\{i\}}\operatorname{cdet}_i\bigl((H^{*}H)_{.i}(a^{(2,H)}_{.q})\bigr)^{\beta}_{\beta}\Bigr]x^{0_1}_{qf}\,q_{fj}}{\sum_{\beta\in J_{r_5,n}}\bigl|H^{*}H\bigr|^{\beta}_{\beta}}.\tag{21}$$
(vi) Consider the sixth term by analogy with the fourth term. So,

$$x^{0_6}_{ij}=\frac{\sum_{q=1}^{n}\Bigl[\sum_{\beta\in J_{r_5,n}\{i\}}\operatorname{cdet}_i\bigl((H^{*}H)_{.i}(a^{(2,H)}_{.q})\bigr)^{\beta}_{\beta}\Bigr]\varphi_{qj}}{\sum_{\beta\in J_{r_5,n}}\bigl|H^{*}H\bigr|^{\beta}_{\beta}\sum_{\beta\in J_{r_7,n}}\bigl|T^{*}T\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_4,r}}\bigl|B_2B_2^{*}\bigr|^{\alpha}_{\alpha}},\tag{22}$$

where

$$\varphi_{qj}=\sum_{\beta\in J_{r_7,n}\{q\}}\operatorname{cdet}_q\bigl((T^{*}T)_{.q}(\psi^{B_2}_{.j})\bigr)^{\beta}_{\beta},\tag{23}$$

or

$$\varphi_{qj}=\sum_{\alpha\in I_{r_4,r}\{j\}}\operatorname{rdet}_j\bigl((B_2B_2^{*})_{j.}(\psi^{T}_{q.})\bigr)^{\alpha}_{\alpha},\tag{24}$$

and

$$\psi^{B_2}_{.j}=\Bigl[\sum_{\alpha\in I_{r_4,r}\{j\}}\operatorname{rdet}_j\bigl((B_2B_2^{*})_{j.}(c''_{q.})\bigr)^{\alpha}_{\alpha}\Bigr]\in\mathbb{H}^{n\times1},\quad q=1,\ldots,n,$$

$$\psi^{T}_{q.}=\Bigl[\sum_{\beta\in J_{r_7,n}\{q\}}\operatorname{cdet}_q\bigl((T^{*}T)_{.q}(c''_{.l})\bigr)^{\beta}_{\beta}\Bigr]\in\mathbb{H}^{1\times r},\quad l=1,\ldots,r,$$

are the column vector and the row vector, respectively. Here, $c''_{q.}$ and $c''_{.l}$ are the $q$th row and the $l$th column of $C''_2=T^{*}C_2B_2^{*}$, for all $i=1,\ldots,n$ and $j=1,\ldots,r$.
(vii) Using (2) for the determinantal representation of $T^{\dagger}$ and (3) for $N^{\dagger}$ in the seventh term of (13), we obtain

$$x^{0_7}_{ij}=\frac{\sum_{q=1}^{n}\sum_{f=1}^{r}\Bigl[\sum_{\beta\in J_{r_7,n}\{i\}}\operatorname{cdet}_i\bigl((T^{*}T)_{.i}(a^{(2,T)}_{.q})\bigr)^{\beta}_{\beta}\Bigr]x^{0_1}_{qf}\Bigl[\sum_{\alpha\in I_{r_6,r}\{j\}}\operatorname{rdet}_j\bigl((NN^{*})_{j.}(b^{(2,N)}_{f.})\bigr)^{\alpha}_{\alpha}\Bigr]}{\sum_{\beta\in J_{r_7,n}}\bigl|T^{*}T\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_6,r}}\bigl|NN^{*}\bigr|^{\alpha}_{\alpha}},\tag{25}$$

where $a^{(2,T)}_{.q}$ is the $q$th column of $T^{*}A_2$ and $b^{(2,N)}_{f.}$ is the $f$th row of $B_2N^{*}=B_2B_2^{*}R_{B_1}$.

Hence, we have proven the following theorem.

Theorem 3.1. Let $A_1\in\mathbb{H}_{r_1}^{m\times n}$, $B_1\in\mathbb{H}_{r_2}^{r\times s}$, $A_2\in\mathbb{H}_{r_3}^{k\times n}$, $B_2\in\mathbb{H}_{r_4}^{r\times p}$, $\operatorname{rank}H=\operatorname{rank}\bigl(A_2L_{A_1}\bigr)=r_5$, $\operatorname{rank}N=\operatorname{rank}\bigl(R_{B_1}B_2\bigr)=r_6$, and $\operatorname{rank}T=\operatorname{rank}\bigl(R_HA_2\bigr)=r_7$. Then, the partial solution (13), $X_0=\bigl(x^{0}_{ij}\bigr)\in\mathbb{H}^{n\times r}$, of the system (1) has entries

$$x^{0}_{ij}=x^{0_1}_{ij}+x^{0_2}_{ij}+x^{0_3}_{ij}+x^{0_4}_{ij}-x^{0_5}_{ij}-x^{0_6}_{ij}-x^{0_7}_{ij},\tag{26}$$

where the term $x^{0_1}_{ij}$ has the determinantal representations (14) and (15), $x^{0_2}_{ij}$—(16) and (17), $x^{0_3}_{ij}$—(18) and (19), $x^{0_4}_{ij}$—(20), $x^{0_5}_{ij}$—(21), $x^{0_6}_{ij}$—(22) with (23) and (24), and $x^{0_7}_{ij}$—(25).

4. Cramer’s rules for special cases of (1)

In this section, we consider special cases of (1) in which one or two of the equations are one-sided. Suppose first that in (1) the matrix $B_1$ is absent (the first equation is one-sided). Then we have the system

$$A_1X=C_1,\qquad A_2XB_2=C_2.\tag{27}$$

The following lemma can be extended to matrices with quaternion entries.

Lemma 4.1. [7] Let $A_1\in\mathbb{H}^{m\times n}$, $C_1\in\mathbb{H}^{m\times r}$, $A_2\in\mathbb{H}^{k\times n}$, $B_2\in\mathbb{H}^{r\times p}$, and $C_2\in\mathbb{H}^{k\times p}$ be given, and let $X\in\mathbb{H}^{n\times r}$ be unknown. Put $H=A_2L_{A_1}$. Then, the following statements are equivalent:

  1. System (27) is consistent.

  2. $R_{A_1}C_1=0$, $R_H\bigl(C_2-A_2A_1^{\dagger}C_1B_2\bigr)=0$, $C_2L_{B_2}=0$.

  3. $\operatorname{rank}\begin{bmatrix}A_1&C_1\end{bmatrix}=\operatorname{rank}A_1$, $\operatorname{rank}\begin{bmatrix}C_2\\B_2\end{bmatrix}=\operatorname{rank}B_2$, $\operatorname{rank}\begin{bmatrix}A_1&C_1B_2\\A_2&C_2\end{bmatrix}=\operatorname{rank}\begin{bmatrix}A_1\\A_2\end{bmatrix}$.

In this case, the general solution of (27) can be expressed as

$$X=A_1^{\dagger}C_1+L_{A_1}H^{\dagger}\bigl(C_2-A_2A_1^{\dagger}C_1B_2\bigr)B_2^{\dagger}+L_{A_1}L_HZ_1+L_{A_1}W_1R_{B_2},\tag{28}$$

where $Z_1$ and $W_1$ are arbitrary matrices over $\mathbb{H}$ with appropriate sizes.

Since by (9) $L_{A_1}H^{\dagger}=L_{A_1}\bigl(A_2L_{A_1}\bigr)^{\dagger}=\bigl(A_2L_{A_1}\bigr)^{\dagger}=H^{\dagger}$, we have some simplification of (28):

$$X=A_1^{\dagger}C_1+H^{\dagger}C_2B_2^{\dagger}-H^{\dagger}A_2A_1^{\dagger}C_1B_2B_2^{\dagger}+L_{A_1}L_HZ_1+L_{A_1}W_1R_{B_2}.$$

By putting $Z_1=W_1=0$, we obtain the following partial solution of (27):

$$X_0=A_1^{\dagger}C_1+H^{\dagger}C_2B_2^{\dagger}-H^{\dagger}A_2A_1^{\dagger}C_1B_2B_2^{\dagger}.\tag{29}$$
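
As with (13), the partial solution (29) is easy to validate numerically. The sketch below (added here; real matrices for brevity, with illustrative variable names) builds a consistent system (27) and checks (29); the analogous checks apply to the partial solutions of the remaining special cases below.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k, r, p = 2, 4, 3, 3, 5
A1 = rng.standard_normal((m, n)); A2 = rng.standard_normal((k, n))
B2 = rng.standard_normal((r, p))
X_true = rng.standard_normal((n, r))
C1, C2 = A1 @ X_true, A2 @ X_true @ B2          # consistent by construction

pinv = np.linalg.pinv
A1p, B2p = pinv(A1), pinv(B2)
H = A2 @ (np.eye(n) - A1p @ A1)                 # H = A2 L_{A1}
Hp = pinv(H)

X0 = A1p @ C1 + Hp @ C2 @ B2p - Hp @ A2 @ A1p @ C1 @ B2 @ B2p
print(np.allclose(A1 @ X0, C1), np.allclose(A2 @ X0 @ B2, C2))       # True True
```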

Theorem 4.1. Let $A_1=\bigl(a^{(1)}_{ij}\bigr)\in\mathbb{H}_{r_1}^{m\times n}$, $A_2=\bigl(a^{(2)}_{ij}\bigr)\in\mathbb{H}_{r_2}^{k\times n}$, $B_2=\bigl(b^{(2)}_{ij}\bigr)\in\mathbb{H}_{r_3}^{r\times p}$, $C_1=\bigl(c^{(1)}_{ij}\bigr)\in\mathbb{H}^{m\times r}$, and $C_2=\bigl(c^{(2)}_{ij}\bigr)\in\mathbb{H}^{k\times p}$, and let there exist $A_1^{\dagger}\in\mathbb{H}^{n\times m}$, $B_2^{\dagger}\in\mathbb{H}^{p\times r}$, and $H^{\dagger}=\bigl(h_{ij}\bigr)\in\mathbb{H}^{n\times k}$. Let $\operatorname{rank}H=r_4$. Denote $A_1^{*}C_1=:\hat{C}_1=\bigl(\hat{c}^{(1)}_{ij}\bigr)\in\mathbb{H}^{n\times r}$, $H^{*}C_2B_2^{*}=:\hat{C}_2=\bigl(\hat{c}^{(2)}_{ij}\bigr)\in\mathbb{H}^{n\times r}$, $H^{*}A_2A_1^{*}=:\hat{A}_2=\bigl(\hat{a}^{(2)}_{ij}\bigr)\in\mathbb{H}^{n\times m}$, and $C_1Q_{B_2}=:\hat{Q}=\bigl(\hat{q}_{ij}\bigr)\in\mathbb{H}^{m\times r}$. Then, the partial solution (29), $X_0=\bigl(x^{0}_{ij}\bigr)\in\mathbb{H}^{n\times r}$, possesses the following determinantal representations:

$$x^{0}_{ij}=\frac{\sum_{\beta\in J_{r_1,n}\{i\}}\operatorname{cdet}_i\bigl((A_1^{*}A_1)_{.i}(\hat{c}^{(1)}_{.j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_1,n}}\bigl|A_1^{*}A_1\bigr|^{\beta}_{\beta}}+\frac{d^{(\lambda)}_{ij}}{\sum_{\beta\in J_{r_4,n}}\bigl|H^{*}H\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_3,r}}\bigl|B_2B_2^{*}\bigr|^{\alpha}_{\alpha}}-\frac{\sum_{l=1}^{m}g^{(\mu)}_{il}\sum_{\alpha\in I_{r_3,r}\{j\}}\operatorname{rdet}_j\bigl((B_2B_2^{*})_{j.}(\hat{q}_{l.})\bigr)^{\alpha}_{\alpha}}{\sum_{\beta\in J_{r_4,n}}\bigl|H^{*}H\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_1,m}}\bigl|A_1A_1^{*}\bigr|^{\alpha}_{\alpha}\sum_{\alpha\in I_{r_3,r}}\bigl|B_2B_2^{*}\bigr|^{\alpha}_{\alpha}}$$

for all $\lambda=1,2$ and $\mu=1,2$. Here

$$d^{(1)}_{ij}:=\sum_{\alpha\in I_{r_3,r}\{j\}}\operatorname{rdet}_j\bigl((B_2B_2^{*})_{j.}(v^{(1)}_{i.})\bigr)^{\alpha}_{\alpha},\qquad g^{(1)}_{il}:=\sum_{\alpha\in I_{r_1,m}\{l\}}\operatorname{rdet}_l\bigl((A_1A_1^{*})_{l.}(u^{(1)}_{i.})\bigr)^{\alpha}_{\alpha},$$

and the row vectors $v^{(1)}_{i.}=\bigl(v^{(1)}_{i1},\ldots,v^{(1)}_{ir}\bigr)$ and $u^{(1)}_{i.}=\bigl(u^{(1)}_{i1},\ldots,u^{(1)}_{im}\bigr)$ are such that

$$v^{(1)}_{it}:=\sum_{\beta\in J_{r_4,n}\{i\}}\operatorname{cdet}_i\bigl((H^{*}H)_{.i}(\hat{c}^{(2)}_{.t})\bigr)^{\beta}_{\beta},\qquad u^{(1)}_{iz}:=\sum_{\beta\in J_{r_4,n}\{i\}}\operatorname{cdet}_i\bigl((H^{*}H)_{.i}(\hat{a}^{(2)}_{.z})\bigr)^{\beta}_{\beta}.$$

In the other case,

$$d^{(2)}_{ij}:=\sum_{\beta\in J_{r_4,n}\{i\}}\operatorname{cdet}_i\bigl((H^{*}H)_{.i}(v^{(2)}_{.j})\bigr)^{\beta}_{\beta},\qquad g^{(2)}_{il}:=\sum_{\beta\in J_{r_4,n}\{i\}}\operatorname{cdet}_i\bigl((H^{*}H)_{.i}(u^{(2)}_{.l})\bigr)^{\beta}_{\beta},$$

and the column vectors $v^{(2)}_{.j}=\bigl(v^{(2)}_{1j},\ldots,v^{(2)}_{nj}\bigr)^{T}$ and $u^{(2)}_{.l}=\bigl(u^{(2)}_{1l},\ldots,u^{(2)}_{nl}\bigr)^{T}$ are such that

$$v^{(2)}_{qj}:=\sum_{\alpha\in I_{r_3,r}\{j\}}\operatorname{rdet}_j\bigl((B_2B_2^{*})_{j.}(\hat{c}^{(2)}_{q.})\bigr)^{\alpha}_{\alpha},\qquad u^{(2)}_{ql}:=\sum_{\alpha\in I_{r_1,m}\{l\}}\operatorname{rdet}_l\bigl((A_1A_1^{*})_{l.}(\hat{a}^{(2)}_{q.})\bigr)^{\alpha}_{\alpha}.$$

Proof. The proof is similar to the proof of Theorem 3.1.

Now suppose that in (1) the matrix $A_1$ is absent. Then we have the system

$$XB_1=C_1,\qquad A_2XB_2=C_2.\tag{30}$$

The following lemma can be extended to matrices with quaternion entries as well.

Lemma 4.2. [7] Let $B_1\in\mathbb{H}^{r\times s}$, $C_1\in\mathbb{H}^{n\times s}$, $A_2\in\mathbb{H}^{k\times n}$, $B_2\in\mathbb{H}^{r\times p}$, and $C_2\in\mathbb{H}^{k\times p}$ be given, and let $X\in\mathbb{H}^{n\times r}$ be unknown. Put $N=R_{B_1}B_2$. Then, the following statements are equivalent:

  1. System (30) is consistent.

  2. $R_{A_2}C_2=0$, $\bigl(C_2-A_2C_1B_1^{\dagger}B_2\bigr)L_N=0$, $C_2L_{B_2}=0$.

  3. $\operatorname{rank}\begin{bmatrix}A_2&C_2\end{bmatrix}=\operatorname{rank}A_2$, $\operatorname{rank}\begin{bmatrix}C_1\\B_1\end{bmatrix}=\operatorname{rank}B_1$, $\operatorname{rank}\begin{bmatrix}C_2&A_2C_1\\B_2&B_1\end{bmatrix}=\operatorname{rank}\begin{bmatrix}B_2&B_1\end{bmatrix}$.

In this case, the general solution of (30) can be expressed as

$$X=C_1B_1^{\dagger}+A_2^{\dagger}\bigl(C_2-A_2C_1B_1^{\dagger}B_2\bigr)N^{\dagger}R_{B_1}+L_{A_2}W_2R_{B_1}+Z_2R_NR_{B_1},\tag{31}$$

where $Z_2$ and $W_2$ are arbitrary matrices over $\mathbb{H}$ with appropriate sizes.

Since by (10) $N^{\dagger}R_{B_1}=\bigl(R_{B_1}B_2\bigr)^{\dagger}R_{B_1}=\bigl(R_{B_1}B_2\bigr)^{\dagger}=N^{\dagger}$, some simplification of (31) can be derived:

$$X=C_1B_1^{\dagger}+A_2^{\dagger}C_2N^{\dagger}-A_2^{\dagger}A_2C_1B_1^{\dagger}B_2N^{\dagger}+L_{A_2}W_2R_{B_1}+Z_2R_NR_{B_1}.$$

By putting $Z_2=W_2=0$, we obtain the following partial solution of (30):

$$X_0=C_1B_1^{\dagger}+A_2^{\dagger}C_2N^{\dagger}-A_2^{\dagger}A_2C_1B_1^{\dagger}B_2N^{\dagger}.\tag{32}$$

The following theorem on determinantal representations of (32) can be proven similarly to the proof of Theorem 3.1.

Theorem 4.2. Let $B_1=\bigl(b^{(1)}_{ij}\bigr)\in\mathbb{H}_{r_1}^{r\times s}$, $A_2=\bigl(a^{(2)}_{ij}\bigr)\in\mathbb{H}_{r_2}^{k\times n}$, $B_2=\bigl(b^{(2)}_{ij}\bigr)\in\mathbb{H}_{r_3}^{r\times p}$, $C_1=\bigl(c^{(1)}_{ij}\bigr)\in\mathbb{H}^{n\times s}$, and $C_2=\bigl(c^{(2)}_{ij}\bigr)\in\mathbb{H}^{k\times p}$, and let there exist $B_1^{\dagger}\in\mathbb{H}^{s\times r}$, $A_2^{\dagger}\in\mathbb{H}^{n\times k}$, and $N^{\dagger}=\bigl(n_{ij}\bigr)\in\mathbb{H}^{p\times r}$. Let $\operatorname{rank}N=r_4$. Denote $C_1B_1^{*}=:\tilde{C}_1=\bigl(\tilde{c}^{(1)}_{ij}\bigr)\in\mathbb{H}^{n\times r}$, $A_2^{*}C_2N^{*}=:\tilde{C}_2=\bigl(\tilde{c}^{(2)}_{ij}\bigr)\in\mathbb{H}^{n\times r}$, $B_1^{*}B_2N^{*}=:\tilde{B}_2=\bigl(\tilde{b}^{(2)}_{ij}\bigr)\in\mathbb{H}^{s\times r}$, and $P_{A_2}C_1=:\tilde{P}=\bigl(\tilde{p}_{ij}\bigr)\in\mathbb{H}^{n\times s}$. Then, the partial solution (32), $X_0=\bigl(x^{0}_{ij}\bigr)\in\mathbb{H}^{n\times r}$, possesses the following determinantal representations:

$$x^{0}_{ij}=\frac{\sum_{\alpha\in I_{r_1,r}\{j\}}\operatorname{rdet}_j\bigl((B_1B_1^{*})_{j.}(\tilde{c}^{(1)}_{i.})\bigr)^{\alpha}_{\alpha}}{\sum_{\alpha\in I_{r_1,r}}\bigl|B_1B_1^{*}\bigr|^{\alpha}_{\alpha}}+\frac{d^{(\lambda)}_{ij}}{\sum_{\beta\in J_{r_2,n}}\bigl|A_2^{*}A_2\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_4,r}}\bigl|NN^{*}\bigr|^{\alpha}_{\alpha}}-\frac{\sum_{z=1}^{s}\Bigl[\sum_{\beta\in J_{r_2,n}\{i\}}\operatorname{cdet}_i\bigl((A_2^{*}A_2)_{.i}(\tilde{p}_{.z})\bigr)^{\beta}_{\beta}\Bigr]g^{(\mu)}_{zj}}{\sum_{\beta\in J_{r_2,n}}\bigl|A_2^{*}A_2\bigr|^{\beta}_{\beta}\sum_{\beta\in J_{r_1,s}}\bigl|B_1^{*}B_1\bigr|^{\beta}_{\beta}\sum_{\alpha\in I_{r_4,r}}\bigl|NN^{*}\bigr|^{\alpha}_{\alpha}}$$

for all $\lambda=1,2$ and $\mu=1,2$. Here

$$d^{(1)}_{ij}:=\sum_{\alpha\in I_{r_4,r}\{j\}}\operatorname{rdet}_j\bigl((NN^{*})_{j.}(\varphi^{(1)}_{i.})\bigr)^{\alpha}_{\alpha},\qquad g^{(1)}_{zj}:=\sum_{\alpha\in I_{r_4,r}\{j\}}\operatorname{rdet}_j\bigl((NN^{*})_{j.}(\psi^{(1)}_{z.})\bigr)^{\alpha}_{\alpha},$$

and the row vectors $\varphi^{(1)}_{i.}=\bigl(\varphi^{(1)}_{i1},\ldots,\varphi^{(1)}_{ir}\bigr)$ and $\psi^{(1)}_{z.}=\bigl(\psi^{(1)}_{z1},\ldots,\psi^{(1)}_{zr}\bigr)$ are such that

$$\varphi^{(1)}_{it}:=\sum_{\beta\in J_{r_2,n}\{i\}}\operatorname{cdet}_i\bigl((A_2^{*}A_2)_{.i}(\tilde{c}^{(2)}_{.t})\bigr)^{\beta}_{\beta},\qquad \psi^{(1)}_{zv}:=\sum_{\beta\in J_{r_1,s}\{z\}}\operatorname{cdet}_z\bigl((B_1^{*}B_1)_{.z}(\tilde{b}^{(2)}_{.v})\bigr)^{\beta}_{\beta}.$$

In the other case,

$$d^{(2)}_{ij}:=\sum_{\beta\in J_{r_2,n}\{i\}}\operatorname{cdet}_i\bigl((A_2^{*}A_2)_{.i}(\varphi^{(2)}_{.j})\bigr)^{\beta}_{\beta},\qquad g^{(2)}_{zj}:=\sum_{\beta\in J_{r_1,s}\{z\}}\operatorname{cdet}_z\bigl((B_1^{*}B_1)_{.z}(\psi^{(2)}_{.j})\bigr)^{\beta}_{\beta},$$

and the column vectors $\varphi^{(2)}_{.j}=\bigl(\varphi^{(2)}_{1j},\ldots,\varphi^{(2)}_{nj}\bigr)^{T}$ and $\psi^{(2)}_{.j}=\bigl(\psi^{(2)}_{1j},\ldots,\psi^{(2)}_{sj}\bigr)^{T}$ are such that

$$\varphi^{(2)}_{qj}:=\sum_{\alpha\in I_{r_4,r}\{j\}}\operatorname{rdet}_j\bigl((NN^{*})_{j.}(\tilde{c}^{(2)}_{q.})\bigr)^{\alpha}_{\alpha},\qquad \psi^{(2)}_{uj}:=\sum_{\alpha\in I_{r_4,r}\{j\}}\operatorname{rdet}_j\bigl((NN^{*})_{j.}(\tilde{b}^{(2)}_{u.})\bigr)^{\alpha}_{\alpha}.$$

Now, suppose that both equations of (1) are one-sided. Let the matrices $B_1$ and $A_2$ be absent in (1). Then we have the system

$$A_1X=C_1,\qquad XB_2=C_2.\tag{33}$$

The following lemma can be extended to matrices with quaternion entries.

Lemma 4.3. [31] Let $A_1\in\mathbb{H}^{m\times n}$, $B_2\in\mathbb{H}^{r\times p}$, $C_1\in\mathbb{H}^{m\times r}$, and $C_2\in\mathbb{H}^{n\times p}$ be given, and let $X\in\mathbb{H}^{n\times r}$ be unknown. Then, the system (33) is consistent if and only if $R_{A_1}C_1=0$, $C_2L_{B_2}=0$, and $A_1C_2=C_1B_2$. Under these conditions, the general solution to (33) can be established as

$$X=A_1^{\dagger}C_1+L_{A_1}C_2B_2^{\dagger}+L_{A_1}UR_{B_2},\tag{34}$$

where $U$ is a free matrix over $\mathbb{H}$ with a suitable shape.

Due to the consistency conditions, Eq. (34) can be rewritten as follows:

$$X=C_2B_2^{\dagger}+A_1^{\dagger}\bigl(C_1-A_1C_2B_2^{\dagger}\bigr)+L_{A_1}UR_{B_2}=C_2B_2^{\dagger}+A_1^{\dagger}\bigl(C_1-C_1B_2B_2^{\dagger}\bigr)+L_{A_1}UR_{B_2}=C_2B_2^{\dagger}+A_1^{\dagger}C_1R_{B_2}+L_{A_1}UR_{B_2}.$$

Consequently, the partial solution $X_0$ to (33) is given by

$$X_0=A_1^{\dagger}C_1+L_{A_1}C_2B_2^{\dagger},\tag{35}$$

or

$$X_0=C_2B_2^{\dagger}+A_1^{\dagger}C_1R_{B_2}.\tag{36}$$

In view of the expression (35), the following theorem can be proven similarly to the proof of Theorem 3.1.

Theorem 4.3. Let $A_1=\bigl(a^{(1)}_{ij}\bigr)\in\mathbb{H}_{r_1}^{m\times n}$, $B_2=\bigl(b^{(2)}_{ij}\bigr)\in\mathbb{H}_{r_2}^{r\times p}$, $C_1=\bigl(c^{(1)}_{ij}\bigr)\in\mathbb{H}^{m\times r}$, and $C_2=\bigl(c^{(2)}_{ij}\bigr)\in\mathbb{H}^{n\times p}$, and let there exist $A_1^{\dagger}\in\mathbb{H}^{n\times m}$, $B_2^{\dagger}\in\mathbb{H}^{p\times r}$, and $L_{A_1}=I-A_1^{\dagger}A_1=\bigl(l_{ij}\bigr)\in\mathbb{H}^{n\times n}$. Denote $A_1^{*}C_1=:\hat{C}_1=\bigl(\hat{c}^{(1)}_{ij}\bigr)\in\mathbb{H}^{n\times r}$ and $L_{A_1}C_2B_2^{*}=:\hat{C}_2=\bigl(\hat{c}^{(2)}_{ij}\bigr)\in\mathbb{H}^{n\times r}$. Then, the partial solution (35), $X_0=\bigl(x^{0}_{ij}\bigr)\in\mathbb{H}^{n\times r}$, possesses the following determinantal representation:

$$x^{0}_{ij}=\frac{\sum_{\beta\in J_{r_1,n}\{i\}}\operatorname{cdet}_i\bigl((A_1^{*}A_1)_{.i}(\hat{c}^{(1)}_{.j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_1,n}}\bigl|A_1^{*}A_1\bigr|^{\beta}_{\beta}}+\frac{\sum_{\alpha\in I_{r_2,r}\{j\}}\operatorname{rdet}_j\bigl((B_2B_2^{*})_{j.}(\hat{c}^{(2)}_{i.})\bigr)^{\alpha}_{\alpha}}{\sum_{\alpha\in I_{r_2,r}}\bigl|B_2B_2^{*}\bigr|^{\alpha}_{\alpha}},\tag{37}$$

where $\hat{c}^{(1)}_{.j}$ is the $j$th column of $\hat{C}_1$ and $\hat{c}^{(2)}_{i.}$ is the $i$th row of $\hat{C}_2$.

Remark 4.1. In accordance with the expression (36), we obtain the same representation but with the denotations $C_2B_2^{*}=:\hat{C}_2=\bigl(\hat{c}^{(2)}_{ij}\bigr)\in\mathbb{H}^{n\times r}$ and $A_1^{*}C_1R_{B_2}=:\hat{C}_1=\bigl(\hat{c}^{(1)}_{ij}\bigr)\in\mathbb{H}^{n\times r}$.

Finally, let the matrices $B_1$ and $B_2$ be absent in (1). Then we have the system

$$A_1X=C_1,\qquad A_2X=C_2.\tag{38}$$

Lemma 4.4. [7] Suppose that $A_1\in\mathbb{H}^{m\times n}$, $C_1\in\mathbb{H}^{m\times r}$, $A_2\in\mathbb{H}^{k\times n}$, and $C_2\in\mathbb{H}^{k\times r}$ are known and $X\in\mathbb{H}^{n\times r}$ is unknown, and put $H=A_2L_{A_1}$ and $T=R_HA_2$. Then, the system (38) is consistent if and only if $A_iA_i^{\dagger}C_i=C_i$ for all $i=1,2$ and $T\bigl(A_2^{\dagger}C_2-A_1^{\dagger}C_1\bigr)=0$. Under these conditions, the general solution to (38) can be established as

$$X=A_1^{\dagger}C_1+L_{A_1}H^{\dagger}A_2\bigl(A_2^{\dagger}C_2-A_1^{\dagger}C_1\bigr)+L_{A_1}L_HY,\tag{39}$$

where $Y$ is an arbitrary matrix over $\mathbb{H}$ with an appropriate size.

Using (9) and the consistency conditions, we simplify (39) to $X=A_1^{\dagger}C_1+H^{\dagger}C_2-H^{\dagger}A_2A_1^{\dagger}C_1+L_{A_1}L_HY$. Consequently, the following partial solution of (38) will be considered:

$$X_0=A_1^{\dagger}C_1+H^{\dagger}C_2-H^{\dagger}A_2A_1^{\dagger}C_1.\tag{40}$$

In the following theorem, we give the determinantal representations of (40).

Theorem 4.4. Let $A_1=\bigl(a^{(1)}_{ij}\bigr)\in\mathbb{H}_{r_1}^{m\times n}$, $A_2=\bigl(a^{(2)}_{ij}\bigr)\in\mathbb{H}_{r_2}^{k\times n}$, $C_1=\bigl(c^{(1)}_{ij}\bigr)\in\mathbb{H}^{m\times r}$, and $C_2=\bigl(c^{(2)}_{ij}\bigr)\in\mathbb{H}^{k\times r}$, and let there exist $A_1^{\dagger}\in\mathbb{H}^{n\times m}$ and $H^{\dagger}=\bigl(h_{ij}\bigr)\in\mathbb{H}^{n\times k}$. Let $\operatorname{rank}H=r_3$. Denote $A_1^{*}C_1=:\hat{C}_1=\bigl(\hat{c}^{(1)}_{ij}\bigr)\in\mathbb{H}^{n\times r}$, $H^{*}C_2=:\hat{C}_2=\bigl(\hat{c}^{(2)}_{ij}\bigr)\in\mathbb{H}^{n\times r}$, and $H^{*}A_2=:\hat{A}_2=\bigl(\hat{a}^{(2)}_{ij}\bigr)\in\mathbb{H}^{n\times n}$. Then, $X_0=\bigl(x^{0}_{ij}\bigr)\in\mathbb{H}^{n\times r}$ possesses the following determinantal representation:

$$x^{0}_{ij}=\frac{\sum_{\beta\in J_{r_1,n}\{i\}}\operatorname{cdet}_i\bigl((A_1^{*}A_1)_{.i}(\hat{c}^{(1)}_{.j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_1,n}}\bigl|A_1^{*}A_1\bigr|^{\beta}_{\beta}}+\frac{\sum_{\beta\in J_{r_3,n}\{i\}}\operatorname{cdet}_i\bigl((H^{*}H)_{.i}(\hat{c}^{(2)}_{.j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_3,n}}\bigl|H^{*}H\bigr|^{\beta}_{\beta}}-\sum_{l=1}^{n}\frac{\sum_{\beta\in J_{r_3,n}\{i\}}\operatorname{cdet}_i\bigl((H^{*}H)_{.i}(\hat{a}^{(2)}_{.l})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_3,n}}\bigl|H^{*}H\bigr|^{\beta}_{\beta}}\cdot\frac{\sum_{\beta\in J_{r_1,n}\{l\}}\operatorname{cdet}_l\bigl((A_1^{*}A_1)_{.l}(\hat{c}^{(1)}_{.j})\bigr)^{\beta}_{\beta}}{\sum_{\beta\in J_{r_1,n}}\bigl|A_1^{*}A_1\bigr|^{\beta}_{\beta}},\tag{41}$$

where $\hat{c}^{(1)}_{.j}$ and $\hat{c}^{(2)}_{.j}$ are the $j$th columns of $\hat{C}_1$ and $\hat{C}_2$, and $\hat{a}^{(2)}_{.l}$ is the $l$th column of $\hat{A}_2$.

Proof. The proof is similar to the proof of Theorem 3.1.

5. Conclusion

Within the framework of the theory of row-column determinants previously introduced by the author, we have obtained determinantal representations (analogs of Cramer's rule) of partial solutions to the system of two-sided quaternion matrix equations $A_1XB_1=C_1$, $A_2XB_2=C_2$ and to its special cases with one or two one-sided matrix equations. We have used determinantal representations of the Moore-Penrose inverse previously obtained by the author. Note that to obtain determinantal representations for all of the above matrix systems over the complex field, it suffices to replace all row and column determinants by usual determinants.

Conflict of interest

The author declares that there is no conflict of interest.

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
