Open access peer-reviewed chapter

# Cramer’s Rules for the System of Two-Sided Matrix Equations and of Its Special Cases

Written By

Ivan I. Kyrchei

Submitted: 12 October 2017 Reviewed: 16 January 2018 Published: 29 August 2018

DOI: 10.5772/intechopen.74105

From the Edited Volume

## Matrix Theory - Applications and Theorems

Edited by Hassan A. Yasser


## Abstract

Within the framework of the theory of row-column determinants previously introduced by the author, we obtain determinantal representations (analogs of Cramer's rule) of a partial solution to the system of two-sided quaternion matrix equations $A_1XB_1=C_1$, $A_2XB_2=C_2$. We also give Cramer's rules for its special cases in which the first equation is one-sided. Namely, we consider the two systems with the first equation $A_1X=C_1$ and $XB_1=C_1$, respectively, and with an unchanged second equation. Cramer's rules for the special cases in which both equations are one-sided, namely the system of the equations $A_1X=C_1$, $XB_2=C_2$ and the system of the equations $A_1X=C_1$, $A_2X=C_2$, are studied as well. Since the Moore-Penrose inverse is a necessary tool for solving matrix equations, we use its determinantal representations in terms of row-column determinants, also previously obtained by the author.

### Keywords

• Moore-Penrose inverse
• quaternion matrix
• Cramer rule
• system matrix equations
• 2000 AMS subject classifications: 15A15, 16W10

## 1. Introduction

The study of matrix equations and systems of matrix equations is an active research topic in matrix theory and its applications. The system of classical two-sided matrix equations

$$A_1XB_1 = C_1, \qquad A_2XB_2 = C_2. \tag{1}$$

over the complex field, over a principal ideal domain, and over the quaternion skew field has been studied by many authors (see, e.g., [1, 2, 3, 4, 5, 6, 7]). Mitra [1] gave necessary and sufficient conditions for the consistency of the system (1) over the complex field together with an expression for its general solution. Navarra et al. [6] derived a new necessary and sufficient condition for the solvability of (1) over the complex field and used it to obtain a new, simpler representation of the general solution. Wang [7] considered the system (1) over the quaternion skew field and obtained its solvability conditions and a representation of its general solution.

Throughout the chapter, we denote the real number field by $\mathbb{R}$ and the quaternion algebra by

$$\mathbb{H} = \left\{ a_0 + a_1 i + a_2 j + a_3 k \;\middle|\; i^2 = j^2 = k^2 = ijk = -1,\ a_0, a_1, a_2, a_3 \in \mathbb{R} \right\}.$$

$\mathbb{H}^{m\times n}$ denotes the set of all $m \times n$ matrices over $\mathbb{H}$, and $\mathbb{H}_r^{m\times n}$ the set of all such matrices with rank $r$. For $A \in \mathbb{H}^{n\times m}$, the symbol $A^*$ stands for the conjugate transpose (Hermitian adjoint) of $A$. A matrix $A = \left(a_{ij}\right) \in \mathbb{H}^{n\times n}$ is Hermitian if $A^* = A$.
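The defining relations of $\mathbb{H}$ can be checked concretely. The sketch below (an illustration, not part of the chapter) uses the standard assumption that a quaternion $a_0 + a_1i + a_2j + a_3k$ can be modeled by a $2\times 2$ complex matrix, with $i$, $j$, $k$ mapped to fixed matrix units:

```python
import numpy as np

# Assumed 2x2 complex-matrix model of the quaternion units i, j, k.
I2 = np.eye(2)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = np.array([[0, 1j], [1j, 0]])

# Defining relations of H:
assert np.allclose(qi @ qi, -I2)
assert np.allclose(qj @ qj, -I2)
assert np.allclose(qk @ qk, -I2)
assert np.allclose(qi @ qj @ qk, -I2)
# Noncommutativity, the source of the determinant difficulties below: ij = k = -ji.
assert np.allclose(qi @ qj, qk) and np.allclose(qj @ qi, -qk)
```

This noncommutativity ($ij = -ji$) is exactly what makes a naive quaternion determinant ill-defined, motivating the row-column determinants of Section 2.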

Generalized inverses are useful tools for solving matrix equations. The definition of the Moore-Penrose inverse extends to quaternion matrices as follows. The Moore-Penrose inverse of $A \in \mathbb{H}^{m\times n}$, denoted by $A^\dagger$, is the unique matrix $X \in \mathbb{H}^{n\times m}$ satisfying (1) $AXA = A$, (2) $XAX = X$, (3) $(AX)^* = AX$, and (4) $(XA)^* = XA$.
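The four Penrose conditions can be verified numerically. Since NumPy does not support quaternion matrices, the following sketch uses a complex matrix as a stand-in (the defining equations have the same form over $\mathbb{C}$ and $\mathbb{H}$):

```python
import numpy as np

# Verify the four Penrose conditions for numpy's pseudoinverse
# (complex stand-in for a quaternion matrix).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
X = np.linalg.pinv(A)

assert np.allclose(A @ X @ A, A)              # (1) AXA = A
assert np.allclose(X @ A @ X, X)              # (2) XAX = X
assert np.allclose((A @ X).conj().T, A @ X)   # (3) (AX)* = AX
assert np.allclose((X @ A).conj().T, X @ A)   # (4) (XA)* = XA
```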

The determinantal representation of the usual inverse is the matrix of cofactors; it yields a direct method for computing the inverse and, through Cramer's rule, applies to systems of linear equations. The same is desirable for generalized inverses, but even over the complex or real fields there is no such unambiguous representation. For this reason, various determinantal representations of generalized inverses have been proposed in the search for more applicable explicit expressions (see, e.g., [8]). Owing to the noncommutativity of the quaternion algebra, difficulties arise already in defining a quaternion determinant (see, e.g., [9, 10, 11, 12, 13, 14, 15, 16]).

An adequate treatment of the problem of determinantal representations of the inverse, as well as of generalized inverses, has only recently become possible thanks to the theory of column-row determinants introduced in [17, 18]. Within this framework, determinantal representations of various kinds of generalized inverses and of (generalized-inverse) solutions of quaternion matrix equations have been derived by the author (see, e.g., [19, 20, 21, 22, 23, 24, 25]) and by other researchers (see, e.g., [26, 27, 28, 29]).

The main goal of the chapter is to derive determinantal representations (analogs of the classical Cramer rule) of general solutions of the system (1) and of its special cases over the quaternion skew field.

The chapter is organized as follows. In Section 2, we start with preliminaries: the row-column determinants, the determinantal representations of the Moore-Penrose inverse, and Cramer's rule for the quaternion matrix equation $AXB = C$. Determinantal representations of a partial solution (an analog of Cramer's rule) of the system (1) are derived in Section 3. In Section 4, we give Cramer's rules for special cases of (1) with one or two one-sided equations. Finally, the conclusion is drawn in Section 5.

## 2. Preliminaries

For $A = \left(a_{ij}\right) \in M_n(\mathbb{H})$, we define $n$ row determinants and $n$ column determinants as follows. Suppose $S_n$ is the symmetric group on the set $I_n = \{1, \dots, n\}$.

Definition 2.1. The $i$th row determinant of $A = \left(a_{ij}\right) \in \mathbb{H}^{n\times n}$ is defined for all $i = 1, \dots, n$ by putting

$$\operatorname{rdet}_i A = \sum_{\sigma \in S_n} (-1)^{n-r}\, a_{i\,i_{k_1}} a_{i_{k_1} i_{k_1+1}} \cdots a_{i_{k_1+l_1}\, i} \cdots a_{i_{k_r}\, i_{k_r+1}} \cdots a_{i_{k_r+l_r}\, i_{k_r}},$$

where the permutation $\sigma$ is written as a product of $r$ disjoint cycles with the cycle containing $i$ first,

$$\sigma = \left(i\; i_{k_1}\, i_{k_1+1} \cdots i_{k_1+l_1}\right)\left(i_{k_2}\, i_{k_2+1} \cdots i_{k_2+l_2}\right) \cdots \left(i_{k_r}\, i_{k_r+1} \cdots i_{k_r+l_r}\right),$$

with the conditions $i_{k_2} < i_{k_3} < \cdots < i_{k_r}$ and $i_{k_t} < i_{k_t+s}$ for all $t = 2, \dots, r$ and all $s = 1, \dots, l_t$.

Definition 2.2. The $j$th column determinant of $A = \left(a_{ij}\right) \in \mathbb{H}^{n\times n}$ is defined for all $j = 1, \dots, n$ by putting

$$\operatorname{cdet}_j A = \sum_{\tau \in S_n} (-1)^{n-r}\, a_{j_{k_r}\, j_{k_r+l_r}} \cdots a_{j_{k_r+1}\, j_{k_r}} \cdots a_{j\, j_{k_1+l_1}} \cdots a_{j_{k_1+1}\, j_{k_1}} a_{j_{k_1}\, j},$$

where the permutation $\tau$ is written as a product of $r$ disjoint cycles with the cycle containing $j$ last,

$$\tau = \left(j_{k_r+l_r} \cdots j_{k_r+1}\, j_{k_r}\right) \cdots \left(j_{k_2+l_2} \cdots j_{k_2+1}\, j_{k_2}\right)\left(j_{k_1+l_1} \cdots j_{k_1+1}\, j_{k_1}\; j\right),$$

with the conditions $j_{k_2} < j_{k_3} < \cdots < j_{k_r}$ and $j_{k_t} < j_{k_t+s}$ for $t = 2, \dots, r$ and $s = 1, \dots, l_t$.
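Over a commutative field the cycle-based formula of Definition 2.1 collapses to the ordinary determinant, which gives a quick sanity check. The following sketch (an illustration, not the chapter's own construction; the helper name `rdet` is ours) enumerates all permutations, decomposes each into cycles ordered as in the definition, and compares with `numpy.linalg.det` over $\mathbb{C}$:

```python
import numpy as np
from itertools import permutations

def rdet(A, i):
    """i-th row determinant of a square matrix per Definition 2.1: each
    permutation is written as disjoint cycles, the cycle of i first (started
    at i), the remaining cycles started at their minimal elements in
    increasing order; the sign is (-1)^(n - number of cycles)."""
    n = A.shape[0]
    total = 0
    for perm in permutations(range(n)):
        sigma = dict(enumerate(perm))
        seen, cycles = set(), []
        for start in [i] + [t for t in range(n) if t != i]:
            if start in seen:
                continue
            cyc, t = [start], sigma[start]
            seen.add(start)
            while t != start:
                cyc.append(t)
                seen.add(t)
                t = sigma[t]
            cycles.append(cyc)
        term = (-1) ** (n - len(cycles))        # the factor (-1)^{n-r}
        for cyc in cycles:
            for a, b in zip(cyc, cyc[1:] + cyc[:1]):
                term = term * A[a, b]           # a_{c0 c1} a_{c1 c2} ... a_{cl c0}
        total += term
    return total

# Over C the entries commute, so every row determinant equals det(A):
A = np.array([[1, 2j, 0, 1], [3, 1, 1j, 0], [0, 2, 1, 1], [1j, 0, 2, 1]])
assert np.allclose(rdet(A, 0), np.linalg.det(A))
assert np.allclose(rdet(A, 3), np.linalg.det(A))
```

Over $\mathbb{H}$, where the factors no longer commute, the $n$ row determinants generally differ from one another; they coincide exactly in the Hermitian case discussed next.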

Since $\operatorname{rdet}_1 A = \cdots = \operatorname{rdet}_n A = \operatorname{cdet}_1 A = \cdots = \operatorname{cdet}_n A \in \mathbb{R}$ for a Hermitian $A \in \mathbb{H}^{n\times n}$, we can define the determinant of a Hermitian matrix $A$ by putting $\det A := \operatorname{rdet}_i A = \operatorname{cdet}_i A$ for all $i = 1, \dots, n$. The determinant of a Hermitian matrix has properties similar to those of the usual determinant; they are completely explored in [17, 18] by means of its row and column determinants. In particular, within the framework of the theory of column-row determinants, determinantal representations of the inverse matrix over $\mathbb{H}$ by analogs of the classical adjoint matrix, and Cramer's rule for quaternionic systems of linear equations, have been derived. Further, we consider the determinantal representations of the Moore-Penrose inverse.

We shall use the following notation. Let $\alpha := \{\alpha_1, \dots, \alpha_k\} \subseteq \{1, \dots, m\}$ and $\beta := \{\beta_1, \dots, \beta_k\} \subseteq \{1, \dots, n\}$ be subsets of order $1 \le k \le \min\{m, n\}$. $A_\beta^\alpha$ denotes the submatrix of $A \in \mathbb{H}^{m\times n}$ determined by the rows indexed by $\alpha$ and the columns indexed by $\beta$. Then $A_\alpha^\alpha$ denotes the principal submatrix determined by the rows and columns indexed by $\alpha$. If $A \in \mathbb{H}^{n\times n}$ is Hermitian, then $\left|A_\alpha^\alpha\right|$ denotes the corresponding principal minor of $\det A$. For $1 \le k \le n$, the collection of strictly increasing sequences of $k$ integers chosen from $\{1, \dots, n\}$ is denoted by $L_{k,n} := \left\{\alpha : \alpha = (\alpha_1, \dots, \alpha_k),\ 1 \le \alpha_1 < \cdots < \alpha_k \le n\right\}$. For fixed $i \in \alpha$ and $j \in \beta$, let $I_{r,m}\{i\} := \{\alpha : \alpha \in L_{r,m},\ i \in \alpha\}$ and $J_{r,n}\{j\} := \{\beta : \beta \in L_{r,n},\ j \in \beta\}$.

Let $a_{.j}$ be the $j$th column and $a_{i.}$ the $i$th row of $A$. Suppose $A_{.j}(b)$ denotes the matrix obtained from $A$ by replacing its $j$th column with the column $b$, and $A_{i.}(b)$ denotes the matrix obtained from $A$ by replacing its $i$th row with the row $b$. $a^*_{.j}$ and $a^*_{i.}$ denote the $j$th column and the $i$th row of $A^*$, respectively.

The following theorem gives determinantal representations of the Moore-Penrose inverse over the quaternion skew field $\mathbb{H}$.

Theorem 2.1. [19] If $A \in \mathbb{H}_r^{m\times n}$, then the Moore-Penrose inverse $A^\dagger = \left(a_{ij}^\dagger\right) \in \mathbb{H}^{n\times m}$ possesses the following determinantal representations:

$$a_{ij}^\dagger = \frac{\sum_{\beta \in J_{r,n}\{i\}} \operatorname{cdet}_i \left( (A^*A)_{.i}\left(a^*_{.j}\right) \right)_\beta^\beta}{\sum_{\beta \in J_{r,n}} \left| (A^*A)_\beta^\beta \right|}, \tag{2}$$

or

$$a_{ij}^\dagger = \frac{\sum_{\alpha \in I_{r,m}\{j\}} \operatorname{rdet}_j \left( (AA^*)_{j.}\left(a^*_{i.}\right) \right)_\alpha^\alpha}{\sum_{\alpha \in I_{r,m}} \left| (AA^*)_\alpha^\alpha \right|}. \tag{3}$$

Remark 2.1. Note that for an arbitrary full-rank matrix $A \in \mathbb{H}_r^{m\times n}$, a column vector $d_{.j}$, and a row vector $d_{i.}$ of appropriate sizes, we put

$$\operatorname{cdet}_i (A^*A)_{.i}\left(d_{.j}\right) = \sum_{\beta \in J_{n,n}\{i\}} \operatorname{cdet}_i \left( (A^*A)_{.i}\left(d_{.j}\right) \right)_\beta^\beta, \qquad \det A^*A = \sum_{\beta \in J_{n,n}} \left| (A^*A)_\beta^\beta \right| \quad \text{when } r = n,$$

$$\operatorname{rdet}_j (AA^*)_{j.}\left(d_{i.}\right) = \sum_{\alpha \in I_{m,m}\{j\}} \operatorname{rdet}_j \left( (AA^*)_{j.}\left(d_{i.}\right) \right)_\alpha^\alpha, \qquad \det AA^* = \sum_{\alpha \in I_{m,m}} \left| (AA^*)_\alpha^\alpha \right| \quad \text{when } r = m.$$

Furthermore, $P_A = A^\dagger A$, $Q_A = AA^\dagger$, $L_A = I - A^\dagger A$, and $R_A := I - AA^\dagger$ stand for the orthogonal projectors induced by $A$.
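These four projectors are used throughout the sequel. The following sketch (complex stand-in for $\mathbb{H}$) confirms their defining properties — each is Hermitian and idempotent, $L_A$ annihilates $A$ from the right, and $R_A$ from the left:

```python
import numpy as np

# The four projectors induced by A.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
Ap = np.linalg.pinv(A)
P_A, Q_A = Ap @ A, A @ Ap
L_A, R_A = np.eye(3) - P_A, np.eye(5) - Q_A

for P in (P_A, Q_A, L_A, R_A):
    assert np.allclose(P @ P, P)          # idempotent
    assert np.allclose(P.conj().T, P)     # Hermitian
assert np.allclose(A @ L_A, 0)            # A L_A = 0
assert np.allclose(R_A @ A, 0)            # R_A A = 0
```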

Theorem 2.2. [30] Let $A \in \mathbb{H}^{m\times n}$, $B \in \mathbb{H}^{r\times s}$, and $C \in \mathbb{H}^{m\times s}$ be known and $X \in \mathbb{H}^{n\times r}$ be unknown. Then the matrix equation

$$AXB = C \tag{4}$$

is consistent if and only if $AA^\dagger CB^\dagger B = C$. In this case, its general solution can be expressed as

$$X = A^\dagger CB^\dagger + L_AV + WR_B, \tag{5}$$

where $V$ and $W$ are arbitrary matrices over $\mathbb{H}$ with appropriate dimensions.
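Over the complex field (a commutative stand-in for $\mathbb{H}$, with `numpy.linalg.pinv` in the role of $A^\dagger$), the claims of Theorem 2.2 can be checked numerically. The sketch below builds a consistent right-hand side $C$ and verifies the consistency test and that every matrix of the form (5) solves (4):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r, s = 5, 3, 4, 6
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((r, s)) + 1j * rng.standard_normal((r, s))
# A consistent right-hand side: C = A X_true B for some X_true.
C = A @ (rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))) @ B

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
assert np.allclose(A @ Ap @ C @ Bp @ B, C)   # consistency test of Theorem 2.2

L_A = np.eye(n) - Ap @ A                      # projector L_A
R_B = np.eye(r) - B @ Bp                      # projector R_B
V = rng.standard_normal((n, r))
W = rng.standard_normal((n, r))
X = Ap @ C @ Bp + L_A @ V + W @ R_B           # the general solution (5)
assert np.allclose(A @ X @ B, C)              # every such X solves AXB = C
```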

The partial solution $X^0 = A^\dagger CB^\dagger$ of (4) possesses the following determinantal representations.

Theorem 2.3. [20] Let $A \in \mathbb{H}_{r_1}^{m\times n}$ and $B \in \mathbb{H}_{r_2}^{r\times s}$. Then $X^0 = \left(x_{ij}^0\right) \in \mathbb{H}^{n\times r}$ has the determinantal representations

$$x_{ij}^0 = \frac{\sum_{\beta \in J_{r_1,n}\{i\}} \operatorname{cdet}_i \left( (A^*A)_{.i}\left(d^B_{.j}\right) \right)_\beta^\beta}{\sum_{\beta \in J_{r_1,n}} \left| (A^*A)_\beta^\beta \right| \sum_{\alpha \in I_{r_2,r}} \left| (BB^*)_\alpha^\alpha \right|},$$

or

$$x_{ij}^0 = \frac{\sum_{\alpha \in I_{r_2,r}\{j\}} \operatorname{rdet}_j \left( (BB^*)_{j.}\left(d^A_{i.}\right) \right)_\alpha^\alpha}{\sum_{\beta \in J_{r_1,n}} \left| (A^*A)_\beta^\beta \right| \sum_{\alpha \in I_{r_2,r}} \left| (BB^*)_\alpha^\alpha \right|},$$

where

$$d^B_{.j} = \left[ \sum_{\alpha \in I_{r_2,r}\{j\}} \operatorname{rdet}_j \left( (BB^*)_{j.}\left(\tilde c_{k.}\right) \right)_\alpha^\alpha \right] \in \mathbb{H}^{n\times 1}, \quad k = 1, \dots, n,$$

$$d^A_{i.} = \left[ \sum_{\beta \in J_{r_1,n}\{i\}} \operatorname{cdet}_i \left( (A^*A)_{.i}\left(\tilde c_{.l}\right) \right)_\beta^\beta \right] \in \mathbb{H}^{1\times r}, \quad l = 1, \dots, r,$$

are the column vector and the row vector, respectively, and $\tilde c_{i.}$ and $\tilde c_{.j}$ are the $i$th row and the $j$th column of $\tilde C = A^*CB^*$.

## 3. Determinantal representations of a partial solution to the system (1)

Lemma 3.1. [7] Let $A_1 \in \mathbb{H}^{m\times n}$, $B_1 \in \mathbb{H}^{r\times s}$, $C_1 \in \mathbb{H}^{m\times s}$, $A_2 \in \mathbb{H}^{k\times n}$, $B_2 \in \mathbb{H}^{r\times p}$, and $C_2 \in \mathbb{H}^{k\times p}$ be given, and let $X \in \mathbb{H}^{n\times r}$ be to be determined. Put $H = A_2L_{A_1}$, $N = R_{B_1}B_2$, $T = R_HA_2$, and $F = B_2L_N$. Then the system (1) is consistent if and only if

$$A_iA_i^\dagger C_iB_i^\dagger B_i = C_i, \quad i = 1, 2; \tag{6}$$

$$T\left(A_2^\dagger C_2B_2^\dagger - A_1^\dagger C_1B_1^\dagger\right)F = 0. \tag{7}$$

In that case, the general solution of (1) can be expressed as the following:

$$\begin{aligned} X ={}& A_1^\dagger C_1B_1^\dagger + L_{A_1}H^\dagger A_2L_TA_2^\dagger\left(C_2 - A_2A_1^\dagger C_1B_1^\dagger B_2\right)B_2^\dagger + T^\dagger TA_2^\dagger\left(C_2 - A_2A_1^\dagger C_1B_1^\dagger B_2\right)N^\dagger R_{B_1} \\ &+ L_{A_1}Z - H^\dagger HZB_2B_2^\dagger - L_{A_1}H^\dagger A_2L_TWNB_2^\dagger + \left(W - T^\dagger TWNN^\dagger\right)R_{B_1}, \end{aligned} \tag{8}$$

where $Z$ and $W$ are arbitrary matrices over $\mathbb{H}$ with compatible dimensions.

Some simplification of (8) can be derived due to the quaternionic analog of the following proposition.

Lemma 3.2. [32] If $A \in \mathbb{H}^{n\times n}$ is Hermitian and idempotent, then the following equation holds for any matrix $B \in \mathbb{H}^{m\times n}$:

$$A(BA)^\dagger = (BA)^\dagger. \tag{9}$$

It is evident that if $A \in \mathbb{H}^{n\times n}$ is Hermitian and idempotent, then for any $B \in \mathbb{H}^{n\times m}$ the following equation is true as well:

$$(AB)^\dagger A = (AB)^\dagger. \tag{10}$$
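A quick numerical check of both identities (complex stand-in; the Hermitian idempotent $A$ is produced as an orthogonal projector):

```python
import numpy as np

# Check (9): A (BA)^+ = (BA)^+ and (10): (AB)^+ A = (AB)^+ for a
# Hermitian idempotent A (here: an orthogonal projector M M^+).
rng = np.random.default_rng(5)
pinv = np.linalg.pinv
M = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
A = M @ pinv(M)                         # A* = A, A @ A = A
B = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
C = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))

assert np.allclose(A @ pinv(B @ A), pinv(B @ A))   # (9)
assert np.allclose(pinv(A @ C) @ A, pinv(A @ C))   # (10)
```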

Since $L_{A_1}$, $R_{B_1}$, and $R_H$ are projectors, then using (9) and (10) we have, respectively,

$$\begin{aligned} L_{A_1}H^\dagger &= L_{A_1}\left(A_2L_{A_1}\right)^\dagger = \left(A_2L_{A_1}\right)^\dagger = H^\dagger, & N^\dagger R_{B_1} &= \left(R_{B_1}B_2\right)^\dagger R_{B_1} = \left(R_{B_1}B_2\right)^\dagger = N^\dagger, \\ T^\dagger T &= \left(R_HA_2\right)^\dagger R_HA_2 = \left(R_HA_2\right)^\dagger A_2 = T^\dagger A_2, & L_T &= I - T^\dagger T = I - T^\dagger A_2. \end{aligned} \tag{11}$$

Using (11) and (6), we obtain the following expression of (8):

$$\begin{aligned} X ={}& A_1^\dagger C_1B_1^\dagger + H^\dagger A_2\left(I - T^\dagger A_2\right)A_2^\dagger\left(C_2 - A_2A_1^\dagger C_1B_1^\dagger B_2\right)B_2^\dagger + T^\dagger A_2A_2^\dagger\left(C_2 - A_2A_1^\dagger C_1B_1^\dagger B_2\right)N^\dagger \\ &+ L_{A_1}Z - H^\dagger HZB_2B_2^\dagger - H^\dagger A_2L_TWNB_2^\dagger + \left(W - T^\dagger TWNN^\dagger\right)R_{B_1} \\ ={}& A_1^\dagger C_1B_1^\dagger + H^\dagger C_2B_2^\dagger + H^\dagger A_2\left(T^\dagger A_2 - I\right)A_1^\dagger C_1B_1^\dagger Q_{B_2} - H^\dagger A_2T^\dagger C_2B_2^\dagger + T^\dagger C_2N^\dagger - T^\dagger A_2A_1^\dagger C_1B_1^\dagger B_2N^\dagger \\ &+ L_{A_1}Z - H^\dagger HZB_2B_2^\dagger - H^\dagger A_2L_TWNB_2^\dagger + \left(W - T^\dagger TWNN^\dagger\right)R_{B_1}. \end{aligned} \tag{12}$$

By putting $Z = W = 0$ in (12), the following partial solution of (1) can be derived:

$$X^0 = A_1^\dagger C_1B_1^\dagger + H^\dagger C_2B_2^\dagger + T^\dagger C_2N^\dagger + H^\dagger A_2T^\dagger A_2A_1^\dagger C_1B_1^\dagger Q_{B_2} - H^\dagger A_2A_1^\dagger C_1B_1^\dagger Q_{B_2} - H^\dagger A_2T^\dagger C_2B_2^\dagger - T^\dagger A_2A_1^\dagger C_1B_1^\dagger B_2N^\dagger. \tag{13}$$
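As a sanity check of (13), the sketch below evaluates it term by term over the complex field (a stand-in for $\mathbb{H}$), with all ranks deliberately deficient so that the projectors $L_{A_1}$, $R_{B_1}$, and $R_H$ are nontrivial, and verifies that $X^0$ solves both equations of (1) for a consistent right-hand side:

```python
import numpy as np

rng = np.random.default_rng(3)
pinv = np.linalg.pinv
def cplx(*s): return rng.standard_normal(s) + 1j * rng.standard_normal(s)

m, n, r, s, k, p = 4, 3, 4, 3, 3, 5
A1 = cplx(m, 2) @ cplx(2, n)            # rank 2 < n, so L_{A1} != 0
B1 = cplx(r, 2) @ cplx(2, s)            # rank 2 < r, so R_{B1} != 0
A2 = cplx(k, 2) @ cplx(2, n)
B2 = cplx(r, 3) @ cplx(3, p)
Xt = cplx(n, r)
C1, C2 = A1 @ Xt @ B1, A2 @ Xt @ B2     # consistent right-hand sides

A1p, B1p, B2p = pinv(A1), pinv(B1), pinv(B2)
L_A1 = np.eye(n) - A1p @ A1
R_B1 = np.eye(r) - B1 @ B1p
H = A2 @ L_A1;  Hp = pinv(H)
N = R_B1 @ B2;  Np = pinv(N)
R_H = np.eye(k) - H @ Hp
T = R_H @ A2;   Tp = pinv(T)
Q_B2 = B2 @ B2p
G = A1p @ C1 @ B1p                      # first term A1^+ C1 B1^+

X0 = (G + Hp @ C2 @ B2p + Tp @ C2 @ Np
      + Hp @ A2 @ Tp @ A2 @ G @ Q_B2
      - Hp @ A2 @ G @ Q_B2
      - Hp @ A2 @ Tp @ C2 @ B2p
      - Tp @ A2 @ G @ B2 @ Np)

assert np.allclose(A1 @ X0 @ B1, C1)    # (13) solves the first equation
assert np.allclose(A2 @ X0 @ B2, C2)    # and the second
```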

Further, we give determinantal representations of (13). Let $A_1 = \left(a_{ij}^{(1)}\right) \in \mathbb{H}_{r_1}^{m\times n}$, $B_1 = \left(b_{ij}^{(1)}\right) \in \mathbb{H}_{r_2}^{r\times s}$, $A_2 = \left(a_{ij}^{(2)}\right) \in \mathbb{H}_{r_3}^{k\times n}$, $B_2 = \left(b_{ij}^{(2)}\right) \in \mathbb{H}_{r_4}^{r\times p}$, $C_1 = \left(c_{ij}^{(1)}\right) \in \mathbb{H}^{m\times s}$, and $C_2 = \left(c_{ij}^{(2)}\right) \in \mathbb{H}^{k\times p}$, and let there exist $A_1^\dagger \in \mathbb{H}^{n\times m}$, $B_2^\dagger \in \mathbb{H}^{p\times r}$, $H^\dagger \in \mathbb{H}^{n\times k}$, $N^\dagger \in \mathbb{H}^{p\times r}$, and $T^\dagger \in \mathbb{H}^{n\times k}$. Let $\operatorname{rank} H = \operatorname{rank} A_2L_{A_1} = r_5$, $\operatorname{rank} N = \operatorname{rank} R_{B_1}B_2 = r_6$, and $\operatorname{rank} T = \operatorname{rank} R_HA_2 = r_7$. Consider each term of (13) separately.

(i) By Theorem 2.3, for the first term $x_{ij}^{01}$ of (13) we have

$$x_{ij}^{01} = \frac{\sum_{\beta \in J_{r_1,n}\{i\}} \operatorname{cdet}_i \left( (A_1^*A_1)_{.i}\left(d^{B_1}_{.j}\right) \right)_\beta^\beta}{\sum_{\beta \in J_{r_1,n}} \left| (A_1^*A_1)_\beta^\beta \right| \sum_{\alpha \in I_{r_2,r}} \left| (B_1B_1^*)_\alpha^\alpha \right|}, \tag{14}$$

or

$$x_{ij}^{01} = \frac{\sum_{\alpha \in I_{r_2,r}\{j\}} \operatorname{rdet}_j \left( (B_1B_1^*)_{j.}\left(d^{A_1}_{i.}\right) \right)_\alpha^\alpha}{\sum_{\beta \in J_{r_1,n}} \left| (A_1^*A_1)_\beta^\beta \right| \sum_{\alpha \in I_{r_2,r}} \left| (B_1B_1^*)_\alpha^\alpha \right|}, \tag{15}$$

where

$$d^{B_1}_{.j} = \left[ \sum_{\alpha \in I_{r_2,r}\{j\}} \operatorname{rdet}_j \left( (B_1B_1^*)_{j.}\left(\tilde c^{(1)}_{q.}\right) \right)_\alpha^\alpha \right] \in \mathbb{H}^{n\times 1}, \quad q = 1, \dots, n,$$

$$d^{A_1}_{i.} = \left[ \sum_{\beta \in J_{r_1,n}\{i\}} \operatorname{cdet}_i \left( (A_1^*A_1)_{.i}\left(\tilde c^{(1)}_{.l}\right) \right)_\beta^\beta \right] \in \mathbb{H}^{1\times r}, \quad l = 1, \dots, r,$$

are the column vector and the row vector, respectively. $\tilde c^{(1)}_{q.}$ and $\tilde c^{(1)}_{.l}$ are the $q$th row and the $l$th column of $\tilde C_1 = A_1^*C_1B_1^*$.
(ii) Similarly, for the second term of (13) we have

$$x_{ij}^{02} = \frac{\sum_{\beta \in J_{r_5,n}\{i\}} \operatorname{cdet}_i \left( (H^*H)_{.i}\left(d^{B_2}_{.j}\right) \right)_\beta^\beta}{\sum_{\beta \in J_{r_5,n}} \left| (H^*H)_\beta^\beta \right| \sum_{\alpha \in I_{r_4,r}} \left| (B_2B_2^*)_\alpha^\alpha \right|}, \tag{16}$$

or

$$x_{ij}^{02} = \frac{\sum_{\alpha \in I_{r_4,r}\{j\}} \operatorname{rdet}_j \left( (B_2B_2^*)_{j.}\left(d^{H}_{i.}\right) \right)_\alpha^\alpha}{\sum_{\beta \in J_{r_5,n}} \left| (H^*H)_\beta^\beta \right| \sum_{\alpha \in I_{r_4,r}} \left| (B_2B_2^*)_\alpha^\alpha \right|}, \tag{17}$$

where

$$d^{B_2}_{.j} = \left[ \sum_{\alpha \in I_{r_4,r}\{j\}} \operatorname{rdet}_j \left( (B_2B_2^*)_{j.}\left(\tilde c^{(2)}_{q.}\right) \right)_\alpha^\alpha \right] \in \mathbb{H}^{n\times 1}, \quad q = 1, \dots, n,$$

$$d^{H}_{i.} = \left[ \sum_{\beta \in J_{r_5,n}\{i\}} \operatorname{cdet}_i \left( (H^*H)_{.i}\left(\tilde c^{(2)}_{.l}\right) \right)_\beta^\beta \right] \in \mathbb{H}^{1\times r}, \quad l = 1, \dots, r,$$

are the column vector and the row vector, respectively. $\tilde c^{(2)}_{q.}$ and $\tilde c^{(2)}_{.l}$ are the $q$th row and the $l$th column of $\tilde C_2 = H^*C_2B_2^*$. Note that $H^*H = \left(A_2L_{A_1}\right)^*A_2L_{A_1} = L_{A_1}A_2^*A_2L_{A_1}$.
(iii) The third term of (13) can be obtained by Theorem 2.3 as well. Then

$$x_{ij}^{03} = \frac{\sum_{\beta \in J_{r_7,n}\{i\}} \operatorname{cdet}_i \left( (T^*T)_{.i}\left(d^{N}_{.j}\right) \right)_\beta^\beta}{\sum_{\beta \in J_{r_7,n}} \left| (T^*T)_\beta^\beta \right| \sum_{\alpha \in I_{r_6,r}} \left| (NN^*)_\alpha^\alpha \right|}, \tag{18}$$

or

$$x_{ij}^{03} = \frac{\sum_{\alpha \in I_{r_6,r}\{j\}} \operatorname{rdet}_j \left( (NN^*)_{j.}\left(d^{T}_{i.}\right) \right)_\alpha^\alpha}{\sum_{\beta \in J_{r_7,n}} \left| (T^*T)_\beta^\beta \right| \sum_{\alpha \in I_{r_6,r}} \left| (NN^*)_\alpha^\alpha \right|}, \tag{19}$$

where

$$d^{N}_{.j} = \left[ \sum_{\alpha \in I_{r_6,r}\{j\}} \operatorname{rdet}_j \left( (NN^*)_{j.}\left(\hat c^{(2)}_{q.}\right) \right)_\alpha^\alpha \right] \in \mathbb{H}^{n\times 1}, \quad q = 1, \dots, n,$$

$$d^{T}_{i.} = \left[ \sum_{\beta \in J_{r_7,n}\{i\}} \operatorname{cdet}_i \left( (T^*T)_{.i}\left(\hat c^{(2)}_{.l}\right) \right)_\beta^\beta \right] \in \mathbb{H}^{1\times r}, \quad l = 1, \dots, r,$$

are the column vector and the row vector, respectively. $\hat c^{(2)}_{q.}$ is the $q$th row and $\hat c^{(2)}_{.l}$ is the $l$th column of $\hat C_2 = T^*C_2N^*$. The following expression gives some simplification in computing. Since $T^*T = \left(R_HA_2\right)^*R_HA_2 = A_2^*R_HR_HA_2 = A_2^*R_HA_2$ and $R_H = I - HH^\dagger = I - A_2L_{A_1}\left(A_2L_{A_1}\right)^\dagger = I - A_2\left(A_2L_{A_1}\right)^\dagger$, then $T^*T = A_2^*\left(I - A_2\left(A_2L_{A_1}\right)^\dagger\right)A_2$.

(iv) Using (2) for the determinantal representations of $H^\dagger$ and $T^\dagger$ in the fourth term of (13), we obtain

$$x_{ij}^{04} = \frac{\sum_{q=1}^n \sum_{z=1}^n \sum_{f=1}^r \left(\sum_{\beta \in J_{r_5,n}\{i\}} \operatorname{cdet}_i \left( (H^*H)_{.i}\left(a^{2H}_{.q}\right) \right)_\beta^\beta\right)\left(\sum_{\beta \in J_{r_7,n}\{q\}} \operatorname{cdet}_q \left( (T^*T)_{.q}\left(a^{2T}_{.z}\right) \right)_\beta^\beta\right) x_{zf}^{01}\, q_{fj}}{\sum_{\beta \in J_{r_5,n}} \left| (H^*H)_\beta^\beta \right| \sum_{\beta \in J_{r_7,n}} \left| (T^*T)_\beta^\beta \right|}, \tag{20}$$

where $a^{2H}_{.q}$ and $a^{2T}_{.q}$ are the $q$th columns of the matrices $H^*A_2$ and $T^*A_2$, respectively, and $q_{fj}$ is the $(fj)$th element of $Q_{B_2}$ with the determinantal representation

$$q_{fj} = \frac{\sum_{\alpha \in I_{r_4,r}\{j\}} \operatorname{rdet}_j \left( (B_2B_2^*)_{j.}\left(\ddot b^{(2)}_{f.}\right) \right)_\alpha^\alpha}{\sum_{\alpha \in I_{r_4,r}} \left| (B_2B_2^*)_\alpha^\alpha \right|},$$

where $\ddot b^{(2)}_{f.}$ is the $f$th row of $B_2B_2^*$. Note that $H^*A_2 = L_{A_1}A_2^*A_2$ and $T^*A_2 = A_2^*R_HA_2 = A_2^*\left(I - A_2\left(A_2L_{A_1}\right)^\dagger\right)A_2$.
(v) Similarly to the previous case,

$$x_{ij}^{05} = \frac{\sum_{q=1}^n \sum_{f=1}^r \left(\sum_{\beta \in J_{r_5,n}\{i\}} \operatorname{cdet}_i \left( (H^*H)_{.i}\left(a^{2H}_{.q}\right) \right)_\beta^\beta\right) x_{qf}^{01}\, q_{fj}}{\sum_{\beta \in J_{r_5,n}} \left| (H^*H)_\beta^\beta \right|}. \tag{21}$$
(vi) Consider the sixth term by analogy with the fourth term. So,

$$x_{ij}^{06} = \frac{\sum_{q=1}^n \left(\sum_{\beta \in J_{r_5,n}\{i\}} \operatorname{cdet}_i \left( (H^*H)_{.i}\left(a^{2H}_{.q}\right) \right)_\beta^\beta\right) \varphi_{qj}}{\sum_{\beta \in J_{r_5,n}} \left| (H^*H)_\beta^\beta \right| \sum_{\beta \in J_{r_7,n}} \left| (T^*T)_\beta^\beta \right| \sum_{\alpha \in I_{r_4,r}} \left| (B_2B_2^*)_\alpha^\alpha \right|}, \tag{22}$$

where

$$\varphi_{qj} = \sum_{\beta \in J_{r_7,n}\{q\}} \operatorname{cdet}_q \left( (T^*T)_{.q}\left(\psi^{B_2}_{.j}\right) \right)_\beta^\beta, \tag{23}$$

or

$$\varphi_{qj} = \sum_{\alpha \in I_{r_4,r}\{j\}} \operatorname{rdet}_j \left( (B_2B_2^*)_{j.}\left(\psi^{T}_{q.}\right) \right)_\alpha^\alpha, \tag{24}$$

and

$$\psi^{B_2}_{.j} = \left[ \sum_{\alpha \in I_{r_4,r}\{j\}} \operatorname{rdet}_j \left( (B_2B_2^*)_{j.}\left(c''_{q.}\right) \right)_\alpha^\alpha \right] \in \mathbb{H}^{n\times 1}, \quad q = 1, \dots, n,$$

$$\psi^{T}_{q.} = \left[ \sum_{\beta \in J_{r_7,n}\{q\}} \operatorname{cdet}_q \left( (T^*T)_{.q}\left(c''_{.l}\right) \right)_\beta^\beta \right] \in \mathbb{H}^{1\times r}, \quad l = 1, \dots, r,$$

are the column vector and the row vector, respectively. $c''_{q.}$ and $c''_{.l}$ are the $q$th row and the $l$th column of $C''_2 = T^*C_2B_2^*$ for all $i = 1, \dots, n$ and $j = 1, \dots, r$.
(vii) Using (2) for the determinantal representation of $T^\dagger$ and (3) for $N^\dagger$ in the seventh term of (13), we obtain

$$x_{ij}^{07} = \frac{\sum_{q=1}^n \sum_{f=1}^r \left(\sum_{\beta \in J_{r_7,n}\{i\}} \operatorname{cdet}_i \left( (T^*T)_{.i}\left(a^{2T}_{.q}\right) \right)_\beta^\beta\right) x_{qf}^{01} \left(\sum_{\alpha \in I_{r_6,r}\{j\}} \operatorname{rdet}_j \left( (NN^*)_{j.}\left(b^{2N}_{f.}\right) \right)_\alpha^\alpha\right)}{\sum_{\beta \in J_{r_7,n}} \left| (T^*T)_\beta^\beta \right| \sum_{\alpha \in I_{r_6,r}} \left| (NN^*)_\alpha^\alpha \right|}, \tag{25}$$

where $a^{2T}_{.q}$ is the $q$th column of $T^*A_2$ and $b^{2N}_{f.}$ is the $f$th row of $B_2N^* = B_2B_2^*R_{B_1}$, respectively.

Hence, we have proven the following theorem.

Theorem 3.1. Let $A_1 \in \mathbb{H}_{r_1}^{m\times n}$, $B_1 \in \mathbb{H}_{r_2}^{r\times s}$, $A_2 \in \mathbb{H}_{r_3}^{k\times n}$, $B_2 \in \mathbb{H}_{r_4}^{r\times p}$, $\operatorname{rank} H = \operatorname{rank} A_2L_{A_1} = r_5$, $\operatorname{rank} N = \operatorname{rank} R_{B_1}B_2 = r_6$, and $\operatorname{rank} T = \operatorname{rank} R_HA_2 = r_7$. Then, for the partial solution (13), $X^0 = \left(x_{ij}^0\right) \in \mathbb{H}^{n\times r}$, of the system (1), we have

$$x_{ij}^0 = x_{ij}^{01} + x_{ij}^{02} + x_{ij}^{03} + x_{ij}^{04} - x_{ij}^{05} - x_{ij}^{06} - x_{ij}^{07}, \tag{26}$$

where the term $x_{ij}^{01}$ has the determinantal representations (14) and (15), $x_{ij}^{02}$ has (16) and (17), $x_{ij}^{03}$ has (18) and (19), $x_{ij}^{04}$ has (20), $x_{ij}^{05}$ has (21), $x_{ij}^{06}$ has (22) with (23) or (24), and $x_{ij}^{07}$ has (25).

## 4. Cramer’s rules for special cases of (1)

In this section, we consider special cases of (1) in which one or both equations are one-sided. Suppose first that in (1) the matrix $B_1$ is the identity, so that the first equation is one-sided. Then we have the system

$$A_1X = C_1, \qquad A_2XB_2 = C_2. \tag{27}$$

The following lemma is extended to matrices with quaternion entries.

Lemma 4.1. [7] Let $A_1 \in \mathbb{H}^{m\times n}$, $C_1 \in \mathbb{H}^{m\times r}$, $A_2 \in \mathbb{H}^{k\times n}$, $B_2 \in \mathbb{H}^{r\times p}$, and $C_2 \in \mathbb{H}^{k\times p}$ be given and $X \in \mathbb{H}^{n\times r}$ be to be determined. Put $H = A_2L_{A_1}$. Then the following statements are equivalent:

1. System (27) is consistent.

2. $R_{A_1}C_1 = 0$, $R_H\left(C_2 - A_2A_1^\dagger C_1B_2\right) = 0$, $C_2L_{B_2} = 0$.

3. $\operatorname{rank}\begin{bmatrix} A_1 & C_1 \end{bmatrix} = \operatorname{rank} A_1$, $\operatorname{rank}\begin{bmatrix} C_2 \\ B_2 \end{bmatrix} = \operatorname{rank} B_2$, $\operatorname{rank}\begin{bmatrix} A_1 & C_1B_2 \\ A_2 & C_2 \end{bmatrix} = \operatorname{rank}\begin{bmatrix} A_1 \\ A_2 \end{bmatrix}$.

In this case, the general solution of (27) can be expressed as

$$X = A_1^\dagger C_1 + L_{A_1}H^\dagger\left(C_2 - A_2A_1^\dagger C_1B_2\right)B_2^\dagger + L_{A_1}L_HZ_1 + L_{A_1}W_1R_{B_2}, \tag{28}$$

where $Z_1$ and $W_1$ are arbitrary matrices over $\mathbb{H}$ with appropriate sizes.

where Z1 and W1 are the arbitrary matrices over H with appropriate sizes.

Since, by (9), $L_{A_1}H^\dagger = L_{A_1}\left(A_2L_{A_1}\right)^\dagger = \left(A_2L_{A_1}\right)^\dagger = H^\dagger$, we have some simplification of (28):

$$X = A_1^\dagger C_1 + H^\dagger C_2B_2^\dagger - H^\dagger A_2A_1^\dagger C_1B_2B_2^\dagger + L_{A_1}L_HZ_1 + L_{A_1}W_1R_{B_2}.$$

By putting $Z_1 = W_1 = 0$, we obtain the following partial solution of (27):

$$X^0 = A_1^\dagger C_1 + H^\dagger C_2B_2^\dagger - H^\dagger A_2A_1^\dagger C_1B_2B_2^\dagger. \tag{29}$$
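A numerical check of (29) over the complex field (stand-in for $\mathbb{H}$); $A_1$ is taken rank-deficient so that $H = A_2L_{A_1}$ is nonzero, and the right-hand sides are built consistent:

```python
import numpy as np

rng = np.random.default_rng(4)
pinv = np.linalg.pinv
def cplx(*s): return rng.standard_normal(s) + 1j * rng.standard_normal(s)

m, n, k, r, p = 4, 3, 3, 4, 5
A1 = cplx(m, 2) @ cplx(2, n)            # rank 2 < n, so L_{A1} != 0
A2, B2 = cplx(k, n), cplx(r, p)
Xt = cplx(n, r)
C1, C2 = A1 @ Xt, A2 @ Xt @ B2          # consistent right-hand sides

A1p = pinv(A1)
L_A1 = np.eye(n) - A1p @ A1
Hp = pinv(A2 @ L_A1)
# (29), written as A1^+ C1 + H^+ (C2 - A2 A1^+ C1 B2) B2^+
X0 = A1p @ C1 + Hp @ (C2 - A2 @ A1p @ C1 @ B2) @ pinv(B2)

assert np.allclose(A1 @ X0, C1)
assert np.allclose(A2 @ X0 @ B2, C2)
```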

Theorem 4.1. Let $A_1 = \left(a_{ij}^{(1)}\right) \in \mathbb{H}_{r_1}^{m\times n}$, $A_2 = \left(a_{ij}^{(2)}\right) \in \mathbb{H}_{r_2}^{k\times n}$, $B_2 = \left(b_{ij}^{(2)}\right) \in \mathbb{H}_{r_3}^{r\times p}$, $C_1 = \left(c_{ij}^{(1)}\right) \in \mathbb{H}^{m\times r}$, and $C_2 = \left(c_{ij}^{(2)}\right) \in \mathbb{H}^{k\times p}$, and let there exist $A_1^\dagger \in \mathbb{H}^{n\times m}$, $B_2^\dagger \in \mathbb{H}^{p\times r}$, and $H^\dagger \in \mathbb{H}^{n\times k}$. Let $\operatorname{rank} H = \operatorname{rank} A_2L_{A_1} = r_4$. Denote $\hat C_1 := A_1^*C_1 \in \mathbb{H}^{n\times r}$, $\hat C_2 := H^*C_2B_2^* \in \mathbb{H}^{n\times r}$, $\hat A_2 := H^*A_2A_1^* \in \mathbb{H}^{n\times m}$, and $\hat Q := C_1B_2B_2^* \in \mathbb{H}^{m\times r}$. Then the partial solution (29), $X^0 = \left(x_{ij}^0\right) \in \mathbb{H}^{n\times r}$, possesses the following determinantal representations:

$$x_{ij}^0 = \frac{\sum_{\beta \in J_{r_1,n}\{i\}} \operatorname{cdet}_i\left((A_1^*A_1)_{.i}\left(\hat c^{(1)}_{.j}\right)\right)_\beta^\beta}{\sum_{\beta \in J_{r_1,n}} \left|(A_1^*A_1)_\beta^\beta\right|} + \frac{d_{ij}^{(\lambda)}}{\sum_{\beta \in J_{r_4,n}} \left|(H^*H)_\beta^\beta\right| \sum_{\alpha \in I_{r_3,r}} \left|(B_2B_2^*)_\alpha^\alpha\right|} - \frac{\sum_{l=1}^m g_{il}^{(\mu)} \sum_{\alpha \in I_{r_3,r}\{j\}} \operatorname{rdet}_j\left((B_2B_2^*)_{j.}\left(\hat q_{l.}\right)\right)_\alpha^\alpha}{\sum_{\beta \in J_{r_4,n}} \left|(H^*H)_\beta^\beta\right| \sum_{\alpha \in I_{r_1,m}} \left|(A_1A_1^*)_\alpha^\alpha\right| \sum_{\alpha \in I_{r_3,r}} \left|(B_2B_2^*)_\alpha^\alpha\right|}$$

for all $\lambda = 1, 2$ and $\mu = 1, 2$. Here

$$d_{ij}^{(1)} := \sum_{\alpha \in I_{r_3,r}\{j\}} \operatorname{rdet}_j\left((B_2B_2^*)_{j.}\left(v^{(1)}_{i.}\right)\right)_\alpha^\alpha, \qquad g_{il}^{(1)} := \sum_{\alpha \in I_{r_1,m}\{l\}} \operatorname{rdet}_l\left((A_1A_1^*)_{l.}\left(u^{(1)}_{i.}\right)\right)_\alpha^\alpha,$$

and the row vectors $v^{(1)}_{i.} = \left(v^{(1)}_{i1}, \dots, v^{(1)}_{ir}\right)$ and $u^{(1)}_{i.} = \left(u^{(1)}_{i1}, \dots, u^{(1)}_{im}\right)$ are such that

$$v^{(1)}_{it} := \sum_{\beta \in J_{r_4,n}\{i\}} \operatorname{cdet}_i\left((H^*H)_{.i}\left(\hat c^{(2)}_{.t}\right)\right)_\beta^\beta, \qquad u^{(1)}_{iz} := \sum_{\beta \in J_{r_4,n}\{i\}} \operatorname{cdet}_i\left((H^*H)_{.i}\left(\hat a^{(2)}_{.z}\right)\right)_\beta^\beta.$$

In the other case,

$$d_{ij}^{(2)} := \sum_{\beta \in J_{r_4,n}\{i\}} \operatorname{cdet}_i\left((H^*H)_{.i}\left(v^{(2)}_{.j}\right)\right)_\beta^\beta, \qquad g_{il}^{(2)} := \sum_{\beta \in J_{r_4,n}\{i\}} \operatorname{cdet}_i\left((H^*H)_{.i}\left(u^{(2)}_{.l}\right)\right)_\beta^\beta,$$

and the column vectors $v^{(2)}_{.j} = \left(v^{(2)}_{1j}, \dots, v^{(2)}_{nj}\right)^T$ and $u^{(2)}_{.l} = \left(u^{(2)}_{1l}, \dots, u^{(2)}_{nl}\right)^T$ are such that

$$v^{(2)}_{qj} := \sum_{\alpha \in I_{r_3,r}\{j\}} \operatorname{rdet}_j\left((B_2B_2^*)_{j.}\left(\hat c^{(2)}_{q.}\right)\right)_\alpha^\alpha, \qquad u^{(2)}_{ql} := \sum_{\alpha \in I_{r_1,m}\{l\}} \operatorname{rdet}_l\left((A_1A_1^*)_{l.}\left(\hat a^{(2)}_{q.}\right)\right)_\alpha^\alpha.$$

Proof. The proof is similar to the proof of Theorem 3.1.

Now suppose that in (1) the matrix $A_1$ is the identity, so that the first equation is one-sided from the other side. Then we have the system

$$XB_1 = C_1, \qquad A_2XB_2 = C_2. \tag{30}$$

The following lemma is extended to matrices with quaternion entries as well.

Lemma 4.2. [7] Let $B_1 \in \mathbb{H}^{r\times s}$, $C_1 \in \mathbb{H}^{n\times s}$, $A_2 \in \mathbb{H}^{k\times n}$, $B_2 \in \mathbb{H}^{r\times p}$, and $C_2 \in \mathbb{H}^{k\times p}$ be given and $X \in \mathbb{H}^{n\times r}$ be to be determined. Put $N = R_{B_1}B_2$. Then the following statements are equivalent:

1. System (30) is consistent.

2. $R_{A_2}C_2 = 0$, $\left(C_2 - A_2C_1B_1^\dagger B_2\right)L_N = 0$, $C_1L_{B_1} = 0$.

3. $\operatorname{rank}\begin{bmatrix} A_2 & C_2 \end{bmatrix} = \operatorname{rank} A_2$, $\operatorname{rank}\begin{bmatrix} C_1 \\ B_1 \end{bmatrix} = \operatorname{rank} B_1$, $\operatorname{rank}\begin{bmatrix} C_2 & A_2C_1 \\ B_2 & B_1 \end{bmatrix} = \operatorname{rank}\begin{bmatrix} B_2 & B_1 \end{bmatrix}$.

In this case, the general solution of (30) can be expressed as

$$X = C_1B_1^\dagger + A_2^\dagger\left(C_2 - A_2C_1B_1^\dagger B_2\right)N^\dagger R_{B_1} + L_{A_2}W_2R_{B_1} + Z_2R_NR_{B_1}, \tag{31}$$

where $Z_2$ and $W_2$ are arbitrary matrices over $\mathbb{H}$ with appropriate sizes.

where Z2 and W2 are the arbitrary matrices over H with appropriate sizes.

Since, by (10), $N^\dagger R_{B_1} = \left(R_{B_1}B_2\right)^\dagger R_{B_1} = N^\dagger$, some simplification of (31) can be derived:

$$X = C_1B_1^\dagger + A_2^\dagger C_2N^\dagger - A_2^\dagger A_2C_1B_1^\dagger B_2N^\dagger + L_{A_2}W_2R_{B_1} + Z_2R_NR_{B_1}.$$

By putting $Z_2 = W_2 = 0$, we obtain the following partial solution of (30):

$$X^0 = C_1B_1^\dagger + A_2^\dagger C_2N^\dagger - A_2^\dagger A_2C_1B_1^\dagger B_2N^\dagger. \tag{32}$$

The following theorem on the determinantal representations of (32) can be proven similarly to Theorem 3.1 as well.

Theorem 4.2. Let $B_1 = \left(b_{ij}^{(1)}\right) \in \mathbb{H}_{r_1}^{r\times s}$, $A_2 = \left(a_{ij}^{(2)}\right) \in \mathbb{H}_{r_2}^{k\times n}$, $B_2 = \left(b_{ij}^{(2)}\right) \in \mathbb{H}_{r_3}^{r\times p}$, $C_1 = \left(c_{ij}^{(1)}\right) \in \mathbb{H}^{n\times s}$, and $C_2 = \left(c_{ij}^{(2)}\right) \in \mathbb{H}^{k\times p}$, and let there exist $B_1^\dagger \in \mathbb{H}^{s\times r}$, $A_2^\dagger \in \mathbb{H}^{n\times k}$, and $N^\dagger \in \mathbb{H}^{p\times r}$. Let $\operatorname{rank} N = \operatorname{rank} R_{B_1}B_2 = r_4$. Denote $\tilde C_1 := C_1B_1^* \in \mathbb{H}^{n\times r}$, $\tilde C_2 := A_2^*C_2N^* \in \mathbb{H}^{n\times r}$, $\tilde B_2 := B_1^*B_2N^* \in \mathbb{H}^{s\times r}$, and $\tilde P := A_2^*A_2C_1 \in \mathbb{H}^{n\times s}$. Then the partial solution (32), $X^0 = \left(x_{ij}^0\right) \in \mathbb{H}^{n\times r}$, possesses the following determinantal representations:

$$x_{ij}^0 = \frac{\sum_{\alpha \in I_{r_1,r}\{j\}} \operatorname{rdet}_j\left((B_1B_1^*)_{j.}\left(\tilde c^{(1)}_{i.}\right)\right)_\alpha^\alpha}{\sum_{\alpha \in I_{r_1,r}} \left|(B_1B_1^*)_\alpha^\alpha\right|} + \frac{d_{ij}^{(\lambda)}}{\sum_{\beta \in J_{r_2,n}} \left|(A_2^*A_2)_\beta^\beta\right| \sum_{\alpha \in I_{r_4,r}} \left|(NN^*)_\alpha^\alpha\right|} - \frac{\sum_{z=1}^s \left(\sum_{\beta \in J_{r_2,n}\{i\}} \operatorname{cdet}_i\left((A_2^*A_2)_{.i}\left(\tilde p_{.z}\right)\right)_\beta^\beta\right) g_{zj}^{(\mu)}}{\sum_{\beta \in J_{r_2,n}} \left|(A_2^*A_2)_\beta^\beta\right| \sum_{\beta \in J_{r_1,s}} \left|(B_1^*B_1)_\beta^\beta\right| \sum_{\alpha \in I_{r_4,r}} \left|(NN^*)_\alpha^\alpha\right|}$$

for all $\lambda = 1, 2$ and $\mu = 1, 2$. Here

$$d_{ij}^{(1)} := \sum_{\alpha \in I_{r_4,r}\{j\}} \operatorname{rdet}_j\left((NN^*)_{j.}\left(\varphi^{(1)}_{i.}\right)\right)_\alpha^\alpha, \qquad g_{zj}^{(1)} := \sum_{\alpha \in I_{r_4,r}\{j\}} \operatorname{rdet}_j\left((NN^*)_{j.}\left(\psi^{(1)}_{z.}\right)\right)_\alpha^\alpha,$$

and the row vectors $\varphi^{(1)}_{i.} = \left(\varphi^{(1)}_{i1}, \dots, \varphi^{(1)}_{ir}\right)$ and $\psi^{(1)}_{z.} = \left(\psi^{(1)}_{z1}, \dots, \psi^{(1)}_{zr}\right)$ are such that

$$\varphi^{(1)}_{it} = \sum_{\beta \in J_{r_2,n}\{i\}} \operatorname{cdet}_i\left((A_2^*A_2)_{.i}\left(\tilde c^{(2)}_{.t}\right)\right)_\beta^\beta, \qquad \psi^{(1)}_{zv} = \sum_{\beta \in J_{r_1,s}\{z\}} \operatorname{cdet}_z\left((B_1^*B_1)_{.z}\left(\tilde b^{(2)}_{.v}\right)\right)_\beta^\beta.$$

In the other case,

$$d_{ij}^{(2)} := \sum_{\beta \in J_{r_2,n}\{i\}} \operatorname{cdet}_i\left((A_2^*A_2)_{.i}\left(\varphi^{(2)}_{.j}\right)\right)_\beta^\beta, \qquad g_{zj}^{(2)} := \sum_{\beta \in J_{r_1,s}\{z\}} \operatorname{cdet}_z\left((B_1^*B_1)_{.z}\left(\psi^{(2)}_{.j}\right)\right)_\beta^\beta,$$

and the column vectors $\varphi^{(2)}_{.j} = \left(\varphi^{(2)}_{1j}, \dots, \varphi^{(2)}_{nj}\right)^T$ and $\psi^{(2)}_{.j} = \left(\psi^{(2)}_{1j}, \dots, \psi^{(2)}_{sj}\right)^T$ are such that

$$\varphi^{(2)}_{qj} = \sum_{\alpha \in I_{r_4,r}\{j\}} \operatorname{rdet}_j\left((NN^*)_{j.}\left(\tilde c^{(2)}_{q.}\right)\right)_\alpha^\alpha, \qquad \psi^{(2)}_{uj} = \sum_{\alpha \in I_{r_4,r}\{j\}} \operatorname{rdet}_j\left((NN^*)_{j.}\left(\tilde b^{(2)}_{u.}\right)\right)_\alpha^\alpha.$$

Now suppose that both equations of (1) are one-sided; that is, the matrices $B_1$ and $A_2$ in (1) are identities. Then we have the system

$$A_1X = C_1, \qquad XB_2 = C_2. \tag{33}$$

The following lemma is extended to matrices with quaternion entries.

Lemma 4.3. [31] Let $A_1 \in \mathbb{H}^{m\times n}$, $B_2 \in \mathbb{H}^{r\times p}$, $C_1 \in \mathbb{H}^{m\times r}$, and $C_2 \in \mathbb{H}^{n\times p}$ be given and $X \in \mathbb{H}^{n\times r}$ be to be determined. Then the system (33) is consistent if and only if $R_{A_1}C_1 = 0$, $C_2L_{B_2} = 0$, and $A_1C_2 = C_1B_2$. Under these conditions, the general solution to (33) can be established as

$$X = A_1^\dagger C_1 + L_{A_1}C_2B_2^\dagger + L_{A_1}UR_{B_2}, \tag{34}$$

where $U$ is a free matrix over $\mathbb{H}$ with a suitable shape.

Due to the consistency conditions, Eq. (34) can be expressed as follows:

$$X = C_2B_2^\dagger + A_1^\dagger\left(C_1 - A_1C_2B_2^\dagger\right) + L_{A_1}UR_{B_2} = C_2B_2^\dagger + A_1^\dagger\left(C_1 - C_1B_2B_2^\dagger\right) + L_{A_1}UR_{B_2} = C_2B_2^\dagger + A_1^\dagger C_1R_{B_2} + L_{A_1}UR_{B_2}.$$

Consequently, the partial solution $X^0$ to (33) is given by

$$X^0 = A_1^\dagger C_1 + L_{A_1}C_2B_2^\dagger, \tag{35}$$

or

$$X^0 = C_2B_2^\dagger + A_1^\dagger C_1R_{B_2}. \tag{36}$$
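The two expressions (35) and (36) coincide whenever the consistency conditions hold, which can be checked numerically (complex stand-in for $\mathbb{H}$):

```python
import numpy as np

rng = np.random.default_rng(7)
pinv = np.linalg.pinv
def cplx(*s): return rng.standard_normal(s) + 1j * rng.standard_normal(s)

m, n, r, p = 4, 3, 4, 5
A1 = cplx(m, 2) @ cplx(2, n)            # rank 2 < n, so L_{A1} != 0
B2 = cplx(r, p)
Xt = cplx(n, r)
C1, C2 = A1 @ Xt, Xt @ B2               # consistent (in particular A1 C2 = C1 B2)

A1p, B2p = pinv(A1), pinv(B2)
L_A1 = np.eye(n) - A1p @ A1
R_B2 = np.eye(r) - B2 @ B2p
X0a = A1p @ C1 + L_A1 @ C2 @ B2p        # (35)
X0b = C2 @ B2p + A1p @ C1 @ R_B2        # (36)

assert np.allclose(X0a, X0b)            # the two expressions coincide
assert np.allclose(A1 @ X0a, C1)
assert np.allclose(X0a @ B2, C2)
```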

Due to the expression (35), the following theorem can be proven similarly to Theorem 3.1.

Theorem 4.3. Let $A_1 = \left(a_{ij}^{(1)}\right) \in \mathbb{H}_{r_1}^{m\times n}$, $B_2 = \left(b_{ij}^{(2)}\right) \in \mathbb{H}_{r_2}^{r\times p}$, $C_1 = \left(c_{ij}^{(1)}\right) \in \mathbb{H}^{m\times r}$, and $C_2 = \left(c_{ij}^{(2)}\right) \in \mathbb{H}^{n\times p}$, and let there exist $A_1^\dagger \in \mathbb{H}^{n\times m}$, $B_2^\dagger \in \mathbb{H}^{p\times r}$, and $L_{A_1} = I - A_1^\dagger A_1 = \left(l_{ij}\right) \in \mathbb{H}^{n\times n}$. Denote $\hat C_1 := A_1^*C_1 \in \mathbb{H}^{n\times r}$ and $\hat C_2 := L_{A_1}C_2B_2^* \in \mathbb{H}^{n\times r}$. Then the partial solution (35), $X^0 = \left(x_{ij}^0\right) \in \mathbb{H}^{n\times r}$, possesses the following determinantal representation:

$$x_{ij}^0 = \frac{\sum_{\beta \in J_{r_1,n}\{i\}} \operatorname{cdet}_i\left((A_1^*A_1)_{.i}\left(\hat c^{(1)}_{.j}\right)\right)_\beta^\beta}{\sum_{\beta \in J_{r_1,n}} \left|(A_1^*A_1)_\beta^\beta\right|} + \frac{\sum_{\alpha \in I_{r_2,r}\{j\}} \operatorname{rdet}_j\left((B_2B_2^*)_{j.}\left(\hat c^{(2)}_{i.}\right)\right)_\alpha^\alpha}{\sum_{\alpha \in I_{r_2,r}} \left|(B_2B_2^*)_\alpha^\alpha\right|}, \tag{37}$$

where $\hat c^{(1)}_{.j}$ is the $j$th column of $\hat C_1$ and $\hat c^{(2)}_{i.}$ is the $i$th row of $\hat C_2$.

Remark 4.1. In accordance with the expression (36), we obtain the same representation, but with the denotations $\hat C_2 := C_2B_2^* \in \mathbb{H}^{n\times r}$ and $\hat C_1 := A_1^*C_1R_{B_2} \in \mathbb{H}^{n\times r}$.

Finally, suppose that the matrices $B_1$ and $B_2$ in (1) are identities. Then we have the system

$$A_1X = C_1, \qquad A_2X = C_2. \tag{38}$$

Lemma 4.4. [7] Suppose that $A_1 \in \mathbb{H}^{m\times n}$, $C_1 \in \mathbb{H}^{m\times r}$, $A_2 \in \mathbb{H}^{k\times n}$, and $C_2 \in \mathbb{H}^{k\times r}$ are known, $X \in \mathbb{H}^{n\times r}$ is unknown, $H = A_2L_{A_1}$, and $T = R_HA_2$. Then the system (38) is consistent if and only if $A_iA_i^\dagger C_i = C_i$ for all $i = 1, 2$, and $T\left(A_2^\dagger C_2 - A_1^\dagger C_1\right) = 0$. Under these conditions, the general solution to (38) can be established as

$$X = A_1^\dagger C_1 + L_{A_1}H^\dagger A_2\left(A_2^\dagger C_2 - A_1^\dagger C_1\right) + L_{A_1}L_HY, \tag{39}$$

where $Y$ is an arbitrary matrix over $\mathbb{H}$ with an appropriate size.

Using (9), (10), and the consistency conditions, we simplify (39) to $X = A_1^\dagger C_1 + H^\dagger C_2 - H^\dagger A_2A_1^\dagger C_1 + L_{A_1}L_HY$. Consequently, the following partial solution of (38) will be considered:

$$X^0 = A_1^\dagger C_1 + H^\dagger C_2 - H^\dagger A_2A_1^\dagger C_1. \tag{40}$$

In the following theorem, we give the determinantal representations of (40).

Theorem 4.4. Let $A_1 = \left(a_{ij}^{(1)}\right) \in \mathbb{H}_{r_1}^{m\times n}$, $A_2 = \left(a_{ij}^{(2)}\right) \in \mathbb{H}_{r_2}^{k\times n}$, $C_1 = \left(c_{ij}^{(1)}\right) \in \mathbb{H}^{m\times r}$, and $C_2 = \left(c_{ij}^{(2)}\right) \in \mathbb{H}^{k\times r}$, and let there exist $A_1^\dagger \in \mathbb{H}^{n\times m}$ and $H^\dagger \in \mathbb{H}^{n\times k}$. Let $\operatorname{rank} H = \operatorname{rank} A_2L_{A_1} = r_3$. Denote $\hat C_1 := A_1^*C_1 \in \mathbb{H}^{n\times r}$, $\hat C_2 := H^*C_2 \in \mathbb{H}^{n\times r}$, and $\hat A_2 := H^*A_2 \in \mathbb{H}^{n\times n}$. Then $X^0 = \left(x_{ij}^0\right) \in \mathbb{H}^{n\times r}$ possesses the following determinantal representation:

$$x_{ij}^0 = \frac{\sum_{\beta \in J_{r_1,n}\{i\}} \operatorname{cdet}_i\left((A_1^*A_1)_{.i}\left(\hat c^{(1)}_{.j}\right)\right)_\beta^\beta}{\sum_{\beta \in J_{r_1,n}} \left|(A_1^*A_1)_\beta^\beta\right|} + \frac{\sum_{\beta \in J_{r_3,n}\{i\}} \operatorname{cdet}_i\left((H^*H)_{.i}\left(\hat c^{(2)}_{.j}\right)\right)_\beta^\beta}{\sum_{\beta \in J_{r_3,n}} \left|(H^*H)_\beta^\beta\right|} - \sum_{l=1}^n \frac{\sum_{\beta \in J_{r_3,n}\{i\}} \operatorname{cdet}_i\left((H^*H)_{.i}\left(\hat a^{(2)}_{.l}\right)\right)_\beta^\beta}{\sum_{\beta \in J_{r_3,n}} \left|(H^*H)_\beta^\beta\right|} \cdot \frac{\sum_{\beta \in J_{r_1,n}\{l\}} \operatorname{cdet}_l\left((A_1^*A_1)_{.l}\left(\hat c^{(1)}_{.j}\right)\right)_\beta^\beta}{\sum_{\beta \in J_{r_1,n}} \left|(A_1^*A_1)_\beta^\beta\right|}, \tag{41}$$

where $\hat c^{(1)}_{.j}$ and $\hat c^{(2)}_{.j}$ are the $j$th columns of $\hat C_1$ and $\hat C_2$, and $\hat a^{(2)}_{.l}$ is the $l$th column of $\hat A_2$.

Proof. The proof is similar to the proof of Theorem 3.1.

## 5. Conclusion

Within the framework of the theory of row-column determinants previously introduced by the author, we have obtained determinantal representations (analogs of Cramer's rule) of partial solutions to the system of two-sided quaternion matrix equations $A_1XB_1 = C_1$, $A_2XB_2 = C_2$ and to its special cases in which one or both matrix equations are one-sided. We have used the determinantal representations of the Moore-Penrose inverse previously obtained by the author. Note that, to obtain determinantal representations for all the above matrix systems over the complex field, it suffices to replace all row and column determinants by usual determinants.

## Conflict of interest

The author declares that there is no conflict of interest.

## References

1. 1. Mitra SK. A pair of simultaneous linear matrix equations A1XB1=C1 and A2XB2=C2. Proceedings of the Cambridge Philosophical Society. 1973;74:213-216
2. 2. Mitra SK. A pair of simultaneous linear matrix equations and a matrix programming problem. Linear Algebra and its Applications. 1990;131:97-123. DOI: 10.1016/0024-3795(90)90377-O
3. 3. Shinozaki N, Sibuya M. Consistency of a pair of matrix equations with an application. Keio Engineering Report. 1974;27:141-146
4. 4. Van der Woulde J. Feedback decoupling and stabilization for linear system with multiple exogenous variables [PhD thesis]. Netherlands: Technical University of Eindhoven; 1987
5. 5. Özgüler AB, Akar N. A common solution to a pair of linear matrix equations over a principal ideal domain. Linear Algebra and its Applications. 1991;144:85-99. DOI: 10.1016/0024-3795(91)90063-3
6. 6. Navarra A, Odell PL, Young DM. A representation of the general common solution to the matrix equations A1XB1=C1 and A2XB2=C2 with applications. Computers & Mathematics with Applications. 2001;41:929-935. DOI: 10.1016/S0898-1221(00)00330-8
7. 7. Wang QW. The general solution to a system of real quaternion matrix equations. Computers & Mathematics with Applications. 2005;49:665-675. DOI: 10.1016/j.camwa.2004.12.00
8. 8. Kyrchei I. Cramer’s rule for generalized inverse solutions. In: Kyrchei I, editor. Advances in Linear Algebra Research. New York: Nova Sci. Publ; 2015. pp. 79-132
9. 9. Aslaksen H. Quaternionic determinants. Mathematical Intelligence. 1996;18(3):57-65. DOI: 10.1007/BF03024312
10. 10. Cohen N, De Leo S. The quaternionic determinant. Electronic Journal of Linear Algebra. 2000;7:100-111. DOI: 10.13001/1081-3810.1050
11. 11. Dieudonne J. Les determinants sur un corps non-commutatif. Bulletin de la Société Mathématique de France. 1943;71:27-45
12. 12. Study E. Zur theorie der linearen gleichungen. Acta Mathematica. 1920;42:1-61
13. 13. Cayley A. On certain results relating to quaternions. Philosophical Magazine. 1845;26:141-145. Reprinted in The collected mathematical papers. Cambridge Univ. Press. 1889;1:123-126
14. 14. Moore EH. On the determinant of an hermitian matrix of quaternionic elements. Bulletin of the American Mathematical Society. 1922;28:161-162
15. 15. Dyson FJ. Quaternion determinants. Helvetica Physica Acta. 1972;45:289-302. DOI: 10.5169/seals-114385
16. 16. Chen L. Definition of determinant and Cramer solution over the quaternion field. Acta Mathematica Sinica. 1991;7:171-180. DOI: 10.1007/BF02633946
17. 17. Kyrchei I. Cramer’s rule for quaternion systems of linear equations. Fundamentalnaya i Prikladnaya Matematika. 2007;13(4):67-94
18. 18. Kyrchei I. The theory of the column and row determinants in a quaternion linear algebra. In: Baswell AR, editor. Advances in Mathematics Research 15. New York: Nova Sci. Publ; 2012. pp. 301-359
19. 19. Kyrchei I. Determinantal representations of the Moore-Penrose inverse over the quaternion skew field and corresponding Cramer’s rules. Linear Multilinear Algebra. 2011;59(4):413-431. DOI: 10.1080/03081081003586860
20. 20. Kyrchei I. Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations. Linear Algebra and its Applications. 2013;438(1):136-152. DOI: 10.1016/j.laa.2012.07.049
21. 21. Kyrchei I. Determinantal representations of the Drazin inverse over the quaternion skew field with applications to some matrix equations. Applied Mathematics and Computation. 2014;238:193-207. DOI: 10.1016/j.amc.2014.03.125
22. 22. Kyrchei I. Determinantal representations of the W-weighted Drazin inverse over the quaternion skew field. Applied Mathematics and Computation. 2015;264:453-465. DOI: 10.1016/j.amc.2015.04.125
23. 23. Kyrchei I. Explicit determinantal representation formulas of W-weighted Drazin inverse solutions of some matrix equations over the quaternion skew field. Mathematical Problems in Engineering. 2016. 13 p. DOI: 10.1155/2016/8673809; ID 8673809
24. 24. Kyrchei I. Determinantal representations of the Drazin and W-weighted Drazin inverses over the quaternion skew field with applications. In: Griffin S, editor. Quaternions: Theory and Applications. New York: Nova Sci. Publ; 2017. pp. 201-275
25. 25. Kyrchei I. Weighted singular value decomposition and determinantal representations of the quaternion weighted Moore-Penrose inverse. Applied Mathematics and Computation. 2017;309:1-16. DOI: 10.1016/j.amc.2017.03.048
26. 26. Song GJ, Wang QW, Chang HX. Cramer rule for the unique solution of restricted matrix equations over the quaternion skew field. Computers & Mathematics with Applications. 2011;61:1576-1589. DOI: 10.1016/j.camwa.2011.01.026
27. 27. Song GJ, Dong CZ. New results on condensed Cramer’s rule for the general solution to some restricted quaternion matrix equations. Journal of Applied Mathematics and Computing. 2017;53:321-341. DOI: 10.1007/s12190-015-0970-y
28. 28. Song GJ, Wang QW. Condensed Cramer rule for some restricted quaternion linear equations. Applied Mathematics and Computation. 2011;218:3110-3121. DOI: 10.1016/j.amc.2011.08.038
29. 29. Song G. Characterization of the W-weighted Drazin inverse over the quaternion skew field with applications. Electronic Journal of Linear Algebra. 2013;26:1-14. DOI: 10.13001/1081-3810.1635
30. 30. Wang QW. A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity. Linear Algebra and its Applications. 2004;384:43-54. DOI: 10.1016/j.laa.2003.12.039
31. 31. Wang QW, Wu ZC, Lin CY. Extremal ranks of a quaternion matrix expression subject to consistent systems of quaternion matrix equations with applications. Applied Mathematics and Computation. 2006;182(2):1755-1764. DOI: 10.1016/j.amc.2006.06.012
32. 32. Maciejewski AA, Klein CA. Obstacle avoidance for kinematically redundant manipulators in dynamically varying environments. The International Journal of Robotics Research. 1985;4(3):109-117. DOI: 10.1177/027836498500400308
