
Determinantal Representations of the Core Inverse and Its Generalizations

Written By

Ivan I. Kyrchei

Submitted: 09 May 2019 Reviewed: 26 August 2019 Published: 27 November 2019

DOI: 10.5772/intechopen.89341

From the Edited Volume

Functional Calculus

Edited by Kamal Shah and Baver Okutmuştur


Abstract

Generalized inverse matrices are important objects in matrix theory; in particular, they are useful tools in solving matrix equations. The most famous generalized inverses are the Moore-Penrose inverse and the Drazin inverse. Recently, a new generalized inverse matrix, the core inverse, was introduced; it was later extended to the core-EP inverse and to the BT, DMP, and CMP inverses. In contrast to the inverse matrix, which has a definite determinantal representation in terms of cofactors, even basic generalized inverses admit different determinantal representations, a result of the search for more applicable explicit expressions. In this chapter, we give new and exclusive determinantal representations of the core inverse and its generalizations by using determinantal representations of the Moore-Penrose and Drazin inverses previously obtained by the author.

Keywords

  • Moore-Penrose inverse
  • Drazin inverse
  • core inverse
  • core-EP inverse
  • 2000 AMS subject classifications: 15A15
  • 16W10

1. Introduction

Throughout this chapter, the notations R and C are reserved for the fields of real and complex numbers, respectively. C^{m×n} stands for the set of all m×n matrices over C, and C_r^{m×n} denotes its subset of matrices with rank r. For A ∈ C^{m×n}, the symbols A^* and rk A specify the conjugate transpose and the rank of A, respectively, and |A| or det A stands for its determinant. A matrix A ∈ C^{n×n} is Hermitian if A^* = A.

A^† means the Moore-Penrose inverse of A ∈ C^{m×n}, i.e., the unique matrix X ∈ C^{n×m} satisfying the following four equations:

AXA = A, (1)
XAX = X, (2)
(AX)^* = AX, (3)
(XA)^* = XA. (4)

For A ∈ C^{n×n} with index Ind A = k, i.e., the smallest nonnegative integer k such that rk A^{k+1} = rk A^k, the Drazin inverse of A, denoted by A^d, is the unique matrix X that satisfies Eq. (2) and the following equations:

AX = XA, (5)
XA^{k+1} = A^k, (6)
A^{k+1}X = A^k. (7)

In particular, if Ind A = 1, then the matrix X is called the group inverse and is denoted by X = A^#. If Ind A = 0, then A is nonsingular and A^d = A^† = A^{-1}.

It is evident that if condition (5) is fulfilled, then conditions (6) and (7) are equivalent. We list both of them because below they will be used independently of each other, without the obligatory fulfillment of (5).
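As a quick numerical illustration (not part of the chapter's derivations), the four Penrose equations (1)-(4) can be checked with NumPy's `pinv`. The matrix below is the 3×3 singular matrix used later in Section 4, as read off from the displays there:

```python
import numpy as np

# The singular complex matrix of the Section 4 example.
A = np.array([[2, 0, 0],
              [1j, 1j, 1j],
              [1j, -1j, -1j]])
X = np.linalg.pinv(A)  # Moore-Penrose inverse A^dagger

assert np.allclose(A @ X @ A, A)              # (1) AXA = A
assert np.allclose(X @ A @ X, X)              # (2) XAX = X
assert np.allclose((A @ X).conj().T, A @ X)   # (3) (AX)* = AX
assert np.allclose((X @ A).conj().T, X @ A)   # (4) (XA)* = XA
```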

A matrix X satisfying the conditions (i), (j), … is called an {i, j, …}-inverse of A and is denoted by A^{(i,j,…)}; the set of all {i, j, …}-inverses of A is denoted by A{i, j, …}. In particular, A^{(1)} is called an inner inverse, A^{(2)} an outer inverse, and A^{(1,2)} a reflexive inverse, while A^{(1,2,3,4)} is the Moore-Penrose inverse.

For an arbitrary matrix A ∈ C^{m×n}, we denote by

  • N(A) = {x ∈ C^{n×1} : Ax = 0}, the kernel (or the null space) of A;

  • C(A) = {y ∈ C^{m×1} : y = Ax, x ∈ C^{n×1}}, the column space (or the range space) of A; and

  • R(A) = {y ∈ C^{1×n} : y = xA, x ∈ C^{1×m}}, the row space of A.

P_A ≔ AA^† and Q_A ≔ A^†A are the orthogonal projectors onto the range of A and the range of A^*, respectively.

The core inverse was introduced by Baksalary and Trenkler in [1]. Later, it was investigated by S. Malik in [2] and S.Z. Xu et al. in [3], among others.

Definition 1.1. [1] A matrix X ∈ C^{n×n} is called the core inverse of A ∈ C^{n×n} if it satisfies the conditions

AX = P_A, and C(X) = C(A).

When such a matrix X exists, it is denoted by A^{⊕}.

In 2014, the core inverse was extended to the core-EP inverse defined by K. Manjunatha Prasad and K.S. Mohana [4]. Other generalizations of the core inverse were recently introduced for n×n complex matrices, namely BT inverses [5], DMP inverses [2], CMP inverses [6], etc. The characterizations, computing methods, and some applications of the core inverse and its generalizations were recently investigated in complex matrices and rings (see, e.g., [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]).

In contrast to the inverse matrix, which has a definite determinantal representation in terms of cofactors, generalized inverse matrices admit various determinantal representations, a result of the search for more applicable explicit expressions (see, e.g., [19, 20, 21, 22, 23, 24, 25]). In this chapter, we obtain new determinantal representations of the core inverse and its generalizations using determinantal representations of the Moore-Penrose and Drazin inverses previously derived by the author over the quaternion skew field and, as a special case, over the field of complex numbers [26, 27, 28, 29, 30, 31, 32, 33, 34]. Note that a determinantal representation of the core-EP inverse of complex matrices has been derived in [4], based on the determinantal representation of a reflexive inverse obtained in [19, 20].

The chapter is organized as follows. In Section 2, we start with a preliminary introduction of determinantal representations of the Moore-Penrose and Drazin inverses. In Section 3, we give determinantal representations of the core inverse and its generalizations: the right and left core inverses are established in Section 3.1, the core-EP inverses in Section 3.2, the DMP inverse and its dual in Section 3.3, and finally the CMP inverse in Section 3.4. A numerical example illustrating the main results is considered in Section 4. Finally, in Section 5, the conclusions are drawn.


2. Preliminaries

Let α ≔ {α_1, …, α_k} ⊆ {1, …, m} and β ≔ {β_1, …, β_k} ⊆ {1, …, n} be subsets with 1 ≤ k ≤ min{m, n}. By A^{α}_{β}, we denote the submatrix of A ∈ C^{m×n} whose rows and columns are indexed by α and β, respectively. Then A^{α}_{α} is a principal submatrix of A with rows and columns indexed by α, and |A^{α}_{α}| is the corresponding principal minor of the determinant |A|. Suppose that

L_{k,n} ≔ {α : α = (α_1, …, α_k), 1 ≤ α_1 < ⋯ < α_k ≤ n}

stands for the collection of strictly increasing sequences of 1 ≤ k ≤ n integers chosen from {1, …, n}. For fixed i ∈ α and j ∈ β, put I_{r,m}{i} ≔ {α : α ∈ L_{r,m}, i ∈ α} and J_{r,n}{j} ≔ {β : β ∈ L_{r,n}, j ∈ β}.

The jth columns of A and A^* are denoted by a_{.j} and a^{*}_{.j}, and their ith rows by a_{i.} and a^{*}_{i.}, respectively. By A_{i.}(b) and A_{.j}(c), we denote the matrices obtained from A by replacing its ith row with the row vector b and its jth column with the column vector c, respectively.

Theorem 2.1. [28] If A ∈ C_r^{m×n}, then the Moore-Penrose inverse A^† = (a^{†}_{ij}) ∈ C^{n×m} possesses the determinantal representations

a^{†}_{ij} = Σ_{β∈J_{r,n}{i}} |(A^*A)_{.i}(a^{*}_{.j})|^{β}_{β} / Σ_{β∈J_{r,n}} |A^*A|^{β}_{β} = (8)
 = Σ_{α∈I_{r,m}{j}} |(AA^*)_{j.}(a^{*}_{i.})|^{α}_{α} / Σ_{α∈I_{r,m}} |AA^*|^{α}_{α}. (9)
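A direct, if inefficient, way to read formula (8) is as a ratio of sums of r×r minors of the Gram matrix A^*A, where the numerator replaces the ith column by the jth column of A^*. The sketch below (my illustration, 0-based indices, `numpy` for ranks and determinants) evaluates (8) entrywise and compares the result with NumPy's `pinv` on a full-rank example:

```python
import numpy as np
from itertools import combinations

def mp_entry(A, i, j):
    """Entry (i, j) of the Moore-Penrose inverse via the minor sums of Eq. (8)."""
    n = A.shape[1]
    r = np.linalg.matrix_rank(A)
    G = A.conj().T @ A                       # A*A, an n x n Gram matrix
    Gi = G.copy().astype(complex)
    Gi[:, i] = A.conj().T[:, j]              # replace the i-th column by a*_{.j}
    # beta runs over r-element index sets; the numerator requires i in beta
    num = sum(np.linalg.det(Gi[np.ix_(b, b)])
              for b in combinations(range(n), r) if i in b)
    den = sum(np.linalg.det(G[np.ix_(b, b)].astype(complex))
              for b in combinations(range(n), r))
    return num / den

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
X = np.array([[mp_entry(A, i, j) for j in range(3)] for i in range(2)])
assert np.allclose(X, np.linalg.pinv(A))
```

For a full-column-rank matrix this reduces to Cramer's rule applied to (A^*A)X = A^*, which is why the comparison with `pinv` is exact up to rounding.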

Remark 2.2. For an arbitrary full-rank matrix A ∈ C_r^{m×n}, a row vector b ∈ C^{1×m}, and a column vector c ∈ C^{n×1}, we put, respectively,

|(AA^*)_{i.}(b)| = Σ_{α∈I_{m,m}{i}} |(AA^*)_{i.}(b)|^{α}_{α}, i = 1, …, m, and |AA^*| = Σ_{α∈I_{m,m}} |AA^*|^{α}_{α}, when r = m;
|(A^*A)_{.j}(c)| = Σ_{β∈J_{n,n}{j}} |(A^*A)_{.j}(c)|^{β}_{β}, j = 1, …, n, and |A^*A| = Σ_{β∈J_{n,n}} |A^*A|^{β}_{β}, when r = n.

Corollary 2.3. [21] Let A ∈ C_r^{m×n}. Then the following determinantal representations can be obtained:

  1. for the projector Q_A = (q_{ij})_{n×n},

    q_{ij} = Σ_{β∈J_{r,n}{i}} |(A^*A)_{.i}(ȧ_{.j})|^{β}_{β} / Σ_{β∈J_{r,n}} |A^*A|^{β}_{β} = Σ_{α∈I_{r,n}{j}} |(A^*A)_{j.}(ȧ_{i.})|^{α}_{α} / Σ_{α∈I_{r,n}} |A^*A|^{α}_{α}, (10)

    where ȧ_{.j} is the jth column and ȧ_{i.} is the ith row of A^*A; and

  2. for the projector P_A = (p_{ij})_{m×m},

    p_{ij} = Σ_{α∈I_{r,m}{j}} |(AA^*)_{j.}(ä_{i.})|^{α}_{α} / Σ_{α∈I_{r,m}} |AA^*|^{α}_{α} = Σ_{β∈J_{r,m}{i}} |(AA^*)_{.i}(ä_{.j})|^{β}_{β} / Σ_{β∈J_{r,m}} |AA^*|^{β}_{β}, (11)

    where ä_{i.} is the ith row and ä_{.j} is the jth column of AA^*.

The following lemma gives determinantal representations of the Drazin inverse in complex matrices.

Lemma 2.4. [21] Let A ∈ C^{n×n} with Ind A = k and rk A^{k+1} = rk A^k = r. Then the determinantal representations of the Drazin inverse A^d = (a^{d}_{ij}) ∈ C^{n×n} are

a^{d}_{ij} = Σ_{β∈J_{r,n}{i}} |(A^{k+1})_{.i}(a^{(k)}_{.j})|^{β}_{β} / Σ_{β∈J_{r,n}} |A^{k+1}|^{β}_{β} = (12)
 = Σ_{α∈I_{r,n}{j}} |(A^{k+1})_{j.}(a^{(k)}_{i.})|^{α}_{α} / Σ_{α∈I_{r,n}} |A^{k+1}|^{α}_{α}, (13)

where a^{(k)}_{i.} is the ith row and a^{(k)}_{.j} is the jth column of A^k.
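Formula (12) can likewise be evaluated directly as sums of r×r principal minors of A^{k+1}, with the ith column replaced by the corresponding column of A^k. The following sketch (my illustration, 0-based indices) computes the index, evaluates (12) entrywise, and checks the Drazin axioms (2) and (5)-(7) on the matrix of Section 4:

```python
import numpy as np
from itertools import combinations

def drazin_det(A):
    """Drazin inverse via the minor sums of Eq. (12) (illustrative sketch)."""
    n = A.shape[0]
    # Ind A: smallest k with rk A^{k+1} = rk A^k
    k, Ak = 0, np.eye(n, dtype=complex)
    while np.linalg.matrix_rank(Ak @ A) != np.linalg.matrix_rank(Ak):
        k, Ak = k + 1, Ak @ A
    r = np.linalg.matrix_rank(Ak)
    M = Ak @ A                               # A^{k+1}
    den = sum(np.linalg.det(M[np.ix_(b, b)]) for b in combinations(range(n), r))
    X = np.empty((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            Mi = M.copy()
            Mi[:, i] = Ak[:, j]              # replace column i by (A^k)_{.j}
            X[i, j] = sum(np.linalg.det(Mi[np.ix_(b, b)])
                          for b in combinations(range(n), r) if i in b) / den
    return X

A = np.array([[2, 0, 0], [1j, 1j, 1j], [1j, -1j, -1j]])
Ad = drazin_det(A)
assert np.allclose(Ad @ A @ Ad, Ad)          # (2)
assert np.allclose(A @ Ad, Ad @ A)           # (5)
assert np.allclose(Ad @ A @ A @ A, A @ A)    # (6) with k = Ind(A) = 2
assert np.allclose(A @ A @ A @ Ad, A @ A)    # (7)
```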

Corollary 2.5. [21] Let A ∈ C^{n×n} with Ind A = 1 and rk A² = rk A = r. Then the determinantal representations of the group inverse A^# = (a^{#}_{ij}) ∈ C^{n×n} are

a^{#}_{ij} = Σ_{β∈J_{r,n}{i}} |(A²)_{.i}(a_{.j})|^{β}_{β} / Σ_{β∈J_{r,n}} |A²|^{β}_{β} = Σ_{α∈I_{r,n}{j}} |(A²)_{j.}(a_{i.})|^{α}_{α} / Σ_{α∈I_{r,n}} |A²|^{α}_{α}. (14)

3. Determinantal representations of the core inverse and its generalizations

3.1 Determinantal representations of the core inverses

The dual core inverse was introduced together with the core inverse (see also [35]). Since these two core inverses are equipollent and differ only in their position relative to the inducing matrix A, we propose to call them the right and left core inverses according to those positions. So, from [1], we have the following definition, which is equivalent to Definition 1.1.

Definition 3.1. A matrix X ∈ C^{n×n} is said to be the right core inverse of A ∈ C^{n×n} if it satisfies the conditions

AX = P_A, and C(X) = C(A).

When such a matrix X exists, it is denoted by A^{⊕}.

The following definition of the left core inverse can be given; it is equivalent to that of the dual core inverse introduced in [35].

Definition 3.2. A matrix X ∈ C^{n×n} is said to be the left core inverse of A ∈ C^{n×n} if it satisfies the conditions

XA = Q_A, and R(X) = R(A). (15)

When such a matrix X exists, it is denoted by A_{⊕}.

Remark 3.3. In [35], the conditions of the dual core inverse are given as follows:

A_{⊕}A = P_{A^*}, and C(A_{⊕}) ⊆ C(A^*).

Since P_{A^*} = A^*(A^*)^† = A^*(A^†)^* = (A^†A)^* = A^†A = Q_A, and R(A) = (C(A^*))^*, these conditions and (15) are analogous.

Due to [1], we introduce the following sets of complex matrices:

C_n^{CM} = {A ∈ C^{n×n} : rk A² = rk A}, C_n^{EP} = {A ∈ C^{n×n} : AA^† = A^†A} = {A ∈ C^{n×n} : C(A) = C(A^*)}.

The matrices from C_n^{CM} are called group matrices or core matrices. If A ∈ C_n^{EP}, then clearly A^† = A^#. It is known that the core inverses of A ∈ C^{n×n} exist if and only if A ∈ C_n^{CM}, i.e., Ind A ≤ 1. Moreover, if A is nonsingular, Ind A = 0, then its core inverses coincide with the usual inverse.

Lemma 3.4. [1] Let A ∈ C_n^{CM}. Then

A^{⊕} = A^# A A^†, (16)
A_{⊕} = A^† A A^#. (17)

Remark 3.5. In Theorems 3.6 and 3.7, we will suppose that A ∈ C_n^{CM} but A ∉ C_n^{EP}, because if A ∈ C_n^{CM} and A ∈ C_n^{EP} (in particular, if A is Hermitian), then from Lemma 3.4 and the definitions of the Moore-Penrose and group inverses it follows that A^{⊕} = A_{⊕} = A^# = A^†.

Theorem 3.6. Let A ∈ C_n^{CM} and rk A² = rk A = s. Then its right core inverse A^{⊕} = (a^{⊕,r}_{ij}) has the following determinantal representations:

a^{⊕,r}_{ij} = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(u^{(1)}_{i.})|^{α}_{α} / (Σ_{β∈J_{s,n}} |A²|^{β}_{β} · Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}) = (18)
 = Σ_{β∈J_{s,n}{i}} |(A²)_{.i}(u^{(2)}_{.j})|^{β}_{β} / (Σ_{β∈J_{s,n}} |A²|^{β}_{β} · Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}), (19)

where

u^{(1)}_{i.} = (Σ_{β∈J_{s,n}{i}} |(A²)_{.i}(ã_{.f})|^{β}_{β}) ∈ C^{1×n}, f = 1, …, n,
u^{(2)}_{.j} = (Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(ã_{l.})|^{α}_{α}) ∈ C^{n×1}, l = 1, …, n,

are the row and column vectors, respectively. Here ã_{.f} and ã_{l.} are the fth column and the lth row of Ã ≔ A²A^*.

Proof. Taking into account (16), we have for A^{⊕}

a^{⊕,r}_{ij} = Σ_{l=1}^{n} Σ_{f=1}^{n} a^{#}_{il} a_{lf} a^{†}_{fj}. (20)

By substituting (14) and (9) in (20), we obtain

a^{⊕,r}_{ij} = Σ_{l,f} [Σ_{β∈J_{s,n}{i}} |(A²)_{.i}(a_{.l})|^{β}_{β}] a_{lf} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(a^{*}_{f.})|^{α}_{α}] / (Σ_{β∈J_{s,n}} |A²|^{β}_{β} · Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}) =
 = Σ_{f,l} [Σ_{β∈J_{s,n}{i}} |(A²)_{.i}(e_{.f})|^{β}_{β}] ã_{fl} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{l.})|^{α}_{α}] / (Σ_{β∈J_{s,n}} |A²|^{β}_{β} · Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}),

where e_{.f} and e_{l.} are the unit column and row vectors, respectively, such that all their components are 0 except the fth (respectively, lth) component, which is 1, and ã_{fl} is the (fl)th element of the matrix Ã ≔ A²A^*.

Let

u^{(1)}_{il} ≔ Σ_{f=1}^{n} [Σ_{β∈J_{s,n}{i}} |(A²)_{.i}(e_{.f})|^{β}_{β}] ã_{fl} = Σ_{β∈J_{s,n}{i}} |(A²)_{.i}(ã_{.l})|^{β}_{β}, i, l = 1, …, n.

Construct the matrix U^{(1)} = (u^{(1)}_{il}) ∈ C^{n×n}. It follows that

Σ_{l} u^{(1)}_{il} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{l.})|^{α}_{α}] = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(u^{(1)}_{i.})|^{α}_{α},

where u^{(1)}_{i.} is the ith row of U^{(1)}. So, we get (18). If we first consider

u^{(2)}_{fj} ≔ Σ_{l} ã_{fl} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{l.})|^{α}_{α}] = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(ã_{f.})|^{α}_{α}, f, j = 1, …, n,

and construct the matrix U^{(2)} = (u^{(2)}_{fj}) ∈ C^{n×n}, then from

Σ_{f=1}^{n} [Σ_{β∈J_{s,n}{i}} |(A²)_{.i}(e_{.f})|^{β}_{β}] u^{(2)}_{fj} = Σ_{β∈J_{s,n}{i}} |(A²)_{.i}(u^{(2)}_{.j})|^{β}_{β},

(19) follows. □

Taking into account (17), the following theorem on the determinantal representation of the left core inverse can be proved similarly.

Theorem 3.7. Let A ∈ C_n^{CM} and rk A² = rk A = s. Then for its left core inverse A_{⊕} = (a^{⊕,l}_{ij}), we have

a^{⊕,l}_{ij} = Σ_{α∈I_{s,n}{j}} |(A²)_{j.}(v^{(1)}_{i.})|^{α}_{α} / (Σ_{β∈J_{s,n}} |A^*A|^{β}_{β} · Σ_{α∈I_{s,n}} |A²|^{α}_{α}) = Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(v^{(2)}_{.j})|^{β}_{β} / (Σ_{β∈J_{s,n}} |A^*A|^{β}_{β} · Σ_{α∈I_{s,n}} |A²|^{α}_{α}),

where

v^{(1)}_{i.} = (Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(ā_{.f})|^{β}_{β}) ∈ C^{1×n}, f = 1, …, n,
v^{(2)}_{.j} = (Σ_{α∈I_{s,n}{j}} |(A²)_{j.}(ā_{l.})|^{α}_{α}) ∈ C^{n×1}, l = 1, …, n.

Here ā_{.f} and ā_{l.} are the fth column and the lth row of Ā ≔ A^*A².

3.2 Determinantal representations of the core-EP inverses

Similarly to [4], we introduce two core-EP inverses.

Definition 3.8. A matrix X ∈ C^{n×n} is said to be the right core-EP inverse of A ∈ C^{n×n} if it satisfies the conditions

XAX = X, and C(X) = C(X^*) = C(A^d).

It is denoted by A^{⊛}.

Definition 3.9. A matrix X ∈ C^{n×n} is said to be the left core-EP inverse of A ∈ C^{n×n} if it satisfies the conditions

XAX = X, and R(X) = R(X^*) = R(A^d).

It is denoted by A_{⊛}.

Remark 3.10. Since C(A^d) and R(A^d) determine each other by conjugate transposition, the right core-EP inverse A^{⊛} coincides with the core-EP inverse introduced in [4], while the left core-EP inverse A_{⊛} of A ∈ C^{n×n} is similar to the dual core-EP inverse introduced in [35].

Due to [4], we have the following characterizations of the core-EP inverses of A ∈ C^{n×n}: A^{⊛} is a {2, 3}-inverse of A with C(A^{⊛}) = C(A^k), and A_{⊛} is a {2, 4}-inverse of A with R(A_{⊛}) = R(A^k).

Thanks to [35], the following representations of the core-EP inverses will be used for deriving their determinantal representations.

Lemma 3.11. Let A ∈ C^{n×n} and Ind A = k. Then

A^{⊛} = A^k (A^{k+1})^†, (21)
A_{⊛} = (A^{k+1})^† A^k. (22)

Moreover, if Ind A = 1, then we have the following representations of the right and left core inverses:

A^{⊕} = A (A²)^†, (23)
A_{⊕} = (A²)^† A. (24)
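The representations (21) and (22) give the simplest computational route to the core-EP inverses: one pseudoinverse of A^{k+1}. A short NumPy sketch (my illustration) on the Section 4 matrix, for which k = Ind A = 2:

```python
import numpy as np

A = np.array([[2, 0, 0], [1j, 1j, 1j], [1j, -1j, -1j]])
A2, A3 = A @ A, A @ A @ A                 # A^2 and A^{k+1} with k = 2

R = A2 @ np.linalg.pinv(A3)               # right core-EP inverse, Eq. (21)
L = np.linalg.pinv(A3) @ A2               # left core-EP inverse, Eq. (22)

# The defining outer-inverse condition XAX = X holds for both:
assert np.allclose(R @ A @ R, R)
assert np.allclose(L @ A @ L, L)
# Leading entries agree with the values found in Section 4:
assert np.isclose(R[0, 0], 0.25)
assert np.isclose(L[0, 0], 0.5)
```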

Theorem 3.12. Suppose A ∈ C^{n×n}, Ind A = k, rk A^k = s, and there exist A^{⊛} and A_{⊛}. Then A^{⊛} = (a^{⊛,r}_{ij}) and A_{⊛} = (a^{⊛,l}_{ij}) possess the determinantal representations, respectively,

a^{⊛,r}_{ij} = Σ_{α∈I_{s,n}{j}} |(A^{k+1}(A^{k+1})^*)_{j.}(â_{i.})|^{α}_{α} / Σ_{α∈I_{s,n}} |A^{k+1}(A^{k+1})^*|^{α}_{α}, (25)
a^{⊛,l}_{ij} = Σ_{β∈J_{s,n}{i}} |((A^{k+1})^*A^{k+1})_{.i}(ǎ_{.j})|^{β}_{β} / Σ_{β∈J_{s,n}} |(A^{k+1})^*A^{k+1}|^{β}_{β}, (26)

where â_{i.} is the ith row of Â ≔ A^k(A^{k+1})^* and ǎ_{.j} is the jth column of Ǎ ≔ (A^{k+1})^*A^k.

Proof. Write A^{k+1} = (a^{(k+1)}_{ij}) and A^k = (a^{(k)}_{ij}). By (21),

a^{⊛,r}_{ij} = Σ_{t=1}^{n} a^{(k)}_{it} a^{†,(k+1)}_{tj},

where (A^{k+1})^† = (a^{†,(k+1)}_{ij}). Taking into account (9) for the determinantal representation of (A^{k+1})^†, we get

a^{⊛,r}_{ij} = Σ_{t=1}^{n} a^{(k)}_{it} Σ_{α∈I_{s,n}{j}} |(A^{k+1}(A^{k+1})^*)_{j.}(a^{*,(k+1)}_{t.})|^{α}_{α} / Σ_{α∈I_{s,n}} |A^{k+1}(A^{k+1})^*|^{α}_{α},

where a^{*,(k+1)}_{t.} is the tth row of (A^{k+1})^*. Since Σ_{t=1}^{n} a^{(k)}_{it} a^{*,(k+1)}_{t.} = â_{i.}, (25) follows.

The determinantal representation (26) can be obtained similarly by integrating (8) for the determinantal representation of (A^{k+1})^† into (22). □

Taking into account the representations (23) and (24), we obtain determinantal representations of the right and left core inverses that have simpler expressions than those obtained in Theorems 3.6 and 3.7.

Corollary 3.13. Let A ∈ C_s^{n×n}, Ind A = 1, and suppose there exist A^{⊕} and A_{⊕}. Then A^{⊕} = (a^{⊕,r}_{ij}) and A_{⊕} = (a^{⊕,l}_{ij}) can be expressed as follows:

a^{⊕,r}_{ij} = Σ_{α∈I_{s,n}{j}} |(A²(A²)^*)_{j.}(â_{i.})|^{α}_{α} / Σ_{α∈I_{s,n}} |A²(A²)^*|^{α}_{α},
a^{⊕,l}_{ij} = Σ_{β∈J_{s,n}{i}} |((A²)^*A²)_{.i}(ǎ_{.j})|^{β}_{β} / Σ_{β∈J_{s,n}} |(A²)^*A²|^{β}_{β},

where â_{i.} is the ith row of Â ≔ A(A²)^* and ǎ_{.j} is the jth column of Ǎ ≔ (A²)^*A.

3.3 Determinantal representations of the DMP and MPD inverses

The concept of the DMP inverse in complex matrices was introduced in [2] by S. Malik and N. Thome.

Definition 3.14. [2] Suppose A ∈ C^{n×n} and Ind A = k. A matrix X ∈ C^{n×n} is said to be the DMP inverse of A if it satisfies the conditions

XAX = X, XA = A^dA, and A^kX = A^kA^†. (27)

It is denoted by A^{d,†}.

Due to [2], if a matrix satisfies the system of Eq. (27), then it is unique and has the following representation:

A^{d,†} = A^d A A^†. (28)
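Representation (28) is easy to check numerically. In the sketch below (my illustration on the Section 4 matrix), A^d is obtained from Cline's formula A^d = A^k (A^{2k+1})^† A^k, a standard identity that is not derived in this chapter and is assumed here:

```python
import numpy as np

A = np.array([[2, 0, 0], [1j, 1j, 1j], [1j, -1j, -1j]])
A2 = A @ A
# Cline's formula with k = Ind(A) = 2 (assumed identity, not from the chapter):
Ad = A2 @ np.linalg.pinv(np.linalg.matrix_power(A, 5)) @ A2
DMP = Ad @ A @ np.linalg.pinv(A)           # Eq. (28)

# The defining conditions (27):
assert np.allclose(DMP @ A @ DMP, DMP)                 # XAX = X
assert np.allclose(DMP @ A, Ad @ A)                    # XA = A^d A
assert np.allclose(A2 @ DMP, A2 @ np.linalg.pinv(A))   # A^k X = A^k A^dagger
assert np.isclose(DMP[0, 0], 1/3)                      # matches Section 4
```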

Theorem 3.15. Let A ∈ C_s^{n×n}, Ind A = k, and rk A^k = s₁. Then its DMP inverse A^{d,†} = (a^{d,†}_{ij}) has the following determinantal representations:

a^{d,†}_{ij} = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(u^{(1)}_{i.})|^{α}_{α} / (Σ_{β∈J_{s₁,n}} |A^{k+1}|^{β}_{β} · Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}) = (29)
 = Σ_{β∈J_{s₁,n}{i}} |(A^{k+1})_{.i}(u^{(2)}_{.j})|^{β}_{β} / (Σ_{β∈J_{s₁,n}} |A^{k+1}|^{β}_{β} · Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}), (30)

where

u^{(1)}_{i.} = (Σ_{β∈J_{s₁,n}{i}} |(A^{k+1})_{.i}(ã_{.f})|^{β}_{β}) ∈ C^{1×n}, f = 1, …, n,
u^{(2)}_{.j} = (Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(ã_{l.})|^{α}_{α}) ∈ C^{n×1}, l = 1, …, n.

Here ã_{.f} and ã_{l.} are the fth column and the lth row of Ã ≔ A^{k+1}A^*.

Proof. Taking into account (28) for A^{d,†}, we get

a^{d,†}_{ij} = Σ_{l=1}^{n} Σ_{f=1}^{n} a^{d}_{il} a_{lf} a^{†}_{fj}. (31)

By substituting (12) and (9) for the determinantal representations of A^d and A^† in (31), we get

a^{d,†}_{ij} = Σ_{l,f} [Σ_{β∈J_{s₁,n}{i}} |(A^{k+1})_{.i}(a^{(k)}_{.l})|^{β}_{β} / Σ_{β∈J_{s₁,n}} |A^{k+1}|^{β}_{β}] a_{lf} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(a^{*}_{f.})|^{α}_{α} / Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}] =
 = Σ_{l,f} [Σ_{β∈J_{s₁,n}{i}} |(A^{k+1})_{.i}(e_{.l})|^{β}_{β}] ã_{lf} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{f.})|^{α}_{α}] / (Σ_{β∈J_{s₁,n}} |A^{k+1}|^{β}_{β} · Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}), (32)

where e_{.l} and e_{f.} are the lth unit column and the fth unit row vectors, and ã_{lf} is the (lf)th element of the matrix Ã ≔ A^{k+1}A^*. If we put

u^{(1)}_{if} ≔ Σ_{l} [Σ_{β∈J_{s₁,n}{i}} |(A^{k+1})_{.i}(e_{.l})|^{β}_{β}] ã_{lf} = Σ_{β∈J_{s₁,n}{i}} |(A^{k+1})_{.i}(ã_{.f})|^{β}_{β}

as the fth component of the row vector u^{(1)}_{i.} = (u^{(1)}_{i1}, …, u^{(1)}_{in}), then from

Σ_{f} u^{(1)}_{if} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{f.})|^{α}_{α}] = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(u^{(1)}_{i.})|^{α}_{α},

(29) follows. If we initially obtain

u^{(2)}_{lj} ≔ Σ_{f} ã_{lf} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{f.})|^{α}_{α}] = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(ã_{l.})|^{α}_{α}

as the lth component of the column vector u^{(2)}_{.j} = (u^{(2)}_{1j}, …, u^{(2)}_{nj})^T, then from

Σ_{l} [Σ_{β∈J_{s₁,n}{i}} |(A^{k+1})_{.i}(e_{.l})|^{β}_{β}] u^{(2)}_{lj} = Σ_{β∈J_{s₁,n}{i}} |(A^{k+1})_{.i}(u^{(2)}_{.j})|^{β}_{β},

(30) follows. □

The name of the DMP inverse accords with the order in which the Drazin (D) and Moore-Penrose (MP) inverses are applied. In that connection, it is logical to consider the following definition.

Definition 3.16. Suppose A ∈ C^{n×n} and Ind A = k. A matrix X ∈ C^{n×n} is said to be the MPD inverse of A if it satisfies the conditions

XAX = X, AX = AA^d, and XA^k = A^†A^k.

It is denoted by A^{†,d}.

The matrix A^{†,d} is unique, and it can be represented as

A^{†,d} = A^† A A^d. (33)
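A companion sketch for (33) (my illustration, again using Cline's formula A^d = A^k (A^{2k+1})^† A^k as an assumed identity) on the Section 4 matrix:

```python
import numpy as np

A = np.array([[2, 0, 0], [1j, 1j, 1j], [1j, -1j, -1j]])
A2 = A @ A
Ad = A2 @ np.linalg.pinv(np.linalg.matrix_power(A, 5)) @ A2  # Cline, k = 2
MPD = np.linalg.pinv(A) @ A @ Ad          # Eq. (33)

assert np.allclose(MPD @ A @ MPD, MPD)    # XAX = X
assert np.allclose(A @ MPD, A @ Ad)       # AX = A A^d
assert np.isclose(MPD[0, 0], 0.5)         # matches Section 4
```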

Theorem 3.17. Let A ∈ C_s^{n×n}, Ind A = k, and rk A^k = s₁. Then its MPD inverse A^{†,d} = (a^{†,d}_{ij}) has the following determinantal representations:

a^{†,d}_{ij} = Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(v^{(1)}_{.j})|^{β}_{β} / (Σ_{β∈J_{s,n}} |A^*A|^{β}_{β} · Σ_{α∈I_{s₁,n}} |A^{k+1}|^{α}_{α}) = Σ_{α∈I_{s₁,n}{j}} |(A^{k+1})_{j.}(v^{(2)}_{i.})|^{α}_{α} / (Σ_{β∈J_{s,n}} |A^*A|^{β}_{β} · Σ_{α∈I_{s₁,n}} |A^{k+1}|^{α}_{α}),

where

v^{(1)}_{.j} = (Σ_{α∈I_{s₁,n}{j}} |(A^{k+1})_{j.}(â_{l.})|^{α}_{α}) ∈ C^{n×1}, l = 1, …, n,
v^{(2)}_{i.} = (Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(â_{.f})|^{β}_{β}) ∈ C^{1×n}, f = 1, …, n.

Here â_{l.} and â_{.f} are the lth row and the fth column of Â ≔ A^*A^{k+1}.

Proof. The proof is similar to the proof of Theorem 3.15.□

3.4 Determinantal representations of the CMP inverse

Definition 3.18. [6] Suppose A ∈ C^{n×n} has the core-nilpotent decomposition A = A₁ + A₂, where A₁ ≔ AA^dA, A₂ is nilpotent, and A₁A₂ = A₂A₁ = 0. The CMP inverse of A is defined as the matrix A^{c,†} ≔ A^† A₁ A^†.

Lemma 3.19. [6] Let A ∈ C^{n×n}. The matrix X = A^{c,†} is the unique matrix that satisfies the following system of equations:

XAX = X, AXA = A₁, AX = A₁A^†, and XA = A^†A₁.

Moreover,

A^{c,†} = Q_A A^d P_A. (34)
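Representation (34) can be sketched numerically as follows (my illustration on the Section 4 matrix, with A^d again from Cline's formula, an identity assumed here rather than taken from the chapter):

```python
import numpy as np

A = np.array([[2, 0, 0], [1j, 1j, 1j], [1j, -1j, -1j]])
A2 = A @ A
Ad = A2 @ np.linalg.pinv(np.linalg.matrix_power(A, 5)) @ A2  # Cline, k = 2
Ap = np.linalg.pinv(A)
CMP = (Ap @ A) @ Ad @ (A @ Ap)            # Q_A A^d P_A, Eq. (34)

assert np.allclose(CMP @ A @ CMP, CMP)    # XAX = X
assert np.isclose(CMP[0, 0], 1/3)         # matches a_{11}^{c,dagger} in Section 4
```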

Taking into account (34), the next theorem on determinantal representations of the CMP inverse follows.

Theorem 3.20. Let A ∈ C_s^{n×n}, Ind A = m, and rk A^m = s₁. Then the determinantal representations of its CMP inverse A^{c,†} = (a^{c,†}_{ij}) can be expressed as

a^{c,†}_{ij} = Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(v^{(l)}_{.j})|^{β}_{β} / ((Σ_{β∈J_{s,n}} |A^*A|^{β}_{β})² · Σ_{β∈J_{s₁,n}} |A^{m+1}|^{β}_{β}), (35)
a^{c,†}_{ij} = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(w^{(l)}_{i.})|^{α}_{α} / ((Σ_{α∈I_{s,n}} |AA^*|^{α}_{α})² · Σ_{β∈J_{s₁,n}} |A^{m+1}|^{β}_{β}), (36)

for all l = 1, 2, where

v^{(1)}_{.j} = (Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(û_{t.})|^{α}_{α}) ∈ C^{n×1}, t = 1, …, n, (37)
w^{(1)}_{i.} = (Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(û_{.k})|^{β}_{β}) ∈ C^{1×n}, k = 1, …, n, (38)
v^{(2)}_{.j} = (Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(g̃_{t.})|^{α}_{α}) ∈ C^{n×1}, t = 1, …, n, (39)
w^{(2)}_{i.} = (Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(g̃_{.k})|^{β}_{β}) ∈ C^{1×n}, k = 1, …, n. (40)

Here û_{t.} is the tth row and û_{.k} is the kth column of Û ≔ UAA^*, g̃_{t.} is the tth row and g̃_{.k} is the kth column of G̃ ≔ A^*AG, and the matrices U = (u_{ij}) ∈ C^{n×n} and G = (g_{ij}) ∈ C^{n×n} are such that

u_{ij} = Σ_{α∈I_{s₁,n}{j}} |(A^{m+1})_{j.}(â_{i.})|^{α}_{α}, g_{ij} = Σ_{β∈J_{s₁,n}{i}} |(A^{m+1})_{.i}(ã_{.j})|^{β}_{β},

where â_{i.} is the ith row of Â ≔ A^*A^{m+1} and ã_{.j} is the jth column of Ã ≔ A^{m+1}A^*.

Proof. Taking into account (34), we get

a^{c,†}_{ij} = Σ_{l=1}^{n} Σ_{k=1}^{n} q^{A}_{il} a^{d}_{lk} p^{A}_{kj}, (41)

where Q_A = (q^{A}_{il}), A^d = (a^{d}_{lk}), and P_A = (p^{A}_{kj}).

(a) Taking into account the expressions (10), (13), and (11) for the determinantal representations of Q_A, A^d, and P_A, respectively, we have

a^{c,†}_{ij} = Σ_{l,t} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(ȧ_{.t})|^{β}_{β} / Σ_{β∈J_{s,n}} |A^*A|^{β}_{β}] · [Σ_{α∈I_{s₁,n}{l}} |(A^{m+1})_{l.}(a^{(m)}_{t.})|^{α}_{α} / Σ_{α∈I_{s₁,n}} |A^{m+1}|^{α}_{α}] · [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(ä_{l.})|^{α}_{α} / Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}],

where ȧ_{.t} is the tth column of A^*A, ä_{l.} is the lth row of AA^*, and a^{(m)}_{t.} is the tth row of A^m. Expanding over the unit vectors, it is clear that

a^{c,†}_{ij} = Σ_{l,t,k} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(e_{.t})|^{β}_{β}] â_{tk} [Σ_{α∈I_{s₁,n}{l}} |(A^{m+1})_{l.}(e_{k.})|^{α}_{α}] [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(ä_{l.})|^{α}_{α}] / (Σ_{β∈J_{s,n}} |A^*A|^{β}_{β} · Σ_{α∈I_{s₁,n}} |A^{m+1}|^{α}_{α} · Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}),

where e_{.t} is the tth unit column vector, e_{k.} is the kth unit row vector, and â_{tk} is the (tk)th element of Â ≔ A^*A^{m+1}.

Denote

u_{tl} ≔ Σ_{k} â_{tk} [Σ_{α∈I_{s₁,n}{l}} |(A^{m+1})_{l.}(e_{k.})|^{α}_{α}] = Σ_{α∈I_{s₁,n}{l}} |(A^{m+1})_{l.}(â_{t.})|^{α}_{α} (42)

as the tth component of a column vector u_{.l} = (u_{1l}, …, u_{nl})^T. Then from

Σ_{t} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(e_{.t})|^{β}_{β}] u_{tl} = Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(u_{.l})|^{β}_{β},

we have

a^{c,†}_{ij} = Σ_{l} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(u_{.l})|^{β}_{β}] [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(ä_{l.})|^{α}_{α}] / (Σ_{β∈J_{s,n}} |A^*A|^{β}_{β} · Σ_{α∈I_{s₁,n}} |A^{m+1}|^{α}_{α} · Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}).

Construct the matrix U = (u_{tl}) ∈ C^{n×n}, where u_{tl} is given by (42), and denote Û ≔ UAA^*. Then, taking into account that Σ_{β∈J_{s,n}} |A^*A|^{β}_{β} = Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}, we have

a^{c,†}_{ij} = Σ_{t,k} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(e_{.t})|^{β}_{β}] û_{tk} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{k.})|^{α}_{α}] / ((Σ_{β∈J_{s,n}} |A^*A|^{β}_{β})² · Σ_{α∈I_{s₁,n}} |A^{m+1}|^{α}_{α}).

If we put

v^{(1)}_{tj} ≔ Σ_{k} û_{tk} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{k.})|^{α}_{α}] = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(û_{t.})|^{α}_{α}

as the tth component of a column vector v^{(1)}_{.j} = (v^{(1)}_{1j}, …, v^{(1)}_{nj})^T, then from

Σ_{t} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(e_{.t})|^{β}_{β}] v^{(1)}_{tj} = Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(v^{(1)}_{.j})|^{β}_{β},

(35) follows with v^{(1)}_{.j} given by (37). If we initially put

w^{(1)}_{ik} ≔ Σ_{t} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(e_{.t})|^{β}_{β}] û_{tk} = Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(û_{.k})|^{β}_{β}

as the kth component of the row vector w^{(1)}_{i.} = (w^{(1)}_{i1}, …, w^{(1)}_{in}), then from

Σ_{k} w^{(1)}_{ik} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{k.})|^{α}_{α}] = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(w^{(1)}_{i.})|^{α}_{α},

(36) follows with w^{(1)}_{i.} given by (38).

(b) By using the determinantal representation (12) for A^d in (41), we have

a^{c,†}_{ij} = Σ_{k,t} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(ȧ_{.t})|^{β}_{β} / Σ_{β∈J_{s,n}} |A^*A|^{β}_{β}] · [Σ_{β∈J_{s₁,n}{t}} |(A^{m+1})_{.t}(a^{(m)}_{.k})|^{β}_{β} / Σ_{β∈J_{s₁,n}} |A^{m+1}|^{β}_{β}] · [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(ä_{k.})|^{α}_{α} / Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}].

Therefore,

a^{c,†}_{ij} = Σ_{l,k,t} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(ȧ_{.t})|^{β}_{β} / Σ_{β∈J_{s,n}} |A^*A|^{β}_{β}] × [Σ_{β∈J_{s₁,n}{t}} |(A^{m+1})_{.t}(e_{.k})|^{β}_{β} / Σ_{β∈J_{s₁,n}} |A^{m+1}|^{β}_{β}] ã_{kl} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{l.})|^{α}_{α} / Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}],

where e_{.k} is the kth unit column vector, e_{l.} is the lth unit row vector, and ã_{kl} is the (kl)th element of Ã ≔ A^{m+1}A^*.

If we denote

g_{tl} ≔ Σ_{k} [Σ_{β∈J_{s₁,n}{t}} |(A^{m+1})_{.t}(e_{.k})|^{β}_{β}] ã_{kl} = Σ_{β∈J_{s₁,n}{t}} |(A^{m+1})_{.t}(ã_{.l})|^{β}_{β} (43)

as the lth component of a row vector g_{t.} = (g_{t1}, …, g_{tn}), then

Σ_{l} g_{tl} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{l.})|^{α}_{α}] = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(g_{t.})|^{α}_{α}.

From this, it follows that

a^{c,†}_{ij} = Σ_{t} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(ȧ_{.t})|^{β}_{β}] [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(g_{t.})|^{α}_{α}] / (Σ_{β∈J_{s,n}} |A^*A|^{β}_{β} · Σ_{β∈J_{s₁,n}} |A^{m+1}|^{β}_{β} · Σ_{α∈I_{s,n}} |AA^*|^{α}_{α}).

Construct the matrix G = (g_{tl}) ∈ C^{n×n}, where g_{tl} is given by (43), and denote G̃ ≔ A^*AG. Then

a^{c,†}_{ij} = Σ_{t,k} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(e_{.t})|^{β}_{β}] g̃_{tk} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{k.})|^{α}_{α}] / ((Σ_{β∈J_{s,n}} |A^*A|^{β}_{β})² · Σ_{β∈J_{s₁,n}} |A^{m+1}|^{β}_{β}).

If we denote

v^{(2)}_{tj} ≔ Σ_{k} g̃_{tk} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{k.})|^{α}_{α}] = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(g̃_{t.})|^{α}_{α}

as the tth component of a column vector v^{(2)}_{.j} = (v^{(2)}_{1j}, …, v^{(2)}_{nj})^T, then

Σ_{t} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(e_{.t})|^{β}_{β}] v^{(2)}_{tj} = Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(v^{(2)}_{.j})|^{β}_{β}.

Thus, we have (35) with v^{(2)}_{.j} given by (39).

If, now, we denote

w^{(2)}_{ik} ≔ Σ_{t} [Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(e_{.t})|^{β}_{β}] g̃_{tk} = Σ_{β∈J_{s,n}{i}} |(A^*A)_{.i}(g̃_{.k})|^{β}_{β}

as the kth component of a row vector w^{(2)}_{i.} = (w^{(2)}_{i1}, …, w^{(2)}_{in}), then

Σ_{k} w^{(2)}_{ik} [Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(e_{k.})|^{α}_{α}] = Σ_{α∈I_{s,n}{j}} |(AA^*)_{j.}(w^{(2)}_{i.})|^{α}_{α}.

So, finally, we have (36) with w^{(2)}_{i.} given by (40). □


4. An example

Given the matrix

A =
[ 2    0    0 ]
[ i    i    i ]
[ i   -i   -i ].

Since

AA^* = [ 4  -2i  -2i; 2i  3  -1; 2i  -1  3 ],  A² = [ 4  0  0; -2+2i  0  0; 2+2i  0  0 ],  A³ = [ 8  0  0; -4+4i  0  0; 4+4i  0  0 ],

then rk A = 2 and rk A² = rk A³ = 1, so k = Ind A = 2 and s₁ = 1. First, we find A^{⊛} and A_{⊛} by (25) and (26), respectively.

Since

Â = A²(A³)^* = 16 · [ 2  -1-i  1-i; -1+i  1  i; 1+i  -i  1 ],

then by (25),

a^{⊛,r}_{11} = Σ_{α∈I_{1,3}{1}} |(A³(A³)^*)_{1.}(â_{1.})|^{α}_{α} / Σ_{α∈I_{1,3}} |A³(A³)^*|^{α}_{α} = 32/128 = 1/4.

By similarly continuing, we get

A^{⊛} = (1/8) · [ 2  -1-i  1-i; -1+i  1  i; 1+i  -i  1 ].

By analogy, due to (26), we have

A_{⊛} = (1/2) · [ 1  0  0; 0  0  0; 0  0  0 ].

The DMP inverse A^{d,†} can be found by Theorem 3.15. Since

Ã = A³A^* = 4 · [ 4  -2i  -2i; -2+2i  1+i  1+i; 2+2i  1-i  1-i ]

and rk A³ = 1, then

u^{(1)}_{1.} = ã_{1.},  u^{(1)}_{2.} = ã_{2.},  u^{(1)}_{3.} = ã_{3.}.

Furthermore, by (29),

a^{d,†}_{11} = Σ_{α∈I_{2,3}{1}} |(AA^*)_{1.}(u^{(1)}_{1.})|^{α}_{α} / (Σ_{β∈J_{1,3}} |A³|^{β}_{β} · Σ_{α∈I_{2,3}} |AA^*|^{α}_{α}) = (1/192) · (det[ 16  -8i; 2i  3 ] + det[ 16  -8i; 2i  3 ]) = 64/192 = 1/3.

By similarly continuing, we get

A^{d,†} = (1/12) · [ 4  -2i  -2i; -2+2i  1+i  1+i; 2+2i  1-i  1-i ].

Similarly, by Theorem 3.17, we get

A^{†,d} = (1/4) · [ 2  0  0; i  0  0; i  0  0 ].

Finally, by Theorem 3.20, we find the CMP inverse A^{c,†} = (a^{c,†}_{ij}). Since rk A³ = 1, then G = Ã and

G̃ = A^*AÃ = 16 · [ 6  -3i  -3i; 2i  1  1; 2i  1  1 ].

Furthermore, by (40),

w^{(2)}_{11} = Σ_{β∈J_{2,3}{1}} |(A^*A)_{.1}(g̃_{.1})|^{β}_{β} = det[ 96  0; 32i  2 ] + det[ 96  0; 32i  2 ] = 384.

By similar calculations, we get

w^{(2)}_{1.} = (384, -192i, -192i),  w^{(2)}_{2.} = (192i, 96, 96),  w^{(2)}_{3.} = (192i, 96, 96).

So, by (36), we get

a^{c,†}_{11} = Σ_{α∈I_{2,3}{1}} |(AA^*)_{1.}(w^{(2)}_{1.})|^{α}_{α} / ((Σ_{α∈I_{2,3}} |AA^*|^{α}_{α})² · Σ_{β∈J_{1,3}} |A³|^{β}_{β}) = (1/4608) · (det[ 384  -192i; 2i  3 ] + det[ 384  -192i; 2i  3 ]) = 1536/4608 = 1/3.

By similarly continuing, we derive

A^{c,†} = (1/12) · [ 4  -2i  -2i; 2i  1  1; 2i  1  1 ].
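The whole example can be cross-checked numerically with the representations of Lemma 3.11 and Eq. (34); the sketch below is my illustration, assuming the entries of A as read off from the displays above, with A^d from Cline's formula A^d = A^k (A^{2k+1})^† A^k (a standard identity not derived in this chapter):

```python
import numpy as np

A = np.array([[2, 0, 0], [1j, 1j, 1j], [1j, -1j, -1j]])
A2, A3 = A @ A, A @ A @ A
assert np.linalg.matrix_rank(A) == 2 and np.linalg.matrix_rank(A2) == 1

R = A2 @ np.linalg.pinv(A3)                  # right core-EP inverse, Eq. (21)
expected_R = np.array([[2, -1 - 1j, 1 - 1j],
                       [-1 + 1j, 1, 1j],
                       [1 + 1j, -1j, 1]]) / 8
assert np.allclose(R, expected_R)

L = np.linalg.pinv(A3) @ A2                  # left core-EP inverse, Eq. (22)
expected_L = np.zeros((3, 3), dtype=complex)
expected_L[0, 0] = 0.5
assert np.allclose(L, expected_L)

Ad = A2 @ np.linalg.pinv(np.linalg.matrix_power(A, 5)) @ A2  # Cline, k = 2
CMP = (np.linalg.pinv(A) @ A) @ Ad @ (A @ np.linalg.pinv(A))  # Eq. (34)
assert np.isclose(CMP[0, 0], 1/3)
```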

5. Conclusions

In this chapter, we have given a direct method for finding the core inverse and its generalizations, based on their determinantal representations. New determinantal representations of the right and left core inverses, the right and left core-EP inverses, and the DMP, MPD, and CMP inverses have been derived.

References

  1. Baksalary OM, Trenkler G. Core inverse of matrices. Linear and Multilinear Algebra. 2010;58:681-697
  2. Malik S, Thome N. On a new generalized inverse for matrices of an arbitrary index. Applied Mathematics and Computation. 2014;226:575-580
  3. Xu SZ, Chen JL, Zhang XX. New characterizations for core inverses in rings with involution. Frontiers of Mathematics in China. 2017;12:231-246
  4. Prasad KM, Mohana KS. Core-EP inverse. Linear and Multilinear Algebra. 2014;62(3):792-802
  5. Baksalary OM, Trenkler G. On a generalized core inverse. Applied Mathematics and Computation. 2014;236:450-457
  6. Mehdipour M, Salemi A. On a new generalized inverse of matrices. Linear and Multilinear Algebra. 2018;66(5):1046-1053
  7. Chen JL, Zhu HH, Patrício P, Zhang YL. Characterizations and representations of core and dual core inverses. Canadian Mathematical Bulletin. 2017;60:269-282
  8. Gao YF, Chen JL. Pseudo core inverses in rings with involution. Communications in Algebra. 2018;46:38-50
  9. Guterman A, Herrero A, Thome N. New matrix partial order based on spectrally orthogonal matrix decomposition. Linear and Multilinear Algebra. 2016;64(3):362-374
  10. Ferreyra DE, Levis FE, Thome N. Maximal classes of matrices determining generalized inverses. Applied Mathematics and Computation. 2018;333:42-52
  11. Ferreyra DE, Levis FE, Thome N. Revisiting the core EP inverse and its extension to rectangular matrices. Quaestiones Mathematicae. 2018;41(2):265-281
  12. Liu X, Cai N. High-order iterative methods for the DMP inverse. Journal of Mathematics. 2018; Article ID 8175935, 6 p.
  13. Ma H, Stanimirović PS. Characterizations, approximation and perturbations of the core-EP inverse. Applied Mathematics and Computation. 2019;359:404-417
  14. Mielniczuk J. Note on the core matrix partial ordering. Discussiones Mathematicae Probability and Statistics. 2011;31:71-75
  15. Mosić D, Deng C, Ma H. On a weighted core inverse in a ring with involution. Communications in Algebra. 2018;46(6):2332-2345
  16. Prasad KM, Raj MD. Bordering method to compute core-EP inverse. Special Matrices. 2018;6:193-200
  17. Rakić DS, Dinčić NČ, Djordjević DS. Group, Moore-Penrose, core and dual core inverse in rings with involution. Linear Algebra and its Applications. 2014;463:115-133
  18. Wang HX. Core-EP decomposition and its applications. Linear Algebra and its Applications. 2016;508:289-300
  19. Bapat RB, Bhaskara Rao KPS, Prasad KM. Generalized inverses over integral domains. Linear Algebra and its Applications. 1990;140:181-196
  20. Bhaskara Rao KPS. Generalized inverses of matrices over integral domains. Linear Algebra and its Applications. 1983;49:179-189
  21. Kyrchei I. Analogs of the adjoint matrix for generalized inverses and corresponding Cramer rules. Linear and Multilinear Algebra. 2008;56(4):453-469
  22. Kyrchei I. Explicit formulas for determinantal representations of the Drazin inverse solutions of some matrix and differential matrix equations. Applied Mathematics and Computation. 2013;219:7632-7644
  23. Kyrchei I. Cramer's rule for generalized inverse solutions. In: Kyrchei I, editor. Advances in Linear Algebra Research. New York: Nova Science Publishers; 2015. pp. 79-132
  24. Stanimirović PS. General determinantal representation of pseudoinverses of matrices. Matematički Vesnik. 1996;48:1-9
  25. Stanimirović PS, Djordjević DS. Full-rank and determinantal representation of the Drazin inverse. Linear Algebra and its Applications. 2000;311:131-151
  26. Kyrchei I. Determinantal representations of the Moore-Penrose inverse over the quaternion skew field. Journal of Mathematical Sciences. 2012;180(1):23-33
  27. Kyrchei I. Determinantal representations of the Moore-Penrose inverse over the quaternion skew field and corresponding Cramer's rules. Linear and Multilinear Algebra. 2011;59(4):413-431
  28. Kyrchei I. Determinantal representations of the Drazin inverse over the quaternion skew field with applications to some matrix equations. Applied Mathematics and Computation. 2014;238:193-207
  29. Kyrchei I. Determinantal representations of the W-weighted Drazin inverse over the quaternion skew field. Applied Mathematics and Computation. 2015;264:453-465
  30. Kyrchei I. Explicit determinantal representation formulas of W-weighted Drazin inverse solutions of some matrix equations over the quaternion skew field. Mathematical Problems in Engineering. 2016; Article ID 8673809, 13 p.
  31. Kyrchei I. Explicit determinantal representation formulas for the solution of the two-sided restricted quaternionic matrix equation. Journal of Applied Mathematics and Computing. 2018;58(1-2):335-365
  32. Kyrchei I. Determinantal representations of the Drazin and W-weighted Drazin inverses over the quaternion skew field with applications. In: Griffin S, editor. Quaternions: Theory and Applications. New York: Nova Science Publishers; 2017. pp. 201-275
  33. Kyrchei I. Weighted singular value decomposition and determinantal representations of the quaternion weighted Moore-Penrose inverse. Applied Mathematics and Computation. 2017;309:1-16
  34. Kyrchei I. Determinantal representations of the quaternion weighted Moore-Penrose inverse and its applications. In: Baswell AR, editor. Advances in Mathematics Research 23. New York: Nova Science Publishers; 2017. pp. 35-96
  35. Zhou M, Chen J, Li T, Wang D. Three limit representations of the core-EP inverse. Filomat. 2018;32:5887-5894
