Open access peer-reviewed chapter - ONLINE FIRST

Matrices with a Diagonal Commutator

Written By

Armando Martínez-Pérez and Gabino Torres-Vega

Submitted: 22 September 2023 Reviewed: 19 October 2023 Published: 21 February 2024

DOI: 10.5772/intechopen.1003770

From the edited volume: Matrix Theory and Its Applications [Working Title], edited by Dr. Victor Eduardo Martinez-Luaces. IntechOpen.

Abstract

It is well known that there are no two matrices whose commutator is diagonal, in the sense of proportional to the identity matrix. However, the commutator can behave as if it were diagonal when acting on a particular vector. We discuss pairs of matrices that give rise to a diagonal commutator when applied to a given, arbitrary vector, and some of their properties. These matrices admit continuous eigenvalues and eigenvectors in addition to the discrete set allowed by the dimension of the matrix, and their inverses share this property. Some of these matrices are discrete approximations of the derivative and the integral of a function and are exact for the exponential function. We also determine the adjoint of the obtained discrete derivative.

Keywords

  • commutator between matrices
  • pair of matrices with diagonal commutator
  • exact finite differences derivative
  • exact finite differences integration
  • matrices as discrete operators

1. Introduction

Let us consider the matrices that shift the entries of a vector. The usual matrix that cyclically shifts the entries of a vector to the left is

R_1=\begin{pmatrix}0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\\1&0&0&0&0\end{pmatrix}.   (E1)

Given an arbitrary vector h = (h_1, h_2, …, h_N)^T ∈ ℂ^N, with h_j ≠ 0, the action of R_1 on this vector results in

R_1h=\left(h_2,h_3,\ldots,h_N,h_1\right)^T.   (E2)

There is also a matrix that shifts the entries of a vector to the right: the matrix with the nonzero entries on the lower diagonal.

But we can also use a diagonal matrix to rotate the entries of the particular vector h to the left. The action of the diagonal matrix

R_2=\begin{pmatrix}\frac{h_2}{h_1}&0&\cdots&0&0\\0&\frac{h_3}{h_2}&\cdots&0&0\\\vdots&&\ddots&&\vdots\\0&0&\cdots&\frac{h_N}{h_{N-1}}&0\\0&0&\cdots&0&\frac{h_1}{h_N}\end{pmatrix}   (E3)

on h is R_2h = (h_2, h_3, …, h_N, h_1)^T. It is also possible to perform a cyclic rotation to the right in the same way. The action of this matrix on any other vector is only to rescale its entries.

Combining the above ideas gives rise to a third type of shifting matrix. There is a non-diagonal matrix that acts like an identity matrix for the particular vector h:

R_3=\begin{pmatrix}0&\frac{h_1}{h_2}&0&\cdots&0&0\\0&0&\frac{h_2}{h_3}&\cdots&0&0\\\vdots&&&\ddots&&\vdots\\0&0&0&\cdots&0&\frac{h_{N-1}}{h_N}\\\frac{h_N}{h_1}&0&0&\cdots&0&0\end{pmatrix}.   (E4)

We have that R_3h = h. The eigenvalues of this matrix are the same as those of R_1, the N roots of unity λ_n = e^{i2πn/N}, n = 0, 1, 2, …, N−1, and h is the eigenvector corresponding to the eigenvalue λ_0 = 1.
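
As a quick numerical illustration (a NumPy sketch with an arbitrary nonzero vector h of our choosing), the three shifting matrices can be built and checked directly. One way to see that R_3 shares the eigenvalues of R_1 is the similarity R_3 = P R_1 P^{-1} with P = diag(h):

```python
import numpy as np

N = 4
h = np.array([1.0, 2.0, -3.0, 0.5])      # arbitrary vector with h_j != 0

# R1: cyclic left shift, Eq. (E1)
R1 = np.eye(N, k=1)
R1[-1, 0] = 1.0

# R2: diagonal matrix that also rotates h to the left, Eq. (E3)
R2 = np.diag(np.roll(h, -1) / h)

# R3: non-diagonal matrix that acts as the identity on h, Eq. (E4);
# built here as diag(h) R1 diag(h)^{-1}, i.e., R3[j, j+1] = h_j / h_{j+1}
R3 = np.diag(h) @ R1 @ np.diag(1.0 / h)

assert np.allclose(R1 @ h, np.roll(h, -1))   # left rotation
assert np.allclose(R2 @ h, np.roll(h, -1))   # same rotation via a diagonal matrix
assert np.allclose(R3 @ h, h)                # identity on h

# R3 is similar to R1, so its eigenvalues are the N-th roots of unity
ev = np.linalg.eigvals(R3)
for r in np.exp(2j * np.pi * np.arange(N) / N):
    assert np.min(np.abs(ev - r)) < 1e-10
```

The choice of h here is only illustrative; any vector with nonzero entries works.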

Other special matrices are those that admit a continuous eigenvalue besides the usual constant eigenvalues. For instance, the matrix

Q_1=\begin{pmatrix}q_{11}&q_{12}&0\\0&q_{22}&q_{23}\\0&q_{32}&q_{33}\end{pmatrix}   (E5)

has eigenvalues q_{11}, ½(q_{22}+q_{33}−z), and ½(q_{22}+q_{33}+z), and the corresponding eigenvectors are

\left(1,0,0\right)^T,   (E6)
\left(\frac{q_{12}(q_{22}-q_{33}-z)}{q_{32}(q_{22}+q_{33}-z-2q_{11})},\ \frac{q_{22}-q_{33}-z}{2q_{32}},\ 1\right)^T,   (E7)
\left(\frac{q_{12}(q_{22}-q_{33}+z)}{q_{32}(q_{22}+q_{33}+z-2q_{11})},\ \frac{q_{22}-q_{33}+z}{2q_{32}},\ 1\right)^T,   (E8)

where z=\sqrt{4q_{23}q_{32}+(q_{22}-q_{33})^2}. These are the only eigenvalues and eigenvectors. The eigenvalues are fixed quantities that depend on the fixed entries of the matrix. However, the matrix

\begin{pmatrix}\frac{v_1\tau}{v_1-v_2}&-\frac{v_1\tau}{v_1-v_2}&0\\0&\frac{v_2\tau}{v_2-v_3}&-\frac{v_2\tau}{v_2-v_3}\\0&-\frac{v_3\tau}{v_3-v_2}&\frac{v_3\tau}{v_3-v_2}\end{pmatrix}   (E9)

has eigenvalues 0, τ, and v_1τ/(v_1−v_2), with corresponding eigenvectors

\left(1,1,1\right)^T,\qquad\left(v_1,v_2,v_3\right)^T,\qquad\left(1,0,0\right)^T,   (E10)

where the eigenvalue τ is independent of the v_j, and vice versa, and it is a continuous variable.

There is the usual procedure for obtaining a set of eigenvalues and eigenvectors [1]. But if the entries of a matrix contain variables, we can instead solve the set of simultaneous eigenvalue equations for the entries of the matrix, obtaining additional continuous eigenvalues and eigenvectors.
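
To make the idea concrete, here is a small NumPy sketch (the values of v and τ are illustrative choices of ours) that builds the matrix of Eq. (E9) and checks that any chosen τ is an eigenvalue, independently of v:

```python
import numpy as np

def m_tau(v, tau):
    """Matrix of Eq. (E9): eigenvalues 0, tau, and v1*tau/(v1 - v2)."""
    v1, v2, v3 = v
    return np.array([
        [ v1*tau/(v1 - v2), -v1*tau/(v1 - v2),  0.0              ],
        [ 0.0,               v2*tau/(v2 - v3), -v2*tau/(v2 - v3) ],
        [ 0.0,              -v3*tau/(v3 - v2),  v3*tau/(v3 - v2) ],
    ])

v = np.array([1.0, 2.0, 5.0])            # illustrative vector, v_j distinct
e1 = np.array([1.0, 0.0, 0.0])
for tau in (0.7, -3.2, 100.0):           # tau is a *continuous* parameter
    M = m_tau(v, tau)
    assert np.allclose(M @ np.ones(3), 0.0)    # eigenvector (1,1,1)^T, eigenvalue 0
    assert np.allclose(M @ v, tau * v)         # eigenvector v, eigenvalue tau
    assert np.allclose(M @ e1, (v[0]*tau/(v[0] - v[1])) * e1)  # third pair
```

The loop over several τ values is the point: the same v supports a whole continuum of eigenvalues because the matrix entries, not the vector, were solved for.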

Now, in general, there are no two matrices that have a diagonal commutator [2], that is, one proportional to the identity matrix. We use the above facts about matrices to define two matrices whose commutator is diagonal when applied to an arbitrary vector h. One of the obtained matrices corresponds to a finite-differences derivative of a function, exact for the exponential function.


2. The matrices

There are many pairs of matrices with a diagonal commutator along a given direction, but we consider a simple set for the sake of simplicity.

We consider the pair of N×N, N ∈ ℕ, matrices A, with nonzero elements on and around the diagonal, and B, diagonal, of the form

A=\begin{pmatrix}a_{11}&a_{12}&0&\cdots&0&0\\0&a_{22}&a_{23}&\cdots&0&0\\0&0&a_{33}&\ddots&0&0\\\vdots&&&\ddots&&\vdots\\0&0&0&\cdots&a_{N-1,N-1}&a_{N-1,N}\\0&0&0&\cdots&a_{N,N-1}&a_{NN}\end{pmatrix},   (E11)
B=\mathrm{diag}\left(b_1,b_2,b_3,\ldots,b_N\right),   (E12)

both matrices with complex entries.

A straightforward calculation yields the characteristic polynomial of the matrix A,

\det\left(A-\lambda I_N\right)=\left(a_{11}-\lambda\right)\cdots\left(a_{N-2,N-2}-\lambda\right)\left[\lambda^2-\lambda\left(a_{NN}+a_{N-1,N-1}\right)-a_{N-1,N}a_{N,N-1}+a_{N-1,N-1}a_{NN}\right],   (E13)

whose solutions are

\lambda=a_{11},\ a_{22},\ \ldots,\ a_{N-2,N-2},\ a_+,\ a_-,   (E14)

where

a_\pm=\frac{1}{2}\left[a_{N-1,N-1}+a_{NN}\pm\sqrt{\left(a_{N-1,N-1}-a_{NN}\right)^2+4a_{N-1,N}a_{N,N-1}}\right].   (E15)

The corresponding eigenvectors are a bit complicated to write down here but, for instance, for dimension three, the eigenvalues are a_{11}, (a_{22}+a_{33}−z)/2, and (a_{22}+a_{33}+z)/2, and the corresponding eigenvectors are

\left(1,0,0\right)^T,   (E16)
\left(\frac{a_{12}(a_{22}-a_{33}-z)}{a_{32}(a_{22}+a_{33}-z-2a_{11})},\ \frac{a_{22}-a_{33}-z}{2a_{32}},\ 1\right)^T,   (E17)
\left(\frac{a_{12}(a_{22}-a_{33}+z)}{a_{32}(a_{22}+a_{33}+z-2a_{11})},\ \frac{a_{22}-a_{33}+z}{2a_{32}},\ 1\right)^T,   (E18)

where

z=\sqrt{4a_{23}a_{32}+\left(a_{22}-a_{33}\right)^2}.   (E19)

These eigenvalues are discrete and fixed, and the eigenvectors are mutually distinct.

Still, we can generate additional eigenvalues and eigenvectors by choosing the values of a_jj. The eigenvalue equation Au = γu, γ ∈ ℂ, u = (u_1, …, u_N)^T ∈ ℂ^N, leads to a system of simultaneous equations which is now solved for the a_jj's, with solutions

a_{11}=\gamma-a_{12}\frac{u_2}{u_1},\ \ \ldots,\ \ a_{N-1,N-1}=\gamma-a_{N-1,N}\frac{u_N}{u_{N-1}},\ \ a_{NN}=\gamma-a_{N,N-1}\frac{u_{N-1}}{u_N}.   (E20)

This solution set gives rise to a matrix that results in weighted differences between the entries of a vector, resembling a finite-differences derivative [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15].

Now, the determinant of the matrix A is, in general, not zero; it is equal to

\det A=\left(a_{N-1,N-1}a_{NN}-a_{N-1,N}a_{N,N-1}\right)\prod_{j=1}^{N-2}a_{jj},   (E21)

and then we can compute its inverse in the usual way. We could find it with A^{-1} = (det A)^{-1} adj A, where adj A is the adjugate of A, whose entries are given by (−1)^{i+j} det A_{ji}, where A_{ji} is the minor obtained by deleting the jth row and the ith column of A [1]. The inverse of A becomes

A^{-1}=\begin{pmatrix}\frac{1}{a_{11}}&-\frac{a_{12}}{a_{11}a_{22}}&\frac{a_{12}a_{23}}{a_{11}a_{22}a_{33}}&\cdots&(-1)^{N}\frac{a_{NN}\prod_{k=1}^{N-2}a_{k,k+1}}{w\prod_{k=1}^{N-2}a_{kk}}&(-1)^{N+1}\frac{\prod_{k=1}^{N-1}a_{k,k+1}}{w\prod_{k=1}^{N-2}a_{kk}}\\0&\frac{1}{a_{22}}&-\frac{a_{23}}{a_{22}a_{33}}&\cdots&(-1)^{N-1}\frac{a_{NN}\prod_{k=2}^{N-2}a_{k,k+1}}{w\prod_{k=2}^{N-2}a_{kk}}&(-1)^{N}\frac{\prod_{k=2}^{N-1}a_{k,k+1}}{w\prod_{k=2}^{N-2}a_{kk}}\\\vdots&&\ddots&&\vdots&\vdots\\0&0&\cdots&0&\frac{a_{NN}}{w}&-\frac{a_{N-1,N}}{w}\\0&0&\cdots&0&-\frac{a_{N,N-1}}{w}&\frac{a_{N-1,N-1}}{w}\end{pmatrix},   (E22)

where w = a_{N−1,N−1}a_{NN} − a_{N−1,N}a_{N,N−1}. When w vanishes, the matrix A has no inverse. This matrix resembles a discrete approximation of integration over subintervals, with weight factors given by the matrix elements [3].
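
The determinant formula (E21) and the block structure of the inverse are easy to confirm numerically; the following NumPy sketch uses illustrative entries of our own choosing:

```python
import numpy as np

N = 5
a_diag = np.array([2.0, 3.0, -1.5, 4.0, 2.5])      # a_11 .. a_NN (illustrative)
a_up   = np.array([1.0, -2.0, 0.5, 3.0])           # a_12, a_23, a_34, a_45
a_low  = 1.5                                       # a_{N,N-1}

# A of Eq. (E11): upper bidiagonal plus one lower entry in the last row
A = np.diag(a_diag) + np.diag(a_up, k=1)
A[-1, -2] = a_low

# Eq. (E21): det A = (a_{N-1,N-1} a_NN - a_{N-1,N} a_{N,N-1}) * prod_{j<=N-2} a_jj
w = a_diag[-2] * a_diag[-1] - a_up[-1] * a_low
assert np.isclose(np.linalg.det(A), w * np.prod(a_diag[:-2]))

# The bottom-right 2x2 block of A^{-1} is the inverse of the last 2x2 block of A
Ainv = np.linalg.inv(A)
block = np.array([[a_diag[-1], -a_up[-1]], [-a_low, a_diag[-2]]]) / w
assert np.allclose(Ainv[-2:, -2:], block)
```

Because A is block upper triangular (its only lower entry sits inside the trailing 2×2 block), the determinant factors as in Eq. (E21) and the trailing block of the inverse is self-contained.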

The usual eigenvectors are too complicated to write here, but the complement eigenvector v = (v_1, v_2, …, v_N)^T, with eigenvalue τ, is obtained when the diagonal matrix elements are given by

a_{jj}=\tau-a_{j,j+1}\frac{v_{j+1}}{v_j},\qquad 1\le j<N,   (E23)
a_{NN}=\tau-a_{N,N-1}\frac{v_{N-1}}{v_N}.   (E24)

We saw some properties of the general matrix A. Next, we analyze particular versions of this matrix.


3. The commutator between matrices A and B

The commutator between the matrices A and B results in

[A,B]=\begin{pmatrix}0&a_{12}\Delta_1&0&\cdots&0&0\\0&0&a_{23}\Delta_2&\cdots&0&0\\\vdots&&&\ddots&&\vdots\\0&0&0&\cdots&0&a_{N-1,N}\Delta_{N-1}\\0&0&0&\cdots&-a_{N,N-1}\Delta_{N-1}&0\end{pmatrix},   (E25)

where Δ_j = b_{j+1} − b_j is the difference between adjacent diagonal elements of B. This matrix is not diagonal; it does not depend on the diagonal elements of the matrix A, and it involves the differences Δ_j. We can choose the values of a_{j,j+1} and a_{N,N−1} so as to obtain the effect of a diagonal commutator when acting on the particular, arbitrary vector h. This procedure can be applied to more general matrices A and B. Still, our choice of the form of the matrices A and B is made for the sake of simplicity and in order to define a two-point derivative matrix that satisfies the commutator relationship and has the exponential function as an eigenvector.

We ask matrices A and B to comply with the requirement that

[A,B]h=\alpha h,\qquad \alpha\in\mathbb{C},   (E26)

where h = (h_1, h_2, …, h_N)^T ∈ ℂ^N is an arbitrary complex vector of length N with nonzero entries, h_j ≠ 0. The requirement of Eq. (E26) leads to a system of simultaneous equations that is solved not for the vector h but for the entries of the matrix A. The resulting matrix is

A_c=\begin{pmatrix}a_{11}&\frac{\alpha h_1}{h_2\Delta_1}&0&\cdots&0&0\\0&a_{22}&\frac{\alpha h_2}{h_3\Delta_2}&\cdots&0&0\\0&0&a_{33}&\ddots&&\vdots\\\vdots&&&\ddots&&\\0&0&0&\cdots&a_{N-1,N-1}&\frac{\alpha h_{N-1}}{h_N\Delta_{N-1}}\\0&0&0&\cdots&-\frac{\alpha h_N}{h_{N-1}\Delta_{N-1}}&a_{NN}\end{pmatrix},   (E27)

which depends on h and α. With these matrices, the commutator becomes a permutation-like matrix with weights αh_j/h_{j+1} and αh_N/h_{N−1},

[A_c,B]=\begin{pmatrix}0&\frac{\alpha h_1}{h_2}&0&\cdots&0&0\\0&0&\frac{\alpha h_2}{h_3}&\cdots&0&0\\\vdots&&&\ddots&&\vdots\\0&0&0&\cdots&0&\frac{\alpha h_{N-1}}{h_N}\\0&0&0&\cdots&\frac{\alpha h_N}{h_{N-1}}&0\end{pmatrix}.   (E28)

This matrix performs two shifts when acting on the vector h. Since the matrix has only upper off-diagonal elements different from zero, there is a shift to the left. But the ratios hj/hj+1 shift back the vector. Then, the commutator is diagonal along the direction of h.
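
The defining property, Eq. (E26), can be verified directly. The following NumPy sketch (h, b, α, and the diagonal entries of A_c are illustrative choices of ours) builds A_c and checks that the commutator, while not a diagonal matrix, acts as α times the identity on h:

```python
import numpy as np

N = 5
h = np.array([1.0, 2.0, -1.5, 0.5, 3.0])           # arbitrary, h_j != 0
b = np.array([0.0, 0.4, 1.0, 1.1, 2.0])            # diagonal of B, b_{j+1} != b_j
alpha = 0.8
delta = np.diff(b)                                 # Delta_j = b_{j+1} - b_j

# Build A_c of Eq. (E27); the diagonal a_jj is free, so pick something arbitrary
Ac = np.diag(np.arange(1.0, N + 1))
for j in range(N - 1):
    Ac[j, j + 1] = alpha * h[j] / (h[j + 1] * delta[j])
Ac[-1, -2] = -alpha * h[-1] / (h[-2] * delta[-1])

B = np.diag(b)
C = Ac @ B - B @ Ac                                # the commutator, Eq. (E28)

assert not np.allclose(C, np.diag(np.diag(C)))     # C itself is NOT diagonal...
assert np.allclose(C @ h, alpha * h)               # ...but acts diagonally on h
```

Note that the free diagonal of A_c drops out of the commutator, as Eq. (E25) shows.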

Now, we use another condition on A_c to determine its diagonal entries. The version of the matrix A_c that complies with the eigenvalue equation \tilde{A}_c v=\gamma v is

\tilde{A}=\begin{pmatrix}\gamma-\frac{\alpha h_1v_2}{h_2v_1\Delta_1}&\frac{\alpha h_1}{h_2\Delta_1}&0&\cdots&0&0\\0&\gamma-\frac{\alpha h_2v_3}{h_3v_2\Delta_2}&\frac{\alpha h_2}{h_3\Delta_2}&\cdots&0&0\\0&0&\gamma-\frac{\alpha h_3v_4}{h_4v_3\Delta_3}&\ddots&&\vdots\\\vdots&&&\ddots&&\\0&0&0&\cdots&\gamma-\frac{\alpha h_{N-1}v_N}{h_Nv_{N-1}\Delta_{N-1}}&\frac{\alpha h_{N-1}}{h_N\Delta_{N-1}}\\0&0&0&\cdots&-\frac{\alpha h_N}{h_{N-1}\Delta_{N-1}}&\gamma+\frac{\alpha h_Nv_{N-1}}{h_{N-1}v_N\Delta_{N-1}}\end{pmatrix}.   (E29)

The action of this matrix on a vector g = (g_1, g_2, …, g_N)^T ∈ ℂ^N is

(\tilde{A}g)_j=\frac{\alpha h_j}{h_{j+1}\Delta_j}g_{j+1}+\left(\gamma-\frac{\alpha h_jv_{j+1}}{h_{j+1}v_j\Delta_j}\right)g_j,\qquad 1\le j<N,   (E30)
(\tilde{A}g)_N=\left(\gamma+\frac{\alpha h_Nv_{N-1}}{h_{N-1}v_N\Delta_{N-1}}\right)g_N-\frac{\alpha h_N}{h_{N-1}\Delta_{N-1}}g_{N-1}.   (E31)

If we use power-series expansions of the quotients h(b)/h(b+Δ) and v(b+Δ)/v(b) at the mesh points, we obtain

\frac{h_j}{h_{j+1}\Delta_j}=\frac{1}{\Delta_j}-\frac{h_j'}{h_j}+O(\Delta_j),\qquad\text{and}\qquad \frac{v_{j+1}}{v_j\Delta_j}=\frac{1}{\Delta_j}+\frac{v_j'}{v_j}+O(\Delta_j).   (E32)

With these expansions, we obtain the small Δj expressions

(\tilde{A}g)_j\ \xrightarrow{\Delta_j\to0}\ \alpha\frac{g_{j+1}-g_j}{\Delta_j}+g_j\left(\gamma-\alpha\frac{v_j'}{v_j}\right)-\alpha\frac{h_j'}{h_j}\left(g_{j+1}-g_j\right)+O(\Delta_j),\qquad 1\le j<N,   (E33)
(\tilde{A}g)_N\ \xrightarrow{\Delta_{N-1}\to0}\ \alpha\left(\frac{1}{\Delta_{N-1}}+\frac{h_N'}{h_N}\right)\left(g_N-g_{N-1}\right)+\left(\gamma-\alpha\frac{v_N'}{v_N}\right)g_N+O(\Delta_{N-1}).   (E34)

Note that, since g_{j+1} − g_j → 0 and (g_{j+1} − g_j)/Δ_j → g′(b_j) as Δ_j → 0, if we require that γ − αv_j′/v_j = 0, we obtain αg′(b_j) from Eqs. (E32) and (E33) in the limit Δ_j → 0. Thus, if we make the choice v(b) = e^{γb/α}, Eqs. (E30) and (E31) lead to finite-difference approximations of the derivative when \tilde{A} acts on g. The right-hand side of Eq. (E34) becomes the finite-differences derivative of g(b) at the boundary plus the boundary term γg_N. Hereafter, we will use v(b) = e^{γb/α} and keep the Δ_j finite.


4. The derivative matrix

With the choices h_j = v_j and v_j = v(b_j) = e^{γb_j/α}, the matrix α^{-1}\tilde{A} becomes the derivative matrix

D=\begin{pmatrix}\frac{\gamma}{\alpha}-\frac{1}{\Delta_1}&\frac{e^{-\gamma\Delta_1/\alpha}}{\Delta_1}&0&\cdots&0&0\\0&\frac{\gamma}{\alpha}-\frac{1}{\Delta_2}&\frac{e^{-\gamma\Delta_2/\alpha}}{\Delta_2}&\cdots&0&0\\0&0&\frac{\gamma}{\alpha}-\frac{1}{\Delta_3}&\ddots&&\vdots\\\vdots&&&\ddots&\frac{e^{-\gamma\Delta_{N-2}/\alpha}}{\Delta_{N-2}}&0\\0&0&0&\cdots&\frac{\gamma}{\alpha}-\frac{1}{\Delta_{N-1}}&\frac{e^{-\gamma\Delta_{N-1}/\alpha}}{\Delta_{N-1}}\\0&0&0&\cdots&-\frac{e^{\gamma\Delta_{N-1}/\alpha}}{\Delta_{N-1}}&\frac{\gamma}{\alpha}+\frac{1}{\Delta_{N-1}}\end{pmatrix}.   (E35)

When the matrix D acts to the left on a vector f = (f_1, f_2, …, f_N)^T ∈ ℂ^N, we obtain

f^TD=\left((Bf)_1,\ (Df)_2,\ \ldots,\ (Df)_{N-2},\ (Df)_{N-1}+(Bf)_{N-1},\ (Df)_N+(Bf)_N\right),   (E36)

where

(Df)_j=\left(\frac{\gamma}{\alpha}-\frac{1}{\Delta_j}\right)f_j+\frac{e^{-\gamma\Delta_{j-1}/\alpha}}{\Delta_{j-1}}f_{j-1},\qquad 1<j\le N\ \ (\text{with }\Delta_N\equiv\Delta_{N-1}),   (E37)
(Bf)_1=\left(\frac{\gamma}{\alpha}-\frac{1}{\Delta_1}\right)f_1,   (E38)
(Bf)_{N-1}=-\frac{e^{\gamma\Delta_{N-1}/\alpha}}{\Delta_{N-1}}f_N,   (E39)
(Bf)_N=\frac{2}{\Delta_{N-1}}f_N,   (E40)

which is the negative of a finite-differences approximation to the derivative of f. When Δ_j is small, using the exponential expansion e^{−γΔ_j/α} = 1 − γΔ_j/α + ⋯, we find that

(Df)_j\ \xrightarrow{\Delta_j\to0}\ -\left(\frac{f_j}{\Delta_j}-\frac{f_{j-1}}{\Delta_{j-1}}\right)+\frac{\gamma}{\alpha}\left(f_j-f_{j-1}\right)+O(\Delta_j),\qquad 1<j<N.   (E41)

When D acts to the right on the vector g = (g_1, …, g_N)^T, we obtain

Dg=\left((Dg)_1,(Dg)_2,\ldots,(Dg)_{N-1},(Dg)_N\right)^T,   (E42)

where

(Dg)_j=\left(\frac{\gamma}{\alpha}-\frac{1}{\Delta_j}\right)g_j+\frac{e^{-\gamma\Delta_j/\alpha}}{\Delta_j}g_{j+1},\qquad 1\le j\le N-1,   (E43)
(Dg)_N=\left(\frac{\gamma}{\alpha}+\frac{1}{\Delta_{N-1}}\right)g_N-\frac{e^{\gamma\Delta_{N-1}/\alpha}}{\Delta_{N-1}}g_{N-1}.   (E44)

For small Δj, these equalities become

(Dg)_j\ \xrightarrow{\Delta_j\to0}\ \frac{g_{j+1}-g_j}{\Delta_j}-\frac{\gamma}{\alpha}\left(g_{j+1}-g_j\right)+O(\Delta_j),\qquad 1\le j\le N-1,   (E45)
(Dg)_N\ \xrightarrow{\Delta_{N-1}\to0}\ \frac{g_N-g_{N-1}}{\Delta_{N-1}}+\frac{\gamma}{\alpha}\left(g_N-g_{N-1}\right)+O(\Delta_{N-1}).   (E46)

These approximations show that D is a derivation matrix; it is exact for the exponential function e^{γb/α} for any value of the Δ_j.
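
The exactness on the exponential can be checked on a deliberately nonuniform mesh; in the NumPy sketch below, the mesh, γ, and α are illustrative choices of ours:

```python
import numpy as np

gamma, alpha = 0.3, 1.1
b = np.array([0.0, 0.2, 0.5, 0.55, 1.3])           # nonuniform mesh points b_j
N = b.size
delta = np.diff(b)                                 # Delta_j = b_{j+1} - b_j

# Derivative matrix of Eq. (E35)
D = np.zeros((N, N))
for j in range(N - 1):
    D[j, j] = gamma / alpha - 1.0 / delta[j]
    D[j, j + 1] = np.exp(-gamma * delta[j] / alpha) / delta[j]
D[-1, -2] = -np.exp(gamma * delta[-1] / alpha) / delta[-1]
D[-1, -1] = gamma / alpha + 1.0 / delta[-1]

g = np.exp(gamma * b / alpha)                      # the exponential eigenfunction
# D reproduces the derivative of the exponential exactly, for ANY spacing:
assert np.allclose(D @ g, (gamma / alpha) * g)
```

No small-Δ limit is involved here: the assertion holds at finite, unequal subinterval lengths, which is the "exact finite differences" property.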

For instance, for dimension five, the eigenvalues of the derivative matrix D are γ/α (twice), γ/α − 1/Δ_1, γ/α − 1/Δ_2, and γ/α − 1/Δ_3, and the corresponding eigenvectors are

\left(e^{\gamma b_1/\alpha},e^{\gamma b_2/\alpha},e^{\gamma b_3/\alpha},e^{\gamma b_4/\alpha},e^{\gamma b_5/\alpha}\right)^T,   (E47)
\left(0,0,0,0,0\right)^T,   (E48)
\left(1,0,0,0,0\right)^T,   (E49)
\left(-\frac{\Delta_2}{\Delta_1-\Delta_2}e^{\gamma b_1/\alpha},\ e^{\gamma b_2/\alpha},\ 0,\ 0,\ 0\right)^T,   (E50)
\left(\frac{\Delta_3^2}{(\Delta_1-\Delta_3)(\Delta_2-\Delta_3)}e^{\gamma b_1/\alpha},\ -\frac{\Delta_3}{\Delta_2-\Delta_3}e^{\gamma b_2/\alpha},\ e^{\gamma b_3/\alpha},\ 0,\ 0\right)^T,   (E51)

and the exponential function is an eigenfunction of the derivative matrix with eigenvalue γ/α.

The determinant of the derivative matrix D is

\det D=\frac{\gamma^2}{\alpha^2}\prod_{k=1}^{N-2}\left(\frac{\gamma}{\alpha}-\frac{1}{\Delta_k}\right).   (E52)

Therefore, as long as α, γ, Δ_k ≠ 0 and α − γΔ_k ≠ 0, k = 1, 2, …, N−2, we can compute the inverse of D. For dimension 4×4, the inverse matrix is

D^{-1}=\begin{pmatrix}-\frac{\alpha\Delta_1}{\alpha-\gamma\Delta_1}&-\frac{\alpha^2\Delta_2e^{-\gamma\Delta_1/\alpha}}{(\alpha-\gamma\Delta_1)(\alpha-\gamma\Delta_2)}&\frac{\alpha^3(\alpha+\gamma\Delta_3)e^{-\gamma(\Delta_1+\Delta_2)/\alpha}}{\gamma^2\Delta_3(\alpha-\gamma\Delta_1)(\alpha-\gamma\Delta_2)}&-\frac{\alpha^4e^{-\gamma(\Delta_1+\Delta_2+\Delta_3)/\alpha}}{\gamma^2\Delta_3(\alpha-\gamma\Delta_1)(\alpha-\gamma\Delta_2)}\\0&-\frac{\alpha\Delta_2}{\alpha-\gamma\Delta_2}&\frac{\alpha^2(\alpha+\gamma\Delta_3)e^{-\gamma\Delta_2/\alpha}}{\gamma^2\Delta_3(\alpha-\gamma\Delta_2)}&-\frac{\alpha^3e^{-\gamma(\Delta_2+\Delta_3)/\alpha}}{\gamma^2\Delta_3(\alpha-\gamma\Delta_2)}\\0&0&\frac{\alpha(\alpha+\gamma\Delta_3)}{\gamma^2\Delta_3}&-\frac{\alpha^2e^{-\gamma\Delta_3/\alpha}}{\gamma^2\Delta_3}\\0&0&\frac{\alpha^2e^{\gamma\Delta_3/\alpha}}{\gamma^2\Delta_3}&-\frac{\alpha(\alpha-\gamma\Delta_3)}{\gamma^2\Delta_3}\end{pmatrix}.   (E53)

When this matrix acts on a vector, the result is a collection of partial summations with weights given by its entries; it is an integration matrix. The eigenvalues of the inverse matrix are α/γ, α/γ, −αΔ_1/(α−γΔ_1), and −αΔ_2/(α−γΔ_2), and the corresponding eigenvectors are

\left(e^{\gamma b_1/\alpha},e^{\gamma b_2/\alpha},e^{\gamma b_3/\alpha},e^{\gamma b_4/\alpha}\right)^T,   (E54)
\left(0,0,0,0\right)^T,   (E55)
\left(e^{\gamma b_1/\alpha},0,0,0\right)^T,   (E56)
\left(-\frac{\Delta_2}{\Delta_1-\Delta_2}e^{\gamma b_1/\alpha},\ e^{\gamma b_2/\alpha},\ 0,\ 0\right)^T.   (E57)

The exponential function eγb/α is an eigenvector of the inverse matrix (a finite differences integration matrix) with eigenvalue α/γ.
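
Both properties of the inverse, the exponential eigenvector with eigenvalue α/γ and the closed-form entries of Eq. (E53), can be spot-checked numerically (NumPy sketch; the mesh and parameters are illustrative):

```python
import numpy as np

gamma, alpha = 0.3, 1.1
b = np.array([0.0, 0.2, 0.5, 1.3])                 # 4-point nonuniform mesh
N = b.size
delta = np.diff(b)                                 # Delta_1, Delta_2, Delta_3

# Derivative matrix of Eq. (E35) for N = 4
D = np.zeros((N, N))
for j in range(N - 1):
    D[j, j] = gamma / alpha - 1.0 / delta[j]
    D[j, j + 1] = np.exp(-gamma * delta[j] / alpha) / delta[j]
D[-1, -2] = -np.exp(gamma * delta[-1] / alpha) / delta[-1]
D[-1, -1] = gamma / alpha + 1.0 / delta[-1]

Dinv = np.linalg.inv(D)
g = np.exp(gamma * b / alpha)
assert np.allclose(Dinv @ g, (alpha / gamma) * g)  # integration-matrix eigenvector

# spot-check two entries against the closed form of Eq. (E53)
d1, d3 = delta[0], delta[2]
assert np.isclose(Dinv[0, 0], -alpha * d1 / (alpha - gamma * d1))
assert np.isclose(Dinv[3, 2], alpha**2 * np.exp(gamma * d3 / alpha) / (gamma**2 * d3))
```

The remaining entries of Eq. (E53) can be checked the same way, one index pair at a time.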

4.1 Summation by parts

A practical result is the summation-by-parts theorem (the discrete version of the integration-by-parts theorem for a continuous variable), the subject of this section.

We start by defining the summation matrix

S=\mathrm{diag}\left(\Delta_1,\Delta_2,\ldots,\Delta_{N-1},0\right).   (E58)

The summation matrix S allows us to establish the equality

f^TSDg=g^T\left(\tilde{B}-SD^\dagger\right)f,   (E59)

where

\tilde{B}=\begin{pmatrix}\frac{\gamma\Delta_1}{\alpha}-1&0&\cdots&0&0\\0&0&\cdots&0&0\\\vdots&&&&\vdots\\0&0&\cdots&0&0\\0&0&\cdots&e^{-\gamma\Delta_{N-1}/\alpha}&0\end{pmatrix}   (E60)

is the boundary matrix, and

D^\dagger=\begin{pmatrix}0&0&0&\cdots&0&0\\-\frac{e^{-\gamma\Delta_1/\alpha}}{\Delta_2}&\frac{1}{\Delta_2}-\frac{\gamma}{\alpha}&0&\cdots&0&0\\0&-\frac{e^{-\gamma\Delta_2/\alpha}}{\Delta_3}&\frac{1}{\Delta_3}-\frac{\gamma}{\alpha}&\cdots&0&0\\\vdots&&\ddots&\ddots&&\vdots\\0&0&\cdots&-\frac{e^{-\gamma\Delta_{N-2}/\alpha}}{\Delta_{N-1}}&\frac{1}{\Delta_{N-1}}-\frac{\gamma}{\alpha}&0\\0&0&\cdots&0&0&0\end{pmatrix}   (E61)

is the adjoint of the discrete derivative matrix D. When this matrix acts on a vector f, we obtain a vector with entries

(D^\dagger f)_j=\left(\frac{1}{\Delta_j}-\frac{\gamma}{\alpha}\right)f_j-\frac{e^{-\gamma\Delta_{j-1}/\alpha}}{\Delta_j}f_{j-1},\qquad 1<j<N.   (E62)

The small Δj limit of the boundary terms is

\left(g^T\tilde{B}f\right)_1\ \xrightarrow{\Delta_j\to0}\ -f_1g_1+O(\Delta_j),   (E63)
\left(g^T\tilde{B}f\right)_N\ \xrightarrow{\Delta_j\to0}\ f_{N-1}g_N+O(\Delta_j),   (E64)

and, for the adjoint matrix D, we obtain

(D^\dagger f)_j\ \xrightarrow{\Delta_j\to0}\ \frac{f_j-f_{j-1}}{\Delta_j}-\frac{\gamma}{\alpha}f_j+O(\Delta_j).   (E65)

Thus, the adjoint matrix is a derivation minus a constant term.
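
The summation-by-parts identity, Eq. (E59), holds exactly at finite Δ_j, which a short NumPy sketch confirms with random vectors (the mesh and parameters are illustrative, and the matrices are built as reconstructed above):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, alpha = 0.3, 1.1
b = np.array([0.0, 0.2, 0.5, 0.55, 1.3])
N, delta = b.size, np.diff(b)

# Derivative matrix D of Eq. (E35)
D = np.zeros((N, N))
for j in range(N - 1):
    D[j, j] = gamma / alpha - 1.0 / delta[j]
    D[j, j + 1] = np.exp(-gamma * delta[j] / alpha) / delta[j]
D[-1, -2] = -np.exp(gamma * delta[-1] / alpha) / delta[-1]
D[-1, -1] = gamma / alpha + 1.0 / delta[-1]

S = np.diag(np.append(delta, 0.0))                 # summation matrix, Eq. (E58)

Btilde = np.zeros((N, N))                          # boundary matrix, Eq. (E60)
Btilde[0, 0] = gamma * delta[0] / alpha - 1.0
Btilde[-1, -2] = np.exp(-gamma * delta[-1] / alpha)

Ddag = np.zeros((N, N))                            # adjoint matrix, Eq. (E61)
for j in range(1, N - 1):
    Ddag[j, j] = 1.0 / delta[j] - gamma / alpha
    Ddag[j, j - 1] = -np.exp(-gamma * delta[j - 1] / alpha) / delta[j]

f, g = rng.standard_normal(N), rng.standard_normal(N)
lhs = f @ S @ D @ g
rhs = g @ (Btilde - S @ Ddag) @ f
assert np.isclose(lhs, rhs)                        # summation by parts, Eq. (E59)
```

Since f and g are random, the agreement of both sides confirms the identity as a matrix equality, not merely for special vectors.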

For a matrix of dimension 4×4, for instance, the eigenvalues are 0, with multiplicity two, 1/Δ_2 − γ/α, and 1/Δ_3 − γ/α, and the corresponding eigenvectors are

\left(0,0,0,1\right)^T,   (E66)
\left(e^{-\gamma b_1/\alpha}\left(1-\frac{\gamma\Delta_2}{\alpha}\right)\left(1-\frac{\gamma\Delta_3}{\alpha}\right),\ e^{-\gamma b_2/\alpha}\left(1-\frac{\gamma\Delta_3}{\alpha}\right),\ e^{-\gamma b_3/\alpha},\ 0\right)^T,   (E67)
\left(0,\ -\left(\Delta_3-\Delta_2\right)e^{-\gamma b_2/\alpha},\ \Delta_2e^{-\gamma b_3/\alpha},\ 0\right)^T,   (E68)
\left(0,0,1,0\right)^T,   (E69)

and the complement eigenvector, for arbitrary dimension, is

\left(0,\ e^{-\gamma b_2/\alpha}\frac{\alpha}{\alpha-(\alpha\tau+\gamma)\Delta_2},\ \ldots,\ e^{-\gamma b_{N-1}/\alpha}\prod_{k=2}^{N-2}\frac{\alpha}{\alpha-(\alpha\tau+\gamma)\Delta_k},\ 0\right)^T,   (E70)

with eigenvalue τ ∈ ℂ, τ ≠ 1/Δ_k − γ/α, k = 2, 3, …, N−1; that is, τ lies in the complement of the regular eigenvalues.

The determinant of the matrix D^\dagger vanishes, and hence it has no inverse at all.

We arrived at a finite-differences derivative of a function that complies with the discrete versions of the properties that the continuous derivative has [16].

4.2 The upper diagonal matrix

The simplest case of matrices with a diagonal commutator along the direction of h is obtained when the diagonal elements of the matrix A_c vanish, a_jj = 0. That matrix is

A_2=\begin{pmatrix}0&\frac{\alpha h_1}{h_2\Delta_1}&0&\cdots&0&0\\0&0&\frac{\alpha h_2}{h_3\Delta_2}&\cdots&0&0\\\vdots&&&\ddots&&\vdots\\0&0&0&\cdots&0&\frac{\alpha h_{N-1}}{h_N\Delta_{N-1}}\\0&0&0&\cdots&-\frac{\alpha h_N}{h_{N-1}\Delta_{N-1}}&0\end{pmatrix}.   (E71)

This matrix is also a cyclic shifting matrix to the left, with rescaling, in general, and only a rescaling matrix when acting on the vector h, as if it were the matrix α diag(Δ_1^{-1}, …, Δ_{N-1}^{-1}, Δ_N^{-1}) with Δ_N = −Δ_{N−1}.

The eigenvalues of the matrix A2 are

\lambda=0\ \text{with multiplicity}\ N-2,\qquad \lambda_\pm=\pm\frac{i\alpha}{\Delta_{N-1}},   (E72)

and the corresponding eigenvectors are

x_0=\left(1,0,\ldots,0\right)^T,\qquad x_{\lambda_\pm}=\left(\frac{\Delta_{N-1}^{N-2}h_1}{\prod_{j=1}^{N-2}\Delta_j},\ (\pm i)\frac{\Delta_{N-1}^{N-3}h_2}{\prod_{j=2}^{N-2}\Delta_j},\ (\pm i)^2\frac{\Delta_{N-1}^{N-4}h_3}{\prod_{j=3}^{N-2}\Delta_j},\ \ldots,\ (\pm i)^{N-2}h_{N-1},\ (\pm i)^{N-1}h_N\right)^T.   (E73)

Additionally, there are N−3 not-well-defined vectors corresponding to the null eigenvalue, whose degeneracy is N−2. The well-defined eigenvectors give rise to a vector space of dimension three. However, if we simply solve the simultaneous linear equations, there is another set of eigenvectors: the complement eigenvalue β and the eigenvector v with entries, for dimension five, given by

\frac{v_5}{v_4}=\pm i\frac{h_5}{h_4},\quad \frac{v_4}{v_3}=\frac{h_4\beta\Delta_3}{h_3\alpha},\quad \frac{v_3}{v_2}=\frac{h_3\beta\Delta_2}{h_2\alpha},\quad \frac{v_2}{v_1}=\frac{h_2\beta\Delta_1}{h_1\alpha},\quad v_1\in\mathbb{C},   (E74)

with eigenvalue β = ±iα/Δ_4.

The determinant of the matrix A_2 vanishes, and hence it has no inverse at all.
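
A NumPy sketch (with illustrative h, mesh, and α) confirms the rescaling action on h, the spectrum of Eq. (E72), and the vanishing determinant:

```python
import numpy as np

N = 5
h = np.array([1.0, 2.0, -1.5, 0.5, 3.0])           # arbitrary, h_j != 0
b = np.array([0.0, 0.4, 1.0, 1.1, 2.0])
alpha, delta = 0.8, np.diff(b)

# A2 of Eq. (E71): A_c with vanishing diagonal
A2 = np.zeros((N, N))
for j in range(N - 1):
    A2[j, j + 1] = alpha * h[j] / (h[j + 1] * delta[j])
A2[-1, -2] = -alpha * h[-1] / (h[-2] * delta[-1])

# action on h: rescaling by alpha/Delta_j, with Delta_N = -Delta_{N-1}
expected = alpha * h / np.append(delta, -delta[-1])
assert np.allclose(A2 @ h, expected)

# spectrum: 0 with multiplicity N-2, plus +/- i*alpha/Delta_{N-1}
# (the zero eigenvalue is defective, so a loose numerical tolerance is used)
ev = np.linalg.eigvals(A2)
assert np.sum(np.abs(ev) < 1e-3) == N - 2
lam = 1j * alpha / delta[-1]
assert np.min(np.abs(ev - lam)) < 1e-6 and np.min(np.abs(ev + lam)) < 1e-6

assert np.isclose(np.linalg.det(A2), 0.0)          # no inverse exists
```

The loose tolerance on the zero cluster reflects the defectiveness of the null eigenvalue noted in the text: numerically, a defective eigenvalue of multiplicity m spreads over a radius of order (machine epsilon)^{1/m}.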


5. Conclusion

We discussed several properties of matrices, namely, pairs of matrices with a diagonal commutator when applied to a given vector, exact finite-differences derivation and integration, and complement eigenvalues and eigenvectors.

These results are relevant in quantum mechanics, in which some operators have a discrete spectrum. Our scheme might also be of interest in quantum gravity theory, where space is quantized and a discrete derivative with respect to the length variable is needed [17, 18].


Acknowledgments

A. Martínez-Pérez would like to acknowledge the support from the UNAM Postdoctoral Program (POSDOC).

References

  1. Piziak R, Odell PL. Matrix Theory: From Generalized Inverses to Jordan Form. Boca Raton: Chapman & Hall/CRC; 2007
  2. Putnam CR. Commutation Properties of Hilbert Space Operators and Related Topics. Berlin: Springer-Verlag; 1967
  3. Martínez-Pérez A, Torres-Vega G. The Inverse of the Discrete Momentum Operator. Chapter 10 in: Schrödinger Equation - Fundamentals Aspects and Potential Applications. Rijeka: IntechOpen; 2023. DOI: 10.5772/intechopen.112376
  4. Mickens RE. Nonstandard Finite Difference Models of Differential Equations. Singapore: World Scientific; 1994
  5. Mickens RE. Discretizations of nonlinear differential equations using explicit nonstandard methods. Journal of Computational and Applied Mathematics. 1999;110:181
  6. Mickens RE. Nonstandard finite difference schemes for differential equations. Journal of Difference Equations and Applications. 2010;8:823
  7. Mickens RE. Applications of Nonstandard Finite Difference Schemes. Singapore: World Scientific; 2000
  8. Mickens RE. Calculation of denominator functions for nonstandard finite difference schemes for differential equations satisfying a positivity condition. Numerical Methods for Partial Differential Equations. 2006;23:672
  9. Potts RB. Differential and difference equations. The American Mathematical Monthly. 1982;89:402-407
  10. Potts RB. Ordinary and partial differences equations. The Journal of the Australian Mathematical Society. Series B. 1986;27:488
  11. Tarasov VE. Exact discretization by Fourier transforms. Communications in Nonlinear Science and Numerical Simulation. 2016;37:31
  12. Tarasov VE. Exact discrete analogs of derivatives of integer orders: Differences as infinite series. Journal of Mathematics. 2015;2015:134842. DOI: 10.1155/2015/134842
  13. Tarasov VE. Exact discretization of Schrödinger equation. Physics Letters A. 2016;380:68. DOI: 10.1016/j.physleta.2015.10.039
  14. Martínez-Pérez A, Torres-Vega G. Exact finite differences. The derivative on non uniformly spaced partitions. Symmetry. 2017;9:217. DOI: 10.3390/sym9100217
  15. Martínez-Pérez A, Torres-Vega G. Discrete self-adjointness and quantum dynamics. Travel times. Journal of Mathematical Physics. 2021;62:012013. DOI: 10.1063/5.0021565
  16. Gitman DM, Tyutin IV, Voronov BL. Self-Adjoint Extensions in Quantum Mechanics. General Theory and Applications to Schrödinger and Dirac Equations with Singular Potentials. New York: Birkhäuser; 2012
  17. Bishop M, Contreras J, Singleton D. "The more things change the more they stay the same" Minimum lengths with unmodified uncertainty principle and dispersion relation. International Journal of Modern Physics D. 2022;31:2241002
  18. Bishop M, Contreras J, Singleton D. Reconciling a quantum gravity minimal length with lack of photon dispersion. Physics Letters B. 2021;816:136265
