We introduce and study a matrix which has the exponential function as one of its eigenvectors. We realize that this matrix represents a finite differences derivation of vectors defined on a partition. This matrix leads to new expressions for finite differences derivatives which are exact for the exponential function. We find some properties of this matrix, of the induced derivatives, and of its inverse. We provide expressions for the derivative of a product, of a ratio, and of the inverse of vectors, and we also find the equivalent of the summation by parts theorem of continuous functions. This matrix could be of interest to discrete quantum mechanics theory.
Armando Martínez Pérez
Physics Department, Cinvestav, México City, México
Gabino Torres Vega*
Physics Department, Cinvestav, México City, México
*Address all correspondence to: gabino@fis.cinvestav.mx
1. Introduction
We are interested in matrices which are a local, as well as a global, exact discrete representation of operations on functions of a continuous variable, so that there is congruency between the continuous and the discrete operations and properties of functions. Usual finite differences methods [1, 2, 3, 4] become exact only in the limit of zero separation between the points of the mesh. Here, we are interested in having exact representations of operations and functions for finite separation between the mesh points.
The difference between our method and the usual finite differences method is the quantity that appears in the denominator of the definition of the derivative. An appropriate choice of that denominator makes it possible for the finite differences expressions for the derivative to give exact results for the exponential function. We concentrate on the derivative operation, and we define a matrix which represents the exact finite differences derivation on a local and on a global scale. The inverse of this matrix is just the integration operation. These are interesting subjects in themselves, but they are also of interest in the quantum physics realm [5, 6, 7].
In this chapter, we will consider only the case of the derivative and the integration of the exponential function.
where $v \in \mathbb{C}$ (it can be purely real or purely imaginary), $\Delta \in \mathbb{R}^{+}$, and $\chi_v(\Delta) \coloneqq \sinh(v\Delta)/v \approx \Delta + v^{2}\Delta^{3}/6 + O(\Delta^{5})$. This function $\chi_v(\Delta)$ is well defined for $v = 0$, with value $\chi_0(\Delta) = \Delta$. This matrix is interesting because, as we will see below, it represents a derivation on a partition. A rescaled matrix $\bar D_N$ is defined as
Then, the eigenvalues of the derivative matrix $\bar D_N$ are $\lambda_1 = z - 1/z = e^{v\Delta} - e^{-v\Delta} = 2\sinh(v\Delta)$ and $\lambda_m = -\alpha_m$, where $\alpha_m$ is the $m$-th root of the $N$-th Fibonacci polynomial, which is a polynomial of degree $N-1$ [10, 11].
The system of simultaneous equations for the eigenvector $e_m^T = (e_{m,1}, e_{m,2}, \dots, e_{m,N})$ corresponding to $\lambda_m$ can be put in a form similar to the recursion relationship for the Fibonacci polynomials, i.e.,
$$e_{m,2} = \lambda_m e_{m,1} + \frac{e_{m,1}}{z}, \tag{10}$$
$$e_{m,j+1} = \lambda_m e_{m,j} + e_{m,j-1}, \qquad 1 < j < N, \tag{11}$$
$$z\, e_{m,N} = \lambda_m e_{m,N} + e_{m,N-1}. \tag{12}$$
This set of recursion relationships can be written as the matrix equation
i.e., the j-th component of the m-th eigenvector is
$$e_{m,j} = \left[ F_j(\lambda_m) + \frac{F_{j-1}(\lambda_m)}{z} \right] e_{m,1}, \qquad j = 1, 2, \dots, N. \tag{17}$$
For the case of the eigenvalue $\lambda_1 = z - 1/z$, we can rewrite Eq. (17) by noticing that if we let $x = w - w^{-1}$ ($w \in \mathbb{C}$), then $F_n(x) + F_{n-1}(x)/w = w^{n-1}$ for $n = 1, 2, \dots$. This can be proved by induction as follows. For $n = 1$, the equality is immediately verified. First, suppose that the equality holds for $n \le k$. Next, we compute the right-hand side of the equality for $k+1$. Substituting $F_{k-1}(x) = w\left(w^{k-1} - F_k(x)\right)$ in the expression for $k+1$, and using the properties of the Fibonacci polynomials, we obtain
Therefore, according to Eqs. (17) and (18), the eigenvector for the eigenvalue $\lambda_1 = 2\sinh(v\Delta)$ takes the form $e_1 = c\,(1, z, \dots, z^{N-1})^T$, where $c$ is a normalization constant. We can take advantage of the normalization constant and write
$$e_1 = c\left(e^{vq_1}, e^{vq_2}, \dots, e^{vq_N}\right)^T, \tag{19}$$
with eigenvalue $\lambda_1 = v$ (in the original scaling, i.e., as an eigenvalue of the matrix $D_N$), where $q_1$ is an arbitrary constant and $q_j = q_1 + (j-1)\Delta$. This means that the exponential function is an eigenvector of the derivative matrix, which is a global representation of the derivative on the partition $\{q_1, q_2, \dots, q_N\}$. Recall that the exponential function is an eigenfunction of the derivative of functions of a continuous variable.
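As a quick numerical check (a minimal sketch of ours, not taken from the chapter; it assembles $D_N$ from the finite differences rules written out below in Eqs. (21)-(24), and the names derivative_matrix, N, v, delta are our own), one can verify that the sampled exponential is an eigenvector of $D_N$ with eigenvalue $v$:

```python
import numpy as np

def derivative_matrix(N, v, delta):
    """D_N as read off from Eqs. (21)-(24): centered differences divided by 2*chi_v(Delta)."""
    chi = np.sinh(v * delta) / v                      # chi_v(Delta) = sinh(v*Delta)/v
    D = np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
    D[0, 0] = -np.exp(-v * delta)                     # first row uses g_0 = e^{-v Delta} g_1
    D[-1, -1] = np.exp(v * delta)                     # last row uses g_{N+1} = e^{v Delta} g_N
    return D / (2.0 * chi)

N, v, delta = 7, 0.3, 0.1
q = 1.0 + delta * np.arange(N)                        # q_j = q_1 + (j - 1) * Delta
g = np.exp(v * q)
print(np.allclose(derivative_matrix(N, v, delta) @ g, v * g))   # True: eigenvalue v
```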
The remaining eigenvectors have eigenvalues equal to the negatives of the roots of the $N$-th Fibonacci polynomial, $\lambda_m = -x_m$, $m = 1, 2, \dots, N-1$, and have the form
Let us consider a partition $P_N \coloneqq \{q_i\}_{i=1}^{N}$, $q_i \in \mathbb{R}$, of $N$ equally spaced points $q_i$ of the interval $[a, b] \subset \mathbb{R}$, $a < b$, with the same separation $\Delta = (b - a)/(N - 1)$ between them.
The rows of the result of the multiplication of the derivative matrix $D_N$ and a vector $g \coloneqq (g_1, g_2, \dots, g_N)^T$ are
$$(D_N\, g)_j = \frac{g_{j+1} - g_{j-1}}{2\chi_v(\Delta)}, \qquad j = 1, 2, \dots, N, \tag{21}$$
where $g_0 \coloneqq e^{-v\Delta} g_1$ and $g_{N+1} \coloneqq e^{v\Delta} g_N$. We recognize these expressions as the second-order finite differences derivatives of the function $g(x)$ at the mesh points, but instead of dividing by twice the separation $\Delta$ between the mesh points, we divide by twice the function $\chi_v(\Delta)$. This function makes it possible for the exponential function to be an eigenvector of the matrix $D_N$.
The values $g_0 = e^{-v\Delta} g_1$ and $g_{N+1} = e^{v\Delta} g_N$ extend the original interval $[a, b]$ to $[a - \Delta, b + \Delta]$, so that the second-order derivatives are well defined at all the points of the initial partition, including the edges of the interval. When $g(x)$ is the exponential function, we have $g_0 = e^{v(x_1 - \Delta)}$ and $g_{N+1} = e^{v(x_N + \Delta)}$, i.e., they are the values of the exponential function evaluated at the points of the extension.
Thus, we define finite differences derivatives for any function $g(x)$ defined on the partition as
$$(Dg)_1 = \frac{g_2 - e^{-v\Delta} g_1}{2\chi_v(\Delta)}, \tag{22}$$
$$(Dg)_j = \frac{g_{j+1} - g_{j-1}}{2\chi_v(\Delta)}, \tag{23}$$
$$(Dg)_N = \frac{e^{v\Delta} g_N - g_{N-1}}{2\chi_v(\Delta)}, \tag{24}$$
to be used on the first, central, and last points of the partition.
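The three rules can also be applied directly to a sampled vector without building the matrix. The following sketch (our own illustration; the function and variable names are ours) implements Eqs. (22)-(24) and shows that the result is exact for the exponential but only approximate for other functions:

```python
import numpy as np

def fd_derivative(g, v, delta):
    """Apply Eqs. (22)-(24) to a vector g sampled on the partition."""
    g = np.asarray(g, dtype=float)
    chi = np.sinh(v * delta) / v
    dg = np.empty_like(g)
    dg[0] = (g[1] - np.exp(-v * delta) * g[0]) / (2 * chi)        # Eq. (22), first point
    dg[1:-1] = (g[2:] - g[:-2]) / (2 * chi)                       # Eq. (23), central points
    dg[-1] = (np.exp(v * delta) * g[-1] - g[-2]) / (2 * chi)      # Eq. (24), last point
    return dg

v, delta = 0.5, 0.05
q = delta * np.arange(9)
print(np.allclose(fd_derivative(np.exp(v * q), v, delta), v * np.exp(v * q)))       # exact
print(np.allclose(fd_derivative(q**2, v, delta)[1:-1], 2 * q[1:-1], atol=1e-3))     # only O(Delta^2) accurate
```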
The determinant of the derivative matrix is not always zero; in fact, it is [see Eqs. (4) and (9)]
$$|\bar D_N| = 2\sinh(v\Delta)\, F_N(0). \tag{25}$$
But, since $F_{2j+1}(0) = 1$ and $F_{2j}(0) = 0$, then
$$|\bar D_{2j}| = 0, \qquad |\bar D_{2j+1}| = 2\sinh(v\Delta). \tag{26}$$
Hence, only the matrices with an odd dimension have an inverse.
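A hedged numerical check of Eq. (26): the sketch below assumes that the rescaled matrix is $\bar D_N = 2\chi_v(\Delta)\,D_N$, so that its nonzero entries are $\pm 1$, $-e^{-v\Delta}$, and $e^{v\Delta}$; this rescaling is our reading of the text, not an equation quoted from it.

```python
import numpy as np

def rescaled_matrix(N, v, delta):                     # assumed: bar(D)_N = 2*chi_v(Delta)*D_N
    B = np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
    B[0, 0], B[-1, -1] = -np.exp(-v * delta), np.exp(v * delta)
    return B

v, delta = 0.3, 0.1
for N in range(2, 9):
    print(N, round(np.linalg.det(rescaled_matrix(N, v, delta)), 6))
# even N give 0; odd N give 2*sinh(v*delta) = 0.060009...
```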
Next, we will derive some properties of these finite differences derivatives.
3.1. The derivative of a product of vectors
There are two equivalent expressions for the finite differences derivative of a product of vectors defined on the partition. A set of such expressions is
These derivatives also have the exponential function as one of their eigenvectors, and we can generate expressions for higher derivatives with higher powers of the derivative matrix.
3.4. The derivative of the inverse of functions
It is possible to give an expression for the derivative of $h^{-1}(q)$, including the edge points. For the first point, we have
The derivatives for the first and last points coincide with the derivative for central points when Δ=0.
3.5. The derivative of the ratio of functions
Now, we take advantage of the derivative of the inverse of a function and of the derivative of a product of functions, and obtain the derivative of a ratio of functions,
expressions which are very similar to the continuous variable results. Again, in the limit $\Delta \to 0$, these expressions coincide with each other and reduce to the corresponding expressions for continuous variables.
3.6. The local inverse operation of the derivative
The inverse operation to the finite differences derivative, at a given point, is the summation with weights $2\chi_v(\Delta)$,
$$\sum_{j=n}^{m} 2\chi_v(\Delta)\,(Dg)_j = \sum_{j=n}^{m} \left( g_{j+1} - g_{j-1} \right) = g_{m+1} + g_m - g_n - g_{n-1}. \tag{46}$$
This equality is the equivalent of the usual result for continuous functions, $\int_a^x dy\, \frac{dg(y)}{dy} = g(x) - g(a)$. Note that the inverse at the local level is a bit different from the expressions obtained by means of the inverse matrix $S$ (see below) of the derivative matrix $D$. When dealing with matrices, there are no boundary terms to worry about.
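A small sketch (ours) of the telescoping in Eq. (46), using only the central rule, Eq. (23), so that no edge rules enter:

```python
import numpy as np

v, delta = 0.4, 0.1
chi = np.sinh(v * delta) / v
q = delta * np.arange(12)
g = np.cos(q) + 0.3 * q                        # any vector defined on the partition
Dg = np.zeros_like(g)
Dg[1:-1] = (g[2:] - g[:-2]) / (2 * chi)        # central rule, Eq. (23); edges not needed here

n, m = 2, 9                                    # interior indices only
lhs = np.sum(2 * chi * Dg[n:m + 1])
rhs = g[m + 1] + g[m] - g[n] - g[n - 1]
print(np.isclose(lhs, rhs))                    # True: the sum telescopes
```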
3.7. An eigenfunction of the summation operation
Because the exponential function is an eigenfunction of the finite differences derivative and according to Eq. (46), we can say that
4. The commutator between coordinate and derivative
Let us determine the commutator, from a local point of view first, between the coordinate (the points of the partition $P_N$) and the finite differences derivative. We begin with the derivative of $q$,
$$(Dq)_j = \frac{q_{j+1} - q_{j-1}}{2\chi_v(\Delta)} = \frac{\Delta}{\chi_v(\Delta)} \approx 1 - \frac{v^2}{6}\Delta^2. \tag{51}$$
Hence, the finite differences derivative of the product $q\,g(q)$ is
$$\big(D(qg)\big)_j = q_{j+1}(Dg)_j + g_{j-1}(Dq)_j = q_{j+1}(Dg)_j + g_{j-1}\frac{\Delta}{\chi_v(\Delta)}, \tag{52}$$
i.e.,
$$\big(D(qg)\big)_j - q_{j+1}(Dg)_j = g_{j-1}\frac{\Delta}{\chi_v(\Delta)}. \tag{53}$$
This is the finite differences version of the commutator between the coordinate $q$ and the finite differences derivative $D$; it reduces to the identity operator in the small $\Delta$ limit, as expected. An equivalent expression is
$$\big(D(qg)\big)_j - q_{j-1}(Dg)_j = g_{j+1}\frac{\Delta}{\chi_v(\Delta)}. \tag{54}$$
This is the finite differences version of the commutator between coordinate and derivative; the right-hand side of this equality becomes $g_j$ in the small $\Delta$ limit, i.e., the commutator becomes the identity operator.
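Both local forms of the commutator, Eqs. (53) and (54), can be checked at an interior point with a few lines (an illustration of ours; only the central rule, Eq. (23), is used):

```python
import numpy as np

v, delta = 0.25, 0.1
chi = np.sinh(v * delta) / v
q = 1.0 + delta * np.arange(9)
g = np.sin(q)                                          # arbitrary data on the partition
j = 4                                                  # an interior point
D = lambda f: (f[j + 1] - f[j - 1]) / (2 * chi)        # central rule, Eq. (23), at index j

print(np.isclose(D(q * g) - q[j + 1] * D(g), g[j - 1] * delta / chi))   # Eq. (53)
print(np.isclose(D(q * g) - q[j - 1] * D(g), g[j + 1] * delta / chi))   # Eq. (54)
```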
4.1. The commutator between the derivative and coordinate matrices
The commutator between the partition and the finite differences derivative can also be calculated from a global point of view, using the corresponding matrices. Let $Q_N$ be the diagonal matrix which represents the coordinate partition,
$$Q_N \coloneqq \mathrm{diag}(q_1, q_2, \dots, q_N). \tag{55}$$
Then, the commutator between the derivative matrix and the coordinate matrix is
This is a kind of nearest neighbors’ average operator, inside the interval. The small Δ limit is just
$$[D_N, Q_N] \approx I, \tag{57}$$
where $I$ is the identity matrix with its first and last diagonal elements replaced by $1/2$. Thus, the coordinate and derivative matrices are finite differences conjugates of each other.
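The following sketch (ours; it builds $D_N$ from Eqs. (21)-(24), repeating the small derivative_matrix helper so the snippet runs on its own) applies the commutator $[D_N, Q_N]$ to a slowly varying vector and shows the behavior of Eq. (57): approximately the identity inside the interval and one half at the two edges:

```python
import numpy as np

def derivative_matrix(N, v, delta):                    # same construction as in the earlier sketch
    chi = np.sinh(v * delta) / v
    D = np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
    D[0, 0], D[-1, -1] = -np.exp(-v * delta), np.exp(v * delta)
    return D / (2.0 * chi)

N, v, delta = 11, 0.2, 1e-3
q = 1.0 + delta * np.arange(N)
D, Q = derivative_matrix(N, v, delta), np.diag(q)
C = D @ Q - Q @ D                                      # the commutator [D_N, Q_N]
g = np.cos(q)                                          # a slowly varying test vector
print(np.allclose((C @ g)[1:-1], g[1:-1], atol=1e-5))  # acts as the identity inside
print((C @ g)[0] / g[0], (C @ g)[-1] / g[-1])          # ~0.5 at both edges
```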
Since the determinant of the derivative matrix $D_N$ is not always zero, we expect an inverse of it to exist. At the local level, the inverse of the finite differences derivation is the summation, as was found in Eq. (46). In this section, we determine the inverse of the derivative matrix, and we find that it is a global finite differences integration operation.
Once we know the eigenvalues and eigenvectors of the derivative matrix $D_N$, it turns out that we also know the eigenvectors and eigenvalues of the inverse matrix, when it exists. In fact, the equality $D_N e_m = \lambda_m e_m$, with $\lambda_m \neq 0$, implies that
This matrix represents an integration on the partition, with an exact value when it is applied to the exponential function $e^{vq}$ on the partition. When applied to an arbitrary vector $g = (g_1, g_2, \dots, g_N)^T$, we obtain formulas for the finite differences integration, including the edge points,
where $N = 2M + 1$. These are new formulas for discrete integration on a partition of equally separated points, with the characteristic that they are exact for the exponential function $e^{vq}$.
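As a numerical illustration (ours; the matrix is built from Eqs. (21)-(24) and inverted with numpy rather than with the closed formulas of this section), for odd $N$ the inverse applied to the sampled exponential returns $e^{vq}/v$ exactly:

```python
import numpy as np

def derivative_matrix(N, v, delta):                    # same construction as in the earlier sketch
    chi = np.sinh(v * delta) / v
    D = np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
    D[0, 0], D[-1, -1] = -np.exp(-v * delta), np.exp(v * delta)
    return D / (2.0 * chi)

N, v, delta = 9, 0.3, 0.1                              # odd N = 2M + 1, so D_N is invertible
q = delta * np.arange(N)
S = np.linalg.inv(derivative_matrix(N, v, delta))      # the integration matrix
print(np.allclose(S @ np.exp(v * q), np.exp(v * q) / v))   # True: eigenvalue 1/v
```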
6. Transformation between coordinate and derivative representations
Since one of the eigenvalues of the derivative matrix is a continuous variable, we can talk of conjugate functions with a continuous argument $v$. The relationship between discrete vectors on a partition $\{q_i\}$ and functions of a continuous argument $v$ makes use of continuous and discrete Fourier-type transformations, a wavelet [12]. If we have a function $h$ of continuous argument $v$, a conjugate vector on the partition $\{q_i\}$ is defined through the continuous Fourier-type transform $F$ as
$$F h(q_j) \coloneqq \frac{1}{\sqrt{2L\Delta}} \int_{-L/2}^{L/2} e^{-i q_j v}\, h(v)\, dv, \tag{65}$$
and, vice versa, a continuous variable function is defined with the help of the discrete Fourier-type transform $\bar F$ as
$$\bar F g(v) \coloneqq \frac{1}{\sqrt{2L\Delta}} \sum_{j=-N+1}^{N-1} 2\chi_v(\Delta)\, e^{i q_j v}\, g_j. \tag{66}$$
Assuming that the involved integrals converge absolutely, we can say that
The function $K(q_k - q_j; L, \Delta)$ is an approximation to the Kronecker delta $\delta_{k,j}$. The function $\mathrm{shi}$ is the hyperbolic sine integral, $\mathrm{shi}(z) = \int_0^z dt\, \sinh(t)/t$. A plot of it is shown in Figure 1.
Figure 1. A plot of the kernel function $K(x; a, b)$ with $a = 1$ and $b = 0.1$. This function is an approximation to the Kronecker delta $\delta_{x,0}$.
The ratio of sine functions in this expression is an approximation to a series of Dirac delta functions located at $(v - u)\Delta = k\pi$, $k \in \mathbb{N}$. Thus, the operations $F$ and $\bar F$ are finite differences inverses of each other.
6.1. The discrete Fourier transform of the finite differences derivative of a vector
Hence, as is usual, the Fourier transform of the derivative of a function $h(v)$ of the continuous variable $v$ is equal to $iq_j$ times the Fourier transform of the function, plus boundary terms.
We proceed with a brief discussion of the relationship between the derivative matrix $D_N$ and an important concept in quantum mechanics: the concept of self-adjoint operators [8, 9]. In particular, we focus on the momentum operator, whose continuous coordinate representation (operation) is given by $-i\,d/dq$, i.e., a derivative times $-i$, in the case of an infinite-dimensional Hilbert space.
In the finite-dimensional complex vector space, each vector defines a sequence $\{g_i\}_{i=1}^{N}$ of complex numbers such that $\sum_i |g_i|^2 < \infty$. A transformation $A$ is usually called Hermitian when its entries $a_{i,j}$ are such that $a_{i,j} = a_{j,i}^*$ ($*$ denotes the complex conjugate). Our matrix $D_N$ is related to an approximation of the derivative (see Section 3) which uses second-order finite differences. Therefore, we can ask whether the matrix $-iD_N$ is also Hermitian.
Let $P_N = -iD_N$ and let $v = ix$ be the eigenvalue of $D_N$, where $x \in \mathbb{R}$ is a free parameter; then the corresponding eigenvalue of $-iD_N$ is indeed the real value $x$, which is one of the properties of a Hermitian matrix, as is also the case in infinite-dimensional space (for the Hilbert space on a finite interval these values are discrete, while for the Hilbert space on the real line they form the continuous spectrum instead of discrete eigenvalues). Another characteristic of $-iD_N$ is that the eigenvector corresponding to $x$ is the same exponential function which is the eigenfunction of $-i\,d/dx$ (see Section 2).
Furthermore, let $P_N^\dagger$ denote the adjoint of $P_N$. If we restrict our attention to the off-diagonal entries $(P_N)_{i,j} = -i\,(D_N)_{i,j}$, it is fulfilled that $(P_N^\dagger)_{i,j} = \left(-i\,d_{j,i}\right)^* = -i\,d_{i,j} = (P_N)_{i,j}$ (noticing that, with $v = ix$, $\chi_v(\Delta) = \sin(x\Delta)/x \in \mathbb{R}$). Even more, if we do not care about the two entries $d_{i,i}$ for $i = 1, N$, we have a Hermitian matrix. Finally, as was seen in Section 4, we can say that $P_N$ can be considered as a suitable approximation to the conjugate matrix of the coordinate matrix.
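A short numerical confirmation (our sketch, with $D_N$ built from Eqs. (21)-(24)): for a purely imaginary $v = ix$, the matrix $P_N = -iD_N$ differs from its adjoint only in the two corner entries, and the sampled exponential is an eigenvector with the real eigenvalue $x$:

```python
import numpy as np

def derivative_matrix(N, v, delta):                    # same construction, complex entries
    chi = np.sinh(v * delta) / v                       # equals sin(x*Delta)/x for v = i*x
    D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)).astype(complex)
    D[0, 0], D[-1, -1] = -np.exp(-v * delta), np.exp(v * delta)
    return D / (2.0 * chi)

N, x, delta = 7, 0.8, 0.1
P = -1j * derivative_matrix(N, 1j * x, delta)          # P_N = -i D_N with v = i x
print(np.count_nonzero(~np.isclose(P - P.conj().T, 0)))    # 2: only the two corner entries
q = delta * np.arange(N)
print(np.allclose(P @ np.exp(1j * x * q), x * np.exp(1j * x * q)))   # real eigenvalue x
```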
In conclusion, we have introduced a matrix which complies with the properties that a Hermitian matrix should have, except for two of its entries. Besides, our partition provides congruency between the discrete, continuous, and matrix treatments of the exponential function and of its properties.
References
1. Boole G. A Treatise on the Calculus of Finite Differences. New York: Cambridge University Press; 2009 (first published in 1860)
2. Harmuth HF, Meffert B. Dogma of the continuum and the calculus of finite differences in quantum physics. In: Advances in Imaging and Electron Physics. Vol. 137. San Diego: Elsevier Academic Press; 2005
3. Jordan C. Calculus of Finite Differences. 2nd ed. New York: Chelsea Publishing Company; 1950
4. Richardson CH. An Introduction to the Calculus of Finite Differences. Toronto: D. Van Nostrand; 1954
5. Santhanam TS, Tekumalla AR. Quantum mechanics in finite dimensions. Foundations of Physics. 1976;6:583
6. Pérez AM, Torres-Vega G. Translations in quantum mechanics revisited. The point spectrum case. Canadian Journal of Physics. 2016;94:1365-1368. DOI: 10.1139/cjp-2016-0373
7. de la Torre AC, Goyeneche D. Quantum mechanics in finite-dimensional Hilbert space. American Journal of Physics. 2003;71:49
8. Gitman DM, Tyutin IV, Voronov BL. Self-Adjoint Extensions in Quantum Mechanics. General Theory and Applications to Schrödinger and Dirac Equations with Singular Potentials. New York: Springer; 2012
9. Schmüdgen K. Unbounded Self-Adjoint Operators on Hilbert Space. In: Graduate Texts in Mathematics. Vol. 265. Heidelberg: Springer; 2012
10. Hoggatt VE Jr, Bicknell M. Roots of Fibonacci polynomials. The Fibonacci Quarterly. 1973;11:271
11. Li Y. Some properties of Fibonacci and Chebyshev polynomials. Advances in Difference Equations. 2015;2015:118
12. Kaiser G. A Friendly Guide to Wavelets. Birkhäuser; 1994, 2011. ISBN: 978-0-8176-8110-4
Written By
Armando Martínez Pérez and Gabino Torres Vega
Submitted: November 22nd, 2017. Reviewed: January 24th, 2018. Published: August 29th, 2018.