Open access peer-reviewed chapter

# Eigenvalue Problems

By Aleksandra Kostić

Submitted: October 8th 2015. Reviewed: January 18th 2016. Published: July 6th 2016.

DOI: 10.5772/62267

## Abstract

Differential equations and systems of differential equations are frequently used in the natural sciences and engineering. Their solution leads to eigenvalue problems, which therefore occupy an important place in linear algebra. In this chapter we consider the eigenvalue problem, both linear and quadratic. For the linear eigenvalue problem, we put emphasis on the QR algorithm for the unsymmetrical case and on the minmax characterization in the symmetric case. For the quadratic eigenvalue problem, we consider linearization and variational characterization. We illustrate everything with practical examples.

### Keywords

• QR algorithm
• min max principle
• Rayleigh functional
• linearization
• eigenvalues

## 1. Introduction

Every mechanical system has the capacity to vibrate. An analogous phenomenon can be observed in electrical systems in the form of oscillating circuits. Neglecting vibration terms can lead to resonance, which on the one hand can have catastrophic consequences, such as the collapse of a bridge, and on the other hand can be exploited positively. The mathematical description of a vibrating state leads to a differential equation or a system of differential equations. This problem is further transformed into an eigenvalue problem. This is the motivation for considering eigenvalue problems.

This chapter is organized in two parts: the linear eigenvalue problem and the quadratic eigenvalue problem.

## 2. The linear eigenvalue problem

This section considers the linear eigenvalue problem of finding a scalar λ such that the linear system

Ax = λx    (1)

has a nontrivial solution x, where A ∈ ℂ(n,n). The scalar λ is called an eigenvalue of A, and x is an eigenvector of A corresponding to λ. The set of all eigenvalues of the matrix A is called the spectrum of A and is denoted σ(A).

The literature also discusses left eigenvectors. In our considerations, we restrict ourselves to the right eigenvectors defined above.

This section is organized as follows:

1. Basic properties (characteristic polynomial, bases for eigenspaces, eigenvalues and invertibility, diagonalization)

2. QR Algorithm (The QR algorithm is used for determining all the eigenvalues of a matrix. Today, it is the best method for solving the unsymmetrical eigenvalue problems.)

3. Mathematical background for the Hermitian (symmetric) case (Rayleigh quotient, minmax principle of Poincaré, minmax principle of Courant-Fischer)

4. Physical Background

5. General linear eigenvalue problem

Ax = λBx    (2)

### 2.1. Basic properties

In this section we outline the basic concepts and theorems, which will allow us to understand further elaboration. The eigenvalue problem is related to the homogeneous system of linear equations, as we will see in the following discussion.

To find the eigenvalues of an n × n matrix A, we rewrite (1) as

Ax = λIx    (3)

or, equivalently,

(A − λI)x = 0.    (4)

This is a homogeneous system, and the system (4) has a nontrivial solution if and only if

det(A − λI) = 0.    (5)

This is called the characteristic equation of A; the scalars satisfying this equation are the eigenvalues of A. When expanded, the determinant det(A − λI) is a polynomial p in λ, called the characteristic polynomial of A.

The following theorem gives the link between the characteristic polynomial of the matrix A and its eigenvalues.

Theorem 2.1.1. Equivalent Statements

If A is an n × n matrix and λ is a complex number, then the following are equivalent

1. λ is an eigenvalue of A.

2. The system of equations (A − λI)x = 0 has nontrivial solutions.

3. There is a nonzero vector x in ℂn such that Ax = λx.

4. λ is a solution of the characteristic equation det(A − λI) = 0.

Some coefficients of the characteristic polynomial of A have a specific shape. The following theorem gives the information about it.

Theorem 2.1.2.

If A is an n × n matrix, then the characteristic polynomial p(λ) of A has degree n, the coefficient of λ^n is (−1)^n, the coefficient of λ^(n−1) is (−1)^(n−1) trace(A), and the constant term is det(A), where trace(A) := a11 + a22 + ⋯ + ann.

For some structured matrices, the eigenvalues can be read off directly, as shown in Theorem 2.1.3.

Theorem 2.1.3.

If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries of the main diagonal of A.

The Cayley-Hamilton theorem is one of the most important statements in linear algebra. It states:

Theorem 2.1.4.

Substituting the matrix A for λ in the characteristic polynomial of A yields the zero matrix, i.e., p(A) = 0.

There are a number of methods for determining eigenvalues. Some methods find all the eigenvalues, others just a few of them. Methods that first determine the coefficients of the characteristic polynomial and then determine the eigenvalues by solving the resulting algebraic equation are rarely implemented, because they are numerically unstable: the computed coefficients of the characteristic polynomial are burdened with rounding errors, and this numerical instability causes large errors in the eigenvalues. For this reason, the characteristic polynomial has mainly theoretical significance. Methods based on the direct application of the characteristic polynomial are applied in practice only when the characteristic polynomial is well conditioned. Also, for some structured matrices, we can use the characteristic polynomial without computing its coefficients directly. The following example describes a class of such matrices.

Example 2.1.1 An example of structured matrices for which the characteristic polynomial can be used to determine the eigenvalues are Toeplitz matrices. A Toeplitz matrix, denoted Tn, is a matrix with constant diagonals. If the Toeplitz matrix is symmetric and positive definite, the recursive relation pn(λ) = pn−1(λ)βn−1(λ) holds, where pn and pn−1 are the characteristic polynomials of the matrices Tn and Tn−1, respectively, and βn−1 is the Schur-Szegö parameter of the Yule-Walker system. This recursive relation enables working with the characteristic polynomial without computing its individual coefficients. More information can be found in [1].

The following definitions introduce two important terms: the geometric multiplicity of λ0 and the algebraic multiplicity of λ0.

The eigenvectors corresponding to λ are the nonzero vectors in the solution space of (A − λI)x = 0. We call this solution space the eigenspace of A corresponding to λ.

Definition 2.1.1.

If λ0 is an eigenvalue of an n × n matrix A, then the dimension of the eigenspace corresponding to λ0 is called the geometric multiplicity of λ0, and the number of times that λ − λ0 appears as a factor in the characteristic polynomial of A is called the algebraic multiplicity of λ0.

Eigenvalues and eigenvectors have some specific features that are easy to prove.

The following theorem lists some obvious properties of the eigenvalues of a matrix A and the corresponding eigenvectors:

Theorem 2.1.5.

1. If μ ≠ 0 is a complex number, λ is an eigenvalue of the matrix A, and x ≠ 0 a corresponding eigenvector, then μx is also a corresponding eigenvector.

2. If k is a positive integer, λ is an eigenvalue of the matrix A, and x ≠ 0 a corresponding eigenvector, then λ^k is an eigenvalue of A^k and x is a corresponding eigenvector.

3. The matrices A and A^T have the same eigenvalues.

Invertible matrices are important in linear algebra. From the eigenvalue problem, we can easily conclude whether the matrix A is invertible or not. What is more, the eigenvalues of an invertible matrix A immediately give the eigenvalues of the matrix A^(−1). Because of that, the following theorem summarizes some properties of invertible matrices.

Theorem 2.1.6.

If A is an n × n matrix, then the following are equivalent.

1. A is invertible.

2. λ = 0 is not an eigenvalue of A

3. If λ is an eigenvalue of the invertible matrix A, and x ≠ 0 a corresponding eigenvector, then 1/λ is an eigenvalue of A^(−1) and x is a corresponding eigenvector.

4. det(A) ≠ 0.

5. A has rank n.

6. Ax = 0 has only the trivial solution.

7. Ax = b has exactly one solution for every n × 1 vector b.

8. The column vectors of A form a basis for ℂ^n.

9. The row vectors of A form a basis for ℂ^n.

10. ATA is invertible.

The problem of finding a basis of ℂ^n consisting of eigenvectors is very important in linear algebra. Because of that, in this section we consider the following two equivalent problems:

The Eigenvector Problem. Given an n × n matrix A, does there exist a basis for ℂ^n consisting of eigenvectors of A?

The Diagonalization Problem (Matrix Form). Given an n × n matrix A, does there exist an invertible matrix P such that P^(−1)AP is a diagonal matrix?

The latter problem suggests the following terminology.

Definition 2.1.3. Two square matrices A and B are called similar if there is an invertible matrix P such that B = P^(−1)AP. The transition from the matrix A to the matrix B is called a similarity transformation.

The importance of similar matrices can be seen in the following theorem:

Theorem 2.1.7. Similar matrices A and B have the same characteristic polynomial. They have the same eigenvalues, including the geometric multiplicity and the algebraic multiplicity of each eigenvalue λ0.

Based on previous definitions, we can define the term diagonalizable as follows:

Definition 2.1.4. A square matrix A is called diagonalizable if it can be transformed into a diagonal form by a similarity transformation.

That the above two problems are equivalent is obvious from the following theorem.

Theorem 2.1.8.

If A is an n × n matrix, then the following are equivalent.

1. A is diagonalizable.

2. A has n linearly independent eigenvectors.

The following algorithm diagonalizes a matrix.

Algorithm for Diagonalizing a Matrix

1. Find n linearly independent eigenvectors of A, denoted x1, x2, ⋯, xn.

2. Form the matrix P having x1, x2, ⋯, xn as its column vectors.

3. The matrix P^(−1)AP will then be diagonal with λ1, λ2, ⋯, λn as successive diagonal entries, where λi is the eigenvalue corresponding to xi, for i = 1, 2, ⋯, n.
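The steps above can be carried out numerically with NumPy; the matrix below is an illustrative choice with distinct eigenvalues (2 and 5), so n linearly independent eigenvectors exist:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-2: np.linalg.eig returns the eigenvalues and a matrix P whose
# columns are the corresponding (linearly independent) eigenvectors.
eigvals, P = np.linalg.eig(A)

# Step 3: P^{-1} A P is diagonal, with the eigenvalues on the diagonal.
D = np.linalg.inv(P) @ A @ P

assert np.allclose(D, np.diag(eigvals))
```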

Theorem 2.1.9.

If x1, x2, ⋯, xk are eigenvectors of A corresponding to distinct eigenvalues λ1, λ2, ⋯, λk, then {x1, x2, ⋯, xk} is a linearly independent set.

As a consequence of Theorem 2.1.8 and Theorem 2.1.9, we obtain the following important result.

Theorem 2.1.10.

If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.

There are matrices that have repeated eigenvalues and yet are diagonalizable. The broadest such class of matrices are the normal matrices, which we introduce in the following definition.

Definition 2.1.5. A matrix A ∈ ℂ(n,n) is called normal if A^H A = A A^H holds.

A more general characterization of diagonalizable matrices is given in the following theorem.

Theorem 2.1.11.

If A is a square matrix, then:

1. For every eigenvalue of A the geometric multiplicity is less than or equal to the algebraic multiplicity.

2. A is diagonalizable if and only if the geometric multiplicity is equal to the algebraic multiplicity for every eigenvalue.

### 2.2. QR algorithm

In this section we present the QR algorithm. In numerical linear algebra, the QR algorithm is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm is a factorization method, because it is based on a matrix decomposition. The first factorization method, the LR algorithm, was developed by H. Rutishauser in 1958. Due to its many shortcomings, the LR algorithm is rarely used today (see Wilkinson's monograph). The better factorization method is the QR algorithm. Its basic form was developed independently in 1962 by J. G. F. Francis (England) and by Vera N. Kublanovskaya (USSR). Today, it is the best method for solving the unsymmetrical eigenvalue problem when one wants to determine all the eigenvalues of the matrix. It is particularly effective when the matrix is first brought into a so-called "condensed form"; for the unsymmetrical problem, the condensed form is the Hessenberg form, which will be discussed later. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.

For understanding of the QR algorithm we will need the following terms:

Definition 2.2.1. A matrix Q ∈ ℝ(n,n) is called orthogonal if Q^T Q = I holds.

Remark 2.2.1. Orthogonal matrices are a special case of unitary matrices. A matrix U ∈ ℂ(n,n) is called unitary if U^H U = I holds. It is now clear that an orthogonal matrix is a unitary matrix in which all elements are real numbers. In practice, it is of course easier to work with orthogonal than with unitary matrices.

Remark 2.2.2 Orthogonal and unitary matrices are normal matrices.

Let A and B be similar matrices. If the similarity transformation is performed by an orthogonal or unitary matrix, i.e., if B = Q^T A Q or B = U^H A U holds, we say that the matrices A and B are unitarily similar. Since unitarily similar matrices are a special case of similar matrices, the eigenvalues of unitarily similar matrices are the same.

In addition to unitarily similar matrices and their properties, for the introduction of the QR algorithm we will need the following theorem.

Theorem 2.2.1. Let A ∈ ℝ(n,n) be a regular matrix. Then there is a decomposition A = QR, where Q is an orthogonal matrix and R an upper triangular matrix. If the diagonal elements of the matrix R are positive, the decomposition is unique.

The decomposition of the matrix A from Theorem 2.2.1 is called the QR decomposition of the matrix A.

The following is a basic form of the QR algorithm.

Let A0 := A. The basic form of the QR algorithm is given by

Algorithm 2.2.1. (QR algorithm, basic form)

For i = 0, 1, ⋯ until convergence

Decompose Ai = QiRi (QR decomposition)

Ai+1 = RiQi

End
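Algorithm 2.2.1 can be sketched in a few lines of NumPy; the matrix below is an illustrative choice, symmetric with distinct eigenvalues so that the basic iteration converges:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

Ai = A.copy()
for _ in range(100):
    Q, R = np.linalg.qr(Ai)   # decompose A_i = Q_i R_i
    Ai = R @ Q                # A_{i+1} = R_i Q_i, unitarily similar to A_i

# The iterates tend to an upper triangular matrix; its diagonal
# approximates the eigenvalues of A.
assert np.allclose(np.sort(np.diag(Ai)), np.sort(np.linalg.eigvalsh(A)))
```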

Theorem 2.2.1. All matrices Ai generated by Algorithm 2.2.1 are unitarily similar.

Proof

Since Qi^T Ai = Ri holds, we have Ai+1 = RiQi = Qi^T Ai Qi. Based on this relationship, it is clear that Ai+1 = Qi^T Qi−1^T ⋯ Q0^T A0 Q0 ⋯ Qi−1 Qi. The matrix Q := Q0 ⋅ ⋯ ⋅ Qi is obviously an orthogonal matrix, and the theorem is proved.

Let us look briefly at some properties of the QR algorithm:

1. For the QR factorization of a full matrix, O(n^3) flops are necessary per factorization. The total QR iteration needs O(n^4) flops, which means that it is objectively very slow. But the slowness of the basic form of the algorithm can be overcome with two strategies, namely: transforming the matrix into Hessenberg form and introducing shifts.

Let A ∈ ℝ(n,n). Then each Qi, Ri ∈ ℝ(n,n), and the whole algorithm is performed in real arithmetic. If all eigenvalues of the matrix A have pairwise different moduli, then they are all real and the algorithm converges. However, if |λi+1|/|λi| ≈ 1 for at least one i, the QR algorithm is very slow. If the matrix A has complex eigenvalues, then the basic form of the QR algorithm does not converge. We have already seen the need for introducing the Hessenberg form of a matrix, which the following definition does.

Definition 2.2.2. We say that the matrix A is in upper Hessenberg form if

aij = 0 for i − j ≥ 2,

i.e., an upper triangular matrix with one additional subdiagonal below the main diagonal.

The reduction to Hessenberg form can be performed using Householder reflectors or Givens rotations. Let us look at the reduction to Hessenberg form using Householder reflectors.

Let

A = ( a11   c^T )
    ( b     B   )

with b ≠ 0.

Our goal is to determine ω ∈ ℂ^(n−1) with ‖ω‖2 = 1 and

Q1 b := (In−1 − 2ωω^H) b = k e1,

where In−1 is the identity matrix of order n − 1 and e1 is the first column of In−1.

We define the Householder reflector as follows:

P1 := ( 1   0^T )
      ( 0   Q1  )

Now

A1 := P1AP1 = ( a11     c^T Q1 )
              ( k e1    Q1BQ1  ).

Obviously, the first column of the matrix A1 has the form required in the upper Hessenberg form. In this way, we showed how the first column is converted into a suitable form. An analogous procedure can be performed on the remaining columns 2, 3, …, n − 2.
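The Householder reduction just described can be sketched as follows; the routine and the test matrix are illustrative, not taken from the text:

```python
import numpy as np

def to_hessenberg(A):
    """Reduce A to upper Hessenberg form by Householder reflectors.
    Returns a matrix H that is orthogonally similar to A."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 2):
        b = A[k+1:, k].copy()          # part of column k below the diagonal
        normb = np.linalg.norm(b)
        if normb == 0.0:
            continue
        # Householder vector mapping b to a multiple of e_1
        s = np.sign(b[0]) if b[0] != 0 else 1.0
        b[0] += s * normb
        w = b / np.linalg.norm(b)
        # Apply P = I - 2 w w^T from both sides (a similarity transformation)
        A[k+1:, k:] -= 2.0 * np.outer(w, w @ A[k+1:, k:])
        A[:, k+1:] -= 2.0 * np.outer(A[:, k+1:] @ w, w)
    return A

A = np.array([[4.0, 1.0, 2.0, 3.0],
              [1.0, 3.0, 0.0, 1.0],
              [2.0, 0.0, 5.0, 2.0],
              [3.0, 1.0, 2.0, 6.0]])   # symmetric, so H is tridiagonal
H = to_hessenberg(A)

# Entries below the first subdiagonal vanish; the eigenvalues are preserved.
assert np.allclose(np.tril(H, -2), 0.0, atol=1e-12)
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), np.sort(np.linalg.eigvalsh(A)))
```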

For the implementation of the QR algorithm, it is important that each iteration preserves the structure of the matrix. For a matrix in upper Hessenberg form, the following theorem applies.

Theorem 2.2.2. If A is an upper Hessenberg matrix, then the factor Q in its QR factorization A = QR is also an upper Hessenberg matrix.

From the above theorem, it follows that RQ is also an upper Hessenberg matrix, being the product of an upper triangular matrix and an upper Hessenberg matrix.

Preserving the structure of the matrix is very important for the efficiency of the algorithm. Namely, the QR factorization of an upper Hessenberg matrix costs only O(n^2) flops instead of the O(n^3) necessary for the QR factorization (decomposition) of a full matrix.

Let us look at one more advantage of transforming the matrix into Hessenberg form. Namely, Ai → R (i → ∞), where R is an upper triangular matrix. Because every Ai is an upper Hessenberg matrix, it suffices that the subdiagonal elements tend to zero, i.e., a(i)j+1,j → 0 (i → ∞), j = 1, 2, ⋯, n − 1, where a(i)j+1,j are the elements of the matrix Ai. It is now clear that, for sufficiently large i, the eigenvalues of the initial matrix A can be read off, on the basis of Theorem 2.1.3, as the diagonal elements of the matrix Ai.

For a further improvement of the algorithm, shifts are used. The idea is based on the simple fact that if the eigenvalues of A are λi, then the eigenvalues of the matrix A − σI are λi − σ. If the shift σ is chosen close to an eigenvalue, a strong acceleration of the algorithm results.

Let A0 := A.

Algorithm 2.2.2. (QR algorithm with shift)

For i = 0, 1, ⋯ until convergence

Choose shift σi near an eigenvalue

Decompose Ai − σiI = QiRi (QR decomposition)

Ai+1 = RiQi + σiI

End

It is easy to prove that all matrices in Algorithm 2.2.2 are unitarily similar. From the above it is clear that, in the case of real matrices with real eigenvalues, it is best to take the shift parameter σi = a(i)n,n.
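A sketch of Algorithm 2.2.2 with the shift σi = a(i)n,n, on an illustrative tridiagonal (hence upper Hessenberg) symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

Ai = A.copy()
n = Ai.shape[0]
for _ in range(50):
    sigma = Ai[n - 1, n - 1]               # shift: bottom-right entry of A_i
    Q, R = np.linalg.qr(Ai - sigma * np.eye(n))
    Ai = R @ Q + sigma * np.eye(n)         # A_{i+1} = R_i Q_i + sigma_i I

# The diagonal of the (nearly triangular) iterate holds the eigenvalues.
assert np.allclose(np.sort(np.diag(Ai)), np.sort(np.linalg.eigvalsh(A)), atol=1e-8)
```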

For a further analysis of the shift parameter, we will need the concept of an unreduced upper Hessenberg matrix as well as the implicit Q theorem.

Definition 2.2.3. An upper Hessenberg matrix H is an unreduced upper Hessenberg matrix if its first subdiagonal contains not a single zero.

Theorem 2.2.3. Let Q^H A Q = H be an unreduced upper Hessenberg matrix with positive subdiagonal elements hk+1,k, and Q a unitary matrix. Then the columns of the matrix Q, and the columns of H from the second to the n-th, are uniquely determined by the first column of the matrix Q.

Proof

Let Q = (q1, q2, ⋯, qn), and suppose q1, q2, ⋯, qk and the first k − 1 columns of the matrix H have been determined. The proof is carried out by mathematical induction on k. For k = 1, q1 is given and the process can start. Because QH = AQ and H = (hij) is upper Hessenberg, it holds that

hk+1,k qk+1 + hkk qk + ⋯ + h1k q1 = A qk.

If we multiply the last equality with (qi)^H, we get

hik = qi^H A qk,  i = 1, 2, ⋯, k.

From here, the k-th column of H, except for the element hk+1,k, is specified.

Because hk + 1,k ≠ 0 we have

qk+1 = (1/hk+1,k) (A qk − Σ_{i=1}^{k} hik qi).

From (qk+1)^H qk+1 = 1 and the positivity of hk+1,k, we get hk+1,k uniquely.

Theorem is proved.

Remark 2.2.3. The condition hk+1,k > 0 in the previous theorem is needed only to ensure the uniqueness of the matrices Q and H.

With the help of the implicit Q theorem, we discuss the selection of shifts when A = A0 is a real matrix with complex eigenvalues. Then one has to make a double shift with σ, σ̄. Namely,

A0 − σI = Q1R1,  A1 = R1Q1 + σI,
A1 − σ̄I = Q2R2,  A2 = R2Q2 + σ̄I.

From there, it is easy to get A2 = Q2^T Q1^T A0 Q1 Q2. The matrices Q1, Q2 can be chosen so that Q1Q2 is a real matrix, and therefore the matrix A2 is a real matrix. Applying

Q1Q2R2R1 = Q1(A1 − σ̄I)R1 = Q1(R1Q1 + (σ − σ̄)I)R1 = Q1R1Q1R1 + (σ − σ̄)Q1R1 = (A0 − σI)^2 + (σ − σ̄)(A0 − σI) = A0^2 − (σ + σ̄)A0 + σσ̄I =: M.

Because σ + σ̄ ∈ ℝ and σσ̄ = |σ|^2 ∈ ℝ, the matrix M is real. Then Q1Q2R2R1 is a QR factorization of a real matrix, which means that Q1Q2 and R2R1 can be chosen as real matrices. The first column of the matrix Q1Q2 is proportional to the first column of the matrix M, and the other columns are calculated by applying the implicit Q theorem.

### 2.3. Mathematical background for Hermitian (symmetric) case

In this section we look at the problem of eigenvalues in the case of a symmetric or Hermitian matrix.

Definition 2.3.1. A matrix A ∈ ℝ(n,n) is called symmetric if A = A^T holds.

Definition 2.3.2. A matrix A ∈ ℂ(n,n) is called Hermitian if A = A^H holds.

Remark 2.3.1. Symmetric matrices are simply the special case of Hermitian matrices in which the elements of the matrix are real numbers. Therefore, we will formulate the theorems for Hermitian matrices.

Remark 2.3.2. Hermitian and symmetric matrices are normal matrices, which means that they can be diagonalized.

The following theorem gives important information on the realness of the eigenvalues of Hermitian (symmetric) matrices. This feature greatly facilitates the eigenvalue problem for this class of matrices and makes it widely applicable in practice.

Theorem 2.3.1 . If A is Hermitian (symmetric) matrix, then:

1. The eigenvalues of A are all real numbers.

2. Eigenvectors from different eigenspaces are orthogonal.

Since all the eigenvalues are real, they can be compared. Therefore, we assume the eigenvalues are ordered by size, i.e., λ1 ≤ λ2 ≤ ⋯ ≤ λn, with corresponding orthonormal eigenvectors x1, x2, ⋯, xn.

If the matrix A is symmetric, its symmetry leads to a significant acceleration of the algorithms for the unsymmetrical case. Consider the QR algorithm presented in Section 2.2. In the symmetric case, it is important to note that the upper Hessenberg form of a symmetric matrix is a tridiagonal matrix, whose QR decomposition needs only O(n) operations. It is also important that the QR algorithm preserves this structure, i.e., all iterates Ai are tridiagonal. As a shift, one usually takes the Wilkinson shift, which is defined as the eigenvalue of the matrix

( an−1,n−1   an−1,n )
( an−1,n     an,n   )

that is closest to an,n.

For the QR algorithm with Wilkinson shifts, the following theorem applies; its proof is given in [2].

Theorem 2.3.2 (Wilkinson). The QR algorithm with Wilkinson shifts for a symmetric tridiagonal matrix converges globally and at least linearly. For almost all matrices, it converges asymptotically cubically.

Now we introduce the very important concept of the Rayleigh quotient, because it gives the best estimate of an eigenvalue for a given vector x ∈ ℂ^n, x ≠ 0.

Definition 2.3.3. Let A be a Hermitian (symmetric) matrix. For a given vector x ∈ ℂ^n, x ≠ 0, the Rayleigh quotient is defined as R(x) := (x^H A x)/(x^H x).

The importance of the Rayleigh quotient is seen in the following theorems

Theorem 2.3.3. (Properties of the Rayleigh quotient)

1. For all x ∈ ℂ^n, x ≠ 0, it holds that λ1 ≤ R(x) ≤ λn.

2. λ1 = min{R(x) : x ≠ 0},  λn = max{R(x) : x ≠ 0}.

3. If x ≠ 0 with λ1 = R(x), respectively λn = R(x), then x is an eigenvector corresponding to λ1, respectively λn.

4. λi = min{R(x) : x^H xj = 0, j = 1, ⋯, i − 1, x ≠ 0} = max{R(x) : x^H xj = 0, j = i + 1, ⋯, n, x ≠ 0}.
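The first properties of this theorem are easy to check numerically; the random symmetric matrix below is an illustrative assumption:

```python
import numpy as np

# The Rayleigh quotient of a symmetric matrix lies between the extreme
# eigenvalues and attains them at the corresponding eigenvectors.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                     # symmetrize

def rayleigh(A, x):
    return (x @ A @ x) / (x @ x)

lam, V = np.linalg.eigh(A)            # lam sorted ascending

for _ in range(1000):
    x = rng.standard_normal(5)
    assert lam[0] - 1e-12 <= rayleigh(A, x) <= lam[-1] + 1e-12

# The bounds are attained at the eigenvectors (property 3).
assert np.isclose(rayleigh(A, V[:, 0]), lam[0])
assert np.isclose(rayleigh(A, V[:, -1]), lam[-1])
```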

Item (4) in the previous theorem is known as the Rayleigh principle. However, it is numerically worthless, because for determining λi, for example, we need the eigenvectors x1, ⋯, xi−1 corresponding to the previous eigenvalues. To overcome this disadvantage, the minmax principle of Poincaré, stated in the following theorem, is introduced.

Theorem 2.3.4 . (min max principle of Poincaré )

λi = min_(dim V = i) max_(x ∈ V, x ≠ 0) R(x) = max_(dim V = n − i + 1) min_(x ∈ V, x ≠ 0) R(x)

The following formulation is known as the minmax principle of Courant-Fischer and is often favorable to use.

Theorem 2.3.5. (min max principle of Courant-Fischer )

λi = min_(p1, ⋯, pi−1) max{R(x) : x^H pj = 0, j = 1, ⋯, i − 1, x ≠ 0} = max_(p1, ⋯, pn−i) min{R(x) : x^H pj = 0, j = 1, ⋯, n − i, x ≠ 0}.

From the above it is clear that these theorems are important for the localization of eigenvalues.

The following algorithm is known in linear algebra as Rayleigh quotient iteration, and it reads as follows.

Let A ∈ ℝ(n,n) be a symmetric matrix and x0 a normalized initial vector, i.e., ‖x0‖2 = 1.

Algorithm 2.3.1 . (Rayleigh quotient iteration)

σ0 = x0^T A x0

For i = 1, 2 ⋯ until convergence

Solve (A − σi−1I)yi = xi−1

xi = yi/‖yi‖2
σi = xi^T A xi.

End

Theorem 2.3.6. Rayleigh quotient iteration converges cubically.
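Algorithm 2.3.1 can be sketched in NumPy (the matrix and starting vector are illustrative choices); a handful of iterations already reaches machine precision:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

x = np.array([0.0, 0.0, 1.0])          # normalized starting vector x_0
sigma = x @ A @ x                      # sigma_0 = x_0^T A x_0
for _ in range(10):
    try:
        y = np.linalg.solve(A - sigma * np.eye(3), x)
    except np.linalg.LinAlgError:
        break                          # sigma hit an eigenvalue exactly
    x = y / np.linalg.norm(y)
    sigma = x @ A @ x

# sigma approximates an eigenvalue of A, x the corresponding eigenvector.
assert np.linalg.norm(A @ x - sigma * x) < 1e-8
```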

Finally, we point out that the most effective method for symmetric matrices is the divide-and-conquer method. This method was introduced by Cuppen [3], and the first effective implementation is the work of Gu and Eisenstat [4]. More information about this method can be found in [4].

### 2.4. Physical background

We have already mentioned that the eigenvalue problem has numerous applications in engineering. Even more motivation for considering eigenvalue problems comes from their wide application in technical disciplines. We illustrate the application of eigenvalues in engineering on a simple example of a mass-spring system. We assume that each spring has the same natural length l and the same spring constant k. Finally, we assume that the displacement of each mass is measured in its own local coordinate system with an origin at the spring's equilibrium position.

Applying Newton’s second law we get the following system

m1 d²x1/dt² − k(−2x1 + x2) = 0
m2 d²x2/dt² − k(x1 − 2x2) = 0.

We know from vibration theory that

xi = ai sin ωt,

where ai is the amplitude of the vibration of mass i and ω is the frequency of the vibration.

Differentiating the last equation twice with respect to t, we get

d²xi/dt² = −ai ω² sin ωt.

If we substitute the obtained expressions for xi and d²xi/dt² (i = 1, 2) into the initial system and write the system in matrix form, then we get

A (a1, a2)^T = ω² (a1, a2)^T,    (6)

where

A = ( 2k/m1   −k/m1 )
    ( −k/m2   2k/m2 )

Equation (6) represents an unsymmetrical eigenvalue problem.
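For concreteness, the natural frequencies can be computed numerically; the values of m1, m2, and k below are illustrative assumptions, not taken from the text:

```python
import numpy as np

m1, m2, k = 1.0, 2.0, 3.0
A = np.array([[2 * k / m1, -k / m1],
              [-k / m2, 2 * k / m2]])

# The eigenvalues of A are omega^2; the natural frequencies are their
# square roots (real and positive for this system).
omega_sq = np.linalg.eigvals(A).real
omega = np.sqrt(np.sort(omega_sq))
print(omega)
```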

### 2.5. General Linear Eigenvalue Problem

In this section, we deal with the general linear eigenvalue problem, i.e., the problem

Ax = λBx,  x ≠ 0,    (7)

where A, B ∈ ℂ(n,n).

The scalar λ is called an eigenvalue of the problem (7), and x is said to be an eigenvector of (7) corresponding to λ.

A common acronym for the general linear eigenvalue problem is GEP. The eigenvalue problem previously discussed is called the standard eigenvalue problem, tagged SEP. In practice, we meet the GEP more often than the SEP. Now let us consider some features of the GEP and establish its relationship with the SEP.

It is obvious that the eigenvalues of (7) are zeros of the characteristic polynomial, which is defined as pn(λ) := det(A − λB). In the case of the GEP, the degree of the polynomial pn is less than or equal to n. The characteristic polynomial pn has degree n if and only if B is a regular matrix. In the case B = I, we get the SEP, and in this case the SEP has n eigenvalues. The GEP can have fewer than n eigenvalues. It can also happen that for the GEP pn(λ) ≡ 0, and in that case the GEP has infinitely many eigenvalues.

The following two examples illustrate these situations with the GEP.

Example 2.5.1 Let in the GEP A = [0 0; 0 0], B = [0 6; 0 0]. Then pn(λ) ≡ 0 holds, every λ ∈ ℂ is an eigenvalue, and x = (1, 0)^T is a corresponding eigenvector.

Example 2.5.2 Let in the GEP A = [1 2; 3 4], B = [1 −1; −1 1]. Then we get the characteristic polynomial pn(λ) = (1 − λ)(4 − λ) − (3 + λ)(2 + λ) = −2 − 10λ, and the only eigenvalue is λ = −1/5.

The atypical behavior we met in the last two examples results from the fact that the matrix B was not regular. Therefore, it is usual in the GEP to assume that A and B are Hermitian matrices and that B is positive definite, that is, all eigenvalues of the matrix B are positive.

Our goal is to find a connection between the GEP taken this way and the symmetric SEP. As we said, B is positive definite, so B has a Cholesky decomposition, i.e., B = CC^H. Then

Ax = λBx  ⇔  Fy = λy,  where F := C^(−1) A C^(−H),  y := C^H x.    (8)

Since the Hermitian matrix F has n real eigenvalues, the GEP it belongs to also has n real eigenvalues; namely, on the basis of (8), the eigenvalues are the same.

Let yi and yj be orthonormal eigenvectors of F. Then for xi = C^(−H)yi and xj = C^(−H)yj, it holds that

δij = yi^H yj = (C^H xi)^H (C^H xj) = xi^H C C^H xj = xi^H B xj.    (9)

From equation (9), it is clear that the eigenvectors of the GEP (7) are orthonormal vectors with respect to the new inner product defined as ⟨x, y⟩B := x^H B y. For the GEP (7), we also redefine the Rayleigh quotient as R(A,B)(x) := (x^H A x)/(x^H B x). With this redefined inner product and Rayleigh quotient, all theorems of Section 2.3 apply with the appropriate modification of the definitions. This gives a generalization of the Hermitian (symmetric) SEP.
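The reduction (8) can be sketched numerically as follows; the matrices are illustrative (A Hermitian, B Hermitian positive definite):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[4.0, 1.0],
              [1.0, 2.0]])

C = np.linalg.cholesky(B)                           # lower triangular, B = C C^T
F = np.linalg.solve(C, np.linalg.solve(C, A).T).T   # F = C^{-1} A C^{-T}

# F is symmetric, so the GEP A x = lambda B x has n real eigenvalues.
lam = np.linalg.eigvalsh(F)

# Cross-check: each lambda is a zero of det(A - lambda B).
for l in lam:
    assert abs(np.linalg.det(A - l * B)) < 1e-10
```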

## 3. The quadratic eigenvalue problems

In practice, nonlinear eigenproblems commonly arise in dynamic/stability analysis of structures and in fluid mechanics, electronic behavior of semiconductor heterostructures, vibration of fluid-solid structures, vibration of sandwich plates, accelerator design, vibro-acoustics of piezoelectric/poroelastic structures, nonlinear integrated optics, regularization of total least squares problems, and stability of delay differential equations. In practice, the most important is the quadratic problem Q(λ)x = 0, where

Q(λ) := λ²A + λB + C,  A, B, C ∈ ℂ^(n×n),  A ≠ 0,  x ≠ 0.    (10)

This section is organized as follows:

1. Basic Properties

We consider the Rayleigh functional and the minmax characterization.

2. Linearization

A standard approach for investigating or numerically solving quadratic eigenvalue problems is linearization, where the original problem is transformed into a generalized linear eigenvalue problem with the same spectrum.

3. Physical Background

We study the vibration analysis of structural systems.
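The linearization mentioned under point 2 can be sketched as follows; one common companion form, with z = (x, λx), turns Q(λ)x = 0 into a generalized linear problem of twice the size (the matrices below are illustrative):

```python
import numpy as np

# Linearization of Q(lambda) = lambda^2 A + lambda B + C:
#   [[0, I], [-C, -B]] z = lambda [[I, 0], [0, A]] z,  z = (x, lambda x),
# has the same eigenvalues as the quadratic problem.
n = 2
A = np.eye(n)
B = np.array([[3.0, 1.0],
              [1.0, 4.0]])
C = np.array([[2.0, 0.0],
              [0.0, 1.0]])

Z = np.zeros((n, n))
L0 = np.block([[Z, np.eye(n)], [-C, -B]])
L1 = np.block([[np.eye(n), Z], [Z, A]])

# Since A (and hence L1) is nonsingular, this reduces to a standard problem.
lam = np.linalg.eigvals(np.linalg.solve(L1, L0))

# Each computed lambda (nearly) annihilates det(Q(lambda)).
for l in lam:
    assert abs(np.linalg.det(l**2 * A + l * B + C)) < 1e-8
```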

### 3.1. Basic Properties

Variational characterization is important for finding eigenvalues. In this section, we give a brief review of the variational characterization of nonlinear eigenvalue problems. Since quadratic eigenproblems are a special case of nonlinear eigenvalue problems, results for nonlinear eigenvalue problems apply directly to quadratic eigenvalue problems. The variational characterization is a generalization of the well-known minmax characterization of linear eigenvalue problems.

We consider nonlinear eigenvalue problems

T(λ)x = 0,    (11)

where T(λ) ∈ ℂ^(n×n), λ ∈ J, is a family of Hermitian matrices depending continuously on the parameter λ ∈ J, and J is a real open interval which may be unbounded.

Problems of this type arise in damped vibrations of structures, conservative gyroscopic systems, lateral buckling problems, problems with retarded arguments, fluid–solid vibrations, and quantum dot heterostructures.

To generalize the variational characterization of eigenvalues we need a generalization of the Rayleigh quotient. To this end we assume that

(A) for every fixed x ∈ ℂn, x ≠ 0 the scalar real equation

f(λ; x) := x^H T(λ) x = 0    (12)

has at most one solution p(x) ∈ J. Then f(λ; x) = 0 implicitly defines a functional p on some subset D ⊂ ℂ^n, which is called the Rayleigh functional of (11).

(B) for every x ∈ D and every λ ∈ J with λ ≠ p(x), it holds that (λ − p(x))f(λ; x) > 0.

If p is defined on D = ℂ^n\{0}, then the problem (11) is called overdamped; otherwise, it is called nonoverdamped.

Generalizations of the minmax and maxmin characterizations of the eigenvalues were proved by Duffin [6] for the quadratic case and by Rogers [7] for general overdamped problems. For nonoverdamped eigenproblems, the natural ordering (calling the smallest eigenvalue the first one, the second smallest the second one, etc.) is not appropriate. The next theorem, proved in [8], gives the following minmax characterization of the eigenvalues.

Theorem 3.1.1.

Let J be an open interval in ℝ, and let T(λ) ∈ ℂ^(n×n), λ ∈ J, be a family of Hermitian matrices depending continuously on the parameter λ ∈ J, such that the conditions (A) and (B) are satisfied. Then the following statements hold.

1. For every l ∈ ℕ there is at most one lth eigenvalue of T(⋅) which can be characterized by

λl = min_(V ∈ Hl, V∩D ≠ ∅) sup_(v ∈ V∩D) p(v),    (13)

where Hl denotes the set of all l-dimensional subspaces of ℂ^n.
2. If

λ_l := inf_{V ∈ H_l, V∩D ≠ ∅} sup_{v ∈ V∩D} p(v) ∈ J

for some l ∈ ℕ, then λ_l is the lth eigenvalue of T(⋅) in J, and (13) holds.

3. If there exist the kth and the lth eigenvalues λ_k and λ_l in J (k < l), then J contains the jth eigenvalue λ_j (k ≤ j ≤ l) as well, and λ_k ≤ λ_j ≤ λ_l.

4. Let λ_1 = inf_{x∈D} p(x) ∈ J and λ_l ∈ J. If the minimum in (13) is attained for an l-dimensional subspace V, then V ⊂ D ∪ {0}, and (13) can be replaced with

λ_l = min_{V ∈ H_l, V ⊂ D∪{0}} sup_{v ∈ V, v≠0} p(v).
5. λ̃ is an lth eigenvalue if and only if μ = 0 is the lth largest eigenvalue of the linear eigenproblem T(λ̃)x = μx.

6. The minimum in (13) is attained for the invariant subspace of T(λ_l) corresponding to its l largest eigenvalues.

Sylvester’s law of inertia plays an important role in nonlinear eigenvalue problems. We briefly recall it here. With this purpose we define the inertia of a Hermitian matrix T as follows [9].

Definition 3.1.1. The inertia of a Hermitian matrix T is the triplet of nonnegative integers In(T) = (n_p, n_n, n_z), where n_p, n_n, and n_z are the numbers of positive, negative, and zero eigenvalues of T (counting multiplicities).
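The definition translates directly into code. The following sketch (an illustration, with the tolerance `tol` an assumption for deciding which eigenvalues count as zero) computes the inertia from the eigenvalues of T:

```python
import numpy as np

def inertia(T, tol=1e-10):
    """Inertia In(T) = (n_p, n_n, n_z) of a Hermitian matrix T."""
    w = np.linalg.eigvalsh(T)           # real eigenvalues of a Hermitian matrix
    n_p = int(np.sum(w > tol))
    n_n = int(np.sum(w < -tol))
    n_z = len(w) - n_p - n_n
    return n_p, n_n, n_z

# Illustrative example (assumed data):
T = np.diag([3.0, -1.0, 0.0, 2.0])
print(inertia(T))                       # (2, 1, 1)
```

In practice the inertia is usually obtained more cheaply from an LDLᴴ factorization T = LDLᴴ: by Sylvester's law of inertia, In(T) = In(D), and the inertia of the block-diagonal D can be read off directly.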

Next, we consider the case that an extreme eigenvalue λ_1 := inf_{x∈D} p(x) or λ_n := sup_{x∈D} p(x) is contained in J.

Theorem 3.1.2. Assume that T : J → ℂ^{n×n} satisfies the conditions of the minmax characterization, and let (n_p, n_n, n_z) be the inertia of T(σ) for some σ ∈ J.

1. If λ_1 := inf_{x∈D} p(x) ∈ J, then the nonlinear eigenproblem T(λ)x = 0 has exactly n_p eigenvalues λ_1 ≤ … ≤ λ_{n_p} in J which are less than σ.

2. If λ_n := sup_{x∈D} p(x) ∈ J, then the nonlinear eigenproblem T(λ)x = 0 has exactly n_n eigenvalues λ_{n−n_n+1} ≤ … ≤ λ_n in J exceeding σ.

We now return to the quadratic eigenvalue problem (QEP). To this end we specialize the real scalar equation (12) to the QEP. In this way, we get

f(λ; x) := λ²·xᴴAx + λ·xᴴBx + xᴴCx = 0 for every fixed x ∈ ℂⁿ, x ≠ 0. (14)

Natural candidates for the Rayleigh functionals of the QEP (10) are

p₊(x) := −xᴴBx/(2xᴴAx) + √((xᴴBx/(2xᴴAx))² − xᴴCx/(xᴴAx)) and (15)

p₋(x) := −xᴴBx/(2xᴴAx) − √((xᴴBx/(2xᴴAx))² − xᴴCx/(xᴴAx)). (16)

The Rayleigh functionals are the generalization of the Rayleigh quotient.

In this section we deal with the hyperbolic quadratic pencil as an example of an overdamped problem, and with the gyroscopically stabilized pencil as an example of a problem that is not overdamped.

Let us now look briefly at the hyperbolic quadratic pencil. It is the overdamped quadratic pencil given by (10) in which A = Aᴴ > 0, B = Bᴴ, C = Cᴴ, and (xᴴBx)² > 4(xᴴAx)(xᴴCx) for every x ≠ 0. The hyperbolic quadratic pencil has the following interesting features:

The ranges J₊ := p₊(ℂⁿ∖{0}) and J₋ := p₋(ℂⁿ∖{0}) are disjoint real intervals with max J₋ < min J₊. Q(λ) is positive definite for λ < min J₋ and λ > max J₊, and it is negative definite for λ ∈ (max J₋, min J₊).

(QJ+) and (−QJ) satisfy the conditions of the variational characterization of the eigenvalues, i.e. there exist 2n eigenvalues[1].

λ₁ ≤ λ₂ ≤ … ≤ λ_n < λ_{n+1} ≤ … ≤ λ_{2n} (17)

and

λ_j = min_{dim V = j} max_{x ∈ V, x ≠ 0} p₋(x), λ_{n+j} = min_{dim V = j} max_{x ∈ V, x ≠ 0} p₊(x), j = 1, 2, …, n. (18)
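The relation between eigenvectors and the functionals p₋, p₊ can be checked numerically. The sketch below is illustrative: the matrices are assumptions chosen to be comfortably hyperbolic. It computes the 2n eigenvalues via the companion linearization of Section 3.2 and verifies that each eigenvalue is recovered by p₋ or p₊ at its eigenvector.

```python
import numpy as np
from scipy.linalg import eig

# Assumed hyperbolic pencil Q(lam) = lam^2*A + lam*B + C:
# A > 0 and (x^H B x)^2 > 4 (x^H A x)(x^H C x) for all x != 0.
n = 3
A = np.eye(n)
B = 10.0 * np.eye(n)
B[0, 1] = B[1, 0] = 1.0
C = np.eye(n)

def qf(Mat, x):
    """Real quadratic form x^H Mat x for Hermitian Mat."""
    return np.real(np.conj(x) @ Mat @ x)

def p_pm(x, sign):
    a, b, c = qf(A, x), qf(B, x), qf(C, x)
    return -b / (2 * a) + sign * np.sqrt((b / (2 * a))**2 - c / a)

# Companion linearization [-B -C; I 0] w = lam [A 0; 0 I] w, with w = [lam*x; x]
L = np.block([[-B, -C], [np.eye(n), np.zeros((n, n))]])
M = np.block([[A, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
lams, W = eig(L, M)

# At an eigenvector x, f(lam; x) = 0, so lam equals p_-(x) or p_+(x)
errs = []
for lam, w in zip(lams.real, W.T):
    x = w[n:]                           # lower block of w is the eigenvector x
    errs.append(min(abs(p_pm(x, -1) - lam), abs(p_pm(x, +1) - lam)))
```

Since the pencil is hyperbolic, all 2n computed eigenvalues are real and each `errs` entry is at the level of rounding error.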

Now we will look at gyroscopically stabilized systems (GSS). A quadratic matrix polynomial

Q(λ) = λ²I + λB + C, B = Bᴴ, det B ≠ 0, C = Cᴴ > 0 (19)

is gyroscopically stabilized if for some k > 0 it holds that

|B| > kI + k⁻¹C, (20)

where |B| denotes the positive square root of B².

Definition 3.1.2. An eigenvalue λ is of positive type if xᴴQ′(λ)x > 0 for every eigenvector x corresponding to λ.

An eigenvalue λ is of negative type if xᴴQ′(λ)x < 0 for every eigenvector x corresponding to λ.

Theorem (Barkwell, Lancaster, Markus 1992)

1. The spectrum of a gyroscopically stabilized pencil is real, i.e. Q is quasi-hyperbolic.

2. All eigenvalues are either of positive type or of negative type.

3. If (n_p, n_n, n_z) is the inertia of B, then Q(λ)x = 0 has 2n_p negative and 2n_n positive eigenvalues.

4. The 2n_p negative eigenvalues lie in two disjoint intervals, with n_p eigenvalues in each; the ones in the left interval are of negative type, the ones in the right interval are of positive type.

5. The 2n_n positive eigenvalues lie in two disjoint intervals, with n_n eigenvalues in each; the ones in the left interval are of negative type, the ones in the right interval are of positive type.

Without loss of generality, we consider only the positive eigenvalues.

Let now

p₊(x) := −xᴴBx/(2xᴴx) + √((xᴴBx/(2xᴴx))² − xᴴCx/(xᴴx)) and

p₋(x) := −xᴴBx/(2xᴴx) − √((xᴴBx/(2xᴴx))² − xᴴCx/(xᴴx))

be the functionals appropriate for the GSS. With them, we can define the Rayleigh functionals

p̃₋(x) := p₋(x) for p₋(x) > 0, and p̃₋(x) := 0 else, and p̃₊(x) := p₊(x) for p₊(x) > 0, and p̃₊(x) := 0 else.

Voss and Kostić defined for these functionals intervals on which the eigenvalues can be characterized by a minmax principle.

In order to characterize all eigenvalues by a minmax principle, we will introduce a new Rayleigh functional. It is a new strategy. With this aim, we represent the matrices B and C as well as the vector x in the following form:

B = [B₁ 0; 0 −B₂], Bᵢ > 0, i = 1, 2, C = [C₁₁ C₁₂; C₁₂ᴴ C₂₂], Cᵢᵢ > 0, i = 1, 2, x = [z; y] ≠ 0.

We define

Q₁(λ) := λ²I + λB₁ + C₁₁, Q₂(λ) := λ²I − λB₂ + C₂₂, T(λ) := Q₂(λ) − C₁₂ᴴQ₁(λ)⁻¹C₁₂.

Because of condition (20), Q₁(λ) and Q₂(λ) are hyperbolic.

We consider the following eigenvalue problem

T(λ)y = 0, y ∈ ℂ^{n−n_p}, y ≠ 0.

The following theorem holds.

Theorem 3.1.3. λ is an eigenvalue of Q(⋅) if and only if λ is an eigenvalue of T(⋅).

Proof

Q(λ)x = 0 ⟺ [Q₁(λ) C₁₂; C₁₂ᴴ Q₂(λ)][z; y] = [0; 0] ⟺ Q₁(λ)z + C₁₂y = 0 and C₁₂ᴴz + Q₂(λ)y = 0 ⟺ z = −Q₁(λ)⁻¹C₁₂y and (Q₂(λ) − C₁₂ᴴQ₁(λ)⁻¹C₁₂)y = 0 ⟺ z = −Q₁(λ)⁻¹C₁₂y and T(λ)y = 0.

Theorem is proved.
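Theorem 3.1.3 can also be checked numerically: every real eigenvalue λ of Q(⋅) at which Q₁(λ) is nonsingular must make T(λ) singular. The sketch below uses small assumed block data (an illustration only, chosen so that the spectrum is real):

```python
import numpy as np

# Assumed illustrative data: B = diag(B1, -B2) with B1, B2 > 0, and C > 0
# with a small off-diagonal coupling block C12.
n1 = n2 = 2
B1 = np.diag([6.0, 7.0])
B2 = np.diag([5.0, 8.0])
C11 = np.diag([1.0, 2.0])
C22 = np.diag([1.5, 1.0])
C12 = np.array([[0.1, -0.05], [0.02, 0.08]])
n = n1 + n2
B = np.block([[B1, np.zeros((n1, n2))], [np.zeros((n2, n1)), -B2]])
C = np.block([[C11, C12], [C12.T, C22]])

def Q1(lam): return lam**2 * np.eye(n1) + lam * B1 + C11
def Q2(lam): return lam**2 * np.eye(n2) - lam * B2 + C22
def T(lam):  return Q2(lam) - C12.T @ np.linalg.solve(Q1(lam), C12)

def smin(Mat):
    """Smallest singular value (zero iff Mat is singular)."""
    return np.linalg.svd(Mat, compute_uv=False)[-1]

# Eigenvalues of Q(lam) = lam^2*I + lam*B + C via companion linearization
L = np.block([[-B, -C], [np.eye(n), np.zeros((n, n))]])
lams = np.linalg.eigvals(L)

# T(lam) must be (numerically) singular at each real eigenvalue of Q
# at which Q1(lam) is nonsingular
residuals = [smin(T(lam.real)) for lam in lams
             if abs(lam.imag) < 1e-8 and smin(Q1(lam.real)) > 1e-6]
```

Every entry of `residuals` is at rounding-error level, confirming the equivalence in the theorem for this example.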

Analogously to (14), we define the following functions

q(λ; y) := yᴴT(λ)y and f₂(λ; y) := yᴴQ₂(λ)y.

In the following theorem we give information about the properties of q(λ; y).

Theorem 3.1.4. The function q(λ; y) has the following characteristics:

1. For each vector y ≠ 0, q(0; y) > 0.

2. For λ > 0 the function q(λ; y) has exactly two zeros for each vector y ≠ 0. The smaller zero of q(λ; y) is smaller than the smaller zero of f₂(λ; y), and the larger zero of q(λ; y) is greater than the larger zero of f₂(λ; y).

Proof

1. Because C > 0, we have

[−C₁₁⁻¹C₁₂y; y]ᴴ [C₁₁ C₁₂; C₁₂ᴴ C₂₂] [−C₁₁⁻¹C₁₂y; y] > 0.

It follows that

yᴴ(C₂₂ − C₁₂ᴴC₁₁⁻¹C₁₂)y > 0,

and then

q(0; y) = yᴴ(C₂₂ − C₁₂ᴴC₁₁⁻¹C₁₂)y > 0.
2. We have already mentioned that Q₂(λ) is hyperbolic. This means that the function f₂(λ; y) has, for each vector y ≠ 0, two different real roots. For λ > 0, Q₁(λ) > 0. Therefore,

q(λ; y) = yᴴQ₂(λ)y − yᴴC₁₂ᴴQ₁(λ)⁻¹C₁₂y < yᴴQ₂(λ)y = f₂(λ; y). (21)

Further,

lim_{λ→+∞} q(λ; y) = +∞. (22)

From 1. and from (21) and (22) it follows that the function q(λ; y) has, for λ > 0, exactly two zeros. The smaller zero a of q(λ; y) is smaller than the smaller zero of f₂(λ; y), and the larger zero b of q(λ; y) is greater than the larger zero of f₂(λ; y).

3. q′(λ; y) = 2λ·yᴴy − yᴴB₂y + yᴴC₁₂ᴴQ₁(λ)⁻¹Q₁′(λ)Q₁(λ)⁻¹C₁₂y > 0.

Theorem is proved.

We define new functionals.

Definition 3.1.3. Let q(a; y) = q(b; y) = 0 and 0 < a < b. We define two new functionals

t₋(y) := a, t₊(y) := b, W± := t±(ℂ^{n−n_p}∖{0}).

Theorem 3.1.5. It holds that max W₋ < min W₊.

Proof

p₂±(x) := xᴴB₂x/(2xᴴx) ± √((xᴴB₂x/(2xᴴx))² − xᴴC₂₂x/(xᴴx)), J± := p₂±(ℂ^{n−n_p}∖{0}), max J₋ < min J₊,

because Q₂(λ) is hyperbolic. Then

t₋(y) < max J₋ < min J₊ < t₊(y), for every y ∈ ℂ^{n−n_p}∖{0},

max W₋ ≤ max J₋ < min J₊ ≤ min W₊.

Theorem is proved.

Theorem 3.1.6. All positive eigenvalues of (19) are either minmax values of t₋(y) or maxmin values of t₊(y).

### 3.2. Linearization

In this section we will deal with linearization. As mentioned, linearization is the standard procedure for reducing a QEP to a GEP with a view to facilitating the computation of eigenvalues. We have already seen that eigenvalue problems usually arise from solving differential equations or systems of differential equations. The basic idea of linearization also comes from the field of differential equations, where a second-order differential equation can be reduced to a system of two first-order differential equations with two unknown functions.

The basic idea of linearization of the QEP is the introduction of the substitution z = λx in

(λ²A + λB + C)x = 0.

Then we get

1. λAz + Bz + Cx = 0, or

2. λAz + λBx + Cx = 0.

The resulting equations are GEPs because they can be written, respectively, in the form

1. [−B −C; I O][z; x] = λ[A O; O I][z; x]

2. [O −C; I O][z; x] = λ[A B; O I][z; x].

Since all matrices of the corresponding GEPs are of size 2n × 2n, each GEP has 2n eigenvalues, and therefore the corresponding QEP also has 2n eigenvalues. From the above it is clear that linearization is not unique. However, in choosing a linearization of a QEP it is important to preserve the symmetry and the spectral properties of the QEP whenever possible. An application of linearization will be given in the next section.
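A minimal numerical sketch of linearization 1. above (the example matrices are assumptions, chosen symmetric for illustration):

```python
import numpy as np
from scipy.linalg import eig

# Assumed example QEP data (symmetric 2x2 matrices):
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[3.0, 1.0], [1.0, 4.0]])
C = np.array([[1.0, 0.0], [0.0, 2.0]])
n = A.shape[0]

# Linearization 1.: [-B -C; I 0][z; x] = lam [A 0; 0 I][z; x], with z = lam*x
L1 = np.block([[-B, -C], [np.eye(n), np.zeros((n, n))]])
M1 = np.block([[A, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
lams = eig(L1, M1, right=False)

# Sanity check: each lam makes Q(lam) = lam^2*A + lam*B + C singular
res = [abs(np.linalg.det(lam**2 * A + lam * B + C)) for lam in lams]
```

The GEP returns 2n = 4 eigenvalues, and each of them annihilates the determinant of the original quadratic pencil up to rounding error.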

### 3.3. Physical Background

We now look at the application of quadratic eigenvalue problems in engineering. The most extensive review of applications of the QEP is given in [10]. We have already mentioned in the introduction that the eigenvalue problem arises in connection with differential equations or systems of differential equations. In structural mechanics, differential equations, and therefore eigenvalue problems, are used most commonly. Note that the ultimate goal is to determine the effect of vibrations on the performance and reliability of the system, and to control these effects.

We will now demonstrate the linearization of a QEP on a concrete example from engineering. Small vibrations of a system with n degrees of freedom are described by the following system of differential equations

Mÿ + Cẏ + Ky = 0, (23)

where M is the mass matrix, C is the viscous damping matrix, and K is the stiffness matrix. Because of the underlying physics, M and K are related to the kinetic and strain energy, respectively, by quadratic forms, which makes them symmetric. For most structures, M and K are positive definite and sparse.

Introducing y = xe^{λt} and rearranging, we get

(λ²M + λC + K)xe^{λt} = 0,

that is,

(λ²M + λC + K)x = 0. (24)

Therefore, the system (23) has a nontrivial solution y if and only if λ is chosen such that the QEP (24) has a nontrivial solution x.

We now apply to the QEP (24) the linearization method presented in Section 3.2. Thus we have the appropriate GEP

[−C −K; I O][z; x] = λ[M O; O I][z; x].

When the system is undamped (C = O), we get

ω²Mx := −λ²Mx = Kx.

Because the matrices M and K are most commonly symmetric, positive definite, and sparse, the obtained GEP is easy to solve.
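For the undamped case the symmetric-definite GEP Kx = ω²Mx can be solved directly with a symmetric solver. A sketch with an assumed three-degree-of-freedom spring-mass chain (illustrative data, not from the chapter):

```python
import numpy as np
from scipy.linalg import eigh

# Assumed example data: a fixed-free chain of three masses and unit springs.
M = np.diag([2.0, 1.0, 1.0])                 # mass matrix (diagonal, > 0)
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])           # stiffness matrix (> 0, sparse)

# eigh exploits the symmetry and positive definiteness of the pair (K, M)
w2, X = eigh(K, M)                           # ascending omega^2, modes in columns
omega = np.sqrt(w2)                          # natural frequencies
```

The eigenvectors returned by `eigh` are M-orthonormal, i.e. XᵀMX = I, which is the usual normalization of mode shapes in structural mechanics.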

## 4. Conclusion

Because of its great practical importance, the eigenvalue problem occupies an important place in linear algebra. In this chapter we discussed linear and quadratic eigenvalue problems. In particular, emphasis was placed on numerical methods such as the QR algorithm and Rayleigh quotient iteration for linear eigenvalue problems, and on linearization and minmax characterization for quadratic eigenvalue problems. The whole chapter shows that the structure of the matrices participating in an eigenvalue problem strongly influences the choice of method. It is also clear that by exploiting the structure of the matrices, existing algorithms can be made much more effective. Thus, further studies will aim at increased use of the features of the matrices involved in the eigenvalue problem, with the goal of improving the effectiveness of the methods. Finally, we point out that in this chapter we introduced new Rayleigh functionals for gyroscopically stabilized systems that enable a complete minmax (maxmin) characterization of the eigenvalues. It is a new strategy. We have proved all relevant features of the new Rayleigh functionals.

## How to cite and reference

### Cite this chapter

Aleksandra Kostić (July 6th 2016). Eigenvalue Problems, Applied Linear Algebra in Action, Vasilios N. Katsikis, IntechOpen, DOI: 10.5772/62267.
