Eigendecomposition of a matrix

In the mathematical discipline of linear algebra, eigendecomposition or sometimes spectral decomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors.

Fundamental theory of matrix eigenvectors and eigenvalues

A vector v of dimension N is an eigenvector of a square (N×N) matrix A if and only if it satisfies the linear equation

: \mathbf{A}\mathbf{v} = \lambda \mathbf{v}

where λ is a scalar, termed the eigenvalue corresponding to v. The above equation is called the eigenvalue equation or the eigenvalue problem.

Rearranging gives (A − λI)v = 0, which has a nonzero solution v if and only if the matrix A − λI is singular. This yields an equation for the eigenvalues

: p(\lambda) := \det\left(\mathbf{A} - \lambda \mathbf{I}\right) = 0.

We call p(λ) the characteristic polynomial, and the equation, called the characteristic equation, is an Nth-order polynomial equation in the unknown λ. This equation will have N_λ distinct solutions, where 1 ≤ N_λ ≤ N. The set of solutions, i.e. the eigenvalues, is sometimes called the spectrum of A.
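
For example, the 2×2 matrix

: \mathbf{A} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}

has characteristic polynomial

: p(\lambda) = \det\begin{bmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{bmatrix} = (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3),

so its eigenvalues are λ_1 = 1 and λ_2 = 3.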

We can factor "p" as:pleft(lambda ight)= (lambda-lambda_1)^{n_1}(lambda-lambda_2)^{n_2}cdots(lambda-lambda_k)^{n_k} = 0 ! where:sumlimits_{i=1}^{N_{lambda{n_i} =N.

For each eigenvalue λ_i, we have a specific eigenvalue equation

: \left(\mathbf{A} - \lambda_i \mathbf{I}\right) \mathbf{v} = 0.

There will be 1 ≤ m_i ≤ n_i linearly independent solutions to each eigenvalue equation. The m_i solutions are the eigenvectors associated with the eigenvalue λ_i, and the integer m_i is termed the geometric multiplicity of λ_i. It is important to keep in mind that the algebraic multiplicity n_i and the geometric multiplicity m_i may or may not be equal, but we always have m_i ≤ n_i. The simplest case is of course m_i = n_i = 1. The total number of linearly independent eigenvectors, N_v, can be calculated by summing the geometric multiplicities:

: \sum_{i=1}^{N_\lambda} m_i = N_{\mathbf{v}}.

The eigenvectors can be indexed by eigenvalues, i.e. using a double index, with v_{i,j} being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index v_k, with k = 1, 2, ..., N_v.
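
Continuing the example above, for λ_2 = 3 the eigenvalue equation

: \left(\mathbf{A} - 3\mathbf{I}\right)\mathbf{v} = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix}\mathbf{v} = 0

is solved by any multiple of v = (1, 1)^T, so m_2 = n_2 = 1; likewise, λ_1 = 1 yields v = (1, −1)^T.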

Eigendecomposition of a matrix

Let A be a square (N×N) matrix with N linearly independent eigenvectors q_i (i = 1, ..., N). Then A can be factorized as

: \mathbf{A} = \mathbf{Q} \mathbf{\Lambda} \mathbf{Q}^{-1}

where Q is the square (N×N) matrix whose ith column is the eigenvector q_i of A and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e. \Lambda_{ii} = \lambda_i.

The eigenvectors q_i (i = 1, ..., N) are usually normalized, but they need not be. A non-normalized set of eigenvectors v_i (i = 1, ..., N) can also be used as the columns of Q. That this is true can be understood by noting that the magnitude of the eigenvectors in Q gets canceled in the decomposition by the presence of Q^{-1}.
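
As a minimal numerical sketch of the factorization (assuming NumPy, which the article itself does not prescribe, and a hypothetical example matrix):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])                        # eigenvalues 2 and 5
    w, Q = np.linalg.eig(A)                           # eigenvalues in w; eigenvectors as columns of Q
    Lam = np.diag(w)                                  # the diagonal matrix Λ
    assert np.allclose(A, Q @ Lam @ np.linalg.inv(Q)) # A = Q Λ Q^{-1}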

Matrix inverse via eigendecomposition

If a matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is nonsingular and its inverse is given by

: \mathbf{A}^{-1} = \mathbf{Q} \mathbf{\Lambda}^{-1} \mathbf{Q}^{-1}.

Because Λ is a diagonal matrix, its inverse is easy to calculate:

: \left[\Lambda^{-1}\right]_{ii} = \frac{1}{\lambda_i}.
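
A sketch of this inverse (same assumptions and example matrix as above):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])                        # eigenvalues 2 and 5, both nonzero
    w, Q = np.linalg.eig(A)
    A_inv = Q @ np.diag(1.0 / w) @ np.linalg.inv(Q)   # Q Λ^{-1} Q^{-1}
    assert np.allclose(A_inv, np.linalg.inv(A))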

Functional calculus

The eigendecomposition allows for much easier computation of power series of matrices. If f(x) is given by

: f(x) = a_0 + a_1 x + a_2 x^2 + \cdots

then we know that

: f\left(\mathbf{A}\right) = \mathbf{Q} f\left(\mathbf{\Lambda}\right) \mathbf{Q}^{-1}.

Because Λ is a diagonal matrix, functions of Λ are very easy to calculate:

: \left[f\left(\mathbf{\Lambda}\right)\right]_{ii} = f\left(\lambda_i\right).

The off-diagonal elements of f(Λ) are zero; that is, f(Λ) is also a diagonal matrix. Therefore, calculating f(A) reduces to just calculating the function on each of the eigenvalues.
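
For instance, the matrix exponential can be computed this way. The sketch below (NumPy assumed, same hypothetical matrix as before) checks the eigendecomposition result against a truncated power series:

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    w, Q = np.linalg.eig(A)
    expA = Q @ np.diag(np.exp(w)) @ np.linalg.inv(Q)  # f(A) = Q f(Λ) Q^{-1} with f = exp

    # crude check against the series exp(A) = I + A + A^2/2! + ...
    S = np.eye(2)
    term = np.eye(2)
    for k in range(1, 30):
        term = term @ A / k
        S = S + term
    assert np.allclose(expA, S)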

A similar technique works more generally with the holomorphic functional calculus, using

: \mathbf{A}^{-1} = \mathbf{Q} \mathbf{\Lambda}^{-1} \mathbf{Q}^{-1}

from above. Once again, we find that

: \left[f\left(\mathbf{\Lambda}\right)\right]_{ii} = f\left(\lambda_i\right).

Decomposition for special matrices

Symmetric matrices

A real symmetric matrix always has N real eigenvectors that can be chosen to form an orthonormal basis: eigenvectors belonging to distinct eigenvalues are automatically orthogonal, and a basis of each eigenspace can be orthonormalized. Thus a real symmetric matrix A can be decomposed as

: \mathbf{A} = \mathbf{Q} \mathbf{\Lambda} \mathbf{Q}^{T}

where Q is an orthogonal matrix and Λ is real and diagonal.
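
A quick numerical illustration (NumPy assumed; numpy.linalg.eigh is the routine specialized for symmetric or Hermitian input):

    import numpy as np

    S = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    w, Q = np.linalg.eigh(S)                      # real eigenvalues; orthonormal columns in Q
    assert np.allclose(Q.T @ Q, np.eye(2))        # Q is orthogonal
    assert np.allclose(S, Q @ np.diag(w) @ Q.T)   # A = Q Λ Q^T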

Normal matrices

Similarly, a complex normal matrix (one satisfying A A^H = A^H A) has an orthonormal basis of eigenvectors and can be decomposed as

: \mathbf{A} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^{H}

where U is a unitary matrix. Further, if A is Hermitian, the diagonal matrix Λ has only real values, and if A is unitary, Λ takes all its values on the complex unit circle.
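
A sketch for the Hermitian case (NumPy assumed, hypothetical example matrix):

    import numpy as np

    H = np.array([[2.0, 1.0j],
                  [-1.0j, 2.0]])                       # Hermitian: equal to its conjugate transpose
    w, U = np.linalg.eigh(H)                           # eigh returns real eigenvalues for Hermitian input
    assert np.allclose(U.conj().T @ U, np.eye(2))      # U is unitary
    assert np.allclose(H, U @ np.diag(w) @ U.conj().T) # A = U Λ U^H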

Useful facts

Useful facts regarding eigenvalues

* The product of the eigenvalues is equal to the determinant of A: \det\left(\mathbf{A}\right) = \prod_{i=1}^{N_\lambda} \lambda_i^{n_i}. Note that each eigenvalue is raised to the power n_i, its algebraic multiplicity.
* The sum of the eigenvalues is equal to the trace of A: \operatorname{tr}\left(\mathbf{A}\right) = \sum_{i=1}^{N_\lambda} n_i \lambda_i. Note that each eigenvalue is multiplied by n_i, its algebraic multiplicity. (Both identities are checked numerically in the sketch after this list.)
* If the eigenvalues of A are λ_i, and A is invertible, then the eigenvalues of A^{-1} are simply λ_i^{-1}.
* If the eigenvalues of A are λ_i, then the eigenvalues of f(A) are simply f(λ_i), for any holomorphic function f.
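
A numerical check of the determinant and trace identities (NumPy assumed, hypothetical example matrix):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])                        # eigenvalues 2 and 5
    w, _ = np.linalg.eig(A)
    assert np.isclose(np.prod(w), np.linalg.det(A))   # 2 · 5 = det(A) = 10
    assert np.isclose(np.sum(w), np.trace(A))         # 2 + 5 = tr(A) = 7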

Useful facts regarding eigenvectors

* If A is (real) symmetric, then N_v = N and the eigenvectors can be chosen real, mutually orthogonal, and such that they provide a basis for \mathbb{R}^{N}.
* The eigenvectors of A^{-1} are the same as the eigenvectors of A.
* The eigenvectors of f(A) are the same as the eigenvectors of A.

Useful facts regarding eigendecomposition

* A can be eigendecomposed if and only if N_v = N.
* If p(λ) has no repeated roots, i.e. N_λ = N, then A can be eigendecomposed.
* The statement "A can be eigendecomposed" does not imply that A has an inverse: for example, diag(1, 0) is eigendecomposable but singular.
* The statement "A has an inverse" does not imply that A can be eigendecomposed: for example, the defective matrix [[1, 1], [0, 1]] is invertible but has only a one-dimensional eigenspace.

Useful facts regarding matrix inverse

* A can be inverted if and only if λ_i ≠ 0 for all i.
* If λ_i ≠ 0 for all i and N_v = N, the inverse is given by \mathbf{A}^{-1} = \mathbf{Q} \mathbf{\Lambda}^{-1} \mathbf{Q}^{-1}.

Numerical computations

Numerical computation of eigenvalues

Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.

In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: the Abel–Ruffini theorem implies that the roots of high-degree (5 and above) polynomials cannot in general be expressed simply using "n"th roots. Effective numerical algorithms for approximating roots of polynomials exist, but small errors in the eigenvalues can lead to large errors in the eigenvectors. Therefore, general algorithms to find eigenvectors and eigenvalues are iterative. The easiest method is the power method: a random vector v is chosen and a sequence of unit vectors is computed as

: \frac{\mathbf{A}\mathbf{v}}{\|\mathbf{A}\mathbf{v}\|}, \frac{\mathbf{A}^2\mathbf{v}}{\|\mathbf{A}^2\mathbf{v}\|}, \frac{\mathbf{A}^3\mathbf{v}}{\|\mathbf{A}^3\mathbf{v}\|}, \dots

This sequence will almost always converge to an eigenvector corresponding to the eigenvalue of greatest magnitude. This algorithm is simple, but not very useful by itself. However, popular methods such as the QR algorithm are based on it.
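
A compact sketch of the power method (NumPy assumed; power_method is a hypothetical helper, and the Rayleigh quotient used to read off the eigenvalue is an added convenience rather than part of the iteration itself):

    import numpy as np

    def power_method(A, iters=500, seed=0):
        """Estimate the dominant eigenpair of A by repeated multiplication."""
        v = np.random.default_rng(seed).normal(size=A.shape[0])
        for _ in range(iters):
            v = A @ v
            v /= np.linalg.norm(v)                # keep the iterate a unit vector
        lam = v @ A @ v                           # Rayleigh quotient ≈ dominant eigenvalue
        return lam, v

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    lam, v = power_method(A)
    print(lam)                                    # ≈ 5.0, the eigenvalue of greatest magnitude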

Numerical computation of eigenvectors

Once the eigenvalues are computed, the eigenvectors can be calculated by solving the equation

: \left(\mathbf{A} - \lambda_i \mathbf{I}\right) \mathbf{v}_{i,j} = 0

using Gaussian elimination or any other method for solving matrix equations.
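
One practical variant is sketched below (NumPy assumed; the SVD stands in for Gaussian elimination as the null-space solver, and eigenvector_for is a hypothetical helper):

    import numpy as np

    def eigenvector_for(A, lam):
        """Return a unit vector in the null space of (A - lam*I), via the SVD."""
        M = A - lam * np.eye(A.shape[0])
        _, _, Vt = np.linalg.svd(M)
        return Vt[-1]                 # right singular vector of the smallest singular value

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    v = eigenvector_for(A, 5.0)       # 5.0 is an eigenvalue of this A
    assert np.allclose(A @ v, 5.0 * v)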

Additional topics

Generalized eigenspaces

Recall that the geometric multiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, the nullspace of λI − A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), which is the nullspace of the matrix (λI − A)^k for any sufficiently large k. That is, it is the space of generalized eigenvectors (1st sense), where a generalized eigenvector is any vector which eventually becomes 0 if λI − A is applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity.
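
A concrete illustration (NumPy assumed) uses a 2×2 Jordan block, whose eigenvalue 2 has algebraic multiplicity 2 but geometric multiplicity 1:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 2.0]])
    M = 2.0 * np.eye(2) - A                 # λI − A for λ = 2
    print(np.linalg.matrix_rank(M))         # 1: the eigenspace is one-dimensional
    print(np.linalg.matrix_rank(M @ M))     # 0: (λI − A)^2 annihilates every vector,
                                            #    so the generalized eigenspace is all of R^2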

This usage should not be confused with the "generalized eigenvalue problem" described below.

Conjugate eigenvector

A conjugate eigenvector or coneigenvector is a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called the conjugate eigenvalue or coneigenvalue of the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation is

: \mathbf{A} \mathbf{v} = \lambda \mathbf{v}^*.

For example, in coherent electromagnetic scattering theory, the linear transformation "A" represents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics, the coordinate system is defined from the wave's viewpoint, known as the Forward Scattering Alignment (FSA), and gives rise to a regular eigenvalue equation, whereas in radar, the coordinate system is defined from the radar's viewpoint, known as the Back Scattering Alignment (BSA), and gives rise to a coneigenvalue equation.

Generalized eigenvalue problem

A generalized eigenvalue problem (2nd sense) is of the form

: \mathbf{A} \mathbf{v} = \lambda \mathbf{B} \mathbf{v}

where A and B are matrices. The generalized eigenvalues (2nd sense) λ can be obtained by solving the equation

: \det(\mathbf{A} - \lambda \mathbf{B}) = 0.

The set of matrices of the form \mathbf{A} - \lambda \mathbf{B}, where λ is a complex number, is called a pencil. If B is invertible, then the original problem can be written in the form

: \mathbf{B}^{-1} \mathbf{A} \mathbf{v} = \lambda \mathbf{v}

which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally.
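
A sketch of the direct approach (assuming SciPy is available; scipy.linalg.eig accepts the second matrix B directly):

    import numpy as np
    from scipy.linalg import eig

    A = np.array([[2.0, 0.0],
                  [0.0, 3.0]])
    B = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    w, V = eig(A, B)              # solves A v = λ B v without forming B^{-1} A explicitly
    print(np.sort(w.real))        # [1.5, 2.0]: the roots of det(A − λB) = 0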

See also

*Matrix decomposition
*List of matrices
*Eigenvalue, eigenvector and eigenspace
*Spectral theorem
