Schur decomposition
In the mathematical discipline of linear algebra, the **Schur decomposition** or **Schur triangulation** (named after Issai Schur) is an important matrix decomposition.

**Statement**

The Schur decomposition reads as follows: if "A" is an "n" × "n" square matrix with complex entries, then "A" can be expressed as

$A = Q U Q^{-1},$

where "Q" is a unitary matrix (so that its inverse "Q"^{−1} is also the conjugate transpose "Q"* of "Q"), and "U" is an upper triangular matrix, which is called a **Schur form** of "A". Since "U" is similar to "A", it has the same multiset of eigenvalues, and since it is triangular, those eigenvalues are the diagonal entries of "U".

The Schur decomposition implies that there exists a nested sequence of "A"-invariant subspaces {0} = "V"_{0} ⊂ "V"_{1} ⊂ ... ⊂ "V_{n}" = **C**^{"n"}, and that there exists an ordered orthonormal basis (for the standard Hermitian form of **C**^{"n"}) such that the first "i" basis vectors span "V"_{"i"} for each "i". Phrased somewhat differently, the first part says that an operator "T" on a complex finite-dimensional vector space stabilizes a complete flag ("V"_{1}, ..., "V_{n}").

**Proof**

A constructive proof for the Schur decomposition is as follows: every operator "A" on a complex finite-dimensional vector space has an eigenvalue "λ", corresponding to some eigenspace "V_{λ}". Let "V_{λ}"^{⊥} be its orthogonal complement. With respect to this orthogonal decomposition, "A" has the matrix representation (one can pick here any orthonormal bases spanning "V_{λ}" and "V_{λ}"^{⊥} respectively)

$A = \begin{bmatrix} \lambda I_{\lambda} & A_{12} \\ 0 & A_{22} \end{bmatrix} : \begin{matrix} V_{\lambda} \\ \oplus \\ V_{\lambda}^{\perp} \end{matrix} \rightarrow \begin{matrix} V_{\lambda} \\ \oplus \\ V_{\lambda}^{\perp} \end{matrix}$
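One step of this deflation can be exhibited numerically. The sketch below (an illustration, not how libraries actually compute the Schur form; they use the QR algorithm) takes a single eigenvector, so "V_{λ}" is one-dimensional, and uses a QR factorization merely as a convenient way to extend that eigenvector to an orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Pick one eigenpair (lambda, v) of A.
vals, vecs = np.linalg.eig(A)
lam, v = vals[0], vecs[:, 0]

# Extend v to an orthonormal basis of C^4: QR of a matrix whose first
# column is v (the other columns only need to make it invertible, which
# holds for generic v; the first column of Q then spans span(v)).
M = np.column_stack([v, np.eye(4, dtype=complex)[:, :3]])
Q, _ = np.linalg.qr(M)

# Change of basis: B has the block form [[lambda, A12], [0, A22]].
B = Q.conj().T @ A @ Q

print(np.allclose(B[1:, 0], 0))   # zero block below the eigenvalue: True
print(np.isclose(B[0, 0], lam))   # top-left entry is lambda: True
```

Repeating the same step on the trailing 3 × 3 block `B[1:, 1:]` (the operator "A"_{22}), and so on, triangularizes the whole matrix.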

where "I_{λ}" is the identity operator on "V_{λ}". The above matrix would be upper triangular except for the "A"_{22} block. But exactly the same procedure can be applied to the submatrix "A"_{22}, viewed as an operator on "V_{λ}"^{⊥}, and to its submatrices in turn. Continuing in this way until the space **C**^{"n"} is exhausted gives the desired result.

The above argument can be slightly restated as follows: let "λ" be an eigenvalue of "A", corresponding to some eigenspace "V_{λ}". "A" induces an operator "T" on the quotient space **C**^{"n"} modulo "V_{λ}". This operator is precisely the "A"_{22} submatrix from above. As before, "T" has an eigenspace, say "W_{μ}" ⊂ **C**^{"n"} modulo "V_{λ}". Notice that the preimage of "W_{μ}" under the quotient map is an invariant subspace of "A" that contains "V_{λ}". Continue this way until the resulting quotient space has dimension 0. Then the successive preimages of the eigenspaces found at each step form a flag that "A" stabilizes.

**Notes**

Although every square matrix has a Schur decomposition, in general this decomposition is not unique. For example, the eigenspace "V_{λ}" can have dimension > 1, in which case any orthonormal basis for "V_{λ}" would lead to the desired result.

Write the triangular matrix "U" as "U" = "D" + "N", where "D" is diagonal and "N" is strictly upper triangular (and thus nilpotent). The diagonal matrix "D" contains the eigenvalues of "A" in arbitrary order (hence its Frobenius norm, squared, is the sum of the squared moduli of the eigenvalues of "A", while the Frobenius norm of "A", squared, is the sum of the squared singular values of "A"). The nilpotent part "N" is generally not unique either, but its Frobenius norm is uniquely determined by "A" (because the Frobenius norm of "A" is equal to the Frobenius norm of "U" = "D" + "N").

If "A" is a normal matrix, then "U" from its Schur decomposition must be a diagonal matrix and the column vectors of "Q" are the eigenvectors of "A". Therefore, the Schur decomposition extends the spectral decomposition. In particular, if "A" is positive definite, the Schur decomposition of "A", its spectral decomposition, and its singular value decomposition coincide.

A commuting family {"A_{i}"} of matrices can be simultaneously triangularized, i.e. there exists a unitary matrix "Q" such that, for every "A_{i}" in the given family, "Q A_{i} Q"* is upper triangular. This can be readily deduced from the above proof. Take an element "A" from {"A_{i}"} and again consider an eigenspace "V_{A}". Then "V_{A}" is invariant under all matrices in {"A_{i}"}. Therefore all matrices in {"A_{i}"} must share one common eigenvector in "V_{A}". Induction then proves the claim. As a corollary, every commuting family of normal matrices can be simultaneously diagonalized.

In the infinite-dimensional setting, not every bounded operator on a Banach space has an invariant subspace. However, the upper triangularization of an arbitrary square matrix does generalize to compact operators: every compact operator on a complex Banach space has a nest of closed invariant subspaces.

**Applications**

Lie theory applications include:

* Every invertible operator is contained in a Borel group.

* Every operator fixes a point of the flag manifold.

**Generalized Schur decomposition**

Given square matrices "A" and "B", the **generalized Schur decomposition** factorizes both matrices as $A = QSZ^*$ and $B = QTZ^*$, where "Q" and "Z" are unitary, and "S" and "T" are upper triangular. The generalized Schur decomposition is also sometimes called the **QZ decomposition**. (See Golub and van Loan, 1996, sec. 7.7.)

The generalized eigenvalues $\lambda$ that solve the generalized eigenvalue problem $Ax = \lambda Bx$ (where "x" is an unknown nonzero vector) can be calculated as the ratio of the diagonal elements of "S" to those of "T". That is, using subscripts to denote matrix elements, the "i"th generalized eigenvalue $\lambda_i$ satisfies $\lambda_i = S_{ii}/T_{ii}$.

**See also**

* Matrix decomposition

**References**

* Roger A. Horn and Charles R. Johnson, "Matrix Analysis", Sections 2.3 and further, Cambridge University Press, 1985. ISBN 0-521-38632-2.

* Gene H. Golub and Charles F. van Loan, "Matrix Computations", 3rd ed., Section 7.7, Johns Hopkins University Press, 1996. ISBN 0-8018-5414-8.
