Minimal polynomial (linear algebra)

In linear algebra, the minimal polynomial μA of an n-by-n matrix A over a field F is the monic polynomial P over F of least degree such that P(A)=0. Any other polynomial Q with Q(A) = 0 is a (polynomial) multiple of μA.

The following three statements are equivalent:

  1. λ is a root of μA,
  2. λ is a root of the characteristic polynomial χA of A,
  3. λ is an eigenvalue of matrix A.

The multiplicity of a root λ of μA is the largest power m such that Ker((A − λIn)^m) strictly contains Ker((A − λIn)^(m−1)): increasing the exponent up to m gives ever larger kernels, while increasing it beyond m just gives the same kernel again.
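
As an illustration of this kernel-growth criterion, here is a minimal numerical sketch (assuming NumPy is available) for a hypothetical 3-by-3 matrix with the single eigenvalue 3 whose largest Jordan block has size 2, so that the multiplicity of the root 3 in μA should come out as 2.

  import numpy as np

  # Hypothetical matrix: eigenvalue 3 with Jordan blocks of sizes 2 and 1, so the
  # multiplicity of the root 3 in the minimal polynomial should be 2.
  A = np.array([[3.0, 1.0, 0.0],
                [0.0, 3.0, 0.0],
                [0.0, 0.0, 3.0]])
  lam, n = 3.0, A.shape[0]
  for m in range(4):
      B = np.linalg.matrix_power(A - lam * np.eye(n), m)
      print(m, n - np.linalg.matrix_rank(B))   # kernel dimensions: 0, 2, 3, 3
  # The kernel grows up to m = 2 and is stable afterwards, so the multiplicity is 2.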

If the field F is not algebraically closed, then the minimal and characteristic polynomials need not factor according to their roots (in F) alone; in other words, they may have irreducible polynomial factors of degree greater than 1. For such a factor P one has similar equivalences (a concrete sketch follows the list):

  1. P divides μA,
  2. P divides χA,
  3. the kernel of P(A) has dimension at least 1.
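
For a concrete case of an irreducible factor of degree greater than 1, the following sketch (assuming SymPy is available) uses a 90-degree rotation of the rational plane: its characteristic polynomial X^2 + 1 is irreducible over the rationals, it coincides with the minimal polynomial, and its evaluation at the matrix is zero, so the kernel of P(A) is the whole space.

  from sympy import Matrix, symbols

  x = symbols('x')
  A = Matrix([[0, -1],
              [1,  0]])               # rotation by 90 degrees
  print(A.charpoly(x).as_expr())      # x**2 + 1, irreducible over the rationals
  print(A**2 + Matrix.eye(2))         # zero matrix, so P(A) = 0 and Ker P(A) is everything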

Like the characteristic polynomial, the minimal polynomial does not depend on the base field; in other words, considering the matrix as one with coefficients in a larger field does not change the minimal polynomial. The reason differs somewhat from that for the characteristic polynomial (where it is immediate from the definition of determinants): the minimal polynomial is determined by the relations of linear dependence between the powers of A, and extending the base field neither introduces any new such relations nor, of course, removes existing ones.

The minimal polynomial is often the same as the characteristic polynomial, but not always. For example, if A is a multiple aIn of the identity matrix, then its minimal polynomial is X − a, since the kernel of aIn − A = 0 is already the entire space; on the other hand its characteristic polynomial is (X − a)^n (the only eigenvalue is a, and the degree of the characteristic polynomial is always equal to the dimension of the space). The minimal polynomial always divides the characteristic polynomial, which is one way of formulating the Cayley–Hamilton theorem.
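
This contrast can be checked directly; the following sketch (assuming SymPy is available) takes a = 2 and n = 3.

  from sympy import eye, factor, symbols

  x = symbols('x')
  a, n = 2, 3
  A = a * eye(n)                              # the scalar matrix a*I_n
  print(factor(A.charpoly(x).as_expr()))      # (x - 2)**3, the characteristic polynomial
  print(A - a * eye(n))                       # zero matrix: X - a already annihilates A,
                                              # so the minimal polynomial is just X - a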

Formal definition

Given an endomorphism T on a finite-dimensional vector space V over a field F, let IT be the set defined as

 \mathit{I}_T = \{ p \in \mathbf{F}[t] \; | \; p(T) = 0 \}

where F[t] is the space of all polynomials over the field F. One checks easily that IT is an ideal of F[t]; it is proper (the constant polynomial 1 does not annihilate T) and nonzero, since the powers of T cannot all be linearly independent in the finite-dimensional space of endomorphisms of V, so some nonzero polynomial annihilates T.

Since F[t] is a principal ideal domain, IT is generated by a single monic polynomial. This generator is the minimal polynomial μT of T, and it must be the monic polynomial of least degree in IT.
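
Membership in IT is a concrete computation: evaluate the polynomial at the endomorphism and test whether the result is zero. A small sketch follows (assuming NumPy is available; poly_at_matrix is a hypothetical helper, not a library function).

  import numpy as np

  def poly_at_matrix(coeffs, T):
      # Evaluate p(T) by Horner's rule; coeffs are listed from highest degree down.
      n = T.shape[0]
      result = np.zeros((n, n))
      for c in coeffs:
          result = result @ T + c * np.eye(n)
      return result

  # p(t) = t**2 - 3*t + 2 applied to an endomorphism with eigenvalues 1 and 2:
  T = np.diag([1.0, 2.0])
  print(poly_at_matrix([1.0, -3.0, 2.0], T))   # zero matrix, so p belongs to I_T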

Applications

An endomorphism φ of a finite-dimensional vector space over a field F is diagonalizable if and only if its minimal polynomial factors completely over F into distinct linear factors. The fact that there is only one factor X − λ for every eigenvalue λ means that the generalized eigenspace for λ is the same as the eigenspace for λ: every Jordan block has size 1. More generally, if φ satisfies a polynomial equation P(φ) = 0 where P factors into distinct linear factors over F, then it will be diagonalizable: its minimal polynomial is a divisor of P and therefore also factors into distinct linear factors. In particular one has:

  • P = X^k − 1: finite order endomorphisms of complex vector spaces are diagonalizable. For the special case of involutions, this is even true for endomorphisms of vector spaces over any field of characteristic other than 2, since X^2 − 1 = (X − 1)(X + 1) is a factorization into distinct factors over such a field. This is part of the representation theory of cyclic groups.
  • P = X^2 − X = X(X − 1): endomorphisms satisfying φ^2 = φ are called projections, and are always diagonalizable (moreover their only eigenvalues are 0 and 1); a numerical sketch follows below.
  • By contrast, if μφ = X^k with k ≥ 2 then φ (a nilpotent endomorphism) is not diagonalizable, since X^k has the repeated root 0.

These cases can also be proved directly, but the minimal polynomial gives a unified perspective and proof.
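
For instance, the projection case can be checked numerically. The sketch below (assuming NumPy is available) uses a hypothetical non-orthogonal projection of the plane onto the x-axis along the line y = x.

  import numpy as np

  # Hypothetical projection of the plane onto the x-axis along the line y = x.
  P = np.array([[1.0, -1.0],
                [0.0,  0.0]])
  print(np.allclose(P @ P, P))            # True: P satisfies X**2 - X = X(X - 1)
  print(np.sort(np.linalg.eigvals(P)))    # [0. 1.]: the only eigenvalues are 0 and 1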

Computation

Let IT,v be defined as

 \mathit{I}_{T, v} = \{ p \in \mathbf{F}[t] \; | \; p(T)(v) = 0 \}.

This set is, like IT, a proper ideal of F[t] (for nonzero v). Let μT,v be the monic polynomial which generates it.

Properties

  • Since IT,v contains the minimal polynomial μT, the latter is divisible by μT,v.
  • If d is the least natural number such that v, T(v), ... , T^d(v) are linearly dependent, then there exist unique a_0, a_1, \cdots, a_{d-1} \in \mathbf{F} such that
     a_0 v + a_1 T(v) + \cdots + a_{d-1} T^{d-1} (v) + T^d (v) = 0
    and for these coefficients one has  \mu_{T,v} (t) = a_0  + a_1 t + \ldots + a_{d-1} t^{d-1} + t^d.
  • Let the subspace V be the image of μT,v(T), which is T-stable. Since μT,v(T) annihilates at least the d linearly independent vectors v, T(v), ... , T^{d−1}(v), its kernel has dimension at least d, so by rank–nullity the codimension of its image V is at least d.
  • The minimal polynomial μT is the product of μT,v and the minimal polynomial Q of the restriction of T to V. In the (likely) case that V has dimension 0 one has Q = 1 and therefore μT = μT,v; otherwise a recursive computation of Q suffices to find μT. A sketch of the full procedure follows this list.
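
The procedure described above can be turned into a short exact-arithmetic sketch (assuming SymPy is available; the function names are illustrative, not a library API). It computes μT,v from the first linear dependence among the iterated images of v, forms V as the column space of μT,v(T), and recurses on the restriction of T to V.

  import sympy as sp

  x = sp.symbols('x')

  def poly_at_matrix(p, A):
      # Evaluate the polynomial p (a sympy Poly in x) at the square matrix A by Horner's rule.
      n = A.shape[0]
      R = sp.zeros(n, n)
      for c in p.all_coeffs():                 # coefficients, highest degree first
          R = R * A + c * sp.eye(n)
      return R

  def mu_T_v(A, v):
      # Monic generator of I_{T,v}: stop at the first power T^d(v) that is a linear
      # combination of v, T(v), ..., T^{d-1}(v).
      K, w = sp.Matrix(v), A * v
      while sp.Matrix.hstack(K, w).rank() > K.rank():
          K, w = sp.Matrix.hstack(K, w), A * w
      d = K.shape[1]
      # Solve K * a = -T^d(v); K has full column rank, so the normal equations are exact here.
      a = (K.T * K).inv() * K.T * (-w)
      return sp.Poly([1] + [a[i] for i in reversed(range(d))], x)

  def minimal_polynomial(A):
      # mu_T = mu_{T,v} * (minimal polynomial of T restricted to V = image of mu_{T,v}(T)).
      v = sp.eye(A.shape[0])[:, 0]             # any nonzero vector works; take e_1
      p = mu_T_v(A, v)
      basis = poly_at_matrix(p, A).columnspace()
      if not basis:                            # V = {0}: mu_T = mu_{T,v}
          return p
      B = sp.Matrix.hstack(*basis)
      A_on_V = (B.T * B).inv() * B.T * (A * B) # matrix of the restriction of T to V
      return p * minimal_polynomial(A_on_V)

Applied to the matrix of the example below, sp.Matrix([[1, -1, -1], [1, -2, 1], [0, 1, -3]]), this sketch returns the polynomial x**3 + 4*x**2 + x - 1, in agreement with the computation carried out by hand.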

Example

Define T to be the endomorphism of R3 whose matrix on the canonical basis is

\begin{bmatrix} 1 & -1 & -1 \\ 1 & -2 & 1 \\ 0 & 1 & -3 \end{bmatrix}.

Taking the first canonical basis vector e1 and its repeated images by T one obtains

  e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad
  T\cdot e_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad
T^2\cdot e_1 = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix} \mbox{ and}\quad
T^3\cdot e_1=\begin{bmatrix} 0 \\ 3 \\ -4 \end{bmatrix}

of which the first three are easily seen to be linearly independent, and therefore span all of R3. The last one then necessarily is a linear combination of the first three; in fact T^3\cdot e_1 = -4T^2\cdot e_1 -T\cdot e_1 + e_1, so that \mu_{T,e_1}=X^3+4X^2+X-1. This is in fact also the minimal polynomial μT and the characteristic polynomial χT: indeed \mu_{T,e_1} divides μT, which divides χT, and since the first and last are of degree 3 and all are monic, they must all be the same. Another reason is that, in general, if a polynomial in T annihilates a vector v, then it also annihilates T⋅v (just apply T to the equation that says that it annihilates v), and therefore, by iteration, it annihilates the entire space generated by the iterated images of v under T; in the current case we have seen that for v = e1 this space is all of R3, so \mu_{T,e_1}(T)=0. Indeed one verifies for the full matrix that T^3 + 4T^2 + T − I3 is the null matrix:

\begin{bmatrix}  0 & 1 & -3 \\ 3 & -13 & 23 \\ -4 & 19 & -36 \end{bmatrix}
 +4\begin{bmatrix} 0 & 0 & 1 \\ -1 & 4 & -6 \\ 1 & -5 & 10 \end{bmatrix}
 +\begin{bmatrix} 1 & -1 & -1 \\ 1 & -2 & 1 \\ 0 & 1 & -3 \end{bmatrix}
 +\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}
 =\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
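
The same verification can be scripted. The following sketch (assuming NumPy is available) recovers the coefficients of μT,e1 by solving the linear system given by the first three images of e1, and then checks that the resulting polynomial annihilates T.

  import numpy as np

  T = np.array([[1.0, -1.0, -1.0],
                [1.0, -2.0,  1.0],
                [0.0,  1.0, -3.0]])
  e1 = np.array([1.0, 0.0, 0.0])

  # The coefficients a_0, a_1, a_2 of mu_{T,e1} solve  [e1 | T e1 | T^2 e1] a = -T^3 e1.
  K = np.column_stack([e1, T @ e1, T @ T @ e1])
  a = np.linalg.solve(K, -np.linalg.matrix_power(T, 3) @ e1)
  print(a)                                  # [-1.  1.  4.]  ->  X^3 + 4 X^2 + X - 1
  Z = np.linalg.matrix_power(T, 3) + 4 * np.linalg.matrix_power(T, 2) + T - np.eye(3)
  print(np.allclose(Z, 0))                  # True: the polynomial annihilates T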
