Square root of a matrix

In mathematics, the square root of a matrix extends the notion of square root from numbers to matrices. A matrix B is said to be a square root of A if the matrix product B · B is equal to A.[1]

Properties

In general, a matrix can have many square roots. For example, the matrix \bigl(\begin{smallmatrix}33&24\\48&57\end{smallmatrix}\bigr) has square roots \bigl(\begin{smallmatrix}1&4\\8&5\end{smallmatrix}\bigr) and \bigl(\begin{smallmatrix}5&2\\4&7\end{smallmatrix}\bigr), as well as their additive inverses. Another example is the 2×2 identity matrix \bigl(\begin{smallmatrix}1&0\\0&1\end{smallmatrix}\bigr), which has an infinitude of symmetric rational square roots given by \tfrac{1}{t}\bigl(\begin{smallmatrix}\mp s&\mp r\\\mp r&\pm s\end{smallmatrix}\bigr), \tfrac{1}{t}\bigl(\begin{smallmatrix}\pm s&\mp r\\\mp r&\mp s\end{smallmatrix}\bigr), \tfrac{1}{t}\bigl(\begin{smallmatrix}\mp r&\mp s\\\mp s&\pm r\end{smallmatrix}\bigr), \tfrac{1}{t}\bigl(\begin{smallmatrix}\pm r&\mp s\\\mp s&\mp r\end{smallmatrix}\bigr), \bigl(\begin{smallmatrix}1&0\\0&\pm 1\end{smallmatrix}\bigr), and \bigl(\begin{smallmatrix}\pm 1&0\\0&1\end{smallmatrix}\bigr), where (r, s, t) is any Pythagorean triple, that is, any set of positive integers such that r^2 + s^2 = t^2.[2]

However, a positive-definite matrix has precisely one positive-definite square root, which can be called its principal square root.

While the square root of a nonnegative integer is either again an integer or an irrational number, an integer matrix can have a square root whose entries are rational, yet not integral. For example, the matrix \bigl(\begin{smallmatrix}~\;0&4\\-1&5\end{smallmatrix}\bigr) has the square root \bigl(\begin{smallmatrix}~\;2/3&4/3\\-1/3&7/3\end{smallmatrix}\bigr), as well as a square root that is an integer matrix: \bigl(\begin{smallmatrix}2&-4\\1&-3\end{smallmatrix}\bigr). The other two square roots are the additive inverses of these. (In general, a 2×2 matrix with two distinct eigenvalues has four square roots.) The 2×2 identity matrix discussed above provides another example.
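These examples can be checked numerically. The following is a minimal sketch in Python with NumPy (an illustrative choice, not part of the source material), verifying the square roots quoted above together with the Pythagorean-triple construction for (r, s, t) = (3, 4, 5):

    import numpy as np

    A = np.array([[33, 24], [48, 57]])
    B1 = np.array([[1, 4], [8, 5]])
    B2 = np.array([[5, 2], [4, 7]])
    assert np.array_equal(B1 @ B1, A)  # B1 is a square root of A
    assert np.array_equal(B2 @ B2, A)  # so is B2 (and -B1, -B2)

    # The triple (r, s, t) = (3, 4, 5) gives a rational square root of I:
    R = np.array([[3, 4], [4, -3]]) / 5
    assert np.allclose(R @ R, np.eye(2))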

Just as with the real numbers, a real matrix may fail to have a real square root, yet still have a square root with complex-valued entries.

Computation methods

By diagonalization

A square root of a diagonal matrix D is again a diagonal matrix, formed by taking a square root of each of the entries on the diagonal. This suggests the following methods for general matrices:

An n × n matrix A is diagonalizable if there is a matrix V such that D = V^{-1}AV is a diagonal matrix. This happens if and only if A has n eigenvectors which constitute a basis for C^n; in this case, V can be chosen to be the matrix with these n eigenvectors as columns.

Now A = VDV^{-1}, and hence a square root of A is

 A^{1/2} = V D^{1/2} V^{-1},

where D^{1/2} is a square root of D. That this technique yields a matrix square root is evident by squaring:

(V D^{1/2} V^{-1})^2 = V D^{1/2} (V^{-1} V) D^{1/2} V^{-1} = V D V^{-1} = A.

Example.

The matrix A = \bigl(\begin{smallmatrix}33&24\\48&57\end{smallmatrix}\bigr) can be diagonalized as VDV^{-1}, where V = \bigl(\begin{smallmatrix}1&~\;1\\2&-1\end{smallmatrix}\bigr) and D = \bigl(\begin{smallmatrix}81&0\\~\;0&9\end{smallmatrix}\bigr).

D has the principal square root D^{1/2} = \bigl(\begin{smallmatrix}9&0\\0&3\end{smallmatrix}\bigr), giving the square root A^{1/2} = V D^{1/2} V^{-1} = \bigl(\begin{smallmatrix}5&2\\4&7\end{smallmatrix}\bigr).
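This computation can be reproduced numerically. Below is a minimal NumPy sketch of the diagonalization method (it assumes A is diagonalizable and takes principal square roots of the eigenvalues; np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors):

    import numpy as np

    def sqrtm_diag(A):
        # Square root of a diagonalizable matrix via A^{1/2} = V D^{1/2} V^{-1}.
        # Eigenvalues are cast to complex so negative ones get principal roots.
        eigvals, V = np.linalg.eig(A)
        D_half = np.diag(np.sqrt(eigvals.astype(complex)))
        return V @ D_half @ np.linalg.inv(V)

    A = np.array([[33.0, 24.0], [48.0, 57.0]])
    root = sqrtm_diag(A)                  # approximately [[5, 2], [4, 7]]
    print(np.allclose(root @ root, A))    # True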

This approach works only for diagonalizable matrices. For non-diagonalizable matrices one can calculate the Jordan normal form followed by a series expansion, similar to the approach described in logarithm of a matrix.

By Denman–Beavers iteration

Another way to find a square root of an n × n matrix A is the Denman–Beavers square root iteration. Let Y_0 = A and Z_0 = I, where I is the n × n identity matrix. The iteration is defined by

 \begin{align}
Y_{k+1} &= \tfrac12 (Y_k + Z_k^{-1}), \\ 
Z_{k+1} &= \tfrac12 (Z_k + Y_k^{-1}). 
\end{align}

Convergence is not guaranteed, even for matrices that do have square roots, but if the process converges, the matrix Y_k converges quadratically to a square root A^{1/2}, while Z_k converges to its inverse, A^{-1/2} (Denman & Beavers 1976; Cheng et al. 2001).
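A direct transcription of the iteration into NumPy might look as follows (a sketch; the iteration cap and tolerance are arbitrary illustrative choices, and the convergence caveat above still applies):

    import numpy as np

    def denman_beavers(A, iters=50, tol=1e-12):
        # Y_k -> A^{1/2} and Z_k -> A^{-1/2} when the iteration converges.
        Y = np.array(A, dtype=float)
        Z = np.eye(A.shape[0])
        for _ in range(iters):
            # Tuple assignment updates Y and Z simultaneously.
            Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
            if np.linalg.norm(Y @ Y - A) < tol:
                break
        return Y, Z

    A = np.array([[33.0, 24.0], [48.0, 57.0]])
    Y, Z = denman_beavers(A)
    print(np.allclose(Y @ Y, A))            # Y approximates A^{1/2}
    print(np.allclose(Y @ Z, np.eye(2)))    # Z approximates A^{-1/2}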

By the Babylonian method

Yet another iterative method is obtained by taking the well-known formula of the Babylonian method for computing the square root of a real number and applying it to matrices. Let X_0 = I, where I is the identity matrix. The iteration is defined by

X_{k+1} = \tfrac12 (X_k + A X_k^{-1}).

Again, convergence is not guaranteed, but if the process converges, the matrix X_k converges quadratically to a square root A^{1/2}. Compared to Denman–Beavers iteration, an advantage of the Babylonian method is that only one matrix inverse need be computed per iteration step. However, unlike Denman–Beavers iteration, the Babylonian method is numerically unstable and more likely to fail to converge.[1]
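In NumPy the iteration is a few lines (a sketch under the same caveats; note the single inverse per step):

    import numpy as np

    def babylonian_sqrtm(A, iters=50):
        # X_{k+1} = (X_k + A X_k^{-1}) / 2, starting from X_0 = I.
        X = np.eye(A.shape[0])
        for _ in range(iters):
            X = 0.5 * (X + A @ np.linalg.inv(X))
        return X

    A = np.array([[33.0, 24.0], [48.0, 57.0]])
    X = babylonian_sqrtm(A)
    print(np.allclose(X @ X, A))    # True when the iteration converges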

Square roots of positive operators

In linear algebra and operator theory, given a bounded positive semidefinite operator (a non-negative operator) T on a complex Hilbert space, B is a square root of T if T = B* B, where B* denotes the Hermitian adjoint of B. According to the spectral theorem, the continuous functional calculus can be applied to obtain an operator T^{1/2} such that T^{1/2} is itself positive and (T^{1/2})^2 = T. The operator T^{1/2} is the unique non-negative square root of T.

A bounded non-negative operator on a complex Hilbert space is self-adjoint by definition, so T = (T^{1/2})* T^{1/2}. Conversely, it is trivially true that every operator of the form B* B is non-negative. Therefore, an operator T is non-negative if and only if T = B* B for some B (equivalently, T = CC* for some C).

The Cholesky factorization provides another particular example of square root, which should not be confused with the unique non-negative square root.
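For a concrete positive-definite matrix, both kinds of factor can be computed with NumPy (a sketch; np.linalg.cholesky returns the lower-triangular factor, and the unique non-negative square root is formed from the spectral decomposition):

    import numpy as np

    T = np.array([[2.0, 1.0], [1.0, 2.0]])    # positive definite

    L = np.linalg.cholesky(T)                 # lower triangular, L L* = T

    w, Q = np.linalg.eigh(T)                  # spectral decomposition of T
    T_half = Q @ np.diag(np.sqrt(w)) @ Q.T    # unique non-negative square root

    print(np.allclose(L @ L.T, T))            # True
    print(np.allclose(T_half @ T_half, T))    # True; T_half is also symmetric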

Unitary freedom of square roots

If T is a non-negative operator on a finite-dimensional Hilbert space, then all square roots of T are related by unitary transformations. More precisely, if T = AA* = BB*, then there exists a unitary U such that A = BU.

Indeed, take B = T^{1/2} to be the unique non-negative square root of T. If T is strictly positive, then B is invertible, and so U = B^{-1}A is unitary:


\begin{align}
U^*U &= \left(A^*(B^*)^{-1}\right)\left(B^{-1}A\right) = A^* T^{-1} A \\
&= A^*\left((A^*)^{-1}A^{-1}\right)A = I.
\end{align}

If T is non-negative without being strictly positive, then the inverse of B cannot be defined, but the Moore–Penrose pseudoinverse B^{+} can be. In that case, the operator B^{+}A is a partial isometry, that is, a unitary operator from the range of T to itself. This can then be extended to a unitary operator U on the whole space by setting it equal to the identity on the kernel of T. More generally, this is true on an infinite-dimensional Hilbert space if, in addition, T has closed range. In general, if A, B are closed and densely defined operators on a Hilbert space H, and A* A = B* B, then A = UB, where U is a partial isometry.
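In the strictly positive, finite-dimensional case the relating unitary can be computed explicitly. A sketch continuing the example above, with B the non-negative square root and A the Cholesky factor of the same T:

    import numpy as np

    T = np.array([[2.0, 1.0], [1.0, 2.0]])

    w, Q = np.linalg.eigh(T)
    B = Q @ np.diag(np.sqrt(w)) @ Q.T         # non-negative square root of T
    A = np.linalg.cholesky(T)                 # another factor: A A* = T

    U = np.linalg.inv(B) @ A                  # U = B^{-1} A
    print(np.allclose(U @ U.T, np.eye(2)))    # U is unitary (real orthogonal)
    print(np.allclose(A, B @ U))              # A = B U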

Some applications

Square roots, and the unitary freedom of square roots, have applications throughout functional analysis and linear algebra.

Polar decomposition

If A is an invertible operator on a finite-dimensional Hilbert space, then there is a unique unitary operator U and positive operator P such that

A = UP;

this is the polar decomposition of A. The positive operator P is the unique positive square root of the positive operator A* A, and U is defined by U = AP^{-1}.

If A is not invertible, then it still has a polar decomposition in which P is defined in the same way (and is unique). The unitary operator U is not unique; rather, it is possible to determine a "natural" unitary operator as follows: AP^{+} is a unitary operator from the range of A to itself, which can be extended by the identity on the kernel of A. The resulting unitary operator U then yields the polar decomposition of A.
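For an invertible matrix this construction can be carried out directly (a NumPy sketch; in numerical practice an SVD-based routine would be the usual choice):

    import numpy as np

    def polar(A):
        # Polar decomposition A = U P for invertible A, with P = (A* A)^{1/2}.
        w, Q = np.linalg.eigh(A.conj().T @ A)       # A* A is positive definite
        P = Q @ np.diag(np.sqrt(w)) @ Q.conj().T    # unique positive square root
        return A @ np.linalg.inv(P), P              # U = A P^{-1}

    A = np.array([[1.0, 2.0], [0.0, 3.0]])
    U, P = polar(A)
    print(np.allclose(U @ P, A))                    # A = U P
    print(np.allclose(U @ U.conj().T, np.eye(2)))   # U is unitary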

Kraus operators

By Choi's result, a linear map

\Phi : C^{n \times n} \rightarrow C^{m \times m}

is completely positive if and only if it is of the form

\Phi(A) = \sum_{i=1}^k V_i A V_i^*

where k ≤ nm. Let {E_{pq}} ⊂ C^{n \times n} be the n^2 elementary matrix units. The positive matrix

M_{\Phi} = ( \Phi (E_{pq}) )_{pq} \in C^{nm \times nm}

is called the Choi matrix of Φ. The Kraus operators correspond to the (not necessarily square) square roots of M_Φ: for any square root B of M_Φ, one can obtain a family of Kraus operators V_i by undoing the Vec operation on each column b_i of B. Thus all sets of Kraus operators are related by partial isometries.
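A small sketch of this recipe for an assumed example map, the dephasing map Φ(A) = ½(A + ZAZ) on 2×2 matrices (the map itself is only an illustrative choice):

    import numpy as np

    n = m = 2
    Z = np.diag([1.0, -1.0])
    def Phi(A):
        return 0.5 * (A + Z @ A @ Z)    # a completely positive map

    # Assemble the Choi matrix M = (Phi(E_pq))_pq from the matrix units E_pq.
    M = np.zeros((n * m, n * m))
    for p in range(n):
        for q in range(n):
            E = np.zeros((n, n))
            E[p, q] = 1.0
            M[p * m:(p + 1) * m, q * m:(q + 1) * m] = Phi(E)

    # One square root B of M (here via the eigendecomposition, M = B B*).
    w, Q = np.linalg.eigh(M)
    B = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None)))

    # Undo the (column-stacking) Vec on each column of B to get Kraus operators.
    kraus = [B[:, i].reshape((m, n), order='F') for i in range(B.shape[1])]

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(np.allclose(sum(V @ A @ V.conj().T for V in kraus), Phi(A)))    # True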

Mixed ensembles

In quantum physics, a density matrix for an n-level quantum system is an n × n complex matrix ρ that is positive semidefinite with trace 1. If ρ can be expressed as

\rho = \sum_i p_i v_i v_i^*

where ∑_i p_i = 1, the set

\{p_i, v_i\}

is said to be an ensemble that describes the mixed state ρ. Notice that {v_i} is not required to be orthogonal. Different ensembles describing the state ρ are related by unitary operators, via the square roots of ρ. For instance, suppose

\rho = \sum_j a_j a_j^*.

The trace 1 condition means

\sum_j a_j^* a_j = 1.

Let

p_i = a_i^* a_i,

and let v_i be the normalized a_i. We see that

\{p_i, v_i\}

gives the mixed state ρ.
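A sketch of this extraction in NumPy, for an assumed 2×2 density matrix, taking the vectors a_j to be the columns of one square root of ρ:

    import numpy as np

    rho = np.array([[0.75, 0.25], [0.25, 0.25]])    # trace 1, positive semidefinite

    w, Q = np.linalg.eigh(rho)
    B = Q @ np.diag(np.sqrt(w))        # rho = B B*, columns of B are the a_j

    p, v = [], []
    for j in range(B.shape[1]):
        a_j = B[:, j]
        p_j = np.vdot(a_j, a_j).real   # p_j = a_j^* a_j
        if p_j > 0:
            p.append(p_j)
            v.append(a_j / np.sqrt(p_j))

    print(np.isclose(sum(p), 1.0))     # the trace-1 condition
    rho_back = sum(p_j * np.outer(v_j, v_j.conj()) for p_j, v_j in zip(p, v))
    print(np.allclose(rho_back, rho))  # the ensemble reproduces rho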

Notes

  1. Higham, Nicholas J. (April 1986). "Newton's Method for the Matrix Square Root". Mathematics of Computation 46 (174): 537–549. doi:10.2307/2007992. http://www.ams.org/journals/mcom/1986-46-174/S0025-5718-1986-0829624-5/S0025-5718-1986-0829624-5.pdf.
  2. Mitchell, Douglas W. (November 2003). "Using Pythagorean triples to generate square roots of I_2". The Mathematical Gazette 87: 499–500.
