Covariance

In probability theory and statistics, covariance is a measure of how much two random variables change together. Variance is the special case of covariance in which the two variables are identical.

Definition

The covariance between two real-valued random variables X and Y with finite second moments is


\operatorname{Cov}(X,Y) = \operatorname{E}\big[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\big]

where E[X] is the expected value of X. Using the linearity of expectation, this can be simplified to


\operatorname{Cov}(X,Y) = \operatorname{E}\big[X Y\big] - \operatorname{E}[X]\operatorname{E}[Y]
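
This identity is straightforward to check numerically. The sketch below (a minimal illustration assuming NumPy; the data are synthetic and chosen only so that the true covariance is known) estimates both forms with sample means:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 2.0 * x + rng.normal(size=100_000)               # correlated with x by construction

cov_def = np.mean((x - x.mean()) * (y - y.mean()))   # E[(X - E[X])(Y - E[Y])]
cov_alt = np.mean(x * y) - x.mean() * y.mean()       # E[XY] - E[X]E[Y]

print(cov_def, cov_alt)                              # both approximate the true value 2

The two estimates agree to floating-point precision because, for sample means, the two forms are algebraically identical.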

For random vectors X and Y (of dimension m and n respectively) the m×n covariance matrix is equal to


\begin{align}
    \operatorname{Cov}(X,Y) 
               & = \operatorname{E}\left[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])^T\right]\\
               & = \operatorname{E}\left[X Y^T\right] - \operatorname{E}[X]\operatorname{E}[Y]^T
\end{align}

where M^T denotes the transpose of a matrix (or vector) M.

The (i,j)-th element of this matrix is equal to the covariance Cov(X_i, Y_j) between the i-th scalar component of X and the j-th scalar component of Y. In particular, Cov(Y, X) is the transpose of Cov(X, Y).
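
As a concrete illustration, the following sketch (assuming NumPy; the dimensions and the coupling between X and Y are arbitrary) estimates the m×n cross-covariance matrix from N joint samples and checks the transpose relation:

import numpy as np

rng = np.random.default_rng(1)
N, m, n = 10_000, 3, 2
X = rng.normal(size=(N, m))                     # N samples of an m-dimensional X
Y = X[:, :n] + 0.1 * rng.normal(size=(N, n))    # an n-dimensional Y coupled to X

Xc = X - X.mean(axis=0)                         # center each component
Yc = Y - Y.mean(axis=0)

cov_xy = Xc.T @ Yc / (N - 1)                    # (i, j) entry estimates Cov(X_i, Y_j)
cov_yx = Yc.T @ Xc / (N - 1)

print(np.allclose(cov_xy, cov_yx.T))            # True: Cov(Y, X) = Cov(X, Y)^T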

Random variables whose covariance is zero are called uncorrelated.

The units of measurement of the covariance Cov(X, Y) are those of X times those of Y. By contrast, correlation, which depends on the covariance, is a dimensionless measure of linear dependence.

Properties

If X, Y, W, and V are real-valued random variables and a, b, c, d are constant ("constant" in this context means non-random), then the following facts are a consequence of the definition of covariance:


\begin{align}
    \operatorname{Cov}(X, a) &= 0 \\
    \operatorname{Cov}(X, X) &= \operatorname{Var}(X) \\
    \operatorname{Cov}(X, Y) &= \operatorname{Cov}(Y, X) \\
    \operatorname{Cov}(aX, bY) &= ab\, \operatorname{Cov}(X, Y) \\
    \operatorname{Cov}(X+a, Y+b) &= \operatorname{Cov}(X, Y) \\ 
    \operatorname{Cov}(aX+bY, cW+dV) &= ac\,\operatorname{Cov}(X,W)+ad\,\operatorname{Cov}(X,V)+bc\,\operatorname{Cov}(Y,W)+bd\,\operatorname{Cov}(Y,V)
\end{align}
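
The sample covariance obeys the same algebra exactly (up to floating-point rounding), so these identities can be spot-checked directly. A minimal sketch, assuming NumPy:

import numpy as np

rng = np.random.default_rng(2)
x, y = rng.normal(size=1000), rng.normal(size=1000)
a, b = 3.0, -2.0

def cov(u, v):
    # sample covariance with the 1/(N-1) normalization
    return np.cov(u, v)[0, 1]

print(np.isclose(cov(a * x, b * y), a * b * cov(x, y)))   # Cov(aX, bY) = ab Cov(X, Y)
print(np.isclose(cov(x + a, y + b), cov(x, y)))           # shifts leave covariance unchanged
print(np.isclose(cov(x, y), cov(y, x)))                   # symmetry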

For sequences X_1, ..., X_n and Y_1, ..., Y_m of random variables, we have

\operatorname{Cov}\left(\sum_{i=1}^n {X_i}, \sum_{j=1}^m{Y_j}\right) =    \sum_{i=1}^n{\sum_{j=1}^m{\operatorname{Cov}\left(X_i, Y_j\right)}}.\,

For a sequence X_1, ..., X_n of random variables and constants a_1, ..., a_n, we have

\operatorname{Var}\left(\sum_{i=1}^n a_iX_i \right) = \sum_{i=1}^n a_i^2\operatorname{Var}(X_i) + 2\sum_{i,j\,:\,i<j} a_ia_j\operatorname{Cov}(X_i,X_j).
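
Sample moments satisfy this expansion exactly as well, which makes it easy to verify. A short sketch, assuming NumPy and arbitrary illustrative weights:

import numpy as np

rng = np.random.default_rng(3)
n, N = 4, 500
X = rng.normal(size=(n, N))              # each row is one of the n random variables
a = np.array([1.0, -0.5, 2.0, 0.25])     # constants a_1, ..., a_n

C = np.cov(X)                            # n-by-n sample covariance matrix

lhs = np.var(a @ X, ddof=1)              # Var of the weighted sum (1/(N-1) normalization)
rhs = a @ C @ a                          # sum_i a_i^2 Var(X_i) + 2 sum_{i<j} a_i a_j Cov(X_i, X_j)
print(np.isclose(lhs, rhs))              # True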

Uncorrelatedness and independence

If X and Y are independent, then their covariance is zero. This follows because under independence,

\operatorname{E}\left[X \cdot Y\right] = \operatorname{E}[X] \cdot \operatorname{E}[Y]

The converse, however, is generally not true: Some pairs of random variables have covariance zero although they are not independent.

To see why the converse is not generally true, consider the example where Y = X^2 with E[X] = 0 and E[X^3] = 0 (both conditions hold, for instance, when the distribution of X is symmetric about zero). Here X and Y are clearly not independent, since Y is completely determined by X, yet their covariance is zero:


\begin{align}
 \operatorname{Cov}(X, Y) &= \operatorname{Cov}(X, X^2) \\
         &= \operatorname{E}\!\left[X \cdot X^2\right] - \operatorname{E}[X] \cdot \operatorname{E}\!\left[X^2\right] \\
         &= \operatorname{E}\!\left[X^3\right] - \operatorname{E}[X]\operatorname{E}\!\left[X^2\right]  \\
         &= 0 - 0 \cdot \operatorname{E}\!\left[X^2\right]   \\
         &= 0  
\end{align}
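
A Monte Carlo version of this example, assuming NumPy and taking X uniform on [-1, 1] (one convenient distribution with E[X] = 0 and E[X^3] = 0):

import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, size=1_000_000)   # symmetric about zero: E[X] = 0, E[X^3] = 0
y = x ** 2                                   # y is completely determined by x

print(np.cov(x, y)[0, 1])                    # close to 0: uncorrelated, yet not independent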

Relationship to inner products

Many of the properties of covariance can be extracted elegantly by observing that it satisfies similar properties to those of an inner product:

  1. bilinear: for constants a and b and random variables X, Y, and U, Cov(aX + bY, U) = a Cov(X, U) + b Cov(Y, U)
  2. symmetric: Cov(X, Y) = Cov(Y, X)
  3. positive semi-definite: Var(X) = Cov(X, X) ≥ 0, and Cov(X, X) = 0 implies that X is constant almost surely.

In fact these properties imply that the covariance defines an inner product over the quotient vector space obtained by taking the subspace of random variables with finite second moment and identifying any two that differ by a constant. (This identification turns the positive semi-definiteness above into positive definiteness.) That quotient vector space is isomorphic to the subspace of random variables with finite second moment and mean zero; on that subspace, the covariance is exactly the L2 inner product of real-valued functions on the sample space.

As a result, for random variables with finite variance, the following inequality holds via the Cauchy–Schwarz inequality:

|\operatorname{Cov}(X,Y)| \le \sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}

Proof: If Var(Y) = 0, then Cov(X, Y) = 0 as well, and the inequality holds trivially. Otherwise, define the random variable

 Z = X - \frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(Y)} Y

Then we have:


\begin{align}
0 \le \operatorname{Var}(Z) & = \operatorname{Cov}\left(X - \frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(Y)} Y,\; X - \frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(Y)} Y \right) \\[12pt]
& = \operatorname{Var}(X) - 2\,\frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(Y)}\operatorname{Cov}(X,Y) + \left(\frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(Y)}\right)^{2}\operatorname{Var}(Y) \\[12pt]
& = \operatorname{Var}(X) - \frac{ (\operatorname{Cov}(X,Y))^2 }{\operatorname{Var}(Y)}
\end{align}

Rearranging gives (Cov(X,Y))^2 ≤ Var(X) Var(Y), which is the claimed inequality. QED.
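
The inequality also holds for sample covariances, since they too define an inner product on centered data. A quick numerical sanity check, assuming NumPy:

import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=10_000)
y = 0.7 * x + rng.normal(size=10_000)

c = np.cov(x, y)                     # 2-by-2 sample covariance matrix
bound = np.sqrt(c[0, 0] * c[1, 1])   # sqrt(Var(X) Var(Y))
print(abs(c[0, 1]) <= bound)         # True; equality only if Y is a linear function of X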

Calculating the sample covariance

The sample covariance of N observations of K variables is the K-by-K matrix \textstyle \mathbf{Q}=\left[  q_{jk}\right]  with the entries given by

 q_{jk}=\frac{1}{N-1}\sum_{i=1}^{N}\left(  x_{ij}-\bar{x}_j \right)  \left( x_{ik}-\bar{x}_k \right)

The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector \textstyle \mathbf{X}, a row vector whose j-th element (j = 1, ..., K) is one of the random variables. The reason the sample covariance matrix has \textstyle N-1 in the denominator rather than \textstyle N is essentially that the population mean E(X) is not known and is replaced by the sample mean \mathbf{\bar{x}}. If the population mean E(X) is known, the analogous unbiased estimate is

 q_{jk}=\frac{1}{N}\sum_{i=1}^N \left(  x_{ij}-E(X_j)\right)  \left( x_{ik}-E(X_k)\right)
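
A minimal sketch of this computation, assuming NumPy: build Q directly from the centered N-by-K data matrix and compare it against np.cov, which uses the same 1/(N-1) normalization by default.

import numpy as np

rng = np.random.default_rng(6)
N, K = 200, 3
data = rng.normal(size=(N, K))          # row i holds observation i of the K variables

xbar = data.mean(axis=0)                # sample mean of each variable
centered = data - xbar
Q = centered.T @ centered / (N - 1)     # q_jk = (1/(N-1)) sum_i (x_ij - xbar_j)(x_ik - xbar_k)

print(np.allclose(Q, np.cov(data, rowvar=False)))   # True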

Comments

The covariance is sometimes called a measure of "linear dependence" between the two random variables. That does not mean the same thing as in the context of linear algebra (see linear dependence). When the covariance is normalized, one obtains the correlation matrix. From it, one can obtain the Pearson coefficient, which measures the goodness of fit of the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.
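
As an illustration of that normalization, the sketch below (assuming NumPy) converts a sample covariance matrix into the correlation matrix by dividing each entry by the corresponding standard deviations:

import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(size=(500, 3))
data[:, 1] += 0.8 * data[:, 0]          # induce some correlation for illustration

Q = np.cov(data, rowvar=False)          # sample covariance matrix
s = np.sqrt(np.diag(Q))                 # standard deviations of the variables
corr = Q / np.outer(s, s)               # dimensionless entries in [-1, 1]

print(np.allclose(corr, np.corrcoef(data, rowvar=False)))   # True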
