Gauss–Markov theorem

:"This article is not about Gauss–Markov processes."

In statistics, the Gauss–Markov theorem, named after Carl Friedrich Gauss and Andrey Markov, states that in a linear model in which the errors have expectation zero, have equal variances, and are uncorrelated, the best linear unbiased estimator (BLUE) of the coefficients is the least-squares estimator. The errors are "not" assumed to be normally distributed, nor are they assumed to be independent (only uncorrelated, a weaker condition), nor are they assumed to be identically distributed (only having zero mean and equal variances).

Statement

Suppose we have

:Y_i=\sum_{j=1}^{K}\beta_j X_{ij}+\varepsilon_i

for "i" = 1, . . ., "n", where "β" "j" are non-random but unobservable parameters, "Xij" are non-random and observable (called the "explanatory variables"), "ε" "i" are random , and so "Y" "i" are random. The random variables "ε" "i" are called the "errors" (not to be confused with "residuals"; see errors and residuals in statistics). Note that to include a constant in the model above, one can choose to include the "XiK" = 1.

The Gauss–Markov assumptions state that

*{\rm E}\left(\varepsilon_i\right)=0,
*{\rm Var}\left(\varepsilon_i\right)=\sigma^2 (i.e., all errors have the same variance; that is "homoscedasticity"), and
*{\rm Cov}\left(\varepsilon_i,\varepsilon_j\right)=0 for "i" ≠ "j"; that is "uncorrelatedness."
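
To make these assumptions concrete, here is a minimal simulation sketch in Python with NumPy; the sample size, design matrix, and error distributions are illustrative assumptions, not part of the theorem. The errors below have zero mean and common variance and are uncorrelated, yet are neither normal nor identically distributed:

 import numpy as np

 rng = np.random.default_rng(0)
 n, K = 200, 3
 # Design matrix with a constant column (the article's "XiK" = 1, placed first here).
 X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, size=(n, K - 1))])
 beta = np.array([1.0, 2.0, -0.5])
 # Zero-mean errors with common variance 1 that are NOT identically
 # distributed: half uniform, half centered exponential, each scaled to
 # unit variance; independent draws are in particular uncorrelated.
 u = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=n // 2)   # variance 1
 e = rng.exponential(1.0, size=n - n // 2) - 1.0             # variance 1
 eps = rng.permutation(np.concatenate([u, e]))
 Y = X @ beta + eps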

for "i" ≠ "j"; that is "uncorrelatedness." A linear estimator of "β" "j" is a linear combination

:\widehat\beta_j = c_{1j}Y_1+\cdots+c_{nj}Y_n

in which the coefficients "cij" are not allowed to depend on the coefficients "βj", since those are not observable, but are allowed to depend on the values "Xij", since these data are observable, and whose expected value remains "βj" even if the values of "X" change. (The dependence of the coefficients on "X" is typically nonlinear; the estimator is linear in "Y", and hence in "ε", which is random; that is why this is "linear" regression.) The estimator is unbiased if and only if

:{\rm E}(\widehat\beta_j)=\beta_j.

Now, let \sum_{j=1}^K\lambda_j\beta_j be some linear combination of the coefficients, estimated by \sum_{j=1}^K\lambda_j\widehat\beta_j. The mean squared error of this estimate is defined as

:{\rm E}\left(\left(\sum_{j=1}^K\lambda_j(\widehat\beta_j-\beta_j)\right)^2\right),

i.e., it is the expectation of the square of the difference between the estimator and the parameter to be estimated. (The mean squared error of an estimator coincides with the estimator's variance if the estimator is unbiased; for biased estimators the mean squared error is the sum of the variance and the square of the bias.) A best linear unbiased estimator of "β" is the one with the smallest mean squared error for every linear combination "λ". This is equivalent to the condition that

:{\rm Var}(\tilde\beta)-{\rm Var}(\widehat\beta)

is a positive semi-definite matrix for every other linear unbiased estimator \tilde\beta.
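
The variance-plus-squared-bias decomposition mentioned above is easy to verify numerically. The sketch below (Python with NumPy; the shrunk sample mean is a hypothetical biased estimator chosen purely for illustration) checks that the Monte Carlo mean squared error matches variance plus bias squared:

 import numpy as np

 rng = np.random.default_rng(1)
 theta, n, reps = 2.0, 50, 20000
 est = np.empty(reps)
 for r in range(reps):
     y = theta + rng.standard_normal(n)
     est[r] = 0.9 * y.mean()          # deliberately biased (shrunk) estimator
 mse = np.mean((est - theta) ** 2)
 bias = est.mean() - theta
 print(mse, est.var() + bias ** 2)    # the two agree up to Monte Carlo error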

The ordinary least squares estimator (OLS) is the function

:\widehat\beta=(X^{T}X)^{-1}X^{T}Y

of "Y" and "X" that minimizes the sum of squares of residuals

:\sum_{i=1}^n\left(Y_i-\widehat{Y}_i\right)^2=\sum_{i=1}^n\left(Y_i-\sum_{j=1}^K\widehat\beta_j X_{ij}\right)^2.

(It is easy to confuse the concept of "error" introduced earlier in this article with this concept of "residual"; for an account of the differences and the relationship between them, see errors and residuals in statistics.)
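
In code, the OLS estimate can be computed directly. A minimal sketch (Python with NumPy; the simulated design and errors are assumed, not prescribed by the theorem) solves the least-squares problem rather than forming the matrix inverse explicitly, which is numerically preferable while being algebraically identical to (X^{T}X)^{-1}X^{T}Y:

 import numpy as np

 rng = np.random.default_rng(0)
 n = 200
 X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, size=(n, 2))])
 beta = np.array([1.0, 2.0, -0.5])
 Y = X @ beta + rng.standard_normal(n)
 # OLS: minimizes the residual sum of squares.
 beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
 residuals = Y - X @ beta_hat
 print(beta_hat)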

The theorem now states that the OLS estimator is a BLUE. The main idea of the proof is that the least-squares estimator is uncorrelated with every linear unbiased estimator of zero, i.e., with every linear combination a_1Y_1+\cdots+a_nY_n whose coefficients do not depend upon the unobservable "β" but whose expected value is always zero.
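
The optimality claim can also be checked empirically. In the Monte Carlo sketch below (Python with NumPy; the competitor, OLS run on only the first half of the observations, is one hypothetical choice of a linear unbiased alternative), OLS shows the smaller coordinate-wise variance:

 import numpy as np

 rng = np.random.default_rng(0)
 n, reps = 200, 2000
 X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, size=(n, 2))])
 beta = np.array([1.0, 2.0, -0.5])
 ols, half = [], []
 for _ in range(reps):
     Y = X @ beta + rng.standard_normal(n)   # errors: zero mean, equal variance
     ols.append(np.linalg.lstsq(X, Y, rcond=None)[0])
     # Another linear unbiased estimator: OLS on the first half of the data.
     half.append(np.linalg.lstsq(X[: n // 2], Y[: n // 2], rcond=None)[0])
 print(np.var(ols, axis=0))    # smaller in every coordinate ...
 print(np.var(half, axis=0))   # ... than the competing estimator's variance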

Generalized least squares estimator

The generalized least squares (GLS) or Aitken estimator extends the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix; the Aitken estimator is also a BLUE. [A. C. Aitken, "On Least Squares and Linear Combinations of Observations", "Proceedings of the Royal Society of Edinburgh", 1935, vol. 55, pp. 42–48.]
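
A minimal sketch of the GLS idea (Python with NumPy; the known, diagonal error covariance is an illustrative assumption): when the error covariance is known, rescaling each observation by the reciprocal error standard deviation ("whitening") restores the Gauss–Markov assumptions, after which ordinary least squares applies to the transformed data:

 import numpy as np

 rng = np.random.default_rng(0)
 n = 200
 X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, size=(n, 2))])
 beta = np.array([1.0, 2.0, -0.5])
 # Heteroscedastic errors with known diagonal covariance V = diag(sigma2).
 sigma2 = np.linspace(0.5, 3.0, n)
 Y = X @ beta + rng.standard_normal(n) * np.sqrt(sigma2)
 # Whitening: scale each row by 1/sigma_i, then run OLS; for diagonal V
 # this is exactly the GLS (Aitken) estimator.
 w = 1.0 / np.sqrt(sigma2)
 beta_gls, *_ = np.linalg.lstsq(X * w[:, None], Y * w, rcond=None)
 print(beta_gls)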

See also

*Independent and identically-distributed random variables
*Linear regression
*Measurement uncertainty
*Best linear unbiased prediction


External links

* [http://members.aol.com/jeff570/g.html Earliest Known Uses of Some of the Words of Mathematics: G] (brief history and explanation of its name)
* [http://www.xycoon.com/ols1.htm Proof of the Gauss Markov theorem for multiple linear regression] (makes use of matrix algebra)
* [http://emlab.berkeley.edu/GMTheorem/index.html A Proof of the Gauss Markov theorem using geometry]

