Consistent estimator

[Figure caption: {T1, T2, T3, …} is a sequence of estimators for the parameter θ0, whose true value is 4. The sequence is consistent: the estimators become increasingly concentrated near the true value θ0; at the same time, these estimators are biased. The limiting distribution of the sequence is a degenerate random variable equal to θ0 with probability 1.]

In statistics, a sequence of estimators for parameter θ0 is said to be consistent (or asymptotically consistent) if this sequence converges in probability to θ0. It means that the distributions of the estimators become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to θ0 converges to one.

In practice one usually constructs a single estimator as a function of an available sample of size n, and then imagines being able to keep collecting data and expanding the sample ad infinitum. In this way one obtains a sequence of estimators indexed by n, and consistency is a property of that sequence as the sample size grows to infinity. If the sequence converges in probability to the true value θ0, the estimator is called consistent; otherwise it is said to be inconsistent.

Consistency as defined here is sometimes referred to as weak consistency. When convergence in probability is replaced by almost sure convergence, the sequence of estimators is said to be strongly consistent.
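
In symbols, writing θ0 for the true value of the parameter and using the standard definitions of the two modes of convergence, weak consistency of the sequence {Tn} means

    \lim_{n\to\infty}\Pr\!\big[\,|T_n - \theta_0| > \varepsilon\,\big] = 0 \quad\text{for every }\varepsilon > 0,

while strong consistency means

    \Pr\!\big[\,\lim_{n\to\infty} T_n = \theta_0\,\big] = 1.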

Definition

Loosely speaking, an estimator Tn of parameter θ is said to be consistent if it converges in probability to the true value of the parameter:[1]


    \underset{n\to\infty}{\operatorname{plim}}\;T_n = \theta.

A more rigorous definition takes into account the fact that θ is actually unknown, and thus the convergence in probability must take place for every possible value of this parameter. Suppose {pθ: θ ∈ Θ} is a family of distributions (the parametric model), and Xθ = {X1, X2, … : Xi ~ pθ} is an infinite sample from the distribution pθ. Let { Tn(Xθ) } be a sequence of estimators for some parameter g(θ). Usually Tn will be based on the first n observations of a sample. Then this sequence {Tn} is said to be (weakly) consistent if [2]


    \underset{n\to\infty}{\operatorname{plim}}\;T_n(X^{\theta}) = g(\theta),\ \ \text{for all}\ \theta\in\Theta.

This definition uses g(θ) instead of simply θ, because often one is interested in estimating a certain function or a sub-vector of the underlying parameter. In the next example we estimate the location parameter of the model, but not the scale:

Example: sample mean for normal random variables

Suppose one has a sequence of observations {X1, X2, …} from a normal N(μ, σ2) distribution. To estimate μ based on the first n observations, we use the sample mean: Tn = (X1 + … + Xn)/n. This defines a sequence of estimators, indexed by the sample size n.

From the properties of the normal distribution, we know that Tn is itself normally distributed, with mean μ and variance σ2/n. Equivalently, (T_n-\mu)/(\sigma/\sqrt{n}) has a standard normal distribution. Then


    \Pr\!\left[\,|T_n-\mu|\geq\varepsilon\,\right] = 
    \Pr\!\left[ \frac{\sqrt{n}\,\big|T_n-\mu\big|}{\sigma} \geq \sqrt{n}\varepsilon/\sigma \right] = 
    2\left(1-\Phi\left(\frac{\sqrt{n}\,\varepsilon}{\sigma}\right)\right) \to 0

as n tends to infinity, for any fixed ε > 0. Therefore, the sequence Tn of sample means is consistent for the population mean μ.
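
As an informal check of this conclusion (not part of the derivation above), one can simulate the sample mean and compare the empirical probability Pr[|Tn − μ| ≥ ε] with the exact expression 2(1 − Φ(√n ε/σ)). The following Python sketch uses the illustrative values μ = 0, σ = 1 and ε = 0.1; these numbers are assumptions chosen for the demonstration, not taken from the text.

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(0)

    # Illustrative choices (assumptions, not from the text): the normal model's
    # mean and standard deviation, and the fixed epsilon in the probability.
    mu, sigma, eps = 0.0, 1.0, 0.1
    n_rep = 5000                       # number of simulated samples per sample size

    def std_normal_cdf(x):
        """Phi(x): the standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    for n in (10, 100, 1000):
        samples = rng.normal(mu, sigma, size=(n_rep, n))
        t_n = samples.mean(axis=1)                     # sample means T_n
        p_emp = np.mean(np.abs(t_n - mu) >= eps)       # empirical Pr[|T_n - mu| >= eps]
        p_exact = 2.0 * (1.0 - std_normal_cdf(sqrt(n) * eps / sigma))
        print(f"n={n:5d}   empirical={p_emp:.4f}   exact={p_exact:.4f}")

Both columns shrink toward zero as n grows, which is exactly the statement of consistency for this estimator.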

Establishing consistency

The notion of asymptotic consistency is very close, almost synonymous with the notion of convergence in probability. As such, any theorem, lemma, or property that establishes convergence in probability may be used to prove consistency. Many such tools exist:

  • In order to demonstrate consistency directly from the definition one can use the inequality [3]

    \Pr\!\big[h(T_n-\theta)\geq\varepsilon\big] \leq \frac{\operatorname{E}\big[h(T_n-\theta)\big]}{\varepsilon},

the most common choices for the (non-negative) function h being the absolute value (in which case the bound is Markov's inequality) and the quadratic function (giving Chebyshev's inequality).

  • Another useful result is the continuous mapping theorem: if Tn is consistent for θ and g(·) is a real-valued function continuous at point θ, then g(Tn) will be consistent for g(θ):[4]

    T_n\ \xrightarrow{p}\ \theta\ \quad\Rightarrow\quad g(T_n)\ \xrightarrow{p}\ g(\theta)
  • Slutsky’s theorem can be used to combine several different estimators, or an estimator with a non-random convergent sequence. If Tn →p α and Sn →p β, then [5]
\begin{align}
  & T_n + S_n \ \xrightarrow{p}\ \alpha+\beta, \\
  & T_n   S_n \ \xrightarrow{p}\ \alpha \beta, \\
  & T_n / S_n \ \xrightarrow{p}\ \alpha/\beta, \text{ provided that }\beta\neq0
  \end{align}
  • If an estimator Tn is given by an explicit formula, then most likely the formula will employ sums of random variables, and then the law of large numbers can be used (see the simulation sketch after this list): for a sequence {Xn} of random variables and under suitable conditions,
\frac{1}{n}\sum_{i=1}^n g(X_i) \ \xrightarrow{p}\ \operatorname{E}[\,g(X)\,]
  • If estimator Tn is defined implicitly, for example as a value that maximizes certain objective function (see extremum estimator), then a more complicated argument involving stochastic equicontinuity has to be used.[6]
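
As an illustration of how the last three tools combine (a minimal simulation sketch under stated assumptions, not a derivation), consider the plug-in estimator of the standard deviation built from two sample moments: the law of large numbers gives consistency of each moment, Slutsky's theorem lets one combine them, and the continuous mapping theorem carries the result through the square root. The distribution (exponential with unit rate) and the sample sizes below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative model (an assumption, not from the text): X ~ Exponential(1),
    # so E[X] = 1 and Var[X] = 1, i.e. the true standard deviation is 1.
    true_sd = 1.0

    for n in (10, 100, 10_000, 1_000_000):
        x = rng.exponential(scale=1.0, size=n)
        m1 = x.mean()           # (1/n) sum x_i    -> E[X]    by the law of large numbers
        m2 = (x ** 2).mean()    # (1/n) sum x_i^2  -> E[X^2]  by the law of large numbers
        # m2 - m1**2 -> Var[X] by Slutsky's theorem (combining two consistent estimators);
        # taking the square root preserves consistency by the continuous mapping theorem.
        sd_hat = np.sqrt(m2 - m1 ** 2)
        print(f"n={n:8d}   sd_hat={sd_hat:.4f}   |error|={abs(sd_hat - true_sd):.4f}")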

Bias versus consistency

Unbiased but not consistent

An estimator can be unbiased but not consistent. For example, for an i.i.d. sample {x1, …, xn} one can use T(X) = x1 as the estimator of the mean E[x]. This estimator is unbiased, since E[x1] = E[x], but it is not consistent: the distribution of x1 does not change as n grows, so the estimator does not concentrate around E[x].

Biased but consistent

Alternatively, an estimator can be biased but consistent. For example, if the mean is estimated by {1 \over n} \sum x_i + {1 \over n}, it is biased (its expectation exceeds the true mean by 1/n), but as n \rightarrow \infty the bias vanishes and the estimator converges in probability to the correct value, so it is consistent.
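
A small simulation sketch (an illustration under assumed settings, not part of the article's argument) can make both examples above concrete: the estimator x1 keeps zero bias but its spread never shrinks, while the estimator (1/n) Σ xi + 1/n has bias 1/n, and both its bias and its spread vanish as n grows. The normal model with μ = 5 and the sample sizes are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(2)

    # Illustrative setup (assumptions, not from the text): i.i.d. N(mu, 1) data.
    mu, n_rep = 5.0, 5000

    for n in (10, 1000):
        x = rng.normal(mu, 1.0, size=(n_rep, n))
        t_first = x[:, 0]                    # T = x_1: unbiased but not consistent
        t_shift = x.mean(axis=1) + 1.0 / n   # (1/n) sum x_i + 1/n: biased but consistent
        print(f"n={n:5d}   "
              f"x_1: bias={t_first.mean() - mu:+.3f}, spread={t_first.std():.3f}   "
              f"mean+1/n: bias={t_shift.mean() - mu:+.3f}, spread={t_shift.std():.3f}")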

See also

  • Fisher consistency — an alternative, although rarely used, concept of consistency for estimators
  • Consistent test — the notion of consistency in the context of hypothesis testing

Notes

  1. ^ Amemiya 1985, Definition 3.4.2
  2. ^ Lehmann & Casella 1998, p. 332
  3. ^ Amemiya 1985, equation (3.2.5)
  4. ^ Amemiya 1985, Theorem 3.2.6
  5. ^ Amemiya 1985, Theorem 3.2.7
  6. ^ Newey & McFadden (1994, Chapter 2)

References

  • Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge, MA: Harvard University Press.
  • Lehmann, E. L.; Casella, G. (1998). Theory of Point Estimation (2nd ed.). New York: Springer.
  • Newey, W. K.; McFadden, D. (1994). “Large sample estimation and hypothesis testing”. In Engle, R. F.; McFadden, D. (eds.), Handbook of Econometrics, Vol. IV. Amsterdam: Elsevier.