Anderson-Darling test

The Anderson-Darling test, named after Theodore Wilbur Anderson and Donald A. Darling, who introduced it in 1952 (Anderson & Darling 1952), is a form of minimum distance estimation, and one of the most powerful statistics for detecting most departures from normality. It may be used with small sample sizes n ≤ 25. Very large sample sizes may reject the assumption of normality on the basis of only slight imperfections, but industrial data with sample sizes of 200 and more have passed the Anderson-Darling test.

The Anderson-Darling test assesses whether a sample comes from a specified distribution. The formula for the test statistic A^2, used to assess whether the ordered data \{Y_1 < \cdots < Y_N\} (note that the data must be put in order) come from a distribution with cumulative distribution function (CDF) F, is

: A^2 = -N-S

where

: S = \sum_{k=1}^{N} \frac{2k-1}{N}\left[\ln F(Y_k) + \ln\left(1 - F(Y_{N+1-k})\right)\right].

The test statistic can then be compared against the critical values of the theoretical distribution (dependent on which F is used) to determine the P-value.
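In practice the statistic and its critical values are available in statistical software; for example, SciPy's `scipy.stats.anderson` implements the test. A sketch, using simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)  # simulated sample

# anderson() returns the A^2 statistic together with critical values
# at fixed significance levels (15%, 10%, 5%, 2.5%, 1% for dist='norm'),
# rather than an exact p-value.
result = stats.anderson(x, dist='norm')
print("A^2 =", result.statistic)
for cv, level in zip(result.critical_values, result.significance_level):
    print(f"  {level}% critical value: {cv}  reject: {result.statistic > cv}")
```

The statistic is compared against each tabulated critical value: the hypothesis is rejected at a given level whenever the statistic exceeds that level's critical value.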

The Anderson-Darling test for normality is a distance or empirical distribution function (EDF) test. It is based upon the concept that when given a hypothesized underlying distribution, the data can be transformed to a uniform distribution. The transformed sample data can be then tested for uniformity with a distance test (Shapiro 1980).
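This probability integral transform can be illustrated directly: if X has CDF F, then F(X) is uniform on (0, 1). A minimal sketch with simulated normal data (the uniformity check here uses a Kolmogorov-Smirnov distance test as an example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=1000)  # sample from the hypothesized distribution

# Probability integral transform: if X ~ F, then F(X) ~ Uniform(0, 1).
u = stats.norm.cdf(x)

# The transformed sample can then be checked for uniformity with a
# distance test, here Kolmogorov-Smirnov against the uniform CDF.
ks = stats.kstest(u, "uniform")
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.4f}")
```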

In comparisons of power, Stephens (1974) found A^2 to be one of the best EDF statistics for detecting most departures from normality. The only statistic that came close was the Cramér–von Mises W^2 statistic.

Procedure

(If testing for normal distribution of the variable "X")

1) The data X_i, for i = 1, \ldots, n, of the variable X that should be tested are sorted from low to high.

2) The mean \bar{X} and standard deviation s are calculated from the sample of X.

3) The values X_i are standardized as

::Y_i = \frac{X_i - \bar{X}}{s}

4) With the standard normal CDF Φ, A^2 is calculated using

::A^2 = -n - \frac{1}{n} \sum_{i=1}^{n} (2i-1)\left(\ln \Phi(Y_i) + \ln(1 - \Phi(Y_{n+1-i}))\right)

or without repeating indices as

::A^2 = -n - \frac{1}{n} \sum_{i=1}^{n} \left[(2i-1)\ln \Phi(Y_i) + (2(n-i)+1)\ln(1 - \Phi(Y_i))\right].

5) A^{*2}, an approximate adjustment for sample size, is calculated using

::A^{*2} = A^2\left(1 + \frac{0.75}{n} + \frac{2.25}{n^2}\right)

6) If A^{*2} exceeds 0.752 then the hypothesis of normality is rejected for a 5% level test.
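Steps 1–6 above can be sketched as a short function (assuming the sample standard deviation uses n − 1 in the denominator):

```python
import numpy as np
from scipy.stats import norm

def anderson_darling_normal(x):
    """Size-adjusted Anderson-Darling statistic A*^2 for normality,
    following steps 1-6 above; values above 0.752 reject at the 5% level."""
    x = np.sort(np.asarray(x, dtype=float))          # 1) sort low to high
    n = len(x)
    y = (x - x.mean()) / x.std(ddof=1)               # 2-3) standardize
    phi = norm.cdf(y)                                # 4) standard normal CDF
    i = np.arange(1, n + 1)
    # phi[::-1] supplies the Phi(Y_{n+1-i}) terms of the sum
    a2 = -n - np.sum((2 * i - 1) * (np.log(phi) + np.log(1 - phi[::-1]))) / n
    return a2 * (1 + 0.75 / n + 2.25 / n ** 2)       # 5) sample-size adjustment

rng = np.random.default_rng(0)
a_norm = anderson_darling_normal(rng.normal(size=200))       # normal data
a_expo = anderson_darling_normal(rng.exponential(size=200))  # skewed data
print(f"normal sample:      A*^2 = {a_norm:.3f}")
print(f"exponential sample: A*^2 = {a_expo:.3f}")
```

The strongly skewed exponential sample yields a statistic far above 0.752, so normality is rejected at the 5% level, while the normal sample typically stays below the threshold.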

Note:

1. If "s" = 0 or any Phi(Y_i)=(0 or 1) then A^2 cannot be calculated and is undefined.

2. Above, it was assumed that the variable X_i was being tested for normal distribution. Any other theoretical distribution can be assumed by using its CDF. Each theoretical distribution has its own critical values; some examples are the lognormal, exponential, Weibull, extreme value type I and logistic distributions.
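For instance, SciPy's `anderson` ships critical-value tables for a few of these families (a sketch; the exact set of supported names depends on the SciPy version):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.exponential(scale=3.0, size=150)  # simulated exponential sample

# Besides 'norm', scipy.stats.anderson has critical-value tables for
# 'expon', 'logistic' and 'gumbel' (extreme value type I), among others.
res = stats.anderson(data, dist='expon')
print("A^2 =", res.statistic)
print("critical values:", res.critical_values)
```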

3. Under the null hypothesis, the data follow the specified distribution (in this case, after standardization, N(0, 1)).

See also

*Kolmogorov-Smirnov test
*Shapiro-Wilk test
*Smirnov-Cramér-von-Mises test
*Jarque-Bera test

External links

* [http://www.itl.nist.gov/div898/handbook/eda/section3/eda35e.htm US NIST Handbook of Statistics]
* [http://www.analyse-it.com/blog/2008/8/testing-the-assumption-of-normality.aspx Testing the assumption of normality]

References

* Anderson, T. W.; Darling, D. A. (1952). "Asymptotic theory of certain 'goodness of fit' criteria based on stochastic processes". Annals of Mathematical Statistics 23: 193–212. doi:10.1214/aoms/1177729437
* Stephens, M. A. (1974). "EDF Statistics for Goodness of Fit and Some Comparisons". Journal of the American Statistical Association 69: 730–737. doi:10.2307/2286009