Algorithms for calculating variance

Algorithms for calculating variance play a major role in statistical computing. A key problem in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.

I. Naïve algorithm

The formula for calculating the variance of an entire population of size "n" is:

:\sigma^2 = \frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2 / n}{n}

The formula for calculating an unbiased estimate of the population variance from a finite sample of "n" observations is:

:s^2 = \frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2 / n}{n-1}

Therefore a naive algorithm to calculate the estimated variance is given by the following pseudocode:

n = 0
sum = 0
sum_sqr = 0

foreach x in data:
    n = n + 1
    sum = sum + x
    sum_sqr = sum_sqr + x*x
end for

mean = sum/n
variance = (sum_sqr - sum*mean)/(n - 1)

This algorithm can easily be adapted to compute the variance of a finite population: simply divide by "n" instead of "n" − 1 on the last line.

Because sum_sqr and sum * mean can be very similar numbers, the precision of the result can be much less than the inherent precision of the floating-point arithmetic used to perform the computation. The cancellation is particularly bad when the standard deviation is small relative to the mean.
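
As a concrete illustration, the naive pseudocode above translates directly into Python; the function name naive_variance is chosen here for illustration and is not part of any standard library:

def naive_variance(data):
    # Single pass accumulating the sum and the sum of squares.
    n = 0
    total = 0.0
    total_sqr = 0.0
    for x in data:
        n += 1
        total += x
        total_sqr += x * x
    mean = total / n
    # Unbiased sample variance; divide by n instead of n - 1 for a population.
    return (total_sqr - total * mean) / (n - 1)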

II. Two-pass algorithm

An alternate approach, using a different formula for the variance, is given by the following pseudocode:

n = 0
sum1 = 0

foreach x in data:
    n = n + 1
    sum1 = sum1 + x
end for

mean = sum1/n

sum2 = 0
foreach x in data:
    sum2 = sum2 + (x - mean)^2
end for

variance = sum2/(n - 1)

This algorithm is often more numerically reliable than the naïve algorithm I for large sets of data, although it can be worse if much of the data is very close to but not precisely equal to the mean and some are quite far away from it.
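
A minimal Python sketch of the two-pass approach, assuming the data fit in memory so they can be traversed twice (the name two_pass_variance is illustrative):

def two_pass_variance(data):
    # First pass: compute the mean.
    n = len(data)
    mean = sum(data) / n
    # Second pass: accumulate squared deviations from the mean.
    sum2 = 0.0
    for x in data:
        sum2 += (x - mean) ** 2
    # Unbiased sample variance; use n instead of n - 1 for a population.
    return sum2 / (n - 1)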

The results of both of these simple algorithms (I and II) can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such as compensated summation can be used to combat this error to a degree.

IIa. Compensated variant

The compensated-summation version of the algorithm above reads:

n = 0
sum1 = 0

foreach x in data:
    n = n + 1
    sum1 = sum1 + x
end for

mean = sum1/n

sum2 = 0
sumc = 0
foreach x in data:
    sum2 = sum2 + (x - mean)^2
    sumc = sumc + (x - mean)
end for

variance = (sum2 - sumc^2/n)/(n - 1)
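
For illustration, a Python sketch of this compensated variant; sumc gathers the deviations, which would sum to exactly zero in infinite precision, so its square corrects the accumulated rounding error:

def compensated_variance(data):
    n = len(data)
    mean = sum(data) / n
    sum2 = 0.0   # sum of squared deviations
    sumc = 0.0   # sum of deviations; zero in exact arithmetic
    for x in data:
        sum2 += (x - mean) ** 2
        sumc += x - mean
    return (sum2 - sumc ** 2 / n) / (n - 1)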

III. On-line algorithm

It is often useful to be able to compute the variance in a single pass, inspecting each value x_i only once; for example, when the data are being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an online algorithm, a recurrence relation is required between quantities from which the required statistics can be calculated in a numerically stable fashion.

The following formulas can be used to update the mean and (estimated) variance of the sequence for an additional element x_{\mathrm{new}}. Here, m denotes the estimate of the population mean (using the sample mean), s^2_{n-1} the estimate of the population variance, s^2_n the estimate of the sample variance, and n the number of elements in the sequence before the addition.

:m_{\mathrm{new}} = \frac{n \, m_{\mathrm{old}} + x_{\mathrm{new}}}{n+1} = m_{\mathrm{old}} + \frac{x_{\mathrm{new}} - m_{\mathrm{old}}}{n+1}

:s^2_{n-1,\mathrm{new}} = \frac{(n-1)\, s^2_{n-1,\mathrm{old}} + (x_{\mathrm{new}} - m_{\mathrm{new}})(x_{\mathrm{new}} - m_{\mathrm{old}})}{n}, \qquad n > 0

:s^2_{n,\mathrm{new}} = \frac{n\, s^2_{n,\mathrm{old}} + (x_{\mathrm{new}} - m_{\mathrm{new}})(x_{\mathrm{new}} - m_{\mathrm{old}})}{n+1}.
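
As a minimal sketch, these update formulas can be applied directly in Python; the function name update_mean_variance and the handling of the n = 0 case are illustrative assumptions:

def update_mean_variance(n, mean_old, var_old, x_new):
    # n is the number of elements *before* the addition; var_old is the
    # unbiased (n - 1) variance estimate of those n elements.
    mean_new = mean_old + (x_new - mean_old) / (n + 1)
    if n > 0:
        var_new = ((n - 1) * var_old + (x_new - mean_new) * (x_new - mean_old)) / n
    else:
        var_new = 0.0   # undefined for a single element; reported as 0 here
    return n + 1, mean_new, var_new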

It turns out that a more suitable quantity for updating is the sum of squares of differences from the (current) mean, \sum_{i=1}^n (x_i - m)^2, here denoted M_2:

:M_{2,\mathrm{new}} = M_{2,\mathrm{old}} + (x_{\mathrm{new}} - m_{\mathrm{old}})(x_{\mathrm{new}} - m_{\mathrm{new}})

:s^2_n = \frac{M_2}{n}

:s^2_{n-1} = \frac{M_2}{n-1}

A numerically stable algorithm is given below; it also computes the mean. This algorithm is due to Knuth [Donald E. Knuth (1998). The Art of Computer Programming, volume 2: Seminumerical Algorithms, 3rd edn., p. 232. Boston: Addison-Wesley.], who cites Welford [B. P. Welford (1962). "Note on a method for calculating corrected sums of squares and products". Technometrics 4(3):419–420. http://www.jstor.org/view/00401706/ap040015/04a00090/0].

n = 0
mean = 0
M2 = 0

foreach x in data:
    n = n + 1
    delta = x - mean
    mean = mean + delta/n
    M2 = M2 + delta*(x - mean)    // This expression uses the new value of mean
end for

variance_n = M2/n
variance = M2/(n - 1)

This algorithm is much less prone to loss of precision due to massive cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.
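
In Python, a sketch of this single-pass algorithm might read as follows; because only n, mean and M2 are kept, the data themselves never need to be stored:

def online_variance(data):
    n = 0
    mean = 0.0
    M2 = 0.0   # sum of squares of differences from the current mean
    for x in data:
        n += 1
        delta = x - mean
        mean += delta / n
        M2 += delta * (x - mean)   # uses the *new* value of mean
    variance_n = M2 / n            # variance of the sample itself
    variance = M2 / (n - 1)        # unbiased estimate of the population variance
    return mean, variance_n, variance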

A slightly more convenient form allows one to calculate the standard deviation without having to explicitly calculate the new mean. If n is the number of elements in the sequence after the addition of the new element, then one has

:s^2_{n-1,\mathrm{new}} = \frac{(n-2)\, s^2_{n-1,\mathrm{old}} + \frac{n-1}{n}\left(x_{\mathrm{new}} - m_{\mathrm{old}}\right)^2}{n-1}

IV. Weighted incremental algorithm

When the observations are weighted, West (1979) [D. H. D. West (1979). "Updating Mean and Variance Estimates: An Improved Method". Communications of the ACM 22(9):532–535.] suggests this incremental algorithm:

n = 0
foreach x, weight in data:
    if n = 0 then
        n = 1
        mean = x
        S = 0
        sumweight = weight
    else
        n = n + 1
        temp = weight + sumweight
        S = S + sumweight*weight*(x - mean)^2 / temp
        mean = mean + (x - mean)*weight / temp
        sumweight = temp
    end if
end for
Variance = S * n / ((n - 1) * sumweight)    // if the sample is the whole population, omit the factor n/(n - 1)
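
A possible Python rendering of West's weighted update, assuming (as a convenience of this sketch) that the input is an iterable of (value, weight) pairs:

def weighted_incremental_variance(data_weight_pairs):
    n = 0
    mean = 0.0
    S = 0.0
    sumweight = 0.0
    for x, weight in data_weight_pairs:
        if n == 0:
            n = 1
            mean = x
            S = 0.0
            sumweight = weight
        else:
            n += 1
            temp = weight + sumweight
            S += sumweight * weight * (x - mean) ** 2 / temp
            mean += (x - mean) * weight / temp
            sumweight = temp
    # Omit the n/(n - 1) factor if the sample is the whole population.
    return S * n / ((n - 1) * sumweight)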

V. Parallel algorithm

Chan et al. [Tony F. Chan, Gene H. Golub, Randall J. LeVeque (1979). "Updating Formulae and a Pairwise Algorithm for Computing Sample Variances". Technical Report STAN-CS-79-773, Department of Computer Science, Stanford University. ftp://reports.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf] note that the above on-line algorithm III is a special case of an algorithm that works for any partition of the sample X into sets X^A, X^B (with N^X = N^A + N^B):

:\delta = m^B - m^A

:m^X = m^A + \delta\cdot\frac{N^B}{N^X}

:M_2^X = M_2^A + M_2^B + \delta^2\cdot\frac{N^A N^B}{N^X}.

This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input.
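
For example, a merge step along these lines (a sketch; the function name combine and the tuple layout are illustrative, not from the paper) lets each processing unit run on-line algorithm III on its own chunk and then fold the partial summaries together:

def combine(n_a, mean_a, M2_a, n_b, mean_b, M2_b):
    # Merge the (count, mean, M2) summaries of two disjoint partitions.
    delta = mean_b - mean_a
    n_x = n_a + n_b
    mean_x = mean_a + delta * n_b / n_x
    M2_x = M2_a + M2_b + delta ** 2 * n_a * n_b / n_x
    return n_x, mean_x, M2_x

Partial summaries can be merged pairwise in any order, which is what makes the method suitable for parallel or distributed computation.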

Higher-order statistics

Terriberry [Timothy B. Terriberry (2007). "Computing Higher-Order Moments Online". http://people.xiph.org/~tterribe/notes/homs.html] extends Chan's formulae to calculating the third and fourth central moments, needed for example when estimating skewness and kurtosis:

:M_3^X = M_3^A + M_3^B + \delta^3\,\frac{N^A N^B (N^A - N^B)}{(N^X)^2} + 3\delta\,\frac{N^A M_2^B - N^B M_2^A}{N^X}

:\begin{align}M_4^X = M_4^A + M_4^B &+ \delta^4\,\frac{N^A N^B \left((N^A)^2 - N^A N^B + (N^B)^2\right)}{(N^X)^3} \\ &+ 6\delta^2\,\frac{(N^A)^2 M_2^B + (N^B)^2 M_2^A}{(N^X)^2} + 4\delta\,\frac{N^A M_3^B - N^B M_3^A}{N^X}\end{align}

Here the M_k are again the sums of powers of differences from the mean, \sum_i (x_i - \overline{x})^k, giving

:skewness: g_1 = \frac{\sqrt{n}\, M_3}{M_2^{3/2}},

:kurtosis: g_2 = \frac{n\, M_4}{M_2^2}.

For the incremental case (i.e., B = {x}), this simplifies to:

:\delta = x - m

:m' = m + \frac{\delta}{n}

:M_2' = M_2 + \delta^2\,\frac{n-1}{n}

:M_3' = M_3 + \delta^3\,\frac{(n-1)(n-2)}{n^2} - \frac{3\delta M_2}{n}

:M_4' = M_4 + \frac{\delta^4 (n-1)(n^2 - 3n + 3)}{n^3} + \frac{6\delta^2 M_2}{n^2} - \frac{4\delta M_3}{n}

By preserving the value \delta/n, only one division operation per update is needed, so the higher-order statistics can be calculated at little incremental cost.
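
A Python sketch of this incremental update of M_2, M_3 and M_4, keeping delta/n so that only one division is performed per element (note that M_4 must be updated before M_3, and M_3 before M_2, since each update uses the old values of the lower moments):

def online_moments(data):
    # Sketch only; assumes at least two distinct values so M2 > 0 at the end.
    n, mean, M2, M3, M4 = 0, 0.0, 0.0, 0.0, 0.0
    for x in data:
        n1 = n
        n += 1
        delta = x - mean
        delta_n = delta / n           # the only division per element
        delta_n2 = delta_n * delta_n
        term1 = delta * delta_n * n1  # equals delta^2 * (n - 1) / n
        mean += delta_n
        # Update M4 and M3 before M2, since they use the old lower moments.
        M4 += term1 * delta_n2 * (n * n - 3 * n + 3) + 6 * delta_n2 * M2 - 4 * delta_n * M3
        M3 += term1 * delta_n * (n - 2) - 3 * delta_n * M2
        M2 += term1
    skewness = n ** 0.5 * M3 / M2 ** 1.5
    kurtosis = n * M4 / (M2 * M2)
    return mean, M2, M3, M4, skewness, kurtosis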

Example

Assume that all floating point operations use standard IEEE 754 double-precision arithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate of the population variance is 30. Both Algorithm I and Algorithm II compute these values correctly. Next consider the sample (10^8 + 4, 10^8 + 7, 10^8 + 13, 10^8 + 16), which gives rise to the same estimated variance as the first sample. Algorithm II computes this variance estimate correctly, but Algorithm I returns 29.333333333333332 instead of 30. While this loss of precision may be tolerable and viewed as a minor flaw of Algorithm I, it is easy to find data that reveal a major flaw in the naive algorithm: take the sample to be (10^9 + 4, 10^9 + 7, 10^9 + 13, 10^9 + 16). Again the estimated population variance of 30 is computed correctly by Algorithm II, but the naive algorithm now computes it as −170.66666666666666. This is a serious problem with Algorithm I, since the variance can, by definition, never be negative.
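
On a typical IEEE 754 double-precision system, the failure described above can be reproduced with the illustrative helpers naive_variance and two_pass_variance sketched in earlier sections:

small = [4, 7, 13, 16]
shifted = [1e9 + x for x in small]

print(naive_variance(small), two_pass_variance(small))      # both print 30.0
print(naive_variance(shifted), two_pass_variance(shifted))  # the naive result is badly wrong (negative)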

See also

*Computational formula for the variance
