Von Mises distribution

Probability distribution
name = von Mises
type = density
pdf plot caption = The support is chosen to be [−π, π] with μ = 0
cdf plot caption = The support is chosen to be [−π, π] with μ = 0
parameters = μ real, κ > 0
support = x ∈ any interval of length 2π
pdf = \frac{e^{\kappa\cos(x-\mu)}}{2\pi I_0(\kappa)}
cdf = (not analytic; see text)
mean = μ
median = μ
mode = μ
variance = \textrm{var}(z) = 1 - I_1(\kappa)^2/I_0(\kappa)^2 (circular)
entropy = -\kappa\frac{I_1(\kappa)}{I_0(\kappa)} + \ln[2\pi I_0(\kappa)] (differential)

In probability theory and statistics, the von Mises distribution (also known as the circular normal distribution) is a continuous probability distribution on the circle. It may be thought of as the circular analogue of the normal distribution. It is used in applications of directional statistics, where a distribution of angles is found that results from the addition of many small independent angular deviations, such as target sensing or grain orientation in a granular material. If "x" is the angular random variable, it is often useful to think of the von Mises distribution as a distribution of the complex numbers "z" = e^{ix} rather than of the real numbers "x". The von Mises distribution is a special case of the von Mises–Fisher distribution on the "N"-dimensional sphere.

The von Mises probability density function for the angle "x" is given by:

:f(x|\mu,\kappa)=\frac{e^{\kappa\cos(x-\mu)}}{2\pi I_0(\kappa)}

where I_0 is the modified Bessel function of order 0.

The parameters μ and 1/κ are analogous to μ and σ² (the mean and variance) in the normal distribution:
* μ is a measure of location (the distribution is clustered around μ), and
* κ is a measure of concentration (a reciprocal measure of dispersion, so 1/κ is analogous to σ²).
** If κ is zero, the distribution is uniform, and for small κ it is close to uniform.
** If κ is large, the distribution becomes very concentrated about the angle μ, with κ being a measure of the concentration. In fact, as κ increases, the distribution approaches a normal distribution in "x" with mean μ and variance 1/κ.

The probability density can be expressed as a series of Bessel functions (see Abramowitz and Stegun [http://www.math.sfu.ca/~cbm/aands/page_376.htm §9.6.34]):

:f(x|\mu,\kappa)=\frac{1}{2\pi}\left(1+\frac{2}{I_0(\kappa)}\sum_{j=1}^\infty I_j(\kappa)\cos[j(x-\mu)]\right)

where "I""j"("x") is the modified Bessel function of order "j". The cumulative distribution function is not analytic and is best found by integrating the above series. The indefinite integral of the probability density is:

:\Phi(x|\mu,\kappa)=\int f(t|\mu,\kappa)\,dt=\frac{1}{2\pi}\left(x+\frac{2}{I_0(\kappa)}\sum_{j=1}^\infty I_j(\kappa)\frac{\sin[j(x-\mu)]}{j}\right)

The cumulative distribution function will be a function of the lower limit of integration x_0:

:F(x|\mu,\kappa)=\Phi(x|\mu,\kappa)-\Phi(x_0|\mu,\kappa)
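Since the CDF has no closed form, one practical way to evaluate it is to truncate the Bessel series above and difference Φ at x and at the lower limit x_0. The sketch below assumes NumPy and SciPy; the names vonmises_Phi and vonmises_cdf and the truncation at 50 terms are illustrative choices, not a standard routine.

 import numpy as np
 from scipy.special import i0, iv

 def vonmises_Phi(x, mu, kappa, n_terms=50):
     # Truncated series for the indefinite integral Phi(x | mu, kappa)
     j = np.arange(1, n_terms + 1)
     series = np.sum(iv(j, kappa) * np.sin(j * (x - mu)) / j)
     return (x + 2.0 * series / i0(kappa)) / (2.0 * np.pi)

 def vonmises_cdf(x, mu, kappa, x0=-np.pi, n_terms=50):
     # F(x | mu, kappa) = Phi(x) - Phi(x0), with x0 the lower limit of integration
     return vonmises_Phi(x, mu, kappa, n_terms) - vonmises_Phi(x0, mu, kappa, n_terms)

 print(vonmises_cdf(np.pi, mu=0.0, kappa=2.0))   # total probability, ~1
 print(vonmises_cdf(0.0, mu=0.0, kappa=2.0))     # ~0.5 by symmetry about mu = 0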

Moments

The moments of the von Mises distribution are usually calculated as the moments of "z" = "e""ix" rather than the angle "x" itself. These moments are referred to as "circular moments". The variance calculated from these moments is referred to as the "circular variance". The one exception to this is that the "mean" usually refers to the argument of the circular mean, rather than the circular mean itself.

The "n"th raw moment of "z" is:

:m_n=\langle z^n\rangle=\oint z^n\,f(x|\mu,\kappa)\,dx=\frac{I_n(\kappa)}{I_0(\kappa)}e^{in\mu}

where the integral is over any interval of length 2π. In calculating the above integral, we use the fact that z^n = cos(nx) + i sin(nx) and the Bessel function identity (see Abramowitz and Stegun [http://www.math.sfu.ca/~cbm/aands/page_376.htm §9.6.19]):

:I_n(\kappa)=\frac{1}{\pi}\int_0^\pi e^{\kappa\cos(x)}\cos(nx)\,dx.
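This identity is easy to confirm numerically; the following sketch (assuming SciPy, with n and κ chosen arbitrarily) compares the quadrature value of the integral with SciPy's modified Bessel function I_n:

 import numpy as np
 from scipy.integrate import quad
 from scipy.special import iv

 kappa, n = 2.5, 3   # arbitrary illustrative values
 integral, _ = quad(lambda x: np.exp(kappa * np.cos(x)) * np.cos(n * x), 0.0, np.pi)
 print(integral / np.pi)   # (1/pi) * integral_0^pi exp(kappa*cos x) cos(nx) dx
 print(iv(n, kappa))       # I_n(kappa) from SciPy; should match closely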

The mean of "z" is then just

:m_1=\frac{I_1(\kappa)}{I_0(\kappa)}e^{i\mu}

and the "mean" value of "x" is then taken to be the argument μ. This is the "average" direction of the angular random variables. The variance of "z", or the circular variance of "x" is:

:\textrm{var}(z)=\langle|z|^2\rangle-|\langle z\rangle|^2=1-\frac{I_1(\kappa)^2}{I_0(\kappa)^2}.
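As a rough check of these expressions, the circular mean direction and var(z) can be computed from the closed form and compared with a Monte Carlo estimate based on samples of z = e^{ix}. This is only a sketch: the sample size and the parameter values are arbitrary, and it assumes SciPy's vonmises sampler.

 import numpy as np
 from scipy.special import i0, i1
 from scipy.stats import vonmises

 mu, kappa = 0.7, 2.0                       # arbitrary illustrative values

 # Closed-form first circular moment m_1 and var(z) = 1 - |m_1|^2 as defined above
 m1 = (i1(kappa) / i0(kappa)) * np.exp(1j * mu)
 print(np.angle(m1), 1.0 - np.abs(m1) ** 2)

 # Monte Carlo estimate from samples z = exp(i*x)
 x = vonmises.rvs(kappa, loc=mu, size=200_000, random_state=0)
 z_mean = np.exp(1j * x).mean()
 print(np.angle(z_mean), 1.0 - np.abs(z_mean) ** 2)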

Limiting behavior

In the limit of large κ the distribution becomes a normal distribution:

:\lim_{\kappa\rightarrow\infty}f(x|\mu,\kappa)=\frac{\exp\left[\frac{-(x-\mu)^2}{2\sigma^2}\right]}{\sigma\sqrt{2\pi}}

where σ² = 1/κ. In the limit of small κ it becomes a uniform distribution:

:\lim_{\kappa\rightarrow 0}f(x|\mu,\kappa)=\mathrm{U}(x)

where the interval for the uniform distribution "U"("x") is the chosen interval of length 2π.
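The large-κ limit can also be seen numerically: for a moderately large concentration the von Mises density is already close to a normal density with variance 1/κ. A minimal sketch assuming SciPy (the value κ = 100 is an arbitrary illustration):

 import numpy as np
 from scipy.stats import norm, vonmises

 mu, kappa = 0.0, 100.0                     # large concentration, chosen for illustration
 x = np.linspace(-0.5, 0.5, 5)

 print(vonmises.pdf(x, kappa, loc=mu))                      # von Mises density
 print(norm.pdf(x, loc=mu, scale=1.0 / np.sqrt(kappa)))     # normal with sigma^2 = 1/kappa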

See also

* The SOCR Resource provides an [http://socr.ucla.edu/htmls/SOCR_Distributions.html interactive interface to the von Mises distribution].

References

* Abramowitz, M. and Stegun, I. A. (eds.), Handbook of Mathematical Functions, National Bureau of Standards, 1964; reprinted Dover Publications, 1965. ISBN 0-486-61272-4.
* Best, D. and Fisher, N. (1979). "Efficient simulation of the von Mises distribution." Applied Statistics, 28, 152–157.
* Evans, M., Hastings, N., and Peacock, B., Statistical Distributions, 2nd ed., John Wiley and Sons, 1993 (chapter 39). ISBN 0-471-55951-2.
* Evans, M., Hastings, N., and Peacock, B., "von Mises Distribution," Ch. 41 in Statistical Distributions, 3rd ed., Wiley, New York, 2000.
* Fisher, Nicholas I., Statistical Analysis of Circular Data, Cambridge University Press, New York, 1993.
* Hill, "Algorithm 518: Incomplete Bessel Function I0: The von Mises Distribution," ACM Transactions on Mathematical Software, Vol. 3, No. 3, September 1977, pp. 279–284.
* Mardia, "Algorithm AS 86: The von Mises Distribution Function," Applied Statistics, 24, 1975, pp. 268–272.
* Mardia, Kanti V., and Jupp, Peter E., Directional Statistics, Wiley, New York, 1999.

