Inverse probability

In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable.

Today, the general problem of determining an unobserved variable (by whatever method) is called inferential statistics. The method of inverse probability, which assigns a probability distribution to an unobserved variable, is called Bayesian probability. The "distribution" of an unobserved variable given data alone is the likelihood function (which is not a probability distribution), while the distribution of an unobserved variable given both data and a prior distribution is the posterior distribution. The development of the field and its terminology from "inverse probability" to "Bayesian probability" is described by Fienberg (2006).[1] The term "Bayesian", which displaced "inverse probability", was in fact introduced by R. A. Fisher as a derogatory term.[citation needed]

The term "inverse probability" appears in an 1837 paper of De Morgan, in reference to Laplace's method of probability, which Laplace developed in a 1774 paper (independently discovering and popularizing Bayesian methods) and in his 1812 book; the term "inverse probability" itself does not occur in either.[1]

Inverse probability, variously interpreted, was the dominant approach to statistics until the development of frequentism in the early 20th century by R. A. Fisher, Jerzy Neyman and Egon Pearson.[1] Following the development of frequentism, the terms frequentist and Bayesian developed to contrast these approaches, and became common in the 1950s.

Details

In modern terms, given a probability distribution p(x|θ) for an observable quantity x conditional on an unobserved variable θ, the "inverse probability" is the posterior distribution p(θ|x), which depends both on the likelihood function (the inversion of the probability distribution) and a prior distribution. The distribution p(x|θ) itself is called the direct probability.
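In modern notation, the inverse probability is obtained from the direct probability by Bayes' theorem, with p(θ) denoting the prior:

```latex
p(\theta \mid x) \;=\; \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta')\, p(\theta')\, \mathrm{d}\theta'}
```

The numerator is the likelihood times the prior; the integral in the denominator normalizes the result so that it is a genuine probability distribution over θ.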

The inverse probability problem (in the 18th and 19th centuries) was the problem of estimating a parameter from experimental data in the experimental sciences, especially astronomy and biology. A simple example would be the problem of estimating the position of a star in the sky (at a certain time on a certain date) for purposes of navigation. Given the data, one must estimate the true position (probably by averaging). This problem would now be considered one of inferential statistics.
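As a sketch of the star-position example: assuming Gaussian measurement error with a known standard deviation and a flat (improper) prior on the true position, the posterior for the position is itself Gaussian, centered at the sample average. The function name, the noise level, and the sample readings below are all illustrative assumptions, not part of the historical problem.

```python
import math

def posterior_for_position(measurements, sigma):
    """Posterior for a true position theta, given noisy measurements.

    Assumes each measurement is theta plus Normal(0, sigma) noise and a
    flat prior on theta; the posterior is then Normal(mean, sigma^2 / n),
    so the "inverse probability" estimate coincides with simple averaging.
    Returns the posterior mean and posterior standard deviation.
    """
    n = len(measurements)
    mean = sum(measurements) / n
    return mean, sigma / math.sqrt(n)

# Illustrative angular readings (degrees) with an assumed noise of 0.5:
obs = [10.2, 9.8, 10.1, 10.3, 9.6]
mu, sd = posterior_for_position(obs, sigma=0.5)
```

Here averaging is not an ad hoc recipe: it falls out of the posterior, which also quantifies the remaining uncertainty (the posterior standard deviation shrinks as 1/√n).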

The terms "direct probability" and "inverse probability" were in use until the middle part of the 20th century, when the terms "likelihood function" and "posterior distribution" became prevalent.

References

1. Fienberg, Stephen E. (2006). "When Did Bayesian Inference Become 'Bayesian'?". Bayesian Analysis 1 (1): 1–40.


Wikimedia Foundation. 2010.
