Bayes factor

In statistics, the use of Bayes factors is a Bayesian alternative to classical hypothesis testing.[1][2]

Definition

Consider a model selection problem in which we must choose between two models, M1 and M2, on the basis of an observed data vector x. The Bayes factor K is given by

:K = \frac{p(x\mid M_1)}{p(x\mid M_2)},

where p(x|M_i) is called the marginal likelihood for model i. This is similar to a likelihood-ratio test, but instead of maximizing the likelihood, Bayesians average it over the parameters. Generally, the models M1 and M2 will be parametrized by vectors of parameters θ1 and θ2; thus K is given by

:K = \frac{p(x\mid M_1)}{p(x\mid M_2)} = \frac{\int p(\theta_1\mid M_1)\,p(x\mid \theta_1, M_1)\,d\theta_1}{\int p(\theta_2\mid M_2)\,p(x\mid \theta_2, M_2)\,d\theta_2}.

The logarithm of K is sometimes called the weight of evidence given by x for M1 over M2, measured in bits, nats, or bans, according to whether the logarithm is taken to base 2, base e, or base 10.
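As a concrete illustration of this averaging, here is a minimal numerical sketch; the data, priors, and models below (a binomial likelihood, a point null against a uniform prior) are illustrative assumptions, not part of the definition:

    # A minimal sketch: computing a Bayes factor by numerically averaging
    # each model's likelihood over its prior (illustrative models, made-up data).
    from scipy import integrate
    from scipy.stats import beta, binom

    x, n = 7, 10  # observed successes out of n trials (made-up data)

    def marginal_likelihood(prior_pdf):
        # p(x | M) = integral over theta of p(theta | M) * p(x | theta, M)
        val, _ = integrate.quad(lambda t: prior_pdf(t) * binom.pmf(x, n, t), 0.0, 1.0)
        return val

    p_x_m1 = binom.pmf(x, n, 0.5)                 # M1: theta fixed at 1/2
    p_x_m2 = marginal_likelihood(beta(1, 1).pdf)  # M2: uniform prior on theta

    print(p_x_m1 / p_x_m2)  # the Bayes factor K for M1 over M2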

Interpretation

A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis') and considers only evidence against it. Harold Jeffreys gave a scale for the interpretation of K:[3]

    K                  Decibans    Bits         Strength of evidence
    < 1                < 0         < 0          Negative (supports M2)
    1 to 10^(1/2)      0 to 5      0 to 1.7     Barely worth mentioning
    10^(1/2) to 10     5 to 10     1.7 to 3.3   Substantial
    10 to 10^(3/2)     10 to 15    3.3 to 5.0   Strong
    10^(3/2) to 100    15 to 20    5.0 to 6.6   Very strong
    > 100              > 20        > 6.6        Decisive

The second column gives the corresponding weights of evidence in decibans (tenths of a power of 10); bits are added in the third column for clarity. According to I. J. Good, a change in the weight of evidence of 1 deciban or about 1/3 of a bit (i.e. a change in odds from evens to about 5:4) is about as fine a distinction as humans can reasonably perceive in their degree of belief in a hypothesis in everyday use.
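The unit conversions used in the table are plain logarithms; a small sketch:

    # Expressing a Bayes factor K as a weight of evidence, in the units of
    # Jeffreys' table above: decibans (10 * log10 K) and bits (log2 K).
    import math

    def weight_of_evidence(k):
        return {"decibans": 10 * math.log10(k), "bits": math.log2(k)}

    print(weight_of_evidence(10 ** 0.5))  # 5 decibans, about 1.7 bits
    print(weight_of_evidence(5 / 4))      # about 1 deciban, 1/3 of a bit: Good's threshold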

The use of Bayes factors or classical hypothesis testing takes place in the context of inference rather than decision-making under uncertainty. That is, we merely wish to find out which hypothesis is true, rather than actually making a decision on the basis of this information. Frequentist statistics draws a strong distinction between these two because classical hypothesis tests are not coherent in the Bayesian sense. Bayesian procedures, including Bayes factors, are coherent, so there is no need to draw such a distinction; inference is then simply regarded as a special case of decision-making under uncertainty in which the resulting action is to report a value. For decision-making, Bayesian statisticians might use a Bayes factor combined with a prior distribution and a loss function associated with making the wrong choice. In an inference context the loss function would take the form of a scoring rule. The use of a logarithmic scoring rule, for example, leads to expected utility taking the form of the Kullback-Leibler divergence; if the logarithms are to base 2 this is equivalent to Shannon information.
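As a sketch of that last point: if the true distribution is p and the reported distribution is q, the expected logarithmic score is

:\int p(x)\log q(x)\,dx = -H(p) - D_{\mathrm{KL}}(p \,\|\, q),

where H(p) is the entropy of p, so maximising the expected score is equivalent to minimising the Kullback-Leibler divergence; with logarithms to base 2, both are measured in Shannon bits.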

Example

Suppose we have a random variable that produces either a success or a failure. We want to compare a model M1, where the probability of success is q = 1/2, with another model M2, where q is completely unknown and we take a prior distribution for q that is uniform on [0,1]. We take a sample of 200 and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution:

:{200 \choose 115}\,q^{115}(1-q)^{85}.

So we have

:P(X=115\mid M_1)={200 \choose 115}\left(\frac{1}{2}\right)^{200}=0.00595\ldots,

but

:P(X=115\mid M_2)=\int_0^1 {200 \choose 115}\,q^{115}(1-q)^{85}\,dq = \frac{1}{201} = 0.00497\ldots

The ratio is then K = 1.197..., which is "barely worth mentioning" even though it points very slightly towards M1.
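These numbers are easy to check directly; a quick sketch (the integral has the closed form 1/(n+1) under a uniform prior):

    # Reproducing the example: 115 successes in 200 trials.
    from math import comb

    n, s = 200, 115
    p_m1 = comb(n, s) * 0.5 ** n  # P(X = 115 | M1), q fixed at 1/2
    p_m2 = 1 / (n + 1)            # uniform prior: the integral reduces to 1/201
    print(p_m1)         # 0.00595...
    print(p_m2)         # 0.00497...
    print(p_m1 / p_m2)  # K = 1.197..., "barely worth mentioning"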

This is not the same as a classical likelihood-ratio test, which would have found the maximum likelihood estimate for q, namely 115/200 = 0.575, and from that obtained a ratio of 0.1045..., pointing towards M2. Alternatively, Edwards's "exchange rate" of two units of support (log-likelihood) per degree of freedom suggests that M2 is (just) preferable to M1, since 0.1045... = e^(-2.25...) and 2.25 > 2: the extra likelihood compensates for the unknown parameter in M2.
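A sketch of that computation:

    # The classical likelihood-ratio comparison described above.
    from math import comb, log

    n, s = 200, 115
    q_hat = s / n  # maximum likelihood estimate, 0.575
    lik_m1 = comb(n, s) * 0.5 ** n
    lik_m2_max = comb(n, s) * q_hat ** s * (1 - q_hat) ** (n - s)
    ratio = lik_m1 / lik_m2_max
    print(ratio)        # 0.1045...
    print(-log(ratio))  # 2.25... > 2, so Edwards's rule (just) favours M2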

A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a more dramatic result, saying that M1 could be rejected at the 5% significance level: the probability of getting 115 or more successes from a sample of 200 if q = 1/2 is 0.0200..., and the two-tailed probability of a result as extreme as or more extreme than 115 is 0.0400... Note that 115 is more than two standard deviations away from 100.
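Those tail probabilities can be verified with any binomial CDF; for example, a sketch using scipy:

    # Checking the frequentist tail probabilities quoted above.
    from scipy.stats import binom

    p_upper = binom.sf(114, 200, 0.5)  # P(X >= 115 | q = 1/2)
    print(p_upper)      # 0.0200...
    print(2 * p_upper)  # 0.0400..., the two-tailed p-value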

"M"2 is a more complex model than "M"1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.

See also

* Bayesian model comparison
* Marginal likelihood

References

1. Goodman S. "Toward evidence-based medical statistics. 1: The P value fallacy". Ann Intern Med 1999;130(12):995–1004. PMID 10383371.
2. Goodman S. "Toward evidence-based medical statistics. 2: The Bayes factor". Ann Intern Med 1999;130(12):1005–13. PMID 10383350.
3. Jeffreys H. The Theory of Probability (3rd ed.). Oxford, 1961; p. 432.

External links

* [http://www.cs.ucsd.edu/users/goguen/courses/275f00/stat.html Bayesian critique of classical hypothesis testing]
* [http://ourworld.compuserve.com/homepages/rajm/jspib.htm Why should clinicians care about Bayesian methods?]
* [http://pcl.missouri.edu/bayesfactor Web application to calculate Bayes factors for t-tests]

