Cross entropy

In information theory, the cross entropy between two probability distributions p and q over the same set of events measures the average number of bits needed to identify an event drawn from the set, if the coding scheme used is optimised for an assumed distribution q rather than for the "true" distribution p.

The cross entropy for two distributions p and q over the same probability space is thus defined as follows:

\mathrm{H}(p, q) = \mathrm{E}_p[-\log q] = \mathrm{H}(p) + D_{\mathrm{KL}}(p \| q)\!,

where H(p) is the entropy of p, and D_KL(p‖q) is the Kullback–Leibler divergence of q from p (also known as the relative entropy).

For discrete p and q this means

\mathrm{H}(p, q) = -\sum_x p(x)\, \log q(x). \!
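
As a concrete numerical illustration (a minimal sketch in Python; the two three-outcome distributions below are arbitrary examples, not taken from the text), the discrete formula can be evaluated directly and checked against the identity H(p,q) = H(p) + D_KL(p‖q):

    import math

    # Two example distributions over the same three-outcome space
    # (illustrative values only).
    p = [0.5, 0.25, 0.25]   # "true" distribution
    q = [0.4, 0.4, 0.2]     # model/coding distribution

    # Cross entropy in bits: H(p, q) = -sum_x p(x) log2 q(x)
    cross_entropy = -sum(px * math.log2(qx) for px, qx in zip(p, q))

    # Entropy of p and the Kullback-Leibler divergence D_KL(p || q)
    entropy_p = -sum(px * math.log2(px) for px in p)
    kl_pq = sum(px * math.log2(px / qx) for px, qx in zip(p, q))

    print(cross_entropy)       # about 1.57 bits
    print(entropy_p + kl_pq)   # same value: H(p) + D_KL(p || q)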

The situation for continuous distributions is analogous:

\mathrm{H}(p, q) = -\int_X p(x)\, \log q(x)\, dx.
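
As a sketch of the continuous case (the two Gaussian densities and the use of SciPy's quad routine are illustrative choices, not part of the original text), the integral can be approximated numerically and compared with the standard closed form for the cross entropy between two normal distributions:

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    # Example densities p = N(0, 1) and q = N(1, 2^2); results are in nats
    mu_p, sigma_p = 0.0, 1.0
    mu_q, sigma_q = 1.0, 2.0

    # Numerical estimate of H(p, q) = -integral of p(x) log q(x) dx
    integrand = lambda x: -norm.pdf(x, mu_p, sigma_p) * norm.logpdf(x, mu_q, sigma_q)
    h_numeric, _ = quad(integrand, -np.inf, np.inf)

    # Closed form for two Gaussians:
    # H(p, q) = 0.5 ln(2 pi sigma_q^2) + (sigma_p^2 + (mu_p - mu_q)^2) / (2 sigma_q^2)
    h_closed = (0.5 * np.log(2 * np.pi * sigma_q**2)
                + (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2))

    print(h_numeric, h_closed)   # the two values agree (about 1.86 nats)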

NB: The notation H(p,q) is sometimes used for both the cross entropy and the joint entropy of p and q.

Estimation

There are many situations where the cross entropy needs to be measured but the true distribution p is unknown. An example is language modeling, where a model is created from a training set T, and its cross entropy is then measured on a test set to assess how accurately the model predicts the test data. In this example, p is the true distribution of words in the corpus, and q is the distribution of words as predicted by the model. Since the true distribution is unknown, the cross entropy cannot be calculated directly. In such cases, it is estimated using the following formula:


H(T,q) = -\sum_{i=1}^N \frac{1}{N} \log_2 q(x_i)

where N is the size of the test set, and q(x_i) is the probability the model, estimated from the training set, assigns to the i-th test event; the sum runs over the N events of the test set.
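
A minimal sketch of this estimate (the toy training and test texts, the unigram model and the add-one smoothing below are invented purely for illustration): a model q is fitted on a training set and its cross entropy, in bits per word, is then measured on a held-out test set.

    import math
    from collections import Counter

    # Toy corpora (illustrative only)
    train = "the cat sat on the mat the cat ate".split()
    test  = "the cat sat on the mat".split()

    # Unigram model q estimated from the training set, with add-one
    # (Laplace) smoothing so unseen test words get nonzero probability.
    counts = Counter(train)
    vocab = set(train) | set(test)

    def q(word):
        return (counts[word] + 1) / (len(train) + len(vocab))

    # H(T, q) = -(1/N) sum_i log2 q(x_i), summed over the N test events
    N = len(test)
    cross_entropy = -sum(math.log2(q(w)) for w in test) / N
    print(round(cross_entropy, 3), "bits per word")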

Cross-entropy minimization

Cross-entropy minimization is frequently used in optimization and rare-event probability estimation; see the cross-entropy method.

When comparing a distribution q against a fixed reference distribution p, cross entropy and KL divergence are identical up to an additive constant (since p is fixed): both take on their minimal values when p = q, which is 0 for KL divergence and H(p) for cross entropy. In the engineering literature, the principle of minimising KL divergence (Kullback's "Principle of Minimum Discrimination Information") is often called the Principle of Minimum Cross-Entropy (MCE), or Minxent.
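
As a small numerical check of this claim (a sketch; the fixed distribution p, the softmax parametrisation of q and the gradient-descent loop are arbitrary illustration choices), minimising H(p,q) over q drives q towards p and the objective towards H(p):

    import numpy as np

    # Fixed reference distribution p (illustrative values)
    p = np.array([0.7, 0.2, 0.1])

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Parametrise q through unconstrained logits z; for this objective the
    # gradient of H(p, softmax(z)) with respect to z is softmax(z) - p.
    z = np.zeros(3)
    for _ in range(2000):
        z -= 0.1 * (softmax(z) - p)

    q = softmax(z)
    print(q)                        # approaches p = [0.7, 0.2, 0.1]
    print(-np.sum(p * np.log(q)))   # H(p, q) approaches ...
    print(-np.sum(p * np.log(p)))   # ... the minimum value H(p)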

However, as discussed in the article on the Kullback–Leibler divergence, sometimes the distribution q is the fixed prior reference distribution, and the distribution p is optimised to be as close to q as possible, subject to some constraint. In this case the two minimisations are not equivalent. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be D_KL(p‖q) rather than H(p,q).
