Conditional probability distribution

Given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X is the probability distribution of Y when X is known to be a particular value. If the conditional distribution of Y given X is a continuous distribution, then its probability density function is known as the conditional density function.

The properties of a conditional distribution, such as the moments, are often called by corresponding names such as the conditional mean and conditional variance.

Discrete distributions

For discrete random variables, the conditional probability mass function of Y given (the occurrence of) the value x of X can be written, using the definition of conditional probability, as:

p_Y(y \mid X = x) = P(Y = y \mid X = x) = \frac{P(X = x \cap Y = y)}{P(X = x)}.

As seen from the definition, and because P(X = x) appears in the denominator, it is necessary that P(X = x) > 0.
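
As a concrete illustration, the following Python sketch computes a conditional pmf from a small joint pmf table; the joint probabilities are illustrative numbers chosen for this example, not taken from any source.

    # Conditional pmf p_Y(y | X = x) from a small joint pmf table.
    # The joint probabilities are illustrative numbers for this example.
    joint = {
        # (x, y): P(X = x, Y = y)
        (0, 0): 0.10, (0, 1): 0.30,
        (1, 0): 0.25, (1, 1): 0.35,
    }

    def conditional_pmf_y_given_x(joint, x):
        """Return {y: P(Y = y | X = x)} as P(X = x, Y = y) / P(X = x)."""
        p_x = sum(p for (xi, _), p in joint.items() if xi == x)
        if p_x == 0:
            raise ValueError("conditioning requires P(X = x) > 0")
        return {y: p / p_x for (xi, y), p in joint.items() if xi == x}

    print(conditional_pmf_y_given_x(joint, 1))  # {0: 0.4166..., 1: 0.5833...}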

The relation with the probability distribution of X given Y is:

P(Y = y \mid X = x)\, P(X = x) = P(X = x \cap Y = y) = P(X = x \mid Y = y)\, P(Y = y).
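
Dividing through by P(X = x), which is positive by assumption, yields Bayes' theorem for the conditional mass functions:

P(Y = y \mid X = x) = \frac{P(X = x \mid Y = y)\, P(Y = y)}{P(X = x)}.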

Continuous distributions

Similarly, for continuous random variables, the conditional probability density function of Y given (the occurrence of) the value x of X can be written as

f_Y(y \mid X=x) = \frac{f_{X, Y}(x, y)}{f_X(x)},

where f_{X,Y}(x, y) gives the joint density of X and Y, while f_X(x) gives the marginal density of X. As in the discrete case, it is necessary that f_X(x) > 0.
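
As a concrete check, the following Python sketch (using SciPy) forms f_Y(y | X = x) as the ratio of joint to marginal density for a standard bivariate normal and compares it with the known closed form Y | X = x ~ N(ρx, 1 − ρ²); the correlation value is an arbitrary choice for the example.

    # Conditional density f_Y(y | X = x) as f_{X,Y}(x, y) / f_X(x) for a
    # standard bivariate normal, checked against the known closed form
    # Y | X = x ~ N(rho * x, 1 - rho**2). rho is an arbitrary choice.
    import numpy as np
    from scipy.stats import multivariate_normal, norm

    rho = 0.6
    joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

    def f_y_given_x(y, x):
        """Ratio of joint to marginal density; requires f_X(x) > 0."""
        return joint.pdf([x, y]) / norm.pdf(x)

    x, y = 1.0, 0.5
    closed_form = norm.pdf(y, loc=rho * x, scale=np.sqrt(1 - rho**2))
    print(np.isclose(f_y_given_x(y, x), closed_form))  # True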

The relation with the probability distribution of X given Y is given by:

f_Y(y \mid X = x)\, f_X(x) = f_{X,Y}(x, y) = f_X(x \mid Y = y)\, f_Y(y).

The concept of the conditional distribution of a continuous random variable is not as intuitive as it might seem: Borel's paradox shows that conditional probability density functions need not be invariant under coordinate transformations.

Relation to independence

If for discrete random variables P(Y = y | X = x) = P(Y = y) for all x and y, or for continuous random variables f_Y(y | X = x) = f_Y(y) for all x and y, then Y is said to be independent of X (and this implies that X is also independent of Y).
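
The following Python sketch verifies this criterion on a small joint pmf table that is constructed, for the example, as the product of its marginals, so the check succeeds by design.

    # Independence check: P(Y = y | X = x) == P(Y = y) for all x, y.
    import itertools

    p_x = {0: 0.4, 1: 0.6}
    p_y = {0: 0.3, 1: 0.7}
    joint = {(x, y): p_x[x] * p_y[y] for x, y in itertools.product(p_x, p_y)}

    independent = all(
        abs(joint[(x, y)] / p_x[x] - p_y[y]) < 1e-12 for (x, y) in joint
    )
    print(independent)  # True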

Properties

Seen as a function of y for fixed x, P(Y = y | X = x) is a probability mass function, so its sum over all y (or its integral, in the case of a conditional probability density) equals 1. Seen as a function of x for fixed y, it is a likelihood function, and its sum over all x need not equal 1.
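
The following Python sketch illustrates both facts on the same illustrative joint pmf table used in the discrete example above.

    # The same joint table as in the discrete example (illustrative numbers).
    joint = {
        (0, 0): 0.10, (0, 1): 0.30,
        (1, 0): 0.25, (1, 1): 0.35,
    }
    p_x = {x: sum(p for (xi, _), p in joint.items() if xi == x) for x in (0, 1)}
    cond = {(x, y): joint[(x, y)] / p_x[x] for (x, y) in joint}

    # As a function of y for fixed x, the conditional pmf sums to 1.
    print(sum(cond[(1, y)] for y in (0, 1)))  # 1.0
    # As a function of x for fixed y, it is a likelihood; no such constraint.
    print(sum(cond[(x, 1)] for x in (0, 1)))  # 0.75 + 0.5833... = 1.3333...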
