**Discrete probability distribution**
In probability theory, a probability distribution is called **discrete** if it is characterized by a probability mass function. Thus, the distribution of a random variable *X* is discrete, and *X* is then called a **discrete random variable**, if

$\sum_u \Pr(X=u) = 1$

as *u* runs through the set of all possible values of *X*.
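This normalization condition can be checked directly for a finite distribution. A minimal sketch in Python; the fair-die probability mass function used here is an illustrative assumption, not from the text:

```python
# Hypothetical example: the probability mass function of a fair six-sided
# die assigns probability 1/6 to each of the values 1..6.
from fractions import Fraction

pmf = {u: Fraction(1, 6) for u in range(1, 7)}

# The defining property of a discrete distribution: the probabilities of
# all possible values sum to exactly 1.
total = sum(pmf.values())
print(total)  # 1
```

Using exact rational arithmetic avoids the floating-point rounding that would otherwise make the sum only approximately 1.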

If a random variable is discrete, then the set of all values that it can assume with non-zero probability is finite or countably infinite. This is because the sum of uncountably many positive real numbers (which is defined as the least upper bound of the set of all finite partial sums) always diverges to infinity. Typically, this set of possible values is topologically discrete, in the sense that all its points are isolated points. However, there are discrete random variables for which this countable set is dense on the real line.

The Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution are among the most well-known discrete probability distributions.

**Alternative description**

Equivalently to the above, a discrete random variable can be defined as a random variable whose

cumulative distribution function (cdf) increases only by jump discontinuities; that is, the cdf jumps to a higher value at certain points and is constant between those jumps. The points where jumps occur are precisely the values which the random variable may take. The number of such jumps may be finite or countably infinite. The set of locations of such jumps need not be topologically discrete; for example, the cdf might jump at each rational number.

**Representation in terms of indicator functions**

For a discrete random variable *X*, let $u_0, u_1, \dots$ be the values it can take with non-zero probability. Denote

$\Omega_i = \{\omega : X(\omega) = u_i\},\quad i = 0, 1, 2, \dots$

These are disjoint sets, and by countable additivity together with the normalization condition above,

$\Pr\left(\bigcup_i \Omega_i\right) = \sum_i \Pr(\Omega_i) = \sum_i \Pr(X=u_i) = 1.$
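The same convergence to 1 holds over a countably infinite support, with the finite partial sums approaching the total. A sketch in Python; the choice of the geometric distribution and of the parameter value is an illustrative assumption:

```python
# Sketch: for a geometric distribution with success probability p,
# Pr(X = k) = (1 - p)**k * p for k = 0, 1, 2, ...  The support is
# countably infinite, and the finite partial sums of the probabilities
# converge to 1, illustrating countable additivity.
p = 0.3
partial = sum((1 - p) ** k * p for k in range(200))
print(abs(partial - 1.0) < 1e-12)  # True
```

The partial sum over k < n equals 1 - (1 - p)**n, so the remaining mass after 200 terms is far below the tolerance used here.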

It follows that the probability that *X* takes any value other than $u_0, u_1, \dots$ is zero, and thus one can write *X* as

$X = \sum_i u_i \, 1_{\Omega_i}$

except on a set of probability zero, where $1_A$ is the

indicator function of the set *A*. This may serve as an alternative definition of discrete random variables.

**See also**

* Stochastic vector
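The indicator-function representation above can be verified on a small example. A sketch in Python; the sample space and the random variable used here are illustrative assumptions:

```python
# Sketch of the indicator-function representation: a discrete random
# variable equals sum_i u_i * 1_{Omega_i}, where 1_{Omega_i}(omega) is 1
# exactly when X(omega) = u_i.
omegas = ["a", "b", "c", "d"]          # a small sample space
X = {"a": 2, "b": 2, "c": 5, "d": 7}   # a discrete random variable

values = sorted(set(X.values()))        # the values u_0, u_1, ...

def indicator(u, omega):
    """1_{Omega_i}(omega): 1 if X(omega) == u, else 0."""
    return 1 if X[omega] == u else 0

# Reconstruct X from the indicator representation and compare.
reconstructed = {w: sum(u * indicator(u, w) for u in values) for w in omegas}
print(reconstructed == X)  # True
```

For each outcome, exactly one indicator is 1 (the one for the value that *X* actually takes), so the sum collapses to that value.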

*Wikimedia Foundation. 2010.*
