# Random variable


A random variable is a rigorously defined mathematical entity used mainly to describe chance and probability in a mathematical way. The structure of random variables was developed and formalized to simplify the analysis of games of chance, stochastic events, and the results of scientific experiments by retaining only the mathematical properties necessary to answer probabilistic questions. Further formalizations have firmly grounded the entity in the theoretical domains of mathematics by making use of measure theory.

Fortunately, the language and structure of random variables can be grasped at various levels of mathematical fluency; beyond an introductory level, set theory and calculus are fundamental.

Broadly, there are two types of random variables: discrete and continuous. Discrete random variables take on one of a set of specific values, each with some probability greater than zero. Continuous random variables can take any value within a range (e.g., any real number between zero and one); any single value has probability zero of occurring, but ranges of values (e.g., from 0 to one half) can have probability greater than zero.

A random variable has either an associated probability mass function (discrete random variable) or probability density function (continuous random variable).

## Intuitive definition

Intuitively, a random variable is thought of as a function mapping the sample space of a random process to the real numbers. A few examples will highlight this.

### Examples

For a coin toss, the possible outcomes are heads or tails. The number of heads appearing in one fair coin toss can be described using the following random variable:

$$X = \begin{cases} 1, & \text{if heads}, \\ 0, & \text{if tails}, \end{cases}$$

with probability mass function given by

$$f_X(x) = \begin{cases} \tfrac{1}{2}, & x \in \{0, 1\}, \\ 0, & \text{otherwise}. \end{cases}$$

A random variable can also be used to describe the process of rolling a fair die and the possible outcomes. The most obvious representation is to take the set {1, 2, 3, 4, 5, 6} as the sample space, defining the random variable X as the number rolled. In this case,

$$X(\omega) = \omega,$$

with probability mass function

$$f_X(x) = \begin{cases} \tfrac{1}{6}, & x \in \{1, 2, 3, 4, 5, 6\}, \\ 0, & \text{otherwise}. \end{cases}$$
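The die example above can be sketched directly in code. A minimal Python illustration (the names `omega`, `X`, and `pmf` are ours, chosen for the demo) of a random variable as a function on a finite sample space, together with its probability mass function:

```python
from fractions import Fraction

# Sample space for one roll of a fair die.
omega = [1, 2, 3, 4, 5, 6]

def X(outcome):
    """Random variable: the number rolled (here just the identity map)."""
    return outcome

def pmf(x):
    """Probability mass function: each of the six values has mass 1/6."""
    return Fraction(1, 6) if x in omega else Fraction(0)

# The masses over all possible values sum to 1.
assert sum(pmf(X(w)) for w in omega) == 1
```

Using exact `Fraction` arithmetic avoids floating-point noise when checking that the masses sum to one.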

## Formal definition

Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(Y, \Sigma)$ be a measurable space. Then a random variable $X$ is formally defined as a measurable function $X: \Omega \rightarrow Y$. An interpretation of this is that the preimages of the "well-behaved" subsets of $Y$ (the elements of $\Sigma$) are events (elements of $\mathcal{F}$), and hence are assigned a probability by $P$.

### Real-valued random variables

Typically, the measurable space is the measurable space over the real numbers. In this case, let $(\Omega, \mathcal{F}, P)$ be a probability space. Then the function $X: \Omega \rightarrow \mathbb{R}$ is a real-valued random variable if

$$\{ \omega : X(\omega) \le r \} \in \mathcal{F} \qquad \forall\, r \in \mathbb{R}.$$
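On a finite sample space this defining condition can be checked by brute force. A sketch under the assumption that the sample space is two coin tosses and the sigma-algebra is either the full power set (where every real-valued function is measurable) or the trivial one (where this particular X is not); the helper `measurable` is illustrative, not a library function:

```python
from itertools import chain, combinations

# Sample space: two tosses of a coin; X counts the number of heads.
omega = ["HH", "HT", "TH", "TT"]
X = {"HH": 2, "HT": 1, "TH": 1, "TT": 0}

# The full power set of omega: every subset is an event.
power_set = set(frozenset(s) for s in chain.from_iterable(
    combinations(omega, k) for k in range(len(omega) + 1)))

def measurable(rv, F):
    """Check {w : rv(w) <= r} in F.  Only thresholds at attained values
    can change the preimage, so it suffices to test those."""
    return all(
        frozenset(w for w in omega if rv[w] <= r) in F
        for r in set(rv.values()))

assert measurable(X, power_set)                            # full power set: yes
assert not measurable(X, {frozenset(), frozenset(omega)})  # trivial F: no
```

With the trivial sigma-algebra the preimage {TT} of the event "at most zero heads" is not an event, so X fails the condition.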

## Distribution functions of random variables

Associating a cumulative distribution function (CDF) with a random variable is a generalization of assigning a value to a variable. If the CDF is a (right-continuous) Heaviside step function, then the variable takes on the value at the jump with probability 1. In general, the CDF specifies the probability that the variable takes on particular values.

If a random variable $X: \Omega \rightarrow \mathbb{R}$ defined on the probability space $(\Omega, \mathcal{F}, P)$ is given, we can ask questions like "How likely is it that the value of $X$ is bigger than 2?". This is the same as the probability of the event $\{ \omega \in \Omega : X(\omega) > 2 \}$, which is often written as $P(X > 2)$ for short.

Recording all these probabilities of output ranges of a real-valued random variable $X$ yields the probability distribution of $X$. The probability distribution "forgets" about the particular probability space used to define $X$ and only records the probabilities of various values of $X$. Such a probability distribution can always be captured by its cumulative distribution function

$$F_X(x) = \operatorname{P}(X \le x)$$

and sometimes also by a probability density function. In measure-theoretic terms, we use the random variable $X$ to "push forward" the measure $P$ on $\Omega$ to a measure $dF$ on $\mathbb{R}$. The underlying probability space $\Omega$ is a technical device used to guarantee the existence of random variables, and sometimes to construct them. In practice, one often disposes of the space $\Omega$ altogether and just puts a measure on $\mathbb{R}$ that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables.
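As a small illustration of this push-forward idea (a sketch with helper names of our own choosing), two random variables defined on different finite probability spaces can induce exactly the same distribution on the reals:

```python
from fractions import Fraction

# Space 1: a fair coin; X counts heads in one toss.
coin = {"H": Fraction(1, 2), "T": Fraction(1, 2)}
X = {"H": 1, "T": 0}

# Space 2: a fair die; Y is 1 on {1, 2, 3} and 0 on {4, 5, 6}.
die = {w: Fraction(1, 6) for w in range(1, 7)}
Y = {w: (1 if w <= 3 else 0) for w in range(1, 7)}

def pushforward(space, rv):
    """Distribution of rv: sum the measure of each preimage rv^{-1}({x})."""
    dist = {}
    for w, p in space.items():
        dist[rv[w]] = dist.get(rv[w], Fraction(0)) + p
    return dist

# Different underlying spaces, same induced law: Bernoulli(1/2).
assert pushforward(coin, X) == pushforward(die, Y)
```

The distribution records only the probabilities of the values, so the particular Ω used to define each variable drops out, as described above.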

## Moments

The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of the expected value of a random variable, denoted $\operatorname{E}[X]$. In general, $\operatorname{E}[f(X)]$ is not equal to $f(\operatorname{E}[X])$. Once the "average value" is known, one could then ask how far from this average value the values of $X$ typically are, a question that is answered by the variance and standard deviation of a random variable.

Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables $X$, find a collection $\{f_i\}$ of functions such that the expectation values $\operatorname{E}[f_i(X)]$ fully characterise the distribution of the random variable $X$.
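For a fair die these quantities can be computed exactly; in particular, the computation shows that $\operatorname{E}[f(X)] \ne f(\operatorname{E}[X])$ for $f(x) = x^2$:

```python
from fractions import Fraction

# A fair die: X takes values 1..6, each with probability 1/6.
outcomes = range(1, 7)
p = Fraction(1, 6)

E = sum(p * x for x in outcomes)          # E[X] = 7/2
E_sq = sum(p * x * x for x in outcomes)   # E[X^2] = 91/6
var = E_sq - E ** 2                       # Var(X) = E[X^2] - (E[X])^2 = 35/12

# In general E[f(X)] != f(E[X]); here f(x) = x^2: 91/6 vs 49/4.
assert E_sq != E ** 2
```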

## Functions of random variables

If we have a random variable $X$ on $\Omega$ and a measurable function $f: \mathbb{R} \rightarrow \mathbb{R}$, then $Y = f(X)$ will also be a random variable on $\Omega$, since the composition of measurable functions is also measurable. The same procedure that allowed one to go from a probability space $(\Omega, P)$ to $(\mathbb{R}, dF_X)$ can be used to obtain the distribution of $Y$. The cumulative distribution function of $Y$ is

$$F_Y(y) = \operatorname{P}(f(X) \le y).$$

### Example 1

Let $X$ be a real-valued, continuous random variable and let $Y = X^2$. Then

$$F_Y(y) = \operatorname{P}(X^2 \le y).$$

If $y < 0$, then $\operatorname{P}(X^2 \le y) = 0$, so

$$F_Y(y) = 0 \qquad \text{if} \quad y < 0.$$

If $y \ge 0$, then

$$\operatorname{P}(X^2 \le y) = \operatorname{P}(|X| \le \sqrt{y}) = \operatorname{P}(-\sqrt{y} \le X \le \sqrt{y}),$$

so

$$F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y}) \qquad \text{if} \quad y \ge 0.$$
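The identity can be checked numerically. A sketch assuming $X$ uniform on $(-1, 1)$, so that $F_X(x) = (x + 1)/2$ on that interval and $F_Y(y) = \sqrt{y}$ on $[0, 1]$:

```python
import math
import random

# Assume X ~ Uniform(-1, 1): F_X(x) = (x + 1)/2, clipped to [0, 1].
def F_X(x):
    return min(max((x + 1.0) / 2.0, 0.0), 1.0)

def F_Y(y):
    """CDF of Y = X^2 via F_Y(y) = F_X(sqrt(y)) - F_X(-sqrt(y))."""
    if y < 0:
        return 0.0
    return F_X(math.sqrt(y)) - F_X(-math.sqrt(y))

# Monte Carlo check that P(X^2 <= y) matches the formula.
random.seed(0)
n = 200_000
y = 0.25
hits = sum(random.uniform(-1.0, 1.0) ** 2 <= y for _ in range(n))
assert abs(hits / n - F_Y(y)) < 0.01
```

Here $F_Y(0.25) = \sqrt{0.25} = 0.5$, and the simulated frequency agrees to within sampling error.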

### Example 2

Suppose $X$ is a random variable with cumulative distribution function

$$F_X(x) = P(X \leq x) = \frac{1}{(1 + e^{-x})^{\theta}},$$

where $\theta > 0$ is a fixed parameter. Consider the random variable $Y = \log(1 + e^{-X})$. Then,

$$F_Y(y) = P(Y \leq y) = P(\log(1 + e^{-X}) \leq y) = P(X > -\log(e^{y} - 1)).$$

The last expression can be calculated in terms of the cumulative distribution of $X$, so

$$F_Y(y) = 1 - F_X(-\log(e^{y} - 1)) = 1 - \frac{1}{(1 + e^{\log(e^{y} - 1)})^{\theta}} = 1 - \frac{1}{(1 + e^{y} - 1)^{\theta}} = 1 - e^{-y\theta}.$$
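The derivation can be verified by simulation: sampling $X$ by inverting $F_X$ and transforming shows that $Y$ follows an exponential distribution with rate $\theta$. A sketch (the value of $\theta$ and the helper `sample_X` are ours):

```python
import math
import random

theta = 2.0  # fixed parameter, chosen for the demo

def sample_X(u):
    """Invert F_X(x) = (1 + e^{-x})^{-theta} at u in (0, 1)."""
    return -math.log(u ** (-1.0 / theta) - 1.0)

random.seed(1)
n = 100_000
ys = [math.log(1.0 + math.exp(-sample_X(random.random())))
      for _ in range(n)]

# Compare the empirical CDF of Y at one point with 1 - e^{-theta * y}.
y0 = 0.5
empirical = sum(y <= y0 for y in ys) / n
exact = 1.0 - math.exp(-theta * y0)
assert abs(empirical - exact) < 0.01
```

In fact the transform collapses algebraically: $Y = \log(1 + e^{-X}) = -\tfrac{1}{\theta}\log U$ for $U$ uniform on $(0,1)$, which is exactly inverse-CDF sampling of an exponential with rate $\theta$.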

## Equivalence of random variables

There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, equal in mean, or equal in distribution.

In increasing order of strength, the precise definitions of these notions of equivalence are given below.

### Equality in distribution

Two random variables $X$ and $Y$ are "equal in distribution" if they have the same distribution functions:

$$\operatorname{P}(X \le x) = \operatorname{P}(Y \le x) \qquad \text{for all} \quad x.$$

Two random variables having equal moment generating functions have the same distribution. This provides, for example, a useful method of checking equality of certain functions of i.i.d. random variables. Associated with equality in distribution is the following distance between random variables:

$$d(X, Y) = \sup_x |\operatorname{P}(X \le x) - \operatorname{P}(Y \le x)|,$$

which is the basis of the Kolmogorov–Smirnov test.
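This distance can be sketched for empirical (step) CDFs, where the supremum is attained at one of the jump points; this is the statistic behind the two-sample Kolmogorov–Smirnov test (the helper names are ours):

```python
def ecdf(sample):
    """Empirical CDF of a finite sample, returned as a callable."""
    xs = sorted(sample)
    n = len(xs)
    return lambda t: sum(x <= t for x in xs) / n

def ks_distance(a, b):
    """sup_x |F_a(x) - F_b(x)|; for step CDFs it suffices to
    check the data points where either CDF jumps."""
    Fa, Fb = ecdf(a), ecdf(b)
    return max(abs(Fa(t) - Fb(t)) for t in sorted(set(a) | set(b)))

assert ks_distance([1, 2, 3], [1, 2, 3]) == 0.0   # identical samples
assert ks_distance([0, 0], [1, 1]) == 1.0          # disjoint supports
```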

### Equality in mean

Two random variables $X$ and $Y$ are "equal in $p$-th mean" if the $p$-th moment of $|X - Y|$ is zero, that is,

$$\operatorname{E}(|X - Y|^p) = 0.$$

As in the previous case, there is a related distance between the random variables, namely

$$d_p(X, Y) = \operatorname{E}(|X - Y|^p).$$


### Almost sure equality

Two random variables $X$ and $Y$ are "equal almost surely" if, and only if, the probability that they are different is zero:

$$\operatorname{P}(X \neq Y) = 0.$$

For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated with the following distance:

$$d_\infty(X, Y) = \sup_\omega |X(\omega) - Y(\omega)|,$$

where "sup" in this case represents the essential supremum in the sense of measure theory.

### Equality

Finally, the two random variables $X$ and $Y$ are "equal" if they are equal as functions on their probability space, that is,

$$X(\omega) = Y(\omega) \qquad \text{for all} \quad \omega.$$

## Convergence

Much of mathematical statistics consists in proving convergence results for certain sequences of random variables; see for instance the law of large numbers and the central limit theorem.

There are various senses in which a sequence $(X_n)$ of random variables can converge to a random variable $X$. These are explained in the article on convergence of random variables.
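As a small illustration of such convergence, the law of large numbers can be observed by simulation: sample means of fair-die rolls approach $\operatorname{E}[X] = 3.5$ as the sample grows (a sketch; the sample sizes and tolerance are chosen loosely for the demo):

```python
import random

random.seed(2)

def sample_mean(n):
    """Mean of n simulated fair-die rolls."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# Larger samples tend to land closer to the expected value 3.5.
errors = [abs(sample_mean(n) - 3.5) for n in (100, 10_000, 200_000)]
assert errors[-1] < 0.02
```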

## Literature

* Kallenberg, O. *Random Measures*, 4th edition. Academic Press, New York, London; Akademie-Verlag, Berlin (1986). MR0854102. ISBN 0123949602.
* Papoulis, Athanasios. *Probability, Random Variables, and Stochastic Processes*, 9th edition. McGraw-Hill Kogakusha, Tokyo (1965). ISBN 0-07-119981-0.

## See also

* Probability distribution
* Event (probability theory)
* Randomness
* Random element
* Random vector
* Random function
* Random measure
* Generating function
* Algorithmic information theory
* Stochastic process
* Athanasios Papoulis

Wikimedia Foundation. 2010.
