Radon–Nikodym theorem

In mathematics, the Radon–Nikodym theorem is a result in measure theory which states that, given a measurable space (X, Σ), if a σ-finite measure ν on (X, Σ) is absolutely continuous with respect to a σ-finite measure μ on (X, Σ), then there is a measurable function f on X, taking values in [0, ∞), such that

: \nu(A) = \int_A f \, d\mu

for any measurable set A.
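For example, if X = ℝ with the Borel σ-algebra, μ is Lebesgue measure, and ν is the standard Gaussian (normal) probability measure, then ν is absolutely continuous with respect to μ, and the function f may be taken to be the familiar Gaussian density

: f(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2},

so that ν(A) is recovered by integrating f over A against Lebesgue measure.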

The theorem is named after Johann Radon, who proved it in 1913 for the special case where the underlying space is ℝ^N, and after Otton Nikodym, who proved the general case in 1930.

Radon–Nikodym derivative

The function "f" satisfying the above equality is "uniquely defined up to a μ-null set", that is, if "g" is another function which satisfies the same property, then "f" = "g" μ-almost everywhere. "f" is commonly written "dν"/"dμ" and is called the Radon–Nikodym derivative. The choice of notation and the name of the function reflects the fact that the function is analogous to a derivative in calculus in the sense that it describes the rate of change of density of one measure with respect to another (the way the Jacobian determinant is used in multivariable integration). A similar theorem can be proven for signed and complex measures: namely, that if μ is a nonnegative σ-finite measure, and ν is a finite-valued signed or complex measure such that | u| ll mu, there is μ-integrable real- or complex-valued function "g" on "X" such that

: \nu(A) = \int_A g \, d\mu,

for any measurable set A.
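As a minimal computational sketch (purely illustrative; the three-point set and the weights below are invented for the example), on a finite set the Radon–Nikodym derivative is just the pointwise ratio of the two measures' weights, and the defining identity reduces to a weighted sum:

```python
# Illustrative sketch: the Radon–Nikodym derivative of two measures on a finite set.
# mu and nu assign nonnegative weights to points, and nu is absolutely continuous
# with respect to mu (nu gives weight only where mu does).

mu = {"a": 0.5, "b": 0.25, "c": 0.25}
nu = {"a": 0.1, "b": 0.6, "c": 0.3}

# dnu/dmu at each point is the ratio of weights (set to 0 off the support of mu).
f = {x: (nu[x] / mu[x] if mu[x] > 0 else 0.0) for x in mu}

def nu_of(A):
    """nu(A), computed directly."""
    return sum(nu[x] for x in A)

def integral_of_f_over(A):
    """int_A f dmu, i.e. the weighted sum that the theorem guarantees equals nu(A)."""
    return sum(f[x] * mu[x] for x in A)

for A in [{"a"}, {"a", "b"}, {"a", "b", "c"}]:
    assert abs(nu_of(A) - integral_of_f_over(A)) < 1e-12
```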

Applications

The theorem is very important in extending the ideas of probability theory from probability masses and probability densities defined over real numbers to probability measures defined over arbitrary sets. It tells whether, and how, it is possible to change from one probability measure to another.

For example, the theorem is needed to prove the existence of conditional expectation for probability measures; a sketch of that construction is given below. Conditional expectation is itself a key concept in probability theory, since conditional probability is just a special case of it.
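In outline (the probability space and random variable here are introduced only for this sketch): let (Ω, F, P) be a probability space, let G ⊆ F be a sub-σ-algebra, and let X ≥ 0 be an integrable random variable. Then

: \nu(G) = \int_G X \, dP, \qquad G \in \mathcal{G},

defines a finite measure on G which is absolutely continuous with respect to the restriction of P to G, so the Radon–Nikodym theorem yields a G-measurable function, written E[X | G], satisfying

: \int_G \mathbb{E}[X \mid \mathcal{G}] \, dP = \int_G X \, dP \quad \text{for every } G \in \mathcal{G}.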

Amongst other fields, financial mathematics uses the theorem extensively. Such changes of probability measure are the cornerstone of the rational pricing of derivative securities and are used for converting actual probabilities into risk-neutral probabilities.
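Numerically, such a change of measure amounts to reweighting by the Radon–Nikodym derivative. The following rough sketch (the two normal distributions are arbitrary choices, not a pricing model) estimates an expectation under ν from samples drawn under μ, using ∫ h dν = ∫ h (dν/dμ) dμ:

```python
import numpy as np

# Illustrative sketch of a change of measure by reweighting.
# mu = N(0, 1), nu = N(1, 1): both have Lebesgue densities, and their ratio is
# dnu/dmu(x) = exp(x - 1/2).

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)          # samples from mu

dnu_dmu = np.exp(x - 0.5)                   # Radon–Nikodym derivative at the samples
h = np.maximum(x, 0.0)                      # some integrable payoff-like function

estimate_under_nu = np.mean(h * dnu_dmu)    # E_nu[h] = E_mu[h * dnu/dmu]
direct_under_nu = np.mean(np.maximum(x + 1.0, 0.0))  # sampling nu directly, for comparison

print(estimate_under_nu, direct_under_nu)   # the two agree up to Monte Carlo error
```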

Properties

* Let ν, μ, and λ be σ-finite measures on the same measurable space. If ν ≪ λ and μ ≪ λ (that is, ν and μ are both absolutely continuous with respect to λ), then

:: \frac{d(\nu+\mu)}{d\lambda} = \frac{d\nu}{d\lambda} + \frac{d\mu}{d\lambda} \quad \lambda\text{-almost everywhere}.

* If ν ≪ μ ≪ λ, then (for a numerical check of this chain rule, see the sketch after this list)

:: \frac{d\nu}{d\lambda} = \frac{d\nu}{d\mu}\,\frac{d\mu}{d\lambda} \quad \lambda\text{-almost everywhere}.

* If μ ≪ λ and g is a μ-integrable function, then

:: \int_X g \, d\mu = \int_X g \frac{d\mu}{d\lambda} \, d\lambda.

* If μ ≪ ν and ν ≪ μ, then

:: \frac{d\mu}{d\nu} = \left(\frac{d\nu}{d\mu}\right)^{-1}.

* If ν is a finite signed or complex measure, then

:: \frac{d|\nu|}{d\mu} = \left|\frac{d\nu}{d\mu}\right|.
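These identities can be sanity-checked numerically when all the measures involved have densities with respect to Lebesgue measure, since the Radon–Nikodym derivatives are then pointwise ratios of densities. A rough sketch under that assumption (the Gaussian densities are arbitrary choices for the illustration):

```python
import numpy as np

# Illustrative check of the chain rule dnu/dlambda = (dnu/dmu)(dmu/dlambda) and of the
# inverse rule dmu/dnu = (dnu/dmu)^(-1), with lambda = Lebesgue measure and nu, mu
# given by (strictly positive) Gaussian densities.

def normal_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

x = np.linspace(-5.0, 5.0, 101)

dnu_dlam = normal_pdf(x, 0.0, 1.0)   # dnu/dlambda
dmu_dlam = normal_pdf(x, 1.0, 2.0)   # dmu/dlambda

dnu_dmu = dnu_dlam / dmu_dlam        # dnu/dmu as a pointwise ratio of densities
dmu_dnu = dmu_dlam / dnu_dlam        # dmu/dnu likewise (the two measures are equivalent here)

assert np.allclose(dnu_dmu * dmu_dlam, dnu_dlam)   # chain rule
assert np.allclose(dmu_dnu, 1.0 / dnu_dmu)         # inverse rule
```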

Further applications

Information divergences

If μ and ν are measures over X, and ν ≪ μ, then:
* The Kullback–Leibler divergence from μ to ν is defined to be

:: D_{\mathrm{KL}}(\mu \| \nu) = -\int_X \log \left( \frac{d\nu}{d\mu} \right) \, d\mu.

* The Rényi divergence of order α from μ to ν is defined to be

:: D_{\alpha}(\mu \| \nu) = \frac{1}{\alpha - 1} \log \int_X \left( \frac{d\nu}{d\mu} \right)^{1-\alpha} \, d\mu.
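On a finite set both divergences become finite sums over the pointwise ratio dν/dμ. A small illustrative computation (the probability vectors are arbitrary choices for the example):

```python
import numpy as np

# Illustrative sketch: KL and Rényi divergences on a finite set, expressed through the
# Radon–Nikodym derivative dnu/dmu (the pointwise ratio of the two probability vectors).

mu = np.array([0.5, 0.25, 0.25])
nu = np.array([0.1, 0.6, 0.3])

ratio = nu / mu                                   # dnu/dmu on the support of mu

kl = -np.sum(np.log(ratio) * mu)                  # D_KL(mu || nu) = -int log(dnu/dmu) dmu

alpha = 0.5
renyi = np.log(np.sum(ratio ** (1.0 - alpha) * mu)) / (alpha - 1.0)

print(kl, renyi)
```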

The assumption of σ-finiteness

The Radon–Nikodym theorem assumes that the measure μ, with respect to which one computes the rate of change of ν, is σ-finite. Here is an example in which μ is not σ-finite and the Radon–Nikodym theorem fails to hold.

Consider the Borel σ-algebra on the real line. Let the counting measure μ of a Borel set A be defined as the number of elements of A if A is finite, and +∞ otherwise. One can check that μ is indeed a measure. It is not σ-finite, since the real line is not a countable union of finite sets. Let ν be the usual Lebesgue measure on this Borel algebra. Then ν is absolutely continuous with respect to μ, since for a set A one has μ(A) = 0 only if A is the empty set, and then ν(A) is also zero.

Assume that the Radon–Nikodym theorem holds, that is, for some measurable function f one has

: \nu(A) = \int_A f \, d\mu

for all Borel sets A. Taking A to be a singleton set, A = {a}, and using the above equality, one finds

: 0 = f(a)

for all real numbers a. This implies that the function f, and therefore the Lebesgue measure ν, is zero, which is a contradiction.

Proof

This section gives a measure-theoretic proof of the theorem. There is also a functional-analytic proof, using Hilbert space methods, that was first given by von Neumann.

For finite measures μ and ν, the idea is to consider functions f with f dμ ≤ dν, i.e., with ∫_A f dμ ≤ ν(A) for every measurable set A. The supremum of all such functions, along with the monotone convergence theorem, then furnishes the Radon–Nikodym derivative; absolute continuity of ν with respect to μ is then used, via a Hahn decomposition argument, to show that nothing of ν is left over. Once the result is established for finite measures, extending to σ-finite, signed, and complex measures can be done naturally. The details are given below.

For finite measures

First, suppose that μ and ν are both finite-valued nonnegative measures. Let F be the set of those measurable functions f : X → [0, +∞] satisfying

: \int_A f \, d\mu \leq \nu(A)

for every A ∈ Σ (this set is not empty, for it contains at least the zero function). Let f_1, f_2 ∈ F, let A be an arbitrary measurable set, and define A_1 = {x ∈ A | f_1(x) > f_2(x)} and A_2 = {x ∈ A | f_2(x) ≥ f_1(x)}. Then one has

: \int_A \max\{f_1, f_2\} \, d\mu = \int_{A_1} f_1 \, d\mu + \int_{A_2} f_2 \, d\mu \leq \nu(A_1) + \nu(A_2) = \nu(A),

and therefore max{f_1, f_2} ∈ F.

Now, let {"f""n"}"n" be a sequence of functions in "F" such that:lim_{n oinfty}int_X f_n,dmu=sup_{fin F} int_X f,dmu.By replacing "f""n" with the maximum of the first "n" functions, one can assume that the sequence {"f""n"} is increasing. Let "g" be a function defined as:g(x):=lim_{n oinfty}f_n(x).By Lebesgue's monotone convergence theorem, one has:int_A g,dmu=lim_{n oinfty} int_A f_n,dmu leq u(A)for each "A" &isin; &Sigma;, and hence, "g" &isin; "F". Also, by the construction of "g",:int_X g,dmu=sup_{fin F}int_X f,dmu.

Now, since "g" &isin; "F",: u_0(A):= u(A)-int_A g,dmudefines a nonnegative measure on &Sigma;. Suppose &nu;0 &ne; 0; then, since &mu; is finite, there is an &epsilon; > 0 such that &nu;0("X") > &epsilon; &mu;("X"). Let ("P","N") be a Hahn decomposition for the signed measure &nu;0 − &epsilon; &mu;. Note that for every "A" &isin; &Sigma; one has &nu;0("A"&cap;"P") &ge; &epsilon; &mu;("A"&cap;"P"), and hence,: u(A)=int_A g,dmu+ u_0(A) geq int_A g,dmu+ u_0(Acap P):geq int_A g,dmu +varepsilonmu(Acap P)=int_A(g+varepsilon1_P),dmu.Also, note that &mu;("P") > 0; for if &mu;("P") = 0, then (since &nu; is absolutely continuous in relation to &mu;) &nu;0("P") &le; &nu;("P") = 0, so &nu;0("P") = 0 and: u_0(X)-varepsilonmu(X)=( u_0-varepsilonmu)(N)leq 0,contradicting the fact that &nu;0("X") > &epsilon; &mu; ("X").

Then, since

: \int_X g \, d\mu \leq \nu(X) < +\infty,

the function g + ε 1_P belongs to F and satisfies

: \int_X (g + \varepsilon 1_P) \, d\mu > \int_X g \, d\mu = \sup_{f \in F} \int_X f \, d\mu.

This is impossible; therefore, the initial assumption that ν_0 ≠ 0 must be false. So ν_0 = 0, as desired.

Now, since "g" is &mu;-integrable, the set {"x"&isin;"X" | "g"("x")=+&infin;} is &mu;-null. Therefore, if a "f" is defined as

: f(x) = \begin{cases} g(x) & \text{if } g(x) < \infty \\ 0 & \text{otherwise,} \end{cases}

then "f" has the desired properties.

As for the uniqueness, let "f","g" : "X"&rarr; [0,+&infin;) be measurable functions satisfying

: \nu(A) = \int_A f \, d\mu = \int_A g \, d\mu

for every measurable set "A". Then, "g" − "f" is &mu;-integrable, and

: \int_A (g - f) \, d\mu = 0.

In particular, for "A" = {"x"&isin;"X" | "f"("x") > "g"("x")}, or {"x" &isin; "X" | "f"("x") < "g"("x")}. It follows that

: \int_X (g - f)^+ \, d\mu = 0 = \int_X (g - f)^- \, d\mu,

and so, that ("g"−"f")+ = 0 &mu;-almost everywhere; the same is true for ("g" − "f"), and thus, "f" = "g" &mu;-almost everywhere, as desired.

For σ-finite positive measures

If μ and ν are σ-finite, then X can be written as the union of a sequence {B_n}_n of disjoint sets in Σ, each of which has finite measure under both μ and ν. For each n, there is a Σ-measurable function f_n : B_n → [0, +∞) such that

: \nu(A) = \int_A f_n \, d\mu

for each Σ-measurable subset A of B_n. Piecing the f_n together (as made explicit below) then gives the required function f.
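Explicitly, spelling out the last step, one may take

: f = \sum_n f_n \mathbf{1}_{B_n},

so that, for every A ∈ Σ, countable additivity gives

: \nu(A) = \sum_n \nu(A \cap B_n) = \sum_n \int_{A \cap B_n} f_n \, d\mu = \int_A f \, d\mu.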

As for the uniqueness, since each of the f_n is unique up to a μ-null set, so is f.

For signed and complex measures

If ν is a σ-finite signed measure, then it can be Hahn–Jordan decomposed as ν = ν^+ − ν^−, where one of the two measures is finite. Applying the previous result to those two measures, one obtains two functions, g, h : X → [0, +∞), satisfying the Radon–Nikodym theorem for ν^+ and ν^− respectively, at least one of which is μ-integrable (i.e., its integral with respect to μ is finite). It is clear then that f = g − h satisfies the required properties, including uniqueness, since both g and h are unique up to μ-almost everywhere equality.

If ν is a complex measure, it can be decomposed as ν = ν_1 + iν_2, where both ν_1 and ν_2 are finite-valued signed measures. Applying the above argument, one obtains two μ-integrable real-valued functions, g and h, satisfying the required properties for ν_1 and ν_2, respectively. Clearly, f = g + ih is then the required function.

