Doob's martingale convergence theorems

In mathematics — specifically, in stochastic analysis — Doob's martingale convergence theorems are a collection of results on the long-time limits of supermartingales, named after the American mathematician Joseph Leo Doob.

Statement of the theorems

In the following, (Ω, F, P) will be a probability space, F = (Ft)t≥0 a filtration of F, and N : [0, +∞) × Ω → R a right-continuous supermartingale with respect to the filtration F; in other words, for all 0 ≤ s ≤ t < +∞,

N_{s} \geq \mathbf{E} \big[ N_{t} \big| F_{s} \big].

Doob's first martingale convergence theorem

Doob's first martingale convergence theorem provides a sufficient condition for the random variables Nt to have a limit as t → +∞ in a pointwise sense, i.e. for each ω in the sample space Ω individually.

For t ≥ 0, let Nt− = max(−Nt, 0) denote the negative part of Nt, and suppose that

\sup_{t > 0} \mathbf{E} \big[ N_{t}^{-} \big] < + \infty.

Then the pointwise limit

N(\omega) = \lim_{t \to + \infty} N_{t} (\omega)

exists for P-almost all ω ∈ Ω.
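As a numerical illustration, the following is a minimal sketch (Python, standard library only; the particular martingale is an illustrative choice, not from the text above). It simulates the nonnegative discrete-time martingale M_{k+1} = M_k · X_{k+1}, where the X_k are i.i.d. and uniform on {0.5, 1.5}, so that E[X_k] = 1. Since the process is nonnegative, sup_k E[M_k^−] = 0 and the theorem guarantees an almost-sure limit; here that limit is 0 for almost every path, because E[log X_k] < 0.

```python
import random

def multiplicative_martingale(steps, seed):
    """One path of M_{k+1} = M_k * X_{k+1}, with X uniform on {0.5, 1.5} (E[X] = 1)."""
    rng = random.Random(seed)
    m = 1.0
    for _ in range(steps):
        m *= rng.choice((0.5, 1.5))
    return m

# Each path is a nonnegative martingale, so sup_k E[M_k^-] = 0 and Doob's first
# convergence theorem applies.  Because E[log X] = (log 0.5 + log 1.5)/2 < 0,
# the almost-sure limit is 0, and after 200 steps nearly every path is tiny.
finals = [multiplicative_martingale(200, seed) for seed in range(1000)]
near_zero = sum(f < 1e-3 for f in finals) / len(finals)
print(f"fraction of paths below 1e-3 after 200 steps: {near_zero:.2f}")
```

Note that convergence here is a statement about individual paths; the next subsection shows that it says nothing about convergence of expectations.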

Doob's second martingale convergence theorem

The convergence in Doob's first martingale convergence theorem is pointwise, not uniform, and is unrelated to convergence in mean square, or indeed in any Lp space. To obtain convergence in L1 (i.e., convergence in mean), one requires uniform integrability of the random variables Nt. By Markov's inequality, convergence in L1 implies convergence in probability, and hence convergence in distribution.

The following are equivalent:

  • (Nt)t>0 is uniformly integrable, i.e.
\lim_{C \to \infty} \sup_{t > 0} \int_{\{ \omega \in \Omega \,:\, | N_{t} (\omega) | > C \}} \big| N_{t} (\omega) \big| \, \mathrm{d} \mathbf{P} (\omega) = 0;
  • there exists an integrable random variable N ∈ L1(Ω, P; R) such that Nt → N as t → +∞ both P-almost surely and in L1(Ω, P; R), i.e.
\mathbf{E} \big[ \big| N_{t} - N \big| \big] = \int_{\Omega} \big| N_{t} (\omega) - N (\omega) \big| \, \mathrm{d} \mathbf{P} (\omega) \to 0 \mbox{ as } t \to + \infty.
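Failure of uniform integrability can be seen concretely. Below is a minimal sketch (Python, standard library only; the example is an illustrative construction, not from the text) of the discrete-time martingale M_{k+1} = M_k · X_{k+1} with X i.i.d. uniform on {0.5, 1.5}. It satisfies E[M_k] = 1 for every k, yet M_k → 0 almost surely, so it cannot converge in L1; by the equivalence above, the family (M_k) is not uniformly integrable.

```python
import random

def sample_final_values(steps, n_paths, seed=0):
    """Final values of n_paths paths of M_{k+1} = M_k * X, X uniform on {0.5, 1.5}."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        m = 1.0
        for _ in range(steps):
            m *= rng.choice((0.5, 1.5))
        finals.append(m)
    return finals

finals = sample_final_values(steps=30, n_paths=200_000)
# The martingale property forces E[M_30] = 1, and the sample mean reflects this;
# but the typical (median) path has already collapsed toward the a.s. limit 0.
# The expectation is carried by a vanishing fraction of very large paths, which
# is exactly how uniform integrability fails.
sample_mean = sum(finals) / len(finals)
median = sorted(finals)[len(finals) // 2]
print(f"sample mean ~ {sample_mean:.2f}, median ~ {median:.6f}")
```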

Corollary: convergence theorem for continuous martingales

Let M : [0, +∞) × Ω → R be a continuous martingale such that

\sup_{t > 0} \mathbf{E} \big[ \big| M_{t} \big|^{p} \big] < + \infty

for some p > 1. Then there exists a random variable M ∈ Lp(Ω, P; R) such that Mt → M as t → +∞ both P-almost surely and in Lp(Ω, P; R).

Discrete-time results

Similar results can be obtained for discrete-time supermartingales and submartingales, the obvious difference being that no continuity assumptions are required. For example, the result above becomes

Let M : N × Ω → R be a discrete-time martingale such that

\sup_{k \in \mathbf{N}} \mathbf{E} \big[ \big| M_{k} \big|^{p} \big] < + \infty

for some p > 1. Then there exists a random variable M ∈ Lp(Ω, P; R) such that Mk → M as k → +∞ both P-almost surely and in Lp(Ω, P; R).
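The discrete-time statement can be checked numerically for p = 2. The sketch below (Python, standard library only; the example martingale is an illustrative choice, not from the text) uses M_k = Σ_{i≤k} ξ_i / i with i.i.d. fair signs ξ_i = ±1; then sup_k E[M_k²] = Σ 1/i² < π²/6 < ∞, so the theorem gives almost-sure and L² convergence. Consistently with this, the tail increment |M_2000 − M_1000| is tiny on every sampled path.

```python
import random

def path_value(steps, seed):
    """M_k = sum_{i<=k} xi_i / i with xi_i = +-1 fair coin signs.
    This martingale is L^2-bounded: sup_k E[M_k^2] = sum 1/i^2 < pi^2/6."""
    rng = random.Random(seed)
    return sum(rng.choice((-1.0, 1.0)) / i for i in range(1, steps + 1))

# Re-using the same seed reproduces the same path, so the difference below is
# exactly the tail sum over steps 1001..2000.  Its variance is about 5e-4, so
# every path has essentially stabilized -- as almost-sure convergence predicts.
gaps = [abs(path_value(2000, s) - path_value(1000, s)) for s in range(200)]
print(f"max |M_2000 - M_1000| over 200 paths: {max(gaps):.4f}")
```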

Convergence of conditional expectations: Lévy's zero-one law

Doob's martingale convergence theorems imply that conditional expectations also have a convergence property.

Let (Ω, F, P) be a probability space and let X be a random variable in L1. Let F = (Fk)k∈N be any filtration of F, and define F∞ to be the minimal σ-algebra generated by (Fk)k∈N. Then

\mathbf{E} \big[ X \big| F_{k} \big] \to \mathbf{E} \big[ X \big| F_{\infty} \big] \mbox{ as } k \to \infty

both P-almost surely and in L1.

This result is usually called Lévy's zero-one law. The reason for the name is that if A is an event in F∞, then the theorem says that \mathbf{P}[ A | F_{k} ] \to \mathbf{1}_A almost surely, i.e., the limit of the probabilities is 0 or 1. In plain language, if we are learning gradually all the information that determines the outcome of an event, then we will become gradually certain what the outcome will be. This sounds almost like a tautology, but the result is still non-trivial. For instance, it easily implies Kolmogorov's zero-one law, since it says that for any tail event A, we must have \mathbf{P}[ A ] = \mathbf{1}_A almost surely, hence \mathbf{P}[ A ]\in\{0,1\}.
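The "gradual certainty" phenomenon can be computed exactly in a finite setting. The sketch below (Python, standard library only; the event and helper function are illustrative constructions, not from the text) takes A = "at least 6 heads in 10 fair coin flips" and Fk = the σ-algebra generated by the first k flips. Then P[A | Fk] is a binomial tail probability, and at k = 10 it equals the indicator 1_A, i.e. exactly 0 or 1.

```python
import math
import random

def cond_prob(flips_seen, total=10, need=6):
    """Exact P(at least `need` heads among `total` flips | the flips seen so far)."""
    heads = sum(flips_seen)
    remaining = total - len(flips_seen)
    still_need = need - heads
    if still_need <= 0:
        return 1.0          # the event already happened
    if still_need > remaining:
        return 0.0          # the event can no longer happen
    # Binomial tail over the remaining fair flips.
    return sum(math.comb(remaining, j)
               for j in range(still_need, remaining + 1)) / 2 ** remaining

rng = random.Random(42)
flips = [rng.randint(0, 1) for _ in range(10)]          # 1 = heads
probs = [cond_prob(flips[:k]) for k in range(11)]       # P[A | F_k], k = 0..10
# P[A | F_0] is the unconditional probability 386/1024; P[A | F_10] is 1_A.
print(probs[0], "->", probs[-1])
```

The sequence (P[A | Fk]) is itself a bounded martingale, so this example is also an instance of the discrete-time convergence theorem.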

See also

  • Backwards martingale convergence theorem

