Uniform integrability


In mathematics, uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales.

Measure-theoretic definition

Uniform integrability is an extension of the notion of a family of functions being dominated in $L_{1}$, which is central in dominated convergence. Several textbooks on real analysis and measure theory use the following definition:[1][2]

Definition A: Let $(X,{\mathfrak {M}},\mu )$ be a positive measure space. A set $\Phi \subset L^{1}(\mu )$ is called uniformly integrable if $\sup _{f\in \Phi }\|f\|_{L^{1}(\mu )}<\infty$, and to each $\varepsilon >0$ there corresponds a $\delta >0$ such that

$$\int _{E}|f|\,d\mu <\varepsilon$$

whenever $f\in \Phi$ and $\mu (E)<\delta$.

Definition A is rather restrictive for infinite measure spaces. A slightly more general definition[3] of uniform integrability that works well on general measure spaces was introduced by G. A. Hunt.

Definition H: Let $(X,{\mathfrak {M}},\mu )$ be a positive measure space. A set $\Phi \subset L^{1}(\mu )$ is called uniformly integrable if and only if

$$\inf _{g\in L_{+}^{1}(\mu )}\sup _{f\in \Phi }\int _{\{|f|>g\}}|f|\,d\mu =0$$

where $L_{+}^{1}(\mu )=\{g\in L^{1}(\mu ):g\geq 0\}$.

For finite measure spaces the following result[4] follows from Definition H:

Theorem 1: If $(X,{\mathfrak {M}},\mu )$ is a (positive) finite measure space, then a set $\Phi \subset L^{1}(\mu )$ is uniformly integrable if and only if

$$\inf _{a\geq 0}\sup _{f\in \Phi }\int _{\{|f|>a\}}|f|\,d\mu =0$$


Many textbooks in probability present Theorem 1 as the definition of uniform integrability in probability spaces. When the space $(X,{\mathfrak {M}},\mu )$ is $\sigma$-finite, Definition H yields the following equivalence:

Theorem 2: Let $(X,{\mathfrak {M}},\mu )$ be a $\sigma$-finite measure space, and let $h\in L^{1}(\mu )$ be such that $h>0$ almost everywhere. A set $\Phi \subset L^{1}(\mu )$ is uniformly integrable if and only if $\sup _{f\in \Phi }\|f\|_{L^{1}(\mu )}<\infty$, and for any $\varepsilon >0$ there exists $\delta >0$ such that

$$\sup _{f\in \Phi }\int _{A}|f|\,d\mu <\varepsilon$$

whenever $\int _{A}h\,d\mu <\delta$.

In particular, the equivalence of Definitions A and H for finite measures follows immediately from Theorem 2; in this case, the statement in Definition A is obtained by taking $h\equiv 1$ in Theorem 2.

Probability definition

In the theory of probability, Definition A or the statement of Theorem 1 is often presented as the definition of uniform integrability, using the notation of expectations of random variables:[5][6][7]

1. A class $\mathcal {C}$ of random variables is called uniformly integrable (UI) if, given $\varepsilon >0$, there exists $K\in [0,\infty )$ such that $\operatorname {E} (|X|\,I_{|X|\geq K})\leq \varepsilon$ for all $X\in {\mathcal {C}}$, where $I_{|X|\geq K}$ is the indicator function

$$I_{|X|\geq K}={\begin{cases}1&{\text{if }}|X|\geq K,\\0&{\text{if }}|X|<K;\end{cases}}$$

or alternatively

2. A class $\mathcal {C}$ of random variables is called uniformly integrable (UI) if $\sup _{X\in {\mathcal {C}}}\operatorname {E} (|X|)<\infty$ and, for every $\varepsilon >0$, there exists $\delta >0$ such that $\operatorname {E} (|X|\,I_{A})\leq \varepsilon$ for every $X\in {\mathcal {C}}$ and every event $A$ with $P(A)\leq \delta$.
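As a concrete illustration of definition 1 (the example and function names below are illustrative, not from the source), take the family consisting of a single standard exponential random variable: integration by parts gives $\operatorname {E} (X\,I_{X\geq K})=(K+1)e^{-K}$, which can be driven below any $\varepsilon$ by choosing $K$ large enough.

```python
import math

# Sketch (illustrative, not from the source): for X ~ Exp(1), the tail
# expectation E(X * 1{X >= K}) has the closed form (K + 1) * exp(-K),
# obtained by integrating x * exp(-x) from K to infinity by parts.
def exp_tail_expectation(K: float) -> float:
    return (K + 1.0) * math.exp(-K)

# A crude midpoint-rule check of the closed form against the defining integral:
def exp_tail_numeric(K: float, upper: float = 60.0, steps: int = 100_000) -> float:
    dx = (upper - K) / steps
    total = 0.0
    for i in range(steps):
        x = K + (i + 0.5) * dx
        total += x * math.exp(-x) * dx
    return total

for K in (0.0, 1.0, 5.0, 10.0):
    print(K, exp_tail_expectation(K))  # decreases toward 0 as K grows
```

At $K=0$ the tail expectation is just $\operatorname {E} (X)=1$, and it decays to 0, exactly the behavior definition 1 requires.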

Tightness and uniform integrability

One consequence of uniform integrability of a class $\mathcal {C}$ of random variables is that the family of laws or distributions $\{P\circ |X|^{-1}(\cdot ):X\in {\mathcal {C}}\}$ is tight. That is, for each $\delta >0$, there exists $a>0$ such that

$$P(|X|>a)\leq \delta$$

for all $X\in {\mathcal {C}}$.[8]

This, however, does not mean that the family of measures ${\mathcal {V}}_{\mathcal {C}}:={\Big \{}\mu _{X}:A\mapsto \int _{A}|X|\,dP,\,X\in {\mathcal {C}}{\Big \}}$ is tight.
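The tightness claim can be made explicit with a standard one-line argument (spelled out here for convenience; it is not written out in the source): uniform integrability implies $\sup _{X\in {\mathcal {C}}}\operatorname {E} |X|<\infty$, and Markov's inequality then bounds all the tails at once,

$$P(|X|>a)\leq {\frac {\operatorname {E} |X|}{a}}\leq {\frac {1}{a}}\sup _{Y\in {\mathcal {C}}}\operatorname {E} |Y|,$$

so for a given $\delta >0$ it suffices to take $a>\delta ^{-1}\sup _{Y\in {\mathcal {C}}}\operatorname {E} |Y|$.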

There is another notion of uniformity, slightly different from uniform integrability, which also has many applications in probability and measure theory and which does not require random variables to have a finite integral.[9]

Definition: Suppose $(\Omega ,{\mathcal {F}},P)$ is a probability space. A class $\mathcal {C}$ of random variables is uniformly absolutely continuous with respect to $P$ if for any $\varepsilon >0$, there is $\delta >0$ such that $E[|X|I_{A}]<\varepsilon$ whenever $P(A)<\delta$.

The term "uniform absolute continuity" is not standard, but is used by some authors.[10][11]

Related corollaries

The following results apply to the probabilistic definition.[12]

  • Definition 1 could be rewritten by taking the limit as

    $$\lim _{K\to \infty }\sup _{X\in {\mathcal {C}}}\operatorname {E} (|X|\,I_{|X|\geq K})=0.$$

  • A non-UI sequence. Let $\Omega =[0,1]\subset \mathbb {R}$, and define

    $$X_{n}(\omega )={\begin{cases}n,&\omega \in (0,1/n),\\0,&{\text{otherwise.}}\end{cases}}$$

    Clearly $X_{n}\in L^{1}$, and indeed $\operatorname {E} (|X_{n}|)=1$ for all $n$. However,

    $$\operatorname {E} (|X_{n}|,|X_{n}|\geq K)=1\ {\text{ for all }}n\geq K,$$

    and comparing with Definition 1, it is seen that the sequence is not uniformly integrable.

Figure: a non-UI sequence of RVs. The area under each strip is always equal to 1, but $X_{n}\to 0$ pointwise.

  • Using Definition 2 in the above example, it can be seen that the first clause is satisfied, since the $L^{1}$ norms of all $X_{n}$ are equal to 1, i.e., bounded. But the second clause does not hold: given any positive $\delta$, there is an interval $(0,1/n)$ with measure less than $\delta$ and $E[|X_{m}|:(0,1/n)]=1$ for all $m\geq n$.
  • If $X$ is a UI random variable, by splitting

    $$\operatorname {E} (|X|)=\operatorname {E} (|X|,|X|>K)+\operatorname {E} (|X|,|X|\leq K)$$

    and bounding each of the two terms, it can be seen that a uniformly integrable random variable is always bounded in $L^{1}$.
  • If any sequence of random variables $X_{n}$ is dominated by an integrable, non-negative $Y$: that is, for all $\omega$ and $n$,

    $$|X_{n}(\omega )|\leq Y(\omega ),\ Y(\omega )\geq 0,\ \operatorname {E} (Y)<\infty ,$$

    then the class $\mathcal {C}$ of random variables $\{X_{n}\}$ is uniformly integrable.
  • A class of random variables bounded in $L^{p}$ ($p>1$) is uniformly integrable.
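Two of the bullets above can be checked in closed form (a sketch with hypothetical helper names, not from the source): the strip sequence has tail expectation exactly 1 whenever $n\geq K$, so no threshold works, while an $L^{2}$ bound forces the tails of an entire family to 0 uniformly, via $\operatorname {E} (|X|\,I_{|X|\geq K})\leq \operatorname {E} (|X|^{2})/K$.

```python
# Sketch (helper names hypothetical, not from the source).

# Strip sequence X_n = n on (0, 1/n), 0 otherwise, under Lebesgue measure
# on [0, 1]: X_n takes the value n on a set of measure 1/n, so
# E(|X_n| ; |X_n| >= K) = n * (1/n) = 1 exactly when n >= K.
def strip_tail(n: int, K: float) -> float:
    return 1.0 if n >= K else 0.0

# The sup over n never decays, so no choice of K works: not UI.
for K in (10, 100, 1000):
    assert max(strip_tail(n, K) for n in range(1, 10 * K)) == 1.0

# L^p boundedness (here p = 2): on {|X| >= K} we have |X| <= |X|**2 / K,
# so E(|X| ; |X| >= K) <= E(|X|**2) / K, a bound uniform over the family.
def l2_tail_bound(second_moment_sup: float, K: float) -> float:
    return second_moment_sup / K

print(l2_tail_bound(1.0, 100.0))  # 0.01: tails vanish uniformly as K grows
```

The same algebra with $|X|\leq |X|^{p}/K^{p-1}$ gives the general $L^{p}$ case for any $p>1$.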

Relevant theorems

In the following we use the probabilistic framework, but the results hold regardless of the finiteness of the measure, provided one adds the boundedness condition on the chosen subset of $L^{1}(\mu )$.

Relation to convergence of random variables

A sequence $\{X_{n}\}$ converges to $X$ in the $L_{1}$ norm if and only if it converges in measure to $X$ and is uniformly integrable. In probability terms, a sequence of random variables converging in probability also converges in the mean if and only if it is uniformly integrable.[17] This is a generalization of Lebesgue's dominated convergence theorem; see the Vitali convergence theorem.
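For the strip sequence from the examples above, the role of uniform integrability in this theorem is visible in closed form (a sketch, not from the source): that sequence converges to 0 in probability but not in $L^{1}$, whereas a dominated, hence uniformly integrable, sequence such as $Y_{n}(\omega )=\omega ^{n}$ on $[0,1]$ converges in both senses.

```python
# Sketch (not from the source): convergence in probability vs L^1 convergence
# on Omega = [0, 1] with Lebesgue measure.

# Dominated, hence UI: Y_n(w) = w**n satisfies |Y_n| <= 1 and
# E|Y_n| = integral of w**n over [0, 1] = 1 / (n + 1) -> 0.
def l1_norm_dominated(n: int) -> float:
    return 1.0 / (n + 1)

# Not UI: the strip sequence X_n = n on (0, 1/n) has
# P(|X_n| > eps) = 1/n -> 0, yet E|X_n| = n * (1/n) = 1 for every n.
def l1_norm_strip(n: int) -> float:
    return 1.0

print([round(l1_norm_dominated(n), 4) for n in (1, 9, 99)])  # [0.5, 0.1, 0.01]
print([l1_norm_strip(n) for n in (1, 9, 99)])                # [1.0, 1.0, 1.0]
```

Both sequences converge to 0 in probability, but only the uniformly integrable one converges in the mean, matching the theorem.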

Citations

  1. ^ Rudin, Walter (1987). Real and Complex Analysis (3rd ed.). Singapore: McGraw–Hill Book Co. p. 133. ISBN 0-07-054234-1.
  2. ^ Royden, H. L.; Fitzpatrick, P. M. (2010). Real Analysis (4th ed.). Boston: Prentice Hall. p. 93. ISBN 978-0-13-143747-0.
  3. ^ Hunt, G. A. (1966). Martingales et Processus de Markov. Paris: Dunod. p. 254.
  4. ^ Klenke, A. (2008). Probability Theory: A Comprehensive Course. Berlin: Springer Verlag. pp. 134–137. ISBN 978-1-84800-047-6.
  5. ^ Williams, David (1997). Probability with Martingales (Repr. ed.). Cambridge: Cambridge Univ. Press. pp. 126–132. ISBN 978-0-521-40605-5.
  6. ^ Gut, Allan (2005). Probability: A Graduate Course. Springer. pp. 214–218. ISBN 0-387-22833-0.
  7. ^ Bass, Richard F. (2011). Stochastic Processes. Cambridge: Cambridge University Press. pp. 356–357. ISBN 978-1-107-00800-7.
  8. ^ Gut 2005, p. 236.
  9. ^ Bass 2011, p. 356.
  10. ^ Benedetto, J. J. (1976). Real Variable and Integration. Stuttgart: B. G. Teubner. p. 89. ISBN 3-519-02209-5.
  11. ^ Burrill, C. W. (1972). Measure, Integration, and Probability. McGraw-Hill. p. 180. ISBN 0-07-009223-0.
  12. ^ Gut 2005, pp. 215–216.
  13. ^ Dunford, Nelson (1938). "Uniformity in linear spaces". Transactions of the American Mathematical Society. 44 (2): 305–356. doi:10.1090/S0002-9947-1938-1501971-X. ISSN 0002-9947.
  14. ^ Dunford, Nelson (1939). "A mean ergodic theorem". Duke Mathematical Journal. 5 (3): 635–646. doi:10.1215/S0012-7094-39-00552-1. ISSN 0012-7094.
  15. ^ Meyer, P. A. (1966). Probability and Potentials. Blaisdell Publishing Co., N.Y. p. 19, Theorem T22.
  16. ^ de la Vallée Poussin, C. (1915). "Sur l'intégrale de Lebesgue". Transactions of the American Mathematical Society. 16 (4): 435–501. doi:10.2307/1988879. hdl:10338.dmlcz/127627. JSTOR 1988879.
  17. ^ Bogachev, Vladimir I. (2007). Measure Theory, Volume I. Berlin Heidelberg: Springer-Verlag. p. 268. doi:10.1007/978-3-540-34514-5_4. ISBN 978-3-540-34513-8.



Source: https://en.wikipedia.org/wiki/Uniform_integrability
