Solving Probability Integrals with Monotone Convergence Theorem

  • Context: Graduate
  • Thread starter: shoeburg
  • Tags: Expectation, Sets

Discussion Overview

The discussion revolves around the application of the monotone convergence theorem in the context of probability integrals, specifically regarding the behavior of integrals of a random variable over sets with diminishing probabilities. Participants explore the conditions under which the limit of these integrals approaches zero as the probability of the sets approaches zero.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant expresses confusion about the application of the monotone convergence theorem, questioning how it relates to the integral of a random variable over sets with probabilities approaching zero.
  • Another participant suggests that if the probability of any set is greater than zero, the integral over that set must also be greater than zero, indicating a potential misunderstanding of the theorem's implications.
  • A participant agrees that the integral of a function over a set of measure zero is zero but raises concerns about the conditions under which limits can be applied in the context of expectation.
  • One participant presents a counterexample involving a Cauchy distribution, arguing that the theorem does not hold in this case, as the integral remains infinite despite the probability approaching zero.
  • Another participant questions whether assuming a finite expectation for the random variable resolves the counterexample presented.
  • Clarification is sought regarding the notation and definitions of the symbols used, particularly the distinction between different forms of integrals involving the random variable and its probability distribution.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the application of the monotone convergence theorem, with some supporting its validity under certain conditions while others present counterexamples that challenge its general applicability.

Contextual Notes

There are unresolved assumptions regarding the conditions under which the monotone convergence theorem can be applied, particularly concerning the nature of the random variable and the sets involved. The discussion also highlights the need for clarity in notation and definitions.

shoeburg
I'm having trouble working out a few details from my probability book. It says that if P(An) goes to zero, then the integral of X over An goes to zero as well. My book says it's because of the monotone convergence theorem, but this confuses me because I thought that theorem has to do with a sequence Xn converging to X. Here is my attempt anyway:

We can write the integration as
[tex]\lim_{n\to\infty} E[X \, I(A_n)] = E\!\left[\lim_{n\to\infty} X \, I(A_n)\right] = E[X \cdot 0] = E[0] = 0.[/tex]
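For intuition (not the book's proof), the claim can be checked numerically for a random variable with finite expectation. The sketch below assumes an exponential X as a stand-in, takes A_n = {X > n} so that P(A_n) shrinks to zero, and Monte Carlo estimates E[X·I(A_n)]:

```python
import numpy as np

# Sketch: estimate E[X * 1(A_n)] for an X with finite expectation,
# taking A_n = {X > n} so that P(A_n) -> 0 as n grows.
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # X >= 0 with E[X] = 1 (finite)

for n in [1, 2, 5, 10, 20]:
    indicator = x > n
    p_an = indicator.mean()              # Monte Carlo estimate of P(A_n)
    integral = (x * indicator).mean()    # Monte Carlo estimate of E[X * 1(A_n)]
    print(f"n={n:2d}  P(A_n)~{p_an:.2e}  E[X*1(A_n)]~{integral:.2e}")
```

Both columns shrink toward zero together, which is the behavior the book asserts (under a finite-expectation assumption discussed later in the thread).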
 
It would be helpful if you defined the symbols. An, X, Xn, etc.?
 
Let An be a sequence of sets indexed by n. As n goes to infinity, P(An) goes to 0. Let X be a random variable. Prove that the limit as n goes to infinity of the integral of X over An, with respect to the probability measure P, equals zero. I thought that the monotone convergence theorem applies when a sequence of random variables Xn converges to X, and you can interchange limit taking and integral taking (expectation). How does it apply here?

Thanks.
 
This may be totally wrong, but if the probability of any ##A_i## is greater than 0, then surely the integral is greater than 0, being the probability of a superset? I suppose this must be wrong but it is the most obvious way to interpret the question.
 
I do agree that the integral of anything over a set of measure zero is zero. However, if X takes the value 0 over a set B with P(B) > 0, then the expectation (integral) of X over that set B is zero. The idea that an integral over a set with probability zero is zero is certainly intuitive; it is the details of the proof that I am wondering about, namely, when can you apply the limit and let n go to infinity? Surely, we cannot just suddenly apply the limit immediately, but must calculate the integral somehow, perhaps how I tried it posted above.
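The point that P(B) > 0 does not force the integral of X over B to be positive can be illustrated with a hypothetical X that vanishes on B (a sketch, not from the thread):

```python
import numpy as np

# X = 0 on an event B with P(B) = 1/2 > 0, so E[X * 1_B] = 0
# even though B has strictly positive probability.
rng = np.random.default_rng(2)
u = rng.uniform(size=1_000_000)
b = u < 0.5                    # event B with P(B) = 1/2
x = np.where(b, 0.0, 1.0)      # X = 0 on B, X = 1 off B

print(b.mean())                # Monte Carlo P(B), close to 0.5
print((x * b).mean())          # E[X * 1_B] is exactly 0 here
```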
 
The "theorem" is false. Example:

Let X be a random variable with a Cauchy distribution. An = set of points where X > n. P(An) -> 0. However the integral of X over An is infinite for all n.
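The Cauchy counterexample can be verified in closed form: the standard Cauchy density is 1/(π(1+x²)), and x/(π(1+x²)) has antiderivative ln(1+x²)/(2π), so the partial integrals over {n < X < N} grow like ln(N)/π without bound. A quick check:

```python
import math

# Integral of x / (pi * (1 + x^2)) from n to N, in closed form:
# the antiderivative is ln(1 + x^2) / (2 * pi).
def tail_integral(n, N):
    return (math.log(1 + N**2) - math.log(1 + n**2)) / (2 * math.pi)

n = 10  # P(A_n) = P(X > 10) is already small here...
for N in (1e2, 1e4, 1e8):
    print(f"N={N:.0e}  integral from {n} to N = {tail_integral(n, N):.3f}")
# ...but the partial integrals keep growing, so the integral over A_n is infinite.
```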
 
You are right. Supposing I said, assume E(X) is finite. Does that take care of this counterexample and issue?
 
When you say the integral of X over An, do you mean
[tex]\int_{A_n} x p(x) dx[/tex]
or
[tex]\int_{A_n} p(x) dx[/tex]

where p(x) is the distribution?
 
The first one, I believe. My book usually has it written as X dP, but what you have written in the first one is equivalent, correct? And then you just put the limit taking as n -> infinity before the integral.

mathman said:
An = set of points where X > n. P(An) -> 0.
How do you know P({X>n}) --> 0? Is this true for all random variables with a finite expectation?
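On the tail question: for a nonnegative X with finite expectation, Markov's inequality gives P(X > n) ≤ E[X]/n, which does force P(X > n) → 0. A numeric sanity check with an exponential X (an assumed stand-in, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1_000_000)  # nonnegative, E[X] = 2 (finite)
mean = x.mean()

for n in (5, 10, 50, 100):
    tail = (x > n).mean()      # Monte Carlo estimate of P(X > n)
    bound = mean / n           # Markov's bound E[X] / n
    print(f"n={n:3d}  P(X>n)~{tail:.2e}  <=  E[X]/n~{bound:.2e}")
```

The estimated tail probability sits below the E[X]/n bound, and both go to zero as n grows.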
 
