Changing order of integration for a double integral

SUMMARY

The discussion centers on the process of changing the order of integration for a double integral involving a nonnegative function g(x) and a probability density function f(x) for a continuous random variable X. The specific theorem being referenced is E[g(X)] = ∫_{−∞}^{∞} g(x)f(x)dx, as outlined in "A First Course in Probability" by Sheldon Ross (5th edition). The user seeks clarification on determining new limits of integration after switching the order, ultimately realizing that the region of integration R can be defined as R = {(x,y)| 0 ≤ y < g(x)}.

PREREQUISITES
  • Understanding of double integrals and their properties
  • Familiarity with probability density functions
  • Knowledge of the concept of expectation in probability
  • Ability to visualize regions of integration in the x-y plane
NEXT STEPS
  • Study the properties of double integrals and Fubini's theorem
  • Learn about probability density functions and their applications in statistics
  • Explore graphical methods for determining limits of integration
  • Investigate the implications of changing the order of integration in probability theory
USEFUL FOR

Students and professionals in mathematics, particularly those studying calculus and probability theory, as well as anyone involved in statistical analysis and integration techniques.

JFo
I'm reading through a proof (the full theorem statement is at the bottom of the post) in a book on probability and I'm having trouble following a line in the proof. The line reads as follows:

\int_{0}^{\infty} \int_{x:\,g(x)>y} f(x)\, dx\, dy = \int_{x:\,g(x)>0} \int_{0}^{g(x)} dy\, f(x)\, dx

Here g(x) is a nonnegative function and f(x) is the probability density function of a continuous random variable X. All that has happened in this line is a change in the order of integration; the part I'm having trouble with is obtaining the new limits on the right-hand side from the limits on the left-hand side. The only way I know to determine the new limits is to draw the region of integration in the x-y plane and read off the endpoints, but I keep coming up with something different.

In case anyone is wondering, the proof comes from "A First Course in Probability" by Sheldon Ross (5th edition) and establishes the following fact about the expectation of a function of a continuous random variable X with probability density function f(x):

E[g(X)] = \int_{-\infty}^{\infty}g(x)f(x)dx

The book only proves it in the case that g(x) is nonnegative.

Thanks much in advance!

PS - Apologies if this is in the wrong forum, I felt this was more of a calculus question than a probability question, but please feel free to move it if you think it's better off somewhere else.
 
Nevermind, it just hit me.

If R is the region of integration then
R = \{(x,y) \mid g(x) > y \text{ and } y \ge 0\}, or more compactly R = \{(x,y) \mid 0 \le y < g(x)\}

The region R is just the area below the graph of g(x) and above the x-axis (g is assumed nonnegative), excluding the points where g(x) = 0.
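For anyone following along, the left-hand side of the quoted line is the standard route via the tail-probability formula for the nonnegative random variable g(X),

E[g(X)] = \int_{0}^{\infty} P\{g(X) > y\}\, dy = \int_{0}^{\infty} \int_{x:\,g(x)>y} f(x)\, dx\, dy

and integrating over R in the other order collapses the inner integral to g(x), giving \int_{x:\,g(x)>0} g(x) f(x)\, dx. As a quick numerical sanity check (my own example, not from the thread), here is a sketch assuming X ~ Exponential(1) with f(x) = e^{-x} and g(x) = x^2, where both sides should equal E[X^2] = 2:

```python
import math

# Sanity check of the order-of-integration identity for an assumed
# example: X ~ Exponential(1), f(x) = exp(-x) for x >= 0, g(x) = x^2.
# Both sides of the identity should come out close to E[X^2] = 2.

def f(x):
    """Density of X ~ Exponential(1)."""
    return math.exp(-x) if x >= 0 else 0.0

def g(x):
    """A nonnegative function of x."""
    return x * x

def riemann(h, a, b, n):
    """Midpoint Riemann sum of h over [a, b] with n subintervals."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

# Right-hand side: integral of g(x) f(x) dx over {x : g(x) > 0}.
# (The tail of the integrand beyond x = 40 is negligible here.)
rhs = riemann(lambda x: g(x) * f(x), 0.0, 40.0, 20000)

# Left-hand side: integral over y of P(g(X) > y) dy.  For g(x) = x^2
# and X supported on [0, inf), the set {x : g(x) > y} is (sqrt(y), inf).
def tail(y):
    return riemann(f, math.sqrt(y), 40.0, 1000)

lhs = riemann(tail, 0.0, 100.0, 1000)

print(lhs, rhs)  # both close to 2
```

Both Riemann sums land near 2, matching E[X^2] for the Exponential(1) distribution, which is a reassuring check that the swapped limits describe the same region R.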
 
