Proving ##\lim_{y\to\infty} F(x,y)## is the Distribution Function for X

The discussion centers on proving that the limit of the joint distribution function F(x,y) as y approaches infinity is the marginal distribution function of the random variable X. It is established that F(x,y) is bounded above by F_X(x), and combining this upper bound with a matching lower bound shows that ##\lim_{y\to\infty}F(x,y) = F_X(x)##. The proof uses inequalities and basic properties of cumulative distribution functions (CDFs), and builds on an earlier discussion of joint probability density functions and their marginals.
mathman
TL;DR
Marginal distributions for multivariate random variables
Let F(x,y) be the joint distribution function for random variables X and Y (not necessarily independent). Is ##\lim_{y\to \infty}F(x,y)## the distribution function for X? I believe it is. How does one prove it?
 
mathman said:
Summary: Marginal distributions for multivariate random variables

Let F(x,y) be the joint distribution function for random variables X and Y (not necessarily independent). Is ##\lim_{y\to \infty}F(x,y)## the distribution function for X? I believe it is. How does one prove it?
This is rather indirect, but I felt like answering this with inequalities, using a thread you were on earlier this year.
ref: https://www.physicsforums.com/threads/joint-pdf-and-its-marginals.965416/

Upper Bound
##F_{X,Y}(x,y)\leq F_X(x) ##

- - - -
with ##A## the event ##X \leq x## and ##B## the event ##Y \leq y##

##F_{X,Y}(x,y)##
##= P\big(A \cap B\big) ##
##\leq P\big(A \cap B\big) + P\big(A \cap B^C\big) ##
##= P\big( (A \cap B) \cup (A \cap B^C)\big ) ##
##= P\big( A\big ) ##
## =F_X(x)##

note: the upper bound does not depend on the choice of y

Lower Bound (see link)
##F_X(x) + F_Y(y) - 1 \leq F_{X,Y}(x,y) ##

(this follows from inclusion-exclusion: ##P(A) + P(B) - P(A \cap B) = P(A \cup B) \leq 1##, with ##A## and ##B## as above)

Putting the upper and lower bounds together and taking limits, and recalling that Y is a bona fide random variable (so its CDF tends to one):

## F_X(x) ##
##= \big\{ F_X(x) + 1 - 1\big\} ##
##= \lim_{y \to \infty}\big\{ F_X(x) + F_Y(y) - 1\big\} ##
##\leq \lim_{y \to \infty} F_{X,Y}(x,y) ##
##\leq F_X(x)##

as desired
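The limit can also be checked numerically. Below is a minimal Monte Carlo sketch (not from the thread; the correlation value, point x0, and sample size are illustrative choices): draw correlated (X, Y) pairs and watch the empirical joint CDF F(x0, y) climb toward the empirical marginal F_X(x0) as y grows.

```python
import math
import random

# Sketch: empirical check that F(x, y) -> F_X(x) as y -> infinity,
# using correlated standard normals (illustrative assumption).
random.seed(0)
rho = 0.7          # correlation between X and Y (arbitrary choice)
n = 200_000        # number of samples

pairs = []
for _ in range(n):
    z1 = random.gauss(0, 1)
    z2 = random.gauss(0, 1)
    x = z1
    y = rho * z1 + math.sqrt(1 - rho**2) * z2  # Y is correlated with X
    pairs.append((x, y))

x0 = 0.5
f_x = sum(1 for x, _ in pairs if x <= x0) / n  # empirical F_X(x0)

# Empirical F(x0, y0) for increasing y0: should approach f_x from below.
for y0 in (0.0, 1.0, 3.0, 6.0):
    f_xy = sum(1 for x, y in pairs if x <= x0 and y <= y0) / n
    print(f"y0 = {y0}: F(x0, y0) = {f_xy:.4f}   F_X(x0) = {f_x:.4f}")
```

Note that every printed value of F(x0, y0) sits below F_X(x0), matching the upper bound in the proof, and by y0 = 6 the two agree to within sampling error, since P(Y > 6) is negligible for a standard normal Y.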
 
