Independent versus dependent pdf

  • Context: Undergrad
  • Thread starter: fisher garry
  • Tags: Independent, Pdf
SUMMARY

The discussion centers on the independence of the random variables X and Y in the context of the joint probability density function (pdf) 24xy. Participants clarify that independence is a property of random variables, not of distributions, and emphasize the need for proper terminology. The joint density f(x,y) can be expressed as a product of marginal densities g(x) and h(y) only if X and Y are independent. The constraint 0 < x + y < 1 prevents 24xy from being expressed as a product of its marginals, showing that X and Y are dependent under that condition.

PREREQUISITES
  • Understanding of joint probability density functions (pdfs)
  • Knowledge of marginal distributions and their calculations
  • Familiarity with the concepts of independence and dependence in probability theory
  • Basic calculus for evaluating integrals
NEXT STEPS
  • Study the definition and properties of joint probability density functions
  • Learn how to compute marginal distributions from joint distributions
  • Explore the mathematical conditions for independence of random variables
  • Review examples of dependent and independent random variables in probability theory
USEFUL FOR

Students and professionals in statistics, data science, or mathematics who are looking to deepen their understanding of probability theory, particularly in the context of random variable independence and joint distributions.

fisher garry


In the green box in the attachment I have calculated the integral of the pdf 24xy in the same manner as in the theory that leads up to 3.1 in the attachment. But I can't see any difference between the two calculations that would explain why 24xy is dependent while the setup leading up to 3.1 is not. In the green box I believe that ##\frac{1}{2}(1-y)^2## is ##F_X(a-y)## for ##a=1##. I don't see any distinction. Why is 24xy dependent while the theory up until 3.1 assumes independence?

The theory is taken from Ross, A First Course in Probability.
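
For readers without the attachments: the result the green box is being compared to is, I believe, the standard formula for the distribution of a sum of independent random variables in Ross (stated here under that assumption),
$$F_{X+Y}(a)=\int_{-\infty}^{\infty}F_X(a-y)\,f_Y(y)\,dy,$$
which is derived assuming ##X## and ##Y## are independent, with ##F_X## the CDF of ##X## and ##f_Y## the density of ##Y##.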
 

Attachments

  • upload_2017-12-5_12-30-19.png
  • upload_2017-12-5_12-31-26.png
  • upload_2017-12-5_12-31-48.png
Your question isn't clear, because the terms "dependent" and "independent" are properties of sets of random variables, not of distributions. It doesn't make sense to talk about a single probability distribution being "independent". If you are asking about an exercise from the book, you should post your question in the homework section and give a complete statement of the problem.

If we are given a joint density f(x,y) for two random variables (x,y), we can ask whether it is possible to express that distribution as a product in the form f(x,y) = g(x) h(y). If this can be done then the random variables x and y are shown to be independent. Is that what you are asking about?

The theory leading to equation 3.1 explains how to calculate the distribution of the (single) random variable z = x + y when x and y are given to be independent. It does not explain how to use eq. 3.1 to prove the random variables x and y are independent.
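
To make that distinction concrete, here is a minimal sympy sketch of what the eq.-3.1 machinery does (the Uniform(0,1) example and the code are my own illustration, not from Ross or the attachments): it computes the distribution of a sum of variables that are already given to be independent; it does not test independence.

Python:
import sympy as sp

# Z = X + Y with X and Y independent, each Uniform(0, 1).
# Eq.-3.1-style formula: F_Z(a) = integral of F_X(a - y) * f_Y(y) dy.
a, y = sp.symbols('a y', positive=True)

# For 0 < a < 1: f_Y(y) = 1 on (0, 1), and F_X(a - y) equals a - y for
# 0 < y < a and 0 for y > a, so the integral reduces to:
F_Z = sp.integrate(a - y, (y, 0, a))
print(F_Z)  # a**2/2, the CDF of X + Y on (0, 1)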
 
Stephen Tashi said:
If we are given a joint density f(x,y) for two random variables (x,y), we can ask whether it is possible to express that distribution as a product in the form f(x,y) = g(x) h(y). If this can be done then the random variables x and y are shown to be independent. Is that what you are asking about?

Why can't one just write g(x)=24x and h(y)=y?

I am not sure if this belongs in the homework section, but how come 24xy cannot be independent and described as f(x,y)=g(x)h(y)? Can someone show this directly, mathematically?
 
fisher garry said:
but how come 24xy cannot be independent and described as f(x,y)=g(x)h(y)?

To repeat, you aren't using the proper terminology. It would make sense to ask "Are x and y independent random variables?" It would make sense to ask "Are z = x+y and x independent random variables?" It doesn't make sense to ask "Is 24xy independent?"

Which two random variables do you wish to show dependent or independent?
 
Stephen Tashi said:
To repeat, you aren't using the proper terminology. It would make sense to ask "Are x and y independent random variables?" It would make sense to ask "Are z = x+y and x independent random variables?" It doesn't make sense to ask "Is 24xy independent?"

Which two random variables do you wish to show dependent or independent?
What I should ask is: are X and Y independent, not whether 24xy is? If that is the correct formulation, can you then show mathematically why X and Y are dependent?
 
Stephen Tashi said:
To repeat, you aren't using the proper terminology. It would make sense to ask "Are x and y independent random variables?" It would make sense to ask "Are z = x+y and x independent random variables?" It doesn't make sense to ask "Is 24xy independent?"

Which two random variables do you wish to show dependent or independent?
So the question is: are X and Y independent when the joint density is 24xy? Can you show that mathematically? And can you explain whether X and Y are dependent or independent for 24xy with 0<x+y<1 and 0<x<1?
 
fisher garry said:
What I should ask is: Are x and y independent not 24xy? Can you then show why x and y are dependent directly mathematically if that is the correct formulation?

You haven't explained what is given about x and y. If you are asking about an exercise in Ross's book, simply quote the exercise. Trying to translate the exercise into your own words isn't working.

If you are asking about a question you have formulated yourself, give a complete statement of the question.

To repeat, it doesn't make sense to ask if a single random variable is independent. It only makes sense to ask if one random variable is independent of another random variable.

Furthermore, a "random variable" and a "distribution" are two different things. The notation "24xy" might denote the joint distribution for two random variables x,y or it might denote a single random variable defined by z = 24xy. It isn't clear which interpretation of "24xy" you want to use.
 
Stephen Tashi said:
You haven't explained what is given about x and y. If you are asking about an exercise in Ross's book, simply quote the exercise. Trying to translate the exercise into your own words isn't working.

If you are asking about a question you have formulated yourself, give a complete statement of the question.

To repeat, it doesn't make sense to ask if a single random variable is independent. It only makes sense to ask if one random variable is independent of another random variable.

Furthermore, a "random variable" and a "distribution" are two different things. The notation "24xy" might denote the joint distribution for two random variables x,y or it might denote a single random variable defined by z = 24xy. It isn't clear which interpretation of "24xy" you want to use.
Again, maybe I should have posted in the homework area. But my worry could be formulated as follows: in example 2f in the attachment, why are the random variables X and Y dependent if the joint density is 24xy with 0<x<1, 0<y<1, 0<x+y<1? Can you explain this by showing that the definition of independence, f(x,y)=g(x)h(y), does not hold? And the same problem for 24xy with 0<x<1 and 0<x+y<1: are the variables X and Y then independent? Can you show this using the definition of independence f(x,y)=g(x)h(y)?
 
For f(x,y) to be the joint density of two independent random variables, we need to have f(x,y) = g(x)h(y) for all values of x and y. Constraints such as 0 < x+y < 1 can prevent this from happening. Being given that "(x,y) has joint distribution g(x)h(y) subject to the following constraints..." is different from being given that (x,y) has joint distribution g(x)h(y) without any restrictions.

If f(x,y) = g(x)h(y) for all values of x and y, then g(x) must be the marginal distribution of x and h(y) must be the marginal distribution of y. One approach to the problem is to compute the marginal distributions. To show x and y are not independent, find one point (x,y) at which f(x,y) is not equal to g(x)h(y).
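
Here is a minimal sympy sketch of this approach for the density discussed in this thread, f(x,y) = 24xy on 0 < x, 0 < y, x + y < 1 (the code and the chosen test point are my own illustration):

Python:
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = 24*x*y                          # joint density on the triangle x + y < 1

g = sp.integrate(f, (y, 0, 1 - x))  # marginal of X; simplifies to 12*x*(1 - x)**2
h = sp.integrate(f, (x, 0, 1 - y))  # marginal of Y; simplifies to 12*y*(1 - y)**2

pt = {x: sp.Rational(1, 4), y: sp.Rational(1, 4)}
print(f.subs(pt))       # 3/2
print((g*h).subs(pt))   # 729/256
# f(1/4, 1/4) != g(1/4)*h(1/4), so X and Y are not independent.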
 
For example, if f(x,y) = 24xy = h(x)g(y) subject to the given constraints, then ##g(y) = \int_0^{1-y} 24xy \ dx##. But in your work (which has a typo: a factor of 24 is omitted on the left-hand side of the second line) you show that ##g(y)## is ##12(1-y)^2 y##. So the joint distribution f(x,y) = 24xy does not contain the marginal density g(y) as a factor.

A better way to state the result that Ross proves is:

The random variables ##X## and ##Y## are independent if and only if their joint probability density function ##f_{X,Y}(x,y)## can be expressed as ##f_{X,Y}(x,y) = h(x)g(y)##, where ##h(x)## is the marginal density of ##X## and ##g(y)## is the marginal density of ##Y##.

Ross didn't take the trouble to say that ##h## and ##g## must be the marginal probability densities. Apparently we are to assume that, since he didn't state any constraints relating ##X## and ##Y##, no such constraints exist.

For example, suppose the joint distribution of ##X## and ##Y## is ##f(x,y) = 4xy## for ##0 < x < 1##, ##0 < y < 1## and with no other constraints relating ##x## and ##y##. Then the marginal density of ##Y## is ##g(y) = \int_0^1 {4xy} \ dx = 2y##, which is a factor of ##4xy##.
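
The same sympy check for this unconstrained example (again just a sketch of the computation above) confirms that the product of the marginals reproduces the joint density everywhere:

Python:
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = 4*x*y                        # joint density on the unit square, no constraint tying x to y

h = sp.integrate(f, (y, 0, 1))   # marginal of X: 2*x
g = sp.integrate(f, (x, 0, 1))   # marginal of Y: 2*y, as computed above

print(h, g)                      # 2*x 2*y
print(sp.simplify(f - h*g))      # 0, so f(x, y) = h(x)*g(y) everywhere and X, Y are independent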
 
