Joint expectation of two functions of a random variable

SUMMARY

The discussion centers on the joint expectation of two functions of a random variable, specifically X = cos(θ) and Y = sin(θ), where θ is uniformly distributed over (0, 2π). It is established that X and Y are uncorrelated, demonstrated by showing that the covariance cov(X, Y) equals zero. The joint probability density function (pdf) f_{XY}(x, y) does not exist in the traditional sense, because the mass of (X, Y) is confined to a one-dimensional subset of two-dimensional space, namely the unit circle. Instead, the expectation E[XY] simplifies to E[cos(θ)sin(θ)], allowing a straightforward calculation using the pdf of θ.

PREREQUISITES
  • Understanding of random variables and their distributions
  • Knowledge of covariance and correlation concepts
  • Familiarity with probability density functions (pdfs)
  • Basic calculus, particularly integration techniques
NEXT STEPS
  • Study the properties of uncorrelated random variables in probability theory
  • Learn about Dirac delta functions and their applications in probability distributions
  • Explore Stieltjes integrals and their relevance in joint distributions
  • Investigate the implications of transformations of random variables on their distributions
USEFUL FOR

Students and professionals in statistics, mathematics, and data science who are dealing with random variables, particularly in the context of joint distributions and expectations.
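
As a quick numerical check of the conclusion above (a minimal Monte Carlo sketch, not part of the original thread; NumPy and all names here are my own choices):

import numpy as np

rng = np.random.default_rng(0)

# Draw theta uniformly over (0, 2*pi) and form X = cos(theta), Y = sin(theta).
theta = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
x = np.cos(theta)
y = np.sin(theta)

# Sample version of cov(X, Y) = E[XY] - E[X]E[Y]; should be near 0.
cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)
print(f"sample cov(X, Y) = {cov_xy:.6f}")  # approximately 0: X and Y are uncorrelated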

dionysian
OK, I am not sure whether this question belongs in the homework category or here, but it's a problem from Schaum's Outline. I know the solution to it, but I don't understand it 100%, so maybe someone can explain it to me.
Let X and Y be defined by:
\begin{array}{l} X = \cos\theta \\ Y = \sin\theta \end{array}
where \theta is a uniform random variable distributed over (0, 2\pi).
A) Show that X and Y are uncorrelated
Attempt at solution:
Show \operatorname{cov}(X, Y) = 0:
\begin{array}{l} \operatorname{cov}(X, Y) = E[XY] - E[X]E[Y] \\ E[XY] = \int_0^{2\pi} \int_0^{2\pi} xy \, f_{XY}(x, y) \, dx \, dy \end{array}
Now my question is: how do we determine the joint pdf f_{XY}(x, y) if we only know the pdf of \theta?
In the solution to the problem, it seems that they assume that
f_{XY}(x, y) = f_\theta(\theta)
Then the integral they use becomes
E[XY] = \int_0^{2\pi} xy \, f_\theta(\theta) \, d\theta
But why is it valid to assume that
f_{XY}(x, y) = f_\theta(\theta)?
Doesn't the joint (and the marginal) pdf change because of the functions:
\begin{array}{l} X = \cos\theta \\ Y = \sin\theta \end{array}
If anyone knows what I am trying to ask, please give me a little help with what is going on here.
 
dionysian said:
... Now my question is: how do we determine the joint pdf f_{XY}(x, y) if we only know the pdf of \theta?

The joint pdf doesn't technically exist, because the random variables (X, Y) have all their mass on a one-dimensional subset of two-dimensional space, i.e. the circle (\cos\theta, \sin\theta). The joint pdf could be written in terms of Dirac delta functions, or the expectation could be written as a Stieltjes integral (using the joint cdf), but the theory gets messy, and for this example it's much simpler to write the expectation as

E[XY] = E[\cos\theta \sin\theta]

which can be expressed as a single integral because \theta has a pdf: for any function g, E[g(\theta)] = \int g(\theta) f_\theta(\theta) \, d\theta (the law of the unconscious statistician), so no joint pdf of (X, Y) is needed.
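
For completeness, here is that single integral carried out (a worked sketch added for clarity, using f_\theta(\theta) = 1/(2\pi) on (0, 2\pi) and the double-angle identity \cos\theta \sin\theta = \tfrac{1}{2}\sin 2\theta):

\begin{array}{l} E[XY] = \frac{1}{2\pi}\int_0^{2\pi} \cos\theta \sin\theta \, d\theta = \frac{1}{4\pi}\int_0^{2\pi} \sin 2\theta \, d\theta = \frac{1}{4\pi}\left[ -\frac{\cos 2\theta}{2} \right]_0^{2\pi} = 0 \\ E[X] = \frac{1}{2\pi}\int_0^{2\pi} \cos\theta \, d\theta = 0, \quad E[Y] = \frac{1}{2\pi}\int_0^{2\pi} \sin\theta \, d\theta = 0 \end{array}

so \operatorname{cov}(X, Y) = E[XY] - E[X]E[Y] = 0, and X and Y are uncorrelated.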
 
