Finding the Probability Density Function for the Sum of Two Random Variables

Summary
The thread asks for the probability density function of Z = X + Y, where X = a*e^(j*phi) is a complex random variable uniformly distributed on a circle of radius a (phi uniform on [0, 2*pi)) and Y = b is a constant. The original poster first tried a Fourier-transform (characteristic-function) approach, but the replies point out that Z is simply uniformly distributed on a circle of radius a centered at b. A MATLAB simulation that appeared to show an "upside-down Gaussian distribution" reflects a misreading of the plot: Z is complex, not real, so its distribution is not an ordinary density on the real line, which is why clarifying the nature of Z is necessary before any further calculation.
jmckennon
Hi,

I've been working on this problem, but I feel like I'm overcomplicating it. Suppose X = a*e^(j*phi) is a random variable, where phi is uniform on the interval [0, 2*pi) and a is a constant, and Y = b is another random variable, where b is a constant. I'm looking to find the probability density function of Z = X + Y.

This is probably really simple, but my approach so far has been to take the Fourier transform of the density of X and of the density of Y (their characteristic functions), multiply them, and take the inverse Fourier transform of the product. That doesn't seem to work. How can I do this?
 
You haven't defined j. If I can assume you mean i (sqrt(-1)), then X is a complex random variable uniformly distributed on a circle of radius a centered at 0. Z is then uniformly distributed on a circle of radius a centered at b.
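For a quick visual check of this, here is a minimal MATLAB sketch (the values a = 2, b = 1, and M = 1000 are just placeholders) that draws samples of Z and plots them in the complex plane; the points should fall on a circle of radius a centered at b:

    % Sample phi uniformly on [0, 2*pi) and form Z = b + a*exp(1j*phi)
    M   = 1000;              % number of samples (placeholder)
    a   = 2;                 % radius (placeholder value)
    b   = 1;                 % center (placeholder value; can be complex)
    phi = 2*pi*rand(1, M);   % uniform phase samples
    Z   = b + a*exp(1j*phi); % complex samples of Z

    % Scatter the samples in the complex plane
    plot(real(Z), imag(Z), '.');
    axis equal;
    xlabel('Re(Z)'); ylabel('Im(Z)');
    title('Samples of Z = b + a e^{j\phi}');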
 
Yes, I apologize, j is sqrt(-1). After defining phi = rand(1,M).*2*pi in MATLAB with M = 1000, I plotted Z = b + a.*exp(j.*phi) for various values of a and b, and it looked kind of like an upside-down Gaussian distribution centered about pi. Is this right?
 
I'm confused about what you did, since Z is complex, not real.
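One way to make the simulation meaningful is to histogram real-valued functions of Z rather than Z itself. A minimal MATLAB sketch, continuing with the samples Z, a, and b from the sketch above: the phase of Z - b should come out approximately uniform on (-pi, pi], and the magnitude of Z - b should equal a up to floating-point error, since every sample lies on the circle.

    theta = angle(Z - b);      % phase of Z - b; roughly uniform on (-pi, pi]
    r     = abs(Z - b);        % magnitude of Z - b; should equal a for every sample

    histogram(theta);          % approximately flat bar heights
    xlabel('angle(Z - b)'); ylabel('count');

    disp(max(abs(r - a)));     % essentially zero: Z has no spread in magnitude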
 