# PDF of sum of random vectors

1. Mar 27, 2014

### thapyhap

I am trying to derive the distribution for the sum of two random vectors, such that:

\begin{align} X &= L_1 \cos \Theta_1 + L_2 \cos \Theta_2 \\ Y &= L_1 \sin \Theta_1 + L_2 \sin \Theta_2 \end{align}

With:

\begin{align} L_1 &\sim \mathcal{U}(0,m_1) \\ L_2 &\sim \mathcal{U}(0,m_2) \\ \Theta_1 &\sim \mathcal{U}(0, 2 \pi) \\ \Theta_2 &\sim \mathcal{U}(0, 2 \pi) \end{align}

In other words, two vectors, each with a uniformly random direction, and each with a magnitude uniformly random between zero and $m_1$ or $m_2$, respectively. Is this even worth trying to calculate analytically?

I've tried to break the problem down into simpler parts. First, I calculated the PDF of $S_1 = \cos \Theta_1$ as:

$$f_{S_1}(s_1) = \frac{1}{\pi \sqrt{1 - {s_1}^2}}, \qquad -1 < s_1 < 1$$

Then I thought, if we ignore $L_1$ and $L_2$, how can I find the PDF of $S_1 + S_2 = \cos \Theta_1 + \cos \Theta_2$? I thought I could try multiplying the characteristic functions of $S_1$ and $S_2$, so I tried taking the Fourier transform of $f_{S_1}(s_1)$ in both MATLAB and Mathematica, but MATLAB just choked on it, and Mathematica returned something involving the Hankel function that looks too complicated to use.

On Wikipedia I found something called the Arcsine distribution, which has a CDF similar to $F_{S_1}$. This is a special case of the Beta distribution, which Wikipedia does give the characteristic function for, but I'm not sure I can use it, given that the CDF for the Arcsine distribution is slightly different from mine. However, this leads me to believe that the characteristic function for $S_1$ is tractable.
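Edit: it does seem tractable. Numerically, the characteristic function of $S_1$ matches the Bessel function $J_0(t)$, which agrees with the integral representation $J_0(t) = \frac{1}{\pi}\int_0^\pi \cos(t\cos\theta)\,d\theta$. A quick stdlib-Python check (names and tolerances are my own choices):

```python
import math

def phi_numeric(t, n=1000):
    """E[exp(i*t*cos(Theta))] for Theta ~ U(0, 2*pi); the imaginary part
    vanishes by symmetry, so only the real part is averaged.  The midpoint
    rule is extremely accurate for smooth periodic integrands."""
    return sum(math.cos(t * math.cos((k + 0.5) * 2 * math.pi / n))
               for k in range(n)) / n

def bessel_j0(t, terms=40):
    """Power series J0(t) = sum_k (-1)^k (t/2)^(2k) / (k!)^2."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= -(t / 2.0) ** 2 / ((k + 1) ** 2)
    return total

for t in (0.5, 1.0, 2.0):
    print(t, phi_numeric(t), bessel_j0(t))   # the two columns agree
```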

I really don't know anything about probability, I'm just reading Wikipedia and trying to make some sense of this problem. I would really appreciate someone telling me where to look next, or at least that what I'm trying to do is analytically impossible!

2. Mar 27, 2014

### Simon Bridge

Do you not know the rules for adding and multiplying probability density functions?
... hmmm, I think I misread: you are finding the distribution of the final values from adding 4 random numbers together.

found a discussion that may have some leads for you...
... I'll have to think some more.

Basically: the probability distribution of the sum of two or more independent random variables is the convolution of their individual distributions.
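For example, the sum of two independent U(0,1) variables has the triangular density f(z) = z on [0,1] and 2 - z on [1,2]; a crude grid-convolution sketch in Python reproduces it:

```python
# Grid convolution of two U(0,1) densities: the density of the sum of two
# independent U(0,1) variables should come out triangular.
n = 1000                 # grid points per unit interval
h = 1.0 / n              # grid spacing
f = [1.0] * n            # density of U(0,1) sampled on [0, 1)

def conv_at(i):
    """(f * f)(z) ~= h * sum_j f[j] * f[i - j], evaluated at z = i*h."""
    return h * sum(f[j] * f[i - j]
                   for j in range(max(0, i - n + 1), min(i + 1, n)))

for z in (0.5, 1.0, 1.5):
    print(z, conv_at(int(z * n)))   # approximately 0.5, 1.0, 0.5
```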

Last edited: Mar 27, 2014
3. Mar 28, 2014

### bpet

Perhaps it's easier to consider (X,Y) in polar coordinates; e.g., symmetry arguments show that the angle of (X,Y) is uniformly distributed. The magnitude is a little trickier, but the CDF of X^2+Y^2, say, could be written as a triple integral of an indicator function and then simplified somewhat.
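A quick simulation supports both points: the fraction of samples whose angle lands in the first quadrant comes out near 1/4, and the same loop gives a Monte Carlo estimate of the CDF of X^2+Y^2 via the indicator (here with placeholder magnitudes m1 = m2 = 1 and an arbitrary threshold):

```python
import math, random

rng = random.Random(0)
m1, m2 = 1.0, 1.0        # placeholder magnitudes
n = 100_000
in_quadrant = 0          # samples whose angle lies in the first quadrant
below = 0                # indicator for the event X^2 + Y^2 <= 1

for _ in range(n):
    l1, l2 = rng.uniform(0, m1), rng.uniform(0, m2)
    t1, t2 = rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi)
    x = l1 * math.cos(t1) + l2 * math.cos(t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t2)
    if x > 0 and y > 0:
        in_quadrant += 1
    if x * x + y * y <= 1.0:
        below += 1

print(in_quadrant / n)   # close to 0.25 if the angle is uniform
print(below / n)         # Monte Carlo estimate of P(X^2 + Y^2 <= 1)
```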

Last edited: Mar 28, 2014
4. Mar 28, 2014

### thapyhap

Yeah, but the convolution of two distributions corresponds to the product of their characteristic functions, i.e., the Fourier transforms of their PDFs. Mathematica gave me a nice solution for this today. For $Z = \cos \Theta_1 + \cos \Theta_2$:

$$f_Z(z)=\frac{K(1-\frac{z^2}{4})}{\pi^2}$$

Where $K(m)$ is the complete elliptic integral of the first kind, in Mathematica's parameter convention ($m = k^2$).
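As a sanity check, here is K in the parameter convention via the arithmetic-geometric mean, compared against a Monte Carlo histogram of $Z$ (seed and bin width are arbitrary choices of mine):

```python
import math, random

def ellipk(m):
    """Complete elliptic integral of the first kind in the parameter
    convention (Mathematica's EllipticK[m]):
    K(m) = pi / (2 * agm(1, sqrt(1 - m)))."""
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def f_Z(z):
    """Claimed density of Z = cos(Theta1) + cos(Theta2)."""
    return ellipk(1.0 - z * z / 4.0) / math.pi ** 2

# Monte Carlo histogram check at z = 1.
rng = random.Random(0)
n, z0, eps = 200_000, 1.0, 0.05
hits = sum(1 for _ in range(n)
           if abs(math.cos(rng.uniform(0, 2 * math.pi))
                  + math.cos(rng.uniform(0, 2 * math.pi)) - z0) < eps)
print(hits / (n * 2 * eps), f_Z(z0))   # both roughly 0.2185
```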

Could you elaborate a bit more? I'm not sure I see how this ends up simplifying things; it seems like I would have to do all the same calculations to get the magnitude. I can't find a nice way to calculate the distribution of $L_1 \cos \Theta_1$. Wikipedia has an article on calculating the product of distributions, which I thought would be easy considering that the PDF of a uniform random variable is so simple, but I didn't really understand the calculus.

The article gives the PDF of $Z = XY$, for two random variables $X$ and $Y$, with PDFs $f_X$ and $f_Y$:

$$f_Z(z) = \int f_X(x) f_Y\left(\frac{z}{x}\right) \frac{1}{|x|}\, dx$$

So I tried working it as follows, with $Z=L_1 \cos \Theta_1$:

\begin{align} f_Z(z) &= \int \frac{1}{\pi\,m_1\,\left|x\right|\sqrt{1-(\frac{z}{x})^2}}\, dx \\ &= \int \frac{1}{\pi\,m_1\,\sqrt{x^2-z^2}}\, dx \\ &= \frac{\log \left(x+\sqrt{x^2-z^2}\right)}{\pi m_1} + C \end{align}

What does it mean that this is still a function of $x$? I have no clue what to try next.
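Update: I think the issue is that I forgot the limits of integration. The arcsine factor is only supported where $|z/x| < 1$, so the integral runs over $|z| < x < m_1$, and evaluating the antiderivative at those limits gives $f_Z(z) = \operatorname{arccosh}(m_1/|z|)/(\pi m_1)$ for $0 < |z| < m_1$ (my own evaluation, so take it with a grain of salt). A quick simulation seems to agree:

```python
import math, random

def f_Z(z, m1=1.0):
    """Density of L1*cos(Theta1) with L1 ~ U(0, m1): the integral over
    |z| < x < m1 evaluates to arccosh(m1/|z|) / (pi * m1)."""
    return math.acosh(m1 / abs(z)) / (math.pi * m1)

# Monte Carlo histogram check at z = 0.3 (seed and bin width arbitrary).
rng = random.Random(0)
n, z0, eps = 200_000, 0.3, 0.02
hits = sum(1 for _ in range(n)
           if abs(rng.uniform(0, 1)
                  * math.cos(rng.uniform(0, 2 * math.pi)) - z0) < eps)
print(hits / (n * 2 * eps), f_Z(z0))   # both roughly 0.596
```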

5. Mar 28, 2014

### bpet

Standard convolution formulas are not likely to be of much use for this approach because X and Y are dependent.

Also, don't worry about the PDF just yet; it's trivial to calculate (if it exists) once you've got the CDF.

A CDF can be written as the expected value of a Boolean indicator function, which for this example will be a 4d integral. If you consider the squared magnitude as I suggested, this can be simplified to a 3d integral by symmetry arguments (or a 1d integral for the case L1=L2=1).
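For the L1=L2=1 case the reduction is explicit: R^2 = 2 + 2*cos(Theta1 - Theta2), and with Delta = Theta1 - Theta2 uniform (mod 2*pi) the CDF is P(R^2 <= s) = 1 - arccos(s/2 - 1)/pi for 0 <= s <= 4. A quick check (seed and test point chosen arbitrarily):

```python
import math, random

def cdf_r2(s):
    """CDF of R^2 = 2 + 2*cos(Delta), Delta ~ U(0, 2*pi), for 0 <= s <= 4:
    P(R^2 <= s) = P(cos(Delta) <= s/2 - 1) = 1 - arccos(s/2 - 1)/pi."""
    return 1.0 - math.acos(s / 2.0 - 1.0) / math.pi

# Monte Carlo check of the 1d reduction at s = 2.
rng = random.Random(0)
n, s = 100_000, 2.0
hits = sum(1 for _ in range(n)
           if 2.0 + 2.0 * math.cos(rng.uniform(0, 2 * math.pi)
                                   - rng.uniform(0, 2 * math.pi)) <= s)
print(hits / n, cdf_r2(s))   # both close to 0.5
```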

Symmetry arguments again show that the magnitude and angle are independent, but, if you must, you can calculate the joint PDF of X and Y by differentiating the cdf and using a transformation from polars back to Cartesians.

6. Mar 29, 2014

### Stephen Tashi

I'm curious whether using the PDF will give a 2 variable integration in a straightforward way.

Since the density function has the same value at all points on a circle of radius R, we may as well compute that value at the point (x = R, y = 0).

To break down how the sum of two vectors can land at (R,0), we can consider the vertical lines through points on the x-axis. There is an interval [x_min, x_max] of values x1 such that the end of the first vector can land on the vertical line through (x1, 0); the values x_min, x_max are functions of m1, m2, R. On such a vertical line, there is an interval [y_min, y_max] of values y1 such that the endpoint of the first vector can land at (x1, y1) and still allow the second vector to go from (x1, y1) to (R,0); these bounds are functions of x1, m1, m2, R.

The bounds [x_min, x_max] together with the bounds [y_min, y_max] determine some sort of geometric figure (not a rectangle, since y_min and y_max are functions of x1).

If we knew the joint density J_cartesian of (x1, y1, x2, y2) (with (x2, y2) representing the components of the second vector), we could integrate J_cartesian(x1, y1, R-x1, -y1) over the above geometric figure as a double integral in the variables x1, y1. (At least that's my intuition; granted, it's dangerous to reason about problems using PDFs.)

Since we don't know J_cartesian(x1, y1, x2, y2), we can use a change of variables that expresses the vectors (x1, y1) and (R-x1, -y1) in polar coordinates. In polar coordinates, the joint density J_polar(L1, theta1, L2, theta2) is just the product of 4 constants. The complications come from writing the bounds of integration in terms of the polar variables and from the "volume element" introduced by the change of variables. (It looks like we would be using a 2D "area element", since J_polar is evaluated as a function of only two variables. Is that correct?)

(As an aside, the endpoint of the first vector won't be uniformly distributed over the area of a circle of radius m1. The first vector isn't "a random vector" in that sense of "random".)