Undergrad Random Unit Vector Angle Difference

SUMMARY

The discussion centers on the non-uniform distribution of differences between random angles sampled uniformly from the interval [0, 2π]. When taking the difference between two angles, dA = Ai - Aj, the resulting density of |dA| is monotonically decreasing, contrary to the naive expectation of uniformity. This happens because the difference of two independent uniform random variables on a bounded interval has a triangular density; it has nothing to do with angles or the geometry of the unit circle per se.

PREREQUISITES
  • Understanding of uniform distribution in probability theory
  • Familiarity with polar and Cartesian coordinate systems
  • Knowledge of basic statistics, particularly probability density functions
  • Experience with random number generation techniques
NEXT STEPS
  • Explore the properties of the triangular distribution of the sum of uniform random variables
  • Learn about the implications of coordinate transformations on probability distributions
  • Investigate the concept of cumulative distribution functions (CDF) and their applications
  • Study the geometric interpretation of random variables in two-dimensional space
USEFUL FOR

Mathematicians, statisticians, data scientists, and anyone interested in the properties of random variables and their distributions, particularly in the context of angle measurements and geometric probability.

DuckAmuck
I am simulating random angles from 0 to 2π with a uniform distribution. However, if I take the differences between random angles, I get a non-uniform (monotonically decreasing) distribution of angles.

In math speak:
Ai = uniform(0,2π)
dA = Ai - Aj
dA is not uniform.

Here is a rough image of what I'm seeing. P is probability density:
[attached plot: density of dA, highest at 0 and decreasing monotonically]
This does not make sense to me, as it seems to imply that the difference between random angles is more likely to be 0 than to be non-zero. You would think it would be uniform, as one angle can be viewed as the *zero* and the other as the random angle. So dA seems like it should also be uniform. What is going on here?
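For reference, a minimal Python sketch (stdlib only; my own illustration, not the OP's code) that reproduces the effect:

```python
import math
import random

# Sample pairs of angles uniformly on [0, 2*pi) and take the
# absolute difference, as in the original experiment.
N = 100_000
two_pi = 2 * math.pi
diffs = [abs(random.uniform(0, two_pi) - random.uniform(0, two_pi))
         for _ in range(N)]

# Crude check of non-uniformity: if dA were uniform, the two halves
# of [0, 2*pi] would get equal counts; the triangular density
# predicts a 3:1 split in favor of the lower half.
low = sum(1 for d in diffs if d < math.pi)
high = N - low
print(low, high)
```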
 

I have a feeling there's a problem with different coordinate systems (Cartesian vs. polar in particular) here, and with what it means to be "uniform at random". Can you explain how you are generating these 2-d vectors?

The standard approach in rectangular coordinates, for uniform-at-random sampling, is to assume WLOG that your first vector is ##c \mathbf e_1##, i.e. a multiple of the 1st standard basis vector (with ##c\gt0## to normalize as needed): for any first vector sampled, you can select an orthonormal basis / set your rectangular coordinate system with it as an axis. Then sample your second vector and tease out the angle with an inner product.
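As a sketch of that recipe (my own Python illustration; the Gaussian-normalization trick for a uniform direction is an assumption, not something stated above):

```python
import math
import random

def random_unit_vector():
    # Normalizing a 2-D Gaussian sample gives a direction that is
    # uniform on the unit circle.
    x, y = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    r = math.hypot(x, y)
    return x / r, y / r

# WLOG fix the first vector as e1 = (1, 0); sample the second and
# recover the (unsigned) angle between them from the inner product.
v = random_unit_vector()
angle = math.acos(max(-1.0, min(1.0, v[0])))  # dot(e1, v) = v[0]
print(angle)  # lies in [0, pi]
```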
 
The vectors aren't *really* vectors computationally. I'm just generating angles using a uniform random number generator. Then taking the differences between them.
 
DuckAmuck said:
The vectors aren't *really* vectors computationally. I'm just generating angles using a uniform random number generator. Then taking the differences between them.

If you choose two numbers in an interval, ##[0, 2\pi]## in this case, then unless one number is close to ##0## and the other number is close to ##2\pi##, you can't get a difference close to ##2\pi##.

Also, in principle, your new distribution could be on the interval ##[-2\pi, 2\pi]## depending on how you measure the difference.
 
DuckAmuck said:
The vectors aren't *really* vectors computationally. I'm just generating angles using a uniform random number generator. Then taking the differences between them.
Got it -- so it's sampling from a real interval ##[0,2\pi]## uniformly at random. Up to rescaling, we could just call it ##[0,1]## and ignore any mention of angles, right?
 
Silly me, yes you can just forget about it being angles. Uniform distribution sample - uniform distribution sample = non-uniform sample. Still not sure why this is.
 
DuckAmuck said:
Silly me, yes you can just forget about it being angles. Uniform distribution sample - uniform distribution sample = non-uniform sample. Still not sure why this is.

Suppose we had a bet. You bet on a difference of ##3\pi/2## and I bet on ##\pi/4##. For you to win, your first number must be in the range ##< \pi/2## or ##> 3\pi/2##. That's only a 50% chance. But, my first number could be anywhere and I'm still in the running.

Also, for the second number, you only have one possibility. If your first number is low, your second number must be high; or vice versa. Whereas, I've got a good chance of having two possibilities, one higher and one lower than my first number.
 
so you want the distribution of ##\big \vert U_1 - U_2\big \vert##

this is a classic problem of sketching things out -- i.e. draw a square with corners ##[0,0], [0,1], [1,0],[1,1]## and draw the line from [0,0] to [1,1] (the diagonal). The event ##\big \vert U_1 - U_2\big \vert \gt c## is the (symmetric) pair of corner triangles between that diagonal and the sides of the square.

- - - - -
edit:
(re-done, to cleanup the CDF approach)
My suggested approach to get the CDF of ##V := \big \vert U_1 - U_2\big \vert##
##U_1, U_2## are both iid uniform r.v.'s in [0,1]

we want to compute
##F_V(c)= P\Big(\big \vert U_1 - U_2\big \vert \leq c\Big)##

but instead consider the complementary CDF given by
##\bar{F}_V(c) = 1 - F_V(c) = 1- P\Big(\big \vert U_1 - U_2\big \vert \leq c\Big)##
but in terms of the underlying events,
##\bar{F}_V(c) = P\Big(\big \vert U_1 - U_2\big \vert \gt c\Big) = P\Big( U_1 - U_2 \gt c\Big) + P\Big( U_1 - U_2 \lt -c\Big) = P\Big( U_1 - U_2 \gt c\Big) + P\Big( U_1 - U_2 \leq -c\Big)##
where mutually exclusive events add, and then the strictness of the inequality can be ignored due to zero probability of a tie. So we need
##(\text{i}) P\Big( U_1 - U_2 \gt c\Big)##
##(\text{ii}) P\Big( U_1 - U_2 \leq -c\Big)##

for (i)
##P\Big( U_1 - U_2 \gt c\Big) = P\Big( U_1 \gt U_2 + c\Big) = 1 - P\Big( U_1 \leq U_2 + c\Big)##
but
##P\Big( U_1 \leq U_2 + c\Big) = \big(\int_0^{1-c} F_{U_1}(u_2 + c)\cdot dF_{U_2}(u_2) \big)+ \int_{1-c}^1 1 \cdot dF_{U_2}(u_2) = \big(\int_0^{1-c} F_{U_1}(u_2 + c)\cdot d u_2\big)+ c ##

for (ii)
##P\Big( U_1 - U_2 \leq -c\Big) = P\Big(U_1 \leq U_2 -c\Big) = \big(\int_0^c 0 \cdot dF_{U_2}\big) + \int_c^1 F_{U_1}(u_2 - c) dF_{U_2} = \int_c^1 F_{U_1}(u_2 - c) d u_2 ##
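Finishing the computation (filling in the elementary integrals, using ##F_{U_1}(x) = x## on ##[0,1]##):

##(\text{i})\; P\Big( U_1 - U_2 \gt c\Big) = 1 - \Big(\int_0^{1-c} (u_2 + c)\, du_2 + c\Big) = \frac{(1-c)^2}{2}##

##(\text{ii})\; P\Big( U_1 - U_2 \leq -c\Big) = \int_c^1 (u_2 - c)\, du_2 = \frac{(1-c)^2}{2}##

so ##\bar{F}_V(c) = (1-c)^2##, hence ##F_V(c) = 2c - c^2## and the density is ##f_V(c) = 2(1-c)## on ##[0,1]##.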
 
DuckAmuck said:
I am simulating random angles from 0 to 2π with a uniform distribution. However, if I take the differences between random angles, I get a non-uniform (monotonically decreasing) distribution of angles.

In math speak:
Ai = uniform(0,2π)
dA = Ai - Aj
dA is not uniform.

Here is a rough image of what I'm seeing. P is probability density:
This does not make sense to me, as it seems to imply that the difference between random angles is more likely to be 0 than to be non-zero. You would think it would be uniform, as one angle can be viewed as the *zero* and the other as the random angle. So dA seems like it should also be uniform. What is going on here?

It is easy enough to work out the distribution of the difference ##A_i - A_j## or ##|A_i - A_j|.## As other responders have done, let us change the problem to one of uniform distributions over ##[0,1].## If ##X_1## and ##X_2## are independent and Unif(0,1), the density of their difference ##D = X_1 - X_2## is far from uniform. In fact, ##Y = D+1## is "familiar", because ##Y = X_1 + (1-X_2) = X_1 + X_2'##, where ##X_2' = 1-X_2## is independent of ##X_1## and has distribution Unif(0,1). Thus, ##Y## has the distribution of a sum of uniforms, so has a triangular density function. To get the density function of ##D## we need only shift that of ##Y## by one unit to the left, so the density function of ##D## is
$$f_D(d) = \begin{cases}1+d,& -1 \leq d \leq 0\\
1-d,& 0 \leq d \leq 1 \\
0 & \text{otherwise}
\end{cases}
$$
The density of ##M = |D|## is
$$ f_M(m) = f_D(m) + f_D(-m) = \begin{cases}2(1-m) & 0 \leq m \leq 1\\
0 & \text{otherwise}
\end{cases}$$
So ##|X_1-X_2|## does, indeed, have a downward-sloping density, highest near 0 and dropping to 0 near 1.

For more about the "triangular" distribution of a sum, just Google "distribution of a sum of uniform random variables".
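A quick Monte Carlo check of this density (a stdlib-Python sketch of my own, comparing the implied CDF ##F_M(m) = 2m - m^2## at one point):

```python
import random

# M = |X1 - X2| with X1, X2 independent Unif(0, 1); the triangular
# density f_M(m) = 2(1 - m) implies the CDF F_M(m) = 2m - m^2.
N = 200_000
samples = [abs(random.random() - random.random()) for _ in range(N)]

empirical = sum(1 for s in samples if s <= 0.5) / N
predicted = 2 * 0.5 - 0.5 ** 2  # F_M(0.5) = 0.75
print(empirical, predicted)
```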
 
