Random Unit Vector from a Uniform Distribution

Summary
To choose a unit n-dimensional vector uniformly at random, one can either use generalized spherical coordinates or generate a vector whose components are drawn from a multivariate normal distribution and then normalize it. The multivariate unit normal has a covariance matrix equal to a constant times the identity matrix, so its probability density is spherically symmetric; consequently the direction of the vector is uniformly distributed over the unit n-sphere. Writing the density in terms of the covariance matrix also makes it straightforward to compute expectation values. This approach handles the problem of generating uniform random directions in higher dimensions.
emob2p said:
Hi,

I have encountered the following problem in my research. As I do not have a strong background in probability theory, I was wondering if anyone here could help me through the following.

I would like to know how one makes rigorous the problem of randomly choosing a unit n-dimensional vector from a uniform distribution.

This is like choosing a point on the n-sphere, in which case the problem can be solved by switching to generalized spherical coordinates. However, I have read that one can also obtain a uniform distribution by drawing the vector's coordinates from a normal distribution and then dividing by the norm. It is not clear to me why this method produces a uniform distribution.

Thanks Much,
Eric
 
The covariance matrix of a multivariate unit normal expressed in Cartesian coordinates is a constant times the identity matrix. In other words, the multivariate unit normal has a spherically symmetric probability distribution, so the direction of a vector drawn from it is uniformly distributed over the unit n-sphere, and normalizing the vector yields a uniformly distributed unit vector.
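
A minimal sketch of this recipe in Python, assuming NumPy is available (the helper name `random_unit_vector` is just illustrative):

```python
import numpy as np

def random_unit_vector(n, rng=None):
    """Draw a vector uniformly from the unit sphere in R^n."""
    rng = np.random.default_rng() if rng is None else rng
    # Independent standard normal coordinates: the joint density depends
    # only on the norm of the vector, so the distribution is spherically
    # symmetric about the origin.
    x = rng.standard_normal(n)
    # Dividing by the norm discards the radius and keeps the direction,
    # which is therefore uniform over the unit (n-1)-sphere.
    return x / np.linalg.norm(x)

# Example: a random unit vector in R^5; its norm should be 1 up to rounding.
v = random_unit_vector(5)
print(v, np.linalg.norm(v))
```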
 
D H said:
In other words, the multivariate unit normal has a spherical probability distribution.

Why does this follow? And if so, how does one use the covariance matrix to obtain a probability density that can be integrated to find expectation values?
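
A sketch of the standard argument, assuming a zero-mean normal with covariance ##\Sigma = \sigma^2 I##: the density is

$$ f(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}\,\mathbf{x}^{\mathsf T}\Sigma^{-1}\mathbf{x}\right) = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\!\left(-\frac{\|\mathbf{x}\|^2}{2\sigma^2}\right), $$

which depends on ##\mathbf{x}## only through its norm and is therefore invariant under rotations. Writing ##\mathbf{x} = r\mathbf{u}## with ##r = \|\mathbf{x}\|## and ##\mathbf{u}## on the unit sphere, the volume element is ##d^n x = r^{n-1}\,dr\,d\Omega_{n-1}##, so

$$ f(\mathbf{x})\,d^n x = \frac{r^{n-1} e^{-r^2/(2\sigma^2)}}{(2\pi\sigma^2)^{n/2}}\,dr \; d\Omega_{n-1}. $$

The angular factor is constant, so the direction ##\mathbf{u} = \mathbf{x}/\|\mathbf{x}\|## is uniform over the sphere, and the expectation of a function of direction reduces to

$$ \mathbb{E}\,[g(\mathbf{u})] = \frac{1}{A_{n-1}} \int_{S^{n-1}} g(\mathbf{u})\, d\Omega_{n-1}, \qquad A_{n-1} = \frac{2\pi^{n/2}}{\Gamma(n/2)}. $$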
 
The standard _A " operator" maps a Null Hypothesis Ho into a decision set { Do not reject:=1 and reject :=0}. In this sense ( HA)_A , makes no sense. Since H0, HA aren't exhaustive, can we find an alternative operator, _A' , so that ( H_A)_A' makes sense? Isn't Pearson Neyman related to this? Hope I'm making sense. Edit: I was motivated by a superficial similarity of the idea with double transposition of matrices M, with ## (M^{T})^{T}=M##, and just wanted to see if it made sense to talk...

Similar threads

  • · Replies 3 ·
Replies
3
Views
2K
  • · Replies 8 ·
Replies
8
Views
3K
  • · Replies 2 ·
Replies
2
Views
2K
  • · Replies 5 ·
Replies
5
Views
2K
  • · Replies 7 ·
Replies
7
Views
2K
  • · Replies 13 ·
Replies
13
Views
3K
  • · Replies 11 ·
Replies
11
Views
3K
  • · Replies 9 ·
Replies
9
Views
2K
Replies
9
Views
2K