Variable transformation for a multivariate normal distribution

Summary
To transform a vector x sampled from an n-dimensional multivariate normal distribution into the corresponding δ vector, apply ##\delta_i = e^{x_i} - 1## to each component. The components of x are defined as ##x_i = \log(\delta_i + 1)##, which guarantees ##\delta_i > -1##. The transformation is componentwise, allowing direct computation of δ from x without needing to alter the covariance matrix. While a multivariate log-normal distribution exists, the discussion focuses on sampling from the multivariate normal distribution. Literature on variable transformations in multivariate statistics is recommended for further reading.
nilsgeiger
TL;DR
I want to transform a multivariate normal distribution of the ##\log(\delta_i + 1)## into the distribution of the ##\delta_i##.
Besides, I'm looking for a way to transform random vectors with components ##\log(\delta_i + 1)## into vectors with components ##\delta_i##.
Hello.

I would like to draw (sample) several random vectors x from an n-dimensional multivariate normal distribution.
For this purpose I want to use C++ and the GNU Scientific Library function gsl_ran_multivariate_gaussian.

https://www.gnu.org/software/gsl/manual/html_node/The-Multivariate-Gaussian-Distribution.html
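For orientation, gsl_ran_multivariate_gaussian takes the lower-triangular Cholesky factor L of the covariance matrix (with ##\Sigma = L L^T##) rather than Σ itself, and internally draws x = μ + L z with z standard normal. A minimal self-contained sketch of that sampling step, using only the C++ standard library (the algorithm mirrors what the GSL call does; the matrix sizes and values below are made-up examples):

```cpp
#include <cmath>
#include <random>
#include <vector>

// Cholesky factorization of a symmetric positive-definite matrix:
// returns the lower-triangular L with Sigma = L * L^T.
std::vector<std::vector<double>> cholesky(const std::vector<std::vector<double>>& S) {
    std::size_t n = S.size();
    std::vector<std::vector<double>> L(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j <= i; ++j) {
            double sum = S[i][j];
            for (std::size_t k = 0; k < j; ++k) sum -= L[i][k] * L[j][k];
            L[i][j] = (i == j) ? std::sqrt(sum) : sum / L[j][j];
        }
    }
    return L;
}

// One multivariate normal draw: x = mu + L * z with z ~ N(0, I).
// This is the same construction gsl_ran_multivariate_gaussian uses.
std::vector<double> sample_mvn(const std::vector<double>& mu,
                               const std::vector<std::vector<double>>& L,
                               std::mt19937& rng) {
    std::normal_distribution<double> stdnorm(0.0, 1.0);
    std::size_t n = mu.size();
    std::vector<double> z(n), x(n);
    for (auto& zi : z) zi = stdnorm(rng);
    for (std::size_t i = 0; i < n; ++i) {
        x[i] = mu[i];
        for (std::size_t j = 0; j <= i; ++j) x[i] += L[i][j] * z[j];
    }
    return x;
}
```

When using GSL itself, decompose Σ once with gsl_linalg_cholesky_decomp1 and pass the resulting factor to gsl_ran_multivariate_gaussian for each draw.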


The distribution has the usual density
$$p(x_1,\dots,x_k) dx_1 \dots dx_k = {1 \over \sqrt{(2 \pi)^k |\Sigma|}} \exp \left(-{1 \over 2} (x - \mu)^T \Sigma^{-1} (x - \mu)\right) dx_1 \dots dx_k$$
with ##\mu = 0##, but with
$$ x = \begin{pmatrix} \log(\delta_1 + 1) \\
\log(\delta_2 + 1) \\
\log(\delta_3 + 1) \\
\log(\delta_4 + 1) \\
\vdots \\
\log(\delta_n + 1) \\
\end{pmatrix}$$

As stated, the ##\log(\delta_i + 1)## follow the multivariate normal distribution.

But I am actually only interested in the ##\delta## -vectors.
$$ \delta = \begin{pmatrix} \delta_1 \\
\delta_2 \\
\delta_3 \\
\vdots \\
\delta_n \\
\end{pmatrix}$$
  1. How do you transform an x-vector into a ##\delta##-vector?
    With the help of the covariances? But how exactly?
  2. Alternatively, can you apply a change of variables to obtain the multivariate distribution of the ##\delta_i## and draw ##\delta##-vectors directly with gsl_ran_multivariate_gaussian?
    Could you please tell me the formula to compute the appropriate new covariance matrix?
    Or is this not possible?
    I am aware that the multivariate log-normal distribution exists, but GSL can only sample the multivariate normal.

I'm so sorry, these are probably really stupid questions.
But I'm just a not particularly good bachelor physics student in his fourth semester who has also started programming C++ for the very first time.
I'm really overwhelmed and only began learning about multivariate statistics because of this task, no more than a week ago.

It would really help me a lot if you could answer and explain my two questions in great detail and for idiots.

I would also be very thankful for literature references on general variable transformations for multivariate distributions and for multivariate normal distributions.
Especially for multivariate normal distributions of the ##\log(x_i + 1)##, there must be formulas together with a detailed derivation, right?
Normally distributed logarithms have to occur often, and the ##+1## just ensures that for ##x_i## greater than zero the logarithm always remains positive, so they should also be quite common?
 
From what you've written it sounds like you already know how to randomly generate x vectors from the required multivariate normal distribution. That's the difficult bit done. From there it's easy. Since for ##i=1,2,...,n## we have ##x_i = \log(\delta_i+1)##, you can calculate a simulated ##\mathbf \delta## vector from its corresponding x vector by calculating ##\delta_i = e^{x_i}-1##, where ##\delta_i## and ##x_i## are the ##i##-th components of the ##\mathbf \delta## and x vectors respectively.
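To make the reply concrete: since the transform is purely componentwise, no covariance manipulation is needed at this step. A minimal sketch (the input values in the test are arbitrary examples):

```cpp
#include <cmath>
#include <vector>

// Convert a sampled x vector, whose components are log(delta_i + 1),
// back to the delta vector: delta_i = exp(x_i) - 1, componentwise.
std::vector<double> x_to_delta(const std::vector<double>& x) {
    std::vector<double> delta(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        delta[i] = std::exp(x[i]) - 1.0;
    return delta;
}
```

Because each ##\delta_i## depends only on its own ##x_i##, the correlations between components are handled entirely by the multivariate normal sampling step; the transform itself involves no covariances.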
 