Treating Propagation of Errors in $R^n$ to $R^m$ Transformations

  • Context: Graduate
  • Thread starter: MaartenB
  • Tags: Errors, Propagation
SUMMARY

This discussion concerns the propagation of errors under transformations from $R^n$ to $R^m$. For $R^n$ to $R^n$ transformations the Jacobian matrix suffices: the transformed density picks up a factor of the Jacobian determinant $|J|$. For $R^n$ to $R^m$ transformations, the discussion turns to characteristic functions and the Dirac delta function, touching on convolution and integration over the transformed variables. The open question is how to evaluate the Dirac delta function in higher dimensions, particularly when the output is a vector.

PREREQUISITES
  • Understanding of Jacobian matrices in multivariable calculus
  • Familiarity with characteristic functions and their properties
  • Knowledge of Dirac delta functions and their applications in integration
  • Basic concepts of convolution in probability theory
NEXT STEPS
  • Research the evaluation of Dirac delta functions in multiple dimensions
  • Study the properties of characteristic functions in probability distributions
  • Explore convolution techniques for sums of random variables
  • Examine the application of Jacobians in transformations between different dimensional spaces
USEFUL FOR

Mathematicians, statisticians, and physicists working on error analysis in transformations, particularly those dealing with multivariable functions and probability distributions.

MaartenB
I want to know how to treat propagation of errors in general.
When the transformation of variables is a transformation from $R^n$ to $R^n$, it
simply involves a Jacobian:
[tex]g(\vec{y}) = f(x(\vec{y}))|J|[/tex]
With
[tex]J_{ij} = \frac{\partial x_i}{\partial y_j}[/tex]
(see http://pdg.lbl.gov/2005/reviews/probrpp.pdf for instance)
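As a concrete sanity check of this formula (my own illustrative sketch, not from the reference): transform a standard bivariate normal to polar coordinates, where $x_1 = r\cos\theta$, $x_2 = r\sin\theta$ and $|J| = r$, and verify that the transformed density still integrates to 1.

```python
import numpy as np

# Check g(y) = f(x(y)) |J| for an R^2 -> R^2 transformation:
# standard bivariate normal in Cartesian coordinates, mapped to polar.
def f_cartesian(x1, x2):
    return np.exp(-(x1**2 + x2**2) / 2) / (2 * np.pi)

def g_polar(r, theta):
    x1, x2 = r * np.cos(theta), r * np.sin(theta)
    return f_cartesian(x1, x2) * r  # |J| = |det(dx_i/dy_j)| = r

# The transformed density must still integrate to 1 over (r, theta).
r = np.linspace(0, 8, 400)
theta = np.linspace(0, 2 * np.pi, 400)
dr, dth = r[1] - r[0], theta[1] - theta[0]
R, TH = np.meshgrid(r, theta, indexing="ij")
total = np.sum(g_polar(R, TH)) * dr * dth
print(total)  # close to 1
```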

But transformations from $R^n$ to $R^m$ are also possible.

This is how far I got:
(Much of this can be found in http://arxiv.org/abs/hep-ex/0002056 appendix A)

The characteristic function (CF) is defined as:
[tex]\phi_X(t) = E[e^{itX}] = \int e^{itx} f(x) dx[/tex]
The inverse (a Fourier transform):
[tex]f(x) = \frac{1}{2\pi} \int e^{-itx} \phi_X(t) dt[/tex]
If we have a function $Y=g(X)$, then according to Kendall and Stuart (1943) the characteristic function of $Y$ is:
[tex]\phi_Y(t) = \int e^{itg(x)} f(x) dx[/tex]
Taking the inverse, and some rewriting
[tex]f(y) = \frac{1}{2\pi} \int e^{-ity} \phi_Y(t) dt = \int f(x)\delta(y-g(x))dx[/tex]
In vector form
[tex]f(y) = \int f(\vec{x})\delta(y-g(\vec{x}))d\vec{x}[/tex]
Take for instance two independent variables drawn from the same distribution:
[tex]Y = g(\vec{X}) = g(X_1, X_2) = X_1 + X_2[/tex]
Then, since independence gives $f(x_1,x_2) = f(x_1)f(x_2)$, the resulting pdf is:
[tex]f(y) = \int f(x_1,x_2)\delta(y-x_1-x_2)dx_1 dx_2 = \int f(x_1) f(y - x_1 - u)\delta(u)dx_1 du = \int f(x_1) f(y - x_1) dx_1[/tex]
Using:
[tex]u = y - x_1 - x_2[/tex]
[tex]x_2 = y - x_1 - u[/tex]
[tex]dx_2 = du[/tex]
This is simply a convolution, a well-known result;
see for instance: http://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables
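The convolution result can be checked numerically. The sketch below (my own illustration, assuming standard normal marginals) discretises $f(y) = \int f(x_1) f(y - x_1)\,dx_1$ with `np.convolve` and compares it to the known $N(0,2)$ density of a sum of two independent standard normals.

```python
import numpy as np

# Discretised convolution f(y) = ∫ f(x) f(y - x) dx on a uniform grid.
dx = 0.01
x = np.arange(-10, 10, dx)
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) density

f_y = np.convolve(f, f, mode="same") * dx    # numerical convolution

# Sum of two independent N(0, 1) variables is N(0, 2).
f_y_exact = np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)
print(np.max(np.abs(f_y - f_y_exact)))  # small
```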
Or for $R^1$ to $R^1$ (assuming $g$ is monotonic):
[tex]f(y) = \int f(x)\delta(y-g(x))dx = \int f(g^{-1}(y-u))\frac{1}{|g'(x)|}\delta(u)du = f(g^{-1}(y))\,\frac{1}{|g'(x)|}\bigg|_{x=g^{-1}(y)} = f(x(y)) \left|\frac{dx}{dy}\right|[/tex]
Using:
[tex]u = y - g(x)[/tex]
[tex]du = -g'(x)\,dx[/tex]
[tex]dx = -\frac{1}{g'(x)}\,du[/tex]
[tex]x = g^{-1}(y-u)[/tex]
(the minus sign is absorbed by reversing the integration limits).
This is also a standard result, just a change of variables.
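A quick numerical illustration of the $R^1$ to $R^1$ formula (my own sketch, with the assumed example $Y = e^X$, $X \sim N(0,1)$, so $x(y) = \ln y$ and $|dx/dy| = 1/y$): the transformed density, here the log-normal, should integrate to 1.

```python
import numpy as np

# f_Y(y) = f_X(x(y)) |dx/dy| for Y = exp(X), X ~ N(0, 1):
# x(y) = ln(y), |dx/dy| = 1/y, giving the log-normal density.
def f_x(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

y = np.linspace(1e-6, 50, 500_000)
dy = y[1] - y[0]
f_y = f_x(np.log(y)) / y       # f_X(x(y)) |dx/dy|
total = f_y.sum() * dy         # should be close to 1
print(total)
```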

But how to do this for the case of $R^n$ to $R^m$? I don't know how to evaluate
the Dirac delta function in general.

I did find on Wikipedia (http://en.wikipedia.org/wiki/Dirac_delta_function):
[tex]\int f(\mathbf{r}) \, \delta(g(\mathbf{r})) \, d^nr = \int_{g(\mathbf{r})=0}\frac{f(\mathbf{r})}{|\mathbf{\nabla}g|}\,d^{n-1}r[/tex]
where the integral on the right runs over the $(n-1)$-dimensional surface on which $g(\mathbf{r})=0$.
But I couldn't find a reference where this is explained.
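The surface-integral identity can at least be verified numerically by replacing the delta with a narrow Gaussian. In this sketch (my own, with the assumed choices $f \equiv 1$ and $g(x,y) = \sqrt{x^2+y^2} - 1$, so $|\nabla g| = 1$), the right-hand side is the circumference of the unit circle, $2\pi$.

```python
import numpy as np

# ∫ f δ(g) d^2r with f = 1 and g(x, y) = sqrt(x^2 + y^2) - 1.
# |∇g| = 1, so the identity says the answer is the length of the
# unit circle, 2π. Approximate δ by a narrow Gaussian of width eps.
eps = 0.01
x = np.linspace(-2, 2, 1200)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
g = np.sqrt(X**2 + Y**2) - 1
delta_eps = np.exp(-g**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
integral = delta_eps.sum() * dx * dx
print(integral, 2 * np.pi)  # both close to 6.28
```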

But then again, if $\vec{y}$ is a vector, how to solve this?
[tex]f(\vec{y}) = \int f(\vec{x})\,\delta^{(m)}(\vec{y}-g(\vec{x}))\,d\vec{x}[/tex]
where $\delta^{(m)}$ denotes the $m$-dimensional delta, i.e. the product $\prod_{i=1}^m \delta(y_i - g_i(\vec{x}))$.

Anyone got some hints? Or am I going the wrong direction with this?
 
When you are transforming from $R^n$ to $R^m$ with $m < n$, you can define $n - m$ new random variables and set them equal to the same number of the existing variables. Example: given the transformation $Y = X_1 + X_2$, you can define $Y_1 = Y$ and $Y_2 = X_2$, at which point you will have a square Jacobian matrix; marginalising over the auxiliary variables then gives the pdf you want. I hope this is helpful.
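That auxiliary-variable trick can be sketched numerically (my own illustration, assuming standard normal inputs): with $Y_1 = X_1 + X_2$ and $Y_2 = X_2$, the inverse map is $x_1 = y_1 - y_2$, $x_2 = y_2$ with $|\det J| = 1$, and marginalising over $y_2$ reproduces the convolution.

```python
import numpy as np

# Auxiliary-variable trick for Y = X1 + X2 with X1, X2 ~ N(0, 1):
# Y1 = X1 + X2, Y2 = X2; inverse x1 = y1 - y2, x2 = y2, |det J| = 1.
# Joint density: f(y1, y2) = f(y1 - y2) f(y2) * 1; marginalise over y2.
def f(x):  # standard normal marginal (illustrative choice)
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

y2 = np.linspace(-10, 10, 4000)
dy2 = y2[1] - y2[0]

def f_Y(y):
    return np.sum(f(y - y2) * f(y2)) * dy2  # marginalise over y2

def exact(y):  # known result: N(0, 1) + N(0, 1) = N(0, 2)
    return np.exp(-y**2 / 4) / np.sqrt(4 * np.pi)

print(f_Y(0.0), exact(0.0))  # should agree
```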
 
