Treating Propagation of Errors in $R^n$ to $R^m$ Transformations

MaartenB
I want to know how to treat propagation of errors in general.
When the transformation of variables is from $R^n$ to $R^n$ it
simply involves a Jacobian:
g(\vec{y}) = f(\vec{x}(\vec{y}))|J|
With
J_{ij} = \frac{\partial x_i}{\partial y_j}
(see http://pdg.lbl.gov/2005/reviews/probrpp.pdf for instance)
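
As a concrete sanity check of this formula (my own sketch, not from the PDG review; it assumes only numpy), one can transform a two-dimensional standard normal to polar coordinates. The inverse map is x_1 = r\cos\theta, x_2 = r\sin\theta with |J| = r, so the marginal pdf of r should be the Rayleigh density r e^{-r^2/2}:

import numpy as np

# Monte Carlo check of g(y) = f(x(y)) |J| for the polar-coordinate transform
# of a 2D standard normal; the r-marginal should be Rayleigh: r * exp(-r^2 / 2).
rng = np.random.default_rng(0)
x = rng.standard_normal((1_000_000, 2))
r = np.hypot(x[:, 0], x[:, 1])

hist, edges = np.histogram(r, bins=100, range=(0.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = centers * np.exp(-centers**2 / 2.0)
print("max |MC - analytic| =", np.abs(hist - analytic).max())   # should be small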

But transformations from $R^n$ to $R^m$ are also possible.

This is how far I got:
(Much of this can be found in http://arxiv.org/abs/hep-ex/0002056 appendix A)

The characteristic function (CF) is defined as:
\phi_X(t) = E[e^{itX}] = \int e^{itx} f(x) dx
The inverse (an inverse Fourier transform):
f(x) = \frac{1}{2\pi} \int e^{-itx} \phi_X(t) dt
If we have a function Y = g(X), the characteristic function of Y according to Kendall and Stuart (1943) is:
\phi_Y(t) = \int e^{itg(x)} f(x) dx
Taking the inverse, and some rewriting
f(y) = \frac{1}{2\pi} \int e^{-ity} \phi_Y(t) dt = \int f(x)\delta(y-g(x))dx
In vector form
f(y) = \int f(\vec{x})\delta(y-g(\vec{x}))d\vec{x}
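
As a quick numerical check of the characteristic-function route above (my own sketch, not taken from the references; it assumes only numpy, and the affine map g(x) = 2x + 1 is just a convenient test case), one can build \phi_Y(t) on a grid, invert it, and compare with the exact pdf of Y:

import numpy as np

# Characteristic-function route for Y = g(X) = 2X + 1 with X standard normal:
# phi_Y(t) = int exp(i t g(x)) f(x) dx, then f(y) = (1/2pi) int exp(-i t y) phi_Y(t) dt.
# The exact answer is a normal density with mean 1 and variance 4.
x, dx = np.linspace(-8.0, 8.0, 2001, retstep=True)
t, dt = np.linspace(-8.0, 8.0, 801, retstep=True)
fx = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)            # pdf of X

# phi_Y(t) by a simple Riemann sum over x
phi_y = (np.exp(1j * np.outer(t, 2.0 * x + 1.0)) * fx).sum(axis=1) * dx

# invert at a few points y and compare with the exact density
y = np.array([-2.0, 0.0, 1.0, 3.0])
f_y = (np.exp(-1j * np.outer(y, t)) * phi_y).sum(axis=1).real * dt / (2.0 * np.pi)
exact = np.exp(-(y - 1.0)**2 / 8.0) / np.sqrt(8.0 * np.pi)
print(f_y)
print(exact)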
Take for instance two independent variables taken from the same distribution:
Y = g(\vec{X}) = g(X_1, X_2) = X_1 + X_2
Then, using independence f(x_1, x_2) = f(x_1) f(x_2), the resulting pdf is:
f(y) = \int f(x_1,x_2)\delta(y-x_1-x_2)dx_1 dx_2 = \int f(x_1) f(y - x_1 - u)\delta(u)dx_1 du = \int f(x_1) f(y - x_1) dx_1
Using:
u = y - x_1 - x_2
x_2 = y - x_1 - u
dx_2 = -du (the minus sign is absorbed by reversing the integration limits)
which is simply a convolution, a known result;
see for instance: http://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables
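
Here is a quick numerical check of that convolution result (again my own sketch, assuming numpy): summing two independent standard normals should reproduce the normal density with variance 2, as on the Wikipedia page above.

import numpy as np

# Sum of two independent standard normals; the pdf should be N(0, 2),
# i.e. exp(-y^2 / 4) / sqrt(4 pi), which is the convolution of the two normals.
rng = np.random.default_rng(1)
y = rng.standard_normal(1_000_000) + rng.standard_normal(1_000_000)

hist, edges = np.histogram(y, bins=100, range=(-5.0, 5.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = np.exp(-centers**2 / 4.0) / np.sqrt(4.0 * np.pi)
print("max |MC - analytic| =", np.abs(hist - analytic).max())   # should be small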
Or for $R^1$ to $R^1$:
f(y) = \int f(x)\delta(y-g(x))dx = \int f(g^{-1}(y-u))\frac{1}{|g'(x)|}\delta(u)du = f(g^{-1}(y))\frac{1}{|g'(x(y))|} = f(x(y)) \left|\frac{dx}{dy}\right|
Using:
u = y - g(x)
du = -\frac{dg(x)}{dx}dx
dx = \frac{1}{|g'(x)|}du (up to a sign absorbed by the integration limits)
x = g^{-1}(y-u)
Also a general result, just a change of variable.
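
Again a quick numerical check of this change-of-variables formula (my sketch, assuming numpy; the map Y = tanh(X) is just a convenient monotone test case):

import numpy as np

# Y = tanh(X) with X standard normal; the change-of-variables formula gives
# f_Y(y) = f_X(arctanh(y)) / (1 - y^2) on (-1, 1), since dx/dy = 1 / (1 - y^2).
rng = np.random.default_rng(2)
y = np.tanh(rng.standard_normal(1_000_000))

hist, edges = np.histogram(y, bins=100, range=(-1.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
fx = np.exp(-np.arctanh(centers)**2 / 2.0) / np.sqrt(2.0 * np.pi)
analytic = fx / (1.0 - centers**2)
print("max |MC - analytic| =", np.abs(hist - analytic).max())   # should be small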

But how to do this for the $R^n$ to $R^m$ case? I don't know how to evaluate
the Dirac delta function in general.

I did find on wikipedia (http://en.wikipedia.org/wiki/Dirac_delta_function)
\int_V f(\mathbf{r}) \, \delta(g(\mathbf{r})) \, d^nr = \int_{\partial V}\frac{f(\mathbf{r})}{|\mathbf{\nabla}g|}\,d^{n-1}r
where \partial V is the surface defined by g(\mathbf{r}) = 0. But I couldn't find a reference where this is explained.
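
That said, here is a numerical illustration of that identity (my own construction, not from Wikipedia; it assumes numpy and approximates the delta function by a narrow Gaussian). Take f(x, y) = x^2 and g(x, y) = x^2 + y^2 - R^2; the surface g = 0 is the circle of radius R, on which |\nabla g| = 2R, so the right-hand side is \oint x^2/(2R) d\ell = \pi R^2/2.

import numpy as np

# Left-hand side: int f(r) delta(g(r)) d^2 r with a narrow-Gaussian delta.
# For f = x^2 and g = x^2 + y^2 - R^2 the exact answer is pi * R^2 / 2.
R, eps = 1.0, 0.05                                   # circle radius, delta width
xs = np.linspace(-2.0, 2.0, 2001)
X, Y = np.meshgrid(xs, xs)
g = X**2 + Y**2 - R**2
delta = np.exp(-g**2 / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))   # nascent delta
dA = (xs[1] - xs[0])**2
lhs = np.sum(X**2 * delta) * dA
print(lhs, np.pi * R**2 / 2.0)                       # both approximately 1.571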

But then again, if \vec{y} is itself a vector, how to solve this:
f(\vec{y}) = \int f(\vec{x})\delta^{(m)}(\vec{y}-\vec{g}(\vec{x}))d^n x
where \delta^{(m)} denotes the product of m one-dimensional delta functions?

Anyone got some hints? Or am I going in the wrong direction with this?
 
When you are transforming from $R^n$ to $R^m$ with m < n, you can define n - m new random variables and set them equal to the same number of the existing variables. Example: given the transformation Y = X1 + X2, you can define Y1 = Y and Y2 = X2, at which point the Jacobian is square. I hope this is helpful.
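
For the sum example above, that might look as follows (my sketch of the suggestion, using the same notation as the first post): define
Y_1 = X_1 + X_2, \qquad Y_2 = X_2
with inverse
X_1 = Y_1 - Y_2, \qquad X_2 = Y_2
so that
|J| = \left| \det \begin{pmatrix} \partial x_1/\partial y_1 & \partial x_1/\partial y_2 \\ \partial x_2/\partial y_1 & \partial x_2/\partial y_2 \end{pmatrix} \right| = \left| \det \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \right| = 1
Applying the $R^n$ to $R^n$ formula and then integrating out the auxiliary variable Y_2 gives
f(y_1, y_2) = f(y_1 - y_2, y_2)|J| = f(y_1 - y_2, y_2)
f(y_1) = \int f(y_1 - y_2, y_2) dy_2
which for independent X_1, X_2 is exactly the convolution found in the first post.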
 
Namaste & G'day Postulate: A strongly-knit team wins on average over a less knit one Fundamentals: - Two teams face off with 4 players each - A polo team consists of players that each have assigned to them a measure of their ability (called a "Handicap" - 10 is highest, -2 lowest) I attempted to measure close-knitness of a team in terms of standard deviation (SD) of handicaps of the players. Failure: It turns out that, more often than, a team with a higher SD wins. In my language, that...
Hi all, I've been a roulette player for more than 10 years (although I took time off here and there) and it's only now that I'm trying to understand the physics of the game. Basically my strategy in roulette is to divide the wheel roughly into two halves (let's call them A and B). My theory is that in roulette there will invariably be variance. In other words, if A comes up 5 times in a row, B will be due to come up soon. However I have been proven wrong many times, and I have seen some...
Back
Top