# Covariance matrix in barycentric coordinates

1. Feb 3, 2012

### MarkoDe

Hi folks,

I know the covariance matrix and the location of a point, both of which are expressed in Cartesian coordinates. I am going to represent the point in barycentric coordinates, and I would like to represent the covariance matrix for the point in barycentric coordinates as well. Does anyone have any insight into how to transform the covariance matrix from Cartesian to barycentric coordinates?

Thanks!
MarkoDe

2. Feb 3, 2012

### Stephen Tashi

Covariance is a concept defined for random variables or samples of random variables, so I can't resist asking how you know the covariance matrix of a single point. How does this point relate to a random variable?

3. Feb 3, 2012

### chiro

Hey MarkoDe and welcome to the forums.

I can't comment specifically on what you are doing or what the random variable represents, but what I can say relates to random variables in general (regardless of the problem or thing they represent).

As far as I am aware from prior work I have done, barycentric coordinates refer to a specific parametrization of a triangle with (u,v) as its coordinates.

The Cartesian model, for one thing, is not bounded, and I think it's important you tell us exactly what you are trying to parametrize and transform. If you are referring to a barycentric representation that is not based on a triangle, then knowing that would be helpful.

Anyway the standard transformation to find moments (and hence variance/covariance) of a random variable is to use the definition of expectation of a transformed variable which is basically E[f(x)] and then to use that in the context of variance/covariance.

It will help us if you define what your random variables are and what they represent, so that we can give you more specific advice.

4. Feb 3, 2012

### MarkoDe

Thanks for the replies, and for encouraging me to better define my query.

What I have:
I have an estimate of the location of a point in three-dimensional space, which is expressed in Cartesian coordinates. The estimate was formed using sensors with some associated noise (in this case, multiple photographs from a single camera which has undergone translation and rotation).

I have a model for the noise in the sensor measurements. This model has allowed me to form an estimate of the covariance of the estimate of the point's location. The position estimate is in X,Y,Z, and the covariance matrix is the 3x3 matrix that expresses the uncertainty in that estimate.

So the barycentric coordinates would require the control points to form a tetrahedron (a triangular pyramid), since three-dimensional space is being parameterized. I'm just getting into the concept of barycentric coordinates, so please forgive me if I mess up the jargon. The control points are to some extent arbitrary; they can be chosen to bound the estimate of the point's location, although I believe that, if it is possible to express a point that lies outside of the tetrahedron, this will be useful.

My main conceptual issue is: how do I express my estimate for the covariance of the estimate of the point's location in barycentric coordinates?

MarkoDe

5. Feb 3, 2012

### chiro

MarkoDe, did you calculate the variance/mean/covariance using an existing method or existing data, or have you just obtained those values by 'plugging them in' or 'getting them from another source'?

As I pointed out before, once you know how to represent the transformed random variable (for example, if X is a random variable, then Y = f(X) is a new random variable), then you can just use E[f(X)] and related quantities to calculate things like variance and covariance (instead of E[X]E[Y] terms you will use something like E[f(X)]E[g(Y)] terms).

It would help if you gave us this information.
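The E[f(X)] idea described above can be illustrated with a quick Monte Carlo sketch; the distribution and the transform f below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ N(2, 0.5^2); the transformed variable is Y = f(X) with f(x) = 3x + 1.
x = rng.normal(2.0, 0.5, size=200_000)
y = 3.0 * x + 1.0

# Sample estimates of E[f(X)] and Var(f(X)); for a linear f these should be
# close to 3*2 + 1 = 7 and 3^2 * 0.5^2 = 2.25 respectively.
print(y.mean())
print(y.var())
```

For a linear f the moments transform exactly; for a nonlinear f the same sampling approach still estimates E[f(X)], which is the point of the definition.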

6. Feb 3, 2012

### MarkoDe

The mean is the output of triangulation algorithms, as described earlier. An example is available in the excellent computer vision book by Hartley and Zisserman, Multiple View Geometry:
Multiple View Geometry on Google books

The covariance is calculated by identifying the input error sources for the triangulation algorithms, such as pixel jitter and errors in the rotation/translation, modelling them as Gaussian random variables, assigning reasonable values to them through data analysis and experimentation, calculating the Jacobian of the triangulation algorithms, and propagating the input error sources to form a covariance estimate of the output:

$P_{o} = J P_{in} J^{T}$

where $J$ is the Jacobian, $P_{in}$ is the covariance matrix formed from the input variables, and $P_{o}$ is the output.
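The propagation step $P_{o} = J P_{in} J^{T}$ can be sketched in a few lines of NumPy; the Jacobian and noise magnitudes below are made-up placeholders, not values from any real triangulation:

```python
import numpy as np

# Hypothetical numbers: 4 noisy inputs (e.g. pixel jitter, pose errors)
# propagated to a 3D point estimate. J is the 3x4 Jacobian of the
# triangulation at the operating point; P_in is the input covariance.
J = np.array([[0.8, 0.1, 0.0, 0.2],
              [0.0, 0.9, 0.1, 0.1],
              [0.1, 0.0, 1.1, 0.3]])
P_in = np.diag([0.5**2, 0.5**2, 0.01**2, 0.02**2])

# First-order (linearized) propagation: P_o = J * P_in * J^T.
P_o = J @ P_in @ J.T
print(P_o.shape)  # (3, 3)
```

Whatever the numbers, the output stays a symmetric positive semidefinite 3x3 matrix, as a covariance must be.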

Ok, this makes sense to me. How to apply it, not so much.

Thanks again!

7. Feb 4, 2012

### chiro

Basically it works the same way that it would work on, say, an ordinary function.

For example, f(x) = x^2 transforms a line into a parabola. Something closer to home for your problem would involve translation between coordinate systems. For example, a transformation from (x,y) to (r,theta) is basically r = SQRT(x^2 + y^2) and theta = arctan(y/x), with appropriate adjustments for each quadrant.
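As an aside, the (x,y) to (r,theta) example can be written with `atan2`, which handles the quadrant adjustments automatically; a minimal sketch:

```python
import math

# (x, y) -> (r, theta): hypot computes sqrt(x^2 + y^2), and atan2 is
# arctan(y/x) with the correct quadrant built in.
x, y = -1.0, 1.0
r = math.hypot(x, y)
theta = math.atan2(y, x)
print(r)      # sqrt(2)
print(theta)  # 3*pi/4, in the second quadrant as expected
```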

This is basically what you have to do. You need to find a transformation that makes sense in the context of transforming one coordinate system to another, but you are doing it for your random variables.

I will need a little time to see how your R.V's are calculated, but basically all I would do is to apply the above idea in the context of what the R.V's represent. Once you relate the transforms from your cartesian system to your barycentric system and your R.V's in the context of your original system, then the transformation becomes a lot more intuitive.

8. Feb 4, 2012

### Stephen Tashi

MarkoDe,

State the formula that you are using to convert (X,Y,Z) to barycentric coordinates.

9. Feb 4, 2012

### MarkoDe

Hi Stephen and chiro.

A point $p$ in Cartesian coordinates can be expressed in barycentric coordinates as:

$p=\Sigma^{4}_{i=1}\alpha_{i}c_{i}$

where $c_{i}$ is one of four control points, and $\alpha$ is the weight for each control point.

In this work, the following constraint is applied:

$\Sigma^{4}_{i=1}\alpha_{i}=1$

To transform the Cartesian coordinates into barycentric coordinates:
1. First define the Cartesian coordinates as homogeneous:

$p_{c}=\left[X\ Y\ Z\right]^{T}\rightarrow\left[X\ Y\ Z\ 1\right]^{T}$

2. Choose the locations of the four control points $c_{i}$. This can theoretically be done arbitrarily. For this work, the control points are chosen so that one of them is at the center of mass of the points to be transformed, and the other points are 1 unit along each of the principal axes of the Cartesian coordinate system. The upshot is that the locations $c_{i}$ are known. They, too, are defined homogeneously.

For example, if the center of mass of the points to be transformed is at the origin, the values for $c_{i}$ would be:

$c_{1} = \left[1\ 0\ 0\ 1\right]^{T}$

$c_{2} = \left[0\ 1\ 0\ 1\right]^{T}$

$c_{3} = \left[0\ 0\ 1\ 1\right]^{T}$

$c_{4} = \left[0\ 0\ 0\ 1\right]^{T}$

The matrix $C$ is defined where each column is one of the control points:

$C = \left[c_{1}\ c_{2}\ c_{3}\ c_{4}\right]$

3. For the point $p$, compute the values for $\alpha_{i}$:

$A=\left[\alpha_{1}\ \alpha_{2}\ \alpha_{3}\ \alpha_{4}\right]^{T}$

$A=C^{-1}p$

Which allows the Cartesian point $p$ to be expressed in barycentric coordinates as:

$p=CA$
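The three steps above can be sketched numerically; the sample point below is arbitrary, and the control points are the ones listed in step 2:

```python
import numpy as np

# Control points from step 2, written homogeneously as the columns of C.
C = np.array([[1., 0., 0., 0.],   # X components of c_1..c_4
              [0., 1., 0., 0.],   # Y components
              [0., 0., 1., 0.],   # Z components
              [1., 1., 1., 1.]])  # homogeneous 1s
C_inv = np.linalg.inv(C)

p = np.array([0.2, 0.3, 0.1, 1.0])  # homogeneous point [X, Y, Z, 1]
A = C_inv @ p                       # step 3: A = C^{-1} p

print(A)                      # approximately [0.2, 0.3, 0.1, 0.4]
print(A.sum())                # the weights sum to 1
print(np.allclose(C @ A, p))  # p = C A recovers the point: True
```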

Thanks again for your help and feedback!
MarkoDe

10. Feb 4, 2012

### Stephen Tashi

If I read this correctly, it says that each $\alpha_i$ is a linear combination of the Cartesian coordinates plus a constant term. So, for example, you could have something like $\alpha_1 = 3.3 x - 2.6 y + 11.2 z + 7.6$.

Is that correct? And you want to know quantities like $VAR(\alpha_1)$ and $COV(\alpha_1, \alpha_2)$ ?

11. Feb 5, 2012

### MarkoDe

Stephen, your interpretation looks correct to me. And I am indeed interested in quantities like $VAR(\alpha_1)$ and $COV(\alpha_1, \alpha_2)$.

Now that you guys have prodded me into formulating the question correctly, and you've interpreted it like that, the answer is beginning to be apparent.

Denoting $C^{-1}$ as $\widehat{C}$, and the subscripts $i,j$ as the row and column indices,

$\alpha_1 = \widehat{C}_{1,1}p_{1} + \widehat{C}_{1,2}p_{2} + \widehat{C}_{1,3}p_{3} + \widehat{C}_{1,4}$

$\alpha_2 = \widehat{C}_{2,1}p_{1} + \widehat{C}_{2,2}p_{2} + \widehat{C}_{2,3}p_{3} + \widehat{C}_{2,4}$

$\alpha_3 = \widehat{C}_{3,1}p_{1} + \widehat{C}_{3,2}p_{2} + \widehat{C}_{3,3}p_{3} + \widehat{C}_{3,4}$

$\alpha_4 =\widehat{C}_{4,1}p_{1} + \widehat{C}_{4,2}p_{2} + \widehat{C}_{4,3}p_{3} + \widehat{C}_{4,4}$

So, solving $COV(\alpha_1, \alpha_2)$ for example:

$COV(\alpha_1, \alpha_2) = E\left[\left(\alpha_1 - E\left[\alpha_1\right]\right)\left(\alpha_2 - E\left[\alpha_2\right]\right)\right]$

$= E\left[ \left(\widehat{C}_{1,1}p_{1} + \widehat{C}_{1,2}p_{2} + \widehat{C}_{1,3}p_{3} + \widehat{C}_{1,4} - E\left[ \widehat{C}_{1,1}p_{1} + \widehat{C}_{1,2}p_{2} + \widehat{C}_{1,3}p_{3} + \widehat{C}_{1,4}\right]\right)\left(\widehat{C}_{2,1}p_{1} + \widehat{C}_{2,2}p_{2} + \widehat{C}_{2,3}p_{3} + \widehat{C}_{2,4} - E\left[\widehat{C}_{2,1}p_{1} + \widehat{C}_{2,2}p_{2} + \widehat{C}_{2,3}p_{3} + \widehat{C}_{2,4}\right]\right)\right]$

$= E\left[ \left(\widehat{C}_{1,1}p_{1} + \widehat{C}_{1,2}p_{2} + \widehat{C}_{1,3}p_{3} + \widehat{C}_{1,4} - E\left[ \widehat{C}_{1,1}p_{1}\right] - E\left[ \widehat{C}_{1,2}p_{2}\right] - E\left[ \widehat{C}_{1,3}p_{3}\right] - E\left[ \widehat{C}_{1,4}\right]\right)\left(\widehat{C}_{2,1}p_{1} + \widehat{C}_{2,2}p_{2} + \widehat{C}_{2,3}p_{3} + \widehat{C}_{2,4} - E\left[\widehat{C}_{2,1}p_{1}\right] - E\left[ \widehat{C}_{2,2}p_{2}\right] - E\left[ \widehat{C}_{2,3}p_{3}\right] - E\left[ \widehat{C}_{2,4}\right]\right)\right]$

$= E\left[ \left(\widehat{C}_{1,1}\left(p_{1} - E\left[p_{1}\right]\right) + \widehat{C}_{1,2}\left(p_{2} - E\left[p_{2}\right]\right) + \widehat{C}_{1,3}\left(p_{3} - E\left[ p_{3}\right]\right)\right)\left(\widehat{C}_{2,1}\left(p_{1} - E\left[p_{1}\right]\right) + \widehat{C}_{2,2}\left(p_{2} - E\left[p_{2}\right]\right) + \widehat{C}_{2,3}\left(p_{3} - E\left[ p_{3}\right]\right)\right)\right]$

$= \widehat{C}_{1,1}\widehat{C}_{2,1}E\left[\left(p_{1} - E\left[p_{1}\right]\right)^{2}\right] + \widehat{C}_{1,1}\widehat{C}_{2,2}E\left[\left(p_{1} - E\left[p_{1}\right]\right)\left(p_{2} - E\left[p_{2}\right]\right)\right] + \widehat{C}_{1,1}\widehat{C}_{2,3}E\left[\left(p_{1} - E\left[p_{1}\right]\right)\left(p_{3} - E\left[p_{3}\right]\right)\right]$

$+ \widehat{C}_{1,2}\widehat{C}_{2,1}E\left[\left(p_{2} - E\left[p_{2}\right]\right)\left(p_{1} - E\left[p_{1}\right]\right)\right] + \widehat{C}_{1,2}\widehat{C}_{2,2}E\left[\left(p_{2} - E\left[p_{2}\right]\right)^{2}\right]+ \widehat{C}_{1,2}\widehat{C}_{2,3}E\left[\left(p_{2} - E\left[p_{2}\right]\right)\left(p_{3} - E\left[p_{3}\right]\right)\right]$

$+ \widehat{C}_{1,3}\widehat{C}_{2,1}E\left[\left(p_{3} - E\left[p_{3}\right]\right)\left(p_{1} - E\left[p_{1}\right]\right)\right] + \widehat{C}_{1,3}\widehat{C}_{2,2}E\left[\left(p_{3} - E\left[p_{3}\right]\right)\left(p_{2} - E\left[p_{2}\right]\right)\right] + \widehat{C}_{1,3}\widehat{C}_{2,3}E\left[\left(p_{3} - E\left[p_{3}\right]\right)^{2}\right]$

where $E\left[\left(p_{1} - E\left[p_{1}\right]\right)^{2}\right]$, $E\left[\left(p_{2} - E\left[p_{2}\right]\right)^{2}\right]$, $E\left[\left(p_{3} - E\left[p_{3}\right]\right)^{2}\right]$, $E\left[\left(p_{1} - E\left[p_{1}\right]\right)\left(p_{2} - E\left[p_{2}\right]\right)\right]$, $E\left[\left(p_{1} - E\left[p_{1}\right]\right)\left(p_{3} - E\left[p_{3}\right]\right)\right]$, and $E\left[\left(p_{2} - E\left[p_{2}\right]\right)\left(p_{3} - E\left[p_{3}\right]\right)\right]$ are all known from the entries of the input covariance matrix of $p$,

and the entries of $C_{i,j}$ are also known.
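As a numerical sanity check of the expansion above (using the control points from earlier in the thread and a made-up input covariance $P_p$), the nine-term sum can be compared against a Monte Carlo estimate:

```python
import numpy as np

# Control points as earlier in the thread; Chat is \hat{C} = C^{-1}.
C = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [1., 1., 1., 1.]])
Chat = np.linalg.inv(C)

# Made-up 3x3 input covariance for p (any symmetric PSD matrix would do).
P_p = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

# COV(alpha_1, alpha_2) as the nine-term sum: the constant \hat{C}_{i,4}
# terms have already cancelled, so only the first three columns enter.
cov_a1_a2 = sum(Chat[0, k] * Chat[1, l] * P_p[k, l]
                for k in range(3) for l in range(3))

# Cross-check with a Monte Carlo sample of p.
rng = np.random.default_rng(1)
p = rng.multivariate_normal([0.2, 0.3, 0.1], P_p, size=500_000)
alpha = np.column_stack([p, np.ones(len(p))]) @ Chat.T
print(cov_a1_a2)
print(np.cov(alpha[:, 0], alpha[:, 1])[0, 1])  # should agree closely
```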

Is there a more elegant way to express this? Writing that out for all the combinations of $\alpha$'s is pretty cumbersome.

Thanks again for your help, and for prodding me to get this into a form where the answer was apparent!

12. Feb 5, 2012

### chiro

I can see a kind of 'covariance' relation with your last expression. Also when you refer to terms about (p1 - E[p1])^2, this is basically a variance term for p1.

The best way I can think of is to change the E[(p1 - E[p1])^2] terms into normal VAR(p1) terms and do the same for the COV terms. Then you might have to do some kind of matrix factorization and create two matrices, since I see the (2,1), (2,2), and (2,3) terms of your C-inverse matrix repeated, which means they can be factored out into a different matrix.

Another approach would be to write your system in the normal covariance matrix format (which basically does COV(a,b) for every a and b that are random variables in your system) and based on that you can make an interpretation of what all of it actually means.

13. Feb 6, 2012

### Stephen Tashi

Probably, but first we should look for a more elegant way to derive the answer. I think the COV(X,Y) operation is "bi-linear", so you can use that fact instead of using expectations and the definition of covariance.

14. Feb 7, 2012

### MarkoDe

I'm interested in following your intuition, Stephen, but I don't quite understand. Wikipedia's definition for 'bilinear' didn't shed any light. Would you care to comment more?

15. Feb 8, 2012

### MarkoDe

I'll take a stab at simplifying.

First, some notation:
$\widehat{c}_{i}$ is the row vector formed from the first three entries of the $i$th row of the matrix $\widehat{C} = C^{-1}$ (the fourth entry multiplies the constant homogeneous component, which carries no uncertainty and so drops out)
$P_{p}$ is the 3x3 covariance matrix for the mean estimate $p$.
$P_{\alpha}$ is the 4x4 covariance matrix for the $\alpha$ values.
$i,j$ are the row, column indices for a matrix.
$\bullet$ denotes the dot product.

Then the covariance matrix $P_{\alpha}$ is computed as:

$P_{\alpha,i,j}=\widehat{c}^{T}_{i}\bullet\left(P_{p} \widehat{c}^{T}_j\right)$

Looks nifty enough that I am satisfied. However, if you guys have additional insight, or a more elegant way of expressing/solving, then I am definitely interested.
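The compact formula can be checked numerically; a sketch, assuming the control points from earlier in the thread, a made-up $P_{p}$, and using only the first three entries of each row of $C^{-1}$ (the constant fourth component of the homogeneous point carries no uncertainty):

```python
import numpy as np

# Control points from earlier in the thread; B holds the first three
# entries of each row of C^{-1} (the \hat{c}_i of the notation above).
C = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [1., 1., 1., 1.]])
B = np.linalg.inv(C)[:, :3]          # 4x3

# Made-up 3x3 covariance of the Cartesian estimate p.
P_p = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

# All sixteen entries at once: P_alpha[i, j] = c_i . (P_p c_j^T).
P_alpha = B @ P_p @ B.T              # 4x4 covariance of the alphas
print(P_alpha.shape)                 # (4, 4)

# Sanity check: since the alphas always sum to exactly 1, each row
# (and column) of P_alpha must sum to zero.
print(np.allclose(P_alpha.sum(axis=0), 0.0))  # True
```

Written this way, $P_{\alpha}$ comes out in one matrix product with the same first-order propagation pattern as the $J P_{in} J^{T}$ formula earlier in the thread.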