# How to find vector coordinates in non-orthogonal systems?

1. Sep 16, 2008

### mnb96

Hello,
I am reading a text in which it is assumed the following to be known:

- an n-dimensional Hilbert space
- a set of non-orthogonal basis vectors {$$b_{i}$$}
- a vector F in that space ($$F = f_{1}b_{1}+\ldots+f_{n}b_{n}$$)

I'd like to find the components of F in the given basis {$$b_{i}$$}, and according to the text this is easily done by:

$$f_{j}$$ = $$<F,b_{i}><b_{i},b_{j}>^{-1}$$
where Einstein summation convention has been used on the index 'i'.

I'd really like to know how to arrive at that formula. As far as I know, since the basis is non-orthogonal, the dot product $$<F,b_{i}>$$ yields the corresponding coordinate in the dual basis {$$B_{i}$$}, so I was only able to arrive at the following formula:

$$f_{j}$$ = $$<F,b_{i}><B_{i},B_{j}>$$

but can I get rid of the dual-basis vectors in the formula?
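For a concrete sanity check, here is a minimal numerical sketch (numpy, with a made-up non-orthogonal basis of R^3, not from the text) that reads $$<b_{i},b_{j}>^{-1}$$ as the inverse of the Gram matrix $$G_{ij} = <b_{i},b_{j}>$$:

```python
import numpy as np

# Columns of B are hypothetical non-orthogonal basis vectors b_1, b_2, b_3.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

f_true = np.array([2.0, -1.0, 3.0])   # coordinates we will try to recover
F = B @ f_true                         # F = f_1 b_1 + ... + f_n b_n

G = B.T @ B          # Gram matrix, G_ij = <b_i, b_j>
proj = B.T @ F       # the dot products <F, b_i>

# f_j = <F, b_i> (G^{-1})_{ij}, i.e. solve G f = proj
f = np.linalg.solve(G, proj)
assert np.allclose(f, f_true)
```

Under this matrix reading, the quoted formula recovers the coordinates even though the basis is not orthogonal.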

2. Sep 16, 2008

### Peeter

I've seen something similar with a different notation for real vector spaces (where I think what you are calling the dual basis was called the reciprocal frame). Those frame vectors were defined implicitly like so:

$$u^i \cdot u_j = {\delta^i}_j$$

this allows for expressing a vector in terms of coordinates for both the basis and the reciprocal basis:

$$x = x^k u_k \implies x^k = x \cdot u^k$$

$$x = x_k u^k \implies x_k = x \cdot u_k$$

and can do the same for the reciprocal frame vectors themselves:

$$u^i = a^k u_k \implies u^i \cdot u^k = a^k$$

or

$$u^i = (u^i \cdot u^k) u_k$$

I'd guess the question to ask is, for your Hilbert space how exactly is your dual vector defined. What do you get if you take a dual basis vector expressed in terms of the regular basis, and insert into what you have got so far?
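As a concrete illustration (a numpy sketch with a made-up basis of R^3, not from either text): for a full basis, the reciprocal frame vectors defined by $$u^i \cdot u_j = {\delta^i}_j$$ are just the rows of the inverse of the matrix whose columns are the $$u_j$$:

```python
import numpy as np

# Columns of U are hypothetical non-orthogonal basis vectors u_1, u_2, u_3.
U = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Row i of U^{-1} is the reciprocal vector u^i: by construction
# U^{-1} U = I, i.e. u^i . u_j = delta^i_j.
U_recip = np.linalg.inv(U)
assert np.allclose(U_recip @ U, np.eye(3))

# The coordinates of x in the u_j basis are then x^k = x . u^k:
x = np.array([3.0, -2.0, 1.0])
coords = U_recip @ x               # stacks the dot products x . u^k
assert np.allclose(U @ coords, x)  # x = x^k u_k
```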

3. Sep 16, 2008

### mnb96

Thanks for your answer, Peeter. However, you arrived at exactly the same point where I got stuck: if you plug the term $$u^i = (u^i \cdot u^k) u_k$$ into the equation $$x^k = x \cdot u^k$$, you get $$x^k = (u^k \cdot u^j)(u_j \cdot x)$$

As you can see, my problem is exactly here: the only known quantities are $$x$$ and the vectors $$u_i$$.
How can I compute the dot products $$(u^k \cdot u^j)$$ without knowing the reciprocal vectors?

4. Sep 18, 2008

### Peeter

It appears I'm not getting subscribed to threads I reply to anymore, so sorry for ignoring your question.

From the complete set of vectors $$u_i$$ you can compute the reciprocal set. For a two-vector example, you can do this with a Cramer's-rule-like determinant ratio:

$$u^2 = - u_1 \frac{1}{u_1 \wedge u_2} = ... = \frac{ (u_1)^2 u_2 - (u_1 \cdot u_2) u_1 }{ (u_1)^2 (u_2)^2 - (u_1 \cdot u_2)^2 }$$

I don't think that's a practical way to compute it, though. It's better to formulate the whole thing as a matrix equation and solve that. I vaguely recall that SVD gives you the reciprocal basis for free along with everything else. Setting up that matrix equation to solve is probably half the work of the exercise.
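For what it's worth, here is a numerical sketch (numpy, made-up vectors) checking the determinant-ratio formula above against the Moore-Penrose pseudoinverse, whose rows turn out to be exactly the reciprocal frame vectors (numpy's `pinv` is computed via SVD):

```python
import numpy as np

# Two hypothetical frame vectors spanning a plane in R^3 (columns of U).
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([1.0, 1.0, 0.0])
U = np.column_stack([u1, u2])

# The determinant-ratio formula for the reciprocal vector u^2:
num = np.dot(u1, u1) * u2 - np.dot(u1, u2) * u1
den = np.dot(u1, u1) * np.dot(u2, u2) - np.dot(u1, u2) ** 2
u2_recip = num / den

# Rows of the Moore-Penrose pseudoinverse are the reciprocal frame vectors.
U_pinv = np.linalg.pinv(U)
assert np.allclose(U_pinv[1], u2_recip)
assert np.allclose(U_pinv @ U, np.eye(2))   # u^i . u_j = delta^i_j
```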

5. Sep 19, 2008

### chuy

If you know the coordinates of the vector F in any basis, let's say $$\{e_i\}$$ ,
$$F=g_1e_1+...+g_ne_n$$

Then you can use a matrix to find the coordinates in another basis $$\{b_{i}\}$$:

$$[F]_b$$ stands for the column matrix of coordinates of the vector F in the $$\{b_{i}\}$$ basis. It is easy to show that the mapping $$F\mapsto [F]_b$$ is linear and 1-1 (and the inverse is also a linear mapping). Then:

$$[F]_b=g_1[e_1]_b+...+g_n[e_n]_b$$

Let A be an n×n matrix and x an n×1 column; then the column matrix Ax is a linear combination of the columns of A, with the elements of x as the scalars. In this way:

$$[F]_b=\left(\begin{matrix}[e_1]_b & \cdots & [e_n]_b \end{matrix}\right)\begin{pmatrix}g_1 \\ g_2 \\ \vdots \\ g_n \end{pmatrix}$$

$$[F]_b=\left(\begin{matrix}[e_1]_b & \cdots & [e_n]_b \end{matrix}\right)[F]_e$$

where $$[F]_b=\begin{pmatrix}f_1 \\ f_2 \\ \vdots \\ f_n \end{pmatrix}$$
So you need to know $$e_i=e^k_ib_k$$ in order to find the previous matrix (indeed: $$[e_i]_b=(e^1_i,e^2_i,...,e^n_i)^T$$ ).

But sometimes you already know $$b_i=b^k_ie_k$$ instead, in which case it is easier to start from:

$$[F]_e=\left(\begin{matrix}[b_1]_e & \cdots & [b_n]_e \end{matrix}\right)[F]_b$$

Take $$\alpha^i[b_i]_e=0$$, with $$\alpha^i\in \mathbb{F}$$. Since the function $$[\ ]_e$$ is 1-1, applying the inverse to $$\alpha^i[b_i]_e=0$$ we get $$\alpha^ib_i=0$$, so $$\alpha^i=0$$ (because the $$b_i$$ form a basis); hence the $$[b_i]_e$$ are linearly independent, and the matrix mentioned above is invertible. Therefore:

$$[F]_b=\left(\begin{matrix}[b_1]_e & \cdots & [b_n]_e \end{matrix}\right)^{-1}[F]_e$$
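A short numerical sketch of this last step (numpy, with a hypothetical basis; the columns of the matrix are the $$[b_i]_e$$):

```python
import numpy as np

# Hypothetical example: e_i is the standard basis of R^3, and we know each
# b_i expressed in it; column i of M is [b_i]_e.
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

F_e = np.array([2.0, 3.0, 1.0])    # [F]_e, coordinates in the e basis
F_b = np.linalg.solve(M, F_e)      # [F]_b = M^{-1} [F]_e

# Sanity check: reassembling F from the b_i coordinates gives back [F]_e.
assert np.allclose(M @ F_b, F_e)
```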

I hope this helps you.

6. Sep 19, 2008

### mnb96

Thanks a lot, Peeter and chuy!
It seems that's the only way to solve the problem. Moreover, the statement in the text I read,
$$f_{j} = <F,b_{i}><b_{i},b_{j}>^{-1}$$
is not generally correct, but only valid for orthogonal coordinates.

At this point the only thing I still need to know is this: in the text I am reading, they have a vector F expressed in an *orthogonal* basis. Then they apply the same (arbitrary) transformation to all the basis vectors; this yields a new set of basis vectors that are NOT necessarily orthogonal, and we want the coordinates of F in this new basis.
However, they say it is possible to "compensate" for the non-orthogonality using the metric tensor, and I don't understand how.

7. Sep 19, 2008

### Peeter

I calculated something like this for myself (the calculation of the reciprocal frame vectors, from which the coordinate results follow). My notes are from trying to reconcile some different ways of expressing projection (matrix and Clifford algebra approaches), but around page seven ("Projection using reciprocal frame vectors.") of the following notes probably expresses the ideas you are interested in, and doesn't have any Clifford algebra content:

http://www.geocities.com/peeter_joot/geometric_algebra/oblique_proj.pdf

If you read it you'll probably be the first one to do so that wasn't me, so let me know if you find any errors. The connection with the metric tensor comes from the fact that the metric tensor is essentially the set of dot products between pairs of frame vectors, $$g_{ij} = u_i \cdot u_j$$.

8. Sep 19, 2008

### mnb96

I am reading your notes, and I have got something to ask. Sorry if my questions sound silly but I think I'm missing some concept or I am not too familiar with the notation.
- page 8: when you write $$x = e +\sum P_{j}P^{j}\cdot x$$ you probably mean $$x = e +\sum P_{j} (P^{j}\cdot x)$$?

- In the same identity, why do you introduce the vector 'e'? IMHO it is not necessary, because it is anyway orthogonal to the other vectors.

- Finally, the most important thing: I don't understand this notation:
$$Proj_{P}(x) = (\sum P_{j}(P^{j})^T)x$$
What does this yield? Perhaps a sum of terms $$M_{j}x$$, where we have the matrices $$M_j = P_{j}(P^{j})^T$$?

When this is clear I can safely continue reading.
Thanks a lot for all this!

9. Sep 19, 2008

### Peeter

Yes, parens were implied. The vector e was introduced since it is part of x, just not part of the projection from x onto the subspace.

The bit that you are unclear on is a matrix encoding of the preceding operation, and has the appearance that one can just remove the dot products to form the projection matrix. This wasn't clear to me initially either, and I have added a link to a different note where I explained that bit for myself as well. Since I've been my only reader I assumed my own current understanding as I wrote each set of notes;)
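A quick numerical check of that "remove the dot products" claim (numpy, with made-up frame vectors $$P_j$$ spanning a plane in R^3; the reciprocal vectors $$P^j$$ are taken from the pseudoinverse, as discussed earlier in the thread):

```python
import numpy as np

# Hypothetical frame spanning a plane in R^3 (columns P_1, P_2).
P = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# Rows of pinv(P) are the reciprocal frame vectors P^j.
P_recip = np.linalg.pinv(P)

# "Removing the dot products": sum_j P_j (P^j)^T is the projection matrix.
Proj = sum(np.outer(P[:, j], P_recip[j]) for j in range(P.shape[1]))

x = np.array([2.0, 3.0, 5.0])
px = Proj @ x

# The projected vector lies in the plane (last component vanishes), and
# applying Proj twice changes nothing (it is idempotent).
assert np.isclose(px[2], 0.0)
assert np.allclose(Proj @ px, px)
```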

Have a look at the revision and the link. If it is still unclear I'll see if I can explain things better (once home from work;)

Peeter

Last edited: Sep 19, 2008
10. Sep 22, 2008

### mnb96

I went through your notes a bit; I believe they are quite clear and well written, and they might be useful to many other people!
I think I understood what you did: you derive a formula to compute the projection of a vector onto another vector along a given direction; this answers my question, at least as it was posed in the thread title.

I still have some doubts about my specific problem, but that would bring this thread off-topic; perhaps it might be useful to speak about that in private?

11. Oct 1, 2008

### mnb96

When I was playing around with this problem, I came upon a result. I omit the steps because the simple result I got is in fact VERY well known in advanced linear algebra; I just happened to be a little ignorant of the subject. The keyword here was: Moore-Penrose inverse, also commonly known as the pseudoinverse.

Figuring out what a pseudoinverse matrix is immediately solves the problem I posed; perhaps this will be useful to other people still coming across this thread.
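A minimal sketch of that observation (numpy; the basis here is made up): for a basis stacked as the columns of B with full column rank, $$\mathrm{pinv}(B) = (B^T B)^{-1} B^T$$, so the pseudoinverse bundles the Gram-matrix inverse and the dot products $$<F,b_{i}>$$ into a single matrix multiply:

```python
import numpy as np

# Hypothetical non-orthogonal basis as columns of B; F is given in the
# ambient (orthonormal) coordinates.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
f_true = np.array([1.0, -2.0, 4.0])
F = B @ f_true

# One step: coordinates in the b_i basis via the Moore-Penrose pseudoinverse.
f = np.linalg.pinv(B) @ F
assert np.allclose(f, f_true)
```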

12. Jul 18, 2010

### asabbs

I found this thread useful in giving me several new leads. I was stuck in understanding part of a paper which implements a cartesian to curvilinear transformation where the curvilinear basis is not necessarily orthogonal. Thanks Peeter and mnb96.

13. Jul 19, 2010

### mnb96

If you want, feel free to mention the link of the paper and the part that is causing you troubles.

14. Jul 22, 2010

### asabbs

http://jeb.biologists.org/cgi/content/abstract/205/1/55

You can locate the relevant equations on the third page (numbered page 57 of the Journal of Experimental Biology). Equations (17), (18), and (19) have been boggling me for a while. Roughly, each row is a first derivative in curvilinear coordinates ($$\xi , \eta , \zeta$$).

I was wondering: can the first row of e_v be rewritten as an inner product of

$$\nabla \xi \bullet$$

and the vector:

$$\left[ \nabla \xi u_{\xi} ; \nabla \eta u_{\eta} ; \nabla \zeta u_{\zeta} \right]$$

I'm also having trouble understanding how this formulation correctly transforms the first derivative of u in the basis {x, y, z} to its equivalent in the curvilinear coordinate basis {$$\xi , \eta , \zeta$$} which is not necessarily orthogonal.

Last edited: Jul 22, 2010