
How to find vector coordinates in non-orthogonal systems?

  1. Sep 16, 2008 #1
    Hello,
    I am reading a text in which it is assumed the following to be known:

    - an n-dimensional Hilbert space
    - a set of non-orthogonal basis vectors {[tex]b_{i}[/tex]}
    - a vector F in that space ([tex]F = f_{1}b_{1}+...+f_{n}b_{n}[/tex])

    I'd like to find the components of F in the given basis {[tex]b_{i}[/tex]}, and according to the text this is easily done by:

    [tex]f_{j} = <F,b_{i}><b_{i},b_{j}>^{-1}[/tex]
    where the Einstein summation convention is applied over the index 'i'.

    I'd really like to know how one could arrive at that formula. As far as I know, since the basis is non-orthogonal, the inner product [tex]<F,b_{i}>[/tex] yields the corresponding coordinate in the dual basis {[tex]B_{i}[/tex]}, and so I was able to arrive only at the following formula:

    [tex]f_{j} = <F,b_{i}><B_{i},B_{j}>[/tex]

    but can I get rid of the dual-basis vectors in the formula?
    Thanks a lot in advance!
     
  3. Sep 16, 2008 #2
    I've seen something similar with a different notation for real vector spaces (where I think what you are calling the dual basis was called the reciprocal frame). Those frame vectors were defined implicitly like so:

    [tex]
    u^i \cdot u_j = {\delta^i}_j
    [/tex]

    this allows for expressing a vector in terms of coordinates for both the basis and the reciprocal basis:

    [tex]
    x = x^k u_k
    \implies
    x^k = x \cdot u^k
    [/tex]

    [tex]
    x = x_k u^k
    \implies
    x_k = x \cdot u_k
    [/tex]

    and one can do the same for the reciprocal frame vectors themselves:

    [tex]
    u^i = a^k u_k
    \implies
    u^i \cdot u^k = a^k
    [/tex]

    or

    [tex]
    u^i = (u^i \cdot u^k) u_k
    [/tex]
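
    To make that concrete, here's a small numerical sketch (Python/NumPy; the example vectors are just made up for illustration). For a full-rank frame of R^n, the reciprocal frame vectors are simply the rows of the inverse of the matrix whose columns are the u_j:

    [code]
    import numpy as np

    # Two non-orthogonal frame vectors in R^2, as the columns of U.
    U = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

    # Rows of U^{-1} are the reciprocal vectors u^i, since
    # (U^{-1} U)_{ij} = u^i . u_j = delta^i_j.
    R = np.linalg.inv(U)
    print(R @ U)           # identity matrix

    # Coordinates of x in the frame {u_k}: x^k = x . u^k.
    x = np.array([2.0, 3.0])
    coords = R @ x         # entries are u^k . x
    print(U @ coords)      # reconstruction x = x^k u_k gives back x
    [/code]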

    I'd guess the question to ask is: for your Hilbert space, how exactly is the dual basis defined? What do you get if you take a dual basis vector expressed in terms of the regular basis and insert it into what you have so far?
     
  4. Sep 16, 2008 #3
    Thanks for your answer, Peeter; however, you arrived exactly at the point where I got stuck. In fact, if you plug the term [tex]u^i = (u^i \cdot u^k) u_k[/tex] into the equation [tex]x^k = x \cdot u^k[/tex], you get: [tex]x^k = (u^k \cdot u^j)(u_j \cdot x)[/tex]

    As you can see, my problem is here: the only quantities that are known are [tex]x[/tex] and the vectors [tex]u_i[/tex].
    How can I compute the dot products [tex](u^k \cdot u^j)[/tex] without knowing the reciprocal vectors?
     
  5. Sep 18, 2008 #4
    It appears I'm not getting subscribed to threads I reply to anymore, so sorry for ignoring your question.

    From the complete set of vectors [itex]u_i[/itex] you can compute the reciprocal set. For example, for a two-vector frame, you can do this with a Cramer's-rule-like determinant ratio:

    [tex]
    u^2 = - u_1 \frac{1}{u_1 \wedge u_2} = ... =
    \frac{ (u_1)^2 u_2 - (u_1 \cdot u_2) u_1 }{ (u_1)^2 (u_2)^2 - (u_1 \cdot u_2)^2 }
    [/tex]

    I don't think that's a practical computational route, though. It's better to formulate the whole thing as a matrix equation and solve that; I vaguely recollect that SVD gives you the reciprocal basis for free along with everything else. Setting up that matrix equation to solve is probably half the work of the exercise.
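
    For what it's worth, here's a quick numerical check (Python/NumPy, with made-up example vectors) that the determinant-ratio formula above agrees with the pseudoinverse route for two vectors embedded in R^3; the rows of pinv(U) are exactly the reciprocal frame vectors:

    [code]
    import numpy as np

    u1 = np.array([1.0, 0.0, 1.0])
    u2 = np.array([1.0, 2.0, 0.0])

    # u^2 from the determinant ratio given above.
    num = np.dot(u1, u1) * u2 - np.dot(u1, u2) * u1
    den = np.dot(u1, u1) * np.dot(u2, u2) - np.dot(u1, u2) ** 2
    u2_recip = num / den

    # Pseudoinverse route: rows of pinv(U) are the reciprocal vectors.
    U = np.column_stack([u1, u2])
    R = np.linalg.pinv(U)

    print(np.allclose(R[1], u2_recip))   # True
    print(R @ U)                         # 2x2 identity: u^i . u_j = delta^i_j
    [/code]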
     
  6. Sep 19, 2008 #5
    If you know the coordinates of the vector F in any basis, say [tex]\{e_i\}[/tex],
    [tex]F=g_1e_1+...+g_ne_n[/tex]

    Then you can use a matrix to find the coordinates in another basis [tex]\{b_{i}\}[/tex]:

    [tex][F]_b[/tex] stands for the column matrix of coordinates of the vector F in the [tex]\{b_{i}\}[/tex] basis. It is easy to show that the mapping [tex]F\mapsto [F]_b[/tex] is linear and 1-1 (and the inverse is also a linear mapping). Then:

    [tex][F]_b=g_1[e_1]_b+...+g_n[e_n]_b[/tex]

    Let A be an n×n matrix and x an n×1 column; then the column matrix Ax is a linear combination of the columns of A, with scalars given by the entries of x. In this way:

    [tex][F]_b=\left(\begin{matrix}[e_1]_b & \cdots & [e_n]_b \end{matrix}\right)\begin{pmatrix}g_1 \\ g_2 \\ \vdots \\ g_n \end{pmatrix}[/tex]

    [tex][F]_b=\left(\begin{matrix}[e_1]_b & \cdots & [e_n]_b \end{matrix}\right)[F]_e[/tex]

    where [tex][F]_b=\begin{pmatrix}f_1 \\ f_2 \\ \vdots \\ f_n \end{pmatrix}[/tex]
    So you need to know [tex]e_i=e^k_ib_k[/tex] in order to find the previous matrix (indeed: [tex][e_i]_b=(e^1_i,e^2_i,...,e^n_i)^T[/tex] ).

    But sometimes you already know [tex]b_i=b^k_ie_k[/tex] instead, so it would be easier to find:

    [tex][F]_e=\left(\begin{matrix}[b_1]_e & \cdots & [b_n]_e \end{matrix}\right)[F]_b[/tex]

    Take [tex]\alpha^i[b_i]_e=0[/tex], with [tex]\alpha^i\in \mathbb{F}[/tex]. Since the function [tex][\ ]_e[/tex] is 1-1, applying the inverse to [tex]\alpha^i[b_i]_e=0[/tex] we get [tex]\alpha^ib_i=0[/tex], so [tex]\alpha^i=0[/tex] (because [tex]\{b_i\}[/tex] is a basis). Hence the set [tex]\{[b_i]_e\}[/tex] is linearly independent, and the matrix mentioned above is invertible. Therefore:

    [tex][F]_b=\left(\begin{matrix}[b_1]_e & \cdots & [b_n]_e \end{matrix}\right)^{-1}[F]_e[/tex]
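
    A small numerical illustration of that last formula (Python/NumPy; the matrix entries are just example values). In practice it is better to solve the linear system than to form the inverse explicitly:

    [code]
    import numpy as np

    # Columns are [b_1]_e, ..., [b_n]_e: the new basis vectors
    # expressed in the old basis {e_i}.
    B = np.array([[1.0, 1.0],
                  [0.0, 2.0]])

    F_e = np.array([3.0, 4.0])        # [F]_e

    # [F]_b = B^{-1} [F]_e, computed via solve() rather than inv().
    F_b = np.linalg.solve(B, F_e)

    print(F_b)        # coordinates of F in the basis {b_i}
    print(B @ F_b)    # reconstructs [F]_e
    [/code]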

    I hope this helps you.
     
  7. Sep 19, 2008 #6
    Thanks a lot, Peeter and chuy!
    It seems that's the only way to solve the problem. Moreover, the statement in the text I am reading,
    [tex]f_{j} = <F,b_{i}><b_{i},b_{j}>^{-1}[/tex]
    is not generally correct, but only valid for orthogonal coordinates.

    At this point the only thing I still need to know is this: in the text I am reading, they have a vector F expressed in an *orthogonal* basis. Then they apply the same arbitrary function to all the basis vectors; this obviously yields a new set of basis vectors that are NOT necessarily orthogonal, and we want to get the coordinates of F in this new basis.
    However, they say that it is possible to "compensate" for the non-orthogonality using the metric tensor, but I don't understand how.
     
  8. Sep 19, 2008 #7
    I calculated something like this for myself (the calculation of the reciprocal frame vectors, from which the coordinate results follow). My notes are from trying to reconcile some different ways of expressing projection (matrix and Clifford algebra approaches), but around page seven ('Projection using reciprocal frame vectors') the following notes probably express the ideas you are interested in, without any Clifford algebra content:

    http://www.geocities.com/peeter_joot/geometric_algebra/oblique_proj.pdf

    If you read it, you'll probably be the first one to do so who wasn't me, so let me know if you find any errors. The connection with the metric tensor comes from the fact that the metric tensor is essentially the set of dot products between pairs of frame vectors.
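
    Roughly, the connection is this (in the notation of my earlier post, with [tex]g_{ij} = u_i \cdot u_j[/tex] and [tex]g^{ij}[/tex] the entries of the inverse of that matrix):

    [tex]
    x = x^i u_i
    \implies
    x \cdot u_j = x^i (u_i \cdot u_j) = x^i g_{ij}
    \implies
    x^j = (x \cdot u_i) g^{ij}
    [/tex]

    so inverting the matrix of dot products is exactly the "compensation" for the non-orthogonality.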
     
  9. Sep 19, 2008 #8
    I am reading your notes, and I have something to ask. Sorry if my questions sound silly, but I think I'm missing some concept or I am not too familiar with the notation.
    -- page 8:
    when you write [tex]x = e +\sum P_{j}P^{j}\cdot x[/tex], you probably mean [tex]x = e +\sum P_{j} (P^{j}\cdot x) [/tex]?

    - in the same identity: why do you actually introduce that vector 'e'? IMHO it is not necessary, because it is in any case orthogonal to the other vectors.

    - finally, the most important thing: I don't understand this notation:
    [tex] Proj_{P}(x) = (\sum P_{j}(P^{j})^T)x[/tex]
    what does this yield? Perhaps a sum of terms [tex] M_{j}x[/tex], where we have the matrices [tex]M_j = P_{j}(P^{j})^T[/tex]?

    When this is clear I can safely continue reading.
    Thanks a lot for all this!
     
  10. Sep 19, 2008 #9
    Yes, parens were implied. The vector e was introduced because it is part of x, just not part of the projection of x onto the subspace.

    The bit that you are unclear on is a matrix encoding of the preceding operation, and it has the appearance that one can just remove the dot products to form the projection matrix. This wasn't clear to me initially either, and I have added a link to a different note where I explained that bit for myself as well. Since I've been my only reader, I assumed my own current understanding as I wrote each set of notes;)
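
    As a quick sanity check of that claim, here's a numerical sketch (Python/NumPy, with a throwaway example frame) showing that the matrix [tex]\sum_j P_j (P^j)^T[/tex] really is a projection onto the span of the frame:

    [code]
    import numpy as np

    # Two frame vectors spanning a plane in R^3, as columns of P.
    P = np.array([[1.0, 1.0],
                  [0.0, 2.0],
                  [1.0, 0.0]])

    # Rows of the pseudoinverse are the reciprocal frame vectors P^j.
    R = np.linalg.pinv(P)

    # sum_j P_j (P^j)^T, written as a matrix product of P and R.
    Proj = P @ R

    print(np.allclose(Proj @ Proj, Proj))   # idempotent: True
    print(np.allclose(Proj @ P, P))         # leaves the frame vectors fixed: True
    [/code]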

    Have a look at the revision and the link. If it is still unclear I'll see if I can explain things better (once home from work;)

    Peeter
     
    Last edited: Sep 19, 2008
  11. Sep 22, 2008 #10
    I went through your notes a bit; I believe they are quite clear and well written, and they might be useful to many other people!
    I think I understood what you did in your notes: you derive a formula to compute the projection of a vector onto another vector along a given direction vector. This would answer my question, at least as it was posed in the thread title.

    I still have some doubts about my specific problem, but that would bring this thread off-topic; perhaps it might be useful to speak about that in private?
     
  12. Oct 1, 2008 #11
    When I was playing around with this problem, I came upon a result. I omit the steps because the simple result I got is in fact VERY well known in advanced linear algebra, and I just happened to be a little ignorant of the subject. The keyword here is: Moore-Penrose inverse, also commonly known as the pseudoinverse.

    Figuring out what a pseudoinverse matrix is immediately solves the problem I posed; perhaps this will be useful to other people who come across this thread.
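
    For completeness, a minimal sketch of that resolution (Python/NumPy; the example numbers are arbitrary). With the basis vectors as the columns of B, the coordinates of F are just pinv(B) applied to F:

    [code]
    import numpy as np

    # Non-orthogonal basis vectors as the columns of B.
    B = np.array([[2.0, 1.0],
                  [0.0, 1.0]])

    F = np.array([3.0, 5.0])

    # For full column rank, pinv(B) = (B^T B)^{-1} B^T, so this is
    # exactly the Gram-matrix solution discussed above.
    f = np.linalg.pinv(B) @ F

    print(f)        # coordinates f_i with F = sum_i f_i b_i
    print(B @ f)    # reconstruction gives back F
    [/code]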
     
  13. Jul 18, 2010 #12
    I found this thread useful in giving me several new leads. I was stuck on understanding part of a paper which implements a Cartesian-to-curvilinear transformation where the curvilinear basis is not necessarily orthogonal. Thanks Peeter and mnb96.
     
  14. Jul 19, 2010 #13
    If you want, feel free to mention the link of the paper and the part that is causing you troubles.
     
  15. Jul 22, 2010 #14
    You can download the paper for free here. It deals with CFD modeling of a fruit fly wing.

    http://jeb.biologists.org/cgi/content/abstract/205/1/55

    You can locate the relevant equations on the third page (numbered as page 57 of the Journal of Experimental Biology). Equations (17), (18), and (19) have been boggling me for a while. Roughly, each row is a first derivative in curvilinear coordinates ([tex]\xi , \eta , \zeta[/tex]).

    I was wondering: can the first row of e_v be rewritten as an inner product of

    [tex] \nabla \xi \bullet [/tex]

    and the vector:

    [tex]\left[ \nabla \xi u_{\xi} ; \nabla \eta u_{\eta} ; \nabla \zeta u_{\zeta} \right][/tex]

    I'm also having trouble understanding how this formulation correctly transforms the first derivative of u in the basis {x, y, z} to its equivalent in the curvilinear coordinate basis {[tex]\xi , \eta , \zeta[/tex]} which is not necessarily orthogonal.
     
    Last edited: Jul 22, 2010