How to find vector coordinates in non-orthogonal systems?

In summary, to find the components of a vector in a given non-orthogonal basis, the text uses the formula [tex]f_{j} = <F,b_{i}><b_{i},b_{j}>^{-1}[/tex], with Einstein summation over the index i.
  • #1
mnb96
Hello,
I am reading a text in which it is assumed the following to be known:

- an n-dimensional Hilbert space
- a set of non-orthogonal basis vectors {[tex]b_{i}[/tex]}
- a vector F in that space (F = [tex]f_{1}b_{1}+...+f_{n}b_{n}[/tex])

I'd like to find the components of F in the given basis {[tex]b_{i}[/tex]}, and according to the text this is easily done by:

[tex]f_{j}[/tex] = [tex]<F,b_{i}><b_{i},b_{j}>^{-1}[/tex]
where Einstein summation convention has been used on the index 'i'.

I'd really like to know how I could arrive at that formula, but as far as I know, since the basis is non-orthogonal, the dot product [tex]<F,b_{i}>[/tex] yields the corresponding coordinate in the dual basis system {[tex]B_{i}[/tex]}, and so I was only able to arrive at the following formula:

[tex]f_{j}[/tex] = [tex]<F,b_{i}><B_{i},B_{j}>[/tex]

But can I get rid of the dual-basis vectors in the formula?
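
For concreteness, here is a small numerical check of the formula I did arrive at (my own sketch in numpy, assuming real vectors, with the dual basis obtained by matrix inversion):

[code]
import numpy as np

# Non-orthogonal basis vectors b_i as the columns of B (a toy example).
B = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])
f = np.array([2.0, -1.0, 3.0])   # known components f_i
F = B @ f                        # F = f_1*b_1 + ... + f_n*b_n

# Dual basis vectors B_i satisfy <B_i, b_j> = delta_ij;
# they are the columns of the inverse-transpose.
B_dual = np.linalg.inv(B).T

c = B.T @ F                      # c_i = <F, b_i>
G_dual = B_dual.T @ B_dual       # <B_i, B_j>
print(c @ G_dual)                # recovers [ 2. -1.  3.]
[/code]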
Thanks a lot in advance!
 
  • #2
I've seen something similar with a different notation for real vector spaces (where I think what you are calling the dual basis was called the reciprocal frame). Those frame vectors were defined implicitly like so:

[tex]
u^i \cdot u_j = {\delta^i}_j
[/tex]

This allows one to express a vector in terms of coordinates with respect to both the basis and the reciprocal basis:

[tex]
x = x^k u_k
\implies
x^k = x \cdot u^k
[/tex]

[tex]
x = x_k u^k
\implies
x_k = x \cdot u_k
[/tex]

and one can do the same for the reciprocal frame vectors themselves:

[tex]
u^i = a^k u_k
\implies
u^i \cdot u^k = a^k
[/tex]

or

[tex]
u^i = (u^i \cdot u^k) u_k
[/tex]
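
A quick numerical illustration of those identities (my sketch, assuming real vectors with the ordinary dot product; the reciprocal vectors come out as the rows of the inverse basis matrix):

[code]
import numpy as np

# Frame vectors u_j as the columns of U.
U = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])

# The rows of U^{-1} are the reciprocal vectors u^i: u^i . u_j = delta^i_j.
U_recip = np.linalg.inv(U)
print(np.allclose(U_recip @ U, np.eye(3)))   # True

# Coordinates via the reciprocal frame: x^k = x . u^k.
x = np.array([2.0, -1.0, 3.0])
coords = U_recip @ x
print(np.allclose(U @ coords, x))            # True: x = x^k u_k
[/code]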

I'd guess the question to ask is: for your Hilbert space, how exactly is the dual basis defined? What do you get if you take a dual basis vector expressed in terms of the regular basis and insert it into what you have got so far?
 
  • #3
Thanks for your answer, Peeter; however, you arrived exactly at the point where I got stuck. In fact, if you just plug the term [tex]u^i = (u^i \cdot u^k) u_k[/tex] into the equation [tex]x^k = x \cdot u^k[/tex], you get: [tex]x^k = (u^k \cdot u^j)(u_j \cdot x)[/tex]

As you can see, my problem is this: the only quantities that are known are [tex]x[/tex] and the vectors [tex]u_i[/tex].
How can I compute the dot products [tex](u^k \cdot u^j)[/tex] without knowing those reciprocal vectors?
 
  • #4
It appears I'm not getting subscribed to threads I reply to anymore, so sorry for ignoring your question.

From the complete set of vectors [itex]u_i[/itex] you can compute the reciprocal set. For example, doing a two-vector example, you can do this with a Cramer's-rule-like determinant ratio:

[tex]
u^2 = - u_1 \frac{1}{u_1 \wedge u_2} = ... =
\frac{ (u_1)^2 u_2 - (u_1 \cdot u_2) u_1 }{ (u_1)^2 (u_2)^2 - (u_1 \cdot u_2)^2 }
[/tex]

I don't think that's a practical way to compute it, though. Better to formulate the whole thing as a matrix equation and solve that. I vaguely recollect that the SVD gives you the reciprocal basis nicely for free along with everything else. Sounds like setting up that matrix equation to solve is probably half the work of the exercise.
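
As a sketch of that matrix formulation (my numpy example; pinv is computed via the SVD, and for a non-square frame its rows are the reciprocal vectors within the span):

[code]
import numpy as np

# Two frame vectors spanning a plane in R^3, as the columns of A.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([1.0, 1.0, 0.0])
A = np.column_stack([u1, u2])

# The rows of the Moore-Penrose pseudoinverse are the reciprocal vectors.
A_recip = np.linalg.pinv(A)                  # 2x3, computed via the SVD
print(np.allclose(A_recip @ A, np.eye(2)))   # u^i . u_j = delta^i_j

# Cross-check u^2 against the determinant-ratio formula above.
denom = (u1 @ u1) * (u2 @ u2) - (u1 @ u2) ** 2
u2_recip = ((u1 @ u1) * u2 - (u1 @ u2) * u1) / denom
print(np.allclose(A_recip[1], u2_recip))     # True
[/code]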
 
  • #5
If you know the coordinates of the vector F in any basis, let's say [tex]\{e_i\}[/tex],
[tex]F=g_1e_1+...+g_ne_n[/tex]

Then you can use a matrix to find the coordinates in another basis [tex]\{b_{i}\}[/tex]:

[tex][F]_b [/tex] stands for the column matrix of coordinates of the vector F in the [tex]\{b_{i}\}[/tex] basis. It is easy to show that the mapping [tex]F\mapsto [F]_b[/tex] is linear and 1-1 (and the inverse is also a linear mapping). Then:

[tex][F]_b=g_1[e_1]_b+...+g_n[e_n]_b[/tex]

Let A be an n×n matrix and x an n×1 column; then the column matrix Ax is a linear combination of the columns of A, with the entries of x as the scalars. In this way:

[tex][F]_b=\left(\begin{matrix}[e_1]_b & \cdots & [e_n]_b \end{matrix}\right)\begin{pmatrix}g_1 \\ g_2 \\ \vdots \\ g_n \end{pmatrix}[/tex]

[tex][F]_b=\left(\begin{matrix}[e_1]_b & \cdots & [e_n]_b \end{matrix}\right)[F]_e[/tex]

where [tex][F]_b=\begin{pmatrix}f_1 \\ f_2 \\ \vdots \\ f_n \end{pmatrix}[/tex]
So you need to know [tex]e_i=e^k_ib_k[/tex] in order to find the previous matrix (indeed: [tex][e_i]_b=(e^1_i,e^2_i,...,e^n_i)^T[/tex] ).

But sometimes you already know [tex]b_i=b^k_ie_k[/tex] instead, in which case it is easier to find:

[tex][F]_e=\left(\begin{matrix}[b_1]_e & \cdots & [b_n]_e \end{matrix}\right)[F]_b[/tex]

Take [tex]\alpha^i[b_i]_e=0[/tex], with [tex]\alpha^i\in \mathbb{F}[/tex]. Since the function [tex][\ ]_e[/tex] is 1-1, applying the inverse to [tex]\alpha^i[b_i]_e=0[/tex] gives [tex]\alpha^ib_i=0[/tex], so [tex]\alpha^i=0[/tex] (because the [tex]b_i[/tex] form a basis); hence the columns [tex][b_i]_e[/tex] are linearly independent, and the matrix mentioned above is invertible. Therefore:

[tex][F]_b=\left(\begin{matrix}[b_1]_e & \cdots & [b_n]_e \end{matrix}\right)^{-1}[F]_e[/tex]
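
For example, in numpy that last equation might look like this (a sketch, taking [tex]\{e_i\}[/tex] to be the standard basis of R^3):

[code]
import numpy as np

# Columns are [b_1]_e, ..., [b_n]_e: the b-basis written in the e-basis.
Bmat = np.column_stack([[1.0, 0.0, 0.0],
                        [1.0, 1.0, 0.0],
                        [1.0, 1.0, 1.0]])

F_e = np.array([2.0, 3.0, 1.0])      # [F]_e, the coordinates in the e-basis
F_b = np.linalg.solve(Bmat, F_e)     # [F]_b = Bmat^{-1} [F]_e
print(np.allclose(Bmat @ F_b, F_e))  # consistency check: True
[/code]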

I hope this helps you.
 
  • #6
Thanks a lot, Peeter and chuy!
It seems that's the only way to solve the problem. Moreover, the statement in the text I read,
[tex]f_{j} = <F,b_{i}><b_{i},b_{j}>^{-1}[/tex]
is not generally correct, but only valid for orthogonal coordinates.

At this point the only thing I still need to know is this: in the text I am reading, they have a vector F expressed in an *Orthogonal* basis. Then they apply the same arbitrary function to all the basis vectors; this obviously yields a new set of basis vectors that are NOT necessarily orthogonal, and we want to get the coordinates of F in this new basis.
However, they say that it is possible to "compensate" for the non-orthogonality using the metric tensor, but I don't understand how.
 
  • #7
mnb96 said:
... I am reading, they have a vector F expressed in an *Orthogonal* basis. Then they apply the same arbitrary function to all the basis vectors; this obviously yields a new set of basis vectors that are NOT necessarily orthogonal, and we want to get the coordinates of F in this new basis.
However, they say that it is possible to "compensate" for the non-orthogonality using the metric tensor, but I don't understand how.

I calculated something like this for myself (the calculation of the reciprocal frame vectors, from which the coordinate results follow). My notes are from trying to reconcile some different ways of expressing projection (matrix and Clifford algebra approaches), but the part around page seven ("Projection using reciprocal frame vectors") of the following notes probably expresses the ideas you are interested in and doesn't have any Clifford algebra content:

http://www.geocities.com/peeter_joot/geometric_algebra/oblique_proj.pdf

If you read it you'll probably be the first one to do so who wasn't me, so let me know if you find any errors. The connection with the metric tensor comes from the fact that the metric tensor is essentially the set of dot products between pairs of frame vectors.
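
In matrix terms that "compensation" might look like this (a sketch of mine: start with an orthonormal basis, deform it with an arbitrary invertible map, and undo the non-orthogonality with the inverse metric):

[code]
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # arbitrary map; invertible with probability 1

B = A @ np.eye(3)                 # deformed basis vectors b_i = A e_i (columns)
G = B.T @ B                       # metric tensor g_ij = <b_i, b_j>

F = np.array([1.0, -2.0, 0.5])    # some vector in the ambient space
c = B.T @ F                       # raw dot products <F, b_i>
f = np.linalg.solve(G, c)         # compensate with G^{-1}: components of F
print(np.allclose(B @ f, F))      # True: F = f_i b_i
[/code]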
 
  • #8
I am reading your notes, and I have got something to ask. Sorry if my questions sound silly, but I think I'm missing some concept or am not too familiar with the notation.
- page 8:
when you write [tex]x = e +\sum P_{j}P^{j}\cdot x[/tex] you probably mean [tex]x = e +\sum P_{j} (P^{j}\cdot x) [/tex] ?

- in the same identity, why do you actually introduce that vector 'e'? IMHO it is not necessary, because it is anyway orthogonal to the other vectors.

- finally, the most important thing: I don't understand this notation:
[tex] Proj_{P}(x) = (\sum P_{j}(P^{j})^T)x[/tex]
What does this yield? Perhaps a sum of terms [tex] M_{j}x[/tex], where the [tex]M_j = P_{j}(P^{j})^T[/tex] are matrices?

When this is clear I can safely continue reading.
Thanks a lot for all this!
 
  • #9
Yes, parens were implied. The vector e was introduced since it is part of x, just not part of the projection from x onto the subspace.

The bit that you are unclear on is a matrix encoding of the preceding operation, and it has the appearance that one can just remove the dot products to form the projection matrix. This wasn't clear to me initially either, and I have added a link to a different note where I explained that bit for myself as well. Since I've been my only reader, I assumed my own current understanding as I wrote each set of notes;)

Have a look at the revision and the link. If it is still unclear I'll see if I can explain things better (once home from work;)
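
In the meantime, here is that matrix encoding in numpy form (my sketch: with the frame vectors as the columns of P, the reciprocal vectors are the rows of the pseudoinverse, and the sum of outer products collapses into a single matrix product):

[code]
import numpy as np

# Frame vectors P_j spanning a plane in R^3, as the columns of P.
P = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0]])

P_recip = np.linalg.pinv(P)        # rows are the reciprocal vectors P^j
Proj = P @ P_recip                 # equals sum_j P_j (P^j)^T

x = np.array([1.0, 2.0, 3.0])
x_proj = Proj @ x                  # projection of x onto the subspace
e = x - x_proj                     # the leftover vector e ...
print(np.allclose(P.T @ e, 0))     # ... is orthogonal to the frame: True
[/code]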

Peeter
 
  • #10
I went through your notes a bit, and I believe they are quite clear and well written: they might be useful to many other people!
I guess I understood what you did in your notes: you actually derive a formula to compute the projection of a vector onto another vector, given a direction vector; this would answer my question, at least the way it was posed in the thread title.

I still have some doubts about my specific problem, but that would bring this thread off-topic; perhaps it might be useful to speak about that in private?
 
  • #11
When I was playing around with this problem, I came upon a result; I omit the steps because the simple result I got is in fact VERY well known in advanced linear algebra, and I just happened to be a little ignorant of the subject. The keyword here is: Moore-Penrose inverse, also commonly known as the pseudoinverse.

Figuring out what a pseudoinverse matrix is immediately solves the problem I posed; perhaps this will be useful for other people coming across this thread.
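
For future readers, in numpy the whole computation collapses to something like this (a sketch, assuming real vectors):

[code]
import numpy as np

# Non-orthogonal basis vectors b_i as the columns of B.
B = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])

F = np.array([3.0, 2.0, 1.0])     # the vector, in ambient coordinates
f = np.linalg.pinv(B) @ F         # the pseudoinverse yields the components f_i
print(np.allclose(B @ f, F))      # True: F = f_1 b_1 + ... + f_n b_n
[/code]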
 
  • #12
I found this thread useful in giving me several new leads. I was stuck on understanding part of a paper which implements a Cartesian-to-curvilinear transformation where the curvilinear basis is not necessarily orthogonal. Thanks Peeter and mnb96.
 
  • #13
asabbs said:
I found this thread useful in giving me several new leads. I was stuck on understanding part of a paper which implements a Cartesian-to-curvilinear transformation where the curvilinear basis is not necessarily orthogonal...

If you want, feel free to mention the link of the paper and the part that is causing you troubles.
 
  • #14
mnb96 said:
If you want, feel free to mention the link of the paper and the part that is causing you troubles.

You can download the paper for free here. It deals with CFD modeling of a fruit fly wing.

http://jeb.biologists.org/cgi/content/abstract/205/1/55

You can locate the relevant equations on the third page (numbered as page 57 of the Journal of Experimental Biology). Equations (17), (18) and (19) have been boggling me for a while. Roughly, each row is a first derivative in curvilinear coordinates ([tex]\xi , \eta , \zeta[/tex]).

I was wondering: can the first row of e_v be rewritten as an inner product of

[tex] \nabla \xi [/tex]

and the vector

[tex]\left[ \nabla \xi\, u_{\xi} ;\; \nabla \eta\, u_{\eta} ;\; \nabla \zeta\, u_{\zeta} \right][/tex] ?

I'm also having trouble understanding how this formulation correctly transforms the first derivative of u in the basis {x, y, z} to its equivalent in the curvilinear coordinate basis {[tex]\xi , \eta , \zeta[/tex]}, which is not necessarily orthogonal.
 

1. What are vector coordinates in non-orthogonal systems?

Vector coordinates in non-orthogonal systems are the numerical values (components) that represent a vector in a coordinate system that is not orthogonal, meaning the axes are not perpendicular to each other.

2. How do you find vector coordinates in non-orthogonal systems?

To find vector coordinates in non-orthogonal systems, you need to use a formula or transformation matrix that takes into account the angles between the axes and the scale of each axis. In practice this means solving a linear system built from the inner products of the basis vectors (the Gram matrix), which is different for each non-orthogonal system.

3. What is the difference between finding vector coordinates in orthogonal vs. non-orthogonal systems?

The main difference is that in orthogonal systems, the axes are perpendicular to each other, making it easier to determine the coordinates of a vector. In non-orthogonal systems, the axes are not perpendicular, so a different formula or transformation matrix is needed to find the coordinates.

4. Can vector coordinates in non-orthogonal systems be negative?

Yes, vector coordinates in non-orthogonal systems can be negative. This depends on the direction and orientation of the axes, as well as the position of the vector in relation to the axes.

5. How can I visualize vector coordinates in non-orthogonal systems?

One way to visualize vector coordinates in non-orthogonal systems is to use a graphing calculator or software that allows you to plot vectors in different coordinate systems. You can also use geometric concepts such as rotations and projections to understand the position of a vector in a non-orthogonal system.
