Vector expressed in a noncoplanar basis, neither orthogonal nor of unit length

jonjacson
We have three orthonormal vectors \vec i_1 , \vec i_2, \vec i_3 , and we know the components of an arbitrary vector \vec A in this basis, explicitly:

\vec A = (\vec A \bullet \vec i_1) \vec i_1 + (\vec A \bullet \vec i_2) \vec i_2 + (\vec A \bullet \vec i_3) \vec i_3

If now we want to generalize to a basis that is orthogonal but NOT normalized, we can divide each vector by its modulus and come back to the first case, so we have:

\vec i_1 = \frac{\vec e_1}{e_1} , \vec i_2 = \frac{\vec e_2}{e_2}, \vec i_3 = \frac{\vec e_3}{e_3}

So now the expression of \vec A is:

\vec A = \frac{\vec A \bullet \vec e_1}{e_1^2} \vec e_1 + \frac{\vec A \bullet \vec e_2}{e_2^2} \vec e_2 + \frac{\vec A \bullet \vec e_3}{e_3^2} \vec e_3 (Equation 1)
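Equation 1 can be checked numerically. This is a minimal sketch with a concrete orthogonal, non-unit basis of my own choosing (scaled coordinate axes, not an example from the book):

```python
import numpy as np

# Hypothetical orthogonal (but not unit-length) basis: scaled standard axes.
e1 = np.array([2.0, 0.0, 0.0])
e2 = np.array([0.0, 3.0, 0.0])
e3 = np.array([0.0, 0.0, 0.5])

A = np.array([1.0, -2.0, 4.0])  # arbitrary vector

# Equation 1: A = sum_i (A . e_i / |e_i|^2) e_i
A_rebuilt = sum((A @ e) / (e @ e) * e for e in (e1, e2, e3))

print(np.allclose(A, A_rebuilt))  # True
```

The division by e_i^2 appears because one factor of the modulus normalizes the projection direction and the other normalizes the projection length.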

In the next case the basis will be noncoplanar, not orthogonal, and the vectors won't be normalized. I am following the book on tensor calculus by Borisenko and Tarapov; they introduce reciprocal bases to solve this problem and arrive at this expression:

\vec A = (\vec A \bullet \vec e^1) \vec e_1 + (\vec A \bullet \vec e^2) \vec e_2 + (\vec A \bullet \vec e^3) \vec e_3

Where the reciprocal base is defined as:

\vec e^1 = \frac{\vec e_2 \times \vec e_3}{\vec e_1 \bullet (\vec e_2 \times \vec e_3 )}

\vec e^2 and \vec e^3 have similar definitions in terms of \vec e_1, \vec e_2 and \vec e_3.
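The defining property of the reciprocal basis is the duality relation \vec e^i \bullet \vec e_j = \delta^i_j, which is what makes the expansion above work. A sketch with a skew basis of my own choosing (not from the book):

```python
import numpy as np

# Hypothetical noncoplanar, non-orthogonal, non-unit basis.
e1 = np.array([1.0, 1.0, 0.0])
e2 = np.array([0.0, 1.0, 1.0])
e3 = np.array([1.0, 0.0, 2.0])

vol = e1 @ np.cross(e2, e3)      # e_1 . (e_2 x e_3); nonzero <=> noncoplanar
r1 = np.cross(e2, e3) / vol      # reciprocal basis vectors e^1, e^2, e^3
r2 = np.cross(e3, e1) / vol
r3 = np.cross(e1, e2) / vol

# Duality relation: e^i . e_j = Kronecker delta
G = np.array([[r @ e for e in (e1, e2, e3)] for r in (r1, r2, r3)])
print(np.allclose(G, np.eye(3)))  # True

# Expansion: A = (A . e^1) e_1 + (A . e^2) e_2 + (A . e^3) e_3
A = np.array([3.0, -1.0, 2.0])
A_rebuilt = (A @ r1) * e1 + (A @ r2) * e2 + (A @ r3) * e3
print(np.allclose(A, A_rebuilt))  # True
```

The cross products guarantee that \vec e^1 is perpendicular to \vec e_2 and \vec e_3, and the denominator scales it so that \vec e^1 \bullet \vec e_1 = 1.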

Well, the question is: why do we need to introduce the reciprocal basis to solve this problem? What is the benefit of using it?

I don't see why you can't calculate the components of \vec A using Equation 1. You can project \vec A onto the vectors \vec e_1, \vec e_2 and \vec e_3 using the dot product and find the components of \vec A, in a similar way as before with the other basis. Is there something that prevents doing the calculation in this way?

And you have another option, simply using this equality:

\vec A = \vec A

First \vec A expressed in the orthonormal basis, and then expressed in the more general noncoplanar, nonorthogonal, non-normalized one:

(\vec A \bullet \vec i_1) \vec i_1 + (\vec A \bullet \vec i_2) \vec i_2 + (\vec A \bullet \vec i_3) \vec i_3 = A^1 \vec e_1 + A^2 \vec e_2 + A^3 \vec e_3

And you can find three equations to solve, so you would not need to introduce another basis. Is anything wrong with this last equation?

The book then introduces the concept of covariant and contravariant components of a vector, and I would like to understand why the reciprocal basis is needed, and what it is.
 
No suggestions? Do you think I asked a stupid question?

I spent one hour writing the question in LaTeX :frown:

Please tell me something; I have been waiting for a reply for three days :cry:
 
If you project A on a non-orthogonal basis, equation 1 is no longer valid. It's easy to see if you try it for A = e_1. If e_1 is not orthogonal to e_2, application of equation 1 picks up a term proportional to e_2.
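This failure is easy to reproduce numerically. A sketch using a basis of my own choosing where \vec e_1 \bullet \vec e_2 \neq 0, applying Equation 1 to \vec A = \vec e_1:

```python
import numpy as np

# Hypothetical basis with e1 not orthogonal to e2.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])   # e1 . e2 = 1 != 0
e3 = np.array([0.0, 0.0, 1.0])

A = e1.copy()                     # try Equation 1 on A = e_1

A_eq1 = sum((A @ e) / (e @ e) * e for e in (e1, e2, e3))
print(A_eq1)                      # [1.5 0.5 0.] -- spurious e_2 contribution
print(np.allclose(A, A_eq1))      # False
```

The extra 0.5 \vec e_2 term appears exactly as described: the projection of \vec e_1 onto \vec e_2 is nonzero, so Equation 1 double-counts it.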

Yes, you can do it the way you described too... but that requires you to solve a system of linear equations, in this case, a 3x3 system; isn't it easier just to take dot products with the reciprocal base?
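The two approaches can be compared directly. A sketch with an example skew basis of my own choosing: option 1 solves the 3x3 linear system, option 2 just takes dot products with the reciprocal basis; they give the same components:

```python
import numpy as np

# Hypothetical skew basis and vector.
e1 = np.array([1.0, 1.0, 0.0])
e2 = np.array([0.0, 1.0, 1.0])
e3 = np.array([1.0, 0.0, 2.0])
A = np.array([3.0, -1.0, 2.0])

# Option 1: solve the 3x3 system  A = A^1 e_1 + A^2 e_2 + A^3 e_3
E = np.column_stack((e1, e2, e3))
comps_solve = np.linalg.solve(E, A)

# Option 2: dot with the reciprocal basis -- no system to solve
vol = e1 @ np.cross(e2, e3)
comps_recip = np.array([A @ np.cross(e2, e3),
                        A @ np.cross(e3, e1),
                        A @ np.cross(e1, e2)]) / vol

print(np.allclose(comps_solve, comps_recip))  # True
```

Once the reciprocal basis is computed, every further vector costs only three dot products, whereas the linear-system route repeats the elimination work for each vector.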
 
First of all, hamster143, thank you very much for replying; it is very much appreciated.

Now I understand the problem. I have made an image with my digital pen to explain it, but I don't know how to attach the image, so I have uploaded it to an external server. If this is not permitted by the forum rules, or if you can't see the image, please tell me. :smile:

http://i33.tinypic.com/20uwytf.jpg

When you use the dot product, you project onto the axis at 90 degrees, but if the vectors are not orthogonal this is not the projection you need. Still, you can develop a formula to project correctly onto the axes without introducing the reciprocal basis (if I am not wrong):

\vec A = (\frac{\sin {\gamma} }{\sin{\beta} } A )\vec u +( \frac{\sin {\alpha}}{\sin {\beta}} A) \vec v

Do you see any mistakes? Could we do the calculations like this? Why is the reciprocal basis better?
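The sine formula above can be checked numerically in 2D. A sketch under my own reading of the figure: \vec u and \vec v are unit vectors, \beta is the angle between them, \alpha the angle between \vec A and \vec u, and \gamma = \beta - \alpha the angle between \vec A and \vec v (all angles and vectors here are my own example values):

```python
import numpy as np

# Angles: beta between u and v; A lies between them at angle alpha from u.
beta = np.deg2rad(70.0)
alpha = np.deg2rad(25.0)
gamma = beta - alpha

u = np.array([1.0, 0.0])                      # unit vectors
v = np.array([np.cos(beta), np.sin(beta)])
A_len = 3.0
A = A_len * np.array([np.cos(alpha), np.sin(alpha)])

# Law-of-sines decomposition: A = (sin g / sin b) A u + (sin a / sin b) A v
A_rebuilt = (np.sin(gamma) / np.sin(beta)) * A_len * u \
          + (np.sin(alpha) / np.sin(beta)) * A_len * v
print(np.allclose(A, A_rebuilt))              # True
```

So the formula does work in 2D; the reciprocal basis is the systematic generalization of the same idea to three (or n) dimensions, where tracking the angles by hand becomes unwieldy.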
 