Translation from vector calculus notation to index notation

Summary:
The discussion focuses on translating a vector calculus equation into index notation, specifically omitting covariant and contravariant indices. The original equation involves terms that need careful handling, particularly regarding the cross product and the use of the Einstein summation convention. Participants highlight issues with matching indices between the left and right sides of the equation and emphasize the need to restore implied summations. There is also a debate about the appropriateness of using Cartesian coordinates and the implications of introducing basis vectors in the notation. The conversation concludes with an exploration of how to correctly express the equation in terms of lower and upper indices while considering the metric tensor's role.
JonnyMaddox
Hi, I want to translate this equation
$$R_{\hat{n}}(\alpha)\vec{x}=\hat{n}(\hat{n}\cdot\vec{x})+\cos\left(\alpha\right)(\hat{n}\times\vec{x})\times\hat{n}+\sin\left(\alpha\right)(\hat{n}\times\vec{x})$$
to index notation (forget about covariant and contravariant indices).

My attempt:
$$R_{ji}x_{i}=n_{j}n_{k}n_{k}-\cos(\phi)\epsilon_{lmj} \epsilon_{noj} n_{l}x_{m}n_{j}+\sin(\phi)\epsilon_{pqj} n_{p}x_{q}$$

The minus sign comes from ##\hat{n} \times (\vec{x} \times \hat{n})= -(\hat{n} \times \vec{x}) \times \hat{n}##.

So, is this right?
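Identities like this are easy to get wrong by a sign; as a quick sanity check, the BAC-CAB rule ##\vec{a}\times(\vec{b}\times\vec{c})=\vec{b}(\vec{a}\cdot\vec{c})-\vec{c}(\vec{a}\cdot\vec{b})## that governs these triple products can be verified numerically (a minimal sketch, assuming numpy is available; the vectors are arbitrary):

```python
import numpy as np

# BAC-CAB rule: a x (b x c) = b (a.c) - c (a.b)
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 3))

lhs = np.cross(a, np.cross(b, c))
rhs = b * np.dot(a, c) - c * np.dot(a, b)
assert np.allclose(lhs, rhs)
```

Checking sign claims this way before pushing them through the epsilon symbols can save a lot of index bookkeeping.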
 
If you are not going to have any upper indices, it would be appropriate to restore the implied summations since the Einstein summation convention does not apply for indices which are all lower.

Looking at your expression though, the indices don't seem to match up very well between the left and right sides...

Also, I think your attempt at a solution, even after you have fixed the errors, would only be valid in Cartesian coordinates. In other coordinate systems, the dot product and cross product do not have such a nice form...
 
Matterwave said:
If you are not going to have any upper indices, it would be appropriate to restore the implied summations since the Einstein summation convention does not apply for indices which are all lower.

Looking at your expression though, the indices don't seem to match up very well between the left and right sides...

Hello!
Ok, you are right, but in the book I use, tensor calculus is introduced first in Cartesian coordinates; they make no distinction between upper and lower indices but do introduce the Einstein summation convention. Could you be a bit more explicit about what went wrong? I think that ##\cos(\phi)\epsilon_{lmj} \epsilon_{noj} n_{l}x_{m}n_{j}## is wrong, but I'm not sure what to do about the n and o indices. Maybe I should put them on the left side somehow. And I read in some pdf that they put an additional basis vector into the epsilon tensor expression, like ##\vec{b} \times \vec{c}=\epsilon_{jkl}b_{j}c_{k}\hat{e}_{l}##. But shouldn't the expression be free of any reference to a basis??
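On the basis-vector question: in Cartesian coordinates the component statement ##(\vec{b}\times\vec{c})_{l}=\epsilon_{jkl}b_{j}c_{k}## involves only indices, and the ##\hat{e}_{l}## in that pdf merely reassembles the components back into a vector. A minimal numerical sketch of the component form (assuming numpy; the specific vectors are illustrative):

```python
import numpy as np

# Build the rank-3 Levi-Civita symbol epsilon_{jkl}
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

b = np.array([1.0, 2.0, 3.0])
c = np.array([-1.0, 0.5, 2.0])

# (b x c)_l = epsilon_{jkl} b_j c_k  -- pure component form, no basis vectors
cross_index = np.einsum('jkl,j,k->l', eps, b, c)
assert np.allclose(cross_index, np.cross(b, c))
```

The free index l plays the role that the basis vector ##\hat{e}_{l}## plays in the pdf's notation.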
 
JonnyMaddox said:
Hello!
Ok, you are right, but in the book I use, tensor calculus is introduced first in Cartesian coordinates; they make no distinction between upper and lower indices but do introduce the Einstein summation convention. Could you be a bit more explicit about what went wrong? I think that ##\cos(\phi)\epsilon_{lmj} \epsilon_{noj} n_{l}x_{m}n_{j}## is wrong, but I'm not sure what to do about the n and o indices. Maybe I should put them on the left side somehow. And I read in some pdf that they put an additional basis vector into the epsilon tensor expression, like ##\vec{b} \times \vec{c}=\epsilon_{jkl}b_{j}c_{k}\hat{e}_{l}##. But shouldn't the expression be free of any reference to a basis??

Stick with Cartesian coordinates for now, restore the summations, and let's see what expression you end up with.

To get you started, the first term in the OP should read something like $$(\hat{n}(\hat{n}\cdot\vec{x}))_j=n_j(\sum_k n_k x_k)$$
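This component form is easy to check numerically against the built-in dot product (a minimal sketch, assuming numpy; the axis and vector are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = rng.normal(size=3)
n /= np.linalg.norm(n)          # unit axis n-hat
x = rng.normal(size=3)

# Component form of the first term: (n-hat (n-hat . x))_j = n_j * sum_k n_k x_k
first_term_index = np.array(
    [n[j] * sum(n[k] * x[k] for k in range(3)) for j in range(3)]
)
assert np.allclose(first_term_index, n * np.dot(n, x))
```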
 
Matterwave said:
Stick with Cartesian coordinates for now, restore the summations, and let's see what expression you end up with.

To get you started, the first term in the OP should read something like $$(\hat{n}(\hat{n}\cdot\vec{x}))_j=n_j(\sum_k n_k x_k)$$

Ok, that's clear I think. Now the second term should be, since it is just a vector ##x_{k}##:
$$x_{k}= \cos(\phi)\epsilon_{kln} \epsilon_{mjn} n_{l}x_{m}n_{j}$$
Now this is a contraction over the index n and a summation over the l, m, and j indices. Now how should I incorporate that into my above equation? Something like this: ##[R_{ji}x_{i}]_{k}##? I have no idea why there is a contraction over the n indices, but ok...
 
Ah I see, it's just ##R_{ji}x_{i}=n_{j}n_{k}n_{k}-\cos(\phi)\epsilon_{jln} \epsilon_{mon} n_{l}x_{m}n_{o}+\sin(\phi)\epsilon_{pqj} n_{p}x_{q}##. If this is right, I'll try to convert it into upper and lower indices with metric tensors.
 
JonnyMaddox said:
Ah I see, it's just ##R_{ji}x_{i}=n_{j}n_{k}n_{k}-\cos(\phi)\epsilon_{jln} \epsilon_{mon} n_{l}x_{m}n_{o}+\sin(\phi)\epsilon_{pqj} n_{p}x_{q}##. If this is right, I'll try to convert it into upper and lower indices with metric tensors.

In notation with lower and upper indices, and in any dimension and geometry, I think it is:
$$R^{j}{}_{i}x^{i}=n^{j}g_{hf}n^{h}n^{f}-\cos(\phi)\sqrt{|\det g|}\,\epsilon^{jln} \epsilon_{mon} n_{l}x^{m}n^{o}+\sin(\phi)\sqrt{|\det g|}\,\epsilon^{j}{}_{pq} n^{p}x^{q}$$
 
Hi, may someone comment on whether this is right? :)
 
Your first term should have an ##x## in it rather than three ##n##'s. I'm not sure about raising and lowering indices on the Levi-Civita symbol, since that brings in factors of the metric. I can't recall exactly the correct way to take a cross product in such notation.
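Independent of the index bookkeeping, the vector formula from the first post can be sanity-checked against an explicit rotation matrix, e.g. a rotation about the z-axis (a minimal sketch, assuming numpy; the angle and vector are arbitrary):

```python
import numpy as np

alpha = 0.7
n = np.array([0.0, 0.0, 1.0])          # rotation axis: z
x = np.array([1.5, -2.0, 0.5])

# Vector formula from the first post:
# R x = n (n.x) + cos(a) (n x x) x n + sin(a) (n x x)
rod = (n * np.dot(n, x)
       + np.cos(alpha) * np.cross(np.cross(n, x), n)
       + np.sin(alpha) * np.cross(n, x))

# Explicit rotation matrix about z by alpha
c, s = np.cos(alpha), np.sin(alpha)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(rod, R @ x)
```

Running the same comparison on a candidate index expression is a quick way to catch a stray sign or a term with the wrong vector in it.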
 