# GRAVITATION, by MTW, Box 3.3

1. Aug 2, 2010

### Living_Dog

From Box 3.3, p. 85:

Since $$S^{\alpha}_{\phantom{\alpha}\beta\gamma} = S(\omega^\alpha, e_\beta, e_\gamma)$$

and

since $$S = S^{\alpha}_{\phantom{\alpha}\beta\gamma}\,e_\alpha\otimes\omega^\beta\otimes\omega^\gamma$$

is it then true that

$$S = S(\omega^\alpha, e_\beta, e_\gamma)\,e_\alpha\otimes\omega^\beta\otimes\omega^\gamma\ ?$$

Also, to get a new tensor from an old tensor, one of the techniques is to contract two of the indexes with each other. Is this another form of contraction, namely:

$$T_\gamma = S^{\alpha}_{\phantom{\alpha}\alpha\gamma} = S^{\alpha}_{\phantom{\alpha}\beta\gamma}\eta^{\beta}_{\phantom{\beta}\alpha} = S^{\alpha}_{\phantom{\alpha}\beta\gamma}\eta^{\beta\lambda}\eta_{\lambda\alpha}\ ?$$

Finally, why is the 1st term on the rhs of this equation transposed?

$$\nabla(R\otimes M) = (\nabla R\otimes M)^T + R\otimes\nabla M$$

2. Aug 2, 2010

### qbert

Yes. That's just the component definition substituted back into the basis expansion.

Yes, but the reason is easy if you remember $\eta^{\beta}_{\phantom{\beta}\alpha} = \delta^{\beta}_{\phantom{\beta}\alpha}$, where $\eta^{\beta}_{\phantom{\beta}\alpha}$ is defined by raising one index of the metric: $\eta^{\beta}_{\phantom{\beta}\alpha} = \eta^{\beta\lambda}\eta_{\lambda\alpha}$.
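A small numerical check of this, using the Minkowski metric in the $-{+}{+}{+}$ signature; the random tensor and all array names are just for illustration:

```python
import numpy as np

# Minkowski metric and its inverse (numerically the same matrix here).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)

# eta with one index raised: eta^beta_alpha = eta^{beta lambda} eta_{lambda alpha}
eta_mixed = eta_inv @ eta
print(np.allclose(eta_mixed, np.eye(4)))  # True: it's the Kronecker delta

# An arbitrary rank-3 tensor S^alpha_{beta gamma}.
rng = np.random.default_rng(0)
S = rng.normal(size=(4, 4, 4))

# Direct contraction T_gamma = S^alpha_{alpha gamma} ...
T_direct = np.einsum('aag->g', S)
# ... versus contracting beta against alpha through eta^beta_alpha.
T_via_eta = np.einsum('abg,ba->g', S, eta_mixed)
print(np.allclose(T_direct, T_via_eta))  # True
```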

Just a reminder of the way the indices line up. Look at the lhs:

$$(R_a M_b)_{,c} = R_{a,c}\,M_b + R_a\,M_{b,c}.$$

The order of indices is $abc$. So how do you make the 1st term on the right have the same order? Transpose the last two entries.

3. Aug 3, 2010

### Living_Dog

Thanks for the explanation. So OK, my transposing skills are weak... is it

$$(R_{a,c}\,M_b)^T = M_b^T\,(R_{a,c})^T = M_b\,R_{c,a}\ ?$$

But then the order is $bca \neq abc$.

4. Aug 3, 2010

### qbert

No, they're using transpose differently here.

(And not in an especially great way, in my opinion; see earlier in the chapter, where they only transpose the last two indices of a rank-3 tensor.)

Let's go slowly and see how this all works.

Start with two rank-1 tensors, R and M. They act on vectors to give numbers. We can form a rank-2 tensor $R \otimes M$ which "eats" two vectors and spits out a number. From this we can form a rank-3 tensor by using the "gradient".

OK, say we had a rank-2 tensor S. The definition of the gradient says that, given three vectors u, v, w, we have
$$\nabla S (u, v, w) = \frac{\partial S_{ab}}{\partial x^c} u^a v^b w^c$$
or, in the case $S = R \otimes M$,
we have
$$\nabla (R\otimes M) (u, v, w) = \frac{\partial (R_a M_b)}{\partial x^c} u^a v^b w^c = \frac{\partial R_a}{\partial x^c}M_b u^a v^b w^c + R_a \frac{\partial M_b}{\partial x^c} u^a v^b w^c$$
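The componentwise product rule used in that last step can be checked symbolically; this sketch uses arbitrary illustrative functions for the components of R and M (the specific choices are not from MTW):

```python
import sympy as sp

# Coordinates and two illustrative rank-1 tensor fields R_a(x), M_b(x).
x = sp.symbols('x0:4')
R = sp.Matrix([x[0]*x[1], sp.sin(x[2]), x[3]**2, x[0] + x[3]])
M = sp.Matrix([sp.cos(x[0]), x[1]*x[3], x[2], x[0]*x[2]])

# Check componentwise: d/dx^c (R_a M_b) = R_{a,c} M_b + R_a M_{b,c}
ok = all(
    sp.simplify(sp.diff(R[a]*M[b], x[c])
                - (sp.diff(R[a], x[c])*M[b] + R[a]*sp.diff(M[b], x[c]))) == 0
    for a in range(4) for b in range(4) for c in range(4)
)
print(ok)  # True
```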

Now we want to make sense of these in a coordinate-independent way.
The second term is: $(R \otimes \nabla M )(u, v, w)$.
But the first term is: $(\nabla R \otimes M )(u, w, v)$.

Notice we've switched the order of only the last two slots. So we define a new tensor, Transpose $(\nabla R \otimes M)$, such that for any three vectors u, v, w:

Transpose $(\nabla R \otimes M)$ (u, v, w) = $(\nabla R \otimes M)$ (u, w, v).
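Putting it all together, here is a numerical sketch of the whole identity at a single point, treating the values and partial derivatives of R and M as arbitrary arrays (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
R, M = rng.normal(size=4), rng.normal(size=4)
dR = rng.normal(size=(4, 4))  # dR[a, c] stands in for R_{a,c}
dM = rng.normal(size=(4, 4))  # dM[b, c] stands in for M_{b,c}

# Components of grad(R ⊗ M): (R_a M_b)_{,c} = R_{a,c} M_b + R_a M_{b,c}
grad_RM = np.einsum('ac,b->abc', dR, M) + np.einsum('a,bc->abc', R, dM)

# grad R ⊗ M has slots ordered (a, c, b); the "transpose" here swaps
# only the last two slots.
gradR_M = np.einsum('ac,b->acb', dR, M)
transposed = gradR_M.swapaxes(1, 2)

# R ⊗ grad M needs no reordering.
R_gradM = np.einsum('a,bc->abc', R, dM)

# grad(R ⊗ M) = (grad R ⊗ M)^T + R ⊗ grad M
print(np.allclose(grad_RM, transposed + R_gradM))  # True
```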

That's it.