Raising/Lowering Indices w/ Metric Tensor


Discussion Overview

The discussion revolves around the operations involving raising and lowering indices with the metric tensor in the context of tensor algebra. Participants explore the implications of index manipulation on tensor expressions, examining specific examples and the resulting forms of tensors.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant expresses confusion regarding the notation and operations involving tensors, particularly in the context of raising indices using the metric tensor.
  • Another participant points out a mistake in the initial calculations, asserting that the results should align with the standard operations of tensor algebra.
  • A later reply acknowledges the previous correction but raises a concern about the interpretation of indices in tensor products, questioning whether the expressions are equivalent.
  • Further discussion reveals that the ordering of vector and co-vector arguments affects the resulting tensors, leading to different forms based on the order of operations.
  • Participants discuss the implications of these operations, suggesting that the differences are akin to variations in function definitions based on argument order.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the implications of index ordering in tensor operations, with multiple competing views remaining on the significance of these differences.

Contextual Notes

Participants note that the operations depend on the specific definitions and conventions used for tensor indices and their ordering, which may lead to different interpretations of the results.

cianfa72
TL;DR
Rules to raise or lower indices through the metric tensor
I'm still confused about the notation used for operations involving tensors.
Consider the following simple example:
$$\eta^{\mu \sigma} A_{\mu \nu} = A_{\mu \nu} \eta^{\mu \sigma}$$
Using the rules for raising an index through the (inverse) metric tensor ##\eta^{\mu \sigma}## we get ##A^{\sigma}{}_{\nu}##. However if we work out explicitly the contraction employing the operator ##C_{\alpha}^{\mu} ()## we get:

$$C_{\alpha}^{\mu} (A_{\alpha \nu} \eta^{\mu \sigma} e^{\alpha} \otimes e^{\nu} \otimes e_{\mu} \otimes e_{\sigma}) = A_{\mu \nu} \eta^{\mu \sigma} e^{\mu} (e_{\mu}) e^{\nu} \otimes e_{\sigma} = A_{\mu \nu} \eta^{\mu \sigma} e^{\nu} \otimes e_{\sigma}$$
The latter is a tensor, say ##T = T_{\nu} {}^{\sigma} e^{\nu} \otimes e_{\sigma}##.

Is it the same as ##A^{\sigma}{}_{\nu} e_{\sigma} \otimes e^{\nu}##?
 
You made a mistake in your work;
\begin{align*}C_{\alpha}^{\mu} (A_{\alpha \nu} \eta^{\mu \sigma} e^{\alpha} \otimes e^{\nu} \otimes e_{\mu} \otimes e_{\sigma}) &= A_{\alpha \nu} \eta^{\mu \sigma} e^{\alpha} (e_{\mu}) e^{\nu} \otimes e_{\sigma} \\ &= A_{\mu \nu} \eta^{\mu \sigma} e^{\nu} \otimes e_{\sigma} \\ &={A^{\sigma}}_{\nu} e^{\nu} \otimes e_{\sigma}\end{align*}
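The corrected contraction ##A^{\sigma}{}_{\nu} = \eta^{\mu \sigma} A_{\mu \nu}## can be checked numerically. A minimal NumPy sketch, assuming for illustration the 2D Minkowski metric ##\eta = \mathrm{diag}(-1, 1)## and an arbitrary ##A_{\mu \nu}## (neither appears in the thread):

```python
import numpy as np

# Assumed for illustration: 2D Minkowski metric, signature (-, +)
eta = np.diag([-1.0, 1.0])
eta_inv = np.linalg.inv(eta)          # components eta^{mu sigma}

# An arbitrary (0,2)-tensor A_{mu nu}
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Raise the first index: A^{sigma}_{nu} = eta^{mu sigma} A_{mu nu}
A_up = np.einsum('ms,mn->sn', eta_inv, A)

# Writing the factors in the opposite order changes nothing:
# components are plain numbers, so their product commutes
A_up2 = np.einsum('mn,ms->sn', A, eta_inv)
assert np.allclose(A_up, A_up2)
```

The `einsum` subscript string makes the contraction over the repeated index ##\mu## (here `m`) explicit, mirroring the ##C_{\alpha}^{\mu}## operator.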
 
ergospherical said:
You made a mistake in your work;
\begin{align*}C_{\alpha}^{\mu} (A_{\alpha \nu} \eta^{\mu \sigma} e^{\alpha} \otimes e^{\nu} \otimes e_{\mu} \otimes e_{\sigma}) &= A_{\alpha \nu} \eta^{\mu \sigma} e^{\alpha} (e_{\mu}) e^{\nu} \otimes e_{\sigma} \\ &= A_{\mu \nu} \eta^{\mu \sigma} e^{\nu} \otimes e_{\sigma} \\ &={A^{\sigma}}_{\nu} e^{\nu} \otimes e_{\sigma}\end{align*}
Oops, yes: starting from the RHS of the first line and summing over ##\alpha## we get the second line, and then (summing over ##\mu##) the result.

Maybe I'm missing the point: in your result the ##\nu## index in ##A^{\sigma}{}_{\nu}## actually refers to the first element in the tensor product ##e^{\nu} \otimes e_{\sigma}##, not to the second one. In the expression ##T_{\nu}{}^{\sigma} e^{\nu} \otimes e_{\sigma}##, by contrast, it is the first index ##\nu## that refers to the first element.

Is the following correct?

$${A^{\sigma}}_{\nu} e^{\nu} \otimes e_{\sigma} = T_{\nu}{}^{\sigma} e^{\nu} \otimes e_{\sigma}$$
 
cianfa72 said:
Is the following correct?
$${A^{\sigma}}_{\nu} e^{\nu} \otimes e_{\sigma} = T_{\nu}{}^{\sigma} e^{\nu} \otimes e_{\sigma}$$
Well, yes, but only because that’s how you defined ##{T_{\nu}}^{\sigma}##…

You are worrying too much. It’s conventional to maintain the same horizontal ordering of the component indices and the tensor arguments (so that it’s easy to tell which slot is which), but you can do whatever you want.
 
Sorry, it seems to me that we actually get two different answers by reversing the order of the 'index raising' operation, namely:
$$C_{\alpha}^{\mu} (A_{\alpha \nu} \eta^{\mu \sigma} e^{\alpha} \otimes e^{\nu} \otimes e_{\mu} \otimes e_{\sigma}) = {A^{\sigma}}_{\nu} e^{\nu} \otimes e_{\sigma}$$
whereas if we reverse the order we get:
$$C_{\alpha}^{\mu} (\eta^{\mu \sigma} A_{\alpha \nu} e_{\mu} \otimes e_{\sigma} \otimes e^{\alpha} \otimes e^{\nu}) = {A^{\sigma}}_{\nu} e_{\sigma} \otimes e^{\nu}$$

These are really two different tensors; where is the mistake?
 
It’s nothing more significant than the ordering of the vector and co-vector arguments.
 
Suppose ##n=2##; in the two cases we get:
$$A^{1}{}_{1} e^{1} \otimes e_{1} + A^{1}{}_{2} e^{2} \otimes e_{1} + A^{2}{}_{1} e^{1} \otimes e_{2} + A^{2}{}_{2} e^{2} \otimes e_{2}$$ $$A^{1}{}_{1} e_{1} \otimes e^{1} + A^{1}{}_{2} e_{1} \otimes e^{2} + A^{2}{}_{1} e_{2} \otimes e^{1} + A^{2}{}_{2} e_{2} \otimes e^{2}$$
So the difference is really just the order of the slots into which we plug the vector and co-vector.
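The "same components, swapped slots" observation can be made concrete. A sketch, again assuming the 2D Minkowski metric and the same illustrative ##A_{\mu \nu}## as above (none of these values appear in the thread): the two tensors are each other's argument-swap, so feeding the arguments in opposite orders gives the same number.

```python
import numpy as np

eta_inv = np.diag([-1.0, 1.0])        # eta^{mu sigma}, 2D Minkowski (assumed)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # illustrative A_{mu nu}

A_up = np.einsum('ms,mn->sn', eta_inv, A)   # A^{sigma}_{nu}

# Tensor 1: components T1[nu, sigma] paired with e^{nu} (x) e_{sigma}
T1 = A_up.T
# Tensor 2: components T2[sigma, nu] paired with e_{sigma} (x) e^{nu}
T2 = A_up

v = np.array([1.0, 2.0])              # components fed to the co-vector slot
w = np.array([3.0, -1.0])             # components fed to the vector slot

# T1 acting on (v, w) equals T2 acting on (w, v): only the slot order differs
assert np.isclose(np.einsum('ns,n,s->', T1, v, w),
                  np.einsum('sn,s,n->', T2, w, v))
```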

Does the same thing hold for cases like the following?

##\eta_{\mu \alpha} A^{\alpha \nu} \eta_{\sigma \nu} \Rightarrow A_{\mu \sigma} e^{\mu} \otimes e^{\sigma}##

##\eta_{\sigma \nu} A^{\alpha \nu} \eta_{\mu \alpha} \Rightarrow A_{\mu \sigma} e^{\sigma} \otimes e^{\mu}##
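Component-wise, yes: lowering both indices gives the same numbers in either written order, since each factor is just a number. A minimal sketch with the same assumed 2D Minkowski metric and an illustrative ##A^{\alpha \nu}##:

```python
import numpy as np

eta = np.diag([-1.0, 1.0])            # eta_{mu alpha}, 2D Minkowski (assumed)
A_uu = np.array([[1.0, 2.0],
                 [3.0, 4.0]])         # illustrative A^{alpha nu}

# Lower both indices: A_{mu sigma} = eta_{mu alpha} A^{alpha nu} eta_{sigma nu}
A_dd  = np.einsum('ma,an,sn->ms', eta, A_uu, eta)

# The reversed written order eta_{sigma nu} A^{alpha nu} eta_{mu alpha}
# contracts the same index pairs, so the components are identical
A_dd2 = np.einsum('sn,an,ma->ms', eta, A_uu, eta)
assert np.allclose(A_dd, A_dd2)
```

Any remaining difference between ##A_{\mu \sigma} e^{\mu} \otimes e^{\sigma}## and ##A_{\mu \sigma} e^{\sigma} \otimes e^{\mu}## is again only the ordering of the slots.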
 
Yeah, chill, it’s just like the difference between ##f(x,y) = x^2 y## and ##g(x,y) = y^2 x##, whereby ##f(x,y) = g(y,x)##.
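The analogy can be spelled out in two lines (hypothetical ##f## and ##g##, matching the post above):

```python
# f and g are different functions of (x, y), yet f(x, y) == g(y, x),
# just as the two tensors differ only by the order of their slots
def f(x, y):
    return x**2 * y

def g(x, y):
    return y**2 * x

assert f(3, 5) == g(5, 3)     # same value once the arguments are swapped
assert f(3, 5) != g(3, 5)     # different value with the same argument order
```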
 
ergospherical said:
Yeah, chill, it’s just like the difference between ##f(x,y) = x^2 y## and ##g(x,y) = y^2 x##, whereby ##f(x,y) = g(y,x)##.
Yes, the point confusing me is that when we formally raise and/or lower tensor indices through the metric tensor we need to take the order into account (in other words, we need to keep track of the order of the slots -- in the above example, the order of the slots 'waiting' for vectors to be plugged in).

Hence, from the point of view of the tensor obtained by raising/lowering through the metric tensor, the order does make a difference.
 
