# Index algebra questions / order of indices

1. Oct 12, 2016

### binbagsss

Hi,

I've somehow gone the past year without paying attention to the order of the indices when one is upper and one is lower, i.e. that in general $g^{\mu}{}_{\nu} \neq g_{\nu}{}^{\mu}$.

I have a couple of questions:

1)
$g^{u}{}_{v} x^{v} = x^{u}$ [1]
$g_{v}{}^{u} x^{v} = x^{u}$ [2]

I believe that both of these are mathematically correct to write, since there is a dummy index being summed over in both cases. However, $x^{u}$ in [1] $\neq$ $x^{u}$ in [2], because $g^{\mu}{}_{\nu} \neq g_{\nu}{}^{\mu}$ in general. Is this correct? (I.e. I am just confirming that the repeated indices do not need to be next to each other to be summed over, as they are in [1]. This is probably a stupid question, but the fact that I haven't paid attention to the order of an upper and lower index for so long makes me question it.)

2) Given the matrix of components $g^{v}{}_{u}$, am I correct in thinking that we can obtain $g_{v}{}^{u}$ from it using the metric, but that we cannot obtain $g^{u}{}_{v}$ using the metric alone, because on top of raising and lowering the indices, the order needs to be interchanged?

Many thanks.

Last edited: Oct 12, 2016
2. Oct 12, 2016

### Orodruin

Staff Emeritus
So there are a number of issues that need to be addressed here. To start with, it is not clear whether your g denotes a general rank 2 tensor or the metric. If it is a general tensor, the two expressions are indeed different - they differ whenever the tensor with both indices up (or both down) is not symmetric.

Second, if by g you mean the metric tensor, you would essentially never use one index up and one down as this is then always the Kronecker delta by definition.

Third, a tensor is not a matrix. A rank 2 tensor may be represented by a matrix, but that is a matter of bookkeeping. The same goes for transformation coefficients.
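As an aside, these statements are easy to check numerically. Here is a minimal numpy sketch; the Minkowski metric and the random tensor are my own illustrative choices, not anything from the thread:

```python
import numpy as np

# Illustrative choices: the Minkowski metric and a random
# non-symmetric rank-2 tensor given by its all-lower components.
g = np.diag([1.0, -1.0, -1.0, -1.0])
g_inv = np.linalg.inv(g)
rng = np.random.default_rng(0)
T_low = rng.normal(size=(4, 4))                    # T_{mu nu}, not symmetric

# The metric with one index up and one down is the Kronecker delta:
mixed_metric = np.einsum('ma,an->mn', g_inv, g)    # g^mu_nu
print(np.allclose(mixed_metric, np.eye(4)))        # True

# Raise one index of T in the two possible orders:
T_ud = np.einsum('ma,an->mn', g_inv, T_low)        # T^mu_nu (first index up)
T_du = np.einsum('na,am->nm', T_low, g_inv)        # T_nu^mu (second index up)

# Compare T^mu_nu with T_nu^mu at the same index values: they differ
# precisely because T_low is not symmetric.
print(np.allclose(T_ud, T_du.T))                   # False

# Symmetrising T_low makes the two mixed forms agree:
S_low = T_low + T_low.T
S_ud = np.einsum('ma,an->mn', g_inv, S_low)
S_du = np.einsum('na,am->nm', S_low, g_inv)
print(np.allclose(S_ud, S_du.T))                   # True
```

So the order of a pair of mixed indices only matters when the all-down version of the tensor fails to be symmetric, exactly as stated above.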

3. Oct 13, 2016

### vanhees71

Strictly speaking, an expression like $g_{\mu \nu}$ is not a tensor but a set of tensor components with respect to a basis.

4. Oct 13, 2016

### robphy

In the abstract index notation, that is a tensor, with no reference to any basis.

5. Oct 13, 2016

### vanhees71

I don't know what the abstract index notation is, but a tensor is a tensor and has no indices (at least no natural ones); it's a multilinear mapping from $V^j \times V^{*k}$ to numbers (real or complex, depending on whether you have a real or complex vector space). Tensor components are always taken with respect to a basis and its dual:
$${V_{\mu_1,\ldots,\mu_j}}^{\nu_1,\ldots,\nu_k}=V(b_{\mu_1},\ldots, b_{\mu_j};b^{\nu_1},\ldots,b^{\nu_k}),$$
where $b_{\mu}$ is a basis of the vector space $V$ and $b^{\mu}$ the corresponding dual basis of its dual space $V^{*}$.

6. Oct 13, 2016

### Orodruin

Staff Emeritus
In abstract index notation you denote the types of linear mappings involved by indices. For example, $g_{ab}$ would denote a multilinear map from $V \times V$ to scalars. Contractions are denoted by repeated indices, just as they would be if you were using ordinary index notation. Many find this convenient since you get a concise, coordinate-free way of writing your equations, and the expressions in a coordinate basis become exactly the same, with the abstract indices replaced by actual indices.

7. Oct 13, 2016

### vanhees71

Hm, for me that sounds confusing ;-).

8. Oct 13, 2016

### Orodruin

Staff Emeritus
Are you sure you do not mean ... (*drumroll*) ... abstract?

9. Oct 13, 2016

### vanhees71

I'm just ignorant. Don't take it seriously :-).

10. Oct 13, 2016

### Ibix

Yeah - don't get tensor anything.

More seriously - what's the advantage of abstract index notation? Regular index notation basically works by suppressing the basis vectors where the sense is not affected, right? So abstract index notation takes it one step further and reasons that, if we can fudge the basis vectors away and everything still works, then there ought to be a formalism for it? Or am I way off?

11. Oct 13, 2016

### robphy

Last edited: Oct 13, 2016
12. Oct 14, 2016

### Ibix

"Hydra taming"!

Thanks - I'll give Penrose a proper read. At first glance it looks as though I'm on roughly the right track, though.

13. Oct 14, 2016

### binbagsss

Sorry $g_{ab}$ is not supposed to be a metric, bad choice by me.

So back to question 1, just to confirm. If I have:

$\lambda^{u}{}_{v} x^{v} = \psi^{u}$ [*],

so that just from the index positions it is clear we sum over $v$, with $\psi^{u}$ and $x^{u}$ different vectors,

and

$\lambda_{v}{}^{u} x^{v} = \psi^{u}$ [**],

then $\psi^{u}$ in [*] and [**] are in general not the same, since $\lambda$ is in general not symmetric?

2) Given a rank 2 tensor $M^{a}{}_{b}$, from it, using the metric, one can obtain $M_{ab}$ and $M_{a}{}^{b}$, but not $M^{b}{}_{a}$? Is this correct? Just to confirm my understanding. Thanks.

3) One last expression to check my understanding again, with $g_{ab}$ the metric here:

$g_{uv} g^{vb} = \delta_{u}{}^{b} \neq \delta^{b}{}_{u} = g^{vb} g_{uv}$

14. Oct 14, 2016

### Orodruin

Staff Emeritus
You can obtain $M^b_{\phantom b a}$ by using the inverse metric, which is obtainable from knowing the metric and finding its inverse.

The metric is a symmetric tensor. It holds that $g_{ab} g^{bc} = g^{bc} g_{ba} = \delta_a^c$. Note that the order of the indices on the Kronecker delta is irrelevant; there is no way of confusing them.
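Both points are quick to verify numerically. A sketch with numpy, using a randomly generated symmetric positive-definite metric and a generic mixed tensor (both my own illustrative choices):

```python
import numpy as np

# Illustrative data: a symmetric positive-definite metric g_ab and
# a generic mixed tensor M^a_b, both chosen at random.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
g = A @ A.T + 4 * np.eye(4)       # symmetric metric g_ab
g_inv = np.linalg.inv(g)          # inverse metric g^ab
M = rng.normal(size=(4, 4))       # components M^a_b

# Contracting the metric with its inverse gives the Kronecker delta,
# whichever way round the product is written:
print(np.allclose(np.einsum('uv,vb->ub', g, g_inv), np.eye(4)))  # True
print(np.allclose(np.einsum('vb,uv->ub', g_inv, g), np.eye(4)))  # True

# Lower the first index and raise the second: M_a^b = g_ac M^c_d g^db.
# As matrices this is the similarity transform g M g^{-1}.
M_other = np.einsum('ac,cd,db->ab', g, M, g_inv)

# The two mixed arrays are in general different:
print(np.allclose(M, M_other))    # False in general
```

The last line illustrates the point of the question: $M_a{}^b$ is genuinely a different array of numbers from $M^a{}_b$ unless $M$ happens to commute with the metric.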

15. Oct 14, 2016

### Ibix

Isn't $M^a {}_b$ the same thing as $M^b{}_a$, just with the indices labelled differently? That would cause chaos and meaninglessness if done carelessly as part of an expression, but is fine as written on its own, or if done carefully.

16. Oct 14, 2016

### Staff: Mentor

Wald's GR text presents it fairly early on and uses it throughout.

17. Oct 15, 2016

### Orodruin

Staff Emeritus
Right. I guess I did not read carefully enough.

18. Oct 15, 2016

### stevendaryl

Staff Emeritus
The abstract index notation, which I personally dislike, uses an expression such as $V^\mu$ to mean a 4-vector, rather than a component of a 4-vector. The nice thing about that is that it's clear what type of object $V$ is, which in the alternative notation isn't clear at all unless you spell it out in terms of a basis and write $V$ as $V^\alpha e_\alpha$. That's cumbersome to write, and it's also overkill, in the sense that (usually) nobody cares about the basis.

19. Oct 15, 2016

### binbagsss

Is there any sort of convention where, before manipulating (raising or lowering indices etc.), the upper index is written before the lower? (Quick example: I'm just looking at the way the Lorentz transformation is written in a couple of textbooks.) Thanks.

20. Oct 16, 2016

### vanhees71

Yes, I don't see any merit in this abstract index notation; to the contrary, it's confusing. At least Wald uses Latin indices for it and Greek ones for the usual components. That may help to distinguish them, but why should I use abstract tensors and still keep the cumbersome indices of the Ricci notation, which, however, is very useful for practical calculations? So my usual notation is that a tensor is denoted by $\boldsymbol{T}$ and $T_{\mu \nu\ldots}$ are its (covariant) components with respect to a basis $\boldsymbol{b}^{\mu}$ of the dual space. The relation is then (Einstein summation convention implied)
$$\boldsymbol{T}=T_{\mu \nu \ldots} \boldsymbol{b}^{\mu} \otimes \boldsymbol{b}^{\nu} \otimes \cdots.$$
The disadvantage of this notation is, of course, that you cannot tell the rank of the tensor just from looking at the symbol $\boldsymbol{T}$.
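The component relation above is also easy to check numerically. A numpy sketch, with a randomly chosen bilinear form and a non-orthonormal basis as illustrative data: the components in the new basis are $T_{\mu\nu} = T(b_\mu, b_\nu)$, and contracting them with the dual basis recovers the original array.

```python
import numpy as np

# Illustrative data: a bilinear form T given by its standard-basis
# array, and a non-orthonormal basis with vectors as the columns of B.
rng = np.random.default_rng(2)
T = rng.normal(size=(3, 3))                   # T(e_i, e_j)
B = rng.normal(size=(3, 3)) + 3 * np.eye(3)   # columns = basis vectors b_mu
B_dual = np.linalg.inv(B)                     # rows = dual covectors b^mu

# Components with respect to the new basis: T_{mu nu} = T(b_mu, b_nu)
T_comp = np.einsum('im,ij,jn->mn', B, T, B)

# Reconstruct the standard-basis array from components and dual basis:
# T = T_{mu nu} b^mu (x) b^nu
T_back = np.einsum('mn,mi,nj->ij', T_comp, B_dual, B_dual)
print(np.allclose(T_back, T))    # True: same tensor, different bookkeeping
```

The component array changes with the basis, but the reconstructed tensor does not, which is exactly the distinction between $\boldsymbol{T}$ and $T_{\mu\nu}$ made above.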