# Einstein notation

1. Sep 14, 2007

### ehrenfest

1. The problem statement, all variables and given/known data
I am a bit new to Einstein notation. Why is this true:

$$\eta_{\mu \nu} \frac{d \delta x ^ {\mu}}{d \tau} \frac{d x ^ {\nu}}{d \tau} = \frac{d \delta x ^ {\mu}}{d \tau} \frac{d x _ {\nu}}{d \tau}$$

How can you go from 16 terms on the LHS to just 4 terms on the RHS? If it is relevant, this is a variational calculus problem: $$x^\mu$$ is the spacetime four-vector, $$\delta$$ is the variation, and $$\tau$$ is some parametrization.

EDIT: this equation is wrong. see below

2. Relevant equations

3. The attempt at a solution

Last edited: Sep 14, 2007
2. Sep 14, 2007

### dextercioby

I think there's an error in the RHS. It should be

$$\frac{d\delta x^{\mu}}{d\tau}\frac{d\delta x_{\mu}}{d\tau}$$

which means that the second derivative term from the LHS was contracted with the metric tensor $\eta_{\mu\nu}$.

3. Sep 14, 2007

### ehrenfest

Actually it should be $$\eta_{\mu \nu} \frac{d \delta x ^ {\mu}}{d \tau} \frac{d x ^ {\nu}}{d \tau} = \frac{d \delta x ^ {\mu}}{d \tau} \frac{d x _ {\mu}}{d \tau}$$ .

There is only one delta on the RHS. I still do not see the equivalence.

Last edited: Sep 14, 2007
4. Sep 14, 2007

### dextercioby

Okay, I didn't notice that the second delta was missing. But the explanation is the same: the second factor got contracted with the metric tensor.

5. Sep 14, 2007

### ehrenfest

Sorry. Could you elaborate? What does it mean mathematically when the second term gets contracted with the metric tensor? Why does the metric tensor not contract the first term also?

6. Sep 14, 2007

### dextercioby

In the LHS you have a double summation. The metric tensor is contracted with both of the other two factors, and it doesn't matter which contraction you carry out first. I chose the one on the right.

7. Sep 14, 2007

### ehrenfest

I see. The only reason this works is that the Minkowski metric has no off-diagonal elements. Otherwise you could still contract the second factor, but you would lose the Minkowski metric and still be left with 16 terms, right?

8. Sep 14, 2007

### dextercioby

It's irrelevant whether the metric has only diagonal nonzero elements. It's not even important whether the metric is symmetric. It's just a doubly contracted triple tensor product. Without the contraction, it would result in an object (tensor or not) with 4 indices, 2 upstairs and 2 downstairs. Applying the double contraction gives an object with no indices at all. The number of terms in the double summation is 16.
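A quick numerical sketch of this point (the metric and vectors here are arbitrary stand-ins, not the ones from the problem; a non-diagonal matrix is deliberately used to show the form of the metric doesn't matter):

```python
import numpy as np

# An arbitrary 4x4 "metric" with off-diagonal entries (a stand-in, not Minkowski),
# to show the double contraction works regardless of the metric's form.
rng = np.random.default_rng(0)
g = rng.standard_normal((4, 4))

a = rng.standard_normal(4)  # plays the role of d(delta x)^mu / d tau
b = rng.standard_normal(4)  # plays the role of d x^nu / d tau

# The 16 products g_{mu nu} a^mu b^nu of the double sum, laid out as a 4x4 array
products = np.einsum('mn,m,n->mn', g, a, b)
assert products.shape == (4, 4)

# The double contraction sums all 16 terms down to a single number (no free indices)
total = products.sum()
assert np.isclose(total, a @ g @ b)
```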

9. Sep 14, 2007

### Avodyne

No. It works by definition. $$x_{\mu}$$ (with a down index) is defined by $$x_{\mu} = \eta_{\mu \nu} x^{\nu}$$ (with an implicit sum over $$\nu$$) whether or not $$\eta_{\mu \nu}$$ has off-diagonal elements.
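To see this definition in action numerically (the metric below is a made-up symmetric one with an off-diagonal entry, precisely to show the identity doesn't need a diagonal metric):

```python
import numpy as np

# Lowering an index is *defined* by contraction with the metric:
# v_mu = eta_{mu nu} v^nu. This made-up symmetric metric has an
# off-diagonal entry, so the identity clearly doesn't rely on diagonality.
eta = np.array([[-1.0, 0.3, 0.0, 0.0],
                [ 0.3, 1.0, 0.0, 0.0],
                [ 0.0, 0.0, 1.0, 0.0],
                [ 0.0, 0.0, 0.0, 1.0]])

u = np.array([1.0, 2.0, 3.0, 4.0])   # u^mu (upper-index components)
v = np.array([5.0, 6.0, 7.0, 8.0])   # v^nu

v_lower = eta @ v                    # v_mu = eta_{mu nu} v^nu

lhs = np.einsum('mn,m,n->', eta, u, v)  # eta_{mu nu} u^mu v^nu: 16-term double sum
rhs = np.einsum('m,m->', u, v_lower)    # u^mu v_mu: 4-term single sum, same number
assert np.isclose(lhs, rhs)
```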

10. Sep 16, 2007

### Elvex

The metric tensor has the property of converting covariant indices to contravariant ones and vice versa, depending on whether it has subscript or superscript indices. It's not even necessary to understand what covariant and contravariant really mean, because Einstein notation can be memorized as a set of rules you simply always obey.

Mathematical Physics by Kusse is a great book which explains everything you need to know about Einstein summation notation in Ch. 1, 2, and 14 (tensor analysis in non-orthonormal systems). Then again, if you're taking G.R., I assume your textbook goes through the notation as well.

11. Sep 16, 2007

### ehrenfest

If the metric had off-diagonal elements how could you possibly go from the 16 terms of $$\eta_{\mu \nu} \frac{d \delta x ^ {\mu}}{d \tau} \frac{d x ^ {\nu}}{d \tau}$$ to the four terms of $$\frac{d \delta x ^ {\mu}}{d \tau} \frac{d x _ {\mu}}{d \tau}$$ ?

12. Sep 16, 2007

### genneth

ehrenfest: you're not thinking clearly. 1+1=2, despite there being 2 terms on the left but 1 on the right. Similarly, $$\sum_i \sum_j a_{ij} = \sum_i A_i$$ (where $$A_i = \sum_j a_{ij}$$) is a valid equation, despite there being a different number of terms on the left and right. In fact, if we didn't change the number of terms, we wouldn't really have done any summation, would we?

Last edited: Sep 16, 2007
13. Sep 16, 2007

### nrqed

If you think about it, it's simply the fact that a number may be written as the sum of two terms, three terms, four terms, etc. For example, you can write 15 = 7 + 8 or 15 = 5 + 7 + 3. The two expressions are equal even though they involve different numbers of terms in the sum. This is really all there is to it!

14. Sep 16, 2007

### ehrenfest

$$\eta_{\mu \nu} \frac{d \delta x ^ {\mu}}{d \tau} \frac{d x ^ {\nu}}{d \tau} = \eta_{00} \frac{d \delta x ^ {0}}{d \tau} \frac{d x ^ {0}}{d \tau} +\eta_{01} \frac{d \delta x ^ {0}}{d \tau} \frac{d x ^ {1}}{d \tau} +\eta_{02} \frac{d \delta x ^ {0}}{d \tau} \frac{d x ^ {2}}{d \tau} +\eta_{03} \frac{d \delta x ^ {0}}{d \tau} \frac{d x ^ {3}}{d \tau} +\eta_{10} \frac{d \delta x ^ {1}}{d \tau} \frac{d x ^ {0}}{d \tau} +\eta_{11} \frac{d \delta x ^ {1}}{d \tau} \frac{d x ^ {1}}{d \tau} +\eta_{12} \frac{d \delta x ^ {1}}{d \tau} \frac{d x ^ {2}}{d \tau} +\eta_{13} \frac{d \delta x ^ {1}}{d \tau} \frac{d x ^ {3}}{d \tau} +\eta_{20} \frac{d \delta x ^ {2}}{d \tau} \frac{d x ^ {0}}{d \tau} +\eta_{21} \frac{d \delta x ^ {2}}{d \tau} \frac{d x ^ {1}}{d \tau} +\eta_{\mu \nu} \frac{d \delta x ^ {2}}{d \tau} \frac{d x ^ {\nu}}{d \tau} +\eta_{\mu \nu} \frac{d \delta x ^ {2}}{d \tau} \frac{d x ^ {\nu}}{d \tau} +\eta_{\mu \nu} \frac{d \delta x ^ {3}}{d \tau} \frac{d x ^ {\nu}}{d \tau} +\eta_{\mu \nu} \frac{d \delta x ^ {3}}{d \tau} \frac{d x ^ {\nu}}{d \tau} +\eta_{\mu \nu} \frac{d \delta x ^ {3}}{d \tau} \frac{d x ^ {\nu}}{d \tau} +\eta_{\mu \nu} \frac{d \delta x ^ {3}}{d \tau} \frac{d x ^ {\nu}}{d \tau}$$

If the off-diagonal elements of the Minkowski metric were not all zero, it seems pretty clear to me that you would have more than 4 terms on the RHS. Sorry, I do not see how adding numbers, like 2 = 1 + 1, is the same as this. There is no real rule for adding these terms like that, is there?

Last edited: Sep 16, 2007
15. Sep 16, 2007

### genneth

Of course there is... you just put brackets around things.

Have you really understood the meaning of $$x^\mu$$ and $$x_\mu$$? Because it currently looks like you don't understand how they're related to each other.

16. Sep 16, 2007

### ehrenfest

I do understand the covariant and contravariant vectors.

How do you put brackets around things? Can you please show me? I just do not see it, sorry!

17. Sep 16, 2007

### nrqed

There is something wrong in your expression. You fix the values of $$\mu$$ in the differentials, but you forgot to fix the corresponding value of $$\mu$$ in $$\eta$$.

18. Sep 16, 2007

### genneth

Most textbooks take multiple chapters to explain them -- I suggest Gravitation by Misner, Thorne and Wheeler as the ultimate physics book. But any textbook on differential geometry should cover it.

My attempt:

For any vector space V, you can get a dual vector space V*, by considering linear functions on V: $$f:V\rightarrow \mathbb{R}$$ belongs to $$V^*$$ iff $$f(aX + bY) = af(X) + bf(Y)$$ for all $$X,Y \in V$$ and scalars a, b. The two vector spaces turn out to have the same dimension, and the space of linear functions on V* is isomorphic to V (though not in a canonical way!). We can define multiplication between an element of V and an element of V* to simply be the application of the function to the vector.

However, there is no canonical way to take an element of V and make it into an element of V*. This means that there is more than one possible isomorphism between the spaces, and none of them is preferred in any way. We can define one through a metric, $$g:V \rightarrow V^*$$; combined with the multiplication we defined earlier, this allows us to define an inner product on V: $$\langle \cdot | \cdot \rangle : V \times V \rightarrow \mathbb{R}$$.

Now, we can introduce a basis on V, $$\mathbf{e}_\mu$$ (the subscript index does *not* denote a component!), such that a vector can be expressed as $$\mathbf{v} = v^\mu \mathbf{e}_\mu$$, where $$v^\mu$$ are the components of the vector. We can also introduce a basis on V*, $$\mathbf{\theta}^\mu$$, such that $$\mathbf{\theta}^\mu \mathbf{e}_\nu = \delta^{\mu}_{\nu}$$. Note that the up/down placement of indices is just convention; reversing it would have no effect on the maths being done.

Now, given a v in V and u in V*, we would multiply them as $$\mathbf{uv} = u_\mu \mathbf{\theta}^\mu v^\nu \mathbf{e}_\nu = u_\mu v^\nu \delta_\nu^\mu = u_\mu v^\mu$$. Because we only ever multiply elements of V with elements of V*, the basis elements always drop out, and we usually never write them down explicitly.

Our metric g is linear in its argument. We can therefore express it as an array of numbers, $$\mathbf{g} = g_{\mu\nu}\mathbf{\theta}^\mu\mathbf{\theta}^\nu$$, such that we can create V* vectors from V vectors by doing a summation: $$v_\mu = g_{\mu\nu}v^\nu$$. Technically, it's a bit of an abuse to call the covector and the vector both v; in many textbooks they'd be differentiated as $$\bar{v}$$ and $$\tilde{v}$$.

In your given case, $$g = \eta$$. As you can now see, it's pretty obvious why the contraction occurred.

19. Sep 17, 2007

### ehrenfest

Thank you for that explanation, but again, I know about covariant and contravariant vectors. Let's use a simple example:

So, why does
$$\eta_{\mu \nu} \frac{d \delta x ^ {\mu}}{d \tau} \frac{d x ^ {\nu}}{d \tau} = \eta_{00} \frac{d \delta x ^ {0}}{d \tau} \frac{d x ^ {0}}{d \tau} +\eta_{01} \frac{d \delta x ^ {0}}{d \tau} \frac{d x ^ {1}}{d \tau} +\eta_{10} \frac{d \delta x ^ {1}}{d \tau} \frac{d x ^ {0}}{d \tau} +\eta_{11} \frac{d \delta x ^ {1}}{d \tau} \frac{d x ^ {1}}{d \tau}$$

$$= \eta_{00} \frac{d \delta x ^ {0}}{d \tau} \frac{d x ^ {0}}{d \tau} +\eta_{11} \frac{d \delta x ^ {1}}{d \tau} \frac{d x ^ {1}}{d \tau}$$

$$=\frac{d \delta x ^ {\mu}}{d \tau} \frac{d x _ {\mu}}{d \tau}$$

The second equality simply makes no sense to me.

Last edited: Sep 17, 2007
20. Sep 17, 2007

### genneth

You claim that you understand them, but clearly you don't. All the equation is doing is "lowering an index", which is done by contracting with the metric. By *definition*

$$x_\mu = \eta_{\mu\nu}x^\nu$$

If you understand that, then what's the problem?
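Here is that definition checked numerically on the two-dimensional example from earlier in the thread (the component values are arbitrary stand-ins for the derivatives):

```python
import numpy as np

eta = np.diag([-1.0, 1.0])      # 2D Minkowski metric, as in the 2D example
u = np.array([2.0, 3.0])        # stands in for d(delta x)^mu / d tau
v = np.array([5.0, 7.0])        # stands in for d x^nu / d tau

# LHS: eta_{mu nu} u^mu v^nu, all 4 terms (the off-diagonal ones vanish here)
lhs = sum(eta[m, n] * u[m] * v[n] for m in range(2) for n in range(2))

# RHS: lower the index on v first (x_mu = eta_{mu nu} x^nu), then one sum over mu
v_lower = eta @ v
rhs = sum(u[m] * v_lower[m] for m in range(2))

assert np.isclose(lhs, rhs)     # both equal -10 + 21 = 11
```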