# Einstein notation stuff

1. Nov 7, 2006

### quasar987

I'm reading a text on tensor analysis (on R³), and I don't understand the following example...

$$P=\frac{1}{2}(a_{ij}+a_{ji})x_ix_j=\frac{1}{2}(a_{ij}x_ix_j+a_{ij}x_jx_i)=a_{ij}x_ix_j$$

To pass from the second equality to the last, he commuted the second pair $x_jx_i$ into $x_ix_j$. But I don't see how he can do that, because it radically changes the nature of $P$: if we redistribute the $a_{ij}$, we are no longer summing according to the Einstein convention.

If what I just said is not clear, consider this. I am asserting that the author did the following in order to pass from the second to the third equality:

$$a_{ij}x_jx_i=a_{ij}x_ix_j$$

On the LHS, we are summing over $j$, but not on the RHS. So this commutation changes the nature of the expression.

2. Nov 7, 2006

### James R

There's no problem commuting $x_i$ with $x_j$, since both of these are just numbers and numbers commute when you multiply them.

Note also that in the expression $a_{ij} x_i x_j$, you are summing over BOTH i and j, because both have a repeated index.
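The point that both repeated indices are summed can be checked numerically. Here is a minimal sketch (the array and vector values are arbitrary, chosen just for illustration) comparing NumPy's `einsum`, which implements the summation convention directly, against the fully written-out double sum:

```python
import numpy as np

# Hypothetical coefficients a_ij and a vector x in R^3 (arbitrary values).
a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
x = np.array([1.0, -2.0, 0.5])

# a_ij x_i x_j: both i and j are repeated, so both are summed over.
p_einsum = np.einsum("ij,i,j->", a, x, x)

# The same quantity as an explicit double sum over i AND j.
p_loops = sum(a[i, j] * x[i] * x[j] for i in range(3) for j in range(3))

print(p_einsum, p_loops)  # the two values agree
```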

3. Nov 7, 2006

### coalquay404

Let's take things slowly and suppose that we're working in Minkowski space, or indeed in $\mathbb{R}^n$. We suppose that we have some rank-2 tensor with components $a_{ij}$ and a vector with components $x^i$. Then $P$ is defined by

$$P \equiv \frac{1}{2}\sum_{i,j}(a_{ij} + a_{ji})x^ix^j$$

Using the Einstein summation convention this is

$$P = \frac{1}{2}(a_{ij} + a_{ji}) x^i x^j$$

Then,

$$P = \frac{1}{2}(a_{ij} + a_{ji})x^i x^j = \frac{1}{2}(a_{ij} x^i x^j + a_{ji} x^i x^j) = \frac{1}{2}(a_{ij} x^i x^j + a_{ij} x^j x^i) = \frac{1}{2}a_{ij}(x^i x^j + x^j x^i)$$

However, there's no problem setting $x^ix^j=x^jx^i$ since the $x^i$ are just numbers (they are just the components of some vector, not the vector itself). Therefore

$$P = \frac{1}{2}a_{ij}(x^i x^j + x^i x^j) = a_{ij}x^i x^j$$
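The whole chain of equalities above can be checked numerically. This sketch (with randomly generated, hence hypothetical, components) verifies that the symmetrized form $\frac{1}{2}(a_{ij}+a_{ji})x^ix^j$ equals the plain double sum $a_{ij}x^ix^j$ even when $a$ is not symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3))   # arbitrary rank-2 components, NOT symmetric
x = rng.standard_normal(3)        # arbitrary vector components

# P with the symmetrized coefficients: (1/2)(a_ij + a_ji) x^i x^j
p_sym = 0.5 * np.einsum("ij,i,j->", a + a.T, x, x)

# The plain double sum: a_ij x^i x^j
p_plain = np.einsum("ij,i,j->", a, x, x)

# They agree, because x^i x^j is symmetric in i and j,
# so only the symmetric part of a_ij contributes.
assert np.isclose(p_sym, p_plain)
```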

I think that you might be missing the essential point that since both $i$ and $j$ are repeated indices in the definition of $P$ then they both have to be summed over.

Last edited: Nov 7, 2006
4. Nov 7, 2006

### quasar987

Is the rule "As soon as some expression has indices that appear more than once in the expression, summation is implied"?

Why did you lift up the i and j of the x's? My text does not do that.

5. Nov 7, 2006

### coalquay404

Einstein's summation convention involves summing over repeated indices. The problem with the passage you quoted above is that all of the indices are "downstairs." This is a very old-fashioned notation - all modern texts with which I'm familiar use the upstairs-downstairs notation to make it explicitly clear which indices are to be summed over.

There's also another benefit to using the modern notation. If, for example, I have some quantity $\alpha_i$ and another quantity $x^j$ then

$$\alpha_i x^i$$

is actually an expression of an inner product between $\alpha$ (which is a one-form) and $x$ (which is a vector). This notation makes a lot of sense because of its generality - it's easily extended to non-trivial manifolds. However, in your case (where you're dealing with $\mathbb{R}^3$) the distinction between raising and lowering indices is essentially unimportant.
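In $\mathbb{R}^3$ with the trivial metric, the contraction $\alpha_i x^i$ is numerically just the ordinary dot product. A tiny sketch (component values are hypothetical):

```python
import numpy as np

alpha = np.array([2.0, -1.0, 3.0])  # components of a one-form (arbitrary values)
x = np.array([1.0, 4.0, 0.5])       # components of a vector (arbitrary values)

# alpha_i x^i: one index downstairs, one upstairs -> a single implied sum.
inner = np.einsum("i,i->", alpha, x)

# In R^3 with the identity metric this coincides with the dot product.
assert np.isclose(inner, np.dot(alpha, x))
```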

6. Nov 8, 2006

### HallsofIvy

Staff Emeritus
In non-Euclidean tensor analysis, the Einstein convention is that summation is implied only when an index appears once as a subscript and once as a superscript. Since the original post has everything as subscripts, I suspect the problem is in Euclidean tensors, where the metric tensor is trivial.

7. Nov 9, 2006

### jbusc

Further, Latin indices are used, implying that the summation runs from 1 to 3, not from 0 to 3 as you would expect in relativity, where the super- and subscripts are important.

8. Nov 9, 2006

### coalquay404

That depends on which text you're reading from. Plenty of books (Wald being an obvious example) use Latin indices for both spacetime and spatial components. Usually, early Latin indices (a,b,c,...) run from 0 to 3 while mid-range indices (i,j,k,...) run from 1 to 3.