Quick question regarding matrix/index notation

Summary: The discussion centers on matrix and index notation in tensor equations, specifically the manipulation of indices in expressions like ##A^{\mu}{}_{\alpha}\, x^{\alpha}\, (A^{-1})^{\beta}{}_{\mu}\, y_{\beta}##. Participants clarify that tensor components are ordinary numbers, so they commute and can be reordered without changing the result. The use of separate dummy indices such as ##\alpha## and ##\beta## follows from the Einstein summation convention, which requires a distinct repeated index for each separate summation. The thread also explains why the product ##(A^{-1})^{\beta}{}_{\mu}\, A^{\mu}{}_{\alpha}## equals the Kronecker delta ##\delta^{\beta}{}_{\alpha}##: it is a component of the identity matrix ##A^{-1}A = \mathbb{1}##.
Anchovy
Attached is a screenshot of a text I'm trying to follow. However, the author does something in line (3.5) that I don't quite understand. They equate the following:
$$A^{\mu}{}_{\alpha}\, x^{\alpha}\, (A^{-1})^{\beta}{}_{\mu}\, y_{\beta} = (A^{-1})^{\beta}{}_{\mu}\, A^{\mu}{}_{\alpha}\, x^{\alpha}\, y_{\beta}$$
So they've taken the ##(A^{-1})^{\beta}{}_{\mu}## that was initially operating on ##y_{\beta}## and moved it backwards so that it now operates on ##A^{\mu}{}_{\alpha}\, x^{\alpha}\, y_{\beta}##. My issue is that I don't know why this is allowed; it would not occur to me to do that. This 'trick', if one can call it that, also appears in (3.7). Can someone explain what's going on here / why this stuff is allowed?

Another thing that keeps bothering me is dummy indices, specifically when to introduce a new one. In this example (line (3.5)) we see
$$s' = x'^{\mu} y'_{\mu} = A^{\mu}{}_{\alpha}\, x^{\alpha}\, (A^{-1})^{\beta}{}_{\mu}\, y_{\beta}$$
So for the coordinate transformation ##x'^{\mu} = A^{\mu}{}_{\alpha} x^{\alpha}## they've introduced one new dummy index, namely ##\alpha##, which is fair enough, but then for ##y'_{\mu} = (A^{-1})^{\beta}{}_{\mu} y_{\beta}## they've introduced the index ##\beta##. Why not just use ##\alpha## here?
 

Attachments

  • index_notation_question.jpg
Anchovy said:
Can someone explain what's going on here / why this stuff is allowed? [...] Why not just use ##\alpha## here?

For the first question: you have a product of components of tensors. Components of tensors are just numbers, and numbers commute, so you can order them any way you like. For the second, remember the Einstein summation convention: if an index occurs twice, it is summed over. Your expression contains two separate sums, hence two separate repeated dummy indices. If they were all the same index, that would imply only one sum.
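
For instance, restoring the sums that the summation convention hides in (3.5) makes both points visible at once:
$$A^{\mu}{}_{\alpha}\, x^{\alpha}\, (A^{-1})^{\beta}{}_{\mu}\, y_{\beta} \;=\; \sum_{\mu}\sum_{\alpha}\sum_{\beta} A^{\mu}{}_{\alpha}\, x^{\alpha}\, (A^{-1})^{\beta}{}_{\mu}\, y_{\beta} \;=\; \sum_{\mu}\sum_{\alpha}\sum_{\beta} (A^{-1})^{\beta}{}_{\mu}\, A^{\mu}{}_{\alpha}\, x^{\alpha}\, y_{\beta}$$
Each summand is a product of four real numbers, so reordering the factors changes nothing. And if ##\beta## were renamed to ##\alpha##, the index ##\alpha## would appear four times in a single term, which the convention does not allow: it would be ambiguous which pairs are contracted.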
 
Dick said:
Components of tensors are just numbers, and numbers commute, so you can order them any way you like. [...] Your expression contains two separate sums, hence two separate repeated dummy indices.

Ok, thanks Dick. A further question: the author has done the following:
$$(A^{-1})^{\beta}{}_{\mu}\, A^{\mu}{}_{\alpha} = \delta^{\beta}{}_{\alpha}$$
Now if we were talking about a straightforward product of a matrix, say ##M##, and its inverse, we'd have ##M^{-1}M = \mathbb{1}##. That's simple enough. I can see that we're again dealing with ##A## and its inverse ##A^{-1}##, but now we're dealing not with whole matrices, only with components/numbers like you say, plus fiddly little indices, so the fact that ##(A^{-1})^{\beta}{}_{\mu}\, A^{\mu}{}_{\alpha}## results in a simple Kronecker delta does not immediately leap off the page at me. How come this is true?
 
Anchovy said:
[...] the fact that ##(A^{-1})^{\beta}{}_{\mu}\, A^{\mu}{}_{\alpha}## results in a simple Kronecker delta does not immediately leap off the page at me. How come this is true?

Because you know that product is the ##\beta##, ##\alpha## component of the identity matrix ##A^{-1}A##. The identity matrix has ones along the diagonal (where ##\alpha=\beta##) and zeros off the diagonal. Isn't that exactly the Kronecker delta?
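
In symbols, the statement about the identity matrix is just
$$(\mathbb{1})^{\beta}{}_{\alpha} \;=\; \begin{cases} 1 & \text{if } \beta = \alpha,\\ 0 & \text{if } \beta \neq \alpha \end{cases} \;=\; \delta^{\beta}{}_{\alpha},$$
and nothing about ##A## beyond invertibility enters anywhere.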
 
Dick said:
Because you know that product is the ##\beta##, ##\alpha## component of the identity matrix ##A^{-1}A##.

How do I know this if I don't know explicitly what ##A## looks like?
 
Anchovy said:
How do I know this if I don't know explicitly what ##A## looks like?

You don't need to know what ##A## looks like. You only need to know what the identity matrix looks like. The product of a matrix and its inverse is always the identity, no matter what the matrix is. You DO have the case ##A^{-1}A=I##: that arrangement of indices denotes a matrix product. Look up the definition of the matrix product.
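
Reading the index arrangement as a matrix product makes the chain explicit:
$$(A^{-1})^{\beta}{}_{\mu}\, A^{\mu}{}_{\alpha} \;=\; \sum_{\mu} (A^{-1})^{\beta}{}_{\mu}\, A^{\mu}{}_{\alpha} \;=\; (A^{-1}A)^{\beta}{}_{\alpha} \;=\; (I)^{\beta}{}_{\alpha} \;=\; \delta^{\beta}{}_{\alpha}$$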
 
Don't forget that the definition of matrix multiplication is ##(AB)^i{}_j = A^i{}_k B^k{}_j##. (This is how it's written with the summation convention and the convention of writing the row index upstairs and the column index downstairs.)
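
If a numerical sanity check helps, here is a minimal NumPy sketch of exactly this contraction over the repeated index. The 2×2 matrix is an arbitrary, hypothetical example; any invertible matrix works:

```python
import numpy as np

# A hypothetical invertible matrix standing in for A; any invertible choice works.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)

# Contract the repeated index mu: (A^{-1})^beta_mu A^mu_alpha.
# 'bm,ma->ba' sums over m (= mu), leaving free indices b (= beta) and a (= alpha).
product = np.einsum('bm,ma->ba', A_inv, A)

# The result is the identity matrix, whose entries are the Kronecker delta.
print(np.allclose(product, np.eye(2)))  # True
```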
 
Thanks guys, I'm doing OK now.
 
