Is my use of Einstein notation correct in this example?


Discussion Overview

The discussion revolves around the correct application of Einstein notation in the context of a specific matrix example. Participants are examining the notation used for a diagonal matrix and its implications in tensor algebra, particularly focusing on the representation of matrix products and identities.

Discussion Character

  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant questions their use of Einstein notation for a diagonal matrix, specifically asking if the expression ##R_{\alpha} R_{\beta} = \mathbb{1}_{\alpha \beta}## is correct.
  • Another participant asserts that the notation is incorrect, stating that equal indices in up and down positions imply summation, and thus the expression is not valid.
  • A different participant expresses confusion about the original intent and notation, suggesting that the matrix should be treated as having two indices rather than one.
  • One participant clarifies that the diagonal entries of the matrix represent components of a second rank tensor and suggests using ##\delta_{\alpha \beta}## instead of ##\mathbb{1}_{\alpha \beta}##.
  • Further elaboration provides a correct formulation of the matrix product in Einstein notation, along with an alternative, admittedly non-standard, representation of the diagonal entries.

Areas of Agreement / Disagreement

Participants express disagreement regarding the correctness of the original notation and the interpretation of the matrix. There is no consensus on the proper use of Einstein notation in this context, with multiple competing views presented.

Contextual Notes

Participants highlight potential misunderstandings regarding the representation of matrix components and the implications of using Einstein notation, indicating that assumptions about the indices and their meanings may be influencing the discussion.

redtree
TL;DR: I am wondering if I am using Einstein notation correctly.
I am wondering if I am using Einstein notation correctly in the following example.

For a diagonal matrix ##R## whose entries are ##1## except for a single entry ##-1##, such as ##R = \mathrm{diag}[1,-1,1]##, is it proper to write the following in Einstein notation:
##R_{\alpha} R_{\beta} = \mathbb{1}_{\alpha \beta}##, such that ##\Gamma_{\alpha} \Gamma_{\beta} = \Gamma_{\alpha} \Gamma_{\beta} \mathbb{1}_{\alpha \beta} = \big(R_{\alpha}\Gamma_{\alpha}\big) \big(R_{\beta}\Gamma_{\beta}\big)##
 
It is wrong.
First of all, equal indices in up and down positions imply summation. So

##R_a R_b = \mathbb{1}_{a b}## is wrong, and to be honest, I don't even understand what you were trying to say.

The second equation is also wrong: the free indices on the left-hand side of the equation do not match those on the right-hand side.
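To see the index mismatch concretely (a worked illustration added here, reading the repeated indices as summed, as the convention requires): the middle expression of the second equation collapses to a scalar,

$$
\Gamma_{\alpha} \Gamma_{\beta} \, \mathbb{1}_{\alpha \beta} = \sum_{\alpha} \Gamma_{\alpha} \Gamma_{\alpha} = \sum_{\alpha} \Gamma_{\alpha}^{2} ,
$$

which carries no free indices, whereas the left-hand side ##\Gamma_{\alpha} \Gamma_{\beta}## carries two.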
 
LCSphysicist said:
I don't even understand what you were trying to say.
First off, I am trying to write the following in Einstein notation: for ##R=\mathrm{diag}[1,-1,1]##, ##R^T R = \mathbb{1}_{\dim R}##.
 
##R## is a matrix, and so it has two indices, not one. Is the notation ##R = \text{diag} [1,-1,1]## confusing you? Those are the entries of the diagonal of the matrix, rather than the components of a vector in some basis; they are the diagonal components of a second-rank tensor in some basis. You would use ##\delta_{\alpha \beta}## instead of ##\mathbb{1}_{\alpha \beta}##. Anyway, using Einstein's summation convention you would write

$$
R^T R= \mathbb{1}
$$

as

$$
(R^T)^\alpha_{\;\; \gamma} R^\gamma_{\;\; \beta} = \delta^\alpha _{\;\; \beta} .
$$

or

$$
R^{\;\; \alpha}_ \gamma R^\gamma_{\;\; \beta} = \delta^\alpha _{\;\; \beta} .
$$

or, as ##R## is symmetric,

$$
R^\alpha_{\;\; \gamma} R^\gamma_{\;\; \beta} = \delta^\alpha _{\;\; \beta} .
$$

I suppose, though it's not standard, you could introduce the numbers ##r_{(1)}= 1 , r_{(2)} = -1, r_{(3)} = 1##, where I have used brackets around the indices to indicate that I'm simply taking them to be numbers rather than components of a tensor in some basis, which is what they actually are. You could then write

$$
R^\alpha_{\;\; \beta} = r_{(\beta)} \delta^\alpha_{\;\; \beta}
$$

where no summation over ##\beta## is implied. Then you could write

\begin{align*}
R^\alpha_{\;\; \gamma} R^\gamma_{\;\; \beta} & = r_{(\alpha)} r_{(\beta)} \delta^\alpha_{\;\; \gamma} \delta^\gamma_{\;\; \beta}
\nonumber \\
& = r_{(\alpha)} r_{(\beta)} \delta^\alpha_{\;\; \beta}
\nonumber \\
& = r_{(\alpha)}^2 \delta^\alpha_{\;\; \beta}
\nonumber \\
& = \delta^\alpha_{\;\; \beta}
\end{align*}

But this is an abuse of the usual convention.
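As a quick numerical cross-check (a minimal sketch, not from the original post; it assumes ##R = \mathrm{diag}[1,-1,1]## as above and uses NumPy's einsum, whose subscript strings mirror the index placement):

```python
import numpy as np

# Assumed example from the thread: R = diag(1, -1, 1).
R = np.diag([1.0, -1.0, 1.0])

# (R^T)^alpha_gamma R^gamma_beta = delta^alpha_beta:
# the repeated index gamma (first axis of both factors) is summed,
# while the free indices alpha, beta survive in the output.
RtR = np.einsum('ga,gb->ab', R, R)
print(np.allclose(RtR, np.eye(3)))  # True

# The non-standard bookkeeping R^alpha_beta = r_(beta) delta^alpha_beta:
# beta appears in the output subscripts, so it is NOT summed over.
r = np.array([1.0, -1.0, 1.0])
R_rebuilt = np.einsum('b,ab->ab', r, np.eye(3))
print(np.allclose(R_rebuilt, R))  # True
```

In einsum, subscripts missing from the output string are summed over, which is exactly the summation convention; keeping ##\beta## in the output reproduces the "no summation over ##\beta##" caveat above.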
 
