A Algebraic Proofs of Levi-Civita Symbol Identities

  1. Jul 3, 2017 #1
    Hello everyone,

    My question concerns the following: though they are widely used, there does not seem to be any standard reference in which the common symmetrization and anti-symmetrization identities are rigorously proven in the general setting of ##n##-dimensional pseudo-Euclidean spaces. At least I have not found one after an extensive Google/literature search, but I would be happy if you could name one. An example where they are more or less just stated is the book "General Relativity" by Wald (p. 432 sq.). Of course, I tried to prove the identities myself with some success, but there are some instances where I have trouble. I, and possibly others with the same issues, would greatly appreciate your help.

    Now, let's get more specific:

    The ##n##-dimensional Levi-Civita Symbol can be defined via the Kronecker Symbol as
    $$ \varepsilon_{i_1 \dots i_n} := n! \,
    \delta^{[1}_{i_1} \cdots \delta^{n]}_{i_n} \quad ,$$
    which is ##1## for even permutations of ##(1, \dots, n)##, ##-1## for odd ones and ##0## otherwise.
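    Purely as a sanity check, this definition can be tested numerically. The following is a small Python sketch of my own (the helper names `perm_sign` and `eps_from_deltas` are made up here); it expands the antisymmetrizer as a signed sum over permutations and confirms the sign rule for ##n = 3##:

    ```python
    import itertools

    def perm_sign(seq):
        """+1 / -1 for even / odd permutations of (0, ..., n-1); 0 on a repeated index."""
        seq = list(seq)
        if len(set(seq)) != len(seq):
            return 0
        sign = 1
        for i in range(len(seq)):
            while seq[i] != i:
                j = seq[i]
                seq[i], seq[j] = seq[j], seq[i]
                sign = -sign
        return sign

    def eps_from_deltas(idx):
        """n! * delta^{[1}_{i_1} ... delta^{n]}_{i_n}: expand the antisymmetrizer
        as a sum over permutations sigma of sgn(sigma) * prod_k delta^{sigma(k)}_{i_k}."""
        n = len(idx)
        return sum(perm_sign(p)
                   for p in itertools.permutations(range(n))
                   if all(p[k] == idx[k] for k in range(n)))

    # the antisymmetrized-delta definition reproduces the permutation sign
    n = 3
    for idx in itertools.product(range(n), repeat=n):
        assert eps_from_deltas(idx) == perm_sign(idx)
    print("definition check passed for n =", n)
    ```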

    Now, using the following definition of the determinant
    $$ \det A := \varepsilon_{i_1 \dots i_n} A^{i_1}{}_1 \cdots A^{i_n}{}_n
    = n! \, A^{[1}{}_1 \cdots A^{n]}{}_n \quad ,$$
    one can prove the following:
    $$\varepsilon^{i_1 \dots i_n}\, \varepsilon_{i_1 \dots i_n} = (-1)^s \, n! $$
    Here the standard pseudo-Euclidean scalar product ##\eta## with ##s## minus-signs was used to raise the indices.
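    This contraction is easy to confirm numerically. The sketch below (my own, not from the thread) raises all indices with the diagonal Minkowski metric ##\eta = \mathrm{diag}(-1,1,1,1)##, i.e. ##n = 4##, ##s = 1##, and checks the fully contracted product against ##(-1)^s \, n!##:

    ```python
    import itertools
    import math

    def perm_sign(seq):
        """+1 / -1 for even / odd permutations of (0, ..., n-1); 0 on a repeated index."""
        seq = list(seq)
        if len(set(seq)) != len(seq):
            return 0
        sign = 1
        for i in range(len(seq)):
            while seq[i] != i:
                j = seq[i]
                seq[i], seq[j] = seq[j], seq[i]
                sign = -sign
        return sign

    eta = [-1, 1, 1, 1]   # pseudo-Euclidean metric with s = 1 minus sign
    n, s = 4, 1

    # raising each index of epsilon with the diagonal eta contributes one
    # factor eta[a], so the full contraction is a sum of squared signs
    # weighted by these factors
    total = sum(perm_sign(idx) ** 2 * math.prod(eta[a] for a in idx)
                for idx in itertools.permutations(range(n)))

    assert total == (-1) ** s * math.factorial(n)
    print(total)   # -24
    ```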

    Now the problems:

    1) The following identity
    $$\varepsilon^{i_1 \dots i_n}\, \varepsilon_{j_1 \dots j_n} = (-1)^s \, n! \,
    \delta^{[i_1}_{j_1} \cdots \delta^{i_n]}_{j_n}$$
    also supposedly holds, but I got stuck in the proof. Writing everything out, pulling out minus signs and pulling fixed indices down, I was able to show that
    $$ \varepsilon^{i_1 \dots i_n}\, \varepsilon_{j_1 \dots j_n}
    = (-1)^s \, (n!)^2 \, \delta^{[i_1}_{1} \cdots \delta^{i_n]}_{n} \,
    \delta^{1}_{[j_1} \cdots \delta^{n}_{j_n]} \quad , $$
    but what to do now?

    2) Using the above identity, I would like to obtain a nice expression for
    $$\varepsilon^{i_1 \dots i_k i_{k+1} \dots i_n}\, \varepsilon_{i_1 \dots i_k j_{k+1}
    \dots j_n} $$
    in terms of the Kronecker symbol. I figured that I need to partially "dissolve" the anti-symmetrization, so I am looking for an identity along the lines of
    $$ T_{[i_1 \dots i_k i_{k+1} \dots i_l]} = f(n,k,l) \, \sum_{\sigma \in S_l} \text{sgn}(\sigma) \, T_{[\sigma(i_1) \dots \sigma(i_k)] \sigma(i_{k+1}) \dots \sigma(i_l)} \quad , $$
    where ##f(n,k,l)## is some normalization factor. How do I prove this (algebraically)?
  3. Jul 3, 2017 #2


    Orodruin (Staff Emeritus, Science Advisor, Homework Helper, Gold Member, 2017 Award)
    1) Both expressions are obviously completely anti-symmetric in both the upper and lower indices. You can therefore write
    $$\epsilon^{i_1\ldots i_n}\epsilon_{j_1\ldots j_n} = C \, \delta^{[i_1}_{j_1} \ldots \delta^{i_n]}_{j_n} \quad .$$
    It therefore only remains to find the constant ##C##. You can find ##C## by selecting a set of indices that make both sides non-zero, e.g., ##i_k = j_k = k##, which leads to
    $$\epsilon^{1\ldots n} \epsilon_{1\ldots n} = (-1)^s = C \, \delta^{[1}_{1} \ldots \delta^{n]}_{n} = \frac{C}{n!} \, .$$
    Solving for ##C## leads to ##C = (-1)^s n!##.

    You can use similar reasoning for (2).
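    For readers who want a brute-force confirmation of this result: the sketch below is my own Python check (the helper `gen_delta`, a made-up name, computes ##n!\,\delta^{[i_1}_{j_1}\cdots\delta^{i_n]}_{j_n}## as the determinant ##\det[\delta^{i_a}_{j_b}]##). It verifies the identity for ##n = 3## with ##\eta = \mathrm{diag}(-1,1,1)##:

    ```python
    import itertools
    import math

    def perm_sign(seq):
        """+1 / -1 for even / odd permutations of (0, ..., n-1); 0 on a repeated index."""
        seq = list(seq)
        if len(set(seq)) != len(seq):
            return 0
        sign = 1
        for i in range(len(seq)):
            while seq[i] != i:
                j = seq[i]
                seq[i], seq[j] = seq[j], seq[i]
                sign = -sign
        return sign

    def gen_delta(up, lo):
        """n! * delta^{[i_1}_{j_1} ... delta^{i_n]}_{j_n}, i.e. the determinant
        of the matrix M[a][b] = delta^{up_a}_{lo_b}, expanded over permutations."""
        k = len(up)
        return sum(perm_sign(p)
                   for p in itertools.permutations(range(k))
                   if all(up[p[b]] == lo[b] for b in range(k)))

    eta = [-1, 1, 1]   # n = 3, s = 1
    n, s = 3, 1

    for i in itertools.product(range(n), repeat=n):
        eps_up = perm_sign(i) * math.prod(eta[a] for a in i)  # raise all indices
        for j in itertools.product(range(n), repeat=n):
            # epsilon^{i...} epsilon_{j...} = (-1)^s * n! * delta^{[i...]}_{j...}
            assert eps_up * perm_sign(j) == (-1) ** s * gen_delta(i, j)
    print("identity verified for n = 3, s = 1")
    ```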
  4. Jul 6, 2017 #3
    Thank you for answering.

    1) Well, I hoped to prove it by direct computation, but the proof you gave is certainly simpler. It's also the one Wald gives. I think one should add that the constant must exist because the respective tensor spaces are 1-dimensional.

    2) The same argument cannot be applied here, because the square of ##n \choose n-(k+1)## is not ##1## in general. One needs to compute the contraction on the right hand side to formally prove it...
  5. Jul 6, 2017 #4



    What do you mean by "direct computation" then? All computations involve argumentation.

    (2) is a simple matter of arguing what values the indices can take while leaving the expression non-zero. The argument is very similar to the one used for (1).
  6. Jul 10, 2017 #5
    Well, I was wondering how to resolve
    $$\delta^{[i_1}_{1} \cdots \delta^{i_n]}_{n} \, \delta^{1}_{[j_1} \cdots \delta^{n}_{j_n]}$$
    algebraically. You know, getting into the mud without arguing indirectly via dimensions. Of course, formal combinatorial arguments are needed - I just don't have any idea how to do it.

    Sorry, but I just don't get it. Of course, one can handwavingly say that the expression needs to be proportional to
    $$\delta^{[i_{k+1}}_{j_{k+1}} \cdots \delta^{i_n ]}_{j_n}$$
    and then go on, but the formal argument from (1) fails here:
    $$T^{i_{k+1} \dots i_n}{}_{j_{k+1} \dots j_n } =T^{[i_{k+1} \dots i_n]}{}_{[j_{k+1} \dots j_n ]}$$
    for all possible values of the indices does not imply that it has to be proportional to the expression above.

    So the question is how to do the contraction.
  7. Jul 10, 2017 #6



    You need to start from the actual contraction and argue from there. Questions to ask:
    1. What values can the ##i_m## take when the expression is non-zero?
    2. What values can the ##j_m## for ##m > k## take for a given such setup of ##i_m## while the expression is non-zero?
    3. Take such a setup and determine the constant.
  8. Jul 11, 2017 #7
    Thank you! I think I have it now:

    Start with
    $$ n! \, \delta^{[i_1}_{i_1} \cdots \delta^{i_k}_{i_k}\delta^{i_{k+1}}_{j_{k+1}} \cdots \delta^{i_n]}_{j_n}
    = \delta^{i_1}_{i_1} \cdots \delta^{i_k}_{i_k}\delta^{i_{k+1}}_{j_{k+1}} \cdots \delta^{i_n}_{j_n}
    - \delta^{i_2}_{i_1} \delta^{i_1}_{i_2} \cdots \delta^{i_k}_{i_k} \delta^{i_{k+1}}_{j_{k+1}} \cdots \delta^{i_n}_{j_n}
    + \dots + (-1)^k \delta^{i_2}_{i_1} \delta^{i_3}_{i_2} \cdots \delta^{i_{k+1}}_{i_k} \delta^{i_1}_{j_{k+1}} \delta^{i_{k+2}}_{j_{k+2}} \cdots \delta^{i_n}_{j_n} + \dots \, \, \, . $$

    Now observe that each summand is a constant times
    $$\delta^{i_{k+1}}_{j_{k+1}} \cdots \delta^{i_n}_{j_n} \, \, , $$
    up to permutation of the indices. Because the left-hand side is antisymmetric in the lower and upper indices, the total sum on the right needs to be anti-symmetric, too. So we may anti-symmetrize the top and bottom indices without changing anything, rearrange, and pull out
    $$\delta^{[i_{k+1}}_{[j_{k+1}} \cdots \delta^{i_n]}_{j_n]} =
    \delta^{[i_{k+1}}_{j_{k+1}} \cdots \delta^{i_n]}_{j_n} \, \, . $$
    Thus, there indeed exists a constant ##C \in \mathbb R## such that
    $$\delta^{[i_1}_{i_1} \cdots \delta^{i_k}_{i_k}\delta^{i_{k+1}}_{j_{k+1}} \cdots \delta^{i_n]}_{j_n}
    = C \delta^{[i_{k+1}}_{j_{k+1}} \cdots \delta^{i_n]}_{j_n} \, \, .$$

    So in particular:
    $$\delta^{[i_1}_{i_1} \cdots \delta^{i_k}_{i_k}\delta^{k+1}_{k+1} \cdots \delta^{n]}_{n}
    = C \delta^{[{k+1}}_{{k+1}} \cdots \delta^{n]}_{n} = \frac{C}{(n-k-1)!}\, \, .$$
    But how to compute this?

    Last edited: Jul 11, 2017
  9. Jul 12, 2017 #8
    Sorry, in the last line it ought to be a ##(n-k)!## instead of ##(n-k-1)!##.

    I had another idea to calculate ##C##, namely by doing the full contraction:
    $$\delta^{[i_1}_{i_1} \cdots \delta^{i_k}_{i_k}\delta^{i_{k+1}}_{i_{k+1}} \cdots \delta^{i_n]}_{i_n} =1
    = C \delta^{[i_{k+1}}_{i_{k+1}} \cdots \delta^{i_n]}_{i_n} \, \, .$$
    But then the question is how to compute the right hand side.

    I also found a paper where they used
    $$\delta^{[i_1}_{j_1} \cdots \delta^{i_n]}_{j_n} =
    \delta^{[i_1}_{l_1} \cdots \delta^{i_n]}_{l_n} \,
    \delta^{[l_1}_{j_1} \cdots \delta^{l_k]}_{j_k} \, \delta^{[l_{k+1}}_{j_{k+1}} \cdots \delta^{l_n]}_{j_n} $$
    to prove the Laplace expansion for determinants. I thought it might be useful here as well, but I haven't figured out in which way...
  10. Jul 13, 2017 #9
    Alright, in case anyone stumbles upon the same issue in the future, I shall give a solution.

    1) This is most easily solved as Orodruin explained above.

    2) I figured out how to solve it after reading the paper referenced above and finding the following reference (p. 111):

    David Lovelock and Hanno Rund, "Tensors, Differential Forms and Variational Principles",
    Pure and Applied Mathematics, Wiley, New York, 1975.

    First we introduce the so-called generalized Kronecker delta (this is also how I found the references):
    $$ \delta^{i_1 \dots i_k}_{j_1 \dots j_k} := k! \, \delta^{[i_1}_{j_1} \cdots \delta^{i_k]}_{j_k} \, \, .$$
    This is ##1## when the indices on top are an even permutation of the indices below, ##-1## in the odd case and ##0## else. Elementary manipulations (see the identity above) show that this has to equal
    $$\frac{1}{(k-1)!} \, \delta^{i_1 \dots i_k}_{l_1 \dots l_k} \, \delta^{l_1}_{j_1} \, \delta^{l_2 \dots l_k}_{j_2 \dots j_k} \, \, .$$
    Now here comes the main point: most of the sums over ##l_2, \dots, l_k## are redundant for symmetry reasons, so the prefactor cancels if we keep only the non-redundant ones. Resolving the first delta as a sign, we get a total of ##k## summands
    $$\delta^{i_1}_{j_1} \delta^{i_2 \dots i_k}_{j_2 \dots j_k} - \delta^{i_2}_{j_1} \delta^{i_1 i_3 \dots i_k}_{j_2 \dots j_k}
    + \dots + (-1)^{(k-1)} \delta^{i_k}_{j_1} \delta^{i_1 \dots i_{k-1}}_{j_2 \dots j_k} \, \, .$$
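    This ##k##-summand expansion can be checked by brute force. In the sketch below (my own; `gen_delta` and `expand_first` are made-up helper names), the generalized delta is computed as the determinant ##\det[\delta^{i_a}_{j_b}]## and compared against the expansion along ##j_1## for ##n = 4##, ##k = 3##:

    ```python
    import itertools

    def perm_sign(seq):
        """+1 / -1 for even / odd permutations of (0, ..., n-1); 0 on a repeated index."""
        seq = list(seq)
        if len(set(seq)) != len(seq):
            return 0
        sign = 1
        for i in range(len(seq)):
            while seq[i] != i:
                j = seq[i]
                seq[i], seq[j] = seq[j], seq[i]
                sign = -sign
        return sign

    def gen_delta(up, lo):
        """Generalized Kronecker delta delta^{up}_{lo} = det[delta^{up_a}_{lo_b}]."""
        k = len(up)
        return sum(perm_sign(p)
                   for p in itertools.permutations(range(k))
                   if all(up[p[b]] == lo[b] for b in range(k)))

    def expand_first(up, lo):
        """Expansion along the first lower index j_1: the r-th summand pairs
        up[r] with lo[0], carries the sign (-1)^r, and leaves a smaller delta."""
        return sum((-1) ** r * int(up[r] == lo[0])
                   * gen_delta(up[:r] + up[r + 1:], lo[1:])
                   for r in range(len(up)))

    n, k = 4, 3
    for up in itertools.product(range(n), repeat=k):
        for lo in itertools.product(range(n), repeat=k):
            assert gen_delta(up, lo) == expand_first(up, lo)
    print("expansion verified for n = 4, k = 3")
    ```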
    This makes it possible to compute the contraction
    $$\delta^{i_1 i_2 \dots i_k}_{i_1 j_2 \dots j_k} =
    \delta^{i_1}_{i_1} \delta^{i_2 \dots i_k}_{j_2 \dots j_k} - \delta^{i_2}_{i_1} \delta^{i_1 i_3 \dots i_k}_{j_2 \dots j_k}
    + \dots + (-1)^{(k-1)} \delta^{i_k}_{i_1} \delta^{i_1 \dots i_{k-1}}_{j_2 \dots j_k}
    = n \delta^{i_2 \dots i_k}_{j_2 \dots j_k} - \delta^{i_2 i_3 \dots i_k}_{j_2 \dots j_k}
    + \dots + (-1)^{(k-1)} \delta^{i_k i_2 \dots i_{k-1}}_{j_2 \dots j_k}
    = n \delta^{i_2 \dots i_k}_{j_2 \dots j_k} - (k-1) \delta^{i_2 \dots i_k}_{j_2 \dots j_k}
    = (n-k+1) \delta^{i_2 \dots i_k}_{j_2 \dots j_k}
    \, \, .$$
    By induction, we see what happens for ##m## contractions:
    $$\delta^{i_1 \dots i_m i_{m+1}\dots i_k}_{i_1 \dots i_m j_{m+1} \dots j_k} =
    \frac{(n-k+m)!}{(n-k)!} \, \delta^{i_{m+1} \dots i_k}_{j_{m+1} \dots j_k}
    \, \, .$$
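    The induction result is easy to spot-check numerically. The sketch below (my own, with the same hypothetical `gen_delta` determinant helper) verifies the ##m##-fold contraction factor ##\frac{(n-k+m)!}{(n-k)!}## for ##n = 4##, ##k = 3##, ##m = 1##:

    ```python
    import itertools
    import math

    def perm_sign(seq):
        """+1 / -1 for even / odd permutations of (0, ..., n-1); 0 on a repeated index."""
        seq = list(seq)
        if len(set(seq)) != len(seq):
            return 0
        sign = 1
        for i in range(len(seq)):
            while seq[i] != i:
                j = seq[i]
                seq[i], seq[j] = seq[j], seq[i]
                sign = -sign
        return sign

    def gen_delta(up, lo):
        """Generalized Kronecker delta delta^{up}_{lo} = det[delta^{up_a}_{lo_b}]."""
        k = len(up)
        return sum(perm_sign(p)
                   for p in itertools.permutations(range(k))
                   if all(up[p[b]] == lo[b] for b in range(k)))

    def contracted(free_up, free_lo, n, m):
        """Contract m upper indices with m lower ones: sum the generalized
        delta over all values of the m shared leading indices."""
        return sum(gen_delta(c + free_up, c + free_lo)
                   for c in itertools.product(range(n), repeat=m))

    n, k, m = 4, 3, 1
    factor = math.factorial(n - k + m) // math.factorial(n - k)   # here: 2
    for up in itertools.product(range(n), repeat=k - m):
        for lo in itertools.product(range(n), repeat=k - m):
            assert contracted(up, lo, n, m) == factor * gen_delta(up, lo)
    print("contraction factor", factor, "confirmed for n=4, k=3, m=1")
    ```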
    Therefore, using 1), we finally obtain
    $$\varepsilon^{i_1 \dots i_k i_{k+1} \dots i_n} \, \varepsilon_{i_1 \dots i_k j_{k+1} \dots j_n}
    = (-1)^s \, k! \, \delta^{i_{k+1} \dots i_n}_{j_{k+1} \dots j_n}
    = (-1)^s \, k! (n-k)! \, \delta^{[i_{k+1}}_{j_{k+1}} \cdots \delta^{i_n]}_{j_n} \, \, . $$
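    As a final numerical cross-check (again my own sketch, with hypothetical helpers as before), the contracted ##\varepsilon##-identity can be verified for ##n = 3##, ##k = 1##, ##\eta = \mathrm{diag}(-1,1,1)##, where the right-hand side is ##(-1)^s \, k! \, \delta^{i_2 i_3}_{j_2 j_3}##:

    ```python
    import itertools
    import math

    def perm_sign(seq):
        """+1 / -1 for even / odd permutations of (0, ..., n-1); 0 on a repeated index."""
        seq = list(seq)
        if len(set(seq)) != len(seq):
            return 0
        sign = 1
        for i in range(len(seq)):
            while seq[i] != i:
                j = seq[i]
                seq[i], seq[j] = seq[j], seq[i]
                sign = -sign
        return sign

    def gen_delta(up, lo):
        """Generalized Kronecker delta delta^{up}_{lo} = det[delta^{up_a}_{lo_b}]."""
        k = len(up)
        return sum(perm_sign(p)
                   for p in itertools.permutations(range(k))
                   if all(up[p[b]] == lo[b] for b in range(k)))

    eta = [-1, 1, 1]   # n = 3, s = 1
    n, s, k = 3, 1, 1

    def eps_up(idx):
        """Levi-Civita symbol with all indices raised by the diagonal eta."""
        return perm_sign(idx) * math.prod(eta[a] for a in idx)

    for up in itertools.product(range(n), repeat=n - k):
        for lo in itertools.product(range(n), repeat=n - k):
            lhs = sum(eps_up((a,) + up) * perm_sign((a,) + lo) for a in range(n))
            rhs = (-1) ** s * math.factorial(k) * gen_delta(up, lo)
            assert lhs == rhs
    print("epsilon contraction verified for n=3, k=1, s=1")
    ```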

    By the way, the general formula above also implies that
    $$\delta^{[i_{1}}_{i_{1}} \cdots \delta^{i_k]}_{i_k} ={ n \choose k } \, \, . $$
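    This binomial value can also be confirmed by brute force. In this sketch (mine, with the same hypothetical `gen_delta` determinant helper), the fully traced generalized delta for ##n = 5##, ##k = 2## comes out to ##k!\binom{n}{k}##, so dividing by ##k!## gives ##\binom{5}{2} = 10##:

    ```python
    import itertools
    import math

    def perm_sign(seq):
        """+1 / -1 for even / odd permutations of (0, ..., n-1); 0 on a repeated index."""
        seq = list(seq)
        if len(set(seq)) != len(seq):
            return 0
        sign = 1
        for i in range(len(seq)):
            while seq[i] != i:
                j = seq[i]
                seq[i], seq[j] = seq[j], seq[i]
                sign = -sign
        return sign

    def gen_delta(up, lo):
        """Generalized Kronecker delta delta^{up}_{lo} = det[delta^{up_a}_{lo_b}]."""
        k = len(up)
        return sum(perm_sign(p)
                   for p in itertools.permutations(range(k))
                   if all(up[p[b]] == lo[b] for b in range(k)))

    n, k = 5, 2
    # full trace: sum delta^{i_1 ... i_k}_{i_1 ... i_k} over all index values
    total = sum(gen_delta(idx, idx)
                for idx in itertools.product(range(n), repeat=k))
    assert total == math.factorial(k) * math.comb(n, k)   # = n! / (n-k)!
    print(total // math.factorial(k))   # 10
    ```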
    Actually, I agree now with the conclusion of the paper referenced above that the generalized Kronecker delta is in a sense more fundamental than the Levi-Civita symbol. For instance, if one uses it as in the proof of the identity above, one can also show that
    $$T_{[i_1 \dots i_m i_{m+1} \dots i_k]} = \frac{m!}{k!} \sum_{\substack{(l_1, \dots, l_m) \\ \text{subtuple of} \\ (i_1, \dots, i_k)}}
    \delta^{l_1 \dots l_k}_{i_1 \dots i_k} \, T_{[l_1 \dots l_m] l_{m+1} \dots l_k} \, \, \, , $$
    where the delta should be viewed as the sign of the specific permutation of ##(i_1, \dots, i_k)##. The word `subtuple' here means that the ordered tuple ##(l_1, \dots, l_m)## can be obtained from ##(i_1, \dots, i_k)## by crossing out ##k-m## entries.
    Last edited: Jul 13, 2017