Levi-Civita (Permutation) Symbol Proof

AI Thread Summary
The discussion focuses on proving that the Levi-Civita symbol \varepsilon_{ijk} equals the determinant of a specific matrix of Kronecker deltas. Participants are initially unsure how to start the proof from the relations given in the textbook and seek guidance on manipulating the equations correctly. There is consensus that the determinant formula \det(\mathbf{A}) = \varepsilon_{lmn} A_{1l} A_{2m} A_{3n} simplifies the proof, and several replies stress correct index notation, in particular that no index may appear more than twice in a single term. The conversation highlights the challenge of deriving the determinant identity and the need for clarity in the intermediate steps.
blink-

Homework Statement


Prove the following:
\varepsilon_{ijk} = \left| \begin{array}{ccc} \delta_{1i} & \delta_{1j} & \delta_{1k} \\ \delta_{2i} & \delta_{2j} & \delta_{2k} \\ \delta_{3i} & \delta_{3j} & \delta_{3k} \end{array} \right|

Homework Equations


From my textbook:
\hat{e}_3 = \hat{e}_1 \times \hat{e}_2, \quad \hat{e}_1 = \hat{e}_2 \times \hat{e}_3, \quad \ldots \quad \varepsilon_{ijk} \hat{e}_k = \hat{e}_i \times \hat{e}_j
\delta_{ij} = \hat{e}_i \cdot \hat{e}_j

From a website:
\varepsilon_{ijk} = (\hat{e}_i \times \hat{e}_j)\cdot\hat{e}_k
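A quick sanity check of this relation (a step added here, not part of the original post, but immediate from the definitions): for i = 1, j = 2, k = 3,
(\hat{e}_1 \times \hat{e}_2)\cdot\hat{e}_3 = \hat{e}_3 \cdot \hat{e}_3 = 1 = \varepsilon_{123}
while a repeated index such as i = j = 1 gives (\hat{e}_1 \times \hat{e}_1)\cdot\hat{e}_k = 0 = \varepsilon_{11k}.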

The Attempt at a Solution


I don't even know where to start. My textbook says I should be able to prove the determinant identity using the two relations it provides; however, I have not been able to prove anything.

It seems as though every continuum mechanics book I've ever seen likes to say the determinant identity is "easy to show." Apparently it's so easy that no book feels the need to show the derivation. Am I missing any relations? Can someone give me hints or "suggestions" to get me going in the right direction?

Thanks.
 
Start with the first formula and substitute the second one in:

\varepsilon_{ijk} = (\hat{e}_i \times \hat{e}_j) \cdot (\hat{e}_i \times \hat{e}_j) \, \delta_{ij}

Next, notice that (\hat{e}_i \times \hat{e}_j) is dotted with itself, meaning...?
 
jedishrfu,

Thank you very much for your quick response. My wife and I have been looking at your post and can't seem to understand how you got there. Do I need the third equation? I found it on a website, but my textbook informs me that I only need the other two equations... When I rearrange those, I seem to get something different from what the website lists...

\varepsilon_{ijk} \hat{e}_k = \hat{e}_i \times \hat{e}_j
\varepsilon_{ijk} \hat{e}_k \cdot \hat{e}_k = (\hat{e}_i \times \hat{e}_j) \cdot \hat{e}_k
\varepsilon_{ijk} \delta_{kk} = (\hat{e}_i \times \hat{e}_j) \cdot \hat{e}_k
3\varepsilon_{ijk} = (\hat{e}_i \times \hat{e}_j) \cdot \hat{e}_k

which is not equal to what the website says...

The only thing that we've gotten when trying to use your hint is the following:

\delta_{ij} = \hat{e}_i \cdot \hat{e}_j = \left( \frac{\hat{e}_j \times \hat{e}_k}{\varepsilon_{ijk}} \right) \cdot \left( \frac{\hat{e}_k \times \hat{e}_i}{\varepsilon_{ijk}} \right)

which can be further manipulated, but it doesn't seem to give anything?

Furthermore, even if we could get it to the state you mention, I am still slightly baffled. I understand that with something dotted with itself you have the norm squared, but I can't figure out if there is another relation or how to use that one.

Sorry, I'm sure this is straightforward for you, but I've always had a hard time with this damn permutation symbol. I've used it for countless other identity proofs, but I've never been able to prove the determinant identity of the symbol itself.

Thanks Again.
 
blink- said:

Homework Statement


Prove the following:
\varepsilon_{ijk} = \left( \begin{array}{ccc} \delta_{1i} & \delta_{1j} & \delta_{1k} \\ \delta_{2i} & \delta_{2j} & \delta_{2k} \\ \delta_{3i} & \delta_{3j} & \delta_{3k} \end{array} \right)

Do you mean \varepsilon_{ijk} = \begin{vmatrix} \delta_{1i} & \delta_{1j} & \delta_{1k} \\ \delta_{2i} & \delta_{2j} & \delta_{2k} \\ \delta_{3i} & \delta_{3j} & \delta_{3k} \end{vmatrix}?

blink- said:
jedishrfu,

Thank you very much for your quick response. My wife and I have been looking at your post and can't seem to understand how you got there.

I have no idea what jedishrfu did (or was trying to do) there either, so don't feel too bad.

Do I need the third equation? I found it on a website, but my textbook informs me that I only need the other two equations... When I rearrange those, I seem to get something different from what the website lists...

\varepsilon_{ijk} \hat{e}_k = \hat{e}_i \times \hat{e}_j
\varepsilon_{ijk} \hat{e}_k \cdot \hat{e}_k = (\hat{e}_i \times \hat{e}_j) \cdot \hat{e}_k
\varepsilon_{ijk} \delta_{kk} = (\hat{e}_i \times \hat{e}_j) \cdot \hat{e}_k
3\varepsilon_{ijk} = (\hat{e}_i \times \hat{e}_j) \cdot \hat{e}_k

which is not equal to what the website says...

Whenever you are doing index gymnastics, at each step, you should check 2 things:

(1) Does any index occur more than twice in a single term?
(2) Does each term have the same free indices?

When taking the dot product of \varepsilon_{ijk} \hat{e}_k (which has an implied summation over k) with \hat{e}_k, you need to use a different dummy index for the summation:

(\hat{e}_i \times \hat{e}_j) \cdot \hat{e}_k = \varepsilon_{ijm} \hat{e}_m \cdot \hat{e}_k = \varepsilon_{ijm}\delta_{km} = \varepsilon_{ijk}
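
To spell out that final contraction (an extra step added here, not in the original reply): \delta_{km} kills every term of the implied sum over m except m = k, so
\varepsilon_{ijm}\delta_{km} = \varepsilon_{ij1}\delta_{k1} + \varepsilon_{ij2}\delta_{k2} + \varepsilon_{ij3}\delta_{k3} = \varepsilon_{ijk}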
 
Sorry for the confusion, but in your first post I saw a formula relating \hat{e}_k to \hat{e}_i \times \hat{e}_j and \delta_{ij}, and I didn't realize the \delta_{ij} was part of the third equation. Now, though, it shows on the third line. Did you edit the post?
 
gabbagabbahey said:
Do you mean \varepsilon_{ijk} = \begin{vmatrix} \delta_{1i} & \delta_{1j} & \delta_{1k} \\ \delta_{2i} & \delta_{2j} & \delta_{2k} \\ \delta_{3i} & \delta_{3j} & \delta_{3k} \end{vmatrix}?

Yes, that is absolutely right. It is the determinant of the matrix, not just the matrix itself.

gabbagabbahey said:
Whenever you are doing index gymnastics, at each step, you should check 2 things:

(1) Does any index occur more than twice in a single term?
(2) Does each term have the same free indices?

When taking the dot product of \varepsilon_{ijk} \hat{e}_k (which has an implied summation over k) with \hat{e}_k, you need to use a different dummy index for the summation:

(\hat{e}_i \times \hat{e}_j) \cdot \hat{e}_k = \varepsilon_{ijm} \hat{e}_m \cdot \hat{e}_k = \varepsilon_{ijm}\delta_{km} = \varepsilon_{ijk}

That is a very clear explanation. I sometimes forget about the "occurrence rule." I had the revelation while sleeping that what I did was wrong (mainly, that I can't sum \delta_{kk} on its own, because it is multiplied by \varepsilon_{ijk}). My plan was then to use the Kronecker delta as an index switcher m \rightarrow k, which would leave me with that final formula (although my methodology was incorrect).

jedishrfu said:
sorry for he confusion but in your first post I saw a formula relating ek to eixej delta ij and didn't realize the delta ij was part of the 3rd equation. now though it shows it on the 3rd line. did you edit the post?

Sorry about that. I did edit the post because I had mistyped one of the supplied equations. You just replied too fast (you probably never hear that!).

Thanks guys, I'll keep trying.
 
If you are allowed to use the well-known fact that the determinant of a 3 x 3 matrix can be written as \det(\mathbf{A}) = \varepsilon_{lmn} A_{1l} A_{2m} A_{3n} (see here and the reference cited therein), then it should be fairly easy to show.

If not, you will probably just have to expand the determinant using whatever methods you are allowed to use and compare the result to \varepsilon_{ijk}.
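
For what it's worth, that ε-expansion of the determinant is easy to check numerically. Below is a minimal Python sketch (my own, not from the thread; the levi_civita sign formula is the standard one for indices in {1, 2, 3}):

import itertools
import numpy as np

def levi_civita(i, j, k):
    # Sign of the permutation (i, j, k) of (1, 2, 3); 0 if any index repeats.
    return (j - i) * (k - j) * (k - i) // 2

def det_by_epsilon(A):
    # det(A) = eps_{lmn} A_{1l} A_{2m} A_{3n}, with the implied sums written out.
    return sum(levi_civita(l, m, n) * A[0, l - 1] * A[1, m - 1] * A[2, n - 1]
               for l, m, n in itertools.product((1, 2, 3), repeat=3))

A = np.random.default_rng(0).random((3, 3))
assert np.isclose(det_by_epsilon(A), np.linalg.det(A))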
 
gabbagabbahey said:
If you are allowed to use the well-known fact that the determinant of a 3 x 3 matrix can be written as \det(\mathbf{A}) = \varepsilon_{lmn} A_{1l} A_{2m} A_{3n} (see here and the reference cited therein), then it should be fairly easy to show.

If not, you will probably just have to expand the determinant using whatever methods you are allowed to use and compare the result to \varepsilon_{ijk}.

Thanks gabbagabbahey. We are allowed to use \det(\mathbf{A}) = \varepsilon_{lmn} A_{1l} A_{2m} A_{3n} (it was another proof I did).

The problem I am running into is getting started. The proofs are expected to work from the left-hand side until it equals the right, which rules out proving the identity by simply expanding the determinant. I have been trying for days now but can't seem to solve it using the two relations the book lays out (and which the book explicitly says the identity "can be easily proved using"). Does anyone know how to start the proof with the equations I defined earlier in the thread?

Thanks Again.
 
blink- said:
Thanks gabbagabbahey. We are allowed to use \det(\mathbf{A}) = \varepsilon_{lmn} A_{1l} A_{2m} A_{3n} (it was another proof I did).

The proofs are expected to work from the left-hand side until it equals the right.

Are you sure about this? What is the difference between proving a=b and proving b=a?

Does anyone know how to start the proof with the equations I defined earlier in the thread?

Just use the matrix \mathbf{A} = \begin{pmatrix} \delta_{1i} & \delta_{1j} & \delta_{1k} \\ \delta_{2i} & \delta_{2j} & \delta_{2k} \\ \delta_{3i} & \delta_{3j} & \delta_{3k} \end{pmatrix} and calculate the determinant using the above formula. What is A_{1l}? What is A_{2m}?...
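
That recipe can also be verified by brute force over all 27 index triples (a small Python sketch of my own, not from the thread, using the same standard sign formula for \varepsilon_{ijk}):

import itertools
import numpy as np

def levi_civita(i, j, k):
    # Sign of the permutation (i, j, k) of (1, 2, 3); 0 if any index repeats.
    return (j - i) * (k - j) * (k - i) // 2

for i, j, k in itertools.product((1, 2, 3), repeat=3):
    # Column c of A is the standard basis vector indexed by (i, j, k)[c],
    # i.e. A[r][c] = delta_{r, (i, j, k)[c]}.
    A = np.array([[1.0 if r == idx else 0.0 for idx in (i, j, k)] for r in (1, 2, 3)])
    assert round(np.linalg.det(A)) == levi_civita(i, j, k)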
 
\mathrm{det}\begin{pmatrix} \delta_{1i} & \delta_{1j} & \delta_{1k} \\ \delta_{2i} & \delta_{2j} & \delta_{2k} \\ \delta_{3i} & \delta_{3j} & \delta_{3k} \end{pmatrix} = \mathrm{det}(\delta) = \varepsilon_{lmn}\delta_{il}\delta_{jm}\delta_{kn} = \varepsilon_{ijk}

It's much easier when I stop trying to solve for the other side. I could have done this from the beginning but I was really trying to manipulate the LHS. Thanks for your help.
 
blink- said:
\mathrm{det}\begin{pmatrix} \delta_{1i} & \delta_{1j} & \delta_{1k} \\ \delta_{2i} & \delta_{2j} & \delta_{2k} \\ \delta_{3i} & \delta_{3j} & \delta_{3k} \end{pmatrix} = \mathrm{det}(\delta) = \varepsilon_{lmn}\delta_{il}\delta_{jm}\delta_{kn} = \varepsilon_{ijk}

It's much easier when I stop trying to solve for the other side. I could have done this from the beginning but I was really trying to manipulate the LHS. Thanks for your help.

This isn't quite correct. A_{1l} represents the lth component along the 1st row of the matrix \mathbf{A}. Using \mathbf{A} = \begin{pmatrix} \delta_{1i} & \delta_{1j} & \delta_{1k} \\ \delta_{2i} & \delta_{2j} & \delta_{2k} \\ \delta_{3i} & \delta_{3j} & \delta_{3k} \end{pmatrix}, you do not have A_{1l} = \delta_{il}. You do, however, know that A_{l1} = \delta_{li} (the lth component along the 1st column), so you want to be sure to expand the determinant along the columns of your matrix instead of the rows. Of course \delta_{il} = \delta_{li}, so technically there is nothing incorrect in your equations, but your reasoning is not clear. When you expand the determinant along each column you have \det(\mathbf{A}) = \varepsilon_{lmn} A_{l1} A_{m2} A_{n3}.
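
Written out along the columns (a closing step added here for completeness, following the correction above), the computation becomes
\det(\mathbf{A}) = \varepsilon_{lmn} A_{l1} A_{m2} A_{n3} = \varepsilon_{lmn} \delta_{li} \delta_{mj} \delta_{nk} = \varepsilon_{ijk}
since each Kronecker delta simply renames a dummy index (l \rightarrow i, m \rightarrow j, n \rightarrow k).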
 