# Linear & Vector Algebra: Kronecker delta & Levi-Civita symbol

1. Mar 31, 2006

### Dr. Gonzo

Hello all. Happy to have finally found this forum, sorry that it took so long!

I'm working through a Vector Algebra tutorial and I am having much difficulty with the concepts of Kronecker deltas and the Levi-Civita symbol. I can't fully grasp either of them intuitively.

From what I've been able to gather, $$\delta_{ij}= \left\{\begin{array}{cc}1,&\mbox{ if }i=j,\\0, & \mbox{ if } i\neq j\end{array}\right.$$

I'm pretty sure this means that, in the case of two vectors I and J with components $$i_{1},i_{2},i_{3}$$ and $$j_{1},j_{2},j_{3}$$ that $$i_{1}= j_{1},i_{2}= j_{2}, i_{3}= j_{3}$$. In other words, vectors I and J are parallel and equal. Is this correct? Or am I missing something here?

And regarding the Levi-Civita symbol, it's been pointed out to me that another name for this is the anti-symmetric tensor. Unfortunately, this hint has not helped my understanding one iota. So far, my understanding of this symbol states that it takes Kronecker's delta one step further into a third dimension or plane.

I understand that $$\epsilon_{ijk}=\left\{\begin{array}{cc}1,&\mbox{ if }ijk=123,\ 231,\mbox{ or }312,\\-1, & \mbox{ if } ijk= 321,\ 213,\mbox{ or }132,\\0, & \mbox{ otherwise}\end{array}\right.$$

I am completely confused by these 123 values. What do they represent? Perhaps understanding that will help me complete this puzzle.

I appreciate all and any help on this!

Last edited: Mar 31, 2006
2. Mar 31, 2006

### JasonRox

The best way to see what the Kronecker Delta does is to create a 3x3 matrix where the Kronecker Delta determines each number.

So, for the first slot, 11 (row 1, column 1), you put the number 1 (because 1 = 1). For slot 12 (row 1, column 2) you put the number 0 (because 1 =/= 2), and for slot 13 (row 1, column 3) you put the number 0.

Keep working that out and see what you find.
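The construction above can be sketched in a few lines of code. This is just an illustration (the function name is made up, not part of the tutorial):

```python
# Build the 3x3 matrix whose (i, j) slot is determined by the Kronecker delta:
# 1 when the row and column indices match, 0 otherwise.
def kronecker_delta(i, j):
    return 1 if i == j else 0

matrix = [[kronecker_delta(i, j) for j in range(1, 4)] for i in range(1, 4)]
for row in matrix:
    print(row)
# The diagonal slots (11, 22, 33) hold 1; every other slot holds 0.
```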

Last edited: Mar 31, 2006
3. Mar 31, 2006

### JasonRox

Also, the Kronecker Delta has many more uses. The whole point is just to see how the function works.

The Levi-Civita symbol is almost the same, but it uses three indices.

http://mathworld.wolfram.com/PermutationSymbol.html

The site also lists some relations involving the Kronecker delta. Maybe proving those relations will lead to a better understanding, or simply play around with them.

4. Mar 31, 2006

### Dr. Gonzo

Thanks for the post. I kind of figured that this delta went something like this, but what confused me is when to use 1 and when to use 0. Let me explain:

Am I to understand that for every matrix of this delta, the outcome will always be the same? If I simply put a 1 wherever 1 = 1, 2 = 2, and 3 = 3 (a diagonal from top left to bottom right) and zeros in all other places, I don't see how this has any significance. If vector I = vector I, then there should and would be 1's in all places in the matrix. If vector I does not = vector J, then there would only be 1's in some spots in the matrix. But when to use which? That's my dilemma. I don't understand where the values in the matrix come from, or what to do with them once I have them.

5. Apr 1, 2006

### JasonRox

I'm not sure exactly what you are speaking of, but I'll give you another example using vectors.

Let $B=\{v_1, v_2, ...\}$ be an orthonormal set.

So, we have...

$$v_i \cdot v_j = \delta_{ij}$$

where $\cdot$ is the dot product or inner product.
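The relation above is easy to check numerically. A minimal sketch, using the standard basis of 3-space as the orthonormal set (the helper names are illustrative):

```python
# The standard basis of R^3 is an orthonormal set, so the dot product
# v_i . v_j reproduces the Kronecker delta: 1 when i = j, 0 otherwise.
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

for i in range(3):
    for j in range(3):
        delta = 1 if i == j else 0
        assert dot(basis[i], basis[j]) == delta
```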

Ok, I reread your post. Can you tell what section you are working on that requires these functions? Maybe I can help you much more on where the values come from.

6. Apr 1, 2006

### JasonRox

http://planetmath.org/encyclopedia/LeviCivitaPermutationSymbol3.html [Broken]

There's another website you can look at.

Do you happen to know what an even/odd permutation is?

Last edited by a moderator: May 2, 2017
7. Apr 1, 2006

### Hurkyl

Staff Emeritus
The letters i and j in $\delta_{ij}$ denote indices, not vectors. They are numbers ranging from 1 through the dimension of your vector space. $\delta_{ij}$ is the (i, j)-th component in the matrix representation of the Kronecker delta¹, according to whatever basis you've chosen.

1: if you're actually using tensor notation, then since they're both subscripts, both indices are selecting columns -- this would actually be a 1×n² matrix that's partitioned into n rows of length n. But if you make the appropriate transpositions, you can treat it as an n×n matrix. A notation for the n×n identity matrix would be $\delta_i^j$.

Last edited: Apr 1, 2006
8. Apr 1, 2006

### Dr. Gonzo

OK...things are slowly getting a little bit clearer. I understand that, for two different basis vectors, $$v_i\bullet v_j=\delta_{ij}=(v_{ix}*v_{jx})+(v_{iy}*v_{jy})+(v_{iz}*v_{jz})=(1*0)+(0*1)+(0*0)=0$$

I also read planetmath's description of permutations (something I am/was totally unfamiliar with): http://planetmath.org/encyclopedia/Permutation.html [Broken] I'm not entirely sure, but my initial take is that even/odd permutations have to do with the number of transpositions...which I do not understand. Is it as simple as saying that a vector with three elements will have 3! permutations?

As for the section I'm working on, it's a tutorial titled Vector Algebra and an Introduction to Matrices. Topics in this tutorial covered prior to this problem include:
1. Euclidean Vectors
1.1 Basic Features and Conventions
2. Vector Manipulations
2.1 Scalar Multiplication
2.3 Scalar Product
2.4 The Vector Product
2.5 Vector Components
3. Subscript Algebra
3.1 Summation Convention
3.2 The Kronecker Delta
3.3 The Levi-Civita Symbol

I am a third-semester physics undergrad with previous math that covers through Calculus III. This is my only experience with Linear Algebra, and my tutorial book is very flimsy in the way of instruction. I've gathered most of my understanding by doing my own research. Unfortunately, I just can't seem to get a handle on this one by myself.

Last edited by a moderator: May 2, 2017
9. Apr 1, 2006

### nrqed

As someone already mentioned, the indices i,j,k usually label the *components* of some vector, with the definition $v_1 = v_x, v_2= v_y, v_3 =v_z$.

You cannot know what values of i, j, k to use unless you have some context in which to use those quantities! The *context* will tell you what values to use. What I am saying is that what you have are the *definitions* of these symbols; it's only when you use them in some specific problem that you will know what values to give the indices.

The most famous example of using the Levi-Civita symbol is through the definition of the cross product. One way to write $\vec A \times \vec B = \vec C$ is to give a rule to calculate each component of the vector C by saying

$C_i = \sum_{j,k=1}^3 \epsilon_{ijk} A_j B_k$

(often people do not write the summation explicitly; leaving the sums implicit is called ''using the Einstein summation convention'').

Let's say you want the x component of C using the above formula. That fixes i to be 1. Then you get

$C_1 = \epsilon_{123} A_2 B_3 + \epsilon_{132} A_3 B_2$

(there are many more terms, for example $\epsilon_{112} A_1 B_2$ and so on, but they are all zero because the Levi-Civita symbol is zero whenever two indices are equal. So there are really 9 terms, corresponding to the 9 values that j and k may take, but of those nine terms only two are nonzero, and these are the two given above).

Now plug in the values for the levi-civita symbol for those indices and you get

$C_1 = A_2 B_3 - A_3 B_2$ which translates to $C_x = A_y B_z - A_z B_y$ as expected.

As an exercise, check that you get the correct results for C_y and C_z!!

So you see, when you use the symbols in specific problems you will know what values to take for the indices...
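The component formula above can be checked directly. This is a minimal sketch (the `epsilon` and `cross` helpers are illustrative names, not from the tutorial), implementing $C_i = \sum_{j,k} \epsilon_{ijk} A_j B_k$:

```python
# Levi-Civita symbol for indices 1..3: +1 for even permutations of 123,
# -1 for odd permutations, 0 whenever two indices repeat.
def epsilon(i, j, k):
    if (i, j, k) in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]:
        return 1
    if (i, j, k) in [(3, 2, 1), (2, 1, 3), (1, 3, 2)]:
        return -1
    return 0

def cross(A, B):
    # For each component i there are nine (j, k) terms, but only two are
    # nonzero, exactly as noted in the post. A and B are indexed from 1
    # in the math, hence A[j - 1] in the code.
    return [sum(epsilon(i, j, k) * A[j - 1] * B[k - 1]
                for j in range(1, 4) for k in range(1, 4))
            for i in range(1, 4)]

print(cross([1, 0, 0], [0, 1, 0]))  # x cross y = z -> [0, 0, 1]
```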

Hope this clarifies things

Patrick

10. Apr 1, 2006

### nrqed

I gave the most famous example of the use of the Levi-Civita symbol. The most famous use of the Kronecker delta is to define the scalar product.
One can write $\vec A \cdot \vec B = \sum_{i,j=1}^3 A_i B_j \delta_{ij}$. You should verify that this leads to the usual result $A_x B_x + A_y B_y + A_z B_z$.
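The identity above is easy to verify numerically. A minimal sketch (the vector values and helper names are made up for illustration):

```python
# A . B = sum over i and j of A_i B_j delta_{ij}: the delta kills every
# cross term, leaving the familiar A_x B_x + A_y B_y + A_z B_z.
def delta(i, j):
    return 1 if i == j else 0

def dot_via_delta(A, B):
    return sum(A[i] * B[j] * delta(i, j) for i in range(3) for j in range(3))

A, B = [1, 2, 3], [4, 5, 6]
print(dot_via_delta(A, B))  # -> 32, the same as 1*4 + 2*5 + 3*6
```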

Patrick

11. Apr 1, 2006

### Hurkyl

Staff Emeritus
I'm going to use superscripts and subscripts... but I suppose you don't need to distinguish the two, and can just write everything as a subscript.

Again, to restate what's being said so far, if we're working in 3-space, and we've chosen a basis on our vector space...

When we have a vector, we can write down its coordinates. I'm going to write coordinates as superscripts (in particular, the following are not exponents):

$$\vec{v} = \left[ \begin{array}{c} v^1 \\ v^2 \\ v^3 \end{array} \right]$$

So what is the i-th component of our vector $\vec{v}$? It's $v^i$.

When we have a covector, we do the same, but we use subscripts.

$$\hat{\omega} = [ \omega_1 \, \omega_2 \, \omega_3 ]$$

The i-th component of the covector $\hat{\omega}$ is then $\omega_i$.

When we have a matrix, we do the same:

$$\mathbf{A} = \left[ \begin{array}{ccc} A_1^1 & A_2^1 &A_3^1 \\ A_1^2 & A_2^2 &A_3^2 \\ A_1^3 & A_2^3 &A_3^3 \end{array} \right]$$

What is the (i, j)-th component of our matrix $\mathbf{A}$? It's $A^i_j$.

But the point is that we treat $\vec{v}$, $\hat{\omega}$, and $\mathbf{A}$ as simply being arrays of numbers. The first two were one-dimensional arrays, and the latter was a two-dimensional array. Subscripts and superscripts are how we indicate the actual elements of those arrays.
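The "arrays of numbers" view above can be sketched concretely (the values here are made up for illustration):

```python
# A vector is a one-dimensional array, a matrix a two-dimensional one;
# sub/superscripts simply pick out entries of the array.
v = [10, 20, 30]            # components v^1, v^2, v^3: v^i is v[i - 1]
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]             # the (i, j)-th component A^i_j is A[i - 1][j - 1]

print(v[1])     # v^2 -> 20
print(A[0][2])  # A^1_3 -> 3
```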

12. Apr 2, 2006

### Oxymoron

Levi-Civita symbols make long equations of tensors (or vectors if you prefer) short. For example, I assume that you have come across antisymmetric tensors? Well, if you haven't, they are tensors which change sign whenever you interchange any pair of their indices.

$$V(x^1,\dots,x^i,\dots,x^j,\dots,x^n) = -V(x^1,\dots,x^j,\dots,x^i,\dots,x^n)$$

Notice that V is an antisymmetric tensor with n components (or a vector in n dimensions, where each $x^k$ denotes one of its components). Notice that if we then interchange another pair of components (that is, doing it twice) the sign changes again! So we actually have

$$V(x^{\pi(1)},x^{\pi(2)},\dots,x^{\pi(n)}) = (-1)^{\pi}V(x^1,x^2,\dots,x^n)$$

where $\pi$ is some permutation of n indices. Now, you've read mathworld's section on permutations, so you know what an even permutation and an odd permutation are? If you have (instead of n indices) say 3 indices, 1,2,3, then 2,1,3 is an odd permutation because I have swapped a pair of indices 1 time (which is an odd number). What about 2,3,1? Well, I swapped 1,2 and then 1,3, which is twice, so this is an even permutation. When it comes to generalizing this for n indices instead of just three, it is important to keep track of your signs, because for an antisymmetric tensor we get a negative sign every time we swap the indices an odd number of times, and the negative sign disappears when we do it an even number of times (thus the $(-1)^{\pi}$ bit out the front). It gets a little more complicated, but the Levi-Civita symbols make this kind of index swapping very concise.

13. Mar 31, 2010

### lotm

You said what you were looking for was an intuitive picture of the Kronecker delta and Levi-Civita symbol? Well, it sounds like you're pretty damn close with the delta: I don't quite get why you start talking about vectors - you're on the right track when you say you have a matrix which is zero everywhere, except for a diagonal string of 1's from top left to bottom right. This matrix is of pretty massive significance: it's the identity matrix! So that's one way of understanding the delta (as giving the components of the identity matrix). In linear algebra, doing calculations by writing out full matrices can take ages - by talking about indices instead, things can be done much more quickly. The Kronecker delta allows us to talk about the identity matrix in terms of indices, which is good news given the identity matrix's ubiquity. Having a delta turn up in your calculations is usually good news, since it often allows you to cull some of your summations.
A sum over one index (let's say j) of $$\delta_{ij}$$ is basically an assertion that we can scrap the sum over j, get rid of the delta symbol, and replace all incidences of j by i. For example: $$\sum_{j=1}^3 \sum_{k=1}^3 A_{ij} \delta_{ij} B_{jk} = \sum_{k=1}^3 A_{ii} B_{ik}$$

(It's worth checking this out, either by writing out the summation explicitly or rewriting it in matrix notation.)
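One quick way to check it is numerically. A sketch with made-up matrices (note the code indexes from 0 rather than 1):

```python
# Verify the summation-culling identity: summing A_{ij} delta_{ij} B_{jk}
# over j collapses to the single term with j = i.
def delta(i, j):
    return 1 if i == j else 0

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]

for i in range(3):
    lhs = sum(A[i][j] * delta(i, j) * B[j][k]
              for j in range(3) for k in range(3))
    rhs = sum(A[i][i] * B[i][k] for k in range(3))
    assert lhs == rhs  # the delta lets us drop the j-sum and set j = i
```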

PS as is probably clear, I'm kind of new and haven't quite got the hang of the latex stuff yet - could someone tell me what I've done wrong? Thanks.

Last edited: Mar 31, 2010
14. Mar 31, 2010

### lotm

The L-C symbol is substantially trickier to get an intuitive handle on - I still don't really have a 'gut feel' for it, even after a few years. The closest thing is probably to understand it as a kind of 'shuffler', which swaps about entries inside matrices in a cyclic sort of way. Hence, it crops up in areas where these shufflings are desired: for example, in taking the determinant of a matrix (think of how each element gets multiplied by the elements which it does not share a row or a column with), or in taking the cross product of two vectors (where a scrambling is needed to ensure that the new vector is perpendicular to both the old ones), or in the curl of a vector field (which you may well not have done yet - roughly, the 'spiraliness' of a field). Of course, all three of these applications are linked - I was taught to find the cross product $\vec u \times \vec v$ by taking the determinant of the following matrix:

$$\left[ \begin{array}{ccc}\mathbf{i} & \mathbf{j} & \mathbf{k} \\u_1 & u_2 & u_3 \\v_1 & v_2 & v_3\end{array} \right]$$

and the curl is found by taking the cross product of the nabla operator with the field.

As to what manner of thing the L-C symbol is - it is indeed an example of a rank-3 tensor. Rank-3 tensors are (as you mention) extensions of the concept of matrices into three dimensions: if you like, you can imagine a three-dimensional array of numbers - a cube, 3 numbers to a side - containing the 27 entries, just as a vector can be displayed as a one-dimensional array of 3 entries, and a linear operator as a two-dimensional array of 9 entries. However, it's not an extension of the Kronecker delta specifically - as you can see, the two objects do quite different things to the matrices they act upon.
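The determinant connection mentioned above can be made concrete. This sketch (the helper names and matrices are illustrative, not from the thread) computes a 3×3 determinant from the formula $\det A = \sum_{i,j,k}\epsilon_{ijk}A_{1i}A_{2j}A_{3k}$:

```python
# The Levi-Civita symbol as a 'shuffler': it weights every permutation of
# the column indices by its sign, which is exactly the determinant sum.
from itertools import permutations

def epsilon(i, j, k):
    if (i, j, k) in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]:
        return 1
    if (i, j, k) in [(3, 2, 1), (2, 1, 3), (1, 3, 2)]:
        return -1
    return 0

def det3(A):
    # Sum eps_{ijk} A_{1i} A_{2j} A_{3k} over the six permutations of (1,2,3);
    # terms with a repeated index would vanish anyway, so we skip them.
    return sum(epsilon(i, j, k) * A[0][i - 1] * A[1][j - 1] * A[2][k - 1]
               for i, j, k in permutations((1, 2, 3)))

A = [[2, 0, 0], [0, 3, 0], [0, 0, 4]]
print(det3(A))  # -> 24 for this diagonal matrix
```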