Levi-Civita proofs for divergence of curls, etc

theuserman:
I've also posted this in the Physics forum as it applies to some physical aspects as well.
---

I want to know if I'm on the right track here. I'm asked to prove the following.

a) \nabla \cdot (\vec{A} \times \vec{B}) = \vec{B} \cdot (\nabla \times \vec{A}) - \vec{A} \cdot (\nabla \times \vec{B})
b) \nabla \times (f \vec{A}) = f(\nabla \times \vec{A}) - \vec{A} \times (\nabla f) (where f is a scalar function)

And I want (read: need, due to my professor's insistence) to prove these using Levi-Civita notation. I've used the following for reference:
http://www.uoguelph.ca/~thopman/246/indicial.pdf and http://folk.uio.no/patricg/teaching/a112/levi-civita/

Here are my attempts - I need to see if I have this notation down correctly...

a) \nabla \cdot (\vec{A} \times \vec{B})

= \partial_i \hat{u}_i \cdot \epsilon_{jkl} \vec{A}_j \vec{B}_k \hat{u}_l

= \partial_i \vec{A}_j \vec{B}_k \hat{u}_i \cdot \hat{u}_l \epsilon_{jkl}

Now I thought it'd be wise to use the identity that \hat{u}_i \cdot \hat{u}_l = \delta_{il}.

= \partial_i \vec{A}_j \vec{B}_k \delta_{il} \epsilon_{jkl}

In which the \delta_{il} sets l = i (and then just equals 1).

= \partial_i \vec{A}_j \vec{B}_k \epsilon_{jki}

Then using the product rule on the derivative we get two terms. Now, here's where I get a little mixed up. I'm wondering if we rearrange the factors and then modify the epsilon's indices to follow the order of the factors.

= \vec{B}_k \partial_i \vec{A}_j \epsilon_{jki} + \vec{A}_j \partial_i \vec{B}_k \epsilon_{jki}

Now the first epsilon can be reordered cyclically as \epsilon_{kij} (an even permutation, so the sign is unchanged), while putting the second into the order \epsilon_{jik} swaps two indices (odd), so \epsilon_{jki} = -\epsilon_{jik} and that term picks up a minus sign. We end up with the required result.

= \vec{B}_k \epsilon_{kij} \partial_i \vec{A}_j - \vec{A}_j \epsilon_{jik} \partial_i \vec{B}_k

= \vec{B} \cdot (\nabla \times \vec{A}) - \vec{A} \cdot (\nabla \times \vec{B})
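(Not part of the assignment, but I found it reassuring to check the identity symbolically. This is just a sketch assuming SymPy is available; the component functions A0..A2, B0..B2 and the little helpers cross/curl/div/dot are my own names, nothing standard.)

import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

# generic (unspecified) smooth component functions
A = [sp.Function('A%d' % i)(x, y, z) for i in range(3)]
B = [sp.Function('B%d' % i)(x, y, z) for i in range(3)]

def cross(U, V):
    return [U[1]*V[2] - U[2]*V[1],
            U[2]*V[0] - U[0]*V[2],
            U[0]*V[1] - U[1]*V[0]]

def curl(V):
    return [sp.diff(V[2], y) - sp.diff(V[1], z),
            sp.diff(V[0], z) - sp.diff(V[2], x),
            sp.diff(V[1], x) - sp.diff(V[0], y)]

def div(V):
    return sum(sp.diff(V[i], X[i]) for i in range(3))

def dot(U, V):
    return sum(u*v for u, v in zip(U, V))

lhs = div(cross(A, B))
rhs = dot(B, curl(A)) - dot(A, curl(B))
print(sp.simplify(lhs - rhs))   # prints 0, so the identity checks out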

b) \nabla \times (f \vec{A}) (where f is a scalar function)

= \partial_i f \vec{A}_j \hat{u}_k \epsilon_{ijk}

= f \partial_i \vec{A}_j \hat{u}_k \epsilon_{ijk} + \vec{A}_j \partial_i f \hat{u}_k \epsilon_{ijk}

The first term is already f (\nabla \times \vec{A}). In the second term, reading the factors in the order \vec{A}_j (\partial_i f) means reordering the epsilon to \epsilon_{jik}, which is an odd permutation, so that term picks up a minus sign and becomes -\vec{A} \times (\nabla f).

= f (\nabla \times \vec{A}) - \vec{A} \times (\nabla f)
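(If the sign ever feels suspicious, writing one component out longhand settles it. Here is the z-component, the same computation with no indices:)

[\nabla \times (f \vec{A})]_z = \partial_x (f A_y) - \partial_y (f A_x)

= f(\partial_x A_y - \partial_y A_x) + A_y \partial_x f - A_x \partial_y f

= f(\nabla \times \vec{A})_z - (\vec{A} \times \nabla f)_z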

Man, my hands hurt from all that TeX work :P It's been a while for me.
Since my teacher refuses to tell me if this is the correct method (he's only willing to show the concepts, and while I can appreciate that I don't want my mark to go to hell), can anyone help me out?
 
Some new equations I need to prove, and I have no idea how to go about them... I'll show what I've attempted so far.

a) \nabla (\vec{A} \cdot \vec{B}) = \vec{A} \times (\nabla \times \vec{B}) + \vec{B} \times (\nabla \times \vec{A}) + (\vec{A} \cdot \nabla)\vec{B} + (\vec{B} \cdot \nabla) \vec{A}

b) \nabla \times (\vec{A} \times \vec{B}) = (\vec{B} \cdot \nabla)\vec{A} - (\vec{A} \cdot \nabla)\vec{B} + \vec{A}(\nabla \cdot \vec{B}) - \vec{B}(\nabla \cdot \vec{A})

Attempts:

a) \nabla (\vec{A} \cdot \vec{B}) - working with the i-th component of the gradient, so the operator is \partial_i:

= \partial_i (\vec{A}_j \hat{u}_j \cdot \vec{B}_k \hat{u}_k)

= \partial_i (\vec{A}_j \vec{B}_j)

= \vec{B}_j \partial_i \vec{A}_j + \vec{A}_j \partial_i \vec{B}_j

Now I'm thinking we work out the other side of the equation and see if the terms add up.
1st term :
\vec{A} \times (\nabla \times \vec{B})

= \epsilon_{ijk} \vec{A}_j (\nabla \times \vec{B})_k

= \epsilon_{ijk} \vec{A}_j \epsilon_{klm} \nabla_l \vec{B}_m

= \epsilon_{kij} \epsilon_{klm} \vec{A}_j \nabla_l \vec{B}_m

= (\delta_{il} \delta_{jm} - \delta_{im} \delta_{jl}) \vec{A}_j \nabla_l \vec{B}_m

So the first pair of deltas sets l = i and m = j, and the second pair sets m = i and l = j (the free index i has to survive):

= \vec{A}_j \nabla_i \vec{B}_j - \vec{A}_j \nabla_j \vec{B}_i

= \vec{A}_j \partial_i \vec{B}_j - \vec{A}_j \partial_j \vec{B}_i

Which seems to resemble what I got when I did this normally...
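(Side note: the contraction identity \epsilon_{kij} \epsilon_{klm} = \delta_{il} \delta_{jm} - \delta_{im} \delta_{jl} used above can be spot-checked numerically. A quick NumPy sketch, with eps built from the usual parity formula; the variable names are just mine:)

import numpy as np

# Levi-Civita symbol: +1 for even permutations of (0,1,2), -1 for odd, 0 otherwise
eps = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2

delta = np.eye(3)

# Contract the two epsilons over their first index: eps_{kij} eps_{klm}
lhs = np.einsum('kij,klm->ijlm', eps, eps)
rhs = np.einsum('il,jm->ijlm', delta, delta) - np.einsum('im,jl->ijlm', delta, delta)
print(np.allclose(lhs, rhs))   # True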

2nd term: (the same calculation with \vec{A} and \vec{B} swapped)
\vec{B} \times (\nabla \times \vec{A})

= \vec{B}_j \partial_i \vec{A}_j - \vec{B}_j \partial_j \vec{A}_i

3rd term:
(\vec{A} \cdot \nabla)\vec{B}

= (\vec{A}_j \nabla_j)\vec{B}_i

= \vec{A}_j \nabla_j \vec{B}_i

= \vec{A}_j \partial_j \vec{B}_i

4th term:
(\vec{B} \cdot \nabla)\vec{A}

= \vec{B}_j \partial_j \vec{A}_i

Adding them all up gets us:

\vec{A}_j \partial_i \vec{B}_j - \vec{A}_j \partial_j \vec{B}_i + \vec{B}_j \partial_i \vec{A}_j - \vec{B}_j \partial_j \vec{A}_i + \vec{A}_j \partial_j \vec{B}_i + \vec{B}_j \partial_j \vec{A}_i

The (\vec{A} \cdot \nabla)\vec{B} and (\vec{B} \cdot \nabla)\vec{A} terms cancel the two negative terms outright (they are the same expressions, the indices are just dummies), leaving

\vec{B}_j \partial_i \vec{A}_j + \vec{A}_j \partial_i \vec{B}_j

Which is exactly what I got for the i-th component of \nabla (\vec{A} \cdot \vec{B}) above!
HAH!

b) \nabla \times (\vec{A} \times \vec{B}) - I'm pretty sure I have this down.

= \partial_l \hat{u}_l \times (\vec{A}_i \vec{B}_j \hat{u}_k \epsilon_{ijk})

=\partial_l \vec{A}_i \vec{B}_j \epsilon_{ijk} (\hat{u}_l \times \hat{u}_k)

[(\hat{u}_l \times \hat{u}_k) = \hat{u}_m \epsilon_{lkm}]

=\partial_l \vec{A}_i \vec{B}_j \hat{u}_m \epsilon_{ijk} \epsilon_{mlk}

=\partial_l \vec{A}_i \vec{B}_j \hat{u}_m (\delta_{im} \delta_{jl} - \delta_{il} \delta_{jm})

= \partial_j (\vec{A}_i \vec{B}_j) \hat{u}_i - \partial_i (\vec{A}_i \vec{B}_j) \hat{u}_j

Using the product rule on each derivative:

= \vec{A}_i \partial_j \vec{B}_j \hat{u}_i + \vec{B}_j \partial_j \vec{A}_i \hat{u}_i - (\vec{A}_i \partial_i \vec{B}_j \hat{u}_j + \vec{B}_j \partial_i \vec{A}_i \hat{u}_j)

= \vec{A}(\nabla \cdot \vec{B}) + (\vec{B} \cdot \nabla)\vec{A} - (\vec{A} \cdot \nabla)\vec{B} - \vec{B}(\nabla \cdot \vec{A})
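(Again, not required, but both identities from this post can be checked symbolically. A sketch assuming SymPy; the component functions A0..A2, B0..B2 and the helpers grad/div/curl/cross/dot/dirderiv are my own names:)

import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
A = [sp.Function('A%d' % i)(x, y, z) for i in range(3)]
B = [sp.Function('B%d' % i)(x, y, z) for i in range(3)]

def grad(f):
    return [sp.diff(f, xi) for xi in X]

def div(V):
    return sum(sp.diff(V[i], X[i]) for i in range(3))

def curl(V):
    return [sp.diff(V[2], y) - sp.diff(V[1], z),
            sp.diff(V[0], z) - sp.diff(V[2], x),
            sp.diff(V[1], x) - sp.diff(V[0], y)]

def cross(U, V):
    return [U[1]*V[2] - U[2]*V[1],
            U[2]*V[0] - U[0]*V[2],
            U[0]*V[1] - U[1]*V[0]]

def dot(U, V):
    return sum(u*v for u, v in zip(U, V))

def dirderiv(U, V):
    # (U . nabla) V, applied component by component
    return [dot(U, grad(V[i])) for i in range(3)]

# a) grad(A.B) = A x (curl B) + B x (curl A) + (A.nabla)B + (B.nabla)A
AxcB, BxcA = cross(A, curl(B)), cross(B, curl(A))
lhs_a = grad(dot(A, B))
rhs_a = [AxcB[i] + BxcA[i] + dirderiv(A, B)[i] + dirderiv(B, A)[i] for i in range(3)]

# b) curl(A x B) = (B.nabla)A - (A.nabla)B + A(div B) - B(div A)
lhs_b = curl(cross(A, B))
rhs_b = [dirderiv(B, A)[i] - dirderiv(A, B)[i] + A[i]*div(B) - B[i]*div(A) for i in range(3)]

print([sp.simplify(lhs_a[i] - rhs_a[i]) for i in range(3)])   # [0, 0, 0]
print([sp.simplify(lhs_b[i] - rhs_b[i]) for i in range(3)])   # [0, 0, 0]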

Alright, so it took me 2 hours to type it but I managed to figure it out.
Thanks anyway.
 
theuserman said:
I've also posted this in the Physics forum as it applies to some physical aspects as well.
---

I want to know if I'm on the right track here. I'm asked to prove the following.

a) \nabla \cdot (\vec{A} \times \vec{B}) = \vec{B} \cdot (\nabla \times \vec{A}) - \vec{A} \cdot (\nabla \times \vec{B})
...
a) \nabla \cdot (\vec{A} \times \vec{B})

= \partial_i \hat{u}_i \cdot \epsilon_{jkl} \vec{A}_j \vec{B}_k \hat{u}_l

= \partial_i \vec{A}_j \vec{B}_k \hat{u}_i \cdot \hat{u}_l \epsilon_{jkl}

Now I thought it'd be wise to use the identity that \hat{u}_i \cdot \hat{u}_l = \delta_{il}.

I haven't got a lot of wherewithal at the moment to put a lot into this, but I can help a little with equation a).

Change your equation to:

a) \nabla \cdot (\vec{A} \times \vec{B}) = \partial_i \hat{u}_i \cdot \epsilon_{jki} \vec{A}_j \vec{B}_k \hat{u}_i

Then using the product rule on the derivative we get two terms. Now, here's where I get a little mixed up. I'm wondering if we rearrange the factors and then modify the epsilon's indices to follow the order of the factors.

= \vec{B}_k \partial_i \vec{A}_j \epsilon_{jki} + \vec{A}_j \partial_i \vec{B}_k \epsilon_{jki}

Now the first epsilon can be reordered cyclically as \epsilon_{kij} (an even permutation, so the sign is unchanged), while putting the second into the order \epsilon_{jik} swaps two indices (odd), so \epsilon_{jki} = -\epsilon_{jik} and that term picks up a minus sign. We end up with the required result.

= \vec{B} \cdot (\nabla \times \vec{A}) - \vec{A} \cdot (\nabla \times \vec{B})

For my own work, when things get a little dicey, I define e.

e_{ijk} = 1 where i,j,k is an even (cyclic) permutation of (1,2,3), and 0 otherwise.

e_{123} = e_{231} = e_{312} = 1

To make things a bit more transparent, define \overline{e}:

\overline{e}_{ijk} = -1 where i,j,k is an odd permutation of (1,2,3), and 0 otherwise.

\overline{e}_{321} = \overline{e}_{213} = \overline{e}_{132} = -1

So you get

\epsilon_{ijk} = e_{ijk} + \overline{e}_{ijk}
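(In case it helps, that split is easy to spot-check numerically. A quick NumPy sketch; e_plus and e_minus here stand in for e and \overline{e}, and the parity formula is just one convenient way to build \epsilon:)

import numpy as np

# Levi-Civita symbol via the parity formula, indices running over 0,1,2
eps = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2

e_plus = np.where(eps > 0, eps, 0.0)    # the +1 (cyclic) entries
e_minus = np.where(eps < 0, eps, 0.0)   # the -1 (anti-cyclic) entries
print(np.array_equal(eps, e_plus + e_minus))   # True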
 
Phrak,
Thanks for the tip with \epsilon. As for your first suggestion, it makes sense to make it i, since it is a dummy variable anyway.

Thanks.
 
Just for completeness, I'm going to prove that:

\nabla \times (\nabla \times \vec{v}) = \nabla(\nabla \cdot \vec{v}) - \nabla^2\vec{v}

So we have, working with the k-th component,

[\nabla \times (\nabla \times \vec{v})]_k

= \partial_i(\nabla \times \vec{v})_j \epsilon_{ijk}

= \partial_i(\partial_l v_m \epsilon_{jlm}) \epsilon_{ijk}

= \partial_i \partial_l v_m \epsilon_{jlm} \epsilon_{ijk}

= \partial_i \partial_l v_m (\delta_{lk}\delta_{mi} - \delta_{li}\delta_{mk})

= \delta_{lk}\delta_{mi} \partial_i \partial_l v_m - \delta_{li}\delta_{mk}\partial_i \partial_l v_m

= \partial_m \partial_k v_m - \partial_i \partial_i v_k

= \partial_k \partial_m v_m - \partial_i^2 v_k

= [\nabla(\nabla \cdot \vec{v})]_k - [\nabla^2 \vec{v}]_k

Voilà.
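(Same kind of sanity check as before, if anyone wants it: a short SymPy sketch; v0..v2 and the helpers grad/div/curl/laplacian are my own names.)

import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
v = [sp.Function('v%d' % i)(x, y, z) for i in range(3)]

def grad(f):
    return [sp.diff(f, xi) for xi in X]

def div(V):
    return sum(sp.diff(V[i], X[i]) for i in range(3))

def curl(V):
    return [sp.diff(V[2], y) - sp.diff(V[1], z),
            sp.diff(V[0], z) - sp.diff(V[2], x),
            sp.diff(V[1], x) - sp.diff(V[0], y)]

def laplacian(f):
    return sum(sp.diff(f, xi, 2) for xi in X)

lhs = curl(curl(v))
rhs = [grad(div(v))[i] - laplacian(v[i]) for i in range(3)]
print([sp.simplify(lhs[i] - rhs[i]) for i in range(3)])   # [0, 0, 0]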
 
theuserman said:
Just for completeness, I'm going to prove that:

\nabla \times (\nabla \times \vec{v}) = \nabla(\nabla \cdot \vec{v}) - \nabla^2\vec{v}
...
= \partial_m \partial_k v_m - \partial_i \partial_i v_k

= \partial_k \partial_m v_m - \partial_i^2 v_k

= [\nabla(\nabla \cdot \vec{v})]_k - [\nabla^2 \vec{v}]_k

Voilà.

I am wondering why it goes the way it does from the fourth-to-last step onwards.

Going from the fourth-to-last to the third-to-last step, why do the indices become what they are? How exactly does one work out the Kronecker deltas here?

Also, in the last step, \nabla(\nabla \cdot \vec{v}): does that mean the outer \nabla(\ldots) is a plain multiplication? It is neither a cross nor a dot product, so should I treat it as a scalar multiplication?

Also, \nabla, \nabla and \vec{v} are all vectors, right? So the dot product gives a number, and the outer \nabla is just a scalar multiplication?

Also, when we take a cross product, say \vec{A} \times \vec{B}, where [\vec{A} \times \vec{B}]_i = \epsilon_{ijk} A_j B_k: when we say "the i-th component", does i refer to the unit vector \hat{u}_i (say the x direction), or is i just a dummy variable? And what does "the i-th component" mean - does it mean there are also j and k components, and when we sum them up we get the vector \vec{A} \times \vec{B}?
 