Simplifying Index Notation in Vector Calculus

SUMMARY

The discussion focuses on simplifying index notation in vector calculus, specifically the expression \((\vec{r} \times \vec{\nabla}) \cdot (\vec{r} \times \vec{\nabla})\). The correct index notation is established as \(x_i\partial_j x_i\partial_j - x_i\partial_j x_j\partial_i\). The participants confirm the equivalence of both sides of the equation using the Levi-Civita symbol and the Kronecker delta, demonstrating that the difference lies solely in the naming of dummy summation indices. This clarification resolves the confusion regarding the derivation of the terms.

PREREQUISITES
  • Understanding of vector calculus and operations involving the gradient operator.
  • Familiarity with index notation and tensor calculus.
  • Knowledge of the Levi-Civita symbol and Kronecker delta.
  • Experience with manipulating summation indices in mathematical expressions.
NEXT STEPS
  • Study the properties of the Levi-Civita symbol in tensor calculus.
  • Learn about the application of Kronecker delta in simplifying tensor equations.
  • Explore advanced topics in vector calculus, such as curl and divergence.
  • Practice converting vector expressions into index notation for better comprehension.
USEFUL FOR

Mathematicians, physicists, and engineering students who are working with vector calculus and need to understand index notation and tensor operations.

andrien
(r×∇)·(r×∇) = r·[∇×(r×∇)]
now in index notation it is written as,
= x_i∂_j x_i∂_j − x_i∂_j x_j∂_i
but when I tried to prove it, it came out twice as large. Can anyone tell me how the given form is correct? What I mean is that I got four terms which, after relabeling indices, gave twice the expression above.
 
All your formulae are correct.

The left-hand side of your first equation reads in index notation
[tex](\vec{r} \times \vec{\nabla}) \cdot (\vec{r} \times \vec{\nabla}) \phi= \epsilon_{jkl} r_k \partial_l (\epsilon_{jmn} r_m \partial_n \phi).[/tex]
Now using
[tex]\epsilon_{jkl} \epsilon_{jmn}=\delta_{km} \delta_{ln}-\delta_{kn} \delta_{lm},[/tex]
you indeed get
[tex](\vec{r} \times \vec{\nabla}) \cdot (\vec{r} \times \vec{\nabla}) \phi = r_{k} \partial_l(r_k \partial_l \phi)-r_k \partial_l(r_l \partial_k \phi).[/tex]
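As a quick sanity check, the epsilon-delta contraction identity used above can be verified numerically for all index values. This is a minimal sketch using NumPy; the `einsum` index strings mirror the tensor indices in the formula:

```python
import numpy as np

# Levi-Civita symbol eps[j, k, l]: +1 on even permutations of (0, 1, 2),
# -1 on odd permutations, 0 whenever an index repeats.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

delta = np.eye(3)  # Kronecker delta

# Left side: contract the shared first index j in eps_{jkl} eps_{jmn}.
lhs = np.einsum('jkl,jmn->klmn', eps, eps)
# Right side: delta_{km} delta_{ln} - delta_{kn} delta_{lm}.
rhs = np.einsum('km,ln->klmn', delta, delta) - np.einsum('kn,lm->klmn', delta, delta)

assert np.array_equal(lhs, rhs)  # identity holds for every k, l, m, n
```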

The right-hand side of your first equation is
[tex]\vec{r} \cdot [\vec{\nabla} \times (\vec{r} \times \vec{\nabla}) \phi] = r_j \epsilon_{jkl} \partial_k (\epsilon_{lmn} r_m \partial_n \phi).[/tex]
Again we have
[tex]\epsilon_{jkl} \epsilon_{lmn}=\epsilon_{ljk} \epsilon_{lmn}=\delta_{jm} \delta_{kn} - \delta_{jn} \delta_{km}.[/tex]
Thus we have
[tex]\vec{r} \cdot [\vec{\nabla} \times (\vec{r} \times \vec{\nabla}) \phi] = r_j \partial_k (r_j \partial_k \phi)-r_j \partial_k (r_k \partial_j \phi).[/tex]
This shows that both sides of your first equation are indeed equal, because the only difference is the naming of the dummy summation indices :-).
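The whole equality can also be cross-checked symbolically by applying both operator expressions to a concrete scalar field. This is a sketch using SymPy; the test function `phi` is an arbitrary choice (any smooth function would do), and `L(j, f)` implements the component \((\vec{r} \times \vec{\nabla})_j f = \epsilon_{jkl} r_k \partial_l f\) via the cross product:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
r = sp.Matrix([x, y, z])
phi = sp.exp(x) * sp.sin(y) * z**2  # arbitrary smooth test function

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def L(j, f):
    # j-th component of (r x grad) applied to f
    return r.cross(grad(f))[j]

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

# (r x grad).(r x grad) phi: apply the operator twice, component by component.
lhs = sum(L(j, L(j, phi)) for j in range(3))
# r.[grad x (r x grad) phi]: dot r with the curl of the vector (r x grad) phi.
rhs = r.dot(curl(r.cross(grad(phi))))

assert sp.simplify(lhs - rhs) == 0
```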
 
[tex]\epsilon_{jkl} \epsilon_{jmn}=\delta_{km} \delta_{ln}-\delta_{kn} \delta_{lm},[/tex]
I was aware of it; I saw it in Butkov a year ago, but this is the first time I have actually used it. So thanks, van. I think I am just getting lazy.
 
