Vector identity proof using index notation

In summary: the identity ##\vec \nabla (\vec A\cdot \vec B)=(\vec A \cdot \vec \nabla)\vec B + (\vec B \cdot \vec \nabla)\vec A + \vec B\times(\vec \nabla \times \vec A) +\vec A\times(\vec \nabla \times \vec B)## is proved by writing each term of the right-hand side in index notation and contracting the products of Levi-Civita symbols into Kronecker deltas.
  • #1
darthvishous

Homework Statement


I am trying to prove
$$\vec{\nabla}(\vec{a}\cdot\vec{b}) = (\vec{a}\cdot\vec{\nabla})\vec{b} + (\vec{b}\cdot\vec{\nabla})\vec{a} + \vec{b}\times\vec{\nabla}\times\vec{a} + \vec{a}\times\vec{\nabla}\times\vec{b}.$$ I can go from the RHS to the LHS by writing $$\vec{b}\times\vec{\nabla}\times\vec{a}$$ as $$\epsilon_{ijk}b_k\epsilon_{klm}\partial_la_m=b_j\partial_ia_j-a_j\partial_jv_i$$, but I am unable to do it the other way. Any help is appreciated.

Homework Equations

The Attempt at a Solution

 
  • #2
This is the homework forum, so you have to show your attempt up to the point where you're stuck.

If you can do it one way, you can do it the other. Just write down all those things you found to be equal to each other in the reverse order.

There are some typos (I think) in the small piece of your work that you included in your post. The ##b_k## on the left should be ##b_i##, right? And that last term on the right looks odd: what is ##v##? Should the ##a## be a ##b##?
 
  • #3
The equation to be proven was improperly written: the vectors in the third and fourth terms were not grouped. Replacing the lower-case-letter vectors with upper-case letters, we should have $$\vec \nabla (\vec A\cdot \vec B)=(\vec A \cdot \vec \nabla)\vec B + (\vec B \cdot \vec \nabla)\vec A + \vec B\times(\vec \nabla \times \vec A) +\vec A\times(\vec \nabla \times \vec B) $$ or, for better viewing, $$(1){~~~~~~~}\vec \nabla (\vec A\cdot \vec B)=(\vec A \cdot \vec \nabla)\vec B + (\vec B \cdot \vec \nabla)\vec A + [~\vec B\times(\vec \nabla \times \vec A)~] + [~\vec A\times(\vec \nabla \times \vec B)~]{~~~}$$ The grouping of vectors in triple cross products must never be taken for granted.
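As a quick aside on that grouping remark: the cross product is not associative, so ##(\vec a \times \vec b)\times \vec c## and ##\vec a \times(\vec b \times \vec c)## generally differ. A minimal NumPy sketch (random vectors chosen purely for illustration) makes the point numerically:

```python
# Illustration only: the cross product is not associative, so the grouping
# in a triple cross product matters.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))  # three random 3-vectors

left_grouping = np.cross(np.cross(a, b), c)   # (a x b) x c
right_grouping = np.cross(a, np.cross(b, c))  # a x (b x c)

print(left_grouping)
print(right_grouping)
print(np.allclose(left_grouping, right_grouping))  # False for generic vectors
```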
The evaluation of the above expression using suffix notation involves the combination of the three elementary vector operations of computing the gradient, the scalar product (dot product) of two vectors, and the cross product of two vectors. Starting from a vector ##\vec A~##with components ##~(A_1 , A_2 , A_3)~##written in terms of unit basis vectors, we obtain, after setting ##\hat {\rm e} _{\rm x} = \hat {\rm e}_1~,~\hat {\rm e} _{\rm y} = \hat {\rm e}_2~,~\text {and}~\hat {\rm e} _{\rm z} = \hat {\rm e}_3~##, the equation$$\vec A = (A_1 , A_2 , A_3) = A_1{\hat {\rm e} }_{\rm x} + A_2{\hat {\rm e} }_{\rm y} + A_3{\hat {\rm e} }_{\rm z} = \sum_{i=1}^3 A_i{\hat {\rm e} }_i = A_i{\hat {\rm e} }_i~\Rightarrow~[~\vec A~]_i = A_i {~~}$$where ##\mathbf {repeated~indices}## without the usual summation sign ##\mathbf {denote}## ##\mathbf {summation}~-## hence the name ##summation~index## = ##repeated~index~-## over the range of ##i## from 1 to any integer ##n## (##n## = 3 in this case). As a rule, ##\mathbf {no~index~may}## ##\mathbf {occur~more~than~twice}## in any given expression. From there, we proceed to the divergence of the position vector$$\vec {\nabla} \cdot \vec r = \begin{cases}\begin{align} & \left[ \sum_{i=1}^3 {\hat {\rm e} }_i { \frac {\partial}{\partial x_i } } \right] \cdot \left[ {\sum_{j=1}^3 x_j{\hat {\rm e} }_j } \right] = \left[ \sum_{i=1}^3\hat {\rm e}_i \partial_i \right] \cdot \left[ {\sum_{j=1}^3 x_j{\hat {\rm e} }_j } \right] \nonumber \\ & ~[\hat {\rm e}_i \partial_i ] \cdot [ \hat {\rm e}_j x_j ] = [ \hat {\rm e}_i \cdot \hat {\rm e}_j ] \partial_i x_j = \delta_{ij} \partial_i x_j = 3~\dots \nonumber \end{align}\end{cases}$$ $$ \text {Kronecker delta symbol}~\delta_{ij} =\begin{cases}\begin{align} & 0~{\text {if} }~i \neq j~,~\text {orthogonality of}~\hat {\rm e}_i~,~\hat {\rm e}_j~\dots \nonumber \\ & 1~{\text {if}}~i = j~,~\text {their orthonormality} \dots\nonumber \end{align}\end{cases}$$Recall that the unit basis vectors are orthonormal to each other and that ##~\partial_j = \partial/\partial x_j~.~##Finally, for the cross product between two vectors, given in component form ##\vec A = (A_1 , A_2 , A_3)~ \text {and}~\vec B = (B_1 , B_2 , B_3)~##, that produces a third vector ##\vec C = (C_1 , C_2 , C_3)~##, such that ##\vec C = \vec A \times \vec B##, we find that$$C_i =\sum_{j,k=1}^3 πœ€_{ijk} A_j B_k = πœ€_{ijk} A_j B_k ~\Leftarrow~πœ€_{ijk} = \begin{cases}\begin{align} &~0~\text {if any index is equal to} \nonumber \\
&~\text {any other index}~\dots\nonumber \\ &+1~\text {if}~i ,j, k~\text {form an even} \nonumber \\ &~\text {permutation (a cyclic} \nonumber \\ &~\text {permutation) of 1, 2, 3}~\dots\nonumber \\ & -1~\text {if}~i ,j, k~\text {form an odd} \nonumber \\ &~\text {permutation of 1, 2, 3}~\dots\nonumber \end{align}\end{cases}$$The symbol##~πœ€_{ijk}~##is called the ##\text {Levi-Civita symbol (aka}~permutation~symbol) .##
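As a side check (not part of the original derivation), the Levi-Civita symbol can be tabulated as a ##3\times 3\times 3## array and the component formula ##C_i = πœ€_{ijk} A_j B_k## compared against an ordinary cross product. A minimal NumPy sketch, with the array name and the sample vectors chosen only for illustration:

```python
# Sketch: tabulate the Levi-Civita symbol and check C_i = eps_{ijk} A_j B_k
# against numpy.cross. Names (eps, A, B) are chosen for illustration.
import itertools
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in itertools.permutations(range(3)):
    # Sign of the permutation (i, j, k): +1 if even (cyclic), -1 if odd.
    eps[i, j, k] = np.linalg.det(np.eye(3)[[i, j, k]])

A = np.array([1.0, -2.0, 3.0])
B = np.array([4.0, 0.5, -1.0])

C = np.einsum("ijk,j,k->i", eps, A, B)  # C_i = eps_{ijk} A_j B_k
print(C)
print(np.allclose(C, np.cross(A, B)))   # True
```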
The preceding considerations can now be applied to each of the four terms in eq. (1) that lead to the following results:
$$(2){~~~~~~~~~~~~~~~}(\vec A \cdot \vec \nabla)\vec B =\begin{bmatrix}~[ (A_i \hat {\rm e}_i) \cdot (\partial_j \hat {\rm e}_j) ] B_k \hat {\rm e}_k = [ \hat {\rm e}_i \cdot \hat {\rm e}_j ] (A_i \partial_j) B_k \hat {\rm e}_k~ \\ \delta_{ij} (A_i \partial_j) B_k \hat {\rm e}_k = (A_i \partial_i) B_k \hat {\rm e}_k \\ \Rightarrow~[~(\vec A \cdot \vec \nabla)\vec B~]_k = (A_i \partial_i) B_k \end{bmatrix}{~~~~~~~~~~~~~~~}$$ $$(3){~~~~~~~~~~~~~~~}(\vec B \cdot \vec \nabla)\vec A =\begin{bmatrix}~[ (B_i \hat {\rm e}_i) \cdot (\partial_j \hat {\rm e}_j) ] A_k \hat {\rm e}_k = [ \hat {\rm e}_i \cdot \hat {\rm e}_j ] (B_i \partial_j) A_k \hat {\rm e}_k~ \\ \delta_{ij} (B_i \partial_j) A_k \hat {\rm e}_k = (B_i \partial_i) A_k \hat {\rm e}_k \\ \Rightarrow~[~(\vec B \cdot \vec \nabla)\vec A~]_k = (B_i \partial_i) A_k \end{bmatrix}{~~~~~~~~~~~~~~~}$$ In addition ##~\dots~\vec C = \nabla \times \vec A~\Rightarrow~C_i = (\nabla \times \vec A)_i = πœ€_{ijk}\partial_j A_k~\dots## $$\vec B\times(\vec \nabla \times \vec A)\Rightarrow [~\vec B\times(\vec \nabla \times \vec A)~]_k = \begin{cases} \begin{align} & πœ€_{kst} B_s C_t = πœ€_{kst} B_s [ πœ€_{tjp} (\partial_j A_p) ] ~\dots\nonumber \\ & πœ€_{kst} πœ€_{tjp} B_s (\partial_j A_p)~\dots\nonumber\end{align} \end{cases}$$The product of two Levi-Civita symbols has one important property whenever the last index in the first symbol is the same as the first index in the second:$$πœ€_{kst}πœ€_{tjp} B_s (\partial_j A_p) = ( \delta_{kj} \delta_{sp} - \delta_{kp} \delta_{sj} ) B_s (\partial_j A_p)$$ $$(4){~~~~~}\vec B\times(\vec \nabla \times \vec A) \Rightarrow [~\vec B\times(\vec \nabla \times \vec A)~]_k =\begin{cases}\begin{align} & \delta_{kj} \delta_{sp} B_s (\partial_j A_p) - \delta_{kp} \delta_{sj} B_s (\partial_j A_p) \nonumber \\ & B_p (\partial_k A_p) - B_j (\partial_j A_k) \nonumber \end{align}\end{cases}$$ Further still ##~\dots~\vec D = \nabla \times \vec B~\Rightarrow~D_l = (\nabla \times \vec B)_l = πœ€_{lmn} \partial_m B_n ~\dots## $$\vec A\times(\vec \nabla \times \vec B)\Rightarrow [~\vec A\times(\vec \nabla \times \vec B)~]_k =\begin{cases} \begin{align} & πœ€_{kpq} A_p D_q = πœ€_{kpq} A_p [ πœ€_{qmn} \partial_m B_n ]~\dots \nonumber \\ & πœ€_{kpq} πœ€_{qmn} A_p (\partial_m B_n)~ \dots\nonumber \end{align} \end{cases}$$ According to the property of the product of two Levi-Civita symbols once more,$$πœ€_{kpq} πœ€_{qmn} A_p (\partial_m B_n) = ( \delta_{km} \delta_{pn} - \delta_{kn} \delta_{pm} ) A_p (\partial_m B_n)$$ $$(5){~~~~}\vec A\times(\vec \nabla \times \vec B) \Rightarrow [~\vec A\times(\vec \nabla \times \vec B)~]_k =\begin{cases}\begin{align} & \delta_{km} \delta_{pn} A_p \partial_m B_n - \delta_{kn} \delta_{pm} A_p \partial_m B_n \nonumber \\ & A_n (\partial_k B_n) - A_m (\partial_m B_k) \nonumber\end{align} \end{cases}$$ According to eqs. (2), (3), (4), and (5), eq. (1) can be written as $$(6){~~~~~~~~~~~~~~~~~}[~\vec \nabla (\vec A\cdot \vec B)~]_k = \begin{bmatrix}~[~(A_i \partial_i) B_k~]^* + [~(B_i \partial_i) A_k~]^{**} \\+~[~B_p (\partial_k A_p) - (B_j \partial_j A_k)^{**}~] ~\\~+ [~A_n (\partial_k B_n) - (A_m \partial_m B_k)^*~]~ \end{bmatrix}{~~~~~~~~~~~~~~~~~~~~~~~~~}$$ Summing over repeated indices in eq. (6):
$$[~\vec \nabla (\vec A\cdot \vec B)~]_k =\begin{cases}\begin{align}&~(A_1 \partial_1) B_k^* + (A_2 \partial_2) B_k^{**} + (A_3 \partial_3) B_k^{***} \nonumber \\ &~+ (B_1 \partial_1) A_k^{\uparrow} + (B_2 \partial_2) A_k^{\uparrow\uparrow} + (B_3 \partial_3)A_k^{\uparrow\uparrow\uparrow} \nonumber \\ &~+ B_1 (\partial_k A_1) + B_2 (\partial_k A_2) + B_3 (\partial_k A_3) \nonumber \\ &~- B_1 (\partial_1 A_k)^{\uparrow} - B_2 (\partial_2 A_k)^{\uparrow\uparrow} - B_3 (\partial_3 A_k)^{\uparrow\uparrow\uparrow} \nonumber \\ &~+ A_1 (\partial_k B_1) + A_2 (\partial_k B_2) + A_3 (\partial_k B_3) \nonumber \\ &~- A_1 (\partial_1 B_k)^* - A_2 (\partial_2 B_k)^{**} - A_3 (\partial_3 B_k)^{***} \nonumber \end{align}\end{cases}$$Terms carrying the same marker, (*) or (##^{\uparrow}##), cancel in pairs, leaving only the following: $${~~~~~~~~}[~\vec \nabla (\vec A\cdot \vec B)~]_k =\begin{cases}\begin{align}&~B_1 (\partial_k A_1) + B_2 (\partial_k A_2)^{\downarrow} + B_3 (\partial_k A_3)^{\downarrow\downarrow} \nonumber \\ &~+ A_1 (\partial_k B_1) + A_2 (\partial_k B_2)^{\downarrow} + A_3 (\partial_k B_3)^{\downarrow\downarrow} \nonumber\end{align}\end{cases}$$ The surviving terms combine in pairs through the product rule, ##B_i (\partial_k A_i) + A_i (\partial_k B_i) = \partial_k (A_i B_i)##, so that $$[~\vec \nabla (\vec A\cdot \vec B)~]_k = \partial_k ( A_1 B_1 ) + \partial_k ( A_2 B_2 ) + \partial_k ( A_3 B_3 ) = \partial_k (\vec A \cdot \vec B)~,$$which is exactly the ##k##-th component of the left-hand side of eq. (1), completing the proof.
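The contraction identity ##πœ€_{kst}πœ€_{tjp} = \delta_{kj}\delta_{sp} - \delta_{kp}\delta_{sj}## used in eqs. (4) and (5) can also be checked by brute force over all index values. A short NumPy sketch (illustrative only, not part of the original derivation):

```python
# Sketch: brute-force check of the contraction identity
#   eps_{kst} eps_{tjp} = delta_{kj} delta_{sp} - delta_{kp} delta_{sj}
# over all index values 0..2.
import itertools
import numpy as np

# Levi-Civita symbol as a 3x3x3 array.
eps = np.zeros((3, 3, 3))
for i, j, k in itertools.permutations(range(3)):
    eps[i, j, k] = np.linalg.det(np.eye(3)[[i, j, k]])

delta = np.eye(3)

lhs = np.einsum("kst,tjp->ksjp", eps, eps)                # sum over t
rhs = (np.einsum("kj,sp->ksjp", delta, delta)
       - np.einsum("kp,sj->ksjp", delta, delta))
print(np.allclose(lhs, rhs))  # True
```

For anyone who wants an independent check of eq. (1) itself, the whole identity can be verified symbolically as well. A minimal SymPy sketch, using two arbitrarily chosen example fields (any smooth fields would do):

```python
# Sketch: symbolic verification of eq. (1) with SymPy. The example fields A
# and B are arbitrary choices; any smooth fields would serve.
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)

A = sp.Matrix([x*y, y*z**2, z + x**2])
B = sp.Matrix([sp.sin(x)*z, x + y*z, y**2])

def grad(f):
    """Gradient of a scalar field as a column vector."""
    return sp.Matrix([sp.diff(f, v) for v in coords])

def curl(F):
    """Curl of a 3-component vector field."""
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def directional(F, G):
    """(F . grad) G, computed component by component."""
    return sp.Matrix([sum(F[i]*sp.diff(G[k], coords[i]) for i in range(3))
                      for k in range(3)])

lhs = grad(A.dot(B))
rhs = (directional(A, B) + directional(B, A)
       + B.cross(curl(A)) + A.cross(curl(B)))

print(sp.simplify(lhs - rhs))  # zero vector
```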
 

1. What is a vector identity?

A vector identity is an equation involving vector operations (dot products, cross products, gradients, and so on) that holds for all vectors or vector fields involved. It is used in vector calculus to simplify calculations and solve complex problems.

2. What is index notation?

Index notation is a way of representing vectors and their operations using indices or subscripts. It is commonly used in vector calculus to simplify vector equations and proofs.

3. How do you prove a vector identity using index notation?

To prove a vector identity using index notation, you write each vector operation in component form, using the Kronecker delta for dot products and the Levi-Civita symbol for cross products and curls. You then manipulate the indices under the Einstein summation convention and apply contraction identities such as ##\epsilon_{ijk}\epsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}## to simplify the expression.
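For instance, the corrected form of the term quoted in post #1 follows exactly this pattern:
$$[\,\vec b\times(\vec\nabla\times\vec a)\,]_i = \epsilon_{ijk}\,b_j\,\epsilon_{klm}\,\partial_l a_m = (\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl})\,b_j\,\partial_l a_m = b_j\,\partial_i a_j - b_j\,\partial_j a_i = b_j\,\partial_i a_j - (\vec b\cdot\vec\nabla)\,a_i .$$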

4. Why is index notation useful in vector calculus?

Index notation is useful in vector calculus because it allows for a more compact and efficient representation of vector equations. It also makes it easier to spot patterns and relationships between different vector quantities.

5. Can you give an example of a vector identity proof using index notation?

Yes, an example of a vector identity that can be proved using index notation is the triple vector product identity ##\vec a \times (\vec b \times \vec c) = \vec b\,(\vec a \cdot \vec c) - \vec c\,(\vec a \cdot \vec b)##. This can be proved by using index notation to expand and manipulate the left-hand side of the equation, and then using the properties of vector operations to simplify it into the right-hand side.
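For a concrete check of that triple product identity, a short NumPy sketch with randomly chosen vectors (illustrative only):

```python
# Illustration only: numerical check of a x (b x c) = b(a . c) - c(a . b)
# for randomly chosen vectors.
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))

lhs = np.cross(a, np.cross(b, c))
rhs = b * np.dot(a, c) - c * np.dot(a, b)
print(np.allclose(lhs, rhs))  # True
```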
