How to approach vector calculus identities?

Summary
The discussion focuses on deriving vector calculus identities, particularly the identity involving the divergence of a cross product. Participants emphasize the importance of understanding the determinant form of the cross product and the use of component-wise calculations. There is reassurance that even engineering students can learn notations like Einstein's summation convention and the Levi-Civita symbol, though they may not be essential initially. The conversation also highlights the usefulness of Griffiths' textbook for mastering these concepts. Overall, the derivation process is framed as manageable through systematic approaches and practice with components.
fatpotato
Homework Statement
Prove the relation ##\nabla \cdot (A \times B) = B \cdot (\nabla \times A) - A\cdot (\nabla \times B)##
Relevant Equations
Definition of the vector product, definition of the divergence.
Majoring in electrical engineering implies studying Griffiths' book on electrodynamics, so I have begun reading its first chapter, which is a review of vector calculus. A list of vector calculus identities is given, and I would like to derive each one, one of them being ##\nabla \cdot (A \times B) = B \cdot (\nabla \times A) - A\cdot (\nabla \times B)##.

Now, I have two questions about this:
  1. Methodology question: I have seen a lot of threads here on PF about deriving these identities, and Einstein's summation notation and the Levi-Civita symbol are mentioned every time. Given that I will be majoring in electrical engineering and not in physics, can a mere engineer learn these notations, or does this involve higher, unreachable mathematical concepts?
  2. Actual question about the identity: I have seen an identity for ##A \cdot (B \times C)## involving a determinant, but since ##\nabla## is an operator and not an actual vector, does it even make sense to use the identity?
I would consider using ##A \cdot (B \times C) = \det \left(\begin{array}{ccc} A_x & A_y & A_z \\ B_x & B_y & B_z \\ C_x & C_y & C_z \end{array} \right)##, which in this case would yield ##\nabla \cdot (A \times B) = \det \left(\begin{array}{ccc} \partial_x & \partial_y & \partial_z \\ A_x & A_y & A_z \\ B_x & B_y & B_z \end{array} \right)##. Am I allowed to put ##\nabla## in this matrix?
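For concreteness, expanding this formal determinant along the first row (with the convention that each ##\partial## acts on the products in the rows below it) would give
$$\nabla \cdot (A \times B) = \frac{\partial}{\partial x}(A_yB_z - A_zB_y) + \frac{\partial}{\partial y}(A_zB_x - A_xB_z) + \frac{\partial}{\partial z}(A_xB_y - A_yB_x).$$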

Thank you.
 
  • Like
Likes PeroK
What you are referring to, ##A \cdot (B \times C)##, is called the scalar triple product. The keyword here is scalar.

It's a simple derivation (mostly calculation) using the definitions of the dot product and cross product.

If you want to study Griffiths, then take a look at Vector Calculus, Linear Algebra, and Differential Forms by Hubbard and Hubbard.

I like it better than Marsden's Vector Calculus.
 
  • Skeptical
  • Like
  • Informative
Likes PhDeezNutz, fatpotato and PeroK
You can prove these identities by expanding everything in Cartesian coordinates. The summation convention allows you to use less paper.
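To complement the pen-and-paper expansion, here is a minimal sketch of how one could check the identity symbolically with SymPy's vector module (assuming SymPy is available; the coordinate system and component names below are just placeholders):

```python
# Sanity check of div(A x B) = B.curl(A) - A.curl(B) using SymPy
# (sketch only; assumes SymPy is installed, names are illustrative)
from sympy import Function, simplify
from sympy.vector import CoordSys3D, divergence, curl

N = CoordSys3D('N')          # Cartesian coordinate system
coords = (N.x, N.y, N.z)

# Generic smooth vector fields A and B with arbitrary components
Ax, Ay, Az = (Function(f'A{c}')(*coords) for c in 'xyz')
Bx, By, Bz = (Function(f'B{c}')(*coords) for c in 'xyz')
A = Ax*N.i + Ay*N.j + Az*N.k
B = Bx*N.i + By*N.j + Bz*N.k

lhs = divergence(A.cross(B))
rhs = B.dot(curl(A)) - A.dot(curl(B))

print(simplify(lhs - rhs))   # prints 0 if the identity holds
```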
 
  • Like
Likes fatpotato, docnet and FactChecker
fatpotato said:
Now, I have two questions about this:
  1. Methodology question: I have seen a lot of threads here on PF about deriving these identities, and Einstein's summation notation and the Levi-Civita symbol are mentioned every time. Given that I will be majoring in electrical engineering and not in physics, can a mere engineer learn these notations, or does this involve higher, unreachable mathematical concepts?
  2. Actual question about the identity: I have seen an identity for ##A \cdot (B \times C)## involving a determinant, but since ##\nabla## is an operator and not an actual vector, does it even make sense to use the identity?

I would say:

Don't worry about Levi-Civita for now. But, do (certainly) use the determinant form of the cross product and curl.
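For reference, the determinant form of the curl meant here is the usual formal one,
$$\nabla \times A = \det \left(\begin{array}{ccc} \hat{x} & \hat{y} & \hat{z} \\ \partial_x & \partial_y & \partial_z \\ A_x & A_y & A_z \end{array} \right),$$
expanded along the first row, with each ##\partial## understood to act on the components in the bottom row.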

##\nabla## has some vector properties, so in principle vector identities translate to ##\nabla## identities. You should be able to check that the one you quoted is valid.

If you have a vector identity, then do it component by component. Or, do it for the x-component and appeal to the right-handed symmetry of x-y-z coordinates.

Stick with Griffiths - there's all you need in those first 58 pages!
 
  • Like
  • Informative
Likes fatpotato and docnet
MidgetDwarf said:
If you want to study Griffiths, then take a look at Vector Calculus, Linear Algebra, and Differential Forms by Hubbard and Hubbard.
Thank you. Though I don't feel I will ever reach this level, it looks like a good reference.

pasmith said:
The summation convention allows you to use less paper.
Since I have to steal paper sheets from my school's printer, I will definitely give it a try!

PeroK said:
I would say:

Don't worry about Levi-Civita for now. But, do (certainly) use the determinant form of the cross product and curl.

##\nabla## has some vector properties, so in principle vector identities translate to ##\nabla## identities. You should be able to check that the one you quoted is valid.

If you have a vector identity, then do it component by component. Or, do it for the x-component and appeal to the right-handed symmetry of x-y-z coordinates.

Stick with Griffiths - there's all you need in those first 58 pages!

Thank you PeroK, your messages always cheer me up when my stupidity brings me down.

For anyone interested, here is the method used. The first term obtained by expanding the determinant expression gives a partial derivative of a product, which can be expanded using the product rule:
$$\frac{\partial}{\partial x}(A_yB_z) = A_y\frac{\partial}{\partial x}B_z + B_z\frac{\partial}{\partial x}A_y$$
Then, one can find each matching term in ##B\cdot (\nabla \times A)## and ##- A\cdot(\nabla \times B)## and conclude that the expression does in fact hold, which is perhaps less strenuous than expanding everything.
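To make the matching explicit, writing out the full ##x##-derivative term as an example gives
$$\frac{\partial}{\partial x}(A_yB_z - A_zB_y) = B_z\frac{\partial A_y}{\partial x} - B_y\frac{\partial A_z}{\partial x} + A_y\frac{\partial B_z}{\partial x} - A_z\frac{\partial B_y}{\partial x},$$
where the first two terms on the right are exactly the ##\partial_x## contributions to ##B\cdot (\nabla \times A)## and the last two are exactly the ##\partial_x## contributions to ##- A\cdot (\nabla \times B)##; the ##\partial_y## and ##\partial_z## terms work out the same way.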

Edit : missing cross product symbol
 
Last edited by a moderator:
  • Like
Likes PeroK
fatpotato said:
Edit : missing cross product symbol
The cross product symbol is simply " \times ", which I see you have found by now.

B\cdot (\nabla \times A)

gives

##B\cdot (\nabla \times A)##
 
  • Like
Likes fatpotato
A helpful hint with such identities...

To start, write everything out in components.
Be consistent with the algebra and use the cyclic nature x->y->z->x with the same sign.
Swapping exactly two will introduce a minus sign.

It might help to arrange your calculation into sections (and not just write it out on a single line).
After a while, you see the patterns and then appreciate the index notations... and maybe differential forms.
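For anyone curious, a compact sketch of what the identity looks like in the index notation alluded to here (using the Levi-Civita symbol and summation convention mentioned earlier in the thread) is
$$\partial_i(\epsilon_{ijk}A_jB_k) = \epsilon_{ijk}(\partial_iA_j)B_k + \epsilon_{ijk}A_j(\partial_iB_k) = B_k\,\epsilon_{kij}\,\partial_iA_j - A_j\,\epsilon_{jik}\,\partial_iB_k = B\cdot(\nabla\times A) - A\cdot(\nabla\times B),$$
where the minus sign comes from swapping two indices of ##\epsilon_{ijk}##.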
 
  • Like
Likes fatpotato
Thank you for the subsequent replies to this post.

PhDeezNutz said:
the above link is a pretty good primer to teach you the basics of summation notation.
Thank you very much for the reference; I will try to get accustomed to it while studying EM.

robphy said:
A helpful hint with such identities...

To start, write everything out in components.
Be consistent with the algebra and use the cyclic nature x->y->z->x with the same sign.
Swapping exactly two will introduce a minus sign.
I believe you are referring to tensor notation? Thank you for the advice; I will keep that in mind.
 

Similar threads

  • · Replies 9 ·
Replies
9
Views
2K
  • · Replies 5 ·
Replies
5
Views
2K
  • · Replies 5 ·
Replies
5
Views
2K
  • · Replies 6 ·
Replies
6
Views
2K
  • · Replies 3 ·
Replies
3
Views
2K
  • · Replies 5 ·
Replies
5
Views
5K
Replies
1
Views
1K
  • · Replies 11 ·
Replies
11
Views
4K
  • · Replies 6 ·
Replies
6
Views
3K
  • · Replies 29 ·
Replies
29
Views
5K