Gradient version of divergence theorem?

Summary
The discussion centers on a proposed gradient version of the divergence theorem, in which the volume integral of the gradient of a scalar function p equals a closed surface integral of p. The original poster attempts a proof by rewriting the vector surface element in index notation, applying the divergence theorem, and using the product rule, but worries that the free conversions between vectors and component sums (and between d\vec S and d\vec s) may be flawed. A reply clarifies that the standard proof proceeds in essentially the same way, by applying the divergence theorem to the product of the scalar field with an arbitrary constant vector.
Cygnus_A
So we all know the divergence/Gauss's theorem as
\int (\vec\nabla \cdot \vec v)dV = \int \vec v \cdot d\vec S

Now I've come across something labeled as Gauss's theorem:
\int (\vec\nabla p)dV = \oint p d\vec S
where p is a scalar function.
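
(As a quick sanity check rather than a proof, the identity is easy to verify symbolically on a unit cube; the test field p = x^2 + yz below is just an arbitrary choice of mine.)

Code:
import sympy as sp

x, y, z = sp.symbols('x y z')
p = x**2 + y*z  # arbitrary test scalar field

# Left side: volume integral of grad p over the unit cube [0, 1]^3
grad_p = [sp.diff(p, v) for v in (x, y, z)]
lhs = [sp.integrate(g, (x, 0, 1), (y, 0, 1), (z, 0, 1)) for g in grad_p]

# Right side: closed surface integral of p dS over the six faces;
# each face has outward normal +/- x_hat, +/- y_hat, or +/- z_hat,
# so it contributes only to the component along its normal.
def face(var, val, sign):
    others = [v for v in (x, y, z) if v != var]
    return sign * sp.integrate(p.subs(var, val),
                               (others[0], 0, 1), (others[1], 0, 1))

rhs = [face(v, 1, +1) + face(v, 0, -1) for v in (x, y, z)]

print(lhs)  # [1, 1/2, 1/2]
print(rhs)  # [1, 1/2, 1/2]

The two sides agree componentwise, as the theorem says they should.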

I was wondering if I could go about proving it in the following way (replacing dot products with implied sums):
With e_i := \hat e_i and d\vec S := ds_1 \hat x + ds_2 \hat y + ds_3 \hat z = ds_i e_i,

\oint p d\vec S = \oint p (e_i ds_i) = \oint (p e_i)(ds_i) = \oint \vec p \cdot d\vec s (this \vec p still has scalar functional dependence; it's just a scalar times a vector, so still overall a vector in my mind)

then applying divergence theorem and getting
= \int (\vec \nabla \cdot \vec p) dV = \int \partial_i (p e_i) dV

and finally applying the product rule and the fact that e_i is a constant unit vector
\int (e_i \partial_i p + p \partial_i e_i) dV.

The second term is zero, since it's a partial derivative of a Cartesian unit vector, which has no spatial dependence, leaving
\int (e_i \partial_i p)dV = \int (\vec \nabla p) dV

Does that make sense? I think it seems to work out, but I'm concerned that it's flawed due to my free conversions between sums and vectors. It seems unnatural that I've written d\vec S = d\vec s = ds_i despite defining them differently: one, I suppose, has actual vector components, whereas the other is just a list of components.
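
(A cleaner way to put what I mean, I think: with \hat n the outward unit normal and dS the scalar area element,
d\vec S = \hat n \, dS, \qquad (d\vec S)_i = n_i \, dS,
so the ds_i above is really just shorthand for n_i \, dS, and d\vec s isn't a separate object at all.)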
 
I believe the gradient version of the divergence theorem would be your typical statement that the line integral of a gradient along a path is just the difference in potentials.
 
TMO said:
I believe the gradient version of the divergence theorem would be your typical statement that the line integral of a gradient along a path is just the difference in potentials.

You must be thinking of the fundamental theorem of calculus for gradients: \phi(\vec b) - \phi(\vec a) = \int_a^b \vec\nabla \phi \cdot d\vec r

This is what was presented to me during a proof, and I had never seen it before:
\int (\vec\nabla p)dV = \oint p d\vec S
(Oops, I just realized I forgot to close the surface integral of the divergence theorem in my original post :P)
 
The usual proof of the scalar version of the divergence theorem involves replacing the vector field \vec v by p\vec k, where p is a scalar field and \vec k is an arbitrary constant vector. Then \vec\nabla \cdot \vec v = \vec k \cdot \vec\nabla p. Since \vec k is arbitrary, it can then be removed from both sides, giving the result.
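
Spelled out, that argument reads: for any constant vector \vec k,
\vec k \cdot \oint p \, d\vec S = \oint (p\vec k) \cdot d\vec S = \int \vec\nabla \cdot (p\vec k) \, dV = \int (\vec k \cdot \vec\nabla p) \, dV = \vec k \cdot \int (\vec\nabla p) \, dV,
and since \vec k is arbitrary, the two vectors it multiplies must be equal.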
 
Ah, ok. The steps are essentially the same then. Just slightly different in how you get your constant vector. Thanks!
 
