Gradient version of divergence theorem?


Discussion Overview

The discussion revolves around the concept of a gradient version of the divergence theorem, specifically exploring the relationship between integrals of scalar functions and their gradients. Participants examine different formulations and proofs related to this theorem, including potential connections to the fundamental theorem of calculus.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant presents a formulation of the divergence theorem involving a scalar function p and proposes a proof using vector calculus identities and the product rule.
  • Another participant suggests that the gradient version of the divergence theorem relates to the integral of a path through a potential, indicating a connection to the fundamental theorem of calculus for gradients.
  • A later reply reiterates the connection to the fundamental theorem of calculus, emphasizing the difference in potentials as a key aspect.
  • Another participant describes a typical proof of the scalar version of the divergence theorem by substituting a vector field with a scalar field multiplied by a constant vector, leading to a simplification of the divergence expression.
  • One participant acknowledges the similarity in steps between their approach and the typical proof, noting a difference in how the constant vector is introduced.

Areas of Agreement / Disagreement

Participants initially offer different interpretations of the gradient version of the divergence theorem, with one suggesting it is really the fundamental theorem of calculus for gradients, while others focus on the proof via a scalar field multiplied by a constant vector. The thread ends with the original poster agreeing that their proof follows essentially the same steps as the standard constant-vector argument.

Contextual Notes

Participants express uncertainty about the validity of certain steps in their proofs, particularly concerning the treatment of vector components and the implications of their manipulations. There is also a mention of a missing closure in the surface integral in one participant's original post.

Cygnus_A
So we all know the divergence/Gauss's theorem as
\int (\vec\nabla \cdot \vec v)\, dV = \int \vec v \cdot d\vec S

Now I've come across something labeled as Gauss's theorem:
\int (\vec\nabla p)dV = \oint p d\vec S
where p is a scalar function.

I was wondering if I could go about proving it in the following way (replacing dot products with implied sums):
With e_i := \hat e_i and d\vec S := ds_1 \hat x + ds_2 \hat y + ds_3 \hat z = ds_i e_i,

\oint p\, d\vec S = \oint p (e_i ds_i) = \oint (p e_i)(ds_i) = \oint \vec p \cdot d\vec s (this \vec p still has scalar functional dependence; it's just a scalar p times a vector, but still overall a vector in my mind)

then applying divergence theorem and getting
= \int (\vec \nabla \cdot \vec p) dV = \int \partial_i (p e_i) dV

and finally applying the product rule and the fact that e_i is a unit vector
\int (e_i \partial_i p + p \partial_i e_i) dV.

The second term is zero, since it's a partial derivative of a Cartesian unit vector, which has no spatial dependence, leaving
\int (e_i \partial_i p)dV = \int (\vec \nabla p) dV

Does that make sense? I think it seems to work out, but I'm concerned that it's flawed due to my free conversions between sums and vectors. It seems unnatural that I've said d\vec S = d\vec s = ds_i, despite defining them differently. One, I suppose, has actual vector components, whereas the other is just a list of components.
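As a quick sanity check on the identity itself (my addition, not part of the thread), here is a minimal numerical verification on the unit cube, assuming the arbitrary test function p(x, y, z) = xy + z², for which \vec\nabla p = (y, x, 2z). Both sides should come out to (1/2, 1/2, 1):

```python
import numpy as np

# Numerical check of  ∫ (∇p) dV = ∮ p dS  on the unit cube [0,1]^3,
# with the (arbitrarily chosen) scalar field p(x, y, z) = x*y + z**2.
# Midpoint Riemann sums on an n×n×n grid.

n = 200
h = 1.0 / n
c = (np.arange(n) + 0.5) * h                     # cell-centre coordinates

X, Y, Z = np.meshgrid(c, c, c, indexing="ij")

# Left-hand side: volume integral of grad p = (y, x, 2z), hand-computed for this p
lhs = np.array([(Y * h**3).sum(),
                (X * h**3).sum(),
                (2 * Z * h**3).sum()])

# Right-hand side: ∮ p dS, summed face by face with outward normals
p = lambda x, y, z: x * y + z**2
U, V = np.meshgrid(c, c, indexing="ij")
rhs = np.array([
    (p(1.0, U, V) - p(0.0, U, V)).sum() * h**2,  # x = 1 and x = 0 faces
    (p(U, 1.0, V) - p(U, 0.0, V)).sum() * h**2,  # y = 1 and y = 0 faces
    (p(U, V, 1.0) - p(U, V, 0.0)).sum() * h**2,  # z = 1 and z = 0 faces
])

print(lhs)   # both ≈ [0.5, 0.5, 1.0]
print(rhs)
```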
 
I believe the gradient version of the divergence theorem would be your typical statement that the integral of the path going through a potential is just the difference in potentials.
 
TMO said:
I believe the gradient version of the divergence theorem would be your typical statement that the integral of the path going through a potential is just the difference in potentials.

You must be thinking of the fundamental theorem of calculus for gradients -- \phi (\vec b) - \phi (\vec a) = \int_a^b \vec \nabla \phi \cdot d\vec r

This was what was presented to me during a proof, and I had never seen it before
\int (\vec\nabla p)\, dV = \oint p\, d\vec S
(Oops, and I just realized I forgot to close the surface integral of the divergence theorem in my original post :P)
 
The usual proof of the scalar version of the divergence theorem involves replacing the vector field \vec v by p\vec k, where p is a scalar field and \vec k is an arbitrary constant vector. Then \vec\nabla \cdot \vec v = \vec k \cdot \vec\nabla p. Since \vec k is arbitrary, it can then be removed from both sides, giving the result.
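For concreteness, that constant-vector argument can be written out in one line (my rendering, not quoted from the thread):

\vec k \cdot \int (\vec\nabla p)\, dV = \int \vec\nabla \cdot (p\vec k)\, dV = \oint (p\vec k) \cdot d\vec S = \vec k \cdot \oint p\, d\vec S,

using \vec\nabla \cdot (p\vec k) = \vec k \cdot \vec\nabla p + p\, \vec\nabla \cdot \vec k = \vec k \cdot \vec\nabla p for constant \vec k. Since this holds for every constant \vec k, the two vectors it multiplies must be equal, which is the stated identity.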
 
Ah, ok. The steps are essentially the same then. Just slightly different in how you get your constant vector. Thanks!
 
