Divergence Theorem Question (Gauss' Law?)


Discussion Overview

The discussion revolves around applying the Divergence Theorem to show that the integral of a continuous vector field F over all of R³ is zero. Participants explore various approaches to set up the proof, particularly focusing on the implications of the divergence theorem and the use of spherical coordinates.

Discussion Character

  • Exploratory, Technical explanation, Debate/contested, Mathematical reasoning

Main Points Raised

  • One participant suggests using the Divergence Theorem and hints at taking a ball of radius r centered at the origin and letting the radius tend to infinity.
  • Another participant encourages the use of the Divergence Theorem but questions why the initial poster has not done so yet.
  • A suggestion is made to change to spherical coordinates for the integration due to the nature of the region of integration being a ball.
  • One participant expresses confusion over the lack of an explicitly given vector field and proposes using Gauss' Theorem for Divergence in spherical coordinates, indicating that the divergence would approach zero as the radius tends to infinity.
  • A later reply emphasizes the correct application of the Divergence Theorem, stating that the surface integral evaluates F at the boundary, which is at infinity, and concludes that the integral should be zero based on the behavior of F at infinity.
  • Another participant acknowledges the suggestion and expresses gratitude for the clarification provided.

Areas of Agreement / Disagreement

Participants generally agree on the use of the Divergence Theorem and the transition to spherical coordinates, though there is initial confusion over the fact that the vector field is not given explicitly, only bounded. The confusion is resolved once the surface-integral form of the theorem is spelled out: the flux through the boundary vanishes as the radius tends to infinity.

Contextual Notes

There are limitations regarding the assumptions about the vector field F, particularly its behavior at the origin and at infinity, which are not fully clarified in the discussion.

vector013
If ##\vec F(x,y,z)## is continuous and
$$\left|\vec F\right| \leq \frac{1}{\sqrt{(x^2+y^2+z^2)^3}+1}$$
for all ##(x,y,z)##, show that
$$\iiint_{\mathbb{R}^3} \nabla\cdot\vec F \, dV = 0.$$

I have been working on this problem all day, and I'm honestly not sure how to proceed. The hint given on this problem is, "Take ##B_r## to be a ball of radius ##r## centered at the origin, apply the divergence theorem, and let the radius tend to infinity." I tried letting ##F = 1/\left((x^2+y^2+z^2)^{3/2}+1\right)## and taking the divergence of that, but it didn't really seem to get me anywhere. If anyone has any suggestions for at least how to set up this proof, I would really appreciate it.
 
Well, from the title it seems that you know you should use the divergence theorem. So why don't you?
 
Since your region of integration is a ball and the given bound involves ##x^2+y^2+z^2##, I would recommend changing to spherical coordinates to do the integration over the surface of the ball.
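For reference (this step is not spelled out in the thread): in spherical coordinates the area element on the sphere of radius ##r## is ##dA = r^2\sin\phi\,d\phi\,d\theta##, so

$$\oint_{\partial B_r} dA = \int_0^{2\pi}\!\!\int_0^{\pi} r^2 \sin\phi \, d\phi \, d\theta = 4\pi r^2,$$

the surface area of the sphere, which is the factor needed when estimating the flux through ##\partial B_r##.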
 
Spherical was my thought too. I guess what's been confusing me is that the vector field isn't explicitly given; there's just the inequality, which indicates it is defined at (0,0,0). My thought was to use Gauss' Theorem for Divergence in spherical coordinates: ## \nabla\cdot\vec F = \frac{1}{\rho^2}\frac{\partial}{\partial\rho}\left(\rho^2 F_\rho\right) + \frac{1}{\rho\sin\phi}\frac{\partial}{\partial\phi}\left(\sin\phi\, F_\phi\right) + \frac{1}{\rho\sin\phi}\frac{\partial F_\theta}{\partial\theta}. ## Since you have ##1/\rho## terms, ##\nabla\cdot\vec F## would go to zero as the radius, ##\rho##, goes to infinity. Therefore, the integral of 0 is 0.
 
vector013 said:
Spherical was my thought too. I guess what's been confusing me is that the vector field isn't explicitly given; there's just the inequality, which indicates it is defined at (0,0,0). My thought was to use Gauss' Theorem for Divergence in spherical coordinates: ## \nabla\cdot\vec F = \frac{1}{\rho^2}\frac{\partial}{\partial\rho}\left(\rho^2 F_\rho\right) + \frac{1}{\rho\sin\phi}\frac{\partial}{\partial\phi}\left(\sin\phi\, F_\phi\right) + \frac{1}{\rho\sin\phi}\frac{\partial F_\theta}{\partial\theta}. ## Since you have ##1/\rho## terms, ##\nabla\cdot\vec F## would go to zero as the radius, ##\rho##, goes to infinity. Therefore, the integral of 0 is 0.
I don't see any application of the divergence theorem in your explanation. This is how it should be:
## \int_V \nabla\cdot \vec F \, dV=\int_{\partial V} \vec F \cdot \hat n \, dA, ##
where ## \partial V ## is the boundary of ##V##. In the surface integral, ##\vec F## is evaluated only on the boundary; as the boundary recedes to infinity, ##|\vec F|## is bounded by a function that goes to zero there, so the integral should be zero.
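Spelling out that estimate (a sketch using only the given bound, and the fact that ##x^2+y^2+z^2 = r^2## on the sphere ##\partial B_r## of radius ##r##):

$$\left| \oint_{\partial B_r} \vec F \cdot \hat n \, dA \right| \le \oint_{\partial B_r} \left|\vec F\right| dA \le \frac{4\pi r^2}{\sqrt{(r^2)^3}+1} = \frac{4\pi r^2}{r^3+1} \xrightarrow[\;r\to\infty\;]{} 0,$$

so by the divergence theorem ##\iiint_{\mathbb{R}^3} \nabla\cdot\vec F \, dV = \lim_{r\to\infty} \oint_{\partial B_r} \vec F \cdot \hat n \, dA = 0##.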
 
Oh okay, that makes sense. I didn't even think of it that way. Thank you so much for the suggestion!
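The shrinking bound can also be checked numerically. A minimal Python sketch (the function name `flux_bound` is mine, not from the thread), assuming only the given inequality ##|\vec F| \le 1/(r^3+1)## on the sphere of radius ##r##:

```python
import math

def flux_bound(r):
    # Upper bound on the flux magnitude through the sphere of radius r:
    # |F| <= 1/(r^3 + 1) there, and the sphere has surface area 4*pi*r^2.
    return 4 * math.pi * r**2 / (r**3 + 1)

# The bound behaves like 4*pi/r for large r, so it shrinks toward zero.
bounds = [flux_bound(r) for r in (1, 10, 100, 1000)]
```

Since the flux through ##\partial B_r## is trapped below a quantity tending to zero, the limit of the surface integral, and hence the volume integral over all of ##\mathbb{R}^3##, must vanish.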
 
