Vanishing of an integral of a divergence over a closed surface

Thread starter: Kostik
TL;DR Summary
E. Poisson in his GR textbook claims the vanishing of an integral of a divergence over a closed surface in "angular" variables. Why?
I want to understand/prove why Eric Poisson drops the second integral in the second equation on the right side of the attached image, from pp. 69–70 of "A Relativist's Toolkit". It is hard to visualize a closed 3D hypersurface embedded in 4D, so I will look at the simpler example of a closed 2D surface embedded in 3D. So let ##M## be a 3D volume, and ##S=\partial M## its boundary, a closed surface.

Poisson views ##M## as a union of concentric surfaces ##S(\xi^0)##, analogous to the layers of an onion. Let the coordinate ##0 \le \xi^0 \le 1## index the concentric surfaces, with ##\xi^0=1## on the outermost surface ##S(1)=\partial M##, and ##\xi^0=0## on the innermost surface ##S(0)##, the “center” of ##M## with zero volume. Let the other coordinates ##\xi^1, \xi^2## be the spherical coordinates ##\theta, \phi##.

Let ##\bf{A}## be a vector field in ##\mathbb{R}^3##, which can be expressed in this coordinate system as ##{\bf{A}}(\xi^0,\theta,\phi)##. Poisson asserts that in these coordinates, the integral over any surface ##\xi^0 =## constant vanishes: $$\oint_S \text{div} { \bf{A} } \,dS=0 \,\,.$$ Here the divergence ##\text{div} { \bf{A} }## is the 2D divergence.

It seems the angular coordinates are important. Obviously, in Cartesian coordinates, choosing ##{ \bf{A} } = x{ \bf{i} } + y{ \bf{j} }##, we have ##\text{div} { \bf{A} } = A^x_{\,,x} + A^y_{\,,y} = 2##, so the integral that is supposed to vanish is equal to twice the surface area of ##S##.
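This can be confirmed with a quick numerical sketch (my own illustration, not from the book): the integrand is the constant 2, so the surface integral over the unit sphere should come out to ##2 \times 4\pi = 8\pi##.

```python
import numpy as np

# A = x i + y j has 3D divergence 2 everywhere, so its integral over the
# unit sphere should be 2 * (surface area) = 2 * 4*pi = 8*pi.
n = 400
theta = np.linspace(0.0, np.pi, n)
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # periodic direction
TH, PH = np.meshgrid(theta, phi, indexing="ij")

div_A = 2.0 * np.ones_like(TH)   # constant integrand
dS = np.sin(TH)                  # area element r^2 sin(theta) with r = 1

dth = theta[1] - theta[0]
dph = phi[1] - phi[0]
integral = np.sum(div_A * dS) * dth * dph   # simple Riemann/trapezoid sum

print(integral, 8.0 * np.pi)   # both ~ 25.13
```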

In spherical coordinates: $$\int_S \text{div}{\bf{A}}\,dS = \int_S \left[ \frac{1}{r\sin\theta}\frac{\partial(A^\theta \sin\theta)}{\partial\theta} + \frac{1}{r\sin\theta}\frac{\partial A^\phi}{\partial\phi} \right] r^2 \sin\theta d\theta d\phi$$ $$ \qquad\qquad\qquad\qquad\qquad = \int_0^{2\pi} \left[ \int_0^{\pi} r\frac{\partial(A^\theta \sin\theta)}{\partial\theta} d\theta \right] d\phi + \int_0^{\pi} \left[ \int_0^{2\pi} r\frac{\partial A^\phi}{\partial\phi} d\phi \right] d\theta \,\,. \quad\quad (*)$$ The surface ##S## can be defined by ##r=r(\theta,\phi)##. If ##r## is a constant (i.e., the surface is a sphere), it can be taken outside the integrals, and the integrals in ##(*)## do indeed vanish. However, in general, ##r=r(\theta,\phi)##, so we're stuck with that.
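The constant-##r## case can be spot-checked symbolically. Here is a small sympy sketch (my own illustration; the components ##A^\theta = \cos\theta\cos\phi## and ##A^\phi = \sin\theta\sin\phi## are arbitrary smooth choices): the first integral in ##(*)## telescopes to ##[A^\theta\sin\theta]_0^\pi = 0##, and the second to ##[A^\phi]_0^{2\pi} = 0## by periodicity.

```python
import sympy as sp

th, ph, r = sp.symbols("theta phi r", positive=True)

# Arbitrary smooth angular components (hypothetical choices for illustration)
A_th = sp.cos(th) * sp.cos(ph)
A_ph = sp.sin(th) * sp.sin(ph)

# First integral in (*): d/dtheta(A^theta sin(theta)) integrated over [0, pi]
# telescopes to the endpoint values, where sin(theta) vanishes.
I1 = sp.integrate(
    sp.integrate(r * sp.diff(A_th * sp.sin(th), th), (th, 0, sp.pi)),
    (ph, 0, 2 * sp.pi))

# Second integral in (*): d/dphi(A^phi) integrated over a full period
# telescopes to A^phi(2*pi) - A^phi(0) = 0 by periodicity.
I2 = sp.integrate(
    sp.integrate(r * sp.diff(A_ph, ph), (ph, 0, 2 * sp.pi)),
    (th, 0, sp.pi))

print(I1, I2)  # 0 0
```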

How does one show the integral vanishes for a general surface ##r=r(\theta,\phi)## and not just a sphere?

Attached image (Poisson.jpg): https://i.sstatic.net/jziTDqFd.jpg
 
This is a good question. I had to think about it a bit before answering, and I don't think this book offers a good path to really understanding the result, but you have some errors in your reasoning which I would like to point out, and which may help.

Let's start with your first example, in Cartesian coordinates. You asserted that ##\oint_S \text{div} \mathbf A \, dS = 2 \times## the surface area of ##S##. The problem with this reasoning is the following. The theorem the author is trying to prove concerns a manifold with boundary, and ##S## has to be that boundary: a closed surface, which is itself a manifold without boundary. So we can talk about the interior of a square and its boundary, say in ##\mathbb R^2##, but then the boundary is the perimeter of the square, which is one-dimensional.

So to make your example with Cartesian coordinates work, we could construct a cube in ##\mathbb R^3## centered at the origin. Now imagine we have the vector field ##\mathbf A = A^x \mathbf i + A^y \mathbf j + A^z \mathbf k##.

The key thing to notice is how the author splits his integral of ##\text{div} \mathbf A##: he breaks it up into an integral over the part of the divergence that is tangent to the surface and the part that is normal (alternatively, we could construct a vector field that is everywhere tangent to the surface, i.e., to each face). So in our example of the cube, for the top and bottom faces we would only consider the divergence of the ##x## and ##y## components of ##\mathbf A##. The other thing to notice is that we need to integrate over oriented area elements defined by a normal vector. So what we find is that $$\oint_{\text{top}} \text{div} (A^x \mathbf i + A^y \mathbf j)\, dS + \oint_{\text{bottom}} \text{div} (A^x \mathbf i + A^y \mathbf j)\, dS = 0$$ because the bottom face's normal points in the opposite direction, introducing a minus sign. Similar cancellations happen for the front/back and left/right pairs, so we see we do get 0. The keys to getting this result were the following:
  1. The vector field that we were taking the divergence of must be tangent to the surface everywhere.
  2. The surface must not have a boundary itself. I believe it also needs to be compact (together these make it a closed surface).
Now let's talk about your sphere example. If you think about what I have said above: if ##r = r(\theta, \phi)## is not constant and you integrate the divergence of a vector field with components in the ##\theta## and ##\phi## directions, the result does not hold, because those directions are no longer tangent to the surface! This is the whole point of the theorem.
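To see this numerically, here is a sketch (my own construction, not from Poisson): build a genuinely tangent vector field on a non-spherical closed surface ##r=r(\theta,\phi)##, compute the intrinsic 2D divergence ##\frac{1}{\sqrt g}\partial_a(\sqrt g\, A^a)## from the induced metric, and check that its surface integral vanishes even though the integrand does not. The particular ##r## and ##A^a## below are arbitrary smooth choices.

```python
import numpy as np

n = 600
eps = 1e-4                                   # stay just off the poles
theta = np.linspace(eps, np.pi - eps, n)
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
TH, PH = np.meshgrid(theta, phi, indexing="ij")

# A non-spherical closed surface r = r(theta, phi) and its partial derivatives
r   = 1.0 + 0.3 * np.sin(TH) ** 2 * np.cos(PH)
r_t = 0.6 * np.sin(TH) * np.cos(TH) * np.cos(PH)
r_p = -0.3 * np.sin(TH) ** 2 * np.sin(PH)

# Induced metric of the embedding X = r(theta, phi) * rhat(theta, phi)
g_tt = r_t ** 2 + r ** 2
g_pp = r_p ** 2 + (r * np.sin(TH)) ** 2
g_tp = r_t * r_p
sqrt_g = np.sqrt(g_tt * g_pp - g_tp ** 2)    # area-element density

# Components of a tangent field in the (theta, phi) coordinate basis;
# any bounded smooth choice works (hypothetical example field)
A_t = np.cos(TH) * np.cos(PH)
A_p = np.sin(PH)

# Intrinsic divergence: div A = (1/sqrt_g) [d(sqrt_g A^t)/dth + d(sqrt_g A^p)/dph].
# In the surface integral the sqrt_g factors cancel, leaving the integral of
# dF_t/dth + dF_p/dph over the coordinate rectangle, which should telescope to 0.
F_t = sqrt_g * A_t
F_p = sqrt_g * A_p
dF_t = np.gradient(F_t, theta, axis=0)
dF_p = np.gradient(F_p, phi, axis=1)

dth = theta[1] - theta[0]
dph = phi[1] - phi[0]
total = np.sum(dF_t + dF_p) * dth * dph

# For scale: the integral of |div A| dS is O(1), so 'total' is genuinely small
scale = np.sum(np.abs(dF_t + dF_p)) * dth * dph
print(total, scale)
```

The ##\theta## term vanishes because ##\sqrt g \propto \sin\theta \to 0## at the poles, and the ##\phi## term by periodicity; neither requires ##r## to be constant.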

The easiest way to think of this is as a fluid flow with velocity described by a vector field. The divergence at a point is the limiting net amount of fluid leaving a small cube per unit time. Gauss' theorem is then just the realization that all of these outflows must cancel with the inflows of neighboring cubes, so the only net outflow is through the boundary.
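This "interior fluxes cancel" picture can be checked concretely. Below is a numerical sketch (my own example, using the hypothetical field ##\mathbf A = (x, y^2, z)##) verifying Gauss' theorem on the unit ball: the volume integral of ##\text{div}\,\mathbf A = 2 + 2y## matches the outward flux through the unit sphere (both equal ##8\pi/3##).

```python
import numpy as np

# Check Gauss' theorem for A = (x, y^2, z) on the unit ball:
#   integral of div A = 2 + 2y over the ball  ==  flux of A through the sphere.
n = 120
r = np.linspace(0.0, 1.0, n)
th = np.linspace(0.0, np.pi, n)
ph = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # periodic direction

# trapezoid weights in r and theta (uniform weights are fine in periodic phi)
wr = np.ones(n); wr[0] = wr[-1] = 0.5
wt = np.ones(n); wt[0] = wt[-1] = 0.5
dr, dth, dph = r[1] - r[0], th[1] - th[0], 2.0 * np.pi / n

# Volume integral of div A with dV = r^2 sin(theta) dr dth dph
R, TH, PH = np.meshgrid(r, th, ph, indexing="ij")
W = wr[:, None, None] * wt[None, :, None]
y = R * np.sin(TH) * np.sin(PH)
div_A = 2.0 + 2.0 * y
vol_integral = np.sum(W * div_A * R**2 * np.sin(TH)) * dr * dth * dph

# Outward flux through the unit sphere: A . n = x^2 + y^3 + z^2, n = (x, y, z)
TH2, PH2 = np.meshgrid(th, ph, indexing="ij")
x2 = np.sin(TH2) * np.cos(PH2)
y2 = np.sin(TH2) * np.sin(PH2)
z2 = np.cos(TH2)
flux = np.sum(wt[:, None] * (x2**2 + y2**3 + z2**2) * np.sin(TH2)) * dth * dph

print(vol_integral, flux)  # both ~ 8*pi/3 ~ 8.378
```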

So, now think of the boundary itself as a manifold and consider only the fluid flowing along the boundary, i.e., tangent to it. Our cubes are now squares, since the boundary is a manifold of one dimension less. We can reason that all of these divergences must cancel, since the boundary of our original manifold cannot have a boundary itself, often stated as "the boundary of a boundary is empty". For example, the closed unit ball in ##\mathbb R^3## has the sphere as its boundary, but the sphere has no boundary.

Hope this helps, but I really think you should look at Lee's book "Introduction to Smooth Manifolds" if you really want to understand it.

Note, I should also mention that any embedded surface is locally a level set of a function, so you can always locally find coordinates such that one of the coordinates is constant on the surface.
 