- #1

trini_sun

Sorry if this was addressed in another thread, but I couldn't find a discussion of it in a preliminary search. If it is discussed elsewhere, I'll appreciate being directed to it.


Okay, well here's my question. If I take the divergence of the unit radial vector field, I get the result:

[itex]\vec \nabla \cdot \hat r = \vec \nabla \cdot \frac{\vec r} {|\vec r|} = \frac {2} {|\vec r|}[/itex]

Now, since the direction of the vector field at any point depends on position, the non-zero result is not surprising. What is surprising, however, is that a constant-magnitude vector field has a divergence inversely proportional to the distance from the origin.
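As a quick numerical sanity check on that formula (a rough sketch of my own, not from the original post, assuming the three-dimensional field [itex]\hat r = \vec r / |\vec r|[/itex]), a central finite difference reproduces the [itex]2/|\vec r|[/itex] result at a few sample points:

```python
import math

def r_hat(x, y, z):
    # The unit radial field r / |r| (undefined at the origin).
    m = math.sqrt(x*x + y*y + z*z)
    return (x/m, y/m, z/m)

def divergence(f, p, h=1e-5):
    # Central finite-difference estimate of div f at point p.
    x, y, z = p
    return ((f(x+h, y, z)[0] - f(x-h, y, z)[0]) +
            (f(x, y+h, z)[1] - f(x, y-h, z)[1]) +
            (f(x, y, z+h)[2] - f(x, y, z-h)[2])) / (2*h)

for p in [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 2.0, 2.0)]:
    r = math.sqrt(sum(c*c for c in p))
    print(divergence(r_hat, p), "vs", 2.0/r)
```

So the divergence really does come out as 2 at 1 m from the origin and 1 at 2 m, which is exactly the factor-of-two puzzle described below.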

The fact that this vector field is unphysical might be why I'm having such a hard time making sense of it. But here's the thinking that's getting me into trouble. If I try to interpret this vector field as, say, a flow of air moving away from the origin, then regardless of where I measure, I should find exactly the same magnitude of air flow. Say I measure at a point 1 m from the origin and then at a point 2 m from the origin: I should get exactly the same magnitude for the flow at both points. Further, let's choose the two points so that the line connecting them also passes through the origin. Now there is absolutely no difference in the flow from one point to the next, in either magnitude or direction. Why then do the divergences at the two points differ by a factor of two?

It seems to me that the two points in the vector field are indistinguishable, and there isn't really any way to tell one from the other. In other words, if I move the origin anywhere along the line connecting the two points (provided the new origin does not lie between them!), then the value and direction of the vector field at both points remain the same, independent of the location of the new origin. Yet the divergence would change! What the heck is going on?

One of the ways Wikipedia defines the divergence is like this: "In vector calculus, **divergence** is a vector operator that measures the magnitude of a vector field's source or sink at a given point, in terms of a signed scalar. More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point."

So in the above example, if we take two separate but equal infinitesimal volumes around each point, why should we expect half (or twice) the volume density of the outward flux at one point versus the other?

Okay, well I've laid out my thought process as best I could. I'd appreciate any insight into what I'm missing regarding what exactly the divergence represents.
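That flux-density definition can be checked directly (again, just a rough numerical sketch of my own, not from the thread): integrate the outward flux of [itex]\hat r[/itex] over a small sphere of radius [itex]\epsilon[/itex] centred at each point, then divide by the sphere's volume. The ratio really does come out twice as large at 1 m as at 2 m:

```python
import math

def r_hat(x, y, z):
    # The unit radial field r / |r| (undefined at the origin).
    m = math.sqrt(x*x + y*y + z*z)
    return (x/m, y/m, z/m)

def flux_density(F, p, eps=1e-2, n=200):
    # Outward flux of F through a sphere of radius eps centred at p,
    # divided by the sphere's volume (midpoint rule in theta and phi).
    total = 0.0
    dth, dph = math.pi/n, 2*math.pi/n
    for i in range(n):
        th = (i + 0.5)*dth
        for j in range(n):
            ph = (j + 0.5)*dph
            # Outward unit normal on the sphere.
            nx = math.sin(th)*math.cos(ph)
            ny = math.sin(th)*math.sin(ph)
            nz = math.cos(th)
            Fx, Fy, Fz = F(p[0]+eps*nx, p[1]+eps*ny, p[2]+eps*nz)
            total += (Fx*nx + Fy*ny + Fz*nz)*math.sin(th)*dth*dph
    flux = total*eps*eps
    volume = 4.0/3.0*math.pi*eps**3
    return flux/volume

print(flux_density(r_hat, (1.0, 0.0, 0.0)))  # close to 2 = 2/|r|
print(flux_density(r_hat, (2.0, 0.0, 0.0)))  # close to 1 = 2/|r|
```

Even though the field has the same value and direction at both points, the two small spheres do not collect the same net outward flux per unit volume, which is what the definition is measuring.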
