Can anyone explain to me the concept of div and curl? Of course, I know how to determine the div and curl of a vector, but I don't really understand their physical significance. Any help is greatly appreciated. --Brian
The curl of a vector field is a measure of its rotation, and in some books curl(v) is even written as rot(v). Vector fields that circulate with a higher curvature have a higher value of |curl(v)|, and vector fields that do not circulate at all have zero curl. The divergence of a vector field is an indication of how the field spreads out (or in). If the lines of a vector field in a region are converging towards a point (called a sink), then the field has a negative divergence in that region. If they are, in some other region, diverging away from a point (called a source), then the field has a positive divergence in that region.
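Tom's picture of sources and circulation can be checked numerically. Here's a small sketch (my own example, not from the thread) that computes the 2-D divergence and the z-component of curl on a grid with NumPy, for one purely "spreading" field and one purely "rotating" field:

```python
import numpy as np

# Two sample 2-D vector fields on a grid (a sketch, not from the thread):
#   source field   v = (x, y)   -> spreads out from the origin: div = 2, curl = 0
#   rotating field v = (-y, x)  -> circulates about the origin: div = 0, curl = 2
x = np.linspace(-1.0, 1.0, 101)
y = np.linspace(-1.0, 1.0, 101)
dx = x[1] - x[0]
dy = y[1] - y[0]
X, Y = np.meshgrid(x, y)          # arrays indexed as [y, x]

def div_and_curl(vx, vy):
    """Return the 2-D divergence and the z-component of curl via finite differences."""
    dvx_dx = np.gradient(vx, dx, axis=1)
    dvx_dy = np.gradient(vx, dy, axis=0)
    dvy_dx = np.gradient(vy, dx, axis=1)
    dvy_dy = np.gradient(vy, dy, axis=0)
    return dvx_dx + dvy_dy, dvy_dx - dvx_dy

div_source, curl_source = div_and_curl(X, Y)      # v = (x, y)
div_rot, curl_rot = div_and_curl(-Y, X)           # v = (-y, x)

print(div_source.mean(), curl_source.mean())      # -> 2.0  0.0
print(div_rot.mean(), curl_rot.mean())            # -> 0.0  2.0
```

Because both fields are linear in x and y, the finite differences are exact here; for a general field they would only approximate div and curl.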
Thanks, Tom, for the excellent reply. Given your description, I can see how these operators would be useful in the analysis of electromagnetic phenomena. You seem to be pretty good at offering simple introductory explanations of concepts. Would you mind giving a similar explanation of a Fourier series? Its purpose, utility, and so on? Again, thanks for your help.
You're welcome. OK, let's back it up to Day One of Mechanics I.

Vectors: Components and Basis Vectors
You learned that any vector v can be decomposed as follows: v=v_{x}e_{x}+v_{y}e_{y}+v_{z}e_{z}. The v_{i} (i=x,y,z) are the components. The e_{i} (i=x,y,z) are the orthonormal basis vectors.

Vectors: Spaces and Inner Products
The vector space spanned by the basis vectors is called R^{3}. That is, any vector that exists in the vector space can be constructed as a linear combination of the spanning basis vectors. We can also define an inner product (aka dot product) on R^{3} as follows: e_{i}·e_{j}=δ_{ij}. That is, the inner product of two basis vectors is 1 if i=j, and zero otherwise. This encodes the orthonormality of the vectors.

Vectors: Components Revisited
So, having defined the inner product, we can express the components of a vector as follows: v_{i}=v·e_{i} (i=x,y,z). Now, let's imagine that we have an enlarged vector space of n dimensions, R^{n}. All the above still applies, but we have more basis vectors. We can let n go to infinity and have a countably infinite-dimensional vector space, and it would all still apply. Now we get to Fourier series.

Fourier Series: Components and Basis Functions
An odd function f(x) can be decomposed on an interval (0,L) as follows (I'm sticking to odd functions for simplicity; I'll get to other functions later): f(x)=Σ_{n}a_{n}sin(nπx/L), where the sum is taken from n=1 to infinity. The a_{n} are the Fourier coefficients, and we will come to see that they can be regarded as the "components" of the function in much the same way as the v_{i} are the components of the vector v above. The sin(nπx/L) are the basis functions, and we will come to see that they can be regarded as the "basis vectors" in much the same way as the e_{i} are the basis vectors of R^{n}.

Functions: Spaces and Inner Products
I've already started to draw the parallel between vectors and functions.
So, the natural question is, "What is the 'space' of functions?" Answer: function space, which I'll call F. F can be thought of as an infinite-dimensional vector space (and the results from above apply for infinite-dimensional vector spaces, remember?) which is spanned by the basis functions. That is, any function that exists in the function space can be constructed as a linear combination of the basis functions. Let the basis functions be represented by their index n as follows: f_{n}(x)=sin(nπx/L). We can define an inner product on this space as follows: <f_{m}(x),f_{n}(x)>=∫f_{m}(x)f_{n}(x)dx=(L/2)δ_{mn} (you can verify the last step yourself), where the integration is taken from 0 to L. This encodes the orthogonality of the basis functions, just like the vector inner product in R^{n} encodes the orthonormality of the basis vectors; the factor of L/2 is why a 2/L appears in the coefficient formula below.

Functions: Components Revisited
So, having defined the inner product, we can express the components of a function as follows: a_{j}=(2/L)<f_{j}(x),f(x)> (I'll leave that as an exercise). Now, I said that the series I wrote down was for odd functions. It turns out that even functions must be represented in terms of cosine basis functions (because they are even). Functions that are neither even nor odd can be represented by series involving both sine and cosine terms. I hope that I have made Fourier series more concrete for you by connecting it to something more familiar. I am going to stop posting for a while, because my fingers are killing me!

edit: fixed a couple of brackets
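Tom's recipe can be tried out numerically. Here's a sketch (my own example, not from the thread) with L = 1 and the odd function f(x) = x: it checks the orthogonality integral, computes the "components" a_n from the inner product, and reconstructs f from a partial sum:

```python
import numpy as np

# A sketch of the recipe above with L = 1 and f(x) = x (my choice of example).
# The "components" are a_n = (2/L) <f_n, f>, with <g, h> = integral of g*h over (0, L).
L = 1.0
x = np.linspace(0.0, L, 2001)
dxx = x[1] - x[0]
f = x

def basis(n):
    return np.sin(n * np.pi * x / L)

def inner(g, h):
    # Trapezoid-rule approximation of the inner product <g, h> on (0, L).
    prod = g * h
    return (prod[0] / 2 + prod[1:-1].sum() + prod[-1] / 2) * dxx

# Orthogonality: <f_m, f_n> = (L/2) delta_mn, checked numerically.
inner_11 = inner(basis(1), basis(1))   # ~ L/2 = 0.5
inner_12 = inner(basis(1), basis(2))   # ~ 0

# Fourier "components" via the inner product.
N = 200
a = np.array([(2.0 / L) * inner(basis(n), f) for n in range(1, N + 1)])

# The partial sum reconstructs f away from the endpoints.
partial_sum = sum(a[n - 1] * basis(n) for n in range(1, N + 1))
err_mid = abs(partial_sum[1000] - f[1000])   # error at x = 0.5
```

For f(x) = x the exact coefficients are a_n = 2(-1)^(n+1)/(nπ), so a[0] should come out near 2/π ≈ 0.637; the reconstruction error shrinks as N grows (away from the endpoints, where the odd extension forces the series to zero).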
The notions of odd and even need elaboration on the interval (0,L). It should also be emphasized that the Fourier series is not, in general, pointwise equal to the original function, and that the sine-and-cosine basis is meant in an analytic sense: the series must converge, whereas in a formal infinite-dimensional vector space there is no such requirement.
Tom Mattson: "Vector fields that circulate with a higher curvature have a higher value of |curl(v)|.... If the lines of a vector field in a region are converging towards a point (called a sink), then the field has a negative divergence in that region. If they are, in some other region, diverging away from a point (called a source), then the field has a positive divergence in that region." That's not really true. Your description would lead someone to think the E field near a point charge has divergence, or that the B field near a current-carrying wire has curl. But they don't.
chroot asked: "So Gauss's law is wrong?" No, and I didn't say it is. Tom Mattson said, "If (the field lines) are, in some other region, diverging away from a point (called a source), then the field has a positive divergence in that region." That seems misleading to me. The field lines near a point charge look like they're "diverging away from a point", but the divergence of the field is zero everywhere except at the charge itself.
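The point being made here can be verified symbolically. Here's a sketch (my own, using SymPy, not from the thread) showing that the Coulomb field has zero divergence away from the charge, and the field of a straight wire has zero curl away from the wire (overall constants dropped):

```python
import sympy as sp

# Symbolic check (a sketch, not from the thread), away from the origin/axis.
x, y, z = sp.symbols('x y z', real=True, positive=True)

# E field of a point charge at the origin (constants dropped): E = r_hat / r^2.
r = sp.sqrt(x**2 + y**2 + z**2)
E = (x / r**3, y / r**3, z / r**3)
div_E = sp.simplify(sum(sp.diff(comp, var) for comp, var in zip(E, (x, y, z))))

# B field of a current along the z-axis (constants dropped): B = phi_hat / s,
# where s is the distance from the axis. Only the z-component of curl is nonzero a priori.
s2 = x**2 + y**2
B = (-y / s2, x / s2, sp.Integer(0))
curl_B_z = sp.simplify(sp.diff(B[1], x) - sp.diff(B[0], y))

print(div_E, curl_B_z)   # both simplify to 0 away from the charge / wire
```

The divergence and curl are singular (delta-function-like) at the charge and on the wire themselves, which is exactly where Gauss's and Ampère's laws pick up their source terms.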