# Vector calculus questions in electrodynamics

I'm reading Introduction to Electrodynamics by Griffiths, and the mathematical techniques used are sloppy to the point of frustration, so I have several problems with the math while reading the text.

1) It introduces the Dirac delta function in one dimension:
δ(x) = 0 if x ≠ 0, δ(x) = ∞ if x = 0, and
∫ δ(x) dx = 1.
It then states div(r/r^3) = 4π δ^3(r).
The justification is from the divergence theorem: if we take the surface integral of r/r^3 around the origin, the result is 4π, while the divergence is zero everywhere except at the origin. I find two problems with this argument. First, the proof of the divergence theorem given in elementary calculus assumes the vector field is continuously differentiable, which is not the case for the inverse-square field, so the divergence theorem shouldn't even apply. Second, the reasoning is a bit fishy: just because the divergence blows up at zero, does that give a firm justification for concluding that the divergence is EQUAL to 4π δ^3(r)?
The mathematical reasoning is pretty sloppy.

2) Another question: when deriving the vector potential, the text arrives at Poisson's equation. Does Poisson's equation always have a solution (though it may not be in closed form)? (That was needed to find a divergenceless vector potential for B.)

Thanks.

gabbagabbahey
Homework Helper
Gold Member
> I'm reading Introduction to Electrodynamics by Griffiths, and the mathematical techniques used are sloppy to the point of frustration, so I have several problems with the math while reading the text.
>
> 1) It introduces the Dirac delta function in one dimension:
> δ(x) = 0 if x ≠ 0, δ(x) = ∞ if x = 0, and
> ∫ δ(x) dx = 1.
> It then states div(r/r^3) = 4π δ^3(r).
> The justification is from the divergence theorem: if we take the surface integral of r/r^3 around the origin, the result is 4π, while the divergence is zero everywhere except at the origin. I find two problems with this argument. First, the proof of the divergence theorem given in elementary calculus assumes the vector field is continuously differentiable, which is not the case for the inverse-square field, so the divergence theorem shouldn't even apply. Second, the reasoning is a bit fishy: just because the divergence blows up at zero, does that give a firm justification for concluding that the divergence is EQUAL to 4π δ^3(r)?
> The mathematical reasoning is pretty sloppy.
The use of the divergence theorem is indeed not mathematically rigorous in that the theorem applies only to continuously differentiable functions. However, it can be shown rigorously that the result $\int \mathbf{ \nabla } \cdot \frac{ \mathbf{r} }{r^3} d\tau = \oint \frac{ \mathbf{r} }{r^3} \cdot d \mathbf{a} = 4\pi$ is true nonetheless (as one would expect on physical grounds).
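As a sanity check on that value, here is a quick numerical sketch (not from the text; the function name is mine) confirming that the outward flux of $\frac{ \mathbf{r} }{r^3}$ through a sphere is $4\pi$ regardless of the radius:

```python
import math

def flux_through_sphere(radius, n_theta=400):
    """Numerically compute the outward flux of r/r^3 through a sphere.

    On a sphere of radius R the field is r_hat/R^2 and the outward area
    element is r_hat R^2 sin(theta) dtheta dphi, so the R-dependence cancels.
    Both factors are kept explicit here to mirror the surface integral.
    """
    dtheta = math.pi / n_theta
    flux = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta  # midpoint rule in theta
        # (r_hat/R^2) . r_hat, times area element R^2 sin(theta) dtheta,
        # integrated over phi in [0, 2*pi) (the integrand is phi-independent)
        flux += (1.0 / radius**2) * radius**2 * math.sin(theta) * dtheta * (2.0 * math.pi)
    return flux

print(flux_through_sphere(1.0))  # close to 4*pi ~ 12.566
print(flux_through_sphere(7.5))  # same value: the flux is independent of R
```

The radius-independence is exactly what forces all the "divergence" to sit at the origin.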

From this, it follows that $\mathbf{ \nabla } \cdot \frac{ \mathbf{r} }{r^3} = 4\pi \delta^3( \mathbf{r} )$ is certainly a valid choice, and mathematically equivalent to any other valid choice for $\mathbf{ \nabla } \cdot \frac{ \mathbf{r} }{r^3}$. (Remember, two distributions are equivalent if they give the same value when integrated over every region.)

> 2) Another question: when deriving the vector potential, the text arrives at Poisson's equation. Does Poisson's equation always have a solution (though it may not be in closed form)? (That was needed to find a divergenceless vector potential for B.)
IIRC, Poisson's equation always has a unique solution provided certain types of boundary conditions are specified. On physical grounds, we again expect the boundary conditions to always be of these types.
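For the magnetostatic case the existence question can be answered constructively: assuming the current density $\mathbf{J}$ is localized and well-behaved, Poisson's equation $\nabla^2 \mathbf{A} = -\mu_0 \mathbf{J}$ is solved, component by component, by the same integral construction used for the scalar potential,

$$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi} \int \frac{\mathbf{J}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d\tau',$$

precisely because $\nabla^2 \frac{1}{|\mathbf{r}-\mathbf{r}'|} = -4\pi \delta^3(\mathbf{r}-\mathbf{r}')$, which is the same identity you asked about in question 1.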

In general, you will find a lot of physics textbooks and papers lacking mathematical rigor. This is due to several factors (not wanting to get too bogged down in rigor and thereby distracting from the physics is probably the biggest factor, with laziness a close second, and in some cases even technical incompetence), and the rigor is usually replaced with the (often unwritten) assumption that certain equations must hold (or have solutions) on physical grounds.

Thanks!
A question regarding the divergence: is it defined as $\mathbf{ \nabla } \cdot \mathbf{F} = \lim_{V \to 0} \frac{1}{V} \oint \mathbf{F} \cdot d\mathbf{a}$? If it is defined this way, how do we know the usual operational rules apply (at all points), for example $\mathbf{ \nabla } \cdot (f \mathbf{A}) = (\mathbf{ \nabla } f) \cdot \mathbf{A} + f\, \mathbf{ \nabla } \cdot \mathbf{A}$?

gabbagabbahey
Homework Helper
Gold Member
> Thanks!
> A question regarding the divergence: is it defined as $\mathbf{ \nabla } \cdot \mathbf{F} = \lim_{V \to 0} \frac{1}{V} \oint \mathbf{F} \cdot d\mathbf{a}$? If it is defined this way, how do we know the usual operational rules apply (at all points), for example $\mathbf{ \nabla } \cdot (f \mathbf{A}) = (\mathbf{ \nabla } f) \cdot \mathbf{A} + f\, \mathbf{ \nabla } \cdot \mathbf{A}$?
Griffiths defines the divergence as

$$\mathbf{ \nabla } \cdot \mathbf{v} = \frac{ \partial v_x }{ \partial x } + \frac{ \partial v_y }{ \partial y } + \frac{ \partial v_z }{ \partial z }$$

(Eq. 1.40), and although it is neither very rigorous nor very general, it is appropriate for the calculations in the text. The vector product rules are all derived from this definition, together with the usual single-variable product rule. For your specific example, Griffiths provides a (again, not very rigorous) proof in Section 1.2.6.

vanhees71
Gold Member
2019 Award
I have the impression this book by Griffiths confuses people more than it helps. Perhaps I should get it from the library to look at it in detail. It seems to have a good reputation as an introductory textbook. My favorites are Landau-Lifschitz (vol. 2 on classical field theory and vol. 8 on macroscopic electromagnetics), Jackson, and particularly Schwinger.

It starts already with the definition of the $\delta$ distribution (it's important to call it distribution and not function!). Of course this "definition" doesn't make the slightest sense, and students introduced in this way to it have no chance to understand its purpose and how to handle it.

The correct definition is that the $\delta$ distribution is a functional from a function space into the (real or complex) numbers. Here it's already applied to functions $f:\mathbb{R}^3 \rightarrow \mathbb{R}.$ It is defined by the rule
$$\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x}' \; \delta^{(3)}(\vec{x}-\vec{x}') f(\vec{x}')=f(\vec{x}).$$
The "allowed" functions $f$ are sufficiently well-behaved (smooth and sufficiently quickly vanishing at infinity) functions.

Then the equation
$$\Delta \frac{1}{4 \pi |\vec{x}|}=\vec{\nabla} \cdot \frac{\vec{x}}{4 \pi |\vec{x}|^3}$$
becomes clear in the sense that for any such test function $f$, you must show
$$\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} f(\vec{x}) \vec{\nabla} \cdot \frac{\vec{x}}{4 \pi |\vec{x}|^3}=f(0).$$
This is not too difficult a task. The trick is to show, with the help of Gauß's theorem, that instead of integrating over the whole space you can just as well integrate over an arbitrary sphere around the origin.
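A heuristic version of that computation (not fully rigorous, but it shows where the value $f(0)$ comes from): since the divergence vanishes away from the origin, only a small ball $B_R$ around the origin contributes; for small $R$ one may replace $f(\vec{x})$ by $f(0)$ there, and Gauß's theorem turns the volume integral into a flux integral,
$$\int_{B_R} \mathrm{d}^3 \vec{x} \; f(\vec{x}) \, \vec{\nabla} \cdot \frac{\vec{x}}{4 \pi |\vec{x}|^3} \approx f(0) \oint_{S_R} \frac{\vec{x}}{4 \pi |\vec{x}|^3} \cdot \mathrm{d} \vec{a} = f(0) \, \frac{1}{4 \pi R^2} \cdot 4 \pi R^2 = f(0).$$
The rigorous argument makes the limit $R \to 0$ in this replacement precise.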

Also, the divergence should be introduced as written in biggerst's posting. This is the coordinate-independent definition, and the special rules for orthonormal bases then follow by using a box-like volume spanned by the coordinate lines around the point where you want to evaluate the divergence.

Of course, since the differential operators are defined in a coordinate-independent way, you can then prove general rules in Cartesian coordinates, where they take their simplest form. Here I prefer the Ricci index calculus with the Einstein summation convention. Your example looks like this:
$$\vec{\nabla} \cdot (f \vec{a}) = \partial_j (f a_j)=(\partial_j f) a_j + f \partial_j a_j = \vec{a} \cdot \vec{\nabla} f+f \vec{\nabla} \cdot \vec{a}.$$
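If you want to check such identities mechanically, the same index computation can be verified symbolically (a sketch assuming SymPy is available; the symbol names are mine):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

# f is an arbitrary scalar field, a an arbitrary vector field
f = sp.Function('f')(x, y, z)
a = [sp.Function(f'a{j}')(x, y, z) for j in range(3)]

# div(f*a), computed component by component: sum_j d_j(f a_j)
lhs = sum(sp.diff(f * a[j], coords[j]) for j in range(3))

# a . grad(f) + f div(a)
rhs = sum(sp.diff(f, coords[j]) * a[j] for j in range(3)) \
    + f * sum(sp.diff(a[j], coords[j]) for j in range(3))

print(sp.simplify(lhs - rhs))  # 0: the identity holds for arbitrary fields
```

Since `f` and the components of `a` are undefined functions, the cancellation holds identically, not just for particular fields.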

Whilst I am not a fan of Griffiths, I think his three-page introduction to the Dirac delta function (pp. 45-47) is not bad for its intended purpose, and rather more than the impression post #1 gives.

> 1) It introduces the Dirac delta function in one dimension:
> δ(x) = 0 if x ≠ 0, δ(x) = ∞ if x = 0, and
> ∫ δ(x) dx = 1.

This is not actually given as the definition, but as a list of properties.

If you want to follow this route to a formal definition, consider the function

$$\delta_a(x) = \begin{cases} \frac{1}{2a}, & |x| < a \\ 0, & |x| > a \end{cases}$$

which is a well-defined but discontinuous function.

The Dirac delta function (of $x$) is then

$$\delta(x) = \lim_{a \to 0} \delta_a(x).$$
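One can check numerically that $\delta_a$ behaves like a delta when integrated against a smooth function (a sketch, not from any text; the helper names are mine, and the midpoint rule is just one convenient quadrature):

```python
import math

def delta_a(x, a):
    """Box approximation to the Dirac delta: height 1/(2a) on |x| < a, else 0."""
    return 1.0 / (2.0 * a) if abs(x) < a else 0.0

def integrate(g, lo, hi, n=40_000):
    """Midpoint-rule quadrature of g on [lo, hi]."""
    h = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * h) for k in range(n)) * h

# As a -> 0, the integral of delta_a(x) * f(x) approaches f(0) = cos(0) = 1.
for a in (0.5, 0.1, 0.01):
    print(a, integrate(lambda x: delta_a(x, a) * math.cos(x), -1.0, 1.0))
```

This is exactly the functional definition vanhees71 gave, applied to the box functions before taking the limit.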

Where could I find more details regarding the rigorous definition of the divergence and its properties? Books? The internet (I have tried and failed)?

vanhees71
Gold Member
2019 Award
It should also not be too rigorous for physicists, only rigorous enough ;-)).

For a first introduction, I think the best source for vector calculus is

A. Sommerfeld, Lectures on Theoretical Physics, Vol. 2 (Hydrodynamics)

There you find a section on the relevant mathematics of vector calculus in Euclidean $\mathbb{R}^3$. For the $\delta$ distribution I recommend the small but profound booklet

M. J. Lighthill, An Introduction to Fourier Analysis and Generalised Functions (Cambridge Monographs on Mechanics)

It's difficult to know where to put an answer to this, since I don't know what you are studying, or at what level, but I agree with this sentiment:

> It should also not be too rigorous for physicists, only rigorous enough ;-)).
There is quite a substantial gap between high-school texts and the full-blooded maths books intended for higher physics.

The Chemistry Maths Book by Erich Steiner - Oxford University Press

is a good bridge, and contains many useful things apart from div for those making the transition. Most of the examples are drawn directly from the physical sciences rather than mathematics, e.g. thermodynamics, quantum theory, etc.

A small step up is

Mathematical Methods for the Physical Sciences by K. F. Riley - Cambridge University Press.

Both are excellent books.

If you want to get more mathematical, specifically about vector calculus, a good solid introduction up to a level of useful calculations is

Vector Calculus by P C Matthews - Springer Verlag

And of course the comprehensive

Vector Calculus by Marsden and Tromba - Freeman.

Thanks!