dreamLord said:
I'm afraid your first few lines were completely lost on me! Is there any way you can dumb it down a bit?
Also, the equation that you wrote: is this the definition of the Dirac delta function? Or is it defined by the fact that it is 0 when x is not zero and infinity when x is 0? Which one defines it? Or are they the same thing?
The point is that many introductory physics books confuse their readers with imprecise definitions of what a distribution is. Griffiths seems to be another example. I don't know his E&M book very well, apart from discussions here in the forum.
Objects like the Dirac \delta are so-called distributions. They are defined as mappings from a function space (containing a certain set of functions, called test functions) to the (real or complex) numbers. They can only be defined in a manner that makes sense under an integral, where they are multiplied by a test function, and for the Dirac \delta distribution this definition reads
\int_{\mathbb{R}} \mathrm{d} x \delta(x) f(x)=f(0).
It's the value of the test function at the argument 0.
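For instance (a simple example of my own, not part of the formal definition): with the test function f(x)=\mathrm{e}^{-x^2} \cos x you get
\int_{\mathbb{R}} \mathrm{d} x \delta(x) \mathrm{e}^{-x^2} \cos x = \mathrm{e}^{0} \cos 0 = 1.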
It is quite obvious that \delta(x) cannot be a function in the usual sense, because you won't find any function with the property given above. However, you can define the \delta distribution as a kind of limit, the so-called weak limit. The idea is to define functions which are sharply peaked around 0, with the integral normalized to 1. The simplest example is the "box function",
\delta_{\epsilon}(x)=\begin{cases}
1/(2 \epsilon) & \text{for} \quad x \in (-\epsilon,\epsilon), \\
0 & \text{elsewhere}.
\end{cases}
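The normalization mentioned above is easy to check for every \epsilon > 0, because the box has width 2 \epsilon and height 1/(2 \epsilon):
\int_{\mathbb{R}} \mathrm{d} x \delta_{\epsilon}(x)=\frac{1}{2 \epsilon} \int_{-\epsilon}^{\epsilon} \mathrm{d} x = 1.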
The test functions should have "nice properties" to make things convenient. They should still form a vector space of functions, i.e., with any two functions their sum, and with any function its product with a constant, should also belong to this function space. A very convenient choice is Schwartz's space of rapidly falling smooth functions, i.e., functions that are differentiable arbitrarily many times and fall off at infinity faster than any inverse power of |x|.
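A standard example of such a test function (my example, not mentioned above) is the Gaussian
f(x)=\mathrm{e}^{-x^2},
which is smooth and, together with all its derivatives, falls off faster than any inverse power of |x|.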
Now we check the integral
I_{\epsilon}=\int_{\mathbb{R}} \mathrm{d} x \delta_{\epsilon}(x) f(x) = \frac{1}{2 \epsilon} \int_{-\epsilon}^{\epsilon} \mathrm{d} x f(x).
Now, according to the mean-value theorem for integrals of continuous functions, there is a value \xi \in [-\epsilon,\epsilon] such that
I_{\epsilon}=f(\xi).
Now, since \xi is confined to [-\epsilon,\epsilon] and f is continuous, letting \epsilon \rightarrow 0^+ gives
\lim_{\epsilon \rightarrow 0^+} I_{\epsilon}=f(0).
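As a concrete check (again my own example, not part of the general argument), take the Gaussian test function f(x)=\mathrm{e}^{-x^2}. Expanding the exponential and integrating term by term gives
I_{\epsilon}=\frac{1}{2 \epsilon} \int_{-\epsilon}^{\epsilon} \mathrm{d} x \mathrm{e}^{-x^2}=\frac{1}{2 \epsilon} \left(2 \epsilon-\frac{2 \epsilon^3}{3}+\dots\right)=1-\frac{\epsilon^2}{3}+\dots \rightarrow 1 = f(0)
as \epsilon \rightarrow 0^+.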
This means that in the sense of a weak limit you may write
\lim_{\epsilon \rightarrow 0^+} \delta_{\epsilon}(x)=\delta(x).
"Weak limit" means that you have to follow the above given procedure: You first have to take an integral with a test function and then take the limit. This is the crucial point.
If you now look at what happens in this example, Griffiths' sloppy definition makes some sense, but one has to keep in mind the proper meaning in the sense given above. Obviously our functions \delta_{\epsilon} are concentrated around 0 and become larger and larger the smaller \epsilon gets when taking the limit \epsilon \rightarrow 0^+. At the same time the interval where our function is different from 0 shrinks, and the construction is such that the total area under the graph (which here is a rectangle) stays equal to 1 for all \epsilon. In this sense you may characterize the \delta distribution as cited by Griffiths. To avoid confusion, however, it's mandatory to learn the proper definition of distributions (also known as "generalized functions").
The best book for the physicist I know of is
M. J. Lighthill, Introduction to Fourier Analysis and Generalised Functions, Cambridge University Press (1959)
That it treats distributions together with Fourier series and Fourier integrals is no disadvantage, since you'll need those anyway when studying electrodynamics.