friend said:
All this leaves me wondering if the generalized function approach to the Dirac delta may be too much of a generalization.
<snip>
The theory of generalised functions was developed
because the Dirac delta is badly behaved: it must, inevitably, break ordinary calculus. For example, if we accept
##1 = \int_{-\infty}^\infty \delta(x) \, dx##
then from the properties of the Riemann (or Lebesgue) integral
##\int_{-\infty}^\infty \delta(x) \, dx = \int_{-\infty}^0 \delta(x) \, dx + \int_{0}^0 \delta(x) \, dx + \int_{0}^\infty \delta(x) \, dx = 0+0+0##
because ##\delta## vanishes away from zero and the integral over the single point ##\{0\}## is zero. And yes, even if ##\delta(0)=\infty##, by the definition of the integral it still evaluates to zero. Someone also had a proof of Dirac breaking the product rule; IIRC they showed that the Heaviside step function equals ##x\delta(x)+1##, or something equally meaningless. (Hint: think about what happens when ##x<0##.)
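To see how the usual rigorous workaround behaves, here is a quick numerical sketch using a Gaussian "nascent delta" (the width ##\epsilon##, the grid, and the test function are all made-up choices for illustration). Each approximation ##\delta_\epsilon## is an honest function, and integrating it against a continuous ##f## approaches ##f(0)## as ##\epsilon \to 0## -- whereas the pointwise limit object has no such Riemann/Lebesgue integral:

```python
import numpy as np

def delta_eps(x, eps):
    """Gaussian 'nascent delta': an ordinary function with unit area."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# Riemann-sum quadrature on a fine grid (illustrative choices)
x = np.linspace(-5.0, 5.0, 200001)
dx = x[1] - x[0]
f = np.cos  # any continuous test function; f(0) = 1

for eps in (0.5, 0.1, 0.02):
    sift = np.sum(delta_eps(x, eps) * f(x)) * dx
    print(eps, sift)  # tends to f(0) = 1 as eps shrinks
```

Every ##\delta_\epsilon## is perfectly well behaved; it is only the limit object that breaks the additivity argument above.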
The thing is, if you are willing to justify every single step you take (such as the substitution of ##x-x_0##, the change of variables inside the integral, and so on), sure, go ahead. But you can't be casual, because something will fail. You are doing a completely different calculus.
PS. I don't know how you got ##\int_{-\epsilon}^\epsilon \delta(x)dx=1 \implies \int_{-\infty}^\infty \delta(x)f(x) \, dx = f(0)##.
I think what's going on in distribution theory is that since they have the delta as a distribution with compact support, they want any function integrated against it to also be a distribution with compact support, so it will make sense to them to write ##(\delta, f)## to stand for a generalized inner product of the same kinds of things (like vectors, or functions with compact support). This prevents them from forming ##(\delta, 1)## since ##1## is not the same kind of thing as ##\delta##. But all that seems to really matter in practice is that the integral is doable. So they treat the parameter ##\varepsilon## as large in comparison with the differentials in the process of doing the calculus, then they make ##\varepsilon## approach zero if needed. Is it true that this avoids any problems with precise definitions in terms of generalized distributions?
There are three parts to this, so I'll break it up.
First the notation ##(\delta, f)## traces its origin to Dirac's bra-ket notation. There we write all vectors (i.e. functions) as ##|f\rangle## and all functionals as ##\langle \phi |##. The evaluation is written as ##\langle \phi | |f\rangle##, but usually we only write one pipe in the centre. Now quantum mechanics is usually on ##L^2## so every function ##|g \rangle## defines a functional ##\langle g|## through the inner product
##\langle g| f \rangle = \big( |g \rangle, |f\rangle \big)_{L^2}##.
In parallel to this, people wanted to keep using the notation on non-inner-product spaces, so they wrote ##\langle \phi |f\rangle## even though there is no inner product. It's notation overloading. I don't know why this changed to ##(\phi, f)## in generalised functions. Maybe to avoid confusion with bra-kets?
Secondly, distributions with compact support are a completely different beast. Most textbooks just say this is what they are and then ignore them. You can't take a countable sum and stay in the space, you can't take an antiderivative and stay in the space, and so on. So for the majority of applications where you want generalised functions, they are a poor fit. I personally haven't seen any applications for this space.
Thirdly, as to your last question (if I am understanding it correctly), my gut says yes. Take the set of all generalised functions with compact support; IIRC that set is denoted ##\mathcal{E}'(\mathbb{R})##. Because we define it as a subset of ##\mathcal{D}'(\mathbb{R})##, it starts out with the same test-function space. But we can enlarge the test functions to find the largest space that works for ##\mathcal{E}'##. I think that space is ##C^\infty(\mathbb{R})##, and the constant ##f(x)=1## is in it. I don't know how to prove that, though. Oh, and if you go down this route, you'll need to be careful how you manipulate Dirac: you need to stay "inside" ##\mathcal{E}'##. And I have absolutely no idea how compositions work in this space.
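As a sanity check on pairing ##\delta## against test functions from ##C^\infty(\mathbb{R})## rather than ##\mathcal{D}(\mathbb{R})##, here is a mollified-delta sketch (the Gaussian width and the grid are illustrative assumptions): the pairing ##(\delta_\epsilon, 1)## is 1 for every width, and ##(\delta_\epsilon, x^2)## goes to 0, so the extended pairing is at least numerically consistent even though ##1## and ##x^2## have no compact support:

```python
import numpy as np

def delta_eps(x, eps):
    # Gaussian approximation to delta, unit area
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-5.0, 5.0, 200001)  # wide enough that the Gaussian tail is negligible
dx = x[1] - x[0]
eps = 0.05

pair_one = np.sum(delta_eps(x, eps) * 1.0) * dx   # ~ (delta, 1) = 1
pair_sq = np.sum(delta_eps(x, eps) * x**2) * dx   # ~ (delta, x^2) = eps^2, vanishing with eps
print(pair_one, pair_sq)
```

The pairing works here precisely because the (approximate) delta itself has a tightly localised bump: the behaviour of the smooth test function far away never enters.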
What worries me is that you probably won't be able to obtain global information about ##g##. If you could form ##\sum_{n=-\infty}^\infty \delta(x- n x_0)## you could "ping" ##g## everywhere on your manifold. But you'll be stuck with finitely many pings.
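The finite-ping restriction can be sketched numerically with a truncated comb of mollified deltas (the function ##g##, the spacing ##x_0##, the width, and the cutoff ##|n|\le 3## are all made-up for illustration): integrating the comb against ##g## recovers the samples ##g(n x_0)## at finitely many lattice points, and nothing outside the truncation window:

```python
import numpy as np

def delta_eps(x, eps):
    # Gaussian approximation to delta, unit area
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

g = lambda t: t**2 + 1        # toy function to "ping" (illustrative)
x0, eps, N = 1.0, 0.01, 3     # lattice spacing, delta width, truncation

x = np.linspace(-10.0, 10.0, 400001)
dx = x[1] - x[0]

# Truncated comb: sum_{n=-N}^{N} delta(x - n*x0), i.e. finitely many pings
comb = sum(delta_eps(x - n * x0, eps) for n in range(-N, N + 1))

pinged = np.sum(comb * g(x)) * dx                  # integral of comb * g
exact = sum(g(n * x0) for n in range(-N, N + 1))   # the 7 recovered samples
print(pinged, exact)
```

Any sample of ##g## beyond ##|x| > N x_0## is simply invisible to this object, which is the "no global information" problem in miniature.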
Similarly, ##x## and ##x_0## will always need to stay "close" to each other.
Am I making sense?