meopemuk said:
I can always define the delta function as a limit of a sequence of "normal" functions whose integral is equal to 1 and whose support is shrinking around one point.
Similarly, I can define the "square root of the delta function" as a limit of a sequence of "normal" functions such that integrals of their squares are equal to 1 and supports are shrinking.
As others have indicated, this doesn't seem possible.
Could you please give a specific example of such a sequence of "normal
functions" that does indeed have the desired properties?
The obvious first attempt fails: consider the usual delta distribution
represented as the limit of a sequence of Gaussians:
$$\delta(x) ~:=~ \lim_{\epsilon\to 0}\; \delta_\epsilon(x),
\qquad
\delta_\epsilon(x) ~:=~ \frac{1}{\epsilon\sqrt{\pi}}\; \exp(-x^2/\epsilon^2).$$
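(Each representative really is normalized for every $\epsilon > 0$, by the standard Gaussian integral $\int\!dx\; e^{-x^2/\epsilon^2} = \epsilon\sqrt{\pi}$:

$$\int\!dx\; \delta_\epsilon(x) ~=~ \frac{1}{\epsilon\sqrt{\pi}} \int\!dx\; e^{-x^2/\epsilon^2} ~=~ \frac{1}{\epsilon\sqrt{\pi}}\cdot\epsilon\sqrt{\pi} ~=~ 1.)$$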
Then the naive square-root of this is
$$\sqrt{\delta}(x) ~:=~ \lim_{\epsilon\to 0}\; \sqrt{\delta_\epsilon(x)}
~=~ \lim_{\epsilon\to 0}\; \frac{1}{\sqrt{\epsilon}\; \pi^{1/4}}\; \exp(-x^2/2\epsilon^2),$$
but a short computation shows that
$$\lim_{\epsilon\to 0} \int\!dx \; \sqrt{\delta_\epsilon(x)} ~=~ 0,$$
and similarly,
$$\lim_{\epsilon\to 0} \int\!dx \; x^n\, \sqrt{\delta_\epsilon(x)} ~=~ 0$$
for any non-negative integer $n$.
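Explicitly, the short computation is just another Gaussian integral, $\int\!dx\; e^{-x^2/2\epsilon^2} = \epsilon\sqrt{2\pi}$, so

$$\int\!dx\; \sqrt{\delta_\epsilon(x)} ~=~ \frac{\epsilon\sqrt{2\pi}}{\sqrt{\epsilon}\;\pi^{1/4}} ~=~ \sqrt{2\epsilon}\;\pi^{1/4} ~\to~ 0 \qquad (\epsilon\to 0),$$

and since $\int\!dx\; x^n e^{-x^2/2\epsilon^2}$ scales as $\epsilon^{n+1}$, the $n$-th moment goes like $\epsilon^{n+1/2}$, which also vanishes.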
So defining the "square root of a delta distribution" in the above way
doesn't work usefully. It's equivalent to the trivial zero distribution,
hence a set of such "functions", indexed by a continuous parameter,
cannot serve as a basis for a nontrivial Hilbert space.
(BTW, this also means that "square-root" is a misleading name, since we normally think that if $\sqrt{z}=0$ then $z=0$, which is not the case here.)
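A quick numerical check of this scaling, as a sketch only (the helper name `delta_eps` and the integration windows are my own choices; the quadrature is SciPy's `quad`):

```python
import numpy as np
from scipy.integrate import quad

def delta_eps(x, eps):
    """Gaussian representative of the delta distribution."""
    return np.exp(-x**2 / eps**2) / (eps * np.sqrt(np.pi))

for eps in (1.0, 0.1, 0.01):
    # integrate over a window wide enough to capture the whole peak;
    # points=[0.0] forces the adaptive quadrature to subdivide at the spike
    a, b = -20 * eps, 20 * eps
    norm, _ = quad(lambda x: delta_eps(x, eps), a, b, points=[0.0])
    root, _ = quad(lambda x: np.sqrt(delta_eps(x, eps)), a, b, points=[0.0])
    exact = np.sqrt(2 * eps) * np.pi**0.25   # closed form sqrt(2*eps)*pi^(1/4)
    print(f"eps={eps}: int delta = {norm:.6f}, "
          f"int sqrt(delta) = {root:.6f} (exact {exact:.6f})")
```

The normalization integral stays at 1 for every `eps`, while the square-root integral tracks $\sqrt{2\epsilon}\,\pi^{1/4}$ and shrinks with `eps`.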
I suppose one could then say: ok, the square root of a Dirac delta is not a
distribution, but some other type of object, which I'll call an "Rdist", with
properties (presumably) like the following:
Definition: An Rdist space $V$ is a linear space over the complex field, equipped with a symmetric bilinear product
$$\star \,:\, V\times V ~\to~ D,$$
where $D$ is a space of distributions, such that (for $v,w\in V$) $v\star v = \delta$ and $v\star w = 0$ if $v \ne w$.
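As a toy illustration only (reading the product rule as given on a set of generators, and modelling the formal $\delta$ by the coefficient in front of it; all names below are mine):

```python
# Toy finite model of the proposed "Rdist" product.  Generators are
# labelled by hashable keys; a vector in V is a dict {label: coefficient};
# the target space D is modelled by a single number, the coefficient of a
# formal delta.  star() is the bilinear extension of the generator rule
# e_j * e_k = delta_{jk} * delta.

def star(v, w):
    """Symmetric bilinear product: only matching labels contribute."""
    return sum(c * w.get(label, 0.0) for label, c in v.items())

e1 = {"a": 1.0}          # a generator
e2 = {"b": 1.0}          # a distinct generator
print(star(e1, e1))      # coefficient of delta in e1 * e1
print(star(e1, e2))      # distinct generators annihilate
```

Note that the bilinear extension already violates the literal rule "$v\star w = 0$ whenever $v\ne w$" (e.g. $e_1\star(e_1+e_2) = \delta \ne 0$), which hints at the trouble with the definition as stated.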
But then how does one define a resolution of unity on $V$? There would need
to be two different kinds of product (inner and outer, presumably), and we
would need integration over the elements of $V$ to form continuous linear
combinations. But the trivially-zero integrals above seem to prohibit this.
So if there's a way to make this idea both rigorous and useful,
you need to show me what it is. I've failed to figure it out for myself.