# Dirac delta

1. Nov 18, 2009

### alejandrito29

What is the integral

$$\int_{-\infty}^{\infty} \! f(t)\,\delta (t-t_0)\, \delta (w-w(t)) \, dt$$

equal to?

Can it be $$\delta (w-w(t))\, f(t_0)$$?

2. Nov 18, 2009

### Fredrik

Staff Emeritus
The product of two deltas isn't defined.

3. Nov 18, 2009

### alejandrito29

I have to solve:

$$\int_{-\infty}^{\infty} \! f(t)\,\delta (t-t_0)\,\delta (x-x_0)\, \delta (y-y_0)\, \delta (z-z_0)\, \delta (w-w(t)) \, dt\,dx\,dy\,dz$$

4. Nov 19, 2009

### haushofer

In general? I'm not very into this whole "generalized functions" thing, but isn't a delta function just a functional? What would be the objection to defining a product of two such functionals?

5. Nov 19, 2009

### Bob_for_short

The integral

$$\int_{-\infty}^{\infty} \! f(t)\,\delta (t-t_0)\, \delta (w-w(t)) \, dt$$

is equal to

$$\delta (w-w(t_0))\, f(t_0).$$

I.e., it is practically zero if it is not integrated over $$w$$.
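This claim can be sanity-checked numerically by replacing each delta with a narrow unit-area Gaussian and then also integrating over $w$: the double integral should collapse to $f(t_0)$. A minimal sketch (the choices of $f$, $w(t)$, $t_0$, the Gaussian width, and the grids are illustrative assumptions, not from the thread):

```python
import numpy as np

def delta_eps(x, eps=1e-2):
    """Unit-area Gaussian approximating the Dirac delta."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = lambda t: np.cos(t)        # illustrative smooth f
w_of_t = lambda t: t**2 + 1.0  # illustrative smooth w(t)
t0 = 0.5

t, dt = np.linspace(t0 - 0.1, t0 + 0.1, 2001, retstep=True)
w, dw = np.linspace(1.0, 1.5, 2001, retstep=True)  # grid covering w(t0) = 1.25

# integrating over w first removes delta(w - w(t)), since it has unit area
inner = delta_eps(w[None, :] - w_of_t(t)[:, None]).sum(axis=1) * dw

# the remaining t-integral then picks out f(t0)
result = (f(t) * delta_eps(t - t0) * inner).sum() * dt
print(result, np.cos(t0))  # both close to 0.8776
```

So the product $\delta(w-w(t_0))\,f(t_0)$ only yields a plain number once the leftover delta is integrated over $w$, exactly as the post says.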

6. Nov 19, 2009

### Fredrik

Staff Emeritus
Maybe I was wrong about delta. The product of two distributions isn't defined in general, but maybe it works for delta. I don't know why it can't be defined in general. There's an explanation in the Wikipedia article about distributions, but the notation is so annoying that I can't even try to understand it.

7. Nov 19, 2009

### DrFaustus

The pointwise product of distributions is, in general, ill defined. Meaning that if $$u(x)$$ and $$v(x)$$ are distributions, then $$u(x)v(x)$$ is in general ill defined. But $$u(x)v(y)$$ with $$x \ne y$$ is perfectly meaningful.

When I say "in general", I mean that one has to check on a case-by-case basis. Two straightforward examples: $$\delta(x)\delta(x)$$ is ill defined (to see it, approximate the deltas with Gaussians and take the limit - it diverges). But $$\theta(x) \theta(x)$$, with $$\theta(x)$$ being the Heaviside step function, is perfectly defined and meaningful.
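Both examples can be checked numerically: approximate $\delta$ by a unit-area Gaussian $g_\varepsilon$ and watch $\int g_\varepsilon^2$ diverge as $\varepsilon \to 0$, while $\theta^2 = \theta$ pairs with any test function exactly as $\theta$ itself does. A sketch (the grids and the test function are illustrative choices):

```python
import numpy as np

def delta_eps(x, eps):
    """Unit-area Gaussian approximating the Dirac delta."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# "int delta(x)*delta(x) dx" becomes int g_eps(x)^2 dx = 1/(2 sqrt(pi) eps),
# which blows up as eps -> 0: two deltas at the same point diverge
x, dx = np.linspace(-1.0, 1.0, 200001, retstep=True)
sq_integrals = [(delta_eps(x, eps)**2).sum() * dx for eps in (1e-1, 1e-2, 1e-3)]
print(sq_integrals)  # roughly [2.82, 28.2, 282]: grows like 1/eps

# theta(x)*theta(x) = theta(x): pairing theta^2 with any test function phi
# gives the same number as pairing theta itself, so this product is harmless
theta = lambda t: (t > 0).astype(float)
phi = lambda t: np.exp(-t**2)          # an illustrative test function
t, dt = np.linspace(-5.0, 5.0, 100001, retstep=True)
print((theta(t)**2 * phi(t)).sum() * dt, (theta(t) * phi(t)).sum() * dt)
```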

One way to check whether the pointwise product of distributions is ill defined is to study their "wave front set". But it would take long to explain and I'm not really that good at it, so I won't go into that. This stuff can be found in Hörmander's book "The Analysis of Linear Partial Differential Operators I".

alejandrito29 -> The result Bob_for_short gave is correct, but the comment is a bit misleading. The result is something proportional to a Dirac delta, and that's it. In other words, you have another distribution as your result, and you will hence have to be careful when manipulating it.

8. Nov 19, 2009

### Fredrik

Staff Emeritus
I don't understand that. Take delta for example. The distribution $\delta$ is defined by $\delta(\phi)=\phi(0)$ for all test functions $\phi$. When we write

$$\int\delta(x)\phi(x)dx=\delta(\phi)$$

this is actually the definition of what we mean by the integral on the left. So when you're talking about a distribution u(x), I'm thinking of an equation

$$\int u(x)\phi(x)dx=u(\phi)$$

where the right-hand side is well-defined already and the left-hand side is defined by this equation. When you mention u(x)v(x), I'm thinking

$$\int u(x)v(x)\phi(x)dx=\int w(x)\phi(x)dx=w(\phi)$$

but I have no idea what it means, and in the case of u(x)v(y) I don't even know what to think.
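The "distribution = functional" picture in this post can be made concrete in code: a distribution is just a callable that eats a test function and returns a number, and the "integral" $\int\delta(x)\phi(x)dx$ is pure notation for $\delta(\phi)$. A minimal sketch (the names and the test function are illustrative):

```python
import numpy as np

# a distribution is nothing but a linear functional on test functions;
# the "integral" int delta(x) phi(x) dx is shorthand for delta[phi]
delta = lambda phi: phi(0.0)                # delta[phi] = phi(0)
delta_at = lambda a: (lambda phi: phi(a))   # the shifted delta(x - a)

phi = lambda x: np.exp(-x**2) * (1.0 + x)   # an illustrative test function
print(delta(phi))          # phi(0) = 1.0
print(delta_at(2.0)(phi))  # phi(2) = 3*exp(-4)
```

In this picture the puzzle is clear: nothing in the definition tells you how to build a new functional "u(x)v(x)" out of two functionals u and v.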

9. Nov 19, 2009

### Bob_for_short

Come on, Fredrik! A product of δ-functions is well familiar to you. Consider a particle density like mδ(r - r0) = mδ(x - x0)δ(y - y0)δ(z - z0) - it is a product of three δ-functions in three different variables. One integration removes one δ-function. Three integrations give the particle mass m.
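This mass example can be sketched numerically with Gaussian-smoothed deltas: the triple integral of the density returns m (the mass, position, width, and grid below are illustrative choices):

```python
import numpy as np

def delta_eps(x, eps=0.05):
    """Unit-area Gaussian approximating the Dirac delta."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

m = 2.5                            # illustrative mass
r0 = np.array([0.1, -0.2, 0.3])    # illustrative position

g, dg = np.linspace(-1.0, 1.0, 201, retstep=True)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")

# density m * delta(x-x0) delta(y-y0) delta(z-z0): three deltas in three
# *different* variables; each integration removes one of them
rho = m * delta_eps(X - r0[0]) * delta_eps(Y - r0[1]) * delta_eps(Z - r0[2])
mass = rho.sum() * dg**3           # triple integral over the whole grid
print(mass)  # ~ 2.5
```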

Last edited: Nov 19, 2009
10. Nov 19, 2009

### Hurkyl

Staff Emeritus
You can't multiply a distribution in x by another distribution in x.

However, you can multiply a distribution in x by a distribution in y:
$$\iint u(x) v(y) \varphi(x) \psi(y) \, dx \, dy := u[\varphi] v[\psi]$$​
which extends by linearity and continuity to all bivariate test functions. It works kinda like a tensor product. (actually, I think it literally is a tensor product)
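The definition above is short enough to write as code: the tensor product of two functionals acts on a product test function $\varphi(x)\psi(y)$ by multiplying the two numbers $u[\varphi]$ and $v[\psi]$. A minimal sketch (the particular functionals and test functions are illustrative):

```python
# tensor product of two distributions: on a product test function
# phi(x)*psi(y) it acts as u[phi] * v[psi]
def tensor(u, v):
    return lambda phi, psi: u(phi) * v(psi)

delta = lambda phi: phi(0.0)     # delta at x = 0, as a functional
delta3 = lambda phi: phi(3.0)    # delta shifted to y = 3

uv = tensor(delta, delta3)
phi = lambda x: x + 1.0
psi = lambda y: y * y
print(uv(phi, psi))   # phi(0) * psi(3) = 1.0 * 9.0 = 9.0
```

Extending this from product test functions $\varphi(x)\psi(y)$ to general bivariate test functions is exactly the "linearity and continuity" step in the post.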

Similarly, recall that you can think of an expression like:
$$\delta(x - y)$$​
as being a (distribution in x)-valued function of y. Well, it's similarly fine to have a (distribution in x)-valued distribution of y.

11. Nov 19, 2009

### Fredrik

Staff Emeritus
Thanks Hurkyl. So we use two test functions. I was starting to think we should use one test function with two variables, something like this:

$$\int u(x)v(y)\phi(x,y) dx dy=\int u(x)\left(\int v(y)\phi_x(y)dy\right)dx=\int u(x)v(\phi_x)dx=u(x\mapsto v(\phi_x))$$

where $\phi_x$ is defined by $\phi_x(y)=\phi(x,y)$, but that seemed weird and awkward.

The only problem I have with what you wrote is that I don't see any reason to call this a product of distributions, since the quantity on the right is a product of two numbers ($u[\varphi]$ and $v[\psi]$).

Last edited: Nov 19, 2009
12. Nov 19, 2009

### Hurkyl

Staff Emeritus
Claim: Let f be a test function of two variables. Then there exist sequences gn and hn of test functions such that
$$f(x,y) = \sum_n g_n(x) h_n(y)$$​

So, to define a bivariate distribution, it is sufficient to specify its action on bivariate test functions of the form g(x)h(y).

I don't understand your objection -- I've just defined the product of two distributions by specifying how it acts on bivariate test functions.

13. Nov 19, 2009

### DarMM

Hey Fredrik,
I think the confusion may come from the fact that this is how you define the multiplication of two distributions in two separate variables. For two distributions in the same variable, this definition would not work.

14. Nov 19, 2009

### Fredrik

Staff Emeritus
Yes, but you didn't say that that's what you were doing.

So let's see if I get it. u and v are functionals that take test functions on $\mathbb R$ to real numbers, and the product uv is a functional that takes test functions on $\mathbb R^2$ to real numbers. (Let's just assume that everything is real here for simplicity). And the actual definition of uv is

$$uv[f]=\sum_n u[g_n]v[h_n]$$

?

And the following formal manipulation of "integrals" is just a mnemonic for the definition above.

$$uv[f]=\int uv(x,y)f(x,y) dx\ dy=\int u(x)v(y)f(x,y) dx\ dy =\sum_n\int u(x)v(y)g_n(x)h_n(y) dx\ dy$$

$$=\sum_n\left(\int u(x)g_n(x)dx\right)\left(\int v(y)h_n(y) dy\right)=\sum_n u[g_n] v[h_n]$$

Does this mean that we can always define the product as a functional that acts on test functions on $U\times U$ when the original two distributions are functionals that act on test functions on $U$? And that the problem is that we can't in general define the product as a functional that acts on test functions on $U$?
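The formal manipulation in this post can be checked numerically: sample a bivariate test function on a grid, let an SVD play the role of the separable expansion $f(x,y)=\sum_n g_n(x)h_n(y)$, and compare $\sum_n u[g_n]\,v[h_n]$ with the direct evaluation for $u=v=\delta$, where the answer should be $f(0,0)$. (The grid, the test function, and the use of a matrix SVD as a stand-in for the test-function expansion are illustrative assumptions.)

```python
import numpy as np

# sample a bivariate test function on a grid containing x = y = 0
x = np.linspace(-3.0, 3.0, 401)
f = lambda X, Y: np.exp(-(X**2 + Y**2)) * (1.0 + X * Y)
F = f(x[:, None], x[None, :])

# the SVD gives F[i, j] = sum_n g_n(x_i) h_n(y_j), a separable expansion
U, S, Vt = np.linalg.svd(F)
g = U * S    # column n is g_n sampled on the grid
h = Vt       # row n is h_n sampled on the grid

i0 = np.argmin(np.abs(x))  # grid index of x = 0
# delta applied to a sampled function just evaluates it at 0, so
# uv[f] = sum_n g_n(0) h_n(0) should reproduce f(0, 0)
uv_f = sum(g[i0, n] * h[n, i0] for n in range(len(S)))
print(uv_f, f(0.0, 0.0))  # both ~ 1.0
```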

15. Nov 20, 2009

### strangerep

If we are a bit more pedantic, we could say that given a space of test functions U, whose dual space U* is the space of tempered distributions, we construct the tensor product space $U\otimes U$ and consider its dual $(U\otimes U)^*$. We should probably say "topological dual", since Hurkyl made the assumption about the functions being representable as (the limit of) a sequence.

However, the obvious meaning of a product of distributions (each in $U^*$) in different variables as an element of $U^* \otimes U^*$ is delicate. In finite dimensions, it turns out that $(U\otimes U)^* = U^* \otimes U^*$. In infinite dimensions, this is not necessarily the case. (TBH, I'm a bit hazy on this: maybe that's only true for the algebraic dual, but for the nice topological dual that Hurkyl appears to be using maybe it's true that $(U\otimes U)^* = U^* \otimes U^*$. I hope someone will clarify this better.)

Yes. A functional is a mapping $U \to \mathbb{C}$, but to define a product we need an operator $U \to U$.
And the distributions we're talking about here are not operators.

16. Nov 20, 2009

### Hurkyl

Staff Emeritus
I can't say much about it either -- but we do have an inclusion
$$U^* \otimes U^* \to (U \otimes U)^*$$
(At least, I expect it to be an inclusion; it would be weird if it's not! It's definitely a map, though)

17. Nov 20, 2009

### DarMM

It isn't usually the case, and it was this property that was important in the discovery of distribution theory. The space of distributions $$\mathbb{D}'(\mathbb{R})$$ is the dual of the space $$\mathbb{D}(\mathbb{R})$$ of compactly supported smooth functions. This space has the special property of being a nuclear space, so $(U\otimes U)^* = U^* \otimes U^*$ in this case.

It was choosing a space with the nuclear property that allowed Schwartz to be considered the inventor of distribution theory. Others such as Sobolev had considered linear functionals on a space of functions to make the Dirac delta rigorous, but they could never obtain the property $(U\otimes U)^* = U^* \otimes U^*$, which is crucial.

18. Nov 20, 2009

### Hurkyl

Staff Emeritus
But we're interested in the tempered distributions -- their test functions are (generally) not compactly supported. Do we get equality in this case?

19. Nov 20, 2009

### DarMM

Yes, the space of Schwartz functions is a nuclear subspace of the space of test functions, so its dual, the space of tempered distributions, has this property. Which is pretty fortunate, because otherwise there would be no Fourier transform for distributions, making them useless. This was another insight of Schwartz.

20. Nov 20, 2009

### Hurkyl

Staff Emeritus
Something is being lost in translation. I'm confused because:

- The Schwartz functions do not form a subspace of D(R).
- In this setting, I thought "test function" was a synonym for Schwartz function.
- By Schwartz function, I mean the smooth functions whose partial derivatives are all "rapidly decreasing" at infinity.