Integrating Delta Functions: The Result

In summary: the product of two distributions in the same variable is not defined in general, but deltas in different variables can be multiplied, and the integral \int_{-\infty}^{\infty} \! f(t)*\delta (t-t_0) * \delta (w-w(t)) \, dt is equal to \delta (w-w(t_0)) * f(t_0).
  • #1
alejandrito29
What is the integral

[tex]\int_{-\infty}^{\infty} \! f(t)*\delta (t-t_0) * \delta (w-w(t)) \, dt[/tex]

Can it be [tex]\delta (w-w(t)) * f(t_0)[/tex]?
 
  • #2
The product of two deltas isn't defined.
 
  • #3
Fredrik said:
The product of two deltas isn't defined.

i have to solve:

[tex]\int_{-\infty}^{\infty} \! f(t)*\delta (t-t_0) * \delta (x-x_0) * \delta (y-y_0) * \delta (z-z_0) * \delta (w-w(t)) \, dt\,dx\,dy\,dz[/tex]
 
  • #4
Fredrik said:
The product of two deltas isn't defined.
In general? I'm not very into this whole "generalized functions" thing, but isn't a delta function just a functional? What would be the objection to define a product of two such functionals?
 
  • #5
The integral

[tex]\int_{-\infty}^{\infty} \! f(t)*\delta (t-t_0) * \delta (w-w(t)) \, dt[/tex]

is equal to

[tex]\delta (w-w(t_0)) * f(t_0) [/tex]

I.e., it is practically zero if it is not integrated over [tex]w[/tex].
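A quick numerical sanity check of the sifting mechanism behind this result (a sketch of my own, not from the thread; f, w and the smooth stand-in h are arbitrary choices): replacing the t-delta by a narrow Gaussian, the t-integral evaluates everything at t = t_0, which is exactly why the remaining factor becomes \delta(w - w(t_0)).

```python
import numpy as np

# Nascent delta: a narrow Gaussian of width eps (a standard regularization;
# the specific choice is mine, not the thread's).
def delta_eps(x, eps=1e-3):
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = np.cos              # arbitrary smooth f(t)
w = lambda t: t**2      # arbitrary smooth w(t)
h = np.sin              # smooth stand-in for whatever multiplies delta(w - w(t))
t0 = 0.7

# Integrate f(t) * delta_eps(t - t0) * h(w(t)) on a fine uniform grid.
t = np.linspace(t0 - 0.1, t0 + 0.1, 200001)
dt = t[1] - t[0]
integral = np.sum(f(t) * delta_eps(t - t0) * h(w(t))) * dt

exact = f(t0) * h(w(t0))   # the delta picks out t = t0
print(integral, exact)
```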
 
  • #6
haushofer said:
In general? I'm not very into this whole "generalized functions" thing, but isn't a delta function just a functional? What would be the objection to define a product of two such functionals?
Maybe I was wrong about delta. The product of two distributions isn't defined in general, but maybe it works for delta. I don't know why it can't be defined in general. There's an explanation in the Wikipedia article about distributions, but the notation is so annoying that I can't even try to understand it.
 
  • #7
The pointwise product of distributions is, in general, ill defined. Meaning that if [tex]u(x)[/tex] and [tex]v(x)[/tex] are distributions, then [tex]u(x)v(x)[/tex] is in general ill defined. But [tex]u(x)v(y)[/tex] with [tex]x \ne y[/tex] is perfectly meaningful.

When I say in general it means that one has to check on a case by case basis. Two straightforward examples: [tex]\delta(x)\delta(x)[/tex] is ill defined (to see it approximate the deltas with gaussians and take the limit - it diverges). But [tex]\theta(x) \theta(x)[/tex], with [tex]\theta(x)[/tex] being the Heaviside step function, is perfectly defined and meaningful.

One way to check whether the pointwise product of distributions is ill defined or not is to study their "wave front set". But it would take long to explain and I'm not really that good at it, so I won't go into that. This stuff can be found in Hörmander's book "The Analysis of Linear Partial Differential Operators I".

alejandrito29 -> The result bob_for_short gave is correct, but the comment is a bit misleading. The result is something proportional to a Dirac delta, and that's it. In other words you have another distribution as your result. And you will hence have to be careful when manipulating it.
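The two examples above can be checked numerically (my own sketch, not from the thread; the Gaussian and tanh regularizations are standard choices): smearing δ and θ over a width ε and shrinking ε, the smeared ∫δ_ε(x)²φ(x)dx blows up like 1/ε, while ∫θ_ε(x)²φ(x)dx converges.

```python
import numpy as np

# Regularize delta by a Gaussian and theta by a smooth tanh step; both
# regularizations are choices of mine, not the thread's.
def delta_eps(x, eps):
    return np.exp(-x**2 / eps**2) / (eps * np.sqrt(np.pi))

def theta_eps(x, eps):
    return 0.5 * (1.0 + np.tanh(x / (2.0 * eps)))

phi = lambda x: np.exp(-x**2)       # an arbitrary test function
x = np.linspace(-10.0, 10.0, 400001)
dx = x[1] - x[0]

for eps in (0.1, 0.01, 0.001):
    d2 = np.sum(delta_eps(x, eps)**2 * phi(x)) * dx  # grows like 1/eps
    t2 = np.sum(theta_eps(x, eps)**2 * phi(x)) * dx  # tends to sqrt(pi)/2
    print(eps, d2, t2)
```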
 
  • #8
I don't understand that. Take delta for example. The distribution [itex]\delta[/itex] is defined by [itex]\delta(\phi)=\phi(0)[/itex] for all test functions [itex]\phi[/itex]. When we write

[tex]\int\delta(x)\phi(x)dx=\delta(\phi)[/tex]

this is actually the definition of what we mean by the integral on the left. So when you're talking about a distribution u(x), I'm thinking of an equation

[tex]\int u(x)\phi(x)dx=u(\phi)[/tex]

where the right-hand side is well-defined already and the left-hand side is defined by this equation. When you mention u(x)v(x), I'm thinking

[tex]\int u(x)v(x)\phi(x)dx=\int w(x)\phi(x)dx=w(\phi)[/tex]

but I have no idea what it means, and in the case of u(x)v(y) I don't even know what to think.
 
  • #9
Come on, Fredrik! A product of δ-functions is well familiar to you. Consider a particle density like mδ(r - r0) - it is a product of three δ-functions. One integration removes one δ-function. Three integrations give the particle mass m.
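That point can be illustrated with sympy (my illustration; the concrete point (1, 2, 3) is arbitrary): three deltas in three different variables integrate out one at a time, leaving the mass m.

```python
from sympy import symbols, integrate, DiracDelta, oo

m = symbols('m', positive=True)
x, y, z = symbols('x y z', real=True)

# Point-mass density m*delta(x-1)*delta(y-2)*delta(z-3); each integration
# removes one delta, and all three together give back the mass m.
rho = m * DiracDelta(x - 1) * DiracDelta(y - 2) * DiracDelta(z - 3)
total = integrate(rho, (x, -oo, oo), (y, -oo, oo), (z, -oo, oo))
print(total)   # m
```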
 
  • #10
You can't multiply a distribution in x by another distribution in x.

However, you can multiply a distribution in x by a distribution in y:
[tex]\iint u(x) v(y) \varphi(x) \psi(y) \, dx \, dy := u[\varphi] v[\psi][/tex]​
which extends by linearity and continuity to all bivariate test functions. It works kinda like a tensor product. (actually, I think it literally is a tensor product)

Similarly, recall that you can think of an expression like:
[tex]\delta(x - y)[/tex]​
as being a (distribution in x)-valued function of y. Well, it's similarly fine to have a (distribution in x)-valued distribution of y.
 
  • #11
Thanks Hurkyl. So we use two test functions. I was starting to think we should use one test function with two variables, something like this:

[tex]\int u(x)v(y)\phi(x,y) dx dy=\int u(x)\left(\int v(y)\phi_x(y)dy\right)dx=\int u(x)v(\phi_x)dx=u(x\mapsto v(\phi_x))[/tex]

where [itex]\phi_x[/itex] is defined by [itex]\phi_x(y)=\phi(x,y)[/itex], but that seemed weird and awkward.

The only problem I have with what you wrote is that I don't see any reason to call this a product of distributions, since the quantity on the right is a product of two numbers ([itex]u[\varphi][/itex] and [itex]v[\psi][/itex]).
 
  • #12
Claim: Let f be a test function of two variables. Then there exist sequences gn and hn of test functions such that
[tex]f(x,y) = \sum_n g_n(x) h_n(y)[/tex]​


So, to define a bivariate distribution, it is sufficient to specify its action on bivariate test functions of the form g(x)h(y).


I don't understand your objection -- I've just defined the product of two distributions by specifying how it acts on bivariate test functions.
 
  • #13
Fredrik said:
The only problem I have with what you wrote is that I don't see any reason to call this a product of distributions, since the quantity on the right is a product of two numbers ([itex]u[\varphi][/itex] and [itex]v[\psi][/itex]).
Hey Fredrik,
I think the confusion may come from that this is how you define the multiplication of two distributions in two separate variables. For two distributions in the same variable, this definition would not work.
 
  • #14
Hurkyl said:
Claim: Let f be a test function of two variables. Then there exist sequences gn and hn of test functions such that
[tex]f(x,y) = \sum_n g_n(x) h_n(y)[/tex]​


So, to define a bivariate distribution, it is sufficient to specify its action on bivariate test functions of the form g(x)h(y).


I don't understand your objection -- I've just defined the product of two distributions by specifying how it acts on bivariate test functions.
Yes, but you didn't say that that's what you were doing.

So let's see if I get it. u and v are functionals that take test functions on [itex]\mathbb R[/itex] to real numbers, and the product uv is a functional that takes test functions on [itex]\mathbb R^2[/itex] to real numbers. (Let's just assume that everything is real here for simplicity). And the actual definition of uv is

[tex]uv[f]=\sum_n u[g_n]v[h_n][/tex]

?

And the following formal manipulation of "integrals" is just a mnemonic for the definition above.

[tex]uv[f]=\int uv(x,y)f(x,y) dx\ dy=\int u(x)v(y)f(x,y) dx\ dy =\sum_n\int u(x)v(y)g_n(x)h_n(y) dx\ dy[/tex]

[tex]=\sum_n\left(\int u(x)g_n(x)dx\right)\left(\int v(y)h_n(y) dy\right)=\sum_n u[g_n] v[h_n][/tex]
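Here is a toy realization of that definition in code (entirely my own sketch, not from the thread): a distribution is just a functional taking a test function to a number, and the product acts on a bivariate test function given as a finite sum f(x,y) = Σ_n g_n(x)h_n(y), represented as a list of (g_n, h_n) pairs.

```python
import math

def delta_at(a):
    """The distribution delta_a: phi |-> phi(a)."""
    return lambda phi: phi(a)

def product(u, v):
    """(u x v)[f] = sum_n u[g_n] * v[h_n], with f given as [(g_n, h_n), ...]."""
    return lambda pairs: sum(u(g) * v(h) for g, h in pairs)

u = delta_at(1.0)
v = delta_at(2.0)
# f(x, y) = sin(x)*cos(y) + x*y**2, written as a pair list
f = [(math.sin, math.cos), (lambda x: x, lambda y: y**2)]
res = product(u, v)(f)
print(res)   # sin(1)*cos(2) + 1*2**2
```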

DarMM said:
I think the confusion may come from that this is how you define the multiplication of two distributions in two separate variables. For two distributions in the same variable, this definition would not work.
Does this mean that we can always define the product as a functional that acts on test functions on [itex]U\times U[/itex] when the original two distributions are functionals that act on test functions on [itex]U[/itex]? And that the problem is that we can't in general define the product as a functional that acts on test functions on [itex]U[/itex]?
 
  • #15
Fredrik said:
Does this mean that we can always define the product as a functional that acts on test functions on [itex]U\times U[/itex] when the original two distributions are functionals that act on test functions on [itex]U[/itex]?
If we are a bit more pedantic, we could say that given a space of test functions U, whose
dual space U* is the space of tempered distributions, then we construct the tensor product
space [itex]U\otimes U[/itex] and consider its dual [itex](U\otimes U)^*[/itex]. We should probably say "topological dual",
since Hurkyl made the assumption about the functions being representable as (the limit of) a sequence.

However, the obvious meaning of a product of distributions (each in [itex]U^*[/itex]) in different
variables as an element of [itex]U^* \otimes U^*[/itex] is delicate. In finite dimensions, it turns
out that [itex](U\otimes U)^* = (U^* \otimes U^*)[/itex]. For infinite dimensions, this is not
necessarily the case. (TBH, I'm a bit hazy on this: maybe that's only true for the algebraic dual,
but for the nice topological dual that Hurkyl appears to be using maybe it's true that
[itex](U\otimes U)^* = (U^* \otimes U^*)[/itex]. I hope someone will clarify this better.)


And that the problem is that we can't in general define the product as a functional
that acts on test functions on [itex]U[/itex]?

Yes. A functional is a mapping [itex]U \to \mathbb{C}[/itex], but to define a product we need an operator [itex]U \to U[/itex].
And the distributions we're talking about here are not operators.
 
  • #16
strangerep said:
(TBH, I'm a bit hazy on this: maybe that's only true for the algebraic dual,
but for the nice topological dual that Hurkyl appears to be using maybe it's true that
[itex](U\otimes U)^* = (U^* \otimes U^*)[/itex]. I hope someone will clarify this better.)
I can't say much about it either -- but we do have an inclusion
[tex]U^* \otimes U^* \mapsto (U \otimes U)^*[/tex]​
(At least, I expect it to be an inclusion; it would be weird if it's not! It's definitely a map, though)
 
  • #17
strangerep said:
However, the obvious meaning of a product of distributions (each in [itex]U^*[/itex]) in different
variables as an element of [itex]U^* \otimes U^*[/itex] is delicate. In finite dimensions, it turns
out that [itex](U\otimes U)^* = (U^* \otimes U^*)[/itex]. For infinite dimensions, this is not
necessarily the case. (TBH, I'm a bit hazy on this: maybe that's only true for the algebraic dual,
but for the nice topological dual that Hurkyl appears to be using maybe it's true that
[itex](U\otimes U)^* = (U^* \otimes U^*)[/itex]. I hope someone will clarify this better.)
It isn't usually the case and it was this property that was important in the discovery of distribution theory. The space of distributions [tex]\mathbb{D}^{'}(\mathbb{R})[/tex] is the dual of the space [tex]\mathbb{D}(\mathbb{R})[/tex], the space of compactly supported smooth functions. This space has the special property of being a nuclear space so [itex](U\otimes U)^* = (U^* \otimes U^*)[/itex], in this case.

It was choosing a space with the nuclear property that allowed Schwartz to be considered the inventor of distribution theory. Others such as Sobolev had considered linear functionals on a space of functions to make the Dirac delta rigorous, but they could never obtain the property [itex](U\otimes U)^* = (U^* \otimes U^*)[/itex], which is crucial.
 
  • #18
But we're interested in the tempered distributions -- their test functions are (generally) not compactly supported. Do we get equality in this case?
 
  • #19
Hurkyl said:
But we're interested in the tempered distributions -- their test functions are (generally) not compactly supported. Do we get equality in this case?
Yes, the space of Schwartz functions is a nuclear subspace of the space of test functions, so its dual, the space of tempered distributions, has this property. Which is pretty fortunate, because otherwise there would be no Fourier transform for distributions, making them useless. This was another insight of Schwartz.
 
  • #20
Something is being lost in translation. I'm confused because:

  • The Schwartz functions do not form a subspace of D(R).
  • In this setting, I thought "test function" was a synonym for Schwartz function.
  • By Schwartz function, I mean the smooth functions whose partial derivatives are all "rapidly decreasing" at infinity.
 
  • #21
DarMM said:
[A nuclear space] has the special property [...] [itex](U\otimes U)^* = (U^* \otimes U^*)[/itex]
Ah! Thank you. (That particular penny had not yet dropped for me. :-)

It was choosing a space with the nuclear property that allowed Schwartz to be considered the inventor of distribution theory. Others such as Sobolev had considered linear functionals on a space of functions to make the Dirac delta rigorous, but they could never obtain the property [itex](U\otimes U)^* = (U^* \otimes U^*)[/itex], which is crucial.
Hmm... so... a Sobolev space is not necessarily a nuclear space?
 
  • #22
Hurkyl said:
Something is being lost in translation. I'm confused because:

  • The Schwartz functions do not form a subspace of D(R).
  • In this setting, I thought "test function" was a synonym for Schwartz function.
  • By Schwartz function, I mean the smooth functions whose partial derivatives are all "rapidly decreasing" at infinity.
Sorry, [tex]\mathbb{D}(\mathbb{R})[/tex] is a nuclear subspace of the space of Schwartz functions, which is itself a nuclear space. A proof of the nuclear property of Schwartz space is contained in Gelfand and Shilov's treatise on the subject, particularly Volume 4.
Also see
L. Ehrenpreis "On the Theory of the Kernels of Schwartz", Proc. Amer. Math. Soc., 7, 713 (1956).
H. Gask "A Proof of Schwartz's Kernel Theorem", Math. Scand., 8, 327 (1960).

The phrase "test function" without qualification usually refers to [tex]\mathbb{D}(\mathbb{R})[/tex], not Schwartz functions, particularly in the theory of distributions. If one specifies tempered distributions, then the space of test functions will be understood to be Schwartz functions.
 
  • #23
strangerep said:
Hmm... so... a Sobolev space is not necessarily a nuclear space?
It's simply that Sobolev himself did not recognise this as an important property the way Schwartz did.
Another point is that Schwartz found a particularly small nuclear space, in the sense that it is a subspace of most other nuclear spaces. Hence its dual contains the duals of other nuclear spaces, and so it gives a more general notion of distribution.
 
  • #24
DarMM said:
The Phrase "test function" without qualification usually refers to [tex]\mathbb{D}(\mathbb{R})[/tex], not Schwartz functions, particularly in the theory of distributions.
I have always been assuming this is a math versus physics thing -- to my untrained eye, I think that physicists are usually interested in the tempered distributions, especially in quantum mechanics. This is a bad assumption?
 
  • #25
Hurkyl said:
I have always been assuming this is a math versus physics thing -- to my untrained eye, I think that physicists are usually interested in the tempered distributions, especially in quantum mechanics. This is a bad assumption?
Oh no, not at all. Quantum fields are operator-valued tempered distributions. In fact physicists commonly want temperedness so that they can perform Fourier transforms, so in the context of quantum physics a tempered distribution is usually what is meant.
In general relativity, however, [tex]\mathbb{D}(\mathbb{R})[/tex] is usually meant, in the context of stability problems and perturbations.
However, since we are discussing QM, there was no need for me to be so general.
 

1. What is a delta function?

A delta function, also known as the Dirac delta function, is not a function in the ordinary sense but a distribution: it is zero everywhere except at a single point, where it is "infinite" in such a way that its total integral equals 1. It is often used in physics and engineering to represent a point charge or a point mass.

2. How is a delta function integrated?

A delta function is integrated using the formula ∫δ(x-a)dx = 1, provided the range of integration contains the point a where the delta is centered. Over any interval that does not contain a, the integral is 0.

3. What is the result of integrating a delta function with a function?

Integrating a delta function against a function f gives the sifting property: ∫f(x)δ(x-a)dx = f(a). Relatedly, convolving a function with a delta function reproduces the function, shifted if the delta is centered away from the origin: (f * δ_a)(x) = f(x-a).
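As a quick check of the sifting property with sympy (my illustration; the integrand cos(x) and the point x = 2 are arbitrary choices):

```python
from sympy import symbols, integrate, DiracDelta, cos, oo

x = symbols('x', real=True)
# The delta centered at x = 2 picks out the integrand's value there.
result = integrate(cos(x) * DiracDelta(x - 2), (x, -oo, oo))
print(result)   # cos(2)
```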

4. What are some applications of integrating delta functions?

Integrating delta functions has many applications in physics and engineering, including signal processing, quantum mechanics, and electromagnetism. It is also used in solving differential equations and in probability theory.

5. Are there any limitations to integrating delta functions?

One limitation is that the sifting property requires the function to be continuous at the point where the delta is centered; at a jump discontinuity the value of the integral is ambiguous (it is often taken to be the average of the one-sided limits). In addition, as discussed above, a product of delta functions in the same variable, such as δ(x)², is not defined, so expressions involving multiple deltas must be handled with care.
