Product of Dirac delta distributions

In summary, the conversation discusses the problem of defining the product of distributions, specifically of Dirac delta functions, and whether a recursion relation can be established for such a product. It is argued that if each delta function is kept as a finite-width Gaussian until after the integration, the integrand behaves as an ordinary continuous function and the recursion relation appears to hold. However, it is also noted that this heuristic approach requires further rigor to count as a proof.
  • #1
friend
1,452
9
I'm told that a product of distributions is undefined. See,

http://en.wikipedia.org/wiki/Distribution_(mathematics)#Problem_of_multiplication

where the Dirac delta function is considered a distribution.

Now the Dirac delta function is defined such that,

[tex]\int_{-\infty}^{+\infty} f(x_1)\,\delta(x_1 - x_0)\,dx_1 = f(x_0)[/tex]

for all continuous compactly supported functions ƒ. See,

http://en.wikipedia.org/wiki/Dirac_delta_function

But the question is: can we set [itex]f(x_1) = \delta(x - x_1)[/itex] in order to get,

[tex]\int_{-\infty}^{+\infty} \delta(x - x_1)\,\delta(x_1 - x_0)\,dx_1 = \delta(x - x_0)[/tex]

which is a very convenient recursion relation?

But then we are faced with the product of distributions inside the integral. So does the recursion relation actually exist?

We are told that the delta function is not everywhere continuous, so it is not allowed to be [itex]f(x_1)[/itex].

Nevertheless, it seems obvious that if we consider the limits of each delta function individually, then of course the recursion relation is allowed. For if we use the Gaussian form of the delta function, we have,

[tex]\delta(x - x_1) = \lim_{\Delta_1 \to 0} \frac{1}{(\pi \Delta_1^2)^{1/2}}\, e^{-(x - x_1)^2/\Delta_1^2}[/tex]

and

[tex]\delta(x_1 - x_0) = \lim_{\Delta_0 \to 0} \frac{1}{(\pi \Delta_0^2)^{1/2}}\, e^{-(x_1 - x_0)^2/\Delta_0^2}[/tex]

Then,

[tex]\int_{-\infty}^{+\infty} \delta(x - x_1)\,\delta(x_1 - x_0)\,dx_1 = \int_{-\infty}^{+\infty} \left( \lim_{\Delta_1 \to 0} \frac{1}{(\pi \Delta_1^2)^{1/2}}\, e^{-(x - x_1)^2/\Delta_1^2} \right) \left( \lim_{\Delta_0 \to 0} \frac{1}{(\pi \Delta_0^2)^{1/2}}\, e^{-(x_1 - x_0)^2/\Delta_0^2} \right) dx_1[/tex]

[tex]= \lim_{\Delta_1 \to 0} \int_{-\infty}^{+\infty} \left( \frac{1}{(\pi \Delta_1^2)^{1/2}}\, e^{-(x - x_1)^2/\Delta_1^2} \right) \delta(x_1 - x_0)\,dx_1 = \lim_{\Delta_1 \to 0} \frac{1}{(\pi \Delta_1^2)^{1/2}}\, e^{-(x - x_0)^2/\Delta_1^2} = \delta(x - x_0)[/tex]

For if we let [itex]\Delta_1[/itex] remain a fixed non-zero number until after the integration, then the Gaussian is a continuous function (rapidly decaying, though strictly speaking not compactly supported) and qualifies to play the role of [itex]f(x_1)[/itex]. Or

[tex]= \lim_{\Delta_0 \to 0} \int_{-\infty}^{+\infty} \delta(x - x_1) \left( \frac{1}{(\pi \Delta_0^2)^{1/2}}\, e^{-(x_1 - x_0)^2/\Delta_0^2} \right) dx_1 = \lim_{\Delta_0 \to 0} \frac{1}{(\pi \Delta_0^2)^{1/2}}\, e^{-(x - x_0)^2/\Delta_0^2} = \delta(x - x_0)[/tex]

if we let [itex]\Delta_0[/itex] remain a fixed non-zero number until after the integration, so that [itex]f(x_1)[/itex] becomes a continuous (though again not compactly supported) function as before.

Since the result is [itex]\delta(x - x_0)[/itex] regardless of the order in which we take the limits, does this prove that the limit is valid and the recursion relation holds? Thank you.
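
(A quick numerical sanity check of the above, as a sketch rather than a proof; the widths, points, and grid below are arbitrary choices of mine:)

[code]
# With both widths kept finite, the product integral of the two Gaussians
# equals a single Gaussian whose squared widths add in quadrature, so the
# result tends to delta(x - x0) as either width goes to zero.
import numpy as np

def g(x, d):
    # Gaussian delta sequence: exp(-x^2/d^2) / sqrt(pi d^2), unit area for any d
    return np.exp(-x**2 / d**2) / np.sqrt(np.pi * d**2)

x, x0 = 0.30, 0.25        # evaluation point and delta center
d1, d0 = 0.05, 0.08       # the two finite widths Delta_1 and Delta_0
x1 = np.linspace(-10.0, 10.0, 400001)
dx1 = x1[1] - x1[0]

lhs = np.sum(g(x - x1, d1) * g(x1 - x0, d0)) * dx1   # the product integral
rhs = g(x - x0, np.hypot(d1, d0))                    # widths added in quadrature
print(lhs, rhs)   # agree to many digits; both -> delta(x - x0) as widths -> 0
[/code]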
 
Last edited:
  • #2
friend said:
I'm told that a product of distributions is undefined.

This claim is usually correct...

It will of course become incorrect if somebody comes up with a definition :smile:

But the question is: can we set [itex]f(x_1) = \delta(x - x_1)[/itex] in order to get,

[tex]\int_{-\infty}^{+\infty} \delta(x - x_1)\,\delta(x_1 - x_0)\,dx_1 = \delta(x - x_0)[/tex]

which is a very convenient recursion relation?

But then we are faced with the product of distributions inside the integral. So does the recursion relation actually exist?

In my opinion this is fine. You can get correct results when you calculate like this, and it can also guide you towards a rigorous proof in some situations.

Nevertheless, it seems obvious that if we consider the limits of each delta function individually, then of course the recursion relation is allowed.

It seems to be a common phenomenon in human behavior that the more unclear, ambiguous, and uncertain a claim is, the more likely people are to emphasize its obviousness :tongue:

For if we use the Gaussian form of the delta function, we have,

[tex]\delta(x - x_1) = \lim_{\Delta_1 \to 0} \frac{1}{(\pi \Delta_1^2)^{1/2}}\, e^{-(x - x_1)^2/\Delta_1^2}[/tex]

and

[tex]\delta(x_1 - x_0) = \lim_{\Delta_0 \to 0} \frac{1}{(\pi \Delta_0^2)^{1/2}}\, e^{-(x_1 - x_0)^2/\Delta_0^2}[/tex]

Then,

[tex]\int_{-\infty}^{+\infty} \delta(x - x_1)\,\delta(x_1 - x_0)\,dx_1 = \int_{-\infty}^{+\infty} \left( \lim_{\Delta_1 \to 0} \frac{1}{(\pi \Delta_1^2)^{1/2}}\, e^{-(x - x_1)^2/\Delta_1^2} \right) \left( \lim_{\Delta_0 \to 0} \frac{1}{(\pi \Delta_0^2)^{1/2}}\, e^{-(x_1 - x_0)^2/\Delta_0^2} \right) dx_1[/tex]

[tex]= \lim_{\Delta_1 \to 0} \int_{-\infty}^{+\infty} \left( \frac{1}{(\pi \Delta_1^2)^{1/2}}\, e^{-(x - x_1)^2/\Delta_1^2} \right) \delta(x_1 - x_0)\,dx_1 = \lim_{\Delta_1 \to 0} \frac{1}{(\pi \Delta_1^2)^{1/2}}\, e^{-(x - x_0)^2/\Delta_1^2} = \delta(x - x_0)[/tex]

For if we let [itex]\Delta_1[/itex] remain a fixed non-zero number until after the integration, then the Gaussian is a continuous function (rapidly decaying, though strictly speaking not compactly supported) and qualifies to play the role of [itex]f(x_1)[/itex]. Or

[tex]= \lim_{\Delta_0 \to 0} \int_{-\infty}^{+\infty} \delta(x - x_1) \left( \frac{1}{(\pi \Delta_0^2)^{1/2}}\, e^{-(x_1 - x_0)^2/\Delta_0^2} \right) dx_1 = \lim_{\Delta_0 \to 0} \frac{1}{(\pi \Delta_0^2)^{1/2}}\, e^{-(x - x_0)^2/\Delta_0^2} = \delta(x - x_0)[/tex]

if we let [itex]\Delta_0[/itex] remain a fixed non-zero number until after the integration, so that [itex]f(x_1)[/itex] becomes a continuous (though again not compactly supported) function as before.

Since the result is [itex]\delta(x - x_0)[/itex] regardless of the order in which we take the limits, does this prove that the limit is valid and the recursion relation holds? Thank you.

I hope you understand that when you write the equality signs "[itex]=[/itex]" like that, it is not really an equality in the sense that you have numbers on the left and right sides and the numbers are the same.

For example, if I define a function [itex]\delta_{\Delta_0}(x)[/itex] like this:

[tex]
\delta_{\Delta_0}(x) = \lim_{\Delta_1\to 0^+} \int\limits_{-\infty}^{\infty} \Big(
\frac{1}{(\pi \Delta_1^2)^{1/2}} e^{-(x-x_1)^2/\Delta_1^2}\Big)\Big(
\frac{1}{(\pi \Delta_0^2)^{1/2}} e^{-x_1^2/\Delta_0^2}\Big) dx_1
[/tex]

then the following is true:

[tex]
\lim_{\Delta_0\to 0^+} \int\limits_{-\infty}^{\infty} \delta_{\Delta_0}(x-x_0) f(x_0)dx_0 = f(x)
[/tex]

Unlike your heuristic equations, these two equations which I wrote are actual equalities, with equal numbers on the left and right sides. If you understand when an equation is heuristic and when it is a real one, then IMO you are fine.
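
(Explicitly, carrying the [itex]\Delta_1[/itex] limit out against the continuous Gaussian factor should give

[tex]
\delta_{\Delta_0}(x) = \frac{1}{(\pi \Delta_0^2)^{1/2}}\, e^{-x^2/\Delta_0^2},
[/tex]

an ordinary Gaussian of width [itex]\Delta_0[/itex], which is why the second limit then reproduces [itex]f(x)[/itex].)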

Now when I started to think of this...

Suppose [itex]\delta^{n}_{x_0}[/itex] is defined as a mapping [itex]C_0(\mathbb{R}^n)\to\mathbb{C}[/itex], [itex]f\mapsto f(x_0)[/itex], wouldn't it make sense to define a product of [itex]\delta^{n}_{x_0}[/itex] and [itex]\delta^{m}_{x_1}[/itex] simply as

[tex]
\delta^{n}_{x_0} \delta^{m}_{x_1} := \delta^{n+m}_{(x_0,x_1)},
[/tex]

which is a mapping [itex]C_0(\mathbb{R}^{n+m})\to\mathbb{C}[/itex], [itex]f\mapsto f(x_0,x_1)[/itex]. Can anyone say what would be a problem with this?

It could be that one problem is that the definition is not particularly useful, but on the other hand I've been left slightly sceptical about the usefulness of distributions anyway... and repeating the sentence "the product of distributions does not exist" is not very useful either.
 
Last edited:
  • #3
friend said:
I'm told that a product of distributions is undefined. See,
The difference here is that you're not really multiplying them -- this is more like a tensor product.

Given any two univariate distributions f and g, the expression [itex]f(x) g(y)[/itex] makes sense because they are distributional in different variables, and its defining property is that
[tex]\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(x) g(y) \varphi(x) \psi(y) \, dx \, dy = \int_{-\infty}^{+\infty} f(x) \varphi(x) \, dx \int_{-\infty}^{+\infty} g(y) \psi(y) \, dy[/tex]

(any bivariate test function is a limit of sums of products of univariate test functions)


There's another subtlety here. Normally, [itex]\delta(x-y) \delta(x-z)[/itex] would only make sense used in a double integral, so it's a bit of good fortune that we can express it as an iterated integral as you did!
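
(To spell out the double-integral sense with a concrete case: for fixed [itex]x[/itex], the tensor-product reading should give

[tex]
\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \delta(x - y)\, \delta(x - z)\, f(y, z) \, dy \, dz = f(x, x),
[/tex]

since the two delta factors are distributional in the independent variables y and z.)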
 
  • #4
jostpuur said:
Now when I started to think of this...

Suppose [itex]\delta^{n}_{x_0}[/itex] is defined as a mapping [itex]C_0(\mathbb{R}^n)\to\mathbb{C}[/itex], [itex]f\mapsto f(x_0)[/itex], wouldn't it make sense to define a product of [itex]\delta^{n}_{x_0}[/itex] and [itex]\delta^{m}_{x_1}[/itex] simply as

[tex]
\delta^{n}_{x_0} \delta^{m}_{x_1} := \delta^{n+m}_{(x_0,x_1)},
[/tex]

which is a mapping [itex]C_0(\mathbb{R}^{n+m})\to\mathbb{C}[/itex], [itex]f\mapsto f(x_0,x_1)[/itex]. Can anyone say what would be a problem with this?

Okay, this has some problems in it. That works for situations like

[tex]
\delta(x - x_0)\delta(y - y_0) dx\; dy
[/tex]

but not for situations like

[tex]
\delta(x - y) \delta(y - y_0) dx\; dy
[/tex]
 
  • #5
However, Hurkyl, could we do this?

given S and T to be distributions with

[tex] g(\frac{x}{\epsilon})=S(x) [/tex] and [tex] h(\frac{x}{\epsilon})=T(x) [/tex]

in the limit as epsilon tends to infinity,

then my idea is to define the product of distributions with respect to a certain analytic test function [tex] \phi (x) [/tex] to be

[tex] (ST, \phi )=( g(\frac{x}{\epsilon})T,\phi)+(Sh(\frac{x}{\epsilon}),\phi)[/tex]
 
  • #6
Why would there exist a test function g with the property that
[tex]\lim_{y \to +\infty} g\left( \frac{x}{y} \right) = S(x)[/tex]​
? I think that might even require S to be a constant.

But even if it does exist, can you show that your definition of product doesn't depend on your choice of g and h? That's the real killer for multiplying distributions.


Every distribution is a limit of test functions; i.e.
[tex]S(x) = \lim_{n \to +\infty} g_n(x)[/tex]​
. Similarly, we can write T(x) as a limit of [itex]h_n(x)[/itex]. The limit of [itex]g_n(x) h_n(x)[/itex] (if it exists) is going to be a distribution -- but that depends crucially on your choice of g and h: it is not determined simply by S and T.

Here are four interesting sequences of functions that converge to the delta function. (They aren't test functions, but it's easy to smooth out these examples)

  • [tex]r_n(x) = \begin{cases} n & x \in \left[-\frac{1}{2n}, \frac{1}{2n}\right] \\ 0 & \text{otherwise} \end{cases}[/tex]
  • [tex]s_n(x) = \begin{cases} n & x \in \left[0, \frac{1}{n}\right] \\ 0 & \text{otherwise} \end{cases}[/tex]
  • [tex]t_n(x) = \begin{cases} 2n & x \in \left[-\frac{1}{2n}, 0\right] \\ 0 & \text{otherwise} \end{cases}[/tex]
  • [tex]u_n(x) = \begin{cases} n & x \in \left[-\frac{2}{2n}, -\frac{1}{2n}\right] \\ n & x \in \left[\frac{1}{2n}, \frac{2}{2n}\right] \\ 0 & \text{otherwise} \end{cases}[/tex]

What do the various products of these sequences converge to?
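
(A quick numerical illustration of the point, as a sketch; the grid size and the open/closed endpoint conventions below are arbitrary choices of mine:)

[code]
# Pairwise products of the four delta sequences above integrate to very
# different values as n grows, so lim g_n * h_n depends on the chosen
# sequences. Endpoints are chosen open/closed so that grid points do not
# double-count interval boundaries.
import numpy as np

def r(x, n): return np.where((-0.5/n < x) & (x < 0.5/n), float(n), 0.0)
def s(x, n): return np.where((0.0 <= x) & (x < 1.0/n), float(n), 0.0)
def t(x, n): return np.where((-0.5/n <= x) & (x < 0.0), 2.0*n, 0.0)
def u(x, n): return np.where((0.5/n <= np.abs(x)) & (np.abs(x) < 1.0/n), float(n), 0.0)

x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]
for n in (10, 100, 1000):
    rs = np.sum(r(x, n) * s(x, n)) * dx   # overlap [0, 1/2n): ~ n/2 -> infinity
    ru = np.sum(r(x, n) * u(x, n)) * dx   # disjoint supports: exactly 0
    st = np.sum(s(x, n) * t(x, n)) * dx   # disjoint supports: exactly 0
    print(n, rs, ru, st)
[/code]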
 
  • #7
Hurkyl said:
Here are four interesting sequences of functions that converge to the delta function. (They aren't test functions, but it's easy to smooth out these examples)

  • [tex]r_n(x) = \begin{cases} n & x \in \left[-\frac{1}{2n}, \frac{1}{2n}\right] \\ 0 & \text{otherwise} \end{cases}[/tex]
  • [tex]s_n(x) = \begin{cases} n & x \in \left[0, \frac{1}{n}\right] \\ 0 & \text{otherwise} \end{cases}[/tex]
  • [tex]t_n(x) = \begin{cases} 2n & x \in \left[-\frac{1}{2n}, 0\right] \\ 0 & \text{otherwise} \end{cases}[/tex]
  • [tex]u_n(x) = \begin{cases} n & x \in \left[-\frac{2}{2n}, -\frac{1}{2n}\right] \\ n & x \in \left[\frac{1}{2n}, \frac{2}{2n}\right] \\ 0 & \text{otherwise} \end{cases}[/tex]

What do the various products of these sequences converge to?

It may depend on which limit you take first. It seems there are 3 separate limits involved in taking the product and then integrating. Do we take the limit of one of the sequences first, then do the limits involved with integration, then do the limit of the other sequence?

And there are situations in which it matters which limit you take first. For example, consider the following:

[tex]
\lim_{(x,y) \to (0,0)} \frac{x - y}{x + y} = \lim_{x \to 0} \lim_{y \to 0} \frac{x - y}{x + y}
[/tex]

Which limit do we do first? It matters: if we take the limit as x approaches zero first, leaving y a fixed non-zero value, the result is -1; but if we take the limit as y approaches zero first, we get +1. So here is an example of an undefined limiting process. But I think that if it doesn't matter which limit you do first, because you get the same result either way, then the limiting process is well defined. Does this sound right? Have you seen anything in functional analysis that considers more than one limiting process and gives rules for which limit is done first?
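
(A quick computer-algebra check of this example, as a sketch in sympy:)

[code]
# The two iterated limits of (x - y)/(x + y) at the origin disagree,
# so the joint limit cannot exist.
import sympy as sp

x, y = sp.symbols('x y')
expr = (x - y) / (x + y)

print(sp.limit(sp.limit(expr, x, 0), y, 0))   # x -> 0 first: prints -1
print(sp.limit(sp.limit(expr, y, 0), x, 0))   # y -> 0 first: prints 1
[/code]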

And it does seem that with the Dirac delta there are limiting processes that are done before others. Part of the definition of the Dirac delta is that it integrates to 1 no matter what the value of the other parameter that goes to zero is. So here we are taking the integration limit first before considering the other.
 
Last edited:
  • #8
The [itex]\int[/itex] symbol here isn't an integral. At least, it isn't like what you learned in elementary calculus. When used here, it's just a symbol denoting the evaluation of a distribution at a test function... [itex]\int[/itex] is used here as a suggestive analogy, and also because when the arguments are both test functions, it does turn out to give the same answers as ordinary integration.

Other notations for this operation include:
  • Functional notation: something like [itex]\delta[\varphi] = \varphi(0)[/itex], or maybe even [itex]\delta(\varphi) = \varphi(0)[/itex].
  • Matrix-like notation: we would just write [itex]\delta \varphi = \varphi(0)[/itex]
  • Inner product notation: [itex](\delta, \varphi) = \varphi(0)[/itex]
  • Bra-ket notation: [itex]\langle \delta | \varphi \rangle = \varphi(0)[/itex]

In any case, this operation is jointly continuous in both of its arguments. In inner-product-like notation:
[tex]\lim_{n \to \infty} (S_n, \varphi_n) = \left(\lim_{n \to \infty} S_n, \lim_{n \to \infty} \varphi_n \right)[/tex]


In integral-like notation, where we write a distribution as a limit of test functions (really, as a limit of the distributions those test functions represent), this becomes the "always take the integral first" rule:
[tex]
\int_{-\infty}^{+\infty} S(x) \varphi(x) \, dx = \int_{-\infty}^{+\infty} \left( \lim_{n \to \infty} \hat{s}_n(x) \right) \varphi(x) \, dx = \lim_{n \to \infty} \int_{-\infty}^{+\infty} \hat{s}_n(x) \varphi(x) \, dx = \lim_{n \to \infty} \int_{-\infty}^{+\infty} s_n(x) \varphi(x) \, dx[/tex]
where the last expression is a test function integrated against a test function, and so can be computed as an ordinary Riemann integral.

I've added an extra feature to the above calculation: I put a hat (^) over the test function when I'm treating it as a distribution, so you can see more clearly where distributional things are happening, and when ordinary calculus is happening.
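
(To make the notation concrete, here is a toy sketch along these lines; the function names and grid choices are mine, not standard API:)

[code]
# Toy model: a distribution is just a rule phi -> number; for delta the
# "integral" is pure notation and the evaluation needs no actual integral.
import numpy as np

def delta(phi):
    # delta[phi] = phi(0): evaluation, not integration
    return phi(0.0)

def as_distribution(s, grid):
    # Wrap an ordinary function s as the functional phi -> integral of s*phi,
    # approximated by a Riemann sum on a fixed grid (an arbitrary choice here).
    dx = grid[1] - grid[0]
    return lambda phi: np.sum(s(grid) * phi(grid)) * dx

grid = np.linspace(-5.0, 5.0, 100001)
s_n = lambda x, d=0.01: np.exp(-x**2 / d**2) / np.sqrt(np.pi * d**2)
phi = np.cos   # a smooth stand-in for a test function

print(delta(phi))                       # exactly phi(0) = 1.0
print(as_distribution(s_n, grid)(phi))  # ~ 1.0: integral first, then width -> 0
[/code]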
 
Last edited:
  • #9
Hurkyl said:
Other notations for this operation include:
  • Functional notation: something like [itex]\delta[\varphi] = \varphi(0)[/itex], or maybe even [itex]\delta(\varphi) = \varphi(0)[/itex].
  • Matrix-like notation: we would just write [itex]\delta \varphi = \varphi(0)[/itex]
  • Inner product notation: [itex](\delta, \varphi) = \varphi(0)[/itex]
  • Bra-ket notation: [itex]\langle \delta | \varphi \rangle = \varphi(0)[/itex]

However, I think the details of actually doing the calculation would amount to exactly the integration process. The functional would equate to an integral equation as before. The matrix form would require multiplying and adding components, which would look exactly like integration. And the inner product and bra-ket notations are just notational differences.

Hurkyl said:
In any case, this operation is jointly continuous in both of its arguments. In inner-product-like notation:
[tex]\lim_{n \to \infty} (S_n, \varphi_n) = \left(\lim_{n \to \infty} S_n, \lim_{n \to \infty} \varphi_n \right)[/tex]

What is "jointly continuous in both of its arguments"? Are you sure this shouldn't be two individual limiting processes, [itex]\lim_{n \to \infty}[/itex] and [itex]\lim_{m \to \infty}[/itex], so that you'd get,

[tex] \left(\lim_{n \to \infty} S_n, \lim_{m \to \infty} \varphi_m \right)[/tex]

Hurkyl said:
In integral-like notation, where we write a distribution as a limit of test functions (really, as a limit of the distributions those test functions represent), this becomes the "always take the integral first" rule:
[tex]
\int_{-\infty}^{+\infty} S(x) \varphi(x) \, dx = \int_{-\infty}^{+\infty} \left( \lim_{n \to \infty} \hat{s}_n(x) \right) \varphi(x) \, dx = \lim_{n \to \infty} \int_{-\infty}^{+\infty} \hat{s}_n(x) \varphi(x) \, dx = \lim_{n \to \infty} \int_{-\infty}^{+\infty} s_n(x) \varphi(x) \, dx[/tex]

Yes, I suppose it would not make sense to integrate after you take the limit of the delta function. For then the area under the curve would not be 1 as required.


But my broader question has to do with the path integral. Some say that the measure of the path integral is not defined. But I'm still not sure what they mean. I think it has to do with the product of distributions.

What can "not defined" mean if not that the evaluation could have more than one value or is infinite? So I think the problem may be one of competing limits: which one you do first may result in different answers. I've not yet seen such competing-limit concerns in any of the functional analysis books I've browsed through. I searched the Web for "multiple limit processes", and I have seen a few webpages that acknowledge the problem without giving any guidance. There also seem to be references to advanced calculus books that may have more information. Maybe you've seen this issue addressed in some book somewhere.

It is important to me that this issue is addressed. In fact EVERYTHING depends on it. For it seems the path integral of physics and perhaps all of physics can be derived from this recursion relation of the Dirac delta function, if only it is valid. I can easily show this here if there is interest.

It seems that this problem of the measure of the path integral probably came about because the path integral was derived from the point of view of physics concepts. But I've come to the path integral from a purely mathematical perspective. And assuming the recursion relation for the delta function holds, the path-integral measure problem might be resolved by resolving the product-of-distributions problem.

I think I've shown that the integral of the product of two delta functions gives the same answer no matter which limit is done first (see the original post). Have I actually solved the product-of-distributions problem (and by extension the path-integral measure problem) by addressing the competing limits involved?
 
Last edited:
  • #10
friend said:
Then,

[tex]\int_{-\infty}^{+\infty} \delta(x - x_1)\,\delta(x_1 - x_0)\,dx_1 = \int_{-\infty}^{+\infty} \left( \lim_{\Delta_1 \to 0} \frac{1}{(\pi \Delta_1^2)^{1/2}}\, e^{-(x - x_1)^2/\Delta_1^2} \right) \left( \lim_{\Delta_0 \to 0} \frac{1}{(\pi \Delta_0^2)^{1/2}}\, e^{-(x_1 - x_0)^2/\Delta_0^2} \right) dx_1[/tex]

I found a similar equation in "The Feynman Integral and Feynman's Operational Calculus" by Gerald W. Johnson and Michael L. Lapidus, page 37, called a Chapman-Kolmogorov equation:

[tex]
\int_{-\infty}^{+\infty} \left( \frac{\lambda}{2\pi (t - s)} \right)^{1/2} e^{-\lambda (\omega - \upsilon)^2 / 2(t - s)} \left( \frac{\lambda}{2\pi (s - r)} \right)^{1/2} e^{-\lambda (\upsilon - u)^2 / 2(s - r)} \, d\upsilon = \left( \frac{\lambda}{2\pi (t - r)} \right)^{1/2} e^{-\lambda (\omega - u)^2 / 2(t - r)}
[/tex]

This equation does not involve limits, but it's easy to see that letting (t - s) and (s - r) go to zero would lead to the Gaussian form of the Dirac delta function. The book does not spell out how this equation is obtained. Does anyone know how they got it?
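
(One standard route, sketched assuming only Gaussian integration is needed: write [itex]a^2 = (t - s)/\lambda[/itex] and [itex]b^2 = (s - r)/\lambda[/itex], so the left side is a convolution of two normalized Gaussians; completing the square in [itex]\upsilon[/itex] and using [itex]\int_{-\infty}^{+\infty} e^{-\alpha (\upsilon - c)^2} \, d\upsilon = (\pi/\alpha)^{1/2}[/itex] gives

[tex]
\int_{-\infty}^{+\infty} \frac{e^{-(\omega - \upsilon)^2/2a^2}}{(2\pi a^2)^{1/2}} \, \frac{e^{-(\upsilon - u)^2/2b^2}}{(2\pi b^2)^{1/2}} \, d\upsilon = \frac{e^{-(\omega - u)^2/2(a^2 + b^2)}}{\left(2\pi (a^2 + b^2)\right)^{1/2}},
[/tex]

and since [itex]a^2 + b^2 = (t - r)/\lambda[/itex], this is exactly the stated identity: the squared widths add, just as in the Gaussian delta sequences above.)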

This equation is also confirmed in the book review at:

http://books.google.com/books?id=yp...n-kolmogorov equation Brownian motion&f=false
 
  • #11
friend said:
It is important to me that this issue is addressed. In fact EVERYTHING depends on it. For it seems the path integral of physics and perhaps all of physics can be derived from this recursion relation of the Dirac delta function, if only it is valid. I can easily show this here if there is interest.

I found this equation in the book "Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets" by Hagen Kleinert, page 91. You can also see it in a book review at:

http://users.physik.fu-berlin.de/~kleinert/public_html/kleiner_reb3/psfiles/pthic04.pdf

It shows how a quantum transition amplitude can be interpreted as a Dirac delta function equal to the integral of a great number of products of delta functions.

[tex]
\left( x_b t_b | x_a t_a \right) = \prod_{n = 1}^{N} \left[ \int_{-\infty}^{+\infty} dx_n \right] \prod_{n = 1}^{N + 1} \left\langle x_n | x_{n - 1} \right\rangle = \prod_{n = 1}^{N} \left[ \int_{-\infty}^{+\infty} dx_n \right] \prod_{n = 1}^{N + 1} \delta \left( x_n - x_{n - 1} \right) = \delta \left( x_b - x_a \right)
[/tex]

The last two equalities on the right can be obtained by iterating the recursion relation for the Dirac delta function. So you can see here that QM can be derived from this recursion relation, assuming that it is valid.
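
(A numerical sketch of that iteration with every width kept finite; the grid and width values are arbitrary choices of mine:)

[code]
# Iterate the finite-width recursion by direct numerical convolution: the
# chained result stays normalized and Gaussian, with squared widths adding,
# so the N-fold product integral is again a delta sequence as widths shrink.
import numpy as np

x = np.linspace(-5.0, 5.0, 4001)
dx = x[1] - x[0]
d = 0.05                                         # one finite width Delta
g = np.exp(-x**2 / d**2) / np.sqrt(np.pi * d**2)

chain = g.copy()
for _ in range(9):                               # N + 1 = 10 delta factors
    chain = np.convolve(chain, g, mode="same") * dx

print(np.sum(chain) * dx)                        # ~ 1: normalization survives
sigma2 = np.sum(chain * x**2) * dx               # variance of the result
print(np.sqrt(2.0 * sigma2), d * np.sqrt(10.0))  # effective width ~ d*sqrt(10)
[/code]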
 
  • #12
Why is the product undefined?

Using the "convolution theorem" I can get the product of two Dirac delta functions

[tex] D^{m}\delta (u) D^{n}\delta (u) [/tex]

as the Fourier transform of the convolution of the two functions

[tex] A(x^{m}*x^{n} ) [/tex] so this convolution would define the product.

Here A is a constant that can be a real or pure imaginary number.
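
(A hedged side note on this proposal: with the convention that the Fourier transform of [itex]D^{m}\delta[/itex] is proportional to [itex]x^{m}[/itex], the convolution [itex]x^{m} * x^{n} = \int_{-\infty}^{+\infty} t^{m} (x - t)^{n} \, dt[/itex] diverges for ordinary polynomials, so some regularization would be needed before this defines the product.)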
 

1. What is a product of Dirac delta distributions?

A product of Dirac delta distributions is a mathematical expression used in quantum mechanics to represent the probability of a particle being in a certain location or state. Each delta distribution is zero everywhere except at a single point, where it is infinite; this point represents the location or state of the particle.

2. How is a product of Dirac delta distributions calculated?

A product of Dirac delta distributions is calculated by multiplying the individual Dirac delta distributions together. This is done by evaluating the function at the point of interest and then multiplying the resulting values. In mathematical notation, it is represented as δ(x - x1)δ(x - x2)δ(x - x3)...

3. What is the significance of a product of Dirac delta distributions in physics?

In physics, a product of Dirac delta distributions is used to represent the probability of finding a particle at a specific location or state. It is also used to describe the position and momentum of a particle, as well as to calculate transition probabilities between quantum states. It is a fundamental tool in quantum mechanics and allows for precise calculations and predictions.

4. Can a product of Dirac delta distributions be visualized?

No, a product of Dirac delta distributions cannot be visualized in the traditional sense as it is a mathematical concept. However, it can be represented graphically as a spike at the point of interest, with a value of infinity at that point and zero everywhere else. This graphical representation helps to understand the behavior and properties of the function.

5. Are there any real-life applications of a product of Dirac delta distributions?

Yes, a product of Dirac delta distributions has many real-life applications in fields such as physics, engineering, and signal processing. It is used to model and analyze systems with discrete states or positions, such as electronic circuits, communication systems, and quantum systems. It is also used in image processing and pattern recognition to locate and identify specific features in an image.
