Change of variables for Delta distribution

In summary, the conversation is about a physics graduate researching microwave guides and connectors for satellite devices who is having trouble changing variables in a Dirac delta distribution. Suggestions include substituting u = x^2 in the integral and using transform representations of the delta, and being careful about how the distribution behaves where the derivative vanishes. It is also discussed that the values of a distribution at individual points do not matter. The conversation ends by considering the special case f(t) = t^2.
  • #1
esorolla
Hello everybody

First, I'd like to thank all of you for the work you are doing on this forum. I found it by chance, but I'm sure from now on it will be a constant companion.

I'm a Spanish physics graduate working on microwave guides and connectors for satellite components, and I've run into some trouble in my job. Although my question's title isn't my exact problem, I think its solution could help my research.

My question is about how to change variables in a Dirac delta distribution. I know the usual scaling property:

delta[f(x)] = Sum_i delta(x - x_i) / |f'(x_i)|

where the x_i are the roots of the function f(x). But my trouble arises, for example, when the function is as apparently innocent as

f(x) = x^2

because in this case the function has a double root x_i whose value is zero, and this causes a problem in the denominator of the expression, because

f'(x_i) = 2*x_i

I'd welcome ideas for solving this problem, although I do have a possible starting point.

If the function is f(x) = x^2 - a^2, with 'a' a real number, the solution is the well-known formula

delta[x^2 - a^2] = (1/(2|a|)) * [delta(x - a) + delta(x + a)]

What about taking the limit as 'a' tends to zero? I have no answer to this, but I think it could be a starting point. I have looked in some calculus books and haven't found an answer to this problem, though I admit I haven't read every mathematics book in existence. I'm sure my problem is that I haven't seen the derivation of this formula, so I don't know how to adapt it to this case.

Thank you for your attention.
 
  • #2
How about letting [itex]u = x^2[/itex] and re-expressing your integral in terms of [itex]u[/itex] instead of trying to manipulate the form of the delta function? I haven't thought much on whether or not the double root would be a problem, but naively at least I wouldn't think so, and this is probably the first method I'd try to tackle the integral.
 
  • #4
Thank you for the ideas, but I don't need to put the delta distribution inside an integral. I am developing a model of the current produced by an electron at a point in space, and I am trying to obtain the Fourier series coefficients of the electron's velocity, represented as delta(z - z'(t)), where z' is a function of t; but that is not the question here.

I appreciate the ideas, but I expected there would be an expression for the delta distribution when the derivative of the function playing the role of the variable vanishes at the point x_i under consideration. In my case the trouble arises when dz'/dt, evaluated at a time t_i (the equivalent of x_i), vanishes.

Thank you... but I will keep thinking about it.
 
  • #5
Remember that distributions usually are not actual functions, so function-like things may not apply!

I've only studied their calculus superficially, so I do not know if there is a standardized definition for "change of variable" for distributions; I would actually expect
delta[f(x)] = Sum_i delta(x - x_i) / |f'(x_i)|​
to be a definition, not a theorem.


I think it would help if you showed the calculation you are trying to do -- problems with distributions usually arise from actual errors in their manipulation. For example, if I consider the expression [itex]\delta(z - f(t))[/itex] as being a function in t and a distribution in z, then it would be generally incorrect to plug in values for z!


I have some more ideas, but not the time to develop them now.
 
  • #6
Hurkyl said:
I've only studied their calculus superficially, so I do not know if there is a standardized definition for "change of variable" for distributions;

Well, as I said before, I am not a mathematician, but I have studied changes of variables for probability distributions, for example. In Bayesian theory, changes of variables in probability distributions are accepted and routinely studied; but I'm sure you can shed some light on my understanding.

Hurkyl said:
I would actually expect
delta[f(x)] = Sum_i delta(x - x_i) / |f'(x_i)|​
to be a definition, not a theorem.

Well, although I accept this point, I'd be very grateful if you could tell me what the delta function is in the case I gave as an example, i.e. when the derivative vanishes at the roots of the function.

Hurkyl said:
I think it would help if you showed the calculation you are trying to do -- problems with distributions usually arise from actual errors in their manipulation. For example, if I consider the expression [itex]\delta(z - f(t))[/itex] as being a function in t and a distribution in z, then it would be generally incorrect to plug in values for z!

I consider delta as a distribution in t through the function z - z'(t), though I knew I would run into problems, because we physicists don't know much about distributions. Nevertheless, this model is commonly used by many prestigious physicists specialized in electromagnetism to represent the instantaneous current produced by an electron.

Hurkyl said:
I have some more ideas, but not the time to develop them now.

I would be very grateful if you could share those other ideas when you have time.
 
  • #7
As far as a distribution is concerned, the "value" at individual points doesn't matter. For example, consider the function given by

[tex]f(x) = \begin{cases}
0 & x \neq 0 \\
1 & x = 0
\end{cases}[/tex]

If [itex]\phi[/itex] is a test function, then we have:

[tex]\int_{-\infty}^{+\infty} f(x) \phi(x) \, dx = 0[/tex]

so f represents the same distribution as 0.


I suspect that's what you want to do here; one way to view [itex]\delta(z - f(t))[/itex] is as the two-variable distribution given by

[tex]\iint \delta(z - f(t)) \phi(z, t) \, dA =
\int_{-\infty}^{+\infty} \phi(f(t), t) \, dt[/tex]

If we define [itex]g_t(z) := \delta(z - f(t))[/itex] (i.e. we "plug in" values for t to get a distribution in z), then you can check that

[tex]
\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} g_t(z) \phi(z, t) \, dz \, dt
= \int_{-\infty}^{+\infty} \phi(f(t), t) \, dt
[/tex]

and so we see that things behave well with respect to "plugging in" values for t.


Let's consider the special case that [itex]f(t) = t^2[/itex]. Then I claim that

[tex]h_z(t) := \begin{cases}
\frac{1}{2 \sqrt{z}} \left( \delta(t - \sqrt{z}) + \delta(t + \sqrt{z}) \right)
& z > 0
\\
0 & z < 0
\end{cases}[/tex]

(I don't care about the value at z = 0) also represents the same distribution, and so it can be thought of as what happens if we "plug in" a value of z.

So, let's compute:

[tex]
\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty}
h_z(t) \phi(z, t) \, dt \, dz
= \int_0^{+\infty} \frac{1}{2 \sqrt{z}} \left( \phi(z, \sqrt{z}) + \phi(z, -\sqrt{z})
\right) \, dz[/tex]

which, I believe, is equal to

[tex]\int_{-\infty}^{+\infty} \phi(t^2, t) \, dt[/tex]

as desired.


The point is that to treat [itex]\delta(z - t^2)[/itex] as a bivariate distribution, we don't actually need to be able to make sense of what happens when z = 0! In fact, I would expect that z = 0 to be some sort of singularity.


Am I making sense?
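A quick numerical check of this claim (my sketch, not from the post): with the arbitrary test function phi(z, t) = exp(-z - t^2), both sides should equal the integral of exp(-2t^2), namely sqrt(pi/2).

```python
import numpy as np
from scipy.integrate import quad

phi = lambda z, t: np.exp(-z - t**2)  # an arbitrary smooth test function

# RHS: integral of phi(t^2, t) dt
rhs, _ = quad(lambda t: phi(t**2, t), -np.inf, np.inf)

# LHS: integral over z > 0 of (phi(z, sqrt(z)) + phi(z, -sqrt(z))) / (2 sqrt(z)),
# split at z = 1 because of the integrable 1/sqrt(z) singularity at z = 0
h = lambda z: (phi(z, np.sqrt(z)) + phi(z, -np.sqrt(z))) / (2 * np.sqrt(z))
lhs = quad(h, 0, 1)[0] + quad(h, 1, np.inf)[0]

print(lhs, rhs, np.sqrt(np.pi / 2))  # all three agree
```

Note that the 1/sqrt(z) factor is integrable at z = 0, which is why the value at z = 0 itself never matters.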
 
  • #8
I did have one last thought... (again, I want to give the disclaimer that I don't know the 'official' way to do this stuff)

Maybe, what you want to use is

[tex]\delta(x^2) = \frac{1}{2|x|} \delta(x)[/tex]

I find it very plausible that there is a rigorous way of treating these things that would lead to this equation. (I'm not entirely convinced about the 2 in the denominator)


In fact, observe that your original equation can be rewritten:

[tex]
\sum_i \frac{\delta(x - a_i) } { |f'(a_i)| } =
\sum_i \frac{\delta(x - a_i) } { |f'(x)| } =
\frac{1}{|f'(x)|} \sum_i \delta(x - a_i)
[/tex]

(at least, it can be rewritten like this if everything is well-behaved...)
 
  • #9
Hurkyl said:
I suspect that's what you want to do here; one way to view [itex]\delta(z - f(t))[/itex] is as the two-variable distribution given by

[tex]\iint \delta(z - f(t)) \phi(z, t) \, dA =
\int_{-\infty}^{+\infty} \phi(f(t), t) \, dt[/tex]

I'm not sure whether I'd rather view the delta distribution as a two-variable distribution or as a single-variable distribution in t through the function z' = f(t), in such a way that we have
[tex]\delta[z - f(t)] = \delta[z - z'(t)][/tex]
where z'(t) is the electron's position function.

Hurkyl said:
The point is that to treat [itex]\delta(z - t^2)[/itex] as a bivariate distribution, we don't actually need to be able to make sense of what happens when z = 0! In fact, I would expect that z = 0 to be some sort of singularity.

I agree with you entirely, but what happens if we have the single-variable delta distribution

[tex]\delta(z^2)[/tex]?

I wonder what the expression for the delta distribution is, as a function of z, in this apparently simple case. From my initial question I arrived at another interesting question that I thought somebody could solve easily, and my curiosity wants to know the solution to this apparently innocent problem.

Thank you for your ideas.
 
  • #10
esorolla said:
I agree with you entirely, but what happens if we have the single-variable delta distribution

[tex]\delta(z^2)[/tex]?

I wonder what the expression for the delta distribution is, as a function of z, in this apparently simple case.
I think my final verdict is that that expression probably doesn't make sense. As far as I know, composition of a distribution with a function isn't generally defined, and there doesn't even seem to be any reasonable way to make an ad hoc definition for what this expression might mean.
 
  • #11
Hurkyl said:
I think my final verdict is that that expression probably doesn't make sense. As far as I know, composition of a distribution with a function isn't generally defined, and there doesn't even seem to be any reasonable way to make an ad hoc definition for what this expression might mean.

Well, I think it could make sense in the same way as the case

[tex]\delta(z^2 - a^2)[/tex]

for which the solution is the well-known formula:

[tex]\delta[z^2 - a^2] = \frac{\delta(z - a) + \delta(z + a)}{2|a|}[/tex]

Why can't we find an analogous expression for a delta distribution centered at z^2 = 0, if we can when the distribution is shifted by a^2? I don't understand.
 
  • #12
I'd guess that this is exactly what earlier students of the Dirac delta found difficult to express, so they applied a shift factor.

The Dirac delta is technically not a function, and it is a degenerate probability distribution -- so it is a rather idiosyncratic object.

Have you tried http://en.wikipedia.org/wiki/Dirac_delta#Fourier_transform with f_2 = z^2 and f_1 = 0?
 
  • #13
Thank you for everything.

I couldn't log in these last few days. I'll try what you suggested.

Have nice Holidays.
 
  • #14
Hurkyl said:
I did have one last thought... (again, I want to give the disclaimer that I don't know the 'official' way to do this stuff)

Maybe, what you want to use is

[tex]\delta(x^2) = \frac{1}{2|x|} \delta(x)[/tex]

I find it very plausible that there is a rigorous way of treating these things that would lead to this equation. (I'm not entirely convinced about the 2 in the denominator)


In fact, observe that your original equation can be rewritten:

[tex]
\sum_i \frac{\delta(x - a_i) } { |f'(a_i)| } =
\sum_i \frac{\delta(x - a_i) } { |f'(x)| } =
\frac{1}{|f'(x)|} \sum_i \delta(x - a_i)
[/tex]

(at least, it can be rewritten like this if everything is well-behaved...)


I don't know if this problem has been resolved, but I think the post above is correct.
You can use the Dirac identity for the delta function of a real argument:

[tex]
\delta(x) =-\frac{1}{\pi} \lim_{\eta\to 0}\Im\left[\frac{1}{i\eta + x}\right]
[/tex]

This means that

[tex]
\delta(x^2) =-\frac{1}{\pi} \lim_{\eta\to 0}\Im\left[\frac{1}{i\eta + x^2}\right] =-\frac{1}{2\pi x} \lim_{\eta\to 0}\Im\left[\frac{1}{\sqrt{i\eta} - x} - \frac{1}{\sqrt{i\eta} + x}\right]
[/tex]

Splitting the term [itex]\sqrt{i\eta}[/itex] into real and imaginary parts and taking the limit [itex]\eta\to 0[/itex] recovers you the expression given by Hurkyl.
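For reference, the identity in the first display is just the Lorentzian nascent delta: -(1/pi) Im[1/(i eta + x)] = eta / (pi (x^2 + eta^2)). A quick numerical check of that representation (my sketch; the test function is an arbitrary choice, SciPy assumed):

```python
import numpy as np
from scipy.integrate import quad

def lorentzian_delta(x, eta):
    # -(1/pi) * Im[1/(i*eta + x)] = eta / (pi * (x^2 + eta^2))
    return -np.imag(1.0 / (1j * eta + x)) / np.pi

phi = lambda x: np.exp(-x**2)  # an arbitrary smooth test function
eta = 1e-4

# Split the range at the peak so the quadrature resolves it
val = sum(quad(lambda x: lorentzian_delta(x, eta) * phi(x), a, b)[0]
          for a, b in [(-np.inf, -1), (-1, 0), (0, 1), (1, np.inf)])
print(val)  # close to phi(0) = 1
```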
 
  • #15
There's something I don't understand when Hurkyl writes

Hurkyl said:

[tex]
\sum_i \frac{\delta(x - a_i) } { |f'(a_i)| } =
\sum_i \frac{\delta(x - a_i) } { |f'(x)| }
[/tex]

How can we go from the first sum to the second? What I can't understand is why the denominator

[tex] { |f'(a_i)| } [/tex]

can be expressed as

[tex] { |f'(x)| } [/tex]

If this expression is right, then I have indeed obtained an expression for my problem, because it seems not to depend on the root a_i of the function f.

Could you confirm this for me?
 
  • #16
Two distributions are equal iff they always yield the same answer when integrated against a test function.

Try applying both sides of that equality to an arbitrary test function.
 
  • #17
It is the defining property of the Dirac delta, i.e. [itex]\int g(x)\,\delta(x-\alpha)\,dx=g(\alpha)[/itex], thus

[tex]\int \sum_i\frac{\delta(x-\alpha_i)}{|f'(x)|}\,g(x)\,dx=\sum_i\int \frac{\delta(x-\alpha_i)}{|f'(x)|}\,g(x)\,dx=\sum_i\frac{g(\alpha_i)}{|f'(\alpha_i)|}=\sum_i\frac{1}{|f'(\alpha_i)|}\int\delta(x-\alpha_i)\,g(x)\,dx \Rightarrow[/tex]

[tex]\int \sum_i\frac{\delta(x-\alpha_i)}{|f'(x)|}\,g(x)\,dx=\int \sum_i\frac{\delta(x-\alpha_i)}{|f'(\alpha_i)|}\,g(x)\,dx \quad \forall g[/tex]

which gives

[tex]\sum_i\frac{\delta(x-\alpha_i)}{|f'(x)|}=\sum_i\frac{\delta(x-\alpha_i)}{|f'(\alpha_i)|}[/tex]
 
  • #18
Oupps! Hurkyl said it first! :smile:
 
  • #19
Rainbow Child said:
Oupps! Hurkyl said it first! :smile:

Ok. Thank you for all.
 
  • #20
On a related note, if I had a function of a vector argument:

[tex]
\delta[f(\mathbf{r})] = \sum_{\mathbf{r}_i}\frac{\delta(\mathbf{r}-\mathbf{r}_i)}{|\nabla_{\mathbf{r}}f(\mathbf{r})|}
[/tex]

Then is the above statement true? Do I simply take the modulus of the vector defined by [itex]\nabla_{\mathbf{r}}f(\mathbf{r})[/itex] in the denominator?

[kind of edit: in post 14 I messed up the signs of the expressions of the denominators...]
 
  • #21
I was just looking at the result I think you need, in Hörmander's book The Analysis of Linear Partial Differential Operators I: Distribution Theory and Fourier Analysis, chapter six. I wrote down Theorem 6.1.5 (p. 136) in my notes, but I don't have the book to hand. Roughly: if f is a test function on Euclidean n-space and g is a real-valued differentiable function with Dg nonzero at every x where g(x) = 0, then
[itex]
\int\limits_{\mathbb{R}^n} f(x) \delta(g(x)) dx = \int\limits_{g^{-1}(0)}
\frac{f(x)}{|Dg(x)|} d \sigma (x)
[/itex]
where σ is the surface measure on the hypersurface [itex]g^{-1}(0)[/itex].

The post above is, I think, a special case.
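A numerical illustration of this theorem in the plane (my sketch, not from Hörmander): take g(x, y) = x^2 + y^2 - 1, whose zero set is the unit circle with |Dg| = 2 on it. With f ≡ 1 the left side becomes the area integral of delta(x^2 + y^2 - 1) and the right side is (1/2) times the circumference, i.e. pi.

```python
import numpy as np
from scipy.integrate import quad

def nascent_delta(u, eps):
    """Narrow Gaussian approximating the Dirac delta."""
    return np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

eps = 1e-3
# Area integral in polar coordinates: g = r^2 - 1, dA = r dr dtheta, f = 1
area_integral = 2 * np.pi * quad(lambda r: nascent_delta(r**2 - 1, eps) * r,
                                 0, 3, points=[1.0], limit=200)[0]

# Hypersurface side: (1 / |Dg|) * (length of the unit circle) = (1/2) * 2*pi
surface_integral = np.pi

print(area_integral, surface_integral)  # both close to pi
```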
 

1. What is a Delta distribution?

A Delta distribution, also known as the Dirac delta, is a distribution that vanishes everywhere except at a single point, where it is singular; integrating it against a function returns that function's value at the point. It is often used to represent a point mass or impulse in physics and engineering applications.

2. Why is a change of variables necessary for Delta distribution?

A change of variables is necessary for Delta distribution because it allows us to express the distribution in terms of a different variable, making it more useful for solving problems and making calculations. It also allows us to transform the distribution into different coordinate systems, which can be helpful in certain applications.

3. How do you perform a change of variables for Delta distribution?

To perform a change of variables for Delta distribution, we start by defining a new variable, let's say x = g(y), where g(y) is some function that maps y onto x. We then substitute this new variable into the original expression for the Delta distribution. This will give us a new expression for the Delta distribution in terms of the new variable x.
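As a concrete illustration of the procedure above (a sketch assuming SymPy, whose DiracDelta expansion implements exactly this change of variables for simple roots):

```python
from sympy import symbols, DiracDelta, integrate, exp, oo

x = symbols('x', real=True)

# Expand delta(x**2 - 1) over its simple roots x = +/-1, weighted by 1/|f'(root)|
expanded = DiracDelta(x**2 - 1).expand(diracdelta=True, wrt=x)
print(expanded)  # DiracDelta(x - 1)/2 + DiracDelta(x + 1)/2

# Sifting against a test function picks out the roots with those weights
val = integrate(expanded * exp(-x**2), (x, -oo, oo))
print(val)  # exp(-1)
```

Note this only works when every root is simple; for a repeated root such as delta(x**2) the weight 1/|f'(root)| is undefined, matching the discussion in the thread above.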

4. What are some common examples of a change of variables for Delta distribution?

One common example of a change of variables for Delta distribution is in polar coordinates, where the Delta distribution can be expressed as a function of the radial coordinate. Another example is in quantum mechanics, where the Delta distribution is often used to represent a wavefunction in terms of momentum instead of position.

5. What are the benefits of using a change of variables for Delta distribution?

Using a change of variables for Delta distribution can make calculations and problem-solving easier, as it allows us to express the distribution in terms of a more convenient variable. It also allows us to transform the distribution into different coordinate systems, making it more versatile for different applications.
