
Change of variables for Delta distribution

  1. Dec 1, 2007 #1
    Hello everybody

    First, I'd like to thank you all for the work you are doing with this forum. I found it by chance, but I'm sure from now on it will be a constant companion.

    I'm a Spanish physics graduate working on microwave guides and connectors for device components in satellites, and I have run into some trouble in my job. Although my question's title is not exactly my problem, I think its solution could help my research along.

    My question is about how to change variables in a Dirac delta distribution. I know the so-called scaling property

    [tex]\delta(f(x)) = \sum_i \frac{\delta(x - x_i)}{|f'(x_i)|}[/tex]

    where the [itex]x_i[/itex] are the roots of the function f(x). But my trouble arises, for example, when the function is as apparently innocent as

    [tex]f(x) = x^2[/tex]

    because in this case the function has a double root [itex]x_i = 0[/itex], and this is a problem in the denominator of the expression, because

    [tex]|f'(x_i)| = |2 x_i| = 0[/tex]
    I'd like to hear ideas for solving this problem, although I do have a possible starting point.

    If the function is [itex]f(x) = x^2 - a^2[/itex] with a a nonzero real number, the solution is the well-known formula

    [tex]\delta(x^2 - a^2) = \frac{1}{2|a|}\left[\delta(x - a) + \delta(x + a)\right][/tex]

    What if we take the limit as a tends to zero? I have no answer to this, but I think it could be a starting point. I have looked in some calculus books and I haven't found an answer to this problem, though I admit I have not read every mathematics book in existence. I am sure my problem is that I have not seen how this formula is derived, so I don't know how to adapt it to this case.
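    In case it is useful, here is a small numerical experiment I tried (nothing rigorous: the delta is replaced by a narrow Gaussian, and the width eps, the grid, and the test function are arbitrary choices of mine):

    [code]
    import numpy as np

    # Nascent delta: a narrow Gaussian of width eps (an arbitrary choice).
    def delta_eps(y, eps=1e-4):
        return np.exp(-y**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

    def phi(x):  # an arbitrary smooth test function
        return np.exp(-x**2)

    x = np.linspace(-5.0, 5.0, 2_000_001)
    dx = x[1] - x[0]

    for a in (1.0, 0.3, 0.1):
        # Left side: integral of phi(x) * delta_eps(x^2 - a^2) dx
        lhs = np.sum(phi(x) * delta_eps(x**2 - a**2)) * dx
        # Right side: the scaling formula, roots +-a, |f'(+-a)| = 2|a|
        rhs = (phi(a) + phi(-a)) / (2 * abs(a))
        print(a, lhs, rhs)

    # Both sides agree as long as a^2 is large compared to eps, and both
    # grow like 1/(2a) as a -> 0: the divergence sits in the denominator.
    [/code]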

    Thank you for your attention.
    Last edited: Dec 1, 2007
  3. Dec 1, 2007 #2


    Homework Helper

    How about letting [itex]u = x^2[/itex] and re-expressing your integral in terms of [itex]u[/itex], instead of trying to manipulate the form of the delta function? I haven't thought much about whether or not the double root would be a problem, but naively at least I wouldn't think so, and this is probably the first method I'd try to tackle the integral.
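    For instance, carrying out that substitution against a test function [itex]\phi[/itex] (splitting the integral at the origin, since [itex]u = x^2[/itex] is not monotonic there, with [itex]dx = du/(2\sqrt{u})[/itex] on each half) gives

    [tex]\int_{-\infty}^{+\infty} \phi(x)\,\delta(x^2)\,dx = \int_0^{+\infty} \frac{\phi(\sqrt{u}) + \phi(-\sqrt{u})}{2\sqrt{u}}\,\delta(u)\,du[/tex]

    Whether the [itex]1/(2\sqrt{u})[/itex] factor sitting at u = 0, exactly where the delta is concentrated, causes trouble is of course the question at hand.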
  4. Dec 1, 2007 #3


    Science Advisor
    Homework Helper

    Last edited: Dec 1, 2007
  5. Dec 1, 2007 #4
    Thank you for the ideas, but I don't need to put the delta distribution inside an integral. I am developing a model of the current that represents an electron at a point in space, and I am trying to get the Fourier series coefficients of the electron's velocity, represented as [itex]\delta(z - z'(t))[/itex] where z' is a function of t; but that is not the question.
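    For concreteness, the kind of computation I mean is the following (the spatial period L and the normalization are assumptions of my model, with [itex]k_n = 2\pi n/L[/itex]):

    [tex]c_n(t) = \frac{1}{L}\int_0^L \delta(z - z'(t))\, e^{-i k_n z}\, dz = \frac{1}{L}\, e^{-i k_n z'(t)}[/tex]

    The z-integration itself is harmless because the argument of the delta is linear in z; my trouble appears when I try to manipulate the delta as a distribution in t.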

    I thank you a lot for the ideas, but I expected there would be an expression for the delta distribution when the derivative of the function serving as its argument vanishes at the point [itex]x_i[/itex] under consideration. In this case I run into trouble when dz'/dt, evaluated at a time [itex]t_i[/itex] (the equivalent of [itex]x_i[/itex]), vanishes (dz'/dt being the equivalent of f').

    Thank you... but I will keep thinking about it.
  6. Dec 1, 2007 #5


    Staff Emeritus
    Science Advisor
    Gold Member

    Remember that distributions usually are not actual functions, so function-like things may not apply!

    I've only studied their calculus superficially, so I do not know if there is a standardized definition of "change of variable" for distributions; I would actually expect the scaling formula
    [tex]\delta(f(x)) = \sum_i \frac{\delta(x - x_i)}{|f'(x_i)|}[/tex]
    to be a definition, not a theorem.

    I think it would help if you showed the calculation you are trying to do -- problems with distributions usually arise from actual errors in their manipulation. For example, if I consider the expression [itex]\delta(z - f(t))[/itex] as being a function in t and a distribution in z, then it would be generally incorrect to plug in values for z!

    I have some more ideas, but not the time to develop them now.
  7. Dec 2, 2007 #6
    Well, as I said before, I am not a mathematician, but I have studied changes of variables for probability distributions, for example. In Bayesian theory, changes of variables in probability distributions are accepted and commonly studied, but I'm sure you could shed some light on my understanding.

    Well, although I accept this point, I would be very grateful if you could tell me what the delta function is in the example I gave, that is, when the derivative vanishes at the roots of the function.

    I consider the delta as a distribution in t through the function z - z'(t), but I knew I might have some problems because physicists are not very rigorous about distributions. Nevertheless, this model is commonly used by many prestigious physicists specializing in electromagnetism to represent the instantaneous current that an electron produces.

    I would be very grateful if you could share those other ideas when you have time.
  8. Dec 2, 2007 #7


    Staff Emeritus
    Science Advisor
    Gold Member

    As far as a distribution is concerned, the "value" at individual points doesn't matter. For example, consider the function given by

    [tex]f(x) = \begin{cases}
    0 & x \neq 0 \\
    1 & x = 0
    \end{cases}[/tex]
    If [itex]\phi[/itex] is a test function, then we have:

    [tex]\int_{-\infty}^{+\infty} f(x) \phi(x) \, dx = 0[/tex]

    so f represents the same distribution as 0.

    I suspect that's what you want to do here; one way to view [itex]\delta(z - f(t))[/itex] is as the two-variable distribution given by

    [tex]\iint \delta(z - f(t)) \phi(z, t) \, dA =
    \int_{-\infty}^{+\infty} \phi(f(t), t) \, dt[/tex]

    If we define [itex]g_t(z) := \delta(z - f(t))[/itex] (i.e. we "plug in" values for t to get a distribution in z), then you can check that

    [tex]\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} g_t(z) \phi(z, t) \, dz \, dt
    = \int_{-\infty}^{+\infty} \phi(f(t), t) \, dt[/tex]

    and so we see that things behave well with respect to "plugging in" values for t.
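    As a quick numerical sanity check (my own experiment, nothing rigorous: the delta is replaced by a narrow Gaussian, and the width, the grids, and the test function below are arbitrary choices), one can compare the two sides of this identity, taking f(t) = t² as in the special case considered next:

    [code]
    import numpy as np

    # Nascent delta: a narrow Gaussian of width eps (an arbitrary choice).
    def delta_eps(y, eps=5e-3):
        return np.exp(-y**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

    def phi(z, t):  # an arbitrary smooth test function
        return np.exp(-z**2 - t**2)

    t = np.linspace(-4.0, 4.0, 4001)
    z = np.linspace(-1.0, 17.0, 9001)   # covers the parabola z = t^2
    dt, dz = t[1] - t[0], z[1] - z[0]

    # Left side: iint delta_eps(z - t^2) phi(z, t) dz dt, one value of t at a time.
    lhs = sum(np.sum(delta_eps(z - ti**2) * phi(z, ti)) * dz for ti in t) * dt

    # Right side: int phi(t^2, t) dt
    rhs = np.sum(phi(t**2, t)) * dt

    print(lhs, rhs)   # the two numbers agree to several decimal places
    [/code]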

    Let's consider the special case that [itex]f(t) = t^2[/itex]. Then I claim that

    [tex]h_z(t) := \begin{cases}
    \frac{1}{2 \sqrt{z}} \left( \delta(t - \sqrt{z}) + \delta(t + \sqrt{z}) \right)
    & z > 0 \\
    0 & z < 0
    \end{cases}[/tex]

    (I don't care about the value at z = 0) also represents the same distribution, and so it can be thought of as what happens if we "plug in" a value of z.

    So, let's compute:

    [tex]\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty}
    h_z(t) \phi(z, t) \, dt \, dz
    = \int_0^{+\infty} \frac{1}{2 \sqrt{z}} \left( \phi(z, \sqrt{z}) + \phi(z, -\sqrt{z})
    \right) \, dz[/tex]

    which, I believe, is equal to

    [tex]\int_{-\infty}^{+\infty} \phi(t^2, t) \, dt[/tex]

    as desired.
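    To spell out that last step: substituting [itex]z = t^2[/itex] on the half-line (so [itex]dz = 2t\,dt[/itex] and [itex]\sqrt{z} = t[/itex] for [itex]t > 0[/itex]) gives

    [tex]\int_0^{+\infty} \frac{\phi(z, \sqrt{z}) + \phi(z, -\sqrt{z})}{2\sqrt{z}}\, dz = \int_0^{+\infty} \left[ \phi(t^2, t) + \phi(t^2, -t) \right] dt = \int_{-\infty}^{+\infty} \phi(t^2, t)\, dt[/tex]

    where the last equality sends [itex]t \to -t[/itex] in the second term.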

    The point is that to treat [itex]\delta(z - t^2)[/itex] as a bivariate distribution, we don't actually need to make sense of what happens when z = 0! In fact, I would expect z = 0 to be some sort of singularity.

    Am I making sense?
    Last edited: Dec 2, 2007
  9. Dec 2, 2007 #8


    Staff Emeritus
    Science Advisor
    Gold Member

    I did have one last thought... (again, I want to give the disclaimer that I don't know the 'official' way to do this stuff)

    Maybe, what you want to use is

    [tex]\delta(x^2) = \frac{1}{2|x|} \delta(x)[/tex]

    I find it very plausible that there is a rigorous way of treating these things that would lead to this equation. (I'm not entirely convinced about the 2 in the denominator)

    In fact, observe that your original equation can be rewritten:

    [tex]\sum_i \frac{\delta(x - a_i) } { |f'(a_i)| } =
    \sum_i \frac{\delta(x - a_i) } { |f'(x)| } =
    \frac{1}{|f'(x)|} \sum_i \delta(x - a_i)[/tex]

    (at least, it can be rewritten like this if everything is well-behaved...)
    Last edited: Dec 2, 2007
  10. Dec 2, 2007 #9
    I'm not sure whether I want to view the delta distribution as a two-variable distribution, or rather as a single-variable distribution in t through the function z' = f(t), in such a way that we have
    [tex]\delta[z - f(t)] = \delta[z - z'(t)][/tex]
    where z'(t) is the electron's position function.

    I agree with all of that, but what happens if we have the single-variable delta distribution

    [tex]\delta(z^2)[/tex]

    I wonder what the expression for the delta distribution is, as a function of z, in this apparently simple case. From my initial question another interesting question arose that I thought somebody could solve easily, and my curiosity wants to know the solution of this apparently innocent problem.

    Thank you for your ideas.
  11. Dec 2, 2007 #10


    Staff Emeritus
    Science Advisor
    Gold Member

    I think my final verdict is that that expression probably doesn't make sense. As far as I know, composition of a distribution with a function isn't generally defined, and there doesn't even seem to be any reasonable way to make an ad hoc definition for what this expression might mean.
  12. Dec 2, 2007 #11
    Well, I think it could make the same sense as the case

    [tex]\delta(z^2 - a^2)[/tex]

    but in this case the solution is the well-known formula:

    [tex]\delta(z^2 - a^2) = \frac{1}{2|a|}\left[\delta(z - a) + \delta(z + a)\right][/tex]

    Why can't we find an analogous expression for a delta distribution centered at z^2 = 0 if we can find one when the distribution is shifted by a^2? I don't understand.
    Last edited: Dec 2, 2007
  13. Dec 2, 2007 #12


    Science Advisor
    Homework Helper

    I'd guess that this is exactly what previous students of the Dirac delta found difficult to express, which is why they applied a shift.

    The Dirac delta is technically not a function, and as a probability distribution it is degenerate -- so it is a rather idiosyncratic object.

    Have you tried http://en.wikipedia.org/wiki/Dirac_delta#Fourier_transform for [itex]f_2 = z^2[/itex] and [itex]f_1 = 0[/itex]?
    Last edited: Dec 2, 2007
  14. Dec 14, 2007 #13
    Thank you for everything.

    I couldn't log in during the last few days. I'll try what you have suggested.

    Have nice Holidays.
  15. Feb 14, 2008 #14

    I don't know if this problem has been resolved, but I think the above post is correct.
    You can use the Dirac identity for the delta function of a real argument:

    [tex]\delta(x) = -\frac{1}{\pi} \lim_{\eta\to 0}\Im\left[\frac{1}{i\eta + x}\right][/tex]

    This means that

    [tex]\delta(x^2) = -\frac{1}{\pi} \lim_{\eta\to 0}\Im\left[\frac{1}{i\eta + x^2}\right] = -\frac{1}{2\pi x} \lim_{\eta\to 0}\Im\left[\frac{1}{\sqrt{i\eta} - x} - \frac{1}{\sqrt{i\eta} + x}\right][/tex]

    Splitting the term [itex]\sqrt{i\eta}[/itex] into real and imaginary parts and taking the limit [itex]\eta\to 0[/itex] recovers the expression given by Hurkyl.
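    Explicitly, taking the imaginary part before splitting gives

    [tex]-\frac{1}{\pi}\,\Im\left[\frac{1}{i\eta + x^2}\right] = \frac{1}{\pi}\,\frac{\eta}{\eta^2 + x^4}[/tex]

    a family of bumps concentrated at x = 0 whose total integral is [itex]\eta^{-1/2}/\sqrt{2}[/itex]. This diverges as [itex]\eta \to 0[/itex] rather than tending to 1, which is the same singular behaviour at the double root discussed above.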
  16. Feb 15, 2008 #15
    There's something I don't understand in the chain of equalities Hurkyl wrote above. How can we go from the first sum to the second? What I can't understand is why the denominator

    [tex] { |f'(a_i)| } [/tex]

    can be expressed as

    [tex] { |f'(x)| } [/tex]

    If this expression is correct, then I have indeed got an expression for my problem, because it no longer seems to depend on the roots [itex]a_i[/itex] of the function f.

    Could you confirm this for me?
  17. Feb 15, 2008 #16


    Staff Emeritus
    Science Advisor
    Gold Member

    Two distributions are equal iff they always yield the same answer when integrated against a test function.

    Try applying both sides of that equality to an arbitrary test function.
  18. Feb 15, 2008 #17
    It is the sifting property of the Dirac delta, i.e. [itex]\int g(x)\,\delta(x-\alpha)\,dx=g(\alpha)[/itex], thus

    [tex]\int \sum_i\frac{\delta(x-\alpha_i)}{|f'(x)|}\,g(x)\,dx=\sum_i\int \frac{\delta(x-\alpha_i)}{|f'(x)|}\,g(x)\,dx=\sum_i\frac{1}{|f'(\alpha_i)|}\,g(\alpha_i)=\sum_i\frac{1}{|f'(\alpha_i)|}\,\int\delta(x-\alpha_i)\,g(x)\,dx \Rightarrow[/tex]

    [tex]\int \sum_i\frac{\delta(x-\alpha_i)}{|f'(x)|}\,g(x)\,dx=\int \sum_i\frac{\delta(x-\alpha_i)}{|f'(\alpha_i)|}\,g(x)\,dx \quad \forall g[/tex]

    which gives

    [tex]\sum_i\frac{\delta(x-\alpha_i)}{|f'(x)|}=\sum_i\frac{\delta(x-\alpha_i)}{|f'(\alpha_i)|}[/tex]
  19. Feb 15, 2008 #18
    Oops! Hurkyl said it first! :smile:
  20. Feb 15, 2008 #19
    OK. Thank you all.
  21. Feb 15, 2008 #20
    On a related note, if I had a function of a vector argument:

    [tex]\delta[f(\mathbf{r})] = \sum_{\mathbf{r}_i}\frac{\delta(\mathbf{r}-\mathbf{r}_i)}{|\nabla_{\mathbf{r}}f(\mathbf{r})|}[/tex]

    Then is the above statement true? Do I simply take the modulus of the vector defined by [itex]\nabla_{\mathbf{r}}f(\mathbf{r})[/itex] in the denominator?

    [kind of edit: in post #14 I messed up the signs in the denominators...]
    Last edited: Feb 15, 2008
  22. Dec 4, 2011 #21
    I was just looking at the result that I think you need, in Hörmander's book The Analysis of Linear Partial Differential Operators I: Distribution Theory and Fourier Analysis, chapter six. I have Theorem 6.1.5 on p. 136 written down in my notes, but I don't have the book to hand. Roughly: if f is a test function on Euclidean n-space and g is a real-valued differentiable function with Dg nonzero at every x where g(x) = 0, then

    [tex]\int\limits_{\mathbb{R}^n} f(x)\, \delta(g(x))\, dx = \int\limits_{g^{-1}(0)} \frac{f(x)}{|Dg(x)|}\, d\sigma(x)[/tex]

    where σ is the surface measure on the hypersurface [itex]g^{-1}(0)[/itex].

    I think the earlier posts are special cases of this.
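    For example, in one dimension with [itex]g(x) = x^2 - a^2[/itex] and [itex]a \neq 0[/itex], the zero set [itex]g^{-1}(0) = \{-a, +a\}[/itex] consists of two points, σ is counting measure, and [itex]|Dg(\pm a)| = 2|a|[/itex], so the theorem reduces to

    [tex]\int_{\mathbb{R}} f(x)\,\delta(x^2 - a^2)\,dx = \frac{f(a) + f(-a)}{2|a|}[/tex]

    which is the well-known formula from the first post. Note that the hypothesis that Dg not vanish on [itex]g^{-1}(0)[/itex] fails precisely in the troublesome case a = 0.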
    Last edited: Dec 4, 2011