Dirac delta function related

  1. How can I find the derivative of a step function?
     
  3. Nugatory

    Staff: Mentor

    It's not defined at the step, so you can't.
    But if you can tell us more about the problem that leads you to ask this question, someone here will likely be able to help; there are many here who have experience dealing with the annoyances caused by discontinuities in step and delta functions.
     
  4. Matterwave

    Matterwave 3,774
    Science Advisor
    Gold Member

    You can use the definition of a step function. What do you think the derivative is?
     
  5. HallsofIvy

    HallsofIvy 40,212
    Staff Emeritus
    Science Advisor

    If you are not at the "step", then there exists an interval around ##x## on which ##f(x)## is constant. What is the derivative of that?
     
  6. The derivative of the step function is the Dirac delta function. I don't know how good of a proof this is, but it's the best I could come up with haha:

    You can define a function ##f## such that ##f(x)=\frac{d}{dx}step(x)## and you can see that ##f## is zero everywhere except at zero, where it is infinity. So you can define ##f## as a piecewise function. Now, to show that ##f=\delta## you need to show that $$\int_{-a}^{a}f(x)dx=1$$ for any ##a>0##.

    I did this using the definition of the derivative and the definition of the step function: $$\int_{-a}^{a}\frac{d}{dx}step(x)dx=\int_{-a}^{a}\lim_{h \to 0}\frac{1}{h}(step(x+h)-step(x))dx$$
    Taking ##h>0## (and ##h<a##) and using ##\int step(x)dx = x\, step(x)## as an antiderivative, the integral of ##step(x+h)-step(x)## over ##[-a,a]## comes out to ##h##, so $$\int_{-a}^{a}f(x)dx=\lim_{h \to 0}\frac{1}{h}\int_{-a}^{a}(step(x+h)-step(x))dx=\lim_{h \to 0}\frac{1}{h}h=1$$
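    A quick numerical sanity check of that last step (just a sketch in Python/NumPy; the grid size and the values of ##h## below are arbitrary choices): for a fixed small ##h>0##, the difference quotient of the step function really does integrate to about 1 over ##[-a,a]##.

[code]
# Sketch: for 0 < h < a, (1/h) * integral_{-a}^{a} (step(x+h) - step(x)) dx should be 1.
import numpy as np

def step(x):
    return np.where(x > 0.0, 1.0, 0.0)                # Heaviside step, with step(0) taken as 0

a = 1.0
x, dx = np.linspace(-a, a, 2000001, retstep=True)     # fine grid on [-a, a]
for h in [0.1, 0.01, 0.001]:
    integral = np.sum((step(x + h) - step(x)) / h) * dx
    print(h, integral)                                 # each value comes out close to 1
[/code]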
     
  7. julian

    julian 402
    Gold Member

    To be a bit more sophisticated (this comes up with Green's functions and the like), it can easily be shown that

    [itex]\Theta (\tau) = - {1 \over 2 \pi i} \lim_{\epsilon \rightarrow 0} \int_{-\infty}^\infty d \omega {e^{-i \omega \tau} \over \omega + i \epsilon}[/itex]

    From this we get directly

    [itex]{d \Theta \over d \tau} = - {1 \over 2 \pi i} \lim_{\epsilon \rightarrow 0} \int_{-\infty}^\infty {d \over d \tau} {e^{-i \omega \tau} \over \omega + i \epsilon} d \omega[/itex]
    [itex]= - {1 \over 2 \pi i} \lim_{\epsilon \rightarrow 0} \int_{-\infty}^\infty {-i \omega \over \omega + i \epsilon} e^{-i \omega \tau} d \omega[/itex]
    [itex]= {1 \over 2 \pi} \int_{-\infty}^\infty e^{-i \omega \tau} d \omega = \delta (\tau)[/itex]
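    As a numerical cross-check of this representation (only a sketch: Python/NumPy, with ##\epsilon## kept finite and the ##\omega##-integral truncated and discretized), the expression should come out close to ##e^{-\epsilon\tau}\approx 1## for ##\tau>0## and close to ##0## for ##\tau<0##:

[code]
# Sketch: evaluate Theta_eps(tau) = -1/(2*pi*i) * int e^{-i*w*tau} / (w + i*eps) dw
# with a finite eps and a truncated, discretized w-grid, and compare with the step.
import numpy as np

eps = 0.05
dw = 0.005
w = np.arange(-500.0, 500.0, dw)                  # truncated frequency grid
for tau in [-2.0, -0.5, 0.5, 2.0]:
    integrand = np.exp(-1j * w * tau) / (w + 1j * eps)
    theta = -np.sum(integrand) * dw / (2j * np.pi)
    print(tau, round(theta.real, 3))               # ~0 for tau < 0, ~exp(-eps*tau) ~ 1 for tau > 0
[/code]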
     
  8. pwsnafu

    pwsnafu 902
    Science Advisor

    That is not how the Dirac delta is defined, and proving ##\delta = H'## uses a different definition of derivative. Indeed, it should be obvious that there is no piecewise-defined function with the properties you outlined, because integrating over a single point gives zero by the definition of the integral.

    What are you using to justify interchanging the limit with the integral?
     
    Last edited: Aug 4, 2014
  9. julian

    julian 402
    Gold Member

    It is about contour integration and Jordan's lemma, isn't it? You close the contour with a semi-circle in the UHP or the LHP depending on the sign of [itex]\tau[/itex]. There is a pole in the LHP but not in the UHP. For [itex]\tau > 0[/itex] you close with a semi-circle in the LHP (getting a non-zero answer from the contour integral); for [itex]\tau < 0[/itex] you close with a semi-circle in the UHP (getting zero). That is where the piecewise behaviour comes from. The rest, getting 1, comes from taking the limit [itex]\epsilon \rightarrow 0[/itex] of what the residue theorem gives you (spelled out below).

    That is a definition of the delta-function.
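    To spell that out (a sketch, keeping ##\epsilon## finite until the end): the integrand has a single simple pole at ##\omega=-i\epsilon##, which lies in the LHP. Closing the contour as described and using the residue theorem gives
    $$\int_{-\infty}^{\infty}\frac{e^{-i\omega\tau}}{\omega+i\epsilon}\,d\omega=\begin{cases}-2\pi i\,e^{-\epsilon\tau} & \tau>0 \quad(\text{close in the LHP, clockwise, pole enclosed})\\ 0 & \tau<0 \quad(\text{close in the UHP, no pole enclosed}),\end{cases}$$
    so that
    $$\Theta(\tau)=-\frac{1}{2\pi i}\lim_{\epsilon\to 0}\int_{-\infty}^{\infty}\frac{e^{-i\omega\tau}}{\omega+i\epsilon}\,d\omega=\lim_{\epsilon\to 0}\begin{cases}e^{-\epsilon\tau} & \tau>0\\ 0 & \tau<0\end{cases}=\begin{cases}1 & \tau>0\\ 0 & \tau<0.\end{cases}$$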
     
    Last edited: Aug 4, 2014
  10. julian

    julian 402
    Gold Member

    The Fourier transform is:

    [itex]\tilde{f} (\omega) = \int_{-\infty}^\infty d x e^{i \omega x} f(x) \quad Eq1[/itex]

    The inverse Fourier transform is:

    [itex]f (x) = {1 \over 2 \pi} \int_{-\infty}^\infty d \omega e^{-i \omega x} \tilde{f} (\omega) \quad Eq2[/itex]

    Say [itex]f(x) = \delta (x)[/itex] then putting this into Eq1 we get

    [itex]\tilde{f} (\omega) = 1[/itex]

    and putting that into Eq2 we get

    [itex]\delta (x) = {1 \over 2 \pi} \int_{-\infty}^\infty d \omega e^{-i \omega x}[/itex]. This is basically the completeness relation.
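    One way to see the completeness relation concretely (a sketch in Python/NumPy, with an arbitrary Gaussian test function): truncating the ##\omega##-integral at ##|\omega|\le L## gives the kernel ##\sin(Lx)/(\pi x)##, which behaves more and more like ##\delta(x)## as ##L## grows.

[code]
# Sketch: (1/2pi) * int_{-L}^{L} e^{-i*w*x} dw = sin(L*x)/(pi*x), a "nascent" delta.
# Integrated against a smooth test function it should approach the value at x = 0.
import numpy as np

def kernel(x, L):
    # sin(L*x)/(pi*x), written via np.sinc to avoid the 0/0 at x = 0
    return (L / np.pi) * np.sinc(L * x / np.pi)

f = lambda x: np.exp(-x**2)                        # arbitrary smooth test function, f(0) = 1
x, dx = np.linspace(-10.0, 10.0, 400001, retstep=True)
for L in [1.0, 3.0, 10.0]:
    print(L, np.sum(kernel(x, L) * f(x)) * dx)     # roughly 0.52, 0.97, 1.00 -> f(0)
[/code]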
     
  11. Apologies, I'm not a mathematician. I just know that ##\delta = \frac{d}{dx}H## and this is the best I could do at showing why. Nobody else was really addressing the OP question.

    Anyways, consider the integral $$\int_{a}^{b}\delta(x)dx$$ where ##a<b<0## or ##b>a>0##. This integral equals zero, right? Even in the case where ##a \to -\infty## and ##b<0##, or ##b \to \infty## and ##a>0##. So doesn't this imply that the integral in my above post must equal one as long as ##a>0##? It is clear that the only nonzero contribution to the integral is at the point ##x=0##.
    $$\int_{-\infty}^{\infty}\delta(x)dx=1$$
    and, for any ##a>0##,
    $$\int_{-a}^{a}\delta(x)dx=\int_{-\infty}^{\infty}\delta(x)dx-\int_{-\infty}^{-a}\delta(x)dx-\int_{a}^{\infty}\delta(x)dx=\int_{-\infty}^{\infty}\delta(x)dx-0-0=1$$
    This is true for arbitrarily small ##a##. I see why integrating over a single point of finite value is guaranteed to equal zero, but integrating over a single point of infinite value is like a zero-times-infinity indeterminate case, isn't it?
     
  12. julian

    julian 402
    Gold Member

    Hello Hertz

    Well, actually the delta function is often defined as the limit of a sequence of functions (each with non-zero values away from [itex]x=0[/itex]), and correspondingly the step function can be defined as the limit of a sequence of functions (see the sketch at the end of this post).

    What you said is what I would have said. I'm not a mathematician; my background is theoretical physics. I just know the definition I gave from scattering theory and was going to mention it as an application to physics.

    Another time I went through the derivation of the Fokker-Planck equation from the Langevin equation, which involved smoothing off the corners of a step function in time - that too had a physics motivation.
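    To illustrate the sequence-of-functions picture mentioned above (a sketch in Python/NumPy; the logistic smoothing is just one arbitrary choice of sequence): smooth the step as ##H_k(x)=1/(1+e^{-kx})##; its ordinary derivative is a bump of total area 1 that concentrates at ##x=0## as ##k\to\infty##.

[code]
# Sketch: the smoothed step H_k(x) = 1/(1 + exp(-k*x)) tends to the step function,
# and its ordinary derivative is a bump that always has area ~1 and concentrates
# near x = 0 as k grows -- a "nascent" delta function.
import numpy as np

x, dx = np.linspace(-10.0, 10.0, 400001, retstep=True)
for k in [1.0, 10.0, 100.0]:
    Hk = 0.5 * (1.0 + np.tanh(0.5 * k * x))       # logistic step, written via tanh to avoid overflow
    dHk = np.gradient(Hk, x)                       # numerical derivative of H_k
    area = np.sum(dHk) * dx                        # ~1 for every k
    near0 = np.sum(dHk[np.abs(x) < 0.5]) * dx      # fraction of the area within |x| < 0.5
    print(k, round(area, 3), round(near0, 3))      # near0 -> 1 as k grows
[/code]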
     
    Last edited: Aug 4, 2014
  13. pwsnafu

    pwsnafu 902
    Science Advisor


    Given ##\mathbb{R}##, we split it up into ##(-\infty,0)##, ##\{0\}## and ##(0, \infty)##. If you define ##f(x) = 0## for ##x \neq 0## and ##f(0)=\infty##, then
    ##\int_{-\infty}^\infty f(x) \, dx##
    ##=\int_{-\infty}^{0}f(x) \, dx + \int_0^0 f(x) \, dx + \int_0^{\infty}f(x) \, dx##
    ##= 0 \cdot \infty + \infty \cdot 0 + 0\cdot \infty##
    In measure theory we define ##0 \cdot \infty## to be zero, so the integral is zero by the definition of the integral.

    This is why there is no real-variable function with the properties of the Dirac delta. The Dirac delta is a generalized function; it does not take point values.

    It's not indeterminate: the product ##0\cdot\infty## is defined to be zero in this branch of mathematics (measure theory and integration).

    The claim that ##\delta = \frac{d}{dx}H## with the ordinary derivative is wrong, and it is where your misunderstanding comes from. ##\frac{d}{dx}## is reserved for the standard definition of the derivative that you learned in high school, and we have ##\frac{d}{dx}H(x) = 0## on the domain ##\mathbb{R}\setminus\{0\}##.

    The Dirac delta is the distributional derivative of the Heaviside step function. You have not learned that definition, so the "proof" you wrote above is invalid because it uses the wrong definition of derivative.
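    For reference, here is the distributional statement written out (a sketch; ##\varphi## denotes any smooth test function vanishing outside a bounded interval). The distributional derivative is defined by moving the derivative onto the test function:
    $$\langle H',\varphi\rangle := -\langle H,\varphi'\rangle=-\int_{-\infty}^{\infty}H(x)\,\varphi'(x)\,dx=-\int_{0}^{\infty}\varphi'(x)\,dx=\varphi(0)=\langle\delta,\varphi\rangle,$$
    which is exactly the defining property of ##\delta##, so ##H'=\delta## in the sense of distributions.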
     
    Last edited: Aug 5, 2014
  14. Ok.

    Well, I was introduced to the delta function as a piecewise-defined function with ##\delta(x)=0## if ##x≠0## and ##\delta(x)=\infty## if ##x=0##, such that ##\int_{-\infty}^{\infty}\delta(x)dx=1##. Yes, I'm aware that this same function can be represented as a limit of a sequence of other functions. After being taught this definition, I was taught that the derivative of the Heaviside step function is the delta function. It was proven to me at that point (given the definitions I was taught), but I have since forgotten what proof was used. My understanding of the Dirac delta function is largely from Griffiths' Electrodynamics book. Anyways, given my level of understanding, or lack thereof, what I said above is my best attempt at addressing the OP question about the derivative of the step function.

    Anyways, regarding your claim that ##0\cdot\infty=0##, I'm still a bit confused. I wasn't aware you could define something like this. If I define ##1\cdot 2=5## it doesn't make it true. I was taught in early calculus that ##0\cdot\infty## is an indeterminate form.

    -edit
    I just read through the section in Griffiths' book where he talks about the 1D delta function again. He does make mention of the fact that it is a 'generalized function', but he introduces it using the definition I was using. Guess I have some learning to do about generalized functions.

    Anyways, even in hindsight I think my post about the derivative of the step function is quite interesting :)
     
    Last edited: Aug 5, 2014
  15. disregardthat

    disregardthat 1,811
    Science Advisor

    I don't think the convention of [itex]\infty \cdot 0 = 0[/itex] comes into this. The fault in the proof is basically that you can't switch the limit sign with the integration sign unless the derivative (the limit) exists and the absolute value of the derivative is bounded on the domain of integration (if I remember correctly). The derivative, in the conventional sense, of the heaviside step function does not exist.

    The proper way to introduce the Dirac delta function is as a distribution (a linear operator on a set of functions). There are other ways, perhaps, but this is the most immediate way to do it in a mathematically correct manner, at least when restricting to functions on real domains. This can be seen as integration, but in a more general sense (integration in the distributional sense). It is not the same as Riemann or Lebesgue integration.

    A distribution is not a function from the reals to the reals, and even when extending to the extended real number line it does not make sense to say that a function [itex]\delta(x)[/itex] satisfies [itex]\int^{\infty}_{-\infty} \delta(x)f(x)dx = f(0)[/itex], unless you properly define what you mean by this integration. If you mean Lebesgue integration, there is no function with codomain the extended real number line that satisfies this, even though Lebesgue integration can handle such functions. Since, in your definition, [itex]\delta(0) = \infty[/itex] and [itex]\delta[/itex] is 0 elsewhere, it is still non-zero only on a null set, which means that integrating it, multiplied by any function, must give 0 (with Lebesgue integration). That follows from the definition of this type of integration; the convention [itex]\infty \cdot 0 = 0[/itex], although useful and consistent in this setting (and still arguably somewhat arbitrary), is not the reason for it. It follows simply from the fact that the function is zero everywhere except on a null set (a null set is a set with Lebesgue measure 0).
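    A toy illustration of the "linear operator on a set of functions" viewpoint (just a sketch in Python/NumPy, not a definition): the delta is the rule ##\varphi\mapsto\varphi(0)##, while an ordinary locally integrable function such as the step acts on test functions through a genuine integral.

[code]
# Sketch: a distribution is a linear functional on test functions.
# The Dirac delta is the rule phi -> phi(0); it is not given by any pointwise function.
import numpy as np

def delta(phi):
    """Dirac delta as a functional: <delta, phi> = phi(0)."""
    return phi(0.0)

def regular(g):
    """An ordinary (locally integrable) function g acts by phi -> int g(x) phi(x) dx."""
    x, dx = np.linspace(-50.0, 50.0, 200001, retstep=True)
    return lambda phi: np.sum(g(x) * phi(x)) * dx

phi = lambda x: np.exp(-x**2)                     # a smooth, rapidly decaying test function
print(delta(phi))                                 # 1.0, i.e. phi(0)
H = regular(lambda x: np.where(x > 0, 1.0, 0.0))  # Heaviside step as a "regular" distribution
print(H(phi))                                     # ~0.886 = sqrt(pi)/2 = int_0^infinity phi
[/code]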
     
    Last edited: Aug 5, 2014
  16. HallsofIvy

    HallsofIvy 40,212
    Staff Emeritus
    Science Advisor

    If you are going to talk about the delta "function" as being the derivative of the step function, then you are going to have to talk in terms of "distributions" or "generalized functions", not just functions.
     