
Time independent schrodinger equation: delta potential

  1. May 28, 2007 #1
    I'm currently reading the section of Griffiths's book on the time-independent Schrödinger equation with a delta-function potential.

    However, I completely dislike how the book deals with the delta distribution.

    Firstly, the book discusses how to solve
    [tex]-\frac{\hbar ^{2}}{2m}\frac{d^{2}\psi }{dx^{2}}-\alpha \delta \left( x\right) \psi =E\psi[/tex]

    for E<0, by solving the equation piece-wise, and noticing that psi must be continuous:
    [tex]\psi=\sqrt k e^{-k|x|}[/tex]

    However, so far alpha has not come into play. The book incorporates alpha by integrating the Schrödinger equation from -epsilon to +epsilon, then taking the limit as epsilon goes to zero.

    However, in doing so, the first integration becomes (according to the book)
    [tex]\lim_{\epsilon\rightarrow 0}\left.\frac{d\psi}{dx}\right|_{-\epsilon}^{+\epsilon}[/tex]

    That is complete nonsense to me: psi is not even twice differentiable in the usual sense. The whole point is that psi is solved as a distribution, and its derivative is a distribution, so how can the book just invoke the fundamental theorem of calculus? If they do that, they should at least realize that the integral is improper (zero is a big discontinuity) and split the evaluation:
    [tex]\lim_{\epsilon\rightarrow 0}\left(\left.\frac{d\psi}{dx}\right|_{-\epsilon}^{0}+\left.\frac{d\psi}{dx}\right|_{0}^{+\epsilon}\right)[/tex]

    If anybody can provide additional insights and a more rigorous treatment of this... I'd greatly appreciate it. I want to understand psi in the distributional sense.
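    For reference, the condition the book's manipulation is driving at (the standard textbook result, stated here for completeness): since [itex]\int_{-\epsilon}^{+\epsilon}E\psi \,dx\rightarrow 0[/itex] for bounded [itex]\psi[/itex], the integration leaves
    [tex]-\frac{\hbar ^{2}}{2m}\left( \psi ^{\prime }\left( 0^{+}\right) -\psi ^{\prime }\left( 0^{-}\right) \right) -\alpha \psi \left( 0\right) =0,[/tex]
    and with [itex]\psi =\sqrt{k}e^{-k|x|}[/itex] this gives [itex]k=m\alpha /\hbar ^{2}[/itex] and [itex]E=-\hbar ^{2}k^{2}/2m=-m\alpha ^{2}/2\hbar ^{2}[/itex].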
    Last edited: May 28, 2007
  3. May 28, 2007 #2
    The derivative of the function has different limits from the left and from the right, and the idea is to figure out that jump by looking at the delta function.

    Also, don't worry about the wave function being a distribution. It's still a function that satisfies a certain differential equation.
  4. May 28, 2007 #3
    Ok, I get a better intuition out of it now... yes, the idea is to figure out the jump at the discontinuity.

    I guess I'm just a little bit uncomfortable treating psi as an ordinary function after my math professor has taught us the "right" way in terms of distributions....
  5. May 28, 2007 #4
    Never trust a math professor to teach you physics. They'll obscure a painfully obvious concept under layers of formalism just so that they can show that it does in fact work the way physicists know it works. The trick is to know when that level is needed, and when it isn't.
  6. May 28, 2007 #5

    Usually, when I deal with such things, I just replace the delta function by a very peaked Gaussian.
    Then I try to get the results.
    For elementary questions, this usually keeps me out of trouble.
    In the example here, everything becomes obvious by keeping that in mind.
    Integrating the SE once around x=0, over an interval sufficiently large compared to the Gaussian width, is easy.
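    To make that concrete, here is a minimal numerical sketch of the idea (my own choices throughout: units [itex]\hbar =m=\alpha =1[/itex], so the exact delta-well ground energy is [itex]E=-1/2[/itex]; the grid, box size, and Gaussian width eps are arbitrary):

```python
import numpy as np

# Units hbar = m = alpha = 1; the exact delta-well ground energy is E = -1/2.
hbar = m = alpha = 1.0
eps = 0.1                              # width of the Gaussian standing in for delta(x)
x = np.linspace(-10.0, 10.0, 1001)
dx = x[1] - x[0]

# Narrow normalized Gaussian: g_eps(x) = g(x/eps)/eps with g(x) = exp(-x^2)/sqrt(pi)
g_eps = np.exp(-(x / eps) ** 2) / (eps * np.sqrt(np.pi))
V = -alpha * g_eps

# Finite-difference Hamiltonian: -(hbar^2/2m) d^2/dx^2 + V(x)
main = hbar**2 / (m * dx**2) + V
off = -hbar**2 / (2.0 * m * dx**2) * np.ones(len(x) - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E0 = np.linalg.eigvalsh(H)[0]          # ground-state energy
print(E0)                              # close to -0.5, shifted slightly by the finite width
```

    Shrinking eps (and refining the grid accordingly) pushes the ground-state energy toward the exact delta-well value.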
  7. May 28, 2007 #6
    I'm not sure about rigor, but at least getting the same result in several different ways gives some confidence that the calculations have some truth behind them. I know three different ways in total to deal with the delta potential in this problem. The epsilon trickery you have encountered is one.

    Another way is to start with a square well potential that has some fixed width R. The solution where psi is a cosine in the well (without zeros in the well), and an exponential outside, is the one that survives in the limit R->0. The boundary conditions lead to some transcendental equations with no closed-form solution. I don't remember precisely, but they are something like [tex]\cos(Ax)=Ax+B[/tex] or similar, but this problem can be avoided by using a parabola approximation to the cosine. Then the boundary conditions can be solved precisely. The parabola approximation leads to a small error, but this error vanishes in the limit R->0, and you should be able to solve for the wave function and its energy level for the delta potential.
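    That limit can also be checked numerically instead of with the parabola approximation. A sketch (my own choices: units [itex]\hbar =m=\alpha =1[/itex], well width R with depth [itex]V_{0}=\alpha /R[/itex] so that [itex]V_{0}R=\alpha[/itex] stays fixed; the matching condition [itex]l\tan (lR/2)=\kappa[/itex] is the standard finite-well one for the even ground state):

```python
import numpy as np
from scipy.optimize import brentq

# Units hbar = m = alpha = 1. Square well of width R and depth V0 = alpha/R,
# so V0 * R = alpha stays fixed as R -> 0. Exact delta-well energy: E = -1/2.
alpha = 1.0

def bound_energy(R):
    """Ground-state energy from the even-state matching condition l*tan(l*R/2) = kappa."""
    V0 = alpha / R
    def match(B):                       # B = |E|, with 0 < B < V0
        l = np.sqrt(2.0 * (V0 - B))     # wavenumber inside the well
        kappa = np.sqrt(2.0 * B)        # decay rate outside
        return l * np.tan(l * R / 2.0) - kappa
    return -brentq(match, 1e-9, V0 - 1e-9)

for R in (1.0, 0.1, 0.01):
    print(R, bound_energy(R))           # energies approach -0.5 as R -> 0
```

    Roughly speaking, the parabola approximation in the post plays the role of the small-argument expansion of the tangent here.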

    However, no doubt the easiest way is to just take the second derivative of the expression [tex]e^{-k|x|}[/tex] by using identities [tex]\partial |x|=2\theta(x)-1[/tex] and [tex]\partial\theta(x)=\delta(x)[/tex] blindly.
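    Spelled out (for unnormalized [itex]\psi =e^{-k|x|}[/itex], and using [itex]\delta (x)h(x)=h(0)\delta (x)[/itex]):

    [tex]\begin{align*}\frac{d}{dx}e^{-k|x|} &= -k\left( 2\theta (x)-1\right) e^{-k|x|},\\ \frac{d^{2}}{dx^{2}}e^{-k|x|} &= -2k\,\delta (x)\,e^{-k|x|}+k^{2}\left( 2\theta (x)-1\right) ^{2}e^{-k|x|}=-2k\,\delta (x)+k^{2}e^{-k|x|}.\end{align*}[/tex]

    Matching the delta terms in the Schrödinger equation then forces [itex]\hbar ^{2}k/m=\alpha[/itex], i.e. [itex]k=m\alpha /\hbar ^{2}[/itex], and the regular part gives [itex]E=-\hbar ^{2}k^{2}/2m=-m\alpha ^{2}/2\hbar ^{2}[/itex].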
  8. Jun 1, 2007 #7

    George Jones


    Here's an attempt at a distributional treatment. The ideas were easier than the write-up seems to indicate. Informal reasoning usually works well, but sometimes it leads one astray. See this thread. Here's Roger Penrose on the question of mathematical rigour in quantum theory: "Quantum mechanics is full of irritating issues of this kind. As the state of the art stands, one must either be decidedly sloppy about such mathematical niceties and even pretend that position states and momentum states are actually states, or else spend the whole time insisting on getting the mathematics right, in which case there is a contrasting danger of getting trapped in 'rigour mortis'. ... I am not at all sure what the correct answer is for making progress in the subject!"

    A distribution is a continuous linear mapping (functional) from [itex]\mathcal{T}[/itex] to [itex]\mathbb{C}[/itex], where [itex]\mathcal{T}[/itex] is the space of test (i.e., sufficiently nice) functions. Any locally integrable function [itex]g[/itex] naturally defines a distribution [itex]G[/itex]:

    [tex]G\left[ f\right] =\int_{-\infty }^{\infty }g\left( x\right) f\left( x\right) dx[/tex]

    for all test functions [itex]f[/itex]. Not all distributions arise in such a fashion.

    Even though the Dirac delta function, defined by [itex]\delta \left[ f\right] =f\left( 0\right)[/itex], is an example of a distribution that doesn't arise in the above manner, it is convenient (and useful!) notationally to pretend that it does, i.e., that [itex]\delta[/itex] is a function such that

    [tex]f\left( 0\right) =\int_{-\infty }^{\infty }\delta \left( x\right) f\left( x\right) dx.[/tex]

    If [itex]g[/itex] is any function such that

    [tex]\int_{-\infty }^{\infty }g\left( x\right) dx=1,[/tex]

    then the family of functions [itex]g_{\varepsilon }\left( x\right) :=g\left( x/\varepsilon \right) /\varepsilon[/itex] defines a family of distributions (as above) [itex]G_{\varepsilon }[/itex], with [itex]G_{\varepsilon }\rightarrow \delta[/itex] (in the distributional or weak sense) as [itex]\varepsilon \rightarrow 0[/itex], i.e.,

    [tex]f\left( 0\right) =\lim_{\varepsilon \rightarrow 0}\int_{-\infty }^{\infty }g_{\varepsilon }\left( x\right) f\left( x\right) dx.[/tex]
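    A quick numerical illustration of this weak limit (my own choices: g a normalized Gaussian, and the test function [itex]f(x)=\cos x[/itex], so [itex]f(0)=1[/itex]):

```python
import numpy as np

def g(x):                 # normalized: the integral of g over the real line is 1
    return np.exp(-x**2) / np.sqrt(np.pi)

def f(x):                 # test function with f(0) = 1
    return np.cos(x)

x = np.linspace(-50.0, 50.0, 200001)
dx = x[1] - x[0]
for eps in (1.0, 0.1, 0.01):
    g_eps = g(x / eps) / eps
    val = np.sum(g_eps * f(x)) * dx    # Riemann sum for the pairing G_eps[f]
    print(eps, val)                    # tends to f(0) = 1 as eps -> 0
```

    For this particular g and f the pairing can be done exactly ([itex]e^{-\epsilon ^{2}/4}[/itex]), which is a handy cross-check on the quadrature.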

    The definitions of differentiation of distributions and multiplication of distributions by somewhat nice functions are both motivated by functions considered as distributions. Let [itex]g[/itex] be a function that has locally integrable derivative [itex]g^{\prime }[/itex], and use [itex]g^{\prime }[/itex] to define the distribution [itex]G^{\prime }[/itex] in the usual way:

    [tex]\begin{align*}G^{\prime }\left[ f\right] &= \int_{-\infty }^{\infty }g^{\prime }\left( x\right) f\left( x\right) dx \\ &= \left[ g\left( x\right) f\left( x\right) \right] _{-\infty }^{\infty }-\int_{-\infty }^{\infty }g\left( x\right) f^{\prime }\left( x\right) dx.\end{align*}[/tex]

    Test functions die at [itex]\pm \infty[/itex], so the first term in the last line is zero, giving [itex]G^{\prime }\left[ f\right] =-G\left[ f^{\prime }\right][/itex]. This definition works for all distributions (including the Dirac delta function), not just those that correspond to functions.
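    As a concrete check of [itex]G^{\prime }\left[ f\right] =-G\left[ f^{\prime }\right][/itex], take [itex]g=\theta[/itex] (the step function), so [itex]G^{\prime }[/itex] should act like the delta; with the test function [itex]f(x)=e^{-x^{2}}[/itex] (my choice), [itex]-G\left[ f^{\prime }\right] =-\int_{0}^{\infty }f^{\prime }(x)\,dx=f(0)[/itex]:

```python
import numpy as np
from scipy.integrate import quad

def fprime(x):            # derivative of the test function f(x) = exp(-x^2)
    return -2.0 * x * np.exp(-x**2)

# g = Heaviside step, so G[f'] is just the integral of f' over (0, inf).
minus_G_fprime, _ = quad(lambda x: -fprime(x), 0.0, np.inf)
print(minus_G_fprime)     # equals f(0) = 1, i.e. the action of delta on f
```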

    Suppose [itex]v\left( x\right) =u\left( x\right) g\left( x\right)[/itex]. Then,

    [tex]V\left[ f\right] =\int_{-\infty }^{\infty }v\left( x\right) f\left( x\right) dx=\int_{-\infty }^{\infty }g\left( x\right) \left[ u\left( x\right) f\left( x\right) \right] dx=G\left[ uf\right].[/tex]

    This motivates the definition of a distribution [itex]\left( uG\right) \left[ f\right] :=G\left[ uf\right][/itex] for [itex]G[/itex] an arbitrary distribution and [itex]u[/itex] a function.

    Now to the problem at hand. Consider the standard textbook form of the Schrodinger equation with delta function potential,

    [tex]-\frac{\hbar ^{2}}{2m}\frac{d^{2}\psi }{dx^{2}}-\alpha \delta \left( x\right) \psi =E\psi .[/tex]

    The second term on the left is a distribution, so all terms in the equation need to be distributions, i.e., try to find a function [itex]\psi[/itex] that has corresponding distribution [itex]\Psi[/itex] (not to be confused with the time-dependent wavefunction), and that satisfies the distributional Schrodinger equation

    [tex]-\frac{\hbar ^{2}}{2m}\Psi ^{\prime \prime }\left[ f\right] -\left( \alpha \psi \delta \right) \left[ f\right] =E\Psi \left[ f\right][/tex]

    for all test functions [itex]f[/itex].

    The term on the right is a distribution that corresponds to a function, and therefore, if the first term on the left also corresponded to a function, then (by rearrangement and linearity of distributions) so would the delta distribution. Since the delta distribution doesn't correspond to a function, the first term on the left can't be a distribution that corresponds to a function. In other words, [itex]\psi[/itex] is twice differentiable as a distribution, but not as a function. Clearly, the only place there is a problem is at [itex]x=0[/itex], so provisionally, assume [itex]\psi[/itex] is continuous everywhere and piecewise smooth on both sides of zero.

    The two different notions of differentiation give two options: 1) turn [itex]\psi[/itex] into the distribution [itex]\Psi[/itex] and use distributional differentiation (defined above) to produce the distributions [itex]\Psi ^{\prime }[/itex] and [itex]\Psi ^{\prime \prime }[/itex]; 2) apply piecewise differentiation of functions to [itex]\psi[/itex] (except at [itex]x=0[/itex]) to produce functions [itex]\phi ^{\prime }[/itex] and [itex]\phi ^{\prime \prime }[/itex], and use these functions to define distributions [itex]\Phi ^{\prime }[/itex] and [itex]\Phi ^{\prime \prime }[/itex]. In order to distinguish between these two options, I've introduced somewhat awkward notation.

    Interesting question: Does [itex]\Psi ^{\prime }=\Phi ^{\prime }[/itex] and [itex]\Psi ^{\prime \prime }=\Phi ^{\prime \prime }[/itex] ?

    [tex]\begin{align*}\Psi ^{\prime }\left[ f\right] &= -\Psi \left[ f^{\prime }\right] \\
    &= -\int_{-\infty }^{\infty }\psi \left( x\right) f^{\prime }\left( x\right) dx\\
    &= -\left( \int_{-\infty }^{0^{-}}\psi \left( x\right) f^{\prime }\left( x\right) dx+\int_{0^{+}}^{\infty }\psi \left( x\right) f^{\prime }\left( x\right) dx\right)\\
    &= -\left( \left[ \psi \left( x\right) f\left( x\right) \right] _{-\infty }^{0^{-}}-\int_{-\infty }^{0^{-}}\psi ^{\prime }\left( x\right) f\left( x\right) dx+\left[ \psi \left( x\right) f\left( x\right) \right] _{0^{+}}^{\infty }-\int_{0^{+}}^{\infty }\psi ^{\prime }\left( x\right) f\left( x\right) dx\right)\\
    &= -\psi \left( 0^{-}\right) f\left( 0^{-}\right) +\psi \left( 0^{+}\right) f\left( 0^{+}\right) +\int_{-\infty }^{0^{-}}\phi ^{\prime }\left( x\right) f\left( x\right) dx+\int_{0^{+}}^{\infty }\phi ^{\prime }\left( x\right) f\left( x\right) dx\end{align*}[/tex]

    Since both [itex]\psi[/itex] and [itex]f[/itex] are continuous,

    [tex]-\psi \left( 0^{-}\right) f\left( 0^{-}\right) +\psi \left( 0^{+}\right) f\left( 0^{+}\right) =0,[/tex]

    and hence [itex]\Psi ^{\prime }\left[ f\right] =\Phi ^{\prime }\left[ f\right][/itex].

    [tex]\begin{align*}\Psi ^{\prime \prime }\left[ f\right] &= -\Psi ^{\prime }\left[ f^{\prime }\right]\\
    &= -\Phi ^{\prime }\left[ f^{\prime }\right]\\
    &= -\left( \int_{-\infty }^{0^{-}}\phi ^{\prime }\left( x\right) f^{\prime }\left( x\right) dx+\int_{0^{+}}^{\infty }\phi ^{\prime }\left( x\right) f^{\prime }\left( x\right) dx\right)\\
    &= -\psi ^{\prime }\left( 0^{-}\right) f\left( 0^{-}\right) +\psi ^{\prime }\left( 0^{+}\right) f\left( 0^{+}\right) +\int_{-\infty }^{0^{-}}\phi ^{\prime \prime }\left( x\right) f\left( x\right) dx+\int_{0^{+}}^{\infty }\phi ^{\prime \prime }\left( x\right) f\left( x\right) dx\\
    &= \left( \psi ^{\prime }\left( 0^{+}\right) -\psi ^{\prime }\left( 0^{-}\right) \right) f\left( 0\right) +\int_{-\infty }^{\infty }\phi ^{\prime \prime }\left( x\right) f\left( x\right) dx\end{align*}[/tex]


    [tex]\Psi ^{\prime \prime }\left[ f\right] =\left( \psi ^{\prime }\left( 0^{+}\right) -\psi ^{\prime }\left( 0^{-}\right) \right) \delta \left[ f\right] +\Phi ^{\prime \prime }\left[ f\right][/tex]

    Use this in the distributional Schrodinger equation:

    [tex]\begin{align*}-\frac{\hbar ^{2}}{2m}\left( \left( \psi ^{\prime }\left( 0^{+}\right) -\psi ^{\prime }\left( 0^{-}\right) \right) \delta \left[ f\right] +\Phi ^{\prime \prime }\left[ f\right] \right) -\left( \alpha \psi \delta \right) \left[ f\right] &= E\Psi \left[ f\right]\\
    -\left( \frac{\hbar ^{2}}{2m}\left( \psi ^{\prime }\left( 0^{+}\right) -\psi ^{\prime }\left( 0^{-}\right) \right) +\alpha \psi \left( 0\right) \right) \delta \left[ f\right] &= \left( E\Psi +\frac{\hbar ^{2}}{2m}\Phi ^{\prime \prime }\right) \left[ f\right]\end{align*}[/tex]

    The distribution on the left side of this equation is a distribution that can't be produced by a function, while the distribution on the right is produced by a function. This is only possible if both sides equal zero. Hence, the left side gives the jump discontinuity in [itex]\psi ^{\prime }[/itex], while the right gives the standard differential equation satisfied by [itex]\psi[/itex] (since [itex]f[/itex] is an arbitrary test function) for [itex]x\neq 0[/itex].
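    Written out, the two vanishing conditions are
    [tex]\psi ^{\prime }\left( 0^{+}\right) -\psi ^{\prime }\left( 0^{-}\right) =-\frac{2m\alpha }{\hbar ^{2}}\psi \left( 0\right)[/tex]
    and, for [itex]x\neq 0[/itex],
    [tex]-\frac{\hbar ^{2}}{2m}\psi ^{\prime \prime }=E\psi ,[/tex]
    which are exactly the jump condition and the piecewise differential equation that the textbook treatment produces.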
    Last edited: Feb 28, 2014