# Klein-Gordon eqn: why dismiss messages at phase velocity

Hi All,

I've heard it said that the superluminal phase velocity of the KG eqn is not a problem for relativistic causality because signals travel at the packet/group velocity, which is the inverse of the phase velocity (c being 1). I'm a bit skeptical of this.

We can strip away all the quantum mystique and just consider the (1 dimensional) KG eqn to describe a guitar string with restoring springs along it, or (to put the same thing the other way around) a row of simple harmonic oscillators coupled together. Either way, we can bang on one end with a delta function and watch waves of all wavelengths spread along it at various superluminal speeds including infinity. To convince ourselves that signals can't travel at the phase velocity, we'll somehow have to prove that all those different components add up to zero at every event that's spacelike separated from the impulse. This would seem like a remarkable coincidence - there may be traveling nodes, but zero for the whole continuous spacetime region???

I'm not sure how to type formulae here, but I think I may have proved the opposite. We can consider an event at a spacelike interval from the impulse. Long waves travel faster under KG, so waves with ##k## from 0 to some ##K## will be influencing ##\psi## at that event, but the shorter ones won't have arrived yet. We know ##K## because we know that ##\omega^2 = k^2 + m^2##. I'm not entirely sure what the amplitudes of the components of the Fourier transform of the delta function are when the velocity isn't trivial, but let's just make them constant. Whatever they are, you wind up with an integral over ##k## of an exponential of ##i## times something complicated but real. It doesn't really matter that it's complicated. The answer will always be some complex number of a known non-zero magnitude.


Avodyne
We can strip away all the quantum mystique and just consider the (1 dimensional) KG eqn to describe a guitar string with restoring springs along it, or (to put the same thing the other way around) a row of simple harmonic oscillators coupled together. Either way, we can bang on one end with a delta function and watch waves of all wavelengths spread along it at various superluminal speeds including infinity. To convince ourselves that signals can't travel at the phase velocity, we'll somehow have to prove that all those different components add up to zero at every event that's spacelike separated from the impulse.
Which is exactly what happens.

This has just been extensively discussed in this thread:

Also, if you consider the one-dimensional massless KG equation with an initial condition ##\varphi(x,0)=f(x)## and ##\dot\varphi(x,0)=0## (where the dot denotes the time derivative), then the exact solution is
$$\varphi(x,t)=\frac12\bigl[f(x-ct)+f(x+ct)\bigr].$$
So if ##\varphi(x,0)## is zero outside some interval, then ##\varphi(x,t)## remains zero everywhere that is spacelike separated from the interval at time ##t##.
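This is easy to check numerically as well. Below is a minimal sketch (the leapfrog scheme, grid sizes, and bump profile are my own illustrative choices, not from the thread) that evolves the massless wave equation from compactly supported data and compares against the d'Alembert solution above:

```python
import numpy as np

# Leapfrog evolution of the massless 1D wave equation u_tt = u_xx from
# compactly supported initial data, compared against the d'Alembert
# solution u = (f(x-t) + f(x+t))/2.  Grid sizes and the bump profile
# are illustrative choices.

def bump(y):
    """Smooth bump supported on |y| < 0.5, with zero slope at the edges."""
    return np.where(np.abs(y) < 0.5, np.cos(np.pi * y) ** 2, 0.0)

L, dx = 10.0, 0.01
x = np.arange(-L, L + dx / 2, dx)
dt = 0.5 * dx                          # Courant factor 0.5 < 1: stable

u_prev = bump(x)                       # u(x,0) = bump, u_t(x,0) = 0
lap = np.zeros_like(u_prev)
lap[1:-1] = (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2]) / dx ** 2
u = u_prev + 0.5 * dt ** 2 * lap       # first step via Taylor expansion

steps = 600
for _ in range(steps):
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    u, u_prev = 2 * u - u_prev + dt ** 2 * lap, u

t = (steps + 1) * dt                   # elapsed time (about 3)
exact = 0.5 * (bump(x - t) + bump(x + t))
outside = np.abs(x) > 0.5 + t + 0.1    # strictly outside the light cone

print("max |u - d'Alembert|:", np.abs(u - exact).max())
print("max |u| outside the light cone:", np.abs(u[outside]).max())
```

The field outside the light cone is zero to within the scheme's discretization error, and tightening the grid shrinks both numbers.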

If it's massless I agree that everything travels at c and we have no problem. The massive field is the interesting case.

I read through that thread, but I still don't find an answer. I'm proposing that at t=x=0 I will flip a coin and bang on a KG field with either a positive or a negative delta function. It seems to me that a spacelike separated observer can tell the difference. I see nothing in that thread to say that those "phase-fronts" will all cancel out.

Avodyne
It's a theorem. Wave fronts do not travel faster than light, independent of the mass-squared (positive, negative, or zero).

Does this theorem have a name?

Demystifier
Gold Member
I read through that thread, but I still don't find an answer. I'm proposing that at t=x=0 I will flip a coin and bang on a KG field with either a positive or a negative delta function. It seems to me that a spacelike separated observer can tell the difference. I see nothing in that thread to say that those "phase-fronts" will all cancel out.
They will. One way to see it is to consider the appropriate Green function, which vanishes outside the light-cone. The appropriate Green function can be expressed as a sum of other "inappropriate" Green functions, so that the terms outside the light-cone cancel out. The full solution of the partial differential equation can be related to the initial condition via the appropriate Green function, so vanishing of the Green function implies also vanishing of the solution.

Of course, mathematically, one can also take a different Green function and in this way obtain a solution which propagates faster than light. However, for such a solution the initial condition is very constrained so the Cauchy problem with arbitrary initial condition is not well posed. In particular, you cannot have a delta-function as an initial condition. Such solutions are generally considered to be unphysical. Typically, the initial conditions related to such solutions cannot even be Fourier expanded.

Demystifier
Gold Member
Does this theorem have a name?
Mathematicians usually refer to such results as theorems about the domain of influence.

Can somebody refer me to some actual maths for this please? I did find the QFT stuff about the Feynman propagator, etc., but that's a totally different model involving a bunch of coupled *quantum* simple harmonic oscillators, whereas the KG equation involves a bunch of coupled *classical* simple harmonic oscillators. I think the ##\psi## in the KG equation can be treated as real, because even if it were complex, the real and imaginary parts would be independent systems in the absence of any first-order derivatives to mix them up. Somebody pointed me to the Kramers-Kronig relations, but in that case the deal seems to be "if you want a causal Fourier transform then you'll have to tolerate some friction in your system"; the KG equation doesn't have any friction, so I guess I'll have to live without the causal Fourier transform. But I still haven't found the maths to prove that those fast waves cancel out in a real, frictionless, 1D KG system.

Demystifier
Gold Member
"If you want a causal fourier transform then you'll have to tolerate some friction in your system",
That's nonsense. You can construct the causal Green function by deforming the contour of integration over frequency, which effectively means that you add an infinitesimal imaginary energy to the theory, which mathematically looks similar to an infinitesimal friction. But this is merely a mathematical trick. You can also construct the causal Green function in a different way, without ever introducing an integration over frequency. Physicists like the formalism with integration over frequency because it makes the formalism manifestly Lorentz invariant, but in the final expression there is no integration over frequency, only an integral over the spatial wave 3-vector ##{\bf k}##.

Demystifier
Gold Member
But I still haven't found the maths to prove that those fast waves cancel out in a real, frictionless, 1D, KG system.
What you need is to find a proof that the causal Green function vanishes outside the light-cone. It is done in many textbooks. See e.g.
http://nptel.ac.in/courses/115106058/greenfunction.pdf
Eq. (9) and comments on it. For an exercise you can try to evaluate the two terms in (8) separately, and show that each of them treated separately does not vanish outside the lightcone. But outside the lightcone, as shown after (9), their sum is zero.
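As a numerical (admittedly non-rigorous) illustration of that cancellation in the 1+1 dimensional case discussed in this thread, one can evaluate the retarded Green function directly from its Fourier representation, with a gentle gaussian damping of the high-##k## modes to tame the oscillatory integral, and compare it with the closed form ##\frac12 J_0(m\sqrt{t^2-x^2})\,\theta(t-|x|)##. All parameter values below are my illustrative choices:

```python
import numpy as np
from scipy.special import j0

# Retarded Green function of the 1+1D KG equation from its Fourier
# representation,
#   G(x,t) = \int dk/2pi  cos(kx) sin(w t)/w,   w = sqrt(k^2 + m^2),
# with a gentle gaussian damping of high k (equivalent to smearing the
# source over a width ~ 1/200 in x), compared with the closed form
#   G = (1/2) J0(m sqrt(t^2 - x^2)) theta(t - |x|).
# All parameter values are illustrative.
m, dk = 1.0, 0.002
k = np.arange(-800.0, 800.0, dk)
w = np.sqrt(k ** 2 + m ** 2)
damp = np.exp(-0.5 * (k / 200.0) ** 2)

def G(x, t):
    return np.sum(np.cos(k * x) * np.sin(w * t) / w * damp) * dk / (2 * np.pi)

inside = G(0.0, 1.0)      # timelike: should match (1/2) J0(m)
outside = G(3.0, 1.0)     # spacelike: the modes should cancel to ~ 0
print("G(0,1) =", inside, " vs (1/2) J0(m) =", 0.5 * j0(m))
print("G(3,1) =", outside)
```

Inside the cone the superluminal phase fronts add up to the Bessel profile; at the spacelike point they cancel to within the numerical smearing.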

Demystifier
Gold Member
I did find the QFT stuff about the Feynman propagator, etc, but that's a totally different model involving a bunch of coupled *quantum* simple harmonic oscillators, whereas the KG equation involves a bunch of coupled *classical* simple harmonic oscillators.
That's not really totally different. You can construct the Feynman Green function for classical KG field, and it coincides with Feynman propagator for quantum KG field.

I've seen that kind of thing but it seems both overcomplicated and unsatisfactory. It hits a pole at the one and only point of interest and forces real quantities to be supposed complex for no other reason. So what's wrong with this approach...

$$\frac{\partial^2 \Psi }{\partial t^2} = \frac{\partial^2 \Psi }{\partial x^2} - m^2 \Psi$$
$$\Psi(k) = C(k).e^{i(\omega t-kx)}$$
where C is just an amplitude for that mode.
$$\omega^2 = k^2 + m^2$$
Phase velocity:
$$v(k) = \frac{\omega}{k} = \sqrt{1+\frac{m^2}{k^2}}$$

If I hit the origin with a very narrow gaussian impulse, I'll get a flat distribution of wavelengths spreading out in both directions, so ##C(k)## is constant. At the event ##t=1##, ##x=v(K)>1##, where waves with ##k=K## have only just arrived:

$$\Psi(k) = e^{i(\omega t - kx)} = e^{i(\omega - k\,v(K))}$$

$$\Psi(k) = e^{i(\sqrt{k^2 + m^2} - k\sqrt{1 + m^2/K^2})} = e^{ik(\sqrt{1 + m^2/k^2} - \sqrt{1 + m^2/K^2})}$$

$$\Psi = \int_0^K dk\, e^{ik(\sqrt{1 + m^2/k^2} - \sqrt{1 + m^2/K^2})}$$

I don't know how to do that integral, but it doesn't look like zero to me. The phase ##k(\sqrt{1 + m^2/k^2} - \sqrt{1 + m^2/K^2}) = \sqrt{k^2+m^2} - k\sqrt{1 + m^2/K^2}## equals ##m## at ##k=0##, falls monotonically to 0 at ##k=K##, and stays positive in between. So the exponential starts at ##e^{im}## on the Argand plane and winds steadily back home to 1. It'll take one hell of a coincidence to make that add up to zero.

BTW, the displacement of the string is the imaginary part of ##\Psi## because that narrow gaussian is an impulse.
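For what it's worth, a quick numerical check (with the arbitrary choices ##m=1##, ##K=10##) confirms that this integral, as written, is far from zero:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of  Psi = \int_0^K dk e^{i phi(k)}  with
#   phi(k) = k (sqrt(1 + m^2/k^2) - sqrt(1 + m^2/K^2))
#          = sqrt(k^2 + m^2) - k sqrt(1 + m^2/K^2),
# using the arbitrary choices m = 1, K = 10.
m, K = 1.0, 10.0

def phase(k):
    # written via sqrt(k^2 + m^2) so that k = 0 is safe
    return np.sqrt(k ** 2 + m ** 2) - k * np.sqrt(1 + m ** 2 / K ** 2)

re, _ = quad(lambda k: np.cos(phase(k)), 0, K)
im, _ = quad(lambda k: np.sin(phase(k)), 0, K)
print("Psi =", re, "+", im, "j,  |Psi| =", np.hypot(re, im))
```

Since the phase stays in ##[0, m]## for these values, every mode contributes a positive real part, so the magnitude comes out of order ##K## rather than zero.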

Avodyne
That you get zero is indeed subtle from the point of view of these integrals.

As an example, here is an exact solution of the KG equation, valid for ##t>0##:
$$\varphi(x,t)=J_0\bigl(m\sqrt{t^2-x^2}\bigr)\theta(t-|x|)$$where ##J_0## is the ordinary Bessel function and ##\theta## is the unit step function. For ##t\ll 1/m##, this is a rectangular pulse (of unit height) that extends from ##x=-t## to ##x=+t##, with its leading edges moving out at the speed of light. Later it turns wavy behind the edges. This solution is the analog of the solution of the wave equation (KG with ##m=0##) of the form ##\theta(x+t)-\theta(x-t)##, which describes a unit height pulse that also expands outward, but with its edges staying sharp forever.

This solution illustrates that local disturbances have leading edges that propagate at the speed of light.

Demystifier
Gold Member
I've seen that kind of thing but it seems both overcomplicated and unsatisfactory. It hits a pole at the one and only point of interest and forces real quantities to be supposed complex for no other reason. So what's wrong with this approach...

$$\frac{\partial^2 \Psi }{\partial t^2} = \frac{\partial^2 \Psi }{\partial x^2} - m^2 \Psi$$
$$\Psi(k) = C(k).e^{i(\omega t-kx)}$$
where C is just an amplitude for that mode.
$$\omega^2 = k^2 + m^2$$
Phase velocity:
$$v(k) = \frac{\omega}{k} = \sqrt{1+\frac{m^2}{k^2}}$$

If I hit the origin with a very narrow gaussian impulse, I'll get a flat distribution of wavelengths spreading out in both directions, so ##C(k)## is constant. At the event ##t=1##, ##x=v(K)>1##, where waves with ##k=K## have only just arrived:

$$\Psi(k) = e^{i(\omega t - kx)} = e^{i(\omega - k\,v(K))}$$

$$\Psi(k) = e^{i(\sqrt{k^2 + m^2} - k\sqrt{1 + m^2/K^2})} = e^{ik(\sqrt{1 + m^2/k^2} - \sqrt{1 + m^2/K^2})}$$

$$\Psi = \int_0^K dk\, e^{ik(\sqrt{1 + m^2/k^2} - \sqrt{1 + m^2/K^2})}$$

I don't know how to do that integral, but it doesn't look like zero to me. The phase ##k(\sqrt{1 + m^2/k^2} - \sqrt{1 + m^2/K^2}) = \sqrt{k^2+m^2} - k\sqrt{1 + m^2/K^2}## equals ##m## at ##k=0##, falls monotonically to 0 at ##k=K##, and stays positive in between. So the exponential starts at ##e^{im}## on the Argand plane and winds steadily back home to 1. It'll take one hell of a coincidence to make that add up to zero.

BTW, the displacement of the string is the imaginary part of ##\Psi## because that narrow gaussian is an impulse.
Nice try, but your argument contains a subtle and deep error. From your assumption that ##\Psi(t=0)## is a ##\delta##-function it does not follow that ##c(k)={\rm const}##. All that follows is that ##|c(k)|={\rm const}##, i.e.
$$c(k)={\rm const}\, e^{i\varphi(k)}$$
As you may guess, for most choices of the phase function ##\varphi(k)## there will be no cancellation. Yet, it is possible to choose ##\varphi(k)## such that the cancellation really happens.

What does such a special choice of the phase function correspond to? You required that ##\Psi(t=0)## be a ##\delta##-function, but you said nothing about ##\Pi(t=0)##, where
$$\Pi=d\Psi/dt$$
is the canonical momentum. On the other hand, the KG equation is a second order equation, so you must specify both ##\Psi(t=0)## and ##\Pi(t=0)## to uniquely determine the solution. In particular, if you want the initial condition to be fully localized, you must require that both ##\Psi(t=0)## and ##\Pi(t=0)## are ##\delta##-functions. But you can easily see that for most choices of the phase function this will not happen. So you must adjust the phase function very carefully to make both initial conditions proportional to a ##\delta##-function. And, the point is, precisely when you make such an adjustment, the wanted cancellation outside the light-cone will take place.
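This can be made concrete numerically. Take ##c(k)## real and gaussian, so ##\Psi## contains positive frequencies only. The real part of ##\Psi## is then the properly matched ##\cos(\omega t)## combination and cancels outside the light-cone, while the imaginary part, the ##\sin(\omega t)## partner, does not (a sketch with my own illustrative parameter values):

```python
import numpy as np

# A positive-frequency-only field with real gaussian c(k):
#   Psi(x,t) = \int dk/2pi  cos(kx) e^{i w t} e^{-k^2 s^2/2}
# The real part is the cos(wt) combination (gaussian-smeared delta
# displacement with zero initial velocity); the imaginary part is the
# sin(wt) partner.  Parameter values are illustrative.
m, s, dk = 1.0, 0.05, 0.001
k = np.arange(-400.0, 400.0, dk)
w = np.sqrt(k ** 2 + m ** 2)
c = np.exp(-0.5 * (k * s) ** 2)

def Psi(x, t):
    return np.sum(np.cos(k * x) * np.exp(1j * w * t) * c) * dk / (2 * np.pi)

z = Psi(2.0, 1.0)          # spacelike separated from the origin
print("Re Psi (matched combination):", z.real)
print("Im Psi (mismatched partner): ", z.imag)
```

At the spacelike point the real part cancels to numerical precision, while the imaginary part is visibly nonzero: the price of taking ##c(k)## constant in phase is a ##\Pi(t=0)## that is not localized.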

@Avodyne: You're telling me things, and I guess you're probably right, but I'm trying to understand where the bug in my logic is.

@Demystifier: I glossed over this, but I did think about it. I'm not requiring that the displacement of the string is a delta function. I'm setting that to zero everywhere. I'm setting the first diff of the displacement to a narrow gaussian. I think that sets my phases - no matter which wavelength, at the origin, the displacement is zero with maximum velocity in the positive direction. I'm reading the displacement off the imaginary part of ##\Psi##. Also, when they say that the FT of a narrow gaussian is a wide gaussian, is it not the case that the phase is determined?

Incidentally, I haven't really justified flattening |##c(k)##|, but I think that we can choose ##K## and ##m## such that the whole ##k(\sqrt{...}...)## expression never exceeds ##\Pi## in which case the contributions to the imaginary part by every mode are all positive (that's if we can nail the phase down as I argued above) and |##c(k)##| wouldn't matter.

Demystifier
Gold Member
I'm not requiring that the displacement of the string is a delta function. I'm setting that to zero everywhere.
You said "If I hit the origin with a very narrow gaussian impulse, I'll get a flat distribution of wavelengths spreading out in both directions". If "narrow" means "narrow in space", then the initial displacement (right after the hit) is a delta function.

Did you perhaps mean "narrow in time, but wide in space"?

It's narrow in space and time. I just whack a point on the string with a little hammer. I keep saying "narrow gaussian" instead of "delta function" because I feel like I know where I am and can just declare the FT to be that wide gaussian, but come to think of it:
$$\delta(x) = \frac{1}{2\pi} \int_{-\infty}^\infty dk\, e^{ikx}$$
basically defines ##\delta## as the thing whose Fourier transform is a constant, so it's easier for me. ##x## is a vector including space and time here.

What I'm saying here is intuitively obvious. Suppose you drop a stone in a pond and the long ripples travel faster than the short ones. Suppose also that the water is initially pushed upwards by the stone displacing it. The first wavefronts to arrive are all pushing upwards, and it really doesn't matter exactly which wavelengths arrive when or how the amplitude depends on the wavelength. The maths above is saying merely that, so why expect it to have a flaw?

Demystifier
Gold Member
It's narrow in space and time.
Then what I said in #15 and #17 can be applied to your case. Your initial impulse picks a special phase function, which in turn is responsible for the cancellation.

Avodyne
If the initial conditions are ##\varphi(x,0)=\delta(x)## and ##\dot\varphi(x,0)=0##, then the solution is
$$\varphi(x,t)=\lim_{K\to\infty}\int_{-K}^{+K}{dk\over 2\pi}\cos(kx)\cos\bigl(\sqrt{k^2+m^2}t\bigr).$$ I haven't been able to find a way to evaluate this integral in closed form (which is quite frustrating). We know that it must vanish for ##|x|>t##, but how to prove this? (Other than the roundabout method of knowing that this is a solution to the KG eq, which must satisfy the domain-of-dependence theorem.)
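One can at least check it numerically by smearing the ##\delta##-function into a narrow gaussian, i.e. damping the integrand by ##e^{-k^2\sigma^2/2}##. For comparison inside the cone: differentiating the ##\frac12 J_0(m\sqrt{t^2-x^2})\,\theta(t-|x|)## solution in time suggests an interior wake ##-\frac{mt}{2}J_1(m\sqrt{t^2-x^2})/\sqrt{t^2-x^2}## plus ##\delta##-spikes on the cone itself. The parameter values below are my own arbitrary choices:

```python
import numpy as np
from scipy.special import j1

# The integral above with the delta function smeared to a gaussian of
# width s, i.e. the integrand damped by e^{-k^2 s^2/2}.
# Parameter values are illustrative.
m, s, dk = 1.0, 0.05, 0.001
k = np.arange(-400.0, 400.0, dk)
w = np.sqrt(k ** 2 + m ** 2)
damp = np.exp(-0.5 * (k * s) ** 2)

def phi(x, t):
    return np.sum(np.cos(k * x) * np.cos(w * t) * damp) * dk / (2 * np.pi)

print("phi(2, 1) =", phi(2.0, 1.0))        # spacelike point: ~ 0
print("phi(0, 1) =", phi(0.0, 1.0))        # inside the cone: nonzero wake
print("-(m/2) J1(m) =", -0.5 * j1(1.0))    # expected wake at x=0, t=1
```

The result is consistent with zero at the spacelike point and matches the ##J_1## wake inside the cone.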

Demystifier
Gold Member
If the initial conditions are ##\varphi(x,0)=\delta(x)## and ##\dot\varphi(x,0)=0##, then the solution is
$$\varphi(x,t)=\lim_{K\to\infty}\int_{-K}^{+K}{dk\over 2\pi}\cos(kx)\cos\bigl(\sqrt{k^2+m^2}t\bigr).$$ I haven't been able to find a way to evaluate this integral in closed form (which is quite frustrating). We know that it must vanish for ##|x|>t##, but how to prove this?
Here is the proof.

First take ##|x|>0##, ##t=0##. In this case the integral reduces to
$$\int_{-\infty}^{+\infty}{dk\over 2\pi}\cos(kx)=0 \;\;\; (Eq. 1)$$

Second consider general ##x## and ##t##. Using
$$\cos a \cos b = \frac{1}{2}[\cos(a-b)+\cos(a+b)]$$
it is easy to show that the integrand on the right-hand side is Lorentz invariant. (To see that, also make the change of integration variable ##k\rightarrow -k## where useful.)

Third, consider any ##x## and ##t## for which ##|x|>t##. For each such pair ##(x,t)## there is a Lorentz frame ##S'## in which ##|x'|>0##, ##t'=0##. We have seen that the integrand is Lorentz invariant, so we can calculate it in ##S'##. Thus for each ##(x,t)## with ##|x|>t##, the integral can be reduced to
$$\int_{-\infty}^{+\infty}{dk\over 2\pi}\cos(k'x') \;\;\; (Eq. 2)$$

The measure ##dk## is not Lorentz invariant, so it is not equal to ##dk'##. We have a Lorentz transformation
$$k=\gamma(k'-\beta\omega')$$
where ##\gamma## and ##\beta## are constants, while ##\omega'=\sqrt{k'^2+m^2}##. Hence
$$dk=\gamma(dk'-\beta d\omega')$$
so (Eq. 2) above splits into two integrals. One takes the same form as (Eq. 1), so it vanishes. The other is proportional to
$$\int_{-\infty}^{+\infty}{dk'\over 2\pi}{k'\over\sqrt{k'^2+m^2}}\cos(k'x')$$
which vanishes because the integrand is antisymmetric. Therefore (Eq. 2) is zero, which finishes the proof. Q.E.D.

Avodyne
Alas, this isn't right. When changing integration variables from ##k## to ##k'##, the transformation of the differential is (in general)
$$dk=\left|{\partial k\over\partial k'}\right|dk'$$ You left out the absolute-value sign. When you properly include it, your final integral does not vanish by symmetry.

EDIT: This is wrong! In fact, ##{\partial k/\partial k'}## is always positive, so the absolute-value sign can be safely omitted; see below.

Demystifier
Gold Member
Alas, this isn't right. When changing integration variables from ##k## to ##k'##, the transformation of the differential is (in general)
$$dk=\left|{\partial k\over\partial k'}\right|dk'$$ You left out the absolute-value sign. When you properly include it, your final integral does not vanish by symmetry.
First, for a one-dimensional integral one does not need to take the absolute value because
$$dk={d k\over d k'} dk'$$
without the absolute value. However, one has to be careful because, in general, the limits of integration for ##k## do not need to be the same as those for ##k'##.

Second, in our case this is not really important, because even if one takes the absolute value, one gets the same final result. Taking the absolute value, the integration measure becomes proportional to
$$\left| 1-\beta \frac{k'}{\omega(k')}\right| dk'$$
But we have
$$|\beta|=\frac{v}{c}<1$$
$$\frac{|k'|}{\omega(k')}<1$$
Consequently
$$1-\beta \frac{k'}{\omega(k')}>0$$
so we may write
$$\left| 1-\beta \frac{k'}{\omega(k')}\right| dk' = \left( 1-\beta \frac{k'}{\omega(k')}\right) dk'$$
In this way, even if one takes the absolute value, it doesn't really change the result.
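A quick numerical sanity check of this positivity (the ##\beta## values and the ##k'## grid below are arbitrary choices):

```python
import numpy as np

# Check that dk/dk' = gamma (1 - beta k'/w(k')) stays positive for
# |beta| < 1, so the map k' -> k is monotonic and the absolute value
# is harmless.  The beta values and the k' grid are arbitrary.
m = 1.0
kp = np.linspace(-1e4, 1e4, 200001)
wp = np.sqrt(kp ** 2 + m ** 2)

mins = []
for beta in (-0.99, -0.5, 0.5, 0.99):
    gamma = 1.0 / np.sqrt(1 - beta ** 2)
    jac = gamma * (1 - beta * kp / wp)
    mins.append(jac.min())
    print(f"beta = {beta:+.2f}:  min dk/dk' = {jac.min():.6f}")
```

The minimum stays strictly positive because ##|k'|/\omega(k')<1## for any ##m>0##.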

Demystifier
Gold Member
First, for a one-dimensional integral one does not need to take the absolute value because
$$dk={d k\over d k'} dk'$$
without the absolute value. However, one has to be careful because, in general, the limits of integration for ##k## do not need to be the same as those for ##k'##.
For pedagogic purposes, let me explain these subtleties in more detail. As the simplest possible case, consider the integral
$$\int_{x=a}^{x=b}dx$$
We assume that ##b>a##, i.e. that the upper limit is larger than the lower limit, which is the standard form of the integral. Consider the change of variable ##x'=-x##. The Jacobian of this transformation is
$$J=\frac{\partial x}{\partial x'}=-1$$
The integral above is equal to
$$\int_{x=a}^{x=b}-dx' = \int_{x'=-a}^{x'=-b}-dx' = \int_{x'=-a}^{x'=-b}J dx'$$
Note that we have ##J## and not ##|J|##. However, we have ##-b<-a##, i.e. the upper limit of integration is not larger than the lower limit, so the integral is not written in the standard form. The standard form is
$$\int_{x'=-b}^{x'=-a}-J dx'=\int_{x'=-b}^{x'=-a}|J| dx'$$
where ##|J|##, rather than ##J##, appears.

This is nothing but a demonstration of the general rule, valid even for multi-dimensional integrals. If all limits of integration are taken in the standard form (the upper limits are larger than the lower limits), then one has to take ##|J|##. (Here ##J## is the determinant of the Jacobian matrix, so ##J## is a real function.) If the limits are not taken in the standard form, then one has to be careful about the ##\pm## sign in front of ##|J|##.

In practice, the standard form is used almost always for multi-dimensional integrals, but not so for one-dimensional integrals.
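A one-line numerical check of this bookkeeping (the integrand and limits are arbitrary choices of mine):

```python
import numpy as np
from scipy.integrate import quad

# \int_a^b f(x) dx under the substitution x' = -x becomes
# \int_{-b}^{-a} f(-x') |J| dx' with |J| = 1, once the limits are put
# back in standard (increasing) order.  Integrand and limits are
# arbitrary choices.
f = lambda x: np.exp(-x) * np.sin(3 * x)
a, b = 0.2, 1.5

direct, _ = quad(f, a, b)
substituted, _ = quad(lambda xp: f(-xp), -b, -a)   # |J| = 1
print(direct, substituted)
```

Both evaluations agree to quadrature precision.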
