I'm trying to make my way through Feynman's "QED: The Strange Theory of Light and Matter". On page 89 he says:

How should I understand this? Is he referring strictly to the speed of light in a specific medium (a transparent medium) rather than c in vacuum? I'm kind of puzzled by amplitudes for light traveling faster than c.

c is the speed of light in vacuum, not in any medium. Light travels slower in a medium.

The amplitude he is talking about is the probability amplitude, which gives the probability for a specific outcome of a measurement. It is not a speed greater than c; in fact, it is not greater than 1. If the probability amplitude for light moving faster than c is 1, then upon measurement we will find that it is moving faster than c. If the probability amplitude is [tex] 10^{-30} [/tex], there is a small but finite chance of finding the light traveling faster than c.

Physicists use probability amplitudes, determined from mathematical equations, to predict the outcome of an experiment.

I thought that the reason physicists rejected the relativistic Schrödinger equation was that it allows wave functions to propagate at speeds greater than the speed of light. I'm surprised to hear that Feynman himself claimed that superluminal propagation would be real :surprised

Perhaps I should return to the popular quantum physics garbage now...

You know, normally students first read the popular stuff and then get into serious studying. In fact, one might benefit from reading the popular stuff after studying serious physics, too?

Yes, since light transfers energy. I am not familiar with the equation he is talking about; admittedly, I have not taken relativistic QM yet. I am sure there is more to it, as there always is in physics.

Why do you find it strange?
In fact there are some effects (EPR correlations) which propagate faster than c.
These effects can't transfer information, but that should be a theorem rather than an axiom.

No, it is not. The absolute square of the probability amplitude(s) gives the probability density, just like the absolute square of the light field gives the intensity. Accordingly, the idea behind Feynman's path integral formalism, used here for the example of the emission and detection of a photon, is that you sum up the probability amplitudes over all possible ways of going from the emitter to the detector, whether they are physically sensible (corresponding to a speed of c) or not (faster than c). It then turns out that the superposition of all of these amplitudes cancels out and leads to a vanishing probability for results which are nonphysical.
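To make that cancellation concrete, here is a toy sketch (my own illustration with made-up numbers, not an actual QED calculation): give each "path" a unit-magnitude amplitude exp(i*S) with a fictitious quadratic action that is stationary at one path, and sum them coherently. Nearly the entire amplitude comes from paths near the stationary point; the contributions far from it interfere destructively and almost completely cancel.

```python
import cmath

def path_sum(ns, alpha, n0):
    """Coherently sum unit-magnitude amplitudes exp(i*S_n), with a toy
    quadratic 'action' S_n = alpha*(n - n0)^2 that is stationary at n0."""
    return sum(cmath.exp(1j * alpha * (n - n0) ** 2) for n in ns)

N, alpha, n0 = 2000, 0.001, 1000
full = path_sum(range(N), alpha, n0)                    # all 2000 "paths"
near = path_sum(range(n0 - 200, n0 + 200), alpha, n0)   # paths near the stationary point
far  = path_sum(range(0, n0 - 200), alpha, n0)          # 800 paths far from it

# The 800 far-away amplitudes point in all directions and nearly cancel,
# while the near-stationary ones add up constructively.
print(abs(full), abs(near), abs(far))
```

Of course this is only the stationary-phase picture in miniature; in the real path integral the "nonphysical" faster-than-c contributions play the role of the far-from-stationary paths here.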

In reality, however, nobody really uses this kind of path integral: summing up an infinite number of probability amplitudes is not really a sensible approach. Usually one uses the sum of all probability amplitudes satisfying some sensible boundary condition.

Sure, for short distances this is not necessarily true. However, on short scales the exact time of emission is usually not defined well enough to draw sensible conclusions about the photon speed differing from c. Usually you can only determine the emission time of a particular photon with an accuracy on the order of the coherence time of the light.

Sure - as long as I am not the one who has to do the calculations.

And yes, it is quite obvious: if these two components (FTL and slower than c) cancel each other, let's not allow them to do so! I know that the result of such experiments is denied, but I am not satisfied with HOW it is denied: all that stuff with group velocity... what group velocity, if we can send one photon at a time?

Yes, I know the stuff from Günter Nimtz; it is more or less a clever kind of pulse shaping.

I do not see a problem with single photons. What he does is basically shape the light pulse such that the front gets less attenuated than the back. For single photons you would usually expect to find the same result. If you fire a triggered single photon, you would expect to find some spread of the delay times between trigger and detection, because the probability distribution for a photon emission event will be some distribution around the trigger time. The sum of these probability amplitudes will also create some "pulse" of finite width which will be attenuated. However, in this case it is not the intensity of a pulse train that is altered; the attenuation happens at the level of the probability amplitude.

If one managed to create an ideal single-photon source which has absolutely no spread in the emission time, I would expect the shift Nimtz interprets as the FTL signal to vanish completely, as there is no pattern to attenuate. This might be another test to prove Nimtz wrong. If you take two indistinguishable photons and make Hong-Ou-Mandel measurements, the coherence time of the two-photon interference pattern gives you roughly the timescale of the uncertainty of the emission time, i.e. what can be interpreted as the single-photon pulse width. If you now take several sources of indistinguishable photons with different coherence times and use one of these photons repeatedly in Nimtz' experimental setup, I would expect the shift he sees to become larger with increasing coherence time (or equivalently, uncertainty in emission time) seen in the HOM two-photon interference pattern.

The attenuation effect that you speak of, though, is generally something you should avoid when doing a fast-light experiment. There is another effect, which can best be described as "rearranging the Fourier components" of the pulse.

By creating a material with near-zero absorption (i.e. no attenuation) and at the same time a very sharp dispersion (for example between two gain peaks), you can essentially tailor the group velocity of a pulse to an arbitrary value, including group velocities faster than c and even negative ones. Just note that this of course doesn't violate special relativity, but rather suggests that one needs to be careful about how exactly to define the "information content of a pulse".
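As a rough numerical illustration (a toy dispersion feature with made-up numbers, not a model of any real material): if a narrow spectral feature makes dn/dω steeply negative while n itself stays very close to 1, the group index n_g = n + ω·dn/dω can come out strongly negative, which corresponds to a negative group velocity v_g = c/n_g.

```python
def n_of(omega, omega0=1.0e15, gamma=1.0e9, strength=1.0e-5):
    """Toy refractive index with an anomalous-dispersion feature at omega0.
    All parameter values are invented purely for illustration."""
    d = omega - omega0
    return 1.0 - strength * d * gamma / (d * d + gamma * gamma)

def group_index(omega, h=1.0e3):
    """Group index n_g = n + omega*dn/domega, via a central finite difference."""
    dn = (n_of(omega + h) - n_of(omega - h)) / (2 * h)
    return n_of(omega) + omega * dn

omega0 = 1.0e15
print(group_index(omega0))           # strongly negative: "fast light" region
print(group_index(omega0 + 1.0e12))  # far from the feature: close to 1
```

Note that even where n_g is negative, n itself deviates from 1 by only about 10^-5 in this toy model, so the pulse is barely refracted; it is the steep slope dn/dω, not the value of n, that does all the work.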