Ibix said:
It's always the same amount of energy (assuming we're not getting to cosmological distances here), just spread out over more area. Classically this would just make the source dimmer and dimmer. In a quantum model (which you do need for a sufficiently faint source) you eventually start to see shot noise from the arrival time statistics of photons, and the received energy in any region fluctuates around the classically predicted average.
In my opinion, this is the key to understanding how light behaves as it spreads out. As those of us who do astrophotography know, light from very distant sources, like stars and galaxies, is so dim that you can open the shutter of your telescope's camera for 20 minutes and receive perhaps a few dozen to a few hundred photons from your target, or even fewer.
The energy of a light wave is quantized into photons. A very, very bright source that is very close will deposit its energy in huge numbers of discrete events (photons), but they are still quantized interactions. This does not change as the source moves to greater and greater distances. The only thing that changes is the number of events per unit time, which eventually falls below one per second. In other words, the number of photons detected goes from many millions or billions (or more) per second for something like the Sun down to fractions of a photon per second for distant light sources. Of course we don't detect fractional photons; we just receive one every minute on average, for example.
So no, a light wave never runs out of energy during its travel (unless all of its energy is deposited into objects by the time it reaches some distance). We just detect fewer and fewer photons per second as the source gets dimmer or the light wave travels farther.
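To put some (made-up) numbers on that "one photon every minute on average": photon arrivals from a steady faint source are well described by Poisson statistics, so a quick sketch in Python shows how the counts in repeated exposures scatter around the expected value even though the source itself never changes. The rate and exposure length here are just illustrative.

```python
import numpy as np

# Hypothetical faint source: on average one photon per minute at the detector.
mean_rate_per_min = 1.0
exposure_min = 20          # a 20-minute exposure, as in the example above
n_exposures = 6

# For a steady source, the count in each exposure is a Poisson random
# variable whose mean is rate * exposure time.
rng = np.random.default_rng(42)
counts = rng.poisson(mean_rate_per_min * exposure_min, size=n_exposures)

print("expected photons per exposure:", mean_rate_per_min * exposure_min)
print("counts actually recorded:     ", counts)
# The recorded counts scatter around 20 from exposure to exposure,
# even though the source itself is perfectly steady; that scatter is the shot noise.
```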
A.T. said:
Are there stars which we can see only due to this fluctuation, and we shouldn't see according to the classical model?
The 'fluctuation' is nothing more than the random arrival of photons around some mean value, which happens regardless of the intensity or distance of the source. In fact, the fluctuations are actually larger for a brighter source. They just don't grow as quickly as the signal does (they grow roughly as the square root of it), which is why brighter sources are easier to see in images.
And it's funny you should ask about the classical model. As far as I understand, in radio astronomy the energy of each photon is so low and the photon numbers so high for most sources that we can treat the radiation classically anyway. There's still noise to deal with, but the incoming signal is virtually continuous, not discrete.
The only real difference between the classical and quantum pictures is that in the classical case you don't have to worry about shot noise, which is the random variation in the number of photons that arrive per unit of time. That is, if you take a picture and you expect 100 photons from your target over the course of the exposure, you can say that the signal is 100 but the shot noise is the square root of that signal, or 10, for a signal-to-noise ratio of 100/10 = 10. So your picture might contain 100 photons from that target, or it might have 90, or 110, or some other nearby number. If you were to measure the number of photons in a series of images, the count would vary from image to image, but with enough images it would average out to 100. A source with an expected signal of 1000 would have an expected noise of the square root of 1000, about 31.6, for a signal-to-noise ratio of 1000/31.6 ≈ 31.6. So a 10x increase in the signal results in only about a 3.2x increase in the noise, raising the SNR.
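Here's a small sketch of that arithmetic in Python (the expected counts of 100 and 1000 are just the illustrative values from above); it also shows the earlier point that the absolute fluctuations grow with brightness while the relative fluctuations shrink:

```python
import numpy as np

rng = np.random.default_rng(1)

for expected_signal in (100, 1000):
    # Shot-noise estimate straight from Poisson statistics:
    noise = np.sqrt(expected_signal)
    snr = expected_signal / noise

    # Check it by "taking" many exposures of the same steady source.
    exposures = rng.poisson(expected_signal, size=100_000)

    print(f"signal={expected_signal:>5}  predicted noise={noise:5.1f}  "
          f"predicted SNR={snr:5.1f}  measured scatter={exposures.std():5.1f}")

# The noise comes out near 10 for a signal of 100 (SNR 10) and near 31.6 for 1000
# (SNR ~31.6): ten times the signal buys only ~3.2x more noise.
```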
This random variation is one of the main reasons dim objects are so hard to detect. A slightly brighter pixel might be a star, or it might just be shot noise in the background. You just don't know until you've taken enough images to 'beat the noise'. In classical physics the signal is continuous, so there is no shot noise. For a signal made up of a sufficiently large number of photons, the noise is so small relative to the signal that we can virtually ignore it.
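To illustrate what 'beating the noise' looks like, here is a minimal sketch (Python/numpy, with made-up photon counts for a faint star sitting on a brighter sky background). Stacking and averaging frames shrinks the relative shot noise roughly as the square root of the number of frames, which is why the star eventually emerges from the background.

```python
import numpy as np

rng = np.random.default_rng(2)

star_per_frame = 4     # expected photons per frame from the faint target (illustrative)
sky_per_frame = 100    # expected photons per frame from the sky background (illustrative)

for n_frames in (1, 16, 256):
    # Each frame records Poisson counts from star+sky and from the sky alone
    # (in practice the sky level is estimated from nearby empty pixels).
    target_pixel = rng.poisson(star_per_frame + sky_per_frame, size=n_frames)
    sky_pixel = rng.poisson(sky_per_frame, size=n_frames)

    # Average the frames, then subtract the estimated background.
    measured_star = target_pixel.mean() - sky_pixel.mean()

    # Shot noise on that background-subtracted average shrinks as sqrt(n_frames).
    noise = np.sqrt((star_per_frame + sky_per_frame) + sky_per_frame) / np.sqrt(n_frames)

    print(f"{n_frames:>4} frames: measured star ~ {measured_star:6.1f} photons, "
          f"expected SNR ~ {star_per_frame / noise:4.1f}")

# With one frame the 4-photon star is buried under ~14 photons of shot noise (SNR < 1);
# after a few hundred stacked frames the SNR climbs to ~4-5 and the detection is believable.
```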