vanhees71 said:
the assumption that it reflects the "intensity" of electromagnetic radiation is very successful, or do you know any example where the measured photon-detection rates deviate from the predictions based on this assumption?
Under natural idealization assumptions that are usually satisfied to good accuracy, the predicted and measured single-electron ionization rate is proportional to the intensity of the electromagnetic radiation field (not to the electric field alone). This is the only unambiguous statement. Since it is valid both in the semiclassical treatment, where there are no photons, and in a full quantum field treatment (where, see below, the notion of a single photon is ambiguous), the interpretation of this statement in terms of photons, though widely used, is questionable.
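For concreteness, the standard first-order result can be written schematically (a sketch; the sensitivity factor $s$ is my notation for the detector-dependent constant) as
$$w(\mathbf{r},t) = s\, I(\mathbf{r},t),$$
where $w$ is the electron-ionization rate and $I$ is the (cycle-averaged) intensity of the radiation field at the detector. The semiclassical treatment (classical field, quantized electrons) and the full quantum treatment give the same form; in the latter, $I$ is the normally ordered expectation $\langle E^{(-)}(\mathbf{r},t)\,E^{(+)}(\mathbf{r},t)\rangle$ in the given field state (see, e.g., the photodetection chapters of Mandel and Wolf).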
Photodetection measures the intensity of the electromagnetic radiation field via the electron-ionization rate; it does not count emitted photons. If the intensity goes down, one simply needs to wait longer to get a reliable measurement of the intensity to a given accuracy. In my opinion, the detector events have no meaning beyond this.
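To quantify the waiting time, assume statistically independent detection events (Poisson statistics, an idealization). A run at mean rate $R$ over time $T$ collects on average $N = RT$ events and determines the rate, hence the intensity, with relative standard deviation
$$\frac{\Delta I}{I} \approx \frac{1}{\sqrt{N}} = \frac{1}{\sqrt{RT}},$$
so relative accuracy $\varepsilon$ requires an observation time $T \approx 1/(\varepsilon^2 R)$, which grows without bound as the intensity, and with it $R$, decreases.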
It is somewhat analogous to measuring the output flow rate of a water faucet by observing from far away (without seeing the details) how many cups are filled in a given time. This works accurately within a short time if a lot of water flows, but if the faucet is only dripping, it takes a long time before the number of filled cups accurately represents the flow rate. I find this analogy helpful, although it is only classical and of limited explanatory value, since it lacks the randomness observed in the quantum case.
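The waiting-time behavior is easy to see numerically. The following minimal sketch (my own illustration, not part of the original argument; the rate value is arbitrary) simulates Poissonian detection events at a low assumed rate and shows the relative scatter of the estimated rate shrinking only like $1/\sqrt{RT}$:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.1  # assumed mean detection rate in events per second (a "dripping" source)

for T in [10, 100, 1000, 10000]:  # observation times in seconds
    # The number of events counted in time T is Poisson distributed with mean true_rate * T.
    counts = rng.poisson(true_rate * T, size=10000)  # 10000 independent simulated runs
    estimates = counts / T                           # rate estimated from each run
    rel_scatter = estimates.std() / true_rate        # relative spread of the estimates
    print(f"T = {T:5d} s: mean estimate = {estimates.mean():.4f}/s, "
          f"relative scatter = {rel_scatter:.3f} (expect {1/np.sqrt(true_rate*T):.3f})")
```

At $R = 0.1$ events per second, one-percent accuracy requires about $N \approx 10^4$ events, i.e., roughly $10^5$ seconds of observation; the dripping faucet indeed demands patience.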
The problem with the notion of a single photon in a quantum field setting is that the quantum field is typically in some Heisenberg multiphoton state, and this single Heisenberg state gives rise to an electron-ionization rate rather than a probability. For example, the field produced by an ordinary laser is in a fixed coherent Heisenberg state, and by waiting long enough, this fixed state gives rise to as many electron ionizations as one likes. This holds even when the coherent state has very low intensity. It is quite unlike what is assumed in typical quantum-optical experiments, where everything is interpreted as if photons were particles just like nonrelativistic electrons, except that they move at the speed of light.
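In formulas (standard coherent-state counting statistics, stated as a sketch with the same sensitivity constant $s$ as above): for a field in a coherent state $|\alpha\rangle$, the number $N$ of detection events registered in a time interval of length $T$ is Poisson distributed,
$$P(N \mid T) = e^{-\mu(T)}\,\frac{\mu(T)^N}{N!}, \qquad \mu(T) = s\,|\alpha|^2\,T.$$
However small the intensity $|\alpha|^2$ is, the mean count $\mu(T)$ grows without bound with $T$: the single fixed Heisenberg state produces, on average, as many ionizations as one is willing to wait for.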
There is a nontrivial interpretation step in going from the former to the latter description, one never analyzed in the literature (as far as I can tell).
The Heisenberg description (QFT) has a straightforward, almost classical interpretation of the state. To go from it to the interaction picture (QM), with its interpretation as an ensemble of particles (prepared, or actually existing, depending on the interpretation of QM used) in a corresponding state, one has to "invent" individual photons with ghostlike properties that are completely unobservable until detection events destroy the ghosts and thereby prove their alleged existence. Extremely weird, this assumed picture of individual photons.

One is left wondering why these ghosts are completely absent from the semiclassical description (which is quantitatively correct in the case of coherent light), although the detection events still herald their existence (if they herald anything). But if they herald nothing in the semiclassical setting, they also shouldn't herald anything in the quantum case. This implies that the ghosts are completely unobservable: they can be eliminated without making any observable difference, and by Ockham's razor they should be eliminated. My language is chosen accordingly.