marcusl said:
Actually, this is not an autocorrelation nor is there a sum. It is a mutual coherence function (the brackets indicate ensemble average), which is a type of cross-correlation.
Indeed, from a formal point of view g^{(1)} is the cross-correlation of some signal at two points/times. However, in most branches of spectroscopy it is also common to call it an autocorrelation in the most relevant case of r_1=r_2. The terminology gets washed out even more when going from field to intensity correlation functions.
However, in optics the experimentally accessible quantity is the total field E. In the mentioned case of two sines this corresponds to E being the superposition of two monochromatic waves.
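To make that concrete, here is a minimal numerical sketch (my own illustration, not from the discussion above) of g^{(1)}(\tau)=\langle E^*(t)E(t+\tau)\rangle/\langle|E|^2\rangle for exactly this total field, i.e. the superposition of two monochromatic waves. The frequencies, amplitudes and time axis are arbitrary illustrative choices, and a long time average stands in for the ensemble average (which is fine here because the process is stationary):

```python
import numpy as np

# Sketch: first-order correlation g1(tau) for the total field
# E(t) = E1*exp(-i*w1*t) + E2*exp(-i*w2*t), two superposed monochromatic waves.
# All parameters below are arbitrary illustrative values.

w1, w2 = 2 * np.pi * 1.00, 2 * np.pi * 1.05   # two nearby angular frequencies
E1, E2 = 1.0, 1.0                              # amplitudes of the two waves

dt = 0.01
t = np.arange(200_000) * dt
E = E1 * np.exp(-1j * w1 * t) + E2 * np.exp(-1j * w2 * t)

def g1(tau_steps):
    """Time-averaged <E*(t) E(t+tau)> / <|E(t)|^2> for a delay of tau_steps*dt."""
    Ea, Eb = E[: -tau_steps or None], E[tau_steps:]
    return np.mean(np.conj(Ea) * Eb) / np.mean(np.abs(E) ** 2)

# |g1| beats at the difference frequency (w2 - w1) but never decays:
# the two-frequency field keeps a well defined phase relation at any delay.
for tau in (0, 10, 20, 30, 40):                # delays in the same time units
    print(f"tau = {tau:4.1f}   |g1| = {abs(g1(int(tau / dt))):.3f}")
```

For equal amplitudes this reproduces |g^{(1)}(\tau)|=|\cos(\Delta\omega\,\tau/2)|: the modulus dips and recovers periodically with the beat, but there is no irreversible decay.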
marcusl said:
Furthermore, you have chosen an expression with no explicit frequency dependence so it does not directly address the OP's question about coherence of waves of differing frequencies.
This is why E is defined as the sum of the two monochromatic waves in the above example. It is always the total field present in optics. I thought that went without saying.
marcusl said:
We can make some observations, however, without digging into the equations. Assume for simplicity propagation in a vacuum. If the two frequencies are not too different, there will be macroscopic regions where the two waves are largely coherent and others where they interfere (as you described for same-frequency waves). The average size of the regions is characterized by a coherence length. Because the waves are propagating, the scene is time-varying as well, characterized by a coherence time. They both depend on the frequency spread, which we can include in an intuitive way through the relations
\Delta t \propto \frac{1}{\Delta\nu}
(you can see a similar equation in my first post) and, with \Delta l=c\Delta t,
\Delta l \propto \frac{c}{\Delta\nu}.
This gives a general sense of the time and distance over which waves of different frequencies are significantly coherent.
This is not necessarily so, and that is exactly the reason why I chose a different definition of coherence. Often, optical coherence time and length are defined as the time and distance over which there is not a fixed, but a well-defined phase difference. The difference becomes clear when one compares a broadened emission line to a superposition of monochromatic emission lines. Simple spontaneous two-level emitters have some characteristic upper-level lifetime, which manifests itself in the linewidth of the emission, leading to the coherence times and lengths you described. Those describe the time delay over which the phase becomes randomized. If you somehow managed to realize such a source with only a few discrete frequencies involved, you would see some characteristic beating in interferometric experiments, which vanishes on the timescale of the coherence time.
However, you could also take an ensemble of perfectly monochromatic emitters and superpose them to realize light with the same spectral width. As all of them are perfectly monochromatic, the long-time phase relationship does not randomize, and you will see the characteristic beating (although it will be very difficult to resolve for many superimposed frequencies) in interferometric experiments at arbitrarily large time delays.
Therefore it is possible to distinguish, by means of g^{(1)}, between multimode coherent light and a source having the same spectral width but a shorter coherence time. Although I agree that this is a very specific difference, I wanted to mention it because the question opening this thread was rather general.
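If it helps, here is a minimal numerical sketch of that distinction (again my own illustration with arbitrary parameters): a single emitter whose phase random-walks (a homogeneously broadened line) versus an ensemble of perfectly monochromatic emitters with fixed random phases, chosen so that both have a comparable spectral width. The same time-averaged estimator of |g^{(1)}(\tau)| decays for the first and revives for the second:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (arbitrary units), assumed for this sketch only.
dt, n_steps = 0.01, 200_000
t = np.arange(n_steps) * dt
w0 = 2 * np.pi * 1.0          # common carrier (angular) frequency
dnu = 0.05                    # spectral width in ordinary-frequency units

def g1(E, tau_steps):
    """Time-averaged <E*(t) E(t+tau)> / <|E|^2> (stand-in for the ensemble average)."""
    Ea, Eb = E[: -tau_steps or None], E[tau_steps:]
    return np.mean(np.conj(Ea) * Eb) / np.mean(np.abs(E) ** 2)

# Case 1: phase-diffusing emitter (broadened line); |g1| decays as exp(-pi*dnu*|tau|),
# i.e. the phase becomes randomized on the coherence time ~ 1/(pi*dnu).
phase = np.cumsum(rng.normal(0.0, np.sqrt(2 * np.pi * dnu * dt), n_steps))
E_broadened = np.exp(-1j * (w0 * t + phase))

# Case 2: several perfectly monochromatic emitters with fixed random phases,
# spread over the same width dnu: similar spectrum, but no phase randomization.
n_modes = 7
freqs = w0 + 2 * np.pi * dnu * (np.arange(n_modes) / (n_modes - 1) - 0.5)
phases = rng.uniform(0, 2 * np.pi, n_modes)
E_multimode = np.exp(-1j * (np.outer(t, freqs) + phases)).sum(axis=1) / np.sqrt(n_modes)

# Broadened: |g1| -> 0 (small residuals come from the finite averaging window).
# Multimode: |g1| beats, then revives to ~1 near tau = 120 (inverse mode spacing).
print(" tau    |g1| broadened   |g1| multimode")
for tau in (0, 15, 30, 60, 90, 120):
    k = int(tau / dt)
    print(f"{tau:5.0f}      {abs(g1(E_broadened, k)):.3f}            {abs(g1(E_multimode, k)):.3f}")
```

With only a handful of discrete modes the revival is easy to see; as noted above, it becomes harder and harder to resolve as the number of superimposed frequencies grows, even though it is still there in principle.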