monish said:
I can't believe you claim that I ascribed such an assumption to you. I specifically noted in my post that you refrained from using a specific area in your calculation; therefore, in attempting to refute your argument, I had to fill in the blanks.
I certainly did not ascribe any assumption to you; I distinctly stated that this is the assumption commonly made by others who wish to promote the photon theory. No, I didn't miss your point: you chose not to make your point. You chose to remain silent on the question of what would be the relevant cross-sectional area for the calculation you presented. Possibly you were being clever; but in these circumstances you can't blame me for supposedly missing your point.
This is definitely not the "prompt emission" argument that is usually made in physics textbooks. I have a paper in front of me by Muthukrishnan, Scully, and Zubairy ("The concept of the photon - revisited", OPN Trends, Oct 2003) in which they make a similar argument. Understand that these are people who are working at the leading edge of semi-classical interpretations...and even THEY use the atomic cross-section in their calculation, which appears to me to be obviously incorrect:
"...if we persist in thinking about the field classically, energy is not conserved. Over a time interfal t, a classical field E brings ain a flux of energy epsilon-E-squared-At to bear on the atom, where A is the atomic cross-section. For short enough time intervals..."
Yes, you can modify this calculation, as you have, by putting in a macroscopic area, but are you quite sure the experiments have been done to back this up? It's not obvious to me that this is so easy to do. How do you turn a light source on and off with that kind of precision? And if you really could do the experiment, and you found that energy wasn't conserved, well...wouldn't that be a problem for the photon theory as well? It's not so obvious to me that you get around the conservation of energy just by saying that light is made of particles.
But the real problem with all arguments of this kind is that they fail to come to grips with the question of why we NEED photons in the first place. Historically, photons were brought in because people couldn't understand some basic physical phenomena involving interaction of radiation and matter. It wasn't a question of picosecond time delays and tiny discrepancies...it was a case of all kinds of things that just "shouldn't have happened AT ALL" if light was a wave. But once the true nature of the electron was understood in 1926, many of these puzzles were cleared up. It turned out that you could explain most or all of these mysteries with the wave theory of light. So what was left? You go down to the very fringes of measurement, where you're able to supposedly isolate "one photon at a time". And there you find them. Supposedly.
It's a big difference from what I was told in high school: you shine ultraviolet light on a piece of metal, and an electric current flows in the circuit. "Shouldn't happen if light is a wave." Well, now it turns out, yes, it should too. So you redesign the experiment, bring in a lot of expensive and complicated instrumentation, and you claim that you can isolate it down to individual photons. (Which I don't think you really can, because everything you measure now has to be interpreted within your theoretical framework. It's not just a deflection on an ammeter anymore.) But even assuming you do find your single photons at the fringes of measurement...what did you really need them for? Everything you originally said you couldn't do without photons...the photo-electric effect, the Compton effect, the laser, you name it...it turns out you can get it from ordinary e-m radiation. Give or take a few picoseconds. So what do we really need photons for?
Well, since you reacted (in post #27) to the argument I presented by saying:
"OK, we know where this argument leads. The spread-out wave energy is much too diffuse to be able to concentrate itself in the tiny cross-sectional area of an electron in such a short time as is observed experimentally. Usually people use the cross-sectional area of an atom to justify this claim.",
you did, at least implicitly, ascribe such an assumption also to me.
Also, I obviously belong among those "who wish to promote the photon theory", and that is why you wanted to refute the argument presented in my post, not because I "chose not to make" my point. Obviously, my point in my first post (#25) was that the COMPLETE 'prompt emission argument' includes the conservation of energy setting the bound that a time "...
T > W / (A c epsilon_0 [E^2])
is needed for the absorption of the quantum of energy (h nu), which would exceed the work function W and thus enable the emission of electrons to start. But this is NOT found experimentally. I do not know what the experimental limit is at present, but the absence of time lags between the arrival of the incident light beam and the emission of photoelectrons has long been an established experimental fact.
Therefore, if one insists on the classical EM field, the photoelectric effect would imply the non-conservation of energy. On the other hand, the quantized EM field, i.e., the photon concept, does not have the above problem, as the absorption of the quantum of energy happens "at once", when the electron and the EM field quantum interact."
(end of my quote from #25)
For this argument it is essential that, for the classical, continuous EM field, [E^2] can, at least in principle, take arbitrarily small values. Since I did not do any computations with concrete values, there was no need to fix A. But you obviously thought ("OK, we know where this argument leads..." - in #27) that this argument, i.e., the bound on the time lag based on W/(A c epsilon_0 [E^2]), can in practice work only with a "tiny cross-sectional area" A, and that, in your opinion, was the mistake, because you advocated a much larger A, from "CLASSICAL absorption cross-sections" of some "10,000,000 angstroms squared" (= 10^-13 m^2) to hints that an area "as big as the whole piece of metal" may be relevant.
Therefore, in my second post (#36) I did concrete calculations which showed that even for this latter, extreme choice of A being the surface of "the whole piece of metal", W/(A c epsilon_0 [E^2]) can set the limit T > 10^-9 seconds.
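To make the numbers explicit: the energy a classical wave delivers through an area A in a time T is (c epsilon_0 [E^2]) A T, so conservation of energy requires T > W/(A c epsilon_0 [E^2]). A minimal sketch of that calculation, assuming a work function of W ~ 2 eV (an illustrative value; the posts fix only A and E):

```python
# Sketch of the conservation-of-energy bound T > W / (A c epsilon_0 [E^2])
# with the values from post #36. W ~ 2 eV is an assumed illustrative
# work function; the post itself fixes only A = 1 mm^2 and E = 1 to 0.1 V/m.

eV = 1.602e-19          # J per electronvolt
c = 2.998e8             # speed of light, m/s
epsilon_0 = 8.854e-12   # vacuum permittivity, F/m

W = 2.0 * eV            # assumed work function, J
A = 1e-6                # the "extreme" macroscopic area: 1 mm^2, in m^2

for E in (1.0, 0.1):    # field amplitudes, V/m
    T = W / (A * c * epsilon_0 * E**2)   # minimum accumulation time, s
    print(f"E = {E} V/m  ->  T > {T:.1e} s")

# E = 1.0 V/m gives T > ~1.2e-10 s; E = 0.1 V/m gives T > ~1.2e-8 s,
# bracketing the T > 10^-9 s limit quoted above.
```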
To that you answer "but are you quite sure the experiments have been done to back this up?" and "How do you turn a light source on and off with that kind of precision?" Since my interest in this is only pedagogical, I do not want to search for references, BUT it is enough to recall that in post #18, ZapperZ pointed out that nowadays some experiments (on metals) are so precise that they measure the finite response time to be on a femtosecond (10^-15 s) scale!
That is a factor of 10^6 shorter than what I got for W/(A c epsilon_0 [E^2]) in my second post (#36) with A = 1 mm^2 and E = 1 to 0.1 V/m. This means that, again with the macroscopic, extremely large area A = 1 mm^2, one can get W/(A c epsilon_0 [E^2]) above the established femtosecond scale already with [E^2] not much below 10^6 V^2/m^2 ... and mind you, that assumes A = 1 mm^2, which is extremely large. This assumption is useful because it shows that the conservation of energy implies, through W/(A c epsilon_0 [E^2]), the quantization of the EM field even if one adopts your most EXTREME viewpoint on A.
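One can also invert the bound to see what intensity the femtosecond scale tolerates; again a sketch with the same assumed W ~ 2 eV:

```python
# Sketch: the largest [E^2] for which T = W / (A c epsilon_0 [E^2]) still
# exceeds the femtosecond response time from post #18 (W ~ 2 eV assumed).

eV, c, epsilon_0 = 1.602e-19, 2.998e8, 8.854e-12
W, A = 2.0 * eV, 1e-6      # J; 1 mm^2 in m^2
T_fs = 1e-15               # femtosecond scale, s

E2_crit = W / (A * c * epsilon_0 * T_fs)   # critical [E^2], V^2/m^2
print(f"T exceeds {T_fs} s for [E^2] below ~{E2_crit:.1e} V^2/m^2")
# -> ~1.2e5 V^2/m^2, within an order of magnitude of 10^6 V^2/m^2,
# consistent with the "not much below 10^6" estimate above.
```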
On the other hand, the viewpoint of Muthukrishnan, Scully, and Zubairy on A may well be correct. You correctly point out that they are "at the leading edge of semi-classical interpretations". Well, as experts in using semi-classical interpretations, they most probably have strong arguments for using the atomic cross-section. I do not know what is in the paper you quote, but even as a non-expert I can think of situations where no macroscopic area A could conceivably be justified: it is not obligatory to study the photoelectric effect on metal plates/samples. If one tries to eject electrons from some rarefied noble gas, one cannot argue that the electron wave functions extend beyond microscopic sizes. Although I showed above that the conservation-of-energy argument works even for a macroscopic A, so that a microscopic A is not absolutely necessary, it is clear that a microscopic A (i.e., one several to many orders of magnitude smaller than A = 1 mm^2) enables
W / (A c epsilon_0 [E^2])
to exceed the nanosecond, and especially the nowadays-relevant femtosecond, time scale for much larger intensities [E^2] than those considered above in the A = 1 mm^2 case.
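A sketch of this scaling, comparing the three areas mentioned in the thread (still with the assumed W ~ 2 eV):

```python
# Sketch: how shrinking A raises the intensity [E^2] at which the bound
# T = W / (A c epsilon_0 [E^2]) still exceeds the femtosecond scale.
# W ~ 2 eV assumed as before; the areas are the 1 mm^2 case above, the
# "10,000,000 angstroms squared" classical cross-section (10^-13 m^2),
# and an atomic cross-section of order 1 angstrom^2 (10^-20 m^2).

eV, c, epsilon_0 = 1.602e-19, 2.998e8, 8.854e-12
W, T = 2.0 * eV, 1e-15                     # J; femtosecond scale, s

for label, A in (("macroscopic (1 mm^2)   ", 1e-6),
                 ("classical cross-section", 1e-13),
                 ("atomic cross-section   ", 1e-20)):
    E2_crit = W / (A * c * epsilon_0 * T)  # largest admissible [E^2], V^2/m^2
    print(f"{label}: A = {A:.0e} m^2 -> [E^2] < {E2_crit:.1e} V^2/m^2")

# Every factor-of-10 reduction in A admits a 10x larger [E^2],
# which is the scaling argument made in the paragraph above.
```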