As a non-physicist enthusiast with a decent math background, I have been trying to learn a bit about the origins of quantum theory, and I soon ran into some questions which I hope this community can answer.

So, Planck tried to model blackbody radiation where Rayleigh and Jeans had failed. They had assumed, following Boltzmann statistics (equipartition), that the average energy of each oscillating charge is kT, independent of frequency. This assumption made the predicted spectral energy density diverge at high frequencies, the

*ultraviolet catastrophe*: the ultimate failure of classical physics to explain atomic phenomena. Planck, in desperation, tried to see what happens if the energy is not frequency-independent but instead proportional to frequency and restricted to discrete steps, E = nhf, and arrived at his law. The rest is history.

Now I'm wondering: why did he choose this simple relation, E = nhf? Why did he think the energy had to be discretized? Wouldn't he have gotten the same outcome had he assumed a continuous linear dependence? Or a polynomial one? Or something else? From what I've read, there was no theoretical assumption at the time that would reasonably justify this "discretization"...
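
To make my question concrete: if I repeat the Boltzmann average with discrete levels E_n = nhf instead of a continuous energy, the mean energy per oscillator comes out as hf/(e^(hf/kT) − 1) rather than the classical kT. Here is my own toy numerical check in Python (working in units of kT; the function names are just mine, not anything standard):

```python
import math

def mean_energy_discrete(x, n_max=2000):
    """Boltzmann average over discrete levels E_n = n*h*f.

    x = hf/kT; the result is <E>/kT. The sum is truncated at n_max terms,
    which is plenty for the values of x used below.
    """
    weights = [math.exp(-n * x) for n in range(n_max)]
    Z = sum(weights)  # partition function
    return sum(n * x * w for n, w in enumerate(weights)) / Z

def mean_energy_planck(x):
    """Closed form of the same average: (hf/kT) / (exp(hf/kT) - 1)."""
    return x / math.expm1(x)

# Low frequency (hf << kT): the discrete average approaches the classical
# equipartition value kT (i.e. 1.0 in these units), so nothing changes there.
print(mean_energy_discrete(0.01))   # ~0.995

# High frequency (hf >> kT): the discrete average is exponentially
# suppressed instead of staying at kT -- this is what tames the divergence.
print(mean_energy_discrete(10.0))   # ~4.5e-4
```

So numerically the discreteness only bites at high frequency, which is exactly where the classical prediction blew up; a continuous energy, even one linear in f, would still average to kT.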