# How did Planck do it

## Main Question or Discussion Point

To solve the blackbody radiation problem Planck had to find the partition function, or average energy, of a harmonic oscillator. But where did he get the inspiration to change the energy of the harmonic oscillator from $$E = \frac{1}{2} m v^2 + \frac{1}{2} k x^2$$ to simply $$E = n h \nu$$
This seems like a huge leap, and I can't imagine what persuaded him to even attempt changing the energy this way. Was there a hint somewhere that suggested the energy would take such a simple form? No one explains his motivation - was it just luck and playing with equations?

I don't know.
These look like they might be relevant:

http://en.wikipedia.org/wiki/Planck's_law
Max Planck originally produced this law in 1900 (published in 1901[12]) in an attempt to improve upon the Wien approximation, published in 1896 by Wilhelm Wien, which fit the experimental data at short wavelengths (high frequencies) but deviated from it at long wavelengths (low frequencies). The Rayleigh-Jeans law (first published in incomplete form by Rayleigh in 1900[13]) fit well in the complementary domain (long wavelength, low frequency). Planck found that the above function, Planck's function, fitted the data for all wavelengths remarkably well.

Contrary to another myth, Planck did not derive his law in an attempt to resolve the "ultraviolet catastrophe", the name given by Paul Ehrenfest to the paradoxical result that the total energy in the cavity tends to infinity when the equipartition theorem of classical statistical mechanics is (mistakenly) applied to black body radiation. Planck was aware that the free electromagnetic field, in a cavity with perfectly reflecting walls and containing no ponderable matter to transduce energy between frequency components, cannot exchange energy between frequency components[19] and so is not within the scope of the classical principle of equipartition of energy, which it accordingly neither obeys nor violates.
The Wien approximation may be derived from Planck's law by assuming hν >> kT. In the opposite limit of small frequency (hν << kT), blackbody radiation expressed in terms of frequency ν = c/λ reduces to the Rayleigh-Jeans form.

http://en.wikipedia.org/wiki/Wien_approximation
Wien's approximation (also sometimes called Wien's law or the Wien distribution law) is a law of physics used to describe the spectrum of thermal radiation (frequently called the blackbody function). This law was first derived by Wilhelm Wien in 1896.[1][2] The equation does accurately describe the short wavelength (high frequency) spectrum of thermal emission from objects, but it fails to accurately fit the experimental data for long wavelength (low frequency) emission.

http://en.wikipedia.org/wiki/Rayleigh-Jeans_law
The Rayleigh–Jeans law agrees with experimental results at large wavelengths (or, equivalently, low frequencies) but strongly disagrees at short wavelengths (or high frequencies).

http://en.wikipedia.org/wiki/Ultraviolet_catastrophe
The ultraviolet catastrophe results from the equipartition theorem of classical statistical mechanics which states that all harmonic oscillator modes (degrees of freedom) of a system at equilibrium have an average energy of kT.

According to classical electromagnetism, the number of electromagnetic modes in a 3-dimensional cavity, per unit frequency, is proportional to the square of the frequency. This therefore implies that the radiated power per unit frequency should follow the Rayleigh-Jeans law and be proportional to frequency squared. Thus, both the power at a given frequency and the total radiated power are unlimited.
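To make the divergence concrete, here is a minimal numeric sketch of the argument just quoted (standard SI constants; the room temperature is an arbitrary choice):

```python
import math

# A minimal numeric sketch of the ultraviolet catastrophe described above:
# under equipartition every mode carries kT, the Rayleigh-Jeans mode density
# grows as nu^2, and so the integrated energy density diverges as the
# frequency cutoff is raised (it scales as the cube of the cutoff).
k_B = 1.380649e-23   # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s
T = 300.0            # an arbitrary room temperature, K

def rayleigh_jeans(nu):
    """Spectral energy density u(nu) = 8*pi*nu^2*k_B*T / c^3."""
    return 8 * math.pi * nu**2 * k_B * T / c**3

def total_up_to(nu_max, steps=100_000):
    """Crude midpoint-rule integral of u(nu) from 0 to nu_max."""
    d = nu_max / steps
    return sum(rayleigh_jeans((i + 0.5) * d) for i in range(steps)) * d

# Doubling the cutoff multiplies the total by ~2^3 = 8; no finite answer exists.
u1 = total_up_to(1e14)
u2 = total_up_to(2e14)
```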
http://en.wikipedia.org/wiki/Equipartition_theorem#Potential_energy_and_harmonic_oscillators
Equipartition applies to potential energies as well as kinetic energies: important examples include harmonic oscillators such as a spring, which has a quadratic potential energy

http://en.wikipedia.org/wiki/Thermal_energy
Microscopically, the thermal energy may be related to the kinetic energy of its constituent particles, which may be atoms, molecules, electrons, or particles in plasmas. Thermal energy is the collective energy of the individually random, or disordered, motion of particles in a large ensemble. The thermal energy is equally partitioned between all available quadratic degrees of freedom of the particles. These degrees of freedom may include pure translational motion in fluids, normal modes of vibrations, such as intermolecular vibrations or crystal lattice vibrations, or rotational states.
http://en.wikipedia.org/wiki/Larmor_formula#Relativistic_Generalisation
In physics, in the area of electrodynamics, the Larmor formula is used to calculate the total power radiated by a nonrelativistic point charge as it accelerates. It was first derived by J. J. Larmor in 1897, in the context of the wave theory of light.

When accelerating or decelerating, any charged particle (such as an electron) radiates away energy in the form of electromagnetic waves. For velocities that are small relative to the speed of light, the total power radiated is given by the Larmor formula.

http://en.wikipedia.org/wiki/Abraham-Lorentz_force
In the physics of electromagnetism, the Abraham-Lorentz force is the recoil force on an accelerating charged particle caused by the particle emitting electromagnetic radiation. (The force is proportional to ȧ, the time derivative of the acceleration.)

In many cases, attenuation is an exponential function of the path length through the medium. In chemical spectroscopy, this is known as the Beer-Lambert law
The selective absorption of infrared (IR) light by a particular material occurs because the selected frequency of the light wave matches the frequency (or an integral multiple of the frequency) at which the particles of that material vibrate. Since different atoms and molecules have different natural frequencies of vibration, they will selectively absorb different frequencies (or portions of the spectrum) of infrared (IR) light.

Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When IR light of these frequencies strikes an object, the energy is either reflected or transmitted.

http://en.wikipedia.org/wiki/Mean_free_path

$$\ell = (n\sigma)^{-1}$$

where ℓ is the mean free path, n is the number of target particles per unit volume, and σ is the effective cross-sectional area for collision.

In kinetic theory the mean free path of a particle, such as a molecule, is the average distance the particle travels between collisions with other moving particles. The formula $$\ell = (n\sigma)^{-1}$$ still holds for a particle with a high velocity relative to the velocities of an ensemble of identical particles with random locations. If, on the other hand, the velocities of the identical particles have a Maxwell distribution of velocities, the following relationship applies: $$\ell = \frac{k_B T}{\sqrt{2}\,\pi d^2 p}$$

where k_B is the Boltzmann constant, T is the temperature, p is the pressure, and d is the diameter of the gas particles.
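As a quick numeric illustration of the two mean-free-path formulas quoted above (the molecular diameter below is an assumed, illustrative value for air, not an authoritative one):

```python
import math

# A quick numeric illustration of the mean-free-path formulas quoted above,
# using an assumed effective molecular diameter for air (d ~ 3.7e-10 m is
# illustrative, not an authoritative value).
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 293.0               # temperature, K
p = 101325.0            # pressure, Pa
d = 3.7e-10             # assumed effective molecular diameter, m

n = p / (k_B * T)                  # number density from the ideal gas law
sigma = math.pi * d**2             # hard-sphere collision cross section
l_fast = 1.0 / (n * sigma)         # fast particle through static targets
l_maxwell = l_fast / math.sqrt(2)  # all particles Maxwell-distributed

# l_maxwell comes out at a few tens of nanometres, the familiar order of
# magnitude for air at atmospheric pressure.
```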

Cthugha
Planck was somewhat inspired by the earlier works of Boltzmann who was working on the meaning of entropy and interpreted it as a microscopic quantity. Although that formulation was not used at that time, he already had the basic result that the entropy of a macroscopic state is given by the log of the number of possible microstates compatible with the macroscopic state.

Planck intended to carry this concept over to the blackbody radiation problem. At that time the only somewhat valid approach to this problem was one by Wien, which gave correct results only for short wavelengths. However, in order to introduce countable microstates, Planck needed something which is indeed countable, not continuous. So he assumed that energy is quantized in multiples of some unknown constant, which he deduced basically by fitting his resulting formula to the experimental data. In principle he made two mistakes while doing so: he used a Maxwell-Boltzmann distribution instead of a Bose-Einstein distribution, and he did not include the vacuum contribution. Of course, neither was known at the time. Fortunately, making both of these errors at the same time cancels them out and leads to the correct constant.
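The payoff of that counting step can be sketched numerically: Boltzmann-weighting discrete levels E_n = nhν reproduces Planck's mean oscillator energy hν/(e^(hν/kT) − 1), which tends to the equipartition value kT at low frequency and is exponentially suppressed at high frequency.

```python
import math

# A sketch of what Planck's countable quanta buy: Boltzmann-weighting the
# discrete levels E_n = n*h*nu gives a mean oscillator energy
# h*nu/(exp(h*nu/kT) - 1), which recovers equipartition's kT at low
# frequency and is exponentially suppressed at high frequency.
def mean_energy_discrete(x, n_max=2000):
    """Mean of E_n = n*x (x = h*nu/kT, energies in units of kT)."""
    weights = [math.exp(-n * x) for n in range(n_max)]
    Z = sum(weights)
    return sum(n * x * w for n, w in zip(range(n_max), weights)) / Z

def mean_energy_closed(x):
    """Planck's closed form x/(e^x - 1), in units of kT."""
    return x / math.expm1(x)

# The brute-force sum and the closed form agree; for x << 1 the mean energy
# approaches 1 (i.e. kT), for x >> 1 it vanishes like x*e^(-x).
```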

Planck's law is

$$B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/kT} - 1}$$

At high frequencies this reduces to the Wien form

$$B_\nu(T) \approx \frac{2h\nu^3}{c^2}\,e^{-h\nu/kT}$$
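The reduction can be checked numerically; a sketch, picking a frequency and temperature for which hν/kT ≈ 16:

```python
import math

# Numerically checking the high-frequency reduction described above: in the
# regime h*nu >> k*T the Planck and Wien expressions agree to ~exp(-h*nu/kT).
h = 6.62607015e-34   # Planck constant, J s
k_B = 1.380649e-23   # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s

def planck(nu, T):
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k_B * T))

def wien(nu, T):
    return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (k_B * T))

T = 300.0
nu = 1e14                      # here h*nu/kT is about 16
ratio = wien(nu, T) / planck(nu, T)
# ratio differs from 1 by roughly e^-16, i.e. about one part in 1e7
```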

http://en.wikipedia.org/wiki/Mean_free_path
if we start with N particles moving at a certain speed s through a gas, then
the number that haven't yet collided with a gas molecule by time t is N·e^(-t) (measuring time in units of the mean collision time).
The number of these that are colliding (for the first time) at any given time t is directly proportional to the number of uncollided particles left, N·e^(-t).

Taking the distance that each travels before colliding (for the first time) as a kind of wavelength and converting it to a frequency f = 1/t, I think this gives (1/f)·N·e^(-1/f).
According to Wolfram Alpha this is:
Plot[1/(x*E^(1/x)), {x, 0, 6}]

this can be thought of as how often any given path length occurs for any single particle over that particle's lifetime.
Extending this to all particles in the gas is therefore just a matter of multiplying by the number of particles.
The distance a particle travels before colliding does not depend on its speed (temperature), but the resulting frequency does.
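The proposed function can be probed numerically; a sketch sampling the same 0 < x ≤ 6 range as the Wolfram Alpha plot:

```python
import math

# Sampling the function proposed above, f(x) = 1/(x * e^(1/x)), over the
# same 0 < x <= 6 range as the Wolfram Alpha plot: it rises from zero,
# peaks at x = 1, and then falls off slowly, so one path length
# (frequency) dominates the distribution.
def f(x):
    return math.exp(-1.0 / x) / x

xs = [0.01 * i for i in range(1, 601)]
peak_x = max(xs, key=f)
# the maximum of the sampled grid sits at x = 1
```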

I don't know if it matters, but
since the collisions are sudden it looks to me as though we can treat each single frequency above as a square wave.

the Fourier series of a square wave is:

$$\frac{4}{\pi}\sum_{n=1,3,5,\dots}\frac{\sin(nx)}{n}$$

which gives something like
∑sin(f)/f for odd f
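That series is easy to check in code: even a truncated sum already sits close to +1 in the middle of the positive half-cycle of a unit square wave.

```python
import math

# The square-wave Fourier series above in code: a unit square wave is
# (4/pi) * sum over odd n of sin(n*x)/n, and a truncated sum already hugs
# +1 / -1 away from the jumps.
def square_wave_partial(x, n_terms=200):
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

val = square_wave_partial(math.pi / 2)  # middle of the +1 half-cycle
```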

OK, that's not getting anywhere.

Planck's law is

$$B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/kT} - 1}$$
notice that e is raised to the power ℎν/kT.
Both ℎν and kT have units of energy.
kT is twice the average energy per quadratic degree of freedom, according to the equipartition theorem:
http://en.wikipedia.org/wiki/Equipartition_theorem

ℎν/kT would presumably therefore be an energy divided by the energy per degree of freedom.

correction

k_B = E_H/T_H = 2hcR_∞/T_H in Hartree atomic units

the unit of temperature in Hartree atomic units is (not surprisingly)

T_H = E_H/k_B

h = k_B·T_H/(2cR_∞) = E_H/(2R_f), where R_f = cR_∞ is the Rydberg frequency

so 2h·R_f = E_H
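These relations between constants are easy to check numerically; a sketch using CODATA values:

```python
# A numeric check of the relations above using CODATA values: the Hartree
# energy equals 2*h*c*R_inf, and the Hartree unit of temperature is E_H/k_B.
h = 6.62607015e-34          # Planck constant, J s
c = 2.99792458e8            # speed of light, m/s
R_inf = 10973731.568        # Rydberg constant, 1/m
k_B = 1.380649e-23          # Boltzmann constant, J/K
E_H = 4.35974472e-18        # Hartree energy, J

E_H_check = 2 * h * c * R_inf       # matches E_H to the precision quoted
T_hartree = E_H / k_B               # ~3.16e5 K
```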

I think this is what you are looking for.

Last edited:
http://whatis.techtarget.com/definition/0,,sid9_gci869619,00.html

Hartree energy

The Hartree energy is a physical constant equal to twice the binding energy of the electron in the ground state (the lowest-energy state) of the hydrogen atom. When a hydrogen atom is in this state, an amount of energy equal to 0.5 hartree is necessary to free the electron and thereby cause the atom to become an ion.

The value of the Hartree energy is approximately 4.36 × 10⁻¹⁸ joule (J), or 27.2 electronvolts (eV). The constant gets its name from the 20th-century physicist Douglas Hartree. It is sometimes used as an energy unit in theoretical physics.
http://en.wikipedia.org/wiki/Natural_units
The unit of energy is called the Hartree energy in the Hartree system and the Rydberg energy in the Rydberg system. They differ by a factor of 2.

e^(-f/T) makes sense, because higher frequencies should be exponentially less likely to occur, as I pointed out in post 5, and raising the temperature will have the opposite effect of raising the frequency.

but the inclusion of h seems strange.

ℏ is the angular momentum of the electron in the Bohr atom.
What's it doing here?

kT is twice the energy per particle per quadratic degree of freedom at that temperature

hf/kT????

the only possible connection between h and Boltzmann's law is α
http://en.wikipedia.org/wiki/Fine_structure_constant
In physics, the fine-structure constant (usually denoted α, the Greek letter alpha) is a fundamental physical constant, namely the coupling constant characterizing the strength of the electromagnetic interaction.
The fine structure constant α has several physical interpretations. α is
The ratio of two energies: (i) the energy needed to overcome the electrostatic repulsion between two electrons when the distance between them is reduced from infinity to some finite d, and (ii) the energy of a single photon of wavelength λ = 2πd (see the Planck relation).

http://en.wikipedia.org/wiki/Coupling_constant
In physics, a coupling constant, usually denoted g, is a number that determines the strength of an interaction. Usually the Lagrangian or the Hamiltonian of a system can be separated into a kinetic part and an interaction part. The coupling constant determines the strength of the interaction part with respect to the kinetic part, or between two sectors of the interaction part. For example, the electric charge of a particle is a coupling constant

Unfortunately this is something that I have no understanding of at all.

I think h, or rather alpha, enters into the equation because the body is both emitting and absorbing light.

I would think that the truest measure of a field's strength would be how much energy it contains and how much of an effect that energy has on the relevant mass:

α² = E_H/(m_e c²)

http://en.wikipedia.org/wiki/Fine_structure_constant#Physical_interpretations
The fine structure constant α has several physical interpretations. α is:
The ratio of the velocity of the electron in the Bohr model of the atom to the speed of light. Hence the square of α is the ratio between the Hartree energy (27.2 eV = twice the Rydberg energy) and the electron rest mass energy (511 keV).

OK, the math is beyond me and I'm missing some basic info here, but
I am certain that h enters Planck's law by means of
alpha, the fine-structure constant, acting as a coupling constant.

in particular:
α² is the ratio between the Hartree energy (27.2 eV, twice the Rydberg energy) and the electron rest mass energy (511 keV):
α² = E_H/(m_e c²)
ℏ = h/2π = m_e α c a₀ = e²/(4πε₀ α c)
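These alpha relations can be verified numerically; a sketch with CODATA values, a₀ being the Bohr radius:

```python
import math

# Checking the alpha relations above numerically with CODATA values
# (a_0 is the Bohr radius).
alpha = 7.2973525693e-3
E_H = 4.35974472e-18        # Hartree energy, J
m_e = 9.1093837015e-31      # electron mass, kg
c = 2.99792458e8            # speed of light, m/s
e = 1.602176634e-19         # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
a_0 = 5.29177210903e-11     # Bohr radius, m
hbar = 1.054571817e-34      # reduced Planck constant, J s

ratio = E_H / (m_e * c**2)                        # should equal alpha**2
hbar_1 = m_e * alpha * c * a_0                    # hbar from the Bohr radius
hbar_2 = e**2 / (4 * math.pi * eps0 * alpha * c)  # hbar from the charge
```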

I wonder how much of an electron's rest mass is due to its own magnetic field?
http://en.wikipedia.org/wiki/Biot–Savart_law#Point_charge_at_constant_velocity

I'm done.
goodbye

alxm
Err, granpa: what's the question you're trying to answer here? If it's how Planck's law was derived, then there's a fairly detailed derivation at the [broken link] article, and I believe the original paper and translations of it are online if someone wants (e.g. http://arxiv.org/pdf/physics/0402064, which shows Planck's rationale behind oscillators).

Or are you trying to explain where Planck's constant comes from? Because that one has a pretty simple answer: We don't know. That's why it's considered a fundamental constant. Explaining it in terms of the Fine-Structure Constant etc is just substituting one for another, although α is usually considered 'more fundamental' as a sort of measure of the electromagnetic force.

Hartree energy is IMO pretty non-fundamental; it's just the dimensionless unit of energy when solving the electronic Schrödinger equation, and the Rydberg constant is essentially the same thing. But none of those constants had anything to do with how Planck's constant was originally derived years earlier. Planck didn't assume that energy itself came in quanta, he assumed that it was being emitted by quantized Hertzian oscillators (an abstraction of the then-still-controversial idea of atoms/molecules). It was Einstein who introduced photons. Despite what Planck's original assumption might lead one to think, the quantization of atomic/molecular levels doesn't enter into it; there isn't any in an ideal blackbody.

alpha doesn't occur in the Larmor formula for the rate of emission of light

so it MUST occur in the equations describing the absorption of light

In principle he made two mistakes while doing so: he used a Maxwell-Boltzmann distribution instead of a Bose-Einstein distribution, and he did not include the vacuum contribution. Of course, neither was known at the time. Fortunately, making both of these errors at the same time cancels them out and leads to the correct constant.
This is interesting; apparently Einstein was bothered by this for a long time, since it was obvious to him that Planck's law couldn't possibly be derived just from Boltzmann statistics and Maxwell's classical theory, so when in 1924 Bose came up with a derivation from what we now call Bose-Einstein statistics he was relieved.
Cthugha, could you please elaborate on the vacuum contribution you mentioned?
Thanks

Cthugha
Oh, that probably sounds more complicated than it really is. If you reconsider the basic quantum treatment of the harmonic oscillator, you will find that its energy levels are
$$E_n=\hbar\omega(n+\frac{1}{2})$$, causing a nonzero ground state energy.
Planck just assumed $$E_n=\hbar\omega n$$ and a vanishing ground state energy.
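The reason the missing vacuum term is harmless for the counting can be sketched directly: shifting every level by ħω/2 multiplies each Boltzmann weight by the same factor, which cancels in the average occupation.

```python
import math

# A sketch of why the missing vacuum term is harmless for the counting:
# shifting every level by hbar*omega/2 multiplies each Boltzmann weight by
# a common factor, which cancels in the average occupation <n>.
def mean_n(x, zero_point=False, n_max=500):
    """<n> for E_n = x*(n + 1/2) or x*n, with x = hbar*omega/kT."""
    offset = 0.5 if zero_point else 0.0
    weights = [math.exp(-x * (n + offset)) for n in range(n_max)]
    Z = sum(weights)
    return sum(n * w for n, w in zip(range(n_max), weights)) / Z

# mean_n is the same either way, and equals 1/(e^x - 1).
```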

alxm
so it MUST occur in the equations describing the absorption of light
You're still flailing after something, I don't know what. I don't quite see how that 'MUST' follow (given that absorption and emission are essentially the same process), or what it's got to do with Planck.

Planck was describing an ideal blackbody, which absorbs all light that hits it, by definition. He didn't need any equations to describe the absorption properties.

Oh, that probably sounds more complicated than it really is. If you reconsider the basic quantum treatment of the harmonic oscillator, you will find that its energy levels are
$$E_n=\hbar\omega(n+\frac{1}{2})$$, causing a nonzero ground state energy.
Planck just assumed $$E_n=\hbar\omega n$$ and a vanishing ground state energy.
I see. Just one thing: I always saw the n in the E = nhf formula as referring to the number of photons. Obviously at the time Planck developed his theory nothing like photons was in his mind, as he was considering light as a classical wave; but I guess he couldn't think in terms of energy levels either, as the harmonic oscillator model hadn't yet been applied to quanta. So when you say he assumed an energy level without the vacuum contribution, you are making an "a posteriori" reflection, since the n in the Planck formula was a way to introduce countable microstates, as you said, and had nothing to do with energy levels. Is this right, or am I missing something?
Sorry if I'm making this more contrived than you initially meant, but it helps me understand.

Cthugha
You are of course right. Planck did not know about photons at all and of course it was just straightforward to assume an energy term without vacuum contribution. He only distributes a fixed energy to (the following is roughly translated from the original) different linear resonators oscillating monochromatically, placed at large distance from each other. There are N oscillators with oscillation number (frequency) $$\nu$$, N' oscillators with oscillation number $$\nu'$$ and so on. The total energy shall be partially contained in the system as progressing radiation and partially inside the resonators as oscillations of these. The question is, how this energy is distributed between the oscillations of the resonators and the single colours of the radiation contained inside the medium in the stationary state and what is the resulting temperature.

http://en.wikipedia.org/wiki/Maxwell-Boltzmann_distribution
The Maxwell–Boltzmann distribution describes particle speeds in gases, where the particles do not constantly interact with each other but move freely between short collisions. It describes the probability of a particle's speed (the magnitude of its velocity vector) being near a given value as a function of the temperature of the system, the mass of the particle, and that speed value. This probability distribution is named after James Clerk Maxwell and Ludwig Boltzmann.

the factor (m/2πkT)^(3/2) can be taken out of the square root and placed next to the v²

http://en.wikipedia.org/wiki/Planck's_law

from this I get:

$$\frac{2kT}{\lambda^2}\cdot\frac{h\nu}{kT}\cdot\frac{1}{e^{h\nu/kT} - 1}$$

λ² would be the area that a single photon covers
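The rearrangement can be checked numerically: with λ = c/ν, the factorised form is identical to the standard Planck radiance.

```python
import math

# Checking the rearrangement above: with lambda = c/nu, the factorised form
# (2kT/lambda^2)(h*nu/kT)/(e^(h*nu/kT) - 1) equals the standard Planck
# radiance 2*h*nu^3/c^2 * 1/(e^(h*nu/kT) - 1).
h = 6.62607015e-34; k_B = 1.380649e-23; c = 2.99792458e8

def planck_standard(nu, T):
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k_B * T))

def planck_factored(nu, T):
    lam = c / nu
    return (2 * k_B * T / lam**2) * (h * nu / (k_B * T)) / math.expm1(h * nu / (k_B * T))

rel_diff = abs(planck_factored(5e13, 300.0) / planck_standard(5e13, 300.0) - 1.0)
```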

http://en.wikipedia.org/wiki/Equipartition_theorem

http://en.wikipedia.org/wiki/Bose–Einstein_statistics
Fermi–Dirac and Bose–Einstein statistics apply when quantum effects are important and the particles are "indistinguishable". Quantum effects appear if the concentration of particles (N/V) ≥ n_q, where n_q is the quantum concentration, for which the interparticle distance is equal to the thermal de Broglie wavelength, so that the wavefunctions of the particles are touching but not overlapping. Fermi–Dirac statistics apply to fermions (particles that obey the Pauli exclusion principle), and Bose–Einstein statistics apply to bosons. Since the quantum concentration depends on temperature, most systems at high temperatures obey the classical (Maxwell–Boltzmann) limit unless they also have a very high density, as in a white dwarf. Both Fermi–Dirac and Bose–Einstein statistics become Maxwell–Boltzmann statistics at high temperature or low concentration.
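The classical limit quoted above can be shown in one line: the Bose–Einstein occupation 1/(e^x − 1) collapses to the Maxwell–Boltzmann factor e^(−x) once the exponent is large.

```python
import math

# The classical limit quoted above in one line: the Bose-Einstein occupation
# 1/(e^x - 1) collapses to the Maxwell-Boltzmann factor e^(-x) once the
# exponent x = (E - mu)/kT is large, i.e. in the low-occupancy regime.
def bose_einstein(x):
    return 1.0 / math.expm1(x)

def maxwell_boltzmann(x):
    return math.exp(-x)

rel_err = abs(bose_einstein(10.0) / maxwell_boltzmann(10.0) - 1.0)
# rel_err is about e^-10, i.e. a few parts in 1e5
```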
