[SOLVED] Stat. Mech: energy-temperature relation of a perfect classical gas

Note: This is really a problem I gave myself in an attempt to make myself understand thermodynamics better. If the problem itself is flawed (which is a possibility), then please explain to me why and how.

1. The problem statement, all variables and given/known data

Consider the system consisting of a single particle of infinitesimal volume and finite mass m confined to a cube. (Without using the equipartition theorem,) verify that the relationship between the system temperature T = dE/dS and the speed |v| of the particle can be written in the form (1/2)m|v|^2 = (3/2)kT, as predicted by the equipartition theorem.

2. Relevant equations

T = dE/dS
E = (1/2)m|v|^2
S = k ln W

3. The attempt at a solution

The strategy is to express both the energy E and the entropy S as functions of the particle's speed |v|, and then differentiate and divide:

T = dE/dS = (dE/d|v|)/(dS/d|v|)

Obviously, since E = (1/2)m|v|^2, dE/d|v| = m|v|. Also, since S = k ln W, dS/d|v| = k (1/W)(dW/d|v|).

The tricky part is how to approximate W as a function of |v|, and I think this is the part I don't really understand. W is the number of states the particle can have. Since particle motion is quantized at some level:

W = (# possible positions) * (# possible velocities)

The only information we have on the particle is that it is confined to a cubic box (let its volume be V) and that its speed is |v|. The number of possible positions this particle could occupy is proportional to the volume, and the number of momentum-states with speed |v| varies as |v|^2, corresponding to the number of lattice points that lie within the spherical shell between radii |v| and |v|+d|v|; hence

W = MV * N|v|^2 = A|v|^2,

and

dS/d|v| = k (1/W)(dW/d|v|) = k * (1/(A|v|^2)) * (2A|v|) = 2k/|v|.

Dividing, we have

T = dE/dS = (dE/d|v|)/(dS/d|v|) = m|v|/(2k/|v|) = m|v|^2/(2k);

rearranging gives (1/2)m|v|^2 = kT, which is not what we wanted.
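As a sanity check on the |v|^2 shell-count assumption above, here is a quick Python sketch of my own (the radii and shell width are arbitrary choices, not from the problem): it counts integer lattice points in a thin spherical shell of radius r and confirms the count scales like the surface area, i.e. as r^2.

```python
import math

def shell_count(r, dr=0.5):
    """Count integer lattice points (nx, ny, nz) with r <= |n| < r + dr."""
    count = 0
    rmax = int(r + dr) + 1
    for nx in range(-rmax, rmax + 1):
        for ny in range(-rmax, rmax + 1):
            for nz in range(-rmax, rmax + 1):
                d = math.sqrt(nx * nx + ny * ny + nz * nz)
                if r <= d < r + dr:
                    count += 1
    return count

# Doubling r should roughly quadruple the shell count (surface area ~ r^2).
c1, c2 = shell_count(20.0), shell_count(40.0)
print(c2 / c1)  # roughly 4
```

So the |v|^2 scaling of the momentum shell itself looks right; the question is what else belongs in W.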
I am aware that if my approximation for W were to have the form W = A|v|^3 instead of W = A|v|^2, then everything would work, but I don't understand how to rationalize that. What is wrong with the way I am approximating the number of possible states W?
No - V is the volume of the box, a constant, and |v| is the speed of the particle. Although if you could cook up a reason why the number of position-states should depend upon the speed, I would be grateful. Maybe the uncertainty principle constrains this, but I don't see how. Put another way, the faster a particle is moving in any given direction, the less precise its position, and so correspondingly fewer possible "position-states" there should be; however, I don't see exactly how to carry this out in three dimensions...and it results in a -reduction- in the number of possible states anyway, so I don't see how that helps unless I've made an even bigger error of underestimation in the "velocity-states." I suppose I could write out everything not just in terms of |v| or |p| but in terms of p_x, p_y, p_z. Maybe I'll try that and get back.
Ah, of course. Right you are. I looked carefully through my copy of McQuarrie's Statistical Mechanics. He only says that "W is not generally available," which makes me think it may be difficult to reason out W from scratch. Where he does calculate it for an ideal gas, he assumes E = 3kT/2 from the start. Sorry I could not help more.
Right - I recall similar experiences from my thermo class, which similarly dodged the question. This doesn't change the fact, though, that W -exists-, though its form may be unknown, and that in order for the definition of temperature based upon the thermodynamic potential (T^-1 = dS/dE) to line up with the experimental result that |p|^2/(2m) = (3/2)kT, W -must- have a |v|^3 dependency.

Incidentally, I tried working through the 1-D case and ran into much the same problem: given that the particle has a definite speed/magnitude of momentum, there are exactly 2 momentum states, corresponding to the particle travelling either towards the left or towards the right. However, in order for W to work out such that E = (1/2)kT, W must have a linear dependency upon |p|. In both cases we are missing the same factor of |p|, so I'm assuming that it comes from the position. The easiest way I can get a linear dependence upon |p| is to call the number of possible position states L/dx, where L is the length of the box and dx is the uncertainty in position, and write

L/dx = (L/C) * dp_x = (L/C) (dp_x/|p_x|) |p_x|

The relation dx*dp_x = C follows from the uncertainty principle. But then I have to wave my hands and call dp_x/|p_x| a constant, perhaps reflective of one's measurement abilities - but I don't know how to justify that. Can you think of a better way, or, if my hand-waving is in fact justified, explain why?
Could the discrepancy come from the fact that you're only considering a single particle? Something bothers me about the idea of setting both the temperature and the velocity constant. If you considered a large collection of particles, for example, you'd typically set the temperature constant and let the velocities assume a distribution that would be strongly peaked at the average value. It seems like you'd get the full |v|^3 dependency if |v| were free to vary. But how could |v| vary when you have only a single particle with nothing to interact with? One possible explanation: To keep the temperature of your system constant, you have (virtually) placed it in contact with a large heat bath. There is no restriction on energy transfer between the two, and this would affect the particle's velocity. Conversely, if you completely isolated the system, I don't think you could consider temperature constant. Just a few thoughts that might help.
As Mapes alludes to, there are two types of energies being discussed:

1. A closed system has some energy that we specify, and which is constant over time. We then define S(E) and T(E) as you've described. This is the type of energy you're dealing with.

2. For a system in thermal contact with the environment, the independent variable is T, the temperature of the environment. We then find E and S as functions of T, where E will no longer be constant but fluctuate about some average value as the system exchanges energy with the environment. This average energy is what we're talking about when we say an atom has energy 3/2 kT.

Note one important difference: the average energy need not be an energy the system can actually attain. For example, a two-level system with energies [itex]\pm[/itex]e will in general have an average energy strictly between -e and e.

Note also that when we put two systems in thermal contact, we expect their energies to redistribute themselves so that the corresponding temperatures are equal. This is because we define the temperature such that the total entropy of the composite system is maximized when the two temperatures are equal. In the case of systems with large numbers of particles, the point of maximum entropy is typically enormously more probable than any other point, so to good approximation the most probable and average energies are the same thing. This is not the case for a single-atom system, which explains the discrepancy between your result and the standard one. Maybe some calculations will help clarify things.
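To make the two-level example concrete, here is a small Python sketch of my own (the level spacing e = 1 is an arbitrary choice): the canonical average energy of a system with levels -e and +e works out to -e tanh(e/kT), which lies strictly between the two levels and never equals either of them.

```python
import math

def avg_energy(e, kT):
    """Canonical average energy of a two-level system with levels -e and +e."""
    # Boltzmann weights exp(-E/kT) for E = -e and E = +e
    w_minus, w_plus = math.exp(e / kT), math.exp(-e / kT)
    return (-e * w_minus + e * w_plus) / (w_minus + w_plus)  # equals -e*tanh(e/kT)

for kT in (0.1, 1.0, 10.0):
    print(kT, avg_energy(1.0, kT))  # always strictly between -1 and 0
```

At low kT the average approaches the ground level -e without reaching it; at high kT it approaches 0, halfway between the levels, which is not an attainable energy.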
For a single-atom system, the density* of states is [itex]g(E)=cE^{1/2}[/itex] (in what follows, c is some constant whose value we don't care about, and there is no relation between the various c's), so:

[tex] S = c + \frac{k}{2} \log(E) [/tex]

[tex] E = \frac{kT}{2} [/tex]

For a two-atom system, where the total energy can now be divided over the two atoms, we have (relabelling the one-particle degeneracy function g_1):

[tex] g_2(E) = \int_0^E dE' g_1(E') g_1(E-E') = c\int_0^E dE' \, E'^{1/2} (E-E')^{1/2} = c E^2 \int_0^1 dx \, x^{1/2} (1-x)^{1/2} = c E^2 [/tex]

[tex] S = c + 2 k \log(E) [/tex]

[tex] E = 2kT [/tex]

It's not too hard to see (work out the three-atom case and it should be obvious) that each atom adds a power of [itex]E^{3/2}[/itex] to g(E), and so for N atoms:

[tex] S = c + \frac{3N-2}{2} k\log(E) [/tex]

[tex] E = \frac{3N-2}{2} kT [/tex]

Thus, as N goes to infinity, the energy per particle approaches 3/2 kT. Of course, a much simpler way to derive this result is to use the partition function, but hopefully this provides a link between your method and that one, and clarifies how they can give different answers.

*Note: I'm looking at the energy density of states, not the velocity density that you computed. In the limit of a large number of particles, it won't matter which we use, but the calculation is easier with the energy density. The fact that the two give different results for the temperature of systems with low numbers of particles should show that temperature isn't a very well-defined concept for such systems.
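If it helps, the convolution argument can be checked numerically. This Python sketch of my own (grid size and fit window are arbitrary) repeatedly convolves g_1(E) = E^(1/2) with itself and fits the power-law exponent of g_N, which should come out as (3N-2)/2.

```python
import numpy as np

E = np.linspace(0.0, 1.0, 2001)
dE = E[1] - E[0]
g1 = np.sqrt(E)          # single-particle density g_1(E) = c E^(1/2), with c = 1 here
gN = g1.copy()
slopes = []
for N in range(2, 5):
    # g_N(E) = integral over E' of g_{N-1}(E') g_1(E - E'), as a discrete convolution
    gN = np.convolve(gN, g1)[:len(E)] * dE
    # fit the power-law exponent on a log-log plot, away from the E = 0 endpoint
    k = np.polyfit(np.log(E[500:1500]), np.log(gN[500:1500]), 1)[0]
    slopes.append(k)
    print(N, k)  # expect roughly (3N-2)/2: 2, 3.5, 5
```

Each convolution with g_1 raises the exponent by 3/2, exactly as claimed above.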
Those were very helpful explanations of why my original statement of the problem made no sense. I still don't completely understand why g(E) = cE^(1/2); everything follows after that, though. Importantly, I understand how the multiple-particle models change the final result by forcing integration over a much broader distribution of states than a single-particle model would yield.
It just depends how you count the states. Assuming states are uniformly distributed in velocity space (which is justified by a more rigorous treatment with QM), you can unambiguously say that the number of states in a sphere of radius |v| is proportional to |v|^3, and, since E is proportional to |v|^2, this means the number of states with energy less than E is proportional to E^(3/2). When we talk about the number of states at a certain energy or velocity, we really mean within some range. For example, you looked at the number of states within some fixed range d|v|, which gives S = k log(dN/d|v| d|v|) = k log(|v|^2) + const. I looked at the states within some fixed range dE, giving S = k log(dN/dE dE) = k log(E^(1/2)) + const. These can be related by noting that dE/d|v| is proportional to |v|.

When we construct the two-particle density, we have to integrate over the appropriate variable. Thus, if I were to repeat the calculations for the velocity density, I'd get:

[tex] g_2(|v|) = \int_0^{|v|} d|w| \, g_1(|w|) g_1(|v|-|w|) = |v|^5 \int_0^1 dx \, x^2 (1-x)^2 = c|v|^5 [/tex]

[tex] S(|v|) = 5k \log(|v|) + c [/tex]

[tex] \frac{1}{T} = \frac{dS}{dE} = \frac{dS}{d|v|} \frac{d|v|}{dE} = \frac{5k}{|v|} \frac{1}{m|v|} [/tex]

[tex] \frac{1}{2} m|v|^2 = \frac{5}{2} kT [/tex]

And repeating this gives an N-particle energy of:

[tex] \frac{1}{2} m|v|^2 = \frac{3N-1}{2} kT [/tex]

so that we get the same large-N limit.
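The same numerical check works for the velocity density. This sketch of my own (arbitrary grid and fit window again) convolves g_1(|v|) = |v|^2 with itself and fits the exponent of g_2, which should be 5 = 3N - 1 for N = 2.

```python
import numpy as np

v = np.linspace(0.0, 1.0, 2001)
dv = v[1] - v[0]
g1 = v ** 2                               # single-particle velocity density g_1(|v|) ~ |v|^2
# g_2(|v|) = integral over |w| of g_1(|w|) g_1(|v| - |w|), as a discrete convolution
g2 = np.convolve(g1, g1)[:len(v)] * dv
# fit the power-law exponent on a log-log plot, away from the |v| = 0 endpoint
slope = np.polyfit(np.log(v[500:1500]), np.log(g2[500:1500]), 1)[0]
print(slope)  # expect close to 5
```

This matches the [tex]c|v|^5[/tex] result above, and each additional particle raises the exponent by 3, giving the 3N - 1 pattern.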