VantagePoint72
I've been picking up a few undergraduate texts to patch up some gaps in my education. I'm brushing up on statistical mechanics right now, and I'm utterly bewildered by something Daniel Schroeder does in "Introduction to Thermal Physics", section 2.4:
There are three kinds of numbers that commonly occur in statistical mechanics: small numbers, large numbers, and very large numbers.
Small numbers are small numbers, like 6, 23, and 42. You already know how to manipulate small numbers. Large numbers are much larger than small numbers, and are frequently made by exponentiating small numbers. The most important large number in statistical mechanics is Avogadro's number, which is of order ##10^{23}##. The most important property of large numbers is that you can add a small number to a large number without changing it. For example,
##10^{23} + 23 = 10^{23}## (2.12)
(The only exception to this rule is when you plan to eventually subtract off the same large number: ##10^{23} + 42 - 10^{23} = 42##.)
Very large numbers are even larger than large numbers, and can be made by exponentiating large numbers. An example would be ##10^{10^{23}}##. Very large numbers have the amazing property that you can multiply them by large numbers without changing them. For instance,
##10^{10^{23}} \times 10^{23} = 10^{10^{23} + 23} = 10^{10^{23}}## (2.13)
by virtue of equation 2.12.
OK. Come on, this is ridiculous. How can this be remotely justified mathematically? I've read some of the derivations that use these arguments, and they go as follows: as we take the size of a system to be very large, "large number" coefficients are dropped. This is essentially just taking a limit. Equation 2.12 is justified because as the number of particles gets arbitrarily large, the fractional contribution of the added small number gets arbitrarily small: ##\lim_{x\to\infty} \frac{x + C}{x}=1## for any fixed constant ##C##. For a finite number of particles, this is a reasonable approximation as long as ##C \ll x##, as Schroeder says.
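To put my own numbers on it (this is just my arithmetic, not anything from the book), the relative error in equation 2.12 really is negligible:
$$\frac{10^{23} + 23}{10^{23}} = 1 + 2.3\times10^{-22},$$
so dropping the 23 changes the answer by about one part in ##10^{22}##.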
However, ##\lim_{x\to\infty} \frac{Cx}{x}=C\neq1##. The difference between the "very large number" and the "very large number" times a "large number" is never insignificant, because it's proportional to the very large number itself. The hand-wavy argument in equation 2.13 can't be justified by the argument used to justify equation 2.12 (that the true quantity and the approximation have a relative difference that becomes insignificant), because that argument applies only to the exponent: a small absolute error in the exponent becomes an enormous multiplicative factor once you exponentiate.
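Concretely (again my own arithmetic, not the book's):
$$\frac{10^{10^{23}} \times 10^{23}}{10^{10^{23}}} = 10^{23},$$
so the two sides of equation 2.13 differ by a multiplicative factor of ##10^{23}##, which is hardly "unchanged" in the sense that equation 2.12 is.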
I get that to some extent undergraduate physics is the fine art of obtaining good conclusions from bad assumptions...but how is this sort of thing anything other than nonsense?