Small numbers, large numbers, and very large numbers

  • Context: Graduate
  • Thread starter: VantagePoint72
  • Tags: Large numbers, Numbers

Discussion Overview

The discussion centers on the treatment of small, large, and very large numbers in statistical mechanics as presented in Daniel Schroeder's "Introduction to Thermal Physics." Participants explore the mathematical justification and implications of approximating these numbers, particularly in the context of physical measurements and uncertainties.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • One participant expresses confusion over the mathematical justification for treating large and very large numbers in a way that seems to disregard significant figures, questioning the validity of the approximations used in Schroeder's text.
  • Another participant argues that the approximations are only nonsensical if one insists on calculations requiring high precision, noting that physical measurements rarely reach such accuracy.
  • Some participants discuss the implications of significant figures, suggesting that neglecting small constants in certain contexts may not affect practical measurements significantly.
  • There is mention of the logarithmic nature of statistical mechanics, which may provide a rationale for the treatment of small and large numbers, although this is not universally accepted among participants.
  • One participant highlights that Schroeder's approach may be aimed at illustrating the negligible impact of certain factors in large-scale calculations, despite some dissatisfaction with his presentation style.
  • Another participant emphasizes that uncertainties in measurements of large quantities mean that small additions may be practically insignificant, supporting Schroeder's reasoning.
  • There is a discussion about the vast number of hydrogen atoms in the observable universe, with one participant suggesting that estimates can be made without needing to account for small factors, reinforcing the idea that practical calculations often overlook such details.
  • One participant formalizes arguments about significant figures and uncertainties, suggesting that the treatment of very large numbers involves a vast range of possible values, complicating the understanding of relative errors.

Areas of Agreement / Disagreement

Participants express differing views on the validity and practicality of the approximations used in Schroeder's text. While some find merit in the reasoning behind the approximations, others challenge their mathematical justification and implications for physical measurements. The discussion remains unresolved regarding the appropriateness of these approximations in various contexts.

Contextual Notes

Limitations include the dependence on the context of physical measurements and the potential for significant figures to affect the accuracy of calculations. The discussion highlights the complexity of applying mathematical approximations to physical systems, particularly at large scales.

VantagePoint72
I've been picking up a few undergraduate texts to patch up some education gaps that need filling. I'm brushing up on statistical mechanics right now, and I'm utterly bewildered by something Daniel Schroeder does in "Introduction to Thermal Physics" in section 2.4:

There are three kinds of numbers that commonly occur in statistical mechanics: small numbers, large numbers, and very large numbers.
Small numbers are small numbers, like 6, 23, and 42. You already know how to manipulate small numbers. Large numbers are much larger than small numbers, and are frequently made by exponentiating small numbers. The most important large number in statistical mechanics is Avogadro's number, which is of order ##10^{23}##. The most important property of large numbers is that you can add a small number to a large number without changing it. For example,
##10^{23} + 23 = 10^{23}## (2.12)
(The only exception to this rule is when you plan to eventually subtract off the same large number: ##10^{23} + 42 - 10^{23} = 42##.)
Very large numbers are even larger than large numbers, and can be made by exponentiating large numbers. An example would be ##10^{10^{23}}##. Very large numbers have the amazing property that you can multiply them by large numbers without changing them. For instance,
##10^{10^{23}} \times 10^{23} = 10^{10^{23} + 23} = 10^{10^{23}}## (2.13)
by virtue of equation 2.12.

OK. Come on, this is ridiculous. How can this be remotely justified mathematically? I've read some of the derivations that use these arguments, and they go as follows: as we take the size of a system to be very large, "large number" coefficients are dropped. This is essentially just taking a limit. Equation 2.12 is justified because as the number of particles gets arbitrarily large, the fractional difference in the term with the small number added gets arbitrarily small: ##\lim_{x\to\infty} \frac{x + C}{x}=1## for a fixed constant ##C##. For a finite limit (i.e. a finite number of particles), this is a reasonable approximation if ##C \ll x##, as Schroeder says.

However, ##\lim_{x\to\infty} \frac{Cx}{x}\neq1##. The difference between the "very large number" and the "very large number" times a "large number" is never insignificant, because it's proportional to the very large number itself. The hand-wavy argument in equation 2.13 can't be justified because the argument used to justify equation 2.12—that the true quantity and the approximation have a relative difference that becomes insignificant—is no longer valid when you use that expression as an exponent.
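A quick numerical sketch of the two limits above, using IEEE doubles as a stand-in for physically attainable precision (the variable names and values are just illustrative):

```python
x = 1e23   # a "large number", Avogadro-sized
C = 42.0   # a "small number"

# Equation 2.12: adding a small number to a large number.
# The relative difference (x + C)/x - 1 is about 4e-22, far below
# double precision (~1e-16), so the sum literally rounds back to x.
print((x + C) == x)        # True
print((x + C) / x - 1)     # 0.0

# The multiplicative case: C*x differs from x by a factor of C,
# and that relative difference does not shrink as x grows.
print((C * x) / x)         # ~42, independent of x
```

The first case vanishes below any measurable precision; the second is a fixed relative error no matter how large ##x## becomes, which is exactly the asymmetry being objected to.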

I get that to some extent undergraduate physics is the fine art of obtaining good conclusions from bad assumptions...but how is this sort of thing anything other than nonsense?
 
It is only nonsense if you insist on calculations that require dozens or even hundreds of significant digits, and nothing physical is ever measured that precisely.

For example, you will often hear, rightly, that quantum mechanics makes predictions that have been experimentally verified to an amazing degree of accuracy. "Amazing" in this case means the accuracy of measuring the width of the United States to within the width of a human hair: nominally 3000 miles to within 1/500th of an inch. That is an accuracy of 1 part in ##10^{11}##, so when do you think you could ever make physical sense of dozens or hundreds of significant digits?
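That figure checks out with simple arithmetic (using the round numbers quoted above):

```python
# Width of the US (~3000 miles) expressed in hair-widths (~1/500 inch).
inches = 3000 * 5280 * 12   # miles -> inches, about 1.9e8
hair = 1 / 500              # inch
parts = inches / hair       # about 9.5e10, i.e. roughly 1 part in 1e11
print(f"{parts:.1e}")       # 9.5e+10
```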
 
phinds said:
it is only nonsense if you insist on using calculations that required dozens or even hundreds of significant digits and nothing physical is ever measured that precisely.

I don't quite follow. If the unapproximated equation says the value of the measurement will be ##Cx##, then it doesn't matter how small ##C## is compared to ##x##: approximating this as just ##x## will be wrong by a factor of ##C## in every significant figure. If we're adding ##C##, then you're right: you have to measure to roughly ##\log_{10}(x/C)## significant figures for neglecting ##C## to be noticeable. For ##C## sufficiently small, this is indeed impractical. But if the formula is ##Cx## and we do a measurement assuming we'll just get ##x##, we'll be wrong in the very first significant figure by a factor of ##C##.
 
LastOneStanding said:
I don't quite follow. If the unapproximated equation says the value of the measurement will be ##Cx##, then it doesn't matter how small ##C## is compared to ##x##: approximating this as just ##x## will be wrong by a factor of ##C## in every significant figure. If we're adding ##C##, then you're right: you have to measure to roughly ##\log_{10}(x/C)## significant figures for neglecting ##C## to be noticeable. For ##C## sufficiently small, this is indeed impractical. But if the formula is ##Cx## and we do a measurement assuming we'll just get ##x##, we'll be wrong in the very first significant figure by a factor of ##C##.
Part of Schroeder's reasoning may be that virtually everything in stat mech is under a logarithm, so your example of ##Cx## becomes ##\ln C + \ln x##, and the ##\ln C## is insignificant.
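The log-domain version of that point is easy to make concrete; the exact powers of ten below are just illustrative stand-ins for the numbers in the thread:

```python
import math

# For a "very large number" Omega = 10**(10**23), work with ln(Omega)
# rather than Omega itself (which no float can hold).
ln_omega = 1e23 * math.log(10)   # ln(10**(10**23)), about 2.3e23
ln_C = 23 * math.log(10)         # ln(10**23), about 53

# Multiplying Omega by 10**23 adds ln_C to ln_omega -- a relative
# change of about 2.3e-22, utterly negligible under the logarithm.
print(ln_C / ln_omega)           # ~2.3e-22
```

Under the logarithm, the multiplicative "large number" factor turns back into an additive small term, which is exactly the situation equation 2.12 covers.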
 
Interesting thought. It certainly doesn't seem to be Schroeder's intention, but I can't think of any other way to justify it.
 
If you read further on, you'll see he's leading up to calculating the entropy of a system from the definition ##S = k \ln \Omega##, where ##\Omega## is the multiplicity of the system, which is generally a very large number indeed. After you've seen and done a few examples of that kind of calculation, you'll see how small the practical effect of those approximations is.
 
How many hydrogen atoms are in the observable universe? You can't be precise enough for the ##C## to matter. This doesn't have to do with math; this is physics (or any other physical science). A back-of-the-envelope calculation leads me to estimate that there are ##\approx 10^{80}## hydrogen atoms in the observable universe. I think that is a decent estimate; if I'm within a factor of a million, I would consider it a good one.

That's the point he's trying to make. Schroeder's book is a moderate-quality introductory text. He is trying to get you to understand how unimportant the factor will often be. I don't like the way he presents it either; he should lead you to the conclusion through examples.
 
I'm very late in commenting here, but I happened on this topic in a search and wanted to chime in. (I'm mostly just formalizing the arguments about sig figs and the like above, but they evidently weren't entirely convincing as stated.)

As I understand it, Schroeder's point here is more or less about uncertainties. No measurable quantity of order ##10^{23}## has an uncertainty less than (say) ##10^{10}##, which is why he says that ##10^{23} + 23 = 10^{23}##: it's equal to within any remotely plausible uncertainty.

So what do we make of ##10^{10^{23}}##? This is a number whose exponent has an uncertainty of at least ##10^{10}##. Explicitly, that means that the true value could be anywhere between ##10^{-10^{10}} \cdot 10^{10^{23}}## and ##10^{10^{10}} \cdot 10^{10^{23}}##: an astoundingly vast range of orders of magnitude. A number like ##10^{23} \cdot 10^{10^{23}}## is enormously different on an absolute scale, but still practically at the middle of the uncertainty range implicit in the original number. (I don't think that our usual language of "relative error" is really equipped to handle large uncertainties in the order of magnitude.)
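In code, the only workable representation of such a number is its base-10 exponent, and the comparison of shifts against the exponent's uncertainty is then immediate (the uncertainty value is the illustrative ##10^{10}## from above):

```python
# Represent N = 10**(10**23) by its base-10 exponent alone.
exponent = 1e23        # exponent of the "very large number"
uncertainty = 1e10     # plausible uncertainty in that exponent
shift = 23             # multiplying N by 10**23 adds 23 to the exponent

# The shift is nine orders of magnitude smaller than the uncertainty,
# so 10**23 * N is indistinguishable from N within the stated error.
print(shift / uncertainty)   # 2.3e-09
```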

My conclusion? When Schroeder says that "very large numbers" are difficult to comprehend, he's not kidding. That's why he recommends taking a log first, just to help avoid failures in intuition. (My other conclusion? Schroeder's text is pretty awesome. It's worth assuming that he knows what he's talking about.)
 
