Small numbers, large numbers, and very large numbers

In summary: the thread discusses the distinction Schroeder draws between small, large, and very large numbers in statistical mechanics. Some find the distinction confusing, since the words "small" and "large" are not really well-defined; one suggestion is to write something like ##\lim_{x\to\infty} \left(1+\frac{C}{x}\right)^x = e^C## and call that "large." The posters debate the validity of the resulting approximations, with one side arguing that they can introduce significant errors, and the other arguing that in statistical mechanics, where most quantities sit under a logarithm, the practical effect of these approximations is negligible. The thread also touches on measurement uncertainty and the limited precision of any physical measurement.
  • #1
VantagePoint72
I've been picking up a few undergraduate texts to patch up some education gaps that need filling. I'm brushing up on statistical mechanics right now, and I'm utterly bewildered by something Daniel Schroeder does in "Introduction to Thermal Physics" in section 2.4:

There are three kinds of numbers that commonly occur in statistical mechanics: small numbers, large numbers, and very large numbers.
Small numbers are small numbers, like 6, 23, and 42. You already know how to manipulate small numbers. Large numbers are much larger than small numbers, and are frequently made by exponentiating small numbers. The most important large number in statistical mechanics is Avogadro's number, which is of order ##10^{23}##. The most important property of large numbers is that you can add a small number to a large number without changing it. For example,
##10^{23} + 23 = 10^{23}## (2.12)
(The only exception to this rule is when you plan to eventually subtract off the same large number: ##10^{23} + 42 - 10^{23} = 42##.)
Very large numbers are even larger than large numbers, and can be made by exponentiating large numbers. An example would be ##10^{10^{23}}##. Very large numbers have the amazing property that you can multiply them by large numbers without changing them. For instance,
##10^{10^{23}} \times 10^{23} = 10^{10^{23} + 23} = 10^{10^{23}}## (2.13)
by virtue of equation 2.12.

OK. Come on, this is ridiculous. How can this be remotely justified mathematically? I've read some of the derivations that use these arguments, and they go as follows: as we take the size of a system to be very large, "large number" coefficients are dropped. This is essentially just taking a limit. Equation 2.12 is justified because as the number of particles gets arbitrarily large, the fractional difference caused by adding the small number gets arbitrarily small: ##\lim_{x\to\infty} \frac{x + C}{x}=1## for any fixed constant ##C##. For a finite system (i.e. a finite number of particles), this is a reasonable approximation as long as ##C \ll x##, as Schroeder says.
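
(To be clear, I have no quarrel with 2.12 numerically. A quick Python check of my own, not anything from the book, shows just how negligible the additive correction is; double precision can't even register it:)

[code]
# Relative error in 10^23 + 23 = 10^23 (eq. 2.12) -- my own check, not Schroeder's.
from fractions import Fraction

x = 10**23   # a "large number"
c = 23       # a "small number"

print(float(Fraction(c, x)))   # (x + c)/x - 1 = 2.3e-22, utterly negligible
print(1e23 + 23 == 1e23)       # True: double precision can't even see the addition
[/code]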

However, ##\lim_{x\to\infty} \frac{Cx}{x}= C \neq 1##. The difference between the "very large number" and the "very large number" times a "large number" is never insignificant, because it is proportional to the very large number itself. The hand-wavy step in equation 2.13 can't be justified the same way, because the argument used for equation 2.12 (that the true quantity and the approximation have a relative difference that becomes insignificant) no longer applies once that expression sits in an exponent.
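
To spell the contrast out with my own arithmetic (not numbers from the text): the relative error in 2.12 is vanishingly small, while the ratio involved in 2.13 is itself a large number:
##\frac{(10^{23}+23)-10^{23}}{10^{23}} = 2.3\times 10^{-22}, \qquad \frac{10^{23}\cdot 10^{10^{23}}}{10^{10^{23}}} = 10^{23}.##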

I get that to some extent undergraduate physics is the fine art of obtaining good conclusions from bad assumptions...but how is this sort of thing anything other than nonsense?
 
  • #2
It is only nonsense if you insist on using calculations that require dozens or even hundreds of significant digits, and nothing physical is ever measured that precisely.

For example, you will often hear, rightly, that quantum mechanics makes predictions that have been experimentally verified to an amazing degree of accuracy. "Amazing" in this case means the accuracy of measuring the width of the United States to within the width of a human hair: nominally 3000 miles to within 1/500th of an inch. That is an accuracy of about 1 part in ##10^{11}##, so when do you think you could ever make physical sense of dozens or hundreds of significant digits?
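
(If you want to check that arithmetic, here's a quick sketch in Python; the mileage and tolerance are just the round numbers quoted above:)

[code]
# Sanity check: 3000 miles measured to within 1/500 of an inch.
width_in_inches = 3000 * 5280 * 12    # miles -> feet -> inches, about 1.9e8
tolerance_in_inches = 1 / 500

print(width_in_inches / tolerance_in_inches)   # ~9.5e10, i.e. roughly 1 part in 10^11
[/code]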
 
  • #3
phinds said:
It is only nonsense if you insist on using calculations that require dozens or even hundreds of significant digits, and nothing physical is ever measured that precisely.

I don't quite follow. If the unapproximated equation says the value of the measurement will be ##Cx##, then it doesn't matter how small ##C## is compared to ##x##: approximating it as just ##x## is off by a factor of ##C##, and that error shows up in the very first significant figure. If we're adding ##C## then you're right: you'd have to measure to a relative precision of about ##C/x## for the neglected term to be noticeable, and for sufficiently small ##C## that is indeed impractical. But if the formula is ##Cx## and we do a measurement expecting just ##x##, we'll be wrong by a factor of ##C## right from the first significant figure.
 
  • #4
LastOneStanding said:
I don't quite follow. If the unapproximated equation says the value of the measurement will be ##Cx##, then it doesn't matter how small ##C## is compared to ##x##: approximating it as just ##x## is off by a factor of ##C##, and that error shows up in the very first significant figure. If we're adding ##C## then you're right: you'd have to measure to a relative precision of about ##C/x## for the neglected term to be noticeable, and for sufficiently small ##C## that is indeed impractical. But if the formula is ##Cx## and we do a measurement expecting just ##x##, we'll be wrong by a factor of ##C## right from the first significant figure.
Part of Schroeder's reasoning may be that virtually everything in stat mech ends up under a logarithm, so in your example ##Cx## becomes ##\ln C + \ln x##, and the ##\ln C## is insignificant.
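
For instance, plugging in the thread's own numbers (my illustration, not Schroeder's):
##\ln\!\left(10^{23}\cdot 10^{10^{23}}\right) = 23\ln 10 + 10^{23}\ln 10 \approx 53 + 2.3\times 10^{23},##
and that 53 is exactly the kind of small number the book says you can drop.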
 
  • #5
Interesting thought. It certainly doesn't seem to be Schroeder's intention, but I can't think of any other way to justify it.
 
  • #6
If you read further on, you'll see he's leading up to calculating the entropy of a system from the definition ##S = k \ln \Omega##, where ##\Omega## is the multiplicity of the system, which is generally a very large number indeed. After you've seen and done a few examples of that kind of calculation, you'll see how small the practical effect of those approximations is.
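
Here's a rough illustration with assumed numbers (about a mole of particles; not a calculation from the book) of how little an extra factor of ##10^{23}## in ##\Omega## moves the entropy:

[code]
# How much does multiplying Omega by 10**23 change S = k ln(Omega)?
# Rough illustrative numbers, not a worked example from the book.
import math

k = 1.380649e-23    # Boltzmann constant, J/K
N = 6.022e23        # roughly one mole of particles

S_typical = N * k                    # entropy is of order N*k, a few J/K
delta_S = k * 23 * math.log(10)      # extra entropy from a factor of 10**23 in Omega

print(S_typical)   # ~8.3 J/K
print(delta_S)     # ~7.3e-22 J/K -- invisible next to S
[/code]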
 
  • #7
How many hydrogen atoms are in the observable universe? You can't be precise enough for the ##C## to matter. This isn't really about math; it's about physics (or any other physical science). A back-of-the-envelope calculation leads me to estimate that there are [itex]\approx 10^{80}[/itex] hydrogen atoms in the observable universe... I think that is a decent estimate. If I'm within a factor of a million, I would consider it a good estimate.
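
(Here's the kind of envelope I mean; both input figures are rough assumptions, good to an order of magnitude at best:)

[code]
# Back-of-the-envelope: hydrogen atoms in the observable universe.
# Both input figures are rough, order-of-magnitude assumptions.
baryonic_mass_kg = 1.5e53     # commonly quoted rough estimate for ordinary matter
hydrogen_mass_kg = 1.67e-27   # mass of one hydrogen atom

print(baryonic_mass_kg / hydrogen_mass_kg)   # ~9e79, i.e. about 10**80
[/code]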

That's the point he's trying to make. Schroeder's book is a moderate quality introductory text. He is trying to get you to understand how unimportant the factor will often be. I don't like the way he presents it either. He should lead you to the conclusion through examples.
 
  • #8
I'm very late in commenting here, but I happened on this topic in a search and wanted to chime in. (I'm mostly just formalizing the arguments about sig figs and the like above, but they evidently weren't entirely convincing as stated.)

As I understand it, Schroeder's point here is more or less about uncertainties. No measurable quantity of order ##10^{23}## has an uncertainty less than (say) ##10^{10}##, which is why he says that ##10^{23} + 23 = 10^{23}##: it's equal to within any remotely plausible uncertainty.

So what do we make of [itex]10^{10^{23}}[/itex]? This is a number whose exponent has an uncertainty of at least 1010. Explicitly, that means that the true value could be anywhere between [itex]10^{-10^{10}} \cdot 10^{10^{23}}[/itex] and [itex]10^{10^{10}} \cdot 10^{10^{23}}[/itex]: an astoundingly vast range of orders of magnitude. A number like [itex]10^{23} \cdot 10^{10^{23}}[/itex] is enormously different on an absolute scale, but still practically at the middle of the uncertainty range implicit in the original number. (I don't think that our usual language of "relative error" is really equipped to handle large uncertainties in the order of magnitude.)
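
Put in log-space (my restatement of the same point): multiplying by ##10^{23}## shifts the exponent by 23, while the exponent's own uncertainty is at least ##10^{10}##:
##\log_{10}\!\left(10^{23}\cdot 10^{10^{23}}\right) = 10^{23} + 23, \qquad 23 \ll 10^{10},##
so the shift is completely buried inside that uncertainty.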

My conclusion? When Schroeder says that "very large numbers" are difficult to comprehend, he's not kidding. That's why he recommends taking a log first, just to help avoid failures in intuition. (My other conclusion? Schroeder's text is pretty awesome. It's worth assuming that he knows what he's talking about.)
 

FAQ: Small numbers, large numbers, and very large numbers

1. What is considered a small number?

A small number is typically considered to be less than 1,000. However, this can vary depending on the context and field of study.

2. Can you give an example of a large number?

One example of a large number is 1,000,000,000 (1 billion). This is often used in terms of population, currency, and scientific measurements.

3. How are very large numbers written?

Very large numbers are often written in scientific notation, which is a shorthand way of expressing numbers using powers of 10. For example, 10,000 can be written as 1 x 10^4.

4. What is the difference between a large number and a very large number?

The main difference between a large number and a very large number is the magnitude. A large number is typically in the millions or billions range, whereas a very large number is often in the trillions or higher.

5. How are small numbers and large numbers used in scientific research?

Small numbers and large numbers are used in scientific research to represent quantities and measurements. They are also used in statistical analysis to analyze data and make predictions. In addition, they play a crucial role in fields such as physics and astronomy, where extremely small or large numbers are often used to describe the properties of objects and phenomena.
