
#1
Jan13-13, 07:03 PM

P: 718

I've been picking up a few undergraduate texts to patch some gaps in my education. I'm brushing up on statistical mechanics right now, and I'm utterly bewildered by something Daniel Schroeder does in "Introduction to Thermal Physics," section 2.4:
However, ##\lim_{x\to\infty} \frac{Cx}{x} = C \neq 1##. The difference between the "very large number" and the "very large number" times a "large number" is never insignificant, because it is proportional to the very large number itself. The hand-wavy argument in equation 2.13 can't be justified, because the argument used to justify equation 2.12 (that the true quantity and the approximation have a relative difference that becomes insignificant) is no longer valid once that expression is used as an exponent. I get that, to some extent, undergraduate physics is the fine art of obtaining good conclusions from bad assumptions... but how is this sort of thing anything other than nonsense?
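(A quick numerical sketch of my own, not from Schroeder, to put the two claims side by side: the ratio really is ##C##, never close to 1, but the *logarithms* agree to fantastic relative precision. Since a number like ##10^{10^{23}}## overflows any float, the sketch works entirely in log10 space.)

```python
# My own sketch, not Schroeder's: take N = 10**(10**23) and C = 10**23,
# and work with log10 values, since N itself overflows any float.
log10_N = 1e23           # log10 of the "very large number" N
log10_C = 23.0           # log10 of the "large number" C

# The ratio C*N / N is exactly C -- never anywhere near 1:
ratio = 10.0 ** log10_C

# But the logarithms of C*N and N agree to about one part in 10**22:
rel_diff_of_logs = log10_C / log10_N

print(ratio)             # ~1e+23
print(rel_diff_of_logs)  # ~2.3e-22
```

So both statements in the thread are true at once: the ratio never approaches 1, while the relative error *in the logarithm* is utterly negligible.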



#2
Jan13-13, 07:17 PM

PF Gold
P: 5,720

It is only nonsense if you insist on calculations that would require dozens or even hundreds of significant digits, and nothing physical is ever measured that precisely.
For example, you will often hear, rightly, that quantum mechanics makes predictions that have been experimentally verified to an amazing degree of accuracy. "Amazing" in this case means the accuracy of measuring the width of the United States to within the width of a human hair: nominally 3000 miles to within 1/500th of an inch. That is an accuracy of about 1 part in ##10^{11}##. When do you think you could ever make physical sense of dozens or hundreds of significant digits?
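The arithmetic behind that analogy checks out (a quick sketch of my own, not from the post):

```python
# Sketch of the accuracy analogy: 3000 miles measured to 1/500 inch.
miles = 3000
inches_per_mile = 5280 * 12             # 63,360 inches per mile
total_inches = miles * inches_per_mile  # 190,080,000 inches

uncertainty_inches = 1 / 500            # about the width of a human hair
relative_accuracy = uncertainty_inches / total_inches

print(relative_accuracy)  # ~1.05e-11, i.e. roughly 1 part in 10**11
```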



#3
Jan13-13, 07:27 PM

P: 718





#4
Jan13-13, 07:47 PM

P: 1,025

Small numbers, large numbers, and very large numbers 



#5
Jan13-13, 08:29 PM

P: 718

Interesting thought. It certainly doesn't seem to be Schroeder's intention, but I can't think of any other way to justify it.




#6
Jan13-13, 08:46 PM

Mentor
P: 11,255

If you read further on, you'll see he's leading up to calculating the entropy of a system from the definition ##S = k \ln \Omega##, where ##\Omega## is the multiplicity of the system, which is generally a very large number indeed. After you've seen and done a few examples of that kind of calculation, you'll see how small the practical effect of those approximations is.
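A small illustration of why the approximations wash out in ##S = k \ln \Omega## (my own made-up numbers, not an example from the book): multiplying a multiplicity like ##10^{10^{23}}## by a "large" factor of ##10^{23}## changes the entropy by a relative amount of only about ##10^{-22}##.

```python
import math

# Sketch with made-up numbers: Omega ~ 10**(10**23) overflows any float,
# so carry ln(Omega) around directly.
k_B = 1.380649e-23                 # Boltzmann constant, J/K
ln_Omega = 1e23 * math.log(10)     # ln of 10**(10**23)

S = k_B * ln_Omega                 # entropy, ~3.18 J/K

# Multiply Omega by a "large" factor of 10**23:
delta_S = k_B * 23 * math.log(10)  # extra entropy from that factor
rel_change = delta_S / S           # = 23 / 1e23 = 2.3e-22

print(S, rel_change)
```

No thermodynamic measurement comes anywhere near resolving a relative change of ##10^{-22}## in the entropy, which is the practical content of Schroeder's shortcut.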




#7
Jan13-13, 08:55 PM

P: 428

How many hydrogen atoms are in the observable universe? You can't be precise enough for the C to matter. This isn't really a question of math; it's physics (or any other physical science). A back-of-the-envelope calculation leads me to estimate that there are [itex]\approx 10^{80}[/itex] hydrogen atoms in the observable universe. I think that is a decent estimate; if I'm within a factor of a million, I would consider it a good one.
That's the point he's trying to make. Schroeder's book is a moderate-quality introductory text, and he is trying to get you to understand how unimportant the factor will often be. I don't like the way he presents it either; he should lead you to the conclusion through examples.
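To put that "factor of a million" tolerance in perspective (my own sketch): on a log scale, a factor of ##10^6## either way moves the exponent from 80 down to 74 or up to 86, a shift of only about 7.5% of the exponent itself.

```python
import math

# Sketch: how much slop is "within a factor of a million" for an
# estimate of ~1e80 hydrogen atoms?
estimate = 1e80
factor = 1e6
low, high = estimate / factor, estimate * factor

spread = math.log10(high) - math.log10(estimate)  # 6 decades either way
rel = spread / math.log10(estimate)               # 6/80 = 0.075

print(spread, rel)
```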



#8
Sep26-13, 09:33 PM

P: 1

I'm very late in commenting here, but I happened on this topic in a search and wanted to chime in. (I'm mostly just formalizing the arguments about sig figs and the like above, but they evidently weren't entirely convincing as stated.)
As I understand it, Schroeder's point here is more or less about uncertainties. No measurable quantity of the order 10^{23} has an uncertainty less than (say) 10^{10}, which is why he says that 10^{23} + 23 = 10^{23}: it's equal to within any remotely plausible uncertainty. So what do we make of [itex]10^{10^{23}}[/itex]? This is a number whose exponent has an uncertainty of at least 10^{10}. Explicitly, that means that the true value could be anywhere between [itex]10^{10^{10}} \cdot 10^{10^{23}}[/itex] and [itex]10^{10^{10}} \cdot 10^{10^{23}}[/itex]: an astoundingly vast range of orders of magnitude. A number like [itex]10^{23} \cdot 10^{10^{23}}[/itex] is enormously different on an absolute scale, but still practically at the middle of the uncertainty range implicit in the original number. (I don't think that our usual language of "relative error" is really equipped to handle large uncertainties in the order of magnitude.) My conclusion? When Schroeder says that "very large numbers" are difficult to comprehend, he's not kidding. That's why he recommends taking a log first, just to help avoid failures in intuition. (My other conclusion? Schroeder's text is pretty awesome. It's worth assuming that he knows what he's talking about.) 

