Why Does ln(x!) Approximate to x ln(x) - x?

SUMMARY

The discussion centers on the Stirling approximation, which states that ln(n!) ≈ n ln(n) - n for large n. Participants clarify that this approximation arises from comparing the sum of logarithms with the integral of the logarithm, leading to asymptotic convergence as n increases. The conversation also touches on absolute and relative error, emphasizing how the two differ in measurement contexts. The Stirling approximation is justified graphically through Riemann sums, clarifying its use in mathematical analysis.

PREREQUISITES
  • Understanding of Stirling's approximation
  • Familiarity with logarithmic functions
  • Basic knowledge of integrals and Riemann sums
  • Concepts of absolute and relative error in measurements
NEXT STEPS
  • Study the derivation of Stirling's approximation in detail
  • Learn about Riemann sums and their applications in calculus
  • Explore the differences between absolute and relative error in various contexts
  • Investigate the implications of asymptotic analysis in mathematical functions
USEFUL FOR

Mathematicians, engineering students, and anyone interested in advanced mathematical concepts, particularly those involving approximations and error analysis.

asdf1
! question...

Why does lnX!=XlnX-X?
 
:confused:?
\ln{1!} = \ln1 = 0
1\ln1 - 1 = 1 \times 0 - 1 = -1
So 0 = -1??
Viet Dao,
 
I think you're referring to the Stirling approximation:

\ln n! = \ln 1 + \ln 2 + \ldots + \ln n = \sum_{k=1}^{n}\ln k \approx \int_1^n \ln x dx = n \ln n - n + 1 \approx n \ln n - n

where the relative error of the approximation shrinks as n becomes larger.
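A quick numerical check of this (a minimal Python sketch, not from the thread; the function name and sample values of n are my own) shows the relative error shrinking as n grows:

```python
import math

def ln_factorial(n):
    """Exact ln(n!) computed as the sum ln(1) + ln(2) + ... + ln(n)."""
    return sum(math.log(k) for k in range(1, n + 1))

for n in (10, 100, 1000):
    exact = ln_factorial(n)
    approx = n * math.log(n) - n           # Stirling's n ln n - n
    rel_err = abs(exact - approx) / exact  # shrinks as n grows
    print(f"n={n:5d}  exact={exact:10.3f}  approx={approx:10.3f}  rel_err={rel_err:.5f}")
```

Summing logarithms avoids overflowing n!, which is why ln(n!) is computed term by term rather than as math.log(math.factorial(n)).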
 
What you said was "Why does lnX!=XlnX-X?". Now you tell us not only did you NOT mean "ln X! = X ln X - X", but you also tell us that you already KNOW the answer to your (unstated) question. What was your purpose in posting that?
 
he means antiderivative maybe. answer: check it.
 
 
?
I am referring to the Stirling approximation (sorry, I forgot to add that at the end of my question)...
I saw that equation in the "Advanced Engineering Mathematics" book by Kreyszig as part of the solution to a problem...
but what I wonder is: where did the Stirling approximation come from?
 
asdf1 said:
?
I am referring to the Stirling approximation (sorry, I forgot to add that at the end of my question)...
I saw that equation in the "Advanced Engineering Mathematics" book by Kreyszig as part of the solution to a problem...
but what I wonder is: where did the Stirling approximation come from?
\log(x!)=\sum_{n=1}^x \log(n) \sim \int_0^x \log(t) dt=x\log(x)-x
where ~ means asymptotic equality for large x;
that is, the integral becomes a good approximation of the sum as x becomes large.
 
  • #10
Why? That's the part that I don't understand...
 
  • #11
You can approximate the sum by an integral. If you draw a graph, the sum \sum_{n=1}^{x}\ln n is equal to the area of x rectangles, each of width 1. The heights are ln(1), ln(2), ..., ln(x).
So you can approximate this area by the integral \int_1^x \ln t \, dt. Drawing a picture may help.
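The rectangle picture can be checked numerically; this is a minimal sketch with an illustrative value of x of my choosing:

```python
import math

x = 50  # an illustrative value, not from the thread
# Right-endpoint rectangles of width 1 over [1, x]: heights ln(2), ..., ln(x)
# (the ln(1) rectangle has height 0 and contributes nothing).
riemann_sum = sum(math.log(k) for k in range(2, x + 1))
# The antiderivative of ln t is t ln t - t, so the integral from 1 to x is:
integral = x * math.log(x) - x + 1
print(riemann_sum, integral)
```

Because ln is increasing, the right-endpoint rectangles slightly overshoot the integral, but the two totals agree to within a few percent already at x = 50.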
 
  • #12
asdf1 said:
Why? That's the part that I don't understand...
It is a Riemann sum: we partition (0,x) (we assume here x is a natural number) into
[0,1],[1,2],[2,3],...,[x-2,x-1],[x-1,x]
and choose as the point of evaluation for each interval the right boundary.
We can consider one term in the Riemann sum as an approximation to the integral over the region of the term:
\log(n) \sim \int_{n-1}^n \log(x)dx=\log(e^{-1}(1+\frac{1}{n-1})^{n-1}n)
Clearly this will be a good approximation if n is large and not so good if n is not large. Thus the approximation over (0,x) cannot make up for its poor start, but the relative error gets better and better. So we have asymptotic convergence. The absolute error will never be small, but the relative error will be. Often, since x! grows rapidly, we do not mind the absolute error being high (or moderate) so long as the relative error is low.
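This per-term behavior can be verified numerically; the following sketch (function name and sample values are my own) computes the gap between ln(n) and the integral of ln over [n-1, n]:

```python
import math

def term_error(n):
    """Gap between ln(n) and the integral of ln(t) over [n-1, n]."""
    # Integral of ln t is t ln t - t; evaluate between n-1 and n.
    integral = (n * math.log(n) - n) - ((n - 1) * math.log(n - 1) - (n - 1))
    return math.log(n) - integral

for n in (2, 10, 100, 1000):
    print(n, term_error(n))  # decreases toward 0 as n grows
```

The individual gaps shrink toward zero, but their running total (the absolute error of the approximation) keeps growing slowly, while the relative error still tends to zero, consistent with the asymptotic convergence described above.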
 
  • #13
thanks! It makes a lot more sense now...
but there's still one I don't get:
What's the difference between the absolute and relative error?
 
  • #14
?? That's a completely different question!

Suppose in measuring a distance of 100 meters, I make an error of 10 cm.

The absolute error is 10 cm. The relative error is that "relative" to the entire measurement: 10 cm/100m = 0.1 m/100m= 0.001 (and, of course, has no units).

There is an Engineering rule of thumb: when you add measurements, the absolute errors add. When you multiply measurements, the relative errors add.

That is, if I measure distance y with absolute error at most Δy and distance x with absolute error at most Δx, then the true values of x and y might be as low as x- Δx and y- Δy. The true value of x+ y might be as low as (x-Δx)+(y-Δy)= (x+y)- (Δx+Δy). The true values of x and y might be as large as x+Δx and y+Δy. The true value of x+ y might be as large as (x+Δx)+ (y+Δy)= (x+y)+(Δx+Δy). That is, the error in x+y might be as large as Δx+ Δy.

On the other hand, if I multiply instead of adding, the true value of xy might be as low as (x- Δx)(y- Δy)= xy- (xΔy+ yΔx)+ (ΔxΔy) which, ignoring the "second order" term ΔxΔy (that's why this is a "rule of thumb" rather than an exact formula), is xy- (xΔy+ yΔx). The true value of xy might be as large as xy+ (xΔy+ yΔx). The absolute error might be as large as xΔy+ yΔx, which depends on x and y as well as the absolute errors Δx and Δy. However, the "relative" error in xy is (xΔy+ yΔx)/xy= Δy/y+ Δx/x, the sum of the two relative errors in x and y.
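The rule of thumb can be illustrated numerically; the measurement values below are made up for illustration:

```python
# Hypothetical measurements: x = 100 m with dx = 0.1 m, y = 50 m with dy = 0.05 m
x, dx = 100.0, 0.1
y, dy = 50.0, 0.05

# Sum: worst-case absolute errors add.
abs_err_sum = dx + dy            # absolute error bound for x + y

# Product: relative errors add (ignoring the second-order term dx*dy).
rel_err_prod = dx / x + dy / y   # relative error bound for x * y
abs_err_prod = x * dy + y * dx   # corresponding absolute error bound

print(abs_err_sum, rel_err_prod, abs_err_prod)
```

Here both measurements have relative error 0.001, so the product's relative error bound is 0.002, while its absolute error bound (10 m²) looks alarmingly large only because xy itself is 5000 m².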
 
  • #15
asdf1 said:
thanks! It makes a lot more sense now...
but there's still one I don't get:
What's the difference between the absolute and relative error?
like HallsofIvy said:
absolute error = |approximate - exact|
relative error = |approximate - exact| / |exact|
think about approximating (x+1)^2 with x^2 for large x:
the relative error becomes small,
the absolute error grows.
The approximation
log(x!) ~ x log(x) - x
does the same.
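A short check of the (x+1)^2 versus x^2 example, with illustrative values of my choosing:

```python
for x in (10, 100, 1000, 10000):
    exact = (x + 1) ** 2
    approx = x ** 2
    abs_err = exact - approx        # equals 2x + 1: grows without bound
    rel_err = abs_err / exact       # tends to 0 as x grows
    print(x, abs_err, rel_err)
```

This makes the contrast concrete: a sequence of approximations can be asymptotically good in the relative sense even while the absolute gap diverges, which is exactly the situation with Stirling's formula.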
 
  • #16
thanks! :)
 
