# Interesting physics question

1. Aug 11, 2005

### asdf1

A factorial question...

Why does lnX!=XlnX-X?

2. Aug 11, 2005

### VietDao29

???
$$\ln{1!} = \ln1 = 0$$
$$1\ln1 - 1 = 1 \times 0 - 1 = -1$$
So 0 = -1??
Viet Dao,

3. Aug 11, 2005

### Timbuqtu

I think you're referring to the Stirling approximation:

$$\ln n! = \ln 1 + \ln 2 + \ldots + \ln n = \sum_{k=1}^{n}\ln k \approx \int_1^n \ln x dx = n \ln n - n + 1 \approx n \ln n - n$$

where the approximation gets relatively better as n becomes larger.
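As a quick numerical sanity check (a minimal Python sketch, not part of the original thread), one can compare the sum $\sum_{k=1}^n \ln k = \ln n!$ with the integral value $n\ln n - n + 1$ from the post above:

```python
import math

# Compare ln(n!) = ln 1 + ln 2 + ... + ln n with the integral
# value n*ln(n) - n + 1 for growing n.
for n in (10, 100, 1000):
    log_sum = sum(math.log(k) for k in range(1, n + 1))  # ln(n!)
    integral = n * math.log(n) - n + 1                   # integral of ln x from 1 to n
    rel_err = abs(log_sum - integral) / log_sum
    print(f"n={n}: ln(n!)={log_sum:.3f}, integral={integral:.3f}, "
          f"relative error={rel_err:.5f}")
```

The relative error visibly shrinks as n grows, which is exactly what "the approximation gets relatively better" means here.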

4. Aug 11, 2005

### HallsofIvy

Staff Emeritus
What you said was "Why does lnX!=XlnX-X?". Now you tell us that not only did you NOT mean "ln X != X ln X - X", but also that you already KNOW the answer to your (unstated) question. What was your purpose in posting that?

5. Aug 11, 2005

### mathwonk

he means antiderivative maybe. answer: check it.

6. Aug 11, 2005

### Subhasish Mandal

What is the relation between the energy and the time period of a simple pendulum when the small-angle approximation is not used? Please also show the graph.

7. Aug 11, 2005

### Subhasish Mandal

problem on SHM

:rofl:

8. Aug 12, 2005

### asdf1

?
I am referring to the Stirling approximation (sorry, I forgot to add that at the end of my question)...
I saw that equation in the "Advanced Engineering Mathematics" book by Kreyszig as part of the solution to a problem...
But what I wonder is: where did the Stirling approximation come from?

9. Aug 12, 2005

### lurflurf

$$\log(x!)=\sum_{n=1}^x \log(n) \sim \int_0^x \log(t) dt=x\log(x)-x$$
where ~ here means "is asymptotically equal to" for large x;
that is, the integral becomes a good approximation of the sum as x becomes large.

Last edited: Aug 12, 2005
10. Aug 13, 2005

### asdf1

Why? That's the part that I don't understand...

11. Aug 13, 2005

### Galileo

You can approximate the sum by an integral. If you draw a graph, the sum $\sum_{n=1}^{x}\ln n$ is equal to the area of x rectangles, each of width 1, with heights ln(1), ln(2), ..., ln(x).
So you can approximate this area by the integral $\int_1^x \ln t \, dt$. Drawing a picture may help.

12. Aug 13, 2005

### lurflurf

It is a Riemann sum. We partition (0,x) (we assume here that x is a natural number) into
[0,1], [1,2], [2,3], ..., [x-2,x-1], [x-1,x]
and choose as the point of evaluation for each interval its right boundary.
We can then consider one term of the Riemann sum as an approximation to the integral over that interval:
$$\log(n) \sim \int_{n-1}^n \log(x)\,dx=\log\left(e^{-1}\left(1+\frac{1}{n-1}\right)^{n-1}n\right)$$
Clearly this is a good approximation when n is large and not so good when n is small. Thus the approximation over (0,x) cannot make up for its poor start, but the relative error gets better and better, so we have asymptotic convergence. The absolute error will never be small, but the relative error will. Since x! grows rapidly, we often do not mind the absolute error being high (or moderate) so long as the relative error is low.
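The behavior described above can be seen numerically. A minimal Python sketch (not part of the original thread; it uses `math.lgamma(x+1)` to compute ln(x!) stably):

```python
import math

# For the Stirling estimate x*ln(x) - x, the absolute error vs ln(x!)
# keeps growing, but the relative error shrinks: asymptotic convergence.
for x in (10, 100, 1000, 10000):
    exact = math.lgamma(x + 1)        # ln(x!) via the log-gamma function
    approx = x * math.log(x) - x      # Stirling estimate
    abs_err = exact - approx
    rel_err = abs_err / exact
    print(f"x={x}: absolute error={abs_err:.3f}, relative error={rel_err:.6f}")
```

The absolute error increases with x while the relative error goes to zero, matching the "poor start, improving relative error" picture above.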

Last edited: Aug 13, 2005
13. Aug 14, 2005

### asdf1

thanks! It makes a lot more sense now...
but there's still one I don't get:
What's the difference between the absolute and relative error?

14. Aug 14, 2005

### HallsofIvy

Staff Emeritus
?? That's a completely different question!!

Suppose in measuring a distance of 100 meters, I make an error of 10 cm.

The absolute error is 10 cm. The relative error is that "relative" to the entire measurement: 10 cm/100m = 0.1 m/100m= 0.001 (and, of course, has no units).

There is an Engineering rule of thumb: when you add measurements, the absolute errors add. When you multiply measurements, the relative errors add.

That is, if I measure distance y with absolute error at most Δy and distance x with absolute error at most Δx, then the true values of x and y might be as low as x − Δx and y − Δy. The true value of x + y might be as low as (x − Δx) + (y − Δy) = (x + y) − (Δx + Δy). The true values of x and y might be as large as x + Δx and y + Δy, so the true value of x + y might be as large as (x + Δx) + (y + Δy) = (x + y) + (Δx + Δy). That is, the error in x + y might be as large as Δx + Δy.

On the other hand, if I multiply instead of adding, the true value of xy might be as low as (x − Δx)(y − Δy) = xy − (xΔy + yΔx) + ΔxΔy which, ignoring the "second order" term ΔxΔy (that's why this is a "rule of thumb" rather than an exact formula), is xy − (xΔy + yΔx). The true value of xy might be as large as xy + (xΔy + yΔx). So the absolute error might be as large as xΔy + yΔx, which depends on x and y as well as the absolute errors Δx and Δy. However, the "relative" error in xy is (xΔy + yΔx)/xy = Δy/y + Δx/x, the sum of the two relative errors in x and y.
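A minimal Python sketch of this rule of thumb (the measurement values and errors here are made-up illustrative numbers, not from the thread):

```python
# Rule of thumb: adding measurements adds absolute errors;
# multiplying measurements adds relative errors (to first order).
x, dx = 100.0, 0.1   # hypothetical measurement and its absolute error
y, dy = 50.0, 0.2

# Addition: worst-case absolute error in x + y
abs_err_sum = dx + dy                  # Dx + Dy

# Multiplication: first-order worst-case errors in x * y
abs_err_prod = x * dy + y * dx         # x*Dy + y*Dx
rel_err_prod = dx / x + dy / y         # Dx/x + Dy/y

# The exact worst case differs from the first-order one by the
# "second order" term Dx*Dy that the rule of thumb ignores.
exact_low = (x - dx) * (y - dy)
print(x * y - exact_low, abs_err_prod)
```

Running this shows the exact worst-case deviation and the first-order estimate differing only by Δx·Δy, which is why the rule works well when the errors are small.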

15. Aug 14, 2005

### lurflurf

Like HallsofIvy said:
absolute error = |approximate − exact|
relative error = |approximate − exact| / |exact|
Think about approximating (x+1)^2 with x^2 for large x:
the relative error becomes small
while the absolute error grows.
The approximation
log(x!) ~ x log(x) − x
does the same.
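The (x+1)^2 vs x^2 example above is easy to check numerically; a minimal Python sketch (not part of the original thread):

```python
# Approximating (x+1)^2 by x^2: the absolute error 2x+1 grows without
# bound, but the relative error (2x+1)/(x+1)^2 tends to 0 as x grows.
for x in (10, 100, 1000):
    exact = (x + 1) ** 2
    approx = x ** 2
    abs_err = exact - approx           # equals 2x + 1
    rel_err = abs_err / exact
    print(f"x={x}: absolute error={abs_err}, relative error={rel_err:.5f}")
```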

16. Aug 15, 2005

thanks! :)