**! question...**


- Thread starter asdf1

- #1


Why does ln X! = X ln X - X?

- #2

VietDao29

Homework Helper


???

[tex]\ln{1!} = \ln1 = 0[/tex]

[tex]1\ln1 - 1 = 1 \times 0 - 1 = -1[/tex]

So 0 = -1??

Viet Dao,


- #3


[tex]\ln n! = \ln 1 + \ln 2 + \ldots + \ln n = \sum_{k=1}^{n}\ln k \approx \int_1^n \ln x dx = n \ln n - n + 1 \approx n \ln n - n[/tex]

where the approximation gets relatively better as n becomes larger.
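A quick numerical sanity check of this (an added illustration; the sample values of n are arbitrary):

```python
import math

# Compare ln(n!) (computed as an exact sum) with the Stirling-type
# estimate n*ln(n) - n. The absolute gap keeps growing (roughly like
# (1/2)*ln(n)), but the relative error tends to 0.
for n in [10, 100, 1000, 10000]:
    exact = sum(math.log(k) for k in range(1, n + 1))   # ln(n!)
    approx = n * math.log(n) - n
    rel_err = abs(exact - approx) / exact
    print(f"n={n:6d}  ln(n!)={exact:12.3f}  approx={approx:12.3f}  rel.err={rel_err:.5f}")
```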

- #4

HallsofIvy

Science Advisor

Homework Helper


- #5

mathwonk

Science Advisor

Homework Helper


he means antiderivative maybe. answer: check it.

- #6


- #7


undefined :rofl:

- #8


I am referring to the Stirling approximation (sorry, I forgot to add that at the end of my question)...

I saw that equation in the "Advanced Engineering Mathematics" book by Kreyszig as part of the solution to a problem...

But what I wonder is: where did the Stirling approximation come from?

- #9

lurflurf

Homework Helper


asdf1 said:
But what I wonder is: where did the Stirling approximation come from?

[tex]\log(x!)=\sum_{n=1}^x \log(n) \sim \int_0^x \log(t)\, dt=x\log(x)-x[/tex]

where ~ here means "is asymptotic to" for large x,

that is, the integral becomes a good approximation of the sum as x becomes large.


- #10


Why? That's the part that I don't understand...

- #11

Galileo

Science Advisor

Homework Helper


The sum [itex]\ln 1 + \ln 2 + \cdots + \ln x[/itex] is the total area of unit-width rectangles whose heights follow the graph of [itex]\ln t[/itex]. So you can approximate this area by the integral [itex]\int_1^x \ln t \, dt[/itex]. Drawing a picture may help.

- #12

lurflurf

Homework Helper


asdf1 said:
Why? That's the part that I don't understand...

It is a Riemann sum. We partition (0,x) into (we assume here x is a natural number n)

[0,1],[1,2],[2,3],...,[n-2,n-1],[n-1,n]

and choose as the point of evaluation for each interval the right boundary.

We can consider one term in the Riemann sum as an approximation to the integral over the region of the term.

[tex]\log(n) \sim \int_{n-1}^n \log(x)dx=\log(e^{-1}(1+\frac{1}{n-1})^{n-1}n)[/tex]

Clearly this will be a good approximation if n is large and not so good if n is not large. Thus the approximation over (0,x) cannot make up for its poor start, but the relative error gets better and better, so we have asymptotic convergence. The absolute error will never be small, but the relative error will be. Often, since x! grows rapidly, we do not mind the absolute error being high (or moderate) so long as the relative error is low.
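A small numerical sketch of the per-slice comparison above (an added illustration; the sample values of n are arbitrary): each term log(n) against the integral of log over [n-1, n].

```python
import math

def slice_integral(n):
    # Integral of log(x) over [n-1, n]: n*log(n) - (n-1)*log(n-1) - 1
    return n * math.log(n) - (n - 1) * math.log(n - 1) - 1

# The gap log(n) - integral shrinks as n grows: early slices are poor,
# later slices are excellent, so the relative error of the whole sum improves.
for n in [2, 5, 50, 5000]:
    print(f"n={n:5d}  log(n)={math.log(n):.6f}  slice integral={slice_integral(n):.6f}")
```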


- #13


but there's still one thing I don't get:

What's the difference between the absolute and relative error?

- #14

HallsofIvy

Science Advisor

Homework Helper


Suppose in measuring a distance of 100 meters, I make an error of 10 cm. The absolute error is 10 cm. The relative error is the error divided by the measured value: (0.1 m)/(100 m) = 0.001, or 0.1%.

There is an Engineering rule of thumb: when you add measurements, the absolute errors add. When you multiply measurements, the relative errors add.

That is, if I measure distance y with absolute error at most Δy and distance x with absolute error at most Δx, then the true values of x and y might be as low as x - Δx and y - Δy. The true value of x + y might be as low as (x - Δx) + (y - Δy) = (x + y) - (Δx + Δy). The true values of x and y might be as large as x + Δx and y + Δy. The true value of x + y might be as large as (x + Δx) + (y + Δy) = (x + y) + (Δx + Δy). That is, the error in x + y might be as large as Δx + Δy.

On the other hand, if I multiply instead of adding, the true value of xy might be as low as (x - Δx)(y - Δy) = xy - (xΔy + yΔx) + ΔxΔy which, ignoring the "second order" term ΔxΔy (that's why this is a "rule of thumb" rather than an exact formula), is xy - (xΔy + yΔx). The true value of xy might be as large as xy + (xΔy + yΔx). So the absolute error might be as large as xΔy + yΔx, which depends on x and y as well as the absolute errors Δx and Δy. However, the "relative" error in xy is (xΔy + yΔx)/xy = Δy/y + Δx/x, the sum of the two relative errors in x and y.
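A quick numerical check of this rule of thumb (an added illustration; the values of x, y, Δx, Δy are made up):

```python
x, y = 100.0, 40.0     # hypothetical measured values
dx, dy = 0.5, 0.2      # hypothetical absolute measurement errors

# Addition: the worst-case absolute error of x + y is exactly dx + dy.
sum_abs_err = ((x + dx) + (y + dy)) - (x + y)

# Multiplication: the worst-case relative error of x*y is dx/x + dy/y,
# up to the second-order term (dx*dy)/(x*y).
prod_rel_err = ((x + dx) * (y + dy) - x * y) / (x * y)
rule_of_thumb = dx / x + dy / y

print(sum_abs_err)                   # ~0.7 = dx + dy
print(prod_rel_err, rule_of_thumb)   # ~0.010025 vs 0.01
```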

- #15

lurflurf

Homework Helper


asdf1 said:
but there's still one thing I don't get:

What's the difference between the absolute and relative error?

Like HallsofIvy said:

absolute error = |approximate - exact|

relative error = |approximate - exact| / |exact|

Think about approximating (x+1)^2 with x^2 for large x:

the relative error becomes small,

the absolute error grows.

The approximation

log(x!) ~ x log(x) - x

does the same.
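The same idea in code (an added illustration; the sample values of x are arbitrary):

```python
# Approximating (x+1)^2 by x^2: the absolute error 2x+1 grows without
# bound, while the relative error (2x+1)/(x+1)^2 tends to 0.
for x in [10, 100, 1000, 1000000]:
    exact = (x + 1) ** 2
    approx = x ** 2
    abs_err = exact - approx
    rel_err = abs_err / exact
    print(f"x={x:8d}  abs err={abs_err:8d}  rel err={rel_err:.8f}")
```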

- #16


thanks! :)