
Shortcut of taking factorial?

  1. Jul 20, 2006 #1

    Does anyone know a shortcut or trick for computing a factorial directly?
    For example: 9! = 9*8*7*6*5*4*3*2*1.
    Multiplying out all of these terms is a very time-consuming way to get the answer.
    Please tell me an easier method.
    Thank you
  3. Jul 21, 2006 #2
    In what context? Do you mean that you want to write the fastest algorithm possible in some program that you're doing? There isn't a "simpler" formula as far as I know.

    If you have a program that does integrals and you're too lazy to code a loop, then you can use the fact that [itex]n! = \Gamma(n+1) = \int_0^\infty t^n e^{-t} dt[/itex] (or, if you're really bored, you can use integration by parts to do that integral in your head when you want to calculate factorials. That's strictly harder than just multiplying the numbers though!). Of course, all of the common software packages that do integrals also have a built-in factorial function (in Maple you just use n!, and in Mathematica it's Factorial[n]).
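A quick numerical sanity check of this identity (my sketch, not from the thread): Python's `math.gamma` implements the Gamma function, and the integral itself can be evaluated crudely with the trapezoid rule on a finite interval (the tail beyond t = 60 is negligible for small n):

```python
import math

# math.gamma implements the Gamma function, so Gamma(n+1) should equal n!
for n in range(10):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# Crude trapezoid-rule check of the integral definition for n = 5:
# integrate t^5 * e^(-t) over [0, 60].
n, T, steps = 5, 60.0, 100_000
h = T / steps
f = lambda t: t ** n * math.exp(-t)
total = h * (0.5 * (f(0.0) + f(T)) + sum(f(i * h) for i in range(1, steps)))
print(total)  # close to 120 = 5!
```

As the poster says, this is strictly more work than just multiplying 5*4*3*2*1.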

    If you want to code something to calculate factorials, then just use a loop, or recursion if you want to be more exciting! The mathworld page has other formulas, but I don't consider any of them simpler (read: none of them require less computation).

    I suggest avoiding arithmetic whenever possible, as most problems can be made arbitrarily computationally difficult by choosing sufficiently inconvenient numbers :smile:
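For reference, the loop and recursion approaches mentioned above can be sketched in Python as:

```python
def factorial_loop(n: int) -> int:
    """Iterative factorial: one multiplication per step of the loop."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_rec(n: int) -> int:
    """Recursive factorial, the 'more exciting' version."""
    return 1 if n <= 1 else n * factorial_rec(n - 1)

print(factorial_loop(9))  # 362880
print(factorial_rec(9))   # 362880
```

The loop is the practical choice; the recursive version hits the interpreter's recursion limit for large n.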
    Last edited: Jul 21, 2006
  4. Jul 21, 2006 #3


    Science Advisor

    Here's a shortcut for you, just drop the "*1",
    for example : 9! = 9*8*7*6*5*4*3*2.

    BTW, yes this was meant as a joke. :p
  5. Jul 21, 2006 #4


    Science Advisor
    Homework Helper

    Integration by parts will just lead you back to n! if you are differentiating the t^n and integrating e^(-t). What did you have in mind for 'by parts' here (even if not efficient)? If you mean to approximate by a finite interval, you will then need some way of calculating exp(-x) by hand for x larger than I'd like!


    The MathWorld factorial page has many different algorithms (and code for them) as well as approximations. I don't know what would be best to use by hand; I'd lean towards breaking it up into smaller products that I "know" offhand or are "easy", over directly multiplying in some arbitrary order.
  6. Jul 21, 2006 #5
    That is what I meant (ie. you can choose that to be your definition and do it that way if you're bored - it's strictly harder because you still have to do all of the multiplication, and you also have to differentiate and integrate)! I wasn't being particularly serious :rofl:

    That page is pretty interesting. I definitely should have mentioned the Stirling approximation, at least!
    Last edited: Jul 21, 2006
  7. Jul 21, 2006 #6


    Science Advisor

    There is no "shortcut" way of calculating n! (Before uart got in his jab, I was tempted to say "use a calculator".).

    However, in more complicated expressions, for example, calculating the permutation of n things taken m at a time, [itex]\frac{n!}{m!}[/itex] with m< n, you can do a lot of cancelling: every factor from m down to 1 cancels:
    [tex]\frac{n!}{m!}= n(n-1)(n-2)\cdot\cdot\cdot (m+1)[/tex]
    With the binomial coefficient, [itex]\frac{n!}{i!(n-i)!}[/itex], you can probably do a lot more cancelling.
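A small Python sketch of the cancellation just described (`falling` and `binom` are my names; the standard library's `comb` is used only as a cross-check):

```python
from math import comb, factorial, prod

def falling(n: int, m: int) -> int:
    """n!/m! for m < n: every factor from m down to 1 cancels,
    leaving only the product of m+1 up to n."""
    return prod(range(m + 1, n + 1))

def binom(n: int, i: int) -> int:
    """n!/(i!(n-i)!) with the larger of i! and (n-i)! cancelled first."""
    i = min(i, n - i)
    return prod(range(n - i + 1, n + 1)) // factorial(i)

print(falling(9, 6))  # 9*8*7 = 504
print(binom(10, 3))   # 120
```

Only n - m multiplications are needed instead of computing two full factorials.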
  8. Jul 21, 2006 #7


    Staff Emeritus
    Science Advisor
    Gold Member

    I'm sure you could find a factorial table on the web somewhere. (Like this one) Then you can just look up the answer on the table.

    Or, if you're bored, make your own table. Each additional entry in the table requires just a single additional multiplication, after all, so it should proceed rather quickly.
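The table-building idea sketched in Python (one multiplication per new entry, as described):

```python
def factorial_table(limit: int) -> list[int]:
    """Build a table where table[n] == n!; each new entry costs
    exactly one multiplication."""
    table = [1]                      # 0! = 1
    for n in range(1, limit + 1):
        table.append(table[-1] * n)  # n! = (n-1)! * n
    return table

table = factorial_table(12)
print(table[9])   # 362880
print(table[12])  # 479001600
```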

    - Warren
  9. Jul 21, 2006 #8
    HOLY MOLY! Would you check out that table... dang...

    My calculator only calculates up to 69!, so using a calculator has
    its limits...
  10. Jul 21, 2006 #9


    Staff Emeritus
    Science Advisor
    Gold Member

    My computer took a tiny fraction of a second to calculate the factorial of 1,000 (the 2,568-digit output is omitted here). The factorial of 100,000 took only about a second, but the number is far too long to post here (456,574 digits).
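For anyone wanting to reproduce this, Python's arbitrary-precision integers handle it directly; a minimal sketch (my addition) that also gets the digit count of 100,000! via `lgamma`, so the giant number never has to be printed:

```python
import math

# Exact 1000! as an arbitrary-precision integer, then count its digits.
n1000 = math.factorial(1000)
print(len(str(n1000)))  # 2568

# Digit count of 100000! without materialising the number:
# digits = floor(log10(n!)) + 1, and ln(n!) = lgamma(n + 1).
digits_100k = int(math.lgamma(100_001) / math.log(10)) + 1
print(digits_100k)  # 456574
```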

    - Warren
  11. Jul 21, 2006 #10


    Science Advisor
    Homework Helper

  12. Jul 22, 2006 #11


    Science Advisor
    Homework Helper

    Does this help you?

    [tex]9! = 2^7\times3^4\times5\times7[/tex]

    I do recommend getting a calculator. :)
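That factorization can be generated for any n with Legendre's formula: the exponent of a prime p in n! is the sum of floor(n/p^k) over k. A small Python sketch (function name is mine):

```python
def factorial_prime_factorization(n: int) -> dict[int, int]:
    """Exponent of each prime p in n! via Legendre's formula:
    e_p = floor(n/p) + floor(n/p^2) + floor(n/p^3) + ..."""
    # Sieve of Eratosthenes for the primes up to n.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    exponents = {}
    for p in range(2, n + 1):
        if is_prime[p]:
            e, pk = 0, p
            while pk <= n:
                e += n // pk
                pk *= p
            exponents[p] = e
    return exponents

print(factorial_prime_factorization(9))  # {2: 7, 3: 4, 5: 1, 7: 1}
```

For 9! the exponent of 2 is floor(9/2) + floor(9/4) + floor(9/8) = 4 + 2 + 1 = 7, matching the factorization above.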
  13. Jul 22, 2006 #12
    I am surprised that no one has mentioned Stirling's formula, named after James Stirling (1692-1770), thus well before calculators.

    [tex]N! \approx \left(\frac{N}{e}\right)^{N} \sqrt{2\pi N}[/tex]
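A quick Python check of the approximation (a sketch using only the standard math module):

```python
import math

def stirling(n: float) -> float:
    """Stirling's approximation: (n/e)^n * sqrt(2*pi*n)."""
    return (n / math.e) ** n * math.sqrt(2.0 * math.pi * n)

# The ratio to the exact value creeps toward 1 as n grows
# (roughly 1 - 1/(12n)):
for n in (5, 10, 20):
    print(n, stirling(n) / math.factorial(n))
```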
    Last edited: Jul 22, 2006
  14. Jul 22, 2006 #13
    It should be mentioned that this is only an approximation. It's a very accurate one, and becomes increasingly accurate for large N. I believe 10! is off by about 1%. But for an exact formula, I don't think there's a way to do it other than straight out multiplication.
  15. Jul 22, 2006 #14

    the "Stirling formula" is just an asymptotic expansion... the error term increases madly... :frown: :frown:

    [tex] n!\sim \sqrt{2\pi n}\, n^{n}e^{-n} [/tex] meaning that...

    [tex] \frac{n!}{\sqrt{2\pi n}\, n^{n}e^{-n}}\rightarrow 1 [/tex] as [itex]n\rightarrow\infty[/itex]
  16. Jul 22, 2006 #15
    Checking it out for 100!, using pari, I got (Stirling value)/100! = .999167; is that getting close or what?

    For 1000!, the ratio as above = .999916670.
    Last edited: Jul 22, 2006
  17. Jul 22, 2006 #16
    Yes, but what he means is that [itex]n! - \sqrt{2\pi n}n^n e^{-n} \rightarrow \infty[/itex] for [itex]n \rightarrow \infty[/itex]. So yes, it's getting close fractionally, but it's not getting close absolutely. Still, the Stirling approximation is useful almost everywhere when you deal with large factorials. Most macroscopic thermodynamics problems would be a whole lot more difficult to deal with analytically without it!
    Last edited: Jul 22, 2006
  18. Jul 23, 2006 #17


    Science Advisor
    Homework Helper

    Stirling's is in the 'approximations' section of the link I (and Alkatran) gave. There are also versions with more terms of the series to get better approximations. As already mentioned, the absolute error gets atrocious using Stirling's approximation for the factorial; the factorial grows much more rapidly than the relative error decreases for this simple approximation to work. (Note the relative error is something like 1+O(1/n), actually asymptotic to 1+1/(12n).)

    However, you can use more terms of the Stirling series. I can't find any decent online references at the moment, so the Wikipedia article on Stirling's approximation will have to do [link broken];

    see the "Derivation" section. The error term given for the approximation for log n! is a little misleading though, the constant does depend rather heavily on "m". For a fixed "m" and n-> infinity the absolute error of this approximation does go to zero, but for a fixed n and m->infinity it does not.

    The consequence is that for a fixed n, there is a limit to how small the absolute error can be from this approach. Not a big problem though: you can always use log(n!) = log((n+k)!) - log(n+k) - ... - log(n+1) and find a "k" where log((n+k)!) can be found with an appropriate absolute error using Stirling's. Approximate the other logs (to as small an absolute error as you need), then exponentiate, and you can get n! to as small a relative error as you like, enough to ensure an exact value of n!.

    This is probably not the best way to find n! exactly, but more terms in Stirling's series will make the relative error plummet fairly fast (n! is still growing at an obscene rate, though).
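To illustrate the effect of one more term of the series, here's a small Python sketch applying the 1 + 1/(12n) correction factor mentioned earlier in the thread:

```python
import math

def stirling(n: float) -> float:
    """Basic Stirling approximation: (n/e)^n * sqrt(2*pi*n)."""
    return (n / math.e) ** n * math.sqrt(2.0 * math.pi * n)

def stirling_corrected(n: float) -> float:
    """One more term of the Stirling series: multiply by (1 + 1/(12n))."""
    return stirling(n) * (1.0 + 1.0 / (12.0 * n))

n = 20
exact = math.factorial(n)
print(stirling(n) / exact)            # off by roughly 1/(12*20), about 0.4%
print(stirling_corrected(n) / exact)  # relative error drops by orders of magnitude
```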
    Last edited by a moderator: May 2, 2017
  19. Jul 23, 2006 #18
    An interesting side point which I found from the references Alkatran gave is the question of "Why is Gamma(n) = (n-1)! and not n!?"

    Answer: The normalization of the gamma function to Gamma(n+1) instead of Gamma(n) is due to Legendre and void of any rationality.

    That's good to know, 'cause I never understood that before!
  20. Jul 24, 2006 #19


    Science Advisor

    I've never looked into the history, but I have always assumed that Gamma was defined with [tex]t^{x-1}[/tex] in the integral instead of [tex]t^{x}[/tex] (which would of course have made Gamma(n) equal to n! instead of (n-1)!) because the resulting integral is well defined for all x>0. With the alternate definition the integral would have been well defined for x>-1, which is not quite as neat.
    Last edited: Jul 24, 2006
  21. Jul 24, 2006 #20


    Homework Helper

    Some arguments for the unit shift in [itex]\Gamma (n) = (n-1)![/itex]:

    Historically the gamma function was defined as the limit of a finite product, namely

    [tex]\Gamma (z) = \lim_{k\rightarrow \infty} \frac{k! k^{z-1}}{z(z+1)\cdots (z+k-1)}[/tex]​

    which converges for all complex [tex]z\neq 0, -1, -2,\ldots ,[/tex] but this still has the form [tex]\Gamma (z) = f(z-1)[/tex] where f(z) seems somehow simpler.

    Equivalently, the infinite product form used by Euler was

    [tex]\Gamma (z) = \frac{1}{z}\prod_{k=1}^{\infty} \left( 1+\frac{z}{k}\right) ^{-1}\left( 1+\frac{1}{k}\right) ^{z}[/tex]​

    also converging for all complex [tex]z\neq 0, -1, -2,\ldots ,[/tex] but this does have the look of a function that would not be any more attractive if it were shifted by a unit, and the same remarks hold for the Weierstrass product form of the gamma function, which is

    [tex]\frac{1}{\Gamma (z)} = ze^{\gamma z}\prod_{k=1}^{\infty} \left( 1+\frac{z}{k}\right) e^{-\frac{z}{k}}[/tex]​

    where [tex]\gamma =0.577\ldots [/tex] is Euler's constant. Also of note is that the Eulerian integral of the second kind, which is the definition of recent conversation:

    [tex]\Gamma (z) = \int_{0}^{\infty}e^{-t}t^{z-1} \, dt ,[/tex]​

    is notable in that it is the Mellin transform of [tex]e^{-t}[/tex] since the Mellin transform of f(t) is

    [tex]\mathcal{M} \left[ f(t) \right] = \int_{0}^{\infty}f(t) t^{z-1} \, dt .[/tex]​

    :rolleyes: ...but I think that the most compelling reason for the shift lies in the functional equation for the Riemann zeta function, which has the nice symmetric form

    [tex] \Gamma \left( \frac{s}{2}\right) \pi^{-\frac{s}{2}}\zeta (s) = \Gamma \left( \frac{1-s}{2}\right) \pi^{-\frac{1-s}{2}}\zeta (1-s) [/tex] ​

    would look like this

    [tex] \left( \frac{s-2}{2}\right) ! \pi^{-\frac{s}{2}}\zeta (s) = \left( \frac{-1-s}{2}\right) ! \pi^{-\frac{1-s}{2}}\zeta (1-s) [/tex] ​

    without the shift.
    Last edited: Jul 25, 2006