
Algorithmic Complexity and Big-Oh notation

  Dec 20, 2008 #1
    Hi Guys,
    Is algorithmic complexity determined mostly for primality tests or for onto prime-generating functions?
    Say I have the function Floor[(n!)/(n+1)], and suppose that for every n it produces a prime. Would I have to use the trigonometric definition of the floor function to determine the complexity of this algorithm? Would it be O(n!)?

    Now, on the other hand, say I have the function Floor[(n!)/(n+1)], where n is an integer to be tested for primality. The complexity of this formula is O(n!), right? And I would still have to use the trigonometric definition of the floor function, right? That would render the algorithm inefficient, because asymptotically it would take an extremely long time to test for primality.

    Note that these are purely hypothetical cases. The above function is neither an onto prime generator nor a primality test.

    Sorry if this question makes me seem inexperienced, because I am. This is my first experience with number theory and algorithmic complexity.
     
  Dec 20, 2008 #2

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    I must say that I'm quite confused about what you're asking.

    If you had a function that required floor(n! / (n+1)) steps to run, then the algorithmic complexity would indeed be O(n!). Actually, you can make a stronger statement: it's actually [itex]\Theta( (n-1)! )[/itex], although the difference between n! and (n-1)! isn't often relevant.
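    To see where the stronger statement comes from, just rewrite the quotient:
    [tex]\frac{n!}{n+1} = (n-1)! \cdot \frac{n}{n+1} \sim (n-1)!.[/tex]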


    If you have a function that needs to compute the number floor(n! / (n+1)), and that is its most expensive step, then you can write the function so that it runs in time O(n^2 log n), or even better (at least according to Wikipedia).
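    For instance, a bottom-up sketch in Java (the language of the program linked later in this thread) might look like the following; the class and method names are just made up for illustration:

[code]
import java.math.BigInteger;

public class FloorFactorialQuotient {
    // floor(n! / (n+1)) computed directly with big integers.
    // No trigonometric definition of floor is needed: for nonnegative
    // operands, BigInteger.divide already truncates, which is the floor.
    static BigInteger floorFactOver(int n) {
        BigInteger fact = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            // The accumulator has O(i log i) bits, and multiplying it by a
            // machine-sized integer costs time linear in its length, so the
            // whole loop takes roughly O(n^2 log n) bit operations -- the
            // bound quoted above.
            fact = fact.multiply(BigInteger.valueOf(i));
        }
        return fact.divide(BigInteger.valueOf(n + 1));
    }

    public static void main(String[] args) {
        System.out.println(floorFactOver(10)); // prints 329890 = floor(3628800/11)
    }
}
[/code]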
     
  Dec 23, 2008 #3
    I was looking on the internet, and I found some fast methods of computing factorials.
    Could someone explain to me how I could figure out the run time, in Big-Oh notation, of the following program written in Java?
    http://www.luschny.de/math/factorial/index.html

    I also found what appears to be an even faster method of computing factorials, with a claimed run-time complexity of O(n): http://www.maik.ru/abstract/matnt/8/matnt0783_abstract.pdf
    Are there any flaws in Ladikov's argument, and is this the fastest method (even faster than the one in the first link)?
     
  Dec 23, 2008 #4

    CRGreathouse

    Science Advisor
    Homework Helper

    Ladikov is apparently counting multiplication as O(1), which makes the analysis worthless. n! ~ (n/e)^n * sqrt(2 pi n), so the answer alone is Theta(n log n) bits long, and you need to spend Omega(n log n) time to compute the factorial; otherwise you won't even have enough time to write down the answer.
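    Spelled out with Stirling's formula, the bit length of the result is
    [tex]\log_2 n! = n\log_2 n - n\log_2 e + O(\log n) = \Theta(n\log n).[/tex]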

    Further, there's no method known (AFAIK) to compute the factorial anywhere near that fast; the best known bound is a smidge more than
    [tex]n(\log n)^2\log\log n[/tex]
    and that method would be impractical. For truly hideous computations, the best practical method would be
    [tex]O(n(\log n)^2(\log\log n)^2\log\log\log n)[/tex]
    with a factor-decomposition algorithm using Schönhage-Strassen multiplication.
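    Loosely, the "factor decomposition" idea is to use Legendre's formula for the exponent of each prime in n! and assemble the result from prime powers. A naive Java sketch of just that idea follows (nothing like the optimized algorithm from the literature, and the names are made up):

[code]
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class FactoredFactorial {
    // Exponent of the prime p in n!, by Legendre's formula:
    //   e_p(n!) = floor(n/p) + floor(n/p^2) + floor(n/p^3) + ...
    static int legendreExponent(int n, int p) {
        int e = 0;
        for (long q = p; q <= n; q *= p) e += n / q;
        return e;
    }

    // Sieve of Eratosthenes; fine for a sketch.
    static List<Integer> primesUpTo(int n) {
        boolean[] composite = new boolean[n + 1];
        List<Integer> primes = new ArrayList<>();
        for (int i = 2; i <= n; i++) {
            if (!composite[i]) {
                primes.add(i);
                for (long j = (long) i * i; j <= n; j += i) composite[(int) j] = true;
            }
        }
        return primes;
    }

    // n! as a product of prime powers; BigInteger.pow uses repeated
    // squaring internally, so each prime power costs few multiplications.
    static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int p : primesUpTo(n)) {
            result = result.multiply(BigInteger.valueOf(p).pow(legendreExponent(n, p)));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(10)); // prints 3628800
    }
}
[/code]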
     
  Dec 23, 2008 #5
    According to Wikipedia, using bottom-up multiplication, the run-time complexity of computing the factorial is [tex]O(n^2 \log n)[/tex].

    But could someone explain to me how I could figure out the run time, in Big-Oh notation, of the following program written in Java?
    http://www.luschny.de/math/factorial/index.html
     
  Feb 27, 2009 #6
    No need. :) In general, calculating the factorial takes O(n log n M(n)) time, where M(n) is the run time of the multiplication algorithm used. For astronomically large numbers, the asymptotically fastest method of "fast" multiplication is Fürer's algorithm, though it is impractical.
     
  Feb 27, 2009 #7

    CRGreathouse

    Science Advisor
    Homework Helper

    Why would any person consider implementing Fürer's multiplication and then using it for bottom-up calculation of the factorial? You could get far better performance with a divide-and-conquer approach, even if only using Karatsuba.
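    For concreteness, the divide-and-conquer idea is just to split the product range in half so that the two operands of each multiplication stay roughly balanced, which is exactly when a subquadratic routine such as Karatsuba pays off. A rough Java sketch (names made up):

[code]
import java.math.BigInteger;

public class SplitFactorial {
    // Product of the integers in [lo, hi], splitting the range in half so
    // that the operands of each multiplication stay roughly balanced --
    // which is what lets a subquadratic multiplication routine such as
    // Karatsuba do the real work.
    static BigInteger rangeProduct(long lo, long hi) {
        if (lo > hi) return BigInteger.ONE;
        if (lo == hi) return BigInteger.valueOf(lo);
        long mid = lo + (hi - lo) / 2;
        return rangeProduct(lo, mid).multiply(rangeProduct(mid + 1, hi));
    }

    static BigInteger factorial(int n) {
        return rangeProduct(2, n);
    }

    public static void main(String[] args) {
        System.out.println(factorial(20)); // prints 2432902008176640000
    }
}
[/code]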
     