Algorithmic Complexity and Big-Oh notation

flouran
Hi Guys,
Is algorithmic complexity determined mostly for primality tests or for onto prime-generating functions?
Say I have the function Floor[(n!)/(n+1)], and for every n it produces primes. Would I have to use the trig definition of the floor function to determine the complexity of this algorithm? Would it be O(n!)?

Now, on the other hand, say I have the function Floor[(n!)/(n+1)], where n is an integer to be tested for primality. The complexity of this formula is O(n!), right? I would still have to use the trig definition of the floor function, right? That would render this algorithm inefficient, because asymptotically it would take extremely long to test for primality.

Note that these are purely hypothetical cases. The above function is neither an onto prime generator nor a primality test.

Sorry if this question makes me seem inexperienced, because I am. This is my first experience with number theory and algorithmic complexity.
 
I must say that I'm quite confused about what you're asking.

If you had a function that required floor(n!/(n+1)) steps to run, then the algorithmic complexity would indeed be O(n!). In fact, you can make a stronger statement: it's \Theta((n-1)!), although the difference between n! and (n-1)! isn't often relevant.


If you have a function that needs to compute the number floor(n!/(n+1)), and that is its most expensive step, then you can write the function so that it runs in time O(n^2 log n) or even better (at least according to Wikipedia).
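For what it's worth, here is a minimal sketch of that second case (my own illustration, not code from this thread), assuming Java's built-in BigInteger: the value floor(n!/(n+1)) is computed with exact integer arithmetic, so the trigonometric definition of the floor function never comes into play, and the cost is dominated by the n - 1 big-integer multiplications that build up n!.

Code:
import java.math.BigInteger;

// Hypothetical illustration only: compute Floor[n!/(n+1)] directly.
// Building n! bottom-up takes n - 1 big-integer multiplications; the
// final divide() truncates toward zero, which is the floor for
// non-negative operands.
public class FloorFactorialExample {
    static BigInteger floorFactorialOverNPlusOne(int n) {
        BigInteger fact = BigInteger.ONE;
        for (int k = 2; k <= n; k++) {
            fact = fact.multiply(BigInteger.valueOf(k));   // fact = k!
        }
        return fact.divide(BigInteger.valueOf(n + 1));     // floor(n!/(n+1))
    }

    public static void main(String[] args) {
        // 10! = 3628800, and floor(3628800 / 11) = 329890
        System.out.println(floorFactorialOverNPlusOne(10));
    }
}

The bit-length of the partial product k! grows like k log k, so the total cost of the loop depends on the multiplication algorithm used, which is what the "O(n^2 log n) or even better" estimate above refers to.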
 
I was looking on the internet, and I found some fast methods of computing factorials.
Could someone explain to me how I could figure out the run-time in Big-Oh notation of the following program written in Java:
http://www.luschny.de/math/factorial/index.html ?

I found an even faster method of computing factorials, which has a run-time complexity of O(n), from http://www.maik.ru/abstract/matnt/8/matnt0783_abstract.pdf . Are there any flaws in Ladikov's argument, and is this the fastest method (faster even than the one in the first link)?
 
Ladikov is apparently counting multiplication as O(1), which makes the analysis worthless. Since n! ~ (n/e)^n * sqrt(2 pi n), the answer has \Theta(n \log n) bits, so you need \Omega(n \log n) time to compute the factorial; otherwise you won't even have enough time to write down the answer.
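To spell out that output-size bound (my own arithmetic, not part of the original post): Stirling's formula gives the bit-length of n! directly,

\log_2 n! = \sum_{k=2}^{n} \log_2 k = n\log_2 n - n\log_2 e + O(\log n) = \Theta(n \log n),

so any algorithm that writes out n! in binary already needs \Omega(n \log n) bit operations, which rules out an O(n) bound like Ladikov's.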

Further, there's no method known (AFAIK) to compute the factorial that fast; the best would be a smidge more than n(\log n)^2\log\log n, and that method would be impractical. For truly hideous computations, the best practical method would be O(n(\log n)^2(\log\log n)^2\log\log\log n), with a factor-decomposition algorithm using Schönhage-Strassen multiplication.
 
flouran said:
According to http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Number_theory, using bottom-up multiplication, the run-time complexity of the factorial is O(n^2 \log n).

But, could someone explain to me how I could figure out the run-time in Big-Oh notation of the following program written in Java:
http://www.luschny.de/math/factorial/index.html ?

No need. :) In general, calculating the factorial takes O(n log n M(n)) time, where M(n) is the run-time of the multiplication algorithm used. For astronomically large numbers, the asymptotically best (though impractical) method of "fast" multiplication is Fürer's algorithm.
 
flouran said:
In general, calculating the factorial takes O(n log n M(n)) time, where M(n) is the run-time of the multiplication algorithm used. For astronomically large numbers, the asymptotically best (though impractical) method of "fast" multiplication is Fürer's algorithm.

Why would any person consider implementing Fürer's multiplication and then using it for bottom-up calculation of the factorial? You could get far better performance with a divide-and-conquer approach, even if only using Karatsuba.
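For the record, a divide-and-conquer product is only a few lines. This is a generic binary-splitting sketch (mine, not the poster's code), again assuming Java's BigInteger: it splits the range in half so that most multiplications involve operands of comparable size, which is exactly where Karatsuba-and-better algorithms pay off.

Code:
import java.math.BigInteger;

// Binary-splitting factorial: multiply balanced halves of the range
// [lo, hi] so the operands stay roughly the same size, instead of the
// bottom-up loop that multiplies a huge partial product by a tiny k.
public class SplitFactorial {
    static BigInteger product(long lo, long hi) {
        if (lo > hi) return BigInteger.ONE;
        if (lo == hi) return BigInteger.valueOf(lo);
        long mid = (lo + hi) >>> 1;
        return product(lo, mid).multiply(product(mid + 1, hi));
    }

    static BigInteger factorial(int n) {
        return product(2, n);   // 0! = 1! = 1 fall out of the lo > hi case
    }

    public static void main(String[] args) {
        System.out.println(factorial(20));   // 2432902008176640000
    }
}

If I recall correctly, recent JDKs also switch BigInteger.multiply to Karatsuba and Toom-Cook for large operands, so even this naive splitting benefits from faster multiplication without any hand-rolled arithmetic.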
 
