Algorithmic Complexity and Big-Oh notation

  • Thread starter flouran
  • #1

Main Question or Discussion Point

Hi Guys,
Is algorithmic complexity determined mostly for primality tests or for onto prime-generating functions?
Say I have the function Floor[(n!)/(n+1)], and suppose that for every n it produces a prime. Would I have to use the trigonometric definition of the floor function to determine the complexity of this algorithm? Would it be O(n!)?

Now, on the other hand, say I have the function Floor[(n!)/(n+1)], where n is an integer to be tested for primality. The complexity of this formula is O(n!), right? And I would still have to use the trigonometric definition of the floor function, right? That would render this algorithm inefficient, because asymptotically the test would take extremely long to run.

Note that these are purely hypothetical cases. The above function is neither an onto prime generator nor a primality test.

Sorry if this question makes me look inexperienced, because I am. This is my first encounter with number theory and algorithmic complexity.
 

Answers and Replies

  • #2
Hurkyl
Staff Emeritus
Science Advisor
Gold Member
I must say that I'm quite confused about what you're asking.

If you had a function that required floor(n! / (n+1)) steps to run, then the algorithmic complexity would indeed be O(n!). Actually, you can make a stronger statement: it's [itex]\Theta((n-1)!)[/itex], although the difference between n! and (n-1)! isn't often relevant.


If you have a function that needs to compute the number floor(n! / (n+1)), and that is its most expensive step, then you can write the function so that it runs in time O(n^2 log n), or even better (at least according to Wikipedia).
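For concreteness, here is a minimal Java sketch of that cheaper route (my own illustration, not code from any of the links in this thread): build n! bottom-up with BigInteger and divide. BigInteger division truncates toward zero, which for non-negative operands is exactly the floor, so no trigonometric definition of the floor function is ever involved.

[code]
import java.math.BigInteger;

public class FloorFactorialQuotient {

    /** Computes floor(n! / (n+1)) for n >= 0 via a bottom-up product. */
    static BigInteger floorFactorialOver(int n) {
        BigInteger product = BigInteger.ONE;
        for (int k = 2; k <= n; k++) {
            // Multiply the running partial product by the next integer.
            product = product.multiply(BigInteger.valueOf(k));
        }
        // Truncating division of non-negative BigIntegers is floor division.
        return product.divide(BigInteger.valueOf(n + 1));
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 10; n++) {
            System.out.println("n = " + n + " -> " + floorFactorialOver(n));
        }
    }
}
[/code]

The loop performs n multiplications of a partial product with O(n log n) bits by a machine-sized integer, which is where the rough O(n^2 log n) figure above comes from.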
 
  • #3
flouran
I was looking on the internet, and I found some fast methods of computing factorials.
Could someone explain to me how I could figure out the run-time, in Big-Oh notation, of the following program written in Java: http://www.luschny.de/math/factorial/index.html ?

I also found an even faster method of computing factorials, which has a claimed run-time complexity of O(n): http://www.maik.ru/abstract/matnt/8/matnt0783_abstract.pdf . Are there any flaws in Ladikov's argument, and is this the fastest method (faster even than the first link)?
 
  • #4
CRGreathouse
Science Advisor
Homework Helper
Ladikov is apparently counting multiplication as O(1), which makes the analysis worthless. By Stirling's formula, n! ~ (n/e)^n * sqrt(2 pi n), so the answer has log(n!) = Theta(n log n) bits; you need to spend Omega(n log n) time to compute the factorial, otherwise you won't even have enough time to write down the answer.

Further, there's no method known (AFAIK) to compute the factorial that fast; the best would be a smidge more than
[tex]n(\log n)^2\log\log n[/tex]
and that method would be impractical. For truly hideous computations, the best practical method would be
[tex]O(n(\log n)^2(\log\log n)^2\log\log\log n)[/tex]
with a factor-decomposition algorithm using Schönhage–Strassen multiplication.
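As a quick sanity check on the size argument (again my own illustration, not from the thread): the bit length of n! really does grow like n log n, and Java's BigInteger lets you compare it against Stirling's estimate directly.

[code]
import java.math.BigInteger;

public class FactorialBitLength {
    public static void main(String[] args) {
        BigInteger product = BigInteger.ONE;
        for (int n = 1; n <= 1000; n++) {
            product = product.multiply(BigInteger.valueOf(n));
            if (n % 250 == 0) {
                // Stirling: log2(n!) ~ n*(ln n - 1)/ln 2 + 0.5*log2(2*pi*n)
                double stirling = n * (Math.log(n) - 1) / Math.log(2)
                        + 0.5 * Math.log(2 * Math.PI * n) / Math.log(2);
                System.out.printf("n = %4d: bits = %7d, Stirling ~ %9.1f%n",
                        n, product.bitLength(), stirling);
            }
        }
    }
}
[/code]

Any algorithm has to write down those Theta(n log n) bits, which is why an O(n) claim can only survive by charging unit cost for multiplications of arbitrarily large numbers.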
 
  • #6
flouran
According to http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Number_theory , using bottom-up multiplication, the run-time complexity of the factorial is [tex]O(n^2 \log n)[/tex].

But, could someone explain to me how I could figure out the run-time, in Big-Oh notation, of the following program written in Java: http://www.luschny.de/math/factorial/index.html ?
No need. :) In general, calculating the factorial takes O(n log n M(n)) time, where M(n) is the run-time of the multiplication algorithm used. For astronomically large numbers, the best known method of "fast" multiplication, though impractical, is Fürer's algorithm.
 
  • #7
CRGreathouse
Science Advisor
Homework Helper
In general, calculating the factorial takes O(n log n M(n)) time, where M(n) is the run-time of the multiplication algorithm used. For astronomically large numbers, the best known method of "fast" multiplication, though impractical, is Fürer's algorithm.
Why would anyone consider implementing Fürer's multiplication and then using it for a bottom-up calculation of the factorial? You could get far better performance with a divide-and-conquer approach, even if only using Karatsuba.
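To illustrate the point, here is a minimal sketch of the divide-and-conquer idea (my own code, not CRGreathouse's): split the product range in half recursively, so the expensive multiplications happen between factors of comparable bit length, which is exactly the case where subquadratic algorithms such as Karatsuba pay off.

[code]
import java.math.BigInteger;

public class SplitFactorial {

    /** Product of the integers in [lo, hi], computed by binary splitting. */
    static BigInteger rangeProduct(long lo, long hi) {
        if (lo > hi) return BigInteger.ONE;
        if (lo == hi) return BigInteger.valueOf(lo);
        long mid = (lo + hi) >>> 1;
        // The two halves have comparable bit lengths, so the final
        // multiplication is balanced -- the case fast multiplication likes.
        return rangeProduct(lo, mid).multiply(rangeProduct(mid + 1, hi));
    }

    static BigInteger factorial(int n) {
        return rangeProduct(2, n);
    }

    public static void main(String[] args) {
        System.out.println(factorial(30)); // 265252859812191058636308480000000
    }
}
[/code]

A bottom-up loop, by contrast, always multiplies a huge partial product by a tiny factor, so a fast multiplication routine never sees the balanced inputs it needs.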
 
