Algorithmic Complexity and Big-Oh notation

  • Context: Undergrad 
  • Thread starter: flouran
  • Tags: Complexity, Notation

Discussion Overview

The discussion revolves around algorithmic complexity, particularly in relation to primality tests and prime-generating functions. Participants explore the complexity of specific mathematical functions and methods for computing factorials, examining their implications for algorithm efficiency.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant questions whether the complexity of the function Floor[(n!)/(n+1)] would be O(n!) and discusses its implications for primality testing, noting that these are hypothetical cases.
  • Another participant suggests that if a function requires Floor[(n!)/(n+1)] steps, the complexity would be O(n!) but also mentions that it could be more accurately described as Θ((n-1)!), while noting that the difference may not be significant.
  • A participant inquires about the runtime complexity of a specific Java program for computing factorials, referencing two different methods found online, one claiming O(n) complexity.
  • Another participant critiques Ladikov's argument regarding factorial computation, stating that counting multiplication as O(1) is flawed and provides a lower bound for factorial computation time.
  • Several participants reference the complexity of factorial computation, with one stating that it generally takes O(n log n M(n)) time, where M(n) is the multiplication algorithm's runtime.
  • There is a discussion about the practicality of using Fürer's Algorithm for multiplication in factorial calculations, with one participant suggesting that divide-and-conquer methods may yield better performance.

Areas of Agreement / Disagreement

Participants express differing views on the complexity of specific functions and methods for computing factorials. There is no consensus on the best approach or the accuracy of certain claims regarding algorithmic efficiency.

Contextual Notes

Some claims about algorithmic complexity depend on the definitions and assumptions regarding the functions and methods discussed. The discussion includes various proposed complexities that may not be universally accepted or applicable in all contexts.

Who May Find This Useful

This discussion may be of interest to those studying algorithmic complexity, number theory, or computational methods for mathematical operations, particularly in the context of factorial computation and primality testing.

flouran
Messages
64
Reaction score
0
Hi Guys,
Is algorithmic complexity determined mostly for primality tests, or for onto prime-generating functions?
Say I have the function Floor[(n!)/(n+1)], and for every n it produces primes. Would I have to use the trigonometric definition of the floor function to determine the complexity of this algorithm? Would it be O(n!)?

Now, on the other hand, say I have the function Floor[(n!)/(n+1)], where n is an integer to be tested for primality. The complexity of this formula is O(n!), right? I would still have to use the trigonometric definition of the floor function, right? That would render this algorithm inefficient, because asymptotically it would take extremely long to test for primality.

Note that these are purely hypothetical cases. The above function is neither an onto prime generator nor a primality test.

Sorry if this question makes me look inexperienced, because I am. This is my first experience with number theory and algorithmic complexity.
 
I must say that I'm quite confused about what you're asking.

If you had a function that required floor(n! / (n+1)) steps to run, then the algorithmic complexity would indeed be O(n!). Actually, you can make a stronger statement: it's [itex]\Theta( (n-1)! )[/itex], although the difference between n! and (n-1)! isn't often relevant.


If you have a function that needs to compute the number floor(n! / (n+1)), and that is its most expensive step, then you can write the function so that it runs in time O(n^2 log n), or even better. (at least according to Wikipedia)
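As a minimal sketch (my own illustration, not any code posted in the thread), computing floor(n! / (n+1)) directly by bottom-up multiplication looks like this; the cost comes from the n multiplications, each involving an accumulator with up to O(n log n) bits:

```python
def floor_fact_ratio(n):
    """Compute floor(n! / (n+1)) by bottom-up multiplication.

    Each loop iteration multiplies a growing big integer (up to
    Theta(n log n) bits) by a small factor; the trailing integer
    division by (n+1) is a single cheap step by comparison.
    """
    f = 1
    for k in range(2, n + 1):  # n - 1 multiplications in total
        f *= k
    return f // (n + 1)
```

With schoolbook big-integer multiplication this loop already lands in roughly the O(n^2 log n) regime mentioned above, no trigonometric definition of the floor function required.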
 
I was looking on the internet, and I found some fast methods of computing factorials.
Could someone explain to me how I could figure out the run-time in Big-Oh notation of the following program written in Java:
http://www.luschny.de/math/factorial/index.html ?

I found an even faster method of computing factorials, which has a run-time complexity of O(n), from http://www.maik.ru/abstract/matnt/8/matnt0783_abstract.pdf. Are there any flaws in Ladikov's argument, and is this the fastest method (faster than the first link, even)?
 
Ladikov is apparently counting multiplication as O(1), which makes the analysis worthless. By Stirling's formula, n! ~ sqrt(2*pi*n) * (n/e)^n, so the answer itself has log2(n!) = Theta(n log n) bits; you must spend Omega(n log n) time to compute the factorial, otherwise you won't even have enough time to write down the answer.
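The output-size lower bound above is easy to check numerically (a small illustration of my own, not from the thread): the bit length of n! tracks the Stirling estimate of log2(n!) almost exactly.

```python
import math

def fact_bits(n):
    """Exact number of bits needed to write down n!."""
    return math.factorial(n).bit_length()

def stirling_bits(n):
    """Stirling estimate of log2(n!):
    log2(n!) ~ n*log2(n/e) + 0.5*log2(2*pi*n), i.e. Theta(n log n)."""
    return n * math.log2(n / math.e) + 0.5 * math.log2(2 * math.pi * n)
```

So any algorithm claiming to produce n! in O(n) time cannot even emit its own output.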

Further, there's no method known (AFAIK) to compute the factorial that fast; the best would be a smidge more than
[tex]n(\log n)^2\log\log n[/tex]
and that method would be impractical. For truly hideous computations, the best practical method would be
[tex]O(n(\log n)^2(\log\log n)^2\log\log\log n)[/tex]
with a factor decomposition algorithm using Schoenhage-Strassen multiplication.
 
flouran said:
Per http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Number_theory, using bottom-up multiplication, the run-time complexity of the factorial is [tex]n^2 \log n[/tex].

But, could someone explain to me how I could figure out the run-time in Big-Oh notation of the following program written in Java:
http://www.luschny.de/math/factorial/index.html ?

No need. :) In general, calculating the factorial takes O(n log n M(n)) time, where M(n) is the run-time of the multiplication algorithm which is used. For astronomically large numbers, though impractical, the best method of "fast" multiplication is Fürer's Algorithm.
 
flouran said:
In general, calculating the factorial takes O(n log n M(n)) time, where M(n) is the run-time of the multiplication algorithm which is used. For astronomically large numbers, though impractical, the best method of "fast" multiplication is Fürer's Algorithm.

Why would anyone consider implementing Fürer's multiplication and then using it for bottom-up calculation of the factorial? You could get far better performance with a divide-and-conquer approach, even if only using Karatsuba.
 
