Algorithmic Complexity and Big-Oh notation

In summary, the conversation discusses the complexity of algorithmic primality tests and prime-generating functions, particularly in relation to calculating factorials. One hypothetical function is mentioned that would have a complexity of O(n!), but it is noted that there are more efficient methods for computing factorials. The conversation also mentions the use of Fürer's Algorithm for fast multiplication and its potential application in calculating factorials.
  • #1
flouran
Hi Guys,
Is algorithmic complexity determined mostly for primality tests or for onto prime-generating functions?
Say I have the function Floor[(n!)/(n+1)], and suppose that for every n it produces a prime. Would I have to use the trigonometric definition of the floor function to determine the complexity of this algorithm? Would the complexity be O(n!)?

Now, on the other hand, say I have the same function, Floor[(n!)/(n+1)], where n is an integer to be tested for primality. The complexity of this formula is O(n!), right? I would still have to use the trigonometric definition of the floor function, right? That would render this algorithm inefficient, because asymptotically it would take an extremely long time to test for primality.

Note that these are purely hypothetical cases. The above function is neither an onto prime generator nor a primality test.

Sorry if this question makes me seem inexperienced, because I am. This is my first encounter with number theory and algorithmic complexity.
 
  • #2
I must say that I'm quite confused about what you're asking.

If you had a function that required floor(n! / (n+1)) steps to run, then the algorithmic complexity would indeed be O(n!). Actually, you can make a stronger statement: it's [itex]\Theta((n-1)!)[/itex], although the difference between n! and (n-1)! isn't often relevant.


If you have a function that needs to compute the number floor(n! / (n+1)), and that is its most expensive step, then you can write the function so that it runs in time O(n^2 log n), or even better (at least according to Wikipedia).
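To make that concrete, here is a minimal Java sketch (an added illustration; the class and method names are invented). The floor comes for free from truncating integer division, so the trigonometric definition of floor never enters the picture; with schoolbook arithmetic, step k multiplies a number of roughly [itex]\Theta(k \log k)[/itex] bits by a machine-size constant, for a total on the order of n^2 log n bit operations, matching the bound above.

[code]
import java.math.BigInteger;

public class FloorFactorialQuotient {
    // Computes floor(n! / (n+1)) exactly with integer arithmetic.
    // BigInteger.divide truncates toward zero, which for non-negative
    // operands is exactly the floor, so no trigonometry is required.
    static BigInteger floorFactorialOver(int n) {
        BigInteger factorial = BigInteger.ONE;
        for (int k = 2; k <= n; k++) {
            factorial = factorial.multiply(BigInteger.valueOf(k)); // bottom-up product
        }
        return factorial.divide(BigInteger.valueOf(n + 1));
    }

    public static void main(String[] args) {
        System.out.println(floorFactorialOver(10)); // floor(3628800 / 11) = 329890
    }
}
[/code]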
 
  • #3
I was looking on the internet, and I found some fast methods of computing factorials.
Could someone explain to me how I could figure out the run-time, in Big-Oh notation, of the following program written in Java: http://www.luschny.de/math/factorial/index.html ?

I also found an even faster method of computing factorials, which has a claimed run-time complexity of O(n): http://www.maik.ru/abstract/matnt/8/matnt0783_abstract.pdf . Are there any flaws in Ladikov's argument, and is this the fastest method (faster even than the first link)?
 
  • #4
Ladikov is apparently counting multiplication as O(1), which makes the analysis worthless. By Stirling's formula, [itex]n! \sim \sqrt{2\pi n}\,(n/e)^n[/itex], so the answer itself has [itex]\log_2(n!) = \Theta(n \log n)[/itex] bits; you therefore need [itex]\Omega(n \log n)[/itex] time just to write the answer down.
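To spell out the bit-count step (added here; it is implicit in the Stirling estimate above):
[tex]\log_2(n!) \;=\; \sum_{k=1}^{n}\log_2 k \;\ge\; \frac{n}{2}\log_2\frac{n}{2},[/tex]
and also [itex]\log_2(n!) \le n\log_2 n[/itex], so the output has [itex]\Theta(n\log n)[/itex] bits and any algorithm needs [itex]\Omega(n\log n)[/itex] time just to emit it.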

Further, there's no method known (AFAIK) to compute the factorial that fast; the best would be a smidge more than
[tex]n(\log n)^2\log\log n[/tex]
and that method would be impractical. For truly hideous computations, the best practical method would be
[tex]O(n(\log n)^2(\log\log n)^2\log\log\log n)[/tex]
with a factor-decomposition algorithm using Schönhage–Strassen multiplication.
 
  • #6
flouran said:
According to http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Number_theory , using bottom-up multiplication, the run-time complexity of the factorial is [tex]O(n^2 \log n)[/tex].

But could someone explain to me how I could figure out the run-time, in Big-Oh notation, of the following program written in Java: http://www.luschny.de/math/factorial/index.html ?

No need. :) In general, calculating the factorial takes O(n log n M(n)) time, where M(n) is the run-time of the multiplication algorithm used. For astronomically large numbers, the best method of "fast" multiplication is Fürer's Algorithm, though it is impractical.
 
  • #7
flouran said:
In general, calculating the factorial takes O(n log n M(n)) time, where M(n) is the run-time of the multiplication algorithm used. For astronomically large numbers, the best method of "fast" multiplication is Fürer's Algorithm, though it is impractical.

Why would anyone consider implementing Fürer's multiplication and then using it for a bottom-up calculation of the factorial? You could get far better performance with a divide-and-conquer approach, even if only using Karatsuba.
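For reference, here is a minimal Java sketch of the divide-and-conquer idea (an added illustration, not code from the thread): splitting the product range in half keeps the two operands of each multiplication comparable in size, which is exactly the regime where subquadratic algorithms such as Karatsuba help. Modern Java's BigInteger already switches to Karatsuba and Toom-Cook multiplication internally for large operands.

[code]
import java.math.BigInteger;

public class SplitFactorial {
    // Product of the integers in [lo, hi] by balanced binary splitting.
    static BigInteger product(long lo, long hi) {
        if (lo > hi) return BigInteger.ONE;
        if (lo == hi) return BigInteger.valueOf(lo);
        long mid = (lo + hi) >>> 1;
        // Big-times-big multiplications dominate, so fast multiplication pays off.
        return product(lo, mid).multiply(product(mid + 1, hi));
    }

    static BigInteger factorial(int n) {
        return product(2, n);
    }

    public static void main(String[] args) {
        System.out.println(factorial(20)); // 2432902008176640000
    }
}
[/code]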
 

Related to Algorithmic Complexity and Big-Oh notation

1. What is algorithmic complexity and why is it important?

Algorithmic complexity refers to the measurement of the efficiency of an algorithm in terms of the time and space it requires to solve a problem. It is important because it allows us to compare different algorithms and choose the most efficient one for a given problem. It also helps in predicting the performance of an algorithm as the input size increases.

2. What is Big-Oh notation and how is it related to algorithmic complexity?

Big-Oh notation is a mathematical notation used to describe an upper bound on an algorithm's time or space complexity. It bounds the growth rate of the resources an algorithm uses as the input grows, and in practice it is usually quoted for the worst case. It is related to algorithmic complexity because it lets us classify algorithms by their efficiency and compare them.

3. How do I calculate the algorithmic complexity of an algorithm?

To calculate the algorithmic complexity of an algorithm, analyze the number of operations or steps it takes to solve a problem as a function of the input size. In practice this means counting how many times loops execute, how deeply they nest, and how much work conditional branches and other operations contribute.
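For instance (an added illustration), a doubly nested loop whose body does constant work executes that body n * n times, so it is O(n^2):

[code]
public class CountOps {
    // Outer loop: n iterations. Inner loop: n iterations each.
    // Total constant-time body executions: n * n, i.e. O(n^2).
    static long countPairs(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countPairs(1000)); // 1000000
    }
}
[/code]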

4. What is the difference between time and space complexity?

Time complexity refers to the amount of time an algorithm takes to solve a problem, while space complexity refers to the amount of memory or storage space it requires. In some cases, an algorithm may have a fast time complexity but a high space complexity, or vice versa. It is important to consider both when analyzing the efficiency of an algorithm.
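To illustrate the tradeoff (an added sketch): two ways to compute Fibonacci numbers, both using O(n) additions, one storing every intermediate value and one keeping only the last two (constant extra space, ignoring the growth of the numbers themselves):

[code]
import java.math.BigInteger;

public class Fib {
    // O(n) time, O(n) space: keep every Fibonacci number computed.
    static BigInteger[] table(int n) {
        BigInteger[] f = new BigInteger[n + 1];
        f[0] = BigInteger.ZERO;
        if (n > 0) f[1] = BigInteger.ONE;
        for (int i = 2; i <= n; i++) f[i] = f[i - 1].add(f[i - 2]);
        return f;
    }

    // O(n) time, O(1) extra space: keep only the last two values.
    static BigInteger lastTwoOnly(int n) {
        BigInteger a = BigInteger.ZERO, b = BigInteger.ONE;
        for (int i = 0; i < n; i++) {
            BigInteger next = a.add(b);
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(lastTwoOnly(10)); // 55
    }
}
[/code]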

5. How can I improve the algorithmic complexity of my code?

There are several ways to improve the algorithmic complexity of code, such as using efficient data structures, optimizing loops and conditional statements, and reducing unnecessary operations. It is also helpful to analyze the algorithm and find potential areas for improvement. Additionally, choosing a different algorithm with a better time or space complexity may also improve the overall efficiency of the code.
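As a concrete example of such an improvement (an added sketch; the names are invented): replacing a pairwise scan with a hash set turns an O(n^2) duplicate check into expected O(n) time, at the cost of O(n) extra space, a typical data-structure-driven win.

[code]
import java.util.HashSet;
import java.util.Set;

public class Duplicates {
    // O(n^2): compares every pair of elements.
    static boolean hasDuplicateQuadratic(int[] a) {
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                if (a[i] == a[j]) return true;
        return false;
    }

    // Expected O(n): each hash-set membership test is O(1) on average.
    static boolean hasDuplicateLinear(int[] a) {
        Set<Integer> seen = new HashSet<>();
        for (int x : a)
            if (!seen.add(x)) return true; // add() returns false if already present
        return false;
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 4, 1, 5};
        System.out.println(hasDuplicateLinear(data)); // true
    }
}
[/code]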
