Algorithmic complexity of primes

In summary, the conversation discusses the concept of algorithmic (Kolmogorov) complexity for bit strings and whether primes differ in complexity from composites. It is noted that there are roughly 2^n / (n log 2) primes of bit length n or less (log denoting the natural logarithm, lg the base-2 logarithm), so the best possible compression scheme is to represent each prime by its index among the primes. Such an index requires roughly n - lg n - lg log 2 bits, which is still asymptotically equivalent to n, so nothing is saved in the limit.
  • #1
DavidK
Every number can be considered a bit string. For a bit string one can define an algorithmic complexity (the length of the shortest algorithm/program that reproduces the desired bit string). Can something be said, in general, about the difference in complexity for primes as compared to composites?
 
  • #2
There are roughly [itex]2^n / (n \log 2)[/itex] primes of bit length n or less, since [itex]\pi(x) \approx x / \log x[/itex] (with [itex]\log[/itex] the natural logarithm).

The best possible compression scheme is to represent each prime by its index, so that the primes of bit length n or less are represented by the positive integers up to roughly [itex]2^n / (n \log 2)[/itex]. Writing down such an index takes roughly [itex]n - \lg n - \lg \log 2[/itex] bits.

(A good descriptive complexity scheme will be worse than this by at most a fixed additive constant.)

However, [itex]n - \lg n - \lg \log 2 \in \Theta(n)[/itex], so asymptotically we haven't saved anything.
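
To put numbers to this, here is a minimal Python sketch (not from the thread; the function name and the chosen values of n are my own) that plugs the prime number theorem estimate into the formula above:

[code]
import math

def index_bits_estimate(n):
    """Roughly how many bits are needed to store the index of a prime of
    bit length n or less, using pi(2^n) ~ 2^n / (n ln 2), so that
    lg(pi(2^n)) = n - lg(n) - lg(ln 2)."""
    return n - math.log2(n) - math.log2(math.log(2))

for n in (16, 64, 256, 1024):
    bits = index_bits_estimate(n)
    print(f"n = {n:4d}: index needs ~{bits:7.1f} bits, saving only ~{n - bits:.1f} bits")
[/code]

Even at n = 1024 the saving is only about ten bits, which is the point of the [itex]\Theta(n)[/itex] remark above.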
 
  • #3


The algorithmic (Kolmogorov) complexity of primes has been studied in both computer science and mathematics. As stated above, every number can be represented as a bit string, and the algorithmic complexity of a number is the length of the shortest program that outputs that bit string.

In general, primes of a given bit length n can be described slightly more compactly than arbitrary n-bit numbers. As the counting argument in post #2 shows, a prime can be specified by its index among the primes, which takes roughly n - lg n bits instead of n. The saving is therefore only on the order of lg n bits; for 1024-bit primes, for example, it amounts to only about ten bits.

Composites gain essentially nothing from the analogous trick: almost all n-bit numbers are composite, so an index into the composites is barely shorter than the number itself. In both cases a typical n-bit number, prime or composite, has algorithmic complexity Θ(n), so asymptotically there is no difference between the two classes.

It is also worth noting that algorithmic complexity is not a measure of importance. Primes play a central role in mathematics and cryptography regardless of how compressible their descriptions are, and the difficulty of problems involving a number is not determined by its descriptive complexity.

In conclusion, primes admit a marginally shorter description than composites of the same size, but the difference vanishes asymptotically; both classes have typical complexity Θ(n), and both play crucial roles in many mathematical concepts and problems.
 

Related to Algorithmic complexity of primes

What is algorithmic complexity?

Algorithmic complexity is used in two related senses. In the descriptive (Kolmogorov) sense used in the discussion above, it is the length of the shortest program that produces a given bit string. In the computational sense, it measures the time and memory an algorithm requires as a function of the size of its input, and is commonly expressed in Big O notation. The questions below use the computational sense.

Why is the algorithmic complexity of primes important?

The algorithmic complexity of primes is important because it helps us understand the efficiency of algorithms that deal with prime numbers. As prime numbers are essential in many mathematical and cryptographic applications, having efficient algorithms to find, generate, and test them is crucial.
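
As a concrete illustration of the "test" part, here is a sketch of the Miller-Rabin probabilistic primality test; this is a standard technique that the thread itself does not mention, included only as an example. Each round costs a few modular exponentiations, so the running time is polynomial in the bit length of the number being tested.

[code]
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test.
    Composites are rejected with probability at least 1 - 4**(-rounds)."""
    if n < 2:
        return False
    # Handle small primes and obvious composites quickly.
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)              # modular exponentiation, polynomial in log n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a is a witness that n is composite
    return True

print(is_probable_prime(2**127 - 1))  # True: 2^127 - 1 is a Mersenne prime
[/code]

Probabilistic tests of this kind are what is typically used to generate large primes for cryptographic purposes.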

What is the most common algorithm for finding prime numbers?

The most commonly cited algorithm for listing all the prime numbers up to a given limit is the Sieve of Eratosthenes. It works by writing down the numbers from 2 up to the limit and crossing out the multiples of each prime in turn; the numbers that are never crossed out are the primes.
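
For concreteness, here is a minimal Python sketch of the sieve as just described (the function name is my own):

[code]
def primes_up_to(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p starting from p*p; smaller
            # multiples were already crossed out by smaller primes.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
[/code]

The total work is roughly the sum of n/p over the primes p up to the square root of n, which is where the O(n log(log n)) bound quoted in the next answer comes from.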

What is the algorithmic complexity of the Sieve of Eratosthenes?

The algorithmic complexity of the Sieve of Eratosthenes is O(n log(log n)), where n is the upper bound of the range of numbers being checked for primality. This makes it a very efficient algorithm for finding prime numbers.

Can the algorithmic complexity of prime numbers be improved?

Yes, there is ongoing research into faster algorithms. Notable examples include the AKS primality test, a deterministic polynomial-time test whose improved variants run in roughly O(log^6 n) time (up to lower-order factors), and the Sieve of Atkin, which lists the primes up to n using O(n/log(log n)) operations.
