Practical pseudoprimality testing

In summary, the original poster is testing very large numbers (3300 to 3500 decimal digits) for primality with Pari/GP's built-in Baillie-PSW pseudoprimality test, which runs slowly at that size. They ask how far they should sieve for small prime factors, whether better tests are available, and what else to watch for when working with numbers of this magnitude. Suggestions include tuning the sieve bound empirically, allowing for hardware failure in long-running computations, and looking at large-number packages such as the code used by GIMPS.
  • #1 CRGreathouse (Science Advisor, Homework Helper)
I'm working on a problem that includes testing large numbers for primality. I'm using Pari/GP's built-in Baillie-PSW pseudoprimality test, but it (understandably!) runs slowly.

I have some questions, then:
* I've been sieving the small prime factors from these numbers. For numbers of the size I'm working with (3300 to 3500 decimal digits), how far should I sieve? I've been checking up to a million, but should I go higher? As it is, I haven't been catching much this way (perhaps 1 in 10), since the numbers are fairly rough (similar to primorial primes). How much effort should I invest in this stage? (See the sketch after this list.)
* Are there better tests for what I'm doing? If it means anything, I expect "most" (~99.5%) of the numbers to be composite. I would think the p-1 and perhaps p+1 tests would be useful, but are they fast enough if I know the relevant quantity is 'very smooth'?
* Are there other things I should be aware of when working with numbers of this magnitude?
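For reference, the shape of my current pipeline as a GP sketch (the function name and default bound are just illustrative; ispseudoprime is PARI's built-in BPSW test):

Code:
\\ Trial-divide by primes up to `bound` before falling back to the
\\ (expensive) built-in Baillie-PSW test. Returns 1 for a probable
\\ prime, 0 for a proven composite.
sieve_then_bpsw(n, bound = 10^6) =
{
  forprime(p = 2, bound,
    if (n % p == 0, return (n == p)));
  ispseudoprime(n);
}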
 
  • #2 Hurkyl
CRGreathouse said:
* I've been sieving the small prime factors from these numbers. For numbers of the size I'm working with (3300 to 3500 decimal digits), how far should I sieve? I've been checking up to a million, but should I go higher? As it is, I haven't been catching much this way (perhaps 1 in 10), since the numbers are fairly rough. How much effort should I invest in this stage?
Empiricism is the best guide. :smile:

What you need to do is to get a reasonable estimate of how much work it takes to sieve vs how long it takes to run the rest of your algorithm on what survives. If you do this with smaller values, you should be able to get a good model which you can extrapolate to the problem size of interest.

Once you have a good model, you can simply find the parameters which yield the best running time.


CRGreathouse said:
* Are there better tests for what I'm doing? If it means anything, I expect "most" (~99.5%) of the numbers to be composite.
Well, Wikipedia and Mathworld both have pages on primality testing, and they're usually good sources of information. Mathworld seems to advocate the elliptic curve method, but Wikipedia asserts that the cyclotomy test is better for numbers with fewer than 10,000 digits.

If a probabilistic algorithm is sufficient for your purposes, they both seem to advocate the Rabin-Miller strong pseudoprime test.

I'm not familiar with "Pari", "GP", or "Baillie-PSW" at all, so I can't say anything about that.

CRGreathouse said:
* Are there other things I should be aware of when working with numbers of this magnitude?
You might consider that there's a nonzero chance of hardware failure with long-running programs, so a faster probabilistic algorithm might actually be more reliable than a longer-running deterministic one. It sounds like you're using a probabilistic algorithm anyway, though.

Um... I suppose you could also try out various large number packages, and try to find the one that seems to perform the best -- it's possible that some packages are much better than others for this type of thing. Besides, they probably even come with well-optimized primality proving functions!
 
  • #3 C1ay
You might take a look at the source code available from GIMPS. They've been searching for the world's largest primes for years now and their code has been tweaked for efficiency.
 
  • #4
Hurkyl said:
Mathworld seems to advocate the elliptic curve method, but Wikipedia asserts that the cyclotomy test is better for numbers with fewer than 10,000 digits.

I've heard of the cyclotomy test, but I've never actually seen it implemented or even seen a good explanation of how it works.

Hurkyl said:
If a probabilistic algorithm is sufficient for your purposes, they both seem to advocate the Rabin-Miller strong pseudoprime test.

"A Baillie-Pomerance-Selfridge-Wagstaff pseudoprime [is a] (strong Rabin-Miller pseudo prime for base 2, followed by strong Lucas test for the sequence (P,-1), P smallest positive integer such that P^2-4 is not a square mod x)."

Hurkyl said:
I'm not familiar with "Pari", "GP", or "Baillie-PSW" at all, so I can't say anything about that.

Really? PARI is a fast calculation engine, and GP is a calculator/programming interface for PARI. I use it fairly often.
http://pari.math.u-bordeaux.fr/
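On the empiricism suggestion: the measurement would look something like this in GP (a sketch -- try_bound is just a name I made up; gettime is GP's built-in millisecond timer):

Code:
\\ For a batch v of sample numbers and a candidate sieve bound B, count
\\ how many numbers survive the sieve and time the whole sieve-plus-BPSW
\\ pass. Returns [B, survivors, milliseconds].
try_bound(v, B) =
{
  my(survivors = 0, hassmall);
  gettime();  \\ reset the timer
  for (i = 1, #v,
    hassmall = 0;
    forprime(p = 2, B,
      if (v[i] % p == 0, hassmall = 1; break()));
    if (!hassmall, survivors++; ispseudoprime(v[i])));
  [B, survivors, gettime()];
}

Running that over a range of bounds on smaller samples should give me the curve to extrapolate from.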
 
  • #5
C1ay said:
You might take a look at the source code available from GIMPS. They've been searching for the world's largest primes for years now and their code has been tweaked for efficiency.

I think that their code is designed for Mersenne primes, not for general use.
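As I understand it, their code centers on the Lucas-Lehmer test, which only works for Mersenne numbers 2^p - 1. A quick GP sketch of it (valid only for odd prime exponents p) shows why it doesn't generalize:

Code:
\\ Lucas-Lehmer: 2^p - 1 is prime iff s_(p-2) == 0, where s_0 = 4 and
\\ s_(k+1) = s_k^2 - 2, all computed mod 2^p - 1.
lucas_lehmer(p) =
{
  my(M = 2^p - 1, s = Mod(4, M));
  for (i = 1, p - 2, s = s^2 - 2);
  s == 0;
}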
 

1. What is practical pseudoprimality testing?

Practical pseudoprimality testing is a method used in number theory to determine whether a given number is probably prime or certainly composite. It is a much faster and less resource-intensive alternative to deterministic primality proofs such as the elliptic curve and cyclotomy methods mentioned above.

2. How does practical pseudoprimality testing work?

Practical pseudoprimality testing works by using a probabilistic algorithm to check whether a given number passes a set of predefined tests. If the number passes all the tests, it is declared a probable prime and is very likely prime; a composite number that slips through anyway is called a pseudoprime. If the number fails even one test, it is definitely composite.
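As a toy illustration of this one-sided logic, here is the classic Fermat test (a minimal sketch, not any particular library's implementation) in GP:

Code:
\\ Fermat test to base a: if a^(n-1) != 1 (mod n), then n is certainly
\\ composite; if it equals 1, n is only a probable prime to base a.
fermat_test(n, a) = Mod(a, n)^(n - 1) == 1;

For example, fermat_test(561, 2) returns 1 even though 561 = 3 * 11 * 17 is composite; such composites are the "pseudoprimes" in the name, and stronger tests like Rabin-Miller and Baillie-PSW make them much rarer.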

3. What are the advantages of practical pseudoprimality testing?

Practical pseudoprimality testing is faster and more efficient than deterministic primality proving, making it ideal for use in large-scale computations. It also requires less computing power and memory, which makes it practical for real-world applications.

4. Are there any limitations to practical pseudoprimality testing?

While practical pseudoprimality testing is a useful tool, it is not foolproof: there is a small chance that a composite number passes all the tests and is mistaken for a prime. The probability is extremely low, however; k independent rounds of the Rabin-Miller test err with probability at most 4^(-k), and no composite number is currently known that passes the Baillie-PSW test discussed above.

5. How is practical pseudoprimality testing used in real-world applications?

Practical pseudoprimality testing is commonly used in cryptography and computer science, where very large numbers appear frequently, and in number theory research. Additionally, many computer algebra systems and programming languages have built-in pseudoprimality tests, making the technique easily accessible to developers.
