
Primorials and the Next Prime

  1. Oct 16, 2006 #1
    Theorem 1: Given a primorial [itex]p_n^{\sharp}>2[/itex], [itex]p_{n+1}[/itex] is the second largest element in the reduced residue system modulo [itex]p_n^{\sharp}[/itex].


    Clearly [itex]p_{n+1}[/itex] is an element of the r.r.s. modulo [itex]p_n^{\sharp}[/itex] since every prime less than [itex]p_{n+1}[/itex] divides [itex]p_n^{\sharp}[/itex]. Next notice that the smallest element of the r.r.s. modulo [itex]p_n^{\sharp}[/itex] is invariably one.

    So it remains to show that the second largest element in the r.r.s. modulo [itex]p_n^{\sharp}[/itex] is in fact prime. But this is already done: every prime less than this second element is a factor of [itex]p_n^{\sharp}[/itex], and the two are relatively prime, so the second element must be prime, and it must be the next prime.
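A quick numerical check of Theorem 1 (a Python sketch; reading "second largest" as "second smallest", i.e. the smallest element exceeding 1, which is what the examples below illustrate — `rrs` is my own helper name):

```python
from math import gcd

def rrs(m):
    """Reduced residue system mod m: integers in [1, m) coprime to m."""
    return sorted(a for a in range(1, m) if gcd(a, m) == 1)

# For each primorial p_n# > 2, the smallest element of the r.r.s. is 1
# and the second smallest is the next prime p_{n+1}.
for primorial, next_prime in [(6, 5), (30, 7), (210, 11), (2310, 13)]:
    system = rrs(primorial)
    assert system[0] == 1
    assert system[1] == next_prime
```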


    Lemma 1: It follows immediately that the third element in the r.r.s. modulo [itex]p_n^{\sharp}[/itex] is either prime or divisible by the second element in the r.r.s. modulo [itex]p_n^{\sharp}[/itex]. Further extension along these lines is possible.

    Theorem 2: We can construct a sequence of prime numbers using the sequence of primorials [itex]p^{\sharp}[/itex] greater than two and the reduced residue systems of this sequence.


    We are given [itex]p_2^{\sharp}=6[/itex] and r.r.s. modulo 6 = {1,5}.
    By Theorem 1, [itex]5=p_3[/itex]. We can write the r.r.s. modulo [itex]p_3^{\sharp}=30[/itex] using the reduced residue system modulo 6 as follows:

    [tex]\text{r.r.s. mod } 30 = \{6x+r_{i} \mid r_{i} \in \text{r.r.s. mod } 6,\ 0 \le x \le 4\}-\{5,25\}[/tex]
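This base-30 construction can be carried out directly as a sanity check (Python sketch; `rrs` is my own helper, not notation from the thread):

```python
from math import gcd

def rrs(m):
    # Reduced residue system mod m, in increasing order.
    return sorted(a for a in range(1, m) if gcd(a, m) == 1)

# Translate r.r.s. mod 6 = {1, 5} by 6x for x = 0..4, then strike out
# the two surviving multiples of 5, namely 5 and 25.
built = sorted({6 * x + r for x in range(5) for r in (1, 5)} - {5, 25})
assert built == rrs(30)   # [1, 7, 11, 13, 17, 19, 23, 29]
```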

    In general we can write

    Eq. 1 [tex]\text{r.r.s. mod } p_{n+1}^{\sharp} = \{p_n^{\sharp}x+r_{i} \mid r_{i} \in \text{r.r.s. mod } p_n^{\sharp},\ 0 \le x \le p_{n+1}-1\}-\{p_{n+1}r_{j}r_{k} \mid r_{j}r_{k} \equiv 1 \bmod p_n^{\sharp}\}[/tex]

    Using this construction we generate the sequence of primes like this

    2 -given

    3 - given

    5 - Theorem 1 using base 6

    7 - Theorem 1 using base 30; also given as 6*1+1, since the only 6*0 elements were 6*0+1, which is excluded, and 6*0+5, which of course goes into the base 30.

    11 - Theorem 1 using base 210, also given as 210*0+11

    13 - Theorem 1 using base 2310, also given as 2310*0+13

    You see it happens in an odd way. At each step the second lowest number is prime by Theorem 1. Reduced residue systems of primorials are very rich in primes among their smaller values.

    So then the next prime 17 was added to the reduced residue system at base 30 and it just sat there as we climbed the ladder to [itex]p_6^{\sharp}=30030[/itex]. So there you have the power of [itex]p_n^{\sharp}[/itex] to reveal primes greater than [itex]p_n[/itex].

    The drawback from a computing standpoint is that the size of the reduced residue system for primorials grows quickly compared to the guaranteed number of primes you get back, so it is not a computational theory unless we can find a way to calculate recursively and in batches. Equation 1 might be worth looking at more closely along those lines.

    Define the following operations on sets.

    Given two sets A and B let [itex]A \otimes B = \{(a_{i},b_{j})\}[/itex] where the indices cover all elements of the two sets. Example: [itex]\{1,3\}\otimes\{2,4\}=\{(1,2),(1,4),(3,2),(3,4)\}[/itex].

    Given a set of ordered pairs [itex]A \otimes A[/itex] and a scalar s, define [itex]sA \otimes A = \{(sa_{i}, a_{j})\}[/itex] where the indices cover all elements of A. Example: let A = {4,5}, so [itex]A \otimes A=\{(4,4),(4,5),(5,4),(5,5)\}[/itex], and let s = 3 under standard multiplication; then [itex]sA \otimes A=\{(12,4),(12,5),(15,4),(15,5)\}[/itex].

    Now we need an operation to evaluate the ordered pairs; this can be done using a prefix as follows. Let [itex]\mu_{[b]}A \otimes A=\{a_{i}a_{j}\}[/itex] where the indices cover all elements of A and [itex]b[/itex] is a base for modular multiplication. This notation avoids confusion when using subscripts to denote the element of a sequence, so for instance [itex]a_{n}\cdot_{[b]}a_m[/itex] is well understood, as is [itex]a_{n}+_{[b]}a_m[/itex]; this is an alternative to the mod b notation. When [itex][b][/itex] is omitted it means standard multiplication over the natural numbers.
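These set operations can be mocked up directly (Python sketch; the names `otimes`, `scale`, and `mu` are my own labels for the ⊗ product, the scalar action, and the evaluation prefix μ):

```python
from itertools import product

def otimes(A, B):
    """A ⊗ B: the set of all ordered pairs (a, b)."""
    return set(product(A, B))

def scale(s, pairs):
    """Scalar action on a set of pairs: multiply first coordinates by s."""
    return {(s * a, b) for a, b in pairs}

def mu(pairs, base=None):
    """Evaluate each pair by multiplication, mod `base` if one is given."""
    return {(a * b) % base if base else a * b for a, b in pairs}

# The thread's worked examples:
assert otimes({1, 3}, {2, 4}) == {(1, 2), (1, 4), (3, 2), (3, 4)}
A = {4, 5}
assert scale(3, otimes(A, A)) == {(12, 4), (12, 5), (15, 4), (15, 5)}
assert mu(otimes(A, A)) == {16, 20, 25}
```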

    Now we can write Equation 1 in a simpler form.

    Equation 2: [tex]r.r.s._{[p_{n+1}^{\sharp}]} = p_n^{\sharp}\{0,...,p_{n+1}-1\}\oplus r.r.s._{[p_n^{\sharp}]}- p_{n+1} Ker \mu r.r.s._{[p_{n}^{\sharp}]}\otimes r.r.s._{[p_{n}^{\sharp}]}[/tex]

    It follows that

    [tex]r.r.s._{[p_{n+1}^{\sharp}]} = \{0,p_n^{\sharp}, 2p_n^{\sharp},...,p_n^{\sharp}(p_{n+1}-1)\}\oplus r.r.s._{[p_n^{\sharp}]}- p_{n+1}Ker \mu r.r.s._{[p_{n}^{\sharp}]}\otimes r.r.s._{[p_{n}^{\sharp}]}[/tex]

    Ker here is the kernel of the map [itex]\mu_{[p_n^{\sharp}]} : r.r.s._{[p_n^{\sharp}]} \otimes r.r.s._{[p_n^{\sharp}]} \rightarrow r.r.s._{[p_n^{\sharp}]}[/itex]. But the notation is still clumsy. We need something that does not interfere with existing notation but allows easy use of multiple bases in modular equations; we could use the term muddled equations to describe equations like [itex]x \equiv y \bmod a \bmod b[/itex]. In the notation given here so far that might be written [itex]x =_{[a]}y[/itex] or perhaps [itex]x =_{[a,b]}y[/itex], not necessarily the same as [itex]x =_{[b,a]}y[/itex].

    Still, something better will have to be found. I'll try to work that out more later. Equation 2 is really a prescription for an algorithm; it could encompass more than one algorithm. I'll have to look into that.
    Last edited: Oct 16, 2006
  3. Oct 16, 2006 #2



    I guess you mean "second smallest"? i.e. the smallest n>1 relatively prime to p#.

    This doesn't work as you've written it. Even when n=2 the set you are trying to subtract off doesn't give what you want. To have 25 in this set you are removing, you'd need r_j*r_k=5, but this isn't =1 mod 2*3. You want to remove {p_{n+1}*r | r is in the r.r.s. of p_{n}#}.

    Have you looked up the "wheel sieve" yet (Hurkyl mentioned it to you before)? There's an article "Explaining the Wheel Sieve" by Pritchard, from the early '80s, that you should find.

    I have no idea what you are trying to accomplish by introducing some strange notation that doesn't appear to simplify anything (and that kernel thing for the set you remove is still not what you want).
  4. Oct 16, 2006 #3
    Nope, I will find that article. I had a professor in math grad school who also understood the wheel sieve, discovered it on his own but never wrote it down.

    Yup, I see that the second term does not work. What we really want is to subtract this: [itex] p_{n+1}r.r.s._{[p_n^{\sharp}]}[/itex].

    On the notation: I want to investigate equations with combinations of arithmetic bases and modular operations, like 100111 base 2 congruent to 23456 base ten mod 7 mod 6. That is really not connected to this thread, so I should have started a different one.

    Notice that in fact every element of the reduced residue system for the primorial (other than 1) is prime up to the square of the second element. With each larger primorial we get a larger and larger set of primes added by this principle. The problem is that the primorials grow much faster than the squares of consecutive primes, so the return compared to the storage needed is eventually minuscule.

    But what I am wondering is whether there is a way to develop an algorithm that generates primes from the reduced residue systems (as I have described) without needing to explicitly know the entire r.r.s.

    For instance, we know how to write the reduced residue system for the (n+1)st primorial in terms of the residue system of the nth primorial. In fact we can do so in an ordered way. So it would remain only to identify p[n+1]^2 in that ordered set, that is, solve the equation [itex]p_{n+1}^2=p_n^{\sharp}x+r_i[/itex] where r is an element of the reduced residue system modulo p[n]#. But we know that the next prime is the second element of the reduced residue system, so we have that, and the division algorithm gives us x and r, but we do not have i. We could get i by one run through the ordered reduced residue system modulo p[n]#. Ok, so here is my attempt at fleshing out an algorithm.

    1) starting with base 30, form the ordered reduced residue system mod 30.
    2) the second element is the next prime, by Theorem 1, that is seven, so square it to get forty-nine. Every element greater than 1 in the r.r.s. mod thirty that is less than 49 is prime, which is everything but 1.

    "Up one rung"

    1) write out the reduced residue system modulo 210 using Equation 1 (the corrected equation) in an ordered way as [itex]30x+r,\ x \in \{0,\ldots,p_{n+1}-1\}[/itex]
    2) Seven has been removed; it goes into the base since 30*7 = 210. So eleven is the smallest save 1. 11^2=121. 121 mod 30 is 1, div 30 is 4, so the new set of primes is [itex]30x+r,\ x \in \{0,\ldots,4\}[/itex]. The index of 1 in the r.r.s. mod 30 is 1, so once we get to (x,r)=(4,1) we stop.

    "Up one rung"

    1) write out the reduced residue system modulo 2310 using Equation 1 ... (?)

    2) 11 has now been removed and goes into the base: 11*210=2310. So thirteen is the smallest save 1. 13^2=169, and 169 is within the reduced residue system mod 210, so simply accept as prime (they are prime) the elements between 121 and 169. 11*13 was already removed (?); it seems like the way it is set up now, certain square-free numbers are left in as we go higher.
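The rung-by-rung procedure above can be sketched in Python (my own rough rendering, using the corrected removal set from post #4 implicitly by recomputing each r.r.s. directly): at each rung, the r.r.s. elements above 1 and below p_{n+1}^2 are guaranteed prime.

```python
from math import gcd

def rrs(m):
    # Reduced residue system mod m, in increasing order.
    return sorted(a for a in range(1, m) if gcd(a, m) == 1)

primorial = 30
for rung in range(2):            # two rungs: bases 30 and 210
    system = rrs(primorial)
    p_next = system[1]           # second smallest element: the next prime
    # Anything coprime to the base has all prime factors >= p_next,
    # so below p_next^2 it cannot be composite.
    guaranteed = [r for r in system if 1 < r < p_next ** 2]
    print(primorial, p_next, guaranteed)
    primorial *= p_next          # climb: 30 -> 210 -> 2310
```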

    Well, it needs some work to be sure. The rate at which the primorials grow appears to be quickly destroying the effectiveness of the sieve. I'll have to go back and crank out the numerical examples step by step to see whether I am missing something. It doesn't feel quite right though.

    I will find that article Hurkyl mentioned.

    Last edited: Oct 16, 2006
  5. Oct 16, 2006 #4
    Eq. 1 [tex]\text{r.r.s. mod } p_{n+1}^{\sharp} = \{p_n^{\sharp}x+r_{i} \mid r_{i} \in \text{r.r.s. mod } p_n^{\sharp},\ 0 \le x \le p_{n+1}-1\}-p_{n+1}\,\text{r.r.s. mod } p_{n}^{\sharp}[/tex]
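A quick numerical check of this corrected equation at n = 3, i.e. building the r.r.s. mod 210 from the r.r.s. mod 30 (Python sketch; `rrs` is my own helper name):

```python
from math import gcd

def rrs(m):
    # Reduced residue system mod m, as a set.
    return {a for a in range(1, m) if gcd(a, m) == 1}

pn_sharp, p_next = 30, 7        # p_3# = 30, next prime p_4 = 7
# Translates p_n#*x + r_i for x = 0 .. p_{n+1}-1 ...
translates = {pn_sharp * x + r for x in range(p_next) for r in rrs(pn_sharp)}
# ... minus p_{n+1} times the r.r.s. mod p_n#.
removed = {p_next * r for r in rrs(pn_sharp)}
assert translates - removed == rrs(pn_sharp * p_next)   # r.r.s. mod 210
```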

    That should work.
  6. Oct 16, 2006 #5



    If you want primes up to N, you don't need the full rrs of p#, where p is the largest prime less than sqrt(N). You only need the part of it that is less than N.

    For example, use the wheel sieve up to 27, say. You'd find the rrs mod 6, {1,5}, then roll it along, stopping at 27, to get the set {1,5,7,11,13,17,19,23,25}. Then remove the multiples of 5 and you'd be done, since 7^2>27. You never need to look higher than 27. This is more pronounced when sieving up to higher values: you'll have lots of steps where you don't really 'roll the wheel' at all, i.e. you can entirely skip adding stuff to get the rrs of p_{n+1}# from the rrs of p_{n}# since the new elements lie above N; you just remove the multiples of p_{n+1}. The fast growth of p# doesn't cause memory problems.
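The up-to-27 example can be coded up directly (a minimal sketch of my own, not Pritchard's optimized algorithm): roll the mod-6 wheel up to N, then strike out multiples of each new prime p while p*p <= N.

```python
def wheel_sieve(N):
    """Sieve primes up to N by rolling the mod-6 wheel {1, 5}."""
    # Roll the wheel: all 6x + r up to N, dropping 1 itself.
    candidates = sorted(6 * x + r
                        for x in range(N // 6 + 1)
                        for r in (1, 5))
    candidates = [c for c in candidates if 1 < c <= N]
    primes = [2, 3]                      # the wheel's own primes
    while candidates:
        p = candidates.pop(0)
        primes.append(p)
        if p * p > N:                    # everything left is prime
            primes.extend(candidates)
            break
        candidates = [c for c in candidates if c % p != 0]
    return primes

print(wheel_sieve(27))   # -> [2, 3, 5, 7, 11, 13, 17, 19, 23]
```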
  7. Oct 16, 2006 #6
    Ok then that is exactly what I meant. It is beautiful if we are allowed such notions these days. I did not realize that was already well known, and I certainly did not mean it as an insult.

    A question I still have is why we keep on thinking there must be faster factorization algorithms. In pure computing terms alone, there will always be a next number that cannot be factored easily. Natural numbers with only large prime factors become relatively less dense as we head off to infinity; still, it ultimately comes down to parallel processing and speed, doesn't it?

    What would be your take on that? Are there still faster algorithms to be found or is that mostly exhausted? Are we not now just talking about ever increasing speed and parallel processing to try to factor larger numbers?

    What if Goldbach lies outside those truths we can prove? What if it is a Gödel incompleteness issue? Could we always tell?

    It seems like we have some agreement then on that notion of the wheel sieve. I'm tired, see you later.
  8. Oct 17, 2006 #7



    when weren't we?

    I would very much doubt that we have reached the pinnacle of factoring algorithms. Improvements, and entirely new algorithms, keep appearing; I don't see anything that suggests this will stop.

    Honestly, does it matter? There are plenty of conjectures in number theory (and all of math in general) that won't be resolved in my lifetime, and perhaps never at all. This doesn't mean all the work towards their resolution is a waste of time (even if they are impossible to prove and we don't know this), partial results can still be very interesting and/or useful.
  9. Oct 22, 2006 #8
    Very true and well said. You seem so much more positive, it's very refreshing. Has your paper on category theory been published, or was that Hurk?
  10. Dec 19, 2006 #9
    Yes Playdo, you are in the right direction. But you already know about my preprint in eljose's posting:
    What you are saying about the rss is exactly the sequences of numbers which I call "generations". But you failed to notice the more important thing about the generations: they have notable symmetries. And studying the recursive sieve I define in my paper, it is trivial to demonstrate the prime number theorem.
  11. Dec 19, 2006 #10
    Sorry I meant rrs in the previous post.