Other primes

  1. -Job-

    -Job- 1,132
    Science Advisor

    Other "primes"

    If, instead of defining a prime as an integer divisible only by 1 or itself, we define it as an integer not divisible by any number other than 1, 2, half of itself, or itself (numbers not divisible by 2 are still "able" to be primes), then we generate a new "prime" sequence:
    1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 13, 14, 17, 19, 22, 23, 26, 27 ...
    This way we allow even numbers to be "prime". We could also define a prime as a number not divisible by any number other than 1, 2, 3, a third of itself, half of itself, or itself. As we do so we are clearly increasing the number of "primes". The question then is: are any of these sequences any easier to predict than the traditional primes?
    If we let [tex]primes_{1}[/tex] be the traditional primes, then:
    [tex]primes_{1} = 1, 3, 5, 7, 11, 13, 17, 19, 23, 27 ...[/tex]
    [tex]primes_{2} = 1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 13, 14, 17, 19, 22, 23, 26, 27 ...[/tex]
    [tex]primes_{3} = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 19, 21, 22, 23, 26, 27 ...[/tex]
    [tex]primes_{4} = ...[/tex]
    Do you think these sequences behave similarly after a while? If they're all equally "unpredictable", then that means there are infinitely many sequences that are as unpredictable as the primes.
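    For concreteness, here is a minimal sketch (my illustration, not part of the original post) of how these sequences could be generated directly from the divisibility rule described above; the function name generalized_primes is made up for this example:

    def generalized_primes(k, limit):
        """Numbers n <= limit whose only divisors are 1..k, the cofactors n/d for d in 2..k, and n itself."""
        result = []
        for n in range(1, limit + 1):
            # divisors the rule permits: 1..k, the corresponding cofactors n/d, and n itself
            allowed = set(range(1, k + 1)) | {n} | {n // d for d in range(2, k + 1) if n % d == 0}
            if all(d in allowed for d in range(1, n + 1) if n % d == 0):
                result.append(n)
        return result

    print(generalized_primes(1, 30))  # "divisible only by 1 and itself" (1 and 2 qualify under this reading)
    print(generalized_primes(2, 30))  # the primes_2-style sequence
    print(generalized_primes(3, 30))  # the primes_3-style sequence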
     
    Last edited: Mar 12, 2006
  3. Hurkyl

    Hurkyl 16,089
    Staff Emeritus
    Science Advisor
    Gold Member

    Technically, that's irreducible. For p to be prime, it has to satisfy:

    Whenever p | ab, we have p | a or p | b.

    (or both, of course)
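    (A side note, my addition rather than Hurkyl's: in the ordinary integers the two notions coincide, but in other rings they can come apart. For example, in [itex]\mathbb{Z}[\sqrt{-5}][/itex],
    [tex]2 \mid 6 = (1+\sqrt{-5})(1-\sqrt{-5}), \qquad 2 \nmid 1+\sqrt{-5}, \qquad 2 \nmid 1-\sqrt{-5},[/tex]
    so 2 is irreducible there but not prime.)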


    The traditional primes, of course, include 2 and do not include 1. :tongue2:


    Your set "primes2" is simply all numbers of the form:
    2^m, p, 2p
    where m < 4, and p is a prime.

    Your set "primes3" is simply all numbers of the form:
    2^m 3^n, p, 2p, 3p
    where m+n < 4, and p is a prime.

    Yes, you missed "18" in your list "primes3".
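    A quick way to sanity-check this characterization (my sketch, not from the post; all names are made up for the example) is to compare the direct definition of "primes2" with the claimed form up to some bound:

    LIMIT = 200

    def divisors(n):
        return {d for d in range(1, n + 1) if n % d == 0}

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    # direct definition: the only divisors of n are among 1, 2, n/2 and n
    direct = {n for n in range(1, LIMIT + 1)
              if divisors(n) <= {1, 2, (n // 2 if n % 2 == 0 else n), n}}

    # the claimed form: 2^m (m < 4), p, and 2p for p prime
    by_form = ({2 ** m for m in range(4)}
               | {p for p in range(2, LIMIT + 1) if is_prime(p)}
               | {2 * p for p in range(2, LIMIT + 1) if is_prime(p) and 2 * p <= LIMIT})

    print(direct == by_form)  # True (up to LIMIT)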


    We already knew that. Heck, as far as sequences go, the sequence of primes is extraordinarily simple. We can:
    (1) Write down a list of properties that completely characterizes the primes.
    (2) Write down an algorithm that, for any n, computes the n-th prime.
    (3) Write down a formula for the n-th prime.

    In some technical sense, (1) is extraordinarily rare, and I think (2) and (3) are even more rare.
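    For point (2), a slow but perfectly well-defined example of such an algorithm (my sketch, not Hurkyl's; nth_prime is a made-up name) is plain trial division:

    def nth_prime(n):
        """Return the n-th prime (so nth_prime(1) == 2), found by trial division."""
        count, candidate = 0, 1
        while count < n:
            candidate += 1
            # candidate is prime iff no d in 2..sqrt(candidate) divides it
            if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
                count += 1
        return candidate

    print([nth_prime(k) for k in range(1, 11)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]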
     
  4. -Job-

    -Job- 1,132
    Science Advisor

    From an algorithmic perspective, identifying primes, or the nth prime, isn't that simple, since compositeness is in NP. We can't determine whether a number is composite or prime in polynomial time (as far as we/I know); to have an answer with 100% certainty you would need exponential time. I was wondering whether adding numbers to the list of primes, as in primes(2) and primes(3), or any other sequence that results from adding numbers to the primes, would produce a sequence whose elements we can identify in deterministic polynomial time, and at what point that change occurs. Such a sequence would say something about the certainty we can achieve in polynomial time about whether a number is or isn't prime. There are other problems in NP that seem to suffer from the same difficulty of not producing 100% certainty in polynomial time, and which I have reduced to a sequence of numbers similar, but not equal, to the primes.
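    For a sense of what "certainty achievable in polynomial time" can mean here, this is the kind of probabilistic test being alluded to (my illustration, not from the post; the function name is made up): a Fermat test runs in time polynomial in the number of bits of n, but only gives evidence of primality, not proof.

    import random

    def fermat_probably_prime(n, rounds=20):
        """Polynomial-time probabilistic primality check via Fermat's little theorem."""
        if n < 4:
            return n in (2, 3)
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            if pow(a, n - 1, n) != 1:
                return False  # a witness: n is definitely composite
        return True  # n is probably prime (Carmichael numbers can still fool this test)

    print(fermat_probably_prime(2 ** 61 - 1))  # True (2^61 - 1 is a known Mersenne prime)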
     
  5. umm, is prime factoring really in NP??? I was under the impression that
    for (int i=2; i<=n/2; i++) isfactor(n,i)
    is a polynomial-time algorithm. The problem is that it is computationally inefficient to do this for very large n, but the algorithm is still polynomial, isn't it?
     
  6. -Job-

    -Job- 1,132
    Science Advisor

    No, you're not seeing it correctly. Numbers are represented in binary. So your n is represented in log(n) bits, hence the input size is actually log(n). Letting m be the input size, then you actually require (2^m)/2 operations.
    I did see an article about an algorithm for identifying primes in polynomial time, but either it is not correct or it is not yet accepted. In theory such an algorithm could exist, because compositeness isn't known to be NP-complete, so it wouldn't say anything about whether P=NP, but I don't know that one exists, and it is certainly not trivial to come up with such an algorithm.
     
    Last edited: Mar 12, 2006
  7. -Job-

    -Job- 1,132
    Science Advisor

    Actually, it is in P after all:
    http://en.wikipedia.org/wiki/AKS_primality_test
    Kind of unexpected for me, because when dealing with NP-complete problems I'd swear I was dealing with something very close to the primes. This means that a route to showing that P=NP (unlikely) would be to reduce an NP-complete problem to the problem of primality testing.

    It seems like I'm always wrong no matter what, though. :P One of these days I'll run out of wrong things to say. :P
     
    Last edited: Mar 13, 2006
  8. AKG

    AKG 2,585
    Science Advisor
    Homework Helper

    Does 'isfactor' take polynomial time?
     
  9. -Job-

    -Job- 1,132
    Science Advisor

    I'm guessing isFactor(n, i) would be equivalent to:
    n mod i == 0
    which is linear or quadratic in the number of bits (I'm not sure which; it depends on how multiplication/division is implemented on the given architecture, but I think it's linear). It's definitely polynomial, though. The problem is that looping from 2 to n/2 actually takes exponential time in the input size.
    Neurocomp's algorithm isn't polynomial, but whether he knew it or not, he is correct about Primality being in P. :tongue2:
     
    Last edited: Mar 13, 2006
  10. AKG

    AKG 2,585
    Science Advisor
    Homework Helper

    Well, primality testing and prime factoring are two different tasks. His algorithm for primality testing looks like it would take polynomial time to me. You're saying that isfactor is linear, and it is run n/2 times, so we'll get something on the order of n²/2 as the running time, wouldn't we? I don't understand this:

    No, you're not seeing it correctly. Numbers are represented in binary. So your n is represented in log(n) bits, hence the input size is actually log(n). Letting m be the input size, then you actually require (2^m)/2 operations.
     
  11. -Job-

    -Job- 1,132
    Science Advisor

    Neurocomp's algorithm is the following:
    function isPrime(n){
        // trial division: try every candidate divisor from 2 up to n/2
        for(var i = 2; i <= n/2; i++){
            if(n % i == 0){
                // n has a divisor other than 1 and itself, so it is composite
                return false;
            }
        }
        return true;
    }

    This function seems polynomial but isn't. The fact is that an algorithm's runtime is always defined with respect to the input size. The number n is the input, and the input size is the number of bits used to represent n. The number n is represented in binary, so when we call isPrime(5) we are actually calling isPrime(101), since 5 is 101 in binary. The input size is 3.
    If we call isPrime(512) the input size is 10.
    The performance analysis is always done with respect to the input size. If we let m be the input size, then the loop actually runs from 2 up to ~(2^m)/2, which is exponential.
    This is an important distinction because it says that the Primality problem isn't obviously solvable in deterministic polynomial time, and hence isn't immediately in P. Only recently (I didn't even know until now) did we find an algorithm that is deterministic and polynomial:
    http://en.wikipedia.org/wiki/AKS_primality_test
    But from what I've read, it was always suspected that Primality was in P.

    If we choose to see n itself as the input size, rather than log(n), then by that standard the AKS primality test solves the Primality problem in O(log(n)^6) time rather than Neurocomp's O(n), which is much faster.
    It may seem like only a formality, and it kind of is, but unless we define all problems in the same manner, it's of no use to group them into categories such as P and NP.
    I mean that it's not incorrect to say that Neurocomp's algorithm is polynomial with respect to n, because it is; it's just not in sync with how problems are generally analyzed, so it's a misleading statement.
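    To make the distinction concrete, a small illustration (mine, not from the post): the number of iterations of the trial-division loop grows exponentially in the input size m, the number of bits of n.

    for n in (5, 512, 2 ** 20, 2 ** 40):
        m = n.bit_length()   # input size in bits
        iterations = n // 2  # roughly 2^(m-1) iterations of the loop above
        print(n, m, iterations)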
     
    Last edited: Mar 13, 2006
  12. Primality testing is in P, just look at the sieve of Eratosthenes. Super easy.
     
  13. Hurkyl

    Hurkyl 16,089
    Staff Emeritus
    Science Advisor
    Gold Member

    How do you figure?

    For testing a k-bit number, you have to sieve over an array of size [itex]2^k[/itex]. How do you plan on doing that in polynomial time?



    Job: it's interesting to note that factoring is known to have an L(1/3, 1+e) algorithm (the number field sieve) -- isn't that better than the best we know for NP-complete problems? More food for thought.
     
    Last edited: Mar 13, 2006
  14. Hmm.....

    You're right, I guess I should let my Number Theory prof know he was wrong :-) (I like it when profs are wrong; it reminds me that they are not infallible... no matter how much some of them like to pretend they are)

    I did a search after I read your post and found the Dynamic Wheel Sieve, which is O(log log n).
     
  15. My bad... by "prime factoring" I meant primality testing... reasons why I need to sleep more.
     
  16. shmoe

    shmoe 1,994
    Science Advisor
    Homework Helper

    or you misunderstood him.

    What exactly are you claiming can be done in O(log log n) time? Certainly NOT testing the primality of an integer n; that would give primality testing in time logarithmic in the number of bits of n, much better than polynomial time.

    Pritchard's wheel sieve gave something like O(N/log log N) time (measured in additions) to produce a list of the primes less than N. That may certainly have been improved upon (not to O(log log N), of course, or anything close). What sieve are you referring to?

    edit- I googled "Dynamic Wheel Sieve" and one of the first links mentions Pritchard's with a running time of "O(n= log log n)", which may be what ComputerGeek found: a typo or ASCII translation error.
     
    Last edited: Mar 13, 2006
  17. -Job-

    -Job- 1,132
    Science Advisor

    It's important to get our variables straight. If sieve(N) is a sieve that builds a list of all primes up to N, then let's define n such that n = log(N). So N is the input number, and n is the number of bits needed to represent N.
    The runtime of the sieve of Eratosthenes is O(N log log N), which is equivalent to O(2^n log n).
    From what I saw online, Pritchard's dynamic wheel sieve requires only O(N/log log N), which is equivalent to O(2^n/log n).

    The AKS primality test, on the other hand, runs in O((log N)^12), which is equivalent to O(n^12). Apparently it can very likely be reduced to O(n^6).
    Of these algorithms only AKS is polynomial in the input size n, but AKS isn't a sieve; it determines whether a single number is prime.
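    To make the comparison concrete, here is a standard sieve of Eratosthenes sketch (mine, not from the post); it does roughly N log log N work, i.e. about 2^n log n in terms of the bit length n of the bound:

    def eratosthenes(N):
        """Return the list of all primes up to N."""
        is_prime = [True] * (N + 1)
        is_prime[0:2] = [False, False]  # 0 and 1 are not prime
        p = 2
        while p * p <= N:
            if is_prime[p]:
                # cross off multiples of p, starting at p*p
                for multiple in range(p * p, N + 1, p):
                    is_prime[multiple] = False
            p += 1
        return [i for i, flag in enumerate(is_prime) if flag]

    print(eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]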
     
    Last edited: Mar 13, 2006
  18. nope.


    You're right, I did use that first link. If it is a typo, then that is what it is.
     
  19. shmoe

    shmoe 1,994
    Science Advisor
    Homework Helper

    Do keep in mind the difference between polynomial time in n and polynomial time in the number of bits of n. It would be a true shame if your prof claimed Eratosthenes was polynomial in the number of bits of n.

    The = sign should be a division sign. The prime number theorem gives pi(x) ~ x/log(x) primes less than x; you can't possibly list them all in less time.
     