
#1
Mar12-06, 10:23 PM

Sci Advisor
P: 1,132

If, instead of defining a prime as an integer divisible only by 1 and itself, we define it as an integer not divisible by any number other than 1, 2, half of itself, or itself (numbers not divisible by 2 still being "able" to be primes), then we generate a new "prime" sequence:
1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 13, 14, 17, 19, 22, 23, 26, 27 ... etc. This way we allow even numbers to be "prime". We could also define a prime as a number not divisible by any number other than 1, 2, 3, a third of itself, half of itself, or itself. As we do so, we are clearly increasing the number of "primes". The question then is: are any of these sequences any easier to predict than the traditional primes? If we let [tex]primes_{1}[/tex] be the traditional primes, then: [tex]primes_{1} = 1, 3, 5, 7, 11, 13, 17, 19, 23, 27 ...[/tex] [tex]primes_{2} = 1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 13, 14, 17, 19, 22, 23, 26, 27 ...[/tex] [tex]primes_{3} = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 19, 21, 22, 23, 26, 27 ...[/tex] [tex]primes_{4} = ...[/tex] Do you think these sequences behave similarly after a while? If they're all as "unpredictable", then that means there are at least infinitely many sequences that are as unpredictable as the primes.
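These sequences can be generated mechanically from the stated definition. A minimal sketch (the function names and the k-parameterization are mine, not from the post): a number n belongs to primes_k when every divisor of n is either at most k or of the form n/j for some j ≤ k. With k = 1 this yields 1 together with the usual primes.

```python
def divisors(n):
    """All divisors of n (fine for the small n used here)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def generalized_primes(k, limit):
    """The post's primes_k: n qualifies if every divisor of n is
    either at most k or equal to n/j for some j <= k."""
    out = []
    for n in range(1, limit + 1):
        allowed = set(range(1, k + 1)) | {n // j for j in range(1, k + 1) if n % j == 0}
        if all(d in allowed for d in divisors(n)):
            out.append(n)
    return out

# generalized_primes(2, 29) -> [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 13, 14, 17, 19, 22, 23, 26, 29]
```

Note that the generated primes_2 agrees with the list above through 26; past that point the generator and the hand-written lists are worth cross-checking.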



#2
Mar12-06, 10:58 PM

Emeritus
Sci Advisor
PF Gold
P: 16,101

Whenever [itex]p \mid ab[/itex], we have [itex]p \mid a[/itex] or [itex]p \mid b[/itex] (or both, of course). The traditional primes, of course, include 2 and do not include 1. Your set "primes2" is simply all numbers of the form 2^m, p, or 2p, where m < 4 and p is a prime. Your set "primes3" is simply all numbers of the form 2^m 3^n, p, 2p, or 3p, where m+n < 4 and p is a prime. (Yes, you missed "18" in your list "primes3".)
(1) Write down a list of properties that completely characterizes the primes.
(2) Write down an algorithm that, for any n, computes the nth prime.
(3) Write down a formula for the nth prime.
In some technical sense, (1) is extraordinarily rare, and I think (2) and (3) are even more rare.
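The claimed characterization of primes2 (2^m with m < 4, p, or 2p) can be checked mechanically against the divisor-based definition. A small sketch, with function names of my own choosing:

```python
def is_prime(n):
    """Plain trial division, adequate for small n."""
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def primes2_from_definition(limit):
    """n qualifies if its only divisors are 1, 2, n/2, or n."""
    out = []
    for n in range(1, limit + 1):
        allowed = {1, 2, n} | ({n // 2} if n % 2 == 0 else set())
        if all(d in allowed for d in range(1, n + 1) if n % d == 0):
            out.append(n)
    return out

def primes2_from_characterization(limit):
    """Numbers of the form 2^m (m < 4), p, or 2p with p prime."""
    s = {1, 2, 4, 8}
    s |= {p for p in range(2, limit + 1) if is_prime(p)}
    s |= {2 * p for p in range(2, limit // 2 + 1) if is_prime(p)}
    return sorted(x for x in s if x <= limit)
```

The two functions agree on every limit I tried, which is what the characterization predicts: an odd member can only be 1 or a prime, and an even member 2m forces m itself to be 1, 2, 4, or prime.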



#3
Mar12-06, 11:34 PM

Sci Advisor
P: 1,132

From an algorithmic perspective, identifying primes, or the nth prime, isn't that simple, since compositeness is in NP. We can't determine whether a number is composite or prime in polynomial time (as far as we/I know); to get an answer with 100% certainty you would need exponential time. I was wondering whether adding numbers to the primes, as in primes(2) and primes(3), or any other sequence obtained by adding numbers to the primes, would produce a sequence whose elements we can identify in deterministic polynomial time, and at what point that change occurs. Such a sequence would say something about the certainty we can achieve in polynomial time about whether a number is or isn't a prime. There are other problems in NP that seem to suffer the same problem of not admitting 100% certainty in polynomial time, and which I have reduced to sequences of numbers similar, but not equal, to the primes.
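For context on the "certainty in polynomial time" point: the standard randomized tests do run in time polynomial in the bit length, at the price of a bounded error probability. A minimal sketch of the Miller-Rabin test (a standard algorithm, not code from this thread):

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test, polynomial in the bit length of n.
    Primes always pass; a composite passes one round with probability < 1/4."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):          # handle small factors directly
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # modular exponentiation: polynomial time
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness: n is certainly composite
    return True                     # probably prime
```

A "composite" answer here is certain; only the "prime" answer carries the (exponentially small) error, which is exactly the gap between this kind of test and a deterministic one.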




#4
Mar12-06, 11:46 PM

P: 1,373

Other "primes"
Umm, is prime factoring really in NP??? I was under the impression that
for (int i = 0; i <= n/2; i++) isfactor(n, i);
is a polynomial-time problem? The problem is that it is computationally inefficient to do this for very large n, but the algorithm is still polynomial, isn't it?



#5
Mar12-06, 11:50 PM

Sci Advisor
P: 1,132

No, you're not seeing it correctly. Numbers are represented in binary, so your n is represented in log(n) bits; hence the input size is actually log(n). Letting m be the input size, you actually require (2^m)/2 operations.
I did see an article about an algorithm for identifying primes in polynomial time, but either it is not correct or it is not yet accepted. In theory such an algorithm could exist, because compositeness isn't NP-complete, so it wouldn't say anything about whether P = NP; but I don't know that it does exist, and it is certainly not trivial to come up with one.



#6
Mar13-06, 12:02 AM

Sci Advisor
P: 1,132

Actually, it is in P after all:
http://en.wikipedia.org/wiki/AKS_primality_test Kind of unexpected for me, because when dealing with NP-complete problems I'd swear I was dealing with something very close to the primes. This means that one route to showing P = NP (unlikely) would be to reduce an NP-complete problem to the problem of primality testing. It seems like I'm always wrong no matter what, though. :P One of these days I'll run out of wrong things to say. :P



#7
Mar13-06, 12:49 AM

Sci Advisor
HW Helper
P: 2,589





#8
Mar13-06, 01:03 AM

Sci Advisor
P: 1,132

I'm guessing isFactor(n, i) would be equivalent to:
n mod i == 0, which is linear or quadratic, I'm not sure which; it depends on how multiplication/division is implemented on the given architecture, but I think it's linear. It's definitely polynomial, though. The problem is that looping from 0 to n/2 actually takes exponential time. Neurocomp's algorithm isn't polynomial, but whether he knew it or not, he is correct about primality being in P.



#9
Mar13-06, 01:15 AM

Sci Advisor
HW Helper
P: 2,589

Well, primality testing and prime factoring are two different tasks. His algorithm for primality testing looks like it would take polynomial time to me. You're saying that isfactor is linear, and it is run n/2 times, so we'd get something on the order of n²/2 as the running time, wouldn't we? I don't understand this:
"No, you're not seeing it correctly. Numbers are represented in binary. So your n is represented in log(n) bits, hence the input size is actually log(n). Letting m be the input size, then you actually require (2^m)/2 operations."



#10
Mar13-06, 01:33 AM

Sci Advisor
P: 1,132

Neurocomp's algorithm is the following:

function isPrime(n) {
    for (i = 2; i <= n/2; i++) {   // i = 0 would divide by zero, and i = 1 divides everything
        if (n % i == 0) {
            // n has a divisor other than 1 and itself
            return false;
        }
    }
    return true;
}

This function seems polynomial but isn't. The fact is that an algorithm's runtime is always defined with respect to the input size. The number n is the input, and the input size is the number of bits used to represent n. Since n is represented in binary, when we call isPrime(5) we are actually calling isPrime(101), because 5 is 101 in binary; the input size is 3. If we call isPrime(512), the input size is 10.

The performance analysis is always done with respect to the input size. If we let m be the input size, then the loop actually runs up to ~(2^m)/2 times, which is exponential in m. This is an important distinction, because it means the primality problem isn't obviously solvable in deterministic polynomial time, and hence isn't immediately in P. Only recently (I didn't even know until today) was a deterministic polynomial-time algorithm found: http://en.wikipedia.org/wiki/AKS_primality_test But from what I've read, it was always suspected that primality was in P.

If we chose to see n, rather than log(n), as the input size, then by that standard the AKS primality test would solve the primality problem in O(log(n)^6) rather than Neurocomp's O(n), much faster. It may seem like only a formality, and it kind of is, but unless we define all problems in the same manner, it's of no use to group them into categories such as P and NP. I mean that it's not incorrect to say that Neurocomp's algorithm is polynomial with respect to n, because it is; it's just not in sync with how problems are generally analyzed, so it's a misleading statement.
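The exponential blow-up described here is easy to see numerically. A tiny sketch of mine (not from the thread), counting the iterations of the "i up to n/2" loop as the input gains bits:

```python
def loop_iterations(m):
    """Iterations of the 'i up to n/2' loop for the smallest m-bit input."""
    n = 2 ** (m - 1)   # smallest integer that needs m bits
    return n // 2      # the loop body executes about n/2 times

# Each extra bit of input doubles the work: 2^(m-1)/2 = 2^(m-2) iterations.
for m in (8, 16, 32):
    print(m, loop_iterations(m))   # 8 -> 64, 16 -> 16384, 32 -> 1073741824
```

So an input that fits on one line of text (a few hundred bits) already puts trial division far beyond reach, even though the loop is "only" O(n).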



#11
Mar13-06, 03:30 AM

P: 534

Primality testing is in P; just look at the sieve of Eratosthenes. Super easy.




#12
Mar13-06, 07:10 AM

Emeritus
Sci Advisor
PF Gold
P: 16,101

For testing a k-bit number, you have to sieve over an array of size [itex]2^k[/itex]. How do you plan on doing that in polynomial time? Job: it's interesting to note that factoring is known to have an L(1/3, 1+e) algorithm (the number field sieve). Isn't that better than the best we know for NP problems? More food for thought.



#13
Mar13-06, 09:57 AM

P: 534

You're right, I guess I should let my Number Theory prof know he was wrong :) (I like it when profs are wrong; it reminds me that they are not infallible... no matter how some of them like to pretend they are.) I did a search after I read your post and found the Dynamic Wheel Sieve, which is O(log log n).



#14
Mar13-06, 10:06 AM

P: 1,373

My bad... by "prime factoring" I meant primality testing... reasons why I need to sleep more.




#15
Mar13-06, 10:06 AM

Sci Advisor
HW Helper
P: 1,996

Pritchard's wheel sieve gave something like O(N/log log N) time (measured in additions) to produce a list of the primes less than N. That may certainly have been improved upon (not to O(log log N), of course, or anything close). What sieve are you referring to? Edit: I googled "Dynamic Wheel Sieve" and one of the first links mentions Pritchard's with a running time of "O(n= log log n)", which may be what ComputerGeek found: a typo or ASCII translation error.



#16
Mar13-06, 03:50 PM

Sci Advisor
P: 1,132

It's important to get our variables straight. If sieve(N) is a sieve that builds a list of all primes up to N, then let's define n such that n = log(N). So N is the input number, and n is the number of bits needed to represent N.
The runtime of the sieve of Eratosthenes is O(N log log N), which is equivalent to O(2^n log n). From what I saw online, Pritchard's dynamic wheel sieve requires only O(N/log log N), which is equivalent to O(2^n/log n). The AKS primality test, on the other hand, runs in O((log N)^12), which is equivalent to O(n^12); apparently it can very likely be reduced to O(n^6). Of all these algorithms only AKS is polynomial, but AKS isn't a sieve: it determines whether a single number is prime or not.
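For concreteness, here is the classic sieve (a standard textbook sketch, not code from the thread). The array it allocates has size N = 2^n, which is exactly why no sieve over [2, N] can be polynomial in the bit length n:

```python
def eratosthenes(N):
    """All primes <= N in O(N log log N) operations:
    near-linear in N, but exponential in the bit length n = log2(N)."""
    flags = [True] * (N + 1)       # the array itself already has size 2^n
    flags[0] = flags[1] = False
    p = 2
    while p * p <= N:
        if flags[p]:
            # cross off multiples of p, starting at p*p
            for multiple in range(p * p, N + 1, p):
                flags[multiple] = False
        p += 1
    return [i for i, f in enumerate(flags) if f]
```

AKS, by contrast, touches only a polynomial (in n) amount of data about the single input number, which is what lets it escape this barrier.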



#17
Mar14-06, 11:15 AM

P: 534





#18
Mar14-06, 12:04 PM

Sci Advisor
HW Helper
P: 1,996



