# P vs NP and factoring

I have some questions about the presentation of the P vs NP problem here. I'm quite new to this stuff, so pardon my ignorance if I make dumb statements.

http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf

Specifically:

> The security of the internet, including most financial transactions,
> depend on complexity-theoretic assumptions such as the difficulty of
> integer factoring or of breaking DES (the Data Encryption Standard). If P =
> NP these assumptions are all false. Specifically, an algorithm solving 3-SAT
> in n^2 steps could be used to factor 200-digit numbers in a few minutes.

First of all this statement is confusing to me. Is he talking about 200 binary digit numbers? Decimal? It's a bit ambiguous to me.

By factoring is he talking about prime factoring?

Is he talking about any 200 digit number? I can prime factor 2^200 in less than a second.

Finally I assume he's claiming that since you can verify factors of 200 digit numbers in a few minutes, if P = NP then you could factor numbers in equally bounded time. Is this correct? I'm not sure because I see 2 problems with it.

1) If the problem is bounded by n^k, who's to say what k is? It could be huge, much larger than the k for the NP problem.

2) If he's talking about prime factorization, he is effectively saying that you can verify a prime number in P time, a very dubious claim.

As far as I can tell from the state of the art in cryptography, new factoring algorithms are a big deal, steadily increasing the need for larger keys/certificates. My view is that the reason cryptography works is not because P != NP, but because prime number verification is not in P, meaning that incremental increases in number size create exponential increases in computation time. Again, I'm no expert, just wanting more information.

Thanks


> First of all this statement is confusing to me. Is he talking about 200 binary digit numbers? Decimal? It's a bit ambiguous to me.

Decimal digits. A number with 200 binary digits is already easy to factor; Pari does this in under 2 minutes on my PC using the quadratic sieve algorithm:

```
(21:25) gp > factor( nextprime(2^98) * nextprime(2^102) )
%4 =
[316912650057057350374175801351 1]

[5070602400912917605986812821771 1]

(21:27) gp >
```
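For what it's worth, that gp output can be sanity-checked with ordinary integer arithmetic in Python. The two big integers below are copied from the factor output; the observation that they sit 7 and 267 above 2^98 and 2^102 is my own, not something gp reports.

```python
# The two factors reported by gp's factor() call above.
p = 316912650057057350374175801351
q = 5070602400912917605986812821771

# They are the first primes above 2^98 and 2^102, and indeed sit
# just above those powers of two (offsets observed, not from gp).
assert p == 2**98 + 7
assert q == 2**102 + 267

# The semiprime being factored has 61 decimal digits.
n = p * q
print(len(str(n)))  # 61
```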

> By factoring is he talking about prime factoring?

Yes.
> Is he talking about any 200 digit number? I can prime factor 2^200 in less than a second.

Numbers with two prime factors of roughly the same size, such as those used for RSA encryption.
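To see why balanced factors are the hard case for naive approaches, here is a toy trial-division routine (my own illustration, not from the thread): it has to try on the order of sqrt(n) divisors before it reaches the smaller prime.

```python
def trial_factor(n):
    """Find the smallest factor of n by trial division,
    counting how many candidate divisors were tried."""
    tries = 0
    d = 2
    while d * d <= n:
        tries += 1
        if n % d == 0:
            return d, tries
        d += 1
    return n, tries  # no divisor up to sqrt(n): n is prime

# 101 * 103: both factors near sqrt(n), so ~100 divisors are tried.
print(trial_factor(101 * 103))  # (101, 100)
```

Each extra digit in the factors multiplies that work by about sqrt(10). The quadratic sieve is far better than this, but is still super-polynomial in the digit count.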

> Finally I assume he's claiming that since you can verify factors of 200 digit numbers in a few minutes, if P = NP then you could factor numbers in equally bounded time. Is this correct? I'm not sure because I see 2 problems with it.

> 1) If the problem is bounded by n^k, who's to say what k is? It could be huge, much larger than the k for the NP problem.

It's indeed true that the polynomial could have a huge exponent, although that very rarely happens for any practical algorithm. Of course this is quite uncertain.

> 2) If he's talking about prime factorization, he is effectively saying that you can verify a prime number in P time, a very dubious claim.

Actually, verifying primality can be done in polynomial time; look up the AKS primality test.
You don't even need that here, though: just checking the factorization by multiplying
the factors back together is enough, and there's no need to verify that they are prime.
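AKS itself is quite involved to implement; as a stand-in that is easy to write down, here is a sketch of the much older Miller–Rabin test in its deterministic form. This is a different algorithm from AKS, but it also runs in time polynomial in the number of digits, and the fixed base set below is known to give exact answers for all n below about 3.3 × 10^24.

```python
def is_prime(n):
    """Deterministic Miller-Rabin, exact for n < 3,317,044,064,679,887,385,961,981
    (for such n, the fixed witness bases below admit no false positives)."""
    if n < 2:
        return False
    bases = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in bases:
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)          # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # base a witnesses that n is composite
    return True

print(is_prime(2**61 - 1))  # True: 2^61 - 1 is a Mersenne prime
print(is_prime(2**67 - 1))  # False: 2^67 - 1 = 193707721 * 761838257287
```

The point for this thread is only that the cost per base is a handful of modular exponentiations, each polynomial in the bit length of n.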

peace,

a problem is said to be in P if it belongs to the set P of problems with a solution in polynomial time. this means that the time to solve it is bounded by a polynomial function (y = a_0 + a_1 n + a_2 n^2 + ... + a_m n^m for some integer m) of the size n of the problem.

suppose we want to double an integer n. as n gets larger, we will always need more time to do the calculation. however we will always find a polynomial function of n that exceeds the needed time. the problem is in P.
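as a concrete sketch of why doubling is cheap (my own illustration, not from the post): done digit by digit the schoolbook way, doubling takes a single pass over the digits, so the time is a degree-one polynomial in the size of the input.

```python
def double_digits(digits):
    """Double a number given as a list of decimal digits
    (most significant first), in one linear pass with a carry."""
    carry = 0
    out = []
    for d in reversed(digits):
        carry, r = divmod(2 * d + carry, 10)
        out.append(r)
    if carry:
        out.append(carry)
    return out[::-1]

print(double_digits([1, 2, 3]))  # [2, 4, 6]  (123 * 2 = 246)
print(double_digits([9, 9]))     # [1, 9, 8]  (99 * 2 = 198)
```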

suppose we now look at the problem of factoring an integer n. this problem is not as easy as the first one: we will need a lot of steps to calculate the factors of n as n gets larger. in fact, up to now we don't know any algorithm that guarantees a solution of the factorisation problem in polynomial time. we have no proof that the problem is in P (actually, most mathematicians would guess that it isn't in P).

now a problem is said to be in the set NP when the verification of its solution is possible in polynomial time.

it is trivial that a problem in P will also be in NP. but it is an open question if all problems in NP are also in P.

we can easily see that the factorisation problem is in NP. once we have obtained the factors p_1, p_2, ..., p_m of the integer n, we just have to multiply all these factors together to verify that our solution was correct. it is obvious that this can be done in polynomial time.
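that check can be written out in a couple of lines (a minimal sketch; the example numbers are mine):

```python
from math import prod

def verify_factorization(n, factors):
    """Polynomial-time check: multiplying the claimed factors back
    together costs only polynomially many digit operations in len(str(n))."""
    return prod(factors) == n

print(verify_factorization(10403, [101, 103]))  # True
print(verify_factorization(10403, [101, 104]))  # False
```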

so if we could prove that the factorisation of n is a problem that is not in P, we would have a counterexample to the hypothesis that P = NP: a problem p for which p ∉ P but p ∈ NP. conversely, if we could prove that P = NP, it would imply that a smart algorithm exists (and could be found) that factorises n in polynomial time.

peace,

redwasp

Additionally, I think there is something worth mentioning with respect to the security of our cryptosystems. The reason that factoring a large integer is important is because, currently, this is the only way we know how to solve the discrete log problem. If we could find a faster way to solve discrete log, then it wouldn't matter (with respect to cryptosystems) whether or not integers can be factored.

Also, if quantum computers are ever made that can, say, factor integers other than 15, this whole thing goes out the window. There are other cryptosystems, like lattice-based ones, that seem to be impervious even to quantum attacks.