# P vs NP. What if we just assumed P=NP?

1. Aug 9, 2011

### kramer733

I don't know much about this P vs NP debate at all but what happens if they assume P = NP or P=/=NP? What does it mean when somebody resolves the debate? Why are there tons of mathematicians and computer scientists who are split between the two?

3. Aug 9, 2011

### CyberShot

Basically, P = NP implies that if you can quickly verify that a "solution" to a problem is the correct one, then you can also quickly come up with the solution from scratch.

P $\neq$ NP implies just the opposite, namely, that even if you can quickly verify a solution, you can't necessarily come up with that solution quickly in the first place.

As an example, consider the following set

S = {1, 3, -5, 2, 6, 2, 11}

If I tell you that the sum of two numbers in S, 11 and -5, is 6, it's pretty easy to verify that. But given that it's easy to verify, if I asked you to come up with such numbers on your own, you'd have to search for them; for the general problem of finding a subset of S with a given sum, the naive search takes exponentially more time.

That asymmetry between verifying and finding is the intuition behind P $\neq$ NP.
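The verify-versus-find asymmetry above can be sketched in Python (an illustrative aside; the `verify` and `search` names are ours, not anything from the posts). Checking a claimed pair is one addition and two membership tests; finding a pair from scratch means searching:

```python
from itertools import combinations

S = [1, 3, -5, 2, 6, 2, 11]

def verify(pair, target):
    # Checking a claimed answer: one addition, two membership tests.
    a, b = pair
    return a in S and b in S and a + b == target

def search(values, target):
    # Finding an answer from scratch: try pairs, ~n^2/2 checks here.
    # If subsets of *any* size were allowed (subset sum), the naive
    # search would face 2^n candidate subsets -- the exponential
    # blow-up the post is gesturing at.
    for a, b in combinations(values, 2):
        if a + b == target:
            return (a, b)
    return None

print(verify((11, -5), 6))  # True
print(search(S, 6))         # (-5, 11)
```

Pair search is only quadratic; the exponential cost appears once subsets of arbitrary size are allowed.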

4. Aug 9, 2011

### praeclarum

I really don't know much about the problem. What do people mean when they say they can "quickly" verify that a solution is the correct one? What counts as quickly?

5. Aug 9, 2011

### CyberShot

I believe it refers to the asymptotic run time of the verification algorithm.

More simply, "quickly" means the amount of time required to verify a solution doesn't blow up to infinity as you increase the size of the input.

6. Aug 9, 2011

### MisterX

No, this is not what it means. It does not mean merely that some asymptotic bound on the run time exists, nor does it mean the time "doesn't blow up to infinity".

For this thread, "quickly" should be interpreted as: the run time is bounded above by a polynomial function of the size of the input.

Last edited: Aug 9, 2011

7. Aug 9, 2011

### CyberShot

Technically, what I said implies what you said. If it's upper bounded like you say, then it doesn't blow up to infinity. You just gave an additional, stronger requirement to be considered "quick", correct?

8. Aug 9, 2011

### daveyp225

If the time for an algorithm to compute a solution of input size n is n^2, then it still "blows up to infinity." However, in this case, it is considered a "fast" computation. log(n) also tends to infinity but is considered "very fast." If the time is exponential, say 2^n, it is considered "slow" even though for small inputs such a function might be faster than a polynomial.

Someone decided that the class of polynomials, dubbed "polynomial time," should be the cut-off between fast and slow. So if you have a function f(n) giving the run time on inputs of size n, and you can show that f(n)/p(n) stays bounded as n -> infinity for some polynomial p, your function is considered "fast." Otherwise it is "slow."
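The bounded-ratio test above can be demonstrated numerically (a sketch; the function names are ours). For a polynomial f the ratio f(n)/p(n) stays bounded as n grows, while 2^n outgrows any fixed polynomial:

```python
def ratio(f, p, n):
    # The ratio f(n)/p(n) from the post above: bounded => "fast".
    return f(n) / p(n)

f_poly = lambda n: n**2   # polynomial time: "fast"
f_expo = lambda n: 2**n   # exponential time: "slow"
p      = lambda n: n**3   # reference polynomial

for n in (10, 20, 40):
    print(n, ratio(f_poly, p, n), ratio(f_expo, p, n))
# The first ratio shrinks toward 0; the second grows without bound,
# so no polynomial p can keep 2**n / p(n) bounded.
```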

9. Aug 10, 2011

### -Job-

In practice even $O(n^3)$ can be a challenge for large $n$, but it's certainly fast in comparison with $O(2^n)$.

The classification of polynomial as fast and of exponential as slow is informal but generally accepted (in some cases this distinction is not very accurate, e.g. $2^{0.0001n}$, $n^{\log n}$ and $n^{1000000}$).

There isn't strong evidence for P = NP and it's widely thought that P and NP are not equal but the problem is wide open.

A proof of P = NP, together with a corresponding algorithm for NP-Complete problems, would be immediately useful and of incredible importance. NP-Complete problems are the hardest problems in NP: if any one of them can be solved in polynomial time, then P = NP and all of them can.

Subgraph Isomorphism is one of the NP-Complete problems: given two graphs of n or fewer nodes, determine whether the first contains the second as a subgraph. Given a graph A of 1,000 nodes and a graph B of 500 nodes, a naive algorithm might check each possible subgraph of A to see whether it matches B. There are 2^1000 subsets of A's nodes, which is a huge number and completely out of the realm of possibility.
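For concreteness, here is a naive subgraph-isomorphism check in Python (a sketch of the brute-force idea, not a production solver; the graphs and names are our own examples). It tries every injective mapping of B's nodes into A's nodes, which is exactly the combinatorial explosion described above and is only feasible for tiny graphs:

```python
from itertools import permutations

def contains_subgraph(A_nodes, A_edges, B_nodes, B_edges):
    """Naive subgraph isomorphism: try every injective mapping of
    B's nodes into A's nodes and check that every edge of B maps to
    an edge of A. The number of mappings grows factorially with the
    graph sizes, so this only works for tiny graphs."""
    A_edge_set = {frozenset(e) for e in A_edges}
    for image in permutations(A_nodes, len(B_nodes)):
        mapping = dict(zip(B_nodes, image))
        if all(frozenset((mapping[u], mapping[v])) in A_edge_set
               for u, v in B_edges):
            return True
    return False

# A: a square with one diagonal; B: a triangle.
A = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
B = (["x", "y", "z"], [("x", "y"), ("y", "z"), ("z", "x")])
print(contains_subgraph(*A, *B))   # True: triangle 0-1-2 is in A
```

The real solvers mentioned below prune this search aggressively instead of enumerating all mappings.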

In practice we don't have to resort to checking every possible subset of A, because known optimization techniques (e.g. checking only the subsets of A that have exactly 500 nodes) can yield much faster performance. As an example, I've tested solvers that can solve random instances of up to 300 nodes in a few seconds, which is remarkable given that even non-trivial algorithms might need hours or days.

On the other hand these optimized algorithms can only go so far and are a constant work-in-progress. As we approach 500 nodes even these algorithms start taking huge amounts of time or just not completing at all.

Theoretically, if P = NP then we can automate many of the tasks we rely on humans for, e.g. finding and proving mathematical theorems. Realistically, any problem assembled from a set of constraints has a good chance of either being an instance of an NP-Complete problem or being reducible to one, so the applications of P = NP are many and independent of the field.

P != NP would also be a significant result, but not as earth-shaking. Note that the problem of proving P != NP is likely itself an instance of an NP-Complete problem, and thus reducible to an instance of Subgraph Isomorphism (all NP-Complete problems reduce to each other).

This means that P != NP directly impacts the difficulty of its own proof. This is in tune with observation, because if P != NP then its proof ought to be hard, in the same way that determining whether A contains B is hard, though the door is always open for someone to take a lucky shot and see B in A at a glance.