How to prove problem is P or NP?

In summary: to show a problem is in P you exhibit a polynomial-time algorithm for it, and to show it is in NP you exhibit a polynomial-time verifier for proposed solutions; hardness is usually shown by reducing a known NP-complete problem (such as 3-SAT) to it. The P = NP question itself remains open despite a million-dollar prize.
  • #1
Superposed_Cat
Hey all, I was looking at P = NP and time complexity. I was wondering how one proves a problem is in P or NP, aside from "demonstrating" it. There must be a formal way, otherwise P = NP would be impossible to prove and no one would try. Any help appreciated, thanks.
 
  • #2
Ah, well, this is actually what I'm currently studying... namely, analysis and design of algorithms.

Each algorithm has a time complexity which we formulate as being asymptotically proportional to an order of growth, via theta-notation. In the field of classifying problems (rather than algorithms), 'P' stands for polynomial, while 'NP' stands for non-polynomial. Problems in the class P have been proven to have polynomial-time algorithms, while problems in NP have been proven not to have polynomial-time algorithms.

As you can imagine, proving a given problem is in P rather than NP is easier: Just design a polynomial algorithm for it. Proving a problem is in the class of problems NP is a little harder, and usually depends on the problem. For the famous traveling salesman problem, I believe you might invoke graph theory to establish a non-polynomial lower bound.

A lemma on the definition of 'polynomial-time' and 'non-polynomial time'... P is the set of problems proportional or less than proportional to the order of growth of polynomials (##\Theta(n^k)##), while the time-complexity of NP is strictly above that order of growth, e.g. ##\Theta(2^n)##, ##\Theta(n!)##

If you want technical, quantitative information (rather than the above qualitative description) on how to prove it, you're going to have to give a specific problem.

Wouldn't it be strange if it was shown to be impossible to prove P != NP or P=NP? Or perhaps it is also impossible to prove it is impossible to prove P != NP or P=NP.
 
  • #3
ellipsis said:
Wouldn't it be strange if it was shown to be impossible to prove P != NP or P=NP? Or perhaps it is also impossible to prove it is impossible to prove P != NP or P=NP.
It's worth a million bucks if you figure it out...
Although the P = NP? problem itself remains open despite a million-dollar prize...

http://en.wikipedia.org/wiki/P_versus_NP_problem
 
  • #4
ellipsis said:
invoke graph theory
OK, so graph theory is the bridge between math (even if it's applied math) and the more abstract computer science. Got it, thanks.
 
  • #5
Superposed_Cat said:
Hey all, I was looking at P = NP and time complexity. I was wondering how one proves a problem is in P or NP, aside from "demonstrating" it. There must be a formal way, otherwise P = NP would be impossible to prove and no one would try. Any help appreciated, thanks.

My two cents: one main way is to show your NP problem is equivalent to another NP problem. I can walk you through a thought experiment that converts traveling salesman to the Hamiltonian path problem or to subset sum, if you want intuition on it. You can't really prove they are not in P, because the P != NP conjecture isn't proven; at least that's my understanding.
 
  • #6
Fooality said:
I can walk you through a thought experiment that converts traveling salesman to the Hamiltonian path problem or to subset sum, if you want intuition on it.
Okay, that would be nice.
 
  • #7
ellipsis said:
'P' stands for polynomial, while 'NP' stands for non-polynomial. Problems in the class P have been proven to have polynomial-time algorithms, while problems in NP have been proven not to have polynomial-time algorithms.

This is a common misconception. P = NP would be kind of a trivial question if that were true, since polynomial and non-polynomial are obviously disjoint. In fact, NP stands for non-deterministic polynomial, which comes from the most formal, low-level definition in terms of deterministic and non-deterministic Turing machines as abstract computational models. Essentially, a problem is in NP if there exists a deterministic Turing machine that, given a proposed solution, can verify in polynomial time whether it really is a correct solution. Moreover, the size of the solution encoded as a "string" must also be polynomially bounded with respect to the problem's size.

Turing machines are special, because it's conjectured that a Turing machine can compute anything which is computable. It turns out that most programming languages are Turing complete ( that is, they have the same power as a Turing machine ). In practice it is common to just come up with algorithms in pseudo-code to construct proofs rather than work with TMs directly.

Anyways, a problem is in NP if there exists an algorithm that can verify a correct solution in polynomial time.

The question "does P = NP?" asks whether every problem whose correct solutions can be verified in polynomial time can also be solved in polynomial time.
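To make the verifier idea concrete, here is a rough sketch of my own (not from any textbook) of a polynomial-time verifier for subset sum, which was mentioned above: the certificate is just a proposed subset, and checking it is a single pass over the input.

[CODE=python]
# Illustrative polynomial-time verifier for SUBSET-SUM.
# The certificate is a list of indices into 'numbers'.
def verify_subset_sum(numbers, target, certificate):
    if len(set(certificate)) != len(certificate):
        return False                                  # indices must be distinct
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False                                  # indices must be in range
    return sum(numbers[i] for i in certificate) == target

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))   # True: 4 + 5 == 9
[/CODE]

Finding such a subset is the part nobody knows how to do in polynomial time; checking a proposed one is easy, and that is exactly what puts subset sum in NP.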

There are two other related sets of problems, NP-hard and NP-complete. A problem is NP-hard if all problems in NP can be reduced to it in polynomial time, and NP-complete if it is in NP and is NP-hard.

There are a few famous NP-complete problems that have very clever direct proofs. One is SAT, the problem of determining whether some assignment of truth values satisfies ( makes true ) a given boolean expression. This problem is also reducible to a special form, 3-SAT, in which the expression has the form ( a or b or c ) and ( d or e or f ) ... This problem is a sort of main starting point for proving NP-hardness. Once a problem is shown to be NP-hard, you can show another is also NP-hard by reducing the first to the new problem. So you have a sort of chain, or domino effect: reducing one problem to another, and then to another, and so on, and as you go you keep adding to the set of known NP-hard problems.

In other words, NP complete problems can essentially be generalized to solve any problem in NP.

To prove P=NP, you could construct a polynomial time algorithm for any NP complete problem ( emphasis on complete ). If you could do this, then you could also convert this algorithm in polynomial time to a form that can solve any problem in NP.

This is a big deal in cryptography because we rely on the ability to use a key to quickly decrypt a cipher, while assuming that deciphering the code without the key will take far too long to be feasible. But if P = NP, then any code that can be deciphered quickly with the key could also be cracked quickly without the key.

I recommend Introduction to The Theory of Computation by Michael Sipser if you would like to dig deeper. Theoretical computer science is a very fascinating subject.

Here is an example problem from one of my homework assignments, where it is shown that the problem [itex]D[/itex], of determining whether a polynomial has an integer root, is NP-hard, by giving an algorithm that polynomial-time reduces [itex]3SAT[/itex] to it.

[itex]3SAT \leq_p D[/itex] ( reads 3SAT polynomial time reduces to D )

Given a formula [itex]\phi \in 3SAT[/itex], you can convert it to an equation [itex]\beta \in D[/itex] using the following construction.

Each clause in [itex]\phi[/itex] is mapped to a term in [itex]\beta[/itex]: if a variable [itex]v[/itex] is complemented in the clause, it contributes a factor of [itex](v - 1)^2[/itex] to the corresponding term of the polynomial; otherwise it contributes a factor of [itex]v^2[/itex].

Example:

[itex]f( "( x \vee \overline{y} \vee z ) ( \overline{a} \vee y \vee \overline{c} )( \overline{x} \vee z \vee a )" ) = "x^2(y-1)^2z^2 + (a-1)^2y^2(c-1)^2+(x-1)^2z^2a^2"[/itex]

Because each term is non-negative, the polynomial can only evaluate to 0, after plugging in values for its variables, if each term also evaluates to 0. We want that if a truth assignment satisfies [itex]\phi[/itex], then all terms of [itex]\beta[/itex] evaluate to 0 for some assignment of its variables, and that if no such assignment of [itex]\beta[/itex] exists, then [itex]\phi[/itex] is not satisfiable.

If a clause is satisfied in [itex]\phi[/itex], you can make its corresponding term equal 0 in [itex]\beta[/itex] by assigning 0 to the variables of [itex]\beta[/itex] that are assigned true in [itex]\phi[/itex], and 1 to those assigned false. Then, whenever a variable's value makes a particular clause true in [itex]\phi[/itex], a factor of 0 is introduced into the corresponding term of [itex]\beta[/itex]. If, under a given assignment, some clause in [itex]\phi[/itex] is not satisfied, then the corresponding term of [itex]\beta[/itex] is non-zero under the corresponding assignment, so that assignment is not a root of [itex]\beta[/itex].
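Here is a rough sketch in Python of the mapping described above (my own illustration of the construction, with clauses given as lists of (variable, negated) pairs):

[CODE=python]
# Sketch of the clause-to-term mapping f described above. A clause is a list
# of (variable, negated) pairs; the output is the polynomial written as a string.
def clause_to_term(clause):
    factors = []
    for var, negated in clause:
        # complemented literal -> (v - 1)^2, plain literal -> v^2
        factors.append(f"({var} - 1)**2" if negated else f"{var}**2")
    return "*".join(factors)

def formula_to_polynomial(clauses):
    # one term per clause, summed; the conversion is linear in the size of
    # the formula, so the reduction itself runs in polynomial time
    return " + ".join(clause_to_term(c) for c in clauses)

# The example formula from the post: (x or not y or z)(not a or y or not c)(not x or z or a)
phi = [[("x", False), ("y", True), ("z", False)],
       [("a", True), ("y", False), ("c", True)],
       [("x", True), ("z", False), ("a", False)]]
print(formula_to_polynomial(phi))
# x**2*(y - 1)**2*z**2 + (a - 1)**2*y**2*(c - 1)**2 + (x - 1)**2*z**2*a**2
[/CODE]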
 
  • #8
tAllan said:
This is a common misconception. P = NP would be kind of a trivial question if that were true, since polynomial and non-polynomial are obviously disjoint. In fact, NP stands for non-deterministic polynomial, which comes from the most formal, low-level definition in terms of deterministic and non-deterministic Turing machines as abstract computational models. Essentially, a problem is in NP if there exists a deterministic Turing machine that, given a proposed solution, can verify in polynomial time whether it really is a correct solution. Moreover, the size of the solution encoded as a "string" must also be polynomially bounded with respect to the problem's size.

Oh, sorry. I understand what you mean; I was just trying to present the topic from an angle focusing on algorithms, instead of the problems themselves. Given your definition of NP, is it not true that every problem in P satisfies that definition, given that an answer to a problem in P can be verified for correctness in polynomial time? I think, in this, your definition leads to a trivial answer just as mine did.

tAllan said:
To prove P=NP, you could construct a polynomial time algorithm for any NP complete problem ( emphasis on complete ). If you could do this, then you could also convert this algorithm in polynomial time to a form that can solve any problem in NP.

Knuth thinks P = NP on the basis that there might exist an algorithm of ##\Theta(n^{9999999})## complexity that does what you described. I consider that cheating, though.

Indeed, simple problems become harder when we consider word sizes of arbitrary length (e.g. addition is ##\Theta(n)##, where ##n## is the number of bits in the word). It makes me wonder about the time complexity of certain algorithms when we can only perform boolean operations on bits.
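For what it's worth, here's a toy sketch of my own of that bit-level view of addition: a ripple-carry pass over the bits, one step per bit, hence linear in the number of bits.

[CODE=python]
# Toy ripple-carry adder over bit lists (least-significant bit first),
# just to illustrate the Theta(n) remark: one loop iteration per bit.
def add_bits(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))        # pad the shorter operand with zeros
    b = b + [0] * (n - len(b))
    out, carry = [], 0
    for i in range(n):
        s = a[i] + b[i] + carry
        out.append(s & 1)
        carry = s >> 1
    if carry:
        out.append(carry)
    return out

# 6 + 3 = 9:  [0, 1, 1] + [1, 1, 0] -> [1, 0, 0, 1]
print(add_bits([0, 1, 1], [1, 1, 0]))
[/CODE]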
 
  • #9
ellipsis said:
Given your definition of NP, is it not true that every problem in P satisfies that definition, given that an answer to a problem in P can be verified for correctness in polynomial time? I think, in this, your definition leads to a trivial answer just as mine did.

Yes. P and NP are sets; P is a subset of NP. That is, every problem in P is also in NP. Every cat is an animal, but not every animal is a cat. NP-complete is also a subset of NP. It's the NP-complete problems for which it is thought that no polynomial-time solver exists.

More formally P and NP are sets of languages, where languages are sets of strings. Algorithms are formalized as abstract computing machines such as Turing machines. The language of a machine is the set of strings which it accepts.

We know that [itex]P \subseteq NP[/itex]. To show [itex]P=NP[/itex] you have to show that [itex]NP \subseteq P[/itex].

Say, for example, SAT is a language: it is the set of satisfiable boolean expressions, encoded as strings. An algorithm or machine that goes to an accept state for every string in SAT and a reject state for every string not in SAT is a decider for SAT.
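As a concrete (if hopelessly slow) illustration of a decider, here is a brute-force sketch of my own for CNF formulas, with clauses given as lists of (variable, negated) pairs. It accepts exactly the satisfiable formulas and rejects the rest, but it tries all 2^n assignments, so it says nothing about SAT being in P.

[CODE=python]
from itertools import product

# Brute-force decider sketch for SAT on CNF formulas: accept satisfiable
# formulas, reject the rest. Exponential time: it tries every assignment.
def sat_decider(clauses):
    variables = sorted({v for clause in clauses for v, _ in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        # a literal (v, negated) is true when assignment[v] differs from 'negated'
        if all(any(assignment[v] != negated for v, negated in clause)
               for clause in clauses):
            return True      # accept
    return False             # reject

# (x or not y) and (not x or y) is satisfiable, e.g. with x = y = True.
print(sat_decider([[("x", False), ("y", True)], [("x", True), ("y", False)]]))
[/CODE]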

Reducing another problem to SAT means devising an algorithm that converts strings in that problem's language to satisfiable boolean expressions, and strings not in the language to unsatisfiable boolean expressions.

Because any NP problem can be reduced to an NP-complete problem, finding a single p.t. algorithm for a single NPC problem amounts to proving [itex]P=NP[/itex]; and not only that, it would actually provide the basis for constructing algorithms to solve every NP problem in polynomial time.

ellipsis said:
Indeed, simple problems become harder when we consider word sizes of arbitrary length (e.g. addition is Θ(n), where n is the number of bits in the word). It makes me wonder about the time complexity of certain algorithms when we can only perform boolean operations on bits.

That is something we have to take into account. For example, the language D ( Diophantine equations with integer roots ), which I showed 3SAT polynomial-time reduces to, is NP-hard but not NP-complete ( because it's not in NP ).

But why is it not in NP? If you are given a string that represents a correct solution, you just plug in the values and see if the equation evaluates to 0. The problem is that there is no bound on the sizes of the integers in that solution. They can be arbitrarily large, and so the "certificate"/proposed solution is not polynomially bounded in length by the size of the problem input.

Another set of languages are the undecidable languages. These are languages for which no machine/algorithm is guaranteed to both accept every string in the language and reject every string not in it; instead, on some inputs it may diverge, looping forever. Acceptance you can detect when it happens, but you can never detect that a computation is going to run forever.

An example is the language D again. If a Diophantine equation has an integer root, then you will eventually find it ( end up in an accept state ). If it doesn't, then you are forced to keep searching forever, testing arbitrarily large assignments without ever knowing whether a solution is just around the corner.
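To make that search concrete, here is a rough sketch of my own, where poly is any Python function standing in for the encoded equation and k is its number of variables. If a root exists, the loop eventually reaches it and halts; if not, it runs forever.

[CODE=python]
from itertools import product

# Sketch of the unbounded search described above: try all integer assignments
# of magnitude 0, then 1, then 2, ... If an integer root exists we eventually
# hit it and accept; if none exists, the while-loop never terminates.
def search_for_root(poly, k):
    bound = 0
    while True:
        candidates = range(-bound, bound + 1)
        for assignment in product(candidates, repeat=k):
            # only test assignments whose largest entry is new this round
            if max(map(abs, assignment)) == bound and poly(*assignment) == 0:
                return assignment    # accept: found an integer root
        bound += 1                   # widen the search window and keep going

# x^2 - 4y - 4 = 0 has an integer root, e.g. (0, -1), so this call halts.
# Something like x^2 + 1 = 0 has none, and the search would run forever.
print(search_for_root(lambda x, y: x**2 - 4*y - 4, 2))
[/CODE]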

We can also use the fact that D is undecidable to show D is not NP complete, because it turns out that no undecidable problem is in NP.
 
  • #10
Superposed_Cat said:
Okay, that would be nice.

Okay, fair warning - the class that taught this stuff in college wasn't my best, but this is something I noticed when I was thinking about it all later that I thought was pretty cool. This post just shows how thinking about one NP problem can lead you to another one. It's not formal at all.

Think of algorithms to solve the Travelling Salesman problem. The most obvious is to list all possible paths, clearly NP. But notice what a huge number of those paths are longer than the shortest path. Can we exclude those and shorten it? Yes: we pretend some signal that travels at a constant speed, and remembers where it's been, is emitted from one of the nodes and rebroadcast at each node to all the others, and we model this process unfolding. So in terms of an algorithm, we can create a timeline. If we start at node A, we insert every other node into the timeline at a point in the future proportional to its distance from A (when the signal will arrive there). Then we move to the next node in the timeline, which will be the one closest to A, call it W. We then insert every node (but A and W) into the timeline at the future times when the signal emitted from W will reach them, and so on. The first signal that has no more nodes left to transmit to will have traced the shortest path, and we have avoided visiting the longer paths the first algorithm would. So it's a faster algorithm, but still pretty clearly NP.
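To pin down the "list all possible paths" approach from the start of that paragraph, here's a rough sketch of my own (dist is a hypothetical dictionary of pairwise distances, and I fix the start at the first node, as above):

[CODE=python]
from itertools import permutations

# Brute force: try every ordering of the remaining nodes and keep the shortest
# path. There are (n - 1)! orderings, which is where the blow-up comes from.
def brute_force_shortest_path(nodes, dist):
    start, rest = nodes[0], nodes[1:]
    best_path, best_len = None, float("inf")
    for order in permutations(rest):
        path = (start,) + order
        length = sum(dist[(path[i], path[i + 1])] for i in range(len(path) - 1))
        if length < best_len:
            best_path, best_len = path, length
    return best_path, best_len

nodes = ["A", "B", "C"]
dist = {("A", "B"): 1, ("B", "A"): 1, ("A", "C"): 4, ("C", "A"): 4,
        ("B", "C"): 2, ("C", "B"): 2}
print(brute_force_shortest_path(nodes, dist))   # (('A', 'B', 'C'), 3)
[/CODE]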

How else could we speed it up? One thing I notice is how wasteful it still is. Suppose the shortest path through 100 nodes has node D as its 60th stop. On arriving at D, the algorithm now needs to compute the shortest path from D that visits all the remaining nodes. The wasteful thing is that that path has already been computed, much earlier in the algorithm: it is among the paths computed after D first emitted its signal, namely the ones that visit the needed nodes in some order. Wouldn't it be nice if we could use what's already been computed?

So what we need is an algorithm that won't repeat itself, an algorithm in which every branch contributes to the solution, rather than just one branch out of a vast multitude of branches. It turns out this doesn't look that hard. We make each node a vertex in a graph that changes with the timeline. (For simplicity we can have the "signal" emitted from every node at once.) We start out with a completely disconnected simple graph, and then, at a time equal to the distance between the two nodes closest to each other, we add an edge to the graph connecting those two nodes. By the time equal to the distance between the two nodes most distant from each other, we have a fully connected graph. So now, when our algorithm gets to node D, it no longer needs to re-emit: the paths from node D that could be walked by time t have already been walked. All we need to do is use some magic function F to check the graph and make sure a path exists between D and the remaining nodes at time t. If this magic function F is sufficiently fast, we have a solution in polynomial time: all we need to do is find the time when the graph first has a path that visits all nodes, and that will be the shortest path, and there are only N^2 points in time where this could be before it's a completely connected graph.

So what is this magic function F that checks a graph/subgraph to see if some path visits each node once? It's the solution to the Hamiltonian path problem, which is NP. Damn. But there's an interesting lesson:
If we could solve the Hamiltonian path problem in P then we could solve Travelling Salesman in P.
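Here's how that could look as a rough sketch (again my own, just following the construction above): has_hamiltonian_path stands in for the hypothetical "magic function F", and everything around it is polynomial. The function returns the first "time" (edge-length threshold) at which the growing graph contains a path visiting every node.

[CODE=python]
# Sketch of the timeline/threshold idea above. 'dist' maps node pairs to
# distances; 'has_hamiltonian_path' is the hypothetical magic function F.
# Everything except F is polynomial: at most n^2 thresholds, and building
# each graph is a single pass over the edges.
def first_time_with_full_path(nodes, dist, has_hamiltonian_path):
    thresholds = sorted(set(dist.values()))
    for t in thresholds:
        graph = {u: set() for u in nodes}
        for (u, v), d in dist.items():
            if d <= t:                      # the signal has had time to cross this edge
                graph[u].add(v)
                graph[v].add(u)
        if has_hamiltonian_path(graph):     # the NP-hard part, delegated to F
            return t
    return None
[/CODE]

If F ran in polynomial time, the whole loop would too, which is exactly the "if Hamiltonian path is in P, then this is in P" point.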

I seem to remember there's another way you can do it where you give each node a number 2^N, and track where things have been by that number. You end up looking at a set, looking for the first time when there's a subset that adds up to a number that's all 1's, but thinking about it right now I can't for the life of me remember exactly how it goes. But it has the same result: you see that if you can solve subset sum in P then you can solve Travelling Salesman in P.

So anyway, that's really loose, but that's my experience of seeing how all of these problems are related, and how the solution to one can be the solution to others, in a thought-experiment sort of way that sets the formal stuff aside. And hopefully it gives some fresh inspiration.
 
  • #11
Fooality said:
Think of algorithms to solve the Travelling Salesman problem. The most obvious is to list all possible paths, clearly NP. [...] So it's a faster algorithm, but still pretty clearly NP. [...] So what is this magic function F that checks a graph/subgraph to see if some path visits each node once? It's the solution to the Hamiltonian path problem, which is NP. Damn.

I just want to reiterate. P and NP are not complexity classes and NP does not stand for not-polynomial.

I.e., "[itex]n^2[/itex] is in [itex]P[/itex]" doesn't make any sense, and "[itex]2^n[/itex] is in [itex]NP[/itex]" makes even less sense.
 
  • #12
tAllan said:
I just want to reiterate. P and NP are not complexity classes and NP does not stand for not-polynomial.

I.e., "[itex]n^2[/itex] is in [itex]P[/itex]" doesn't make any sense, and "[itex]2^n[/itex] is in [itex]NP[/itex]" makes even less sense.

Hey, here's a quote from Wikipedia:
http://en.wikipedia.org/wiki/P_versus_NP_problem
"The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem."

So Wikipedia refers to P and NP as complexity classes. I'm not sure what you mean about them not being complexity classes (I do know that P is a subset of NP). And the problems I referred to here I believe are NP-complete, so maybe I was being too loose and should have used that term.

The main thing is, I'm not a pure mathematician, I am a coder, but that's really all it takes. There are problems for which polynomial-time solutions are not forthcoming, like traveling salesman, and they tend to map to one another when you try to solve them. It's really a pretty brick-and-mortar, in-your-face kind of thing when you run into it; no math formalisms are even needed to see it, you can really approach it and explore it at an intuitive level, which is what I have done in my response. Like I said, it wasn't intended to be formal, just intuitive.
 
  • #13
Sorry, I confused what a complexity class is. What I mean is that P and NP are not classes of time or space complexities.

Most importantly, it's crucial to realize NP doesn't mean non-polynomial. Without that distinction you cannot begin to think about P vs NP.

When you say "damn, still NP", it doesn't make any sense. Problems not in NP are harder than ones in NP. If you can reconcile that fact with your interpretation of NP, then your interpretation may be correct.

And it's also important to understand NP-completeness. Not necessarily formally, but what it actually means. NP-complete problems are NP problems which, if you could solve them in p.t., you could solve all NP problems in p.t. These are not simply problems that have no known p.t. algorithms; there are plenty of those that are not in NP.
 
  • #14
tAllan said:
Sorry, I confused what a complexity class is. What I mean is that P and NP are not classes of time or space complexities.

Most importantly, it's crucial to realize NP doesn't mean non-polynomial. Without that distinction you cannot begin to think about P vs NP.

When you say "damn, still NP", it doesn't make any sense. Problems not in NP are harder than ones in NP. If you can reconcile that fact with your interpretation of NP, then your interpretation may be correct.

And it's also important to understand NP-completeness. Not necessarily formally, but what it actually means. NP-complete problems are NP problems which, if you could solve them in p.t., you could solve all NP problems in p.t. These are not simply problems that have no known p.t. algorithms; there are plenty of those that are not in NP.

Yeah, that first thing is just that English and math are really different languages, and my post was in English. For instance, if I give you the categories "animal" and "plant", and ask you to categorize "dog", "raccoon", "human", "tulip" and "cactus" into them, you will put "human" in the "animal" category. But if you say "I found out what was making that sound I heard at night" and I ask "was it an animal?", you will answer "no" if it was a human. English is weird. What I meant to say when I said "damn, still NP" was that that was the most information we had about it at that point. Of course, since P is a subset of NP, it's all NP, so yes, that statement was mathematically incorrect, but it wasn't intended that way. It was intended to say that we had failed at that point to reduce it into the class P.

As far as your second point, which connects all the NP problems together, that's kind of what I was hoping to get at intuitively in my post: you go on a quest for a solution to one NP-complete problem, and you end up with others. From this you can see how to prove things about what class a problem is in without "demonstrating it", which is what the OP was asking about: it's about proving equivalence to a problem known to be in one of the classes.
 

1. What is P and NP?

P and NP are two classes of problems in computer science. P stands for "polynomial time": a problem in P can be solved by an algorithm whose running time is bounded by a polynomial in the input size. NP stands for "non-deterministic polynomial time": a proposed solution to a problem in NP can be verified in polynomial time, but an efficient way to find that solution may not be known.

2. How do you prove a problem is P or NP?

To prove a problem is in P, you must show that it can be solved in polynomial time. This is usually done by providing an algorithm that solves the problem and showing that its time complexity is polynomial in the input size. To prove a problem is in NP, you must show that proposed solutions can be verified in polynomial time. This is done by describing a certificate that witnesses a "yes" answer and showing that the certificate can be checked in polynomial time.
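For example, a minimal verifier (illustrative only) for the graph coloring problem mentioned in question 4 below: the certificate is a proposed coloring, and checking it is one pass over the edges, which takes polynomial time.

[CODE=python]
# Illustrative polynomial-time verifier for graph k-coloring: the certificate
# is a colour (0..k-1) for every vertex; we just check each edge once.
def verify_coloring(edges, coloring, k):
    if any(c < 0 or c >= k for c in coloring.values()):
        return False                              # only k colours are allowed
    return all(coloring[u] != coloring[v] for u, v in edges)

# A triangle is 3-colourable, and this certificate proves it instantly.
print(verify_coloring([("a", "b"), ("b", "c"), ("a", "c")],
                      {"a": 0, "b": 1, "c": 2}, 3))   # True
[/CODE]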

3. What is the importance of proving a problem is P or NP?

Proving a problem is in P or NP is important because it tells us about the complexity of the problem and the resources needed to solve it. Problems in P can be solved efficiently, while NP-complete problems are not known to have efficient exact algorithms and may require enormous time and resources to solve. Understanding the complexity of a problem also guides the development of better algorithms and solutions.

4. What are some examples of problems that are known to be P or NP?

Some examples of problems known to be in P include sorting, searching, and matrix multiplication. Some examples of NP-complete problems (in NP, and not known to be in P) include the decision versions of the traveling salesman problem, the knapsack problem, and the graph coloring problem.

5. Is it possible to prove a problem is neither P nor NP?

Yes. Not every problem falls into these two classes: for example, undecidable problems (like the language D discussed above) are provably not in NP, and hence not in P either. What cannot happen is a problem being in P but not in NP, since P is a subset of NP. There is also the class of NP-complete problems, which are the hardest problems in NP and are not known to be in P.
