Explaining Time Complexity and NP Problems

  • MHB
  • Thread starter evinda
In summary, the NP complexity class can be defined as the class of problems that can be solved in polynomial time by a nondeterministic Turing machine, while P can be defined as the class of problems that can be solved in polynomial time by a deterministic Turing machine. This means that P is a subset of NP. Additionally, NP does NOT stand for "non-polynomial time"; it stands for NONDETERMINISTIC polynomial time, which is different.
  • #1
evinda
Hello! (Wave)

I want to choose if the following sentences are true or false and justify the reason for the choice.

  • Polynomial: good, exponential: bad.
  • Radix Sort works correctly if we use any correct sorting algorithm to sort each digit.
  • Given an array $A[1 \dots n]$ of integers, the time that [m] CountingSort [/m] requires is polynomial in relation to the input size $n$.
  • Given an array $A[1 \dots n]$ of integers, the time that [m] HeapSort [/m] requires is polynomial in relation to the input size $n$.
  • For a Dynamic Programming algorithm, the calculation of all the values with bottom-up is asymptotically faster than the use of recursion and memoization.
  • The time of a dynamic algorithm is always $O(P)$ where $P$ is the number of subproblems.
  • Each problem that belongs to [m] NP [/m] is solved in exponential time.

  • This sentence is true. But how could we explain why this is actually the case? (Thinking)
  • False. We can use a sorting algorithm only if it is stable.
  • Right. [m] CountingSort [/m] is a linear-time algorithm.
  • False. [m] HeapSort [/m] requires $O(n \log n)$ time.
  • False. The calculation of all the values with bottom-up is asymptotically the same as if we use recursion and memoization.
    Do I have to explain why the above four properties hold further? If so, what could we add?
  • How could we determine if this is true or not? :confused:
  • The [m] NP [/m] problems cannot be solved in polynomial time, right? But do we know that all these problems are solved in exponential time?
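As a concrete illustration of the stability point in the second answer, here is a minimal sketch (my own, not code from the thread) of an LSD radix sort that uses a stable counting sort on each decimal digit. Stability is what keeps ties (equal digits) in the order established by earlier passes:

```python
def counting_sort_by_digit(a, exp):
    # Stable counting sort keyed on the digit at position `exp` (1, 10, 100, ...).
    count = [0] * 10
    for x in a:
        count[(x // exp) % 10] += 1
    for d in range(1, 10):
        count[d] += count[d - 1]          # prefix sums give final positions
    out = [0] * len(a)
    for x in reversed(a):                 # reversed scan preserves stability
        d = (x // exp) % 10
        count[d] -= 1
        out[count[d]] = x
    return out

def radix_sort(a):
    # LSD radix sort for non-negative integers.
    if not a:
        return a
    exp = 1
    while max(a) // exp > 0:
        a = counting_sort_by_digit(a, exp)
        exp *= 10
    return a

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

If the per-digit sort were not stable, a later pass could scramble the order produced by earlier passes, and the final result could be wrong.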
 
  • #2
The NP problems cannot be solved in polynomial time, right? But do we know that all these problems are solved in exponential time?

Did you forget the definition of the NP complexity class? Because this is completely wrong. Hint: any problem in P is also in NP.
 
  • #3
Bacterius said:
Did you forget the definition of the NP complexity class? Because this is completely wrong. Hint: any problem in P is also in NP.

We define [m] NP [/m] (for nondeterministic polynomial) to be the class of all languages that are decided by a polynomially bounded nondeterministic Turing machine. Do you mean the above definition? (Thinking)
 
  • #4
evinda said:
We define [m] NP [/m] (for nondeterministic polynomial) to be the class of all languages that are decided by a polynomially bounded nondeterministic Turing machine. Do you mean the above definition? (Thinking)

Yes... that easily implies that any problem in P is also in NP. So...
 
  • #5
Bacterius said:
Yes... that easily implies that any problem in P is also in NP. So...

So, because the [m] NP [/m] problems can be solved in polynomial time with the use of a Turing machine, we deduce that the proposition is right? (Thinking)

Or have I understood it wrong?
 
  • #6
evinda said:
So, because the [m] NP [/m] problems can be solved in polynomial time with the use of a Turing machine, we deduce that the proposition is right? (Thinking)

Or have I understood it wrong?

Who said any NP problem can be solved in polynomial time? P is in NP, but it's not known whether NP is in P (i.e. whether P = NP). What it does mean, however, forgetting Turing Machines for now, is that any problem admitting a polynomial-time algorithm that solves it is in P and therefore also in NP. So take your favorite P problem (for instance, finding the maximum element in an array). This problem is in P, and therefore it is also in NP, but it can be solved in polynomial time, contradicting the proposition, and so the proposition is wrong. Does that make sense?

(unless the proposition meant "at most in exponential time"?)
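To make that "favorite P problem" concrete, here is a sketch (my own) of finding the maximum element: a single pass with $n - 1$ comparisons, so $O(n)$ time, comfortably polynomial.

```python
def find_max(a):
    # One linear scan: n - 1 comparisons, O(n) time.
    best = a[0]
    for x in a[1:]:
        if x > best:
            best = x
    return best

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```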
 
  • #7
Bacterius said:
Who said any NP problem can be solved in polynomial time? P is in NP, but it's not known whether NP is in P (i.e. whether P = NP). What it does mean, however, forgetting Turing Machines for now, is that any problem admitting a polynomial-time algorithm that solves it is in P and therefore also in NP. So take your favorite P problem (for instance, finding the maximum element in an array). This problem is in P, and therefore it is also in NP, but it can be solved in polynomial time, contradicting the proposition, and so the proposition is wrong. Does that make sense?

(unless the proposition meant "at most in exponential time"?)

So does this mean that the problems that are in [m] P [/m] are a subset of the problems that are in [m] NP [/m]?
If so, could you explain further why this holds? :confused:
 
  • #8
evinda said:
So does this mean that the problems that are in [m] P [/m] are a subset of the problems that are in [m] NP [/m]?
If so, could you explain further why this holds? :confused:

Yes, $\mathrm{P} \subseteq \mathrm{NP}$. This holds because, as we know, NP can be defined as the class of problems which can be solved in polynomial time by a nondeterministic Turing machine (or languages that can be accepted, or whatever your preferred theoretical definition is), while P can be defined as the class of problems that can be solved in polynomial time by a deterministic Turing machine. And a deterministic Turing machine is just a nondeterministic Turing machine that happens to not make any nondeterministic choices [just like an integer can be viewed as a real number that happens to have no fractional part]. In other words, any DTM is an NTM, and from that it follows that P is in NP.

Or, more loosely and directly, "if it is easy to find the solution to a problem, then it is also easy to verify any particular solution" (e.g. by simply solving it in polynomial time and comparing).

The most important thing to remember is that NP does NOT stand for "non-polynomial time", which is a very common misconception. It stands for NONDETERMINISTIC polynomial time which is completely different.
 
  • #9
Bacterius said:
Yes, $\mathrm{P} \subseteq \mathrm{NP}$. This holds because, as we know, NP can be defined as the class of problems which can be solved in polynomial time by a nondeterministic Turing machine (or languages that can be accepted, or whatever your preferred theoretical definition is), while P can be defined as the class of problems that can be solved in polynomial time by a deterministic Turing machine.

So does this mean that all the problems in NP can be solved in polynomial time by a nondeterministic Turing machine? (Thinking)

  • Polynomial: good, exponential: bad.

Could we explain that the proposition is right as follows?

Suppose that we have an input of $n$ elements and want to run an algorithm to get information about these $n$ elements. If the algorithm runs in polynomial time, it will finish in a reasonable time.
If, however, the algorithm is exponential, it will need far more time to compute the result.
  • For a Dynamic Programming algorithm, the calculation of all the values with bottom-up is asymptotically faster than the use of recursion and memoization.
  • The time of a dynamic algorithm is always $O(P)$ where $P$ is the number of subproblems.

How could we determine if the above propositions hold or not?
 
  • #10
evinda said:
So does this mean that all the problems in NP can be solved in polynomial time by a nondeterministic Turing machine? (Thinking)

Yes. Of course, one cannot build a nondeterministic Turing machine (you might think of it as akin to predicting the future) so it's just a theoretical framework to understand complexity.

evinda said:
  • Polynomial: good, exponential: bad.

Could we explain that the proposition is right as follows?

Suppose that we have an input of $n$ elements and want to run an algorithm to get information about these $n$ elements. If the algorithm runs in polynomial time, it will finish in a reasonable time.
If, however, the algorithm is exponential, it will need far more time to compute the result.

You're not really saying much here, you're just repeating the question... polynomial, good, "finish in reasonable time", exponential, bad, "need a lot more time"... this is all pretty vague, don't you think? Why polynomial and exponential specifically? What can you say about their growth?

evinda said:
For a Dynamic Programming algorithm, the calculation of all the values with bottom-up is asymptotically faster than the use of recursion and memoization.

I'm not an expert in dynamic algorithms but think about what it means to calculate the values (= solutions to subproblems) bottom-up? Doesn't that mean that each subproblem builds on already computed subproblems, and so doesn't need to solve them again each time? On the other hand, recursion is top-down, but what is the defining property of memoization? Conclude... maybe try to visualize it with a simple dynamic programming algorithm of your choice, with a tree of the subproblems.
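To make the bottom-up vs. memoization comparison concrete, here is a small sketch on Fibonacci (my choice of example, not from the thread). Both styles compute each subproblem exactly once, so both are $\Theta(n)$; memoized recursion just pays extra function-call overhead.

```python
from functools import lru_cache

def fib_bottom_up(n):
    # Iterative DP: solve subproblems 2..n in order, each in O(1).
    if n < 2:
        return n
    prev, cur = 0, 1
    for _ in range(2, n + 1):
        prev, cur = cur, prev + cur
    return cur

@lru_cache(maxsize=None)
def fib_memo(n):
    # Top-down recursion; the cache ensures each n is computed once.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_bottom_up(30), fib_memo(30))  # 832040 832040
```

Without the cache, the naive recursion would recompute shared subproblems and blow up exponentially; the cache is exactly what collapses the call tree down to $n$ distinct subproblems.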
 
  • #11
Bacterius said:
Yes. Of course, one cannot build a nondeterministic Turing machine (you might think of it as akin to predicting the future) so it's just a theoretical framework to understand complexity.

Thinking about it again... the NP-complete problems are also in NP and there is no known way to solve them in deterministic polynomial time...
But since the worst case is that an algorithm is exponential, can't we say that each problem in NP is solved in exponential time? (Thinking)

Bacterius said:
You're not really saying much here, you're just repeating the question... polynomial, good, "finish in reasonable time", exponential, bad, "need a lot more time"... this is all pretty vague, don't you think? Why polynomial and exponential specifically? What can you say about their growth?

You mean that we have to justify the correctness of the proposition, saying that $O(n^a)=O(b^m), n,m,a,b \in \mathbb{N}$ ?

Bacterius said:
I'm not an expert in dynamic algorithms but think about what it means to calculate the values (= solutions to subproblems) bottom-up? Doesn't that mean that each subproblem builds on already computed subproblems, and so doesn't need to solve them again each time? On the other hand, recursion is top-down, but what is the defining property of memoization? Conclude... maybe try to visualize it with a simple dynamic programming algorithm of your choice, with a tree of the subproblems.
I thought that with bottom-up we start from the computation of the smallest subproblem, which we will need for the computation of the larger ones... With recursion, we start from the larger problems, and in order to compute them we solve the smaller ones...
Does it hold in both cases that each subproblem builds on already computed subproblems, and so doesn't need to solve them again each time? (Thinking)
 
  • #12
evinda said:
Thinking about it again... the NP-complete problems are also in NP and there is no known way to solve them in deterministic polynomial time...
But since the worst case is that an algorithm is exponential, can't we say that each problem in NP is solved in exponential time? (Thinking)

?

evinda said:
You mean that we have to justify the correctness of the proposition, saying that $O(n^a)=O(b^m), n,m,a,b \in \mathbb{N}$ ?

How are these two sets equal?! Anyway, for the question I was thinking more about something along the lines of Cobham's thesis; if you have access to it (via your university or wherever) I think it would be a good read.

evinda said:
I thought that with bottom-up we start from the computation of the smallest subproblem, which we will need for the computation of the larger ones... With recursion, we start from the larger problems, and in order to compute them we solve the smaller ones...
Does it hold in both cases that each subproblem builds on already computed subproblems, and so doesn't need to solve them again each time? (Thinking)

Yes, with memoization the two techniques are actually equivalent (they compute the same things), I believe. I could be wrong, though.
 
  • #13
Bacterius said:
?

I thought that it isn't known whether the NP-complete problems can be solved in polynomial time using a non-deterministic Turing machine... And aren't these problems also in NP?

Or am I wrong?

Bacterius said:
How are these two sets equal?! Anyway, for the question I was thinking more about something along the lines of Cobham's thesis; if you have access to it (via your university or wherever) I think it would be a good read.

Oh, yes... They cannot be equal... But it holds that $O(n^a) \subset O(b^n)$ for $b > 1$, right?
I haven't heard of it... What is it about?

Bacterius said:
Yes, with memoization the two techniques are actually equivalent (they compute the same things), I believe. I could be wrong, though.

So, the time complexity is asymptotically the same, right?
 
  • #14
evinda said:
I thought that it isn't known whether the NP-complete problems can be solved in polynomial time using a non-deterministic Turing machine... And aren't these problems also in NP?

Or am I wrong?

NP-complete is a subset of NP (NP-complete is the intersection of NP and NP-hard). Therefore they can be solved in polynomial time using a nondeterministic Turing machine. NP *stands for* nondeterministic polynomial time. Again, nondeterministic Turing machines cannot be built, so that doesn't mean all NP problems are easy to solve, of course.

If you are trying to get a handle on the "real" complexity of a problem, it's easier to think of it in terms of being able to verify the solution to a problem in polynomial time (for NP problems) by a deterministic Turing machine. It's the same thing, just from a different perspective (the two definitions are equivalent).
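To illustrate the verifier view with a standard example (subset sum, my choice, not from the thread): no polynomial-time algorithm is known for *finding* a subset that hits a target, but *checking* a proposed solution, a certificate, is clearly polynomial:

```python
def verify_subset_sum(numbers, target, certificate):
    # certificate: list of indices claimed to sum to target.
    # Each check below is a linear pass, so verification is polynomial.
    return (len(set(certificate)) == len(certificate)            # no index reused
            and all(0 <= i < len(numbers) for i in certificate)  # indices valid
            and sum(numbers[i] for i in certificate) == target)  # sum matches

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True: 4 + 5 = 9
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [0, 1]))  # False: 3 + 34 = 37
```

The asymmetry between cheap verification and (apparently) expensive search is exactly what the NP-vs-P question is about.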

evinda said:
Oh, yes... They cannot be equal... But it holds that $O(n^a) \subset O(b^m)$, right?
I haven't heard of it... What is it about?

It introduces the P complexity class, and why polynomial time is a good way to classify "easy" problems. If you can read the paper (it's not publicly available it seems) you should. Most universities provide free access to publications behind paywalls even to undergraduates, I believe. From http://en.wikipedia.org/wiki/Cobham%27s_thesis:

Cobham's thesis, also known as the Cobham–Edmonds thesis (named after Alan Cobham and Jack Edmonds), asserts that computational problems can be feasibly computed on some computational device only if they can be computed in polynomial time; that is, if they lie in the complexity class P.

[...]

Alan Cobham's 1965 paper entitled "The intrinsic computational difficulty of functions" is one of the earliest mentions of the concept of the complexity class P, consisting of problems decidable in polynomial time. Cobham theorized that this complexity class was a good way to describe the set of feasibly computable problems. Any problem that cannot be contained in P is not feasible, but if a real-world problem can be solved by an algorithm existing in P, generally such an algorithm will eventually be discovered.

Can you see why this could be relevant to your question?

evinda said:
So, the time complexity is asymptotically the same, right?

I would say so, since they seem to be equivalent in their operation. I could be wrong, though.
 
  • #15
Bacterius said:
It introduces the P complexity class, and why polynomial time is a good way to classify "easy" problems.

I don't have access to it at this moment... So how could I explain it in a few words? (Thinking)
 
  • #16
The class P is convenient for two reasons. First, time complexity grows not too fast, so that when the problem size grows by a factor of 10 or 100, the time generally remains feasible. See the example on the Wikipedia page about Cobham's thesis. The second reason is that this class does not depend on the precise formulation of the computing device used (single-tape Turing machine, multi-tape machine, random-access machine, etc.). A problem that takes polynomial time with respect to one of these machines will take polynomial time with respect to any of them.
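A quick numeric sketch of the first point (my own illustration, taking a cubic cost as the polynomial example): growing $n$ by a factor of 10 multiplies $n^3$ by a fixed factor of 1000, while $2^n$ becomes astronomically larger.

```python
# Compare a polynomial cost (n^3) with an exponential one (2^n)
# as the input size grows by factors of 10.
for n in (10, 100, 1000):
    print(f"n={n:4}: n^3 = {n**3:>12}   2^n has {len(str(2**n)):>4} digits")
```

Already at $n = 100$, $2^n$ has 31 digits; at $n = 1000$ it has 302 digits, while $n^3$ is still only a billion.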
 
  • #17
Evgeny.Makarov said:
The class P is convenient for two reasons. First, time complexity grows not too fast, so that when the problem size grows by a factor of 10 or 100, the time generally remains feasible. See the example on the Wikipedia page about Cobham's thesis. The second reason is that this class does not depend on the precise formulation of the computing device used (single-tape Turing machine, multi-tape machine, random-access machine, etc.). A problem that takes polynomial time with respect to one of these machines will take polynomial time with respect to any of them.

I see... (Smile)
What could we say for the following proposition?

The time of a dynamic algorithm is always $O(P)$ where $P$ is the number of subproblems.
 
  • #18
evinda said:
The time of a dynamic algorithm is always $O(P)$ where $P$ is the number of subproblems.
Do you mean that the algorithm has a constant complexity per subproblem? Can you give an example of a dynamic algorithm?
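One standard way to probe the $O(P)$ proposition (a sketch of my own, not from the thread) is longest increasing subsequence: the usual DP has $P = n$ subproblems, one per index, but spends $O(n)$ work on each, for $O(n^2)$ total, which is not $O(P)$.

```python
def lis_length(a):
    # best[i] = length of the longest increasing subsequence ending at a[i].
    n = len(a)
    best = [1] * n
    for i in range(n):
        for j in range(i):            # O(n) work per subproblem
            if a[j] < a[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

print(lis_length([10, 9, 2, 5, 3, 7, 101, 18]))  # 4  (e.g. 2, 3, 7, 18)
```

So the time of a DP algorithm is $O(P \cdot t)$ where $t$ bounds the work per subproblem; it is $O(P)$ only when each subproblem takes constant time.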
 

