Why are quantum computers faster than classical computers?

In summary, the article discusses how contextuality may be essential to understanding why quantum computers work faster than classical computers. First, it cites references which argue that quantum computers rely on this concept to some degree. Second, it provides some explanations for why this might be the case. Finally, it notes that this is not always the case and that classical computers can sometimes be faster.
  • #1
VforVendetta
I was recently reading about quantum computers. I once asked a teacher with more experience in the field what the origin of the quantum speedup was, and his answer was quantum parallelism, which I sort of understood at the time but later forgot. So, the other day I was thinking about it and decided to study it again, after seeing a presentation on quantum contextuality (in which the lecturer said quantum contextuality could be an essential tool to explain why quantum computers work) and the following article:

Contextuality supplies the ‘magic’ for quantum computation
Mark Howard, Joel Wallman, Victor Veitch & Joseph Emerson
19 June 2014 | Nature 510, 351
(yeah, I copy-pasted; I'm too lazy to make it easier for others to find the article... sorry about that...)

So, now I was not only embarrassed myself, but I was "embarrassed for the people in the field". For myself, because I couldn't remember what quantum parallelism was (which I promptly read about again)... And for the field, because I'm not really sure we know why quantum computers are (believed to be generally) faster than classical computers...

Here are some references (from within the article above) that shed some light on the matter. I don't know whether contextuality solves the problem, because I still haven't read all of the article's arguments - but I'd really appreciate it if somebody could give me a clearer exposition.

Aharon Brodutch, "Discord and quantum computational resources", Phys. Rev. A 88, 022307 (2013)

Vlatko Vedral, "The Elusive Source of Quantum Speedup", Found. Phys. 40, 1141–1154 (2010), DOI 10.1007/s10701-010-9452-0

Maarten Van den Nest, "Universal Quantum Computation with Little Entanglement", Phys. Rev. Lett. 110, 060504 (2013)

There are two other interesting references...

A. M. Steane, "A quantum computer only needs one universe", arXiv:quant-ph/0003084

D. Gross, S. T. Flammia, and J. Eisert, "Most Quantum States Are Too Entangled To Be Useful As Computational Resources", Phys. Rev. Lett. 102, 190501 (2009)

Why are quantum computers faster than classical computers? Is this a good question?

(I might also add that I still don't feel I fully understand what contextuality means. I'm not embarrassed about being ignorant, as I'm ignorant about a whole lot of stuff; I'm more embarrassed because contextuality seemed to be the premise of why we wanted to research this in the first place, and it seems we were talking nonsense about it.)
 
  • #2
VforVendetta said:
Why are quantum computers faster than classical computers?
They are not, in all cases. For some algorithms they are slower than classical computers, for some they are much faster, so as a blanket statement, it's not correct to say they are faster. Surely the articles you quoted must go into that.
 
  • #3
phinds said:
For some algorithms they are slower than classical computers, for some they are much faster(...)

Could you give me a reference stating exactly this, because to me this seems to say that quantum computers can't always simulate classical computers efficiently, and I took that for granted.
 
  • #4
VforVendetta said:
Could you give me a reference stating exactly this, because to me this seems to say that quantum computers can't always simulate classical computers efficiently, and I took that for granted.
Then it seems we are saying the same thing. I have no references. I'm re-stating what I've seen on this forum several times and which I have read in weekly magazines.
 
  • #5
The point of the quantum computer is that if one looks at how to compute quantum probabilities, one has to compute a path integral. To compute this path integral on a classical computer one would need a lot of time - it is an integral over all possible paths.

The first idea is that, if one has to compute such a path integral, one could, instead of doing it on a classical computer (which would need a lot of resources), simply use the real world as an analog computer. All one needs for this is a real experiment with a real device which, according to quantum theory, gives the same result as the path integral we want to compute. In this case, Nature simply "computes" its own path integral microscopically, and the result is what QT predicts - the path integral. The designer then has to modify this construction in such a way that we can use the computation of the QT path integrals to compute something useful for us.

For many operations this gives nothing new - a classical computer can be understood as doing a similar thing in classical mechanics. So one would expect improvement only for special problems which are somehow similar to path integrals.
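A toy illustration of the point above (a discrete "sum over histories", not a genuine path integral - the Hadamard gate and step counts are my arbitrary choices for the demo): expanding an amplitude as a sum over all intermediate-state paths costs exponentially many terms, while simply applying the matrix step by step is cheap, and both give the same number.

```python
import numpy as np

# The amplitude <y|U^n|x> can be expanded as a sum over every sequence of
# intermediate basis states ("paths"), costing O(d^(n-1)) terms, while
# multiplying the matrix n times costs only O(n * d^2).

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # one-qubit Hadamard "step"

def amplitude_path_sum(U, x, y, n):
    """<y|U^n|x> as an explicit sum over all intermediate paths."""
    if n == 1:
        return U[y, x]
    return sum(U[y, k] * amplitude_path_sum(U, x, k, n - 1)
               for k in range(U.shape[0]))

def amplitude_direct(U, x, y, n):
    """The same amplitude by applying the matrix n times."""
    return np.linalg.matrix_power(U, n)[y, x]

for n in (1, 2, 5):
    assert np.isclose(amplitude_path_sum(H, 0, 1, n),
                      amplitude_direct(H, 0, 1, n))
print("path sum agrees with direct evolution")
```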
 
  • #6
Quantum computers are slower than classical computers. It's one of the biggest misconceptions that QM can speed up computation. Actually, QM sets a limit on the speed of computation, while in Newtonian dynamics computers have no such limit.

First things first: all computers are quantum. We live in a world ruled by quantum dynamics, so "classical" computers don't exist; they are only a conceptual model.

When we talk about "classical" computers, we may mean one of two things:
1. A computer running by the rules of Newtonian dynamics.
2. A computer running by the rules of quantum dynamics, but not using any qubits.

When we talk about a "quantum" computer, we mean a computer with qubits. In that case:
"Classical" computers in sense 1 are always faster than any quantum computer.
"Classical" computers in sense 2 are slower than quantum computers on certain algorithms and equal on others.

Please remember one thing: quantum mechanics sets a limit on the speed of any physical process, including computation. This limit is similar to the Bekenstein bound. While the Bekenstein bound sets the maximal spatial resolution, there is also a maximal time resolution of the universe.
 
  • Like
Likes tom aaron
  • #7
haael, your post directly contradicts experimental results where quantum computers are faster than classical computers for some algorithms. I suggest you Google "quantum computer experimental results"
 
  • #8
Short answer: nobody knows yet!
It's a really interesting question. People are trying to find what the minimal quantum resources are (like entanglement, contextuality, discord, etc.) such that a quantum algorithm will still give an advantage over classical computing.
The usual algorithms, like the famous one of Shor, use pure entangled quantum states for the computation. Then you might say: "Ah! Entanglement is responsible for the speedup!". Not really, though. The fact that you built a fast quantum computer with lots of entanglement doesn't mean that you cannot create a fast quantum computer with very little or no entanglement. So the problem becomes: what are the minimal quantum resources that I can have in a quantum computer that would still allow for an advantage (polynomial or exponential) over classical computing? This is still a very interesting open question.
 
Last edited:
  • Like
Likes Truecrimson
  • #9
haael said:
Quantum computers are slower than classical computers. It's one of the biggest misconceptions that QM can speed up computation. Actually, QM sets a limit on the speed of computation, while in Newtonian dynamics computers have no such limit.

First things first: all computers are quantum. We live in a world ruled by quantum dynamics, so "classical" computers don't exist; they are only a conceptual model.

When we talk about "classical" computers, we may mean one of two things:
1. A computer running by the rules of Newtonian dynamics.
2. A computer running by the rules of quantum dynamics, but not using any qubits.

When we talk about a "quantum" computer, we mean a computer with qubits. In that case:
"Classical" computers in sense 1 are always faster than any quantum computer.
"Classical" computers in sense 2 are slower than quantum computers on certain algorithms and equal on others.

Please remember one thing: quantum mechanics sets a limit on the speed of any physical process, including computation. This limit is similar to the Bekenstein bound. While the Bekenstein bound sets the maximal spatial resolution, there is also a maximal time resolution of the universe.

Maybe you should change your notion of 'classical' computer, and your problem is solved. Classical computers, defined as computers that do not take advantage of genuine quantum effects in the computation, are still constrained by the laws of physics when it comes to how fast a computation can be implemented. Taking this into account, it's straightforward to conclude that the speed of classical processes can never exceed that of quantum processes, since the former emerge from the latter at the level of the underlying microphysics. Therefore, the speed of computation in a quantum computer is, in principle, greater than or equal to that of any classical computation (or process).
 
  • #10
Haael is technically right. To add to JK423's clarification, we can modify Haael's definition 1 of classical computers a little bit:

Classical computers are computers running by the rules of Newtonian dynamics, where those rules apply.

There are various formulations of computation that we might think of as "classical", based on classical mechanics but that, e.g., assume we can manipulate real numbers to infinite precision. Such machines can be faster than quantum computers or any computer that we may hope to build. Mathematically, classical mechanics doesn't forbid this, but physically it implies going into a domain where classical mechanics is no longer correct. Hence this is not what scientists mean when they say "classical computers."
 
  • #11
phinds said:
They are not, in all cases. For some algorithms they are slower than classical computers, for some they are much faster, so as a blanket statement, it's not correct to say they are faster.

This is a bit subtle. Quantum computers can always simulate classical computers efficiently. How? Just restrict the computation to one basis only. So when people say quantum computers are slower, I expect they are comparing different problems. For example, Scott Aaronson has written a very nice article (published in Nature Physics) making this point, which often got lost in translation in the news coverage of the quantum algorithm to "solve" linear equations.
 
Last edited:
  • #12
Quantum computing is only 'faster' than conventional computing for resolving problems that require parallel computation over a large number of variables.
A good example would be calculating orbits in multiple-body systems.
Most everyday computing needs, such as reading a website, accessing a database, or printing a document, are simple linear routines, and for those a conventional CPU is as good, and maybe better.
If qubits became cheap enough, quantum computers might be useful for applications such as air traffic control.
Anything remotely capable of being an 'AI' in the sci-fi sense would probably need quantum computing.
 
Last edited:
  • #13
It's worth noting that there are types of computational parallelisation that quantum computers are not more suitable for. For example, the screen you're looking at now is rendered by a highly parallelised graphics card; a quantum computer wouldn't be suitable for that at all. I'd phrase it differently and say that quantum computers are well suited to optimisation problems rather than to parallelisation.
 
  • #14
phinds said:
Then it seems we are saying the same thing. I have no references. I'm re-stating what I've seen on this forum several times and which I have read in weekly magazines.

Actually, when I typed "can't" I meant "can". Sorry about that.

But if there are no references, then I don't think I clearly know what you mean.

Even so, this thread is giving me some good ideas, even if some people are reading my question differently from what I wanted to know (JK423 got what I wanted to know almost exactly). I thought my question was clear; maybe the fact that I HAVE to give a summarized question is what makes it a little less precise than I imagined.

I intend to summarize people's opinions (and 'understandings') later. I think asking imprecise questions may lead to better (or more complete) answers :).
 
  • #15
People also tend to forget about the No-Cloning Theorem (google it). Its consequence is that almost all(!) of the algorithms we know from "algorithms & data structures" are not applicable in quantum computation, because even the simplest operation "a := b" (i.e., copying the value of one variable into another variable) does not work on such machines.

There are efficient quantum algorithms for factoring numbers (look up Shor's algorithm), which have the potential to invalidate the foundations of current asymmetric cryptographic algorithms (which is where the majority of the interest stems from - that is, if efficient quantum computers become a reality, cryptographic key exchange will become much more annoying). Will quantum computers actually be useful for anything else? That remains to be seen. It is not even clear whether they would be more efficient than classical computers for the simulation of actual quantum systems.
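To see where the quantum part of Shor's algorithm actually sits, here is a minimal sketch (the choice of N = 15 and a = 7 is arbitrary, and the period is found by classical brute force here - the whole point of Shor's algorithm is that a quantum computer finds that period efficiently):

```python
from math import gcd

def period(a, N):
    # Find the multiplicative order r of a mod N by brute force.
    # This is the step that takes exponential time classically and
    # that Shor's quantum period-finding subroutine makes efficient.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N, a):
    # The classical reduction: from the period r of a mod N,
    # gcd(a^(r/2) - 1, N) yields a nontrivial factor (for suitable a).
    r = period(a, N)
    if r % 2:
        return None  # odd period: retry with a different a
    f = gcd(pow(a, r // 2) - 1, N)
    return f if 1 < f < N else None

print(shor_classical_part(15, 7))  # 7 has period 4 mod 15, giving factor 3
```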
 
  • #16
cgk said:
It is not even clear if they would be more efficient (than classical computers) for the simulation of actual quantum systems.

It might not be clear yet, but quantum simulation is a very active field of research. In fact, people in the field are touching some very cool physics, even if it is just in simulation, including phenomena for which we have no experimental implementations yet. I could give some examples upon request, but I won't, because there are lots of them.
 
  • #17
VforVendetta said:
But if there are no references, then I don't think I clearly know what you mean.
I didn't say there are no references, I said *I* don't have any at hand, but the other threads I remember reading did have some.
 
  • #18
cgk said:
People also tend to forget about the No Cloning Theorem (google it). Its consequence is that almost all(!) of the algorithms we know from "algorithms & data structures" are not applicable in quantum computation

I don't see why not. CNOT can copy a state in the computational basis and that's enough to embed those algorithms into quantum computation.
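The CNOT-copy point above, together with the no-cloning caveat, fits in a minimal numpy sketch (state-vector conventions are my own choices for the demo): CNOT copies computational-basis states, which is enough to embed classical reversible algorithms, but fed a superposition it produces entanglement rather than a clone.

```python
import numpy as np

# CNOT as a classical copy: in the computational basis, CNOT|x>|0> = |x>|x>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ket(*bits):
    """Computational-basis state |b0 b1 ...> as a state vector."""
    v = np.array([1.0 + 0j])
    for b in bits:
        v = np.kron(v, np.eye(2)[b])
    return v

# Copying basis states works: |1>|0> -> |1>|1>
assert np.allclose(CNOT @ ket(1, 0), ket(1, 1))

# But CNOT does not clone superpositions: (|0>+|1>)|0>/sqrt(2) becomes the
# entangled Bell state, not (|0>+|1>)(|0>+|1>)/2 -- the no-cloning theorem.
plus = (ket(0) + ket(1)) / np.sqrt(2)
out = CNOT @ np.kron(plus, ket(0))
bell = (ket(0, 0) + ket(1, 1)) / np.sqrt(2)
assert np.allclose(out, bell)
print("basis states copy; superpositions entangle instead")
```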

cgk said:
Will quantum computers actually be useful for anything else? That remains to be seen. It is not even clear if they would be more efficient (than classical computers) for the simulation of actual quantum systems.

Could you please elaborate on that? It seems pretty clear to me that quantum systems can simulate themselves efficiently.
 
  • Like
Likes VforVendetta
  • #19
Giving a succinct summary of how quantum computers are faster is hard. You can get the speedups from different subsets of quantum behavior. For example, cluster state quantum computation is based almost entirely on measurement, whereas your more typical quantum logic circuits basically don't need measurement at all.

Ultimately, it always comes down to being able to hold and manipulate exponentially huge superpositions without collapsing them. But that's more of a definition of a quantum computation than an explanation of why it helps. You'll probably understand better at first by reading concrete examples, like a beginner-level explanation of Grover's algorithm, than you will by trying to see the big picture.
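As one such concrete example, here is a tiny classical simulation of Grover's algorithm on the smallest interesting case (4 items, where a single iteration suffices; the marked index is an arbitrary choice for the demo), showing how amplitude amplification concentrates the superposition on the answer:

```python
import numpy as np

# Grover search over N = 4 items (2 qubits); one iteration suffices for N = 4.
N, marked = 4, 2  # 'marked' index is an arbitrary choice for the demo

state = np.full(N, 1 / np.sqrt(N))      # uniform superposition over all items

oracle = np.eye(N)
oracle[marked, marked] = -1             # phase-flip the marked item

mean_inv = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

state = mean_inv @ (oracle @ state)     # one Grover iteration

probs = state ** 2                      # measurement probabilities
print(np.argmax(probs))                 # the marked item, with probability ~1
```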
 
  • Like
Likes Truecrimson
  • #20
Strilanc said:
Giving a succinct summary of how quantum computers are faster is hard.

Which is why I asked it.

;)

I will still learn how to put hypertext in my comments, though.
 
  • #21
Truecrimson said:
Could you please elaborate on that? It seems pretty clear to me that quantum systems can simulate themselves efficiently.

Yes, this is clear, but it's not yet determined whether a classical computer can efficiently simulate a quantum system, so technically we don't know whether the quantum algorithms are better or not. What quantum algorithms do, in the end, is take a problem of a certain complexity class (NP-intermediate, as far as we know) which classically requires an exponential amount of resources to solve, and solve it with only a polynomial amount of resources. Thus a quantum computer doesn't do much for most types of problems, because they're not hard enough to need a complexity-class change to have an efficient solution, so in those cases a quantum and a classical computer have the same power. However, there is a broad class of problems, known as the hidden subgroup problem (of which finding a number's prime factors is a special case), which does belong to this complexity class, and for those problems the quantum algorithms beat any known classical algorithm (they are exponentially better).

As was pointed out above, the exact source of the power is unknown. We know, for example, that an entangling gate plus arbitrary single-qubit operations is sufficient (but not necessary), while entanglement plus only the Hadamard single-qubit gate is not sufficient (in fact, any circuit consisting only of CNOTs and Hadamards can be simulated efficiently classically!), so it's a very subtle thing.
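The point that CNOT + Hadamard circuits are classically easy can be sketched with a toy Gottesman-Knill-style tracker (heavily simplified: phases and the Pauli Y are ignored, which is enough for this particular circuit but not in general): instead of 2^n amplitudes, we update one short Pauli string per qubit.

```python
# Track stabilizer generators (Pauli strings, phases ignored for brevity)
# through an H + CNOT circuit: polynomial, not exponential, bookkeeping.

def apply_h(stab, q):
    # Conjugation by H swaps X and Z on qubit q
    swap = {'X': 'Z', 'Z': 'X', 'I': 'I'}
    return [s[:q] + swap[s[q]] + s[q + 1:] for s in stab]

def apply_cnot(stab, c, t):
    # Conjugation by CNOT: X on the control spreads to the target,
    # Z on the target spreads to the control
    out = []
    for s in stab:
        p = list(s)
        if s[c] == 'X' and s[t] == 'I':
            p[t] = 'X'
        if s[t] == 'Z' and s[c] == 'I':
            p[c] = 'Z'
        out.append(''.join(p))
    return out

# |00> is stabilized by ZI and IZ; H on qubit 0 then CNOT(0,1) gives the
# Bell state, stabilized by XX and ZZ -- two short strings, not 4 amplitudes.
stab = ['ZI', 'IZ']
stab = apply_h(stab, 0)        # -> ['XI', 'IZ']
stab = apply_cnot(stab, 0, 1)  # -> ['XX', 'ZZ']
print(stab)
```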
 
  • #22
Edit: Thanks Zarqon. I thought it was cgk replying to me! :oldsmile:

Though the same caveat applies to the factoring problem and the other hidden subgroup problems as well; as you noted, "for those problems, the quantum algorithms beat any known classical algorithm." cgk's post kinda suggests that the power of quantum computers to factor numbers is more proven than their power to simulate quantum systems, for some reason.
 
Last edited:
  • #23
So, as I'm about to consider this conversation dead, I'll write the promised summary (which might not be so short, by the way) about this question. It will be more of an impressions thing; tell me whether it is nonsense or not.

First, I'd like to introduce the "Five-and-a-half" definitions of a quantum computer by Michele Mosca, which basically goes like this:

Definition 1:

Since the world is quantum, any computer is a quantum computer. Conventional computers are just weak quantum computers, since they don’t exploit intrinsically quantum effects, such as superposition and entanglement (and other stuff, like discord, nonlocality, etc).

Definition 2:

A quantum computer is a computer that uses intrinsically quantum effects that cannot naturally be modeled by classical physics. Classical computers may be able to mathematically simulate instances of such computers, but they are not implementing the same kinds of quantum operations.

Definition 2’:

Definition 2, where there are strong tests or proofs of the quantum effects at play (e.g. by doing Bell tests).

Definition 3:

A quantum computer is a computer that uses intrinsically quantum effects to gain some advantage over the best known classical algorithms for some problem.

Definition 4:

A quantum computer is a computer that uses intrinsically quantum effects to gain an asymptotic speed-up over the best known classical algorithms for some problem. (The difference with definition 3 is that the advantage is a fundamental algorithmic one that grows for larger instances of the problem; versus advantages more closely tied to hardware or restricted to instances of some bounded size.)

Definition 5:

A quantum computer is a computer that is able to capture the full computational power of quantum mechanics, just as conventional computers are believed to capture the full computational power of classical physics. This means, e.g. that it could implement any quantum algorithm specified in any of the standard quantum computation models. It also means that the device is in principle scalable to large sizes so that larger instances of computational problems may be tackled.

Definition 1 tells me that it is not logically possible for quantum computers to be SLOWER than classical computers (besides, we have Shor's algorithm, to me a good theoretical candidate that indeed seems to require satisfying Definition 3 or 1). And if you want to say "hey, but this is wrong, because it could happen that BQP = BPP": yes, but the same argument could be applied to "P = PSPACE". Experts in the field have the feeling - not a proof - that both equalities fail, so I'll bandwagon with them, so to speak, and agree with their feeling that ## P \subseteq BPP \subseteq BQP \subseteq PP \subseteq PSPACE ##.

But when people say "quantum computers ARE NOT FASTER than classical computers", they are probably thinking of the tests made on D-Wave - the latest I heard of was published at arXiv:1401.2910v1 - which do not seem to show it being faster than a classical computer (though some commenters suggested that we have a lot of practice with classical optimization, while the quantum kind could need more time). But bear in mind that we are not even sure WHAT IS NEEDED FOR A QUANTUM COMPUTER TO WORK (evoking Definitions 2 and 2' - in my opinion, 2' at least gives us a way to experimentally verify/quantify(?) quantumness in a more established manner) and we do not know the origin of the speedup. Even D-Wave has entanglement (as can be seen at http://journals.aps.org/prx/pdf/10.1103/PhysRevX.4.021041), but as far as I'm concerned that means "it is definitely quantum" - and so are you and me, "deep down there" (probably with much less entanglement or "quantumness", whatever that means) - not that it is "definitely a quantum computer". This leads me to think we need more "measures of quantumness" than are available, and it tells me we don't know what we are talking about, which means WE DON'T KNOW THE ORIGIN OF THE QUANTUM SPEEDUP!

This to me is both good and bad news. It seems I could have a job in the future - which, trust me, is good - but it is also bad because I might not see quantum computers in stores in the next 10-15 years (some weeks ago I naively believed in that possibility, which was making me pretty worried, to be honest). I know some researchers who DO NOT CARE whether quantum computers are built - what is wrong with these people... - as they just want to keep chasing their research (I respect that feeling most days of the week, to be honest).

Two answers seem to confirm my point of view: Zarqon's and JK423's, which, summarizing (respectively - hopefully without changing too much of what they really wanted to express, so this is more an interpretation of mine):

"(...)it's not determined yet whether a classical simulation can efficiently simulate a quantum system, and thus we technically don't know if the quantum algorithms are better or not.(...) [and] the exact source of the power [of quantum computers] is unknown, so [for various reasons] it's a very subtle thing."

"Short answer: nobody knows yet!(...) You might say: 'Ah! Entanglement is responsible for the speedup!'. Not really, though. The fact that you built a fast quantum computer with lots of entanglement doesn't mean that you cannot create a fast quantum computer with very little or no entanglement. So the problem becomes: what are the minimal quantum resources that I can have in a quantum computer that would still allow for an advantage (polynomial or exponential) over classical computing?" (and he also expresses one of the definitions by Mosca, while also making the point I first state after all of those definitions).

While one person mentioned a path-integral way of seeing things - which I honestly do not understand, even if I'm aware of the existence of a formulation that touches the concept - I do not think we need such math to see where we are looking at the problem from the wrong angle, so to speak.

And while cgk's "Will quantum computers actually be useful for anything else? That remains to be seen." is valid, quantum simulation seems to me to be already achieving some cool results; while not technically quantum computation per se, it is related to it.

As noted by Strilanc, it seems that "You can get the speedups from different subsets of quantum behavior.", which is what skimming the articles suggested to me in the first place.

I "used" all of you to guide myself through what people talk about, as I don't think people here have much background in the entire subject - or they have a very personal view, which honestly doesn't constitute a good guide for the questions I'm trying to answer.

If anyone disagrees, you can keep commenting.
 
  • Like
Likes Truecrimson
  • #24
VforVendetta said:
...
Definition 1 tells me that it is not logically possible for quantum computers to be SLOWER than classical computers (besides, we have Shor's algorithm, to me a good theoretical candidate that indeed seems to require satisfying Definition 3 or 1). And if you want to say "hey, but this is wrong, because it could happen that BQP = BPP": yes, but the same argument could be applied to "P = PSPACE". Experts in the field have the feeling - not a proof - that both equalities fail, so I'll bandwagon with them, so to speak, and agree with their feeling that ## P \subseteq BPP \subseteq BQP \subseteq PP \subseteq PSPACE ##.

But when people say "quantum computers ARE NOT FASTER than classical computers", they are probably thinking of the tests made on D-Wave - the latest I heard of was published at arXiv:1401.2910v1 - which do not seem to show it being faster than a classical computer (though some commenters suggested that we have a lot of practice with classical optimization, while the quantum kind could need more time). But bear in mind that we are not even sure WHAT IS NEEDED FOR A QUANTUM COMPUTER TO WORK (evoking Definitions 2 and 2' - in my opinion, 2' at least gives us a way to experimentally verify/quantify(?) quantumness in a more established manner) and we do not know the origin of the speedup. Even D-Wave has entanglement (as can be seen at http://journals.aps.org/prx/pdf/10.1103/PhysRevX.4.021041), but as far as I'm concerned that means "it is definitely quantum" - and so are you and me, "deep down there" (probably with much less entanglement or "quantumness", whatever that means) - not that it is "definitely a quantum computer". This leads me to think we need more "measures of quantumness" than are available, and it tells me we don't know what we are talking about, which means WE DON'T KNOW THE ORIGIN OF THE QUANTUM SPEEDUP!

I'd be very careful about what you mean by slower and faster. The terms are often used loosely, but without an in-depth understanding of how computers work, they can be very deceptive.

For example, a good modern graphics card can perform over a trillion floating-point operations per second, which can be used to render a high-resolution image. There is no like-for-like test that could be performed; nevertheless, there is no quantum computer at the moment that could even approach that rate of floating-point operations.

Generally speaking, it's not even possible to compare GPUs and CPUs in terms of performance without qualifying it with a test. Practically, they are simply applicable to different tasks, not by accident, but because they have been designed in a highly specialised way, specifically for those tasks. The last time CPUs were widely used for graphics was the last millennium, and a lot has changed since then. Quantum computers will find their niche (in my opinion, it will be within artificial intelligence) and will become specialised within that niche.

Generally, CPUs and GPUs operate at practical temperatures; however, a quantum computer must operate very close to absolute zero in order to prevent decoherence from interfering with the processing.

A modern GPU contains thousands of processing cores. In principle, all computers are scalable into larger parallel systems, so energy consumption is an important consideration in determining what performance we can realistically expect. For example, Google's processing consumes power comparable to that produced by a nuclear reactor.
 
Last edited:
  • #25
craigi said:
I'd be very careful by what you mean by slower and faster. The terms are often used loosely, but without an in depth undestanding of how computers work, they can be very deceptive.

Yes, I know that, and, to my understanding, that is why comparing processing speed via floating-point operations is not entirely meaningful - to me it just means we are very good at building what we call classical computers, compared to our very limited understanding and technical knowledge of quantum computers. It could be loosely like comparing an early solid-state transistor with a valve, back when transistors were in their infancy. I think there is always a gap in time in which each technology has to evolve in theoretical and practical aspects (and we really have almost no way to estimate how long such a gap will last). What we know is that FOR MOST PURPOSES transistors supersede valves - with some exceptions.

craigi said:
a Quantum Computer must operate at very close to zero Kelvin in order to prevent decoherence interfering with the processing.

We are not really sure this is a requirement for all experimental implementations. The most popular ones do have this characteristic, but there are implementations that use basically just light and do not need to be cooled anywhere near absolute zero.

A particularly interesting concept is that of linear optical quantum computing, in which some proposals do not need such cooling.

https://en.wikipedia.org/wiki/Linear_optical_quantum_computing

Nevertheless, I should have been more careful with the detail you emphasized. Let's say your answer is something to keep in mind throughout this whole discussion (speed can mean two things).
 
  • #26
With exponentially more power than today's fastest supercomputers, quantum computers could herald a new era of innovation across industries. The first working quantum computer is closer to reality, with IBM overcoming two critical obstacles.

Quantum bits (qubits): the ability to be in multiple states at one time.
IBM superconducting qubits: built on a chip.
Quantum computer: exponentially faster processing power.

Detecting quantum errors:
Exposure to heat and radiation makes qubits prone to errors. For the first time, IBM showed how to detect and measure both types of qubit error, bit-flip and phase-flip, simultaneously.

Building larger quantum systems:
IBM figured out a square design for a quantum circuit that, with more qubits, can scale to a working quantum system.

What can a quantum computer do better?

Quantum computing will scale to a class of problems that are unsolvable today, opening up a new realm of applications:
Searching big data
Designing better drugs and new materials
Machine learning
Cryptography
 
  • #27
I will add my five cents to this interesting topic.

What do we mean by computer "speed"? There are so called computational complexity classes.

In computer science, "time" means "number of elementary operations". The term "polynomial time" means "number of operations is limited by a function that is polynomial with respect to the input size".

With that in mind, here are the relevant complexity classes:
P is the class of problems that can be solved in polynomial time on a so-called deterministic Turing machine.
NP is the class of problems that can be solved in polynomial time on a so-called nondeterministic Turing machine.
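To make the abstract notion of "time as operation count" concrete, here is a toy sketch of my own (not from any article): a polynomial operation count versus an exponential one, for the same input size:

```python
def comparisons_in_selection_sort(xs):
    """Count the elementary comparisons made while sorting.
    The count grows as ~n^2/2, i.e. polynomial in the input size."""
    xs = list(xs)
    count = 0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            count += 1
            if xs[j] < xs[i]:
                xs[i], xs[j] = xs[j], xs[i]
    return count

def subsets_checked_by_brute_force(n):
    """A brute-force search over all subsets of n items inspects
    2**n candidates -- exponential in the input size."""
    return 2 ** n

print(comparisons_in_selection_sort(range(10)))  # 45 = 10*9/2
print(subsets_checked_by_brute_force(10))        # 1024
```

For n = 10 the gap looks modest, but at n = 100 the polynomial count is still in the thousands while the exponential one exceeds the number of atoms we could ever compute with.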

Both of those "machines" are abstract concepts; they don't exist in nature.

As probably everybody knows, it is unknown whether P = NP. However, the weight of the arguments suggests they are not equal, and we will use this assumption from here on.

If P != NP, then (by Ladner's theorem) there are infinitely many classes strictly between them. Let's call them C1, C2, C3, ..., and let's call one of those classes CX.
That means the hierarchy goes: P, C1, C2, C3, ..., CX, ..., NP.

Now there is a question: how fast can certain problems be solved in reality? Computer scientists have their own concept of "time" (the number of elementary computational operations). Now we want to know how this translates to real, physical, ordinary time!

So we ask: which problems can be solved in a number of physical time units (e.g. nanoseconds) that is polynomial with respect to the input size?

P and NP refer to an abstract definition of time. Now we are talking about real time.

Arguments have been presented that all real-world computers are quantum. Let's stick to that.

Now I present some theses that I read once in a popular-science article. I can't find it now, but I believe it was accurate.

Weak quantum computers (that is, ordinary computers like the one on my desk) can effectively solve problems from the class P.
Strong quantum computers (ones like D-Wave, for instance) can effectively solve problems from one of the classes CX. It is unknown which one, but there was an argument that the class must be greater than P and smaller than NP.
Strong quantum computers with closed time loops would be able to solve problems from NP and beyond.

The author stated that experiments could be performed to test whether closed time loops exist. If we found a system that can solve NP problems in polynomial real time, then such time loops would exist. To date, every system we know of has computational power below NP, which might mean that, sadly, there are no time loops in our universe.

Again, I want to stress the difference between computational time and real time.
 
  • #28
Ilja said:
The point of the quantum computer is that if one looks at how to compute the quantum probabilities, one has to compute a path integral. To compute this path integral on a classical computer one would need a lot of time - it is an integral over all possible paths.

The first idea is that, if one has to compute such a path integral, one could, instead of doing it on a classical computer, which would need a lot of resources, simply use the real world as an analog computer. All one needs for this is a real experiment with a real device which, according to quantum theory, gives the same result as the path integral we want to compute. In this case, Nature simply "computes" its own path integral microscopically, and the result is what QT predicts - the path integral. The designer then has to modify this construction in such a way that we can use these computations of QT path integrals to compute something useful for us.

For many operations this gives nothing new - a classical computer can be understood as doing a similar thing in classical mechanics. So one would expect some improvement only for special problems which are somehow similar to path integrals.
Why are you so attached to path integrals? There are certainly other ways to compute things in quantum physics. Moreover, if you take a look at the literature on quantum computation, path integrals are (almost) never mentioned. The main technique used in quantum computation literature is the matrix formulation of (spin-like) systems in finite dimensional Hilbert spaces.
 
  • #29
Demystifier said:
Why are you so attached to path integrals? There are certainly other ways to compute things in quantum physics. Moreover, if you take a look at the literature on quantum computation, path integrals are (almost) never mentioned. The main technique used in quantum computation literature is the matrix formulation of (spin-like) systems in finite dimensional Hilbert spaces.

Yep. I think I also read once or twice about a path integral approach, but, honestly, it is so rare that it doesn't seem to be fundamentally necessary. If one has a preference for this description, it could in principle be used.

haael said:
What do we mean by computer "speed"? There are so called computational complexity classes.

In computer science, "time" means "number of elementary operations". "Polynomial time" means that the number of operations is bounded by a polynomial function of the input size.
(...)
Now there is a question: how fast can certain problems be solved in reality? Computer scientists have their own concept of "time" (the number of elementary computational operations). Now we want to know how this translates to real, physical, ordinary time!

So we ask: which problems can be solved in a number of physical time units (e.g. nanoseconds) that is polynomial with respect to the input size?

P and NP refer to an abstract definition of time. Now we are talking about real time.

Arguments have been presented that all real-world computers are quantum. Let's stick to that.

Now I present some theses that I read once in a popular-science article. I can't find it now, but I believe it was accurate.

Weak quantum computers (that is, ordinary computers like the one on my desk) can effectively solve problems from the class P.
Strong quantum computers (ones like D-Wave, for instance) can effectively solve problems from one of the classes CX. It is unknown which one, but there was an argument that the class must be greater than P and smaller than NP.
Strong quantum computers with closed time loops would be able to solve problems from NP and beyond.

The author stated that experiments could be performed to test whether closed time loops exist. If we found a system that can solve NP problems in polynomial real time, then such time loops would exist. To date, every system we know of has computational power below NP, which might mean that, sadly, there are no time loops in our universe.

Again, I want to stress the difference between computational time and real time.

This is precisely what I should have emphasized when I first asked the question. I assumed people would read "computational time" whenever I typed "time", "speed", "slow", or "faster". The distinction was already in my mind, but I never expected it to cause such a mess - it actually did, starting with the first answer (which confused me quite a bit at first).

I hope this thread is now free of that confusion. On the bright side, it made me search for the "D-Wave test" article which I had heard so much about but never actually found.
 
  • #30
haael said:
NP is a class of problems that can be solved in polynomial time on so called nondeterministic Turing machine.

Check this definition. The NP class refers to verifiability, not to finding a solution.
 
  • #31
craigi said:
Check this definition. The NP class refers to verifiability, not to finding a solution.

The two definitions are equivalent, although one might prefer the definition based on verifiability because the non-deterministic Turing machine is unphysical.
 
  • #32
For a problem in the NP class, a nondeterministic Turing machine can find a solution in polynomial time, and a deterministic Turing machine can verify a proposed solution in polynomial time.

And yes, both Turing machines are unphysical.
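A concrete way to see both characterizations at once (a toy sketch of my own, using SUBSET-SUM, a standard NP problem): verifying a proposed solution takes polynomial time, while a deterministic machine emulates the nondeterministic "guess" by trying all 2**n subsets, which takes exponential time:

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Deterministic polynomial-time check: is the certificate a
    subset of `numbers` that sums to `target`?"""
    return sum(certificate) == target and all(c in numbers for c in certificate)

def solve_by_brute_force(numbers, target):
    """Deterministic emulation of the nondeterministic guess: try all
    2**n subsets -- exponential time on a deterministic machine."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
sol = solve_by_brute_force(nums, 9)
print(sol)                   # [4, 5]
print(verify(nums, 9, sol))  # True
```

The asymmetry is the whole point of NP: checking the answer is cheap, but no known deterministic (or, as far as we know, quantum) procedure finds it in polynomial time for every instance.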
 
  • #33
Speed can have as much to do with organization as with the speed of the underlying hardware. For example, electrical signals travel far faster than chemical signals, but even with 50 years of Moore's law, we still can't make an electrical computer that performs as many calculations as the three-pound chemical computer inside your head.
 
  • #34
haael said:
Both Turing machines are unphysical.

Every mathematical model is an idealization. But some, like deterministic Turing machines, can be approximated well enough by real hardware, while non-deterministic Turing machines were never meant to be physical in the first place.
 
  • #35
That depends on what you mean by "approximated well enough". I can make a physical simulation of a nondeterministic Turing machine (which, of course, will take exponential physical time to run).

Maybe part of your definition of a good approximation is "if the machine runs in polynomial computational time then it must also run in polynomial physical time". Or maybe you want to say that an "elementary physical operation" is in some sense equivalent to an "elementary computational operation", which is a controversial claim at best.

From computer science we know that NP is contained in EXPTIME (the class of problems solvable in exponential time on a deterministic Turing machine), because the nondeterministic guess can be replaced by an exhaustive search.
So I can make a physical simulation that solves problems from NP and runs in exponential time. Can I say that I just made a physical model of a nondeterministic Turing machine? The class of problems solved is right; only the running time inflates from polynomial to exponential - which is exactly the gap between computational time and physical time.

What I want to say: every mathematical model is an unphysical abstraction and whether we can make a physical simulation of it depends on what we mean by "physical simulation".
For me, deterministic Turing machines are as unphysical as nondeterministic ones.
 