Quantum computer vs supercomputer performance

Summary
This is an attempt to show that a 19 qubit quantum computer matches a 200 petaflops supercomputer in performance.
I've been learning about quantum computing for the last year and I can hardly believe what I read and hear. However, assuming that no big technical hurdles get in the way, I do think that the promises will become reality someday. It is becoming a bit clearer to me, but it's a very difficult subject.

What can a quantum computer do that a conventional supercomputer can't? I will try to show here that a 19 qubit quantum computer could match the performance of a 200 petaflops IBM supercomputer.


First, the state of a quantum computer is defined by a vector of 2^n complex values, where n is the number of qubits in that quantum computer. For example:

1 qubit = 2 values in the state vector
2 qubits = 4 values in the state vector
3 qubits = 8 values in the state vector
... ...
16 qubits = 65,536 values in the state vector
... ...
100 qubits = 1,267,650,600,228,229,401,496,703,205,376 values in the state vector
... ...
and so on.
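The doubling in the list above can be sketched in a couple of lines of Python (the specific qubit counts shown are just the ones from the list):

```python
# Sketch: the state-vector length is 2^n, doubling with each added qubit.
for n in (1, 2, 3, 16, 100):
    print(f"{n} qubits -> {2**n} values in the state vector")
```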

The squared magnitude (absolute value squared) of any one of the complex values in the state vector indicates the probability that this particular list number (index) will be observed at the output of the quantum computer when the MEASURE logic gate (see below) is finally applied.

To program the quantum computer, a person will apply what are called 'quantum logic gates' or just 'gates' to the system in order to achieve the processing that they desire. Some of the names of the 'gates' are Hadamard, cnot, Pauli-X, and MEASURE (the last one applied).

When a programmer starts to manipulate the quantum computer, the state of the qubits is initialized: the first value in the state vector is set equal to 'one' and all of the other values are set equal to 'zero'. If the MEASURE gate were applied now, then the output would always read 0b0...00000, since the probability of observing that first list entry is 100%.
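A minimal NumPy sketch of that initialized state, using a hypothetical 3-qubit machine as the example:

```python
import numpy as np

n = 3                                # hypothetical 3-qubit machine
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                       # amplitude 1 at index 0b000
probs = np.abs(state) ** 2           # measurement probabilities
print(probs[0])                      # MEASURE now would read 0b000 with probability 1.0
```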

The goal of the programmer is usually to apply the necessary quantum logic 'gates' in such a way that the 100% list value moves from list entry number 0b0...00000 to some other list entry number, and that number is what the scientists have been waiting for.

Each logic gate takes less than one microsecond to complete, and what each gate does is modify all of the 2^n current state vector values in order to create (evolve to) the next state vector. This is equivalent to a 2^n by 2^n matrix multiplied by the 2^n current state vector, which is 2^n * 2^n = 2^(2n) multiply/accumulate (MAC) operations in one microsecond.
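Here is a sketch of one gate application as that matrix-vector multiply, for a toy 2-qubit machine; the choice of a Hadamard on the low-order qubit is mine, purely for illustration:

```python
import numpy as np

n = 2                                     # toy 2-qubit machine
dim = 2**n
# Hadamard on the low-order qubit, tensored with identity, as a 4x4 unitary
H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]])
gate = np.kron(np.eye(2), H)
state = np.zeros(dim, dtype=complex)
state[0] = 1.0                            # initialized state: all weight on 0b00
next_state = gate @ state                 # a dim x dim multiply: 2^(2n) MACs
print(dim * dim)                          # 16 MACs for this one gate
```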


A contemporary IBM supercomputer reportedly delivers 200 petaflops of performance. If we let one FLOP (floating-point operation) equal one MAC (multiply/accumulate) operation, then the IBM classical computer can do 200*10^15 MACs/sec * 10^-6 sec = 200*10^9 MACs in one quantum computer gate time.

So, equating the quantum computer to the IBM supercomputer's performance (for one quantum gate time):
2^(2n) = 200*10^9
or
n ≈ 18.8 qubits (19 qubits) in order for a quantum computer to match the MAC performance of an IBM 200 petaflop supercomputer.


This is really amazing, and assuming that this is generally correct then each additional qubit means a 4x increase in MAC performance. Scientists are talking about applications requiring 500 qubits. It's going to be interesting.

Bob
 

FactChecker
This is probably too simple to really compare computers, but it is enough to see why quantum computers are considered so promising. For one thing, the result of a quantum computer calculation cannot effectively use all the qubits. Additional qubits are needed for error correction, and others will not be relevant to any particular problem.
 
Yes, this is certainly a simple model, but even if it's off by a factor of 1,000x in MAC performance, that's only a deficit of about 5 qubits.

This discussion assumes that all qubits are what are termed 'logical qubits'. That is, they are fully error-corrected and have infinite 'coherence' times. These types of qubits are a LONG way off. Contemporary 2019 qubits are short-lived.

I've read that one logical qubit might take up to a thousand physical qubits to implement error correction. If this can be done on silicon using traditional chip-making techniques (go, Intel!) then 500 logical qubits shouldn't be too difficult to manufacture. I hope I get to see this type of power become real.
 


FactChecker
If this can be done on silicon using traditional chip-making techniques (go, Intel!) then 500 logical qubits shouldn't be too difficult to manufacture.
Making a chip with large numbers of qubits has been done by D-Wave, but the connections and arrangements are critical and limit what the chip can do. The chips are not the same as traditional chips: D-Wave computers cannot do the usual logic calculations; they use annealing to do their work.
 
A physics professor recently said (I think it was in an MIT video) that the D-Wave annealing approach had been modeled by a team using a conventional supercomputer, and the simulation had outperformed D-Wave's machine. Whether the D-Wave processor exploits any real quantum speedup effect (tunneling is one that is mentioned) remains to be seen, but even if it turns out to be a dud, the learning that went with it seems important.
 
Isn't the advantage of the quantum computer not outperforming "traditional" computers at their own game, but beating them at a different game, where the ability to hold multiple superpositions is employed?
 
Isn't the advantage of the quantum computer not outperforming "traditional" computers at their own game, but beating them at a different game, where the ability to hold multiple superpositions is employed?
Yes, this is certainly not an apples-to-apples comparison since the algorithms used by both are different, but it's the only tangible comparison that I've been able to come up with, and I haven't found anything else that even hints at what is behind the processing power of a quantum computer.

The superpositions of the system show their existence - indirectly - within the huge matrix multiply that jibes with the probabilistic results of the system after the application of each quantum gate. Follow the process of the 16 multiply/accumulate operations that occur every gate time for a tiny two-qubit computer and you'll see how each of the next-state values gets updated every gate time by ALL of the present-state values. This happens in one gate time - REGARDLESS of how many qubits are in the system. This aspect, along with the notion that there are 2^n state values stored somewhere within these qubits, truly amazes me.
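The 16 multiply/accumulates for the two-qubit case can be followed explicitly with a sketch like this; the Hadamard-on-both-qubits gate is my illustrative choice, not from the original post:

```python
import numpy as np

n = 2
dim = 2**n                               # 4 present-state values
H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]])
U = np.kron(H, H)                        # Hadamard on both qubits (4x4)
state = np.zeros(dim, dtype=complex)
state[0] = 1.0                           # initialized state: all weight on 0b00
next_state = np.zeros(dim, dtype=complex)
macs = 0
for i in range(dim):                     # each next-state value...
    for j in range(dim):                 # ...is updated by ALL present values
        next_state[i] += U[i, j] * state[j]
        macs += 1
print(macs)                              # 16 = 2^(2n) multiply/accumulates
```

After this gate, all four amplitudes equal 0.5, i.e. an equal superposition of the four outcomes.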

Bob
 

FactChecker
Isn't the advantage of the quantum computer not outperforming "traditional" computers at their own game, but beating them at a different game, where the ability to hold multiple superpositions is employed?
I think this is stated too simply. There are large families of problems, like integer optimization and searches for solutions, that traditional computers are used for. Within those types of problems, there are subsets where algorithms work to greatly simplify the problem and allow traditional computers to solve them relatively efficiently. The remaining problems present traditional computers with a huge number of combinations to try, one at a time. Those are the problems where quantum computers can conceivably use superposition to try many combinations at once to find the solution. That is where quantum computers have a possible advantage.
 

