Shor's algorithm and similar exploitation of QM?

In summary, the conversation discusses quantum algorithms, focusing on Shor's algorithm and its ability to solve, in polynomial time, a problem believed to require exponential classical time. The participants also consider whether other problems could be solved in polynomial time using quantum computers, as well as the general approach of using Hadamard gates and classical functions in quantum algorithms. They also touch on quantum annealing and its potential for solving discrete optimization problems, but acknowledge the difficulty of maintaining the global minimum in larger problems due to the exponential growth of the number of local minima. The conversation ends with a mention of tests performed by researchers at various institutions to evaluate the capabilities of quantum computers.
  • #1
jarekduda
Shor's algorithm is arguably the most interesting quantum algorithm, as it shifts a problem believed to require exponential classical time into polynomial time on a quantum computer, thereby also endangering asymmetric encryption such as RSA and ECC.
The real question is whether there are other problems, besides factorization and the closely related discrete logarithm problem, which can be shifted from exponential to polynomial time thanks to quantum computers. Are there other known examples?

It is generally clear that to design a quantum algorithm we should start with Hadamard gates, perform some unitary evolution, then a measurement. However, for me personally the unitary part is a bit too abstract for direct algorithmic thinking - Shor grasps what seems to be a general and powerful mechanism, which it might be possible to exploit for essentially different problems as well.
So I am looking for quantum algorithms exploiting QM in a similar way to Shor's - here is a diagram of its quantum subroutine:
[diagram: fqm1.png, the quantum subroutine of Shor's algorithm]

  1. The Hadamard gates create a superposition of all (exponentially many) values of the input qubits.
  2. Then we apply a classical function to them, here f(a) = y^a mod N, where N is the number we would like to factorize and y is some (usually random) number smaller than N and coprime to it.
  3. Then we measure the value of this function, f(a) = m; the outcome m is random, but the measurement restricts the original ensemble to only those input values a for which f(a) = m.
  4. Mathematics says that this restricted ensemble has to be periodic; the period can be read off from the (quantum) Fourier transform and allows us to deduce the factors. A toy classical simulation of these four steps is sketched below.
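
A minimal classical simulation of this subroutine, assuming the toy values N = 15, y = 7 and plain NumPy (it only mimics the measurement statistics and is of course not efficient; all names below are just illustrative):

# Toy classical simulation of Shor's period-finding subroutine (N = 15, y = 7).
import numpy as np
from math import gcd

N, y = 15, 7          # number to factor and a base coprime to N
n = 8                 # number of input qubits -> 2^n superposed values of a
A = np.arange(2**n)

# 1. Hadamard gates: uniform superposition over all inputs a.
# 2. Classical function f(a) = y^a mod N evaluated on the whole superposition.
f = np.array([pow(y, int(a), N) for a in A])

# 3. Measure f: a random outcome m restricts the ensemble to the a with f(a) = m.
m = np.random.choice(f)
restricted = (f == m).astype(float)
restricted /= np.linalg.norm(restricted)

# 4. The restricted ensemble is periodic; its period shows up as peaks in the
#    Fourier transform. Read off the strongest nonzero frequency.
spectrum = np.abs(np.fft.fft(restricted))
k = 1 + np.argmax(spectrum[1:len(spectrum)//2])
r = round(2**n / k)                      # candidate period of f
print("measured m =", m, "estimated period r =", r)

# An even period r yields factors of N via gcd(y^(r/2) ± 1, N).
if r % 2 == 0:
    print("factors:", gcd(pow(y, r//2) - 1, N), gcd(pow(y, r//2) + 1, N))

For this toy size the classical simulation is trivial; the whole point of the quantum version is that the superposition and the QFT act on exponentially many values at once.
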
Finally, the general approach (which we might try to exploit for different problems) is:
  1. use Hadamard gates to get a superposition of all inputs,
  2. perform some classical function f,
  3. measure the value m of this function, restricting the original ensemble to f(a) = m,
  4. extract some information from the final restricted ensemble.
This possibility of restricting the ensemble to a fixed value of a chosen function seems powerful, but when trying to use it for various NP-complete problems I couldn't get any real help from it - the main obstruction is that the measured value of f is random.
Besides periodicity via the QFT, what other information could we extract from such a restricted ensemble? Asking whether its size is larger than one is simple - what other questions can we ask?

Any thoughts? Interesting quantum algorithms besides Deutsch, Grover and Shor?
 
  • #2
I don't know if you want to consider it, but there is an entirely different approach, quantum annealing, which does not use an algorithm of the type of Shor's algorithm. It is being investigated for solving discrete optimization problems. See https://en.wikipedia.org/wiki/Quantum_annealing .
 
  • #3
Sure, I know adiabatic quantum computers, but personally I don't think they will scale up to really large problems.
They start with a simple Hamiltonian with a single minimum and slowly (adiabatically) add a perturbation such that the system remains in the global minimum ... and the global minimum of the final Hamiltonian encodes the solution to our problem.

However, I have some experience (see my https://arxiv.org/pdf/1703.04456 ) with transforming NP-complete problems into finding a global minimum - the problem is that the number of local minima grows exponentially.
For example, in 3SAT you want all 3-variable clauses (e.g. x or (not y) or z) to be satisfied - local minima will satisfy most of the clauses but violate a few of them, and there is an exponential number of such pathological attractive local minima.

So the problem for adiabatic quantum computers is not to lose the global minimum while (non-unitarily) passing through the saddle points that split off toward the traps of local minima ... not only would the evolution need to be extremely slow, but the temperature would also need to be extremely low - to distinguish local from global minima.
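
To make the adiabatic picture concrete, here is a minimal numerical sketch (all parameters are made-up toy values and both Hamiltonians are arbitrary illustrative choices, nothing like a real annealer): it sweeps H(s) = (1-s) H0 + s H1 for two qubits and prints how the overlap with the final ground state depends on the total sweep time.

# Toy adiabatic sweep H(s) = (1-s)*H0 + s*H1 on 2 qubits, integrated with
# piecewise-constant small time steps; compares slow and fast sweeps.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# H0: transverse field with a unique ground state; H1: toy "problem" Hamiltonian
# whose (non-degenerate) ground state encodes the sought solution.
H0 = -(np.kron(sx, I2) + np.kron(I2, sx))
H1 = -(np.kron(sz, I2) + 0.5 * np.kron(I2, sz) + 2.0 * np.kron(sz, sz))

def ground_state_overlap(T, steps=2000):
    dt = T / steps
    psi = np.linalg.eigh(H0)[1][:, 0]          # start in the ground state of H0
    for k in range(steps):
        s = (k + 0.5) / steps
        ew, ev = np.linalg.eigh((1 - s) * H0 + s * H1)
        # exact propagator exp(-i H dt) for this small step
        psi = ev @ (np.exp(-1j * ew * dt) * (ev.conj().T @ psi))
    target = np.linalg.eigh(H1)[1][:, 0]       # ground state of the final H1
    return abs(target.conj() @ psi) ** 2

for T in (1.0, 10.0, 100.0):
    print(f"total sweep time T = {T:6.1f}: final ground-state overlap = "
          f"{ground_state_overlap(T):.3f}")

For this 2-qubit toy the spectral gap stays of order one, so modest sweep times should already suffice; the worry above is precisely that for hard instances the gap (and the distinguishability of local from global minima) shrinks as the problem grows.
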
 
  • #4
I don't want to hijack your thread, but I can't resist defending the annealing method a little.
jarekduda said:
Sure, I know adiabatic quantum computers, but personally I don't think they will scale up to really large problems.
Certainly the computer hardware scales well. D-Wave has produced a 2,000-qubit annealing computer, while IBM recently celebrated 17 qubits for a universal quantum computer. If the Google researchers are to be believed, they can already solve some difficult example problems 100 million times faster than a traditional computer can.
They start with a simple Hamiltonian with a single minimum and slowly (adiabatically) add a perturbation such that the system remains in the global minimum ... and the global minimum of the final Hamiltonian encodes the solution to our problem.

However, I have some experience (see my https://arxiv.org/pdf/1703.04456 ) with transforming NP-complete problems into finding a global minimum - the problem is that the number of local minima grows exponentially.
Certainly the ability of an annealing computer to tunnel toward the global minimum rather than getting hung up at a local minimum is a critical advantage of the quantum computer.
For example, in 3SAT you want all 3-variable clauses (e.g. x or (not y) or z) to be satisfied - local minima will satisfy most of the clauses but violate a few of them, and there is an exponential number of such pathological attractive local minima.

So the problem for adiabatic quantum computers is not to lose the global minimum while (non-unitarily) passing through the saddle points that split off toward the traps of local minima ... not only would the evolution need to be extremely slow, but the temperature would also need to be extremely low - to distinguish local from global minima.
In a military application, it can be a great advantage to find random, unpredictable, near-optimal solutions to a problem.

I worked briefly with people on the D-Wave quantum computer. The only thing that I can tell you is that they are geniuses and I am not.
 
  • #5
Sure, you can scale up the number of qubits, but that does not mean they will still maintain the global minimum and solve our problem - they will perform some evolution, but almost certainly one far from what you would like.
The problem is that the number of local minima - traps for the evolution - grows exponentially with the number of qubits used, making maintaining the global minimum an impossible task for larger problems.

Wikipedia writes "Tests performed by researchers at Quantum Artificial Intelligence Lab (NASA), USC, ETH Zurich, and Google show that as of now, there is no evidence of a quantum advantage." with links to 3 articles: https://en.wikipedia.org/wiki/Adiabatic_quantum_computation#D-Wave_quantum_processors
E.g. from abstract of http://science.sciencemag.org/content/345/6195/420 : "Comparing the performance of the device on random spin glass instances with limited precision to simulated classical and quantum annealers, we find no evidence of quantum speedup when the entire data set is considered, and obtain inconclusive results when comparing subsets of instances on an instance-by-instance basis.".

The quantum business has amazing marketing, but that doesn't mean the products really make sense.
Take quantum cryptography, which is always vulnerable to a man-in-the-middle attack: if someone takes control of both the quantum and the classical channel, he can pretend to A that he is B, and to B that he is A. An analogous level of protection can be obtained with just two classical channels - such that taking control of only one of them gives you absolutely nothing.
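
The two-channel idea can be illustrated with a trivial XOR secret-sharing sketch (purely hypothetical code, just to show the information-theoretic point; it does not by itself solve authentication): the message is split into two shares sent over different channels, and whoever controls only one channel sees a uniformly random string.

# Split a secret over two classical channels with XOR shares; seeing only one
# channel reveals nothing about the message, both together reconstruct it.
import os

def split(message: bytes):
    share1 = os.urandom(len(message))                        # goes over channel 1
    share2 = bytes(a ^ b for a, b in zip(message, share1))   # goes over channel 2
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

msg = b"shared secret key"
s1, s2 = split(msg)
print("channel 1 carries:", s1.hex())     # uniformly random on its own
print("channel 2 carries:", s2.hex())     # uniformly random on its own
print("receiver reconstructs:", combine(s1, s2))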
 
  • #7
Here is their arxiv: https://arxiv.org/pdf/1512.02206.pdf
As I thought, even for 1000 qubits their energy landscape has just a few local minima - sure, quantum annealing might be faster than standard simulated annealing ... but for really hard problems the number of local minima grows exponentially - you have more like 2^1000 local minima, making the optimization virtually impossible for both classical and quantum annealing.
 
  • #8
jarekduda said:
Here is their arxiv: https://arxiv.org/pdf/1512.02206.pdf
As I thought, even for 1000 qubits their energy landscape has just a few local minima - sure, quantum annealing might be faster than standard simulated annealing ... but for really hard problems the number of local minima grows exponentially - you have more like 2^1000 local minima, making the optimization virtually impossible for both classical and quantum annealing.
I think your estimate of the number of local minima is worst-case and probably extremely pessimistic for practical problems. Do you have any reason to think that every independent variable leads to an additional local minimum? And why do you think that a high density of local minima would prevent tunneling to the global minimum? I am more inclined to think that the difference in energy levels is more important.
 
  • #9
So imagine you want to formulate 3SAT as minimization of a polynomial (a sum of squares), as in my arXiv paper - it turns out that you can get down to a degree-2 polynomial for the clauses ... but there remain terms like the sum of
x_i^2 (1-x_i)^2
to enforce that all variables finally end up at 0 or 1.
So try gradient descent from one of the points of {0,1}^n ... it turns out that you will land in one of the neighboring local minima ... (a toy numerical sketch follows below)
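
A toy numerical sketch of this kind of landscape (the penalty form below is a simple illustrative choice with made-up sizes, not the exact degree-2 construction from the paper):

# Tiny planted 3SAT instance turned into a polynomial cost; naive gradient
# descent from random corners of {0,1}^n, counting runs that end with
# violated clauses even though a satisfying assignment exists.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 80                        # variables, clauses (ratio 4)
planted = rng.integers(0, 2, n)      # hidden satisfying assignment

clauses = []
while len(clauses) < m:
    vars_ = rng.choice(n, 3, replace=False)
    signs = rng.integers(0, 2, 3)    # 1 = positive literal, 0 = negated
    if any(planted[v] == s for v, s in zip(vars_, signs)):   # keep it satisfiable
        clauses.append((vars_, signs))

def cost_and_grad(x):
    cost = np.sum(x**2 * (1 - x)**2)                  # push variables to {0,1}
    grad = 2*x*(1 - x)**2 - 2*x**2*(1 - x)
    for vars_, signs in clauses:
        # t_i = 1-x for a positive literal, x for a negated one; the product is
        # zero exactly when at least one literal of the clause is satisfied
        t = np.where(signs == 1, 1 - x[vars_], x[vars_])
        cost += np.prod(t)
        for i, v in enumerate(vars_):
            dt = -1.0 if signs[i] == 1 else 1.0
            grad[v] += dt * np.prod(np.delete(t, i))
    return cost, grad

stuck = 0
for trial in range(10):
    x = rng.integers(0, 2, n).astype(float)           # start at a random corner
    for _ in range(600):                              # plain gradient descent
        c, g = cost_and_grad(x)
        x -= 0.05 * g
    final = (x > 0.5).astype(int)
    violated = sum(not any(final[v] == s for v, s in zip(vs, ss))
                   for vs, ss in clauses)
    stuck += violated > 0
    print(f"trial {trial}: final cost {c:7.3f}, violated clauses {violated}")
print(f"{stuck}/10 runs ended with violated clauses (a satisfying assignment exists by construction)")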

Here is a discussion focused on it - with my post from which you can download a Mathematica notebook that generates a polynomial whose minimization would solve a factorization problem (and a description of my frustrating attempts to minimize it numerically): https://encode.ru/threads/2739-Cryp...h-3-SAT-problem)?p=52371&viewfull=1#post52371
I wish you good luck!
 
  • #10
Minimizing in one variable does not force a local minimum. It depends on the coefficients of the problem. Furthermore, there are convexification transformation techniques that show promise for mitigating the problem of local minima. I am not sure how well the transformations can be applied in this situation and I do not think that it has been investigated.

This isn't really my field, so I cannot say more. But I have confidence that the relevant research people at Google, Lockheed Martin, USC, and NASA are not dumb. They may eventually be proven wrong, but not at the level at which we are discussing this.
 
Last edited:
  • #11
In real problems the number of variables is not one but thousands or more - sure, just take such a nasty polynomial and try to minimize it to appreciate the difficulty.
After a few simplifications we can get here to the subset sum problem, which can be seen as the question of whether a given hyperplane intersects {0,1}^n.
Looking at it as a minimization problem, each of these 2^n points has its own minimal-distance point on the hyperplane (its orthogonal projection) and a corresponding attractor for, e.g., gradient descent methods.
So we have something like 2^n different attractors; let's say only one of them is the proper one ... and this one is not so different from the remaining ones - I have sat on this for some time and now see it as just hopeless from the point of view of global minimization (a small numerical illustration is sketched below).
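
A small numerical illustration of this picture (an arbitrary toy subset-sum instance): project every corner of {0,1}^n onto the hyperplane a·x = T and count how many distinct projection points, i.e. candidate attractors, there are, versus how many corners actually solve the problem.

# Subset sum as "does the hyperplane a·x = T intersect {0,1}^n?": every corner
# has its own orthogonal projection (its own nearby attractor), while only a
# handful of corners lie on the hyperplane exactly.
import itertools
import numpy as np

a = np.array([3.0, 5.0, 7.0, 11.0, 13.0, 17.0, 19.0, 23.0])
T = 36.0                             # solvable, e.g. 13 + 23 = 36
n = len(a)

corners = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
residual = corners @ a - T
projections = corners - np.outer(residual / (a @ a), a)   # orthogonal projection

print("corners of the cube:", len(corners))
print("corners lying exactly on the hyperplane (true solutions):",
      int(np.isclose(residual, 0.0).sum()))
print("distinct projection points (candidate attractors):",
      len(np.unique(np.round(projections, 9), axis=0)))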

Besides, if you could transform some NP-complete problem into the minimization of a function having only a polynomial number of local minima, you could numerically search all of them in polynomial time on a classical computer, so you would have P = NP.

Realistically, for the moment I see the interest in adiabatic quantum computers (and quantum cryptography) as mostly a fashion - it sounds cool and interesting, people want to pay for it ... but sure, I also hope I am wrong here.
 
  • #12
Thinking about Shor's algorithm in the first post above, one might wonder what finally happens to the auxiliary qubits.

Quantum computers require reversible/unitary operations, e.g. we cannot just use an OR gate: (x, y) -> x or y ...
but we need to use e.g. (x, y, z) -> (x, y, z xor (x or y)),
which is its own inverse ... but it uses one additional auxiliary qubit - so the required number of auxiliary qubits is comparable to the number of gates of the classical function, which can be quite large.
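
A quick sanity check of this reversible construction, as a plain truth-table sketch:

# Check that (x, y, z) -> (x, y, z xor (x or y)) is its own inverse, and that
# starting with the auxiliary bit z = 0 its third output carries x or y.
from itertools import product

def rev_or(x, y, z):
    return (x, y, z ^ (x | y))

for x, y, z in product((0, 1), repeat=3):
    out = rev_or(x, y, z)
    assert rev_or(*out) == (x, y, z)      # applying the gate twice is the identity
    if z == 0:
        assert out[2] == (x | y)          # with z = 0 the third bit is x or y
print("own inverse on all 8 inputs; with z = 0 the third output bit equals x or y")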

But what happens to all these auxiliary qubits at the end of the computation, and after it?
Measuring the classical function has led to the crucial restriction of the original ensemble - can we ensure that the auxiliary qubits are not eventually also measured/collapsed, further restricting the ensemble? Is there some time interval during which such a measurement would restrict the ensemble?
If not, can we ensure that the restriction from such a collapse does not cripple our computation?
 

1. What is Shor's algorithm and how does it exploit quantum mechanics?

Shor's algorithm is a quantum algorithm developed by mathematician Peter Shor in 1994. It is used to efficiently factor large integers, which is a task that would take classical computers an impractically long time to solve. This algorithm exploits the quantum properties of superposition and entanglement to perform calculations much faster than classical algorithms.

2. Why is Shor's algorithm considered a breakthrough in quantum computing?

Shor's algorithm is considered a breakthrough because it was the first quantum algorithm to demonstrate a significant speed-up over classical algorithms. It opened up the possibility of using quantum computers to solve complex problems that were previously thought to be intractable.

3. Can Shor's algorithm be used for other applications besides factoring large integers?

Yes, the same period-finding approach also solves the closely related discrete logarithm problem, and Shor's algorithm applies to any problem that can be reduced to factoring. This includes breaking certain encryption schemes, such as RSA, which rely on the difficulty of factoring large numbers.

4. Are there any limitations to Shor's algorithm?

One limitation of Shor's algorithm is that it requires a large number of qubits and precise control of quantum operations, which is currently a challenge for practical implementation. Additionally, as more efficient classical factoring algorithms are developed, the advantage of Shor's algorithm may decrease.

5. How does Shor's algorithm impact the future of quantum computing?

Shor's algorithm is seen as a major milestone in the development of quantum computing and has sparked significant interest and advancements in the field. It has also raised concerns about the potential threat to current encryption methods and the need for developing post-quantum cryptography. Shor's algorithm serves as a proof of concept for the power of quantum computing and continues to inspire research and innovation in this field.
