# Spectral Gap or Gapless Undecidable

Homework Helper
So there is a new paper.
Presented here: http://arxiv.org/pdf/1502.04135.pdf
Published here: http://www.nature.com/nature/journal/v528/n7581/full/nature16059.html

Here is an excerpt from the abstract:
These problems are all particular cases of the general spectral gap problem: Given a quantum many-body Hamiltonian, is the system it describes gapped or gapless? Here we show that this problem is undecidable, in the same sense as the Halting Problem was proven to be undecidable by Turing. A consequence of this is that the spectral gap of certain quantum many-body Hamiltonians is not determined by the axioms of mathematics, much as Gödel's incompleteness theorem implies that certain theorems are mathematically unprovable.

This suggests that there may be cases where you can experimentally determine whether there is a gap, but that, in theory, there is no way to determine this through analysis.

Is this true, and are there actual examples? For example, is there a specific superconductor that:
- relies on the absence of a spectral gap,
- has a structure about which enough is known that a computation can be attempted,
- but for which the computation is demonstrably undecidable?


Strilanc
This suggests that there may be cases where you can experimentally determine whether there is a gap, but that, in theory, there is no way to determine this through analysis.

No it doesn't. The undecidability applies to solving the problem on an unbounded grid. You won't be able to perform such an experiment for obvious practical reasons.

Furthermore, note that quantum mechanics is not required for creating undecidable problems. Classical mechanics also allows for objects with undecidable properties (given unbounded space). We call them computers.

Homework Helper
This suggests that there may be cases where you can experimentally determine whether there is a gap, but that, in theory, there is no way to determine this through analysis.
No it doesn't. The undecidability applies to solving the problem on an infinite grid. You won't be able to perform such an experiment for obvious practical reasons.

Furthermore, note that quantum mechanics is not required for creating undecidable problems. Classical mechanics also allows for widgets with undecidable properties (given unbounded space). They have a complicated name... what was it...? Oh yeah, computers.
The potential difference between the undecidable problems related to computers and the ones that might be related to physical events is that on computers, they are simply left unsolved. But if they affected physical events, the universe wouldn't just stop when it gets to one of these problems.

Actually, the last statement in that phys.org article (http://phys.org/news/2015-12-quantum-physics-problem-unsolvable-godel.html) suggests that no such case has been found:
The researchers are now seeing whether their findings extend beyond the artificial mathematical models produced by their calculations to more realistic quantum materials that could be realised in the laboratory.

You don't sound optimistic about their success in that venture.

Strilanc
The potential difference between the undecidable problems related to computers and the ones that might be related to physical events is that on computers, they are simply left unsolved. But if they affected physical events, the universe wouldn't just stop when it got to one of these problems.

In their problem, they're trying to figure out if adding more lattice sites will cause the problem instance to become gapped. Alternatively, it might stay gapless forever. They proved that this question is undecidable.

For a problem that does eventually have a gap, you will eventually figure out that it in fact has a gap by working your way through larger and larger lattices. But for a problem that stays gapless, you're forever left in limbo. No matter how large a lattice you check, you don't become sure that an even larger lattice won't work. And because the problem is undecidable, you won't always be able to find a proof that there is no sufficiently large lattice.

Ok, now the exact same wording but for the halting problem!

In the halting problem, you're trying to figure out if running a program for more steps will cause it to halt. Alternatively, the program may never halt. Turing proved that this question is undecidable.

For any program that does halt, you will eventually figure out that it in fact halts by running more and more steps. Even for programs that run for a very VERY long time. But for a program that doesn't halt, you're forever left in limbo. No matter how long you simulate it, you don't become sure that just a few more steps won't make the program halt. And because the problem is undecidable, you won't always be able to find a proof that the program doesn't halt.

See the similarity now?

It's not that we wouldn't be able to recognize that there's a gap in a given system, it's that we're never sure if we have to keep trying more and more systems or not. Doing experiments doesn't fix this problem, it just makes it physical instead of symbolic. You still find yourself trying larger and larger instances unsure of when to stop.
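To make the one-sidedness concrete, here is a minimal Python sketch, with a toy generator standing in for a real program (the names `halts_within`, `halting_program`, and `looping_program` are made up for illustration): running longer can confirm halting, but a `False` answer never settles anything.

```python
def halts_within(program, max_steps):
    """Run `program` (a zero-argument function returning a generator of
    execution steps) for at most `max_steps` steps.  True means it
    definitely halted; False only means "not yet" -- never "never"."""
    execution = program()
    for _ in range(max_steps):
        try:
            next(execution)
        except StopIteration:
            return True   # the program halted: settled for good
    return False          # inconclusive: a larger budget might still halt it

def halting_program():
    """Toy program that halts after 5 steps."""
    for i in range(5):
        yield i

def looping_program():
    """Toy program that never halts; no finite budget ever exposes this."""
    while True:
        yield 0
```

No matter how large `max_steps` gets, `halts_within(looping_program, n)` keeps returning `False` without ever proving non-halting; that is exactly the limbo described above, whether the budget is simulation steps or lattice sizes.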

Actually, the last statement in that phys.org article (http://phys.org/news/2015-12-quantum-physics-problem-unsolvable-godel.html) suggests that no such case has been found:

You don't sound optimistic about their success in that venture.

I don't see any problem with using the understanding that programs can be sorta encoded into lattices to find new and interesting substances. The researchers won't be able to solve the halting problem with what they make, but that doesn't mean what they make can't be useful or revolutionary.

Homework Helper
Ok, now the exact same wording but for the halting problem!
...
See the similarity now?
I saw the similarity from the start. I have been familiar with the computer halting problem for years or decades.

But what you're adding is that their halting problem only relates to one analytical method of determining whether there is a gap - specifically the method based on a grid. So, from what I now understand, they didn't show that the fundamental gap problem is beyond analysis - only that it is beyond a particular analytic approach.

Is that right?

Strilanc
I'm pretty sure the undecidability applies to any method of analysis and any experimental method. Otherwise we'd have a clear practical counter example to the Church-Turing thesis and it would be GIGANTIC news. Did you have some method in mind?

Homework Helper
I'm pretty sure the undecidability applies to any method of analysis and any experimental method. Otherwise we'd have a clear practical counter example to the Church-Turing thesis and it would be GIGANTIC news. Did you have some method in mind?
No. I was just trying to interpret what you said.
So they are not examining a realistic situation?

Demystifier
Gold Member
Most physicists are not very familiar with concepts such as undecidability, the halting problem, and Gödel's theorems. So let me try to explain the meaning of this result in (my own) simple terms.

Consider the following SF scenario. In the far far future, Wolfram releases a highly advanced version of Mathematica (say, Mathematica 100.0.0) with the following function:
When you input the parameters of an arbitrary infinite-size Hamiltonian, then for any such input (however pathological it might be), Mathematica, after some finite time of processing, gives you the output YES or NO, telling you whether or not the Hamiltonian has a spectral gap.

No doubt, it would be very difficult to make such a program in Mathematica. But would it be possible at least in principle? The theorem proved in the paper in Nature shows that such a program is impossible even in principle.
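To see why such a function is impossible, it helps to sketch the reduction that the paper makes rigorous: if a total gap-decider existed, it could be combined with an encoding of Turing machines into Hamiltonians to decide halting, contradicting Turing. The following Python sketch is purely illustrative: `toy_encode` and `toy_gap_oracle` are made-up stand-ins (neither can exist in full generality), and the convention "gapped exactly when the machine halts" is this sketch's, not necessarily the paper's.

```python
def decide_halting_via_gap(machine, encode, gap_oracle):
    """Hypothetical reduction: IF a total spectral-gap decider existed,
    halting would be decidable.  `encode` maps a machine to a Hamiltonian
    arranged (in this sketch's convention) to be gapped exactly when the
    machine halts; `gap_oracle` plays the impossible Mathematica 100.0.0."""
    hamiltonian = encode(machine)
    return gap_oracle(hamiltonian)

# Toy stand-ins so the shape of the argument can be exercised at all:
def toy_encode(machine):
    return {"gapped": machine["halts"]}  # pretends the encoding step worked

def toy_gap_oracle(hamiltonian):
    return hamiltonian["gapped"]         # pretends the undecidable step worked
```

Since Turing proved no total halting decider exists, at least one of the two ingredients must be impossible; the encoding is explicitly constructed in the paper, so it is the gap oracle that cannot exist.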

However, it is not really such a negative result as it might look.

The theorem does not exclude the possibility of making a program which will work for most cases, and only fail for some pathological cases, on which the program will fall into an infinite loop. All the theorem proves is that such pathological cases exist. It is quite possible that none of the physically interesting cases is pathological in that sense.

Moreover, even for such pathological cases, it does not mean that a good mathematician cannot determine whether there is a gap or not. Given a particular pathological case, a creative mathematician can devise a special method, working only for that special case, to determine whether it has a gap or not.

Furthermore, it is possible to have two (or more) different algorithms, call them A1 and A2, which fail for different cases. So if A1 fails in some case of interest, it may be that A2 works for that case. A2, however, will fail in some other case, for which A1 may work well.

To summarize, all the theorem says is that there is no single algorithm by which a machine can determine (whether there is a gap or not) for every possible input (Hamiltonian).

I hope it helps to clarify the mathematical and physical (in)significance of the result.

Homework Helper
Thanks Demystifier. You have lived up to your name!!

Strilanc
Moreover, even for such pathological cases, it does not mean that a good mathematician cannot determine whether there is a gap or not. Given a particular pathological case, a creative mathematician can devise a special method, working only for that special case, to determine whether it has a gap or not.

Furthermore, it is possible to have two (or more) different algorithms, call them A1 and A2, which fail for different cases. So if A1 fails in some case of interest, it may be that A2 works for that case. A2, however, will fail in some other case, for which A1 may work well.

To summarize, all the theorem says is that there is no single algorithm by which a machine can determine (whether there is a gap or not) for every possible input (Hamiltonian).

The rest of your answer is good, but this part is a bit misleading. There are two important clarifications to make:

1. The combination of two algorithms is also an algorithm, and so must fail to solve some cases. You can't put together a team of algorithms that cover all the holes; there's always more holes.

2. A creative mathematician is also subject to the obstacles of undecidability (... depending on your philosophy). For example, they have to ground their proof of gapped-vs-gapless in an axiomatic system such as ZFC, but "iterate over all proofs and stop when you've shown X or not-X in ZFC" is an algorithm that would find such a proof. Because the problem is undecidable, that algorithm must fail for some cases; there may simply be no proof either way. So the creative mathematician would also fail, unless they're willing and able to introduce more and more new axioms. (Secondarily, given the presumption that physics is computable [i.e. the Church-Turing thesis] and that mathematicians are embedded in physics, mathematicians must fail on some instances of uncomputable problems.)

atyy
I have little real knowledge of Gödel's theorems, but if I recall correctly it was not as harsh as commonly stated. The "proofs" the theorem talks about do not encompass all the ways to prove a given statement in an axiomatic system.

2. A creative mathematician is also subject to the obstacles of undecidability (... depending on your philosophy). For example, they have to ground their proof of gapped-vs-gapless in an axiomatic system such as ZFC, but "iterate over all proofs and stop when you've shown X or not-X in ZFC" is an algorithm that would find such a proof.
I don't understand this part. To me, generating proofs is not enough. They also have to be checked if they work or not. Maybe algorithms can't do it in general (assuming Church-Turing is false).

Strilanc
I don't understand this part. To me, generating proofs is not enough. They also have to be checked if they work or not. Maybe algorithms can't do it in general (assuming Church-Turing is false).

Sorry if that was unclear. I intended for checking that the proof was correct to be implied in "stop when you've shown it". Proofs are defined (or can be defined) in a way that's easy for machines to check, so that's not a problem.

Generally speaking, the hard creative part of math is considered to be the finding of proofs as opposed to the checking of proofs. And when we're only worried about computability, instead of time complexity, simply iterating through all proofs is viable.

Strilanc
I have little real knowledge of Gödel's theorems, but if I recall correctly it was not as harsh as commonly stated. The "proofs" the theorem talks about do not encompass all the ways to prove a given statement in an axiomatic system.

No, it includes all possible proofs. To avoid undecidability you either need to stick to very trivial systems that don't allow a Turing machine to be encoded, or you need to keep introducing new ad hoc axioms magically whenever you encounter the next undecidable proposition.

Strilanc
Scott Aaronson, a well-known computational complexity theorist and blogger, has written a post about the paper:

it’s now my professional duty [...] to end the post by shooing you away from two tempting misinterpretations of the Cubitt et al. result [...]

[...] the result does not say—or even suggest—that there’s any real, finite physical system whose behavior is Gödel- or Turing-undecidable. Thus, it gives no support to speculations like Roger Penrose’s, about “hypercomputing” that would exceed the capabilities of Turing machines. The reason, again, is that as soon as you fix a lattice size L, everything becomes computable. The Cubitt et al. result applies only to questions about the limiting behavior, as the number of particles goes to infinity. But we already knew lots of examples of physical systems for which predicting their behavior in some infinite limit is at least as hard as the halting problem: for instance, the Wang tiles discussed earlier, or Post rewrite systems, or even Turing machines themselves. Local Hamiltonians are a profound, nontrivial addition to that list—one that will be particularly striking to physicists, many of whom calculate the spectral gaps of at least 50 Hamiltonians between dinner and dessert. But in some sense, there was no a-priori reason why a problem this general, about physical systems of unbounded size, ought to have been computable.

Demystifier
It's worth noting that even though the halting problem is provably undecidable, it is still possible in principle to construct pseudo halting oracles which examine computer programs that do not exceed some fixed size. Essentially they examine programs with an upper memory limit, of which there are only finitely many, and in principle they can examine all such programs, keeping track of their states as they go and flagging any that return to a previous state as non-halting (a deterministic machine that revisits a state must loop forever). This leads to the concepts of busy-beaver functions and non-computability, which are a great way to spend an afternoon.
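The bounded-memory pseudo-oracle just described can be sketched in a few lines of Python (the interface -- a `step` function mapping a state to the next state, with `None` meaning "halted" -- is an assumption of this sketch): since a deterministic machine with finitely many states must either halt or revisit a state, cycle detection gives a definite answer.

```python
def bounded_halts(step, initial_state):
    """Decide halting for a deterministic machine with finitely many
    states.  `step(state)` returns the next state, or None once the
    machine halts.  A revisited state proves an infinite loop, so this
    always terminates with a definite answer."""
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:
            return False  # state revisited: guaranteed infinite loop
        seen.add(state)
        state = step(state)
    return True           # reached the halting condition

# A countdown that halts at 0, and a machine stuck in a 2-cycle:
def countdown_step(s):
    return None if s == 0 else s - 1

def flipflop_step(s):
    return 1 - s  # 0 -> 1 -> 0 -> ... forever
```

For genuinely unbounded machines no finite `seen` set can cover the state space, which is exactly where busy-beaver growth and non-computability take over.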

No, it includes all possible proofs. To avoid undecidability you either need to stick to very trivial systems that don't allow a Turing machine to be encoded, or you need to keep introducing new ad hoc axioms magically whenever you encounter the next undecidable proposition.

I mean, yes, it covers all possible proofs working within the theory, but that is not the only way to see whether a proposition is true or false, or at least that is what I remember about the issue.

mfb
Mentor
This leads to the concept of busy-beaver functions and non-computability, which are a great way to spend an afternoon.
But can you be sure that you stop after an afternoon?
scnr.

We don't have to assume mathematicians are physical objects and that physical objects are computable: mathematicians produce proofs of finite length, something a conventional algorithm can do as well. So while mathematicians might be able to solve more problems than Mathematica 100, there are always cases where they won't work.

Demystifier
I mean, yes, it covers all possible proofs working within the theory, but that is not the only way to see whether a proposition is true or false, or at least that is what I remember about the issue.
Yes this is more or less what I had in mind. There's just something that is off with that argument for me, but I don't have enough knowledge on the topic to see if it's just my blunder or not.

I'm trying to follow this thread and read the paper. But what is the fundamental relevance of critical quantum systems? Low-temperature superconductors and phonons in crystals aside, are critical quantum systems considered possible descriptions of how an LQG spin foam becomes a fundamental particle, or of how a collection of some of those particles becomes a nucleus, or am I totally misinterpreting that? It seems like the condensed matter angle is reckoning with core problems, but I'm not sure I see the connection to the general case.

Also, if there was no spectral gap... or there weren't always some gaps somewhere wouldn't everything just be one big crystal or wave by now?

Demystifier
Gold Member
The rest of your answer is good, but this part is a bit misleading. There are two important clarifications to make:

1. The combination of two algorithms is also an algorithm, and so must fail to solve some cases. You can't put together a team of algorithms that cover all the holes; there's always more holes.

2. A creative mathematician is also subject to the obstacles of undecidability (... depending on your philosophy). For example, they have to ground their proof of gapped-vs-gapless in an axiomatic system such as ZFC, but "iterate over all proofs and stop when you've shown X or not-X in ZFC" is an algorithm that would find such a proof. Because the problem is undecidable, that algorithm must fail for some cases; there may simply be no proof either way. So the creative mathematician would also fail, unless they're willing and able to introduce more and more new axioms. (Secondarily, given the presumption that physics is computable [i.e. the Church-Turing thesis] and that mathematicians are embedded in physics, mathematicians must fail on some instances of uncomputable problems.)
These are very important and subtle issues that need to be discussed carefully. So let me explain why I still disagree, but feel free to criticize me if I have missed something.

1. A combination of two algorithms is not an algorithm, if it is a mathematician (not a machine) that combines them. A creative mathematician may use his intuition (not an algorithm) to decide which algorithm to use when.

2. Undecidability does not always mean that you must add an infinite number of new axioms to make the problem "decidable".
a) For instance, the continuum problem is undecidable in ZFC, but you can make it decidable by adding only one new axiom. (The problem is that mathematicians cannot agree on which new axiom looks sufficiently obvious.)
b) An even more creative idea is to remove some standard axioms and replace them with new ones. In this way Tarski showed that it is possible to give new axioms for the real numbers (not containing the Peano axioms among them) without leading to a Gödel-like theorem.

Concerning the spectral gap problem, there is an important issue which I think was not addressed in the Nature paper. Given an algorithm, they have shown that it will work for some cases and fail for others. What they did not say is the following: which set is larger, the set of cases in which it fails, or the set of cases in which it does not? Does one of the sets perhaps have measure zero? Which one? Answering these questions would be very important for evaluating the practical implications of their theorem.

Strilanc
These are very important and subtle issues that need to be discussed carefully. So let me explain why I still disagree, but feel free to criticize me if I have missed something.

1. A combination of two algorithms is not an algorithm if it is a mathematician (not a machine) that combines them. A creative mathematician may use his intuition (not an algorithm) to decide which algorithm to use when.

2. Undecidability does not always mean that you must add an infinite number of new axioms to make the problem "decidable".
a) For instance, the continuum problem is undecidable in ZFC, but you can make it decidable by adding only one new axiom.
b) An even more creative idea is to remove some standard axioms and replace them with new ones. In this way Tarski showed that it is possible to give new axioms for the real numbers (not containing the Peano axioms among them) without leading to a Gödel-like theorem.

For (1): Strictly speaking, a combination of two algorithms is guaranteed to be an algorithm. An algorithm is a sequence of steps that terminate after a finite time with a result. You can merge any finite number of algorithms into a single algorithm by dove-tailing through them. Even an infinite set of algorithms can be merged into a single algorithm by dove-tailing, as long as the set is recursively enumerable and we're guaranteed that at least one of the algorithms in the set will give an answer that allows us to stop.
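A minimal Python sketch of the dove-tailing just described (the conventions here -- an algorithm is a generator that yields `None` while still working and a non-`None` value as its answer -- are made up for illustration):

```python
def dovetail(algorithms):
    """Merge several (possibly non-terminating) algorithms into one by
    round-robin interleaving: run one step of each in turn and return
    the first answer any of them produces."""
    active = list(algorithms)
    while active:
        for algorithm in list(active):
            try:
                result = next(algorithm)
            except StopIteration:
                active.remove(algorithm)  # this one gave up without answering
                continue
            if result is not None:
                return result             # first answer wins
    return None  # every algorithm gave up without answering

def never_answers():
    while True:
        yield None  # grinds forever without producing an answer

def answers_after(n, value):
    for _ in range(n):
        yield None  # still thinking...
    yield value
```

The merged procedure is itself an algorithm, which is the point: teaming up algorithms can't escape undecidability, because the team is just one more algorithm with its own holes.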

Note that, in the case of deciding propositions, we can simply iterate through all possible proofs instead of through some complicated set of algorithms. To decide X we consider each possible proof in turn, check that its steps are valid, that it only uses the allowed axioms as preconditions, and that it concludes with X or with not-X. This works as long as X is not independent of the axioms. If there's a proof, this process finds it. If there is no proof, no process finds it; including mathematicians.
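The brute-force proof search can be sketched as a semi-decision procedure. Everything below is a toy: `toy_proofs` enumerates a finite, made-up "axiom system" in which a proof of a claim is just the claim itself, and `toy_check` is the mechanical verification step. In a real system the enumeration is infinite, so the search runs forever exactly when the statement is independent of the axioms.

```python
def decide_by_proof_search(statement, enumerate_proofs, check):
    """Iterate through all candidate proofs in order; `check(proof, claim)`
    mechanically verifies whether `proof` establishes `claim`.  Returns
    True/False when a proof of the statement or its negation exists;
    in a real, infinite enumeration it loops forever when neither does."""
    for proof in enumerate_proofs():
        if check(proof, statement):
            return True
        if check(proof, ("not", statement)):
            return False

# Toy "axiom system": finitely many provable claims, proofs are trivial.
FACTS = {("even", 4), ("not", ("even", 3))}

def toy_proofs():
    yield from FACTS  # in a real system: all proof strings, shortest first

def toy_check(proof, claim):
    return proof == claim and claim in FACTS
```

Iterating through proofs like this is why "a proof exists" already implies "an algorithm finds it": the creative step only saves time, not computability.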

It might help if you give a concrete example of a mathematician doing something that a computational process can't. Intuition is certainly helpful in practice, but when it comes to raw computability/decidability you can always just substitute brute force. There's no need to save time by intuitively skipping to the correct answer when the rules effectively let you check all possible answers without penalty as long as one of them works. (And if none of them works, intuition was clearly also going to fail.)

For (2): I meant that to make every statement decidable you'd have to add more and more axioms, because there's always more holes. But I agree that removing/replacing axioms, and limiting your algorithm's input space to a decidable theory such as Presburger arithmetic, sidesteps the problem.

Demystifier
Gold Member
It might help if you give a concrete example of a mathematician doing something that a computational process can't.
1. Consider any consistent axiomatic system in which a statement of the form
"This statement cannot be proved within the system"
can be stated. A mathematician can see that this statement is true, yet a computational process (within the system) cannot prove it.

2. Constructing a useful axiomatic system (e.g. the Peano axioms) itself. A mathematician can do it; an algorithmic computational process can't.

Strilanc
1. Consider any consistent axiomatic system in which a statement of the form
"This statement cannot be proved within the system"
can be stated. A mathematician can see that this statement is true, yet a computational process (within the system) cannot prove it.

2. Constructing a useful axiomatic system (e.g. the Peano axioms) itself. A mathematician can do it; an algorithmic computational process can't.

For (1), why are we limiting the computational process to work within the system but not the mathematician? That's like saying a mathematician can prove something for all natural numbers, but a computational process not allowed to use induction couldn't.

For (2), you need to expand on "useful". How does it relate to decidability and the problem in question?

Demystifier
Gold Member
For (1), why are we limiting the computational process to work within the system but not the mathematician?
Because a computational process must work within the system by definition. On the other hand, we do not have such a precise definition of a mathematician.

That's like saying a mathematician can prove something for all natural numbers, but a computational process not allowed to use induction couldn't.
Not to prove it, but to see that it is true. Mathematicians don't prove the Gödel sentence, yet they see that it is true. (Perhaps it's not a fair analogy, but religious people, including some mathematicians, don't prove that God exists, yet they see that he does.)

Another example: mathematicians don't prove that the Peano axioms are consistent (Gödel's second theorem shows that such a proof is impossible within the system), yet they are convinced that the Peano axioms are consistent. They "see it" without a proof.

An even more interesting example: mathematicians don't prove the axiom of choice, yet many of them (not all) see that it is true. They see it so vividly that even the Banach-Tarski paradox cannot shake their vision of its truth.

For (2) you need expand "useful". How does it relate to decidability and the problem in question?
My point is that a mathematician (like any other human) can see that something is "obviously true", even though it is impossible to formally prove it. If such an "obviously true" statement can be put in a sufficiently precise form, then mathematicians like to call it an "axiom".

The crucial question, which I have not tried to answer, is this: if, without a proof, all mathematicians see that something is true, does that mean it is indeed true? That's a tough question for the philosophy of mathematics.
