So we had a thread about the FQXi essay contest a couple weeks back, and when I first saw the winners list this one jumped out at me, both because the inclusion of the word "undecidability" indicated the paper might actually touch on matters (i.e. formal logic) I feel qualified to comment on; and also because I am instinctively filled with uncontrollable rage whenever I see "free will" appear in the same sentence as the word "undecidability". I decided to give the paper a look and write it up for the thread, but since it took me a while to get around to this I'm just posting it in its own thread now.

The paper turns out to be about "free will" and "undecidability" only in a little bit at the end, and mostly about the question of how to meaningfully measure time in a quantum system. Here's what I got out of it:

The paper starts by reviewing one of the old problems with reconciling General Relativity with quantum physics: General Relativity has no absolute time, no universal clocks, only relative distances between events in spacetime; but evolution in quantum physics is formulated specifically in terms of an absolute time variable, and if you try to reformulate the theory relationally you lose the ability to compute things. They focus on a specific 1983 proposal for relativizing quantum physics, which they call "Page–Wootters" (this sounds to me like a very respectable sports bar), where instead of thinking in terms of a universal t you pick some specific physical quantity which you define as your "clock"; you then formulate all other observables in terms of "how does this variable evolve as a function of the clock variable?". Put more precisely, you calculate the conditional probability of your observable having value A given the clock variable having value B. They explain that this proposal fell apart because in a GR world there do not turn out to be any measurable quantities which could suitably serve as the "clock".
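The Page–Wootters idea is concrete enough to toy with directly. Here's a minimal sketch (a toy model of my own devising, not from the paper): a "clock" with N ticks entangled with a single qubit in one big static "history state"; conditioning on the clock reading recovers ordinary Schrodinger evolution, with no universal t anywhere.

```python
import math

# Toy Page-Wootters "history state": a clock with N ticks entangled with a
# single qubit.  The global state is static; dynamics is recovered by
# conditioning on the clock reading.  (Illustrative sketch only; N, theta,
# and the 2-level system are my own choices.)

N = 8                      # number of clock ticks
theta = math.pi / 7        # rotation per tick (the toy unitary U)

def U_pow_t(t, psi0=(1.0, 0.0)):
    """Apply the toy unitary (a real rotation) t times to the qubit."""
    a, b = psi0
    c, s = math.cos(t * theta), math.sin(t * theta)
    return (c * a - s * b, s * a + c * b)

# Amplitudes of the timeless history state |Psi> = (1/sqrt(N)) sum_t |t>|psi(t)>
history = [tuple(x / math.sqrt(N) for x in U_pow_t(t)) for t in range(N)]

def conditional_prob(a, t):
    """P(qubit = a | clock reads t), computed from the static history state."""
    norm = sum(abs(x) ** 2 for x in history[t])
    return abs(history[t][a]) ** 2 / norm

# Conditioning on the clock reproduces ordinary Schrodinger evolution:
for t in range(N):
    schrodinger = abs(U_pow_t(t)[0]) ** 2           # |<0|U^t|psi0>|^2
    assert abs(conditional_prob(0, t) - schrodinger) < 1e-12
```

The point of the exercise: nothing in the history state "evolves" at all; the appearance of evolution is entirely in the conditional probabilities.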
They then claim to have now solved this problem by applying ideas from a 1991 Carlo Rovelli paper; apparently in this paper Rovelli introduced an idea of "evolving constants", which Gambini et al describe as a sort of artificial observable meant to behave like a time "parameter". What Gambini et al claim is to have found a way to set up calculations such that you start out defining events relative to Rovelli's artificial "evolving constants" quantities; but then in the end the "evolving constants" cancel out entirely, and you're left with only the conditional probabilities of one-event-happening-given-another-event that Page–Wootters was meant to have provided in the first place. They work out the technical details of this in a separate paper, and claim to have yet another paper in which they use principles like this to formulate practical quantum notions of the "clocks and rods" that GR depends on so heavily. Well, okay. Once they start calculating the dynamics of some quantum system relative to these quantum clocks and rods, various unusual things happen. For example, in normal quantum physics, non-unitary changes-- in other words, information loss-- occur only when a measurement is performed. But relative to their Wootters-ish clocks and rods, exactly unitary evolution no longer occurs at all, and a small amount of information is lost continuously. They seem to be suggesting that this can be viewed as analogous to the clock mechanism undergoing quantum decoherence, which (if I'm understanding them correctly) from the perspective of the clock mechanism looks like the rest of the universe losing information. This bit-- the idea of using progressive information inaccessibility to model quantum evolution in a way that "looks" nonunitary-- was extremely interesting to me, but unfortunately they don't dwell on it.
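Here's a crude sketch of (what I understand to be) the effect, in a toy model of my own devising: if the "time" you condition on is read off a physical clock with some spread, the evolution you compute is an average of unitary evolutions over that spread, and the averaging continuously eats the off-diagonal (coherence) term of the density matrix-- information loss with no measurement anywhere.

```python
import math, cmath

# A qubit in the state (|0>+|1>)/sqrt(2) picks up a relative phase
# exp(i*omega*t); its off-diagonal density-matrix element is
# rho01(t) = 0.5*exp(i*omega*t).  If the clock reading has a Gaussian
# spread, the observed rho01 is the t-average, which is damped.
# (omega, sigma, and the Gaussian spread are my own toy assumptions.)

omega = 1.0   # energy splitting of the qubit (phase speed)

def rho01(t):
    """Off-diagonal element of (|0>+|1>)/sqrt(2) after sharp time t."""
    return 0.5 * cmath.exp(1j * omega * t)

def rho01_smeared(T, sigma, n=2000):
    """Average rho01 over a Gaussian spread of clock readings around T."""
    total, weight = 0.0 + 0.0j, 0.0
    for k in range(n):
        t = T + 6 * sigma * (k / (n - 1) - 0.5)      # grid over +-3 sigma
        w = math.exp(-((t - T) ** 2) / (2 * sigma ** 2))
        total += w * rho01(t)
        weight += w
    return total / weight

# A sharp clock (sigma ~ 0) keeps full coherence |rho01| = 1/2;
# a fuzzy clock loses it, with no measurement performed anywhere:
coh_sharp = abs(rho01_smeared(5.0, 0.01))
coh_fuzzy = abs(rho01_smeared(5.0, 2.0))
assert coh_sharp > 0.49 and coh_fuzzy < 0.1
```

(Analytically the damping factor is exp(-(omega*sigma)^2/2), which is why the loss is gradual and quantifiable rather than an abrupt collapse.)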
Instead at this point the paper shifts gears, and they start talking about what their Wootters-ish construction teaches us about the philosophy-of-science issues behind decoherence. Because I am not entirely sure I understand decoherence, I am not sure I entirely understand this part of the paper either. Let's stop for a moment and see if I can get this right: As I understand it, "Decoherence" is an interpretation of quantum mechanics (or a feature certain interpretations of quantum mechanics adopt) where "wavefunction collapse" is not a fundamental operation but an effective phenomenon that emerges when unitary systems become deeply entangled with each other very quickly. As Roger Penrose puts it in "The Road to Reality", traditional quantum physics looks at the world as having two operations, a "U" operation ("Unitary evolution", reversible) and what he calls the "R" operation ("Reduction", irreversible)-- I'll call it the "D" operation ("Decohere") here; when we choose an interpretation of quantum mechanics one of the things we're picking is what we choose to interpret the "D" operation as meaning (the wavefunction collapses, the universe splits, the pilot wave alters shape). If instead however we decide to take decoherence seriously, the distinction between the U and D operations goes away completely; instead the "D operation" is just a specific bunch of U operations strung together, such that the results can present the illusion of something like the "D operation" having occurred. So, getting back to the paper, Gambini et al claim that the decoherence picture makes a lot more sense when you look at it in combination with their Wootters-ish construction. Specifically they bring up what they say are two traditional major objections to the idea that decoherence is sufficient to explain the measurement problem, and argue that both of these objections can be circumvented using their construction.
The first of these objections against decoherence is that if you look at the "D operation" as being constructed out of U operations, then the "D operation" is in fact reversible-- because it's just a chain of [reversible] U operations. This is bad because, near as we can gather from looking at the real world, quantum measurement really does do something irreversible, something where information is lost in an irrecoverable way. This makes it seem like decoherence isn't the mysterious "D operation" after all. Gambini et al however point out that when you apply their Woottersy analysis, you can show that decoherence as an operation does in fact lose information, and so is in fact irreversible and free of the risk of "revivals", relative to any given measuring device. In other words, they seem to have found a way to model quantum physics where the unitary picture that's supposed to underlie quantum physics is everywhere preserved, but any given experiment will produce results as if state reduction occurs when a measurement is performed-- and all of this happens in a quantifiable way. That actually sounds really good-- if it actually works, it sounds like exactly what one would need to do in order to say one has solved the measurement problem. Depending, of course, on what exactly you consider "the measurement problem" to mean. This leads to the second objection against decoherence the paper tries to rebut, which has to do with the idea that a "measurement problem" solution should explain how it is we go from a quantum superposition of states to one single state.
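To see why "revivals" are a real worry for purely unitary decoherence-- and how a fuzzy clock can kill them-- here's a toy dephasing model (the model and all couplings are my own choices, not taken from the paper): a qubit coupled to a finite environment has a coherence factor that is periodic, so the lost coherence eventually comes all the way back; averaging over a clock with Gaussian spread washes the revival out.

```python
import math

# Qubit dephased by K environment modes: coherence factor
# r(t) = prod_k cos(k*g*t), which fully revives at t = 2*pi/g.
# Reading time off a clock with Gaussian spread sigma averages
# over the revival peak and suppresses it.  (Toy parameters mine.)

K = 6          # number of environment modes
g = 1.0        # base coupling; mode k oscillates at frequency k*g

def coherence(t):
    """Signed coherence factor of the qubit at sharp time t."""
    r = 1.0
    for k in range(1, K + 1):
        r *= math.cos(k * g * t)
    return r

def smeared_coherence(t, sigma, n=801):
    """|coherence| as seen through a clock with Gaussian spread sigma."""
    total, weight = 0.0, 0.0
    for i in range(n):
        dt = 6 * sigma * (i / (n - 1) - 0.5)          # grid over +-3 sigma
        w = math.exp(-dt * dt / (2 * sigma * sigma))
        total += w * coherence(t + dt)
        weight += w
    return abs(total / weight)

t_rev = 2 * math.pi / g
assert abs(coherence(t_rev)) > 0.999        # unitary picture: full revival
assert smeared_coherence(t_rev, 1.0) < 0.3  # fuzzy clock: revival washed out
```

This is only a cartoon of the "relative to any given measuring device" claim, but it shows the shape of the argument: the revival is still there in the exact unitary description, yet no clock-limited observer can ever see it.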
Decoherence analyses, again, tend to solve this by saying we don't go to one single state, we just enter a more complicated entanglement picture: whereas the Copenhagen interpretation would have the classical measuring apparatus imposing classicalness on a quantum system, the decoherence picture has the opposite happening, with a quantum system infecting an initially-classical measuring apparatus with quantumness. After this happens, the measuring apparatus is itself in a superposition of states-- such that each of those superimposed states individually sees the world as if the measured system were in a single state, but from the perspective of the ensemble the superposition never goes away. "Not good enough!" goes the objection. Getting rid of superposition is the entire point! At this point the paper gets a bit more complicated and undergoes yet another gearshift, and here they start to lose me: this is where they get to the "undecidability" promised in the title. Basically they reiterate that their Woottersy construction describes a picture of the world where relatively speaking, on a small scale, systems are collapsing to single classical states and information is being lost; but mathematically, on the large scale, everything remains static and reversible and superimposed. But then they point out that from within the universe, you could never tell which of these two pictures, the small scale one or the large scale one, is the true one-- that is, it would be in principle impossible for you to experimentally determine whether you're in a universe where reversible operations are stacking in a way that presents the local illusion of information loss, or in a universe where it's actually just objectively the case that irreversible operations and information loss are occurring. They say it is "undecidable" which of these two things is happening. "Undecidability" is a word from mathematical logic and I'm not totally sure if I recognize the sense in which they use it here.
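The entanglement story above is easy to make concrete. A minimal sketch, with amplitudes of my own choosing: a qubit in superposition entangles with an apparatus; the global state stays a pure superposition, yet the qubit's reduced density matrix comes out exactly diagonal, as if a definite outcome had occurred.

```python
# A qubit (a|0> + b|1>) couples to an apparatus that starts out "ready":
# the joint state becomes a|0>|A0> + b|1>|A1>.  Tracing out the apparatus
# leaves the qubit with a diagonal density matrix -- no interference --
# even though nothing non-unitary happened.  (Toy amplitudes mine.)

a, b = 0.6, 0.8                    # qubit amplitudes, a^2 + b^2 = 1

# Amplitudes of the post-measurement state in the |system, apparatus> basis:
state = {(0, 0): a, (1, 1): b}     # a on |0,A0>, b on |1,A1>

# Reduced density matrix of the system: trace out the apparatus index.
rho = [[0.0, 0.0], [0.0, 0.0]]
for (s1, ap1), c1 in state.items():
    for (s2, ap2), c2 in state.items():
        if ap1 == ap2:             # apparatus indices must match in the trace
            rho[s1][s2] += c1 * c2

# Globally the state is still one pure superposition, but locally the
# off-diagonal (interference) terms are exactly zero:
assert rho[0][1] == 0.0 and rho[1][0] == 0.0
assert abs(rho[0][0] - a * a) < 1e-12 and abs(rho[1][1] - b * b) < 1e-12
```

Which is precisely the objection's point: the diagonal reduced matrix looks like a classical coin flip from inside, but the ensemble-level superposition never went anywhere.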
In mathematical logic we say a statement is "undecidable" in a particular logical system if there is no possible way to demonstrate the statement is true or false by following the consequences of the logical system. An equivalent idea to undecidability is "independence"-- we say a statement is "independent" of a logical system if neither the statement nor its negation can be derived from the system's axioms, so that you could adopt either one without introducing any inconsistency into the system. This is the same as saying the statement is not decidable by the system. (The classic example is the parallel postulate, which is independent of the other axioms of Euclidean geometry.) Gambini and Pullin are in this same sense saying that the behavior of the world is "independent" of the ultimate truth about whether quantum state reduction is an objective thing or an illusion; i.e. it is "undecidable" whether when two systems interact they both go into a single classical state (as the Copenhagen interpretation says) or both go into a superposition (as the decoherence picture says). Okay, I think I agree with that. But then they do something squirrelly. They seem to be suggesting that because either of these two things could possibly be happening, it's possible both could be happening-- that every time two systems interact, the universe gets to make a choice as to whether it's going to superimpose everything or collapse everything, and maybe it just freely toggles between the two. Why on earth would it do this? Their observation about the undecidability of the ultimate truth of the "D operation" looks to me like a fairly convincing argument that the ultimate truth of the D operation doesn't matter, and maybe we should find something more interesting to argue about.
But instead they focus on the idea that because quantum systems might be randomly toggling back and forth between superimpose and don't-superimpose without our ever being able to notice the difference-- "this freedom in the system is not even ruled by a law of probabilities for the possible outcomes"-- something terribly interesting must be happening in whatever mechanism is [might be?] deciding how the toggling occurs. They say "the availability of this choice opens the possibility of the existence of free acts" and say this has bearing on the old argument about whether determinism in physical law precludes free will in humans, as if somehow humans got to have influence over the toggling and this is what "free will" means. I can't take this suggestion seriously. Even if we get past the question of by what conceivable mechanism the human brain could be influencing the outcome of this outside-the-accessible-universe decision, they're basically suggesting that "free will" comprises a set of decisions which-- as they have just specifically proven-- have literally no bearing whatsoever on anything that happens in the universe. That sounds like a really crappy sort of free will to have, as if Wal-Mart took over the world but then gave you a free choice of what color jumpsuit to wear in their industrial prisons. So they spend a good bit of space on this whole choice/will idea, but when they finally get around to explaining what this undecidability stuff has to do with the objection they originally raised it to address-- is decoherence really that useful as a way of explaining away the messiness of superposition if even after it happens all we have is an even messier, even more superimposed system?-- it turns out not to have a lot to do with the free will stuff at all.
Instead they simply suggest that people might not mind so much that an "event" in a universe described by their Woottersy construction doesn't remove superposition from the system, so long as there were at least a specific, definable way in which the universe were different before and after the "event" occurs. They suggest you can provide this by defining the "event" as occurring at the exact moment it becomes undecidable whether information loss has occurred or not. That sounds a lot more reasonable than the free will bit-- it's at least scientific-- but is "becoming undecidable" a quantifiable thing, something where you can identify the specific moment it happens when theoretically simulating the system? They don't give enough information for me to feel like I can answer that question. Anyway, my objections about the ending aside, overall this paper was very neat. Their whole argument at the end about free will and hypothetical coinflips outside the observable universe seems like an unnecessary distraction from the much more interesting Page–Wootters-2.0 construction they describe in the first part of the paper, but it's easy to isolate and ignore that part of the argument if one wants. And anyway, I guess it would not be an FQXi paper if it didn't veer off into philosophy somewhere. I'd like to hear more about their method of quantifying the progression of decoherence and relative information loss, and I'd be curious whether anyone has heard anything about further work or knows whether they've been able to get any useful calculations out of their construction.
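As a postscript on whether "becoming undecidable" is quantifiable: one way you could operationalize it (my own guess at a scheme, not necessarily the authors' criterion) is to track the gap between the two rival predictions-- coherence survives (unitary picture) versus coherence is objectively gone (collapse picture)-- and declare the "event" at the first moment the gap falls below the best resolution epsilon any conceivable apparatus could achieve.

```python
import math

# Toy model: a qubit's off-diagonal term, seen through a clock whose
# uncertainty spreads as sigma*sqrt(t), is damped toward zero; the
# collapse picture predicts exactly zero.  The "event" happens when the
# two predictions differ by less than the instrument resolution eps.
# (omega, sigma, and the sqrt(t) spreading law are my own assumptions.)

omega, sigma = 1.0, 0.8

def distinguishability(t):
    """Gap between the unitary and collapse predictions at time t."""
    sigma_t = sigma * math.sqrt(t)
    return 0.5 * math.exp(-((omega * sigma_t) ** 2) / 2)

def event_time(eps, dt=0.01):
    """First time the two pictures differ by less than eps: undecidable."""
    t = 0.0
    while distinguishability(t) >= eps:
        t += dt
    return t

# The better your instruments (smaller eps), the later the "event"-- but
# for any finite eps the event happens at a definite, computable moment.
assert event_time(1e-2) < event_time(1e-4)
```

So at least in a cartoon like this, "becoming undecidable" is a perfectly definite moment you can compute in a simulation-- it just depends on an extra parameter (the resolution bound) that the universe would have to supply.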