Gambini & Pullin on measuring time in quantum physics

  • #1
Coin
So we had a thread about the FQXi essay contest a couple weeks back, and when I first saw the winners list this one...

...jumped out at me, both because the inclusion of the word "undecidability" indicated the paper might actually touch on matters (i.e. formal logic) I feel qualified to comment on; and also because I am instinctively filled with uncontrollable rage whenever I see "free will" appear in the same sentence as the word "undecidability". I decided to give the paper a look and write it up for the thread, but since it took me a while to get around to this I'm just posting it in its own thread now. The paper turns out to be about "free will" and "undecidability" only in a little bit at the end, and mostly about the question of how to meaningfully measure time in a quantum system. Here's what I got out of it:

The paper starts by reviewing one of the old problems with reconciling General Relativity with quantum physics: General Relativity has no absolute time, no universal clocks, only relative distances between events in spacetime; but evolution in quantum physics is formulated specifically in terms of an absolute time variable, and if you try to reformulate the theory relationally you lose the ability to compute things.

They focus on a specific 1983 proposal for relativizing quantum physics, which they call "Page–Wootters" (this sounds to me like a very respectable sports bar), where instead of thinking in terms of a universal t you pick some specific physical quantity which you define as your "clock"; you then formulate all other observables in terms of "how does this variable evolve as a function of the clock variable?". Put more accurately, you calculate the conditional probabilities of your observable having value A given the clock variable having value B. They explain that this proposal fell apart because in a GR world there do not turn out to be any measurable things which could suitably serve as the "clock quantity". They then claim to have now solved this problem by applying ideas from a 1991 Carlo Rovelli paper; apparently in this paper Rovelli introduced an idea of "evolving constants", which Gambini et al describe as a sort of artificial observable meant to behave like a time "parameter". What Gambini et al claim to have done is find a way to set up calculations such that you start out defining events relative to Rovelli's artificial "evolving constants" quantities; but then in the end the "evolving constants" cancel out entirely, and you're left with only the conditional probabilities of one-event-happening-given-another-event that Page–Wootters was meant to have provided in the first place. They work out the technical details of this in a separate paper, and claim to have yet another paper in which they use principles like this to formulate practical quantum notions of the "clocks and rods" that GR depends on so heavily. Well, okay.
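(To make the conditional-probability idea concrete for myself, here's a tiny toy sketch -- my own illustration, not anything taken from the paper: a finite clock register entangled with a qubit, where conditioning on the clock reading recovers the ordinary Schrodinger evolution as a function of "what the clock says". The "history state" construction and all the numbers here are just my simplification of the Page–Wootters setup.)

```python
import numpy as np

# Toy Page-Wootters "history state": a finite clock register entangled with a
# qubit.  My own illustration of the conditional-probability idea, not the
# construction in the Gambini-Pullin papers; all parameters are made up.

N = 8                                    # number of clock "ticks"
H = np.array([[0.0, 1.0], [1.0, 0.0]])   # toy system Hamiltonian (Pauli-X)
dt = 0.3
U = np.cos(dt) * np.eye(2) - 1j * np.sin(dt) * H   # exp(-i H dt), since H^2 = I

psi0 = np.array([1.0, 0.0], dtype=complex)         # system starts in |0>

# Global "timeless" state  |Psi> = sum_t |t>_clock (x) U^t |psi0>_system
Psi = np.zeros((N, 2), dtype=complex)
state = psi0.copy()
for t in range(N):
    Psi[t] = state
    state = U @ state
Psi /= np.linalg.norm(Psi)

# Conditional probability of the system being in |1> GIVEN the clock reads t:
#   P(1 | t) = |<t,1|Psi>|^2 / sum_a |<t,a|Psi>|^2
for t in range(N):
    branch = Psi[t]
    p1 = abs(branch[1]) ** 2 / np.sum(np.abs(branch) ** 2)
    print(f"clock reads {t}:  P(system = 1 | clock = t) = {p1:.3f}")
```

Conditioning on successive clock readings traces out the familiar oscillation, which is the point: "evolution" re-enters only as a correlation between the clock register and the system.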

Once they start calculating the dynamics of some quantum system relative to these quantum clocks and rods, various unusual things happen. For example, in normal quantum physics, non-unitary changes-- in other words, information loss-- occur only when a measurement is performed. But relative to their Wootters-ish clocks and rods, unitary evolution no longer occurs at all and information loss happens continuously. They seem to be suggesting that this can be viewed as analogous to the clock mechanism undergoing quantum decoherence, which (if I'm understanding them correctly) from the perspective of the clock mechanism looks like the rest of the universe losing information. This bit-- the idea of using progressive information inaccessibility to model quantum evolution in a way that "looks" nonunitary-- was extremely interesting to me, but unfortunately they don't dwell on it.
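(If I'm reading them right, the flavor of the effect can be mocked up with a toy calculation like the following -- mine, not theirs, and with a made-up spread function: if the physical clock's reading carries an uncertainty that grows with elapsed time, then the state you assign at a given clock reading is an average over nearby "ideal" times, and the off-diagonal coherence terms get progressively washed out even though every ingredient in the average evolves unitarily.)

```python
import numpy as np

# My reading of the "realistic clock" effect, as a toy calculation (not their
# formalism): average the ideally-evolved state over a spread of ideal times
# around the clock reading T, and watch the coherence term decay.

omega = 1.0                                   # level spacing of a toy qubit
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho0 = np.outer(plus, plus.conj())            # start in a coherent superposition

def rho_ideal(t):
    """Ordinary unitary evolution under H = diag(0, omega)."""
    U = np.diag([1.0, np.exp(-1j * omega * t)])
    return U @ rho0 @ U.conj().T

def sigma(T, s0=0.3):
    """Illustrative clock spread growing with elapsed time.  The papers derive
    a specific power law from gravitational limits; this is just a stand-in."""
    return s0 * (1.0 + T) ** (1.0 / 3.0)

def rho_realistic(T, n=4001):
    """Average the ideal state over a Gaussian spread of ideal times around T."""
    s = sigma(T)
    ts = np.linspace(T - 5 * s, T + 5 * s, n)
    w = np.exp(-(ts - T) ** 2 / (2 * s ** 2))
    w /= w.sum()
    return sum(wi * rho_ideal(ti) for wi, ti in zip(w, ts))

for T in (0.0, 5.0, 20.0, 80.0):
    coh = abs(rho_realistic(T)[0, 1])
    print(f"clock reads {T:5.1f}:  |off-diagonal| = {coh:.3f}  (0.5 if evolution stayed unitary)")
```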

Instead at this point the paper shifts gears, and they start talking about what their Wootters-ish construction teaches us about the philosophy-of-science issues behind decoherence. Because I am not entirely sure I understand decoherence, I am not sure I entirely understand this part of the paper either. Let's stop for a moment and see if I can get this right: As I understand, "Decoherence" is an interpretation of quantum mechanics (or a feature certain interpretations of quantum mechanics adopt) where "wavefunction collapse" is an actual physical phenomenon that emerges when unitary systems become deeply entangled with each other very quickly. As Roger Penrose puts it in "Road to Reality", traditional quantum physics looks at the world as having two operations, a "U" operation ("Unitary", reversible) and a "D" operation ("Decohere", irreversible); when we choose an interpretation of quantum mechanics one of the things we're picking is what we choose to interpret the "D" operation as meaning (the wavefunction collapses, the universe splits, the pilot wave alters shape). If instead however we decide to take decoherence seriously, the distinction between U and D operation goes away completely; instead the "D operation" is just a specific bunch of U operations strung together, such that the results can present the illusion of something like the "D operation" having occurred.

So, getting back to the paper, Gambini et al claim that the decoherence picture makes a lot more sense when you look at it in combination with their Wootters-ish construction. Specifically they bring up what they say are two traditional major objections to the idea that decoherence is sufficient to explain the measurement problem, and argue that both of these objections can be circumvented using their construction.

The first of these objections against decoherence is that if you look at the "D operation" as being constructed out of U operations, then this means the "D operation" is in fact reversible-- because it's just a chain of [reversible] U operations. This is bad because near as we can gather from looking at the real world quantum measurement really does do something irreversible, something where information is lost in an irrecoverable way. This makes it seem like decoherence isn't the mysterious "D operation" after all. Gambini et al however point out that when you apply their Woottersy analysis, you find that you can show that decoherence as an operation does in fact lose information, and so is in fact irreversible and free of the risk of "revivals", relative to any given measuring device. In other words, they seem to have found a way to model quantum physics where the unitary picture that's supposed to be underlying quantum physics is everywhere preserved, but any given experiment will produce results as if state reduction occurs when a measurement is performed-- and all of this happens in a quantifiable way. That actually sounds really good-- if it actually works, it sounds like exactly what one would need to do in order to say one has solved the measurement problem.
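(Just to convince myself what "revivals" means here, a standard toy dephasing model -- not from their paper -- already shows the pattern: a qubit coupled to N environment spins has its coherence multiplied by a product of cosines, which keeps returning near 1 for small N but essentially never does for large N with generic couplings. G&P's claim, as I read it, is that their gravitational bound on measurement turns "essentially never" into "verifiably never".)

```python
import numpy as np

# Standard toy dephasing model (my illustration, not from the paper): the
# off-diagonal element of a qubit coupled to N environment spins picks up a
# factor prod_k cos(g_k * t).  Small N: near-perfect "revivals" recur.
# Large N with generic couplings: the product essentially never returns to 1.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 200.0, 20001)

def coherence_factor(times, couplings):
    # |prod_k cos(g_k t)| evaluated at each time in `times`
    return np.abs(np.prod(np.cos(np.outer(times, couplings)), axis=1))

for N in (2, 10, 40):
    g = rng.uniform(0.5, 1.5, size=N)          # random environment couplings
    c = coherence_factor(t, g)
    print(f"N = {N:2d} environment spins: max revival after t > 5 is {c[t > 5].max():.4f}")
```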

Depending, of course, on what exactly you consider "the measurement problem" to mean. This leads to the second objection against Decoherence the paper tries to rebut, which has to do with people focusing on the idea that a "measurement problem" solution should explain how it is we go from a quantum superposition of states to one single state. Decoherence analyses, again, tend to solve this by saying we don't go to one single state, we just enter a more complicated entanglement picture: whereas the Copenhagen interpretation would have the classical measuring apparatus imposing classicalness on a quantum system, the decoherence picture has the opposite happening, with a quantum system infecting an initially-classical measuring apparatus with quantumness. After this happens, the measuring apparatus is itself in a superposition of states-- such that each of those superimposed states individually sees the world as if the measured system were in a single state, but from the perspective of the ensemble the superposition never goes away. Not good enough!, goes the objection. Getting rid of superposition is the entire point!

At this point the paper gets a bit more complicated and undergoes yet another gearshift, and here they start to lose me: here they get to the "undecidability" promised in the title. Basically they reiterate that their Wootersy construction describes a picture of the world where relatively speaking, on a small scale, systems are collapsing to single classical states and information is being lost; but mathematically, on the large scale, everything remains static and reversible and superimposed. But then they point out that from within the universe, you could never tell which of these two pictures, the small scale one or the large scale one, is the true one-- that is, it would be in principle impossible for you to experimentally determine whether you're in a universe where reversible operations are stacking in a way that presents the local illusion of information loss, or in a universe where it's actually just objectively the case that irreversible operations and information loss are occurring. They say it is "undecidable" which of these two things are happening.

"Undecidability" is a word from mathematical logic and I'm not totally sure if I recognize the sense in which they use it here. In mathematical logic we say a problem is "undecidable" by a particular logical system if there is no possible way to demonstrate the idea is true or false by following the consequences of the logical system. An equivalent idea to undecidability is "independence"-- we can say a statement is "independent" of a logical system, if the statement could be either true or false without it having any bearing on the validity of the system. This is the same as saying the statement is not decidable by the system. Gambini and Pullin are in this same sense saying that the behavior of the world is "independent" of the ultimate truth about whether quantum state reduction is an objective thing or an illusion; i.e. it is "undecidable" whether when two systems interact they both go into a single classical state (as the Copenhagen interpretation says) or both go into a superposition (as the decoherence picture says). Okay, I think I agree with that.

But then they do something squirrelly. They seem to be suggesting that because either of these two things could possibly be happening, that it's possible both could be happening-- that every time two systems interact, the universe gets to make a choice as to whether it's going to superimpose everything or collapse everything, and maybe it just freely toggles between the two. Why on Earth would it do this? Their observation about the undecidability of the ultimate truth of the "D operation" looks to me like a fairly convincing argument that the ultimate truth of the D operation doesn't matter, and maybe we should find something more interesting to argue about. But instead they focus on this potential idea that because quantum systems might be randomly toggling back and forth between superimpose and don't-superimpose and we'd never be able to notice the difference-- "this freedom in the system is not even ruled by a law of probabilities for the possible outcomes"-- that something terribly interesting must be happening in whatever mechanism is [might be?] deciding how the toggling occurs. They say "the availability of this choice opens the possibility of the existence of free acts" and say this has bearing on the old argument about whether determinism in physical law precludes free will in humans, as if somehow humans got to have influence over the toggling and this is what "free will" means. I can't take this suggestion seriously. Even if we get past the question of by what conceivable mechanism the human brain could be influencing the outcome of this outside-the-accessible-universe decision, they're basically suggesting that "free will" comprises a set of decisions which-- as they have just specifically proven-- has literally no bearing whatsoever on anything that happens in the universe. That sounds like a really crappy sort of free will to have, as if Wal-Mart took over the world but then gave you a free choice of what color jumpsuit to wear in their industrial prisons.

So they spend a good bit of space on this whole choice/will idea, but then when they finally get around to explaining what this decidability stuff has to do with the objection they originally raised all this to address-- is Decoherence really that useful as a way of explaining away the messiness of superposition if even after it happens all we have is an even messier, even more superimposed system?-- it turns out to not have a lot to do with the free will stuff at all. Instead they simply suggest that people might not mind so much that an "event" in a universe described by their Wootersy construction doesn't remove superposition from the system, so long as there were at least a specific, definable way in which the universe were different before and after the "event" occurs. They suggest you can provide this by defining the "event" as occurring at the exact moment that it becomes undecidable whether information loss has occurred or not. That sounds a lot more reasonable than the free will bit-- it's at least scientific-- but is "becoming undecidable" a quantifiable thing, something you can identify the specific moment where it happens when theoretically simulating the system? They don't give enough information for me to feel like I can answer that question.

Anyway, my objections about the end aside, overall this paper was very neat. Their whole argument at the end about free will and hypothetical coinflips outside the observable universe sounds like an unnecessary distraction from the much more interesting Page–Wootters-2.0 construction they describe in the first part of the paper, but it's easy to isolate and ignore that part of the argument if one wants. And anyway, I guess it would not be an FQXi paper if it didn't veer off into philosophy somewhere. I'd like to hear more about their method of quantifying the progression of decoherence and relative information loss, and I'd be curious whether anyone has heard anything about further work or knows whether they've been able to get any useful calculations out of their construction.
 
  • #2
Coin said:
So we had a thread about the FQXi essay contest a couple weeks back, and when I first saw the winners list this one...
...jumped out at me, ...

In my inexpert judgement this work is radical yet solid. Probably important. And has not yet been refuted. In fact there are a half-dozen papers by Gambini & Pullin that develop this idea, going back to 2004, as I recall.

An amusing detail is that G&P took on a young co-author named Raphael Porto for several of these papers, and in between times Porto has been co-author with Jacques Distler. I do not watch Distler but it seems to me likely that he is aware of the G&P line of thinking. There may have been some reaction already, or we may eventually see one, and this will be interesting.

What I think we should do is look at the main papers in this line that have appeared 2004-2008 and see who has CITED them. Also we should look at current conferences and see if Gambini has been invited to present these ideas.
I should point out that Pullin was chairman of the quantum gravity session at the APS April meeting in Denver. The "April meeting" is the main annual American Physical Society event for theoretical physics.

Personally, as I say, I believe the G&P work is important and solid. But I am always wanting to find real-world objective correlations against which to check my personal viewpoint. What I want to see are objective signs of recognition that the Gambini and Pullin ideas are being taken seriously.

You shouldn't just base this on the reception of their FQXi essay. IMO that is, with all due respect, a warmed over rehash of their 2004-2008 work with some savory seasoning to liven it up. Prepared to take advantage of the FQXi contest podium, a target of opportunity. You have to look at the main papers published in the regular channels and judge on that basis. IMHO.
 
Last edited:
  • #3
marcus said:
In my inexpert judgement this work is radical yet solid. Probably important. And has not yet been refuted. In fact there are a half-dozen papers by Gambini & Pullin that develop this idea, going back to 2004, as I recall.

An amusing detail is that G&P took on a young co-author named Raphael Porto for several of these papers, and in between times Porto has been co-author with Jacques Distler. I do not watch Distler but it seems to me likely that he is aware of the G&P line of thinking. There may have been some reaction already, or we may eventually see one, and this will be interesting.

What I think we should do is look at the main papers in this line that have appeared 2004-2008 and see who has CITED them. Also we should look at current conferences and see if Gambini has been invited to present these ideas.
I should point out that Pullin was chairman of the quantum gravity session at the APS April meeting in Denver. The "April meeting" is the main annual American Physical Society event for theoretical physics.


It bothers me that you put so much emphasis on who has worked with whom, where people have worked, who is citing whom. I could not put my finger on why, exactly, but I knew that it bothered me a great deal.

Then I realized why it bothers me so much. It is completely against the fundamental principles of good science. Someone who has little experience in physics and who reads your post will get the feeling that we, scientists, pay more attention to the names of people, where they have worked, who has cited them, who they have collaborated with and so on, than to their actual work when deciding what is of merit. Good science should be the exact opposite! Ideally, we should ignore completely all those irrelevant things and focus on the work itself. We should judge a paper ONLY on its content, and not at all on any of these other facts. Ideally, we should not even name the authors of a paper and discuss only the physics content.

It's true that it is hard to be completely objective. Of course, we pay more attention to a new paper by, say, Witten than by someone publishing their first paper. But getting into an author's whole professional life is totally irrelevant and gives the wrong idea about what good science should be, in my humble opinion. That's sociology, not physics.
 
  • #4
I have already spent a lot of time discussing the physics of the Gambini and Pullin papers. The work from a purely physics perspective is obviously important, since it addresses major problems in an original way (black hole info paradox, problem of time in QG, etc. etc.)

We have had threads about this going back to 2004-2006. Obviously some people aren't aware of this. In particular it has been a longstanding interest of mine.

However the sociological checks are important because physicists are obviously a bit like herd or flock animals. Work can be solid and original and yet it can be ignored if it does not catch the attention of the (largely conformist) mass of the community which tends (for various reasons) to follow fashion.

So now it is time to check to see if this work is getting some recognition.

What we need to do is check cites for the series of papers. It will take some work. I have been asked in Private Message to make an assessment like this, so I will go ahead and do it. Or make a stab at it. We'll see what comes out of it.
http://arxiv.org/abs/0809.4235
http://arxiv.org/cits/0809.4235
Conditional probabilities with Dirac observables and the problem of time in quantum gravity
Rodolfo Gambini, Rafael Porto, Sebastian Torterolo, Jorge Pullin
Phys.Rev.D79:041501R,2009
(Submitted on 24 Sep 2008)
"We combine the "evolving constants" approach to the construction of observables in canonical quantum gravity with the Page--Wootters formulation of quantum mechanics with a relational time for generally covariant systems. This overcomes the objections levied by Kuchar against the latter formalism. The construction is formulated entirely in terms of Dirac observables, avoiding in all cases the physical observation of quantities that do not belong in the physical Hilbert space. We work out explicitly the example of the parameterized particle, including the calculation of the propagator. The resulting theory also predicts a fundamental mechanism of decoherence."
4 pages

A point that caught my attention back around 2004 was that Gambini Porto Pullin had a resolution of the black hole info paradox, simply based on using a realistic clock, and some clever theoretical limits on the lifetime accuracy of a quantum clock, which made considerably better sense than the resolution offered, with lots of publicity, by Hawking at the same time. Hawking got the attention. G&P made sense. The contrast was striking.
I will get to that, I am gradually working back. In case you don't realize it, NRQED, this involves very interesting physics :biggrin: so don't have a fit.

Here's another one going back. If these things are not getting active attention, it sucks. I have not checked yet to see.

http://arxiv.org/abs/0708.2935
http://arxiv.org/cits/0708.2935
Loss of entanglement in quantum mechanics due to the use of realistic measuring rods
Rodolfo Gambini, Rafael A. Porto, Jorge Pullin
Phys.Lett.A372:1213-1218,2008
(Submitted on 21 Aug 2007)
"We show that the use of real measuring rods in quantum mechanics places a fundamental gravitational limit to the level of entanglement that one can ultimately achieve in quantum systems. The result can be seen as a direct consequence of the fundamental gravitational limitations in the measurements of length and time in realistic physical systems. The effect may have implications for long distance teleportation and the measurement problem in quantum mechanics."
6 pages

http://arxiv.org/abs/gr-qc/0611148
Fundamental spatiotemporal decoherence: a key to solving the conceptual problems of black holes, cosmology and quantum mechanics
Rodolfo Gambini, Rafael Porto, Jorge Pullin
6 pages, Honorable Mention GRF 2006, published version
Int.J.Mod.Phys.D15:2181-2186,2006

http://arxiv.org/abs/gr-qc/0603090
http://arxiv.org/cits/gr-qc/0603090
Fundamental decoherence from quantum gravity: a pedagogical review
Rodolfo Gambini, Rafael Porto, Jorge Pullin
9 pages, dedicated to Octavio Obregon on his 60th birthday
Gen.Rel.Grav.39:1143-1156,2007

http://arxiv.org/abs/hep-th/0406260
http://arxiv.org/cits/hep-th/0406260
Realistic clocks, universal decoherence and the black hole information paradox
Rodolfo Gambini, Rafael Porto, Jorge Pullin
3 Pages
Phys.Rev.Lett. 93 (2004) 240401

http://arxiv.org/abs/hep-th/0405183
http://arxiv.org/cits/hep-th/0405183
No black hole information puzzle in a relational universe
Rodolfo Gambini, Rafael Porto, Jorge Pullin
4 pages
Int.J.Mod.Phys. D13 (2004) 2315-2320

http://arxiv.org/abs/gr-qc/0402118
http://arxiv.org/cits/gr-qc/0402118
A relational solution to the problem of time in quantum mechanics and quantum gravity induces a fundamental mechanism for quantum decoherence
Rodolfo Gambini, Rafael Porto, Jorge Pullin
13 pages
New J.Phys. 6 (2004) 45
 
Last edited by a moderator:
  • #5
Hi nrqed,

I agree with you completely.
 
  • #6
Some people may not immediately "get it" that we are talking about interesting and highly original physics here. As we discussed in threads some years back, G&P found a clever bound on the duration/precision of a clock, which comes from the fact that the clock collapses into a black hole if you try to push its durability and accuracy past this limit.
In finding this bound they were inspired by an earlier paper of Wigner.

This bound on physically realizable clocks causes a slow inevitable loss of unitarity, if one uses a realistic clock. They calculate this, and find it gives a resolution of the black hole information paradox, among other things.
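To give a feel for the orders of magnitude, here is a back-of-envelope sketch. This is my paraphrase of the kind of bound being discussed, with exponents and prefactors that should be checked against the 2004 PRL rather than trusted:

```python
import math

# Back-of-envelope numbers only (my paraphrase, not a transcription of their
# formulas): take the best achievable clock spread to grow like
#   delta_T ~ t_Planck**(2/3) * T**(1/3)
# and the off-diagonal terms of a system with level spacing omega to be
# suppressed roughly like exp(-(omega * delta_T)**2).

t_planck = 5.39e-44          # seconds
age_of_universe = 4.35e17    # seconds

def clock_spread(T):
    """Illustrative minimal uncertainty of the best realistic clock run for time T."""
    return t_planck ** (2.0 / 3.0) * T ** (1.0 / 3.0)

def suppression(omega, T):
    """Rough factor multiplying off-diagonal terms at angular frequency omega."""
    return math.exp(-(omega * clock_spread(T)) ** 2)

for omega in (1e10, 1e20, 1e30):   # rad/s; bigger omega = more "macroscopic" superposition
    print(f"omega = {omega:.0e} rad/s:  suppression over cosmic time = "
          f"{suppression(omega, age_of_universe):.6g}")
```

The qualitative message is the point: negligible for atomic-scale level spacings even over cosmic times, overwhelming for sufficiently macroscopic superpositions.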

Now I happen to have communicated in the past several times with Rafael Porto and so I keep marginally aware of his research interests besides this one involving decoherence. So I am going to toss out these links. They are not supposed to prove anything, they are supposed to round out the picture. I find the Distler connection amusing. I expect that Distler's hep-ph/0604255 actually draws some on the Gambini Pullin work and may even cite them. Have to check this.

http://arxiv.org/abs/0712.0448
The Private Higgs
Rafael A. Porto, A. Zee
8 pages. Version published in Phys. Lett. B
Phys.Lett.B666:491-495,2008

http://arxiv.org/abs/hep-ph/0604255
Falsifying Models of New Physics Via WW Scattering
Jacques Distler, Benjamin Grinstein, Rafael A. Porto, Ira Z. Rothstein
4 pages, 2 figures
Phys.Rev.Lett.98:041601,2007

Yes, in fact. The Distler paper cites this:
http://arxiv.org/abs/gr-qc/0402118
A relational solution to the problem of time in quantum mechanics and quantum gravity induces a fundamental mechanism for quantum decoherence
Rodolfo Gambini, Rafael Porto, Jorge Pullin
13 pages
New J.Phys. 6 (2004) 45
If you remember what the Distler paper was about, it naturally would cite that. The issue of unitarity and departures from it was paramount in the Distler paper. That was the one where Distler was so anxious to claim falsifiability.
 
Last edited:
  • #7
OK, so I listed some of the papers that are context for the one Coin mentioned. And in particular I see that this one 0402118 has gotten 34 cites.

In particular it has been cited by Steve Giddings (twice) and Don Marolf and Abhay Ashtekar and also by Max Tegmark and in another instance by Neil Cornish (he's a prominent cosmologist). And of course also by Jacques Distler.

Now if someone is not very alert they might think that I am using a sociological fact to prove Gambini and Pullin are good. The opposite is the case.

I've seen people draw that not-terribly-perceptive conclusion before.
I already know Gambini and Pullin's work is interesting and original physics. I have explained that. I know they are good for physics reasons.

The fact that they got cited by Giddings and by Marolf (Santa Barbara KITP) doesn't show G&P are good. We already know that. It shows the extent to which the broader physics community is smart and awake. It is a hopeful sign, in other words. Giddings and Distler are considered to be string theorists (though lately Giddings has shifted research focus some.)

Max Tegmark (MIT) directs the FQXi which had that recent essay contest where I think G&P got the second juried prize or some such thing.

So these are hopeful signs that the theoretical physics community is waking up to G&P's gambit. And it coincides with Coin's hunch that it was interesting. He spotted a recent paper of theirs about these themes and thought it was interesting enough to start a thread about.
It is.

I just checked cites on another. More recent:
http://arxiv.org/abs/0708.2935
http://arxiv.org/cits/0708.2935
Loss of entanglement in quantum mechanics due to the use of realistic measuring rods
Rodolfo Gambini, Rafael A. Porto, Jorge Pullin
Phys.Lett.A372:1213-1218,2008
(Submitted on 21 Aug 2007)
"We show that the use of real measuring rods in quantum mechanics places a fundamental gravitational limit to the level of entanglement that one can ultimately achieve in quantum systems. The result can be seen as a direct consequence of the fundamental gravitational limitations in the measurements of length and time in realistic physical systems. The effect may have implications for long distance teleportation and the measurement problem in quantum mechanics."
6 pages

This (as well as the other one) has been cited in a paper by Don Marolf, and also in a paper by Neil Cornish. Marolf and Cornish are repeat customers.
It hasn't gotten very many cites as yet (only published last year) but it got recognition from some high quality people. So that's how it goes. Gambini and Pullin have a highly original idea and it has been in obscurity much of the time since 2004, but maybe the community is beginning to wake up to it.

Again, let's be clear: community awareness doesn't mean their idea is right. It may eventually be refuted! Popularity does not imply validity. The string theory example should be sufficient to disabuse one of that notion! :biggrin:
 
Last edited:
  • #8
Hi guys, thanks for the responses.

I think sociological data is interesting and valuable as a thing unto itself. It is important not to let it supplant physics content, however I don't think it's a good idea to ignore the sociological data either. I look at things this way because, frankly, I do not know enough about physics to feel like I can accurately judge by myself whether ideas are solid. If however [sociologically speaking] there is a lot of activity around an idea, that tells me the idea has been tested in the sense that a lot of people are thinking about it. This goes especially for something like the vaguely revolutionary kind of ideas being put forward by Gambini et al here. If these ideas had been around a while and had had the chance to be evaluated but there weren't examples of people citing them or developing them further or collaborating on them, that would be a hint to me that maybe there is some kind of obvious problem with them (scientific or practical) that I simply lack the background to see. So I find the links and background Marcus has provided here extremely useful because they seem to me to say that the Gambini et al ideas pass some sort of minimal "smell test".

Aside from this I think understanding a physicist's history-- what they've worked on, who they've collaborated with-- is useful because it tells you a lot about the researcher's mindset. If you understand a writer's mindset you're more likely to understand what it is they're trying to communicate.

That out of the way, nrqed, you say you'd like to discuss the physics content-- do you have any remarks on the physics content yourself? I'd be curious to hear your thoughts.
 
  • #9
Coin said:
Because I am not entirely sure I understand decoherence, I am not sure I entirely understand this part of the paper either. Let's stop for a moment and see if I can get this right: As I understand, "Decoherence" is an interpretation of quantum mechanics (or a feature certain interpretations of quantum mechanics adopt) where "wavefunction collapse" is an actual physical phenomenon that emerges when unitary systems become deeply entangled with each other very quickly.
This is wrong. Decoherence is not an interpretation, but an experimental fact. It is NOT wavefunction collapse. For more details I recommend
http://xxx.lanl.gov/abs/quant-ph/0312059
 
  • #10
Coin said:
In mathematical logic we say a problem is "undecidable" by a particular logical system if there is no possible way to demonstrate the idea is true or false by following the consequences of the logical system.

I'm not sure if I read that paper; you caught my interest, I'll try to skim it later.

I associate here directly to the notion of establishing whether a particular "probability estimate" is true or false. IMO, this is clearly not physically possible. In reality the entire world has changed once you have acquired enough statistics to make an assessment. The exceptions are cases where the context is massive compared to the system under study, as is often the case in particle experiments.

As I see it, it is not necessary to even talk about stuff like "predicting a probability", because the only function of the probability is that it constitutes the basis for the observer's actions. Once the action has been triggered, assessing the correctness of the past basis seems moot.

This is why I think decidability in that loose physics sense only makes sense in an evolving context. Without evolution and change, decidability seems ambiguous. IF you picture the observer's reasoning as a logical system in a fuzzy sense, then somehow the "consequences of the logical system" ARE the observer's actions, and in that way the feedback to the observer from the environment is somehow the decision. If this is consistent with the observer's prior state, no correction is needed; if not, a correction is needed. I don't see that we need to make up imaginary probability ensembles. They aren't needed IMO.

If you look at biological systems, it makes little sense to ponder whether a particular "rat" makes correct decisions. What matters is that the rat is apparently a very fit creature that has managed to survive and evolve for a long time. So responding to your environment's reactions to your possibly imperfect actions is the important trait, rather than always making the correct action, because the measure of this "correctness" cannot be constructed.

Once you accept the idea that the ability to learn and adapt is more important than "being right" in some ambiguous sense, it also puts decidability in a new light.

I think this is an interesting question.

Suppose I make a prediction, and this prediction is the basis of my actions; then once I get feedback on this action from my environment, it's an outdated question to try to RE-assess my prior prediction in the light of new data to see if I was right or wrong. Regardless of whether I was right or wrong, the current question is how I can optimally merge the feedback with my prior state of information.

/Fredrik
 
  • #11
I didn't get time to read the entire paper last night, but I started skimming, and Gambini says on page 7-8

" One of them is the “regularity theory”, many times attributed to Hume
[23]; in it, the laws of physics are statements about uniformities or regularities of the world and therefore are just “convenient descriptions” of the world. Ernest Nagel in The Structure of Science [24] describes this position in the following terms: “Hume proposed an analysis of causal statements in terms of constant conjunctions and de facto uniformities.. —according to Hume [physical laws consist] in certain habits of expectation that have been developed as a consequence of the uniform but de facto conjunctions of [properties].” The laws of physics are dictated by our experience of a preexisting world and are a representation of our ability to describe the world but they do not exhaust the content of the physical world.
A second point of view sometimes taken is the “necessitarian theory” [22], which states that
laws of nature are “principles” which govern the natural phenomena, that is, the world “necessarilyobeys” the laws of nature. The laws are the cornerstone of the physical world and nothing exists without a law. The presence of the undecidability we point out suggests strongly that the “regularity theory” point of view is more satisfactory since the laws do not dictate entirely the behavior of nature."

I like this. However I am not sure how this view differs from Rovelli's in its application. I also partially agree with Rovelli's reasoning. I guess it depends on how he applies this.

My interpretation of what he says above, which is close to my personal view as well, is that he tries to acknowledge that knowledge of "physical law" requires information, and information acquisition, just like a "physical state" does.

Here I also associate to Smolin's idea of evolving law. Undecidability, then, could possibly be interpreted as a particular view of physical law. This is exactly the perspective I like.

But the question is how you go on from here. My personal expectation is that we will end up with an evolving law, and that there are no "meta laws" governing this evolution. Rather, I think there is a hierarchy of laws, which can be seen to have originated from a condition of no represented laws.

This origin problem becomes similar to the origin of mass. How can laws have evolved, starting from no laws?

So in that sense, life is really like an unknown game without set rules. Finding out the effective rules IS part of the game itself. Once you get skilled you can even "learn" how to INFLUENCE the rules. But at no point can one make decisions about whether something is true or false. You can only place your bets, and move on from whatever feedback you get.

I think this is why the logic we might need could be somewhat different.

I'll read the other half of the paper tonight.

/Fredrik
 
  • #12
Regarding the physical interpretation of decidability.

In other words, as I see it, the "logical implications" whereby decisions are usually made in logic are in physics a _physical process_, and the process of deciding something is indistinguishable from the ordinary (time) evolution, because the only means of "verification" is something like "place your bets, and revise your actions based on the feedback".

So the decidability problems seem to be constrained to processes. Unlike normal logic, where the set of statements and implications exists mathematically, the physical implementation of this is more like "computations", which introduces time, but with the complication that there exists no universal computer hardware; instead the observer is his own hardware, so the "computations" might be constrained to the observer's own microstructure.

The thing that makes me doubt, and which reminds me of Rovelli, is this statement:

"It is not too surprising that in the resulting picture one does not have a unitary evolution: although the underlying theory is unitary, our clocks and rods are not accurate enough to give a depiction of evolution that appears unitary."
-- page 3-4 in http://fqxi.org/data/essay-contest-files/Gambini_essay.pdf

I suspect that the "underlying theory" here, does represent a sort of structural realist view, or birds view. I don't like that.

This way of recovering unitarity in a more fundamental picture must IMO still be subject to the same constraints. A conclusion is thus that such a theory is unreachable, unless you envision it in some form of mathematical reality like Tegmark's. I fail to see the real utility of such a picture. It seems more to give intellectual comfort and an illusion of something fundamental to fall back on, although it's imaginary.

/Fredrik
 
  • #13
I got around to reading the last section last night, and Gambini's reasoning is quite interesting. I can connect to what he says, but whether the brief paper allows me to infer the best interpretation of his reasoning I don't know.

About the free will thing: I agree that whenever that appears in a paper I am sceptical, because it's so commonly used in various crackpot stuff, but what Gambini says makes sense to me.

Coin said:
But then they do something squirrelly. They seem to be suggesting that because either of these two things could possibly be happening, that it's possible both could be happening-- that every time two systems interact, the universe gets to make a choice as to whether it's going to superimpose everything or collapse everything, and maybe it just freely toggles between the two. Why on Earth would it do this?

Gambini has a reference to another paper, reference [8], which supposedly contains details, but given that I haven't read it, this is how I interpret this.

To me the first observation here is that Gambini seems to partly acknowledge the limits of what I first called his "external bird's view". In this view, there is always an uncertainty whether a superposition held by a measurement device or another observer has collapsed or not. But from my perspective he is mixing the views here, which is confusing. He seems to partly resolve the confusion by noting that from the "external" point of view, it's undecidable which is the case. Then from my perspective, in which there is no "external" view, this external view can be nothing but a third observer, and in that case the conclusion is that this third observer has a "choice": to think that the imagined superposition another observer has relative to a subsystem is still intact, or not. This third observer then bases his actions upon this. But in actuality, he would then act "as if both happen". This makes pretty good sense to me.

I wouldn't call this "free will" but I can understand the association. The free will is then simply the players freedom to "place his bets", and he has no choice but to face the consequences of this choices. The consequences are the environments backreaction, which is also undeterministic.

But I think of the observer's betting process more as a kind of unpredictable random process. The observer has an evolving die, and his only choice is to throw it or not to throw it. Both choices are risky, so there are no safe strategies.

Maybe I should try to locate that reference [8]. Gambini seems to mix the external view here with constraints on it. I think that eventually the constraints on this external view will prove to simply be the ones of an internal view. Then I think I would like it even more :)

/Fre
 
  • #14
They have a new paper. John86 flagged it and added it to the QG bibliography thread:

John86 said:
http://arxiv.org/abs/0905.4222

Undecidability and the problem of outcomes in quantum measurements
Authors: Rodolfo Gambini, Luis Pedro Garcia Pintos, Jorge Pullin
(Submitted on 26 May 2009)

Abstract: We argue that it is fundamentally impossible to recover information about quantum superpositions when a system has interacted with a sufficiently large number of degrees of freedom of the environment. This is due to the fact that gravity imposes fundamental limitations on how accurate measurements can be. This leads to the notion of undecidability: there is no way to tell, due to fundamental limitations, if a quantum system evolved unitarily or suffered wavefunction collapse. This in turn provides a solution to the problem of outcomes in quantum measurement by providing a sharp criterion for defining when an event has taken place. We analyze in detail in examples two situations in which in principle one could recover information about quantum coherence: a) "revivals" of coherence in the interaction of a system with the environment and b) the measurement of global observables of the system plus apparatus plus environment. We show in the examples that the fundamental limitations due to gravity and quantum mechanics in measurement prevent both revivals from occurring and the measurement of global observables. It can therefore be argued that the emerging picture provides a complete resolution to the measurement problem in quantum mechanics.

John didn't highlight the last sentence of their abstract, but I guess I will:

It can therefore be argued that the emerging picture provides a complete resolution to the measurement problem in quantum mechanics.
 
Last edited:
  • #15
Here is the next paper by Gambini and Pullin.

http://arxiv.org/abs/0905.4402
The Montevideo interpretation of quantum mechanics: frequently asked questions
Authors: Rodolfo Gambini, Jorge Pullin
(Submitted on 27 May 2009)

Abstract: In a series of recent papers we have introduced a new interpretation of quantum mechanics, which for brevity we will call the Montevideo interpretation. In it, the quantum to classical transition is achieved via a phenomenon called "undecidability" which stems from environmental decoherence supplemented with a fundamental mechanism of loss of coherence due to gravity. Due to the fact that the interpretation grew from several results that are dispersed in the literature, we put together this straightforward-to-read article addressing some of the main points that may confuse readers.


A question for Marcus or Fra about the emergent properties of spacetime and the CC problem. I try to visualize this time and again. If you envision the emergent properties of spacetime as a macroscopic thing, is it then wrong to think of the cosmological constant as the underlying substratum, and the universe as a macroscopic bubble emergent from that substratum?
 
  • #16
Thanks Marcus and John for reporting more papers from these authors! I have been unusually unfocused lately due to various car issues, and I haven't been able to read any new papers for a week.

About the cosmological constant: I personally don't think of it as a "substratum" in the realist sense. So far I've associated a non-zero cosmological constant with the finite information constraint of the observer - a finite observer cannot establish a certain zero measure, and I think this residual uncertainty or undecidability is somehow related to the cosmological constant. The effective measured cosmological constant is thus somehow relative to the observer, but that's not quite consistent with how we normally see this. But then again, if GR is emergent then an at least locally objective cosmological constant might appear.

But exactly how this fits into the large picture of emergent dimensionality and spacetime is not yet clear to me. To think of it as a material substratum sitting in space is too realist-inclined for my taste. I think it's linked to how it's measured, and thus unavoidably adds the complexity of the observer.

I will see if I get time to read those papers this weekend.

/Fredrik
 
  • #17
"We argue that it is fundamentally impossible to recover information about quantum superpositions when a system has interacted with a sufficiently large number of degrees of freedom of the environment. This is due to the fact that gravity imposes fundamental limitations on how accurate measurements can be."
-- http://arxiv.org/abs/0905.4222

This SOUNDS very much to my liking. I can HOPE that their "fundamental limitations on how accurate measurements can be" can relate to what I have called "intrinsic construction of measures". The "fundamental limit" is simply the constraint imposed by the "inside view" of a finite observer. At least that's how I think of it; it remains to be seen if this is comparable with their reasoning.

I definitely will read this later.

From my point of view, given this angle, the next question is: WHAT actions does a system take, given this undecidability of the "optimal action"? This is where, in my thinking, evolving observers and evolving law come in.

My first impression was that they thought of another escape, but it remains to be seen.

/Fredrik
 
  • #18
I wanted to say that there is a popular misconception about the word 'undecidable', even Max Tegmark was a victim of that misconception.

Well-defined, simple and effectively computable 'laws' of physics in some Universe can make that particular Universe 'undecidable'.

As an example, imagine a Universe where only Turing machines exist, and nothing else. Time in that universe is integer (step number).

A Turing machine is DETERMINISTIC and quite primitive. However, even that simple world is undecidable, because there are some undecidable statements (for example, whether a given Turing machine ever stops on a particular input).

However, all these undecidable statements have EXISTS or FOR ALL at the beginning of the formula (EXISTS a step N such that blah-blah), while the transition from step N to step N+1 is always well defined and even effectively computable.

Decidable and effectively computable laws of physics can lead to a globally undecidable universe, and I always wonder what type of undecidability ('local' or 'global') physicists are talking about.

P.S.
Another shocking example: the universe of Conway's well-known Game of Life is undecidable.
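To illustrate: the LOCAL law of Life is a few lines of code, trivially computable, yet global questions about arbitrary patterns ('does this pattern ever die out?') are formally undecidable, because Life can emulate a universal Turing machine.

```python
import numpy as np

# One tick of Conway's Game of Life on a toroidal grid: the local law is
# trivially computable, but global questions about arbitrary initial patterns
# ("does this pattern ever die out?") are formally undecidable, because Life
# can emulate a universal Turing machine.

def step(grid):
    neighbours = sum(
        np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    born = (grid == 0) & (neighbours == 3)
    survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    return (born | survive).astype(int)

# A glider: each individual step is decided instantly by the rule above;
# the long-run fate of arbitrary patterns is another matter entirely.
grid = np.zeros((10, 10), dtype=int)
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[r, c] = 1
for _ in range(4):
    grid = step(grid)
print(grid.sum(), "live cells after 4 steps (the glider marches on)")
```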
 
Last edited:
  • #19
The word "undecidable" obviously has a meaning outside the context of mathematical logic. And the word is used when something cannot be decided. Just like "indistinguishable" is used when two things cannot be distinguished.

I know of only one popular misconception about the word "undecidable", which is the misconception that it must always mean formally undecidable, as in the title of Gödel's book:
https://www.amazon.com/dp/0486669807/?tag=pfamazon01-20
That is, there are people who think the word always has the technical meaning of "formally undecidable" or "recursively undecidable" and so can only be used in the context of mathematical logic, as with Turing machines, sets of axioms, etc etc.

A naive person might think that whenever the word is used, there must be some narrow technical context of the sort he is familiar with. That is a misconception that I have seen occur. What other "popular misconception" about it is there?

I didn't understand the following statement, Dima:

Dmitry67 said:
I wanted to say that there is a popular misconception about the word 'undecidable', even Max Tegmark was a victim of that misconception.
...

What is the misconception you are talking about?

Could you give an example of it?

In particular, about Max Tegmark, would you please point to some instance where Tegmark "was a victim of" the misconception that you have in mind?

So far I am completely mystified by your post.
 
Last edited by a moderator:
  • #20
marcus said:
In particular, about Max Tegmark, would you please point to some instance where Tegmark "was a victim of" the misconception that you have in mind?

So far I am completely mystified by your post.

Yes, here: http://arxiv.org/PS_cache/arxiv/pdf/0704/0704.0646v2.pdf
Page 22 and his CUH:

3. Computable structures (whose relations are defined by halting computations)

He does not define any sub-levels of #3, while obviously there are multiple sublevels:

For simplicity, let's talk about a 'lattice universe' with integer time and a state S(t), where t is an integer. There are 4 options:

1. Any statement about S1 and S2 is decidable.
2. For any given initial state S(t) we can effectively calculate state S(t+1). In another words, we can effectively emulate the Universe.
3. For any given pair of states S1 and S2 we can effectively calculate if S2 can be the NEXT state (for t+1) of S1. In other words, we can effectively verify if the laws of our Universe are violated or not on every step. Note that #3 is weaker than #2
4. We can't do #3 - non-computable Universe.

When Max Tegmark talks about computability, he does not clarify if he is talking about #1, #2 or #3 (because he sees no difference). But #1 is too strong - it is violated even in very simple systems like the Game of Life, so we can't hope that #1 is true. Probably he means #3, but I am not sure.
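A toy illustration of the difference between #2 and #3, with a made-up one-line 'law' (shift-and-XOR on a bit string):

```python
# Toy "lattice universe" with a made-up law, just to illustrate #2 vs #3 above.

NBITS = 8
MASK = (1 << NBITS) - 1

def next_state(s):
    """#2: effectively EMULATE the universe -- compute S(t+1) from S(t)."""
    return (s ^ (s << 1)) & MASK

def is_legal_transition(s1, s2):
    """#3: effectively VERIFY a proposed transition.  Here it trivially reduces
    to #2, but in general one can imagine laws stated as checkable constraints
    where a verifier exists even though forward emulation is not given."""
    return next_state(s1) == s2

s = 0b00010110
print(bin(next_state(s)), is_legal_transition(s, next_state(s)))
```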
 
  • #21
Dmitry67 said:
...http://arxiv.org/abs/0704.0646
Page 22 and his CUH...
...

From Tegmark's abstract:
" I hypothesize that only computable and decidable (in Gödel's sense) structures exist"

Notice that he specifies a technical meaning "in Gödel's sense", because the word also can be used informally outside math logic.

So Tegmark was using "decidable" in the technical sense of mathematical logic, and may have blundered! That is interesting!

Maybe you should start a thread pointing this out. People interested in the formal mathematics of computation might want to discuss it, and give you feedback.

However, I don't think it has anything to do with Gambini and Pullin.
For them there is no formal logic context. Something is undecidable if you can't decide it.
They could as well have said "indistinguishable".
http://arxiv.org/abs/0905.4222
They are interested in where the quantum description comes to a point where you cannot physically distinguish between wavefunction collapse and the natural loss of information that accompanies realistic measurement (real world quantum clocks and rods.)

For them it is an important junction when you finally cannot decide by physical means whether the wavefunction has collapsed (because of some measurement interaction with environment) or instead information has worn out by the simple passage of time. Dima, you may see things differently. Please let me know if you think I have misunderstood their idea.

Let me quote from their abstract, to emphasize what I'm trying to say:

"We argue that it is fundamentally impossible to recover information about quantum superpositions when a system has interacted with a sufficiently large number of degrees of freedom of the environment. This is due to the fact that gravity imposes fundamental limitations on how accurate measurements can be. This leads to the notion of undecidability: there is no way to tell, due to fundamental limitations, if a quantum system evolved unitarily or suffered wavefunction collapse."

What they are talking about are fundamental physical limitations on the accuracy of measurements. For instance the fact that if you try to make a real clock too accurate and reliable it will turn into a black hole and you lose it. And there is, in the universe, no clock except for real clocks. So information inevitably decays---perfect unitarity is meaningless, one cannot operationally define it. And they have useful estimates of the unavoidable rate of decay---the rate at which the universe ineluctably forgets its past.
 
Last edited:
  • #22
John86 and Marcus, thanks for the links! This is exciting; this looks like the sort of coherent development of their ideas that was missing from the 30,000-foot-view FQXi paper.

I haven't fully digested all this yet, but I find this detail particularly interesting from John86's initial quote:

This [information inaccessibility] is due to the fact that gravity imposes fundamental limitations on how accurate measurements can be.

This is very interesting to me because it seems analogous or identical to a specific Penrose idea. There's a place in Road to Reality (I don't know if he ever elaborated on this in any "real" scientific context) where Penrose raises the question of how to classify something as "classical" or "quantum" in the Copenhagen interpretation and offhandedly proposes that you could invent an interpretation where the point at which "wavefunction collapse" happens is defined as "the point at which a singular state is required in order to solve some problem involving gravity". Penrose had no idea how to make this work, and he was calling for this on a solely philosophy-of-science argument-- i.e. his proposal was founded on the idea that the problem of how to decide where collapse happens is hard and the problem of how to make gravity be nonclassical is hard, so it sure would be convenient if you could just lump the two problems together. But it seems to me that once G&P make explicit (as in this paper) this idea that their information inaccessibility directly flows from limitations imposed by gravity, they've basically supplied something very like a rigorous version of the theory Penrose was hoping for, since "information becomes inaccessible from the perspective of this rod or clock" is in their interpretation basically the same as Copenhagen's "wavefunction collapse", and there's a bound on when this occurs based on the most-accurate-clock-you-can-make-without-forming-a-black-hole limit. Now granted, I may be putting too much emphasis on this, since if I understand right the gravity consideration in G&P, even once they make it explicit, is mostly just an upper bound on a process where the actual mechanism at work is decoherence-- but it's still interesting to me when two people reach a single idea coming from different directions...

---

An unrelated note: The clearer presentation in these new papers is making me curious exactly how their "interpretation" fits in with the many-worlds interpretation. (Their FAQ actually brings up many worlds, and says that they can't see any reason why they would be incompatible with many worlds but notes that once you've accepted their interpretation the philosophy-of-science motivation for accepting MWI becomes weaker.) It seems like in their interpretation information in the wavefunction is never "lost" except from the perspective of specific observables (clocks, rods etc), and nothing ever "becomes classical" except from the perspective of those observables-- that is their interpretation seems to be all about how individual objects can experience the illusion of classical physics. So if I understand this right, what happens if you choose to work with the "Montevideo interpretation" and many-worlds interpretation simultaneously? Is the idea that all "branches" on a quantum decision still exist within the wavefunction, but different observables will wind up stuck in a single "world" and only able to access the information in the universe's wavefunction from the perspective of that "world"?
 
  • #23
I still haven't had time to read the papers yet, but I'd like to hang onto this discussion and get back to it later.

Coin, I understand your association to Penrose's gravity-induced collapse. I too think there is a connection between the fundamentals of QM and gravity, and I also didn't find Penrose's arguments sufficiently clear or convincing. He seems to want to explain quantum decoherence by assuming gravity.

While I agree that there must be a general connection, I see it the other way. I think that in the general information view, these various forms of "information inaccessibility" or undecidability, which follow (as I think, at least) from a constrained physical inside view, somehow ARE gravity. This way there might also be an interesting informational perspective on the constants of gravity, including the cosmological constant.

So we could perhaps even predict gravity from a proper informational view. I think this is somehow perpendicular to what Penrose seems to think, but I also have a feeling that the final result could be close to his original intention, only that maybe there is another way of implementing it.

Maybe you're right that Gambini's thinking is a progression from Penrose's original idea.
It will probably be a few more days before I get around to reading the papers properly.

/Fredrik
 
  • #24
A question in the meantime, until I have time to read the papers -- perhaps for Marcus?

Since Gambini works in LQG as far as I know, perhaps someone can characterize the differences and similarities between Rovelli and Gambini on this point?

Am I right in my initial impression (from not yet having read the papers) that Gambini here attempts to look for something deeper, information-wise, at a point where Rovelli makes a leap?

Anyone think that's fair?

(I really do think this is interesting, but I have been overloaded with non-physics stuff lately; hopefully in a week or so there will be some light. I hope to keep this discussion alive until I get time to join in more.)

/Fredrik
 
  • #25
*Starting to read a little*

To start with the introduction of Gambini's fundamental limit...

"This hits a fundamental limit if one includes gravity in measurements. To make accurate measurements one need to expend energy. When one considers gravity one is limited in the amount of energy one can invest in a measurement. Too large an energy density creates a black hole and spoils the measurement. This argument has been put forward by many authors. It is heuristic and without a full theory of quantum gravity cannot be rigorously worked out."
-- Gambini, Undecidability and the problem of outcomes in quantum measurements

He raises two points here.

Too large an energy density creates a black hole and spoils the measurement.

one is limited in the amount of energy one can invest in a measurement

The first point is, AFAIK, the one more commonly stated in QG introductions, but the second one is actually more interesting, because it connects to the fact that there is a complexity bound originating from the home of any measurement: the observer himself. One can also imagine the extreme case where a small observer has to "invest" his entire own mass/energy in order to make an accurate measurement, but then something strange happens, because the measurement itself implies a complex feedback onto the observer itself.

It's not just a matter of being limited in measurement energy because the system to be observed has been excited into a black hole; it's a matter of the observer having finite resources as well. This other point has, IMO, not been paid as much attention. To compare with a game: a player cannot make infinitely high stakes if his pockets are finite.

I think this second point is the key to the so-called intrinsic view, and to the construction of intrinsic measurements. This should add a further constraint on what measurements are possible, and thus also on what actions are possible, given that the actions are based on the observer's state of information about its environment.

This is, IMO, the rarer component in these lines of reasoning, and the one I wish were elaborated.

I think getting the right angle on the above might be important, since it's the starting point. First of all, we do not know how a microscopic black hole would actually behave, but I would really expect it to be very different from an ordinary continuum-GR black hole just "scaled down". So the intuition that too high an energy would collapse the measurement into a black hole still needs to be reconstructed in a more general setting. And I think the discrete information view is more promising than the continuum geometric view.

Along with the first point, one might argue that at some energy the object cannot be distinguished from a black hole, since the measurement excites the object.
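Just to attach some rough numbers to that first point -- my own illustration, using nothing more than the naive continuum Schwarzschild bound, i.e. exactly the heuristic picture I'm questioning above:

```python
# Rough numbers for "too large an energy density creates a black hole".
# My own illustration only, using the naive continuum Schwarzschild bound
# (the very heuristic questioned above): energy E packed into a region of
# radius R collapses roughly when  R < 2*G*E/c**4.
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def max_energy(radius_m: float) -> float:
    """Largest energy (J) that fits inside radius_m before its Schwarzschild
    radius exceeds the region itself."""
    return radius_m * c ** 4 / (2 * G)

for r in (1.0, 1e-15, 1.6e-35):   # a metre, roughly a proton, roughly the Planck length
    print(f"R = {r:8.1e} m  ->  E_max ~ {max_energy(r):8.1e} J")
```

For everyday length scales the bound is astronomically far away; it only starts to bite near the Planck scale, which is part of why I think the naive picture needs to be reconstructed there.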

But even more importantly, at some point the object itself cannot distinguish a complex environment from a simple one, if it has limited information capacity; wouldn't this also constrain the complexity of its response pattern? So maybe a very "small black hole" would reduce its behaviour to a well-defined elementary interaction pattern?

This is, IMO, the other side of the coin, the one that is not so often mentioned?

/Fredrik
 
  • #26
Fra said:
One can also imagine the extreme case where a small observer has to "invest" his entire own mass/energy in order to make an accurate measurement,...

That would be a great idea for a weight-loss program... if not for those pesky little black holes created in the process.
 
  • #27
It seems like Gambini is making similar observations to those I have made, but IMO he does not think through the consequences.

Fra said:
Too large an energy density creates a black hole and spoils the measurement.

Since physics is a quantitative science, one would like to quantify this statement. Exactly how large is too large? AFAIU, the only way to do so is to introduce the observer's mass M into the equations. Ultimately, the goal of physics is to describe observations, and the result of an observation depends on the observer's mass; a light observer observes something different than does an obese observer sitting inside his black hole.

This leads to a radical corollary: any observer-independent theory, be it QFT, string theory or LQG, is necessarily wrong or incomplete, and can at best be a limiting case of a more fundamental observer-dependent theory, whose predictions depend on the observer's mass.

Fra said:
one is limited in the amount of energy one can invest in a measurement

This is essentially the uncertainty relation applied to the observer, right? To avoid problems with Pauli's theorem, it might be simpler to say something similar about spatial localization. If the observer has finite mass, there is a limit to how much an experiment can be localized. In formulas, the observer's position and velocity do not commute, but instead

[q, v] = i hbar/M
 
  • #28
Thomas Larsson said:
an obese observer sitting inside his black hole.

Priceless!

Thomas Larsson said:
In formulas, the observer's position and velocity do not commute, but instead

[q, v] = i hbar/M

But this is a ridiculously small uncertainty, which can (and should) be ignored for all practical purposes.
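Just to show how ridiculously small: here's a quick check with my own numbers, using the uncertainty relation dq*dv >= hbar/(2M) implied by that commutator.

```python
# Quick check of how small the effect is, using the uncertainty relation
# dq * dv >= hbar / (2*M) implied by [q, v] = i*hbar/M.  My own numbers.
hbar = 1.055e-34   # J s

def min_velocity_spread(mass_kg: float, position_spread_m: float) -> float:
    """Lower bound (m/s) on the observer's velocity spread once its position
    is pinned down to within position_spread_m."""
    return hbar / (2 * mass_kg * position_spread_m)

# A 70 kg observer localized to a nanometre:
print(min_velocity_spread(70.0, 1e-9))   # ~7.5e-28 m/s
```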
 
  • #29
Just a quick response on one thing; I still haven't read all the papers.

I noted in a previous thread that I share part of Thomas's idea of introducing the observer's mass in some way, but I do think that I see this differently than you, which is illustrated a bit by the below.

Thomas Larsson said:
This is essentially the uncertainty relation applied to the observer,
right?

I would say sort of but not quite. I definitely see it as much more involved than that.

The problem is that if you consider an uncertainty relation between the observer and its environment, you are implicitly defining another observer that observes the first observer -- some kind of god's or bird's eye view -- and that's exactly my point: that isn't valid. I suspect we differ in our views there. But I certainly agree that the observer's mass (or some suitable complexity measure) is missing in current models. I share your view fully on that point.

So what I mean by the limited energy for measurement is more abstract. I refer to a limited complexity that constrains all observable things, including physical law. One is not allowed to "explain this" by means of another, external view, because that view is not at hand.

So I agree fully that "a light observer observes something different than does an obese observer sitting inside his black hole", but there exists no universal external description of this relation; the description is itself constrained by the same limits.

This is what I associate most closely with undecidability.

The difference here is between, say, something "undecidably undecidable" and something "decidably undecidable". My view is that we cannot find hard universal constraints on the undecidability; instead, it's evolving.

/Fredrik
 
  • #30
I guess what I'm objecting to is the separation of physics and science. The scientific context, the home of physical law, is subject to the same physical constraints as physical interactions and physical measurements, as I see it.

A lot of physicists don't seem to care about this.

If you ask what physical laws and "explanations" a tiny physical observer can distinguish, then that is different from just asking how a larger observer can partially explain the actions of the small observer due to its small size. I think we can explain much more of nature, including the actions -- i.e. the form of Lagrangians and Hamiltonians etc. -- if we take this even more seriously.

If the Lagrangians and Hamiltonians are not universally fixed, but are themselves merely evolving views of physical law, of which all observers have slightly shifted views, then the explanatory power would, I think, be far greater.

This is an even larger reconstruction indeed, but I think it's what we need, and it is what I personally expect.

/Fredrik
 
  • #31
Fra said:
This is what I associate most closely with undecidability.

The difference here is between, say, something "undecidably undecidable" and something "decidably undecidable". My view is that we cannot find hard universal constraints on the undecidability; instead, it's evolving.

This also relates a little bit to the original construction of QM, by Dirac etc.

In essence, QM is indeterministic, BUT it is still a form of "deterministic indeterminism", because the indeterminism is encapsulated in the probability abstractions, and the idea is that the probability is deterministically predictable.

This cannot be valid, IMO, if you think bigger.

So this again relates back to the physical basis of the probability abstractions. And like I said in my defense before, the problem is not a mathematical one. The question is rather whether the mathematical abstractions and axioms of classical probability can be justified enough to encapsulate ALL the indeterminism of the world, so that we end up with a "deterministic probability theory". I think not.

I rather think that a proper construction of what we have will give a form of indeterministic indeterminism, which takes more the form of a game. All anyone has at any point is an opinion of how to play. There are also expectations of how the game proceeds, but no one can predict it 100%. That is exactly why I like to think in terms of "games"; that's exactly why it is a "game". Here it should also become intuitive that a massive player is more dominant and can make more viable predictions than a small observer.

Given that we accept this gaming idea, there is little sense in deciding things or establishing absolute truths, because all we have are expectations, from which we form our actions. Our actions are still not deterministically determined by the expectations, but the expectations act as "constraints". And once the game has proceeded, the situation repeats itself, and it has no survival utility to ask whether a prediction you made a year ago was right or false, because that is somehow already erased and replaced by a new prediction to make.

This is analogous to the human brain. A healthy human brain focuses forward. A lot of research suggests that even our memories of the past are mainly used to help us in the future. Accurately "recalling" the past as it actually happened, as per some actual time history, is not really the purpose of our memory. This is why the brain sometimes remembers the past in a way it didn't actually happen: the brain transforms and reinterprets the record and instead remembers a form that it thinks is more useful for the expected future. There are people who can very accurately recall in high detail what actually happened and have amazing photographic memory, but these are usually disordered brains, like some savants. They sometimes have excellent, indeed almost supernatural, memory, but then they have an impaired ability to predict the future -- figure which is the more important trait.

I see this as quite analogous to the way I envision physical interactions and evolving physical law.

/Fredrik
 
  • #32
Meopemuk:

hbar/M is ridiculously small in two limits:

1. hbar -> 0. This is classical physics.

2. M -> infinity. I presume that this is what you meant. This amounts to the assumption that all relevant energy scales are much smaller than M. This may seem obviously true in practical cases. However, note that *all* scales must be much smaller than M, including the energy of virtual quanta. Hence we cannot integrate over virtual momenta with energy higher than M, so M effectively becomes a cut-off.

In the absence of gravity, you can take the cut-off to infinity after you have computed things in the regularized theory. But when gravity is present, you again run into the problem that M is not only the inertial mass but also the gravitational (heavy) mass.

This argument is of course a variant of the standard argument that renormalization hides new physics above the cut-off scale. What is unusual is that the new physics here is not another term in the Lagrangian, but fuzziness in the observer's location.
 
  • #33
Thomas Larsson,

Basically, you are saying that the results of measurements must depend on the mass of the measuring apparatus. I have no problem with that. The gravitational field of the apparatus has some effect on the trajectories of observed particles, on energy levels, etc. However, there are a lot of other (usually unaccounted-for) effects which are many orders of magnitude stronger than the gravitational field of the observer: for example, electromagnetic fields produced by a nearby radio station, or ground vibrations from automobiles on the street... Are you going to discuss them as well? What is the practical point of discussing observers so heavy that they are buried in their own black hole? This is so bizarre.
 
  • #34
As I see it, the point of this discussion is that the interactions between systems are fundamentally dependent on their relative complexity (and mass), in a way that is not acknowledged in current formalisms and renormalisation procedures.

The focus is not on the 100th decimal place in a standard experiment; the focus is on the foundations of theory and physical law -- a focus that, if kept, might help us solve problems of unification.

The more interesting question here is what happens when the observer is NOT massive. How do the actions of a small system scale when its complexity drops? Is there a possible inference to be made here?

Ordinary renormalisation is simply a reduction. It can reduce degrees of freedom, but that isn't a realistic inside scenario, because it implies a bird's view -- something to be decidably shaved away. I think there is a better way to do this, one that will be less deterministic and more undecidable (to keep using that word here), BUT that might come with additional, deeper insights into emergent constraints and physical law.

/Fredrik
 
  • #35
Meopemuk,

Yes, in principle the result of an electromagnetic experiment depends on the observer's charge e as well as his mass M. We are usually only interested in the aspects of the experiment which are independent of e and M, i.e. we are only interested in the double limit e -> 0, M -> infinity. This limit is presumably well described by renormalized QFT.

The special thing about gravity is that e = M: gravitational (heavy) mass equals inertial mass. Therefore, for gravity and gravity alone, the QFT limit e -> 0, M -> infinity does not make sense, and the observer's physical properties cannot be ignored.
 
