Can the Born Rule Be Derived Within the Everett Interpretation?

  • Thread starter: vanesch
  • Tags: Born rule, Paper

Summary
The discussion centers on a paper arguing that the Born rule cannot be derived within the Everett interpretation of quantum mechanics without introducing an additional postulate, which the author refers to as an alternative projection postulate (APP). Two referees reviewed the paper, with the first criticizing its lack of novelty compared to previous work by Barnum et al. and questioning its relevance to Deutsch's arguments. The second referee acknowledged the paper's logical claims but suggested it needed a more thorough examination of existing literature on deriving the projection postulate. The author defends the paper's main argument, asserting that without an extra hypothesis, the Born rule cannot logically emerge from unitary quantum mechanics. The conversation highlights ongoing debates in quantum theory regarding measurement and the interpretation of probability.
  • #31
vanesch said:
Well, I checked finally the Barnum paper, it is available here:
http://www.journals.royalsoc.ac.uk/media/n9kmrgwwyr8226vrfl1y/contributions/a/r/x/p/arxp7pkn699n63c9.pdf
I really don't see what the referee meant when he said that we had identical critiques. This is a totally different critique of Deutsch's work in particular, where the authors try to establish that Gleason's theorem ALREADY ESTABLISHED Deutsch's results.
Wallace's response to the Barnum et al. paper can be found in section 6 of the following paper. (Later sections are also relevant.)

http://arxiv.org/abs/quant-ph/0303050

He interprets Barnum et al. as giving three different criticisms of Deutsch's paper, only one of which is the one about Gleason's theorem. One of the others is that Deutsch's conclusions don't follow from his assumptions unless an extra assumption is introduced. It seems to me this is the same criticism as yours, except that you're giving an explicit counterexample rather than just saying it doesn't follow. According to Wallace, though, this criticism doesn't work if you assume a decoherence-based Everett interpretation, because having to justify the measurement model from lower-level physics rather than taking it as basic puts extra constraints on what probabilities you can assign.

It seems to me now that referee-1 got it right. (Though I actually have no real technical competence to comment on all this. :) ) Papers like those of Barnum et al., Wallace, and Greaves already acknowledge that, when working in a measurement model like the one you're using, without making an extra assumption, you can't prove the PP, and alternatives like the APP are possible. The real question is whether it's possible to avoid making that extra assumption while keeping the reductionistic Everett interpretation, i.e. without introducing measurements or experiments as basic elements of your theory. To show that the decision theory program fails, you would need to show in your paper that the arguments made by Wallace, Saunders, Greaves, and so on are wrong. (As I think you've tried to do in this thread, by talking about how it doesn't matter operationally speaking.)
 
  • #32
Measurement Neutrality and Laplace's Principle of Indifference

straycat said:
Basically, Greaves explains that one of the essential assumptions in Deutsch-Wallace decision theory is the postulate of "measurement neutrality," which is "the assumption that a rational agent should be indifferent between any two quantum games that agree on the state |phi> to be measured, measurement operator X and payoff function P, regardless of how X is to be measured on |phi>." afaict, this means that if we think of the measurement process as a "black box," then Deutsch assumes that a rational agent should in principle be indifferent to the details of the innards of this black box.

Hi! I think something like this assumption was probably first made explicit by Wallace. Based on email exchanges between Deutsch and the authors of "Quantum probability from decision theory?", I'd say Deutsch had it in mind as well, although even in the email exchange it only became gradually apparent what he had in mind, and he never formalized it as Wallace has done... At first I thought Deutsch believed his argument might apply whether one had a many-worlds or a definite-outcomes view of measurements, and that's why his paper was so unclear on this point. Now I'm not sure.

But anyway, the crucial thing is that the measurement neutrality assumption is a kind of quantum version of Laplace's Principle of Insufficient Reason (PIR). In our paper, "QP from DT?", we argued that Deutsch's implicit assumption was a kind of analogue of the PIR. Measurement neutrality is a more sophisticated one, but an analogue nonetheless. It seems "nonprobabilistic" because it isn't on the face of it about probabilities, whereas Laplace's PIR *is* explicitly about probabilities---but if one accepts a subjective, decision-theoretic view of probabilities (which I have no problem with, in this context), then assumptions about preferences may encode assumptions about probabilities, and I think that's so here.

It's simply not a principle of "pure rationality" that whatever differences---physical differences, they'd likely be---exist between two ways of measuring a Hermitian operator, those differences should not affect our preferences between the results. Suppose the differences have no intrinsic value to us: still, we could imagine having different beliefs about the likelihoods of the outcomes given different measurement processes, and thus valuing the games differently. Measurement neutrality rules this out; therefore, it has substantive physical content (to the extent that physics is a theory that guides action).

Sure, it might seem crazy to think that the color of the lettering we use on the dials of our measuring device, or whatever, could affect the probabilities. But that it is crazy is part of our phenomenological theory of the world, acquired at least in part through experience and inference --- not a pure *principle* of rationality --- and it is also supported by arguments concerning the physics of the measuring device. No doubt we can't make do without some such prior predispositions to dismiss such possibilities as highly unlikely, but that doesn't mean invoking them is harmless in an effort to derive the probability rules solely from the assumption that the rest of quantum mechanics is "a fact" (whatever it would mean for the rest of QM to be "a fact" without the probability rules that are an important part of what ties it to the world and gives it content), plus "pure rationality".

Maybe I should back off a little on that last parenthetical remark: there are things other than the probability rule that put QM in contact with the world. In fact, QM arrived a little earlier than the Born rule, as a theory explaining, among other things, some atomic spectra, by determining energies (of e.g. the hydrogen atom). Nevertheless, I tend to think that Many-Worlds (despite my having spent a lot of effort in my life playing devil's advocate for it) gets things backwards: stuff happens; we do scientific inference (i.e. some variety of roughly Bayesian inference, in a very general and not necessarily conscious sense) about the stuff that happens, the definite results we experience for measurements; and we come up with a theory that systematizes the resulting beliefs (as evidenced by our willingness to bet, in a very generalized sense, on the outcomes of experiments and such). This systematization of our betting behavior when faced with experiments can be represented in terms of probabilities given by the Born rule. Rederiving the Born probabilities from a part of the formalism that was cooked up, and especially further developed and held onto, in part to give a good representation of just these probabilities, seems somewhat backwards. Without the probabilities, and the terrific guidance they give to our actions, who would have bothered with quantum mechanics anyway? I guess one can say that the rederivation is a sophisticated attempt to keep the probabilities and solve other problems that came along with quantum mechanics. But it still raises, for me, a serious problem: what then of the formal and informal scientific reasoning, based on measurements having definite results, that brought us to the QM formalism and the Born rule in the first place? Must we reconstruct it all in terms of Everettian branchings, with never a definite result?

Patrick's detailed exploration of an alternative probability rule (which happens to be a rule we devoted two sentences to on page 1180 of our paper, noting that it was contextual but not obviously ruled out by Deutsch's other assumptions) is quite worthwhile, I think. I have only just read it, a couple of times through, but it looks basically right to me. FoP might be a good place for it. I think maybe Wallace, or somebody else (there is related work by Simon Saunders...), devoted some effort to ruling it out explicitly (I'll post it if I find a reference)... maybe just through establishing noncontextuality given certain assumptions. But any such effort is likely to be based on measurement neutrality or something similar.

Cheers!

Howard Barnum
 
  • #33
Quote (from lalbatros, quoted by vanesch):
For many people, the interaction with a 'classical' or 'macroscopic' system is all that is needed to derive the PP. I think this is the most probable explanation for the PP. Landau considered this so obvious that it appears in the first chapters of his QM book.

Quote from vanesch (replying to lalbatros)
This is the standard "explanation". But it is *postulated* and not *derived* from unitary QM. What qualifies a system as "macroscopic" and "classical" (without circular reasoning?), and why shouldn't it obey standard quantum theory?
Or is there an upper limit to the number of factors in the product Hilbert space (the number of particles) beyond which the exponential of a Hermitian operator suddenly stops being unitary?

My comments:

It's not that a large isolated system behaves nonunitarily, but that a smaller system interacting with a larger one may undergo a nonunitary effective evolution. That's one way of understanding entropy-increasing evolutions, like the equilibration of a system with a larger "heat bath." Of course, it's true that in computing things like the entropy of the open system, one is implicitly using the Born probabilities, e.g. via the reduced density matrix (whose eigenvalues' interpretation as probabilities relies on the Born rule at the level of the combined system and reservoir).
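For concreteness, here is a minimal numeric sketch of the point about reduced density matrices; the two-qubit state and its amplitudes are my assumptions, not anything from the post:

```python
# Toy model (assumed state): a system qubit entangled with a one-qubit
# "reservoir".  Tracing out the reservoir leaves a mixed reduced density
# matrix, whose eigenvalues are read as probabilities via the Born rule.
import numpy as np

a, b = np.sqrt(0.7), np.sqrt(0.3)          # assumed amplitudes
# |psi> = a|0>_S |0>_R + b|1>_S |1>_R  (system factor first)
psi = a * np.kron([1, 0], [1, 0]) + b * np.kron([0, 1], [0, 1])

rho = np.outer(psi, psi.conj())            # pure state of system + reservoir
# Partial trace over the reservoir: rho_S[i, j] = sum_k rho[(i,k), (j,k)]
rho_S = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.round(rho_S, 3))                  # diag(0.7, 0.3): coherences gone
print(np.linalg.eigvalsh(rho_S))           # eigenvalues 0.3 and 0.7
# The combined evolution stays unitary, but the system alone is mixed, and
# its effective evolution under further coupling is generally nonunitary.
```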

Howard
 
  • #34
mbweissman said:
BTW, on some real issues, does anybody understand how Graham (1973) managed to get from APP to standard PP? I just can't follow his argument.

I have taken several stabs at trying to understand Graham's argument, which I recall relies on some sort of "two-step" measurement process, but I never could piece together how his argument worked. The first part of his paper is actually pretty good, I think, in which he argues against the PP -- or rather, I suppose he argues FOR the APP. I try to recapitulate this argument in the first part of my paper. BTW, I agree with Dr. Weissman's statement in the abstract of his paper when he states that counting outcomes, i.e. the APP, is "the obvious algorithm for generating probabilities." To my mind, the APP is the only projection postulate that could legitimately be justified by a symmetry argument. In fact, I am absolutely flabbergasted that there are so few attempts in the literature to make the APP "work." The only published attempts that I know of are Weissman's and Hanson's -- perhaps many minds might count as well. Given the obviousness of the APP, why has it received so little attention? Is it the non-contextuality that Patrick speaks of?

David
 
  • #35
hbarnum said:
Nevertheless, I tend to think that Many-Worlds (despite my having spent a lot of effort in my life playing devil's advocate for it) gets things backwards: stuff happens; we do scientific inference (i.e. some variety of roughly Bayesian inference, in a very general and not necessarily conscious sense) about the stuff that happens, the definite results we experience for measurements; and we come up with a theory that systematizes the resulting beliefs (as evidenced by our willingness to bet, in a very generalized sense, on the outcomes of experiments and such). This systematization of our betting behavior when faced with experiments can be represented in terms of probabilities given by the Born rule. Rederiving the Born probabilities from a part of the formalism that was cooked up, and especially further developed and held onto, in part to give a good representation of just these probabilities, seems somewhat backwards.

But doesn't science often progress that way: we make observations, we then make a theory to describe the observations, and we then come up with a way to derive the theory from something more fundamental and completely different. Example: we see things in motion, we come up with Newtonian mechanics, and then we find that we can derive Newton's laws from general relativity. Why not try to do with the Born rule what we did with Newton's laws -- derive it from something deeper?

David
 
  • #36
hbarnum said:
I think maybe Wallace, or somebody else (there is related work by Simon Saunders...) devoted some effort to ruling it out explicitly (I'll post it if I find a reference)... maybe just through establishing noncontextuality given certain assumptions.

I still don't think I completely understand the difficulty with "noncontextuality" ... I'll be rereading sec 4 of Patrick's paper tonight ... :rolleyes:

David
 
  • #37
straycat said:
I still don't think I completely understand the difficulty with "noncontextuality" ... I'll be rereading sec 4 of Patrick's paper tonight ... :rolleyes:
David

OK, reading Patrick's paper: "It is a property of AQT that changing the resolution of a measurement can change the probabilities of the outcome of the crude measurement, which is not the case under SQT." Is it not? Consider the quantum zeno effect, according to which (for example) we can change the probability of decay of a particle by changing the time-resolution of our measurements of the particle's state.
http://en.wikipedia.org/wiki/Quantum_Zeno_effect .
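For concreteness, a minimal numeric sketch of the Zeno point (the two-level model and rotation angle are my assumptions, not anything from the thread): a state rotating from |0> toward |1> is measured N times along the way, and the survival probability depends on N, i.e. on the measurement resolution.

```python
# Toy Zeno model: the state rotates by a total angle theta from |0> toward
# |1>.  Measuring N times at equal intervals projects back onto |0> with
# probability cos^2(theta/N) each time, so survival depends on N.
import numpy as np

theta = np.pi / 2          # total rotation angle (assumed: a full flip)
for N in (1, 2, 10, 100):
    p_survive = np.cos(theta / N) ** (2 * N)
    print(f"N = {N:3d} measurements -> survival probability {p_survive:.4f}")
# N = 1: survival 0 (the flip completes); larger N drives survival toward 1,
# so changing the time resolution of measurement changes the statistics.
```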

Later: "in AQT ... the probability of measuring 10 for X depends on whether we have measured Y or not ..." This sounds to me reminiscent of the 2-slit experiment: the probability of detecting the electron at a certain spot on the detector depends on whether we have measured which slit was traversed by the electron.

So it seems to me that some of the "strange properties of AQT" are in fact not so dissimilar to standard, good ol' quantum weirdness. In which case they do not disqualify it, but perhaps *qualify* it as a possible candidate of a physical theory of the world! Does this make sense?

David
 
  • #38
Hello M. Barnum,

Welcome to PF !

Concerning the following:

hbarnum said:
Patrick's detailed exploration of an alternative probability rule (which happens to be a rule we devoted two sentences to on page 1180 of our paper, noting that it was contextual but not obviously ruled out by Deutsch's other assumptions) is quite worthwhile, I think. I have only just read it, a couple of times through, but it looks basically right to me. FoP might be a good place for it. I think maybe Wallace, or somebody else (there is related work by Simon Saunders...), devoted some effort to ruling it out explicitly (I'll post it if I find a reference)... maybe just through establishing noncontextuality given certain assumptions. But any such effort is likely to be based on measurement neutrality or something similar.

As I said already elsewhere, I'm doing this "as an interested amateur", so it is pretty obvious that I don't know the entire literature of the domain, although I did my part of reading up. And it seemed to me too that the reasoning I put forward in the paper was - at least informally - already presented a few times, but I found it useful to explore it in detail and to try to write it down as formally as possible, because I think it has important implications. I should probably make this clearer in the introduction. The main implication is that we should look for the best EXTRA postulate to introduce. Maybe the text was seen too much as an attack on Deutsch, which it wasn't, but Deutsch's work seemed to present the most obvious "case study" to illustrate the point I tried to make.

I think all these people do important work, but it seems that they are overlooking a point, namely that they WILL need, anyhow, to introduce something extra. Now, it is of course not forbidden to introduce something extra, if that makes the scheme work. So instead of minimizing the "extra assumption", or even trying to hide it, it should be made explicit and explored. Then we could concentrate on the essential part, namely the meaning of the extra postulate - after it has been shown that that postulate makes things come out "as if" Copenhagen QM were true. But of course, as an amateur, one feels a bit uneasy drawing such a conclusion about people working in the field!

cheers,
Patrick.
 
  • #39
World counts incoherent?

Hello all, I'd like to join your conversation, a few days late.
Ontoplankton said:
This is true, but in section 5.3, Greaves argues that egalitarianism (which is the APP, but phrased in terms of utilities instead of objective probabilities) is *incoherent*, whether or not you accept measurement neutrality, because in a real-world setting where branch-splitting happens through decoherence, there is no well-defined number of branches. ... Patrick van Esch's case. If his intent is to prove that the APP is a consistent theory that "could have been true" (not just in an idealized model of a measurement/branching, but in messy statistical mechanics), then he needs to address these arguments. ... The question is whether you can justify measurement neutrality (or some equivalent assumption like equivalence or branching indifference or whatever they were called); for example, by showing that alternatives are incoherent, or require a huge amount of arbitrary input, or correspond to rationality principles that aren't even workable in theory. Wallace has a lot of philosophical discussion in his papers about this; for example, see section 9 in http://users.ox.ac.uk/~mert0130/papers/decprob.pdf .
This seems to me to be the essential issue. Wallace and Greaves and many others seem to accept the claim that if there are naturally distinguishable branches/worlds in the Everett approach, then it is natural to assign probabilities proportional to world counts, producing a difficult conflict with the Born rule. They claim, however, that world counting is incoherent. Page 21 of Wallace's paper cited above gives the most elaboration I've seen defending this view.

I'd like to discuss this point further. But being new here I'm not sure - does convention dictate that I should start a new thread or continue in this thread? So while I await instruction on this point, I'll just make one point.

Even if world counts are incoherent, I don't see that the Everett approach gives us the freedom to just pick some other probabilities according to convenient axioms. An objective collapse approach might give one the freedom to postulate the collapse probabilities, but in the Everett approach pretty much everything is specified: the only places remaining for uncertainty are particle properties, initial/boundary conditions, indexical uncertainty (i.e., where in this universe are we), and the mapping between our observations and elements of the theory (i.e., what in this universe are we). We might have some freedom to choose our utilities (what we care about), but such freedom doesn't extend to probabilities.
 
  • #40
Hello M. Hanson,

Welcome to PF ! Hope you'll enjoy this forum!

RobinHanson said:
This seems to me to be the essential issue. Wallace and Greaves and many others seem to accept the claim that if there are naturally distinguishable branches/worlds in the Everett approach, then it is natural to assign probabilities proportional to world counts, producing a difficult conflict with the Born rule. They claim, however, that world counting is incoherent. Page 21 of Wallace's paper cited above gives the most elaboration I've seen defending this view.

I'd like to discuss this point further. But being new here I'm not sure - does convention dictate that I should start a new thread or continue in this thread? So while I await instruction on this point, I'll just make one point.

I think you can do here what you think is best. Feel free to start a new thread if you think it is a subject on its own (which it probably is).
 
  • #41
hbarnum said:
[snip] ---but if one accepts a subjective, decision-theoretic view of probabilities (which I have no problem with, in this context), [snip]
And in which context do you have a problem with it, Howard? (...we are watching you...)
 
  • #42
RobinHanson said:
Even if world counts are incoherent, I don't see that the Everett approach gives us the freedom to just pick some other probabilities according to convenient axioms.

Well, the problem that I see is that in the Everett approach, there are NO probabilities at all. If we follow it strictly and assume conscious awareness of our entire brain state, we should experience the entire state, and not just "one world". If they hit our toes with a hammer in one term of the superposition, that should hurt us in any case :-)

Now, as people here know, I'm rather a proponent of Everett (as long as no natural physical mechanism for a collapse is found), but I think that nevertheless *a* postulate about how we go from this quantum state to a perceived state should be made. If we're going to perceive only ONE term of our brain state, I think that should be explicitly said, and I find it a priori not evident why all states should get "equal probability" for me to perceive them. Of course, I understand that it is tempting, natural, etc... to do so, but not strictly necessary; and in any case the rule should be postulated.
 
  • #43
vanesch said:
Well, the problem that I see is that in the Everett approach, there are NO probabilities at all. If we follow it strictly and we assume conscious awareness of our entire brain state, we should experience the entire state, and not just "one world".

When you say "entire brain state," I assume you mean (for example) the superposition of <Bob sees up> and <Bob sees down>. So your question is: why do you not "experience" up and down at the same time? Well, how do you know that you don't? In fact, the MWI predicts iiuc that you do experience both worlds, but actually "you" can be divided into two distinct halves, one of which experiences one world, the other of which experiences the other, and -- importantly -- the two distinct halves *cannot communicate with one another*. Therefore -- as Everett explains rather succinctly in a footnote in his original paper -- his formulation makes predictions that are entirely consistent with observation, and so my inability to experience multiple worlds at the same time cannot be used as an argument against the MWI, strict interpretation or otherwise.

Have you ever read about the "disconnection syndrome" in patients whose left and right hemispheres are not connected in the normal way via the corpus callosum? It is possible to demonstrate that the right half of the brain does not know what the left half knows, and vice versa. I think that this situation is analogous to the situation of Bob-sees-up and Bob-sees-down: if we imagine that Alice's corpus callosum has been cut, and the left half of her brain sees the number 1 while the right half sees the number 2, and we consider that the two halves cannot communicate to one another, then it seems apparent to me that there is no "unified Alice state" that "experiences" both brain inputs. And the key is that the two brain halves do not communicate. Likewise, Bob's two superpositional states do not communicate; that's why he does not "experience" both inputs at the same time.

David
 
  • #44
APP follows from definition of probability?

vanesch said:
If we're going to perceive only ONE term of our brain state, I think that should be explicitly said, and I find it a priori not evident why all states should get "equal probability" for me to perceive them. Of course, I understand that it is tempting, natural, etc... to do so, but not strictly necessary; and in any case the rule should be postulated.

I'm sort of on the fence regarding whether the APP requires a separate postulate. Certainly, if you are going to assume the Born rule, then that requires a separate postulate, as you (Patrick) argue in your paper. Therefore, it would stand to reason that if we assume something *other* than the Born rule, ie the APP, then that likewise requires a separate postulate.

However, there is a different part of me that thinks that probability could in fact be DEFINED in such a manner that the APP necessarily follows from the definition, and in fact rules out the Born rule (or any other non-APP contenders), so that a separate postulate is not necessary. The basic argument is spelled out in Graham's 1970-something paper, recapitulated in the latest draft of my paper, and also touched upon in Robin's PowerPoint presentation of his work ( http://hanson.gmu.edu/mangledworlds.html ). The short version goes like this: define a predicted "probability measure" m_n, where m_n is specified by some rule, e.g. the Born rule: m_n = |a_n|^2, or the APP: m_n = 1/N. Now define the observed frequency p_n, which is the frequency of observing outcome n -- say, spin "up" -- after doing M identical spin measurements. We could DEFINE probability as p_n, and then require (establish a "criterion") that in "most" worlds, the observed frequency p_n equals the predicted measure m_n. It turns out, I think, that the APP is the only "rule" that is consistent with this criterion.
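To make this criterion concrete, here is a minimal sketch (the number of measurements M, the two-outcome setting, and the tolerance are my assumptions): enumerate the 2^M branches produced by M identical two-outcome measurements and ask what fraction of branches, each counted equally, show a frequency near the APP prediction m = 1/2.

```python
# Branch counting for M identical two-outcome measurements.  Under the APP
# every branch gets equal weight, so a branch with k "up" outcomes carries
# weight C(M, k) / 2^M regardless of the quantum amplitudes.
from math import comb

M = 1000                   # number of repeated measurements (assumed)
tol = 0.05                 # window for "observed frequency ~ predicted"
m_app = 0.5                # APP prediction for two outcomes: 1/N = 1/2

frac = sum(comb(M, k) for k in range(M + 1)
           if abs(k / M - m_app) < tol) / 2 ** M
print(f"fraction of equally-counted branches with |p - 1/2| < {tol}: {frac:.6f}")
# -> very close to 1, by binomial concentration.  Note the amplitudes never
#    appear: equal branch counting makes frequency 1/2 typical for ANY state.
```

Note that the quantum amplitudes never enter this computation; that independence from the state is exactly what Patrick's reply below seizes on.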

Like I said, I'm sort of on the fence regarding whether the APP requires a separate postulate or can be derived from, say, the definition of "probability." But I'm not sure it matters to me which one it is. If the APP requires a separate postulate, then I think the APP could very well be given the status of a symmetry principle and postulated in the same sense that the "principle of relativity" is one of the postulates of GR, with justification nothing more nor less than an argument from symmetry.

Hmmm. Does the principle of relativity require its own separate postulate? I suppose it does. So I suppose that the APP, likewise, also requires its own separate postulate.

David
 
  • #45
straycat said:
In fact, the MWI predicts iiuc that you do experience both worlds, but actually "you" can be divided into two distinct halves, one of which experiences one world, the other of which experiences the other, and -- importantly -- the two distinct halves *cannot communicate with one another*.
I do realize that; I think it is one of the most important contributions of decoherence theory. But what does this have to do with probability? As I argued in the other thread, why should there be an "equal probability" of "and you happen to be this and that"?
straycat said:
Therefore -- as Everett explains rather succinctly in a footnote in his original paper -- his formulation makes predictions that are entirely consistent with observation, and so my inability to experience multiple worlds at the same time cannot be used as an argument against the MWI, strict interpretation or otherwise.
I think one should then make clear exactly what it means "to experience". It seems that in the concept, a classical view has already been sneaked in. What we ultimately want to explain is how it comes about that we consciously experience a brain state that corresponds to one of those states in a high Hilbert norm world.
straycat said:
Have you ever read about the "disconnection syndrome" in patients whose left and right hemispheres are not connected in the normal way via the corpus callosum? It is possible to demonstrate that the right half of the brain does not know what the left half knows, and vice versa. I think that this situation is analogous to the situation of Bob-sees-up and Bob-sees-down: if we imagine that Alice's corpus callosum has been cut, and the left half of her brain sees the number 1 while the right half sees the number 2, and we consider that the two halves cannot communicate to one another, then it seems apparent to me that there is no "unified Alice state" that "experiences" both brain inputs. And the key is that the two brain halves do not communicate. Likewise, Bob's two superpositional states do not communicate; that's why he does not "experience" both inputs at the same time.
What I do understand from Everett is that on an objective level - as a purely materialist interpretation - each brain state will act as if it were alone in its branch. So from a "god's viewpoint" there is no surprise that each individual state acts the way it does. From a "god's viewpoint", it is also clear that in the branches with the highest Hilbert norms, brain states will have noticed the Born rule and written books about it. Also from a god's viewpoint, in branches with very low Hilbert norms, brain states will not have found the Born rule. They may even have found totally different laws of physics, given the weird events that they've been witnessing. But why should each of these worlds be given "equal a priori probability for me to be in"? As I said, tongue in cheek: following the same reasoning, if there are 10^10 conscious ants and 5 humans, then I should be, by an overwhelming probability, an ant, no?
I have nothing against it, but I don't think it *follows* logically from anything. I think one STILL has to *postulate* that. Comparing this to the second law of thermodynamics is, in my opinion, not exactly the same, for the following reason: we assign equal probabilities there to "chunks of phase space" because these chunks evolve into one another, and come close to each other thanks to ergodicity. However, as you point out correctly, the brain states do not evolve into one another; they are separated for good. So there is no "ergodicity" that will make "time average = ensemble average" as I hop over all my possible brain states, giving me the impression that I have to deal with some probabilistic phenomenon when I only look at coarse-grained quantities. I "just happen to be" one of those brain states. How do you get a *probability* for that?
 
  • #46
straycat said:
The short version goes like this: define a predicted "probability measure" m_n, where m_n is specified by some rule, e.g. the Born rule: m_n = |a_n|^2, or the APP: m_n = 1/N. Now define the observed frequency p_n, which is the frequency of observing outcome n -- say, spin "up" -- after doing M identical spin measurements. We could DEFINE probability as p_n, and then require (establish a "criterion") that in "most" worlds, the observed frequency p_n equals the predicted measure m_n. It turns out, I think, that the APP is the only "rule" that is consistent with this criterion.

That is begging the question! The fact that your CRITERION uses "most worlds", each with equal weight in the cost function, IS ALREADY the APP.

Apply the same reasoning, but this time require as a cost function that in the worlds WEIGHTED WITH THEIR SQUARED HILBERT NORM the observed frequency equals the predicted measure, and you will find the Born rule!
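A minimal sketch of this rejoinder (the amplitude |a|^2 = 0.8 and the other parameters are my assumptions, mirroring the branch-counting sketch above): weight each branch by its squared Hilbert norm instead of counting branches equally, and the typical frequency shifts from 1/2 to |a|^2.

```python
# Same 2^M branches as before, but now a branch with k "up" outcomes is
# weighted by its squared Hilbert norm a2**k * (1 - a2)**(M - k) instead of
# being counted once.  "Most worlds" now means Born-rule worlds.
from math import comb

M = 1000
a2 = 0.8                  # |a|^2 for the "up" outcome (assumed value)
tol = 0.05

def mass_near(target):
    """Total squared-norm weight of branches with frequency near target."""
    return sum(comb(M, k) * a2 ** k * (1 - a2) ** (M - k)
               for k in range(M + 1) if abs(k / M - target) < tol)

print("weight near p = 0.5:", mass_near(0.5))   # ~ 0 under this measure
print("weight near p = 0.8:", mass_near(0.8))   # ~ 1: Born statistics
# Which worlds count as "most" depends entirely on the measure you postulate.
```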
 
  • #47
vanesch said:
That is begging the question! The fact that your CRITERION uses "most worlds", each with equal weight in the cost function, IS ALREADY the APP.

Apply the same reasoning, but this time require as a cost function that in the worlds WEIGHTED WITH THEIR SQUARED HILBERT NORM the observed frequency equals the predicted measure, and you will find the Born rule!

Yes, you are correct -- my "probability criterion" effectively assumes the APP, just as Deutsch's "measurement neutrality" assumes (or at least opens the door for the assumption of) the Born rule.

I'm not sure that it would be possible, though, to use my probability criterion to give rise to the Born rule. Sure, I could use cost functions, weightings, etc. But the basic problem is that I don't know how to interpret a "cost function," so I'd rather not even entertain the notion.

Here's one way to interpret a "cost function." I could postulate the existence of a Soul, and say that the Soul follows different trajectories with probability proportional to their "cost function." But such metaphysical assumptions are precisely what I want to avoid. I'd rather just enumerate the different branches that exist, and leave it at that.

Let me digress once again to the notion of limits. Consider, for example, in differential geometry, the notion of the distance between two points, or the area of a region, in a curved spacetime. To define these terms, we need first to define the metric. To define the metric, we need a map from flat spacetime (the tangent space) to our curved spacetime. And in the tangent space, the notion of distance is a "natural" one, not too many conceptual steps away from counting discrete points. The point I am trying to make is that, whenever we use the concept of measure, we are ALWAYS relying, ultimately, on the notion of "counting discrete things."

So if we were hypothetically to associate a "cost function" or a "weighting" to different branches, then I would want to say this is analogous to the area of a region in curved spacetime in the preceding paragraph. That is, I want to be able to go through the steps in the above paragraph, i.e. to work backwards until I reach a "natural" method of counting worlds. Somewhere in this process, I need to define a "tangent space" of worlds which is flat, i.e. one in which I can easily and naturally calculate the number (or at least the density) of worlds within a given region. If this cannot be done, then the only way to interpret this cost function is metaphysically. And if it CAN be done, then you see that we have effectively explained the "cost function" in terms of world-counting!

Note that in my discussion of the probability criterion, the frequency p_n has a well-defined meaning -- it is the frequency with which the n-th outcome was observed. The predicted quantity m_n is also well-defined: it is simply a calculated quantity, calculated by humans, and may or may not even be correct!

David
 
  • #48
vanesch said:
As I said, tongue in cheek: following the same reasoning, if there are 10^10 conscious ants and 5 humans, then I should be, by an overwhelming probability, an ant, no?

I worry that you are getting sucked into a tautological trap, like contemplating the sound of one hand clapping. You need to escape!

Patrick: Probabilistically, I should be an ant, right?
David: Who should be an ant?
Patrick: me.
David: Define "me."
Patrick: "me" = Patrick.
Well there you go: you have answered the question tautologically, i.e. by definition.

Try riddling this. Have you ever wondered why the year is 2005 and not, say, 1975, or 1224, or 3001? Or have you ever wondered why am I here, and not there? iow: why is spacetime point a located at spacetime point a, and not somewhere/sometime else? Well, the answer is that you have DEFINED a to be right there.

vanesch said:
How do you get a *probability* for that?

Forget probability: just define p_n and m_n as I do using my "probability criterion" discussion. That's all there is.

David
 
  • #49
  • #50
vanesch said:
I do realize that; I think it is one of the most important contributions of decoherence theory. But what does this have to do with probability? As I argued in the other thread, why should there be an "equal probability" of "and you happen to be this and that"?
I think one should then make clear exactly what it means "to experience". It seems that in the concept, a classical view has already been sneaked in. What we ultimately want to explain is how it comes about that we consciously experience a brain state that corresponds to one of those states in a high Hilbert norm world.
What I do understand from Everett is that on an objective level - as a purely materialist interpretation - each brain state will act as if it were alone in its branch. So from a "god's viewpoint" there is no surprise that each individual state acts the way it does. From a "god's viewpoint", it is also clear that in the branches with the highest Hilbert norms, brain states will have noticed the Born rule and written books about it. Also from a god's viewpoint, in branches with very low Hilbert norms, brain states will not have found the Born rule. They may even have found totally different laws of physics, given the weird events that they've been witnessing. But why should each of these worlds be given "equal a priori probability for me to be in"? As I said, tongue in cheek: following the same reasoning, if there are 10^10 conscious ants and 5 humans, then I should be, by an overwhelming probability, an ant, no?
I have nothing against it, but I don't think it *follows* logically from anything. I think one STILL has to *postulate* that. Comparing this to the second law of thermodynamics is, in my opinion, not exactly the same, for the following reason: we assign equal probabilities there to "chunks of phase space" because these chunks evolve into one another, and come close to each other thanks to ergodicity. However, as you point out correctly, the brain states do not evolve into one another; they are separated for good. So there is no "ergodicity" that will make "time average = ensemble average" as I hop over all my possible brain states, giving me the impression that I have to deal with some probabilistic phenomenon when I only look at coarse-grained quantities. I "just happen to be" one of those brain states. How do you get a *probability* for that?
I'm trying to understand what you guys are talking about. :confused:
Is there a real physical problem (that is so perplexing as to lead you to ponder the various probabilities of the existence of other worlds, whatever that might mean)? If so, then would it be possible, for the benefit of us interested laymen, to sort of delineate it clearly?

I mean, I understand that there is a problem with talking about quantum measurement processes --- eg., the 'projection postulate' isn't derivable. But the leap to other worlds seems unfounded.

You're not an ant because, by definition, you're a human. We don't, by definition, experience alternative realities, whatever probabilities were attached to them prior to our experience. Reality is what it is, by definition. There's zero probability attached to possible outcomes which, prior to measurement, were alternatives to observed outcomes, because the probability attached to observed outcomes is 1. Once a detector registers a detection at a time t, there's no chance whatsoever that it didn't register a detection at time t.

You have happened to be in a particular brain state during any particular interval. The probability of any of those brain states happening is 1, because they happened. The probability that they didn't happen is 0, because they happened.

But, one might say, the unitary evolution of quantum processes, which exists and continues independent of measurement, indicates that all of the possible outcomes contained in the QM description have happened (albeit in some alternative reality). But we don't live in, and quantum theory isn't being applied to, a reality that is independent of measurement. Reality, as far as physics is concerned, is the set of all objective measurements. By definition, there is no alternative reality.

While you might be having some semantic fun, I don't understand how you're going to solve the physical problem of quantum measurement processes, or understand why the Born rule works or its justification in the theory, by taking the approach that measurements which, by definition, have definite outcomes don't have definite outcomes. :smile:

As usual, I'm probably missing some important part of what it is that's being considered. Anyway, any clarification you can offer will be appreciated -- and if you don't have time, then I understand.
 
  • #51
vanesch said:
What I do understand from Everett is that on an objective level - as a purely materialist interpretation - each brain state will act as if it were alone in its branch. So from a "god's viewpoint" there is no surprise that each individual state acts the way it does. From a "god's viewpoint", it is also clear that in the branches with the highest hilbert norms, brain states will have noticed the Born rule and have written books about it. Also from a god's viewpoint, in branches with very low hilbert norms, brain states will not have found the Born rule. They may even have found totally different laws of physics, given the weird events that they've been witnessing. But why should each of these worlds be given "equal a priori probability for me to be in" ? As I said, tongue in cheek: following the same reasoning: if there are 10^10 conscious ants, and 5 humans, then I should be, by an overwhelming probability, an ant, no ?

If all you know is that you are conscious, then yes, you should be surprised to be one of the few conscious beings who are human. It would be surprising enough that you should look for an explanation, such as that you've been making a wrong assumption about something. In fact, however, you also know the crucial fact that you are able to reason about the fact that you are conscious, and about what that might imply. Since ants can't do that, you should be much less surprised that you are asking the question.

Similarly, if all you know is that you have found yourself in a Born rule world, I think you should be very surprised.
 
  • #52
straycat said:
Try riddling this. Have you ever wondered why the year is 2005 and not, say, 1975, or 1224, or 3001? Or have you ever wondered why am I here, and not there? iow: why is spacetime point a located at spacetime point a, and not somewhere/sometime else? Well, the answer is that you have DEFINED a to be right there.

I think you are being a bit flippant. There are real and deep questions to consider here. It is indeed possible to be surprised to find oneself at a particular time or place - one can't simply exclude this by making definitions.
 
  • #53
RobinHanson said:
There are real and deep questions to consider here. It is indeed possible to be surprised to find oneself at a particular time or place - one can't simply exclude this by making definitions.

Yes, but the point I am trying to make is that we have to pay close attention to the question being asked before we decide to be surprised at the answer, because it may be that we are contemplating the WRONG question. Is it possible to be surprised to find oneself at a particular time or place? Of course it is. Example: given everything that I know about the world as it is today, I would be surprised to find myself in, say, Costa Rica tomorrow. (Pleasantly, perhaps ...:cool: ). This, I think, is a well-posed question. But should I be surprised that I am not an ant? Given (only) that I am "a living entity" in a room full of lots of ants and one human, then yes, it IS surprising that I am human, and not an ant. Or I could ask this: given (only) that I am a 70 kg lump of matter, is it surprising that I just so happen to be living, breathing, sitting in front of a computer, in the year 2005, on the earth? Yes! what were the odds of THAT? So here's the issue: why would I ever contemplate questions that were set up like these last two? I mean, I can contemplate them -- but do they have any physical significance?

David
 
  • #54
What's wrong with modelling the universe as a sequence of unique configurations? Then you don't have the problem of world counts. Each universal configuration has one and only one descendant.
It seems to me that this would be a more physical way of approaching things.
Yesterday's probability that I would be writing this today doesn't matter as I write this. Probabilities regarding future events are just formalized guesses based on incomplete knowledge of reality. Probabilities regarding past events are meaningless.
We aren't, in fact, surprised by where we find ourselves at any particular moment (provided that we are operating with normal and sober, human biological functions) because we can and do, in fact, experience the uniqueness (and therefore, in a limited sense, track the temporal flow) of the instantaneous configurations encompassing our sensory range.
 
  • #55
I agree that, in a sense, you can be surprised that you are you and not someone else.

For example, if one theory predicts that there exist 99 left-handed people and 1 right-handed person, and another theory predicts that there exists 1 left-handed person and 99 right-handed people (all isolated from each other), then if you find that you are left-handed, this confirms the first theory. If you believed the second theory was true, your left-handedness should surprise you. Or at least, it's not obvious that it shouldn't.
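A minimal Bayes computation of this example (equal priors and uniform self-sampling within each theory are my assumptions; the self-sampling step is exactly the kind of move contested elsewhere in this thread):

```python
# Two theories about the population, updated on "I am left-handed".
# Self-sampling assumption: I am a uniformly random member of whichever
# population actually exists.
p_left_A, p_left_B = 0.99, 0.01    # P(left | Theory A), P(left | Theory B)
prior_A = prior_B = 0.5            # assumed equal priors

post_A = (p_left_A * prior_A) / (p_left_A * prior_A + p_left_B * prior_B)
print(f"P(Theory A | I am left-handed) = {post_A:.2f}")   # -> 0.99
```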

By the way, I second Robin Hanson's recommendation (in the other thread) of Nick Bostrom's book at anthropic-principle.com. It's a very confusing subject, I think.
 
  • #56
straycat said:
Is it possible to be surprised to find oneself at a particular time or place? Of course it is.
Example: given everything that I know about the world as it is today, I would be surprised to find myself in, say, Costa Rica tomorrow. (Pleasantly, perhaps ... ).
If you do wind up in Costa Rica tomorrow, then I predict that you will not be surprised by it ... unless you black out a lot (does that count?).
If you suddenly find yourself in Costa Rica, then it would be a surprising realization. (Or if you've been right-handed your whole life and suddenly find yourself doing everything left-handed, or if your hair spontaneously turns green ... that would probably be surprising.) :smile:

But it isn't surprising that there is a quantum theory or that it employs something called the Born rule.
 
  • #57
Sherlock said:
If you do wind up in Costa Rica tomorrow, then I predict that you will not be surprised by it ... unless you black out a lot (does that count?).
If you suddenly find yourself in Costa Rica, then it would be a surprising realization. :smile:
I'm just trying to imagine what sequence of events could possibly :confused: result in placing me in Costa Rica :cool: tomorrow. You know, the stuff novels are made of, like the CIA suddenly needs my unique blend of intelligence and good looks :blushing: to solve the greatest threat known to mankind , which happens to be in Costa Rica, and all by Sunday ... Any scenario I come up with is, well, surprising :redface: -- of course, it is surprising, given the state of the world right now , from which we can calculate that the probability of the above sequence of events is about 0.01%, but with a bunch more "0"'s. :biggrin:
 
  • #58
straycat said:
I'm just trying to imagine what sequence of events could possibly :confused: result in placing me in Costa Rica :cool: tomorrow. You know, the stuff novels are made of, like the CIA suddenly needs my unique blend of intelligence and good looks :blushing: to solve the greatest threat known to mankind , which happens to be in Costa Rica, and all by Sunday ... Any scenario I come up with is, well, surprising :redface: -- of course, it is surprising, given the state of the world right now , from which we can calculate that the probability of the above sequence of events is about 0.01%, but with a bunch more "0"'s. :biggrin:
Just go with the flow, I say ... and good luck in making it to Costa Rica. It can be a fun place, and your dollars will buy more there than here, but I wouldn't want to live there. :smile:

Since my earlier posts I've read up on Everett's relative-state formulation. I'd classify the approach as interesting, but misguided ... and unfinished. Some of the resolutions to it are pretty wild. Definitely not good physics though. (as if I would know :confused:)

Anyway, wave function collapse and action-at-a-distance are pseudo-problems in my estimation.
 
  • #59
vanesch said:
Now, as people here know, I'm rather a proponent of Everett (as long as no natural physical mechanism for a collapse is found)...

After following this thread a bit, I am beginning to understand better the attraction of the MWI. Reading Robin's "Mangled Worlds" page helped a lot too.

So here is my question: we have 2 entangled photons and perform a measurement on one at T=1 and a measurement on the other at T=2, let's assume they are more or less in the same location when the measurement is performed (perhaps we use coiled fiber optics on one so that the second measurement is delayed). These 2 particles were in a superposition. Can the measurement at T=1 be considered more fundamental in some respect than the one at T=2? I.e. did one "cause" the wave collapse while the other didn't?

In other words: does the branching (world counting) happen at T=1 and THEN at T=2? Or is it, half the time, T=1 then T=2, and the other half of the time T=2 then T=1?

Also: is there any difference in how the MWer sees this as opposed to the orthodox QM view?
 
  • #60
straycat said:
Try riddling this. Have you ever wondered why the year is 2005 and not, say, 1975, or 1224, or 3001? Or have you ever wondered why am I here, and not there? iow: why is spacetime point a located at spacetime point a, and not somewhere/sometime else?


This was exactly the point I discussed in the epistemology forum (consciousness as an active ...) ! Indeed, given the "ontology" of the 4-d manifold in GR, one could then say that a brain is a 4-d structure (static and timeless) and your subjective world only "experiences" one timeslice of it.
 
