My paper on the Born rule

In summary: this claim is not justified. There are many other possible probability rules that could be implemented in this way, and it is not clear which one is the "most natural." The paper presents an alternative projection postulate that is consistent with unitary symmetry and with measurements being defined in terms of projection operators. However, it does not seem to add sufficiently to the criticisms of Deutsch's proposal to justify publication.
  • #71
no pruning

vanesch said:
That is the holy grail of MWI proponents, but if NO pruning or cutoff is introduced, everything seems to point to the number of descendants being independent of the Hilbert norm, and as such the APP will result (which is kind of logical: if you apply the APP on the "lowest level" then it will "propagate upward"). If you apply the "Born rule" to the "worlds", then you will get the Born rule also for the outcomes upward.
However, what people noticed is that if you apply the APP to an arborescence with a cutoff on the Hilbert norm, the NUMBER of descendants is then (under appropriate conditions) more or less proportional to the Hilbert norm of the "parent" branch.
This is what Hanson (present here) tries to establish with his mangled worlds proposition, which introduces a kind of natural cutoff.
There are other propositions of different kinds, but as far as I understand, one always needs something extra to "prune" the APP in order to get out something that looks like the Born rule.
Actually, my proposal involves anti-pruning, i.e. extra branching. There's an additional non-linear decoherence process which tends, in the long time limit, to make the average sub-branch (world) measures on each macro branch equal. Thus the limiting world counts on each branch asymptotically approach proportionality to measure.
 
  • #72
vanesch said:
... Indeed, given the "ontology" of the 4-d manifold in GR, one could then say that a brain is a 4-d structure (static and timeless) and your subjective world only "experiences" one timeslice of it.

I agree with this ontology :approve:

David
 
  • #73
vanesch said:
What I wanted to point out is that in assigning probabilities of our subjective experiences to different worlds, there is no a priori necessity to have them being given by a uniform distribution. I agree that it would be a "natural" thing to do, but if that gives problems with what is observed, I don't see what is so impossible to postulate anything different.

My stance on this right now is that we can, indeed, postulate the APP, or the Born rule, or whatever. In fact, for the last 80 years, this is exactly what we have done! (postulated the Born rule).

So my argument for the APP is simply that it is a symmetry principle, perhaps a deeper one than most people have appreciated. Similar to the principle of relativity. So we should just assume it and see if any new physics suggests itself. (This approach has worked in the past, why not again?) If not, then we can go back to the old ways.

vanesch said:
As I said in the beginning of this discussion, if "conscious experience" were to be strictly connected to a physical object such as a brain, we should experience a kind of "god's viewpoint" and have all these states in parallel.

I don't follow your reasoning. Assume that conscious experience is strictly connected to a physical object. So what do you mean that we should "have all these states in parallel?" Do you mean my consciousness should experience parallel, unconnected states? You seem to be implying that my consciousness should have access to god's viewpoint -- but this contradicts the starting assumption, that my consciousness is connected to (by definition) a physical object.

David
 
  • #74
DrChinese said:
In other words: does the branching (world counting) happen at T=1 and THEN at T=2? Or is it half the time T=1 then T=2, and the other half of the time calculated as T=2 then T=1?

One observer may see event 1 happening prior to event 2, whereas another observer would see event 2 happening prior to event 1. This is standard relativity for the analysis of spacelike separated events.

Now when you draw out the tree branching diagram, you of course have to know which event happened first. So you have to keep in mind that according to Everett's original proposal, all of your calculations are done relative to the state of some particular observer. If you pick (say) Bob to be the observer, then (say) event 1 happens first. But if you pick (say) Alice to be the observer, then (say) event 2 happens first. Therefore, each observer has his/her own "tree diagram."

This is why Everett called his scheme the "relative state" formulation. I have always liked this phrase better than "multiple worlds."

David
 
  • #75
vanesch said:
Now, the hope of many people is that if somehow you can introduce a CUTOFF based upon the Hilbert norm, that if you only count worlds ABOVE this cutoff, and do not count those underneath, and if the branching follows a certain pattern, ...

OK, I'm feeling a bit dense. What exactly is a cutoff? Above and below what, exactly? I mean, what is the parameter that we refer to when we say a world is below or above the cutoff?

David
 
  • #76
DrChinese said:
... is there any mechanism to the effect that: equivalent worlds consolidate later (so there aren't quite as many branches) ? Seems like it would be nice to tidy things up later if that were possible.

Worlds can fuse as well as split, although the second law of thermodynamics implies that splitting happens "more" than fusion. See Q17 of the Everett FAQ:
http://www.hedweb.com/manworld.htm

David
 
  • #77
Probabilities and preferences in Everettian QM...

RobinHanson said:
Even if world counts are incoherent, I don't see that the Everett approach gives us the freedom to just pick some other probabilities according to convenient axioms. An objective collapse approach might give one freedom to postulate the collapse probabilities, but in the Everett approach pretty much everything is specified: the only places remaining for uncertainty are regarding particle properties, initial/boundary conditions, indexical uncertainty (i.e., where in this universe are we), and the mapping between our observations and elements of the theory (i.e., what in this universe are we). We might have some freedom to choose our utilities (what we care about), but such freedom doesn't extend to probabilities.

Hello Robin--- Seems reasonable to me to have a single thread on deriving the Born rule with the MWI, so I'll just go ahead and reply! Perhaps what I'm about to say is just rewording what you meant, but I'm not sure. Basically, I tend to agree that within the Everett approach, "we might have some freedom to choose our utilities (what we care about) ... ". Essentially, I'd argue we DO have this freedom (to choose different preference orderings over "quantum lotteries"), and that *some* choices of preference orderings may be representable by an additional utility function attached to "decohered outcomes" (or whatever is chosen as "worlds"---definite experiential states, perhaps), plus some "probabilities" for outcomes---i.e. nonnegative numbers adding up to one. These probabilities function solely as a way of representing preferences over "quantum lotteries"--- evolutions leading to superpositions of decohered alternatives (entangled with the rest of the universe). So, they are not probabilities in the sense of standard classical decision theory. But OK, we can still perhaps "choose them" consistent with (a weak version of) "many-worlds". Choosing probabilities *is* choosing preferences, because what *IS*, is the superposition. These probabilities just help "represent" our attitude towards that. What the "quantum suicide" style arguments point to is that it isn't clear our preferences towards such things shouldn't depend crucially on the fact that it is a superposition, and not a classical lottery... possibly not even be representable in "standard" ways as analogous to those towards a classical lottery. (Payoffs may appear to influence probabilities, for instance.) Who's to say this would be irrational? The Wallace/Deutsch style arguments claim that only a preference ordering representable by the Born rule and maximization of some (variable) utility function can be a rational choice in this situation, but I just don't find them convincing.

Incidentally, I've long maintained there was something "funny" about probabilities in the Everett interpretation, but Hilary Greaves and David Wallace have really helped me pinpoint it. I used to like to write as if the probabilities were probabilities of "perspectival" facts, i.e., probability that "I will perceive myself to end up in branch X". However, all those perspectives are actually there (under MWI), in superposition, and ahead of time, there is no fact about which branch I will be in, and indeed, from the perspective from which the decision is made there will NEVER be a fact about which branch I will end up in, because "I" will be continued, having different experiences, in all branches. So it isn't really legitimate to invoke any part of classical decision theory under uncertainty here --- axioms that one might invoke that are formally analogous to those of classical decision theory are just that: formally analogous, but having a very different content, since they refer to quantum lotteries that have entangled superpositions, not definite but currently unknown outcomes, as results. (This certainly undermines one of Deutsch's original claims, which was to have used classical decision theory to derive the Born rule.) ["Quantum suicide" arguments say: suppose we face an experiment having one really desirable though unlikely outcome, while the world is destroyed if it doesn't occur--- then wouldn't you prefer that experiment to doing nothing? It's an outlandish situation, of course, but the point it makes is nonetheless worthwhile---- that having a component of something *definitely existing* in a branch of a superposition might be valued in a way very different from its occurrence as one of many possible outcomes, a possibility we might want to take into account even in less extreme situations, and which might make it hard to represent nonetheless arguably reasonable preferences by expected utilities over worlds at all... ]

This summer, David Wallace and I were involved in a short "panel discussion" at a conference about the derivation of probabilities in the MWI. I argued that the "measurement neutrality" sorts of arguments, involving claims that certain things (like the color of the dial on the measuring device, etc...) shouldn't affect the probabilities of measurement outcomes, were analogues of assumptions in classical decision theory (about being able to condition different prizes on events without affecting their probabilities). But, I argued, unlike in the classical case, where we may make auxiliary assumptions about *some* beliefs (independence of the likelihood of events from the prizes conditioned on them, in many situations) and *some* desires (which prizes we like better), in the quantum case the whole question of how physics gives us probabilities is up for grabs, so we can't just assume that things that clearly are physical differences (dial colors, etc...) just CAN'T affect probabilities. The whole question is what beliefs we should/will assign. David (W) pointed out, though, that there is in fact no belief component here... it's all desire. He was right... and that's pretty much what I'd recognized (stimulated directly and indirectly by Hilary) in other contexts, and what I said above in this post. Now, sure, it's a bad theory to assume that dial color will routinely affect probabilities, and we'd be hard pressed to come up with a reasonable theory of its effects. But it may just be the case that *nothing* really forces us, in terms of pure rationality, to assign ANY probabilities in this case, from an Everettian point of view. There's going to be this superposition, or that superposition, evolving. You choose. What is the "scientific" question here?
Well, OK, you can say science must be a guide to action, so it had better at least have some bearing on choice between quantum lotteries, otherwise what's the point. So, to make it (maybe) agree with our erstwhile preferences over quantum lotteries, the ones we had when we thought they had definite outcomes, we could just say by fiat that it should look like utility-maximization with the Born probabilities. Or you could say that the postulates that were hoped to be part of "pure rationality" are to be taken as part of Everettian quantum physics conceived of as a guide to action. But the "quantum suicide" arguments make one question whether one can even do that.
I guess this also relates to my other issue, about "reconstructing the history of science" in light of no experiment ever having had a definite outcome. What we thought were genuine probabilities of outcomes have gotten reinterpreted as perceptions of being in one branch of a superposition... I agree Everettians may want to reconstruct this process as one of discovering "the right sort of preference ordering to have over these superpositions", but, while perhaps not impossible, it strikes me as tricky to go back over a process of scientific reasoning based in part on definite outcomes and "bayesian" probabilistic reasoning, and justify it, or even understand it, in light of the wholly new attitude toward "outcomes" that Everettism represents.
 
  • #78
hbarnum said:
We DO have this freedom (to choose different preference orderings over "quantum lotteries"), and *some* choices of preference orderings may be representable by an additional utility function attached to "decohered outcomes" (or whatever is chosen as "worlds"---definite experiential states, perhaps), plus some "probabilities" for outcomes---i.e. nonnegative numbers adding up to one. These probabilities function solely as a way of representing preferences over "quantum lotteries"--- evolutions leading to superpositions of decohered alternatives (entangled with the rest of the universe). So, they are not probabilities in the sense of standard classical decision theory. ... I used to like to write as if the probabilities were probabilities of "perspectival" facts, i.e., probability that "I will perceive myself to end up in branch X". However, all those perspectives are actually there (under MWI), in superposition, and ahead of time, there is no fact about which branch I will be in, and indeed, from the perspective from which the decision is made there will NEVER be a fact about which branch I will end up in, because "I" will be continued, having different experiences, in all branches. So it isn't really legitimate to invoke any part of classical decision theory under uncertainty here ... in the quantum case the whole question of how physics gives us probabilities is up for grabs, so we can't just assume that things that clearly are physical differences (dial colors, etc...) just CAN'T affect probabilities. The whole question is what beliefs we should/will assign. David (W) pointed out, though, that there is in fact no belief component here... it's all desire. He was right... and that's pretty much what I'd recognized (stimulated directly and indirectly by Hilary) in other contexts, and what I said above in this post.

As I said in post #64 in this thread,

RobinHanson said:
This problem is related to a more general problem that has received more attention, that of priors over indexical uncertainty. See Bostrom's book: http://www.anthropic-principle.com/book/

You are using "I" to refer to your entire tree of "selves" at different worlds and times. One can also use "I" to refer only to a particular self at a particular time and world. Such a self can be uncertain about which self it is. This is indexical uncertainty. Reasoning about such uncertainty is central to reasoning about the Doomsday argument, for example (see the Bostrom book). Indexical uncertainty is possible even when the state of the universe as a whole is known with certainty. So classical decision theory can be directly relevant.

You and Wallace and others are too distracted with the idea of expressing preferences over future actions. I instead want to draw your attention back to physicists' past tests of the Born rule. We need a conceptual framework for talking about what beliefs such tests have provided empirical support for or against. The framework of indexical uncertainty seems to me a reasonable one for having such a discussion. Given a prior over indexical possibilities, and conditional on a many worlds physics, one can predict the chances of seeing any particular measurement frequency, and one can then compare that to the observed frequencies.

Within this framework, if one uses a uniform indexical prior, there is then a conflict with the Born rule observations. Without some fix, this would seem to be evidence against the many worlds view. (This is what Hilary Putnam argues in the latest BJPS.)
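
To make that concrete, here is a minimal sketch (my own illustration, with an arbitrary Born weight of 0.9 and N independent binary measurements assumed) of the two predictions for the frequency a randomly selected observer-branch records:

```python
# Minimal sketch: compare the mean measurement frequency seen by a randomly
# chosen branch under (a) a uniform indexical prior over branches (world
# counting) and (b) a Born-weighted prior. N and born_p are arbitrary choices.
from math import comb

N = 100          # number of repeated measurements
born_p = 0.9     # |a|^2 for the "+" outcome

def mean_frequency(weight):
    """Average fraction of '+' results over branches, where a branch with
    k '+' results gets prior weight(k); comb(N, k) counts such branches."""
    total = sum(comb(N, k) * weight(k) for k in range(N + 1))
    mean_k = sum(k * comb(N, k) * weight(k) for k in range(N + 1)) / total
    return mean_k / N

uniform = lambda k: 1.0                                # world counting (APP)
born = lambda k: born_p**k * (1 - born_p)**(N - k)     # squared-amplitude weight

print(mean_frequency(uniform))   # 0.5 -> conflicts with observed frequencies
print(mean_frequency(born))      # 0.9 -> matches the Born rule
```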
 
  • #79
vanesch said:
I've been thinking, apart from the PP and the APP, about yet another "assignment" of probabilities of observation that does not seem to contradict the postulates of unitary QM.

The Born rule states that the probability associated with the n^th outcome is |a_n|^2.

So how about this alternate rule: probability = |a|^3? :rolleyes: Or = |a|^n? :bugeye: or any arbitrary f(a)?

If we were to substitute |a|^2 with any arbitrary f(a), then would this violate unitary QM?? :confused:

vanesch said:
I should check it, but I think that the "maximum length" world is ALSO compatible with unitary QM:

Take a finite number of "worlds" or outcomes or whatever; well, you will experience the one with the highest Hilbert norm with certainty. Let's call it the MPP (Maximum Projection Postulate). With the MPP, the resulting quantum theory is in fact deterministic: an observer will ALWAYS observe the outcome with the maximum Hilbert norm. This will of course also not lead to the Born rule, but I think it is just as well a logically consistent quantum theory.

How about a Minimum Projection Postulate? Or, say, a "half-max" projection postulate?

David
 
  • #80
On indexical uncertainty

RobinHanson said:
You are using "I" to refer to your entire tree of "selves" at different worlds and times. One can also use "I" to refer only to a particular self at a particular time and world. Such a self can be uncertain about which self it is.
This is indexical uncertainty. Reasoning about such uncertainty is central to reasoning about the Doomsday argument, for example (see the Bostrom book). Indexical uncertainty is possible even when the state of the universe as a whole is known with certainty. So classical decision theory can be directly relevant.

Hi! I think I was mostly using "I" to refer to particular selves at particular times in that post... though which self and which time depends on which point in the post. However, you're right that I was tending to reject an "indexical uncertainty" interpretation of probabilities in the MWI/RSI (RSI = relative state interpretation, my preferred term, as I notice it is for some other posters here too). Earlier in my thinking on these issues (I wrote a long paper rejected by FoP in 1990, which I never bothered to publish; maybe I'll post a scan when I get a website up), I had vacillated between viewing the probabilities as essentially similar to classical decision-theoretic probabilities, concerning something like what you call "indexical uncertainty", and feeling that this way of viewing them was somehow fishy.

My way of interpreting the RSI is as subjective---the unity of an "I" being given by some sort of unity and structure of mental content through time---just the sort of unity that I would argue is disrupted, except *within* each branch, by performing a quantum experiment. So there is only one "I" before the branching, lots of "I"'s afterwards, on my view. Actually it's a bit subtle, since each "I" afterward is mentally unified with the single "I" before.

However, I'll have a look at Bostrom's book, and at anything of yours I can find online, to see if it challenges this view. Bayesian approaches to anthropic arguments are something I've always thought would be interesting to look into, too. Thanks also for the mention of Putnam's recent paper in your post 64 (British Journal for the Philosophy of Science?), which I'll look at as well. My views on uniform priors over discrete alternatives actually date back to a paper I wrote for an undergraduate seminar taught by Putnam... I rejected, and still do, the notion that there is a single natural "objectively right" way of dividing up the world into discrete alternatives, associated with a natural "objectively right" uniform prior. (Convincing Schack, Fuchs, and Caves of this, at a time when at least some of them inclined towards thinking there could be objective priors associated with e.g. group invariances (a la Ed Jaynes), is probably my main contribution to their evolving views on subjective probabilities and their attempt to view quantum states as subjective in a sense analogous to probabilities.)

Actually, the main beef the referee had with my 1990 paper may be related to the issues surrounding indexical uncertainty. He or she didn't see how it differed from the Albert and Loewer "Many Minds" version that had recently appeared. I thought the idea of "Branching Minds" was quite distinct from Albert and Loewer's "Many Minds with a measure over them", but didn't bother to argue. (I didn't know about griping to the editor then...)

RobinHanson said:
You and Wallace and others are too distracted with the idea of expressing preferences over future actions. I instead want to draw your attention back to physicists' past tests of the Born rule. We need a conceptual framework for talking about what beliefs such tests have provided empirical support for or against. The framework of indexical uncertainty seems to me a reasonable one for having such a discussion. Given a prior over indexical possibilities, and conditional on a many worlds physics, one can predict the chances of seeing any particular measurement frequency, and one can then compare that to the observed frequencies.

Well, I'm reluctant to admit that's a distraction, because I tend to view the very meaning of the probabilistic "beliefs" that such tests provide, or fail to provide, support for, as inextricably bound up with the way they help structure preferences over future actions. But I heartily agree that understanding past tests of the Born rule... and I would go beyond that, to the whole process through which QM, including the Born rule, was adopted... is important to a relative-state theory. I'm not so sure that it makes sense to do it solely "conditional on a many worlds physics", though, since on my view the reconstruction of the reasoning process should include how we got to a many-worlds view at all. Nor, for the reasons I gave above, am I convinced that indexical uncertainty is the right framework for it... which is why I'm somewhat more pessimistic about whether it can be done coherently at all. But I'll do some reading before saying more...

Cheers!

Howard
 
  • #81
hbarnum said:
I had vacillated between viewing the probabilities as essentially similar to classical decision-theoretic probabilities, concerning something like what you call "indexical uncertainty", and feeling that this way of viewing them was somehow fishy. ... However, I'll have a look at Bostrom's book, and at anything of yours I can find online, to see if it challenges this view. Bayesian approaches to anthropic arguments are something I've always thought would be interesting to look into, too. ... I rejected, and still do, the notion that there is a single natural "objectively right" way of dividing up the world into discrete alternatives, associated with a natural "objectively right" uniform prior. ... I heartily agree that understanding past tests of the Born rule... is important to a relative-state theory. I'm not so sure that it makes sense to do it solely "conditional on a many worlds physics", though, since on my view the reconstruction of the reasoning process should include how we got to a many-worlds view at all. Nor, for the reasons I gave above, am I convinced that indexical uncertainty is the right framework for it... which is why I'm somewhat more pessimistic about whether it can be done coherently at all. But I'll do some reading before saying more...

My reference to "conditional on a many worlds physics" was meant to refer to setting up an application of Bayes' rule, for which one would of course also have to do the analysis conditional on other assumptions. That is, we want to compare how well the different approaches do at predicting the observed measurement frequencies. To do that, we need to get the relative state approach to make predictions, using minimal assumptions about utilities.

My quantum papers do not explicitly formulate these problems in indexical terms, though that is implicitly what I have in mind. Bostrom's book and papers are more explicit about such things, though even he could stand to be more explicit.
 
  • #82
straycat said:
One observer may see event 1 happening prior to event 2, whereas another observer would see event 2 happening prior to event 1. This is standard relativity for the analysis of spacelike separated events.

Now when you draw out the tree branching diagram, you of course have to know which event happened first. So you have to keep in mind that according to Everett's original proposal, all of your calculations are done relative to the state of some particular observer. If you pick (say) Bob to be the observer, then (say) event 1 happens first. But if you pick (say) Alice to be the observer, then (say) event 2 happens first. Therefore, each observer has his/her own "tree diagram."
This is why Everett called his scheme the "relative state" formulation. I have always liked this phrase better than "multiple worlds."

David

The times are different, but the observer and location are the same, so that relativistic ordering is not a factor.
 
  • #83
mbweissman said:
The branching order in MW is not important, fortunately, since the order of the events is generally not a Lorentz invariant. This issue is much less problematic for MWI than for collapse pictures. No choices are made in MWI, unlike collapse, so no superluminal communication is needed to keep spacelike separated choices coordinated in Bell-type experiments.

Interesting... Vanesch thought the branching would follow the order. You are thinking perhaps the same, but that the outcome wouldn't matter (i.e. the distinction is not important). So follow this example and see if you still agree with that assessment.

In a normal Bell test (see how I cleverly come back to this :smile: ) you have 2 entangled particles. Measure Alice at T=1 at angle setting 0 degrees, and Bob at T=2 at angle setting 120 degrees. You get a .25 correlation rate regardless of the order (i.e. reversing the order does not change the correlation between Alice and Bob). This is standard in both MWI and QM (keep in mind that Alice and Bob are in the same location and reference frame).

Now add a new twist: 3 (or 4 or more) entangled photons. You would think that wouldn't change anything, but it might. Measure Alice at T=1 at angle setting 0 degrees, and Bob at T=2 at angle setting 120 degrees. You get a .25 correlation rate, just as before. But if we also measure Charlie at T=2 at angle setting 240 degrees, you will also get a correlation rate of .25. No surprise there either.

But in the last example, Bob and Charlie have a correlation rate between their results of .625, or over DOUBLE what we would expect! The reason this would occur (assuming that the rule applies in the order of world branching) is that at T=1 the polarization of Alice is known. Subsequent results must match this fact. The outcomes for Bob and Charlie - once Alice is known - are no different than if we had used light of known polarization to create Bob and Charlie. So where does the .625 value come from?

When Alice=+:
.0625: Bob=+, Charlie=+ (25% x 25%)
.1875: Bob=+, Charlie=- (25% x 75%)
.1875: Bob=-, Charlie=+ (75% x 25%)
.5625: Bob=-, Charlie=- (75% x 75%)

Add up the two cases in which Bob and Charlie are the same and you get .625. Note that the values in the first column are required so that the relationships between (Alice and Bob) and (Alice and Charlie) are intact.
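
Here is a minimal sketch of the arithmetic above (assuming, as stated, that once Alice is known, Bob and Charlie behave independently and each matches her with probability cos^2(120 degrees) = .25):

```python
# Sketch of the sequential-collapse arithmetic: conditional on Alice = +,
# enumerate Bob and Charlie as independent measurements on
# polarization-definite light.
from math import cos, radians

p_match = cos(radians(120)) ** 2          # 0.25: chance of agreeing with Alice

outcome_probs = {'+': p_match, '-': 1 - p_match}
p_bc_same = sum(outcome_probs[b] * outcome_probs[c]
                for b in '+-' for c in '+-' if b == c)
print(p_bc_same)                          # 0.0625 + 0.5625 = 0.625
```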

On the other hand, if Alice's measurement is delayed until T=3, then Bob and Charlie will see the normal coincidence rate of .25 between them. So changing Alice from being the first observed to the last observed would cause the coincidence rate between Bob and Charlie to change.

I believe it should be possible to actually perform this experiment - it is similar to a multi-photon experiment performed (Eibl, Gaertner, Bourennane, Kurtsiefer, Zukowski, Weinfurter: Experimental observation of four-photon entanglement from down-conversion). I would guess - not entirely sure - that orthodox QM is silent on this point. It is hard for me to picture what the expected result should be.

In other words: if the predicted branching actually occurs in order, I believe this experiment should confirm the phenomenon.
 
  • #84
DrChinese said:
Interesting... Vanesch thought the branching would follow the order. You are thinking perhaps the same, but that the outcome wouldn't matter (i.e. the distinction is not important). So follow this example and see if you still agree with that assessment.
In a normal Bell test (see how I cleverly come back to this :smile: ) you have 2 entangled particles. Measure Alice at T=1 at angle setting 0 degrees, and Bob at T=2 at angle setting 120 degrees. You get a .25 correlation rate regardless of the order (i.e. reversing the order does not change the correlation between Alice and Bob). This is standard in both MWI and QM (keep in mind that Alice and Bob are in the same location and reference frame).
Now add a new twist: 3 (or 4 or more) entangled photons. You would think that wouldn't change anything, but it might. Measure Alice at T=1 at angle setting 0 degrees, and Bob at T=2 at angle setting 120 degrees. You get a .25 correlation rate, just as before. But if we also measure Charlie at T=2 at angle setting 240 degrees, you will also get a correlation rate of .25. No surprise there either.
But in the last example, Bob and Charlie have a correlation rate between their results of .625, or over DOUBLE what we would expect! The reason this would occur (assuming that the rule applies in the order of world branching) is that at T=1 the polarization of Alice is known. Subsequent results must match this fact. The outcomes for Bob and Charlie - once Alice is known - are no different than if we had used light of known polarization to create Bob and Charlie. So where does the .625 value come from?
When Alice=+:
.0625: Bob=+, Charlie=+ (25% x 25%)
.1875: Bob=+, Charlie=- (25% x 75%)
.1875: Bob=-, Charlie=+ (75% x 25%)
.5625: Bob=-, Charlie=- (75% x 75%)
Add up the two cases in which Bob and Charlie are the same and you get .625. ...
It's hard to follow the example in detail, but the result cannot be right. If it were, then remote choices of whether to measure Alice would change the Bob-Charlie correlation. With a steady source of these entangled particles, somebody on a remote planet (spacelike separated from our measurements here) could send signals to us by changing our BC correlations by measuring A or not. That sort of information-bearing superluminal communication creates causal havoc.
All sorts of similar multi-particle entangled experiments have been performed, and none give superluminal information transfer.
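
For concreteness, here is a minimal sketch (my own illustration; the random state and dimensions are arbitrary choices) of the general no-signalling fact being invoked: for any tripartite pure state, the Bob-Charlie reduced density matrix, and hence every BC correlation, is unchanged by a measurement on Alice whose result is not communicated, in any basis.

```python
# Sketch of the no-signalling theorem: measuring A (and averaging over its
# outcomes) leaves the BC reduced density matrix unchanged, in any basis.
import numpy as np

dA, dBC = 2, 4                                   # qubit A; Bob+Charlie as one 4-dim system
rng = np.random.default_rng(0)
psi = rng.normal(size=(dA, dBC)) + 1j * rng.normal(size=(dA, dBC))
psi /= np.linalg.norm(psi)                       # random tripartite pure state

# No measurement on A: trace out A to get rho_BC.
rho_bc = psi.T @ psi.conj()

# Projective measurement on A in the computational basis, outcomes averaged.
rho_after = sum(np.outer(psi[a], psi[a].conj()) for a in range(dA))
print(np.allclose(rho_bc, rho_after))            # True

# Same with an arbitrary rotated measurement basis on A.
U = np.linalg.qr(rng.normal(size=(dA, dA)) + 1j * rng.normal(size=(dA, dA)))[0]
psi_rot = U @ psi                                # amplitudes in the rotated A basis
rho_rot = sum(np.outer(psi_rot[a], psi_rot[a].conj()) for a in range(dA))
print(np.allclose(rho_bc, rho_rot))              # True: no superluminal signal
```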
 
  • #85
agreed

RobinHanson said:
As I said in post #64 in this thread,

You and Wallace and others are too distracted with the idea of expressing preferences over future actions. I instead want to draw your attention back to physicists' past tests of the Born rule. We need a conceptual framework for talking about what beliefs such tests have provided empirical support for or against. The framework of indexical uncertainty seems to me a reasonable one for having such a discussion. Given a prior over indexical possibilities, and conditional on a many worlds physics, one can predict the chances of seeing any particular measurement frequency, and one can then compare that to the observed frequencies.

Within this framework, if one uses a uniform indexical prior, there is then a conflict with the Born rule observations. Without some fix, this would seem to be evidence against the many worlds view. (This is what Hilary Putnam argues in the latest BJPS.)

Exactly! Let's talk about real data, i.e. counts of past outcomes, not unmeasurable utility functions.
 
  • #86
mbweissman said:
It's hard to follow the example in detail, but the result cannot be right. If it were, then remote choices of whether to measure Alice would change the Bob-Charlie correlation. With a steady source of these entangled particles, somebody on a remote planet (spacelike separated from our measurements here) could send signals to us by changing our BC correlations by measuring A or not. That sort of information-bearing superluminal communication creates causal havoc.
All sorts of similar multi-particle entangled experiments have been performed, and none give superluminal information transfer.

Oh, I definitely agree that it can't work this way for exactly the reason you describe. Although the experiment still poses some problems with standard theory, that is a separate subject and I don't want to get away from the MWI focus of this thread.

My question was simply whether MWI took a stance on the ordering - it's not something that has ever needed a lot of thought. However, with the advent of new multi-entanglement scenarios I predict it will get some attention eventually.
 
  • #87
branching ordering

DrChinese said:
My question was simply whether MWI took a stance on the ordering - it's not something that has ever needed a lot of thought. However, with the advent of new multi-entanglement scenarios I predict it will get some attention eventually.

If somehow the probabilities could be properly justified in a unitary MWI, I don't see why the ordering would have any significance. For non-unitary pictures, along the lines I suggested, this issue could be more serious and problematic.
 
  • #88
straycat said:
The Born rule states that the probability associated with the n^th outcome is |a_n|^2.
So how about this alternate rule: probability = |a|^3? :rolleyes: Or = |a|^n? :bugeye: or any arbitrary f(a)?
If we were to substitute |a|^2 with any arbitrary f(a), then would this violate unitary QM?? :confused:
How about a Minimum Projection Postulate? Or, say, a "half-max" projection postulate?
David

The f(a) must be re-normalized each time, but I think it is feasible. However, don't forget that probabilities assigned to a complete and mutually exclusive set of projectors defined over unitary quantum theory must satisfy 2 conditions in order for the system to be consistent:

1) they must remain invariant under a unitary transformation (so all functions of the Hilbert norm, and of the number of branches, are OK)

2) they must give 100% certainty when EIGENSTATES are considered
(this is where your minimum or half-max postulate won't do, and where the functions of the Hilbert norm have to be such that this is true). This is because this property is a defining property of the Hilbert space of states in the first place.
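
A minimal sketch of checking condition 2 (illustrative only; the candidate rules and test state are arbitrary choices):

```python
# Condition (2): an eigenstate of the measured observable must be observed
# with certainty. Renormalized functions of the Hilbert norm with f(0) = 0
# pass; a "minimum projection postulate" fails, since for an eigenstate the
# smallest-norm branch has amplitude zero.
import numpy as np

def renormalized(f, amps):
    w = np.array([f(abs(a)) for a in amps])
    return w / w.sum()

def min_pp(amps):
    """Deterministic rule: probability 1 on the smallest-norm branch."""
    p = np.zeros(len(amps))
    p[np.argmin([abs(a) for a in amps])] = 1.0
    return p

eigenstate = [1.0, 0.0, 0.0]   # system already in the first outcome's eigenstate
print(renormalized(lambda r: r**2, eigenstate))   # Born : [1. 0. 0.] -> OK
print(renormalized(lambda r: r**3, eigenstate))   # |a|^3: [1. 0. 0.] -> OK
print(min_pp(eigenstate))                         # MinPP: [0. 1. 0.] -> fails
```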

cheers,
Patrick.
 
  • #89
hbarnum said:
Incidentally, I've long maintained there was something "funny" about probabilities in the Everett interpretation, but Hilary Greaves and David Wallace have really helped me pinpoint it. I used to like to write as if the probabilities were probabilites of "perspectival" facts, i.e., probability that "I will perceive myself to end up in branch X".
I'm probably still in the same mindset of this "me" (not my body, but my subjective experienced world) ending up in branch X, and I'm not sure that this is a "wrong" viewpoint.
However, all those perspectives are actually there (under MWI), in superposition, and ahead of time, there is no fact about which branch I will be in, and indeed, from the perspective from which the decision is made there will NEVER be a fact about which branch I will end up in, because "I" will be continued, having different experiences, in all branches.
What's wrong with "your current subjective experience-world getting into branch number 5 with probability X" ? I mean, a kind of continuity of the subjective experience, while the other branches are "new" worlds ?
I like to compare this to the following hypothetical (purely classical) situation. Imagine it is possible to make a perfect copy of your body. According to the above reasoning, the two bodies are two "I"'s. But you know that this is not true! You will go into the copying machine, and you will come out of it, and that will still be "you", as if you had gone into, say, your car or your bathroom; the copy will be a totally different person, with exactly the same memories and so on, but this will not affect YOUR subjective experience.
Now, imagine the following situation: someone proposes to make you rich, if you allow a copy of you to be made which will then be tortured slowly to death. Would you accept ?
Again: imagine that someone proposes to make a copy of yourself which will be made rich, while the original you will be tortured to death. Would you accept ?
Would you give equal probabilities to both possibilities ?
 
  • #90
DrChinese said:
Interesting... Vanesch thought the branching would follow the order.

Yes, that's because you insisted that the optical fiber was wound up and that the two detectors were essentially in the same place. There is only a possible ambiguity in time ordering when the two events are spacelike separated. When two events are timelike connected (as I understood was the case here), there is no ambiguity.

Also, the branching only occurs with respect to the physical structure of the observer (considered "local"). There can be "common parts" of remote physical structures which have nothing to do with it:

(|me1>|closestuff1> + |me2>|closestuff2>)(|farawaystuff>|Joefaraway>)

is two branches for "me" and one branch for "Joefaraway".

If the unitary physics is local, then entanglement can only occur with stuff that is local (afterwards, of course, that stuff can be taken far away).

(|me1>|closestuff1> + |me2>|closestuff2>)|farawaystuff>

can evolve into:

(|me1>|closestuffgotaway1> + |me2>|closestuff2>)|farawaystuff>

and now closestuffgotaway1 can interact with farawaystuff

|me1>(|closestuffgotaway1A>|farawaystuffA>+|closestuffgotaway1B>|farawaystuffB>)+ |me2>|closestuff2>|farawaystuff>

but this doesn't affect me anymore: I'm still in two branches.
 
  • #91
vanesch said:
Yes, that's because you insisted that the optical fiber was wound up and that the two detectors were essentially in the same place. There is only a possible ambiguity in time ordering when the two events are spacelike separated. When two events are timelike connected (as I understood was the case here), there is no ambiguity.

That is exactly what I was intending to specify, that all measurements are local and in the same frame - your closestuff1/2...

Thanks for clarifying that point.

In your opinion, is this application of branching exactly the same as how the Born rule would be applied in oQM?
 
  • #92
DrChinese said:
In your opinion, is this application of branching exactly the same as how the Born rule would be applied in oQM?

Hehe, you'll get different replies to this one :biggrin:

Since, experimentally, when we say that "QM is confirmed by experiment", we ALWAYS use the Born rule, if that branching is to have the slightest chance of surviving, it had *better* behave exactly as the Born rule would, of course.

But let us remember what the two main problems with the Born rule in oQM are: 1) we don't have a physical mechanism for it (all physical mechanisms are described by unitary operators, which cannot lead to a projection);
2) the technique is bluntly non-local (even though the *results* are not signal-non-local, only Bell non-local).

So how is this branching *supposed* to work ? Well, there is something "irreversible" in the projection postulate of course, and that "irreversibility" is established by entanglement with the environment. This is not mathematically irreversible, of course (it happens by a unitary operator, which is reversible), but it is "irreversible FAPP". So this is what separates, practically "for good", the different terms which have classically different outcomes (pointer states).
The discussion that remains (witness the different contributions here from players in the field!) is about how probabilities emerge in that context. The "most natural" probability rule would of course be that if you "happen to be" in one of those branches, well, you could just be in *any* of them, so give them all the same probability. (That's my famous :-) APP.)
Trouble is, one has to twist oneself into a lot of strange positions to get the Born rule out that way!
The other (probably less severe) problem is: how do we know that the resulting terms, which are now irreversibly entangled, correspond to the classical worlds we would like to get out ? Decoherence gives a hint at a solution there.

Now, I would like to re-iterate my point of view on all these matters: they are a picture of *current* quantum theory, as we know it today. But clearly it doesn't make sense to talk about macroscopic superpositions of systems without taking into account the *gravitational* effect (because macroscopically different states will clearly have slightly different mass-energy distributions, and as such correspond to slightly different classical spacetimes, and as such to different derivatives wrt time (what time ? Of which term ? In what spacetime ?)). As we have, as of now, not one single clue of how quantum theory will get married to gravity (no matter the hype in certain circles), it is difficult to say whether the MWI picture will still make sense once one has solved the riddle.
 
  • #93
Zurek's derivation of the Born rule

Hey all,

How many here are familiar with Zurek's derivation of the Born rule? (See, e.g., [1].) I know Howard is, having written a paper [2] on it. I just this evening watched an online lecture [3] by Zurek about his derivation, and skimmed the other papers listed below. It appears to me that Zurek's work assumes Patrick's "alternate projection postulate" (= outcome counting [Weissman] = the "equal probability postulate" [me]). Cool! (If Zurek gives his version of the APP a name, I haven't encountered it yet.) Actually, Zurek does not *assume* the APP - rather, he attempts, iiuc, to *derive* it, based on an assumption termed "envariance." From envariance, Zurek gets (again, iiuc) the APP. And from there, Zurek gets the Born rule -- although I'm not sure how exactly. Does the Born rule emerge because Zurek assumes a Hilbert space formalism, so that Gleason's theorem can be plugged in? Not sure -- I still need to look at Zurek's papers more in depth.

Here's another question: does Zurek's derivation of the APP from envariance make sense? I tend to agree with Schlosshauer and Fine [4] that it does not, i.e. that the APP stands as an independent probability assumption: "We cannot derive probabilities from a theory that does not already contain some probabilistic concept; at some stage, we need to 'put probabilities in to get probabilities out'." I think Patrick would see it the same way.

Patrick, I think you definitely need to talk about Zurek a lot in your revised paper. How's it comin', by the way? :smile:

David

(PS I owe thanks to Simon Yu and Andy Sessler at Lawrence Berkeley for getting me interested in Zurek.)

[1]
Probabilities from Entanglement, Born's Rule from Envariance
Authors: W. H. Zurek
http://xxx.lanl.gov/abs/quant-ph?papernum=0405161

[2]
No-signalling-based version of Zurek's derivation of quantum probabilities: A note on "Environment-assisted invariance, entanglement, and probabilities in quantum physics"
Authors: Howard Barnum
http://xxx.lanl.gov/abs/quant-ph?papernum=0312150

[3]
http://www.physics.berkeley.edu/colloquia%20archive/5-9-05.html

[4]
On Zurek's derivation of the Born rule
Authors: Maximilian Schlosshauer, Arthur Fine
http://xxx.lanl.gov/abs/quant-ph?papernum=0312058
 
  • #94
straycat said:
And from there, Zurek gets the Born rule -- although I'm not sure how exactly. Does the Born rule emerge because Zurek assumes a Hilbert space formalism, so that Gleason's theorem can be plugged in? Not sure -- I still need to look at Zurek's papers more in depth.

The trick resides, I think, above equation 7b. There, it is assumed that if we do a fine-grained measurement corresponding to the mutually exclusive outcomes sk1...skn, we get probability n/N (this is correct); however, one CANNOT conclude from this that if we were only to perform the coarse-grained measurement testing the EIGENSPACE corresponding to sk1,...skn, it would STILL have the same probability.

In the entire discussion above that point, it was ASSUMED that our observable was going to be an entirely exhaustive measurement (a different outcome for each different |sk>). But here (as did, in fact, Deutsch, in a very similar way!), we are going to introduce the probabilities for measurements with an outcome PER EIGENSPACE, assuming that it equals the sum of the probabilities per individual eigenvector, and then SUMMING OVER the probabilities per eigenvector to restore the outcome of the eigenspace. BUT THAT IS NOTHING ELSE BUT NON-CONTEXTUALITY. It is always the same trick (equation 9a).

The extra hypothesis is again, that we can construct an eigenspace of sufficient dimensionality which corresponds to the ONE outcome of the original eigenvector, so that we can make them all equal, and sum over the fine-grained probability outcomes (which are equal, through a symmetry argument), to obtain the original coarse-grained probability. But again, this assumption of the behaviour of probabilities is nothing else but the assumption of non-contextuality (and then, through Gleason, we already knew that we had the Born rule).

Zurek's derivation is here VERY VERY close to Deutsch's derivation. The language is different, but the statements are very close. In 7b and in 9a, he effectively eliminates the APP. As usual...

cheers,
Patrick.
 
  • #95
vanesch said:
The trick resides, I think, above equation 7b. There, it is assumed that if we do a fine-grained measurement corresponding to the mutually exclusive outcomes sk1...skn, we get probability n/N (this is correct); however, one CANNOT conclude from this that if we were only to perform the coarse-grained measurement testing the EIGENSPACE corresponding to sk1,...skn, it would STILL have the same probability.

This is a valid issue to raise. But my reading of the paper is that the "coarse-grained" measurement (yielding the value of k) should be reconceptualized as, in fact, being a "fine-grained" measurement (yielding the value of n, with n > k) in disguise.

Suppose the measurement is of a spin 1/2 particle, with premeasurement probabilities of k=up and k=down being 9/10 and 1/10, respectively. My reading of Zurek is that when we measure spin, we are doing more than measure the value of k; we are, in fact, measuring n, with n = 1, 2, 3, ..., 10; and we further assume that "n = 10" implies "k = down," and "n = anything between 1 and 9" implies "k = up." For this scheme to be compatible with the APP, we must assume that the spin measurement must give us the exact value of n. If the measurement gives only the binary result: "n = 10" versus "n somewhere between 1 and 9," then your criticism applies.
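
Here is a minimal toy sketch of this reading (my own illustration, not Zurek's actual construction): exact fine-graining plus the APP reproduces the Born weights by pure counting.

```python
# Toy sketch: split "up" into 9 equal-amplitude sub-branches and "down" into
# 1, then count sub-branches equally (the APP). Counting recovers the
# coarse-grained Born weights 9/10 and 1/10.
from fractions import Fraction

sub_branches = {'up': 9, 'down': 1}       # fine-grained worlds per coarse outcome
total = sum(sub_branches.values())
app_prob = {k: Fraction(n, total) for k, n in sub_branches.items()}
print(app_prob)   # {'up': Fraction(9, 10), 'down': Fraction(1, 10)}

# If the measurement were merely binary (did not resolve n), counting would
# instead give two worlds at 1/2 each -- which is Patrick's objection.
```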

So does Zurek say somewhere that the measurement does not give us the exact value of n? I still am struggling through his paper, so it is possible that I've missed it if he did say such a thing. I would like to think that his scheme works the way I mentioned above, and hence evades your criticism, because that would mean that this part of Zurek's argument exactly matches the beginning of my own argument (up to Figure 1 B of my paper).

David
 
  • #96
vanesch said:
Zurek's derivation is here VERY VERY close to Deutsch's derivation. ...

OK, I have finally read the whole paper once through (excluding appendices). I note that Zurek agrees with us regarding Deutsch/Wallace decision theory -- ie, he thinks that it employs circular reasoning in the derivation of the Born rule:

"Reliance on the (classical) decision theory makes the arguments of [24] and [36] very much dependent on decoherence as Wallace often emphasizes. But as we have noted repeatedly, decoherence cannot be practiced without an independent prior derivation of Born's rule. Thus, Wallace's arguments (as well as similar 'operational aproach' of Saunders [52]) appears to be circular." (page 27, left column [arXived version])

Zurek states repeatedly in his paper that he has taken great care not to assume the Born rule in his derivation. So at the very least, he is aware of this danger!

David
 
  • #97
straycat said:
This is a valid issue to raise. But my reading of the paper is that the "coarse-grained" measurement (yielding the value of k) should be reconceptualized as, in fact, being a "fine-grained" measurement (yielding the value of n, with n > k) in disguise.
Suppose the measurement is of a spin 1/2 particle, with premeasurement probabilities of k=up and k=down being 9/10 and 1/10, respectively. My reading of Zurek is that when we measure spin, we are doing more than measure the value of k; we are, in fact, measuring n, with n = 1, 2, 3, ..., 10; and we further assume that "n = 10" implies "k = down," and "n = anything between 1 and 9" implies "k = up." For this scheme to be compatible with the APP, we must assume that the spin measurement must give us the exact value of n. If the measurement gives only the binary result: "n = 10" versus "n somewhere between 1 and 9," then your criticism applies.

The problem is that in his derivation of the probability of 9/10, he needs an extra space (which he can always find in the environment) with enough dimensional liberty to *imagine* that for the 9/10 he can use 9 dimensions, and for the remaining 1/10 he can have a 10th dimension, so that he can include this in an *imagined* fine-grained measurement where all events are now equi-probable and have identical Hilbert norms. As he argued before, from pure symmetry arguments he can then derive that the probabilities of all of these outcomes are equal, and hence the probability of the "coarse-grained event" is the sum of the respective probabilities of the fine-grained events. Now, admit that the way Zurek does it is very artificial. There's no good reason why there should be exactly 9 extra dimensions, with equal lengths, in the environment corresponding to the "spin up" case, and 1 corresponding to the "spin down" case! He just gives this case because then all fine-grained probabilities are equal by a symmetry argument. But there's no reason why, in a real interaction, this should be the case, and it is certainly not argued that way. He only needs an artificial fine-grained case of exactly the right composition so that his argument can work. Now, his argument works, of course, because it is always *thinkable* that the fine-grained (but not too fine-grained!) measurement works exactly that way on the environment; meaning that we measure exactly SUCH an extra quantity of the environment that his scheme works. (If we measure the environment too finely, it might not work - we may have too many or too few components for each term.) So we can accept that SOME relatively fine-grained measurement exists such that his scheme of things works out.

But this is implicitly assuming that the probability of the coarse-grained event, when calculated from the probabilities of the fine-grained events, is the same probability as if we were going to perform only a coarse-grained measurement directly, without first fine-graining and then discarding the information. As I tried to point out in my paper, *these are physically different measurements*. But it is very natural to assume that the two probabilities are equal. This is assuming that the probability of some coarse-grained event DOES NOT DEPEND ON THE DEGREE OF EXTRA USELESS FINE-GRAINING that is present in the measurement - and that is nothing else but postulating non-contextuality. Non-contextuality is exactly that: given the state and the eigenspace one wants to consider (the coarse-grained event), the probability can only depend upon the state and the eigenspace, and not upon the slicing up or not of that eigenspace and the complementary eigenspace. But that assumption is sufficient to derive Gleason's theorem!

Now, what's wrong with that ? Nothing of course, except that in order to be even able to _state_ that property of the probabilities that you would like to extract from the state and a set of eigenspaces, you are going to HAVE TO STATE THAT PROBABILITIES EXIST IN THE FIRST PLACE. And if you state that, you have already left the purely unitary part of QM. You have already assumed that somehow probabilities should emerge and have a certain property. So you are NOT deriving any probabilistic framework purely from the unitary machinery. Now, even Zurek himself seems to be aware of the non-triviality of the statement of additivity, because he addresses it (badly) in section V. I didn't see a convincing argument *without* invoking probabilities in section V.

I have to say that it is exactly in situations such as Zurek's paper that I think that my little paper is useful: take the APP, and see where it fails. THAT is the place where an extra (non-unitary QM) postulate has been sneaked in!

straycat said:
So does Zurek say somewhere that the measurement does not give us the exact value of n? I still am struggling through his paper, so it is possible that I've missed it if he did say such a thing.

He's making up the extra Hilbert space of states in order to have equal-length components, so that you can make orthogonal sums of them that come close to the Hilbert norms of the original coefficients. He argues that in the big extra space of states of the environment, you will always find enough room to consider such an extra space. It is exactly the same scheme as is used by Deutsch to go from symmetrical states with equal probabilities to states with arbitrary coefficients.
 
  • #98
straycat said:
Zurek states repeatedly in his paper that he has taken great care not to assume the Born rule in his derivation. So at the very least, he is aware of this danger!

Well, he doesn't make that error, indeed. He makes the error of assuming non-contextuality, which he introduces by assuming the additivity of probabilities. He even seems to be aware of the danger (he refers to it, and to a discussion in section V, which is, however, disappointing).

From the moment you make ONE assumption about probabilities generated by states, apart from respecting the symmetries of the state, you're done!
 
  • #99
vanesch said:
Well, he doesn't make that error, indeed. He makes the error of assuming non-contextuality, which he introduces by assuming the additivity of probabilities.

Hmm. It seems to me that assuming additivity of probabilities is fine, if you assume that (to use my example above) the spin measurement is in fact the more fine-grained measurement of the exact value of n. I suppose n could be called a "hidden variable," and when we think we are only measuring spin, we are in fact measuring this hidden variable -- we just haven't been smart enough to figure it out yet!

I'll admit that he does not provide an explanation -- not that I see, at least -- for where n comes from, what it represents, what it means physically, what these "extra dimensions" are, why n turns out to be just the right amount of "fine-grained-ness" that we need to recover the Born rule, etc. (The reason I wrote my paper is to answer precisely these questions!) But that is a separate objection from the one you make. The way I see it, Zurek has taken a tiny baby step, and there are lots of questions (what is n and why does it have the properties Zurek postulates) that are left unanswered. But what's wrong with baby steps?

David
 
  • #100
vanesch said:
But this is implicitly assuming that the probability of the coarse-grained event, when calculated from the probabilities of the fine-grained events, is the same probability we would get if we performed only a coarse-grained measurement directly, without first fine-graining and then discarding the extra information.

Where does Zurek make this assumption -- implicitly or otherwise?
 
  • #101
straycat said:
Where does Zurek make this assumption -- implicitly or otherwise?

He does it implicitly, in two places. He first does it when he introduces the states |C_k> in equation 8b, and his Hilbert space HC of sufficient dimensionality in 9a. Clearly, he is now supposing a fine-grained measurement, in which the c_j states are measured too, and from which the probabilities for the eventual coarse-grained measurement are DERIVED afterwards. As such, he implicitly assumes that the coarse-grained measurement will give you the SAME probabilities as the sums of the probabilities of the fine-grained measurement.

But he KNOWS that he's doing something fishy! On p. 18, he writes (just under "1. Additivity ..."):
In the axiomatic formulation ... as well as in the proof of the Born rule due to Gleason, additivity is an assumption motivated by the mathematical demand...

And he tries to weasel out with his Lemma 5 and his probabilities calculated from the state (27) "representing both fine-grained and coarse-grained records". However, he effectively only considers the probabilities of the fine-grained events.

Again, we will use our contextual counter-example to illustrate the flaw in his proof:

we consider |psi> = |x1>|y1> + |x1>|y2> + |x2>|y3>

As such, for the (fine-grained) Y measurement, we have:
P_f(y1) = 1/3
P_f(y2) = 1/3
P_f(y3) = 1/3

and thus: P_f(x1) = 2/3 and P_f(x2) = 1/3

However, for the coarse-grained X measurement, we have:
P_c(x1) = 1/2
P_c(x2) = 1/2

AND IT MAKES NO SENSE TO TALK ABOUT THE PROBABILITY OF THE FINEGRAINED EVENTS. If I were to talk about the probabilities of y1, y2 and y3 for the probability measure P_c, I would get nonsense of course.

From the moment you ASSIGN a probability to the fine-grained events, additivity is of course implicitly incorporated via the Kolmogorov axioms.

Only, Zurek uses just ONE probability function, p(). As he is considering probabilities of fine-grained events in his subtraction procedure, this p() is the fine-grained probability measure (P_f in my example). There, of course, additivity holds.

He's assuming that the probability function is the SAME ONE for fine-grained and coarse-grained measurements, and that is nothing else but Gleason's (rightly identified) extra assumption of non-contextuality. But he's making the same assumption in his Lemma 5!
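For concreteness, here is the bookkeeping of the two measures in a little Python sketch (my own illustration of the example above, nothing more):

```python
from collections import Counter

# Branches of |psi> = |x1>|y1> + |x1>|y2> + |x2>|y3>,
# written as (X outcome, Y outcome) pairs:
branches = [("x1", "y1"), ("x1", "y2"), ("x2", "y3")]

# Fine-grained measure P_f: each Y branch counts equally; X events are
# then obtained by Kolmogorov additivity over their Y branches.
P_f = Counter()
for x, _ in branches:
    P_f[x] += 1 / len(branches)
print(dict(P_f))          # x1 -> 2/3, x2 -> 1/3

# Coarse-grained measure P_c: count the DISTINCT X outcomes equally,
# without ever assigning probabilities to the fine-grained events.
xs = sorted({x for x, _ in branches})
P_c = {x: 1 / len(xs) for x in xs}
print(P_c)                # x1 -> 1/2, x2 -> 1/2
```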
 
  • #102
straycat said:
Hmm. It seems to me that assuming additivity of probabilities is fine, if you assume that (to use my example above) the spin measurement is in fact the more fine-grained measurement of the exact value of n. I suppose n could be called a "hidden variable," and when we think we are only measuring spin, we are in fact measuring this hidden variable -- we just haven't been smart enough to figure it out yet!

Ok, but in that way I can produce for you ANY probability measure that is compatible with unitary dynamics: the APP, the Born rule, or any other function that does the trick. If I'm allowed to say that the measurement of an observable O1 is in fact the measurement of the observable O1 x O2, where O2 acts on a yet-to-be-specified Hilbert space with a yet-to-be-established number of degrees of freedom and a yet-to-be-established dynamics (interacting with O1), so that I get out the right number of "different" outcomes, then I can provide you with just ANY probability rule.

But even there, you have a problem when I change something. Suppose that I start from a state u1|a> + u2|b> and I do a binary measurement (a versus b). Now, you claim that there is some physics that will evolve:

|a> (|x1> + |x2> + ... + |xn>) + |b> (|y1> + ... + |ym>)

such that n is proportional to u1^2 and m is proportional to u2^2, and that my "binary measurement" is in fact a measurement of the x1... ym states. Ok.

But suppose now that I'm measuring not u1|a> + u2|b>, but rather u2|a> + u1|b>. If we have the same unitary evolution of the measurement, I would now in fact be measuring the x1 ... ym states in the state:

u2/u1 |a> (|x1> + |x2> + ... + |xn>) + u1/u2 |b> (|y1> + ... + |ym>)

right?

But using the APP, I would find probability |u1|^2 for |a> and |u2|^2 for |b>, and not the opposite, no?

Why would the dimensionality of the x1 ... xn depend on the coefficient u1 of |a> in the original state? This cannot be achieved with a unitary operator, which is TRANSPARENT to the coefficient.

Isn't this a fundamental problem with assuming a certain dimensionality of hidden variables in order to restore the Born rule?
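Here is the objection in miniature, as a sketch (my own toy framing, with the counts n and m simply fixed by hand as properties of the device):

```python
# The branch counts n, m are fixed once, by the measuring device's
# unitary, NOT by the incoming coefficients.
n, m = 2, 1

def app_probs():
    # The APP only sees the counts, so it cannot react to u1, u2 at all.
    return n / (n + m), m / (n + m)

def born_probs(u1, u2):
    return u1**2, u2**2

u1, u2 = (2/3) ** 0.5, (1/3) ** 0.5
print(app_probs(), born_probs(u1, u2))   # agree for the original state...
print(app_probs(), born_probs(u2, u1))   # ...but not for the swapped one
```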
 
  • #103
vanesch said:
Ok, but in that way I can produce for you ANY probability measure that is compatible with unitary dynamics: the APP, the Born rule, or any other function that does the trick. If I'm allowed to say that the measurement of an observable O1 is in fact the measurement of the observable O1 x O2, where O2 acts on a yet-to-be-specified Hilbert space with a yet-to-be-established number of degrees of freedom and a yet-to-be-established dynamics (interacting with O1), so that I get out the right number of "different" outcomes, then I can provide you with just ANY probability rule.

Yes, you are correct: You could in fact take Zurek's basic idea and come up with any probability rule! This is what I mean by "baby steps." I do believe that Zurek has avoided the circularity trap. What he has not done afaict is to demonstrate why the Born rule, and not some other rule, must emerge. But that is progress, no?

So now we turn to your next argument:

vanesch said:
Why would the dimensionality of the x1 ... xn depend on the coefficient u1 of |a> in the original state?

I believe that some additional rule or set of rules is necessary to answer this question. And the sole motivation for postulating the "Born constraints" in my draft manuscript is to provide an "existence proof" that it is possible to accomplish this.

vanesch said:
This cannot be achieved with a unitary operator, which is TRANSPARENT to the coefficient. Isn't this a fundamental problem with assuming a certain dimensionality of hidden variables in order to restore the Born rule?

I'm not sure I entirely follow your argument that this cannot be achieved. I have a feeling, though, that the answer has something to do with the fact that you need to consider, not only the state of the system under observation, but also the state of the measurement apparatus. To use your example above,

|a> (|x1> + |x2> + ... + |xn>) + |b> (|y1> + ... + |ym>)

suppose the "binary measurement" is a spin measurement along the x-axis. We could suppose that the number of dimensions of the fine-grained measurement has something to do with the interaction between the particle and the SG apparatus. IOW, if the SG apparatus is oriented to measure along the x-axis, then the relevant "number of dimensions" is n and m (following your notation above). But if we rotate the SG apparatus so that it measures along some other axis, then the relevant number of dimensions becomes n' and m'. Of course, it still is necessary to explain WHY this should work out just right, so that the Born rule emerges. But the point I wish to make is that there is no reason to ASSUME that this CANNOT be done! Unless I have missed some element of your argument, which is why I am enjoying this discussion ...:biggrin:
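To sketch the kind of thing I have in mind (purely illustrative, and of course WHY such a resolution N should exist and take just the right value is the open question):

```python
import numpy as np

# Toy reading of the proposal: let the apparatus orientation set the
# branch counts, so that for a spin-1/2 state measured at angle theta
# the fine-graining yields n' ~ N cos^2(theta/2) branches for "up" and
# m' ~ N sin^2(theta/2) for "down", with N some large apparatus-
# dependent resolution.
def branch_counts(theta, N=1000):
    n = round(N * np.cos(theta / 2) ** 2)
    return n, N - n

for theta in (0.0, np.pi / 3, np.pi / 2):
    n, m = branch_counts(theta)
    # APP on these branches approximates the Born weight as N grows:
    print(theta, n / (n + m), np.cos(theta / 2) ** 2)
```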

David
 
  • #104
vanesch said:
... we consider |psi> = |x1>|y1> + |x1>|y2> + |x2>|y3>
As such, for the (fine-grained) Y measurement, we have:

P_f(y1) = 1/3
P_f(y2) = 1/3
P_f(y3) = 1/3

and thus: P_f(x1) = 2/3 and P_f(x2) = 1/3

However, for the coarse-grained X measurement, we have:

P_c(x1) = 1/2
P_c(x2) = 1/2

AND IT MAKES NO SENSE TO TALK ABOUT THE PROBABILITY OF THE FINEGRAINED EVENTS. If I were to talk about the probabilities of y1, y2 and y3 for the probability measure P_c, I would get nonsense of course.

You raise the issue: given a measurement of the above system, which should we use: P_f or P_c? How do we justify using one and not the other?

Following the spirit of Everett's original proposal, I believe that the number of branches (i.e., the "number of dimensions") associated with a given measurement must be reflected in the number of distinct physical states that the observer can evolve into as a result of the measurement process. So if the interaction of the observer with the environment results in the evolution of the observer into three different possible states, then we have probability 1/3 associated with each state. If two of these observer-states are associated with x1, and the third with x2, then we get:

P_c(x1) = 2/3
P_c(x2) = 1/3

So the above result, yielding probabilities 2/3 and 1/3, depends upon the assertion that there are two mutually exclusive, distinct physical observer-states associated with x1, but only one observer-state associated with x2. A fully developed underlying theory must give an exact prescription for this number of observer states, as well as tell us which observer-states are associated with which outcome (x1 or x2).

My point is that the choice between P_f and P_c is not arbitrary, but should be uniquely determined by the underlying theory, which must (if it is going to work) describe the evolution of the physical state of the observer. This underlying theory has not been found yet, but I think it will be!

David
 
  • #105
straycat said:
Yes, you are correct: You could in fact take Zurek's basic idea and come up with any probability rule! This is what I mean by "baby steps." I do believe that Zurek has avoided the circularity trap. What he has not done afaict is to demonstrate why the Born rule, and not some other rule, must emerge. But that is progress, no?

Eh? What progress? That we can have any probability rule? :-p

So now we turn to your next argument:
I believe that some additional rule or set of rules is necessary to answer this question.

:biggrin: :biggrin: :biggrin:

That's what I've been claiming all along! Now why can't that extra rule simply be: "use the Born rule"?

And the sole motivation for postulating the "Born constraints" in my draft manuscript is to provide an "existence proof" that it is possible to accomplish this.

Ok... but...

To use your example above,
|a> (|x1> + |x2> + ... + |xn>) + |b> (|y1> + ... + |ym>)
suppose the "binary measurement" is a spin measurement along the x-axis. We could suppose that the number of dimensions of the fine-grained measurement has something to do with the interaction between the particle and the SG apparatus. IOW, if the SG apparatus is oriented to measure along the x-axis, then the relevant "number of dimensions" is n and m (following your notation above). But if we rotate the SG apparatus so that it measures along some other axis, then the relevant number of dimensions becomes n' and m'.

The point is that we're not going to rotate the apparatus, but simply the initial state of the system to be measured. As such, the apparatus, environment and whatever else is going to do the measurement is IDENTICAL in the two cases. So if your argument for why we need n extra fine-grained outcomes for |a> and m extra fine-grained outcomes for |b> holds in the first case, it should also hold in the second case, because the only thing that has changed is the to-be-measured state of the system, not the apparatus.
Whatever your reason may be for expecting the n extra fine-grained outcomes when we have |a> and the m extra fine-grained outcomes when we have |b>, this entire measurement procedure will be encapsulated in A UNITARY INTERACTION OPERATOR that splits the relevant observer state into the n + m distinct states. A unitary interaction operator being a linear operator, the coefficients PASS STRAIGHT THROUGH it.

Let us take this again: let us assume that there are n + m distinct observer states that can result from the "binary" measurement, namely the |x1> ... |ym> states (which now include the observer states, which are to be distinct, and to which you can apply the APP). Of course, in his great naivety, the observer will lump together his n "x" states and call the result "a", and lump together his m "y" states and call it "b" (post-coarse-graining using the Kolmogorov additivity of probabilities).

But the evolution operator of the measurement apparatus + observer + environment and everything else that could possibly matter (and which you will use for your argument for WHY there ought to be n |x_i> states and so on) is not supposed to DEPEND upon the incoming state. It is supposed to ACT upon the incoming state. If it were to DEPEND on it, and then ACT on it, it would be a non-linear operation! Let us call it U.

So U(u1 |a> + u2 |b>) results in the known state
|a> (|x1> + |x2> + ... + |xn>) + |b> (|y1> + ... + |ym>)

This means that U(|a>) needs to result in (1/u1) |a> (|x1> + |x2> + ... + |xn>)
and U(|b>) needs to result in (1/u2) |b> (|y1> + ... + |ym>)

(I took a shortcut here. In order to really prove it, one should first consider what U is supposed to do on |a> only, then on |b> only, and then on u1|a> + u2|b>, with the extra hypothesis that U(|a>) will not contain a component of |b> and vice versa - IOW that we have an ideal measurement)

This means that U(u2 |a> + u1 |b>) will result in what I said it would, namely:
u2/u1 |a> (|x1> + |x2> + ... + |xn>) + u1/u2 |b> (|y1> + ... + |ym>)

and this simply by the linearity of the U operator, which in turn is supposed to depend only on the measurement apparatus + environment + observer and NOT on the to-be-measured state. As this measurement environment is supposed to be identical for both states, I don't see how you're going to wiggle out this way (because from the moment you make U depend upon the to-be-measured state, you kill the unitarity (and even the linearity) of the time evolution!).
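One can check this little piece of algebra numerically (my own check, with n = 2, m = 1 and amplitudes normalized so that U preserves norms on |a> and |b>):

```python
import numpy as np

# Basis order: |a>|x1>, |a>|x2>, |b>|y1>.
u1, u2 = np.sqrt(2/3), np.sqrt(1/3)
Ua = np.array([1, 1, 0]) / np.sqrt(2)   # U|a>: equal weight on the 2 "x" branches
Ub = np.array([0, 0, 1.0])              # U|b>: the single "y" branch

# Original state: U(u1|a> + u2|b>) is the equal-amplitude 3-branch state.
orig = u1 * Ua + u2 * Ub
print(orig)                              # [1/sqrt(3)] * 3 -> APP and Born agree

# Swapped state: linearity forces U(u2|a> + u1|b>) = u2*U|a> + u1*U|b>.
swap = u2 * Ua + u1 * Ub
born_a, born_b = np.sum(swap[:2] ** 2), swap[2] ** 2
print(born_a, born_b)                    # 1/3 and 2/3: the Born weights moved...
print(2 / 3, 1 / 3)                      # ...but the APP still counts 2 vs 1 branches
```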

The way to wiggle out is of course to say that the split does not occur in the measurement setup, but in the incoming state already! However, then you will meet another problem, namely decoherence. If you claim that the initial state, when we think it is u1 |a> + u2 |b>, is in fact:
|a> (|x1> + |x2> + ... + |xn>) + |b> (|y1> + ... + |ym>)

then it is going to be difficult to show how we are going to obtain interference if we now measure |c> = |a> + |b> and |d> = |a> - |b>. Try it:
you'll find that you have to assume |x1> = |y1> etc. to avoid the inner product being zero (decoherence!).

Now: (a|b) is supposed to be equal to u1 u2, that is, to sqrt(n m)/(m+n).
But the inner product of the two terms with the fine-graining goes like m/(m+n) (if m is smaller than n). I don't see how you're going to get the right inner products in all cases (all values of n and m) in this approach, unless I'm missing something.
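As a quick sanity check on the arithmetic of the two expressions above (nothing more than the stated formulas):

```python
import numpy as np

# Compare u1*u2 = sqrt(n*m)/(n+m) against m/(n+m) for a few counts.
for n, m in [(1, 1), (2, 1), (4, 1), (9, 4)]:
    born_overlap = np.sqrt(n * m) / (n + m)   # what the interference requires
    fine_overlap = min(n, m) / (n + m)        # what the shared branches give
    print(n, m, born_overlap, fine_overlap)   # equal only when n == m
```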
 
