My paper on the Born rule

In summary: this claim is not justified; there are many other possible probability rules that could be implemented in this way, and it is not clear which one is the "most natural." The paper presents an alternative projection postulate that is consistent with unitary symmetry and with measurements being defined in terms of projection operators, but it does not seem to add enough to the existing criticisms of Deutsch's proposal to justify publication.
  • #141
mbweissman said:
Treating the probabilities of S outcomes as sums over (more detailed) SC outcomes then gives the Born rule. This step, however, does not amount to simply using additivity of probabilities within a single probability space but rather implicitly assumes that the probabilities defined on S are simply related to the probabilities defined on SC. No matter how much that step accords with our experience-based common sense, it does not follow from the stated assumptions, which are deeply based on the idea that probabilities cannot be defined in general but only on a given system. Thus the question of why quantum probabilities take on Born values, or more generally of why they seem independent of where a line is drawn between system and environment, is not answered by Zurek's argument.
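
For concreteness, the fine-graining step being questioned can be sketched in a toy case (a schematic illustration with amplitudes chosen purely for the example, not Zurek's full argument): a system state with unequal amplitudes is entangled with extra degrees of freedom C so that each S outcome splits into equal-amplitude SC branches.

```latex
% Toy fine-graining step (schematic; amplitudes are illustrative only)
|\psi\rangle_S = \sqrt{\tfrac{2}{3}}\,|s_1\rangle + \sqrt{\tfrac{1}{3}}\,|s_2\rangle
\;\longrightarrow\;
|\Psi\rangle_{SC} = \tfrac{1}{\sqrt{3}}\left(|s_1\rangle|c_1\rangle + |s_1\rangle|c_2\rangle + |s_2\rangle|c_3\rangle\right)
```

Assigning equal probability 1/3 to each SC branch and summing back over the S outcomes gives P(s_1) = 2/3 and P(s_2) = 1/3, the Born values; the objection quoted above is that carrying those SC-level probabilities back to the S level is exactly the step that needs independent justification.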

Hi Michael,

5 or so years ago when I was visiting Paul Kwiat you gave me a preprint of how you thought the Born rule could/should be derived. I remember there was a cute idea in there somewhere, though I can't remember what it was! How did it pan out?

Tez
 
  • #142
vanesch said:
And I think it is difficult (if not hopeless) to posit that these "micromeasurements" will arrange themselves each time in such a way that they work according to the APP, but give rise to the Born rule on the coarse-grained level. Mainly because the relationship between finegrained and coarse grained is given by the measurement apparatus itself, and not by the quantum system under study (your E_i = k_i/K is fixed by the physics of the apparatus, independent of the state you care to send onto it ; the number of atoms on the x-ray film per identified "pixel" on the scanner is fixed, and not depending on how it was irradiated).

Well, maybe it's not as difficult / hopeless as you might think! Let's play around for a moment with the idea that all measurements boil down to one particle interacting with another. That is, the fundamental limit of resolution of a particle detector is governed by the fact that the detector is made of individual particles. So if we look at the micro-organization at the fine-grained level, we see micro-structure that is determined by the properties of the particles in question; let's say, some property characteristic of fermions or bosons, giving Fermi or Bose statistics respectively. When a particle hits an atom in a CCD detector, there is a corresponding micro-structure that always follows some particular pattern, and it gives rise to the Born rule when you look at it from a coarse-grained perspective. So if particles were "constructed" differently, then we might not have the Born rule; we might have some other rule. This, in fact, is exactly how my toy scheme works!

This view is consistent with the notion that it does not matter whether there is data reduction up to the display. That is, it does not matter whether the CCD has a resolution of 1 mm or 1 cm; if two different CCDs have different pixel resolution, but are made of the same types of atoms, then they will have the same fundamental fine-grained "resolution" when we look at the micro-structure.

I'm starting to contemplate a thought experiment, not sure where it will take me. Suppose we have a CCD camera (length, say, 10 cm) and we remove a 2 cm chunk of it, which we replace with a lens that focuses all particles that would have hit the plate on that 2 cm stretch onto (for the sake of argument) a single atom. What effect do we expect this will have on our measurement probabilities? Contrast that to a different scenario: we have a CCD camera, length 10 cm, with a resolution of 1 mm. Remove a 2 cm chunk and replace it with a single pixel, i.e. 2 cm resolution. But both CCD setups are made of the same types of atoms. I would expect that the probability of detection over the 2 cm single pixel equals the sum of the probabilities of detection of all 20 of the individual 1 mm pixels; my reasoning is that in both setups we have the same density and type of atoms in the CCDs. But I would imagine that using the lens setup we would get something completely different, since we are effectively replacing detection over a 2 cm stretch using lots of atoms with detection using only one atom.
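
A minimal numerical sketch of that expectation (the wave packet, detector length, and pixel sizes are all made-up illustrative numbers, nothing specific to a real CCD): bin a Born-rule position distribution at 1 mm and check that the 2 cm "super-pixel" probability is just the sum of its twenty 1 mm pieces.

```python
import numpy as np

# Toy 1D wave packet over a 10 cm detector (x in cm); purely illustrative numbers
dx = 0.001
x = np.arange(0.0, 10.0, dx)
psi = np.exp(-((x - 5.0) ** 2) / 2.0) * np.exp(1j * 3.0 * x)
prob = np.abs(psi) ** 2
prob /= prob.sum() * dx                  # normalize total detection probability to 1

def pixel_probability(lo, hi):
    """Born-rule probability of a detection landing between lo and hi (in cm)."""
    mask = (x >= lo) & (x < hi)
    return prob[mask].sum() * dx

# Twenty 1 mm pixels covering the 4 cm - 6 cm stretch ...
fine = [pixel_probability(4.0 + 0.1 * k, 4.0 + 0.1 * (k + 1)) for k in range(20)]
# ... versus one 2 cm "super-pixel" over the same stretch.
coarse = pixel_probability(4.0, 6.0)

print(sum(fine), coarse)   # equal up to floating-point roundoff
```

Replacing the 2 cm stretch with a lens onto a single atom changes the fine-grained structure itself, which is not captured by any re-binning of this distribution; that is the contrast the thought experiment is after.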
 
  • #143
straycat said:
I'm starting to contemplate a thought experiment, not sure where it will take me. Suppose we have a CCD camera (length, say, 10 cm) and we remove a 2 cm chunk of it, which we replace with a lens that focuses all particles that would have hit the plate on that 2 cm stretch onto (for the sake of argument) a single atom. What effect do we expect this will have on our measurement probabilities? Contrast that to a different scenario: we have a CCD camera, length 10 cm, with a resolution of 1 mm. Remove a 2 cm chunk and replace it with a single pixel, i.e. 2 cm resolution. But both CCD setups are made of the same types of atoms. I would expect that the probability of detection over the 2 cm single pixel equals the sum of the probabilities of detection of all 20 of the individual 1 mm pixels; my reasoning is that in both setups we have the same density and type of atoms in the CCDs. But I would imagine that using the lens setup we would get something completely different, since we are effectively replacing detection over a 2 cm stretch using lots of atoms with detection using only one atom.

Actually this reminds me of the quantum Zeno effect ( http://en.wikipedia.org/wiki/Quantum_Zeno_effect ), which I mentioned in post #37 of this thread. From the wiki description, the experiment they did is sort of similar to the thought experiment I outlined above, except that I am playing around with the resolution of the position measurement, whereas they were playing around with the resolution of a time measurement. The point of the Zeno effect is that if you change the resolution of the time measurement at the fine-grained level, then you change the probability distribution as a function of time. Similarly, I expect that if you change the resolution of the position measurement in a fundamental sense, i.e. using the lens setup, then you should change the probability distribution as a function of position. But if you simply swap the 1 mm pixel for the 2 cm pixel, then (I expect) you will not change the probability as a function of position, because you have done nothing to change the fundamental micro-structure, since the 1 mm and 2 cm CCD detectors have the same density of atoms.
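
For the time-resolution version of that statement, a minimal sketch (using the standard short-time quadratic survival law, with made-up values for the decay time and observation window) shows how more frequent measurements reshape the decay statistics:

```python
# Quantum Zeno toy model: the short-time survival probability per interval is
# P(dt) ~ 1 - (dt/tau)**2, so n measurements in a fixed window T give
# survival ~ (1 - (T/(n*tau))**2)**n, which tends to 1 as n grows.
tau = 1.0   # made-up characteristic decay time
T = 1.0     # made-up total observation window

def survival(n):
    """Survival probability after n equally spaced projective measurements in [0, T]."""
    dt = T / n
    return (1.0 - (dt / tau) ** 2) ** n

for n in (1, 2, 5, 10, 100, 1000):
    print(n, survival(n))   # climbs toward 1: finer time resolution, slower decay
```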
 
  • #144
Doesn't your spatial-resolution fiddling bear a family resemblance to Afshar's analysis? I believe that was described here recently in Quantum Zeno terms.
 
  • #145
selfAdjoint said:
Doesn't your spatial-resolution fiddling bear a family resemblance to Afshar's analysis? I believe that was described here recently in Quantum Zeno terms.

Hmm, I've never thought of comparing the two. It's been a long time since I've thought about the Afshar experiment. I always belonged to the camp that thought there was a flaw somewhere in his analysis, though. That is, I tend to think that the CI and the Everett interpretation each make exactly the same predictions as any of the other formulations of QM (see, eg, the wonderful paper [1]) -- so I am a bit biased against Afshar's (and Cramer's) claims to the contrary.

As for the Zeno effect, I haven't actually pondered it all that deeply. But from my cursory contemplation, the existence of the Zeno effect does not surprise me all that much. To me, the main lesson of the Zeno effect could be stated loosely: how you measure something (the resolution of the time measurements) has an effect on the probability distribution (probability of decay as a function of time). But that is simply Lesson #1 (in my mind) of quantum mechanics. E.g., the two-slit experiment tells us that how we measure something (whether we do or do not look at the slits) has an effect on the resulting probability distribution (where the particle hits the screen). So perhaps the Zeno effect is just teaching us the same lesson as the two-slit experiment, but dressed up differently.
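
To put a number on that "Lesson #1" (a generic two-slit sketch with made-up envelope and phase functions, not any specific experiment from this thread): adding amplitudes when the slits are unobserved gives fringes, while adding probabilities when which-path information exists does not.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)        # screen position, arbitrary units
envelope = np.exp(-x**2 / 8.0)          # toy single-slit diffraction envelope
phase = 2.0 * x                         # toy path-difference phase between the slits

psi1 = envelope * np.exp(+1j * phase)   # amplitude for "went through slit 1"
psi2 = envelope * np.exp(-1j * phase)   # amplitude for "went through slit 2"

p_no_which_path = np.abs(psi1 + psi2) ** 2            # add amplitudes: fringes
p_which_path = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # add probabilities: no fringes

center = np.abs(x) < 1.0
def visibility(p):
    """Fringe visibility (max - min) / (max + min) near the center of the screen."""
    return (p[center].max() - p[center].min()) / (p[center].max() + p[center].min())

print(visibility(p_no_which_path))   # ~1.0: full-contrast interference fringes
print(visibility(p_which_path))      # ~0.1: only the slowly varying envelope remains
```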

So my knee-jerk reaction to your question would be that Afshar's analysis is based on a (somehow) flawed reading/implementation of the CI (and MWI), whereas the Zeno effect is founded on a correct implementation of quantum theory. I'd have to take another look at Afshar, though, to see the comparison with Zeno ...

David

[1] Styer et al. Nine formulations of quantum mechanics. Am J Phys 70:288-297, 2002
 
  • #146
selfAdjoint said:
Doesn't your spatial-resolution fiddling bear a family resemblance to Afshar's analysis? I believe that was described here recently in Quantum Zeno terms.

Oh yea! Afshar used a lens in his setup too -- now I remember -- duhhh :blushing:

D

http://en.wikipedia.org/wiki/Afshar_experiment
 
  • #147
Wow, I just read the wiki article on the Afshar experiment... mmm... so Proc. SPIE is an optical engineering journal and not a physics journal...

I guess it must be generally believed by the physics powers that be that Afshar's interpretation of the experiment is erroneous.

good enough for me i guess.. hehe
 
  • #148
alfredblase said:
I guess it must be generally believed by the physics powers that be that Afshar's interpretation of the experiment is erroneous.

Yea, I just linked from wiki to Lubos Motl's blog article [1] criticising the Afshar analysis, and I see with some amusement that Lubos' critique is essentially the same critique that I made myself [2] over in undernetphysics ... except that I made my critique a month earlier!

That's right, I beat him to the punch ... who's ya' daddy now? :cool:

DS <ducking in case Lubos is lurking about somewhere ...>

[1] http://motls.blogspot.com/2004/11/violation-of-complementarity.html

[2] http://groups.yahoo.com/group/undernetphysics/message/1231
 
  • #150
non-linear decoherence

Tez said:
Hi Michael,

5 or so years ago when I was visiting Paul Kwiat you gave me a preprint of how you thought the Born rule could/should be derived. I remember there was a cute idea in there somewhere, though I can't remember what it was! How did it pan out?

Tez


Hi Tez -- sorry for the delay; I haven't been checking the forum. The idea was that if there were a non-linear decoherence process, the proper ratio of world-counts could arise asymptotically without fine-tuning. Basically it runs like this: if large-measure branches decohere faster than small-measure ones, the limiting steady-state distribution would have the same average measure per branch. Hence branch count is simply proportional to measure.
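
A deliberately crude toy of that asymptotic claim (not the model in the Found. Phys. Lett. paper, just an illustration of the bookkeeping, with made-up initial measures): let each branch split into two equal-measure sub-branches at a rate proportional to its measure, and watch the fraction of branches carrying each outcome approach that outcome's measure.

```python
import random

random.seed(0)

# Made-up initial measures for two outcomes (Born weights 0.7 and 0.3).
# Each branch is a (label, measure) pair; total measure per label is conserved.
branches = [("A", 0.7), ("B", 0.3)]

# Crude stand-in for "large-measure branches decohere faster": at each step one
# branch splits into two equal-measure sub-branches, chosen with probability
# proportional to its measure.
for _ in range(5000):
    weights = [m for _, m in branches]
    label, m = random.choices(branches, weights=weights, k=1)[0]
    branches.remove((label, m))
    branches.extend([(label, m / 2.0), (label, m / 2.0)])

total = len(branches)
for outcome in ("A", "B"):
    frac = sum(1 for label, _ in branches if label == outcome) / total
    print(outcome, frac)   # fractions come out close to 0.7 and 0.3
```

Because the total measure carried by each outcome never changes, each split lands on outcome A with fixed probability 0.7, so the branch counts converge to the measures: the same average measure per branch across both outcomes, i.e. branch count proportional to measure.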

How'd it work out? It was published in Found. Phys. Lett., after some extraordinarily constructive criticism from a referee. So far there are no obvious holes in it -- e.g., no problem with superluminal communication, unlike some types of non-linear dynamics. On the other hand, it proposes extra machinery not in ordinary quantum mechanics, without giving a specific theory. Although the extra gunk is much less Rube-Goldbergish than in explicit collapse theories, it would be nice not to have to propose something like that at all.

I'm about to post a follow-on, in which I point out that once the non-linear processes have been proposed to rescue quantum measurement, they give the second law at no extra cost. A similar point was made by Albert in his book Time and Chance, but he was referring to non-linear collapse (much uglier) rather than simple non-linear decoherence.
 
  • #151
Huw Price

Hey everyone,

I ran across this recent paper [1] (it was posted to Vic Stenger's list) that is relevant to the issues of this thread. "Egalitarianism" (= the APP) is discussed, and Huw seems to agree with Wallace and Greaves that Egalitarianism is "not ... a serious possibility." However, in a footnote he makes a distinction between "branch-Egalitarianism" and "outcome-Egalitarianism," and states that it is only the former that is not a possibility, whereas the latter "does seem to remain in play -- an alternative decision policy whose exclusion needs to be justified ..." I'm not sure I understand his distinction between branch and outcome Egalitarianism, though -- if anyone can explain it to me, I'd be interested!

Huw also describes a very interesting problem called the "Sleeping Beauty problem," which I had never heard of before. It suggests a very interesting conceptual approach to ascribing a "weighting" to each branch. I won't recap it here, since he does a good job of it in the paper.

David

[1] Huw Price. "Probability in the Everett World: Comments on Wallace and Greaves." 26 Apr 2006
http://arxiv.org/PS_cache/quant-ph/pdf/0604/0604191.pdf

Abstract:
It is often objected that the Everett interpretation of QM cannot make sense of quantum probabilities, in one or both of two ways: either it can't make sense of probability at all, or it can't explain why probability should be governed by the Born rule. David Deutsch has attempted to meet these objections. He argues not only that rational decision under uncertainty makes sense in the Everett interpretation, but also that under reasonable assumptions, the credences of a rational agent in an Everett world should be constrained by the Born rule. David Wallace has developed and defended Deutsch's proposal, and greatly clarified its conceptual basis. In particular, he has stressed its reliance on the distinguishing symmetry of the Everett view, viz., that all possible outcomes of a quantum measurement are treated as equally real. The argument thus tries to make a virtue of what has usually been seen as the main obstacle to making sense of probability in the Everett world. In this note I outline some objections to the Deutsch-Wallace argument, and to related proposals by Hilary Greaves about the epistemology of Everettian QM. (In the latter case, my arguments include an appeal to an Everettian analogue of the Sleeping Beauty problem.) The common thread to these objections is that the symmetry in question remains a very significant obstacle to making sense of probability in the Everett interpretation.
 
