Recognitions:
Gold Member
Staff Emeritus

my paper on the Born rule...

 Quote by straycat I've been pondering your last post. It seems you are drawing a distinction between (1) the number of dimensions of Hilbert space and (2) the number of eigenspaces of the measurement operator. I realize these are different,
Yes, this is essential. The number of dimensions in Hilbert space is given by the physics, and by physics alone, of the system, and might be very well infinite-dimensional. I think making assumptions on the finiteness of this dimensionality is dangerous. After all, you do not know what degrees of freedom are hidden deep down there. So we should be somehow independent of the number of dimensions of the Hilbert space.

However, the number of eigenspaces of the measurement operator is determined purely by the measurement apparatus. It is given by the resolution with which we could, in principle, determine the quantity we're trying to measure using the apparatus in question. You and I agree that this must be a finite number, and a rather well-determined one. This is probably where we differ in opinion: you seem to claim "micromeasurements" of eventually unknown physics of which we are not aware, versus "macromeasurements" which are just our own coarse-graining of those micromeasurements. I claim, instead, that every specific measurement comes with a certain, well-defined number of outcomes (which could eventually be more fine-grained than the observed result, but this should not depend on "unknown physics": a detailed analysis of the measurement setup should reveal it to us). I would even claim that a good measurement apparatus makes the observed number of outcomes about equal to the real number of eigenspaces.

 Suppose we assume the APP. Given a particular measurement to be performed, suppose we have K total fine-grained outcomes, with k_i the number of fine-grained outcomes corresponding to the i^th coarse-grained result. eg, we have N position detector elements, i an integer in [1,N], and the sum of k_i over all i equals K. So the probability of detection at the i^th detector element is k_i / K, and we define: E_i = k_i / K
Yes, but I'm claiming now that for a good measurement system, k_i = 1 for all i, and even if it isn't (for instance, you measure with a precision of 1 mm, and your numerical display only displays up to 1 cm resolution), you're not free to fiddle with k_i as you like.
Also, you now have a strange outcome! You ALWAYS find probability E_i for outcome i, no matter what the quantum state was! Even if the quantum state lies entirely within the i-th eigenspace, you'd still have a fractional probability? That would violate the rule that two measurements applied one after the other give the same result.
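For concreteness, here is a toy sketch (our own illustrative code, with made-up numbers) of the E_i = k_i / K bookkeeping from the quoted post; note that the quantum state never enters, which is exactly the strange feature pointed out above:

```python
# Toy illustration of the APP bookkeeping: K fine-grained outcomes are
# grouped into coarse-grained results i, with k_i fine outcomes each.
# Under the APP every fine outcome is equally likely, so P(i) = k_i / K,
# a number fixed by the apparatus alone.

def app_probabilities(k):
    """k[i] = number of fine-grained outcomes under coarse result i."""
    K = sum(k)
    return [k_i / K for k_i in k]

# Example: 4 coarse results covering 2, 3, 1, 4 fine outcomes (K = 10).
probs = app_probabilities([2, 3, 1, 4])
print(probs)        # [0.2, 0.3, 0.1, 0.4]
# The quantum state appears nowhere: outcome i gets probability E_i
# regardless of what state is sent into the apparatus.
```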

 So if I claim that p_i can depend only upon E_i (ie p_i = E_i), then it seems to me that I could argue, using the same reasoning that you use above for the Born rule, that the APP is non-contextual.
No, non-contextuality has nothing to do with the number E_i you're positing here; it is the property of being a function only of the eigenspace (spanned by the k_i subspaces) and the quantum state, no matter how the other eigenspaces are sliced up. Of course, in a way, you're right: if the outcome is INDEPENDENT of the quantum state (as it is in your example), you are indeed performing a non-contextual measurement. In fact, the outcome has nothing to do with the system: outcome i ALWAYS appears with probability E_i. But I imagine that you only want to consider THOSE OUTCOMES i THAT HAVE A PART OF the quantum state in them, right? And THEN you become dependent on what happens in the other eigenspaces.

 That is, Hilbert space assumes the state is represented by f, and the sum of |f|^2 over all states equals 1 (by normalization). So really it's not that surprising that |f|^2 is the only way to get a probability.
Well, there's a difference in the following sense: if you start out with a normalized state, you will always keep a normalized state under unitary evolution, and if you change basis (change measurement), you can keep the same normalized vector. That cannot be said for the E_i and k_i construction, which needs to be redone after each evolution, and after each different measurement basis.
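The contrast above can be shown numerically: under unitary evolution a normalized state stays normalized automatically, with no re-bookkeeping. A minimal sketch, using a toy 3-dimensional example of our own (real Hilbert spaces may of course be infinite-dimensional):

```python
# A normalized state stays normalized under any unitary evolution.
import numpy as np

rng = np.random.default_rng(0)

# Build a random 3x3 unitary U via QR decomposition of a random complex matrix.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)

# A normalized state vector.
psi = np.array([1.0, 1.0, 1.0], dtype=complex)
psi /= np.linalg.norm(psi)

# Evolve repeatedly; the norm is preserved at every step.
for _ in range(10):
    psi = U @ psi
print(np.linalg.norm(psi))   # stays 1, up to floating-point error
```

No analogue of this holds for the E_i and k_i construction, which has to be set up afresh for every evolution and every choice of measurement basis.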

 Do you say "at first sight" because a careful analysis indicates that it's not all that reasonable?
Well, it is reasonable, but it is an EXTRA assumption (and, according to Gleason, logically equivalent to postulating the Born rule). It is hence "just as" reasonable as postulating the Born rule.
What I meant by "at first sight" is that one doesn't realise the magnitude of the step taken! In unitary QM, there IS no notion of probability. There is just a state vector, evolving deterministically by a given first-order differential equation, in a Hilbert space. From the moment you require, no matter how little, a certain property of a probability derived from that vector, you are in fact implicitly postulating an entire construction: namely that probabilities ARE going to be generated from this state vector (probabilities for what, for whom?), that only part of the state vector is going to be observed (by whom?), etc. So the mere statement of a simple property of the probabilities in fact postulates an entire machinery, which is not obvious at first sight. Now if your aim is to DEDUCE the appearance of probabilities from the unitary machinery, then implicitly postulating this machinery is NOT reasonable, because it means you are postulating, in one way or another, what you were trying to deduce.

 I actually agree with you here. The argument in my mind goes like this: consider a position measurement. If you want to come up with a continuous measurement variable, this would be it. But from a practical perspective, a position measurement is performed via an array or series of discrete measurement detectors. The continuous position measurement is then conceived as the theoretical limit as the number of detector elements becomes infinite. But from a practical, and perhaps from a theoretical, perspective, this limit cannot ever be achieved: the smallest detector element I can think of would be (say) an individual atom, for example the atoms that make up x-ray film.
That's what I meant, too. There's a natural "resolution" to each measurement device, which is given by the physics of the apparatus. An x-ray film will NOT be in different quantum states for positions which differ by much less than the size of an atom (or even a bromide crystal). This is not "unknown physics" with extra degrees of freedom. I wonder whether a CCD-type camera will be sensing at a resolution better than one pixel (meaning that the quantum states would be different for hits at different positions on the same pixel). Of course, there may be - and probably will be - some data reduction up to the display, but one cannot invent, at will, more fine-grained measurements than the apparatus is actually, naturally performing. And this is what determines the slicing-up of the Hilbert space into a finite number of eigenspaces, each of which results in macroscopically potentially distinguishable "pointer states". And I think it is difficult (if not hopeless) to posit that these "micromeasurements" will arrange themselves each time in such a way that they work according to the APP, but give rise to the Born rule at the coarse-grained level. Mainly because the relationship between fine-grained and coarse-grained is given by the measurement apparatus itself, and not by the quantum system under study (your E_i = k_i/K is fixed by the physics of the apparatus, independent of the state you care to send onto it; the number of atoms on the x-ray film per identified "pixel" on the scanner is fixed, and does not depend on how it was irradiated).

cheers,
Patrick.

 You can try to fight this, and I'm sure you'll soon run into thermodynamical problems (and you'll even turn into a black hole ).
Proof by threat of black hole!

 Quote by Hurkyl Proof by threat of black hole!
I'm proud to have found a new rhetorical technique
 It takes a singular mind to come up with such things! (Okay, I'll stop now)

 Quote by mbweissman Treating the probabilities of S outcomes as sums over (more detailed) SC outcomes then gives the Born rule. This step, however, does not amount to simply using additivity of probabilities within a single probability space but rather implicitly assumes that the probabilities defined on S are simply related to the probabilities defined on SC. No matter how much that step accords with our experience-based common sense, it does not follow from the stated assumptions, which are deeply based on the idea that probabilities cannot be defined in general but only on a given system. Thus the question of why quantum probabilities take on Born values, or more generally of why they seem independent of where a line is drawn between system and environment, is not answered by Zurek's argument.
Hi Michael,

5 or so years ago when I was visiting Paul Kwiat you gave me a preprint of how you thought the Born rule could/should be derived. I remember there was a cute idea in there somewhere, though I can't remember what it was! How did it pan out?

Tez

 Quote by vanesch And I think it is difficult (if not hopeless) to posit that these "micromeasurements" will arrange themselves each time in such a way that they work according to the APP, but give rise to the Born rule on the coarse-grained level. Mainly because the relationship between finegrained and coarse grained is given by the measurement apparatus itself, and not by the quantum system under study (your E_i = k_i/K is fixed by the physics of the apparatus, independent of the state you care to send onto it ; the number of atoms on the x-ray film per identified "pixel" on the scanner is fixed, and not depending on how it was irradiated).
Well, maybe it's not as difficult / hopeless as you might think! Let's play around for a moment with the idea that all measurements boil down to one particle interacting with another. That is, the fundamental limit of resolution of a particle detector is set by the fact that the detector is made of individual particles. So if we look at the micro-organization at the fine-grained level, we see micro-structure determined by the properties of the particles in question; say, some property characteristic of fermions/bosons for Fermi/Bose statistics, respectively. When a particle hits an atom in a CCD detector, there is a corresponding micro-structure that always follows some particular pattern, and it gives rise to the Born rule when you look at it from a coarse-grained perspective. So if particles were "constructed" differently, then we might not have the Born rule; we might have some other rule. This, in fact, is exactly how my toy scheme works!

This view is consistent with the notion that it does not matter whether there is data reduction up to the display. That is, it does not matter whether the CCD has resolution of 1 mm or 1 cm; if two different CCD's have different pixel resolution, but are made of the same types of atoms, then they will have the same fundamental fine-grained "resolution" when we look at the micro-structure.

I'm starting to contemplate a thought experiment, not sure where it will take me. Suppose we have a CCD camera (length, say, 10 cm) and we remove a 2 cm chunk of it, which we replace with a lens that focuses all particles that would have hit the plate on that 2 cm stretch onto (for the sake of argument) a single atom. What effect do we expect this will have on our measurement probabilities? Contrast that with a different scenario: we have a CCD camera, length 10 cm, with resolution 1 mm. Remove a 2 cm chunk and replace it with a single pixel, i.e. 2 cm resolution. But both CCD setups are made of the same types of atoms. I would expect that the probability of detection over the 2 cm single pixel equals the sum of the probabilities of detection of all 20 of the individual 1 mm pixels; my reasoning is that in both setups, we have the same density and type of atoms in the CCDs. But I would imagine that using the lens setup, we would get something completely different, since we are effectively replacing detection over a 2 cm stretch using lots of atoms with detection using only one atom.
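Here is a toy numerical check (our own sketch, with an arbitrary Gaussian amplitude profile and illustrative pixel numbers) of the additivity expectation above: under the Born rule, the probability of one big pixel equals the sum over the fine pixels it replaces, because the pixels correspond to disjoint position ranges:

```python
# Coarse-graining pixels just sums Born probabilities over disjoint ranges.
import numpy as np

# Fine grid: 100 pixels of ~1 mm each across a 10 cm detector.
x = np.linspace(-5.0, 5.0, 100)      # pixel centers, in cm
amp = np.exp(-x**2)                  # arbitrary (unnormalized) amplitude
p_fine = np.abs(amp)**2
p_fine /= p_fine.sum()               # Born probabilities per fine pixel

# Replace pixels 40..59 (a ~2 cm stretch) by one big pixel.
p_big = p_fine[40:60].sum()
p_rest = np.concatenate([p_fine[:40], p_fine[60:]])
print(p_big + p_rest.sum())          # total probability is still 1
```

The lens setup is a genuinely different measurement, not a coarse-graining of disjoint ranges, so this additivity argument does not apply to it.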

 Quote by straycat I'm starting to contemplate a thought experiment, not sure where it will take me. Suppose we have a CCD camera (length, say, 10 cm) and we remove a 2 cm chunk of it which we replace with a lens that focuses all particles that would have hit the plate on that 2 cm stretch onto (for the sake of argument) a single atom. What effect do we expect this will have on our measurement probabilities? Contrast that to a different scenario: we have a CCD camera, length 10 cm, with resolution 1 mm. Remove a 2 cm chunk and replace it with a single pixel, ie 2 cm resolution. But both CCD setups are made of the same types of atoms. I would expect that the probability of detection over the 2 cm single pixel equals the sum of the probability of detection of all 20 of the individual 1 mm pixels; my reasoning is that in both setups, we have the same density and type of atoms in the CCD's. But I would imagine that using the lens setup, we would get something completely different, since we are effectively replacing detection over a 2 cm stretch using lots of atoms with detection using only one atom.
Actually this reminds me of the quantum zeno effect ( http://en.wikipedia.org/wiki/Quantum_Zeno_effect ), which I mentioned in post #37 of this thread. From the wiki description, the experiment they do is sorta similar to the thought experiment I outlined above, except that I am playing around with resolution of the position measurement, whereas they were playing around with the resolution of a time measurement in the experiment described in wiki. The point of the zeno effect is that if you change the resolution of the time measurement at the fine grained level, then you change the probability distribution as a function of time. Similarly, I expect that if you change the resolution of the position measurement in a fundamental sense, ie using the lens setup, then you should change the probability distribution as a function of position. But if you simply swap the 1 mm pixel with the 2 cm pixel, then (I expect) you will not change the probability as a function of position, because you have done nothing to change the fundamental micro-structure, since the 1 mm and 2 cm CCD detectors have the same density of atoms.
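For readers unfamiliar with the Zeno effect, here is a standard back-of-the-envelope sketch (a two-level toy model of our own, not tied to the specific experiment in the wiki article): a system rotating toward a "decayed" state is projectively checked N times within a fixed interval, and the survival probability cos^2(theta/N)^N tends to 1 as N grows, so finer time resolution changes the statistics.

```python
# Quantum Zeno toy model: N projective measurements during a rotation
# by total angle theta.  Survival probability = cos(theta/N)^(2N).
import math

def survival(theta, n_measurements):
    """Probability of never being found decayed after n projective checks."""
    return math.cos(theta / n_measurements) ** (2 * n_measurements)

theta = math.pi / 2   # without intermediate measurement, decay is certain
for n in (1, 10, 100, 1000):
    print(n, survival(theta, n))
# survival -> 1 as n grows: frequent observation inhibits the decay
```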
 Doesn't your spatial-resolution fiddling bear a family resemblance to Afshar's analysis? I believe that was described here recently in quantum Zeno terms.

 Quote by selfAdjoint Doesn't your spatial-resolution fiddling bear a family resemblance to Afshar's analysis? I believe that was described here recently in quantum Zeno terms.
Hmm, I've never thought of comparing the two. It's been a long time since I've thought about the Afshar experiment. I always belonged to the camp that thought there was a flaw somewhere in his analysis, though. That is, I tend to think that the CI and the Everett interpretation each make exactly the same predictions as any of the other formulations of QM (see, eg, the wonderful paper [1]) -- so I am a bit biased against Afshar's (and Cramer's) claims to the contrary.

As for the Zeno effect, I have actually not pondered it really deeply. But from my cursory contemplation, its existence does not surprise me all that much. To me, the main lesson of the Zeno effect could be stated loosely: how you measure something (the resolution of the time measurements) has an effect on the probability distribution (probability of decay as a function of time). But that is simply Lesson #1 (in my mind) in quantum mechanics. E.g., the two-slit experiment tells us that how we measure something (whether we do or do not look at the slits) has an effect on the resulting probability distribution (where the particle hits the screen). So perhaps the Zeno effect is just teaching us the same lesson as the two-slit experiment, dressed up differently.

So my knee-jerk reaction to your question would be that Afshar's analysis is based on a (somehow) flawed reading/implementation of the CI (and the MWI), whereas the Zeno effect is founded upon a correct implementation of quantum theory. I'd have to take another look at Afshar, though, to see the comparison with Zeno ...

David

[1] Styer et al., "Nine formulations of quantum mechanics," Am. J. Phys. 70:288-297, 2002

 Quote by selfAdjoint Doesn't your spatial-resolution fiddling bear a family resemblance to Afshar's analysis? I believe that was described here recently in quantum Zeno terms.
Oh yea! Afshar used a lens in his setup too -- now I remember -- duhhh

D

http://en.wikipedia.org/wiki/Afshar_experiment
 wow, I just read the wiki article on the Afshar experiment... hmm... so Proc. SPIE is an optical engineering journal and not a physics journal... I guess it must be generally believed by the physics powers that be that Afshar's interpretation of the experiment is erroneous. Good enough for me, I guess... hehe

 Quote by alfredblase I guess it must be generally believed by the physics powers that be that Afshar's interpretation of the experiment is erroneous.
Yea, I just linked from wiki to Lubos Motl's blog article [1] criticising the Afshar analysis, and I see with some amusement that Lubos' critique is essentially the same critique that I made myself [2] over in undernetphysics ... except that I made my critique a month earlier!

That's right, I beat him to the punch ... who's ya' daddy now???

DS <ducking in case Lubos is lurking about somewhere ...>

[1] http://motls.blogspot.com/2004/11/vi...mentarity.html

[2] http://groups.yahoo.com/group/undern...s/message/1231
 Afshar's experiment has been discussed here before also: http://www.physicsforums.com/showthread.php?t=59795

 Quote by Tez Hi Michael, 5 or so years ago when I was visiting Paul Kwiat you gave me a preprint of how you thought the Born rule could/should be derived. I remember there was a cute idea in there somewhere, though I can't remember what it was! How did it pan out? Tez

Hi Tez - Sorry for the delay; I haven't been checking the forum. The idea was that if there were a non-linear decoherence process, the proper ratio of world-counts could arise asymptotically without fine-tuning. Basically it runs like this: if large-measure branches decohere faster than small-measure ones, the limiting steady-state distributions have the same average measure per branch. Hence branch count is simply proportional to measure.
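A crude stochastic caricature (our own construction for illustration, not the model in the published paper) of why "faster splitting for larger measure" pushes branch counts toward proportionality with measure: if each splitting event lands on a branch with probability proportional to its measure, then a family starting with measure 0.9 absorbs 90% of the splits, so branch counts approach the 9 : 1 ratio of the measures.

```python
# Toy model: branches split at a rate proportional to their measure;
# each split halves the measure into two daughter branches.
import random

def branch_counts(measures, n_splits, seed=0):
    """Return the final number of branches descended from each initial one."""
    rng = random.Random(seed)
    total = sum(measures)            # conserved: halving preserves total measure
    branches = [(m, i) for i, m in enumerate(measures)]
    for _ in range(n_splits):
        r = rng.random() * total     # measure-weighted choice of a branch
        acc = 0.0
        for j, (m, i) in enumerate(branches):
            acc += m
            if r < acc:
                break
        branches[j] = (m / 2, i)     # split the chosen branch in two
        branches.append((m / 2, i))
    counts = [0] * len(measures)
    for _, i in branches:
        counts[i] += 1
    return counts

# Two initial branches with measures 0.9 and 0.1: after many splits the
# branch counts come out roughly in the 9 : 1 ratio of the measures.
print(branch_counts([0.9, 0.1], 1000))
```

This is only the counting skeleton of the idea; the actual proposal concerns decoherence rates, not an explicit splitting process.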

How'd it work out? It was published in Found. Phys. Lett., after some extraordinarily constructive criticism from a referee. So far there are no obvious holes in it - e.g. no problem with superluminal communication, unlike some types of non-linear dynamics. On the other hand, it proposes extra machinery not in ordinary quantum mechanics, without giving a specific theory. Although the extra gunk is much less Rube-Goldbergish than in explicit collapse theories, it would be nice not to have to propose something like that at all.

I'm about to post a follow-on, in which I point out that once the non-linear processes have been proposed to rescue quantum measurement, they give the second law at no extra cost. A similar point was made by Albert in his book Time and Chance, but he was referring to non-linear collapse (much uglier) rather than simple non-linear decoherence.

Hey everyone,

I ran across this recent paper [1] (it was posted to Vic Stenger's list) that is relevant to the issues of this thread. "Egalitarianism" (= the APP) is discussed, and Huw seems to agree with Wallace and Greaves that Egalitarianism is "not ... a serious possibility." However, in a footnote he makes a distinction between "branch-Egalitarianism" and "outcome-Egalitarianism," and states that it is only the former that is not a possibility, whereas the latter "does seem to remain in play -- an alternative decision policy whose exclusion needs to be justified ..." I'm not sure I understand his distinction between branch and outcome Egalitarianism, though -- if anyone can explain it to me, I'd be interested!

Huw also describes an intriguing problem called the "Sleeping Beauty problem," which I had never heard of before. It suggests a very interesting conceptual method for ascribing a "weighting" to each branch. I won't recap it here, since he does a good job of it in the paper.

David

[1] Huw Price, "Probability in the Everett World: Comments on Wallace and Greaves," arXiv:quant-ph/0604191, 26 Apr 2006.
http://arxiv.org/PS_cache/quant-ph/pdf/0604/0604191.pdf

Abstract:
 It is often objected that the Everett interpretation of QM cannot make sense of quantum probabilities, in one or both of two ways: either it can't make sense of probability at all, or it can't explain why probability should be governed by the Born rule. David Deutsch has attempted to meet these objections. He argues not only that rational decision under uncertainty makes sense in the Everett interpretation, but also that under reasonable assumptions, the credences of a rational agent in an Everett world should be constrained by the Born rule. David Wallace has developed and defended Deutsch's proposal, and greatly clarified its conceptual basis. In particular, he has stressed its reliance on the distinguishing symmetry of the Everett view, viz., that all possible outcomes of a quantum measurement are treated as equally real. The argument thus tries to make a virtue of what has usually been seen as the main obstacle to making sense of probability in the Everett world. In this note I outline some objections to the Deutsch-Wallace argument, and to related proposals by Hilary Greaves about the epistemology of Everettian QM. (In the latter case, my arguments include an appeal to an Everettian analogue of the Sleeping Beauty problem.) The common thread to these objections is that the symmetry in question remains a very significant obstacle to making sense of probability in the Everett interpretation.