My paper on the Born rule

In summary: this claim is not justified, since there are many other possible probability rules that could be implemented in this way, and it is not clear which one is the "most natural." The paper presents an alternative projection postulate that is consistent with unitary symmetry and with measurements being defined in terms of projection operators. However, it does not seem to add sufficiently to the criticisms of Deutsch's proposal to justify publication.
  • #106
vanesch said:
Eh ? What progress ? That we can have any probability rule ? ... Now why can that extra rule not simply be: "use the Born rule" ?

Well that's all fine and good ... IF YOU'RE FEELING COMPLACENT :tongue: :biggrin:

Here's the idea: start with the APP, and justify it via an argument-by-symmetry (or envariance). Then, to recover the Born rule, add some additional rule or set of rules so that the Born rule and not some other rule emerges. Then, try to derive these newly postulated rules from GR. (Or if not GR, then something like it, maybe some alternate field theory.) iow, "work backwards" in baby steps from the APP, to some other set of rules, back to whatever the underlying theory is. Then clean up all the intermediate steps to make sure they're rigorous, etc etc. If not, start all over again.

Surely you can see that this would be a big payoff?:rofl: :!)

Robin, Mike, and I have each (independently) proposed just such an alternate set of "extra rules" from which the Born rule is argued to emerge. Someday someone's going to hit paydirt!

I'll address the issue of unitarity in my next post.

David
 
  • #107
unitarity, linearity

Regarding the issue of unitarity: there are two possibilities: 1) the underlying theory is unitary; or 2) the underlying theory is NOT unitary, and QM is unitary because it is an approximation of the underlying theory.

wrt my own scheme, I'm not entirely sure, although I'm leaning toward the latter case. Mike has argued that the underlying theory should be nonlinear (which of course means not unitary). In my own scheme, the fundamental entity is the state of the observer, which is specified in full by giving the metric [tex] g_{ij} [/tex] and its power series at a point in spacetime. Since the power series has an infinite number of components, observer state space in my scheme is infinite dimensional (as might be expected/hoped). The operator that gives us the time-dependent evolution of the state of the observer is determined by the parallel-transport law, ie by the geodesic equations:

[tex]
\frac{d^{2}x^{k}}{dt^{2}} +
\Gamma^{k}_{ij}
\frac{dx^{i}}{dt}
\frac{dx^{j}}{dt} = 0
[/tex]

Assuming that spacetime is multiply connected, there will be more than one geodesic that solves the above equations. Each separate geodesic is interpreted as an alternate possible evolution of the observer; ie, each separate geodesic exists in "superposition."

My point is that the above equation is nonlinear, so I'm thinking that my underlying scheme is fundamentally nonlinear, as Mike would argue it should be. The Born rule emerges in my scheme only if you make a particular approximation, in which a big chunk of the possible evolutions are approximated as being not there. (This is justified because this particular chunk has a very low probability, via the APP.)
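
Just to make the nonlinearity concrete, here is a quick numerical toy sketch (my own illustration only, not anything from my actual scheme -- the unit 2-sphere metric, the starting point, and the step size are all arbitrary assumptions). It integrates the geodesic equation above and shows the velocity-squared terms that make the evolution nonlinear.

[code]
import math

# Toy example: geodesics on a unit 2-sphere, coordinates (theta, phi).
# Nonzero Christoffel symbols:
#   Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
#   Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cos(theta)/sin(theta)
# so the geodesic equations d^2 x^k/dt^2 + Gamma^k_{ij} dx^i/dt dx^j/dt = 0 become
#   theta'' =  sin(theta)*cos(theta) * phi'^2
#   phi''   = -2*(cos(theta)/sin(theta)) * theta' * phi'
# Both right-hand sides are quadratic in the velocities: the evolution is nonlinear.

def geodesic_step(theta, phi, dtheta, dphi, dt):
    """One (crude) Euler step of the sphere geodesic equations."""
    ddtheta = math.sin(theta) * math.cos(theta) * dphi ** 2
    ddphi = -2.0 * (math.cos(theta) / math.sin(theta)) * dtheta * dphi
    return (theta + dtheta * dt,
            phi + dphi * dt,
            dtheta + ddtheta * dt,
            dphi + ddphi * dt)

# Start near the equator with an arbitrary initial velocity and integrate.
state = (math.pi / 2, 0.0, 0.1, 1.0)   # (theta, phi, theta', phi')
dt = 1e-3
for _ in range(5000):
    state = geodesic_step(*state, dt)
print("final (theta, phi) =", state[0], state[1])
[/code]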

But I'd like to understand your objection better -- it should help me understand why I should be so happy :blushing: that my scheme is nonlinear.

vanesch said:
The point is that we're not going to rotate the apparatus, but simply the initial state of the system to be measured. As such, the apparatus and environment and whatever that is going to do the measurement is IDENTICAL in the two cases.

I don't think I follow you here. If you are going to "rotate the initial state of the system to be measured," then there are several ways to do this, conceptually:

1) replace the initial system with a DIFFERENT system in a DIFFERENT (rotated) state; or

2) rotate the system (the particle), ie, PHYSICALLY; or

3) keep the particle in place, but rotate the SG apparatus, ie, by PHYSICALLY actually rotating the magnets.

However you do it, you are making a physical change of something: either the particle, or the apparatus. So you're not looking at the same overall (system+apparatus) state; you're looking at a DIFFERENT state. So why should you expect to go from one to the other via a unitary operator? I may just be being a dunce here, so keep hammering away ...

David
 
  • #108
straycat said:
Well that's all fine and good ... IF YOU'RE FEELING COMPLACENT :tongue: :biggrin:
Here's the idea: start with the APP, and justify it via an argument-by-symmetry (or envariance).

Just as a note: in the case of a symmetry argument, such as envariance, ALL probability rules have to give equal probabilities. This is, however, the ONLY case where we can deduce something about a hypothetical probability rule without postulating something about it.

Then, to recover the Born rule, add some additional rule or set of rules so that the Born rule and not some other rule emerges. Then, try to derive these newly postulated rules from GR. (Or if not GR, then something like it, maybe some alternate field theory.) iow, "work backwards" in baby steps from the APP, to some other set of rules, back to whatever the underlying theory is. Then clean up all the intermediate steps to make sure they're rigorous, etc etc. If not, start all over again.
Surely you can see that this would be a big payoff?:rofl: :!)

I'm not sure it can even work in principle. Of course, for SOME situations, one can derive the Born rule in this (artificial) way, but I think that you cannot build it up in a general way ; as I tried to show with the inner-product example (although I just typed it like that ; I might have messed up, I agree that it is for the moment still "just intuition"). And if it succeeds, you need to postulate A LOT of unexplained physics!

Robin, Mike, and I have each (independently) proposed just such an alternate set of "extra rules" from which the Born rule is argued to emerge. Someday someone's going to hit paydirt!
I'll address the issue of unitarity in my next post.
David

I have all respect for the different attempts. As I think I proved, you do not need much of an extra hypothesis to derive the Born rule. I think that non-contextuality is a fair hypothesis ; I find the additivity of probabilities also a fair hypothesis (these are in fact very very close!). But they ALL need you to already say that SOME probability rule should emerge, and then you find WHICH ONE can emerge, satisfying the desired property.
So it is logically equivalent to say: "the probabilities that should emerge must follow the Born rule", and "the probabilities that should emerge must do so in a non-contextual (or additive) way". These are logically equivalent statements.

I find this fine. I don't see why one needs to go through this "equal probability stuff" (which is NOT non-contextual!) by postulating extra physics (unknown degrees of freedom) so that we miraculously obtain the Born rule, just to avoid saying that, well, the probability to be in a certain state is given by the Born rule, and not by equal probability. Why should the different worlds you could be in all have equal probability ?
 
  • #109
vanesch said:
And if it succeeds, you need to postulate A LOT of unexplained physics!
True ... but that is (hopefully) only a temporary state of affairs. The ultimate goal is that the "lot of unexplained physics" can be replaced by a very simple physical postulate -- like Einstein's equation, something like that.
vanesch said:
But they ALL need you to already say that SOME probability rule should emerge, and then you find WHICH ONE can emerge, satisfying the desired property.

So it is logically equivalent to say: "the probabilities that should emerge must follow the Born rule", and "the probabilities that should emerge must do so in a non-contextual (or additive) way". These are logically equivalent statements.

Again, the ultimate goal is that the assumption: "the probabilities that should emerge must follow the Born rule" is only a provisional one that is characteristic of a theory-in-progress. Once the "extra physics" mentioned above gets figured out, then the Born rule becomes derived, so need not be postulated.

vanesch said:
I find this fine. I don't see why one needs to go through this "equal probability stuff" (which is NOT non-contextual!) by postulating extra physics (unknown degrees of freedom) so that we miraculously obtain the Born rule, just to avoid saying that, well, the probability to be in a certain state is given by the Born rule, and not by equal probability. Why should the different worlds you could be in all have equal probability ?

Well you know the arguments at least as well as I do! We want the final finished product to avoid the charge of non-contextuality, to rely fundamentally on the APP. And we want to say that the Born rule, ie quantum mechanics, emerges from some underlying field theory, like GR -- and the ONLY probability assumption that we need to plug in is the APP, which is justified by a symmetry argument. The Born rule then gets "demoted," in a manner of thinking, because the APP is more fundamental than it.

I agree this is a difficult endeavor to undertake. But let's just suppose that it will work. How hard could it be? There are currently, what, hundreds of really smart mathematicians working on quantum gravity (string, loop, etc etc) -- and NONE of them afaik have incorporated the beautiful symmetry of the APP. Instead, they all ASSUME the (not symmetric, therefore ugly) Born rule to be the FUNDAMENTAL probability rule. (Smolin of course argues that string theory, in addition, makes the non-symmetric, therefore ugly, assumption of background dependence ... an argument to which I am sympathetic, although that is a whole different can of worms!) Perhaps the reason that all of these brilliant folks have not succeeded in figuring out quantum gravity is that their programmes are all tainted by one or both of the aforementioned assumptions. Surely the APP (like background independence) is a worthy symmetry principle into which some effort should be invested.

Don't forget that seemingly innocuous and simple symmetry principles have a long history of unexpectedly big payoffs. Einstein's principle of relativity being one such example!

David
 
  • #110
don't need no reeducation pleeez!

Tez said:
And in which context do you have a problem with it Howard? (...we are watching you... )

Hi Tez! Logic, me boy, logic... there's no logical implication that I have a problem with it in any context... really I don't, really... I didn't mean to imply I did... Please don't think I did! I don't need to be reeducated, really I don't!


Actually, I just meant to imply I'm not committed to probability *always* meaning subjective probability... in mathematics, it just refers to some things satisfying some formal axiomatic structure(s)... so there, one is not committed to any view about its use in the world, or even whether it has any use... as far as probability in science goes, though, I'm a straight subjectivist a la Savage, even though I didn't bother to argue with the post about "unobservable preference orderings" vs. "measurable experimental outcomes"... yet.

Cheers,

H
 
  • #111
straycat said:
Surely the APP (like background independence) is a worthy symmetry principle into which some effort should be invested.

I fail to see what's so "symmetrical" about it:

You have the state a|u> + b|v>. If a and b are not equal, there is no symmetry between u and v, so I do not see why |u> should be just as probable as |v>. The "APP" arises in those cases where a symmetry operator can swap the states ; but that's not the APP, it is *in the case of a symmetric state* that ALL probability rules need to assign equal probabilities to that particular case.
But I don't see why it should be forbidden for the probability to observe |u> to be a function of its hilbert coefficient ?
 
  • #112
vanesch said:
I fail to see what's so "symmetrical" about it:
You have the state a|u> + b|v>. If a and b are not equal, there is no symmetry between u and v, so I do not see why |u> should be just as probable as |v>. The "APP" arises in those cases where a symmetry operator can swap the states ; but that's not the APP, it is *in the case of a symmetric state* that ALL probability rules need to assign equal probabilities to that particular case.

Well in the above, you have assumed standard QM, including the Born rule and the Hilbert space formalism. So of course you are correct that there is no symmetry. That's the point: the Born rule asserts asymmetry (except in special cases) wrt probabilities. The APP, otoh, is symmetrical wrt probabilities.

vanesch said:
But I don't see why it should be forbidden for the probability to observe |u> to be a function of its hilbert coefficient ?

I do not mean to imply that we are forced to accept the APP over the Born rule. ie, a symmetry argument is insufficient to forbid any non-symmetric alternative.

Take Einstein's principle of relativity. This is a symmetry principle, stating that the laws of physics are the same in all frames of reference. Does symmetry in and of itself mean we are "forbidden" to postulate frame-dependent laws? Well, no. (I could replace GR with Lorentzian relativity, for example.) All it means is that if I am trying to come up with new physics, and I have the choice between a frame-dependent and a frame-independent postulate, and I haven't yet worked out all the math so I don't know yet which one will work out, then as a betting man, I would put my money on the frame-independent one, all other considerations being equal (no pun intended :wink: ).

David
 
  • #113
vanesch said:
You have the state a|u> + b|v>.

There is another hidden assumption you are making here: that the number of "branches" associated with a measurement is in one-to-one correspondence with the different possible states of the observed system. In keeping with the spirit of Everett, the former should be equated with the number of physically distinct states into which the observer may evolve as a result of a measurement. The latter is equated with the number of physically distinct states of the observed system. In the standard treatment of the MWI, these two numbers are assumed to be the same. But why? It is conceivable that they may be different. So we have yet another independent postulate that is implicit to the standard MWI.

David
 
  • #114
straycat said:
Take Einstein's principle of relativity. This is a symmetry principle, stating that the laws of physics are the same in all frames of reference. Does symmetry in and of itself mean we are "forbidden" to postulate frame-dependent laws? Well, no.

Eh, yes ! That's exactly the content of the principle of relativity: we are forbidden to postulate frame-dependent laws!
 
  • #115
straycat said:
Well in the above, you have assumed standard QM, including the Born rule and the Hilbert space formalism. So of course you are correct that there is no symmetry. That's the point: the Born rule asserts asymmetry (except in special cases) wrt probabilities. The APP, otoh, is symmetrical wrt probabilities.

No, I didn't assume the Born rule. I just got out of unitary QM (on which we agree) that the state of the entire system (including myself) is a|u> + b|v>. One cannot say that the state is invariant under a swap of |u> and |v>, which would be the statement of symmetry. In this case, there is no symmetry between the |u> and the |v> state, and I simply claimed that I don't see how a "symmetry" principle can now assign equal probabilities to |u> and to |v>. We can do so, by postulate, but it doesn't follow from any symmetry consideration. Given that doing so gets us into a lot of trouble, why do we do so ? And given that by assuming the Born rule we don't have that trouble, why not do so ?

I know of course where this desire comes from, and it is the plague of most MWI versions: one DOESN'T WANT TO SAY ANYTHING ABOUT PROBABILITIES. As such one seems to be locked up in the situation where we have a lot of independent terms (worlds) and we have to explain somehow how it comes that we only perceive ONE of them, given that we have "some state" in each of them. It seems indeed the most innocent to say that all these independent worlds are 'equally probable to be in', but that is nevertheless a statement about probability. Because without such a statement, we should be aware of ALL of our states, not just of one. So YOU CANNOT AVOID postulating somewhere a probability. My point is: pick the one that works !

Because there is no symmetry between those different worlds, we are NOT OBLIGED to say that they are equally probable. There is no physical symmetry between the low-hilbert-norm states and the high-hilbert-norm states, in the sense that the state is not an eigenstate of any swap operator. But it is indeed tempting to state that all worlds are "equally probable" because that's how we've been raised into probabilities. We've been raised into counting the number of possible outcomes, then counting the number of "successes", and taking the ratio. This is usually because we had genuinely symmetrical situations (like a die), and in this case of course a symmetry argument implies that the probabilities are to be equal. So we do this also to the different terms in the wavefunction (although, I repeat, there is no a priori reason to take THIS distribution over another one, given that there is no symmetry in the situation).

And so one has a *lot* of terms with small hilbert norm, and relatively few terms with high hilbert norm, and we can only find correspondence with the (observed) Born rule behaviour by "giving more weight" (1) to the few terms with high hilbert norm, or by "eliminating" (2) the vast lot of terms with small hilbert norm. Hence a lot of brain activity to find a mechanism to do so. You do (1) by trying to find extra physical degrees of freedom which we ignore, but which split the few high-hilbert-norm terms into a lot of small ones, so that their population is finally much larger than that of the initial small-hilbert-norm states (see the sketch below). Hanson does (2) by positing that worlds with very small hilbert norm are "mangled" (continue to suffer interactions with big terms), so that they somehow "don't count" in the bookkeeping.
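
To make mechanism (1) concrete, here is a little counting sketch (toy amplitudes and a fine-graining put in entirely by hand ; nothing here derives the splitting, it just shows what the counting does once you assume it):

[code]
import numpy as np

# Take a two-branch state a|u> + b|v> with unequal hilbert coefficients, then split
# each branch (by hand!) into equal-norm sub-branches. Plain counting over the
# sub-branches (the APP) then approximates the Born weights |a|^2 and |b|^2.

a, b = np.sqrt(0.8), np.sqrt(0.2)   # arbitrary toy amplitudes, |a|^2 + |b|^2 = 1
N = 1000                            # assumed fine-graining resolution

n_u = round(N * abs(a) ** 2)        # number of equal-norm sub-branches given to |u>
n_v = N - n_u                       # and to |v>

p_u_counting = n_u / N              # APP applied to the fine-grained branches
print("Born weight for |u>        :", abs(a) ** 2)
print("branch-counting weight |u> :", p_u_counting)
[/code]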

And at the end of the day, you want to find the Born rule. Why not say so from the start ? :tongue:
 
  • #116
vanesch said:
Eh, yes ! That's exactly the content of the principle of relativity: we are forbidden to postulate frame-dependent laws!

I think you and I are merely speaking past each other a bit here. What I mean to say is that there is nothing that forbids us to say: "here is a symmetry principle, and despite its beauty, it's wrong." So, if we assume the principle of relativity is right, then we are forbidden from violating it; but we could, alternatively (and hypothetically), recognize the principle of relativity as being beautiful, but then turn around and not assume it. (Some people actually do this! ie proponents of Lorentzian relativity.)

My whole purpose in bringing up the principle of relativity is to compare it to the APP. Both are symmetry principles. With GR, of course, we assume the principle of relativity to be true. The question still remains, however, whether we should likewise adopt the APP as being true. I think that if the APP enables a succinct derivation of quantum laws from deeper principles, in a manner that minimizes the total number of assumptions that must enter into the overall scheme, then the answer becomes "yes."

David
 
  • #117
vanesch said:
And at the end of the day, you want to find the Born rule. Why not say so from the start ? :tongue:

I think this whole discussion boils down to the fact that we see the APP differently. To you, the APP is no "better" than the Born rule, in the sense that we still need to postulate it. I can respect this PoV. However, there is a part of me that feels that the APP is sufficiently "natural" that it does not require an independent postulate.

I realize I sort of equivocate on this issue. Perhaps I should be more forceful in saying that the APP must be true. According to this view, the assumption of the Born rule in the standard formalism is a "band-aid" that we use because it "works," but which is ultimately unsatisfactory. Hence the need to replace it with the APP and some more physics. :rolleyes:

David
 
  • #118
vanesch said:
No ... I just got out of unitary QM (on which we agree) ...

Another instance of me equivocating ... I'm not sure I agree that the underlying theory must be characterized by unitary (= linear) evolution! See a few messages back.
 
  • #119
straycat said:
So, if we assume the principle of relativity is right, then we are forbidden from violating it; but we could, alternatively (and hypothetically), recognize the principle of relativity as being beautiful, but then turn around and not assume it.

Ah, ok! Yes, a wrong principle, no matter how beautiful, must not be adhered to :rolleyes:

Both are symmetry principles. With GR, of course, we assume the principle of relativity to be true. The question still remains, however, whether we should likewise adopt the APP as being true. I think that if the APP enables a succinct derivation of quantum laws from deeper principles, in a manner that minimizes the total number of assumptions that must enter into the overall scheme, then the answer becomes "yes."

I agree with that: IF you have a (totally different!) theory in which, for some reason or other, there is a symmetry between the states, then you are right. But I fail to see how *the APP* is a "symmetry principle" in quantum theory. A symmetry principle should apply to the mathematical structure that is supposed to describe nature ; that is: it is an operator acting upon whatever set is supposed to be the set of possible states of nature (in QM, it is the rays of hilbert space ; in classical physics, it is the phase space ; in GR, it is the set of all 4-dimensional manifolds that respect certain properties...), and that transforms it into the same state, or a state which has identical meaning (in the case of redundancy in the state space, as is the case with gauge symmetries).

I fail to see what operator could correspond to something which makes all the components of an arbitrary state equivalent. The *envariance* symmetry of Zurek, on the other hand, IS a genuine symmetry (which is based upon the arbitrary phases in the tensor product of two subsystems).
Probabilities (just as any other observable phenomenon) that are to be derived from a state respecting a certain symmetry, should also obey the symmetry that is implemented. As such, I can understand that the probabilities, in the case of equal hilbert coefficients, must be equal (no matter what rule; a rule that does not obey it will run into troubles).

Of course, if we would now have some "APP quantum theory" in which we ALWAYS have, in each relevant basis, the same hilbert coefficients, then of course you are right. If you trim the hilbert space by a superselection rule that only allows for states in which the components are all equally long, so that all allowed-for states are "swappable", then of course ANY probability rule will be equivalent to the APP, and so will the Born rule. But I wonder how you're going to implement this ! This is very far from the original idea of quantum theory, and its superposition principle.
 
  • #120
vanesch said:
Of course, if we would now have some "APP quantum theory" in which we ALWAYS have, in each relevant basis, the same hilbert coefficients, then of course you are right. If you trim the hilbert space by a superselection rule that only allows for states in which the components are all equally long, so that all allowed-for states are "swappable", ...

To be honest I do not fully understand how Zurek can define "swappability" without letting some piece of QM -- and hence the Born rule! -- "sneak" in. iiuc, two states are swappable iff they have the same Hilbert space coefficients. Why couldn't we assume that they are swappable even if they have different coefficients? Because that would mean that they have different physical properties. So at the very least, Zurek is assuming that states must be elements of a Hilbert space, and that the Hilbert space coefficient is some sort of property characteristic of that state. Well if we are going to assume all that, we may as well just plug in Gleason's theorem, right? Or am I missing something?

vanesch said:
But I wonder how you're going to implement this !

Well you'll just have to read my paper! Which, btw, I am finally biting the bullet and submitting, today! :rofl: :cool:

David
 
  • #121
straycat said:
To be honest I do not fully understand how Zurek can define "swappability" without letting some piece of QM -- and hence the Born rule! -- "sneak" in.

Well, Zurek accepts (of course) entirely the unitary part of QM, unaltered, and without "extra degrees of freedom". He introduces a unitary symmetry operator which randomly "turns" the phases of the basis vectors of system 1, and turns the phases of the basis vectors of system 2 in the opposite way, and calls this an envariance symmetry operator. He argues that we can never measure the *phases* of the different basis vectors of system 1 (this comes from the redundancy in state description, namely the fact that a physical state corresponds to a RAY and not an element in hilbert space) or of system 2, and that, as such, his symmetry operator does not affect the physical state. He then goes on to enlarge the set of envariance symmetry operators, by swapping at the same time two states in the two hilbert spaces 1 and 2 (so that the overall effect on the Schmidt decomposition is simply to swap the hilbert coefficients), and notices that in the case of EQUAL COEFFICIENTS, this is a symmetry of the state.
He then introduces some assumptions (in that a unitary transformation of system 2 should not affect outcomes of system 1, including probabilities) and from some considerations arrives at showing that in such a case, all probabilities should be equal FOR THIS SPECIFIC STATE.
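
If it helps, here is a tiny numerical check of that swap symmetry (just the two-by-two case, with toy amplitudes of my own choosing): the combined swap on system 1 plus counter-swap on system 2 leaves the joint state invariant exactly when the coefficients are equal, and not otherwise.

[code]
import numpy as np

# Bipartite state a|u>|e1> + b|v>|e2>, basis ordering |u e1>, |u e2>, |v e1>, |v e2>.
# Swap |u> <-> |v> on system 1 together with the counter-swap |e1> <-> |e2> on
# system 2; for equal coefficients this is a symmetry of the state, for unequal
# coefficients it is not.

SWAP = np.array([[0.0, 1.0],
                 [1.0, 0.0]])

def joint_state(a, b):
    return np.kron(np.array([a, 0.0]), np.array([1.0, 0.0])) + \
           np.kron(np.array([0.0, b]), np.array([0.0, 1.0]))

U = np.kron(SWAP, SWAP)                    # swap on 1 combined with counter-swap on 2

equal   = joint_state(1 / np.sqrt(2), 1 / np.sqrt(2))
unequal = joint_state(np.sqrt(0.8), np.sqrt(0.2))   # arbitrary unequal coefficients

print(np.allclose(U @ equal, equal))       # True : symmetric state
print(np.allclose(U @ unequal, unequal))   # False: no such symmetry to exploit
[/code]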

Why couldn't we assume that they are swappable even if they have different coefficients? Because that would mean that they have different physical properties. So at the very least, Zurek is assuming that states must be elements of a Hilbert space, and that the Hilbert space coefficient is some sort of property characteristic of that state.

Yes, in other words, he's accepting unitary quantum theory.

Well if we are going to assume all that, we may as well just plug in Gleason's theorem, right? Or am I missing something?

In order to derive Gleason's theorem, you have to make an extra assumption related to probabilities, which is the non-contextuality ; in other words, to assume that the probability of an outcome ONLY depends upon the properties of the component of the state within the compatible eigenspace corresponding to the desired outcome, and NOT on other properties of the state or the observable, such as THE NUMBER of different eigenspaces, and the components in the OTHER eigenspaces (of other, potential, outcomes). (the other possible outcomes, and their relation to the state, are the *context* of the measurement). As you know, the APP NEEDS this information: it needs to know HOW MANY OTHER EIGENSPACES have a non-zero component of the state in them. The Born rule doesn't: the length of the component in the relevant eigenspace is sufficient. And Gleason proves that the Born rule is THE ONLY rule which satisfies this property.

Zurek does something else, which is assuming additivity of the probabilities of the "fine-grained" state components and then uses the specific case where there are exactly a sufficient number of fine-grained state components to arrive at the requested Born rule. The request that, for this specific fine-grained situation, the probability of the component in the relevant (coarse-grained) eigenspace is given ONLY by the sum of "component probabilities" within this eigenspace, is yet another form of requiring non-contextuality: namely that the probability is entirely determined by the component in the relevant eigenspace (by taking the sum), and NOT by the context, which is how the rest is sliced up, and how the components are distributed over the rest. So in an involved way, he also requires non-contextuality. And then, by Gleason, you find the Born rule.
 
  • #122
vanesch said:
In order to derive Gleason's theorem, you have to make an extra assumption related to probabilities, which is the non-contextuality ; in other words, to assume that the probability of an outcome ONLY depends upon the properties of the component of the state within the compatible eigenspace corresponding to the desired outcome, and NOT on other properties of the state or the observable, such as THE NUMBER of different eigenspaces, and the components in the OTHER eigenspaces (of other, potential, outcomes). (the other possible outcomes, and their relation to the state, are the *context* of the measurement). As you know, the APP NEEDS this information: it needs to know HOW MANY OTHER EIGENSPACES have a non-zero component of the state in them. The Born rule doesn't: the length of the component in the relevant eigenspace is sufficient.

True, the length of the component is not the same thing as the number of other eigenspaces. However, the length is a normalized length, right? and the normalization process injects, does it not, information regarding the other eigenspaces? ie, instead of counting the total number of eigenspaces, we are adding up the "measure" of all the eigenspaces when we normalize.

David
 
  • #123
straycat said:
True, the length of the component is not the same thing as the number of other eigenspaces. However, the length is a normalized length, right? and the normalization process injects, does it not, information regarding the other eigenspaces? ie, instead of counting the total number of eigenspaces, we are adding up the "measure" of all the eigenspaces when we normalize.
David

Eh, yes. But the eigenspace, and its complement, are of course representing "the same" outcome (A or NOT A). So, ok, if you want to allow for non-normalized states, you need to allow for the eigenspace and its complement - I was assuming that we could normalize the states (and so does Gleason). We all know that the same physical state is represented by a ray in Hilbert space, so a common coefficient has no meaning and may just as well be normalized out. In fact, if the initial state is normalized, unitary time evolution will preserve this normalization. What counts, for non-contextuality, is that the way the complementary eigenspace is eventually sliced up (or not) should not influence the probability of outcome A.
 
  • #124
vanesch said:
Eh, yes. But the eigenspace, and its complement, are of course representing "the same" outcome (A or NOT A). So, ok, if you want to allow for non-normalized states, you need to allow for the eigenspace and its complement - I was assuming that we could normalize the states (and so does Gleason). We all know that the same physical state is represented by a ray in Hilbert space, so a common coefficient has no meaning and may just as well be normalized out. In fact, if the initial state is normalized, unitary time evolution will preserve this normalization.

Although in the case of successive measurements (say A -> B -> C), you have to renormalize with each measurement result. So if we want to assert that normalization happens only once, at the beginning, then we are restricting ourselves to a "no-collapse" framework. Let's suppose we want to know the conditional probability: what is the probability of outcome b', given outcome a' ? To answer this, we need to "collapse" onto outcome a', which means we have to recalculate the wavefunction, which includes a renormalization procedure. So ok, we could calculate the probability of b' using only unitary evolution (allowing A to remain superpositioned), but NOT if we want CONDITIONAL probabilities based on the outcome of A.
 
  • #125
straycat said:
Although in the case of successive measurements (say A -> B -> C), you have to renormalize with each measurement result. So if we want to assert that normalization happens only once, at the beginning, then we are restricting ourselves to a "no-collapse" framework. Let's suppose we want to know the conditional probability: what is the probability of outcome b', given outcome a' ? To answer this, we need to "collapse" onto outcome a', which means we have to recalculate the wavefunction, which includes a renormalization procedure. So ok, we could calculate the probability of b' using only unitary evolution (allowing A to remain superpositioned), but NOT if we want CONDITIONAL probabilities based on the outcome of A.

Bzzzzt ! The conditional probability P(a|b) is completely known if we know P(a and b) and P(b), because it is equal (by definition) to P(a and b)/P(b).

Now, imagine that the initial state is |psi0>, and that this first evolves into u1|b> + u2|c>, where of course |u1|^2 + |u2|^2 = 1 if psi0 was normalized. This means that we had probability |u1|^2 to observe |b> (so P(b) will be equal to |u1|^2).
Now, imagine that this further evolves into:
u1 (v1|a> + v2 |d>) |b> + u2 |c'>

Clearly, |v1|^2 + |v2|^2 = 1 too if the entire state is to stay normalized, which it will, through unitary evolution. If, after having observed |b>, we now observe |a>, we are in fact in the branch |a>|b>, which has hilbert norm u1 v1 and thus probability |u1|^2 |v1|^2. This is the probability to observe a and b, so P(a and b) = |u1|^2 |v1|^2

Applying our definition of conditional probability, we see that P(a|b) = |v1|^2, and we didn't have to re-normalize the state.
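
For what it's worth, here is a quick numerical check of the above (the amplitudes u1, u2, v1, v2 are arbitrary toy values, chosen only to satisfy the normalization conditions):

[code]
import numpy as np

u1, u2 = np.sqrt(0.7), np.sqrt(0.3)   # |u1|^2 + |u2|^2 = 1
v1, v2 = np.sqrt(0.4), np.sqrt(0.6)   # |v1|^2 + |v2|^2 = 1

P_b = abs(u1) ** 2                    # probability of observing |b>
P_a_and_b = abs(u1 * v1) ** 2         # branch |a>|b> has hilbert norm u1*v1
P_a_given_b = P_a_and_b / P_b         # definition of conditional probability

print(P_a_given_b, abs(v1) ** 2)      # both 0.4: no re-normalization needed
[/code]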
 
  • #126
vanesch said:
Bzzzzt ! The conditional probability P(a|b) is completely known if we know P(a and b) and P(b), because it is equal (by definition) to P(a and b)/P(b).

Now, imagine that the initial state is |psi0>, and that this first evolves into u1|b> + u2|c>, where of course |u1|^2 + |u2|^2 = 1 if psi0 was normalized. This means that we had probability |u1|^2 to observe |b> (so P(b) will be equal to |u1|^2).
Now, imagine that this further evolves into:
u1 (v1|a> + v2 |d>) |b> + u2 |c'>

Clearly, |v1|^2 + |v2|^2 = 1 too if the entire state is to stay normalized, which it will, through unitary evolution. If, after having observed |b>, we now observe |a>, we are in fact in the branch |a>|b>, which has hilbert norm u1 v1 and thus probability |u1|^2 |v1|^2. This is the probability to observe a and b, so P(a and b) = |u1|^2 |v1|^2

Applying our definition of conditional probability, we see that P(a|b) = |v1|^2, and we didn't have to re-normalize the state.


Aack! I think you are correct; my "conditional probability" critique was wrong. You caught me napping :zzz: .

There is still something I do not understand. Suppose we are measuring the spin state of a particle. If it is a spin 1/2 particle, then there are 2 states; spin 1, 3 states; etc. So when we apply the Schrodinger equation to the initial state |psi0>, it evolves into

u1|b> + u2|c> if the particle is spin 1/2, or

u1|b> + u2|c> + u3|e> if it is spin 1, etc.

So my question: how does the Schrodinger equation "know" how many states are possible? Is it part and parcel of our original definition of |psi0> ? Or does it somehow emerge from the Schrodinger equation itself, without our having to inject it externally?
 
  • #127
straycat said:
So my question: how does the Schrodinger equation "know" how many states are possible? Is it part and parcel of our original definition of |psi0> ?

?? I'd say it is part of the saying that it is a spin-1 particle in the first place. If it goes into 3 parts, we call it a spin-1 particle !

Sounds like: "how does a green car know it has to reflect green light ?" or something... unless I miss what you want to say.
 
  • #128
agreed

vanesch said:
BUT THAT IS NOTHING ELSE BUT NON-CONTEXTUALITY. It is always the same trick (equation 9a). cheers,
Patrick.


Yes, I agree completely. Deutsch, Wallace, Zurek etc do fine through the point of showing that equal-measure outcomes have equal probabilities. The next step involves assuming that probability is fixed after an experiment, and independent of observer. These are exactly the features which one does not find in APP. It's nice to show that Born can be derived from slightly weaker assumptions, but that doesn't mean that those assumptions follow from unitary QM. Worse, it doesn't mean that those assumptions are even consistent with unitary QM and our operational definition of probability.
 
  • #129
vanesch said:
Zurek does something else, which is assuming additivity of the probabilities of the "fine-grained" state components and then uses the specific case where there are exactly a sufficient number of fine-grained state components to arrive at the requested Born rule. The request that, for this specific fine-grained situation, the probability of the component in the relevant (coarse-grained) eigenspace is given ONLY by the sum of "component probabilities" within this eigenspace, is yet another form of requiring non-contextuality: namely that the probability is entirely determined by the component in the relevant eigenspace (by taking the sum), and NOT by the context, which is how the rest is sliced up, and how the components are distributed over the rest. So in an involved way, he also requires non-contextuality. And then, by Gleason, you find the Born rule.
Yeah - I thought you might be interested in these excerpts from a comment I wrote on the Zurek argument. (I tried a bit to get it published there.) The point is that our arguments are almost identical, which is reassuring.


...Here I argue Zurek makes an implicit assumption which runs counter to the explicit assumptions.
...
The difficulty arises in extending the argument to decoherent outcomes whose measures are not equal, i.e. to density matrices whose non-zero diagonals are not equal. Here Zurek introduces a second environment, called C, and proposes that it is possible for E to become entangled with C in such a way that the density matrix for SC traced over E can (almost) be expressed by a collection of equal diagonal terms, with each diagonal term in the density matrix of S expanding into a finite set of equal diagonal terms in the density matrix of SC. Now applying the swapping-symmetry argument to SC, Zurek gets that these SC outcomes must have equal probabilities.
Since the particular C-E entanglement required will occur on a set of measure zero of starting states, and cannot even approximately arise for the general case by any physical process represented by linear time evolution, the argument is not that such processes will occur but rather that they might sometimes occur, and the probabilities obtained in those special cases must be the same as those obtained in all cases because the density matrix for S is unaffected by the C-E entanglement.
Treating the probabilities of S outcomes as sums over (more detailed) SC outcomes then gives the Born rule. This step, however, does not amount to simply using additivity of probabilities within a single probability space but rather implicitly assumes that the probabilities defined on S are simply related to the probabilities defined on SC. No matter how much that step accords with our experience-based common sense, it does not follow from the stated assumptions, which are deeply based on the idea that probabilities cannot be defined in general but only on a given system. Thus the question of why quantum probabilities take on Born values, or more generally of why they seem independent of where a line is drawn between system and environment, is not answered by Zurek's argument.
A counterexample may reinforce this point. A simple probability, defined from numbers of diagonal terms in the density matrix of the system without weighting by measure, is entirely "envariant" and obeys the swapping symmetry. It does not obey any simple general relation between the probabilities defined on a system and on some supersystem. This probability is, of course, none other than 'world counting', which has frequently been argued to be the obvious probability to arise in collapse-free pictures without dynamical or metaphysical addenda [2-8].
Thus the problem of showing why the probabilities we experience emerge from quantum mechanics remains. ...
 
  • #130
vanesch said:
?? I'd say it is part of the saying that it is a spin-1 particle in the first place. If it goes into 3 parts, we call it a spin-1 particle !

Sounds like: "how does a green car know it has to reflect green light ?" or something... unless I miss what you want to say.

Sorry for the delayed reply ... been busy at work.

Let me see if I can explain my question. The operator for the (non-relativistic) Schrodinger equation in general form is

(1) H = T + V

If we are dealing with a spin 1/2 particle, then we know that the coefficients for the spin states take the form:

(2) |a_up|^2 = cos^2(theta), |a_down|^2 = sin^2(theta)

Obviously, these are normalized, meaning that sin^2 + cos^2 = 1 for any theta. But how exactly did we get from the general relation (1) to the specific relation (2)? Somewhere in this process, we had to inject the fact that there are *two* states. My point is that the Schrodinger equation does not tell us that there are two states; rather, this is an additional piece of information that is put in "externally" when we derive (2) from (1) so that we can normalize correctly. Therefore, the length of the component in the relevant eigenspace *does* depend on the total number of eigenspaces.

Unless I am missing something, which is entirely possible. (I have never understood the significance/meaning of "noncontextuality" -- hopefully I can fix that in this thread ...)

Just for reference, this is the statement that prompted my question:

vanesch said:
As you know, the APP NEEDS this information: it needs to know HOW MANY OTHER EIGENSPACES have a non-zero component of the state in them. The Born rule doesn't: the length of the component in the relevant eigenspace is sufficient.

David
 
  • #131
mbweissman said:
The difficulty arises in extending the argument to decoherent outcomes whose measures are not equal, i.e. to density matrices whose non-zero diagonals are not equal. ... This step, however ... rather implicitly assumes that the probabilities defined on S are simply related to the probabilities defined on SC. ...

I agree with this general line of reasoning. It seems so obvious that, when I first read Zurek's paper, it made me wonder whether I had missed some subtle point. So far I haven't found it though ...

David
 
  • #132
mbweissman said:
Treating the probabilities of S outcomes as sums over (more detailed) SC outcomes then gives the Born rule. This step, however, does not amount to simply using additivity of probabilities within a single probability space but rather implicitly assumes that the probabilities defined on S are simply related to the probabilities defined on SC.

EXACTLY !

No matter how much that step accords with our experience-based common sense, it does not follow from the stated assumptions, which are deeply based on the idea that probabilities cannot be defined in general but only on a given system. Thus the question of why quantum probabilities take on Born values, or more generally of why they seem independent of where a line is drawn between system and environment, is not answered by Zurek's argument.

Yes, that was also the reasoning I had. And IF you make that extra assumption (which, I think, corresponds to non-contextuality) then we *already know* that we will find the Born rule through Gleason's theorem.

A counterexample may reinforce this point. A simple probability, defined from numbers of diagonal terms in the density matrix of the system without weighting by measure, is entirely "envariant" and obeys the swapping symmetry. It does not obey any simple general relation between the probabilities defined on a system and on some supersystem. This probability is, of course, none other than 'world counting', which has frequently been argued to be the obvious probability to arise in collapse-free pictures without dynamical or metaphysical addenda [2-8].
Thus the problem of showing why the probabilities we experience emerge from quantum mechanics remains. ...

Yes, your counterexample is of course the "APP", which we should maybe give the name "RHA" (Revelator of Hidden Assumptions) :smile:
 
  • #133
straycat said:
Sorry for the delayed reply ... been busy at work.

Let me see if I can explain my question. The operator for the (non-relativistic) Schrodinger equation in general form is

(1) H = T + V

If we are dealing with a spin 1/2 particle, then we know that the coefficients for the spin states take the form:

(2) |a_up|^2 = cos^2(theta), |a_down|^2 = sin^2(theta)

Obviously, these are normalized, meaning that sin^2 + cos^2 = 1 for any theta. But how exactly did we get from the general relation (1) to the specific relation (2)? Somewhere in this process, we had to inject the fact that there are *two* states.

We don't derive (2) from (1). (2) is part of the interpretation of the Hilbert space of states. In order to set up a quantum theory, we need to start NAMING the independent degrees of freedom, and *assign* them operational meanings. This is a part that is often skipped in textbooks, because they are usually interested in quantizing classical systems, and in this case, the independent degrees of freedom are given by the points in configuration space of the classical system, so it is "automatic". And then we STILL need to put in, by hand, a few extra degrees of freedom, such as spin.
Things like (2) are usually called the "kinematics" of the theory, while (1) is the "dynamics".

This compares, in classical physics, to: (1) specifying the number of particles and degrees of freedom of the mechanical system and (2), writing down Newton's equation or its equivalent.
Clearly, you cannot derive the number of planets from Newton's equation, and you cannot derive the number of degrees of freedom for spin from the Schroedinger equation.
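
As a toy sketch of this kinematics/dynamics split (my own illustration ; the diagonal Hamiltonians and the field strength B are just placeholders, not any particular physical system): the dimension of the spin space is chosen by hand when we name the degrees of freedom, and the Hamiltonian is then an operator on that chosen space.

[code]
import numpy as np

# Kinematics: choose the state space by hand (C^2 for spin 1/2, C^3 for spin 1).
Sz_half = 0.5 * np.diag([1.0, -1.0])      # spin-1/2 operator on C^2
Sz_one  = np.diag([1.0, 0.0, -1.0])       # spin-1 operator on C^3

# Dynamics: a Zeeman-like Hamiltonian, written separately for each chosen space.
B = 0.3                                    # assumed field strength, arbitrary units
H_half = -B * Sz_half
H_one  = -B * Sz_one

psi_half = np.array([1.0, 0.0])            # a spin-1/2 state
print(H_half @ psi_half)
# H_one @ psi_half would raise an error: a spin-1 Hamiltonian cannot act on a
# spin-1/2 state. The number of spin components was put in at the kinematics step.
[/code]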
 
  • #134
vanesch said:
...
This compares, in classical physics, to: (1) specifying the number of particles and degrees of freedom of the mechanical system and (2), writing down Newton's equation or its equivalent.

Clearly, you cannot derive the number of planets from Newton's equation, and you cannot derive the number of degrees of freedom for spin from the Schroedinger equation.

This is a good analogy. To calculate the orbit of a planet (say Mars), we take as input the masses, initial positions, and initial velocities of the sun, planets, and other objects in the solar system, and plug these into Newton's equation, which spits out the solution x_mars(t). (We could then talk about the "Isaac rule" which states that "x(t) is interpreted as the trajectory.") Likewise, to calculate the probability associated with a given observation, we take the initial state of the system, which includes stuff like the particle's spin, which includes the total number of possible spin states, and plug this into the Schrodinger equation, which spits out the amplitudes a_n. Finally, we apply the Born rule, which tells us that |a_n|^2 gives us the probability.

Now you said earlier that

vanesch said:
As you know, the APP NEEDS ... to know HOW MANY OTHER EIGENSPACES have a non-zero component of the state in them. The Born rule doesn't: the length of the component in the relevant eigenspace is sufficient.

So you are saying that the probability of, say, spin up is independent of the total number of eigenstates (two in this case), because it depends only upon a_up. But isn't this like saying that the trajectory of Mars is independent of the trajectories of the other planets, because it depends only on x_mars(t)? Obviously this statement is false, because you cannot change (say) x_jupiter(t) without inducing a change in x_mars(t) as well.

To sum up: if we assume non-contextuality, this means (correct me if I am wrong) that we are assuming that the probability of the n^th outcome depends only on a_n and not on anything else, such as the total number N of eigenspaces. My difficulty is that I do not see how this assumption is tenable, given that you cannot calculate a_n without knowing (among other things) the total number of eigenspaces. It would be analogous to the assumption that the trajectory of Mars is independent of the trajectory of Jupiter.

So what am I missing?

David
 
  • #135
straycat said:
This is a good analogy. To calculate the orbit of a planet (say Mars), we take as input the masses, initial positions, and initial velocities of the sun, planets, and other objects in the solar system, and plug these into Newton's equation, which spits out the solution x_mars(t). (We could then talk about the "Isaac rule" which states that "x(t) is interpreted as the trajectory.")

Worse! The number of planets will even CHANGE THE FORM of Newton's equation: the number of 1/r^2 terms in the force law for each planet will be different depending on whether there are 2, 3, 5, 12, or 2356 planets. In other words, the DIMENSION OF THE PHASE SPACE defines (partly) the form of Newton's equation.

Likewise, to calculate the probability associated with a given observation, we take the initial state of the system, which includes stuff like the particle's spin, which includes the total number of possible spin states, and plug this into the Schrodinger equation, which spits out the amplitudes a_n.

In the same way, the FORM of the Schroedinger equation (or better, of the Hamiltonian) will depend upon the spins of the particles ; and this time not only the number of terms, but even the STRUCTURE of the Hamiltonian: a Hamiltonian for a spin-1/2 particle CANNOT ACT upon the Hilbert space of a spin-1 particle: it is not the right operator on the right space.

So you are saying that the probability of, say, spin up is independent of the total number of eigenstates (two in this case), because it depends only upon a_up. But isn't this like saying that the trajectory of Mars is independent of the trajectories of the other planets, because it depends only on x_mars(t)? Obviously this statement is false, because you cannot change (say) x_jupiter(t) without inducing a change in x_mars(t) as well.

I was referring to the number of eigenspaces of the hermitean measurement operator, NOT to the unitary dynamics (which is of course sensitive to the number of dimensions of HILBERT SPACE, which is nothing else but the number of physical degrees of freedom). The eigenspaces of the hermitean measurement operator, however, depend entirely on the measurement to be performed (the different, distinguishable, results). When the measurement is complete, then all eigenspaces are one-dimensional. But clearly that's not possible, because that means an almost infinite amount of information.
The hermitean measurement operator is in fact just a mathematically convenient TRICK to say which different eigenspaces of states correspond to distinguishable measurement results (and to include a name = real number for these results). What actually counts is the slicing up of the hilbert space in slices of subspaces which "give the same results".

The extraction of probabilities in quantum theory, comes from the confrontation of two quantities:
1) the wavefunction |psi> and 2) the measurement operator (or better, the entire set of eigenspaces) {E_1,E_2...E_n},
and the result of this confrontation has to result in assigning a probability to each distinct outcome.

I tried to argue in my paper that every real measurement always has only a finite number of eigenspaces {E_1...E_n} ; that is to say, the result of a measurement can always be stored in a von Neumann computer with large, but finite, memory. You can try to fight this, and I'm sure you'll soon run into thermodynamical problems (and you'll even turn into a black hole :biggrin: ).

As such, when I do a specific measurement, so when I have a doublet: {|psi>,{E_1...E_n}}, then I need to calculate n numbers, p_1,... p_n, which are the predicted probabilities of outcomes associated, respectively, to E_1 ... E_n.

This means that, in general, p_i is a function of the whole doublet: p_i = p_i({|psi>, {E_1...E_n}}).
This means that p_i can change completely if we change ANY of the E_j. On the other hand, FOR A GIVEN SET OF {E_1...E_n}, they have to span a Kolmogorov probability measure. But FOR A DIFFERENT SET, we can have ANOTHER probability measure.

The Born rule says that p_i is given by <psi|P_i|psi> (if psi is normalized), where P_i is the projector on E_i. The APP says that p_i = 1/k, with k the number of projectors which do not annihilate |psi>.

Non-contextuality is the claim that p_i can only depend upon |psi> and E_i. Gleason's theorem says then, that IF WE REQUIRE that p_i is ONLY a function of |psi> and E_i (no matter what the other E_k are, and how many there are), then the ONLY solution is the Born rule. If the probability for a measurement to say that the state is in E_i can only depend upon E_i itself, and the quantum state |psi>, then the only solution is the Born rule.

This rules out the APP, for the APP needs to know the number k of eigenspaces which contain a component of |psi>. The APP is hence a contextual rule, not a non-contextual one. It needs to know 'the context' of the measurement within which we are trying to calculate the probability for |psi> to be in E_i.
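
To spell that out with numbers (a toy state and projectors of my own choosing, nothing more): one eigenspace is shared by two different measurements ; the Born probability for it is the same in both cases, while the APP value changes with the number of eigenspaces that carry a component of the state.

[code]
import numpy as np

# State in C^3. Measurement A has eigenspaces {span(e1), span(e2,e3)};
# measurement B has {span(e1), span(e2), span(e3)}. They share span(e1).

psi = np.array([0.6, 0.48, 0.64])            # assumed normalized state, |psi| = 1
P1  = np.diag([1.0, 0.0, 0.0])               # projector on the shared eigenspace
P23 = np.diag([0.0, 1.0, 1.0])
P2, P3 = np.diag([0.0, 1.0, 0.0]), np.diag([0.0, 0.0, 1.0])

born = psi @ P1 @ psi                        # <psi|P1|psi>: the same for A and B

def app(psi, projectors):
    """APP: weight 1/k for each of the k eigenspaces that do not annihilate |psi>."""
    k = sum(1 for P in projectors if np.linalg.norm(P @ psi) > 1e-12)
    return 1.0 / k

print(born)                    # 0.36, independent of the context
print(app(psi, [P1, P23]))     # 0.5   (measurement A)
print(app(psi, [P1, P2, P3]))  # 0.333 (measurement B): context-dependent
[/code]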

To sum up: if we assume non-contextuality, this means (correct me if I am wrong) that we are assuming that the probability of the n^th outcome depends only on a_n and not on anything else, such as the total number N of eigenspaces. My difficulty is that I do not see how this assumption is tenable, given that you cannot calculate a_n without knowing (among other things) the total number of eigenspaces. It would be analogous to the assumption that the trajectory of Mars is independent of the trajectory of Jupiter.

No, that's because you're confusing two different sets of dimensions. The physical situation determines the number of dimensions in Hilbert space, and the dynamics (the unitary evolution) is dependent upon that only. But there's no discussion about the number of dimensions of Hilbert space (the number of physical degrees of freedom).
The number of EIGENSPACES is related to the resolution and kind of the measurement, in that many physical degrees of freedom will give identical measurement results, which are then lumped into ONE eigenspace. It is more about HOW WE DISTRIBUTE the physical degrees of freedom over the DIFFERENT measurement outcomes, and how this relates to the probability of outcome.
Non-contextuality says that if we perform TWO DIFFERENT measurements, but which happen to have a potential outcome in common (thus, have one of their E_i in common), that we should find the same probability for that outcome, for the two different cases. It is a very reasonable requirement at first sight.
 
  • #136
Sorry again for the much delayed response.

I've been pondering your last post. It seems you are drawing a distinction between (1) the number of dimensions of Hilbert space and (2) the number of eigenspaces of the measurement operator. I realize these are different, but there is still something I am not quite grokking. I'll have to re-read this thread and ponder more when I get more time. In the meantime, a few comments/questions.

vanesch said:
Non-contextuality is the claim that p_i can only depend upon |psi> and E_i. ...

... the APP needs to know the number k of eigenspaces which contain a component of |psi>. The APP is hence a contextual rule, not a non-contextual one.

Suppose we assume the APP. Given a particular measurement to be performed, suppose we have K total fine-grained outcomes, with k_i the number of fine-grained outcomes corresponding to the i^th coarse-grained result. eg, we have N position detector elements, i an integer in [1,N], and the sum of k_i over all i equals K. So the probability of detection at the i^th detector element is k_i / K, and we define:
E_i = k_i / K

So if I claim that p_i can depend only upon E_i (ie p_i = E_i), then it seems to me that I could argue, using the same reasoning that you use above for the Born rule, that the APP is non-contextual. What is wrong with my reasoning? I suppose you might say that you cannot calculate E_i (in the framework of the APP) without knowledge of K, ie without knowledge of the context. But it still seems to me that you likewise cannot calculate E_i in the framework of the Born rule, without knowledge of the measurement operator. iow, I'm trying to argue that the APP and the Born rule are either both contextual, or both non-contextual, depending on how exactly you define contextual, and you can't distinguish them based on "contextuality."

Perhaps I should study Gleason's theorem in greater detail than I have done so far. I actually think it is somewhat remarkable that it leads to the Born rule. However, it still seems to me that the assumption of a normalized Hilbert-space vector for state representation is where the Born rule sneaks in. That is, the state is represented by a vector f, and the sum of |f_n|^2 over all basis components equals 1 (by normalization). So really it's not that surprising that |f|^2 is the only way to get a probability.

vanesch said:
It is a very reasonable requirement at first sight.

Do you say "at first sight" because a careful analysis indicates that it's not all that reasonable?

vanesch said:
I tried to argue in my paper that every real measurement always has only a finite number of eigenspaces {E_1...E_n}; that is to say, the result of a measurement can always be stored in a von Neumann computer with large, but finite, memory. You can try to fight this, and I'm sure you'll soon run into thermodynamical problems (and you'll even turn into a black hole :biggrin:).

I actually agree with you here. The argument in my mind goes like this: consider a position measurement. If you want to come up with a continuous measurement variable, this would be it. But from a practical perspective, a position measurement is performed via an array or series of discrete measurement detectors. The continuous position measurement is then conceived as the theoretical limit as the number of detector elements becomes infinite. But from a practical, and perhaps from a theoretical, perspective, this limit cannot ever be achieved: the smallest detector element I can think of would be (say) an individual atom, for example the atoms that make up x-ray film.

David
 
  • #137
straycat said:
I've been pondering your last post. It seems you are drawing a distinction between (1) the number of dimensions of Hilbert space and (2) the number of eigenspaces of the measurement operator. I realize these are different,

Yes, this is essential. The number of dimensions of Hilbert space is given by the physics of the system, and by that physics alone, and might very well be infinite. I think making assumptions about the finiteness of this dimensionality is dangerous. After all, you do not know what degrees of freedom are hidden deep down there. So we should somehow be independent of the number of dimensions of the Hilbert space.

However, the number of eigenspaces of the measurement operator is purely determined by the measurement apparatus. It is given by the resolution by which we could, in principle, determine the quantity we're trying to measure, using the apparatus in question. You and I agree that this must be a finite number, and a rather well-determined one. This is probably where we differ in opinion: you seem to posit "micromeasurements" belonging to possibly unknown physics of which we are not aware, versus "macromeasurements" which are just our own coarse-graining of these micromeasurements, while I claim that with every specific measurement goes a certain, well-defined number of outcomes (which could possibly be more fine-grained than the observed result, but this should not depend on "unknown physics": a detailed analysis of the measurement setup should reveal it to us). I would even claim that a good measurement apparatus makes the observed number of outcomes about equal to the real number of eigenspaces.

Suppose we assume the APP. Given a particular measurement to be performed, suppose we have K total fine-grained outcomes, with k_i the number of fine-grained outcomes corresponding to the i^th coarse-grained result. eg, we have N position detector elements, i an integer in [1,N], and the sum of k_i over all i equals K. So the probability of detection at the i^th detector element is k_i / K, and we define:
E_i = k_i / K

Yes, but I'm claiming now that for a good measurement system, k_i = 1 for all i, and even if it isn't (for instance, you measure with a precision of 1 mm, but your numerical display only shows a 1 cm resolution), you're not free to fiddle with k_i as you like.
Also, you now have a strange outcome! You ALWAYS find probability E_i for outcome i, no matter what the quantum state was! Even if the quantum state is entirely within the E_i eigenspace, you'd still have a fractional probability? That would violate the rule that two measurements applied one after the other must give the same result.

So if I claim that p_i can depend only upon E_i (ie p_i = E_i), then it seems to me that I could argue, using the same reasoning that you use above for the Born rule, that the APP is non-contextual.

No, non-contextuality has nothing to do with the number E_i you're positing here; it is the property of being a function only of the eigenspace (spanned by the k_i subspaces) and the quantum state, no matter how the other eigenspaces are sliced up. Of course, in a way, you're right: if the outcome is INDEPENDENT OF the quantum state (as it is in your example), you are indeed performing a non-contextual measurement. In fact, the outcome has nothing to do with the system: outcome i ALWAYS appears with probability E_i. But I imagine that you only want to consider THOSE OUTCOMES i THAT HAVE A PART OF the quantum state in them, right? And THEN you become dependent on what happens in the other eigenspaces.

That is, the state is represented by a vector f, and the sum of |f_n|^2 over all basis components equals 1 (by normalization). So really it's not that surprising that |f|^2 is the only way to get a probability.

Well, there's a difference in the following sense: if you start out with a normalized state, you will always keep a normalized state under unitary evolution, and if you change basis (change measurement), you can keep the same normalized vector. That cannot be said for the E_i and k_i construction, which needs to be redone after each evolution, and after each different measurement basis.
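As a minimal numerical sketch of that difference (my own toy illustration, with a randomly chosen finite-dimensional system): a normalized state stays normalized under any unitary evolution, and its Born weights in any other measurement basis still sum to 1, with nothing to redo:

[code]
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR decomposition of a complex Gaussian matrix yields a random unitary.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix the column phases

n = 8
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)             # normalized initial state

U = random_unitary(n)                  # some unitary "evolution"
psi_t = U @ psi
print(np.linalg.norm(psi_t))           # still 1 (up to rounding)

B = random_unitary(n)                  # columns = eigenvectors of a different measurement
born_weights = np.abs(B.conj().T @ psi_t) ** 2
print(born_weights.sum())              # still 1: no re-normalization needed
[/code]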

Do you say "at first sight" because a careful analysis indicates that it's not all that reasonable?

Well, it is reasonable, but it is an EXTRA assumption (and, according to Gleason, logically equivalent to postulating the Born rule). It is hence "just as" reasonable as postulating the Born rule.
What I meant by "at first sight" is that one doesn't realize the magnitude of the step taken! In unitary QM, there IS no notion of probability. There is just a state vector, evolving deterministically by a given first-order differential equation, in a Hilbert space. From the moment you require, no matter how little, some property of a probability derived from that vector, you are in fact implicitly postulating an entire construction: namely that probabilities ARE going to be generated from this state vector (probabilities for what, for whom?), that only part of the state vector is going to be observed (by whom?), etc. So the mere statement of a simple property of the probabilities in fact postulates an entire machinery - which is not obvious at first sight. Now if your aim is to DEDUCE the appearance of probabilities from the unitary machinery, then implicitly postulating this machinery is NOT reasonable, because it means that you are, in one way or another, postulating what you were trying to deduce.

I actually agree with you here. The argument in my mind goes like this: consider a position measurement. If you want to come up with a continuous measurement variable, this would be it. But from a practical perspective, a position measurement is performed via an array or series of discrete measurement detectors. The continuous position measurement is then conceived as the theoretical limit as the number of detector elements becomes infinite. But from a practical, and perhaps from a theoretical, perspective, this limit cannot ever be achieved: the smallest detector element I can think of would be (say) an individual atom, for example the atoms that make up x-ray film.

That's what I meant, too. There's a natural "resolution" to each measurement device, which is given by the physics of the apparatus. An x-ray film will NOT be in different quantum states for positions which differ by much less than the size of an atom (or even a bromide crystal). This is not "unknown physics" with extra degrees of freedom. I wonder whether a CCD-type camera senses at a finer resolution than one pixel (meaning that the quantum states would be different for hits at different positions on the same pixel). Of course, there may be - and probably will be - some data reduction up to the display, but one cannot invent, at will, more fine-grained measurements than the apparatus is actually naturally performing. And this is what determines the slicing-up of the Hilbert space into a finite number of eigenspaces, which will each result in macroscopically, potentially distinguishable "pointer states". And I think it is difficult (if not hopeless) to posit that these "micromeasurements" will arrange themselves each time in such a way that they work according to the APP, but give rise to the Born rule at the coarse-grained level - mainly because the relationship between fine-grained and coarse-grained is given by the measurement apparatus itself, and not by the quantum system under study (your E_i = k_i/K is fixed by the physics of the apparatus, independent of the state you care to send onto it; the number of atoms on the x-ray film per identified "pixel" on the scanner is fixed, and does not depend on how it was irradiated).

cheers,
Patrick.
 
  • #138
You can try to fight this, and I'm sure you'll soon run into thermodynamical problems (and you'll even turn into a black hole :biggrin:).
Proof by threat of black hole!
 
  • #139
Hurkyl said:
Proof by threat of black hole!

I'm proud to have found a new rhetorical technique :biggrin:
 
  • #140
It takes a singular mind to come up with such things! (Okay, I'll stop now)
 
