Local realism ruled out? (was: Photon entanglement and )

Summary
The discussion revolves around the validity of local realism in light of quantum mechanics and Bell's theorem. Participants argue that existing experiments have not conclusively ruled out local realism due to various loopholes, such as the detection and locality loopholes. The Bell theorem is debated, with some asserting it demonstrates incompatibility between quantum mechanics and local hidden variable theories, while others claim it does not definitively negate local realism. References to peer-reviewed papers are made to support claims, but there is contention over the interpretation of these findings. Overall, the conversation highlights ongoing disagreements in the physics community regarding the implications of quantum entanglement and the measurement problem on local realism.
  • #691
akhmeteli said:
I will skip a large part of the quote. In the same article Goldstein writes:
"The second formulation of the measurement problem, though basically equivalent to the first one, suggests an important question: Can Bohmian mechanics itself provide a coherent account of how the two dynamical rules might be reconciled? How does Bohmian mechanics justify the use of the "collapsed" wave function in place of the original one? This question was answered in Bohm's first papers on Bohmian mechanics (Bohm 1952, Part I, Section 7, and Part II, Section 2). What would nowadays be called effects of decoherence, produced by interaction with the environment (air molecules, cosmic rays, internal microscopic degrees of freedom, etc.), make it extremely difficult for the component of the after-measurement wave function corresponding to the actual result of the measurement to develop significant overlap — in the configuration space of the very large system that includes all systems with which the original system and apparatus come into interaction — with the other components of the after-measurement wave function. But without such overlap the future evolution of the configuration of the system and apparatus is generated, to a high degree of accuracy, by that component all by itself. The replacement is thus justified as a practical matter. (See also Dürr et al. 1992, Section 5.)"

"To a high degree of accuracy"! So Goldstein says exactly the same as Demystifier (or, if you wish, Demystifier says exactly the same as Goldstein:-) ), namely: collapse is an approximation. The overlap does not disappear!
But here he is talking about the assumption of a collapse followed by another measurement later. What I said before about Demystifier applies to Goldstein too:
It's also possible Demystifier would distinguish between the procedure of repeatedly applying the projection postulate for multiple measurements vs. assuming unitary evolution until the very end of a series of measurements and then applying the Born rule to find the probabilities for different possible combinations of recorded outcomes for all the previous measurements, and that he would say there are cases where Bohmian mechanics would predict slightly different statistics from the first case but not from the second case.
If you assume unitary evolution and only apply the Born rule once at the very end, then the probabilities for different final observed states should be exactly equal to the probabilities given by Bohmian mechanics + the quantum equilibrium hypothesis. See for example the beginning of section 9 where he writes:
According to the quantum formalism, the probability density for finding a system whose wave function is ψ in the configuration q is |ψ(q)|². To the extent that the results of measurement are registered configurationally, at least potentially, it follows that the predictions of Bohmian mechanics for the results of measurement must agree with those of orthodox quantum theory (assuming the same Schrödinger equation for both) provided that it is somehow true for Bohmian mechanics that configurations are random, with distribution given by the quantum equilibrium distribution |ψ(q)|².
Would you agree that if we assume unitary evolution and then apply the Born rule once, at the very end, the probability that this last measurement will find the system in configuration q will be exactly |ψ(q)|²? And here Goldstein is saying that according to the quantum equilibrium hypothesis, at any given time the probability that a system's full configuration has an arrangement of positions corresponding to the observable state 1 is also exactly |ψ(q)|². He says something similar in this paper where he writes:
Bohmian mechanics is arguably the most naively obvious embedding imaginable of Schrödinger's equation into a completely coherent physical theory. It describes a world in which particles move in a highly non-Newtonian sort of way, one which may at first appear to have little to do with the spectrum of predictions of quantum mechanics. It turns out, however, that as a consequence of the defining dynamical equations of Bohmian mechanics, when a system has wave function ψ its configuration is typically random, with probability density ρ given by |ψ|², the quantum equilibrium distribution.
In any case, I want to be clear on one point: are you really arguing that Bohmian mechanics, when used to predict statistics for observable pointer states (which it can do assuming the same dynamical equation guides particle positions at all times, with no special rule for measurement), might not predict Bell inequality violations in an experiment of the type imagined by Bell? I don't think anyone would argue that Bohmian mechanics gives "approximately" the same results as the standard QM formalism if this were the case; that would be a pretty huge difference! And note section 13 of the Stanford article where Goldstein notes that Bohmian mechanics is explicitly nonlocal--the motions of each particle depend on the instantaneous positions of every other particle in the system.
 
  • #692
JesseM said:
But in terms of the formalism, do you agree that you can apply the Born rule once for the amplitude of a joint state? One could consider this as an abstract representation of a pair of simultaneous measurements made at the same t-coordinate, for example. Even if the measurements were made at different times, one could assume unitary evolution for each measurement so that each measurement just creates entanglement between the particles and the measuring devices, but then apply the Born rule once to find the probability for the records of the previous measurements ('pointer states' in Bohmian lingo)

I am not quite sure. Could you write down the amplitude you have in mind? It should be relevant to the correlation, shouldn't it? As I said, maybe you can design just one measurement to measure the correlation directly (it would probably be some nonlocal measurement), but that has nothing to do with what is done in Bell experiments and, therefore, is not useful for analysis of experiments. So you can do a lot as a matter of formalism, but the issue at hand is whether what we do is relevant to Bell experiments. I don't accept the procedure you offer as it has nothing to do with practical measurements, which are performed on both particles. As I said, records are not even permanent. And measurements are never final. That is the curse of unitary evolution.

Furthermore, I am not sure the Born rule can be used as anything more than an operating principle, because I don't have a clear picture of how the Born rule arises from dynamics (unitary evolution).

Let me explain my problem with the derivation for quantum theory in more detail. Say, you are performing a measurement on one particle. If we take unitary evolution seriously, the measurement cannot destroy the superposition, therefore, the probability is not zero for each sign of the measured spin projection even after the measurement. Therefore, the same is true for the second particle. So, technically, the probability should not be zero for both particles having the same spin projection? You cannot eliminate this possibility, at least not if you perform just one measurement.
 
  • #693
I skipped a large part of the quote again
JesseM said:
In any case, I want to be clear on one point: are you really arguing that Bohmian mechanics, when used to predict statistics for observable pointer states (which it can do assuming the same dynamical equation guides particle positions at all times, with no special rule for measurement), might not predict Bell inequality violations in an experiment of the type imagined by Bell?

The probability density may be the same in Bohmian and standard theory for the entire system. But nobody models the instruments in the Bell proof. So you need something more to calculate the correlation in quantum theory and prove the violations. You have two measurements in experiments (so it is sufficient to use your understanding of Goldstein and Demystifier's words on approximation: "collapse followed by another measurement later"). To get the result, you can use the projection postulate in standard quantum mechanics, or you can say in Bohmian mechanics that collapse is a good approximation there. I am not aware of any proofs that do not use tricks of this kind. So yes, I do think that if you do not use such a trick, you cannot prove violations in Bohmian mechanics. If you can offer a derivation that does not use something like this, I am all ears.

JesseM said:
I don't think anyone would argue that Bohmian mechanics gives "approximately" the same results as the standard QM formalism if this were the case, that would be a pretty huge difference! And note section 13 of the Stanford article where Goldstein notes that Bohmian mechanics is explicitly nonlocal--the motions of each particle depend on the instantaneous positions of every other particle in the system.

Goldstein and Demystifier seem to say just that: collapse (part and parcel of standard quantum mechanics (SQM) so far) is just an approximation in Bohmian mechanics. So don't shoot the messenger (me:-) ). Again, if collapse were precise in Bohmian mechanics (BM), that would mean that BM contains the same internal contradictions as SQM.

And yes, Bohmian mechanics is explicitly nonlocal, but for some reason, there is no faster-than-light signaling there, for example (for the standard probability density). "My" model may have the same unitary evolution as an explicitly nonlocal theory (a quantum field theory), but it's local.
 
  • #694
akhmeteli said:
I am not quite sure. Could you write down the amplitude you have in mind?
The amplitude would depend on the experimental setup, but see the bottom section of p. 155 of this book, which says:
If we denote the basis states for Alice as |a_i⟩ and the basis states for Bob by |b_j⟩, then the basis states for the composite system are found by taking the tensor product of the Alice and Bob basis states:

|α_ij⟩ = |a_i⟩ ⊗ |b_j⟩ = |a_i⟩|b_j⟩ = |a_i b_j⟩

So in an experiment where the |a_i⟩ basis states represent eigenstates of spin for the particle measured by Alice, and the |b_j⟩ basis states represent eigenstates of spin for the particle measured by Bob, you can have basis states for the composite system like |a_i b_j⟩ and these states will naturally be assigned some amplitude by the wavefunction of the whole system.
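A minimal numerical sketch of this composite-basis construction (the two-dimensional basis vectors and the example entangled state below are illustrative choices, not taken from the book):

```python
import numpy as np

# Basis states |a_0>, |a_1> for Alice's particle and |b_0>, |b_1> for Bob's
a = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
b = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Composite basis states |a_i b_j> = |a_i> (tensor) |b_j>
basis = {(i, j): np.kron(a[i], b[j]) for i in range(2) for j in range(2)}

# Any wavefunction of the pair assigns an amplitude to each |a_i b_j>;
# take the maximally entangled state (|a_0 b_0> + |a_1 b_1>)/sqrt(2)
psi = (basis[(0, 0)] + basis[(1, 1)]) / np.sqrt(2)

# Amplitude for the joint outcome (i, j) is <a_i b_j|psi>:
# 1/sqrt(2) for (0,0) and (1,1), zero for the mismatched pairs
amplitudes = {ij: float(np.dot(v, psi)) for ij, v in basis.items()}
print(amplitudes)
```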

Also see p. 194 of this book where they say:
In contrast with the classical physics, where the state of a system is completely defined by describing the state of each of its component pieces separately, in a quantum system the state cannot always be described considering only the component pieces. For instance, the state

(1/√2)(|00⟩ + |11⟩)

cannot be decomposed into separate states for each of the two bits.
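This non-decomposability can be checked mechanically: write the two-bit amplitudes as a 2×2 matrix and look at its rank (an illustrative check, since a product state always gives a rank-1 matrix):

```python
import numpy as np

# Amplitudes of (|00> + |11>)/sqrt(2) arranged as a 2x2 matrix C with
# C[i, j] = amplitude of |i>|j>.  A product state |u>|v> would give
# C = outer(u, v), which has rank 1; Schmidt rank > 1 means entangled.
C = np.array([[1.0, 0.0],
              [0.0, 1.0]]) / np.sqrt(2)

schmidt_coeffs = np.linalg.svd(C, compute_uv=False)
schmidt_rank = int(np.sum(schmidt_coeffs > 1e-12))
print(schmidt_rank)  # 2: the state cannot be split into separate one-bit states
```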

akhmeteli said:
It should be relevant to the correlation, shouldn't it? As I said, maybe you can design just one measurement to measure the correlation directly (it would probably be some nonlocal measurement), but that has nothing to do with what is done in Bell experiments and, therefore, is not useful for analysis of experiments. So you can do a lot as a matter of formalism, but the issue at hand is whether what we do is relevant to Bell experiments.
Suppose "as a matter of formalism" we adopt the procedure of applying unitary evolution to the whole experiment and then applying the Born rule to joint states (which includes measurement records/pointer states) at the very end. And suppose this procedure gives predictions which agree with the actual statistics we see when we examine records of experiments done in real life. Then don't we have a formalism which has a well-defined procedure for making predictions and whose predictions agree with experiment? It doesn't matter that the formalism doesn't make predictions about each individual measurement at the time it's made, as long as it makes predictions about the final results at the end of the experiment which we can compare with the actual final results (or compared with the predictions about the final results that any local realist theory would make).
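As a toy illustration of this "unitary evolution until the end, Born rule once" recipe (a deliberately simplified model: one particle, two pointer bits, and measurement modeled as a CNOT-style copying unitary; none of this is meant to model a real apparatus):

```python
import numpy as np

# Toy version of the recipe: never collapse anything; model each
# "measurement" as a unitary that copies the particle's basis state onto a
# fresh pointer bit, then apply the Born rule once to the final joint state.
# Qubit order: particle (most significant bit), pointer 1, pointer 2.

def cnot(n, control, target):
    """Permutation matrix for a CNOT on n qubits (qubit 0 = MSB)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for x in range(dim):
        bits = [(x >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        y = sum(bit << (n - 1 - k) for k, bit in enumerate(bits))
        U[y, x] = 1.0
    return U

particle = np.array([1.0, 1.0]) / np.sqrt(2)   # superposition (|0> + |1>)/sqrt(2)
pointer = np.array([1.0, 0.0])                 # each pointer starts in |0>

state = np.kron(np.kron(particle, pointer), pointer)
state = cnot(3, 0, 1) @ state   # first "measurement": record into pointer 1
state = cnot(3, 0, 2) @ state   # second "measurement": record into pointer 2

# Born rule applied once, at the end, to the joint pointer states:
# sum |amplitude|^2 over the particle's state for each pointer pair.
probs = {}
for x in range(8):
    p1, p2 = (x >> 1) & 1, x & 1
    probs[(p1, p2)] = probs.get((p1, p2), 0.0) + abs(state[x]) ** 2

# The pointers always agree: P(0,0) = P(1,1) = 1/2, P(0,1) = P(1,0) = 0,
# so the single end-of-experiment Born rule reproduces the "repeated
# measurements give consistent records" statistics without any collapse step.
print(probs)
```

The same bookkeeping extends to two particles and two spatially separated pointers, which is all the "joint pointer state at time T" prescription needs.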
akhmeteli said:
As I said, records are not even permanent. And measurements are never final.
No, but forget what you know theoretically about QM, do you agree that in real life we can write down and share the results we have found at the end of an experiment? The fact that these records may no longer exist in 3000 AD doesn't mean we can't compare the records we see now with the predictions of some formal model.
akhmeteli said:
Furthermore, I am not sure the Born rule can be used as anything more than an operating principle, because I don't have a clear picture of how the Born rule arises from dynamics (unitary evolution).
As a theoretical problem you may be interested in how it arises from dynamics, but if you just want a formal model that makes well-defined predictions that can be compared with reality, you don't need to know. That's why I keep calling it a pragmatic recipe--it doesn't need to have any theoretical elegance! All it needs to be is a procedure that always gives a prediction about the sort of quantitative results human experimenters obtain from real experiments.
akhmeteli said:
Let me explain my problem with the derivation for quantum theory in more detail. Say, you are performing a measurement on one particle. If we take unitary evolution seriously,
On a theoretical level I agree it's good to "take unitary evolution seriously", but not in terms of the pragmatic recipe. If the pragmatic recipe says to apply unitary evolution until some time T when all measurement results have been recorded, then apply the Born rule to the pointer states at time T, that's a perfectly well-defined procedure whose predictions can be compared with the actual recorded results at T, even if we have no theoretical notion of how to justify this application of the Born rule.
akhmeteli said:
the measurement cannot destroy the superposition, therefore, the probability is not zero for each sign of the measured spin projection even after the measurement. Therefore, the same is true for the second particle. So, technically, the probability should not be zero for both particles having the same spin projection?
If they are entangled in such a way that QM predicts you always get opposite spins, that would mean the amplitude for joint states |11⟩ and |00⟩ is zero. But since there is a nonzero amplitude for |01⟩ and |10⟩, that means there's some nonzero probability for Alice to get result 1 and also some nonzero probability for her to get result 0, and likewise for Bob.
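Numerically, for a perfectly anticorrelated pair (using the singlet-like state (|01⟩ − |10⟩)/√2 as an illustrative example):

```python
import numpy as np

# Perfectly anticorrelated pair: (|01> - |10>)/sqrt(2),
# amplitudes ordered |00>, |01>, |10>, |11>
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

# Born rule on the joint states: 00 and 11 never occur
joint = {lbl: abs(amp) ** 2 for lbl, amp in zip(["00", "01", "10", "11"], psi)}

# ...yet each of Alice's individual results is still equally likely
p_alice_0 = joint["00"] + joint["01"]
p_alice_1 = joint["10"] + joint["11"]
print(joint, p_alice_0, p_alice_1)
```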
 
  • #695
zonde said:
You consider ensemble as statistical ensemble of completely independent members where each member possesses all the properties of ensemble as a whole, right?
Otherwise I do not understand how you can justify your statement.

What's your definition of ensemble? I just think that however you consider an ensemble, you cannot neglect the effect of one of its parts on another. If you mean that a measurement involves averaging over the particles beyond the subensemble, or an approximation, say so. In both cases the predictions of unitary evolution and the projection postulate differ, hence the contradiction.
 
  • #696
DrChinese said:
1. That's what you call a contribution? I guess I have a different assessment of that. Better Bell tests will always be on the agenda and I would say Zeilinger's agreement on that represents no change in his overall direction.

Yes, this is what I call a contribution.

DrChinese said:
2. I consider your comment in 1. above to be acknowledgment of the obvious, which is that it is generally agreed that Bell Inequality violations have been found in every single relevant test performed to date. "Gen-u-wine" ones at that! So you can try and misrepresent the mainstream all you want, but you are 180 degrees off.

Why don't you call it for what it is: you are part of a very small minority regarding Bell. Where's the disrespect in that? If you are confident, just call yourself a rebel and continue your research.

No, it's no acknowledgment. Is it really "generally agreed" if Zeilinger, Shimony and Genovese don't agree?
 
  • #697
DrChinese said:
That is a reasonable comment.

1. I am guessing that for you, entangled particles have states in common due to their earlier interaction. Further, that entangled particles are in fact discrete and are not in communication with each other in any ongoing manner. And yet, it is possible to entangle particles that have never existed in a common light cone. My point is that this won't go hand in hand with any local realistic view.

Again, do entangled "particles that have never existed in a common light cone" present loophole-free evidence of nonlocality? At last? As far as I know, nobody has claimed that. Except maybe you. And entanglement exists in some form in the model.

DrChinese said:
2. EPR argued that the HUP could be beaten with entangled particles. You could learn the value of position on Alice and the momentum of Bob. And yet, a subsequent observation of Alice's momentum cannot be predicted using Bob's value. (Of course this applies to all non-commuting pairs, including spin). So EPR is wrong in that regard. That implies that the reality of Alice is somehow affected by the nature of the observation of Bob. I assume you deny this.

It is affected. But not instantaneously. At least it has not been demonstrated experimentally that the effect propagates faster than light. And again, I just don't question HUP, and HUP is valid for the model.
 
  • #698
akhmeteli said:
The probability density may be the same in Bohmian and standard theory for the entire system. But nobody models the instruments in the Bell proof.
You can include the state of the measuring device in a quantum analysis (simplifying its possible states so you don't actually consider it as composed of a vast number of particles), see this google scholar search for "state of the measuring" and "quantum".
akhmeteli said:
You have two measurements in experiments (so it is sufficient to use your understanding of Goldstein and Demystifier's words on approximation: "collapse followed by another measurement later") . To get the result, you can use the projection postulate in standard quantum mechanics
Or just include the measuring device in the quantum state, and apply the Born rule to the joint state of all the measuring devices/pointer states at some time T after the experiment is finished. Goldstein's point about the Bohmian probability being |ψ(q)|² means the probabilities for different joint pointer states at T should be exactly equal to the Bohmian prediction about the pointer states at T.
akhmeteli said:
or you can say in Bohmian mechanics that collapse is a good approximation there.
Huh? My understanding is that a purely Bohmian analysis of any physical situation will never make use of "collapse", it'll only find the probabilities for the particles to end up in different positions according to the quantum equilibrium hypothesis. The idea that "collapse is a good approximation" would only be used if you wanted to compare Bohmian predictions to the predictions of a QM recipe which uses the collapse assumption, but if you were just interested in what Bohmian mechanics predicted, you would have no need for anything but the Bohmian guiding equation which tells you how particle positions evolve.
akhmeteli said:
I am not aware of any proofs that do not use tricks of this kind.
OK, but have you actually studied the math of Bohmian mechanics and looked at how it makes predictions about any experiments, let alone Bell-type experiments? I haven't myself, but from what I've read I'm pretty sure that no purely Bohmian derivation of predictions would need to make use of any "trick" involving collapse.
akhmeteli said:
So yes, I do think that if you do not use such trick, you cannot prove violations in Bohmian mechanics. If you can offer a derivation that does not use something like this, I am all ears.
Well, take a look at section 7.5 of Bohm's book The Undivided Universe, which is titled "The EPR experiment according to the causal interpretation" (another name for Bohmian mechanics), which can be read in its entirety on google books here. Do you see any mention of a collapse assumption there?
akhmeteli said:
Goldstein and Demystifier seem to say just that: collapse (part and parcel of standard quantum mechanics (SQM) so far) is just an approximation in Bohmian mechanics. So don't shoot the messenger (me:-) ).
But Goldstein also says that the probabilities predicted by Bohmian mechanics are just the same as those predicted by QM. Again, I think the seeming inconsistency is probably resolved by assuming that when he talks of agreement he's talking of a single application of the Born rule to a quantum system which has been evolving in a unitary way, whereas when he talks about "approximation" he's talking about a repeated sequence of unitary evolution, projection onto an eigenstate by measurement, unitary evolution starting again from that eigenstate, another projection, etc.
 
  • #699
GeorgCantor said:
Do you know of a totally 100% loophole-free experiement from anywhere in the universe?


akhmeteli said:
I can just repeat what I said several times: for some mysterious reason, Shimony is not quite happy about experimental demonstration of violations, Zeilinger is not quite happy... You are quite happy with it? I am happy for you. But that's no reason for me to be happy about that demonstration. Again, the burden of proof is extremely high for such radical ideas as elimination of local realism.


Yes, they aren't quite happy yet. The departure from the old concepts is just too great.

This isn't much different from Darwin's TOE in the mid-nineteenth century. Not everyone would immediately recognize the evidence (no matter what), for the idea of a fish turning into a human being was just too radical, as you are saying about local realism. The TOE turned the world upside down, but we came to terms with it. Controversial or not, the theory of evolution is here to stay and so is the death of classical realism.
 
  • #700
GeorgCantor said:
Controversial or not, the theory of evolution is here to stay and so is the death of classical realism.

Agree! :approve:

(Very good answer GC!)
 
  • #701
JesseM said:
akhmeteli said:
So yes, I do think that if you do not use such trick, you cannot prove violations in Bohmian mechanics. If you can offer a derivation that does not use something like this, I am all ears.
Well, take a look at section 7.5 of Bohm's book The Undivided Universe, which is titled "The EPR experiment according to the causal interpretation" (another name for Bohmian mechanics), which can be read in its entirety on google books here. Do you see any mention of a collapse assumption there?

And here's another:

A causal account of non-local Einstein-Podolsky-Rosen spin correlations

Section 5 on p.12-13 of the pdf says:
The preceding analysis enables us to see clearly the manner in which the assumptions made by Bell [7] in his derivation of an inequality that any local hidden variables theory must apparently satisfy are violated in the causal interpretation ...

In the causal interpretation the probability distribution of positions is derived from the quantum mechanical wavefunction which is a function of all the contributing parts of the process, including the orientation of the magnets ...

Bell's inequality is therefore violated because the hidden variables are non-locally interconnected by the quantum potential derived from the total quantum state. It is in this sense that the causal interpretation implies non-local correlations in the properties of distantly separated systems.
So it seems that the analysis is based only on the positions of the parts of the system (including which direction the particles are deflected by the magnets, which is what a determination of 'spin' is based on), and that "the system" explicitly includes the magnets and their orientations. And this Bohmian analysis does apparently show that Bell's inequality can be violated.
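The quantum prediction that this causal-interpretation analysis reproduces can be checked with a one-line CHSH computation (assuming the standard singlet correlation E(a,b) = −cos(a − b) and the usual CHSH angle choices; an illustrative check, not part of the quoted paper):

```python
import numpy as np

def E(a, b):
    # Quantum correlation of spin results for singlet pairs measured
    # along analyzer angles a and b
    return -np.cos(a - b)

# Standard CHSH settings (radians)
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the local-realist bound of 2
```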
 
  • #702
DevilsAvocado said:
My personal advice to an independent researcher:

Thank you for your advice, but I think it's completely misplaced. Let me explain.

DevilsAvocado said:
Now, what’s my personal opinion on EPR-Bell experiments and loopholes? Well, I think you are presenting a terribly biased picture of the situation. You want us to believe that current experts in EPR-Bell experiments have the same bizarre valuation of their experiments as you have. Namely, that every performed EPR-Bell experiment so far is worth nothing?? Zero, zip, nada, zilch, 0!? :eek:

Before you start another episode of your soap opera here, why don't you just read the question you were asked?

I wrote the following:

"Those experts are telling us, mere mortals, that there have been no loophole-free Bell experiments. You are certainly free to disagree with them, but then why don’t you just pinpoint that loophole-free experiment? And it would be most helpful if you could explain how it so happened that Shimony, Zeilinger and Genovese have no knowledge whatsoever about this experiment.
Again, Ruta is no fan of local realism either, but he also admits that there are no such experiments.
So, to summarize, it seems obvious that there have been no such experiments so far (DrChinese will strongly disagree, but let me ask you, DevilsAvocado, what is your personal opinion?)"

So it should be clear that I asked you the following question: "Do you agree that there have been no loophole-free Bell experiments?" Did I say "that every performed EPR-Bell experiment so far is worth nothing", as you imply? No, I did not. In my opinion, those experiments are very valuable as they explored the new area of parameters, so we know now what Nature is like in this area. Why are you substituting my question with something very different? Why are you ascribing to me an opinion that I don't share?

So let me ask you again:

"Do you agree that there have been no loophole-free Bell experiments?"


DevilsAvocado said:
You are also trying to apply this faulty logic to RUTA:


Yes, RUTA is an honest scientist and he would never lie and say that a 100% loophole-free Bell experiment has been performed, when it hasn’t yet.

So you agree that a loophole-free Bell experiment has not been performed? Or not?

DevilsAvocado said:
But where do you see RUTA saying that performed Bell experiments so far are worth absolutely nothing, nil?? Your twist is nothing but a scam:

But where do you see me saying that performed Bell experiments so far are worth absolutely nothing, nil?? Your twist is nothing but a scam.

DevilsAvocado said:
I can guarantee you that RUTA, Zeilinger or any other real scientist in the community all agree that all performed EPR-Bell experiments so far have proven with 99.99% certainty that all local realistic theories are doomed. But they are fair, and will never lie, and say 100%, until they are 100%.

You are exploiting this fact in a very deceptive way, claiming that they are saying that there is 0% proof of local realistic theories being wrong.

And where am I "claiming that they are saying that there is 0% proof of local realistic theories being wrong"? I give their direct quotes confirming that there have been no loophole-free Bell experiments. Why twist my words again? They do believe there is little or no chance for local realism, but this is their opinion. The fact (that they admit) is, however, that there have been no loophole-free experiments ruling out local realism.

DevilsAvocado said:
And then comes the "Grand Finale", where you use a falsification of Anton Zeilinger’s standpoint, as the "foundation" for this personal cranky statement:

Outrageous :mad:

Since when is a literal quote a falsification?

And I gave you the reasons, so there are indeed "some reasons to believe these inequalities cannot be violated either in experiments or in quantum theory, EVER". If you are outraged by this statement, that does not mean there are no such reasons.
 
  • #703
JesseM said:
You can include the state of the measuring device in a quantum analysis (simplifying its possible states so you don't actually consider it as composed of a vast number of particles), see this google scholar search for "state of the measuring" and "quantum".

So was the violation in quantum theory derived in this way?

JesseM said:
Or just include the measuring device in the quantum state, and apply the Born rule to the joint state of all the measuring devices/pointer states at some time T after the experiment is finished. Goldstein's point about the Bohmian probability being |ψ(q)|^2 means the probabilities for different joint pointer states at T should be exactly equal to the Bohmian prediction about the pointer states at T.

So was the violation in quantum theory derived without PP or something like that? Mind that "different joint pointer states" overlap in principle.

JesseM said:
Huh? My understanding is that a purely Bohmian analysis of any physical situation will never make use of "collapse", it'll only find the probabilities for the particles to end up in different positions according to the quantum equilibrium hypothesis. The idea that "collapse is a good approximation" would only be used if you wanted to compare Bohmian predictions to the predictions of a QM recipe which uses the collapse assumption, but if you were just interested in what Bohmian mechanics predicted, you would have no need for anything but the Bohmian guiding equation which tells you how particle positions evolve.

So were violations proven in "a purely Bohmian analysis"? I am not aware of that.

JesseM said:
OK, but have you actually studied the math of Bohmian mechanics and looked at how it makes predictions about any experiments, let alone Bell-type experiments? I haven't myself, but from what I've read I'm pretty sure that no purely Bohmian derivation of predictions would need to make use of any "trick" involving collapse.

Again, same question, is there a "purely Bohmian derivation of" violations? I am not aware of that.

JesseM said:
Well, take a look at section 7.5 of Bohm's book The Undivided Universe, which is titled "The EPR experiment according to the causal interpretation" (another name for Bohmian mechanics), which can be read in its entirety on google books here. Do you see any mention of a collapse assumption there?

Yes: "Using the theory of measurement..." and "do not overlap for different j"

JesseM said:
But Goldstein also says that the probabilities predicted by Bohmian mechanics are just the same as those predicted by QM. Again, I think the seeming inconsistency is probably resolved if by assuming that when he talks of agreement he's talking of a single application of the Born rule to a quantum system which has been evolving in a unitary way, whereas when he talks about "approximation" he's talking about a repeated sequence of unitary evolution, projection onto an eigenstate by measurement, unitary evolution starting again from that eigenstate, another projection, etc.

Very well, and this is what we have in Bell experiments, as there are two measurements.
 
  • #704
akhmeteli said:
... And where am I "claiming that they are saying that there is 0% proof of local realistic theories being wrong"? I give their direct quotes confirming that there have been no loophole-free Bell experiments. Why twist my words again? They do believe there is little or no chance for local realism, but this is their opinion. The fact (that they admit) is, however, that there have been no loophole-free experiments ruling out local realism.

You are a funny guy, not a scientist.

Is this really so hard? You are continuously making the same cranky INSINUATIONS – as if all the hard work by one of the most famous experts in EPR-Bell experiments, Anton Zeilinger, has only resulted in a PERSONAL OPINION!?

You are way out my friend, and alone on your twisted road:
RUTA said:
When I first entered the foundations community (1994), there were still a few conference presentations arguing that the statistical and/or experimental analyses of EPR-Bell experiments were flawed. SUCH TALKS HAVE GONE THE WAY OF THE DINOSAURS. VIRTUALLY EVERYONE AGREES THAT THE EPR-BELL EXPERIMENTS AND QM ARE LEGIT, SO WE NEED A SIGNIFICANT CHANGE IN OUR WORLDVIEW. There is a proper subset who believe this change will be related to the unification of QM and GR :-)
Stanford Encyclopedia of Philosophy – Bell's Theorem
...
In the face of the spectacular experimental achievement of Weihs et al. and the anticipated result of the experiment of Fry and Walther THERE IS LITTLE THAT A DETERMINED ADVOCATE OF LOCAL REALISTIC THEORIES CAN SAY except that, despite the spacelike separation of the analysis-detection events involving particles 1 and 2, the backward light-cones of these two events overlap, and it is conceivable that some controlling factor in the overlap region is RESPONSIBLE FOR A CONSPIRACY AFFECTING THEIR OUTCOMES. THERE IS SO LITTLE PHYSICAL DETAIL IN THIS SUPPOSITION that a discussion of it is best delayed until a methodological discussion in Section 7.


I made the important parts in upper-case + bold, since you seem to be having trouble understanding simple English.
 
  • #705
GeorgCantor said:
Yes, they aren't quite happy yet. The departure from the old concepts is just too great.

This isn't much different from Darwin's TOE in the mid-nineteenth century. Not everyone would immediately recognize the evidence (no matter what), for the idea of a fish turning into a human being was just too radical, as you are saying about local realism. The TOE turned the world upside down, but we made do with it. Controversial or not, the theory of evolution is here to stay and so is the death of classical realism.

So TOE has been confirmed by now. So what? Should we consider that a confirmation of the elimination of local realism? No way. This elimination must be confirmed independently. Has it been confirmed experimentally so far? As there are no experimental demonstrations of violations of genuine Bell inequalities, local realism has not been ruled out so far. What should we expect? In ten years? In fifty years? It's a matter of opinion. You believe local realism will be eliminated by future experiments; I don't expect that. But both of us will have to accept the results of the future experiments, whether we like them or not.
We have yet to see decisive experiments, so we both still have the right to have our opinions.
 
  • #706
JesseM said:
You can include the state of the measuring device in a quantum analysis (simplifying its possible states so you don't actually consider it as composed of a vast number of particles), see this google scholar search for "state of the measuring" and "quantum".
akhmeteli said:
So was the violation in quantum theory derived in this way?
I'm pretty sure you can derive any quantum statistics in this way. Doing a little research, it turns out this was essentially von Neumann's approach to the measurement problem--he conceived of two stages of the measurement process, a first where the system being measured simply becomes entangled with the measuring-device, and a second where the measuring-device is "observed" and found to be in some definite pointer state, with the probability of different pointer states determined by the Born rule. See this paper where on page 3 they write:
The crucial step to describe the measurement process as an interaction of two quantum systems [as is implicit in (2.2)] was made by von Neumann [6], who recognized that an interaction between a classical and a quantum system cannot be part of a consistent quantum theory. In his Grundlagen, he therefore proceeded to decompose the quantum measurement into two fundamental stages. The first stage (termed "von Neumann measurement") gives rise to the wavefunction (2.2). The second stage (which von Neumann termed "observation" of the measurement) involves the collapse described above, i.e., the transition from (2.2) to (2.3).
The same authors have another paper here where they apply this sort of analysis to "Bell-type measurements" on p. 16, with two quantum particles Q1 and Q2 along with two measuring-devices or "ancillae" A1 and A2, such that after the ancillae interact with the particles they are all in one entangled state \mid Q_1 Q_2 A_1 A_2 \rangle = \frac{1}{\sqrt{2}} (\mid \uparrow \uparrow 1 1 \rangle + \mid \downarrow \downarrow 0 0 \rangle ). They then say that "after observing A1, for instance, the state of A2 can be inferred without any uncertainty". Unfortunately they don't give explicit calculations for the probabilities of different results on A1 and A2 when the ancillae aren't measuring spin on the same axis, so they don't clearly show how von Neumann's approach predicts Bell inequality violations. And although I came across a lot of other papers that model measurement in terms of measuring-devices becoming entangled with measuring-systems, like http://www.hep.princeton.edu/~mcdonald/examples/QM/zurek_prd_24_1516_81.pdf , most did not use von Neumann's approach of assuming a collapse at the very end when the measuring devices were all "observed", instead they were generally trying to show how one could make meaningful statements about measurement results without making use of even a single "collapse" or application of the Born rule (perhaps part of the problem is that von Neumann's approach is rather old hat so most physicists would just consider it pedantic to explicitly demonstrate what predictions it would give for a Bell-type experiment). But anyway, given that this approach has been around for so many years, I seriously doubt that it would fail to predict Bell inequality violations without anyone having noticed this fact! (or without it being widely commented-on in these sorts of papers if it was known that it failed to predict BI violations)
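The two-stage scheme described above can be checked numerically. Below is a minimal sketch (my own construction for illustration, not taken from any of the cited papers) for a state of the form |Q1 Q2 A1 A2⟩: each "von Neumann measurement" is a purely unitary step that entangles a particle with its ancilla, and the Born rule is applied exactly once at the end, to the ancilla records. The resulting CHSH combination comes out to 2√2, so this approach does predict Bell inequality violations with no per-measurement collapse.

```python
import numpy as np

def ry(t):
    """Rotation of a single qubit about the y axis by angle t."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def kron(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=4):
    """CNOT on an n-qubit register (qubit 0 is the leftmost factor)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(b << (n - 1 - k) for k, b in enumerate(bits)), i] = 1.0
    return U

def correlation(a, b):
    """E(a,b) from ONE Born-rule application to the ancilla records."""
    # correlated pair (|00> + |11>)/sqrt(2) on Q1,Q2; ancillae A1,A2 in |0>
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)
    psi = np.kron(phi, np.array([1.0, 0.0, 0.0, 0.0]))
    # stage 1 ("von Neumann measurement"): rotate each particle into its
    # measurement basis, then unitarily copy the result onto its ancilla
    psi = kron(ry(-a), ry(-b), np.eye(2), np.eye(2)) @ psi
    psi = cnot(0, 2) @ cnot(1, 3) @ psi
    # stage 2 ("observation"): Born rule applied once to the records A1, A2
    E = 0.0
    for i in range(16):
        a1, a2 = (i >> 1) & 1, i & 1  # ancilla bits of basis state i
        E += abs(psi[i]) ** 2 * (1 - 2 * a1) * (1 - 2 * a2)
    return E

# CHSH combination at the standard angles: comes out to 2*sqrt(2) > 2
S = (correlation(0, np.pi / 4) + correlation(0, -np.pi / 4)
     + correlation(np.pi / 2, np.pi / 4) - correlation(np.pi / 2, -np.pi / 4))
```

For this state the single-Born-rule procedure gives E(a, b) = cos(a − b), which is the standard quantum prediction; the measuring devices are only ever entangled unitarily before the final "observation".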

Also, I found one other interesting paper http://www.lps.uci.edu/barrett/publications/SuggestiveProperties.pdf which discusses what happens if we assume measurement just creates entanglement between pointer states and particle states with no collapse ever (what the author calls the 'bare theory' of QM), and then we consider the limit as an observer makes an infinite series of measurements in an EPR type experiment. On p. 13-14 the author discusses the result:
For another example suppose that two systems SA and SB are initially in the EPR state (2) and that A and B make space-like measurements of their respective systems ... What does the bare theory predict in the limit as this experiment is performed an infinite number of times? ... given the general limiting property, A and B will approach an eigenstate of reporting that their measurement results were randomly distributed and statistically correlated in just the way the standard theory predicts .. if they perform an appropriate sequence of different experiments, then they will approach an eigenstate of reporting that their results fail to satisfy the Bell-type inequalities.
akhmeteli said:
Again, same question, is there a "purely Bohmian derivation of" violations? I am not aware of that.
I believe so, the section of Bohm's book I linked to and the paper I linked to in post #701 both appeared to analyze EPR-type experiments from a purely Bohmian perspective.
JesseM said:
Well, take a look at section 7.5 of Bohm's book The Undivided Universe, which is titled "The EPR experiment according to the causal interpretation" (another name for Bohmian mechanics), which can be read in its entirety on google books here. Do you see any mention of a collapse assumption there?
akhmeteli said:
Yes: "Using the theory of measurement..." and "do not overlap for different j"
I think you're probably misunderstanding the import of those phrases. When Bohm wrote on p. 122 "Using the theory of measurement described in chapters 2 and 6, we may assume an interaction Hamiltonian..." and gives an equation, that's the Hamiltonian equation which guides the continuous time evolution of the system, I don't see how it has anything to do with discontinuous collapse. And the "theory of measurement" described in chapter 6 appears to be one that does not involve collapse--scroll down to p. 104 here to look at that chapter, he says on p. 109:
At this stage we can say that everything has happened as if the overall wave function had 'collapsed' to one corresponding to the actual result obtained in the measurement. We emphasise, however, that in our treatment there is no actual collapse; there is merely a process in which the information represented by the unoccupied packets effectively loses all potential for activity ... It follows that in this regard measurement is indeed just a special case of a transition process in which the two systems interact and then come out in correlated states. It is this correlation that enables us, from the observed result, to attribute a corresponding property to the final state of the observed system.

In the transition process that takes place in a measurement, it is clear that (as happens indeed in all transition processes) there is no need to place any 'cuts' or arbitrary breaks in the description of reality, such as that, for example, introduced by von Neumann between the quantum and classical levels.
I also don't see why "\alpha \Delta t must be large enough so that the \Phi_A (y - j \alpha \Delta t ) do not overlap for different j" lower on the same page has anything to do with collapse, \Phi_A (y) is supposed to represent the "initial wave packet of the apparatus" so this condition also may express some constraint on the design of the apparatus (maybe something like the idea that it should be designed so there isn't significant interference between different possible pointer states), I'm not sure. Do you actually understand the detailed meaning of the math in this section or are you just looking at the verbal descriptions of Bohmian calculations like me? You didn't answer the question I asked earlier:
OK, but have you actually studied the math of Bohmian mechanics and looked at how it makes predictions about any experiments, let alone Bell-type experiments? I haven't myself, but from what I've read I'm pretty sure that no purely Bohmian derivation of predictions would need to make use of any "trick" involving collapse.
JesseM said:
But Goldstein also says that the probabilities predicted by Bohmian mechanics are just the same as those predicted by QM. Again, I think the seeming inconsistency is probably resolved by assuming that when he talks of agreement he's talking of a single application of the Born rule to a quantum system which has been evolving in a unitary way, whereas when he talks about "approximation" he's talking about a repeated sequence of unitary evolution, projection onto an eigenstate by measurement, unitary evolution starting again from that eigenstate, another projection, etc.
akhmeteli said:
Very well, and this is what we have in Bell experiments, as there are two measurements.
Have you not been paying attention to the distinction I've been making in previous posts between two procedures for calculating probabilities in QM? I have been saying over and over that if you have a series of measurements, you don't have to treat each measurement as leading to a collapse, you can instead treat each measurement as just creating entanglement between measuring apparatus and system being measured, and then apply the Born rule once to the final pointer states after a long series of measurements. That seems to be exactly the approach von Neumann used to deal with measurement too, as noted at the top. So my point is that as long as you only apply the Born rule once in this way, I think there is perfect agreement in the probabilities for different pointer states between this approach and Bohmian mechanics; it's only when you use the projection postulate repeatedly at the moment of each measurement that the agreement with Bohmian mechanics may only be "approximate".
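The two procedures being contrasted here can be compared directly for a pair of spacelike measurements. The following sketch (my own, purely illustrative) computes the joint outcome probabilities on the state (|00⟩ + |11⟩)/√2 both ways: once by a single Born-rule application to a joint projector, and once by projecting after the first measurement and then applying the Born rule to the second. Because the two local projectors commute, the numbers agree exactly.

```python
import numpy as np

def eigvec(theta, outcome):
    """Spin eigenvector along an axis at angle theta in the x-z plane."""
    if outcome == 0:                                          # "up" along theta
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.array([-np.sin(theta / 2), np.cos(theta / 2)])  # "down"

def proj(theta, outcome):
    v = eigvec(theta, outcome)
    return np.outer(v, v)

# the correlated state (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

def p_one_born_rule(a, b, r1, r2):
    """Single Born-rule application to the joint projector."""
    P = np.kron(proj(a, r1), proj(b, r2))
    return phi @ P @ phi

def p_sequential(a, b, r1, r2):
    """Projection postulate after measurement 1, Born rule on measurement 2."""
    P1 = np.kron(proj(a, r1), np.eye(2))
    p1 = phi @ P1 @ phi                 # probability of the first outcome
    collapsed = P1 @ phi / np.sqrt(p1)  # "collapsed" state, renormalized
    P2 = np.kron(np.eye(2), proj(b, r2))
    return p1 * (collapsed @ P2 @ collapsed)
```

Summing (±1)(±1) weighted by either set of probabilities over the four joint outcomes reproduces the same correlation E(a, b), which is the sense in which the single-application procedure and the repeated-projection procedure agree here.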
 
Last edited by a moderator:
  • #707
JesseM said:
I'm pretty sure you can derive any quantum statistics in this way.

Being pretty sure is one thing. Giving a proof or a reference is something else. I comment on your references below.

JesseM said:
Doing a little research, it turns out this was essentially von Neumann's approach to the measurement problem--he conceived of two stages of the measurement process, a first where the system being measured simply becomes entangled with the measuring-device, and a second where the measuring-device is "observed" and found to be in some definite pointer state, with the probability of different pointer states determined by the Born rule.

Again, we may have some opinion about the adequacy of this procedure, but it is not quite relevant: this procedure is about just one measurement, while the Bell theorem is about two measurements.

JesseM said:
See this paper where on page 3 they write:

The same authors have another paper here where they apply this sort of analysis to "Bell-type measurements" on p. 16, with two quantum particles Q1 and Q2 along with two measuring-devices or "ancillae" A1 and A2, such that after the ancillae interact with the particles they are all in one entangled state \mid Q_1 Q_2 A_1 A_2 \rangle = \frac{1}{\sqrt{2}} (\mid \uparrow \uparrow 1 1 \rangle + \mid \downarrow \downarrow 0 0 \rangle ). They then say that "after observing A1, for instance, the state of A2 can be inferred without any uncertainty". Unfortunately they don't give explicit calculations for the probabilities of different results on A1 and A2 when the ancillae aren't measuring spin on the same axis, so they don't clearly show how von Neumann's approach predicts Bell inequality violations.

So no proof.

JesseM said:
And although I came across a lot of other papers that model measurement in terms of measuring-devices becoming entangled with measuring-systems, like http://www.hep.princeton.edu/~mcdonald/examples/QM/zurek_prd_24_1516_81.pdf , most did not use von Neumann's approach of assuming a collapse at the very end when the measuring devices were all "observed", instead they were generally trying to show how one could make meaningful statements about measurement results without making use of even a single "collapse" or application of the Born rule (perhaps part of the problem is that von Neumann's approach is rather old hat so most physicists would just consider it pedantic to explicitly demonstrate what predictions it would give for a Bell-type experiment).

So no proof.

JesseM said:
But anyway, given that this approach has been around for so many years, I seriously doubt that it would fail to predict Bell inequality violations without anyone having noticed this fact! (or without it being widely commented-on in these sorts of papers if it was known that it failed to predict BI violations)

In this case you are right to seriously doubt it:-), as "someone" has indeed noticed this fact, and it was not me! You see, I clearly said in this thread and in my article that I have little, if anything, new to say about the Bell theorem; I just repeat other people's analysis. These people are nightlight and Santos (nightlight told me that they corresponded for years via emails). I give the references in my article. If you feel the references are not specific enough, let me know, and I'll try to do something about that.

I'll try to address the other points of your post later.
 
Last edited by a moderator:
  • #708
Yes, I haven't linked to a proof, and I don't feel like spending hours combing through papers looking for one (as I said, most modern papers would probably just consider the result too trivial to explicitly demonstrate). Are you just being pedantic in noting I haven't proved it, or do you actually believe it is plausible that von Neumann's approach to QM measurement, which has been around for decades, would fail to predict Bell inequality violations without anyone noticing this fact? (or if physicists had noticed, without it being a widely discussed result?) Or does your claim here:
as "someone" has indeed noticed this fact, and it was not me! You see, I clearly said in this thread and in my article that I have little, if anything new to say about the Bell theorem, I just repeat other people's analysis. These people are nightlight and Santos (nightlight told me that they corresponded for years via emails). I give the references in my article.
...mean that you believe "nightlight and Santos" have actually proved that von Neumann's approach, where we model measurements as just creating entanglement and we then "observe" the measurement records later (using the Born rule on the records), fails to predict violations of Bell inequalities in those records?

Also, note the paper http://www.lps.uci.edu/barrett/publications/SuggestiveProperties.pdf I linked to above, which shows that in the limit as the number of measurements (without collapse) in an EPR type experiment goes to infinity the state vector will approach "an eigenstate of reporting that their measurement results were randomly distributed and statistically correlated in just the way the standard theory predicts". This does at least imply that in the limit as the number of measurements goes to infinity, if we "collapse" the records at the very end, the probability that the records will show measurement results that were "randomly distributed and statistically correlated in just the way the standard theory predicts" should approach 1 in this limit. Do you disagree?
 
Last edited by a moderator:
  • #709
JesseM said:
Also, I found one other interesting paper http://www.lps.uci.edu/barrett/publications/SuggestiveProperties.pdf which discusses what happens if we assume measurement just creates entanglement between pointer states and particle states with no collapse ever (what the author calls the 'bare theory' of QM0, and then we consider the limit as an observer makes an infinite series of measurements in an EPR type experiment. On p. 13-14 the author discusses the result:

JesseM, with all due respect, a couple of lines later the author writes: "Note, however, that since the linear dynamics can be written in a perfectly local form, there are in fact no nonlocal causal connections in the bare theory. ...Just as reports of determinate results, relative frequencies, and randomness would generally be explained by the bare theory as illusions, the apparent nonlocality here would be just that, apparent." :-) And on page 4: "According to the bare theory, an observer who begins in an eigenstate of being ready to make a measurement would end up in an eigenstate of reporting that he has an ordinary, determinate result to his measurement. This might mean that the observer believes that he has a determinate measurement result, but in the context of the bare theory this would not generally mean that there is any determinate result that the observer believes he has. Contrary to what Everett and others have claimed, the bare theory does not make the same empirical predictions as the standard theory; rather, the bare theory at best provides an explanation for why it might appear to an observer that the standard theory's empirical predictions are true when they are in fact false. That is, the bare theory provides the basis for claiming in some circumstances that some of one's beliefs are the result of an illusion." So no, this link, while indeed interesting, does not prove what you want.


JesseM said:
I believe so, the section of Bohm's book I linked to and the paper I linked to in post #701 both appeared to analyze EPR-type experiments from a purely Bohmian perspective.

I commented on Bohm's book, and in the paper by Dewdney et al. they take their formulae for the correlations, (3.2) and (3.2a), from nowhere, presenting them simply as the "well known expectation value for the correlations". After that, the inequalities are violated. But you cannot get these formulae without the projection postulate; at least that's what I think so far.


JesseM said:
I think you're probably misunderstanding the import of those phrases. When Bohm wrote on p. 122 "Using the theory of measurement described in chapters 2 and 6, we may assume an interaction Hamiltonian..." and gives an equation, that's the Hamiltonian equation which guides the continuous time evolution of the system, I don't see how it has anything to do with discontinuous collapse. And the "theory of measurement" described in chapter 6 appears to be one that does not involve collapse--scroll down to p. 104 here to look at that chapter, he says on p. 109:

I also don't see why "\alpha \Delta t must be large enough so that the \Phi_A (y - j \alpha \Delta t ) do not overlap for different j" lower on the same page has anything to do with collapse, \Phi_A (y) is supposed to represent the "initial wave packet of the apparatus" so this condition also may express some constraint on the design of the apparatus (maybe something like the idea that it should be designed so there isn't significant interference between different possible pointer states), I'm not sure. Do you actually understand the detailed meaning of the math in this section or are you just looking at the verbal descriptions of Bohmian calculations like me?

While you may regard the mention of measurement theory as purely formal (I did not check it), the "overlap" phrase is critical. No overlap - no interference. This is where they get rid of superposition. And no condition can prevent overlap. The word "significant" is not good enough.

I did not check the "proof" in detail, but I know Bohm's theory of measurement and know where they get rid of superposition to get "appearance of collapse".

JesseM said:
You didn't answer the question I asked earlier:

Sorry, as I said, I am struggling to keep up with you:-)

Yes, I studied their math, and it is my understanding that the neglect of the overlap takes care of superpositions, so I disagree with your "pretty sure".



JesseM said:
Have you not been paying attention to the distinction I've been making in previous posts between two procedures for calculating probabilities in QM? I have been saying over and over that if you have a series of measurements, you don't have to treat each measurement as leading to a collapse, you can instead treat each measurement as just creating entanglement between measuring apparatus and system being measured, and then apply the Born rule once to the final pointer states after a long series of measurements. That seems to be exactly the approach von Neumann used to deal with measurement too, as noted at the top. So my point is that as long as you only apply the Born rule once in this way, I think there is perfect agreement in the probabilities for different pointer states between this approach and Bohmian mechanics; it's only when you use the projection postulate repeatedly at the moment of each measurement that the agreement with Bohmian mechanics may only be "approximate".

I got your idea, but, as I said, the procedure you describe has nothing to do with real Bell experiments, where measurements are done separately on each particle and the results of two measurements are actually used. So your procedure of applying the Born rule once does not seem relevant to experiments. How do you get the correlation in your procedure?
 
Last edited by a moderator:
  • #710
JesseM said:
Yes, I haven't linked to a proof, and I don't feel like spending hours combing through papers looking for one (as I said, most modern papers would probably just consider the result too trivial to explicitly demonstrate).

I fully understand.

JesseM said:
Are you just being pedantic in noting I haven't proved it, or do you actually believe it is plausible that von Neumann's approach to QM measurement, which has been around for decades, would fail to predict Bell inequality violations without anyone noticing this fact? (or if physicists had noticed, without it being a widely discussed result?)

I would like to be accurate here, as it is my understanding that the projection postulate was also introduced by von Neumann. But I believe you have in mind an approach where actual measurement is performed only once. I just don't understand how this approach is relevant to Bell experiments where measurements are performed twice. And I really don't think you can get theoretical violations in standard QM or in Bohmian mechanics without the projection postulate or something like that.



JesseM said:
Or does your claim here:

...mean that you believe "nightlight and Santos" have actually proved that von Neumann's approach, where we model measurements as just creating entanglement and we then "observe" the measurement records later (using the Born rule on the records), fails to predict violations of Bell inequalities in those records?

See my comment above on "von Neumann's approach". I cannot say in good faith that nightlight or Santos "proved" that violations in QM cannot be proven without using the projection postulate or something like that (maybe they did, but I am not sure). What they did do (at least as I understand it) is note that the projection postulate is used in standard proofs of the Bell theorem, where it is proven that the inequalities can be violated in QM, and that the projection postulate contradicts unitary evolution. Can you prove the violations without this postulate? I cannot rule out such a possibility, but I don't think it is possible. I perfectly understand that you don't want to spend hours finding a proof of what you think is true, but I hope you'll understand that neither do I want to spend hours finding a proof of what you think is true and I think is false:-) My logic is as follows. Violations spell nonlocality, the projection postulate spells nonlocality (as soon as the spin projection of one particle is measured, the spin projection of the other particle, however remote, becomes determinate - this stinks to heaven!), so a suspicion that this is the only source of nonlocality seems quite natural.

JesseM said:
Also, note the paper http://www.lps.uci.edu/barrett/publications/SuggestiveProperties.pdf I linked to above, which shows that in the limit as the number of measurements (without collapse) in an EPR type experiment goes to infinity the state vector will approach "an eigenstate of reporting that their measurement results were randomly distributed and statistically correlated in just the way the standard theory predicts". This does at least imply that in the limit as the number of measurements goes to infinity, if we "collapse" the records at the very end, the probability that the records will show measurement results that were "randomly distributed and statistically correlated in just the way the standard theory predicts" should approach 1 in this limit. Do you disagree?

I commented on that in my post 709.
 
Last edited by a moderator:
  • #711
akhmeteli said:
My logic is as follows. Violations spell nonlocality, the projection postulate spells nonlocality (as soon as the spin projection of one particle is measured, the spin projection of the other particle, however remote, becomes determinate - this stinks to heaven!), so a suspicion that this is the only source of nonlocality seems quite natural.

What also stinks to heaven is when wannabes pretend to have a "serious proof" that dismisses all the work of John Bell, and of all the serious scientists who have worked on EPR-Bell experiments for decades – without even a basic understanding of Bell's Theorem!?

EPR-Bell experiments are not about "the other particle, however remote, becomes determinate"! This is only the case when the polarizers are aligned parallel! EPR-Bell experiments are all about statistics, and there is no way one could violate Bell's Inequality when the polarizers are aligned parallel only; JesseM can verify this.

What is also hilarious is that when the polarizers are aligned parallel, and the correlation is 100%, anyone can easily construct an LHV model that explains this in a local realistic way, except you.

But I’m not surprised you have missed this. You seem to spend all your time looking for irrelevant "stuff" to discredit the work of John Bell.
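The point about parallel polarizers is easy to illustrate numerically. Here is a toy local hidden variable model (my own, purely illustrative): the hidden variable λ is an angle shared by both particles, and each outcome is a deterministic ±1 function of the local setting and λ alone. It reproduces 100% correlation whenever the two settings are parallel, yet its CHSH combination only saturates, and never exceeds, the classical bound of 2.

```python
import numpy as np

# shared hidden variable: an angle carried by both particles at the source
lam = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)

def outcome(setting, lam):
    """Deterministic local +-1 outcome: depends only on the local
    setting and the hidden variable, never on the remote setting."""
    return np.where(np.cos(setting - lam) >= 0.0, 1, -1)

def E(a, b):
    """Correlation between the two outcomes, averaged over lambda."""
    return np.mean(outcome(a, lam) * outcome(b, lam))

# perfect (100%) correlation at parallel settings, for any angle
assert E(0.73, 0.73) == 1.0

# CHSH at the angles that maximize the quantum value: this model gives
# S = 2, right at the local bound, whereas QM predicts 2*sqrt(2)
S = (E(0, np.pi / 4) + E(0, -np.pi / 4)
     + E(np.pi / 2, np.pi / 4) - E(np.pi / 2, -np.pi / 4))
```

This model departs from the quantum prediction E(a, b) = cos(a − b) precisely at the intermediate relative angles, which is where Bell-type experiments look.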
 
  • #712
akhmeteli said:
I would like to be accurate here, as it is my understanding that the projection postulate was also introduced by von Neumann. But I believe you have in mind an approach where actual measurement is performed only once. I just don't understand how this approach is relevant to Bell experiments where measurements are performed twice.
You seem to be misunderstanding something really basic about my argument--you are conflating "measurement" with "projection", but my whole point is that they don't need to be treated as equivalent! You can instead assume that each interaction between the quantum system and the measuring-device can be treated in a purely unitary way--i.e. these measurements do not involve projection--and that after all the measurements in your experiment are done, you have a pure state where all the records of the previous measurements are in a massive superposition, and only then do you use the projection postulate once on the whole collection of records (records of many different prior measurements). I've already explained this several times in the past but you continue to misunderstand...for example, from post #706:
Have you not been paying attention to the distinction I've been making in previous posts between two procedures for calculating probabilities in QM? I have been saying over and over that if you have a series of measurements, you don't have to treat each measurement as leading to a collapse, you can instead treat each measurement as just creating entanglement between measuring apparatus and system being measured, and then apply the Born rule once to the final pointer states after a long series of measurements. That seems to be exactly the approach von Neumann used to deal with measurement too, as noted at the top. So my point is that as long as you only apply the Born rule once in this way, I think there is perfect agreement in the probabilities for different pointer states between this approach and Bohmian mechanics; it's only when you use the projection postulate repeatedly at the moment of each measurement that the agreement with Bohmian mechanics may only be "approximate".
And post #694:
Suppose "as a matter of formalism" we adopt the procedure of applying unitary evolution to the whole experiment and then applying the Born rule to joint states (which includes measurement records/pointer states) at the very end. And suppose this procedure gives predictions which agree with the actual statistics we see when we examine records of experiments done in real life. Then don't we have a formalism which has a well-defined procedure for making predictions and whose predictions agree with experiment? It doesn't matter that the formalism doesn't make predictions about each individual measurement at the time it's made, as long as it makes predictions about the final results at the end of the experiment which we can compare with the actual final results (or compared with the predictions about the final results that any local realist theory would make).
post #690:
But in terms of the formalism, do you agree that you can apply the Born rule once for the amplitude of a joint state? One could consider this as an abstract representation of a pair of simultaneous measurements made at the same t-coordinate, for example. Even if the measurements were made at different times, one could assume unitary evolution for each measurement so that each measurement just creates entanglement between the particles and the measuring devices, but then apply the Born rule once to find the probability for the records of the previous measurements ('pointer states' in Bohmian lingo)
So, after reviewing these comments, do you understand what I mean now? That if we want to make predictions about what our records will show at the end of a series of N measurements, we can assume unitary evolution until after all N measurements are complete, and then just apply the Born rule to the records at that time to get a prediction about the statistics?
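To make the procedure concrete, here is a minimal numpy sketch (a hypothetical toy model, not from the original posts): each "measurement" is modeled as a CNOT-style unitary that copies the system qubit into a fresh apparatus qubit, no collapse is applied along the way, and the Born rule is applied once to the final joint record state.

```python
import numpy as np

# System qubit in superposition a|0> + b|1>; two "measurements" are modeled
# as CNOT gates that copy the system state into fresh apparatus qubits
# (pointer states). No collapse is applied until the very end.
a, b = 1 / np.sqrt(3), np.sqrt(2 / 3)

def cnot(n_qubits, control, target):
    """Permutation matrix for a CNOT on an n-qubit register (MSB-first)."""
    dim = 2 ** n_qubits
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n_qubits - 1 - k)) & 1 for k in range(n_qubits)]
        if bits[control]:
            bits[target] ^= 1
        j = int("".join(map(str, bits)), 2)
        U[j, i] = 1.0
    return U

# Joint state: system qubit (x) apparatus1 (x) apparatus2, apparatus in |0>
psi = np.kron(np.array([a, b]), np.kron([1, 0], [1, 0]))
psi = cnot(3, 0, 1) @ psi   # first measurement: entangle, don't collapse
psi = cnot(3, 0, 2) @ psi   # second measurement: entangle, don't collapse

# Born rule applied ONCE at the end, to the joint record state
probs = np.abs(psi) ** 2
for i, p in enumerate(probs):
    if p > 1e-12:
        print(f"|{i:03b}>: {p:.4f}")   # prints |000>: 0.3333 and |111>: 0.6667
```

The records agree perfectly (only |000> and |111> appear), and the single late application of the Born rule reproduces exactly the statistics one would get by collapsing at the first measurement.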

If you do understand this, note that this is exactly what von Neumann's approach was. In his approach we do not assume that each measurement collapses the wavefunction, instead it just causes entanglement, and then only later are the measurement records "observed". In post #706 I quoted this paper which described his approach on p. 3:
The crucial step to describe the measurement process as an interaction of two quantum systems [as is implicit in (2.2)] was made by von Neumann [6], who recognized that an interaction between a classical and a quantum system cannot be part of a consistent quantum theory. In his Grundlagen, he therefore proceeded to decompose the quantum measurement into two fundamental stages. The first stage (termed "von Neumann measurement") gives rise to the wavefunction (2.2). The second stage (which von Neumann termed "observation" of the measurement) involves the collapse described above, i.e., the transition from (2.2) to (2.3).
Similarly, consider the paper Quantum Mechanics and Reality which discusses different approaches to "measurement", and on p. 16 describes von Neumann's approach:
In contrast to Bohr, the measuring apparatus A as well as systems S are both to be described by quantum mechanics.

...

The state \sum c_n \phi_n f(a_n) is a linear superposition of states with different pointer readings. It is a grotesque state. We still have not obtained a definite value of the pointer reading. Von Neumann now postulates that when the measurement is completed, and not before, the wave function collapses to one of the terms in the linear superposition, e.g. to \phi_N f(a_N). It is this collapse postulate which adds an extra ingredient to quantum mechanics and makes the quantum mechanical description non-closed.

In an elaboration of the von Neumann view by London and Bauer, also subscribed to by Wigner and Stapp, this final collapse of the wave function takes place when the result of the measurement is recorded in a human mind.
Anyway, if you now understand the approach I'm suggesting but aren't convinced that von Neumann's was the same, I can try to find more sources explaining his approach. But I want to make sure that you actually do understand my approach now, given that you still seem to be conflating "measurement" with "collapse"...
 
  • #713
JesseM said:
You seem to be misunderstanding something really basic about my argument--you are conflating "measurement" with "projection", but my whole point is that they don't need to be treated as equivalent! You can instead assume that each interaction between the quantum system and the measuring-device can be treated in a purely unitary way--i.e. these measurements do not involve projection--and that after all the measurements in your experiment are done, you have a pure state where all the records of the previous measurements are in a massive superposition, and only then do you use the projection postulate once on the whole collection of records (records of many different prior measurements). I've already explained this several times in the past but you continue to misunderstand...for example, from post #706:

OK, so you just use the projection postulate, not the Born rule? Then I agree that you can prove violations in quantum mechanics. But this is exactly where you introduce nonlocality. Remember that actual records are not even permanent. Where exactly do you perform this second stage of your procedure, the "observation"? Near the point where the first particle is? Or where the second particle is? If where the first particle is, as soon as you "observe" its spin projection, the spin projection of the second one becomes immediately determinate, according to the projection postulate, and remember that you can choose on the spot which spin projection you want to determine. So you do introduce nonlocality. Or are you trying to say that the Born rule and the projection postulate are one and the same thing? But as far as I understand, the Born rule does not state that after the measurement the system is in a certain eigenstate, it just gives the probability of a certain measurement result.

JesseM said:
So, after reviewing these comments, do you understand what I mean now? That if we want to make predictions about what our records will show at the end of a series of N measurements, we can assume unitary evolution until after all N measurements are complete, and then just apply the Born rule to the records at that time to get a prediction about the statistics?

No, I don't understand what you mean. Now you're telling me that you use the Born rule. A few lines before you said you were using the projection postulate. Please explain.
 
  • #714
DevilsAvocado said:
EPR-Bell experiments are not about "the other particle, however remote, becomes determinate"! This is only the case when the polarizers are aligned parallel! EPR-Bell experiments are all about statistics, and there is no way one could violate Bell's Inequality when the polarizers are aligned parallel only, JesseM can verify this.

With all due respect, you did not understand anything I said. I did not speak about two polarizers at all. Let me try to explain. Suppose you have two particles in a singlet state, and you are measuring the spin projection of the first particle, so you need just one polarizer. The projection postulate says that after you measure the spin projection on some axis and obtain, say, +1, the wavefunction immediately collapses into an eigenstate of this spin projection for the first particle. That means the spin projection of the second particle on the same axis immediately becomes determinate: it equals -1. That's where nonlocality is introduced through the projection postulate. Then you may measure the spin projection of the second particle on a different axis using another polarizer to prove violations, or for whatever purpose you want, but that is a different story.
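The point about the projection postulate can be shown in a few lines of numpy (an illustrative sketch, not part of the original post): project the singlet state onto the +1 outcome for particle 1 along z, renormalize, and particle 2 is left determinately in the -1 state.

```python
import numpy as np

# Singlet state of two spin-1/2 particles in the z-basis:
# |psi> = (|01> - |10>)/sqrt(2), where 0 = spin up (+1), 1 = spin down (-1).
# Basis ordering of the joint state vector: |00>, |01>, |10>, |11>.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

# Projection postulate: measure particle 1 along z and obtain +1.
# The projector P = |0><0| (x) I acts on the joint state.
P_up_1 = np.kron(np.diag([1.0, 0.0]), np.eye(2))
post = P_up_1 @ psi
post = post / np.linalg.norm(post)   # collapsed, renormalized state

# The collapsed state is exactly |0>|1>: particle 2 is now determinately -1,
# even though nothing was done to it locally.
print(post)                          # [0. 1. 0. 0.]
prob_2_down = np.abs(post[1]) ** 2
print(prob_2_down)                   # 1.0
```

Before the projection, particle 2 had probability 1/2 for each outcome; after projecting on particle 1 alone, its outcome on the same axis is fixed with probability 1.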
 
  • #715
akhmeteli said:
With all due respect, you did not understand anything I said.

With all due respect, I think you are talking bull. If there's one thing Bell showed, it's that the Einsteinian argument fails:

no action at a distance (polarisers parallel) ⇒ determinism
determinism (polarisers nonparallel) ⇒ action at a distance

Determinism is stone dead.

akhmeteli said:
That's where nonlocality is introduced through the projection postulate.

Are you talking about John von Neumann and the "wavefunction collapse" from 1932?? The collapse of the wavefunction is just an interpretation?? And AFAICT, it's not a very "hot" one either?? Are you saying that John von Neumann, who died in 1957, proved John Bell wrong in 1964?? Or are you saying that you have discovered "something" that John Bell and the whole scientific community totally missed??
http://en.wikipedia.org/wiki/Wave_function_collapse#History_and_context
...
Although von Neumann's projection postulate is often presented as a normative description of quantum measurement, it was conceived by taking into account experimental evidence available during the 1930s (in particular the Compton–Simon experiment has been paradigmatic), and many important present-day measurement procedures (see http://en.wikipedia.org/wiki/Measurement_in_quantum_mechanics#Wavefunction_collapse) do not satisfy it (so-called measurements of the second kind).[4]

The existence of the wave function collapse is required in
  • the Copenhagen interpretation
  • the objective collapse interpretations
  • the so-called transactional interpretation
  • in a "spiritual interpretation" in which consciousness causes collapse.
On the other hand, the collapse is considered as a redundant or optional approximation in
  • interpretations based on Consistent Histories
  • the Many-Worlds Interpretation
  • the Bohm interpretation
  • the Ensemble Interpretation


http://en.wikipedia.org/wiki/Dunning–Kruger_effect
 
  • #716
akhmeteli said:
OK, so you just use the projection postulate, not the Born rule?
If you only use it once, at the very end, and then don't attempt to predict anything about what happens to the records afterwards, I don't see the difference. For example, if there were three measurements which could each yield result 1 or 0, then at the end right before "observation" the records will be a single quantum state which can be expressed as a sum of eigenstates:

\alpha_1 \mid 000 \rangle + \alpha_2 \mid 001 \rangle + \alpha_3 \mid 010 \rangle + \alpha_4 \mid 011 \rangle +\alpha_5 \mid 100 \rangle + \alpha_6 \mid 101 \rangle + \alpha_7 \mid 110 \rangle + \alpha_8 \mid 111 \rangle

where the \alpha_i are complex amplitudes. Then if you apply the "projection postulate", you're saying the quantum state will randomly become one of those eigenstates, with the probability of it going to a given eigenstate like \mid 010 \rangle being \alpha_3 \alpha_3^* (i.e. the amplitude times its complex conjugate). And the "Born rule" just tells you that the probability of getting a given result like 010 is \alpha_3 \alpha_3^*. So if you're not interested in what happens to the quantum state later, but just in the probabilities of seeing different combinations of measurement records at some time T after all the measurements are complete, I don't see the distinction between applying the "projection postulate" at T to get these probabilities vs. applying the "Born rule" at T. What difference are you seeing?
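The equivalence of the two rules for a single, final observation can be checked numerically (an illustrative sketch with random amplitudes, not from the original post): the Born-rule probabilities \alpha_k \alpha_k^* are exactly the projection-postulate jump probabilities, so sampling "collapses" reproduces the same distribution.

```python
import numpy as np

# Eight complex amplitudes alpha_1..alpha_8 for the records |000>..|111>
rng = np.random.default_rng(0)
alpha = rng.normal(size=8) + 1j * rng.normal(size=8)
alpha /= np.linalg.norm(alpha)                # normalize the joint state

# Born rule: P(result k) = alpha_k * conj(alpha_k)
born = (alpha * alpha.conj()).real

# Projection postulate: the state jumps to eigenstate k with probability
# alpha_k * conj(alpha_k); sampling many such "observations" reproduces
# the same distribution as the Born rule.
samples = rng.choice(8, size=200_000, p=born)
empirical = np.bincount(samples, minlength=8) / len(samples)

for k in range(8):
    print(f"|{k:03b}>  Born: {born[k]:.4f}  sampled: {empirical[k]:.4f}")
```

Since both prescriptions assign the same probability to each record combination at time T, any difference between them can only show up in what one says about the state *after* T.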
akhmeteli said:
But this is exactly where you introduce nonlocality. Remember that actual records are not even permanent. Where exactly do you perform this second stage of your procedure, the "observation"? Near the point where the first particle is?
If you see a paper listing a bunch of results taken at different places, how do you think they got into that one paper? Presumably the information from each measuring device was transferred to a common location at some point, so you're free to assume that each measuring-device was transferred to a common location before the "observation" of their records happened, or that each sent an email to a common location before "observation", whatever.
 
  • #717
DevilsAvocado said:
With all due respect, I think you are talking bull. If there's one thing Bell showed, it's that the Einsteinian argument fails:

no action at a distance (polarisers parallel) ⇒ determinism
determinism (polarisers nonparallel) ⇒ action at a distance

Determinism is stone dead.



Are you talking about John von Neumann and the "wavefunction collapse" from 1932?? The collapse of the wavefunction is just an interpretation?? And AFAICT, it's not a very "hot" one either?? Are you saying that John von Neumann, who died in 1957, proved John Bell wrong in 1964?? Or are you saying that you have discovered "something" that John Bell and the whole scientific community totally missed??



http://en.wikipedia.org/wiki/Dunning–Kruger_effect

I give up. You have no use for any explanations, and I have no use for your soap opera.
 
  • #718
akhmeteli said:
I give up.

This is often the case when frauds are proven wrong:
http://en.wikipedia.org/wiki/Wave_function_collapse#History_and_context
...
Although von Neumann's projection postulate is often presented as a normative description of quantum measurement, it was conceived by taking into account experimental evidence available during the 1930s (in particular the Compton–Simon experiment has been paradigmatic), and many important present-day measurement procedures (see http://en.wikipedia.org/wiki/Measurement_in_quantum_mechanics#Wavefunction_collapse) do not satisfy it (so-called measurements of the second kind).[4]

The existence of the wave function collapse is required in
  • the Copenhagen interpretation
  • the objective collapse interpretations
  • the so-called transactional interpretation
  • in a "spiritual interpretation" in which consciousness causes collapse.
On the other hand, the collapse is considered as a redundant or optional approximation in
  • interpretations based on Consistent Histories
  • the Many-Worlds Interpretation
  • the Bohm interpretation
  • the Ensemble Interpretation
 
  • #719
JesseM said:
If you only use it once, at the very end, and then don't attempt to predict anything about what happens to the records afterwards, I don't see the difference. For example, if there were three measurements which could each yield result 1 or 0, then at the end right before "observation" the records will be a single quantum state which can be expressed as a sum of eigenstates:

\alpha_1 \mid 000 \rangle + \alpha_2 \mid 001 \rangle + \alpha_3 \mid 010 \rangle + \alpha_4 \mid 011 \rangle +\alpha_5 \mid 100 \rangle + \alpha_6 \mid 101 \rangle + \alpha_7 \mid 110 \rangle + \alpha_8 \mid 111 \rangle

where the \alpha_i are complex amplitudes. Then if you apply the "projection postulate", you're saying the quantum state will randomly become one of those eigenstates, with the probability of it going to a given eigenstate like \mid 010 \rangle being \alpha_3 \alpha_3^* (i.e. the amplitude times its complex conjugate). And the "Born rule" just tells you that the probability of getting a given result like 010 is \alpha_3 \alpha_3^*. So if you're not interested in what happens to the quantum state later, but just in the probabilities of seeing different combinations of measurement records at some time T after all the measurements are complete, I don't see the distinction between applying the "projection postulate" at T to get these probabilities vs. applying the "Born rule" at T. What difference are you seeing?

I don't know. Generally speaking, the projection postulate immediately introduces nonlocality. Right now I don't quite know how the procedure you describe is supposed to be used to prove the violations in quantum mechanics. Before I see the proof, I cannot tell you if there is any difference or not. Anyway, strictly speaking, the projection postulate is not compatible with unitary evolution, whether you use the postulate at the end, at the beginning, or in the middle.

JesseM said:
If you see a paper listing a bunch of results taken at different places, how do you think they got into that one paper? Presumably the information from each measuring device was transferred to a common location at some point, so you're free to assume that each measuring-device was transferred to a common location before the "observation" of their records happened, or that each sent an email to a common location before "observation", whatever.

Then problems with spatial separation may arise. Again, until I see how your procedure is used in a proof of violations, it is difficult to say what is important and what isn't. And remember, in principle, records are not permanent.
 
  • #720
akhmeteli said:
I give up.

Ohh! Sorry, I missed the most important part:
http://en.wikipedia.org/wiki/Measurement_in_quantum_mechanics#Measurements_of_the_second_kind
...
Note that many present-day measurement procedures are measurements of the second kind, some even functioning correctly only as a consequence of being of the second kind (for instance, a photon counter, detecting a photon by absorbing and hence annihilating it, thus ideally leaving the electromagnetic field in the vacuum state rather than in the state corresponding to the number of detected photons; also the Stern–Gerlach experiment (http://en.wikipedia.org/wiki/Stern-Gerlach_experiment) would not function at all if it really were a measurement of the first kind).
 
