Von Neumann QM Rules Equivalent to Bohm?

  • #61
vanhees71 said:
This I don't understand. In Bell tests you start with an entangled pair of photons (biphotons), created by some local process, e.g., parametric down conversion in a crystal. They are mostly emitted back to back, and you simply have to wait long enough to detect the photons at large distances (making sure that nothing disturbs them and decoheres the state). The single-photon polarizations are maximally random (unpolarized photons), but the 100% correlation between polarizations measured in the same direction is inherent in this state. So it's a property of the biphotons, and the correlations are thus "caused" by their production in this state and not by the measurement of one of the single-photon polarization states. It doesn't matter whether the registration events by A and B are time-like or space-like separated. You'll always measure the correlation due to the entanglement, provided there was no disturbance in between that destroyed the entanglement. This shows that no collapse is necessary in the analysis of these experiments (the same holds, of course, when the two polarizers at A's and B's places are not aligned, as needed to demonstrate the violation of Bell's inequality or variations of it).

Well, immediately before the first measurement, the most complete description of the state of the photons possible is:

Description 1:
  • Statement 1.A: The probability of the first photon passing a filter at angle A is 50%
  • Statement 1.B: The probability of the second photon passing a filter at angle B is 50%
  • Statement 1.C: The probability of both events happening is ##\frac{1}{2}\cos^2(A-B)##
Now you perform the first measurement, and the result is that the first photon does pass the filter oriented at angle A. Then immediately after the first measurement, but before the second measurement, the most complete description of the photons possible is:

Description 2:
  • Statement 2.A: The first photon is polarized at angle A
  • Statement 2.B: The probability of the second photon passing a filter at angle B is ##\cos^2(A-B)##
The "collapse" is simply a name for the transition from Description 1 to Description 2. So it's definitely there. The only issue is, what is the nature of this transition? Is it simply a change of knowledge in the mind of the experimenters? Or, is there some objective facts about the world that change?

Before either measurement is made, is Description 1 an objective fact about the world, or is it simply a statement about our knowledge? Same question for Description 2.

If you say that Description 1 and Description 2 are objective facts about the world, then it seems to me that collapse is a physical process.
 
  • #62
Descriptions 1 and 2 are different experiments! I thus relabel "Description" as "Experiment". No wonder that you get different results. Of course, the state of a system depends on the preparation procedure, which is (in my understanding of quantum theory) a tautology, because the state is defined as an equivalence class of preparation procedures (see also the nice book by Strocchi, where this is formalized using the ##C^*## approach to observables).

Experiment 1 considers all biphotons and Experiment 2 filters out those biphotons, where A finds it to be polarized at angle A. So the probabilities (relative frequencies!) refer to different ensembles.

Note that Experiment 2 can be achieved even post factum, i.e., if you have precise enough timing at both A and B you have the information about which B photon belongs to which A photon, i.e., you know which photons belonged to the same biphoton. Then you can make the selection necessary for Experiment 2 after everything has long happened to the photons. This is the famous post-selection, which of course also becomes somewhat "spooky" when you (in my opinion falsely) interpret the measurement at A as the cause of the outcome at B, rather than (in my opinion correctly) interpreting the preparation in an entangled state as the cause of the correlations described by it.
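(To make the post-selection concrete, here is a small Monte Carlo sketch, purely illustrative: the full record plays the role of Experiment 1, and the sub-ensemble selected afterwards from Alice's outcomes plays the role of Experiment 2. The angle values and variable names are arbitrary choices.)

```python
# A Monte Carlo sketch of post-selection from a timing-matched record of pairs.
import numpy as np

rng = np.random.default_rng(0)
A, B = np.radians(0.0), np.radians(30.0)
c2 = np.cos(A - B) ** 2
# Joint outcome probabilities for one pair (same convention as Description 1):
outcomes = [(1, 1), (1, 0), (0, 1), (0, 0)]        # (Alice passes, Bob passes)
probs = [0.5 * c2, 0.5 * (1 - c2), 0.5 * (1 - c2), 0.5 * c2]

N = 200_000
idx = rng.choice(4, size=N, p=probs)
record = np.array(outcomes)[idx]                    # full "Experiment 1" record

# Experiment 1: all pairs -> Bob's raw pass rate is 50%.
print("Bob pass rate, full ensemble :", record[:, 1].mean())

# Experiment 2: post-select the pairs where Alice's photon passed.
sub = record[record[:, 0] == 1]
print("Bob pass rate, sub-ensemble  :", sub[:, 1].mean())
print("cos^2(A-B)                   :", c2)
```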
 
  • #63
vanhees71 said:
No, it's due to the fact that A knows that the photons are entangled and thus what B must measure on his photon.

Saying that "the photons are entangled" is just a description of the initial state of the two photons (my Description 1) above. "Collapse" is about the transition from Description 1 to Description 2.
 
  • #64
atyy said:
What is the status of domain wall fermions and the standard model? Can a lattice standard model be constructed with domain wall fermions, at least in principle, even if it is too inefficient to simulate? Or is the answer still unknown?
I don't think there is a necessity for chiral fermions on the lattice. In the standard model, fermions come in pairs of Dirac particles. These can be put on the lattice.

The point is, of course, that one wants some gauge groups acting only on chiral components. And this seems impossible to do exactly on a simple lattice model. So strange things like domain wall fermions are invented. In fact, there is no need for them: use an approximate gauge symmetry on the lattice; these gauge fields are massive anyway. Not renormalizable? Who cares, the long-distance limit rules out the non-renormalizable elements anyway. (Of course, one has to start with the old Dirac-Fermi approach to gauge field quantization, because the Gupta-Bleuler approach depends on exact gauge symmetry to get rid of negative probabilities.)
 
  • #65
stevendaryl said:
Saying that "the photons are entangled" is just a description of the initial state of the two photons (my Description 1) above. "Collapse" is about the transition from Description 1 to Description 2.

I think vanhees71 considers that one can add Description 2 to Description 1. To do so, one would modify Description 2 to be conditional: "If Alice measures the photon to be polarized at angle A ..."

All that is fine. But what he doesn't realize is that, whatever one does, Einstein causality is gone, and Einstein causality is either not meaningful (if the variables of QM are not real) or violated by QED (if the variables of QM are real).
 
  • #66
Again: The result for Bob's photon is not caused by Alice's measurement
Sure, but only a physical collapse would contradict this; a non-physical collapse follows from the violations of Bell's inequalities and respects the prohibition of FTL causation.
vanhees71 said:
No, it's due to the fact that A knows that the photons are entangled and thus what B must measure on his photon.

It seems to me that entanglement correlations are the empirical embodiment of non-physical collapse.
 
  • #67
What is a "non-physical collapse"? Either there is something collapsing in the real world when a measurement is made or not! In my opinion there's not the slightest evidence for anything collapsing when we observe something.
 
  • #68
stevendaryl said:
Saying that "the photons are entangled" is just a description of the initial state of the two photons (my Description 1) above. "Collapse" is about the transition from Description 1 to Description 2.
But where is anything "collapsing" here? A measures her photon's polarization. If she finds it to be polarized in the A-direction, she considers what the probability is that B finds his photon to be polarized in the B-direction; if her photon is absorbed, she doesn't consider anything further, i.e., she uses a sub-ensemble of the complete ensemble originally prepared, which is another preparation procedure, i.e., a different experiment than Experiment 1, where all photon pairs are considered. Nothing has collapsed; it's just the choice of the ensemble based on Alice's (local!) measurement of her photon's polarization. Everything is calculated by the usual rules of probability theory from the given initial biphoton state. There's no need to assume an instantaneous collapse of B's photon's state by A's measurement on her photon in order to explain all the probabilistic properties of the two experiments under consideration.
 
  • #69
vanhees71 said:
You keep repeating this every time, but I've not seen a single example of such an experimental observation, which would imply that either Einstein causality or QT must be wrong. Before I believe either of these, I need very convincing experimental evidence for a collapse!

How do you obtain a wave function as the initial state? You make a measurement, it yields a value, and that means you have obtained a state whose wave function is the corresponding eigenstate. Without collapse there would be no method of state preparation in quantum theory.

vanhees71 said:
Either you believe in the existence of a collapse or Einstein causality and locality.
Anyway, to believe in Einstein causality is nonsensical (causality without Reichenbach's principle of common cause is not causality; one could name it "correlaity" or so). But if you accept the common cause principle, then you need FTL causal influences to explain the violations of Bell's inequalities. So the collapse is not important for this at all; the violation of Bell's inequality is the point, and this point is quite close to loophole-free experimental validation.

vanhees71 said:
The most successful model ever, the Standard Model of elementary particle physics, obeys both.
No, it obeys only what I have named "correlaity" - instead of claims about causality it contains only claims about correlations.
 
  • #70
vanhees71 said:
But where is anything "collapsing" here?

Well, it has to do with whether you think that my Description 1 and Description 2 are objective facts about the world, or whether they are just states of somebody's knowledge. If they are objective facts about the world, then there is a physical transition from the situation described by Description 1 to the situation described by Description 2.

On the other hand, if Description 1 and Description 2 are not objective facts about the world, then that raises other thorny questions. What IS an objective fact about the world? If you say it's only the results of measurements, then that's a little weird, because a measurement is just a macroscopic interaction. Why should facts about macroscopic states be objective if facts about microscopic states are not?

A measures her photon's polarization. If she finds it to be polarized in the A-direction, she considers what the probability is that B finds his photon to be polarized in the B-direction; if her photon is absorbed, she doesn't consider anything further, i.e., she uses a sub-ensemble of the complete ensemble originally prepared, which is another preparation procedure

It seems to me that this business of "preparing a sub-ensemble" is equivalent to invoking collapse.
 
  • #71
vanhees71 said:
What is a "non-physical collapse"? Either there is something collapsing in the real world when a measurement is made or not!

I'm not sure who you are responding to, but in classical probability, there is a situation analogous to quantum entanglement, and something analogous to collapse, but it's clearly NOT physical. I have a pair of shoes, and randomly select one to put in a box and send to Alice, and another one to put in a box to send to Bob. Before Alice opens her box, she would describe the situation as "There is a 50/50 chance of my getting a left shoe or a right shoe. There is also a 50/50 chance of Bob getting either shoe." After opening the box and finding a left shoe, she would describe the situation as "I definitely have the left shoe, and Bob definitely has the right shoe". So, the probability distribution "collapses" when she opens the box.

But that's clearly not physical. The box contained a left shoe before she opened it, she just didn't know it. So the probabilities reflect her knowledge, not the state of the world.
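(A tiny simulation of the shoe story, just to make the bookkeeping explicit; nothing here goes beyond classical probability, and the shoe's identity is fixed at packing time, before anyone opens a box. The names are illustrative.)

```python
# The classical shoe example: Alice's "collapse" is pure conditioning on new
# information, while the contents of both boxes are fixed when they are packed.
import random

random.seed(1)
trials = 100_000
alice_left = 0
bob_right_given_alice_left = 0
for _ in range(trials):
    alice_shoe = random.choice(["left", "right"])    # assigned when boxes are packed
    bob_shoe = "right" if alice_shoe == "left" else "left"
    # Before opening: Alice assigns 50/50 to each box's content.
    # After opening and finding a left shoe, she conditions on that fact:
    if alice_shoe == "left":
        alice_left += 1
        if bob_shoe == "right":
            bob_right_given_alice_left += 1

print("P(Alice finds left)                 :", alice_left / trials)
print("P(Bob has right | Alice found left) :", bob_right_given_alice_left / alice_left)
```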

In an EPR-type experiment, the analogous explanation would be that the photon was polarized at angle A before Alice detected it, she just didn't know it. But that interpretation of what's going on is contradicted by Bell's theorem. To me, talking about "ensembles" and "filtering a sub-ensemble" is another way of talking about hidden variables, so it seems equally inconsistent with Bell's theorem.
 
  • #72
vanhees71 said:
What is a "non-physical collapse"? Either there is something collapsing in the real world when a measurement is made or not! In my opinion there's not the slightest evidence for anything collapsing when we observe something.
The wavefunction is what is collapsing in the usual account, but unless you follow an interpretation with collapse that considers wavefunctions as physical entities, you probably see wavefunctions just as mathematical tools, and mathematical tools don't "collapse" in any real-world sense. So you are left with the concept of non-physical collapse: just non-unitary evolution (you talked about it in #29, remember?) that is fully compatible with microcausality ((anti)commutation of spacelike-separated fields).
 
  • #73
stevendaryl said:
Saying that "the photons are entangled" is just a description of the initial state of the two photons (my Description 1) above. "Collapse" is about the transition from Description 1 to Description 2.
Yes, and thus it's not a physical process named collapse, but the mere adaptation of the state by A due to the information gained by the outcome of her measurement on her photon. It's epistemic, not ontological, to put it in this philosophical language (which I personally don't like very much, because it's not very sharply defined).
 
  • #74
Ilja said:
How do you obtain a wave function as the initial state? You make a measurement, it yields a value, and that means you have obtained a state whose wave function is the corresponding eigenstate. Without collapse there would be no method of state preparation in quantum theory.
I associate the initial state (not a wave function, because there's no sensible description of photons as wave functions) with the system under consideration due to the preparation procedure. I don't need a collapse but a laser and an appropriate birefringent crystal for parametric down conversion. Of course, there's filtering involved to select the entangled photon pairs.

I don't know of any paper deriving this photon-pair production process from first principles. It is of course the experimental evidence that ensures you prepare these states. For the effective theory describing it see the classic paper

Hong, C. K., Mandel, L.: Theory of parametric frequency down conversion of light, Phys. Rev. A 31, 2409, 1985
http://dx.doi.org/10.1103/PhysRevA.31.2409
 
  • #75
vanhees71 said:
Yes, and thus it's not a physical process named collapse, but the mere adaptation of the state by A due to the information gained by the outcome of her measurement on her photon. It's epistemic, not ontological, to put it in this philosophical language (which I personally don't like very much, because it's not very sharply defined).

I would say that it's definitely NOT that. I suppose there are different interpretations possible, but the way I read Bell's theorem is that the purely epistemic interpretation of the wave function is not viable.

Once again, I want to point out the implications of the claim that the updating is purely epistemic. Again, we assume that both Alice and Bob have their filters oriented at the same angle, A. We ask what Alice knows about the state of Bob's photon. Immediately before measuring her photon's polarization, the most that Alice knows is: "There is a 50/50 chance that Bob's photon has polarization A." Immediately afterward, she knows: "There is a 100% chance that Bob's photon has polarization A."

It seems to me that if you want to say that the change is purely epistemic, then that means that the state of Bob's photon wasn't changed by Alice's measurement, only Alice's information about it changed. Okay, that's fine. But let's go through the reasoning here:
  1. After Alice's measurement, Bob's photon has definite polarization state A.
  2. Alice's measurement did not change the state of Bob's photon.
  3. Therefore, Bob's photon had definite polarization state A BEFORE Alice's measurement.
So it seems to me that assuming that measurements are purely epistemic implies that photons have definite (but unknown) polarizations even before they are measured. But that's a "hidden variables" theory of the type ruled out by Bell's theorem.
 
  • #76
stevendaryl said:
So it seems to me that assuming that measurements are purely epistemic implies that photons have definite (but unknown) polarizations even before they are measured. But that's a "hidden variables" theory of the type ruled out by Bell's theorem.
I disagree. What about hidden nonlocal influences?

Alice's measurement makes a local random choice of the direction; this choice is somehow transferred to Bob's particle, which changes its hidden internal state correspondingly. This would be a non-local interaction in reality, of course, but it is not excluded by Bell's theorem. And the wave function could nonetheless be purely epistemic.
 
  • #77
Ilja said:
I disagree. What about hidden nonlocal influences?

Yes, you're right. I meant to add the auxiliary assumption of locality.

 
  • #78
Non-physical collapse, the way I see it, is equivalent to a version of decoherence that, contrary to the usual account, cannot be made reversible even in principle, i.e., there is no possibility of recombining system plus environment in any meaningful way. This is what an intrinsic cut in QM is, whether the cut refers to system/apparatus, system/environment, microscopic/macroscopic degrees of freedom in coarse-graining, or probabilistic/deterministic evolution. This should be common to any interpretation that takes single measurements seriously.
 
  • #79
TrickyDicky said:
Non-physical collapse, the way I see it, is equivalent to a version of decoherence that, contrary to the usual account, cannot be made reversible even in principle, i.e., there is no possibility of recombining system plus environment in any meaningful way. This is what an intrinsic cut in QM is, whether the cut refers to system/apparatus, system/environment, microscopic/macroscopic degrees of freedom in coarse-graining, or probabilistic/deterministic evolution. This should be common to any interpretation that takes single measurements seriously.

I disagree. The most detailed consideration of the measurement process which is known is that of de Broglie-Bohm theory. So, to say that it does not take measurements seriously would be unjust. But it does not have a cut.

It has an effective collapse: by putting the trajectory of the measurement device into the wave function of device and system, one defines the effective wave function of the system. You can do this at every moment, before, after and during the measurement, and obtain a nice picture of a non-Schroedinger evolution for the collapsing effective wave function. But where to make the cut between device and system remains your free choice.
 
  • #80
stevendaryl said:
I would say that it's definitely NOT that. I suppose there are different interpretations possible, but the way I read Bell's theorem is that the purely epistemic interpretation of the wave function is not viable.

Once again, I want to point out the implications of the claim that the updating is purely epistemic. Again, we assume that both Alice and Bob have their filters oriented at the same angle, A. We ask what Alice knows about the state of Bob's photon. Immediately before measuring her photon's polarization, the most that Alice knows is: "There is a 50/50 chance that Bob's photon has polarization A." Immediately afterward, she knows: "There is a 100% chance that Bob's photon has polarization A."

It seems to me that if you want to say that the change is purely epistemic, then that means that the state of Bob's photon wasn't changed by Alice's measurement, only Alice's information about it changed. Okay, that's fine. But let's go through the reasoning here:
  1. After Alice's measurement, Bob's photon has definite polarization state A.
  2. Alice's measurement did not change the state of Bob's photon.
  3. Therefore, Bob's photon had definite polarization state A BEFORE Alice's measurement.
So it seems to me that assuming that measurements are purely epistemic implies that photons have definite (but unknown) polarizations even before they are measured. But that's a "hidden variables" theory of the type ruled out by Bell's theorem.
No, that's not what's implied, although the "change of state" due to A's measurement is in my opinion indeed purely epistemic. Before any measurement, both A and B simply have unpolarized photons, which however are known to be entangled due to the preparation procedure producing an entangled biphoton. Let's write down the math, because that helps here. I simplify (somewhat too much) by noting only the polarization states of the photons, and let's restrict to the case that both measure the polarization in the same direction.

So initially we have the two-photon polarization state
$$|\Psi_0 \rangle=\frac{1}{\sqrt{2}} (|H V \rangle-|VH \rangle).$$
The single photon states are given by tracing out the other photon respectively, and both Alice and Bob describe it by
$$\hat{\rho}_{\text{Alice}}=\frac{1}{2} \mathbb{1}, \quad \hat{\rho}_{\text{Bob}}=\frac{1}{2} \mathbb{1}.$$
Now we assume that Alice measures the polarization of her photon and finds that it is horizontally polarized, while nothing happens to Bob's photon, which may be detected very far away from Alice at space-like separation, provided QED microcausality holds (which I think is a very weak assumption given the great success of QED). The state after this measurement is then described (for Alice!) by the pure two-photon polarization state (which I leave unnormalized in order to store the probability for this outcome conveniently in the new representing state ket; the state itself is of course the ray):
$$|\Psi_1 \rangle = (|H \rangle \langle H| \otimes \mathbb{1})|\Psi_0 \rangle=\frac{1}{\sqrt{2}} |H V \rangle.$$
This, of course, happens with the probability
$$\|\Psi_1 \|^2=1/2,$$
which was already clear from the reduced state for A's single photon derived above.

Now, what Bob finds is with probability 1/2 H and with probability 1/2 V, because he cannot know (faster than allowed by the speed of light via communicating with Alice) what Alice has found. So Bob will still describe his single-photon's state as ##\hat{\rho}_{\text{Bob}}=1/2 \mathbb{1}##. Nothing has changed for Bob, and according to the usual understanding of relativistic causality he cannot know more about his photon before measuring its polarization unless he receives information about Alice's result, which (again by the standard interpretation of relativistic causality) he can get only via a signal from Alice that travels at most at the speed of light and not quicker.

Alice knows after her measurement that Bob must find a vertically polarized photon, i.e., the conditional probability, given Alice's result, is 100% for V polarization of Bob's photon. That this is true can be verified after exchanging the measurement protocols between Alice and Bob, given that via precise timing it is possible to know which of the photons A and B measure belong to one biphoton. That's why Bob can "post-select" his photons by considering only the roughly 50% of photons for which Alice found an H-polarized photon, and he then finds 100% V-polarized ones.
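(For anyone who wants to check the algebra above numerically, here is a short numpy sketch, added purely as an illustration: it reproduces the maximally mixed reduced states, the probability 1/2 for Alice's H outcome, and the conditional certainty of V for Bob's photon. Variable names are arbitrary.)

```python
# Numerical check of the biphoton formulas above.
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
kron = np.kron

# |Psi_0> = (|HV> - |VH>)/sqrt(2)
psi0 = (kron(H, V) - kron(V, H)) / np.sqrt(2)
rho = np.outer(psi0, psi0.conj()).reshape(2, 2, 2, 2)   # indices: A, B, A', B'

rho_alice = np.trace(rho, axis1=1, axis2=3)   # trace out Bob
rho_bob   = np.trace(rho, axis1=0, axis2=2)   # trace out Alice
print("rho_Alice =\n", rho_alice)             # 1/2 * identity
print("rho_Bob   =\n", rho_bob)               # 1/2 * identity

# Alice finds H: apply (|H><H| (x) 1) to |Psi_0> (unnormalized, as in the post)
P_H = np.outer(H, H)
psi1 = kron(P_H, np.eye(2)) @ psi0
print("probability of Alice's H outcome:", np.vdot(psi1, psi1).real)   # 1/2

# Conditional (renormalized) state: Bob's photon is then V with certainty
psi1 /= np.linalg.norm(psi1)
rho1 = np.outer(psi1, psi1.conj()).reshape(2, 2, 2, 2)
rho_bob_cond = np.trace(rho1, axis1=0, axis2=2)
print("P(Bob finds V | Alice found H):", (V @ rho_bob_cond @ V).real)  # 1.0
```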

This clearly shows that the notion of state is an epistemic one in this interpretation, because A and B describe the same situation with different states, depending on their knowledge. Note that there can never be contradictions between these two descriptions, because if A didn't know that B's photon is entangled with hers in the way described by ##|\Psi_0 \rangle##, she would never be able to say that B will find V-polarization with 100% probability when she has found H-polarization. This is what's stated in the linked-cluster theorem, and it of course holds for any local microcausal relativistic QFT. Whether, on the other hand, a theory obeying the linked-cluster theorem (which is the minimum assumption you must make to stay in accordance with the usual relativistic causality) must necessarily be such a local microcausal relativistic QFT is not clear to me, and I've not seen any attempt to prove this (see Weinberg, QT of Fields, vol. 1).

As a "minimal interpreter" I stay silent about the question, whether or not there is an influence of Alice's measurement on Bob's photon or not, as already mentioned by Ilja above. I only say this is the case in standard QED, which is by construction a local microcausal relativistic QFT. If there is an extension to QT where you can describe non-local interactions in the sense of a non-local deterministic hidden-variable theory that is consistent with the relativistic space-time structure, I don't know, at least I've not seen any convincing yet in the published literature. But what's for sure a "naive" instantaneous collapse assumption is for sure at odds with the relativistic space-time description and it is, as the above argument (hopefully convincingly) shows, not necessary to understand the probabilistic outcomes according to QT.
 
  • #81
vanhees71 said:
As a "minimal interpreter" I stay silent about the question, whether or not there is an influence of Alice's measurement on Bob's photon or not, as already mentioned by Ilja above. I only say this is the case in standard QED, which is by construction a local microcausal relativistic QFT. If there is an extension to QT where you can describe non-local interactions in the sense of a non-local deterministic hidden-variable theory that is consistent with the relativistic space-time structure, I don't know, at least I've not seen any convincing yet in the published literature. But what's for sure a "naive" instantaneous collapse assumption is for sure at odds with the relativistic space-time description and it is, as the above argument (hopefully convincingly) shows, not necessary to understand the probabilistic outcomes according to QT.

It has nothing to do with a minimal interpretation. Any relativistic QFT is not consistent with the classical meaning of "relativistic space-time structure" or "Einstein causality". Relativistic QFT does not allow faster than light signalling of classical information, and that is the meaning of "local microcausal relativistic QFT".
 
  • #82
Microcausality means that local observables commute at space-like distances, and this implies that there is no action at a distance on the quantum level. In our example, the local interaction of A's photon with her polarizer doesn't affect instantaneously Bob's photon.
 
  • #83
vanhees71 said:
Microcausality means that local observables commute at space-like distances, and this implies that there is no action at a distance on the quantum level. In our example, the local interaction of A's photon with her polarizer doesn't affect instantaneously Bob's photon.
No, QFT simply does not tell us anything about this question.

QFT in the minimal interpretation is not a realistic theory, and thus makes no claim that the polarizer instantaneously affects Bob's photon, but also no claim that it doesn't.
 
  • #84
According to standard QFT, the Hamiltonian density commutes with any other local observable at space-like separated arguments. Thus a local interaction doesn't affect any observable instantaneously (or at speeds faster than light).

Whether or not there are non-local deterministic theories (which I think is what's meant by "realistic theories" by philosophers) which are as successful as standard QFT, I don't know.
 
  • #85
vanhees71 said:
Now, what Bob finds is with probability 1/2 H and with probability 1/2 V, because he cannot know (faster than allowed by the speed of light via communicating with Alice) what Alice has found. So Bob will still describe his single-photon's state as ##\hat{\rho}_{\text{Bob}}=1/2 \mathbb{1}##. Nothing has changed for Bob,

Okay, but Alice knows what result Bob will get, with 100% certainty, before Bob makes the measurement. So, from her point of view, Bob's information is incomplete. The more complete story is that he will definitely get the same polarization as Alice (assuming their filters are aligned).

So if there is such a thing as "the objective state of Bob's photon", then that state is NOT 50/50 chance of passing Bob's filter.

You could deny that there is such a thing as the state of Bob's photon. But that's pretty weird, too. Alice can certainly reason as if Bob's photon is in a definite state of polarization, and that reasoning gives correct results.

I don't see how it makes sense to say that Alice's updating is purely epistemic.
 
  • #86
But that's very common in probability theory. It's just Bayes formula for conditional probability. This is not more mysterious in QT than in any "classical" probabilistic description.

This example for me makes it very clear that it's purely epistemic, because Alice's measurement updates her information, and thus she changes her description of the state of the system. Bob doesn't have this information and thus stays with the description he assigns to the situation due to his knowledge. Physically, nothing has changed for his photon by Alice's measurement. So the association of the state is determined by the preparation procedure and can vary for Alice and Bob due to the different information available to them about the system. This for me clearly shows the epistemic character of probabilistic descriptions (not restricted to QT; the difference between QT and classical probabilistic models is the strong correlations described by entangled states, which are stronger than ever possible in classical deterministic local theories, as shown by Bell).
 
  • #87
stevendaryl said:
Okay, but Alice knows what result Bob will get, with 100% certainty, before Bob makes the measurement. So, from her point of view, Bob's information is incomplete. The more complete story is that he will definitely get the same polarization as Alice (assuming their filters are aligned).

So if there is such a thing as "the objective state of Bob's photon", then that state is NOT 50/50 chance of passing Bob's filter.

You could deny that there is such a thing as the state of Bob's photon. But that's pretty weird, too. Alice can certainly reason as if Bob's photon is in a definite state of polarization, and that reasoning gives correct results.

I don't see how it makes sense to say that Alice's updating is purely epistemic.

Just a little expansion on this:

After Alice measures her photon to be polarized horizontally, she would describe Bob's photon as being in the PURE state ##|H\rangle##. As you say, Bob would describe his own photon as being in the mixed state ##\rho = \frac{1}{2}(|H\rangle\langle H| + |V\rangle\langle V|)##. But this disagreement is completely explained by saying that Bob's photon is REALLY in state ##|H\rangle\langle H|##, he just doesn't know it. Density matrices reflect both quantum superpositions and classical uncertainty (due to lack of information).

You (vanhees71) say that Bob's photon is still in the state ##\rho = \frac{1}{2}(|H\rangle\langle H| + |V\rangle\langle V|)##, even after Alice finds her photon to be horizontally polarized. That doesn't make sense to me. There is zero probability of Bob detecting polarization V, while his density matrix would say it's 1/2. His density matrix is wrong (or is less informative than the one Alice is using for Bob's photon).
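(A minimal sketch of the two assignments being compared here, purely illustrative: Bob's maximally mixed density matrix versus Alice's conditional pure state, and the probability each assigns to a V outcome on Bob's side.)

```python
# Born-rule probabilities for a V outcome from the two state assignments.
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

rho_bob_mixed = 0.5 * (np.outer(H, H) + np.outer(V, V))   # Bob's own assignment
rho_bob_alice = np.outer(H, H)                             # Alice's assignment after her H result

P_V = np.outer(V, V)                                       # projector for a V outcome
print("P(V) from Bob's mixed state      :", np.trace(P_V @ rho_bob_mixed))  # 0.5
print("P(V) from Alice's conditional |H>:", np.trace(P_V @ rho_bob_alice))  # 0.0
```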
 
  • #88
Ilja said:
I disagree. The most detailed consideration of the measurement process which is known is that of de Broglie-Bohm theory. So, to say that it does not take measurements seriously would be unjust. But it does not have a cut.

It has an effective collapse: by putting the trajectory of the measurement device into the wave function of device and system, one defines the effective wave function of the system. You can do this at every moment, before, after and during the measurement, and obtain a nice picture of a non-Schroedinger evolution for the collapsing effective wave function. But where to make the cut between device and system remains your free choice.
It does have a cut in the sense I described above: pilot wave/particle trajectories.
 
  • #89
vanhees71 said:
But that's very common in probability theory. It's just Bayes formula for conditional probability. This is not more mysterious in QT than in any "classical" probabilistic description.

No, it's not the same. In classical probability, there is a distinction between what is true and what my knowledge of the truth is. Someone randomly puts a left shoe into one box and a right shoe into the other box. One box is sent to Alice, and the other box is sent to Bob. When Alice opens her box, she finds a left shoe. She updates her epistemic probabilities for Bob's box to be 100% chance of a right shoe. There's clearly no nonlocal influence going on. HOWEVER, Alice knows that Bob actually had a right shoe BEFORE she opened the box. She just didn't know it until she opened her box.

In the EPR experiment, Alice finds out that Bob's photon has polarization H. If it's purely epistemic updating of Alice's information, that means that Bob's photon had polarization H BEFORE she measured her photon.

You can't have it both ways. If it's purely epistemic, then the objective state of the photon cannot be changed by Alice's updating. If the objective state after updating is H, then it must have been H beforehand. I don't see how it could be otherwise.

I guess you could say that the state H that Alice deduces for Bob's photon isn't objective, it's subjective, for Alice only. But that's hard to maintain. Would you then say that the polarization state of a photon is NEVER objective?

As Einstein, Podolsky and Rosen said, if you can predict the result of a future measurement with 100% accuracy, it sure seems like it's something objective.
 
  • #90
vanhees71 said:
Microcausality means that local observables commute at space-like distances, and this implies that there is no action at a distance on the quantum level. In our example, the local interaction of A's photon with her polarizer doesn't affect instantaneously Bob's photon.

No, that is wrong (well, this particular quote is ambiguous, but I'm taking it in the context of your earlier remarks on EPR). The commutation of spacelike-separated observables says that there is no faster than light transfer of classical information. That is a different issue from Einstein causality, which means that the nonlocal correlations are entirely explained by each event having a cause in its past light cone. Relativistic quantum field theory means that Einstein causality is either empty or false.
 
