How do entanglement experiments benefit from QFT (over QM)?

In summary, the conversation discusses two main points: the difference between QFT and quantum mechanics (QM), and the role of QFT in analyzing entanglement experiments. QFT is a more comprehensive theory than QM, and while it is commonly used in quantum optics papers, it is not often referenced in experimental papers on entanglement. The main reason is that QFT is primarily needed for particle-number-changing processes, which are not relevant in most entanglement experiments. In addition, while QFT helps clarify how entanglement should not be explained, it does not provide a significant advantage in explaining entanglement itself, and discussions of entanglement often focus on photons because entangled photon pairs are what Bell-test experiments typically use.
  • #176
Going back a bit in this thread...

I think I have learned a bit more about QFT from some of the great posts here, especially about the situations in which QFT is useful in application. Specifically, it seems QFT is best applied when scattering is being discussed and the results might include any of a variety of particles. On the other hand: while QFT might include elements that describe entanglement, apparently that is a weaker/less useful side of things. My sense is that this explains why entanglement experiments don't require the deeper theory of QFT - the basics of entanglement are well described by QM/QED without the need for any relativistic considerations (I don't consider descriptions of entangled photons as being relativistic, although others might).

And as to some of the discussions about "microcausality": As I now understand it, there are 2 key (and somewhat opposing) elements at play. Both relate to the act of performing a measurement on entangled Alice and considering what happens to remote Bob (the previously entangled partner):

1) The no-signaling theorem: the marginal probability of an outcome for Bob does NOT change due to Alice's choice of measurement. In short, Bob's outcomes are always random.
2) The experimentally demonstrated quantum nonlocality: the state of Bob DOES change due to Alice's choice of measurement. In short, Bob is cast into a pure state relative to Alice. (A numerical sketch of both points follows below.)
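As a quick numerical illustration of both points, here is a minimal NumPy sketch (illustrative only, assuming an idealized spin singlet and perfect projective measurements): Bob's marginal probability is flat no matter which axis Alice measures, while the state conditioned on Alice's result does depend on her setting.

```python
import numpy as np

# Two-qubit singlet state (|01> - |10>)/sqrt(2) as a density matrix
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)
I2 = np.eye(2)

def projectors(theta):
    """Projectors for spin-up/down along an axis tilted by theta from z."""
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    dn = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return np.outer(up, up), np.outer(dn, dn)

def trace_out_A(r):
    """Partial trace over Alice's qubit of a 4x4 two-qubit density matrix."""
    return np.einsum('ijil->jl', r.reshape(2, 2, 2, 2))

PB_up, _ = projectors(0.3)  # Bob's (arbitrary, fixed) measurement direction

for theta_A in (0.0, np.pi / 3, np.pi / 2):  # Alice's choice of setting
    PA_up, PA_dn = projectors(theta_A)
    # 1) Bob's marginal probability, summed over Alice's outcomes:
    pB = sum(np.trace(np.kron(PA, PB_up) @ rho).real for PA in (PA_up, PA_dn))
    # 2) Bob's conditional state given Alice found "up":
    p_up = np.trace(np.kron(PA_up, I2) @ rho).real
    rhoB = trace_out_A(np.kron(PA_up, I2) @ rho @ np.kron(PA_up, I2)) / p_up
    print(f"theta_A={theta_A:.2f}  P(Bob up)={pB:.3f}  Bob state given A=up:\n{rhoB.round(3)}")
```

Running it, P(Bob up) stays at 0.500 for every choice of theta_A, while Bob's conditional state swings around with Alice's setting: points 1) and 2) side by side.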

I realize some of the posters here may not agree with my assessments, no problem there. But hopefully I am a little further along than before. :smile:
 
  • #177
Again, as soon as photons are involved, there's no other way than QED to describe them adequately. QFT of course contains everything about entanglement, as does any flavor of QT.

Also read again your QFT books about what "local interaction" and "microcausality" mean in contradistinction to long-ranged correlations due to entanglement. This resolves the apparent contradiction between the possibility of long-range correlations described by entanglement on the one hand and the fact that no instantaneous or even acausal influence of A's measurement on B's photons is necessary on the other.
 
  • #178
DrChinese said:
1) The no-signaling theorem: the marginal probability of an outcome for Bob does NOT change due to Alice's choice of measurement. In short, Bob's outcomes are always random.

Yes.

DrChinese said:
2) The experimentally demonstrated quantum nonlocality: the state of Bob DOES change due to Alice's choice of measurement. In short, Bob is cast into a pure state relative to Alice.

I think a better phrasing would be that the correlations between Bob's and Alice's measurement outcomes can violate the Bell inequalities. Putting it in terms of "change of state" raises issues (discussed already quite thoroughly in some recent thread or other) that don't need to be raised to describe the experimental facts of quantum nonlocality.
 
  • #179
vanhees71 said:
Yes sure, but it's an established fact of 100 years of testing QT. For me that's the only conclusion I can come to in view of all the Bell tests disproving local deterministic HV theories and confirming QT.
Indeed, but the fact that your choice of measurement selects the sample space is what leads to the fact that the measurement "creates" in some sense the value. It means for example that we cannot think of the state ##|\uparrow_{z}\rangle## as actually representing a particle with angular momentum ##\frac{\hbar}{2}## about the ##z##-axis, for we must select the sample space via our measurement choice. It only means:
If you choose to measure ##S_{z}## then you will get ##\uparrow## with probability ##1##.
Taking it otherwise, that is, to mean the particle actually has ##\frac{\hbar}{2}## angular momentum about the ##z##-axis, leads to nonlocality issues.

It is in this sense that we are led to the measurement "creating the value". I don't think it is sloppy language.
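As a trivial numerical rendering of that reading (a minimal sketch, nothing beyond the Born rule): the state ##|\uparrow_z\rangle## fixes the outcome probabilities for every choice of measurement axis, but only the ##S_z## choice gives a certain outcome.

```python
import numpy as np

up_z = np.array([1.0, 0.0])  # the prepared state |up_z>

def p_up(theta, psi):
    """Born-rule probability of 'up' along an axis tilted by theta from z."""
    up_axis = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return abs(up_axis @ psi) ** 2

print(p_up(0.0, up_z))        # measure S_z: 1.0, certain
print(p_up(np.pi / 2, up_z))  # measure S_x: 0.5, maximally random
print(p_up(np.pi / 3, up_z))  # another axis: cos^2(pi/6) = 0.75
```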
 
  • #180
vanhees71 said:
1. Again, as soon as photons are involved, there's no other way than QED to describe them adequately.

2. QFT of course contains everything about entanglement, as does any flavor of QT.

1. And yet, entanglement fundamentally does not require photons and does not require QFT. That makes it hard to argue entanglement is a case for the more complex theory. The old case of spin-1/2 electrons raises the fundamental issues of quantum locality that we wish to resolve.

2. I guess I can't dispute that. But I certainly saw doubts about the entanglement side from a number of the posters. Apparently there are some entanglement issues that are not fully resolved. Although you seem satisfied, so that is a good recommendation.
 
  • #181
Kurt Gottfried and Tung-Mow Yan in “Quantum Mechanics: Fundamentals” (Second Edition):

“Thus it is finally a matter of taste whether one calls quantum mechanics local or not. In the statistical distribution of measurement outcomes on separate systems in entangled states there is no hint of non-locality. Quantum theory does not offer any means for superluminal signaling. But quantum mechanics, and by that token nature itself, does display perfect correlations between distant outcomes, even though Bell's theorem establishes that pre-existing values cannot be assigned to such outcomes and it is impossible to predict which of the correlated outcomes any particular event will reveal.” [emphasized by LJ]
 
  • #182
DarMM said:
Indeed, but the fact that your choice of measurement selects the sample space is what leads to the fact that the measurement "creates" in some sense the value. It means for example that we cannot think of the state ##|\uparrow_{z}\rangle## as actually representing a particle with angular momentum ##\frac{\hbar}{2}## about the ##z##-axis, for we must select the sample space via our measurement choice. It only means:
If you choose to measure ##S_{z}## then you will get ##\uparrow## with probability ##1##.
Taking it otherwise, that is, to mean the particle actually has ##\frac{\hbar}{2}## angular momentum about the ##z##-axis, leads to nonlocality issues.

It is in this sense that we are led to the measurement "creating the value". I don't think it is sloppy language.
I think this formulation: "the fact that your choice of measurement selects the sample space is what leads to the fact that the measurement "creates" in some sense the value"

is what leads to the misunderstandings documented by @DrChinese's point of view. Taking the minimal statistical interpretation seriously, you should rather say: "the choice of measurements selects the ensembles you consider, given an ensemble defined by the preparation of the state".

In this way you get rid of the misunderstanding that the local measurement at A must lead to an instantaneous influence on the measured entities at B. It is in accordance with the fact that the temporal order of the measurements does not play any role (if the measurement events are space-like separated there's even no temporal order at all!), because you don't need the argument of the collapse proponent that the measurement at A causally affects the measurement at B. Both A and B can choose what they measure, and all you know from the state preparation are the probabilities for the outcomes of measurements at A and B. Sufficiently detailed measurement protocols and clever arrangements, such as the delayed-choice setups of Bell tests (realized nowadays in various "quantum-eraser" experiments in real-world labs!), allow you to select different subensembles based on the measurements recorded in the protocol.

For me the only consistent interpretation, i.e., obeying both the locality/microcausality principle of the usual QFT formalism and the possibility of stronger-than-classically-possible long-ranged correlations described through entanglement, is the minimal statistical interpretation, based on the assumption that the random nature of the outcome of measurements (no matter whether you describe them in idealized (gedanken) setups as complete measurements or more realistically, taking into account the non-ideality of real-world measurement devices in terms of the POVM formalism) is inherent in nature and not due to incomplete knowledge of the state as in classical statistical physics.

The important lesson to be learned from all these discussions is that, when in doubt on metaphysical concepts, which are necessarily unsharp compared to the scientific content of a theory, you have to go back to the successful formalism and find a metaphysical interpretation that is consistent with it, i.e., the empirically well-established facts about the behavior of nature as analyzed for over 100 years since the first discovery of quantum aspects of nature in 1900. The great success of modern natural science methodology is due to the decoupling of science from philosophy, and as far as I can see, philosophy can only a posteriori build a metaphysical world view after the scientific issues are clear, and then it might be of some value also for the understanding of the implications of the scientific discoveries for a more general worldview.
 
  • #183
vanhees71 said:
Taking the minimal statistical interpretation seriously, you should rather say: "the choice of measurements selects the ensembles you consider, given an ensemble defined by the preparation of the state".
I'm not sure it is this easy. So the initial preparation gives one an ensemble ##\rho##. When one selects a measurement you're saying it "selects the ensemble" you consider. What is the relation of this ensemble to the original ensemble given by a preparation? A subensemble or what?
 
  • Like
Likes Lord Jestocost
  • #184
vanhees71 said:
I think this formulation: "the fact that your choice of measurement selects the sample space is what leads to the fact that the measurement "creates" in some sense the value"

is what leads to the misunderstandings documented by @DrChinese's point of view. Taking the minimal statistical interpretation seriously, you should rather say: "the choice of measurements selects the ensembles you consider, given an ensemble defined by the preparation of the state".

In this way you get rid of the misunderstanding that the local measurement at A must lead to an instantaneous influence on the measured entities at B. It is in accordance with the fact that the temporal order of the measurements does not play any role (if the measurement events are space-like separated there's even no temporal order at all!), because you don't need the argument of the collapse proponent that the measurement at A causally affects the measurement at B. Both A and B can choose what they measure, and all you know from the state preparation are the probabilities for the outcomes of measurements at A and B. Sufficiently detailed measurement protocols and clever arrangements, such as the delayed-choice setups of Bell tests (realized nowadays in various "quantum-eraser" experiments in real-world labs!), allow you to select different subensembles based on the measurements recorded in the protocol.

For me the only consistent interpretation, i.e., obeying both the locality/microcausality principle of the usual QFT formalism and the possibility of stronger-than-classically-possible long-ranged correlations described through entanglement, is the minimal statistical interpretation, based on the assumption that the random nature of the outcome of measurements (no matter whether you describe them in idealized (gedanken) setups as complete measurements or more realistically, taking into account the non-ideality of real-world measurement devices in terms of the POVM formalism) is inherent in nature and not due to incomplete knowledge of the state as in classical statistical physics.

The important lesson to be learned from all these discussions is that, when in doubt on metaphysical concepts, which are necessarily unsharp compared to the scientific content of a theory, you have to go back to the successful formalism and find a metaphysical interpretation that is consistent with it, i.e., the empirically well-established facts about the behavior of nature as analyzed for over 100 years since the first discovery of quantum aspects of nature in 1900. The great success of modern natural science methodology is due to the decoupling of science from philosophy, and as far as I can see, philosophy can only a posteriori build a metaphysical world view after the scientific issues are clear, and then it might be of some value also for the understanding of the implications of the scientific discoveries for a more general worldview.

Much wording around a simple question: Does an observable of a quantum system have the same value just before the measurement as is obtained by the measurement, or not? (The Copenhagenists deny that an observable has any value before the measurement.)
 
  • #185
DarMM said:
I'm not sure it is this easy. So the initial preparation gives one an ensemble ##\rho##. When one selects a measurement you're saying it "selects the ensemble" you consider. What is the relation of this ensemble to the original ensemble given by a preparation? A subensemble or what?
In a highly idealized way you start with a preparation procedure. E.g., you prepare polarization-entangled photon pairs (say in the singlet state) via parametric downconversion. Then A and B measure the polarization of the two photons. If both choose to measure the polarization in the same direction, each sees just ideally unpolarized photons. With sufficiently precise time stamps in each observer's measurement protocol they can relate the outcomes of their polarization measurements to each entangled pair and later select subensembles, i.e., they can select all pairs where A measured horizontal polarization and look at what B found for his photon of the pair, finding the 100% correlation: whenever A finds H, B finds V, and vice versa. It's of course a subensemble half as large as the original. The other partial ensemble is just complementary, and the total ensemble simply reflects that each of the single photons is perfectly unpolarized.

Of course, for a more realistic evaluation of real-lab experiments you have to take into account that all preparations and measurements are non-ideal, and you have to carefully evaluate the systematic and statistical errors. In the formalism that can (sometimes) be described by the POVM formalism. I'm not arguing against the POVM formalism but against the claim that it's something going beyond standard Q(F)T in the minimal interpretation.

Of course A and B can choose arbitrary directions for their polarization measurements, and you can still select subensembles and evaluate the correlations. You can choose appropriately different measurement setups to also demonstrate the violation of Bell's inequality. This is of course only possible on ensembles, because you need measurements of incompatible observables, which cannot be realized on a single system but only subsequently on ensembles of equally prepared systems. All these are indeed probabilistic statements about the outcomes of measurements and nothing more.
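The record-and-postselect procedure described above is easy to mimic in a toy Monte Carlo (a minimal sketch; drawing B's outcome conditioned on A's is only a trick to generate the quantum joint distribution for the polarization singlet, not a claim about any mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pairs(alpha, beta, n):
    """Outcome pairs for polarization-singlet photons, polarizers at
    angles alpha (A) and beta (B); P(outcomes equal) = sin^2(alpha-beta)."""
    a = rng.choice([+1, -1], size=n)               # A's marginal: 50/50
    equal = rng.random(n) < np.sin(alpha - beta) ** 2
    b = np.where(equal, a, -a)
    return a, b

# Equal angles: postselect the subensemble where A's photon passed ("H")
a, b = sample_pairs(0.0, 0.0, 10_000)
sub_b = b[a == +1]                                  # roughly half the pairs
print("P(B blocked | A passed) =", np.mean(sub_b == -1))  # -> 1.0
```

The two subensembles selected on A's outcome each look perfectly polarized from B's side, while B's full record alone remains 50/50, exactly as described above.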
 
  • #186
Lord Jestocost said:
Much wording around a simple question: Does an observable of a quantum system have the same value just before the measurement as is obtained by the measurement, or not? (The Copenhagenists deny that an observable has any value before the measurement.)
Within the minimal interpretation, which is a no-nonsense flavor of Copenhagen, it depends on the prepared state whether an observable has a determined value or not. If it does not have a determined value, you only know the probabilities for the outcomes of measurements of this observable, given the state the measured system is prepared in. That's it. There's no necessity for any additional elements within QT. It's a complete description of what's observed, including the randomness of the outcomes of measurements of observables that are not determined by the state preparation.
 
  • #187
vanhees71 said:
Of course A and B can choose arbitrary directions for their polarization measurements, and you can still select subensembles and evaluate the correlations
But one's choice of measurement produces a complete sample space that cannot be understood as a subensemble of the preparation. The state ##\rho## and one's choice of a context give a complete sample space that cannot be seen as a subensemble of another; that's basically what the CHSH setup tells you, as does Fine's theorem.

That's what's confusing about QM: the preparation alone is not an ensemble, only the preparation together with a context is.
 
  • #188
vanhees71 said:
If it does not have a determined value, you only know the probabilities for the outcomes of measurements of this observable, given the state the measured system is prepared in
Yes, but the Kochen-Specker theorem, the CHSH inequality and Fine's theorem show you that just because ##|\uparrow_{z}\rangle## will give ##\frac{\hbar}{2}## when measured along the ##z##-axis with certainty, the particle does not actually possess ##\frac{\hbar}{2}## along the ##z##-axis prior to the measurement.

I mean in a certain sense one just needs the Kochen-Specker theorem alone. If you cannot assign pre-existent values to variables, but then in the measurement one obtains a value, then how do you get out of the fact that the value arises in measurement?

I mean you are either saying there was a value prior to measurement or there wasn't. If there was you run into contextuality problems and possible fine tuning and you're sort of talking about a hidden variable theory. If you are saying the latter then literally the value is created by the measurement process. I don't see what else one could be saying.
 
  • #189
I don't understand this statement. Of course, all subensembles together give the prepared ensemble (everything in an idealized sense of no losses). The choice of the subensembles of course depends on the specific measurement setup.

Concerning CHSH, I think the example in Wikipedia,

https://en.wikipedia.org/wiki/CHSH_inequality#A_typical_CHSH_experiment
is correctly described. You indeed need "four subexperiments", distinguished by different relative orientations of the polarization measurements. You can of course not do all four measurements on a single realization. So each measurement selects four different and mutually exclusive "subensembles". The total ensemble is given by the same state preparation in all subexperiments.
 
  • #190
vanhees71 said:
I don't understand this statement. Of course, all subensembles together give the prepared ensemble (everything in an idealized sense of no losses). The choice of the subensembles of course depends on the specific measurement setup.
But it is literally not true due to the structure of quantum probability. All variables in a CHSH test cannot be considered as defined on a common sample space via Fine's theorem. Thus they cannot be considered to be drawn from a common ensemble. If they're not marginals on a common sample space they cannot be thought of as subensembles. See Streater's book "Lost Causes in Theoretical Physics" Chapter 6. They're not sub-experiments.

However the main point here is how you react to the Kochen-Specker theorem. It says that observables either have no pre-existent value or they do but they are contextual. Which one do you take to be true? If the former how do you avoid the conclusion that the measurement creates the value?
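One way to see the content of Fine's theorem concretely (a small illustrative sketch): if the four CHSH observables did live on one common sample space, every run could be assigned pre-existing values ##A, A', B, B' = \pm 1##, and brute-force enumeration of all such assignments bounds ##|S|## by 2, below the quantum value ##2\sqrt{2}##.

```python
from itertools import product

# All deterministic assignments of pre-existing values to A, A', B, B'.
# A joint distribution on a common sample space is a mixture of these
# 16 atoms, so it can never exceed the deterministic maximum of |S|.
best = max(abs(A * B + A * B2 + A2 * B - A2 * B2)
           for A, A2, B, B2 in product([+1, -1], repeat=4))
print(best)  # -> 2, while quantum mechanics reaches 2*sqrt(2) ~ 2.83
```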
 
  • #191
It's clear that vanhees71 isn't bothered by entanglement because it has a precise mathematical description and is empirically verified. He doesn't require any ontological explanation for entanglement and is confused by the fact that anyone else does. What confuses me is that he participates in foundations discussions, given his lack of appreciation for the ontological motives of the participants. Although, I must admit, I'm almost as bad when I point out that a desire for dynamical explanation, e.g., via causal mechanisms and/or hidden variables, is what has to be abandoned and replaced by constraint-based explanation (with no dynamical counterpart). For many people, that's equivalent to telling them to forget ontological explanation altogether :-)
 
  • #192
RUTA said:
It's clear that vanhees71 isn't bothered by entanglement because it has a precise mathematical description and is empirically verified. He doesn't require any ontological explanation for entanglement and is confused by the fact that anyone else does. What confuses me is that he participates in foundations discussions, given his lack of appreciation for the ontological motives of the participants.
Forgive the psychoanalysis but from my experience with such matters, the fact that he sees that other serious physicists are worried about these issues and continues to participate in good faith demonstrates to me that he either feels he can actually alleviate our worries through his explanation, or - even though he believes his stance is pragmatically justified - he has some uncertainty regarding this important issue about which he subconsciously wishes to learn more; what better place to directly learn more than from those honestly expressing their doubts?
 
  • #193
Well this just broaches the topic of vanhees realism once more. All we can say is that for suitably prepared initial posts there are probabilities of various vanhees responses formed as irreversible records on our monitors. To go beyond this and posit an ontic vanhees is unwarranted by the formalism.

Considering the physicsforums servers are in America and there are vanhees observations in Frankfurt, the correlation between these would require a nonlocally extended ontic vanhees. There is literally no other explanation.
 
  • #194
In any case, I am happy that @vanhees71 does continue to discuss these matters because it helps to demonstrate - from the more rigorous contrary position - exactly how fragile the minimal interpretation actually is. The demonstration thereof in the public domain may naturally elicit feelings of uneasiness among physicists - who are not used to encountering such fragile arguments w.r.t. physics - but it is necessary for them to take these feelings seriously, because we are talking about the currently accepted foundations of physics: all of (theoretical) physics based on these foundations is what is at stake.

In the face of this uneasiness the scientist is actually being forced to make an ethical decision which displays his character: either he confronts the matter head on and honestly admits that he doesn't know or he can pretend to know and so abandon the very principles of science; those who opt for the latter choice are easy to detect because they will then tend to even begin to argue for censorship of further discussion. Self-censorship is the beginning of the death of science; it is very interesting to note that Peter Woit's latest blogpost is also on this very topic.

As Feynman puts it, the scientist - funded by and therefore having obligations to society - actually only has one choice: the scientist must fearlessly admit that he does not know and live with the uncertainty that his beloved theory might actually be wrong: doing anything else is just an exercise in self-deception and - even worse - deception of others, including deception of the public; Smolin has made this very point clearer than anyone else I have encountered either in the popular or professional literature.

As Feynman says: I can live with doubt, and uncertainty, and not knowing. I think it's much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers, and possible beliefs, and different degrees of certainty about different things, but I'm not absolutely sure of anything. There are many things I don't know anything about, such as whether it means anything to ask "Why are we here?" I might think about it a little bit, and if I can't figure it out then I go on to something else. But I don't have to know an answer. I don't feel frightened by not knowing things, by being lost in the mysterious universe without having any purpose — which is the way it really is, as far as I can tell. Possibly. It doesn't frighten me.
 
  • #195
Regarding statistical interpretations of quantum mechanics, Paul Busch, Pekka J. Lahti and Peter Mittelstaedt put it in the following way in chapter IV.3. “Ensemble and Hidden Variable Interpretations” in „The Quantum Theory of Measurement” (Second Revised Edition, Springer):

“The statistical interpretations of quantum mechanics can be divided into two groups, the measurement statistics and the statistical ensemble interpretations (Sects. III.3.2-3). These interpretations rely explicitly on the relative frequency interpretation of probability, and in them the meaning of probability is often wrongly identified with the common method of testing probability assertions...

In the measurement statistics interpretation the quantum mechanical probability distributions, such as ##p^A_T##, are considered only epistemically as the distributions for measurement outcomes... In this pragmatic view quantum mechanics is only a theory of measurement outcomes providing convenient means for calculating the possible distributions of such outcomes. It may well be that such an interpretation is sufficient for some practical purposes; but it is outside the interest of this treatise to go into any further details, for example, to study the presuppositions of such a minimal interpretation. The measurement problem is simply excluded in such an interpretation...

The ensemble interpretation of quantum mechanics describes individual objects only statistically as members of ensembles. This interpretation is motivated by the idea that each physical quantity has a definite value at all times. Thus no measurement problem would occur in this interpretation. Some merits of the ensemble interpretation of quantum mechanics are put forward, for example, in [Bal70,88, d'Espagnat 1971]. But these merits seem to consist only of a more or less trivial avoiding of the conceptual problems, like the measurement problem, arising in a realistic approach. In fact it is only in the hidden variable approaches that one tries to take seriously the idea of the value-definiteness of all quantities.”
 
  • #196
DarMM said:
But it is literally not true due to the structure of quantum probability. All variables in a CHSH test cannot be considered as defined on a common sample space via Fine's theorem. Thus they cannot be considered to be drawn from a common ensemble. If they're not marginals on a common sample space they cannot be thought of as subensembles. See Streater's book "Lost Causes in Theoretical Physics" Chapter 6. They're not sub-experiments.

However the main point here is how you react to the Kochen-Specker theorem. It says that observables either have no pre-existent value or they do but they are contextual. Which one do you take to be true? If the former how do you avoid the conclusion that the measurement creates the value?
Again I don't understand. It must be possible to do the described experiments to test the CHSH relation. If you cannot do this within QT, you cannot even define the quantities entering this relation to test it.

In the example from the Wikipedia article quoted above, there are four incompatible experimental setups necessary. Each experiment very clearly subdivides an ensemble into subensembles according to the polarization measurements on the two photons. If this were not possible, you couldn't do this very experiment.

Since the measurements are mutually incompatible you need to prepare four ensembles in the same state and do one of the four measurements to divide each of them into the appropriate subensembles, and then combine the probabilistic outcomes to check the CHSH relation.

It's like in the simpler example of testing the uncertainty relation ##\Delta x \Delta p \geq \hbar/2##. Of course you cannot measure position and momentum accurately on one particle. Thus you need to prepare a first ensemble of single particles in the state ##\hat{\rho}##, do very accurate position measurements, and evaluate ##\Delta x##. Then you have to prepare a second ensemble of single particles, again in the same state ##\hat{\rho}##, measure momentum very accurately, and evaluate ##\Delta p##. With these two incompatible measurements together you can test the uncertainty relation for particles prepared in the state ##\hat{\rho}##.
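As a toy rendering of that two-ensemble protocol (a minimal sketch, assuming a minimum-uncertainty Gaussian state in units with ##\hbar = 1##):

```python
import numpy as np

rng = np.random.default_rng(1)
hbar, sigma, n = 1.0, 0.7, 100_000   # Gaussian packet of width sigma

# Ensemble 1: accurate position measurements on n copies of the state
x = rng.normal(0.0, sigma, n)
# Ensemble 2: accurate momentum measurements on n fresh copies
p = rng.normal(0.0, hbar / (2 * sigma), n)

# Combine the two incompatible measurement series to test the relation
print(x.std() * p.std(), ">=", hbar / 2)   # ~0.5: the bound is saturated
```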
 
  • #197
vanhees71 said:
Each experiment very clearly subdivides an ensemble into subensembles according to the polarization measurements on the two photons. If this were not possible, you couldn't do this very experiment
That's not the point. The point is that the ensembles found in each of the four measurement choices have mathematical features preventing them from being understood as selections from one common ensemble. If you measure ##A, C## and you measure ##B, D## they cannot be thought of as subensembles of one ensemble, nor as alternate coarse-grainings of a common ensemble. They are simply two different ensembles. That is a mathematical fact, reflected in there being no Gelfand homomorphism subsuming all four observables.

However even the whole CHSH setup is a side point. The main point is the KS theorem, which says either the values don't pre-exist the measurement or they do but are contextual. It's one or the other. If you take the option that they don't pre-exist the measurement, then how do you avoid the measurement creating them?
 
  • #198
DarMM said:
[]

However even the whole CHSH setup is a side point. The main point is the KS theorem, which says either the values don't pre-exist the measurement or they do but are contextual. It's one or the other. If you take the option that they don't pre-exist the measurement, then how do you avoid the measurement creating them?
The problem that makes this unconvincing to me is that a projective measurement is not a measurement of a state but an enforced change of state. Measuring a dynamic variable is never projective. Given that the 'value' before projection is irrelevant to the dynamics (the angular momentum is preserved), isn't KS just saying that we cannot assign values because we cannot know them?

The remote influence vs non-local correlations argument cannot be settled by the formalism.
The remote influence believers should explain what actually is passed between the locations and how this can be experimentally detected.
 
  • #199
Mentz114 said:
The problem that makes this unconvincing to me is that a projective measurement is not a measurement of a state but an enforced change of state. Measuring a dynamic variable is never projective. Given that the 'value' before projection is irrelevant to the dynamics (the angular momentum is preserved), isn't KS just saying that we cannot assign values because we cannot know them?
No, this is pretty clear in its actual proof.
 
  • #200
DarMM said:
No, this is pretty clear in its actual proof.
The proof only applies to projections not measurements.
What is the point in assigning values to irrelevant and unknowable properties ?
No mathematical theorem can prove the existence or not of a real thing.
 
  • #201
Mentz114 said:
The proof only applies to projections not measurements.
What is the point in assigning values to irrelevant and unknowable properties ?
No mathematical theorem can prove the existence or not of a real thing.
Measurements in labs have the structure of POVMs that the KS theorem applies to. Thus if quantum theory is correct, which seems to be the case, the theorem applies to the real world.
They're not "irrelevant and unknowable properties", they are how quantum theory represents actual measurements in labs.
 
  • #202
DarMM said:
That's not the point. The point is that the ensembles found in each of the four measurement choices have mathematical features preventing them from being understood as selections from one common ensemble. If you measure ##A, C## and you measure ##B, D## they cannot be thought of as subensembles of one ensemble, nor as alternate coarse-grainings of a common ensemble. They are simply two different ensembles. That is a mathematical fact, reflected in there being no Gelfand homomorphism subsuming all four observables.

However even the whole CHSH setup is a side point. The main point is the KS theorem, which says either the values don't pre-exist the measurement or they do but are contextual. It's one or the other. If you take the option that they don't pre-exist the measurement, then how do you avoid the measurement creating them?
I think it's a language issue. Perhaps I don't express myself clearly. What I mean is the following:

I consider two spins. The ensemble is described by the spin singlet state, i.e., you prepare photons in this state. In the usual notation, not taking into account the Bose nature of the photons (which is sufficient here), we have
$$\hat{\rho}=|\Psi \rangle \langle \Psi|$$
with
$$|\Psi \rangle = \frac{1}{\sqrt{2}} (|HV \rangle-|VH \rangle).$$
Now in the first measurement setup A measures her photon's polarization in the ##\theta_A=0## direction and ##B## his in the ##\theta_B=\pi/4## direction. The outcome of the measurement is ##+## or ##-## for each observer, depending on whether the corresponding photon goes through the polarizer or not. Now you count the four possible outcomes ##++##, ##+-##, ##-+##, ##--##. In this idealized discussion, for each photon pair in the ensemble with certainty one of these outcomes occurs, and this divides the original ensemble into 4 subensembles.

To check the CHSH relation you only need to count the outcomes and form ##E## as defined in Eq. (3) of

https://en.wikipedia.org/wiki/CHSH_inequality#A_typical_CHSH_experiment

Then you repeat this experiment 3 more times with the other pairs of angles quoted there to evaluate the quantity in Eq. (2). I haven't checked it explicitly, but if I remember right, with the chosen angles that yields the maximal possible violation of the CHSH relation, ##S = 2 \sqrt{2}>2##.
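That memory is easy to check in a few lines (a minimal sketch; ##E(\alpha,\beta) = -\cos 2(\alpha-\beta)## is the standard quantum prediction for the polarization singlet, and the angles are those from the Wikipedia article):

```python
import numpy as np

def E(alpha, beta):
    """Quantum correlation for polarization-singlet pairs,
    ideal polarizers at angles alpha and beta."""
    return -np.cos(2 * (alpha - beta))

a, a2 = 0.0, np.pi / 4            # Alice's two settings (0, 45 deg)
b, b2 = np.pi / 8, 3 * np.pi / 8  # Bob's two settings (22.5, 67.5 deg)

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # -> 2.828... = 2*sqrt(2), the maximal quantum violation
```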

Of course, in this case you don't need to argue about "subensembles" as is needed in the quantum eraser or the entanglement-swapping experiments.
 
  • #203
vanhees71 said:
I haven't checked it explicitly, but if I remember right, with the chosen angles that yields the maximal possible violation of the CHSH relation
This is all correct. The difference is that in the classical case each of the four choices of measurement setups could be derived as alternate coarse-grainings of a single ensemble. So let's say we have angles ##A,B## for Alice and angles ##C,D## for Bob.

When you do the experiment with ##A,C## say, we get four sub-ensembles for each outcome. Again as you said. I'll just call these subensembles ##E_{i}##.
When you do the experiment with ##A,D## say, same thing. Four sub-ensembles ##F_{i}##

What makes QM different from classical probability theories is that the ##E_{i}## and ##F_{i}## cannot be considered as alternate subensembles/partitions of the same ensemble. This is due to the structure of quantum probability theory being different from Kolmogorovian probability. Only the state and the measurement choice define an ensemble, e.g. the ensemble is given by the triple ##\left\{\rho, A, C\right\}##.

Streater's monograph explains this quite well. It is odd, as it means the experimenter's choice of apparatus defines not just the subensembles, but literally the total ensemble itself!

As I said though my original point is with the KS theorem. I think you have to accept that measurements create values in the minimal statistical interpretation and that this is not just sloppy language.
 
  • #204
Sure, that's the whole point of all these Bell ideas, including the CHSH variant. I think I've abused the term "(sub)ensemble(s)".

Now I'm a bit confused about this one sentence:

"Streaters monograph explains this quite well. It is odd as it means the experimenters choice of apparatus defines not just the subensembles, but literally the total ensemble itself!"

Isn't the total ensemble defined by the preparation before the measurements? Of course, the split into subensembles due to a measurement depends on the chosen measurement.

I'm agnostic concerning the meaning of "measurements create values". You measure an observable, and, adequately calibrating the apparatus, you get the values predicted by QT (i.e., the eigenvalues of the corresponding operators). Why you always get a definite outcome, I think, one cannot say. QT describes only the probabilities. The occurrence of definite outcomes in measurements is an empirical fact that's described by, but not derived from, QT. What is true for sure is that an observable only takes a determined value if the state is accordingly prepared, i.e., in the case that there are no degeneracies, by preparation in the eigenstate of the corresponding observable operator. In the case of degeneracy, it should be the projector to the corresponding eigenspace,
$$\hat{\rho}=\frac{1}{d} \sum_{\alpha=1}^d |a,\alpha \rangle \langle a,\alpha|.$$
 
  • #205
DarMM said:
Measurements in labs have the structure of POVMs that the KS theorem applies to. Thus if quantum theory is correct, which seems to be the case, the theorem applies to the real world.
They're not "irrelevant and unknowable properties", they are how quantum theory represents actual measurements in labs.
I have to disagree with the text I've emphasized. No quantum states are measured in labs.
A value which does not appear in the Hamiltonian is irrelevant to the dynamics.
[edit] left out a vital 'not'
 
  • #206
vanhees71 said:
[]

I'm agnostic concerning the meaning of "measurements create values". You measure an observable, and, adequately calibrating the apparatus, you get the values predicted by QT (i.e., the eigenvalues of the corresponding operators). Why you always get a definite outcome, I think, one cannot say. QT describes only the probabilities. The occurrence of definite outcomes in measurements is an empirical fact that's described by, but not derived from, QT. What is true for sure is that an observable only takes a determined value if the state is accordingly prepared, i.e., in the case that there are no degeneracies, by preparation in the eigenstate of the corresponding observable operator. In the case of degeneracy, it should be the projector to the corresponding eigenspace,
$$\hat{\rho}=\frac{1}{d} \sum_{\alpha=1}^d |a,\alpha \rangle \langle a,\alpha|.$$
This cannot be emphasized enough. The actual outcome is created by evolution of dynamic variables not probabilities.
 
  • #207
vanhees71 said:
You measure an observable, and, adequately calibrating the apparatus, you get the values predicted by QT (i.e., the eigenvalues of the corresponding operators). Why you always get a definite outcome, I think, one cannot say. QT describes only the probabilities. The occurrence of definite outcomes in measurements is an empirical fact that's described by, but not derived from, QT. What is true for sure is that an observable only takes a determined value if the state is accordingly prepared
By "determined value" I assume you mean that there will be an observable with a completely predictable outcome, not "already has that value prior to measurement" in line with your agnosticism on the issue.

vanhees71 said:
Isn't the total ensemble defined by the preparation before the measurements?
In a sense yes and no.

A quantum state is a sort of pre-ensemble (not a standard term, I'm just not sure how to phrase it); Robert Griffiths often uses the phrase "pre-probability". When provided with a context, the state together with the observables of that context will define an ensemble.

A basic property of an ensemble is something like the law of total probability, which says that if I have two observables ##A## and ##B## to measure on the ensemble, with outcomes ##A_{i}## and ##B_{j}##, then for a given ##A## outcome:
$$
P\left(A_{i}\right) = \sum_{j}P\left(A_{i} | B_{j}\right)P\left(B_{j}\right)
$$
which just reflects that ##A## and ##B## and their outcomes simply partition the ensemble differently. This fails in quantum theory and is one of the ways in which it departs from classical probability. Thus quantum observables cannot be seen as being drawn from the same ensemble.
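A minimal numerical instance of that failure (an illustrative sketch, using the Lüders rule for the intervening ##B## measurement): prepare ##|\!\uparrow_x\rangle##, take ##A = S_x## and ##B = S_z##.

```python
import numpy as np

up_x = np.array([1.0, 1.0]) / np.sqrt(2)          # state |up_x>
rho = np.outer(up_x, up_x)

P_A = np.outer(up_x, up_x)                        # projector: A (=S_x) gives "+"
P_B = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # projectors: S_z outcomes

direct = np.trace(P_A @ rho).real                 # P(A=+) measured directly
via_B = sum(np.trace(P_A @ (Pb @ rho @ Pb)).real  # sum_j P(B=j) P(A=+|B=j)
            for Pb in P_B)

print(direct, via_B)  # 1.0 vs 0.5: the classical total-probability rule fails
```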

Thus to define an ensemble in QM you have to give the state and the context of observables, not the state alone.

Streater explains it well in Chapter 6 of his text, as does Griffiths in Chapter 5 of his Consistent Quantum Theory. There are explanations in Quantum Probability texts, but I think you'd prefer those books.
 
  • #208
Mentz114 said:
I have to disagree with the text I've emphasized. No quantum states are measured in labs.
A value which does not appear in the Hamiltonian is irrelevant to the dynamics.
[edit] left out a vital 'not'
I never said they were. The KS theorem has nothing to do with quantum states, nor did my post mention quantum states. I don't understand the Hamiltonian remark.
 
  • #209
Mentz114 said:
This cannot be emphasized enough. The actual outcome is created by evolution of dynamic variables not probabilities.
Quantum Theory does not seem to describe that evolution as @vanhees71 mentioned. We know such variables if they exist will have to be nonlocal, retro/acausal or involve multiple worlds.
 
  • #210
DarMM said:
Quantum Theory does not seem to describe that evolution as @vanhees71 mentioned. We know such variables if they exist will have to be nonlocal, retro/acausal or involve multiple worlds.
Surely if "Quantum Theory does not seem to describe that evolution" then why is this inability considered a problem ? The things I'm talking about are not hidden variables. Pressure, temperature, momentum and energy are and actual stuff are dynamic variables that drive the universe, not probabilities.
 
