How do entanglement experiments benefit from QFT (over QM)?

  • #251
vanhees71 said:
An ensemble is a collection of independent equally prepared systems. What else is there to define? What else do you understand under an "ensemble"?
The same. But your definition conflicts with your earlier usage of the word subensemble, which makes no sense with this meaning. Hence I wondered what you mean.

Or does your notion of ensemble have some sort of temporal permanence, so that it remains the same when you change its momentum with a mirror and it splits at a beam splitter? But then the state would not be associated with the ensemble (i.e., the independent equally prepared systems) but with their momentary mode of existence.
 
  • #252
What I called "subensemble" was simply to sort each measurement into the different outcomes of the measurement. I guess it's a misleading wording, and I'll avoid it henceforth.
 
  • #253
DarMM said:
The interesting thing is that in classical mechanics the preparation alone would define an ensemble.
I still do not understand what you mean by the word "ensemble". Obviously I could find some kind of agreement with @A. Neumaier . Why is for you the preparation of many independent systems not defining an ensemble in the QT case but in the classical case?
 
  • #254
vanhees71 said:
I still do not understand what you mean by the word "ensemble". Obviously I could find some kind of agreement with @A. Neumaier . Why is for you the preparation of many independent systems not defining an ensemble in the QT case but in the classical case?
Because different measurements cannot be considered as partitioning a common ensemble into alternate subensembles due to the failure of the total law of probability.
 
  • #255
This I never claimed, but the preparation procedure is independent of the measurements you can do afterwards. So how can the "ensembles" defined by state preparation depend on what's measured afterwards? I guess what was really misleading was my use of the word "subensembles".
 
  • #256
vanhees71 said:
This I never claimed, but the preparation procedure is independent of the measurements you can do afterwards
Of course the preparation is. However the ensemble is not. The preparation procedure alone does not define an ensemble.

vanhees71 said:
So how can the "ensembles" defined by state preparation depend on what's measured afterwards?
Because unlike the classical case the preparation alone does not give a well-defined sample space of outcomes or lattice of events.
 
  • #257
DarMM said:
Of course the preparation is. However the ensemble is not. The preparation procedure alone does not define an ensemble.

Because unlike the classical case the preparation alone does not give a well-defined sample space of outcomes or lattice of events.
I've been reading these posts and trying to figure out where the mystery lies and how it's resolved according to this "statistical" interpretation. DarMM you seem to have a grasp of that, so let me ask you to explain it in terms of the Mermin device ... in the spirit of Dr. Chinese, who started this thread. For anyone who doesn't know the Mermin device, I've attached his original paper.

Fact 1 about the Mermin device states that the outcomes (Red or Green) are always the same when Alice and Bob choose the same measurement setting (both choose 1, both choose 2, or both choose 3). Mermin posits the existence of "instruction sets" to account for Fact 1. He says it's the only way he knows to guarantee Fact 1, since the outcomes can be spacelike separated from each other and the other person's measurement choice, and we don't want superluminal communication or causation. Instruction sets would be the classical case where the state preparation alone determines the sample space, right? That is, each trial of the experiment instantiates one of the possible instruction sets, 1R2R3G, 1R2G3R, 1G2G3R, 1R2R3R, etc., at particle creation independently of Alice and Bob's measurement choices. Mermin then shows that instruction sets entail an overall agreement of outcomes (for all trials, regardless of settings) of at least 5/9 (Bell's inequality for the Mermin device). But, Fact 2 of the Mermin device is that we have an overall agreement of outcomes (for all trials, regardless of settings) of only 1/2, in violation of Bell's inequality. So, the quantum preparation (as modeled by the Mermin device) does not define an ensemble ...
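For concreteness, here is a small Python sketch of the arithmetic above (my own illustration, not part of Mermin's paper; the helper names are mine and it uses only the standard library):

```python
# Enumerate the instruction sets and setting pairs of the Mermin device
# and compare the classical bound with the quantum prediction.
from itertools import product
from math import cos, pi

settings = (1, 2, 3)
instruction_sets = list(product("RG", repeat=3))   # 1R2R3G, 1R2G3R, ..., 8 in total

def agreement(iset):
    """Fraction of the 9 equally likely setting pairs that give the same colour."""
    same = sum(iset[a - 1] == iset[b - 1] for a, b in product(settings, repeat=2))
    return same / 9

classical_min = min(agreement(iset) for iset in instruction_sets)
print(classical_min)    # 5/9 ~ 0.556: instruction sets force agreement >= 5/9

# Quantum prediction: same setting (1/3 of trials) -> always agree;
# different settings (2/3 of trials) -> agree with probability cos^2(60 deg) = 1/4.
quantum = (1 / 3) * 1 + (2 / 3) * cos(pi / 3) ** 2
print(quantum)          # 0.5, i.e. Fact 2, violating the 5/9 bound
```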

Would you please finish the translation from there?
 


  • #258
DarMM said:
Of course the preparation is. However the ensemble is not. The preparation procedure alone does not define an ensemble.

Because unlike the classical case the preparation alone does not give a well-defined sample space of outcomes or lattice of events.
I see. So an ensemble is mathematically only defined by the specification of the complete random experiment, i.e., the preparation procedure together with what's measured on the prepared systems, in the most general form defined by a POVM.
 
  • Like
Likes Auto-Didact
  • #259
vanhees71 said:
I see. So an ensemble is mathematically only defined
An ensemble is defined only when one has a well defined sample space. In a sense an ensemble is an approximate physical realization of a sample space.

In Classical Mechanics the preparation alone (of multiple copies) gives one a well-defined ensemble, since after the preparation one has a well-defined lattice of events. An observable is just a family of events, and an observable outcome is one event/subset of the sample space.

In Quantum Theory the preparation alone does not give a well-defined ensemble, as you cannot consider the outcomes for different observables to be events on one common sample space. This is what prevents counterfactual reasoning. If you measure ##S_{z}##, say, then since there isn't a common sample space you cannot consider an ##S_{x}## event which might have occurred but which you didn't measure. There is no common sample space containing both ##S_{x}## and ##S_{z}## events.

Another way of phrasing it is that the difference between the classical (stochastic) case and the quantum case is that in the classical case the observables you don't measure still had an outcome you just didn't observe it. In the quantum case they don't, only the observable you look at obtains a value/outcome. And since a sample space is a collection of outcomes you have to specify the observable to even define outcomes. And then further since an ensemble is an approximate realization of a sample space, we have to choose an observable to even speak about what the preparation has in fact prepared.
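To make this concrete, here is a minimal numerical sketch (my own illustration, assuming numpy; not part of the argument above): on a common sample space the total law of probability would hold, but decomposing ##S_x## outcomes over unmeasured ##S_z## outcomes fails to reproduce the Born-rule value.

```python
# The "classical" decomposition of P(S_x = +hbar/2) over S_z outcomes
# disagrees with the direct Born-rule value for a generic pure state.
import numpy as np

z_up, z_dn = np.array([1, 0], complex), np.array([0, 1], complex)
x_up = (z_up + z_dn) / np.sqrt(2)

theta = np.pi / 8                                     # any angle not aligned with x or z
psi = np.cos(theta) * z_up + np.sin(theta) * z_dn     # a pure spin-1/2 state

def born(outcome, state):
    """Born-rule probability |<outcome|state>|^2."""
    return abs(np.vdot(outcome, state)) ** 2

p_direct = born(x_up, psi)                            # P(S_x = +hbar/2), ~0.854 here

# Classical total law: sum_j P(S_x=+ | S_z=j) P(S_z=j) = 1/2 for every state,
# since |<x_up|z_j>|^2 = 1/2 for both j.
p_total_law = sum(born(x_up, e) * born(e, psi) for e in (z_up, z_dn))

print(p_direct, p_total_law)                          # ~0.854 vs 0.5: the total law fails
```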
 
  • Like
Likes kith, dextercioby and vanhees71
  • #260
RUTA said:
Instruction sets would be the classical case where the state preparation alone determines the sample space, right?
Correct.

RUTA said:
Would you please finish the translation from there?
No problem. I'll just think on it a bit as I'd like to make it as concise as possible without rambling. I'll add a bit about your Relational Blockworld at the end as it has a simple enough explanation there (of course I imagine you already know this, but more for others)
 
  • #261
vanhees71 said:
Why is for you the preparation of many independent systems not defining an ensemble in the QT case but in the classical case?

“Ensembles” or “subensembles” are artificial contrivances based upon concepts of statistical thermodynamics. An “ensemble” is a collective of identically prepared systems which superficially seem to be identical but are distinguished from each other on a deeper level; in that sense, an “ensemble” is a “statistical collective”. However, in the case of quantum mechanics, thinking of the post-measurement situation in a statistical way doesn’t allow one to infuse statistical considerations into thinking about the pre-measurement situation: “The deeper reason for the circumstance that the wave function cannot correspond to any statistical collective lies in the fact that the concept of the wave function belongs to the potentially possible (to experiments not yet performed), while the concept of the statistical collective belongs to the accomplished (to the results of experiments already carried out).” (V. A. Fock)
 
  • Like
Likes vanhees71 and dextercioby
  • #262
I see. The "potentiality interpretation" of the "wave function" (or more generally a quantum state) is due to Schrödinger.
 
  • Like
Likes *now*
  • #263
vanhees71 said:
I still do not understand what you mean by the word "ensemble". Obviously I could find some kind of agreement with @A. Neumaier . Why is for you the preparation of many independent systems not defining an ensemble in the QT case but in the classical case?
The preparation of many independent systems may define an ensemble in the quantum case, but
then it is not consistent with what you describe here:
vanhees71 said:
What I called "subensemble" was simply to sort each measurement into the different outcomes of the measurement. I guess it's a misleading wording, and I'll avoid it henceforth.
But a Stern-Gerlach experiment does not constitute a measurement: it is a unitary operation.
Thus in such an experiment you are not sorting measurement outcomes into different groups.

This would be the case if you performed the same experiment on each of your prepared independent systems, producing certain results for each system, including a spin up or down, and afterwards grouped the systems into those where spin was up and those where spin was down, and looked at the other observables of the resulting subensembles.

But instead you:
1. take the ensemble of prepared systems, each in the state given by a symmetric superposition of (spin-up, momentum-up) and (spin-down, momentum-down);
2. change the system description by selecting the upper path, say, for further consideration only - not by measuring anything but by arrangement of your measuring equipment (no detectors at the down beam);
3. measure (at half the rate you'd have gotten with the original beam) a position on the upper beam;
4. declare the result as a spin-up measurement, invoking Born's rule for spin measurement.
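Here is a small numerical sketch of these four steps (my own illustration, assuming numpy; not part of the original post):

```python
# Steps 1-4 in matrix form: prepare, select the upper beam, renormalise, apply Born's rule.
import numpy as np

up, dn = np.array([1, 0], complex), np.array([0, 1], complex)
kron = np.kron                                        # tensor product, order: spin (x) momentum

# 1. symmetric superposition of (spin-up, momentum-up) and (spin-down, momentum-down)
psi = (kron(up, up) + kron(dn, dn)) / np.sqrt(2)

# 2. keep only the upper path: project with I_spin (x) |up><up|_momentum
P_upper = kron(np.eye(2), np.outer(up, up.conj()))
unnormalised = P_upper @ psi

# 3. half the original rate survives and the state is renormalised
rate = np.vdot(unnormalised, unnormalised).real       # 0.5
post = unnormalised / np.sqrt(rate)

# 4. Born's rule on the surviving beam: spin-z is now certain, spin-x is not
P_spin_zup = kron(np.outer(up, up.conj()), np.eye(2))
x_up = (up + dn) / np.sqrt(2)
P_spin_xup = kron(np.outer(x_up, x_up.conj()), np.eye(2))
print(rate)                                           # 0.5
print(np.vdot(post, P_spin_zup @ post).real)          # 1.0 -> declared "spin-up"
print(np.vdot(post, P_spin_xup @ post).real)          # 0.5 -> a subsequent SGE(x) gives 50/50
```

The projection-plus-renormalisation in steps 2-3 is exactly what is discussed as collapse below.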

Step 2 looks like taking a subensemble (since in step 3 you lose half the rate) but is associated not with a measurement but with the choice of a subset of the basis in which to measure. Thus it does not fit your explanation of what an ensemble is. Effectively you simply changed the preparation and prepared a new state.

Step 4 makes sense only if you interpret Step 2 as having collapsed the system to the state (spin-up, momentum-up) by projecting it onto the upper eigenspace of the momentum. For only then are you guaranteed to find spin up (as you claim to have obtained).

But you always said that collapse is not needed. This is why I still find your terminology confusing if not misleading.
 
  • #264
A. Neumaier said:
The preparation of many independent systems may define an ensemble in the quantum case, but
then it is not consistent with what you describe here:

But a Stern-Gerlach experiment does not constitute a measurement: it is a unitary operation.
Thus in such an experiment you are not sorting measurement outcomes into different groups.

This would be the case if you performed the same experiment on each of your prepared independent systems, producing certain results for each system, including a spin up or down, and afterwards grouped the systems into those where spin was up and those where spin was down, and looked at the other observables of the resulting subensembles.
But that's precisely what I meant. I don't necessarily need to perform other measurements on the subensemble, though I could of course do so, and at least in gedanken experiments one does this by measuring the spin component in another direction, demonstrating that it is not determined even though the spin state is completely specified as a pure state.

In an SGE(z), through the magnetic field the spin-z component gets entangled with the z-position of the atom, i.e., the beam is split into two partial beams, each with an (almost perfectly) determined spin-z component of either ##+\hbar/2## or ##-\hbar/2##. So far it's a unitary operation, which can in principle be reversed (though not in practice). Now I can define a subensemble with ##s_z=+\hbar/2## by just "dumping" the other partial beam. Now I could of course perform other measurements, like an SGE(x), with the well-known result that I get either of the two possible results ##s_x=\pm \hbar/2##, each with 50% probability.

If I understood it right, according to @DarMM only this complete operation defines a subensemble, i.e., after the preparation through the filtering ("partial-beam dumping") I also have to specify the observable I want to measure on it afterwards to completely specify the subensemble. I still don't understand why this should be necessary, but I can live with it. I'll just avoid the word "subensemble", though I find it very helpful when describing things like "quantum-erasure delayed-choice experiments".

I don't understand the difference between what you summarized in points 1-4 and my description. Also, nowhere do I need a collapse, unless you call using a beam dump a collapse ;-)).
 
  • #265
vanhees71 said:
Now I can define a subensemble with ##s_z=+\hbar/2## by just "dumping" the other partial beam.
vanhees71 said:
nowhere do I need a collapse, unless you call using a beam dump a collapse ;-)).
Why does the beam dump define a subensemble with ##s_z=+\hbar/2##?
Only because you collapse the superposition to something pure!
 
  • Like
Likes vanhees71
  • #266
Sure, and if you call a beam-dump a collapse, fine with me. Indeed a "filter measurement" is FAPP a collapse, but it's still a local interaction of the beam with the material it hits, no "spooky action at a distance".
 
  • #267
vanhees71 said:
Sure, and if you call a beam-dump a collapse, fine with me. Indeed a "filter measurement" is FAPP a collapse, but it's still a local interaction of the beam with the material it hits, no "spooky action at a distance".
Suppose a beam is split into a superposition of two beams. At positions where the two beams are very far apart, a beam dump collapse is obtained if one destroys one of the resulting beams (by position measurements there) and makes measurements on the other one. This is a bilocal activity created by coordinated local interactions at two far apart places.

Such activities, together with a comparison of the joint measurement statistics at a later time,
are at the heart of all nonlocality experiments. It is not ''spooky action at a distance'' but ''spooky passion at a distance''.
 
  • #268
DarMM said:
By "determined value" I assume you mean that there will be an observable with a completely predictable outcome, not "already has that value prior to measurement" in line with your agnosticism on the issue.In a sense yes and no.

A quantum state is a sort of pre-ensemble (not a standard term, I'm just not sure how to phrase it); Robert Griffiths often uses the phrase "pre-probability". When provided with a context, the state together with the observables of that context will define an ensemble.

A basic property of an ensemble is something like the total law of probability, which says that if I have two observables ##A## and ##B## to measure on the ensemble, with outcomes ##A_{i}## and ##B_{j}##, then for a given ##A## outcome:
$$
P\left(A_{i}\right) = \sum_{j}P\left(A_{i} | B_{j}\right)P\left(B_{j}\right)
$$
which just reflects that ##A## and ##B## and their outcomes partition the same ensemble in different ways. This fails in Quantum Theory and is one of the ways in which it departs from classical probability. Thus the outcomes of different quantum observables cannot be seen as being drawn from the same ensemble.

Thus to define an ensemble in QM you have to give the state and the context of observables, not the state alone.

Streater explains it well in Chapter 6 of his text, as does Griffiths in Chapter 5 of his Consistent Quantum Theory. There are explanations in Quantum Probability texts, but I think you'd prefer those books.

If the total law of probability breaks down for QM, then it should also break down for BM. However, BM is described by classical probability, so I'm not sure it's quite correct to say that a breakdown of the total law of probability is not consistent with classical probability.
 
  • #269
atyy said:
If the total law of probability breaks down for QM, then it should also break down for BM. However, BM is described by classical probability, so I'm not sure it's quite correct to say that a breakdown of the total law of probability is not consistent with classical probability.
BM has additional variables that make the position basis preferred, and if one looks at probabilities for position alone, these integrate to 1. In contrast to standard quantum mechanics, these are the only probabilities that matter in BM.

Indeed, BM has no probabilities for spin or momentum measurements. The latter are only illusions, in reality being position measurements in disguise:
A. Neumaier said:
In the analysis of [...] Figure 2 suggests that rather than measuring spin, it measures whether the particle starts in the upper part of the SG arrangement, independent of spin!
Demystifier said:
From the Bohmian perspective it's indeed silly to call it measurement of spin. [...] Bohmians speak to "ordinary" physicists by saying something like this: The procedure that you call measurement of spin is really a measurement of position and I will tell you what is really going on when you think you measure spin.
Whether Born's rule for arbitrary observables follows from BM (with quantum equilibrium assumption) is unclear to me.
 
Last edited:
  • Like
Likes Demystifier and Auto-Didact
  • #270
atyy said:
then it should also break down for BM.
It doesn't. In the probability theory for Bohmian Mechanics the total law holds.
 
  • #271
DarMM said:
It doesn't. In the probability theory for Bohmian Mechanics the total law holds.

How can that be if the predictions of QM and BM are the same? For example, if A and B are the outcomes of position and momentum measurements in QM, then the probabilities of the outcomes should be the same in QM and BM. What is different in the formula between QM and BM?
 
  • #272
atyy said:
How can that be if the predictions of QM and BM are the same? For example, if A and B are the outcomes of position and momentum measurements in QM, then the probabilities of the outcomes should be the same in QM and BM. What is different in the formula between QM and BM?
Bohmian Mechanics in equilibrium is equivalent to QM because of a posited strict restriction on epistemic reasoning within its probability theory. In Bohmian Mechanics in general the total law holds.

When we demand equilibrium we impose a very specific restriction on access to/ability to reason about the hidden variables. Provided this epistemic block always holds, the probability theory effectively reduces to that of QM. When blocked in this absolute way the effective Bayesian reasoning about observations is a probability theory like QM's. The resultant probability theory then does break Kolmogorov's axioms. Breaking the total law is not consistent with classical probability theory. Blocking statistical inference in a very specific way on a theory that normally has classical probability can cause it to not have classical probability.

Note that equilibrium exactly holding cannot be true but must be a thermalisation effect, so in essence if Bohmian Mechanics were true one should be able to see the total law restored.

There is a much broader point here that most of the interpretations of QM are not really interpretations but actually different theories. All hidden variable theories replicate QM under some kind of epistemic restriction that cannot hold in general and in some scenarios even with that restriction will have divergent predictions. Other views such as Many Worlds make conjectures about the formal structure of the theory that have yet to be verified.

The only actual interpretations proper are things like Quantum Bayesianism vs Copenhagen where really it's a purely philosophical thing, e.g. how do you view probabilities. You'll see a similar remark from Rudolf Peierls in "The Ghost in the Atom" from Cambridge University Press.

EDIT:
Note also that even in the underlying Bohmian theory preparations do not prepare ensembles of most quantities; we still have contextuality after all. Only a position ensemble is prepared.
 
  • Like
Likes mattt and kith
  • #273
DarMM said:
Bohmian Mechanics in equilibrium is equivalent to QM because of a posited strict restriction on epistemic reasoning within its probability theory. In Bohmian Mechanics in general the total law holds.

When we demand equilibrium we impose a very specific restriction on access to/ability to reason about the hidden variables. Provided this epistemic block always holds, the probability theory effectively reduces to that of QM. When blocked in this absolute way the effective Bayesian reasoning about observations is a probability theory like QM's. The resultant probability theory then does break Kolmogorov's axioms. Breaking the total law is not consistent with classical probability theory. Blocking statistical inference in a very specific way on a theory that normally has classical probability can cause it to not have classical probability.

Note that equilibrium exactly holding cannot be true but must be a thermalisation effect, so in essence if Bohmian Mechanics were true one should be able to see the total law restored.

There is a much broader point here that most of the interpretations of QM are not really interpretations but actually different theories. All hidden variable theories replicate QM under some kind of epistemic restriction that cannot hold in general and in some scenarios even with that restriction will have divergent predictions. Other views such as Many Worlds make conjectures about the formal structure of the theory that have yet to be verified.

The only actual interpretations proper are things like Quantum Bayesianism vs Copenhagen where really it's a purely philosophical thing, e.g. how do you view probabilities. You'll see a similar remark from Rudolf Peierls in "The Ghost in the Atom" from Cambridge University Press.

EDIT:
Note also that even in the underlying Bohmian theory preparations do not prepare ensembles of most quantities; we still have contextuality after all. Only a position ensemble is prepared.

Yes, I agree. Maybe the only difference is that I would say that all the routes here are also open to the minimal interpretation, so we don't have to say that the minimal interpretation goes beyond classical probability, any more than BM does. We could also say the minimal interpretation is contained within classical probability.
 
Last edited:
  • #274
atyy said:
Yes, I agree. Maybe the only difference is that I would say that all the routes here are also open to the minimal interpretation, so we don't have to say that the minimal interpretation goes beyond classical probability, any more than BM does. We could also say the minimal interpretation is contained within classical probability.
QM mathematically violates classical probability, thus is not contained in it. It is contained in a classical theory with a certain kind of epistemic restriction such as Bohmian Mechanics at equilibrium. However note that in equilibrium we are violating Kolmogorov's axioms anyway due to how the epistemic restriction functions. I'll say more about this in a while as it links into the Spekkens model. Bohmian Mechanics and other hidden variable theories replicate much of QM by this restriction alone; you only need the nonlocality/retrocausality to violate CHSH or Bell inequalities.

So in no sense is QM contained in classical probability. Mathematically classical probability is a subset of quantum probability not the other way around. It's like saying curved spacetime is contained in flat spacetime because the latter might turn out to be the correct description of nature.

I think more so one should say that a truly minimal view is neutral to there being a deeper theory where classical probability theory holds. However it would have to acknowledge that as far as we can tell now and operationally in labs preparations do not constitute ensembles. Regarding your previous statement:
atyy said:
so I'm not sure it's quite correct to say that a breakdown of the total law of probability is not consistent with classical probability
Breaking the total law is not consistent with classical probability mathematically. It may be the case that there is a deeper theory which uses classical probability, but that is a separate statement. Similarly, a Newton-Cartan bundle is not consistent with a Lorentzian metric theory, but the deeper gravitational theory turned out to involve such.

Also note that, due to contextuality, even in such a deeper theory a preparation does not constitute an ensemble for most observables. Although classical probability is restored, we cannot view our preparation as an ensemble for observables like angular momentum, but only for the hidden ##\lambda##.
 
  • #275
A. Neumaier said:
Suppose a beam is split into a superposition of two beams. At positions where the two beams are very far apart, a beam dump collapse is obtained if one destroys one of the resulting beams (by position measurements there) and makes measurements on the other one. This is a bilocal activity created by coordinated local interactions at two far apart places.

Such activities, together with a comparison of the joint measurement statistics at a later time,
are at the heart of all nonlocality experiments. It is not ''spooky action at a distance'' but ''spooky passion at a distance''.
What do you mean by "superposition of two beams"? If you talk about superposition you have to tell the basis, according to which the state ket is a superposition.

The beam dump is due to a local interaction between the particles in the dumped partial beam and the material they run into. With a lot of goodwill this is something like a position measurement, though nobody cares about the precise position where the beam is dumped ;-)). Then you do experiments with the other beam, which again involve only the usual local interactions of the particles in this beam with the various elements of the experiment (in the SGE the magnet and the particle detector; in the original experiment the glass plates on which the silver atoms were caught and then developed to be examined under a microscope afterwards). Of course, in principle the beam dump and the experiment with the other beam can be as far apart as you wish. This has nothing to do with spooky actions at a distance.

One should clearly distinguish between nonlocal interactions, which according to standard relativistic QFT (aka the Standard Model) do not exist, and correlations between far-distant parts of quantum systems described by entanglement. What you mean by "spooky passion at a distance" I can't say.
 
  • #276
vanhees71 said:
What do you mean by "superposition of two beams"? If you talk about superposition you have to tell the basis, according to which the state ket is a superposition.
I had done so in an earlier post to the same topic:
A. Neumaier said:
take the ensemble of prepared systems, each in the state given by a symmetric superposition of (spin-up, momentum-up) and (spin-down, momentum-down);
A. Neumaier said:
Such activities, together with a comparison of the joint measurement statistics at a later time, are at the heart of all nonlocality experiments. It is not ''spooky action at a distance'' but ''spooky passion at a distance''.
vanhees71 said:
What you mean by "spooky passion at a distance" I can't say.
It is a meaningful play on words. It means that something happens at a distance - namely that nature cooperates globally at long distance to ensure that the perfect nonclassical correlations predicted by quantum mechanics in certain experiments actually happen. But it cannot be controlled hence is a passive happening (a ''passion'') rather than an active one (an ''action''). In spite of (and consistent with) the locally induced interactions!
 
  • Like
Likes mattt
  • #277
vanhees71 said:
I see. The "potentiality interpretation" of the "wave function" (or more generally a quantum state) is due to Schrödinger.
It's important to note that this is interpretation neutral. Due to contextuality some of the quantities we measure have to arise during interaction with the measurement device and can only be taken as properties of the device-system pair. Thus the state preparation has not prepared an ensemble for these quantities.
 
  • #278
vanhees71 said:
Sure, and if you call a beam-dump a collapse, fine with me.
Situations like these are precisely what induced Heisenberg in 1927 to talk about state reduction (aka reduction of the state vector, aka collapse). That you don't like the commonly used words for it doesn't mean that you don't make use of the same concept.
 
  • #279
A. Neumaier said:
I had done so in an earlier post to the same topic:
It is a meaningful play on words. It means that something happens at a distance - namely that nature cooperates globally at long distance to ensure that the perfect nonclassical correlations predicted by quantum mechanics in certain experiments actually happen. But it cannot be controlled hence is a passive happening (a ''passion'') rather than an active one (an ''action''). In spite of (and consistent with) the locally induced interactions!
This is the gibberish I fight against. What do you mean by "nature cooperates globally"?

Here you have a very clear preparation procedure consisting of entirely local physics: A beam of silver atoms comes through a hole from an oven, which gives a beam of unpolarized particles. Then it runs through a magnetic field such that you get an entanglement between the measured spin component and the position. The entanglement refers to one and the same particle, and thus it's a "local property" of the single particle. Here you thus don't even have the long-distance correlations via entanglement as in the Bell experiments with two photons!

Of course you cannot control which spin state a single particle in the beam takes, that's the irreducible randomness of QT, but it allows you to prepare states with a definite spin component in the measured direction by selection of the wanted partial beam, thanks to the spin-component-position entanglement.

It's of course right that everything is consistent with local interactions. Otherwise we'd have to find a new theory instead of the Standard Model, which is difficult, because the Standard Model works better than wanted by the majority of particle physicists who are dissatisfied with it for various reasons.
 
  • #280
DarMM said:
It's important to note that this is interpretation neutral. Due to contextuality some of the quantities we measure have to arise during interaction with the measurement device and can only be taken as properties of the device-system pair. Thus the state preparation has not prepared an ensemble for these quantities.
Are you saying the protons in the LHC do not have very well determined momenta? This claim contradicts the very functioning of the entire device!
 
  • #281
A. Neumaier said:
Situations like these are precisely what induced Heisenberg in 1927 to talk about state reduction (aka reduction of the state vector, aka collapse). That you don't like the commonly used words for it doesn't mean that you don't make use of the same concept.
The important difference between my view and Heisenberg's is Heisenberg's claim that this is something outside of quantum theory. I have not seen a single convincing argument that the understanding of a "beam dump" needs any laws other than the usual quantum-theoretical laws about the interaction of particles with other particles forming the beam-dumping material.
 
  • #282
DarMM said:
QM mathematically violates classical probability, thus is not contained in it. It is contained in a classical theory with a certain kind of epistemic restriction such as Bohmian Mechanics at equilibrium. However note that in equilibrium we are violating Kolmogorov's axioms anyway due to how the epistemic restriction functions. I'll say more about this in a while as it links into the Spekkens model. Bohmian Mechanics and other hidden variable theories replicate much of QM by this restriction alone; you only need the nonlocality/retrocausality to violate CHSH or Bell inequalities.

Hmmm, does BM at equilibrium really break the Kolmogorov axioms? Why can't we just put it down to contextuality, which, if I understand correctly, just means that if you set up an experiment to measure position, then you cannot also measure momentum?

DarMM said:
So in no sense is QM contained in classical probability. Mathematically classical probability is a subset of quantum probability not the other way around. It's like saying curved spacetime is contained in flat spacetime because the latter might turn out to be the correct description of nature.

Hmmm, I do tend to think that curved spacetime is contained in flat spacetime.

DarMM said:
I think more so one should say that a truly minimal view is neutral to there being a deeper theory where classical probability theory holds.

Yes, that is what I mean, though I guess I'm not sure what the distinction is between that and saying that classical probability contains QM.

DarMM said:
However it would have to acknowledge that as far as we can tell now and operationally in labs preparations do not constitute ensembles. Regarding your previous statement:

Breaking the total law is not consistent with classical probability mathematically. It may be the case that there is a deeper theory which uses classical probability, but that is a separate statement. Similarly, a Newton-Cartan bundle is not consistent with a Lorentzian metric theory, but the deeper gravitational theory turned out to involve such.

Generally, my instinctive understanding of why QM does not prepare ensembles in the same sense as classical probability is that for classical probability, a mixed state is a unique combination of pure states, whereas in QM a mixed state is not a unique combination of pure states. Thus I would say that the ensemble in QM is underspecified in terms of what subensembles constitute it, not that the subensembles cannot exist until measurement.

DarMM said:
Also note that, due to contextuality, even in such a deeper theory a preparation does not constitute an ensemble for most observables. Although classical probability is restored, we cannot view our preparation as an ensemble for observables like angular momentum, but only for the hidden ##\lambda##.

Yes, I agree.
 
  • #283
vanhees71 said:
Are you saying the protons in the LHC do not have very well determined momenta? This claim contradicts the very functioning of the entire device!
No I'm not saying that. I'm just stating basic aspects of contextuality and quantum probability.

LHC beams have very well determined momenta for momentum measurements as shown by the tightness of the resulting momentum distribution.

However you cannot consider the beams as being an ensemble of different momenta independent of momentum measurements purely from the preparation.
 
  • #284
atyy said:
Hmmm, I do tend to think that curved spacetime is contained in flat spacetime
Well then it's just a different use of the word "contain".

I would have considered curved spaces to be mathematically more general than flat spaces thus not contained in flat spacetime. You're using it to mean "May ultimately be a physical limiting case in some sense of...".

Mathematically the theory of curved spaces is not contained in the theory of flat spaces, but physically the flat space theory could be correct. It's a separate notion.

atyy said:
Hmmm, does BM at equilibrium really break the Kolmogorov axioms?
Yes, there are restrictions in the lattice of events that you don't have in Kolmogorov's axioms.

atyy said:
Yes, that is what I mean, though I guess I'm not sure what the distinction is between that and saying that classical probability contains QM
It's as I said above.

Mathematically quantum probability is more general. However you are discussing physically how any given mathematical structure may only arise in a specific physical limit of another theory. Our two notions of "contain" were different.

In the case you're talking about we don't find out that classical probability theory contains quantum probability theory, that's impossible as the latter is more general. Rather we find that the correct hidden variable theory contained an epistemic special case isomorphic to a quantum probability theory.

My only problem is that under this definition in some sense any theory is contained by almost anything as it could be wrong and be a limit of something else entirely different.

My statement more considered QM as it is now where we seem to not have a common sample space for our observables from their operational statistics and thus we currently have no grounds to accept a preparation as constituting an ensemble.

atyy said:
Generally, my instinctive understanding of why QM does not prepare ensembles in the same sense as classical probability is that for classical probability, a mixed state is a unique combination of pure states, whereas in QM a mixed state is not a unique combination of pure states. Thus I would say that the ensemble in QM is underspecified in terms of what subensembles constitute it, not that the subensembles cannot exist until measurement.
I wouldn't say this, as even for a pure state we lack a common sample space, which prevents one from thinking of preparations as ensembles.
 
Last edited:
  • Like
Likes mattt and dextercioby
  • #285
vanhees71 said:
The important difference between my view and Heisenberg's is Heisenberg's claim that this is something outside of quantum theory. I have not seen a single convincing argument that the understanding of a "beam dump" needs any laws other than the usual quantum-theoretical laws about the interaction of particles with other particles forming the beam-dumping material.
Heisenberg didn't think of state reduction as being outside of quantum theory but (like most physicists since him) as being an aspect of it.
Werner Heisenberg (1927) said:
Jede Ortsbestimmung reduziert also das Wellenpaket wieder auf seine ursprüngliche Grösse. [Every determination of position thus reduces the wave packet again to its original size.]
Paul Dirac (1930) said:
The state of the system after the observation must be an eigenstate of [the operator corresponding to the observable] ##\alpha##, since the result of a measurement of ##\alpha## for this state must be a certainty.
 
  • Like
Likes dextercioby and Lord Jestocost
  • #286
A. Neumaier said:
It means that something happens at a distance - namely that nature cooperates globally at long distance to ensure that the perfect nonclassical correlations predicted by quantum mechanics in certain experiments actually happen. But it cannot be controlled hence is a passive happening (a ''passion'') rather than an active one (an ''action''). In spite of (and consistent with) the locally induced interactions!
vanhees71 said:
What do you mean by "nature cooperates globally".
Nature ensures in perfect correlation experiments with entangled photon pairs (the ''certain experiments'') that whenever Alice measures ##A_k## to get ##a_k## then Bob measures ##B_k## and also gets ##a_k##, while for Bob it seems that his results are random. In spite of (and consistent with) the locally induced interactions!
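For concreteness, here is a toy numerical check (my own sketch, assuming numpy; not part of the post) with the Bell state ##(|00\rangle+|11\rangle)/\sqrt{2}##: same-basis outcomes are perfectly correlated, while Bob's marginal alone is 50/50.

```python
# Perfect same-basis correlations for (|00> + |11>)/sqrt(2), with random-looking marginals.
import numpy as np

zero, one = np.array([1, 0], complex), np.array([0, 1], complex)
phi_plus = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

def joint_prob(a, b, state):
    """P(Alice finds a, Bob finds b) for measurements in the computational basis."""
    return abs(np.vdot(np.kron(a, b), state)) ** 2

for label, (a, b) in {"00": (zero, zero), "01": (zero, one),
                      "10": (one, zero), "11": (one, one)}.items():
    print(label, joint_prob(a, b, phi_plus))   # 00: 0.5, 01: 0.0, 10: 0.0, 11: 0.5

# Bob's marginal probability of outcome 0, ignoring Alice: 50/50, i.e. locally random.
print(sum(joint_prob(a, zero, phi_plus) for a in (zero, one)))   # 0.5
```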
 
  • #287
DarMM said:
No I'm not saying that. I'm just stating basic aspects of contextuality and quantum probability.

(a) LHC beams have very well determined momenta for momentum measurements as shown by the tightness of the resulting momentum distribution.

(b) However you cannot consider the beams as being an ensemble of different momenta independent of momentum measurements purely from the preparation.
For me (a) and (b) are contradicting each other since for me (a) is what I understand as an ensemble of protons with pretty well defined momenta. It's something I'd expect to be quite well described by a wave function sharply peaked in momentum space (or the appropriate formulation in QFT as the correspondingly smeared creation operator applied to the vacuum state).

How can the state concept of QT make physical sense, if (a) doesn't define an ensemble of protons with pretty well determined momentum?
 
  • #288
vanhees71 said:
For me [a] and [b] are contradicting each other since for me [a] is what I understand as an ensemble of protons with pretty well defined momenta. It's something I'd expect to be quite well described by a wave function sharply peaked in momentum space (or the appropriate formulation in QFT as the correspondingly smeared creation operator applied to the vacuum state).
That's just Kochen-Specker contextuality and quantum probability though. It's not my personal view or interpretation.

How do you view the Kochen-Specker theorem then? That might be an easier way to pinpoint the misunderstanding.
 
  • #289
Morbert said:
Is the protean nature of ensembles in QM a weakness in the minimalist ensemble interpretation?

My understanding so far: The theory of a given system is the double ##(H,\rho)##, the dynamics and the preparation. I.e. All physical content is contained in these terms. The triple ##(H,\rho,\sigma)## describes an ensemble in terms of possible outcomes of a measurement (or possible outcomes of a sequence of measurements), where ##\sigma## is the set of possibilities. The triple ##(H,\rho,\sigma')## describes an ensemble in terms of a different, incompatible set of measurement possibilities ##\sigma'##.

Could we say the physical content of the triples ##(H,\rho,\sigma)## and ##(H,\rho,\sigma')## is the same, and the choice of one over the other is merely a choice of appropriate descriptive terms for a measurement context? I.e. A choice of measurement context does not change any physical content of the preparation. It merely constrains the physicist to use a description appropriate for that context.

[edit] - Added some clarification.
Forgot to respond to this. Yes indeed, and it's essentially that constraint that prevents one from viewing it as an ensemble. Only after such a "choice" does QM give a well-defined statistical population.
 
Last edited:
  • Like
Likes Morbert
  • #290
vanhees71 said:
for me [a] is what I understand as an ensemble of protons with pretty well defined momenta.
But before you had said,
vanhees71 said:
In my example the LHC is a "preparation machine" for (unpolarized) proton beams with a quite well-defined momentum and energy. These beams are prepared such that they collide in specific points along the beam line. For me the preparation procedure delivers a well-defined ensemble of colliding proton beams.
An ensemble of proton beams (prepared moving blobs consisting of many protons) is not an ensemble of protons. Protons are never seen in the LHC experiments; what are prepared are the blobs, and what are measured are the traces of the collision products. In the spirit of the quote by Peres, this is what you have in the labs, not protons.
 
  • #291
I don't see why my notion of ensembles as the interpretation of quantum states should violate the KS theorem at all. My point is that the state by itself defines an ensemble, since I'm still free to measure whatever I can measure (restricting myself to precise PV measurements, I can always measure any set of compatible observables I like, independent of the state preparation).

E.g., if I prepare a proton beam polarized in the ##z## direction, I'm still free to measure any spin component of each proton in this beam I like. Accordingly, given this state I know for any spin component the probabilities for measuring one of the two values ##\pm \hbar/2##. So no matter which (quantum-theoretically sensible) measurement I perform on my "ensemble", I have well-defined probabilities. That's why I think it's too narrow to say that the ensemble is not given by the quantum state alone but only in the context of the measurement to be performed on it. I think that's also demonstrated by the correct prediction for the (probabilistic) outcome of "delayed-choice measurements".
 
  • #292
A. Neumaier said:
But before you had said,

An ensemble of proton beams (prepared moving blobs consisting of many protons) is not an ensemble of protons. Protons are never seen in the LHC experiments; what are prepared are the blobs, and what are measured are the traces of the collision products. In the spirit of the quote by Peres, this is what you have in the labs, not protons.
This is semantics. As with any object, a proton is defined by its properties. In fact, in QT objects are much more sharply defined than in classical physics, since any proton is completely indistinguishable from any other proton. A beam of protons consists of many protons forming an ensemble. You can of course argue whether a specific bunch in the LHC is an ensemble of independently prepared single protons.
 
  • #293
vanhees71 said:
for me [a] is what I understand as an ensemble of protons with pretty well defined momenta.
But before you had said that it is an ensemble of proton beams with pretty well defined momenta, not an ensemble of protons. Protons are never seen in the experiment; what are prepared are the bunches.
vanhees71 said:
This is semantics.
Yes, and semantics (meaning) counts in arguments about the meaning of concepts.
vanhees71 said:
A beam of protons consists of many protons forming an ensemble.
A single beam does not, according to your definition:
vanhees71 said:
An ensemble is a collection of independent equally prepared systems.
The protons in a single beam are neither independent nor a collection. They are not even distinguishable; one can point to none of them, only to the time-dependent multiparticle state formed by the whole bunch.
 
  • #294
Well, then how do you explain that the LHC measures outcomes precisely in accordance with the standard model assuming two protons in the initial state? Of course the answer is simply that the bunches are still dilute enough that FAPP you can assume that only a single pp collision occurs in each interaction of the bunches.
 
  • #295
vanhees71 said:
Well, then how do you explain that the LHC measures outcomes precisely in accordance with the standard model assuming two protons in the initial state? Of course the answer is simply that the bunches are still dilute enough that FAPP you can assume that only a single pp collision occurs in each interaction of the bunches.
I explain it by ''shut up and calculate''. In that mode all inquiries about the precise meaning of the concepts involved are meaningless, as the meaning is left to the discretion of everyone. On this level it is alright to equate ensemble with preparation, as such ''equations'' are only proxies for intuitive reasoning.

But if one starts to inquire into the meaning of the concepts used (as in many of these foundational threads), one finds them problematic and often inconsistent in how they are used.
 
  • Like
Likes Auto-Didact
  • #296
Well, one can also overproblematize the problems.
 
  • #297
vanhees71 said:
My point is that the state by itself defines an ensemble
But it simply doesn't as a mathematical fact. An ensemble is an approximate realisation of a sample space. In quantum probability the state alone does not give one a well defined sample space due to Kochen-Specker contextuality. That's all there is to it.

What would help me is understanding what you think the Kochen-Specker theorem implies. Then we could more easily discuss this since currently what you are saying seems in direct contradiction to it.
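To make the Kochen-Specker point concrete, here is a small numerical check (my own sketch, assuming numpy; not from the thread) of the Peres-Mermin magic square: nine two-qubit observables whose rows and columns each consist of mutually commuting operators with product ##+I##, except the last column whose product is ##-I##, so no context-independent assignment of ##\pm 1## outcomes can exist.

```python
# Verify the Peres-Mermin square: row products +I, +I, +I; column products +I, +I, -I.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]], complex)
Z = np.array([[1, 0], [0, -1]], complex)
kron = np.kron

square = [
    [kron(Z, I2), kron(I2, Z), kron(Z, Z)],
    [kron(I2, X), kron(X, I2), kron(X, X)],
    [kron(Z, X),  kron(X, Z),  kron(Y, Y)],
]

def product_of(ops):
    out = np.eye(4, dtype=complex)
    for op in ops:
        out = out @ op
    return out

def commuting(ops):
    return all(np.allclose(a @ b, b @ a) for a in ops for b in ops)

rows = square
cols = [[square[r][c] for r in range(3)] for c in range(3)]

for line in rows + cols:
    assert commuting(line)                      # each row/column is jointly measurable
    print(round(product_of(line)[0, 0].real))   # rows: 1 1 1, columns: 1 1 -1

# Any +/-1 value assignment would have to multiply to +1 over the three rows and
# to -1 over the three columns, yet both products run over the same nine values:
# contradiction, so outcomes exist only relative to a measurement context.
```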
 
  • #298
DarMM said:
Well then it's just a different use of the word "contain".

I would have considered curved spaces to be mathematically more general than flat spaces thus not contained in flat spacetime. You're using it to mean "May ultimately be a physical limiting case in some sense of...".

Mathematically the theory of curved spaces is not contained in the theory of flat spaces, but physically the flat space theory could be correct. It's a separate notion.
Yes, there are restrictions in the lattice of events that you don't have in Kolmogorov's axioms.
It's as I said above.

Mathematically quantum probability is more general. However you are discussing physically how any given mathematical structure may only arise in a specific physical limit of another theory. Our two notions of "contain" were different.

In the case you're talking about we don't find out that classical probability theory contains quantum probability theory, that's impossible as the latter is more general. Rather we find that the correct hidden variable theory contained an epistemic special case isomorphic to a quantum probability theory.

My only problem is that under this definition in some sense any theory is contained by almost anything as it could be wrong and be a limit of something else entirely different.

My statement more considered QM as it is now where we seem to not have a common sample space for our observables from their operational statistics and thus we currently have no grounds to accept a preparation as constituting an ensemble.

I guess it is the difference between saying that determinism is a special case of randomness (which is mathematically true, since one can use the delta measure), and saying that randomness is a special case of determinism (which is not mathematically true, but is physically true in the sense that we can consider statistical mechanics as arising from Newton's laws for many particles and ignorance of the exact state).

DarMM said:
I wouldn't say this, as even for a pure state we lack a common sample space, which prevents one from thinking of preparations as ensembles.

I guess, reading your replies to @vanhees71, that this is due to the KS theorem. But I thought the point of the KS theorem is that QM is contextual? What has contextuality got to do with an inability to consider preparations as ensembles?
 
  • Like
Likes vanhees71
  • #299
DarMM said:
QM mathematically violates classical probability, thus is not contained in it. It is contained in a classical theory with a certain kind of epistemic restriction such as Bohmian Mechanics at equilibrium.
This makes no sense. BM is conceptually a completely classical, deterministic theory, and quantum equilibrium states are well-defined probabilistic states defined on the configuration space.
DarMM said:
However note that in equilibrium we are violating Kolmogorov's axioms anyway due to how the epistemic restriction functions. I'll say more about this in a while as it links into the Spekkens model. Bohmian Mechanics and other hidden variable theories replicate much of QM by this restriction alone; you only need the nonlocality/retrocausality to violate CHSH or Bell inequalities.
All you need is, indeed, a preferred frame. Everything else (logic, probability theory, local configurations) is completely classical. Epistemic restrictions (as far as that means that we are unable to prepare some states but are restricted to prepare only states in quantum equilibrium) do not lead to any violations of Kolmogorovian probability.

By the way, the quantum theory fits into Kolmogorovian probability in a quite trivial way, which is described in Kochen, S., Specker, E.P. (1967). The Problem of Hidden Variables in Quantum Mechanics, J. Math. Mech. 17(1), 59-87 on page 63. They have combined this with some bad words about it, to motivate some additional restrictions (non-contextuality) for "good" hidden variables, which they prove are incompatible with quantum theory.
 
  • #300
Elias1960 said:
This makes no sense. BM is conceptually a completely classical, deterministic theory, and quantum equilibrium states are well-defined probabilistic states defined on the configuration space.
All true. I don't know how it affects what I say though.

Quantum theory generalizes classical probability theory. That observation is decades old by now.

Elias1960 said:
By the way, the quantum theory fits into Kolmogorovian probability in a quite trivial way, which is described in Kochen, S., Specker, E.P. (1967). The Problem of Hidden Variables in Quantum Mechanics, J. Math. Mech. 17(1), 59-87 on page 63. They have combined this with some bad words about it, to motivate some additional restrictions (non-contextuality) for "good" hidden variables, which they prove are incompatible with quantum theory
Can you show me where in that paper they show that quantum theory "trivially" fits into Kolmogorovian probability in a way that isn't essentially the sense @atyy and I have mentioned?
 