Is the Ensemble Interpretation Inconsistent with the PBR Theorem?

  • #51
vanhees71 said:
This quote I would not endorse.
So you disagree with Ballentine? o_O :wink:
 
  • #52
I fear so ;-). I have to read the old RMP paper again. The more one thinks about the foundations, the more one's opinions change over the years!
 
  • Like
Likes Demystifier
  • #53
Demystifier said:
Except Bohm, of course. :-p

This quote from the 1970 paper...

In contrast, the Statistical Interpretation considers a particle to always be at some position in space, each position being realized with relative frequency ##|\psi(r)|^2## in an ensemble of similarly prepared experiments.

...makes it seem to me like Ballentine himself is a Bohmian!
 
  • Like
  • Haha
Likes vanhees71, mattt and Demystifier
  • #54
vanhees71 said:
This quote I would not endorse. In QT an observable either has a determined value (due to preparation) or it has no determined value, because the system is prepared in a state where the probability for finding a value is non-zero for more than one possible outcome of the measurement.

For me the strength of the statistical interpretation was that it takes Born's rule seriously and states that the only meaning of the quantum state is the probabilities for the outcomes of measurements.

To assume that "a particle is always at some (definite) position in space" would imply that the position vector always has a determined value, no matter in which state the particle is prepared, but this, at least for me, is not what the quantum formalism tells us. It would immediately imply some hidden variables which determine this position, and thus that the "quantum probabilities" are only "subjective", i.e., due to incomplete knowledge about the state. Then you'd need an extension of QT to some deterministic theory (necessarily non-local, according to Bell and the empirical findings about Bell's inequality), which, however, nobody has ever been able to formulate.
Can you phrase all that using statistical interpretation language? You talk about the system/particle, its state and the observables as if everything refers to a single object, but the state is the state of the ensemble, not of just one representative of it and so on.
 
  • Like
Likes Demystifier
  • #55
PeterDonis said:
This quote from the 1970 paper...
...makes it seem to me like Ballentine himself is a Bohmian!
The difference is that Ballentine is agnostic about determinism. A particle can have a position x at each time t, but x(t) can be stochastic (instead of deterministic). An explicit example is the Nelson interpretation.
 
  • Like
Likes mattt
  • #56
PeterDonis said:
...makes it seem to me like Ballentine himself is a Bohmian!

Yes, both the book and the paper mention Einstein's interpretation, which is why it is plausible to read Ballentine as assuming hidden variables. However, such an assumption ought to be stated clearly, and the variables and their dynamics specified. Ballentine doesn't do that. And even if one preferred a hidden-variables interpretation, it would not justify his criticism of the standard interpretation, since if the hidden-variables view were right, it would still have to yield the standard Copenhagen-style interpretation as an effective theory. I dislike his criticism of Messiah: Messiah discusses the possibility of hidden variables, says they have not been ruled out but do not appear testable at the moment, and says he will present Copenhagen in the rest of the book. That is a broad-minded view that gives proper weight to Einstein's position.

Incidentally, the paper also contains another wrong criticism of the standard interpretation. It claims that position and momentum can be simultaneously measured, but in the counterexample he gives, the position and momentum are not canonically conjugate. So, like the book, the paper contains technically incorrect criticisms of standard physics. And although these might be incidental carelessness, the overall point he is making is a huge one: he is saying that textbook QM is wrong (as opposed to saying that standard textbooks are a little sloppy in their presentation). Incidentally, the error he makes shows he has not understood why the Bohmian and Copenhagen interpretations are consistent; in making the error, he does not use Bohmian trajectories.
 
Last edited:
  • Like
Likes physika and Demystifier
  • #57
vanhees71 said:
Ok, I'll try to understand it again. There must be some meaning in the claim that the quantum formalism contradicts what is described by probability distributions for ##\lambda##, which is not defined ;-)). I'm always a bit lost with such presumably "mathematical" proofs involving only vaguely defined quantities, which are then supposed to have some philosophical meaning, like the contradistinction between ontic and epistemic ;-)). It's strange to have vague definitions in mathematics and to prove something about these vague definitions ;-)).

I think one way you can think of it is that the state space of quantum mechanics is not a simplex, whereas the state space of classical probability is a simplex. The question is whether it is possible to construct a theory preserving all the predictions of QM (to some accuracy) that has an enlarged state space that is a simplex. [Though I guess this criterion is problematic for continuous variables, since I think the state space is not a simplex for classical continuous variables?]
 
  • Like
Likes vanhees71
  • #58
vanhees71 said:
I fear so ;-). I have to read the old RMP paper again. The more one thinks about the foundations, the more one's opinions change over the years!
Perhaps it's time that you write down a paper on your own interpretation! :wink:
 
  • Like
Likes physika
  • #59
atyy said:
I think one way you can think of it is that the state space of quantum mechanics is not a simplex, whereas the state space of classical probability is a simplex. The question is whether it is possible to construct a theory preserving all the predictions of QM (to some accuracy) that has an enlarged state space that is a simplex. [Though I guess this criterion is problematic for continuous variables, since I think the state space is not a simplex for classical continuous variables?]
What do you mean by "simplex"?
 
  • #60
Demystifier said:
What do you mean by "simplex"?

A shape with sharp points. Like Fig 1.2 in https://www.researchgate.net/publication/258239605_Geometry_of_Quantum_States.
 
  • Informative
  • Like
Likes vanhees71 and Demystifier
  • #61
atyy said:
...The paper claims that position and momentum can be simultaneously measured,...
Can you point to where the claim is, so that we can see the context.
 
  • #62
atyy said:
A shape with sharp points. Like Fig 1.2 in https://www.researchgate.net/publication/258239605_Geometry_of_Quantum_States.
So if I understood it correctly, the classical probability space is a simplex because the probabilities satisfy ##p_i\geq 0##, while the quantum state space is not a simplex because the coefficients of a superposition do not satisfy ##c_i\geq 0##, is that right?
 
  • #63
Demystifier said:
So if I understood it correctly, the classical probability space is a simplex because the probabilities satisfy ##p_i\geq 0##, while the quantum state space is not a simplex because the coefficients of a superposition do not satisfy ##c_i\geq 0##, is that right?

I'm not sure off the top of my head, but it corresponds to a classical uncertainty being a unique mix of "pure states" (the complete state that can be assigned to a single system), whereas quantum density matrices don't have a unique decomposition into pure states (i.e., a preferred basis must be picked out by measurement or decoherence or whatever).

Holevo has some discussion at the start of his book (I don't have it with me at the moment).
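As a concrete illustration of the non-uniqueness point (my own NumPy sketch, not from the thread): the maximally mixed qubit state can be written as an equal mixture of ##|0\rangle, |1\rangle## or as an equal mixture of ##|+\rangle, |-\rangle##, and both give the same density matrix.

```python
import numpy as np

# Two different pure-state ensembles for the same density matrix.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)       # |+>
ketm = np.array([1.0, -1.0]) / np.sqrt(2)      # |->

def mix(states, weights):
    """Convex mixture of the pure-state projectors |psi><psi|."""
    return sum(w * np.outer(s, s.conj()) for s, w in zip(states, weights))

rho_z = mix([ket0, ket1], [0.5, 0.5])          # equal mix in the z basis
rho_x = mix([ketp, ketm], [0.5, 0.5])          # equal mix in the x basis

print(np.allclose(rho_z, rho_x))  # True: one state, two pure-state decompositions
```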
 
  • Like
Likes vanhees71
  • #64
Demystifier said:
I think it would be very strange to deny that. Perhaps consistent-histories interpretation denies that (I'm not sure about that), but other interpretations don't.

I don't think all interpretations would model reality with an ontic state space (which as far as I can tell is a classical state space).

PBR: If we partition the ##\lambda## state space into discrete regions, each with a unique label ##L_i##, then ##L## is a property of the system since the state ##\lambda## uniquely selects a value ##L_i## of ##L##

Consistent Histories: If we perform a projective decomposition of the Hilbert space into orthogonal subspaces, each with a unique label ##L_i##, then ##L## is a property of the system, but a state ##|\psi\rangle## will not necessarily select a value ##L_i## unless one of the subspaces of the decomposition is spanned by ##|\psi\rangle##, so the state does not uniquely specify all properties.

Typical Copenhagen: If we perform a projective decomposition of the Hilbert space into orthogonal subspaces, each with a unique label ##L_i##, then there is a hypothetical measurement of ##L## that can be carried out, which will produce one of the outcomes ##L_i## with a probability uniquely specified by the state.
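The contrast above can be made concrete with a small sketch (my construction; the projectors and states are hypothetical examples): a projective decomposition assigns Born probabilities to the labels ##L_i##, and the state selects a definite label only when it lies inside one of the subspaces.

```python
import numpy as np

# Partition a qubit Hilbert space into two orthogonal subspaces, labels L_0, L_1.
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # orthogonal projectors
assert np.allclose(P[0] + P[1], np.eye(2))       # they resolve the identity

def label_probs(psi):
    """Born probabilities <psi|P_i|psi> for each label."""
    return [float(np.real(psi.conj() @ Pi @ psi)) for Pi in P]

psi_def = np.array([1.0, 0.0])                   # lies inside the L_0 subspace
psi_sup = np.array([1.0, 1.0]) / np.sqrt(2)      # spread across both subspaces

print(label_probs(psi_def))   # [1.0, 0.0]: the state selects the value L_0
print(label_probs(psi_sup))   # ~[0.5, 0.5]: no definite label, only probabilities
```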
 
Last edited:
  • #65
martinbn said:
Can you phrase all that using statistical interpretation language? You talk about the system/particle, its state and the observables as if everything refers to a single object, but the state is the state of the ensemble, not of just one representative of it and so on.
Maybe. I have my own version of a "minimal interpretation", so here I try to state my point of view very quickly:

State: Describes an equivalence class of preparation procedures on a single system. It is represented by the statistical operator ##\hat{\rho}## (positive semidefinite self-adjoint operator of trace 1).

Observable: Describes an equivalence class of measurement procedures. An observable is represented by a self-adjoint operator. The possible values when measured accurately are the eigenvalues of the operator.

Meaning of states: The meaning, and the only meaning, of the states is that the probabilities for the outcomes of an exact measurement of an observable are given by Born's rule, or equivalently that all moments of the distribution are given by ##\mathrm{Tr}(\hat{\rho} \hat{A}^n)## with ##n \in \mathbb{N}_0##.
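The equivalence of Born's rule and the moment formula can be checked numerically. A small sketch of my own, with a randomly generated observable and pure state:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2                       # a self-adjoint observable
v = rng.normal(size=3) + 1j * rng.normal(size=3)
psi = v / np.linalg.norm(v)
rho = np.outer(psi, psi.conj())                # statistical operator of a pure state

evals, evecs = np.linalg.eigh(A)
p = np.abs(evecs.conj().T @ psi)**2            # Born-rule probabilities

# Tr(rho A^n) equals the Born-rule average sum_a p(a) a^n for every n.
for n in range(4):
    trace_moment = np.trace(rho @ np.linalg.matrix_power(A, n)).real
    born_moment = float(np.sum(p * evals**n))
    assert np.isclose(trace_moment, born_moment)
print("Tr(rho A^n) matches the Born-rule moments for n = 0..3")
```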

In this way the states refer on the one hand to single objects (a preparation procedure for single objects). On the other hand they don't have much of a meaning for the single object and measurements on a single object, because the state preparation only determines the probabilities for the outcomes of measurements, and these probabilities can be empirically tested only on ensembles of equally prepared systems.

The dynamics of a closed system is described by the usual unitary time evolutions of statistical operators and observable operators (and thus their eigenvectors) after fixing an arbitrary picture of time evolution. The physical meaning is independent of the choice of the picture.
 
  • #66
Demystifier said:
Perhaps it's time that you write down a paper on your own interpretation! :wink:
Adding again one more interpretation? What should this be good for?
 
  • #67
vanhees71 said:
the states are referring on the one hand to single objects ... On the other hand they don't have much of a meaning for the single object
Don't you find it confusing?
 
  • #68
vanhees71 said:
Adding again one more interpretation? What should this be good for?
You would not need to respond to silly questions on this forum, you could just point to your paper. In that way you would have much more time for shut up and calculate. :oldbiggrin:
 
  • #69
Demystifier said:
think the ensemble interpretation with non-objective properties would be more-or-less equivalent to QBism
There is no full inside-agent theory yet, but the ensemble picture of subatomic physics seems to conceptually correspond to agents living in the classical background environment, where they can moreover "communicate" classically and form a consensus without "quantum weirdness" and without the risk of being "saturated" by information, i.e., the agents can make inferences and store information losslessly. These agents make inferences and predictions from a "safe" distance, so we can assume that they themselves are not affected by the back-reaction of the system they interact with.

So the connection between the ensemble view and the agent view makes sense.

/Fredrik
 
  • #70
martinbn said:
Can you point to where the claim is, so that we can see the context.

Section 3.2 and Fig. 3 of Ballentine's 1970 article. It is true that the usual uncertainty principle does not refer to simultaneous measurement, but that does not mean that a simultaneous measurement of canonically conjugate position and momentum is possible.
 
  • #71
atyy said:
[...]Incidentally, the paper also has another wrong criticism of the standard interpretation. The paper claims that position and momentum can be simultaneously measured, but in the counterexample he gives, the position and momentum are not canonically conjugate. [...]

Do you have /know of a proof of that?
 
  • #72
dextercioby said:
Do you have /know of a proof of that?

Using the position at the screen to measure the momentum gives the momentum at the slit. I don't have a detailed calculation at the moment, but the essential idea is that the far field distribution is the Fourier transform of the wave function at the slit, by analogy to the Fraunhofer approximation https://en.m.wikipedia.org/wiki/Fraunhofer_diffraction_equation

A simpler way to see it is that a sharp position measurement will be distributed according to the squared amplitude of the wave function in position coordinates, and a sharp momentum measurement according to the squared amplitude of the wave function in momentum coordinates, and these two distributions are not typically equal.
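That observation is easy to verify numerically. A sketch (my construction; the boosted Gaussian is a hypothetical example), using the ##1/\sqrt{2\pi}## Fourier convention:

```python
import numpy as np

# Compare |psi(x)|^2 with |phi(k)|^2 for one wave function: both are
# normalized distributions, but their shapes (here, widths) differ.
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2) * np.exp(2j * x)           # boosted Gaussian
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# discrete approximation of the Fourier transform (1/sqrt(2*pi) convention)
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
k = np.fft.fftshift(np.fft.fftfreq(N, d=dx) * 2 * np.pi)
dk = k[1] - k[0]

pos_dist = np.abs(psi)**2                      # position distribution
mom_dist = np.abs(phi)**2                      # momentum distribution

var_x = np.sum(x**2 * pos_dist) * dx - (np.sum(x * pos_dist) * dx)**2
var_k = np.sum(k**2 * mom_dist) * dk - (np.sum(k * mom_dist) * dk)**2
print(var_x, var_k)  # different widths: the two distributions are not the same
```

For this Gaussian the position variance is 1/4 while the momentum variance is 1, so the two distributions clearly disagree.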
 
  • #73
atyy said:
Using the position at the screen to measure the momentum gives the momentum at the slit.
It doesn't determine the momentum in the orthodox sense. It only determines momentum if one assumes a semi-classical picture in which the particle is a point-like object with straight (not Bohmian) trajectory. As Einstein would say, it is theory that determines what is measurable.
 
  • Like
Likes dextercioby
  • #74
Demystifier said:
It doesn't determine the momentum in the orthodox sense. It only determines momentum if one assumes a semi-classical picture in which the particle is a point-like object with straight (not Bohmian) trajectory. As Einstein would say, it is theory that determines what is measurable.

I think it does determine momentum in the orthodox sense, without assuming a semi-classical picture, if the position measurement is taken at infinity. Roughly, momentum corresponds to the Fourier transform, and in the far-field limit unitary Schroedinger evolution turns the wave function into its Fourier-transformed version; so if one measures position in the far-field limit, one is measuring the Fourier transform, which is momentum in the orthodox sense. In the far-field limit, the "quick-and-dirty" derivation assuming classical paths can then be rigorously justified without assuming any particle trajectories. Or at least that's what I remember, but I cannot find a derivation by a quick search at the moment.

This isn't for wave functions, but I think the maths should work out similarly.
https://en.wikipedia.org/wiki/Fraunhofer_diffraction_equation


I found a reference, Atom Optics by Adams, Siegel and Mlynek, which says: "The atomic momentum distribution, or in the Fraunhofer diffraction limit the far-field real space distribution (equation (35) in Section 2.4)".
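The far-field claim can be checked directly for free Schroedinger evolution. A numerical sketch (my own construction, with ##\hbar = m = 1## and a Gaussian packet as a hypothetical example): at large ##t## the position distribution approaches ##(1/t)|\phi(x/t)|^2##, a scaled copy of the momentum distribution.

```python
import numpy as np

# Propagate a packet with the free propagator in k-space, then compare the
# position distribution at time t with the rescaled momentum distribution.
N, L, t = 8192, 400.0, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi0 = np.exp(-x**2) * np.exp(2j * x)          # boosted Gaussian, mean momentum ~2
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

phi0 = np.fft.fft(psi0)                        # momentum amplitudes (FFT convention)
psi_t = np.fft.ifft(phi0 * np.exp(-1j * k**2 * t / 2))   # free evolution

# momentum distribution, normalized so that sum * dk = 1
mom_dist = np.abs(np.fft.fftshift(phi0))**2 * dx**2 / (2 * np.pi)
k_sorted = np.fft.fftshift(k)

far_field = np.interp(x / t, k_sorted, mom_dist) / t     # (1/t)|phi(x/t)|^2
print(np.max(np.abs(np.abs(psi_t)**2 - far_field)))      # small: the two agree
```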
 
Last edited:
  • #75
Thinking about this a bit more (and rereading the thread), I am not sure what exactly the problem was! In fact it seems to me that the ensemble interpretation as described by Ballentine is fine.
 
  • Like
Likes WernerQH and vanhees71
  • #76
atyy said:
I'm not sure off the top of my head, but in corresponds to a classical uncertainty being a unique mix of "pure states" (the complete state that can be assigned to a single system), whereas quantum density matrices don't have a unique decomposition into pure states (ie. preferred basis must be picked out by measurement or decoherence or whatever).

Holevo has some discussion at the start of his book (I'm don't have it with me at the moment).
The point is the uniqueness. For each point inside a simplex, there is a decomposition ##\sum_i p_i n^i##, where the ##n^i## are the vertices of the simplex. In 3D there are at most four vertices. If you start with more vertices in 3D, the convex hull is in general no longer a simplex, and the decomposition is no longer unique.

In classical mechanics, this would be a decomposition like ##\rho = \int \rho(p,q)\, \delta_{(p,q)}\, \mathrm{d}p\, \mathrm{d}q## into point distributions. It is unique.

In quantum theory you can have such decompositions into pure states, but they are not unique; there can be several. Each particular complete measurement of some ##\hat{a}## defines such a decomposition of an arbitrary state, ##\hat{\rho} = \sum_a p_a |\psi_a\rangle\langle\psi_a|##; for that operator it is unique. But measure different incompatible operators and you get different decompositions.
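A low-dimensional sketch of the uniqueness point (my construction): inside a simplex the convex weights are the unique solution of a linear system, while with more vertices than dimension plus one the same point admits several decompositions.

```python
import numpy as np

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # a 2-simplex (triangle)
pt = np.array([0.25, 0.25])

# Solve sum_i w_i v_i = pt together with sum_i w_i = 1: a square linear system,
# so the barycentric weights are unique.
A = np.vstack([tri.T, np.ones(3)])
b = np.append(pt, 1.0)
w = np.linalg.solve(A, b)
print(w)  # unique weights, here [0.5, 0.25, 0.25]

# A square (4 vertices in 2D) is not a simplex: its center admits
# several different convex decompositions.
sq = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
w1 = np.array([0.25, 0.25, 0.25, 0.25])      # equal mix of all four corners
w2 = np.array([0.5, 0.0, 0.5, 0.0])          # mix of two opposite corners
center = np.array([0.5, 0.5])
print(np.allclose(w1 @ sq, center), np.allclose(w2 @ sq, center))  # True True
```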
 
  • Like
Likes atyy
  • #77
vanhees71 said:
In this way the states are referring on the one hand to single objects (preparation procedure for single objects). On the other hand they don't have much of a meaning for the single object and measurements [...]

It is misleading to speak (and think) of quantum "objects" and their properties. Quantum theory is about the statistical correlations between preparation and measurement events. Never mind the anthropocentric terms "preparation" and "measurement"; the nuclear reactions in the interior of the sun proceed without any observers "preparing" and "measuring" them.
 
  • Like
Likes vanhees71
  • #78
Well, to talk about preparation and measurement, you must give a meaning to preparing and measuring observables on the single objects making up the ensemble. Otherwise, of course, I agree. Indeed, all the meaning of the quantum state is the statistical properties of an ensemble defined by an equivalence class of preparation procedures, and these can only be empirically tested on an ensemble of such equally prepared "quantum objects".
 
  • #79
vanhees71 said:
Well, to talk about preparation and measurement you must give a meaning to preparing of and measuring observables on the single objects making up the ensemble.

It's a burden that "measurement" still occupies a central position in QT. It involves classical apparatus, and is guaranteed to produce a definite result (qua postulate). But an apparatus is composed of atoms, and there must be a microscopic picture of what happens when e.g. the polarization of a photon is measured. It would clearly be desirable to keep the microscopic picture in view.

You think of ensembles of objects, and here we disagree. I prefer to think of ensembles of microscopic processes, or events, if you like. Measurement of the polarization of a photon boils down to the absorption of a photon. An absorption coefficient can be expressed in terms of a Fourier integral of the current density fluctuations (Kubo formula). It is a real number that directly represents the expected number of microscopic processes in a given patch of space-time. Polarization of a photon means a statistical correlation of the field components a quarter wavelength apart, and the detection probability is proportional to a similar correlation of the microscopic currents. QT allows us to compute these correlations.

My interpretation is a blend of the statistical and transactional interpretations, but with minimal ontology: neither particles nor waves, only events. The "confirmation waves" of the transactional interpretation are not physical, but merely part of the mathematical (Keldysh-) formalism to predict the probabilities of events.
 
  • #80
What are "events" if not "clicks of a detector"? Measurements always mean the interaction of the measured object with a macroscopic measurement device which enables an irreversibly stored measurement result. That such devices exist is (a) empirically clear, because quantum physicists in all labs successfully measure quantum objects (single photons, elementary particles, etc.), and (b) also follows from quantum statistics, according to which the macroscopic ("coarse-grained") observables indeed follow classical laws. It's also not true that we only measure quantum-mechanical expectation values. E.g., in the usual Stern-Gerlach experiment with unpolarized Ag atoms we don't measure 0 spin components but ##\pm \hbar/2## components for each individual silver atom. Of course, on average over a large ensemble we get 0.

I'm not familiar with the transactional interpretation. So I cannot comment on this.
 
  • #81
vanhees71 said:
What are "events" if not "clicks of a detector"?
...
It's also not true that we only measure quantum-mechanical expectation values.

We are speaking different languages, apparently. :-)

I was speaking of events as points in spacetime. Of course, these points combine to form larger patterns, like the click of a Geiger counter. Then there are huge numbers of atoms involved, and the composite event has only approximate coordinates in space and time.

The Stern-Gerlach apparatus separates different angular momentum states, and of course the formalism can predict the probabilities of each one. Evaluate the expectation value of a projection operator, if you wish. I don't perceive a limitation here.
 
  • Like
Likes vanhees71
  • #82
Formally speaking, an event is a spectral projection of an observable's operator, which can be e.g. logically equivalent to a detector click.
 
Last edited:
  • Like
Likes vanhees71
  • #83
vanhees71 said:
What are "events" if not "clicks of a detector"? Measurements always mean the interaction of the measured object with a macroscopic measurement device which enables an irreversibly stored measurement result. That such devices exist is (a) empirically clear, because quantum physicists in all labs successfully measure quantum objects (single photons, elementary particles, etc.) and (b) also follows from quantum statistics, according to which the macroscopic ("coarse grained") observables indeed follow classical laws.

As usual, (b) is not correct. As Landau and Lifshitz (Vol III, p3) say of quantum mechanics: it contains classical mechanics as a limiting case, yet at the same time it requires this limiting case for its own formulation.

vanhees71 said:
It's also not true that we only measure quantum-mechanical expectation values. E.g., in the usual Stern-Gerlach experiment with unpolarized Ag atoms we don't measure 0 spin components but ##\pm \hbar/2## components for each individual silver atom. Of course, on average over a large ensemble we get 0.

In typical cases the probability distribution can be recovered from the cumulants, which are expectation values. So formally the probability distribution and the expectation values provide the same information.
 
Last edited:
  • Like
Likes vanhees71
  • #84
Landau and Lifshitz is some decades old. There has been much progress in understanding the classical behavior of macroscopic systems from the point of view of quantum theory. Nevertheless, note that Landau and Lifshitz vol. X is the only general textbook containing a derivation of the (classical) transport equation from the Kadanoff-Baym equation (though it's not named so, which is understandable, because it's a Russian textbook ;-)).

Sure, if you know all cumulants, you know the complete probability distribution, but that's far more than just the expectation value. You need an ensemble to measure all (relevant) cumulants to reconstruct the probability distribution, i.e., you have to measure on the single systems of an ensemble with sufficient accuracy. The more cumulants you want to resolve, the better the resolution of the detector must be and the larger (and the more accurately prepared) the ensemble.
 
  • Like
Likes WernerQH
  • #85
vanhees71 said:
Landau and Lifshitz is some decades old. There's much progress in the understanding of the classical behavior of macroscopic systems from the point of view of quantum theory. Nevertheless note that Landau and Lifshitz vol. X is the only general textbook containing a derivation of the transport equation (classical) from the Kadanoff-Baym equation (though it's not named so, as is understandable, because it's a Russian textbook ;-)).

Although there has been progress, it doesn't solve the conceptual problems that Landau and Lifshitz were thinking of. You can still see this even in papers by Peres, where a classical apparatus is needed.
https://arxiv.org/abs/quant-ph/9906023
"Is it possible to maintain a strict quantum formalism and treat the intervening apparatus as a quantum mechanical system, without ever converting it to a classical description? We could then imagine not only sets of apparatuses spread throughout spacetime, but also truly delocalized apparatuses [26], akin to Schrodinger cats [27, 28], so that interventions would not be localized in spacetime as required in the present formalism. However, such a process would only be the creation of a correlation between two nonlocal quantum systems. This would not be a true measurement but rather a “premeasurement” [18]. A valid measuring apparatus must admit a classical description equivalent to its quantum description [22] and in particular it must have a positive Wigner function."

vanhees71 said:
Sure if you know all cumulants you know the complete probabilities/probability distribution, but that's far more than just the expectation values. You need an ensemble to measure all (relevant) cumulants to reconstruct the probability function, i.e., you have to measure on single systems of an ensemble with a sufficient accuracy. The resolution of the detector must be the better and the ensemble must be the larger (and the more accurately prepared) the more cumulants you want to resolve.

I mean that each cumulant can itself be considered an expectation value. So the mean is ##E(x)##, and the variance is ##E((x-E(x))^2)##. So the statement that quantum theory only predicts expectation values is formally the same as the statement that quantum theory only predicts probabilities of measurement outcomes (although in practice no one would measure all the cumulants in order to measure the probability distribution).
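A small sketch of this (my construction, with ##\hbar = 1## and a hypothetical preparation probability): for the two Stern-Gerlach outcomes ##\pm 1/2##, the first moment already fixes the whole distribution, and the second moment is ##1/4## for every preparation.

```python
import numpy as np

rng = np.random.default_rng(1)
p_plus = 0.7                                   # hypothetical preparation
s = rng.choice([0.5, -0.5], size=200_000, p=[p_plus, 1 - p_plus])

mean = s.mean()                                # first moment E(s)
p_plus_reconstructed = 0.5 + mean              # two-point support: mean fixes p(+)
second_moment = (s**2).mean()                  # E(s^2) = 1/4 for every preparation

print(round(p_plus_reconstructed, 2))          # ~0.7: distribution recovered
print(second_moment)                           # 0.25: outcomes are +-1/2, never 0
```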
 
Last edited:
  • #86
Of course a "classical apparatus" is needed, but this doesn't imply that it cannot be described by quantum statistics, leading to "classical behavior" without ad-hoc assumptions about a quantum-classical cut or a collapse, and to an explanation of why it shows definite measurement results when an observable is measured on a single quantum system, leading to the verification of the predicted probabilities for the outcomes of these measurements on an ensemble.

Again: you need to measure all cumulants with sufficient accuracy and resolution to reconstruct the probability distribution, not only the expectation value. Of course the cumulants are expectation values too, but you need to measure all of them, or at least some relevant subset, to get a sufficiently accurate reconstruction of the probability distribution.
 
