The quantum state cannot be interpreted statistically?

Summary
The discussion centers on the Pusey, Barrett, and Rudolph paper, which argues against the statistical interpretation of quantum states, claiming it is inconsistent with quantum theory's predictions. The authors suggest that quantum states must represent distinct physical properties of systems rather than merely statistical distributions. Participants express skepticism about the paper's assumptions and conclusions, particularly regarding the relationship between a system's properties and its quantum state. There is a call for deeper analysis and understanding of the paper's arguments, with some questioning the clarity and validity of the reasoning presented. The conversation highlights the ongoing debate about the interpretation of quantum mechanics and the implications of the paper's claims.
  • #121
Fredrik said:
I think it's obvious enough that it makes sense to think of phase space points as complete sets of properties. I don't think a proof or even a definition is required*. If you want a reason to think of them that way, then consider the fact that if you know one point on the curve that describes the particle's motion, you can use the force (:wink:) to find all the others. So if you know a point, you know everything.
But you don't know a point, you only know that the particle is in some box. In other words, if I replace classical mechanics the way it is normally described (a theory of impossible precision) with a theory that only talks about intervals, rather than points, do I not have an epistemic version? And here's the real kicker: how is such a theory not completely equivalent to classical mechanics? What's more, isn't the second theory the one we actually test, not the first one?

If the second version is the only version of classical mechanics that ever gets tested, then I claim the second version is the actual theory of classical mechanics, and the first one is just a make-believe version that we only use out of a kind of laziness to talk about the theory that we have actually checked. I like laziness as much as the next guy, but we should at least recognize it. (If we had, we would never have concluded that quantum mechanics was "unclassical"; we would have called it what it really is: "super-classical." It includes classical physics, and adds more complexity at smaller scales, inside the boxes that classical physics never tested.)
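The "intervals rather than points" picture can be made concrete with a toy sketch (my own illustrative example, not anything from the papers under discussion): evolve every point a phase-space box could contain under a known force, and let the theory's testable content be statements about the box, never about a single point.

```python
import numpy as np

# Toy "interval" version of classical mechanics: we never know a phase-space
# point, only a box.  Evolve every point the box could contain and report
# only box-level (epistemic) predictions.  All numbers are illustrative.
rng = np.random.default_rng(0)
m, F, t = 1.0, 2.0, 1.0              # mass, constant force, elapsed time

# The epistemic state: a box in phase space (position and momentum intervals).
x = rng.uniform(0.0, 1.0, 10_000)    # x in [0, 1]
p = rng.uniform(-0.5, 0.5, 10_000)   # p in [-0.5, 0.5]

# Exact evolution under a constant force: x + p t/m + F t^2 / 2m, p + F t.
x_t = x + p * t / m + 0.5 * F * t**2 / m
p_t = p + F * t

# The theory's testable content: where the box ends up, not any single point.
print(f"x(t) in [{x_t.min():.2f}, {x_t.max():.2f}]")
print(f"p(t) in [{p_t.min():.2f}, {p_t.max():.2f}]")
```

Every prediction here is a statement about intervals, yet the box evolves by exactly the same dynamics as the idealized points; that is the sense in which the interval theory is empirically equivalent to the point theory.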
As you know, it isn't possible to define everything, but more importantly, there are some things that we simply can't avoid treating as more fundamental than others. For example, the concept of "measuring devices" is more fundamental than any theory of physics, and the concept of natural numbers is more fundamental than even the formal language used to define the set theories that we use to give the term "natural number" a set-theoretic definition.
I'm with you up to here.
It seems reasonable to me to take "property" to be one of those things that we consider so fundamental that we don't need to define it.
The problem is not with using "properties" as conceptual devices; we do that all the time-- physics would be impotent without that ability. The issue is what it means when we invoke a conceptual device and call it a property. Does it mean that if we knew all the properties, we'd understand the system completely? That's the part I balk at: I see zero evidence of that, and I find it a complete departure from anything that physics has ever been in the past. I think the more we know about something, the deeper the mysteries about it become-- we never understand it completely; we understand what we didn't understand before and now don't understand something new. So much for properties!

So I ask the same question-- for an approximate theory to work well, why does this require that there be an exact theory underlying it? I think that is a bogus proposition, yet it seems to be the very first assumption of PBR. The crucial assumption is not that the concept of a property might be useful, it is that systems really have properties that determine outcomes. If we strip out that part of the proof, what does it prove now?
Right, but in this context, it's what we know about the ontic states. Like it or not, that seems to be how these guys are defining it.
Yes, and that is exactly what I think limits the generality of their proof. Let's go back to classical mechanics, and my point that it was never really a theory about points in phase space; it was always a theory about boxes in phase space (since that was all that was ever tested about it). If we had been more careful, and framed classical mechanics that way, then we might have had someone say "of course there really are ontic points inside those boxes, we only use boxes because of our epistemic limits in gathering information about those ontic points."

Indeed, that's what many people did say. Then along comes the hydrogen atom, and oops, those boxes are not boxes of ontic states at all. Why does this always seem to come as a surprise? The whole point of an epistemic treatment is to not pretend we know something we don't know-- like that epistemics is just a lack of information about ontics! If there was ever a lesson of quantum mechanics, it is that epistemics is something potentially much more general than just lack of information about ontics.
This is something that I find confusing. I'm tempted to say "none of it". Suppose that we consider all models that, for each measuring device M and each member λ of some set \Lambda, assign a probability P(k|λ,M) to each result k, to be "ontic". We have no way of knowing if the ontic states really represent properties, but that also means that nothing will go seriously wrong if we just pretend that we do.
It seems to me the key assumption is that the ontics decide what happens to the system, and the epistemics are just lack of information about the ontics. Could we not prove things about any theory that could be consistent with classical mechanics by making the same assumption, that inside any epistemic "box" in phase space there are ontic points that determine outcomes, such as when a hydrogen atom recombines? But quantum mechanics does not respect the ontic points of what people imagined classical mechanics was (but never demonstrated by experiment that it was), yet quantum mechanics does reproduce every experimental prediction that classical mechanics works for. Quantum mechanics is a mathematical structure "at least as good as classical mechanics."

Now, granted, quantum mechanics also makes different predictions at small scales. But that's my point-- I think the real value of the PBR theorem is that it might help us to figure out experiments to test quantum mechanics that quantum mechanics might not get right. If it does that, then it will be a truly valuable theorem. But I don't think it tells us anything about quantum mechanics, any more than proving theorems about ontic points inside boxes in phase space tells us anything about classical mechanics. Classical mechanics never was a theory about ontic points in phase space, it was always, demonstrably, a theory about epistemic boxes in phase space. This is also true of quantum mechanics, with different epistemics. Ultimately, I claim that all theories are built of epistemic primitives, and it is only a kind of laziness that allows us to imagine that any physics theory is ontic.
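For reference, the ontic-model framework being debated can be sketched in a few lines (the names and numbers here are mine, not from the PBR or HS papers): an ontic state λ fixes the outcome probabilities P(k|λ,M), an epistemic state is a distribution over λ set by the preparation, and predictions come from marginalizing over λ.

```python
import numpy as np

# Minimal sketch of an "ontic model" (illustrative numbers, my own naming):
# an ontic state lambda fixes P(k | lambda, M); a preparation only pins down
# a distribution mu(lambda); predictions marginalize over lambda.
n_lambda, n_outcomes = 4, 2

rng = np.random.default_rng(1)
# P(k | lambda, M) for one fixed measurement M: rows = lambda, cols = k.
P_k_given_lam = rng.dirichlet(np.ones(n_outcomes), size=n_lambda)

# Epistemic state: what a preparation procedure tells us about lambda.
mu = np.array([0.4, 0.3, 0.2, 0.1])

# The model's prediction: P(k) = sum_lambda mu(lambda) P(k | lambda, M).
P_k = mu @ P_k_given_lam
print(P_k, P_k.sum())
```

The contested question in the thread is only whether the λ layer must exist at all; the marginalization step itself is uncontroversial bookkeeping.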
I think that this is what the HS article does, because their first example of an ontic model (they may have used the term "hidden-variable theory" instead) simply defines \Lambda to be the set of Hilbert subspaces of the Hilbert space of QM.
Expressing quantum mechanics in terms of Hilbert spaces is certainly a useful way to go, just as expressing classical mechanics in terms of points in phase space was. If that is what we mean by quantum mechanics (and that is indeed how it gets defined in the textbooks), then it is definitively ontic, as you point out later. But does this mean that it has to be an ontic theory to work as well as it does? I say no, it should be easy to replace the Hilbert space with a more epistemic version that restricts the theory to what has actually been verified by experiment. Such a theory would be completely equivalent in terms of its experimental justification, but would be much more "honest" (and less lazy but also less parsimonious), because it would not pretend to be an ontic theory when only its epistemic character has actually been tested. It would serve just as well, in every way except parsimony, as the theory we call "quantum mechanics". But we like parsimony, so we use the ontic theory, and that's fine-- as long as we recognize that in choosing parsimony over demonstrability, we have entered into a kind of pretense that we know more than we actually do. Look where that got us in Descartes' era!
 
  • #122
Fredrik said:
From this point of view, it's a bit odd that ontic theories are allowed to make non-trivial probability assignments.
Yes, I'm a bit unclear on this issue as well. If a "property" can result in nothing but a statistical tendency, what you call a nontrivial probability, then what does it mean to have a property? I just don't see why quantum mechanics needs this element at all; quantum mechanics is about taking preparations and using them to calculate probabilities, and there just isn't any step that looks like "now convert the state into properties." The state itself is ontic in the "lazy" (yet official) version of quantum mechanics, but the state is all you need to make predictions. If you simply define the predictions as the properties, how can predictions that come from state vectors be the thing that leads back to the state vectors? Quantum mechanics is a theory about state vectors and operators, not about properties, so why even mention them at all when trying to understand quantum mechanics?

If we make the requirement that each ontic state must represent all the properties of the system, and leave the term "properties" undefined, then the PBR result can't be considered a theorem.
Yes, not defining properties is bothersome, and I feel it raises the spectre of circularity. If one says "you know what I mean by a property" and moves on, there is a danger that what I take them to mean is that a property is whatever it is that makes quantum mechanics work in experiments. Then when we note that state vectors are how quantum mechanics makes predictions, and we have assumed the predictions are right (to test what other theories are equivalent) and that what made the predictions right is the properties, then we have assumed that the means of making the predictions connects to the properties. Isn't that what is being claimed to be proven?
 
  • #123
Demystifier said:
I believe I have found a flaw in the paper.

In short, they try to show that there is no lambda satisfying certain properties. The problem is that the CRUCIAL property they assume is not even stated as being one of the properties, probably because they thought that property was "obvious". And that "obvious" property is today known as non-contextuality. Indeed, today it is well known that QM is NOT non-contextual, but long ago that was not known. Long ago, von Neumann found a "proof" that hidden variables (i.e., lambda) were impossible, but it was later realized that he had tacitly assumed non-contextuality, so today it is known that his theorem only shows that non-contextual hidden variables are impossible. It seems that essentially the same mistake made long ago by von Neumann is now being repeated by these guys here.

Let me explain what makes me arrive at that conclusion. They first talk about ONE system and try to prove that there is no adequate lambda for such a system. But to prove that, they actually consider the case of TWO such systems. Initially this is not a problem, because initially the two systems are independent (see Fig. 1). But at the measurement, the two systems are brought together (Fig. 1), so the assumption of independence is no longer justified. Indeed, the states in Eq. (1) are ENTANGLED states, which correspond to systems that are not independent. Even though the systems were independent before the measurement, they become dependent in a measurement. The properties of the system change with the measurement, which is, by definition, contextuality. And yet, the authors seem to tacitly (but erroneously) assume that the two systems should remain independent even at the measurement. In a contextual theory, the lambda at the measurement is NOT merely the collection of lambda_1 and lambda_2 before the measurement, which the authors don't seem to realize.
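To make the point about Eq. (1) concrete, here is a quick numerical check. The four measurement states below are my transcription of Eq. (1) from the PBR preprint (arXiv:1111.3328), so treat them as an assumption; the check confirms that they form an orthonormal basis of the two-system Hilbert space and that every one of them is entangled.

```python
import numpy as np

# Quick check of the claim about Eq. (1): the four measurement states in the
# PBR paper (as I transcribe them from the preprint, arXiv:1111.3328) are
# all entangled vectors in the two-system Hilbert space.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ketp = (ket0 + ket1) / np.sqrt(2)    # |+>
ketm = (ket0 - ket1) / np.sqrt(2)    # |->

kron = np.kron
xi = [
    (kron(ket0, ket1) + kron(ket1, ket0)) / np.sqrt(2),
    (kron(ket0, ketm) + kron(ket1, ketp)) / np.sqrt(2),
    (kron(ketp, ket1) + kron(ketm, ket0)) / np.sqrt(2),
    (kron(ketp, ketm) + kron(ketm, ketp)) / np.sqrt(2),
]

# They form an orthonormal measurement basis ...
G = np.array([[a @ b for b in xi] for a in xi])
print("orthonormal:", np.allclose(G, np.eye(4)))

# ... and each one is entangled: Schmidt rank 2, i.e. two nonzero singular
# values of the reshaped 2x2 coefficient matrix.
for v in xi:
    s = np.linalg.svd(v.reshape(2, 2), compute_uv=False)
    print("singular values:", np.round(s, 3))
```

Whether the entanglement of these measurement vectors licenses the contextuality objection above is exactly the point in dispute in the email exchange that follows.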

I had a brief exchange of e-mails with the authors of that paper. After it, I am now even more convinced that I am right and they are wrong. Here are some crucial parts of that exchange, so that you can draw your own conclusions:

> Prof. Barrett:
> Briefly, the vectors in Eq.(1) are entangled, yes but they don't represent
> the state of the system. They are the Hilbert space vectors which
> correspond to the four possible outcomes of the measurement.

Me (H.N.):
But in my view, the actual outcome of the measurement (i.e., one of those
in Eq. (1) ) DOES represent the state of the system.
Not the state before the measurement, but the state immediately after the
measurement. At the measurement the wave function "collapses",
either through a true von Neumann collapse, or through an effective
collapse as in the many-worlds interpretation or Bohmian interpretation.

.
.
.

> Prof. Barrett:
> The assumption is that the probabilities for the different outcomes of
> this procedure depend only on the physical properties of the systems at a
> time just before the procedure begins (along with the physical properties
> of the measuring device).

Me (H.N.):
Yes, I fully understand that if you take that assumption, you get
the conclusion you get. (In fact, that conclusion is not even
entirely new. For example, the Kochen-Specker theorem proves something
very similar.) But it is precisely that assumption that I don't
find justified. Any measurement involves an interaction, and any measurement
takes some time (during which decoherence occurs), so I don't think it is
justified to assume that the measurement does not
affect the probabilities for the different outcomes.
 
  • #124
In short, to make their results meaningful, the title of their paper should be changed to
"The quantum state cannot be interpreted non-contextually statistically"

But that is definitely not new!
 
  • #125
Demystifier said:
I had a brief exchange of e-mails with the authors of that paper. After it, I am now even more convinced that I am right and they are wrong. Here are some crucial parts of that exchange, so that you can draw your own conclusions:

Thanks for sharing!

Demystifier said:
Me (H.N.):
But in my view, the actual outcome of the measurement (i.e., one of those
in Eq. (1) ) DOES represent the state of the system.
Not the state before the measurement, but the state immediately after the
measurement. At the measurement the wave function "collapses",
either through a true von Neumann collapse, or through an effective
collapse as in the many-worlds interpretation or Bohmian interpretation.

.
.
.

> Prof. Barrett:
> The assumption is that the probabilities for the different outcomes of
> this procedure depend only on the physical properties of the systems at a
> time just before the procedure begins (along with the physical properties
> of the measuring device).

[bolding mine]

I could be wrong, but to me, it looks like you are talking about different things?

Prof. Barrett talks about "probabilities for the different outcomes" and you about "the actual outcome of the measurement". Those could never be the same thing, could they?

How could a definite measurement represent a superposition or entanglement? When the measurement is completed, these things are "gone"... aren’t they?
 
  • #126
Demystifier said:
In short, to make their results meaningful, the title of their paper should be changed to
"The quantum state cannot be interpreted non-contextually statistically"

But that is definitely not new!


Would that be compatible with Matt Leifer’s conclusions?
Conclusions
The PBR theorem rules out psi-epistemic models within the standard Bell framework for ontological models. The remaining options are to adopt psi-ontology, remain psi-epistemic and abandon realism, or remain psi-epistemic and abandon the Bell framework. [...]​
 
  • #127
Ken G said:
But you don't know a point, you only know that the particle is in some box.
It sounds like you're just saying that a preparation procedure doesn't uniquely identify an ontic state. So it corresponds to a probability distribution of states. This means that to get the best predictions with the best estimates of the margins of error, we should use the epistemic state defined by the preparation procedure to assign probabilities to measurement results.

Ken G said:
The issue is what does it mean when we invoke a conceptual device and call it a property. Does it mean that if we knew all the properties, we'd understand the system completely?
It occurred to me after I went to bed that one can interpret the definition of an "ontic model" as saying that to know "all the properties" is to have the information that determines the probabilities of all possible measurement results for all possible preparation procedures.

Ken G said:
So I ask the same question-- for an approximate theory to work well, why does this require that there be an exact theory underlying it?
I doubt that it's possible that there's no exact description of reality. I would expect the universe to be more chaotic if that were the case, too chaotic for us to exist. But I too have a problem with the idea that the ultimate description of reality is an ontic model. They are just too convenient. However, I don't think any of the articles we have discussed are assuming that the relevant ontic model is exactly right.

Ken G said:
The whole point of an epistemic treatment is to not pretend we know something we don't know-- like that epistemics is just a lack of information about ontics! If there was ever a lesson of quantum mechanics, it is that epistemics is something potentially much more general than just lack of information about ontics.
Agreed.

Ken G said:
Quantum mechanics is a mathematical structure "at least as good as classical mechanics."
This is clearer in the algebraic and quantum logic approaches to QM. They show that QM can be thought of as a generalization of probability theory that includes classical probability theory as a special case.
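A minimal illustration of that "special case" claim (my own toy example): embedding a classical probability distribution as a diagonal density matrix makes the Born-rule expectation Tr(ρA) reduce to the ordinary classical expectation.

```python
import numpy as np

# Sketch of "QM as generalized probability": a classical distribution is the
# special case of a diagonal density matrix, and the quantum expectation
# Tr(rho A) reduces to the ordinary classical expectation for diagonal A.
p = np.array([0.2, 0.5, 0.3])        # classical distribution over 3 outcomes
vals = np.array([1.0, 2.0, 4.0])     # values of some classical observable

rho = np.diag(p)                     # embed the distribution as a state
A = np.diag(vals)                    # embed the observable

quantum_expectation = np.trace(rho @ A)
classical_expectation = p @ vals
print(quantum_expectation, classical_expectation)   # identical
```

The generalization shows up only when ρ and A stop commuting, i.e. when off-diagonal terms appear; the diagonal sector is classical probability theory verbatim.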
 
  • #128
DevilsAvocado said:
I could be wrong, but to me, it looks like you are talking about different things?
The three dots between the two texts indicate that one is not a response to the other; they correspond to independent pieces of a dialog.
 
  • #129
Fredrik said:
It sounds like you're just saying that a preparation procedure doesn't uniquely identify an ontic state. So it corresponds to a probability distribution of states. This means that to get the best predictions with the best estimates of the margins of error, we should use the epistemic state defined by the preparation procedure to assign probabilities to measurement results.
Yes, and we should be aware that inside that "margin of error" might be something quite a bit more than just error and uncertainty-- there could be a whole new theory living down there, which we never dreamed of except for a few troubling questions about the theory we already had-- as was true with classical mechanics giving rise to quantum mechanics. That's why I don't understand why we should care which hidden variables theories could make all the same predictions as quantum mechanics-- what we actually want are hidden variables theories that make different predictions, we just want them to predict the same things in the arena that has been tested. That's why I think the real value of the PBR theorem will only be realized if it motivates experiments to look for cracks in quantum mechanical predictions. After all, isn't the wave function a "hidden variable" underlying classical mechanics?
It occurred to me after I went to bed that one can interpret the definition of an "ontic model" as saying that to know "all the properties" is to have the information that determines the probabilities of all possible measurement results for all possible preparation procedures.
Yes, it does seem to have some connection with a concept of "complete information." They seem to be saying, let's assume that such "complete information" is possible, and then ask if the wave function is one of the things that would appear as a primitive element of that complete information, something that is one of the ontic properties rather than something that emerges from the ontic properties but is not itself ontic. I'm not surprised that if there are such ontic properties, the wave function is one of them, but I just don't see why assuming there are ontic properties tells us something fundamental about quantum mechanics-- because quantum mechanics doesn't require that there be ontic properties, any more than classical mechanics did (remember, classical mechanics is still widely used, even now that we know it is not based on ontic properties at all). Theories are built top-down, not bottom-up, and they only penetrate so far. We only know everything from our theory on up, but never anything below our theory. Why does being a "realist" require ignoring everything that physics has ever demonstrably been and done, and pretending it was all about something below what physics has ever been or done?
I doubt that it's possible that there's no exact description of reality. I would expect the universe to be more chaotic if that was the case, too chaotic for us to exist. But I too have a problem with the idea that the ultimate description of reality is an ontic model. They are just too convenient.
What is the difference between an exact description and an ontic model? And aren't we the children of chaos? I have the opposite view-- I think that any universe that has an exact description is sterile and uninteresting, much like a mathematical system that cannot support a Gödel statement.

However, I don't think any of the articles we have discussed are assuming that the relevant ontic model is exactly right.
They don't assume the theory is exactly right, but they do assume that the outcomes are determined by ontic properties. They seem to start from the same perspective that you are stating-- that there has to be something, call it properties, that determines what happens (perhaps only statistically; what that means is an unclear issue), and can be expressed as a mathematical structure. That seems to be a key assumption-- the structure is what determines what happens. If the mathematical structure is only approximate, how can it determine what happens? It must be exact to claim that outcomes can be traced back to properties, even if only statistically exact, mustn't it?
 
  • #130
What is bothering me is that epistemic view where all scientific theories are, and will forever be, forbidden to make ontological claims... That's not science but a reversed dogmatism: we are sure that we will never know for sure... Note the paradox...
In fact, there are all sorts of scientific theories, some well founded and others a lot less so... Some of their results could be considered scientific facts and others not... We can discuss forever the theory of everything that explains the epiphany of the universe, but no one can seriously deny our actual knowledge of the structure of the atom, for example... Even if we don't understand how something such as a particle could be both a wave and a point-like object...
There is a serious misunderstanding of what science is: an experience of knowledge between a group of subjects and a structure of objects... How some of us conclude that we don't even study the objects but are only constructing theories (Hell, about what?) is a mystery to me... They urge science to give them a full understanding of the universe before giving it the right to make ontological claims... Which is not reasonable...
When something looks like an orange, smells like an orange, tastes like an orange and has the DNA of an orange... It is an orange...
 
  • #131
Demystifier said:
The three dots between the two texts indicate that one is not a response to the other; they correspond to independent pieces of a dialog.

Sure, but despite this little 'dot technicality', you two seem to be talking about completely different things. And it doesn’t get any better when you finish up by changing your initial standpoint:
the actual outcome of the measurement [...] *DOES represent* the state of the system

To:
Any measurement involves an interaction, and any measurement takes some time (during which decoherence occurs), so I don't think it is justified to assume that the measurement does not *affect the probabilities* for the different outcomes.

In this situation, the claim that Prof. Barrett repeated "the von Neumann mistake" doesn’t fully convince me.

(Shouldn’t a professor be aware of Bohm’s theory and Bell’s work? Sounds strange...)
 
  • #132
Ken G said:
What is the difference between an exact description and an ontic model?
An ontic model can make predictions that disagree with experiments. This would make it wrong (even if it's a good theory). An exact description* can't be wrong, but it's also not required to make any predictions. This would disqualify it from being called a theory.

*) Note that this was just a term I made up for that post.
 
  • #133
Fredrik said:
An ontic model can make predictions that disagree with experiments. This would make it wrong (even if it's a good theory). An exact description* can't be wrong, but it's also not required to make any predictions. This would disqualify it from being called a theory.

*) Note that this was just a term I made up for that post.

In the Jaynes paper (http://bayes.wustl.edu/etj/articles/prob.in.qm.pdf), cited by the above article, this is made quite clear in the section titled "How would QM be different" (p. 9):

For example if we expand ψ in the energy representation: ψ(x,t) = Σ_n a_n(t) u_n(x), the physical situation cannot be described merely as

"the system may be in state u_1(x) with probability p_1 = |a_1|^2; or it may be in state u_2(x) with probability p_2 = |a_2|^2, and we do not know which of these is the true state".
...
[Bohr] would never say (as some of his unperceptive disciples did) that |a_n|^2 is the probability that an atom is "in" the n-th state, which would be an unjustified ontological statement; rather, he would say that |a_n|^2 is the probability that, if we measure its energy, we shall find the value corresponding to the n-th state.


...
But notice that there is nothing conceptually disturbing in the statement that a vibrating bell is in a linear combination of two vibration modes with a definite relative phase; we just interpret the (mode amplitudes)^2 as energies, not probabilities. So it is the way we look at quantum theory, trying to interpret |ψ|^2 directly as a probability density, that is causing the difficulty.

The quoted passage explains the difference in how ψ is interpreted (ontic vs. epistemic): saying the atom is "in" the n-th state is the ontic interpretation, while saying |a_n|^2 is the probability of finding the corresponding energy on measurement is the epistemic interpretation.
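The epistemic reading Jaynes attributes to Bohr can be illustrated with a toy two-level expansion (illustrative numbers, mine): |a_n|^2 behaves as the probability of *finding* E_n on measurement, which a simulated ensemble of measurements reproduces, without ever having to say which state the atom "is in".

```python
import numpy as np

# Toy two-level expansion psi = a_1 u_1 + a_2 u_2 (illustrative numbers).
# Epistemic reading: |a_n|^2 is the probability that an energy measurement
# *finds* E_n, checked here against simulated measurement frequencies.
a = np.array([0.6, 0.8j])            # a_1, a_2 with |a_1|^2 + |a_2|^2 = 1
born = np.abs(a) ** 2                # Born-rule probabilities [0.36, 0.64]

rng = np.random.default_rng(2)
outcomes = rng.choice([0, 1], size=100_000, p=born)
freqs = np.bincount(outcomes) / outcomes.size
print("Born rule  :", born)
print("frequencies:", freqs)
```

Note that the relative phase of a_1 and a_2 drops out of these single-observable statistics entirely; it only matters for measurements in other bases, which is where the two readings start to pull apart.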
 
  • #134
Ken G brought up some interesting issues about why we should care about hidden variables theories which merely reproduce the predictions of QM. I certainly concur that such a theory, lacking any new empirical content, would lack scientific value as a new theory. That does not mean it would be without value, though. Even proving that such a theory is in principle possible, even lacking a specific theory, would be of some value, much as the no-go theorems themselves have a degree of value. Certainly finding cracks in the predictions of QM is the ultimate game changer, but there are whole classes of possibilities which extend the empirical content and/or the cohesiveness between QM and GR without invalidating any predictions QM is presently capable of. These would certainly extend the scientific value, with or without explicit cracks in the predictions of QM as presently formulated, well beyond simple equivalency. Now to the issues the PBR article raises wrt this.

Traditionally it has been taught that the statistical nature of QM has a fundamentally different character than the randomness associated with classical physics. Whereas in the latter case randomness was simply a product of trading detailed information about positions and momentums for mean values, in the QM case no such underlying ontic states have ever been found, even in principle, from which to successfully derive the empirical content of QM. Much less to wed QM and GR, provide equivalent empirical content in combination with new empirical content, or show any empirically invalid content in the present formulation. It was this fundamentally different nature of randomness associated with QM, distinct from classical randomness, that the PBR article took aim at. For those realists who interpret quantum randomness in a quasi-classical sense in their personal interpretations, the PBR article makes no explicit mention one way or the other. In effect, when the PBR article states "The quantum state cannot be interpreted statistically", it is equivalent to a claim stating: quantum randomness is not as fundamentally different from classical randomness as traditionally claimed. The PBR definition of "statistical" then both justifies and leaves untouched the definition of "statistical" as defined by at least some realists in the field.

It boils down to a distinction between a causally independent versus a causally dependent concept of randomness. It is unfortunate that the terminology for the prototypical distinction is given as quantum versus classical randomness. This unfortunate terminology allows an author's claim that quantum statistics has some limited classical characteristics to be misrepresented by a strawman argument, supplanting that author's use of the term "quantum statistics" with the definition of randomness that the prototypical term traditionally implies. Francis Bacon, anyone? The PBR article in effect is not refuting a statistical interpretation of QM in general; it is merely attempting to refute the prototype characterization of "statistical" that is traditionally implied by the term "quantum statistics", while using and denying that term purely within the context of that traditional (quantum) interpretation.

Consider a prototypical classical variable with a statistical character, such as temperature. Temperature is in fact a contextual variable. Given only a temperature, it is fundamentally impossible to derive a complete specification of the ontic positions and momentums resulting in that temperature. In fact the number of possible ontic states grows exponentially with the number of degrees of freedom that the system possesses. Yet from a complete ontic specification of a classical system it is quite trivial to derive the temperature. QM observables limit what is measurable even more. Suppose rather than measuring temperature on a scale we were limited to temperature measurements which could only determine whether a temperature was above or below some threshold, set in some ill-defined way by the choice of measuring instrument used. We would then be stuck with temperature observables that imply that temperature has an indeterminate value before measurement but a binary value of either |0> or |+> after measurement.

Of course in the classical regime we can make a coordinate choice that for most practical purposes provides an independent framework to track positions and momentums of the presumed ontic elements, along with Relativity to correct the incongruence in the more general case. Hence showing classical consistency between the presumed ontic states (such as positions and momentums) before and after measurements is trivial. Yet, given the malleability of spacetime as defined by GR, it is not unreasonable to presume that, IF nature has a fundamental ontic structure, the very definitions of position and momentum are dynamically generated in a manner not unlike temperature. How then do you define the position and momentum of a thing when the dynamics of that thing defines the very observables used to characterize it? This would entail that on a fine scale positions and momentums would fluctuate in the same way the temperature of a gas fluctuates on a sufficiently fine scale. Hence, on a fine scale, the position of an ontic element at rest with respect to an observer could in principle fluctuate wrt that observer as a result of the dynamic variances in an ensemble of ontic elements in the vicinity of that element.
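The temperature analogy can be sketched numerically (my own toy example, arbitrary units): many distinct ontic microstates yield the same coarse observable, and a threshold "instrument" coarsens that observable further to a single bit.

```python
import numpy as np

# Sketch of the temperature analogy (my numbers, arbitrary units): many
# distinct microstates (ontic velocity assignments) map to the same coarse
# observable, and a threshold "instrument" coarsens it further to one bit.
rng = np.random.default_rng(3)
k_B, m, T_true, N = 1.0, 1.0, 2.0, 100_000

# Two different ontic microstates drawn from the same thermal ensemble.
v1 = rng.normal(0.0, np.sqrt(k_B * T_true / m), size=(N, 3))
v2 = rng.normal(0.0, np.sqrt(k_B * T_true / m), size=(N, 3))

def temperature(v):
    # T from equipartition: <m v^2 / 2> = (3/2) k_B T per particle.
    return m * np.mean(np.sum(v**2, axis=1)) / (3.0 * k_B)

T1, T2 = temperature(v1), temperature(v2)
print(T1, T2)                        # same T, wildly different microstates

threshold = 1.5
print("above threshold:", T1 > threshold, T2 > threshold)  # one-bit outcome
```

The mapping from microstate to temperature is many-to-one and irreversible, which is the sense in which even this thoroughly classical observable fails to pin down the ontic state beneath it.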

This is not a claim or model meant to explain anything. It is merely an analogy intended to broaden the conceptual possibilities of what an observable represents, and to get past the perception that an ontic model entails a linear or reversibly derivable relation between observables and ontic substructures. As noted, the classical observable [temperature] is not sufficient to establish the ontic state that gives rise to it either. The debate on the ontic nature of quantum observables is much starker in GR and in phenomena like virtual particles and interaction-free measurements than it is in the QM formalism. For instance, it is often denied that a vacuum represents a "nothing" by spelling out all the virtual entities zipping in and out of existence within that "nothing". Sort of like affirming that which is being denied in the same breath. The claim that ψ has an ontic substructure is tantamount to the claim that a vacuum has an ontic substructure independent of the virtual particles in it. The irreversible failings of classical physics entail dropping the idea that any distinct observable be associated with any distinct singular ontic element, much as the temperature of a medium is not associated with any particular ontic element in that medium. Space, time, positions, and momenta included.

Perhaps there is no ontic substructure of this form to be described, even in principle. But to deny the possibility simply on the grounds that observables obviously don't possess classical Newtonian relationships with any presumed ontic elements undermines a whole world of scientific possibilities, which may or may not include the type of empirical updates Ken G wrote about. The ultimate scientific value will nonetheless depend explicitly on the empirical value provided, not the philosophical value. The PBR article at the very least raises the bar on the types of contextual constructs that those working on foundational issues can attempt to work with.
 
  • #135
Nice post my_wan, but I have to say that my take on this detail is the opposite of yours:
my_wan said:
In effect, when the PBR article states: "The quantum state cannot be interpreted statistically", it is equivalent to a claim stating: Quantum randomness is not as fundamentally different from classical randomness as traditionally claimed.
If state vectors had corresponded to (overlapping) probability distributions of ontic states of a theory in which the ontic states determine all measurement results (rather than just their probabilities), then quantum probabilities would have been exactly the same as classical probabilities.

If state vectors had corresponded to (overlapping) probability distributions of ontic states of a theory in which the ontic states determine the probabilities of all measurement results, then quantum probabilities are similar to but not exactly the same as classical probabilities.

The theorem rules out both of these options, since they are just different kinds of ψ-epistemic ontic models. So I would say that this just leaves us the possibility that quantum probabilities are very different from classical probabilities.
 
  • #136
Fredrik said:
If state vectors had corresponded to (overlapping) probability distributions of ontic states of a theory in which the ontic states determine all measurement results (rather than just their probabilities), then quantum probabilities would have been exactly the same as classical probabilities.

If state vectors had corresponded to (overlapping) probability distributions of ontic states of a theory in which the ontic states determine the probabilities of all measurement results, then quantum probabilities are similar to but not exactly the same as classical probabilities.
Here I equivocated, not due to a perceived variance in the symmetries associated with classical versus quantum probabilities, but due to a more general uncertainty about the type and nature of the presumed ontic variables associated with, or responsible for, ψ. Obviously they don't have precisely the same character as classical ontic variables, else quantum physics would be classical physics. Nor did the PBR article make such a claim, else rather than attempting to prove a theorem they would simply have defined such variables and derived QM from them. This is of course an issue independent of whether the probabilities themselves are fundamentally different in the quantum and classical regimes, something I am still searching for good criticisms of wrt the PBR article, my prejudices notwithstanding. Although I think the article makes a valid point, I think the strength or meaning of that point is more limited than I would like it to be, or than many people will likely try to make it out to be.

Fredrik said:
The theorem rules out both of these options, since they are just different kinds of ψ-epistemic ontic models. So I would say that this just leaves us the possibility that quantum probabilities are very different from classical probabilities.
Let's look at the notion of a ψ-epistemic ontic model in the context of the PBR article. In a prior post DevilsAvocado summed it up this way (note the qualifier: standard bell framework):
DevilsAvocado said:
epistemic state = state of knowledge
ontic state = state of reality

  1. ψ-epistemic: Wavefunctions are epistemic and there is some underlying ontic state.
  2. ψ-epistemic: Wavefunctions are epistemic, but there is no deeper underlying reality.
  3. ψ-ontic: Wavefunctions are ontic.
Conclusions
The PBR theorem rules out psi-epistemic models within the standard Bell framework for ontological models. The remaining options are to adopt psi-ontology, remain psi-epistemic and abandon realism, or remain psi-epistemic and abandon the Bell framework.[...]​

Reading Matt Leifer's blog, from which the above was pulled, would be useful in the context to come.

Now ask yourself whether temperature is a classical epistemic or ontic variable. Though it is the product of presumably ontic entities, it is not dependent on the state of any particular ontic entity, nor on any singular state of those entities as a whole. It is an epistemic state variable, in spite of having a very real existence. In this sense I would say it qualifies as "epistemic ontic", since it is an epistemic variable whose existence is contingent upon an underlying ontic group state. Momentum is another epistemic variable, since the self-referential momentum of any ontic entity (lacking internal dynamics) is precisely zero. That's the whole motivation behind relativity.

Ironically, viewed this way, by expecting QM to somehow conform to the ontic character of classical physics, we are using a prototypical epistemic variable [momentum] as a foundational variable to which the presumed ontic construct must conform, rather than the other way around, as is typical of epistemic variables. Epistemic variables only exist in contextual states between ontic variables. The foundational question is whether nature is defined by these epistemic variables all the way down, or whether the buck stops at some set of ontic entities somewhere down the hierarchy.

Now look at a quote from the PBR article:
If the quantum state is a physical property of the system (the first view), then either \lambda is identical with |\phi_0> or |\phi_1>, or \lambda consists of |\phi_0> or |\phi_1>, supplemented with values for additional variables not described by quantum theory. Either way, the quantum state is uniquely determined by \lambda.
Bolding added. Keep in mind in the following text that it said that the quantum state is uniquely determined by \lambda, and not necessarily that \lambda is uniquely determined by the quantum state.

In effect the bolded part explicitly allowed the possibility that ψ constitutes an epistemic variable, in the sense that temperature and momentum are epistemic variables, whereas the theorem only pertains to the character of \lambda. If ψ and \lambda were interchangeable in terms of what the theorem pertains to, there would have been no need to leave open the possibility that ψ may or may not be "supplemented with values for additional variables not described by quantum theory". Hence, wrt option 1 as posted by DevilsAvocado, "ψ-epistemic: Wavefunctions are epistemic and there is some underlying ontic state", the PBR article is moot. This particular form of ψ-epistemic, i.e., ψ-epistemic ontic, is in fact allowed but not required by the article's theorem.

So what specifically did the article's theorem take aim at? This I previously attempted to reframe as a causally independent versus a causally dependent concept of randomness, whereas the traditional prototypical language, which the article simply accepted as the de facto meaning without comment in spite of many realists being at odds with it, uses the terms quantum versus classical randomness. Hence the title claim: "cannot be interpreted statistically". This means statistically in the quantum prototype sense, not the classical prototype sense; more meaningfully, that the statistical character of QM cannot be interpreted as a causally independent form of randomness, just as classical randomness is not a causally independent form of randomness.

This is why I previously said that the theorem is much more limited in scope than some will try to make it out to be. This is also (more or less) why Matt Leifer, an epistemicist, does not have any real issues with the article, and even stated (out of context): "I regard this as the most important result in quantum foundations in the past couple of years". In context Leifer was quite careful not to overstate the scope of what the theorem actually entails. Ruling out "ψ-epistemic ontic" models, as opposed to purely ψ-epistemic models as defined by option 2 in DevilsAvocado's post, is not something the theorem has sufficient scope to do.​
 
  • #137
DevilsAvocado said:
Would that be compatible to Matt Leifer’s conclusions?
Thank you very much for that link. It has been very useful, and now I believe I understand the content of the PBR theorem much better. Here is my summary and conclusion, which I have written there:

In simple terms, the PBR theorem claims the following:
If the true reality “lambda” is known (whatever it is), then from this knowledge one can calculate the wave function.

However, it does not imply that the wave function itself is real. Let me use a classical analogy. Here “lambda” is the position of the point-particle. The analogue of the wave function is a box, say one of the four boxes drawn in one of Matt’s nice pictures. From the position of the particle you know exactly which one of the boxes contains the particle. And yet, it does not imply that the box is real. The box can be a purely imagined thing, useful as an epistemic tool to characterize the region in which the particle is positioned. It is something attributed to a single particle (not to a statistical ensemble), but it is still only an epistemic tool.
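The box analogy above can be sketched in a few lines (a toy construction of mine; the unit box width is an arbitrary choice):

```python
# lambda = the particle's exact position; the box index plays the
# role of the wave function in the analogy.
def box(position, width=1.0):
    # lambda uniquely determines the box...
    return int(position // width)

print(box(0.2))  # 0
print(box(2.7))  # 2

# ...but the box does not determine lambda: distinct positions share
# a box, so the box can remain a purely epistemic label.
print(box(2.1) == box(2.7))  # True
```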
 
  • #138
Demystifier said:
Thank you very much for that link. It has been very useful, and now I believe I understand the content of the PBR theorem much better. Here is my summary and conclusion, which I have written there:

In simple terms, the PBR theorem claims the following:
If the true reality “lambda” is known (whatever it is), then from this knowledge one can calculate the wave function.

However, it does not imply that the wave function itself is real. Let me use a classical analogy. Here “lambda” is the position of the point-particle. The analogue of the wave function is a box, say one of the four boxes drawn in one of Matt’s nice pictures. From the position of the particle you know exactly which one of the boxes contains the particle. And yet, it does not imply that the box is real. The box can be a purely imagined thing, useful as an epistemic tool to characterize the region in which the particle is positioned. It is something attributed to a single particle (not to a statistical ensemble), but it is still only an epistemic tool.

Thanks very much DM, this makes sense.

Do you understand why they are 'focusing' on zero probabilities?

"Finally, the argument so far uses the fact that quantum probabilities are sometimes exactly zero."

And in the first example (FIG 1) they are measuring NOT values:

[attached image: FIG 1 from the PBR article]


I had this "nutty guess" that they found a way to show that zero probabilities don't mean "nothing" (in terms of probabilities), but something in terms of an actual measurement resulting in 0...?? not(1)

Or is this just completely nuts... :blushing:


P.S. Credit for the link goes to bohm2.
 
  • #139
DevilsAvocado said:
Do you understand why they are 'focusing' on zero probabilities?
Yes. When the probability of something is 0 (or 1), then you know WITH CERTAINTY that the system does not (or does) have a certain property. But then you can ascribe this to a SINGLE system; you can say that this one single system does not (or does) have that property. You don't need a statistical ensemble of many systems to make this claim meaningful. In this sense, you can show that what you are talking about is something about a single system, not merely about a statistical ensemble. That is what their theorem claims for the quantum state.
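The role of exact zeros can be seen in a minimal Born-rule computation (standard textbook arithmetic, not taken from the paper; state vectors are plain amplitude lists):

```python
import math

# Born rule: p(outcome) = |<outcome|state>|^2
def inner(a, b):
    return sum(x.conjugate() * y for x, y in zip(a, b))

def prob(outcome, state):
    return abs(inner(outcome, state)) ** 2

ket0 = [1.0, 0.0]
ket1 = [0.0, 1.0]
ket_plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# Prepared in |0>, the outcome |1> has probability exactly zero:
# a certain fact about this single system.
print(prob(ket1, ket0))      # 0.0

# Prepared in |+>, the same outcome is genuinely probabilistic.
print(prob(ket1, ket_plus))  # ~0.5
```

Note also that |0> and |+> are non-orthogonal, which is why the PBR construction works with exactly this pair of states.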
 
  • #140
my_wan said:
Let's look at the notion of a ψ-epistemic ontic model in the context of the PBR article. In a prior post DevilsAvocado summed it up this way (note the qualifier: standard bell framework):
As you said, this is from Matt Leifer's blog. PBR doesn't seem to acknowledge option 2 at all. So I would describe their conclusion as "option 1 contradicts QM, and therefore experiments".

my_wan said:
Now look at a quote from the PBR article:

Bolding added. Keep in mind in the following text that it said that the quantum state is uniquely determined by \lambda, and not necessarily that \lambda is uniquely determined by the quantum state.

In effect the bolded part explicitly allowed the possibility that ψ constituted an epistemic variable, in the sense that temperature and momentum are epistemic variables, whereas the theorem only pertains to the character of \lambda.
The bolded part only says that a ψ-ontic ontic model (i.e. one that's not ψ-epistemic) may not be ψ-complete. (See the HS article for the terminology, but note that they used the term "hidden-variable theory" instead of "ontic model"). The statement "the quantum state is uniquely determined by \lambda" applies to ψ-ontic ontic models. The term "ψ-epistemic" is defined by the requirement that ψ is not uniquely determined by λ.

my_wan said:
Hence, wrt 1 as posted by DevilsAvocado: "ψ-epistemic: Wavefunctions are epistemic and there is some underlying ontic state", the PBR article is moot on. This particular form of ψ-epistemic, i.e., ψ-epistemic ontic, is in fact allowed but not required by the articles theorem.
I would say that option 1 is what they're ruling out. What you describe as "this particular form of ψ-epistemic" is (if I understand you correctly) what HS calls "ψ-supplemented". The ψ-supplemented ontic models are by definition not ψ-epistemic.
 
  • #141
I believe that this is an accurate summary of what the PBR theorem is saying:

Is it possible that quantum probabilities are classical probabilities in disguise? If the answer is yes, then there's a ψ-epistemic ontic model that assigns probability 0 or 1 to each possible measurement result. We could prove that the answer is "no" by proving that such a model can't reproduce the predictions of QM, but we will instead prove a stronger result: no ψ-epistemic ontic model at all can reproduce the predictions of QM.

This result implies the result we actually care about, that no ψ-epistemic ontic model that only assigns probabilities 0 and 1 can reproduce the predictions of QM. This tells us that quantum probabilities are not classical probabilities in disguise.
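The kind of model this summary targets can be sketched as a toy (my own construction, with made-up names): λ fixes every outcome, and two preparations overlap on some λ.

```python
import random

# Each lambda deterministically answers every measurement question
# (probabilities 0 or 1 at the ontic level).
ontic_answer = {
    "l1": {"Q": 0},
    "l2": {"Q": 1},
    "l3": {"Q": 1},
}

# Overlapping epistemic states: both preparations can produce "l2",
# which is what makes the model psi-epistemic.
preparation = {
    "psi_A": ["l1", "l2"],
    "psi_B": ["l2", "l3"],
}

def measure(psi, question, rng):
    lam = rng.choice(preparation[psi])   # ignorance of lambda
    return ontic_answer[lam][question]   # outcome fixed by lambda

# Apparent randomness comes from ignorance of lambda alone:
outcomes = {ontic_answer[lam]["Q"] for lam in preparation["psi_A"]}
print(outcomes)  # {0, 1}

print(measure("psi_A", "Q", random.Random(0)))  # 0 or 1, per draw
```

The PBR claim is that no model with this structure, however elaborate, reproduces the quantum predictions.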
 
  • #142
Demystifier said:
Yes. When the probability of something is 0 (or 1), then you know WITH CERTAINTY that the system does not (or does) have a certain property. But then you can ascribe this to a SINGLE system; you can say that this one single system does not (or does) have that property. You don't need a statistical ensemble of many systems to make this claim meaningful. In this sense, you can show that what you are talking about is something about a single system, not merely about a statistical ensemble. That is what their theorem claims for the quantum state.

Ah! Many thanks!
 
  • #143
About "ontic models"...

I took another look at the article by Harrigan and Spekkens. They use the term "hidden-variable theory" in the introduction, but when they get to the actual definition, they use the term "ontological model". I've been wondering whether there's any difference between what Leifer calls an "ontic model" and what I call a "theory of physics". I think the HS definition of "ontological model" answers that clearly. (I'm assuming that Leifer's "ontic models" are the same as HS's "ontological models".) The answer is right there in the first half of the first sentence of the definition:
An ontological model of operational quantum theory...
An ontological model isn't something independent, like a theory. We are only talking about ontological models for QM. The end of the definition also says explicitly that the probability assignments must be exactly the same as those of QM.

I'm going to use "ontological model" rather than "ontic model" from now on. I always felt awkward writing "ψ-ontic ontic model". The term "ψ-ontic ontological model for quantum mechanics" sounds better.
 
  • #144
In going back over Matt Leifer's blog it is obvious that my previous post is at odds with his take on the PBR article. In the blog Matt clearly singled out #1 as the target of the PBR theorem:
http://mattleifer.info/2011/11/20/can-the-quantum-state-be-interpreted-statistically/ said:
  1. Wavefunctions are epistemic and there is some underlying ontic state. Quantum mechanics is the statistical theory of these ontic states in analogy with Liouville mechanics.
  2. Wavefunctions are epistemic, but there is no deeper underlying reality.
  3. Wavefunctions are ontic (there may also be additional ontic degrees of freedom, which is an important distinction but not relevant to the present discussion).

Here is a reiteration of why I am at odds with that particular take. There are two main points outlined in the article to make the case, with a third to tie it to the results.

1) As the title itself indicates, the theorem took aim at the statistical character of QM. In this case what is termed "interpreted statistically" refers to a causally independent characterization of randomness, as is typical when referring to quantum, as opposed to classical, randomness.

2) Note, as in my previous post, that the article explicitly states that "the quantum state is uniquely determined by \lambda". This does not entail that \lambda is uniquely determined by ψ, much as the temperature of an isolated system is uniquely determined by the positions and momenta of its constituent elements, yet the temperature does not uniquely determine those positions and momenta.

3) The article states: "If the quantum state is statistical in nature (the second view), then a full specification of \lambda need not determine the quantum state uniquely." More on the central importance of this after I outline a classical analog of the contradiction it entails.

Now translate the contradiction to a classical medium: if the positions and momenta before and after an interaction (collision) were the result of pure (causally independent) randomness to some degree, then the interacting elements would run the risk of having a different total momentum after the interaction than they had before. In effect this entails a violation of conservation laws, which would obviously mean that a full specification of \lambda would not be sufficient to uniquely determine its state, as stated in 3) above. Rather than restricting the total momentum as in the classical analog, the QM version instead restricts certain quantum probabilities to zero in the event that certain incompatible properties are present.
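The classical analog in this paragraph can be written as a toy collision (my own sketch): the outcome is random, yet causally constrained so that total momentum is conserved.

```python
import random

def collide(p1, p2, rng):
    total = p1 + p2
    # Causally *dependent* randomness: the split is random, but it is
    # constrained so the post-collision momenta still sum to total.
    share = rng.uniform(0.0, 1.0)
    return share * total, (1.0 - share) * total

rng = random.Random(1)
q1, q2 = collide(3.0, -1.0, rng)
print(abs((q1 + q2) - 2.0) < 1e-9)  # True: conservation holds

# Causally *independent* randomness would draw q1 and q2 separately,
# with nothing to enforce q1 + q2 == p1 + p2.
```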

Thus the theorem says nothing about ψ-ontic versus ψ-epistemic ontic. It merely establishes that QM predictions entail that the randomness associated with ψ must be causally connected in some way that enforces restrictions on some properties (random outcomes) as a result of the system possessing certain other properties. If the variables were purely random, causally independent, then there would be no mechanism for restricting some properties as a result of the presence of others. Hence the causally indeterminate statistical interpretation, #2 in Matt Leifer's quote ("Wavefunctions are epistemic, but there is no deeper underlying reality"), is the only one of the 3 that is in the crosshairs.

So what makes this result so unique, if the classical analog indicates nothing more than the fact that conservation laws are valid? Because this result was obtained purely from the formal description of ψ, and pertains only to the statistical results described by ψ, without reference to any such conservation laws; a construct that is often considered a pure mathematical fiction lacking any causal mechanism for enforcing conservation laws.

Anyway, the PBR theorem does not say much about a bewildering number of ontologies, including those with emergent epistemic constructs embedded in them. The title of the article, which some took issue with, probably stated the scope of what the theorem entails better than all the opinions written about it, so long as you understand "statistical" in the sense used to imply causally independent randomness. It doesn't even address the issues that Demystifier brought up. The contextual variables Demystifier's argument hinges on fit perfectly within what is allowed by the theorem, so long as the contextual variables being posited are causally dependent variables. Nor does it put constraints on the nature of that causal dependency. The scope really is limited to saying that quantum randomness has causal restrictions defined by ψ alone.

Correction (clarification) from previous post #136:
Although in context it should be obvious, I said: "Epistemic variables only exist in contextual states between ontic variables." Although epistemic variables are generated by contextual states between other variables, epistemic variables can also be treated as ontic for the purpose of creating hierarchies of epistemic or contextual variables. Hence the claim that epistemic variables only exist in contextual states between ontic variables is false; they can also exist as contextual states between other epistemic variables. I think it should be obvious I intended such a meaning when read in context, but nonetheless that sentence was in fact wrong.

On Matt Leifer's blog, under the "results" section, the theory was restricted to allow only epistemic states with disjoint support, which is what the PBR article's results indicated was the required situation in QM. With this restriction it is said that the ontic state determines the epistemic state uniquely. The problem is that if a set of epistemic variables is partitioned off, they can logically be treated as if they were ontic variables, much as we routinely treat momentum as an ontic variable wrt a given coordinate choice, even though that coordinate choice is actually what determines the value of a given momentum. Hence, to say that the ontic state determines the quantum state uniquely does not strictly limit the presumed ontic state variables to variables that are fundamentally ontic; they may themselves be epistemic or contextual variables. Thus the most that can be said of the theorem is that it is possible to choose a variable set, ontic and/or epistemic, capable of modeling ψ. That is more than any specific model has yet achieved, which certainly does not lack importance in itself. So the significance of the theorem remains even if the ontic versus epistemic characterizations are mere abstractions in the formalism. This definitely extends the utility beyond what I was originally seeing, even if I had other reasons for holding the equivalent opinion.

I may have to rethink my whole take on this article.
 
  • #145
my_wan said:
1) As the title itself indicates the theorem took aim at the statistical character of QM. In this case what is termed "interpreted statistically" refers to a causally independent characterization of randomness, as is typical when referring to quantum, as opposed to classical, randomness.
The title and the abstract are extremely misleading. This line is the first clue about what they really mean:
We begin by describing more fully the difference between the two different views of the quantum state [11].​
Reference [11] is the HS article. PBR then go on to describe some of the details, and if you compare it to HS, it's clear that PBR are describing three kinds of ontological models for QM: ψ-complete, ψ-supplemented and ψ-epistemic. The assumption they make in order to derive a contradiction is that the criterion that defines the ψ-epistemic class is satisfied. So I think it's clear that what they're attempting to disprove is that there's a ψ-epistemic ontological model for QM. This is option 1 on Matt Leifer's list.

my_wan said:
2) Note, as in my previous post, that the article explicitly states that "the quantum state is uniquely determined by \lambda".
Right, but the assumption that they disprove by deriving a contradiction from it is that the quantum state is not uniquely determined by λ. That's what defines the ψ-epistemic class.
 
  • #146
Fredrik said:
The term "ψ-epistemic" is defined by the requirement that ψ is not uniquely determined by λ.

Maybe I have something backwards here. If we use temperature as a proxy for ψ and phase space as a proxy for λ, then in that context ψ is uniquely determined by λ but λ is not uniquely determined by ψ.

Then you have the quote from the article:
If the quantum state is statistical in nature (the second view), then a full specification of λ need not determine the quantum state uniquely.
Whereas the results claim to invalidate "the second view", hence λ uniquely determines the quantum state, just as outlined in the temperature/phase-space analogy. Hence we are using incompatible definitions of epistemic.

My question is: why would "ψ-epistemic" be limited to models in which ψ is not uniquely determined by λ? Epistemic refers to a "state of knowledge" which ostensibly does not correspond to a complete specification of the system under consideration, only an approximation. Specifying that a "ψ-epistemic" state is not uniquely determined by λ is tantamount to claiming that completing your knowledge with a full specification, via λ, of the actual state giving rise to the epistemic state does not complete your knowledge of the epistemic state. Put more simply: obtaining complete information does not complete your available information. There might be circumstances under which this holds, but it makes no sense to me to restrict "epistemic", an ostensibly limited "state of knowledge" or approximation, only to situations in which even a complete specification cannot, even in principle, reduce the limitations on your "state of knowledge".

To illustrate why, consider a prototypical epistemic variable, a classical probability. Now suppose we build a machine with enough restrictions on its degrees of freedom that a complete state specification at one point gives us a complete state progression into the future, so long as the machine remains effectively isolated. Would reformulating this information in terms of probabilistic states at some undefined random point in the future mean that this probabilistic state does not constitute an epistemic state of knowledge? No.

Hence the state specification corresponding to a statistical approximation is an epistemic state of knowledge irrespective of the degree to which that approximation can in principle be made exact given a complete specification defined by λ. It doesn't require that λ provide a unique determination of ψ, but an epistemic variable ψ is not invalidated as epistemic simply on the grounds that, given λ, a unique determination of ψ is possible.
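The machine example above can be sketched directly (a toy of mine: a four-state cyclic "machine"):

```python
from fractions import Fraction

# Fully deterministic evolution: the state cycles through 0..3.
def state_at(t, start=0, period=4):
    return (start + t) % period

# With a complete specification, the state at any given t is certain.
print(state_at(10))  # 2

# If the observation time is unknown (uniform over one period), the
# honest description is a probability distribution: an epistemic
# state of knowledge, even though the dynamics is exact.
dist = {s: Fraction(1, 4) for s in range(4)}
print(dist[state_at(10)])  # 1/4
```

The distribution is epistemic in the ordinary sense even though a complete λ (here, `start` and `t`) determines everything, which is the point being made against the restricted definition.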

From where did this definition you are using come?
 
  • #147
my_wan said:
My question is why would "ψ-epistemic" be limited to models in which ψ is not uniquely determined by λ?
See post #94.

my_wan said:
From where did this definition you are using come?
I got it from the Harrigan & Spekkens article. It's also covered in Matt Leifer's blog. Leifer references HS a couple of times, so maybe he got the definitions from there, but he also said that the terminology has been more or less the same since Bell (or something to that effect).
 
  • #148
Fredrik said:
The title and the abstract are extremely misleading. This line is the first clue about what they really mean:
We begin by describing more fully the difference between the two different views of the quantum state [11].[...]

Yes, you made me aware of that reference within the PBR article previously, so I cross-compared. Yet I couldn't find any indication that the PBR article conformed to any standards as defined in the HS article. In fact the only mention of any variation of the term "epistemic" in the PBR article occurs in a quotation of Jaynes, and there is no occurrence of the term "ontic", or any variation thereof, anywhere in the document. Cross-comparing the HS article back to the PBR article shows that the terms used in the PBR article had no corresponding terms in the HS article with which to imply any adherence to the definitional standards used there. Hence the definitional standards of HS are moot wrt deciphering the content of the PBR article. No common terminology whatsoever.
 
  • #149
Just look at what PBR are saying a few lines after the reference to HS. The first quote is from the end of page 1, and the second is from the beginning of page 2.
If the quantum state is a physical property of the system (the first view), then either \lambda is identical with |\phi_0\rangle or |\phi_1\rangle, or \lambda consists of |\phi_0\rangle or |\phi_1\rangle, supplemented with values for additional variables not described by quantum theory. Either way, the quantum state is uniquely determined by \lambda.​
This makes it very clear that PBR defines "the first view" to be what HS calls a ψ-ontic ontological model for QM. The "either-or" statement is clearly describing the distinction HS makes between ψ-complete and ψ-supplemented ontological models for QM. (A ψ-ontic ontological model for QM is said to be ψ-supplemented if it's not ψ-complete).

If the quantum state is statistical in nature (the second view), then a full specification of \lambda need not determine the quantum state uniquely.​
This makes it very clear that PBR defines "the second view" as what HS calls a ψ-epistemic ontological model for QM. This is very strongly supported by the fact that the article claims to be proving that state vectors can't be interpreted statistically, and the fact that the proof starts by assuming the exact thing that defines the term "ψ-epistemic".

And again, these things were said just a few lines after they said this:
We begin by describing more fully the difference between the two different views of the quantum state [11].​
What could that mean if not "this is a good time to read HS, because we are using their classification to define the two views"?
 
  • #150
Fredrik said:
And again, these things were said just a few lines after they said this:
We begin by describing more fully the difference between the two different views of the quantum state [11].​
What could that mean if not "this is a good time to read HS, because we are using their classification to define the two views"?

Might it mean Einstein's original two views about the nature of the wave function? See pp. 194-195, with direct Einstein quotes, in particular. All of Chapter 7 is pretty interesting. Maybe that is why this theorem, if accurate, rules out Einstein's arguments? I'm not sure.

http://www.tcm.phy.cam.ac.uk/~mdt26/local_papers/valentini.pdf
 
