The quantum state cannot be interpreted statistically?

The discussion centers on the Pusey, Barrett, and Rudolph paper, which argues against the statistical interpretation of quantum states, claiming it is inconsistent with quantum theory's predictions. The authors suggest that quantum states must represent distinct physical properties of systems rather than merely statistical distributions. Participants express skepticism about the paper's assumptions and conclusions, particularly regarding the relationship between a system's properties and its quantum state. There is a call for deeper analysis and understanding of the paper's arguments, with some questioning the clarity and validity of the reasoning presented. The conversation highlights the ongoing debate about the interpretation of quantum mechanics and the implications of the paper's claims.
  • #91
DevilsAvocado said:
Many thanks. Finally it’s possible to get a chance to understand what this is all about:
epistemic state = state of knowledge
ontic state = state of reality


  1. ψ-epistemic: Wavefunctions are epistemic and there is some underlying ontic state.

  2. ψ-epistemic: Wavefunctions are epistemic, but there is no deeper underlying reality.

  3. ψ-ontic: Wavefunctions are ontic.


Many realists have trouble understanding the purely epistemic stance. As Ghirardi writes in discussing Bell's view:

Bell has considered this position and he has made clear that he was inclined to reject any reference to information unless one would, first of all, answer to the following basic questions: Whose information?, Information about what?

So if one takes that pure epistemic/instrumentalist stance it seems to me one is almost forced to treat QT as "a science of meter readings". That view seems unattractive to me. It has the same stench/smell that held back progress in the cognitive sciences (e.g. behaviourism). But then, I could be mistaken?

But if one treats the wave function as a real "field"-like entity it is very much different than the typical fields we are accustomed to. The wave function evolves in 3N-dimensional configuration space, there's the contextuality/non-separability also and stuff like that make it a very strange kind of "causal" agent. If one takes the Bohmian perspective (at least one Bohmian version), how do the 2 (pilot wave and particle) "interact"? It can't be via the usual contact-mechanical stuff we are accustomed to because of the non-locality that is required in any realist interpretation.

http://arxiv.org/PS_cache/arxiv/pdf/0904/0904.0958v1.pdf

Furthermore, if one wishes to scrap Bohm's dualistic ontology but remain a realist so that the wave function is everything, then there's another problem:

Since the proposal is to take the wave function to represent physical objects, it seems natural to take configuration space as the true physical space. But clearly, we do not seem to live in configuration space. Rather, it seems obvious to us that we live in 3 dimensions. Therefore, a proponent of this view has to provide an account of why it seems as if we live in a 3-dimensional space even though we do not. Connected to that problem, we should explain how to "recover the appearances" of macroscopic objects in terms of the wave function.

http://www.niu.edu/~vallori/AlloriWfoPaper-Jul19.pdf
 
  • #92
my_wan said:
@Fredrik
As much as I have learned to respect and often concur with your input here, I was strongly at odds with your earlier take in this thread. Though I didn't know how to properly articulate it without moving well off topic, so I did the best I could with generalities. However, it seems you did comprehend quite well :cool:
There were a few things that I failed to understand, but I think I got the main point right: What they are attempting to disprove isn't what people who claim to prefer a statistical view actually believe in.

I must admit that I had a rather strong emotional reaction when I read the title and the abstract. They made me expect a bunch of crackpot nonsense, and I think this made it harder for me to understand some of the details correctly. For example, when they got to the condition that defines the ψ-epistemic theories, I thought they were saying that this was implied by the statistical view, but all they did was to consider a definition that makes an idea precise.

I also had no idea that there is a definition that makes that idea precise. This is why I said that the argument doesn't even look like a theorem. At the time, I thought about saying that a person who calls this a theorem doesn't know what a theorem is, but I decided that this was too strong a statement about something that I knew that I didn't fully understand. :smile:

I still think the title and the abstract make the article look like crackpot nonsense, so I'm surprised that it didn't get dismissed as such by more people. I still don't fully understand the argument in the article, but now at least it looks like a theorem and a proof.

my_wan said:
Thanks for the link (http://arxiv.org/abs/0706.2661), it does make many of the issues I struggle with a lot clearer. It fails to fully articulate a distinction between ontic locality versus epistemic locality, which I find pertinent, but it was as clear an articulation of the basic issues as I have ever seen.
I read their definition of locality, but I didn't understand it. I'm going to have to give it another try later, because it's something I've always felt needs a definition.

my_wan said:
However, your characterization of the PBR article as anti ψ-epistemic, though not explicitly wrong, is more nuanced than you seemed to imply when you noted the comparison with the HS article.
Matt Leifer's blog brought up a few nuances that are absent both from my posts and the PBR article (like how there could be a hidden-variable theory where properties are relative rather than objective). But I don't see how PBR can be interpreted as anything but an argument against what HS called ψ-epistemic theories. Note that when PBR said
We begin by describing more fully the difference between the two different views of the quantum state [11].
reference 11 is HS. (I didn't realize this until later).

my_wan said:
A clue to this may be in your post #78 when you noted an inability to make sense of Spekkens view unless it was somehow related to many worlds.
This was a quote from Matt Leifer's comments to Scott Aaronson's blog post. But I have actually had similar thoughts (about how relational stuff seems to be MWI ideas in disguise), and even mentioned them in the forum a couple of times. I have no idea what Spekkens' toy model is about though. But I'm probably going to take some time to read some of the articles that Leifer is referencing soon.

my_wan said:
When the PBR article argues that the quantum state cannot be interpreted "statistically" it does not explicitly imply a one to one correspondence between |ψ|^2 and an ontic specification of ψ. Only that ψ refers to an actual ontic construct in a manner that may or may not involve a ψ-complete specification, at least as defined by the HS article to qualify as ψ-complete.
I understood this, but maybe I typed it up wrong. :smile:

my_wan said:
I would be interested in a discussion about Spekkens views, particularly the concept of relational degrees of freedom, (lack of) properties in isolation, and relativistic (emergent) properties in general. It may help clear up some issues with Spekkens views. Some familiarity with Relational QM (RQM) would be useful, but would almost certainly exceed the scope of this thread. Personally I can't see any way to escape the non-realist views without an understanding of RQM or related concepts.
Sounds like a good topic for another thread. (But I have spent a lot of time on this PBR stuff the past few days, so I'm somewhat reluctant to get into a long discussion about a new topic).
 
  • #93
DevilsAvocado said:
What am I missing?? A local hidden-variable theory that can reproduce the predictions of QM...?

This has been quite dead for awhile, hasn’t it?? :bugeye::bugeye::bugeye:
Yes, but you're probably thinking that it's been dead since 1963 (± a few) when Bell's theorem was published, but HS proves it using two of Einstein's arguments, from 1927 and 1935.

The reason why that result was worth mentioning is that the PBR theorem is a result of the same type, a theorem that rules out some class of hidden-variable theories.
 
  • #94
DevilsAvocado said:
an argument that there can be no overlap in the probability distributions representing |0⟩ and |+⟩ in the model.
This is part of the definition of "ψ-epistemic theory". I think there are two basic ideas involved:
  • A probability distribution can be thought of as a representation of our knowledge of the system's properties.
  • Something that's completely determined by the properties of the system can be thought of as another property of the system.
If there are no overlapping probability distributions in the theory, then each λ determines exactly one probability distribution. Now the two ideas are in conflict. You can think of the probability distribution as "knowledge", but you can also think of it as a "property". If there's at least one λ that's associated with two probability distributions, then the probability distributions can't all be considered properties of the system. So now we have to consider at least some of them representations of "knowledge".

This motivates the definition that says that only theories of the latter kind (the ones with at least two overlapping probability distributions in the theory) are considered ψ-epistemic. These are the theories that the PBR article apparently has refuted. The first argument in the article is a bit naive, because it assumes specifically that there's overlap between the probability distributions associated with |0⟩ and |+⟩.
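To spell that out the way I read the HS definitions (my paraphrase, not a quote from either paper): a hidden-variable theory of the kind HS consider has a set Λ of ontic states, every preparation of a state vector ψ produces some probability distribution μ_ψ(λ) over Λ, and every measurement M has response probabilities P(k|λ,M). Reproducing QM means

∫ μ_ψ(λ) P(k|λ,M) dλ = the probability that QM assigns to result k, for every ψ and M.

The theory is ψ-ontic if distinct ψ and φ always give distributions with disjoint supports (so λ pins down ψ uniquely), and ψ-epistemic if there is at least one pair ψ ≠ φ whose distributions overlap on a set of nonzero measure.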
 
  • #95
I believe I have found a flaw in the paper.

In short, they try to show that there is no lambda satisfying certain properties. The problem is that the CRUCIAL property they assume is not even stated as being one of the properties, probably because they thought that property was "obvious". And that "obvious" property is today known as non-contextuality. Indeed, today it is well known that QM is NOT non-contextual. But a long time ago, it was not known. A long time ago, von Neumann found a "proof" that hidden variables (i.e., lambda) were impossible, but later it was realized that he tacitly assumed non-contextuality, so today it is known that his theorem only shows that non-contextual hidden variables are impossible. It seems that essentially the same mistake made long ago by von Neumann is now repeated by those guys here.

Let me explain what makes me arrive at that conclusion. They first talk about ONE system and try to prove that there is no adequate lambda for such a system. But to prove that, they actually consider the case of TWO such systems. Initially this is not a problem because initially the two systems are independent (see Fig. 1). But at the measurement, the two systems are brought together (Fig. 1), so the assumption of independence is no longer justified. Indeed, the states in Eq. (1) are ENTANGLED states, which correspond to systems that are not independent. Even though the systems were independent before the measurement, they became dependent at the measurement. The properties of the system change by measurement, which, by definition, is contextuality. And yet, the authors seem to tacitly (but erroneously) assume that the two systems should remain independent even at the measurement. In a contextual theory, the lambda at the measurement is NOT merely the collection of lambda_1 and lambda_2 before the measurement, which the authors don't seem to realize.
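To be concrete about where I think the tacit assumption enters (this is my reading of the preprint, so correct me if I misread it): because the two systems are prepared independently, the paper takes the joint ontic state at the measurement to be just the pair lambda = (lambda_1, lambda_2), distributed as

μ(lambda_1, lambda_2) = μ_1(lambda_1) μ_2(lambda_2),

while the measurement itself projects onto entangled states, e.g. the first vector in Eq. (1) is of the form (|0⟩|1⟩ + |1⟩|0⟩)/√2. A contextual theory is free to deny that the factorized pair (lambda_1, lambda_2) is still the complete description of the joint system once such an entangling measurement is performed, and that is exactly the possibility I don't see addressed.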
 
  • #96
Excellent post, Demystifier ! Very well and clearly written.
 
  • #97
Thanks dextercioby! I have now also sent an e-mail to the authors, with similar (but slightly more polite) content. If they answer me, I will let you know.
 
  • #98
Just to be sure, are they assuming collapse? That is, is what they're taking for granted essentially the Copenhagen interpretation?
 
  • #99
dextercioby said:
Just to be sure, are they assuming collapse? That is, is what they're taking for granted essentially the Copenhagen interpretation?
As far as I can see, they don't assume collapse.
 
  • #100
dextercioby said:
Just to be sure, are they assuming collapse? That is, is what they're taking for granted essentially the Copenhagen interpretation?
I doubt that there are even two people who mean the same thing by the term "Copenhagen interpretation", so I try to avoid it. The informal version of the assumption they're making (in order to derive a contradiction) is that a state vector represents the experimenter's knowledge of the system. This is how some people describe "the CI". But nothing can be derived from an informal version of a statement, so the authors are choosing one specific way to give the statement a precise meaning. They are defining the claim that "a state vector represents knowledge of the system" as "there's a ψ-epistemic theory that makes the same predictions as QM".

In such theories, "collapse" is not a physical process. It's just a matter of changing your probability assignments when you have ruled out some of the possibilities.
 
  • #101
Demystifier said:
The properties of the system change by measurement, which, by definition, is contextuality. And yet, the authors seem to tacitly (but erroneously) assume that the two systems should remain independent even at the measurement. In a contextual theory, the lambda at the measurement is NOT merely the collection of lambda_1 and lambda_2 before the measurement, which the authors don't seem to realize.
I don't think this is something the authors don't realize, but it is essentially the objection I had from the start-- the assumption that an individual quantum system has "properties" that determine what happens to the system. I don't even think there is any such thing as an "individual quantum system", to me that is already an idealization that has left the building of any rigorous realism we should be using to prove theorems! But the authors do seem to associate that assumption with realism, all the same, so what they are doing is saying for all the people who want to be realists, they cannot believe in psi-epistemic interpretations. In other words, if there is a reality there that can be described completely by a mathematical structure, then the wave function is part of that structure (so is psi-ontic, even if incompletely so).

My objection was that this is a very narrow interpretation of realism, so I did not count it as a "mild" assumption, nor that it would be "radical" to reject it! You are giving more flesh to that objection-- you are talking about how a system could still be realistic but not be described completely by its own "properties"-- if realism must include contextuality. I believe this was also Spekkens' view, as summarized above in the Matt Leifer quote: "Spekkens thinks that the ultimate theory will have an ontology consisting of relational degrees of freedom, i.e. systems do not have properties in isolation, but only relative to other systems."

In other words, realists can retreat to a reality with a higher level of sophistication and reject the "individual system properties" concept, allowing them to maintain a psi-epistemic interpretation. I wasn't really counting that as realism at all, because I believe the "relational degrees of freedom" are not just between systems, they are between systems and observers, so I take a more Copenhagenesque spin. Whether or not that should count as some form of realism is highly debatable (remember Bohr said "there is no quantum world"). But I can certainly agree that it is not radical, so I concur with the bloggers who felt that the theorem eliminates a corner of interpretation space that was already largely unpopulated.

Personally, my main objection is with what I think is a rather naive claim: that most physicists want to hold to a form of realism that individual systems have properties that completely describe the system, they are not just attributes that we attach to the system ourselves, for some purpose. Indeed, I would argue that physics must be physics before it should be "realistic", and what physics is, by definition, is the intentional attachment of properties to systems to achieve some purpose. That's just exactly what any physics book does, we only need to look at it! So why on Earth is it now a "mild assumption" to say that physics should be something different from what physics books do, that physics should not be about attaching properties ourselves for certain specific purposes, it should be a study of the true properties of individual systems that nature really uses to control what happens? That's the radical claim, if you ask me-- the claim that nature "thinks just like we do." I'm a realist, but I think my mind, and my mathematical structures, are looking at the reality from the inside, so PBR's very first assumption has already left what I consider to be a true way to look at physics.
 
  • #102
Fredrik said:
One thing I realized when I read HS is that hidden-variable theories can be used to give precise definitions of statements like
  • QM doesn't describe reality.
  • A state vector represents the observer's knowledge of the system.
The former is made precise by the concept of ψ-incomplete hidden-variable theories, and the latter by the concept of ψ-epistemic hidden-variable theories. Now, I'm sure that some of you (in particular Ken G) will find these definitions unsatisfactory.
Actually I have no problem with those definitions, I think you have done an excellent job unearthing the idea that what PBR are fundamentally talking about are hidden-variable theories. My objection was always with the whole concept of hidden-variable theories, I believe they represent a form of pipe dream that physics should have figured out by now it just isn't! Hidden variables are nothing but the variables of the next theory that we haven't figured out yet, there's nothing ontological about them. Physics just makes theories, and they work very well, but none of that has anything to do with the existence or non-existence of a "perfect theory" of a mathematical structure that completely describes the properties of a system. There is absolutely no reason to ever assume that such a structure exists, and any proof that starts there has entered into a kind of fantasy realm (and claimed it was a "mild assumption" to boot!). That's just never what physics was, so why should we keep pretending that's what it should be?
So if these arguments all hold, they have ruled out all hidden-variable theories except the non-local ψ-ontic ones.
Yes, that seems to be the key of the whole business. But that is also what I was saying before about the argument being circular-- I view the form of realism that they have assumed to be more or less (and now with this theorem, it's more the "more" than the "less") the same thing as the notion that psi is ontic in character, because if there is a true ontology there that can be described mathematically and have theorems proven about it, then it's not surprising that psi is saying something about it. That's what they proved, but man what a big "if." I think we intentionally retreat from reality when we place a mathematical template over it and start proving theorems about it, so to call that "realism" I think is way off, but that does seem to be how the term has been co-opted.
 
  • #103
Thanks B2, excellent. I can’t quite put my finger on it, but I just love every word you wrote in the last post... the 'openness'... it’s refreshing. More often than not, there’s an "interpretational war" going on in this forum, and that’s maybe good, people learn how to sharpen their arguments and so on. But sometimes I wonder if "dogmatic interpretationalism" is really the thing that is going to take us to the "next level" in QM... I don’t know...
bohm2 said:
So if one takes that pure epistemic/instrumentalist stance it seems to me one is almost forced to treat QT as "a science of meter readings". That view seems unattractive to me.

I agree 100%. We humans are by nature 'curious creatures', we constantly strive to make a coherent picture of the world around us. That’s just how our brains work. To just stare at "meter readings" and say:
– Well guys, this is it! This is the theory of everything, and we won’t get any further!

Is depressing...

Furthermore, I think Fredrik has put forward a quite powerful argument; if the measuring apparatus is not real, then how could we verify our theories? (or something like that)

bohm2 said:
It has the same stench/smell that held back progress in the cognitive sciences (e.g. behaviourism). But then, I could be mistaken?

I don’t think you are, and the "Fredrik argument" should be a quite powerful foundation for this stand.

I have just started to read about Ontic Structural Realism: http://plato.stanford.edu/entries/structural-realism/#OntStrReaOSR

bohm2 said:
But if one treats the wave function as a real "field"-like entity it is very much different than the typical fields we are accustomed to. The wave function evolves in 3N-dimensional configuration space, there's the contextuality/non-separability also and stuff like that make it a very strange kind of "causal" agent.

Exceptionally interesting... new ideas/perspectives that never crossed my crinkly little brain...

If we adopt the ψ-ontology (wavefunctions are states of reality) then the space where wavefunctions "live" must also be ontic, right? And this space is very different from 'our' normal 3D space... probably "unreal" to humans...??

Catch-22

Demystifier started a thread on this topic, but I don’t know if there are any answers (yet):

Configuration space vs physical space
https://www.physicsforums.com/showthread.php?t=285019

bohm2 said:
If one takes the Bohmian perspective (at least one Bohmian version), how do the 2 (pilot wave and particle) "interact"? It can't be via the usual contact-mechanical stuff we are accustomed to because of the non-locality that is required in any realist interpretation.

http://arxiv.org/PS_cache/arxiv/pdf/0904/0904.0958v1.pdf

Extremely good question! How on Earth does a particle (ontic) interact with a pilot wave if superluminal causal influence and information transfer are forbidden (by SR & the no-communication theorem)??

bohm2 said:
Furthermore, if one wishes to scrap Bohm's dualistic ontology but remain a realist so that the wave function is everything, then there's another problem:

http://www.niu.edu/~vallori/AlloriWfoPaper-Jul19.pdf

Many thanks for this link, I must read this paper and the others that you and Fredrik provided:

The interpretation of quantum mechanics: where do we stand?
http://arxiv.org/abs/0904.0958

Einstein, incompleteness, and the epistemic view of quantum states
http://arxiv.org/abs/0706.2661
 
  • #104
Fredrik said:
Yes, but you're probably thinking that it's been dead since 1963 (± a few) when Bell's theorem was published, but HS proves it using two of Einstein's arguments, from 1927 and 1935.

The reason why that result was worth mentioning is that the PBR theorem is a result of the same type, a theorem that rules out some class of hidden-variable theories.

Oops... I’m sorry Fredrik, my fault. :blushing:

[I’ve acquired a sort of "brain damage" after years of debating "Bell Disclaimers"... everything goes RED when I see LHV... :smile:]
 
  • #105
Fredrik said:
This is part of the definition of "ψ-epistemic theory". I think there are two basic ideas involved:
  • A probability distribution can be thought of as a representation of our knowledge of the system's properties.
  • Something that's completely determined by the properties of the system can be thought of as another property of the system.
If there are no overlapping probability distributions in the theory, then each λ determines exactly one probability distribution. Now the two ideas are in conflict. You can think of the probability distribution as "knowledge", but you can also think of it as a "property". If there's at least one λ that's associated with two probability distributions, then the probability distributions can't all be considered properties of the system. So now we have to consider at least some of them representations of "knowledge".

This motivates the definition that says that only theories of the latter kind (the ones with at least two overlapping probability distributions in the theory) are considered ψ-epistemic. These are the theories that the PBR article apparently has refuted. The first argument in the article is a bit naive, because it assumes specifically that there's overlap between the probability distributions associated with |0⟩ and |+⟩.

Many thanks!

I must digest this and reread everything again + the new papers + the blogs, and then get back on this.

Feels like the 'fog of ignorance' is slowly dissolving...
 
  • #106
bohm2 said:
So if one takes that pure epistemic/instrumentalist stance it seems to me one is almost forced to treat QT as "a science of meter readings". That view seems unattractive to me.
I think the problem here is that the possibilities are being too narrowly constrained. You seem to be making a choice between imagining that there is some mathematical object, call it "properties", that underlie some "true theory" that nature actually follows, versus the opposite choice that the only reality is what the meter reads, and all physics should do is predict observations. I don't think either of those models is what physics has ever been, nor what it ever should be. So let me propose a third option.

What's wrong with saying that physics is the art of taking objective measurements and braiding them into a consistent mathematical picture that gives us significant understanding of, and power over, those objective measurements? Isn't that just exactly what physics has always been, so why should we want it to be something different going forward? I see nothing unattractive about it, the mathematical structures we create come just from where they demonstrably come from, our brains, and they work to do just exactly what they work to do-- convey a sense of understanding, beauty, symmetry, and reason to the universe around us. That's what they do, it doesn't make any difference if we imagine there is some "true theory" that we don't yet know underlying it all, I have no idea where that fantasy even comes from!

Some say that they would find it disappointing if there was no true theory like that, no mathematical structure of properties that really does describe everything that happens. I can't agree-- I would find it extremely disappointing to live in such an unimaginative universe as that! We certainly would never want to actually discover such a theory, in which our own minds have mastered everything that happens. We might as well be dead! No more life to the discovery process, no more surprises about anything that nature does, no mystery or wonder beyond the amazement that we actually figured it all out. Even if we did all that, we'd still have at least one mystery to ponder: the paradox of how our brains managed to figure out how we think. Can a thought know where it comes from? Isn't the origin of a thought another thought that needs an origin?

So on the contrary, I would never characterize physics as the attempt to figure out the mathematical structure that determines the true properties of everything. Instead, I would characterize it as the process of inventing properties to answer questions and resolve mysteries, fully aware that this process will only serve to replace more superficial mysteries with more profound ones. And that was in fact the purpose all along, since when has physics been about eliminating mystery? I don't find this view either disappointing, or supportive of the concept of the existence of a unique mathematical structure that determines the true properties of a system. I can hardly even imagine a theory that could give unambiguous meaning to every word in that sentence!
 
  • #107
Ken G said:
... My objection was always with the whole concept of hidden-variable theories, I believe they represent a form of pipe dream that physics should have figured out by now it just isn't! Hidden variables are nothing but the variables of the next theory that we haven't figured out yet, there's nothing ontological about them.

If you add local before hidden-variable theories, the pipe dream is dead.

If you add non-local before hidden-variable theories, I know at least one guy in this thread that will have something to say about this... :biggrin:
 
  • #108
The issue isn't local vs. nonlocal, it is in the whole idea of what a hidden variables theory is. It's an oxymoron-- if the variables are hidden, it's not a theory, and if they aren't hidden, well, then they aren't hidden! The whole language is basically a kind of pretense that the theory is trying to be something different from what it actually is. In other words, I have no objection at all to trying to unearth additional variables underneath quantum mechanics, perhaps following the template of deBroglie-Bohm-- doing that would just be good physics. What I object to is the pretense that the resulting theory will be something other than a physics theory, and would not simply have its own new version of "hidden variables" underlying it. Framed this way, a belief in "hidden variables theories" is simply the belief that physics is an ongoing process of replacing more superficial theories with more profound ones.
 
  • #109
Ken G said:
And that was in fact the purpose all along, since when has physics been about eliminating mystery?

I’m trying real hard to comprehend what you are saying, but with all due respect – it doesn’t make sense.

Are you for real saying that one of the goals of physics is to keep us ignorant about how the world works? To preserve the mysteries??

Geeze dude, I smell a rat...
 
  • #110
DevilsAvocado said:
I’m trying real hard to comprehend what you are saying, but with all due respect – it doesn’t make sense.

Are you for real saying that one of the goals of physics is to keep us ignorant about how the world works? To preserve the mysteries??
Where did I say our goal is to remain ignorant? Talk about the fallacy of the excluded middle-- you are saying that if we don't believe there is a mathematical structure that completely describes everything that happens, then it must be because our goal is to remain ignorant of such a structure. Ah, no. What I am saying is that the process of explaining mysteries is just that: a process of explaining mysteries. No claim needs to be made about what other mysteries might crop up in the process, and I'd say the history of physics is really pretty clear on this point, not that we seem to be getting the message.
 
  • #111
Ken G said:
The issue isn't local vs. nonlocal, it is in the whole idea of what a hidden variables theory is.

Wrong. This is exactly what it is, and it is well supported in theory and in all experiments performed thus far. When the EPR-Bell loopholes are all finally closed, Local Realism is forever dead. This will be an empirical fact.

New successful theories do not change empirical facts, and Newton’s apple will not suspend itself in mid-air just because of a new more precise theory.

That’s just nuts.
 
  • #112
Ken G said:
My objection was always with the whole concept of hidden-variable theories, I believe they represent a form of pipe dream that physics should have figured out by now it just isn't!
I think the last comment is a bit unfair, because how do you figure it out if not by proving theorems like this?

Ken G said:
Hidden variables are nothing but the variables of the next theory that we haven't figured out yet, there's nothing ontological about them.
Yes, this is something that's been bugging me about these "ontic models" as Matt Leifer is calling them. There's a set Λ whose members are called ontic states. Given a λ ∈ Λ, and a measurement procedure M, the theory assigns a probability P(k|λ,M) to each possible result k. This probability is not assumed to be either 0 or 1. There's nothing inherently "ontic" about this. If we say that a model is called "ontic" if and only if each λ ∈ Λ represents all the properties of the system (in a sense that's left undefined), then we don't have any way of knowing if a given theory really is ontic. And if we simply define all models that make probability assignments of the type discussed above to be "ontic models", then nothing can tell us if λ really represents properties.

Ken G said:
Physics just makes theories, and they work very well, but none of that has anything to do with the existence or non-existence of a "perfect theory" of a mathematical structure that completely describes the properties of a system. There is absolutely no reason to ever assume that such a structure exists, and any proof that starts there has entered into a kind of fantasy realm (and claimed it was a "mild assumption" to boot!).
I don't think their assumption is quite that extreme, but I agree that it's not "mild". We can imagine a less than perfect theory where the members of Λ can be thought of as approximate representations of the system's properties. (The meaning of that is still left undefined). If the epistemic states of this theory (its probability distributions over ontic states) give us exactly the same probability assignments as QM, this theorem is telling us (assuming that its proof is correct) that none of the probability distributions in such a theory are overlapping.

This is hardly worthy of a title like "the quantum state cannot be interpreted statistically", but at least it's a somewhat interesting result, because it tells us something we didn't know before about theories that can reproduce the predictions of QM.
 
  • #113
DevilsAvocado said:
Wrong. This is exactly what it is, and it is well supported in theory and in all experiments performed thus far. When the EPR-Bell loopholes are all finally closed, Local Realism is forever dead.
I never said that wasn't true, and I have no idea why you think I did. What I actually said is that this issue is completely irrelevant to the question of what a quantum mechanical state is. We have absolutely no reason to expect that quantum systems (i.e., states in the theory of quantum mechanics) have "hidden properties" at all-- so I don't care if such imaginary properties are local or nonlocal. We constantly apply many types of unhidden local and nonlocal properties, like charges and action at a distance, and quite successfully, there's no problem at all if we apply them appropriately-- unless we want those pictures to be "the truth", which is just silly. Do we imagine that "hidden properties" of Newtonian gravity turns it into general relativity? Did we debate endlessly on whether Newtonian gravity was a theory that could be consistent with hidden variables that describe why inertial mass is the same as gravitational mass? Maybe they did once, but quickly gave up on the uselessness of the endeavor. Instead, they just came up with the next theory, guided by whatever worked, which is what physics does.

Yes, we know that local hidden properties can't completely reproduce quantum mechanics, wonderful. That provides guidance for the next theory, and how to borrow from the success of QM in such a theory. It is fine to want guidance for the next theory, but people seem to want quantum mechanics to be a description of some part of the "ultimate theory" that is the mathematical structure that describes all the properties of a system. There is zero evidence that it is that, and we should never have expected it to be. Instead, what we should expect it to do is the same things that every physics theory in the history of the discipline has ever done: supply us with a useful picture for making fairly precise calculations and entering into pictorial modes of thought that offer us a sense of understanding. Hidden variables are simply not part of that theory, so wondering what types of hidden variable theories could make all the same predictions as quantum mechanics, including untested predictions, is nothing but an exercise in guiding the next generation of useful observations that could give rise to better theories. It doesn't tell you what a quantum state is, only one thing can do that: the theory of quantum mechanics.

In this light, what the PBR theorem is really saying is, "if you want to replace QM with a hidden variables theory that you can sell as a part of the ultimate theory of ontological truth, don't try to do it using a generalization of the state vector that involves it being epistemic rather than ontic." Fine, thanks for the guidance, it's very relevant for those looking for a theory they can sell that implausible way. It doesn't tell us anything about the theory of quantum mechanics, however, because its very first assumption has nothing demonstrably to do with quantum mechanics.
 
  • #114
Ken G said:
I think the problem here is that the possibilities are being too narrowly constrained. You seem to be making a choice between imagining that there is some mathematical object, call it "properties", that underlie some "true theory" that nature actually follows, versus the opposite choice that the only reality is what the meter reads, and all physics should do is predict observations. I don't think either of those models is what physics has ever been, nor what it ever should be. So let me propose a third option.

What's wrong with saying that physics is the art of taking objective measurements and braiding them into a consistent mathematical picture that gives us significant understanding of, and power over, those objective measurements?

I've always had trouble understanding this third option. For instance, I tried reading the Fuchs paper (we discussed this on the philosophy board) and I just could not understand it. I only seem to be able to understand the two options. Maybe I'm mistaken but I fear there is no difference between the purely epistemic/instrumentalist stance and the third option you favour.

I know some "Bohmians" treat the wave function as some type of nomological (law of nature)/abstract entity (e.g. Goldstein, Durr, etc.) but there are problems with this approach as mentioned by Valentini. I also understand the Bohrian view, I think, but I can't seem to grasp that third option. I mean, what exactly are those objective measurements about? What do those mathematical objects in QM (e.g. wave function) refer to in that third option?

Edit: So there's no confusion I'm not a "naive" realist. And I'm pretty supportive of this position, I think:

"the propositions of physics are equations, equations that contain numbers, terms that refer without describing, many other mathematical symbols, and nothing else; and that these equations, being what they are, can only tell us about the abstract or mathematically characterizable structure of matter or the physical world without telling us anything else about the nature of the thing that exemplifies the structure. Even in the case of spacetime, as opposed to matter or force—to the doubtful extent that these three things can be separated—it’s unclear whether we have any knowledge of its intrinsic nature beyond its abstract or mathematically representable structure."

Thus, in physics, the propositions are invariably mathematical expressions that are totally devoid of direct pictoriality. Physicists believe that physics has to 'free itself' from ‘intuitive pictures’ and give up the hope of ‘visualizing the world'. Steven Weinberg traces the realistic significance of physics to its mathematical formulations: ‘we have all been making abstract mathematical models of the universe to which at least the physicists give a higher degree of reality than they accord the ordinary world of sensations' ( e.g. so-called 'Galilean Style').
 
  • #115
Fredrik said:
I think the last comment is a bit unfair, because how do you figure it out if not by proving theorems like this?
The theorem only "proves" that one type of thing is a pipe dream by assuming an even larger pipe dream. No theorem is any better than its postulates, and in this case, we have a postulate that there exists a physics theory that is unlike any physics theory ever seen. So the theorem only means something to people who believe in that postulate. That may be a lot of people, in which case the theorem does have significance for them, but many of the bloggers are basically saying "the theorem has no significance for me because I didn't expect epistemic states to work like that anyway." I'm saying it has no significance for me because I don't even expect physics theories of any kind to be the objects that they are assumed to be in that paper, some kind of "mini version" of an ultimate mathematical description of life, the universe, and everything.
Yes, this is something that's been bugging me about these "ontic models" as Matt Leifer is calling them. There's a set Λ whose members are called ontic states. Given a λ ∈ Λ, and a measurement procedure M, the theory assigns a probability P(k|λ,M) to each possible result k. This probability is not assumed to be either 0 or 1. There's nothing inherently "ontic" about this. If we say that a model is called "ontic" if and only if each λ ∈ Λ represents all the properties of the system (in a sense that's left undefined), then we don't have any way of knowing if a given theory really is ontic. And if we simply define all models that make probability assignments of the type discussed above to be "ontic models", then nothing can tell us if λ really represents properties.
Yes, that bothers me too. I really don't see what an "ontic model" is, it sounds like something that no physics model has ever been. Can someone give me an example, anywhere in physics, of an ontic model, and tell me why it is not an epistemic model? To me an "epistemic model" is a model about what we know, rather than about what is actually there. I'm very curious what physics theory talks about what is really there, rather than what we know about that system.

I don't think their assumption is quite that extreme, but I agree that it's not "mild". We can imagine a less than perfect theory where the members of Λ can be thought of as approximate representations of the system's properties. (The meaning of that is still left undefined). If the epistemic states of this theory (its probability distributions over ontic states) give us exactly the same probability assignments as QM, this theorem is telling us (assuming that its proof is correct) that none of the probability distributions in such a theory are overlapping.
The key question is, how much of this proof requires that there be these things called "properties" that can adjudicate the meaning of an ontic state and an epistemic state? It seems to me that the properties are crucial-- the theory essentially assumes that there is such a thing as ontic states, and only then does it ask if quantum states refer to ontic states. I feel that if one is to think of a quantum state as an epistemic state, one is not thinking of it as a probability distribution of ontic states, one is rejecting the whole concept of an ontic state. If you embrace the ontic state, then you are doing deBroglie-Bohm or some such hidden variable theory, you are not doing an epistemic interpretation at all. To me, a real full-blown epistemic interpretation is saying that our knowledge of a system is not some idle "fly on the wall" to the behavior of that system, it is part of the defining quality of what we mean by that "system" and its "behavior" in the first place. I thus see no reason to adopt epistemic interpretations if ontic states exist at all!

This is hardly worthy of a title like "the quantum state cannot be interpreted statistically", but at least it's a somewhat interesting result, because it tells us something we didn't know before about theories that can reproduce the predictions of QM.
Yes, the theorem does connect some interesting ramifications with some questionable postulates, I will agree there. The value is only in the observations it could motivate, in that they might help us find outcomes where quantum mechanics is wrong-- we already have quantum mechanics, we don't need any other theory to get the same answers that quantum mechanics does.
 
  • #116
bohm2 said:
Maybe I'm mistaken but I fear there is no difference between the purely epistemic/instrumentalist stance and the third option you favour.
There is a big difference, if by "instrumentalist stance" you basically mean "shut up and calculate." To me, a purely instrumentalist stance is a kind of radical empiricism, that says reality is what dials read. I am saying, reality is a way of thinking about our environment, it is a combination of the dial readings and how we synthesize them into a rational whole. It is what we make sense of. I think Bohr said it best: physics is about what we can say about nature. The "saying" part is really crucial, and that is where I differ from pure instrumentalism, because it is not true that all we can say about nature is facts and figures.

I mean, what exactly are those objective measurements about? What do those mathematical objects in QM (e.g. wave function) refer to in that third option?
They are about, and refer to, whatever we make of them being about, and referring to. That's it, that's what we get: what we can make of it, what we can say about it. It doesn't need to be some approximate version of a "true theory", there is no need for any such concept, and no such concept ever appears anywhere in physics, so I'm mystified why so many people seem to imagine that physics requires it in order to work. We should ask ourselves: for an approximate theory to work, why must there be an exact one?
 
  • #117
Ken G said:
Can someone give me an example, anywhere in physics, of an ontic model, and tell me why it is not an epistemic model?
The first part is easy. The classical theory of a single particle in Galilean spacetime, moving under the influence of a force. The phase space of this theory meets the requirements I mentioned in my previous post: Denote the phase space by Λ. Given a λ ∈ Λ and a measuring procedure M, the theory assigns a probability P(k|λ,M) to each possible result k.

The second part is harder, or maybe I just feel that way because I don't understand these things well enough yet. (I have two answers. The first one is right here. The other is what I'm saying in response to the last quote below). I think it's obvious enough that it makes sense to think of phase space points as complete sets of properties. I don't think a proof or even a definition is required*. If you want a reason to think of them that way, then consider the fact that if you know one point on the curve that describes the particle's motion, you can use the force (:wink:) to find all the others. So if you know a point, you know everything.

*) I don't think it's necessarily crazy to leave some things undefined. As you know it isn't possible to define everything, but more importantly, there are some things that we simply can't avoid treating as more fundamental than other things. For example, the concept of "measuring devices" is more fundamental than any theories of physics, and the concept of natural numbers is more fundamental than even the formal language used to define the set theories that we use to give the term "natural number" a set theoretic definition. It seems reasonable to me to take "property" to be one of those things that we consider so fundamental that we don't need to define it.
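In case it helps, here is a throwaway sketch of exactly that structure (plain Python; the grid, the measurements and the numbers are made up, nothing is taken from either paper): a discretized phase space as Λ, deterministic response functions P(k|λ,M) that are 0 or 1, and "epistemic states" that are just probability distributions over Λ. The predictions come out probabilistic only because the distribution is spread over many λ:

import itertools

# Toy "ontic state" space: a discretized phase space of (position, momentum) pairs.
LAMBDA = list(itertools.product(range(10), range(-3, 4)))

def response(k, lam, M):
    """Deterministic response function: P(k | lam, M) is either 0 or 1."""
    x, p = lam
    if M == "which_half":      # is the particle in the left or the right half of the box?
        return 1.0 if (x < 5) == (k == "left") else 0.0
    if M == "moving_right":    # is the momentum positive?
        return 1.0 if (p > 0) == (k == "yes") else 0.0
    raise ValueError(M)

def uniform_over(support):
    """An "epistemic state": a probability distribution over the ontic states."""
    return {lam: 1.0 / len(support) for lam in support}

def predict(mu, k, M):
    """Prediction of the model: sum over lambda of mu(lambda) * P(k | lambda, M)."""
    return sum(w * response(k, lam, M) for lam, w in mu.items())

# Two epistemic states whose supports overlap: being told which one was prepared
# does not pin down the ontic state.
mu_A = uniform_over([lam for lam in LAMBDA if lam[0] < 7])
mu_B = uniform_over([lam for lam in LAMBDA if lam[0] > 3])

print(predict(mu_A, "left", "which_half"))
print(predict(mu_B, "left", "which_half"))

The point is only that each λ answers every measurement with certainty, which is the probability-1 reading of "ontic" I keep coming back to, while the overlapping distributions mu_A and mu_B are the kind of thing HS would call epistemic states.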

Ken G said:
To me an "epistemic model" is a model about what we know, rather than about what is actually there.
Right, but in this context, it's what we know about the ontic states. Like it or not, that seems to be how these guys are defining it.

Ken G said:
The key question is, how much of this proof requires that there be these things called "properties" that can adjudicate the meaning of an ontic state and an epistemic state?
This is something that I find confusing. I'm tempted to say "none of it". Suppose that we consider all models that, for each measuring device and each member λ of some set Λ, assign a probability P(k|λ,M) to each result k, to be "ontic". We have no way of knowing if the ontic states really represent properties, but that also means that nothing will go seriously wrong if we just pretend that we do.

I think that this is what the HS article does, because their first example of an ontic model (they may have used the term "hidden-variable theory" instead) simply defines Λ to be the set of Hilbert subspaces of the Hilbert space of QM.
 
  • #118
A few more thoughts... (some of this has already been suggested by Ken G)

If we define an "ontic model" as a theory that involves a set \Lambda and assigns probability P(k|λ,M) to measurement result k, given \lambda\in\Lambda and a measuring procedure M, then QM is already an ontic model.

It's a ψ-complete (and therefore a ψ-ontic) ontic model. So if we really want to ask whether probabilities in QM are a result of our ignorance of ontic states, then we have to consider some other ontic model. We are now asking if there's another ontic model such that
  • The ontic states in QM (the pure states, the state vectors) correspond to the epistemic states of this alternative ontic model.
  • This alternative ontic model makes the same probability assignments as QM.
  • Some of the probability distributions are overlapping.
Suppose that we could somehow verify that there is an ontic model with these properties. Would that result be at all interesting?

I would say "yes", if and only if the P(k|λ,M) of the alternative ontic model are all 0 or 1. If the alternative model also assigns non-trivial probabilities, then why should we care about the result? Now someone is just going to ask "Are these probabilities the result of our ignorance of ontic states?"

From this point of view, it's a bit odd that ontic theories are allowed to make non-trivial probability assignments.
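To make the claim above that QM itself is already an ontic model concrete (my paraphrase of the construction in HS): take Λ to be the set of pure states itself, let a preparation of ψ produce the delta distribution μ_ψ(λ) = δ(λ − ψ), and let the response probabilities be the Born rule, P(k|λ,M) = ⟨λ|E_k|λ⟩, where E_k is the projector (or POVM element) for result k of M. Each λ then determines a unique ψ (namely itself), so the model is ψ-complete and trivially ψ-ontic, and it reproduces the quantum probabilities by construction.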
 
  • #119
One more thing... (Edit: OK, two more things...)

If we make the requirement that each ontic state must represent all the properties of the system, and leave the term "properties" undefined, then the PBR result can't be considered a theorem. (Because theorems are based on assumptions about terms that have definitions in set theory). I still think it makes sense to leave a term like "property" undefined in a general discussion, but it makes no sense to make such terms part of a definition of a term that's involved in a theorem.

In other words, if PBR defines the "knowledge of the system" view as "There's a ψ-epistemic ontic model that can reproduce the predictions of QM", the definition of the term "ontic model" in that statement can't include the concept of "property", unless it's defined. The only definition that I would consider appropriate is the probability-1 definition, but since neither HS nor Leifer is using it, I don't think we should. The only possibility appears to be to leave out any mention of "properties" from the definition. That would mean that there's no technical difference between an "ontic model" and just a "model".

Do these guys distinguish between the terms "model" and "theory"? I don't think they do. Here's a distinction I would like to make: Both are required to assign the probabilities P(k|λ,M), but for a "model", we don't require that it's possible to identify preparation procedures with probability distributions of ontic states. In other words, a theory must make testable predictions, and a model doesn't. (This is just a suggestion. I don't think there's an "official" definition, and I don't know if this concept of "model" is useful).
 
  • #120
There's something not quite right with this paper:
1 - if they are talking about physical properties, they must be talking about a single individual system (e.g. one electron, one photon, or one atom).
2- QM does not make predictions about individual events, so they seem to be mixing concepts.
3- If the outcome of each individual measurement is uniquely determined by the complete physical properties of the electron, photon etc, then that outcome is certain and can not be "statistical", in which case the measuring device can not and does not give probabilities. The statement "the probabilities for different outcomes is only determined by the complete physical state of the two systems at the time of the measurement" is the source of all their problems IMHO (See the last paragraph on the left hand side of page (2))
4- An ontic but incomplete QM state, is not very different from an epistemic state with hidden ontic properties. Both will result in "statistical" predictions since incomplete specification of the QM state results in lack of certainty. (cf "uniquely determines"). The only way to distinguish the two is to make a prediction about a single event and compare with an experiment in which only a single event happens. Good luck with that.
 
