The quantum state cannot be interpreted statistically?

  • #51
Man, I'm an idiot! Where did I get |1> for |+>?!
 
  • #52
[my bolding]
alxm said:
... By extension the main result here is that for two identical systems prepared in isolation from each other, the result predicted by quantum mechanics for a joint measurement cannot be enforced merely by knowing lambda1 and lambda2, since it doesn't tell you how you got it there, which has importance for what you measure.

But if lambda is actually the wave-function (or can tell you it), then obviously there's no problem.


Isn’t this exactly what David Wallace describes in his simple post (https://www.physicsforums.com/showthread.php?p=3623347#post3623347)?
 
  • #53
Two comments/questions:

1. Although I have seen various people claim an equivalence between "the statistical interpretation" and what they are calling view 2, I don't understand this claim. This looks to be similar to what Fredrik is saying. Don't physical properties include the probability distributions of all possible probes of the system?

2. Related to 1., it seems like I can understand their paper as giving me a particular experimental method to (more fully) determine λ using additional experiments on composite systems.

For example, in the paragraph beginning "The simple argument is ...", sentence 3 is particularly interesting. Can we not argue that q is zero since their later experiment can determine which of the two preparations was used? In other words, aren't they proving that we can always determine the "preparation method"? This is partially predicated on my confusion in 1. about why the "preparation method" story is equivalent to the statistical interpretation.
 
  • #54
DevilsAvocado said:
[my bolding]
Isn’t this exactly what David Wallace describes in his simple post (https://www.physicsforums.com/showthread.php?p=3623347#post3623347)?


Well, the conclusion is the same. But it seems to me that he's more describing the ordinary double-slit experiment.

One key difference between that and what's being described in the paper, is that the states of the double-slit/half-silvered mirror paths aren't created independently of each other. It's quite a bit less weird to have "spooky action at a distance" between a single state "split" in two, than between two states prepared in isolation that never had any interaction. That's what seems to be the main novelty here.
 
  • #55
alxm said:
The way I read it, what they mean by "all the properties" is some set of hidden variables or similar that are sufficient to determine the outcome of any measurement. The "real" state is represented by lambda, and the quantum state is just a classical statistical distribution over the various "lambda states". It's not a classical analogy, it is classical. Although whatever goes into putting the system into a particular lambda state is not necessarily deterministic or local or whatever; only point is that QM tells us that certain processes will allow us to prepare states with certain distributions.

So knowing lambda doesn't tell you how you got there.
Thanks for posting this. This is a very nice explanation. I've been thinking that they probably meant something other than this, since they weren't very explicit about it. Now I'm thinking that this must have been what they meant.

I thought that they were leaving it undefined what it means for λ to represent all the properties of the system. Now I think that they are using a definition of "property" similar to this one:
A property of the system is a pair (D,d) (where D denotes a measuring device and d denotes one of its possible results) such that the theory predicts that if we perform a measurement with the device D, the result will certainly be d.​
To say that λ represents all the properties is to say that the super-awesome classical theory that λ is a part of can predict the result of every possible measurement.

I will do some more thinking and post a new summary when I have something.
 
  • #56
Ensemble interpretation says that QM works for ensembles but does not work for individual systems.
The paper under discussion says that the ensemble interpretation indeed leads to a contradiction if QM is applicable to individual systems (the thought experiment in fig. 1). So what?

Or, in terms of properties: the quantum state is determined by the properties of the ensemble, which include the properties of the individual systems and emergent properties. Certain properties of individual systems can then correspond to different quantum states, but that does not mean there is any ambiguity in the correspondence between the quantum state and the properties of the ensemble.
 
  • #57
So quantum mechanics is non-commutative probability. The basic problem we have with these probabilities is interpreting them; early work of von Neumann was directed at showing that non-commuting probabilities don't arise as probability distributions over some classical theory.

The strongest result in this regard is the Kochen-Specker theorem which says that if there is a real deterministic theory underneath QM with matter in some state λ, then that theory can only model quantum mechanics if it allows contextuality (which basically implies non-locality in a relativistic theory). Basically QM can only be the statistical mechanics of some underlying "true" classical theory if that theory has faster-than-light signalling.

However this new paper appears to be pushing even further, saying that even contextual theories don't work and QM can't be seen as the statistical mechanics of any deterministic theory. Whether it actually does this remains to be seen.
 
  • #58
alxm said:
Well, the conclusion is the same. But it seems to me that he's more describing the ordinary double-slit experiment.

One key difference between that and what's being described in the paper, is that the states of the double-slit/half-silvered mirror paths aren't created independently of each other. It's quite a bit less weird to have "spooky action at a distance" between a single state "split" in two, than between two states prepared in isolation that never had any interaction. That's what seems to be the main novelty here.

Okay, thanks!


P.S. Although I’m sure that DrC can convince you that there’s absolutely nothing 'weird' about entanglement between objects that never had any interaction... :wink:
 
  • #59
zonde said:
Ensemble interpretation says that QM works for ensembles but does not work for individual systems.
This paper under discussion says that indeed ensemble interpretation leads to contradiction if QM is applicable to individual systems (thought experiment in fig.1). So what?

So what? You are also claiming "that there is difference between statistical "sum" of 1000 experiments with single photon and single experiment with 1000 photons".

So how can I take you seriously?
 
  • #60
Fredrik said:
This is wrong, and it's also a very different claim from the one made by this article. A state vector is certainly an accurate representation of the properties of an ensemble of identically prepared systems. It's conceivable that it's also an accurate representation of the properties of a single system. The article claims to be proving that it's wrong to say that it's not a representation of the properties of a single system.

This is even more wrong. Also, if you want to discuss these things, please keep them to the other thread where you brought this up.

As the first postulate of QM states clearly, the pure quantum state describes the state of a physical system, not of an ensemble.

The so-called «statistical interpretation» is wrong as both this paper and the link given by me before show. The paper is also right when it points that the «statistical interpretation» was introduced for eliminating the collapse of the quantum state. But this collapse is a real process, which is described by the von Neumann postulate, in QM, and by dynamical equations in more general formulations beyond QM.

I remark again that the paper is right: the quantum pure state is not «akin to a probability distribution in statistical mechanics», as some ill-informed guys still believe.

As any decent textbook in QM explains, ensembles in quantum theory are introduced by impure states not by pure states.
 
  • #61
Fredrik said:
That's the same thing.

"state-as-probability" = "ensemble interpretation" = "statistical interpretation" = "Copenhagen interpretation" (although some people will insist that the CI belongs on the "state-as-physical" side).

Those equality signs are misguided.
 
  • #62
zonde said:
Ensemble interpretation says that QM works for ensembles but does not work for individual systems.

But that's not true. QM makes some predictions about individual systems. For example, in an experiment that produces correlated electron-positron pairs with total spin 0, QM predicts with certainty that for any axis A, if you measure spin-up for the electron relative to axis A, then you will measure spin-down for the positron relative to axis A.

The existence of such definite predictions about a single experiment is what makes QM not a purely ensemble theory.
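That definite prediction is easy to check numerically. Here is a minimal sketch (the Bloch-sphere parametrization and variable names are my own, not from any post above) verifying that for the singlet state, "both spin-up along the same axis" has probability exactly zero for every axis, while "up, down" has probability 1/2:

```python
import numpy as np

def spin_up(theta, phi):
    # Spin-up eigenvector along the axis (theta, phi) on the Bloch sphere.
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# Singlet state (|01> - |10>)/sqrt(2)
singlet = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

rng = np.random.default_rng(0)
for _ in range(5):  # a few randomly chosen axes
    theta, phi = rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi)
    up = spin_up(theta, phi)
    down = spin_up(np.pi - theta, phi + np.pi)  # the orthogonal (spin-down) state
    # Probability that BOTH particles are found spin-up along the same axis:
    p_both_up = abs(np.vdot(np.kron(up, up), singlet)) ** 2
    # Probability of (up, down) along that axis:
    p_up_down = abs(np.vdot(np.kron(up, down), singlet)) ** 2
    assert p_both_up < 1e-12            # never happens, for any axis
    assert abs(p_up_down - 0.5) < 1e-12
```

The first assertion is exactly the kind of single-system certainty described above: QM assigns probability 0, not merely something small, to "up, up" along a common axis.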
 
  • #63
bohm2 said:
Their assumptions:

1. If a quantum system is prepared in isolation from the rest of the universe, such that quantum theory assigns a pure state, then after preparation, the system has a well defined set of physical properties.

2. It is possible to prepare multiple systems such that their physical properties are uncorrelated.

3. Measuring devices respond solely to the physical properties of the systems they measure.

They are definitely not making assumption 1. If we let ψ be the quantum state, and λ the unknown physical state, they are saying that in the statistical view, ψ does not uniquely determine λ. A pure state does not mean that the physical properties are uniquely determined. The quantum state ψ only gives probability distribution on physical states λ, it doesn't uniquely determine it.
 
  • #64
New summary. I have a better idea what they meant now.

Definition: A property of the system is a pair (D,d) (where D denotes a measuring device and d denotes one of its possible results) such that the theory predicts that if we perform a measurement with the device D, the result will certainly be d.​
Note that this is a theory-independent definition in the sense that it explains what the word "property" means in every theory.
Assumption: There's a theory that's at least as good as QM, in which a set λ = {(D_i, d_i) | i ∈ I} contains all the properties of the system.​
By calling this a "theory", we are implicitly assuming that it's possible to obtain useful information about the value of λ. (If it's not, then the "theory" isn't falsifiable in any sense of the word, and shouldn't be called a theory). So we are implicitly assuming that we can at least determine a probability distribution of values of λ.

By saying that this theory is at least as good as QM, we are implicitly assuming that the set {D_i | i ∈ I} contains all the measuring devices that QM makes predictions about.

I will call this theory the super-awesome classical theory (SACT). It has to be considered a classical theory, because it assigns no probabilities other than "certainty" to results of measurements on pure states. (A system is said to be in a pure state if the value of λ is known, and is said to be in a mixed state if a probability measure on the set of values of λ is known. The simplest kind of mixed state is a system such that all but a finite number of values of λ can be ruled out with certainty, and the remaining values are all associated with a number in [0,1] to be interpreted as the probability that the system is in the pure state λ).
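A toy illustration of these definitions, with device names and outcome values entirely invented by me: a pure SACT state is a complete assignment of results to devices, and a mixed state is a probability distribution over such assignments, so all probability comes from ignorance of λ.

```python
import random

# A "property" is a pair (device, result): measuring with that device
# certainly yields that result. A pure state of the hypothetical
# super-awesome classical theory (SACT) is the full set of properties,
# here a dict mapping each device to its predetermined result.
# (Device names "D_z", "D_x" are illustrative only.)
lam1 = {"D_z": "up", "D_x": "left"}
lam2 = {"D_z": "down", "D_x": "left"}

def measure(lam, device):
    # A pure state answers every measurement with certainty.
    return lam[device]

# A mixed state: a probability distribution over pure states.
mixed = [(0.5, lam1), (0.5, lam2)]

def sample_measure(mixed, device, rng=random):
    # Measuring a mixed state: sample a pure state from the distribution,
    # then read off its predetermined result.
    r, acc = rng.random(), 0.0
    for p, lam in mixed:
        acc += p
        if r < acc:
            return measure(lam, device)

assert measure(lam1, "D_z") == "up"
assert sample_measure(mixed, "D_x") == "left"  # both λ agree on D_x
```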

OK, that concludes my comments about the stuff I believe I understand. The stuff below this line are comments about things I don't understand, so don't expect them to make as much sense as the stuff above.

__________________________________________________


I still can't make sense of what two ideas they are comparing. If the above is what they meant when they said that λ corresponds to a complete list of properties of the system, then they appear to be comparing the following two ideas:
  1. A state vector corresponds to a subset of the set λ defined by the SACT.
  2. A state vector corresponds to a mixed state in the SACT.
But how are we to make sense of 1? If we only know a proper subset of λ, then aren't we still talking about a mixed state? Should we assume that the subset corresponding to the state vector contains the property that determines the result of the specific measurement we're going to make? Should we assume that it doesn't?

The fact that we're even talking about mixed states suggests that what they really want to compare are the following two ideas:
  1. The probabilities in QM have nothing to do with ignorance about properties of the system.
  2. The probabilities in QM are a result of our ignorance about the properties of the system.
But I have never thought of either of these as contradicting the statistical view. :confused:

What they actually end up comparing is of course the following two ideas:
  1. The state vector is always determined by λ.
  2. The state vector is not always determined by λ.
If a state vector corresponds to a mixed state (option 2 in the first list in this post), then this option 2 is just saying that a pure state isn't always determined by the mixed state it's a part of. These two clearly follow from the items on the first list in this post, but it's not clear to me how they are connected to more interesting statements like the ones on the second list or the ones on my original list:
  1. A state vector represents the properties of the system.
  2. A state vector represents the properties of an ensemble of identically prepared systems, and does not also represent the properties of a single system.
(I deleted the word "statistical" because I think it's more likely to confuse than to clarify).
I need to get something to eat and watch Fringe. Maybe Walter Bishop can inspire me to figure this out. I'll be back in a couple of hours.
 
  • #65
Let's clarify a few things that I feel need to be said at this point:
1) No one who holds the ensemble interpretation, and understands the first thing about quantum mechanics, believes that a pure state in quantum mechanics represents a classical probability distribution! They all know that quantum mechanics uses probability amplitudes, not probability distributions. Nothing in this new paper is aimed at defeating that blatant straw man. Instead, the ensemble interpretation is the claim that the all-important correlations that quantum mechanics relies on, which don't appear in any classical probability distribution, are nevertheless correlations that only have meaning for predicting the behavior of many trials. To me, the key flavor of the ensemble interpretation is a sense of incompleteness-- quantum mechanics is not a complete treatment of "what really happens" to a single system; it emerges as a description of many trials, and that is its only connection with reality.
2) If you assume that a system has properties that completely determine the outcome of an experiment before it happens, then you are claiming that a hidden variables theory exists. It is far from "radical" to deny that possibility!

In short, I interpret their conclusion as "if individual systems have properties, then quantum mechanical states must refer to them." That really doesn't shock me, and I don't think it invalidates the ensemble interpretation, but then I view the ensemble interpretation as a rejection of the concept that individual systems have properties that determine the probability amplitudes (not determine the outcomes).
 
  • #66
I hope this wasn't linked already and I look like the idiot that I know I am but here is an interesting blog discussing this issue:

To understand the new result, the first question we should ask is, what exactly do PBR mean by a quantum state being “statistically interpretable”? Strangely, PBR spend barely a paragraph justifying their answer to this central question—but it’s easy enough to explain what their answer is. Basically, PBR call something “statistical” if two people, who live in the same universe but have different information, could rationally disagree about it. (They put it differently, but I’m pretty sure that’s what they mean.) As for what “rational” means, all we’ll need to know is that a rational person can never assign a probability of 0 to something that will actually happen.

...So, will this theorem finally end the century-old debate about the “reality” of quantum states—proving, with mathematical certitude, that the “ontic” camp was right and the “epistemic” camp was wrong? To ask this question is to answer it.

I expect that PBR’s philosophical opponents are already hard at work on a rebuttal paper: “The quantum state can too be interpreted statistically”, or even “The quantum state must be interpreted statistically.”

I expect the rebuttal to say that, yes, obviously two people can’t rationally assign different pure states to the same physical system—but only a fool would’ve ever thought otherwise, and that’s not what anyone ever meant by calling quantum states “statistical”, and anyway it’s beside the point, since pure states are just a degenerate special case of the more fundamental mixed states.

The quantum state cannot be interpreted as something other than a quantum state

http://www.scottaaronson.com/blog/?p=822
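The "probability 0" criterion in that quote can be made concrete. Here is a minimal numerical sketch of the measurement at the heart of the PBR argument, as I read the paper: for the four two-qubit preparations |0⟩|0⟩, |0⟩|+⟩, |+⟩|0⟩, |+⟩|+⟩, each outcome of the entangled measurement basis is orthogonal to exactly one preparation, so QM assigns that outcome probability exactly zero for that preparation (the variable names are mine).

```python
import numpy as np

s2 = np.sqrt(2)
zero = np.array([1.0, 0.0]); one = np.array([0.0, 1.0])
plus = (zero + one) / s2;    minus = (zero - one) / s2

# The four two-qubit preparations: |0>|0>, |0>|+>, |+>|0>, |+>|+>
preps = [np.kron(a, b) for a in (zero, plus) for b in (zero, plus)]

# The entangled measurement basis from the PBR paper; outcome i is
# orthogonal to preparation i.
xi = [
    (np.kron(zero, one)   + np.kron(one, zero))   / s2,
    (np.kron(zero, minus) + np.kron(one, plus))   / s2,
    (np.kron(plus, one)   + np.kron(minus, zero)) / s2,
    (np.kron(plus, minus) + np.kron(minus, plus)) / s2,
]

# probs[i][j] = probability of outcome j given preparation i
probs = np.array([[abs(np.vdot(x, p)) ** 2 for x in xi] for p in preps])

assert np.allclose(probs.sum(axis=1), 1)  # xi is an orthonormal basis
assert np.allclose(np.diag(probs), 0)     # each outcome "rules out" one prep
```

So whichever outcome occurs, a rational agent who knew the system was in the corresponding preparation would have assigned it probability 0, which is the tension the argument exploits.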
 
  • #67
I don't think it matters to them that pure states are idealizations. It is true that we only ever have substates, which we only treat as pure by treating all entanglements as entirely decohered by the preparation. So one might imagine that the ensemble approach is needed because we do not decohere all the entanglements when we prepare the system. But that's just the kind of eventuality that this paper is arguing against-- it is saying that even if the pure state is not the complete mathematical description of the properties, it is still something that constrains the physical reality of the individual system. If you imagine that there really are deterministic properties there, I don't see how you could have thought that a quantum mechanical state doesn't constrain the physical state of those properties, so to me, the ensemble interpretation always required denial of the concept of hidden variables.

I realize a lot of people hold to an ensemble interpretation, while holding out hope for a more complete theory that unearths those deterministic hidden variables (like Einstein did), but I don't understand why those people don't just go with deBroglie-Bohm. If you want properties that determine outcomes, that's the way to do it. But it's kind of the opposite of the ensemble interpretation-- the ensemble interpretation says that the state is being over-interpreted if you think it specifies the reality of a given system, and deBroglie-Bohm says the state is being under-interpreted if you say that-- in that view the reality of the individual system is the state plus more, not something disjoint from the state that requires the state to only refer to many trials. That's why I'm not surprised that embracing properties forces one to adopt the state as a constraint on the physical reality of the system.

On the other hand, I think another problem here is there may not be agreement on just what the claims of the "ensemble interpretation" really are. Does someone who holds that interpretation want to explain just what it is that they are holding as true?
 
  • #68
Ken G said:
... but I don't understand why those people don't just go with deBroglie-Bohm.

My guess is that if you are a hardcore Ensemble'ist you don’t like non-locality, which comes with dBB...

Ken G said:
Does someone who holds that interpretation want to explain just what it is that they are holding as true?

[I’m not sure this paper is aimed at EI, but what the heck...]

I’m in on this one too, because I don’t understand what they are talking about (especially zonde’s "version"). I love Einstein, he’s my hero and probably one of the brightest souls that ever lived, but I think he went into a dead end when trying to 'refute' QM. Bell finally proved him wrong. This is what he says about the EI:
Albert Einstein said:
The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems.

It doesn’t make sense? QM can’t say anything useful about one single electron in the Double-slit experiment? Is this really true??

If we assume that the first electron fired in this single electron Double-slit experiment is the one in the top/left corner in picture a:

[Image: single-electron double-slit results (Tanamura), frames a–e]


Now, according to the EI, does this first single electron in the corner exist, when it’s all alone? And could QM say anything about that single electron?

If not, in which one of the following frames b–e will the single electron in the corner start to exist and become describable by QM? And why that particular frame?

It doesn’t make sense, does it??

I’m not a fan of David Mermin’s "Shut up and calculate" approach, but I think he’s closer to the truth than Einstein:
David Mermin said:
For the notion that probabilistic theories must be about ensembles implicitly assumes that probability is about ignorance. (The 'hidden variables' are whatever it is that we are ignorant of.) But in a non-deterministic world probability has nothing to do with incomplete knowledge, and ought not to require an ensemble of systems for its interpretation.
...
The second motivation for an ensemble interpretation is the intuition that because quantum mechanics is inherently probabilistic, it only needs to make sense as a theory of ensembles. Whether or not probabilities can be given a sensible meaning for individual systems, this motivation is not compelling. For a theory ought to be able to describe as well as predict the behavior of the world. The fact that physics cannot make deterministic predictions about individual systems does not excuse us from pursuing the goal of being able to describe them as they currently are.
 
  • #69
bohm2 said:
I hope this wasn't linked already and I look like the idiot that I know I am but here is an interesting blog discussing this issue:



The quantum state cannot be interpreted as something other than a quantum state

http://www.scottaaronson.com/blog/?p=822
This blog post looks pretty good. I have only skimmed it, but I will return to it for a closer look later. The most useful detail on that page appeared in the comments. Two of the commenters (Lubos Motl and Matt Leifer) posted a link to this article about hidden-variable theories. It explains the basic terminology and some previous results. I have started to read it, and it looks pretty good. I will read at least a few more pages before I return to the article that this thread is about.

Some of you might find it entertaining to read the blog post by Lubos Motl (the angriest man in physics) about the topic. It will not help you understand anything, but it's mildly amusing to see how aggressively he attacks everything. It has a calming effect on me actually. I'm thinking about how I expressed some irritation earlier, and I'm thinking "I hope I don't sound like that". :smile:
 
  • #70
Fredrik said:
New summary. I have a better idea what they meant now.

Definition: A property of the system is a pair (D,d) (where D denotes a measuring device and d denotes one of its possible results) such that the theory predicts that if we perform a measurement with the device D, the result will certainly be d.​
Note that this is a theory-independent definition in the sense that it explains what the word "property" means in every theory.
Assumption: There's a theory that's at least as good as QM, in which a set λ = {(D_i, d_i) | i ∈ I} contains all the properties of the system.​
By calling this a "theory", we are implicitly assuming that it's possible to obtain useful information about the value of λ. (If it's not, then the "theory" isn't falsifiable in any sense of the word, and shouldn't be called a theory). So we are implicitly assuming that we can at least determine a probability distribution of values of λ.

By saying that this theory is at least as good as QM, we are implicitly assuming that the set {D_i | i ∈ I} contains all the measuring devices that QM makes predictions about.

I will call this theory the super-awesome classical theory (SACT). It has to be considered a classical theory, because it assigns no probabilities other than "certainty" to results of measurements on pure states. (A system is said to be in a pure state if the value of λ is known, and is said to be in a mixed state if a probability measure on the set of values of λ is known. The simplest kind of mixed state is a system such that all but a finite number of values of λ can be ruled out with certainty, and the remaining values are all associated with a number in [0,1] to be interpreted as the probability that the system is in the pure state λ).

OK, that concludes my comments about the stuff I believe I understand. The stuff below this line are comments about things I don't understand, so don't expect them to make as much sense as the stuff above.

__________________________________________________


Very nice Fredrik! Keep up the good work and tell us what the heck this is all about!

Now I shall read the paper on a netbook, horizontally entangled with Walternate... :smile:
 
  • #71
DevilsAvocado said:
Very nice Fredrik! Keep up the good work and tell us what the heck this is all about!
Thanks. I hope I will be able to do that soon, but I'm still pretty confused about what's going on.

DevilsAvocado said:
I think he went into a dead end when trying to 'refute' QM.
I would say that he made valuable contributions to QM, and never tried to refute it.

DevilsAvocado said:
It doesn’t make sense? QM can’t say anything useful about one single electron in the Double-slit experiment? Is this really true??
To me that quote doesn't seem to say anything like that. Regardless of interpretation, QM assigns very accurate probabilities to positions where the particle might be detected. This assignment is certainly useful to someone who's forced to bet all his money on where the first dot will appear. If you can imagine one person that it's useful to, then how can you say that it's useless?

I don't want to spend too much time talking about the ensemble interpretation in this thread. This thread is about ψ-epistemic hidden variable theories*, not about the ensemble interpretation.

*) The terminology is explained in the article I linked to in my previous post.
 
  • #72
Fredrik said:
Some of you might find it entertaining to read the blog post by Lubos Motl (the angriest man in physics) about the topic. It will not help you understand anything, but it's mildly amusing to see how aggressively he attacks everything. It has a calming effect on me actually. I'm thinking about how I expressed some irritation earlier, and I'm thinking "I hope I don't sound like that". :smile:

I have performance/social/generalized anxiety among other things so the way Lubos Motl sounds to you is the way every human being on the planet sounds to me. Lubos is beyond scary for me. I would never dare to question his posts even if his posts sounded like they were coming from an anti-quantum-mechanical crackpot/lunatic. And I'm not implying they are...in case he drops by.
 
  • #73
bohm2 said:
The quantum state cannot be interpreted as something other than a quantum state

http://www.scottaaronson.com/blog/?p=822

Scott Aaronson made an observation that I find meaningful here:

I expect the rebuttal to prove a contrary theorem, using a definition of the word “statistical” that subtly differs from PBR’s. I expect the difference between the two definitions to get buried somewhere in the body of the paper.

I expect the rebuttal to get blogged and Slashdotted. I expect the Slashdot entry to get hundreds of comments taking strong sides, not one of which will acknowledge that the entire dispute hinges on the two camps’ differing definitions.

Now consider what Fredrik has pointed out:

Fredrik said:
They are comparing two different schools of thought:
  1. A state vector represents the properties of the system.
  2. A state vector represents the statistical properties of an ensemble of identically prepared systems, and does not also represent the properties of a single system.
[...]Then either of the two inequivalent preparation procedures could have given the system the properties represented by λ. Yada-yada-yada. Contradiction![...]

What does "and does not also represent the properties of a single system" actually entail? In fact it entails:
  1. That the law of the excluded middle applies to 'properties'.
  2. That physical 'properties' are defined non-contextually, i.e., value definiteness.

In effect this particular definition of 'properties' simply brings the debate full circle into known debate territory. By hinging its argument on the particular qualifier "not also properties of a single system", the preprint merely reaches the same conclusions as the Kochen-Specker theorem: either quantum properties are not value definite (contextual), or they are not 'real' properties by whatever definition of 'real' some author chooses. Whatever definition is chosen is not going to be satisfactory for every point of view.

The irony here is that the same contextuality issues used to argue non-realism in the EPR and Kochen-Specker cases are used here to argue that quantum states are real. To compound the irony, this preprint has taken a certain non-realist approach to defining "statistical interpretation" in order to argue this. Yet to a very high degree the debate hinges on semantics, and on which set of definitions is a priori the academically valid one. Meanwhile both sides astutely avoid recognizing that the semantic variances in their respective definitions are anything more than a choice of perspective, and continue to impose definitions on each other's words that are not applicable in the context they were intended.

Given these variances in how individual scientists internalize these definitions, I do not know how to formulate an argument that would even be generally comprehensible to everybody. It's possible to make the exact same argument twice, working from incongruent choices of definitions or semantics, and simply be accused of contradicting myself. Yet these semantic choices are as truth-independent as a coordinate choice.

So my take on this paper is that it is valid in the context of the semantic choices it made and leads me to the same conclusions about QM as EPR, Kochen-Specker, and other theoretical issues have. If by academic definition "real" properties must a priori be value definite and non-contextual then that makes me a non-realist by definition, yet I am in the realist camp.
 
  • #74
I've found the discussion here quite useful. My "thinking out loud" take on the question can be found here: http://www.tjradcliffe.com/?p=621 and can be summed up as follows:

“'Preparing a photon in the same quantum state will sometimes result in photons in different physical states' does not imply 'Preparing a photon in different quantum states will sometimes result in photons that are in the same physical state'. The former proposition is the statistical interpretation. The latter is the assumption that the author’s argument depends on."

There really is no basis for assuming their primary assumption, and if you don't grant them that assumption, the argument just fails. The statistical interpretation (which I do not cleave to myself) does not in any wise imply that individual photons must be ignorant of the means used to prepare them. That's just an arbitrary imposition.
 
Last edited by a moderator:
  • #75
my_wan said:
Scott Aaronson made an observation that I find meaningful here:
Yes, that quote is my favorite part of his blog post. :smile:

my_wan said:
In effect this particular definition of 'properties'
...
By this preprint hinging their argument on the particular qualifier: "not also properties of a single system"
That qualifier is a part of how I characterized the difference between the two views.

The article doesn't define the term "property". The definition I posted is one I came up with on my own (inspired by the probability-1 definition in Isham's QM book) when I was trying to guess what Pusey, Barrett and Rudolph (PBR) meant. Now that I've read a big enough part of the article by Harrigan & Spekkens (HS) (link) that defines the terminology used by people who deal with hidden-variable theories, I no longer think that my guess was correct.

In the PBR article, the difference between the two views that is actually used in the argument is that in one of them, λ determines the state vector, and in the other it doesn't. This is exactly how HS defines the difference between ψ-ontic and ψ-epistemic hidden variable theories. So it appears that the title and the abstract of the PBR paper are extremely misleading. The paper argues against ψ-epistemic hidden variable theories, not against the idea that QM is just a set of rules that tells us how to calculate probabilities of possible results of experiments.

By the way, HS doesn't define "property" either, but they certainly don't mean that if you know all of them, you know the result of every experiment with certainty. So if PBR are using similar terminology, my guess was way off.
 
Last edited:
  • #76
bohm2 said:
I have performance/social/generalized anxiety among other things so the way Lubos Motl sounds to you is the way every human being on the planet sounds to me. Lubos is beyond scary for me. I would never dare to question his posts even if his posts sounded like they were coming from an anti-quantum-mechanical crackpot/lunatic. And I'm not implying they are...in case he drops by.
In that case, I apologize for pointing out that others had posted the same link before. I assumed that you would just think "D'oh" (like Homer Simpson), and be completely over it a few seconds later. I certainly didn't mean to cause any anxiety.

Lubos is definitely not "an anti-quantum-mechanical crackpot/lunatic", but he thinks everyone else is. :smile:
 
  • #77
Just a quick comment about hidden-variable theories...

One thing I realized when I read HS is that hidden-variable theories can be used to give precise definitions of statements like
  • QM doesn't describe reality.
  • A state vector represents the observer's knowledge of the system.
The former is made precise by the concept of ψ-incomplete hidden-variable theories, and the latter by the concept of ψ-epistemic hidden-variable theories. Now, I'm sure that some of you (in particular Ken G) will find these definitions unsatisfactory. But I didn't bring this up because I think these definitions describe exactly what a person who uses one of these statements has in mind. I'm bringing it up because until now I thought that statements like these can't be defined in terms of operational concepts like preparation procedures and measurement procedures, and I think it's pretty cool that there are meaningful definitions that can be expressed in scientific terms.

The HS article defines two classes of hidden-variable theories: ψ-ontic and ψ-epistemic. The first class can be further divided into ψ-complete and ψ-supplemented. The criteria that define the three classes are precisely the ones used on page 1 of the PBR article. The PBR article is arguing against the ψ-epistemic class of hidden-variable theories. It's interesting to note that the HS article is arguing that a local hidden-variable theory that can reproduce the predictions of QM is necessarily ψ-epistemic. So if these arguments all hold, they have ruled out all hidden-variable theories except the non-local ψ-ontic ones. (Bohmian mechanics is a non-local ψ-supplemented (and therefore ψ-ontic) hidden-variable theory).
 
Last edited:
  • #78
These are a few good quotes from the comments section of Scott Aaronson's blog. (The one bohm2 linked to in #66).

Scott:
As I said, I would’ve strongly preferred if PBR had given a careful discussion of what they mean by “statistical” and what they don’t mean (and for which meanings the “statistical interpretation” can be trivially ruled out even without their theorem, etc. etc.), rather than breezing past these issues in a few sentences.
...
Let me put it this way: if what the epistemic camp believed is overturned by the PBR theorem, then what they believed is so obviously wrong that they shouldn’t have needed such a theorem to set them straight! And therefore, being charitable, I’m going to proceed on the assumption that they meant something else.​

Matt Leifer:
The psi-epistemicist response to PBR is quite straightforward. Basically, the result does not rule out any proposal that we were taking seriously in the first place. For the neo-Copenhagenists (e.g. Quantum Bayesians, Zeilinger’s, etc.) there is no underlying state of reality beyond the quantum predictions, so the result is irrelevant to their program and they can continue as before. Those of us who are realist, e.g. Rob Spekkens and myself, have more of a problem and we must deny one of the assumptions of the theorem. However, Bell’s theorem, Kochen-Specker, Hardy’s Ontological Excess Baggage theorem, and a host of results by Alberto Montina have already given us enough problems with the usual framework for ontological models that we had already abandoned it as a serious proposal a long time ago. Spekkens thinks that the ultimate theory will have an ontology consisting of relational degrees of freedom, i.e. systems do not have properties in isolation, but only relative to other systems. Personally, I can’t make much sense of that beyond a rephrasing of many worlds, so I favor a theory with retrocausal influences instead. Neither of these proposals is ruled out by the PBR theorem.

That said, I do think the PBR result is the most significant result in quantum foundations for several years. It was an important open question as to whether psi-epistemicism was possible within the standard framework for ontological theories and that has now been answered in the negative. However, as I said, this only confirms intuitions that we (both psi-ontologists and psi-epistemicists) already had.​

Gene:
Here is a video of Lubos singing Queen’s Bohemian Rhapsody:

 
Last edited by a moderator:
  • #79
The layman’s take on PBR – feel free to laugh :smile:

To begin with, this helped me to (not fully) understand what this is all about:
PBR (http://arxiv.org/abs/1111.3328) said:
The statistical view of the quantum state is that it merely encodes an experimenter's information about the properties of a system. We will describe a particular measurement and show that the quantum predictions for this measurement are incompatible with this view.


Then I got somewhat confused:
PBR (http://arxiv.org/abs/1111.3328) said:
If the quantum state is a physical property of the system (the first view), then either λ is identical with |φ0> or |φ1>, or λ consists of |φ0> or |φ1>, supplemented with values for additional variables not described by quantum theory. Either way, the quantum state is uniquely determined by λ.

If the quantum state is statistical in nature (the second view), then a full specification of λ need not determine the quantum state uniquely. Some values of λ may be compatible with the quantum state being either |φ0> or |φ1>. This can be understood via a classical analogy. Suppose there are two different methods of flipping a coin, each of which is biased. Method 1 gives heads with probability p0 > 0 and method 2 with probability p1 > 0, where p0 ≠ p1. If the coin is flipped only once, there is no way to determine by observing only the coin which method was used. The outcome heads is compatible with both. The statistical view says something similar about the quantum system after preparation. The preparation method determines either |φ0> or |φ1> just as the flipping method determines probabilities for the coin. But a complete list of physical properties λ is analogous to a list of coin properties, such as position, momentum, etc. Just as "heads up" is compatible with either flipping method, a particular value of λ might be compatible with either preparation method.

We will show that the statistical view is not compatible with the predictions of quantum theory.


And after this sentence... well, it’s no use pretending – I was completely lost.
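For what it's worth, the coin analogy in that quote can be run as a simulation. A minimal sketch (my own; the biases 0.3 and 0.7 are illustrative values, not from the paper) showing that a single observed "heads" is compatible with both flipping methods:

```python
import random

def flip(method):
    """Flip a coin with one of two biased methods (illustrative biases)."""
    p_heads = 0.3 if method == 1 else 0.7  # p0 = 0.3, p1 = 0.7, p0 != p1
    return "heads" if random.random() < p_heads else "tails"

random.seed(0)
n = 10_000
heads_counts = {m: sum(flip(m) == "heads" for _ in range(n)) for m in (1, 2)}

# Both methods produce "heads" some of the time, so observing a single
# "heads" cannot tell you which method was used -- just as, on the
# statistical view, a single value of lambda might be compatible with
# either preparation method.
assert 0 < heads_counts[1] < n and 0 < heads_counts[2] < n
print(heads_counts)
```

Seeing one "heads" narrows nothing down; only the long-run frequencies distinguish the two methods, which is roughly the role PBR's analogy assigns to the quantum state on the statistical view.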

But I continued to read (stubborn :devil:) and this paragraph helped me form an 'illusion' that there might be something to understand after all:
PBR (http://arxiv.org/abs/1111.3328) said:
Finally, the argument so far uses the fact that quantum probabilities are sometimes exactly zero. The argument has not taken any account of the experimental errors that will occur in any real laboratory. It is very important to have a version of the argument which is robust against small amounts of noise. Otherwise the conclusion – that the quantum state is a physical property of a quantum system – would be an artificial feature of the exact theory, but irrelevant to the real world. Experimental test would be impossible.

Add this to the measurement apparatus in FIG 1:

[attached image: 29xyhzo.png]


They are measuring NOT values!

... This is a completely *WILD GUESS* but to me it looks like the logic goes like this:

Quantum probabilities are sometimes exactly zero, and when they are – we should not be able to measure 'this outcome'. If we manage to set up a system (I have no idea how they do that) where it is possible to indeed measure 'zero probabilities', well then these 'probabilities' are not probabilities, but real!

Showtime! Boos/Applause/LOL – anything goes! :biggrin:
 
Last edited by a moderator:
  • #80
The title and the abstract of this paper are like newspaper headlines. They aren't meant to be honest, or even close to the truth. They are only meant to get your attention. But the article's argument against ψ-epistemic hidden-variable theories may still be correct.

I have examined the argument now. Let |0\rangle and |1\rangle be the eigenvectors of some operator on a two-dimensional Hilbert space (like a spin component operator for a spin-1/2 particle). Define
\begin{align}
|+\rangle &=\frac{1}{\sqrt{2}} \left(|0\rangle+|1\rangle\right)\\
|-\rangle &=\frac{1}{\sqrt{2}} \left(|0\rangle-|1\rangle\right).
\end{align}
These vectors form another orthonormal basis for the same Hilbert space. Suppose that there's one preparation procedure that puts the system in the |0\rangle state, and another that puts the system in the |+\rangle state. Each time we perform one of these procedures, the particle ends up having some set of properties. We are assuming that there's a λ (a "state" in the hidden-variable theory, i.e. a complete list of all the particle's properties) such that regardless of which of these two procedures we use, there's a probability ≥ q > 0 that the properties of the particle will be λ.

Suppose that we prepare two particles in isolation, both in the state |0\rangle. This puts the two-particle system in the state |0\rangle\otimes|0\rangle. There is a probability \geq q^2>0 that both particles will have properties λ.

Now suppose that we measure an operator (on the four-dimensional two-particle Hilbert space) with the following eigenvectors.
\begin{align}
|\xi_1\rangle &=\frac{1}{\sqrt{2}} \left(|0\rangle\otimes|1\rangle +|1\rangle\otimes|0\rangle\right)\\
|\xi_2\rangle &=\frac{1}{\sqrt{2}} \left(|0\rangle\otimes|-\rangle +|1\rangle\otimes|+\rangle\right)\\
|\xi_3\rangle &=\frac{1}{\sqrt{2}} \left(|+\rangle\otimes|1\rangle +|-\rangle\otimes|0\rangle\right)\\
|\xi_4\rangle &=\frac{1}{\sqrt{2}} \left(|+\rangle\otimes|-\rangle +|-\rangle\otimes|+\rangle\right)
\end{align}
The first one is orthogonal to |0\rangle\otimes|0\rangle, so QM assigns probability 0 to the result corresponding to that eigenvector. But at least a fraction q^2>0 of the time, the system has properties that it could have gotten from the preparation procedure that puts the system in state |0\rangle\otimes|0\rangle.
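The orthogonality claims here are easy to check numerically. A minimal numpy sketch (my own, not from the paper): it verifies that the four |\xi_i\rangle form an orthonormal basis, and that each of the four product preparations |0\rangle\otimes|0\rangle, |0\rangle\otimes|+\rangle, |+\rangle\otimes|0\rangle, |+\rangle\otimes|+\rangle is assigned Born-rule probability exactly zero by one of the outcomes:

```python
import numpy as np

s0 = np.array([1.0, 0.0])            # |0>
s1 = np.array([0.0, 1.0])            # |1>
sp = (s0 + s1) / np.sqrt(2)          # |+>
sm = (s0 - s1) / np.sqrt(2)          # |->

xi = np.array([
    (np.kron(s0, s1) + np.kron(s1, s0)) / np.sqrt(2),   # |xi_1>
    (np.kron(s0, sm) + np.kron(s1, sp)) / np.sqrt(2),   # |xi_2>
    (np.kron(sp, s1) + np.kron(sm, s0)) / np.sqrt(2),   # |xi_3>
    (np.kron(sp, sm) + np.kron(sm, sp)) / np.sqrt(2),   # |xi_4>
])

# The four |xi_i> form an orthonormal basis of the two-particle space.
assert np.allclose(xi @ xi.T, np.eye(4))

# Born-rule probabilities |<xi_i|prep>|^2 for the four product preparations
# |00>, |0+>, |+0>, |++> (rows: outcomes, columns: preparations).
preps = np.array([np.kron(a, b) for a in (s0, sp) for b in (s0, sp)])
probs = np.abs(xi @ preps.T) ** 2

# Each preparation gets probability exactly 0 from one outcome:
# <xi_1|00> = <xi_2|0+> = <xi_3|+0> = <xi_4|++> = 0.
assert np.allclose(np.diag(probs), 0.0)
```

Every column of `probs` sums to 1 (the |\xi_i\rangle form a complete measurement), yet each preparation is assigned probability exactly zero by one outcome, which is the fact the argument exploits.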

Right now I'm not sure why the above should be considered a contradiction. I have to go to a grocery store before it closes, so I don't have time to think it through right now. Is it that if you know the state vector, you know which of the four eigenvectors represents an impossible result, but if you just know λ, you don't?
 
  • #81
Fredrik said:
Is it that if you know the state vector, you know which of the four eigenvectors represents an impossible result, but if you just know λ, you don't?

How does that match up with how Scott Aaronson interprets PBR, where he writes:

Basically, PBR call something “statistical” if two people, who live in the same universe but have different information, could rationally disagree about it. (They put it differently, but I’m pretty sure that’s what they mean.) As for what “rational” means, all we’ll need to know is that a rational person can never assign a probability of 0 to something that will actually happen.
 
Last edited:
  • #82
Aaronson's blog helped me understand at least one detail in the PBR argument, but I think I understand less of his argument than of the PBR argument right now. He seems to be saying that QM is statistical if two people could assign different wavefunctions to the same system, and both be right. PBR are saying that QM is statistical if two preparation procedures that QM considers inequivalent might actually give the system the same properties. This doesn't sound like the same thing to me.
 
  • #83
bohm2 said:
How does that match up with how Scott Aaronson interprets PBR, where he writes:

a rational person can never assign a probability of 0 to something that will actually happen.


[Again, wild guessing]
This sounds logical, but maybe the 'trick' is:
PBR (http://arxiv.org/abs/1111.3328) said:
Finally, the argument so far uses the fact that quantum probabilities are sometimes exactly zero.


And if you do find a clever way to measure that zero probability... well, that should give a completely new meaning to "probability", right...?
 
Last edited by a moderator:
  • #84
Just to add more input (and confuse me even more) concerning the implications of this paper, here is another blog post that Matt Leifer just published:

First up, I would like to say that I find the use of the word “Statistically” in the title to be a rather unfortunate choice. It is liable to make people think that the authors are arguing against the Born rule (Lubos Motl has fallen into this trap in particular), whereas in fact the opposite is true. The result is all about reproducing the Born rule within a realist theory. The question is whether a scientific realist can interpret the quantum state as an epistemic state (state of knowledge) or whether it must be an ontic state (state of reality). It seems to show that only the ontic interpretation is viable, but, in my view, this is a bit too quick.

Various contemporary neo-Copenhagen approaches also fall under this option, e.g. the Quantum Bayesianism of Carlton Caves, Chris Fuchs and Ruediger Schack; Anton Zeilinger’s idea that quantum physics is only about information; and the view presently advocated by the philosopher Jeff Bub. These views are safe from refutation by the PBR theorem, although one may debate whether they are desirable on other grounds, e.g. the accusation of instrumentalism. Pretty much all of the well-developed interpretations that take a realist stance fall under option 3, so they are in the psi-ontic camp. This includes the Everett/many-worlds interpretation, de Broglie-Bohm theory, and spontaneous collapse models. Advocates of these approaches are likely to rejoice at the PBR result, as it apparently rules out their only realist competition, and they are unlikely to regard anti-realist approaches as viable.

The PBR theorem rules out psi-epistemic models within the standard Bell framework for ontological models. The remaining options are to adopt psi-ontology, remain psi-epistemic and abandon realism, or remain psi-epistemic and abandon the Bell framework. One of the things that a good interpretation of a physical theory should have is explanatory power. For me, the epistemic view of quantum states is so explanatory that it is worth trying to preserve it. Realism too is something that we should not abandon too hastily. Therefore, it seems to me that we should be questioning the assumptions of the Bell framework by allowing more general ontologies, perhaps involving relational or retrocausal degrees of freedom. At the very least, this option is the path less travelled, so we might learn something by exploring it more thoroughly.

http://mattleifer.info/2011/11/20/can-the-quantum-state-be-interpreted-statistically/

Edit: I just read all of it. Matt wrote a great piece.
 
Last edited by a moderator:
  • #85
Last edited by a moderator:
  • #86
Fredrik said:
I would say that he made valuable contributions to QM, and never tried to refute it.

Absolutely, as I said he’s my hero, but even the Sun has spots. When the EPR-Bell loopholes are finally closed, Local Realism is forever dead, and I guess that would have been some kind of 'blow' to Einstein... I know 'refute' is quite an 'incomplete' description, but I did use single quotes... :wink:

Is there any person on this planet who clearly can describe what went on between Einstein & Bohr for 20+ years?? All we know for sure is that Einstein, after the 1927 Solvay Conference, was not completely happy with the state of affairs. The quote in the paper is quite telling:
... I incline to the opinion that the wave function does not (completely) describe what is real, but only a (to us) empirically accessible maximal knowledge regarding that which really exists [...] This is what I mean when I advance the view that quantum mechanics gives an incomplete description of the real state of affairs. -- A. Einstein

Fredrik said:
To me that quote doesn't seem to say anything like that. Regardless of interpretation, QM assigns very accurate probabilities to positions where the particle might be detected. This assignment is certainly useful to someone who's forced to bet all his money on where the first dot will appear. If you can imagine one person that it's useful to, then how can you say that it's useless?

I don't want to spend too much time talking about the ensemble interpretation in this thread.

But I don’t get it? EI is no different than any other interpretation when it comes to individual particles? What’s it good for then??

But you’re right, this thread is not about EI.
 
  • #87
@Fredrik
As much as I have learned to respect and often concur with your input here, I was strongly at odds with your earlier take in this thread. I didn't know how to properly articulate it without moving well off topic, so I did the best I could with generalities. However, it seems you did comprehend quite well :cool:

I remain interested in how to better articulate these ontic and epistemic variances in people's perceptions, but am at a complete loss. Namely because there are a number of arguments I would like to make, but these perceptual variances invariably lead to an almost universal misunderstanding of the perceived consequences. For me it makes no difference which set of definitions I work under to make a point, so long as the point is understood in the context intended. That just doesn't seem possible under the present state of affairs. I can't even fully grok the full range of other people's internal perspectives on these ontic and epistemic issues. Thanks for the HS paper (http://arxiv.org/abs/0706.2661); it does make many of the issues I struggle with a lot clearer. It fails to fully articulate a distinction between ontic locality and epistemic locality, which I find pertinent, but it was as clear an articulation of the basic issues as I have ever seen.

In particular I like the fact that this HS paper explicitly points out why Einstein would not have been swayed by the modern EPR experiments. In fact, not only did he reject the original EPR paper as representative of his view; his preferred form of the argument already presupposed the correlations that modern EPR experiments confirm, and implicitly the situations covered by the inequalities Bell later outlined. I'll keep this paper in mind in the event I get into another debate over what Einstein would have been convinced of as a result of modern experiments.

In regards to post #77, I too was impressed with the rigor with which the authors made precise these confounding definitional issues; not perfect, but impressive. However, your characterization of the PBR article as anti ψ-epistemic, though not explicitly wrong, is more nuanced than you seemed to imply when you noted the comparison with the HS article. A clue to this may be in your post #78 when you noted an inability to make sense of Spekkens view unless it was somehow related to many worlds. Spekkens toy model notwithstanding (it was a "toy" model after all), his views are not too far from mine, and many worlds has nothing to do with it. When the PBR article argues that the quantum state cannot be interpreted "statistically" it does not explicitly imply a one to one correspondence between |ψ|^2 and an ontic specification of ψ. Only that ψ refers to an actual ontic construct in a manner that may or may not involve a ψ-complete specification, at least as defined by the HS article to qualify as ψ-complete. In this respect it is a weaker argument than some may attempt to make it out to be, but it is somewhat more attuned to the type of argument Einstein made, the EPR paper notwithstanding, as Einstein distanced himself from that article immediately upon publication. In the sense that the HS article defined ψ-epistemic, the PBR article made no specific claims. To us realists this may seem anti-climactic, but that under-values a range of opinions and perspectives not shared by many realists.

I would be interested in a discussion about Spekkens views, particularly the concept of relational degrees of freedom, (lack of) properties in isolation, and relativistic (emergent) properties in general. It may help clear up some issues with Spekkens views. Some familiarity with Relational QM (RQM) would be useful, but would almost certainly exceed the scope of this thread. Personally I can't see any way to escape the non-realist views without an understanding of RQM or related concepts.
 
Last edited by a moderator:
  • #88
Hey my_wan! Long time no see! :smile:
 
  • #89
bohm2 said:
Just to add more input (and confuse me even more) concerning the implications of this paper, here is another blog post that Matt Leifer just published:

http://mattleifer.info/2011/11/20/can-the-quantum-state-be-interpreted-statistically/

Edit: I just read all of it. Matt wrote a great piece.

Many thanks. Finally it’s possible to get a chance to understand what this is all about:
epistemic state = state of knowledge
ontic state = state of reality


  1. ψ-epistemic: Wavefunctions are epistemic and there is some underlying ontic state.

  2. ψ-epistemic: Wavefunctions are epistemic, but there is no deeper underlying reality.

  3. ψ-ontic: Wavefunctions are ontic.

Conclusions
The PBR theorem rules out psi-epistemic models within the standard Bell framework for ontological models. The remaining options are to adopt psi-ontology, remain psi-epistemic and abandon realism, or remain psi-epistemic and abandon the Bell framework. One of the things that a good interpretation of a physical theory should have is explanatory power. For me, the epistemic view of quantum states is so explanatory that it is worth trying to preserve it. Realism too is something that we should not abandon too hastily. Therefore, it seems to me that we should be questioning the assumptions of the Bell framework by allowing more general ontologies, perhaps involving relational or retrocausal degrees of freedom. At the very least, this option is the path less travelled, so we might learn something by exploring it more thoroughly.​

What’s left is to understand the proof, which seems to involve the Born rule, a 0 result that contradicts the normalization assumption of 1, and an argument that there can be no overlap in the probability distributions representing |0⟩ and |+⟩ in the model.

[attached image: psiontic.png]


But I don’t understand it and am hoping that someone can 'translate'...
 
Last edited by a moderator:
  • #90
Fredrik said:
... a local hidden-variable theory that can reproduce the predictions of QM is necessarily ψ-epistemic.

What am I missing?? A local hidden-variable theory that can reproduce the predictions of QM...?

This has been quite dead for a while, hasn’t it?? :bugeye::bugeye::bugeye:
 
  • #91
DevilsAvocado said:
Many thanks. Finally it’s possible to get a chance to understand what this is all about:
epistemic state = state of knowledge
ontic state = state of reality


  1. ψ-epistemic: Wavefunctions are epistemic and there is some underlying ontic state.

  2. ψ-epistemic: Wavefunctions are epistemic, but there is no deeper underlying reality.

  3. ψ-ontic: Wavefunctions are ontic.


Many realists have trouble understanding the purely epistemic stance. As Ghirardi writes in discussing Bell's view:

Bell has considered this position and he has made clear that he was inclined to reject any reference to information unless one would, first of all, answer to the following basic questions: Whose information?, Information about what?

So if one takes that pure epistemic/instrumentalist stance it seems to me one is almost forced to treat QT as "a science of meter readings". That view seems unattractive to me. It has the same stench/smell that held back progress in the cognitive sciences (e.g. behaviourism). But then, I could be mistaken?

But if one treats the wave function as a real "field"-like entity, it is very different from the typical fields we are accustomed to. The wave function evolves in 3N-dimensional configuration space; there's the contextuality/non-separability too, and stuff like that makes it a very strange kind of "causal" agent. If one takes the Bohmian perspective (at least one Bohmian version), how do the two (pilot wave and particle) "interact"? It can't be via the usual contact-mechanical stuff we are accustomed to, because of the non-locality that is required in any realist interpretation.

http://arxiv.org/PS_cache/arxiv/pdf/0904/0904.0958v1.pdf

Furthermore, if one wishes to scrap Bohm's dualistic ontology but remain a realist so that the wave function is everything then, there's another problem:

Since the proposal is to take the wave function to represent physical objects, it seems natural to take configuration space as the true physical space. But clearly, we do not seem to live in configuration space. Rather, it seems obvious to us that we live in 3 dimensions. Therefore, a proponent of this view has to provide an account of why it seems as if we live in a 3-dimensional space even though we do not. Connected to that problem, we should explain how to "recover the appearances" of macroscopic objects in terms of the wave function.

http://www.niu.edu/~vallori/AlloriWfoPaper-Jul19.pdf​
 
Last edited by a moderator:
  • #92
my_wan said:
@Fredrik
As much as I have learned to respect and often concur with your input here, I was strongly at odds with your earlier take in this thread. I didn't know how to properly articulate it without moving well off topic, so I did the best I could with generalities. However, it seems you did comprehend quite well :cool:
There were a few things that I failed to understand, but I think I got the main point right: What they are attempting to disprove isn't what people who claim to prefer a statistical view actually believe in.

I must admit that I had a rather strong emotional reaction when I read the title and the abstract. They made me expect a bunch of crackpot nonsense, and I think this made it harder for me to understand some of the details correctly. For example, when they got to the condition that defines the ψ-epistemic theories, I thought they were saying that this was implied by the statistical view, but all they did was to consider a definition that makes an idea precise.

I also had no idea that there is a definition that makes that idea precise. This is why I said that the argument doesn't even look like a theorem. At the time, I thought about saying that a person who calls this a theorem doesn't know what a theorem is, but I decided that this was too strong a statement about something that I knew that I didn't fully understand. :smile:

I still think the title and the abstract make the article look like crackpot nonsense, so I'm surprised that it didn't get dismissed as such by more people. I still don't fully understand the argument in the article, but now at least it looks like a theorem and a proof.

my_wan said:
Thanks for the HS paper (http://arxiv.org/abs/0706.2661); it does make many of the issues I struggle with a lot clearer. It fails to fully articulate a distinction between ontic locality and epistemic locality, which I find pertinent, but it was as clear an articulation of the basic issues as I have ever seen.
I read their definition of locality, but I didn't understand it. I'm going to have to give it another try later, because it's something I've always felt needs a definition.

my_wan said:
However, your characterization of the PBR article as anti ψ-epistemic, though not explicitly wrong, is more nuanced than you seemed to imply when you noted the comparison with the HS article.
Matt Leifer's blog brought up a few nuances that are absent both from my posts and the PBR article (like how there could be a hidden-variable theory where properties are relative rather than objective). But I don't see how PBR can be interpreted as anything but an argument against what HS called ψ-epistemic theories. Note that when PBR said
We begin by describing more fully the difference between the two different views of the quantum state [11].
reference 11 is HS. (I didn't realize this until later).

my_wan said:
A clue to this may be in your post #78 when you noted an inability to make sense of Spekkens view unless it was somehow related to many worlds.
This was a quote from Matt Leifer's comments to Scott Aaronson's blog post. But I have actually had similar thoughts (about how relational stuff seems to be MWI ideas in disguise), and even mentioned them in the forum a couple of times. I have no idea what Spekkens' toy model is about though. But I'm probably going to take some time to read some of the articles that Leifer is referencing soon.

my_wan said:
When the PBR article argues that the quantum state cannot be interpreted "statistically" it does not explicitly imply a one to one correspondence between |ψ|^2 and an ontic specification of ψ. Only that ψ refers to an actual ontic construct in a manner that may or may not involve a ψ-complete specification, at least as defined by the HS article to qualify as ψ-complete.
I understood this, but maybe I typed it up wrong. :smile:

my_wan said:
I would be interested in a discussion about Spekkens views, particularly the concept of relational degrees of freedom, (lack of) properties in isolation, and relativistic (emergent) properties in general. It may help clear up some issues with Spekkens views. Some familiarity with Relational QM (RQM) would be useful, but would almost certainly exceed the scope of this thread. Personally I can't see any way to escape the non-realist views without an understanding of RQM or related concepts.
Sounds like a good topic for another thread. (But I have spent a lot of time on this PBR stuff the past few days, so I'm somewhat reluctant to get into a long discussion about a new topic).
 
Last edited by a moderator:
  • #93
DevilsAvocado said:
What am I missing?? A local hidden-variable theory that can reproduce the predictions of QM...?

This has been quite dead for awhile, hasn’t it?? :bugeye::bugeye::bugeye:
Yes, but you're probably thinking that it's been dead since 1964, when Bell's theorem was published. HS, however, prove it using two of Einstein's arguments, from 1927 and 1935.

The reason why that result was worth mentioning is that the PBR theorem is a result of the same type, a theorem that rules out some class of hidden-variable theories.
 
  • #94
DevilsAvocado said:
an argument that there can be no overlap in the probability distributions representing |0⟩ and |+⟩ in the model.
This is part of the definition of "ψ-epistemic theory". I think there are two basic ideas involved:
  • A probability distribution can be thought of as a representation of our knowledge of the system's properties.
  • Something that's completely determined by the properties of the system can be thought of as another property of the system.
If there are no overlapping probability distributions in the theory, then each λ determines exactly one probability distribution. Now the two ideas are in conflict. You can think of the probability distribution as "knowledge", but you can also think of it as a "property". If there's at least one λ that's associated with two probability distributions, then the probability distributions can't all be considered properties of the system, so we have to consider at least some of them as representations of "knowledge".

This motivates the definition that says that only theories of the latter kind (the ones with at least two overlapping probability distributions in the theory) are considered ψ-epistemic. These are the theories that the PBR article has apparently refuted. The first argument in the article is a bit naive, because it assumes specifically that there's overlap between the probability distributions associated with |0⟩ and |+⟩.
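As a quick sanity check (my own sketch, not from the article), the reason overlapping λ distributions for |0⟩ and |+⟩ are even conceivable is that the two preparations are non-orthogonal, so no single measurement can distinguish them with certainty:

```python
import numpy as np

# Computational-basis qubit states used throughout the PBR discussion.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ket_plus = (ket0 + ket1) / np.sqrt(2)  # |+> = (|0> + |1>)/sqrt(2)

# |<0|+>|^2 = 1/2: the two preparations are non-orthogonal, which is
# what leaves room for a psi-epistemic model to assign them
# overlapping probability distributions over lambda.
overlap = abs(np.dot(ket0, ket_plus)) ** 2  # overlap ≈ 0.5
```

If the overlap were zero, the distributions could trivially be disjoint and the question PBR addresses would not arise.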
 
  • #95
I believe I have found a flaw in the paper.

In short, they try to show that there is no lambda satisfying certain properties. The problem is that the CRUCIAL property they assume is not even stated as being one of the properties, probably because they thought that property was "obvious". And that "obvious" property is today known as non-contextuality. Indeed, today it is well known that QM is NOT non-contextual, but long ago this was not known. Von Neumann once found a "proof" that hidden variables (i.e., lambda) were impossible, but it was later realized that he had tacitly assumed non-contextuality, so today it is known that his theorem only shows that non-contextual hidden variables are impossible. It seems that essentially the same mistake von Neumann made long ago is now being repeated by these authors.

Let me explain what led me to that conclusion. They first talk about ONE system and try to prove that there is no adequate lambda for such a system. But to prove that, they actually consider the case of TWO such systems. Initially this is not a problem, because initially the two systems are independent (see Fig. 1). But at the measurement, the two systems are brought together (Fig. 1), so the assumption of independence is no longer justified. Indeed, the states in Eq. (1) are ENTANGLED states, which correspond to systems that are not independent. Even though the systems were independent before the measurement, they became dependent in the measurement. The properties of the system change under measurement, which, by definition, is contextuality. And yet, the authors seem to tacitly (but erroneously) assume that the two systems should remain independent even at the measurement. In a contextual theory, the lambda at the measurement is NOT merely the collection of lambda_1 and lambda_2 from before the measurement, which the authors don't seem to realize.
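For reference, the entangled measurement basis of PBR's Eq. (1) can be written out explicitly; a minimal numpy sketch (my own, not from the paper) verifies that the four states form an orthonormal basis and that each outcome is orthogonal to exactly one of the four product preparations |0⟩|0⟩, |0⟩|+⟩, |+⟩|0⟩, |+⟩|+⟩, which is what drives the PBR contradiction:

```python
import numpy as np

k0 = np.array([1.0, 0.0])
k1 = np.array([0.0, 1.0])
kp = (k0 + k1) / np.sqrt(2)  # |+>
km = (k0 - k1) / np.sqrt(2)  # |->

# The four entangled measurement states of PBR's Eq. (1).
xi = [
    (np.kron(k0, k1) + np.kron(k1, k0)) / np.sqrt(2),
    (np.kron(k0, km) + np.kron(k1, kp)) / np.sqrt(2),
    (np.kron(kp, k1) + np.kron(km, k0)) / np.sqrt(2),
    (np.kron(kp, km) + np.kron(km, kp)) / np.sqrt(2),
]

# Gram matrix: the four states are orthonormal, i.e. a valid joint
# measurement basis on the two qubits.
G = np.array([[np.dot(a, b) for b in xi] for a in xi])

# Born probabilities of each outcome for each product preparation.
preps = [np.kron(k0, k0), np.kron(k0, kp), np.kron(kp, k0), np.kron(kp, kp)]
probs = np.array([[np.dot(x, p) ** 2 for p in preps] for x in xi])
# The diagonal of probs is zero: outcome xi_i never occurs when the
# i-th product state was prepared.
```

Whether the lambdas of the two subsystems suffice to reproduce these joint probabilities is exactly the independence assumption being disputed in this post.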
 
Last edited:
  • #96
Excellent post, Demystifier ! Very well and clearly written.
 
  • #97
Thanks dextercioby! I have now also sent an e-mail to the authors with similar (but slightly more polite) content. If they answer me, I will let you know.
 
  • #98
Just to be sure: are they assuming collapse? That is, is what they're taking for granted essentially the Copenhagen interpretation?
 
  • #99
dextercioby said:
Just to be sure: are they assuming collapse? That is, is what they're taking for granted essentially the Copenhagen interpretation?
As far as I can see, they don't assume collapse.
 
  • #100
dextercioby said:
Just to be sure: are they assuming collapse? That is, is what they're taking for granted essentially the Copenhagen interpretation?
I doubt that there are even two people who mean the same thing by the term "Copenhagen interpretation", so I try to avoid it. The informal version of the assumption they're making (in order to derive a contradiction) is that a state vector represents the experimenter's knowledge of the system. This is how some people describe "the CI". But nothing can be derived from an informal version of a statement, so the authors are choosing one specific way to give the statement a precise meaning. They are defining the claim that "a state vector represents knowledge of the system" as "there's a ψ-epistemic theory that makes the same predictions as QM".

In such theories, "collapse" is not a physical process. It's just a matter of changing your probability assignments when you have ruled out some of the possibilities.
 
