Quantum Mechanics without Measurement

  • #51
atyy said:
If Griffiths has made a mistake, where could it be? On the one hand, CH is not a realistic theory, so it seems that it could escape the Bell theorem.

In http://quantum.phys.cmu.edu/CQT/chaps/cqt24.pdf (p289) he writes that "Thus the point at which the derivation of (24.10) begins to deviate from quantum principles is in the assumption that a function ##\alpha(w_{a}, \lambda )## exists for different directions ##w_{a}##."

Well, so far I think what he says is ok, since Bell's point is indeed that these exist only if local realism holds, and quantum mechanics is not a local realistic theory.

(24.10) is the CHSH inequality:

##\rho(a,b) + \rho(a,b') + \rho(a',b) - \rho(a',b') \leq 2##

"Thus the point at which the derivation of (24.10) begins to deviate from quantum principles is in the assumption that a function ##\alpha(w_{a}, \lambda )## exists for different directions ##w_{a}##."

To me this means that as long as we only deal with the simplest case of parallel settings (i.e. the deterministic 1935 EPR picture), we're okay and LHV would still work, but as soon as we introduce more and 'tougher' settings for the measuring apparatus (polarizer), LHV does not work anymore, only NLHV does.

[my bolding]
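(A quick numerical aside, my own sketch rather than part of the quoted argument: plugging the quantum singlet correlation ##E(a,b) = -\cos(a-b)## into the combination in (24.10) at suitably chosen angles gives ##2\sqrt{2}##, above the local bound of 2.)

```python
import math

def E(a, b):
    # Quantum prediction for the spin-singlet correlation between
    # measurements along directions a and b (angles in radians)
    return -math.cos(a - b)

# Angles chosen to maximize rho(a,b) + rho(a,b') + rho(a',b) - rho(a',b')
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, -math.pi / 4

S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(round(abs(S), 3))  # 2.828, i.e. 2*sqrt(2) > 2
```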
atyy said:
Then he says "The claim is sometimes made that quantum theory must be nonlocal simply because its predictions violate (24.10). But this is not correct. First, what follows logically from the violation of this inequality is that hidden variable theories, if they are to agree with quantum theory, must be nonlocal or embody some other peculiarity. But hidden variable theories by definition employ a different mathematical structure from (or in addition to) the quantum Hilbert space, so this tells us nothing about standard quantum mechanics."

This seems fishy, because http://arxiv.org/abs/0901.4255 argues that the Bell theorem is compatible with quantum mechanics, since the wave function itself can serve as the hidden variable. It is simply that if one accepts "realism", then the wave function is nonlocal. So I don't think the Bell inequality is incompatible with quantum mechanics. Perhaps it is here that Griffiths has made a mistake.

Fishy indeed... or maybe worse...

The first bold part above is where things start to go quite wrong. Bell's theorem is not a description or definition of quantum mechanics; instead it sets a limit for a theory of local hidden variables, aka Bell's inequality, which is violated both theoretically and experimentally by quantum mechanics, hence leading to this simple form:

No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.

Bell's theorem is an abstract mathematical formulation for the limit of theories of local hidden variables; it does not say anything specific about QM. And to use the Hilbert space as an argument is nothing but ridiculous – we have experiments for god's sake! And as I mentioned in a previous post, we could substitute "Barnum & Bailey Circus" for QM, and Bell's theorem would still hold (albeit a little bit 'peculiar'):

No physical theory of local hidden variables can ever reproduce all of the predictions of "Barnum & Bailey Circus".

Bell's theorem is only about theories of local hidden variables; it does not say anything fundamental about theories violating the inequality (and of course the definition of local realism comes from the original 1935 EPR paper).
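That limit can even be checked by brute force. A minimal sketch (mine, not from the thread): every deterministic local strategy assigns fixed ±1 outcomes to each setting in advance, and no such strategy, hence no mixture of them over any hidden variable λ, can push the CHSH combination above 2.

```python
from itertools import product

# A deterministic local hidden-variable strategy fixes Alice's outcomes
# for settings a, a' and Bob's outcomes for b, b' in advance, each +1 or -1.
best = max(
    Aa * Bb + Aa * Bbp + Aap * Bb - Aap * Bbp
    for Aa, Aap, Bb, Bbp in product([-1, 1], repeat=4)
)
print(best)  # 2: the CHSH bound; averaging over lambda cannot exceed it
```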

I don't know what to say about the second bold part... quantum mechanics violates Bell's inequality... it cannot be 'compatible' with it, Griffiths must have misunderstood the whole thing...

atyy said:
Nonetheless, in the broader sense, it seems that Griffiths could be right, and CH could be local since it does seem to reject realism (ie. Griffiths's definition of "realism" is not common sense realism). Hohenberg's introduction to CH http://arxiv.org/abs/0909.2359, for example, says CH is not a realistic theory - which, given how some versions of Copenhagen don't favour realism, means CH could I think be argued to be Copenhagen done right.

Well, here is when things get so perplexing that it is almost justified to talk about 'crackpotish' ideas...

If CH is local and non-real there is no problem whatsoever!

But then... when Griffiths claims that CH is also consistent (as the name indicates), we're back in the rabbit hole of total confusion: What consistency is he talking about?? In which way is CH more consistent than any other QM interpretation?? I just don't get it. :bugeye:

CH is surely not consistent according to the classification adopted by Einstein and EPR:

http://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics said:
The current usage of realism and completeness originated in the 1935 paper in which Einstein and others proposed the EPR paradox. In that paper the authors proposed the concepts element of reality and the completeness of a physical theory. They characterised element of reality as a quantity whose value can be predicted with certainty before measuring or otherwise disturbing it, and defined a complete physical theory as one in which every element of physical reality is accounted for by the theory. In a semantic view of interpretation, an interpretation is complete if every element of the interpreting structure is present in the mathematics. Realism is also a property of each of the elements of the maths; an element is real if it corresponds to something in the interpreting structure. For example, in some interpretations of quantum mechanics (such as the many-worlds interpretation) the ket vector associated to the system state is said to correspond to an element of physical reality, while in other interpretations it is not.

But what finally put the nail in the coffin for me, are statements like this:

http://quantum.phys.cmu.edu/CQT/chaps/cqt24.pdf (p289) said:
If quantum theory is a correct description of the world, then since it predicts correlation functions which violate (24.10), one or more of the assumptions made in the derivation of this inequality must be wrong.

Wow... "If" and "must"... looks like he's refuting QM and/or Bell's theorem in one sentence... not bad at all!

atyy said:
What I'm asking is: in CH is the Bell inequality violated in any single framework?

I don't know atyy, all this looks like a mess to me, but of course, I could be wrong (and then I will put on my red face, bowing to the floor, apologizing)...

This is what Wikipedia has to say:

http://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics said:
The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability. It is claimed to be consistent with the Schrödinger equation.

According to this interpretation, the purpose of a quantum-mechanical theory is to predict the relative probabilities of various alternative histories (for example, of a particle).

It just makes it worse, the rules of classical probability can't possibly be non-realistic... a complete mess...
 
  • #52
stevendaryl said:
I think that is way too harsh. I don't see it that way at all. As someone else said, I see it as a way of doing Copenhagen without making measurement devices primary to the formulation. Instead, it makes histories of observables primary.

Okay, maybe too harsh (I do have my red face ready, in case you find anything ;), but I don't understand how you are able to make anything 'consistent' whatsoever out of things you know absolutely nothing about, except probability densities (before measurement)??

stevendaryl said:
Let me do some Googling to see if there is a good analysis of EPR from the point of view of consistent histories.

Great!
 
  • #53
DevilsAvocado said:
It just makes it worse, the rules of classical probability can't possibly be non-realistic... a complete mess...

I think it's alright as long as it is not logically inconsistent, ie. one doesn't incur a contradiction by adding an additional axiom. Since you brought up Goedel earlier, an analogy would be that the Goedel statement is true if we are talking about the natural numbers. However, at the syntactic level, since neither the statement nor its negation are provable from the Peano axioms, one could consistently add the negation of the Goedel statement as an axiom. We wouldn't have the natural numbers any more, but it would still be a consistent system with a model.
 
  • #54
DevilsAvocado said:
I don't know what to say about the second bold part... quantum mechanics violates Bell's inequality... it cannot be 'compatible' with it, Griffiths must have misunderstood the whole thing...

That was not a quote from Griffiths, that was another paper by a different author.

If CH is local and non-real there is no problem whatsoever!

But then... when Griffiths claim that CH is also consistent (which the name indicates), we're back in the rabbit hole of total confusion: What consistency is he talking about??

I think you misunderstood what Griffiths is talking about. The word "consistent" is a property of a set of histories. A "history" for Griffiths is a sequence of statements, each of which refers to a fact that is true at a particular moment in time. Basically, a history amounts to a record of the sort:

History H_1:
At time t_{1 1} observable \mathcal{O}_{1 1} had value \lambda_{1 1}
At time t_{1 2} observable \mathcal{O}_{1 2} had value \lambda_{1 2}
...

History H_2:
At time t_{2 1} observable \mathcal{O}_{2 1} had value \lambda_{2 1}
At time t_{2 2} observable \mathcal{O}_{2 2} had value \lambda_{2 2}
...

So history H_i says that observable \mathcal{O}_{i j} had value \lambda_{i j} at time t_{i j}, where i is used to index histories, and j is used to index moments of time within that history.

The entire collection H_1, H_2, H_3, ... of possible histories is said to be a consistent collection if the histories are mutually exclusive. That is, it is impossible (or vanishingly small probability) that more than one history in the collection could be true. (Mathematically, each history corresponds to a product of time-evolved projection operators, and the condition of consistency is that the two histories, as projection operators, result in zero when applied to the initial density operator, or something like that).

So the word "consistent" is not talking about any particular history being consistent, or about Griffiths' theory being consistent. It's talking about it being consistent to reason about that collection of histories using classical logic and probability.
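To make that parenthetical remark concrete, here is a toy numpy sketch (my own, simplified: trivial time evolution between the two times, and histories of a single qubit). One builds the chain operators for a family of two-time histories and checks whether the off-diagonal entries of the decoherence functional vanish; for z-outcomes followed by x-outcomes starting from |+⟩ they do not, so that family is not consistent.

```python
import numpy as np

ket0 = np.array([1, 0], complex)
ket1 = np.array([0, 1], complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def proj(v):
    return np.outer(v, v.conj())

rho = proj(plus)                  # initial state |+><+|
Pz = [proj(ket0), proj(ket1)]     # alternatives at t1 (z basis)
Px = [proj(plus), proj(minus)]    # alternatives at t2 (x basis)

def chain(i, j):
    # Chain operator for the history "z-outcome i at t1, then x-outcome j
    # at t2" (trivial time evolution assumed between the two times)
    return Px[j] @ Pz[i]

def D(alpha, beta):
    # Decoherence functional; the family is consistent when the
    # off-diagonal (alpha != beta) entries vanish
    Ca, Cb = chain(*alpha), chain(*beta)
    return complex(np.trace(Ca @ rho @ Cb.conj().T))

print(round(abs(D((0, 0), (1, 0))), 3))  # 0.25, nonzero: NOT a consistent family
```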

But what finally put the nail in the coffin for me, are statements like this:

http://quantum.phys.cmu.edu/CQT/chaps/cqt24.pdf (p289) said:
If quantum theory is a correct description of the world, then since it predicts correlation functions which violate (24.10), one or more of the assumptions made in the derivation of this inequality must be wrong.

Wow... "If" and "must"... looks like he's refuting QM and/or Bell's theorem in one sentence... not bad at all!

You're a lot harsher than I would be reading that statement. To me, it's only saying "If the conclusion of a theorem is false, then one of the assumptions must be false."

Bell's theorem is of the form: If we assume that we have a theory of type X, then that theory will satisfy inequality Y. Since quantum mechanics does not satisfy inequality Y, then the assumption that it is a theory of type X must be false.

That's all he's saying. He's not "refuting" Bell. To say that an assumption is false is not to refute the theorem.
 
  • #55
atyy said:
Why don't you consider CH to be a version of local nonrealism without a measurement problem?
First, it is not clear to me whether CH is supposed to be about realism or nonrealism. Second, even if I accept CH to be a version of local nonrealism without a measurement problem, I do not consider it to be a very satisfying version. That's because I cannot easily digest a change of the rules of logic (unless it is absolutely necessary, which in the case of QM it is not).
 
  • #56
Demystifier said:
First, it is not clear to me whether CH is supposed to be about realism or nonrealism. Second, even if I accept CH to be a version of local nonrealism without a measurement problem, I do not consider it to be a very satisfying version. That's because I cannot easily digest a change of the rules of logic (unless it is absolutely necessary, which in the case of QM it is not).

Surely, we must consider the laws of logic to be a purely mathematical construct and not given to us as a feature of nature. That must mean that we're free to define them how we like.

Provided that we're clear about which set of logical rules we are using, their selection should be arbitrary, and I can't see how we can arrive at an unsatisfying conclusion.

If a set of logical rules is complete and self-consistent then I would expect it to arrive at the same conclusion as any other.

The set of logical rules that we use is so deeply ingrained in our way of thinking that an attempt to use another set is very likely to fall into the trap of mixing and matching rule sets, which would give rise to inconsistencies and unsatisfying conclusions.
 
  • #57
atyy said:
I think it's alright as long as it is not logically inconsistent, ie. one doesn't incur a contradiction by adding an additional axiom.

Yes, maybe you're right. Still, I find it very confusing, and what 'bothers' me (and is never explicitly spelled out) is that maybe the most straightforward name for this interpretation should be "Classical Histories"... There's no doubt that Griffiths does not like what Bell is telling us:

http://quantum.phys.cmu.edu/CQT/chaps/cqt24.pdf said:
In summary, the basic lesson to be learned from the Bell inequalities is that it is difficult to construct a plausible hidden variable theory which will mimic the sorts of correlations predicted by quantum theory and confirmed by experiment. Such a theory must either exhibit peculiar nonlocalities which violate relativity theory, or else incorporate influences which travel backwards in time, in contrast to everyday experience. This seems a rather high price to pay just to have a theory which is more “classical” than ordinary quantum mechanics.

And maybe most mindboggling is that he draws the correct conclusion regarding the premises for LHV, but does not [here] present his "third alternative" of "subjective logic" and "forbidden frameworks", and just concludes that "this seems a rather high price to pay".

And what on Earth is "a theory which is more “classical” than ordinary quantum mechanics"? I'm totally lost...

atyy said:
Since you brought up Goedel earlier, an analogy would be that the Goedel statement is true if we are talking about the natural numbers. However, at the syntactic level, since neither the statement nor its negation are provable from the Peano axioms, one could consistently add the negation of the Goedel statement as an axiom. We wouldn't have the natural numbers any more, but it would still be a consistent system with a model.

Yes, but it could still never be proven to be complete and consistent from within itself. :wink:
 
  • #58
stevendaryl said:
I think you misunderstood what Griffiths is talking about. The word "consistent" is a property of a set of histories. A "history" for Griffiths is a sequence of statements, each of which refers to a fact that is true at a particular moment in time. Basically, a history amounts to a record of the sort:

History H_1:
At time t_{1 1} observable \mathcal{O}_{1 1} had value \lambda_{1 1}
At time t_{1 2} observable \mathcal{O}_{1 2} had value \lambda_{1 2}
...

History H_2:
At time t_{2 1} observable \mathcal{O}_{2 1} had value \lambda_{2 1}
At time t_{2 2} observable \mathcal{O}_{2 2} had value \lambda_{2 2}
...

So history H_i says that observable \mathcal{O}_{i j} had value \lambda_{i j} at time t_{i j}, where i is used to index histories, and j is used to index moments of time within that history.

The entire collection H_1, H_2, H_3, ... of possible histories is said to be a consistent collection if the histories are mutually exclusive. That is, it is impossible (or vanishingly small probability) that more than one history in the collection could be true. (Mathematically, each history corresponds to a product of time-evolved projection operators, and the condition of consistency is that the two histories, as projection operators, result in zero when applied to the initial density operator, or something like that).

Thank you for explaining this; however, what use do we have of this in providing "a natural interpretation of quantum mechanics"? If we take EPR-Bell test experiments, this is the Bell state:

##\frac{1}{\sqrt{2}} \left( | \uparrow \rangle_A |\uparrow \rangle_B + |\rightarrow \rangle_A |\rightarrow \rangle_B \right)##

In standard QM this is interpreted as a quantum superposition in the shared wavefunction. Now, if CH wants to make consistent histories out of this, I guess that is okay, but afaik this can only happen afterwards, right? And what "predictive power" does CH have then?
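(As a standard-QM side check, my own sketch and nothing CH-specific: the correlations encoded in this state can be computed directly. With ##|\rightarrow\rangle = |H\rangle## and ##|\uparrow\rangle = |V\rangle##, two polarization analyzers at angles a and b give ##E(a,b) = \cos 2(a-b)##, i.e. perfect correlation at parallel settings.)

```python
import numpy as np

# Photon polarization basis: |H> = horizontal, |V> = vertical
H = np.array([1, 0], complex)
V = np.array([0, 1], complex)

def analyzer(theta):
    # +1/-1 observable for a polarizing analyzer rotated by angle theta
    v = np.cos(theta) * H + np.sin(theta) * V
    w = -np.sin(theta) * H + np.cos(theta) * V
    return np.outer(v, v.conj()) - np.outer(w, w.conj())

# The Bell state (|HH> + |VV>)/sqrt(2), i.e. the state quoted above
phi_plus = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)

def E(a, b):
    # Correlation <A(a) B(b)>; standard QM predicts cos 2(a-b)
    M = np.kron(analyzer(a), analyzer(b))
    return float(np.real(phi_plus.conj() @ M @ phi_plus))

print(round(E(0.0, 0.0), 3))        # 1.0: parallel settings, perfect correlation
print(round(E(0.0, np.pi / 8), 3))  # 0.707: cos(pi/4) at a 22.5 degree offset
```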

And most interesting of all:

Exactly how does CH explain the outcome of EPR-Bell test experiments if the "hidden observables" did have definite values all along??

stevendaryl said:
So the word "consistent" is not talking about any particular history being consistent, or about Griffiths' theory being consistent. It's talking about it being consistent to reason about that collection of histories using classical logic and probability.

Thank you very much for this, and I'm sorry if I went too far in my criticism of CH.

However, I believe it is not possible to explain the outcome of EPR-Bell experiments using only classical logic and classical probability.

stevendaryl said:
You're a lot harsher than I would be reading that statement. To me, it's only saying "If the conclusion of a theorem is false, then one of the assumptions must be false."

Bell's theorem is of the form: If we assume that we have a theory of type X, then that theory will satisfy inequality Y. Since quantum mechanics does not satisfy inequality Y, then the assumption that it is a theory of type X must be false.

That's all he's saying. He's not "refuting" Bell. To say that an assumption is false is not to refute the theorem.

Okay, we are interpreting this differently. To me, "If quantum theory is a correct description of the world" means that the writer questions whether quantum theory is correct, and "one or more of the assumptions made in the derivation of this inequality must be wrong" to me means that the writer questions Bell's theorem.

We all know the outstanding precision and validity of QM; the gadget world of today would simply stop if there were the slightest error in QM's "description of the world". John Bell was nominated for the Nobel Prize in Physics the same year he died (without ever knowing it). Anton Zeilinger and Alain Aspect will get it any year now.

Then to write this kind of 'insinuation' is just not right.
 
  • #59
DevilsAvocado said:
John Bell was nominated for the Nobel Prize in Physics the same year he died (without ever knowing it).

For his inequality or for the chiral anomaly?
 
  • #60
Inequality

Edit:
I can't find an exact verification... I just took it for granted... it seems illogical not to reward him for what many agree is one of the most profound discoveries in science, but you never know with these old farts in Stockholm, they've made bigger mistakes...


A Chorus of Bells
http://arxiv.org/abs/1007.0769
 
  • #61
DevilsAvocado said:
However, I believe it is not possible to explain the outcome of EPR-Bell experiments using only classical logic and classical probability.
If the statement A is true in history H1 and the statement B is true in history H2, the statement A AND B may be meaningless because the histories themselves are not compatible. In this sense, you could say that Griffiths abandons classical logic, but I don't think this is a very accurate description of the situation.

Like stevendaryl, I don't see the quotes you gave regarding Bell's theorem as controversial. The "third way" is simply not to introduce hidden variables. The only thing in CH which could be called a hidden variable is which history belongs to our world. But such a history is a history of observations and doesn't include simultaneous sharp values of incompatible observables.

/edit: As far as the measurement problem is concerned, it is not obvious to me if and how CH eliminates measurements as primitives but I haven't read the paper yet
 
  • #62
Regarding "classical logic": would it be more accurate to say, like Devils Avocado's comment above, that the usual rules of probability are not applied to classical reality?

Is Bell's theorem meaningless in CH simply because P(A,B,a,b), where A,B are measurement outcomes and a,b are measurement settings, is declared not to exist? A,B,a,b are all classical realities, and we can certainly form P(A,B,a,b) over them without any problem. Or does CH obtain locality by some other means?
 
  • #63
It would appear that if you can live with negative probabilities, there should be no problem. This is the only concession to realism that is really necessary. Rather than meaningless, perhaps it would be better to think of the amplitude as being imaginary, so the probability is negative. Of course we measure that as a zero, hence the violation of the inequality.
http://drchinese.com/David/Bell_Theorem_Negative_Probabilities.htm
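Here is a small sketch of what that amounts to (my own construction, using Fine's observation that violating the CHSH inequality rules out any non-negative joint distribution over all four outcomes): take the simplest joint "distribution" that reproduces the singlet's pairwise correlations at the CHSH-optimal angles, with zero means and no higher-order terms, and some of its entries come out negative.

```python
import numpy as np
from itertools import product

s = 1 / np.sqrt(2)
# Singlet correlations at the CHSH-optimal angles:
# E(a,b) = E(a,b') = E(a',b) = -1/sqrt(2), E(a',b') = +1/sqrt(2)
corr = {("A", "B"): -s, ("A", "Bp"): -s, ("Ap", "B"): -s, ("Ap", "Bp"): s}

# Simplest joint "distribution" over simultaneous values of all four
# observables reproducing those pairwise correlations. By Fine's theorem,
# once the CHSH combination exceeds 2, NO non-negative joint exists.
P = {}
for A, B, Ap, Bp in product([-1, 1], repeat=4):
    vals = {"A": A, "B": B, "Ap": Ap, "Bp": Bp}
    pair_terms = sum(vals[x] * vals[y] * e for (x, y), e in corr.items())
    P[(A, B, Ap, Bp)] = (1 + pair_terms) / 16

print(round(sum(P.values()), 6))  # 1.0: normalized
print(min(P.values()) < 0)        # True: some entries are negative
```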
 
  • #64
@atyy, kith, Jilang

I think the key to all this is:

http://quantum.phys.cmu.edu/CQT/chaps/cqt24.pdf said:
This seems a rather high price to pay just to have a theory which is more “classical” than ordinary quantum mechanics.

And the "high price" is to abandon either locality or realism, which Griffiths obviously is not willing to do.

Problem: No one in this thread seems to be sure how Griffiths actually preserves this "new" local realism.

Hint: Since Bell was nominated for the Nobel Prize for 'destroying' local realism, I'm pretty sure Griffiths will now get the Nobel Prize for 'restoring' the darned thing, i.e. if we can just get hold of the actual proof...

To-Do: Someone must email Zeilinger & Aspect, and warn them that their experiments will stop working as soon as we have found Griffiths' proof!


:smile:
 
  • #65
DevilsAvocado said:
@atyy, kith, Jilang

I think the key to all this is:
And the "high price" is to abandon either locality or realism, which Griffiths obviously is not willing to do.

Problem: No one in this thread seems to be sure on how Griffiths actually preserves this "new" local realism.

Hint: Since Bell was nominated for the Nobel Prize for 'destroying' local realism, I'm pretty sure Griffiths now will get the Nobel Prize for 'restoring' the darned thing, i.e. if we just can get hold of the actual proof...

To-Do: Someone must email Zeilinger & Aspect, and warn them that their experiments will stop working as soon as we have found Griffiths proof!:smile:

Regardless of whether CH is local, I think it is nonrealistic because there are multiple incompatible frameworks, and you can choose any one of these frameworks to describe "reality". To me the question is whether CH is nonlocal and nonrealistic, or local and nonrealistic. And if it is the latter, why exactly does it evade the Bell theorem? Is it just that P(A,B,a,b), where A,B are classical measurement outcomes and a,b are classical measurement choices, is declared not to exist?
 
  • #66
atyy said:
I think it is nonrealistic because there are multiple incompatible frameworks, and you can choose any one of these frameworks to describe "reality".

They consider it realistic, but have their own peculiar version of realism - weak property realism:
http://www.siue.edu/~evailat/pdf/qm12.pdf

Things like this make me laugh - like I say, it's defining your way out of problems. In doing that, I believe it complicates things unnecessarily.

I don't want to be too hard on it however - I have Griffiths' book - Consistent Quantum Theory - and it's actually quite good. Certainly an excellent source for coming to grips with some of these issues and seeing how CH handles them.

Thanks
Bill
 
  • #67
According to Laloe http://arxiv.org/abs/quant-ph/0209123 (p86, p50), Griffiths's version of CH is local because it rejects counterfactual definiteness, which is an assumption in the proof of Bell's theorem. But if I reject counterfactual definiteness, isn't even dBB local, since the Bell inequality doesn't exist?
 
  • #68
atyy said:
But if I reject counterfactual definiteness, isn't even dBB local, since the Bell inequality doesn't exist?
No. First, you can see that dBB is non-local even without Bell inequality. Indeed, Bell FIRST noted that dBB is non-local, and only AFTER discovered his inequality, as a tool to see non-locality of QM without referring to dBB.
Second, I don't see how one might accept dBB and reject counterfactual definiteness at the same time.
 
  • #69
DevilsAvocado said:
Hint: Since Bell was nominated for the Nobel Prize for 'destroying' local realism, I'm pretty sure Griffiths now will get the Nobel Prize for 'restoring' the darned thing, i.e. if we just can get hold of the actual proof...
If Bell was nominated for the Nobel Prize, it was because he made a new measurable prediction, which was tested by an actual experiment. I don't think that it was the case with Griffiths.

Speaking of nominations for the Nobel Prize, is there an official site where one can see who was nominated and when?
 
  • #70
atyy said:
According to Laloe http://arxiv.org/abs/quant-ph/0209123 (p86, p50), Griffiths's version of CH is local because it rejects counterfactual definiteness, which is an assumption in the proof of Bell's theorem. But if I reject counterfactual definiteness, isn't even dBB local, since the Bell inequality doesn't exist?

The whole point of dBB is that it is counterfactually definite. If you take dBB and remove counterfactual definiteness, then all you're left with is the pilot wave. At this stage you haven't chosen locality or objective realism yet. You could choose either or both, but you certainly don't have dBB anymore.
 
  • #71
atyy said:
Regardless of whether CH is local, I think it is nonrealistic because there are multiple incompatible frameworks, and you can choose any one of these frameworks to describe "reality".

I have to be honest and admit that I don't understand CH well enough to judge if this is the case or not. However, if CH is nonrealistic, then Griffiths has paid the "high price" that he rejects in his book, and this, to me, makes the story even more inconsistent...

But if we assume that CH is nonrealistic, could you explain – step by step – what happens in an EPR-Bell experiment, according to CH and multiple incompatible frameworks?

atyy said:
To me the question is whether CH is nonlocal and nonrealistic, or local and nonrealistic.

If CH is nonlocal and nonrealistic... Griffiths has paid the "high price" twice, and then maybe we are beyond inconsistent storytelling...

atyy said:
Regarding "classical logic": would it be more accurate to say, like Devils Avocado's comment above, that the usual rules of probability are not applied to classical reality?

To avoid any confusion, maybe I should explain what I mean by "classical probability" (in this allegory):

  • Take a coin, and let it spin at very high speed on both the vertical and horizontal axes.

  • Initial conditions are completely unknown and the outcome is regarded as 100% random.

  • Send the coin toward a metal plate with a vertical and a horizontal slit +.

  • The coin will always go through the vertical or the horizontal slit with a 50/50 chance.

  • Now we introduce a second coin, with exactly the same properties, and send both coins in opposite directions towards two space-like separated metal plates with a vertical/horizontal slit +.

  • When we check the outcome, the two coins are always correlated, i.e. if they have gone through the same orientation they show the same face; if they have gone through the opposite orientation they show the opposite face.

  • We conclude that "something magical" happened at the source when we created the spin of the two coins, which makes them act randomly but correlated.

  • We also conclude that there is no "spooky action at a distance" going on (the source is the explanation), and that these coins are real; it's just that with current technology we can't inspect all their properties.

This is "classical probability"; however, now we change the setup:

  • We modify the metal plates to tilt randomly between 0° = + and 45° = X, and repeat the experiment.

  • To our surprise, it turns out that when the metal plates have the same tilting, we get exactly the same results as in the previous setup. But when the metal plates have different tilting, we get a random correlation of 50% heads or tails, and there is no explanation of how the two space-like separated coins 'knew' they were going through different orientations, none whatsoever; the "common source explanation" can't save us this time.

  • Now an extensive debate starts – whether the coins are real or not, or if there is some non-local influence on the coins – which is still ongoing...

This would be "non-classical probability".
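The two setups above can be simulated. A Monte Carlo sketch (my own; the shared "hidden variable" lam is an angle set at the source, and face() is a made-up deterministic local rule): such a common-source model reproduces the perfect correlation at equal settings, yet its CHSH combination never exceeds the local bound of 2, unlike the quantum correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
lam = rng.uniform(0, np.pi, n)   # shared hidden variable, set at the source

def face(theta, lam):
    # Deterministic local rule: the face shown depends only on the local
    # plate orientation theta and the shared hidden variable lam
    return np.sign(np.cos(2 * (theta - lam)))

def E(a, b):
    # Correlation between the two coins' faces at settings a and b
    return float(np.mean(face(a, lam) * face(b, lam)))

print(round(E(0.0, 0.0), 3))  # 1.0: equal settings are always perfectly correlated

a, ap = 0.0, np.pi / 4
b, bp = np.pi / 8, -np.pi / 8
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(round(S, 3))            # 2.0: this model saturates, but never exceeds, the bound
```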
 
  • #72
Demystifier said:
If Bell was nominated for the Nobel Prize, it was because he made a new measurable prediction, which was tested by an actual experiment. I don't think that it was the case with Griffiths.

Ehh... it was meant more like a 'joke'... sorry, my silly humor again... :blushing:

Demystifier said:
Speaking of nominations for the Nobel Prize, is there an official site where one can see who was nominated and when?

I don't think so; the committee is very secretive, and nominations are kept secret for 50 years.
http://www.nobelprize.org/nomination/physics/

But some (old) data are available in the nomination database (not Physics though??):
http://www.nobelprize.org/nomination/archive/

But there is always 'talk' and I take it for granted that Jeremy Bernstein somehow has gotten the correct information.
(page 13)
http://arxiv.org/abs/1007.0769
 
  • #73
DevilsAvocado said:
And the "high price" is to abandon either locality or realism [...]
That's not what he says in your quote. He says if we want to construct a hidden variables theory, Bell tells us that we have to embrace either non-locality or backwards causation. His "solution" is simple: like Bohr, he doesn't want to construct a hidden variables theory in the first place. So what he rejects is EPR realism. Calling his theory realistic may be sensible from another point of view but this is certainly not EPR realism which is what Bell's theorem is about.

/edit: I also wrote a statement about locality here but actually, I think this should be discussed in a thread of its own.
 
  • #74
I think a lot of confusion arises because there isn't much clarity about the terms realism and locality.

Do we not just consider CH to have the same types of locality and realism as MWI?

Locality is preserved, though splitting is global and instantaneous.

Realism is preserved in that all observers in the same framework have the same reality.

These concepts are compatible with those which apply to other interpretations too, since they are not concerned with splitting, worlds or frameworks, though in those interpretations it is not possible for both to preserved.

If we follow these, I don't see how the Bell inequality can apply, because there is no hidden variable or information transfer.

Is it not true that in order to calculate the Bell Inequality in this context, we would incorporate quantities outside of the universe?

I don't see how there is a modification to the rules of logic here, simply a clarification that in order to generate inference by combining statements, they must pertain to the same universe.

None of this undermines the significance of Bell's work, but its applicability was to information transfer via hidden variables, which neither the MWI nor CH is concerned with.
 
  • #75
kith said:
That's not what he says in your quote. He says if we want to construct a hidden variables theory, Bell tells us that we have to embrace either non-locality or backwards causation.

And this shows that Griffiths has not gotten the complete picture, since there are other options for non-realism than backwards causation. Shouldn't a professor, claiming to have a new solution to this problem, be better informed?

kith said:
His "solution" is simple: like Bohr, he doesn't want to construct a hidden variables theory in the first place. So what he rejects is EPR realism. Calling his theory realistic may be sensible from another point of view but this is certainly not EPR realism which is what Bell's theorem is about.

Most of us don't care what Griffiths wants; we're more interested in what he can prove (which seems to be nothing, thus far). Introducing something as "almost real" and then naming this new invention "consistent" would generally be considered a joke.

I don't know how many times I have asked this question:
Could you please explain – step by step – what happens in an EPR-Bell experiment, according to CH and the new "Almost-realism"?

(Even if Griffiths doesn't acknowledge EPR realism, I sure hope he accepts experimental outcomes...)
 
  • #76
craigi said:
Is it not true that in order to calculate the Bell Inequality in this context, we would incorporate quantities outside of the universe?

I don't see how there is a modification to the rules of logic here, simply a clarification that in order to generate inference by combining statements, they must pertain to the same universe.

I could be wrong, but my firm belief is that if we incorporate "stuff" outside this universe to solve scientific problems inside this universe, we have to move to the Vatican and finish our thesis inside these walls.

It's probably even possible to prove the existence of the flying Centaur, if we just have the option to throw any unpleasant data in the "I-Don't-Like-Bin", and just toss it out of this universe.

But I could be wrong, of course...

[Note: strong irony warning]
 
  • #77
DevilsAvocado said:
And this shows that Griffiths has not gotten the complete picture, since there are other options for non-realism than backwards causation. Shouldn't a professor, claiming to have a new solution to this problem, be better informed?
I'm a bit puzzled by your fixation on this. Why exactly do you think that Griffiths thinks something about Bell's theorem needs to be "solved"? In everything I have read from him, Griffiths says that it doesn't make sense to search for hidden variable theories because Bell's theorem tells us that they are ugly. This is simply the mainstream view. I don't know what he says about the definition of the terms "locality" and "realism", but this is just a semantic sidenote and really not the core issue of this thread.

What Griffiths wants to solve (and what caused stevendaryl to open this thread) is the problem that textbooks assign a special role to the concept of measurement and make it seem like QM can't be used to describe the measurement process.
 
  • #78
DevilsAvocado said:
And this shows that Griffiths has not gotten the complete picture, since there are other options for non-realism than backwards causation. Shouldn't a professor, claiming to have a new solution to this problem, be better informed?



Most of us don't care what Griffiths wants; we're more interested in what he can prove (which seems to be nothing, thus far). Introducing something as "almost real" and then naming this new invention "consistent" would generally be considered a joke.

I don't know how many times I have asked this question:
Could you please explain – step by step – what happens in an EPR-Bell experiment, according to CH and the new "Almost-realism"?

(Even if Griffiths doesn't acknowledge EPR realism, I sure hope he accepts experimental outcomes...)

I'm not sure what it is about Griffiths' interpretation that's bugging you so much, but none of the interpretations prove any new physics. That is not their purpose. Their goal is epistemological rather than ontological. Some, including myself, believe that an interpretation could hint at something of ontological value, but this hasn't happened yet.

Of course Griffiths understands the EPR experiments very well. He is one of the leading experts in the field of QM and by no means denies the results of the experiments, which are not in the slightest inconsistent with his interpretation.
 
  • #79
DevilsAvocado said:
I could be wrong, but my firm belief is that if we incorporate "stuff" outside this universe to solve scientific problems inside this universe, we have to move to the Vatican and finish our thesis inside these walls.

It's probably even possible to prove the existence of the flying Centaur, if we just have the option to throw any unpleasant data in the "I-Don't-Like-Bin", and just toss it out of this universe.

But I could be wrong, of course...

[Note: strong irony warning]


That's the point: we don't incorporate stuff outside of this universe, and under the CH interpretation that is where part of the Bell inequality calculation lies. I can understand a reactionary attitude to this terminology; I don't like it either, because it does sound like something from science fiction, or perhaps, as you suggest, theology. You can just consider it stuff that does not happen.

All of the interpretations throw out stuff they don't like in favour of stuff that they do, but none of these things are tangible physical things, purely concepts that we use to try to make sense of them.
 
  • #80
kith said:
In everything I have read from him, Griffiths says that it doesn't make sense to search for hidden variable theories because Bell's theorem tells us that they are ugly. This is simply the mainstream view.

Agreed, a lot of things don't make sense. Regarding ugly HV, I think that is something you would have to confront Demystifier, or maybe atyy, with; personally I'm agnostic.

kith said:
I don't know what he says about the definition of the terms "locality" and "realism", but this is just a semantic sidenote and really not the core issue of this thread.

Okay, "semantic sidenote" is fine by me, with the reservation that if an interpretation can't handle Bell's theorem it's basically dead, and if I'm not mistaken, that's also what stevendaryl said last time he posted.
 
  • #81
craigi said:
I'm not sure what it is about Griffiths' interpretation that's bugging you so much, but none of the interpretations prove any new physics. That is not their purpose. Their goal is epistemological rather than ontological. Some, including myself, believe that they could hint at something of ontological value, but this hasn't happened yet.

But I think what DevilsAvocado wants (and I couldn't find it by googling) is a demonstration of how CH works by applying it to the EPR problem. What are the possible sets of consistent histories, and what would be an example of an inconsistent set?

It's a little complicated to see how to apply the technical definition, because the notion of "consistency" involves time evolution of projection operators. But once you involve macroscopic objects like measuring devices, we don't have a comprehensible expression for the time evolution (because it involves an ungodly number of particles).

Let me just think out loud:

My guess would be that a (simplified, approximate) history would have 6 elements:
  1. Alice's detector orientation. (\theta_A)
  2. Bob's detector orientation. (\theta_B)
  3. A spin state for Alice's particle immediately before detection. (\sigma_A)
  4. A spin state for Bob's particle immediately before detection. (\sigma_B)
  5. Alice's result (spin up or spin down) (R_A)
  6. Bob's result (spin up or spin down) (R_B)

So a history is a vector of six elements:
\langle \theta_A, \theta_B, \sigma_A, \sigma_B, R_A, R_B \rangle

To apply Griffiths' approach, we need to first figure out which collections of 6-tuples are consistent. What I think is true is that any macroscopic state information is consistent, in Griffiths' sense (although it might have probability zero). So whatever rules for consistent histories should only affect the unobservable state information (the particle spins).
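For reference, "consistent in Griffiths' sense" can be made concrete with the decoherence functional D(α,β) = Tr[C_α ρ C_β†], where C_α is the time-ordered product of the history's projectors; a family is consistent when the off-diagonal terms (their real parts, in the weak version) vanish. Below is a minimal single-qubit sketch of my own, not a worked EPR case: starting from |+x⟩, the family "z at t1, then z at t2" is consistent, while "z at t1, then x at t2" is not.

```python
import numpy as np

# Single-qubit toy example (my own sketch, not Griffiths' worked EPR case)
ket0 = np.array([1, 0], complex)
ket1 = np.array([0, 1], complex)
ketx = (ket0 + ket1) / np.sqrt(2)
ketmx = (ket0 - ket1) / np.sqrt(2)

def proj(v):
    return np.outer(v, v.conj())

Pz = [proj(ket0), proj(ket1)]   # z-basis projectors (used at time t1)
Px = [proj(ketx), proj(ketmx)]  # x-basis projectors (used at time t2)

rho = proj(ketx)                # initial state |+x>

def decoherence_functional(P1, P2, rho):
    """D(alpha, beta) = Tr[C_alpha rho C_beta^dagger], C = P2 @ P1,
    with trivial time evolution between the two times."""
    hist = [(a, b) for a in range(2) for b in range(2)]
    D = np.zeros((4, 4), complex)
    for i, (a, b) in enumerate(hist):
        for j, (c, d) in enumerate(hist):
            Ca = P2[b] @ P1[a]
            Cb = P2[d] @ P1[c]
            D[i, j] = np.trace(Ca @ rho @ Cb.conj().T)
    return D

def is_consistent(D, tol=1e-12):
    # Weak consistency: real parts of off-diagonal elements vanish
    off = D - np.diag(np.diag(D))
    return bool(np.all(np.abs(off.real) < tol))

D_zz = decoherence_functional(Pz, Pz, rho)  # z at t1, z at t2
D_zx = decoherence_functional(Pz, Px, rho)  # z at t1, x at t2

print(is_consistent(D_zz))  # True:  a consistent family
print(is_consistent(D_zx))  # False: off-diagonal interference survives
```

The diagonal of D gives the history probabilities; consistency is exactly the condition that these probabilities add up classically within the chosen family.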
 
  • #82
stevendaryl said:
But I think what DevilsAvocado wants (and I couldn't find it by googling) is a demonstration of how CH works by applying it to the EPR problem. What are the possible sets of consistent histories, and what would be an example of an inconsistent set?

Thanks a lot Steven, finally! :thumbs:

I'll study your explanation and get back.
 
  • #83
I have now read Griffiths' paper and I am not sure what to think of it.

Firstly, my previous notion of one history being the "right" one isn't what he has in mind (he explicitly acknowledges different, mutually exclusive histories to be equally valid in the middle of section VI). So the catch phrase "Many worlds without the many worlds" doesn't seem appropriate to me.

Now what does he do? In section V, he uses a toy model to analyze the measurement process. This analysis seems conceptually not very different from what Ballentine or a MWI person would do.

In section VI, he introduces his families of histories to explore which assumptions about properties before performing a measurement can be combined consistently. A history is a succession of statements about the system, while a family of histories is a set of possible histories. Although within one family, the realized outcome of an experiment may be only compatible with one history, different views about the possible intermediate states corresponding to different families are possible. As mentioned above, he thinks that all of these families / points of view about intermediate states should be considered equally valid or "real". Therefore, CH seems more like a meta-interpretation to me.

Now what I don't understand is the relevance of the existence of more than one family of histories to the measurement problem. For example, his analysis of the measurement process takes place before he even introduces them.
 
  • #84
DevilsAvocado said:
Thanks a lot Steven, finally! :thumbs:

I'll study your explanation and get back.

I haven't explained anything. I was trying to publicly work out what the CH description of EPR might look like. I'm not finished, because I'm stuck on figuring out which collections of histories are "consistent" in Griffiths' sense.
 
  • #85
kith said:
Now what I don't understand is the relevance of the existence of more than one family of histories to the measurement problem. For example, his analysis of the measurement process takes place before he even introduces them.

The way I understand it is that we choose to use a family of histories in which macroscopic objects (e.g., measuring devices) have definite macroscopic states. But one could instead choose a different family of histories, where macroscopic objects are in macroscopic superpositions. The latter family would be pretty much useless for our purposes, but would be perfectly fine as far as the Rules of Quantum Mechanics (and the CH interpretation) are concerned. So CH makes it a matter of usefulness that we treat measuring devices specially--it's a choice on our part, rather than being forced on us by the physics. So to me it seems very much like Copenhagen, except that the "wave function collapse caused by measurement" is no longer considered a physical effect, but is instead an artifact of what we choose to analyze.

I think that in some ways, CH is like Copenhagen, and in other ways, it's like MWI, although there are two completely different notions of "alternatives" considered at the same time. Within a particular family of histories, there are alternative histories. So that's one notion of alternative, and it's the one that people normally think of when they think of many worlds. But there is a second kind of alternative, which is the choice of which family to look at.
 
  • #86
Here's a first note that might help you get further:

stevendaryl said:
My guess would be that a (simplified, approximate) history would have 6 elements:
  1. Alice's detector orientation. (\theta_A)
  2. Bob's detector orientation. (\theta_B)
  3. A spin state for Alice's particle immediately before detection. (\sigma_A)
  4. A spin state for Bob's particle immediately before detection. (\sigma_B)
  5. Alice's result (spin up or spin down) (R_A)
  6. Bob's result (spin up or spin down) (R_B)

So a history is a vector of six elements:
\langle \theta_A, \theta_B, \sigma_A, \sigma_B, R_A, R_B \rangle

If you have definite spin in 3 & 4, everything I know tells me that the only way to handle 5 & 6 is by non-locality, since what settles the level of correlations in 5 & 6 is the relative angle between 1 & 2.
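To make the role of the relative angle quantitative: for the singlet state, QM predicts the correlation E(a,b) = -cos(a - b), and pushing four standard settings through the CHSH combination quoted at the top of the thread gives 2√2 > 2. A minimal numeric check, assuming only that standard singlet correlation:

```python
import numpy as np

# Singlet-state correlation: depends only on the relative angle a - b
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH settings (radians)
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, -np.pi / 4

# CHSH combination: rho(a,b) + rho(a,b') + rho(a',b) - rho(a',b')
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)

print(abs(S))  # 2*sqrt(2) ~ 2.83, above the local-hidden-variable bound of 2
```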
 
  • #87
stevendaryl said:
I haven't explained anything. I was trying to publicly work out what the CH description of EPR might look like. I'm not finished, because I'm stuck on figuring out which collections of histories are "consistent" in Griffiths' sense.

It's okay, your post is definitely progress compared to what we (including myself) have produced in this thread lately. :wink:
 
  • #88
kith said:
That's not what he says in your quote. He says if we want to construct a hidden variables theory, Bell tells us that we have to embrace either non-locality or backwards causation. His "solution" is simple: like Bohr, he doesn't want to construct a hidden variables theory in the first place. So what he rejects is EPR realism. Calling his theory realistic may be sensible from another point of view but this is certainly not EPR realism which is what Bell's theorem is about.

/edit: I also wrote a statement about locality here but actually, I think this should be discussed in its own thread.

But is it true that not having hidden variables is enough to make quantum mechanics local? Gisin http://arxiv.org/abs/0901.4255 (Eq 2) argues that the wave function itself can be the "hidden variable", but a nonlocal one. Laloe http://arxiv.org/abs/quant-ph/0209123 (p50) says it is still unsettled whether quantum mechanics is itself local.
 
  • #89
stevendaryl said:
The way I understand it is that we choose to use a family of histories in which macroscopic objects (e.g., measuring devices) have definite macroscopic states. But one could instead choose a different family of histories, where macroscopic objects are in macroscopic superpositions.
Let me check if I get you right: In order to describe measurements, we use a family with an observable whose eigenstates are product states of system+apparatus. It would be equally valid to use another family with an observable which is incompatible with the first one. Such an observable could have entangled states of system+apparatus as eigenstates. In the second family, a measurement wouldn't yield a definite state but a state with different probabilities for macroscopic superpositions. Do you agree with this so far?
 
  • #90
atyy said:
But is it true that not having hidden variables is enough to make quantum mechanics local? Gisin http://arxiv.org/abs/0901.4255 (Eq 2) argues that the wave function itself can be the "hidden variable", but a nonlocal one. Laloe http://arxiv.org/abs/quant-ph/0209123 (p50) says it is still unsettled whether quantum mechanics is itself local.
I don't really have an informed opinion on this. QM without simultaneous hidden variables still allows for different ontologies and I think it depends mostly on them whether we say it is local or not.
 
  • #91
kith said:
Let me check if I get you right: In order to describe measurements, we use a family with an observable whose eigenstates are product states of system+apparatus. It would be equally valid to use another family with an observable which is incompatible with the first one. Such an observable could have entangled states of system+apparatus as eigenstates. In the second family, a measurement wouldn't yield a definite state but a state with different probabilities for macroscopic superpositions. Do you agree with this so far?

I think that's correct. As I said in another post, reasoning about macroscopic objects using the apparatus of quantum mechanics is very difficult, because you can't really write down a wave function for the object. So there is a certain amount of handwaving involved, and it's never clear (to me, anyway) whether whatever conclusions we draw are artifacts of the handwaving or are real implications of QM.
 
  • #92
stevendaryl said:
The latter family would be pretty much useless for our purposes, but would be perfectly fine as far as the Rules of Quantum Mechanics (and the CH interpretation) are concerned. So CH makes it a matter of usefulness that we treat measuring devices specially--it's a choice on our part, rather than being forced on us by the physics.
Isn't the connection to physics that although we can easily predict what happens using the second family, we cannot build the corresponding measurement devices, because the fundamental interactions between the device and the system will decohere the macroscopic superposition eigenstates very quickly? Or put differently: we will always have the ambiguity of multiple histories from this family because we never end up in eigenstates.
 
  • #93
kith said:
Isn't the connection to physics that although we can easily predict what happens using the second family, we cannot build the corresponding measurement devices, because the fundamental interactions between the device and the system will decohere the macroscopic superposition eigenstates very quickly? Or put differently: we will always have the ambiguity of multiple histories from this family because we never end up in eigenstates.

I'm on shaky ground here, but that sounds right. And philosophically, I find it to be an improvement over Copenhagen, in that, as I said, the assumption that measuring devices always have definite macroscopic states is a practical, subjective choice, rather than there being something magical about the measurement process. In the end, you probably get the same quantitative predictions either way, so maybe it's a matter of taste.
 
  • #94
stevendaryl said:
I haven't explained anything. I was trying to publicly work out what the CH description of EPR might look like. I'm not finished, because I'm stuck on figuring out which collections of histories are "consistent" in Griffiths' sense.

Try chapter 12 here:
http://www.siue.edu/~evailat/

I can't vouch for this but it does seem to cover it.

I'm sure Griffiths must have published his own treatment of the problem, though.
 
  • #95
Jilang said:
It would appear that if you can live with negative probabilities there should be no problem. This is the only concession to realism that is really necessary. Rather than meaningless perhaps it would be better to think of the amplitude as being imaginary, so the probability is negative. Of course we measure that as a zero hence the violation of the inequality.
http://drchinese.com/David/Bell_Theorem_Negative_Probabilities.htm

I once worked out for myself a way to "explain" EPR results using negative probabilities. I may have already posted about it, but it's short enough that I can reproduce it here.

Let's simplify the problem of EPR by considering only 3 possible axes for spin measurements:

\hat{a} = the x-direction
\hat{b} = 120 degrees counterclockwise from the x-direction, in the x-y plane.
\hat{c} = 120 degrees clockwise from the x-direction, in the x-y plane.

We have two experimenters, Alice and Bob. Repeatedly we generate a twin pair, and have Alice measure the spin of one along one of the axes, and have Bob measure the spin of the other along one of the axes.

Let i range over \{ \hat{a}, \hat{b}, \hat{c} \}.
Let X range over { Alice, Bob }
Let P_X(i) be the probability that experimenter X measures spin-up along direction i.
Let P(i, j) be the probability that Alice measures spin-up along axis i and Bob measures spin-up along axis j. The predictions of QM are:

  1. P_X(i) = 1/2
  2. P(i,j) = 3/8 if i \neq j
  3. P(i, i) = 0

One approach for a hidden-variables explanation would be this:
  • Associated with each twin-pair is a hidden variable \lambda which can take on 8 possible values: \lambda_{\{\}}, \lambda_{\{a\}}, \lambda_{\{b\}}, \lambda_{\{c\}}, \lambda_{\{a, b\}}, \lambda_{\{a, c\}}, \lambda_{\{b, c\}}, \lambda_{\{a, b, c\}}
  • The probability of getting \lambda_x is p_x (where x ranges over all subsets of \{ a, b, c \}.)
  • If the variable has value \lambda_x, then Alice will get spin-up along any of the directions in the set x, and will get spin-down along any other direction.
  • If the variable has value \lambda_x, then Bob will get spin-down along any of the directions in the set x, and will get spin-up along any other direction (the opposite of Alice).

So if you assume symmetry among the three axes, then it's easy to work out what the probabilities must be to reproduce the predictions of QM. They turn out to be:

p_{\{\}} = p_{\{a, b, c\}} = -1/16
p_{\{a\}} = p_{\{b\}} = p_{\{c\}} = p_{\{a, b\}} = p_{\{a, c\}} = p_{\{b, c\}} = 3/16

So the probability that Alice gets spin-up along direction \hat{a} is:

p_{\{a\}} + p_{\{a, b\}} + p_{\{a, c\}} + p_{\{a, b, c\}} = 3/16 + 3/16 +3/16 - 1/16 = 1/2

The probability that Alice gets spin-up along direction \hat{a} and Bob gets spin-up along direction \hat{b} is:

p_{\{a\}} + p_{\{a, c\}} = 3/16 + 3/16 = 3/8

So if we knew what a negative probability meant, then this would be a local hidden-variables model that reproduces the EPR results.
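As a sanity check, the arithmetic above can be verified mechanically. Here is a small Python sketch of the hidden-variable table just described (my own verification, following the assignment of weights in the post):

```python
from itertools import combinations

axes = ['a', 'b', 'c']

# Hidden-variable weights (some negative) from the model above:
# empty set and full set get -1/16, all other subsets get 3/16
def weight(s):
    return -1/16 if len(s) in (0, 3) else 3/16

subsets = [frozenset(c) for r in range(4) for c in combinations(axes, r)]

# Weights sum to 1, like a genuine probability distribution would
total = sum(weight(s) for s in subsets)

# P(Alice gets spin-up along i): lambda's set contains i
def p_alice_up(i):
    return sum(weight(s) for s in subsets if i in s)

# P(Alice up along i AND Bob up along j): i in the set, j not in it
# (Bob's outcomes are the opposite of Alice's)
def p_joint_up(i, j):
    return sum(weight(s) for s in subsets if i in s and j not in s)

print(total)                 # 1.0
print(p_alice_up('a'))       # 0.5
print(p_joint_up('a', 'b'))  # 0.375 = 3/8
print(p_joint_up('a', 'a'))  # 0
```

All three QM predictions (1/2, 3/8 for different axes, 0 for the same axis) come out exactly, at the price of the two negative weights.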
 
  • #96
I'm not sure this is related to the negative probabilities above, but thought I'd mention it. There is a standard object in quantum mechanics, called the Wigner function, which is considered the closest thing to a joint probability distribution over canonical variables like position and momentum. As with a classical probability distribution, integrating over momentum gives the position distribution, and integrating over position gives the momentum distribution. For a free particle or harmonic oscillator, the Wigner function evolves as a classical probability distribution. In general the Wigner function itself has negative parts, which prevents it from being interpreted as a classical probability distribution, but when it is entirely positive, such as for a Gaussian wavefunction, I believe it is ok to assign trajectories to quantum particles.
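A concrete illustration, using the standard closed-form Wigner functions of the harmonic-oscillator ground and first excited states (a sketch in natural units ħ = m = ω = 1): the Gaussian ground state is positive everywhere, while the first excited state dips negative near the origin.

```python
import numpy as np

# Phase-space grid
x = np.linspace(-5, 5, 401)
p = np.linspace(-5, 5, 401)
X, P = np.meshgrid(x, p)
r2 = X**2 + P**2

# Standard closed-form Wigner functions of the harmonic oscillator:
# ground state (a Gaussian) and first excited state
W0 = np.exp(-r2) / np.pi
W1 = (2 * r2 - 1) * np.exp(-r2) / np.pi

dx = x[1] - x[0]
dp = p[1] - p[0]

# Both integrate to 1 over phase space, like probability distributions
print(W0.sum() * dx * dp)  # ~1.0
print(W1.sum() * dx * dp)  # ~1.0

print(W0.min() >= 0)  # True:  Gaussian state, positive everywhere
print(W1.min() < 0)   # True:  excited state, genuinely negative parts
```

The negative region of W1 (its value at the origin is -1/π) is exactly what blocks a classical-probability reading in the general case.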
 
  • #97
kith said:
I don't really have an informed opinion on this. QM without simultaneous hidden variables still allows for different ontologies and I think it depends mostly on them whether we say it is local or not.

Yes. For example, many-worlds evades the Bell theorem because the Bell theorem assumes that each measurement has only one outcome, but in many-worlds all outcomes appear. Incidentally, Wallace seems to say the state vector in many-worlds is nonlocal. At any rate, it seems clear in many-worlds why the Bell theorem is evaded. The question is whether in CH the requirement of consistency is enough to evade the Bell theorem, or whether something more is required. What exactly is the means by which CH evades the Bell theorem, if it does?
 
  • #98
  • #99
http://arxiv.org/abs/1201.0255
Quantum Counterfactuals and Locality
Robert B. Griffiths
Found. Phys. 42 (2012) pp. 674-684

"Stapp asserts that the validity of a certain counterfactual statement, SR in Sec. 4 below, referring to the properties of a particular particle, depends upon the choice of which measurement is made on a different particle at a spatially distant location. ... It will be argued that, on the contrary, the possibility of deriving the counterfactual SR depends on the point of view or perspective that is adopted—specifically on the framework as that term is employed in CQT—when analyzing the quantum system, and this dependence makes it impossible to construct a sound argument for nonlocality, contrary to Stapp’s claim."

"Our major disagreement is over the conclusions which can be drawn from these analyses. Stapp believes that because he has identified a framework which properly corresponds to his earlier argument for nonlocal influences, and in this framework the ability to deduce SR is linked to which measurement is carried out on particle a, this demonstrates a nonlocal influence on particle b. I disagree, because there exist alternative frameworks in which there is no such link between measurement choices on a and the derivation of SR for b."

So CH is nonlocal in some frameworks?
 
  • #100
http://arxiv.org/abs/0908.2914
Quantum Locality
Robert B. Griffiths
(Submitted on 20 Aug 2009 (v1), last revised 13 Dec 2010 (this version, v2))
Foundations of Physics, Vol. 41, pp. 705-733 (2011)

"It is argued that while quantum mechanics contains nonlocal or entangled states, the instantaneous or nonlocal influences sometimes thought to be present due to violations of Bell inequalities in fact arise from mistaken attempts to apply classical concepts and introduce probabilities in a manner inconsistent with the Hilbert space structure of standard quantum mechanics. Instead, Einstein locality is a valid quantum principle: objective properties of individual quantum systems do not change when something is done to another noninteracting system. There is no reason to suspect any conflict between quantum theory and special relativity."

"Many errors contain a grain of truth, and this is true of the mysterious nonlocal quantum influences. Quantum mechanics does deal with states which are nonlocal in a way that lacks any precise classical counterpart."

"The analysis in this paper implies that claims that quantum theory violates “local realism” are misleading."

!
 